Remove all docs which are moving to http://kubernetes.github.io
All .md files now contain only a pointer to where they likely live on the new site. All other files are untouched.
@@ -32,201 +32,7 @@ Documentation for other releases can be found at
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
Creating a Kubernetes Cluster
|
||||
----------------------------------------
|
||||
|
||||
Kubernetes can run on a range of platforms, from your laptop, to VMs on a cloud provider, to racks of
bare metal servers. The effort required to set up a cluster varies from running a single command to
crafting your own customized cluster. We'll guide you in picking a solution that fits your needs.
|
||||
|
||||
**Table of Contents**
|
||||
<!-- BEGIN MUNGE: GENERATED_TOC -->
|
||||
|
||||
- [Picking the Right Solution](#picking-the-right-solution)
|
||||
- [Local-machine Solutions](#local-machine-solutions)
|
||||
- [Hosted Solutions](#hosted-solutions)
|
||||
- [Turn-key Cloud Solutions](#turn-key-cloud-solutions)
|
||||
- [Custom Solutions](#custom-solutions)
|
||||
- [Cloud](#cloud)
|
||||
- [On-Premises VMs](#on-premises-vms)
|
||||
- [Bare Metal](#bare-metal)
|
||||
- [Integrations](#integrations)
|
||||
- [Table of Solutions](#table-of-solutions)
|
||||
|
||||
<!-- END MUNGE: GENERATED_TOC -->
|
||||
|
||||
|
||||
## Picking the Right Solution
|
||||
|
||||
If you just want to "kick the tires" on Kubernetes, we recommend the [local Docker-based](docker.md) solution.
|
||||
|
||||
The local Docker-based solution is one of several [Local cluster](#local-machine-solutions) solutions
|
||||
that are quick to set up, but are limited to running on one machine.
|
||||
|
||||
When you are ready to scale up to more machines and higher availability, a [Hosted](#hosted-solutions)
|
||||
solution is the easiest to create and maintain.
|
||||
|
||||
[Turn-key cloud solutions](#turn-key-cloud-solutions) require only a few commands to create
|
||||
and cover a wider range of cloud providers.
|
||||
|
||||
[Custom solutions](#custom-solutions) require more effort to set up but cover a wider range of environments;
they vary from step-by-step instructions to general advice for setting up
a Kubernetes cluster from scratch.
|
||||
|
||||
### Local-machine Solutions
|
||||
|
||||
Local-machine solutions create a single cluster with one or more Kubernetes nodes on a single
|
||||
physical machine. Setup is completely automated and doesn't require a cloud provider account.
|
||||
But their size and availability is limited to that of a single machine.
|
||||
|
||||
The local-machine solutions are:
|
||||
|
||||
- [Local Docker-based](docker.md) (recommended starting point)
|
||||
- [Vagrant](vagrant.md) (works on any platform with Vagrant: Linux, MacOS, or Windows.)
|
||||
- [No-VM local cluster](../devel/running-locally.md) (Linux only)
|
||||
|
||||
|
||||
### Hosted Solutions
|
||||
|
||||
[Google Container Engine](https://cloud.google.com/container-engine) offers managed Kubernetes
|
||||
clusters.
|
||||
|
||||
### Turn-key Cloud Solutions
|
||||
|
||||
These solutions allow you to create Kubernetes clusters on a range of Cloud IaaS providers with only a
|
||||
few commands, and have active community support.
|
||||
|
||||
- [GCE](gce.md)
|
||||
- [AWS](aws.md)
|
||||
- [Azure](coreos/azure/README.md)
|
||||
|
||||
### Custom Solutions
|
||||
|
||||
Kubernetes can run on a wide range of Cloud providers and bare-metal environments, and with many
|
||||
base operating systems.
|
||||
|
||||
If you can find a guide below that matches your needs, use it. It may be a little out of date, but
|
||||
it will be easier than starting from scratch. If you do want to start from scratch because you
|
||||
have special requirements or just because you want to understand what is underneath a Kubernetes
|
||||
cluster, try the [Getting Started from Scratch](scratch.md) guide.
|
||||
|
||||
If you are interested in supporting Kubernetes on a new platform, check out our [advice for
|
||||
writing a new solution](../../docs/devel/writing-a-getting-started-guide.md).
|
||||
|
||||
#### Cloud
|
||||
|
||||
These solutions are combinations of cloud provider and OS not covered by the above solutions.
|
||||
|
||||
- [AWS + CoreOS](coreos.md)
|
||||
- [GCE + CoreOS](coreos.md)
|
||||
- [AWS + Ubuntu](juju.md)
|
||||
- [Joyent + Ubuntu](juju.md)
|
||||
- [Rackspace + CoreOS](rackspace.md)
|
||||
|
||||
#### On-Premises VMs
|
||||
|
||||
- [Vagrant](coreos.md) (uses CoreOS and flannel)
- [CloudStack](cloudstack.md) (uses Ansible, CoreOS and flannel)
- [VMware](vsphere.md) (uses Debian)
- [Juju](juju.md) (uses Juju, Ubuntu and flannel)
- [VMware](coreos.md) (uses CoreOS and flannel)
- [libvirt CoreOS](libvirt-coreos.md) (uses CoreOS)
- [oVirt](ovirt.md)
- [libvirt](fedora/flannel_multi_node_cluster.md) (uses Fedora and flannel)
- [KVM](fedora/flannel_multi_node_cluster.md) (uses Fedora and flannel)
|
||||
|
||||
#### Bare Metal
|
||||
|
||||
- [Offline](coreos/bare_metal_offline.md) (no internet required; uses CoreOS and flannel)
- [Fedora (Ansible)](fedora/fedora_ansible_config.md)
- [Fedora single node](fedora/fedora_manual_config.md)
- [Fedora multi node](fedora/flannel_multi_node_cluster.md)
- [CentOS](centos/centos_manual_config.md)
- [Ubuntu](ubuntu.md)
- [Docker Multi Node](docker-multinode.md)
|
||||
|
||||
#### Integrations
|
||||
|
||||
These solutions provide integration with 3rd party schedulers, resource managers, and/or lower level platforms.
|
||||
|
||||
- [Kubernetes on Mesos](mesos.md)
|
||||
- Instructions specify GCE, but are generic enough to be adapted to most existing Mesos clusters
|
||||
- [Kubernetes on DCOS](dcos.md)
|
||||
- Community Edition DCOS uses AWS
|
||||
- Enterprise Edition DCOS supports cloud hosting, on-premise VMs, and bare metal
|
||||
|
||||
## Table of Solutions
|
||||
|
||||
Here are all the solutions mentioned above in table form.
|
||||
|
||||
IaaS Provider        | Config. Mgmt | OS     | Networking | Docs                                          | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
GKE                  |              |        | GCE        | [docs](https://cloud.google.com/container-engine) | [✓][3] | Commercial
Vagrant              | Saltstack    | Fedora | flannel    | [docs](vagrant.md)                            | [✓][2]   | Project
GCE                  | Saltstack    | Debian | GCE        | [docs](gce.md)                                | [✓][1]   | Project
Azure                | CoreOS       | CoreOS | Weave      | [docs](coreos/azure/README.md)                |          | Community ([@errordeveloper](https://github.com/errordeveloper), [@squillace](https://github.com/squillace), [@chanezon](https://github.com/chanezon), [@crossorigin](https://github.com/crossorigin))
Docker Single Node   | custom       | N/A    | local      | [docs](docker.md)                             |          | Project ([@brendandburns](https://github.com/brendandburns))
Docker Multi Node    | Flannel      | N/A    | local      | [docs](docker-multinode.md)                   |          | Project ([@brendandburns](https://github.com/brendandburns))
Bare-metal           | Ansible      | Fedora | flannel    | [docs](fedora/fedora_ansible_config.md)       |          | Project
Bare-metal           | custom       | Fedora | _none_     | [docs](fedora/fedora_manual_config.md)        |          | Project
Bare-metal           | custom       | Fedora | flannel    | [docs](fedora/flannel_multi_node_cluster.md)  |          | Community ([@aveshagarwal](https://github.com/aveshagarwal))
libvirt              | custom       | Fedora | flannel    | [docs](fedora/flannel_multi_node_cluster.md)  |          | Community ([@aveshagarwal](https://github.com/aveshagarwal))
KVM                  | custom       | Fedora | flannel    | [docs](fedora/flannel_multi_node_cluster.md)  |          | Community ([@aveshagarwal](https://github.com/aveshagarwal))
Mesos/Docker         | custom       | Ubuntu | Docker     | [docs](mesos-docker.md)                       | [✓][4]   | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md))
Mesos/GCE            |              |        |            | [docs](mesos.md)                              |          | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md))
DCOS                 | Marathon     | CoreOS/Alpine | custom | [docs](dcos.md)                           |          | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md))
AWS                  | CoreOS       | CoreOS | flannel    | [docs](coreos.md)                             |          | Community
GCE                  | CoreOS       | CoreOS | flannel    | [docs](coreos.md)                             |          | Community ([@pires](https://github.com/pires))
Vagrant              | CoreOS       | CoreOS | flannel    | [docs](coreos.md)                             |          | Community ([@pires](https://github.com/pires), [@AntonioMeireles](https://github.com/AntonioMeireles))
Bare-metal (Offline) | CoreOS       | CoreOS | flannel    | [docs](coreos/bare_metal_offline.md)          |          | Community ([@jeffbean](https://github.com/jeffbean))
Bare-metal           | CoreOS       | CoreOS | Calico     | [docs](coreos/bare_metal_calico.md)           | [✓][5]   | Community ([@caseydavenport](https://github.com/caseydavenport))
CloudStack           | Ansible      | CoreOS | flannel    | [docs](cloudstack.md)                         |          | Community ([@runseb](https://github.com/runseb))
Vmware               |              | Debian | OVS        | [docs](vsphere.md)                            |          | Community ([@pietern](https://github.com/pietern))
Bare-metal           | custom       | CentOS | _none_     | [docs](centos/centos_manual_config.md)        |          | Community ([@coolsvap](https://github.com/coolsvap))
AWS                  | Juju         | Ubuntu | flannel    | [docs](juju.md)                               |          | [Community](https://github.com/whitmo/bundle-kubernetes) ( [@whit](https://github.com/whitmo), [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) )
OpenStack/HPCloud    | Juju         | Ubuntu | flannel    | [docs](juju.md)                               |          | [Community](https://github.com/whitmo/bundle-kubernetes) ( [@whit](https://github.com/whitmo), [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) )
Joyent               | Juju         | Ubuntu | flannel    | [docs](juju.md)                               |          | [Community](https://github.com/whitmo/bundle-kubernetes) ( [@whit](https://github.com/whitmo), [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) )
AWS                  | Saltstack    | Ubuntu | OVS        | [docs](aws.md)                                |          | Community ([@justinsb](https://github.com/justinsb))
Bare-metal           | custom       | Ubuntu | Calico     | [docs](ubuntu-calico.md)                      |          | Community ([@djosborne](https://github.com/djosborne))
Bare-metal           | custom       | Ubuntu | flannel    | [docs](ubuntu.md)                             |          | Community ([@resouer](https://github.com/resouer), [@dalanlan](https://github.com/dalanlan), [@WIZARD-CXY](https://github.com/WIZARD-CXY))
libvirt/KVM          | CoreOS       | CoreOS | libvirt/KVM | [docs](libvirt-coreos.md)                    |          | Community ([@lhuard1A](https://github.com/lhuard1A))
oVirt                |              |        |            | [docs](ovirt.md)                              |          | Community ([@simon3z](https://github.com/simon3z))
Rackspace            | CoreOS       | CoreOS | flannel    | [docs](rackspace.md)                          |          | Community ([@doublerr](https://github.com/doublerr))
any                  | any          | any    | any        | [docs](scratch.md)                            |          | Community ([@erictune](https://github.com/erictune))
|
||||
|
||||
|
||||
*Note*: The above table is ordered by the version tested/used in the notes, followed by support level.
|
||||
|
||||
Definition of columns:
|
||||
|
||||
- **IaaS Provider** is who/what provides the virtual or physical machines (nodes) that Kubernetes runs on.
|
||||
- **OS** is the base operating system of the nodes.
|
||||
- **Config. Mgmt** is the configuration management system that helps install and maintain Kubernetes software on the
|
||||
nodes.
|
||||
- **Networking** is what implements the [networking model](../../docs/admin/networking.md). Those with networking type
|
||||
_none_ may not support more than one node, or may support multiple VM nodes only in the same physical node.
|
||||
- **Conformance** indicates whether a cluster created with this configuration has passed the project's conformance
|
||||
tests for supporting the API and base features of Kubernetes v1.0.0.
|
||||
- Support Levels
|
||||
- **Project**: Kubernetes Committers regularly use this configuration, so it usually works with the latest release
|
||||
of Kubernetes.
|
||||
- **Commercial**: A commercial offering with its own support arrangements.
|
||||
- **Community**: Actively supported by community contributions. May not work with more recent releases of Kubernetes.
|
||||
- **Inactive**: No active maintainer. Not recommended for first-time Kubernetes users, and may be deleted soon.
|
||||
- **Notes** is relevant information such as the version of Kubernetes used.
|
||||
|
||||
|
||||
<!-- reference style links below here -->
|
||||
<!-- GCE conformance test result -->
|
||||
[1]: https://gist.github.com/erictune/4cabc010906afbcc5061
|
||||
<!-- Vagrant conformance test result -->
|
||||
[2]: https://gist.github.com/derekwaynecarr/505e56036cdf010bf6b6
|
||||
<!-- GKE conformance test result -->
|
||||
[3]: https://gist.github.com/erictune/2f39b22f72565365e59b
|
||||
<!-- Mesos/Docker conformance test result -->
|
||||
[4]: https://gist.github.com/sttts/d27f3b879223895494d4
|
||||
<!-- Calico/CoreOS conformance test result -->
|
||||
[5]: https://gist.github.com/caseydavenport/98ca87e709b21f03d195
|
||||
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/README/
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
|
@@ -31,158 +31,8 @@ Documentation for other releases can be found at
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
Getting started on AWS EC2
|
||||
--------------------------
|
||||
|
||||
**Table of Contents**
|
||||
|
||||
- [Prerequisites](#prerequisites)
|
||||
- [Cluster turnup](#cluster-turnup)
|
||||
- [Supported procedure: `get-kube`](#supported-procedure-get-kube)
|
||||
- [Alternatives](#alternatives)
|
||||
- [Getting started with your cluster](#getting-started-with-your-cluster)
|
||||
- [Command line administration tool: `kubectl`](#command-line-administration-tool-kubectl)
|
||||
- [Examples](#examples)
|
||||
- [Tearing down the cluster](#tearing-down-the-cluster)
|
||||
- [Further reading](#further-reading)
|
||||
|
||||
## Prerequisites
|
||||
|
||||
1. You need an AWS account. Visit [http://aws.amazon.com](http://aws.amazon.com) to get started
|
||||
2. Install and configure [AWS Command Line Interface](http://aws.amazon.com/cli)
|
||||
3. You need an AWS [instance profile and role](http://docs.aws.amazon.com/IAM/latest/UserGuide/instance-profiles.html) with EC2 full access.
|
||||
|
||||
NOTE: This script uses the 'default' AWS profile by default.
You may explicitly set the AWS profile to use with the `AWS_DEFAULT_PROFILE` environment variable:
|
||||
|
||||
```bash
|
||||
export AWS_DEFAULT_PROFILE=myawsprofile
|
||||
```
|
||||
|
||||
## Cluster turnup
|
||||
|
||||
### Supported procedure: `get-kube`
|
||||
|
||||
```bash
|
||||
#Using wget
|
||||
export KUBERNETES_PROVIDER=aws; wget -q -O - https://get.k8s.io | bash
|
||||
|
||||
#Using cURL
|
||||
export KUBERNETES_PROVIDER=aws; curl -sS https://get.k8s.io | bash
|
||||
```
|
||||
|
||||
NOTE: This script calls [cluster/kube-up.sh](http://releases.k8s.io/HEAD/cluster/kube-up.sh)
|
||||
which in turn calls [cluster/aws/util.sh](http://releases.k8s.io/HEAD/cluster/aws/util.sh)
|
||||
using [cluster/aws/config-default.sh](http://releases.k8s.io/HEAD/cluster/aws/config-default.sh).
|
||||
|
||||
This process takes about 5 to 10 minutes. Once the cluster is up, the IP addresses of your master and node(s) will be printed,
as well as information about the default services running in the cluster (monitoring, logging, dns). User credentials and security
tokens are written to `~/.kube/config`; they will be necessary to use the CLI or HTTP Basic Auth.
|
||||
|
||||
By default, the script will provision a new VPC and a 4 node k8s cluster in us-west-2a (Oregon) with EC2 instances running on Ubuntu.
|
||||
You can override the variables defined in [config-default.sh](http://releases.k8s.io/HEAD/cluster/aws/config-default.sh) to change this behavior as follows:
|
||||
|
||||
```bash
|
||||
export KUBE_AWS_ZONE=eu-west-1c
|
||||
export NUM_NODES=2
|
||||
export MASTER_SIZE=m3.medium
|
||||
export NODE_SIZE=m3.medium
|
||||
export AWS_S3_REGION=eu-west-1
|
||||
export AWS_S3_BUCKET=mycompany-kubernetes-artifacts
|
||||
export INSTANCE_PREFIX=k8s
|
||||
...
|
||||
```
|
||||
|
||||
If you don't specify master and minion sizes, the scripts will attempt to guess
the correct size of the master and worker nodes based on `${NUM_NODES}`. In
version 1.2 these defaults are:
|
||||
|
||||
* For the master, clusters of fewer than 150 nodes will use an `m3.medium`; clusters of more than 150 nodes will use an `m3.large`.

* For worker nodes, clusters of fewer than 50 nodes will use a `t2.micro`, clusters between 50 and 150 nodes will use a `t2.small`, and clusters of more than 150 nodes will use a `t2.medium`.
|
||||
|
||||
WARNING: beware that `t2` instances receive a limited number of CPU credits per hour and might not be suitable for clusters where the CPU is used
consistently. As a rough estimate, consider 15 pods/node the absolute limit a `t2.large` instance can handle before it starts steadily exhausting its
CPU credits, although this number depends heavily on the usage.
|
||||
|
||||
In prior versions of Kubernetes, we defaulted the master node to a t2-class
instance, but found that this sometimes gave hard-to-diagnose problems when the
master ran out of memory or CPU credits. If you are running a test cluster
and want to save money, you can specify `export MASTER_SIZE=t2.micro`, but if
your master pauses, do check the CPU credits in the AWS console.
|
||||
|
||||
For production usage, we recommend at least `export MASTER_SIZE=m3.medium` and
|
||||
`export NODE_SIZE=m3.medium`. And once you get above a handful of nodes, be
|
||||
aware that one m3.large instance has more storage than two m3.medium instances,
|
||||
for the same price.
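
Putting those recommendations together, a minimal set of size overrides for production use (using the same variables as the configuration block above) might look like this:

```bash
# Recommended minimum instance sizes for production use; export these
# before running the get-kube / kube-up step so the defaults are overridden.
export MASTER_SIZE=m3.medium
export NODE_SIZE=m3.medium
```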
|
||||
|
||||
We generally recommend the m3 instances over the m4 instances, because the m3
|
||||
instances include local instance storage. Historically local instance storage
|
||||
has been more reliable than AWS EBS, and performance should be more consistent.
|
||||
The ephemeral nature of this storage is a match for ephemeral container
|
||||
workloads also!
|
||||
|
||||
If you use an m4 instance, or another instance type which does not have local
|
||||
instance storage, you may want to increase the `NODE_ROOT_DISK_SIZE` value,
|
||||
although the default value of 32 is probably sufficient for the smaller
|
||||
instance types in the m4 family.
|
||||
|
||||
The script will also try to create or reuse a keypair called "kubernetes", and IAM profiles called "kubernetes-master" and "kubernetes-minion".
|
||||
If these already exist, make sure you want them to be used here.
|
||||
|
||||
NOTE: If using an existing keypair named "kubernetes" then you must set the `AWS_SSH_KEY` key to point to your private key.
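
For example, assuming the matching private key is stored at `~/.ssh/kubernetes` (a hypothetical path), the override would look like this:

```bash
# Hypothetical path: point AWS_SSH_KEY at the private key that matches
# the existing "kubernetes" keypair in your AWS account.
export AWS_SSH_KEY=$HOME/.ssh/kubernetes
```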
|
||||
|
||||
### Alternatives
|
||||
|
||||
CoreOS maintains [a CLI tool](https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws.html), `kube-aws` that will create and manage a Kubernetes cluster based on [CoreOS](http://www.coreos.com), using AWS tools: EC2, CloudFormation and Autoscaling.
|
||||
|
||||
## Getting started with your cluster
|
||||
|
||||
### Command line administration tool: `kubectl`
|
||||
|
||||
The cluster startup script will leave you with a `kubernetes` directory on your workstation.
|
||||
Alternately, you can download the latest Kubernetes release from [this page](https://github.com/kubernetes/kubernetes/releases).
|
||||
|
||||
Next, add the appropriate binary folder to your `PATH` to access kubectl:
|
||||
|
||||
```bash
|
||||
# OS X
|
||||
export PATH=<path/to/kubernetes-directory>/platforms/darwin/amd64:$PATH
|
||||
|
||||
# Linux
|
||||
export PATH=<path/to/kubernetes-directory>/platforms/linux/amd64:$PATH
|
||||
```
|
||||
|
||||
An up-to-date documentation page for this tool is available here: [kubectl manual](../../docs/user-guide/kubectl/kubectl.md)
|
||||
|
||||
By default, `kubectl` will use the `kubeconfig` file generated during the cluster startup for authenticating against the API.
|
||||
For more information, please read [kubeconfig files](../../docs/user-guide/kubeconfig-file.md)
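
As a quick sanity check that `kubectl` is picking up the generated configuration, a couple of read-only commands are enough (a minimal sketch; both are standard `kubectl` subcommands):

```bash
# Show the cluster, user and context written by the startup script
kubectl config view

# List the nodes of your new cluster
kubectl get nodes
```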
|
||||
|
||||
### Examples
|
||||
|
||||
See [a simple nginx example](../../docs/user-guide/simple-nginx.md) to try out your new cluster.
|
||||
|
||||
The "Guestbook" application is another popular example to get started with Kubernetes: [guestbook example](../../examples/guestbook/)
|
||||
|
||||
For more complete applications, please look in the [examples directory](../../examples/)
|
||||
|
||||
## Tearing down the cluster
|
||||
|
||||
Make sure the environment variables you used to provision your cluster are still exported, then call the following script inside the
|
||||
`kubernetes` directory:
|
||||
|
||||
```bash
|
||||
cluster/kube-down.sh
|
||||
```
|
||||
|
||||
## Further reading
|
||||
|
||||
Please see the [Kubernetes docs](../../docs/) for more details on administering
|
||||
and using a Kubernetes cluster.
|
||||
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/aws/
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
|
@@ -32,10 +32,7 @@ Documentation for other releases can be found at
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
Getting started on Microsoft Azure
|
||||
----------------------------------
|
||||
|
||||
Check out the [CoreOS Azure getting started guide](coreos/azure/README.md).
|
||||
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/azure/
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
|
@@ -32,29 +32,7 @@ Documentation for other releases can be found at
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
## Getting a Binary Release
|
||||
|
||||
You can either build a release from sources or download a pre-built release. If you do not plan on developing Kubernetes itself, we suggest a pre-built release.
|
||||
|
||||
### Prebuilt Binary Release
|
||||
|
||||
The list of binary releases is available for download from the [GitHub Kubernetes repo release page](https://github.com/kubernetes/kubernetes/releases).
|
||||
|
||||
Download the latest release and unpack this tar file on Linux or OS X, cd to the created `kubernetes/` directory, and then follow the getting started guide for your cloud.
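
For example, on Linux this might look like the following sketch; the exact URL and file name depend on the release you pick from the releases page, so treat them as placeholders:

```bash
# Placeholder version and file name: substitute the release you chose.
wget https://github.com/kubernetes/kubernetes/releases/download/v1.1.0/kubernetes.tar.gz
tar -xzf kubernetes.tar.gz
cd kubernetes
```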
|
||||
|
||||
### Building from source
|
||||
|
||||
Get the Kubernetes source. If you are simply building a release from source there is no need to set up a full golang environment as all building happens in a Docker container.
|
||||
|
||||
Building a release is simple.
|
||||
|
||||
```bash
|
||||
git clone https://github.com/kubernetes/kubernetes.git
|
||||
cd kubernetes
|
||||
make release
|
||||
```
|
||||
|
||||
For more details on the release process see the [`build/` directory](http://releases.k8s.io/HEAD/build/)
|
||||
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/binary_release/
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
|
@@ -31,162 +31,8 @@ Documentation for other releases can be found at
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
Getting started on [CentOS](http://centos.org)
|
||||
----------------------------------------------
|
||||
|
||||
**Table of Contents**
|
||||
|
||||
- [Prerequisites](#prerequisites)
|
||||
- [Starting a cluster](#starting-a-cluster)
|
||||
|
||||
## Prerequisites
|
||||
|
||||
You need two machines with CentOS installed on them.
|
||||
|
||||
## Starting a cluster
|
||||
|
||||
This is a getting started guide for CentOS. It is a manual configuration so you understand all the underlying packages / services / ports, etc...
|
||||
|
||||
This guide will only get ONE node working. Multiple nodes require a functional [networking configuration](../../admin/networking.md) done outside of Kubernetes, although the additional Kubernetes configuration requirements should be obvious.
|
||||
|
||||
The Kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd and the configuration resides in a central location: /etc/kubernetes. We will break the services up between the hosts. The first host, centos-master, will be the Kubernetes master. This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run _etcd_. The remaining host, centos-minion will be the node and run kubelet, proxy, cadvisor and docker.
|
||||
|
||||
**System Information:**
|
||||
|
||||
Hosts:
|
||||
|
||||
```
|
||||
centos-master = 192.168.121.9
|
||||
centos-minion = 192.168.121.65
|
||||
```
|
||||
|
||||
**Prepare the hosts:**
|
||||
|
||||
* Create a virt7-docker-common-release repo on all hosts (centos-{master,minion}) with the following information.
|
||||
|
||||
```
|
||||
[virt7-docker-common-release]
|
||||
name=virt7-docker-common-release
|
||||
baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/
|
||||
gpgcheck=0
|
||||
```
|
||||
|
||||
* Install Kubernetes on all hosts - centos-{master,minion}. This will also pull in etcd, docker, and cadvisor.
|
||||
|
||||
```sh
|
||||
yum -y install --enablerepo=virt7-docker-common-release kubernetes
|
||||
```
|
||||
|
||||
* Add master and node to /etc/hosts on all machines (not needed if hostnames already in DNS)
|
||||
|
||||
```sh
|
||||
echo "192.168.121.9 centos-master
|
||||
192.168.121.65 centos-minion" >> /etc/hosts
|
||||
```
|
||||
|
||||
* Edit /etc/kubernetes/config, which will be the same on all hosts, so that it contains:
|
||||
|
||||
```sh
|
||||
# Comma separated list of nodes in the etcd cluster
|
||||
KUBE_ETCD_SERVERS="--etcd-servers=http://centos-master:2379"
|
||||
|
||||
# logging to stderr means we get it in the systemd journal
|
||||
KUBE_LOGTOSTDERR="--logtostderr=true"
|
||||
|
||||
# journal message level, 0 is debug
|
||||
KUBE_LOG_LEVEL="--v=0"
|
||||
|
||||
# Should this cluster be allowed to run privileged docker containers
|
||||
KUBE_ALLOW_PRIV="--allow-privileged=false"
|
||||
```
|
||||
|
||||
* Disable the firewall on both the master and node, as docker does not play well with other firewall rule managers
|
||||
|
||||
```sh
|
||||
systemctl disable iptables-services firewalld
|
||||
systemctl stop iptables-services firewalld
|
||||
```
|
||||
|
||||
**Configure the Kubernetes services on the master.**
|
||||
|
||||
* Edit /etc/kubernetes/apiserver to appear as such:
|
||||
|
||||
```sh
|
||||
# The address on the local server to listen to.
|
||||
KUBE_API_ADDRESS="--address=0.0.0.0"
|
||||
|
||||
# The port on the local server to listen on.
|
||||
KUBE_API_PORT="--port=8080"
|
||||
|
||||
# How the replication controller and scheduler find the kube-apiserver
|
||||
KUBE_MASTER="--master=http://centos-master:8080"
|
||||
|
||||
# Port kubelets listen on
|
||||
KUBELET_PORT="--kubelet-port=10250"
|
||||
|
||||
# Address range to use for services
|
||||
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
|
||||
|
||||
# Add your own!
|
||||
KUBE_API_ARGS=""
|
||||
```
|
||||
|
||||
* Start the appropriate services on master:
|
||||
|
||||
```sh
|
||||
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
|
||||
systemctl restart $SERVICES
|
||||
systemctl enable $SERVICES
|
||||
systemctl status $SERVICES
|
||||
done
|
||||
```
|
||||
|
||||
**Configure the Kubernetes services on the node.**
|
||||
|
||||
***We need to configure the kubelet and start the kubelet and proxy***
|
||||
|
||||
* Edit /etc/kubernetes/kubelet to appear as such:
|
||||
|
||||
```sh
|
||||
# The address for the info server to serve on
|
||||
KUBELET_ADDRESS="--address=0.0.0.0"
|
||||
|
||||
# The port for the info server to serve on
|
||||
KUBELET_PORT="--port=10250"
|
||||
|
||||
# You may leave this blank to use the actual hostname
|
||||
KUBELET_HOSTNAME="--hostname-override=centos-minion"
|
||||
|
||||
# Location of the api-server
|
||||
KUBELET_API_SERVER="--api-servers=http://centos-master:8080"
|
||||
|
||||
# Add your own!
|
||||
KUBELET_ARGS=""
|
||||
```
|
||||
|
||||
* Start the appropriate services on node (centos-minion).
|
||||
|
||||
```sh
|
||||
for SERVICES in kube-proxy kubelet docker; do
|
||||
systemctl restart $SERVICES
|
||||
systemctl enable $SERVICES
|
||||
systemctl status $SERVICES
|
||||
done
|
||||
```
|
||||
|
||||
*You should be finished!*
|
||||
|
||||
* Check to make sure the cluster can see the node (on centos-master)
|
||||
|
||||
```console
|
||||
$ kubectl get nodes
|
||||
NAME LABELS STATUS
|
||||
centos-minion <none> Ready
|
||||
```
|
||||
|
||||
**The cluster should be running! Launch a test pod.**
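
As a minimal smoke test (not part of the original configuration steps), you can start a single nginx pod from centos-master and watch it come up:

```sh
# Launch a test pod and check that it reaches the Running state
kubectl run nginx --image=nginx
kubectl get pods
```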
|
||||
|
||||
You should have a functional cluster, check out [101](../../../docs/user-guide/walkthrough/README.md)!
|
||||
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/centos/centos_manual_config/
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
|
@@ -31,90 +31,8 @@ Documentation for other releases can be found at
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
Getting started on [CloudStack](http://cloudstack.apache.org)
|
||||
------------------------------------------------------------
|
||||
|
||||
**Table of Contents**
|
||||
|
||||
- [Introduction](#introduction)
|
||||
- [Prerequisites](#prerequisites)
|
||||
- [Clone the playbook](#clone-the-playbook)
|
||||
- [Create a Kubernetes cluster](#create-a-kubernetes-cluster)
|
||||
|
||||
### Introduction
|
||||
|
||||
CloudStack is software for building public and private clouds based on hardware virtualization principles (traditional IaaS). To deploy Kubernetes on CloudStack there are several possibilities, depending on the cloud being used and what images are made available. [Exoscale](http://exoscale.ch), for instance, makes a [CoreOS](http://coreos.com) template available, so the instructions for deploying Kubernetes on CoreOS can be used. CloudStack also has a Vagrant plugin available, hence Vagrant could be used to deploy Kubernetes either using the existing shell provisioner or using new Salt-based recipes.
|
||||
|
||||
[CoreOS](http://coreos.com) templates for CloudStack are built [nightly](http://stable.release.core-os.net/amd64-usr/current/). CloudStack operators need to [register](http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/latest/templates.html) this template in their cloud before proceeding with these Kubernetes deployment instructions.
|
||||
|
||||
This guide uses an [Ansible playbook](https://github.com/runseb/ansible-kubernetes).
|
||||
It is completely automated: a single playbook deploys Kubernetes based on the CoreOS [instructions](coreos/coreos_multinode_cluster.md).
|
||||
|
||||
|
||||
This [Ansible](http://ansibleworks.com) playbook deploys Kubernetes on a CloudStack-based cloud using CoreOS images. The playbook creates an SSH key pair, creates a security group and associated rules, and finally starts CoreOS instances configured via cloud-init.
|
||||
|
||||
### Prerequisites
|
||||
|
||||
```sh
sudo apt-get install -y python-pip
sudo pip install ansible
sudo pip install cs
```
|
||||
|
||||
[_cs_](https://github.com/exoscale/cs) is a python module for the CloudStack API.
|
||||
|
||||
Set your CloudStack endpoint, API keys and HTTP method used.
|
||||
|
||||
You can define them as environment variables: `CLOUDSTACK_ENDPOINT`, `CLOUDSTACK_KEY`, `CLOUDSTACK_SECRET` and `CLOUDSTACK_METHOD`.
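
For example, as environment variables (the values are placeholders for your own endpoint and API keys):

```sh
export CLOUDSTACK_ENDPOINT=<your cloudstack api endpoint>
export CLOUDSTACK_KEY=<your api access key>
export CLOUDSTACK_SECRET=<your api secret key>
export CLOUDSTACK_METHOD=post
```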
|
||||
|
||||
Or create a `~/.cloudstack.ini` file:
|
||||
|
||||
```
[cloudstack]
endpoint = <your cloudstack api endpoint>
key = <your api access key>
secret = <your api secret key>
method = post
```
|
||||
|
||||
We need to use the HTTP POST method to pass the _large_ userdata to the CoreOS instances.
|
||||
|
||||
### Clone the playbook
|
||||
|
||||
```sh
git clone --recursive https://github.com/runseb/ansible-kubernetes.git
cd ansible-kubernetes
```
|
||||
|
||||
The [ansible-cloudstack](https://github.com/resmo/ansible-cloudstack) module is setup in this repository as a submodule, hence the `--recursive`.
|
||||
|
||||
### Create a Kubernetes cluster
|
||||
|
||||
You simply need to run the playbook.
|
||||
|
||||
```sh
ansible-playbook k8s.yml
```
|
||||
|
||||
Some variables can be edited in the `k8s.yml` file.
|
||||
|
||||
```yaml
vars:
  ssh_key: k8s
  k8s_num_nodes: 2
  k8s_security_group_name: k8s
  k8s_node_prefix: k8s2
  k8s_template: Linux CoreOS alpha 435 64-bit 10GB Disk
  k8s_instance_type: Tiny
```
|
||||
|
||||
This will start a Kubernetes master node and a number of compute nodes (by default 2).
|
||||
The `instance_type` and `template` are by default specific to [exoscale](http://exoscale.ch); edit them to specify your CloudStack cloud's template and instance type (i.e. service offering).
|
||||
|
||||
Check the tasks and templates in `roles/k8s` if you want to modify anything.
|
||||
|
||||
Once the playbook has finished, it will print out the IP of the Kubernetes master:
|
||||
|
||||
```console
TASK: [k8s | debug msg='k8s master IP is {{ k8s_master.default_ip }}'] ********
```
|
||||
|
||||
SSH to it as the _core_ user, using the key that was created, and you can list the machines in your cluster:
|
||||
|
||||
```console
$ ssh -i ~/.ssh/id_rsa_k8s core@<master IP>
$ fleetctl list-machines

MACHINE      IP             METADATA
a017c422...  <node #1 IP>   role=node
ad13bf84...  <master IP>    role=master
e9af8293...  <node #2 IP>   role=node
```
|
||||
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/cloudstack/
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
|
@@ -32,85 +32,8 @@ Documentation for other releases can be found at
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
## Getting Started on [CoreOS](https://coreos.com)
|
||||
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/coreos/
|
||||
|
||||
There are multiple guides on running Kubernetes with [CoreOS](https://coreos.com/kubernetes/docs/latest/):
|
||||
|
||||
### Official CoreOS Guides
|
||||
|
||||
These guides are maintained by CoreOS and deploy Kubernetes the "CoreOS Way" with full TLS, the DNS add-on, and more. These guides pass Kubernetes conformance testing and we encourage you to [test this yourself](https://coreos.com/kubernetes/docs/latest/conformance-tests.html).
|
||||
|
||||
[**AWS Multi-Node**](https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws.html)
|
||||
|
||||
Guide and CLI tool for setting up a multi-node cluster on AWS. CloudFormation is used to set up a master and multiple workers in auto-scaling groups.
|
||||
|
||||
<hr/>
|
||||
|
||||
[**Vagrant Multi-Node**](https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant.html)
|
||||
|
||||
Guide to setting up a multi-node cluster on Vagrant. The deployer can independently configure the number of etcd nodes, master nodes, and worker nodes to bring up a fully HA control plane.
|
||||
|
||||
<hr/>
|
||||
|
||||
[**Vagrant Single-Node**](https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant-single.html)
|
||||
|
||||
The quickest way to set up a Kubernetes development environment locally. As easy as `git clone`, `vagrant up` and configuring `kubectl`.
|
||||
|
||||
<hr/>
|
||||
|
||||
[**Full Step by Step Guide**](https://coreos.com/kubernetes/docs/latest/getting-started.html)
|
||||
|
||||
A generic guide to setting up an HA cluster on any cloud or bare metal, with full TLS. Repeat the master or worker steps to configure more machines of that role.
|
||||
|
||||
### Community Guides
|
||||
|
||||
These guides are maintained by community members, cover specific platforms and use cases, and experiment with different ways of configuring Kubernetes on CoreOS.
|
||||
|
||||
[**Multi-node Cluster**](coreos/coreos_multinode_cluster.md)
|
||||
|
||||
Set up a single master, multi-worker cluster on your choice of platform: AWS, GCE, or VMware Fusion.
|
||||
|
||||
<hr/>
|
||||
|
||||
[**Easy Multi-node Cluster on Google Compute Engine**](https://github.com/rimusz/coreos-multi-node-k8s-gce/blob/master/README.md)
|
||||
|
||||
Scripted installation of a single master, multi-worker cluster on GCE. Kubernetes components are managed by [fleet](https://github.com/coreos/fleet).
|
||||
|
||||
<hr/>
|
||||
|
||||
[**Multi-node cluster using cloud-config and Weave on Vagrant**](https://github.com/errordeveloper/weave-demos/blob/master/poseidon/README.md)
|
||||
|
||||
Configure a Vagrant-based cluster of 3 machines with networking provided by Weave.
|
||||
|
||||
<hr/>
|
||||
|
||||
[**Multi-node cluster using cloud-config and Vagrant**](https://github.com/pires/kubernetes-vagrant-coreos-cluster/blob/master/README.md)
|
||||
|
||||
Configure a single master, multi-worker cluster locally, running on your choice of hypervisor: VirtualBox, Parallels, or VMware
|
||||
|
||||
<hr/>
|
||||
|
||||
[**Single-node cluster using a small OS X App**](https://github.com/rimusz/kube-solo-osx/blob/master/README.md)
|
||||
|
||||
Guide to running a solo cluster (master + worker) controlled by an OS X menubar application. Uses xhyve + CoreOS under the hood.
|
||||
|
||||
<hr/>
|
||||
|
||||
[**Multi-node cluster with Vagrant and fleet units using a small OS X App**](https://github.com/rimusz/coreos-osx-gui-kubernetes-cluster/blob/master/README.md)
|
||||
|
||||
Guide to running a single master, multi-worker cluster controlled by an OS X menubar application. Uses Vagrant under the hood.
|
||||
|
||||
<hr/>
|
||||
|
||||
[**Resizable multi-node cluster on Azure with Weave**](coreos/azure/README.md)
|
||||
|
||||
Guide to running an HA etcd cluster with a single master on Azure. Uses the Azure node.js CLI to resize the cluster.
|
||||
|
||||
<hr/>
|
||||
|
||||
[**Multi-node cluster using cloud-config, CoreOS and VMware ESXi**](https://github.com/xavierbaude/VMware-coreos-multi-nodes-Kubernetes)
|
||||
|
||||
Configure a single master, single worker cluster on VMware ESXi.
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
|
||||
|
@@ -31,245 +31,8 @@ Documentation for other releases can be found at
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
Kubernetes on Azure with CoreOS and [Weave](http://weave.works)
|
||||
---------------------------------------------------------------
|
||||
|
||||
**Table of Contents**
|
||||
|
||||
- [Introduction](#introduction)
|
||||
- [Prerequisites](#prerequisites)
|
||||
- [Let's go!](#lets-go)
|
||||
- [Deploying the workload](#deploying-the-workload)
|
||||
- [Scaling](#scaling)
|
||||
- [Exposing the app to the outside world](#exposing-the-app-to-the-outside-world)
|
||||
- [Next steps](#next-steps)
|
||||
- [Tear down...](#tear-down)
|
||||
|
||||
## Introduction
|
||||
|
||||
In this guide I will demonstrate how to deploy a Kubernetes cluster to Azure cloud. You will be using CoreOS with Weave, which implements simple and secure networking, in a transparent, yet robust way. The purpose of this guide is to provide an out-of-the-box implementation that can ultimately be taken into production with little change. It will demonstrate how to provision a dedicated Kubernetes master and etcd nodes, and show how to scale the cluster with ease.
|
||||
|
||||
### Prerequisites
|
||||
|
||||
1. You need an Azure account.
|
||||
|
||||
## Let's go!
|
||||
|
||||
To get started, you need to checkout the code:
|
||||
|
||||
```sh
|
||||
git clone https://github.com/kubernetes/kubernetes
|
||||
cd kubernetes/docs/getting-started-guides/coreos/azure/
|
||||
```
|
||||
|
||||
You will need to have [Node.js installed](http://nodejs.org/download/) on your machine. If you have previously used the Azure CLI, you should have it already.
|
||||
|
||||
First, you need to install some of the dependencies with
|
||||
|
||||
```sh
|
||||
npm install
|
||||
```
|
||||
|
||||
Now, all you need to do is:
|
||||
|
||||
```sh
|
||||
./azure-login.js -u <your_username>
|
||||
./create-kubernetes-cluster.js
|
||||
```
|
||||
|
||||
This script will provision a cluster suitable for production use, with a ring of 3 dedicated etcd nodes, 1 Kubernetes master and 2 Kubernetes nodes. The `kube-00` VM will be the master; your workloads are only to be deployed on the nodes, `kube-01` and `kube-02`. Initially, all VMs are single-core, to ensure a user of the free tier can reproduce it without paying extra. I will show how to add more, bigger VMs later.
|
||||
If you need to pass Azure-specific options to the creation script, you can do so via additional environment variables, e.g.:
|
||||
|
||||
```
|
||||
AZ_SUBSCRIPTION=<id> AZ_LOCATION="East US" ./create-kubernetes-cluster.js
|
||||
# or
|
||||
AZ_VM_COREOS_CHANNEL=beta ./create-kubernetes-cluster.js
|
||||
```
|
||||
|
||||
|
||||
|
||||
Once the creation of Azure VMs has finished, you should see the following:
|
||||
|
||||
```console
|
||||
...
|
||||
azure_wrapper/info: Saved SSH config, you can use it like so: `ssh -F ./output/kube_1c1496016083b4_ssh_conf <hostname>`
|
||||
azure_wrapper/info: The hosts in this deployment are:
|
||||
[ 'etcd-00', 'etcd-01', 'etcd-02', 'kube-00', 'kube-01', 'kube-02' ]
|
||||
azure_wrapper/info: Saved state into `./output/kube_1c1496016083b4_deployment.yml`
|
||||
```
|
||||
|
||||
Let's log in to the master node like so:
|
||||
|
||||
```sh
|
||||
ssh -F ./output/kube_1c1496016083b4_ssh_conf kube-00
|
||||
```
|
||||
|
||||
> Note: the config file name will be different; make sure to use the one you see.
|
||||
|
||||
Check there are 2 nodes in the cluster:
|
||||
|
||||
```console
|
||||
core@kube-00 ~ $ kubectl get nodes
|
||||
NAME LABELS STATUS
|
||||
kube-01 kubernetes.io/hostname=kube-01 Ready
|
||||
kube-02 kubernetes.io/hostname=kube-02 Ready
|
||||
```
|
||||
|
||||
## Deploying the workload
|
||||
|
||||
Let's follow the Guestbook example now:
|
||||
|
||||
```sh
|
||||
kubectl create -f ~/guestbook-example
|
||||
```
|
||||
|
||||
You need to wait for the pods to get deployed; run the following and wait for `STATUS` to change from `Pending` to `Running`.
|
||||
|
||||
```sh
|
||||
kubectl get pods --watch
|
||||
```
|
||||
|
||||
> Note: most of the time will be spent downloading Docker container images on each of the nodes.
|
||||
|
||||
Eventually you should see:
|
||||
|
||||
```console
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
frontend-0a9xi 1/1 Running 0 4m
|
||||
frontend-4wahe 1/1 Running 0 4m
|
||||
frontend-6l36j 1/1 Running 0 4m
|
||||
redis-master-talmr 1/1 Running 0 4m
|
||||
redis-slave-12zfd 1/1 Running 0 4m
|
||||
redis-slave-3nbce 1/1 Running 0 4m
|
||||
```
|
||||
|
||||
## Scaling
|
||||
|
||||
Two single-core nodes are certainly not enough for a production system of today. Let's scale the cluster by adding a couple of bigger nodes.
|
||||
|
||||
You will need to open another terminal window on your machine and go to the same working directory (e.g. `~/Workspace/kubernetes/docs/getting-started-guides/coreos/azure/`).
|
||||
|
||||
First, let's set the size of the new VMs:
|
||||
|
||||
```sh
|
||||
export AZ_VM_SIZE=Large
|
||||
```
|
||||
|
||||
Now, run the scale script with the state file of the previous deployment and the number of nodes to add:
|
||||
|
||||
```console
|
||||
core@kube-00 ~ $ ./scale-kubernetes-cluster.js ./output/kube_1c1496016083b4_deployment.yml 2
|
||||
...
|
||||
azure_wrapper/info: Saved SSH config, you can use it like so: `ssh -F ./output/kube_8f984af944f572_ssh_conf <hostname>`
|
||||
azure_wrapper/info: The hosts in this deployment are:
|
||||
[ 'etcd-00',
|
||||
'etcd-01',
|
||||
'etcd-02',
|
||||
'kube-00',
|
||||
'kube-01',
|
||||
'kube-02',
|
||||
'kube-03',
|
||||
'kube-04' ]
|
||||
azure_wrapper/info: Saved state into `./output/kube_8f984af944f572_deployment.yml`
|
||||
```
|
||||
|
||||
> Note: this step has created new files in `./output`.
|
||||
|
||||
Back on `kube-00`:
|
||||
|
||||
```console
|
||||
core@kube-00 ~ $ kubectl get nodes
|
||||
NAME LABELS STATUS
|
||||
kube-01 kubernetes.io/hostname=kube-01 Ready
|
||||
kube-02 kubernetes.io/hostname=kube-02 Ready
|
||||
kube-03 kubernetes.io/hostname=kube-03 Ready
|
||||
kube-04 kubernetes.io/hostname=kube-04 Ready
|
||||
```
|
||||
|
||||
You can see that two more nodes joined happily. Let's scale the number of Guestbook instances now.
|
||||
|
||||
First, double-check how many replication controllers there are:
|
||||
|
||||
```console
|
||||
core@kube-00 ~ $ kubectl get rc
|
||||
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
|
||||
frontend php-redis kubernetes/example-guestbook-php-redis:v2 name=frontend 3
|
||||
redis-master master redis name=redis-master 1
|
||||
redis-slave worker kubernetes/redis-slave:v2 name=redis-slave 2
|
||||
```
|
||||
|
||||
As there are 4 nodes, let's scale proportionally:
|
||||
|
||||
```console
|
||||
core@kube-00 ~ $ kubectl scale --replicas=4 rc redis-slave
|
||||
|
||||
scaled
|
||||
core@kube-00 ~ $ kubectl scale --replicas=4 rc frontend
|
||||
scaled
|
||||
```
|
||||
|
||||
Check what you have now:
|
||||
|
||||
```console
|
||||
core@kube-00 ~ $ kubectl get rc
|
||||
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
|
||||
frontend php-redis kubernetes/example-guestbook-php-redis:v2 name=frontend 4
|
||||
redis-master master redis name=redis-master 1
|
||||
redis-slave worker kubernetes/redis-slave:v2 name=redis-slave 4
|
||||
```
|
||||
|
||||
You now will have more instances of front-end Guestbook apps and Redis slaves; and, if you look up all pods labeled `name=frontend`, you should see one running on each node.
|
||||
|
||||
```console
|
||||
core@kube-00 ~/guestbook-example $ kubectl get pods -l name=frontend
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
frontend-0a9xi 1/1 Running 0 22m
|
||||
frontend-4wahe 1/1 Running 0 22m
|
||||
frontend-6l36j 1/1 Running 0 22m
|
||||
frontend-z9oxo 1/1 Running 0 41s
|
||||
```
|
||||
|
||||
## Exposing the app to the outside world
|
||||
|
||||
There is no native Azure load-balancer support in Kubernetes 1.0; however, here is how you can expose the Guestbook app to the Internet.
|
||||
|
||||
```
|
||||
./expose_guestbook_app_port.sh ./output/kube_1c1496016083b4_ssh_conf
|
||||
Guestbook app is on port 31605, will map it to port 80 on kube-00
|
||||
info: Executing command vm endpoint create
|
||||
+ Getting virtual machines
|
||||
+ Reading network configuration
|
||||
+ Updating network configuration
|
||||
info: vm endpoint create command OK
|
||||
info: Executing command vm endpoint show
|
||||
+ Getting virtual machines
|
||||
data: Name : tcp-80-31605
|
||||
data: Local port : 31605
|
||||
data: Protcol : tcp
|
||||
data: Virtual IP Address : 137.117.156.164
|
||||
data: Direct server return : Disabled
|
||||
info: vm endpoint show command OK
|
||||
```
|
||||
|
||||
You should then be able to access it from anywhere via the Azure virtual IP for `kube-00` displayed above, i.e. `http://137.117.156.164/` in my case.
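
You can confirm the endpoint from your workstation with a plain HTTP request; the address below is the virtual IP from this example run, so substitute the one printed for your deployment:

```sh
# Fetch the Guestbook front page through the newly created endpoint
curl http://137.117.156.164/
```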
|
||||
|
||||
## Next steps
|
||||
|
||||
You now have a full-blown cluster running in Azure, congrats!
|
||||
|
||||
You should probably try deploying other [example apps](../../../../examples/) or write your own ;)
|
||||
|
||||
## Tear down...
|
||||
|
||||
If you don't want to worry about the Azure bill, you can tear down the cluster. It's easy to redeploy it, as you can see.
|
||||
|
||||
```sh
|
||||
./destroy-cluster.js ./output/kube_8f984af944f572_deployment.yml
|
||||
```
|
||||
|
||||
> Note: make sure to use the _latest state file_, as after scaling there is a new one.
|
||||
|
||||
By the way, with the scripts shown, you can deploy multiple clusters, if you like :)
|
||||
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/coreos/azure/README/
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
|
@@ -32,203 +32,8 @@ Documentation for other releases can be found at
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
Bare Metal Kubernetes on CoreOS with Calico Networking
|
||||
------------------------------------------
|
||||
This document describes how to deploy Kubernetes with Calico networking on _bare metal_ CoreOS. For more information on Project Calico, visit [projectcalico.org](http://projectcalico.org) and the [calico-containers repository](https://github.com/projectcalico/calico-containers).
|
||||
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/coreos/bare_metal_calico/
|
||||
|
||||
To install Calico on an existing Kubernetes cluster, or for more information on deploying Calico with Kubernetes in a number of other environments take a look at our supported [deployment guides](https://github.com/projectcalico/calico-containers/tree/master/docs/cni/kubernetes).
|
||||
|
||||
Specifically, this guide will have you do the following:
|
||||
- Deploy a Kubernetes master node on CoreOS using cloud-config.
|
||||
- Deploy two Kubernetes compute nodes with Calico Networking using cloud-config.
|
||||
- Configure `kubectl` to access your cluster.
|
||||
|
||||
The resulting cluster will use SSL between Kubernetes components. It will run the SkyDNS service and kube-ui, and be fully conformant with the Kubernetes v1.1 conformance tests.
|
||||
|
||||
## Prerequisites and Assumptions
|
||||
|
||||
- At least three bare-metal machines (or VMs) to work with. This guide will configure them as follows:
|
||||
- 1 Kubernetes Master
|
||||
- 2 Kubernetes Nodes
|
||||
- Your nodes should have IP connectivity to each other and the internet.
|
||||
- This guide assumes a DHCP server on your network to assign server IPs.
|
||||
- This guide uses `192.168.0.0/16` as the subnet from which pod IP addresses are assigned. If this overlaps with your host subnet, you will need to configure Calico to use a different [IP pool](https://github.com/projectcalico/calico-containers/blob/master/docs/calicoctl/pool.md#calicoctl-pool-commands).
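
If you do need a different pool, a minimal sketch using the `calicoctl pool` commands (the same tool used in the NAT section later in this guide) might look like the following; the replacement CIDR is only an example:

```
# Example only: swap the default pool for a CIDR that does not overlap your hosts.
ETCD_AUTHORITY=<master_ip:6666> calicoctl pool remove 192.168.0.0/16
ETCD_AUTHORITY=<master_ip:6666> calicoctl pool add 10.244.0.0/16
```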
|
||||
|
||||
## Cloud-config
|
||||
|
||||
This guide will use [cloud-config](https://coreos.com/docs/cluster-management/setup/cloudinit-cloud-config/) to configure each of the nodes in our Kubernetes cluster.
|
||||
|
||||
We'll use two cloud-config files:
|
||||
- `master-config.yaml`: cloud-config for the Kubernetes master
|
||||
- `node-config.yaml`: cloud-config for each Kubernetes node
|
||||
|
||||
## Download CoreOS
|
||||
|
||||
Download the stable CoreOS bootable ISO from the [CoreOS website](https://coreos.com/docs/running-coreos/platforms/iso/).
|
||||
|
||||
## Configure the Kubernetes Master
|
||||
|
||||
1. Once you've downloaded the ISO image, burn the ISO to a CD/DVD/USB key and boot from it (if using a virtual machine you can boot directly from the ISO). Once booted, you should be automatically logged in as the `core` user at the terminal. At this point CoreOS is running from the ISO and it hasn't been installed yet.
|
||||
|
||||
2. *On another machine*, download the [master cloud-config template](https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/cloud-config/master-config-template.yaml) and save it as `master-config.yaml`.
|
||||
|
||||
3. Replace the following variables in the `master-config.yaml` file.
|
||||
|
||||
- `<SSH_PUBLIC_KEY>`: The public key you will use for SSH access to this server. See [generating ssh keys](https://help.github.com/articles/generating-ssh-keys/)
|
||||
|
||||
4. Copy the edited `master-config.yaml` to your Kubernetes master machine (using a USB stick, for example).
|
||||
|
||||
5. The CoreOS bootable ISO comes with a tool called `coreos-install` which will allow us to install CoreOS and configure the machine using a cloud-config file. The following command will download and install stable CoreOS using the `master-config.yaml` file we just created for configuration. Run this on the Kubernetes master.
|
||||
|
||||
> **Warning:** this is a destructive operation that erases disk `sda` on your server.
|
||||
|
||||
```
|
||||
sudo coreos-install -d /dev/sda -C stable -c master-config.yaml
|
||||
```
|
||||
|
||||
6. Once complete, restart the server and boot from `/dev/sda` (you may need to remove the ISO image). When it comes back up, you should have SSH access as the `core` user using the public key provided in the `master-config.yaml` file.
|
||||
|
||||
### Configure TLS
|
||||
|
||||
The master requires the CA certificate, `ca.pem`; its own certificate, `apiserver.pem` and its private key, `apiserver-key.pem`. This [CoreOS guide](https://coreos.com/kubernetes/docs/latest/openssl.html) explains how to generate these.
|
||||
|
||||
1. Generate the necessary certificates for the master. This [guide for generating Kubernetes TLS Assets](https://coreos.com/kubernetes/docs/latest/openssl.html) explains how to use OpenSSL to generate the required assets.
|
||||
|
||||
2. Send the three files to your master host (using `scp` for example).
|
||||
|
||||
3. Move them to the `/etc/kubernetes/ssl` folder and ensure that only the root user can read the key:
|
||||
|
||||
```
|
||||
# Move keys
|
||||
sudo mkdir -p /etc/kubernetes/ssl/
|
||||
sudo mv -t /etc/kubernetes/ssl/ ca.pem apiserver.pem apiserver-key.pem
|
||||
|
||||
# Set Permissions
|
||||
sudo chmod 600 /etc/kubernetes/ssl/apiserver-key.pem
|
||||
sudo chown root:root /etc/kubernetes/ssl/apiserver-key.pem
|
||||
```
|
||||
|
||||
4. Restart the kubelet to pick up the changes:
|
||||
|
||||
```
|
||||
sudo systemctl restart kubelet
|
||||
```
|
||||
|
||||
## Configure the compute nodes
|
||||
|
||||
The following steps will set up a single Kubernetes node for use as a compute host. Run these steps to deploy each Kubernetes node in your cluster.
|
||||
|
||||
1. Boot up the node machine using the bootable ISO we downloaded earlier. You should be automatically logged in as the `core` user.
|
||||
|
||||
2. Make a copy of the [node cloud-config template](https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/cloud-config/node-config-template.yaml) for this machine.
|
||||
|
||||
3. Replace the following placeholders in the `node-config.yaml` file to match your deployment.
|
||||
|
||||
- `<HOSTNAME>`: Hostname for this node (e.g. kube-node1, kube-node2)
|
||||
- `<SSH_PUBLIC_KEY>`: The public key you will use for SSH access to this server.
|
||||
- `<KUBERNETES_MASTER>`: The IPv4 address of the Kubernetes master.
|
||||
|
||||
4. Replace the following placeholders with the contents of their respective files.
|
||||
|
||||
- `<CA_CERT>`: Complete contents of `ca.pem`
|
||||
- `<CA_KEY_CERT>`: Complete contents of `ca-key.pem`
|
||||
|
||||
> **Important:** in a production deployment, embedding the secret key in cloud-config is a bad idea! In production you should use an appropriate secret manager.
|
||||
|
||||
> **Important:** Make sure you indent the entire file to match the indentation of the placeholder. For example:
|
||||
>
|
||||
> ```
|
||||
> - path: /etc/kubernetes/ssl/ca.pem
|
||||
> owner: core
|
||||
> permissions: 0644
|
||||
> content: |
|
||||
> <CA_CERT>
|
||||
> ```
|
||||
>
|
||||
> should look like this once the certificate is in place:
|
||||
>
|
||||
> ```
|
||||
> - path: /etc/kubernetes/ssl/ca.pem
|
||||
> owner: core
|
||||
> permissions: 0644
|
||||
> content: |
|
||||
> -----BEGIN CERTIFICATE-----
|
||||
> MIIC9zCCAd+gAwIBAgIJAJMnVnhVhy5pMA0GCSqGSIb3DQEBCwUAMBIxEDAOBgNV
|
||||
> ...<snip>...
|
||||
> QHwi1rNc8eBLNrd4BM/A1ZeDVh/Q9KxN+ZG/hHIXhmWKgN5wQx6/81FIFg==
|
||||
> -----END CERTIFICATE-----
|
||||
> ```
|
||||
|
||||
5. Move the modified `node-config.yaml` to your Kubernetes node machine and install and configure CoreOS on the node using the following command.
|
||||
|
||||
> **Warning:** this is a destructive operation that erases disk `sda` on your server.
|
||||
|
||||
```
|
||||
sudo coreos-install -d /dev/sda -C stable -c node-config.yaml
|
||||
```
|
||||
|
||||
6. Once complete, restart the server and boot into `/dev/sda`. When it comes back up, you should have SSH access as the `core` user using the public key provided in the `node-config.yaml` file. It will take some time for the node to be fully configured.
|
||||
|
||||
## Configure Kubeconfig
|
||||
|
||||
To administer your cluster from a separate host, you will need the client and admin certificates generated earlier (`ca.pem`, `admin.pem`, `admin-key.pem`). With certificates in place, run the following commands with the appropriate filepaths.
|
||||
|
||||
```
|
||||
kubectl config set-cluster calico-cluster --server=https://<KUBERNETES_MASTER> --certificate-authority=<CA_CERT_PATH>
|
||||
kubectl config set-credentials calico-admin --client-certificate=<ADMIN_CERT_PATH> --client-key=<ADMIN_KEY_PATH>
|
||||
kubectl config set-context calico --cluster=calico-cluster --user=calico-admin
|
||||
kubectl config use-context calico
|
||||
```
|
||||
|
||||
Check your work with `kubectl get nodes`.
|
||||
|
||||
## Install the DNS Addon
|
||||
|
||||
Most Kubernetes deployments will require the DNS addon for service discovery. To install DNS, create the skydns service and replication controller provided.
|
||||
|
||||
```
|
||||
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/dns/skydns.yaml
|
||||
```
|
||||
|
||||
## Install the Kubernetes UI Addon (Optional)
|
||||
|
||||
The Kubernetes UI can be installed using `kubectl` to run the following manifest file.
|
||||
|
||||
```
|
||||
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/kube-ui/kube-ui.yaml
|
||||
```
|
||||
|
||||
## Launch other Services With Calico-Kubernetes
|
||||
|
||||
At this point, you have a fully functioning cluster running on Kubernetes with a master and two nodes networked with Calico. You can now follow any of the [standard documentation](../../../examples/) to set up other services on your cluster.
|
||||
|
||||
## Connectivity to outside the cluster
|
||||
|
||||
Because containers in this guide have private `192.168.0.0/16` IPs, you will need NAT to allow connectivity between containers and the internet. However, in a production data center deployment, NAT is not always necessary, since Calico can peer with the data center's border routers over BGP.
|
||||
|
||||
### NAT on the nodes
|
||||
|
||||
The simplest method for enabling connectivity from containers to the internet is to use outgoing NAT on your Kubernetes nodes.
|
||||
|
||||
Calico can provide outgoing NAT for containers. To enable it, use the following `calicoctl` command:
|
||||
|
||||
```
|
||||
ETCD_AUTHORITY=<master_ip:6666> calicoctl pool add <CONTAINER_SUBNET> --nat-outgoing
|
||||
```
|
||||
|
||||
By default, `<CONTAINER_SUBNET>` will be `192.168.0.0/16`. You can find out which pools have been configured with the following command:
|
||||
|
||||
```
|
||||
ETCD_AUTHORITY=<master_ip:6666> calicoctl pool show
|
||||
```
|
||||
|
||||
### NAT at the border router
|
||||
|
||||
In a data center environment, it is recommended to configure Calico to peer with the border routers over BGP. This means that the container IPs will be routable anywhere in the data center, and so NAT is not needed on the nodes (though it may be enabled at the data center edge to allow outbound-only internet connectivity).
|
||||
|
||||
The Calico documentation contains more information on how to configure Calico to [peer with existing infrastructure](https://github.com/projectcalico/calico-containers/blob/master/docs/ExternalConnectivity.md).
|
||||
|
||||
[](https://github.com/igrigorik/ga-beacon)
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
[]()
|
||||
|
@@ -31,676 +31,8 @@ Documentation for other releases can be found at
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
Bare Metal CoreOS with Kubernetes (OFFLINE)
|
||||
------------------------------------------
|
||||
Deploy a CoreOS-based Kubernetes environment. This particular guide is made for OFFLINE systems, whether you are testing a POC before the real deployment or your applications are restricted to running fully offline.
|
||||
|
||||
**Table of Contents**
|
||||
|
||||
- [Prerequisites](#prerequisites)
|
||||
- [High Level Design](#high-level-design)
|
||||
- [This Guide's Variables](#this-guides-variables)
|
||||
- [Setup PXELINUX CentOS](#setup-pxelinux-centos)
|
||||
- [Adding CoreOS to PXE](#adding-coreos-to-pxe)
|
||||
- [DHCP configuration](#dhcp-configuration)
|
||||
- [Kubernetes](#kubernetes)
|
||||
- [Cloud Configs](#cloud-configs)
|
||||
- [master.yml](#masteryml)
|
||||
- [node.yml](#nodeyml)
|
||||
- [New pxelinux.cfg file](#new-pxelinuxcfg-file)
|
||||
- [Specify the pxelinux targets](#specify-the-pxelinux-targets)
|
||||
- [Creating test pod](#creating-test-pod)
|
||||
- [Helping commands for debugging](#helping-commands-for-debugging)
|
||||
|
||||
|
||||
## Prerequisites
|
||||
|
||||
1. Installed *CentOS 6* for PXE server
|
||||
2. At least two bare metal nodes to work with
|
||||
|
||||
## High Level Design
|
||||
|
||||
1. Manage the tftp directory
|
||||
* /tftpboot/(coreos)(centos)(RHEL)
|
||||
* /tftpboot/pxelinux.0/(MAC) -> linked to Linux image config file
|
||||
2. Update the pxelinux link for each install
|
||||
3. Update the DHCP config to reflect the host needing deployment
|
||||
4. Set up nodes to deploy CoreOS, creating an etcd cluster.
|
||||
5. Work without access to the public [etcd discovery tool](https://discovery.etcd.io/).
|
||||
6. Install the CoreOS slaves to become Kubernetes nodes.
|
||||
|
||||
## This Guide's Variables
|
||||
|
||||
| Node Description | MAC | IP |
|
||||
| :---------------------------- | :---------------: | :---------: |
|
||||
| CoreOS/etcd/Kubernetes Master | d0:00:67:13:0d:00 | 10.20.30.40 |
|
||||
| CoreOS Slave 1 | d0:00:67:13:0d:01 | 10.20.30.41 |
|
||||
| CoreOS Slave 2 | d0:00:67:13:0d:02 | 10.20.30.42 |
|
||||
|
||||
|
||||
## Setup PXELINUX CentOS
|
||||
|
||||
To set up a CentOS PXELINUX environment there is a complete [guide here](http://docs.fedoraproject.org/en-US/Fedora/7/html/Installation_Guide/ap-pxe-server.html). This section is the abbreviated version.
|
||||
|
||||
1. Install packages needed on CentOS
|
||||
|
||||
sudo yum install tftp-server dhcp syslinux
|
||||
|
||||
2. Edit `/etc/xinetd.d/tftp` to enable the tftp service by changing `disable` to `no`:
|
||||
disable = no
|
||||
|
||||
3. Copy over the syslinux images we will need.
|
||||
|
||||
su -
|
||||
mkdir -p /tftpboot
|
||||
cd /tftpboot
|
||||
cp /usr/share/syslinux/pxelinux.0 /tftpboot
|
||||
cp /usr/share/syslinux/menu.c32 /tftpboot
|
||||
cp /usr/share/syslinux/memdisk /tftpboot
|
||||
cp /usr/share/syslinux/mboot.c32 /tftpboot
|
||||
cp /usr/share/syslinux/chain.c32 /tftpboot
|
||||
|
||||
/sbin/service dhcpd start
|
||||
/sbin/service xinetd start
|
||||
/sbin/chkconfig tftp on
|
||||
|
||||
4. Setup default boot menu
|
||||
|
||||
mkdir /tftpboot/pxelinux.cfg
|
||||
touch /tftpboot/pxelinux.cfg/default
|
||||
|
||||
5. Edit the menu `vi /tftpboot/pxelinux.cfg/default`
|
||||
|
||||
default menu.c32
|
||||
prompt 0
|
||||
timeout 15
|
||||
ONTIMEOUT local
|
||||
display boot.msg
|
||||
|
||||
MENU TITLE Main Menu
|
||||
|
||||
LABEL local
|
||||
MENU LABEL Boot local hard drive
|
||||
LOCALBOOT 0
|
||||
|
||||
Now you should have a working PXELINUX setup to image CoreOS nodes. You can verify the services locally using VirtualBox or with bare metal servers.
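For example, one quick check from another machine on the subnet (assuming the `tftp-hpa` client is installed there) is to fetch the boot loader over TFTP; the address below is this guide's PXE server IP:

    tftp 10.20.30.242 -c get pxelinux.0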
|
||||
|
||||
## Adding CoreOS to PXE
|
||||
|
||||
This section describes how to set up the CoreOS images to live alongside a pre-existing PXELINUX environment.
|
||||
|
||||
1. Find or create the TFTP root directory that everything will be based on.
|
||||
* For this document we will assume `/tftpboot/` is our root directory.
|
||||
2. Once we know our tftp root directory, we create a new directory structure under it for the CoreOS images.
|
||||
3. Download the CoreOS PXE files provided by the CoreOS team.
|
||||
|
||||
MY_TFTPROOT_DIR=/tftpboot
|
||||
mkdir -p $MY_TFTPROOT_DIR/images/coreos/
|
||||
cd $MY_TFTPROOT_DIR/images/coreos/
|
||||
wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe.vmlinuz
|
||||
wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe.vmlinuz.sig
|
||||
wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe_image.cpio.gz
|
||||
wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe_image.cpio.gz.sig
|
||||
gpg --verify coreos_production_pxe.vmlinuz.sig
|
||||
gpg --verify coreos_production_pxe_image.cpio.gz.sig
|
||||
|
||||
4. Edit the menu `vi /tftpboot/pxelinux.cfg/default` again
|
||||
|
||||
default menu.c32
|
||||
prompt 0
|
||||
timeout 300
|
||||
ONTIMEOUT local
|
||||
display boot.msg
|
||||
|
||||
MENU TITLE Main Menu
|
||||
|
||||
LABEL local
|
||||
MENU LABEL Boot local hard drive
|
||||
LOCALBOOT 0
|
||||
|
||||
MENU BEGIN CoreOS Menu
|
||||
|
||||
LABEL coreos-master
|
||||
MENU LABEL CoreOS Master
|
||||
KERNEL images/coreos/coreos_production_pxe.vmlinuz
|
||||
APPEND initrd=images/coreos/coreos_production_pxe_image.cpio.gz cloud-config-url=http://<xxx.xxx.xxx.xxx>/pxe-cloud-config-single-master.yml
|
||||
|
||||
LABEL coreos-slave
|
||||
MENU LABEL CoreOS Slave
|
||||
KERNEL images/coreos/coreos_production_pxe.vmlinuz
|
||||
APPEND initrd=images/coreos/coreos_production_pxe_image.cpio.gz cloud-config-url=http://<xxx.xxx.xxx.xxx>/pxe-cloud-config-slave.yml
|
||||
MENU END
|
||||
|
||||
This configuration file will now boot from the local drive by default, but offers the option to PXE-image CoreOS.
|
||||
|
||||
## DHCP configuration
|
||||
|
||||
This section covers configuring the DHCP server to hand out our new images. In this case we are assuming that there are other servers that will boot alongside other images.
|
||||
|
||||
1. Add the `filename` to the _host_ or _subnet_ sections.
|
||||
|
||||
filename "/tftpboot/pxelinux.0";
|
||||
|
||||
2. Next, add _host_ entries so that each CoreOS machine gets a fixed address and boots `pxelinux.0` from our PXE server.
|
||||
|
||||
subnet 10.20.30.0 netmask 255.255.255.0 {
|
||||
next-server 10.20.30.242;
|
||||
option broadcast-address 10.20.30.255;
|
||||
filename "<other default image>";
|
||||
|
||||
...
|
||||
# http://www.syslinux.org/wiki/index.php/PXELINUX
|
||||
host core_os_master {
|
||||
hardware ethernet d0:00:67:13:0d:00;
|
||||
option routers 10.20.30.1;
|
||||
fixed-address 10.20.30.40;
|
||||
option domain-name-servers 10.20.30.242;
|
||||
filename "/pxelinux.0";
|
||||
}
|
||||
host core_os_slave {
|
||||
hardware ethernet d0:00:67:13:0d:01;
|
||||
option routers 10.20.30.1;
|
||||
fixed-address 10.20.30.41;
|
||||
option domain-name-servers 10.20.30.242;
|
||||
filename "/pxelinux.0";
|
||||
}
|
||||
host core_os_slave2 {
|
||||
hardware ethernet d0:00:67:13:0d:02;
|
||||
option routers 10.20.30.1;
|
||||
fixed-address 10.20.30.42;
|
||||
option domain-name-servers 10.20.30.242;
|
||||
filename "/pxelinux.0";
|
||||
}
|
||||
...
|
||||
}
|
||||
|
||||
We will be specifying the node configuration later in the guide.
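Whenever you change `dhcpd.conf`, remember to restart the DHCP service so the new host entries take effect (CentOS 6 syntax shown):

    sudo /sbin/service dhcpd restart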
|
||||
|
||||
## Kubernetes
|
||||
|
||||
To deploy our configuration we need to create an `etcd` master. To do so we want to PXE-boot CoreOS with a specific cloud-config.yml. We have two options here:
|
||||
1. Template the cloud-config file and programmatically create new static configs for different cluster setups.
|
||||
2. Run a service discovery protocol in our stack to do auto discovery.
|
||||
|
||||
For this demo we just make a single static `etcd` server to host our Kubernetes and `etcd` master services.
|
||||
|
||||
Since we are OFFLINE, most of the helper services CoreOS and Kubernetes normally rely on are unavailable. For our setup we therefore have to download the Kubernetes binaries and serve them from our local environment.
|
||||
|
||||
An easy solution is to host a small web server on the DHCP/TFTP host for all our binaries to make them available to the local CoreOS PXE machines.
|
||||
|
||||
To get this up and running we are going to set up a simple `apache` server to serve the binaries needed to bootstrap Kubernetes.
|
||||
|
||||
This is on the PXE server from the previous section:
|
||||
|
||||
rm /etc/httpd/conf.d/welcome.conf
|
||||
cd /var/www/html/
|
||||
wget -O kube-register https://github.com/kelseyhightower/kube-register/releases/download/v0.0.2/kube-register-0.0.2-linux-amd64
|
||||
wget -O setup-network-environment https://github.com/kelseyhightower/setup-network-environment/releases/download/v1.0.0/setup-network-environment
|
||||
wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kubernetes --no-check-certificate
|
||||
wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kube-apiserver --no-check-certificate
|
||||
wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kube-controller-manager --no-check-certificate
|
||||
wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kube-scheduler --no-check-certificate
|
||||
wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kubectl --no-check-certificate
|
||||
wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kubecfg --no-check-certificate
|
||||
wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kubelet --no-check-certificate
|
||||
wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kube-proxy --no-check-certificate
|
||||
wget -O flanneld https://storage.googleapis.com/k8s/flanneld --no-check-certificate
|
||||
|
||||
This stages the binaries we need to run Kubernetes. It would need to be enhanced to download updates from the Internet in the future.
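The commands above assume Apache (`httpd`) is already installed and running on the PXE server. If it is not, a minimal setup on CentOS 6 might look like the following (this assumes you have the install media or a local package mirror available, since we are offline):

    sudo yum install -y httpd
    sudo /sbin/service httpd start
    sudo /sbin/chkconfig httpd on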
|
||||
|
||||
Now for the good stuff!
|
||||
|
||||
## Cloud Configs
|
||||
|
||||
The following config files are tailored for the OFFLINE version of a Kubernetes deployment.
|
||||
|
||||
These are based on the work found here: [master.yml](cloud-configs/master.yaml), [node.yml](cloud-configs/node.yaml)
|
||||
|
||||
To make the setup work, you need to replace a few placeholders:
|
||||
|
||||
- Replace `<PXE_SERVER_IP>` with your PXE server ip address (e.g. 10.20.30.242)
|
||||
- Replace `<MASTER_SERVER_IP>` with the Kubernetes master ip address (e.g. 10.20.30.40)
|
||||
- If you run a private docker registry, replace `rdocker.example.com` with your docker registry dns name.
|
||||
- If you use a proxy, replace `rproxy.example.com` with your proxy server (and port)
|
||||
- Add your own SSH public key(s) to the cloud config at the end
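Once the cloud-config files from the next two sections exist, a couple of `sed` one-liners can make these substitutions (the addresses shown are this guide's example values; adjust them for your environment):

    sed -i 's/<PXE_SERVER_IP>/10.20.30.242/g' /var/www/html/coreos/pxe-cloud-config-*.yml
    sed -i 's/<MASTER_SERVER_IP>/10.20.30.40/g' /var/www/html/coreos/pxe-cloud-config-*.yml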
|
||||
|
||||
### master.yml
|
||||
|
||||
On the PXE server, create the file and fill in the variables: `vi /var/www/html/coreos/pxe-cloud-config-master.yml`.
|
||||
|
||||
|
||||
#cloud-config
|
||||
---
|
||||
write_files:
|
||||
- path: /opt/bin/waiter.sh
|
||||
owner: root
|
||||
content: |
|
||||
#! /usr/bin/bash
|
||||
until curl http://127.0.0.1:4001/v2/machines; do sleep 2; done
|
||||
- path: /opt/bin/kubernetes-download.sh
|
||||
owner: root
|
||||
permissions: 0755
|
||||
content: |
|
||||
#! /usr/bin/bash
|
||||
/usr/bin/wget -N -P "/opt/bin" "http://<PXE_SERVER_IP>/kubectl"
|
||||
/usr/bin/wget -N -P "/opt/bin" "http://<PXE_SERVER_IP>/kubernetes"
|
||||
/usr/bin/wget -N -P "/opt/bin" "http://<PXE_SERVER_IP>/kubecfg"
|
||||
chmod +x /opt/bin/*
|
||||
- path: /etc/profile.d/opt-path.sh
|
||||
owner: root
|
||||
permissions: 0755
|
||||
content: |
|
||||
#! /usr/bin/bash
|
||||
PATH=$PATH:/opt/bin
|
||||
coreos:
|
||||
units:
|
||||
- name: 10-eno1.network
|
||||
runtime: true
|
||||
content: |
|
||||
[Match]
|
||||
Name=eno1
|
||||
[Network]
|
||||
DHCP=yes
|
||||
- name: 20-nodhcp.network
|
||||
runtime: true
|
||||
content: |
|
||||
[Match]
|
||||
Name=en*
|
||||
[Network]
|
||||
DHCP=none
|
||||
- name: get-kube-tools.service
|
||||
runtime: true
|
||||
command: start
|
||||
content: |
|
||||
[Service]
|
||||
ExecStartPre=-/usr/bin/mkdir -p /opt/bin
|
||||
ExecStart=/opt/bin/kubernetes-download.sh
|
||||
RemainAfterExit=yes
|
||||
Type=oneshot
|
||||
- name: setup-network-environment.service
|
||||
command: start
|
||||
content: |
|
||||
[Unit]
|
||||
Description=Setup Network Environment
|
||||
Documentation=https://github.com/kelseyhightower/setup-network-environment
|
||||
Requires=network-online.target
|
||||
After=network-online.target
|
||||
[Service]
|
||||
ExecStartPre=-/usr/bin/mkdir -p /opt/bin
|
||||
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/setup-network-environment
|
||||
ExecStartPre=/usr/bin/chmod +x /opt/bin/setup-network-environment
|
||||
ExecStart=/opt/bin/setup-network-environment
|
||||
RemainAfterExit=yes
|
||||
Type=oneshot
|
||||
- name: etcd.service
|
||||
command: start
|
||||
content: |
|
||||
[Unit]
|
||||
Description=etcd
|
||||
Requires=setup-network-environment.service
|
||||
After=setup-network-environment.service
|
||||
[Service]
|
||||
EnvironmentFile=/etc/network-environment
|
||||
User=etcd
|
||||
PermissionsStartOnly=true
|
||||
ExecStart=/usr/bin/etcd \
|
||||
--name ${DEFAULT_IPV4} \
|
||||
--addr ${DEFAULT_IPV4}:4001 \
|
||||
--bind-addr 0.0.0.0 \
|
||||
--cluster-active-size 1 \
|
||||
--data-dir /var/lib/etcd \
|
||||
--http-read-timeout 86400 \
|
||||
--peer-addr ${DEFAULT_IPV4}:7001 \
|
||||
--snapshot true
|
||||
Restart=always
|
||||
RestartSec=10s
|
||||
- name: fleet.socket
|
||||
command: start
|
||||
content: |
|
||||
[Socket]
|
||||
ListenStream=/var/run/fleet.sock
|
||||
- name: fleet.service
|
||||
command: start
|
||||
content: |
|
||||
[Unit]
|
||||
Description=fleet daemon
|
||||
Wants=etcd.service
|
||||
After=etcd.service
|
||||
Wants=fleet.socket
|
||||
After=fleet.socket
|
||||
[Service]
|
||||
Environment="FLEET_ETCD_SERVERS=http://127.0.0.1:4001"
|
||||
Environment="FLEET_METADATA=role=master"
|
||||
ExecStart=/usr/bin/fleetd
|
||||
Restart=always
|
||||
RestartSec=10s
|
||||
- name: etcd-waiter.service
|
||||
command: start
|
||||
content: |
|
||||
[Unit]
|
||||
Description=etcd waiter
|
||||
Wants=network-online.target
|
||||
Wants=etcd.service
|
||||
After=etcd.service
|
||||
After=network-online.target
|
||||
Before=flannel.service
|
||||
Before=setup-network-environment.service
|
||||
[Service]
|
||||
ExecStartPre=/usr/bin/chmod +x /opt/bin/waiter.sh
|
||||
ExecStart=/usr/bin/bash /opt/bin/waiter.sh
|
||||
RemainAfterExit=true
|
||||
Type=oneshot
|
||||
- name: flannel.service
|
||||
command: start
|
||||
content: |
|
||||
[Unit]
|
||||
Wants=etcd-waiter.service
|
||||
After=etcd-waiter.service
|
||||
Requires=etcd.service
|
||||
After=etcd.service
|
||||
After=network-online.target
|
||||
Wants=network-online.target
|
||||
Description=flannel is an etcd backed overlay network for containers
|
||||
[Service]
|
||||
Type=notify
|
||||
ExecStartPre=-/usr/bin/mkdir -p /opt/bin
|
||||
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/flanneld
|
||||
ExecStartPre=/usr/bin/chmod +x /opt/bin/flanneld
|
||||
ExecStartPre=-/usr/bin/etcdctl mk /coreos.com/network/config '{"Network":"10.100.0.0/16", "Backend": {"Type": "vxlan"}}'
|
||||
ExecStart=/opt/bin/flanneld
|
||||
- name: kube-apiserver.service
|
||||
command: start
|
||||
content: |
|
||||
[Unit]
|
||||
Description=Kubernetes API Server
|
||||
Documentation=https://github.com/kubernetes/kubernetes
|
||||
Requires=etcd.service
|
||||
After=etcd.service
|
||||
[Service]
|
||||
ExecStartPre=-/usr/bin/mkdir -p /opt/bin
|
||||
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/kube-apiserver
|
||||
ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-apiserver
|
||||
ExecStart=/opt/bin/kube-apiserver \
|
||||
--address=0.0.0.0 \
|
||||
--port=8080 \
|
||||
--service-cluster-ip-range=10.100.0.0/16 \
|
||||
--etcd-servers=http://127.0.0.1:4001 \
|
||||
--logtostderr=true
|
||||
Restart=always
|
||||
RestartSec=10
|
||||
- name: kube-controller-manager.service
|
||||
command: start
|
||||
content: |
|
||||
[Unit]
|
||||
Description=Kubernetes Controller Manager
|
||||
Documentation=https://github.com/kubernetes/kubernetes
|
||||
Requires=kube-apiserver.service
|
||||
After=kube-apiserver.service
|
||||
[Service]
|
||||
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/kube-controller-manager
|
||||
ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-controller-manager
|
||||
ExecStart=/opt/bin/kube-controller-manager \
|
||||
--master=127.0.0.1:8080 \
|
||||
--logtostderr=true
|
||||
Restart=always
|
||||
RestartSec=10
|
||||
- name: kube-scheduler.service
|
||||
command: start
|
||||
content: |
|
||||
[Unit]
|
||||
Description=Kubernetes Scheduler
|
||||
Documentation=https://github.com/kubernetes/kubernetes
|
||||
Requires=kube-apiserver.service
|
||||
After=kube-apiserver.service
|
||||
[Service]
|
||||
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/kube-scheduler
|
||||
ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-scheduler
|
||||
ExecStart=/opt/bin/kube-scheduler --master=127.0.0.1:8080
|
||||
Restart=always
|
||||
RestartSec=10
|
||||
- name: kube-register.service
|
||||
command: start
|
||||
content: |
|
||||
[Unit]
|
||||
Description=Kubernetes Registration Service
|
||||
Documentation=https://github.com/kelseyhightower/kube-register
|
||||
Requires=kube-apiserver.service
|
||||
After=kube-apiserver.service
|
||||
Requires=fleet.service
|
||||
After=fleet.service
|
||||
[Service]
|
||||
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/kube-register
|
||||
ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-register
|
||||
ExecStart=/opt/bin/kube-register \
|
||||
--metadata=role=node \
|
||||
--fleet-endpoint=unix:///var/run/fleet.sock \
|
||||
--healthz-port=10248 \
|
||||
--api-endpoint=http://127.0.0.1:8080
|
||||
Restart=always
|
||||
RestartSec=10
|
||||
update:
|
||||
group: stable
|
||||
reboot-strategy: off
|
||||
ssh_authorized_keys:
|
||||
- ssh-rsa AAAAB3NzaC1yc2EAAAAD...
|
||||
|
||||
|
||||
### node.yml
|
||||
|
||||
On the PXE server, create the file and fill in the variables: `vi /var/www/html/coreos/pxe-cloud-config-slave.yml`.
|
||||
|
||||
#cloud-config
|
||||
---
|
||||
write_files:
|
||||
- path: /etc/default/docker
|
||||
content: |
|
||||
DOCKER_EXTRA_OPTS='--insecure-registry="rdocker.example.com:5000"'
|
||||
coreos:
|
||||
units:
|
||||
- name: 10-eno1.network
|
||||
runtime: true
|
||||
content: |
|
||||
[Match]
|
||||
Name=eno1
|
||||
[Network]
|
||||
DHCP=yes
|
||||
- name: 20-nodhcp.network
|
||||
runtime: true
|
||||
content: |
|
||||
[Match]
|
||||
Name=en*
|
||||
[Network]
|
||||
DHCP=none
|
||||
- name: etcd.service
|
||||
mask: true
|
||||
- name: docker.service
|
||||
drop-ins:
|
||||
- name: 50-insecure-registry.conf
|
||||
content: |
|
||||
[Service]
|
||||
Environment="HTTP_PROXY=http://rproxy.example.com:3128/" "NO_PROXY=localhost,127.0.0.0/8,rdocker.example.com"
|
||||
- name: fleet.service
|
||||
command: start
|
||||
content: |
|
||||
[Unit]
|
||||
Description=fleet daemon
|
||||
Wants=fleet.socket
|
||||
After=fleet.socket
|
||||
[Service]
|
||||
Environment="FLEET_ETCD_SERVERS=http://<MASTER_SERVER_IP>:4001"
|
||||
Environment="FLEET_METADATA=role=node"
|
||||
ExecStart=/usr/bin/fleetd
|
||||
Restart=always
|
||||
RestartSec=10s
|
||||
- name: flannel.service
|
||||
command: start
|
||||
content: |
|
||||
[Unit]
|
||||
After=network-online.target
|
||||
Wants=network-online.target
|
||||
Description=flannel is an etcd backed overlay network for containers
|
||||
[Service]
|
||||
Type=notify
|
||||
ExecStartPre=-/usr/bin/mkdir -p /opt/bin
|
||||
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/flanneld
|
||||
ExecStartPre=/usr/bin/chmod +x /opt/bin/flanneld
|
||||
ExecStart=/opt/bin/flanneld -etcd-endpoints http://<MASTER_SERVER_IP>:4001
|
||||
- name: docker.service
|
||||
command: start
|
||||
content: |
|
||||
[Unit]
|
||||
After=flannel.service
|
||||
Wants=flannel.service
|
||||
Description=Docker Application Container Engine
|
||||
Documentation=http://docs.docker.io
|
||||
[Service]
|
||||
EnvironmentFile=-/etc/default/docker
|
||||
EnvironmentFile=/run/flannel/subnet.env
|
||||
ExecStartPre=/bin/mount --make-rprivate /
|
||||
ExecStart=/usr/bin/docker daemon --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU} -s=overlay -H fd:// ${DOCKER_EXTRA_OPTS}
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
- name: setup-network-environment.service
|
||||
command: start
|
||||
content: |
|
||||
[Unit]
|
||||
Description=Setup Network Environment
|
||||
Documentation=https://github.com/kelseyhightower/setup-network-environment
|
||||
Requires=network-online.target
|
||||
After=network-online.target
|
||||
[Service]
|
||||
ExecStartPre=-/usr/bin/mkdir -p /opt/bin
|
||||
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/setup-network-environment
|
||||
ExecStartPre=/usr/bin/chmod +x /opt/bin/setup-network-environment
|
||||
ExecStart=/opt/bin/setup-network-environment
|
||||
RemainAfterExit=yes
|
||||
Type=oneshot
|
||||
- name: kube-proxy.service
|
||||
command: start
|
||||
content: |
|
||||
[Unit]
|
||||
Description=Kubernetes Proxy
|
||||
Documentation=https://github.com/kubernetes/kubernetes
|
||||
Requires=setup-network-environment.service
|
||||
After=setup-network-environment.service
|
||||
[Service]
|
||||
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/kube-proxy
|
||||
ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-proxy
|
||||
ExecStart=/opt/bin/kube-proxy \
|
||||
--etcd-servers=http://<MASTER_SERVER_IP>:4001 \
|
||||
--logtostderr=true
|
||||
Restart=always
|
||||
RestartSec=10
|
||||
- name: kube-kubelet.service
|
||||
command: start
|
||||
content: |
|
||||
[Unit]
|
||||
Description=Kubernetes Kubelet
|
||||
Documentation=https://github.com/kubernetes/kubernetes
|
||||
Requires=setup-network-environment.service
|
||||
After=setup-network-environment.service
|
||||
[Service]
|
||||
EnvironmentFile=/etc/network-environment
|
||||
ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/kubelet
|
||||
ExecStartPre=/usr/bin/chmod +x /opt/bin/kubelet
|
||||
ExecStart=/opt/bin/kubelet \
|
||||
--address=0.0.0.0 \
|
||||
--port=10250 \
|
||||
--hostname-override=${DEFAULT_IPV4} \
|
||||
--api-servers=<MASTER_SERVER_IP>:8080 \
|
||||
--healthz-bind-address=0.0.0.0 \
|
||||
--healthz-port=10248 \
|
||||
--logtostderr=true
|
||||
Restart=always
|
||||
RestartSec=10
|
||||
update:
|
||||
group: stable
|
||||
reboot-strategy: off
|
||||
ssh_authorized_keys:
|
||||
- ssh-rsa AAAAB3NzaC1yc2EAAAAD...
|
||||
|
||||
|
||||
## New pxelinux.cfg file
|
||||
|
||||
Create a pxelinux target file for a _slave_ node: `vi /tftpboot/pxelinux.cfg/coreos-node-slave`
|
||||
|
||||
default coreos
|
||||
prompt 1
|
||||
timeout 15
|
||||
|
||||
display boot.msg
|
||||
|
||||
label coreos
|
||||
menu default
|
||||
kernel images/coreos/coreos_production_pxe.vmlinuz
|
||||
append initrd=images/coreos/coreos_production_pxe_image.cpio.gz cloud-config-url=http://<pxe-host-ip>/coreos/pxe-cloud-config-slave.yml console=tty0 console=ttyS0 coreos.autologin=tty1 coreos.autologin=ttyS0
|
||||
|
||||
And one for the _master_ node: `vi /tftpboot/pxelinux.cfg/coreos-node-master`
|
||||
|
||||
default coreos
|
||||
prompt 1
|
||||
timeout 15
|
||||
|
||||
display boot.msg
|
||||
|
||||
label coreos
|
||||
menu default
|
||||
kernel images/coreos/coreos_production_pxe.vmlinuz
|
||||
append initrd=images/coreos/coreos_production_pxe_image.cpio.gz cloud-config-url=http://<pxe-host-ip>/coreos/pxe-cloud-config-master.yml console=tty0 console=ttyS0 coreos.autologin=tty1 coreos.autologin=ttyS0
|
||||
|
||||
## Specify the pxelinux targets
|
||||
|
||||
Now that we have our new targets set up for master and slave, we want to map specific hosts to those targets. We do this with the pxelinux mechanism of pointing a specific MAC address to a specific pxelinux.cfg file.
|
||||
|
||||
Refer to the MAC address table at the beginning of this guide. More detailed documentation can be found [here](http://www.syslinux.org/wiki/index.php/PXELINUX).
|
||||
|
||||
cd /tftpboot/pxelinux.cfg
|
||||
ln -s coreos-node-master 01-d0-00-67-13-0d-00
|
||||
ln -s coreos-node-slave 01-d0-00-67-13-0d-01
|
||||
ln -s coreos-node-slave 01-d0-00-67-13-0d-02
|
||||
|
||||
|
||||
Reboot these servers to get the images PXEd and ready for running containers!
|
||||
|
||||
## Creating test pod
|
||||
|
||||
Now that CoreOS with Kubernetes is up and running, let's spin up some Kubernetes pods to demonstrate the system.
|
||||
|
||||
See [a simple nginx example](../../../docs/user-guide/simple-nginx.md) to try out your new cluster.
|
||||
|
||||
For more complete applications, please look in the [examples directory](../../../examples/).
|
||||
|
||||
## Helping commands for debugging
|
||||
|
||||
List all keys in etcd:
|
||||
|
||||
etcdctl ls --recursive
|
||||
|
||||
List fleet machines:
|
||||
|
||||
fleetctl list-machines
|
||||
|
||||
Check system status of services on master:
|
||||
|
||||
systemctl status kube-apiserver
|
||||
systemctl status kube-controller-manager
|
||||
systemctl status kube-scheduler
|
||||
systemctl status kube-register
|
||||
|
||||
Check system status of services on a node:
|
||||
|
||||
systemctl status kube-kubelet
|
||||
systemctl status docker.service
|
||||
|
||||
List Kubernetes pods and nodes:
|
||||
|
||||
kubectl get pods
|
||||
kubectl get nodes
|
||||
|
||||
|
||||
Kill all pods:
|
||||
|
||||
for i in `kubectl get pods | awk 'NR>1 {print $1}'`; do kubectl delete pod $i; done
|
||||
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/coreos/bare_metal_offline/
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
|
@@ -32,203 +32,8 @@ Documentation for other releases can be found at
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
# CoreOS Multinode Cluster
|
||||
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/coreos/coreos_multinode_cluster/
|
||||
|
||||
Use the [master.yaml](cloud-configs/master.yaml) and [node.yaml](cloud-configs/node.yaml) cloud-configs to provision a multi-node Kubernetes cluster.
|
||||
|
||||
> **Attention**: This requires at least CoreOS version **[695.0.0][coreos695]**, which includes `etcd2`.
|
||||
|
||||
[coreos695]: https://coreos.com/releases/#695.0.0
|
||||
|
||||
## Overview
|
||||
|
||||
* Provision the master node
|
||||
* Capture the master node private IP address
|
||||
* Edit node.yaml
|
||||
* Provision one or more worker nodes
|
||||
|
||||
### AWS
|
||||
|
||||
*Attention:* Replace `<ami_image_id>` below with a [suitable version of CoreOS image for AWS](https://coreos.com/docs/running-coreos/cloud-providers/ec2/).
|
||||
|
||||
#### Provision the Master
|
||||
|
||||
```sh
|
||||
aws ec2 create-security-group --group-name kubernetes --description "Kubernetes Security Group"
|
||||
aws ec2 authorize-security-group-ingress --group-name kubernetes --protocol tcp --port 22 --cidr 0.0.0.0/0
|
||||
aws ec2 authorize-security-group-ingress --group-name kubernetes --protocol tcp --port 80 --cidr 0.0.0.0/0
|
||||
aws ec2 authorize-security-group-ingress --group-name kubernetes --source-security-group-name kubernetes
|
||||
```
|
||||
|
||||
```sh
|
||||
aws ec2 run-instances \
|
||||
--image-id <ami_image_id> \
|
||||
--key-name <keypair> \
|
||||
--region us-west-2 \
|
||||
--security-groups kubernetes \
|
||||
--instance-type m3.medium \
|
||||
--user-data file://master.yaml
|
||||
```
|
||||
|
||||
#### Capture the private IP address
|
||||
|
||||
```sh
|
||||
aws ec2 describe-instances --instance-ids <master-instance-id>
|
||||
```
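If you want just the private IP, the AWS CLI's `--query` filter can extract it directly, for example:

```sh
aws ec2 describe-instances \
  --instance-ids <master-instance-id> \
  --query 'Reservations[].Instances[].PrivateIpAddress' \
  --output text
```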
|
||||
|
||||
#### Edit node.yaml
|
||||
|
||||
Edit `node.yaml` and replace all instances of `<master-private-ip>` with the private IP address of the master node.
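For example, if the private IP reported above were 172.31.10.5 (an illustrative value), a single `sed` (GNU syntax shown) would do it:

```sh
sed -i 's/<master-private-ip>/172.31.10.5/g' node.yaml
```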
|
||||
|
||||
#### Provision worker nodes
|
||||
|
||||
```sh
|
||||
aws ec2 run-instances \
|
||||
--count 1 \
|
||||
--image-id <ami_image_id> \
|
||||
--key-name <keypair> \
|
||||
--region us-west-2 \
|
||||
--security-groups kubernetes \
|
||||
--instance-type m3.medium \
|
||||
--user-data file://node.yaml
|
||||
```
|
||||
|
||||
### Google Compute Engine (GCE)
|
||||
|
||||
*Attention:* Replace `<gce_image_id>` below with a [suitable version of CoreOS image for Google Compute Engine](https://coreos.com/docs/running-coreos/cloud-providers/google-compute-engine/).
|
||||
|
||||
#### Provision the Master
|
||||
|
||||
```sh
|
||||
gcloud compute instances create master \
|
||||
--image-project coreos-cloud \
|
||||
--image <gce_image_id> \
|
||||
--boot-disk-size 200GB \
|
||||
--machine-type n1-standard-1 \
|
||||
--zone us-central1-a \
|
||||
--metadata-from-file user-data=master.yaml
|
||||
```
|
||||
|
||||
#### Capture the private IP address
|
||||
|
||||
```sh
|
||||
gcloud compute instances list
|
||||
```
|
||||
|
||||
#### Edit node.yaml
|
||||
|
||||
Edit `node.yaml` and replace all instances of `<master-private-ip>` with the private IP address of the master node.
|
||||
|
||||
#### Provision worker nodes
|
||||
|
||||
```sh
|
||||
gcloud compute instances create node1 \
|
||||
--image-project coreos-cloud \
|
||||
--image <gce_image_id> \
|
||||
--boot-disk-size 200GB \
|
||||
--machine-type n1-standard-1 \
|
||||
--zone us-central1-a \
|
||||
--metadata-from-file user-data=node.yaml
|
||||
```
|
||||
|
||||
#### Establish network connectivity
|
||||
|
||||
Next, set up an SSH tunnel to the master so you can run kubectl from your local host.
|
||||
In one terminal, run `gcloud compute ssh master --ssh-flag="-L 8080:127.0.0.1:8080"` and in a second
|
||||
run `gcloud compute ssh master --ssh-flag="-R 8080:127.0.0.1:8080"`.
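With the tunnel up, you can point `kubectl` at the forwarded port from your local machine, for example:

```sh
kubectl -s http://127.0.0.1:8080 get nodes
```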
|
||||
|
||||
### OpenStack
|
||||
|
||||
These instructions are for running on the command line. Most of this you can also do through the Horizon dashboard.
|
||||
These instructions were tested on the Icehouse release of a Metacloud distribution of OpenStack, but should be similar if not the same across other versions/distributions of OpenStack.
|
||||
|
||||
#### Make sure you can connect with OpenStack
|
||||
|
||||
Make sure the environment variables are set for OpenStack such as:
|
||||
|
||||
```sh
|
||||
OS_TENANT_ID
|
||||
OS_PASSWORD
|
||||
OS_AUTH_URL
|
||||
OS_USERNAME
|
||||
OS_TENANT_NAME
|
||||
```
|
||||
|
||||
Test that this works with something like:
|
||||
|
||||
```
|
||||
nova list
|
||||
```
|
||||
|
||||
#### Get a Suitable CoreOS Image
|
||||
|
||||
You'll need a [suitable version of the CoreOS image for OpenStack](https://coreos.com/os/docs/latest/booting-on-openstack.html).
|
||||
Once you have downloaded it, upload it to Glance. An example is shown below:
|
||||
|
||||
```sh
|
||||
glance image-create --name CoreOS723 \
|
||||
--container-format bare --disk-format qcow2 \
|
||||
--file coreos_production_openstack_image.img \
|
||||
--is-public True
|
||||
```
|
||||
|
||||
#### Create security group
|
||||
|
||||
```sh
|
||||
nova secgroup-create kubernetes "Kubernetes Security Group"
|
||||
nova secgroup-add-rule kubernetes tcp 22 22 0.0.0.0/0
|
||||
nova secgroup-add-rule kubernetes tcp 80 80 0.0.0.0/0
|
||||
```
|
||||
|
||||
#### Provision the Master
|
||||
|
||||
```sh
|
||||
nova boot \
|
||||
--image <image_name> \
|
||||
--key-name <my_key> \
|
||||
--flavor <flavor id> \
|
||||
--security-group kubernetes \
|
||||
--user-data files/master.yaml \
|
||||
kube-master
|
||||
```
|
||||
|
||||
```<image_name>``` is the CoreOS image name. In our example we can use the image we created in the previous step, 'CoreOS723'.
|
||||
|
||||
```<my_key>``` is the keypair name that you already generated to access the instance.
|
||||
|
||||
```<flavor_id>``` is the flavor ID you use to size the instance. Run ```nova flavor-list``` to get the IDs. On the system this was tested with, ID 3 gives the m1.large size.
|
||||
|
||||
The important part is to ensure you have `files/master.yaml`, as this is what does all the post-boot configuration. The path is relative, so this example assumes you run the `nova` command from a directory that has a subdirectory called `files` containing the `master.yaml` file. Absolute paths also work.
|
||||
|
||||
Next, assign it a public IP address:
|
||||
|
||||
```
|
||||
nova floating-ip-list
|
||||
```
|
||||
|
||||
Get an IP address that's free and run:
|
||||
|
||||
```
|
||||
nova floating-ip-associate kube-master <ip address>
|
||||
```
|
||||
|
||||
where ```<ip address>``` is the IP address that was available from the ```nova floating-ip-list``` command.
|
||||
|
||||
#### Provision Worker Nodes
|
||||
|
||||
Edit ```node.yaml``` and replace all instances of ```<master-private-ip>``` with the private IP address of the master node. You can get this by running ```nova show kube-master```, assuming you named your instance kube-master. This is not the floating IP address you just assigned.
|
||||
|
||||
```sh
|
||||
nova boot \
|
||||
--image <image_name> \
|
||||
--key-name <my_key> \
|
||||
--flavor <flavor id> \
|
||||
--security-group kubernetes \
|
||||
--user-data files/node.yaml \
|
||||
minion01
|
||||
```
|
||||
|
||||
This is basically the same as the master node, but with the node.yaml post-boot script instead of the master's.
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
[]()
|
||||
|
@@ -32,153 +32,7 @@ Documentation for other releases can be found at
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
Getting started with Kubernetes on DCOS
|
||||
----------------------------------------
|
||||
|
||||
This guide will walk you through installing [Kubernetes-Mesos](https://github.com/mesosphere/kubernetes-mesos) on [Datacenter Operating System (DCOS)](https://mesosphere.com/product/) with the [DCOS CLI](https://github.com/mesosphere/dcos-cli) and operating Kubernetes with the [DCOS Kubectl plugin](https://github.com/mesosphere/dcos-kubectl).
|
||||
|
||||
**Table of Contents**
|
||||
<!-- BEGIN MUNGE: GENERATED_TOC -->
|
||||
|
||||
- [About Kubernetes on DCOS](#about-kubernetes-on-dcos)
|
||||
- [Resources](#resources)
|
||||
- [Prerequisites](#prerequisites)
|
||||
- [Install](#install)
|
||||
- [Uninstall](#uninstall)
|
||||
|
||||
<!-- END MUNGE: GENERATED_TOC -->
|
||||
|
||||
|
||||
## About Kubernetes on DCOS
|
||||
|
||||
DCOS is system software that manages computer cluster hardware and software resources and provides common services for distributed applications. Among other services, it provides [Apache Mesos](http://mesos.apache.org/) as its cluster kernel and [Marathon](https://mesosphere.github.io/marathon/) as its init system. With DCOS CLI, Mesos frameworks like [Kubernetes-Mesos](https://github.com/mesosphere/kubernetes-mesos) can be installed with a single command.
|
||||
|
||||
Another feature of the DCOS CLI is that it allows plugins like the [DCOS Kubectl plugin](https://github.com/mesosphere/dcos-kubectl). This allows for easy access to a version-compatible kubectl without having to manually download or install it.
|
||||
|
||||
Further information about the benefits of installing Kubernetes on DCOS can be found in the [Kubernetes-Mesos documentation](../../contrib/mesos/README.md).
|
||||
|
||||
For more details about the Kubernetes DCOS packaging, see the [Kubernetes-Mesos project](https://github.com/mesosphere/kubernetes-mesos).
|
||||
|
||||
Since Kubernetes-Mesos is still alpha, it is a good idea to familiarize yourself with the [current known issues](../../contrib/mesos/docs/issues.md) which may limit or modify the behavior of Kubernetes on DCOS.
|
||||
|
||||
If you have problems completing the steps below, please [file an issue against the kubernetes-mesos project](https://github.com/mesosphere/kubernetes-mesos/issues).
|
||||
|
||||
|
||||
## Resources
|
||||
|
||||
Explore the following resources for more information about Kubernetes, Kubernetes on Mesos/DCOS, and DCOS itself.
|
||||
|
||||
- [DCOS Documentation](https://docs.mesosphere.com/)
|
||||
- [Managing DCOS Services](https://docs.mesosphere.com/services/kubernetes/)
|
||||
- [Kubernetes Examples](../../examples/README.md)
|
||||
- [Kubernetes on Mesos Documentation](../../contrib/mesos/README.md)
|
||||
- [Kubernetes on Mesos Release Notes](https://github.com/mesosphere/kubernetes-mesos/releases)
|
||||
- [Kubernetes on DCOS Package Source](https://github.com/mesosphere/kubernetes-mesos)
|
||||
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- A running [DCOS cluster](https://mesosphere.com/product/)
|
||||
- [DCOS Community Edition](https://docs.mesosphere.com/install/) is currently available on [AWS](https://mesosphere.com/amazon/).
|
||||
- [DCOS Enterprise Edition](https://mesosphere.com/product/) can be deployed on virtual or bare metal machines. Contact sales@mesosphere.com for more info and to set up an engagement.
|
||||
- [DCOS CLI](https://docs.mesosphere.com/install/cli/) installed locally
|
||||
|
||||
|
||||
## Install
|
||||
|
||||
1. Configure and validate the [Mesosphere Multiverse](https://github.com/mesosphere/multiverse) as a package source repository
|
||||
|
||||
```
|
||||
$ dcos config prepend package.sources https://github.com/mesosphere/multiverse/archive/version-1.x.zip
|
||||
$ dcos package update --validate
|
||||
```
|
||||
|
||||
2. Install etcd
|
||||
|
||||
By default, the Kubernetes DCOS package starts a single-node etcd. In order to avoid state loss in the event of Kubernetes component container failure, install an HA [etcd-mesos](https://github.com/mesosphere/etcd-mesos) cluster on DCOS.
|
||||
|
||||
```
|
||||
$ dcos package install etcd
|
||||
```
|
||||
|
||||
3. Verify that etcd is installed and healthy
|
||||
|
||||
The etcd cluster takes a short while to deploy. Verify that `/etcd` is healthy before going on to the next step.
|
||||
|
||||
```
|
||||
$ dcos marathon app list
|
||||
ID MEM CPUS TASKS HEALTH DEPLOYMENT CONTAINER CMD
|
||||
/etcd 128 0.2 1/1 1/1 --- DOCKER None
|
||||
```
|
||||
|
||||
4. Create Kubernetes installation configuration
|
||||
|
||||
Configure Kubernetes to use the HA etcd installed on DCOS.
|
||||
|
||||
```
|
||||
$ cat >/tmp/options.json <<EOF
|
||||
{
|
||||
"kubernetes": {
|
||||
"etcd-mesos-framework-name": "etcd"
|
||||
}
|
||||
}
|
||||
EOF
|
||||
```
|
||||
|
||||
5. Install Kubernetes
|
||||
|
||||
```
|
||||
$ dcos package install --options=/tmp/options.json kubernetes
|
||||
```
|
||||
|
||||
6. Verify that Kubernetes is installed and healthy
|
||||
|
||||
The Kubernetes cluster takes a short while to deploy. Verify that `/kubernetes` is healthy before going on to the next step.
|
||||
|
||||
```
|
||||
$ dcos marathon app list
|
||||
ID MEM CPUS TASKS HEALTH DEPLOYMENT CONTAINER CMD
|
||||
/etcd 128 0.2 1/1 1/1 --- DOCKER None
|
||||
/kubernetes 768 1 1/1 1/1 --- DOCKER None
|
||||
```
|
||||
|
||||
7. Verify that Kube-DNS & Kube-UI are deployed, running, and ready
|
||||
|
||||
```
|
||||
$ dcos kubectl get pods --namespace=kube-system
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
kube-dns-v8-tjxk9 4/4 Running 0 1m
|
||||
kube-ui-v2-tjq7b 1/1 Running 0 1m
|
||||
```
|
||||
|
||||
Names and ages may vary.
|
||||
|
||||
|
||||
Now that Kubernetes is installed on DCOS, you may wish to explore the [Kubernetes Examples](../../examples/README.md) or the [Kubernetes User Guide](../user-guide/README.md).
|
||||
|
||||
|
||||
## Uninstall
|
||||
|
||||
1. Stop and delete all replication controllers and pods in each namespace:
|
||||
|
||||
Before uninstalling Kubernetes, destroy all the pods and replication controllers. The uninstall process will try to do this itself, but by default it times out quickly and may leave your cluster in a dirty state.
|
||||
|
||||
```
|
||||
$ dcos kubectl delete rc,pods --all --namespace=default
|
||||
$ dcos kubectl delete rc,pods --all --namespace=kube-system
|
||||
```
|
||||
|
||||
2. Validate that all pods have been deleted
|
||||
|
||||
```
|
||||
$ dcos kubectl get pods --all-namespaces
|
||||
```
|
||||
|
||||
3. Uninstall Kubernetes
|
||||
|
||||
```
|
||||
$ dcos package uninstall kubernetes
|
||||
```
|
||||
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/dcos/
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
|
@@ -31,100 +31,8 @@ Documentation for other releases can be found at
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
Running Multi-Node Kubernetes Using Docker
|
||||
------------------------------------------
|
||||
|
||||
_Note_:
|
||||
These instructions are significantly more advanced than the [single node](docker.md) instructions. If you are
|
||||
interested in just starting to explore Kubernetes, we recommend that you start there.
|
||||
|
||||
**Table of Contents**
|
||||
|
||||
- [Prerequisites](#prerequisites)
|
||||
- [Overview](#overview)
|
||||
- [Bootstrap Docker](#bootstrap-docker)
|
||||
- [Master Node](#master-node)
|
||||
- [Adding a worker node](#adding-a-worker-node)
|
||||
- [Deploy a DNS](#deploy-a-dns)
|
||||
- [Testing your cluster](#testing-your-cluster)
|
||||
|
||||
## Prerequisites
|
||||
|
||||
The only thing you need is a machine with **Docker 1.7.1 or higher**.
|
||||
|
||||
## Overview
|
||||
|
||||
This guide will set up a 2-node Kubernetes cluster, consisting of a _master_ node which hosts the API server and orchestrates work
|
||||
and a _worker_ node which receives work from the master. You can repeat the process of adding worker nodes an arbitrary number of
|
||||
times to create larger clusters.
|
||||
|
||||
Here's a diagram of what the final result will look like:
|
||||

|
||||
|
||||
### Bootstrap Docker
|
||||
|
||||
This guide also uses a pattern of running two instances of the Docker daemon:
|
||||
1) A _bootstrap_ Docker instance which is used to start system daemons like `flanneld` and `etcd`
|
||||
2) A _main_ Docker instance which is used for the Kubernetes infrastructure and user's scheduled containers
|
||||
|
||||
This pattern is necessary because the `flannel` daemon is responsible for setting up and managing the network that interconnects
|
||||
all of the Docker containers created by Kubernetes. To achieve this, it must run outside of the _main_ Docker daemon. However,
|
||||
it is still useful to use containers for deployment and management, so we create a simpler _bootstrap_ daemon to achieve this.
|
||||
|
||||
You can specify the version on every node before install:
|
||||
|
||||
```sh
|
||||
export K8S_VERSION=<your_k8s_version (e.g. 1.2.0-alpha.7)>
|
||||
export ETCD_VERSION=<your_etcd_version (e.g. 2.2.1)>
|
||||
export FLANNEL_VERSION=<your_flannel_version (e.g. 0.5.5)>
|
||||
export FLANNEL_IFACE=<flannel_interface (defaults to eth0)>
|
||||
export FLANNEL_IPMASQ=<flannel_ipmasq_flag (defaults to true)>
|
||||
```
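For example, a concrete set of values might look like the following (the versions are illustrative; pick ones that actually exist for your setup):

```sh
export K8S_VERSION=1.2.0-alpha.7
export ETCD_VERSION=2.2.1
export FLANNEL_VERSION=0.5.5
export FLANNEL_IFACE=eth0
export FLANNEL_IPMASQ=true
```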
|
||||
|
||||
Otherwise, we'll use the latest `hyperkube` image as the default k8s version.
|
||||
|
||||
## Master Node
|
||||
|
||||
The first step in the process is to initialize the master node.
|
||||
|
||||
The MASTER_IP step here is optional; it defaults to the first value of `hostname -I`.
|
||||
Clone the Kubernetes repo, and run [master.sh](docker-multinode/master.sh) on the master machine _with root_:
|
||||
|
||||
```console
|
||||
$ export MASTER_IP=<your_master_ip (e.g. 1.2.3.4)>
|
||||
$ cd kubernetes/docs/getting-started-guides/docker-multinode/
|
||||
$ ./master.sh
|
||||
```
|
||||
|
||||
`Master done!`
|
||||
|
||||
See [here](docker-multinode/master.md) for a detailed explanation of the instructions.
|
||||
|
||||
## Adding a worker node
|
||||
|
||||
Once your master is up and running you can add one or more workers on different machines.
|
||||
|
||||
Clone the Kubernetes repo, and run [worker.sh](docker-multinode/worker.sh) on the worker machine _with root_:
|
||||
|
||||
```console
|
||||
$ export MASTER_IP=<your_master_ip (e.g. 1.2.3.4)>
|
||||
$ cd kubernetes/docs/getting-started-guides/docker-multinode/
|
||||
$ ./worker.sh
|
||||
```
|
||||
|
||||
`Worker done!`
|
||||
|
||||
See [here](docker-multinode/worker.md) for a detailed explanation of the instructions.
|
||||
|
||||
## Deploy a DNS
|
||||
|
||||
See [here](docker-multinode/deployDNS.md) for instructions.
|
||||
|
||||
## Testing your cluster
|
||||
|
||||
Once your cluster has been created you can [test it out](docker-multinode/testing.md).
|
||||
|
||||
For more complete applications, please look in the [examples directory](../../examples/).
|
||||
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/docker-multinode/
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
|
@@ -32,41 +32,8 @@ Documentation for other releases can be found at
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
## Deploy DNS on `docker` and `docker-multinode`
|
||||
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/docker-multinode/deployDNS/
|
||||
|
||||
### Get the template file
|
||||
|
||||
First, download the DNS template:
|
||||
|
||||
[skydns template](skydns.yaml.in)
|
||||
|
||||
### Set environment variables
|
||||
|
||||
Then you need to set the `DNS_REPLICAS`, `DNS_DOMAIN` and `DNS_SERVER_IP` environment variables:
|
||||
|
||||
```console
|
||||
$ export DNS_REPLICAS=1
|
||||
|
||||
$ export DNS_DOMAIN=cluster.local # specify in startup parameter `--cluster-domain` for containerized kubelet
|
||||
|
||||
$ export DNS_SERVER_IP=10.0.0.10 # specify in startup parameter `--cluster-dns` for containerized kubelet
|
||||
```
|
||||
|
||||
### Replace the corresponding value in the template and create the pod
|
||||
|
||||
```console
|
||||
$ sed -e "s/{{ pillar\['dns_replicas'\] }}/${DNS_REPLICAS}/g;s/{{ pillar\['dns_domain'\] }}/${DNS_DOMAIN}/g;s/{{ pillar\['dns_server'\] }}/${DNS_SERVER_IP}/g" skydns.yaml.in > ./skydns.yaml
|
||||
|
||||
# If the kube-system namespace isn't already created, create it
|
||||
$ kubectl get ns
|
||||
$ kubectl create -f ./kube-system.yaml
|
||||
|
||||
$ kubectl create -f ./skydns.yaml
|
||||
```
|
||||
|
||||
### Test if DNS works
|
||||
|
||||
Follow [this link](../../../cluster/addons/dns/#how-do-i-test-if-it-is-working) to check it out.
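In short, the linked check boils down to starting a throwaway pod and resolving a service name from inside it. A minimal sketch (assuming the stock `busybox` image can be pulled on your nodes):

```console
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox
    command:
      - sleep
      - "3600"
EOF

$ kubectl exec busybox -- nslookup kubernetes.default
```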
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
[]()
|
||||
|
@@ -32,247 +32,7 @@ Documentation for other releases can be found at
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
## Installing a Kubernetes Master Node via Docker
|
||||
|
||||
We'll begin by setting up the master node. For the purposes of illustration, we'll assume that the IP of this machine
|
||||
is `${MASTER_IP}`. We'll need to run several versioned Kubernetes components, so we'll assume that the version we want
|
||||
to run is `${K8S_VERSION}`, which should hold a released version of Kubernetes >= "1.2.0-alpha.7"
|
||||
|
||||
Environment variables used:
|
||||
|
||||
```sh
|
||||
export MASTER_IP=<the_master_ip_here>
|
||||
export K8S_VERSION=<your_k8s_version (e.g. 1.2.0-alpha.7)>
|
||||
export ETCD_VERSION=<your_etcd_version (e.g. 2.2.1)>
|
||||
export FLANNEL_VERSION=<your_flannel_version (e.g. 0.5.5)>
|
||||
export FLANNEL_IFACE=<flannel_interface (defaults to eth0)>
|
||||
export FLANNEL_IPMASQ=<flannel_ipmasq_flag (defaults to true)>
|
||||
```
|
||||
|
||||
There are two main phases to installing the master:
|
||||
* [Setting up `flanneld` and `etcd`](#setting-up-flanneld-and-etcd)
|
||||
* [Starting the Kubernetes master components](#starting-the-kubernetes-master)
|
||||
|
||||
|
||||
## Setting up flanneld and etcd
|
||||
|
||||
_Note_:
|
||||
This guide expects **Docker 1.7.1 or higher**.
|
||||
|
||||
### Setup Docker Bootstrap
|
||||
|
||||
We're going to use `flannel` to set up networking between Docker daemons. Flannel itself (and etcd on which it relies) will run inside of
|
||||
Docker containers themselves. To achieve this, we need a separate "bootstrap" instance of the Docker daemon. This daemon will be started with
|
||||
`--iptables=false` so that it can only run containers with `--net=host`. That's sufficient to bootstrap our system.
|
||||
|
||||
Run:
|
||||
|
||||
```sh
|
||||
sudo sh -c 'docker -d -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &'
|
||||
```
|
||||
|
||||
_If you have Docker 1.8.0 or higher, run this instead:_
|
||||
|
||||
```sh
|
||||
sudo sh -c 'docker daemon -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &'
|
||||
```
|
||||
|
||||
_Important Note_:
|
||||
If you are running this on a long running system, rather than experimenting, you should run the bootstrap Docker instance under something like SysV init, upstart or systemd so that it is restarted
|
||||
across reboots and failures.
|
||||
|
||||
|
||||
### Startup etcd for flannel and the API server to use
|
||||
|
||||
Run:
|
||||
|
||||
```sh
|
||||
sudo docker -H unix:///var/run/docker-bootstrap.sock run -d \
|
||||
--net=host \
|
||||
gcr.io/google_containers/etcd-amd64:${ETCD_VERSION} \
|
||||
/usr/local/bin/etcd \
|
||||
--listen-client-urls=http://127.0.0.1:4001,http://${MASTER_IP}:4001 \
|
||||
--advertise-client-urls=http://${MASTER_IP}:4001 \
|
||||
--data-dir=/var/etcd/data
|
||||
```
|
||||
|
||||
Next, you need to set a CIDR range for flannel. This CIDR should be chosen to be non-overlapping with any existing network you are using:
|
||||
|
||||
```sh
|
||||
sudo docker -H unix:///var/run/docker-bootstrap.sock run \
|
||||
--net=host \
|
||||
gcr.io/google_containers/etcd-amd64:${ETCD_VERSION} \
|
||||
etcdctl set /coreos.com/network/config '{ "Network": "10.1.0.0/16" }'
|
||||
```
|
||||
|
||||
|
||||
### Set up Flannel on the master node
|
||||
|
||||
Flannel is a network abstraction layer built by CoreOS; we will use it to provide simplified networking between our Pods of containers.
|
||||
|
||||
Flannel re-configures the bridge that Docker uses for networking. As a result we need to stop Docker, reconfigure its networking, and then restart Docker.
|
||||
|
||||
#### Bring down Docker
|
||||
|
||||
To re-configure Docker to use flannel, we need to take Docker down, run flannel, and then restart Docker.
|
||||
|
||||
Shutting down Docker is system dependent; it may be:
|
||||
|
||||
```sh
|
||||
sudo /etc/init.d/docker stop
|
||||
```
|
||||
|
||||
or
|
||||
|
||||
```sh
|
||||
sudo systemctl stop docker
|
||||
```
|
||||
|
||||
or
|
||||
|
||||
```sh
|
||||
sudo service docker stop
|
||||
```
|
||||
|
||||
or it may be something else.
|
||||
|
||||
#### Run flannel
|
||||
|
||||
Now run flanneld itself:
|
||||
|
||||
```sh
|
||||
sudo docker -H unix:///var/run/docker-bootstrap.sock run -d \
|
||||
--net=host \
|
||||
--privileged \
|
||||
-v /dev/net:/dev/net \
|
||||
quay.io/coreos/flannel:${FLANNEL_VERSION} \
|
||||
--ip-masq=${FLANNEL_IPMASQ} \
|
||||
--iface=${FLANNEL_IFACE}
|
||||
```
|
||||
|
||||
The previous command should have printed a really long hash, the container ID. Copy this hash.
|
||||
|
||||
Now get the subnet settings from flannel:
|
||||
|
||||
```sh
|
||||
sudo docker -H unix:///var/run/docker-bootstrap.sock exec <really-long-hash-from-above-here> cat /run/flannel/subnet.env
|
||||
```
|
||||
|
||||
#### Edit the docker configuration
|
||||
|
||||
You now need to edit the docker configuration to activate new flags. Again, this is system specific.
|
||||
|
||||
This may be in `/etc/default/docker` or `/etc/systemd/service/docker.service` or it may be elsewhere.
|
||||
|
||||
Regardless, you need to add the following to the docker command line:
|
||||
|
||||
```sh
|
||||
--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
|
||||
```
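On a system that uses `/etc/default/docker`, for example, this might mean adding a line like the following (the subnet and MTU values are illustrative; use the ones your `subnet.env` reported):

```sh
# /etc/default/docker
DOCKER_OPTS="--bip=10.1.42.1/24 --mtu=1450"
```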
|
||||
|
||||
#### Remove the existing Docker bridge
|
||||
|
||||
Docker creates a bridge named `docker0` by default. You need to remove this:
|
||||
|
||||
```sh
|
||||
sudo /sbin/ifconfig docker0 down
|
||||
sudo brctl delbr docker0
|
||||
```
|
||||
|
||||
You may need to install the `bridge-utils` package for the `brctl` binary.
|
||||
|
||||
#### Restart Docker
|
||||
|
||||
Again, this is system dependent; it may be:
|
||||
|
||||
```sh
|
||||
sudo /etc/init.d/docker start
|
||||
```
|
||||
|
||||
or it may be:
|
||||
|
||||
```sh
|
||||
systemctl start docker
|
||||
```
|
||||
|
||||
## Starting the Kubernetes Master
|
||||
|
||||
OK, now that your networking is set up, you can start up Kubernetes. This is the same as the single-node case; we will use the "main" instance of the Docker daemon for the Kubernetes components.
|
||||
|
||||
```sh
|
||||
sudo docker run \
|
||||
--volume=/:/rootfs:ro \
|
||||
--volume=/sys:/sys:ro \
|
||||
--volume=/var/lib/docker/:/var/lib/docker:rw \
|
||||
--volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
|
||||
--volume=/var/run:/var/run:rw \
|
||||
--net=host \
|
||||
--privileged=true \
|
||||
--pid=host \
|
||||
-d \
|
||||
gcr.io/google_containers/hyperkube-amd64:v${K8S_VERSION} \
|
||||
/hyperkube kubelet \
|
||||
--allow-privileged=true \
|
||||
--api-servers=http://localhost:8080 \
|
||||
--v=2 \
|
||||
--address=0.0.0.0 \
|
||||
--enable-server \
|
||||
--hostname-override=127.0.0.1 \
|
||||
--config=/etc/kubernetes/manifests-multi \
|
||||
--containerized \
|
||||
--cluster-dns=10.0.0.10 \
|
||||
--cluster-domain=cluster.local
|
||||
```
|
||||
|
||||
> Note that `--cluster-dns` and `--cluster-domain` are used to deploy DNS; feel free to omit them if DNS is not needed.
|
||||
|
||||
### Test it out
|
||||
|
||||
At this point, you should have a functioning 1-node cluster. Let's test it out!
|
||||
|
||||
Download the kubectl binary for `${K8S_VERSION}` (look at the URL in the following links) and make it available by editing your PATH environment variable.
|
||||
([OS X/amd64](http://storage.googleapis.com/kubernetes-release/release/v1.2.0-alpha.7/bin/darwin/amd64/kubectl))
|
||||
([OS X/386](http://storage.googleapis.com/kubernetes-release/release/v1.2.0-alpha.7/bin/darwin/386/kubectl))
|
||||
([linux/amd64](http://storage.googleapis.com/kubernetes-release/release/v1.2.0-alpha.7/bin/linux/amd64/kubectl))
|
||||
([linux/386](http://storage.googleapis.com/kubernetes-release/release/v1.2.0-alpha.7/bin/linux/386/kubectl))
|
||||
([linux/arm](http://storage.googleapis.com/kubernetes-release/release/v1.2.0-alpha.7/bin/linux/arm/kubectl))
|
||||
|
||||
For example, OS X:
|
||||
|
||||
```console
|
||||
$ wget http://storage.googleapis.com/kubernetes-release/release/v${K8S_VERSION}/bin/darwin/amd64/kubectl
|
||||
$ chmod 755 kubectl
|
||||
$ PATH=$PATH:`pwd`
|
||||
```
|
||||
|
||||
Linux:
|
||||
|
||||
```console
|
||||
$ wget http://storage.googleapis.com/kubernetes-release/release/v${K8S_VERSION}/bin/linux/amd64/kubectl
|
||||
$ chmod 755 kubectl
|
||||
$ PATH=$PATH:`pwd`
|
||||
```
|
||||
|
||||
Now you can list the nodes:
|
||||
|
||||
```sh
|
||||
kubectl get nodes
|
||||
```
|
||||
|
||||
This should print something like:
|
||||
|
||||
```console
|
||||
NAME LABELS STATUS
|
||||
127.0.0.1 kubernetes.io/hostname=127.0.0.1 Ready
|
||||
```
|
||||
|
||||
If the status of the node is `NotReady` or `Unknown`, please check that all of the containers you created are running successfully.
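
A quick way to check is to list the containers on both Docker daemons, reusing the bootstrap socket from earlier:

```sh
# Containers run by the main daemon (the kubelet and the components it starts):
sudo docker ps
# Containers run by the bootstrap daemon (etcd and flannel):
sudo docker -H unix:///var/run/docker-bootstrap.sock ps
```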
|
||||
If all else fails, ask questions on [Slack](../../troubleshooting.md#slack).
|
||||
|
||||
|
||||
### Next steps
|
||||
|
||||
Move on to [adding one or more workers](worker.md) or [deploying DNS](deployDNS.md).
|
||||
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/docker-multinode/master/
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
|
@@ -32,73 +32,7 @@ Documentation for other releases can be found at
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
## Testing your Kubernetes cluster.
|
||||
|
||||
To validate that your node(s) have been added, run:
|
||||
|
||||
```sh
|
||||
kubectl get nodes
|
||||
```
|
||||
|
||||
That should show something like:
|
||||
|
||||
```console
|
||||
NAME LABELS STATUS
|
||||
10.240.99.26 kubernetes.io/hostname=10.240.99.26 Ready
|
||||
127.0.0.1 kubernetes.io/hostname=127.0.0.1 Ready
|
||||
```
|
||||
|
||||
If the status of any node is `Unknown` or `NotReady`, your cluster is broken; double check that all containers are running properly, and if all else fails, contact us on [Slack](../../troubleshooting.md#slack).
|
||||
|
||||
### Run an application
|
||||
|
||||
```sh
|
||||
kubectl -s http://localhost:8080 run nginx --image=nginx --port=80
|
||||
```
|
||||
|
||||
Now run `docker ps`; you should see nginx running. You may need to wait a few minutes for the image to be pulled.
|
||||
|
||||
### Expose it as a service
|
||||
|
||||
```sh
|
||||
kubectl expose rc nginx --port=80
|
||||
```
|
||||
|
||||
Run the following command to obtain the IP of the service we just created. There are two IPs: the first one is internal (CLUSTER_IP), and the second one is the external load-balanced IP.
|
||||
|
||||
```sh
|
||||
kubectl get svc nginx
|
||||
```
|
||||
|
||||
Alternatively, you can obtain only the first IP (CLUSTER_IP) by running:
|
||||
|
||||
```sh
|
||||
kubectl get svc nginx --template={{.spec.clusterIP}}
|
||||
```
|
||||
|
||||
Hit the webserver with the first IP (CLUSTER_IP):
|
||||
|
||||
```sh
|
||||
curl <insert-cluster-ip-here>
|
||||
```
|
||||
|
||||
Note that you will need to run this curl command on your boot2docker VM if you are running on OS X.
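
If you use docker-machine, one hedged way to do that is to run the curl inside the VM; the machine name `default` here is an assumption:

```sh
docker-machine ssh default curl <insert-cluster-ip-here>
```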
|
||||
|
||||
### Scaling
|
||||
|
||||
Now try to scale up the nginx you created before:
|
||||
|
||||
```sh
|
||||
kubectl scale rc nginx --replicas=3
|
||||
```
|
||||
|
||||
And list the pods:
|
||||
|
||||
```sh
|
||||
kubectl get pods
|
||||
```
|
||||
|
||||
You should see pods landing on the newly added machine.
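
As an optional check, `-o wide` adds a node column so you can see where each pod was scheduled:

```sh
kubectl get pods -o wide
```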
|
||||
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/docker-multinode/testing/
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
|
@@ -32,184 +32,7 @@ Documentation for other releases can be found at
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
## Adding a Kubernetes worker node via Docker.
|
||||
|
||||
|
||||
These instructions are very similar to the master set-up above, but they are duplicated for clarity.
|
||||
You need to repeat these instructions for each node you want to join the cluster.
|
||||
We will assume that you have the IP address of the master in `${MASTER_IP}` that you created in the [master instructions](master.md). We'll need to run several versioned Kubernetes components, so we'll assume that the version we want
|
||||
to run is `${K8S_VERSION}`, which should hold a released version of Kubernetes >= "1.2.0-alpha.6"
|
||||
|
||||
Environment variables used:
|
||||
|
||||
```sh
|
||||
export MASTER_IP=<the_master_ip_here>
|
||||
export K8S_VERSION=<your_k8s_version (e.g. 1.2.0-alpha.6)>
|
||||
export FLANNEL_VERSION=<your_flannel_version (e.g. 0.5.5)>
|
||||
export FLANNEL_IFACE=<flannel_interface (defaults to eth0)>
|
||||
export FLANNEL_IPMASQ=<flannel_ipmasq_flag (defaults to true)>
|
||||
```
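
For example, a filled-in set of values might look like this (purely illustrative; use your own master IP and versions):

```sh
export MASTER_IP=192.168.1.10
export K8S_VERSION=1.2.0-alpha.6
export FLANNEL_VERSION=0.5.5
export FLANNEL_IFACE=eth0
export FLANNEL_IPMASQ=true
```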
|
||||
|
||||
For each worker node, there are three steps:
|
||||
* [Set up `flanneld` on the worker node](#set-up-flanneld-on-the-worker-node)
|
||||
* [Start Kubernetes on the worker node](#start-kubernetes-on-the-worker-node)
|
||||
* [Add the worker to the cluster](#add-the-node-to-the-cluster)
|
||||
|
||||
### Set up Flanneld on the worker node
|
||||
|
||||
As before, the Flannel daemon is going to provide network connectivity.
|
||||
|
||||
_Note_:
|
||||
This guide expects **Docker 1.7.1 or higher**.
|
||||
|
||||
|
||||
#### Set up a bootstrap docker
|
||||
|
||||
As before, we need a second instance of the Docker daemon running to bootstrap the flannel networking.
|
||||
|
||||
Run:
|
||||
|
||||
```sh
|
||||
sudo sh -c 'docker -d -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &'
|
||||
```
|
||||
|
||||
_If you have Docker 1.8.0 or higher run this instead_
|
||||
|
||||
```sh
|
||||
sudo sh -c 'docker daemon -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &'
|
||||
```
|
||||
|
||||
_Important Note_:
|
||||
If you are running this on a long running system, rather than experimenting, you should run the bootstrap Docker instance under something like SysV init, upstart or systemd so that it is restarted
|
||||
across reboots and failures.
|
||||
|
||||
#### Bring down Docker
|
||||
|
||||
To re-configure Docker to use flannel, we need to take docker down, run flannel and then restart Docker.
|
||||
|
||||
Stopping Docker is system dependent; it may be:
|
||||
|
||||
```sh
|
||||
sudo /etc/init.d/docker stop
|
||||
```
|
||||
|
||||
or
|
||||
|
||||
```sh
|
||||
sudo systemctl stop docker
|
||||
```
|
||||
|
||||
or it may be something else.
|
||||
|
||||
#### Run flannel
|
||||
|
||||
Now run flanneld itself. This call is slightly different from the one above, since we point it at the etcd instance on the master.
|
||||
|
||||
```sh
|
||||
sudo docker -H unix:///var/run/docker-bootstrap.sock run -d \
|
||||
--net=host \
|
||||
--privileged \
|
||||
-v /dev/net:/dev/net \
|
||||
quay.io/coreos/flannel:${FLANNEL_VERSION} \
|
||||
/opt/bin/flanneld \
|
||||
--ip-masq=${FLANNEL_IPMASQ} \
|
||||
--etcd-endpoints=http://${MASTER_IP}:4001 \
|
||||
--iface=${FLANNEL_IFACE}
|
||||
```
|
||||
|
||||
The previous command should have printed a really long hash, which is the container ID; copy this hash.
|
||||
|
||||
Now get the subnet settings from flannel:
|
||||
|
||||
```sh
|
||||
sudo docker -H unix:///var/run/docker-bootstrap.sock exec <really-long-hash-from-above-here> cat /run/flannel/subnet.env
|
||||
```
|
||||
|
||||
|
||||
#### Edit the docker configuration
|
||||
|
||||
You now need to edit the docker configuration to activate new flags. Again, this is system specific.
|
||||
|
||||
This may be in `/etc/default/docker` or `/etc/systemd/service/docker.service` or it may be elsewhere.
|
||||
|
||||
Regardless, you need to add the following to the docker command line:
|
||||
|
||||
```sh
|
||||
--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
|
||||
```
|
||||
|
||||
#### Remove the existing Docker bridge
|
||||
|
||||
Docker creates a bridge named `docker0` by default. You need to remove this:
|
||||
|
||||
```sh
|
||||
sudo /sbin/ifconfig docker0 down
|
||||
sudo brctl delbr docker0
|
||||
```
|
||||
|
||||
You may need to install the `bridge-utils` package for the `brctl` binary.
|
||||
|
||||
#### Restart Docker
|
||||
|
||||
Again, this is system dependent; it may be:
|
||||
|
||||
```sh
|
||||
sudo /etc/init.d/docker start
|
||||
```
|
||||
|
||||
or it may be:
|
||||
|
||||
```sh
|
||||
systemctl start docker
|
||||
```
|
||||
|
||||
### Start Kubernetes on the worker node
|
||||
|
||||
#### Run the kubelet
|
||||
|
||||
Again, this is similar to the above, but `--api-servers` now points to the master we set up at the beginning.
|
||||
|
||||
```sh
|
||||
sudo docker run \
|
||||
--volume=/:/rootfs:ro \
|
||||
--volume=/sys:/sys:ro \
|
||||
--volume=/dev:/dev \
|
||||
--volume=/var/lib/docker/:/var/lib/docker:rw \
|
||||
--volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
|
||||
--volume=/var/run:/var/run:rw \
|
||||
--net=host \
|
||||
--privileged=true \
|
||||
--pid=host \
|
||||
-d \
|
||||
gcr.io/google_containers/hyperkube-amd64:v${K8S_VERSION} \
|
||||
/hyperkube kubelet \
|
||||
--allow-privileged=true \
|
||||
--api-servers=http://${MASTER_IP}:8080 \
|
||||
--v=2 \
|
||||
--address=0.0.0.0 \
|
||||
--enable-server \
|
||||
--containerized \
|
||||
--cluster-dns=10.0.0.10 \
|
||||
--cluster-domain=cluster.local
|
||||
```
|
||||
|
||||
#### Run the service proxy
|
||||
|
||||
The service proxy provides load-balancing between groups of containers defined by Kubernetes `Services`.
|
||||
|
||||
```sh
|
||||
sudo docker run -d \
|
||||
--net=host \
|
||||
--privileged \
|
||||
gcr.io/google_containers/hyperkube-amd64:v${K8S_VERSION} \
|
||||
/hyperkube proxy \
|
||||
--master=http://${MASTER_IP}:8080 \
|
||||
--v=2
|
||||
```
|
||||
|
||||
### Next steps
|
||||
|
||||
Move on to [testing your cluster](testing.md) or [adding another node](#adding-a-kubernetes-worker-node-via-docker).
|
||||
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/docker-multinode/worker/
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
|
@@ -31,198 +31,9 @@ Documentation for other releases can be found at
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
Running Kubernetes locally via Docker
|
||||
-------------------------------------
|
||||
|
||||
**Table of Contents**
|
||||
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/docker/
|
||||
|
||||
- [Overview](#overview)
|
||||
- [Prerequisites](#prerequisites)
|
||||
- [Run it](#run-it)
|
||||
- [Download kubectl](#download-kubectl)
|
||||
- [Test it out](#test-it-out)
|
||||
- [Run an application](#run-an-application)
|
||||
- [Expose it as a service](#expose-it-as-a-service)
|
||||
- [Deploy a DNS](#deploy-a-dns)
|
||||
- [A note on turning down your cluster](#a-note-on-turning-down-your-cluster)
|
||||
- [Troubleshooting](#troubleshooting)
|
||||
|
||||
### Overview
|
||||
|
||||
The following instructions show you how to set up a simple, single node Kubernetes cluster using Docker.
|
||||
|
||||
Here's a diagram of what the final result will look like:
|
||||

|
||||
|
||||
### Prerequisites
|
||||
|
||||
1. You need to have docker installed on one machine.
|
||||
2. Decide what Kubernetes version to use. Set the `${K8S_VERSION}` variable to
|
||||
a released version of Kubernetes >= "1.2.0-alpha.7"
|
||||
|
||||
### Run it
|
||||
|
||||
```sh
|
||||
docker run \
|
||||
--volume=/:/rootfs:ro \
|
||||
--volume=/sys:/sys:ro \
|
||||
--volume=/var/lib/docker/:/var/lib/docker:rw \
|
||||
--volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
|
||||
--volume=/var/run:/var/run:rw \
|
||||
--net=host \
|
||||
--pid=host \
|
||||
--privileged=true \
|
||||
-d \
|
||||
gcr.io/google_containers/hyperkube-amd64:v${K8S_VERSION} \
|
||||
/hyperkube kubelet \
|
||||
--containerized \
|
||||
--hostname-override="127.0.0.1" \
|
||||
--address="0.0.0.0" \
|
||||
--api-servers=http://localhost:8080 \
|
||||
--config=/etc/kubernetes/manifests \
|
||||
--cluster-dns=10.0.0.10 \
|
||||
--cluster-domain=cluster.local \
|
||||
--allow-privileged=true --v=2
|
||||
```
|
||||
|
||||
> Note that `--cluster-dns` and `--cluster-domain` are used to deploy DNS; feel free to omit them if DNS is not needed.
|
||||
|
||||
> If you would like to mount an external device as a volume, add `--volume=/dev:/dev` to the command above. It may, however, cause some problems described in [#18230](https://github.com/kubernetes/kubernetes/issues/18230).
|
||||
|
||||
This actually runs the kubelet, which in turn runs a [pod](../user-guide/pods.md) that contains the other master components.
|
||||
|
||||
### Download `kubectl`
|
||||
|
||||
At this point you should have a running Kubernetes cluster. You can test this
|
||||
by downloading the kubectl binary for `${K8S_VERSION}` (look at the URL in the
|
||||
following links) and make it available by editing your PATH environment
|
||||
variable.
|
||||
([OS X/amd64](http://storage.googleapis.com/kubernetes-release/release/v1.2.0-alpha.7/bin/darwin/amd64/kubectl))
|
||||
([OS X/386](http://storage.googleapis.com/kubernetes-release/release/v1.2.0-alpha.7/bin/darwin/386/kubectl))
|
||||
([linux/amd64](http://storage.googleapis.com/kubernetes-release/release/v1.2.0-alpha.7/bin/linux/amd64/kubectl))
|
||||
([linux/386](http://storage.googleapis.com/kubernetes-release/release/v1.2.0-alpha.7/bin/linux/386/kubectl))
|
||||
([linux/arm](http://storage.googleapis.com/kubernetes-release/release/v1.2.0-alpha.7/bin/linux/arm/kubectl))
|
||||
|
||||
For example, OS X:
|
||||
|
||||
```console
|
||||
$ wget http://storage.googleapis.com/kubernetes-release/release/v${K8S_VERSION}/bin/darwin/amd64/kubectl
|
||||
$ chmod 755 kubectl
|
||||
$ PATH=$PATH:`pwd`
|
||||
```
|
||||
|
||||
Linux:
|
||||
|
||||
```console
|
||||
$ wget http://storage.googleapis.com/kubernetes-release/release/v${K8S_VERSION}/bin/linux/amd64/kubectl
|
||||
$ chmod 755 kubectl
|
||||
$ PATH=$PATH:`pwd`
|
||||
```
|
||||
|
||||
Create configuration:
|
||||
|
||||
```
|
||||
$ kubectl config set-cluster test-doc --server=http://localhost:8080
|
||||
$ kubectl config set-context test-doc --cluster=test-doc
|
||||
$ kubectl config use-context test-doc
|
||||
```
|
||||
|
||||
For Mac OS X users, instead of `localhost` you will have to use the IP address of your docker machine, which you can find by running `docker-machine env <machinename>` (see the [documentation](https://docs.docker.com/machine/reference/env/) for details).
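
For example, a sketch of pointing the cluster entry at a docker-machine VM (the machine name is a placeholder you must substitute):

```
$ kubectl config set-cluster test-doc --server=http://$(docker-machine ip <machinename>):8080
$ kubectl config set-context test-doc --cluster=test-doc
$ kubectl config use-context test-doc
```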
|
||||
|
||||
### Test it out
|
||||
|
||||
List the nodes in your cluster by running:
|
||||
|
||||
```sh
|
||||
kubectl get nodes
|
||||
```
|
||||
|
||||
This should print:
|
||||
|
||||
```console
|
||||
NAME LABELS STATUS
|
||||
127.0.0.1 kubernetes.io/hostname=127.0.0.1 Ready
|
||||
```
|
||||
|
||||
### Run an application
|
||||
|
||||
```sh
|
||||
kubectl run nginx --image=nginx --port=80
|
||||
```
|
||||
|
||||
Now run `docker ps`; you should see nginx running. You may need to wait a few minutes for the image to be pulled.
|
||||
|
||||
### Expose it as a service
|
||||
|
||||
```sh
|
||||
kubectl expose rc nginx --port=80
|
||||
```
|
||||
|
||||
Run the following command to obtain the IP of the service we just created. There are two IPs: the first one is internal (CLUSTER_IP), and the second one is the external load-balanced IP (if a LoadBalancer is configured).
|
||||
|
||||
```sh
|
||||
kubectl get svc nginx
|
||||
```
|
||||
|
||||
Alternatively, you can obtain only the first IP (CLUSTER_IP) by running:
|
||||
|
||||
```sh
|
||||
kubectl get svc nginx --template={{.spec.clusterIP}}
|
||||
```
|
||||
|
||||
Hit the webserver with the first IP (CLUSTER_IP):
|
||||
|
||||
```sh
|
||||
curl <insert-cluster-ip-here>
|
||||
```
|
||||
|
||||
Note that you will need to run this curl command on your boot2docker VM if you are running on OS X.
|
||||
|
||||
## Deploy a DNS
|
||||
|
||||
See [here](docker-multinode/deployDNS.md) for instructions.
|
||||
|
||||
### A note on turning down your cluster
|
||||
|
||||
Many of these containers run under the management of the `kubelet` binary, which attempts to keep containers running, even if they fail. So, in order to turn down
|
||||
the cluster, you need to first kill the kubelet container, and then any other containers.
|
||||
|
||||
You may use `docker kill $(docker ps -aq)`; note that this kills _all_ containers running under Docker, so use it with caution.
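
A slightly more targeted sketch, if you prefer to stop only the Kubernetes-related containers; the `grep` pattern and the `k8s_` name prefix are assumptions about how the containers were started in this guide:

```sh
# Stop the kubelet container first so it cannot restart the others,
# then stop the containers the kubelet itself launched (named with a k8s_ prefix).
docker ps --no-trunc | grep "hyperkube kubelet" | awk '{print $1}' | xargs docker kill
docker ps -q --filter "name=k8s_" | xargs docker kill
```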
|
||||
|
||||
### Troubleshooting
|
||||
|
||||
#### Node is in `NotReady` state
|
||||
|
||||
If you see your node as `NotReady`, it's possible that your OS does not have memcg and swap enabled.
|
||||
|
||||
1. Your kernel should support memory and swap accounting. Ensure that the
|
||||
following configs are turned on in your linux kernel:
|
||||
|
||||
```console
|
||||
CONFIG_RESOURCE_COUNTERS=y
|
||||
CONFIG_MEMCG=y
|
||||
CONFIG_MEMCG_SWAP=y
|
||||
CONFIG_MEMCG_SWAP_ENABLED=y
|
||||
CONFIG_MEMCG_KMEM=y
|
||||
```
|
||||
|
||||
2. Enable the memory and swap accounting in the kernel, at boot, as command line
|
||||
parameters as follows:
|
||||
|
||||
```console
|
||||
GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
|
||||
```
|
||||
|
||||
NOTE: The above is specifically for GRUB2.
|
||||
You can check the command line parameters passed to your kernel by looking at the
|
||||
output of /proc/cmdline:
|
||||
|
||||
```console
|
||||
$ cat /proc/cmdline
|
||||
BOOT_IMAGE=/boot/vmlinuz-3.18.4-aufs root=/dev/sda5 ro cgroup_enable=memory swapaccount=1
|
||||
```
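
On a GRUB2 system, a hedged sketch of applying that change is (the output path differs between BIOS and EFI installs):

```sh
# After appending the parameters to GRUB_CMDLINE_LINUX in /etc/default/grub:
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
sudo reboot
```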
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
|
||||
|
@@ -31,236 +31,8 @@ Documentation for other releases can be found at
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
Configuring Kubernetes on [Fedora](http://fedoraproject.org) via [Ansible](http://www.ansible.com/home)
|
||||
-------------------------------------------------------------------------------------------------------
|
||||
|
||||
Configuring Kubernetes on Fedora via Ansible offers a simple way to quickly create a clustered environment with little effort.
|
||||
|
||||
**Table of Contents**
|
||||
|
||||
- [Prerequisites](#prerequisites)
|
||||
- [Architecture of the cluster](#architecture-of-the-cluster)
|
||||
- [Setting up ansible access to your nodes](#setting-up-ansible-access-to-your-nodes)
|
||||
- [Setting up the cluster](#setting-up-the-cluster)
|
||||
- [Testing and using your new cluster](#testing-and-using-your-new-cluster)
|
||||
|
||||
## Prerequisites
|
||||
|
||||
1. A host able to run Ansible and to clone the following repo: [kubernetes](https://github.com/kubernetes/kubernetes.git)
|
||||
2. A Fedora 21+ host to act as cluster master
|
||||
3. As many Fedora 21+ hosts as you would like to act as cluster nodes
|
||||
|
||||
The hosts can be virtual or bare metal. Ansible will take care of the rest of the configuration for you - configuring networking, installing packages, handling the firewall, etc. This example will use one master and two nodes.
|
||||
|
||||
## Architecture of the cluster
|
||||
|
||||
A Kubernetes cluster requires etcd, a master, and n nodes, so we will create a cluster with three hosts, for example:
|
||||
|
||||
```console
|
||||
master,etcd = kube-master.example.com
|
||||
node1 = kube-node-01.example.com
|
||||
node2 = kube-node-02.example.com
|
||||
```
|
||||
|
||||
**Make sure your local machine has**
|
||||
|
||||
- ansible (must be 1.9.0+)
|
||||
- git
|
||||
- python-netaddr
|
||||
|
||||
If not, install them:
|
||||
|
||||
```sh
|
||||
yum install -y ansible git python-netaddr
|
||||
```
|
||||
|
||||
**Now clone down the Kubernetes contrib repository**
|
||||
|
||||
```sh
|
||||
git clone https://github.com/kubernetes/contrib.git
|
||||
cd contrib/ansible
|
||||
```
|
||||
|
||||
**Tell ansible about each machine and its role in your cluster**
|
||||
|
||||
Get the IP addresses from the master and nodes. Add those to the `~/contrib/ansible/inventory` file on the host running Ansible.
|
||||
|
||||
```console
|
||||
[masters]
|
||||
kube-master.example.com
|
||||
|
||||
[etcd]
|
||||
kube-master.example.com
|
||||
|
||||
[nodes]
|
||||
kube-node-01.example.com
|
||||
kube-node-02.example.com
|
||||
```
|
||||
|
||||
## Setting up ansible access to your nodes
|
||||
|
||||
If you already are running on a machine which has passwordless ssh access to the kube-master and kube-node-{01,02} nodes, and 'sudo' privileges, simply set the value of `ansible_ssh_user` in `~/contrib/ansible/group_vars/all.yml` to the username which you use to ssh to the nodes (i.e. `fedora`), and proceed to the next step...
|
||||
|
||||
*Otherwise* set up ssh on the machines like so (you will need to know the root password for all machines in the cluster).
|
||||
|
||||
edit: ~/contrib/ansible/group_vars/all.yml
|
||||
|
||||
```yaml
|
||||
ansible_ssh_user: root
|
||||
```
|
||||
|
||||
**Configuring ssh access to the cluster**
|
||||
|
||||
If you already have ssh access to every machine using ssh public keys you may skip to [setting up the cluster](#setting-up-the-cluster)
|
||||
|
||||
Make sure your local machine (root) has an ssh key pair; if not, generate one:
|
||||
|
||||
```sh
|
||||
ssh-keygen
|
||||
```
|
||||
|
||||
Copy the ssh public key to **all** nodes in the cluster:
|
||||
|
||||
```sh
|
||||
for node in kube-master.example.com kube-node-01.example.com kube-node-02.example.com; do
|
||||
ssh-copy-id ${node}
|
||||
done
|
||||
```
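
As an optional sanity check before running the playbooks, you can ask Ansible to ping every host in the inventory (a small sketch, assuming the paths used above):

```sh
cd ~/contrib/ansible
ansible all -i inventory -m ping
```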
|
||||
|
||||
## Setting up the cluster
|
||||
|
||||
The default values of the variables in `~/contrib/ansible/group_vars/all.yml` should be good enough; if not, change them as needed.
|
||||
|
||||
edit: ~/contrib/ansible/group_vars/all.yml
|
||||
|
||||
**Configure access to kubernetes packages**
|
||||
|
||||
Modify `source_type` as below to access kubernetes packages through the package manager.
|
||||
|
||||
```yaml
|
||||
source_type: packageManager
|
||||
```
|
||||
|
||||
**Configure the IP addresses used for services**
|
||||
|
||||
Each Kubernetes service gets its own IP address. These are not real IPs. You need only select a range of IPs which are not in use elsewhere in your environment.
|
||||
|
||||
```yaml
|
||||
kube_service_addresses: 10.254.0.0/16
|
||||
```
|
||||
|
||||
**Managing flannel**
|
||||
|
||||
Modify `flannel_subnet`, `flannel_prefix` and `flannel_host_prefix` only if defaults are not appropriate for your cluster.
|
||||
|
||||
|
||||
**Managing add-on services in your cluster**
|
||||
|
||||
Set `cluster_logging` to false or true (default) to disable or enable logging with elasticsearch.
|
||||
|
||||
```yaml
|
||||
cluster_logging: true
|
||||
```
|
||||
|
||||
Set `cluster_monitoring` to true (default) or false to enable or disable cluster monitoring with heapster and influxdb.
|
||||
|
||||
```yaml
|
||||
cluster_monitoring: true
|
||||
```
|
||||
|
||||
Set `dns_setup` to true (recommended) or false to enable or disable the whole DNS configuration.
|
||||
|
||||
```yaml
|
||||
dns_setup: true
|
||||
```
|
||||
|
||||
**Tell ansible to get to work!**
|
||||
|
||||
This will finally set up your whole Kubernetes cluster for you.
|
||||
|
||||
```sh
|
||||
cd ~/contrib/ansible/
|
||||
|
||||
./setup.sh
|
||||
```
|
||||
|
||||
## Testing and using your new cluster
|
||||
|
||||
That's all there is to it. It's really that easy. At this point you should have a functioning Kubernetes cluster.
|
||||
|
||||
**Show kubernetes nodes**
|
||||
|
||||
Run the following on the kube-master:
|
||||
|
||||
```sh
|
||||
kubectl get nodes
|
||||
```
|
||||
|
||||
**Show services running on masters and nodes**
|
||||
|
||||
```sh
|
||||
systemctl | grep -i kube
|
||||
```
|
||||
|
||||
**Show firewall rules on the masters and nodes**
|
||||
|
||||
```sh
|
||||
iptables -nvL
|
||||
```
|
||||
|
||||
**Create /tmp/apache.json on the master with the following contents and deploy pod**
|
||||
|
||||
```json
|
||||
{
|
||||
"kind": "Pod",
|
||||
"apiVersion": "v1",
|
||||
"metadata": {
|
||||
"name": "fedoraapache",
|
||||
"labels": {
|
||||
"name": "fedoraapache"
|
||||
}
|
||||
},
|
||||
"spec": {
|
||||
"containers": [
|
||||
{
|
||||
"name": "fedoraapache",
|
||||
"image": "fedora/apache",
|
||||
"ports": [
|
||||
{
|
||||
"hostPort": 80,
|
||||
"containerPort": 80
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```sh
|
||||
kubectl create -f /tmp/apache.json
|
||||
```
|
||||
|
||||
**Check where the pod was created**
|
||||
|
||||
```sh
|
||||
kubectl get pods
|
||||
```
|
||||
|
||||
**Check Docker status on nodes**
|
||||
|
||||
```sh
|
||||
docker ps
|
||||
docker images
|
||||
```
|
||||
|
||||
**After the pod is 'Running', check web server access on the node**
|
||||
|
||||
```sh
|
||||
curl http://localhost
|
||||
```
|
||||
|
||||
That's it!
|
||||
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/fedora/fedora_ansible_config/
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
|
@@ -31,211 +31,8 @@ Documentation for other releases can be found at
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
Getting started on [Fedora](http://fedoraproject.org)
|
||||
-----------------------------------------------------
|
||||
|
||||
**Table of Contents**
|
||||
|
||||
- [Prerequisites](#prerequisites)
|
||||
- [Instructions](#instructions)
|
||||
|
||||
## Prerequisites
|
||||
|
||||
1. You need 2 or more machines with Fedora installed.
|
||||
|
||||
## Instructions
|
||||
|
||||
This is a getting started guide for Fedora. It is a manual configuration so you understand all the underlying packages / services / ports, etc...
|
||||
|
||||
This guide will only get ONE node (previously minion) working. Multiple nodes require a functional [networking configuration](../../admin/networking.md) done outside of Kubernetes, although the additional Kubernetes configuration requirements should be obvious.
|
||||
|
||||
The Kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd and the configuration resides in a central location: /etc/kubernetes. We will break the services up between the hosts. The first host, fed-master, will be the Kubernetes master. This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run _etcd_ (not needed if _etcd_ runs on a different host but this guide assumes that _etcd_ and Kubernetes master run on the same host). The remaining host, fed-node will be the node and run kubelet, proxy and docker.
|
||||
|
||||
**System Information:**
|
||||
|
||||
Hosts:
|
||||
|
||||
```
|
||||
fed-master = 192.168.121.9
|
||||
fed-node = 192.168.121.65
|
||||
```
|
||||
|
||||
**Prepare the hosts:**
|
||||
|
||||
* Install Kubernetes on all hosts - fed-{master,node}. This will also pull in docker. Also install etcd on fed-master. This guide has been tested with kubernetes-0.18 and beyond.
|
||||
* The [--enablerepo=updates-testing](https://fedoraproject.org/wiki/QA:Updates_Testing) directive in the yum command below will ensure that the most recent Kubernetes version that is scheduled for pre-release will be installed. This should be a more recent version than the Fedora "stable" release for Kubernetes that you would get without adding the directive.
|
||||
* If you want the very latest Kubernetes release [you can download and yum install the RPM directly from Fedora Koji](http://koji.fedoraproject.org/koji/packageinfo?packageID=19202) instead of using the yum install command below.
|
||||
|
||||
```sh
|
||||
yum -y install --enablerepo=updates-testing kubernetes
|
||||
```
|
||||
|
||||
* Install etcd and iptables
|
||||
|
||||
```sh
|
||||
yum -y install etcd iptables
|
||||
```
|
||||
|
||||
* Add master and node to /etc/hosts on all machines (not needed if hostnames already in DNS). Make sure that communication works between fed-master and fed-node by using a utility such as ping.
|
||||
|
||||
```sh
|
||||
echo "192.168.121.9 fed-master
|
||||
192.168.121.65 fed-node" >> /etc/hosts
|
||||
```
|
||||
|
||||
* Edit /etc/kubernetes/config which will be the same on all hosts (master and node) to contain:
|
||||
|
||||
```sh
|
||||
# Comma separated list of nodes in the etcd cluster
|
||||
KUBE_MASTER="--master=http://fed-master:8080"
|
||||
|
||||
# logging to stderr means we get it in the systemd journal
|
||||
KUBE_LOGTOSTDERR="--logtostderr=true"
|
||||
|
||||
# journal message level, 0 is debug
|
||||
KUBE_LOG_LEVEL="--v=0"
|
||||
|
||||
# Should this cluster be allowed to run privileged docker containers
|
||||
KUBE_ALLOW_PRIV="--allow-privileged=false"
|
||||
```
|
||||
|
||||
* Disable the firewall on both the master and node, as docker does not play well with other firewall rule managers. Please note that iptables-services does not exist on default fedora server install.
|
||||
|
||||
```sh
|
||||
systemctl disable iptables-services firewalld
|
||||
systemctl stop iptables-services firewalld
|
||||
```
|
||||
|
||||
**Configure the Kubernetes services on the master.**
|
||||
|
||||
* Edit /etc/kubernetes/apiserver to appear as such. The service-cluster-ip-range IP addresses must be an unused block of addresses, not used anywhere else. They do not need to be routed or assigned to anything.
|
||||
|
||||
```sh
|
||||
# The address on the local server to listen to.
|
||||
KUBE_API_ADDRESS="--address=0.0.0.0"
|
||||
|
||||
# Comma separated list of nodes in the etcd cluster
|
||||
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:4001"
|
||||
|
||||
# Address range to use for services
|
||||
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
|
||||
|
||||
# Add your own!
|
||||
KUBE_API_ARGS=""
|
||||
```
|
||||
|
||||
* Edit /etc/etcd/etcd.conf so that etcd listens on all IPs instead of only 127.0.0.1; if you don't, you will get errors like "connection refused". Note that Fedora 22 uses etcd 2.0; one of the changes in etcd 2.0 is that it now uses ports 2379 and 2380 (as opposed to etcd 0.46, which used 4001 and 7001).
|
||||
|
||||
```sh
|
||||
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:4001"
|
||||
```
|
||||
|
||||
* Create /var/run/kubernetes on master:
|
||||
|
||||
```sh
|
||||
mkdir /var/run/kubernetes
|
||||
chown kube:kube /var/run/kubernetes
|
||||
chmod 750 /var/run/kubernetes
|
||||
```
|
||||
|
||||
* Start the appropriate services on master:
|
||||
|
||||
```sh
|
||||
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
|
||||
systemctl restart $SERVICES
|
||||
systemctl enable $SERVICES
|
||||
systemctl status $SERVICES
|
||||
done
|
||||
```
|
||||
|
||||
* Addition of nodes:
|
||||
|
||||
* Create the following node.json file on the Kubernetes master node:
|
||||
|
||||
```json
|
||||
{
|
||||
"apiVersion": "v1",
|
||||
"kind": "Node",
|
||||
"metadata": {
|
||||
"name": "fed-node",
|
||||
"labels":{ "name": "fed-node-label"}
|
||||
},
|
||||
"spec": {
|
||||
"externalID": "fed-node"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Now create a node object internally in your Kubernetes cluster by running:
|
||||
|
||||
```console
|
||||
$ kubectl create -f ./node.json
|
||||
|
||||
$ kubectl get nodes
|
||||
NAME LABELS STATUS
|
||||
fed-node name=fed-node-label Unknown
|
||||
```
|
||||
|
||||
Please note that the above only creates a representation for the node
_fed-node_ internally. It does not provision the actual _fed-node_. Also, it
is assumed that _fed-node_ (as specified in `name`) can be resolved and is
reachable from the Kubernetes master node. This guide will discuss how to provision
a Kubernetes node (fed-node) below.
|
||||
|
||||
**Configure the Kubernetes services on the node.**
|
||||
|
||||
***We need to configure the kubelet on the node.***
|
||||
|
||||
* Edit /etc/kubernetes/kubelet to appear as such:
|
||||
|
||||
```sh
|
||||
###
|
||||
# Kubernetes kubelet (node) config
|
||||
|
||||
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
|
||||
KUBELET_ADDRESS="--address=0.0.0.0"
|
||||
|
||||
# You may leave this blank to use the actual hostname
|
||||
KUBELET_HOSTNAME="--hostname-override=fed-node"
|
||||
|
||||
# location of the api-server
|
||||
KUBELET_API_SERVER="--api-servers=http://fed-master:8080"
|
||||
|
||||
# Add your own!
|
||||
#KUBELET_ARGS=""
|
||||
```
|
||||
|
||||
* Start the appropriate services on the node (fed-node).
|
||||
|
||||
```sh
|
||||
for SERVICES in kube-proxy kubelet docker; do
|
||||
systemctl restart $SERVICES
|
||||
systemctl enable $SERVICES
|
||||
systemctl status $SERVICES
|
||||
done
|
||||
```
|
||||
|
||||
* Check to make sure that the cluster can now see the fed-node on fed-master, and that its status changes to _Ready_.
|
||||
|
||||
```console
|
||||
kubectl get nodes
|
||||
NAME LABELS STATUS
|
||||
fed-node name=fed-node-label Ready
|
||||
```
|
||||
|
||||
* Deletion of nodes:
|
||||
|
||||
To delete _fed-node_ from your Kubernetes cluster, one should run the following on fed-master (please do not do it now; it is just for information):
|
||||
|
||||
```sh
|
||||
kubectl delete -f ./node.json
|
||||
```
|
||||
|
||||
*You should be finished!*
|
||||
|
||||
**The cluster should be running! Launch a test pod.**
|
||||
|
||||
You should have a functional cluster; check out [101](../../../docs/user-guide/walkthrough/README.md)!
|
||||
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/fedora/fedora_manual_config/
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
|
@@ -31,188 +31,8 @@ Documentation for other releases can be found at
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
Kubernetes multiple nodes cluster with flannel on Fedora
|
||||
--------------------------------------------------------
|
||||
|
||||
**Table of Contents**
|
||||
|
||||
- [Introduction](#introduction)
|
||||
- [Prerequisites](#prerequisites)
|
||||
- [Master Setup](#master-setup)
|
||||
- [Node Setup](#node-setup)
|
||||
- [**Test the cluster and flannel configuration**](#test-the-cluster-and-flannel-configuration)
|
||||
|
||||
## Introduction
|
||||
|
||||
This document describes how to deploy Kubernetes on multiple hosts to set up a multi-node cluster and networking with flannel. Follow fedora [getting started guide](fedora_manual_config.md) to setup 1 master (fed-master) and 2 or more nodes. Make sure that all nodes have different names (fed-node1, fed-node2 and so on) and labels (fed-node1-label, fed-node2-label, and so on) to avoid any conflict. Also make sure that the Kubernetes master host is running etcd, kube-controller-manager, kube-scheduler, and kube-apiserver services, and the nodes are running docker, kube-proxy and kubelet services. Now install flannel on Kubernetes nodes. flannel on each node configures an overlay network that docker uses. flannel runs on each node to setup a unique class-C container network.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
1. You need 2 or more machines with Fedora installed.
|
||||
|
||||
## Master Setup
|
||||
|
||||
**Perform the following commands on the Kubernetes master**
|
||||
|
||||
* Configure flannel by creating a `flannel-config.json` in your current directory on fed-master. flannel provides udp and vxlan among other overlay networking backend options. In this guide, we choose the kernel-based vxlan backend. The contents of the JSON are:
|
||||
|
||||
```json
|
||||
{
|
||||
"Network": "18.16.0.0/16",
|
||||
"SubnetLen": 24,
|
||||
"Backend": {
|
||||
"Type": "vxlan",
|
||||
"VNI": 1
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**NOTE:** Choose an IP range that is *NOT* part of the public IP address range.
|
||||
|
||||
* Add the configuration to the etcd server on fed-master.
|
||||
|
||||
```sh
|
||||
etcdctl set /coreos.com/network/config < flannel-config.json
|
||||
```
|
||||
|
||||
* Verify the key exists in the etcd server on fed-master.
|
||||
|
||||
```sh
|
||||
etcdctl get /coreos.com/network/config
|
||||
```
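
The get should simply echo back the configuration you stored, for example:

```console
# etcdctl get /coreos.com/network/config
{
    "Network": "18.16.0.0/16",
    "SubnetLen": 24,
    "Backend": {
        "Type": "vxlan",
        "VNI": 1
     }
}
```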
|
||||
|
||||
## Node Setup
|
||||
|
||||
**Perform the following commands on all Kubernetes nodes**
|
||||
|
||||
* Edit the flannel configuration file /etc/sysconfig/flanneld as follows:
|
||||
|
||||
```sh
|
||||
# Flanneld configuration options
|
||||
|
||||
# etcd url location. Point this to the server where etcd runs
|
||||
FLANNEL_ETCD="http://fed-master:4001"
|
||||
|
||||
# etcd config key. This is the configuration key that flannel queries
|
||||
# For address range assignment
|
||||
FLANNEL_ETCD_KEY="/coreos.com/network"
|
||||
|
||||
# Any additional options that you want to pass
|
||||
FLANNEL_OPTIONS=""
|
||||
```
|
||||
|
||||
**Note:** By default, flannel uses the interface for the default route. If you have multiple interfaces and would like to use an interface other than the default route one, you could add "-iface=" to FLANNEL_OPTIONS. For additional options, run `flanneld --help` on command line.
|
||||
|
||||
* Enable the flannel service.
|
||||
|
||||
```sh
|
||||
systemctl enable flanneld
|
||||
```
|
||||
|
||||
* If docker is not running, then starting the flannel service is enough; you can skip the next step.
|
||||
|
||||
```sh
|
||||
systemctl start flanneld
|
||||
```
|
||||
|
||||
* If docker is already running, then stop docker, delete the docker bridge (docker0), start flanneld and restart docker as follows. Another alternative is to just reboot the system (`systemctl reboot`).
|
||||
|
||||
```sh
|
||||
systemctl stop docker
|
||||
ip link delete docker0
|
||||
systemctl start flanneld
|
||||
systemctl start docker
|
||||
```
|
||||
|
||||
***
|
||||
|
||||
## **Test the cluster and flannel configuration**
|
||||
|
||||
* Now check the interfaces on the nodes. Notice there is now a flannel.1 interface, and the ip addresses of docker0 and flannel.1 interfaces are in the same network. You will notice that docker0 is assigned a subnet (18.16.29.0/24 as shown below) on each Kubernetes node out of the IP range configured above. A working output should look like this:
|
||||
|
||||
```console
|
||||
# ip -4 a|grep inet
|
||||
inet 127.0.0.1/8 scope host lo
|
||||
inet 192.168.122.77/24 brd 192.168.122.255 scope global dynamic eth0
|
||||
inet 18.16.29.0/16 scope global flannel.1
|
||||
inet 18.16.29.1/24 scope global docker0
|
||||
```
|
||||
|
||||
* From any node in the cluster, check the cluster members by issuing a query to etcd server via curl (only partial output is shown using `grep -E "\{|\}|key|value"`). If you set up a 1 master and 3 nodes cluster, you should see one block for each node showing the subnets they have been assigned. You can associate those subnets to each node by the MAC address (VtepMAC) and IP address (Public IP) that is listed in the output.
|
||||
|
||||
```sh
|
||||
curl -s http://fed-master:4001/v2/keys/coreos.com/network/subnets | python -mjson.tool
|
||||
```
|
||||
|
||||
```json
|
||||
{
|
||||
"node": {
|
||||
"key": "/coreos.com/network/subnets",
|
||||
{
|
||||
"key": "/coreos.com/network/subnets/18.16.29.0-24",
|
||||
"value": "{\"PublicIP\":\"192.168.122.77\",\"BackendType\":\"vxlan\",\"BackendData\":{\"VtepMAC\":\"46:f1:d0:18:d0:65\"}}"
|
||||
},
|
||||
{
|
||||
"key": "/coreos.com/network/subnets/18.16.83.0-24",
|
||||
"value": "{\"PublicIP\":\"192.168.122.36\",\"BackendType\":\"vxlan\",\"BackendData\":{\"VtepMAC\":\"ca:38:78:fc:72:29\"}}"
|
||||
},
|
||||
{
|
||||
"key": "/coreos.com/network/subnets/18.16.90.0-24",
|
||||
"value": "{\"PublicIP\":\"192.168.122.127\",\"BackendType\":\"vxlan\",\"BackendData\":{\"VtepMAC\":\"92:e2:80:ba:2d:4d\"}}"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
* From all nodes, review the `/run/flannel/subnet.env` file. This file was generated automatically by flannel.
|
||||
|
||||
```console
|
||||
# cat /run/flannel/subnet.env
|
||||
FLANNEL_SUBNET=18.16.29.1/24
|
||||
FLANNEL_MTU=1450
|
||||
FLANNEL_IPMASQ=false
|
||||
```
|
||||
|
||||
* At this point, we have etcd running on the Kubernetes master, and flannel / docker running on Kubernetes nodes. Next steps are for testing cross-host container communication which will confirm that docker and flannel are configured properly.
|
||||
|
||||
* Issue the following commands on any 2 nodes:
|
||||
|
||||
```console
|
||||
# docker run -it fedora:latest bash
|
||||
bash-4.3#
|
||||
```
|
||||
|
||||
* This will place you inside the container. Install the iproute and iputils packages to get the ip and ping utilities. Due to a [bug](https://bugzilla.redhat.com/show_bug.cgi?id=1142311), it is required to modify the capabilities of the ping binary to work around the "Operation not permitted" error.
|
||||
|
||||
```console
|
||||
bash-4.3# yum -y install iproute iputils
|
||||
bash-4.3# setcap cap_net_raw+ep /usr/bin/ping
|
||||
```
|
||||
|
||||
* Now note the IP address on the first node:
|
||||
|
||||
```console
|
||||
bash-4.3# ip -4 a l eth0 | grep inet
|
||||
inet 18.16.29.4/24 scope global eth0
|
||||
```
|
||||
|
||||
* And also note the IP address on the other node:
|
||||
|
||||
```console
|
||||
bash-4.3# ip a l eth0 | grep inet
|
||||
inet 18.16.90.4/24 scope global eth0
|
||||
```
|
||||
|
||||
* Now ping from the first node to the other node:
|
||||
|
||||
```console
|
||||
bash-4.3# ping 18.16.90.4
|
||||
PING 18.16.90.4 (18.16.90.4) 56(84) bytes of data.
|
||||
64 bytes from 18.16.90.4: icmp_seq=1 ttl=62 time=0.275 ms
|
||||
64 bytes from 18.16.90.4: icmp_seq=2 ttl=62 time=0.372 ms
|
||||
```
|
||||
|
||||
* The Kubernetes multi-node cluster is now set up, with overlay networking provided by flannel.
|
||||
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/fedora/flannel_multi_node_cluster/
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
|
@@ -31,236 +31,8 @@ Documentation for other releases can be found at
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
Getting started on Google Compute Engine
|
||||
----------------------------------------
|
||||
|
||||
**Table of Contents**
|
||||
|
||||
- [Before you start](#before-you-start)
|
||||
- [Prerequisites](#prerequisites)
|
||||
- [Starting a cluster](#starting-a-cluster)
|
||||
- [Installing the Kubernetes command line tools on your workstation](#installing-the-kubernetes-command-line-tools-on-your-workstation)
|
||||
- [Getting started with your cluster](#getting-started-with-your-cluster)
|
||||
- [Inspect your cluster](#inspect-your-cluster)
|
||||
- [Run some examples](#run-some-examples)
|
||||
- [Tearing down the cluster](#tearing-down-the-cluster)
|
||||
- [Customizing](#customizing)
|
||||
- [Troubleshooting](#troubleshooting)
|
||||
- [Project settings](#project-settings)
|
||||
- [Cluster initialization hang](#cluster-initialization-hang)
|
||||
- [SSH](#ssh)
|
||||
- [Networking](#networking)
|
||||
|
||||
|
||||
The example below creates a Kubernetes cluster with 4 worker node Virtual Machines and a master Virtual Machine (i.e. 5 VMs in your cluster). This cluster is set up and controlled from your workstation (or wherever you find convenient).
|
||||
|
||||
### Before you start
|
||||
|
||||
If you want a simplified getting started experience and GUI for managing clusters, please consider trying [Google Container Engine](https://cloud.google.com/container-engine/) (GKE) for hosted cluster installation and management.
|
||||
|
||||
If you want to use custom binaries or pure open source Kubernetes, please continue with the instructions below.
|
||||
|
||||
### Prerequisites
|
||||
|
||||
1. You need a Google Cloud Platform account with billing enabled. Visit the [Google Developers Console](http://cloud.google.com/console) for more details.
|
||||
1. Install `gcloud` as necessary. `gcloud` can be installed as a part of the [Google Cloud SDK](https://cloud.google.com/sdk/).
|
||||
1. Enable the [Compute Engine Instance Group Manager API](https://developers.google.com/console/help/new/#activatingapis) in the [Google Cloud developers console](https://console.developers.google.com).
|
||||
1. Make sure that gcloud is set to use the Google Cloud Platform project you want. You can check the current project using `gcloud config list project` and change it via `gcloud config set project <project-id>`.
|
||||
1. Make sure you have credentials for GCloud by running `gcloud auth login`.
|
||||
1. Make sure you can start up a GCE VM from the command line. At least make sure you can do the [Create an instance](https://cloud.google.com/compute/docs/instances/#startinstancegcloud) part of the GCE Quickstart.
|
||||
1. Make sure you can ssh into the VM without interactive prompts. See the [Log in to the instance](https://cloud.google.com/compute/docs/instances/#sshing) part of the GCE Quickstart.
|
||||
|
||||
### Starting a cluster
|
||||
|
||||
You can install a client and start a cluster with either one of these commands (we list both in case only one is installed on your machine):
|
||||
|
||||
|
||||
```bash
|
||||
curl -sS https://get.k8s.io | bash
|
||||
```
|
||||
|
||||
or
|
||||
|
||||
```bash
|
||||
wget -q -O - https://get.k8s.io | bash
|
||||
```
|
||||
|
||||
Once this command completes, you will have a master VM and four worker VMs, running as a Kubernetes cluster.
|
||||
|
||||
By default, some containers will already be running on your cluster. Containers like `kibana` and `elasticsearch` provide [logging](logging.md), while `heapster` provides [monitoring](http://releases.k8s.io/HEAD/cluster/addons/cluster-monitoring/README.md) services.
|
||||
|
||||
The script run by the commands above creates a cluster with the name/prefix "kubernetes". It defines one specific cluster config, so you can't run it more than once.
|
||||
|
||||
Alternately, you can download and install the latest Kubernetes release from [this page](https://github.com/kubernetes/kubernetes/releases), then run the `<kubernetes>/cluster/kube-up.sh` script to start the cluster:
|
||||
|
||||
```bash
|
||||
cd kubernetes
|
||||
cluster/kube-up.sh
|
||||
```
|
||||
|
||||
If you want more than one cluster running in your project, want to use a different name, or want a different number of worker nodes, see the `<kubernetes>/cluster/gce/config-default.sh` file for more fine-grained configuration before you start up your cluster.
|
||||
|
||||
If you run into trouble, please see the section on [troubleshooting](gce.md#troubleshooting), post to the
|
||||
[google-containers group](https://groups.google.com/forum/#!forum/google-containers), or come ask questions on [Slack](../troubleshooting.md#slack).
|
||||
|
||||
The next few steps will show you:
|
||||
|
||||
1. how to set up the command line client on your workstation to manage the cluster
|
||||
1. examples of how to use the cluster
|
||||
1. how to delete the cluster
|
||||
1. how to start clusters with non-default options (like larger clusters)
|
||||
|
||||
### Installing the Kubernetes command line tools on your workstation
|
||||
|
||||
The cluster startup script will leave you with a running cluster and a `kubernetes` directory on your workstation.
|
||||
The next step is to make sure the `kubectl` tool is in your path.
|
||||
|
||||
The [kubectl](../user-guide/kubectl/kubectl.md) tool controls the Kubernetes cluster manager. It lets you inspect your cluster resources, create, delete, and update components, and much more.
|
||||
You will use it to look at your new cluster and bring up example apps.
|
||||
|
||||
Add the appropriate binary folder to your `PATH` to access kubectl:
|
||||
|
||||
```bash
|
||||
# OS X
|
||||
export PATH=<path/to/kubernetes-directory>/platforms/darwin/amd64:$PATH
|
||||
|
||||
# Linux
|
||||
export PATH=<path/to/kubernetes-directory>/platforms/linux/amd64:$PATH
|
||||
```
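
A quick way to confirm both that `kubectl` is on your `PATH` and that it can reach the new cluster:

```bash
kubectl version
kubectl cluster-info
```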
|
||||
|
||||
**Note**: gcloud also ships with `kubectl`, which by default is added to your path.
|
||||
However the gcloud bundled kubectl version may be older than the one downloaded by the
|
||||
get.k8s.io install script. We recommend you use the downloaded binary to avoid
|
||||
potential issues with client/server version skew.
|
||||
|
||||
#### Enabling bash completion of the Kubernetes command line tools
|
||||
|
||||
You may find it useful to enable `kubectl` bash completion:
|
||||
|
||||
```
|
||||
$ source ./contrib/completions/bash/kubectl
|
||||
```
|
||||
|
||||
**Note**: This will last for the duration of your bash session. If you want to make this permanent, you need to add this line to your bash profile.
|
||||
|
||||
Alternatively, on most linux distributions you can also move the completions file to your bash_completions.d like this:
|
||||
|
||||
```
|
||||
$ cp ./contrib/completions/bash/kubectl /etc/bash_completion.d/
|
||||
```
|
||||
|
||||
but then you have to update it when you update kubectl.
|
||||
|
||||
### Getting started with your cluster
|
||||
|
||||
#### Inspect your cluster
|
||||
|
||||
Once `kubectl` is in your path, you can use it to look at your cluster. E.g., running:
|
||||
|
||||
```console
|
||||
$ kubectl get --all-namespaces services
|
||||
```
|
||||
|
||||
should show a set of [services](../user-guide/services.md) that look something like this:
|
||||
|
||||
```console
|
||||
NAMESPACE NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
|
||||
default kubernetes 10.0.0.1 <none> 443/TCP <none> 1d
|
||||
kube-system kube-dns 10.0.0.2 <none> 53/TCP,53/UDP k8s-app=kube-dns 1d
|
||||
kube-system kube-ui 10.0.0.3 <none> 80/TCP k8s-app=kube-ui 1d
|
||||
...
|
||||
```
|
||||
|
||||
Similarly, you can take a look at the set of [pods](../user-guide/pods.md) that were created during cluster startup.
|
||||
You can do this via the
|
||||
|
||||
```console
|
||||
$ kubectl get --all-namespaces pods
|
||||
```
|
||||
|
||||
command.
|
||||
|
||||
You'll see a list of pods that looks something like this (the name specifics will be different):
|
||||
|
||||
```console
|
||||
NAMESPACE NAME READY STATUS RESTARTS AGE
|
||||
kube-system fluentd-cloud-logging-kubernetes-minion-63uo 1/1 Running 0 14m
|
||||
kube-system fluentd-cloud-logging-kubernetes-minion-c1n9 1/1 Running 0 14m
|
||||
kube-system fluentd-cloud-logging-kubernetes-minion-c4og 1/1 Running 0 14m
|
||||
kube-system fluentd-cloud-logging-kubernetes-minion-ngua 1/1 Running 0 14m
|
||||
kube-system kube-dns-v5-7ztia 3/3 Running 0 15m
|
||||
kube-system kube-ui-v1-curt1 1/1 Running 0 15m
|
||||
kube-system monitoring-heapster-v5-ex4u3 1/1 Running 1 15m
|
||||
kube-system monitoring-influx-grafana-v1-piled 2/2 Running 0 15m
|
||||
```
|
||||
|
||||
Some of the pods may take a few seconds to start up (during this time they'll show `Pending`), but check that they all show as `Running` after a short period.
|
||||
|
||||
#### Run some examples
|
||||
|
||||
Then, see [a simple nginx example](../../docs/user-guide/simple-nginx.md) to try out your new cluster.
|
||||
|
||||
For more complete applications, please look in the [examples directory](../../examples/). The [guestbook example](../../examples/guestbook/) is a good "getting started" walkthrough.
|
||||
|
||||
### Tearing down the cluster
|
||||
|
||||
To remove/delete/teardown the cluster, use the `kube-down.sh` script.
|
||||
|
||||
```bash
|
||||
cd kubernetes
|
||||
cluster/kube-down.sh
|
||||
```
|
||||
|
||||
Likewise, the `kube-up.sh` in the same directory will bring it back up. You do not need to rerun the `curl` or `wget` command: everything needed to setup the Kubernetes cluster is now on your workstation.
|
||||
|
||||
### Customizing
|
||||
|
||||
The script above relies on Google Storage to stage the Kubernetes release. It
will then start (by default) a single master VM along with 4 worker VMs. You
can tweak some of these parameters by editing `kubernetes/cluster/gce/config-default.sh`.
You can view a transcript of a successful cluster creation
[here](https://gist.github.com/satnam6502/fc689d1b46db9772adea).
|
||||
|
||||
### Troubleshooting
|
||||
|
||||
#### Project settings
|
||||
|
||||
You need to have the Google Cloud Storage API, and the Google Cloud Storage
|
||||
JSON API enabled. It is activated by default for new projects. Otherwise, it
|
||||
can be done in the Google Cloud Console. See the [Google Cloud Storage JSON
|
||||
API Overview](https://cloud.google.com/storage/docs/json_api/) for more
|
||||
details.
|
||||
|
||||
Also ensure that, as listed in the [Prerequisites section](#prerequisites), you've enabled the `Compute Engine Instance Group Manager API`, and can start up a GCE VM from the command line as in the [GCE Quickstart](https://cloud.google.com/compute/docs/quickstart) instructions.
|
||||
|
||||
#### Cluster initialization hang
|
||||
|
||||
If the Kubernetes startup script hangs waiting for the API to be reachable, you can troubleshoot by SSHing into the master and node VMs and looking at logs such as `/var/log/startupscript.log`.
|
||||
|
||||
**Once you fix the issue, you should run `kube-down.sh` to cleanup** after the partial cluster creation, before running `kube-up.sh` to try again.
|
||||
|
||||
#### SSH
|
||||
|
||||
If you're having trouble SSHing into your instances, ensure the GCE firewall
|
||||
isn't blocking port 22 to your VMs. By default, this should work but if you
|
||||
have edited firewall rules or created a new non-default network, you'll need to
|
||||
expose it: `gcloud compute firewall-rules create default-ssh --network=<network-name>
|
||||
--description "SSH allowed from anywhere" --allow tcp:22`
|
||||
|
||||
Additionally, your GCE SSH key must either have no passphrase or you need to be
|
||||
using `ssh-agent`.
|
||||
|
||||
#### Networking
|
||||
|
||||
The instances must be able to connect to each other using their private IP. The
|
||||
script uses the "default" network which should have a firewall rule called
|
||||
"default-allow-internal" which allows traffic on any port on the private IPs.
|
||||
If this rule is missing from the default network, or if you change the network
being used in `cluster/config-default.sh`, create a new rule with the following
field values (a `gcloud` sketch follows the list):
|
||||
|
||||
* Source Ranges: `10.0.0.0/8`
|
||||
* Allowed Protocols and Port: `tcp:1-65535;udp:1-65535;icmp`
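
A sketch of creating such a rule with `gcloud`; `<network-name>` is whatever network your cluster actually uses:

```bash
gcloud compute firewall-rules create default-allow-internal \
  --network=<network-name> \
  --source-ranges=10.0.0.0/8 \
  --allow=tcp:1-65535,udp:1-65535,icmp
```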
|
||||
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/gce/
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
|
@@ -32,241 +32,7 @@ Documentation for other releases can be found at
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
Getting started with Juju
|
||||
-------------------------
|
||||
|
||||
[Juju](https://jujucharms.com/docs/stable/about-juju) makes it easy to deploy
|
||||
Kubernetes by provisioning, installing and configuring all the systems in
|
||||
the cluster. Once deployed the cluster can easily scale up with one command
|
||||
to increase the cluster size.
|
||||
|
||||
|
||||
**Table of Contents**
|
||||
|
||||
- [Prerequisites](#prerequisites)
|
||||
- [On Ubuntu](#on-ubuntu)
|
||||
- [With Docker](#with-docker)
|
||||
- [Launch Kubernetes cluster](#launch-kubernetes-cluster)
|
||||
- [Exploring the cluster](#exploring-the-cluster)
|
||||
- [Run some containers!](#run-some-containers)
|
||||
- [Scale out cluster](#scale-out-cluster)
|
||||
- [Launch the "k8petstore" example app](#launch-the-k8petstore-example-app)
|
||||
- [Tear down cluster](#tear-down-cluster)
|
||||
- [More Info](#more-info)
|
||||
- [Cloud compatibility](#cloud-compatibility)
|
||||
|
||||
|
||||
## Prerequisites
|
||||
|
||||
> Note: If you're running kube-up on Ubuntu, all of the dependencies
> will be handled for you. You may safely skip to the section:
> [Launch Kubernetes Cluster](#launch-kubernetes-cluster)
|
||||
|
||||
### On Ubuntu
|
||||
|
||||
[Install the Juju client](https://jujucharms.com/get-started) on your
|
||||
local Ubuntu system:
|
||||
|
||||
sudo add-apt-repository ppa:juju/stable
|
||||
sudo apt-get update
|
||||
sudo apt-get install juju-core juju-quickstart juju-deployer
|
||||
|
||||
|
||||
### With Docker
|
||||
|
||||
If you are not using Ubuntu or prefer the isolation of Docker, you may
|
||||
run the following:
|
||||
|
||||
mkdir ~/.juju
|
||||
sudo docker run -v ~/.juju:/home/ubuntu/.juju -ti jujusolutions/jujubox:latest
|
||||
|
||||
At this point from either path you will have access to the `juju
|
||||
quickstart` command.
|
||||
|
||||
To set up the credentials for your chosen cloud run:
|
||||
|
||||
juju quickstart --constraints="mem=3.75G" -i
|
||||
|
||||
> The `constraints` flag is optional; it changes the size of the virtual machines
> that Juju will provision when it requests a new machine. Larger machines
> run faster but cost more money than smaller machines.
|
||||
|
||||
Follow the dialogue and choose `save` and `use`. Quickstart will now
bootstrap the Juju root node and set up the Juju web-based user
interface.
|
||||
|
||||
|
||||
## Launch Kubernetes cluster
|
||||
|
||||
You will need to export the `KUBERNETES_PROVIDER` environment variable before
|
||||
bringing up the cluster.
|
||||
|
||||
export KUBERNETES_PROVIDER=juju
|
||||
cluster/kube-up.sh
|
||||
|
||||
If this is your first time running the `kube-up.sh` script, it will install
the required dependencies to get started with Juju. Additionally, it will
launch a curses-based configuration utility allowing you to select your cloud
provider and enter the proper access credentials.

Next, it will deploy the Kubernetes master, etcd, and 2 nodes with flannel-based
Software Defined Networking (SDN) so that containers on different hosts can
communicate with each other.
|
||||
|
||||
|
||||
## Exploring the cluster
|
||||
|
||||
The `juju status` command provides information about each unit in the cluster:
|
||||
|
||||
$ juju status --format=oneline
|
||||
- docker/0: 52.4.92.78 (started)
|
||||
- flannel-docker/0: 52.4.92.78 (started)
|
||||
- kubernetes/0: 52.4.92.78 (started)
|
||||
- docker/1: 52.6.104.142 (started)
|
||||
- flannel-docker/1: 52.6.104.142 (started)
|
||||
- kubernetes/1: 52.6.104.142 (started)
|
||||
- etcd/0: 52.5.216.210 (started) 4001/tcp
|
||||
- juju-gui/0: 52.5.205.174 (started) 80/tcp, 443/tcp
|
||||
- kubernetes-master/0: 52.6.19.238 (started) 8080/tcp
|
||||
|
||||
You can use `juju ssh` to access any of the units:
|
||||
|
||||
juju ssh kubernetes-master/0
|
||||
|
||||
|
||||
## Run some containers!
|
||||
|
||||
`kubectl` is available on the Kubernetes master node. We'll ssh in to
launch some containers, but one could use `kubectl` locally by setting
`KUBERNETES_MASTER` to point at the IP address of "kubernetes-master/0".
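For example, from your local machine it would look roughly like this (the IP address is a placeholder for the address shown by `juju status`, and port 8080 matches the port exposed by kubernetes-master/0 above):

    export KUBERNETES_MASTER=http://<kubernetes-master-ip>:8080
    kubectl get pods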
|
||||
|
||||
No pods will be available before starting a container:
|
||||
|
||||
kubectl get pods
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
|
||||
kubectl get replicationcontrollers
|
||||
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
|
||||
|
||||
We'll follow the aws-coreos example. Create a pod manifest: `pod.json`
|
||||
|
||||
```json
|
||||
{
|
||||
"apiVersion": "v1",
|
||||
"kind": "Pod",
|
||||
"metadata": {
|
||||
"name": "hello",
|
||||
"labels": {
|
||||
"name": "hello",
|
||||
"environment": "testing"
|
||||
}
|
||||
},
|
||||
"spec": {
|
||||
"containers": [{
|
||||
"name": "hello",
|
||||
"image": "quay.io/kelseyhightower/hello",
|
||||
"ports": [{
|
||||
"containerPort": 80,
|
||||
"hostPort": 80
|
||||
}]
|
||||
}]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Create the pod with kubectl:
|
||||
|
||||
kubectl create -f pod.json
|
||||
|
||||
|
||||
Get info on the pod:
|
||||
|
||||
kubectl get pods
|
||||
|
||||
|
||||
To test the hello app, we need to locate which node is hosting
the container. Better tooling for using Juju to introspect containers
is in the works, but we can use `juju run` and `juju status` to find
our hello app.
|
||||
|
||||
Exit out of our ssh session and run:
|
||||
|
||||
juju run --unit kubernetes/0 "docker ps -n=1"
|
||||
...
|
||||
juju run --unit kubernetes/1 "docker ps -n=1"
|
||||
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
|
||||
02beb61339d8 quay.io/kelseyhightower/hello:latest /hello About an hour ago Up About an hour k8s_hello....
|
||||
|
||||
|
||||
We see "kubernetes/1" has our container, we can open port 80:
|
||||
|
||||
juju run --unit kubernetes/1 "open-port 80"
|
||||
juju expose kubernetes
|
||||
sudo apt-get install curl
|
||||
curl $(juju status --format=oneline kubernetes/1 | cut -d' ' -f3)
|
||||
|
||||
Finally delete the pod:
|
||||
|
||||
juju ssh kubernetes-master/0
|
||||
kubectl delete pods hello
|
||||
|
||||
|
||||
## Scale out cluster
|
||||
|
||||
We can add node units like so:
|
||||
|
||||
juju add-unit docker # creates unit docker/2, kubernetes/2, docker-flannel/2
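
To add several nodes at once, `add-unit` also accepts a unit count; a sketch:

    juju add-unit -n 2 docker  # adds two more docker units, each bringing along flannel-docker and kubernetes units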
|
||||
|
||||
## Launch the "k8petstore" example app
|
||||
|
||||
The [k8petstore example](../../examples/k8petstore/) is available as a
|
||||
[juju action](https://jujucharms.com/docs/devel/actions).
|
||||
|
||||
juju action do kubernetes-master/0
|
||||
|
||||
> Note: this example includes curl statements to exercise the app, which
|
||||
> automatically generates "petstore" transactions written to redis, and allows
|
||||
> you to visualize the throughput in your browser.
|
||||
|
||||
## Tear down cluster
|
||||
|
||||
./kube-down.sh
|
||||
|
||||
or destroy your current Juju environment (using the `juju env` command):
|
||||
|
||||
juju destroy-environment --force `juju env`
|
||||
|
||||
|
||||
## More Info
|
||||
|
||||
The Kubernetes charms and bundles can be found in the `kubernetes` project on
|
||||
github.com:
|
||||
|
||||
- [Bundle Repository](http://releases.k8s.io/HEAD/cluster/juju/bundles)
  * [Kubernetes master charm](../../cluster/juju/charms/trusty/kubernetes-master/)
  * [Kubernetes node charm](../../cluster/juju/charms/trusty/kubernetes/)
- [More about Juju](https://jujucharms.com)
|
||||
|
||||
|
||||
### Cloud compatibility
|
||||
|
||||
Juju runs natively against a variety of public cloud providers. Juju currently
|
||||
works with [Amazon Web Service](https://jujucharms.com/docs/stable/config-aws),
|
||||
[Windows Azure](https://jujucharms.com/docs/stable/config-azure),
|
||||
[DigitalOcean](https://jujucharms.com/docs/stable/config-digitalocean),
|
||||
[Google Compute Engine](https://jujucharms.com/docs/stable/config-gce),
|
||||
[HP Public Cloud](https://jujucharms.com/docs/stable/config-hpcloud),
|
||||
[Joyent](https://jujucharms.com/docs/stable/config-joyent),
|
||||
[LXC](https://jujucharms.com/docs/stable/config-LXC), any
|
||||
[OpenStack](https://jujucharms.com/docs/stable/config-openstack) deployment,
|
||||
[Vagrant](https://jujucharms.com/docs/stable/config-vagrant), and
|
||||
[VMware vSphere](https://jujucharms.com/docs/stable/config-vmware).
|
||||
|
||||
If you do not see your favorite cloud provider listed, many clouds can be
configured for [manual provisioning](https://jujucharms.com/docs/stable/config-manual).
|
||||
|
||||
The Kubernetes bundle has been tested on GCE and AWS and found to work with
|
||||
version 1.0.0.
|
||||
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/juju/

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

Getting started with libvirt CoreOS
|
||||
-----------------------------------
|
||||
|
||||
**Table of Contents**
|
||||
|
||||
- [Highlights](#highlights)
|
||||
- [Warnings about `libvirt-coreos` use case](#warnings-about-libvirt-coreos-use-case)
|
||||
- [Prerequisites](#prerequisites)
|
||||
- [Setup](#setup)
|
||||
- [Interacting with your Kubernetes cluster with the `kube-*` scripts.](#interacting-with-your-kubernetes-cluster-with-the-kube--scripts)
|
||||
- [Troubleshooting](#troubleshooting)
|
||||
- [!!! Cannot find kubernetes-server-linux-amd64.tar.gz](#-cannot-find-kubernetes-server-linux-amd64targz)
|
||||
- [Can't find virsh in PATH, please fix and retry.](#cant-find-virsh-in-path-please-fix-and-retry)
|
||||
- [error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory](#error-failed-to-connect-socket-to-varrunlibvirtlibvirt-sock-no-such-file-or-directory)
|
||||
- [error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': Permission denied](#error-failed-to-connect-socket-to-varrunlibvirtlibvirt-sock-permission-denied)
|
||||
- [error: Out of memory initializing network (virsh net-create...)](#error-out-of-memory-initializing-network-virsh-net-create)
|
||||
|
||||
### Highlights
|
||||
|
||||
* Super-fast cluster boot-up (few seconds instead of several minutes for vagrant)
|
||||
* Reduced disk usage thanks to [COW](https://en.wikibooks.org/wiki/QEMU/Images#Copy_on_write)
|
||||
* Reduced memory footprint thanks to [KSM](https://www.kernel.org/doc/Documentation/vm/ksm.txt)
|
||||
|
||||
### Warnings about `libvirt-coreos` use case
|
||||
|
||||
The primary goal of the `libvirt-coreos` cluster provider is to deploy a multi-node Kubernetes cluster on local VMs as fast as possible and to be as light as possible in terms of resources used.
|
||||
|
||||
In order to achieve that goal, its deployment is very different from the “standard production deployment” method used on other providers. This was done on purpose in order to implement some optimizations made possible by the fact that we know that all VMs will be running on the same physical machine.
|
||||
|
||||
The `libvirt-coreos` cluster provider doesn’t aim to be a production look-alike.
|
||||
|
||||
Another difference is that no security is enforced on `libvirt-coreos` at all. For example,
|
||||
|
||||
* Kube API server is reachable via a clear-text connection (no SSL);
|
||||
* Kube API server requires no credentials;
|
||||
* etcd access is not protected;
|
||||
* Kubernetes secrets are not protected as securely as they are in production environments;
|
||||
* etc.
|
||||
|
||||
So, a Kubernetes application developer should not validate their interaction with Kubernetes on `libvirt-coreos`, because they might technically succeed in doing things that are prohibited in a production environment, such as:
|
||||
|
||||
* un-authenticated access to Kube API server;
|
||||
* Access to Kubernetes private data structures inside etcd;
|
||||
* etc.
|
||||
|
||||
On the other hand, `libvirt-coreos` might be useful for people investigating low level implementation of Kubernetes because debugging techniques like sniffing the network traffic or introspecting the etcd content are easier on `libvirt-coreos` than on a production deployment.
|
||||
|
||||
### Prerequisites
|
||||
|
||||
1. Install [dnsmasq](http://www.thekelleys.org.uk/dnsmasq/doc.html)
|
||||
2. Install [ebtables](http://ebtables.netfilter.org/)
|
||||
3. Install [qemu](http://wiki.qemu.org/Main_Page)
|
||||
4. Install [libvirt](http://libvirt.org/)
|
||||
5. Install [openssl](http://openssl.org/)
|
||||
6. Enable and start the libvirt daemon, e.g.:
|
||||
* ``systemctl enable libvirtd && systemctl start libvirtd`` # for systemd-based systems
|
||||
* ``/etc/init.d/libvirt-bin start`` # for init.d-based systems
|
||||
7. [Grant libvirt access to your user¹](https://libvirt.org/aclpolkit.html)
|
||||
8. Check that your $HOME is accessible to the qemu user²
|
||||
|
||||
#### ¹ Depending on your distribution, libvirt access may be denied by default or may require a password at each access.
|
||||
|
||||
You can test it with the following command:
|
||||
|
||||
```sh
|
||||
virsh -c qemu:///system pool-list
|
||||
```
|
||||
|
||||
If you have access error messages, please read https://libvirt.org/acl.html and https://libvirt.org/aclpolkit.html .
|
||||
|
||||
In short, if your libvirt has been compiled with Polkit support (e.g., Arch, Fedora 21), you can create `/etc/polkit-1/rules.d/50-org.libvirt.unix.manage.rules` as follows to grant full access to libvirt to `$USER`:
|
||||
|
||||
```sh
|
||||
sudo /bin/sh -c "cat - > /etc/polkit-1/rules.d/50-org.libvirt.unix.manage.rules" << EOF
|
||||
polkit.addRule(function(action, subject) {
    if (action.id == "org.libvirt.unix.manage" &&
        subject.user == "$USER") {
        polkit.log("action=" + action);
        polkit.log("subject=" + subject);
        return polkit.Result.YES;
    }
});
|
||||
EOF
|
||||
```
|
||||
|
||||
If your libvirt has not been compiled with Polkit (e.g., Ubuntu 14.04.1 LTS), check the permissions on the libvirt unix socket:
|
||||
|
||||
```console
|
||||
$ ls -l /var/run/libvirt/libvirt-sock
|
||||
srwxrwx--- 1 root libvirtd 0 févr. 12 16:03 /var/run/libvirt/libvirt-sock
|
||||
|
||||
$ usermod -a -G libvirtd $USER
|
||||
# $USER needs to logout/login to have the new group be taken into account
|
||||
```
|
||||
|
||||
(Replace `$USER` with your login name)
|
||||
|
||||
#### ² Qemu will run with a specific user. It must have access to the VMs drives
|
||||
|
||||
All the disk drive resources needed by the VM (CoreOS disk image, Kubernetes binaries, cloud-init files, etc.) are put inside `./cluster/libvirt-coreos/libvirt_storage_pool`.
|
||||
|
||||
As we’re using the `qemu:///system` instance of libvirt, qemu will run with a specific `user:group` distinct from your user. It is configured in `/etc/libvirt/qemu.conf`. That qemu user must have access to that libvirt storage pool.
|
||||
|
||||
If your `$HOME` is world readable, everything is fine. If your `$HOME` is private, `cluster/kube-up.sh` will fail with an error message like:
|
||||
|
||||
```console
|
||||
error: Cannot access storage file '$HOME/.../kubernetes/cluster/libvirt-coreos/libvirt_storage_pool/kubernetes_master.img' (as uid:99, gid:78): Permission denied
|
||||
```
|
||||
|
||||
In order to fix that issue, you have several possibilities:
* Set `POOL_PATH` inside `cluster/libvirt-coreos/config-default.sh` to a directory that is:
  * backed by a filesystem with a lot of free disk space;
  * writable by your user;
  * accessible by the qemu user.
* Grant the qemu user access to the storage pool.
* Edit `/etc/libvirt/qemu.conf` so that qemu runs as a user that has access to the storage pool (not recommended for production usage).
|
||||
|
||||
On Arch:
|
||||
|
||||
```sh
|
||||
setfacl -m g:kvm:--x ~
|
||||
```
|
||||
|
||||
### Setup
|
||||
|
||||
By default, the libvirt-coreos setup will create a single Kubernetes master and 3 Kubernetes nodes. Because the VM drives use Copy-on-Write and because of memory ballooning and KSM, there is a lot of resource over-allocation.
|
||||
|
||||
To start your local cluster, open a shell and run:
|
||||
|
||||
```sh
|
||||
cd kubernetes
|
||||
|
||||
export KUBERNETES_PROVIDER=libvirt-coreos
|
||||
cluster/kube-up.sh
|
||||
```
|
||||
|
||||
The `KUBERNETES_PROVIDER` environment variable tells all of the various cluster management scripts which variant to use. If you forget to set this, the assumption is you are running on Google Compute Engine.
|
||||
|
||||
The `NUM_NODES` environment variable may be set to specify the number of nodes to start. If it is not set, the number of nodes defaults to 3.
|
||||
|
||||
The `KUBE_PUSH` environment variable may be set to specify which Kubernetes binaries must be deployed on the cluster. Its possible values are:
|
||||
|
||||
* `release` (default if `KUBE_PUSH` is not set) will deploy the binaries of `_output/release-tars/kubernetes-server-….tar.gz`. This is built with `make release` or `make release-skip-tests`.
|
||||
* `local` will deploy the binaries of `_output/local/go/bin`. These are built with `make`.
|
||||
|
||||
You can check that your machines are there and running with:
|
||||
|
||||
```console
|
||||
$ virsh -c qemu:///system list
|
||||
Id Name State
|
||||
----------------------------------------------------
|
||||
15 kubernetes_master running
|
||||
16 kubernetes_node-01 running
|
||||
17 kubernetes_node-02 running
|
||||
18 kubernetes_node-03 running
|
||||
```
|
||||
|
||||
You can check that the Kubernetes cluster is working with:
|
||||
|
||||
```console
|
||||
$ kubectl get nodes
|
||||
NAME LABELS STATUS
|
||||
192.168.10.2 <none> Ready
|
||||
192.168.10.3 <none> Ready
|
||||
192.168.10.4 <none> Ready
|
||||
```
|
||||
|
||||
The VMs are running [CoreOS](https://coreos.com/).
Your ssh keys have already been pushed to the VMs. (It looks for `~/.ssh/id_*.pub`.)
The user to use to connect to the VMs is `core`.
The IP to connect to the master is 192.168.10.1.
The IPs to connect to the nodes are 192.168.10.2 and onwards.
|
||||
|
||||
Connect to `kubernetes_master`:
|
||||
|
||||
```sh
|
||||
ssh core@192.168.10.1
|
||||
```
|
||||
|
||||
Connect to `kubernetes_node-01`:
|
||||
|
||||
```sh
|
||||
ssh core@192.168.10.2
|
||||
```
|
||||
|
||||
### Interacting with your Kubernetes cluster with the `kube-*` scripts.
|
||||
|
||||
All of the following commands assume you have set `KUBERNETES_PROVIDER` appropriately:
|
||||
|
||||
```sh
|
||||
export KUBERNETES_PROVIDER=libvirt-coreos
|
||||
```
|
||||
|
||||
Bring up a libvirt-CoreOS cluster of 5 nodes
|
||||
|
||||
```sh
|
||||
NUM_NODES=5 cluster/kube-up.sh
|
||||
```
|
||||
|
||||
Destroy the libvirt-CoreOS cluster
|
||||
|
||||
```sh
|
||||
cluster/kube-down.sh
|
||||
```
|
||||
|
||||
Update the libvirt-CoreOS cluster with a new Kubernetes release produced by `make release` or `make release-skip-tests`:
|
||||
|
||||
```sh
|
||||
cluster/kube-push.sh
|
||||
```
|
||||
|
||||
Update the libvirt-CoreOS cluster with the locally built Kubernetes binaries produced by `make`:
|
||||
|
||||
```sh
|
||||
KUBE_PUSH=local cluster/kube-push.sh
|
||||
```
|
||||
|
||||
Interact with the cluster
|
||||
|
||||
```sh
|
||||
kubectl ...
|
||||
```
|
||||
|
||||
### Troubleshooting
|
||||
|
||||
#### !!! Cannot find kubernetes-server-linux-amd64.tar.gz
|
||||
|
||||
Build the release tarballs:
|
||||
|
||||
```sh
|
||||
make release
|
||||
```
|
||||
|
||||
#### Can't find virsh in PATH, please fix and retry.
|
||||
|
||||
Install libvirt
|
||||
|
||||
On Arch:
|
||||
|
||||
```sh
|
||||
pacman -S qemu libvirt
|
||||
```
|
||||
|
||||
On Ubuntu 14.04.1:
|
||||
|
||||
```sh
|
||||
aptitude install qemu-system-x86 libvirt-bin
|
||||
```
|
||||
|
||||
On Fedora 21:
|
||||
|
||||
```sh
|
||||
yum install qemu libvirt
|
||||
```
|
||||
|
||||
#### error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory
|
||||
|
||||
Start the libvirt daemon
|
||||
|
||||
On Arch:
|
||||
|
||||
```sh
|
||||
systemctl start libvirtd
|
||||
```
|
||||
|
||||
On Ubuntu 14.04.1:
|
||||
|
||||
```sh
|
||||
service libvirt-bin start
|
||||
```
|
||||
|
||||
#### error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': Permission denied
|
||||
|
||||
Fix libvirt access permission (Remember to adapt `$USER`)
|
||||
|
||||
On Arch and Fedora 21:
|
||||
|
||||
```sh
|
||||
cat > /etc/polkit-1/rules.d/50-org.libvirt.unix.manage.rules <<EOF
|
||||
polkit.addRule(function(action, subject) {
    if (action.id == "org.libvirt.unix.manage" &&
        subject.user == "$USER") {
        polkit.log("action=" + action);
        polkit.log("subject=" + subject);
        return polkit.Result.YES;
    }
});
|
||||
EOF
|
||||
```
|
||||
|
||||
On Ubuntu:
|
||||
|
||||
```sh
|
||||
usermod -a -G libvirtd $USER
|
||||
```
|
||||
|
||||
#### error: Out of memory initializing network (virsh net-create...)
|
||||
|
||||
Ensure libvirtd has been restarted since ebtables was installed.
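On a systemd-based system, restarting the daemon looks like this:

```sh
systemctl restart libvirtd
```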
|
||||
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/libvirt-coreos/

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

# Cluster Level Logging with Elasticsearch and Kibana
|
||||
|
||||
On the Google Compute Engine (GCE) platform the default cluster level logging support targets
|
||||
[Google Cloud Logging](https://cloud.google.com/logging/docs/) as described at the [Logging](logging.md) getting
|
||||
started page. Here we describe how to set up a cluster to ingest logs into Elasticsearch and view them using Kibana as an
|
||||
alternative to Google Cloud Logging.
|
||||
|
||||
To use Elasticsearch and Kibana for cluster logging, you should set the following environment variable as shown below:
|
||||
|
||||
```console
|
||||
KUBE_LOGGING_DESTINATION=elasticsearch
|
||||
```
|
||||
|
||||
You should also ensure that `KUBE_ENABLE_NODE_LOGGING=true` (which is the default for the GCE platform).
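Putting the two variables together, a cluster that ships its logs to Elasticsearch can be brought up with something like the following (a sketch; both variables are read by `kube-up.sh` at cluster creation time):

```console
KUBE_LOGGING_DESTINATION=elasticsearch KUBE_ENABLE_NODE_LOGGING=true cluster/kube-up.sh
```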
|
||||
|
||||
Now when you create a cluster a message will indicate that the Fluentd node-level log collectors
|
||||
will target Elasticsearch:
|
||||
|
||||
```console
|
||||
$ cluster/kube-up.sh
|
||||
...
|
||||
Project: kubernetes-satnam
|
||||
Zone: us-central1-b
|
||||
... calling kube-up
|
||||
Project: kubernetes-satnam
|
||||
Zone: us-central1-b
|
||||
+++ Staging server tars to Google Storage: gs://kubernetes-staging-e6d0e81793/devel
|
||||
+++ kubernetes-server-linux-amd64.tar.gz uploaded (sha1 = 6987c098277871b6d69623141276924ab687f89d)
|
||||
+++ kubernetes-salt.tar.gz uploaded (sha1 = bdfc83ed6b60fa9e3bff9004b542cfc643464cd0)
|
||||
Looking for already existing resources
|
||||
Starting master and configuring firewalls
|
||||
Created [https://www.googleapis.com/compute/v1/projects/kubernetes-satnam/zones/us-central1-b/disks/kubernetes-master-pd].
|
||||
NAME ZONE SIZE_GB TYPE STATUS
|
||||
kubernetes-master-pd us-central1-b 20 pd-ssd READY
|
||||
Created [https://www.googleapis.com/compute/v1/projects/kubernetes-satnam/regions/us-central1/addresses/kubernetes-master-ip].
|
||||
+++ Logging using Fluentd to elasticsearch
|
||||
```
|
||||
|
||||
The node-level Fluentd collector pods, the Elasticsearch pods used to ingest cluster logs, and the pod for the Kibana
viewer should be running in the kube-system namespace soon after the cluster comes to life.
|
||||
|
||||
```console
|
||||
$ kubectl get pods --namespace=kube-system
|
||||
NAME READY REASON RESTARTS AGE
|
||||
elasticsearch-logging-v1-78nog 1/1 Running 0 2h
|
||||
elasticsearch-logging-v1-nj2nb 1/1 Running 0 2h
|
||||
fluentd-elasticsearch-kubernetes-node-5oq0 1/1 Running 0 2h
|
||||
fluentd-elasticsearch-kubernetes-node-6896 1/1 Running 0 2h
|
||||
fluentd-elasticsearch-kubernetes-node-l1ds 1/1 Running 0 2h
|
||||
fluentd-elasticsearch-kubernetes-node-lz9j 1/1 Running 0 2h
|
||||
kibana-logging-v1-bhpo8 1/1 Running 0 2h
|
||||
kube-dns-v3-7r1l9 3/3 Running 0 2h
|
||||
monitoring-heapster-v4-yl332 1/1 Running 1 2h
|
||||
monitoring-influx-grafana-v1-o79xf 2/2 Running 0 2h
|
||||
```
|
||||
|
||||
Here we see that for a four-node cluster there is a `fluentd-elasticsearch` pod running on each node, which gathers
the Docker container logs and sends them to Elasticsearch. The Fluentd collectors communicate with
a Kubernetes service that maps requests to specific Elasticsearch pods. Similarly, Kibana can also be
accessed via a Kubernetes service definition.
|
||||
|
||||
|
||||
```console
|
||||
$ kubectl get services --namespace=kube-system
|
||||
NAME LABELS SELECTOR IP(S) PORT(S)
|
||||
elasticsearch-logging k8s-app=elasticsearch-logging,kubernetes.io/cluster-service=true,kubernetes.io/name=Elasticsearch k8s-app=elasticsearch-logging 10.0.222.57 9200/TCP
|
||||
kibana-logging k8s-app=kibana-logging,kubernetes.io/cluster-service=true,kubernetes.io/name=Kibana k8s-app=kibana-logging 10.0.193.226 5601/TCP
|
||||
kube-dns k8s-app=kube-dns,kubernetes.io/cluster-service=true,kubernetes.io/name=KubeDNS k8s-app=kube-dns 10.0.0.10 53/UDP
|
||||
53/TCP
|
||||
kubernetes component=apiserver,provider=kubernetes <none> 10.0.0.1 443/TCP
|
||||
monitoring-grafana kubernetes.io/cluster-service=true,kubernetes.io/name=Grafana k8s-app=influxGrafana 10.0.167.139 80/TCP
|
||||
monitoring-heapster kubernetes.io/cluster-service=true,kubernetes.io/name=Heapster k8s-app=heapster 10.0.208.221 80/TCP
|
||||
monitoring-influxdb kubernetes.io/cluster-service=true,kubernetes.io/name=InfluxDB k8s-app=influxGrafana 10.0.188.57 8083/TCP
|
||||
```
|
||||
|
||||
By default two Elasticsearch replicas are created and one Kibana replica is created.
|
||||
|
||||
```console
|
||||
$ kubectl get rc --namespace=kube-system
|
||||
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
|
||||
elasticsearch-logging-v1 elasticsearch-logging gcr.io/google_containers/elasticsearch:1.4 k8s-app=elasticsearch-logging,version=v1 2
|
||||
kibana-logging-v1 kibana-logging gcr.io/google_containers/kibana:1.3 k8s-app=kibana-logging,version=v1 1
|
||||
kube-dns-v3 etcd gcr.io/google_containers/etcd:2.0.9 k8s-app=kube-dns,version=v3 1
|
||||
kube2sky gcr.io/google_containers/kube2sky:1.9
|
||||
skydns gcr.io/google_containers/skydns:2015-03-11-001
|
||||
monitoring-heapster-v4 heapster gcr.io/google_containers/heapster:v0.14.3 k8s-app=heapster,version=v4 1
|
||||
monitoring-influx-grafana-v1 influxdb gcr.io/google_containers/heapster_influxdb:v0.3 k8s-app=influxGrafana,version=v1 1
|
||||
grafana gcr.io/google_containers/heapster_grafana:v0.7
|
||||
```
|
||||
|
||||
The Elasticsearch and Kibana services are not directly exposed via a publicly reachable IP address. Instead,
|
||||
they can be accessed via the service proxy running at the master. The URLs for accessing Elasticsearch
|
||||
and Kibana via the service proxy can be found using the `kubectl cluster-info` command.
|
||||
|
||||
```console
|
||||
$ kubectl cluster-info
|
||||
Kubernetes master is running at https://146.148.94.154
|
||||
Elasticsearch is running at https://146.148.94.154/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging
|
||||
Kibana is running at https://146.148.94.154/api/v1/proxy/namespaces/kube-system/services/kibana-logging
|
||||
KubeDNS is running at https://146.148.94.154/api/v1/proxy/namespaces/kube-system/services/kube-dns
|
||||
KubeUI is running at https://146.148.94.154/api/v1/proxy/namespaces/kube-system/services/kube-ui
|
||||
Grafana is running at https://146.148.94.154/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
|
||||
Heapster is running at https://146.148.94.154/api/v1/proxy/namespaces/kube-system/services/monitoring-heapster
|
||||
InfluxDB is running at https://146.148.94.154/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb
|
||||
```
|
||||
|
||||
Before accessing the logs ingested into Elasticsearch using a browser and the service proxy URL, we need to find out
the `admin` password for the cluster using `kubectl config view`.
|
||||
|
||||
```console
|
||||
$ kubectl config view
|
||||
...
|
||||
- name: kubernetes-satnam_kubernetes-basic-auth
|
||||
user:
|
||||
password: 7GlspJ9Q43OnGIJO
|
||||
username: admin
|
||||
...
|
||||
```
|
||||
|
||||
The first time you try to access the cluster from a browser a dialog box appears asking for the username and password.
|
||||
Use the username `admin` and provide the basic auth password reported by `kubectl config view` for the
|
||||
cluster you are trying to connect to. Connecting to the Elasticsearch URL should then give the
|
||||
status page for Elasticsearch.
|
||||
|
||||

|
||||
|
||||
You can now type Elasticsearch queries directly into the browser. Alternatively you can query Elasticsearch
|
||||
from your local machine using `curl` but first you need to know what your bearer token is:
|
||||
|
||||
```console
|
||||
$ kubectl config view --minify
|
||||
apiVersion: v1
|
||||
clusters:
|
||||
- cluster:
|
||||
certificate-authority-data: REDACTED
|
||||
server: https://146.148.94.154
|
||||
name: kubernetes-satnam_kubernetes
|
||||
contexts:
|
||||
- context:
|
||||
cluster: kubernetes-satnam_kubernetes
|
||||
user: kubernetes-satnam_kubernetes
|
||||
name: kubernetes-satnam_kubernetes
|
||||
current-context: kubernetes-satnam_kubernetes
|
||||
kind: Config
|
||||
preferences: {}
|
||||
users:
|
||||
- name: kubernetes-satnam_kubernetes
|
||||
user:
|
||||
client-certificate-data: REDACTED
|
||||
client-key-data: REDACTED
|
||||
token: JsUe2Z3cXqa17UQqQ8qWGGf4nOSLwSnp
|
||||
```
|
||||
|
||||
Now you can issue requests to Elasticsearch:
|
||||
|
||||
```console
|
||||
$ curl --header "Authorization: Bearer JsUe2Z3cXqa17UQqQ8qWGGf4nOSLwSnp" --insecure https://146.148.94.154/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/
|
||||
{
|
||||
"status" : 200,
|
||||
"name" : "Vance Astrovik",
|
||||
"cluster_name" : "kubernetes-logging",
|
||||
"version" : {
|
||||
"number" : "1.5.2",
|
||||
"build_hash" : "62ff9868b4c8a0c45860bebb259e21980778ab1c",
|
||||
"build_timestamp" : "2015-04-27T09:21:06Z",
|
||||
"build_snapshot" : false,
|
||||
"lucene_version" : "4.10.4"
|
||||
},
|
||||
"tagline" : "You Know, for Search"
|
||||
}
|
||||
```
|
||||
|
||||
Note that you need the trailing slash at the end of the service proxy URL. Here is an example of a search:
|
||||
|
||||
```console
|
||||
$ curl --header "Authorization: Bearer JsUe2Z3cXqa17UQqQ8qWGGf4nOSLwSnp" --insecure https://146.148.94.154/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/_search?pretty=true
|
||||
{
|
||||
"took" : 7,
|
||||
"timed_out" : false,
|
||||
"_shards" : {
|
||||
"total" : 6,
|
||||
"successful" : 6,
|
||||
"failed" : 0
|
||||
},
|
||||
"hits" : {
|
||||
"total" : 123711,
|
||||
"max_score" : 1.0,
|
||||
"hits" : [ {
|
||||
"_index" : ".kibana",
|
||||
"_type" : "config",
|
||||
"_id" : "4.0.2",
|
||||
"_score" : 1.0,
|
||||
"_source":{"buildNum":6004,"defaultIndex":"logstash-*"}
|
||||
}, {
|
||||
...
|
||||
"_index" : "logstash-2015.06.22",
|
||||
"_type" : "fluentd",
|
||||
"_id" : "AU4c_GvFZL5p_gZ8dxtx",
|
||||
"_score" : 1.0,
|
||||
"_source":{"log":"synthetic-logger-10lps-pod: 31: 2015-06-22 20:35:33.597918073+00:00\n","stream":"stdout","tag":"kubernetes.synthetic-logger-10lps-pod_default_synth-lgr","@timestamp":"2015-06-22T20:35:33+00:00"}
|
||||
}, {
|
||||
"_index" : "logstash-2015.06.22",
|
||||
"_type" : "fluentd",
|
||||
"_id" : "AU4c_GvFZL5p_gZ8dxt2",
|
||||
"_score" : 1.0,
|
||||
"_source":{"log":"synthetic-logger-10lps-pod: 36: 2015-06-22 20:35:34.108780133+00:00\n","stream":"stdout","tag":"kubernetes.synthetic-logger-10lps-pod_default_synth-lgr","@timestamp":"2015-06-22T20:35:34+00:00"}
|
||||
} ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
The Elasticsearch website contains information about [URI search queries](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-uri-request.html) which can be used to extract the required logs.
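For instance, a URI search that looks for the synthetic logger's lines might look like the sketch below; the query string `log:synthetic` is only an illustration, while the token and endpoint are the same ones used above:

```console
$ curl --header "Authorization: Bearer JsUe2Z3cXqa17UQqQ8qWGGf4nOSLwSnp" --insecure \
  "https://146.148.94.154/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/_search?q=log:synthetic&pretty=true"
```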
|
||||
|
||||
Alternatively you can view the ingested logs using Kibana. The first time you visit the Kibana URL you will be
presented with a page that asks you to configure your view of the ingested logs. Select the option for
timeseries values and select `@timestamp`. On the following page select the `Discover` tab and then you
should be able to see the ingested logs. You can set the refresh interval to 5 seconds to have the logs
regularly refreshed. Here is a typical view of ingested logs from the Kibana viewer.
|
||||
|
||||

|
||||
|
||||
Another way to access Elasticsearch and Kibana in the cluster is to use `kubectl proxy` which will serve
|
||||
a local proxy to the remote master:
|
||||
|
||||
```console
|
||||
$ kubectl proxy
|
||||
Starting to serve on localhost:8001
|
||||
```
|
||||
|
||||
Now you can visit the URL [http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging](http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging) to contact Elasticsearch and [http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kibana-logging](http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kibana-logging) to access the Kibana viewer.
|
||||
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/logging-elasticsearch/

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

# Cluster Level Logging to Google Cloud Logging
|
||||
|
||||
A Kubernetes cluster will typically be humming along running many system and application pods. How does the system administrator collect, manage and query the logs of the system pods? How does a user query the logs of their application which is composed of many pods which may be restarted or automatically generated by the Kubernetes system? These questions are addressed by the Kubernetes **cluster level logging** services.
|
||||
|
||||
Cluster level logging for Kubernetes allows us to collect logs which persist beyond the lifetime of the pod’s container images or the lifetime of the pod or even cluster. In this article we assume that a Kubernetes cluster has been created with cluster level logging support for sending logs to Google Cloud Logging. After a cluster has been created you will have a collection of system pods running in the `kube-system` namespace that support monitoring,
|
||||
logging and DNS resolution for names of Kubernetes services:
|
||||
|
||||
```console
|
||||
$ kubectl get pods --namespace=kube-system
|
||||
NAME READY REASON RESTARTS AGE
|
||||
fluentd-cloud-logging-kubernetes-node-0f64 1/1 Running 0 32m
|
||||
fluentd-cloud-logging-kubernetes-node-27gf 1/1 Running 0 32m
|
||||
fluentd-cloud-logging-kubernetes-node-pk22 1/1 Running 0 31m
|
||||
fluentd-cloud-logging-kubernetes-node-20ej 1/1 Running 0 31m
|
||||
kube-dns-v3-pk22 3/3 Running 0 32m
|
||||
monitoring-heapster-v1-20ej 0/1 Running 9 32m
|
||||
```
|
||||
|
||||
Here is the same information in a picture which shows how the pods might be placed on specific nodes.
|
||||
|
||||

|
||||
|
||||
This diagram shows four nodes created on a Google Compute Engine cluster with the name of each VM node on a purple background. The internal and public IPs of each node are shown on gray boxes and the pods running in each node are shown in green boxes. Each pod box shows the name of the pod and the namespace it runs in, the IP address of the pod and the images which are run as part of the pod’s execution. Here we see that every node is running a fluentd-cloud-logging pod which is collecting the log output of the containers running on the same node and sending them to Google Cloud Logging. A pod which provides the
|
||||
[cluster DNS service](../admin/dns.md) runs on one of the nodes and a pod which provides monitoring support runs on another node.
|
||||
|
||||
To help explain how cluster level logging works let’s start off with a synthetic log generator pod specification [counter-pod.yaml](../../examples/blog-logging/counter-pod.yaml):
|
||||
|
||||
<!-- BEGIN MUNGE: EXAMPLE ../../examples/blog-logging/counter-pod.yaml -->
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: counter
|
||||
spec:
|
||||
containers:
|
||||
- name: count
|
||||
image: ubuntu:14.04
|
||||
args: [bash, -c,
|
||||
'for ((i = 0; ; i++)); do echo "$i: $(date)"; sleep 1; done']
|
||||
```
|
||||
|
||||
[Download example](../../examples/blog-logging/counter-pod.yaml?raw=true)
|
||||
<!-- END MUNGE: EXAMPLE ../../examples/blog-logging/counter-pod.yaml -->
|
||||
|
||||
This pod specification has one container which runs a bash script when the container is born. This script simply writes out the value of a counter and the date once per second and runs indefinitely. Let’s create the pod in the default
|
||||
namespace.
|
||||
|
||||
```console
|
||||
$ kubectl create -f examples/blog-logging/counter-pod.yaml
|
||||
pods/counter
|
||||
```
|
||||
|
||||
We can observe the running pod:
|
||||
|
||||
```console
|
||||
$ kubectl get pods
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
counter 1/1 Running 0 5m
|
||||
```
|
||||
|
||||
This step may take a few minutes while the ubuntu:14.04 image is downloaded, during which the pod status will be shown as `Pending`.
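If the pod stays in `Pending` for a while, describing it is a quick (optional) way to watch the image pull and scheduling events:

```console
$ kubectl describe pods counter
```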
|
||||
|
||||
One of the nodes is now running the counter pod:
|
||||
|
||||

|
||||
|
||||
When the pod status changes to `Running` we can use the kubectl logs command to view the output of this counter pod.
|
||||
|
||||
```console
|
||||
$ kubectl logs counter
|
||||
0: Tue Jun 2 21:37:31 UTC 2015
|
||||
1: Tue Jun 2 21:37:32 UTC 2015
|
||||
2: Tue Jun 2 21:37:33 UTC 2015
|
||||
3: Tue Jun 2 21:37:34 UTC 2015
|
||||
4: Tue Jun 2 21:37:35 UTC 2015
|
||||
5: Tue Jun 2 21:37:36 UTC 2015
|
||||
...
|
||||
```
|
||||
|
||||
This command fetches the log text from the Docker log file for the image that is running in this container. We can connect to the running container and observe the running counter bash script.
|
||||
|
||||
```console
|
||||
$ kubectl exec -i counter bash
|
||||
ps aux
|
||||
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
|
||||
root 1 0.0 0.0 17976 2888 ? Ss 00:02 0:00 bash -c for ((i = 0; ; i++)); do echo "$i: $(date)"; sleep 1; done
|
||||
root 468 0.0 0.0 17968 2904 ? Ss 00:05 0:00 bash
|
||||
root 479 0.0 0.0 4348 812 ? S 00:05 0:00 sleep 1
|
||||
root 480 0.0 0.0 15572 2212 ? R 00:05 0:00 ps aux
|
||||
```
|
||||
|
||||
What happens if for any reason the image in this pod is killed off and then restarted by Kubernetes? Will we still see the log lines from the previous invocation of the container followed by the log lines for the started container? Or will we lose the log lines from the original container’s execution and only see the log lines for the new container? Let’s find out. First let’s delete the currently running counter.
|
||||
|
||||
```console
|
||||
$ kubectl delete pod counter
|
||||
pods/counter
|
||||
```
|
||||
|
||||
Now let’s restart the counter.
|
||||
|
||||
```console
|
||||
$ kubectl create -f examples/blog-logging/counter-pod.yaml
|
||||
pods/counter
|
||||
```
|
||||
|
||||
Let’s wait for the container to restart and get the log lines again.
|
||||
|
||||
```console
|
||||
$ kubectl logs counter
|
||||
0: Tue Jun 2 21:51:40 UTC 2015
|
||||
1: Tue Jun 2 21:51:41 UTC 2015
|
||||
2: Tue Jun 2 21:51:42 UTC 2015
|
||||
3: Tue Jun 2 21:51:43 UTC 2015
|
||||
4: Tue Jun 2 21:51:44 UTC 2015
|
||||
5: Tue Jun 2 21:51:45 UTC 2015
|
||||
6: Tue Jun 2 21:51:46 UTC 2015
|
||||
7: Tue Jun 2 21:51:47 UTC 2015
|
||||
8: Tue Jun 2 21:51:48 UTC 2015
|
||||
```
|
||||
|
||||
We’ve lost the log lines from the first invocation of the container in this pod! Ideally, we want to preserve all the log lines from each invocation of each container in the pod. Furthermore, even if the pod is restarted we would still like to preserve all the log lines that were ever emitted by the containers in the pod. But don’t fear, this is the functionality provided by cluster level logging in Kubernetes. When a cluster is created, the standard output and standard error output of each container can be ingested using a [Fluentd](http://www.fluentd.org/) agent running on each node into either [Google Cloud Logging](https://cloud.google.com/logging/docs/) or into Elasticsearch and viewed with Kibana.
|
||||
|
||||
When a Kubernetes cluster is created with logging to Google Cloud Logging enabled, the system creates a pod called `fluentd-cloud-logging` on each node of the cluster to collect Docker container logs. These pods were shown at the start of this blog article in the response to the first get pods command.
|
||||
|
||||
This log collection pod has a specification which looks something like this:
|
||||
|
||||
<!-- BEGIN MUNGE: EXAMPLE ../../cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml -->
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: fluentd-cloud-logging
|
||||
namespace: kube-system
|
||||
labels:
|
||||
k8s-app: fluentd-logging
|
||||
spec:
|
||||
dnsPolicy: Default
|
||||
containers:
|
||||
- name: fluentd-cloud-logging
|
||||
image: gcr.io/google_containers/fluentd-gcp:1.17
|
||||
resources:
|
||||
limits:
|
||||
cpu: 100m
|
||||
memory: 200Mi
|
||||
env:
|
||||
- name: FLUENTD_ARGS
|
||||
value: -q
|
||||
volumeMounts:
|
||||
- name: varlog
|
||||
mountPath: /var/log
|
||||
- name: varlibdockercontainers
|
||||
mountPath: /var/lib/docker/containers
|
||||
readOnly: true
|
||||
terminationGracePeriodSeconds: 30
|
||||
volumes:
|
||||
- name: varlog
|
||||
hostPath:
|
||||
path: /var/log
|
||||
- name: varlibdockercontainers
|
||||
hostPath:
|
||||
path: /var/lib/docker/containers
|
||||
```
|
||||
|
||||
[Download example](../../cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml?raw=true)
|
||||
<!-- END MUNGE: EXAMPLE ../../cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml -->
|
||||
|
||||
This pod specification maps the directory on the host containing the Docker log files, `/var/lib/docker/containers`, to a directory inside the container which has the same path. The pod runs one image, `gcr.io/google_containers/fluentd-gcp:1.17`, which is configured to collect the Docker log files from the logs directory and ingest them into Google Cloud Logging. One instance of this pod runs on each node of the cluster. Kubernetes will notice if this pod fails and automatically restart it.
|
||||
|
||||
We can click on the Logs item under the Monitoring section of the Google Developer Console and select the logs for the counter container, which will be called kubernetes.counter_default_count. This identifies the name of the pod (counter), the namespace (default) and the name of the container (count) for which the log collection occurred. Using this name we can select just the logs for our counter container from the drop down menu:
|
||||
|
||||

|
||||
|
||||
When we view the logs in the Developer Console we observe the logs for both invocations of the container.
|
||||
|
||||

|
||||
|
||||
Note the first container counted to 108 and then it was terminated. When the next container image restarted the counting process resumed from 0. Similarly if we deleted the pod and restarted it we would capture the logs for all instances of the containers in the pod whenever the pod was running.
|
||||
|
||||
Logs ingested into Google Cloud Logging may be exported to various other destinations including [Google Cloud Storage](https://cloud.google.com/storage/) buckets and [BigQuery](https://cloud.google.com/bigquery/). Use the Exports tab in the Cloud Logging console to specify where logs should be streamed to. You can also follow this link to the
|
||||
[settings tab](https://pantheon.corp.google.com/project/_/logs/settings).
|
||||
|
||||
We could query the ingested logs from BigQuery using a SQL query which reports the counter log lines, showing the newest lines first:
|
||||
|
||||
```console
|
||||
SELECT metadata.timestamp, structPayload.log
|
||||
FROM [mylogs.kubernetes_counter_default_count_20150611]
|
||||
ORDER BY metadata.timestamp DESC
|
||||
```
|
||||
|
||||
Here is some sample output:
|
||||
|
||||

|
||||
|
||||
We could also fetch the logs from Google Cloud Storage buckets to our desktop or laptop and then search them locally. The following command fetches logs for the counter pod running in a cluster which is itself in a Compute Engine project called `myproject`. Only logs for the date 2015-06-11 are fetched.
|
||||
|
||||
|
||||
```console
|
||||
$ gsutil -m cp -r gs://myproject/kubernetes.counter_default_count/2015/06/11 .
|
||||
```
|
||||
|
||||
Now we can run queries over the ingested logs. The example below uses the [jq](http://stedolan.github.io/jq/) program to extract just the log lines.
|
||||
|
||||
```console
|
||||
$ cat 21\:00\:00_21\:59\:59_S0.json | jq '.structPayload.log'
|
||||
"0: Thu Jun 11 21:39:38 UTC 2015\n"
|
||||
"1: Thu Jun 11 21:39:39 UTC 2015\n"
|
||||
"2: Thu Jun 11 21:39:40 UTC 2015\n"
|
||||
"3: Thu Jun 11 21:39:41 UTC 2015\n"
|
||||
"4: Thu Jun 11 21:39:42 UTC 2015\n"
|
||||
"5: Thu Jun 11 21:39:43 UTC 2015\n"
|
||||
"6: Thu Jun 11 21:39:44 UTC 2015\n"
|
||||
"7: Thu Jun 11 21:39:45 UTC 2015\n"
|
||||
...
|
||||
```
|
||||
|
||||
This page has touched briefly on the underlying mechanisms that support gathering cluster level logs on a Kubernetes deployment. The approach here only works for gathering the standard output and standard error output of the processes running in the pod’s containers. To gather other logs that are stored in files, one can use a sidecar container to gather the required files, as described at the page [Collecting log files within containers with Fluentd](https://github.com/kubernetes/contrib/tree/master/logging/fluentd-sidecar-gcp), and send them to the Google Cloud Logging service.
|
||||
|
||||
Some of the material in this section also appears in the blog article [Cluster Level Logging with Kubernetes](http://blog.kubernetes.io/2015/06/cluster-level-logging-with-kubernetes.html).
|
||||
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/logging/

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

Getting Started With Kubernetes on Mesos on Docker
|
||||
----------------------------------------
|
||||
|
||||
The mesos/docker provider uses docker-compose to launch Kubernetes as a Mesos framework, running in docker with its
|
||||
dependencies (etcd & mesos).
|
||||
|
||||
**Table of Contents**
|
||||
<!-- BEGIN MUNGE: GENERATED_TOC -->
|
||||
|
||||
- [Cluster Goals](#cluster-goals)
|
||||
- [Cluster Topology](#cluster-topology)
|
||||
- [Prerequisites](#prerequisites)
|
||||
- [Install on Mac (Homebrew)](#install-on-mac-homebrew)
|
||||
- [Install on Linux](#install-on-linux)
|
||||
- [Docker Machine Config (Mac)](#docker-machine-config-mac)
|
||||
- [Walkthrough](#walkthrough)
|
||||
- [Addons](#addons)
|
||||
- [KubeUI](#kubeui)
|
||||
- [End To End Testing](#end-to-end-testing)
|
||||
- [Kubernetes CLI](#kubernetes-cli)
|
||||
- [Helpful scripts](#helpful-scripts)
|
||||
- [Build Locally](#build-locally)
|
||||
|
||||
<!-- END MUNGE: GENERATED_TOC -->
|
||||
|
||||
|
||||
## Cluster Goals
|
||||
|
||||
- kubernetes development
|
||||
- pod/service development
|
||||
- demoing
|
||||
- fast deployment
|
||||
- minimal hardware requirements
|
||||
- minimal configuration
|
||||
- entry point for exploration
|
||||
- simplified networking
|
||||
- fast end-to-end tests
|
||||
- local deployment
|
||||
|
||||
Non-Goals:
|
||||
- high availability
|
||||
- fault tolerance
|
||||
- remote deployment
|
||||
- production usage
|
||||
- monitoring
|
||||
- long running
|
||||
- state persistence across restarts
|
||||
|
||||
## Cluster Topology
|
||||
|
||||
The cluster consists of several docker containers linked together by docker-managed hostnames:
|
||||
|
||||
| Component | Hostname | Description |
|
||||
|-------------------------------|-----------------------------|-----------------------------------------------------------------------------------------|
|
||||
| docker-grand-ambassador | | Proxy to allow circular hostname linking in docker |
|
||||
| etcd | etcd | Key/Value store used by Mesos |
|
||||
| Mesos Master | mesosmaster1 | REST endpoint for interacting with Mesos |
|
||||
| Mesos Slave (x2)              | mesosslave1<br/>mesosslave2 | Mesos agents that offer resources and run framework executors (e.g. Kubernetes Kubelets) |
|
||||
| Kubernetes API Server | apiserver | REST endpoint for interacting with Kubernetes |
|
||||
| Kubernetes Controller Manager | controller | |
|
||||
| Kubernetes Scheduler | scheduler | Schedules container deployment by accepting Mesos offers |
|
||||
|
||||
## Prerequisites
|
||||
|
||||
Required:
|
||||
- [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) - version control system
|
||||
- [Docker CLI](https://docs.docker.com/) - container management command line client
|
||||
- [Docker Engine](https://docs.docker.com/) - container management daemon
|
||||
- On Mac, use [Docker Machine](https://docs.docker.com/machine/install-machine/)
|
||||
- [Docker Compose](https://docs.docker.com/compose/install/) - multi-container application orchestration
|
||||
|
||||
Optional:
|
||||
- [Virtual Box](https://www.virtualbox.org/wiki/Downloads)
|
||||
- Free x86 virtualization engine with a Docker Machine driver
|
||||
- [Golang](https://golang.org/doc/install) - Go programming language
|
||||
- Required to build Kubernetes locally
|
||||
- [Make](https://en.wikipedia.org/wiki/Make_(software)) - Utility for building executables from source
|
||||
- Required to build Kubernetes locally with make
|
||||
|
||||
### Install on Mac (Homebrew)
|
||||
|
||||
It's possible to install all of the above via [Homebrew](http://brew.sh/) on a Mac.
|
||||
|
||||
Some steps print instructions for configuring or launching. Make sure each is properly set up before continuing to the next step.
|
||||
|
||||
```
|
||||
brew install git
|
||||
brew install caskroom/cask/brew-cask
|
||||
brew cask install virtualbox
|
||||
brew install docker
|
||||
brew install docker-machine
|
||||
brew install docker-compose
|
||||
```
|
||||
|
||||
### Install on Linux
|
||||
|
||||
Most of the above are available via apt and yum, but depending on your distribution, you may have to install via other
|
||||
means to get the latest versions.
|
||||
|
||||
It is recommended to use Ubuntu, simply because it best supports AUFS, used by docker to mount volumes. Alternate file
|
||||
systems may not fully support docker-in-docker.
|
||||
|
||||
In order to build Kubernetes, the current user must be in a docker group with sudo privileges.
|
||||
See the docker docs for [instructions](https://docs.docker.com/installation/ubuntulinux/#create-a-docker-group).
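On Ubuntu, the usual setup is roughly the following sketch (the group name `docker` comes from the linked instructions; you'll need to log out and back in for the group change to take effect):

```
# Add the current user to the docker group so builds can talk to the Docker daemon.
sudo usermod -aG docker $USER
```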
|
||||
|
||||
|
||||
#### Docker Machine Config (Mac)
|
||||
|
||||
If on a Mac using docker-machine, the following steps will make the docker IPs (in the virtualbox VM) reachable from the
host machine (Mac).
|
||||
|
||||
1. Create VM
|
||||
|
||||
oracle-virtualbox
|
||||
|
||||
```
|
||||
docker-machine create --driver virtualbox kube-dev
|
||||
eval "$(docker-machine env kube-dev)"
|
||||
```
|
||||
|
||||
2. Set the VM's host-only network to "promiscuous mode":
|
||||
|
||||
oracle-virtualbox
|
||||
|
||||
```
|
||||
docker-machine stop kube-dev
|
||||
VBoxManage modifyvm kube-dev --nicpromisc2 allow-all
|
||||
docker-machine start kube-dev
|
||||
```
|
||||
|
||||
This allows the VM to accept packets that were sent to a different IP.
|
||||
|
||||
Since the host-only network routes traffic between VMs and the host, other VMs will also be able to access the docker
|
||||
IPs, if they have the following route.
|
||||
|
||||
1. Route traffic to docker through the docker-machine IP:
|
||||
|
||||
```
|
||||
sudo route -n add -net 172.17.0.0 $(docker-machine ip kube-dev)
|
||||
```
|
||||
|
||||
Since the docker-machine IP can change when the VM is restarted, this route may need to be updated over time.
|
||||
To delete the route later: `sudo route delete 172.17.0.0`
|
||||
|
||||
|
||||
## Walkthrough
|
||||
|
||||
1. Checkout source
|
||||
|
||||
```
|
||||
git clone https://github.com/kubernetes/kubernetes
|
||||
cd kubernetes
|
||||
```
|
||||
|
||||
By default, that will get you the bleeding edge of the master branch.
You may want a [release branch](https://github.com/kubernetes/kubernetes/releases) instead,
if you have trouble with master.
|
||||
|
||||
1. Build binaries
|
||||
|
||||
You'll need to build kubectl (CLI) for your local architecture and operating system and the rest of the server binaries for linux/amd64.
|
||||
|
||||
Building a new release covers both cases:
|
||||
|
||||
```
|
||||
KUBERNETES_CONTRIB=mesos build/release.sh
|
||||
```
|
||||
|
||||
For developers, it may be faster to [build locally](#build-locally).
|
||||
|
||||
1. [Optional] Build docker images
|
||||
|
||||
The following docker images are built as part of `./cluster/kube-up.sh`, but it may make sense to build them manually the first time because it may take a while.
|
||||
|
||||
1. Test image includes all the dependencies required for running e2e tests.
|
||||
|
||||
```
|
||||
./cluster/mesos/docker/test/build.sh
|
||||
```
|
||||
|
||||
In the future, this image may be available to download. It doesn't contain anything specific to the current release, except its build dependencies.
|
||||
|
||||
1. Kubernetes-Mesos image includes the compiled linux binaries.
|
||||
|
||||
```
|
||||
./cluster/mesos/docker/km/build.sh
|
||||
```
|
||||
|
||||
This image needs to be built every time you recompile the server binaries.
|
||||
|
||||
1. [Optional] Configure Mesos resources
|
||||
|
||||
By default, the mesos-slaves are configured to offer a fixed amount of resources (cpus, memory, disk, ports).
|
||||
If you want to customize these values, update the `MESOS_RESOURCES` environment variables in `./cluster/mesos/docker/docker-compose.yml`.
|
||||
If you delete the `MESOS_RESOURCES` environment variables, the resource amounts will be auto-detected based on the host resources, which will over-provision by > 2x.
|
||||
|
||||
If the configured resources are not available on the host, you may want to increase the resources available to Docker Engine.
You may have to increase your VM disk, memory, or cpu allocation. See the Docker Machine docs for details
([Virtualbox](https://docs.docker.com/machine/drivers/virtualbox)).
|
||||
|
||||
|
||||
1. Configure provider
|
||||
|
||||
```
|
||||
export KUBERNETES_PROVIDER=mesos/docker
|
||||
```
|
||||
|
||||
This tells cluster scripts to use the code within `cluster/mesos/docker`.
|
||||
|
||||
1. Create cluster
|
||||
|
||||
```
|
||||
./cluster/kube-up.sh
|
||||
```
|
||||
|
||||
If you manually built all the above docker images, you can skip that step during kube-up:
|
||||
|
||||
```
|
||||
MESOS_DOCKER_SKIP_BUILD=true ./cluster/kube-up.sh
|
||||
```
|
||||
|
||||
After deploying the cluster, `~/.kube/config` will be created or updated to configure kubectl to target the new cluster.
|
||||
|
||||
1. Explore examples
|
||||
|
||||
To learn more about Pods, Volumes, Labels, Services, and Replication Controllers, start with the
|
||||
[Kubernetes Walkthrough](../user-guide/walkthrough/).
|
||||
|
||||
To skip to a more advanced example, see the [Guestbook Example](../../examples/guestbook/).
|
||||
|
||||
1. Destroy cluster
|
||||
|
||||
```
|
||||
./cluster/kube-down.sh
|
||||
```
|
||||
|
||||
## Addons
|
||||
|
||||
The `kube-up` for the mesos/docker provider will automatically deploy KubeDNS and KubeUI addons as pods/services.
|
||||
|
||||
Check their status with:
|
||||
|
||||
```
|
||||
./cluster/kubectl.sh get pods --namespace=kube-system
|
||||
```
|
||||
|
||||
### KubeUI
|
||||
|
||||
The web-based Kubernetes UI is accessible in a browser through the API Server proxy: `https://<apiserver>:6443/ui/`.
|
||||
|
||||
By default, basic-auth is configured with user `admin` and password `admin`.
|
||||
|
||||
The IP of the API Server can be found using `./cluster/kubectl.sh cluster-info`.
|
||||
|
||||
|
||||
## End To End Testing
|
||||
|
||||
Warning: e2e tests can take a long time to run. You may not want to run them immediately if you're just getting started.
|
||||
|
||||
While your cluster is up, you can run the end-to-end tests:
|
||||
|
||||
```
|
||||
./cluster/test-e2e.sh
|
||||
```
|
||||
|
||||
Notable parameters:
|
||||
- Increase the logging verbosity: `-v=2`
|
||||
- Run only a subset of the tests (regex matching): `-ginkgo.focus=<pattern>`
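
For example, a run that combines both options might look like this (the focus pattern `Pods` is just an illustration; any Ginkgo regex works):

```
./cluster/test-e2e.sh -v=2 -ginkgo.focus=Pods
```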
|
||||
|
||||
To build, deploy, test, and destroy, all in one command (plus unit & integration tests):
|
||||
|
||||
```
|
||||
make test_e2e
|
||||
```
|
||||
|
||||
|
||||
## Kubernetes CLI
|
||||
|
||||
When compiling from source, it's simplest to use the `./cluster/kubectl.sh` script, which detects your platform &
|
||||
architecture and proxies commands to the appropriate `kubectl` binary.
|
||||
|
||||
ex: `./cluster/kubectl.sh get pods`
|
||||
|
||||
|
||||
## Helpful scripts
|
||||
|
||||
- Kill all docker containers
|
||||
|
||||
```
|
||||
docker ps -q -a | xargs docker rm -f
|
||||
```
|
||||
|
||||
- Clean up unused docker volumes
|
||||
|
||||
```
|
||||
docker run -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/docker:/var/lib/docker --rm martin/docker-cleanup-volumes
|
||||
```
|
||||
|
||||
## Build Locally
|
||||
|
||||
The steps above tell you how to build in a container, for minimal local dependencies. But if you have Go and Make installed, you can build locally much faster:
|
||||
|
||||
```
|
||||
KUBERNETES_CONTRIB=mesos make
|
||||
```
|
||||
|
||||
However, if you're not on Linux, you'll still need to compile the linux/amd64 server binaries:
|
||||
|
||||
```
|
||||
KUBERNETES_CONTRIB=mesos build/run.sh hack/build-go.sh
|
||||
```
|
||||
|
||||
The above two steps should be significantly faster than cross-compiling a whole new release for every supported platform (which is what `./build/release.sh` does).
|
||||
|
||||
Breakdown:
|
||||
|
||||
- `KUBERNETES_CONTRIB=mesos` - enables building of the contrib/mesos binaries
|
||||
- `hack/build-go.sh` - builds the Go binaries for the current architecture (linux/amd64 when in a docker container)
|
||||
- `make` - delegates to `hack/build-go.sh`
|
||||
- `build/run.sh` - executes a command in the build container
|
||||
- `build/release.sh` - cross compiles Kubernetes for all supported architectures and operating systems (slow)
|
||||
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/mesos-docker/
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
|
@@ -32,348 +32,7 @@ Documentation for other releases can be found at
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
Getting started with Kubernetes on Mesos
|
||||
----------------------------------------
|
||||
|
||||
**Table of Contents**
|
||||
<!-- BEGIN MUNGE: GENERATED_TOC -->
|
||||
|
||||
- [About Kubernetes on Mesos](#about-kubernetes-on-mesos)
|
||||
- [Prerequisites](#prerequisites)
|
||||
- [Deploy Kubernetes-Mesos](#deploy-kubernetes-mesos)
|
||||
- [Deploy etcd](#deploy-etcd)
|
||||
- [Start Kubernetes-Mesos Services](#start-kubernetes-mesos-services)
|
||||
- [Validate KM Services](#validate-km-services)
|
||||
- [Spin up a pod](#spin-up-a-pod)
|
||||
- [Launching kube-dns](#launching-kube-dns)
|
||||
- [What next?](#what-next)
|
||||
|
||||
<!-- END MUNGE: GENERATED_TOC -->
|
||||
|
||||
|
||||
## About Kubernetes on Mesos
|
||||
|
||||
<!-- TODO: Update, clean up. -->
|
||||
|
||||
Mesos allows dynamic sharing of cluster resources between Kubernetes and other first-class Mesos frameworks such as [Hadoop][1], [Spark][2], and [Chronos][3].
|
||||
Mesos also ensures applications from different frameworks running on your cluster are isolated and that resources are allocated fairly among them.
|
||||
|
||||
Mesos clusters can be deployed on nearly every IaaS cloud provider infrastructure or in your own physical datacenter. Kubernetes on Mesos runs on top of that and therefore allows you to easily move Kubernetes workloads from one of these environments to the other.
|
||||
|
||||
This tutorial will walk you through setting up Kubernetes on a Mesos cluster.
|
||||
It provides a step-by-step walkthrough of adding Kubernetes to a Mesos cluster and starting your first pod with an nginx webserver.
|
||||
|
||||
**NOTE:** There are [known issues with the current implementation][7] and support for centralized logging and monitoring is not yet available.
|
||||
Please [file an issue against the kubernetes-mesos project][8] if you have problems completing the steps below.
|
||||
|
||||
Further information is available in the Kubernetes on Mesos [contrib directory][13].
|
||||
|
||||
### Prerequisites
|
||||
|
||||
- Understanding of [Apache Mesos][6]
|
||||
- A running [Mesos cluster on Google Compute Engine][5]
|
||||
- A [VPN connection][10] to the cluster
|
||||
- A machine in the cluster which should become the Kubernetes *master node* with:
|
||||
- Go (see [here](../devel/development.md#go-versions) for required versions)
|
||||
- make (e.g. via the build-essential package)
|
||||
- Docker
|
||||
|
||||
**Note**: You *can*, but you *don't have to* deploy Kubernetes-Mesos on the same machine the Mesos master is running on.
|
||||
|
||||
### Deploy Kubernetes-Mesos
|
||||
|
||||
Log into the future Kubernetes *master node* over SSH, replacing the placeholder below with the correct IP address.
|
||||
|
||||
```bash
|
||||
ssh jclouds@${ip_address_of_master_node}
|
||||
```
|
||||
|
||||
Build Kubernetes-Mesos.
|
||||
|
||||
```bash
|
||||
git clone https://github.com/kubernetes/kubernetes
|
||||
cd kubernetes
|
||||
export KUBERNETES_CONTRIB=mesos
|
||||
make
|
||||
```
|
||||
|
||||
Set some environment variables.
|
||||
The internal IP address of the master may be obtained via `hostname -i`.
|
||||
|
||||
```bash
|
||||
export KUBERNETES_MASTER_IP=$(hostname -i)
|
||||
export KUBERNETES_MASTER=http://${KUBERNETES_MASTER_IP}:8888
|
||||
```
|
||||
|
||||
Note that `KUBERNETES_MASTER` is used as the API endpoint. If you have an existing `~/.kube/config` that points to another endpoint, you need to add the `--server=${KUBERNETES_MASTER}` option to kubectl in later steps.
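For example, a later `kubectl` invocation with an explicit endpoint might look like this (a sketch; any kubectl command works the same way):

```bash
kubectl --server=${KUBERNETES_MASTER} get pods
```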
|
||||
|
||||
### Deploy etcd
|
||||
|
||||
Start etcd and verify that it is running:
|
||||
|
||||
```bash
|
||||
sudo docker run -d --hostname $(uname -n) --name etcd \
|
||||
-p 4001:4001 -p 7001:7001 quay.io/coreos/etcd:v2.2.1 \
|
||||
--listen-client-urls http://0.0.0.0:4001 \
|
||||
--advertise-client-urls http://${KUBERNETES_MASTER_IP}:4001
|
||||
```
|
||||
|
||||
```console
|
||||
$ sudo docker ps
|
||||
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
|
||||
fd7bac9e2301 quay.io/coreos/etcd:v2.2.1 "/etcd" 5s ago Up 3s 2379/tcp, 2380/... etcd
|
||||
```
|
||||
|
||||
It's also a good idea to ensure your etcd instance is reachable by testing it:
|
||||
|
||||
```bash
|
||||
curl -L http://${KUBERNETES_MASTER_IP}:4001/v2/keys/
|
||||
```
|
||||
|
||||
If connectivity is OK, you will see an output of the available keys in etcd (if any).
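For an empty keyspace, the response is a small JSON document roughly like the following (illustrative output):

```console
{"action":"get","node":{"dir":true}}
```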
|
||||
|
||||
### Start Kubernetes-Mesos Services
|
||||
|
||||
Update your PATH to more easily run the Kubernetes-Mesos binaries:
|
||||
|
||||
```bash
|
||||
export PATH="$(pwd)/_output/local/go/bin:$PATH"
|
||||
```
|
||||
|
||||
Identify your Mesos master: depending on your Mesos installation this is either a `host:port` like `mesos-master:5050` or a ZooKeeper URL like `zk://zookeeper:2181/mesos`.
|
||||
In order to let Kubernetes survive Mesos master changes, the ZooKeeper URL is recommended for production environments.
|
||||
|
||||
```bash
|
||||
export MESOS_MASTER=<host:port or zk:// url>
|
||||
```
|
||||
|
||||
Create a cloud config file `mesos-cloud.conf` in the current directory with the following contents:
|
||||
|
||||
```console
|
||||
$ cat <<EOF >mesos-cloud.conf
|
||||
[mesos-cloud]
|
||||
mesos-master = ${MESOS_MASTER}
|
||||
EOF
|
||||
```
|
||||
|
||||
Now start the kubernetes-mesos API server, controller manager, and scheduler on the master node:
|
||||
|
||||
```console
|
||||
$ km apiserver \
|
||||
--address=${KUBERNETES_MASTER_IP} \
|
||||
--etcd-servers=http://${KUBERNETES_MASTER_IP}:4001 \
|
||||
--service-cluster-ip-range=10.10.10.0/24 \
|
||||
--port=8888 \
|
||||
--cloud-provider=mesos \
|
||||
--cloud-config=mesos-cloud.conf \
|
||||
--secure-port=0 \
|
||||
--v=1 >apiserver.log 2>&1 &
|
||||
|
||||
$ km controller-manager \
|
||||
--master=${KUBERNETES_MASTER_IP}:8888 \
|
||||
--cloud-provider=mesos \
|
||||
--cloud-config=./mesos-cloud.conf \
|
||||
--v=1 >controller.log 2>&1 &
|
||||
|
||||
$ km scheduler \
|
||||
--address=${KUBERNETES_MASTER_IP} \
|
||||
--mesos-master=${MESOS_MASTER} \
|
||||
--etcd-servers=http://${KUBERNETES_MASTER_IP}:4001 \
|
||||
--mesos-user=root \
|
||||
--api-servers=${KUBERNETES_MASTER_IP}:8888 \
|
||||
--cluster-dns=10.10.10.10 \
|
||||
--cluster-domain=cluster.local \
|
||||
--v=2 >scheduler.log 2>&1 &
|
||||
```
|
||||
|
||||
Disown your background jobs so that they'll stay running if you log out.
|
||||
|
||||
```bash
|
||||
disown -a
|
||||
```
|
||||
|
||||
#### Validate KM Services
|
||||
|
||||
Interact with the kubernetes-mesos framework via `kubectl`:
|
||||
|
||||
```console
|
||||
$ kubectl get pods
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
```
|
||||
|
||||
```console
|
||||
# NOTE: your service IPs will likely differ
|
||||
$ kubectl get services
|
||||
NAME LABELS SELECTOR IP(S) PORT(S)
|
||||
k8sm-scheduler component=scheduler,provider=k8sm <none> 10.10.10.113 10251/TCP
|
||||
kubernetes component=apiserver,provider=kubernetes <none> 10.10.10.1 443/TCP
|
||||
```
|
||||
|
||||
Lastly, look for Kubernetes in the Mesos web GUI by pointing your browser to
|
||||
`http://<mesos-master-ip:port>`. Make sure you have an active VPN connection.
|
||||
Go to the Frameworks tab, and look for an active framework named "Kubernetes".
|
||||
|
||||
## Spin up a pod
|
||||
|
||||
Write a JSON pod description to a local file:
|
||||
|
||||
```bash
|
||||
$ cat <<EOPOD >nginx.yaml
|
||||
```
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: nginx
|
||||
spec:
|
||||
containers:
|
||||
- name: nginx
|
||||
image: nginx
|
||||
ports:
|
||||
- containerPort: 80
|
||||
EOPOD
|
||||
```
|
||||
|
||||
Send the pod description to Kubernetes using the `kubectl` CLI:
|
||||
|
||||
```console
|
||||
$ kubectl create -f ./nginx.yaml
|
||||
pods/nginx
|
||||
```
|
||||
|
||||
Wait a minute or two while `dockerd` downloads the image layers from the internet.
|
||||
We can use the `kubectl` interface to monitor the status of our pod:
|
||||
|
||||
```console
|
||||
$ kubectl get pods
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
nginx 1/1 Running 0 14s
|
||||
```
|
||||
|
||||
Verify that the pod task is running in the Mesos web GUI. Click on the
|
||||
Kubernetes framework. The next screen should show the running Mesos task that
|
||||
started the Kubernetes pod.
|
||||
|
||||
## Launching kube-dns
|
||||
|
||||
Kube-dns is an addon for Kubernetes which adds DNS-based service discovery to the cluster. For a detailed explanation see [DNS in Kubernetes][4].
|
||||
|
||||
The kube-dns addon runs as a pod inside the cluster. The pod consists of three co-located containers:
|
||||
|
||||
- a local etcd instance
|
||||
- the [skydns][11] DNS server
|
||||
- the kube2sky process to glue skydns to the state of the Kubernetes cluster.
|
||||
|
||||
The skydns container offers DNS service on port 53 to the cluster. The etcd communication happens locally over 127.0.0.1.
|
||||
|
||||
We assume that kube-dns will use
|
||||
|
||||
- the service IP `10.10.10.10`
|
||||
- and the `cluster.local` domain.
|
||||
|
||||
Note that we already passed these two values as parameters to the apiserver above.
|
||||
|
||||
A template for a replication controller spinning up the pod with the three containers can be found at [cluster/addons/dns/skydns-rc.yaml.in][11] in the repository. The following steps are necessary in order to get a valid replication controller yaml file:
|
||||
|
||||
- replace `{{ pillar['dns_replicas'] }}` with `1`
|
||||
- replace `{{ pillar['dns_domain'] }}` with `cluster.local.`
|
||||
- add `--kube_master_url=${KUBERNETES_MASTER}` parameter to the kube2sky container command.
|
||||
|
||||
In addition the service template at [cluster/addons/dns/skydns-svc.yaml.in][12] needs the following replacement:
|
||||
|
||||
- `{{ pillar['dns_server'] }}` with `10.10.10.10`.
|
||||
|
||||
To do this automatically:
|
||||
|
||||
```bash
|
||||
sed -e "s/{{ pillar\['dns_replicas'\] }}/1/g;"\
|
||||
"s,\(command = \"/kube2sky\"\),\\1\\"$'\n'" - --kube_master_url=${KUBERNETES_MASTER},;"\
|
||||
"s/{{ pillar\['dns_domain'\] }}/cluster.local/g" \
|
||||
cluster/addons/dns/skydns-rc.yaml.in > skydns-rc.yaml
|
||||
sed -e "s/{{ pillar\['dns_server'\] }}/10.10.10.10/g" \
|
||||
cluster/addons/dns/skydns-svc.yaml.in > skydns-svc.yaml
|
||||
```
|
||||
|
||||
Now the kube-dns pod and service are ready to be launched:
|
||||
|
||||
```bash
|
||||
kubectl create -f ./skydns-rc.yaml
|
||||
kubectl create -f ./skydns-svc.yaml
|
||||
```
|
||||
|
||||
Check with `kubectl get pods --namespace=kube-system` that 3/3 containers of the pods are eventually up and running. Note that the kube-dns pods run in the `kube-system` namespace, not in `default`.
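The output should eventually look something like this (the pod name suffix is generated and will differ):

```console
$ kubectl get pods --namespace=kube-system
NAME                 READY     STATUS    RESTARTS   AGE
kube-dns-v8-i0yac    3/3       Running   0          1m
```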
|
||||
|
||||
To check that the new DNS service in the cluster works, we start a busybox pod and use that to do a DNS lookup. First create the `busybox.yaml` pod spec:
|
||||
|
||||
```bash
|
||||
cat <<EOF >busybox.yaml
|
||||
```
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: busybox
|
||||
namespace: default
|
||||
spec:
|
||||
containers:
|
||||
- image: busybox
|
||||
command:
|
||||
- sleep
|
||||
- "3600"
|
||||
imagePullPolicy: IfNotPresent
|
||||
name: busybox
|
||||
restartPolicy: Always
|
||||
EOF
|
||||
```
|
||||
|
||||
Then start the pod:
|
||||
|
||||
```bash
|
||||
kubectl create -f ./busybox.yaml
|
||||
```
|
||||
|
||||
When the pod is up and running, start a lookup for the Kubernetes master service, made available on 10.10.10.1 by default:
|
||||
|
||||
```bash
|
||||
kubectl exec busybox -- nslookup kubernetes
|
||||
```
|
||||
|
||||
If everything works fine, you will get this output:
|
||||
|
||||
```console
|
||||
Server: 10.10.10.10
|
||||
Address 1: 10.10.10.10
|
||||
|
||||
Name: kubernetes
|
||||
Address 1: 10.10.10.1
|
||||
```
|
||||
|
||||
## What next?
|
||||
|
||||
Try out some of the standard [Kubernetes examples][9].
|
||||
|
||||
Read about Kubernetes on Mesos' architecture in the [contrib directory][13].
|
||||
|
||||
**NOTE:** Some examples require Kubernetes DNS to be installed on the cluster.
|
||||
Future work will add instructions to this guide to enable support for Kubernetes DNS.
|
||||
|
||||
**NOTE:** Please be aware that there are [known issues with the current Kubernetes-Mesos implementation][7].
|
||||
|
||||
[1]: http://mesosphere.com/docs/tutorials/run-hadoop-on-mesos-using-installer
|
||||
[2]: http://mesosphere.com/docs/tutorials/run-spark-on-mesos
|
||||
[3]: http://mesosphere.com/docs/tutorials/run-chronos-on-mesos
|
||||
[4]: ../../cluster/addons/dns/README.md
|
||||
[5]: http://open.mesosphere.com/getting-started/cloud/google/mesosphere/
|
||||
[6]: http://mesos.apache.org/
|
||||
[7]: ../../contrib/mesos/docs/issues.md
|
||||
[8]: https://github.com/mesosphere/kubernetes-mesos/issues
|
||||
[9]: ../../examples/
|
||||
[10]: http://open.mesosphere.com/getting-started/cloud/google/mesosphere/#vpn-setup
|
||||
[11]: ../../cluster/addons/dns/skydns-rc.yaml.in
|
||||
[12]: ../../cluster/addons/dns/skydns-svc.yaml.in
|
||||
[13]: ../../contrib/mesos/README.md
|
||||
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/mesos/
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
|
@@ -31,60 +31,8 @@ Documentation for other releases can be found at
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
Getting started on oVirt
|
||||
------------------------
|
||||
|
||||
**Table of Contents**
|
||||
|
||||
- [What is oVirt](#what-is-ovirt)
|
||||
- [oVirt Cloud Provider Deployment](#ovirt-cloud-provider-deployment)
|
||||
- [Using the oVirt Cloud Provider](#using-the-ovirt-cloud-provider)
|
||||
- [oVirt Cloud Provider Screencast](#ovirt-cloud-provider-screencast)
|
||||
|
||||
## What is oVirt
|
||||
|
||||
oVirt is a virtual datacenter manager that delivers powerful management of multiple virtual machines on multiple hosts. Using KVM and libvirt, oVirt can be installed on Fedora, CentOS, or Red Hat Enterprise Linux hosts to set up and manage your virtual data center.
|
||||
|
||||
## oVirt Cloud Provider Deployment
|
||||
|
||||
The oVirt cloud provider allows you to easily discover and automatically add new VM instances as nodes to your Kubernetes cluster.
|
||||
At the moment there are no community-supported or pre-loaded VM images that include Kubernetes, but it is possible to [import] or [install] Project Atomic (or Fedora) in a VM and then [generate a template]. Any other distribution that includes Kubernetes may work as well.
|
||||
|
||||
It is mandatory to [install the ovirt-guest-agent] in the guests for the VM IP address and hostname to be reported to ovirt-engine and ultimately to Kubernetes.
|
||||
|
||||
Once the Kubernetes template is available it is possible to start instantiating VMs that can be discovered by the cloud provider.
|
||||
|
||||
[import]: http://ovedou.blogspot.it/2014/03/importing-glance-images-as-ovirt.html
|
||||
[install]: http://www.ovirt.org/Quick_Start_Guide#Create_Virtual_Machines
|
||||
[generate a template]: http://www.ovirt.org/Quick_Start_Guide#Using_Templates
|
||||
[install the ovirt-guest-agent]: http://www.ovirt.org/documentation/how-to/guest-agent/install-the-guest-agent-in-fedora/
|
||||
|
||||
## Using the oVirt Cloud Provider
|
||||
|
||||
The oVirt cloud provider requires access to the oVirt REST API to gather the proper information. The required credentials should be specified in the `ovirt-cloud.conf` file:
|
||||
|
||||
    [connection]
    uri = https://localhost:8443/ovirt-engine/api
    username = admin@internal
    password = admin
|
||||
|
||||
In the same file it is possible to specify (using the `filters` section) what search query to use to identify the VMs to be reported to Kubernetes:
|
||||
|
||||
    [filters]
    # Search query used to find nodes
    vms = tag=kubernetes
|
||||
|
||||
In the above example all the VMs tagged with the `kubernetes` label will be reported as nodes to Kubernetes.
|
||||
|
||||
The `ovirt-cloud.conf` file then must be specified in kube-controller-manager:
|
||||
|
||||
    kube-controller-manager ... --cloud-provider=ovirt --cloud-config=/path/to/ovirt-cloud.conf ...
|
||||
|
||||
## oVirt Cloud Provider Screencast
|
||||
|
||||
This short screencast demonstrates how the oVirt Cloud Provider can be used to dynamically add VMs to your Kubernetes cluster.
|
||||
|
||||
[](http://www.youtube.com/watch?v=JyyST4ZKne8)
|
||||
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/ovirt/
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
|
@@ -31,77 +31,8 @@ Documentation for other releases can be found at
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
Getting started on Rackspace
|
||||
----------------------------
|
||||
|
||||
**Table of Contents**
|
||||
|
||||
- [Introduction](#introduction)
|
||||
- [Prerequisites](#prerequisites)
|
||||
- [Provider: Rackspace](#provider-rackspace)
|
||||
- [Build](#build)
|
||||
- [Cluster](#cluster)
|
||||
- [Some notes:](#some-notes)
|
||||
- [Network Design](#network-design)
|
||||
|
||||
## Introduction
|
||||
|
||||
* Supported Version: v0.18.1
|
||||
|
||||
In general, the dev-build-and-up.sh workflow for Rackspace is similar to the Google Compute Engine one. The specific implementation differs due to the use of CoreOS, Rackspace Cloud Files, and the overall network design.
|
||||
|
||||
These scripts should be used to deploy development environments for Kubernetes. If your account leverages RackConnect or non-standard networking, these scripts will most likely not work without modification.
|
||||
|
||||
NOTE: The rackspace scripts do NOT rely on `saltstack` and instead rely on cloud-init for configuration.
|
||||
|
||||
The current cluster design is inspired by:
|
||||
- [corekube](https://github.com/metral/corekube)
|
||||
- [Angus Lees](https://github.com/anguslees/kube-openstack)
|
||||
|
||||
## Prerequisites
|
||||
|
||||
1. Python2.7
|
||||
2. You need to have both `nova` and `swiftly` installed. It's recommended to use a Python virtualenv to install these packages into (see the sketch after this list).
|
||||
3. Make sure you have the appropriate environment variables set to interact with the OpenStack APIs. See [Rackspace Documentation](http://docs.rackspace.com/servers/api/v2/cs-gettingstarted/content/section_gs_install_nova.html) for more details.
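A minimal way to get these tools into a virtualenv might look like the following (the package names and virtualenv path are assumptions; adjust for your environment):

```console
$ virtualenv ~/k8s-rackspace && source ~/k8s-rackspace/bin/activate
$ pip install python-novaclient swiftly
```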
|
||||
|
||||
## Provider: Rackspace
|
||||
|
||||
- To build your own released version from source, use `export KUBERNETES_PROVIDER=rackspace` and run `bash hack/dev-build-and-up.sh`.
|
||||
- Note: The get.k8s.io install method is not working yet for our scripts.
|
||||
* To install the latest released version of Kubernetes use `export KUBERNETES_PROVIDER=rackspace; wget -q -O - https://get.k8s.io | bash`
|
||||
|
||||
## Build
|
||||
|
||||
1. The Kubernetes binaries will be built via the common build scripts in `build/`.
|
||||
2. If you've set the ENV `KUBERNETES_PROVIDER=rackspace`, the scripts will upload `kubernetes-server-linux-amd64.tar.gz` to Cloud Files.
|
||||
3. A cloud files container will be created via the `swiftly` CLI and a temp URL will be enabled on the object.
|
||||
4. The built `kubernetes-server-linux-amd64.tar.gz` will be uploaded to this container and the URL will be passed to master/nodes when booted.
|
||||
|
||||
## Cluster
|
||||
|
||||
There is a specific `cluster/rackspace` directory with the scripts for the following steps:
|
||||
|
||||
1. A cloud network will be created and all instances will be attached to this network.
|
||||
- flanneld uses this network for next hop routing. These routes allow the containers running on each node to communicate with one another on this private network.
|
||||
2. An SSH key will be created and uploaded if needed. This key must be used to ssh into the machines (we do not capture the password).
|
||||
3. The master server and additional nodes will be created via the `nova` CLI. A `cloud-config.yaml` is generated and provided as user-data with the entire configuration for the systems.
|
||||
4. We then boot as many nodes as defined via `$NUM_NODES`.
|
||||
|
||||
## Some notes
|
||||
|
||||
- The scripts expect `eth2` to be the cloud network that the containers will communicate across.
|
||||
- A number of the items in `config-default.sh` are overridable via environment variables (see the sketch after this list).
|
||||
- For older versions please either:
|
||||
* Sync back to `v0.9` with `git checkout v0.9`
|
||||
* Download a [snapshot of `v0.9`](https://github.com/kubernetes/kubernetes/archive/v0.9.tar.gz)
|
||||
* Sync back to `v0.3` with `git checkout v0.3`
|
||||
* Download a [snapshot of `v0.3`](https://github.com/kubernetes/kubernetes/archive/v0.3.tar.gz)
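As noted above, many `config-default.sh` values can be overridden from the environment; a hypothetical invocation could look like this (the node count is only an example):

```console
$ export KUBERNETES_PROVIDER=rackspace
$ NUM_NODES=4 bash hack/dev-build-and-up.sh
```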
|
||||
|
||||
## Network Design
|
||||
|
||||
- eth0 - Public Interface used for servers/containers to reach the internet
|
||||
- eth1 - ServiceNet - Intra-cluster communication (k8s, etcd, etc.) travels over this interface. The `cloud-config` files use the special CoreOS identifier `$private_ipv4` to configure the services.
|
||||
- eth2 - Cloud Network - Used for k8s pods to communicate with one another. The proxy service will pass traffic via this interface.
|
||||
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/rackspace/
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
|
@@ -32,223 +32,8 @@ Documentation for other releases can be found at
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
# Run Kubernetes with rkt
|
||||
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/rkt/README/
|
||||
|
||||
This document describes how to run Kubernetes using [rkt](https://github.com/coreos/rkt) as a container runtime.
|
||||
We still have [a bunch of work](http://issue.k8s.io/8262) to do to make the experience with rkt wonderful, so please stay tuned!
|
||||
|
||||
### **Prerequisite**
|
||||
|
||||
- [systemd](http://www.freedesktop.org/wiki/Software/systemd/) should be installed on the machine and should be enabled. The minimum version required at this moment (2015/09/01) is 219
|
||||
*(Note that systemd is not required by rkt itself, we are using it here to monitor and manage the pods launched by kubelet.)*
|
||||
|
||||
- Install the latest rkt release according to the instructions [here](https://github.com/coreos/rkt).
|
||||
The minimum version required for now is [v0.8.0](https://github.com/coreos/rkt/releases/tag/v0.8.0).
|
||||
|
||||
- Note that for rkt versions later than v0.7.0, the `metadata service` is not required for running pods in private networks, so rkt pods will no longer register with the metadata service by default.
|
||||
|
||||
- Since release [v1.2.0-alpha.5](https://github.com/kubernetes/kubernetes/releases/tag/v1.2.0-alpha.5),
|
||||
the [rkt API service](https://github.com/coreos/rkt/blob/master/api/v1alpha/README.md)
|
||||
must be running on the node.
|
||||
|
||||
### Network Setup
|
||||
|
||||
rkt uses the [Container Network Interface (CNI)](https://github.com/appc/cni)
|
||||
to manage container networking. By default, all pods attempt to join a network
|
||||
called `rkt.kubernetes.io`, which is currently defined [in `rkt.go`]
|
||||
(https://github.com/kubernetes/kubernetes/blob/v1.2.0-alpha.6/pkg/kubelet/rkt/rkt.go#L91).
|
||||
In order for pods to get correct IP addresses, the CNI config file must be
|
||||
edited to add this `rkt.kubernetes.io` network:
|
||||
|
||||
#### Using flannel
|
||||
|
||||
In addition to the basic prerequisites above, each node must be running
|
||||
a [flannel](https://github.com/coreos/flannel) daemon. This implies
|
||||
that a flannel-supporting etcd service must be available to the cluster
|
||||
as well, apart from the Kubernetes etcd, which will not yet be
|
||||
available at flannel configuration time. Once it's running, flannel can
|
||||
be set up with a CNI config like:
|
||||
|
||||
```console
|
||||
$ cat <<EOF >/etc/rkt/net.d/k8s_cluster.conf
|
||||
{
|
||||
"name": "rkt.kubernetes.io",
|
||||
"type": "flannel"
|
||||
}
|
||||
EOF
|
||||
```
|
||||
|
||||
While `k8s_cluster.conf` is a rather arbitrary name for the config file itself,
|
||||
and can be adjusted to suit local conventions, the keys and values should be exactly
|
||||
as shown above. `name` must be `rkt.kubernetes.io` and `type` should be `flannel`.
|
||||
More details about the flannel CNI plugin can be found
|
||||
[in the CNI documentation](https://github.com/appc/cni/blob/master/Documentation/flannel.md).
|
||||
|
||||
#### On GCE
|
||||
|
||||
Each VM on GCE has an additional 256 IP addresses routed to it, so
|
||||
it is possible to forego flannel in smaller clusters. This makes the
|
||||
necessary CNI config file a bit more verbose:
|
||||
|
||||
```console
|
||||
$ cat <<EOF >/etc/rkt/net.d/k8s_cluster.conf
|
||||
{
|
||||
"name": "rkt.kubernetes.io",
|
||||
"type": "bridge",
|
||||
"bridge": "cbr0",
|
||||
"isGateway": true,
|
||||
"ipam": {
|
||||
"type": "host-local",
|
||||
"subnet": "10.255.228.1/24",
|
||||
"gateway": "10.255.228.1"
|
||||
},
|
||||
"routes": [
|
||||
{ "dst": "0.0.0.0/0" }
|
||||
]
|
||||
}
|
||||
EOF
|
||||
```
|
||||
|
||||
This example creates a `bridge` plugin configuration for the CNI network, specifying
|
||||
the bridge name `cbr0`. It also specifies the CIDR, in the `ipam` field.
|
||||
|
||||
Creating these files for any moderately-sized cluster is at best inconvenient.
|
||||
Work is in progress to
|
||||
[enable Kubernetes to use the CNI by default]
|
||||
(https://github.com/kubernetes/kubernetes/pull/18795/files).
|
||||
As that work matures, such manual CNI config munging will become unnecessary
|
||||
for primary use cases. For early adopters, an initial example shows one way to
|
||||
[automatically generate these CNI configurations]
|
||||
(https://gist.github.com/yifan-gu/fbb911db83d785915543)
|
||||
for rkt.
|
||||
|
||||
### Local cluster
|
||||
|
||||
To use rkt as the container runtime, we need to supply the following flags to kubelet:
|
||||
- `--container-runtime=rkt` chooses the container runtime to use. Possible values: 'docker', 'rkt'. Default: 'docker'.
|
||||
- `--rkt-path=$PATH_TO_RKT_BINARY` sets the path of the rkt binary. Leave empty to use the first rkt in $PATH.
|
||||
- `--rkt-stage1-image` sets the path of the stage1 image. Local paths and http/https URLs are supported. Leave empty to use the 'stage1.aci' located in the same directory as the rkt binary.
|
||||
|
||||
If you are using the [hack/local-up-cluster.sh](../../../hack/local-up-cluster.sh) script to launch the local cluster, you can edit the environment variables `CONTAINER_RUNTIME`, `RKT_PATH`, and `RKT_STAGE1_IMAGE` to set these flags:
|
||||
|
||||
```console
|
||||
$ export CONTAINER_RUNTIME=rkt
|
||||
$ export RKT_PATH=$PATH_TO_RKT_BINARY
|
||||
$ export RKT_STAGE1_IMAGE=$PATH_TO_STAGE1_IMAGE
|
||||
```
|
||||
|
||||
Then we can launch the local cluster using the script:
|
||||
|
||||
```console
|
||||
$ hack/local-up-cluster.sh
|
||||
```
|
||||
|
||||
### CoreOS cluster on Google Compute Engine (GCE)
|
||||
|
||||
To use rkt as the container runtime for your CoreOS cluster on GCE, you need to specify the OS distribution, project, and image:
|
||||
|
||||
```console
|
||||
$ export KUBE_OS_DISTRIBUTION=coreos
|
||||
$ export KUBE_GCE_NODE_IMAGE=<image_id>
|
||||
$ export KUBE_GCE_NODE_PROJECT=coreos-cloud
|
||||
$ export KUBE_CONTAINER_RUNTIME=rkt
|
||||
```
|
||||
|
||||
You can optionally choose the version of rkt used by setting `KUBE_RKT_VERSION`:
|
||||
|
||||
```console
|
||||
$ export KUBE_RKT_VERSION=0.15.0
|
||||
```
|
||||
|
||||
Then you can launch the cluster by:
|
||||
|
||||
```console
|
||||
$ cluster/kube-up.sh
|
||||
```
|
||||
|
||||
Note that we are still working on making all of the containerized master components run smoothly under rkt. Until then, the master node cannot be run with rkt.
|
||||
|
||||
### CoreOS cluster on AWS
|
||||
|
||||
To use rkt as the container runtime for your CoreOS cluster on AWS, you need to specify the provider and OS distribution:
|
||||
|
||||
```console
|
||||
$ export KUBERNETES_PROVIDER=aws
|
||||
$ export KUBE_OS_DISTRIBUTION=coreos
|
||||
$ export KUBE_CONTAINER_RUNTIME=rkt
|
||||
```
|
||||
|
||||
You can optionally choose the version of rkt used by setting `KUBE_RKT_VERSION`:
|
||||
|
||||
```console
|
||||
$ export KUBE_RKT_VERSION=0.8.0
|
||||
```
|
||||
|
||||
You can optionally choose the CoreOS channel by setting `COREOS_CHANNEL`:
|
||||
|
||||
```console
|
||||
$ export COREOS_CHANNEL=stable
|
||||
```
|
||||
|
||||
Then you can launch the cluster by:
|
||||
|
||||
```console
|
||||
$ kube-up.sh
|
||||
```
|
||||
|
||||
Note: CoreOS is not supported as the master using the automated launch
|
||||
scripts. The master node is always Ubuntu.
|
||||
|
||||
### Getting started with your cluster
|
||||
|
||||
See [a simple nginx example](../../../docs/user-guide/simple-nginx.md) to try out your new cluster.
|
||||
|
||||
For more complete applications, please look in the [examples directory](../../../examples/).
|
||||
|
||||
### Different UX with rkt container runtime
|
||||
|
||||
rkt and Docker have very different designs, as do the ACI and Docker image formats. Users might notice some differences in behavior when switching from one to the other. More information can be found [here](notes.md).
|
||||
|
||||
### Debugging
|
||||
|
||||
Here are several tips for when you run into issues.
|
||||
|
||||
##### Check logs
|
||||
|
||||
By default, the log verbosity level is 2. In order to see more logs related to rkt, we can set the verbosity level to 4.
For a local cluster, we can set the environment variable `LOG_LEVEL=4`.
If the cluster is using salt, we can edit the [logging.sls](../../../cluster/saltbase/pillar/logging.sls) in the saltbase.
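For the local-cluster case above, that might look like:

```console
$ LOG_LEVEL=4 hack/local-up-cluster.sh
```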
|
||||
|
||||
##### Check rkt pod status
|
||||
|
||||
To check the pods' status, we can use rkt commands such as `rkt list`, `rkt status`, `rkt image list`, etc.
More information about the rkt command line can be found [here](https://github.com/coreos/rkt/blob/master/Documentation/commands.md).
|
||||
|
||||
##### Check journal logs
|
||||
|
||||
As we use systemd to launch rkt pods (by creating service files which run `rkt run-prepared`), we can check the pods' logs using `journalctl`:
|
||||
|
||||
- Check the running state of the systemd service:
|
||||
|
||||
```console
|
||||
$ sudo journalctl -u $SERVICE_FILE
|
||||
```
|
||||
|
||||
where `$SERVICE_FILE` is the name of the service file created for the pod; you can find it in the kubelet logs.
|
||||
|
||||
##### Check the log of the container in the pod:
|
||||
|
||||
```console
|
||||
$ sudo journalctl -M rkt-$UUID -u $CONTAINER_NAME
|
||||
```
|
||||
|
||||
where `$UUID` is the rkt pod's UUID, which you can find via `rkt list --full`, and `$CONTAINER_NAME` is the container's name.
|
||||
|
||||
##### Check Kubernetes events, logs.
|
||||
|
||||
Besides the tricks above, Kubernetes also provides handy tools for debugging pods. More information can be found [here](../../../docs/user-guide/application-troubleshooting.md).
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
|
||||
|
@@ -27,105 +27,7 @@ Documentation for other releases can be found at
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
# Notes on Different UX with rkt container runtime
|
||||
|
||||
### Doesn't support ENTRYPOINT + CMD feature
|
||||
|
||||
To run a Docker image, rkt will convert it into [App Container Image (ACI) format](https://github.com/appc/spec/blob/master/SPEC.md) first.
|
||||
However, during the conversion, the `ENTRYPOINT` and `CMD` are concatenated to construct the ACI's `Exec` field.
|
||||
This means after the conversion, we are not able to replace only `ENTRYPOINT` or `CMD` without touching the other part.
|
||||
So for now, users are recommended to specify the **executable path** in `Command` and **arguments** in `Args`.
|
||||
(Specifying the **executable path + arguments** entirely in `Command` or entirely in `Args` has the same effect.)
|
||||
|
||||
For example:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: nginx
|
||||
labels:
|
||||
name: nginx
|
||||
spec:
|
||||
containers:
|
||||
- name: nginx
|
||||
image: nginx
|
||||
ports:
|
||||
- containerPort: 80
|
||||
```
|
||||
|
||||
The above pod yaml file is valid since it does not specify `Command` or `Args`, so the image's default `ENTRYPOINT` and `CMD` will be used.
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: busybox
|
||||
labels:
|
||||
name: busybox
|
||||
spec:
|
||||
containers:
|
||||
- name: busybox
|
||||
image: busybox
|
||||
command:
|
||||
- /bin/sleep
|
||||
- "1000"
|
||||
```
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: busybox
|
||||
labels:
|
||||
name: busybox
|
||||
spec:
|
||||
containers:
|
||||
- name: busybox
|
||||
image: busybox
|
||||
command:
|
||||
- /bin/sleep
|
||||
args:
|
||||
- "1000"
|
||||
```
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: busybox
|
||||
labels:
|
||||
name: busybox
|
||||
spec:
|
||||
containers:
|
||||
- name: busybox
|
||||
image: busybox
|
||||
args:
|
||||
- /bin/sleep
|
||||
- "1000"
|
||||
```
|
||||
|
||||
All three examples above are valid, as they contain both the executable path and the arguments.
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: busybox
|
||||
labels:
|
||||
name: busybox
|
||||
spec:
|
||||
containers:
|
||||
- name: busybox
|
||||
image: busybox
|
||||
args:
|
||||
- "1000"
|
||||
```
|
||||
|
||||
The last example is invalid, as we cannot override just the `CMD` of the image alone.
|
||||
|
||||
|
||||
|
||||
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/rkt/notes/
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
|
@@ -31,863 +31,8 @@ Documentation for other releases can be found at
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
Getting started from Scratch
|
||||
----------------------------
|
||||
|
||||
This guide is for people who want to craft a custom Kubernetes cluster. If you
|
||||
can find an existing Getting Started Guide that meets your needs on [this
|
||||
list](README.md), then we recommend using it, as you will be able to benefit
|
||||
from the experience of others. However, if you have specific IaaS, networking,
|
||||
configuration management, or operating system requirements not met by any of
|
||||
those guides, then this guide will provide an outline of the steps you need to
|
||||
take. Note that it requires considerably more effort than using one of the
|
||||
pre-defined guides.
|
||||
|
||||
This guide is also useful for those wanting to understand at a high level some of the steps that existing cluster setup scripts are taking.
|
||||
|
||||
**Table of Contents**
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_TOC -->
|
||||
|
||||
- [Designing and Preparing](#designing-and-preparing)
|
||||
- [Learning](#learning)
|
||||
- [Cloud Provider](#cloud-provider)
|
||||
- [Nodes](#nodes)
|
||||
- [Network](#network)
|
||||
- [Cluster Naming](#cluster-naming)
|
||||
- [Software Binaries](#software-binaries)
|
||||
- [Downloading and Extracting Kubernetes Binaries](#downloading-and-extracting-kubernetes-binaries)
|
||||
- [Selecting Images](#selecting-images)
|
||||
- [Security Models](#security-models)
|
||||
- [Preparing Certs](#preparing-certs)
|
||||
- [Preparing Credentials](#preparing-credentials)
|
||||
- [Configuring and Installing Base Software on Nodes](#configuring-and-installing-base-software-on-nodes)
|
||||
- [Docker](#docker)
|
||||
- [rkt](#rkt)
|
||||
- [kubelet](#kubelet)
|
||||
- [kube-proxy](#kube-proxy)
|
||||
- [Networking](#networking)
|
||||
- [Other](#other)
|
||||
- [Using Configuration Management](#using-configuration-management)
|
||||
- [Bootstrapping the Cluster](#bootstrapping-the-cluster)
|
||||
- [etcd](#etcd)
|
||||
- [Apiserver, Controller Manager, and Scheduler](#apiserver-controller-manager-and-scheduler)
|
||||
- [Apiserver pod template](#apiserver-pod-template)
|
||||
- [Cloud Providers](#cloud-providers)
|
||||
- [Scheduler pod template](#scheduler-pod-template)
|
||||
- [Controller Manager Template](#controller-manager-template)
|
||||
- [Starting and Verifying Apiserver, Scheduler, and Controller Manager](#starting-and-verifying-apiserver-scheduler-and-controller-manager)
|
||||
- [Starting Cluster Services](#starting-cluster-services)
|
||||
- [Troubleshooting](#troubleshooting)
|
||||
- [Running validate-cluster](#running-validate-cluster)
|
||||
- [Inspect pods and services](#inspect-pods-and-services)
|
||||
- [Try Examples](#try-examples)
|
||||
- [Running the Conformance Test](#running-the-conformance-test)
|
||||
- [Networking](#networking)
|
||||
- [Getting Help](#getting-help)
|
||||
|
||||
<!-- END MUNGE: GENERATED_TOC -->
|
||||
|
||||
## Designing and Preparing
|
||||
|
||||
### Learning
|
||||
|
||||
1. You should be familiar with using Kubernetes already. We suggest you set
|
||||
up a temporary cluster by following one of the other Getting Started Guides.
|
||||
This will help you become familiar with the CLI ([kubectl](../user-guide/kubectl/kubectl.md)) and concepts ([pods](../user-guide/pods.md), [services](../user-guide/services.md), etc.) first.
|
||||
1. You should have `kubectl` installed on your desktop. This will happen as a side
|
||||
effect of completing one of the other Getting Started Guides. If not, follow the instructions
|
||||
[here](../user-guide/prereqs.md).
|
||||
|
||||
### Cloud Provider
|
||||
|
||||
Kubernetes has the concept of a Cloud Provider, which is a module which provides
|
||||
an interface for managing TCP Load Balancers, Nodes (Instances) and Networking Routes.
|
||||
The interface is defined in `pkg/cloudprovider/cloud.go`. It is possible to
|
||||
create a custom cluster without implementing a cloud provider (for example if using
|
||||
bare-metal), and not all parts of the interface need to be implemented, depending
|
||||
on how flags are set on various components.
|
||||
|
||||
### Nodes
|
||||
|
||||
- You can use virtual or physical machines.
|
||||
- While you can build a cluster with 1 machine, in order to run all the examples and tests you
|
||||
need at least 4 nodes.
|
||||
- Many Getting-started-guides make a distinction between the master node and regular nodes. This
|
||||
is not strictly necessary.
|
||||
- Nodes will need to run some version of Linux with the x86_64 architecture. It may be possible
|
||||
to run on other OSes and Architectures, but this guide does not try to assist with that.
|
||||
- Apiserver and etcd together are fine on a machine with 1 core and 1GB RAM for clusters with 10s of nodes.
|
||||
Larger or more active clusters may benefit from more cores.
|
||||
- Other nodes can have any reasonable amount of memory and any number of cores. They need not
|
||||
have identical configurations.
|
||||
|
||||
### Network
|
||||
|
||||
Kubernetes has a distinctive [networking model](../admin/networking.md).
|
||||
|
||||
Kubernetes allocates an IP address to each pod. When creating a cluster, you
|
||||
need to allocate a block of IPs for Kubernetes to use as Pod IPs. The simplest
|
||||
approach is to allocate a different block of IPs to each node in the cluster as
|
||||
the node is added. A process in one pod should be able to communicate with
|
||||
another pod using the IP of the second pod. This connectivity can be
|
||||
accomplished in two ways:
|
||||
- Configure network to route Pod IPs
|
||||
- Harder to setup from scratch.
|
||||
- Google Compute Engine ([GCE](gce.md)) and [AWS](aws.md) guides use this approach.
|
||||
- Need to make the Pod IPs routable by programming routers, switches, etc.
|
||||
- Can be configured external to Kubernetes, or can implement in the "Routes" interface of a Cloud Provider module.
|
||||
- Generally highest performance.
|
||||
- Create an Overlay network
|
||||
- Easier to setup
|
||||
- Traffic is encapsulated, so per-pod IPs are routable.
|
||||
- Examples:
|
||||
- [Flannel](https://github.com/coreos/flannel)
|
||||
- [Weave](http://weave.works/)
|
||||
- [Open vSwitch (OVS)](http://openvswitch.org/)
|
||||
- Does not require "Routes" portion of Cloud Provider module.
|
||||
- Reduced performance (exactly how much depends on your solution).
|
||||
|
||||
You need to select an address range for the Pod IPs.
|
||||
- Various approaches:
|
||||
- GCE: each project has its own `10.0.0.0/8`. Carve off a `/16` for each
|
||||
Kubernetes cluster from that space, which leaves room for several clusters.
|
||||
Each node gets a further subdivision of this space.
|
||||
- AWS: use one VPC for whole organization, carve off a chunk for each
|
||||
cluster, or use different VPC for different clusters.
|
||||
- IPv6 is not supported yet.
|
||||
- Allocate one CIDR subnet for each node's PodIPs, or a single large CIDR
|
||||
from which smaller CIDRs are automatically allocated to each node (if nodes
|
||||
are dynamically added).
|
||||
- You need max-pods-per-node * max-number-of-nodes IPs in total. A `/24` per
|
||||
node supports 254 pods per machine and is a common choice. If IPs are
|
||||
scarce, a `/26` (62 pods per machine) or even a `/27` (30 pods) may be sufficient.
|
||||
- e.g. use `10.10.0.0/16` as the range for the cluster, with up to 256 nodes
|
||||
using `10.10.0.0/24` through `10.10.255.0/24`, respectively.
|
||||
- Need to make these routable or connect with overlay.
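As a quick sanity check of the example above (illustrative arithmetic only):

```sh
# 256 nodes, each with a /24 (256 addresses) for pods:
echo $(( 256 * 256 ))   # 65536 addresses, i.e. exactly a /16 such as 10.10.0.0/16
```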
|
||||
|
||||
Kubernetes also allocates an IP to each [service](../user-guide/services.md). However,
|
||||
service IPs do not necessarily need to be routable. The kube-proxy takes care
|
||||
of translating Service IPs to Pod IPs before traffic leaves the node. You do
|
||||
need to allocate a block of IPs for services. Call this
|
||||
`SERVICE_CLUSTER_IP_RANGE`. For example, you could set
|
||||
`SERVICE_CLUSTER_IP_RANGE="10.0.0.0/16"`, allowing 65534 distinct services to
|
||||
be active at once. Note that you can grow the end of this range, but you
|
||||
cannot move it without disrupting the services and pods that already use it.
|
||||
|
||||
Also, you need to pick a static IP for the master node.
|
||||
- Call this `MASTER_IP`.
|
||||
- Open any firewalls to allow access to the apiserver ports 80 and/or 443.
|
||||
- Enable the IPv4 forwarding sysctl: `net.ipv4.ip_forward = 1`
|
||||
|
||||
### Cluster Naming
|
||||
|
||||
You should pick a name for your cluster. Pick a short name for each cluster
|
||||
which is unique from future cluster names. This will be used in several ways:
|
||||
- by kubectl to distinguish between various clusters you have access to. You will probably want a
|
||||
second one sometime later, such as for testing new Kubernetes releases, running in a different
|
||||
region of the world, etc.
|
||||
- Kubernetes clusters can create cloud provider resources (e.g. AWS ELBs) and different clusters
|
||||
need to distinguish which resources each created. Call this `CLUSTERNAME`.
|
||||
|
||||
### Software Binaries
|
||||
|
||||
You will need binaries for:
|
||||
- etcd
|
||||
- A container runner, one of:
|
||||
- docker
|
||||
- rkt
|
||||
- Kubernetes
|
||||
- kubelet
|
||||
- kube-proxy
|
||||
- kube-apiserver
|
||||
- kube-controller-manager
|
||||
- kube-scheduler
|
||||
|
||||
#### Downloading and Extracting Kubernetes Binaries
|
||||
|
||||
A Kubernetes binary release includes all the Kubernetes binaries as well as the supported release of etcd.
|
||||
You can use a Kubernetes binary release (recommended) or build your Kubernetes binaries following the instructions in the
|
||||
[Developer Documentation](../devel/README.md). Only using a binary release is covered in this guide.
|
||||
|
||||
Download the [latest binary release](https://github.com/kubernetes/kubernetes/releases/latest) and unzip it.
|
||||
Then locate `./kubernetes/server/kubernetes-server-linux-amd64.tar.gz` and unzip *that*.
|
||||
Then, within the second set of unzipped files, locate `./kubernetes/server/bin`, which contains
|
||||
all the necessary binaries.
|
||||
|
||||
#### Selecting Images
|
||||
|
||||
You will run docker, kubelet, and kube-proxy outside of a container, the same way you would run any system daemon, so
|
||||
you just need the bare binaries. For etcd, kube-apiserver, kube-controller-manager, and kube-scheduler,
|
||||
we recommend that you run these as containers, so you need an image to be built.
|
||||
|
||||
You have several choices for Kubernetes images:
|
||||
- Use images hosted on Google Container Registry (GCR):
|
||||
- e.g. `gcr.io/google_containers/hyperkube:$TAG`, where `TAG` is the latest
|
||||
release tag, which can be found on the [latest releases page](https://github.com/kubernetes/kubernetes/releases/latest).
|
||||
- Ensure $TAG is the same tag as the release tag you are using for kubelet and kube-proxy.
|
||||
- The [hyperkube](../../cmd/hyperkube/) binary is an all in one binary
|
||||
- `hyperkube kubelet ...` runs the kubelet, `hyperkube apiserver ...` runs an apiserver, etc.
|
||||
- Build your own images.
|
||||
- Useful if you are using a private registry.
|
||||
- The release contains files such as `./kubernetes/server/bin/kube-apiserver.tar` which
|
||||
can be converted into docker images using a command like
|
||||
`docker load -i kube-apiserver.tar`
|
||||
- You can verify if the image is loaded successfully with the right repository and tag using
|
||||
a command like `docker images`.
|
||||
|
||||
For etcd, you can:
|
||||
- Use images hosted on Google Container Registry (GCR), such as `gcr.io/google_containers/etcd:2.2.1`
|
||||
- Use images hosted on [Docker Hub](https://hub.docker.com/search/?q=etcd) or [Quay.io](https://quay.io/repository/coreos/etcd), such as `quay.io/coreos/etcd:v2.2.1`
|
||||
- Use etcd binary included in your OS distro.
|
||||
- Build your own image
|
||||
- You can do: `cd kubernetes/cluster/images/etcd; make`
|
||||
|
||||
We recommend that you use the etcd version which is provided in the Kubernetes binary distribution. The Kubernetes binaries in the release
|
||||
were tested extensively with this version of etcd and not with any other version.
|
||||
The recommended version number can also be found as the value of `ETCD_VERSION` in `kubernetes/cluster/images/etcd/Makefile`.
|
||||
|
||||
The remainder of the document assumes that the image identifiers have been chosen and stored in corresponding env vars. Examples (replace with latest tags and appropriate registry):
|
||||
- `HYPERKUBE_IMAGE=gcr.io/google_containers/hyperkube:$TAG`
|
||||
- `ETCD_IMAGE=gcr.io/google_containers/etcd:$ETCD_VERSION`
|
||||
|
||||
### Security Models
|
||||
|
||||
There are two main options for security:
|
||||
- Access the apiserver using HTTP.
|
||||
- Use a firewall for security.
|
||||
- This is easier to setup.
|
||||
- Access the apiserver using HTTPS
|
||||
- Use https with certs, and credentials for user.
|
||||
- This is the recommended approach.
|
||||
- Configuring certs can be tricky.
|
||||
|
||||
If following the HTTPS approach, you will need to prepare certs and credentials.
|
||||
|
||||
#### Preparing Certs
|
||||
|
||||
You need to prepare several certs:
|
||||
- The master needs a cert to act as an HTTPS server.
|
||||
- The kubelets optionally need certs to identify themselves as clients of the master, and when
|
||||
serving their own API over HTTPS.
|
||||
|
||||
Unless you plan to have a real CA generate your certs, you will need to generate a root cert and use that to sign the master, kubelet, and kubectl certs.
|
||||
- see function `create-certs` in `cluster/gce/util.sh`
|
||||
- see also `cluster/saltbase/salt/generate-cert/make-ca-cert.sh` and
|
||||
`cluster/saltbase/salt/generate-cert/make-cert.sh`
|
||||
|
||||
You will end up with the following files (we will use these variables later on)
|
||||
- `CA_CERT`
|
||||
- put on the node where the apiserver runs, e.g. in `/srv/kubernetes/ca.crt`.
|
||||
- `MASTER_CERT`
|
||||
- signed by CA_CERT
|
||||
- put on the node where the apiserver runs, e.g. in `/srv/kubernetes/server.crt`
|
||||
- `MASTER_KEY`
|
||||
- put on the node where the apiserver runs, e.g. in `/srv/kubernetes/server.key`
|
||||
- `KUBELET_CERT`
|
||||
- optional
|
||||
- `KUBELET_KEY`
|
||||
- optional
|
||||
|
||||
#### Preparing Credentials
|
||||
|
||||
The admin user (and any users) need:
|
||||
- a token or a password to identify them.
|
||||
- tokens are just long alphanumeric strings, e.g. 32 chars, which can be generated for example with:
|
||||
- `TOKEN=$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d "=+/" | dd bs=32 count=1 2>/dev/null)`
|
||||
|
||||
Your tokens and passwords need to be stored in a file for the apiserver
|
||||
to read. This guide uses `/var/lib/kube-apiserver/known_tokens.csv`.
|
||||
The format for this file is described in the [authentication documentation](../admin/authentication.md).
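An illustrative `known_tokens.csv` might look like this (the tokens, user names, and uids below are placeholders; the authentication documentation is authoritative for the format):

```
0123456789abcdef0123456789abcdef,admin,admin
abcdef0123456789abcdef0123456789,kubelet,kubelet
```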
|
||||
|
||||
For distributing credentials to clients, the convention in Kubernetes is to put the credentials
|
||||
into a [kubeconfig file](../user-guide/kubeconfig-file.md).
|
||||
|
||||
The kubeconfig file for the administrator can be created as follows:
|
||||
- If you have already used Kubernetes with a non-custom cluster (for example, used a Getting Started
|
||||
Guide), you will already have a `$HOME/.kube/config` file.
|
||||
- You need to add certs, keys, and the master IP to the kubeconfig file:
|
||||
- If using the firewall-only security option, set the apiserver this way:
|
||||
- `kubectl config set-cluster $CLUSTER_NAME --server=http://$MASTER_IP --insecure-skip-tls-verify=true`
|
||||
- Otherwise, do this to set the apiserver ip, client certs, and user credentials.
|
||||
- `kubectl config set-cluster $CLUSTER_NAME --certificate-authority=$CA_CERT --embed-certs=true --server=https://$MASTER_IP`
|
||||
- `kubectl config set-credentials $USER --client-certificate=$CLI_CERT --client-key=$CLI_KEY --embed-certs=true --token=$TOKEN`
|
||||
- Set your cluster as the default cluster to use:
|
||||
- `kubectl config set-context $CONTEXT_NAME --cluster=$CLUSTER_NAME --user=$USER`
|
||||
- `kubectl config use-context $CONTEXT_NAME`
|
||||
|
||||
Next, make a kubeconfig file for the kubelets and kube-proxy. There are a couple of options for how
|
||||
many distinct files to make:
|
||||
1. Use the same credential as the admin
|
||||
- This is simplest to setup.
|
||||
1. One token and kubeconfig file for all kubelets, one for all kube-proxy, one for admin.
|
||||
- This mirrors what is done on GCE today
|
||||
1. Different credentials for every kubelet, etc.
|
||||
- We are working on this, but not all the pieces are ready yet.
|
||||
|
||||
You can make the files by copying the `$HOME/.kube/config`, by following the code
|
||||
in `cluster/gce/configure-vm.sh` or by using the following template:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Config
|
||||
users:
|
||||
- name: kubelet
|
||||
user:
|
||||
token: ${KUBELET_TOKEN}
|
||||
clusters:
|
||||
- name: local
|
||||
cluster:
|
||||
certificate-authority-data: ${CA_CERT_BASE64_ENCODED}
|
||||
contexts:
|
||||
- context:
|
||||
cluster: local
|
||||
user: kubelet
|
||||
name: service-account-context
|
||||
current-context: service-account-context
|
||||
```
|
||||
|
||||
Put the kubeconfig(s) on every node. The examples later in this
|
||||
guide assume that there are kubeconfigs in `/var/lib/kube-proxy/kubeconfig` and
|
||||
`/var/lib/kubelet/kubeconfig`.
|
||||
|
||||
## Configuring and Installing Base Software on Nodes
|
||||
|
||||
This section discusses how to configure machines to be Kubernetes nodes.
|
||||
|
||||
You should run three daemons on every node:
|
||||
- docker or rkt
|
||||
- kubelet
|
||||
- kube-proxy
|
||||
|
||||
You will also need to do assorted other configuration on top of a
|
||||
base OS install.
|
||||
|
||||
Tip: One possible starting point is to setup a cluster using an existing Getting
|
||||
Started Guide. After getting a cluster running, you can then copy the init.d scripts or systemd unit files from that
|
||||
cluster, and then modify them for use on your custom cluster.
|
||||
|
||||
### Docker
|
||||
|
||||
The minimum required Docker version will vary as the kubelet version changes. The newest stable release is a good choice. Kubelet will log a warning and refuse to start pods if the version is too old, so pick a version and try it.
|
||||
|
||||
If you previously had Docker installed on a node without setting Kubernetes-specific
|
||||
options, you may have a Docker-created bridge and iptables rules. You may want to remove these
|
||||
as follows before proceeding to configure Docker for Kubernetes.
|
||||
|
||||
```sh
|
||||
iptables -t nat -F
|
||||
ifconfig docker0 down
|
||||
brctl delbr docker0
|
||||
```
|
||||
|
||||
The way you configure docker will depend on whether you have chosen the routable-IP or overlay-network approach for your network.
|
||||
Some suggested docker options:
|
||||
- create your own bridge for the per-node CIDR ranges, call it cbr0, and set `--bridge=cbr0` option on docker.
|
||||
- set `--iptables=false` so that docker will not manipulate iptables for host-ports (too coarse on older docker versions, may be fixed in newer versions); this lets kube-proxy manage iptables instead of docker.
|
||||
- `--ip-masq=false`
|
||||
- if you have set up PodIPs to be routable, then you want this false; otherwise, docker will rewrite the PodIP source-address to a NodeIP.
|
||||
- some environments (e.g. GCE) still need you to masquerade out-bound traffic when it leaves the cloud environment. This is very environment specific.
|
||||
- if you are using an overlay network, consult those instructions.
|
||||
- `--mtu=`
|
||||
- may be required when using Flannel, because of the extra packet size due to udp encapsulation
|
||||
- `--insecure-registry $CLUSTER_SUBNET`
|
||||
- to connect to a private registry, if you set one up, without using SSL.
|
||||
|
||||
You may want to increase the number of open files for docker:
|
||||
- `DOCKER_NOFILE=1000000`
|
||||
|
||||
Where this config goes depends on your node OS. For example, GCE's Debian-based distro uses `/etc/default/docker`.
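Putting the suggested options together, a hypothetical `/etc/default/docker` on such a Debian-based node could look like this (the values are examples, not recommendations):

```sh
DOCKER_OPTS="--bridge=cbr0 --iptables=false --ip-masq=false --mtu=1460"
DOCKER_NOFILE=1000000
```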
|
||||
|
||||
Ensure docker is working correctly on your system before proceeding with the rest of the
|
||||
installation, by following examples given in the Docker documentation.
|
||||
|
||||
### rkt
|
||||
|
||||
[rkt](https://github.com/coreos/rkt) is an alternative to Docker. You only need to install one of Docker or rkt.
|
||||
The minimum version required is [v0.5.6](https://github.com/coreos/rkt/releases/tag/v0.5.6).
|
||||
|
||||
[systemd](http://www.freedesktop.org/wiki/Software/systemd/) is required on your node to run rkt. The
|
||||
minimum version required to match rkt v0.5.6 is
|
||||
[systemd 215](http://lists.freedesktop.org/archives/systemd-devel/2014-July/020903.html).
|
||||
|
||||
[rkt metadata service](https://github.com/coreos/rkt/blob/master/Documentation/networking.md) is also required
|
||||
for rkt networking support. You can start rkt metadata service by using command like
|
||||
`sudo systemd-run rkt metadata-service`
|
||||
|
||||
Then you need to configure your kubelet with flag:
|
||||
- `--container-runtime=rkt`
|
||||
|
||||
### kubelet
|
||||
|
||||
All nodes should run kubelet. See [Selecting Binaries](#selecting-binaries).
|
||||
|
||||
Arguments to consider (an example invocation is sketched after this list):
|
||||
- If following the HTTPS security approach:
|
||||
- `--api-servers=https://$MASTER_IP`
|
||||
- `--kubeconfig=/var/lib/kubelet/kubeconfig`
|
||||
- Otherwise, if taking the firewall-based security approach
|
||||
- `--api-servers=http://$MASTER_IP`
|
||||
- `--config=/etc/kubernetes/manifests`
|
||||
- `--cluster-dns=` to the address of the DNS server you will setup (see [Starting Cluster Services](#starting-cluster-services).)
|
||||
- `--cluster-domain=` to the dns domain prefix to use for cluster DNS addresses.
|
||||
- `--docker-root=`
|
||||
- `--root-dir=`
|
||||
- `--configure-cbr0=` (described above)
|
||||
- `--register-node` (described in [Node](../admin/node.md) documentation.)
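As an illustration, a kubelet invocation for the HTTPS approach might be sketched like this; all values are examples, not requirements:

```sh
/usr/local/bin/kubelet \
  --api-servers=https://${MASTER_IP} \
  --kubeconfig=/var/lib/kubelet/kubeconfig \
  --config=/etc/kubernetes/manifests \
  --cluster-dns=10.0.0.10 \
  --cluster-domain=cluster.local \
  --configure-cbr0=true \
  --register-node=true
```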
|
||||
|
||||
### kube-proxy
|
||||
|
||||
All nodes should run kube-proxy. (Running kube-proxy on a "master" node is not
|
||||
strictly required, but being consistent is easier.) Obtain a binary as described for
|
||||
kubelet.
|
||||
|
||||
Arguments to consider:
|
||||
- If following the HTTPS security approach:
|
||||
- `--api-servers=https://$MASTER_IP`
|
||||
- `--kubeconfig=/var/lib/kube-proxy/kubeconfig`
|
||||
- Otherwise, if taking the firewall-based security approach
|
||||
- `--api-servers=http://$MASTER_IP`
|
||||
|
||||
### Networking
|
||||
|
||||
Each node needs to be allocated its own CIDR range for pod networking.
|
||||
Call this `NODE_X_POD_CIDR`.
|
||||
|
||||
A bridge called `cbr0` needs to be created on each node. The bridge is explained
|
||||
further in the [networking documentation](../admin/networking.md). The bridge itself
|
||||
needs an address from `$NODE_X_POD_CIDR` - by convention the first IP. Call
|
||||
this `NODE_X_BRIDGE_ADDR`. For example, if `NODE_X_POD_CIDR` is `10.0.0.0/16`,
|
||||
then `NODE_X_BRIDGE_ADDR` is `10.0.0.1/16`. NOTE: this retains the `/16` suffix
|
||||
because of how this is used later.
|
||||
|
||||
- Recommended, automatic approach:
|
||||
1. Set `--configure-cbr0=true` option in kubelet init script and restart kubelet service. Kubelet will configure cbr0 automatically.
|
||||
It will wait to do this until the node controller has set Node.Spec.PodCIDR. Since you have not set up the apiserver and node controller yet, the bridge will not be set up immediately.
|
||||
- Alternate, manual approach:
|
||||
1. Set `--configure-cbr0=false` on kubelet and restart.
|
||||
1. Create a bridge
|
||||
- `brctl addbr cbr0`.
|
||||
1. Set appropriate MTU. NOTE: the actual value of MTU will depend on your network environment
|
||||
- `ip link set dev cbr0 mtu 1460`
|
||||
1. Add the node's network to the bridge (docker will go on other side of bridge).
|
||||
- `ip addr add $NODE_X_BRIDGE_ADDR dev cbr0`
|
||||
1. Turn it on
|
||||
- `ip link set dev cbr0 up`
|
||||
|
||||
If you have turned off Docker's IP masquerading to allow pods to talk to each
|
||||
other, then you may need to do masquerading just for destination IPs outside
|
||||
the cluster network. For example:
|
||||
|
||||
```sh
|
||||
iptables -t nat -A POSTROUTING ! -d ${CLUSTER_SUBNET} -m addrtype ! --dst-type LOCAL -j MASQUERADE
|
||||
```
|
||||
|
||||
This will rewrite the source address from
|
||||
the PodIP to the Node IP for traffic bound outside the cluster, and kernel
|
||||
[connection tracking](http://www.iptables.info/en/connection-state.html)
|
||||
will ensure that responses destined to the node still reach
|
||||
the pod.
|
||||
|
||||
NOTE: This is environment specific. Some environments will not need
|
||||
any masquerading at all. Others, such as GCE, will not allow pod IPs to send
|
||||
traffic to the internet, but have no problem with them inside your GCE Project.
|
||||
|
||||
### Other
|
||||
|
||||
- Enable auto-upgrades for your OS package manager, if desired.
|
||||
- Configure log rotation for all node components (e.g. using [logrotate](http://linux.die.net/man/8/logrotate)).
|
||||
- Setup liveness-monitoring (e.g. using [supervisord](http://supervisord.org/)).
|
||||
- Setup volume plugin support (optional)
|
||||
- Install any client binaries for optional volume types, such as `glusterfs-client` for GlusterFS
|
||||
volumes.
|
||||
|
||||
### Using Configuration Management
|
||||
|
||||
The previous steps all involved "conventional" system administration techniques for setting up
|
||||
machines. You may want to use a Configuration Management system to automate the node configuration
|
||||
process. There are examples of [Saltstack](../admin/salt.md), Ansible, Juju, and CoreOS Cloud Config in the
|
||||
various Getting Started Guides.
|
||||
|
||||
## Bootstrapping the Cluster
|
||||
|
||||
While the basic node services (kubelet, kube-proxy, docker) are typically started and managed using
|
||||
traditional system administration/automation approaches, the remaining *master* components of Kubernetes are
|
||||
all configured and managed *by Kubernetes*:
|
||||
- their options are specified in a Pod spec (yaml or json) rather than an /etc/init.d file or
|
||||
systemd unit.
|
||||
- they are kept running by Kubernetes rather than by init.
|
||||
|
||||
### etcd
|
||||
|
||||
You will need to run one or more instances of etcd.
|
||||
- Recommended approach: run one etcd instance, with its log written to a directory backed
|
||||
by durable storage (RAID, GCE PD)
|
||||
- Alternative: run 3 or 5 etcd instances.
|
||||
- Log can be written to non-durable storage because storage is replicated.
|
||||
- run a single apiserver which connects to one of the etcd nodes.
|
||||
See [cluster-troubleshooting](../admin/cluster-troubleshooting.md) for more discussion on factors affecting cluster
|
||||
availability.
|
||||
|
||||
To run an etcd instance (a command sketch follows these steps):
|
||||
|
||||
1. copy `cluster/saltbase/salt/etcd/etcd.manifest`
|
||||
1. make any modifications needed
|
||||
1. start the pod by putting it into the kubelet manifest directory
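A minimal sketch of those steps, assuming you have the Kubernetes source tree checked out and the kubelet manifest directory is `/etc/kubernetes/manifests`:

```sh
cp cluster/saltbase/salt/etcd/etcd.manifest /tmp/etcd.manifest
# edit /tmp/etcd.manifest as needed (data directory, listen addresses, ...)
sudo mv /tmp/etcd.manifest /etc/kubernetes/manifests/
```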
|
||||
|
||||
### Apiserver, Controller Manager, and Scheduler
|
||||
|
||||
The apiserver, controller manager, and scheduler will each run as a pod on the master node.
|
||||
|
||||
For each of these components, the steps to start them running are similar:
|
||||
|
||||
1. Start with a provided template for a pod.
|
||||
1. Set the `HYPERKUBE_IMAGE` to the values chosen in [Selecting Images](#selecting-images).
|
||||
1. Determine which flags are needed for your cluster, using the advice below each template.
|
||||
1. Set the flags to be individual strings in the command array (e.g. $ARGN below)
|
||||
1. Start the pod by putting the completed template into the kubelet manifest directory.
|
||||
1. Verify that the pod is started.
|
||||
|
||||
#### Apiserver pod template
|
||||
|
||||
```json
|
||||
{
|
||||
"kind": "Pod",
|
||||
"apiVersion": "v1",
|
||||
"metadata": {
|
||||
"name": "kube-apiserver"
|
||||
},
|
||||
"spec": {
|
||||
"hostNetwork": true,
|
||||
"containers": [
|
||||
{
|
||||
"name": "kube-apiserver",
|
||||
"image": "${HYPERKUBE_IMAGE}",
|
||||
"command": [
|
||||
"/hyperkube",
|
||||
"apiserver",
|
||||
"$ARG1",
|
||||
"$ARG2",
|
||||
...
|
||||
"$ARGN"
|
||||
],
|
||||
"ports": [
|
||||
{
|
||||
"name": "https",
|
||||
"hostPort": 443,
|
||||
"containerPort": 443
|
||||
},
|
||||
{
|
||||
"name": "local",
|
||||
"hostPort": 8080,
|
||||
"containerPort": 8080
|
||||
}
|
||||
],
|
||||
"volumeMounts": [
|
||||
{
|
||||
"name": "srvkube",
|
||||
"mountPath": "/srv/kubernetes",
|
||||
"readOnly": true
|
||||
},
|
||||
{
|
||||
"name": "etcssl",
|
||||
"mountPath": "/etc/ssl",
|
||||
"readOnly": true
|
||||
}
|
||||
],
|
||||
"livenessProbe": {
|
||||
"httpGet": {
|
||||
"path": "/healthz",
|
||||
"port": 8080
|
||||
},
|
||||
"initialDelaySeconds": 15,
|
||||
"timeoutSeconds": 15
|
||||
}
|
||||
}
|
||||
],
|
||||
"volumes": [
|
||||
{
|
||||
"name": "srvkube",
|
||||
"hostPath": {
|
||||
"path": "/srv/kubernetes"
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "etcssl",
|
||||
"hostPath": {
|
||||
"path": "/etc/ssl"
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Here are some apiserver flags you may need to set:
|
||||
|
||||
- `--cloud-provider=` see [cloud providers](#cloud-providers)
|
||||
- `--cloud-config=` see [cloud providers](#cloud-providers)
|
||||
- `--address=${MASTER_IP}` *or* `--bind-address=127.0.0.1` and `--address=127.0.0.1` if you want to run a proxy on the master node.
|
||||
- `--cluster-name=$CLUSTER_NAME`
|
||||
- `--service-cluster-ip-range=$SERVICE_CLUSTER_IP_RANGE`
|
||||
- `--etcd-servers=http://127.0.0.1:4001`
|
||||
- `--tls-cert-file=/srv/kubernetes/server.cert`
|
||||
- `--tls-private-key-file=/srv/kubernetes/server.key`
|
||||
- `--admission-control=$RECOMMENDED_LIST`
|
||||
- See [admission controllers](../admin/admission-controllers.md) for recommended arguments.
|
||||
- `--allow-privileged=true`, only if you trust your cluster user to run pods as root.
|
||||
|
||||
If you are following the firewall-only security approach, then use these arguments:
|
||||
|
||||
- `--token-auth-file=/dev/null`
|
||||
- `--insecure-bind-address=$MASTER_IP`
|
||||
- `--advertise-address=$MASTER_IP`
|
||||
|
||||
If you are using the HTTPS approach, then set the following (example credential-file formats are sketched after this list):
|
||||
- `--client-ca-file=/srv/kubernetes/ca.crt`
|
||||
- `--token-auth-file=/srv/kubernetes/known_tokens.csv`
|
||||
- `--basic-auth-file=/srv/kubernetes/basic_auth.csv`
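For reference, both files are small CSV files; the sketch below shows their expected column layout, with placeholder values that you would replace with credentials you generate yourself:

```sh
# known_tokens.csv has lines of the form: token,user,uid
echo "0123456789abcdef,kubelet,kubelet" | sudo tee /srv/kubernetes/known_tokens.csv

# basic_auth.csv has lines of the form: password,user,uid
echo "changeme,admin,admin" | sudo tee /srv/kubernetes/basic_auth.csv
```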
|
||||
|
||||
This pod mounts several node file system directories using the `hostPath` volumes. Their purposes are:
|
||||
- The `/etc/ssl` mount allows the apiserver to find the SSL root certs so it can
|
||||
authenticate external services, such as a cloud provider.
|
||||
- This is not required if you do not use a cloud provider (e.g. bare-metal).
|
||||
- The `/srv/kubernetes` mount allows the apiserver to read certs and credentials stored on the
|
||||
node disk. These could instead be stored on a persistent disk, such as a GCE PD, or baked into the image.
|
||||
- Optionally, you may want to mount `/var/log` as well and redirect output there (not shown in template).
|
||||
- Do this if you prefer your logs to be accessible from the root filesystem with tools like journalctl.
|
||||
|
||||
*TODO* document proxy-ssh setup.
|
||||
|
||||
##### Cloud Providers
|
||||
|
||||
Apiserver supports several cloud providers.
|
||||
|
||||
- options for `--cloud-provider` flag are `aws`, `gce`, `mesos`, `openshift`, `ovirt`, `rackspace`, `vagrant`, or unset.
|
||||
- leave it unset for bare-metal setups, for example.
|
||||
- support for new IaaS is added by contributing code [here](../../pkg/cloudprovider/providers/)
|
||||
|
||||
Some cloud providers require a config file. If so, you need to put the config file into the apiserver image or mount it through hostPath.
|
||||
|
||||
- `--cloud-config=` set if cloud provider requires a config file.
|
||||
- Used by `aws`, `gce`, `mesos`, `openshift`, `ovirt` and `rackspace`.
|
||||
- You must put config file into apiserver image or mount through hostPath.
|
||||
- Cloud config file syntax is [Gcfg](https://code.google.com/p/gcfg/).
|
||||
- AWS format defined by type [AWSCloudConfig](../../pkg/cloudprovider/providers/aws/aws.go)
|
||||
- There is a similar type in the corresponding file for other cloud providers.
|
||||
- GCE example: search for `gce.conf` in [this file](../../cluster/gce/configure-vm.sh)
|
||||
|
||||
#### Scheduler pod template
|
||||
|
||||
Complete this template for the scheduler pod:
|
||||
|
||||
```json
|
||||
|
||||
{
|
||||
"kind": "Pod",
|
||||
"apiVersion": "v1",
|
||||
"metadata": {
|
||||
"name": "kube-scheduler"
|
||||
},
|
||||
"spec": {
|
||||
"hostNetwork": true,
|
||||
"containers": [
|
||||
{
|
||||
"name": "kube-scheduler",
|
||||
"image": "$HYBERKUBE_IMAGE",
|
||||
"command": [
|
||||
"/hyperkube",
|
||||
"scheduler",
|
||||
"--master=127.0.0.1:8080",
|
||||
"$SCHEDULER_FLAG1",
|
||||
...
|
||||
"$SCHEDULER_FLAGN"
|
||||
],
|
||||
"livenessProbe": {
|
||||
"httpGet": {
|
||||
"host" : "127.0.0.1",
|
||||
"path": "/healthz",
|
||||
"port": 10251
|
||||
},
|
||||
"initialDelaySeconds": 15,
|
||||
"timeoutSeconds": 15
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
Typically, no additional flags are required for the scheduler.
|
||||
|
||||
Optionally, you may want to mount `/var/log` as well and redirect output there.
|
||||
|
||||
#### Controller Manager Template
|
||||
|
||||
Template for controller manager pod:
|
||||
|
||||
```json
|
||||
|
||||
{
|
||||
"kind": "Pod",
|
||||
"apiVersion": "v1",
|
||||
"metadata": {
|
||||
"name": "kube-controller-manager"
|
||||
},
|
||||
"spec": {
|
||||
"hostNetwork": true,
|
||||
"containers": [
|
||||
{
|
||||
"name": "kube-controller-manager",
|
||||
"image": "$HYPERKUBE_IMAGE",
|
||||
"command": [
|
||||
"/hyperkube",
|
||||
"controller-manager",
|
||||
"$CNTRLMNGR_FLAG1",
|
||||
...
|
||||
"$CNTRLMNGR_FLAGN"
|
||||
],
|
||||
"volumeMounts": [
|
||||
{
|
||||
"name": "srvkube",
|
||||
"mountPath": "/srv/kubernetes",
|
||||
"readOnly": true
|
||||
},
|
||||
{
|
||||
"name": "etcssl",
|
||||
"mountPath": "/etc/ssl",
|
||||
"readOnly": true
|
||||
}
|
||||
],
|
||||
"livenessProbe": {
|
||||
"httpGet": {
|
||||
"host": "127.0.0.1",
|
||||
"path": "/healthz",
|
||||
"port": 10252
|
||||
},
|
||||
"initialDelaySeconds": 15,
|
||||
"timeoutSeconds": 15
|
||||
}
|
||||
}
|
||||
],
|
||||
"volumes": [
|
||||
{
|
||||
"name": "srvkube",
|
||||
"hostPath": {
|
||||
"path": "/srv/kubernetes"
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "etcssl",
|
||||
"hostPath": {
|
||||
"path": "/etc/ssl"
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
Flags to consider using with controller manager:
|
||||
- `--cluster-name=$CLUSTER_NAME`
|
||||
- `--cluster-cidr=`
|
||||
- *TODO*: explain this flag.
|
||||
- `--allocate-node-cidrs=`
|
||||
- *TODO*: explain when you want controller to do this and when you want to do it another way.
|
||||
- `--cloud-provider=` and `--cloud-config` as described in apiserver section.
|
||||
- `--service-account-private-key-file=/srv/kubernetes/server.key`, used by the [service account](../user-guide/service-accounts.md) feature.
|
||||
- `--master=127.0.0.1:8080`
|
||||
|
||||
#### Starting and Verifying Apiserver, Scheduler, and Controller Manager
|
||||
|
||||
Place each completed pod template into the kubelet config dir
|
||||
(whatever `--config=` argument of kubelet is set to, typically
|
||||
`/etc/kubernetes/manifests`). The order does not matter: scheduler and
|
||||
controller manager will retry reaching the apiserver until it is up.
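For example, assuming you saved the completed templates under these (hypothetical) file names:

```sh
sudo cp kube-apiserver.manifest kube-scheduler.manifest kube-controller-manager.manifest \
    /etc/kubernetes/manifests/
```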
|
||||
|
||||
Use `ps` or `docker ps` to verify that each process has started. For example, verify that kubelet has started a container for the apiserver like this:
|
||||
|
||||
```console
|
||||
$ sudo docker ps | grep apiserver:
|
||||
5783290746d5 gcr.io/google_containers/kube-apiserver:e36bf367342b5a80d7467fd7611ad873 "/bin/sh -c '/usr/lo'" 10 seconds ago Up 9 seconds k8s_kube-apiserver.feb145e7_kube-apiserver-kubernetes-master_default_eaebc600cf80dae59902b44225f2fc0a_225a4695
|
||||
```
|
||||
|
||||
Then try to connect to the apiserver:
|
||||
|
||||
```console
|
||||
$ echo $(curl -s http://localhost:8080/healthz)
|
||||
ok
|
||||
$ curl -s http://localhost:8080/api
|
||||
{
|
||||
"versions": [
|
||||
"v1"
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
If you have selected the `--register-node=true` option for kubelets, they will now begin self-registering with the apiserver.
|
||||
You should soon be able to see all your nodes by running the `kubectl get nodes` command.
|
||||
Otherwise, you will need to manually create node objects.
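If you do need to create node objects manually, a minimal sketch using `kubectl` looks like this; the node name is only an example:

```sh
cat <<EOF | kubectl create -f -
{
  "kind": "Node",
  "apiVersion": "v1",
  "metadata": {
    "name": "node-1",
    "labels": { "kubernetes.io/hostname": "node-1" }
  }
}
EOF
```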
|
||||
|
||||
### Starting Cluster Services
|
||||
|
||||
You will want to complete your Kubernetes clusters by adding cluster-wide
|
||||
services. These are sometimes called *addons*, and [an overview
|
||||
of their purpose is in the admin guide](
|
||||
../../docs/admin/cluster-components.md#addons).
|
||||
|
||||
Notes for setting up each cluster service are given below:
|
||||
|
||||
* Cluster DNS:
|
||||
* required for many kubernetes examples
|
||||
* [Setup instructions](http://releases.k8s.io/HEAD/cluster/addons/dns/)
|
||||
* [Admin Guide](../admin/dns.md)
|
||||
* Cluster-level Logging
|
||||
* Multiple implementations with different storage backends and UIs.
|
||||
* [Elasticsearch Backend Setup Instructions](http://releases.k8s.io/HEAD/cluster/addons/fluentd-elasticsearch/)
|
||||
* [Google Cloud Logging Backend Setup Instructions](http://releases.k8s.io/HEAD/cluster/addons/fluentd-gcp/).
|
||||
* Both require running fluentd on each node.
|
||||
* [User Guide](../user-guide/logging.md)
|
||||
* Container Resource Monitoring
|
||||
* [Setup instructions](http://releases.k8s.io/HEAD/cluster/addons/cluster-monitoring/)
|
||||
* GUI
|
||||
* [Setup instructions](http://releases.k8s.io/HEAD/cluster/addons/kube-ui/)
|
||||
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Running validate-cluster
|
||||
|
||||
**TODO** explain how to use `cluster/validate-cluster.sh`
|
||||
|
||||
### Inspect pods and services
|
||||
|
||||
Try to run through the "Inspect your cluster" section in one of the other Getting Started Guides, such as [GCE](gce.md#inspect-your-cluster).
|
||||
You should see some services. You should also see "mirror pods" for the apiserver, scheduler and controller-manager, plus any add-ons you started.
|
||||
|
||||
### Try Examples
|
||||
|
||||
At this point you should be able to run through one of the basic examples, such as the [nginx example](../../examples/simple-nginx.md).
|
||||
|
||||
### Running the Conformance Test
|
||||
|
||||
You may want to try to run the [Conformance test](http://releases.k8s.io/HEAD/hack/conformance-test.sh). Any failures may give a hint as to areas that need more attention.
|
||||
|
||||
### Networking
|
||||
|
||||
The nodes must be able to connect to each other using their private IP. Verify this by
|
||||
pinging or SSH-ing from one node to another.
|
||||
|
||||
### Getting Help
|
||||
|
||||
If you run into trouble, please see the section on [troubleshooting](gce.md#troubleshooting), post to the
|
||||
[google-containers group](https://groups.google.com/forum/#!forum/google-containers), or come ask questions on [Slack](../troubleshooting.md#slack).
|
||||
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/scratch/
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
|
@@ -31,475 +31,9 @@ Documentation for other releases can be found at
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
Bare Metal Kubernetes with Calico Networking
|
||||
------------------------------------------------
|
||||
|
||||
## Introduction
|
||||
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/ubuntu-calico/
|
||||
|
||||
This document describes how to deploy Kubernetes with Calico networking from scratch on _bare metal_ Ubuntu. For more information on Project Calico, visit [projectcalico.org](http://projectcalico.org) and the [calico-containers repository](https://github.com/projectcalico/calico-containers).
|
||||
|
||||
To install Calico on an existing Kubernetes cluster, or for more information on deploying Calico with Kubernetes in a number of other environments take a look at our supported [deployment guides](https://github.com/projectcalico/calico-containers/tree/master/docs/cni/kubernetes).
|
||||
|
||||
This guide will set up a simple Kubernetes cluster with a single Kubernetes master and two Kubernetes nodes. We'll run Calico's etcd cluster on the master and install the Calico daemon on the master and nodes.
|
||||
|
||||
## Prerequisites and Assumptions
|
||||
|
||||
- This guide uses `systemd` for process management. Ubuntu 15.04 supports systemd natively as do a number of other Linux distributions.
|
||||
- All machines should have Docker >= 1.7.0 installed.
|
||||
- To install Docker on Ubuntu, follow [these instructions](https://docs.docker.com/installation/ubuntulinux/)
|
||||
- All machines should have connectivity to each other and the internet.
|
||||
- This guide assumes a DHCP server on your network to assign server IPs.
|
||||
- This guide uses `192.168.0.0/16` as the subnet from which pod IP addresses are assigned. If this overlaps with your host subnet, you will need to configure Calico to use a different [IP pool](https://github.com/projectcalico/calico-containers/blob/master/docs/calicoctl/pool.md#calicoctl-pool-commands).
|
||||
- This guide assumes that none of the hosts have been configured with any Kubernetes or Calico software.
|
||||
- This guide will set up a secure, TLS-authenticated API server.
|
||||
|
||||
## Set up the master
|
||||
|
||||
### Configure TLS
|
||||
|
||||
The master requires the root CA public key, `ca.pem`; the apiserver certificate, `apiserver.pem` and its private key, `apiserver-key.pem`.
|
||||
|
||||
1. Create the file `openssl.cnf` with the following contents.
|
||||
|
||||
```
|
||||
[req]
|
||||
req_extensions = v3_req
|
||||
distinguished_name = req_distinguished_name
|
||||
[req_distinguished_name]
|
||||
[ v3_req ]
|
||||
basicConstraints = CA:FALSE
|
||||
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
|
||||
subjectAltName = @alt_names
|
||||
[alt_names]
|
||||
DNS.1 = kubernetes
|
||||
DNS.2 = kubernetes.default
|
||||
IP.1 = 10.100.0.1
|
||||
IP.2 = ${MASTER_IPV4}
|
||||
```
|
||||
|
||||
> Replace ${MASTER_IPV4} with the Master's IP address on which the Kubernetes API will be accessible.
|
||||
|
||||
2. Generate the necessary TLS assets.
|
||||
|
||||
```
|
||||
# Generate the root CA.
|
||||
openssl genrsa -out ca-key.pem 2048
|
||||
openssl req -x509 -new -nodes -key ca-key.pem -days 10000 -out ca.pem -subj "/CN=kube-ca"
|
||||
|
||||
# Generate the API server keypair.
|
||||
openssl genrsa -out apiserver-key.pem 2048
|
||||
openssl req -new -key apiserver-key.pem -out apiserver.csr -subj "/CN=kube-apiserver" -config openssl.cnf
|
||||
openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out apiserver.pem -days 365 -extensions v3_req -extfile openssl.cnf
|
||||
```
|
||||
|
||||
3. You should now have the following three files: `ca.pem`, `apiserver.pem`, and `apiserver-key.pem`. Send the three files to your master host (using `scp` for example).
|
||||
4. Move them to the `/etc/kubernetes/ssl` folder and ensure that only the root user can read the key:
|
||||
|
||||
```
|
||||
# Move keys
|
||||
sudo mkdir -p /etc/kubernetes/ssl/
|
||||
sudo mv -t /etc/kubernetes/ssl/ ca.pem apiserver.pem apiserver-key.pem
|
||||
|
||||
# Set permissions
|
||||
sudo chmod 600 /etc/kubernetes/ssl/apiserver-key.pem
|
||||
sudo chown root:root /etc/kubernetes/ssl/apiserver-key.pem
|
||||
```
|
||||
|
||||
### Install Kubernetes on the Master
|
||||
|
||||
We'll use the `kubelet` to bootstrap the Kubernetes master.
|
||||
|
||||
1. Download and install the `kubelet` and `kubectl` binaries:
|
||||
|
||||
```
|
||||
sudo wget -N -P /usr/bin http://storage.googleapis.com/kubernetes-release/release/v1.1.4/bin/linux/amd64/kubectl
|
||||
sudo wget -N -P /usr/bin http://storage.googleapis.com/kubernetes-release/release/v1.1.4/bin/linux/amd64/kubelet
|
||||
sudo chmod +x /usr/bin/kubelet /usr/bin/kubectl
|
||||
```
|
||||
|
||||
2. Install the `kubelet` systemd unit file and start the `kubelet`:
|
||||
|
||||
```
|
||||
# Install the unit file
|
||||
sudo wget -N -P /etc/systemd https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/kubelet.service
|
||||
|
||||
# Enable the unit file so that it runs on boot
|
||||
sudo systemctl enable /etc/systemd/kubelet.service
|
||||
|
||||
# Start the kubelet service
|
||||
sudo systemctl start kubelet.service
|
||||
```
|
||||
|
||||
3. Download and install the master manifest file, which will start the Kubernetes master services automatically:
|
||||
|
||||
```
|
||||
sudo mkdir -p /etc/kubernetes/manifests
|
||||
sudo wget -N -P /etc/kubernetes/manifests https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/kubernetes-master.manifest
|
||||
```
|
||||
|
||||
4. Check the progress by running `docker ps`. After a while, you should see the `etcd`, `apiserver`, `controller-manager`, `scheduler`, and `kube-proxy` containers running.
|
||||
|
||||
> Note: it may take some time for all the containers to start. Don't worry if `docker ps` doesn't show any containers for a while or if some containers start before others.
|
||||
|
||||
### Install Calico's etcd on the master
|
||||
|
||||
Calico needs its own etcd cluster to store its state. In this guide we install a single-node cluster on the master server.
|
||||
|
||||
> Note: In a production deployment we recommend running a distributed etcd cluster for redundancy. In this guide, we use a single etcd for simplicity.
|
||||
|
||||
1. Download the template manifest file:
|
||||
|
||||
```
|
||||
wget https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/calico-etcd.manifest
|
||||
```
|
||||
|
||||
2. Replace all instances of `<MASTER_IPV4>` in the `calico-etcd.manifest` file with your master's IP address.
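   For example, with `sed` (assuming the master's address is exported in a `MASTER_IPV4` environment variable):

   ```
   sed -i "s/<MASTER_IPV4>/${MASTER_IPV4}/g" calico-etcd.manifest
   ```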
|
||||
|
||||
3. Then, move the file to the `/etc/kubernetes/manifests` directory:
|
||||
|
||||
```
|
||||
sudo mv -f calico-etcd.manifest /etc/kubernetes/manifests
|
||||
```
|
||||
|
||||
### Install Calico on the master
|
||||
|
||||
We need to install Calico on the master. This allows the master to route packets to the pods on other nodes.
|
||||
|
||||
1. Install the `calicoctl` tool:
|
||||
|
||||
```
|
||||
wget https://github.com/projectcalico/calico-containers/releases/download/v0.15.0/calicoctl
|
||||
chmod +x calicoctl
|
||||
sudo mv calicoctl /usr/bin
|
||||
```
|
||||
|
||||
2. Prefetch the calico/node container (this ensures that the Calico service starts immediately when we enable it):
|
||||
|
||||
```
|
||||
sudo docker pull calico/node:v0.15.0
|
||||
```
|
||||
|
||||
3. Download the `network-environment` template from the `calico-kubernetes` repository:
|
||||
|
||||
```
|
||||
wget -O network-environment https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/network-environment-template
|
||||
```
|
||||
|
||||
4. Edit `network-environment` to represent this node's settings:
|
||||
|
||||
- Replace `<KUBERNETES_MASTER>` with the IP address of the master. This should be the source IP address used to reach the Kubernetes worker nodes.
|
||||
|
||||
5. Move `network-environment` into `/etc`:
|
||||
|
||||
```
|
||||
sudo mv -f network-environment /etc
|
||||
```
|
||||
|
||||
6. Install, enable, and start the `calico-node` service:
|
||||
|
||||
```
|
||||
sudo wget -N -P /etc/systemd https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/common/calico-node.service
|
||||
sudo systemctl enable /etc/systemd/calico-node.service
|
||||
sudo systemctl start calico-node.service
|
||||
```
|
||||
|
||||
## Set up the nodes
|
||||
|
||||
The following steps should be run on each Kubernetes node.
|
||||
|
||||
### Configure TLS
|
||||
|
||||
Worker nodes require three keys: `ca.pem`, `worker.pem`, and `worker-key.pem`. We've already generated
|
||||
`ca.pem` for use on the Master. The worker public/private keypair should be generated for each Kubernetes node.
|
||||
|
||||
1. Create the file `worker-openssl.cnf` with the following contents.
|
||||
|
||||
```
|
||||
[req]
|
||||
req_extensions = v3_req
|
||||
distinguished_name = req_distinguished_name
|
||||
[req_distinguished_name]
|
||||
[ v3_req ]
|
||||
basicConstraints = CA:FALSE
|
||||
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
|
||||
subjectAltName = @alt_names
|
||||
[alt_names]
|
||||
IP.1 = $ENV::WORKER_IP
|
||||
```
|
||||
|
||||
2. Generate the necessary TLS assets for this worker. This relies on the worker's IP address, and the `ca.pem` file generated earlier in the guide.
|
||||
|
||||
```
|
||||
# Export this worker's IP address.
|
||||
export WORKER_IP=<WORKER_IPV4>
|
||||
```
|
||||
|
||||
```
|
||||
# Generate keys.
|
||||
openssl genrsa -out worker-key.pem 2048
|
||||
openssl req -new -key worker-key.pem -out worker.csr -subj "/CN=worker-key" -config worker-openssl.cnf
|
||||
openssl x509 -req -in worker.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out worker.pem -days 365 -extensions v3_req -extfile worker-openssl.cnf
|
||||
```
|
||||
|
||||
3. Send the three files (`ca.pem`, `worker.pem`, and `worker-key.pem`) to the host (using scp, for example).
|
||||
|
||||
4. Move the files to the `/etc/kubernetes/ssl` folder with the appropriate permissions:
|
||||
|
||||
```
|
||||
# Move keys
|
||||
sudo mkdir -p /etc/kubernetes/ssl/
|
||||
sudo mv -t /etc/kubernetes/ssl/ ca.pem worker.pem worker-key.pem
|
||||
|
||||
# Set permissions
|
||||
sudo chmod 600 /etc/kubernetes/ssl/worker-key.pem
|
||||
sudo chown root:root /etc/kubernetes/ssl/worker-key.pem
|
||||
```
|
||||
|
||||
### Configure the kubelet worker
|
||||
|
||||
1. With your certs in place, create a kubeconfig for worker authentication in `/etc/kubernetes/worker-kubeconfig.yaml`; replace `<KUBERNETES_MASTER>` with the IP address of the master:
|
||||
|
||||
```
|
||||
apiVersion: v1
|
||||
kind: Config
|
||||
clusters:
|
||||
- name: local
|
||||
cluster:
|
||||
server: https://<KUBERNETES_MASTER>:443
|
||||
certificate-authority: /etc/kubernetes/ssl/ca.pem
|
||||
users:
|
||||
- name: kubelet
|
||||
user:
|
||||
client-certificate: /etc/kubernetes/ssl/worker.pem
|
||||
client-key: /etc/kubernetes/ssl/worker-key.pem
|
||||
contexts:
|
||||
- context:
|
||||
cluster: local
|
||||
user: kubelet
|
||||
name: kubelet-context
|
||||
current-context: kubelet-context
|
||||
```
|
||||
|
||||
### Install Calico on the node
|
||||
|
||||
On your compute nodes, it is important that you install Calico before Kubernetes. We'll install Calico using the provided `calico-node.service` systemd unit file:
|
||||
|
||||
1. Install the `calicoctl` binary:
|
||||
|
||||
```
|
||||
wget https://github.com/projectcalico/calico-containers/releases/download/v0.15.0/calicoctl
|
||||
chmod +x calicoctl
|
||||
sudo mv calicoctl /usr/bin
|
||||
```
|
||||
|
||||
2. Fetch the calico/node container:
|
||||
|
||||
```
|
||||
sudo docker pull calico/node:v0.15.0
|
||||
```
|
||||
|
||||
3. Download the `network-environment` template from the `calico-cni` repository:
|
||||
|
||||
```
|
||||
wget -O network-environment https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/node/network-environment-template
|
||||
```
|
||||
|
||||
4. Edit `network-environment` to represent this node's settings:
|
||||
|
||||
- Replace `<DEFAULT_IPV4>` with the IP address of the node.
|
||||
- Replace `<KUBERNETES_MASTER>` with the IP or hostname of the master.
|
||||
|
||||
5. Move `network-environment` into `/etc`:
|
||||
|
||||
```
|
||||
sudo mv -f network-environment /etc
|
||||
```
|
||||
|
||||
6. Install the `calico-node` service:
|
||||
|
||||
```
|
||||
sudo wget -N -P /etc/systemd https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/common/calico-node.service
|
||||
sudo systemctl enable /etc/systemd/calico-node.service
|
||||
sudo systemctl start calico-node.service
|
||||
```
|
||||
|
||||
7. Install the Calico CNI plugins:
|
||||
|
||||
```
|
||||
sudo mkdir -p /opt/cni/bin/
|
||||
sudo wget -N -P /opt/cni/bin/ https://github.com/projectcalico/calico-cni/releases/download/v1.0.0/calico
|
||||
sudo wget -N -P /opt/cni/bin/ https://github.com/projectcalico/calico-cni/releases/download/v1.0.0/calico-ipam
|
||||
sudo chmod +x /opt/cni/bin/calico /opt/cni/bin/calico-ipam
|
||||
```
|
||||
|
||||
8. Create a CNI network configuration file, which tells Kubernetes to create a network named `calico-k8s-network` and to use the calico plugins for that network. Create file `/etc/cni/net.d/10-calico.conf` with the following contents, replacing `<KUBERNETES_MASTER>` with the IP of the master (this file should be the same on each node):
|
||||
|
||||
```
|
||||
# Make the directory structure.
|
||||
mkdir -p /etc/cni/net.d
|
||||
|
||||
# Make the network configuration file
|
||||
cat >/etc/cni/net.d/10-calico.conf <<EOF
|
||||
{
|
||||
"name": "calico-k8s-network",
|
||||
"type": "calico",
|
||||
"etcd_authority": "<KUBERNETES_MASTER>:6666",
|
||||
"log_level": "info",
|
||||
"ipam": {
|
||||
"type": "calico-ipam"
|
||||
}
|
||||
}
|
||||
EOF
|
||||
```
|
||||
|
||||
Since this is the only network we create, it will be used by default by the kubelet.
|
||||
|
||||
9. Verify that Calico started correctly:
|
||||
|
||||
```
|
||||
calicoctl status
|
||||
```
|
||||
|
||||
should show that Felix (Calico's per-node agent) is running, and there should be a BGP status line for each other node you've configured and for the master. The "Info" column should show "Established":
|
||||
|
||||
```
|
||||
$ calicoctl status
|
||||
calico-node container is running. Status: Up 15 hours
|
||||
Running felix version 1.3.0rc5
|
||||
|
||||
IPv4 BGP status
|
||||
+---------------+-------------------+-------+----------+-------------+
|
||||
| Peer address | Peer type | State | Since | Info |
|
||||
+---------------+-------------------+-------+----------+-------------+
|
||||
| 172.18.203.41 | node-to-node mesh | up | 17:32:26 | Established |
|
||||
| 172.18.203.42 | node-to-node mesh | up | 17:32:25 | Established |
|
||||
+---------------+-------------------+-------+----------+-------------+
|
||||
|
||||
IPv6 BGP status
|
||||
+--------------+-----------+-------+-------+------+
|
||||
| Peer address | Peer type | State | Since | Info |
|
||||
+--------------+-----------+-------+-------+------+
|
||||
+--------------+-----------+-------+-------+------+
|
||||
```
|
||||
|
||||
If the "Info" column shows "Active" or some other value then Calico is having difficulty connecting to the other host. Check the IP address of the peer is correct and check that Calico is using the correct local IP address (set in the `network-environment` file above).
|
||||
|
||||
### Install Kubernetes on the Node
|
||||
|
||||
1. Download and Install the kubelet binary:
|
||||
|
||||
```
|
||||
sudo wget -N -P /usr/bin http://storage.googleapis.com/kubernetes-release/release/v1.1.4/bin/linux/amd64/kubelet
|
||||
sudo chmod +x /usr/bin/kubelet
|
||||
```
|
||||
|
||||
2. Install the `kubelet` systemd unit file:
|
||||
|
||||
```
|
||||
# Download the unit file.
|
||||
sudo wget -N -P /etc/systemd https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/node/kubelet.service
|
||||
|
||||
# Enable and start the unit files so that they run on boot
|
||||
sudo systemctl enable /etc/systemd/kubelet.service
|
||||
sudo systemctl start kubelet.service
|
||||
```
|
||||
|
||||
3. Download the `kube-proxy` manifest:
|
||||
|
||||
```
|
||||
wget https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/node/kube-proxy.manifest
|
||||
```
|
||||
|
||||
4. In that file, replace `<KUBERNETES_MASTER>` with your master's IP. Then move it into place:
|
||||
|
||||
```
|
||||
sudo mkdir -p /etc/kubernetes/manifests/
|
||||
sudo mv kube-proxy.manifest /etc/kubernetes/manifests/
|
||||
```
|
||||
|
||||
## Configure kubectl remote access
|
||||
|
||||
To administer your cluster from a separate host (e.g. your laptop), you will need the root CA generated earlier, as well as an admin public/private keypair (`ca.pem`, `admin.pem`, `admin-key.pem`). Run the following steps on the machine that you will use to control your cluster.
|
||||
|
||||
1. Download the kubectl binary.
|
||||
|
||||
```
|
||||
sudo wget -N -P /usr/bin http://storage.googleapis.com/kubernetes-release/release/v1.1.4/bin/linux/amd64/kubectl
|
||||
sudo chmod +x /usr/bin/kubectl
|
||||
```
|
||||
|
||||
2. Generate the admin public/private keypair.
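   One way to do this, mirroring the worker keypair generation earlier in this guide (the CN value is just a label), is:

   ```
   # Generate the admin keypair, signed by the CA created earlier (ca.pem / ca-key.pem).
   openssl genrsa -out admin-key.pem 2048
   openssl req -new -key admin-key.pem -out admin.csr -subj "/CN=kube-admin"
   openssl x509 -req -in admin.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out admin.pem -days 365
   ```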
|
||||
|
||||
3. Export the necessary variables, substituting in correct values for your machine.
|
||||
|
||||
```
|
||||
# Export the appropriate paths.
|
||||
export CA_CERT_PATH=<PATH_TO_CA_PEM>
|
||||
export ADMIN_CERT_PATH=<PATH_TO_ADMIN_PEM>
|
||||
export ADMIN_KEY_PATH=<PATH_TO_ADMIN_KEY_PEM>
|
||||
|
||||
# Export the Master's IP address.
|
||||
export MASTER_IPV4=<MASTER_IPV4>
|
||||
```
|
||||
|
||||
4. Configure your host `kubectl` with the admin credentials:
|
||||
|
||||
```
|
||||
kubectl config set-cluster calico-cluster --server=https://${MASTER_IPV4} --certificate-authority=${CA_CERT_PATH}
|
||||
kubectl config set-credentials calico-admin --certificate-authority=${CA_CERT_PATH} --client-key=${ADMIN_KEY_PATH} --client-certificate=${ADMIN_CERT_PATH}
|
||||
kubectl config set-context calico --cluster=calico-cluster --user=calico-admin
|
||||
kubectl config use-context calico
|
||||
```
|
||||
|
||||
Check your work with `kubectl get nodes`, which should succeed and display the nodes.
|
||||
|
||||
## Install the DNS Addon
|
||||
|
||||
Most Kubernetes deployments will require the DNS addon for service discovery. To install DNS, create the skydns service and replication controller provided. This step makes use of the kubectl configuration made above.
|
||||
|
||||
```
|
||||
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/dns/skydns.yaml
|
||||
```
|
||||
|
||||
## Install the Kubernetes UI Addon (Optional)
|
||||
|
||||
The Kubernetes UI can be installed by using `kubectl` to create the following manifest.
|
||||
|
||||
```
|
||||
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/kube-ui/kube-ui.yaml
|
||||
```
|
||||
|
||||
## Launch other Services With Calico-Kubernetes
|
||||
|
||||
At this point, you have a fully functioning cluster running on Kubernetes with a master and two nodes networked with Calico. You can now follow any of the [standard documentation](../../examples/) to set up other services on your cluster.
|
||||
|
||||
## Connectivity to outside the cluster
|
||||
|
||||
Because containers in this guide have private `192.168.0.0/16` IPs, you will need NAT to allow connectivity between containers and the internet. However, in a production data center deployment, NAT is not always necessary, since Calico can peer with the data center's border routers over BGP.
|
||||
|
||||
### NAT on the nodes
|
||||
|
||||
The simplest method for enabling connectivity from containers to the internet is to use outgoing NAT on your Kubernetes nodes.
|
||||
|
||||
Calico can provide outgoing NAT for containers. To enable it, use the following `calicoctl` command:
|
||||
|
||||
```
|
||||
ETCD_AUTHORITY=<master_ip:6666> calicoctl pool add <CONTAINER_SUBNET> --nat-outgoing
|
||||
```
|
||||
|
||||
By default, `<CONTAINER_SUBNET>` will be `192.168.0.0/16`. You can find out which pools have been configured with the following command:
|
||||
|
||||
```
|
||||
ETCD_AUTHORITY=<master_ip:6666> calicoctl pool show
|
||||
```
|
||||
|
||||
### NAT at the border router
|
||||
|
||||
In a data center environment, it is recommended to configure Calico to peer with the border routers over BGP. This means that the container IPs will be routable anywhere in the data center, and so NAT is not needed on the nodes (though it may be enabled at the data center edge to allow outbound-only internet connectivity).
|
||||
|
||||
The Calico documentation contains more information on how to configure Calico to [peer with existing infrastructure](https://github.com/projectcalico/calico-containers/blob/master/docs/ExternalConnectivity.md).
|
||||
|
||||
[](https://github.com/igrigorik/ga-beacon)
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
[]()
|
||||
|
@@ -31,293 +31,8 @@ Documentation for other releases can be found at
|
||||
<!-- END STRIP_FOR_RELEASE -->
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
Kubernetes Deployment On Bare-metal Ubuntu Nodes
|
||||
------------------------------------------------
|
||||
|
||||
- [Introduction](#introduction)
|
||||
- [Prerequisites](#prerequisites)
|
||||
- [Starting a Cluster](#starting-a-cluster)
|
||||
- [Set up working directory](#set-up-working-directory)
|
||||
- [Configure and start the kubernetes cluster](#configure-and-start-the-kubernetes-cluster)
|
||||
- [Test it out](#test-it-out)
|
||||
- [Deploy addons](#deploy-addons)
|
||||
- [Trouble shooting](#trouble-shooting)
|
||||
- [Upgrading a Cluster](#upgrading-a-cluster)
|
||||
- [Test it out](#test-it-out-ii)
|
||||
|
||||
## Introduction
|
||||
|
||||
This document describes how to deploy Kubernetes on Ubuntu nodes; the examples use 1 master and 3 nodes. You can easily scale to **any number of nodes** by changing a few settings. The original idea was heavily inspired by @jainvipin's single-node Ubuntu work, which has been merged into this document.
|
||||
|
||||
The scripting referenced here can be used to deploy Kubernetes with
|
||||
networking based either on Flannel or on a CNI plugin that you supply.
|
||||
This document is focused on the Flannel case. See
|
||||
`kubernetes/cluster/ubuntu/config-default.sh` for remarks on how to
|
||||
use a CNI plugin instead.
|
||||
|
||||
[Cloud team from Zhejiang University](https://github.com/ZJU-SEL) will maintain this work.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
1. The nodes have Docker version 1.2+ and bridge-utils (used to manipulate the Linux bridge) installed.
|
||||
2. All machines can communicate with each other. The master node needs to be connected to the Internet to download the necessary files, while worker nodes do not.
|
||||
3. This guide has been tested on Ubuntu 14.04 LTS 64-bit server; it does not work with Ubuntu 15.x, which uses systemd instead of upstart.
|
||||
4. Dependencies of this guide: etcd 2.2.1, flannel 0.5.5, and Kubernetes 1.1.4; higher versions may also work.
|
||||
5. All the remote servers can be logged into via SSH without a password, using key-based authentication.
|
||||
|
||||
|
||||
## Starting a Cluster
|
||||
|
||||
### Set up working directory
|
||||
|
||||
Clone the kubernetes GitHub repo locally:
|
||||
|
||||
```console
|
||||
$ git clone https://github.com/kubernetes/kubernetes.git
|
||||
```
|
||||
|
||||
### Configure and start the Kubernetes cluster
|
||||
|
||||
The startup process will first download all the required binaries automatically. By default, the etcd version is 2.2.1, the flannel version is 0.5.5, and the Kubernetes version is 1.1.4. You can customize these by changing the corresponding variables `ETCD_VERSION`, `FLANNEL_VERSION`, and `KUBE_VERSION`, as in the following example.
|
||||
|
||||
```console
|
||||
$ export KUBE_VERSION=1.0.5
|
||||
$ export FLANNEL_VERSION=0.5.0
|
||||
$ export ETCD_VERSION=2.2.0
|
||||
```
|
||||
|
||||
**Note**
|
||||
|
||||
If you bring up a cluster with Kubernetes v1.1.1, the `controller manager` may fail to start due to [a known issue](https://github.com/kubernetes/kubernetes/issues/17109). You can start it manually with the following command on the remote master server, but only after the `api-server` is up. This issue is fixed in v1.1.2 and later.
|
||||
|
||||
```console
|
||||
$ sudo service kube-controller-manager start
|
||||
```
|
||||
|
||||
Note that we use flannel here to set up the overlay network, but it is optional: you can build the Kubernetes cluster with native networking, or use flannel, Open vSwitch, or any other SDN tool you like.
|
||||
|
||||
An example cluster is listed below:
|
||||
|
||||
| IP Address | Role |
|
||||
|-------------|----------|
|
||||
|10.10.103.223| node |
|
||||
|10.10.103.162| node |
|
||||
|10.10.103.250| both master and node|
|
||||
|
||||
First, configure the cluster information in `cluster/ubuntu/config-default.sh`. A simple sample follows:
|
||||
|
||||
```sh
|
||||
export nodes="vcap@10.10.103.250 vcap@10.10.103.162 vcap@10.10.103.223"
|
||||
|
||||
export role="ai i i"
|
||||
|
||||
export NUM_NODES=${NUM_NODES:-3}
|
||||
|
||||
export SERVICE_CLUSTER_IP_RANGE=192.168.3.0/24
|
||||
|
||||
export FLANNEL_NET=172.16.0.0/16
|
||||
```
|
||||
|
||||
The first variable, `nodes`, defines all your cluster nodes. The master node comes first, and entries are separated by spaces, like `<user_1@ip_1> <user_2@ip_2> <user_3@ip_3>`.
|
||||
|
||||
The `role` variable then defines the role of each machine, in the same order: "ai" means the machine acts as both master and node, "a" means master only, and "i" means node only.
|
||||
|
||||
The `NUM_NODES` variable defines the total number of nodes.
|
||||
|
||||
The `SERVICE_CLUSTER_IP_RANGE` variable defines the Kubernetes service IP range. Make sure you define a valid private IP range here, because some IaaS providers reserve certain private IPs. You can use one of the three private network ranges defined in RFC 1918 (listed below), but avoid choosing one that conflicts with your own private network range.
|
||||
|
||||
10.0.0.0 - 10.255.255.255 (10/8 prefix)
|
||||
|
||||
172.16.0.0 - 172.31.255.255 (172.16/12 prefix)
|
||||
|
||||
192.168.0.0 - 192.168.255.255 (192.168/16 prefix)
|
||||
|
||||
The `FLANNEL_NET` variable defines the IP range used for the flannel overlay network; it should not conflict with the `SERVICE_CLUSTER_IP_RANGE` above.
|
||||
You can optionally provide additional Flannel network configuration
|
||||
through `FLANNEL_OTHER_NET_CONFIG`, as explained in `cluster/ubuntu/config-default.sh`.
|
||||
|
||||
**Note:** When deploying, the master needs to be connected to the Internet to download the necessary files. If your machines are located in a private network that needs a proxy to reach the Internet, you can set `PROXY_SETTING` in `cluster/ubuntu/config-default.sh`, for example:
|
||||
|
||||
PROXY_SETTING="http_proxy=http://server:port https_proxy=https://server:port"
|
||||
|
||||
After all the above variables are set correctly, run the following command in the `cluster/` directory to bring up the whole cluster.
|
||||
|
||||
`$ KUBERNETES_PROVIDER=ubuntu ./kube-up.sh`
|
||||
|
||||
The scripts automatically copy binaries and config files to all the machines via `scp` and start the Kubernetes services on them. The only thing you need to do is type the sudo password when prompted.
|
||||
|
||||
```console
|
||||
Deploying node on machine 10.10.103.223
|
||||
...
|
||||
[sudo] password to start node:
|
||||
```
|
||||
|
||||
If everything works correctly, you will see the following console message indicating that the cluster is up.
|
||||
|
||||
```console
|
||||
Cluster validation succeeded
|
||||
```
|
||||
|
||||
### Test it out
|
||||
|
||||
You can use the `kubectl` command to check whether the newly created cluster is working correctly. The `kubectl` binary is under the `cluster/ubuntu/binaries` directory; add it to your `PATH` so you can use the commands below directly.
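One way to do that, run from the root of the kubernetes tree (the path below is the directory mentioned above):

```sh
export PATH=$PATH:$(pwd)/cluster/ubuntu/binaries
```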
|
||||
|
||||
For example, use `$ kubectl get nodes` to see if all of your nodes are ready.
|
||||
|
||||
```console
|
||||
$ kubectl get nodes
|
||||
NAME LABELS STATUS
|
||||
10.10.103.162 kubernetes.io/hostname=10.10.103.162 Ready
|
||||
10.10.103.223 kubernetes.io/hostname=10.10.103.223 Ready
|
||||
10.10.103.250 kubernetes.io/hostname=10.10.103.250 Ready
|
||||
```
|
||||
|
||||
You can also run the Kubernetes [guestbook example](../../examples/guestbook/) to build a Redis-backed cluster.
|
||||
|
||||
|
||||
### Deploy addons
|
||||
|
||||
Assuming you now have a running cluster, this section explains how to deploy addons such as DNS and the UI onto the existing cluster.
|
||||
|
||||
DNS is configured in `cluster/ubuntu/config-default.sh`:
|
||||
|
||||
```sh
|
||||
ENABLE_CLUSTER_DNS="${KUBE_ENABLE_CLUSTER_DNS:-true}"
|
||||
|
||||
DNS_SERVER_IP="192.168.3.10"
|
||||
|
||||
DNS_DOMAIN="cluster.local"
|
||||
|
||||
DNS_REPLICAS=1
|
||||
```
|
||||
|
||||
`DNS_SERVER_IP` defines the IP of the DNS server, which must be within the `SERVICE_CLUSTER_IP_RANGE`. `DNS_REPLICAS` defines how many DNS pods run in the cluster.
|
||||
|
||||
By default, the kube-ui addon is enabled as well.
|
||||
|
||||
```sh
|
||||
ENABLE_CLUSTER_UI="${KUBE_ENABLE_CLUSTER_UI:-true}"
|
||||
```
|
||||
|
||||
After all the above variables have been set, run the following commands.
|
||||
|
||||
```console
|
||||
$ cd cluster/ubuntu
|
||||
$ KUBERNETES_PROVIDER=ubuntu ./deployAddons.sh
|
||||
```
|
||||
|
||||
After some time, you can run `kubectl get pods --namespace=kube-system` to verify that the DNS and UI pods are running in the cluster.
|
||||
|
||||
### Ongoing work
|
||||
|
||||
We are working on the following features, which we'd like everybody to know about:
|
||||
|
||||
1. Running Kubernetes binaries in Docker using [kube-in-docker](https://github.com/ZJU-SEL/kube-in-docker/tree/baremetal-kube), to eliminate OS-distro differences.
2. Tear-down scripts: clear and re-create the whole stack in one step.
|
||||
|
||||
### Troubleshooting
|
||||
|
||||
Generally, what this approach does is quite simple:
|
||||
|
||||
1. Download and copy binaries and configuration files to proper directories on every node.
|
||||
2. Configure `etcd` for master node using IPs based on input from user.
|
||||
3. Create and start flannel network for worker nodes.
|
||||
|
||||
So if you encounter a problem, check etcd configuration of master node first.
|
||||
|
||||
1. Check `/var/log/upstart/etcd.log` for suspicious etcd log entries.
2. You may find the following commands useful: the first brings down the cluster, and the second brings it up again.
|
||||
|
||||
```console
|
||||
$ KUBERNETES_PROVIDER=ubuntu ./kube-down.sh
|
||||
$ KUBERNETES_PROVIDER=ubuntu ./kube-up.sh
|
||||
```
|
||||
|
||||
3. You can also customize settings in `/etc/default/{component_name}` and restart the component via `$ sudo service {component_name} restart`.
|
||||
|
||||
|
||||
## Upgrading a Cluster
|
||||
|
||||
If you already have a Kubernetes cluster and want to upgrade to a new version, you can use the following command in the `cluster/` directory to update the whole cluster or a specified node to a new version.
|
||||
|
||||
```console
|
||||
$ KUBERNETES_PROVIDER=ubuntu ./kube-push.sh [-m|-n <node id>] <version>
|
||||
```
|
||||
|
||||
The upgrade can be done for all components (the default), the master only (`-m`), or a specified node (`-n`). Upgrading a single node is currently experimental. If the version is not specified, the script will try to use local binaries; in that case, make sure all the binaries are prepared in the expected directory path `cluster/ubuntu/binaries`.
|
||||
|
||||
```console
|
||||
$ tree cluster/ubuntu/binaries
|
||||
binaries/
|
||||
├── kubectl
|
||||
├── master
|
||||
│ ├── etcd
|
||||
│ ├── etcdctl
|
||||
│ ├── flanneld
|
||||
│ ├── kube-apiserver
|
||||
│ ├── kube-controller-manager
|
||||
│ └── kube-scheduler
|
||||
└── minion
|
||||
├── flanneld
|
||||
├── kubelet
|
||||
└── kube-proxy
|
||||
```
|
||||
|
||||
You can use the following command to get help.
|
||||
|
||||
```console
|
||||
$ KUBERNETES_PROVIDER=ubuntu ./kube-push.sh -h
|
||||
```
|
||||
|
||||
Here are some examples:
|
||||
|
||||
* upgrade master to version 1.0.5: `$ KUBERNETES_PROVIDER=ubuntu ./kube-push.sh -m 1.0.5`
|
||||
* upgrade node `vcap@10.10.103.223` to version 1.0.5 : `$ KUBERNETES_PROVIDER=ubuntu ./kube-push.sh -n 10.10.103.223 1.0.5`
|
||||
* upgrade master and all nodes to version 1.0.5: `$ KUBERNETES_PROVIDER=ubuntu ./kube-push.sh 1.0.5`
|
||||
|
||||
The script will not delete any resources in your cluster; it only replaces the binaries.
|
||||
|
||||
### Test it out
|
||||
|
||||
You can use the `kubectl` command to check whether the newly upgraded cluster is working correctly. See also [test-it-out](ubuntu.md#test-it-out).
|
||||
|
||||
To make sure the upgraded cluster's version is what you expect, these commands are helpful:
|
||||
* upgrade all components or master: `$ kubectl version`. Check the *Server Version*.
|
||||
* upgrade node `vcap@10.10.102.223`: `$ ssh -t vcap@10.10.102.223 'cd /opt/bin && sudo ./kubelet --version'`
|
||||
This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/ubuntu/
|
||||
|
||||
|
||||
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
|
||||
|
@@ -32,397 +32,7 @@ Documentation for other releases can be found at
|
||||
|
||||
<!-- END MUNGE: UNVERSIONED_WARNING -->
|
||||
|
||||
## Getting started with Vagrant
|
||||
|
||||
Running Kubernetes with Vagrant (and VirtualBox) is an easy way to run/test/develop on your local machine (Linux, Mac OS X).
|
||||
|
||||
**Table of Contents**
|
||||
|
||||
- [Prerequisites](#prerequisites)
|
||||
- [Setup](#setup)
|
||||
- [Interacting with your Kubernetes cluster with Vagrant.](#interacting-with-your-kubernetes-cluster-with-vagrant)
|
||||
- [Authenticating with your master](#authenticating-with-your-master)
|
||||
- [Running containers](#running-containers)
|
||||
- [Troubleshooting](#troubleshooting)
|
||||
- [I keep downloading the same (large) box all the time!](#i-keep-downloading-the-same-large-box-all-the-time)
|
||||
- [I am getting timeouts when trying to curl the master from my host!](#i-am-getting-timeouts-when-trying-to-curl-the-master-from-my-host)
|
||||
- [I just created the cluster, but I am getting authorization errors!](#i-just-created-the-cluster-but-i-am-getting-authorization-errors)
|
||||
- [I just created the cluster, but I do not see my container running!](#i-just-created-the-cluster-but-i-do-not-see-my-container-running)
|
||||
- [I want to make changes to Kubernetes code!](#i-want-to-make-changes-to-kubernetes-code)
|
||||
- [I have brought Vagrant up but the nodes cannot validate!](#i-have-brought-vagrant-up-but-the-nodes-cannot-validate)
|
||||
- [I want to change the number of nodes!](#i-want-to-change-the-number-of-nodes)
|
||||
- [I want my VMs to have more memory!](#i-want-my-vms-to-have-more-memory)
|
||||
- [I ran vagrant suspend and nothing works!](#i-ran-vagrant-suspend-and-nothing-works)
|
||||
- [I want vagrant to sync folders via nfs!](#i-want-vagrant-to-sync-folders-via-nfs)
|
||||
|
||||
### Prerequisites
|
||||
|
||||
1. Install the latest version (>= 1.7.4) of Vagrant from http://www.vagrantup.com/downloads.html
|
||||
2. Install one of:
|
||||
1. The latest version of Virtual Box from https://www.virtualbox.org/wiki/Downloads
|
||||
2. [VMWare Fusion](https://www.vmware.com/products/fusion/) version 5 or greater as well as the appropriate [Vagrant VMWare Fusion provider](https://www.vagrantup.com/vmware)
|
||||
3. [VMWare Workstation](https://www.vmware.com/products/workstation/) version 9 or greater as well as the [Vagrant VMWare Workstation provider](https://www.vagrantup.com/vmware)
|
||||
4. [Parallels Desktop](https://www.parallels.com/products/desktop/) version 9 or greater as well as the [Vagrant Parallels provider](https://parallels.github.io/vagrant-parallels/)
|
||||
5. libvirt with KVM and hardware virtualization support enabled, together with the [Vagrant-libvirt](https://github.com/pradels/vagrant-libvirt) provider. Fedora provides an official RPM, so `yum install vagrant-libvirt` can be used.

### Setup

Setting up a cluster is as simple as running:

```sh
export KUBERNETES_PROVIDER=vagrant
curl -sS https://get.k8s.io | bash
```

Alternatively, you can download a [Kubernetes release](https://github.com/kubernetes/kubernetes/releases) and extract the archive. To start your local cluster, open a shell and run:

```sh
cd kubernetes

export KUBERNETES_PROVIDER=vagrant
./cluster/kube-up.sh
```

The `KUBERNETES_PROVIDER` environment variable tells the various cluster management scripts which variant to use. If you forget to set it, the scripts assume you are running on Google Compute Engine.
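
If you plan to use the Vagrant provider regularly, it can be convenient to persist this setting in your shell profile so that new shells pick it up automatically (the profile path below is only an example; adjust it for your shell):

```sh
echo 'export KUBERNETES_PROVIDER=vagrant' >> ~/.bashrc
```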

By default, the Vagrant setup will create a single master VM (called kubernetes-master) and one node (called kubernetes-node-1). Each VM will take 1 GB of memory, so make sure you have at least 2 to 4 GB of free memory (plus appropriate free disk space).

Vagrant will provision each machine in the cluster with all the necessary components to run Kubernetes. The initial setup can take a few minutes to complete on each machine.

If you installed more than one Vagrant provider, Kubernetes will usually pick the appropriate one. However, you can override which one Kubernetes will use by setting the [`VAGRANT_DEFAULT_PROVIDER`](https://docs.vagrantup.com/v2/providers/default.html) environment variable:

```sh
export VAGRANT_DEFAULT_PROVIDER=parallels
export KUBERNETES_PROVIDER=vagrant
./cluster/kube-up.sh
```

By default, each VM in the cluster runs Fedora.

To access the master or any node:

```sh
vagrant ssh master
vagrant ssh node-1
```

If you are running more than one node, you can access the others with:

```sh
vagrant ssh node-2
vagrant ssh node-3
```

Each node in the cluster installs the Docker daemon and the kubelet.

The master node instantiates the Kubernetes master components as pods on the machine.

To view the service status and/or logs on the kubernetes-master:

```console
$ vagrant ssh master
[vagrant@kubernetes-master ~] $ sudo su

[root@kubernetes-master ~] $ systemctl status kubelet
[root@kubernetes-master ~] $ journalctl -ru kubelet

[root@kubernetes-master ~] $ systemctl status docker
[root@kubernetes-master ~] $ journalctl -ru docker

[root@kubernetes-master ~] $ tail -f /var/log/kube-apiserver.log
[root@kubernetes-master ~] $ tail -f /var/log/kube-controller-manager.log
[root@kubernetes-master ~] $ tail -f /var/log/kube-scheduler.log
```

To view the services on any of the nodes:

```console
$ vagrant ssh node-1
[vagrant@kubernetes-node-1 ~] $ sudo su

[root@kubernetes-node-1 ~] $ systemctl status kubelet
[root@kubernetes-node-1 ~] $ journalctl -ru kubelet

[root@kubernetes-node-1 ~] $ systemctl status docker
[root@kubernetes-node-1 ~] $ journalctl -ru docker
```

### Interacting with your Kubernetes cluster with Vagrant.

With your Kubernetes cluster up, you can manage the nodes in your cluster with the regular Vagrant commands.
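
For example, `vagrant status` shows the VMs that make up the cluster and whether they are running (the listing below is illustrative; the exact output depends on your provider):

```console
$ vagrant status
Current machine states:

master                    running (virtualbox)
node-1                    running (virtualbox)
```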

To push updates to new Kubernetes code after making source changes:

```sh
./cluster/kube-push.sh
```

To stop and then restart the cluster:

```sh
vagrant halt
./cluster/kube-up.sh
```

To destroy the cluster:

```sh
vagrant destroy
```

Once your Vagrant machines are up and provisioned, the first thing to do is to check that you can use the `kubectl.sh` script.

You may need to build the binaries first; you can do this with `make`.
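
If you are working from a source checkout rather than an extracted binary release, a minimal build step looks like this (run from the root of the repository):

```sh
cd kubernetes
make   # builds the client and server binaries used by the cluster/ scripts
```

With the binaries in place, the `kubectl.sh` check below should list your nodes.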

```console
$ ./cluster/kubectl.sh get nodes

NAME         LABELS
10.245.1.4   <none>
10.245.1.5   <none>
10.245.1.3   <none>
```

### Authenticating with your master

When using the vagrant provider in Kubernetes, the `cluster/kubectl.sh` script will cache your credentials in a `~/.kubernetes_vagrant_auth` file so you will not be prompted for them in the future.

```sh
cat ~/.kubernetes_vagrant_auth
```

```json
{
  "User": "vagrant",
  "Password": "vagrant",
  "CAFile": "/home/k8s_user/.kubernetes.vagrant.ca.crt",
  "CertFile": "/home/k8s_user/.kubecfg.vagrant.crt",
  "KeyFile": "/home/k8s_user/.kubecfg.vagrant.key"
}
```

You should now be set to use the `cluster/kubectl.sh` script. For example, try listing the nodes that you have started:

```sh
./cluster/kubectl.sh get nodes
```

### Running containers

Your cluster is running; you can list the nodes in it:

```console
$ ./cluster/kubectl.sh get nodes

NAME         LABELS
10.245.2.4   <none>
10.245.2.3   <none>
10.245.2.2   <none>
```

Now start running some containers!

You can now use any of the `cluster/kube-*.sh` commands to interact with your VMs. Before you start a container, there will be no pods, services, or replication controllers.

```console
$ ./cluster/kubectl.sh get pods
NAME      READY     STATUS    RESTARTS   AGE

$ ./cluster/kubectl.sh get services
NAME      CLUSTER_IP   EXTERNAL_IP   PORT(S)   SELECTOR   AGE

$ ./cluster/kubectl.sh get replicationcontrollers
CONTROLLER   CONTAINER(S)   IMAGE(S)   SELECTOR   REPLICAS
```

Start a container running nginx with a replication controller and three replicas:

```console
$ ./cluster/kubectl.sh run my-nginx --image=nginx --replicas=3 --port=80
```

When listing the pods, you will see that three containers have been started and are in the `Pending` state:

```console
$ ./cluster/kubectl.sh get pods
NAME             READY     STATUS    RESTARTS   AGE
my-nginx-5kq0g   0/1       Pending   0          10s
my-nginx-gr3hh   0/1       Pending   0          10s
my-nginx-xql4j   0/1       Pending   0          10s
```

You need to wait for the provisioning to complete; you can monitor the nodes with:

```console
$ vagrant ssh node-1 -c 'sudo docker images'
kubernetes-node-1:
    REPOSITORY          TAG       IMAGE ID       CREATED        VIRTUAL SIZE
    <none>              <none>    96864a7d2df3   26 hours ago   204.4 MB
    google/cadvisor     latest    e0575e677c50   13 days ago    12.64 MB
    kubernetes/pause    latest    6c4579af347b   8 weeks ago    239.8 kB
```

Once the Docker image for nginx has been downloaded, the container will start and you can list it:

```console
$ vagrant ssh node-1 -c 'sudo docker ps'
kubernetes-node-1:
CONTAINER ID   IMAGE                     COMMAND               CREATED          STATUS          PORTS                    NAMES
dbe79bf6e25b   nginx:latest              "nginx"               21 seconds ago   Up 19 seconds                            k8s--mynginx.8c5b8a3a--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--fcfa837f
fa0e29c94501   kubernetes/pause:latest   "/pause"              8 minutes ago    Up 8 minutes    0.0.0.0:8080->80/tcp     k8s--net.a90e7ce4--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--baf5b21b
aa2ee3ed844a   google/cadvisor:latest    "/usr/bin/cadvisor"   38 minutes ago   Up 38 minutes                            k8s--cadvisor.9e90d182--cadvisor_-_agent.file--4626b3a2
65a3a926f357   kubernetes/pause:latest   "/pause"              39 minutes ago   Up 39 minutes   0.0.0.0:4194->8080/tcp   k8s--net.c5ba7f0e--cadvisor_-_agent.file--342fd561
```

Going back to listing the pods, services and replication controllers, you now have:

```console
$ ./cluster/kubectl.sh get pods
NAME             READY     STATUS    RESTARTS   AGE
my-nginx-5kq0g   1/1       Running   0          1m
my-nginx-gr3hh   1/1       Running   0          1m
my-nginx-xql4j   1/1       Running   0          1m

$ ./cluster/kubectl.sh get services
NAME      CLUSTER_IP   EXTERNAL_IP   PORT(S)   SELECTOR   AGE

$ ./cluster/kubectl.sh get replicationcontrollers
CONTROLLER   CONTAINER(S)   IMAGE(S)   SELECTOR       REPLICAS   AGE
my-nginx     my-nginx       nginx      run=my-nginx   3          1m
```

We did not start any services, hence there are none listed. But we see three replicas displayed properly.
Check the [guestbook](../../examples/guestbook/README.md) application to learn how to create a service; a minimal `expose` sketch also follows the scaling example below.

You can already play with scaling the replicas with:

```console
$ ./cluster/kubectl.sh scale rc my-nginx --replicas=2
$ ./cluster/kubectl.sh get pods
NAME             READY     STATUS    RESTARTS   AGE
my-nginx-5kq0g   1/1       Running   0          2m
my-nginx-gr3hh   1/1       Running   0          2m
```
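
As a small preview of services, a minimal sketch of exposing the replication controller inside the cluster could look like the following (the service name and port are illustrative; the guestbook example above remains the fuller walkthrough):

```sh
./cluster/kubectl.sh expose rc my-nginx --port=80 --name=my-nginx
./cluster/kubectl.sh get services
```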

Congratulations!

### Troubleshooting

#### I keep downloading the same (large) box all the time!

By default, the Vagrantfile will download the box from S3. You can change this (and cache the box locally) by providing a name and an alternate URL when calling `kube-up.sh`:

```sh
export KUBERNETES_BOX_NAME=choose_your_own_name_for_your_kuber_box
export KUBERNETES_BOX_URL=path_of_your_kuber_box
export KUBERNETES_PROVIDER=vagrant
./cluster/kube-up.sh
```

#### I am getting timeouts when trying to curl the master from my host!

During provisioning of the cluster, you may see the following message:

```sh
Validating node-1
.............
Waiting for each node to be registered with cloud provider
error: couldn't read version from server: Get https://10.245.1.2/api: dial tcp 10.245.1.2:443: i/o timeout
```

Some users have reported that VPNs may prevent traffic from the host machine from being routed into the virtual machine network.

To debug, first verify that the master is binding to the proper IP address:

```console
$ vagrant ssh master
$ ifconfig | grep eth1 -C 2
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.245.1.2  netmask 255.255.255.0  broadcast 10.245.1.255
```

Then verify that your host machine has a network connection to a bridge that can serve that address:

```console
$ ifconfig | grep 10.245.1 -C 2

vboxnet5: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.245.1.1  netmask 255.255.255.0  broadcast 10.245.1.255
        inet6 fe80::800:27ff:fe00:5  prefixlen 64  scopeid 0x20<link>
        ether 0a:00:27:00:00:05  txqueuelen 1000  (Ethernet)
```

If you do not see a response on your host machine, you will most likely need to connect your host to the virtual network created by the virtualization provider.

If you do see a network, but are still unable to ping the machine, check if your VPN is blocking the request.
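
A quick way to check both cases from the host is to probe the master address directly (10.245.1.2 is the default master IP used by the Vagrant provider; adjust it if you changed the configuration):

```sh
ping -c 3 10.245.1.2
curl -k https://10.245.1.2/api   # a TLS or auth error still proves connectivity; a timeout does not
```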

#### I just created the cluster, but I am getting authorization errors!

You probably have an incorrect `~/.kubernetes_vagrant_auth` file for the cluster you are attempting to contact.

```sh
rm ~/.kubernetes_vagrant_auth
```

After using `kubectl.sh`, make sure that the correct credentials are set:

```sh
cat ~/.kubernetes_vagrant_auth
```

```json
{
  "User": "vagrant",
  "Password": "vagrant"
}
```

#### I just created the cluster, but I do not see my container running!

If this is your first time creating the cluster, the kubelet on each node schedules a number of `docker pull` requests to fetch prerequisite images. This can take some time and, as a result, may delay your initial pod getting provisioned.
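
If you would rather not poll by hand, something like the following lets you watch the pods move from `Pending` to `Running` once the image pulls finish (assuming your `kubectl` build supports the `--watch` flag):

```sh
./cluster/kubectl.sh get pods --watch
```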

#### I want to make changes to Kubernetes code!

To set up a Vagrant cluster for hacking, follow the [Vagrant developer guide](../devel/developer-guides/vagrant.md).

#### I have brought Vagrant up but the nodes cannot validate!

Log on to one of the nodes (`vagrant ssh node-1`) and inspect the Salt minion log (`sudo cat /var/log/salt/minion`).

#### I want to change the number of nodes!

You can control the number of nodes that are instantiated via the environment variable `NUM_NODES` on your host machine. If you plan to work with replicas, we strongly encourage you to work with enough nodes to satisfy your largest intended replica size. If you do not plan to work with replicas, you can save some system resources by running with a single node. You do this by setting `NUM_NODES` to 1 like so:

```sh
export NUM_NODES=1
```

#### I want my VMs to have more memory!

You can control the memory allotted to virtual machines with the `KUBERNETES_MEMORY` environment variable. Just set it to the number of megabytes you would like the machines to have. For example:

```sh
export KUBERNETES_MEMORY=2048
```

If you need more granular control, you can set the amount of memory for the master and nodes independently. For example:

```sh
export KUBERNETES_MASTER_MEMORY=1536
export KUBERNETES_NODE_MEMORY=2048
```

#### I ran vagrant suspend and nothing works!

`vagrant suspend` seems to mess up the network. This is not supported at this time.
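
If you have already run `vagrant suspend` and the cluster is misbehaving, the simplest recovery is usually to recreate it rather than resume it:

```sh
vagrant destroy -f     # discard the suspended VMs
./cluster/kube-up.sh   # bring up a fresh cluster
```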

#### I want vagrant to sync folders via nfs!

You can ensure that Vagrant uses NFS to sync folders with the virtual machines by setting the `KUBERNETES_VAGRANT_USE_NFS` environment variable to `true`. NFS is faster than VirtualBox's or VMware's "shared folders" and does not require guest additions. See the [Vagrant docs](http://docs.vagrantup.com/v2/synced-folders/nfs.html) for details on configuring NFS on the host. This setting has no effect on the libvirt provider, which uses NFS by default. For example:

```sh
export KUBERNETES_VAGRANT_USE_NFS=true
```

This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/vagrant/


<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

@@ -31,106 +31,8 @@ Documentation for other releases can be found at

<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

Getting started with vSphere
-------------------------------

The example below creates a Kubernetes cluster with 4 worker node Virtual Machines and a master Virtual Machine (i.e. 5 VMs in your cluster). This cluster is set up and controlled from your workstation (or wherever you find convenient).

**Table of Contents**

- [Prerequisites](#prerequisites)
- [Setup](#setup)
- [Starting a cluster](#starting-a-cluster)
- [Extra: debugging deployment failure](#extra-debugging-deployment-failure)

### Prerequisites

1. You need administrator credentials to an ESXi machine or vCenter instance.
2. You must have Go (see [here](../devel/development.md#go-versions) for supported versions) installed: [www.golang.org](http://www.golang.org).
3. You must have your `GOPATH` set up and include `$GOPATH/bin` in your `PATH`.

   ```sh
   export GOPATH=$HOME/src/go
   mkdir -p $GOPATH
   export PATH=$PATH:$GOPATH/bin
   ```

4. Install the govc tool to interact with ESXi/vCenter:

   ```sh
   go get github.com/vmware/govmomi/govc
   ```

5. Get or build a [binary release](binary_release.md)

### Setup

Download a prebuilt Debian 8.2 VMDK that we'll use as a base image:

```sh
curl --remote-name-all https://storage.googleapis.com/govmomi/vmdk/2016-01-08/kube.vmdk.gz{,.md5}
md5sum -c kube.vmdk.gz.md5
gzip -d kube.vmdk.gz
```

Import this VMDK into your vSphere datastore:

```sh
export GOVC_URL='hostname'         # hostname of the vCenter or ESXi host
export GOVC_USERNAME='username'    # username for logging in to vSphere
export GOVC_PASSWORD='password'    # password for the above username
export GOVC_NETWORK='Network Name' # name of the network the VMs should join; often this is "VM Network"
export GOVC_INSECURE=1             # set this if the host above uses a self-signed cert
export GOVC_DATASTORE='target datastore'
export GOVC_RESOURCE_POOL='resource pool or cluster with access to datastore'

govc import.vmdk kube.vmdk ./kube/
```

Verify that the VMDK was correctly uploaded and expanded to ~3GiB:

```sh
govc datastore.ls ./kube/
```

Take a look at the file `cluster/vsphere/config-common.sh` and fill in the required parameters. The guest login for the image that you imported is `kube:kube`.

### Starting a cluster

Now, let's continue with deploying Kubernetes. This process takes about 20-30 minutes, depending on your network.

#### From extracted binary release

```sh
cd kubernetes
KUBERNETES_PROVIDER=vsphere cluster/kube-up.sh
```

#### Build from source

```sh
cd kubernetes
make release
KUBERNETES_PROVIDER=vsphere cluster/kube-up.sh
```

Refer to the top level README and the getting started guide for Google Compute Engine. Once you have successfully reached this point, your vSphere Kubernetes deployment works just like any other!

**Enjoy!**

### Extra: debugging deployment failure

The output of `kube-up.sh` displays the IP addresses of the VMs it deploys. You can log in to any VM as the `kube` user to poke around and figure out what is going on (authenticate with your SSH key if it was authorized, or use the password `kube` otherwise).
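
For example, a minimal way to start poking around on one of the deployed VMs (the address below is hypothetical; use an IP reported by `kube-up.sh`):

```sh
ssh kube@10.20.30.40   # log in as the "kube" user; the password is "kube" if no SSH key was authorized
sudo docker ps         # assuming the kube user has sudo rights on the image, check which containers came up
```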

This file has moved to: http://kubernetes.github.io/docs/getting-started-guides/vsphere/


<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->