RKE supports pluggable addons on cluster bootstrap. You can specify addon YAML in the `cluster.yml` file, and when running
```
rke up --config cluster.yml
```
RKE will deploy the addon YAML after the cluster starts. RKE first uploads the YAML file as a ConfigMap in the Kubernetes cluster and then runs a Kubernetes job that mounts this ConfigMap and deploys the addons.
> Note that RKE doesn't yet support removal of addons, so once they are deployed the first time you can't change them using RKE.
To start using addons, use the `addons:` option in the `cluster.yml` file. Note that the `addons` option is a multi-line string (hence the `|-` block indicator), in which you can specify multiple YAML documents and separate them with `---`.
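As an illustration, here is a minimal `addons` section that deploys a single nginx pod (the pod name, namespace, and image are placeholders, not requirements):

```
addons: |-
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: my-nginx
      namespace: default
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```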
## High Availability
RKE is HA ready. You can specify more than one controlplane host in the `cluster.yml` file, and RKE will deploy the master components on all of them. The kubelets are configured to connect to `127.0.0.1:6443` by default, which is the address of the `nginx-proxy` service that proxies requests to all master nodes.
To start an HA cluster, just specify more than one host with the role `controlplane`, and start the cluster normally.
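For example, a sketch of a `nodes` section with two controlplane hosts (the addresses and SSH user below are placeholders):

```
nodes:
  - address: 1.1.1.1
    user: ubuntu
    role: [controlplane, etcd]
  - address: 2.2.2.2
    user: ubuntu
    role: [controlplane, etcd]
  - address: 3.3.3.3
    user: ubuntu
    role: [worker]
```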
## Adding/Removing Nodes
RKE supports adding/removing nodes for worker and controlplane hosts. To add nodes, update the `cluster.yml` file with the additional nodes and run `rke up` again with the same file.
To remove nodes, just remove them from the hosts list in the cluster configuration file `cluster.yml`, and re-run the `rke up` command.
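For instance, to add a worker you would append a new entry to the `nodes` list and re-run `rke up` (the address and user here are placeholders):

```
nodes:
  # ... existing nodes unchanged ...
  - address: 4.4.4.4
    user: ubuntu
    role: [worker]
```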
## Cluster Remove
RKE supports the `rke remove` command, which does the following:
- Connects to each host and removes the Kubernetes services deployed on it.
- Cleans each host of the directories left behind by the services:
  - /etc/kubernetes/ssl
  - /var/lib/etcd
  - /etc/cni
  - /opt/cni
  - /var/run/calico
> Note that this command is irreversible and will destroy the Kubernetes cluster entirely.
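Assuming the same configuration file used to bring the cluster up, the removal is invoked as (the `--config` flag mirrors `rke up`; the file path is illustrative):

```
rke remove --config cluster.yml
```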
## Cluster Upgrade
RKE supports Kubernetes cluster upgrades by changing the image version of the services. To do that, change the `image` option for each service, for example:
```
image: rancher/k8s:v1.8.2-rancher1
```
to
```
image: rancher/k8s:v1.8.3-rancher2
```
And then run:
```
rke up --config cluster.yml
```
RKE will first look for the local `.kube_config_cluster.yml` file and then try to upgrade each service to the new image.
> Note that rollback isn't supported in RKE and may lead to unexpected results.
## RKE Config
RKE supports the `rke config` command, which generates a cluster config template for the user. To use this command, run:
```
rke config --name mycluster.yml
```
RKE will ask some questions about the cluster file, such as the number of hosts, IPs, SSH users, etc. The `--empty` option generates an empty `cluster.yml` file, and if you just want to print the template to the screen instead of saving it to a file, you can use `--print`.
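For example, the flags described above can be used as follows (these invocations are assumed from the flag descriptions, not taken from the tool's help output):

```
rke config --empty --name cluster.yml
rke config --print
```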