
More random docs

This commit is contained in:
Darren Shepherd
2021-10-22 11:40:06 -07:00
parent 4e708c8a1b
commit 06fc7fc32e
12 changed files with 608 additions and 36 deletions

docs/architecture.md Normal file

@@ -0,0 +1,58 @@
# Architecture
RancherOS v2 is an immutable Linux distribution built to run Rancher and
its corresponding Kubernetes distributions [RKE2](https://rke2.io)
and [k3s](https://k3s.io). It is built using the [cOS-toolkit](https://rancher-sandbox.github.io/cos-toolkit-docs/docs/)
and is based on openSUSE. Initial node configuration is done using only a
cloud-init style approach, and all further maintenance is done using
Kubernetes operators.
## Use Cases
RancherOS is intended to be run as the operating system beneath a Rancher Multi-Cluster
Management server or as a node in a Kubernetes cluster managed by Rancher. RancherOS
also allows you to build standalone Kubernetes clusters that run an embedded,
smaller version of Rancher to manage the local cluster. A key attribute of RancherOS
is that it is managed by Rancher, and thus Rancher will exist either locally in the cluster
or centrally with the Rancher Multi-Cluster Manager.
## OCI Image based
RancherOS v2 is an A/B style image-based distribution. The system first runs
on a read-only image A; to upgrade, it pulls a new read-only image
B and then reboots to run on B. What is unique about
RancherOS v2 is that the runtime images come from OCI images. Not an
OCI image containing special artifacts, but an actual Docker-runnable
image that is built using standard Docker build processes. RancherOS is
built using a normal `docker build`, and if you wish to customize the OS
image all you need to do is create a new `Dockerfile` (see [Custom Images](./customizing.md)).
## rancherd
RancherOS v2 includes no container runtime, Kubernetes distribution,
or Rancher itself. All of these assets are pulled dynamically at runtime. All that
is included in RancherOS is [rancherd](https://github.com/rancher/rancherd), which
is responsible for bootstrapping RKE2/k3s and Rancher from an OCI registry. This means
an update to containerd, k3s, RKE2, or Rancher does not require an OS upgrade
or node reboot.
## cloud-init
RancherOS v2 is initially configured using a simple version of `cloud-init`.
It is not expected that one will need to do much customization of RancherOS,
as the core OS's sole purpose is to run Rancher and Kubernetes, not to serve as
a generic Linux distribution.
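For illustration, a minimal sketch of such a configuration might only authorize
an SSH key and assign a rancherd role (all fields are described in the
[Configuration Reference](./configuration.md); the key is a placeholder):
```yaml
#cloud-config
ssh_authorized_keys:
- ssh-ed25519 AAAA... user@example.com
rancherd:
  role: cluster-init
```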
## RancherOS Operator
RancherOS v2 includes an operator that is responsible for managing OS upgrades
and assisting with secure device onboarding (SDO).
## openSUSE Leap
RancherOS v2 is based on openSUSE Leap. There is no specific tie-in to
openSUSE beyond RancherOS assuming that the underlying distribution is
based on systemd. We chose openSUSE for obvious reasons, but beyond
that openSUSE Leap provides a stable layer to build upon that is well
tested and has paths to commercial support, if one chooses.

docs/clusters.md Normal file

@@ -0,0 +1,62 @@
# Understanding Clusters
Rancherd bootstraps a node with Kubernetes (k3s/rke2) and Rancher such
that all future management of Kubernetes and Rancher can be done from
Kubernetes. Rancherd will only run once per node. Once the system has
been fully bootstrapped it will not run again. The primary intended use
of Rancherd is to be run from cloud-init or a similar system.
## Cluster Initialization
Creating a cluster always starts with one node initializing the cluster, by
being assigned the `cluster-init` role, and other nodes then joining the cluster.
The new cluster will have a token generated for it, or you can manually
assign a unique string. The token for an existing cluster can be determined
by running `rancherd get-token`.
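For example, a minimal sketch of a cloud-init for the initializing node (the
token value is an arbitrary placeholder):
```yaml
#cloud-config
rancherd:
  role: cluster-init
  token: sometoken
```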
## Joining Nodes
Nodes can be joined to the cluster with the role `server` to add more control
plane nodes, or with the role `agent` to add more worker nodes. To join a node
you must have the Rancher server URL (which is by default served on port
`8443`) and the token.
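A joining node then needs only its role, the server URL, and the token; a
minimal sketch:
```yaml
#cloud-config
rancherd:
  role: server   # or agent for a worker-only node
  server: https://myserver.example.com:8443
  token: sometoken
```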
## Node Roles
Rancherd will bootstrap a node with one of the following roles:
1. __cluster-init__: Initializes the cluster as the first control-plane,etcd,worker node
2. __server__: Joins the cluster as a new control-plane,etcd,worker node
3. __agent__: Joins the cluster as a worker-only node
## Server discovery
It can be quite cumbersome to automate bringing up a clustered system
that requires one bootstrap node, and a proper production setup involves further
considerations around load balancing and replacing nodes.
Rancherd supports server discovery based on https://github.com/hashicorp/go-discover.
When using server discovery the `cluster-init` role is not used, only `server`
and `agent`. The `server` URL is also dropped in favor of the `discovery`
key. The `discovery` configuration is used to dynamically determine the
server URL and whether the current node should act as the `cluster-init` node.
Example:
```yaml
role: server
discovery:
  params:
    # Corresponds to the go-discover provider name
    provider: "mdns"
    # All other key/values are parameters corresponding to what
    # the go-discover provider is expecting
    service: "rancher-server"
  # If this is a new cluster it will wait until 3 servers are
  # available and they all agree on the same cluster-init node
  expectedServers: 3
  # How long servers are remembered for. This is useful for providers
  # that are not consistent in their responses, like mdns.
  serverCacheDuration: 1m
```
More information on how to use discovery is in the config examples.
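As a further illustration, go-discover also supports cloud providers. A
hypothetical AWS-based configuration might match servers by instance tag along
these lines (the parameter names come from the go-discover AWS provider; the
tag key/value here are placeholders):
```yaml
role: server
discovery:
  params:
    # go-discover AWS provider: matches EC2 instances by tag
    provider: "aws"
    region: "us-east-1"
    tag_key: "rancher-server"
    tag_value: "true"
    addr_type: "private_v4"
  expectedServers: 3
  serverCacheDuration: 1m
```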

docs/configuration.md Normal file

@@ -0,0 +1,189 @@
# Configuration Reference
All configuration is done through RancherOS's minimal `cloud-init` support.
Below is a reference of the supported configuration. It is important
that the config always starts with `#cloud-config`.
```yaml
#cloud-config

# Add additional users or set the password/ssh keys for root
users:
- name: "bar"
  passwd: "foo"
  groups: "users"
  ssh_authorized_keys:
  - faaapploo

# Assigns these keys to the first user in users, or to root if there
# is none
ssh_authorized_keys:
- asdd

# Run these commands once the system has fully booted
runcmd:
- foo

# Hostname to assign
hostname: "bar"

# Write arbitrary files
write_files:
- encoding: b64
  content: CiMgVGhpcyBmaWxlIGNvbnRyb2xzIHRoZSBzdGF0ZSBvZiBTRUxpbnV4
  path: /foo/bar
  permissions: "0644"
  owner: "bar"

# Rancherd configuration
rancherd:
  ########################################################
  # The below parameters apply to the server role that   #
  # first initializes the cluster                        #
  ########################################################

  # The Kubernetes version to be installed. This must be a k3s or RKE2 version,
  # v1.21 or newer. k3s and RKE2 versions always have `k3s` or `rke2` in the
  # version string.
  # Valid versions are
  #   k3s: curl -sL https://raw.githubusercontent.com/rancher/kontainer-driver-metadata/release-v2.6/data/data.json | jq -r '.k3s.releases[].version'
  #   RKE2: curl -sL https://raw.githubusercontent.com/rancher/kontainer-driver-metadata/release-v2.6/data/data.json | jq -r '.rke2.releases[].version'
  kubernetesVersion: v1.22.2+k3s1

  # The Rancher version to be installed, or a channel ("latest" or "stable")
  rancherVersion: v2.6.0

  # Values set on the Rancher Helm chart. Refer to
  # https://github.com/rancher/rancher/blob/release/v2.6/chart/values.yaml
  # for possible values.
  rancherValues:
    # Below are the default values set
    # Multi-Cluster Management is disabled by default; change to multi-cluster-management=true to enable
    features: multi-cluster-management=false
    # The Rancher UI will run on host port 8443 by default. Set to 0 to disable
    # and instead use ingress.enabled=true to route traffic through ingress
    hostPort: 8443
    # Accessing ingress is disabled by default.
    ingress:
      enabled: false
    # Don't create a default admin password
    noDefaultAdmin: true
    # A negative value means it will run up to that many replicas if there are
    # at least that many nodes available. For example, if you have 2 nodes and
    # `replicas` is `-3` then 2 replicas will run. Once you add a third node,
    # 3 replicas will run
    replicas: -3
    # External TLS is assumed
    tls: external

  # Additional SANs (hostnames) to be added to the generated TLS certificate
  # served on port 6443.
  tlsSans:
  - additionalhostname.example.com

  # Kubernetes resources that will be created once Rancher is bootstrapped
  resources:
  - kind: ConfigMap
    apiVersion: v1
    metadata:
      name: random
    data:
      key: value

  # Contents of the registries.yaml that will be used by k3s/RKE2. The structure
  # is documented at https://rancher.com/docs/k3s/latest/en/installation/private-registry/
  registries: {}

  # The default registry used for all Rancher container images. For more information
  # refer to https://rancher.com/docs/rancher/v2.6/en/admin-settings/config-private-registry/
  systemDefaultRegistry: someprefix.example.com:5000

  # Advanced: The system agent installer image used for Kubernetes
  runtimeInstallerImage: ...

  # Advanced: The system agent installer image used for Rancher
  rancherInstallerImage: ...

  ###########################################
  # The below parameters apply to all roles #
  ###########################################

  # Generic commands to run before bootstrapping the node.
  preInstructions:
  - name: something
    # This image will be extracted to a temporary folder and
    # set as the current working dir. The command will not run
    # contained or chrooted; this is only a way to copy assets
    # to the host. This parameter is optional
    image: custom/image:1.1.1
    # Environment variables to set
    env:
    - FOO=BAR
    # Program arguments
    args:
    - arg1
    - arg2
    # Command to run
    command: /bin/dosomething
    # Save output to /var/lib/rancher/rancherd/plan/plan-output.json
    saveOutput: false

  # Generic commands to run after bootstrapping the node.
  postInstructions:
  - name: something
    env:
    - FOO=BAR
    args:
    - arg1
    - arg2
    command: /bin/dosomething
    saveOutput: false

  # The URL of Rancher to join a node to. If you have disabled the hostPort and
  # configured TLS then this will be the server you have set up.
  server: https://myserver.example.com:8443

  # A shared secret to join nodes to the cluster
  token: sometoken

  # Instead of setting the server parameter above, the server value can be dynamically
  # determined from cloud provider metadata. This is powered by https://github.com/hashicorp/go-discover.
  # Discovery requires that the hostPort is not disabled.
  discovery:
    params:
      # Corresponds to the go-discover provider name
      provider: "mdns"
      # All other key/values are parameters corresponding to what
      # the go-discover provider is expecting
      service: "rancher-server"
    # If this is a new cluster it will wait until 3 servers are
    # available and they all agree on the same cluster-init node
    expectedServers: 3
    # How long servers are remembered for. This is useful for providers
    # that are not consistent in their responses, like mdns.
    serverCacheDuration: 1m

  # The role of this node. Every cluster must start with one node with the
  # cluster-init role. After that, nodes can be joined using the server role
  # for control-plane nodes and the agent role for worker-only nodes. The
  # server/agent terms correspond to the server/agent terms in k3s and RKE2.
  role: cluster-init,server,agent

  # The Kubernetes node name that will be set
  nodeName: custom-hostname

  # The IP address that will be set in Kubernetes for this node
  address: 123.123.123.123

  # The internal IP address that will be used for this node
  internalAddress: 123.123.123.124

  # Taints to apply to this node upon creation
  taints:
  - dedicated=special-user:NoSchedule

  # Labels to apply to this node upon creation
  labels:
  - key=value

  # Advanced: Arbitrary configuration that will be placed in
  # /etc/rancher/k3s/config.yaml.d/40-rancherd.yaml or
  # /etc/rancher/rke2/config.yaml.d/40-rancherd.yaml
  extraConfig: {}
```
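Putting the common fields together, a compact sketch of a configuration for a
first server might look like this:
```yaml
#cloud-config
hostname: node-1
ssh_authorized_keys:
- ssh-ed25519 AAAA... user@example.com
rancherd:
  role: cluster-init
  kubernetesVersion: v1.22.2+k3s1
  rancherVersion: v2.6.0
  token: sometoken
```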

docs/customizing.md Normal file

@@ -0,0 +1,42 @@
# Custom Images
The RancherOS image can easily be remastered using a Docker build.
For example, to add `cowsay` to RancherOS you would use the
following Dockerfile.
## Docker image
```Dockerfile
FROM rancher/os2:v0.0.1-test01
RUN zypper install -y cowsay

# IMPORTANT: Set up rancheros-release, which is used for versioning/upgrade.
# The values here should reflect the tag of the image being built
ARG IMAGE_REPO=norepo
ARG IMAGE_TAG=latest
RUN echo "IMAGE_REPO=${IMAGE_REPO}" > /usr/lib/rancheros-release && \
    echo "IMAGE_TAG=${IMAGE_TAG}" >> /usr/lib/rancheros-release && \
    echo "IMAGE=${IMAGE_REPO}:${IMAGE_TAG}" >> /usr/lib/rancheros-release
```
And then run the following commands:
```bash
docker build --build-arg IMAGE_REPO=myrepo/custom-build \
             --build-arg IMAGE_TAG=v1.1.1 \
             -t myrepo/custom-build:v1.1.1 .
docker push myrepo/custom-build:v1.1.1
```
## Bootable images
To create bootable images from the Docker image you just created,
run the below commands:
```bash
curl -o ros-image-build https://raw.githubusercontent.com/rancher/os2/main/ros-image-build
bash ros-image-build myrepo/custom-build:v1.1.1 qcow,iso,ami
```
The above command will create an ISO, a qcow image, and publish AMIs. You need not create all
three types; change the comma-separated list to only the types you care about.

docs/dashboard.md Normal file

@@ -0,0 +1,16 @@
# Dashboard/UI
The Rancher UI runs by default on port `:8443`. There is no default
`admin` user password set. You must run `rancherd reset-admin` once to
get an `admin` password to log in.
To disable the Rancher UI from running on a host port, or to change the
default hostPort, use the below configuration.
```yaml
#cloud-config
rancherd:
  rancherValues:
    # Setting the host port to 0 will disable the hostPort
    hostPort: 0
```
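Conversely, to serve the UI through ingress instead of a host port, a sketch
disabling the host port and enabling the chart's ingress (see the
[Configuration Reference](./configuration.md) for the surrounding fields):
```yaml
#cloud-config
rancherd:
  rancherValues:
    # Disable the host port and route traffic through ingress instead
    hostPort: 0
    ingress:
      enabled: true
```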


@@ -1 +0,0 @@
-../README.md


@@ -26,8 +26,8 @@ Install directives can be set from the kernel command line using a period (.) se
 #cloud-config
 rancheros:
   install:
-    # An http://, https://, or tftp:// URL to load and overlay on top of
-    # this configuration. This configuration can include any install
+    # An http://, https://, or tftp:// URL to load as the base configuration
+    # for this configuration. This configuration can include any install
     # directives or OEM configuration. The resulting merged configuration
     # will be read by the installer and all content of the merged config will
     # be stored in /oem/99_custom.yaml in the created image.
@@ -104,24 +104,24 @@ RancherOS requires the following partitions. These partitions are required by [
 ## Folders
-| Path              | Read-Only | Ephemeral | Persistence |
-| ------------------|:---------:|:---------:|:-----------:|
-| /                 |     x     |           |             |
-| /etc              |           |     x     |             |
-| /etc/cni          |           |           |      x      |
-| /etc/iscsi        |           |           |      x      |
-| /etc/rancher      |           |           |      x      |
-| /etc/ssh          |           |           |      x      |
-| /etc/systemd      |           |           |      x      |
-| /srv              |           |     x     |             |
-| /home             |           |           |      x      |
-| /opt              |           |           |      x      |
-| /root             |           |           |      x      |
-| /var              |           |     x     |             |
-| /usr/libexec      |           |           |      x      |
-| /var/lib/cni      |           |           |      x      |
-| /var/lib/kubelet  |           |           |      x      |
-| /var/lib/longhorn |           |           |      x      |
-| /var/lib/rancher  |           |           |      x      |
-| /var/lib/wicked   |           |           |      x      |
-| /var/log          |           |           |      x      |
+| Path              | Read-Only | Ephemeral | Persistent |
+| ------------------|:---------:|:---------:|:----------:|
+| /                 |     x     |           |            |
+| /etc              |           |     x     |            |
+| /etc/cni          |           |           |     x      |
+| /etc/iscsi        |           |           |     x      |
+| /etc/rancher      |           |           |     x      |
+| /etc/ssh          |           |           |     x      |
+| /etc/systemd      |           |           |     x      |
+| /srv              |           |     x     |            |
+| /home             |           |           |     x      |
+| /opt              |           |           |     x      |
+| /root             |           |           |     x      |
+| /var              |           |     x     |            |
+| /usr/libexec      |           |           |     x      |
+| /var/lib/cni      |           |           |     x      |
+| /var/lib/kubelet  |           |           |     x      |
+| /var/lib/longhorn |           |           |     x      |
+| /var/lib/rancher  |           |           |     x      |
+| /var/lib/wicked   |           |           |     x      |
+| /var/log          |           |           |     x      |

docs/mcm.md Normal file

@@ -0,0 +1,8 @@
## Multi-Cluster Management
By default Multi-Cluster Management is disabled in Rancher. To enable it, set the
following in the rancherd config.yaml:
```yaml
rancherValues:
  features: multi-cluster-management=true
```
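When configuring this through cloud-init instead of rancherd's own config.yaml,
the same setting nests under the `rancherd` key, as in the
[Configuration Reference](./configuration.md):
```yaml
#cloud-config
rancherd:
  rancherValues:
    features: multi-cluster-management=true
```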

docs/operator.md Normal file

@@ -0,0 +1,86 @@
# RancherOS Operator
The RancherOS operator is responsible for managing RancherOS versions
and maintaining a machine inventory to assist with secure device onboarding.
## Managing Upgrades
The RancherOS operator will manage the upgrade of the local cluster where it
is running and also any downstream clusters managed by Rancher Multi-Cluster
Manager.
### ManagedOSImage
The ManagedOSImage kind is used to define which version of RancherOS should be
running on each node. The simplest example of this type would be to change
the version of the local nodes.
```bash
kubectl edit -n fleet-local managedosimage default-os-image
```
```yaml
apiVersion: rancheros.cattle.io/v1
kind: ManagedOSImage
metadata:
  name: default-os-image
  namespace: fleet-local
spec:
  osImage: rancher/os2:v0.0.0
```
#### Reference
Below is a reference of the full type:
```yaml
apiVersion: rancheros.cattle.io/v1
kind: ManagedOSImage
metadata:
  name: arbitrary
  # There are two special namespaces to consider. If you wish to manage
  # nodes on the local cluster this namespace should be `fleet-local`. If
  # you wish to manage nodes in Rancher MCM managed clusters then the
  # namespace is typically fleet-default.
  namespace: fleet-local
spec:
  # The image name to pull for the OS
  osImage: rancher/os2:v0.0.0
  # The selector for which nodes will be selected. If null then all nodes
  # will be selected
  nodeSelector:
    matchLabels: {}
  # How many nodes to update in parallel. If empty the default is 1, and
  # if set to 0 the rollout will be paused
  concurrency: 2
  # Arbitrary action to perform on the node prior to upgrade
  prepare:
    image: ...
    command: ["/bin/sh"]
    args: ["-c", "true"]
    env:
    - name: TEST_ENV
      value: testValue
  # Parameters to control the drain behavior. If null no draining will happen
  # on the node.
  drain:
    # Refer to kubectl drain --help for the definition of these values
    timeout: 5m
    gracePeriod: 5m
    deleteLocalData: false
    ignoreDaemonSets: true
    force: false
    disableEviction: false
    skipWaitForDeleteTimeout: 5
  # Which clusters to target.
  # This is used if you are running Rancher MCM and managing
  # multiple clusters. The syntax of this field matches the
  # Fleet targets and is described at https://fleet.rancher.io/gitrepo-targets/
  targets: []
```
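For instance, a sketch of targeting downstream clusters by label (the label
here is hypothetical; see the Fleet targets documentation linked above for the
full matching syntax):
```yaml
apiVersion: rancheros.cattle.io/v1
kind: ManagedOSImage
metadata:
  name: downstream-os-image
  namespace: fleet-default
spec:
  osImage: rancher/os2:v0.0.0
  # Hypothetical Fleet-style target matching clusters labeled env=prod
  targets:
  - clusterSelector:
      matchLabels:
        env: prod
```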

docs/upgrade.md Normal file

@@ -0,0 +1,66 @@
# Upgrade
## Command line
You can use the `rancherd upgrade` command on a `server` node to automatically
upgrade RancherOS, Rancher, and/or Kubernetes.
## Kubernetes API
All components in RancherOS are managed using Kubernetes. Below is how
to upgrade each component using the Kubernetes API.
### RancherOS
RancherOS is upgraded with the RancherOS operator. Refer to the
[RancherOS Operator](./operator.md) documentation for complete information, but the
TL;DR is
```bash
kubectl edit -n fleet-local managedosimage default-os-image
```
```yaml
apiVersion: rancheros.cattle.io/v1
kind: ManagedOSImage
metadata:
  name: default-os-image
  namespace: fleet-local
spec:
  # Set to the new RancherOS version you would like to upgrade to
  osImage: rancher/os2:v0.0.0
```
### rancherd
Rancherd itself doesn't need to be upgraded. It is run only once per node
to bootstrap the system and provides no function after that. Rancherd is
packaged in the OS image, so newer versions of Rancherd will come with newer
versions of RancherOS.
### Rancher
Rancher is installed as a Helm chart following the standard procedure. It can be
upgraded using the procedure documented at
https://rancher.com/docs/rancher/v2.6/en/installation/install-rancher-on-k8s/upgrades/.
### Kubernetes
To upgrade Kubernetes you use Rancher to orchestrate the upgrade. This is a matter of changing
the Kubernetes version on the `fleet-local/local` `Cluster` object in the `provisioning.cattle.io/v1`
apiVersion. For example:
```shell
kubectl edit clusters.provisioning.cattle.io -n fleet-local local
```
```yaml
apiVersion: provisioning.cattle.io/v1
kind: Cluster
metadata:
  name: local
  namespace: fleet-local
spec:
  # Change to a new valid k8s version, >= 1.21
  # Valid versions are
  #   k3s: curl -sL https://raw.githubusercontent.com/rancher/kontainer-driver-metadata/release-v2.6/data/data.json | jq -r '.k3s.releases[].version'
  #   RKE2: curl -sL https://raw.githubusercontent.com/rancher/kontainer-driver-metadata/release-v2.6/data/data.json | jq -r '.rke2.releases[].version'
  kubernetesVersion: v1.21.4+k3s1
```

docs/versions.md Normal file

@@ -0,0 +1,51 @@
# Supported Versions and Channels
The `kubernetesVersion` and `rancherVersion` fields accept explicit version
numbers or channel names.
## Valid Versions
The list of valid versions for the `kubernetesVersion` field can be determined
from the Rancher metadata using the following commands.
__k3s:__
```bash
curl -sL https://raw.githubusercontent.com/rancher/kontainer-driver-metadata/release-v2.6/data/data.json | jq -r '.k3s.releases[].version'
```
__rke2:__
```bash
curl -sL https://raw.githubusercontent.com/rancher/kontainer-driver-metadata/release-v2.6/data/data.json | jq -r '.rke2.releases[].version'
```
The list of valid `rancherVersion` values can be obtained from the
[stable](https://artifacthub.io/packages/helm/rancher-stable/rancher) and
[latest](https://artifacthub.io/packages/helm/rancher-latest/rancher) helm
repos. The version string is expected to be the "application version", which
is the version starting with a `v`. For example, `v2.6.2` is the correct
format, not `2.6.2`.
## Version Channels
Valid `kubernetesVersion` channels are as follows:

| Channel Name | Description |
|--------------|-------------|
| stable       | k3s stable (default value of kubernetesVersion) |
| latest       | k3s latest |
| testing      | k3s testing |
| stable:k3s   | Same as the stable channel |
| latest:k3s   | Same as the latest channel |
| testing:k3s  | Same as the testing channel |
| stable:rke2  | rke2 stable |
| latest:rke2  | rke2 latest |
| testing:rke2 | rke2 testing |
| v1.21        | Latest k3s v1.21 release. This applies to any Kubernetes minor version |
| v1.21:rke2   | Latest rke2 v1.21 release. This applies to any Kubernetes minor version |
Valid `rancherVersion` channels are as follows:

| Channel Name | Description |
|--------------|-------------|
| stable       | [stable helm repo](https://artifacthub.io/packages/helm/rancher-stable/rancher) |
| latest       | [latest helm repo](https://artifacthub.io/packages/helm/rancher-latest/rancher) |
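For example, a sketch of a config that tracks channels rather than pinning
exact versions:
```yaml
#cloud-config
rancherd:
  # Track the latest rke2 release of the v1.21 minor line
  kubernetesVersion: v1.21:rke2
  # Install the latest Rancher from the stable Helm repo
  rancherVersion: stable
```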


@@ -43,16 +43,11 @@ markdown_extensions:
     permalink: true
 nav:
 - index.md
+- architecture.md
+- clusters.md
 - installation.md
-#- Installation:
-#  - install/iso-install.md
-#  - install/pxe-boot-install.md
-#- configuration.md
-#- Operator:
-#  - operator/upgrade.md
-#  - operator/inventory.md
-#- dashboard.md
-#- Reference:
-#  - reference/api.md
-#- faq.md
+- upgrade.md
+- configuration.md
+- dashboard.md
+- operator.md
+- versions.md