Merge remote-tracking branch 'upstream/master'
@@ -134,12 +134,13 @@ Here are all the solutions mentioned above in table form.

IaaS Provider        | Config. Mgmt | OS     | Networking | Docs                                               | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | -------------------------------------------------- | -------- | ----------------------------
GKE                  |              |        | GCE        | [docs](https://cloud.google.com/container-engine)  | [✓][3]   | Commercial
Vagrant              | Saltstack    | Fedora | OVS        | [docs](vagrant.md)                                  | [✓][2]   | Project
Vagrant              | Saltstack    | Fedora | flannel    | [docs](vagrant.md)                                  | [✓][2]   | Project
GCE                  | Saltstack    | Debian | GCE        | [docs](gce.md)                                      | [✓][1]   | Project
Azure                | CoreOS       | CoreOS | Weave      | [docs](coreos/azure/README.md)                      |          | Community ([@errordeveloper](https://github.com/errordeveloper), [@squillace](https://github.com/squillace), [@chanezon](https://github.com/chanezon), [@crossorigin](https://github.com/crossorigin))
Docker Single Node   | custom       | N/A    | local      | [docs](docker.md)                                   |          | Project ([@brendandburns](https://github.com/brendandburns))
Docker Multi Node    | Flannel      | N/A    | local      | [docs](docker-multinode.md)                         |          | Project ([@brendandburns](https://github.com/brendandburns))
Bare-metal           | Ansible      | Fedora | flannel    | [docs](fedora/fedora_ansible_config.md)             |          | Project
Digital Ocean        | custom       | Fedora | Calico     | [docs](fedora/fedora-calico.md)                     |          | Community (@djosborne)
Bare-metal           | custom       | Fedora | _none_     | [docs](fedora/fedora_manual_config.md)              |          | Project
Bare-metal           | custom       | Fedora | flannel    | [docs](fedora/flannel_multi_node_cluster.md)        |          | Community ([@aveshagarwal](https://github.com/aveshagarwal))
libvirt              | custom       | Fedora | flannel    | [docs](fedora/flannel_multi_node_cluster.md)        |          | Community ([@aveshagarwal](https://github.com/aveshagarwal))

@@ -89,7 +89,7 @@ coreos:

ExecStart=/opt/bin/kube-apiserver \
--service-account-key-file=/opt/bin/kube-serviceaccount.key \
--service-account-lookup=false \
--admission-control=NamespaceLifecycle,NamespaceAutoProvision,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota \
--admission-control=NamespaceLifecycle,NamespaceAutoProvision,LimitRanger,SecurityContextDeny,ServiceAccount,DenyEscalatingExec,ResourceQuota \
--runtime-config=api/v1 \
--allow-privileged=true \
--insecure-bind-address=0.0.0.0 \

352 docs/getting-started-guides/fedora/fedora-calico.md Normal file
@@ -0,0 +1,352 @@

<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->

<!-- BEGIN STRIP_FOR_RELEASE -->

<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
     width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
     width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
     width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
     width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
     width="25" height="25">

<h2>PLEASE NOTE: This document applies to the HEAD of the source tree</h2>

If you are using a released version of Kubernetes, you should
refer to the docs that go with that version.

<strong>
The latest 1.0.x release of this document can be found
[here](http://releases.k8s.io/release-1.0/docs/getting-started-guides/fedora/fedora-calico.md).

Documentation for other releases can be found at
[releases.k8s.io](http://releases.k8s.io).
</strong>
--

<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

Running Kubernetes with [Calico Networking](http://projectcalico.org) on a [Digital Ocean](http://digitalocean.com) [Fedora Host](http://fedoraproject.org)
-----------------------------------------------------

## Table of Contents

* [Prerequisites](#prerequisites)
* [Overview](#overview)
* [Setup Communication Between Hosts](#setup-communication-between-hosts)
* [Setup Master](#setup-master)
  * [Install etcd](#install-etcd)
  * [Install Kubernetes](#install-kubernetes)
  * [Install Calico](#install-calico)
* [Setup Node](#setup-node)
  * [Configure the Virtual Interface - cbr0](#configure-the-virtual-interface---cbr0)
  * [Install Docker](#install-docker)
  * [Install Calico](#install-calico-1)
  * [Install Kubernetes](#install-kubernetes-1)
* [Check Running Cluster](#check-running-cluster)

## Prerequisites

You need two or more Fedora 22 droplets on Digital Ocean with [Private Networking](https://www.digitalocean.com/community/tutorials/how-to-set-up-and-use-digitalocean-private-networking) enabled.

## Overview

This guide will walk you through the process of getting a Kubernetes Fedora cluster running on Digital Ocean with networking powered by Calico. It will cover the installation and configuration of the following systemd processes on the following hosts:

Kubernetes Master:
- `kube-apiserver`
- `kube-controller-manager`
- `kube-scheduler`
- `etcd`
- `docker`
- `calico-node`

Kubernetes Node:
- `kubelet`
- `kube-proxy`
- `docker`
- `calico-node`

For this demo, we will be setting up one Master and one Node with the following information:

| Hostname    | IP            |
|-------------|---------------|
| kube-master | 10.134.251.56 |
| kube-node-1 | 10.134.251.55 |

This guide is scalable to multiple nodes provided you [configure interface-cbr0 with its own subnet on each Node](#configure-the-virtual-interface---cbr0) and [add an entry to /etc/hosts for each host](#setup-communication-between-hosts).

Ensure you substitute the IP addresses and hostnames used in this guide with those from your own setup.

## Setup Communication Between Hosts

Digital Ocean private networking configures a private network on eth1 for each host. To simplify communication between the hosts, we will add an entry to /etc/hosts so that all hosts in the cluster can hostname-resolve one another to this interface. **It is important that the hostname resolves to this interface instead of eth0, as all Kubernetes and Calico services will be running on it.**

```
echo "10.134.251.56 kube-master" >> /etc/hosts
echo "10.134.251.55 kube-node-1" >> /etc/hosts
```

>Make sure that communication works between kube-master and each kube-node by using a utility such as ping.
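
For example, a quick connectivity check might look like the following (hostnames are from this guide's example setup; substitute your own):

```
# From kube-master: verify the node resolves via /etc/hosts and answers on the private network
ping -c 3 kube-node-1

# From kube-node-1: verify the master is reachable as well
ping -c 3 kube-master
```
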
## Setup Master

### Install etcd

* Both Calico and Kubernetes use etcd as their datastore. We will run etcd on Master and point all Kubernetes and Calico services at it.

```
yum -y install etcd
```

* Edit `/etc/etcd/etcd.conf`

```
ETCD_LISTEN_CLIENT_URLS="http://kube-master:4001"

ETCD_ADVERTISE_CLIENT_URLS="http://kube-master:4001"
```

### Install Kubernetes

* Run the following command on Master to install the latest Kubernetes (as well as docker):

```
yum -y install kubernetes
```

* Edit `/etc/kubernetes/config`

```
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://kube-master:8080"
```

* Edit `/etc/kubernetes/apiserver`

```
# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

KUBE_ETCD_SERVERS="--etcd-servers=http://kube-master:4001"

# Remove ServiceAccount from this line to run without API Tokens
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
```

* Create /var/run/kubernetes on master:

```
mkdir /var/run/kubernetes
chown kube:kube /var/run/kubernetes
chmod 750 /var/run/kubernetes
```

* Start the appropriate services on master:

```
for SERVICE in etcd kube-apiserver kube-controller-manager kube-scheduler; do
    systemctl restart $SERVICE
    systemctl enable $SERVICE
    systemctl status $SERVICE
done
```
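
As an optional sanity check (not part of the original steps), you can confirm etcd and the API server are answering on the ports assumed throughout this guide (4001 and 8080):

```
# Each request should return version information if the corresponding service is up
curl http://kube-master:4001/version
curl http://kube-master:8080/version
```
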
### Install Calico

Next, we'll launch Calico on Master to allow communication between Pods and any services running on the Master.

* Install calicoctl, the calico configuration tool.

```
wget https://github.com/Metaswitch/calico-docker/releases/download/v0.5.5/calicoctl
chmod +x ./calicoctl
sudo mv ./calicoctl /usr/bin
```

* Create `/etc/systemd/system/calico-node.service`

```
[Unit]
Description=calicoctl node
Requires=docker.service
After=docker.service

[Service]
User=root
Environment="ETCD_AUTHORITY=kube-master:4001"
PermissionsStartOnly=true
ExecStartPre=/usr/bin/calicoctl checksystem --fix
ExecStart=/usr/bin/calicoctl node --ip=10.134.251.56 --detach=false

[Install]
WantedBy=multi-user.target
```

>Be sure to substitute `--ip=10.134.251.56` with your Master's eth1 IP Address.
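
If you are unsure of that address, it can be read off the droplet's private interface (assuming, as elsewhere in this guide, that Digital Ocean private networking lives on eth1):

```
# Show the IPv4 address assigned to the private interface
ip -4 addr show dev eth1
```
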

* Start Calico

```
systemctl enable calico-node.service
systemctl start calico-node.service
```

>Starting calico for the first time may take a few minutes as the calico-node docker image is downloaded.
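
Once the image has been pulled, you can confirm the calico-node container is up; the `calicoctl status` call is optional and assumes that subcommand exists in this calicoctl release:

```
# The calico/node container should appear in the list of running containers
docker ps

# Optionally ask calicoctl to report the state of the Calico components
ETCD_AUTHORITY=kube-master:4001 calicoctl status
```
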
## Setup Node

### Configure the Virtual Interface - cbr0

By default, docker will create and run on a virtual interface called `docker0`. This interface is automatically assigned the address range 172.17.42.1/16. In order to set our own address range, we will create a new virtual interface called `cbr0` and then start docker on it.

* Add a virtual interface by creating `/etc/sysconfig/network-scripts/ifcfg-cbr0`:

```
DEVICE=cbr0
TYPE=Bridge
IPADDR=192.168.1.1
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=static
```

>**Note for Multi-Node Clusters:** Each node should be assigned an IP address on a unique subnet. In this example, node-1 is using 192.168.1.1/24, so node-2 should be assigned another pool on the 192.168.x.0/24 subnet, e.g. 192.168.2.1/24.
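
The note above translates directly into the interface file for a second node; here is a sketch for a hypothetical kube-node-2 using the 192.168.2.1/24 pool (written as a shell heredoc; the hostname and subnet are illustrative):

```
# Hypothetical ifcfg-cbr0 for a second node (kube-node-2), using the next /24 pool
cat <<'EOF' > /etc/sysconfig/network-scripts/ifcfg-cbr0
DEVICE=cbr0
TYPE=Bridge
IPADDR=192.168.2.1
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=static
EOF
```
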
* Ensure that your system has bridge-utils installed. Then, restart the networking daemon to activate the new interface.

```
systemctl restart network.service
```
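
To confirm the bridge came up with the intended address, the following can be used (`brctl` is part of the bridge-utils package mentioned above):

```
# The bridge should exist and carry the 192.168.x.1/24 address configured above
ip addr show cbr0
brctl show cbr0
```
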
### Install Docker

* Install Docker

```
yum -y install docker
```

* Configure docker to run on `cbr0` by editing `/etc/sysconfig/docker-network`:

```
DOCKER_NETWORK_OPTIONS="--bridge=cbr0 --iptables=false --ip-masq=false"
```

* Start docker

```
systemctl start docker
```
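
If you want to confirm Docker picked up the bridge option, the daemon's arguments and the routing table are quick places to look (a rough check; exact process listings vary by Docker version):

```
# The docker daemon should show --bridge=cbr0 among its arguments
ps -ef | grep -v grep | grep docker

# The node's pod subnet should now be routed via cbr0 rather than docker0
ip route | grep cbr0
```
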
### Install Calico

* Install calicoctl, the calico configuration tool.

```
wget https://github.com/Metaswitch/calico-docker/releases/download/v0.5.5/calicoctl
chmod +x ./calicoctl
sudo mv ./calicoctl /usr/bin
```

* Create `/etc/systemd/system/calico-node.service`

```
[Unit]
Description=calicoctl node
Requires=docker.service
After=docker.service

[Service]
User=root
Environment="ETCD_AUTHORITY=kube-master:4001"
PermissionsStartOnly=true
ExecStartPre=/usr/bin/calicoctl checksystem --fix
ExecStart=/usr/bin/calicoctl node --ip=10.134.251.55 --detach=false --kubernetes

[Install]
WantedBy=multi-user.target
```

> Note: You must replace the IP address with your node's eth1 IP Address!

* Start Calico

```
systemctl enable calico-node.service
systemctl start calico-node.service
```

* Configure the IP Address Pool

Most Kubernetes application deployments will require communication between Pods and the kube-apiserver on Master. On a standard Digital Ocean Private Network, requests sent from Pods to the kube-apiserver will not be returned as the networking fabric will drop response packets destined for any 192.168.0.0/16 address. To resolve this, you can have calicoctl add a masquerade rule to all outgoing traffic on the node:

```
ETCD_AUTHORITY=kube-master:4001 calicoctl pool add 192.168.0.0/16 --nat-outgoing
```
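
To double-check that the pool was registered with outgoing NAT enabled, calicoctl can list the configured pools (assuming the `pool show` subcommand is available in this calicoctl release):

```
# The 192.168.0.0/16 pool should be listed with nat-outgoing enabled
ETCD_AUTHORITY=kube-master:4001 calicoctl pool show
```
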
### Install Kubernetes

* First, install Kubernetes.

```
yum -y install kubernetes
```

* Edit `/etc/kubernetes/config`

```
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://kube-master:8080"
```

* Edit `/etc/kubernetes/kubelet`

We'll pass in an extra parameter - `--network-plugin=calico` - to tell the Kubelet to use the Calico networking plugin. Additionally, we'll add two environment variables that will be used by the Calico networking plugin.

```
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# You may leave this blank to use the actual hostname
# KUBELET_HOSTNAME="--hostname-override=127.0.0.1"

# location of the api-server
KUBELET_API_SERVER="--api-servers=http://kube-master:8080"

# Add your own!
KUBELET_ARGS="--network-plugin=calico"

# The following are variables which the kubelet will pass to the calico-networking plugin
ETCD_AUTHORITY="kube-master:4001"
KUBE_API_ROOT="http://kube-master:8080/api/v1"
```

* Start Kubernetes on the node.

```
for SERVICE in kube-proxy kubelet; do
    systemctl restart $SERVICE
    systemctl enable $SERVICE
    systemctl status $SERVICE
done
```

## Check Running Cluster

The cluster should be running! Check that your nodes are reporting as such:

```
kubectl get nodes
NAME            LABELS                                STATUS
kube-node-1     kubernetes.io/hostname=kube-node-1    Ready
```
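
As an optional smoke test of the Calico networking (not part of the original steps), you can start a simple workload and check that its pod receives an address from the 192.168.0.0/16 pool configured earlier; `nginx` here is just an example image:

```
# Start a simple nginx workload and wait for its pod to be scheduled on kube-node-1
kubectl run nginx --image=nginx

# The pod's IP, shown in the describe output, should fall inside 192.168.0.0/16
kubectl get pods
kubectl describe pods | grep IP
```
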
@@ -169,7 +169,7 @@ metadata:
spec:
  containers:
  - name: fluentd-cloud-logging
    image: gcr.io/google_containers/fluentd-gcp:1.12
    image: gcr.io/google_containers/fluentd-gcp:1.13
    resources:
      limits:
        cpu: 100m
@@ -179,8 +179,8 @@ spec:
        value: -q
    volumeMounts:
    - name: varlog
      mountPath: /varlog
    - name: containers
      mountPath: /var/log
    - name: varlibdockercontainers
      mountPath: /var/lib/docker/containers
      readOnly: true
  terminationGracePeriodSeconds: 30
@@ -188,7 +188,7 @@ spec:
  - name: varlog
    hostPath:
      path: /var/log
  - name: containers
  - name: varlibdockercontainers
    hostPath:
      path: /var/lib/docker/containers
```
@@ -245,7 +245,7 @@ $ cat 21\:00\:00_21\:59\:59_S0.json | jq '.structPayload.log'
...
```

This page has touched briefly on the underlying mechanisms that support gathering cluster-level logs on a Kubernetes deployment. The approach here only works for gathering the standard output and standard error output of the processes running in the pod's containers. To gather other logs that are stored in files, one can use a sidecar container to gather the required files, as described at [Collecting log files within containers with Fluentd](http://releases.k8s.io/HEAD/contrib/logging/fluentd-sidecar-gcp/README.md), and send them to the Google Cloud Logging service.
This page has touched briefly on the underlying mechanisms that support gathering cluster-level logs on a Kubernetes deployment. The approach here only works for gathering the standard output and standard error output of the processes running in the pod's containers. To gather other logs that are stored in files, one can use a sidecar container to gather the required files, as described at [Collecting log files within containers with Fluentd](http://releases.k8s.io/release-1.0/contrib/logging/fluentd-sidecar-gcp/README.md), and send them to the Google Cloud Logging service.

Some of the material in this section also appears in the blog article [Cluster Level Logging with Kubernetes](http://blog.kubernetes.io/2015/06/cluster-level-logging-with-kubernetes.html).

@@ -240,7 +240,7 @@ You have several choices for Kubernetes images:

For etcd, you can:

- Use images hosted on Google Container Registry (GCR), such as `gcr.io/google_containers/etcd:2.0.12`
- Use images hosted on [Docker Hub](https://registry.hub.docker.com/u/coreos/etcd/) or [quay.io](https://registry.hub.docker.com/u/coreos/etcd/)
- Use images hosted on [Docker Hub](https://hub.docker.com/search/?q=etcd) or [Quay.io](https://quay.io/repository/coreos/etcd), such as `quay.io/coreos/etcd:v2.2.0`
- Use the etcd binary included in your OS distro.
- Build your own image
  - You can do: `cd kubernetes/cluster/images/etcd; make`

@@ -54,17 +54,16 @@ work, which has been merged into this document.

## Prerequisites

1. The nodes have Docker version 1.2+ and bridge-utils installed to manipulate the Linux bridge.
2. All machines can communicate with each other, no need to connect Internet (should use
   private docker registry in this case).
2. All machines can communicate with each other. The master node needs to connect to the Internet to download the necessary files, while the working nodes do not.
3. This guide has been tested on Ubuntu 14.04 LTS 64-bit server, but it does not work with
   Ubuntu 15, which uses systemd instead of upstart. We are working on fixing this.
4. Dependencies of this guide: etcd-2.0.12, flannel-0.4.0, k8s-1.0.3; it may work with higher versions.
5. All the remote servers can be logged in over SSH without a password by using key authentication (see the sketch after this list).

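
A minimal way to satisfy the key-authentication prerequisite, assuming the standard OpenSSH client tools; the user name and addresses below are placeholders for your own machines:

```
# Generate a key pair on the machine you will run the deploy scripts from (accept the defaults)
ssh-keygen -t rsa

# Copy the public key to every master and node so password-less SSH works
ssh-copy-id ubuntu@192.168.0.10
ssh-copy-id ubuntu@192.168.0.11
```
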
### Starting a Cluster
## Starting a Cluster

#### Download binaries
### Download binaries

First clone the Kubernetes GitHub repo

@@ -135,6 +134,10 @@ that conflicts with your own private network range.

The `FLANNEL_NET` variable defines the IP range used for the flannel overlay network,
and should not conflict with the `SERVICE_CLUSTER_IP_RANGE` above.

**Note:** When deploying, the master needs to connect to the Internet to download the necessary files. If your machines are located in a private network that needs a proxy setting to connect to the Internet, you can set the config `PROXY_SETTING` in cluster/ubuntu/config-default.sh such as:

    PROXY_SETTING="http_proxy=http://server:port https_proxy=https://server:port"

After all the above variables have been set correctly, we can use the following command in the cluster/ directory to bring up the whole cluster.

`$ KUBERNETES_PROVIDER=ubuntu ./kube-up.sh`

@@ -154,7 +157,7 @@ If all things go right, you will see the below message from the console indicating

Cluster validation succeeded
```

#### Test it out
### Test it out

You can use the `kubectl` command to check if the newly created k8s cluster is working correctly.
The `kubectl` binary is under the `cluster/ubuntu/binaries` directory.

@@ -173,7 +176,7 @@ NAME LABELS STATUS

You can also run the Kubernetes [guestbook example](../../examples/guestbook/) to build a Redis backend cluster on the k8s cluster.


#### Deploy addons
### Deploy addons

Assuming you have a cluster up and running now, this section will tell you how to deploy addons like DNS
and UI onto the existing cluster.

@@ -208,7 +211,7 @@ $ KUBERNETES_PROVIDER=ubuntu ./deployAddons.sh

After some time, you can use `$ kubectl get pods --namespace=kube-system` to see that the DNS and UI pods are running in the cluster.

#### On going
### On going

We are working on these features, which we'd like to let everybody know about:

@@ -216,7 +219,7 @@ We are working on these features, which we'd like to let everybody know about:

   to eliminate OS-distro differences.
2. Tearing-down scripts: clear and re-create the whole stack with one click.

#### Trouble shooting
### Trouble shooting

Generally, what this approach does is quite simple:
