- Multus acts as a contract between the container runtime and other CNI plugins; it has no network configuration of its own and calls other plugins, such as Flannel or Calico, to do the actual network configuration job.
- Multus reuses the concept of invoking delegates from Flannel: it groups multiple plugins into delegates and invokes each one in sequential order, according to the JSON schema in the CNI configuration; see the sketch below.
For more details, please refer to the Kubernetes Network SIG - Multiple Network PoC proposal: [K8s Multiple Network proposal](https://docs.google.com/document/d/1TW3P4c8auWwYy-w_5afIPDcGNLK3LZf0m14943eVfVg/edit)
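For illustration, a Multus configuration with two delegates might look like the sketch below. The interface name is a placeholder, and the `masterplugin` flag marks the delegate that provides the pod's default network result; treat the exact fields of each delegate as depending on the plugins you use:
```
{
    "name": "multus-demo-network",
    "type": "multus",
    "delegates": [
        {
            "type": "sriov",
            "if0": "enp12s0f0"
        },
        {
            "type": "flannel",
            "masterplugin": true,
            "delegate": {
                "isDefaultGateway": true
            }
        }
    ]
}
```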
### Creating “Network” third party resource in Kubernetes
Both TPR and CRD will have the same selfLink: **/apis/kubernetes.com/v1/namespaces/default/networks/<netobjname>**
If you are using Kubernetes 1.7 or planning to use 1.8, you can use CRD itself; there is no need to change anything in Multus. For Kubernetes versions before 1.7, use TPR-based network objects as follows, starting with the TPR definition shown below.
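A sketch of such a TPR definition, saved for example to network-tpr.yaml and created with `kubectl create -f network-tpr.yaml` (the description text is illustrative; the metadata.name encodes the kind Network under the kubernetes.com group used throughout this guide):
```
apiVersion: extensions/v1beta1
kind: ThirdPartyResource
metadata:
  name: network.kubernetes.com
description: "A specification of a network object in Kubernetes"
versions:
- name: v1
```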
1. After the ThirdPartyResource object has been created, you can create network objects. Network objects should contain network fields in JSON format. In the following example, the plugin and args fields are set on an object of kind Network. The kind Network is derived from the metadata.name of the ThirdPartyResource object we created above.
2. Save the following YAML to flannel-network.yaml:
```
apiVersion: "kubernetes.com/v1"
kind: Network
metadata:
name: flannel-conf
plugin: flannel
args: '[
{
"delegate": {
"isDefaultGateway": true
}
}
]'
```
3. Run the kubectl create command for the TPR Network object:
```
# kubectl create -f ./flannel-network.yaml
network "flannel-conf" created
```
4. Manage the Network objects using kubectl:
```
# kubectl get network
NAME           KIND
flannel-conf   Network.v1.kubernetes.com
```
5. You can also view the raw JSON data. Here you can see that it contains the custom plugin and args fields from the YAML you used to create it.
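For instance, with a standard kubectl query (the output, not shown here, echoes the object's plugin and args fields):
```
# kubectl get network flannel-conf -o json
```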
6. The plugin field should be the name of the CNI plugin, and args should hold that plugin's arguments in JSON format, as shown above. **Users can create network objects for Calico, Weave, Romana, and Cilium and test Multus.**
7. Save the following YAML to sriov-network.yaml. Refer to [Intel - SR-IOV CNI](https://github.com/Intel-Corp/sriov-cni) or contact @kural in [Intel-Corp Slack](https://intel-corp.herokuapp.com/) for running DPDK-based workloads in Kubernetes.
```
apiVersion: "kubernetes.com/v1"
kind: Network
metadata:
name: sriov-conf
plugin: sriov
args: '[
{
"if0": "enp12s0f1",
"ipam": {
"type": "host-local",
"subnet": "10.56.217.0/24",
"rangeStart": "10.56.217.171",
"rangeEnd": "10.56.217.181",
"routes": [
{ "dst": "0.0.0.0/0" }
],
"gateway": "10.56.217.1"
}
}
]'
```
8. Save the following YAML to sriov-vlanid-l2enable-network.yaml:
```
apiVersion: "kubernetes.com/v1"
kind: Network
metadata:
name: sriov-vlanid-l2enable-conf
plugin: sriov
args: '[
{
"if0": "enp2s0",
"vlan": 210,
"if0name": "north",
"l2enable": true
}
]'
```
9. Follow step 3 to create the network objects “sriov-conf” and “sriov-vlanid-l2enable-conf”.
### Configuring Multus to use kubeconfig
1. Create the Multus CNI configuration file /etc/cni/net.d/multus-cni.conf on the minions with the content shown below. Use an absolute path to point to the kubeconfig file (it may change depending upon your cluster environment), and make sure all CNI binary files are in the `/opt/cni/bin` directory.
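A minimal sketch of that configuration, assuming the node kubeconfig lives at /etc/kubernetes/node-kubeconfig.yaml (adjust the path for your cluster):
```
{
    "name": "multus-cni-network",
    "type": "multus",
    "kubeconfig": "/etc/kubernetes/node-kubeconfig.yaml"
}
```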
### Configuring Multus to use the kubeconfig and also default networks
1. Many users want a default networking feature along with Network objects. Refer to issues [#14](https://github.com/Intel-Corp/multus-cni/issues/14) and [#17](https://github.com/Intel-Corp/multus-cni/issues/17) for more information. In the following config, Weave acts as the default network in the absence of a network field in the pod metadata annotation.
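A sketch of such a configuration, assuming Weave is installed on the node and its CNI plugin type is weave-net; the delegates entry supplies the default network whenever a pod carries no network annotation:
```
{
    "name": "multus-cni-network",
    "type": "multus",
    "kubeconfig": "/etc/kubernetes/node-kubeconfig.yaml",
    "delegates": [{
        "type": "weave-net",
        "hairpinMode": true
    }]
}
```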
## Testing the Multus CNI with Multiple Flannel Networks
GitHub user [YYGCui](https://github.com/YYGCui) has used multiple Flannel networks with the Multus CNI plugin. Please refer to this [closed issue](https://github.com/Intel-Corp/multus-cni/issues/7) for multiple overlay network support with Multus CNI.
Make sure that the multus, [sriov](https://github.com/Intel-Corp/sriov-cni), [flannel](https://github.com/containernetworking/cni/blob/master/Documentation/flannel.md), and [ptp](https://github.com/containernetworking/cni/blob/master/Documentation/ptp.md) binaries are in the `/opt/cni/bin` directory, and follow the steps mentioned in the [CNI](https://github.com/containernetworking/cni/#running-a-docker-container-with-network-namespace-set-up-by-cni-plugins) documentation.
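As a quick standalone sanity check outside Kubernetes, the CNI repository's helper scripts can exercise the plugins directly. A sketch, assuming the CNI repo is cloned under $GOPATH and a Multus network config sits in /etc/cni/net.d (CNI_PATH and NETCONFPATH point the scripts at the plugin binaries and the config directory):
```
# cd $GOPATH/src/github.com/containernetworking/cni/scripts
# CNI_PATH=/opt/cni/bin NETCONFPATH=/etc/cni/net.d ./docker-run.sh --rm busybox:latest ip addr
```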
## Testing the Multus CNI with Kubernetes
Refer to the Kubernetes User Guide and the network plugin documentation.
Kubelet must be configured to run with the CNI `--network-plugin` option, with the following configuration information.
Edit `/etc/default/kubelet` file and add `KUBELET_OPTS`:
```
KUBELET_OPTS="...
--network-plugin-dir=/etc/cni/net.d
--network-plugin=cni
"
```
Restart the kubelet:
```
# systemctl restart kubelet.service
```
### Launching workloads in Kubernetes
Launch the workload using a YAML file on the Kubernetes master. With the above Multus CNI configuration, each pod should have multiple interfaces; see the annotated pod sketch after these steps for attaching specific networks.
> Note: To verify whether the Multus CNI plugin is working correctly, create a pod containing one “busybox” container and execute the “ip link” command to check that interface management follows the configuration.
1. Create a “multus-test.yaml” file containing the configuration below. The created pod will consist of one “busybox” container running the “top” command.
```
apiVersion: v1
kind: Pod
metadata:
  name: multus-test
spec:  # specification of the pod's contents
  restartPolicy: Never
  containers:
  - name: test1
    image: "busybox"
    command: ["top"]
    stdin: true
    tty: true
```
2. Create the pod using the command:
```
# kubectl create -f multus-test.yaml
pod "multus-test" created
```
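Optionally, confirm that the pod has reached the Running state before exec'ing into it:
```
# kubectl get pod multus-test -o wide
```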
3. Run the “ip link” command inside the container:
```
# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
```
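The pod above attaches only the default network. To attach the Network objects created earlier, reference them in the pod's metadata annotation; a sketch, assuming the networks annotation key used by this PoC and the Network objects defined in the previous sections:
```
apiVersion: v1
kind: Pod
metadata:
  name: multus-multi-net-test
  annotations:
    networks: '[
        { "name": "flannel-conf" },
        { "name": "sriov-conf" }
    ]'
spec:
  containers:
  - name: test1
    image: "busybox"
    command: ["top"]
    stdin: true
    tty: true
```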
For any questions about Multus CNI, please open a GitHub issue or feel free to contact the developer @kural in our [Intel-Corp Slack](https://intel-corp.herokuapp.com/).