- Multus supports all [reference plugins](https://github.com/containernetworking/plugins) (e.g. [Flannel](https://github.com/containernetworking/plugins/tree/master/plugins/meta/flannel), [DHCP](https://github.com/containernetworking/plugins/tree/master/plugins/ipam/dhcp), [Macvlan](https://github.com/containernetworking/plugins/tree/master/plugins/main/macvlan)) that implement the CNI specification, as well as third-party plugins (e.g. [Calico](https://github.com/projectcalico/cni-plugin), [Weave](https://github.com/weaveworks/weave), [Cilium](https://github.com/cilium/cilium), [Contiv](https://github.com/contiv/netplugin)). In addition, Multus supports [SRIOV](https://github.com/hustcat/sriov-cni), [DPDK](https://github.com/Intel-Corp/sriov-cni), and [OVS-DPDK & VPP](https://github.com/intel/vhost-user-net-plugin) workloads, for both cloud-native and NFV-based applications in Kubernetes.
- Multus acts as a contract between the container runtime and other plugins; it has no network configuration of its own and calls other plugins, such as Flannel or Calico, to do the actual network configuration work.
- Multus reuses the concept of invoking delegates from Flannel: it groups multiple plugins into delegates and invokes them in sequential order, according to the JSON scheme in the CNI configuration.
- Read the [Containers Experience Kits (to be updated soon)](https://networkbuilders.intel.com/network-technologies/container-experience-kits).
- Try our Container Experience Kit demo from the KubeCon workshop [Enabling NFV Features in Kubernetes](https://github.com/intel/container-experience-kits-demo-area/).
- Invite yourself to the [#intel-sddsg-slack](https://intel-corp.herokuapp.com/) Slack channel.
- Ask a question on the [#general-discussion](https://intel-corp-team.slack.com/messages/C4C5RSEER) Slack channel.
- Need more assistance? [Email us](mailto:kuralamudhan.ramakrishnan@intel.com).
- Feel free to [file an issue](https://github.com/Intel-Corp/multus-cni/issues/new).
Please share your questions/feedback via this [Google Form](https://goo.gl/forms/upBWyGs8Wmq69IEi2)!
### NFV based Networking in Kubernetes
* KubeCon workshop on ["Enabling NFV features in Kubernetes"](https://kccncna17.sched.com/event/Cvnw/enabling-nfv-features-in-kubernetes-hosted-by-kuralamudhan-ramakrishnan-ivan-coughlan-intel) presentation [slide deck](https://www.slideshare.net/KuralamudhanRamakris/enabling-nfv-features-in-kubernetes-83923352)
* Feature brief
* [Multiple Network Interface Support in Kubernetes ](https://builders.intel.com/docs/networkbuilders/multiple-network-interfaces-support-in-kubernetes-feature-brief.pdf)
* [Enhanced Platform Awareness in Kubernetes](https://builders.intel.com/docs/networkbuilders/enhanced-platform-awareness-feature-brief.pdf)
* Application note
* [Multiple Network Interfaces in Kubernetes and Container Bare Metal ](https://builders.intel.com/docs/networkbuilders/multiple-network-interfaces-in-kubernetes-application-note.pdf)
* [Enhanced Platform Awareness Features in Kubernetes ](https://builders.intel.com/docs/networkbuilders/enhanced-platform-awareness-in-kubernetes-application-note.pdf)
* White paper
* [Enabling New Features with Kubernetes for NFV](https://builders.intel.com/docs/networkbuilders/enabling_new_features_in_kubernetes_for_NFV.pdf)
Kubelet is responsible for establishing the network interfaces for each pod; it does this by invoking its configured CNI plugin. When Multus is invoked, it retrieves the Multus-related pod annotations and uses them to look up a Kubernetes custom resource definition (CRD) object, which specifies which plugins to invoke and the configuration to be passed to them. The order of plugin invocation is important, as is the identity of the primary plugin.
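For illustration, under this model a pod selects its networks through an annotation listing the network objects to attach. A minimal sketch is shown below; the annotation key and the object names are assumptions based on the examples later in this document:
```
apiVersion: v1
kind: Pod
metadata:
  name: multus-multi-net-poc
  annotations:
    networks: '[
        { "name": "flannel-conf" },
        { "name": "sriov-conf" }
    ]'
spec:
  containers:
  - name: multus-multi-net-poc
    image: "busybox"
    command: ["top"]
```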
For more details, please refer to the Kubernetes Network SIG Multiple Network PoC proposal: [K8s Multiple Network proposal](https://docs.google.com/document/d/1TW3P4c8auWwYy-w_5afIPDcGNLK3LZf0m14943eVfVg/edit)
Both TPR and CRD objects will have the same selfLink: **/apis/kubernetes.com/v1/namespaces/default/networks/<netobjname>**
If you are using Kubernetes 1.7 or planning to use 1.8, you can use CRDs directly; there is no need to change anything in Multus. For Kubernetes versions below 1.7, use TPR-based network objects as follows:
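For reference, a ThirdPartyResource definition along these lines is assumed to have been created first (a minimal sketch; the description text is an assumption, while the metadata.name must yield kind Network under the kubernetes.com group used below):
```
apiVersion: extensions/v1beta1
kind: ThirdPartyResource
metadata:
  name: network.kubernetes.com
description: "A specification of a TPR-based Network object"
versions:
- name: v1
```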
1. After the ThirdPartyResource object has been created, you can create network objects. Network objects should contain network fields in JSON format. In the following example, the plugin and args fields are set on an object of kind Network. The kind Network is derived from the metadata.name of the ThirdPartyResource object created above.
2. Save the following YAML to flannel-network.yaml:
```
apiVersion: "kubernetes.com/v1"
kind: Network
metadata:
  name: flannel-conf
plugin: flannel
args: '[
  {
    "delegate": {
      "isDefaultGateway": true
    }
  }
]'
```
3. Run the kubectl create command for the TPR-based Network object:
```
# kubectl create -f ./flannel-network.yaml
network "flannel-conf" created
```
4. Manage the Network objects using kubectl:
```
# kubectl get network
NAME           KIND
flannel-conf   Network.v1.kubernetes.com
```
5. You can also view the raw JSON data. Here you can see that it contains the custom plugin and args fields from the YAML you used to create it:
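A hedged sketch of what this might look like (the exact metadata fields, key ordering, and formatting of the stored args string will vary by cluster; the selfLink follows the pattern noted earlier):
```
# kubectl get network flannel-conf -o json
{
    "apiVersion": "kubernetes.com/v1",
    "args": "[ { \"delegate\": { \"isDefaultGateway\": true } } ]",
    "kind": "Network",
    "metadata": {
        "name": "flannel-conf",
        "namespace": "default",
        "selfLink": "/apis/kubernetes.com/v1/namespaces/default/networks/flannel-conf"
    },
    "plugin": "flannel"
}
```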
6. The plugin field should be the name of the CNI plugin, and args should contain the plugin's arguments in JSON format, as shown above. **Users can create network objects for Calico, Weave, Romana, and Cilium and test Multus with them.**
7. Save the following YAML to sriov-network.yaml. Refer to [Intel - SR-IOV CNI](https://github.com/Intel-Corp/sriov-cni) or contact @kural in the [Intel-Corp Slack](https://intel-corp.herokuapp.com/) for running DPDK-based workloads in Kubernetes.
```
apiVersion: "kubernetes.com/v1"
kind: Network
metadata:
  name: sriov-conf
plugin: sriov
args: '[
  {
    "if0": "enp12s0f1",
    "ipam": {
      "type": "host-local",
      "subnet": "10.56.217.0/24",
      "rangeStart": "10.56.217.171",
      "rangeEnd": "10.56.217.181",
      "routes": [
        { "dst": "0.0.0.0/0" }
      ],
      "gateway": "10.56.217.1"
    }
  }
]'
```
8. Save the following YAML to sriov-vlanid-l2enable-network.yaml:
```
apiVersion: "kubernetes.com/v1"
kind: Network
metadata:
  name: sriov-vlanid-l2enable-conf
plugin: sriov
args: '[
  {
    "if0": "enp2s0",
    "vlan": 210,
    "if0name": "north",
    "l2enable": true
  }
]'
```
9. Follow step 3 to create the network objects “sriov-vlanid-l2enable-conf” and “sriov-conf”.
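For example, using the file names from the previous steps (the confirmation lines follow the same pattern as the flannel object above):
```
# kubectl create -f ./sriov-network.yaml
network "sriov-conf" created
# kubectl create -f ./sriov-vlanid-l2enable-network.yaml
network "sriov-vlanid-l2enable-conf" created
```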
1. Create the Multus CNI configuration file /etc/cni/net.d/multus-cni.conf with the content below on the minion nodes. Use an absolute path to point to the kubeconfig file (it may change depending on your cluster environment), and make sure all CNI binary files are in the `/opt/cni/bin` directory.
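A minimal sketch of such a configuration (the network name and kubeconfig path are assumptions; adjust them to your cluster):
```
{
    "name": "multus-demo-network",
    "type": "multus",
    "kubeconfig": "/etc/kubernetes/node-kubeconfig.yaml"
}
```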
### Configuring Multus to use the kubeconfig and also default networks
1. Many users want a default network in addition to the Network objects. Refer to issues [#14](https://github.com/Intel-Corp/multus-cni/issues/14) and [#17](https://github.com/Intel-Corp/multus-cni/issues/17) for more information. In the following config, Weave acts as the default network in the absence of a network field in the pod metadata annotations.
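A hedged sketch of such a configuration, combining the kubeconfig with a default delegate flagged as the master plugin (the network name, kubeconfig path, and Weave plugin type are assumptions; adjust them to your environment):
```
{
    "name": "multus-demo-network",
    "type": "multus",
    "kubeconfig": "/etc/kubernetes/node-kubeconfig.yaml",
    "delegates": [{
        "type": "weave-net",
        "hairpinMode": true,
        "masterplugin": true
    }]
}
```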
GitHub user [YYGCui](https://github.com/YYGCui) has used multiple Flannel networks with the Multus CNI plugin. Please refer to this [closed issue](https://github.com/Intel-Corp/multus-cni/issues/7) for multiple overlay network support with Multus CNI.
Make sure that the multus, [sriov](https://github.com/Intel-Corp/sriov-cni), [flannel](https://github.com/containernetworking/cni/blob/master/Documentation/flannel.md), and [ptp](https://github.com/containernetworking/cni/blob/master/Documentation/ptp.md) binaries are in the `/opt/cni/bin` directory, and follow the steps mentioned in the [CNI documentation](https://github.com/containernetworking/cni/#running-a-docker-container-with-network-namespace-set-up-by-cni-plugins).
Launch the workload using a YAML file on the Kubernetes master. With the above configuration in the Multus CNI, each pod should have multiple interfaces.
> Note: To verify whether the Multus CNI plugin is working correctly, create a pod containing one “busybox” container and execute the “ip link” command to check that interface management follows the configuration.
1. Create a “multus-test.yaml” file containing the configuration below. The created pod will consist of one “busybox” container running the “top” command.
```
apiVersion: v1
kind: Pod
metadata:
  name: multus-test
spec:  # specification of the pod's contents
  restartPolicy: Never
  containers:
  - name: test1
    image: "busybox"
    command: ["top"]
    stdin: true
    tty: true
```
2. Create the pod using the following command:
```
# kubectl create -f multus-test.yaml
pod "multus-test" created
```
3. Run the “ip link” command inside the container:
```
# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
```
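Equivalently, the same check can be run from the master node (a hedged example using the pod name above; the interfaces listed will depend on your configuration):
```
# kubectl exec -it multus-test -- ip link
```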
For any questions about Multus CNI, please open a GitHub issue or feel free to contact the developer @kural in our [Intel-Corp Slack](https://intel-corp.herokuapp.com/).