## Multus CNI usage guide

### Prerequisites

* Kubelet configured to use CNI
* Kubernetes version with CRD support (generally )

Your Kubelet(s) must be configured to run with the CNI network plugin. Please see the [Kubernetes document for CNI](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#cni) for more details.

### Install Multus

Generally we recommend two options: manually place a Multus binary in your `/opt/cni/bin`, or use our [quick-start method](quickstart.md) -- which creates a daemonset with an opinionated way of installing and configuring Multus CNI (recommended).

*Copy Multus binary into place*

You may acquire the Multus binary via compilation (see the [developer guide](development.md)) or download a binary from the [GitHub releases](https://github.com/k8snetworkplumbingwg/multus-cni/releases) page. Copy the multus binary into your CNI binary directory, usually `/opt/cni/bin`. Perform this on all nodes in your cluster (master and nodes).

```
cp multus /opt/cni/bin
```

*Via daemonset method*

As a [quickstart](quickstart.md), you may apply these YAML files (included in the clone of this repository). Run this command (typically you would run this on the master, or wherever you have access to the `kubectl` command to manage your cluster).

```
cat ./deployments/multus-daemonset.yml | kubectl apply -f -        # thin deployment
# or
cat ./deployments/multus-daemonset-thick.yml | kubectl apply -f -  # thick (client/server) deployment
```

If you need more comprehensive detail, continue along with this guide; otherwise, you may wish to either [follow the quickstart guide](quickstart.md) or skip to the ['Create network attachment definition'](#create-network-attachment-definition) section.

### Set up conf file in /etc/cni/net.d/ (Installed automatically by Daemonset)

**If you use the daemonset to install Multus, skip this section and go to the "Create network attachment definition" section.**

Put a CNI config file in `/etc/cni/net.d`.
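Since the runtime picks up the lexicographically first config file in this directory, the `00-` prefix given to the Multus config below is what makes it win. A minimal sketch of that ordering (the temporary directory and the flannel file name are illustrative assumptions, not part of this guide):

```shell
# Simulate /etc/cni/net.d in a temporary directory: the first file in
# sorted order is the one the CNI runtime loads, so "00-multus.conf"
# is picked ahead of "10-flannel.conf".
dir=$(mktemp -d)
touch "$dir/10-flannel.conf" "$dir/00-multus.conf"
ls "$dir" | head -n 1   # prints "00-multus.conf"
rm -rf "$dir"
```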
Kubernetes CNI runtime uses the alphabetically first file in the directory. (`"NOTE1"` and `"NOTE2"` are just comments; you can remove them from your configuration.)

Execute the following commands at all Kubernetes nodes (i.e. master and minions):

```
mkdir -p /etc/cni/net.d
cat >/etc/cni/net.d/00-multus.conf <<EOF
{
  "name": "multus-cni-network",
  "type": "multus",
  "delegates": [
    {
      "NOTE1": "This is an example; write your own CNI config in delegates",
      "NOTE2": "If you use flannel, you also need to run the flannel daemonset first",
      "type": "flannel",
      "name": "flannel.1",
      "delegate": {
        "isDefaultGateway": true
      }
    }
  ],
  "kubeconfig": "/etc/cni/net.d/multus.d/multus.kubeconfig"
}
EOF
```

Create the kubeconfig that Multus uses to reach the Kubernetes API (the `kubeconfig` path referenced above). `${SERVICEACCOUNT_CA}` and `${SERVICEACCOUNT_TOKEN}` must contain the CA data and the token of a service account permitted to read NetworkAttachmentDefinitions:

```
# Execute following commands at Kubernetes master
mkdir -p /etc/cni/net.d/multus.d
cat > /etc/cni/net.d/multus.d/multus.kubeconfig <<EOF
apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    server: https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}
    certificate-authority-data: ${SERVICEACCOUNT_CA}
users:
- name: multus
  user:
    token: "${SERVICEACCOUNT_TOKEN}"
contexts:
- context:
    cluster: local
    user: multus
  name: multus-context
current-context: multus-context
EOF
```

### Create network attachment definition

A `NetworkAttachmentDefinition` sets up a network attachment, i.e. a secondary interface for a pod. It can be configured in two ways: with an inline JSON CNI config, or with a CNI config file on disk.

#### NetworkAttachmentDefinition with json CNI config

```
# Execute following command at Kubernetes master
cat <<EOF | kubectl create -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf-1
spec:
  config: '{
      "cniVersion": "0.3.0",
      "type": "macvlan",
      "master": "eth1",
      "mode": "bridge",
      "ipam": {
        "type": "host-local",
        "ranges": [
          [ {
            "subnet": "10.10.0.0/16",
            "rangeStart": "10.10.1.20",
            "rangeEnd": "10.10.3.50"
          } ]
        ]
      }
    }'
EOF
```

#### NetworkAttachmentDefinition with CNI config file

If the `NetworkAttachmentDefinition` has no `spec`, Multus looks for a config file in `/etc/cni/multus/net.d` whose `name` field matches the name of the object:

```
# Execute following command at Kubernetes master
cat <<EOF | kubectl create -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf-2
EOF
```

```
# Execute following commands at all Kubernetes nodes (i.e. master and minions)
mkdir -p /etc/cni/multus/net.d
cat <<EOF > /etc/cni/multus/net.d/macvlan2.conf
{
  "cniVersion": "0.3.0",
  "type": "macvlan",
  "name": "macvlan-conf-2",
  "master": "eth1",
  "mode": "bridge",
  "ipam": {
    "type": "host-local",
    "ranges": [
      [ {
        "subnet": "11.10.0.0/16",
        "rangeStart": "11.10.1.20",
        "rangeEnd": "11.10.3.50"
      } ]
    ]
  }
}
EOF
```

### Run pod with network annotation

#### Launch pod with text annotation

```
# Execute following command at Kubernetes master
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-case-01
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-conf-1, macvlan-conf-2
spec:
  containers:
  - name: pod-case-01
    image: docker.io/centos/tools:latest
    command:
    - /sbin/init
EOF
```

#### Launch pod with text annotation for NetworkAttachmentDefinition in different namespace

You can also specify a NetworkAttachmentDefinition in another namespace by adding the prefix `<namespace>/`

```
# Execute following command at Kubernetes master
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-case-02
  annotations:
    k8s.v1.cni.cncf.io/networks: kube-system/macvlan-conf-1
spec:
  containers:
  - name: pod-case-02
    image: docker.io/centos/tools:latest
    command:
    - /sbin/init
EOF
```

#### Launch pod with text annotation with interface name

You can also specify the interface name by adding the suffix `@<ifname>`.

```
# Execute following command at Kubernetes master
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-case-03
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-conf-1@macvlan1
spec:
  containers:
  - name: pod-case-03
    image: docker.io/centos/tools:latest
    command:
    - /sbin/init
EOF
```

#### Launch pod with json annotation

```
# Execute following command at Kubernetes master
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-case-04
  annotations:
    k8s.v1.cni.cncf.io/networks: '[
      { "name": "macvlan-conf-1" },
      { "name": "macvlan-conf-2" }
    ]'
spec:
  containers:
  - name: pod-case-04
    image: docker.io/centos/tools:latest
    command:
    - /sbin/init
EOF
```

#### Launch pod with json annotation for NetworkAttachmentDefinition in different namespace

You can also specify the namespace by adding `"namespace": "<namespace>"`.

```
# Execute following command at Kubernetes master
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-case-05
  annotations:
    k8s.v1.cni.cncf.io/networks: '[
      { "name": "macvlan-conf-1", "namespace": "kube-system" }
    ]'
spec:
  containers:
  - name: pod-case-05
    image: docker.io/centos/tools:latest
    command:
    - /sbin/init
EOF
```

#### Launch pod with json annotation with interface name

You can also specify the interface name by adding `"interface": "<ifname>"`.

```
# Execute following command at Kubernetes master
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-case-06
  annotations:
    k8s.v1.cni.cncf.io/networks: '[
      { "name": "macvlan-conf-1", "interface": "macvlan1" },
      { "name": "macvlan-conf-2" }
    ]'
spec:
  containers:
  - name: pod-case-06
    image: docker.io/centos/tools:latest
    command:
    - /sbin/init
EOF
```

### Verifying pod network interfaces

The following is an example of the `ip -d address` output for the pod above, "pod-case-06":

```
# Execute following command at Kubernetes master
kubectl exec -it pod-case-06 -- ip -d address
```

```
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
3: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
    link/ether 0a:58:0a:f4:02:06 brd ff:ff:ff:ff:ff:ff link-netnsid 0 promiscuity 0
    veth numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
    inet 10.244.2.6/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::ac66:45ff:fe7c:3a19/64 scope link
       valid_lft forever preferred_lft forever
4: macvlan1@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
    link/ether 4e:6d:7a:4e:14:87 brd ff:ff:ff:ff:ff:ff link-netnsid 0 promiscuity 0
    macvlan mode bridge numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
    inet 10.10.1.22/16 scope global macvlan1
       valid_lft
forever preferred_lft forever
    inet6 fe80::4c6d:7aff:fe4e:1487/64 scope link
       valid_lft forever preferred_lft forever
5: net2@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
    link/ether 6e:e3:71:7f:86:f7 brd ff:ff:ff:ff:ff:ff link-netnsid 0 promiscuity 0
    macvlan mode bridge numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
    inet 11.10.1.22/16 scope global net2
       valid_lft forever preferred_lft forever
    inet6 fe80::6ce3:71ff:fe7f:86f7/64 scope link
       valid_lft forever preferred_lft forever
```

| Interface name | Description |
| --- | --- |
| lo | loopback |
| eth0 | Default network interface (flannel) |
| macvlan1 | macvlan interface (macvlan-conf-1) |
| net2 | macvlan interface (macvlan-conf-2) |

## Specifying a default route for a specific attachment

Typically, the default route for a pod routes traffic over `eth0`, and therefore over the cluster-wide default network. You may wish to specify that a different network attachment has the default route instead. You can achieve this by using the JSON-formatted annotation and specifying a `default-route` key.

*NOTE*: Consider that this may impact some functionality that depends on traffic routing over the cluster-wide default network.

For example, we have this configuration for macvlan:

```
cat <
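# As a hedged sketch (the attachment name and the gateway IP here are
# illustrative assumptions, not taken from this guide), the JSON-form
# annotation carrying a "default-route" key looks like this on the pod:
#
#   annotations:
#     k8s.v1.cni.cncf.io/networks: '[
#       { "name": "macvlan-conf-1", "default-route": ["10.10.0.254"] }
#     ]'
#
# Quick structural check that the annotation value carries the key:
annotation='[ { "name": "macvlan-conf-1", "default-route": ["10.10.0.254"] } ]'
echo "$annotation" | grep -c '"default-route"'   # prints 1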