Attention!! This is WIP for Network Plumbing WG PoC & experimental work; it is not part of the Multus implementation.
Table of Contents
- Multi network CNI plugin
- Multi-Homed pod
- Build
- Work flow
- Usage with Kubernetes CRD/TPR based Network Objects
- Using Multus Conf file
- Testing the Multus CNI
- Contacts
Multi network CNI plugin
Multi-Homed pod
Build
This plugin requires Go 1.8 to build.
Go 1.5 users will need to set GO15VENDOREXPERIMENT=1
to get vendored dependencies. This flag is set by default in 1.6.
# ./build
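For example, a typical from-source build might look like the following (the repository URL here is illustrative; use the location you cloned from):
# git clone https://github.com/Intel-Corp/multus-cni.git
# cd multus-cni
# ./build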
Work flow
## Network configuration reference
- name (string, required): the name of the network
- type (string, required): "multus"
- kubeconfig (string, optional): kubeconfig file for out-of-cluster communication with the kube-apiserver; refer to the doc
- delegates ([]map, required): the delegate plugin details for Multus; ignored when kubeconfig is provided
- masterplugin (bool, required): the master plugin, which reports the IP address and DNS back to the container
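As a minimal sketch, a Multus configuration using these fields could look like the following (the plugin choice and values are illustrative; complete, tested examples appear later in this README):
{
  "name": "minimal-multus-network",
  "type": "multus",
  "delegates": [
    {
      "type": "flannel",
      "masterplugin": true,
      "delegate": {
        "isDefaultGateway": true
      }
    }
  ]
}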
Usage with Kubernetes CRD/TPR based Network Objects
Kubelet is responsible for establishing the network interfaces for each pod; it does this by invoking its configured CNI plugin. When Multus is invoked, it retrieves the Multus-related annotations from the pod, then uses those annotations to look up a Kubernetes custom resource definition (CRD) based network object, which tells Multus which plugins to invoke and the configuration to pass to them. The order of plugin invocation is important, as is the identity of the primary plugin.
For more details, please refer to the Kubernetes Network SIG - Multiple Network PoC proposal: K8s Multiple Network proposal.
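Concretely, the pod annotation Multus reads is a JSON list of network object names. A fragment of a pod spec is shown below (the object names are illustrative; a full pod example appears later in this README):
metadata:
  annotations:
    networks: '[ { "name": "flannel-conf" }, { "name": "sriov-conf" } ]'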
Creating “Network” third party resource in kubernetes
Multus is compatible with both CRD and TPR based network objects, and the API selfLink is the same for both.
CRD based Network objects
- Create a custom resource definition "crdnetwork.yaml" for the network object as shown below
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  # name must match the spec fields below, and be in the form: <plural>.<group>
  name: networks.kubernetes-network.cni.cncf.io
spec:
  # group name to use for REST API: /apis/<group>/<version>
  group: kubernetes-network.cni.cncf.io
  # version name to use for REST API: /apis/<group>/<version>
  version: v1
  # either Namespaced or Cluster
  scope: Namespaced
  names:
    # plural name to be used in the URL: /apis/<group>/<version>/<plural>
    plural: networks
    # singular name to be used as an alias on the CLI and for display
    singular: network
    # kind is normally the CamelCased singular type. Your resource manifests use this.
    kind: Network
    # shortNames allow shorter string to match your resource on the CLI
    shortNames:
    - net
- Run kubectl create command for the Custom Resource Definition
# kubectl create -f ./crdnetwork.yaml
customresourcedefinition "networks.kubernetes-network.cni.cncf.io" created
- Run kubectl get command to check the Network CRD creation
# kubectl get CustomResourceDefinition
NAME                                       KIND
networks.kubernetes-network.cni.cncf.io   CustomResourceDefinition.v1beta1.apiextensions.k8s.io
- Save the following YAML to flannel-network.yaml
apiVersion: "kubernetes-network.cni.cncf.io/v1"
kind: Network
metadata:
  name: flannel-networkobj
plugin: flannel
args: '[
  {
    "delegate": {
      "isDefaultGateway": true
    }
  }
]'
- Create the custom network object
# kubectl create -f customCRD/flannel-network.yaml
network "flannel-networkobj" created
# kubectl get network
NAME KIND ARGS PLUGIN
flannel-networkobj Network.v1.kubernetes-network.cni.cncf.io [ { "delegate": { "isDefaultGateway": true } } ] flannel
- Get the custom network object details
# kubectl get network flannel-networkobj -o yaml
apiVersion: kubernetes-network.cni.cncf.io/v1
args: '[ { "delegate": { "isDefaultGateway": true } } ]'
kind: Network
metadata:
  clusterName: ""
  creationTimestamp: 2017-07-11T21:46:52Z
  deletionGracePeriodSeconds: null
  deletionTimestamp: null
  name: flannel-networkobj
  namespace: default
  resourceVersion: "6848829"
  selfLink: /apis/kubernetes-network.cni.cncf.io/v1/namespaces/default/networks/flannel-networkobj
  uid: 7311c965-6682-11e7-b0b9-408d5c537d27
plugin: flannel
Both TPR and CRD based objects will have the same selfLink: /apis/kubernetes-network.cni.cncf.io/v1/namespaces/default/networks/
If you are using Kubernetes 1.7, or planning to use 1.8, you can use CRDs directly; nothing needs to change in Multus. For Kubernetes versions earlier than 1.7, use TPR based network objects as follows.
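Because the selfLink is identical for both, clients can fetch a network object the same way whether it is backed by a CRD or a TPR. For example, with kubectl proxy running on its default port 8001 (the port and object name here are illustrative):
# kubectl proxy &
# curl http://127.0.0.1:8001/apis/kubernetes-network.cni.cncf.io/v1/namespaces/default/networks/flannel-networkobj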
TPR based Network objects
- Create a third party resource "tprnetwork.yaml" for the network object as shown below
apiVersion: extensions/v1beta1
kind: ThirdPartyResource
metadata:
  name: network.kubernetes-network.cni.cncf.io
description: "A specification of a Network obj in the kubernetes"
versions:
- name: v1
- Run kubectl create command for the Third Party Resource
# kubectl create -f ./tprnetwork.yaml
thirdpartyresource "network.kubernetes-network.cni.cncf.io" created
- Run kubectl get command to check the Network TPR creation
# kubectl get thirdpartyresource
NAME DESCRIPTION VERSION(S)
network.kubernetes-network.cni.cncf.io A specification of a Network obj in the kubernetes v1
Creating custom "Network" objects from the third party resource in kubernetes
- After the ThirdPartyResource object has been created, you can create network objects. Network objects should contain network fields in JSON format. In the following example, the plugin and args fields are set on an object of kind Network. The kind Network is derived from the metadata.name of the ThirdPartyResource object we created above.
- Save the following YAML to flannel-network.yaml
apiVersion: "kubernetes-network.cni.cncf.io/v1"
kind: Network
metadata:
name: flannel-conf
plugin: flannel
args: '[
{
"delegate": {
"isDefaultGateway": true
}
}
]'
- Run kubectl create command for the TPR - Network object
# kubectl create -f ./flannel-network.yaml
network "flannel-conf" created
- Manage the Network objects using kubectl.
# kubectl get network
NAME KIND
flannel-conf Network.v1.kubernetes-network.cni.cncf.io
- You can also view the raw JSON data. Here you can see that it contains the custom plugin and args fields from the yaml you used to create it:
# kubectl get network flannel-conf -o yaml
apiVersion: kubernetes-network.cni.cncf.io/v1
args: '[ { "delegate": { "isDefaultGateway": true } } ]'
kind: Network
metadata:
  creationTimestamp: 2017-06-28T14:20:52Z
  name: flannel-conf
  namespace: default
  resourceVersion: "5422876"
  selfLink: /apis/kubernetes-network.cni.cncf.io/v1/namespaces/default/networks/flannel-conf
  uid: fdcb94a2-5c0c-11e7-bbeb-408d5c537d27
plugin: flannel
- The plugin field should be the name of the CNI plugin, and args should contain the plugin's arguments in JSON format, as shown above. Users can create network objects for Calico, Weave, Romana, and Cilium, and test Multus with them.
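Conceptually, Multus combines the plugin field with each entry in args to build a CNI delegate configuration for the named plugin. A rough sketch of the delegate built from the flannel-conf object above (illustrative only, not Multus's exact internal representation):
{
  "type": "flannel",
  "delegate": {
    "isDefaultGateway": true
  }
}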
- Save the following YAML to sriov-network.yaml. Refer to Intel - SR-IOV CNI, or contact @kural in the Intel-Corp Slack, for running DPDK based workloads in Kubernetes
apiVersion: "kubernetes-network.cni.cncf.io/v1"
kind: Network
metadata:
  name: sriov-conf
plugin: sriov
args: '[
  {
    "if0": "enp12s0f1",
    "ipam": {
      "type": "host-local",
      "subnet": "10.56.217.0/24",
      "rangeStart": "10.56.217.171",
      "rangeEnd": "10.56.217.181",
      "routes": [
        { "dst": "0.0.0.0/0" }
      ],
      "gateway": "10.56.217.1"
    }
  }
]'
- Save the following YAML to sriov-vlanid-l2enable-network.yaml
apiVersion: "kubernetes-network.cni.cncf.io/v1"
kind: Network
metadata:
  name: sriov-vlanid-l2enable-conf
plugin: sriov
args: '[
  {
    "if0": "enp2s0",
    "vlan": 210,
    "if0name": "north",
    "l2enable": true
  }
]'
- Follow step 2 above to create the network objects "sriov-vlanid-l2enable-conf" and "sriov-conf"
- Manage the Network objects using kubectl.
# kubectl get network
NAME KIND
flannel-conf Network.v1.kubernetes-network.cni.cncf.io
sriov-vlanid-l2enable-conf Network.v1.kubernetes-network.cni.cncf.io
sriov-conf Network.v1.kubernetes-network.cni.cncf.io
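Other standard kubectl verbs, such as describe and delete, work against these objects as well (shown here purely as an illustration; the objects above are still needed for the pod example that follows):
# kubectl delete network <network-object-name>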
Configuring Multus to use the kubeconfig
- Create the Multus CNI configuration file /etc/cni/net.d/multus-cni.conf with the content below on each node. Use only an absolute path to point to the kubeconfig file (it may change depending on your cluster environment), and make sure all CNI binary files are in the /opt/cni/bin directory.
{
  "name": "node-cni-network",
  "type": "multus",
  "kubeconfig": "/etc/kubernetes/node-kubeconfig.yaml"
}
- Restart kubelet service
# systemctl restart kubelet
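To sanity-check the node setup after the restart, confirm that the configuration and plugin binaries are where kubelet expects them (paths as used above):
# cat /etc/cni/net.d/multus-cni.conf
# ls /opt/cni/bin    # should include multus and the delegate plugin binaries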
Configuring Multus to use the kubeconfig and also default networks
- Many users want a default networking feature along with the Network objects. Refer to issues #14 & #17 for more information. In the following config, Weave acts as the default network in the absence of a network field in the pod metadata annotations.
{
  "name": "node-cni-network",
  "type": "multus",
  "kubeconfig": "/etc/kubernetes/node-kubeconfig.yaml",
  "delegates": [{
    "type": "weave-net",
    "hairpinMode": true,
    "masterplugin": true
  }]
}
- Restart kubelet service
# systemctl restart kubelet
Configuring Pod to use the TPR Network objects
- Save the following YAML to pod-multi-network.yaml. In this case the flannel-conf network object acts as the primary network.
# cat pod-multi-network.yaml
apiVersion: v1
kind: Pod
metadata:
  name: multus-multi-net-poc
  annotations:
    networks: '[
      { "name": "flannel-conf" },
      { "name": "sriov-conf" },
      { "name": "sriov-vlanid-l2enable-conf" }
    ]'
spec:  # specification of the pod's contents
  containers:
  - name: multus-multi-net-poc
    image: "busybox"
    command: ["top"]
    stdin: true
    tty: true
- Create the multiple network based pod from the master node
# kubectl create -f ./pod-multi-network.yaml
pod "multus-multi-net-poc" created
- Get the details of the running pod from the master
# kubectl get pods
NAME READY STATUS RESTARTS AGE
multus-multi-net-poc 1/1 Running 0 30s
Verifying Pod network
- Run “ifconfig” command inside the container:
# kubectl exec -it multus-multi-net-poc -- ifconfig
eth0      Link encap:Ethernet  HWaddr 06:21:91:2D:74:B9
          inet addr:192.168.42.3  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::421:91ff:fe2d:74b9/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:648 (648.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

net0      Link encap:Ethernet  HWaddr D2:94:98:82:00:00
          inet addr:10.56.217.171  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::d094:98ff:fe82:0/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:120 (120.0 B)  TX bytes:648 (648.0 B)

north     Link encap:Ethernet  HWaddr BE:F2:48:42:83:12
          inet6 addr: fe80::bcf2:48ff:fe42:8312/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1420 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1276 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:95956 (93.7 KiB)  TX bytes:82200 (80.2 KiB)
Interface name | Description |
---|---|
lo | loopback |
eth0@if41 | Flannel network tap interface |
net0 | VF0 of NIC 1 assigned to the container by Intel - SR-IOV CNI plugin |
north | VF0 of NIC 2 assigned with VLAN ID 210 to the container by SR-IOV CNI plugin |
- Check the VLAN ID of the NIC 2 VFs
# ip link show enp2s0
20: enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 24:8a:07:e8:7d:40 brd ff:ff:ff:ff:ff:ff
    vf 0 MAC 00:00:00:00:00:00, vlan 210, spoof checking off, link-state auto
    vf 1 MAC 00:00:00:00:00:00, vlan 4095, spoof checking off, link-state auto
    vf 2 MAC 00:00:00:00:00:00, vlan 4095, spoof checking off, link-state auto
    vf 3 MAC 00:00:00:00:00:00, vlan 4095, spoof checking off, link-state auto
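A quick way to confirm that a secondary interface actually passes traffic is to ping its gateway from inside the pod (the address below comes from the ipam section of the sriov-conf object above; adjust for your network):
# kubectl exec -it multus-multi-net-poc -- ping -c 3 10.56.217.1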
Using Multus Conf file
Given the following network configuration (the if0 and ipam fields under the sriov entry are part of the sriov plugin's own configuration; note that JSON does not permit comments, so the file must contain none):
# tee /etc/cni/net.d/multus-cni.conf <<-'EOF'
{
  "name": "multus-demo-network",
  "type": "multus",
  "delegates": [
    {
      "type": "sriov",
      "if0": "enp12s0f0",
      "ipam": {
        "type": "host-local",
        "subnet": "10.56.217.0/24",
        "rangeStart": "10.56.217.131",
        "rangeEnd": "10.56.217.190",
        "routes": [
          { "dst": "0.0.0.0/0" }
        ],
        "gateway": "10.56.217.1"
      }
    },
    {
      "type": "ptp",
      "ipam": {
        "type": "host-local",
        "subnet": "10.168.1.0/24",
        "rangeStart": "10.168.1.11",
        "rangeEnd": "10.168.1.20",
        "routes": [
          { "dst": "0.0.0.0/0" }
        ],
        "gateway": "10.168.1.1"
      }
    },
    {
      "type": "flannel",
      "masterplugin": true,
      "delegate": {
        "isDefaultGateway": true
      }
    }
  ]
}
EOF
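A malformed configuration file typically surfaces only as pod-creation failures, so it can be worth validating the JSON after writing it (any JSON validator works; python's json.tool is just one option):
# python -m json.tool < /etc/cni/net.d/multus-cni.conf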
Further options for the CNI configuration file:
One may also specify always_use_default as a boolean value. This option requires the CRD method (and therefore you must also specify both the kubeconfig and delegates options, or Multus will present an error message). When always_use_default is true, the delegates network will always be applied, along with those specified in the annotations.
For example, a valid configuration using always_use_default may look like:
{
  "name": "multus-cni-network",
  "type": "multus",
  "delegates": [{"type": "flannel", "isDefaultGateway": true, "masterplugin": true}],
  "always_use_default": true,
  "kubeconfig": "/etc/kubernetes/kubelet.conf"
}
Testing the Multus CNI
Multiple Flannel Network
GitHub user YYGCui has used multiple flannel networks with the Multus CNI plugin. Please refer to this closed issue for multiple overlay network support with Multus CNI.
docker
Make sure that the multus, sriov, flannel, and ptp binaries are in the /opt/cni/bin directory and follow the steps mentioned in the CNI documentation.
Kubernetes
Refer to the Kubernetes User Guide and the network plugin documentation.
Kubelet must be configured to run with the CNI --network-plugin, with the following configuration information. Edit the /etc/default/kubelet file and add KUBELET_OPTS:
KUBELET_OPTS="...
--network-plugin-dir=/etc/cni/net.d
--network-plugin=cni
"
Restart the kubelet
# systemctl restart kubelet.service
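If pods fail to start after the restart, the kubelet logs are usually the fastest way to spot CNI errors (on systemd-based distributions):
# journalctl -u kubelet -f | grep -i cni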
Launching workloads in Kubernetes
Launch the workload using a yaml file on the kubernetes master; with the above Multus CNI configuration, each pod should have multiple network interfaces.
Note: To verify whether the Multus CNI plugin is working correctly, create a pod containing one "busybox" container and execute the "ip link" command to check that interface management follows the configuration.
- Create a "multus-test.yaml" file containing the below configuration. The created pod will consist of one "busybox" container running the "top" command.
apiVersion: v1
kind: Pod
metadata:
  name: multus-test
spec:  # specification of the pod's contents
  restartPolicy: Never
  containers:
  - name: test1
    image: "busybox"
    command: ["top"]
    stdin: true
    tty: true
- Create the pod using the command:
# kubectl create -f multus-test.yaml
pod "multus-test" created
- Run “ip link” command inside the container:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
3: eth0@if41: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 26:52:6b:d8:44:2d brd ff:ff:ff:ff:ff:ff
20: net0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq qlen 1000
    link/ether f6:fb:21:4f:1d:63 brd ff:ff:ff:ff:ff:ff
21: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq qlen 1000
    link/ether 76:13:b1:60:00:00 brd ff:ff:ff:ff:ff:ff
Interface name | Description |
---|---|
lo | loopback |
eth0@if41 | Flannel network tap interface |
net0 | VF assigned to the container by SR-IOV CNI plugin |
net1 | ptp localhost interface |
Contacts
For any questions about Multus CNI, please reach out on GitHub issues, or feel free to contact the developer @kural in our Intel-Corp Slack.