Update README.md and split into several child documents

Fix #154 and #139. Thank you @dougbtv for reviewing the docs!
Tomofumi Hayashi 2018-11-13 09:06:22 +01:00
parent f4068d18bd
commit 1a78cb5766
5 changed files with 570 additions and 693 deletions

README.md

@@ -1,46 +1,28 @@
# Multus-CNI
![multus-cni Logo](https://github.com/intel/multus-cni/blob/master/doc/images/Multus.png)
[![Travis CI](https://travis-ci.org/intel/multus-cni.svg?branch=master)](https://travis-ci.org/intel/multus-cni/builds)[![Go Report Card](https://goreportcard.com/badge/github.com/intel/multus-cni)](https://goreportcard.com/report/github.com/intel/multus-cni)
* [MULTUS CNI plugin](#multus-cni-plugin)
* [Quickstart Guide](#quickstart-guide)
* [Multi-Homed pod](#multi-homed-pod)
* [Building from source](#building-from-source)
* [Work flow](#work-flow)
* [Usage with Kubernetes CRD based network objects](#usage-with-kubernetes-crd-based-network-objects)
* [Creating "Network" resources in Kubernetes](#creating-network-resources-in-kubernetes)
* [<strong>CRD based Network objects</strong>](#crd-based-network-objects)
* [Configuring Multus to use the kubeconfig](#configuring-multus-to-use-the-kubeconfig)
* [Configuring Multus to use kubeconfig and a default network](#configuring-multus-to-use-kubeconfig-and-a-default-network)
* [Configuring Pod to use the CRD network objects](#configuring-pod-to-use-the-crd-network-objects)
* [Verifying Pod network interfaces](#verifying-pod-network-interfaces)
* [Using with Multus conf file](#using-with-multus-conf-file)
* [Logging Options](#logging-options)
* [How to use with Network Device plugins?](#cni-running-with-network-device-plugin)
* [Default Network Readiness Checks](#default-network-readiness-checks)
* [Testing Multus CNI](#testing-multus-cni)
* [Multiple flannel networks](#multiple-flannel-networks)
* [Configure Kubernetes with CNI](#configure-kubernetes-with-cni)
* [Launching workloads in Kubernetes](#launching-workloads-in-kubernetes)
* [Multus additional plugins](#multus-additional-plugins)
* [NFV based networking in Kubernetes](#nfv-based-networking-in-kubernetes)
* [Need help](#need-help)
* [Contacts](#contacts)
Multus CNI enables attaching multiple network interfaces to pods in Kubernetes.
# MULTUS CNI plugin
- _Multus_ is a Latin word for "multi"
- As the name suggests, it acts as a multi plugin in Kubernetes and provides multiple network interface support in a pod
- Multus supports all [reference plugins](https://github.com/containernetworking/plugins) (e.g. [Flannel](https://github.com/containernetworking/plugins/tree/master/plugins/meta/flannel), [DHCP](https://github.com/containernetworking/plugins/tree/master/plugins/ipam/dhcp), [Macvlan](https://github.com/containernetworking/plugins/tree/master/plugins/main/macvlan)) that implement the CNI specification and all 3rd party plugins (e.g. [Calico](https://github.com/projectcalico/cni-plugin), [Weave](https://github.com/weaveworks/weave), [Cilium](https://github.com/cilium/cilium), [Contiv](https://github.com/contiv/netplugin)). In addition, Multus supports [SRIOV](https://github.com/hustcat/sriov-cni), [SRIOV-DPDK](https://github.com/Intel-Corp/sriov-cni), and [OVS-DPDK & VPP](https://github.com/intel/vhost-user-net-plugin) workloads in Kubernetes for both cloud-native and NFV-based applications
- It acts as a contract between the container runtime and other plugins; it doesn't have any network configuration of its own, and instead calls other plugins, such as Flannel or Calico, to do the real network configuration job
- Multus reuses the concept of invoking delegates, as used in Flannel, by grouping multiple plugins into delegates and invoking them in the sequential order given in the CNI configuration file (provided in JSON format)
- The default network gets "eth0", and additional network attachments get pod interface names "net0", "net1", ..., "netX", and so on. Multus also supports user-specified interface names (see the annotation sketch after this list)
- Multus is one of the projects in the [Baremetal Container Experience kit](https://networkbuilders.intel.com/network-technologies/container-experience-kits).
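As a quick illustrative sketch (the attachment name `macvlan-conf` and the interface name `myiface1` are placeholders, not part of any shipped configuration), a pod annotation can pin the interface name of an additional attachment:
```
metadata:
  annotations:
    # The default network still provides eth0; this attachment appears as
    # "myiface1" instead of the auto-assigned "net0".
    k8s.v1.cni.cncf.io/networks: macvlan-conf@myiface1
```
The `@<ifname>` syntax is described in the [usage guide](doc/how-to-use.md).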
## How it works
Please check the [CNI](https://github.com/containernetworking/cni) documentation for more information on container networking.
Multus CNI is a container network interface (CNI) plugin for Kubernetes that enables attaching multiple network interfaces to pods. Typically, in Kubernetes each pod only has one network interface (apart from a loopback) -- with Multus you can create a multi-homed pod that has multiple interfaces. This is accomplished by Multus acting as a "meta-plugin", a CNI plugin that can call multiple other CNI plugins.
# Quickstart Guide
Multus CNI follows the [Kubernetes Network Custom Resource Definition De-facto Standard](https://docs.google.com/document/d/1Ny03h6IDVy_e_vmElOqR7UdTPAG_RNydhVE1Kx54kFQ/edit) to provide a standardized method by which to specify the configurations for additional network interfaces. This standard is put forward by the Kubernetes [Network Plumbing Working Group](https://docs.google.com/document/d/1oE93V3SgOGWJ4O1zeD1UmpeToa0ZiiO6LqRAmZBPFWM/edit).
Multus may be deployed as a Daemonset, and is provided in this guide along with Flannel. Flannel is deployed as a pod-to-pod network that is used as our "default network". Each network attachment is made in addition to this default network.
Multus is one of the projects in the [Baremetal Container Experience kit](https://networkbuilders.intel.com/network-technologies/container-experience-kits)
### Multi-Homed pod
Here's an illustration of the network interfaces attached to a pod, as provisioned by Multus CNI. The diagram shows the pod with three interfaces: `eth0`, `net0` and `net1`. `eth0` connects to the Kubernetes cluster network for communication with Kubernetes services (e.g. the Kubernetes api-server, kubelet, and so on). `net0` and `net1` are additional network attachments and connect to other networks by using [other CNI plugins](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/) (e.g. vlan/vxlan/ptp).
![multus-pod-image](doc/images/multus-pod-image.svg)
## Quickstart Guide
Multus may be deployed as a Daemonset, and is provided in this guide along with Flannel. Flannel is deployed as a pod-to-pod network that is used as our "default network" (a network interface that every pod will be created with). Each network attachment is made in addition to this default network.
Firstly, clone this GitHub repository. We'll apply files from this repo with `kubectl`.
@@ -50,666 +32,21 @@ We apply these files as such:
$ cat ./images/{multus-daemonset.yml,flannel-daemonset.yml} | kubectl apply -f -
```
Create a CNI configuration loaded as a CRD object; in this case, a macvlan CNI configuration is defined. You may replace the `config` field with any valid CNI configuration where the CNI binary is available on the nodes.
This will configure your systems to be ready to use Multus CNI, but, to get started with adding additional interfaces to your pods, refer to our [usage guide](doc/how-to-use.md).
```
cat <<EOF | kubectl create -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: macvlan-conf
spec:
config: '{
"cniVersion": "0.3.0",
"type": "macvlan",
"master": "eth0",
"mode": "bridge",
"ipam": {
"type": "host-local",
"subnet": "192.168.1.0/24",
"rangeStart": "192.168.1.200",
"rangeEnd": "192.168.1.216",
"routes": [
{ "dst": "0.0.0.0/0" }
],
"gateway": "192.168.1.1"
}
}'
EOF
```
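You can verify that the definition was created; `net-attach-def` is the short name for the resource, so either of the following forms works:
```
$ kubectl get network-attachment-definitions macvlan-conf
$ kubectl get net-attach-def macvlan-conf
```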
## Additional installation Options
You may then create a pod which attaches this additional interface, where the annotation correlates to the `name` in the `NetworkAttachmentDefinition` above.
- Install via daemonset using the quick-start guide, above.
- Download binaries from [release page](https://github.com/intel/multus-cni/releases)
- Use the Docker image from [Docker Hub](https://hub.docker.com/r/nfvpe/multus/tags/)
- Or, roll your own and build from source
- See [Development](doc/development.md)
```
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
name: samplepod
annotations:
k8s.v1.cni.cncf.io/networks: macvlan-conf
spec:
containers:
- name: samplepod
command: ["/bin/bash", "-c", "sleep 2000000000000"]
image: dougbtv/centos-network
EOF
```
## Documentation
- [How to use](doc/how-to-use.md)
- [Configuration](doc/configuration.md)
- [Development](doc/development.md)
You may now inspect the pod and see that there is an additional interface configured, like so:
```
$ kubectl exec -it samplepod -- ip a
```
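In addition to the loopback, you should see the default network interface `eth0` plus the macvlan attachment which, by default, is named `net0`.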
# Kubernetes Network Custom Resource Definition De-facto Standard - Reference implementation
* This project is a reference implementation for the Kubernetes Network Custom Resource Definition De-facto Standard. For more information, refer to the [Network Plumbing Working Group Agenda](https://docs.google.com/document/d/1oE93V3SgOGWJ4O1zeD1UmpeToa0ZiiO6LqRAmZBPFWM/edit)
* Kubernetes Network Custom Resource Definition De-facto Standard [documentation link](https://docs.google.com/document/d/1Ny03h6IDVy_e_vmElOqR7UdTPAG_RNydhVE1Kx54kFQ/edit)
* The reference implementation supports the following modes:
    * CNI config JSON in the network object
    * Not using CNI config ("thick" plugin use case)
    * CNI configuration stored in an on-disk file
> Refer to section 3.2, Network Object Definition, of the Kubernetes Network Custom Resource Definition De-facto Standard for more details
* Refer to the reference implementation presentation and demo for details: [link](https://docs.google.com/presentation/d/1dbCin6MnhK-BjjcVun5YiPTL99VA2uSiyWAtWAPNlIc/edit?usp=sharing)
* Release versions from v3.0 are not compatible with the v1.0 and v2.0 Network Object CRD specifications
## Multi-Homed pod
<p align="center">
<img src="doc/images/multus_cni_pod.png" width="1008" />
</p>
## Building from source
**This plugin requires Go 1.8 (or later) to build.**
```
./build
```
## Work flow
<p align="center">
<img src="doc/images/workflow.png" width="1008" />
</p>
## Network configuration reference
The following is an example of a Multus config file in `/etc/cni/net.d/`.
```
{
"name": "node-cni-network",
"type": "multus",
"kubeconfig": "/etc/kubernetes/node-kubeconfig.yaml",
"confDir": "/etc/cni/multus/net.d",
"cniDir": "/var/lib/cni/multus",
"binDir": "/opt/cni/bin",
"logFile": "/var/log/multus.log",
"logLevel": "debug",
"Note1": "NOTE: you can set clusterNetwork+defaultNetworks OR delegates!! (this key is only a comment, for manual configuration)",
"clusterNetwork": "defaultCRD",
"defaultNetworks": ["sidecarCRD", "flannel"],
"delegates": [{
"type": "weave-net",
"hairpinMode": true
}, {
"type": "macvlan",
... (snip)
}]
}
```
- `name` (string, required): the name of the network
- `type` (string, required): "multus"
- `confDir` (string, optional): directory for CNI config file that multus reads. default `/etc/cni/multus/net.d`
- `cniDir` (string, optional): Multus CNI data directory, default `/var/lib/cni/multus`
- `binDir` (string, optional): directory for CNI plugins which multus calls. default `/opt/cni/bin`
- `kubeconfig` (string, optional): kubeconfig file for the out of cluster communication with kube-apiserver. See the example [kubeconfig](https://github.com/intel/multus-cni/blob/master/doc/node-kubeconfig.yaml). If you would like to use CRD (i.e. network attachment definition), this is required
- `logFile` (string, optional): file path for log file. multus puts log in given file
- `logLevel` (string, optional): logging level ("debug", "error" or "panic")
- `capabilities` ({}list, optional): [capabilities](https://github.com/containernetworking/cni/blob/master/CONVENTIONS.md#dynamic-plugin-specific-fields-capabilities--runtime-configuration) supported by at least one of the delegates. (NOTE: Multus only supports portMappings capability for now). See the [example](https://github.com/intel/multus-cni/blob/master/examples/multus-ptp-portmap.conf).
Users should choose one of the following parameter combinations (`clusterNetwork`+`defaultNetworks` or `delegates`):
- `clusterNetwork` (string, required): default CNI network for pods, used in the Kubernetes cluster (Pod IP and so on): name of a network-attachment-definition, a CNI JSON config file name (without extension, .conf/.conflist) or a directory for CNI config files
- `defaultNetworks` ([]string, required): default CNI network attachments: name of a network-attachment-definition, a CNI JSON config file name (without extension, .conf/.conflist) or a directory for CNI config files
- `delegates` ([]map, required): list of delegate details in Multus
### Network selection flow of clusterNetwork/defaultNetworks
Multus will find the network for clusterNetwork/defaultNetworks in the following sequence:
1. CRD object for the given network name
1. CNI JSON config file in `confDir`. The given name should be without extension, like .conf/.conflist (e.g. "test" for "test.conf")
1. Directory for CNI JSON config files. Multus will use the alphabetically first file for the network.
1. If no network is found, Multus raises an error.
## Usage with Kubernetes CRD based network objects
Kubelet is responsible for establishing network interfaces for pods; it does this by invoking its configured CNI plugin. When Multus is invoked, it retrieves network references from the pod's annotations. Multus then uses these network references to get network configurations. Network configurations are defined as Kubernetes Custom Resource objects (CRDs). These configurations describe which CNI plugins to invoke and what their configurations are. The order of plugin invocation is important as it identifies the primary plugin. This order is taken from the network object references given in the pod spec.
<p align="center">
<img src="doc/images/multus_crd_usage_diagram.JPG" width="1008" />
</p>
### Creating "Network" resources in Kubernetes
If you haven't installed using the daemonset technique (which includes the CRD), you may wish to create the `network-attachment-definition` CRD manually. You can verify whether it's loaded with `kubectl get crd`, looking for the presence of `network-attachment-definitions`.
1. Create the Custom Resource Definition (CRD) for the network object using the YAML from the examples directory:
```
$ kubectl create -f ./examples/crd.yml
customresourcedefinition.apiextensions.k8s.io/network-attachment-definitions.k8s.cni.cncf.io created
```
2. Run the `kubectl get` command to check the Network CRD creation:
```
$ kubectl get crd
NAME                                             KIND
network-attachment-definitions.k8s.cni.cncf.io   CustomResourceDefinition.v1beta1.apiextensions.k8s.io
```
### Creating CRD network resources in Kubernetes
1. After creating the network CRD, you can create network resources in Kubernetes. These network resources may contain additional underlying CNI plugin parameters given in JSON format. In the example shown below, the `config` field contains the parameters that will be passed to the Flannel plugin.
2. Save the following YAML to `flannel-network.yaml`:
```
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: flannel-networkobj
spec:
config: '{
"cniVersion": "0.3.0",
"type": "flannel",
"delegate": {
"isDefaultGateway": true
}
}'
```
3. Create the custom resource definition
```
$ kubectl create -f ./flannel-network.yaml
network "flannel-networkobj" created
$ kubectl get net-attach-def
NAME                 AGE
flannel-networkobj   26s
```
4. Get the custom network object details:
```
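# Assumed command (not shown in the original); the YAML below is its output:
$ kubectl get net-attach-def flannel-networkobj -o yaml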
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
clusterName: ""
creationTimestamp: 2018-05-17T09:13:20Z
deletionGracePeriodSeconds: null
deletionTimestamp: null
initializers: null
name: flannel-networkobj
namespace: default
resourceVersion: "21176114"
selfLink: /apis/k8s.cni.cncf.io/v1/namespaces/default/networks/flannel-networkobj
uid: 8ac8f873-59b2-11e8-8308-a4bf01024e6f
spec:
config: '{ "cniVersion": "0.3.0", "type": "flannel", "delegate": { "isDefaultGateway":
true } }'
```
5. Save the following YAML to `sriov-network.yaml` to create an SR-IOV network object. (Refer to [Intel - SR-IOV CNI](https://github.com/Intel-Corp/sriov-cni) or contact @kural in [Intel-Corp Slack](https://intel-corp.herokuapp.com/) for running DPDK-based workloads in Kubernetes.)
```
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: sriov-conf
spec:
config: '{
"type": "sriov",
"if0": "enp12s0f1",
"ipam": {
"type": "host-local",
"subnet": "10.56.217.0/24",
"rangeStart": "10.56.217.171",
"rangeEnd": "10.56.217.181",
"routes": [
{ "dst": "0.0.0.0/0" }
],
"gateway": "10.56.217.1"
}
}'
```
6. Likewise, save the following YAML to `sriov-vlanid-l2enable-network.yaml` to create another SR-IOV-based network object:
```
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: sriov-vlanid-l2enable-conf
spec:
config: '{
"type": "sriov",
"if0": "enp2s0",
"vlan": 210,
"l2enable": true
}'
```
7. Follow step 3 above to create the "sriov-vlanid-l2enable-conf" and "sriov-conf" network objects
8. View network objects using kubectl
```
# kubectl get net-attach-def
NAME                         AGE
flannel-networkobj           29m
sriov-conf                   6m
sriov-vlanid-l2enable-conf   2m
```
### Configuring Multus to use the kubeconfig
1. Create a Multus CNI configuration file on each Kubernetes node. This file should be created in `/etc/cni/net.d/multus-cni.conf` with the content shown below. Use only an absolute path to point to the kubeconfig file (as it may change depending upon your cluster environment). We assume all CNI plugin binaries are in the default location (`/opt/cni/bin`).
```
{
"name": "node-cni-network",
"type": "multus",
"kubeconfig": "/etc/kubernetes/node-kubeconfig.yaml"
}
```
### Configuring Multus to use kubeconfig and a default network
1. Many users want the Kubernetes default networking feature along with network objects. Refer to issues [#14](https://github.com/intel/multus-cni/issues/14) & [#17](https://github.com/intel/multus-cni/issues/17) for more information. In the following Multus configuration, Weave acts as the default network in the absence of a networks field in the pod metadata annotations.
```
{
"name": "node-cni-network",
"type": "multus",
"kubeconfig": "/etc/kubernetes/node-kubeconfig.yaml",
"delegates": [{
"type": "weave-net",
"hairpinMode": true
}]
}
```
Configurations referenced in annotations are created in addition to the default network.
### Configuring Pod to use the CRD network objects
1. Save the following YAML to `pod-multi-network.yaml`. In this case, the `flannel-conf` network object acts as the primary network.
```
# cat pod-multi-network.yaml
apiVersion: v1
kind: Pod
metadata:
name: multus-multi-net-poc
annotations:
k8s.v1.cni.cncf.io/networks: '[
{ "name": "flannel-conf" },
{ "name": "sriov-conf" },
{ "name": "sriov-vlanid-l2enable-conf",
"interface": "north" }
]'
spec: # specification of the pod's contents
containers:
- name: multus-multi-net-poc
image: "busybox"
command: ["top"]
stdin: true
tty: true
```
2. Create the multi-network pod from the master node:
```
# kubectl create -f ./pod-multi-network.yaml
pod "multus-multi-net-poc" created
```
3. Get the details of the running pod from the master
```
# kubectl get pods
NAME                   READY     STATUS    RESTARTS   AGE
multus-multi-net-poc   1/1       Running   0          30s
```
#### Pod Annotation Parameters
The JSON-formatted network annotation in a pod can have the following parameters:
- namespace: the Kubernetes namespace that the target network attachment definition is defined in
- mac: MAC address (e.g. "c2:11:22:33:44:66") for the target network interface
- interface: interface name for the target network interface
Note: If you add `mac`, please add the 'tuning' plugin to the target network attachment definition as part of a CNI plugin chain, as follows (a pod-side example follows the definition below):
```
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: macvlan-tuning
spec:
config: '{
"cniVersion": "0.3.0",
"name": "chains",
"plugins": [ {
"type": "macvlan",
"master": "eth0",
"mode": "bridge",
"ipam": {
"type": "host-local",
"subnet": "192.168.1.0/24",
"rangeStart": "192.168.1.200",
"rangeEnd": "192.168.1.216",
"routes": [
{ "dst": "0.0.0.0/0" }
],
"gateway": "192.168.1.1"
}
},
{
"type":"tuning"
}]
}'
```
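With the chained tuning plugin in place, a pod can then request a fixed MAC address on that attachment. The following is a minimal sketch (the pod name and MAC value are illustrative, and it assumes the `macvlan-tuning` definition above has been created):
```
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: samplepod-mac
  annotations:
    k8s.v1.cni.cncf.io/networks: '[
      { "name": "macvlan-tuning", "mac": "c2:11:22:33:44:66" }
    ]'
spec:
  containers:
  - name: samplepod-mac
    image: "busybox"
    command: ["top"]
    stdin: true
    tty: true
EOF
```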
### Verifying Pod network interfaces
1. Run `ifconfig` command in Pod:
```
# kubectl exec -it multus-multi-net-poc -- ifconfig
eth0 Link encap:Ethernet HWaddr C6:43:7C:09:B4:9C
inet addr:10.128.0.4 Bcast:0.0.0.0 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:8 errors:0 dropped:0 overruns:0 frame:0
TX packets:1 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:648 (648.0 B) TX bytes:42 (42.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
net0 Link encap:Ethernet HWaddr 06:21:91:2D:74:B9
inet addr:192.168.42.3 Bcast:0.0.0.0 Mask:255.255.255.0
inet6 addr: fe80::421:91ff:fe2d:74b9/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:648 (648.0 B)
net1 Link encap:Ethernet HWaddr D2:94:98:82:00:00
inet addr:10.56.217.171 Bcast:0.0.0.0 Mask:255.255.255.0
inet6 addr: fe80::d094:98ff:fe82:0/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:2 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:120 (120.0 B) TX bytes:648 (648.0 B)
north Link encap:Ethernet HWaddr BE:F2:48:42:83:12
inet6 addr: fe80::bcf2:48ff:fe42:8312/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:1420 errors:0 dropped:0 overruns:0 frame:0
TX packets:1276 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:95956 (93.7 KiB) TX bytes:82200 (80.2 KiB)
```
| Interface name | Description |
| --- | --- |
| lo | loopback |
| eth0 | weave network interface |
| net0 | Flannel network tap interface |
| net1 | VF0 of NIC 1 assigned to the container by [Intel - SR-IOV CNI](https://github.com/intel/sriov-cni) plugin |
| north | VF0 of NIC 2 assigned with VLAN ID 210 to the container by SR-IOV CNI plugin |
2. Check the vlan ID of the NIC 2 VFs
```
# ip link show enp2s0
20: enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether 24:8a:07:e8:7d:40 brd ff:ff:ff:ff:ff:ff
vf 0 MAC 00:00:00:00:00:00, vlan 210, spoof checking off, link-state auto
vf 1 MAC 00:00:00:00:00:00, vlan 4095, spoof checking off, link-state auto
vf 2 MAC 00:00:00:00:00:00, vlan 4095, spoof checking off, link-state auto
vf 3 MAC 00:00:00:00:00:00, vlan 4095, spoof checking off, link-state auto
```
## Using with Multus conf file
Given the following network configuration:
```
# tee /etc/cni/net.d/multus-cni.conf <<-'EOF'
{
"name": "multus-demo-network",
"type": "multus",
"delegates": [
{
"type": "sriov",
"Note": "part of sriov plugin conf",
"if0": "enp12s0f0",
"ipam": {
"type": "host-local",
"subnet": "10.56.217.0/24",
"rangeStart": "10.56.217.131",
"rangeEnd": "10.56.217.190",
"routes": [
{ "dst": "0.0.0.0/0" }
],
"gateway": "10.56.217.1"
}
},
{
"type": "ptp",
"ipam": {
"type": "host-local",
"subnet": "10.168.1.0/24",
"rangeStart": "10.168.1.11",
"rangeEnd": "10.168.1.20",
"routes": [
{ "dst": "0.0.0.0/0" }
],
"gateway": "10.168.1.1"
}
},
{
"type": "flannel",
"delegate": {
"isDefaultGateway": true
}
}
]
}
EOF
```
## Logging Options
You may wish to enable some enhanced logging for Multus, especially during the process where you're configuring Multus and need to understand what is or isn't working with your particular configuration.
Multus will always log via `STDERR`, which is the standard method by which CNI plugins communicate errors, and these errors are logged by the Kubelet. This method is always enabled.
### Writing to a Log File
Optionally, you may have Multus log to a file on the filesystem. This file will be written locally on each node where Multus is executed. You may configure this via the `logFile` option in the CNI configuration. By default this additional logging to a flat file is disabled.
For example in your CNI configuration, you may set:
```
"logFile": "/var/log/multus.log",
```
### Logging Level
The default logging level is set as `panic` -- this will log only the most critical errors, and is the least verbose logging level.
The available logging level values, in decreasing order of verbosity are:
* `debug`
* `error`
* `panic`
You may configure the logging level by using the `logLevel` option in your CNI configuration. For example:
```
"LogLevel": "debug",
```
## CNI running with Network device plugin
Allocation of network devices (such as SR-IOV VFs) is done by device plugins (e.g. the SRIOV Network device plugin). Multus was developed to work alongside device plugins by passing the allocated device information down to the CNI plugins.
* [Device plugin & CNI, NUMA Manager alignment - technical architecture document](https://docs.google.com/document/d/1Ewe9Of84GkP0b2Q2PC0y9RVZNkN2WeVEagX9m99Nrzc/edit)
* Reference implementation : [SRIOV Network device plugin](https://github.com/intel/sriov-network-device-plugin)
* Example: [How to make Multus work with device plugin?](https://github.com/intel/multus-cni/tree/master/examples#passing-down-device-information)
## Default Network Readiness Checks
You may wish for your "default network" (that is, the CNI plugin & its configuration you specify as your default delegate) to become ready before you attach networks with Multus. This is disabled by default and not used unless you add the readiness check option(s) to your CNI configuration file.
For example, if you use Flannel as a default network, the recommended method for Flannel to be installed is via a daemonset that also drops a configuration file in `/etc/cni/net.d/`. This may apply to other plugins that place that configuration file upon their readiness, hence, Multus uses their configuration filename as a semaphore and optionally waits to attach networks to pods until that file exists.
In this manner, you may prevent pods from crash looping, and instead wait for that default network to be ready.
Only one option is necessary to configure this functionality:
* `readinessindicatorfile`: The path to a file whose existence denotes that the default network is ready.
*NOTE*: If `readinessindicatorfile` is unset, or is an empty string, this functionality will be disabled, and is disabled by default.
## Testing Multus CNI
### Multiple flannel networks
Github user [YYGCui](https://github.com/YYGCui) has used multiple flannel networks with the Multus CNI plugin. Please refer to this [closed issue](https://github.com/intel/multus-cni/issues/7) for multiple overlay network support with Multus CNI.
Make sure that the multus, [sriov](https://github.com/Intel-Corp/sriov-cni), [flannel](https://github.com/containernetworking/cni/blob/master/Documentation/flannel.md), and [ptp](https://github.com/containernetworking/cni/blob/master/Documentation/ptp.md) binaries are in the `/opt/cni/bin` directory and follow the steps in the [CNI documentation](https://github.com/containernetworking/cni/#running-a-docker-container-with-network-namespace-set-up-by-cni-plugins).
#### Configure Kubernetes with CNI
Kubelet must be configured to run with the CNI network plugin. Edit the `/etc/kubernetes/kubelet` file and add the `--network-plugin=cni` flag to `KUBELET_OPTS` as shown below:
```
KUBELET_OPTS="...
--network-plugin-dir=/etc/cni/net.d
--network-plugin=cni
"
```
Refer to the Kubernetes User Guide and network plugin documentation for more information.
- [Single Node](https://kubernetes.io/docs/getting-started-guides/fedora/fedora_manual_config/)
- [Multi Node](https://kubernetes.io/docs/getting-started-guides/fedora/flannel_multi_node_cluster/)
- [Network plugin](https://kubernetes.io/docs/admin/network-plugins/)
#### Launching workloads in Kubernetes
With Multus CNI configured as described in the sections above, each workload launched via a Kubernetes pod will have multiple network interfaces. Launch the workload using a YAML file on the Kubernetes master; with the above Multus CNI configuration, each pod should have multiple interfaces.
Note: To verify whether the Multus CNI plugin is working correctly, create a pod containing one `busybox` container and execute the `ip link` command to check that interface management follows the configuration.
1. Create a `multus-test.yaml` file containing the configuration below. The created pod will consist of one `busybox` container running the `top` command.
```
apiVersion: v1
kind: Pod
metadata:
name: multus-test
spec: # specification of the pod's contents
restartPolicy: Never
containers:
- name: test1
image: "busybox"
command: ["top"]
stdin: true
tty: true
```
2. Create the pod using the command:
```
# kubectl create -f multus-test.yaml
pod "multus-test" created
```
3. Run the `ip link` command inside the container:
```
# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
3: eth0@if41: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 26:52:6b:d8:44:2d brd ff:ff:ff:ff:ff:ff
20: net0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq qlen 1000
link/ether f6:fb:21:4f:1d:63 brd ff:ff:ff:ff:ff:ff
21: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq qlen 1000
link/ether 76:13:b1:60:00:00 brd ff:ff:ff:ff:ff:ff
```
| Interface name | Description |
| --- | --- |
| lo | loopback |
| eth0@if41 | Flannel network tap interface |
| net0 | VF assigned to the container by [SR-IOV CNI](https://github.com/Intel-Corp/sriov-cni) plugin |
| net1 | ptp localhost interface |
## Multus additional plugins
- [DPDK-SRIOV CNI](https://github.com/Intel-Corp/sriov-cni)
- [Vhostuser CNI](https://github.com/intel/vhost-user-net-plugin) - a Dataplane network plugin - Supports OVS-DPDK & VPP
- [Bond CNI](https://github.com/Intel-Corp/bond-cni) - For fail-over and high availability of networking
## NFV based networking in Kubernetes
- KubeCon workshop on ["Enabling NFV features in Kubernetes"](https://kccncna17.sched.com/event/Cvnw/enabling-nfv-features-in-kubernetes-hosted-by-kuralamudhan-ramakrishnan-ivan-coughlan-intel) presentation [slide deck](https://www.slideshare.net/KuralamudhanRamakris/enabling-nfv-features-in-kubernetes-83923352)
- Feature brief
- [Multiple Network Interface Support in Kubernetes](https://builders.intel.com/docs/networkbuilders/multiple-network-interfaces-support-in-kubernetes-feature-brief.pdf)
- [Enhanced Platform Awareness in Kubernetes](https://builders.intel.com/docs/networkbuilders/enhanced-platform-awareness-feature-brief.pdf)
- Application note
- [Multiple Network Interfaces in Kubernetes and Container Bare Metal](https://builders.intel.com/docs/networkbuilders/multiple-network-interfaces-in-kubernetes-application-note.pdf)
- [Enhanced Platform Awareness Features in Kubernetes](https://builders.intel.com/docs/networkbuilders/enhanced-platform-awareness-in-kubernetes-application-note.pdf)
- White paper
- [Enabling New Features with Kubernetes for NFV](https://builders.intel.com/docs/networkbuilders/enabling_new_features_in_kubernetes_for_NFV.pdf)
- Multus's related project GitHub pages
- [Multus](https://github.com/Intel-Corp/multus-cni)
- [SRIOV - DPDK CNI](https://github.com/Intel-Corp/sriov-cni)
- [Vhostuser - VPP & OVS - DPDK CNI](https://github.com/intel/vhost-user-net-plugin)
- [Bond CNI](https://github.com/Intel-Corp/bond-cni)
- [Node Feature Discovery](https://github.com/kubernetes-incubator/node-feature-discovery)
- [CPU Manager for Kubernetes](https://github.com/Intel-Corp/CPU-Manager-for-Kubernetes)
## Need help
- Read [Containers Experience Kits](https://networkbuilders.intel.com/network-technologies/container-experience-kits)
- Try our container experience kit demo - KubeCon workshop on [Enabling NFV Features in Kubernetes](https://github.com/intel/container-experience-kits-demo-area/)
- Join us on the [#intel-sddsg-slack](https://intel-corp.herokuapp.com/) Slack channel and ask questions in [#general-discussion](https://intel-corp-team.slack.com/messages/C4C5RSEER)
- You can also [email](mailto:kuralamudhan.ramakrishnan@intel.com) us
- Feel free to [submit](https://github.com/Intel-Corp/multus-cni/issues/new) an issue
Please fill in your questions/feedback via this [Google Form](https://goo.gl/forms/upBWyGs8Wmq69IEi2)!
## Contacts
For any questions about Multus CNI, please reach out via a GitHub issue or feel free to contact the developer @kural in our [Intel-Corp Slack](https://intel-corp.herokuapp.com/)
## Contact Us
For any questions about Multus CNI, feel free to ask a question in #general in the [Intel-Corp Slack](https://intel-corp.herokuapp.com/), or open up a GitHub issue.

doc/configuration.md (new file)

@@ -0,0 +1,109 @@
## Multus-cni Configuration Reference
The following is an example of a Multus config file in `/etc/cni/net.d/`.
(`"Note1"` and `"Note2"` are just comments; you can remove them from your configuration.)
```
{
"name": "node-cni-network",
"type": "multus",
"kubeconfig": "/etc/kubernetes/node-kubeconfig.yaml",
"confDir": "/etc/cni/multus/net.d",
"cniDir": "/var/lib/cni/multus",
"binDir": "/opt/cni/bin",
"logFile": "/var/log/multus.log",
"logLevel": "debug",
"capabilities": {
"portMappings": true
},
"readinessindicatorfile": "",
"Note1":"NOTE: you can set clusterNetwork+defaultNetworks OR delegates!!",
"clusterNetwork": "defaultCRD",
"defaultNetworks": ["sidecarCRD", "flannel"],
"Note2":"NOTE: If you use clusterNetwork/defaultNetworks, delegates is ignored",
"delegates": [{
"type": "weave-net",
"hairpinMode": true
}, {
"type": "macvlan",
... (snip)
}]
}
```
* `name` (string, required): the name of the network
* `type` (string, required): "multus"
* `confDir` (string, optional): directory for CNI config file that multus reads. default `/etc/cni/multus/net.d`
* `cniDir` (string, optional): Multus CNI data directory, default `/var/lib/cni/multus`
* `binDir` (string, optional): directory for CNI plugins which multus calls. default `/opt/cni/bin`
* `kubeconfig` (string, optional): kubeconfig file for the out of cluster communication with kube-apiserver. See the example [kubeconfig](https://github.com/intel/multus-cni/blob/master/doc/node-kubeconfig.yaml). If you would like to use CRD (i.e. network attachment definition), this is required
* `logFile` (string, optional): file path for log file. multus puts log in given file
* `logLevel` (string, optional): logging level ("debug", "error" or "panic")
* `capabilities` ({}list, optional): [capabilities](https://github.com/containernetworking/cni/blob/master/CONVENTIONS.md#dynamic-plugin-specific-fields-capabilities--runtime-configuration) supported by at least one of the delegates. (NOTE: Multus only supports portMappings capability for now). See the [example](https://github.com/intel/multus-cni/blob/master/examples/multus-ptp-portmap.conf).
* `readinessindicatorfile`: The path to a file whose existence denotes that the default network is ready
Users should choose one of the following parameter combinations (`clusterNetwork`+`defaultNetworks` or `delegates`):
* `clusterNetwork` (string, required): default CNI network for pods, used in the Kubernetes cluster (Pod IP and so on): name of a network-attachment-definition, a CNI JSON config file name (without extension, .conf/.conflist) or a directory for CNI config files
* `defaultNetworks` ([]string, required): default CNI network attachments: name of a network-attachment-definition, a CNI JSON config file name (without extension, .conf/.conflist) or a directory for CNI config files
* `delegates` ([]map, required): list of delegate details in Multus
### Network selection flow of clusterNetwork/defaultNetworks
Multus will find the network for clusterNetwork/defaultNetworks in the following sequence:
1. CRD object for the given network name
1. CNI JSON config file in `confDir`. The given name should be without extension, like .conf/.conflist (e.g. "test" for "test.conf")
1. Directory for CNI JSON config files. Multus will use the alphabetically first file for the network
1. If Multus fails to find the network, it raises an error
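As an illustrative sketch of this sequence, suppose `clusterNetwork` is set to `"test"` (the name is hypothetical):
```
# 1. Multus first looks for a net-attach-def CRD object named "test":
$ kubectl get net-attach-def test
# 2. Failing that, it looks for a CNI config file named "test" in confDir:
$ ls /etc/cni/multus/net.d/test.conf
# 3. If "test" were instead a directory path, the alphabetically first
#    config file inside that directory would be used.
```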
## Miscellaneous config
### Default Network Readiness Indicator
You may wish for your "default network" (that is, the CNI plugin & its configuration you specify as your default delegate) to become ready before you attach networks with Multus. This is disabled by default and not used unless you add the readiness check option(s) to your CNI configuration file.
For example, if you use Flannel as a default network, the recommended method for Flannel to be installed is via a daemonset that also drops a configuration file in `/etc/cni/net.d/`. This may apply to other plugins that place that configuration file upon their readiness, hence, Multus uses their configuration filename as a semaphore and optionally waits to attach networks to pods until that file exists.
In this manner, you may prevent pods from crash looping, and instead wait for that default network to be ready.
Only one option is necessary to configure this functionality:
* `readinessindicatorfile`: The path to a file whose existence denotes that the default network is ready.
*NOTE*: If `readinessindicatorfile` is unset, or is an empty string, this functionality will be disabled, and is disabled by default.
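For example, a minimal sketch that waits on Flannel's subnet file (the path follows the Flannel convention used elsewhere in these docs):
```
{
    "name": "node-cni-network",
    "type": "multus",
    "readinessindicatorfile": "/var/run/flannel/subnet.env",
    "delegates": [{
        "type": "flannel",
        "delegate": {
            "isDefaultGateway": true
        }
    }]
}
```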
### Logging
You may wish to enable some enhanced logging for Multus, especially during the process where you're configuring Multus and need to understand what is or isn't working with your particular configuration.
Multus will always log via `STDERR`, which is the standard method by which CNI plugins communicate errors, and these errors are logged by the Kubelet. This method is always enabled.
#### Writing to a Log File
Optionally, you may have Multus log to a file on the filesystem. This file will be written locally on each node where Multus is executed. You may configure this via the `logFile` option in the CNI configuration. By default this additional logging to a flat file is disabled.
For example in your CNI configuration, you may set:
```
"LogFile": "/var/log/multus.log",
```
#### Logging Level
The default logging level is set as `panic` -- this will log only the most critical errors, and is the least verbose logging level.
The available logging level values, in decreasing order of verbosity are:
* `debug`
* `error`
* `panic`
You may configure the logging level by using the `logLevel` option in your CNI configuration. For example:
```
"LogLevel": "debug",
```

doc/development.md (new file)

@@ -0,0 +1,30 @@
## Development Information
## How to build multus-cni?
```
git clone https://github.com/intel/multus-cni.git
cd multus-cni
./build
```
## How to run CI tests?
Multus has Go unit tests (based on the ginkgo framework). The following command runs the CI tests manually in your environment:
```
sudo ./test.sh
```
## Logging Best Practices
The following are Multus logging best practices:
* Add `logging.Debugf()` at the beginning of each function
* In case of error handling, use `logging.Errorf()` with the given error info
* `logging.Panicf()` should only be used for very critical errors (it should NOT normally be used)
## CI Introduction
TBD

doc/how-to-use.md (new file)

@@ -0,0 +1,400 @@
## How to use multus-cni?
### Prerequisite
Kubelet must be configured to run with the CNI network plugin. Edit the `/etc/kubernetes/kubelet` file and add the `--network-plugin=cni` flag to `KUBELET_OPTS` as shown below:
```
KUBELET_OPTS="...
--network-plugin-dir=/etc/cni/net.d
--network-plugin=cni
"
```
### Install multus
You can either copy the Multus binary into the CNI binary directory or use the daemonset YAML in the Multus repository.
- Copy the Multus binary
Copy the Multus binary into the CNI binary directory, usually `/opt/cni/bin`.
```
# Execute following commands at all Kubernetes nodes (i.e. master and minions)
$ cp multus /opt/cni/bin
```
- Use the daemonset
As in the [Quickstart](https://github.com/intel/multus-cni/README.md#quickstart-guide), you can apply it as follows:
```
# Execute following command at Kubernetes master
$ cat ./images/{multus-daemonset.yml,flannel-daemonset.yml} | kubectl apply -f -
```
### Set up conf file in /etc/cni/net.d/
Put the CNI config file in `/etc/cni/net.d`. The Kubernetes CNI runtime uses the alphabetically first file in the directory. (`"Note1"` and `"Note2"` are just comments; you can remove them from your configuration.)
```
# Execute following commands at all Kubernetes nodes (i.e. master and minions)
$ mkdir -p /etc/cni/net.d
$ cat >/etc/cni/net.d/30-multus.conf <<EOF
{
"name": "multus-cni-network",
"type": "multus",
"readinessindicatorfile": "/var/run/flannel/subnet.env",
"delegates": [
{
"Note1": "This is example, wrote your CNI config in delegates",
"Note2": "If you use flannel, you also need to run flannel daemonset before!",
"type": "flannel",
"name": "flannel.1",
"delegate": {
"isDefaultGateway": true
}
}
],
"kubeconfig": "/etc/cni/net.d/multus.d/multus.kubeconfig"
}
EOF
```
For details, please take a look at the [Configuration Reference](configuration.md).
**You can use "clusterNetwork"/"defaultNetworks" instead of "delegates"; see the [Configuration Reference](configuration.md) for details.**
As in the config above, you need to set `"kubeconfig"` in the config file to use NetworkAttachmentDefinition (CRD) objects.
##### Which network will be used for "Pod IP"?
In the case of "delegates", the first delegate network will be used for the "Pod IP". Otherwise, "clusterNetwork" will be used for the "Pod IP".
#### Create ServiceAccount, ClusterRole and its binding
Create the resources that allow Multus to access CRD objects with the following command:
```
# Execute following commands at Kubernetes master
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: ServiceAccount
metadata:
name: multus
namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: multus
rules:
- apiGroups: ["k8s.cni.cncf.io"]
resources:
- '*'
verbs:
- '*'
- apiGroups:
- ""
resources:
- pods
- pods/status
verbs:
- get
- update
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: multus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: multus
subjects:
- kind: ServiceAccount
name: multus
namespace: kube-system
EOF
```
#### Set up kubeconfig file
Create the kubeconfig on the master node with the following commands:
```
# Execute following command at Kubernetes master
$ mkdir -p /etc/cni/net.d/multus.d
$ SERVICEACCOUNT_CA=$(kubectl get secrets -n=kube-system -o json | jq -r '.items[]|select(.metadata.annotations."kubernetes.io/service-account.name"=="multus")| .data."ca.crt"')
$ SERVICEACCOUNT_TOKEN=$(kubectl get secrets -n=kube-system -o json | jq -r '.items[]|select(.metadata.annotations."kubernetes.io/service-account.name"=="multus")| .data.token' | base64 -d )
$ KUBERNETES_SERVICE_PROTOCOL=$(kubectl get all -o json | jq -r .items[0].spec.ports[0].name)
$ KUBERNETES_SERVICE_HOST=$(kubectl get all -o json | jq -r .items[0].spec.clusterIP)
$ KUBERNETES_SERVICE_PORT=$(kubectl get all -o json | jq -r .items[0].spec.ports[0].port)
$ cat > /etc/cni/net.d/multus.d/multus.kubeconfig <<EOF
# Kubeconfig file for Multus CNI plugin.
apiVersion: v1
kind: Config
clusters:
- name: local
cluster:
server: ${KUBERNETES_SERVICE_PROTOCOL:-https}://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}
certificate-authority-data: ${SERVICEACCOUNT_CA}
users:
- name: multus
user:
token: "${SERVICEACCOUNT_TOKEN}"
contexts:
- name: multus-context
context:
cluster: local
user: multus
current-context: multus-context
EOF
```
Copy `/etc/cni/net.d/multus.d/multus.kubeconfig` to the other Kubernetes nodes:
**Note: We recommend running `chmod 600 /etc/cni/net.d/multus.d/multus.kubeconfig` to keep it secure.**
```
$ scp /etc/cni/net.d/multus.d/multus.kubeconfig ...
```
### Set up CRDs (the daemonset does this automatically)
**If you use the daemonset to install Multus, skip this section and go to "Create network attachment definition".**
Create the CRD definition in Kubernetes with the following command on the master node:
```
# Execute following command at Kubernetes master
$ cat <<EOF | kubectl create -f -
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: network-attachment-definitions.k8s.cni.cncf.io
spec:
group: k8s.cni.cncf.io
version: v1
scope: Namespaced
names:
plural: network-attachment-definitions
singular: network-attachment-definition
kind: NetworkAttachmentDefinition
shortNames:
- net-attach-def
validation:
openAPIV3Schema:
properties:
spec:
properties:
config:
type: string
EOF
```
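You can verify that the CRD is registered before moving on:
```
# Execute following command at Kubernetes master
$ kubectl get crd network-attachment-definitions.k8s.cni.cncf.io
```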
### Create network attachment definition
#### NetworkAttachmentDefinition with json CNI config:
The following command creates a NetworkAttachmentDefinition; the CNI config is in the `config:` field.
```
# Execute following command at Kubernetes master
$ cat <<EOF | kubectl create -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: macvlan-conf-1
spec:
config: '{
"cniVersion": "0.3.0",
"type": "macvlan",
"master": "eth1",
"mode": "bridge",
"ipam": {
"type": "host-local",
"ranges": [
[ {
"subnet": "10.10.0.0/16",
"rangeStart": "10.10.1.20",
"rangeEnd": "10.10.3.50",
"gateway": "10.10.0.254"
} ]
]
}
}'
EOF
```
#### NetworkAttachmentDefinition with CNI config file:
If the NetworkAttachmentDefinition has no spec, Multus finds a file in the default confDir (`/etc/cni/multus/net.d`) whose CNI config `name` field matches the object's name.
```
# Execute following command at Kubernetes master
$ cat <<EOF | kubectl create -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: macvlan-conf-2
EOF
```
```
# Execute following commands at all Kubernetes nodes (i.e. master and minions)
$ cat <<EOF > /etc/cni/multus/net.d/macvlan2.conf
{
"cniVersion": "0.3.0",
"type": "macvlan",
"name": "macvlan-conf-2",
"master": "eth1",
"mode": "bridge",
"ipam": {
"type": "host-local",
"ranges": [
[ {
"subnet": "11.10.0.0/16",
"rangeStart": "11.10.1.20",
"rangeEnd": "11.10.3.50",
"gateway": "11.10.0.254"
} ]
]
}
}
EOF
```
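At this point you can confirm that both attachment definitions are visible to Multus:
```
# Execute following command at Kubernetes master
$ kubectl get net-attach-def macvlan-conf-1 macvlan-conf-2
```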
### Run pod with network annotation
#### Launch pod with text annotation
```
# Execute following command at Kubernetes master
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
name: pod-case-01
annotations:
k8s.v1.cni.cncf.io/networks: macvlan-conf-1, macvlan-conf-2
spec:
containers:
- name: pod-case-01
image: docker.io/centos/tools:latest
command:
- /sbin/init
EOF
```
#### Launch pod with text annotation with interface name
You can also specify the interface name by adding `@<ifname>`.
```
# Execute following command at Kubernetes master
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
name: pod-case-02
annotations:
k8s.v1.cni.cncf.io/networks: macvlan-conf-1@macvlan1
spec:
containers:
- name: pod-case-02
image: docker.io/centos/tools:latest
command:
- /sbin/init
EOF
```
#### Launch pod with JSON annotation
```
# Execute following command at Kubernetes master
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
name: pod-case-03
annotations:
k8s.v1.cni.cncf.io/networks: '[
{ "name" : "macvlan-conf-1" },
{ "name" : "macvlan-conf-2" }
]'
spec:
containers:
- name: pod-case-03
image: docker.io/centos/tools:latest
command:
- /sbin/init
EOF
```
#### Launch pod with JSON annotation with interface
You can also specify the interface name by adding `"interfaceRequest": "<ifname>"`.
```
# Execute following command at Kubernetes master
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
name: pod-case-04
annotations:
k8s.v1.cni.cncf.io/networks: '[
{ "name" : "macvlan-conf-1",
"interfaceRequest": "macvlan1" },
{ "name" : "macvlan-conf-2" }
]'
spec:
containers:
- name: pod-case-04
image: docker.io/centos/tools:latest
command:
- /sbin/init
EOF
```
### Verifying pod network
The following is example `ip -d address` output from the pod above, "pod-case-04":
```
# Execute following command at Kubernetes master
$ kubectl exec -it pod-case-04 -- ip -d address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
3: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
link/ether 0a:58:0a:f4:02:06 brd ff:ff:ff:ff:ff:ff link-netnsid 0 promiscuity 0
veth numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
inet 10.244.2.6/24 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::ac66:45ff:fe7c:3a19/64 scope link
valid_lft forever preferred_lft forever
4: macvlan1@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
link/ether 4e:6d:7a:4e:14:87 brd ff:ff:ff:ff:ff:ff link-netnsid 0 promiscuity 0
macvlan mode bridge numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
inet 10.10.1.22/16 scope global macvlan1
valid_lft forever preferred_lft forever
inet6 fe80::4c6d:7aff:fe4e:1487/64 scope link
valid_lft forever preferred_lft forever
5: net2@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
link/ether 6e:e3:71:7f:86:f7 brd ff:ff:ff:ff:ff:ff link-netnsid 0 promiscuity 0
macvlan mode bridge numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
inet 11.10.1.22/16 scope global net2
valid_lft forever preferred_lft forever
inet6 fe80::6ce3:71ff:fe7f:86f7/64 scope link
valid_lft forever preferred_lft forever
```
| Interface name | Description |
| --- | --- |
| lo | loopback |
| eth0 | Default network interface (flannel) |
| macvlan1 | macvlan interface (macvlan-conf-1) |
| net2 | macvlan interface (macvlan-conf-2) |

File diff suppressed because one or more lines are too long
