k3s has removed some standard plugins that we need, so this fork adds them back.

cni - the Container Network Interface

What is CNI?

CNI, the Container Network Interface, is a proposed standard for configuring network interfaces for Linux application containers. The standard consists of a simple specification for how executable plugins can be used to configure network namespaces. The specification itself is contained in SPEC.md.
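Concretely, a runtime drives a plugin by setting a handful of CNI_* environment variables and piping the network configuration JSON to the plugin on stdin (see SPEC.md for the full contract). The sketch below uses a made-up stand-in script, /tmp/cni-demo-plugin, purely to make the calling convention visible; real plugins perform actual namespace work:

```shell
# Minimal sketch of the runtime-to-plugin calling convention.
# /tmp/cni-demo-plugin is a hypothetical stand-in, NOT a real plugin.
cat > /tmp/cni-demo-plugin <<'EOF'
#!/bin/sh
# A real plugin would configure the interface named by CNI_IFNAME
# inside the namespace at CNI_NETNS; this stand-in only reports
# what it was asked to do.
conf=$(cat)   # the netconf JSON arrives on stdin
echo "{\"cmd\": \"$CNI_COMMAND\", \"ifname\": \"$CNI_IFNAME\"}"
EOF
chmod +x /tmp/cni-demo-plugin

echo '{"name": "mynet", "type": "bridge"}' |
  CNI_COMMAND=ADD CNI_IFNAME=eth0 \
  CNI_NETNS=/var/run/netns/demo /tmp/cni-demo-plugin
# → {"cmd": "ADD", "ifname": "eth0"}
```

The key point is that the plugin is a plain executable: the runtime chooses it by the "type" field of the netconf and communicates entirely through the environment, stdin, and stdout.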

Why develop CNI?

Application containers on Linux are a rapidly evolving area, and within this space networking is a particularly unsolved problem, as it is highly environment-specific. We believe that every container runtime will seek to solve the same problem of making the network layer pluggable. In order to avoid duplication, we think it is prudent to define a common interface between the network plugins and container execution. Hence we are proposing this specification, along with an initial set of plugins that can be used by different container runtime systems.

How do I use CNI?

Requirements

CNI requires Go 1.4+ to build.

Included Plugins

This repository includes a number of common plugins, which can be found in the plugins/ directory. Please see the Documentation/ folder for documentation about particular plugins.

Running the plugins

The scripts/ directory contains two scripts, priv-net-run.sh and docker-run.sh, that can be used to exercise the plugins.

Start out by creating a netconf file to describe a network:

$ mkdir -p /etc/cni/net.d
$ cat >/etc/cni/net.d/10-mynet.conf <<EOF
{
	"name": "mynet",
	"type": "bridge",
	"bridge": "cni0",
	"isGateway": true,
	"ipMasq": true,
	"ipam": {
		"type": "host-local",
		"subnet": "10.22.0.0/16",
		"routes": [
			{ "dst": "0.0.0.0/0" }
		]
	}
}
EOF
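Since the plugins consume this file as raw JSON, it's worth sanity-checking that it parses before exercising them. One quick way (assuming python3 is available; the path below is a scratch copy, the real file lives in /etc/cni/net.d) is:

```shell
# Validate that a netconf parses as JSON before the plugins see it.
# /tmp/10-mynet.conf is a scratch copy for illustration.
conf=/tmp/10-mynet.conf
cat > "$conf" <<'EOF'
{
	"name": "mynet",
	"type": "bridge",
	"ipam": { "type": "host-local", "subnet": "10.22.0.0/16" }
}
EOF
python3 -m json.tool < "$conf" > /dev/null && echo "netconf OK"
```

A malformed file (a trailing comma, say) makes json.tool exit non-zero, which is easier to debug here than as a plugin error later.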

Next, build the plugins:

$ ./build

Finally, execute a command (ifconfig in this example) in a private network namespace that has joined the mynet network:

$ CNI_PATH=`pwd`/bin
$ cd scripts
$ sudo CNI_PATH=$CNI_PATH ./priv-net-run.sh ifconfig
eth0      Link encap:Ethernet  HWaddr f2:c2:6f:54:b8:2b  
          inet addr:10.22.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::f0c2:6fff:fe54:b82b/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:1 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:1 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:90 (90.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

Running a Docker container with network namespace set up by CNI plugins

Use the instructions in the previous section to define a netconf and build the plugins. Next, the docker-run.sh script wraps the docker run command to execute the plugins prior to entering the container:

$ CNI_PATH=`pwd`/bin
$ cd scripts
$ sudo CNI_PATH=$CNI_PATH ./docker-run.sh --rm busybox:latest /sbin/ifconfig
eth0      Link encap:Ethernet  HWaddr fa:60:70:aa:07:d1  
          inet addr:10.22.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::f860:70ff:feaa:7d1/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:1 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:1 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:90 (90.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
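Under the hood, a wrapper like this presumably starts the container with networking disabled and then hands the plugin the path to the container's network namespace, which Linux exposes under /proc/<pid>/ns/net. A hedged sketch of that last step, using the current shell's PID as a stand-in for the container's init PID (a real wrapper would obtain that PID from the container runtime, e.g. via docker inspect):

```shell
# Sketch: deriving the netns path a plugin receives as CNI_NETNS.
# $$ (this shell's PID) stands in for the container's init PID here;
# a real wrapper would query the runtime for the container's PID.
pid=$$
netns=/proc/$pid/ns/net
ls -l "$netns"   # every process has one; containers get their own
```

Because the namespace is addressed by a filesystem path, the plugin can operate on it from the host side without entering the container itself.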