k3s has removed some standard CNI plugins that we need, so we forked this repository to add them back.

cni - the Container Network Interface

What is CNI?

CNI, the Container Network Interface, is a proposed standard for configuring network interfaces for Linux application containers. The standard consists of a simple specification for how executable plugins can be used to configure network namespaces; this repository also contains a Go library implementing that specification.

The specification itself is contained in SPEC.md.
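Concretely, a plugin is just an executable: the runtime passes its parameters through CNI_* environment variables (CNI_COMMAND, CNI_IFNAME, and so on) and pipes the network configuration JSON to the plugin's stdin. The sketch below uses a stand-in script rather than one of the bundled plugins, purely to illustrate the calling convention:

```shell
# Create a stand-in "plugin" that just reports what it was asked to do.
# (Illustrative only; real plugins configure interfaces and reply with JSON.)
cat > /tmp/demo-plugin <<'EOF'
#!/bin/sh
conf=$(cat)    # the netconf JSON arrives on stdin
name=$(echo "$conf" | sed -n 's/.*"name": *"\([^"]*\)".*/\1/p')
echo "cmd=$CNI_COMMAND ifname=$CNI_IFNAME name=$name"
EOF
chmod +x /tmp/demo-plugin

# Invoke it the way a container runtime would (ADD sets up networking).
echo '{ "name": "mynet", "type": "bridge" }' | \
    CNI_COMMAND=ADD CNI_IFNAME=eth0 /tmp/demo-plugin
```

A real invocation also sets variables such as CNI_CONTAINERID, CNI_NETNS, and CNI_PATH; see SPEC.md for the full protocol.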

Why develop CNI?

Application containers on Linux are a rapidly evolving area, and within this space networking remains a particularly unsolved problem, as it is highly environment-specific. We believe that every container runtime will seek to solve the same problem of making the network layer pluggable. To avoid duplication, we think it is prudent to define a common interface between network plugins and container execution. Hence we are proposing this specification, along with an initial set of plugins that can be used by different container runtime systems.

How do I use CNI?

Requirements

CNI requires Go 1.4+ to build.

Included Plugins

This repository includes a number of common plugins, which can be found in the plugins/ directory. Please see the Documentation/ folder for documentation on particular plugins.

Running the plugins

The scripts/ directory contains two scripts, priv-net-run.sh and docker-run.sh, that can be used to exercise the plugins.

Start out by creating a netconf file to describe a network:

$ mkdir -p /etc/cni/net.d
$ cat >/etc/cni/net.d/10-mynet.conf <<EOF
{
	"name": "mynet",
	"type": "bridge",
	"bridge": "cni0",
	"isGateway": true,
	"ipMasq": true,
	"ipam": {
		"type": "host-local",
		"subnet": "10.22.0.0/16",
		"routes": [
			{ "dst": "0.0.0.0/0" }
		]
	}
}
EOF

The directory /etc/cni/net.d is the default location in which the scripts look for network configurations.
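Before moving on, it can be useful to confirm that the netconf parses as JSON and carries the fields the plugins expect. The check below is only a sketch, using python3 as a convenient JSON parser on an inlined, trimmed copy of the conf above:

```shell
# Sanity-check a netconf: valid JSON with "name", "type" and "ipam" keys.
# (python3 is used here purely as a JSON parser; it is not required by CNI.)
python3 - <<'PYEOF'
import json

conf = json.loads("""
{
    "name": "mynet",
    "type": "bridge",
    "ipam": { "type": "host-local", "subnet": "10.22.0.0/16" }
}
""")
for key in ("name", "type", "ipam"):
    assert key in conf, "netconf is missing %r" % key
print("netconf ok:", conf["name"])
PYEOF
```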

Next, build the plugins:

$ ./build

Finally, execute a command (ifconfig in this example) in a private network namespace that has joined the mynet network:

$ CNI_PATH=`pwd`/bin
$ cd scripts
$ sudo CNI_PATH=$CNI_PATH ./priv-net-run.sh ifconfig
eth0      Link encap:Ethernet  HWaddr f2:c2:6f:54:b8:2b  
          inet addr:10.22.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::f0c2:6fff:fe54:b82b/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:1 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:1 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:90 (90.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

The environment variable CNI_PATH tells the scripts and library where to look for plugin executables.
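That lookup can be sketched as: read the "type" field from the netconf and search each CNI_PATH directory, in order, for an executable of that name. The loop below is illustrative only; the real resolution happens inside the library:

```shell
# Resolve a plugin binary by its netconf "type" across CNI_PATH directories.
# (A sketch; the path and plugin name match the bridge example above.)
CNI_PATH=$(pwd)/bin
plugin_type=bridge
found=""
for dir in $(echo "$CNI_PATH" | tr ':' ' '); do
    if [ -x "$dir/$plugin_type" ]; then
        found="$dir/$plugin_type"
        break
    fi
done
echo "resolved: ${found:-<not found>}"
```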

Running a Docker container with network namespace set up by CNI plugins

Use the instructions in the previous section to define a netconf and build the plugins. Next, the docker-run.sh script wraps the docker run command to execute the plugins prior to entering the container:

$ CNI_PATH=`pwd`/bin
$ cd scripts
$ sudo CNI_PATH=$CNI_PATH ./docker-run.sh --rm busybox:latest /sbin/ifconfig
eth0      Link encap:Ethernet  HWaddr fa:60:70:aa:07:d1  
          inet addr:10.22.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::f860:70ff:feaa:7d1/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:1 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:1 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:90 (90.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)