k3s has removed some standard plugins that we need, so this fork adds them back.

cni - the Container Network Interface

What is CNI?

CNI, the Container Network Interface, is a proposed standard for configuring network interfaces for Linux application containers. The standard consists of a simple specification for how executable plugins can be used to configure network namespaces; this repository also contains a go library implementing that specification.

The specification itself is contained in SPEC.md.

Why develop CNI?

Application containers on Linux are a rapidly evolving area, and within this space networking is a particularly unsolved problem, as it is highly environment-specific. We believe that every container runtime will seek to solve the same problem of making the network layer pluggable. In order to avoid duplication, we think it is prudent to define a common interface between the network plugins and container execution. Hence we are proposing this specification, along with an initial set of plugins that can be used by different container runtime systems.

How do I use CNI?

Requirements

CNI requires Go 1.4+ to build.

Included Plugins

This repository includes a number of common plugins, which can be found in the plugins/ directory. See the Documentation/ folder for documentation on individual plugins.

Running the plugins

The scripts/ directory contains two scripts, priv-net-run.sh and docker-run.sh, that can be used to exercise the plugins.

Start out by creating a netconf file to describe a network:

$ mkdir -p /etc/cni/net.d
$ cat >/etc/cni/net.d/10-mynet.conf <<EOF
{
	"name": "mynet",
	"type": "bridge",
	"bridge": "cni0",
	"isGateway": true,
	"ipMasq": true,
	"ipam": {
		"type": "host-local",
		"subnet": "10.22.0.0/16",
		"routes": [
			{ "dst": "0.0.0.0/0" }
		]
	}
}
EOF

The directory /etc/cni/net.d is the default location in which the scripts look for network configurations.

Next, build the plugins:

$ ./build

Finally, execute a command (ifconfig in this example) in a private network namespace that has joined the mynet network:

$ CNI_PATH=`pwd`/bin
$ cd scripts
$ sudo CNI_PATH=$CNI_PATH ./priv-net-run.sh ifconfig
eth0      Link encap:Ethernet  HWaddr f2:c2:6f:54:b8:2b  
          inet addr:10.22.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::f0c2:6fff:fe54:b82b/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:1 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:1 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:90 (90.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

The environment variable CNI_PATH tells the scripts and library where to look for plugin executables.

Running a Docker container with network namespace set up by CNI plugins

Use the instructions in the previous section to define a netconf and build the plugins. The docker-run.sh script then wraps the docker run command, executing the plugins before entering the container:

$ CNI_PATH=`pwd`/bin
$ cd scripts
$ sudo CNI_PATH=$CNI_PATH ./docker-run.sh --rm busybox:latest ifconfig
eth0      Link encap:Ethernet  HWaddr fa:60:70:aa:07:d1  
          inet addr:10.22.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::f860:70ff:feaa:7d1/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:1 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:1 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:90 (90.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

Contact

For any questions about CNI, please reach out on the mailing list or IRC:

  • Email: cni-dev
  • IRC: #appc IRC channel on freenode.org