Compare commits

...

149 Commits
v3.0 ... v3.2

Author SHA1 Message Date
dougbtv
1b0b39d2f5 [travis] Updates Travis to tag master builds as :latest, and adds version tagged images to daemonsets 2019-03-22 11:59:49 +09:00
maximshd
08a2623b8e Properly initialize kubeClient in SetNetworkStatus method (#283)
* Properly initialize kubeClient in SetNetworkStatus method

* Fix typo

* Update error message

* Extend logging for setNetworkStatus function
2019-03-20 08:08:48 -04:00
Ashish Billore
362a7d0bd6 Minor Update for typo (#282) 2019-03-13 14:04:48 -04:00
Doug Smith
5e27b3cafb [entrypoint] Add options to specify logfile & loglevel in entrypoint (#280) 2019-03-07 11:01:03 -05:00
Tomofumi Hayashi
61416cbd40 Add 'verbose' option to logging minimum information (#275)
This change addresses #274 by adding a 'verbose' option, which outputs
minimal information (for usual runs, a bit more information than
'error').
2019-03-07 11:00:46 -05:00
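For context, a minimal sketch of how these logging options can appear in a Multus CNI configuration, assuming the `logLevel` and `logFile` keys covered in the Logging Options documentation below; the delegate and log path shown are illustrative only:
```
{
  "name": "node-cni-network",
  "type": "multus",
  "kubeconfig": "/etc/kubernetes/node-kubeconfig.yaml",
  "logLevel": "verbose",
  "logFile": "/var/log/multus.log",
  "delegates": [{
    "type": "flannel",
    "delegate": { "isDefaultGateway": true }
  }]
}
```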
Tomofumi Hayashi
102cfc349d Fix multus-daemonset to use configmap to change config in yaml
This fix utilizes a ConfigMap for multus.conf so the config can be
changed from the YAML file. This change allows users to change the
multus config file without a container image change.
This change removes images/70-multus.conf because it is no longer
used.
2019-03-08 00:04:35 +09:00
Tomofumi Hayashi
5dc774a547 Caches all pod delegates json for pods deletion without k8s info
This fixes #243 with the following changes:
 + Optimize fetching the Pod from the k8s client
 + Change to always use the cache in DEL.
 + If fetching the pod info from the k8s client fails during deletion,
  use cached delegates as an emergency bailout
 + Add test cases for the cache
2019-03-07 23:50:07 +09:00
Doug Smith
f0a43ca0a5 Adds wait loop to entrypoint when --multus-conf-file=auto (#234)
* [entrypoint] Adds a wait loop when using --multus-conf-file=auto; waits for the presence of any conf file

* [entrypoint] fixes incorrect logical comparison

* [entrypoint] add log every 5 tries, fix tries increment, fix logical comparison

* [entrypoint] fix attempt output
2019-02-28 13:56:08 -05:00
Doug Smith
560d07f007 Merge pull request #273 from pliurh/config-file
Generate Multus config file regardless
2019-02-28 09:25:46 -05:00
Peng Liu
93db092895 Generate Multus config file regardless 2019-02-27 17:47:47 +08:00
Doug Smith
0010cd99ff Merge pull request #236 from hanxueluo/master
fix crash caused by empty delegates when using clusterNetwork
2019-02-26 07:17:36 -05:00
Tomofumi Hayashi
260316398f Change ClusterNetwork/DefaultNetwork namespace to MultusNamespace
Fix #261.
2019-02-22 14:08:30 +00:00
dougbtv
9b41f7635d Allows cmdDel to finish if netns doesn't exist, omits deferred netns.Close() in such a case 2019-02-22 13:54:03 +00:00
Doug Smith
8bf358071a Changes configuration for kube api to use gRPC 2019-02-21 14:42:57 +00:00
Tomofumi Hayashi
1fd6e131ac Fix term in svg file (default network -> cluster network) 2019-02-18 12:44:55 +09:00
Huanle Han
49d55d6f45 fix cmdDel crash when using pod annotation "v1.multus-cni.io/default-network"
The crash happens at the code line `conf.Delegates[0] = delegate` in the function TryLoadPodDelegates,
because len(conf.Delegates) is 0.

Signed-off-by: Huanle Han <hanhuanle@caicloud.io>
2019-02-11 18:18:36 +08:00
Tomofumi Hayashi
f0bc4fb475 Add multusNamespace/systemNamespaces config
This change provides new configuration parameters, multusNamespace
and systemNamespaces for flexible namespace management.
The change addresses issue #252 and issue #253.
2019-02-08 00:25:35 +09:00
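A minimal sketch of how these new parameters could look in multus.conf, assuming the `multusNamespace` and `systemNamespaces` keys introduced by this change; the namespace values are illustrative:
```
{
  "name": "node-cni-network",
  "type": "multus",
  "kubeconfig": "/etc/kubernetes/node-kubeconfig.yaml",
  "multusNamespace": "kube-system",
  "systemNamespaces": ["kube-system", "kube-public"]
}
```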
Doug Smith
ec9dff343c Merge pull request #245 from knightXun/multus
refactor k8sclient: rename some val
2019-02-07 10:19:21 -05:00
Doug Smith
515e7eb92c Merge pull request #248 from dougbtv/quickstart-master-name
[docs] Adds note about master device name
2019-02-07 10:16:24 -05:00
Doug Smith
7984e7b007 Merge pull request #255 from aneeshkp/multus-crio
Fix Bin directory is different when using CRI-O
2019-02-07 10:12:45 -05:00
Doug Smith
d9188e4463 Merge pull request #258 from dcbw/teardown-add-cleanup
multus: simplify teardown on add error and clarify error message
2019-02-07 10:10:32 -05:00
Abdul Halim
34231166ae create empty NetworkStatus for empty Result struct
This fixes the issue described in #211 where LoadNetworkStatus was
throwing errors if a delegate plugin returns an empty Result that
contains no IPAM information.

This change allows ignoring the errors propagated from parsing
an empty Result and continuing with the next one.

Change-Id: Ife4b6103de044256233d581fa74759423ed94ff5
2019-02-05 16:16:32 +00:00
Dan Williams
063a3593b8 multus: simplify teardown on add error and clarify error message
Signed-off-by: Dan Williams <dcbw@redhat.com>
2019-02-04 10:55:08 -06:00
Aneesh Puttur
f229cbe47f moved crio details out of the README.md and into the ./docs/quickstart.md 2019-01-30 09:22:46 -05:00
Aneesh Puttur
747d31bb30 Fix Bin directory is different when using CRI-O
Added new configuration file for crio runtime multus-crio-daemonset.yml
Added instructions to readme for crio users.
Fixes #224
Signed-off-by: Aneesh Puttur <aputtur@redhat.com>
2019-01-29 10:56:35 -05:00
Moshe Levi
cd6f9880ac Fix multus dir creation in how-to-use.md
Change-Id: I3d7db11bf6216c2e1ddcab9d61c0c8e4fb8c0010
Signed-off-by: Moshe Levi <moshele@mellanox.com>
2019-01-27 19:41:01 +00:00
dougbtv
3ae7af14be [docs] Adds note about master device name 2019-01-25 14:30:14 -05:00
Md Safiyat Reza
c6c9706855 Fixed the internal link to the 'create-network-attachment-definition' section. 2019-01-25 15:42:53 +00:00
knight
66595e8172 refactor k8sclient: rename some val 2019-01-25 09:54:53 +08:00
Doug Smith
a4951bbd0d Merge pull request #242 from dougbtv/entrypoint-movefile
Updates entrypoint for atomic move of binary
2019-01-22 10:45:19 -05:00
dougbtv
d7e8809cf8 [entrypoint] Updates entrypoint for atomic move of binary (for cleaner upgrade) 2019-01-22 10:38:02 -05:00
Doug Smith
ac3f0d155e Merge pull request #237 from dougbtv/entrypoint-nsisolation
[entrypoint] Adds option for namespaceIsolation in entrypoint
2019-01-15 13:42:49 -05:00
dougbtv
66ca933a2f [entrypoint] Adds option for namespaceIsolation in entrypoint 2019-01-15 11:21:18 -05:00
Przemyslaw Lal
abdfc70c0d Remove validating admission controller
Remove validating admission controller to complete transfer of this feature to new repository at https://github.com/K8sNetworkPlumbingWG/net-attach-def-admission-controller
2019-01-11 18:49:34 +09:00
Tomofumi Hayashi
6a46d54161 Add version into binary and fix .travis.yml to run forked repo.
This change introduces goreleaser, which does cross-compilation and
packaging, as well as adding the version into the Go code. This change also
updates .travis.yml to allow runs from other users' forked repos.
2019-01-11 00:00:43 +09:00
Michael Wyraz
344e74f971 Remove unused config in flannel-daemonset.yaml 2019-01-10 23:54:00 +09:00
Michael Wyraz
9b3491f9f7 Add port forwarding to config provided by the docker image 2019-01-10 23:52:47 +09:00
Shahar Klein
9db8f38743 add MULTUS_CONF_FILE to the arr for checking 2019-01-10 23:51:55 +09:00
Doug Smith
fe6b68d58b Merge pull request #230 from dougbtv/dockerfile
[dockerfile] Updates Dockerfile for OpenShift-style build
2019-01-07 11:37:50 -05:00
dougbtv
6880d9eb3b [dockerfile] Updates Dockerfile for OpenShift-style build 2019-01-07 11:36:56 -05:00
Doug Smith
cef999f394 Merge pull request #223 from pliurh/doc
Add doc for specifying pod default cluster network
2019-01-07 11:19:13 -05:00
dougbtv
0d8296c54c [docs][minor] Typos and formatting for default network annotation docs 2019-01-07 11:18:08 -05:00
Tomofumi Hayashi
73f629b2fe Fixed document about '/' notation in text annotation. 2019-01-07 11:02:02 +09:00
Tomofumi Hayashi
e55832cae0 Merge branch 'master' of github.com:intel/multus-cni 2018-12-27 12:20:36 +09:00
Peng Liu
1558896125 Add doc for specifying pod default cluster network
Update according to dougbtv's comments
2018-12-20 11:48:16 +08:00
dougbtv
de4bed7a60 [feature] Adds a namespace isolation security feature 2018-12-20 06:43:59 +09:00
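A minimal sketch of turning this security feature on in the Multus CNI configuration, assuming the `namespaceIsolation` boolean key added by this feature (the intent being to restrict pods to net-attach-defs in their own namespace); the delegate shown is illustrative:
```
{
  "name": "node-cni-network",
  "type": "multus",
  "namespaceIsolation": true,
  "kubeconfig": "/etc/kubernetes/node-kubeconfig.yaml",
  "delegates": [{
    "type": "flannel",
    "delegate": { "isDefaultGateway": true }
  }]
}
```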
Tomofumi Hayashi
6e335441ab Merge branch 'master' of github.com:intel/multus-cni 2018-12-18 14:28:28 +09:00
Michal Rostecki
f157f424b5 k8sclient: Add missing error check
Before this change, error returned by `libcni.ConfFiles` was
silently ignored.

Signed-off-by: Michal Rostecki <mrostecki@suse.de>
2018-12-06 23:56:24 +09:00
dougbtv
e5e020f6a3 [dockerfile] Adds Dockerfile.rhel for OpenShift build 2018-12-06 16:16:26 +09:00
Tomofumi Hayashi
dd9355c3aa Fix go fmt issue. 2018-12-06 13:42:17 +09:00
Tomofumi Hayashi
a9b40c7841 Fix go vet issue. 2018-12-06 13:34:08 +09:00
Tomofumi Hayashi
6bf296f54d Change the namespace to 'kube-system' 2018-12-05 14:47:30 +09:00
Tomofumi Hayashi
e05de6260b clusterNetwork/defaultNetworks and namespace spec fixed
This fix adds a declaration that the clusterNetwork/defaultNetwork
net-attach-def is in the 'default' namespace. In addition, the code
is changed to skip defaultNetwork in the case of the 'kube-system'
namespace as well (#202).
2018-12-05 14:47:30 +09:00
Tomofumi Hayashi
887a9f42dd Fix Docker build issue around golang. 2018-12-04 18:20:22 +09:00
Michal Rostecki
4e3a5fe180 Add .gitignore file
Prevent tracking of binary outputs, GOPATH and test outputs.

Signed-off-by: Michal Rostecki <mrostecki@suse.de>
2018-12-03 16:50:21 +09:00
dougbtv
8d89700cac [bugfix] Delete all delegates instead of breaking out during deletion loop 2018-11-30 13:41:08 +00:00
Peng Liu
96217dd16e Change pod annotation name to 'v1.multus-cni.io/default-network' 2018-11-30 16:11:09 +09:00
Peng Liu
c3be74d7d6 Add more debug message 2018-11-30 16:11:09 +09:00
Peng Liu
555a734e21 Specify Pod default network in Annotations
Signed-off-by: Peng Liu <pliu@redhat.com>
2018-11-30 16:11:09 +09:00
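A minimal sketch of a pod that overrides its default network via this annotation, assuming a NetworkAttachmentDefinition named `macvlan-conf` exists; the name and image are illustrative:
```
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "samplepod",
    "annotations": {
      "v1.multus-cni.io/default-network": "macvlan-conf"
    }
  },
  "spec": {
    "containers": [
      { "name": "samplepod", "image": "busybox", "command": ["top"] }
    ]
  }
}
```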
Tomofumi Hayashi
4aa1d212f1 Add description how to use CRD in non-default namespaces 2018-11-30 00:31:39 +09:00
dougbtv
97a4996546 [docs] Updates to fix typos and extend information about CNI configurations generally per review 2018-11-30 00:31:39 +09:00
dougbtv
10abd05b1e [docs] Adds additional quickstart.md specific guide, some updates to usage guide 2018-11-30 00:31:39 +09:00
Tomofumi Hayashi
85293b3305 Add comments in case of daemonset. 2018-11-30 00:31:39 +09:00
Tomofumi Hayashi
be1be19f11 Add more paragraph. 2018-11-30 00:31:39 +09:00
Tomofumi Hayashi
3723fc3b53 Add "NOTE:" and change "NOTE" from "Note" 2018-11-30 00:31:39 +09:00
Tomofumi Hayashi
4a4b9d6ebd s/folloiwng/following/ 2018-11-30 00:31:39 +09:00
Tomofumi Hayashi
426e9b2581 Add 'skip in case of daemonset' at "SA, ClusterRole..." 2018-11-30 00:31:39 +09:00
Tomofumi Hayashi
b13b3c9e65 Indent the paragraph at "install multus" 2018-11-30 00:31:39 +09:00
Tomofumi Hayashi
1a78cb5766 Update README.md and split into several child documents
Fix #154 and #139. Thank you @dougbtv for reviewing the docs!
2018-11-30 00:31:39 +09:00
Alona Kaplan
f4068d18bd Support IPRequest to specify IP address for interface 2018-11-30 00:16:11 +09:00
Tomofumi Hayashi
d1c8a3d93f Fix the log message. 2018-11-29 15:11:33 +09:00
Doug Smith
6ffd60d289 Merge pull request #194 from intel/travis-ci
Travis
2018-11-19 11:35:58 -05:00
dougbtv
eb0eaf5099 [travis] Updates Travis to build ':snapshot' tagged image on each merge into master 2018-11-19 11:31:54 -05:00
Doug Smith
cbf737b917 Merge pull request #184 from s1061123/dev/issue_template
Add issue template for {bug,enhance,support}
2018-11-15 10:18:13 -05:00
Przemyslaw Lal
bf893189ef fix indentation
Signed-off-by: Przemyslaw Lal <przemyslawx.lal@intel.com>
2018-11-12 23:22:14 +00:00
Przemyslaw Lal
0903a11dc6 webhook documentation updates
Signed-off-by: Przemyslaw Lal <przemyslawx.lal@intel.com>
2018-11-12 23:22:14 +00:00
Przemyslaw Lal
de954a68dd add more webhook tests
Signed-off-by: Przemyslaw Lal <przemyslawx.lal@intel.com>
2018-11-12 23:22:14 +00:00
Przemyslaw Lal
b59a82d6fb improve error handling in webhook
Signed-off-by: Przemyslaw Lal <przemyslawx.lal@intel.com>
2018-11-12 23:22:14 +00:00
Przemyslaw Lal
d9e9e7b5dd run webhook as a deployment
Signed-off-by: Przemyslaw Lal <przemyslawx.lal@intel.com>
2018-11-12 23:22:14 +00:00
Przemyslaw Lal
82b3549c5d Add proxy env variables to docker build script
Signed-off-by: Przemyslaw Lal <przemyslawx.lal@intel.com>
2018-11-12 23:22:14 +00:00
Przemyslaw Lal
3f19f95fca Add documentation for validating admission webhook
Signed-off-by: Przemyslaw Lal <przemyslawx.lal@intel.com>
2018-11-12 23:22:14 +00:00
Przemyslaw Lal
4471a16a9d Add deployment files for validating admission webhook
* Add script for automated certificates and secret generation
* Add pod, service and webhook configuration specification files

Signed-off-by: Przemyslaw Lal <przemyslawx.lal@intel.com>
2018-11-12 23:22:14 +00:00
Przemyslaw Lal
5aecd09331 Add validating admission webhook
* Add validating admission webhook HTTP server application
* Handle incoming AdmissionReview requests and validate their correctness, handle errors if any
* Validate Network Attachment Definition objects
* Send AdmissionReview response with allowed/denied decision and its reason
* In case of any other errors (malformed HTTP request, empty body, etc.) send proper HTTP error code
* Use TLS encryption
* Add some basic unit tests for Network Attachment Definition objects validation
* Build Docker image with webhook application

Signed-off-by: Przemyslaw Lal <przemyslawx.lal@intel.com>
2018-11-12 23:22:14 +00:00
Alona Kaplan
ec543570b5 Setting the MAC in CNI_ARGS shouldn't override the already existing CNI_ARGS 2018-11-12 17:13:37 +01:00
Dan Williams
5ef87bfd71 CRD: interfaceRequest -> interface (v1 spec conformance)
Change the Network Attachment Selection Annotation long-form
interface name request JSON key from 'interfaceRequest' to
'interface' to conform with the V1 NPWG spec.
2018-11-09 13:20:51 +09:00
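A sketch of the long-form JSON value of the `k8s.v1.cni.cncf.io/networks` annotation after this rename, using the v1 keys `interface` (and `mac` from the MacRequest change further down this list); the network names and MAC address are illustrative:
```
[
  { "name": "sriov-vlanid-l2enable-conf", "interface": "north" },
  { "name": "macvlan-conf", "mac": "c2:b0:57:49:47:f1" }
]
```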
Tomofumi Hayashi
9f00ea47f5 Fix rebase conflicts. 2018-11-09 00:15:35 +09:00
Tomofumi Hayashi
41348f2de1 Fix multus_test. 2018-11-09 00:15:35 +09:00
Tomofumi Hayashi
dccb303d6e Remove unnecessary else clause 2018-11-09 00:15:35 +09:00
Tomofumi Hayashi
f4f187d71d Incorporate @dcbw's comment. 2018-11-09 00:15:35 +09:00
Tomofumi Hayashi
60dfae14ea Add mac/interfaceRequest section in README.md 2018-11-09 00:15:35 +09:00
Tomofumi Hayashi
e505174d94 Change json field name to align with NPWG spec v1. 2018-11-09 00:15:35 +09:00
Tomofumi Hayashi
0812a8f7d7 Fix the way to set MAC. 2018-11-09 00:15:35 +09:00
Tomofumi Hayashi
3b073d7eb6 Add debug message for MAC. 2018-11-09 00:15:35 +09:00
Tomofumi Hayashi
deb2f2f64c Support MacRequest to specify MAC address for interface 2018-11-09 00:15:35 +09:00
Tomofumi Hayashi
d03379d768 Add issue template for {bug,enhance,support} 2018-11-06 16:28:33 +09:00
Tomofumi Hayashi
550f68024f Fix example files (#171 and #183) 2018-11-05 21:57:38 +09:00
Tomofumi Hayashi
5782dc1916 Fix typo in README.md 2018-11-02 22:54:39 +09:00
Tomofumi Hayashi
8b94703a4c Add clusterNetwork/defaultNetwork into multus
To support CRD/file/directory, add clusterNetwork/defaultNetwork
in multus.conf file.
2018-11-02 22:54:39 +09:00
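A minimal sketch of a multus.conf using these keys, assuming `clusterNetwork` names a net-attach-def (per the commit message it can also point to a conf file or directory) and `defaultNetworks` lists additional networks attached by default; the names are illustrative:
```
{
  "name": "node-cni-network",
  "type": "multus",
  "kubeconfig": "/etc/kubernetes/node-kubeconfig.yaml",
  "clusterNetwork": "flannel-conf",
  "defaultNetworks": ["sriov-conf"]
}
```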
Tomofumi Hayashi
4b231c6855 Add unit tests for clusterNetwork/defaultNetworks 2018-11-02 22:54:39 +09:00
Tomofumi Hayashi
3673a07229 Add clusterNetwork/defaultNetwork into multus
To support CRD/file/directory, add clusterNetwork/defaultNetwork
in multus.conf file.
2018-11-02 22:54:39 +09:00
Tomofumi Hayashi
5871c8744a Add clusterNetwork/defaultNetwork into multus
To support CRD/file/directory, add clusterNetwork/defaultNetwork
in multus.conf file.
2018-11-02 22:54:39 +09:00
Michael Cambria
aa7e000251 Make conflistDel() behave like conflistAdd()
conflistAdd() finds binaries differently than conflistDel().
Make the two calls find binaries the same way.

Fixes #179

Signed-off-by: Michael Cambria <mcambria@redhat.com>
2018-11-02 02:50:28 +09:00
dougbtv
67e661f932 [rbac] Tightens down RBAC for clusterrole 2018-11-02 02:22:53 +09:00
Michael Cambria
5eca507387 Fix logFile to match configuration json
The Logging Options section of the README describes how to specify a file
to log to. There is a typo: LogFile should be logFile to match the
JSON.

Fixes #177

Signed-off-by: Michael Cambria <mcambria@redhat.com>
2018-10-31 10:14:47 +00:00
Doug Smith
bdf901c122 Merge pull request #171 from dougbtv/flannel_fix
Update flannel daemonset
2018-10-25 14:36:59 -04:00
dougbtv
447c6e0c2d Fixes flannel daemonset stuck in pod queue in Kubernetes 1.12.x per #170 2018-10-25 14:36:01 -04:00
Shahar Klein
ad50ff56ff Seems like the ENTRYPOINT value must be quoted
Signed-off-by: Shahar Klein <shaharklein@gmail.com>
2018-10-16 07:58:00 +09:00
Tomofumi Hayashi
17ae9c2e18 Fix TravisCI for the failure of 'go get golint' 2018-10-15 16:09:46 +09:00
Tomofumi Hayashi
4f45004710 TravisCI yaml parameterized
This change fixes #143, to make some specific TravisCI args parameter.
2018-10-11 23:42:52 +09:00
Doug Smith
21cdfe5485 Default network readiness fix conflicts after rebase 2018-10-11 11:27:18 +09:00
Doug Smith
1caddaef4f Merge pull request #162 from Kusanagi9999/improved-grep-to-search-for-conffiles
Improve grep in entrypoint.sh to only find .conf and .conflist files
2018-10-10 14:42:42 -04:00
Kuralamudhan Ramakrishnan
cee576c4d0 Update README.md 2018-10-10 15:29:58 +01:00
Kuralamudhan Ramakrishnan
a2022fa0d2 Update README.md 2018-10-10 15:14:28 +01:00
Abdul Halim
68da5dfb0d fixed some typos in comments
Change-Id: Ieb650479b6b0fef1a4ecaeb2c3c1a7c15fff43d5
2018-10-10 11:52:58 +01:00
Abdul Halim
9d5e7e7abf added checkpoint tests file
Change-Id: I53551660ffd017fe170de58abdf7a96e29178000
2018-10-10 11:52:58 +01:00
Abdul Halim
6e6c4c6cea refactoring checkpoint.go code to be testable
this change will allow mocking the checkpoint instance for unit tests

Change-Id: I72fb25d15d5c9f28577a0fcbfcd385df523a5e57
2018-10-10 11:52:58 +01:00
Abdul Halim
632804ce51 only create resourceMap on demand
making resourceMap a singleton object and only initializing it once,
if one or more CRDs have a resourceName annotation in them.

Added copyright header for checkpoint/checkpoint.go.
Replaced fmt.Errorf with logging.

Change-Id: I54628d69324833e70a75dcf6533e6642dedde9b5
2018-10-10 11:52:58 +01:00
Abdul Halim
56cc7fb6b1 updated examples/README.md
Change-Id: I650fec86659b3690e1dc4b15bf84b6574cb0baba
2018-10-10 11:52:58 +01:00
Abdul Halim
e3d14b2732 parse kubelet checkpoint file for pod devices
Enables kubelet checkpoint file parsing to get Pod device info
so that this device information can be passed to CNI plugins
that need specific device information to work.

Change-Id: I6630f56adc0a8307f575fc09ce9090c1ffca0337
2018-10-10 11:52:58 +01:00
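For illustration, a sketch of what the kubelet device-plugin checkpoint data looks like when mapped onto the structs added in checkpoint/checkpoint.go (shown in the diff below); every value here (UID, container and resource names, PCI addresses, AllocResp payload, checksum) is an invented placeholder:
```
{
  "Data": {
    "PodDeviceEntries": [
      {
        "PodUID": "970a395d-bb3b-11e8-89df-408d5c537d23",
        "ContainerName": "sriov-container",
        "ResourceName": "intel.com/sriov",
        "DeviceIDs": ["0000:03:02.0"],
        "AllocResp": "PLACEHOLDER-BASE64-BYTES"
      }
    ],
    "RegisteredDevices": {
      "intel.com/sriov": ["0000:03:02.0", "0000:03:02.1"]
    }
  },
  "Checksum": 123456789
}
```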
Louis Woods
93621f7ada Improve grep in entrypoint.sh to only find .conf and .conflist files 2018-10-05 14:15:52 -07:00
Doug Smith
46a0f7590c Merge pull request #160 from Kusanagi9999/add-auto-config-generation
Add the option to auto generate 00-multus.conf
2018-10-05 16:22:28 -04:00
Louis Woods
91eaf6d36c Add the option to auto generate 00-multus.conf
When `--multus-conf-file=auto` is used, 00-multus.conf will be
automatically generated from the CNI configuration file of the master
plugin (the first file in lexicographical order in cni-conf-dir).
2018-10-04 16:40:16 -07:00
Tomofumi Hayashi
fb809ebf74 Add bracket [] in Dockerfile's entrypoint to parse argument correctly. 2018-10-02 22:49:59 +09:00
Kuralamudhan Ramakrishnan
adbea5a638 fixing readme file version details 2018-10-01 09:40:27 +02:00
Mathieu Rohon
a4f5fd4bf0 Add portmap capability support
Signed-off-by: Mathieu Rohon <mathieu.rohon@orange.com>
2018-10-01 06:48:24 +02:00
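A sketch of how the portmap capability is typically requested in a CNI conflist, following the upstream portmap plugin convention of a `capabilities` entry; the network name and the companion flannel plugin are illustrative of how a default-network conflist might look:
```
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": { "isDefaultGateway": true, "hairpinMode": true }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
```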
Nirmoy Das
635a275746 fix typo in the readme files 2018-09-12 18:09:51 +01:00
Tomofumi Hayashi
517f2d4b7a Revert "Merge branch 'dev/k8s-deviceid-model' into master"
This reverts commit 194c27fadf, reversing
changes made to 96d4d79d1f.
2018-08-30 20:15:30 +09:00
Abdul Halim
194c27fadf Merge branch 'dev/k8s-deviceid-model' into master 2018-08-30 10:48:18 +01:00
dougbtv
96d4d79d1f [docs] Remove coveralls badge from README 2018-08-28 08:56:43 +01:00
Tomofumi Hayashi
7d6cdab105 Updates npwg-demo-1 files to track latest changes
Almost all files in npwg-demo-1 are fixed, but I found several files
that still use old definitions. This change fixes them.
2018-08-28 08:56:10 +01:00
Doug Smith
88bd2eb653 Merge pull request #137 from intel/dockerfile-move
[dockerfile] Moves Dockerfile to root
2018-08-27 17:01:22 -04:00
dougbtv
8d38b3c6af [dockerfile] Moves Dockerfile to root 2018-08-27 16:55:46 -04:00
Kuralamudhan Ramakrishnan
86af6ab69f Update test.sh with coveralls job inclusion 2018-08-18 12:39:50 +01:00
dougbtv
e43f06b61d [ci][coveralls] Adds coveralls code coverage during Travis CI run, adds CI badges 2018-08-17 17:45:19 +01:00
Tomofumi Hayashi
4d5ae295cc Fix glide.yaml
This change fixes glide.yaml: it tracks the CNI subpackage change and
fixes an error with the bytes package. With this change, it was verified
that 'glide up' works without any error.
2018-08-17 13:32:01 +01:00
rkamudhan
f0f1d506c4 fixing the cmddel fix code 2018-08-17 00:45:05 +01:00
rkamudhan
b01f784504 handling the multiple cmd del call from kubelet 2018-08-17 00:45:05 +01:00
Tomofumi Hayashi
6d3216c340 Add debug log for newly added functions. 2018-08-17 00:43:31 +01:00
Tomofumi Hayashi
8e4c092192 Convert bytes to string in Debugf() 2018-08-17 00:43:31 +01:00
Tomofumi Hayashi
7d3626a5c0 Add logging message for debug/error
This change adds logging messages: debug and error. fmt.Errorf() is
wrapped by logging.Errorf(), hence all error messages are also put into
the log file. logging.Debugf() is called at almost every function call,
so we can track the code through the logging messages.
2018-08-17 00:43:31 +01:00
Maximilian Hristache
def72938cd Enable hairpin in the multus config
Fix #124
2018-08-13 14:44:20 +01:00
Abdul Halim
dd4e93c7d4 update example README.md file with sriov-net.yaml
added an example sriov-net.yaml file in the examples directory and
updated the README.md file showing how to define CRDs with a
resourceName annotation for device-specific information.
2018-08-07 11:13:38 +01:00
Abdul Halim
f30c688775 added deviceIDs insertions into delegates
this change allows Multus to parse the resourceName annotation from
network CRDs, then add deviceIDs into the delegate object for the CNI
plugin to consume.
2018-08-03 13:51:46 +01:00
rkamudhan
1ad25a890d adding error checking in network status creation as well 2018-08-01 16:03:25 +01:00
Kuralamudhan Ramakrishnan
88759d29de updating nitpick issues in the source file. 2018-08-01 16:03:25 +01:00
rkamudhan
d71fe3447f fixing multus runtime error for network status without pod network annotation 2018-08-01 16:03:25 +01:00
Kuralamudhan Ramakrishnan
02255b40fa Update 05_vlan1.yml 2018-08-01 13:02:29 +01:00
Kuralamudhan Ramakrishnan
5c1286b27b Update 04_macvlan1.yml 2018-08-01 12:59:14 +01:00
Kuralamudhan Ramakrishnan
6780518e5f Update 04_macvlan1.yml 2018-08-01 12:56:40 +01:00
47 changed files with 3601 additions and 1013 deletions

.github/ISSUE_TEMPLATE/bug-report.md (new file, 28 lines)

@@ -0,0 +1,28 @@
---
name: Bug Report
about: Report a bug encountered
---
<!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks!-->
**What happened**:
**What you expected to happen**:
**How to reproduce it (as minimally and precisely as possible)**:
**Anything else we need to know?**:
**Environment**:
- Multus version
image path and image ID (from 'docker images')
- Kubernetes version (use `kubectl version`):
- Primary CNI for Kubernetes cluster:
- OS (e.g. from /etc/os-release):
- File of '/etc/cni/net.d/'
- File of '/etc/cni/multus/net.d'
- NetworkAttachment info (use `kubectl get net-attach-def -o yaml`)
- Target pod yaml info (with annotation, use `kubectl get pod <podname> -o yaml`)
- Other log outputs (if you use multus logging)

.github/ISSUE_TEMPLATE/enhancement.md (new file, 10 lines)

@@ -0,0 +1,10 @@
---
name: Enhancement Request
about: Suggest an enhancement to multus
---
<!-- Please only use this template for submitting enhancement requests -->
**What would you like to be added**:
**Why is this needed**:

.github/ISSUE_TEMPLATE/support.md (new file, 5 lines)

@@ -0,0 +1,5 @@
---
name: Support Request
about: Support request or question relating to multus-cni
---

.gitignore (new file, 9 lines)

@@ -0,0 +1,9 @@
# Binary output dir
bin/
# GOPATH created by the build script
gopath/
# Test outputs
*.out
*.test

.goreleaser.yml (new file, 24 lines)

@@ -0,0 +1,24 @@
# This is an example goreleaser.yaml file with some sane defaults.
# Make sure to check the documentation at http://goreleaser.com
builds:
-
env:
- CGO_ENABLED=0
main: ./multus/
goos:
- linux
goarch:
- 386
- amd64
- arm
- arm64
archive:
wrap_in_directory: true
checksum:
name_template: 'checksums.txt'
snapshot:
name_template: "{{ .Tag }}-snapshot"
release:
draft: true
changelog:
skip: true

.travis.yml

@@ -3,64 +3,84 @@ language: go
# for the detail
# sudo: required
dist: trusty
go:
- 1.11.x
env:
global:
- REGISTRY_USER=nfvperobot
- secure: "LnQV09sy5nfrJd0PKAbxYPdKJ5QtLECofsunYfVk7tFp+ivKyZBXHwi4V4aGFuB2SqCnpauXBRTLet8hrfm5kN9ZZQRqy0WNs/fJHdFC6YKOKwyCQwczFb1by/iTX68dxWc2nK9+Opi6s/81Bh5yb3Oquqzdk+OEgaQHz2KP7BwI4yDrobinBR5laJ4KdxZJYgYx4mP6uUPxj7UZww+HaWqyiGy8cAeK3L81sGjxXJIYTRRfG1J4pifI5A3c3IOJRID0pvifgUIsQXp5MHpx+nxmhRJ7KMBLeNkUKruLTEsufgGCvhY5eWpdBhVN2YefGTqlKBCtKEqRUPlLbP5eJGUdY1PlUMUnQsr+FRWAZz90A1TESOZXZqDs4xR1ox1wX7mBUeelViXvUfLQB9sOD8G86FkXqNTqx/thp3x0Dqgy44pL+12Y3k5xVZmIsWDSpGmmIe1jOCsoL26Fdic+dTO/l3mx3KP1+gPNqbScuJsccLyPsr96uFCBCPJ2mSy7nCqb01KZTbbkIvv6oOCQ+Mfq8MT9lkxf6FJ+K+7vVbcgshOGhqA/l1UO3rKxnGt8Rkj/5XoHkcjXjM6YzT5LvljVWszJGXeTQxGjcsPrK2AscyX7JvNp/AMElII/Hxm6P0NESfV0whrZHyVOaqIRrbhUsK9j4YP8IMFoI4qYp4g="
- REGISTRY_USER=${REGISTRY_USER}
- REGISTRY_PASS=${REGISTRY_PASS}
- MULTUS_GOPATH=${PWD}/gopath
- secure: "${REGISTRY_SECURE}"
before_install:
- sudo apt-get update -qq
- go get github.com/mattn/goveralls
install:
# workaround golint install error in https://github.com/golang/lint/issues/288
- mkdir -p $GOPATH/src/golang.org/x
- pushd $GOPATH/src/golang.org/x
- git clone https://github.com/golang/tools.git
- git clone https://github.com/golang/lint.git
- go get github.com/golang/lint/golint
- popd
- go get -u golang.org/x/lint/golint
before_script:
- golint ./multus/... | grep -v ALL_CAPS | xargs -r false
- go fmt ./multus/...
- go vet ./multus/...
# Make gopath... to run golint/go fmt/go vet
- |-
if [ ! -h gopath/src/github.com/intel/multus-cni ]; then
mkdir -p gopath/src/github.com/intel
ln -s ../../../.. gopath/src/github.com/intel/multus-cni || exit 255
fi
- env GOPATH=${MULTUS_GOPATH} golint gopath/src/github.com/intel/multus-cni/multus/... | grep -v ALL_CAPS | xargs -r false
- env GOPATH=${MULTUS_GOPATH} go fmt gopath/src/github.com/intel/multus-cni/...
- go tool vet */*.go
# - gocyclo -over 15 ./multus
script:
- ./build
- sudo ./test.sh
- |-
GOV_GOPATH=${PWD}/gopath
pushd gopath/src/github.com/intel/multus-cni
env GOPATH=${GOV_GOPATH} ${GOPATH}/bin/goveralls -coverprofile=coverage.out -service=travis-ci
popd
- mkdir -p ${TRAVIS_BUILD_DIR}/dist
- tar cvfz ${TRAVIS_BUILD_DIR}/dist/multus-cni_amd64.tar.gz --warning=no-file-changed --exclude="dist" --exclude="vendor" .
- docker build -t nfvpe/multus -f ./images/Dockerfile .
before_deploy:
- go get -u github.com/laher/goxc
- mkdir -p $TRAVIS_BUILD_DIR/dist
- goxc -d=$TRAVIS_BUILD_DIR/dist -pv=$TRAVIS_TAG -bc=linux -tasks=clean-destination,xc,archive,rmbin
- docker build -t nfvpe/multus .
deploy:
- provider: releases
api_key:
secure: "iy7eqzXNvb/juc+5eVPQ/pFYDTCqDt8Zjt63n+zEK856Qzr2aEZwwOguMWs78XFDMFXagCs5PRTvtvZz8apoTfHX7Wkss3kRyEziAkuldQbH5yGDvpGyHsGBw78N95hauMoogefE7NuuLG3qRSWPeVz8RAKGhP7ADwEVyyfQKKYdum3Bqrz0D89HqKbCQqs3eZae7ppDIler3lab9WAQGuKNJ2HL6mqREVe48kb8sdsuSr+yV4qwVrBDNhXxQDxAT6LYuMXbknE7qTde2vViP13ZHpptbuZqiZG2ytzReIIs/iC9AWoIQXr3XTXl9z8fqlC3VljPCikBWVcmxDFA2aANYzx3M/7fMOO/DniwNhlZc9+pYfAkUrpoQPfPOWNqf45Qz0jP3wk49xy5hxEqe/rfmo5lipSsqeUsk+j3pT8kjVIAnDLrQpxSx7xwnijPLgtm34UwROVowfwLlOhE/7mUOFCbYlzEo3CKvjDN3Kmn35yHEueuu//Gv5jesVYvgcNPBHqaTKb5AXVTqymNBtA43PchLJ8gCC1mNukzSZifQP996vzbV5c9AxzBLjWbiDJ3lOFIpNhF8Sed0m0C0RylrTXHTX5TSrlMdXXffzYwbjJ96J+cFPBTpJNfSn+3N7hiart1r1k1bSXoPqYW4+94M8E1eZ5LjszoeiZbRrI="
file_glob: true
file: "$TRAVIS_BUILD_DIR/dist/*/*.gz"
skip_cleanup: true
# Release on versioned tag (e.g. v1.0)
- provider: script
#skip_cleanup: true
script: curl -sL https://git.io/goreleaser | bash
on:
tags: true
all_branches: true
condition: "$TRAVIS_TAG =~ ^v[0-9].*$"
# Push images to Dockerhub
# Push images to Dockerhub on tag
- provider: script
skip_cleanup: true
script: >
bash -c '
docker tag nfvpe/multus nfvpe/multus:$TRAVIS_TAG;
docker tag nfvpe/multus nfvpe/multus:stable;
docker login -u "$REGISTRY_USER" -p "$REGISTRY_PASS";
docker push nfvpe/multus;
docker push nfvpe/multus:$TRAVIS_TAG'
docker push nfvpe/multus;
docker push nfvpe/multus:stable;
docker push nfvpe/multus:$TRAVIS_TAG;
echo done'
on:
tags: true
all_branches: true
condition: "$TRAVIS_TAG =~ ^v[0-9].*$"
# Push images to Dockerhub on merge to master
- provider: script
on:
branch: master
script: >
bash -c '
docker tag nfvpe/multus nfvpe/multus:snapshot;
docker login -u "$REGISTRY_USER" -p "$REGISTRY_PASS";
docker push nfvpe/multus:snapshot;
docker push nfvpe/multus:latest;
echo done'
after_success:
# put build tgz to bintray

Dockerfile (new file, 22 lines)

@@ -0,0 +1,22 @@
# This Dockerfile is used to build the image available on DockerHub
FROM centos:centos7
# Add everything
ADD . /usr/src/multus-cni
ENV INSTALL_PKGS "git golang"
RUN rpm --import https://mirror.go-repo.io/centos/RPM-GPG-KEY-GO-REPO && \
curl -s https://mirror.go-repo.io/centos/go-repo.repo | tee /etc/yum.repos.d/go-repo.repo && \
yum install -y $INSTALL_PKGS && \
rpm -V $INSTALL_PKGS && \
cd /usr/src/multus-cni && \
./build && \
yum autoremove -y $INSTALL_PKGS && \
yum clean all && \
rm -rf /tmp/*
WORKDIR /
ADD ./images/entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh"]

Dockerfile.openshift (new file, 20 lines)

@@ -0,0 +1,20 @@
# This dockerfile is specific to building Multus for OpenShift
FROM openshift/origin-release:golang-1.10 as builder
ADD . /usr/src/multus-cni
WORKDIR /usr/src/multus-cni
RUN ./build
FROM openshift/origin-base
RUN mkdir -p /usr/src/multus-cni/images && mkdir -p /usr/src/multus-cni/bin
COPY --from=builder /usr/src/multus-cni/images/70-multus.conf /usr/src/multus-cni/images
COPY --from=builder /usr/src/multus-cni/bin/multus /usr/src/multus-cni/bin
ADD ./images/entrypoint.sh /
LABEL io.k8s.display-name="Multus CNI" \
io.k8s.description="This is a component of OpenShift Container Platform and provides a meta CNI plugin." \
io.openshift.tags="openshift" \
maintainer="Doug Smith <dosmith@redhat.com>"
ENTRYPOINT ["/entrypoint.sh"]

README.md (617 lines changed)

@@ -1,42 +1,28 @@
# Multus-CNI
![multus-cni Logo](https://github.com/intel/multus-cni/blob/master/doc/images/Multus.png)
* [MULTUS CNI plugin](#multus-cni-plugin)
* [Quickstart Guide](#quickstart-guide)
* [Multi-Homed pod](#multi-homed-pod)
* [Building from source](#building-from-source)
* [Work flow](#work-flow)
* [Usage with Kubernetes CRD based network objects](#usage-with-kubernetes-crd-based-network-objects)
* [Creating "Network" resources in Kubernetes](#creating-network-resources-in-kubernetes)
* [<strong>CRD based Network objects</strong>](#crd-based-network-objects)
* [Configuring Multus to use the kubeconfig](#configuring-multus-to-use-the-kubeconfig)
* [Configuring Multus to use kubeconfig and a default network](#configuring-multus-to-use-kubeconfig-and-a-default-network)
* [Configuring Pod to use the CRD network objects](#configuring-pod-to-use-the-crd-network-objects)
* [Verifying Pod network interfaces](#verifying-pod-network-interfaces)
* [Using with Multus conf file](#using-with-multus-conf-file)
* [Logging Options](#logging-options)
* [Testing Multus CNI](#testing-multus-cni)
* [Multiple flannel networks](#multiple-flannel-networks)
* [Configure Kubernetes with CNI](#configure-kubernetes-with-cni)
* [Launching workloads in Kubernetes](#launching-workloads-in-kubernetes)
* [Multus additional plugins](#multus-additional-plugins)
* [NFV based networking in Kubernetes](#nfv-based-networking-in-kubernetes)
* [Need help](#need-help)
* [Contacts](#contacts)
[![Travis CI](https://travis-ci.org/intel/multus-cni.svg?branch=master)](https://travis-ci.org/intel/multus-cni/builds)[![Go Report Card](https://goreportcard.com/badge/github.com/intel/multus-cni)](https://goreportcard.com/report/github.com/intel/multus-cni)
# MULTUS CNI plugin
- _Multus_ is a Latin word for "Multi"
- As the name suggests, it acts as a Multi plugin in Kubernetes and provides multiple network interface support in a pod
- Multus supports all [reference plugins](https://github.com/containernetworking/plugins) (eg. [Flannel](https://github.com/containernetworking/plugins/tree/master/plugins/meta/flannel), [DHCP](https://github.com/containernetworking/plugins/tree/master/plugins/ipam/dhcp), [Macvlan](https://github.com/containernetworking/plugins/tree/master/plugins/main/macvlan)) that implement the CNI specification and all 3rd party plugins (eg. [Calico](https://github.com/projectcalico/cni-plugin), [Weave](https://github.com/weaveworks/weave), [Cilium](https://github.com/cilium/cilium), [Contiv](https://github.com/contiv/netplugin)). In addition, Multus supports [SRIOV](https://github.com/hustcat/sriov-cni), [SRIOV-DPDK](https://github.com/Intel-Corp/sriov-cni), and [OVS-DPDK & VPP](https://github.com/intel/vhost-user-net-plugin) workloads in Kubernetes with both cloud native and NFV based applications
- It is a contact point between the container runtime and other plugins; it doesn't have any net configuration of its own, and it calls other plugins like flannel/calico to do the real net conf job
- Multus reuses the concept of invoking delegates as used in flannel by grouping multiple plugins into delegates and invoking them in the sequential order of the CNI configuration file provided in JSON format
- The default network gets "eth0" and additional network Pod interfaces are named "net0", "net1", ... "netX" and so on. Multus also supports interface names from the user.
- Multus is one of the projects in the [Baremetal Container Experience kit](https://networkbuilders.intel.com/network-technologies/container-experience-kits).
Multus CNI enables attaching multiple network interfaces to pods in Kubernetes.
Please check the [CNI](https://github.com/containernetworking/cni) documentation for more information on container networking.
## How it works
# Quickstart Guide
Multus CNI is a container network interface (CNI) plugin for Kubernetes that enables attaching multiple network interfaces to pods. Typically, in Kubernetes each pod only has one network interface (apart from a loopback) -- with Multus you can create a multi-homed pod that has multiple interfaces. This is accomplished by Multus acting as a "meta-plugin", a CNI plugin that can call multiple other CNI plugins.
Multus may be deployed as a Daemonset, and is provided in this guide along with Flannel. Flannel is deployed as a pod-to-pod network that is used as our "default network". Each network attachment is made in addition to this default network.
Multus CNI follows the [Kubernetes Network Custom Resource Definition De-facto Standard](https://docs.google.com/document/d/1Ny03h6IDVy_e_vmElOqR7UdTPAG_RNydhVE1Kx54kFQ/edit) to provide a standardized method by which to specify the configurations for additional network interfaces. This standard is put forward by the Kubernetes [Network Plumbing Working Group](https://docs.google.com/document/d/1oE93V3SgOGWJ4O1zeD1UmpeToa0ZiiO6LqRAmZBPFWM/edit).
Multus is one of the projects in the [Baremetal Container Experience kit](https://networkbuilders.intel.com/network-technologies/container-experience-kits)
### Multi-Homed pod
Here's an illustration of the network interfaces attached to a pod, as provisioned by Multus CNI. The diagram shows the pod with three interfaces: `eth0`, `net0` and `net1`. `eth0` connects to the kubernetes cluster network to reach kubernetes services (e.g. the kubernetes api-server, kubelet and so on). `net0` and `net1` are additional network attachments and connect to other networks by using [other CNI plugins](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/) (e.g. vlan/vxlan/ptp).
![multus-pod-image](doc/images/multus-pod-image.svg)
## Quickstart Installation Guide
Multus may be deployed as a Daemonset, and is provided in this guide along with Flannel. Flannel is deployed as a pod-to-pod network that is used as our "default network" (a network interface that every pod will be created with). Each network attachment is made in addition to this default network.
Firstly, clone this GitHub repository. We'll apply files to `kubectl` from this repo.
@@ -46,561 +32,22 @@ We apply these files as such:
$ cat ./images/{multus-daemonset.yml,flannel-daemonset.yml} | kubectl apply -f -
```
Create a CNI configuration loaded as a CRD object, in this case a macvlan CNI configuration is defined. You may replace the `config` field with any valid CNI configuration where the CNI binary is available on the nodes.
This will configure your systems to be ready to use Multus CNI, but, to get started with adding additional interfaces to your pods, refer to our complete [quick-start guide](doc/quickstart.md)
```
cat <<EOF | kubectl create -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: macvlan-conf
spec:
config: '{
"cniVersion": "0.3.0",
"type": "macvlan",
"master": "eth0",
"mode": "bridge",
"ipam": {
"type": "host-local",
"subnet": "192.168.1.0/24",
"rangeStart": "192.168.1.200",
"rangeEnd": "192.168.1.216",
"routes": [
{ "dst": "0.0.0.0/0" }
],
"gateway": "192.168.1.1"
}
}'
EOF
```
## Additional installation Options
You may then create a pod which attaches this additional interface, where the annotation correlates to the `name` in the `NetworkAttachmentDefinition` above.
- Install via daemonset using the quick-start guide, above.
- Download binaries from [release page](https://github.com/intel/multus-cni/releases)
- By Docker image from [Docker Hub](https://hub.docker.com/r/nfvpe/multus/tags/)
- Or, roll-your-own and build from source
- See [Development](doc/development.md)
```
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
name: samplepod
annotations:
k8s.v1.cni.cncf.io/networks: macvlan-conf
spec:
containers:
- name: samplepod
command: ["/bin/bash", "-c", "sleep 2000000000000"]
image: dougbtv/centos-network
EOF
```
## Comprehensive Documentation
You may now inspect the pod and see that there is an additional interface configured, like so:
- [How to use](doc/how-to-use.md)
- [Configuration](doc/configuration.md)
- [Development](doc/development.md)
```
$ kubectl exec -it samplepod -- ip a
```
# Kubernetes Network Custom Resource Definition De-facto Standard - Reference implementation
* This project is a reference implementation for Kubernetes Network Custom Resource Definition De-facto Standard. For more information refer [Network Plumbing Working Group Agenda](https://docs.google.com/document/d/1oE93V3SgOGWJ4O1zeD1UmpeToa0ZiiO6LqRAmZBPFWM/edit)
* Kubernetes Network Custom Resource Definition De-facto Standard [documentation link](https://docs.google.com/document/d/1Ny03h6IDVy_e_vmElOqR7UdTPAG_RNydhVE1Kx54kFQ/edit)
* Reference implementation support following modes
* CNI config JSON in network object
* Not using CNI config (“thick” plugin usecase)
* CNI configuration stored in on-disk file
> refer to section 3.2 Network Object Definition for more details in the Kubernetes Network Custom Resource Definition De-facto Standard
* Refer to the reference implementation presentation and demo details - [link](https://docs.google.com/presentation/d/1dbCin6MnhK-BjjcVun5YiPTL99VA2uSiyWAtWAPNlIc/edit?usp=sharing)
* Release version from v2.0 is not compatible with v1.1 and v1.2 network CRD specifications.
* [MULTUS CNI plugin](#multus-cni-plugin)
## Multi-Homed pod
<p align="center">
<img src="doc/images/multus_cni_pod.png" width="1008" />
</p>
## Building from source
**This plugin requires Go 1.8 (or later) to build.**
```
#./build
```
## Work flow
<p align="center">
<img src="doc/images/workflow.png" width="1008" />
</p>
## Network configuration reference
- name (string, required): the name of the network
- type (string, required): "multus"
- kubeconfig (string, optional): kubeconfig file for the out of cluster communication with kube-apiserver. See the example [kubeconfig](https://github.com/intel/multus-cni/blob/master/doc/node-kubeconfig.yaml)
- delegates ([]map, required): number of delegate details in the Multus
## Usage with Kubernetes CRD based network objects
Kubelet is responsible for establishing network interfaces for pods; it does this by invoking its configured CNI plugin. When Multus is invoked it retrieves network references from Pod annotation. Multus then uses these network references to get network configurations. Network configurations are defined as Kubernetes Custom Resource Object (CRD). These configurations describe which CNI plugins to invoke and what their configurations are. The order of plugin invocation is important as it identifies the primary plugin. This order is taken from network object references given in a Pod spec.
<p align="center">
<img src="doc/images/multus_crd_usage_diagram.JPG" width="1008" />
</p>
### Creating "Network" resources in Kubernetes
You may wish to create the `network-attachment-definition` manually if you haven't installed using the daemonset technique, which includes the CRD. You can verify whether it's loaded with `kubectl get crd` by looking for the presence of `network-attachment-definition`.
1. Create a Custom Resource Definition (CRD) for the network object, `crdnetwork.yaml`, using the YAML from the examples directory.
```
$ kubectl create -f ./examples/crd.yml
customresourcedefinition.apiextensions.k8s.io/network-attachment-definitions.k8s.cni.cncf.io created
```
3. Run kubectl get command to check the Network CRD creation
```
$ kubectl get crd
NAME KIND
network-attachment-definitions.k8s.cni.cncf.io CustomResourceDefinition.v1beta1.apiextensions.k8s.io
```
### Creating CRD network resources in Kubernetes
1. After creating the CRD network object you can create network resources in Kubernetes. These network resources may contain additional underlying CNI plugin parameters given in JSON format. In the example shown below the args field contains parameters that will be passed into the Flannel plugin.
2. Save the following YAML to flannel-network.yaml
```
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: flannel-networkobj
spec:
config: '{
"cniVersion": "0.3.0",
"type": "flannel",
"delegate": {
"isDefaultGateway": true
}
}'
```
3. Create the custom resource definition
```
$ kubectl create -f ./flannel-network.yaml
network "flannel-networkobj" created
$ kubectl get net-attach-def
NAME AGE
flannel-networkobj 26s
```
4. Get the custom network object details
```
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
clusterName: ""
creationTimestamp: 2018-05-17T09:13:20Z
deletionGracePeriodSeconds: null
deletionTimestamp: null
initializers: null
name: flannel-networkobj
namespace: default
resourceVersion: "21176114"
selfLink: /apis/k8s.cni.cncf.io/v1/namespaces/default/networks/flannel-networkobj
uid: 8ac8f873-59b2-11e8-8308-a4bf01024e6f
spec:
config: '{ "cniVersion": "0.3.0", "type": "flannel", "delegate": { "isDefaultGateway":
true } }'
```
5. Save the following YAML to sriov-network.yaml to create a sriov network object. (Refer to [Intel - SR-IOV CNI](https://github.com/Intel-Corp/sriov-cni) or contact @kural in [Intel-Corp Slack](https://intel-corp.herokuapp.com/) for running the DPDK based workloads in Kubernetes)
```
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: sriov-conf
spec:
config: '{
"type": "sriov",
"if0": "enp12s0f1",
"ipam": {
"type": "host-local",
"subnet": "10.56.217.0/24",
"rangeStart": "10.56.217.171",
"rangeEnd": "10.56.217.181",
"routes": [
{ "dst": "0.0.0.0/0" }
],
"gateway": "10.56.217.1"
}
}'
```
6. Likewise save the following YAML to sriov-vlanid-l2enable-network.yaml to create another sriov based network object:
```
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: sriov-vlanid-l2enable-conf
spec:
config: '{
"type": "sriov",
"if0": "enp2s0",
"vlan": 210,
"l2enable": true
}'
```
7. Follow step 3 above to create "sriov-vlanid-l2enable-conf" and "sriov-conf" network objects
8. View network objects using kubectl
```
# kubectl get net-attach-def
NAME AGE
flannel-networkobj 29m
sriov-conf 6m
sriov-vlanid-l2enable-conf 2m
```
### Configuring Multus to use the kubeconfig
1. Create a Multus CNI configuration file on each Kubernetes node. This file should be created in: /etc/cni/net.d/multus-cni.conf with the content shown below. Use only the absolute path to point to the kubeconfig file (as it may change depending upon your cluster env). We are assuming all CNI plugin binaries are in the default location (`/opt/cni/bin`).
```
{
"name": "node-cni-network",
"type": "multus",
"kubeconfig": "/etc/kubernetes/node-kubeconfig.yaml"
}
```
### Configuring Multus to use kubeconfig and a default network
1. Many users want the Kubernetes default networking feature along with network objects. Refer to issues [#14](https://github.com/intel/multus-cni/issues/14) & [#17](https://github.com/intel/multus-cni/issues/17) for more information. In the following Multus configuration, Weave acts as the default network in the absence of a network field in the pod metadata annotation.
```
{
"name": "node-cni-network",
"type": "multus",
"kubeconfig": "/etc/kubernetes/node-kubeconfig.yaml",
"delegates": [{
"type": "weave-net",
"hairpinMode": true
}]
}
```
Configurations referenced in annotations are created in addition to the default network.
### Configuring Pod to use the CRD network objects
1. Save the following YAML to pod-multi-network.yaml. In this case flannel-conf network object acts as the primary network.
```
# cat pod-multi-network.yaml
apiVersion: v1
kind: Pod
metadata:
name: multus-multi-net-poc
annotations:
k8s.v1.cni.cncf.io/networks: '[
{ "name": "flannel-conf" },
{ "name": "sriov-conf" },
{ "name": "sriov-vlanid-l2enable-conf",
"interfaceRequest": "north" }
]'
spec: # specification of the pod's contents
containers:
- name: multus-multi-net-poc
image: "busybox"
command: ["top"]
stdin: true
tty: true
```
2. Create Multiple network based pod from the master node
```
# kubectl create -f ./pod-multi-network.yaml
pod "multus-multi-net-poc" created
```
3. Get the details of the running pod from the master
```
# kubectl get pods
NAME READY STATUS RESTARTS AGE
multus-multi-net-poc 1/1 Running 0 30s
```
### Verifying Pod network interfaces
1. Run `ifconfig` command in Pod:
```
# kubectl exec -it multus-multi-net-poc -- ifconfig
eth0 Link encap:Ethernet HWaddr C6:43:7C:09:B4:9C
inet addr:10.128.0.4 Bcast:0.0.0.0 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:8 errors:0 dropped:0 overruns:0 frame:0
TX packets:1 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:648 (648.0 B) TX bytes:42 (42.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
net0 Link encap:Ethernet HWaddr 06:21:91:2D:74:B9
inet addr:192.168.42.3 Bcast:0.0.0.0 Mask:255.255.255.0
inet6 addr: fe80::421:91ff:fe2d:74b9/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:648 (648.0 B)
net1 Link encap:Ethernet HWaddr D2:94:98:82:00:00
inet addr:10.56.217.171 Bcast:0.0.0.0 Mask:255.255.255.0
inet6 addr: fe80::d094:98ff:fe82:0/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:2 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:120 (120.0 B) TX bytes:648 (648.0 B)
north Link encap:Ethernet HWaddr BE:F2:48:42:83:12
inet6 addr: fe80::bcf2:48ff:fe42:8312/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:1420 errors:0 dropped:0 overruns:0 frame:0
TX packets:1276 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:95956 (93.7 KiB) TX bytes:82200 (80.2 KiB)
```
| Interface name | Description |
| --- | --- |
| lo | loopback |
| eth0 | weave network interface |
| net0 | Flannel network tap interface |
| net1 | VF0 of NIC 1 assigned to the container by [Intel - SR-IOV CNI](https://github.com/intel/sriov-cni) plugin |
| north | VF0 of NIC 2 assigned with VLAN ID 210 to the container by SR-IOV CNI plugin |
2. Check the vlan ID of the NIC 2 VFs
```
# ip link show enp2s0
20: enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether 24:8a:07:e8:7d:40 brd ff:ff:ff:ff:ff:ff
vf 0 MAC 00:00:00:00:00:00, vlan 210, spoof checking off, link-state auto
vf 1 MAC 00:00:00:00:00:00, vlan 4095, spoof checking off, link-state auto
vf 2 MAC 00:00:00:00:00:00, vlan 4095, spoof checking off, link-state auto
vf 3 MAC 00:00:00:00:00:00, vlan 4095, spoof checking off, link-state auto
```
## Using with Multus conf file
Given the following network configuration:
```
# tee /etc/cni/net.d/multus-cni.conf <<-'EOF'
{
"name": "multus-demo-network",
"type": "multus",
"delegates": [
{
"type": "sriov",
#part of sriov plugin conf
"if0": "enp12s0f0",
"ipam": {
"type": "host-local",
"subnet": "10.56.217.0/24",
"rangeStart": "10.56.217.131",
"rangeEnd": "10.56.217.190",
"routes": [
{ "dst": "0.0.0.0/0" }
],
"gateway": "10.56.217.1"
}
},
{
"type": "ptp",
"ipam": {
"type": "host-local",
"subnet": "10.168.1.0/24",
"rangeStart": "10.168.1.11",
"rangeEnd": "10.168.1.20",
"routes": [
{ "dst": "0.0.0.0/0" }
],
"gateway": "10.168.1.1"
}
},
{
"type": "flannel",
"delegate": {
"isDefaultGateway": true
}
}
]
}
EOF
```
## Logging Options
You may wish to enable some enhanced logging for Multus, especially during the process where you're configuring Multus and need to understand what is or isn't working with your particular configuration.
Multus will always log via `STDERR`, which is the standard method by which CNI plugins communicate errors, and these errors are logged by the Kubelet. This method is always enabled.
### Writing to a Log File
Optionally, you may have Multus log to a file on the filesystem. This file will be written locally on each node where Multus is executed. You may configure this via the `LogFile` option in the CNI configuration. By default this additional logging to a flat file is disabled.
For example in your CNI configuration, you may set:
```
"LogFile": "/var/log/multus.log",
```
### Logging Level
The default logging level is set as `panic` -- this will log only the most critical errors, and is the least verbose logging level.
The available logging level values, in decreasing order of verbosity are:
* `debug`
* `error`
* `panic`
You may configure the logging level by using the `LogLevel` option in your CNI configuration. For example:
```
"LogLevel": "debug",
```
## Testing Multus CNI
### Multiple flannel networks
Github user [YYGCui](https://github.com/YYGCui) has used multiple flannel networks to work with the Multus CNI plugin. Please refer to this [closed issue](https://github.com/intel/multus-cni/issues/7) for multiple overlay network support with Multus CNI.
Make sure that the multus, [sriov](https://github.com/Intel-Corp/sriov-cni), [flannel](https://github.com/containernetworking/cni/blob/master/Documentation/flannel.md), and [ptp](https://github.com/containernetworking/cni/blob/master/Documentation/ptp.md) binaries are in the /opt/cni/bin directory and follow the steps as mentioned in the [CNI](https://github.com/containernetworking/cni/#running-a-docker-container-with-network-namespace-set-up-by-cni-plugins)
#### Configure Kubernetes with CNI
Kubelet must be configured to run with the CNI network plugin. Edit the `/etc/kubernetes/kubelet` file and add the `--network-plugin=cni` flag to `KUBELET_OPTS` as shown below:
```
KUBELET_OPTS="...
--network-plugin-dir=/etc/cni/net.d
--network-plugin=cni
"
```
Refer to the Kubernetes User Guide and network plugin for more information.
- [Single Node](https://kubernetes.io/docs/getting-started-guides/fedora/fedora_manual_config/)
- [Multi Node](https://kubernetes.io/docs/getting-started-guides/fedora/flannel_multi_node_cluster/)
- [Network plugin](https://kubernetes.io/docs/admin/network-plugins/)
#### Launching workloads in Kubernetes
With Multus CNI configured as described in the sections above, each workload launched via a Kubernetes Pod will have multiple network interfaces. Launch the workload using a yaml file on the kubernetes master; with the above configuration in Multus CNI, each pod should have multiple interfaces.
Note: To verify whether Multus CNI plugin is working correctly, create a pod containing one `busybox` container and execute `ip link` command to check if interfaces management follows configuration.
1. Create `multus-test.yaml` file containing below configuration. Created pod will consist of one `busybox` container running `top` command.
```
apiVersion: v1
kind: Pod
metadata:
name: multus-test
spec: # specification of the pod's contents
restartPolicy: Never
containers:
- name: test1
image: "busybox"
command: ["top"]
stdin: true
tty: true
```
2. Create pod using command:
```
# kubectl create -f multus-test.yaml
pod "multus-test" created
```
3. Run "ip link" command inside the container:
```
# 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
3: eth0@if41: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 26:52:6b:d8:44:2d brd ff:ff:ff:ff:ff:ff
20: net0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq qlen 1000
link/ether f6:fb:21:4f:1d:63 brd ff:ff:ff:ff:ff:ff
21: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq qlen 1000
link/ether 76:13:b1:60:00:00 brd ff:ff:ff:ff:ff:ff
```
| Interface name | Description |
| --- | --- |
| lo | loopback |
| eth0@if41 | Flannel network tap interface |
| net0 | VF assigned to the container by [SR-IOV CNI](https://github.com/Intel-Corp/sriov-cni) plugin |
| net1 | ptp localhost interface |
## Multus additional plugins
- [DPDK-SRIOV CNI](https://github.com/Intel-Corp/sriov-cni)
- [Vhostuser CNI](https://github.com/intel/vhost-user-net-plugin) - a Dataplane network plugin - Supports OVS-DPDK & VPP
- [Bond CNI](https://github.com/Intel-Corp/bond-cni) - For fail-over and high availability of networking
## NFV based networking in Kubernetes
- KubeCon workshop on ["Enabling NFV features in Kubernetes"](https://kccncna17.sched.com/event/Cvnw/enabling-nfv-features-in-kubernetes-hosted-by-kuralamudhan-ramakrishnan-ivan-coughlan-intel) presentation [slide deck](https://www.slideshare.net/KuralamudhanRamakris/enabling-nfv-features-in-kubernetes-83923352)
- Feature brief
- [Multiple Network Interface Support in Kubernetes](https://builders.intel.com/docs/networkbuilders/multiple-network-interfaces-support-in-kubernetes-feature-brief.pdf)
- [Enhanced Platform Awareness in Kubernetes](https://builders.intel.com/docs/networkbuilders/enhanced-platform-awareness-feature-brief.pdf)
- Application note
- [Multiple Network Interfaces in Kubernetes and Container Bare Metal](https://builders.intel.com/docs/networkbuilders/multiple-network-interfaces-in-kubernetes-application-note.pdf)
- [Enhanced Platform Awareness Features in Kubernetes](https://builders.intel.com/docs/networkbuilders/enhanced-platform-awareness-in-kubernetes-application-note.pdf)
- White paper
- [Enabling New Features with Kubernetes for NFV](https://builders.intel.com/docs/networkbuilders/enabling_new_features_in_kubernetes_for_NFV.pdf)
- Multus's related project github pages
- [Multus](https://github.com/Intel-Corp/multus-cni)
- [SRIOV - DPDK CNI](https://github.com/Intel-Corp/sriov-cni)
- [Vhostuser - VPP & OVS - DPDK CNI](https://github.com/intel/vhost-user-net-plugin)
- [Bond CNI](https://github.com/Intel-Corp/bond-cni)
- [Node Feature Discovery](https://github.com/kubernetes-incubator/node-feature-discovery)
- [CPU Manager for Kubernetes](https://github.com/Intel-Corp/CPU-Manager-for-Kubernetes)
## Need help
- Read [Containers Experience Kits](https://networkbuilders.intel.com/network-technologies/container-experience-kits)
- Try our container experience kit demo - KubeCon workshop on [Enabling NFV Features in Kubernetes](https://github.com/intel/container-experience-kits-demo-area/)
- Join us on the [#intel-sddsg-slack](https://intel-corp.herokuapp.com/) Slack channel and ask questions in [#general-discussion](https://intel-corp-team.slack.com/messages/C4C5RSEER)
- You can also [email](mailto:kuralamudhan.ramakrishnan@intel.com) us
- Feel free to [submit](https://github.com/Intel-Corp/multus-cni/issues/new) an issue
Please fill in your questions/feedback in this [Google form](https://goo.gl/forms/upBWyGs8Wmq69IEi2)!
## Contact Us
For any questions about Multus CNI, feel free to ask in #general in the [Intel-Corp Slack](https://intel-corp.herokuapp.com/), reach out to the developer @kural there, or open up a GitHub issue.

build

@@ -4,6 +4,20 @@ set -e
ORG_PATH="github.com/intel"
REPO_PATH="${ORG_PATH}/multus-cni"
# Add version/commit/date into binary
# In case of TravisCI, need to check error code of 'git describe'.
set +e
git describe --tags --abbrev=0 > /dev/null 2>&1
if [ "$?" != "0" ]; then
VERSION="master"
else
VERSION=$(git describe --tags --abbrev=0)
fi
set -e
DATE=$(date --iso-8601=seconds)
COMMIT=$(git rev-parse --verify HEAD)
LDFLAGS="-X main.version=${VERSION:-master} -X main.commit=${COMMIT} -X main.date=${DATE}"
if [ ! -h gopath/src/${REPO_PATH} ]; then
mkdir -p gopath/src/${ORG_PATH}
ln -s ../../../.. gopath/src/${REPO_PATH} || exit 255
@@ -14,4 +28,4 @@ export GOBIN=${PWD}/bin
export GOPATH=${PWD}/gopath
echo "Building plugins"
go install "$@" ${REPO_PATH}/multus
go install -ldflags "${LDFLAGS}" "$@" ${REPO_PATH}/multus
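The `-X main.version=...`, `-X main.commit=...`, and `-X main.date=...` flags above only take effect if `package main` declares matching string variables. A minimal, hypothetical sketch of the receiving side (not the actual Multus `main.go`):
```go
package main

import "fmt"

// These package-level variables are overwritten at link time by the build
// script above via:
//   -X main.version=... -X main.commit=... -X main.date=...
// The defaults below are only used when building without those ldflags.
var (
	version = "master"
	commit  = "unknown"
	date    = "unknown"
)

func main() {
	// Hypothetical use: surface the injected build information in a log line.
	fmt.Printf("multus-cni version:%s, commit:%s, date:%s\n", version, commit, date)
}
```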

checkpoint/checkpoint.go (new file)

@@ -0,0 +1,113 @@
// Copyright (c) 2018 Intel Corporation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
package checkpoint
import (
"encoding/json"
"io/ioutil"
"github.com/intel/multus-cni/logging"
"github.com/intel/multus-cni/types"
)
const (
checkPointfile = "/var/lib/kubelet/device-plugins/kubelet_internal_checkpoint"
)
type PodDevicesEntry struct {
PodUID string
ContainerName string
ResourceName string
DeviceIDs []string
AllocResp []byte
}
type checkpointData struct {
PodDeviceEntries []PodDevicesEntry
RegisteredDevices map[string][]string
}
type Data struct {
Data checkpointData
Checksum uint64
}
type Checkpoint interface {
// GetComputeDeviceMap returns an instance of a map of ResourceInfo for a PodID
GetComputeDeviceMap(string) (map[string]*types.ResourceInfo, error)
}
type checkpoint struct {
fileName string
podEntires []PodDevicesEntry
}
// GetCheckpoint returns an instance of Checkpoint
func GetCheckpoint() (Checkpoint, error) {
logging.Debugf("GetCheckpoint(): invoked")
return getCheckpoint(checkPointfile)
}
func getCheckpoint(filePath string) (Checkpoint, error) {
cp := &checkpoint{fileName: filePath}
err := cp.getPodEntries()
if err != nil {
return nil, err
}
logging.Debugf("getCheckpoint(): created checkpoint instance with file: %s", filePath)
return cp, nil
}
// getPodEntries gets all Pod device allocation entries from checkpoint file
func (cp *checkpoint) getPodEntries() error {
cpd := &Data{}
rawBytes, err := ioutil.ReadFile(cp.fileName)
if err != nil {
return logging.Errorf("getPodEntries(): error reading file %s\n%v\n", checkPointfile, err)
}
if err = json.Unmarshal(rawBytes, cpd); err != nil {
return logging.Errorf("getPodEntries(): error unmarshalling raw bytes %v", err)
}
cp.podEntires = cpd.Data.PodDeviceEntries
logging.Debugf("getPodEntries(): podEntires %+v", cp.podEntires)
return nil
}
// GetComputeDeviceMap returns an instance of a map of ResourceInfo
func (cp *checkpoint) GetComputeDeviceMap(podID string) (map[string]*types.ResourceInfo, error) {
resourceMap := make(map[string]*types.ResourceInfo)
if podID == "" {
return nil, logging.Errorf("GetComputeDeviceMap(): invalid Pod cannot be empty")
}
for _, pod := range cp.podEntires {
if pod.PodUID == podID {
entry, ok := resourceMap[pod.ResourceName]
if ok {
// already exists; append to it
entry.DeviceIDs = append(entry.DeviceIDs, pod.DeviceIDs...)
} else {
// new entry
resourceMap[pod.ResourceName] = &types.ResourceInfo{DeviceIDs: pod.DeviceIDs}
}
}
}
return resourceMap, nil
}
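As a rough usage sketch (not part of the repository), a caller could combine `GetCheckpoint()` and `GetComputeDeviceMap()` like this; the pod UID below is borrowed from the test fixture that follows and is only for illustration:
```go
package main

import (
	"fmt"

	"github.com/intel/multus-cni/checkpoint"
)

func main() {
	// GetCheckpoint reads /var/lib/kubelet/device-plugins/kubelet_internal_checkpoint,
	// so this only works on a node where the kubelet manages device plugins.
	cp, err := checkpoint.GetCheckpoint()
	if err != nil {
		fmt.Println("failed to read kubelet checkpoint:", err)
		return
	}

	// Example pod UID (taken from the test fixture below); in Multus this
	// comes from the pod currently being processed.
	podUID := "970a395d-bb3b-11e8-89df-408d5c537d23"

	resourceMap, err := cp.GetComputeDeviceMap(podUID)
	if err != nil {
		fmt.Println("failed to build resource map:", err)
		return
	}

	// resourceMap is keyed by resource name (e.g. "intel.com/sriov_net_A");
	// each entry lists the device IDs allocated to that pod.
	for name, info := range resourceMap {
		fmt.Printf("%s -> %v\n", name, info.DeviceIDs)
	}
}
```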


@@ -0,0 +1,120 @@
package checkpoint
import (
"os"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
"io/ioutil"
"testing"
"github.com/intel/multus-cni/types"
)
const (
fakeTempFile = "/tmp/kubelet_internal_checkpoint"
)
type fakeCheckpoint struct {
fileName string
}
func (fc *fakeCheckpoint) WriteToFile(inBytes []byte) error {
return ioutil.WriteFile(fc.fileName, inBytes, 0600)
}
func (fc *fakeCheckpoint) DeleteFile() error {
return os.Remove(fc.fileName)
}
func TestCheckpoint(t *testing.T) {
RegisterFailHandler(Fail)
RunSpecs(t, "Checkpoint")
}
var _ = BeforeSuite(func() {
sampleData := `{
"Data": {
"PodDeviceEntries": [
{
"PodUID": "970a395d-bb3b-11e8-89df-408d5c537d23",
"ContainerName": "appcntr1",
"ResourceName": "intel.com/sriov_net_A",
"DeviceIDs": [
"0000:03:02.3",
"0000:03:02.0"
],
"AllocResp": "CikKC3NyaW92X25ldF9BEhogMDAwMDowMzowMi4zIDAwMDA6MDM6MDIuMA=="
}
],
"RegisteredDevices": {
"intel.com/sriov_net_A": [
"0000:03:02.1",
"0000:03:02.2",
"0000:03:02.3",
"0000:03:02.0"
],
"intel.com/sriov_net_B": [
"0000:03:06.3",
"0000:03:06.0",
"0000:03:06.1",
"0000:03:06.2"
]
}
},
"Checksum": 229855270
}`
fakeCheckpoint := &fakeCheckpoint{fileName: fakeTempFile}
err := fakeCheckpoint.WriteToFile([]byte(sampleData))
Expect(err).NotTo(HaveOccurred())
})
var _ = Describe("Kubelet checkpoint data read operations", func() {
Context("Using /tmp/kubelet_internal_checkpoint file", func() {
var (
cp Checkpoint
err error
resourceMap map[string]*types.ResourceInfo
resourceInfo *types.ResourceInfo
resourceAnnot = "intel.com/sriov_net_A"
)
It("should get a Checkpoint instance from file", func() {
cp, err = getCheckpoint(fakeTempFile)
Expect(err).NotTo(HaveOccurred())
})
It("should return a ResourceMap instance", func() {
rmap, err := cp.GetComputeDeviceMap("970a395d-bb3b-11e8-89df-408d5c537d23")
Expect(err).NotTo(HaveOccurred())
Expect(rmap).NotTo(BeEmpty())
resourceMap = rmap
})
It("resourceMap should have value for \"intel.com/sriov_net_A\"", func() {
rInfo, ok := resourceMap[resourceAnnot]
Expect(ok).To(BeTrue())
resourceInfo = rInfo
})
It("should have 2 deviceIDs", func() {
Expect(len(resourceInfo.DeviceIDs)).To(BeEquivalentTo(2))
})
It("should have \"0000:03:02.3\" in deviceIDs[0]", func() {
Expect(resourceInfo.DeviceIDs[0]).To(BeEquivalentTo("0000:03:02.3"))
})
It("should have \"0000:03:02.0\" in deviceIDs[1]", func() {
Expect(resourceInfo.DeviceIDs[1]).To(BeEquivalentTo("0000:03:02.0"))
})
})
})
var _ = AfterSuite(func() {
fakeCheckpoint := &fakeCheckpoint{fileName: fakeTempFile}
err := fakeCheckpoint.DeleteFile()
Expect(err).NotTo(HaveOccurred())
})

doc/configuration.md (new file)

@@ -0,0 +1,276 @@
## Multus-cni Configuration Reference
The following is an example Multus config file, in `/etc/cni/net.d/`.
(`"Note1"` and `"Note2"` are just comments; you can remove them from your configuration.)
```
{
"name": "node-cni-network",
"type": "multus",
"kubeconfig": "/etc/kubernetes/node-kubeconfig.yaml",
"confDir": "/etc/cni/multus/net.d",
"cniDir": "/var/lib/cni/multus",
"binDir": "/opt/cni/bin",
"logFile": "/var/log/multus.log",
"logLevel": "debug",
"capabilities": {
"portMappings": true
},
"readinessindicatorfile": "",
"namespaceIsolation": false,
"Note1":"NOTE: you can set clusterNetwork+defaultNetworks OR delegates!!",
"clusterNetwork": "defaultCRD",
"defaultNetworks": ["sidecarCRD", "flannel"],
"systemNamespaces": ["kube-system", "admin"],
"multusNamespace": "kube-system",
"Note2":"NOTE: If you use clusterNetwork/defaultNetworks, delegates is ignored",
"delegates": [{
"type": "weave-net",
"hairpinMode": true
}, {
"type": "macvlan",
... (snip)
}]
}
```
* `name` (string, required): the name of the network
* `type` (string, required): "multus"
* `confDir` (string, optional): directory for the CNI config files that Multus reads. Default: `/etc/cni/multus/net.d`
* `cniDir` (string, optional): Multus CNI data directory. Default: `/var/lib/cni/multus`
* `binDir` (string, optional): directory for the CNI plugins that Multus calls. Default: `/opt/cni/bin`
* `kubeconfig` (string, optional): kubeconfig file for the out of cluster communication with kube-apiserver. See the example [kubeconfig](https://github.com/intel/multus-cni/blob/master/doc/node-kubeconfig.yaml). If you would like to use CRD (i.e. network attachment definition), this is required
* `logFile` (string, optional): path to a log file; Multus writes its logs to this file
* `logLevel` (string, optional): logging level ("debug", "error" or "panic")
* `namespaceIsolation` (boolean, optional): Enables a security feature where pods are only allowed to access `NetworkAttachmentDefinitions` in the namespace where the pod resides. Defaults to false.
* `capabilities` ({}list, optional): [capabilities](https://github.com/containernetworking/cni/blob/master/CONVENTIONS.md#dynamic-plugin-specific-fields-capabilities--runtime-configuration) supported by at least one of the delegates. (NOTE: Multus only supports portMappings capability for now). See the [example](https://github.com/intel/multus-cni/blob/master/examples/multus-ptp-portmap.conf).
* `readinessindicatorfile`: The path to a file whose existence denotes that the default network is ready
Users should choose one of the following parameter combinations (`clusterNetwork` + `defaultNetworks`, or `delegates`):
* `clusterNetwork` (string, required): the default CNI network for pods, used cluster-wide (Pod IP and so on): the name of a network-attachment-definition, a CNI JSON file name (without the .conf/.conflist extension), or a directory of CNI config files
* `defaultNetworks` ([]string, required): default CNI network attachments: the name of a network-attachment-definition, a CNI JSON file name (without the .conf/.conflist extension), or a directory of CNI config files
* `systemNamespaces` ([]string, optional): list of namespaces for the Kubernetes system (namespaces listed here will not have `defaultNetworks` added)
* `multusNamespace` (string, optional): namespace for `clusterNetwork`/`defaultNetworks`
* `delegates` ([]map, required): the list of delegate CNI configurations used by Multus
### Network selection flow of clusterNetwork/defaultNetworks
Multus resolves the network for clusterNetwork/defaultNetworks in the following sequence:
1. A CRD object with the given network name, in the 'kube-system' namespace
1. A CNI JSON config file in `confDir`. The given name should be without the extension, like .conf/.conflist (e.g. "test" for "test.conf")
1. A directory of CNI JSON config files. Multus uses the alphabetically first file for the network
1. If Multus fails to find the network, it raises an error
## Miscellaneous config
### Default Network Readiness Indicator
You may wish for your "default network" (that is, the CNI plugin & its configuration you specify as your default delegate) to become ready before you attach networks with Multus. This is disabled by default and not used unless you add the readiness check option(s) to your CNI configuration file.
For example, if you use Flannel as a default network, the recommended method for Flannel to be installed is via a daemonset that also drops a configuration file in `/etc/cni/net.d/`. This may apply to other plugins that place that configuration file upon their readiness, hence, Multus uses their configuration filename as a semaphore and optionally waits to attach networks to pods until that file exists.
In this manner, you may prevent pods from crash looping, and instead wait for that default network to be ready.
Only one option is necessary to configure this functionality:
* `readinessindicatorfile`: The path to a file whose existence denotes that the default network is ready.
*NOTE*: If `readinessindicatorfile` is unset, or is an empty string, this functionality will be disabled, and is disabled by default.
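For example, if Flannel provides your default network (as in the daemonset-based install shown elsewhere in these docs), you might point this option at the file Flannel writes once its subnet is configured:
```
"readinessindicatorfile": "/var/run/flannel/subnet.env",
```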
### Logging
You may wish to enable some enhanced logging for Multus, especially during the process where you're configuring Multus and need to understand what is or isn't working with your particular configuration.
Multus will always log via `STDERR`, which is the standard method by which CNI plugins communicate errors, and these errors are logged by the Kubelet. This method is always enabled.
#### Writing to a Log File
Optionally, you may have Multus log to a file on the filesystem. This file will be written locally on each node where Multus is executed. You may configure this via the `LogFile` option in the CNI configuration. By default this additional logging to a flat file is disabled.
For example in your CNI configuration, you may set:
```
"LogFile": "/var/log/multus.log",
```
#### Logging Level
The default logging level is set as `panic` -- this will log only the most critical errors, and is the least verbose logging level.
The available logging level values, in decreasing order of verbosity are:
* `debug`
* `error`
* `panic`
You may configure the logging level by using the `LogLevel` option in your CNI configuration. For example:
```
"LogLevel": "debug",
```
### Namespace Isolation
The functionality provided by the `namespaceIsolation` configuration option enables a mode where Multus only allows pods to access custom resources (the `NetworkAttachmentDefinitions`) within the namespace where that pod resides. In other words, the `NetworkAttachmentDefinitions` are isolated to usage within the namespace in which they're created.
For example, if a pod is created in the namespace called `development`, Multus will not allow networks to be attached when defined by custom resources created in a different namespace, say in the `default` namespace.
Consider the situation where you have a system that has users of different privilege levels -- as an example, a platform which has two administrators: a Senior Administrator and a Junior Administrator. The Senior Administrator may have access to all namespaces, and some network configurations as used by Multus are considered to be privileged in that they allow access to some protected resources available on the network. However, the Junior Administrator has access to only a subset of namespaces, and can therefore only create pods in that limited subset of namespaces. The `namespaceIsolation` feature provides for this isolation, allowing pods created in given namespaces to only access custom resources in the same namespace as the pod.
Namespace Isolation is disabled by default.
#### Configuration example
```
"namespaceIsolation": true,
```
#### Usage example
Let's setup an example where we:
* Create a custom resource in a namespace called `privileged`
* Create a pod in a namespace called `development`, and have annotations that reference a custom resource in the `privileged` namespace. The creation of this pod should be disallowed by Multus (as we'll have the use of the custom resources limited only to those custom resources created within the same namespace as the pod).
Given the above scenario with a Junior & Senior Administrator, you may assume that the Senior Administrator has access to all namespaces, whereas the Junior Administrator has access only to the `development` namespace.
Firstly, we show that we have a number of namespaces available:
```
# List the available namespaces
[user@kube-master ~]$ kubectl get namespaces
NAME STATUS AGE
default Active 7h27m
development Active 3h
kube-public Active 7h27m
kube-system Active 7h27m
privileged Active 4s
```
We'll create a `NetworkAttachmentDefinition` in the `privileged` namespace.
```
# Show the network attachment definition we're creating.
[user@kube-master ~]$ cat cr.yml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: macvlan-conf
spec:
config: '{
"cniVersion": "0.3.0",
"type": "macvlan",
"master": "eth0",
"mode": "bridge",
"ipam": {
"type": "host-local",
"subnet": "192.168.1.0/24",
"rangeStart": "192.168.1.200",
"rangeEnd": "192.168.1.216",
"routes": [
{ "dst": "0.0.0.0/0" }
],
"gateway": "192.168.1.1"
}
}'
# Create that network attachment definition in the privileged namespace
[user@kube-master ~]$ kubectl create -f cr.yml -n privileged
networkattachmentdefinition.k8s.cni.cncf.io/macvlan-conf created
# List the available network attachment definitions in the privileged namespace.
[user@kube-master ~]$ kubectl get networkattachmentdefinition.k8s.cni.cncf.io -n privileged
NAME AGE
macvlan-conf 11s
```
Next, we'll create a pod with an annotation that references the privileged namespace. Pay particular attention to the annotation that reads `k8s.v1.cni.cncf.io/networks: privileged/macvlan-conf` -- where it contains a reference to a `namespace/configuration-name` formatted network attachment name. In this case referring to the `macvlan-conf` in the namespace called `privileged`.
```
# Show the yaml for a pod.
[user@kube-master ~]$ cat example.pod.yml
apiVersion: v1
kind: Pod
metadata:
name: samplepod
annotations:
k8s.v1.cni.cncf.io/networks: privileged/macvlan-conf
spec:
containers:
- name: samplepod
command: ["/bin/bash", "-c", "sleep 2000000000000"]
image: dougbtv/centos-network
# Create that pod.
[user@kube-master ~]$ kubectl create -f example.pod.yml -n development
pod/samplepod created
```
You'll note that the pod fails to spawn successfully. If you check the Multus logs, you'll see an entry such as:
```
2018-12-18T21:41:32Z [error] GetPodNetwork: namespace isolation violation: podnamespace: development / target namespace: privileged
```
This error expresses that the pod resides in the namespace named `development` but refers to a `NetworkAttachmentDefinition` outside of that namespace, in this case, the namespace named `privileged`.
In a positive example, you'd instead create the `NetworkAttachmentDefinition` in the `development` namespace, and you'd have an annotation that either A. does not reference a namespace, or B. refers to that same namespace.
A positive example may be:
```
# Create the same NetworkAttachmentDefinition as above, however in the development namespace
[user@kube-master ~]$ kubectl create -f cr.yml -n development
networkattachmentdefinition.k8s.cni.cncf.io/macvlan-conf created
# Show the yaml for a sample pod which references macvlan-conf without a namespace/ format
[user@kube-master ~]$ cat positive.example.pod
apiVersion: v1
kind: Pod
metadata:
name: samplepod
annotations:
k8s.v1.cni.cncf.io/networks: macvlan-conf
spec:
containers:
- name: samplepod
command: ["/bin/bash", "-c", "sleep 2000000000000"]
image: dougbtv/centos-network
# Create that pod.
[user@kube-master ~]$ kubectl create -f positive.example.pod -n development
pod/samplepod created
# We can see that this pod has been launched successfully.
[user@kube-master ~]$ kubectl get pods -n development
NAME READY STATUS RESTARTS AGE
samplepod 1/1 Running 0 31s
```
### Specify default cluster network in Pod annotations
Users may also specify the default network for any given pod (via annotation), for cases where there are multiple cluster networks available within a Kubernetes cluster.
Example use cases may include:
1. During a migration from one default network to another (e.g. from Flannel to Calico), it may be practical if both network solutions are able to operate in parallel. Users can then control which network a pod should attach to during the transition period.
2. Some users may deploy multiple cluster networks for the sake of their security considerations, and may desire to specify the default network for individual pods.
Follow these steps to specify the default network on a pod-by-pod basis:
1. First, you need to define all your cluster networks as network-attachment-definition objects.
2. Next, you can specify the network you want in pods with the `v1.multus-cni.io/default-network` annotation. Pods which do not specify this annotation will keep using the CNI as defined in the Multus config file.
```yaml
apiVersion: v1
kind: Pod
metadata:
name: pod-example
annotations:
v1.multus-cni.io/default-network: calico-conf
...
```

doc/development.md (new file)

@@ -0,0 +1,30 @@
## Development Information
## How to build multus-cni?
```
git clone https://github.com/intel/multus-cni.git
cd multus-cni
./build
```
## How to run CI tests?
Multus has Go unit tests (based on the ginkgo framework). The following command runs the CI tests manually in your environment:
```
sudo ./test.sh
```
## Logging Best Practices
The following are Multus logging best practices:
* Add `logging.Debugf()` at the beginning of functions
* In case of error handling, use `logging.Errorf()` with the given error info
* `logging.Panicf()` should only be used for very critical errors (it should NOT normally be used)
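A minimal sketch of these conventions, modeled on the way `checkpoint.go` uses the `logging` package; the helper function and its `netConf` type are made up for illustration:
```go
package example

import (
	"encoding/json"
	"io/ioutil"

	"github.com/intel/multus-cni/logging"
)

// netConf is a stand-in type used only for this illustration.
type netConf struct {
	Name string `json:"name"`
	Type string `json:"type"`
}

// loadConf is a hypothetical helper showing the logging conventions.
func loadConf(path string) (*netConf, error) {
	// 1. Debug log at the beginning of the function.
	logging.Debugf("loadConf(): path=%s", path)

	bytes, err := ioutil.ReadFile(path)
	if err != nil {
		// 2. On error, use logging.Errorf(), which logs and returns an error.
		return nil, logging.Errorf("loadConf(): failed to read %s: %v", path, err)
	}

	conf := &netConf{}
	if err := json.Unmarshal(bytes, conf); err != nil {
		return nil, logging.Errorf("loadConf(): failed to parse %s: %v", path, err)
	}

	// 3. logging.Panicf() is reserved for truly unrecoverable situations and
	// is intentionally not used here.
	return conf, nil
}
```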
## CI Introduction
TBD

doc/how-to-use.md (new file)

@@ -0,0 +1,470 @@
## How to use multus-cni?
### Prerequisites
* Kubelet configured to use CNI
* A Kubernetes version with CRD support
Your Kubelet(s) must be configured to run with the CNI network plugin. Please see [Kubernetes document for CNI](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#cni) for more details.
### Install multus
Generally we recommend two options: manually place a Multus binary in your `/opt/cni/bin`, or use our [quick-start method](quickstart.md) -- which creates a daemonset that has an opinionated way of installing and configuring Multus CNI (recommended).
*Copy Multus Binary into place*
You may acquire the Multus binary via compilation (see the [developer guide](development.md)) or download a binary from the [GitHub releases](https://github.com/intel/multus-cni/releases) page. Copy the Multus binary into your CNI binary directory, usually `/opt/cni/bin`. Perform this on all nodes in your cluster (master and nodes).
$ cp multus /opt/cni/bin
*Via Daemonset method*
As a [quickstart](quickstart.md), you may apply these YAML files (included in the clone of this repository). Run this command (typically you would run this on the master, or wherever you have access to the `kubectl` command to manage your cluster).
$ cat ./images/{multus-daemonset.yml,flannel-daemonset.yml} | kubectl apply -f -
If you need more comprehensive detail, continue along with this guide; otherwise, you may wish to either [follow the quickstart guide](quickstart.md) or skip to the ['Create network attachment definition'](#create-network-attachment-definition) section.
### Set up conf file in /etc/cni/net.d/ (Installed automatically by Daemonset)
**If you use daemonset to install multus, skip this section and go to "Create network attachment"**
Put the CNI config file in `/etc/cni/net.d`. The Kubernetes CNI runtime uses the alphabetically first file in the directory. (`"NOTE1"`, `"NOTE2"` are just comments; you can remove them from your configuration.)
Execute the following commands on all Kubernetes nodes (i.e. master and minions):
```
$ mkdir -p /etc/cni/net.d
$ cat >/etc/cni/net.d/30-multus.conf <<EOF
{
"name": "multus-cni-network",
"type": "multus",
"readinessindicatorfile": "/var/run/flannel/subnet.env",
"delegates": [
{
"NOTE1": "This is example, wrote your CNI config in delegates",
"NOTE2": "If you use flannel, you also need to run flannel daemonset before!",
"type": "flannel",
"name": "flannel.1",
"delegate": {
"isDefaultGateway": true
}
}
],
"kubeconfig": "/etc/cni/net.d/multus.d/multus.kubeconfig"
}
EOF
```
For details, please take a look at the [Configuration Reference](configuration.md).
**NOTE: You can use "clusterNetwork"/"defaultNetworks" instead of "delegates"; see the [Configuration Reference](configuration.md) for details.**
As in the above config, you need to set `"kubeconfig"` in the config file in order to use NetworkAttachmentDefinition (CRD) objects.
##### Which network will be used for "Pod IP"?
In the case of "delegates", the first delegate's network is used for the "Pod IP". Otherwise, "clusterNetwork" is used for the "Pod IP".
#### Create ServiceAccount, ClusterRole and its binding
Create the resources that allow Multus to access CRD objects with the following command:
```
# Execute following commands at Kubernetes master
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: ServiceAccount
metadata:
name: multus
namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: multus
rules:
- apiGroups: ["k8s.cni.cncf.io"]
resources:
- '*'
verbs:
- '*'
- apiGroups:
- ""
resources:
- pods
- pods/status
verbs:
- get
- update
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: multus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: multus
subjects:
- kind: ServiceAccount
name: multus
namespace: kube-system
EOF
```
#### Set up kubeconfig file
Create the kubeconfig on the master node with the following commands:
```
# Execute following command at Kubernetes master
$ mkdir -p /etc/cni/net.d/multus.d
$ SERVICEACCOUNT_CA=$(kubectl get secrets -n=kube-system -o json | jq -r '.items[]|select(.metadata.annotations."kubernetes.io/service-account.name"=="multus")| .data."ca.crt"')
$ SERVICEACCOUNT_TOKEN=$(kubectl get secrets -n=kube-system -o json | jq -r '.items[]|select(.metadata.annotations."kubernetes.io/service-account.name"=="multus")| .data.token' | base64 -d )
$ KUBERNETES_SERVICE_PROTOCOL=$(kubectl get all -o json | jq -r .items[0].spec.ports[0].name)
$ KUBERNETES_SERVICE_HOST=$(kubectl get all -o json | jq -r .items[0].spec.clusterIP)
$ KUBERNETES_SERVICE_PORT=$(kubectl get all -o json | jq -r .items[0].spec.ports[0].port)
$ cat > /etc/cni/net.d/multus.d/multus.kubeconfig <<EOF
# Kubeconfig file for Multus CNI plugin.
apiVersion: v1
kind: Config
clusters:
- name: local
cluster:
server: ${KUBERNETES_SERVICE_PROTOCOL:-https}://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}
certificate-authority-data: ${SERVICEACCOUNT_CA}
users:
- name: multus
user:
token: "${SERVICEACCOUNT_TOKEN}"
contexts:
- name: multus-context
context:
cluster: local
user: multus
current-context: multus-context
EOF
```
Copy `/etc/cni/net.d/multus.d/multus.kubeconfig` to the other Kubernetes nodes:
**NOTE: It is recommended to run `chmod 600 /etc/cni/net.d/multus.d/multus.kubeconfig` to keep it secure.**
```
$ scp /etc/cni/net.d/multus.d/multus.kubeconfig ...
```
### Set up CRDs (the daemonset does this automatically)
**If you use daemonset to install multus, skip this section and go to "Create network attachment"**
Create the CRD definition in Kubernetes with the following command at the master node:
```
# Execute following command at Kubernetes master
$ cat <<EOF | kubectl create -f -
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: network-attachment-definitions.k8s.cni.cncf.io
spec:
group: k8s.cni.cncf.io
version: v1
scope: Namespaced
names:
plural: network-attachment-definitions
singular: network-attachment-definition
kind: NetworkAttachmentDefinition
shortNames:
- net-attach-def
validation:
openAPIV3Schema:
properties:
spec:
properties:
config:
type: string
EOF
```
### Create network attachment definition
The 'NetworkAttachmentDefinition' is used to set up a network attachment, i.e. a secondary interface for the pod. There are two ways to configure a 'NetworkAttachmentDefinition':
- NetworkAttachmentDefinition with json CNI config
- NetworkAttachmentDefinition with CNI config file
#### NetworkAttachmentDefinition with json CNI config:
The following command creates a NetworkAttachmentDefinition where the CNI config is embedded in the `config:` field.
```
# Execute following command at Kubernetes master
$ cat <<EOF | kubectl create -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: macvlan-conf-1
spec:
config: '{
"cniVersion": "0.3.0",
"type": "macvlan",
"master": "eth1",
"mode": "bridge",
"ipam": {
"type": "host-local",
"ranges": [
[ {
"subnet": "10.10.0.0/16",
"rangeStart": "10.10.1.20",
"rangeEnd": "10.10.3.50",
"gateway": "10.10.0.254"
} ]
]
}
}'
EOF
```
#### NetworkAttachmentDefinition with CNI config file:
If the NetworkAttachmentDefinition has no spec, Multus looks for a file in defaultConfDir (`/etc/cni/multus/net.d`) whose CNI config has the same name in its 'name' field.
```
# Execute following command at Kubernetes master
$ cat <<EOF | kubectl create -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: macvlan-conf-2
EOF
```
```
# Execute following commands at all Kubernetes nodes (i.e. master and minions)
$ cat <<EOF > /etc/cni/multus/net.d/macvlan2.conf
{
"cniVersion": "0.3.0",
"type": "macvlan",
"name": "macvlan-conf-2",
"master": "eth1",
"mode": "bridge",
"ipam": {
"type": "host-local",
"ranges": [
[ {
"subnet": "11.10.0.0/16",
"rangeStart": "11.10.1.20",
"rangeEnd": "11.10.3.50"
} ]
]
}
}
EOF
```
### Run pod with network annotation
#### Launch pod with text annotation
```
# Execute following command at Kubernetes master
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
name: pod-case-01
annotations:
k8s.v1.cni.cncf.io/networks: macvlan-conf-1, macvlan-conf-2
spec:
containers:
- name: pod-case-01
image: docker.io/centos/tools:latest
command:
- /sbin/init
EOF
```
#### Launch pod with text annotation for NetworkAttachmentDefinition in different namespace
You can also specify the NetworkAttachmentDefinition with its namespace by adding a `<namespace>/` prefix.
```
# Execute following command at Kubernetes master
$ cat <<EOF | kubectl create -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: macvlan-conf-3
namespace: testns1
spec:
config: '{
"cniVersion": "0.3.0",
"type": "macvlan",
"master": "eth1",
"mode": "bridge",
"ipam": {
"type": "host-local",
"ranges": [
[ {
"subnet": "12.10.0.0/16",
"rangeStart": "12.10.1.20",
"rangeEnd": "12.10.3.50"
} ]
]
}
}'
EOF
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
name: pod-case-02
annotations:
k8s.v1.cni.cncf.io/networks: testns1/macvlan-conf-3
spec:
containers:
- name: pod-case-02
image: docker.io/centos/tools:latest
command:
- /sbin/init
EOF
```
#### Launch pod with text annotation with interface name
You can also specify the interface name by adding `@<ifname>`.
```
# Execute following command at Kubernetes master
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
name: pod-case-03
annotations:
k8s.v1.cni.cncf.io/networks: macvlan-conf-1@macvlan1
spec:
containers:
- name: pod-case-03
image: docker.io/centos/tools:latest
command:
- /sbin/init
EOF
```
#### Launch pod with json annotation
```
# Execute following command at Kubernetes master
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
name: pod-case-04
annotations:
k8s.v1.cni.cncf.io/networks: '[
{ "name" : "macvlan-conf-1" },
{ "name" : "macvlan-conf-2" }
]'
spec:
containers:
- name: pod-case-04
image: docker.io/centos/tools:latest
command:
- /sbin/init
EOF
```
#### Launch pod with json annotation for NetworkAttachmentDefinition in different namespace
You can also specify the NetworkAttachmentDefinition with its namespace by adding `"namespace": "<namespace>"`.
```
# Execute following command at Kubernetes master
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
name: pod-case-05
annotations:
k8s.v1.cni.cncf.io/networks: '[
{ "name" : "macvlan-conf-1",
"namespace": "testns1" }
]'
spec:
containers:
- name: pod-case-05
image: docker.io/centos/tools:latest
command:
- /sbin/init
EOF
```
#### Launch pod with json annotation with interface name
You can also specify the interface name by adding `"interfaceRequest": "<ifname>"`.
```
# Execute following command at Kubernetes master
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
name: pod-case-06
annotations:
k8s.v1.cni.cncf.io/networks: '[
{ "name" : "macvlan-conf-1",
"interfaceRequest": "macvlan1" },
{ "name" : "macvlan-conf-2" }
]'
spec:
containers:
- name: pod-case-06
image: docker.io/centos/tools:latest
command:
- /sbin/init
EOF
```
### Verifying pod network
The following is an example of `ip -d address` output for the above pod, "pod-case-06":
```
# Execute following command at Kubernetes master
$ kubectl exec -it pod-case-06 -- ip -d address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
3: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
link/ether 0a:58:0a:f4:02:06 brd ff:ff:ff:ff:ff:ff link-netnsid 0 promiscuity 0
veth numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
inet 10.244.2.6/24 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::ac66:45ff:fe7c:3a19/64 scope link
valid_lft forever preferred_lft forever
4: macvlan1@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
link/ether 4e:6d:7a:4e:14:87 brd ff:ff:ff:ff:ff:ff link-netnsid 0 promiscuity 0
macvlan mode bridge numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
inet 10.10.1.22/16 scope global macvlan1
valid_lft forever preferred_lft forever
inet6 fe80::4c6d:7aff:fe4e:1487/64 scope link
valid_lft forever preferred_lft forever
5: net2@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
link/ether 6e:e3:71:7f:86:f7 brd ff:ff:ff:ff:ff:ff link-netnsid 0 promiscuity 0
macvlan mode bridge numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
inet 11.10.1.22/16 scope global net2
valid_lft forever preferred_lft forever
inet6 fe80::6ce3:71ff:fe7f:86f7/64 scope link
valid_lft forever preferred_lft forever
```
| Interface name | Description |
| --- | --- |
| lo | loopback |
| eth0 | Default network interface (flannel) |
| macvlan1 | macvlan interface (macvlan-conf-1) |
| net2 | macvlan interface (macvlan-conf-2) |

File diff suppressed because one or more lines are too long


doc/quickstart.md (new file)

@@ -0,0 +1,184 @@
# Quickstart Guide
This guide is intended as a way to get you off the ground, using Multus CNI to create Kubernetes pods with multiple interfaces. If you're already using Multus and need more detail, see the [comprehensive usage guide](how-to-use.md). This document is a quickstart and a getting-started guide in one, intended for your first run-through of Multus CNI.
We'll first install Multus CNI, and then we'll set up some configurations so that you can see how multiple interfaces are created for pods.
## Key Concepts
Two things we'll refer to a number of times through this document are:
* "Default network" -- This is your pod-to-pod network. This is how pods communicate among one another in your cluster, how they have connectivity. Generally speaking, this is presented as the interface named `eth0`. This interface is always attached to your pods, so that they can have connectivity among themselves. We'll add interfaces in addition to this.
* "CRDs" -- Custom Resource Definitions. Custom Resources are a way that the Kubernetes API is extended. We use these here to store some information that Multus can read. Primarily, we use these to store the configurations for each of the additional interfaces that are attached to your pods.
## Installation
Our recommended quickstart method to deploy Multus is to deploy using a Daemonset. This method is provided in this guide along with [Flannel](https://github.com/coreos/flannel). Flannel is deployed as a pod-to-pod network that is used as our "default network" -- this provides connectivity between pods in your cluster. Each additional network attachment (i.e. for multiple interfaces in pods) is made in addition to this default network. This guide generally assumes a new Kubernetes cluster that hasn't yet had any networking configured. If it's your first time using Multus, you might consider using a fresh cluster to learn with, and then later configure it to work with an existing cluster.
Firstly, clone this GitHub repository.
```
git clone https://github.com/intel/multus-cni.git && cd multus-cni
```
We'll apply files to `kubectl` from this repo. The files we're applying here specify a "Daemonset" (pods that run on each node in the cluster), this Daemonset handles installing the Multus CNI binary, dropping a default configuration on each node in the cluster -- and then also installs Flannel to use as a default network.
```
$ cat ./images/{multus-daemonset.yml,flannel-daemonset.yml} | kubectl apply -f -
```
Note: For the crio runtime, use multus-crio-daemonset.yml (crio uses /usr/libexec/cni as the default path for the plugin directory). Before deploying the daemonsets, delete all default network plugin configuration files under /etc/cni/net.d.
If the runtime is cri-o, then apply these files.
```
$ cat ./images/{multus-crio-daemonset.yml,flannel-daemonset.yml} | kubectl apply -f -
```
### Validating your installation
Generally, the first step in validating your installation is to look at the `STATUS` field of your nodes; you can check it out by looking at:
```
$ kubectl get nodes
```
This will show each of the nodes in your cluster; take a look at the `STATUS` field, and look for `Ready` to appear for each of your nodes. This readiness is determined by the presence of a CNI configuration file on each of the nodes; a node becomes `Ready` once that file appears.
You may also wish to start any pod in your cluster (without any further configuration), and validate that it works as you'd otherwise expect -- especially that it can communicate over the default network.
## Creating additional interfaces
The first thing we'll do is create configurations for each of the additional interfaces that we attach to pods. We'll do this by creating Custom Resources. Part of the quickstart installation creates a "CRD" -- a custom resource definition that is the home where we keep these custom resources -- we'll store our configurations for each interface in these.
### CNI Configurations
Each configuration we'll add is a CNI configuration. If you're not familiar with them, let's break them down quickly. Here's an example CNI configuration:
```
{
"cniVersion": "0.3.0",
"type": "loopback",
"additional": "information"
}
```
CNI configurations are JSON, and we have a structure here that has a few things we're interested in:
1. `cniVersion`: Tells each CNI plugin which version is being used, and lets the plugin signal if the version in use is too new (or too old) for it.
2. `type`: This tells CNI which binary to call on disk. Each CNI plugin is a binary that's called. Typically, these binaries are stored in `/opt/cni/bin` on each node, and CNI executes this binary. In this case we've specified the `loopback` binary (which creates a loopback-type network interface). If this is your first time installing Multus, you might want to verify that the plugins that are in the "type" field are actually on disk in the `/opt/cni/bin` directory.
3. `additional`: This field is put here as an example; each CNI plugin can specify whatever configuration parameters they'd like in JSON. These are specific to the binary you're calling in the `type` field.
For an even further example -- take a look at the [bridge CNI plugin README](https://github.com/containernetworking/plugins/tree/master/plugins/main/bridge) which shows additional configuration parameters.
If you'd like more information about CNI configuration, you can read [the entire CNI specification](https://github.com/containernetworking/cni/blob/master/SPEC.md). It might also be useful to look at the [CNI reference plugins](https://github.com/containernetworking/plugins) and see how they're configured.
You do not need to reload or refresh the Kubelets when CNI configurations change. These are read on each creation & deletion of pods. So if you change a configuration, it'll apply the next time a pod is created. Existing pods may need to be restarted if they need the new configuration.
### Storing a configuration as a Custom Resource
So, we want to create an additional interface. Let's create a macvlan interface for pods to use. We'll create a custom resource that defines the CNI configuration for interfaces.
Note in the following command that there's a `kind: NetworkAttachmentDefinition`. This is our fancy name for our configuration -- it's a custom extension of Kubernetes that defines how we attach networks to our pods.
Secondarily, note the `config` field. You'll see that this is a CNI configuration just like we explained earlier.
Lastly but *very* importantly, note under `metadata` the `name` field -- here's where we give this configuration a name, and it's how we tell pods to use this configuration. The name here is `macvlan-conf` -- as we're creating a configuration for macvlan.
Here's the command to create this example configuration:
```
cat <<EOF | kubectl create -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: macvlan-conf
spec:
config: '{
"cniVersion": "0.3.0",
"type": "macvlan",
"master": "eth0",
"mode": "bridge",
"ipam": {
"type": "host-local",
"subnet": "192.168.1.0/24",
"rangeStart": "192.168.1.200",
"rangeEnd": "192.168.1.216",
"routes": [
{ "dst": "0.0.0.0/0" }
],
"gateway": "192.168.1.1"
}
}'
EOF
```
*NOTE*: This example uses `eth0` as the `master` parameter, this master parameter should match the interface name on the hosts in your cluster.
You can see which configurations you've created using `kubectl`; here's how you can do that:
```
kubectl get network-attachment-definitions
```
You can get more detail by describing them:
```
kubectl describe network-attachment-definitions macvlan-conf
```
### Creating a pod that attaches an additional interface
We're going to create a pod. This will look familiar, like any pod you might have created before, but we'll have a special `annotations` field -- in this case we'll have an annotation called `k8s.v1.cni.cncf.io/networks`. This field takes a comma-delimited list of the names of your `NetworkAttachmentDefinition`s as we created above. Note in the command below that we have the annotation of `k8s.v1.cni.cncf.io/networks: macvlan-conf` where `macvlan-conf` is the name we used above when we created our configuration.
Let's go ahead and create a pod (that just sleeps for a really long time) with this command:
```
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
name: samplepod
annotations:
k8s.v1.cni.cncf.io/networks: macvlan-conf
spec:
containers:
- name: samplepod
command: ["/bin/bash", "-c", "sleep 2000000000000"]
image: dougbtv/centos-network
EOF
```
You may now inspect the pod and see what interfaces are attached, like so:
```
$ kubectl exec -it samplepod -- ip a
```
You should note that there are 3 interfaces:
* `lo` a loopback interface
* `eth0` our default network
* `net1` the new interface we created with the macvlan configuration.
### What if I want more interfaces?
You can add more interfaces to a pod by creating more custom resources and then referring to them in the pod's annotation. You can also reuse configurations, so for example, to attach two macvlan interfaces to a pod, you could create a pod like so:
```
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
name: samplepod
annotations:
k8s.v1.cni.cncf.io/networks: macvlan-conf,macvlan-conf
spec:
containers:
- name: samplepod
command: ["/bin/bash", "-c", "sleep 2000000000000"]
image: dougbtv/centos-network
EOF
```
Note that the annotation now reads `k8s.v1.cni.cncf.io/networks: macvlan-conf,macvlan-conf`. Where we have the same configuration used twice, separated by a comma.
If you were to create another custom resource with the name `foo` you could use that such as: `k8s.v1.cni.cncf.io/networks: foo,macvlan-conf`, and use any number of attachments.


@@ -4,7 +4,7 @@ In the `./examples` folder some example configurations are provided for using Mu
## Examples overview
Generally, the examples here show a setup using Multus with CRD support. The examples here demonstrate a setup with Multus as the meta-plugin used by Kubernetes, and delgating to either Flannel (which will be the default pod network), or to macvlan. The CRDs are intended to be alignment with the defacto standard.
Generally, the examples here show a setup using Multus with CRD support. The examples here demonstrate a setup with Multus as the meta-plugin used by Kubernetes, and delegating to either Flannel (which will be the default pod network), or to macvlan. The CRDs are intended to be alignment with the defacto standard.
It is expected that aspects of your own setup will vary, at least in part, from some of what's demonstrated here. Namely, the IP address spaces, and likely the host ethernet interface names used in the macvlan part of the configuration.
@@ -12,7 +12,7 @@ More specifically, these examples show:
* Multus configured, using CNI a `.conf` file, with CRD support, specifying that we will use a "default network".
* A resource definition with a daemonset that places the `.conf` on each node in the cluster.
* A CRD definining the "networks" @ `network-attachment-definitions.k8s.cni.cncf.io`
* A CRD defining the "networks" @ `network-attachment-definitions.k8s.cni.cncf.io`
* CRD objects containing the configuration for both Flannel & macvlan.
## Quick-start instructions
@@ -37,7 +37,7 @@ More specifically, these examples show:
## RBAC configuration
You'll need to abnel the `system:node` users access to the API endpoints that will deliver the CRD objects to Multus.
You'll need to enable the `system:node` users access to the API endpoints that will deliver the CRD objects to Multus.
Using these examples, you'll first create a cluster role with the provided sample:
@@ -60,3 +60,35 @@ A sample `cni-configuration.conf` is provided, typically this file is placed in
## Other considerations
Primarily in this setup one thing that one should consider are the aspects of the `macvlan-conf.yml`, which is likely specific to the configuration of the node on which this resides.
## Passing down device information
Some CNI plugins require specific device information which may be pre-allocated by a Kubernetes device plugin. This can be indicated by providing the `k8s.v1.cni.cncf.io/resourceName` annotation in the network attachment definition CRD. The file [`examples/sriov-net.yaml`](./sriov-net.yaml) shows an example of how to define a network attachment definition with specific device allocation information. Multus will get the allocated device information and make it available for the CNI plugin to work on.
In this example (shown below), it is expected that an [SRIOV Device Plugin](https://github.com/intel/sriov-network-device-plugin/) is making a pool of SR-IOV VFs available to Kubernetes with `intel.com/sriov` as their resourceName. Any device allocated from this resource pool will be passed down by Multus to the [sriov-cni](https://github.com/intel/sriov-cni/tree/dev/k8s-deviceid-model) plugin in the `deviceID` field. It is up to the sriov-cni plugin to capture and work with this specific device information.
```yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: sriov-net-a
annotations:
k8s.v1.cni.cncf.io/resourceName: intel.com/sriov
spec:
config: '{
"type": "sriov",
"vlan": 1000,
"ipam": {
"type": "host-local",
"subnet": "10.56.217.0/24",
"rangeStart": "10.56.217.171",
"rangeEnd": "10.56.217.181",
"routes": [{
"dst": "0.0.0.0/0"
}],
"gateway": "10.56.217.1"
}
}'
```
The [net-resource-sample-pod.yaml](./net-resource-sample-pod.yaml) is an example Pod manifest file requesting an SR-IOV device from a host, which is then configured using the above network attachment definition.
>For further information on how to configure SRIOV Device Plugin and SRIOV-CNI please refer to the links given above.


@@ -1,16 +1,19 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: multus-crd-overpowered
rules:
- apiGroups:
- '*'
resources:
- '*'
verbs:
- '*'
- nonResourceURLs:
- '*'
verbs:
- '*'
- apiGroups: ["k8s.cni.cncf.io"]
resources:
- '*'
verbs:
- '*'
- apiGroups:
- ""
resources:
- pods
- pods/status
verbs:
- get
- update


@@ -0,0 +1,36 @@
{
"name": "multus-cni-network",
"type": "multus"
"capabilities": {
"portMappings": true
},
"delegates": [
{
"cniVersion": "0.3.1",
"name": "ptp-tuning-conflist",
"plugins": [
{
"dns": {
"nameservers": [
"172.16.1.1"
]
},
"ipMasq": true,
"ipam": {
"subnet": "172.16.0.0/24",
"type": "host-local"
},
"mtu": 512,
"type": "ptp"
},
{
"capabilities": {
"portMappings": true
},
"externalSetMarkChain": "KUBE-MARK-MASQ",
"type": "portmap"
}
]
}
]
}


@@ -101,8 +101,7 @@ spec:
nodeSelector:
beta.kubernetes.io/arch: amd64
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:


@@ -0,0 +1,21 @@
apiVersion: v1
kind: Pod
metadata:
name: testpod1
labels:
env: test
annotations:
k8s.v1.cni.cncf.io/networks: sriov-net-a
spec:
containers:
- name: appcntr1
image: centos/tools
imagePullPolicy: IfNotPresent
command: [ "/bin/bash", "-c", "--" ]
args: [ "while true; do sleep 300000; done;" ]
resources:
requests:
intel.com/sriov: '1'
limits:
intel.com/sriov: '1'
restartPolicy: "Never"


@@ -2,21 +2,29 @@ apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
# name must match the spec fields below, and be in the form: <plural>.<group>
name: networks.kubernetes.cni.cncf.io
name: network-attachment-definitions.k8s.cni.cncf.io
spec:
# group name to use for REST API: /apis/<group>/<version>
group: kubernetes.cni.cncf.io
group: k8s.cni.cncf.io
# version name to use for REST API: /apis/<group>/<version>
version: v1
# either Namespaced or Cluster
scope: Namespaced
names:
# plural name to be used in the URL: /apis/<group>/<version>/<plural>
plural: networks
plural: network-attachment-definitions
# singular name to be used as an alias on the CLI and for display
singular: network
singular: network-attachment-definition
# kind is normally the CamelCased singular type. Your resource manifests use this.
kind: Network
kind: NetworkAttachmentDefinition
# shortNames allow shorter string to match your resource on the CLI
shortNames:
- net
- net-attach-def
validation:
openAPIV3Schema:
properties:
spec:
properties:
config:
type: string


@@ -2,7 +2,7 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: multus-crd-overpowered
name: multus
rules:
- apiGroups:
- '*'


@@ -1,6 +1,6 @@
---
apiVersion: "kubernetes.cni.cncf.io/v1"
kind: Network
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: macvlan-conf-1
spec:
@@ -17,8 +17,8 @@ spec:
}
}'
---
apiVersion: "kubernetes.cni.cncf.io/v1"
kind: Network
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: macvlan-conf-2
spec:
@@ -35,8 +35,8 @@ spec:
}
}'
---
apiVersion: "kubernetes.cni.cncf.io/v1"
kind: Network
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: macvlan-conf-3
spec:
@@ -53,8 +53,8 @@ spec:
}
}'
---
apiVersion: "kubernetes.cni.cncf.io/v1"
kind: Network
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: macvlan-conf-4
spec:


@@ -1,6 +1,6 @@
---
apiVersion: "kubernetes.cni.cncf.io/v1"
kind: Network
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: vlan-conf-1-1
namespace: testns1


@@ -8,7 +8,7 @@ metadata:
{ "name": "macvlan-conf-2" },
{ "name": "vlan-conf-1-1",
"namespace": "testns1",
"interfaceRequest": "vlan1-1" }
"interface": "vlan1-1" }
]'
spec:
containers:

examples/sriov-net.yaml (new file)

@@ -0,0 +1,21 @@
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: sriov-net-a
annotations:
k8s.v1.cni.cncf.io/resourceName: intel.com/sriov
spec:
config: '{
"type": "sriov",
"vlan": 1000,
"ipam": {
"type": "host-local",
"subnet": "10.56.217.0/24",
"rangeStart": "10.56.217.171",
"rangeEnd": "10.56.217.181",
"routes": [{
"dst": "0.0.0.0/0"
}],
"gateway": "10.56.217.1"
}
}'


@@ -1,16 +1,18 @@
package: github.com/intel/multus-cni
ignore:
- bytes
import:
- package: github.com/containernetworking/cni
version: 07c1a6da47b7fbf8b357f4949ecce2113e598491
subpackages:
- pkg/ip
- pkg/ipam
- pkg/skel
- pkg/types
- pkg/version
- package: github.com/containernetworking/plugins
version: 2b8b1ac0af4568e928d96ccc5f47b075416eeabd
subpackages:
- pkg/ip
- pkg/ipam
- pkg/ns
- package: github.com/onsi/ginkgo
version: 7f8ab55aaf3b86885aa55b762e803744d1674700


@@ -1,14 +0,0 @@
{
"name": "multus-cni-network",
"type": "multus",
"delegates": [
{
"type": "flannel",
"name": "flannel.1",
"delegate": {
"isDefaultGateway": true
}
}
],
"kubeconfig": "/etc/cni/net.d/multus.d/multus.kubeconfig"
}


@@ -1,27 +0,0 @@
FROM centos:centos7
# Add everything
ADD . /usr/src/multus-cni
ENV INSTALL_PKGS "git golang"
RUN yum install -y $INSTALL_PKGS && \
rpm -V $INSTALL_PKGS && \
cd /usr/src/multus-cni && \
./build && \
yum autoremove -y $INSTALL_PKGS && \
yum clean all && \
rm -rf /tmp/*
WORKDIR /
LABEL io.k8s.display-name="Multus CNI" \
io.k8s.description="This is a component of OpenShift Container Platform and provides a meta CNI plugin." \
io.openshift.tags="openshift" \
maintainer="Doug Smith <dosmith@redhat.com>"
ADD ./images/entrypoint.sh /
# does it require a root user?
# USER 1001
ENTRYPOINT /entrypoint.sh


@@ -2,10 +2,10 @@
This is used for distribution of Multus in a Docker image.
Typically you'd build this from the root of your Multus clone, and you'd set the `-f` flag to specify the Dockerfile during build time. This allows the addition of the entirety of the Multus git clone as part of the Docker context. Use the `-f` flag with the root of the clone as the context (e.g. your current work directory would be root of git clone), such as:
Typically you'd build this from the root of your Multus clone, as such:
```
$ docker build -t dougbtv/multus -f ./images/Dockerfile .
$ docker build -t dougbtv/multus .
```
---
@@ -31,10 +31,12 @@ You can get get help with the `--help` flag.
```
$ ./entrypoint.sh --help
This is an entrypoint script for Multus CNI to overlay its
binary and configuration into locations in a filesystem.
The configuration & binary file will be copied to the
corresponding configuration directory.
This is an entrypoint script for Multus CNI to overlay its binary and
configuration into locations in a filesystem. The configuration & binary file
will be copied to the corresponding configuration directory. When
`--multus-conf-file=auto` is used, 00-multus.conf will be automatically
generated from the CNI configuration file of the master plugin (the first file
in lexicographical order in cni-conf-dir).
./entrypoint.sh
-h --help
@@ -42,6 +44,7 @@ corresponding configuration directory.
--cni-bin-dir=/host/opt/cni/bin
--multus-conf-file=/usr/src/multus-cni/images/70-multus.conf
--multus-bin-file=/usr/src/multus-cni/bin/multus
--multus-kubeconfig-file-host=/etc/cni/net.d/multus.d/multus.kubeconfig
```
You must use an `=` to delimit the parameter name and the value. For example you may set a custom `cni-conf-dir` like so:
@@ -59,7 +62,7 @@ Note: You'll noticed that there's a `/host/...` directory from the root for the
Example docker run command:
```
$ docker run -it -v /opt/cni/bin/:/host/opt/cni/bin/ -v /etc/cni/net.d/:/host/etc/cni/net.d/ --entrypoint=/bin/bash dougbtv/multus
$ docker run -it -v /opt/cni/bin/:/host/opt/cni/bin/ -v /etc/cni/net.d/:/host/etc/cni/net.d/ --entrypoint=/bin/bash dougbtv/multus
```
Originally inspired by and is a portmanteau of the [Flannel daemonset](https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml), the [Calico Daemonset](https://github.com/projectcalico/calico/blob/master/v2.0/getting-started/kubernetes/installation/hosted/k8s-backend-addon-manager/calico-daemonset.yaml), and the [Calico CNI install bash script](https://github.com/projectcalico/cni-plugin/blob/be4df4db2e47aa7378b1bdf6933724bac1f348d0/k8s-install/scripts/install-cni.sh#L104-L153).
Originally inspired by and is a portmanteau of the [Flannel daemonset](https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml), the [Calico Daemonset](https://github.com/projectcalico/calico/blob/master/v2.0/getting-started/kubernetes/installation/hosted/k8s-backend-addon-manager/calico-daemonset.yaml), and the [Calico CNI install bash script](https://github.com/projectcalico/cni-plugin/blob/be4df4db2e47aa7378b1bdf6933724bac1f348d0/k8s-install/scripts/install-cni.sh#L104-L153).


@@ -8,14 +8,20 @@ CNI_CONF_DIR="/host/etc/cni/net.d"
CNI_BIN_DIR="/host/opt/cni/bin"
MULTUS_CONF_FILE="/usr/src/multus-cni/images/70-multus.conf"
MULTUS_BIN_FILE="/usr/src/multus-cni/bin/multus"
MULTUS_KUBECONFIG_FILE_HOST="/etc/cni/net.d/multus.d/multus.kubeconfig"
MULTUS_NAMESPACE_ISOLATION=false
MULTUS_LOG_LEVEL=""
MULTUS_LOG_FILE=""
# Give help text for parameters.
function usage()
{
echo -e "This is an entrypoint script for Multus CNI to overlay its"
echo -e "binary and configuration into locations in a filesystem."
echo -e "The configuration & binary file will be copied to the "
echo -e "corresponding configuration directory."
echo -e "This is an entrypoint script for Multus CNI to overlay its binary and "
echo -e "configuration into locations in a filesystem. The configuration & binary file "
echo -e "will be copied to the corresponding configuration directory. When "
echo -e "'--multus-conf-file=auto' is used, 00-multus.conf will be automatically "
echo -e "generated from the CNI configuration file of the master plugin (the first file "
echo -e "in lexicographical order in cni-conf-dir)."
echo -e ""
echo -e "./entrypoint.sh"
echo -e "\t-h --help"
@@ -23,6 +29,10 @@ function usage()
echo -e "\t--cni-bin-dir=$CNI_BIN_DIR"
echo -e "\t--multus-conf-file=$MULTUS_CONF_FILE"
echo -e "\t--multus-bin-file=$MULTUS_BIN_FILE"
echo -e "\t--multus-kubeconfig-file-host=$MULTUS_KUBECONFIG_FILE_HOST"
echo -e "\t--namespace-isolation=$MULTUS_NAMESPACE_ISOLATION"
echo -e "\t--multus-log-level=$MULTUS_LOG_LEVEL (empty by default, used only with --multus-conf-file=auto)"
echo -e "\t--multus-log-file=$MULTUS_LOG_FILE (empty by default, used only with --multus-conf-file=auto)"
}
# Parse parameters given as arguments to this script.
@@ -46,10 +56,20 @@ while [ "$1" != "" ]; do
--multus-bin-file)
MULTUS_BIN_FILE=$VALUE
;;
--multus-kubeconfig-file-host)
MULTUS_KUBECONFIG_FILE_HOST=$VALUE
;;
--namespace-isolation)
MULTUS_NAMESPACE_ISOLATION=$VALUE
;;
--multus-log-level)
MULTUS_LOG_LEVEL=$VALUE
;;
--multus-log-file)
MULTUS_LOG_FILE=$VALUE
;;
*)
echo "ERROR: unknown parameter \"$PARAM\""
usage
exit 1
echo "WARNING: unknown parameter \"$PARAM\""
;;
esac
shift
@@ -57,7 +77,11 @@ done
# Create array of known locations
declare -a arr=($CNI_CONF_DIR $CNI_BIN_DIR $MULTUS_CONF_FILE $MULTUS_BIN_FILE)
declare -a arr=($CNI_CONF_DIR $CNI_BIN_DIR $MULTUS_BIN_FILE)
if [ "$MULTUS_CONF_FILE" != "auto" ]; then
arr+=($MULTUS_CONF_FILE)
fi
# Loop through and verify each location exists.
for i in "${arr[@]}"
@@ -68,9 +92,12 @@ do
fi
done
# Copy files into proper places.
cp -f $MULTUS_CONF_FILE $CNI_CONF_DIR
cp -f $MULTUS_BIN_FILE $CNI_BIN_DIR
# Copy files into place and atomically move into final binary name
cp -f $MULTUS_BIN_FILE $CNI_BIN_DIR/_multus
mv -f $CNI_BIN_DIR/_multus $CNI_BIN_DIR/multus
if [ "$MULTUS_CONF_FILE" != "auto" ]; then
cp -f $MULTUS_CONF_FILE $CNI_CONF_DIR
fi
# Make a multus.d directory (for our kubeconfig)
@@ -134,6 +161,81 @@ fi
# ---------------------- end Generate a "kube-config".
# ------------------------------- Generate "00-multus.conf"
if [ "$MULTUS_CONF_FILE" == "auto" ]; then
echo "Generating Multus configuration file ..."
found_master=false
tries=0
while [ $found_master == false ]; do
MASTER_PLUGIN="$(ls $CNI_CONF_DIR | grep -E '\.conf(list)?$' | grep -Ev '00-multus\.conf' | head -1)"
if [ "$MASTER_PLUGIN" == "" ]; then
if [ $tries -lt 600 ]; then
if ! (($tries % 5)); then
echo "Attemping to find master plugin configuration, attempt $tries"
fi
let "tries+=1"
sleep 1;
else
echo "Error: Multus could not be configured: no master plugin was found."
exit 1;
fi
else
found_master=true
ISOLATION_STRING=""
if [ "$MULTUS_NAMESPACE_ISOLATION" == true ]; then
ISOLATION_STRING="\"namespaceIsolation\": true,"
fi
LOG_LEVEL_STRING=""
if [ ! -z "${MULTUS_LOG_LEVEL// }" ]; then
case "$MULTUS_LOG_LEVEL" in
debug)
;;
error)
;;
panic)
;;
verbose)
;;
*)
echo "ERROR: Log levels should be one of: debug/verbose/error/panic, did not understand $MULTUS_LOG_LEVEL"
usage
exit 1
esac
LOG_LEVEL_STRING="\"logLevel\": \"$MULTUS_LOG_LEVEL\","
fi
LOG_FILE_STRING=""
if [ ! -z "${MULTUS_LOG_FILE// }" ]; then
LOG_FILE_STRING="\"logFile\": \"$MULTUS_LOG_FILE\","
fi
MASTER_PLUGIN_JSON="$(cat $CNI_CONF_DIR/$MASTER_PLUGIN)"
CONF=$(cat <<-EOF
{
"name": "multus-cni-network",
"type": "multus",
$ISOLATION_STRING
$LOG_LEVEL_STRING
$LOG_FILE_STRING
"kubeconfig": "$MULTUS_KUBECONFIG_FILE_HOST",
"delegates": [
$MASTER_PLUGIN_JSON
]
}
EOF
)
echo $CONF > $CNI_CONF_DIR/00-multus.conf
echo "Config file created @ $CNI_CONF_DIR/00-multus.conf"
fi
done
fi
# ---------------------- end Generate "00-multus.conf".
echo "Entering sleep... (success)"
# Sleep forever.
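For readers following the `auto` branch above, the sketch below reproduces the same idea in Go purely for illustration: pick the lexicographically first `.conf`/`.conflist` file (skipping any previously generated `00-multus.conf`) and wrap it as the sole delegate of a generated Multus configuration. The directory and kubeconfig paths mirror the script defaults; the function and variable names are invented for this sketch and are not part of the repository.
```
package main

import (
	"fmt"
	"io/ioutil"
	"os"
	"path/filepath"
	"sort"
	"strings"
)

// findMasterPlugin returns the first .conf/.conflist file in confDir,
// in lexicographical order, skipping any previously generated 00-multus.conf.
func findMasterPlugin(confDir string) (string, error) {
	entries, err := ioutil.ReadDir(confDir)
	if err != nil {
		return "", err
	}
	var candidates []string
	for _, e := range entries {
		name := e.Name()
		if name == "00-multus.conf" {
			continue
		}
		if strings.HasSuffix(name, ".conf") || strings.HasSuffix(name, ".conflist") {
			candidates = append(candidates, name)
		}
	}
	if len(candidates) == 0 {
		return "", fmt.Errorf("no master plugin configuration found in %s", confDir)
	}
	sort.Strings(candidates)
	return filepath.Join(confDir, candidates[0]), nil
}

func main() {
	confDir := "/host/etc/cni/net.d" // same default as CNI_CONF_DIR above
	kubeconfig := "/etc/cni/net.d/multus.d/multus.kubeconfig"

	master, err := findMasterPlugin(confDir)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	masterJSON, err := ioutil.ReadFile(master)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Wrap the master plugin as the only delegate, as the entrypoint script does.
	conf := fmt.Sprintf(`{
  "name": "multus-cni-network",
  "type": "multus",
  "kubeconfig": %q,
  "delegates": [
    %s
  ]
}`, kubeconfig, strings.TrimSpace(string(masterJSON)))
	fmt.Println(conf)
}
```
The point the sketch makes explicit is that the master plugin's JSON is embedded verbatim inside the `delegates` array, so whatever CNI plugin was already installed keeps serving as the cluster's primary network.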


@@ -55,25 +55,26 @@ metadata:
tier: node
app: flannel
data:
cni-conf.json: |
{
"name": "cbr0",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
# ------------------------------- Intentionally removed, Multus daemonset configures /etc/cni/net.d
#cni-conf.json: |
# {
# "name": "cbr0",
# "plugins": [
# {
# "type": "flannel",
# "delegate": {
# "hairpinMode": true,
# "isDefaultGateway": true
# }
# },
# {
# "type": "portmap",
# "capabilities": {
# "portMappings": true
# }
# }
# ]
# }
net-conf.json: |
{
"Network": "10.244.0.0/16",
@@ -101,8 +102,7 @@ spec:
nodeSelector:
beta.kubernetes.io/arch: amd64
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
# ------------------------------- Intentionally removed, Multus daemonset configures /etc/cni/net.d
@@ -181,8 +181,7 @@ spec:
nodeSelector:
beta.kubernetes.io/arch: arm64
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
@@ -260,8 +259,7 @@ spec:
nodeSelector:
beta.kubernetes.io/arch: arm
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
@@ -339,8 +337,7 @@ spec:
nodeSelector:
beta.kubernetes.io/arch: ppc64le
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
@@ -418,8 +415,7 @@ spec:
nodeSelector:
beta.kubernetes.io/arch: s390x
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
@@ -476,4 +472,4 @@ spec:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
name: kube-flannel-cfg


@@ -0,0 +1,166 @@
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: network-attachment-definitions.k8s.cni.cncf.io
spec:
group: k8s.cni.cncf.io
version: v1
scope: Namespaced
names:
plural: network-attachment-definitions
singular: network-attachment-definition
kind: NetworkAttachmentDefinition
shortNames:
- net-attach-def
validation:
openAPIV3Schema:
properties:
spec:
properties:
config:
type: string
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: multus
rules:
- apiGroups: ["k8s.cni.cncf.io"]
resources:
- '*'
verbs:
- '*'
- apiGroups:
- ""
resources:
- pods
- pods/status
verbs:
- get
- update
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: multus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: multus
subjects:
- kind: ServiceAccount
name: multus
namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: multus
namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
name: multus-cni-config
namespace: kube-system
labels:
tier: node
app: multus
data:
cni-conf.json: |
{
"name": "multus-cni-network",
"type": "multus",
"capabilities": {
"portMappings": true
},
"delegates": [
{
"cniVersion": "0.3.1",
"name": "default-cni-network",
"plugins": [
{
"type": "flannel",
"name": "flannel.1",
"delegate": {
"isDefaultGateway": true,
"hairpinMode": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
],
"kubeconfig": "/etc/cni/net.d/multus.d/multus.kubeconfig"
}
# -------------- for openshift.
# "delegates": [{
# "type": "openshift-sdn",
# "name:" "openshift.1",
# "masterplugin": true
# }],
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: kube-multus-ds-amd64
namespace: kube-system
labels:
tier: node
app: multus
spec:
template:
metadata:
labels:
tier: node
app: multus
spec:
hostNetwork: true
nodeSelector:
beta.kubernetes.io/arch: amd64
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: multus
containers:
- name: kube-multus
image: nfvpe/multus:v3.2
command: ["/entrypoint.sh"]
args:
- "--multus-conf-file=/tmp/multus-conf/70-multus.conf"
- "--cni-bin-dir=/host/usr/libexec/cni"
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: true
volumeMounts:
- name: cni
mountPath: /host/etc/cni/net.d
- name: cnibin
mountPath: /host/usr/libexec/cni
- name: multus-cfg
mountPath: /tmp/multus-conf
volumes:
- name: cni
hostPath:
path: /etc/cni/net.d
- name: cnibin
hostPath:
path: /usr/libexec/cni
- name: multus-cfg
configMap:
name: multus-cni-config
items:
- key: cni-conf.json
path: 70-multus.conf


@@ -26,16 +26,19 @@ apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: multus
rules:
- apiGroups:
- '*'
resources:
- '*'
verbs:
- '*'
- nonResourceURLs:
- '*'
verbs:
- '*'
- apiGroups: ["k8s.cni.cncf.io"]
resources:
- '*'
verbs:
- '*'
- apiGroups:
- ""
resources:
- pods
- pods/status
verbs:
- get
- update
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
@@ -56,11 +59,6 @@ metadata:
name: multus
namespace: kube-system
---
# ------------------------------------------------------
# Currently unused!
# If you wish to customize, mount this in the
# daemonset @ /usr/src/multus-cni/images/70-multus.conf
# ------------------------------------------------------
kind: ConfigMap
apiVersion: v1
metadata:
@@ -74,13 +72,29 @@ data:
{
"name": "multus-cni-network",
"type": "multus",
"capabilities": {
"portMappings": true
},
"delegates": [
{
"type": "flannel",
"name": "flannel.1",
"delegate": {
"isDefaultGateway": true
}
"cniVersion": "0.3.1",
"name": "default-cni-network",
"plugins": [
{
"type": "flannel",
"name": "flannel.1",
"delegate": {
"isDefaultGateway": true,
"hairpinMode": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
],
"kubeconfig": "/etc/cni/net.d/multus.d/multus.kubeconfig"
@@ -111,13 +125,15 @@ spec:
nodeSelector:
beta.kubernetes.io/arch: amd64
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
- operator: Exists
effect: NoSchedule
serviceAccountName: multus
containers:
- name: kube-multus
image: nfvpe/multus:latest
image: nfvpe/multus:v3.2
command: ["/entrypoint.sh"]
args:
- "--multus-conf-file=/tmp/multus-conf/70-multus.conf"
resources:
requests:
cpu: "100m"
@@ -132,6 +148,8 @@ spec:
mountPath: /host/etc/cni/net.d
- name: cnibin
mountPath: /host/opt/cni/bin
- name: multus-cfg
mountPath: /tmp/multus-conf
volumes:
- name: cni
hostPath:
@@ -142,3 +160,6 @@ spec:
- name: multus-cfg
configMap:
name: multus-cni-config
items:
- key: cni-conf.json
path: 70-multus.conf


@@ -31,9 +31,17 @@ import (
"github.com/containernetworking/cni/libcni"
"github.com/containernetworking/cni/pkg/skel"
cnitypes "github.com/containernetworking/cni/pkg/types"
"github.com/intel/multus-cni/checkpoint"
"github.com/intel/multus-cni/logging"
"github.com/intel/multus-cni/types"
)
const (
resourceNameAnnot = "k8s.v1.cni.cncf.io/resourceName"
defaultNetAnnot = "v1.multus-cni.io/default-network"
networkAttachmentAnnot = "k8s.v1.cni.cncf.io/networks"
)
// NoK8sNetworkError indicates that no network was found in Kubernetes
type NoK8sNetworkError struct {
message string
@@ -67,40 +75,58 @@ func (d *defaultKubeClient) UpdatePodStatus(pod *v1.Pod) (*v1.Pod, error) {
}
func setKubeClientInfo(c *clientInfo, client KubeClient, k8sArgs *types.K8sArgs) {
logging.Debugf("setKubeClientInfo: %v, %v, %v", c, client, k8sArgs)
c.Client = client
c.Podnamespace = string(k8sArgs.K8S_POD_NAMESPACE)
c.Podname = string(k8sArgs.K8S_POD_NAME)
}
func SetNetworkStatus(k *clientInfo, netStatus []*types.NetworkStatus) error {
func SetNetworkStatus(client KubeClient, k8sArgs *types.K8sArgs, netStatus []*types.NetworkStatus, conf *types.NetConf) error {
logging.Debugf("SetNetworkStatus: %v, %v, %v, %v", client, k8sArgs, netStatus, conf)
pod, err := k.Client.GetPod(k.Podnamespace, k.Podname)
client, err := GetK8sClient(conf.Kubeconfig, client)
if err != nil {
return fmt.Errorf("SetNetworkStatus: failed to query the pod %v in out of cluster comm: %v", k.Podname, err)
return logging.Errorf("SetNetworkStatus: %v", err)
}
if client == nil {
if len(conf.Delegates) == 0 {
// No available kube client and no delegates, we can't do anything
return logging.Errorf("must have either Kubernetes config or delegates, refer to Multus documentation for usage instructions")
}
logging.Debugf("SetNetworkStatus: kube client info is not defined, skip network status setup")
return nil
}
var ns string
podName := string(k8sArgs.K8S_POD_NAME)
podNamespace := string(k8sArgs.K8S_POD_NAMESPACE)
pod, err := client.GetPod(podNamespace, podName)
if err != nil {
return logging.Errorf("SetNetworkStatus: failed to query the pod %v in out of cluster comm: %v", podName, err)
}
var networkStatuses string
if netStatus != nil {
var networkStatus []string
for _, nets := range netStatus {
data, err := json.MarshalIndent(nets, "", " ")
for _, status := range netStatus {
data, err := json.MarshalIndent(status, "", " ")
if err != nil {
return fmt.Errorf("SetNetworkStatus: error with Marshal Indent: %v", err)
return logging.Errorf("SetNetworkStatus: error with Marshal Indent: %v", err)
}
networkStatus = append(networkStatus, string(data))
}
ns = fmt.Sprintf("[%s]", strings.Join(networkStatus, ","))
networkStatuses = fmt.Sprintf("[%s]", strings.Join(networkStatus, ","))
}
_, err = setPodNetworkAnnotation(k.Client, k.Podnamespace, pod, ns)
_, err = setPodNetworkAnnotation(client, podNamespace, pod, networkStatuses)
if err != nil {
return fmt.Errorf("SetNetworkStatus: failed to update the pod %v in out of cluster comm: %v", k.Podname, err)
return logging.Errorf("SetNetworkStatus: failed to update the pod %v in out of cluster comm: %v", podName, err)
}
return nil
}
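For clarity, here is a minimal standalone sketch of the annotation value that `SetNetworkStatus` ends up writing: each status entry is marshaled individually and the results are joined inside square brackets. The `netStatus` struct below is a simplified stand-in for the project's `types.NetworkStatus`, not its exact definition.
```
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// netStatus is a simplified stand-in for the project's types.NetworkStatus.
type netStatus struct {
	Name      string   `json:"name"`
	Interface string   `json:"interface,omitempty"`
	IPs       []string `json:"ips,omitempty"`
}

// buildStatusAnnotation mirrors the marshal-and-join logic in SetNetworkStatus:
// each status is marshaled individually and the results are wrapped in [ ... ].
func buildStatusAnnotation(statuses []*netStatus) (string, error) {
	var parts []string
	for _, s := range statuses {
		data, err := json.MarshalIndent(s, "", "    ")
		if err != nil {
			return "", err
		}
		parts = append(parts, string(data))
	}
	return fmt.Sprintf("[%s]", strings.Join(parts, ",")), nil
}

func main() {
	statuses := []*netStatus{
		{Name: "cbr0", Interface: "eth0", IPs: []string{"10.244.1.4"}},
		{Name: "macvlan-conf", Interface: "net1", IPs: []string{"192.168.1.200"}},
	}
	annotation, err := buildStatusAnnotation(statuses)
	if err != nil {
		panic(err)
	}
	// This string is what ends up in the pod's network-status annotation.
	fmt.Println(annotation)
}
```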
func setPodNetworkAnnotation(client KubeClient, namespace string, pod *v1.Pod, networkstatus string) (*v1.Pod, error) {
logging.Debugf("setPodNetworkAnnotation: %v, %s, %v, %s", client, namespace, pod, networkstatus)
// if the pod annotations map is empty, make sure it is allocated before writing to it
if len(pod.Annotations) == 0 {
pod.Annotations = make(map[string]string)
@@ -122,27 +148,17 @@ func setPodNetworkAnnotation(client KubeClient, namespace string, pod *v1.Pod, n
pod, err = client.UpdatePodStatus(pod)
return err
}); resultErr != nil {
return nil, fmt.Errorf("status update failed for pod %s/%s: %v", pod.Namespace, pod.Name, resultErr)
return nil, logging.Errorf("status update failed for pod %s/%s: %v", pod.Namespace, pod.Name, resultErr)
}
return pod, nil
}
func getPodNetworkAnnotation(client KubeClient, k8sArgs *types.K8sArgs) (string, string, error) {
var err error
pod, err := client.GetPod(string(k8sArgs.K8S_POD_NAMESPACE), string(k8sArgs.K8S_POD_NAME))
if err != nil {
return "", "", fmt.Errorf("getPodNetworkAnnotation: failed to query the pod %v in out of cluster comm: %v", string(k8sArgs.K8S_POD_NAME), err)
}
return pod.Annotations["k8s.v1.cni.cncf.io/networks"], pod.ObjectMeta.Namespace, nil
}
func parsePodNetworkObjectName(podnetwork string) (string, string, string, error) {
var netNsName string
var netIfName string
var networkName string
logging.Debugf("parsePodNetworkObjectName: %s", podnetwork)
slashItems := strings.Split(podnetwork, "/")
if len(slashItems) == 2 {
netNsName = strings.TrimSpace(slashItems[0])
@@ -150,7 +166,7 @@ func parsePodNetworkObjectName(podnetwork string) (string, string, string, error
} else if len(slashItems) == 1 {
networkName = slashItems[0]
} else {
return "", "", "", fmt.Errorf("Invalid network object (failed at '/')")
return "", "", "", logging.Errorf("Invalid network object (failed at '/')")
}
atItems := strings.Split(networkName, "@")
@@ -158,7 +174,7 @@ func parsePodNetworkObjectName(podnetwork string) (string, string, string, error
if len(atItems) == 2 {
netIfName = strings.TrimSpace(atItems[1])
} else if len(atItems) != 1 {
return "", "", "", fmt.Errorf("Invalid network object (failed at '@')")
return "", "", "", logging.Errorf("Invalid network object (failed at '@')")
}
// Check and see if each item matches the specification for valid attachment name.
@@ -170,23 +186,25 @@ func parsePodNetworkObjectName(podnetwork string) (string, string, string, error
for i := range allItems {
matched, _ := regexp.MatchString("^[a-z0-9]([-a-z0-9]*[a-z0-9])?$", allItems[i])
if !matched && len([]rune(allItems[i])) > 0 {
return "", "", "", fmt.Errorf(fmt.Sprintf("Failed to parse: one or more items did not match comma-delimited format (must consist of lower case alphanumeric characters). Must start and end with an alphanumeric character), mismatch @ '%v'", allItems[i]))
return "", "", "", logging.Errorf(fmt.Sprintf("Failed to parse: one or more items did not match comma-delimited format (must consist of lower case alphanumeric characters). Must start and end with an alphanumeric character), mismatch @ '%v'", allItems[i]))
}
}
logging.Debugf("parsePodNetworkObjectName: parsed: %s, %s, %s", netNsName, networkName, netIfName)
return netNsName, networkName, netIfName, nil
}
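As a worked example of the annotation element format handled above, the following self-contained sketch re-implements the `<namespace>/<network>@<ifname>` split (slash first, then at-sign, then the lowercase-alphanumeric validation) outside the package; it is illustrative only and does not call the unexported parser itself.
```
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// parseNetworkRef splits an element of the k8s.v1.cni.cncf.io/networks
// annotation into namespace, network name and interface name, in the same
// order as the parser above: "/" first, then "@".
func parseNetworkRef(ref string) (ns, name, ifname string, err error) {
	slashItems := strings.Split(ref, "/")
	switch len(slashItems) {
	case 2:
		ns = strings.TrimSpace(slashItems[0])
		name = slashItems[1]
	case 1:
		name = slashItems[0]
	default:
		return "", "", "", fmt.Errorf("invalid network object (failed at '/')")
	}

	atItems := strings.Split(name, "@")
	name = strings.TrimSpace(atItems[0])
	if len(atItems) == 2 {
		ifname = strings.TrimSpace(atItems[1])
	} else if len(atItems) != 1 {
		return "", "", "", fmt.Errorf("invalid network object (failed at '@')")
	}

	// Each non-empty item must be a lowercase alphanumeric (plus '-') name.
	valid := regexp.MustCompile(`^[a-z0-9]([-a-z0-9]*[a-z0-9])?$`)
	for _, item := range []string{ns, name, ifname} {
		if item != "" && !valid.MatchString(item) {
			return "", "", "", fmt.Errorf("invalid item %q", item)
		}
	}
	return ns, name, ifname, nil
}

func main() {
	for _, ref := range []string{"macvlan-conf", "kube-system/net1@eth1"} {
		ns, name, ifname, err := parseNetworkRef(ref)
		fmt.Println(ref, "->", ns, name, ifname, err)
	}
}
```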
func parsePodNetworkAnnotation(podNetworks, defaultNamespace string) ([]*types.NetworkSelectionElement, error) {
var networks []*types.NetworkSelectionElement
logging.Debugf("parsePodNetworkAnnotation: %s, %s", podNetworks, defaultNamespace)
if podNetworks == "" {
return nil, fmt.Errorf("parsePodNetworkAnnotation: pod annotation not having \"network\" as key, refer Multus README.md for the usage guide")
return nil, logging.Errorf("parsePodNetworkAnnotation: pod annotation not having \"network\" as key, refer Multus README.md for the usage guide")
}
if strings.IndexAny(podNetworks, "[{\"") >= 0 {
if err := json.Unmarshal([]byte(podNetworks), &networks); err != nil {
return nil, fmt.Errorf("parsePodNetworkAnnotation: failed to parse pod Network Attachment Selection Annotation JSON format: %v", err)
return nil, logging.Errorf("parsePodNetworkAnnotation: failed to parse pod Network Attachment Selection Annotation JSON format: %v", err)
}
} else {
// Comma-delimited list of network attachment object names
@@ -197,7 +215,7 @@ func parsePodNetworkAnnotation(podNetworks, defaultNamespace string) ([]*types.N
// Parse network name (i.e. <namespace>/<network name>@<ifname>)
netNsName, networkName, netIfName, err := parsePodNetworkObjectName(item)
if err != nil {
return nil, fmt.Errorf("parsePodNetworkAnnotation: %v", err)
return nil, logging.Errorf("parsePodNetworkAnnotation: %v", err)
}
networks = append(networks, &types.NetworkSelectionElement{
@@ -218,6 +236,7 @@ func parsePodNetworkAnnotation(podNetworks, defaultNamespace string) ([]*types.N
}
func getCNIConfigFromFile(name string, confdir string) ([]byte, error) {
logging.Debugf("getCNIConfigFromFile: %s, %s", name, confdir)
// In the absence of valid keys in a Spec, the runtime (or
// meta-plugin) should load and execute a CNI .configlist
@@ -228,9 +247,9 @@ func getCNIConfigFromFile(name string, confdir string) ([]byte, error) {
files, err := libcni.ConfFiles(confdir, []string{".conf", ".json", ".conflist"})
switch {
case err != nil:
return nil, fmt.Errorf("No networks found in %s", confdir)
return nil, logging.Errorf("No networks found in %s", confdir)
case len(files) == 0:
return nil, fmt.Errorf("No networks found in %s", confdir)
return nil, logging.Errorf("No networks found in %s", confdir)
}
for _, confFile := range files {
@@ -238,31 +257,31 @@ func getCNIConfigFromFile(name string, confdir string) ([]byte, error) {
if strings.HasSuffix(confFile, ".conflist") {
confList, err = libcni.ConfListFromFile(confFile)
if err != nil {
return nil, fmt.Errorf("Error loading CNI conflist file %s: %v", confFile, err)
return nil, logging.Errorf("Error loading CNI conflist file %s: %v", confFile, err)
}
if confList.Name == name {
if confList.Name == name || name == "" {
return confList.Bytes, nil
}
} else {
conf, err := libcni.ConfFromFile(confFile)
if err != nil {
return nil, fmt.Errorf("Error loading CNI config file %s: %v", confFile, err)
return nil, logging.Errorf("Error loading CNI config file %s: %v", confFile, err)
}
if conf.Network.Name == name {
if conf.Network.Name == name || name == "" {
// Ensure the config has a "type" so we know what plugin to run.
// Also catches the case where somebody put a conflist into a conf file.
if conf.Network.Type == "" {
return nil, fmt.Errorf("Error loading CNI config file %s: no 'type'; perhaps this is a .conflist?", confFile)
return nil, logging.Errorf("Error loading CNI config file %s: no 'type'; perhaps this is a .conflist?", confFile)
}
return conf.Bytes, nil
}
}
}
return nil, fmt.Errorf("no network available in the name %s in cni dir %s", name, confdir)
return nil, logging.Errorf("no network available in the name %s in cni dir %s", name, confdir)
}
// getCNIConfigFromSpec reads a CNI JSON configuration from the NetworkAttachmentDefinition
@@ -271,10 +290,11 @@ func getCNIConfigFromSpec(configData, netName string) ([]byte, error) {
var rawConfig map[string]interface{}
var err error
logging.Debugf("getCNIConfigFromSpec: %s, %s", configData, netName)
configBytes := []byte(configData)
err = json.Unmarshal(configBytes, &rawConfig)
if err != nil {
return nil, fmt.Errorf("getCNIConfigFromSpec: failed to unmarshal Spec.Config: %v", err)
return nil, logging.Errorf("getCNIConfigFromSpec: failed to unmarshal Spec.Config: %v", err)
}
// Inject network name if missing from Config for the thick plugin case
@@ -282,7 +302,7 @@ func getCNIConfigFromSpec(configData, netName string) ([]byte, error) {
rawConfig["name"] = netName
configBytes, err = json.Marshal(rawConfig)
if err != nil {
return nil, fmt.Errorf("getCNIConfigFromSpec: failed to re-marshal Spec.Config: %v", err)
return nil, logging.Errorf("getCNIConfigFromSpec: failed to re-marshal Spec.Config: %v", err)
}
}
@@ -293,6 +313,7 @@ func cniConfigFromNetworkResource(customResource *types.NetworkAttachmentDefinit
var config []byte
var err error
logging.Debugf("cniConfigFromNetworkResource: %v, %s", customResource, confdir)
emptySpec := types.NetworkAttachmentDefinitionSpec{}
if customResource.Spec == emptySpec {
// Network Spec empty; generate delegate from CNI JSON config
@@ -300,7 +321,7 @@ func cniConfigFromNetworkResource(customResource *types.NetworkAttachmentDefinit
// name as the custom resource
config, err = getCNIConfigFromFile(customResource.Metadata.Name, confdir)
if err != nil {
return nil, fmt.Errorf("cniConfigFromNetworkResource: err in getCNIConfigFromFile: %v", err)
return nil, logging.Errorf("cniConfigFromNetworkResource: err in getCNIConfigFromFile: %v", err)
}
} else {
// Config contains a standard JSON-encoded CNI configuration
@@ -308,36 +329,67 @@ func cniConfigFromNetworkResource(customResource *types.NetworkAttachmentDefinit
// execute.
config, err = getCNIConfigFromSpec(customResource.Spec.Config, customResource.Metadata.Name)
if err != nil {
return nil, fmt.Errorf("cniConfigFromNetworkResource: err in getCNIConfigFromSpec: %v", err)
return nil, logging.Errorf("cniConfigFromNetworkResource: err in getCNIConfigFromSpec: %v", err)
}
}
return config, nil
}
func getKubernetesDelegate(client KubeClient, net *types.NetworkSelectionElement, confdir string) (*types.DelegateNetConf, error) {
func getKubernetesDelegate(client KubeClient, net *types.NetworkSelectionElement, confdir string, podID string, resourceMap map[string]*types.ResourceInfo) (*types.DelegateNetConf, map[string]*types.ResourceInfo, error) {
logging.Debugf("getKubernetesDelegate: %v, %v, %s", client, net, confdir)
rawPath := fmt.Sprintf("/apis/k8s.cni.cncf.io/v1/namespaces/%s/network-attachment-definitions/%s", net.Namespace, net.Name)
netData, err := client.GetRawWithPath(rawPath)
if err != nil {
return nil, fmt.Errorf("getKubernetesDelegate: failed to get network resource, refer Multus README.md for the usage guide: %v", err)
return nil, resourceMap, logging.Errorf("getKubernetesDelegate: failed to get network resource, refer Multus README.md for the usage guide: %v", err)
}
customResource := &types.NetworkAttachmentDefinition{}
if err := json.Unmarshal(netData, customResource); err != nil {
return nil, fmt.Errorf("getKubernetesDelegate: failed to get the netplugin data: %v", err)
return nil, resourceMap, logging.Errorf("getKubernetesDelegate: failed to get the netplugin data: %v", err)
}
// Get resourceName annotation from NetworkAttachmentDefinition
deviceID := ""
resourceName, ok := customResource.Metadata.Annotations[resourceNameAnnot]
if ok && podID != "" {
// ResourceName annotation is found; try to get device info from resourceMap
logging.Debugf("getKubernetesDelegate: found resourceName annotation : %s", resourceName)
if resourceMap == nil {
checkpoint, err := checkpoint.GetCheckpoint()
if err != nil {
return nil, resourceMap, logging.Errorf("getKubernetesDelegate: failed to get a checkpoint instance: %v", err)
}
resourceMap, err = checkpoint.GetComputeDeviceMap(podID)
if err != nil {
return nil, resourceMap, logging.Errorf("getKubernetesDelegate: failed to get resourceMap from kubelet checkpoint file: %v", err)
}
logging.Debugf("getKubernetesDelegate(): resourceMap instance: %+v", resourceMap)
}
entry, ok := resourceMap[resourceName]
if ok {
if idCount := len(entry.DeviceIDs); idCount > 0 && idCount > entry.Index {
deviceID = entry.DeviceIDs[entry.Index]
logging.Debugf("getKubernetesDelegate: podID: %s deviceID: %s", podID, deviceID)
entry.Index++ // increment Index for next delegate
}
}
}
configBytes, err := cniConfigFromNetworkResource(customResource, confdir)
if err != nil {
return nil, err
return nil, resourceMap, err
}
delegate, err := types.LoadDelegateNetConf(configBytes, net.InterfaceRequest)
delegate, err := types.LoadDelegateNetConf(configBytes, net, deviceID)
if err != nil {
return nil, err
return nil, resourceMap, err
}
return delegate, nil
return delegate, resourceMap, nil
}
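To make the device-ID bookkeeping above easier to follow, here is a self-contained sketch of the same pattern: each resource name maps to a list of allocated device IDs plus a cursor, and every delegate that references the resource consumes the next ID. The `resourceInfo` struct is a simplified stand-in for `types.ResourceInfo`, and the resource name and PCI addresses are invented for the example.
```
package main

import "fmt"

// resourceInfo is a simplified stand-in for types.ResourceInfo: the device IDs
// allocated to the pod for one resource name, plus a cursor into that list.
type resourceInfo struct {
	DeviceIDs []string
	Index     int
}

// nextDeviceID hands out device IDs in order, as getKubernetesDelegate does:
// the first delegate referencing the resource gets DeviceIDs[0], the next gets
// DeviceIDs[1], and so on. An empty string means nothing is left to hand out.
func nextDeviceID(resourceMap map[string]*resourceInfo, resourceName string) string {
	entry, ok := resourceMap[resourceName]
	if !ok {
		return ""
	}
	if len(entry.DeviceIDs) > 0 && len(entry.DeviceIDs) > entry.Index {
		id := entry.DeviceIDs[entry.Index]
		entry.Index++
		return id
	}
	return ""
}

func main() {
	resourceMap := map[string]*resourceInfo{
		"intel.com/sriov": {DeviceIDs: []string{"0000:03:02.0", "0000:03:02.1"}},
	}
	// Two delegates referencing the same resourceName get distinct devices.
	fmt.Println(nextDeviceID(resourceMap, "intel.com/sriov")) // 0000:03:02.0
	fmt.Println(nextDeviceID(resourceMap, "intel.com/sriov")) // 0000:03:02.1
	fmt.Println(nextDeviceID(resourceMap, "intel.com/sriov")) // (empty)
}
```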
type KubeClient interface {
@@ -349,6 +401,7 @@ type KubeClient interface {
func GetK8sArgs(args *skel.CmdArgs) (*types.K8sArgs, error) {
k8sArgs := &types.K8sArgs{}
logging.Debugf("GetK8sArgs: %v", args)
err := cnitypes.LoadArgs(args.Args, k8sArgs)
if err != nil {
return nil, err
@@ -359,10 +412,11 @@ func GetK8sArgs(args *skel.CmdArgs) (*types.K8sArgs, error) {
// Attempts to load Kubernetes-defined delegates and add them to the Multus config.
// Returns the number of Kubernetes-defined delegates added or an error.
func TryLoadK8sDelegates(k8sArgs *types.K8sArgs, conf *types.NetConf, kubeClient KubeClient) (int, *clientInfo, error) {
func TryLoadPodDelegates(k8sArgs *types.K8sArgs, conf *types.NetConf, kubeClient KubeClient) (int, *clientInfo, error) {
var err error
clientInfo := &clientInfo{}
logging.Debugf("TryLoadPodDelegates: %v, %v, %v", k8sArgs, conf, kubeClient)
kubeClient, err = GetK8sClient(conf.Kubeconfig, kubeClient)
if err != nil {
return 0, nil, err
@@ -371,28 +425,50 @@ func TryLoadK8sDelegates(k8sArgs *types.K8sArgs, conf *types.NetConf, kubeClient
if kubeClient == nil {
if len(conf.Delegates) == 0 {
// No available kube client and no delegates, we can't do anything
return 0, nil, fmt.Errorf("must have either Kubernetes config or delegates, refer Multus README.md for the usage guide")
return 0, nil, logging.Errorf("must have either Kubernetes config or delegates, refer Multus README.md for the usage guide")
}
return 0, nil, nil
}
setKubeClientInfo(clientInfo, kubeClient, k8sArgs)
delegates, err := GetK8sNetwork(kubeClient, k8sArgs, conf.ConfDir)
// Get the pod info. If we cannot get it, fall back to the cached delegates
pod, err := kubeClient.GetPod(string(k8sArgs.K8S_POD_NAMESPACE), string(k8sArgs.K8S_POD_NAME))
if err != nil {
if _, ok := err.(*NoK8sNetworkError); ok {
return 0, nil, nil
logging.Debugf("tryLoadK8sDelegates: Err in loading K8s cluster default network from pod annotation: %v, use cached delegates", err)
return 0, nil, nil
}
delegate, err := tryLoadK8sPodDefaultNetwork(kubeClient, pod, conf)
if err != nil {
return 0, nil, logging.Errorf("tryLoadK8sDelegates: Err in loading K8s cluster default network from pod annotation: %v", err)
}
if delegate != nil {
logging.Debugf("tryLoadK8sDelegates: Overwrite the cluster default network with %v from pod annotations", delegate)
conf.Delegates[0] = delegate
}
networks, err := GetPodNetwork(pod)
if networks != nil {
delegates, err := GetNetworkDelegates(kubeClient, pod, networks, conf.ConfDir, conf.NamespaceIsolation)
if err != nil {
if _, ok := err.(*NoK8sNetworkError); ok {
return 0, clientInfo, nil
}
return 0, nil, logging.Errorf("Multus: Err in getting k8s network from pod: %v", err)
}
return 0, nil, fmt.Errorf("Multus: Err in getting k8s network from pod: %v", err)
}
if err = conf.AddDelegates(delegates); err != nil {
return 0, nil, err
if err = conf.AddDelegates(delegates); err != nil {
return 0, nil, err
}
return len(delegates), clientInfo, nil
}
return len(delegates), clientInfo, nil
return 0, clientInfo, nil
}
func GetK8sClient(kubeconfig string, kubeClient KubeClient) (KubeClient, error) {
logging.Debugf("GetK8sClient: %s, %v", kubeconfig, kubeClient)
// If we get a valid kubeClient (eg from testcases) just return that
// one.
if kubeClient != nil {
@@ -407,19 +483,23 @@ func GetK8sClient(kubeconfig string, kubeClient KubeClient) (KubeClient, error)
// uses the current context in kubeconfig
config, err = clientcmd.BuildConfigFromFlags("", kubeconfig)
if err != nil {
return nil, fmt.Errorf("GetK8sClient: failed to get context for the kubeconfig %v, refer Multus README.md for the usage guide: %v", kubeconfig, err)
return nil, logging.Errorf("GetK8sClient: failed to get context for the kubeconfig %v, refer Multus README.md for the usage guide: %v", kubeconfig, err)
}
} else if os.Getenv("KUBERNETES_SERVICE_HOST") != "" && os.Getenv("KUBERNETES_SERVICE_PORT") != "" {
// Try in-cluster config where multus might be running in a kubernetes pod
config, err = rest.InClusterConfig()
if err != nil {
return nil, fmt.Errorf("createK8sClient: failed to get context for in-cluster kube config, refer Multus README.md for the usage guide: %v", err)
return nil, logging.Errorf("createK8sClient: failed to get context for in-cluster kube config, refer Multus README.md for the usage guide: %v", err)
}
} else {
// No kubernetes config; assume we shouldn't talk to Kube at all
return nil, nil
}
// Specify that we use gRPC
config.AcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"
config.ContentType = "application/vnd.kubernetes.protobuf"
// creates the clientset
client, err := kubernetes.NewForConfig(config)
if err != nil {
@@ -429,12 +509,11 @@ func GetK8sClient(kubeconfig string, kubeClient KubeClient) (KubeClient, error)
return &defaultKubeClient{client: client}, nil
}
func GetK8sNetwork(k8sclient KubeClient, k8sArgs *types.K8sArgs, confdir string) ([]*types.DelegateNetConf, error) {
func GetPodNetwork(pod *v1.Pod) ([]*types.NetworkSelectionElement, error) {
logging.Debugf("GetPodNetwork: %v", pod)
netAnnot, defaultNamespace, err := getPodNetworkAnnotation(k8sclient, k8sArgs)
if err != nil {
return nil, err
}
netAnnot := pod.Annotations[networkAttachmentAnnot]
defaultNamespace := pod.ObjectMeta.Namespace
if len(netAnnot) == 0 {
return nil, &NoK8sNetworkError{"no kubernetes network found"}
@@ -444,16 +523,178 @@ func GetK8sNetwork(k8sclient KubeClient, k8sArgs *types.K8sArgs, confdir string)
if err != nil {
return nil, err
}
return networks, nil
}
func GetNetworkDelegates(k8sclient KubeClient, pod *v1.Pod, networks []*types.NetworkSelectionElement, confdir string, confnamespaceIsolation bool) ([]*types.DelegateNetConf, error) {
logging.Debugf("GetNetworkDelegates: %v, %v, %v, %v, %v", k8sclient, pod, networks, confdir, confnamespaceIsolation)
// resourceMap holds Pod device allocation information; only initialized if CRD contains 'resourceName' annotation.
// This will only be initialized once and all delegate objects can reference this to look up device info.
var resourceMap map[string]*types.ResourceInfo
// Read all network objects referenced by 'networks'
var delegates []*types.DelegateNetConf
defaultNamespace := pod.ObjectMeta.Namespace
podID := pod.UID
for _, net := range networks {
delegate, err := getKubernetesDelegate(k8sclient, net, confdir)
// The pod's namespace (stored as defaultNamespace) is not necessarily the same as the annotation's target namespace (net.Namespace).
// When namespace isolation is enabled, such a mismatch is an error.
if confnamespaceIsolation {
if defaultNamespace != net.Namespace {
return nil, logging.Errorf("GetPodNetwork: namespace isolation violation: podnamespace: %v / target namespace: %v", defaultNamespace, net.Namespace)
}
}
delegate, updatedResourceMap, err := getKubernetesDelegate(k8sclient, net, confdir, string(podID), resourceMap)
if err != nil {
return nil, fmt.Errorf("GetK8sNetwork: failed getting the delegate: %v", err)
return nil, logging.Errorf("GetPodNetwork: failed getting the delegate: %v", err)
}
delegates = append(delegates, delegate)
resourceMap = updatedResourceMap
}
return delegates, nil
}
func getDefaultNetDelegateCRD(client KubeClient, net, confdir, namespace string) (*types.DelegateNetConf, error) {
logging.Debugf("getDefaultNetDelegateCRD: %v, %v, %s, %s", client, net, confdir, namespace)
rawPath := fmt.Sprintf("/apis/k8s.cni.cncf.io/v1/namespaces/%s/network-attachment-definitions/%s", namespace, net)
netData, err := client.GetRawWithPath(rawPath)
if err != nil {
return nil, logging.Errorf("getDefaultNetDelegateCRD: failed to get network resource, refer Multus README.md for the usage guide: %v", err)
}
customResource := &types.NetworkAttachmentDefinition{}
if err := json.Unmarshal(netData, customResource); err != nil {
return nil, logging.Errorf("getDefaultNetDelegateCRD: failed to get the netplugin data: %v", err)
}
configBytes, err := cniConfigFromNetworkResource(customResource, confdir)
if err != nil {
return nil, err
}
delegate, err := types.LoadDelegateNetConf(configBytes, nil, "")
if err != nil {
return nil, err
}
return delegate, nil
}
func getNetDelegate(client KubeClient, netname, confdir, namespace string) (*types.DelegateNetConf, error) {
logging.Debugf("getNetDelegate: %v, %v, %v, %s", client, netname, confdir, namespace)
// option1) search CRD object for the network
delegate, err := getDefaultNetDelegateCRD(client, netname, confdir, namespace)
if err == nil {
return delegate, nil
}
// option2) search CNI json config file
var configBytes []byte
configBytes, err = getCNIConfigFromFile(netname, confdir)
if err == nil {
delegate, err := types.LoadDelegateNetConf(configBytes, nil, "")
if err != nil {
return nil, err
}
return delegate, nil
}
// option3) search directory
fInfo, err := os.Stat(netname)
if err == nil {
if fInfo.IsDir() {
files, err := libcni.ConfFiles(netname, []string{".conf", ".conflist"})
if err != nil {
return nil, err
}
if len(files) > 0 {
var configBytes []byte
configBytes, err = getCNIConfigFromFile("", netname)
if err == nil {
delegate, err := types.LoadDelegateNetConf(configBytes, nil, "")
if err != nil {
return nil, err
}
return delegate, nil
}
return nil, err
}
}
}
return nil, logging.Errorf("getNetDelegate: cannot find network: %v", netname)
}
// GetDefaultNetworks parses the 'clusterNetwork' and 'defaultNetworks' config, gets the network JSON and puts it into netconf.Delegates.
func GetDefaultNetworks(k8sArgs *types.K8sArgs, conf *types.NetConf, kubeClient KubeClient) error {
logging.Debugf("GetDefaultNetworks: %v, %v, %v", k8sArgs, conf, kubeClient)
var delegates []*types.DelegateNetConf
kubeClient, err := GetK8sClient(conf.Kubeconfig, kubeClient)
if err != nil {
return err
}
if kubeClient == nil {
if len(conf.Delegates) == 0 {
// No available kube client and no delegates, we can't do anything
return logging.Errorf("must have either Kubernetes config or delegates, refer Multus README.md for the usage guide")
}
return nil
}
delegate, err := getNetDelegate(kubeClient, conf.ClusterNetwork, conf.ConfDir, conf.MultusNamespace)
if err != nil {
return err
}
delegate.MasterPlugin = true
delegates = append(delegates, delegate)
// Pods in system namespaces do not get the additional default networks for now.
if !types.CheckSystemNamespaces(string(k8sArgs.K8S_POD_NAMESPACE), conf.SystemNamespaces) {
for _, netname := range conf.DefaultNetworks {
delegate, err := getNetDelegate(kubeClient, netname, conf.ConfDir, conf.MultusNamespace)
if err != nil {
return err
}
delegates = append(delegates, delegate)
}
}
if err = conf.AddDelegates(delegates); err != nil {
return err
}
return nil
}
// tryLoadK8sPodDefaultNetwork gets the pod's default network from its annotations
func tryLoadK8sPodDefaultNetwork(kubeClient KubeClient, pod *v1.Pod, conf *types.NetConf) (*types.DelegateNetConf, error) {
var netAnnot string
logging.Debugf("tryLoadK8sPodDefaultNetwork: %v, %v, %v", kubeClient, pod, conf)
netAnnot, ok := pod.Annotations[defaultNetAnnot]
if !ok {
logging.Debugf("tryLoadK8sPodDefaultNetwork: Pod default network annotation is not defined")
return nil, nil
}
// The CRD object of default network should only be defined in multusNamespace
networks, err := parsePodNetworkAnnotation(netAnnot, conf.MultusNamespace)
if err != nil {
return nil, logging.Errorf("tryLoadK8sPodDefaultNetwork: failed to parse CRD object: %v", err)
}
if len(networks) > 1 {
return nil, logging.Errorf("tryLoadK8sPodDefaultNetwork: more than one default network is specified: %s", netAnnot)
}
delegate, _, err := getKubernetesDelegate(kubeClient, networks[0], conf.ConfDir, "", nil)
if err != nil {
return nil, logging.Errorf("tryLoadK8sPodDefaultNetwork: failed getting the delegate: %v", err)
}
delegate.MasterPlugin = true
return delegate, nil
}
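As a small, assumed example of the input this function reads, the snippet below shows the shape of the `v1.multus-cni.io/default-network` pod annotation; the network name `macvlan-conf` is hypothetical, and only a single attachment name is accepted here.
```
package main

import "fmt"

const defaultNetAnnot = "v1.multus-cni.io/default-network"

func main() {
	// The annotation value is a single network-attachment-definition name;
	// tryLoadK8sPodDefaultNetwork rejects values naming more than one network.
	annotations := map[string]string{
		defaultNetAnnot: "macvlan-conf",
	}
	if net, ok := annotations[defaultNetAnnot]; ok {
		fmt.Printf("pod overrides the cluster default network with %q\n", net)
	}
}
```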


@@ -25,6 +25,7 @@ import (
testutils "github.com/intel/multus-cni/testing"
"github.com/containernetworking/cni/pkg/skel"
"github.com/intel/multus-cni/types"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
@@ -50,7 +51,7 @@ var _ = Describe("k8sclient operations", func() {
})
It("retrieves delegates from kubernetes using simple format annotation", func() {
fakePod := testutils.NewFakePod("testpod", "net1,net2")
fakePod := testutils.NewFakePod("testpod", "net1,net2", "")
net1 := `{
"name": "net1",
"type": "mynet",
@@ -81,7 +82,10 @@ var _ = Describe("k8sclient operations", func() {
Expect(err).NotTo(HaveOccurred())
k8sArgs, err := GetK8sArgs(args)
Expect(err).NotTo(HaveOccurred())
delegates, err := GetK8sNetwork(kubeClient, k8sArgs, tmpDir)
pod, err := kubeClient.GetPod(string(k8sArgs.K8S_POD_NAMESPACE), string(k8sArgs.K8S_POD_NAME))
networks, err := GetPodNetwork(pod)
Expect(err).NotTo(HaveOccurred())
delegates, err := GetNetworkDelegates(kubeClient, pod, networks, tmpDir, false)
Expect(err).NotTo(HaveOccurred())
Expect(fKubeClient.PodCount).To(Equal(1))
Expect(fKubeClient.NetCount).To(Equal(2))
@@ -96,7 +100,7 @@ var _ = Describe("k8sclient operations", func() {
})
It("fails when the network does not exist", func() {
fakePod := testutils.NewFakePod("testpod", "net1,net2")
fakePod := testutils.NewFakePod("testpod", "net1,net2", "")
net3 := `{
"name": "net3",
"type": "mynet3",
@@ -114,9 +118,12 @@ var _ = Describe("k8sclient operations", func() {
Expect(err).NotTo(HaveOccurred())
k8sArgs, err := GetK8sArgs(args)
Expect(err).NotTo(HaveOccurred())
delegates, err := GetK8sNetwork(kubeClient, k8sArgs, tmpDir)
pod, err := kubeClient.GetPod(string(k8sArgs.K8S_POD_NAMESPACE), string(k8sArgs.K8S_POD_NAME))
networks, err := GetPodNetwork(pod)
Expect(err).NotTo(HaveOccurred())
delegates, err := GetNetworkDelegates(kubeClient, pod, networks, tmpDir, false)
Expect(len(delegates)).To(Equal(0))
Expect(err).To(MatchError("GetK8sNetwork: failed getting the delegate: getKubernetesDelegate: failed to get network resource, refer Multus README.md for the usage guide: resource not found"))
Expect(err).To(MatchError("GetPodNetwork: failed getting the delegate: getKubernetesDelegate: failed to get network resource, refer Multus README.md for the usage guide: resource not found"))
})
It("retrieves delegates from kubernetes using JSON format annotation", func() {
@@ -131,7 +138,7 @@ var _ = Describe("k8sclient operations", func() {
"name":"net3",
"namespace":"other-ns"
}
]`)
]`, "")
args := &skel.CmdArgs{
Args: fmt.Sprintf("K8S_POD_NAME=%s;K8S_POD_NAMESPACE=%s", fakePod.ObjectMeta.Name, fakePod.ObjectMeta.Namespace),
}
@@ -158,7 +165,10 @@ var _ = Describe("k8sclient operations", func() {
Expect(err).NotTo(HaveOccurred())
k8sArgs, err := GetK8sArgs(args)
Expect(err).NotTo(HaveOccurred())
delegates, err := GetK8sNetwork(kubeClient, k8sArgs, tmpDir)
pod, err := kubeClient.GetPod(string(k8sArgs.K8S_POD_NAMESPACE), string(k8sArgs.K8S_POD_NAME))
networks, err := GetPodNetwork(pod)
Expect(err).NotTo(HaveOccurred())
delegates, err := GetNetworkDelegates(kubeClient, pod, networks, tmpDir, false)
Expect(err).NotTo(HaveOccurred())
Expect(fKubeClient.PodCount).To(Equal(1))
Expect(fKubeClient.NetCount).To(Equal(3))
@@ -173,7 +183,7 @@ var _ = Describe("k8sclient operations", func() {
})
It("fails when the JSON format annotation is invalid", func() {
fakePod := testutils.NewFakePod("testpod", "[adsfasdfasdfasf]")
fakePod := testutils.NewFakePod("testpod", "[adsfasdfasdfasf]", "")
args := &skel.CmdArgs{
Args: fmt.Sprintf("K8S_POD_NAME=%s;K8S_POD_NAMESPACE=%s", fakePod.ObjectMeta.Name, fakePod.ObjectMeta.Namespace),
}
@@ -185,13 +195,14 @@ var _ = Describe("k8sclient operations", func() {
Expect(err).NotTo(HaveOccurred())
k8sArgs, err := GetK8sArgs(args)
Expect(err).NotTo(HaveOccurred())
delegates, err := GetK8sNetwork(kubeClient, k8sArgs, tmpDir)
Expect(len(delegates)).To(Equal(0))
pod, err := kubeClient.GetPod(string(k8sArgs.K8S_POD_NAMESPACE), string(k8sArgs.K8S_POD_NAME))
networks, err := GetPodNetwork(pod)
Expect(len(networks)).To(Equal(0))
Expect(err).To(MatchError("parsePodNetworkAnnotation: failed to parse pod Network Attachment Selection Annotation JSON format: invalid character 'a' looking for beginning of value"))
})
It("retrieves delegates from kubernetes using on-disk config files", func() {
fakePod := testutils.NewFakePod("testpod", "net1,net2")
fakePod := testutils.NewFakePod("testpod", "net1,net2", "")
args := &skel.CmdArgs{
Args: fmt.Sprintf("K8S_POD_NAME=%s;K8S_POD_NAMESPACE=%s", fakePod.ObjectMeta.Name, fakePod.ObjectMeta.Namespace),
}
@@ -215,7 +226,10 @@ var _ = Describe("k8sclient operations", func() {
Expect(err).NotTo(HaveOccurred())
k8sArgs, err := GetK8sArgs(args)
Expect(err).NotTo(HaveOccurred())
delegates, err := GetK8sNetwork(kubeClient, k8sArgs, tmpDir)
pod, err := kubeClient.GetPod(string(k8sArgs.K8S_POD_NAMESPACE), string(k8sArgs.K8S_POD_NAME))
networks, err := GetPodNetwork(pod)
Expect(err).NotTo(HaveOccurred())
delegates, err := GetNetworkDelegates(kubeClient, pod, networks, tmpDir, false)
Expect(err).NotTo(HaveOccurred())
Expect(fKubeClient.PodCount).To(Equal(1))
Expect(fKubeClient.NetCount).To(Equal(2))
@@ -228,7 +242,7 @@ var _ = Describe("k8sclient operations", func() {
})
It("injects network name into minimal thick plugin CNI config", func() {
fakePod := testutils.NewFakePod("testpod", "net1")
fakePod := testutils.NewFakePod("testpod", "net1", "")
args := &skel.CmdArgs{
Args: fmt.Sprintf("K8S_POD_NAME=%s;K8S_POD_NAMESPACE=%s", fakePod.ObjectMeta.Name, fakePod.ObjectMeta.Namespace),
}
@@ -241,7 +255,10 @@ var _ = Describe("k8sclient operations", func() {
Expect(err).NotTo(HaveOccurred())
k8sArgs, err := GetK8sArgs(args)
Expect(err).NotTo(HaveOccurred())
delegates, err := GetK8sNetwork(kubeClient, k8sArgs, tmpDir)
pod, err := kubeClient.GetPod(string(k8sArgs.K8S_POD_NAMESPACE), string(k8sArgs.K8S_POD_NAME))
networks, err := GetPodNetwork(pod)
Expect(err).NotTo(HaveOccurred())
delegates, err := GetNetworkDelegates(kubeClient, pod, networks, tmpDir, false)
Expect(err).NotTo(HaveOccurred())
Expect(fKubeClient.PodCount).To(Equal(1))
Expect(fKubeClient.NetCount).To(Equal(1))
@@ -252,7 +269,7 @@ var _ = Describe("k8sclient operations", func() {
})
It("fails when on-disk config file is not valid", func() {
fakePod := testutils.NewFakePod("testpod", "net1,net2")
fakePod := testutils.NewFakePod("testpod", "net1,net2", "")
args := &skel.CmdArgs{
Args: fmt.Sprintf("K8S_POD_NAME=%s;K8S_POD_NAMESPACE=%s", fakePod.ObjectMeta.Name, fakePod.ObjectMeta.Namespace),
}
@@ -272,8 +289,326 @@ var _ = Describe("k8sclient operations", func() {
Expect(err).NotTo(HaveOccurred())
k8sArgs, err := GetK8sArgs(args)
Expect(err).NotTo(HaveOccurred())
delegates, err := GetK8sNetwork(kubeClient, k8sArgs, tmpDir)
pod, err := kubeClient.GetPod(string(k8sArgs.K8S_POD_NAMESPACE), string(k8sArgs.K8S_POD_NAME))
networks, err := GetPodNetwork(pod)
Expect(err).NotTo(HaveOccurred())
delegates, err := GetNetworkDelegates(kubeClient, pod, networks, tmpDir, false)
Expect(len(delegates)).To(Equal(0))
Expect(err).To(MatchError(fmt.Sprintf("GetK8sNetwork: failed getting the delegate: cniConfigFromNetworkResource: err in getCNIConfigFromFile: Error loading CNI config file %s: error parsing configuration: invalid character 'a' looking for beginning of value", net2Name)))
Expect(err).To(MatchError(fmt.Sprintf("GetPodNetwork: failed getting the delegate: cniConfigFromNetworkResource: err in getCNIConfigFromFile: Error loading CNI config file %s: error parsing configuration: invalid character 'a' looking for beginning of value", net2Name)))
})
It("retrieves cluster network from CRD", func() {
fakePod := testutils.NewFakePod("testpod", "", "")
conf := `{
"name":"node-cni-network",
"type":"multus",
"clusterNetwork": "myCRD1",
"kubeconfig":"/etc/kubernetes/node-kubeconfig.yaml"
}`
netConf, err := types.LoadNetConf([]byte(conf))
Expect(err).NotTo(HaveOccurred())
args := &skel.CmdArgs{
Args: fmt.Sprintf("K8S_POD_NAME=%s;K8S_POD_NAMESPACE=%s", fakePod.ObjectMeta.Name, fakePod.ObjectMeta.Namespace),
}
fKubeClient := testutils.NewFakeKubeClient()
fKubeClient.AddNetConfig("kube-system", "myCRD1", "{\"type\": \"mynet\"}")
fKubeClient.AddPod(fakePod)
kubeClient, err := GetK8sClient("", fKubeClient)
Expect(err).NotTo(HaveOccurred())
k8sArgs, err := GetK8sArgs(args)
Expect(err).NotTo(HaveOccurred())
err = GetDefaultNetworks(k8sArgs, netConf, kubeClient)
Expect(err).NotTo(HaveOccurred())
Expect(len(netConf.Delegates)).To(Equal(1))
Expect(netConf.Delegates[0].Conf.Name).To(Equal("myCRD1"))
Expect(netConf.Delegates[0].Conf.Type).To(Equal("mynet"))
})
It("retrieves default networks from CRD", func() {
fakePod := testutils.NewFakePod("testpod", "", "")
conf := `{
"name":"node-cni-network",
"type":"multus",
"clusterNetwork": "myCRD1",
"defaultNetworks": ["myCRD2"],
"kubeconfig":"/etc/kubernetes/node-kubeconfig.yaml"
}`
netConf, err := types.LoadNetConf([]byte(conf))
Expect(err).NotTo(HaveOccurred())
args := &skel.CmdArgs{
Args: fmt.Sprintf("K8S_POD_NAME=%s;K8S_POD_NAMESPACE=%s", fakePod.ObjectMeta.Name, fakePod.ObjectMeta.Namespace),
}
fKubeClient := testutils.NewFakeKubeClient()
fKubeClient.AddNetConfig("kube-system", "myCRD1", "{\"type\": \"mynet\"}")
fKubeClient.AddNetConfig("kube-system", "myCRD2", "{\"type\": \"mynet2\"}")
fKubeClient.AddPod(fakePod)
kubeClient, err := GetK8sClient("", fKubeClient)
Expect(err).NotTo(HaveOccurred())
k8sArgs, err := GetK8sArgs(args)
Expect(err).NotTo(HaveOccurred())
err = GetDefaultNetworks(k8sArgs, netConf, kubeClient)
Expect(err).NotTo(HaveOccurred())
Expect(len(netConf.Delegates)).To(Equal(2))
Expect(netConf.Delegates[0].Conf.Name).To(Equal("myCRD1"))
Expect(netConf.Delegates[0].Conf.Type).To(Equal("mynet"))
Expect(netConf.Delegates[1].Conf.Name).To(Equal("myCRD2"))
Expect(netConf.Delegates[1].Conf.Type).To(Equal("mynet2"))
})
It("ignore default networks from CRD in case of kube-system namespace", func() {
fakePod := testutils.NewFakePod("testpod", "", "")
// overwrite namespace
fakePod.ObjectMeta.Namespace = "kube-system"
conf := `{
"name":"node-cni-network",
"type":"multus",
"clusterNetwork": "myCRD1",
"defaultNetworks": ["myCRD2"],
"kubeconfig":"/etc/kubernetes/node-kubeconfig.yaml"
}`
netConf, err := types.LoadNetConf([]byte(conf))
Expect(err).NotTo(HaveOccurred())
args := &skel.CmdArgs{
Args: fmt.Sprintf("K8S_POD_NAME=%s;K8S_POD_NAMESPACE=%s", fakePod.ObjectMeta.Name, fakePod.ObjectMeta.Namespace),
}
fKubeClient := testutils.NewFakeKubeClient()
fKubeClient.AddNetConfig("kube-system", "myCRD1", "{\"type\": \"mynet\"}")
fKubeClient.AddNetConfig("kube-system", "myCRD2", "{\"type\": \"mynet2\"}")
fKubeClient.AddPod(fakePod)
kubeClient, err := GetK8sClient("", fKubeClient)
Expect(err).NotTo(HaveOccurred())
k8sArgs, err := GetK8sArgs(args)
Expect(err).NotTo(HaveOccurred())
err = GetDefaultNetworks(k8sArgs, netConf, kubeClient)
Expect(err).NotTo(HaveOccurred())
Expect(len(netConf.Delegates)).To(Equal(1))
Expect(netConf.Delegates[0].Conf.Name).To(Equal("myCRD1"))
Expect(netConf.Delegates[0].Conf.Type).To(Equal("mynet"))
})
It("retrieves cluster network from file", func() {
fakePod := testutils.NewFakePod("testpod", "", "")
conf := `{
"name":"node-cni-network",
"type":"multus",
"clusterNetwork": "myFile1",
"kubeconfig":"/etc/kubernetes/node-kubeconfig.yaml"
}`
netConf, err := types.LoadNetConf([]byte(conf))
netConf.ConfDir = tmpDir
Expect(err).NotTo(HaveOccurred())
args := &skel.CmdArgs{
Args: fmt.Sprintf("K8S_POD_NAME=%s;K8S_POD_NAMESPACE=%s", fakePod.ObjectMeta.Name, fakePod.ObjectMeta.Namespace),
}
fKubeClient := testutils.NewFakeKubeClient()
fKubeClient.AddPod(fakePod)
net1Name := filepath.Join(tmpDir, "10-net1.conf")
fKubeClient.AddNetFile(fakePod.ObjectMeta.Namespace, "net1", net1Name, `{
"name": "myFile1",
"type": "mynet",
"cniVersion": "0.2.0"
}`)
kubeClient, err := GetK8sClient("", fKubeClient)
Expect(err).NotTo(HaveOccurred())
k8sArgs, err := GetK8sArgs(args)
Expect(err).NotTo(HaveOccurred())
err = GetDefaultNetworks(k8sArgs, netConf, kubeClient)
Expect(err).NotTo(HaveOccurred())
Expect(len(netConf.Delegates)).To(Equal(1))
Expect(netConf.Delegates[0].Conf.Name).To(Equal("myFile1"))
Expect(netConf.Delegates[0].Conf.Type).To(Equal("mynet"))
})
It("retrieves cluster network from path", func() {
fakePod := testutils.NewFakePod("testpod", "", "")
conf := fmt.Sprintf(`{
"name":"node-cni-network",
"type":"multus",
"clusterNetwork": "%s",
"kubeconfig":"/etc/kubernetes/node-kubeconfig.yaml"
}`, tmpDir)
netConf, err := types.LoadNetConf([]byte(conf))
Expect(err).NotTo(HaveOccurred())
args := &skel.CmdArgs{
Args: fmt.Sprintf("K8S_POD_NAME=%s;K8S_POD_NAMESPACE=%s", fakePod.ObjectMeta.Name, fakePod.ObjectMeta.Namespace),
}
fKubeClient := testutils.NewFakeKubeClient()
net1Name := filepath.Join(tmpDir, "10-net1.conf")
fKubeClient.AddNetFile(fakePod.ObjectMeta.Namespace, "10-net1", net1Name, `{
"name": "net1",
"type": "mynet",
"cniVersion": "0.2.0"
}`)
fKubeClient.AddPod(fakePod)
kubeClient, err := GetK8sClient("", fKubeClient)
Expect(err).NotTo(HaveOccurred())
k8sArgs, err := GetK8sArgs(args)
Expect(err).NotTo(HaveOccurred())
err = GetDefaultNetworks(k8sArgs, netConf, kubeClient)
Expect(err).NotTo(HaveOccurred())
Expect(len(netConf.Delegates)).To(Equal(1))
Expect(netConf.Delegates[0].Conf.Name).To(Equal("net1"))
Expect(netConf.Delegates[0].Conf.Type).To(Equal("mynet"))
})
It("Error in case of CRD not found", func() {
fakePod := testutils.NewFakePod("testpod", "", "")
conf := `{
"name":"node-cni-network",
"type":"multus",
"clusterNetwork": "myCRD1",
"kubeconfig":"/etc/kubernetes/node-kubeconfig.yaml"
}`
netConf, err := types.LoadNetConf([]byte(conf))
Expect(err).NotTo(HaveOccurred())
args := &skel.CmdArgs{
Args: fmt.Sprintf("K8S_POD_NAME=%s;K8S_POD_NAMESPACE=%s", fakePod.ObjectMeta.Name, fakePod.ObjectMeta.Namespace),
}
fKubeClient := testutils.NewFakeKubeClient()
fKubeClient.AddPod(fakePod)
kubeClient, err := GetK8sClient("", fKubeClient)
Expect(err).NotTo(HaveOccurred())
k8sArgs, err := GetK8sArgs(args)
Expect(err).NotTo(HaveOccurred())
err = GetDefaultNetworks(k8sArgs, netConf, kubeClient)
Expect(err).To(HaveOccurred())
})
It("overwrite cluster network when Pod annotation is set", func() {
fakePod := testutils.NewFakePod("testpod", "", "net1")
conf := `{
"name":"node-cni-network",
"type":"multus",
"clusterNetwork": "net2",
"multusNamespace" : "kube-system",
"kubeconfig":"/etc/kubernetes/node-kubeconfig.yaml"
}`
netConf, err := types.LoadNetConf([]byte(conf))
Expect(err).NotTo(HaveOccurred())
args := &skel.CmdArgs{
Args: fmt.Sprintf("K8S_POD_NAME=%s;K8S_POD_NAMESPACE=%s", fakePod.ObjectMeta.Name, fakePod.ObjectMeta.Namespace),
}
fKubeClient := testutils.NewFakeKubeClient()
fKubeClient.AddNetConfig("kube-system", "net1", "{\"type\": \"mynet1\"}")
fKubeClient.AddNetConfig("kube-system", "net2", "{\"type\": \"mynet2\"}")
fKubeClient.AddPod(fakePod)
kubeClient, err := GetK8sClient("", fKubeClient)
Expect(err).NotTo(HaveOccurred())
k8sArgs, err := GetK8sArgs(args)
Expect(err).NotTo(HaveOccurred())
err = GetDefaultNetworks(k8sArgs, netConf, kubeClient)
Expect(err).NotTo(HaveOccurred())
Expect(len(netConf.Delegates)).To(Equal(1))
Expect(netConf.Delegates[0].Conf.Name).To(Equal("net2"))
Expect(netConf.Delegates[0].Conf.Type).To(Equal("mynet2"))
numK8sDelegates, _, err := TryLoadPodDelegates(k8sArgs, netConf, kubeClient)
Expect(err).NotTo(HaveOccurred())
Expect(numK8sDelegates).To(Equal(0))
Expect(netConf.Delegates[0].Conf.Name).To(Equal("net1"))
Expect(netConf.Delegates[0].Conf.Type).To(Equal("mynet1"))
})
It("overwrite multus config when Pod annotation is set", func() {
fakePod := testutils.NewFakePod("testpod", "", "net1")
conf := `{
"name":"node-cni-network",
"type":"multus",
"kubeconfig":"/etc/kubernetes/node-kubeconfig.yaml",
"delegates": [{
"type": "mynet2",
"name": "net2"
}]
}`
netConf, err := types.LoadNetConf([]byte(conf))
Expect(netConf.Delegates[0].Conf.Name).To(Equal("net2"))
Expect(netConf.Delegates[0].Conf.Type).To(Equal("mynet2"))
Expect(err).NotTo(HaveOccurred())
args := &skel.CmdArgs{
Args: fmt.Sprintf("K8S_POD_NAME=%s;K8S_POD_NAMESPACE=%s", fakePod.ObjectMeta.Name, fakePod.ObjectMeta.Namespace),
}
fKubeClient := testutils.NewFakeKubeClient()
fKubeClient.AddPod(fakePod)
fKubeClient.AddNetConfig("kube-system", "net1", "{\"type\": \"mynet1\"}")
kubeClient, err := GetK8sClient("", fKubeClient)
Expect(err).NotTo(HaveOccurred())
k8sArgs, err := GetK8sArgs(args)
Expect(err).NotTo(HaveOccurred())
numK8sDelegates, _, err := TryLoadPodDelegates(k8sArgs, netConf, kubeClient)
Expect(err).NotTo(HaveOccurred())
Expect(numK8sDelegates).To(Equal(0))
Expect(netConf.Delegates[0].Conf.Name).To(Equal("net1"))
Expect(netConf.Delegates[0].Conf.Type).To(Equal("mynet1"))
})
It("Errors when namespace isolation is violated", func() {
fakePod := testutils.NewFakePod("testpod", "kube-system/net1", "")
conf := `{
"name":"node-cni-network",
"type":"multus",
"delegates": [{
"name": "weave1",
"cniVersion": "0.2.0",
"type": "weave-net"
}],
"kubeconfig":"/etc/kubernetes/node-kubeconfig.yaml",
"namespaceIsolation": true
}`
netConf, err := types.LoadNetConf([]byte(conf))
Expect(err).NotTo(HaveOccurred())
net1 := `{
"name": "net1",
"type": "mynet",
"cniVersion": "0.2.0"
}`
args := &skel.CmdArgs{
Args: fmt.Sprintf("K8S_POD_NAME=%s;K8S_POD_NAMESPACE=%s", fakePod.ObjectMeta.Name, fakePod.ObjectMeta.Namespace),
}
fKubeClient := testutils.NewFakeKubeClient()
fKubeClient.AddPod(fakePod)
fKubeClient.AddNetConfig("kube-system", "net1", net1)
kubeClient, err := GetK8sClient("", fKubeClient)
Expect(err).NotTo(HaveOccurred())
k8sArgs, err := GetK8sArgs(args)
Expect(err).NotTo(HaveOccurred())
pod, err := kubeClient.GetPod(string(k8sArgs.K8S_POD_NAMESPACE), string(k8sArgs.K8S_POD_NAME))
networks, err := GetPodNetwork(pod)
Expect(err).NotTo(HaveOccurred())
_, err = GetNetworkDelegates(kubeClient, pod, networks, tmpDir, netConf.NamespaceIsolation)
Expect(err).To(HaveOccurred())
Expect(err).To(MatchError("GetPodNetwork: namespace isolation violation: podnamespace: test / target namespace: kube-system"))
})
})


@@ -29,20 +29,24 @@ type Level uint32
const (
PanicLevel Level = iota
ErrorLevel
VerboseLevel
DebugLevel
MaxLevel
UnknownLevel
)
var loggingStderr bool
var loggingFp *os.File
var loggingFp *os.File
var loggingLevel Level
const defaultTimestampFormat = time.RFC3339
func (l Level) String() string {
switch l {
case PanicLevel:
return "panic"
case VerboseLevel:
return "verbose"
case ErrorLevel:
return "error"
case DebugLevel:
@@ -75,8 +79,13 @@ func Debugf(format string, a ...interface{}) {
Printf(DebugLevel, format, a...)
}
func Errorf(format string, a ...interface{}) {
func Verbosef(format string, a ...interface{}) {
Printf(VerboseLevel, format, a...)
}
func Errorf(format string, a ...interface{}) error {
Printf(ErrorLevel, format, a...)
return fmt.Errorf(format, a...)
}
func Panicf(format string, a ...interface{}) {
@@ -86,10 +95,16 @@ func Panicf(format string, a ...interface{}) {
Printf(PanicLevel, "========= Stack trace output end ========")
}
func GetLoggingLevel (levelStr string) Level {
func GetLoggingLevel() Level {
return loggingLevel
}
func getLoggingLevel(levelStr string) Level {
switch strings.ToLower(levelStr) {
case "debug":
return DebugLevel
case "verbose":
return VerboseLevel
case "error":
return ErrorLevel
case "panic":
@@ -99,18 +114,18 @@ func GetLoggingLevel (levelStr string) Level {
return UnknownLevel
}
func SetLogLevel (levelStr string) {
level := GetLoggingLevel(levelStr)
func SetLogLevel(levelStr string) {
level := getLoggingLevel(levelStr)
if level < MaxLevel {
loggingLevel = level
}
}
func SetLogStderr (enable bool) {
func SetLogStderr(enable bool) {
loggingStderr = enable
}
func SetLogFile (filename string) {
func SetLogFile(filename string) {
if filename == "" {
return
}
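
This logging change introduces a VerboseLevel between error and debug, a matching Verbosef helper, a GetLoggingLevel() accessor, and an Errorf that both logs and returns the formatted error so call sites can simply write return logging.Errorf(...). A stripped-down, stdlib-only sketch of that pattern (illustrative, not the package's full implementation):

package main

import (
	"fmt"
	"os"
)

type Level uint32

const (
	PanicLevel Level = iota
	ErrorLevel
	VerboseLevel
	DebugLevel
)

var loggingLevel = VerboseLevel

// printf emits the message only when the configured level allows it.
func printf(l Level, format string, a ...interface{}) {
	if l <= loggingLevel {
		fmt.Fprintf(os.Stderr, format+"\n", a...)
	}
}

func Verbosef(format string, a ...interface{}) { printf(VerboseLevel, format, a...) }

// Errorf logs at error level and hands back the same error,
// so callers can write: return Errorf("...: %v", err)
func Errorf(format string, a ...interface{}) error {
	printf(ErrorLevel, format, a...)
	return fmt.Errorf(format, a...)
}

func main() {
	Verbosef("Add: %s", "net1")           // printed: verbose is within the configured level
	err := Errorf("failed to do %s", "x") // printed and also returned as an error
	fmt.Println(err)
}
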

View File

@@ -17,8 +17,8 @@ package logging
import (
"testing"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
)
func TestLogging(t *testing.T) {
@@ -50,6 +50,8 @@ var _ = Describe("logging operations", func() {
Expect(loggingLevel).To(Equal(DebugLevel))
SetLogLevel("Error")
Expect(loggingLevel).To(Equal(ErrorLevel))
SetLogLevel("VERbose")
Expect(loggingLevel).To(Equal(VerboseLevel))
SetLogLevel("PANIC")
Expect(loggingLevel).To(Equal(PanicLevel))
})

View File

@@ -20,45 +20,70 @@ package main
import (
"encoding/json"
"flag"
"fmt"
"io/ioutil"
"net"
"os"
"path/filepath"
"strings"
"time"
"github.com/containernetworking/cni/libcni"
"github.com/containernetworking/cni/pkg/invoke"
"github.com/containernetworking/cni/pkg/skel"
cnitypes "github.com/containernetworking/cni/pkg/types"
"github.com/containernetworking/cni/pkg/version"
cniversion "github.com/containernetworking/cni/pkg/version"
"github.com/containernetworking/plugins/pkg/ns"
k8s "github.com/intel/multus-cni/k8sclient"
"github.com/intel/multus-cni/logging"
"github.com/intel/multus-cni/types"
"github.com/vishvananda/netlink"
"k8s.io/apimachinery/pkg/util/wait"
)
var version = "master@git"
var commit = "unknown commit"
var date = "unknown date"
var defaultReadinessBackoff = wait.Backoff{
Steps: 4,
Duration: 250 * time.Millisecond,
Factor: 4.0,
Jitter: 0.1,
}
func printVersionString() string {
return fmt.Sprintf("multus-cni version:%s, commit:%s, date:%s",
version, commit, date)
}
func saveScratchNetConf(containerID, dataDir string, netconf []byte) error {
logging.Debugf("saveScratchNetConf: %s, %s, %s", containerID, dataDir, string(netconf))
if err := os.MkdirAll(dataDir, 0700); err != nil {
return fmt.Errorf("failed to create the multus data directory(%q): %v", dataDir, err)
return logging.Errorf("failed to create the multus data directory(%q): %v", dataDir, err)
}
path := filepath.Join(dataDir, containerID)
err := ioutil.WriteFile(path, netconf, 0600)
if err != nil {
return fmt.Errorf("failed to write container data in the path(%q): %v", path, err)
return logging.Errorf("failed to write container data in the path(%q): %v", path, err)
}
return err
}
func consumeScratchNetConf(containerID, dataDir string) ([]byte, error) {
func consumeScratchNetConf(containerID, dataDir string) ([]byte, string, error) {
logging.Debugf("consumeScratchNetConf: %s, %s", containerID, dataDir)
path := filepath.Join(dataDir, containerID)
defer os.Remove(path)
return ioutil.ReadFile(path)
b, err := ioutil.ReadFile(path)
return b, path, err
}
func getIfname(delegate *types.DelegateNetConf, argif string, idx int) string {
logging.Debugf("getIfname: %v, %s, %d", delegate, argif, idx)
if delegate.IfnameRequest != "" {
return delegate.IfnameRequest
}
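
In the hunk above, consumeScratchNetConf now returns the cache path along with the bytes so the caller decides when to remove the file, instead of unconditionally deleting it via defer; together with saveScratchNetConf this forms a simple per-container cache under the multus data directory. A rough standalone sketch of that write/read/remove lifecycle (function names here are hypothetical):

package main

import (
	"fmt"
	"io/ioutil"
	"os"
	"path/filepath"
)

// writeCache stores the serialized delegates for a container ID.
func writeCache(dataDir, containerID string, data []byte) (string, error) {
	if err := os.MkdirAll(dataDir, 0700); err != nil {
		return "", err
	}
	p := filepath.Join(dataDir, containerID)
	return p, ioutil.WriteFile(p, data, 0600)
}

// readCache returns the cached bytes and the path; the caller removes
// the file only after a successful read, mirroring the new behaviour.
func readCache(dataDir, containerID string) ([]byte, string, error) {
	p := filepath.Join(dataDir, containerID)
	b, err := ioutil.ReadFile(p)
	return b, p, err
}

func main() {
	dir, _ := ioutil.TempDir("", "multus-cache")
	defer os.RemoveAll(dir)

	if _, err := writeCache(dir, "123456789", []byte(`[{"type":"weave-net"}]`)); err != nil {
		panic(err)
	}
	b, p, err := readCache(dir, "123456789")
	if err == nil {
		defer os.Remove(p) // delete only on the success path
	}
	fmt.Println(string(b), err)
}
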
@@ -73,22 +98,35 @@ func getIfname(delegate *types.DelegateNetConf, argif string, idx int) string {
}
func saveDelegates(containerID, dataDir string, delegates []*types.DelegateNetConf) error {
logging.Debugf("saveDelegates: %s, %s, %v", containerID, dataDir, delegates)
delegatesBytes, err := json.Marshal(delegates)
if err != nil {
return fmt.Errorf("error serializing delegate netconf: %v", err)
return logging.Errorf("error serializing delegate netconf: %v", err)
}
if err = saveScratchNetConf(containerID, dataDir, delegatesBytes); err != nil {
return fmt.Errorf("error in saving the delegates : %v", err)
return logging.Errorf("error in saving the delegates : %v", err)
}
return err
}
func deleteDelegates(containerID, dataDir string) error {
logging.Debugf("deleteDelegates: %s, %s", containerID, dataDir)
path := filepath.Join(dataDir, containerID)
if err := os.Remove(path); err != nil {
return logging.Errorf("error in deleting the delegates : %v", err)
}
return nil
}
func validateIfName(nsname string, ifname string) error {
logging.Debugf("validateIfName: %s, %s", nsname, ifname)
podNs, err := ns.GetNS(nsname)
if err != nil {
return fmt.Errorf("no netns: %v", err)
return logging.Errorf("no netns: %v", err)
}
err = podNs.Do(func(_ ns.NetNS) error {
@@ -99,145 +137,239 @@ func validateIfName(nsname string, ifname string) error {
}
return err
}
return fmt.Errorf("ifname %s is already exist", ifname)
return logging.Errorf("ifname %s is already exist", ifname)
})
return err
}
func conflistAdd(rt *libcni.RuntimeConf, rawnetconflist []byte, binDir string) (cnitypes.Result, error) {
func conflistAdd(rt *libcni.RuntimeConf, rawnetconflist []byte, binDir string, exec invoke.Exec) (cnitypes.Result, error) {
logging.Debugf("conflistAdd: %v, %s, %s", rt, string(rawnetconflist), binDir)
// In part, adapted from K8s pkg/kubelet/dockershim/network/cni/cni.go
binDirs := []string{binDir}
cniNet := libcni.CNIConfig{Path: binDirs}
binDirs := filepath.SplitList(os.Getenv("CNI_PATH"))
binDirs = append(binDirs, binDir)
cniNet := libcni.NewCNIConfig(binDirs, exec)
confList, err := libcni.ConfListFromBytes(rawnetconflist)
if err != nil {
return nil, fmt.Errorf("error in converting the raw bytes to conflist: %v", err)
return nil, logging.Errorf("error in converting the raw bytes to conflist: %v", err)
}
result, err := cniNet.AddNetworkList(confList, rt)
if err != nil {
return nil, fmt.Errorf("error in getting result from AddNetworkList: %v", err)
return nil, logging.Errorf("error in getting result from AddNetworkList: %v", err)
}
return result, nil
}
func conflistDel(rt *libcni.RuntimeConf, rawnetconflist []byte, binDir string) error {
func conflistDel(rt *libcni.RuntimeConf, rawnetconflist []byte, binDir string, exec invoke.Exec) error {
logging.Debugf("conflistDel: %v, %s, %s", rt, string(rawnetconflist), binDir)
// In part, adapted from K8s pkg/kubelet/dockershim/network/cni/cni.go
binDirs := []string{binDir}
cniNet := libcni.CNIConfig{Path: binDirs}
binDirs := filepath.SplitList(os.Getenv("CNI_PATH"))
binDirs = append(binDirs, binDir)
cniNet := libcni.NewCNIConfig(binDirs, exec)
confList, err := libcni.ConfListFromBytes(rawnetconflist)
if err != nil {
return fmt.Errorf("error in converting the raw bytes to conflist: %v", err)
return logging.Errorf("error in converting the raw bytes to conflist: %v", err)
}
err = cniNet.DelNetworkList(confList, rt)
if err != nil {
return fmt.Errorf("error in getting result from DelNetworkList: %v", err)
return logging.Errorf("error in getting result from DelNetworkList: %v", err)
}
return err
}
func delegateAdd(exec invoke.Exec, ifName string, delegate *types.DelegateNetConf, rt *libcni.RuntimeConf, binDir string) (cnitypes.Result, error) {
func delegateAdd(exec invoke.Exec, ifName string, delegate *types.DelegateNetConf, rt *libcni.RuntimeConf, binDir string, cniArgs string) (cnitypes.Result, error) {
logging.Debugf("delegateAdd: %v, %s, %v, %v, %s", exec, ifName, delegate, rt, binDir)
if os.Setenv("CNI_IFNAME", ifName) != nil {
return nil, fmt.Errorf("Multus: error in setting CNI_IFNAME")
return nil, logging.Errorf("Multus: error in setting CNI_IFNAME")
}
if err := validateIfName(os.Getenv("CNI_NETNS"), ifName); err != nil {
return nil, fmt.Errorf("cannot set %q ifname to %q: %v", delegate.Conf.Type, ifName, err)
return nil, logging.Errorf("cannot set %q ifname to %q: %v", delegate.Conf.Type, ifName, err)
}
if delegate.ConfListPlugin != false {
result, err := conflistAdd(rt, delegate.Bytes, binDir)
if err != nil {
return nil, fmt.Errorf("Multus: error in invoke Conflist add - %q: %v", delegate.ConfList.Name, err)
if delegate.MacRequest != "" || delegate.IPRequest != "" {
if cniArgs != "" {
cniArgs = fmt.Sprintf("%s;IgnoreUnknown=true", cniArgs)
} else {
cniArgs = "IgnoreUnknown=true"
}
if delegate.MacRequest != "" {
// validate Mac address
_, err := net.ParseMAC(delegate.MacRequest)
if err != nil {
return nil, logging.Errorf("failed to parse mac address %q", delegate.MacRequest)
}
cniArgs = fmt.Sprintf("%s;MAC=%s", cniArgs, delegate.MacRequest)
logging.Debugf("Set MAC address %q to %q", delegate.MacRequest, ifName)
}
return result, nil
if delegate.IPRequest != "" {
// validate IP address
if strings.Contains(delegate.IPRequest, "/") {
_, _, err := net.ParseCIDR(delegate.IPRequest)
if err != nil {
return nil, logging.Errorf("failed to parse CIDR %q", delegate.MacRequest)
}
} else if net.ParseIP(delegate.IPRequest) == nil {
return nil, logging.Errorf("failed to parse IP address %q", delegate.IPRequest)
}
cniArgs = fmt.Sprintf("%s;IP=%s", cniArgs, delegate.IPRequest)
logging.Debugf("Set IP address %q to %q", delegate.IPRequest, ifName)
}
if os.Setenv("CNI_ARGS", cniArgs) != nil {
return nil, logging.Errorf("cannot set %q mac to %q and ip to %q", delegate.Conf.Type, delegate.MacRequest, delegate.IPRequest)
}
}
result, err := invoke.DelegateAdd(delegate.Conf.Type, delegate.Bytes, exec)
if err != nil {
return nil, fmt.Errorf("Multus: error in invoke Delegate add - %q: %v", delegate.Conf.Type, err)
var result cnitypes.Result
var err error
if delegate.ConfListPlugin {
result, err = conflistAdd(rt, delegate.Bytes, binDir, exec)
if err != nil {
return nil, logging.Errorf("Multus: error in invoke Conflist add - %q: %v", delegate.ConfList.Name, err)
}
} else {
result, err = invoke.DelegateAdd(delegate.Conf.Type, delegate.Bytes, exec)
if err != nil {
return nil, logging.Errorf("Multus: error in invoke Delegate add - %q: %v", delegate.Conf.Type, err)
}
}
if logging.GetLoggingLevel() >= logging.VerboseLevel {
data, _ := json.Marshal(result)
var confName string
if delegate.ConfListPlugin {
confName = delegate.ConfList.Name
} else {
confName = delegate.Conf.Name
}
logging.Verbosef("Add: %s:%s:%s:%s %s", rt.Args[1][1], rt.Args[2][1], confName, rt.IfName, string(data))
}
return result, nil
}
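
delegateAdd now validates any MAC or IP requested through the pod annotation and forwards them to the delegate plugin via CNI_ARGS, always prefixed with IgnoreUnknown=true so plugins that do not understand the MAC/IP keys do not fail. A minimal sketch of just that argument-building step (buildCNIArgs is an illustrative name):

package main

import (
	"fmt"
	"net"
	"strings"
)

// buildCNIArgs appends IgnoreUnknown, MAC and IP entries to an existing
// CNI_ARGS string, validating the requested values first.
func buildCNIArgs(cniArgs, mac, ip string) (string, error) {
	if mac == "" && ip == "" {
		return cniArgs, nil
	}
	if cniArgs != "" {
		cniArgs += ";IgnoreUnknown=true"
	} else {
		cniArgs = "IgnoreUnknown=true"
	}
	if mac != "" {
		if _, err := net.ParseMAC(mac); err != nil {
			return "", fmt.Errorf("failed to parse mac address %q", mac)
		}
		cniArgs += ";MAC=" + mac
	}
	if ip != "" {
		if strings.Contains(ip, "/") {
			if _, _, err := net.ParseCIDR(ip); err != nil {
				return "", fmt.Errorf("failed to parse CIDR %q", ip)
			}
		} else if net.ParseIP(ip) == nil {
			return "", fmt.Errorf("failed to parse IP address %q", ip)
		}
		cniArgs += ";IP=" + ip
	}
	return cniArgs, nil
}

func main() {
	args, err := buildCNIArgs("K8S_POD_NAME=testpod", "c2:11:22:33:44:66", "10.0.0.1")
	fmt.Println(args, err)
	// K8S_POD_NAME=testpod;IgnoreUnknown=true;MAC=c2:11:22:33:44:66;IP=10.0.0.1 <nil>
}
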
func delegateDel(exec invoke.Exec, ifName string, delegateConf *types.DelegateNetConf, rt *libcni.RuntimeConf, binDir string) error {
logging.Debugf("delegateDel: %v, %s, %v, %v, %s", exec, ifName, delegateConf, rt, binDir)
if os.Setenv("CNI_IFNAME", ifName) != nil {
return fmt.Errorf("Multus: error in setting CNI_IFNAME")
return logging.Errorf("Multus: error in setting CNI_IFNAME")
}
if delegateConf.ConfListPlugin != false {
err := conflistDel(rt, delegateConf.Bytes, binDir)
if err != nil {
return fmt.Errorf("Multus: error in invoke Conflist Del - %q: %v", delegateConf.ConfList.Name, err)
if logging.GetLoggingLevel() >= logging.VerboseLevel {
var confName string
if delegateConf.ConfListPlugin {
confName = delegateConf.ConfList.Name
} else {
confName = delegateConf.Conf.Name
}
return err
logging.Verbosef("Del: %s:%s:%s:%s %s", rt.Args[1][1], rt.Args[2][1], confName, rt.IfName, string(delegateConf.Bytes))
}
if err := invoke.DelegateDel(delegateConf.Conf.Type, delegateConf.Bytes, exec); err != nil {
return fmt.Errorf("Multus: error in invoke Delegate del - %q: %v", delegateConf.Conf.Type, err)
var err error
if delegateConf.ConfListPlugin {
err = conflistDel(rt, delegateConf.Bytes, binDir, exec)
if err != nil {
return logging.Errorf("Multus: error in invoke Conflist Del - %q: %v", delegateConf.ConfList.Name, err)
}
} else {
if err = invoke.DelegateDel(delegateConf.Conf.Type, delegateConf.Bytes, exec); err != nil {
return logging.Errorf("Multus: error in invoke Delegate del - %q: %v", delegateConf.Conf.Type, err)
}
}
return nil
return err
}
func delPlugins(exec invoke.Exec, argIfname string, delegates []*types.DelegateNetConf, lastIdx int, rt *libcni.RuntimeConf, binDir string) error {
logging.Debugf("delPlugins: %v, %s, %v, %d, %v, %s", exec, argIfname, delegates, lastIdx, rt, binDir)
if os.Setenv("CNI_COMMAND", "DEL") != nil {
return fmt.Errorf("Multus: error in setting CNI_COMMAND to DEL")
return logging.Errorf("Multus: error in setting CNI_COMMAND to DEL")
}
var errorstrings []string
for idx := lastIdx; idx >= 0; idx-- {
ifName := getIfname(delegates[idx], argIfname, idx)
rt.IfName = ifName
// Attempt to delete all but do not error out, instead, collect all errors.
if err := delegateDel(exec, ifName, delegates[idx], rt, binDir); err != nil {
return err
errorstrings = append(errorstrings, err.Error())
}
}
// Check if we had any errors, and send them all back.
if len(errorstrings) > 0 {
return fmt.Errorf(strings.Join(errorstrings, " / "))
}
return nil
}
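
delPlugins now attempts DEL on every delegate in reverse order and aggregates the failures instead of returning on the first one, so a single broken delegate can no longer block teardown of the rest. The same pattern in isolation (deleteAll is a hypothetical stand-in):

package main

import (
	"errors"
	"fmt"
	"strings"
)

// deleteAll calls del for every item in reverse order and joins any
// errors afterwards, so one failure does not stop the remaining deletes.
func deleteAll(items []string, del func(string) error) error {
	var errs []string
	for i := len(items) - 1; i >= 0; i-- {
		if err := del(items[i]); err != nil {
			errs = append(errs, err.Error())
		}
	}
	if len(errs) > 0 {
		return errors.New(strings.Join(errs, " / "))
	}
	return nil
}

func main() {
	err := deleteAll([]string{"net0", "net1", "net2"}, func(n string) error {
		if n == "net1" {
			return fmt.Errorf("delete of %s failed", n)
		}
		return nil
	})
	fmt.Println(err) // net0 and net2 were still torn down; only net1's error is reported
}
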
func cmdAdd(args *skel.CmdArgs, exec invoke.Exec, kubeClient k8s.KubeClient) (cnitypes.Result, error) {
n, err := types.LoadNetConf(args.StdinData)
logging.Debugf("cmdAdd: %v, %v, %v", args, exec, kubeClient)
if err != nil {
return nil, fmt.Errorf("err in loading netconf: %v", err)
return nil, logging.Errorf("err in loading netconf: %v", err)
}
k8sArgs, err := k8s.GetK8sArgs(args)
if err != nil {
return nil, fmt.Errorf("Multus: Err in getting k8s args: %v", err)
return nil, logging.Errorf("Multus: Err in getting k8s args: %v", err)
}
numK8sDelegates, kc, err := k8s.TryLoadK8sDelegates(k8sArgs, n, kubeClient)
if err != nil {
return nil, fmt.Errorf("Multus: Err in loading K8s Delegates k8s args: %v", err)
}
if numK8sDelegates == 0 {
// cache the multus config if we have only Multus delegates
if err := saveDelegates(args.ContainerID, n.CNIDir, n.Delegates); err != nil {
return nil, fmt.Errorf("Multus: Err in saving the delegates: %v", err)
wait.ExponentialBackoff(defaultReadinessBackoff, func() (bool, error) {
_, err := os.Stat(n.ReadinessIndicatorFile)
switch {
case err == nil:
return true, nil
default:
return false, nil
}
})
if n.ClusterNetwork != "" {
err = k8s.GetDefaultNetworks(k8sArgs, n, kubeClient)
if err != nil {
return nil, logging.Errorf("Multus: Failed to get clusterNetwork/defaultNetworks: %v", err)
}
// First delegate is always the master plugin
n.Delegates[0].MasterPlugin = true
}
_, kc, err := k8s.TryLoadPodDelegates(k8sArgs, n, kubeClient)
if err != nil {
return nil, logging.Errorf("Multus: Err in loading K8s Delegates k8s args: %v", err)
}
// cache the multus config
if err := saveDelegates(args.ContainerID, n.CNIDir, n.Delegates); err != nil {
return nil, logging.Errorf("Multus: Err in saving the delegates: %v", err)
}
var result, tmpResult cnitypes.Result
var netStatus []*types.NetworkStatus
var rt *libcni.RuntimeConf
lastIdx := 0
cniArgs := os.Getenv("CNI_ARGS")
for idx, delegate := range n.Delegates {
lastIdx = idx
ifName := getIfname(delegate, args.IfName, idx)
rt, _ = types.LoadCNIRuntimeConf(args, k8sArgs, ifName)
tmpResult, err = delegateAdd(exec, ifName, delegate, rt, n.BinDir)
rt := types.CreateCNIRuntimeConf(args, k8sArgs, ifName, n.RuntimeConfig)
tmpResult, err = delegateAdd(exec, ifName, delegate, rt, n.BinDir, cniArgs)
if err != nil {
break
// If the add failed, tear down all networks we already added
netName := delegate.Conf.Name
if netName == "" {
netName = delegate.ConfList.Name
}
// Ignore errors; DEL must be idempotent anyway
_ = delPlugins(exec, args.IfName, n.Delegates, idx, rt, n.BinDir)
return nil, logging.Errorf("Multus: Err adding pod to network %q: %v", netName, err)
}
// Master plugin result is always used if present
@@ -246,27 +378,25 @@ func cmdAdd(args *skel.CmdArgs, exec invoke.Exec, kubeClient k8s.KubeClient) (cn
}
// create the network status, only when Multus has a kubeconfig
if n.Kubeconfig != "" && kc.Podnamespace != "kube-system" {
delegateNetStatus, err := types.LoadNetworkStatus(tmpResult, delegate.Conf.Name, delegate.MasterPlugin)
if err != nil {
return nil, fmt.Errorf("Multus: Err in setting networks status: %v", err)
}
if n.Kubeconfig != "" && kc != nil {
if !types.CheckSystemNamespaces(kc.Podnamespace, n.SystemNamespaces) {
delegateNetStatus, err := types.LoadNetworkStatus(tmpResult, delegate.Conf.Name, delegate.MasterPlugin)
if err != nil {
return nil, logging.Errorf("Multus: Err in setting network status: %v", err)
}
netStatus = append(netStatus, delegateNetStatus)
netStatus = append(netStatus, delegateNetStatus)
}
}
}
if err != nil {
// Ignore errors; DEL must be idempotent anyway
_ = delPlugins(exec, args.IfName, n.Delegates, lastIdx, rt, n.BinDir)
return nil, fmt.Errorf("Multus: Err in tearing down failed plugins: %v", err)
}
// set the network status annotation in the apiserver, only when Multus has a kubeconfig
if n.Kubeconfig != "" && kc.Podnamespace != "kube-system" {
err = k8s.SetNetworkStatus(kc, netStatus)
if err != nil {
return nil, fmt.Errorf("Multus: Err set the networks status: %v", err)
if n.Kubeconfig != "" && kc != nil {
if !types.CheckSystemNamespaces(kc.Podnamespace, n.SystemNamespaces) {
err = k8s.SetNetworkStatus(kubeClient, k8sArgs, netStatus, n)
if err != nil {
return nil, logging.Errorf("Multus: Err set the networks status: %v", err)
}
}
}
@@ -274,6 +404,7 @@ func cmdAdd(args *skel.CmdArgs, exec invoke.Exec, kubeClient k8s.KubeClient) (cn
}
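
cmdAdd now waits briefly for an optional readiness indicator file before loading delegates, using wait.ExponentialBackoff with the defaultReadinessBackoff shown near the top of the file (4 steps, 250 ms base duration, factor 4.0, jitter 0.1). A stdlib-only approximation of that wait, assuming a simple poll-then-sleep loop in place of the apimachinery helper:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForFile polls for a file with exponential backoff and gives up
// after the configured number of steps; a missing file simply means
// "not ready yet" rather than an error.
func waitForFile(path string, steps int, base time.Duration, factor float64) bool {
	d := base
	for i := 0; i < steps; i++ {
		if _, err := os.Stat(path); err == nil {
			return true
		}
		time.Sleep(d)
		d = time.Duration(float64(d) * factor)
	}
	return false
}

func main() {
	// Roughly the defaultReadinessBackoff used above: 4 steps, 250 ms base, factor 4.
	ready := waitForFile("/tmp/foo.multus.conf", 4, 250*time.Millisecond, 4.0)
	fmt.Println("default network ready:", ready)
}
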
func cmdGet(args *skel.CmdArgs, exec invoke.Exec, kubeClient k8s.KubeClient) (cnitypes.Result, error) {
logging.Debugf("cmdGet: %v, %v, %v", args, exec, kubeClient)
in, err := types.LoadNetConf(args.StdinData)
if err != nil {
return nil, err
@@ -286,49 +417,102 @@ func cmdGet(args *skel.CmdArgs, exec invoke.Exec, kubeClient k8s.KubeClient) (cn
func cmdDel(args *skel.CmdArgs, exec invoke.Exec, kubeClient k8s.KubeClient) error {
in, err := types.LoadNetConf(args.StdinData)
logging.Debugf("cmdDel: %v, %v, %v", args, exec, kubeClient)
if err != nil {
return err
}
if args.Netns == "" {
return nil
}
netns, err := ns.GetNS(args.Netns)
if err != nil {
// If the netns was already removed by the Cloud Orchestration Engine, or if DEL is called
// multiple times, don't return an error when the device is already gone.
// https://github.com/kubernetes/kubernetes/issues/43014#issuecomment-287164444
_, ok := err.(ns.NSPathNotExistErr)
if ok {
logging.Debugf("cmdDel: WARNING netns may not exist, netns: %s, err: %s", netns, err)
} else {
return fmt.Errorf("failed to open netns %q: %v", netns, err)
}
}
if netns != nil {
defer netns.Close()
}
k8sArgs, err := k8s.GetK8sArgs(args)
if err != nil {
return fmt.Errorf("Multus: Err in getting k8s args: %v", err)
return logging.Errorf("Multus: Err in getting k8s args: %v", err)
}
numK8sDelegates, kc, err := k8s.TryLoadK8sDelegates(k8sArgs, in, kubeClient)
// Read the cache to get delegates json for the pod
netconfBytes, path, err := consumeScratchNetConf(args.ContainerID, in.CNIDir)
if err != nil {
return err
}
if numK8sDelegates == 0 {
// re-read the scratch multus config if we have only Multus delegates
netconfBytes, err := consumeScratchNetConf(args.ContainerID, in.CNIDir)
if err != nil {
if os.IsNotExist(err) {
// Per spec should ignore error if resources are missing / already removed
return nil
// Fetch delegates again if the cache does not exist
if os.IsNotExist(err) {
if in.ClusterNetwork != "" {
err = k8s.GetDefaultNetworks(k8sArgs, in, kubeClient)
if err != nil {
return logging.Errorf("Multus: Failed to get clusterNetwork/defaultNetworks: %v", err)
}
// First delegate is always the master plugin
in.Delegates[0].MasterPlugin = true
}
return fmt.Errorf("Multus: Err in reading the delegates: %v", err)
}
// Get pod annotation and so on
_, _, err := k8s.TryLoadPodDelegates(k8sArgs, in, kubeClient)
if err != nil {
if len(in.Delegates) == 0 {
// No delegate available so send error
return logging.Errorf("Multus: failed to get delegates: %v", err)
}
// clusterNetwork was already loaded above, so continue with the delete
logging.Errorf("Multus: failed to get delegates: %v, but continue to delete clusterNetwork", err)
}
} else {
return logging.Errorf("Multus: Err in reading the delegates: %v", err)
}
} else {
defer os.Remove(path)
if err := json.Unmarshal(netconfBytes, &in.Delegates); err != nil {
return fmt.Errorf("Multus: failed to load netconf: %v", err)
return logging.Errorf("Multus: failed to load netconf: %v", err)
}
// First delegate is always the master plugin
in.Delegates[0].MasterPlugin = true
}
// unset the network status annotation in the apiserver, only when Multus has a kubeconfig
if in.Kubeconfig != "" {
if !types.CheckSystemNamespaces(string(k8sArgs.K8S_POD_NAMESPACE), in.SystemNamespaces) {
err := k8s.SetNetworkStatus(kubeClient, k8sArgs, nil, in)
if err != nil {
// error happen but continue to delete
logging.Errorf("Multus: Err unset the networks status: %v", err)
}
}
}
//unset the network status annotation in apiserver, only in case Multus as kubeconfig
if in.Kubeconfig != "" && kc.Podnamespace != "kube-system" {
err := k8s.SetNetworkStatus(kc, nil)
if err != nil {
return fmt.Errorf("Multus: Err unset the networks status: %v", err)
}
}
rt, _ := types.LoadCNIRuntimeConf(args, k8sArgs, "")
rt := types.CreateCNIRuntimeConf(args, k8sArgs, "", in.RuntimeConfig)
return delPlugins(exec, args.IfName, in.Delegates, len(in.Delegates)-1, rt, in.BinDir)
}
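
cmdDel now prefers the delegate list cached at ADD time and falls back to the API server (clusterNetwork plus pod annotations) only when the cache file is missing, which keeps DEL working even after the pod object has been deleted. A compressed sketch of that decision flow, with a hypothetical callback standing in for the Kubernetes lookups:

package main

import (
	"encoding/json"
	"fmt"
	"io/ioutil"
	"os"
)

type delegate struct {
	Type string `json:"type"`
	Name string `json:"name"`
}

// loadDelegatesForDel prefers the on-disk cache; on a missing cache it
// falls back to the supplied lookup (standing in for the k8s client).
func loadDelegatesForDel(cachePath string, fromAPI func() ([]delegate, error)) ([]delegate, error) {
	b, err := ioutil.ReadFile(cachePath)
	if err != nil {
		if os.IsNotExist(err) {
			// Cache is gone: rebuild the list from the API server.
			return fromAPI()
		}
		return nil, err
	}
	defer os.Remove(cachePath)
	var dels []delegate
	if err := json.Unmarshal(b, &dels); err != nil {
		return nil, err
	}
	return dels, nil
}

func main() {
	dels, err := loadDelegatesForDel("/var/lib/cni/multus/123456789", func() ([]delegate, error) {
		return []delegate{{Type: "weave-net", Name: "weave1"}}, nil
	})
	fmt.Println(dels, err)
}
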
func main() {
// Init command line flags to clear vendored packages' one, especially in init()
flag.CommandLine = flag.NewFlagSet(os.Args[0], flag.ExitOnError)
// add version flag
versionOpt := false
flag.BoolVar(&versionOpt, "version", false, "Show application version")
flag.BoolVar(&versionOpt, "v", false, "Show application version")
flag.Parse()
if versionOpt == true {
fmt.Printf("%s\n", printVersionString())
return
}
skel.PluginMain(
func(args *skel.CmdArgs) error {
result, err := cmdAdd(args, nil, nil)
@@ -345,5 +529,5 @@ func main() {
return result.Print()
},
func(args *skel.CmdArgs) error { return cmdDel(args, nil, nil) },
version.All, "meta-plugin that delegates to other CNI plugins")
cniversion.All, "meta-plugin that delegates to other CNI plugins")
}

View File

@@ -28,7 +28,7 @@ import (
"github.com/containernetworking/cni/pkg/skel"
cnitypes "github.com/containernetworking/cni/pkg/types"
"github.com/containernetworking/cni/pkg/types/020"
"github.com/containernetworking/cni/pkg/version"
cniversion "github.com/containernetworking/cni/pkg/version"
"github.com/containernetworking/plugins/pkg/ns"
"github.com/containernetworking/plugins/pkg/testutils"
@@ -52,7 +52,7 @@ type fakePlugin struct {
}
type fakeExec struct {
version.PluginDecoder
cniversion.PluginDecoder
addIndex int
delIndex int
@@ -123,8 +123,12 @@ func (f *fakeExec) ExecPlugin(pluginPath string, stdinData []byte, environ []str
if plugin.expectedIfname != "" {
Expect(os.Getenv("CNI_IFNAME")).To(Equal(plugin.expectedIfname))
}
if len(plugin.expectedEnv) > 0 {
matchArray(gatherCNIEnv(), plugin.expectedEnv)
cniEnv := gatherCNIEnv()
for _, expectedCniEnvVar := range plugin.expectedEnv {
Expect(cniEnv).Should(ContainElement(expectedCniEnvVar))
}
}
if plugin.err != nil {
@@ -173,6 +177,8 @@ var _ = Describe("multus operations", func() {
StdinData: []byte(`{
"name": "node-cni-network",
"type": "multus",
"defaultnetworkfile": "/tmp/foo.multus.conf",
"defaultnetworkwaitseconds": 3,
"delegates": [{
"name": "weave1",
"cniVersion": "0.2.0",
@@ -185,6 +191,10 @@ var _ = Describe("multus operations", func() {
}`),
}
// Touch the default network file.
configPath := "/tmp/foo.multus.conf"
os.OpenFile(configPath, os.O_RDONLY|os.O_CREATE, 0755)
fExec := &fakeExec{}
expectedResult1 := &types020.Result{
CNIVersion: "0.2.0",
@@ -226,10 +236,152 @@ var _ = Describe("multus operations", func() {
err = cmdDel(args, fExec, nil)
Expect(err).NotTo(HaveOccurred())
Expect(fExec.delIndex).To(Equal(len(fExec.plugins)))
// Cleanup default network file.
if _, errStat := os.Stat(configPath); errStat == nil {
errRemove := os.Remove(configPath)
Expect(errRemove).NotTo(HaveOccurred())
}
})
It("executes delegates and cleans up on failure", func() {
expectedConf1 := `{
"name": "weave1",
"cniVersion": "0.2.0",
"type": "weave-net"
}`
expectedConf2 := `{
"name": "other1",
"cniVersion": "0.2.0",
"type": "other-plugin"
}`
args := &skel.CmdArgs{
ContainerID: "123456789",
Netns: testNS.Path(),
IfName: "eth0",
StdinData: []byte(fmt.Sprintf(`{
"name": "node-cni-network",
"type": "multus",
"defaultnetworkfile": "/tmp/foo.multus.conf",
"defaultnetworkwaitseconds": 3,
"delegates": [%s,%s]
}`, expectedConf1, expectedConf2)),
}
// Touch the default network file.
configPath := "/tmp/foo.multus.conf"
os.OpenFile(configPath, os.O_RDONLY|os.O_CREATE, 0755)
fExec := &fakeExec{}
expectedResult1 := &types020.Result{
CNIVersion: "0.2.0",
IP4: &types020.IPConfig{
IP: *testhelpers.EnsureCIDR("1.1.1.2/24"),
},
}
fExec.addPlugin(nil, "eth0", expectedConf1, expectedResult1, nil)
// This plugin invocation should fail
err := fmt.Errorf("expected plugin failure")
fExec.addPlugin(nil, "net1", expectedConf2, nil, err)
os.Setenv("CNI_COMMAND", "ADD")
os.Setenv("CNI_IFNAME", "eth0")
_, err = cmdAdd(args, fExec, nil)
Expect(fExec.addIndex).To(Equal(2))
Expect(fExec.delIndex).To(Equal(2))
Expect(err).To(MatchError("Multus: Err adding pod to network \"other1\": Multus: error in invoke Delegate add - \"other-plugin\": expected plugin failure"))
// Cleanup default network file.
if _, errStat := os.Stat(configPath); errStat == nil {
errRemove := os.Remove(configPath)
Expect(errRemove).NotTo(HaveOccurred())
}
})
It("executes delegates with interface name and MAC and IP addr", func() {
podNet := `[{"name":"net1",
"interface": "test1",
"ips":"1.2.3.4/24"},
{"name":"net2",
"mac": "c2:11:22:33:44:66",
"ips": "10.0.0.1"}
]`
fakePod := testhelpers.NewFakePod("testpod", podNet, "")
net1 := `{
"name": "net1",
"type": "mynet",
"cniVersion": "0.2.0"
}`
net2 := `{
"name": "net2",
"type": "mynet2",
"cniVersion": "0.2.0"
}`
args := &skel.CmdArgs{
ContainerID: "123456789",
Netns: testNS.Path(),
IfName: "eth0",
Args: fmt.Sprintf("K8S_POD_NAME=%s;K8S_POD_NAMESPACE=%s", fakePod.ObjectMeta.Name, fakePod.ObjectMeta.Namespace),
StdinData: []byte(`{
"name": "node-cni-network",
"type": "multus",
"kubeconfig": "/etc/kubernetes/node-kubeconfig.yaml",
"delegates": [{
"name": "weave1",
"cniVersion": "0.2.0",
"type": "weave-net"
}]
}`),
}
fExec := &fakeExec{}
expectedResult1 := &types020.Result{
CNIVersion: "0.2.0",
IP4: &types020.IPConfig{
IP: *testhelpers.EnsureCIDR("1.1.1.2/24"),
},
}
expectedConf1 := `{
"name": "weave1",
"cniVersion": "0.2.0",
"type": "weave-net"
}`
fExec.addPlugin(nil, "eth0", expectedConf1, expectedResult1, nil)
fExec.addPlugin([]string{"CNI_ARGS=IgnoreUnknown=true;IP=1.2.3.4/24"}, "test1", net1, &types020.Result{
CNIVersion: "0.2.0",
IP4: &types020.IPConfig{
IP: *testhelpers.EnsureCIDR("1.1.1.3/24"),
},
}, nil)
fExec.addPlugin([]string{"CNI_ARGS=IgnoreUnknown=true;MAC=c2:11:22:33:44:66;IP=10.0.0.1"}, "net2", net2, &types020.Result{
CNIVersion: "0.2.0",
IP4: &types020.IPConfig{
IP: *testhelpers.EnsureCIDR("1.1.1.4/24"),
},
}, nil)
fKubeClient := testhelpers.NewFakeKubeClient()
fKubeClient.AddPod(fakePod)
fKubeClient.AddNetConfig(fakePod.ObjectMeta.Namespace, "net1", net1)
fKubeClient.AddNetConfig(fakePod.ObjectMeta.Namespace, "net2", net2)
os.Setenv("CNI_COMMAND", "ADD")
os.Setenv("CNI_IFNAME", "eth0")
result, err := cmdAdd(args, fExec, fKubeClient)
Expect(err).NotTo(HaveOccurred())
Expect(fExec.addIndex).To(Equal(len(fExec.plugins)))
Expect(fKubeClient.PodCount).To(Equal(2))
Expect(fKubeClient.NetCount).To(Equal(2))
r := result.(*types020.Result)
// plugin 1 is the masterplugin
Expect(reflect.DeepEqual(r, expectedResult1)).To(BeTrue())
})
It("executes delegates and kubernetes networks", func() {
fakePod := testhelpers.NewFakePod("testpod", "net1,net2")
fakePod := testhelpers.NewFakePod("testpod", "net1,net2", "")
net1 := `{
"name": "net1",
"type": "mynet",
@@ -306,4 +458,328 @@ var _ = Describe("multus operations", func() {
// plugin 1 is the masterplugin
Expect(reflect.DeepEqual(r, expectedResult1)).To(BeTrue())
})
It("executes kubernetes networks and delete it after pod removal", func() {
fakePod := testhelpers.NewFakePod("testpod", "net1", "")
net1 := `{
"name": "net1",
"type": "mynet",
"cniVersion": "0.2.0"
}`
args := &skel.CmdArgs{
ContainerID: "123456789",
Netns: testNS.Path(),
IfName: "eth0",
Args: fmt.Sprintf("K8S_POD_NAME=%s;K8S_POD_NAMESPACE=%s", fakePod.ObjectMeta.Name, fakePod.ObjectMeta.Namespace),
StdinData: []byte(`{
"name": "node-cni-network",
"type": "multus",
"kubeconfig": "/etc/kubernetes/node-kubeconfig.yaml",
"delegates": [{
"name": "weave1",
"cniVersion": "0.2.0",
"type": "weave-net"
}]
}`),
}
fExec := &fakeExec{}
expectedResult1 := &types020.Result{
CNIVersion: "0.2.0",
IP4: &types020.IPConfig{
IP: *testhelpers.EnsureCIDR("1.1.1.2/24"),
},
}
expectedConf1 := `{
"name": "weave1",
"cniVersion": "0.2.0",
"type": "weave-net"
}`
fExec.addPlugin(nil, "eth0", expectedConf1, expectedResult1, nil)
fExec.addPlugin(nil, "net1", net1, &types020.Result{
CNIVersion: "0.2.0",
IP4: &types020.IPConfig{
IP: *testhelpers.EnsureCIDR("1.1.1.3/24"),
},
}, nil)
fKubeClient := testhelpers.NewFakeKubeClient()
fKubeClient.AddPod(fakePod)
fKubeClient.AddNetConfig(fakePod.ObjectMeta.Namespace, "net1", net1)
os.Setenv("CNI_COMMAND", "ADD")
os.Setenv("CNI_IFNAME", "eth0")
result, err := cmdAdd(args, fExec, fKubeClient)
Expect(err).NotTo(HaveOccurred())
Expect(fExec.addIndex).To(Equal(len(fExec.plugins)))
Expect(fKubeClient.PodCount).To(Equal(2))
Expect(fKubeClient.NetCount).To(Equal(1))
r := result.(*types020.Result)
// plugin 1 is the masterplugin
Expect(reflect.DeepEqual(r, expectedResult1)).To(BeTrue())
os.Setenv("CNI_COMMAND", "DEL")
os.Setenv("CNI_IFNAME", "eth0")
// delete the pod from the fake client to emulate missing pod info at DEL time
fKubeClient.DeletePod(fakePod)
err = cmdDel(args, fExec, fKubeClient)
Expect(err).NotTo(HaveOccurred())
Expect(fExec.delIndex).To(Equal(len(fExec.plugins)))
})
It("ensure delegates get portmap runtime config", func() {
args := &skel.CmdArgs{
ContainerID: "123456789",
Netns: testNS.Path(),
IfName: "eth0",
StdinData: []byte(`{
"name": "node-cni-network",
"type": "multus",
"delegates": [{
"cniVersion": "0.3.1",
"name": "mynet-confList",
"plugins": [
{
"type": "firstPlugin",
"capabilities": {"portMappings": true}
}
]
}],
"runtimeConfig": {
"portMappings": [
{"hostPort": 8080, "containerPort": 80, "protocol": "tcp"}
]
}
}`),
}
fExec := &fakeExec{}
expectedConf1 := `{
"capabilities": {"portMappings": true},
"name": "mynet-confList",
"cniVersion": "0.3.1",
"type": "firstPlugin",
"runtimeConfig": {
"portMappings": [
{"hostPort": 8080, "containerPort": 80, "protocol": "tcp"}
]
}
}`
fExec.addPlugin(nil, "eth0", expectedConf1, nil, nil)
os.Setenv("CNI_COMMAND", "ADD")
os.Setenv("CNI_IFNAME", "eth0")
_, err := cmdAdd(args, fExec, nil)
Expect(err).NotTo(HaveOccurred())
})
It("executes clusterNetwork delegate", func() {
fakePod := testhelpers.NewFakePod("testpod", "", "kube-system/net1")
net1 := `{
"name": "net1",
"type": "mynet",
"cniVersion": "0.2.0"
}`
expectedResult1 := &types020.Result{
CNIVersion: "0.2.0",
IP4: &types020.IPConfig{
IP: *testhelpers.EnsureCIDR("1.1.1.2/24"),
},
}
args := &skel.CmdArgs{
ContainerID: "123456789",
Netns: testNS.Path(),
IfName: "eth0",
Args: fmt.Sprintf("K8S_POD_NAME=%s;K8S_POD_NAMESPACE=%s", fakePod.ObjectMeta.Name, fakePod.ObjectMeta.Namespace),
StdinData: []byte(`{
"name": "node-cni-network",
"type": "multus",
"kubeconfig": "/etc/kubernetes/node-kubeconfig.yaml",
"defaultNetworks": [],
"clusterNetwork": "net1",
"delegates": []
}`),
}
fExec := &fakeExec{}
fExec.addPlugin(nil, "eth0", net1, expectedResult1, nil)
fKubeClient := testhelpers.NewFakeKubeClient()
fKubeClient.AddPod(fakePod)
fKubeClient.AddNetConfig("kube-system", "net1", net1)
os.Setenv("CNI_COMMAND", "ADD")
os.Setenv("CNI_IFNAME", "eth0")
result, err := cmdAdd(args, fExec, fKubeClient)
Expect(err).NotTo(HaveOccurred())
Expect(fExec.addIndex).To(Equal(len(fExec.plugins)))
Expect(fKubeClient.PodCount).To(Equal(2))
Expect(fKubeClient.NetCount).To(Equal(2))
r := result.(*types020.Result)
Expect(reflect.DeepEqual(r, expectedResult1)).To(BeTrue())
os.Setenv("CNI_COMMAND", "DEL")
os.Setenv("CNI_IFNAME", "eth0")
err = cmdDel(args, fExec, fKubeClient)
Expect(err).NotTo(HaveOccurred())
Expect(fExec.delIndex).To(Equal(len(fExec.plugins)))
})
It("Verify the cache is created in dataDir", func() {
tmpCNIDir := tmpDir + "/cniData"
err := os.Mkdir(tmpCNIDir, 0777)
Expect(err).NotTo(HaveOccurred())
fakePod := testhelpers.NewFakePod("testpod", "net1", "")
net1 := `{
"name": "net1",
"type": "mynet",
"cniVersion": "0.2.0"
}`
args := &skel.CmdArgs{
ContainerID: "123456789",
Netns: testNS.Path(),
IfName: "eth0",
Args: fmt.Sprintf("K8S_POD_NAME=%s;K8S_POD_NAMESPACE=%s", fakePod.ObjectMeta.Name, fakePod.ObjectMeta.Namespace),
StdinData: []byte(fmt.Sprintf(`{
"name": "node-cni-network",
"type": "multus",
"kubeconfig": "/etc/kubernetes/node-kubeconfig.yaml",
"cniDir": "%s",
"delegates": [{
"name": "weave1",
"cniVersion": "0.2.0",
"type": "weave-net"
}]
}`, tmpCNIDir)),
}
fExec := &fakeExec{}
expectedResult1 := &types020.Result{
CNIVersion: "0.2.0",
IP4: &types020.IPConfig{
IP: *testhelpers.EnsureCIDR("1.1.1.2/24"),
},
}
expectedConf1 := `{
"name": "weave1",
"cniVersion": "0.2.0",
"type": "weave-net"
}`
fExec.addPlugin(nil, "eth0", expectedConf1, expectedResult1, nil)
fExec.addPlugin(nil, "net1", net1, &types020.Result{
CNIVersion: "0.2.0",
IP4: &types020.IPConfig{
IP: *testhelpers.EnsureCIDR("1.1.1.3/24"),
},
}, nil)
fKubeClient := testhelpers.NewFakeKubeClient()
fKubeClient.AddPod(fakePod)
fKubeClient.AddNetConfig(fakePod.ObjectMeta.Namespace, "net1", net1)
os.Setenv("CNI_COMMAND", "ADD")
os.Setenv("CNI_IFNAME", "eth0")
result, err := cmdAdd(args, fExec, fKubeClient)
Expect(err).NotTo(HaveOccurred())
Expect(fExec.addIndex).To(Equal(len(fExec.plugins)))
Expect(fKubeClient.PodCount).To(Equal(2))
Expect(fKubeClient.NetCount).To(Equal(1))
r := result.(*types020.Result)
// plugin 1 is the masterplugin
Expect(reflect.DeepEqual(r, expectedResult1)).To(BeTrue())
By("Verify cache file existence")
cacheFilePath := fmt.Sprintf("%s/%s", tmpCNIDir, "123456789")
_, err = os.Stat(cacheFilePath)
Expect(err).NotTo(HaveOccurred())
By("Delete and check net count is not incremented")
os.Setenv("CNI_COMMAND", "DEL")
os.Setenv("CNI_IFNAME", "eth0")
err = cmdDel(args, fExec, fKubeClient)
Expect(err).NotTo(HaveOccurred())
Expect(fExec.delIndex).To(Equal(len(fExec.plugins)))
Expect(fKubeClient.PodCount).To(Equal(3))
Expect(fKubeClient.NetCount).To(Equal(1))
})
It("Delete pod without cache", func() {
tmpCNIDir := tmpDir + "/cniData"
err := os.Mkdir(tmpCNIDir, 0777)
Expect(err).NotTo(HaveOccurred())
fakePod := testhelpers.NewFakePod("testpod", "net1", "")
net1 := `{
"name": "net1",
"type": "mynet",
"cniVersion": "0.2.0"
}`
args := &skel.CmdArgs{
ContainerID: "123456789",
Netns: testNS.Path(),
IfName: "eth0",
Args: fmt.Sprintf("K8S_POD_NAME=%s;K8S_POD_NAMESPACE=%s", fakePod.ObjectMeta.Name, fakePod.ObjectMeta.Namespace),
StdinData: []byte(fmt.Sprintf(`{
"name": "node-cni-network",
"type": "multus",
"kubeconfig": "/etc/kubernetes/node-kubeconfig.yaml",
"cniDir": "%s",
"delegates": [{
"name": "weave1",
"cniVersion": "0.2.0",
"type": "weave-net"
}]
}`, tmpCNIDir)),
}
fExec := &fakeExec{}
expectedResult1 := &types020.Result{
CNIVersion: "0.2.0",
IP4: &types020.IPConfig{
IP: *testhelpers.EnsureCIDR("1.1.1.2/24"),
},
}
expectedConf1 := `{
"name": "weave1",
"cniVersion": "0.2.0",
"type": "weave-net"
}`
fExec.addPlugin(nil, "eth0", expectedConf1, expectedResult1, nil)
fExec.addPlugin(nil, "net1", net1, &types020.Result{
CNIVersion: "0.2.0",
IP4: &types020.IPConfig{
IP: *testhelpers.EnsureCIDR("1.1.1.3/24"),
},
}, nil)
fKubeClient := testhelpers.NewFakeKubeClient()
fKubeClient.AddPod(fakePod)
fKubeClient.AddNetConfig(fakePod.ObjectMeta.Namespace, "net1", net1)
os.Setenv("CNI_COMMAND", "ADD")
os.Setenv("CNI_IFNAME", "eth0")
result, err := cmdAdd(args, fExec, fKubeClient)
Expect(err).NotTo(HaveOccurred())
Expect(fExec.addIndex).To(Equal(len(fExec.plugins)))
Expect(fKubeClient.PodCount).To(Equal(2))
Expect(fKubeClient.NetCount).To(Equal(1))
r := result.(*types020.Result)
// plugin 1 is the masterplugin
Expect(reflect.DeepEqual(r, expectedResult1)).To(BeTrue())
By("Verify cache file existence")
cacheFilePath := fmt.Sprintf("%s/%s", tmpCNIDir, "123456789")
_, err = os.Stat(cacheFilePath)
Expect(err).NotTo(HaveOccurred())
err = os.Remove(cacheFilePath)
Expect(err).NotTo(HaveOccurred())
By("Delete and check pod/net count is incremented")
os.Setenv("CNI_COMMAND", "DEL")
os.Setenv("CNI_IFNAME", "eth0")
err = cmdDel(args, fExec, fKubeClient)
Expect(err).NotTo(HaveOccurred())
Expect(fExec.delIndex).To(Equal(len(fExec.plugins)))
Expect(fKubeClient.PodCount).To(Equal(4))
Expect(fKubeClient.NetCount).To(Equal(2))
})
})

View File

@@ -13,5 +13,5 @@ export GO15VENDOREXPERIMENT=1
export GOBIN=${PWD}/bin
export GOPATH=${PWD}/gopath
bash -c "umask 0; cd ${GOPATH}/src/${REPO_PATH}; PATH=${GOROOT}/bin:$(pwd)/bin:${PATH} go test ./..."
bash -c "umask 0; cd ${GOPATH}/src/${REPO_PATH}; PATH=${GOROOT}/bin:$(pwd)/bin:${PATH} go test -v -covermode=count -coverprofile=coverage.out ./..."

View File

@@ -103,7 +103,12 @@ func (f *FakeKubeClient) AddPod(pod *v1.Pod) {
f.pods[key] = pod
}
func NewFakePod(name string, netAnnotation string) *v1.Pod {
func (f *FakeKubeClient) DeletePod(pod *v1.Pod) {
key := fmt.Sprintf("%s/%s", pod.ObjectMeta.Namespace, pod.ObjectMeta.Name)
delete(f.pods, key)
}
func NewFakePod(name string, netAnnotation string, defaultNetAnnotation string) *v1.Pod {
pod := &v1.Pod{
ObjectMeta: metav1.ObjectMeta{
Name: name,
@@ -115,13 +120,19 @@ func NewFakePod(name string, netAnnotation string) *v1.Pod {
},
},
}
annotations := make(map[string]string)
if netAnnotation != "" {
netAnnotation = strings.Replace(netAnnotation, "\n", "", -1)
netAnnotation = strings.Replace(netAnnotation, "\t", "", -1)
pod.ObjectMeta.Annotations = map[string]string{
"k8s.v1.cni.cncf.io/networks": netAnnotation,
}
annotations["k8s.v1.cni.cncf.io/networks"] = netAnnotation
}
if defaultNetAnnotation != "" {
annotations["v1.multus-cni.io/default-network"] = defaultNetAnnotation
}
pod.ObjectMeta.Annotations = annotations
return pod
}
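
NewFakePod gains a third parameter for the v1.multus-cni.io/default-network annotation and now only sets the annotation keys whose values are non-empty, while DeletePod lets tests simulate a pod that is already gone at DEL time. The annotation-building part of that change, as a self-contained sketch:

package main

import "fmt"

// buildPodAnnotations mirrors the new NewFakePod behaviour: only set the
// keys whose values were actually supplied.
func buildPodAnnotations(netAnnotation, defaultNetAnnotation string) map[string]string {
	annotations := make(map[string]string)
	if netAnnotation != "" {
		annotations["k8s.v1.cni.cncf.io/networks"] = netAnnotation
	}
	if defaultNetAnnotation != "" {
		annotations["v1.multus-cni.io/default-network"] = defaultNetAnnotation
	}
	return annotations
}

func main() {
	// Equivalent of NewFakePod("testpod", "net1,net2", "kube-system/net1").
	fmt.Println(buildPodAnnotations("net1,net2", "kube-system/net1"))
}
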

View File

@@ -17,7 +17,6 @@ package types
import (
"encoding/json"
"fmt"
"github.com/containernetworking/cni/libcni"
"github.com/containernetworking/cni/pkg/skel"
@@ -28,43 +27,66 @@ import (
)
const (
defaultCNIDir = "/var/lib/cni/multus"
defaultConfDir = "/etc/cni/multus/net.d"
defaultBinDir = "/opt/cni/bin"
defaultCNIDir = "/var/lib/cni/multus"
defaultConfDir = "/etc/cni/multus/net.d"
defaultBinDir = "/opt/cni/bin"
defaultReadinessIndicatorFile = ""
defaultMultusNamespace = "kube-system"
)
func LoadDelegateNetConfList(bytes []byte, delegateConf *DelegateNetConf) error {
logging.Debugf("LoadDelegateNetConfList: %s, %v", string(bytes), delegateConf)
if err := json.Unmarshal(bytes, &delegateConf.ConfList); err != nil {
return fmt.Errorf("err in unmarshalling delegate conflist: %v", err)
return logging.Errorf("err in unmarshalling delegate conflist: %v", err)
}
if delegateConf.ConfList.Plugins == nil {
return fmt.Errorf("delegate must have the 'type'or 'Plugin' field")
return logging.Errorf("delegate must have the 'type'or 'Plugin' field")
}
if delegateConf.ConfList.Plugins[0].Type == "" {
return fmt.Errorf("a plugin delegate must have the 'type' field")
return logging.Errorf("a plugin delegate must have the 'type' field")
}
delegateConf.ConfListPlugin = true
return nil
}
// Convert raw CNI JSON into a DelegateNetConf structure
func LoadDelegateNetConf(bytes []byte, ifnameRequest string) (*DelegateNetConf, error) {
func LoadDelegateNetConf(bytes []byte, net *NetworkSelectionElement, deviceID string) (*DelegateNetConf, error) {
var err error
logging.Debugf("LoadDelegateNetConf: %s, %v, %s", string(bytes), net, deviceID)
// If deviceID is present, inject this into delegate config
if deviceID != "" {
var updatedBytes []byte
if updatedBytes, err = delegateAddDeviceID(bytes, deviceID); err != nil {
return nil, logging.Errorf("error in LoadDelegateNetConf - delegateAddDeviceID unable to update delegate config: %v", err)
}
bytes = updatedBytes
}
delegateConf := &DelegateNetConf{}
if err := json.Unmarshal(bytes, &delegateConf.Conf); err != nil {
return nil, fmt.Errorf("error in LoadDelegateNetConf - unmarshalling delegate config: %v", err)
return nil, logging.Errorf("error in LoadDelegateNetConf - unmarshalling delegate config: %v", err)
}
// Do some minimal validation
if delegateConf.Conf.Type == "" {
if err := LoadDelegateNetConfList(bytes, delegateConf); err != nil {
return nil, fmt.Errorf("error in LoadDelegateNetConf: %v", err)
return nil, logging.Errorf("error in LoadDelegateNetConf: %v", err)
}
}
if ifnameRequest != "" {
delegateConf.IfnameRequest = ifnameRequest
if net != nil {
if net.InterfaceRequest != "" {
delegateConf.IfnameRequest = net.InterfaceRequest
}
if net.MacRequest != "" {
delegateConf.MacRequest = net.MacRequest
}
if net.IPRequest != "" {
delegateConf.IPRequest = net.IPRequest
}
}
delegateConf.Bytes = bytes
@@ -72,8 +94,9 @@ func LoadDelegateNetConf(bytes []byte, ifnameRequest string) (*DelegateNetConf,
return delegateConf, nil
}
func LoadCNIRuntimeConf(args *skel.CmdArgs, k8sArgs *K8sArgs, ifName string) (*libcni.RuntimeConf, error) {
func CreateCNIRuntimeConf(args *skel.CmdArgs, k8sArgs *K8sArgs, ifName string, rc *RuntimeConfig) *libcni.RuntimeConf {
logging.Debugf("LoadCNIRuntimeConf: %v, %v, %s, %v", args, k8sArgs, ifName, rc)
// In part, adapted from K8s pkg/kubelet/dockershim/network/cni/cni.go#buildCNIRuntimeConf
// Todo
// ingress, egress and bandwidth capability features as same as kubelet.
@@ -88,20 +111,29 @@ func LoadCNIRuntimeConf(args *skel.CmdArgs, k8sArgs *K8sArgs, ifName string) (*l
{"K8S_POD_INFRA_CONTAINER_ID", string(k8sArgs.K8S_POD_INFRA_CONTAINER_ID)},
},
}
return rt, nil
if rc != nil {
rt.CapabilityArgs = map[string]interface{}{
"portMappings": rc.PortMaps,
}
}
return rt
}
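
CreateCNIRuntimeConf replaces LoadCNIRuntimeConf, no longer returns an error, and copies any runtimeConfig portMappings into the libcni RuntimeConf CapabilityArgs so capability-aware delegates receive them. A reduced sketch of that mapping using local stand-in types instead of libcni:

package main

import (
	"encoding/json"
	"fmt"
)

type PortMapEntry struct {
	HostPort      int    `json:"hostPort"`
	ContainerPort int    `json:"containerPort"`
	Protocol      string `json:"protocol"`
}

type RuntimeConfig struct {
	PortMaps []PortMapEntry `json:"portMappings,omitempty"`
}

// runtimeConf is a local stand-in for libcni.RuntimeConf, for illustration only.
type runtimeConf struct {
	ContainerID    string
	IfName         string
	CapabilityArgs map[string]interface{}
}

// buildRuntimeConf mirrors the new behaviour: CapabilityArgs is only
// populated when a runtimeConfig block was supplied.
func buildRuntimeConf(containerID, ifName string, rc *RuntimeConfig) *runtimeConf {
	rt := &runtimeConf{ContainerID: containerID, IfName: ifName}
	if rc != nil {
		rt.CapabilityArgs = map[string]interface{}{"portMappings": rc.PortMaps}
	}
	return rt
}

func main() {
	rc := &RuntimeConfig{PortMaps: []PortMapEntry{{HostPort: 8080, ContainerPort: 80, Protocol: "tcp"}}}
	rt := buildRuntimeConf("123456789", "eth0", rc)
	out, _ := json.Marshal(rt.CapabilityArgs)
	fmt.Println(string(out)) // {"portMappings":[{"hostPort":8080,"containerPort":80,"protocol":"tcp"}]}
}
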
func LoadNetworkStatus(r types.Result, netName string, defaultNet bool) (*NetworkStatus, error) {
// Convert whatever the IPAM result was into the current Result type
result, err := current.NewResultFromResult(r)
if err != nil {
return nil, fmt.Errorf("error convert the type.Result to current.Result: %v", err)
}
logging.Debugf("LoadNetworkStatus: %v, %s, %t", r, netName, defaultNet)
netstatus := &NetworkStatus{}
netstatus.Name = netName
netstatus.Default = defaultNet
// Convert whatever the IPAM result was into the current Result type
result, err := current.NewResultFromResult(r)
if err != nil {
logging.Errorf("error convert the type.Result to current.Result: %v", err)
return netstatus, nil
}
for _, ifs := range result.Interfaces {
//Only pod interfaces can have sandbox information
if ifs.Sandbox != "" {
@@ -128,8 +160,10 @@ func LoadNetworkStatus(r types.Result, netName string, defaultNet bool) (*Networ
func LoadNetConf(bytes []byte) (*NetConf, error) {
netconf := &NetConf{}
logging.Debugf("LoadNetConf: %s", string(bytes))
if err := json.Unmarshal(bytes, netconf); err != nil {
return nil, fmt.Errorf("failed to load netconf: %v", err)
return nil, logging.Errorf("failed to load netconf: %v", err)
}
// Logging
@@ -144,16 +178,16 @@ func LoadNetConf(bytes []byte) (*NetConf, error) {
if netconf.RawPrevResult != nil {
resultBytes, err := json.Marshal(netconf.RawPrevResult)
if err != nil {
return nil, fmt.Errorf("could not serialize prevResult: %v", err)
return nil, logging.Errorf("could not serialize prevResult: %v", err)
}
res, err := version.NewResult(netconf.CNIVersion, resultBytes)
if err != nil {
return nil, fmt.Errorf("could not parse prevResult: %v", err)
return nil, logging.Errorf("could not parse prevResult: %v", err)
}
netconf.RawPrevResult = nil
netconf.PrevResult, err = current.NewResultFromResult(res)
if err != nil {
return nil, fmt.Errorf("could not convert result to current version: %v", err)
return nil, logging.Errorf("could not convert result to current version: %v", err)
}
}
@@ -163,8 +197,8 @@ func LoadNetConf(bytes []byte) (*NetConf, error) {
// the master plugin. Kubernetes CRD delegates are then appended to
// the existing delegate list and all delegates executed in-order.
if len(netconf.RawDelegates) == 0 {
return nil, fmt.Errorf("at least one delegate must be specified")
if len(netconf.RawDelegates) == 0 && netconf.ClusterNetwork == "" {
return nil, logging.Errorf("at least one delegate/defaultNetwork must be specified")
}
if netconf.CNIDir == "" {
@@ -179,27 +213,75 @@ func LoadNetConf(bytes []byte) (*NetConf, error) {
netconf.BinDir = defaultBinDir
}
for idx, rawConf := range netconf.RawDelegates {
bytes, err := json.Marshal(rawConf)
if err != nil {
return nil, fmt.Errorf("error marshalling delegate %d config: %v", idx, err)
}
delegateConf, err := LoadDelegateNetConf(bytes, "")
if err != nil {
return nil, fmt.Errorf("failed to load delegate %d config: %v", idx, err)
}
netconf.Delegates = append(netconf.Delegates, delegateConf)
if netconf.ReadinessIndicatorFile == "" {
netconf.ReadinessIndicatorFile = defaultReadinessIndicatorFile
}
netconf.RawDelegates = nil
// First delegate is always the master plugin
netconf.Delegates[0].MasterPlugin = true
if len(netconf.SystemNamespaces) == 0 {
netconf.SystemNamespaces = []string{"kube-system"}
}
if netconf.MultusNamespace == "" {
netconf.MultusNamespace = defaultMultusNamespace
}
// convert RawDelegates into the Delegates field
if netconf.ClusterNetwork == "" {
// for Delegates
if len(netconf.RawDelegates) == 0 {
return nil, logging.Errorf("at least one delegate must be specified")
}
for idx, rawConf := range netconf.RawDelegates {
bytes, err := json.Marshal(rawConf)
if err != nil {
return nil, logging.Errorf("error marshalling delegate %d config: %v", idx, err)
}
delegateConf, err := LoadDelegateNetConf(bytes, nil, "")
if err != nil {
return nil, logging.Errorf("failed to load delegate %d config: %v", idx, err)
}
netconf.Delegates = append(netconf.Delegates, delegateConf)
}
netconf.RawDelegates = nil
// First delegate is always the master plugin
netconf.Delegates[0].MasterPlugin = true
}
return netconf, nil
}
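
With these LoadNetConf changes, a configuration may omit inline delegates entirely as long as clusterNetwork is set, and systemNamespaces, multusNamespace and the readiness indicator file all receive defaults when unspecified. A configuration along the following lines (file paths and network names are illustrative) would exercise the clusterNetwork path:

{
    "name": "multus-cni-network",
    "type": "multus",
    "kubeconfig": "/etc/kubernetes/node-kubeconfig.yaml",
    "clusterNetwork": "net1",
    "defaultNetworks": [],
    "systemNamespaces": ["kube-system", "kube-public"],
    "multusNamespace": "kube-system",
    "readinessindicatorfile": "/var/run/flannel/subnet.env",
    "delegates": []
}
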
// AddDelegates appends the new delegates to the delegates list
func (n *NetConf) AddDelegates(newDelegates []*DelegateNetConf) error {
logging.Debugf("AddDelegates: %v", newDelegates)
n.Delegates = append(n.Delegates, newDelegates...)
return nil
}
// delegateAddDeviceID injects deviceID information in delegate bytes
func delegateAddDeviceID(inBytes []byte, deviceID string) ([]byte, error) {
var rawConfig map[string]interface{}
var err error
err = json.Unmarshal(inBytes, &rawConfig)
if err != nil {
return nil, logging.Errorf("delegateAddDeviceID: failed to unmarshal inBytes: %v", err)
}
// Inject deviceID
rawConfig["deviceID"] = deviceID
configBytes, err := json.Marshal(rawConfig)
if err != nil {
return nil, logging.Errorf("delegateAddDeviceID: failed to re-marshal Spec.Config: %v", err)
}
return configBytes, nil
}
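
delegateAddDeviceID round-trips the delegate JSON through a generic map so a deviceID key can be injected without knowing the delegate's schema. The same transformation as a self-contained sketch (the plugin type and PCI address are illustrative):

package main

import (
	"encoding/json"
	"fmt"
)

// addDeviceID injects a deviceID field into arbitrary delegate JSON,
// matching the unmarshal/annotate/re-marshal approach above.
func addDeviceID(in []byte, deviceID string) ([]byte, error) {
	var raw map[string]interface{}
	if err := json.Unmarshal(in, &raw); err != nil {
		return nil, err
	}
	raw["deviceID"] = deviceID
	return json.Marshal(raw)
}

func main() {
	in := []byte(`{"name":"sriov-net","type":"sriov","cniVersion":"0.3.1"}`)
	out, err := addDeviceID(in, "0000:03:02.0")
	fmt.Println(string(out), err)
	// {"cniVersion":"0.3.1","deviceID":"0000:03:02.0","name":"sriov-net","type":"sriov"}
}
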
// CheckSystemNamespaces checks whether given namespace is in systemNamespaces or not.
func CheckSystemNamespaces(namespace string, systemNamespaces []string) bool {
for _, nsname := range systemNamespaces {
if namespace == nsname {
return true
}
}
return false
}

View File

@@ -35,13 +35,20 @@ var _ = Describe("config operations", func() {
"kubeconfig": "/etc/kubernetes/node-kubeconfig.yaml",
"delegates": [{
"type": "weave-net"
}]
}],
"runtimeConfig": {
"portMappings": [
{"hostPort": 8080, "containerPort": 80, "protocol": "tcp"}
]
}
}`
netConf, err := LoadNetConf([]byte(conf))
Expect(err).NotTo(HaveOccurred())
Expect(len(netConf.Delegates)).To(Equal(1))
Expect(netConf.Delegates[0].Conf.Type).To(Equal("weave-net"))
Expect(netConf.Delegates[0].MasterPlugin).To(BeTrue())
Expect(len(netConf.RuntimeConfig.PortMaps)).To(Equal(1))
})
It("succeeds if only delegates are set", func() {
@@ -81,4 +88,47 @@ var _ = Describe("config operations", func() {
_, err := LoadNetConf([]byte(conf))
Expect(err).To(HaveOccurred())
})
It("has defaults set for network readiness", func() {
conf := `{
"name": "defaultnetwork",
"type": "multus",
"kubeconfig": "/etc/kubernetes/kubelet.conf",
"delegates": [{
"cniVersion": "0.3.0",
"name": "defaultnetwork",
"type": "flannel",
"isDefaultGateway": true
}]
}`
netConf, err := LoadNetConf([]byte(conf))
Expect(err).NotTo(HaveOccurred())
Expect(netConf.ReadinessIndicatorFile).To(Equal(""))
})
It("honors overrides for network readiness", func() {
conf := `{
"name": "defaultnetwork",
"type": "multus",
"readinessindicatorfile": "/etc/cni/net.d/foo",
"kubeconfig": "/etc/kubernetes/kubelet.conf",
"delegates": [{
"cniVersion": "0.3.0",
"name": "defaultnetwork",
"type": "flannel",
"isDefaultGateway": true
}]
}`
netConf, err := LoadNetConf([]byte(conf))
Expect(err).NotTo(HaveOccurred())
Expect(netConf.ReadinessIndicatorFile).To(Equal("/etc/cni/net.d/foo"))
})
It("check CheckSystemNamespaces() works fine", func() {
b1 := CheckSystemNamespaces("foobar", []string{"barfoo", "bafoo", "foobar"})
Expect(b1).To(Equal(true))
b2 := CheckSystemNamespaces("foobar1", []string{"barfoo", "bafoo", "foobar"})
Expect(b2).To(Equal(false))
})
})

View File

@@ -36,12 +36,34 @@ type NetConf struct {
CNIDir string `json:"cniDir"`
BinDir string `json:"binDir"`
// RawDelegates is private to the NetConf class; use Delegates instead
RawDelegates []map[string]interface{} `json:"delegates"`
Delegates []*DelegateNetConf `json:"-"`
NetStatus []*NetworkStatus `json:"-"`
Kubeconfig string `json:"kubeconfig"`
LogFile string `json:"logFile"`
LogLevel string `json:"logLevel"`
RawDelegates []map[string]interface{} `json:"delegates"`
Delegates []*DelegateNetConf `json:"-"`
NetStatus []*NetworkStatus `json:"-"`
Kubeconfig string `json:"kubeconfig"`
ClusterNetwork string `json:"clusterNetwork"`
DefaultNetworks []string `json:"defaultNetworks"`
LogFile string `json:"logFile"`
LogLevel string `json:"logLevel"`
RuntimeConfig *RuntimeConfig `json:"runtimeConfig,omitempty"`
// Default network readiness options
ReadinessIndicatorFile string `json:"readinessindicatorfile"`
// Option to isolate the usage of CR's to the namespace in which a pod resides.
NamespaceIsolation bool `json:"namespaceIsolation"`
// Option to set system namespaces (to avoid to add defaultNetworks)
SystemNamespaces []string `json:"systemNamespaces"`
// Option to set the namespace that multus-cni uses (clusterNetwork/defaultNetworks)
MultusNamespace string `json:"multusNamespace"`
}
type RuntimeConfig struct {
PortMaps []PortMapEntry `json:"portMappings,omitempty"`
}
type PortMapEntry struct {
HostPort int `json:"hostPort"`
ContainerPort int `json:"containerPort"`
Protocol string `json:"protocol"`
HostIP string `json:"hostIP,omitempty"`
}
type NetworkStatus struct {
@@ -57,6 +79,8 @@ type DelegateNetConf struct {
Conf types.NetConf
ConfList types.NetConfList
IfnameRequest string `json:"ifnameRequest,omitempty"`
MacRequest string `json:"macRequest,omitempty"`
IPRequest string `json:"ipRequest,omitempty"`
// MasterPlugin is only used internal housekeeping
MasterPlugin bool `json:"-"`
// Conflist plugin is only used internal housekeeping
@@ -102,13 +126,13 @@ type NetworkSelectionElement struct {
Namespace string `json:"namespace,omitempty"`
// IPRequest contains an optional requested IP address for this network
// attachment
IPRequest string `json:"ipRequest,omitempty"`
IPRequest string `json:"ips,omitempty"`
// MacRequest contains an optional requested MAC address for this
// network attachment
MacRequest string `json:"macRequest,omitempty"`
MacRequest string `json:"mac,omitempty"`
// InterfaceRequest contains an optional requested name for the
// network interface this attachment will create in the container
InterfaceRequest string `json:"interfaceRequest,omitempty"`
InterfaceRequest string `json:"interface,omitempty"`
}
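
The NetworkSelectionElement JSON tags change from ipRequest/macRequest/interfaceRequest to the shorter ips, mac and interface, which is the form the multus tests above use. A pod annotation in the new format would look like:

k8s.v1.cni.cncf.io/networks: '[
  { "name": "net1", "interface": "test1", "ips": "1.2.3.4/24" },
  { "name": "net2", "mac": "c2:11:22:33:44:66", "ips": "10.0.0.1" }
]'
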
// K8sArgs is the valid CNI_ARGS used for Kubernetes
@@ -119,3 +143,9 @@ type K8sArgs struct {
K8S_POD_NAMESPACE types.UnmarshallableString
K8S_POD_INFRA_CONTAINER_ID types.UnmarshallableString
}
// ResourceInfo is struct to hold Pod device allocation information
type ResourceInfo struct {
Index int
DeviceIDs []string
}