Compare commits

...

70 Commits

Author SHA1 Message Date
Archana Shinde
797b6ad65d Merge pull request #1096 from katabuilder/1.11.5-branch-bump
# Kata Containers 1.11.5
2020-11-11 13:38:58 -08:00
katacontainersbot
b54a00d08c release: Kata Containers 1.11.5
Version bump no changes

Signed-off-by: katacontainersbot <katacontainersbot@gmail.com>
2020-11-11 07:30:21 +00:00
Jose Carlos Venegas Munoz
7a9c65cfdb Merge pull request #1009 from jcvenegas/1.11.4-branch-bump
# Kata Containers 1.11.4
2020-10-21 14:50:59 -05:00
Carlos Venegas
8678901b8e release: Kata Containers 1.11.4
Version bump no changes

Signed-off-by: Carlos Venegas <jos.c.venegas.munoz@intel.com>
2020-10-20 14:04:57 -05:00
Eric Ernst
5ff5c7d6c3 Merge pull request #631 from egernst/1.11.3-branch-bump
# Kata Containers 1.11.3
2020-08-28 09:04:37 -07:00
Eric Ernst
6a27b1c79b release: Kata Containers 1.11.3
Version bump no changes

Signed-off-by: Eric Ernst <eric.g.ernst@gmail.com>
2020-08-27 09:00:24 -07:00
Archana Shinde
156e3bca9b Merge pull request #348 from bergwolf/1.11.2-branch-bump
# Kata Containers 1.11.2
2020-07-01 09:39:38 -07:00
Peng Tao
7b09bcf5a3 release: Kata Containers 1.11.2
Version bump no changes

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2020-06-29 03:14:22 +00:00
Salvador Fuentes
845ce4a727 Merge pull request #273 from amshinde/1.11.1-branch-bump
# Kata Containers 1.11.1
2020-06-05 17:38:08 -05:00
Archana Shinde
3355510feb release: Kata Containers 1.11.1
Version bump no changes

Signed-off-by: Archana Shinde <archana.m.shinde@intel.com>
2020-06-05 15:43:17 +00:00
Jose Carlos Venegas Munoz
71d25d530e Merge pull request #217 from jcvenegas/backport-release-fix
backport: release: actions: pin artifact to v1
2020-05-08 13:17:37 -05:00
Jose Carlos Venegas Munoz
55a004e6de release: actions: pin artifact to v1
The actions upload/download-artifact moved to a new version,
and master is now not compatible with it.

Fixes: #211

Signed-off-by: Jose Carlos Venegas Munoz <jose.carlos.venegas.munoz@intel.com>
2020-05-08 17:18:50 +00:00
Jose Carlos Venegas Munoz
449236f7cd Merge pull request #207 from katabuilder/1.11.0-branch-bump
# Kata Containers 1.11.0
2020-05-06 11:55:58 -05:00
katabuilder
92a0f7a0b1 release: Kata Containers 1.11.0
Version bump no changes

Signed-off-by: katabuilder <katabuilder@katacontainers.io>
2020-05-05 20:13:35 +00:00
Archana Shinde
c95d09a34d Merge pull request #181 from chavafg/1.11.0-rc0-branch-bump
# Kata Containers 1.11.0-rc0
2020-04-17 14:55:51 -07:00
Salvador Fuentes
63d9a8696f release: Kata Containers 1.11.0-rc0
- Fix potential crash
- sandbox: fix the issue of missing setting hostname
- unify the rustjail's log to contain container id and exec id
- Refactor the way of creating container process

ba3c732 grpc: fix the issue of potential crashes
32431d7 rpc: fix the issue of kill container process
986e666 sandbox: fix the issue of missing setting hostname
7d9bdf7 grpc: Fix the issue passing wrong exec_id to exec process
9220fb8 rustjail: unify the rustjail's log to contain container id and exec id
c1b6838 rustjail: refactoring the way of creating container process
e56b10f rustjail: remove the unused imported crates
ded27f4 oci: add Default and Clone to oci spec objects
7df8ede rustjail: replace protocol spec with oci spec

Signed-off-by: Salvador Fuentes <salvador.fuentes@intel.com>
2020-04-17 17:51:05 +00:00
Yang Bo
c0dc7676e0 Merge pull request #179 from lifupan/fix_potentianl_crash
Fix potential crash
2020-04-07 19:58:52 +08:00
fupan.lfp
ba3c732f86 grpc: fix the issue of potential crashes
It's better to check the sandbox's get_container result
instead of unwrapping it directly; otherwise an invalid
container id would crash the agent.

Fixes: #178

Signed-off-by: fupan.lfp <fupan.lfp@antfin.com>
2020-04-02 18:58:24 +08:00
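The pattern behind this fix deserves a sketch: a missing container should become an error the RPC layer can return, not an `unwrap()` panic that takes the whole agent down. A minimal illustration (the `Sandbox` and `Container` types here are simplified stand-ins, not the agent's actual definitions):

```rust
use std::collections::HashMap;

struct Container; // simplified stand-in

struct Sandbox {
    containers: HashMap<String, Container>,
}

impl Sandbox {
    fn get_container(&mut self, id: &str) -> Option<&mut Container> {
        self.containers.get_mut(id)
    }
}

// Before: sandbox.get_container(cid).unwrap() panics -- and kills the
// agent -- whenever the container id is invalid.
// After: convert the missing container into an error that can be
// propagated back to the caller.
fn find_container<'a>(sandbox: &'a mut Sandbox, cid: &str) -> Result<&'a mut Container, String> {
    sandbox
        .get_container(cid)
        .ok_or_else(|| format!("invalid container id: {}", cid))
}
```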
fupan.lfp
32431d701c rpc: fix the issue of kill container process
When killing a process, an empty exec id means all processes
in the container should be killed; a non-empty exec id kills
only that specific exec process.

Signed-off-by: fupan.lfp <fupan.lfp@antfin.com>
2020-04-02 17:58:46 +08:00
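In sketch form, the dispatch described above looks like this (the two kill helpers are hypothetical placeholders for the agent's real signal-delivery code):

```rust
// Empty exec id: signal every process in the container.
// Non-empty exec id: signal only that specific exec process.
fn signal_process(container_id: &str, exec_id: &str, sig: i32) -> Result<(), String> {
    if exec_id.is_empty() {
        kill_all_processes(container_id, sig) // hypothetical helper
    } else {
        kill_exec_process(container_id, exec_id, sig) // hypothetical helper
    }
}

fn kill_all_processes(_cid: &str, _sig: i32) -> Result<(), String> {
    Ok(())
}

fn kill_exec_process(_cid: &str, _eid: &str, _sig: i32) -> Result<(), String> {
    Ok(())
}
```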
Yang Bo
6d61ab439c Merge pull request #176 from lifupan/fix_hostname
sandbox: fix the issue of missing setting hostname
2020-04-01 10:00:31 +08:00
fupan.lfp
986e666b0b sandbox: fix the issue of missing setting hostname
When setting up the persistent UTS namespace, the hostname
should be set for this namespace.

Fixes: #175

Signed-off-by: fupan.lfp <fupan.lfp@antfin.com>
2020-03-31 17:22:24 +08:00
fupan.lfp
7d9bdf7b01 grpc: Fix the issue passing wrong exec_id to exec process
This issue was accidentally introduced by PR #174; this commit fixes it.

Signed-off-by: fupan.lfp <fupan.lfp@antfin.com>
2020-03-31 17:19:40 +08:00
James O. D. Hunt
c948d8a802 Merge pull request #174 from lifupan/unify_log
unify the rustjail's log to contain container id and exec id
2020-03-30 10:02:39 +01:00
fupan.lfp
9220fb8e0c rustjail: unify the rustjail's log to contain container id and exec id
Add the container id and exec id to the container start logs,
which makes the logs clearer to inspect.

Fixes: #173

Signed-off-by: fupan.lfp <fupan.lfp@antfin.com>
2020-03-27 20:10:50 +08:00
Yang Bo
1e15465012 Merge pull request #167 from lifupan/refactor
Refactor the way of creating container process
2020-03-24 11:18:42 +08:00
fupan.lfp
c1b6838e25 rustjail: refactoring the way of creating container process
In the previous implementation, a container process was created
by forking the parent process as the container process, and then
doing most of the setup in the forked child, such as mounting the
rootfs and dropping capabilities, before finally exec'ing the
container entry command to switch into the container process.

But the parent is a multi-threaded process, which can cause a
deadlock in the forked child. For example, if one of the parent's
threads is in the middle of a malloc operation, holding the
allocator's mutex, at the moment the parent forks, that lock state
is inherited by the child. The child has only a single thread, so
there is no one left to release the lock, and the child deadlocks
as soon as it performs a malloc of its own.

Thus, the new implementation execs directly after the fork and
then does the setup in the exec'd process. Of course, this requires
data communication between parent and child, since the child can
no longer depend on memory shared across the fork.

Fixes: #166
Fixes: #133

Signed-off-by: fupan.lfp <fupan.lfp@antfin.com>
2020-03-23 17:12:10 +08:00
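The hazard and the fix can be shown with a short sketch built on raw libc calls (an illustration of the fork-then-exec-immediately pattern under the assumptions stated in the comments, not the agent's actual implementation):

```rust
use std::ffi::CString;
use std::io::Write;
use std::os::unix::io::FromRawFd;

const CONFIG_FD: libc::c_int = 3; // fixed fd the re-exec'd child reads its config from

fn spawn_child(config_json: &str) -> std::io::Result<()> {
    // Prepare everything the child needs *before* forking: after fork()
    // in a multi-threaded process, even a malloc in the child can
    // deadlock on a lock some other thread held at fork time.
    let prog = CString::new("/proc/self/exe").unwrap();

    let mut fds = [0 as libc::c_int; 2];
    if unsafe { libc::pipe(fds.as_mut_ptr()) } != 0 {
        return Err(std::io::Error::last_os_error());
    }
    let (read_fd, write_fd) = (fds[0], fds[1]);

    match unsafe { libc::fork() } {
        -1 => Err(std::io::Error::last_os_error()),
        0 => unsafe {
            // Child: only async-signal-safe calls -- close, dup2, exec.
            libc::close(write_fd);
            libc::dup2(read_fd, CONFIG_FD);
            libc::execl(prog.as_ptr(), prog.as_ptr(), std::ptr::null::<libc::c_char>());
            libc::_exit(1) // reached only if exec failed
        },
        _child_pid => {
            // Parent: send the container configuration over the pipe; the
            // child cannot rely on memory shared across the fork.
            unsafe { libc::close(read_fd) };
            let mut w = unsafe { std::fs::File::from_raw_fd(write_fd) };
            w.write_all(config_json.as_bytes())
        }
    }
}
```

The exec'd process then reads its configuration from `CONFIG_FD` and performs the rootfs mounts, capability drops and the rest of the setup with a fresh, single-threaded lock state.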
fupan.lfp
e56b10f835 rustjail: remove the unused imported crates
remove the unused imported crates

Signed-off-by: fupan.lfp <fupan.lfp@antfin.com>
2020-03-20 17:04:05 +08:00
fupan.lfp
ded27f48d5 oci: add Default and Clone to oci spec objects
Add the Clone and Default derives to the OCI spec objects.

Signed-off-by: fupan.lfp <fupan.lfp@antfin.com>
2020-03-20 17:03:54 +08:00
fupan.lfp
7df8edef1b rustjail: replace protocol spec with oci spec
Transform the RPC protocol spec into the OCI spec.

Signed-off-by: fupan.lfp <fupan.lfp@antfin.com>
2020-03-20 16:26:32 +08:00
James O. D. Hunt
8280208443 Merge pull request #154 from awprice/issue-152
agent: add configurable container pipe size cmdline option
2020-03-18 08:36:23 +00:00
GabyCT
7087b5f43c Merge pull request #165 from bergwolf/1.11.0-alpha1-branch-bump
# Kata Containers 1.11.0-alpha1
2020-03-17 13:10:53 -06:00
James O. D. Hunt
fe0a3a0c7c Merge pull request #156 from lifupan/master
add a workspace and run all the tests in the workspace
2020-03-17 11:10:27 +00:00
Peng Tao
fbf1d015e7 release: Kata Containers 1.11.0-alpha1
- actions: Add verbose information
- systemd-service: build rust-agent systemd services
- grpc: fix the issue of crash agent when didn't find the process

cd233c0 actions: Add verbose information
f0eaeac path-absolutize: version update
3136712 systemd-service: build rust-agent systemd services
289d617 grpc: fix the issue of crash agent when didn't find the process

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2020-03-16 12:38:41 +00:00
fupan.lfp
245183cb28 cargo: add a workspace and run all the tests in the workspace
Add a workspace and run all of the tests under this workspace.

Fixes: #155

Signed-off-by: fupan.lfp <fupan.lfp@antfin.com>
2020-03-16 16:34:59 +08:00
GabyCT
22afde1850 Merge pull request #158 from jcvenegas/fix-157
actions: Add verbose information
2020-03-04 15:15:42 -06:00
Jose Carlos Venegas Munoz
cd233c047a actions: Add verbose information
Add logs to debug actions more easily.

Fixes: #157

Signed-off-by: Jose Carlos Venegas Munoz <jose.carlos.venegas.munoz@intel.com>
2020-03-04 16:02:06 +00:00
Alex Price
204edf0e51 agent: add configurable container pipe size cmdline option
Adds a cmdline option to configure the stdout/stderr pipe sizes.
Uses `F_SETPIPE_SZ` to resize the write side of the pipe after
creation.

Example cmdline option: `agent.container_pipe_size=2097152`

fixes #152

Signed-off-by: Alex Price <aprice@atlassian.com>
2020-03-04 15:31:59 +11:00
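The mechanism, visible in full in the process.rs diff further down, reduces to a single `fcntl` call on the write end after the pipe is created; annotated:

```rust
use nix::fcntl::{fcntl, FcntlArg, OFlag};
use nix::unistd;
use std::os::unix::io::RawFd;

// Create a pipe and, if a size was requested, grow its buffer with
// F_SETPIPE_SZ. The kernel rounds the size up to a power of two and,
// for unprivileged callers, caps it at /proc/sys/fs/pipe-max-size.
fn create_extended_pipe(flags: OFlag, pipe_size: i32) -> nix::Result<(RawFd, RawFd)> {
    let (r, w) = unistd::pipe2(flags)?;
    if pipe_size > 0 {
        fcntl(w, FcntlArg::F_SETPIPE_SZ(pipe_size))?;
    }
    Ok((r, w))
}
```

A `pipe_size` of 0 (the default when `agent.container_pipe_size` is not set) leaves the kernel's default pipe buffer untouched; the unit test at the end of the diff exercises both cases.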
GabyCT
35c33bba47 Merge pull request #145 from Pennyzct/build_service_for_rust_agent
systemd-service: build rust-agent systemd services
2020-03-03 13:17:27 -06:00
Penny Zheng
f0eaeac3be path-absolutize: version update
The latest tag, v1.2.0, fixes the error of inappropriately using
a mutable static.

Fixes: #144

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
2020-03-03 09:24:13 +08:00
Penny Zheng
3136712d8e systemd-service: build rust-agent systemd services
Add another sub-command, `build-service`, to the Makefile to
generate the rust-agent-related systemd service files, which
are necessary for building the guest rootfs image.
The overall design follows the one in the go-agent.

Fixes: #144

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
2020-03-03 09:24:02 +08:00
James O. D. Hunt
7965445adf Merge pull request #138 from lifupan/master
grpc: fix the issue of crash agent when didn't find the process
2020-02-25 10:53:00 +00:00
Salvador Fuentes
9d7bbdc5a6 Merge pull request #143 from amshinde/1.11.0-alpha0-branch-bump
# Kata Containers 1.11.0-alpha0
2020-02-19 17:24:45 -06:00
Archana Shinde
83b1712fa9 release: Kata Containers 1.11.0-alpha0
- should ignore an invalid key-value pair as an env
- Revert: "Makefile: Fix rust agent build using "--release"."
- Makefile: Fix rust agent build using "--release".
- vsock: support log_vport and debug_console_vport
- Agent: Separate logging into a single crate
- agent: fix the issue of crash agent without spec
- fix the issue of missing restore process's cwd
- Running rust-agent on AArch64
- ci: Remove run_rust_test functions as not being used
- add oci compatibility test case
- agent: Add unit tests for sandbox.rs
- version: Add VERSION file
- ci: Add minimal makefile to use central go test script
- netlink: pull out netlink as library crate.
- Fixup workflow 103

40b5a56 agent: ignore an invalid key-value pair as an env
269daa9 Revert: "Makefile: Fix rust agent build using "--release"."
a3e46a3 Makefile: Fix rust agent build using "--release".
3c1252e vsock: support log_vport and debug_console_vport
c373f84 agent: separate logging into a single crate
2be8661 agent: fix the issue of missing restore process's cwd
6c7453d agent: fix the issue of crash agent without spec
4edf537 ci: Remove run_rust_test functions as not being used
d222533 agent: add oci compatibility test case
7dfc4e0 linker: `no such file` linking error on AArch64
44b2caa AArch64: missing symbols on target `aarch64-unknown-linux-musl`
9621a7f ABI: only support arm 64-bit platform
8d60612 version: Add VERSION file
a5192a1 netlink: pull out netlink as library crate.
3881c06 ci: Add minimal makefile to use central go test script
1c57665 workflows: make sure we build the experimental kernel, CLH
cbd5fa0 workflows: fix step output usage
92301a6 agent: Add unit tests for sandbox.rs

Signed-off-by: Archana Shinde <archana.m.shinde@intel.com>
2020-02-18 19:36:52 +00:00
fupan.lfp
289d61730c grpc: fix the issue of crash agent when didn't find the process
It's better to catch the error when the process cannot be found
in the tty_win_resize service; otherwise, an invalid process id
could crash the agent.

Fixes: #137

Signed-off-by: fupan.lfp <fupan.lfp@antfin.com>
2020-02-11 10:04:19 +08:00
Yang Bo
e2c9426ebf Merge pull request #134 from liubin/master
should ignore an invalid key-value pair as an env
2020-02-10 11:14:36 +08:00
Fupan Li
31a97031f8 Merge pull request #136 from yyyeerbo/wip
Revert: "Makefile: Fix rust agent build using "--release"."
2020-02-10 09:17:11 +08:00
Kant
40b5a56688 agent: ignore an invalid key-value pair as an env
Fixes #135

Signed-off-by: Kant <lb203159@antfin.com>
2020-02-08 13:51:28 +08:00
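A sketch of the behaviour this describes (not the agent's exact code): OCI process env entries are `KEY=VALUE` strings, and an entry without a `=` or with an empty key is skipped instead of aborting container creation:

```rust
// Returns Some((key, value)) for a well-formed "KEY=VALUE" entry,
// None for an invalid one, which the caller simply ignores.
fn valid_env(entry: &str) -> Option<(&str, &str)> {
    let mut parts = entry.splitn(2, '=');
    match (parts.next(), parts.next()) {
        (Some(key), Some(value)) if !key.is_empty() => Some((key, value)),
        _ => None,
    }
}

fn build_env(entries: &[String]) -> Vec<(String, String)> {
    entries
        .iter()
        .filter_map(|e| valid_env(e).map(|(k, v)| (k.to_string(), v.to_string())))
        .collect()
}
```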
Yang Bo
269daa94ef Revert: "Makefile: Fix rust agent build using "--release"."
This reverts commit a3e46a369f.

There is still a problem with static linking; the built binary
segfaults on Clear Linux. So revert this patch for now.

Depends-on: github.com/kata-containers/tests#2293

Fixes: #69

Signed-off-by: Yang Bo <bo@hyper.sh>
2020-02-08 12:56:34 +08:00
Yang Bo
afc7b4d523 Merge pull request #129 from yyyeerbo/wip
Makefile: Fix rust agent build using "--release".
2020-02-07 15:31:58 +08:00
Yang Bo
a3e46a369f Makefile: Fix rust agent build using "--release".
Based on @ericho's work on the bug

Depends-on: github.com/kata-containers/tests#2277

Fixes: #69

Signed-off-by: Yang Bo <bo@hyper.sh>
2020-02-07 11:38:03 +08:00
Fupan Li
356222fbba Merge pull request #132 from yyyeerbo/wip2
vsock: support log_vport and debug_console_vport
2020-02-07 10:06:42 +08:00
Fupan Li
7d667a92ee Merge pull request #130 from Tim-Zhang/separate-logging
Agent: Separate logging into a single crate
2020-02-04 22:29:50 +08:00
Yang Bo
3c1252ea79 vsock: support log_vport and debug_console_vport
Fixes: #61, #64

Signed-off-by: Yang Bo <bo@hyper.sh>
2020-02-04 20:32:07 +08:00
Tim Zhang
c373f846f5 agent: separate logging into a single crate
Since the code in logging.rs is only weakly related to the project,
separating it out reduces coupling and makes it reusable.

Fixes: #131

Signed-off-by: Tim Zhang <tim@hyper.sh>
2020-02-03 20:40:26 +08:00
James O. D. Hunt
b5e741ba8b Merge pull request #125 from lifupan/fix_agent_crash
agent: fix the issue of crash agent without spec
2020-01-20 11:29:16 +00:00
James O. D. Hunt
174f9abee8 Merge pull request #127 from lifupan/fix_cwd
fix the issue of missing restore process's cwd
2020-01-20 11:28:11 +00:00
fupan.lfp
2be8661ffa agent: fix the issue of missing restore process's cwd
The agent should restore its previous cwd after creating a
container, during which it changes its cwd to the container's
bundle path.

Fixes: #126

Signed-off-by: fupan.lfp <fupan.lfp@antfin.com>
2020-01-20 11:00:48 +08:00
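A simplified sketch of the fix (`create_container_in_bundle` is a hypothetical placeholder for the real creation logic): remember the current directory, chdir into the bundle path for the creation step, then always chdir back:

```rust
use nix::unistd;

fn do_create(bundle_path: &str) -> nix::Result<()> {
    // Remember where we were before creation changes the cwd.
    let old_cwd = unistd::getcwd()?;
    unistd::chdir(bundle_path)?;
    let result = create_container_in_bundle(); // hypothetical placeholder
    // Restore the previous cwd whether or not creation succeeded.
    unistd::chdir(&old_cwd)?;
    result
}

fn create_container_in_bundle() -> nix::Result<()> {
    Ok(())
}
```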
fupan.lfp
6c7453db78 agent: fix the issue of crash agent without spec
Check whether the OCI spec was passed in; otherwise, unwrapping
it directly would crash the agent.

Fixes: #124

Signed-off-by: fupan.lfp <fupan.lfp@antfin.com>
2020-01-18 18:26:01 +08:00
Yang Bo
1b1e066083 Merge pull request #108 from Pennyzct/build_bug_fix
Running rust-agent on AArch64
2020-01-15 21:43:31 +08:00
Salvador Fuentes
7ce9c40c76 Merge pull request #122 from GabyCT/topic/removetest
ci: Remove run_rust_test functions as not being used
2020-01-15 07:21:43 -06:00
Gabriela Cervantes
4edf5379ca ci: Remove run_rust_test functions as not being used
This PR removes a function that is never used, as the script it
refers to no longer exists in the test repository.

Fixes #113

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2020-01-14 14:23:14 -06:00
Fupan Li
8fbc673e68 Merge pull request #119 from quanweiZhou/add-test-case
add oci compatibility test case
2020-01-09 14:54:11 +08:00
Yang Bo
c4f15f1280 Merge pull request #91 from ericho/master
agent: Add unit tests for sandbox.rs
2020-01-09 12:51:41 +08:00
quanweiZhou
d2225334d9 agent: add oci compatibility test case
add oci compatibility test case for src/agent/oci/src/lib.rs
follow by Open Container Initiative Runtime Specification

Fixes: #118

Signed-off-by: quanweiZhou <quanweiZhou@linux.alibaba.com>
2020-01-09 11:14:24 +08:00
Penny Zheng
7dfc4e0219 linker: no such file linking error on AArch64
When using the default cc linker, we get a segfault.
Debugging with `rust-gdb`, the specific error is as follows:
src/string/memcpy.c: No such file or directory.
Only by changing the linker to `aarch64-linux-musl-gcc` can the
`rust-agent` be fully statically linked and run successfully.

Fixes: #107

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
2020-01-09 11:08:23 +08:00
Penny Zheng
44b2caa2e5 AArch64: missing symbols on target aarch64-unknown-linux-musl
The __addtf3, __subtf3 and __multf3 symbols are used by aarch64-musl,
but are not provided by the Rust compiler-builtins.
For now, the only temporary but functional workaround accepted by the
Rust community is to get them from libgcc.

Fixes: #107

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
2020-01-09 11:06:04 +08:00
Penny Zheng
9621a7f3f5 ABI: only support arm 64-bit platform
We only support running Kata Containers on AArch64.

Fixes: #107

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
2020-01-09 09:59:20 +08:00
Jose Carlos Venegas Munoz
3b6a837664 Merge pull request #115 from jcvenegas/fix-114
version: Add VERSION file
2020-01-07 14:42:55 -06:00
Jose Carlos Venegas Munoz
8d60612052 version: Add VERSION file
Needed by some CI scripts, such as the release scripts, and to
verify the state of stable branches.

Fixes: #114

Signed-off-by: Jose Carlos Venegas Munoz <jose.carlos.venegas.munoz@intel.com>
2020-01-07 19:25:33 +00:00
Erich Cordoba
92301a6382 agent: Add unit tests for sandbox.rs
These are the unit tests for the sandbox struct. This is a summary
of the most important changes:

  - To test containers, a `LinuxContainer` value had to be created,
    which requires root privileges. So some tests now require the
    root user to run.
  - There was a bug in the `unset_sandbox_storage` method. Its return
    type was wrapped in a `Result` to avoid this problem.

Fixes: #50

Signed-off-by: Erich Cordoba <erich.cordoba.malibran@intel.com>
2019-12-06 13:11:07 -06:00
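The root requirement called out above is typically handled by gating the test body on the effective uid, so unprivileged runs skip rather than fail; a sketch of that pattern (the test body is a hypothetical placeholder):

```rust
#[cfg(test)]
mod tests {
    use nix::unistd::Uid;

    #[test]
    fn test_add_and_remove_container() {
        // Creating a LinuxContainer needs root privileges; skip otherwise.
        if !Uid::effective().is_root() {
            println!("skipping: this test must run as root");
            return;
        }
        // ... create a LinuxContainer and exercise the Sandbox here ...
    }
}
```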
32 changed files with 3019 additions and 1250 deletions


@@ -14,5 +14,5 @@ do
tar -xvf $c
done
tar cfJ ../kata-static.tar.xz ./opt
tar cvfJ ../kata-static.tar.xz ./opt
popd >>/dev/null


@@ -1,3 +1,4 @@
name: Publish release tarball
on:
push:
tags:
@@ -16,7 +17,7 @@ jobs:
popd
./packaging/artifact-list.sh > artifact-list.txt
- name: save-artifact-list
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v1
with:
name: artifact-list
path: artifact-list.txt
@@ -29,7 +30,7 @@ jobs:
steps:
- uses: actions/checkout@v1
- name: get-artifact-list
uses: actions/download-artifact@master
uses: actions/download-artifact@v1
with:
name: artifact-list
- run: |
@@ -44,7 +45,7 @@ jobs:
fi
- name: store-artifacts
if: env.artifact-built == 'true'
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v1
with:
name: kata-artifacts
path: kata-static-kernel.tar.gz
@@ -57,7 +58,7 @@ jobs:
steps:
- uses: actions/checkout@v1
- name: get-artifact-list
uses: actions/download-artifact@master
uses: actions/download-artifact@v1
with:
name: artifact-list
- run: |
@@ -72,7 +73,7 @@ jobs:
fi
- name: store-artifacts
if: env.artifact-built == 'true'
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v1
with:
name: kata-artifacts
path: kata-static-experimental-kernel.tar.gz
@@ -85,7 +86,7 @@ jobs:
steps:
- uses: actions/checkout@v1
- name: get-artifact-list
uses: actions/download-artifact@master
uses: actions/download-artifact@v1
with:
name: artifact-list
- name: build-qemu
@@ -98,7 +99,7 @@ jobs:
fi
- name: store-artifacts
if: env.artifact-built == 'true'
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v1
with:
name: kata-artifacts
path: kata-static-qemu.tar.gz
@@ -111,7 +112,7 @@ jobs:
steps:
- uses: actions/checkout@v1
- name: get-artifact-list
uses: actions/download-artifact@master
uses: actions/download-artifact@v1
with:
name: artifact-list
- name: build-nemu
@@ -124,7 +125,7 @@ jobs:
fi
- name: store-artifacts
if: env.artifact-built == 'true'
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v1
with:
name: kata-artifacts
path: kata-static-nemu.tar.gz
@@ -138,7 +139,7 @@ jobs:
steps:
- uses: actions/checkout@v1
- name: get-artifact-list
uses: actions/download-artifact@master
uses: actions/download-artifact@v1
with:
name: artifact-list
- name: build-qemu-virtiofsd
@@ -151,7 +152,7 @@ jobs:
fi
- name: store-artifacts
if: env.artifact-built == 'true'
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v1
with:
name: kata-artifacts
path: kata-static-qemu-virtiofsd.tar.gz
@@ -165,7 +166,7 @@ jobs:
steps:
- uses: actions/checkout@v1
- name: get-artifact-list
uses: actions/download-artifact@master
uses: actions/download-artifact@v1
with:
name: artifact-list
- name: build-image
@@ -178,7 +179,7 @@ jobs:
fi
- name: store-artifacts
if: env.artifact-built == 'true'
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v1
with:
name: kata-artifacts
path: kata-static-image.tar.gz
@@ -192,7 +193,7 @@ jobs:
steps:
- uses: actions/checkout@v1
- name: get-artifact-list
uses: actions/download-artifact@master
uses: actions/download-artifact@v1
with:
name: artifact-list
- name: build-firecracker
@@ -205,7 +206,7 @@ jobs:
fi
- name: store-artifacts
if: env.artifact-built == 'true'
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v1
with:
name: kata-artifacts
path: kata-static-firecracker.tar.gz
@@ -219,7 +220,7 @@ jobs:
steps:
- uses: actions/checkout@v1
- name: get-artifact-list
uses: actions/download-artifact@master
uses: actions/download-artifact@v1
with:
name: artifact-list
- name: build-clh
@@ -232,7 +233,7 @@ jobs:
fi
- name: store-artifacts
if: env.artifact-built == 'true'
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v1
with:
name: kata-artifacts
path: kata-static-clh.tar.gz
@@ -246,7 +247,7 @@ jobs:
steps:
- uses: actions/checkout@v1
- name: get-artifact-list
uses: actions/download-artifact@master
uses: actions/download-artifact@v1
with:
name: artifact-list
- name: build-kata-components
@@ -259,7 +260,7 @@ jobs:
fi
- name: store-artifacts
if: env.artifact-built == 'true'
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v1
with:
name: kata-artifacts
path: kata-static-kata-components.tar.gz
@@ -270,14 +271,14 @@ jobs:
steps:
- uses: actions/checkout@v1
- name: get-artifacts
uses: actions/download-artifact@master
uses: actions/download-artifact@v1
with:
name: kata-artifacts
- name: colate-artifacts
run: |
$GITHUB_WORKSPACE/.github/workflows/gather-artifacts.sh
- name: store-artifacts
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v1
with:
name: release-candidate
path: kata-static.tar.xz
@@ -287,7 +288,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: get-artifacts
uses: actions/download-artifact@master
uses: actions/download-artifact@v1
with:
name: release-candidate
- name: build-and-push-kata-deploy-ci
@@ -328,7 +329,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: download-artifacts
uses: actions/download-artifact@master
uses: actions/download-artifact@v1
with:
name: release-candidate
- name: install hub
@@ -339,7 +340,10 @@ jobs:
- name: push static tarball to github
run: |
tag=$(echo $GITHUB_REF | cut -d/ -f3-)
mv release-candidate/kata-static.tar.xz release-candidate/kata-static-$tag-x86_64.tar.xz
git clone https://github.com/kata-containers/runtime.git
tarball="kata-static-$tag-x86_64.tar.xz"
repo="https://github.com/kata-containers/runtime.git"
mv release-candidate/kata-static.tar.xz "release-candidate/${tarball}"
git clone "${repo}"
cd runtime
GITHUB_TOKEN=${{ secrets.GIT_UPLOAD_TOKEN }} hub release edit -m "" -a ../release-candidate/kata-static-${tag}-x86_64.tar.xz "${tag}"
echo "uploading asset '${tarball}' to '${repo}' tag: ${tag}"
GITHUB_TOKEN=${{ secrets.GIT_UPLOAD_TOKEN }} hub release edit -m "" -a "../release-candidate/${tarball}" "${tag}"

VERSION Normal file

@@ -0,0 +1 @@
1.11.5


@@ -28,12 +28,6 @@ run_static_checks()
bash "$tests_repo_dir/.ci/static-checks.sh" "github.com/kata-containers/kata-containers"
}
run_rust_test()
{
clone_tests_repo
bash "$tests_repo_dir/.ci/rust-test.sh"
}
run_go_test()
{
clone_tests_repo

src/agent/.cargo/config Normal file

@@ -0,0 +1,15 @@
## Copyright (c) 2020 ARM Limited
##
## SPDX-License-Identifier: Apache-2.0
##
[target.aarch64-unknown-linux-musl]
## Only setting linker with `aarch64-linux-musl-gcc`, the
## `rust-agent` could be totally statically linked.
linker = "aarch64-linux-musl-gcc"
## The __addtf3, __subtf3 and __multf3 symbols are used by aarch64-musl,
## but are not provided by rust compiler-builtins.
## For now, the only functional workaround accepted by rust communities
## is to get them from libgcc.
rustflags = [ "-C", "link-arg=-lgcc" ]


@@ -6,6 +6,7 @@ edition = "2018"
[dependencies]
oci = { path = "oci" }
logging = { path = "logging" }
rustjail = { path = "rustjail" }
protocols = { path = "protocols" }
netlink = { path = "netlink" }
@@ -15,19 +16,27 @@ grpcio = { git="https://github.com/alipay/grpc-rs", branch="rust_agent" }
protobuf = "2.6.1"
futures = "0.1.27"
libc = "0.2.58"
nix = "0.14.1"
nix = "0.17.0"
prctl = "1.0.0"
serde_json = "1.0.39"
signal-hook = "0.1.9"
scan_fmt = "0.2.3"
scopeguard = "1.0.0"
regex = "1"
# slog:
# - Dynamic keys required to allow HashMap keys to be slog::Serialized.
# - The 'max_*' features allow changing the log level at runtime
# (by stopping the compiler from removing log calls).
slog = { version = "2.5.2", features = ["dynamic-keys", "max_level_trace", "release_max_level_info"] }
slog-json = "2.3.0"
slog-async = "2.3.0"
slog-scope = "4.1.2"
# for testing
tempfile = "3.1.0"
[workspace]
members = [
"logging",
"netlink",
"oci",
"protocols",
"rustjail",
]


@@ -34,6 +34,22 @@ TARGET_PATH = target/$(TRIPLE)/$(BUILD_TYPE)/$(TARGET)
DESTDIR :=
BINDIR := /usr/bin
# Define if agent will be installed as init
INIT := no
# Path to systemd unit directory if installed as not init.
UNIT_DIR := /usr/lib/systemd/system
GENERATED_FILES :=
ifeq ($(INIT),no)
# Unit file to start kata agent in systemd systems
UNIT_FILES = kata-agent.service
GENERATED_FILES := $(UNIT_FILES)
# Target to be reached in systemd services
UNIT_FILES += kata-containers.target
endif
# Display name of command and it's version (or a message if not available).
#
# Arguments:
@@ -47,6 +63,10 @@ define get_toolchain_version
$(shell printf "%s: %s\\n" "toolchain" "$(or $(shell rustup show active-toolchain 2>/dev/null), (unknown))")
endef
define INSTALL_FILE
install -D -m 644 $1 $(DESTDIR)$2/$1 || exit 1;
endef
default: $(TARGET) show-header
$(TARGET): $(TARGET_PATH)
@@ -57,18 +77,30 @@ $(TARGET_PATH): $(SOURCES) | show-summary
show-header:
@printf "%s - version %s (commit %s)\n\n" "$(TARGET)" "$(VERSION)" "$(COMMIT_MSG)"
install:
$(GENERATED_FILES): %: %.in
@sed \
-e 's|[@]bindir[@]|$(BINDIR)|g' \
-e 's|[@]kata-agent[@]|$(TARGET)|g' \
"$<" > "$@"
install: build-service
@install -D $(TARGET_PATH) $(DESTDIR)/$(BINDIR)/$(TARGET)
clean:
@cargo clean
check:
@cargo test --target $(TRIPLE)
@cargo test --all --target $(TRIPLE)
run:
@cargo run --target $(TRIPLE)
build-service: $(GENERATED_FILES)
ifeq ($(INIT),no)
@echo "Installing systemd unit files..."
$(foreach f,$(UNIT_FILES),$(call INSTALL_FILE,$f,$(UNIT_DIR)))
endif
show-summary: show-header
@printf "project:\n"
@printf " name: $(PROJECT_NAME)\n"


@@ -0,0 +1,22 @@
#
# Copyright (c) 2018-2019 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#
[Unit]
Description=Kata Containers Agent
Documentation=https://github.com/kata-containers/kata-containers
Wants=kata-containers.target
[Service]
# Send agent output to tty to allow capture debug logs
# from a VM vsock port
StandardOutput=tty
Type=simple
ExecStart=@bindir@/@kata-agent@
LimitNOFILE=infinity
# ExecStop is required for static agent tracing; in all other scenarios
# the runtime handles shutting down the VM.
ExecStop=/bin/sync ; /usr/bin/systemctl --force poweroff
FailureAction=poweroff


@@ -0,0 +1,15 @@
#
# Copyright (c) 2018-2019 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#
[Unit]
Description=Kata Containers Agent Target
Requires=basic.target
Requires=tmp.mount
Wants=chronyd.service
Requires=kata-agent.service
Conflicts=rescue.service rescue.target
After=basic.target rescue.service rescue.target
AllowIsolate=yes


@@ -0,0 +1,20 @@
[package]
name = "logging"
version = "0.1.0"
authors = ["Tim Zhang <tim@hyper.sh>"]
edition = "2018"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
serde_json = "1.0.39"
# slog:
# - Dynamic keys required to allow HashMap keys to be slog::Serialized.
# - The 'max_*' features allow changing the log level at runtime
# (by stopping the compiler from removing log calls).
slog = { version = "2.5.2", features = ["dynamic-keys", "max_level_trace", "release_max_level_info"] }
slog-json = "2.3.0"
slog-async = "2.3.0"
slog-scope = "4.1.2"
# for testing
tempfile = "3.1.0"


@@ -2,6 +2,8 @@
//
// SPDX-License-Identifier: Apache-2.0
//
#[macro_use]
extern crate slog;
use slog::{BorrowedKV, Drain, Key, OwnedKV, OwnedKVList, Record, KV};
use std::collections::HashMap;
@@ -146,12 +148,6 @@ impl<D> RuntimeLevelFilter<D> {
level: Mutex::new(level),
}
}
fn set_level(&self, level: slog::Level) {
let mut log_level = self.level.lock().unwrap();
*log_level = level;
}
}
impl<D> Drain for RuntimeLevelFilter<D>


@@ -8,7 +8,7 @@ edition = "2018"
[dependencies]
libc = "0.2.58"
nix = "0.14.1"
nix = "0.17.0"
protobuf = "2.6.1"
rustjail = { path = "../rustjail" }
protocols = { path = "../protocols" }

File diff suppressed because it is too large.


@@ -12,7 +12,7 @@ serde_derive = "1.0.91"
oci = { path = "../oci" }
protocols = { path ="../protocols" }
caps = "0.3.0"
nix = "0.14.1"
nix = "0.17.0"
scopeguard = "1.0.0"
prctl = "1.0.0"
lazy_static = "1.3.0"
@@ -22,4 +22,4 @@ slog = "2.5.2"
slog-scope = "4.1.2"
scan_fmt = "0.2"
regex = "1.1"
path-absolutize = { git = "git://github.com/magiclen/path-absolutize.git", tag= "v1.1.3" }
path-absolutize = { git = "git://github.com/magiclen/path-absolutize.git", tag= "v1.2.0" }


@@ -9,10 +9,12 @@
use lazy_static;
use crate::errors::*;
use crate::log_child;
use crate::sync::write_count;
use caps::{self, CapSet, Capability, CapsHashSet};
use protocols::oci::LinuxCapabilities;
use slog::Logger;
use oci::LinuxCapabilities;
use std::collections::HashMap;
use std::os::unix::io::RawFd;
lazy_static! {
pub static ref CAPSMAP: HashMap<String, Capability> = {
@@ -76,14 +78,14 @@ lazy_static! {
};
}
fn to_capshashset(logger: &Logger, caps: &[String]) -> CapsHashSet {
fn to_capshashset(cfd_log: RawFd, caps: &[String]) -> CapsHashSet {
let mut r = CapsHashSet::new();
for cap in caps.iter() {
let c = CAPSMAP.get(cap);
if c.is_none() {
warn!(logger, "{} is not a cap", cap);
log_child!(cfd_log, "{} is not a cap", cap);
continue;
}
@@ -98,37 +100,35 @@ pub fn reset_effective() -> Result<()> {
Ok(())
}
pub fn drop_priviledges(logger: &Logger, caps: &LinuxCapabilities) -> Result<()> {
let logger = logger.new(o!("subsystem" => "capabilities"));
pub fn drop_priviledges(cfd_log: RawFd, caps: &LinuxCapabilities) -> Result<()> {
let all = caps::all();
for c in all.difference(&to_capshashset(&logger, caps.Bounding.as_ref())) {
for c in all.difference(&to_capshashset(cfd_log, caps.bounding.as_ref())) {
caps::drop(None, CapSet::Bounding, *c)?;
}
caps::set(
None,
CapSet::Effective,
to_capshashset(&logger, caps.Effective.as_ref()),
to_capshashset(cfd_log, caps.effective.as_ref()),
)?;
caps::set(
None,
CapSet::Permitted,
to_capshashset(&logger, caps.Permitted.as_ref()),
to_capshashset(cfd_log, caps.permitted.as_ref()),
)?;
caps::set(
None,
CapSet::Inheritable,
to_capshashset(&logger, caps.Inheritable.as_ref()),
to_capshashset(cfd_log, caps.inheritable.as_ref()),
)?;
if let Err(_) = caps::set(
None,
CapSet::Ambient,
to_capshashset(&logger, caps.Ambient.as_ref()),
to_capshashset(cfd_log, caps.ambient.as_ref()),
) {
warn!(logger, "failed to set ambient capability");
log_child!(cfd_log, "failed to set ambient capability");
}
Ok(())


@@ -2,7 +2,6 @@
//
// SPDX-License-Identifier: Apache-2.0
//
use crate::cgroups::FreezerState;
use crate::cgroups::Manager as CgroupManager;
use crate::container::DEFAULT_DEVICES;
@@ -10,15 +9,16 @@ use crate::errors::*;
use lazy_static;
use libc::{self, pid_t};
use nix::errno::Errno;
use oci::{LinuxDeviceCgroup, LinuxResources, LinuxThrottleDevice, LinuxWeightDevice};
use protobuf::{CachedSize, RepeatedField, SingularPtrField, UnknownFields};
use protocols::agent::{
BlkioStats, BlkioStatsEntry, CgroupStats, CpuStats, CpuUsage, HugetlbStats, MemoryData,
MemoryStats, PidsStats, ThrottlingData,
};
use protocols::oci::{LinuxDeviceCgroup, LinuxResources, LinuxThrottleDevice, LinuxWeightDevice};
use regex::Regex;
use std::collections::HashMap;
use std::fs;
use std::path::Path;
// Convenience macro to obtain the scope logger
macro_rules! sl {
@@ -57,63 +57,51 @@ lazy_static! {
pub static ref DEFAULT_ALLOWED_DEVICES: Vec<LinuxDeviceCgroup> = {
let mut v = Vec::new();
v.push(LinuxDeviceCgroup {
Allow: true,
Type: "c".to_string(),
Major: WILDCARD,
Minor: WILDCARD,
Access: "m".to_string(),
unknown_fields: UnknownFields::default(),
cached_size: CachedSize::default(),
allow: true,
r#type: "c".to_string(),
major: Some(WILDCARD),
minor: Some(WILDCARD),
access: "m".to_string(),
});
v.push(LinuxDeviceCgroup {
Allow: true,
Type: "b".to_string(),
Major: WILDCARD,
Minor: WILDCARD,
Access: "m".to_string(),
unknown_fields: UnknownFields::default(),
cached_size: CachedSize::default(),
allow: true,
r#type: "b".to_string(),
major: Some(WILDCARD),
minor: Some(WILDCARD),
access: "m".to_string(),
});
v.push(LinuxDeviceCgroup {
Allow: true,
Type: "c".to_string(),
Major: 5,
Minor: 1,
Access: "rwm".to_string(),
unknown_fields: UnknownFields::default(),
cached_size: CachedSize::default(),
allow: true,
r#type: "c".to_string(),
major: Some(5),
minor: Some(1),
access: "rwm".to_string(),
});
v.push(LinuxDeviceCgroup {
Allow: true,
Type: "c".to_string(),
Major: 136,
Minor: WILDCARD,
Access: "rwm".to_string(),
unknown_fields: UnknownFields::default(),
cached_size: CachedSize::default(),
allow: true,
r#type: "c".to_string(),
major: Some(136),
minor: Some(WILDCARD),
access: "rwm".to_string(),
});
v.push(LinuxDeviceCgroup {
Allow: true,
Type: "c".to_string(),
Major: 5,
Minor: 2,
Access: "rwm".to_string(),
unknown_fields: UnknownFields::default(),
cached_size: CachedSize::default(),
allow: true,
r#type: "c".to_string(),
major: Some(5),
minor: Some(2),
access: "rwm".to_string(),
});
v.push(LinuxDeviceCgroup {
Allow: true,
Type: "c".to_string(),
Major: 10,
Minor: 200,
Access: "rwm".to_string(),
unknown_fields: UnknownFields::default(),
cached_size: CachedSize::default(),
allow: true,
r#type: "c".to_string(),
major: Some(10),
minor: Some(200),
access: "rwm".to_string(),
});
v
@@ -219,7 +207,7 @@ fn parse_size(s: &str, m: &HashMap<String, u128>) -> Result<u128> {
fn custom_size(mut size: f64, base: f64, m: &Vec<String>) -> String {
let mut i = 0;
while size > base {
while size >= base && i < m.len() - 1 {
size /= base;
i += 1;
}
@@ -319,7 +307,6 @@ where
T: ToString,
{
let p = format!("{}/{}", dir, file);
info!(sl!(), "{}", p.as_str());
fs::write(p.as_str(), v.to_string().as_bytes())?;
Ok(())
}
@@ -419,10 +406,10 @@ impl Subsystem for CpuSet {
let mut cpus: &str = "";
let mut mems: &str = "";
if r.CPU.is_some() {
let cpu = r.CPU.as_ref().unwrap();
cpus = cpu.Cpus.as_str();
mems = cpu.Mems.as_str();
if r.cpu.is_some() {
let cpu = r.cpu.as_ref().unwrap();
cpus = cpu.cpus.as_str();
mems = cpu.mems.as_str();
}
// For updatecontainer, just set the new value
@@ -466,17 +453,25 @@ impl Subsystem for Cpu {
}
fn set(&self, dir: &str, r: &LinuxResources, _update: bool) -> Result<()> {
if r.CPU.is_none() {
if r.cpu.is_none() {
return Ok(());
}
let cpu = r.CPU.as_ref().unwrap();
let cpu = r.cpu.as_ref().unwrap();
try_write_nonzero(dir, CPU_RT_PERIOD_US, cpu.RealtimePeriod as i128)?;
try_write_nonzero(dir, CPU_RT_RUNTIME_US, cpu.RealtimeRuntime as i128)?;
write_nonzero(dir, CPU_SHARES, cpu.Shares as i128)?;
write_nonzero(dir, CPU_CFS_QUOTA_US, cpu.Quota as i128)?;
write_nonzero(dir, CPU_CFS_PERIOD_US, cpu.Period as i128)?;
try_write_nonzero(
dir,
CPU_RT_PERIOD_US,
cpu.realtime_period.unwrap_or(0) as i128,
)?;
try_write_nonzero(
dir,
CPU_RT_RUNTIME_US,
cpu.realtime_runtime.unwrap_or(0) as i128,
)?;
write_nonzero(dir, CPU_SHARES, cpu.shares.unwrap_or(0) as i128)?;
write_nonzero(dir, CPU_CFS_QUOTA_US, cpu.quota.unwrap_or(0) as i128)?;
write_nonzero(dir, CPU_CFS_PERIOD_US, cpu.period.unwrap_or(0) as i128)?;
Ok(())
}
@@ -599,24 +594,24 @@ impl CpuAcct {
}
fn write_device(d: &LinuxDeviceCgroup, dir: &str) -> Result<()> {
let file = if d.Allow { DEVICES_ALLOW } else { DEVICES_DENY };
let file = if d.allow { DEVICES_ALLOW } else { DEVICES_DENY };
let major = if d.Major == WILDCARD {
let major = if d.major.unwrap_or(0) == WILDCARD {
"*".to_string()
} else {
d.Major.to_string()
d.major.unwrap_or(0).to_string()
};
let minor = if d.Minor == WILDCARD {
let minor = if d.minor.unwrap_or(0) == WILDCARD {
"*".to_string()
} else {
d.Minor.to_string()
d.minor.unwrap_or(0).to_string()
};
let t = if d.Type.is_empty() {
let t = if d.r#type.is_empty() {
"a"
} else {
d.Type.as_str()
d.r#type.as_str()
};
let v = format!(
@@ -624,7 +619,7 @@ fn write_device(d: &LinuxDeviceCgroup, dir: &str) -> Result<()> {
t,
major.as_str(),
minor.as_str(),
d.Access.as_str()
d.access.as_str()
);
info!(sl!(), "{}", v.as_str());
@@ -638,19 +633,17 @@ impl Subsystem for Devices {
}
fn set(&self, dir: &str, r: &LinuxResources, _update: bool) -> Result<()> {
for d in r.Devices.iter() {
for d in r.devices.iter() {
write_device(d, dir)?;
}
for d in DEFAULT_DEVICES.iter() {
let td = LinuxDeviceCgroup {
Allow: true,
Type: d.Type.clone(),
Major: d.Major,
Minor: d.Minor,
Access: "rwm".to_string(),
unknown_fields: UnknownFields::default(),
cached_size: CachedSize::default(),
allow: true,
r#type: d.r#type.clone(),
major: Some(d.major),
minor: Some(d.minor),
access: "rwm".to_string(),
};
write_device(&td, dir)?;
@@ -691,30 +684,34 @@ impl Subsystem for Memory {
}
fn set(&self, dir: &str, r: &LinuxResources, update: bool) -> Result<()> {
if r.Memory.is_none() {
if r.memory.is_none() {
return Ok(());
}
let memory = r.Memory.as_ref().unwrap();
let memory = r.memory.as_ref().unwrap();
// initialize kmem limits for accounting
if !update {
try_write(dir, KMEM_LIMIT, 1)?;
try_write(dir, KMEM_LIMIT, -1)?;
}
write_nonzero(dir, MEMORY_LIMIT, memory.Limit as i128)?;
write_nonzero(dir, MEMORY_SOFT_LIMIT, memory.Reservation as i128)?;
write_nonzero(dir, MEMORY_LIMIT, memory.limit.unwrap_or(0) as i128)?;
write_nonzero(
dir,
MEMORY_SOFT_LIMIT,
memory.reservation.unwrap_or(0) as i128,
)?;
try_write_nonzero(dir, MEMSW_LIMIT, memory.Swap as i128)?;
try_write_nonzero(dir, KMEM_LIMIT, memory.Kernel as i128)?;
try_write_nonzero(dir, MEMSW_LIMIT, memory.swap.unwrap_or(0) as i128)?;
try_write_nonzero(dir, KMEM_LIMIT, memory.kernel.unwrap_or(0) as i128)?;
write_nonzero(dir, KMEM_TCP_LIMIT, memory.KernelTCP as i128)?;
write_nonzero(dir, KMEM_TCP_LIMIT, memory.kernel_tcp.unwrap_or(0) as i128)?;
if memory.Swappiness <= 100 {
write_file(dir, SWAPPINESS, memory.Swappiness)?;
if memory.swapiness.unwrap_or(0) <= 100 {
write_file(dir, SWAPPINESS, memory.swapiness.unwrap_or(0))?;
}
if memory.DisableOOMKiller {
if memory.disable_oom_killer.unwrap_or(false) {
write_file(dir, OOM_CONTROL, 1)?;
}
@@ -808,14 +805,14 @@ impl Subsystem for Pids {
}
fn set(&self, dir: &str, r: &LinuxResources, _update: bool) -> Result<()> {
if r.Pids.is_none() {
if r.pids.is_none() {
return Ok(());
}
let pids = r.Pids.as_ref().unwrap();
let pids = r.pids.as_ref().unwrap();
let v = if pids.Limit > 0 {
pids.Limit.to_string()
let v = if pids.limit > 0 {
pids.limit.to_string()
} else {
"max".to_string()
};
@@ -857,14 +854,14 @@ impl Pids {
#[inline]
fn weight(d: &LinuxWeightDevice) -> (String, String) {
(
format!("{}:{} {}", d.Major, d.Minor, d.Weight),
format!("{}:{} {}", d.Major, d.Minor, d.LeafWeight),
format!("{:?} {:?}", d.blk, d.weight),
format!("{:?} {:?}", d.blk, d.leaf_weight),
)
}
#[inline]
fn rate(d: &LinuxThrottleDevice) -> String {
format!("{}:{} {}", d.Major, d.Minor, d.Rate)
format!("{:?} {}", d.blk, d.rate)
}
fn write_blkio_device<T: ToString>(dir: &str, file: &str, v: T) -> Result<()> {
@@ -895,34 +892,38 @@ impl Subsystem for Blkio {
}
fn set(&self, dir: &str, r: &LinuxResources, _update: bool) -> Result<()> {
if r.BlockIO.is_none() {
if r.block_io.is_none() {
return Ok(());
}
let blkio = r.BlockIO.as_ref().unwrap();
let blkio = r.block_io.as_ref().unwrap();
write_nonzero(dir, BLKIO_WEIGHT, blkio.Weight as i128)?;
write_nonzero(dir, BLKIO_LEAF_WEIGHT, blkio.LeafWeight as i128)?;
write_nonzero(dir, BLKIO_WEIGHT, blkio.weight.unwrap_or(0) as i128)?;
write_nonzero(
dir,
BLKIO_LEAF_WEIGHT,
blkio.leaf_weight.unwrap_or(0) as i128,
)?;
for d in blkio.WeightDevice.iter() {
for d in blkio.weight_device.iter() {
let (w, lw) = weight(d);
write_blkio_device(dir, BLKIO_WEIGHT_DEVICE, w)?;
write_blkio_device(dir, BLKIO_LEAF_WEIGHT_DEVICE, lw)?;
}
for d in blkio.ThrottleReadBpsDevice.iter() {
for d in blkio.throttle_read_bps_device.iter() {
write_blkio_device(dir, BLKIO_READ_BPS_DEVICE, rate(d))?;
}
for d in blkio.ThrottleWriteBpsDevice.iter() {
for d in blkio.throttle_write_bps_device.iter() {
write_blkio_device(dir, BLKIO_WRITE_BPS_DEVICE, rate(d))?;
}
for d in blkio.ThrottleReadIOPSDevice.iter() {
for d in blkio.throttle_read_iops_device.iter() {
write_blkio_device(dir, BLKIO_READ_IOPS_DEVICE, rate(d))?;
}
for d in blkio.ThrottleWriteIOPSDevice.iter() {
for d in blkio.throttle_write_iops_device.iter() {
write_blkio_device(dir, BLKIO_WRITE_IOPS_DEVICE, rate(d))?;
}
@@ -934,6 +935,11 @@ fn get_blkio_stat(dir: &str, file: &str) -> Result<RepeatedField<BlkioStatsEntry
let p = format!("{}/{}", dir, file);
let mut m = RepeatedField::new();
// do as runc
if !Path::new(&p).exists() {
return Ok(RepeatedField::new());
}
for l in fs::read_to_string(p.as_str())?.lines() {
let parts: Vec<&str> = l.split(' ').collect();
@@ -1010,9 +1016,9 @@ impl Subsystem for HugeTLB {
}
fn set(&self, dir: &str, r: &LinuxResources, _update: bool) -> Result<()> {
for l in r.HugepageLimits.iter() {
let file = format!("hugetlb.{}.limit_in_bytes", l.Pagesize);
write_file(dir, file.as_str(), l.Limit)?;
for l in r.hugepage_limits.iter() {
let file = format!("hugetlb.{}.limit_in_bytes", l.page_size);
write_file(dir, file.as_str(), l.limit)?;
}
Ok(())
}
@@ -1052,13 +1058,13 @@ impl Subsystem for NetCls {
}
fn set(&self, dir: &str, r: &LinuxResources, _update: bool) -> Result<()> {
if r.Network.is_none() {
if r.network.is_none() {
return Ok(());
}
let network = r.Network.as_ref().unwrap();
let network = r.network.as_ref().unwrap();
write_nonzero(dir, NET_CLS_CLASSID, network.ClassID as i128)?;
write_nonzero(dir, NET_CLS_CLASSID, network.class_id.unwrap_or(0) as i128)?;
Ok(())
}
@@ -1070,14 +1076,14 @@ impl Subsystem for NetPrio {
}
fn set(&self, dir: &str, r: &LinuxResources, _update: bool) -> Result<()> {
if r.Network.is_none() {
if r.network.is_none() {
return Ok(());
}
let network = r.Network.as_ref().unwrap();
let network = r.network.as_ref().unwrap();
for p in network.Priorities.iter() {
let prio = format!("{} {}", p.Name, p.Priority);
for p in network.priorities.iter() {
let prio = format!("{} {}", p.name, p.priority);
try_write_file(dir, NET_PRIO_IFPRIOMAP, prio)?;
}
@@ -1222,7 +1228,7 @@ fn get_all_procs(dir: &str) -> Result<Vec<i32>> {
Ok(m)
}
#[derive(Debug, Clone)]
#[derive(Serialize, Deserialize, Debug, Clone)]
pub struct Manager {
pub paths: HashMap<String, String>,
pub mounts: HashMap<String, String>,
@@ -1236,7 +1242,6 @@ pub const FROZEN: &'static str = "FROZEN";
impl CgroupManager for Manager {
fn apply(&self, pid: pid_t) -> Result<()> {
for (key, value) in &self.paths {
info!(sl!(), "apply cgroup {}", key);
apply(value, pid)?;
}
@@ -1247,7 +1252,6 @@ impl CgroupManager for Manager {
for (key, value) in &self.paths {
let _ = fs::create_dir_all(value);
let sub = get_subsystem(key)?;
info!(sl!(), "setting cgroup {}", key);
sub.set(value, spec, update)?;
}
@@ -1299,9 +1303,16 @@ impl CgroupManager for Manager {
};
// BlkioStats
// note that virtiofs has no blkio stats
info!(sl!(), "blkio_stats");
let blkio_stats = if self.paths.get("blkio").is_some() {
SingularPtrField::some(Blkio().get_stats(self.paths.get("blkio").unwrap())?)
match Blkio().get_stats(self.paths.get("blkio").unwrap()) {
Ok(stat) => SingularPtrField::some(stat),
Err(e) => {
warn!(sl!(), "failed to get blkio stats");
SingularPtrField::none()
}
}
} else {
SingularPtrField::none()
};


@@ -5,8 +5,8 @@
use crate::errors::*;
// use crate::configs::{FreezerState, Config};
use oci::LinuxResources;
use protocols::agent::CgroupStats;
use protocols::oci::LinuxResources;
use std::collections::HashMap;
pub mod fs;

File diff suppressed because it is too large.


@@ -16,11 +16,13 @@ error_chain! {
Ffi(std::ffi::NulError);
Caps(caps::errors::Error);
Serde(serde_json::Error);
UTF8(std::string::FromUtf8Error);
FromUTF8(std::string::FromUtf8Error);
Parse(std::num::ParseIntError);
Scanfmt(scan_fmt::parse::ScanError);
Ip(std::net::AddrParseError);
Regex(regex::Error);
EnvVar(std::env::VarError);
UTF8(std::str::Utf8Error);
}
// define new errors
errors {


@@ -42,14 +42,14 @@ macro_rules! sl {
};
}
pub mod capabilities;
pub mod cgroups;
pub mod container;
pub mod errors;
pub mod mount;
pub mod process;
pub mod specconv;
// pub mod sync;
pub mod capabilities;
pub mod sync;
pub mod validator;
// pub mod factory;
@@ -66,9 +66,6 @@ pub mod validator;
// construtc ociSpec from grpcSpec, which is needed for hook
// execution. since hooks read config.json
use std::collections::HashMap;
use std::mem::MaybeUninit;
use oci::{
Box as ociBox, Hooks as ociHooks, Linux as ociLinux, LinuxCapabilities as ociLinuxCapabilities,
Mount as ociMount, POSIXRlimit as ociPOSIXRlimit, Process as ociProcess, Root as ociRoot,
@@ -78,8 +75,10 @@ use protocols::oci::{
Hooks as grpcHooks, Linux as grpcLinux, Mount as grpcMount, Process as grpcProcess,
Root as grpcRoot, Spec as grpcSpec,
};
use std::collections::HashMap;
use std::mem::MaybeUninit;
fn process_grpc_to_oci(p: &grpcProcess) -> ociProcess {
pub fn process_grpc_to_oci(p: &grpcProcess) -> ociProcess {
let console_size = if p.ConsoleSize.is_some() {
let c = p.ConsoleSize.as_ref().unwrap();
Some(ociBox {
@@ -296,7 +295,7 @@ fn blockio_grpc_to_oci(blk: &grpcLinuxBlockIO) -> ociLinuxBlockIO {
}
}
fn resources_grpc_to_oci(res: &grpcLinuxResources) -> ociLinuxResources {
pub fn resources_grpc_to_oci(res: &grpcLinuxResources) -> ociLinuxResources {
let devices = {
let mut d = Vec::new();
for dev in res.Devices.iter() {


@@ -10,10 +10,11 @@ use nix::mount::{self, MntFlags, MsFlags};
use nix::sys::stat::{self, Mode, SFlag};
use nix::unistd::{self, Gid, Uid};
use nix::NixPath;
use protocols::oci::{LinuxDevice, Mount, Spec};
use oci::{LinuxDevice, Mount, Spec};
use std::collections::{HashMap, HashSet};
use std::fs::{self, OpenOptions};
use std::os::unix;
use std::os::unix::io::RawFd;
use std::path::{Path, PathBuf};
use path_absolutize::*;
@@ -23,11 +24,11 @@ use std::io::{BufRead, BufReader};
use crate::container::DEFAULT_DEVICES;
use crate::errors::*;
use crate::sync::write_count;
use lazy_static;
use std::string::ToString;
use protobuf::{CachedSize, RepeatedField, UnknownFields};
use slog::Logger;
use crate::log_child;
// Info reveals information about a particular mounted filesystem. This
// struct is populated from the content in the /proc/<pid>/mountinfo file.
@@ -98,7 +99,7 @@ lazy_static! {
}
pub fn init_rootfs(
logger: &Logger,
cfd_log: RawFd,
spec: &Spec,
cpath: &HashMap<String, String>,
mounts: &HashMap<String, String>,
@@ -108,14 +109,14 @@ pub fn init_rootfs(
lazy_static::initialize(&PROPAGATION);
lazy_static::initialize(&LINUXDEVICETYPE);
let linux = spec.Linux.as_ref().unwrap();
let linux = spec.linux.as_ref().unwrap();
let mut flags = MsFlags::MS_REC;
match PROPAGATION.get(&linux.RootfsPropagation.as_str()) {
match PROPAGATION.get(&linux.rootfs_propagation.as_str()) {
Some(fl) => flags |= *fl,
None => flags |= MsFlags::MS_SLAVE,
}
let rootfs = spec.Root.as_ref().unwrap().Path.as_str();
let rootfs = spec.root.as_ref().unwrap().path.as_str();
let root = fs::canonicalize(rootfs)?;
let rootfs = root.to_str().unwrap();
@@ -128,19 +129,19 @@ pub fn init_rootfs(
None::<&str>,
)?;
for m in &spec.Mounts {
for m in &spec.mounts {
let (mut flags, data) = parse_mount(&m);
if !m.destination.starts_with("/") || m.destination.contains("..") {
return Err(ErrorKind::Nix(nix::Error::Sys(Errno::EINVAL)).into());
}
if m.field_type == "cgroup" {
mount_cgroups(logger, m, rootfs, flags, &data, cpath, mounts)?;
if m.r#type == "cgroup" {
mount_cgroups(cfd_log, &m, rootfs, flags, &data, cpath, mounts)?;
} else {
if m.destination == "/dev" {
flags &= !MsFlags::MS_RDONLY;
}
mount_from(&m, &rootfs, flags, &data, "")?;
mount_from(cfd_log, &m, &rootfs, flags, &data, "")?;
}
}
@@ -148,7 +149,7 @@ pub fn init_rootfs(
unistd::chdir(rootfs)?;
default_symlinks()?;
create_devices(&linux.Devices, bind_device)?;
create_devices(&linux.devices, bind_device)?;
ensure_ptmx()?;
unistd::chdir(&olddir)?;
@@ -157,7 +158,7 @@ pub fn init_rootfs(
}
fn mount_cgroups(
logger: &Logger,
cfd_log: RawFd,
m: &Mount,
rootfs: &str,
flags: MsFlags,
@@ -168,16 +169,14 @@ fn mount_cgroups(
// mount tmpfs
let ctm = Mount {
source: "tmpfs".to_string(),
field_type: "tmpfs".to_string(),
r#type: "tmpfs".to_string(),
destination: m.destination.clone(),
options: RepeatedField::default(),
unknown_fields: UnknownFields::default(),
cached_size: CachedSize::default(),
options: Vec::new(),
};
let cflags = MsFlags::MS_NOEXEC | MsFlags::MS_NOSUID | MsFlags::MS_NODEV;
info!(logger, "tmpfs");
mount_from(&ctm, rootfs, cflags, "", "")?;
// info!(logger, "tmpfs");
mount_from(cfd_log, &ctm, rootfs, cflags, "", "")?;
let olddir = unistd::getcwd()?;
unistd::chdir(rootfs)?;
@@ -186,7 +185,7 @@ fn mount_cgroups(
// bind mount cgroups
for (key, mount) in mounts.iter() {
info!(logger, "{}", key);
log_child!(cfd_log, "mount cgroup subsystem {}", key);
let source = if cpath.get(key).is_some() {
cpath.get(key).unwrap()
} else {
@@ -213,36 +212,33 @@ fn mount_cgroups(
srcs.insert(source.to_string());
info!(logger, "{}", destination.as_str());
log_child!(cfd_log, "mount destination: {}", destination.as_str());
let bm = Mount {
source: source.to_string(),
field_type: "bind".to_string(),
r#type: "bind".to_string(),
destination: destination.clone(),
options: RepeatedField::default(),
unknown_fields: UnknownFields::default(),
cached_size: CachedSize::default(),
options: Vec::new(),
};
mount_from(
&bm,
rootfs,
flags | MsFlags::MS_REC | MsFlags::MS_BIND,
"",
"",
)?;
let mut mount_flags: MsFlags = flags | MsFlags::MS_REC | MsFlags::MS_BIND;
if key.contains("systemd") {
mount_flags &= !MsFlags::MS_RDONLY;
}
mount_from(cfd_log, &bm, rootfs, mount_flags, "", "")?;
if key != base {
let src = format!("{}/{}", m.destination.as_str(), key);
match unix::fs::symlink(destination.as_str(), &src[1..]) {
Err(e) => {
info!(
logger,
log_child!(
cfd_log,
"symlink: {} {} err: {}",
key,
destination.as_str(),
e.to_string()
);
return Err(e.into());
}
Ok(_) => {}
@@ -426,11 +422,18 @@ fn parse_mount(m: &Mount) -> (MsFlags, String) {
(flags, data.join(","))
}
fn mount_from(m: &Mount, rootfs: &str, flags: MsFlags, data: &str, _label: &str) -> Result<()> {
fn mount_from(
cfd_log: RawFd,
m: &Mount,
rootfs: &str,
flags: MsFlags,
data: &str,
_label: &str,
) -> Result<()> {
let d = String::from(data);
let dest = format!("{}{}", rootfs, &m.destination);
let src = if m.field_type.as_str() == "bind" {
let src = if m.r#type.as_str() == "bind" {
let src = fs::canonicalize(m.source.as_str())?;
let dir = if src.is_file() {
Path::new(&dest).parent().unwrap()
@@ -442,8 +445,8 @@ fn mount_from(m: &Mount, rootfs: &str, flags: MsFlags, data: &str, _label: &str)
match fs::create_dir_all(&dir) {
Ok(_) => {}
Err(e) => {
info!(
sl!(),
log_child!(
cfd_log,
"creat dir {}: {}",
dir.to_str().unwrap(),
e.to_string()
@@ -461,8 +464,6 @@ fn mount_from(m: &Mount, rootfs: &str, flags: MsFlags, data: &str, _label: &str)
PathBuf::from(&m.source)
};
info!(sl!(), "{}, {}", src.to_str().unwrap(), dest.as_str());
// ignore this check since some mount's src didn't been a directory
// such as tmpfs.
/*
@@ -477,20 +478,25 @@ fn mount_from(m: &Mount, rootfs: &str, flags: MsFlags, data: &str, _label: &str)
match stat::stat(dest.as_str()) {
Ok(_) => {}
Err(e) => {
info!(sl!(), "{}: {}", dest.as_str(), e.as_errno().unwrap().desc());
log_child!(
cfd_log,
"{}: {}",
dest.as_str(),
e.as_errno().unwrap().desc()
);
}
}
match mount::mount(
Some(src.to_str().unwrap()),
dest.as_str(),
Some(m.field_type.as_str()),
Some(m.r#type.as_str()),
flags,
Some(d.as_str()),
) {
Ok(_) => {}
Err(e) => {
info!(sl!(), "mount error: {}", e.as_errno().unwrap().desc());
log_child!(cfd_log, "mount error: {}", e.as_errno().unwrap().desc());
return Err(e.into());
}
}
@@ -513,8 +519,8 @@ fn mount_from(m: &Mount, rootfs: &str, flags: MsFlags, data: &str, _label: &str)
None::<&str>,
) {
Err(e) => {
info!(
sl!(),
log_child!(
cfd_log,
"remout {}: {}",
dest.as_str(),
e.as_errno().unwrap().desc()
@@ -550,8 +556,8 @@ fn create_devices(devices: &[LinuxDevice], bind: bool) -> Result<()> {
op(dev)?;
}
for dev in devices {
if !dev.Path.starts_with("/dev") || dev.Path.contains("..") {
let msg = format!("{} is not a valid device path", dev.Path);
if !dev.path.starts_with("/dev") || dev.path.contains("..") {
let msg = format!("{} is not a valid device path", dev.path);
bail!(ErrorKind::ErrorCode(msg));
}
op(dev)?;
@@ -581,22 +587,22 @@ lazy_static! {
}
fn mknod_dev(dev: &LinuxDevice) -> Result<()> {
let f = match LINUXDEVICETYPE.get(dev.Type.as_str()) {
let f = match LINUXDEVICETYPE.get(dev.r#type.as_str()) {
Some(v) => v,
None => return Err(ErrorKind::ErrorCode("invalid spec".to_string()).into()),
};
stat::mknod(
&dev.Path[1..],
&dev.path[1..],
*f,
Mode::from_bits_truncate(dev.FileMode),
makedev(dev.Major as u64, dev.Minor as u64),
Mode::from_bits_truncate(dev.file_mode.unwrap_or(0)),
makedev(dev.major as u64, dev.minor as u64),
)?;
unistd::chown(
&dev.Path[1..],
Some(Uid::from_raw(dev.UID as uid_t)),
Some(Gid::from_raw(dev.GID as uid_t)),
&dev.path[1..],
Some(Uid::from_raw(dev.uid.unwrap_or(0) as uid_t)),
Some(Gid::from_raw(dev.gid.unwrap_or(0) as uid_t)),
)?;
Ok(())
@@ -604,7 +610,7 @@ fn mknod_dev(dev: &LinuxDevice) -> Result<()> {
fn bind_dev(dev: &LinuxDevice) -> Result<()> {
let fd = fcntl::open(
&dev.Path[1..],
&dev.path[1..],
OFlag::O_RDWR | OFlag::O_CREAT,
Mode::from_bits_truncate(0o644),
)?;
@@ -612,8 +618,8 @@ fn bind_dev(dev: &LinuxDevice) -> Result<()> {
unistd::close(fd)?;
mount::mount(
Some(&*dev.Path),
&dev.Path[1..],
Some(&*dev.path),
&dev.path[1..],
None::<&str>,
MsFlags::MS_BIND,
None::<&str>,
@@ -621,23 +627,23 @@ fn bind_dev(dev: &LinuxDevice) -> Result<()> {
Ok(())
}
pub fn finish_rootfs(spec: &Spec) -> Result<()> {
pub fn finish_rootfs(cfd_log: RawFd, spec: &Spec) -> Result<()> {
let olddir = unistd::getcwd()?;
info!(sl!(), "{}", olddir.to_str().unwrap());
log_child!(cfd_log, "old cwd: {}", olddir.to_str().unwrap());
unistd::chdir("/")?;
if spec.Linux.is_some() {
let linux = spec.Linux.as_ref().unwrap();
if spec.linux.is_some() {
let linux = spec.linux.as_ref().unwrap();
for path in linux.MaskedPaths.iter() {
for path in linux.masked_paths.iter() {
mask_path(path)?;
}
for path in linux.ReadonlyPaths.iter() {
for path in linux.readonly_paths.iter() {
readonly_path(path)?;
}
}
for m in spec.Mounts.iter() {
for m in spec.mounts.iter() {
if m.destination == "/dev" {
let (flags, _) = parse_mount(m);
if flags.contains(MsFlags::MS_RDONLY) {
@@ -652,7 +658,7 @@ pub fn finish_rootfs(spec: &Spec) -> Result<()> {
}
}
if spec.Root.as_ref().unwrap().Readonly {
if spec.root.as_ref().unwrap().readonly {
let flags = MsFlags::MS_BIND | MsFlags::MS_RDONLY | MsFlags::MS_NODEV | MsFlags::MS_REMOUNT;
mount::mount(Some("/"), "/", None::<&str>, flags, None::<&str>)?;


@@ -12,7 +12,7 @@ use std::os::unix::io::RawFd;
// use crate::cgroups::Manager as CgroupManager;
// use crate::intelrdt::Manager as RdtManager;
use nix::fcntl::OFlag;
use nix::fcntl::{fcntl, FcntlArg, OFlag};
use nix::sys::signal::{self, Signal};
use nix::sys::socket::{self, AddressFamily, SockFlag, SockType};
use nix::sys::wait::{self, WaitStatus};
@@ -20,7 +20,7 @@ use nix::unistd::{self, Pid};
use nix::Result;
use nix::Error;
use protocols::oci::Process as OCIProcess;
use oci::Process as OCIProcess;
use slog::Logger;
#[derive(Debug)]
@@ -34,10 +34,8 @@ pub struct Process {
pub extra_files: Vec<File>,
// pub caps: Capabilities,
// pub rlimits: Vec<Rlimit>,
pub console_socket: Option<RawFd>,
pub term_master: Option<RawFd>,
// parent end of fds
pub parent_console_socket: Option<RawFd>,
pub tty: bool,
pub parent_stdin: Option<RawFd>,
pub parent_stdout: Option<RawFd>,
pub parent_stderr: Option<RawFd>,
@@ -72,7 +70,13 @@ impl ProcessOperations for Process {
}
impl Process {
pub fn new(logger: &Logger, ocip: &OCIProcess, id: &str, init: bool) -> Result<Self> {
pub fn new(
logger: &Logger,
ocip: &OCIProcess,
id: &str,
init: bool,
pipe_size: i32,
) -> Result<Self> {
let logger = logger.new(o!("subsystem" => "process"));
let mut p = Process {
@@ -83,9 +87,8 @@ impl Process {
exit_pipe_w: None,
exit_pipe_r: None,
extra_files: Vec::new(),
console_socket: None,
tty: ocip.terminal,
term_master: None,
parent_console_socket: None,
parent_stdin: None,
parent_stdout: None,
parent_stderr: None,
@@ -98,44 +101,61 @@ impl Process {
info!(logger, "before create console socket!");
if ocip.Terminal {
let (psocket, csocket) = match socket::socketpair(
AddressFamily::Unix,
SockType::Stream,
None,
SockFlag::SOCK_CLOEXEC,
) {
Ok((u, v)) => (u, v),
Err(e) => {
match e {
Error::Sys(errno) => {
info!(logger, "socketpair: {}", errno.desc());
}
_ => {
info!(logger, "socketpair: other error!");
}
}
return Err(e);
}
};
p.parent_console_socket = Some(psocket);
p.console_socket = Some(csocket);
if !p.tty {
info!(logger, "created console socket!");
let (stdin, pstdin) = unistd::pipe2(OFlag::O_CLOEXEC)?;
p.parent_stdin = Some(pstdin);
p.stdin = Some(stdin);
let (pstdout, stdout) = create_extended_pipe(OFlag::O_CLOEXEC, pipe_size)?;
p.parent_stdout = Some(pstdout);
p.stdout = Some(stdout);
let (pstderr, stderr) = create_extended_pipe(OFlag::O_CLOEXEC, pipe_size)?;
p.parent_stderr = Some(pstderr);
p.stderr = Some(stderr);
}
info!(logger, "created console socket!");
let (stdin, pstdin) = unistd::pipe2(OFlag::O_CLOEXEC)?;
p.parent_stdin = Some(pstdin);
p.stdin = Some(stdin);
let (pstdout, stdout) = unistd::pipe2(OFlag::O_CLOEXEC)?;
p.parent_stdout = Some(pstdout);
p.stdout = Some(stdout);
let (pstderr, stderr) = unistd::pipe2(OFlag::O_CLOEXEC)?;
p.parent_stderr = Some(pstderr);
p.stderr = Some(stderr);
Ok(p)
}
}
fn create_extended_pipe(flags: OFlag, pipe_size: i32) -> Result<(RawFd, RawFd)> {
let (r, w) = unistd::pipe2(flags)?;
if pipe_size > 0 {
fcntl(w, FcntlArg::F_SETPIPE_SZ(pipe_size))?;
}
Ok((r, w))
}
#[cfg(test)]
mod tests {
use crate::process::create_extended_pipe;
use nix::fcntl::{fcntl, FcntlArg, OFlag};
use std::fs;
use std::os::unix::io::RawFd;
fn get_pipe_max_size() -> i32 {
fs::read_to_string("/proc/sys/fs/pipe-max-size")
.unwrap()
.trim()
.parse::<i32>()
.unwrap()
}
fn get_pipe_size(fd: RawFd) -> i32 {
fcntl(fd, FcntlArg::F_GETPIPE_SZ).unwrap()
}
#[test]
fn test_create_extended_pipe() {
// Test the default
let (r, w) = create_extended_pipe(OFlag::O_CLOEXEC, 0).unwrap();
// Test setting to the max size
let max_size = get_pipe_max_size();
let (r, w) = create_extended_pipe(OFlag::O_CLOEXEC, max_size).unwrap();
let actual_size = get_pipe_size(w);
assert_eq!(max_size, actual_size);
}
}
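Worth noting about F_SETPIPE_SZ: the kernel rounds the requested capacity up to a power-of-two page multiple, and unprivileged callers cannot exceed /proc/sys/fs/pipe-max-size (the call fails with EPERM above the cap). A minimal sketch of the rounding behavior, reusing the same nix calls as above:

use nix::fcntl::{fcntl, FcntlArg, OFlag};
use nix::unistd;

fn main() -> nix::Result<()> {
    let (_r, w) = unistd::pipe2(OFlag::O_CLOEXEC)?;
    // Ask for 100 bytes; the kernel rounds up (typically to one 4096-byte page).
    fcntl(w, FcntlArg::F_SETPIPE_SZ(100))?;
    let actual = fcntl(w, FcntlArg::F_GETPIPE_SZ)?;
    assert!(actual >= 100); // never rounded down
    println!("pipe capacity: {} bytes", actual);
    Ok(())
}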

View File

@@ -3,7 +3,7 @@
// SPDX-License-Identifier: Apache-2.0
//
use protocols::oci::Spec;
use oci::Spec;
// use crate::configs::namespaces;
// use crate::configs::device::Device;

View File

@@ -0,0 +1,177 @@
// Copyright (c) 2019 Ant Financial
//
// SPDX-License-Identifier: Apache-2.0
//
use crate::errors::*;
use nix::errno::Errno;
use nix::unistd;
use nix::Error;
use std::mem;
use std::os::unix::io::RawFd;
pub const SYNC_SUCCESS: i32 = 1;
pub const SYNC_FAILED: i32 = 2;
pub const SYNC_DATA: i32 = 3;
const DATA_SIZE: usize = 100;
const MSG_SIZE: usize = mem::size_of::<i32>();
#[macro_export]
macro_rules! log_child {
($fd:expr, $($arg:tt)+) => ({
let lfd = $fd;
let mut log_str = format_args!($($arg)+).to_string();
log_str.push('\n');
write_count(lfd, log_str.as_bytes(), log_str.len());
})
}
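The child runs between fork and exec, where the slog logger is not usable, so log_child! writes newline-terminated formatted strings straight to a raw fd; the parent end is expected to drain that fd and re-log the lines. A hypothetical parent-side reader (drain_child_logs is illustrative, not part of this patch):

use std::fs::File;
use std::io::{BufRead, BufReader};
use std::os::unix::io::{FromRawFd, RawFd};

// Hypothetical helper: read newline-delimited child log lines from the
// pipe's read end until the write end is closed.
fn drain_child_logs(rfd: RawFd) {
    let reader = BufReader::new(unsafe { File::from_raw_fd(rfd) });
    for line in reader.lines().flatten() {
        // In the agent this would be forwarded to the real slog logger.
        println!("child: {}", line);
    }
}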
pub fn write_count(fd: RawFd, buf: &[u8], count: usize) -> Result<usize> {
let mut len = 0;
loop {
match unistd::write(fd, &buf[len..]) {
Ok(l) => {
len += l;
if len == count {
break;
}
}
Err(e) => {
if e != Error::from_errno(Errno::EINTR) {
return Err(e.into());
}
}
}
}
Ok(len)
}
fn read_count(fd: RawFd, count: usize) -> Result<Vec<u8>> {
let mut v: Vec<u8> = vec![0; count];
let mut len = 0;
loop {
match unistd::read(fd, &mut v[len..]) {
Ok(l) => {
len += l;
if len == count || l == 0 {
break;
}
}
Err(e) => {
if e != Error::from_errno(Errno::EINTR) {
return Err(e.into());
}
}
}
}
Ok(v[0..len].to_vec())
}
pub fn read_sync(fd: RawFd) -> Result<Vec<u8>> {
let buf = read_count(fd, MSG_SIZE)?;
if buf.len() != MSG_SIZE {
return Err(ErrorKind::ErrorCode(format!(
"process: {} failed to receive sync message from peer: got msg length: {}, expected: {}",
std::process::id(),
buf.len(),
MSG_SIZE
))
.into());
}
let buf_array: [u8; MSG_SIZE] = [buf[0], buf[1], buf[2], buf[3]];
let msg: i32 = i32::from_be_bytes(buf_array);
match msg {
SYNC_SUCCESS => return Ok(Vec::new()),
SYNC_DATA => {
let buf = read_count(fd, MSG_SIZE)?;
let buf_array: [u8; MSG_SIZE] = [buf[0], buf[1], buf[2], buf[3]];
let msg_length: i32 = i32::from_be_bytes(buf_array);
let data_buf = read_count(fd, msg_length as usize)?;
return Ok(data_buf);
}
SYNC_FAILED => {
let mut error_buf = vec![];
loop {
let buf = read_count(fd, DATA_SIZE)?;
error_buf.extend(&buf);
if DATA_SIZE == buf.len() {
continue;
} else {
break;
}
}
let error_str = match std::str::from_utf8(&error_buf) {
Ok(v) => v,
Err(e) => {
return Err(ErrorKind::ErrorCode(format!(
"receive error message from child process failed: {:?}",
e
))
.into())
}
};
return Err(ErrorKind::ErrorCode(String::from(error_str)).into());
}
_ => return Err(ErrorKind::ErrorCode("error in receive sync message".to_string()).into()),
}
}
pub fn write_sync(fd: RawFd, msg_type: i32, data_str: &str) -> Result<()> {
let buf = msg_type.to_be_bytes();
let count = write_count(fd, &buf, MSG_SIZE)?;
if count != MSG_SIZE {
return Err(ErrorKind::ErrorCode("error in send sync message".to_string()).into());
}
match msg_type {
SYNC_FAILED => match write_count(fd, data_str.as_bytes(), data_str.len()) {
Ok(_count) => unistd::close(fd)?,
Err(e) => {
unistd::close(fd)?;
return Err(
ErrorKind::ErrorCode("error in send message to process".to_string()).into(),
);
}
},
SYNC_DATA => {
let length: i32 = data_str.len() as i32;
match write_count(fd, &length.to_be_bytes(), MSG_SIZE) {
Ok(_count) => (),
Err(e) => {
unistd::close(fd)?;
return Err(ErrorKind::ErrorCode(
"error in send message to process".to_string(),
)
.into());
}
}
match write_count(fd, data_str.as_bytes(), data_str.len()) {
Ok(_count) => (),
Err(e) => {
unistd::close(fd)?;
return Err(ErrorKind::ErrorCode(
"error in send message to process".to_string(),
)
.into());
}
}
}
_ => (),
};
Ok(())
}
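The wire format used by read_sync/write_sync is a big-endian i32 type code, followed (for SYNC_DATA) by a big-endian i32 payload length and the payload bytes. A minimal round trip over a socketpair, assuming it sits next to these helpers and uses the crate's Result alias:

use nix::sys::socket::{socketpair, AddressFamily, SockFlag, SockType};

fn sync_round_trip() -> Result<()> {
    // Parent/child ends of the sync channel.
    let (pfd, cfd) = socketpair(
        AddressFamily::Unix,
        SockType::Stream,
        None,
        SockFlag::SOCK_CLOEXEC,
    )?;
    // "Child" sends a data message: [SYNC_DATA][len][payload].
    write_sync(cfd, SYNC_DATA, "ready")?;
    // "Parent" reads it back; read_sync returns the payload bytes.
    let data = read_sync(pfd)?;
    assert_eq!(data, b"ready".to_vec());
    Ok(())
}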

View File

@@ -1,16 +1,21 @@
// Copyright (c) 2019 Ant Financial
//
// SPDX-License-Identifier: Apache-2.0
//
use crate::container::Config;
use crate::errors::*;
use lazy_static;
use nix::errno::Errno;
use nix::Error;
use oci::{LinuxIDMapping, LinuxNamespace, Spec};
use protobuf::RepeatedField;
use protocols::oci::{LinuxIDMapping, LinuxNamespace, Spec};
use std::collections::HashMap;
use std::path::{Component, PathBuf};
fn contain_namespace(nses: &RepeatedField<LinuxNamespace>, key: &str) -> bool {
fn contain_namespace(nses: &Vec<LinuxNamespace>, key: &str) -> bool {
for ns in nses {
if ns.Type.as_str() == key {
if ns.r#type.as_str() == key {
return true;
}
}
@@ -18,10 +23,10 @@ fn contain_namespace(nses: &RepeatedField<LinuxNamespace>, key: &str) -> bool {
false
}
fn get_namespace_path(nses: &RepeatedField<LinuxNamespace>, key: &str) -> Result<String> {
fn get_namespace_path(nses: &Vec<LinuxNamespace>, key: &str) -> Result<String> {
for ns in nses {
if ns.Type.as_str() == key {
return Ok(ns.Path.clone());
if ns.r#type.as_str() == key {
return Ok(ns.path.clone());
}
}
@@ -71,15 +76,15 @@ fn network(_oci: &Spec) -> Result<()> {
}
fn hostname(oci: &Spec) -> Result<()> {
if oci.Hostname.is_empty() || oci.Hostname == "".to_string() {
if oci.hostname.is_empty() || oci.hostname == "".to_string() {
return Ok(());
}
if oci.Linux.is_none() {
if oci.linux.is_none() {
return Err(ErrorKind::Nix(Error::from_errno(Errno::EINVAL)).into());
}
let linux = oci.Linux.as_ref().unwrap();
if !contain_namespace(&linux.Namespaces, "uts") {
let linux = oci.linux.as_ref().unwrap();
if !contain_namespace(&linux.namespaces, "uts") {
return Err(ErrorKind::Nix(Error::from_errno(Errno::EINVAL)).into());
}
@@ -87,12 +92,12 @@ fn hostname(oci: &Spec) -> Result<()> {
}
fn security(oci: &Spec) -> Result<()> {
let linux = oci.Linux.as_ref().unwrap();
if linux.MaskedPaths.len() == 0 && linux.ReadonlyPaths.len() == 0 {
let linux = oci.linux.as_ref().unwrap();
if linux.masked_paths.len() == 0 && linux.readonly_paths.len() == 0 {
return Ok(());
}
if !contain_namespace(&linux.Namespaces, "mount") {
if !contain_namespace(&linux.namespaces, "mount") {
return Err(ErrorKind::Nix(Error::from_errno(Errno::EINVAL)).into());
}
@@ -101,9 +106,9 @@ fn security(oci: &Spec) -> Result<()> {
Ok(())
}
fn idmapping(maps: &RepeatedField<LinuxIDMapping>) -> Result<()> {
fn idmapping(maps: &Vec<LinuxIDMapping>) -> Result<()> {
for map in maps {
if map.Size > 0 {
if map.size > 0 {
return Ok(());
}
}
@@ -112,19 +117,19 @@ fn idmapping(maps: &RepeatedField<LinuxIDMapping>) -> Result<()> {
}
fn usernamespace(oci: &Spec) -> Result<()> {
let linux = oci.Linux.as_ref().unwrap();
if contain_namespace(&linux.Namespaces, "user") {
let linux = oci.linux.as_ref().unwrap();
if contain_namespace(&linux.namespaces, "user") {
let user_ns = PathBuf::from("/proc/self/ns/user");
if !user_ns.exists() {
return Err(ErrorKind::ErrorCode("user namespace not supported!".to_string()).into());
}
// check that the idmappings are valid; idmaps with zero size
// have been seen passed to the agent
idmapping(&linux.UIDMappings)?;
idmapping(&linux.GIDMappings)?;
idmapping(&linux.uid_mappings)?;
idmapping(&linux.gid_mappings)?;
} else {
// no user namespace but idmap
if linux.UIDMappings.len() != 0 || linux.GIDMappings.len() != 0 {
if linux.uid_mappings.len() != 0 || linux.gid_mappings.len() != 0 {
return Err(ErrorKind::Nix(Error::from_errno(Errno::EINVAL)).into());
}
}
@@ -133,8 +138,8 @@ fn usernamespace(oci: &Spec) -> Result<()> {
}
fn cgroupnamespace(oci: &Spec) -> Result<()> {
let linux = oci.Linux.as_ref().unwrap();
if contain_namespace(&linux.Namespaces, "cgroup") {
let linux = oci.linux.as_ref().unwrap();
if contain_namespace(&linux.namespaces, "cgroup") {
let path = PathBuf::from("/proc/self/ns/cgroup");
if !path.exists() {
return Err(ErrorKind::ErrorCode("cgroup unsupported!".to_string()).into());
@@ -178,10 +183,10 @@ fn check_host_ns(path: &str) -> Result<()> {
}
fn sysctl(oci: &Spec) -> Result<()> {
let linux = oci.Linux.as_ref().unwrap();
for (key, _) in linux.Sysctl.iter() {
let linux = oci.linux.as_ref().unwrap();
for (key, _) in linux.sysctl.iter() {
if SYSCTLS.contains_key(key.as_str()) || key.starts_with("fs.mqueue.") {
if contain_namespace(&linux.Namespaces, "ipc") {
if contain_namespace(&linux.namespaces, "ipc") {
continue;
} else {
return Err(ErrorKind::Nix(Error::from_errno(Errno::EINVAL)).into());
@@ -189,11 +194,11 @@ fn sysctl(oci: &Spec) -> Result<()> {
}
if key.starts_with("net.") {
if !contain_namespace(&linux.Namespaces, "network") {
if !contain_namespace(&linux.namespaces, "network") {
return Err(ErrorKind::Nix(Error::from_errno(Errno::EINVAL)).into());
}
let net = get_namespace_path(&linux.Namespaces, "network")?;
let net = get_namespace_path(&linux.namespaces, "network")?;
if net.is_empty() || net == "".to_string() {
continue;
}
@@ -201,7 +206,7 @@ fn sysctl(oci: &Spec) -> Result<()> {
check_host_ns(net.as_str())?;
}
if contain_namespace(&linux.Namespaces, "uts") {
if contain_namespace(&linux.namespaces, "uts") {
if key == "kernel.domainname" {
continue;
}
@@ -217,21 +222,21 @@ fn sysctl(oci: &Spec) -> Result<()> {
}
fn rootless_euid_mapping(oci: &Spec) -> Result<()> {
let linux = oci.Linux.as_ref().unwrap();
if !contain_namespace(&linux.Namespaces, "user") {
let linux = oci.linux.as_ref().unwrap();
if !contain_namespace(&linux.namespaces, "user") {
return Err(ErrorKind::Nix(Error::from_errno(Errno::EINVAL)).into());
}
if linux.UIDMappings.len() == 0 || linux.GIDMappings.len() == 0 {
if linux.uid_mappings.len() == 0 || linux.gid_mappings.len() == 0 {
return Err(ErrorKind::Nix(Error::from_errno(Errno::EINVAL)).into());
}
Ok(())
}
fn has_idmapping(maps: &RepeatedField<LinuxIDMapping>, id: u32) -> bool {
fn has_idmapping(maps: &Vec<LinuxIDMapping>, id: u32) -> bool {
for map in maps {
if id >= map.ContainerID && id < map.ContainerID + map.Size {
if id >= map.container_id && id < map.container_id + map.size {
return true;
}
}
@@ -239,9 +244,9 @@ fn has_idmapping(maps: &RepeatedField<LinuxIDMapping>, id: u32) -> bool {
}
fn rootless_euid_mount(oci: &Spec) -> Result<()> {
let linux = oci.Linux.as_ref().unwrap();
let linux = oci.linux.as_ref().unwrap();
for mnt in oci.Mounts.iter() {
for mnt in oci.mounts.iter() {
for opt in mnt.options.iter() {
if opt.starts_with("uid=") || opt.starts_with("gid=") {
let fields: Vec<&str> = opt.split('=').collect();
@@ -253,13 +258,13 @@ fn rootless_euid_mount(oci: &Spec) -> Result<()> {
let id = fields[1].trim().parse::<u32>()?;
if opt.starts_with("uid=") {
if !has_idmapping(&linux.UIDMappings, id) {
if !has_idmapping(&linux.uid_mappings, id) {
return Err(ErrorKind::Nix(Error::from_errno(Errno::EINVAL)).into());
}
}
if opt.starts_with("gid=") {
if !has_idmapping(&linux.GIDMappings, id) {
if !has_idmapping(&linux.gid_mappings, id) {
return Err(ErrorKind::Nix(Error::from_errno(Errno::EINVAL)).into());
}
}
@@ -279,14 +284,14 @@ pub fn validate(conf: &Config) -> Result<()> {
lazy_static::initialize(&SYSCTLS);
let oci = conf.spec.as_ref().unwrap();
if oci.Linux.is_none() {
if oci.linux.is_none() {
return Err(ErrorKind::Nix(Error::from_errno(Errno::EINVAL)).into());
}
if oci.Root.is_none() {
if oci.root.is_none() {
return Err(ErrorKind::Nix(Error::from_errno(Errno::EINVAL)).into());
}
let root = oci.Root.get_ref().Path.as_str();
let root = oci.root.as_ref().unwrap().path.as_str();
rootfs(root)?;
network(oci)?;

View File

@@ -10,9 +10,13 @@ const DEBUG_CONSOLE_FLAG: &str = "agent.debug_console";
const DEV_MODE_FLAG: &str = "agent.devmode";
const LOG_LEVEL_OPTION: &str = "agent.log";
const HOTPLUG_TIMOUT_OPTION: &str = "agent.hotplug_timeout";
const DEBUG_CONSOLE_VPORT_OPTION: &str = "agent.debug_console_vport";
const LOG_VPORT_OPTION: &str = "agent.log_vport";
const CONTAINER_PIPE_SIZE_OPTION: &str = "agent.container_pipe_size";
const DEFAULT_LOG_LEVEL: slog::Level = slog::Level::Info;
const DEFAULT_HOTPLUG_TIMEOUT: time::Duration = time::Duration::from_secs(3);
const DEFAULT_CONTAINER_PIPE_SIZE: i32 = 0;
// FIXME: unused
const TRACE_MODE_FLAG: &str = "agent.trace";
@@ -24,6 +28,9 @@ pub struct agentConfig {
pub dev_mode: bool,
pub log_level: slog::Level,
pub hotplug_timeout: time::Duration,
pub debug_console_vport: i32,
pub log_vport: i32,
pub container_pipe_size: i32,
}
impl agentConfig {
@@ -33,6 +40,9 @@ impl agentConfig {
dev_mode: false,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
debug_console_vport: 0,
log_vport: 0,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
}
}
@@ -60,12 +70,40 @@ impl agentConfig {
self.hotplug_timeout = hotplugTimeout;
}
}
if param.starts_with(format!("{}=", DEBUG_CONSOLE_VPORT_OPTION).as_str()) {
let port = get_vsock_port(param)?;
if port > 0 {
self.debug_console_vport = port;
}
}
if param.starts_with(format!("{}=", LOG_VPORT_OPTION).as_str()) {
let port = get_vsock_port(param)?;
if port > 0 {
self.log_vport = port;
}
}
if param.starts_with(format!("{}=", CONTAINER_PIPE_SIZE_OPTION).as_str()) {
let container_pipe_size = get_container_pipe_size(param)?;
self.container_pipe_size = container_pipe_size
}
}
Ok(())
}
}
fn get_vsock_port(p: &str) -> Result<i32> {
let fields: Vec<&str> = p.split("=").collect();
if fields.len() != 2 {
return Err(ErrorKind::ErrorCode("invalid port parameter".to_string()).into());
}
Ok(fields[1].parse::<i32>()?)
}
// Map logrus (https://godoc.org/github.com/sirupsen/logrus)
// log level to the equivalent slog log levels.
//
@@ -127,6 +165,40 @@ fn get_hotplug_timeout(param: &str) -> Result<time::Duration> {
Ok(time::Duration::from_secs(value.unwrap()))
}
fn get_container_pipe_size(param: &str) -> Result<i32> {
let fields: Vec<&str> = param.split("=").collect();
if fields.len() != 2 {
return Err(
ErrorKind::ErrorCode(String::from("invalid container pipe size parameter")).into(),
);
}
let key = fields[0];
if key != CONTAINER_PIPE_SIZE_OPTION {
return Err(
ErrorKind::ErrorCode(String::from("invalid container pipe size key name")).into(),
);
}
let res = fields[1].parse::<i32>();
if res.is_err() {
return Err(
ErrorKind::ErrorCode(String::from("unable to parse container pipe size")).into(),
);
}
let value = res.unwrap();
if value < 0 {
return Err(ErrorKind::ErrorCode(String::from(
"container pipe size should not be negative",
))
.into());
}
Ok(value)
}
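Both helpers parse a single key=value token from the kernel command line: get_vsock_port only checks the token's shape, while get_container_pipe_size also validates the key name and rejects negative values. A quick illustration with hypothetical inputs:

fn main() {
    // Well-formed parameter: parses to its integer value.
    assert_eq!(
        get_container_pipe_size("agent.container_pipe_size=2097152").unwrap(),
        2097152
    );
    // Wrong key, unparsable value, and negative size are all rejected.
    assert!(get_container_pipe_size("agent.pipe=1").is_err());
    assert!(get_container_pipe_size("agent.container_pipe_size=foo").is_err());
    assert!(get_container_pipe_size("agent.container_pipe_size=-1").is_err());
}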
#[cfg(test)]
mod tests {
use super::*;
@@ -143,6 +215,11 @@ mod tests {
const ERR_INVALID_HOTPLUG_TIMEOUT_PARAM: &str = "unable to parse hotplug timeout";
const ERR_INVALID_HOTPLUG_TIMEOUT_KEY: &str = "invalid hotplug timeout key name";
const ERR_INVALID_CONTAINER_PIPE_SIZE: &str = "invalid container pipe size parameter";
const ERR_INVALID_CONTAINER_PIPE_SIZE_PARAM: &str = "unable to parse container pipe size";
const ERR_INVALID_CONTAINER_PIPE_SIZE_KEY: &str = "invalid container pipe size key name";
const ERR_INVALID_CONTAINER_PIPE_NEGATIVE: &str = "container pipe size should not be negative";
// helper function to make errors less crazy-long
fn make_err(desc: &str) -> Error {
ErrorKind::ErrorCode(desc.to_string()).into()
@@ -189,6 +266,7 @@ mod tests {
dev_mode: bool,
log_level: slog::Level,
hotplug_timeout: time::Duration,
container_pipe_size: i32,
}
let tests = &[
@@ -198,6 +276,7 @@ mod tests {
dev_mode: true,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: "agent.debug_console agent.devmodex",
@@ -205,6 +284,7 @@ mod tests {
dev_mode: false,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: "agent.logx=debug",
@@ -212,6 +292,7 @@ mod tests {
dev_mode: false,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: "agent.log=debug",
@@ -219,6 +300,7 @@ mod tests {
dev_mode: false,
log_level: slog::Level::Debug,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: "",
@@ -226,6 +308,7 @@ mod tests {
dev_mode: false,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: "foo",
@@ -233,6 +316,7 @@ mod tests {
dev_mode: false,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: "foo bar",
@@ -240,6 +324,7 @@ mod tests {
dev_mode: false,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: "foo bar",
@@ -247,6 +332,7 @@ mod tests {
dev_mode: false,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: "foo agent bar",
@@ -254,6 +340,7 @@ mod tests {
dev_mode: false,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: "foo debug_console agent bar devmode",
@@ -261,6 +348,7 @@ mod tests {
dev_mode: false,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: "agent.debug_console",
@@ -268,6 +356,7 @@ mod tests {
dev_mode: false,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: " agent.debug_console ",
@@ -275,6 +364,7 @@ mod tests {
dev_mode: false,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: "agent.debug_console foo",
@@ -282,6 +372,7 @@ mod tests {
dev_mode: false,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: " agent.debug_console foo",
@@ -289,6 +380,7 @@ mod tests {
dev_mode: false,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: "foo agent.debug_console bar",
@@ -296,6 +388,7 @@ mod tests {
dev_mode: false,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: "foo agent.debug_console",
@@ -303,6 +396,7 @@ mod tests {
dev_mode: false,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: "foo agent.debug_console ",
@@ -310,6 +404,7 @@ mod tests {
dev_mode: false,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: "agent.devmode",
@@ -317,6 +412,7 @@ mod tests {
dev_mode: true,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: " agent.devmode ",
@@ -324,6 +420,7 @@ mod tests {
dev_mode: true,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: "agent.devmode foo",
@@ -331,6 +428,7 @@ mod tests {
dev_mode: true,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: " agent.devmode foo",
@@ -338,6 +436,7 @@ mod tests {
dev_mode: true,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: "foo agent.devmode bar",
@@ -345,6 +444,7 @@ mod tests {
dev_mode: true,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: "foo agent.devmode",
@@ -352,6 +452,7 @@ mod tests {
dev_mode: true,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: "foo agent.devmode ",
@@ -359,6 +460,7 @@ mod tests {
dev_mode: true,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: "agent.devmode agent.debug_console",
@@ -366,6 +468,7 @@ mod tests {
dev_mode: true,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: "agent.devmode agent.debug_console agent.hotplug_timeout=100",
@@ -373,6 +476,7 @@ mod tests {
dev_mode: true,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: time::Duration::from_secs(100),
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: "agent.devmode agent.debug_console agent.hotplug_timeout=0",
@@ -380,6 +484,39 @@ mod tests {
dev_mode: true,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: "agent.devmode agent.debug_console agent.container_pipe_size=2097152",
debug_console: true,
dev_mode: true,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: 2097152,
},
TestData {
contents: "agent.devmode agent.debug_console agent.container_pipe_size=100",
debug_console: true,
dev_mode: true,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: 100,
},
TestData {
contents: "agent.devmode agent.debug_console agent.container_pipe_size=0",
debug_console: true,
dev_mode: true,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: "agent.devmode agent.debug_console agent.container_pip_siz=100",
debug_console: true,
dev_mode: true,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
];
@@ -409,9 +546,15 @@ mod tests {
.expect(&format!("{}: failed to write file contents", msg));
let mut config = agentConfig::new();
assert!(config.debug_console == false, msg);
assert!(config.dev_mode == false, msg);
assert!(config.hotplug_timeout == time::Duration::from_secs(3), msg);
assert_eq!(config.debug_console, false, "{}", msg);
assert_eq!(config.dev_mode, false, "{}", msg);
assert_eq!(
config.hotplug_timeout,
time::Duration::from_secs(3),
"{}",
msg
);
assert_eq!(config.container_pipe_size, 0, "{}", msg);
let result = config.parse_cmdline(filename);
assert!(result.is_ok(), "{}", msg);
@@ -420,6 +563,7 @@ mod tests {
assert_eq!(d.dev_mode, config.dev_mode, "{}", msg);
assert_eq!(d.log_level, config.log_level, "{}", msg);
assert_eq!(d.hotplug_timeout, config.hotplug_timeout, "{}", msg);
assert_eq!(d.container_pipe_size, config.container_pipe_size, "{}", msg);
}
}
@@ -660,4 +804,78 @@ mod tests {
assert_result!(d.result, result, format!("{}", msg));
}
}
#[test]
fn test_get_container_pipe_size() {
#[derive(Debug)]
struct TestData<'a> {
param: &'a str,
result: Result<i32>,
}
let tests = &[
TestData {
param: "",
result: Err(make_err(ERR_INVALID_CONTAINER_PIPE_SIZE)),
},
TestData {
param: "agent.container_pipe_size",
result: Err(make_err(ERR_INVALID_CONTAINER_PIPE_SIZE)),
},
TestData {
param: "foo=bar",
result: Err(make_err(ERR_INVALID_CONTAINER_PIPE_SIZE_KEY)),
},
TestData {
param: "agent.container_pip_siz=1",
result: Err(make_err(ERR_INVALID_CONTAINER_PIPE_SIZE_KEY)),
},
TestData {
param: "agent.container_pipe_size=1",
result: Ok(1),
},
TestData {
param: "agent.container_pipe_size=3",
result: Ok(3),
},
TestData {
param: "agent.container_pipe_size=2097152",
result: Ok(2097152),
},
TestData {
param: "agent.container_pipe_size=0",
result: Ok(0),
},
TestData {
param: "agent.container_pipe_size=-1",
result: Err(make_err(ERR_INVALID_CONTAINER_PIPE_NEGATIVE)),
},
TestData {
param: "agent.container_pipe_size=foobar",
result: Err(make_err(ERR_INVALID_CONTAINER_PIPE_SIZE_PARAM)),
},
TestData {
param: "agent.container_pipe_size=j",
result: Err(make_err(ERR_INVALID_CONTAINER_PIPE_SIZE_PARAM)),
},
TestData {
param: "agent.container_pipe_size=4jbsdja",
result: Err(make_err(ERR_INVALID_CONTAINER_PIPE_SIZE_PARAM)),
},
TestData {
param: "agent.container_pipe_size=4294967296",
result: Err(make_err(ERR_INVALID_CONTAINER_PIPE_SIZE_PARAM)),
},
];
for (i, d) in tests.iter().enumerate() {
let msg = format!("test[{}]: {:?}", i, d);
let result = get_container_pipe_size(d.param);
let msg = format!("{}: result: {:?}", msg, result);
assert_result!(d.result, result, format!("{}", msg));
}
}
}

View File

@@ -14,8 +14,8 @@ use crate::linux_abi::*;
use crate::mount::{DRIVERBLKTYPE, DRIVERMMIOBLKTYPE, DRIVERNVDIMMTYPE, DRIVERSCSITYPE};
use crate::sandbox::Sandbox;
use crate::{AGENT_CONFIG, GLOBAL_DEVICE_WATCHER};
use oci::Spec;
use protocols::agent::Device;
use protocols::oci::Spec;
use rustjail::errors::*;
// Convenience macro to obtain the scope logger
@@ -207,7 +207,7 @@ fn update_spec_device_list(device: &Device, spec: &mut Spec) -> Result<()> {
.into());
}
let linux = match spec.Linux.as_mut() {
let linux = match spec.linux.as_mut() {
None => {
return Err(
ErrorKind::ErrorCode("Spec didn't container linux field".to_string()).into(),
@@ -232,14 +232,14 @@ fn update_spec_device_list(device: &Device, spec: &mut Spec) -> Result<()> {
"got the device: dev_path: {}, major: {}, minor: {}\n", &device.vm_path, major_id, minor_id
);
let devices = linux.Devices.as_mut_slice();
let devices = linux.devices.as_mut_slice();
for dev in devices.iter_mut() {
if dev.Path == device.container_path {
let host_major = dev.Major;
let host_minor = dev.Minor;
if dev.path == device.container_path {
let host_major = dev.major;
let host_minor = dev.minor;
dev.Major = major_id as i64;
dev.Minor = minor_id as i64;
dev.major = major_id as i64;
dev.minor = minor_id as i64;
info!(
sl!(),
@@ -252,12 +252,12 @@ fn update_spec_device_list(device: &Device, spec: &mut Spec) -> Result<()> {
// Resources must be updated since they are used to identify the
// device in the devices cgroup.
if let Some(res) = linux.Resources.as_mut() {
let ds = res.Devices.as_mut_slice();
if let Some(res) = linux.resources.as_mut() {
let ds = res.devices.as_mut_slice();
for d in ds.iter_mut() {
if d.Major == host_major && d.Minor == host_minor {
d.Major = major_id as i64;
d.Minor = minor_id as i64;
if d.major == Some(host_major) && d.minor == Some(host_minor) {
d.major = Some(major_id as i64);
d.minor = Some(minor_id as i64);
info!(
sl!(),

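The remap keeps the container-visible device path but substitutes the VM-assigned major/minor pair, both in linux.devices and in the devices-cgroup resource list, so the cgroup rule still matches the node the container actually sees. A condensed sketch of that logic (field names from the oci crate; the helper itself is illustrative):

// Illustrative condensation of the remap performed above.
fn remap_device(
    spec_dev: &mut oci::LinuxDevice,
    cgroup_devs: &mut [oci::LinuxDeviceCgroup],
    major_id: i64,
    minor_id: i64,
) {
    let (host_major, host_minor) = (spec_dev.major, spec_dev.minor);
    spec_dev.major = major_id;
    spec_dev.minor = minor_id;
    // Keep the devices cgroup entries consistent with the new node.
    for d in cgroup_devs.iter_mut() {
        if d.major == Some(host_major) && d.minor == Some(host_minor) {
            d.major = Some(major_id);
            d.minor = Some(minor_id);
        }
    }
}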
View File

@@ -8,6 +8,7 @@ use grpcio::{EnvBuilder, Server, ServerBuilder};
use grpcio::{RpcStatus, RpcStatusCode};
use std::sync::{Arc, Mutex};
use oci::{LinuxNamespace, Spec};
use protobuf::{RepeatedField, SingularPtrField};
use protocols::agent::CopyFileRequest;
use protocols::agent::{
@@ -16,7 +17,6 @@ use protocols::agent::{
};
use protocols::empty::Empty;
use protocols::health::{HealthCheckResponse, HealthCheckResponse_ServingStatus};
use protocols::oci::{LinuxNamespace, Spec};
use rustjail;
use rustjail::container::{BaseContainer, LinuxContainer};
use rustjail::errors::*;
@@ -36,10 +36,12 @@ use crate::namespace::{NSTYPEIPC, NSTYPEPID, NSTYPEUTS};
use crate::random;
use crate::sandbox::Sandbox;
use crate::version::{AGENT_VERSION, API_VERSION};
use crate::AGENT_CONFIG;
use netlink::{RtnlHandle, NETLINK_ROUTE};
use libc::{self, c_ushort, pid_t, winsize, TIOCSWINSZ};
use serde_json;
use std::convert::TryFrom;
use std::fs;
use std::os::unix::io::RawFd;
use std::os::unix::prelude::PermissionsExt;
@@ -79,7 +81,15 @@ impl agentService {
let sandbox;
let mut s;
let oci = oci_spec.as_mut().unwrap();
let mut oci = match oci_spec.as_mut() {
Some(spec) => rustjail::grpc_to_oci(spec),
None => {
error!(sl!(), "no oci spec in the create container request!");
return Err(
ErrorKind::Nix(nix::Error::from_errno(nix::errno::Errno::EINVAL)).into(),
);
}
};
info!(sl!(), "receive createcontainer {}", &cid);
@@ -93,7 +103,7 @@ impl agentService {
// updates the devices listed in the OCI spec, so that they actually
// match real devices inside the VM. This step is necessary since we
// cannot predict everything from the caller.
add_devices(&req.devices.to_vec(), oci, &self.sandbox)?;
add_devices(&req.devices.to_vec(), &mut oci, &self.sandbox)?;
// Both rootfs and volumes (invoked with --volume for instance) will
// be processed the same way. The idea is to always mount any provided
@@ -109,11 +119,13 @@ impl agentService {
s.container_mounts.insert(cid.clone(), m);
}
update_container_namespaces(&s, oci)?;
update_container_namespaces(&s, &mut oci)?;
// write the spec to the bundle path; hooks might
// read the OCI spec
setup_bundle(oci)?;
let olddir = setup_bundle(&oci)?;
// restore the cwd for kata-agent process.
defer!(unistd::chdir(&olddir).unwrap());
let opts = CreateOpts {
cgroup_name: "".to_string(),
@@ -128,8 +140,15 @@ impl agentService {
let mut ctr: LinuxContainer =
LinuxContainer::new(cid.as_str(), CONTAINER_BASE, opts, &sl!())?;
let p = if oci.Process.is_some() {
let tp = Process::new(&sl!(), oci.get_Process(), eid.as_str(), true)?;
let pipe_size = AGENT_CONFIG.read().unwrap().container_pipe_size;
let p = if oci.process.is_some() {
let tp = Process::new(
&sl!(),
&oci.process.as_ref().unwrap(),
eid.as_str(),
true,
pipe_size,
)?;
tp
} else {
info!(sl!(), "no process configurations!");
@@ -169,7 +188,12 @@ impl agentService {
if req.timeout == 0 {
let s = Arc::clone(&self.sandbox);
let mut sandbox = s.lock().unwrap();
let ctr = sandbox.get_container(cid.as_str()).unwrap();
let ctr: &mut LinuxContainer = match sandbox.get_container(cid.as_str()) {
Some(cr) => cr,
None => {
return Err(ErrorKind::Nix(nix::Error::from_errno(Errno::EINVAL)).into());
}
};
ctr.destroy()?;
@@ -204,7 +228,12 @@ impl agentService {
let handle = thread::spawn(move || {
let mut sandbox = s.lock().unwrap();
let ctr = sandbox.get_container(cid2.as_str()).unwrap();
let ctr: &mut LinuxContainer = match sandbox.get_container(cid2.as_str()) {
Some(cr) => cr,
None => {
return;
}
};
ctr.destroy().unwrap();
tx.send(1).unwrap();
@@ -257,13 +286,15 @@ impl agentService {
let mut sandbox = s.lock().unwrap();
// ignore string_user, not sure what it is
let ocip = if req.process.is_some() {
let process = if req.process.is_some() {
req.process.as_ref().unwrap()
} else {
return Err(ErrorKind::Nix(nix::Error::from_errno(nix::errno::Errno::EINVAL)).into());
};
let p = Process::new(&sl!(), ocip, exec_id.as_str(), false)?;
let pipe_size = AGENT_CONFIG.read().unwrap().container_pipe_size;
let ocip = rustjail::process_grpc_to_oci(process);
let p = Process::new(&sl!(), &ocip, exec_id.as_str(), false, pipe_size)?;
let ctr = match sandbox.get_container(cid.as_str()) {
Some(v) => v,
@@ -284,6 +315,7 @@ impl agentService {
let eid = req.exec_id.clone();
let s = Arc::clone(&self.sandbox);
let mut sandbox = s.lock().unwrap();
let mut init = false;
info!(
sl!(),
@@ -291,9 +323,14 @@ impl agentService {
"container-id" => cid.clone(),
"exec-id" => eid.clone()
);
let p = find_process(&mut sandbox, cid.as_str(), eid.as_str(), true)?;
let mut signal = Signal::from_c_int(req.signal as i32).unwrap();
if eid == "" {
init = true;
}
let p = find_process(&mut sandbox, cid.as_str(), eid.as_str(), init)?;
let mut signal = Signal::try_from(req.signal as i32).unwrap();
// For container initProcess, if it hasn't installed handler for "SIGTERM" signal,
// it will ignore the "SIGTERM" signal sent to it, thus send it "SIGKILL" signal
@@ -344,7 +381,13 @@ impl agentService {
}
let mut sandbox = s.lock().unwrap();
let ctr = sandbox.get_container(cid.as_str()).unwrap();
let ctr: &mut LinuxContainer = match sandbox.get_container(cid.as_str()) {
Some(cr) => cr,
None => {
return Err(ErrorKind::Nix(nix::Error::from_errno(Errno::EINVAL)).into());
}
};
// need to close all fds
let mut p = ctr.processes.get_mut(&pid).unwrap();
@@ -557,11 +600,11 @@ impl protocols::agent_grpc::AgentService for agentService {
req: protocols::agent::ExecProcessRequest,
sink: ::grpcio::UnarySink<protocols::empty::Empty>,
) {
if let Err(_) = self.do_exec_process(req) {
if let Err(e) = self.do_exec_process(req) {
let f = sink
.fail(RpcStatus::new(
RpcStatusCode::Internal,
Some(String::from("fail to exec process!")),
Some(format!("{}", e)),
))
.map_err(|_e| error!(sl!(), "fail to exec process!"));
ctx.spawn(f);
@@ -630,7 +673,20 @@ impl protocols::agent_grpc::AgentService for agentService {
let s = Arc::clone(&self.sandbox);
let mut sandbox = s.lock().unwrap();
let ctr = sandbox.get_container(cid.as_str()).unwrap();
let ctr: &mut LinuxContainer = match sandbox.get_container(cid.as_str()) {
Some(cr) => cr,
None => {
let f = sink
.fail(RpcStatus::new(
RpcStatusCode::InvalidArgument,
Some(String::from("invalid container id")),
))
.map_err(|_e| error!(sl!(), "invalid container id!"));
ctx.spawn(f);
return;
}
};
let pids = ctr.processes().unwrap();
match format.as_str() {
@@ -718,17 +774,30 @@ impl protocols::agent_grpc::AgentService for agentService {
sink: ::grpcio::UnarySink<protocols::empty::Empty>,
) {
let cid = req.container_id.clone();
let res = req.resources.clone();
let res = req.resources;
let s = Arc::clone(&self.sandbox);
let mut sandbox = s.lock().unwrap();
let ctr = sandbox.get_container(cid.as_str()).unwrap();
let ctr: &mut LinuxContainer = match sandbox.get_container(cid.as_str()) {
Some(cr) => cr,
None => {
let f = sink
.fail(RpcStatus::new(
RpcStatusCode::Internal,
Some("invalid container id".to_string()),
))
.map_err(|_e| error!(sl!(), "invalid container id!"));
ctx.spawn(f);
return;
}
};
let resp = Empty::new();
if res.is_some() {
match ctr.set(res.unwrap()) {
let ociRes = rustjail::resources_grpc_to_oci(&res.unwrap());
match ctr.set(ociRes) {
Err(_e) => {
let f = sink
.fail(RpcStatus::new(
@@ -760,7 +829,19 @@ impl protocols::agent_grpc::AgentService for agentService {
let s = Arc::clone(&self.sandbox);
let mut sandbox = s.lock().unwrap();
let ctr = sandbox.get_container(cid.as_str()).unwrap();
let ctr: &mut LinuxContainer = match sandbox.get_container(cid.as_str()) {
Some(cr) => cr,
None => {
let f = sink
.fail(RpcStatus::new(
RpcStatusCode::Internal,
Some("invalid container id!".to_string()),
))
.map_err(|_e| error!(sl!(), "invalid container id!"));
ctx.spawn(f);
return;
}
};
let resp = match ctr.stats() {
Err(_e) => {
@@ -915,7 +996,19 @@ impl protocols::agent_grpc::AgentService for agentService {
let eid = req.exec_id.clone();
let s = Arc::clone(&self.sandbox);
let mut sandbox = s.lock().unwrap();
let p = find_process(&mut sandbox, cid.as_str(), eid.as_str(), false).unwrap();
let p = match find_process(&mut sandbox, cid.as_str(), eid.as_str(), false) {
Ok(v) => v,
Err(e) => {
let f = sink
.fail(RpcStatus::new(
RpcStatusCode::InvalidArgument,
Some(format!("invalid argument: {}", e)),
))
.map_err(|_e| error!(sl!(), "invalid argument"));
ctx.spawn(f);
return;
}
};
if p.term_master.is_none() {
let f = sink
@@ -1516,7 +1609,7 @@ fn find_process<'a>(
None => return Err(ErrorKind::ErrorCode(String::from("Invalid container id")).into()),
};
if init && eid == "" {
if init || eid == "" {
let p = match ctr.processes.get_mut(&ctr.init_process_pid) {
Some(v) => v,
None => {
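Note the guard change from `init && eid == ""` to `init || eid == ""`: an empty exec id on its own now resolves to the container's init process, instead of requiring the caller to also set init. A tiny illustration of the dispatch rule:

// Illustrative only: the predicate that now selects the init process.
fn selects_init(init: bool, eid: &str) -> bool {
    init || eid.is_empty()
}

fn main() {
    assert!(selects_init(true, "exec-1"));   // explicit init flag wins
    assert!(selects_init(false, ""));        // empty exec id implies init
    assert!(!selects_init(false, "exec-1")); // only this case looks up by exec id
}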
@@ -1579,7 +1672,7 @@ pub fn start<S: Into<String>>(sandbox: Arc<Mutex<Sandbox>>, host: S, port: u16)
// sense to rely on the namespace path provided by the host since namespaces
// are different inside the guest.
fn update_container_namespaces(sandbox: &Sandbox, spec: &mut Spec) -> Result<()> {
let linux = match spec.Linux.as_mut() {
let linux = match spec.linux.as_mut() {
None => {
return Err(
ErrorKind::ErrorCode("Spec didn't container linux field".to_string()).into(),
@@ -1590,26 +1683,26 @@ fn update_container_namespaces(sandbox: &Sandbox, spec: &mut Spec) -> Result<()>
let mut pidNs = false;
let namespaces = linux.Namespaces.as_mut_slice();
let namespaces = linux.namespaces.as_mut_slice();
for namespace in namespaces.iter_mut() {
if namespace.Type == NSTYPEPID {
if namespace.r#type == NSTYPEPID {
pidNs = true;
continue;
}
if namespace.Type == NSTYPEIPC {
namespace.Path = sandbox.shared_ipcns.path.clone();
if namespace.r#type == NSTYPEIPC {
namespace.path = sandbox.shared_ipcns.path.clone();
continue;
}
if namespace.Type == NSTYPEUTS {
namespace.Path = sandbox.shared_utsns.path.clone();
if namespace.r#type == NSTYPEUTS {
namespace.path = sandbox.shared_utsns.path.clone();
continue;
}
}
if !pidNs && !sandbox.sandbox_pid_ns {
let mut pid_ns = LinuxNamespace::new();
pid_ns.set_Type(NSTYPEPID.to_string());
linux.Namespaces.push(pid_ns);
let mut pid_ns = LinuxNamespace::default();
pid_ns.r#type = NSTYPEPID.to_string();
linux.namespaces.push(pid_ns);
}
Ok(())
@@ -1738,26 +1831,26 @@ fn do_copy_file(req: &CopyFileRequest) -> Result<()> {
Ok(())
}
fn setup_bundle(gspec: &Spec) -> Result<()> {
if gspec.Root.is_none() {
fn setup_bundle(spec: &Spec) -> Result<PathBuf> {
if spec.root.is_none() {
return Err(nix::Error::Sys(Errno::EINVAL).into());
}
let root = gspec.Root.as_ref().unwrap().Path.as_str();
let root = spec.root.as_ref().unwrap().path.as_str();
let rootfs = fs::canonicalize(root)?;
let bundle_path = rootfs.parent().unwrap().to_str().unwrap();
let config = format!("{}/{}", bundle_path, "config.json");
let oci = rustjail::grpc_to_oci(gspec);
info!(
sl!(),
"{:?}",
oci.process.as_ref().unwrap().console_size.as_ref()
spec.process.as_ref().unwrap().console_size.as_ref()
);
let _ = oci.save(config.as_str());
let _ = spec.save(config.as_str());
let olddir = unistd::getcwd().chain_err(|| "cannot getcwd")?;
unistd::chdir(bundle_path)?;
Ok(())
Ok(olddir)
}

View File

@@ -16,7 +16,7 @@ pub const SYSFS_PCI_BUS_RESCAN_FILE: &str = "/sys/bus/pci/rescan";
target_arch = "x86"
))]
pub const PCI_ROOT_BUS_PATH: &str = "/devices/pci0000:00";
#[cfg(target_arch = "arm")]
#[cfg(target_arch = "aarch64")]
pub const PCI_ROOT_BUS_PATH: &str = "/devices/platform/4010000000.pcie/pci0000:00";
pub const SYSFS_CPU_ONLINE_PATH: &str = "/sys/devices/system/cpu";

View File

@@ -10,24 +10,26 @@
#![allow(non_snake_case)]
#[macro_use]
extern crate lazy_static;
extern crate oci;
extern crate prctl;
extern crate protocols;
extern crate regex;
extern crate rustjail;
extern crate scan_fmt;
extern crate serde_json;
extern crate signal_hook;
#[macro_use]
extern crate scan_fmt;
extern crate oci;
extern crate scopeguard;
#[macro_use]
extern crate slog;
extern crate slog_async;
extern crate slog_json;
#[macro_use]
extern crate netlink;
use futures::*;
use nix::fcntl::{self, OFlag};
use nix::sys::socket::{self, AddressFamily, SockAddr, SockFlag, SockType};
use nix::sys::wait::{self, WaitStatus};
use nix::unistd;
use prctl::set_child_subreaper;
@@ -35,7 +37,7 @@ use rustjail::errors::*;
use signal_hook::{iterator::Signals, SIGCHLD};
use std::collections::HashMap;
use std::env;
use std::fs;
use std::fs::{self, File};
use std::os::unix::fs as unixfs;
use std::os::unix::io::AsRawFd;
use std::path::Path;
@@ -47,7 +49,6 @@ use unistd::Pid;
mod config;
mod device;
mod linux_abi;
mod logging;
mod mount;
mod namespace;
mod network;
@@ -99,19 +100,27 @@ fn announce(logger: &Logger) {
fn main() -> Result<()> {
let args: Vec<String> = env::args().collect();
if args.len() == 2 && args[1] == "init" {
rustjail::container::init_child();
exit(0);
}
env::set_var("RUST_BACKTRACE", "full");
lazy_static::initialize(&SHELLS);
lazy_static::initialize(&AGENT_CONFIG);
// support vsock log
let (rfd, wfd) = unistd::pipe2(OFlag::O_CLOEXEC)?;
let writer = unsafe { File::from_raw_fd(wfd) };
let agentConfig = AGENT_CONFIG.clone();
if unistd::getpid() == Pid::from_raw(1) {
// Initialize a temporary logger for the agent while it runs as the init
// process: before the base mounts are set up it cannot read "/proc/cmdline"
// to obtain the customized log level.
let writer = io::stdout();
let logger = logging::create_logger(NAME, "agent", slog::Level::Debug, writer);
init_agent_as_init(&logger)?;
}
@@ -125,7 +134,32 @@ fn main() -> Result<()> {
}
let config = agentConfig.read().unwrap();
let writer = io::stdout();
let log_vport = config.log_vport as u32;
let log_handle = thread::spawn(move || -> Result<()> {
let mut reader = unsafe { File::from_raw_fd(rfd) };
if log_vport > 0 {
let listenfd = socket::socket(
AddressFamily::Vsock,
SockType::Stream,
SockFlag::SOCK_CLOEXEC,
None,
)?;
let addr = SockAddr::new_vsock(libc::VMADDR_CID_ANY, log_vport);
socket::bind(listenfd, &addr)?;
socket::listen(listenfd, 1)?;
let datafd = socket::accept4(listenfd, SockFlag::SOCK_CLOEXEC)?;
let mut log_writer = unsafe { File::from_raw_fd(datafd) };
let _ = io::copy(&mut reader, &mut log_writer)?;
let _ = unistd::close(listenfd);
let _ = unistd::close(datafd);
}
// copy log to stdout
let mut stdout_writer = io::stdout();
let _ = io::copy(&mut reader, &mut stdout_writer)?;
Ok(())
});
let writer = unsafe { File::from_raw_fd(wfd) };
// Recreate the logger with the log level read from "/proc/cmdline".
let logger = logging::create_logger(NAME, "agent", config.log_level, writer);
@@ -143,13 +177,14 @@ fn main() -> Result<()> {
let _guard = slog_scope::set_global_logger(logger.new(o!("subsystem" => "grpc")));
let shells = SHELLS.clone();
let debug_console_vport = config.debug_console_vport as u32;
let shell_handle = if config.debug_console {
let thread_logger = logger.clone();
thread::spawn(move || {
let shells = shells.lock().unwrap();
let result = setup_debug_console(shells.to_vec());
let result = setup_debug_console(shells.to_vec(), debug_console_vport);
if result.is_err() {
// Report error, but don't fail
warn!(thread_logger, "failed to setup debug console";
@@ -196,6 +231,7 @@ fn main() -> Result<()> {
// let _ = rx.wait();
handle.join().unwrap();
let _ = log_handle.join();
if config.debug_console {
shell_handle.join().unwrap();
@@ -330,18 +366,35 @@ lazy_static! {
// pub static mut TRACE_MODE: ;
use crate::config::agentConfig;
use nix::fcntl::{self, OFlag};
use nix::sys::stat::Mode;
use std::os::unix::io::{FromRawFd, RawFd};
use std::path::PathBuf;
use std::process::{exit, Command, Stdio};
fn setup_debug_console(shells: Vec<String>) -> Result<()> {
fn setup_debug_console(shells: Vec<String>, port: u32) -> Result<()> {
for shell in shells.iter() {
let binary = PathBuf::from(shell);
if binary.exists() {
let f: RawFd = fcntl::open(CONSOLE_PATH, OFlag::O_RDWR, Mode::empty())?;
let f: RawFd = if port > 0 {
let listenfd = socket::socket(
AddressFamily::Vsock,
SockType::Stream,
SockFlag::SOCK_CLOEXEC,
None,
)?;
let addr = SockAddr::new_vsock(libc::VMADDR_CID_ANY, port);
socket::bind(listenfd, &addr)?;
socket::listen(listenfd, 1)?;
socket::accept4(listenfd, SockFlag::SOCK_CLOEXEC)?
} else {
let mut flags = OFlag::empty();
flags.insert(OFlag::O_RDWR);
flags.insert(OFlag::O_CLOEXEC);
fcntl::open(CONSOLE_PATH, flags, Mode::empty())?
};
let cmd = Command::new(shell)
.arg("-i")
.stdin(unsafe { Stdio::from_raw_fd(f) })
.stdout(unsafe { Stdio::from_raw_fd(f) })
.stderr(unsafe { Stdio::from_raw_fd(f) })
@@ -381,7 +434,7 @@ mod tests {
let mut shells = shells_ref.lock().unwrap();
shells.clear();
let result = setup_debug_console(shells.to_vec());
let result = setup_debug_console(shells.to_vec(), 0);
assert!(result.is_err());
assert_eq!(result.unwrap_err().to_string(), "Error Code: 'no shell'");
@@ -404,7 +457,7 @@ mod tests {
shells.push(shell);
let result = setup_debug_console(shells.to_vec());
let result = setup_debug_console(shells.to_vec(), 0);
assert!(result.is_err());
assert_eq!(

View File

@@ -37,6 +37,8 @@ pub struct Namespace {
pub path: String,
persistent_ns_dir: String,
ns_type: NamespaceType,
//only used for uts namespace
pub hostname: Option<String>,
}
impl Namespace {
@@ -46,6 +48,7 @@ impl Namespace {
path: String::from(""),
persistent_ns_dir: String::from(PERSISTENT_NS_DIR),
ns_type: NamespaceType::IPC,
hostname: None,
}
}
@@ -54,8 +57,11 @@ impl Namespace {
self
}
pub fn as_uts(mut self) -> Self {
pub fn as_uts(mut self, hostname: &str) -> Self {
self.ns_type = NamespaceType::UTS;
if hostname != "" {
self.hostname = Some(String::from(hostname));
}
self
}
@@ -82,6 +88,7 @@ impl Namespace {
}
self.path = new_ns_path.clone().into_os_string().into_string().unwrap();
let hostname = self.hostname.clone();
let new_thread = thread::spawn(move || {
let origin_ns_path = get_current_thread_ns_path(&ns_type.get());
@@ -98,6 +105,12 @@ impl Namespace {
return Err(err.to_string());
}
if ns_type == NamespaceType::UTS && hostname.is_some() {
match nix::unistd::sethostname(hostname.unwrap()) {
Err(err) => return Err(err.to_string()),
Ok(_) => (),
}
}
// Bind mount the new namespace from the current thread onto the mount point to persist it.
let source: &str = origin_ns_path.as_str();
let destination: &str = new_ns_path.as_path().to_str().unwrap_or("none");
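Because the spawned thread has already switched into the fresh UTS namespace when sethostname runs, the change is confined to that namespace and the host keeps its own hostname. A standalone sketch of the same pattern (needs CAP_SYS_ADMIN; illustrative only):

use nix::sched::{unshare, CloneFlags};
use nix::unistd::sethostname;

fn main() -> nix::Result<()> {
    // Enter a private UTS namespace for this process.
    unshare(CloneFlags::CLONE_NEWUTS)?;
    // Visible only inside the new namespace.
    sethostname("sandbox-test")?;
    Ok(())
}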
@@ -136,7 +149,7 @@ impl Namespace {
}
/// Represents the Namespace type.
#[derive(Clone, Copy)]
#[derive(Clone, Copy, PartialEq)]
enum NamespaceType {
IPC,
UTS,
@@ -201,7 +214,7 @@ mod tests {
let tmpdir = Builder::new().prefix("ipc").tempdir().unwrap();
let ns_uts = Namespace::new(&logger)
.as_uts()
.as_uts("test_hostname")
.set_root_dir(tmpdir.path().to_str().unwrap())
.setup();

View File

@@ -97,16 +97,22 @@ impl Sandbox {
//
// It's assumed that caller is calling this method after
// acquiring a lock on sandbox.
pub fn unset_sandbox_storage(&mut self, path: &str) -> bool {
pub fn unset_sandbox_storage(&mut self, path: &str) -> Result<bool> {
match self.storages.get_mut(path) {
None => return false,
None => {
return Err(ErrorKind::ErrorCode(format!(
"Sandbox storage with path {} not found",
path
))
.into())
}
Some(count) => {
*count -= 1;
if *count < 1 {
self.storages.remove(path);
return true;
return Ok(true);
}
false
return Ok(false);
}
}
}
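With this change unset_sandbox_storage distinguishes three outcomes: Err for an untracked path, Ok(false) when other users still reference the storage, and Ok(true) when the last reference is dropped and the entry is removed. A compact sketch of the expected flow (assumes a constructed Sandbox `s` held under its lock; the path is hypothetical):

// Illustrative flow of the new reference-counting semantics.
fn demo(s: &mut Sandbox) -> Result<()> {
    let path = "/run/kata/storage/a";
    s.set_sandbox_storage(path);                       // refcount -> 1
    s.set_sandbox_storage(path);                       // refcount -> 2
    assert_eq!(s.unset_sandbox_storage(path)?, false); // 2 -> 1, still referenced
    assert_eq!(s.unset_sandbox_storage(path)?, true);  // 1 -> 0, entry removed
    assert!(s.unset_sandbox_storage(path).is_err());   // untracked path -> Err
    Ok(())
}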
@@ -130,8 +136,15 @@ impl Sandbox {
// It's assumed that caller is calling this method after
// acquiring a lock on sandbox.
pub fn unset_and_remove_sandbox_storage(&mut self, path: &str) -> Result<()> {
if self.unset_sandbox_storage(path) {
return self.remove_sandbox_storage(path);
match self.unset_sandbox_storage(path) {
Ok(res) => {
if res {
return self.remove_sandbox_storage(path);
}
}
Err(err) => {
return Err(err);
}
}
Ok(())
}
@@ -158,7 +171,10 @@ impl Sandbox {
};
// // Set up shared UTS namespace
self.shared_utsns = match Namespace::new(&self.logger).as_uts().setup() {
self.shared_utsns = match Namespace::new(&self.logger)
.as_uts(self.hostname.as_str())
.setup()
{
Ok(ns) => ns,
Err(err) => {
return Err(ErrorKind::ErrorCode(format!(
@@ -262,3 +278,279 @@ fn online_memory(logger: &Logger) -> Result<()> {
online_resources(logger, SYSFS_MEMORY_ONLINE_PATH, r"memory[0-9]+", -1)?;
Ok(())
}
#[cfg(test)]
mod tests {
//use rustjail::Error;
use super::Sandbox;
use crate::{mount::BareMount, skip_if_not_root};
use nix::mount::MsFlags;
use oci::{Linux, Root, Spec};
use rustjail::container::LinuxContainer;
use rustjail::specconv::CreateOpts;
use slog::Logger;
use tempfile::Builder;
fn bind_mount(src: &str, dst: &str, logger: &Logger) -> Result<(), rustjail::errors::Error> {
let baremount = BareMount::new(src, dst, "bind", MsFlags::MS_BIND, "", &logger);
baremount.mount()
}
#[test]
fn set_sandbox_storage() {
let logger = slog::Logger::root(slog::Discard, o!());
let mut s = Sandbox::new(&logger).unwrap();
let tmpdir = Builder::new().tempdir().unwrap();
let tmpdir_path = tmpdir.path().to_str().unwrap();
// Add a new sandbox storage
let new_storage = s.set_sandbox_storage(&tmpdir_path);
// Check the reference counter
let ref_count = s.storages[tmpdir_path];
assert_eq!(
ref_count, 1,
"Invalid refcount, got {} expected 1.",
ref_count
);
assert_eq!(new_storage, true);
// Use the existing sandbox storage
let new_storage = s.set_sandbox_storage(&tmpdir_path);
assert_eq!(new_storage, false, "Should be false as already exists.");
// Since we are using existing storage, the reference counter
// should be 2 by now.
let ref_count = s.storages[tmpdir_path];
assert_eq!(
ref_count, 2,
"Invalid refcount, got {} expected 2.",
ref_count
);
}
#[test]
fn remove_sandbox_storage() {
skip_if_not_root!();
let logger = slog::Logger::root(slog::Discard, o!());
let s = Sandbox::new(&logger).unwrap();
let tmpdir = Builder::new().tempdir().unwrap();
let tmpdir_path = tmpdir.path().to_str().unwrap();
let srcdir = Builder::new()
.prefix("src")
.tempdir_in(tmpdir_path)
.unwrap();
let srcdir_path = srcdir.path().to_str().unwrap();
let destdir = Builder::new()
.prefix("dest")
.tempdir_in(tmpdir_path)
.unwrap();
let destdir_path = destdir.path().to_str().unwrap();
let emptydir = Builder::new()
.prefix("empty")
.tempdir_in(tmpdir_path)
.unwrap();
assert!(
s.remove_sandbox_storage(&srcdir_path).is_err(),
"Expect Err as the directory i not a mountpoint"
);
assert!(s.remove_sandbox_storage("").is_err());
let invalid_dir = emptydir.path().join("invalid");
assert!(s
.remove_sandbox_storage(invalid_dir.to_str().unwrap())
.is_err());
// Now, create a double mount as this guarantees the directory cannot
// be deleted after the first umount.
for _i in 0..2 {
assert!(bind_mount(srcdir_path, destdir_path, &logger).is_ok());
}
assert!(
s.remove_sandbox_storage(destdir_path).is_err(),
"Expect fail as deletion cannot happen due to the second mount."
);
// This time it should work as the previous two calls have undone the double
// mount.
assert!(s.remove_sandbox_storage(destdir_path).is_ok());
}
#[test]
#[allow(unused_assignments)]
fn unset_and_remove_sandbox_storage() {
skip_if_not_root!();
let logger = slog::Logger::root(slog::Discard, o!());
let mut s = Sandbox::new(&logger).unwrap();
// FIX: This test fails, not sure why yet.
assert!(
s.unset_and_remove_sandbox_storage("/tmp/testEphePath")
.is_err(),
"Should fail because sandbox storage doesn't exist"
);
let tmpdir = Builder::new().tempdir().unwrap();
let tmpdir_path = tmpdir.path().to_str().unwrap();
let srcdir = Builder::new()
.prefix("src")
.tempdir_in(tmpdir_path)
.unwrap();
let srcdir_path = srcdir.path().to_str().unwrap();
let destdir = Builder::new()
.prefix("dest")
.tempdir_in(tmpdir_path)
.unwrap();
let destdir_path = destdir.path().to_str().unwrap();
assert!(bind_mount(srcdir_path, destdir_path, &logger).is_ok());
assert_eq!(s.set_sandbox_storage(&destdir_path), true);
assert!(s.unset_and_remove_sandbox_storage(&destdir_path).is_ok());
let mut other_dir_str = String::new();
{
// Create another folder in a separate scope to ensure that it is
// deleted
let other_dir = Builder::new()
.prefix("dir")
.tempdir_in(tmpdir_path)
.unwrap();
let other_dir_path = other_dir.path().to_str().unwrap();
other_dir_str = other_dir_path.to_string();
assert_eq!(s.set_sandbox_storage(&other_dir_path), true);
}
assert!(s.unset_and_remove_sandbox_storage(&other_dir_str).is_err());
}
#[test]
fn unset_sandbox_storage() {
let logger = slog::Logger::root(slog::Discard, o!());
let mut s = Sandbox::new(&logger).unwrap();
let storage_path = "/tmp/testEphe";
// Add a new sandbox storage
assert_eq!(s.set_sandbox_storage(&storage_path), true);
// Use the existing sandbox storage
assert_eq!(
s.set_sandbox_storage(&storage_path),
false,
"Expects false as the storage is not new."
);
assert_eq!(
s.unset_sandbox_storage(&storage_path).unwrap(),
false,
"Expects false as there is still a storage."
);
// Reference counter should decrement to 1.
let ref_count = s.storages[storage_path];
assert_eq!(
ref_count, 1,
"Invalid refcount, got {} expected 1.",
ref_count
);
assert_eq!(
s.unset_sandbox_storage(&storage_path).unwrap(),
true,
"Expects true as there is still a storage."
);
// Since no container is using this sandbox storage anymore
// there should not be any reference in sandbox struct
// for the given storage
assert!(
!s.storages.contains_key(storage_path),
"The storages map should not contain the key {}",
storage_path
);
// If no container is using the sandbox storage, the reference
// counter for it should not exist.
assert!(
s.unset_sandbox_storage(&storage_path).is_err(),
"Expects false as the reference counter should no exist."
);
}
fn create_dummy_opts() -> CreateOpts {
let mut root = Root::default();
root.path = String::from("/");
let linux = Linux::default();
let mut spec = Spec::default();
spec.root = Some(root).into();
spec.linux = Some(linux).into();
CreateOpts {
cgroup_name: "".to_string(),
use_systemd_cgroup: false,
no_pivot_root: false,
no_new_keyring: false,
spec: Some(spec),
rootless_euid: false,
rootless_cgroup: false,
}
}
fn create_linuxcontainer() -> LinuxContainer {
LinuxContainer::new(
"some_id",
"/run/agent",
create_dummy_opts(),
&slog_scope::logger(),
)
.unwrap()
}
#[test]
fn get_container_entry_exist() {
skip_if_not_root!();
let logger = slog::Logger::root(slog::Discard, o!());
let mut s = Sandbox::new(&logger).unwrap();
let linux_container = create_linuxcontainer();
s.containers
.insert("testContainerID".to_string(), linux_container);
let cnt = s.get_container("testContainerID");
assert!(cnt.is_some());
}
#[test]
fn get_container_no_entry() {
let logger = slog::Logger::root(slog::Discard, o!());
let mut s = Sandbox::new(&logger).unwrap();
let cnt = s.get_container("testContainerID");
assert!(cnt.is_none());
}
#[test]
fn add_and_get_container() {
skip_if_not_root!();
let logger = slog::Logger::root(slog::Discard, o!());
let mut s = Sandbox::new(&logger).unwrap();
let linux_container = create_linuxcontainer();
s.add_container(linux_container);
assert!(s.get_container("some_id").is_some());
}
}