Compare commits

...

34 Commits

Author SHA1 Message Date
Salvador Fuentes
845ce4a727 Merge pull request #273 from amshinde/1.11.1-branch-bump
# Kata Containers 1.11.1
2020-06-05 17:38:08 -05:00
Archana Shinde
3355510feb release: Kata Containers 1.11.1
Version bump no changes

Signed-off-by: Archana Shinde <archana.m.shinde@intel.com>
2020-06-05 15:43:17 +00:00
Jose Carlos Venegas Munoz
71d25d530e Merge pull request #217 from jcvenegas/backport-release-fix
backport: release: actions: pin artifact to v1
2020-05-08 13:17:37 -05:00
Jose Carlos Venegas Munoz
55a004e6de release: actions: pin artifact to v1
The upload/download-artifact actions moved to a new
version, and master is no longer compatible.

Fixes: #211

Signed-off-by: Jose Carlos Venegas Munoz <jose.carlos.venegas.munoz@intel.com>
2020-05-08 17:18:50 +00:00
Jose Carlos Venegas Munoz
449236f7cd Merge pull request #207 from katabuilder/1.11.0-branch-bump
# Kata Containers 1.11.0
2020-05-06 11:55:58 -05:00
katabuilder
92a0f7a0b1 release: Kata Containers 1.11.0
Version bump no changes

Signed-off-by: katabuilder <katabuilder@katacontainers.io>
2020-05-05 20:13:35 +00:00
Archana Shinde
c95d09a34d Merge pull request #181 from chavafg/1.11.0-rc0-branch-bump
# Kata Containers 1.11.0-rc0
2020-04-17 14:55:51 -07:00
Salvador Fuentes
63d9a8696f release: Kata Containers 1.11.0-rc0
- Fix potential crash
- sandbox: fix the issue of missing setting hostname
- unify the rustjail's log to contain container id and exec id
- Refactor the way of creating container process

ba3c732 grpc: fix the issue of potential crashes
32431d7 rpc: fix the issue of kill container process
986e666 sandbox: fix the issue of missing setting hostname
7d9bdf7 grpc: Fix the issue passing wrong exec_id to exec process
9220fb8 rustjail: unify the rustjail's log to contain container id and exec id
c1b6838 rustjail: refactoring the way of creating container process
e56b10f rustjail: remove the unused imported crates
ded27f4 oci: add Default and Clone to oci spec objects
7df8ede rustjail: replace protocol spec with oci spec

Signed-off-by: Salvador Fuentes <salvador.fuentes@intel.com>
2020-04-17 17:51:05 +00:00
Yang Bo
c0dc7676e0 Merge pull request #179 from lifupan/fix_potentianl_crash
Fix potential crash
2020-04-07 19:58:52 +08:00
fupan.lfp
ba3c732f86 grpc: fix the issue of potential crashes
It's better to check the sandbox's get_container
result instead of unwrapping it directly; otherwise an
invalid container id would crash the agent.

Fixes: #178

Signed-off-by: fupan.lfp <fupan.lfp@antfin.com>
2020-04-02 18:58:24 +08:00
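The unwrap-versus-check distinction described above can be sketched as follows. This is a minimal stand-in, not the agent's real API: `Sandbox`, `signal_container`, and the `HashMap`-backed container table are all illustrative simplifications of the rust-agent's rpc code.

```rust
use std::collections::HashMap;

// Hypothetical stand-in for the agent's sandbox type.
struct Sandbox {
    containers: HashMap<String, String>, // id -> container (simplified)
}

impl Sandbox {
    fn get_container(&self, id: &str) -> Option<&String> {
        self.containers.get(id)
    }
}

// Before the fix, code equivalent to `sandbox.get_container(id).unwrap()`
// would panic on an unknown id and take the whole agent down. Checking
// the Option and returning an error keeps the agent alive.
fn signal_container<'a>(sandbox: &'a Sandbox, id: &str) -> Result<&'a String, String> {
    sandbox
        .get_container(id)
        .ok_or_else(|| format!("invalid container id: {}", id))
}
```

The caller can then surface the error over gRPC instead of crashing.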
fupan.lfp
32431d701c rpc: fix the issue of kill container process
When killing a process: if the exec id is empty, it
means all processes in the container should be killed;
if the exec id isn't empty, only the specific exec
process is killed.

Signed-off-by: fupan.lfp <fupan.lfp@antfin.com>
2020-04-02 17:58:46 +08:00
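The dispatch rule in that message can be sketched in a few lines. The function and its inputs are illustrative, assumed names, not the agent's actual kill implementation:

```rust
// Hypothetical sketch of the kill dispatch: an empty exec id selects
// every process in the container, a non-empty one selects only the
// matching exec process.
fn targets_to_kill<'a>(exec_id: &str, all_execs: &'a [&'a str]) -> Vec<&'a str> {
    if exec_id.is_empty() {
        // empty exec id: signal all processes in the container
        all_execs.to_vec()
    } else {
        // non-empty exec id: signal only that exec process
        all_execs.iter().copied().filter(|e| *e == exec_id).collect()
    }
}
```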
Yang Bo
6d61ab439c Merge pull request #176 from lifupan/fix_hostname
sandbox: fix the issue of missing setting hostname
2020-04-01 10:00:31 +08:00
fupan.lfp
986e666b0b sandbox: fix the issue of missing setting hostname
When setting up the persistent UTS namespace, the
hostname should be set for that namespace.

Fixes: #175

Signed-off-by: fupan.lfp <fupan.lfp@antfin.com>
2020-03-31 17:22:24 +08:00
fupan.lfp
7d9bdf7b01 grpc: Fix the issue passing wrong exec_id to exec process
This issue was accidentally introduced by PR #174; fix it.

Signed-off-by: fupan.lfp <fupan.lfp@antfin.com>
2020-03-31 17:19:40 +08:00
James O. D. Hunt
c948d8a802 Merge pull request #174 from lifupan/unify_log
unify the rustjail's log to contain container id and exec id
2020-03-30 10:02:39 +01:00
fupan.lfp
9220fb8e0c rustjail: unify the rustjail's log to contain container id and exec id
Add the container id and exec id to the container's
start logs, which makes the logs clearer to check.

Fixes: #173

Signed-off-by: fupan.lfp <fupan.lfp@antfin.com>
2020-03-27 20:10:50 +08:00
Yang Bo
1e15465012 Merge pull request #167 from lifupan/refactor
Refactor the way of creating container process
2020-03-24 11:18:42 +08:00
fupan.lfp
c1b6838e25 rustjail: refactoring the way of creating container process
In the previous implementation, a container process was
created by forking the parent process, doing most of the
setup in the forked child (rootfs mounting, dropping
capabilities, and so on), and finally exec'ing the
container entry command to switch into the container
process.

But the parent is a multi-threaded process, which can
cause a deadlock in the forked child. For example, if one
of the parent's threads performs a malloc operation that
takes a mutex lock at the same time the parent forks a
child, the lock's state is inherited by the child, but
there is no chance to release it, since the child has only
a single thread; the child would then deadlock if it
performed a malloc of its own.

Thus, the new implementation execs directly after forking
and then does the setup in the exec'ed process. Of course,
this requires data communication between parent and child,
since the child can no longer rely on the memory shared
through fork.

Fixes: #166
Fixes: #133

Signed-off-by: fupan.lfp <fupan.lfp@antfin.com>
2020-03-23 17:12:10 +08:00
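The fork-then-exec-immediately pattern described above can be sketched with std only. This is a toy model under stated assumptions: `/bin/cat` stands in for the re-exec'ed agent helper that would perform the real setup after exec, and the "setup data" is just a string streamed over a pipe rather than through fork-shared memory.

```rust
use std::io::Write;
use std::process::{Command, Stdio};

// Sketch: instead of doing container setup in the forked child (unsafe
// in a multi-threaded parent, because locks held by other threads at
// fork time are never released in the child), exec a helper right away
// and send the setup data to it over a pipe.
fn send_setup_to_child(setup: &str) -> std::io::Result<String> {
    // `/bin/cat` is a stand-in for the exec'ed setup helper; it simply
    // echoes the setup data back so the round trip is observable.
    let mut child = Command::new("/bin/cat")
        .stdin(Stdio::piped())
        .stdout(Stdio::piped())
        .spawn()?;
    child
        .stdin
        .take()
        .expect("stdin was piped")
        .write_all(setup.as_bytes())?; // data travels over a pipe, not shared memory
    let out = child.wait_with_output()?;
    Ok(String::from_utf8_lossy(&out.stdout).into_owned())
}
```

The key design point is that after exec, the child starts from a clean single-threaded state, so no inherited lock can deadlock it.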
fupan.lfp
e56b10f835 rustjail: remove the unused imported crates
remove the unused imported crates

Signed-off-by: fupan.lfp <fupan.lfp@antfin.com>
2020-03-20 17:04:05 +08:00
fupan.lfp
ded27f48d5 oci: add Default and Clone to oci spec objects
Add the Clone and Default derives to the OCI
spec objects.

Signed-off-by: fupan.lfp <fupan.lfp@antfin.com>
2020-03-20 17:03:54 +08:00
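What those derives buy is easy to show with a stand-in struct (the real types are the agent's OCI `Spec`, `Process`, etc.; these names are illustrative): `Default` lets callers construct an empty spec and fill in fields, and `Clone` lets a spec be duplicated, for example when converting from the protocol types.

```rust
// Hypothetical, cut-down versions of the OCI spec objects, carrying the
// same derives the commit adds.
#[derive(Debug, Default, Clone, PartialEq)]
struct Root {
    path: String,
    readonly: bool,
}

#[derive(Debug, Default, Clone, PartialEq)]
struct Spec {
    version: String,
    root: Option<Root>,
}
```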
fupan.lfp
7df8edef1b rustjail: replace protocol spec with oci spec
Transform the RPC protocol spec into the OCI
spec.

Signed-off-by: fupan.lfp <fupan.lfp@antfin.com>
2020-03-20 16:26:32 +08:00
James O. D. Hunt
8280208443 Merge pull request #154 from awprice/issue-152
agent: add configurable container pipe size cmdline option
2020-03-18 08:36:23 +00:00
GabyCT
7087b5f43c Merge pull request #165 from bergwolf/1.11.0-alpha1-branch-bump
# Kata Containers 1.11.0-alpha1
2020-03-17 13:10:53 -06:00
James O. D. Hunt
fe0a3a0c7c Merge pull request #156 from lifupan/master
add a workspace and run all the tests in the workspace
2020-03-17 11:10:27 +00:00
Peng Tao
fbf1d015e7 release: Kata Containers 1.11.0-alpha1
- actions: Add verbose information
- systemd-service: build rust-agent systemd services
- grpc: fix the issue of crash agent when didn't find the process

cd233c0 actions: Add verbose information
f0eaeac path-absolutize: version update
3136712 systemd-service: build rust-agent systemd services
289d617 grpc: fix the issue of crash agent when didn't find the process

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2020-03-16 12:38:41 +00:00
fupan.lfp
245183cb28 cargo: add a workspace and run all the tests in the workspace
Add a workspace and run all of the tests
under this workspace.

Fixes: #155

Signed-off-by: fupan.lfp <fupan.lfp@antfin.com>
2020-03-16 16:34:59 +08:00
GabyCT
22afde1850 Merge pull request #158 from jcvenegas/fix-157
actions: Add verbose information
2020-03-04 15:15:42 -06:00
Jose Carlos Venegas Munoz
cd233c047a actions: Add verbose information
Add logs to debug actions more easily

Fixes: #157

Signed-off-by: Jose Carlos Venegas Munoz <jose.carlos.venegas.munoz@intel.com>
2020-03-04 16:02:06 +00:00
Alex Price
204edf0e51 agent: add configurable container pipe size cmdline option
Adds a cmdline option to configure the stdout/stderr pipe sizes.
Uses `F_SETPIPE_SZ` to resize the write side of the pipe after
creation.

Example Cmdline option: `agent.container_pipe_size=2097152`

fixes #152

Signed-off-by: Alex Price <aprice@atlassian.com>
2020-03-04 15:31:59 +11:00
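The cmdline option shown in the commit can be parsed as below. The parsing approach is a sketch, not the agent's actual parser; the parsed value is what would later be handed to `fcntl(fd, F_SETPIPE_SZ, size)` on the write side of the pipe, as the commit describes.

```rust
// Sketch: extract `agent.container_pipe_size=<bytes>` from a kernel
// command line such as "console=ttyS0 agent.container_pipe_size=2097152".
fn parse_container_pipe_size(cmdline: &str) -> Option<i32> {
    cmdline
        .split_whitespace()
        .find_map(|param| param.strip_prefix("agent.container_pipe_size="))
        .and_then(|v| v.parse::<i32>().ok())
}
```

Returning `None` when the option is absent lets the agent fall back to the kernel's default pipe size.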
GabyCT
35c33bba47 Merge pull request #145 from Pennyzct/build_service_for_rust_agent
systemd-service: build rust-agent systemd services
2020-03-03 13:17:27 -06:00
Penny Zheng
f0eaeac3be path-absolutize: version update
The latest tagged version, v1.2.0, fixes the error of
inappropriately using a mutable static.

Fixes: #144

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
2020-03-03 09:24:13 +08:00
Penny Zheng
3136712d8e systemd-service: build rust-agent systemd services
Add another sub-command, `build-service`, to the Makefile
to generate the rust-agent-related systemd service files,
which are necessary for building the guest rootfs image.
The whole design follows the one in the go-agent.

Fixes: #144

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
2020-03-03 09:24:02 +08:00
James O. D. Hunt
7965445adf Merge pull request #138 from lifupan/master
grpc: fix the issue of crash agent when didn't find the process
2020-02-25 10:53:00 +00:00
fupan.lfp
289d61730c grpc: fix the issue of crash agent when didn't find the process
It's better to catch the process-not-found error in the
tty_win_resize service; otherwise, an invalid process id
could crash the agent.

Fixes: #137

Signed-off-by: fupan.lfp <fupan.lfp@antfin.com>
2020-02-11 10:04:19 +08:00
26 changed files with 1774 additions and 1218 deletions

View File

@@ -14,5 +14,5 @@ do
tar -xvf $c
done
tar cfJ ../kata-static.tar.xz ./opt
tar cvfJ ../kata-static.tar.xz ./opt
popd >>/dev/null

View File

@@ -1,3 +1,4 @@
name: Publish release tarball
on:
push:
tags:
@@ -16,7 +17,7 @@ jobs:
popd
./packaging/artifact-list.sh > artifact-list.txt
- name: save-artifact-list
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v1
with:
name: artifact-list
path: artifact-list.txt
@@ -29,7 +30,7 @@ jobs:
steps:
- uses: actions/checkout@v1
- name: get-artifact-list
uses: actions/download-artifact@master
uses: actions/download-artifact@v1
with:
name: artifact-list
- run: |
@@ -44,7 +45,7 @@ jobs:
fi
- name: store-artifacts
if: env.artifact-built == 'true'
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v1
with:
name: kata-artifacts
path: kata-static-kernel.tar.gz
@@ -57,7 +58,7 @@ jobs:
steps:
- uses: actions/checkout@v1
- name: get-artifact-list
uses: actions/download-artifact@master
uses: actions/download-artifact@v1
with:
name: artifact-list
- run: |
@@ -72,7 +73,7 @@ jobs:
fi
- name: store-artifacts
if: env.artifact-built == 'true'
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v1
with:
name: kata-artifacts
path: kata-static-experimental-kernel.tar.gz
@@ -85,7 +86,7 @@ jobs:
steps:
- uses: actions/checkout@v1
- name: get-artifact-list
uses: actions/download-artifact@master
uses: actions/download-artifact@v1
with:
name: artifact-list
- name: build-qemu
@@ -98,7 +99,7 @@ jobs:
fi
- name: store-artifacts
if: env.artifact-built == 'true'
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v1
with:
name: kata-artifacts
path: kata-static-qemu.tar.gz
@@ -111,7 +112,7 @@ jobs:
steps:
- uses: actions/checkout@v1
- name: get-artifact-list
uses: actions/download-artifact@master
uses: actions/download-artifact@v1
with:
name: artifact-list
- name: build-nemu
@@ -124,7 +125,7 @@ jobs:
fi
- name: store-artifacts
if: env.artifact-built == 'true'
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v1
with:
name: kata-artifacts
path: kata-static-nemu.tar.gz
@@ -138,7 +139,7 @@ jobs:
steps:
- uses: actions/checkout@v1
- name: get-artifact-list
uses: actions/download-artifact@master
uses: actions/download-artifact@v1
with:
name: artifact-list
- name: build-qemu-virtiofsd
@@ -151,7 +152,7 @@ jobs:
fi
- name: store-artifacts
if: env.artifact-built == 'true'
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v1
with:
name: kata-artifacts
path: kata-static-qemu-virtiofsd.tar.gz
@@ -165,7 +166,7 @@ jobs:
steps:
- uses: actions/checkout@v1
- name: get-artifact-list
uses: actions/download-artifact@master
uses: actions/download-artifact@v1
with:
name: artifact-list
- name: build-image
@@ -178,7 +179,7 @@ jobs:
fi
- name: store-artifacts
if: env.artifact-built == 'true'
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v1
with:
name: kata-artifacts
path: kata-static-image.tar.gz
@@ -192,7 +193,7 @@ jobs:
steps:
- uses: actions/checkout@v1
- name: get-artifact-list
uses: actions/download-artifact@master
uses: actions/download-artifact@v1
with:
name: artifact-list
- name: build-firecracker
@@ -205,7 +206,7 @@ jobs:
fi
- name: store-artifacts
if: env.artifact-built == 'true'
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v1
with:
name: kata-artifacts
path: kata-static-firecracker.tar.gz
@@ -219,7 +220,7 @@ jobs:
steps:
- uses: actions/checkout@v1
- name: get-artifact-list
uses: actions/download-artifact@master
uses: actions/download-artifact@v1
with:
name: artifact-list
- name: build-clh
@@ -232,7 +233,7 @@ jobs:
fi
- name: store-artifacts
if: env.artifact-built == 'true'
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v1
with:
name: kata-artifacts
path: kata-static-clh.tar.gz
@@ -246,7 +247,7 @@ jobs:
steps:
- uses: actions/checkout@v1
- name: get-artifact-list
uses: actions/download-artifact@master
uses: actions/download-artifact@v1
with:
name: artifact-list
- name: build-kata-components
@@ -259,7 +260,7 @@ jobs:
fi
- name: store-artifacts
if: env.artifact-built == 'true'
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v1
with:
name: kata-artifacts
path: kata-static-kata-components.tar.gz
@@ -270,14 +271,14 @@ jobs:
steps:
- uses: actions/checkout@v1
- name: get-artifacts
uses: actions/download-artifact@master
uses: actions/download-artifact@v1
with:
name: kata-artifacts
- name: colate-artifacts
run: |
$GITHUB_WORKSPACE/.github/workflows/gather-artifacts.sh
- name: store-artifacts
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v1
with:
name: release-candidate
path: kata-static.tar.xz
@@ -287,7 +288,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: get-artifacts
uses: actions/download-artifact@master
uses: actions/download-artifact@v1
with:
name: release-candidate
- name: build-and-push-kata-deploy-ci
@@ -328,7 +329,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: download-artifacts
uses: actions/download-artifact@master
uses: actions/download-artifact@v1
with:
name: release-candidate
- name: install hub
@@ -339,7 +340,10 @@ jobs:
- name: push static tarball to github
run: |
tag=$(echo $GITHUB_REF | cut -d/ -f3-)
mv release-candidate/kata-static.tar.xz release-candidate/kata-static-$tag-x86_64.tar.xz
git clone https://github.com/kata-containers/runtime.git
tarball="kata-static-$tag-x86_64.tar.xz"
repo="https://github.com/kata-containers/runtime.git"
mv release-candidate/kata-static.tar.xz "release-candidate/${tarball}"
git clone "${repo}"
cd runtime
GITHUB_TOKEN=${{ secrets.GIT_UPLOAD_TOKEN }} hub release edit -m "" -a ../release-candidate/kata-static-${tag}-x86_64.tar.xz "${tag}"
echo "uploading asset '${tarball}' to '${repo}' tag: ${tag}"
GITHUB_TOKEN=${{ secrets.GIT_UPLOAD_TOKEN }} hub release edit -m "" -a "../release-candidate/${tarball}" "${tag}"

View File

@@ -1 +1 @@
1.11.0-alpha0
1.11.1

View File

@@ -31,3 +31,12 @@ slog = { version = "2.5.2", features = ["dynamic-keys", "max_level_trace", "rele
slog-scope = "4.1.2"
# for testing
tempfile = "3.1.0"
[workspace]
members = [
"logging",
"netlink",
"oci",
"protocols",
"rustjail",
]

View File

@@ -34,6 +34,22 @@ TARGET_PATH = target/$(TRIPLE)/$(BUILD_TYPE)/$(TARGET)
DESTDIR :=
BINDIR := /usr/bin
# Define if agent will be installed as init
INIT := no
# Path to systemd unit directory if installed as not init.
UNIT_DIR := /usr/lib/systemd/system
GENERATED_FILES :=
ifeq ($(INIT),no)
# Unit file to start kata agent in systemd systems
UNIT_FILES = kata-agent.service
GENERATED_FILES := $(UNIT_FILES)
# Target to be reached in systemd services
UNIT_FILES += kata-containers.target
endif
# Display name of command and it's version (or a message if not available).
#
# Arguments:
@@ -47,6 +63,10 @@ define get_toolchain_version
$(shell printf "%s: %s\\n" "toolchain" "$(or $(shell rustup show active-toolchain 2>/dev/null), (unknown))")
endef
define INSTALL_FILE
install -D -m 644 $1 $(DESTDIR)$2/$1 || exit 1;
endef
default: $(TARGET) show-header
$(TARGET): $(TARGET_PATH)
@@ -57,18 +77,30 @@ $(TARGET_PATH): $(SOURCES) | show-summary
show-header:
@printf "%s - version %s (commit %s)\n\n" "$(TARGET)" "$(VERSION)" "$(COMMIT_MSG)"
install:
$(GENERATED_FILES): %: %.in
@sed \
-e 's|[@]bindir[@]|$(BINDIR)|g' \
-e 's|[@]kata-agent[@]|$(TARGET)|g' \
"$<" > "$@"
install: build-service
@install -D $(TARGET_PATH) $(DESTDIR)/$(BINDIR)/$(TARGET)
clean:
@cargo clean
check:
@cargo test --target $(TRIPLE)
@cargo test --all --target $(TRIPLE)
run:
@cargo run --target $(TRIPLE)
build-service: $(GENERATED_FILES)
ifeq ($(INIT),no)
@echo "Installing systemd unit files..."
$(foreach f,$(UNIT_FILES),$(call INSTALL_FILE,$f,$(UNIT_DIR)))
endif
show-summary: show-header
@printf "project:\n"
@printf " name: $(PROJECT_NAME)\n"

View File

@@ -0,0 +1,22 @@
#
# Copyright (c) 2018-2019 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#
[Unit]
Description=Kata Containers Agent
Documentation=https://github.com/kata-containers/kata-containers
Wants=kata-containers.target
[Service]
# Send agent output to tty to allow capture debug logs
# from a VM vsock port
StandardOutput=tty
Type=simple
ExecStart=@bindir@/@kata-agent@
LimitNOFILE=infinity
# ExecStop is required for static agent tracing; in all other scenarios
# the runtime handles shutting down the VM.
ExecStop=/bin/sync ; /usr/bin/systemctl --force poweroff
FailureAction=poweroff

View File

@@ -0,0 +1,15 @@
#
# Copyright (c) 2018-2019 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#
[Unit]
Description=Kata Containers Agent Target
Requires=basic.target
Requires=tmp.mount
Wants=chronyd.service
Requires=kata-agent.service
Conflicts=rescue.service rescue.target
After=basic.target rescue.service rescue.target
AllowIsolate=yes

View File

@@ -27,7 +27,7 @@ where
*d == T::default()
}
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct Spec {
#[serde(
default,
@@ -69,7 +69,7 @@ impl Spec {
pub type LinuxRlimit = POSIXRlimit;
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct Process {
#[serde(default)]
pub terminal: bool,
@@ -112,7 +112,7 @@ pub struct Process {
pub selinux_label: String,
}
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct LinuxCapabilities {
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub bounding: Vec<String>,
@@ -126,7 +126,7 @@ pub struct LinuxCapabilities {
pub ambient: Vec<String>,
}
#[derive(Default, Serialize, Deserialize, Debug, PartialEq)]
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct Box {
#[serde(default)]
pub height: u32,
@@ -134,7 +134,7 @@ pub struct Box {
pub width: u32,
}
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct User {
#[serde(default)]
pub uid: u32,
@@ -150,7 +150,7 @@ pub struct User {
pub username: String,
}
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct Root {
#[serde(default, skip_serializing_if = "String::is_empty")]
pub path: String,
@@ -158,7 +158,7 @@ pub struct Root {
pub readonly: bool,
}
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct Mount {
#[serde(default)]
pub destination: String,
@@ -170,7 +170,7 @@ pub struct Mount {
pub options: Vec<String>,
}
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct Hook {
#[serde(default, skip_serializing_if = "String::is_empty")]
pub path: String,
@@ -182,7 +182,7 @@ pub struct Hook {
pub timeout: Option<i32>,
}
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct Hooks {
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub prestart: Vec<Hook>,
@@ -192,7 +192,7 @@ pub struct Hooks {
pub poststop: Vec<Hook>,
}
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct Linux {
#[serde(default, rename = "uidMappings", skip_serializing_if = "Vec::is_empty")]
pub uid_mappings: Vec<LinuxIDMapping>,
@@ -238,7 +238,7 @@ pub struct Linux {
pub intel_rdt: Option<LinuxIntelRdt>,
}
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct LinuxNamespace {
#[serde(default, skip_serializing_if = "String::is_empty")]
pub r#type: LinuxNamespaceType,
@@ -256,7 +256,7 @@ pub const USERNAMESPACE: &str = "user";
pub const UTSNAMESPACE: &str = "uts";
pub const CGROUPNAMESPACE: &str = "cgroup";
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct LinuxIDMapping {
#[serde(default, rename = "containerID")]
pub container_id: u32,
@@ -266,7 +266,7 @@ pub struct LinuxIDMapping {
pub size: u32,
}
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct POSIXRlimit {
#[serde(default)]
pub r#type: String,
@@ -276,7 +276,7 @@ pub struct POSIXRlimit {
pub soft: u64,
}
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct LinuxHugepageLimit {
#[serde(default, rename = "pageSize", skip_serializing_if = "String::is_empty")]
pub page_size: String,
@@ -284,7 +284,7 @@ pub struct LinuxHugepageLimit {
pub limit: u64,
}
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct LinuxInterfacePriority {
#[serde(default, skip_serializing_if = "String::is_empty")]
pub name: String,
@@ -292,7 +292,7 @@ pub struct LinuxInterfacePriority {
pub priority: u32,
}
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct LinuxBlockIODevice {
#[serde(default)]
pub major: i64,
@@ -300,7 +300,7 @@ pub struct LinuxBlockIODevice {
pub minor: i64,
}
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct LinuxWeightDevice {
pub blk: LinuxBlockIODevice,
#[serde(default, skip_serializing_if = "Option::is_none")]
@@ -313,14 +313,14 @@ pub struct LinuxWeightDevice {
pub leaf_weight: Option<u16>,
}
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct LinuxThrottleDevice {
pub blk: LinuxBlockIODevice,
#[serde(default)]
pub rate: u64,
}
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct LinuxBlockIO {
#[serde(default, skip_serializing_if = "Option::is_none")]
pub weight: Option<u16>,
@@ -362,7 +362,7 @@ pub struct LinuxBlockIO {
pub throttle_write_iops_device: Vec<LinuxThrottleDevice>,
}
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct LinuxMemory {
#[serde(default, skip_serializing_if = "Option::is_none")]
pub limit: Option<i64>,
@@ -384,7 +384,7 @@ pub struct LinuxMemory {
pub disable_oom_killer: Option<bool>,
}
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct LinuxCPU {
#[serde(default, skip_serializing_if = "Option::is_none")]
pub shares: Option<u64>,
@@ -410,13 +410,13 @@ pub struct LinuxCPU {
pub mems: String,
}
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct LinuxPids {
#[serde(default)]
pub limit: i64,
}
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct LinuxNetwork {
#[serde(default, skip_serializing_if = "Option::is_none", rename = "classID")]
pub class_id: Option<u32>,
@@ -424,7 +424,7 @@ pub struct LinuxNetwork {
pub priorities: Vec<LinuxInterfacePriority>,
}
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct LinuxRdma {
#[serde(
default,
@@ -440,7 +440,7 @@ pub struct LinuxRdma {
pub hca_objects: Option<u32>,
}
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct LinuxResources {
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub devices: Vec<LinuxDeviceCgroup>,
@@ -464,7 +464,7 @@ pub struct LinuxResources {
pub rdma: HashMap<String, LinuxRdma>,
}
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct LinuxDevice {
#[serde(default, skip_serializing_if = "String::is_empty")]
pub path: String,
@@ -482,7 +482,7 @@ pub struct LinuxDevice {
pub gid: Option<u32>,
}
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct LinuxDeviceCgroup {
#[serde(default)]
pub allow: bool,
@@ -496,7 +496,7 @@ pub struct LinuxDeviceCgroup {
pub access: String,
}
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct Solaris {
#[serde(default, skip_serializing_if = "String::is_empty")]
pub milestone: String,
@@ -520,13 +520,13 @@ pub struct Solaris {
pub capped_memory: Option<SolarisCappedMemory>,
}
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct SolarisCappedCPU {
#[serde(default, skip_serializing_if = "String::is_empty")]
pub ncpus: String,
}
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct SolarisCappedMemory {
#[serde(default, skip_serializing_if = "String::is_empty")]
pub physical: String,
@@ -534,7 +534,7 @@ pub struct SolarisCappedMemory {
pub swap: String,
}
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct SolarisAnet {
#[serde(default, skip_serializing_if = "String::is_empty", rename = "linkname")]
pub link_name: String,
@@ -572,7 +572,7 @@ pub struct SolarisAnet {
pub mac_address: String,
}
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct Windows<T> {
#[serde(
default,
@@ -594,7 +594,7 @@ pub struct Windows<T> {
pub network: Option<WindowsNetwork>,
}
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct WindowsResources {
#[serde(default, skip_serializing_if = "Option::is_none")]
pub memory: Option<WindowsMemoryResources>,
@@ -604,13 +604,13 @@ pub struct WindowsResources {
pub storage: Option<WindowsStorageResources>,
}
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct WindowsMemoryResources {
#[serde(default, skip_serializing_if = "Option::is_none")]
pub limit: Option<u64>,
}
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct WindowsCPUResources {
#[serde(default, skip_serializing_if = "Option::is_none")]
pub count: Option<u64>,
@@ -620,7 +620,7 @@ pub struct WindowsCPUResources {
pub maximum: Option<u64>,
}
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct WindowsStorageResources {
#[serde(default, skip_serializing_if = "Option::is_none")]
pub iops: Option<u64>,
@@ -634,7 +634,7 @@ pub struct WindowsStorageResources {
pub sandbox_size: Option<u64>,
}
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct WindowsNetwork {
#[serde(
default,
@@ -658,7 +658,7 @@ pub struct WindowsNetwork {
pub network_shared_container_name: String,
}
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct WindowsHyperV {
#[serde(
default,
@@ -668,14 +668,14 @@ pub struct WindowsHyperV {
pub utility_vm_path: String,
}
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct VM {
pub hypervisor: VMHypervisor,
pub kernel: VMKernel,
pub image: VMImage,
}
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct VMHypervisor {
#[serde(default)]
pub path: String,
@@ -683,7 +683,7 @@ pub struct VMHypervisor {
pub parameters: String,
}
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct VMKernel {
#[serde(default)]
pub path: String,
@@ -693,7 +693,7 @@ pub struct VMKernel {
pub initrd: String,
}
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct VMImage {
#[serde(default)]
pub path: String,
@@ -701,7 +701,7 @@ pub struct VMImage {
pub format: String,
}
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct LinuxSeccomp {
#[serde(default, rename = "defaultAction")]
pub default_action: LinuxSeccompAction,
@@ -750,7 +750,7 @@ pub const OPGREATEREQUAL: &str = "SCMP_CMP_GE";
pub const OPGREATERTHAN: &str = "SCMP_CMP_GT";
pub const OPMASKEDEQUAL: &str = "SCMP_CMP_MASKED_EQ";
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct LinuxSeccompArg {
#[serde(default)]
pub index: u32,
@@ -762,7 +762,7 @@ pub struct LinuxSeccompArg {
pub op: LinuxSeccompOperator,
}
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct LinuxSyscall {
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub names: Vec<String>,
@@ -772,7 +772,7 @@ pub struct LinuxSyscall {
pub args: Vec<LinuxSeccompArg>,
}
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct LinuxIntelRdt {
#[serde(
default,
@@ -782,7 +782,7 @@ pub struct LinuxIntelRdt {
pub l3_cache_schema: String,
}
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct State {
#[serde(
default,

View File

@@ -22,4 +22,4 @@ slog = "2.5.2"
slog-scope = "4.1.2"
scan_fmt = "0.2"
regex = "1.1"
path-absolutize = { git = "git://github.com/magiclen/path-absolutize.git", tag= "v1.1.3" }
path-absolutize = { git = "git://github.com/magiclen/path-absolutize.git", tag= "v1.2.0" }

View File

@@ -9,10 +9,12 @@
use lazy_static;
use crate::errors::*;
use crate::log_child;
use crate::sync::write_count;
use caps::{self, CapSet, Capability, CapsHashSet};
use protocols::oci::LinuxCapabilities;
use slog::Logger;
use oci::LinuxCapabilities;
use std::collections::HashMap;
use std::os::unix::io::RawFd;
lazy_static! {
pub static ref CAPSMAP: HashMap<String, Capability> = {
@@ -76,14 +78,14 @@ lazy_static! {
};
}
fn to_capshashset(logger: &Logger, caps: &[String]) -> CapsHashSet {
fn to_capshashset(cfd_log: RawFd, caps: &[String]) -> CapsHashSet {
let mut r = CapsHashSet::new();
for cap in caps.iter() {
let c = CAPSMAP.get(cap);
if c.is_none() {
warn!(logger, "{} is not a cap", cap);
log_child!(cfd_log, "{} is not a cap", cap);
continue;
}
@@ -98,37 +100,35 @@ pub fn reset_effective() -> Result<()> {
Ok(())
}
pub fn drop_priviledges(logger: &Logger, caps: &LinuxCapabilities) -> Result<()> {
let logger = logger.new(o!("subsystem" => "capabilities"));
pub fn drop_priviledges(cfd_log: RawFd, caps: &LinuxCapabilities) -> Result<()> {
let all = caps::all();
for c in all.difference(&to_capshashset(&logger, caps.Bounding.as_ref())) {
for c in all.difference(&to_capshashset(cfd_log, caps.bounding.as_ref())) {
caps::drop(None, CapSet::Bounding, *c)?;
}
caps::set(
None,
CapSet::Effective,
to_capshashset(&logger, caps.Effective.as_ref()),
to_capshashset(cfd_log, caps.effective.as_ref()),
)?;
caps::set(
None,
CapSet::Permitted,
to_capshashset(&logger, caps.Permitted.as_ref()),
to_capshashset(cfd_log, caps.permitted.as_ref()),
)?;
caps::set(
None,
CapSet::Inheritable,
to_capshashset(&logger, caps.Inheritable.as_ref()),
to_capshashset(cfd_log, caps.inheritable.as_ref()),
)?;
if let Err(_) = caps::set(
None,
CapSet::Ambient,
to_capshashset(&logger, caps.Ambient.as_ref()),
to_capshashset(cfd_log, caps.ambient.as_ref()),
) {
warn!(logger, "failed to set ambient capability");
log_child!(cfd_log, "failed to set ambient capability");
}
Ok(())

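The capabilities hunks above replace the slog `Logger` with a raw pipe fd (`cfd_log`) and route messages through `log_child!`, since the forked child cannot safely use the parent's logger. An in-memory sketch of that pattern, with the pipe simulated by a `Vec<u8>` writer:

```rust
use std::io::Write;

// Sketch of the log_child! idea: format the message, append a newline,
// and push raw bytes at the writer (the real macro writes to a pipe fd
// with write_count; here any std::io::Write stands in for the fd).
macro_rules! log_child {
    ($w:expr, $($arg:tt)+) => ({
        let mut s = format!($($arg)+);
        s.push('\n');
        $w.write_all(s.as_bytes()).unwrap();
    })
}

fn main() {
    let mut pipe: Vec<u8> = Vec::new();
    log_child!(pipe, "{} is not a cap", "CAP_FOO");
    log_child!(pipe, "failed to set ambient capability");
    assert_eq!(
        String::from_utf8(pipe).unwrap(),
        "CAP_FOO is not a cap\nfailed to set ambient capability\n"
    );
}
```

The parent end of the pipe reads these newline-delimited lines and re-emits them through its own logger.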

@@ -2,7 +2,6 @@
//
// SPDX-License-Identifier: Apache-2.0
//
use crate::cgroups::FreezerState;
use crate::cgroups::Manager as CgroupManager;
use crate::container::DEFAULT_DEVICES;
@@ -10,15 +9,16 @@ use crate::errors::*;
use lazy_static;
use libc::{self, pid_t};
use nix::errno::Errno;
use oci::{LinuxDeviceCgroup, LinuxResources, LinuxThrottleDevice, LinuxWeightDevice};
use protobuf::{CachedSize, RepeatedField, SingularPtrField, UnknownFields};
use protocols::agent::{
BlkioStats, BlkioStatsEntry, CgroupStats, CpuStats, CpuUsage, HugetlbStats, MemoryData,
MemoryStats, PidsStats, ThrottlingData,
};
use protocols::oci::{LinuxDeviceCgroup, LinuxResources, LinuxThrottleDevice, LinuxWeightDevice};
use regex::Regex;
use std::collections::HashMap;
use std::fs;
use std::path::Path;
// Convenience macro to obtain the scope logger
macro_rules! sl {
@@ -57,63 +57,51 @@ lazy_static! {
pub static ref DEFAULT_ALLOWED_DEVICES: Vec<LinuxDeviceCgroup> = {
let mut v = Vec::new();
v.push(LinuxDeviceCgroup {
Allow: true,
Type: "c".to_string(),
Major: WILDCARD,
Minor: WILDCARD,
Access: "m".to_string(),
unknown_fields: UnknownFields::default(),
cached_size: CachedSize::default(),
allow: true,
r#type: "c".to_string(),
major: Some(WILDCARD),
minor: Some(WILDCARD),
access: "m".to_string(),
});
v.push(LinuxDeviceCgroup {
Allow: true,
Type: "b".to_string(),
Major: WILDCARD,
Minor: WILDCARD,
Access: "m".to_string(),
unknown_fields: UnknownFields::default(),
cached_size: CachedSize::default(),
allow: true,
r#type: "b".to_string(),
major: Some(WILDCARD),
minor: Some(WILDCARD),
access: "m".to_string(),
});
v.push(LinuxDeviceCgroup {
Allow: true,
Type: "c".to_string(),
Major: 5,
Minor: 1,
Access: "rwm".to_string(),
unknown_fields: UnknownFields::default(),
cached_size: CachedSize::default(),
allow: true,
r#type: "c".to_string(),
major: Some(5),
minor: Some(1),
access: "rwm".to_string(),
});
v.push(LinuxDeviceCgroup {
Allow: true,
Type: "c".to_string(),
Major: 136,
Minor: WILDCARD,
Access: "rwm".to_string(),
unknown_fields: UnknownFields::default(),
cached_size: CachedSize::default(),
allow: true,
r#type: "c".to_string(),
major: Some(136),
minor: Some(WILDCARD),
access: "rwm".to_string(),
});
v.push(LinuxDeviceCgroup {
Allow: true,
Type: "c".to_string(),
Major: 5,
Minor: 2,
Access: "rwm".to_string(),
unknown_fields: UnknownFields::default(),
cached_size: CachedSize::default(),
allow: true,
r#type: "c".to_string(),
major: Some(5),
minor: Some(2),
access: "rwm".to_string(),
});
v.push(LinuxDeviceCgroup {
Allow: true,
Type: "c".to_string(),
Major: 10,
Minor: 200,
Access: "rwm".to_string(),
unknown_fields: UnknownFields::default(),
cached_size: CachedSize::default(),
allow: true,
r#type: "c".to_string(),
major: Some(10),
minor: Some(200),
access: "rwm".to_string(),
});
v
@@ -219,7 +207,7 @@ fn parse_size(s: &str, m: &HashMap<String, u128>) -> Result<u128> {
fn custom_size(mut size: f64, base: f64, m: &Vec<String>) -> String {
let mut i = 0;
while size > base {
while size >= base && i < m.len() - 1 {
size /= base;
i += 1;
}
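The one-line fix above guards the unit-scaling loop: the old condition `while size > base` could step `i` past the end of the unit table for very large inputs. A self-contained sketch of the corrected behavior (the `format!` tail is an assumption; the real function formats elsewhere):

```rust
// Corrected unit-scaling loop: bounding the index with
// `i < units.len() - 1` clamps to the largest unit instead of
// indexing out of bounds on `units[i]`. Assumes a non-empty table.
fn custom_size(mut size: f64, base: f64, units: &[&str]) -> String {
    let mut i = 0;
    while size >= base && i < units.len() - 1 {
        size /= base;
        i += 1;
    }
    format!("{:.1}{}", size, units[i])
}

fn main() {
    let units = ["B", "KB", "MB"];
    assert_eq!(custom_size(512.0, 1024.0, &units), "512.0B");
    assert_eq!(custom_size(1536.0, 1024.0, &units), "1.5KB");
    // Larger than the table covers: clamps to the last unit.
    assert_eq!(
        custom_size(5.0 * 1024.0 * 1024.0 * 1024.0, 1024.0, &units),
        "5120.0MB"
    );
}
```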
@@ -319,7 +307,6 @@ where
T: ToString,
{
let p = format!("{}/{}", dir, file);
info!(sl!(), "{}", p.as_str());
fs::write(p.as_str(), v.to_string().as_bytes())?;
Ok(())
}
@@ -419,10 +406,10 @@ impl Subsystem for CpuSet {
let mut cpus: &str = "";
let mut mems: &str = "";
if r.CPU.is_some() {
let cpu = r.CPU.as_ref().unwrap();
cpus = cpu.Cpus.as_str();
mems = cpu.Mems.as_str();
if r.cpu.is_some() {
let cpu = r.cpu.as_ref().unwrap();
cpus = cpu.cpus.as_str();
mems = cpu.mems.as_str();
}
// For updatecontainer, just set the new value
@@ -466,17 +453,25 @@ impl Subsystem for Cpu {
}
fn set(&self, dir: &str, r: &LinuxResources, _update: bool) -> Result<()> {
if r.CPU.is_none() {
if r.cpu.is_none() {
return Ok(());
}
let cpu = r.CPU.as_ref().unwrap();
let cpu = r.cpu.as_ref().unwrap();
try_write_nonzero(dir, CPU_RT_PERIOD_US, cpu.RealtimePeriod as i128)?;
try_write_nonzero(dir, CPU_RT_RUNTIME_US, cpu.RealtimeRuntime as i128)?;
write_nonzero(dir, CPU_SHARES, cpu.Shares as i128)?;
write_nonzero(dir, CPU_CFS_QUOTA_US, cpu.Quota as i128)?;
write_nonzero(dir, CPU_CFS_PERIOD_US, cpu.Period as i128)?;
try_write_nonzero(
dir,
CPU_RT_PERIOD_US,
cpu.realtime_period.unwrap_or(0) as i128,
)?;
try_write_nonzero(
dir,
CPU_RT_RUNTIME_US,
cpu.realtime_runtime.unwrap_or(0) as i128,
)?;
write_nonzero(dir, CPU_SHARES, cpu.shares.unwrap_or(0) as i128)?;
write_nonzero(dir, CPU_CFS_QUOTA_US, cpu.quota.unwrap_or(0) as i128)?;
write_nonzero(dir, CPU_CFS_PERIOD_US, cpu.period.unwrap_or(0) as i128)?;
Ok(())
}
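The rewritten `Cpu::set` above shows the convention the renamed oci types force everywhere: numeric resources are now `Option`s, and `unwrap_or(0)` feeds a zero sentinel into the existing "skip when zero" write helpers. A small in-memory sketch (the `Vec<String>` sink is hypothetical; the real helper writes cgroup files):

```rust
// Hypothetical stand-in for write_nonzero: record "file=value" only
// when the value is nonzero, mirroring the cgroup write helpers that
// skip unset (zero) resources.
fn write_nonzero(out: &mut Vec<String>, file: &str, v: i128) {
    if v != 0 {
        out.push(format!("{}={}", file, v));
    }
}

fn main() {
    let mut writes = Vec::new();
    let shares: Option<u64> = Some(1024);
    let quota: Option<i64> = None; // unset -> 0 -> skipped
    write_nonzero(&mut writes, "cpu.shares", shares.unwrap_or(0) as i128);
    write_nonzero(&mut writes, "cpu.cfs_quota_us", quota.unwrap_or(0) as i128);
    assert_eq!(writes, vec!["cpu.shares=1024".to_string()]);
}
```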
@@ -599,24 +594,24 @@ impl CpuAcct {
}
fn write_device(d: &LinuxDeviceCgroup, dir: &str) -> Result<()> {
let file = if d.Allow { DEVICES_ALLOW } else { DEVICES_DENY };
let file = if d.allow { DEVICES_ALLOW } else { DEVICES_DENY };
let major = if d.Major == WILDCARD {
let major = if d.major.unwrap_or(0) == WILDCARD {
"*".to_string()
} else {
d.Major.to_string()
d.major.unwrap_or(0).to_string()
};
let minor = if d.Minor == WILDCARD {
let minor = if d.minor.unwrap_or(0) == WILDCARD {
"*".to_string()
} else {
d.Minor.to_string()
d.minor.unwrap_or(0).to_string()
};
let t = if d.Type.is_empty() {
let t = if d.r#type.is_empty() {
"a"
} else {
d.Type.as_str()
d.r#type.as_str()
};
let v = format!(
@@ -624,7 +619,7 @@ fn write_device(d: &LinuxDeviceCgroup, dir: &str) -> Result<()> {
t,
major.as_str(),
minor.as_str(),
d.Access.as_str()
d.access.as_str()
);
info!(sl!(), "{}", v.as_str());
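`write_device` above assembles the cgroup-v1 rule string written to `devices.allow` / `devices.deny`: `"<type> <major>:<minor> <access>"`, with the OCI wildcard (`-1`) rendered as `*` and an empty type falling back to `"a"` (all). A hypothetical helper capturing that formatting:

```rust
// Sketch of the devices.allow / devices.deny rule format built by
// write_device. WILDCARD mirrors the module's constant; the helper
// itself is illustrative, not the real function signature.
const WILDCARD: i64 = -1;

fn device_rule(r#type: &str, major: Option<i64>, minor: Option<i64>, access: &str) -> String {
    let fmt = |n: Option<i64>| {
        let n = n.unwrap_or(0);
        if n == WILDCARD { "*".to_string() } else { n.to_string() }
    };
    let t = if r#type.is_empty() { "a" } else { r#type };
    format!("{} {}:{} {}", t, fmt(major), fmt(minor), access)
}

fn main() {
    assert_eq!(device_rule("c", Some(WILDCARD), Some(WILDCARD), "m"), "c *:* m");
    assert_eq!(device_rule("c", Some(5), Some(1), "rwm"), "c 5:1 rwm");
    assert_eq!(device_rule("", None, None, "rwm"), "a 0:0 rwm");
}
```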
@@ -638,19 +633,17 @@ impl Subsystem for Devices {
}
fn set(&self, dir: &str, r: &LinuxResources, _update: bool) -> Result<()> {
for d in r.Devices.iter() {
for d in r.devices.iter() {
write_device(d, dir)?;
}
for d in DEFAULT_DEVICES.iter() {
let td = LinuxDeviceCgroup {
Allow: true,
Type: d.Type.clone(),
Major: d.Major,
Minor: d.Minor,
Access: "rwm".to_string(),
unknown_fields: UnknownFields::default(),
cached_size: CachedSize::default(),
allow: true,
r#type: d.r#type.clone(),
major: Some(d.major),
minor: Some(d.minor),
access: "rwm".to_string(),
};
write_device(&td, dir)?;
@@ -691,30 +684,34 @@ impl Subsystem for Memory {
}
fn set(&self, dir: &str, r: &LinuxResources, update: bool) -> Result<()> {
if r.Memory.is_none() {
if r.memory.is_none() {
return Ok(());
}
let memory = r.Memory.as_ref().unwrap();
let memory = r.memory.as_ref().unwrap();
// initialize kmem limits for accounting
if !update {
try_write(dir, KMEM_LIMIT, 1)?;
try_write(dir, KMEM_LIMIT, -1)?;
}
write_nonzero(dir, MEMORY_LIMIT, memory.Limit as i128)?;
write_nonzero(dir, MEMORY_SOFT_LIMIT, memory.Reservation as i128)?;
write_nonzero(dir, MEMORY_LIMIT, memory.limit.unwrap_or(0) as i128)?;
write_nonzero(
dir,
MEMORY_SOFT_LIMIT,
memory.reservation.unwrap_or(0) as i128,
)?;
try_write_nonzero(dir, MEMSW_LIMIT, memory.Swap as i128)?;
try_write_nonzero(dir, KMEM_LIMIT, memory.Kernel as i128)?;
try_write_nonzero(dir, MEMSW_LIMIT, memory.swap.unwrap_or(0) as i128)?;
try_write_nonzero(dir, KMEM_LIMIT, memory.kernel.unwrap_or(0) as i128)?;
write_nonzero(dir, KMEM_TCP_LIMIT, memory.KernelTCP as i128)?;
write_nonzero(dir, KMEM_TCP_LIMIT, memory.kernel_tcp.unwrap_or(0) as i128)?;
if memory.Swappiness <= 100 {
write_file(dir, SWAPPINESS, memory.Swappiness)?;
if memory.swapiness.unwrap_or(0) <= 100 {
write_file(dir, SWAPPINESS, memory.swapiness.unwrap_or(0))?;
}
if memory.DisableOOMKiller {
if memory.disable_oom_killer.unwrap_or(false) {
write_file(dir, OOM_CONTROL, 1)?;
}
@@ -808,14 +805,14 @@ impl Subsystem for Pids {
}
fn set(&self, dir: &str, r: &LinuxResources, _update: bool) -> Result<()> {
if r.Pids.is_none() {
if r.pids.is_none() {
return Ok(());
}
let pids = r.Pids.as_ref().unwrap();
let pids = r.pids.as_ref().unwrap();
let v = if pids.Limit > 0 {
pids.Limit.to_string()
let v = if pids.limit > 0 {
pids.limit.to_string()
} else {
"max".to_string()
};
@@ -857,14 +854,14 @@ impl Pids {
#[inline]
fn weight(d: &LinuxWeightDevice) -> (String, String) {
(
format!("{}:{} {}", d.Major, d.Minor, d.Weight),
format!("{}:{} {}", d.Major, d.Minor, d.LeafWeight),
format!("{:?} {:?}", d.blk, d.weight),
format!("{:?} {:?}", d.blk, d.leaf_weight),
)
}
#[inline]
fn rate(d: &LinuxThrottleDevice) -> String {
format!("{}:{} {}", d.Major, d.Minor, d.Rate)
format!("{:?} {}", d.blk, d.rate)
}
fn write_blkio_device<T: ToString>(dir: &str, file: &str, v: T) -> Result<()> {
@@ -895,34 +892,38 @@ impl Subsystem for Blkio {
}
fn set(&self, dir: &str, r: &LinuxResources, _update: bool) -> Result<()> {
if r.BlockIO.is_none() {
if r.block_io.is_none() {
return Ok(());
}
let blkio = r.BlockIO.as_ref().unwrap();
let blkio = r.block_io.as_ref().unwrap();
write_nonzero(dir, BLKIO_WEIGHT, blkio.Weight as i128)?;
write_nonzero(dir, BLKIO_LEAF_WEIGHT, blkio.LeafWeight as i128)?;
write_nonzero(dir, BLKIO_WEIGHT, blkio.weight.unwrap_or(0) as i128)?;
write_nonzero(
dir,
BLKIO_LEAF_WEIGHT,
blkio.leaf_weight.unwrap_or(0) as i128,
)?;
for d in blkio.WeightDevice.iter() {
for d in blkio.weight_device.iter() {
let (w, lw) = weight(d);
write_blkio_device(dir, BLKIO_WEIGHT_DEVICE, w)?;
write_blkio_device(dir, BLKIO_LEAF_WEIGHT_DEVICE, lw)?;
}
for d in blkio.ThrottleReadBpsDevice.iter() {
for d in blkio.throttle_read_bps_device.iter() {
write_blkio_device(dir, BLKIO_READ_BPS_DEVICE, rate(d))?;
}
for d in blkio.ThrottleWriteBpsDevice.iter() {
for d in blkio.throttle_write_bps_device.iter() {
write_blkio_device(dir, BLKIO_WRITE_BPS_DEVICE, rate(d))?;
}
for d in blkio.ThrottleReadIOPSDevice.iter() {
for d in blkio.throttle_read_iops_device.iter() {
write_blkio_device(dir, BLKIO_READ_IOPS_DEVICE, rate(d))?;
}
for d in blkio.ThrottleWriteIOPSDevice.iter() {
for d in blkio.throttle_write_iops_device.iter() {
write_blkio_device(dir, BLKIO_WRITE_IOPS_DEVICE, rate(d))?;
}
@@ -934,6 +935,11 @@ fn get_blkio_stat(dir: &str, file: &str) -> Result<RepeatedField<BlkioStatsEntry
let p = format!("{}/{}", dir, file);
let mut m = RepeatedField::new();
// follow runc: a missing stats file yields empty stats instead of an error
if !Path::new(&p).exists() {
return Ok(RepeatedField::new());
}
for l in fs::read_to_string(p.as_str())?.lines() {
let parts: Vec<&str> = l.split(' ').collect();
@@ -1010,9 +1016,9 @@ impl Subsystem for HugeTLB {
}
fn set(&self, dir: &str, r: &LinuxResources, _update: bool) -> Result<()> {
for l in r.HugepageLimits.iter() {
let file = format!("hugetlb.{}.limit_in_bytes", l.Pagesize);
write_file(dir, file.as_str(), l.Limit)?;
for l in r.hugepage_limits.iter() {
let file = format!("hugetlb.{}.limit_in_bytes", l.page_size);
write_file(dir, file.as_str(), l.limit)?;
}
Ok(())
}
@@ -1052,13 +1058,13 @@ impl Subsystem for NetCls {
}
fn set(&self, dir: &str, r: &LinuxResources, _update: bool) -> Result<()> {
if r.Network.is_none() {
if r.network.is_none() {
return Ok(());
}
let network = r.Network.as_ref().unwrap();
let network = r.network.as_ref().unwrap();
write_nonzero(dir, NET_CLS_CLASSID, network.ClassID as i128)?;
write_nonzero(dir, NET_CLS_CLASSID, network.class_id.unwrap_or(0) as i128)?;
Ok(())
}
@@ -1070,14 +1076,14 @@ impl Subsystem for NetPrio {
}
fn set(&self, dir: &str, r: &LinuxResources, _update: bool) -> Result<()> {
if r.Network.is_none() {
if r.network.is_none() {
return Ok(());
}
let network = r.Network.as_ref().unwrap();
let network = r.network.as_ref().unwrap();
for p in network.Priorities.iter() {
let prio = format!("{} {}", p.Name, p.Priority);
for p in network.priorities.iter() {
let prio = format!("{} {}", p.name, p.priority);
try_write_file(dir, NET_PRIO_IFPRIOMAP, prio)?;
}
@@ -1222,7 +1228,7 @@ fn get_all_procs(dir: &str) -> Result<Vec<i32>> {
Ok(m)
}
#[derive(Debug, Clone)]
#[derive(Serialize, Deserialize, Debug, Clone)]
pub struct Manager {
pub paths: HashMap<String, String>,
pub mounts: HashMap<String, String>,
@@ -1236,7 +1242,6 @@ pub const FROZEN: &'static str = "FROZEN";
impl CgroupManager for Manager {
fn apply(&self, pid: pid_t) -> Result<()> {
for (key, value) in &self.paths {
info!(sl!(), "apply cgroup {}", key);
apply(value, pid)?;
}
@@ -1247,7 +1252,6 @@ impl CgroupManager for Manager {
for (key, value) in &self.paths {
let _ = fs::create_dir_all(value);
let sub = get_subsystem(key)?;
info!(sl!(), "setting cgroup {}", key);
sub.set(value, spec, update)?;
}
@@ -1299,9 +1303,16 @@ impl CgroupManager for Manager {
};
// BlkioStats
// note that virtiofs has no blkio stats
info!(sl!(), "blkio_stats");
let blkio_stats = if self.paths.get("blkio").is_some() {
SingularPtrField::some(Blkio().get_stats(self.paths.get("blkio").unwrap())?)
match Blkio().get_stats(self.paths.get("blkio").unwrap()) {
Ok(stat) => SingularPtrField::some(stat),
Err(e) => {
warn!(sl!(), "failed to get blkio stats");
SingularPtrField::none()
}
}
} else {
SingularPtrField::none()
};
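The match above makes blkio stats collection error-tolerant: a failed read (virtiofs exposes no blkio stats) is logged and degrades to "no stats" instead of failing the whole stats call. A simplified sketch of that Result-to-Option pattern (names and return types here are illustrative, not the real agent API):

```rust
// Hypothetical stand-in for Blkio().get_stats: fails when no blkio
// cgroup path is available.
fn get_stats(path: &str) -> Result<u64, String> {
    if path.is_empty() {
        Err("no blkio cgroup".to_string())
    } else {
        Ok(42)
    }
}

// Mirror of the hunk's logic: absent path -> None; read error ->
// log a warning and None; success -> Some(stats).
fn blkio_stats(path: Option<&str>) -> Option<u64> {
    let path = path?;
    match get_stats(path) {
        Ok(stat) => Some(stat),
        Err(e) => {
            eprintln!("failed to get blkio stats: {}", e); // parallels warn!(sl!(), ...)
            None
        }
    }
}

fn main() {
    assert_eq!(blkio_stats(Some("/sys/fs/cgroup/blkio")), Some(42));
    assert_eq!(blkio_stats(Some("")), None);
    assert_eq!(blkio_stats(None), None);
}
```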


@@ -5,8 +5,8 @@
use crate::errors::*;
// use crate::configs::{FreezerState, Config};
use oci::LinuxResources;
use protocols::agent::CgroupStats;
use protocols::oci::LinuxResources;
use std::collections::HashMap;
pub mod fs;

File diff suppressed because it is too large


@@ -16,11 +16,13 @@ error_chain! {
Ffi(std::ffi::NulError);
Caps(caps::errors::Error);
Serde(serde_json::Error);
UTF8(std::string::FromUtf8Error);
FromUTF8(std::string::FromUtf8Error);
Parse(std::num::ParseIntError);
Scanfmt(scan_fmt::parse::ScanError);
Ip(std::net::AddrParseError);
Regex(regex::Error);
EnvVar(std::env::VarError);
UTF8(std::str::Utf8Error);
}
// define new errors
errors {


@@ -42,14 +42,14 @@ macro_rules! sl {
};
}
pub mod capabilities;
pub mod cgroups;
pub mod container;
pub mod errors;
pub mod mount;
pub mod process;
pub mod specconv;
// pub mod sync;
pub mod capabilities;
pub mod sync;
pub mod validator;
// pub mod factory;
@@ -66,9 +66,6 @@ pub mod validator;
// construct ociSpec from grpcSpec, which is needed for hook
// execution, since hooks read config.json
use std::collections::HashMap;
use std::mem::MaybeUninit;
use oci::{
Box as ociBox, Hooks as ociHooks, Linux as ociLinux, LinuxCapabilities as ociLinuxCapabilities,
Mount as ociMount, POSIXRlimit as ociPOSIXRlimit, Process as ociProcess, Root as ociRoot,
@@ -78,8 +75,10 @@ use protocols::oci::{
Hooks as grpcHooks, Linux as grpcLinux, Mount as grpcMount, Process as grpcProcess,
Root as grpcRoot, Spec as grpcSpec,
};
use std::collections::HashMap;
use std::mem::MaybeUninit;
fn process_grpc_to_oci(p: &grpcProcess) -> ociProcess {
pub fn process_grpc_to_oci(p: &grpcProcess) -> ociProcess {
let console_size = if p.ConsoleSize.is_some() {
let c = p.ConsoleSize.as_ref().unwrap();
Some(ociBox {
@@ -296,7 +295,7 @@ fn blockio_grpc_to_oci(blk: &grpcLinuxBlockIO) -> ociLinuxBlockIO {
}
}
fn resources_grpc_to_oci(res: &grpcLinuxResources) -> ociLinuxResources {
pub fn resources_grpc_to_oci(res: &grpcLinuxResources) -> ociLinuxResources {
let devices = {
let mut d = Vec::new();
for dev in res.Devices.iter() {


@@ -10,10 +10,11 @@ use nix::mount::{self, MntFlags, MsFlags};
use nix::sys::stat::{self, Mode, SFlag};
use nix::unistd::{self, Gid, Uid};
use nix::NixPath;
use protocols::oci::{LinuxDevice, Mount, Spec};
use oci::{LinuxDevice, Mount, Spec};
use std::collections::{HashMap, HashSet};
use std::fs::{self, OpenOptions};
use std::os::unix;
use std::os::unix::io::RawFd;
use std::path::{Path, PathBuf};
use path_absolutize::*;
@@ -23,11 +24,11 @@ use std::io::{BufRead, BufReader};
use crate::container::DEFAULT_DEVICES;
use crate::errors::*;
use crate::sync::write_count;
use lazy_static;
use std::string::ToString;
use protobuf::{CachedSize, RepeatedField, UnknownFields};
use slog::Logger;
use crate::log_child;
// Info reveals information about a particular mounted filesystem. This
// struct is populated from the content in the /proc/<pid>/mountinfo file.
@@ -98,7 +99,7 @@ lazy_static! {
}
pub fn init_rootfs(
logger: &Logger,
cfd_log: RawFd,
spec: &Spec,
cpath: &HashMap<String, String>,
mounts: &HashMap<String, String>,
@@ -108,14 +109,14 @@ pub fn init_rootfs(
lazy_static::initialize(&PROPAGATION);
lazy_static::initialize(&LINUXDEVICETYPE);
let linux = spec.Linux.as_ref().unwrap();
let linux = spec.linux.as_ref().unwrap();
let mut flags = MsFlags::MS_REC;
match PROPAGATION.get(&linux.RootfsPropagation.as_str()) {
match PROPAGATION.get(&linux.rootfs_propagation.as_str()) {
Some(fl) => flags |= *fl,
None => flags |= MsFlags::MS_SLAVE,
}
let rootfs = spec.Root.as_ref().unwrap().Path.as_str();
let rootfs = spec.root.as_ref().unwrap().path.as_str();
let root = fs::canonicalize(rootfs)?;
let rootfs = root.to_str().unwrap();
@@ -128,19 +129,19 @@ pub fn init_rootfs(
None::<&str>,
)?;
for m in &spec.Mounts {
for m in &spec.mounts {
let (mut flags, data) = parse_mount(&m);
if !m.destination.starts_with("/") || m.destination.contains("..") {
return Err(ErrorKind::Nix(nix::Error::Sys(Errno::EINVAL)).into());
}
if m.field_type == "cgroup" {
mount_cgroups(logger, m, rootfs, flags, &data, cpath, mounts)?;
if m.r#type == "cgroup" {
mount_cgroups(cfd_log, &m, rootfs, flags, &data, cpath, mounts)?;
} else {
if m.destination == "/dev" {
flags &= !MsFlags::MS_RDONLY;
}
mount_from(&m, &rootfs, flags, &data, "")?;
mount_from(cfd_log, &m, &rootfs, flags, &data, "")?;
}
}
@@ -148,7 +149,7 @@ pub fn init_rootfs(
unistd::chdir(rootfs)?;
default_symlinks()?;
create_devices(&linux.Devices, bind_device)?;
create_devices(&linux.devices, bind_device)?;
ensure_ptmx()?;
unistd::chdir(&olddir)?;
@@ -157,7 +158,7 @@ pub fn init_rootfs(
}
fn mount_cgroups(
logger: &Logger,
cfd_log: RawFd,
m: &Mount,
rootfs: &str,
flags: MsFlags,
@@ -168,16 +169,14 @@ fn mount_cgroups(
// mount tmpfs
let ctm = Mount {
source: "tmpfs".to_string(),
field_type: "tmpfs".to_string(),
r#type: "tmpfs".to_string(),
destination: m.destination.clone(),
options: RepeatedField::default(),
unknown_fields: UnknownFields::default(),
cached_size: CachedSize::default(),
options: Vec::new(),
};
let cflags = MsFlags::MS_NOEXEC | MsFlags::MS_NOSUID | MsFlags::MS_NODEV;
info!(logger, "tmpfs");
mount_from(&ctm, rootfs, cflags, "", "")?;
// info!(logger, "tmpfs");
mount_from(cfd_log, &ctm, rootfs, cflags, "", "")?;
let olddir = unistd::getcwd()?;
unistd::chdir(rootfs)?;
@@ -186,7 +185,7 @@ fn mount_cgroups(
// bind mount cgroups
for (key, mount) in mounts.iter() {
info!(logger, "{}", key);
log_child!(cfd_log, "mount cgroup subsystem {}", key);
let source = if cpath.get(key).is_some() {
cpath.get(key).unwrap()
} else {
@@ -213,36 +212,33 @@ fn mount_cgroups(
srcs.insert(source.to_string());
info!(logger, "{}", destination.as_str());
log_child!(cfd_log, "mount destination: {}", destination.as_str());
let bm = Mount {
source: source.to_string(),
field_type: "bind".to_string(),
r#type: "bind".to_string(),
destination: destination.clone(),
options: RepeatedField::default(),
unknown_fields: UnknownFields::default(),
cached_size: CachedSize::default(),
options: Vec::new(),
};
mount_from(
&bm,
rootfs,
flags | MsFlags::MS_REC | MsFlags::MS_BIND,
"",
"",
)?;
let mut mount_flags: MsFlags = flags | MsFlags::MS_REC | MsFlags::MS_BIND;
if key.contains("systemd") {
mount_flags &= !MsFlags::MS_RDONLY;
}
mount_from(cfd_log, &bm, rootfs, mount_flags, "", "")?;
if key != base {
let src = format!("{}/{}", m.destination.as_str(), key);
match unix::fs::symlink(destination.as_str(), &src[1..]) {
Err(e) => {
info!(
logger,
log_child!(
cfd_log,
"symlink: {} {} err: {}",
key,
destination.as_str(),
e.to_string()
);
return Err(e.into());
}
Ok(_) => {}
@@ -426,11 +422,18 @@ fn parse_mount(m: &Mount) -> (MsFlags, String) {
(flags, data.join(","))
}
fn mount_from(m: &Mount, rootfs: &str, flags: MsFlags, data: &str, _label: &str) -> Result<()> {
fn mount_from(
cfd_log: RawFd,
m: &Mount,
rootfs: &str,
flags: MsFlags,
data: &str,
_label: &str,
) -> Result<()> {
let d = String::from(data);
let dest = format!("{}{}", rootfs, &m.destination);
let src = if m.field_type.as_str() == "bind" {
let src = if m.r#type.as_str() == "bind" {
let src = fs::canonicalize(m.source.as_str())?;
let dir = if src.is_file() {
Path::new(&dest).parent().unwrap()
@@ -442,8 +445,8 @@ fn mount_from(m: &Mount, rootfs: &str, flags: MsFlags, data: &str, _label: &str)
match fs::create_dir_all(&dir) {
Ok(_) => {}
Err(e) => {
info!(
sl!(),
log_child!(
cfd_log,
"create dir {}: {}",
dir.to_str().unwrap(),
e.to_string()
@@ -461,8 +464,6 @@ fn mount_from(m: &Mount, rootfs: &str, flags: MsFlags, data: &str, _label: &str)
PathBuf::from(&m.source)
};
info!(sl!(), "{}, {}", src.to_str().unwrap(), dest.as_str());
// skip this check since some mount sources, such as tmpfs,
// are not directories.
/*
@@ -477,20 +478,25 @@ fn mount_from(m: &Mount, rootfs: &str, flags: MsFlags, data: &str, _label: &str)
match stat::stat(dest.as_str()) {
Ok(_) => {}
Err(e) => {
info!(sl!(), "{}: {}", dest.as_str(), e.as_errno().unwrap().desc());
log_child!(
cfd_log,
"{}: {}",
dest.as_str(),
e.as_errno().unwrap().desc()
);
}
}
match mount::mount(
Some(src.to_str().unwrap()),
dest.as_str(),
Some(m.field_type.as_str()),
Some(m.r#type.as_str()),
flags,
Some(d.as_str()),
) {
Ok(_) => {}
Err(e) => {
info!(sl!(), "mount error: {}", e.as_errno().unwrap().desc());
log_child!(cfd_log, "mount error: {}", e.as_errno().unwrap().desc());
return Err(e.into());
}
}
@@ -513,8 +519,8 @@ fn mount_from(m: &Mount, rootfs: &str, flags: MsFlags, data: &str, _label: &str)
None::<&str>,
) {
Err(e) => {
info!(
sl!(),
log_child!(
cfd_log,
"remount {}: {}",
dest.as_str(),
e.as_errno().unwrap().desc()
@@ -550,8 +556,8 @@ fn create_devices(devices: &[LinuxDevice], bind: bool) -> Result<()> {
op(dev)?;
}
for dev in devices {
if !dev.Path.starts_with("/dev") || dev.Path.contains("..") {
let msg = format!("{} is not a valid device path", dev.Path);
if !dev.path.starts_with("/dev") || dev.path.contains("..") {
let msg = format!("{} is not a valid device path", dev.path);
bail!(ErrorKind::ErrorCode(msg));
}
op(dev)?;
@@ -581,22 +587,22 @@ lazy_static! {
}
fn mknod_dev(dev: &LinuxDevice) -> Result<()> {
let f = match LINUXDEVICETYPE.get(dev.Type.as_str()) {
let f = match LINUXDEVICETYPE.get(dev.r#type.as_str()) {
Some(v) => v,
None => return Err(ErrorKind::ErrorCode("invalid spec".to_string()).into()),
};
stat::mknod(
&dev.Path[1..],
&dev.path[1..],
*f,
Mode::from_bits_truncate(dev.FileMode),
makedev(dev.Major as u64, dev.Minor as u64),
Mode::from_bits_truncate(dev.file_mode.unwrap_or(0)),
makedev(dev.major as u64, dev.minor as u64),
)?;
unistd::chown(
&dev.Path[1..],
Some(Uid::from_raw(dev.UID as uid_t)),
Some(Gid::from_raw(dev.GID as uid_t)),
&dev.path[1..],
Some(Uid::from_raw(dev.uid.unwrap_or(0) as uid_t)),
Some(Gid::from_raw(dev.gid.unwrap_or(0) as uid_t)),
)?;
Ok(())
@@ -604,7 +610,7 @@ fn mknod_dev(dev: &LinuxDevice) -> Result<()> {
fn bind_dev(dev: &LinuxDevice) -> Result<()> {
let fd = fcntl::open(
&dev.Path[1..],
&dev.path[1..],
OFlag::O_RDWR | OFlag::O_CREAT,
Mode::from_bits_truncate(0o644),
)?;
@@ -612,8 +618,8 @@ fn bind_dev(dev: &LinuxDevice) -> Result<()> {
unistd::close(fd)?;
mount::mount(
Some(&*dev.Path),
&dev.Path[1..],
Some(&*dev.path),
&dev.path[1..],
None::<&str>,
MsFlags::MS_BIND,
None::<&str>,
@@ -621,23 +627,23 @@ fn bind_dev(dev: &LinuxDevice) -> Result<()> {
Ok(())
}
pub fn finish_rootfs(spec: &Spec) -> Result<()> {
pub fn finish_rootfs(cfd_log: RawFd, spec: &Spec) -> Result<()> {
let olddir = unistd::getcwd()?;
info!(sl!(), "{}", olddir.to_str().unwrap());
log_child!(cfd_log, "old cwd: {}", olddir.to_str().unwrap());
unistd::chdir("/")?;
if spec.Linux.is_some() {
let linux = spec.Linux.as_ref().unwrap();
if spec.linux.is_some() {
let linux = spec.linux.as_ref().unwrap();
for path in linux.MaskedPaths.iter() {
for path in linux.masked_paths.iter() {
mask_path(path)?;
}
for path in linux.ReadonlyPaths.iter() {
for path in linux.readonly_paths.iter() {
readonly_path(path)?;
}
}
for m in spec.Mounts.iter() {
for m in spec.mounts.iter() {
if m.destination == "/dev" {
let (flags, _) = parse_mount(m);
if flags.contains(MsFlags::MS_RDONLY) {
@@ -652,7 +658,7 @@ pub fn finish_rootfs(spec: &Spec) -> Result<()> {
}
}
if spec.Root.as_ref().unwrap().Readonly {
if spec.root.as_ref().unwrap().readonly {
let flags = MsFlags::MS_BIND | MsFlags::MS_RDONLY | MsFlags::MS_NODEV | MsFlags::MS_REMOUNT;
mount::mount(Some("/"), "/", None::<&str>, flags, None::<&str>)?;


@@ -12,7 +12,7 @@ use std::os::unix::io::RawFd;
// use crate::cgroups::Manager as CgroupManager;
// use crate::intelrdt::Manager as RdtManager;
use nix::fcntl::OFlag;
use nix::fcntl::{fcntl, FcntlArg, OFlag};
use nix::sys::signal::{self, Signal};
use nix::sys::socket::{self, AddressFamily, SockFlag, SockType};
use nix::sys::wait::{self, WaitStatus};
@@ -20,7 +20,7 @@ use nix::unistd::{self, Pid};
use nix::Result;
use nix::Error;
use protocols::oci::Process as OCIProcess;
use oci::Process as OCIProcess;
use slog::Logger;
#[derive(Debug)]
@@ -34,10 +34,8 @@ pub struct Process {
pub extra_files: Vec<File>,
// pub caps: Capabilities,
// pub rlimits: Vec<Rlimit>,
pub console_socket: Option<RawFd>,
pub term_master: Option<RawFd>,
// parent end of fds
pub parent_console_socket: Option<RawFd>,
pub tty: bool,
pub parent_stdin: Option<RawFd>,
pub parent_stdout: Option<RawFd>,
pub parent_stderr: Option<RawFd>,
@@ -72,7 +70,13 @@ impl ProcessOperations for Process {
}
impl Process {
pub fn new(logger: &Logger, ocip: &OCIProcess, id: &str, init: bool) -> Result<Self> {
pub fn new(
logger: &Logger,
ocip: &OCIProcess,
id: &str,
init: bool,
pipe_size: i32,
) -> Result<Self> {
let logger = logger.new(o!("subsystem" => "process"));
let mut p = Process {
@@ -83,9 +87,8 @@ impl Process {
exit_pipe_w: None,
exit_pipe_r: None,
extra_files: Vec::new(),
console_socket: None,
tty: ocip.terminal,
term_master: None,
parent_console_socket: None,
parent_stdin: None,
parent_stdout: None,
parent_stderr: None,
@@ -98,44 +101,61 @@ impl Process {
info!(logger, "before create console socket!");
if ocip.Terminal {
let (psocket, csocket) = match socket::socketpair(
AddressFamily::Unix,
SockType::Stream,
None,
SockFlag::SOCK_CLOEXEC,
) {
Ok((u, v)) => (u, v),
Err(e) => {
match e {
Error::Sys(errno) => {
info!(logger, "socketpair: {}", errno.desc());
}
_ => {
info!(logger, "socketpair: other error!");
}
}
return Err(e);
}
};
p.parent_console_socket = Some(psocket);
p.console_socket = Some(csocket);
if !p.tty {
info!(logger, "created console socket!");
let (stdin, pstdin) = unistd::pipe2(OFlag::O_CLOEXEC)?;
p.parent_stdin = Some(pstdin);
p.stdin = Some(stdin);
let (pstdout, stdout) = create_extended_pipe(OFlag::O_CLOEXEC, pipe_size)?;
p.parent_stdout = Some(pstdout);
p.stdout = Some(stdout);
let (pstderr, stderr) = create_extended_pipe(OFlag::O_CLOEXEC, pipe_size)?;
p.parent_stderr = Some(pstderr);
p.stderr = Some(stderr);
}
info!(logger, "created console socket!");
let (stdin, pstdin) = unistd::pipe2(OFlag::O_CLOEXEC)?;
p.parent_stdin = Some(pstdin);
p.stdin = Some(stdin);
let (pstdout, stdout) = unistd::pipe2(OFlag::O_CLOEXEC)?;
p.parent_stdout = Some(pstdout);
p.stdout = Some(stdout);
let (pstderr, stderr) = unistd::pipe2(OFlag::O_CLOEXEC)?;
p.parent_stderr = Some(pstderr);
p.stderr = Some(stderr);
Ok(p)
}
}
fn create_extended_pipe(flags: OFlag, pipe_size: i32) -> Result<(RawFd, RawFd)> {
let (r, w) = unistd::pipe2(flags)?;
if pipe_size > 0 {
fcntl(w, FcntlArg::F_SETPIPE_SZ(pipe_size))?;
}
Ok((r, w))
}
#[cfg(test)]
mod tests {
use crate::process::create_extended_pipe;
use nix::fcntl::{fcntl, FcntlArg, OFlag};
use std::fs;
use std::os::unix::io::RawFd;
fn get_pipe_max_size() -> i32 {
fs::read_to_string("/proc/sys/fs/pipe-max-size")
.unwrap()
.trim()
.parse::<i32>()
.unwrap()
}
fn get_pipe_size(fd: RawFd) -> i32 {
fcntl(fd, FcntlArg::F_GETPIPE_SZ).unwrap()
}
#[test]
fn test_create_extended_pipe() {
// Test the default
let (r, w) = create_extended_pipe(OFlag::O_CLOEXEC, 0).unwrap();
// Test setting to the max size
let max_size = get_pipe_max_size();
let (r, w) = create_extended_pipe(OFlag::O_CLOEXEC, max_size).unwrap();
let actual_size = get_pipe_size(w);
assert_eq!(max_size, actual_size);
}
}


@@ -3,7 +3,7 @@
// SPDX-License-Identifier: Apache-2.0
//
use protocols::oci::Spec;
use oci::Spec;
// use crate::configs::namespaces;
// use crate::configs::device::Device;


@@ -0,0 +1,177 @@
// Copyright (c) 2019 Ant Financial
//
// SPDX-License-Identifier: Apache-2.0
//
use crate::errors::*;
use nix::errno::Errno;
use nix::unistd;
use nix::Error;
use std::mem;
use std::os::unix::io::RawFd;
pub const SYNC_SUCCESS: i32 = 1;
pub const SYNC_FAILED: i32 = 2;
pub const SYNC_DATA: i32 = 3;
const DATA_SIZE: usize = 100;
const MSG_SIZE: usize = mem::size_of::<i32>();
#[macro_export]
macro_rules! log_child {
($fd:expr, $($arg:tt)+) => ({
let lfd = $fd;
let mut log_str = format_args!($($arg)+).to_string();
log_str.push('\n');
write_count(lfd, log_str.as_bytes(), log_str.len());
})
}
pub fn write_count(fd: RawFd, buf: &[u8], count: usize) -> Result<usize> {
let mut len = 0;
loop {
match unistd::write(fd, &buf[len..]) {
Ok(l) => {
len += l;
if len == count {
break;
}
}
Err(e) => {
if e != Error::from_errno(Errno::EINTR) {
return Err(e.into());
}
}
}
}
Ok(len)
}
fn read_count(fd: RawFd, count: usize) -> Result<Vec<u8>> {
let mut v: Vec<u8> = vec![0; count];
let mut len = 0;
loop {
match unistd::read(fd, &mut v[len..]) {
Ok(l) => {
len += l;
if len == count || l == 0 {
break;
}
}
Err(e) => {
if e != Error::from_errno(Errno::EINTR) {
return Err(e.into());
}
}
}
}
Ok(v[0..len].to_vec())
}
pub fn read_sync(fd: RawFd) -> Result<Vec<u8>> {
let buf = read_count(fd, MSG_SIZE)?;
if buf.len() != MSG_SIZE {
return Err(ErrorKind::ErrorCode(format!(
"process: {} failed to receive sync message from peer: got msg length: {}, expected: {}",
std::process::id(),
buf.len(),
MSG_SIZE
))
.into());
}
let buf_array: [u8; MSG_SIZE] = [buf[0], buf[1], buf[2], buf[3]];
let msg: i32 = i32::from_be_bytes(buf_array);
match msg {
SYNC_SUCCESS => return Ok(Vec::new()),
SYNC_DATA => {
let buf = read_count(fd, MSG_SIZE)?;
let buf_array: [u8; MSG_SIZE] = [buf[0], buf[1], buf[2], buf[3]];
let msg_length: i32 = i32::from_be_bytes(buf_array);
let data_buf = read_count(fd, msg_length as usize)?;
return Ok(data_buf);
}
SYNC_FAILED => {
let mut error_buf = vec![];
loop {
let buf = read_count(fd, DATA_SIZE)?;
error_buf.extend(&buf);
if DATA_SIZE == buf.len() {
continue;
} else {
break;
}
}
let error_str = match std::str::from_utf8(&error_buf) {
Ok(v) => v,
Err(e) => {
return Err(ErrorKind::ErrorCode(format!(
"receive error message from child process failed: {:?}",
e
))
.into())
}
};
return Err(ErrorKind::ErrorCode(String::from(error_str)).into());
}
_ => return Err(ErrorKind::ErrorCode("error in receive sync message".to_string()).into()),
}
}
pub fn write_sync(fd: RawFd, msg_type: i32, data_str: &str) -> Result<()> {
let buf = msg_type.to_be_bytes();
let count = write_count(fd, &buf, MSG_SIZE)?;
if count != MSG_SIZE {
return Err(ErrorKind::ErrorCode("error in send sync message".to_string()).into());
}
match msg_type {
SYNC_FAILED => match write_count(fd, data_str.as_bytes(), data_str.len()) {
Ok(_count) => unistd::close(fd)?,
Err(e) => {
unistd::close(fd)?;
return Err(ErrorKind::ErrorCode(format!(
"error in send message to process: {:?}",
e
))
.into());
}
},
SYNC_DATA => {
let length: i32 = data_str.len() as i32;
match write_count(fd, &length.to_be_bytes(), MSG_SIZE) {
Ok(_count) => (),
Err(e) => {
unistd::close(fd)?;
return Err(ErrorKind::ErrorCode(format!(
"error in send message to process: {:?}",
e
))
.into());
}
}
match write_count(fd, data_str.as_bytes(), data_str.len()) {
Ok(_count) => (),
Err(e) => {
unistd::close(fd)?;
return Err(ErrorKind::ErrorCode(format!(
"error in send message to process: {:?}",
e
))
.into());
}
}
}
_ => (),
};
Ok(())
}
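Taken together, `read_sync` and `write_sync` above implement a small framed protocol over the sync pipe: a 4-byte big-endian message type, followed (for data messages) by a 4-byte big-endian length and the payload. A minimal self-contained sketch of that framing, without the pipe I/O; the constant values here are illustrative placeholders, not the agent's actual definitions:

```rust
use std::convert::TryInto;

// Placeholder constants; the agent defines its own values.
const MSG_SIZE: usize = 4;
const SYNC_DATA: i32 = 2;

// Frame a payload as: [type: 4 bytes BE][length: 4 bytes BE][payload].
fn encode_data(payload: &[u8]) -> Vec<u8> {
    let mut buf = Vec::with_capacity(2 * MSG_SIZE + payload.len());
    buf.extend_from_slice(&SYNC_DATA.to_be_bytes());
    buf.extend_from_slice(&(payload.len() as i32).to_be_bytes());
    buf.extend_from_slice(payload);
    buf
}

// Recover the payload from a frame, rejecting short or non-data frames.
fn decode_data(buf: &[u8]) -> Option<Vec<u8>> {
    let ty = i32::from_be_bytes(buf.get(0..MSG_SIZE)?.try_into().ok()?);
    if ty != SYNC_DATA {
        return None;
    }
    let len = i32::from_be_bytes(buf.get(MSG_SIZE..2 * MSG_SIZE)?.try_into().ok()?) as usize;
    buf.get(2 * MSG_SIZE..2 * MSG_SIZE + len).map(|s| s.to_vec())
}
```

In the real code the parent and child each hold one end of the pipe; `write_sync(fd, SYNC_DATA, s)` emits exactly this frame and `read_sync(fd)` consumes it.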


@@ -1,16 +1,21 @@
// Copyright (c) 2019 Ant Financial
//
// SPDX-License-Identifier: Apache-2.0
//
use crate::container::Config;
use crate::errors::*;
use lazy_static;
use nix::errno::Errno;
use nix::Error;
use oci::{LinuxIDMapping, LinuxNamespace, Spec};
use protobuf::RepeatedField;
use protocols::oci::{LinuxIDMapping, LinuxNamespace, Spec};
use std::collections::HashMap;
use std::path::{Component, PathBuf};
fn contain_namespace(nses: &RepeatedField<LinuxNamespace>, key: &str) -> bool {
fn contain_namespace(nses: &Vec<LinuxNamespace>, key: &str) -> bool {
for ns in nses {
if ns.Type.as_str() == key {
if ns.r#type.as_str() == key {
return true;
}
}
@@ -18,10 +23,10 @@ fn contain_namespace(nses: &RepeatedField<LinuxNamespace>, key: &str) -> bool {
false
}
fn get_namespace_path(nses: &RepeatedField<LinuxNamespace>, key: &str) -> Result<String> {
fn get_namespace_path(nses: &Vec<LinuxNamespace>, key: &str) -> Result<String> {
for ns in nses {
if ns.Type.as_str() == key {
return Ok(ns.Path.clone());
if ns.r#type.as_str() == key {
return Ok(ns.path.clone());
}
}
@@ -71,15 +76,15 @@ fn network(_oci: &Spec) -> Result<()> {
}
fn hostname(oci: &Spec) -> Result<()> {
if oci.Hostname.is_empty() || oci.Hostname == "".to_string() {
if oci.hostname.is_empty() || oci.hostname == "".to_string() {
return Ok(());
}
if oci.Linux.is_none() {
if oci.linux.is_none() {
return Err(ErrorKind::Nix(Error::from_errno(Errno::EINVAL)).into());
}
let linux = oci.Linux.as_ref().unwrap();
if !contain_namespace(&linux.Namespaces, "uts") {
let linux = oci.linux.as_ref().unwrap();
if !contain_namespace(&linux.namespaces, "uts") {
return Err(ErrorKind::Nix(Error::from_errno(Errno::EINVAL)).into());
}
@@ -87,12 +92,12 @@ fn hostname(oci: &Spec) -> Result<()> {
}
fn security(oci: &Spec) -> Result<()> {
let linux = oci.Linux.as_ref().unwrap();
if linux.MaskedPaths.len() == 0 && linux.ReadonlyPaths.len() == 0 {
let linux = oci.linux.as_ref().unwrap();
if linux.masked_paths.len() == 0 && linux.readonly_paths.len() == 0 {
return Ok(());
}
if !contain_namespace(&linux.Namespaces, "mount") {
if !contain_namespace(&linux.namespaces, "mount") {
return Err(ErrorKind::Nix(Error::from_errno(Errno::EINVAL)).into());
}
@@ -101,9 +106,9 @@ fn security(oci: &Spec) -> Result<()> {
Ok(())
}
fn idmapping(maps: &RepeatedField<LinuxIDMapping>) -> Result<()> {
fn idmapping(maps: &Vec<LinuxIDMapping>) -> Result<()> {
for map in maps {
if map.Size > 0 {
if map.size > 0 {
return Ok(());
}
}
@@ -112,19 +117,19 @@ fn idmapping(maps: &RepeatedField<LinuxIDMapping>) -> Result<()> {
}
fn usernamespace(oci: &Spec) -> Result<()> {
let linux = oci.Linux.as_ref().unwrap();
if contain_namespace(&linux.Namespaces, "user") {
let linux = oci.linux.as_ref().unwrap();
if contain_namespace(&linux.namespaces, "user") {
let user_ns = PathBuf::from("/proc/self/ns/user");
if !user_ns.exists() {
return Err(ErrorKind::ErrorCode("user namespace not supported!".to_string()).into());
}
// Check that the idmappings are valid; idmaps with a zero
// size have been observed being passed to the agent.
idmapping(&linux.UIDMappings)?;
idmapping(&linux.GIDMappings)?;
idmapping(&linux.uid_mappings)?;
idmapping(&linux.gid_mappings)?;
} else {
// no user namespace but idmap
if linux.UIDMappings.len() != 0 || linux.GIDMappings.len() != 0 {
if linux.uid_mappings.len() != 0 || linux.gid_mappings.len() != 0 {
return Err(ErrorKind::Nix(Error::from_errno(Errno::EINVAL)).into());
}
}
@@ -133,8 +138,8 @@ fn usernamespace(oci: &Spec) -> Result<()> {
}
fn cgroupnamespace(oci: &Spec) -> Result<()> {
let linux = oci.Linux.as_ref().unwrap();
if contain_namespace(&linux.Namespaces, "cgroup") {
let linux = oci.linux.as_ref().unwrap();
if contain_namespace(&linux.namespaces, "cgroup") {
let path = PathBuf::from("/proc/self/ns/cgroup");
if !path.exists() {
return Err(ErrorKind::ErrorCode("cgroup unsupported!".to_string()).into());
@@ -178,10 +183,10 @@ fn check_host_ns(path: &str) -> Result<()> {
}
fn sysctl(oci: &Spec) -> Result<()> {
let linux = oci.Linux.as_ref().unwrap();
for (key, _) in linux.Sysctl.iter() {
let linux = oci.linux.as_ref().unwrap();
for (key, _) in linux.sysctl.iter() {
if SYSCTLS.contains_key(key.as_str()) || key.starts_with("fs.mqueue.") {
if contain_namespace(&linux.Namespaces, "ipc") {
if contain_namespace(&linux.namespaces, "ipc") {
continue;
} else {
return Err(ErrorKind::Nix(Error::from_errno(Errno::EINVAL)).into());
@@ -189,11 +194,11 @@ fn sysctl(oci: &Spec) -> Result<()> {
}
if key.starts_with("net.") {
if !contain_namespace(&linux.Namespaces, "network") {
if !contain_namespace(&linux.namespaces, "network") {
return Err(ErrorKind::Nix(Error::from_errno(Errno::EINVAL)).into());
}
let net = get_namespace_path(&linux.Namespaces, "network")?;
let net = get_namespace_path(&linux.namespaces, "network")?;
if net.is_empty() || net == "".to_string() {
continue;
}
@@ -201,7 +206,7 @@ fn sysctl(oci: &Spec) -> Result<()> {
check_host_ns(net.as_str())?;
}
if contain_namespace(&linux.Namespaces, "uts") {
if contain_namespace(&linux.namespaces, "uts") {
if key == "kernel.domainname" {
continue;
}
@@ -217,21 +222,21 @@ fn sysctl(oci: &Spec) -> Result<()> {
}
fn rootless_euid_mapping(oci: &Spec) -> Result<()> {
let linux = oci.Linux.as_ref().unwrap();
if !contain_namespace(&linux.Namespaces, "user") {
let linux = oci.linux.as_ref().unwrap();
if !contain_namespace(&linux.namespaces, "user") {
return Err(ErrorKind::Nix(Error::from_errno(Errno::EINVAL)).into());
}
if linux.UIDMappings.len() == 0 || linux.GIDMappings.len() == 0 {
if linux.uid_mappings.len() == 0 || linux.gid_mappings.len() == 0 {
return Err(ErrorKind::Nix(Error::from_errno(Errno::EINVAL)).into());
}
Ok(())
}
fn has_idmapping(maps: &RepeatedField<LinuxIDMapping>, id: u32) -> bool {
fn has_idmapping(maps: &Vec<LinuxIDMapping>, id: u32) -> bool {
for map in maps {
if id >= map.ContainerID && id < map.ContainerID + map.Size {
if id >= map.container_id && id < map.container_id + map.size {
return true;
}
}
@@ -239,9 +244,9 @@ fn has_idmapping(maps: &RepeatedField<LinuxIDMapping>, id: u32) -> bool {
}
fn rootless_euid_mount(oci: &Spec) -> Result<()> {
let linux = oci.Linux.as_ref().unwrap();
let linux = oci.linux.as_ref().unwrap();
for mnt in oci.Mounts.iter() {
for mnt in oci.mounts.iter() {
for opt in mnt.options.iter() {
if opt.starts_with("uid=") || opt.starts_with("gid=") {
let fields: Vec<&str> = opt.split('=').collect();
@@ -253,13 +258,13 @@ fn rootless_euid_mount(oci: &Spec) -> Result<()> {
let id = fields[1].trim().parse::<u32>()?;
if opt.starts_with("uid=") {
if !has_idmapping(&linux.UIDMappings, id) {
if !has_idmapping(&linux.uid_mappings, id) {
return Err(ErrorKind::Nix(Error::from_errno(Errno::EINVAL)).into());
}
}
if opt.starts_with("gid=") {
if !has_idmapping(&linux.GIDMappings, id) {
if !has_idmapping(&linux.gid_mappings, id) {
return Err(ErrorKind::Nix(Error::from_errno(Errno::EINVAL)).into());
}
}
@@ -279,14 +284,14 @@ pub fn validate(conf: &Config) -> Result<()> {
lazy_static::initialize(&SYSCTLS);
let oci = conf.spec.as_ref().unwrap();
if oci.Linux.is_none() {
if oci.linux.is_none() {
return Err(ErrorKind::Nix(Error::from_errno(Errno::EINVAL)).into());
}
if oci.Root.is_none() {
if oci.root.is_none() {
return Err(ErrorKind::Nix(Error::from_errno(Errno::EINVAL)).into());
}
let root = oci.Root.get_ref().Path.as_str();
let root = oci.root.as_ref().unwrap().path.as_str();
rootfs(root)?;
network(oci)?;


@@ -12,9 +12,11 @@ const LOG_LEVEL_OPTION: &str = "agent.log";
const HOTPLUG_TIMOUT_OPTION: &str = "agent.hotplug_timeout";
const DEBUG_CONSOLE_VPORT_OPTION: &str = "agent.debug_console_vport";
const LOG_VPORT_OPTION: &str = "agent.log_vport";
const CONTAINER_PIPE_SIZE_OPTION: &str = "agent.container_pipe_size";
const DEFAULT_LOG_LEVEL: slog::Level = slog::Level::Info;
const DEFAULT_HOTPLUG_TIMEOUT: time::Duration = time::Duration::from_secs(3);
const DEFAULT_CONTAINER_PIPE_SIZE: i32 = 0;
// FIXME: unused
const TRACE_MODE_FLAG: &str = "agent.trace";
@@ -28,6 +30,7 @@ pub struct agentConfig {
pub hotplug_timeout: time::Duration,
pub debug_console_vport: i32,
pub log_vport: i32,
pub container_pipe_size: i32,
}
impl agentConfig {
@@ -39,6 +42,7 @@ impl agentConfig {
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
debug_console_vport: 0,
log_vport: 0,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
}
}
@@ -80,6 +84,11 @@ impl agentConfig {
self.log_vport = port;
}
}
if param.starts_with(format!("{}=", CONTAINER_PIPE_SIZE_OPTION).as_str()) {
let container_pipe_size = get_container_pipe_size(param)?;
self.container_pipe_size = container_pipe_size
}
}
Ok(())
@@ -156,6 +165,40 @@ fn get_hotplug_timeout(param: &str) -> Result<time::Duration> {
Ok(time::Duration::from_secs(value.unwrap()))
}
fn get_container_pipe_size(param: &str) -> Result<i32> {
let fields: Vec<&str> = param.split("=").collect();
if fields.len() != 2 {
return Err(
ErrorKind::ErrorCode(String::from("invalid container pipe size parameter")).into(),
);
}
let key = fields[0];
if key != CONTAINER_PIPE_SIZE_OPTION {
return Err(
ErrorKind::ErrorCode(String::from("invalid container pipe size key name")).into(),
);
}
let res = fields[1].parse::<i32>();
if res.is_err() {
return Err(
ErrorKind::ErrorCode(String::from("unable to parse container pipe size")).into(),
);
}
let value = res.unwrap();
if value < 0 {
return Err(ErrorKind::ErrorCode(String::from(
"container pipe size should not be negative",
))
.into());
}
Ok(value)
}
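`get_container_pipe_size` follows the same `key=value` parsing pattern as the other `agent.*` kernel-command-line options. A generic sketch of that pattern; the helper name is ours, and the real helpers return detailed errors rather than `Option`:

```rust
// Parse one "key=value" kernel-command-line parameter into a
// non-negative i32. Returns None when the key does not match,
// the value fails to parse, or the value is negative.
fn parse_i32_option(param: &str, key: &str) -> Option<i32> {
    let mut fields = param.splitn(2, '=');
    if fields.next()? != key {
        return None;
    }
    let value: i32 = fields.next()?.trim().parse().ok()?;
    if value < 0 {
        return None;
    }
    Some(value)
}
```

Splitting with `splitn(2, '=')` keeps any further `=` characters inside the value, which is why the real code checks the field count rather than splitting greedily.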
#[cfg(test)]
mod tests {
use super::*;
@@ -172,6 +215,11 @@ mod tests {
const ERR_INVALID_HOTPLUG_TIMEOUT_PARAM: &str = "unable to parse hotplug timeout";
const ERR_INVALID_HOTPLUG_TIMEOUT_KEY: &str = "invalid hotplug timeout key name";
const ERR_INVALID_CONTAINER_PIPE_SIZE: &str = "invalid container pipe size parameter";
const ERR_INVALID_CONTAINER_PIPE_SIZE_PARAM: &str = "unable to parse container pipe size";
const ERR_INVALID_CONTAINER_PIPE_SIZE_KEY: &str = "invalid container pipe size key name";
const ERR_INVALID_CONTAINER_PIPE_NEGATIVE: &str = "container pipe size should not be negative";
// helper function to make errors less crazy-long
fn make_err(desc: &str) -> Error {
ErrorKind::ErrorCode(desc.to_string()).into()
@@ -218,6 +266,7 @@ mod tests {
dev_mode: bool,
log_level: slog::Level,
hotplug_timeout: time::Duration,
container_pipe_size: i32,
}
let tests = &[
@@ -227,6 +276,7 @@ mod tests {
dev_mode: true,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: "agent.debug_console agent.devmodex",
@@ -234,6 +284,7 @@ mod tests {
dev_mode: false,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: "agent.logx=debug",
@@ -241,6 +292,7 @@ mod tests {
dev_mode: false,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: "agent.log=debug",
@@ -248,6 +300,7 @@ mod tests {
dev_mode: false,
log_level: slog::Level::Debug,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: "",
@@ -255,6 +308,7 @@ mod tests {
dev_mode: false,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: "foo",
@@ -262,6 +316,7 @@ mod tests {
dev_mode: false,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: "foo bar",
@@ -269,6 +324,7 @@ mod tests {
dev_mode: false,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: "foo bar",
@@ -276,6 +332,7 @@ mod tests {
dev_mode: false,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: "foo agent bar",
@@ -283,6 +340,7 @@ mod tests {
dev_mode: false,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: "foo debug_console agent bar devmode",
@@ -290,6 +348,7 @@ mod tests {
dev_mode: false,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: "agent.debug_console",
@@ -297,6 +356,7 @@ mod tests {
dev_mode: false,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: " agent.debug_console ",
@@ -304,6 +364,7 @@ mod tests {
dev_mode: false,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: "agent.debug_console foo",
@@ -311,6 +372,7 @@ mod tests {
dev_mode: false,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: " agent.debug_console foo",
@@ -318,6 +380,7 @@ mod tests {
dev_mode: false,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: "foo agent.debug_console bar",
@@ -325,6 +388,7 @@ mod tests {
dev_mode: false,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: "foo agent.debug_console",
@@ -332,6 +396,7 @@ mod tests {
dev_mode: false,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: "foo agent.debug_console ",
@@ -339,6 +404,7 @@ mod tests {
dev_mode: false,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: "agent.devmode",
@@ -346,6 +412,7 @@ mod tests {
dev_mode: true,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: " agent.devmode ",
@@ -353,6 +420,7 @@ mod tests {
dev_mode: true,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: "agent.devmode foo",
@@ -360,6 +428,7 @@ mod tests {
dev_mode: true,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: " agent.devmode foo",
@@ -367,6 +436,7 @@ mod tests {
dev_mode: true,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: "foo agent.devmode bar",
@@ -374,6 +444,7 @@ mod tests {
dev_mode: true,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: "foo agent.devmode",
@@ -381,6 +452,7 @@ mod tests {
dev_mode: true,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: "foo agent.devmode ",
@@ -388,6 +460,7 @@ mod tests {
dev_mode: true,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: "agent.devmode agent.debug_console",
@@ -395,6 +468,7 @@ mod tests {
dev_mode: true,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: "agent.devmode agent.debug_console agent.hotplug_timeout=100",
@@ -402,6 +476,7 @@ mod tests {
dev_mode: true,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: time::Duration::from_secs(100),
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: "agent.devmode agent.debug_console agent.hotplug_timeout=0",
@@ -409,6 +484,39 @@ mod tests {
dev_mode: true,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: "agent.devmode agent.debug_console agent.container_pipe_size=2097152",
debug_console: true,
dev_mode: true,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: 2097152,
},
TestData {
contents: "agent.devmode agent.debug_console agent.container_pipe_size=100",
debug_console: true,
dev_mode: true,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: 100,
},
TestData {
contents: "agent.devmode agent.debug_console agent.container_pipe_size=0",
debug_console: true,
dev_mode: true,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
TestData {
contents: "agent.devmode agent.debug_console agent.container_pip_siz=100",
debug_console: true,
dev_mode: true,
log_level: DEFAULT_LOG_LEVEL,
hotplug_timeout: DEFAULT_HOTPLUG_TIMEOUT,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
},
];
@@ -438,9 +546,15 @@ mod tests {
.expect(&format!("{}: failed to write file contents", msg));
let mut config = agentConfig::new();
assert!(config.debug_console == false, msg);
assert!(config.dev_mode == false, msg);
assert!(config.hotplug_timeout == time::Duration::from_secs(3), msg);
assert_eq!(config.debug_console, false, "{}", msg);
assert_eq!(config.dev_mode, false, "{}", msg);
assert_eq!(
config.hotplug_timeout,
time::Duration::from_secs(3),
"{}",
msg
);
assert_eq!(config.container_pipe_size, 0, "{}", msg);
let result = config.parse_cmdline(filename);
assert!(result.is_ok(), "{}", msg);
@@ -449,6 +563,7 @@ mod tests {
assert_eq!(d.dev_mode, config.dev_mode, "{}", msg);
assert_eq!(d.log_level, config.log_level, "{}", msg);
assert_eq!(d.hotplug_timeout, config.hotplug_timeout, "{}", msg);
assert_eq!(d.container_pipe_size, config.container_pipe_size, "{}", msg);
}
}
@@ -689,4 +804,78 @@ mod tests {
assert_result!(d.result, result, format!("{}", msg));
}
}
#[test]
fn test_get_container_pipe_size() {
#[derive(Debug)]
struct TestData<'a> {
param: &'a str,
result: Result<i32>,
}
let tests = &[
TestData {
param: "",
result: Err(make_err(ERR_INVALID_CONTAINER_PIPE_SIZE)),
},
TestData {
param: "agent.container_pipe_size",
result: Err(make_err(ERR_INVALID_CONTAINER_PIPE_SIZE)),
},
TestData {
param: "foo=bar",
result: Err(make_err(ERR_INVALID_CONTAINER_PIPE_SIZE_KEY)),
},
TestData {
param: "agent.container_pip_siz=1",
result: Err(make_err(ERR_INVALID_CONTAINER_PIPE_SIZE_KEY)),
},
TestData {
param: "agent.container_pipe_size=1",
result: Ok(1),
},
TestData {
param: "agent.container_pipe_size=3",
result: Ok(3),
},
TestData {
param: "agent.container_pipe_size=2097152",
result: Ok(2097152),
},
TestData {
param: "agent.container_pipe_size=0",
result: Ok(0),
},
TestData {
param: "agent.container_pipe_size=-1",
result: Err(make_err(ERR_INVALID_CONTAINER_PIPE_NEGATIVE)),
},
TestData {
param: "agent.container_pipe_size=foobar",
result: Err(make_err(ERR_INVALID_CONTAINER_PIPE_SIZE_PARAM)),
},
TestData {
param: "agent.container_pipe_size=j",
result: Err(make_err(ERR_INVALID_CONTAINER_PIPE_SIZE_PARAM)),
},
TestData {
param: "agent.container_pipe_size=4jbsdja",
result: Err(make_err(ERR_INVALID_CONTAINER_PIPE_SIZE_PARAM)),
},
TestData {
param: "agent.container_pipe_size=4294967296",
result: Err(make_err(ERR_INVALID_CONTAINER_PIPE_SIZE_PARAM)),
},
];
for (i, d) in tests.iter().enumerate() {
let msg = format!("test[{}]: {:?}", i, d);
let result = get_container_pipe_size(d.param);
let msg = format!("{}: result: {:?}", msg, result);
assert_result!(d.result, result, format!("{}", msg));
}
}
}


@@ -14,8 +14,8 @@ use crate::linux_abi::*;
use crate::mount::{DRIVERBLKTYPE, DRIVERMMIOBLKTYPE, DRIVERNVDIMMTYPE, DRIVERSCSITYPE};
use crate::sandbox::Sandbox;
use crate::{AGENT_CONFIG, GLOBAL_DEVICE_WATCHER};
use oci::Spec;
use protocols::agent::Device;
use protocols::oci::Spec;
use rustjail::errors::*;
// Convenience macro to obtain the scope logger
@@ -207,7 +207,7 @@ fn update_spec_device_list(device: &Device, spec: &mut Spec) -> Result<()> {
.into());
}
let linux = match spec.Linux.as_mut() {
let linux = match spec.linux.as_mut() {
None => {
return Err(
ErrorKind::ErrorCode("Spec didn't contain linux field".to_string()).into(),
@@ -232,14 +232,14 @@ fn update_spec_device_list(device: &Device, spec: &mut Spec) -> Result<()> {
"got the device: dev_path: {}, major: {}, minor: {}\n", &device.vm_path, major_id, minor_id
);
let devices = linux.Devices.as_mut_slice();
let devices = linux.devices.as_mut_slice();
for dev in devices.iter_mut() {
if dev.Path == device.container_path {
let host_major = dev.Major;
let host_minor = dev.Minor;
if dev.path == device.container_path {
let host_major = dev.major;
let host_minor = dev.minor;
dev.Major = major_id as i64;
dev.Minor = minor_id as i64;
dev.major = major_id as i64;
dev.minor = minor_id as i64;
info!(
sl!(),
@@ -252,12 +252,12 @@ fn update_spec_device_list(device: &Device, spec: &mut Spec) -> Result<()> {
// Resources must be updated since they are used to identify the
// device in the devices cgroup.
if let Some(res) = linux.Resources.as_mut() {
let ds = res.Devices.as_mut_slice();
if let Some(res) = linux.resources.as_mut() {
let ds = res.devices.as_mut_slice();
for d in ds.iter_mut() {
if d.Major == host_major && d.Minor == host_minor {
d.Major = major_id as i64;
d.Minor = minor_id as i64;
if d.major == Some(host_major) && d.minor == Some(host_minor) {
d.major = Some(major_id as i64);
d.minor = Some(minor_id as i64);
info!(
sl!(),


@@ -8,6 +8,7 @@ use grpcio::{EnvBuilder, Server, ServerBuilder};
use grpcio::{RpcStatus, RpcStatusCode};
use std::sync::{Arc, Mutex};
use oci::{LinuxNamespace, Spec};
use protobuf::{RepeatedField, SingularPtrField};
use protocols::agent::CopyFileRequest;
use protocols::agent::{
@@ -16,7 +17,6 @@ use protocols::agent::{
};
use protocols::empty::Empty;
use protocols::health::{HealthCheckResponse, HealthCheckResponse_ServingStatus};
use protocols::oci::{LinuxNamespace, Spec};
use rustjail;
use rustjail::container::{BaseContainer, LinuxContainer};
use rustjail::errors::*;
@@ -36,6 +36,7 @@ use crate::namespace::{NSTYPEIPC, NSTYPEPID, NSTYPEUTS};
use crate::random;
use crate::sandbox::Sandbox;
use crate::version::{AGENT_VERSION, API_VERSION};
use crate::AGENT_CONFIG;
use netlink::{RtnlHandle, NETLINK_ROUTE};
use libc::{self, c_ushort, pid_t, winsize, TIOCSWINSZ};
@@ -80,8 +81,8 @@ impl agentService {
let sandbox;
let mut s;
let oci = match oci_spec.as_mut() {
Some(spec) => spec,
let mut oci = match oci_spec.as_mut() {
Some(spec) => rustjail::grpc_to_oci(spec),
None => {
error!(sl!(), "no oci spec in the create container request!");
return Err(
@@ -102,7 +103,7 @@ impl agentService {
// updates the devices listed in the OCI spec, so that they actually
// match real devices inside the VM. This step is necessary since we
// cannot predict everything from the caller.
add_devices(&req.devices.to_vec(), oci, &self.sandbox)?;
add_devices(&req.devices.to_vec(), &mut oci, &self.sandbox)?;
// Both rootfs and volumes (invoked with --volume for instance) will
// be processed the same way. The idea is to always mount any provided
@@ -118,11 +119,11 @@ impl agentService {
s.container_mounts.insert(cid.clone(), m);
}
update_container_namespaces(&s, oci)?;
update_container_namespaces(&s, &mut oci)?;
// write spec to bundle path, hooks might
// read ocispec
let olddir = setup_bundle(oci)?;
let olddir = setup_bundle(&oci)?;
// restore the cwd for kata-agent process.
defer!(unistd::chdir(&olddir).unwrap());
@@ -139,8 +140,15 @@ impl agentService {
let mut ctr: LinuxContainer =
LinuxContainer::new(cid.as_str(), CONTAINER_BASE, opts, &sl!())?;
let p = if oci.Process.is_some() {
let tp = Process::new(&sl!(), oci.get_Process(), eid.as_str(), true)?;
let pipe_size = AGENT_CONFIG.read().unwrap().container_pipe_size;
let p = if oci.process.is_some() {
let tp = Process::new(
&sl!(),
&oci.process.as_ref().unwrap(),
eid.as_str(),
true,
pipe_size,
)?;
tp
} else {
info!(sl!(), "no process configurations!");
@@ -180,7 +188,12 @@ impl agentService {
if req.timeout == 0 {
let s = Arc::clone(&self.sandbox);
let mut sandbox = s.lock().unwrap();
let ctr = sandbox.get_container(cid.as_str()).unwrap();
let ctr: &mut LinuxContainer = match sandbox.get_container(cid.as_str()) {
Some(cr) => cr,
None => {
return Err(ErrorKind::Nix(nix::Error::from_errno(Errno::EINVAL)).into());
}
};
ctr.destroy()?;
@@ -215,7 +228,12 @@ impl agentService {
let handle = thread::spawn(move || {
let mut sandbox = s.lock().unwrap();
let ctr = sandbox.get_container(cid2.as_str()).unwrap();
let ctr: &mut LinuxContainer = match sandbox.get_container(cid2.as_str()) {
Some(cr) => cr,
None => {
return;
}
};
ctr.destroy().unwrap();
tx.send(1).unwrap();
@@ -268,13 +286,15 @@ impl agentService {
let mut sandbox = s.lock().unwrap();
// ignore string_user, not sure what it is
let ocip = if req.process.is_some() {
let process = if req.process.is_some() {
req.process.as_ref().unwrap()
} else {
return Err(ErrorKind::Nix(nix::Error::from_errno(nix::errno::Errno::EINVAL)).into());
};
let p = Process::new(&sl!(), ocip, exec_id.as_str(), false)?;
let pipe_size = AGENT_CONFIG.read().unwrap().container_pipe_size;
let ocip = rustjail::process_grpc_to_oci(process);
let p = Process::new(&sl!(), &ocip, exec_id.as_str(), false, pipe_size)?;
let ctr = match sandbox.get_container(cid.as_str()) {
Some(v) => v,
@@ -295,6 +315,7 @@ impl agentService {
let eid = req.exec_id.clone();
let s = Arc::clone(&self.sandbox);
let mut sandbox = s.lock().unwrap();
let mut init = false;
info!(
sl!(),
@@ -302,7 +323,12 @@ impl agentService {
"container-id" => cid.clone(),
"exec-id" => eid.clone()
);
let p = find_process(&mut sandbox, cid.as_str(), eid.as_str(), true)?;
if eid == "" {
init = true;
}
let p = find_process(&mut sandbox, cid.as_str(), eid.as_str(), init)?;
let mut signal = Signal::try_from(req.signal as i32).unwrap();
@@ -355,7 +381,13 @@ impl agentService {
}
let mut sandbox = s.lock().unwrap();
let ctr = sandbox.get_container(cid.as_str()).unwrap();
let ctr: &mut LinuxContainer = match sandbox.get_container(cid.as_str()) {
Some(cr) => cr,
None => {
return Err(ErrorKind::Nix(nix::Error::from_errno(Errno::EINVAL)).into());
}
};
// need to close all fds
let mut p = ctr.processes.get_mut(&pid).unwrap();
@@ -568,11 +600,11 @@ impl protocols::agent_grpc::AgentService for agentService {
req: protocols::agent::ExecProcessRequest,
sink: ::grpcio::UnarySink<protocols::empty::Empty>,
) {
if let Err(_) = self.do_exec_process(req) {
if let Err(e) = self.do_exec_process(req) {
let f = sink
.fail(RpcStatus::new(
RpcStatusCode::Internal,
Some(String::from("fail to exec process!")),
Some(format!("{}", e)),
))
.map_err(|_e| error!(sl!(), "fail to exec process!"));
ctx.spawn(f);
@@ -641,7 +673,20 @@ impl protocols::agent_grpc::AgentService for agentService {
let s = Arc::clone(&self.sandbox);
let mut sandbox = s.lock().unwrap();
let ctr = sandbox.get_container(cid.as_str()).unwrap();
let ctr: &mut LinuxContainer = match sandbox.get_container(cid.as_str()) {
Some(cr) => cr,
None => {
let f = sink
.fail(RpcStatus::new(
RpcStatusCode::InvalidArgument,
Some(String::from("invalid container id")),
))
.map_err(|_e| error!(sl!(), "invalid container id!"));
ctx.spawn(f);
return;
}
};
let pids = ctr.processes().unwrap();
match format.as_str() {
@@ -729,17 +774,30 @@ impl protocols::agent_grpc::AgentService for agentService {
sink: ::grpcio::UnarySink<protocols::empty::Empty>,
) {
let cid = req.container_id.clone();
let res = req.resources.clone();
let res = req.resources;
let s = Arc::clone(&self.sandbox);
let mut sandbox = s.lock().unwrap();
let ctr = sandbox.get_container(cid.as_str()).unwrap();
let ctr: &mut LinuxContainer = match sandbox.get_container(cid.as_str()) {
Some(cr) => cr,
None => {
let f = sink
.fail(RpcStatus::new(
RpcStatusCode::Internal,
Some("invalid container id".to_string()),
))
.map_err(|_e| error!(sl!(), "invalid container id!"));
ctx.spawn(f);
return;
}
};
let resp = Empty::new();
if res.is_some() {
match ctr.set(res.unwrap()) {
let ociRes = rustjail::resources_grpc_to_oci(&res.unwrap());
match ctr.set(ociRes) {
Err(_e) => {
let f = sink
.fail(RpcStatus::new(
@@ -771,7 +829,19 @@ impl protocols::agent_grpc::AgentService for agentService {
let s = Arc::clone(&self.sandbox);
let mut sandbox = s.lock().unwrap();
let ctr = sandbox.get_container(cid.as_str()).unwrap();
let ctr: &mut LinuxContainer = match sandbox.get_container(cid.as_str()) {
Some(cr) => cr,
None => {
let f = sink
.fail(RpcStatus::new(
RpcStatusCode::Internal,
Some("invalid container id!".to_string()),
))
.map_err(|_e| error!(sl!(), "invalid container id!"));
ctx.spawn(f);
return;
}
};
let resp = match ctr.stats() {
Err(_e) => {
@@ -926,7 +996,19 @@ impl protocols::agent_grpc::AgentService for agentService {
let eid = req.exec_id.clone();
let s = Arc::clone(&self.sandbox);
let mut sandbox = s.lock().unwrap();
let p = find_process(&mut sandbox, cid.as_str(), eid.as_str(), false).unwrap();
let p = match find_process(&mut sandbox, cid.as_str(), eid.as_str(), false) {
Ok(v) => v,
Err(e) => {
let f = sink
.fail(RpcStatus::new(
RpcStatusCode::InvalidArgument,
Some(format!("invalid argument: {}", e)),
))
.map_err(|_e| error!(sl!(), "invalid argument"));
ctx.spawn(f);
return;
}
};
if p.term_master.is_none() {
let f = sink
@@ -1527,7 +1609,7 @@ fn find_process<'a>(
None => return Err(ErrorKind::ErrorCode(String::from("Invalid container id")).into()),
};
if init && eid == "" {
if init || eid == "" {
let p = match ctr.processes.get_mut(&ctr.init_process_pid) {
Some(v) => v,
None => {
@@ -1590,7 +1672,7 @@ pub fn start<S: Into<String>>(sandbox: Arc<Mutex<Sandbox>>, host: S, port: u16)
// sense to rely on the namespace path provided by the host since namespaces
// are different inside the guest.
fn update_container_namespaces(sandbox: &Sandbox, spec: &mut Spec) -> Result<()> {
let linux = match spec.Linux.as_mut() {
let linux = match spec.linux.as_mut() {
None => {
return Err(
ErrorKind::ErrorCode("Spec didn't contain linux field".to_string()).into(),
@@ -1601,26 +1683,26 @@ fn update_container_namespaces(sandbox: &Sandbox, spec: &mut Spec) -> Result<()>
     let mut pidNs = false;
-    let namespaces = linux.Namespaces.as_mut_slice();
+    let namespaces = linux.namespaces.as_mut_slice();
     for namespace in namespaces.iter_mut() {
-        if namespace.Type == NSTYPEPID {
+        if namespace.r#type == NSTYPEPID {
             pidNs = true;
             continue;
         }
-        if namespace.Type == NSTYPEIPC {
-            namespace.Path = sandbox.shared_ipcns.path.clone();
+        if namespace.r#type == NSTYPEIPC {
+            namespace.path = sandbox.shared_ipcns.path.clone();
             continue;
         }
-        if namespace.Type == NSTYPEUTS {
-            namespace.Path = sandbox.shared_utsns.path.clone();
+        if namespace.r#type == NSTYPEUTS {
+            namespace.path = sandbox.shared_utsns.path.clone();
             continue;
         }
     }

     if !pidNs && !sandbox.sandbox_pid_ns {
-        let mut pid_ns = LinuxNamespace::new();
-        pid_ns.set_Type(NSTYPEPID.to_string());
-        linux.Namespaces.push(pid_ns);
+        let mut pid_ns = LinuxNamespace::default();
+        pid_ns.r#type = NSTYPEPID.to_string();
+        linux.namespaces.push(pid_ns);
     }

     Ok(())
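This hunk swaps the protobuf-generated spec types (CamelCase fields, `new()`/`set_*()` accessors) for the oci crate's snake_case structs. Because `type` is a Rust keyword, the OCI spec's "type" key surfaces as the raw identifier `r#type`. A self-contained sketch (the struct here is a simplified stand-in for the oci crate's `LinuxNamespace`, not its real definition):

```rust
// `type` is a reserved word in Rust; a field named after the OCI "type" key
// must be declared and accessed with raw-identifier syntax: r#type.
#[derive(Default, Clone, Debug, PartialEq)]
struct LinuxNamespace {
    r#type: String,
    path: String,
}

fn main() {
    // Default::default() replaces the protobuf-style new() + set_Type() calls.
    let mut pid_ns = LinuxNamespace::default();
    pid_ns.r#type = "pid".to_string();

    assert_eq!(pid_ns.r#type, "pid");
    assert_eq!(pid_ns.path, ""); // Default leaves unset fields empty
    println!("ok");
}
```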
@@ -1749,24 +1831,23 @@ fn do_copy_file(req: &CopyFileRequest) -> Result<()> {
     Ok(())
 }

-fn setup_bundle(gspec: &Spec) -> Result<PathBuf> {
-    if gspec.Root.is_none() {
+fn setup_bundle(spec: &Spec) -> Result<PathBuf> {
+    if spec.root.is_none() {
         return Err(nix::Error::Sys(Errno::EINVAL).into());
     }
-    let root = gspec.Root.as_ref().unwrap().Path.as_str();
+    let root = spec.root.as_ref().unwrap().path.as_str();

     let rootfs = fs::canonicalize(root)?;
     let bundle_path = rootfs.parent().unwrap().to_str().unwrap();
     let config = format!("{}/{}", bundle_path, "config.json");

-    let oci = rustjail::grpc_to_oci(gspec);
     info!(
         sl!(),
         "{:?}",
-        oci.process.as_ref().unwrap().console_size.as_ref()
+        spec.process.as_ref().unwrap().console_size.as_ref()
     );
-    let _ = oci.save(config.as_str());
+    let _ = spec.save(config.as_str());

     let olddir = unistd::getcwd().chain_err(|| "cannot getcwd")?;
     unistd::chdir(bundle_path)?;


@@ -10,15 +10,14 @@
 #![allow(non_snake_case)]
 #[macro_use]
 extern crate lazy_static;
+extern crate oci;
 extern crate prctl;
 extern crate protocols;
 extern crate regex;
 extern crate rustjail;
+extern crate scan_fmt;
 extern crate serde_json;
 extern crate signal_hook;
-#[macro_use]
-extern crate scan_fmt;
-extern crate oci;
 #[macro_use]
 extern crate scopeguard;
@@ -101,6 +100,10 @@ fn announce(logger: &Logger) {
 fn main() -> Result<()> {
     let args: Vec<String> = env::args().collect();

+    if args.len() == 2 && args[1] == "init" {
+        rustjail::container::init_child();
+        exit(0);
+    }

     env::set_var("RUST_BACKTRACE", "full");
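The added block in `main()` is part of the container-process refactor: the agent re-executes its own binary with a single `init` argument, and that invocation runs only the child-side container setup and exits before normal agent startup. A minimal sketch of the dispatch pattern (the `init_child` and `dispatch` helpers here are hypothetical stand-ins, not the rustjail API):

```rust
use std::env;
use std::process::exit;

// Hypothetical stand-in for rustjail::container::init_child(): the code that
// runs in the re-exec'd child before the container workload takes over.
fn init_child() {
    println!("child init stage");
}

// True when this invocation is the re-exec'd child ("<binary> init").
fn dispatch(args: &[String]) -> bool {
    args.len() == 2 && args[1] == "init"
}

fn main() {
    let args: Vec<String> = env::args().collect();
    if dispatch(&args) {
        init_child();
        // Exit before any normal agent startup runs in the child.
        exit(0);
    }
    println!("normal agent startup");
}
```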


@@ -37,6 +37,8 @@ pub struct Namespace {
     pub path: String,
     persistent_ns_dir: String,
     ns_type: NamespaceType,
+    // only used for uts namespace
+    pub hostname: Option<String>,
 }

 impl Namespace {
impl Namespace {
@@ -46,6 +48,7 @@ impl Namespace {
             path: String::from(""),
             persistent_ns_dir: String::from(PERSISTENT_NS_DIR),
             ns_type: NamespaceType::IPC,
+            hostname: None,
         }
     }
@@ -54,8 +57,11 @@ impl Namespace {
         self
     }

-    pub fn as_uts(mut self) -> Self {
+    pub fn as_uts(mut self, hostname: &str) -> Self {
         self.ns_type = NamespaceType::UTS;
+        if hostname != "" {
+            self.hostname = Some(String::from(hostname));
+        }
         self
     }
@@ -82,6 +88,7 @@ impl Namespace {
         }
         self.path = new_ns_path.clone().into_os_string().into_string().unwrap();
+        let hostname = self.hostname.clone();

         let new_thread = thread::spawn(move || {
             let origin_ns_path = get_current_thread_ns_path(&ns_type.get());
@@ -98,6 +105,12 @@ impl Namespace {
                 return Err(err.to_string());
             }

+            if ns_type == NamespaceType::UTS && hostname.is_some() {
+                match nix::unistd::sethostname(hostname.unwrap()) {
+                    Err(err) => return Err(err.to_string()),
+                    Ok(_) => (),
+                }
+            }
+
             // Bind mount the new namespace from the current thread onto the mount point to persist it.
             let source: &str = origin_ns_path.as_str();
             let destination: &str = new_ns_path.as_path().to_str().unwrap_or("none");
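These hunks fix the "missing setting hostname" issue: `as_uts()` now takes the sandbox hostname, stores it on the `Namespace`, and the setup thread calls `nix::unistd::sethostname()` after unsharing the UTS namespace. A sketch of the builder-side plumbing, runnable unprivileged because the actual `sethostname` syscall (which needs CAP_SYS_ADMIN in the namespace) is replaced by a stub; struct and method names mirror the patch but are simplified:

```rust
// Simplified model of the patched Namespace builder: the hostname is captured
// at build time and applied during setup() only for a UTS namespace.
#[derive(Default)]
struct Namespace {
    hostname: Option<String>,
    is_uts: bool,
}

impl Namespace {
    fn new() -> Self {
        Namespace::default()
    }

    fn as_uts(mut self, hostname: &str) -> Self {
        self.is_uts = true;
        // Same guard as the patch: an empty hostname is not recorded.
        if hostname != "" {
            self.hostname = Some(String::from(hostname));
        }
        self
    }

    fn setup(&self) -> Result<String, String> {
        // Real code calls nix::unistd::sethostname() inside the thread that
        // just unshared the UTS namespace; here we only report what would run.
        match (self.is_uts, self.hostname.as_deref()) {
            (true, Some(h)) => Ok(format!("sethostname({})", h)),
            _ => Ok(String::from("no hostname to set")),
        }
    }
}

fn main() {
    let ns = Namespace::new().as_uts("guest-1");
    assert_eq!(ns.setup().unwrap(), "sethostname(guest-1)");

    let ns = Namespace::new().as_uts("");
    assert_eq!(ns.setup().unwrap(), "no hostname to set");
    println!("ok");
}
```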
@@ -136,7 +149,7 @@ impl Namespace {
 }

 /// Represents the Namespace type.
-#[derive(Clone, Copy)]
+#[derive(Clone, Copy, PartialEq)]
 enum NamespaceType {
     IPC,
     UTS,
@@ -201,7 +214,7 @@ mod tests {
         let tmpdir = Builder::new().prefix("ipc").tempdir().unwrap();

         let ns_uts = Namespace::new(&logger)
-            .as_uts()
+            .as_uts("test_hostname")
             .set_root_dir(tmpdir.path().to_str().unwrap())
             .setup();


@@ -171,7 +171,10 @@ impl Sandbox {
         };

         // Set up shared UTS namespace
-        self.shared_utsns = match Namespace::new(&self.logger).as_uts().setup() {
+        self.shared_utsns = match Namespace::new(&self.logger)
+            .as_uts(self.hostname.as_str())
+            .setup()
+        {
             Ok(ns) => ns,
             Err(err) => {
                 return Err(ErrorKind::ErrorCode(format!(
@@ -282,7 +285,7 @@ mod tests {
     use super::Sandbox;
     use crate::{mount::BareMount, skip_if_not_root};
     use nix::mount::MsFlags;
-    use protocols::oci::{Linux, Root, Spec};
+    use oci::{Linux, Root, Spec};
     use rustjail::container::LinuxContainer;
     use rustjail::specconv::CreateOpts;
     use slog::Logger;
@@ -489,13 +492,13 @@ mod tests {
     }

     fn create_dummy_opts() -> CreateOpts {
-        let mut root = Root::new();
-        root.Path = String::from("/");
+        let mut root = Root::default();
+        root.path = String::from("/");

-        let linux = Linux::new();
-        let mut spec = Spec::new();
-        spec.Root = Some(root).into();
-        spec.Linux = Some(linux).into();
+        let linux = Linux::default();
+        let mut spec = Spec::default();
+        spec.root = Some(root).into();
+        spec.linux = Some(linux).into();

         CreateOpts {
             cgroup_name: "".to_string(),
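The test-fixture rewrite above depends on the commit "oci: add Default and Clone to oci spec objects": with those derives, fixtures are built by plain field assignment on `Default::default()` values instead of protobuf-style `new()`/`set_*()` calls. A sketch of the idea, using simplified stand-in structs rather than the real oci crate definitions:

```rust
// Simplified stand-ins for oci crate spec types; the point is the derives.
#[derive(Default, Clone, Debug)]
struct Root {
    path: String,
    readonly: bool,
}

#[derive(Default, Clone, Debug)]
struct Spec {
    root: Option<Root>,
    hostname: String,
}

fn main() {
    // Default::default() zero-initializes every field, so tests only set
    // the fields they care about.
    let mut root = Root::default();
    root.path = String::from("/");

    let mut spec = Spec::default();
    spec.root = Some(root);

    // Clone lets one fixture be duplicated per test case without rebuilding.
    let copy = spec.clone();
    assert_eq!(copy.root.as_ref().unwrap().path, "/");
    assert_eq!(copy.hostname, "");
    println!("ok");
}
```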