Use the "allow all" policy for k8s-sandbox-vcpus-allocation.bats,
instead of relying on the Kata Guest image to use the same policy
as its default.
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
Use the "allow all" policy for k8s-nginx-connectivity.bats, instead of
relying on the Kata Guest image to use the same policy as its default.
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
Use the "allow all" policy for k8s-volume.bats, instead of relying
on the Kata Guest image to use the same policy as its default.
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
Use the "allow all" policy for k8s-sysctls.bats, instead of
relying on the Kata Guest image to use the same policy as its default.
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
Use the "allow all" policy for k8s-security-context.bats, instead of
relying on the Kata Guest image to use the same policy as its default.
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
Use the "allow all" policy for k8s-seccomp.bats, instead of relying
on the Kata Guest image to use the same policy as its default.
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
Use the "allow all" policy for k8s-projected-volume.bats, instead of
relying on the Kata Guest image to use the same policy as its default.
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
Use the "allow all" policy for k8s-pod-quota.bats, instead of
relying on the Kata Guest image to use the same policy as its default.
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
Use the "allow all" policy for k8s-optional-empty-secret.bats,
instead of relying on the Kata Guest image to use the same policy as
its default.
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
Use the "allow all" policy for k8s-measured-rootfs.bats, instead of
relying on the Kata Guest image to use the same policy as its default.
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
Use the "allow all" policy for k8s-liveness-probes.bats, instead of
relying on the Kata Guest image to use the same policy as its default.
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
Use the "allow all" policy for k8s-inotify.bats, instead of relying
on the Kata Guest image to use the same policy as its default.
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
Use the "allow all" policy for k8s-guest-pull-image.bats, instead of
relying on the Kata Guest image to use the same policy as its default.
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
Use the "allow all" policy for k8s-footloose.bats, instead of
relying on the Kata Guest image to use the same policy as its
default.
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
Use the "allow all" policy for k8s-empty-dirs.bats, instead of
relying on the Kata Guest image to use the same policy as its
default.
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
Check whether policy testing is enabled from:
- k8s-exec-rejected.bats
- k8s-policy-set-keys.bats
to reduce the complexity of run_kubernetes_tests.sh. After these changes,
there are no policy-specific commands left in run_kubernetes_tests.sh.
add_allow_all_policy_to_yaml() also moves out of run_kubernetes_tests.sh,
but it is not used yet. It will be used in future commits.
Fixes: #9395
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
Don't add the "allow all" policy to all the test YAML files anymore.
After this change, the k8s tests assume that all the Kata CI Guest
rootfs image files either:
- Don't support Agent Policy at all, or
- Include an "allow all" default policy.
This reliance/assumption will be addressed in a future commit.
Fixes: #9395
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
Configure the guest system to mount cgroups-v2 by default during system boot
by systemd. We must add the systemd.unified_cgroup_hierarchy=1 parameter to
the kernel cmdline, which is passed via kernel_params in configuration.toml.
To enable cgroup-v2, just add systemd.unified_cgroup_hierarchy=true [1]
to kernel_params.
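For example (a sketch only; the exact hypervisor section depends on the
configured VMM, [hypervisor.qemu] here is just an assumption):
[hypervisor.qemu]
kernel_params = "systemd.unified_cgroup_hierarchy=1"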
Fixes: #9336
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
This reverts commit 1c5693be86.
Avoid an apparent infinite loop when ReadStreamRequest is blocked by
policy for some of the pods.
When running the k8s-limit-range.bats test with Policy enabled,
the Shim + VMM never get terminated on my cluster. It is not clear why
the sandbox clean-up works better for other tests, but the
k8s-limit-range test pod gets stuck in an infinite loop:
stdout io stream copy error happens: error = %wrpc error: code =
PermissionDenied desc = \"ReadStreamRequest is blocked by policy
...
policy check: ReadStreamRequest
...
stdout io stream copy error happens: error = %wrpc error: code =
PermissionDenied desc = \"ReadStreamRequest is blocked by policy
...
policy check: ReadStreamRequest
...
Fixes: #9380
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
This PR removes the runc version information, as it is no longer used
in the Kata Containers scripts.
Fixes: #9364
Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
Since we're removing the unused service_offload parameter,
don't set it in any of the packaging scripts.
Signed-off-by: Tobin Feldman-Fitzthum <tobin@ibm.com>
Since we no longer use the service_offload configuration,
remove the ServiceOffload field from the image struct.
Signed-off-by: Tobin Feldman-Fitzthum <tobin@ibm.com>
These experimental options were added 2 years ago
in anticipation of features that would be added
in CoCo. These do not match the features that were
eventually added and will soon be ported to main.
Fixes: #8047
Signed-off-by: Tobin Feldman-Fitzthum <tobin@ibm.com>
This is the report from `make check`:
```
error: this loop never actually loops
--> src/signal.rs:147:9
|
147 | / loop {
148 | | select! {
149 | | _ = handle => {
150 | | println!("INFO: task completed");
... |
156 | | }
157 | | }
| |_________^
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#never_loop
= note: `#[deny(clippy::never_loop)]` on by default
```
There is only one possible outcome: either the task completes or it times out.
The code never retries, so the report is correct.
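A minimal sketch of the shape of the fix, assuming a tokio-based select over
the task handle and a timeout (hypothetical names, not the actual
src/signal.rs code): since every branch exits on its first pass, the
surrounding `loop` can simply be dropped.
```rust
use tokio::{select, time::{sleep, Duration}};

#[tokio::main]
async fn main() {
    // Hypothetical stand-in for the awaited task handle.
    let handle = tokio::spawn(async { /* do work */ });

    // Before: `loop { select! { ... } }` -- every arm completed or timed out
    // on the first pass, which is exactly what clippy::never_loop flags.
    // After: a bare select! expresses the same "completion or timeout" logic.
    select! {
        _ = handle => {
            println!("INFO: task completed")
        }
        _ = sleep(Duration::from_secs(5)) => {
            println!("INFO: timed out")
        }
    }
}
```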
Fixes: #9342
Signed-off-by: Christophe de Dinechin <dinechin@redhat.com>
The lint report is the following:
```
error: `flatten()` will run forever if the iterator repeatedly produces an `Err`
--> src/rpc.rs:1754:10
|
1754 | .flatten()
| ^^^^^^^^^ help: replace with: `map_while(Result::ok)`
|
note: this expression returning a `std::io::Lines` may produce an infinite number of `Err` in case of a read error
--> src/rpc.rs:1752:5
|
1752 | / reader
1753 | | .lines()
| |________________^
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#lines_filter_map_ok
= note: `-D clippy::lines-filter-map-ok` implied by `-D warnings`
= help: to override `-D warnings` add `#[allow(clippy::lines_filter_map_ok)]`
```
This commit simply applies the suggestion.
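A minimal, standalone illustration of the suggested change (not the actual
rpc.rs code):
```rust
use std::io::{BufRead, BufReader, Cursor};

fn main() {
    let reader = BufReader::new(Cursor::new("one\ntwo\nthree\n"));
    let lines: Vec<String> = reader
        .lines()
        // was `.flatten()`, which would spin forever on a persistent read Err;
        // map_while(Result::ok) stops at the first error instead.
        .map_while(Result::ok)
        .collect();
    println!("{:?}", lines);
}
```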
Fixes: #9342
Signed-off-by: Christophe de Dinechin <dinechin@redhat.com>
Running `make check` in the `src/agent` directory gives:
```
error: you seem to use `.enumerate()` and immediately discard the index
--> rustjail/src/mount.rs:572:27
|
572 | for (_index, line) in reader.lines().enumerate() {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#unused_enumerate_index
= note: `-D clippy::unused-enumerate-index` implied by `-D warnings`
= help: to override `-D warnings` add `#[allow(clippy::unused_enumerate_index)]`
help: remove the `.enumerate()` call
|
572 | for line in reader.lines() {
| ~~~~ ~~~~~~~~~~~~~~
Checking tokio-native-tls v0.3.1
Checking hyper-tls v0.5.0
Checking reqwest v0.11.18
error: could not compile `rustjail` (lib) due to 1 previous error
warning: build failed, waiting for other jobs to finish...
make: *** [../../utils.mk:177: standard_rust_check] Error 101
```
Fixes: #9342
Signed-off-by: Christophe de Dinechin <dinechin@redhat.com>
Support configuring CreateContainerRequestTimeout in the
configuration file.
e.g.:
[runtime]
...
create_container_timeout = 300
Note: The effective timeout is determined by the lesser of two values: the runtime-request-timeout from the kubelet config
(https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#:~:text=runtime%2Drequest%2Dtimeout) and create_container_timeout.
In essence, the timeout used for guest pull is min(runtime-request-timeout, create_container_timeout).
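To illustrate the relationship, a hedged sketch (not the runtime's actual
code):
```rust
use std::time::Duration;

// The timeout that effectively bounds a guest image pull is the smaller of
// the kubelet's runtime-request-timeout and Kata's create_container_timeout.
fn effective_guest_pull_timeout(
    runtime_request_timeout: Duration,
    create_container_timeout: Duration,
) -> Duration {
    runtime_request_timeout.min(create_container_timeout)
}

fn main() {
    let t = effective_guest_pull_timeout(
        Duration::from_secs(120), // kubelet runtime-request-timeout
        Duration::from_secs(300), // create_container_timeout from configuration.toml
    );
    println!("effective guest-pull timeout: {:?}", t); // 120s: the kubelet limit wins
}
```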
Signed-off-by: ChengyuZhu6 <chengyu.zhu@intel.com>
Most of the content of `docs/Stable-Branch-Strategy.md` got de-facto
deprecated by the re-design of the release process described in #9064.
Remove this file and all its references in the repo.
The `## Versioning` section has some useful information though. It is
moved to `docs/Release-Process.md`. The documentation of the `PATCH`
field is adapted according to the new workflow.
Fixes: #9064 - part VI
Signed-off-by: Greg Kurz <groug@kaod.org>
Support configuring CreateContainerRequestTimeout in the annotations.
e.g.:
annotations:
"io.katacontainers.config.runtime.create_container_timeout": "300"
Note: The effective timeout is determined by the lesser of two values: the runtime-request-timeout from the kubelet config
(https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#:~:text=runtime%2Drequest%2Dtimeout) and create_container_timeout.
In essence, the timeout used for guest pull is min(runtime-request-timeout, create_container_timeout).
Signed-off-by: ChengyuZhu6 <chengyu.zhu@intel.com>
When pulling images in the guest (#8484), it is important to account for pulling large images.
Presently, the image pull process in the guest hinges on `CreateContainerRequest`, which defaults to a 60-second timeout.
However, this duration may prove insufficient for pulling larger images, such as those containing AI models.
Consequently, we need a way to extend the timeout for large image pulls.
Fixes: #8141
Signed-off-by: ChengyuZhu6 <chengyu.zhu@intel.com>
This PR updates the journal log name for nerdctl artifacts to make
sure that we have different names in case we add a parallel GHA job.
Fixes: #9357
Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
Since https://github.com/kata-containers/kata-containers/pull/7769, we support
building the OPA binary into the ppc64le and s390x arch versions of the rootfs,
so build the policy-enabled agent to match for those architectures too.
Fixes: #9355
Signed-off-by: stevenhorsman <steven@uk.ibm.com>