From d7f75dce83ce2b868c40b7639559f3218e1c00e7 Mon Sep 17 00:00:00 2001 From: bin liu Date: Thu, 23 Jul 2020 14:18:29 +0800 Subject: [PATCH] docs: remove shim/proxy topics and fix docs links And also change links from old documentation to docs sub-directory. Fixes #444 Signed-off-by: bin liu --- README.md | 17 +-- docs/Developer-Guide.md | 12 +- docs/Limitations.md | 7 +- docs/Release-Process.md | 2 +- docs/Upgrading.md | 14 +-- docs/design/VSocks.md | 63 ++--------- docs/design/architecture.md | 103 ++---------------- docs/design/host-cgroups.md | 2 +- docs/design/vcpu-handling.md | 6 +- docs/how-to/containerd-kata.md | 4 +- .../how-to-import-kata-logs-with-fluentd.md | 4 +- .../how-to-load-kernel-modules-with-kata.md | 4 +- ...to-use-k8s-with-cri-containerd-and-kata.md | 2 +- .../how-to-use-kata-containers-with-acrn.md | 4 +- .../how-to-use-kata-containers-with-nemu.md | 2 +- docs/how-to/how-to-use-virtio-fs-with-kata.md | 4 +- docs/how-to/privileged.md | 2 +- docs/how-to/run-kata-with-k8s.md | 10 +- docs/how-to/service-mesh.md | 2 +- .../what-is-vm-cache-and-how-do-I-use-it.md | 6 +- ...at-is-vm-templating-and-how-do-I-use-it.md | 2 +- docs/install/README.md | 8 +- docs/install/aws-installation-guide.md | 2 +- docs/install/azure-installation-guide.md | 2 +- docs/install/centos-installation-guide.md | 2 +- docs/install/debian-installation-guide.md | 2 +- docs/install/docker/centos-docker-install.md | 2 +- docs/install/docker/debian-docker-install.md | 2 +- docs/install/docker/fedora-docker-install.md | 2 +- .../install/docker/opensuse-docker-install.md | 2 +- docs/install/docker/rhel-docker-install.md | 2 +- docs/install/docker/sles-docker-install.md | 2 +- docs/install/docker/ubuntu-docker-install.md | 2 +- docs/install/fedora-installation-guide.md | 2 +- docs/install/gce-installation-guide.md | 2 +- docs/install/minikube-installation-guide.md | 2 +- docs/install/rhel-installation-guide.md | 2 +- docs/install/sles-installation-guide.md | 2 +- docs/install/ubuntu-installation-guide.md | 2 +- docs/install/vexxhost-installation-guide.md | 2 +- .../Intel-GPU-passthrough-and-Kata.md | 8 +- .../Nvidia-GPU-passthrough-and-Kata.md | 8 +- docs/use-cases/using-Intel-QAT-and-kata.md | 4 +- .../using-SPDK-vhostuser-and-kata.md | 2 +- docs/use-cases/zun_kata.md | 4 +- src/agent/README.md | 4 +- src/runtime/README.md | 10 +- .../cli/config/configuration-acrn.toml.in | 18 --- .../cli/config/configuration-clh.toml.in | 19 ---- .../cli/config/configuration-fc.toml.in | 18 --- .../configuration-qemu-virtiofs.toml.in | 18 --- .../cli/config/configuration-qemu.toml.in | 19 ---- src/runtime/data/kata-collect-data.sh.in | 13 --- tools/packaging/kernel/README.md | 2 +- tools/packaging/obs-packaging/README.md | 2 +- tools/packaging/release/README.md | 2 +- tools/packaging/snap/README.md | 10 +- 57 files changed, 116 insertions(+), 361 deletions(-) diff --git a/README.md b/README.md index f3b0c3a489..6ab1ea3b95 100644 --- a/README.md +++ b/README.md @@ -8,9 +8,8 @@ * [Kata Containers-developed components](#kata-containers-developed-components) * [Agent](#agent) * [KSM throttler](#ksm-throttler) - * [Proxy](#proxy) * [Runtime](#runtime) - * [Shim](#shim) + * [Trace forwarder](#trace-forwarder) * [Additional](#additional) * [Hypervisor](#hypervisor) * [Kernel](#kernel) @@ -75,26 +74,12 @@ The [`kata-ksm-throttler`](https://github.com/kata-containers/ksm-throttler) is an optional utility that monitors containers and deduplicates memory to maximize container density on a host. 
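The KSM throttler builds on the kernel's Kernel Samepage Merging (KSM) facility. As a quick, generic sanity check (standard kernel `sysfs` paths, not a Kata-specific tool), you can see whether KSM is active on the host:

```bash
# Check whether Kernel Samepage Merging is active on the host.
cat /sys/kernel/mm/ksm/run            # 1 = merging enabled, 0 = disabled
cat /sys/kernel/mm/ksm/pages_sharing  # pages currently being deduplicated
```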
-##### Proxy - -The [`kata-proxy`](https://github.com/kata-containers/proxy) is a process that -runs on the host and co-ordinates access to the agent running inside the -virtual machine. - ##### Runtime The [`kata-runtime`](src/runtime/README.md) is usually invoked by a container manager and provides high-level verbs to manage containers. -##### Shim - -The [`kata-shim`](https://github.com/kata-containers/shim) is a process that -runs on the host. It acts as though it is the workload (which actually runs -inside the virtual machine). This shim is required to be compliant with the -expectations of the [OCI runtime -specification](https://github.com/opencontainers/runtime-spec). - ##### Trace forwarder The [`kata-trace-forwarder`](src/trace-forwarder) is a component only used diff --git a/docs/Developer-Guide.md b/docs/Developer-Guide.md index 1202d4032d..311f6cc964 100644 --- a/docs/Developer-Guide.md +++ b/docs/Developer-Guide.md @@ -378,11 +378,11 @@ $ (cd /usr/share/kata-containers && sudo ln -sf "$image" kata-containers-initrd. # Install guest kernel images -You can build and install the guest kernel image as shown [here](https://github.com/kata-containers/packaging/tree/master/kernel#build-kata-containers-kernel). +You can build and install the guest kernel image as shown [here](../tools/packaging/kernel/README.md#build-kata-containers-kernel). # Install a hypervisor -When setting up Kata using a [packaged installation method](https://github.com/kata-containers/documentation/tree/master/install#installing-on-a-linux-system), the `qemu-lite` hypervisor is installed automatically. For other installation methods, you will need to manually install a suitable hypervisor. +When setting up Kata using a [packaged installation method](install/README.md#installing-on-a-linux-system), the `qemu-lite` hypervisor is installed automatically. For other installation methods, you will need to manually install a suitable hypervisor. ## Build a custom QEMU @@ -447,14 +447,14 @@ Refer to to the [Run Kata Containers with Kubernetes](how-to/run-kata-with-k8s.m If you are unable to create a Kata Container first ensure you have [enabled full debug](#enable-full-debug) before attempting to create a container. Then run the -[`kata-collect-data.sh`](https://github.com/kata-containers/runtime/blob/master/data/kata-collect-data.sh.in) +[`kata-collect-data.sh`](../src/runtime/data/kata-collect-data.sh.in) script and paste its output directly into a [GitHub issue](https://github.com/kata-containers/kata-containers/issues/new). > **Note:** > > The `kata-collect-data.sh` script is built from the -> [runtime](https://github.com/kata-containers/runtime) repository. +> [runtime](../src/runtime) repository. To perform analysis on Kata logs, use the [`kata-log-parser`](https://github.com/kata-containers/tests/tree/master/cmd/log-parser) @@ -507,7 +507,7 @@ the following steps (using rootfs or initrd image). > additional packages in the rootfs and add “agent.debug_console” to kernel parameters in the runtime > config file. This tells the Kata agent to launch the console directly. > -> Once these steps are taken you can connect to the virtual machine using the [debug console](https://github.com/kata-containers/documentation/blob/master/Developer-Guide.md#connect-to-the-virtual-machine-using-the-debug-console). +> Once these steps are taken you can connect to the virtual machine using the [debug console](Developer-Guide.md#connect-to-the-virtual-machine-using-the-debug-console). 
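As a concrete sketch of the note above, adding `agent.debug_console` to the guest kernel parameters could be done as follows; the configuration path shown is an assumption and depends on how Kata was installed:

```bash
# Append agent.debug_console to kernel_params in the runtime configuration.
# The configuration path below is an assumption -- adjust it for your install.
sudo sed -i \
  -e 's/^kernel_params = "\(.*\)"/kernel_params = "\1 agent.debug_console"/' \
  /usr/share/defaults/kata-containers/configuration.toml
```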
### Create a custom image containing a shell @@ -571,7 +571,7 @@ $ sudo install -o root -g root -m 0640 kata-containers.img "/usr/share/kata-cont ``` Next, modify the `image=` values in the `[hypervisor.qemu]` section of the -[configuration file](https://github.com/kata-containers/runtime#configuration) +[configuration file](../src/runtime/README.md#configuration) to specify the full path to the image name specified in the previous code section. Alternatively, recreate the symbolic link so it points to the new debug image: diff --git a/docs/Limitations.md b/docs/Limitations.md index bbd578fb30..cd2fed9d86 100644 --- a/docs/Limitations.md +++ b/docs/Limitations.md @@ -39,7 +39,7 @@ Some of these limitations have potential solutions, whereas others exist due to fundamental architectural differences generally related to the use of VMs. -The [Kata Container runtime](https://github.com/kata-containers/runtime) +The [Kata Container runtime](../src/runtime) launches each container within its own hardware isolated VM, and each VM has its own kernel. Due to this higher degree of isolation, certain container capabilities cannot be supported or are implicitly enabled through the VM. @@ -270,11 +270,6 @@ The following examples outline some of the various areas constraints can be appl This can be achieved by specifying particular hypervisor configuration options. - - Constrain the [shim](https://github.com/kata-containers/shim) process. - - This process represents the container workload running inside the VM. - - - Constrain the [proxy](https://github.com/kata-containers/proxy) process. Note that in some circumstances it might be necessary to apply particular constraints to more than one of the previous areas to achieve the desired level of isolation and resource control. diff --git a/docs/Release-Process.md b/docs/Release-Process.md index 93bf83875e..a4b0d31853 100644 --- a/docs/Release-Process.md +++ b/docs/Release-Process.md @@ -69,7 +69,7 @@ We make use of [GitHub actions](https://github.com/features/actions) in this [file](https://github.com/kata-containers/kata-containers/blob/master/.github/workflows/main.yaml) in the `kata-containers/kata-containers` repository to build and upload release artifacts. This action is auto triggered with the above step when a new tag is pushed to the `kata-containers/kata-conatiners` repository. - Check the [actions status page](https://github.com/kata-containers/kata-containers/actions) to verify all steps in the actions workflow have completed successfully. On success, a static tarball containing Kata release artifacts will be uploaded to the [Release page](https://github.com/kata-containers/runtime/releases). + Check the [actions status page](https://github.com/kata-containers/kata-containers/actions) to verify all steps in the actions workflow have completed successfully. On success, a static tarball containing Kata release artifacts will be uploaded to the [Release page](https://github.com/kata-containers/kata-containers/releases). 
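For reference, the "new tag is pushed" trigger described above amounts to an ordinary annotated tag push; a sketch, with `$NEW_VERSION` as a placeholder for the version being released:

```bash
# Tag the release and push the tag; the push triggers the release workflow.
# NEW_VERSION is a placeholder for the version chosen earlier in this process.
git tag -a "${NEW_VERSION}" -m "Kata Containers ${NEW_VERSION}"
git push origin "${NEW_VERSION}"
```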
### Create OBS Packages diff --git a/docs/Upgrading.md b/docs/Upgrading.md index 6c5f50688b..cd30b0ae0e 100644 --- a/docs/Upgrading.md +++ b/docs/Upgrading.md @@ -68,7 +68,7 @@ $ for container in $(sudo docker ps -q); do sudo docker stop $container; done The automatic migration of [Clear Containers configuration](https://github.com/clearcontainers/runtime#configuration) to -[Kata Containers configuration](https://github.com/kata-containers/runtime#configuration) is +[Kata Containers configuration](../src/runtime/README.md#configuration) is not supported. If you have made changes to your Clear Containers configuration, you should @@ -111,7 +111,7 @@ $ sudo rm /etc/systemd/system/docker.service.d/clear-containers.conf ## Install Kata Containers -Follow one of the [installation guides](https://github.com/kata-containers/documentation/tree/master/install). +Follow one of the [installation guides](install). ## Create a Kata Container @@ -126,12 +126,12 @@ not configured to use the same container root storage. Currently, runV defaults defaults to `/var/run/kata-containers`. Now, to upgrade from runV you need to fresh install Kata Containers by following one of -the [installation guides](https://github.com/kata-containers/documentation/tree/master/install). +the [installation guides](install). # Upgrade Kata Containers As shown in the -[installation instructions](https://github.com/kata-containers/documentation/blob/master/install), +[installation instructions](install), Kata Containers provide binaries for popular distributions in their native packaging formats. This allows Kata Containers to be upgraded using the standard package management tools for your distribution. @@ -150,7 +150,7 @@ Since the official assets are packaged, they are automatically upgraded when new package versions are published. > **Warning**: Note that if you use custom assets (by modifying the -> [Kata Runtime configuration > file](https://github.com/kata-containers/runtime/#configuration)), +> [Kata Runtime configuration > file](../src/runtime/README.md#configuration)), > it is your responsibility to ensure they are updated as necessary. ### Guest kernel @@ -159,7 +159,7 @@ The `kata-linux-container` package contains a Linux\* kernel based on the latest vanilla version of the [long-term kernel](https://www.kernel.org/) plus a small number of -[patches](https://github.com/kata-containers/packaging/tree/master/kernel). +[patches](../tools/packaging/kernel). The `Longterm` branch is only updated with [important bug fixes](https://www.kernel.org/category/releases.html) @@ -174,7 +174,7 @@ The `kata-containers-image` package is updated only when critical updates are available for the packages used to create it, such as: - systemd -- [Kata Containers Agent](https://github.com/kata-containers/agent) +- [Kata Containers Agent](../src/agent) ### Determining asset versions diff --git a/docs/design/VSocks.md b/docs/design/VSocks.md index 884561ceaa..f4179628d2 100644 --- a/docs/design/VSocks.md +++ b/docs/design/VSocks.md @@ -1,7 +1,6 @@ # Kata Containers and VSOCKs - [Introduction](#introduction) - - [proxy communication diagram](#proxy-communication-diagram) - [VSOCK communication diagram](#vsock-communication-diagram) - [System requirements](#system-requirements) - [Advantages of using VSOCKs](#advantages-of-using-vsocks) @@ -16,46 +15,10 @@ processes in the virtual machine can read/write data from/to a serial port device and the processes in the host can read/write data from/to a Unix socket. 
Most GNU/Linux distributions have support for serial ports, making it the most portable solution. However, the serial link limits read/write access to one -process at a time. To deal with this limitation the resources (serial port and -Unix socket) must be multiplexed. In Kata Containers those resources are -multiplexed by using [`kata-proxy`][2] and [Yamux][3], the following diagram shows -how it's implemented. - - -### proxy communication diagram - -``` -.----------------------. -| .------------------. | -| | .-----. .-----. | | -| | |cont1| |cont2| | | -| | `-----' `-----' | | -| | \ / | | -| | .---------. | | -| | | agent | | | -| | `---------' | | -| | | | | -| | .-----------. | | -| |POD |serial port| | | -| `----|-----------|-' | -| | socket | | -| `-----------' | -| | | -| .-------. | -| | proxy | | -| `-------' | -| | | -| .------./ \.------. | -| | shim | | shim | | -| `------' `------' | -| Host | -`----------------------' -``` - -A newer, simpler method is [VSOCKs][4], which can accept connections from -multiple clients and does not require multiplexers ([`kata-proxy`][2] and -[Yamux][3]). The following diagram shows how it's implemented in Kata Containers. +process at a time. +A newer, simpler method is [VSOCKs][1], which can accept connections from +multiple clients. The following diagram shows how it's implemented in Kata Containers. ### VSOCK communication diagram @@ -95,6 +58,7 @@ The Kata Containers version must be greater than or equal to 1.2.0 and `use_vsoc must be set to `true` in the runtime [configuration file][1]. ### With VMWare guest + To use Kata Containers with VSOCKs in a VMWare guest environment, first stop the `vmware-tools` service and unload the VMWare Linux kernel module. ``` sudo systemctl stop vmware-tools @@ -107,28 +71,25 @@ sudo modprobe -i vhost_vsock ### High density Using a proxy for multiplexing the connections between the VM and the host uses -4.5MB per [POD][5]. In a high density deployment this could add up to GBs of +4.5MB per [POD][2]. In a high density deployment this could add up to GBs of memory that could have been used to host more PODs. When we talk about density each kilobyte matters and it might be the decisive factor between run another POD or not. For example if you have 500 PODs running in a server, the same -amount of [`kata-proxy`][2] processes will be running and consuming for around +amount of [`kata-proxy`][3] processes will be running and consuming for around 2250MB of RAM. Before making the decision not to use VSOCKs, you should ask yourself, how many more containers can run with the memory RAM consumed by the Kata proxies? ### Reliability -[`kata-proxy`][2] is in charge of multiplexing the connections between virtual +[`kata-proxy`][3] is in charge of multiplexing the connections between virtual machine and host processes, if it dies all connections get broken. For example -if you have a [POD][5] with 10 containers running, if `kata-proxy` dies it would +if you have a [POD][2] with 10 containers running, if `kata-proxy` dies it would be impossible to contact your containers, though they would still be running. Since communication via VSOCKs is direct, the only way to lose communication -with the containers is if the VM itself or the [shim][6] dies, if this happens +with the containers is if the VM itself or the `containerd-shim-kata-v2` dies, if this happens the containers are removed automatically. 
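Putting the requirements above into runnable form, the following sketch checks that the host exposes the `vhost-vsock` device and switches the runtime to VSOCK; the configuration path is an assumption and varies by installation:

```bash
# Load the vhost-vsock module and confirm the device node is present.
sudo modprobe vhost_vsock
ls -l /dev/vhost-vsock

# Enable VSOCK in the runtime configuration (path is an assumption).
sudo sed -i 's/^#\?use_vsock.*/use_vsock = true/' \
  /usr/share/defaults/kata-containers/configuration.toml
```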
-[1]: https://github.com/kata-containers/runtime#configuration -[2]: https://github.com/kata-containers/proxy -[3]: https://github.com/hashicorp/yamux -[4]: https://wiki.qemu.org/Features/VirtioVsock -[5]: ./vcpu-handling.md#virtual-cpus-and-kubernetes-pods -[6]: https://github.com/kata-containers/shim +[1]: https://wiki.qemu.org/Features/VirtioVsock +[2]: ./vcpu-handling.md#virtual-cpus-and-kubernetes-pods +[3]: https://github.com/kata-containers/proxy diff --git a/docs/design/architecture.md b/docs/design/architecture.md index 46e1441a67..99a70b878d 100644 --- a/docs/design/architecture.md +++ b/docs/design/architecture.md @@ -17,8 +17,6 @@ * [exec](#exec) * [kill](#kill) * [delete](#delete) -* [Proxy](#proxy) -* [Shim](#shim) * [Networking](#networking) * [Storage](#storage) * [Kubernetes Support](#kubernetes-support) @@ -37,7 +35,7 @@ This is an architectural overview of Kata Containers, based on the 1.5.0 release The two primary deliverables of the Kata Containers project are a container runtime and a CRI friendly shim. There is also a CRI friendly library API behind them. -The [Kata Containers runtime (`kata-runtime`)](https://github.com/kata-containers/runtime) +The [Kata Containers runtime (`kata-runtime`)](../../src/runtime) is compatible with the [OCI](https://github.com/opencontainers) [runtime specification](https://github.com/opencontainers/runtime-spec) and therefore works seamlessly with the [Docker\* Engine](https://www.docker.com/products/docker-engine) pluggable runtime @@ -52,7 +50,7 @@ the Docker engine or `kubelet` (Kubernetes) creates respectively. ![Docker and Kata Containers](arch-images/docker-kata.png) -The [`containerd-shim-kata-v2` (shown as `shimv2` from this point onwards)](https://github.com/kata-containers/runtime/tree/master/containerd-shim-v2) +The [`containerd-shim-kata-v2` (shown as `shimv2` from this point onwards)](../../src/runtime/containerd-shim-v2) is another Kata Containers entrypoint, which implements the [Containerd Runtime V2 (Shim API)](https://github.com/containerd/containerd/tree/master/runtime/v2) for Kata. With `shimv2`, Kubernetes can launch Pod and OCI compatible containers with one shim (the `shimv2`) per Pod instead @@ -62,7 +60,7 @@ of `2N+1` shims (a `containerd-shim` and a `kata-shim` for each container and th ![Kubernetes integration with shimv2](arch-images/shimv2.svg) The container process is then spawned by -[agent](https://github.com/kata-containers/agent), an agent process running +[agent](../../src/agent), an agent process running as a daemon inside the virtual machine. `kata-agent` runs a gRPC server in the guest using a VIRTIO serial or VSOCK interface which QEMU exposes as a socket file on the host. `kata-runtime` uses a gRPC protocol to communicate with @@ -72,30 +70,7 @@ stderr, stdin) between the containers and the manage engines (e.g. Docker Engine For any given container, both the init process and all potentially executed commands within that container, together with their related I/O streams, need -to go through the VIRTIO serial or VSOCK interface exported by QEMU. -In the VIRTIO serial case, a [Kata Containers -proxy (`kata-proxy`)](https://github.com/kata-containers/proxy) instance is -launched for each virtual machine to handle multiplexing and demultiplexing -those commands and streams. - -On the host, each container process's removal is handled by a reaper in the higher -layers of the container stack. In the case of Docker or containerd it is handled by `containerd-shim`. 
-In the case of CRI-O it is handled by `conmon`. For clarity, for the remainder -of this document the term "container process reaper" will be used to refer to -either reaper. As Kata Containers processes run inside their own virtual machines, -the container process reaper cannot monitor, control -or reap them. `kata-runtime` fixes that issue by creating an [additional shim process -(`kata-shim`)](https://github.com/kata-containers/shim) between the container process -reaper and `kata-proxy`. A `kata-shim` instance will both forward signals and `stdin` -streams to the container process on the guest and pass the container `stdout` -and `stderr` streams back up the stack to the CRI shim or Docker via the container process -reaper. `kata-runtime` creates a `kata-shim` daemon for each container and for each -OCI command received to run within an already running container (example, `docker -exec`). - -Since Kata Containers version 1.5, the new introduced `shimv2` has integrated the -functionalities of the reaper, the `kata-runtime`, the `kata-shim`, and the `kata-proxy`. -As a result, there will not be any of the additional processes previously listed. +to go through the VSOCK interface exported by QEMU. The container workload, that is, the actual OCI bundle rootfs, is exported from the host to the virtual machine. In the case where a block-based graph driver is @@ -155,7 +130,7 @@ The only service running in the context of the initrd is the [Agent](#agent) as ## Agent -[`kata-agent`](https://github.com/kata-containers/agent) is a process running in the +[`kata-agent`](../../src/agent) is a process running in the guest as a supervisor for managing containers and processes running within those containers. @@ -164,12 +139,7 @@ run several containers per VM to support container engines that require multiple containers running inside a pod. In the case of docker, `kata-runtime` creates a single container per pod. -`kata-agent` communicates with the other Kata components over gRPC. -It also runs a [`yamux`](https://github.com/hashicorp/yamux) server on the same gRPC URL. - -The `kata-agent` makes use of [`libcontainer`](https://github.com/opencontainers/runc/tree/master/libcontainer) -to manage the lifecycle of the container. This way the `kata-agent` reuses most -of the code used by [`runc`](https://github.com/opencontainers/runc). +`kata-agent` communicates with the other Kata components over ttRPC. ### Agent gRPC protocol @@ -199,7 +169,7 @@ Most users will not need to modify the configuration file. The file is well commented and provides a few "knobs" that can be used to modify the behavior of the runtime. -The configuration file is also used to enable runtime [debug output](https://github.com/kata-containers/documentation/blob/master/Developer-Guide.md#enable-full-debug). +The configuration file is also used to enable runtime [debug output](../Developer-Guide.md#enable-full-debug). ### Significant OCI commands @@ -324,57 +294,6 @@ process representing this container process. 4. Communicate with `kata-agent` (connecting the proxy) to remove the container configuration from the VM. 4. Return container status. -## Proxy - -Communication with the VM can be achieved by either `virtio-serial` or, if the host -kernel is newer than v4.8, a virtual socket, `vsock` can be used. The default is `virtio-serial`. - -The VM will likely be running multiple container processes. 
In the event `virtio-serial` -is used, the I/O streams associated with each process needs to be multiplexed and demultiplexed on the host. On systems with `vsock` support, this component becomes optional. - -`kata-proxy` is a process offering access to the VM [`kata-agent`](https://github.com/kata-containers/agent) -to multiple `kata-shim` and `kata-runtime` clients associated with the VM. Its -main role is to route the I/O streams and signals between each `kata-shim` -instance and the `kata-agent`. -`kata-proxy` connects to `kata-agent` on a Unix domain socket that `kata-runtime` provides -while spawning `kata-proxy`. -`kata-proxy` uses [`yamux`](https://github.com/hashicorp/yamux) to multiplex gRPC -requests on its connection to the `kata-agent`. - -When proxy type is configured as `proxyBuiltIn`, we do not spawn a separate -process to proxy gRPC connections. Instead a built-in Yamux gRPC dialer is used to connect -directly to `kata-agent`. This is used by CRI container runtime server `frakti` which -calls directly into `kata-runtime`. - -## Shim - -A container process reaper, such as Docker's `containerd-shim` or CRI-O's `conmon`, -is designed around the assumption that it can monitor and reap the actual container -process. As the container process reaper runs on the host, it cannot directly -monitor a process running within a virtual machine. At most it can see the QEMU -process, but that is not enough. With Kata Containers, `kata-shim` acts as the -container process that the container process reaper can monitor. Therefore -`kata-shim` needs to handle all container I/O streams (`stdout`, `stdin` and `stderr`) -and forward all signals the container process reaper decides to send to the container -process. - -`kata-shim` has an implicit knowledge about which VM agent will handle those streams -and signals and thus acts as an encapsulation layer between the container process -reaper and the `kata-agent`. `kata-shim`: - -- Connects to `kata-proxy` on a Unix domain socket. The socket URL is passed from - `kata-runtime` to `kata-shim` when the former spawns the latter along with a - `containerID` and `execID`. The `containerID` and `execID` are used to identify - the true container process that the shim process will be shadowing or representing. -- Forwards the standard input stream from the container process reaper into - `kata-proxy` using gRPC `WriteStdin` gRPC API. -- Reads the standard output/error from the container process. -- Forwards signals it receives from the container process reaper to `kata-proxy` - using `SignalProcessRequest` API. -- Monitors terminal changes and forwards them to `kata-proxy` using gRPC `TtyWinResize` - API. - - ## Networking Containers will typically live in their own, possibly shared, networking namespace. @@ -534,13 +453,13 @@ pod creation request from a container one. ### Containerd As of Kata Containers 1.5, using `shimv2` with containerd 1.2.0 or above is the preferred -way to run Kata Containers with Kubernetes ([see the howto](https://github.com/kata-containers/documentation/blob/master/how-to/how-to-use-k8s-with-cri-containerd-and-kata.md#configure-containerd-to-use-kata-containers)). +way to run Kata Containers with Kubernetes ([see the howto](../how-to/how-to-use-k8s-with-cri-containerd-and-kata.md#configure-containerd-to-use-kata-containers)). The CRI-O will catch up soon ([`kubernetes-sigs/cri-o#2024`](https://github.com/kubernetes-sigs/cri-o/issues/2024)). 
Refer to the following how-to guides: -- [How to use Kata Containers and Containerd](/how-to/containerd-kata.md) -- [How to use Kata Containers and CRI (containerd plugin) with Kubernetes](/how-to/how-to-use-k8s-with-cri-containerd-and-kata.md) +- [How to use Kata Containers and Containerd](../how-to/containerd-kata.md) +- [How to use Kata Containers and CRI (containerd plugin) with Kubernetes](../how-to/how-to-use-k8s-with-cri-containerd-and-kata.md) ### CRI-O @@ -587,7 +506,7 @@ with a Kubernetes pod: #### Mixing VM based and namespace based runtimes -> **Note:** Since Kubernetes 1.12, the [`Kubernetes RuntimeClass`](/how-to/containerd-kata.md#kubernetes-runtimeclass) +> **Note:** Since Kubernetes 1.12, the [`Kubernetes RuntimeClass`](../how-to/containerd-kata.md#kubernetes-runtimeclass) > has been supported and the user can specify runtime without the non-standardized annotations. One interesting evolution of the CRI-O support for `kata-runtime` is the ability diff --git a/docs/design/host-cgroups.md b/docs/design/host-cgroups.md index 0692e1a82f..6bb31cfb17 100644 --- a/docs/design/host-cgroups.md +++ b/docs/design/host-cgroups.md @@ -51,7 +51,7 @@ Kata Containers introduces a non-negligible overhead for running a sandbox (pod) 2) Kata Containers do not fully constrain the VMM and associated processes, instead placing a subset of them outside of the pod-cgroup. Kata Containers provides two options for how cgroups are handled on the host. Selection of these options is done through -the `SandboxCgroupOnly` flag within the Kata Containers [configuration](https://github.com/kata-containers/runtime#configuration) +the `SandboxCgroupOnly` flag within the Kata Containers [configuration](../../src/runtime/README.md#configuration) file. ## `SandboxCgroupOnly` enabled diff --git a/docs/design/vcpu-handling.md b/docs/design/vcpu-handling.md index 811b972a05..ab65357bbc 100644 --- a/docs/design/vcpu-handling.md +++ b/docs/design/vcpu-handling.md @@ -170,6 +170,6 @@ docker run --cpus 4 -ti debian bash -c "nproc; cat /sys/fs/cgroup/cpu,cpuacct/cp [2]: https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource [3]: https://kubernetes.io/docs/concepts/workloads/pods/pod/ [4]: https://docs.docker.com/engine/reference/commandline/update/ -[5]: https://github.com/kata-containers/agent -[6]: https://github.com/kata-containers/runtime -[7]: https://github.com/kata-containers/runtime#configuration +[5]: ../../src/agent +[6]: ../../src/runtime +[7]: ../../src/runtime/README.md#configuration diff --git a/docs/how-to/containerd-kata.md b/docs/how-to/containerd-kata.md index 1ed1db3e15..283aeca14e 100644 --- a/docs/how-to/containerd-kata.md +++ b/docs/how-to/containerd-kata.md @@ -57,7 +57,7 @@ use `RuntimeClass` instead of the deprecated annotations. ### Containerd Runtime V2 API: Shim V2 API -The [`containerd-shim-kata-v2` (short as `shimv2` in this documentation)](https://github.com/kata-containers/runtime/tree/master/containerd-shim-v2) +The [`containerd-shim-kata-v2` (short as `shimv2` in this documentation)](../../src/runtime/containerd-shim-v2) implements the [Containerd Runtime V2 (Shim API)](https://github.com/containerd/containerd/tree/master/runtime/v2) for Kata. With `shimv2`, Kubernetes can launch Pod and OCI-compatible containers with one shim per Pod. Prior to `shimv2`, `2N+1` shims (i.e. a `containerd-shim` and a `kata-shim` for each container and the Pod sandbox itself) and no standalone `kata-proxy` @@ -72,7 +72,7 @@ is implemented in Kata Containers v1.5.0. 
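As a minimal sketch of registering the shimv2 runtime with containerd's CRI plugin (the section names follow the containerd 1.x CRI plugin layout and are an assumption for other versions), the configuration amounts to:

```bash
# Register Kata as a containerd runtime backed by the shimv2 binary,
# then restart containerd to pick up the change.
cat <<'EOF' | sudo tee -a /etc/containerd/config.toml
[plugins.cri.containerd.runtimes.kata]
  runtime_type = "io.containerd.kata.v2"
EOF
sudo systemctl restart containerd
```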
### Install Kata Containers -Follow the instructions to [install Kata Containers](https://github.com/kata-containers/documentation/blob/master/install/README.md). +Follow the instructions to [install Kata Containers](../install/README.md). ### Install containerd with CRI plugin diff --git a/docs/how-to/how-to-import-kata-logs-with-fluentd.md b/docs/how-to/how-to-import-kata-logs-with-fluentd.md index 056cb16eb6..4b213b1e88 100644 --- a/docs/how-to/how-to-import-kata-logs-with-fluentd.md +++ b/docs/how-to/how-to-import-kata-logs-with-fluentd.md @@ -33,7 +33,7 @@ also applies to the Kata `shimv2` runtime. Differences pertaining to Kata `shim Kata generates logs. The logs can come from numerous parts of the Kata stack (the runtime, proxy, shim and even the agent). By default the logs -[go to the system journal](https://github.com/kata-containers/runtime#logging), +[go to the system journal](../../src/runtime/README.md#logging), but they can also be configured to be stored in files. The logs default format is in [`logfmt` structured logging](https://brandur.org/logfmt), but can be switched to @@ -256,7 +256,7 @@ directly from Kata, that should make overall import and processing of the log en There are potentially two things we can do with Kata here: -- Get Kata to [output its logs in `JSON` format](https://github.com/kata-containers/runtime#logging) rather +- Get Kata to [output its logs in `JSON` format](../../src/runtime/README.md#logging) rather than `logfmt`. - Get Kata to log directly into a file, rather than via the system journal. This would allow us to not need to parse the systemd format files, and capture the Kata log lines directly. It would also avoid Fluentd diff --git a/docs/how-to/how-to-load-kernel-modules-with-kata.md b/docs/how-to/how-to-load-kernel-modules-with-kata.md index f6a21aef55..12e5431493 100644 --- a/docs/how-to/how-to-load-kernel-modules-with-kata.md +++ b/docs/how-to/how-to-load-kernel-modules-with-kata.md @@ -103,6 +103,6 @@ spec: > **Note**: To pass annotations to Kata containers, [cri must to be configurated correctly](how-to-set-sandbox-config-kata.md#cri-configuration) -[1]: https://github.com/kata-containers/runtime -[2]: https://github.com/kata-containers/agent +[1]: ../../src/runtime +[2]: ../../src/agent [3]: https://kubernetes.io/docs/concepts/workloads/pods/pod/ diff --git a/docs/how-to/how-to-use-k8s-with-cri-containerd-and-kata.md b/docs/how-to/how-to-use-k8s-with-cri-containerd-and-kata.md index a4bc1a4850..7962565b74 100644 --- a/docs/how-to/how-to-use-k8s-with-cri-containerd-and-kata.md +++ b/docs/how-to/how-to-use-k8s-with-cri-containerd-and-kata.md @@ -177,7 +177,7 @@ $ sudo -E kubectl taint nodes --all node-role.kubernetes.io/master- By default, all pods are created with the default runtime configured in CRI containerd plugin. If a pod has the `io.kubernetes.cri.untrusted-workload` annotation set to `"true"`, the CRI plugin runs the pod with the -[Kata Containers runtime](https://github.com/kata-containers/runtime/blob/master/README.md). +[Kata Containers runtime](../../src/runtime/README.md). 
- Create an untrusted pod configuration diff --git a/docs/how-to/how-to-use-kata-containers-with-acrn.md b/docs/how-to/how-to-use-kata-containers-with-acrn.md index 27cad4caac..c6e948c6d8 100644 --- a/docs/how-to/how-to-use-kata-containers-with-acrn.md +++ b/docs/how-to/how-to-use-kata-containers-with-acrn.md @@ -49,7 +49,7 @@ This document requires the presence of the ACRN hypervisor and Kata Containers o $ sudo sed -i "s/$kernel_img/bzImage/g" /mnt/loader/entries/$conf_file $ sync && sudo umount /mnt && sudo reboot ``` -- Kata Containers installation: Automated installation does not seem to be supported for Clear Linux, so please use [manual installation](https://github.com/kata-containers/documentation/blob/master/Developer-Guide.md) steps. +- Kata Containers installation: Automated installation does not seem to be supported for Clear Linux, so please use [manual installation](../Developer-Guide.md) steps. > **Note:** Create rootfs image and not initrd image. @@ -82,7 +82,7 @@ $ sudo systemctl daemon-reload $ sudo systemctl restart docker ``` -4. Configure [Docker](https://github.com/kata-containers/documentation/blob/master/Developer-Guide.md#update-the-docker-systemd-unit-file) to use `kata-runtime`. +4. Configure [Docker](../Developer-Guide.md#update-the-docker-systemd-unit-file) to use `kata-runtime`. ## Configure Kata Containers with ACRN diff --git a/docs/how-to/how-to-use-kata-containers-with-nemu.md b/docs/how-to/how-to-use-kata-containers-with-nemu.md index dc35685254..35a5d35494 100644 --- a/docs/how-to/how-to-use-kata-containers-with-nemu.md +++ b/docs/how-to/how-to-use-kata-containers-with-nemu.md @@ -19,7 +19,7 @@ Kata Containers relies by default on the QEMU hypervisor in order to spawn the v This document describes how to run Kata Containers with NEMU, first by explaining how to download, build and install it. Then it walks through the steps needed to update your Kata Containers configuration in order to run with NEMU. ## Pre-requisites -This document requires Kata Containers to be [installed](https://github.com/kata-containers/documentation/blob/master/install/README.md) on your system. +This document requires Kata Containers to be [installed](../install/README.md) on your system. Also, it's worth noting that NEMU only supports `x86_64` and `aarch64` architecture. diff --git a/docs/how-to/how-to-use-virtio-fs-with-kata.md b/docs/how-to/how-to-use-virtio-fs-with-kata.md index c991b212be..384d73aedc 100644 --- a/docs/how-to/how-to-use-virtio-fs-with-kata.md +++ b/docs/how-to/how-to-use-virtio-fs-with-kata.md @@ -25,14 +25,14 @@ This document describes how to get Kata Containers to work with virtio-fs. ## Install Kata Containers with virtio-fs support -The Kata Containers NEMU configuration, the NEMU VMM and the `virtiofs` daemon are available in the [Kata Container release](https://github.com/kata-containers/runtime/releases) artifacts starting with the 1.7 release. While the feature is experimental, distribution packages are not supported, but installation is available through [`kata-deploy`](https://github.com/kata-containers/packaging/tree/master/kata-deploy). +The Kata Containers NEMU configuration, the NEMU VMM and the `virtiofs` daemon are available in the [Kata Container release](https://github.com/kata-containers/kata-containers/releases) artifacts starting with the 1.7 release. While the feature is experimental, distribution packages are not supported, but installation is available through [`kata-deploy`](../../tools/packaging/kata-deploy). 
Install the latest release of Kata as follows: ``` docker run --runtime=runc -v /opt/kata:/opt/kata -v /var/run/dbus:/var/run/dbus -v /run/systemd:/run/systemd -v /etc/docker:/etc/docker -it katadocker/kata-deploy kata-deploy-docker install ``` -This will place the Kata release artifacts in `/opt/kata`, and update Docker's configuration to include a runtime target, `kata-nemu`. Learn more about `kata-deploy` and how to use `kata-deploy` in Kubernetes [here](https://github.com/kata-containers/packaging/tree/master/kata-deploy#kubernetes-quick-start). +This will place the Kata release artifacts in `/opt/kata`, and update Docker's configuration to include a runtime target, `kata-nemu`. Learn more about `kata-deploy` and how to use `kata-deploy` in Kubernetes [here](../../tools/packaging/kata-deploy/README.md#kubernetes-quick-start). ## Run a Kata Container utilizing virtio-fs diff --git a/docs/how-to/privileged.md b/docs/how-to/privileged.md index 575e4b1817..cecc3907d1 100644 --- a/docs/how-to/privileged.md +++ b/docs/how-to/privileged.md @@ -75,5 +75,5 @@ See below example config: privileged_without_host_devices = true ``` - - [Kata Containers with CRI-O](https://github.com/kata-containers/documentation/blob/master/how-to/run-kata-with-k8s.md#cri-o) + - [Kata Containers with CRI-O](../how-to/run-kata-with-k8s.md#cri-o) diff --git a/docs/how-to/run-kata-with-k8s.md b/docs/how-to/run-kata-with-k8s.md index c8ed783ea0..24245214e0 100644 --- a/docs/how-to/run-kata-with-k8s.md +++ b/docs/how-to/run-kata-with-k8s.md @@ -14,7 +14,7 @@ * [Run a Kubernetes pod with Kata Containers](#run-a-kubernetes-pod-with-kata-containers) ## Prerequisites -This guide requires Kata Containers available on your system, install-able by following [this guide](https://github.com/kata-containers/documentation/blob/master/install/README.md). +This guide requires Kata Containers available on your system, install-able by following [this guide](../install/README.md). ## Install a CRI implementation @@ -28,7 +28,7 @@ After choosing one CRI implementation, you must make the appropriate configurati to ensure it integrates with Kata Containers. Kata Containers 1.5 introduced the `shimv2` for containerd 1.2.0, reducing the components -required to spawn pods and containers, and this is the preferred way to run Kata Containers with Kubernetes ([as documented here](https://github.com/kata-containers/documentation/blob/master/how-to/how-to-use-k8s-with-cri-containerd-and-kata.md#configure-containerd-to-use-kata-containers)). +required to spawn pods and containers, and this is the preferred way to run Kata Containers with Kubernetes ([as documented here](../how-to/how-to-use-k8s-with-cri-containerd-and-kata.md#configure-containerd-to-use-kata-containers)). An equivalent shim implementation for CRI-O is planned. @@ -78,7 +78,7 @@ a runtime to be used when the workload cannot be trusted and a higher level of s is required. An additional flag can be used to let CRI-O know if a workload should be considered _trusted_ or _untrusted_ by default. For further details, see the documentation -[here](https://github.com/kata-containers/documentation/blob/master/design/architecture.md#mixing-vm-based-and-namespace-based-runtimes). +[here](../design/architecture.md#mixing-vm-based-and-namespace-based-runtimes). ```toml # runtime is the OCI compatible runtime used for trusted container workloads. @@ -132,7 +132,7 @@ to properly install it. 
To customize containerd to select Kata Containers runtime, follow our "Configure containerd to use Kata Containers" internal documentation -[here](https://github.com/kata-containers/documentation/blob/master/how-to/how-to-use-k8s-with-cri-containerd-and-kata.md#configure-containerd-to-use-kata-containers). +[here](../how-to/how-to-use-k8s-with-cri-containerd-and-kata.md#configure-containerd-to-use-kata-containers). ## Install Kubernetes @@ -160,7 +160,7 @@ Environment="KUBELET_EXTRA_ARGS=--container-runtime=remote --runtime-request-tim Environment="KUBELET_EXTRA_ARGS=--container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock" ``` For more information about containerd see the "Configure Kubelet to use containerd" -documentation [here](https://github.com/kata-containers/documentation/blob/master/how-to/how-to-use-k8s-with-cri-containerd-and-kata.md#configure-kubelet-to-use-containerd). +documentation [here](../how-to/how-to-use-k8s-with-cri-containerd-and-kata.md#configure-kubelet-to-use-containerd). ## Run a Kubernetes pod with Kata Containers diff --git a/docs/how-to/service-mesh.md b/docs/how-to/service-mesh.md index 5a1727f93a..dde04fd119 100644 --- a/docs/how-to/service-mesh.md +++ b/docs/how-to/service-mesh.md @@ -48,7 +48,7 @@ as the proxy starts. ### Kata and Kubernetes -Follow the [instructions](https://github.com/kata-containers/documentation/blob/master/install/README.md) +Follow the [instructions](../install/README.md) to get Kata Containers properly installed and configured with Kubernetes. You can choose between CRI-O and CRI-containerd, both are supported through this document. diff --git a/docs/how-to/what-is-vm-cache-and-how-do-I-use-it.md b/docs/how-to/what-is-vm-cache-and-how-do-I-use-it.md index c35f8a091b..83adec9ac6 100644 --- a/docs/how-to/what-is-vm-cache-and-how-do-I-use-it.md +++ b/docs/how-to/what-is-vm-cache-and-how-do-I-use-it.md @@ -10,7 +10,7 @@ VMCache is a new function that creates VMs as caches before using it. It helps speed up new container creation. The function consists of a server and some clients communicating -through Unix socket. The protocol is gRPC in [`protocols/cache/cache.proto`](https://github.com/kata-containers/runtime/blob/master/protocols/cache/cache.proto). +through Unix socket. The protocol is gRPC in [`protocols/cache/cache.proto`](../../src/runtime/protocols/cache/cache.proto). The VMCache server will create some VMs and cache them by factory cache. It will convert the VM to gRPC format and transport it when gets requested from clients. @@ -21,9 +21,9 @@ a new sandbox. ### How is this different to VM templating -Both [VM templating](https://github.com/kata-containers/documentation/blob/master/how-to/what-is-vm-templating-and-how-do-I-use-it.md) and VMCache help speed up new container creation. +Both [VM templating](../how-to/what-is-vm-templating-and-how-do-I-use-it.md) and VMCache help speed up new container creation. When VM templating enabled, new VMs are created by cloning from a pre-created template VM, and they will share the same initramfs, kernel and agent memory in readonly mode. So it saves a lot of memory if there are many Kata Containers running on the same host. -VMCache is not vulnerable to [share memory CVE](https://github.com/kata-containers/documentation/blob/master/how-to/what-is-vm-templating-and-how-do-I-use-it.md#what-are-the-cons) because each VM doesn't share the memory. 
+VMCache is not vulnerable to [share memory CVE](../how-to/what-is-vm-templating-and-how-do-I-use-it.md#what-are-the-cons) because each VM doesn't share the memory. ### How to enable VMCache diff --git a/docs/how-to/what-is-vm-templating-and-how-do-I-use-it.md b/docs/how-to/what-is-vm-templating-and-how-do-I-use-it.md index 1d2974a415..fae8215eb1 100644 --- a/docs/how-to/what-is-vm-templating-and-how-do-I-use-it.md +++ b/docs/how-to/what-is-vm-templating-and-how-do-I-use-it.md @@ -8,7 +8,7 @@ same initramfs, kernel and agent memory in readonly mode. It is very much like a process fork done by the kernel but here we *fork* VMs. ### How is this different from VMCache -Both [VMCache](https://github.com/kata-containers/documentation/blob/master/how-to/what-is-vm-cache-and-how-do-I-use-it.md) and VM templating help speed up new container creation. +Both [VMCache](../how-to/what-is-vm-cache-and-how-do-I-use-it.md) and VM templating help speed up new container creation. When VMCache enabled, new VMs are created by the VMCache server. So it is not vulnerable to share memory CVE because each VM doesn't share the memory. VM templating saves a lot of memory if there are many Kata Containers running on the same host. diff --git a/docs/install/README.md b/docs/install/README.md index 3b999ec3b1..6f6ff66248 100644 --- a/docs/install/README.md +++ b/docs/install/README.md @@ -18,7 +18,7 @@ in a system configured to run Kata Containers. ## Prerequisites Kata Containers requires nested virtualization or bare metal. See the -[hardware requirements](https://github.com/kata-containers/runtime/blob/master/README.md#hardware-requirements) +[hardware requirements](../../src/runtime/README.md#hardware-requirements) to see if your system is capable of running Kata Containers. ## Packaged installation methods @@ -78,7 +78,7 @@ Manual installation instructions are available for [these distributions](#suppor 3. Install a supported container manager. 4. Configure the container manager to use `kata-runtime` as the default OCI runtime. Or, for Kata Containers 1.5.0 or above, configure the `io.containerd.kata.v2` to be the runtime shim (see [containerd runtime v2 (shim API)](https://github.com/containerd/containerd/tree/master/runtime/v2) - and [How to use Kata Containers and CRI (containerd plugin) with Kubernetes](https://github.com/kata-containers/documentation/blob/master/how-to/how-to-use-k8s-with-cri-containerd-and-kata.md)). + and [How to use Kata Containers and CRI (containerd plugin) with Kubernetes](../how-to/how-to-use-k8s-with-cri-containerd-and-kata.md)). > **Notes on upgrading**: > - If you are installing Kata Containers on a system that already has Clear Containers or `runv` installed, @@ -87,7 +87,7 @@ Manual installation instructions are available for [these distributions](#suppor > **Notes on releases**: > - [This download server](http://download.opensuse.org/repositories/home:/katacontainers:/releases:/) > hosts the Kata Containers packages built by OBS for all the supported architectures. -> Packages are available for the latest and stable releases (more info [here](https://github.com/kata-containers/documentation/blob/master/Stable-Branch-Strategy.md)). +> Packages are available for the latest and stable releases (more info [here](../Stable-Branch-Strategy.md)). > > - The following guides apply to the latest Kata Containers release > (a.k.a. `master` release). @@ -124,4 +124,4 @@ versions. This is not recommended for normal users. ## Further information * The [upgrading document](../Upgrading.md). 
* The [developer guide](../Developer-Guide.md). -* The [runtime documentation](https://github.com/kata-containers/runtime/blob/master/README.md). +* The [runtime documentation](../../src/runtime/README.md). diff --git a/docs/install/aws-installation-guide.md b/docs/install/aws-installation-guide.md index 65d9e860ba..885449782e 100644 --- a/docs/install/aws-installation-guide.md +++ b/docs/install/aws-installation-guide.md @@ -137,4 +137,4 @@ Go onto the next step. The process for installing Kata itself on bare metal is identical to that of a virtualization-enabled VM. -For detailed information to install Kata on your distribution of choice, see the [Kata Containers installation user guides](https://github.com/kata-containers/documentation/blob/master/install/README.md). +For detailed information to install Kata on your distribution of choice, see the [Kata Containers installation user guides](../install/README.md). diff --git a/docs/install/azure-installation-guide.md b/docs/install/azure-installation-guide.md index 14b3512df6..e36206b5b1 100644 --- a/docs/install/azure-installation-guide.md +++ b/docs/install/azure-installation-guide.md @@ -15,4 +15,4 @@ Create a new virtual machine with: ## Set up with distribution specific quick start -Follow distribution specific [install guides](https://github.com/kata-containers/documentation/tree/master/install#supported-distributions). +Follow distribution specific [install guides](../install/README.md#supported-distributions). diff --git a/docs/install/centos-installation-guide.md b/docs/install/centos-installation-guide.md index cece662c83..da53a0b8e0 100644 --- a/docs/install/centos-installation-guide.md +++ b/docs/install/centos-installation-guide.md @@ -14,4 +14,4 @@ 2. Decide which container manager to use and select the corresponding link that follows: - [Docker](docker/centos-docker-install.md) - - [Kubernetes](https://github.com/kata-containers/documentation/blob/master/Developer-Guide.md#run-kata-containers-with-kubernetes) + - [Kubernetes](../Developer-Guide.md#run-kata-containers-with-kubernetes) diff --git a/docs/install/debian-installation-guide.md b/docs/install/debian-installation-guide.md index 0bd7b597e3..7bb2ebb536 100644 --- a/docs/install/debian-installation-guide.md +++ b/docs/install/debian-installation-guide.md @@ -19,4 +19,4 @@ 2. Decide which container manager to use and select the corresponding link that follows: - [Docker](docker/debian-docker-install.md) - - [Kubernetes](https://github.com/kata-containers/documentation/blob/master/Developer-Guide.md#run-kata-containers-with-kubernetes) + - [Kubernetes](../Developer-Guide.md#run-kata-containers-with-kubernetes) diff --git a/docs/install/docker/centos-docker-install.md b/docs/install/docker/centos-docker-install.md index 203605fce7..b9902da4bd 100644 --- a/docs/install/docker/centos-docker-install.md +++ b/docs/install/docker/centos-docker-install.md @@ -24,7 +24,7 @@ 2. Configure Docker to use Kata Containers by default with **ONE** of the following methods: 1. 
systemd (this is the default and is applied automatically if you select the - [automatic installation](https://github.com/kata-containers/documentation/tree/master/install#automatic-installation) option) + [automatic installation](../../install/README.md#automatic-installation) option) ```bash $ sudo mkdir -p /etc/systemd/system/docker.service.d/ diff --git a/docs/install/docker/debian-docker-install.md b/docs/install/docker/debian-docker-install.md index 3031017107..662e0ecfaf 100644 --- a/docs/install/docker/debian-docker-install.md +++ b/docs/install/docker/debian-docker-install.md @@ -37,7 +37,7 @@ a. `sysVinit` ``` b. systemd (this is the default and is applied automatically if you select the - [automatic installation](https://github.com/kata-containers/documentation/tree/master/install#automatic-installation) option) + [automatic installation](../../install/README.md#automatic-installation) option) ```bash $ sudo mkdir -p /etc/systemd/system/docker.service.d/ diff --git a/docs/install/docker/fedora-docker-install.md b/docs/install/docker/fedora-docker-install.md index e7e86440cc..51c629fc56 100644 --- a/docs/install/docker/fedora-docker-install.md +++ b/docs/install/docker/fedora-docker-install.md @@ -26,7 +26,7 @@ 2. Configure Docker to use Kata Containers by default with **ONE** of the following methods: 1. systemd (this is the default and is applied automatically if you select the - [automatic installation](https://github.com/kata-containers/documentation/tree/master/install#automatic-installation) option) + [automatic installation](../../install/README.md#automatic-installation) option) ```bash $ sudo mkdir -p /etc/systemd/system/docker.service.d/ diff --git a/docs/install/docker/opensuse-docker-install.md b/docs/install/docker/opensuse-docker-install.md index 1d19d7235e..0e126ae4cb 100644 --- a/docs/install/docker/opensuse-docker-install.md +++ b/docs/install/docker/opensuse-docker-install.md @@ -23,7 +23,7 @@ 2. Configure Docker to use Kata Containers by default with **ONE** of the following methods: 1. Specify the runtime options in `/etc/sysconfig/docker` (this is the default and is applied automatically if you select the - [automatic installation](https://github.com/kata-containers/documentation/tree/master/install#automatic-installation) option) + [automatic installation](../../install/README.md#automatic-installation) option) ```bash $ DOCKER_SYSCONFIG=/etc/sysconfig/docker diff --git a/docs/install/docker/rhel-docker-install.md b/docs/install/docker/rhel-docker-install.md index 073451e25c..f4c9d224f4 100644 --- a/docs/install/docker/rhel-docker-install.md +++ b/docs/install/docker/rhel-docker-install.md @@ -25,7 +25,7 @@ 2. Configure Docker to use Kata Containers by default with **ONE** of the following methods: 1. systemd (this is the default and is applied automatically if you select the - [automatic installation](https://github.com/kata-containers/documentation/tree/master/install#automatic-installation) option) + [automatic installation](../../install/README.md#automatic-installation) option) ```bash $ sudo mkdir -p /etc/systemd/system/docker.service.d/ diff --git a/docs/install/docker/sles-docker-install.md b/docs/install/docker/sles-docker-install.md index 0e925c2fae..de7587ac29 100644 --- a/docs/install/docker/sles-docker-install.md +++ b/docs/install/docker/sles-docker-install.md @@ -23,7 +23,7 @@ 2. Configure Docker to use Kata Containers by default with **ONE** of the following methods: 1. 
systemd (this is the default and is applied automatically if you select the - [automatic installation](https://github.com/kata-containers/documentation/tree/master/install#automatic-installation) option) + [automatic installation](../../install/README.md#automatic-installation) option) ```bash $ sudo mkdir -p /etc/systemd/system/docker.service.d/ diff --git a/docs/install/docker/ubuntu-docker-install.md b/docs/install/docker/ubuntu-docker-install.md index 3491528852..d1cc494564 100644 --- a/docs/install/docker/ubuntu-docker-install.md +++ b/docs/install/docker/ubuntu-docker-install.md @@ -28,7 +28,7 @@ 2. Configure Docker to use Kata Containers by default with **ONE** of the following methods: 1. systemd (this is the default and is applied automatically if you select the - [automatic installation](https://github.com/kata-containers/documentation/tree/master/install#automatic-installation) option) + [automatic installation](../../install/README.md#automatic-installation) option) ```bash $ sudo mkdir -p /etc/systemd/system/docker.service.d/ diff --git a/docs/install/fedora-installation-guide.md b/docs/install/fedora-installation-guide.md index 38a4f6eaa2..4bf0594e78 100644 --- a/docs/install/fedora-installation-guide.md +++ b/docs/install/fedora-installation-guide.md @@ -14,4 +14,4 @@ 2. Decide which container manager to use and select the corresponding link that follows: - [Docker](docker/fedora-docker-install.md) - - [Kubernetes](https://github.com/kata-containers/documentation/blob/master/Developer-Guide.md#run-kata-containers-with-kubernetes) + - [Kubernetes](../Developer-Guide.md#run-kata-containers-with-kubernetes) diff --git a/docs/install/gce-installation-guide.md b/docs/install/gce-installation-guide.md index 488e2562b3..389bec160f 100644 --- a/docs/install/gce-installation-guide.md +++ b/docs/install/gce-installation-guide.md @@ -101,7 +101,7 @@ If this fails, ensure you created your instance from the correct image and that The process for installing Kata itself on a virtualization-enabled VM is identical to that for bare metal. -For detailed information to install Kata on your distribution of choice, see the [Kata Containers installation user guides](https://github.com/kata-containers/documentation/blob/master/install/README.md). +For detailed information to install Kata on your distribution of choice, see the [Kata Containers installation user guides](../install/README.md). ## Create a Kata-enabled Image diff --git a/docs/install/minikube-installation-guide.md b/docs/install/minikube-installation-guide.md index fe323d3c39..2c37caed42 100644 --- a/docs/install/minikube-installation-guide.md +++ b/docs/install/minikube-installation-guide.md @@ -54,7 +54,7 @@ to enable nested virtualization can be found on the [KVM Nested Guests page](https://www.linux-kvm.org/page/Nested_Guests) Alternatively, and for other architectures, the Kata Containers built in -[`kata-check`](https://github.com/kata-containers/runtime#hardware-requirements) +[`kata-check`](../../src/runtime/README.md#hardware-requirements) command can be used *inside Minikube* once Kata has been installed, to check for compatibility. ## Setting up Minikube diff --git a/docs/install/rhel-installation-guide.md b/docs/install/rhel-installation-guide.md index 0151c8177e..37412a7f11 100644 --- a/docs/install/rhel-installation-guide.md +++ b/docs/install/rhel-installation-guide.md @@ -13,4 +13,4 @@ 2. 
Decide which container manager to use and select the corresponding link that follows: - [Docker](docker/rhel-docker-install.md) - - [Kubernetes](https://github.com/kata-containers/documentation/blob/master/Developer-Guide.md#run-kata-containers-with-kubernetes) + - [Kubernetes](../Developer-Guide.md#run-kata-containers-with-kubernetes) diff --git a/docs/install/sles-installation-guide.md b/docs/install/sles-installation-guide.md index 2d900e0f01..b602ce05ca 100644 --- a/docs/install/sles-installation-guide.md +++ b/docs/install/sles-installation-guide.md @@ -12,4 +12,4 @@ 2. Decide which container manager to use and select the corresponding link that follows: - [Docker](docker/sles-docker-install.md) - - [Kubernetes](https://github.com/kata-containers/documentation/blob/master/Developer-Guide.md#run-kata-containers-with-kubernetes) + - [Kubernetes](../Developer-Guide.md#run-kata-containers-with-kubernetes) diff --git a/docs/install/ubuntu-installation-guide.md b/docs/install/ubuntu-installation-guide.md index a42283eb92..a3f97f8b99 100644 --- a/docs/install/ubuntu-installation-guide.md +++ b/docs/install/ubuntu-installation-guide.md @@ -14,4 +14,4 @@ 2. Decide which container manager to use and select the corresponding link that follows: - [Docker](docker/ubuntu-docker-install.md) - - [Kubernetes](https://github.com/kata-containers/documentation/blob/master/Developer-Guide.md#run-kata-containers-with-kubernetes) + - [Kubernetes](../Developer-Guide.md#run-kata-containers-with-kubernetes) diff --git a/docs/install/vexxhost-installation-guide.md b/docs/install/vexxhost-installation-guide.md index da5de43186..92890d1bee 100644 --- a/docs/install/vexxhost-installation-guide.md +++ b/docs/install/vexxhost-installation-guide.md @@ -13,4 +13,4 @@ with v2). The recommended machine type for container workloads is `v2-highcpu` ## Set up with distribution specific quick start -Follow distribution specific [install guides](https://github.com/kata-containers/documentation/tree/master/install#supported-distributions). +Follow distribution specific [install guides](../install/README.md#supported-distributions). diff --git a/docs/use-cases/Intel-GPU-passthrough-and-Kata.md b/docs/use-cases/Intel-GPU-passthrough-and-Kata.md index b16c89ddf6..e5fe2a2fb5 100644 --- a/docs/use-cases/Intel-GPU-passthrough-and-Kata.md +++ b/docs/use-cases/Intel-GPU-passthrough-and-Kata.md @@ -55,7 +55,7 @@ line. ## Install and configure Kata Containers To use this feature, you need Kata version 1.3.0 or above. -Follow the [Kata Containers setup instructions](https://github.com/kata-containers/documentation/blob/master/install/README.md) +Follow the [Kata Containers setup instructions](../install/README.md) to install the latest version of Kata. In order to pass a GPU to a Kata Container, you need to enable the `hotplug_vfio_on_root_bus` @@ -82,12 +82,12 @@ CONFIG_DRM_I915_USERPTR=y ``` Build the Kata Containers kernel with the previous config options, using the instructions -described in [Building Kata Containers kernel](https://github.com/kata-containers/packaging/tree/master/kernel). -For further details on building and installing guest kernels, see [the developer guide](https://github.com/kata-containers/documentation/blob/master/Developer-Guide.md#install-guest-kernel-images). +described in [Building Kata Containers kernel](../../tools/packaging/kernel). +For further details on building and installing guest kernels, see [the developer guide](../Developer-Guide.md#install-guest-kernel-images). 
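Before rebuilding the guest kernel, the `hotplug_vfio_on_root_bus` change mentioned above is typically a one-line edit under the `[hypervisor.qemu]` section of `configuration.toml`. The sketch below is only an illustration: it assumes the packaged default configuration lives at `/usr/share/defaults/kata-containers/configuration.toml`; adjust the path if your installation uses `/etc/kata-containers/configuration.toml` instead.

```bash
# Sketch: uncomment and enable VFIO hotplug on the root bus (configuration path assumed)
$ sudo sed -i -e 's/^# *\(hotplug_vfio_on_root_bus\).*=.*$/\1 = true/g' \
    /usr/share/defaults/kata-containers/configuration.toml
```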
There is an easy way to build a guest kernel that supports Intel GPU: ``` -## Build guest kernel with https://github.com/kata-containers/packaging/tree/master/kernel +## Build guest kernel with ../../tools/packaging/kernel # Prepare (download guest kernel source, generate .config) $ ./build-kernel.sh -g intel -f setup diff --git a/docs/use-cases/Nvidia-GPU-passthrough-and-Kata.md b/docs/use-cases/Nvidia-GPU-passthrough-and-Kata.md index 591595e548..32f90f3f96 100644 --- a/docs/use-cases/Nvidia-GPU-passthrough-and-Kata.md +++ b/docs/use-cases/Nvidia-GPU-passthrough-and-Kata.md @@ -72,7 +72,7 @@ Your host kernel needs to be booted with `intel_iommu=on` on the kernel command ## Install and configure Kata Containers To use non-large BARs devices (for example, Nvidia Tesla T4), you need Kata version 1.3.0 or above. -Follow the [Kata Containers setup instructions](https://github.com/kata-containers/documentation/blob/master/install/README.md) +Follow the [Kata Containers setup instructions](../install/README.md) to install the latest version of Kata. The following configuration in the Kata `configuration.toml` file as shown below can work: @@ -131,13 +131,13 @@ It is worth checking that it is not enabled in your kernel configuration to prev Build the Kata Containers kernel with the previous config options, -using the instructions described in [Building Kata Containers kernel](https://github.com/kata-containers/packaging/tree/master/kernel). +using the instructions described in [Building Kata Containers kernel](../../tools/packaging/kernel). For further details on building and installing guest kernels, -see [the developer guide](https://github.com/kata-containers/documentation/blob/master/Developer-Guide.md#install-guest-kernel-images). +see [the developer guide](../Developer-Guide.md#install-guest-kernel-images). There is an easy way to build a guest kernel that supports Nvidia GPU: ``` -## Build guest kernel with https://github.com/kata-containers/packaging/tree/master/kernel +## Build guest kernel with ../../tools/packaging/kernel # Prepare (download guest kernel source, generate .config) $ ./build-kernel.sh -v 4.19.86 -g nvidia -f setup diff --git a/docs/use-cases/using-Intel-QAT-and-kata.md b/docs/use-cases/using-Intel-QAT-and-kata.md index b5ebac16f2..cef1962473 100644 --- a/docs/use-cases/using-Intel-QAT-and-kata.md +++ b/docs/use-cases/using-Intel-QAT-and-kata.md @@ -194,9 +194,9 @@ $ ls -la /sys/bus/pci/drivers/vfio-pci This example automatically uses the latest Kata kernel supported by Kata. It follows the instructions from the -[packaging kernel repository](https://github.com/kata-containers/packaging/tree/master/kernel) +[packaging kernel repository](../../tools/packaging/kernel) and uses the latest Kata kernel -[config](https://github.com/kata-containers/packaging/tree/master/kernel/configs). +[config](../../tools/packaging/kernel/configs). There are some patches that must be installed as well, which the `build-kernel.sh` script should automatically apply. If you are using a different kernel version, then you might need to manually apply them. Since diff --git a/docs/use-cases/using-SPDK-vhostuser-and-kata.md b/docs/use-cases/using-SPDK-vhostuser-and-kata.md index bc9bdb87b9..624b96b3be 100644 --- a/docs/use-cases/using-SPDK-vhostuser-and-kata.md +++ b/docs/use-cases/using-SPDK-vhostuser-and-kata.md @@ -184,7 +184,7 @@ used for vhost-user devices. The base directory for vhost-user device is a configurable value, with the default being `/var/run/kata-containers/vhost-user`. 
It can be -configured by parameter `vhost_user_store_path` in [Kata TOML configuration file](https://github.com/kata-containers/runtime/blob/master/README.md#configuration). +configured by parameter `vhost_user_store_path` in [Kata TOML configuration file](../../src/runtime/README.md#configuration). Currently, the vhost-user storage device is not enabled by default, so the user should enable it explicitly inside the Kata TOML configuration diff --git a/docs/use-cases/zun_kata.md b/docs/use-cases/zun_kata.md index a037294c30..fca0dcab94 100644 --- a/docs/use-cases/zun_kata.md +++ b/docs/use-cases/zun_kata.md @@ -10,7 +10,7 @@ Currently, the instructions are based on the following links: - https://docs.openstack.org/zun/latest/admin/clear-containers.html -- https://github.com/kata-containers/documentation/blob/master/install/ubuntu-installation-guide.md +- ../install/ubuntu-installation-guide.md ## Install Git to use with DevStack @@ -54,7 +54,7 @@ $ zun delete test ## Install Kata Containers -Follow [these instructions](https://github.com/kata-containers/documentation/blob/master/install/ubuntu-installation-guide.md) +Follow [these instructions](../install/ubuntu-installation-guide.md) to install the Kata Containers components. ## Update Docker with new Kata Containers runtime diff --git a/src/agent/README.md b/src/agent/README.md index 0e4c716e43..62142456c6 100644 --- a/src/agent/README.md +++ b/src/agent/README.md @@ -52,8 +52,8 @@ cargo build --target x86_64-unknown-linux-musl --release ``` ## Run Kata CI with rust-agent - * Firstly, install kata as noted by ["how to install Kata"](https://github.com/kata-containers/documentation/blob/master/install/README.md) - * Secondly, build your own kata initrd/image following the steps in ["how to build your own initrd/image"](https://github.com/kata-containers/documentation/blob/master/Developer-Guide.md#create-and-install-rootfs-and-initrd-image). + * Firstly, install kata as noted by ["how to install Kata"](../../docs/install/README.md) + * Secondly, build your own kata initrd/image following the steps in ["how to build your own initrd/image"](../../docs/Developer-Guide.md#create-and-install-rootfs-and-initrd-image). notes: Please use your rust agent instead of the go agent when building your initrd/image. * Clone the kata ci test cases from: https://github.com/kata-containers/tests.git, and then run the cri test with: diff --git a/src/runtime/README.md b/src/runtime/README.md index 5c49f1b3e7..66cac648c2 100644 --- a/src/runtime/README.md +++ b/src/runtime/README.md @@ -88,11 +88,11 @@ available for various operating systems. ## Quick start for developers See the -[developer guide](https://github.com/kata-containers/documentation/blob/master/Developer-Guide.md). +[developer guide](../../docs/Developer-Guide.md). ## Architecture overview -See the [architecture overview](https://github.com/kata-containers/documentation/blob/master/design/architecture.md) +See the [architecture overview](../../docs/design/architecture.md) for details on the Kata Containers design. ## Configuration @@ -174,12 +174,12 @@ $ sudo journalctl -t kata ## Debugging See the -[debugging section of the developer guide](https://github.com/kata-containers/documentation/blob/master/Developer-Guide.md#troubleshoot-kata-containers). +[debugging section of the developer guide](../../docs/Developer-Guide.md#troubleshoot-kata-containers). 
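As a quick illustration of the debugging workflow referenced above, the sketch below turns on debug output in the runtime configuration and then reads the resulting log entries from the system journal. It assumes a configuration file at `/etc/kata-containers/configuration.toml`; the exact path depends on how Kata Containers was installed.

```bash
# Sketch: enable debug output wherever the config exposes enable_debug (config path assumed)
$ sudo sed -i -e 's/^# *\(enable_debug\).*=.*$/\1 = true/g' /etc/kata-containers/configuration.toml

# Inspect the log entries collected by the system journal
$ sudo journalctl -t kata
```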
## Limitations See the -[limitations file](https://github.com/kata-containers/documentation/blob/master/Limitations.md) +[limitations file](../../docs/Limitations.md) for further details. ## Community @@ -195,7 +195,7 @@ See [how to reach the community](https://github.com/kata-containers/community/bl See the [project table of contents](https://github.com/kata-containers/kata-containers) and the -[documentation repository](https://github.com/kata-containers/documentation). +[documentation repository](../../docs). ## Additional packages diff --git a/src/runtime/cli/config/configuration-acrn.toml.in b/src/runtime/cli/config/configuration-acrn.toml.in index ccf191a412..7e63021cb0 100644 --- a/src/runtime/cli/config/configuration-acrn.toml.in +++ b/src/runtime/cli/config/configuration-acrn.toml.in @@ -101,24 +101,6 @@ block_device_driver = "@DEFBLOCKSTORAGEDRIVER_ACRN@" # but it will not abort container execution. #guest_hook_path = "/usr/share/oci/hooks" -[shim.@PROJECT_TYPE@] -path = "@SHIMPATH@" - -# If enabled, shim messages will be sent to the system log -# (default: disabled) -#enable_debug = true - -# If enabled, the shim will create opentracing.io traces and spans. -# (See https://www.jaegertracing.io/docs/getting-started). -# -# Note: By default, the shim runs in a separate network namespace. Therefore, -# to allow it to send trace details to the Jaeger agent running on the host, -# it is necessary to set 'disable_new_netns=true' so that it runs in the host -# network namespace. -# -# (default: disabled) -#enable_tracing = true - [agent.@PROJECT_TYPE@] # If enabled, make the agent display debug-level messages. # (default: disabled) diff --git a/src/runtime/cli/config/configuration-clh.toml.in b/src/runtime/cli/config/configuration-clh.toml.in index ba7e2568da..35518a67c5 100644 --- a/src/runtime/cli/config/configuration-clh.toml.in +++ b/src/runtime/cli/config/configuration-clh.toml.in @@ -99,25 +99,6 @@ block_device_driver = "virtio-blk" # Default false #enable_debug = true -[shim.@PROJECT_TYPE@] -path = "@SHIMPATH@" - -# If enabled, shim messages will be sent to the system log -# (default: disabled) -#enable_debug = true - -# If enabled, the shim will create opentracing.io traces and spans. -# (See https://www.jaegertracing.io/docs/getting-started). -# -# Note: By default, the shim runs in a separate network namespace. Therefore, -# to allow it to send trace details to the Jaeger agent running on the host, -# it is necessary to set 'disable_new_netns=true' so that it runs in the host -# network namespace. -# -# (default: disabled) -#enable_tracing = true - - [agent.@PROJECT_TYPE@] # If enabled, make the agent display debug-level messages. # (default: disabled) diff --git a/src/runtime/cli/config/configuration-fc.toml.in b/src/runtime/cli/config/configuration-fc.toml.in index 8a09031408..068a05441e 100644 --- a/src/runtime/cli/config/configuration-fc.toml.in +++ b/src/runtime/cli/config/configuration-fc.toml.in @@ -217,24 +217,6 @@ block_device_driver = "@DEFBLOCKSTORAGEDRIVER_FC@" # Default false #enable_template = true -[shim.@PROJECT_TYPE@] -path = "@SHIMPATH@" - -# If enabled, shim messages will be sent to the system log -# (default: disabled) -#enable_debug = true - -# If enabled, the shim will create opentracing.io traces and spans. -# (See https://www.jaegertracing.io/docs/getting-started). -# -# Note: By default, the shim runs in a separate network namespace. 
Therefore, -# to allow it to send trace details to the Jaeger agent running on the host, -# it is necessary to set 'disable_new_netns=true' so that it runs in the host -# network namespace. -# -# (default: disabled) -#enable_tracing = true - [agent.@PROJECT_TYPE@] # If enabled, make the agent display debug-level messages. # (default: disabled) diff --git a/src/runtime/cli/config/configuration-qemu-virtiofs.toml.in b/src/runtime/cli/config/configuration-qemu-virtiofs.toml.in index bf1aac91d1..67639b08f1 100644 --- a/src/runtime/cli/config/configuration-qemu-virtiofs.toml.in +++ b/src/runtime/cli/config/configuration-qemu-virtiofs.toml.in @@ -309,24 +309,6 @@ vhost_user_store_path = "@DEFVHOSTUSERSTOREPATH@" # Default /var/run/kata-containers/cache.sock #vm_cache_endpoint = "/var/run/kata-containers/cache.sock" -[shim.@PROJECT_TYPE@] -path = "@SHIMPATH@" - -# If enabled, shim messages will be sent to the system log -# (default: disabled) -#enable_debug = true - -# If enabled, the shim will create opentracing.io traces and spans. -# (See https://www.jaegertracing.io/docs/getting-started). -# -# Note: By default, the shim runs in a separate network namespace. Therefore, -# to allow it to send trace details to the Jaeger agent running on the host, -# it is necessary to set 'disable_new_netns=true' so that it runs in the host -# network namespace. -# -# (default: disabled) -#enable_tracing = true - [agent.@PROJECT_TYPE@] # If enabled, make the agent display debug-level messages. # (default: disabled) diff --git a/src/runtime/cli/config/configuration-qemu.toml.in b/src/runtime/cli/config/configuration-qemu.toml.in index 2a07d71f9b..3b1d481161 100644 --- a/src/runtime/cli/config/configuration-qemu.toml.in +++ b/src/runtime/cli/config/configuration-qemu.toml.in @@ -13,7 +13,6 @@ [hypervisor.qemu] path = "@QEMUPATH@" kernel = "@KERNELPATH@" -initrd = "@INITRDPATH@" image = "@IMAGEPATH@" machine_type = "@MACHINETYPE@" @@ -333,24 +332,6 @@ vhost_user_store_path = "@DEFVHOSTUSERSTOREPATH@" # Default /var/run/kata-containers/cache.sock #vm_cache_endpoint = "/var/run/kata-containers/cache.sock" -[shim.@PROJECT_TYPE@] -path = "@SHIMPATH@" - -# If enabled, shim messages will be sent to the system log -# (default: disabled) -#enable_debug = true - -# If enabled, the shim will create opentracing.io traces and spans. -# (See https://www.jaegertracing.io/docs/getting-started). -# -# Note: By default, the shim runs in a separate network namespace. Therefore, -# to allow it to send trace details to the Jaeger agent running on the host, -# it is necessary to set 'disable_new_netns=true' so that it runs in the host -# network namespace. -# -# (default: disabled) -#enable_tracing = true - [agent.@PROJECT_TYPE@] # If enabled, make the agent display debug-level messages. 
# (default: disabled) diff --git a/src/runtime/data/kata-collect-data.sh.in b/src/runtime/data/kata-collect-data.sh.in index db3a50e433..b6c3a88a80 100644 --- a/src/runtime/data/kata-collect-data.sh.in +++ b/src/runtime/data/kata-collect-data.sh.in @@ -307,17 +307,6 @@ show_runtime_log_details() end_section } -show_shim_log_details() -{ - local title="Shim logs" - - subheading "$title" - - start_section "$title" - find_system_journal_problems "shim" "@PROJECT_TYPE@-shim" - end_section -} - show_throttler_log_details() { local title="Throttler logs" @@ -336,7 +325,6 @@ show_log_details() heading "$title" show_runtime_log_details - show_shim_log_details show_throttler_log_details show_containerd_shimv2_log_details @@ -366,7 +354,6 @@ show_package_versions() for project in @PROJECT_TYPE@ do pattern+="|${project}-runtime" - pattern+="|${project}-shim" pattern+="|${project}-ksm-throttler" pattern+="|${project}-containers-image" done diff --git a/tools/packaging/kernel/README.md b/tools/packaging/kernel/README.md index cf55801555..72d7e2f7ab 100644 --- a/tools/packaging/kernel/README.md +++ b/tools/packaging/kernel/README.md @@ -16,7 +16,7 @@ automates the process to build a kernel for Kata Containers. ## Requirements The `build-kernel.sh` script requires an installed Golang version matching the -[component build requirements](https://github.com/kata-containers/documentation/blob/master/Developer-Guide.md#requirements-to-build-individual-components). +[component build requirements](../../../docs/Developer-Guide.md#requirements-to-build-individual-components). ## Usage diff --git a/tools/packaging/obs-packaging/README.md b/tools/packaging/obs-packaging/README.md index 3a7eb116d5..757a410588 100644 --- a/tools/packaging/obs-packaging/README.md +++ b/tools/packaging/obs-packaging/README.md @@ -7,5 +7,5 @@ This directory contains tooling and packaging metadata to build all Kata components with OBS. See -[the Kata installation documentation](https://github.com/kata-containers/documentation/blob/master/install/README.md) +[the Kata installation documentation](../../../docs/install/README.md) for instructions on how to install the official packages. diff --git a/tools/packaging/release/README.md b/tools/packaging/release/README.md index 8e0b2d72ac..d3ffce1b06 100644 --- a/tools/packaging/release/README.md +++ b/tools/packaging/release/README.md @@ -14,7 +14,7 @@ tools used for creating Kata Containers releases. ## Create a Kata Containers release -See [the release documentation](https://github.com/kata-containers/documentation/blob/master/Release-Process.md). +See [the release documentation](../../../docs/Release-Process.md). ## Release tools diff --git a/tools/packaging/snap/README.md b/tools/packaging/snap/README.md index 12a1d6248f..f0982adb8b 100644 --- a/tools/packaging/snap/README.md +++ b/tools/packaging/snap/README.md @@ -84,12 +84,12 @@ then a new configuration file can be [created](#configure-kata-containers) and [configured][7]. 
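A minimal sketch of that configuration step is shown below: copy the default configuration shipped with the snap into `/etc/kata-containers/` and edit the copy. Both the snap mount path and the target directory are assumptions here and may differ between snap revisions and setups.

```bash
# Sketch: create a writable configuration from the snap's packaged default (paths assumed)
$ sudo mkdir -p /etc/kata-containers
$ sudo cp /snap/kata-containers/current/usr/share/defaults/kata-containers/configuration.toml \
    /etc/kata-containers/configuration.toml
```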
[1]: https://docs.snapcraft.io/snaps/intro -[2]: https://github.com/kata-containers/documentation/blob/master/design/architecture.md#root-filesystem-image +[2]: ../../../docs/design/architecture.md#root-filesystem-image [3]: https://docs.snapcraft.io/reference/confinement#classic [4]: https://github.com/kata-containers/runtime#configuration [5]: https://docs.docker.com/engine/reference/commandline/dockerd -[6]: https://github.com/kata-containers/documentation/blob/master/install/docker/ubuntu-docker-install.md -[7]: https://github.com/kata-containers/documentation/blob/master/Developer-Guide.md#configure-to-use-initrd-or-rootfs-image +[6]: ../../../docs/install/docker/ubuntu-docker-install.md +[7]: ../../../docs/Developer-Guide.md#configure-to-use-initrd-or-rootfs-image [8]: https://snapcraft.io/kata-containers -[9]: https://github.com/kata-containers/documentation/blob/master/Developer-Guide.md#run-kata-containers-with-docker -[10]: https://github.com/kata-containers/documentation/blob/master/Developer-Guide.md#run-kata-containers-with-kubernetes +[9]: ../../../docs/Developer-Guide.md#run-kata-containers-with-docker +[10]: ../../../docs/Developer-Guide.md#run-kata-containers-with-kubernetes