docs: Update outdated URLs and keep them available

The content of each old URL was compared with the content of its new URL
to ensure that the two are consistent and contain no ambiguities.

Fixes: #4454

Signed-off-by: Binbin Zhang <binbin36520@gmail.com>
Binbin Zhang 2022-06-15 15:50:56 +08:00
parent 185360cb9a
commit a305bafeef
5 changed files with 6 additions and 6 deletions

@@ -320,7 +320,7 @@ mod tests {
 ## Test user
-[Unit tests are run *twice*](https://github.com/kata-containers/tests/blob/main/.ci/go-test.sh):
+[Unit tests are run *twice*](../src/runtime/go-test.sh):
 - as the current user
 - as the `root` user (if different to the current user)

@@ -79,7 +79,7 @@ a "`BUG: feature X not implemented see {bug-url}`" type error.
 - Don't use multiple log calls when a single log call could be used.
 - Use structured logging where possible to allow
-[standard tooling](https://github.com/kata-containers/tests/tree/main/cmd/log-parser)
+[standard tooling](../src/tools/log-parser)
 be able to extract the log fields.
 ### Names
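
As an illustration of the structured-logging guideline in the hunk above, here is a minimal sketch (not taken from the Kata sources; the logger choice and field names are assumptions) showing how a single structured call emits discrete fields that log tooling such as the log parser can extract:

```go
// Illustrative sketch only: logrus-style structured logging with
// invented field names, using one call instead of several plain lines.
package main

import (
	"github.com/sirupsen/logrus"
)

func main() {
	// Each field becomes a discrete key=value pair in the log record,
	// rather than being buried in free-form message text.
	logrus.WithFields(logrus.Fields{
		"source":    "runtime",        // hypothetical field
		"sandbox":   "abc123",         // hypothetical field
		"subsystem": "virtcontainers", // hypothetical field
	}).Info("sandbox created")
}
```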

@@ -17,7 +17,7 @@ Kubelet instance is responsible for managing the lifecycle of pods
 within the nodes and eventually relies on a container runtime to
 handle execution. The Kubelet architecture decouples lifecycle
 management from container execution through a dedicated gRPC based
-[Container Runtime Interface (CRI)](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/container-runtime-interface-v1.md).
+[Container Runtime Interface (CRI)](https://github.com/kubernetes/design-proposals-archive/blob/main/node/container-runtime-interface-v1.md).
 In other words, a Kubelet is a CRI client and expects a CRI
 implementation to handle the server side of the interface.
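
To make the client/server split described in the hunk above concrete, below is a minimal sketch of a CRI client (not Kubelet or Kata code; the socket path and the use of the v1 CRI API are assumptions for illustration):

```go
// Minimal CRI client sketch: the Kubelet plays this client role, and the
// container runtime implements the server side of the same gRPC interface.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// The endpoint is an assumption; the CRI socket path is configurable
	// and differs between runtimes.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Version is one of the CRI RuntimeService RPCs answered by the runtime.
	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Println(resp.RuntimeName, resp.RuntimeVersion)
}
```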

@@ -68,7 +68,7 @@ the Kata logs import to the EFK stack.
 > stack they are able to utilise in order to modify and test as necessary.
 Minikube by default
-[configures](https://github.com/kubernetes/minikube/blob/master/deploy/iso/minikube-iso/board/coreos/minikube/rootfs-overlay/etc/systemd/journald.conf)
+[configures](https://github.com/kubernetes/minikube/blob/master/deploy/iso/minikube-iso/board/minikube/x86_64/rootfs-overlay/etc/systemd/journald.conf)
 the `systemd-journald` with the
 [`Storage=volatile`](https://www.freedesktop.org/software/systemd/man/journald.conf.html) option,
 which results in the journal being stored in `/run/log/journal`. Unfortunately, the Minikube EFK
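
For reference on the option the hunk above links to, `Storage=` lives in the `[Journal]` section of `journald.conf`; a minimal sketch (not Minikube's actual file) looks like this:

```
[Journal]
# volatile keeps the journal on tmpfs under /run/log/journal;
# persistent would write it under /var/log/journal instead.
Storage=volatile
```
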
@@ -163,7 +163,7 @@ sub-filter on, for instance, the `SYSLOG_IDENTIFIER` to differentiate the Kata c
 on the `PRIORITY` to filter out critical issues etc.
 Kata generates a significant amount of Kata specific information, which can be seen as
-[`logfmt`](https://github.com/kata-containers/tests/tree/main/cmd/log-parser#logfile-requirements).
+[`logfmt`](../../src/tools/log-parser/README.md#logfile-requirements).
 data contained in the `MESSAGE` field. Imported as-is, there is no easy way to filter on that data
 in Kibana:
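
As an illustration of the `logfmt` data mentioned in the hunk above, a journal entry's `MESSAGE` field carries key=value pairs along these lines (the field names and values here are invented):

```
time="2022-06-15T15:50:56+08:00" level=info msg="sandbox started" name=kata-runtime pid=12345 sandbox=abc123 source=runtime
```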

@@ -125,7 +125,7 @@ $ kata-runtime env
 For detailed information and analysis on obtaining logs for other system
 components, see the documentation for the
-[`kata-log-parser`](https://github.com/kata-containers/tests/tree/main/cmd/log-parser)
+[`kata-log-parser`](../tools/log-parser)
 tool.
 ### Kata containerd shimv2