mirror of
https://github.com/kata-containers/kata-containers.git
synced 2026-03-18 10:44:10 +00:00
Compare commits
36 Commits
- 0ad6f05dee
- 4c9c01a124
- f2319d693d
- 98ccf8f6a1
- cae48e9c9b
- a36103c759
- 6abbcc551c
- 342aa95cc8
- 9f75e226f1
- 363fbed804
- 54a638317a
- 8ce6b12b41
- f840de5acb
- 952cea5f5d
- cc965fa0cb
- 44b1473d0c
- 565efd1bf2
- f41cc18427
- e059b50f5c
- 71ce6f537f
- a2b73b60bd
- 2ce9ce7b8f
- 30fc2c863d
- 24028969c2
- 4e54aa5a7b
- d815393c3e
- 4111e1a3de
- 2918be180f
- 6b31b06832
- 53a9cf7dc4
- 5589b246d7
- 1da88dca4b
- 8cc2231818
- 63c1498f05
- 3e2f9223b0
- 4c21cb3eb1
.github/workflows/release.yaml (vendored, 5 changes)
@@ -140,13 +140,10 @@ jobs:
       - uses: actions/checkout@v2
       - name: generate-and-upload-tarball
         run: |
-          pushd $GITHUB_WORKSPACE/src/agent
-          cargo vendor >> .cargo/config
-          popd
           tag=$(echo $GITHUB_REF | cut -d/ -f3-)
           tarball="kata-containers-$tag-vendor.tar.gz"
           pushd $GITHUB_WORKSPACE
-          tar -cvzf "${tarball}" src/agent/.cargo/config src/agent/vendor
+          bash -c "tools/packaging/release/generate_vendor.sh ${tarball}"
           GITHUB_TOKEN=${{ secrets.GIT_UPLOAD_TOKEN }} hub release edit -m "" -a "${tarball}" "${tag}"
           popd

@@ -104,26 +104,69 @@ $ sudo kubeadm init --ignore-preflight-errors=all --cri-socket /run/containerd/c
 $ export KUBECONFIG=/etc/kubernetes/admin.conf
 ```
 
-You can force Kubelet to use Kata Containers by adding some `untrusted`
-annotation to your pod configuration. In our case, this ensures Kata
-Containers is the selected runtime to run the described workload.
-
-`nginx-untrusted.yaml`
-```yaml
-apiVersion: v1
-kind: Pod
-metadata:
-  name: nginx-untrusted
-  annotations:
-    io.kubernetes.cri.untrusted-workload: "true"
-spec:
-  containers:
-  - name: nginx
-    image: nginx
-```
-
-Next, you run your pod:
-```
-$ sudo -E kubectl apply -f nginx-untrusted.yaml
-```
+### Allow pods to run in the master node
+
+By default, the cluster will not schedule pods in the master node. To enable master node scheduling:
+
+```bash
+$ sudo -E kubectl taint nodes --all node-role.kubernetes.io/master-
+```
+
+### Create runtime class for Kata Containers
+
+Users can use [`RuntimeClass`](https://kubernetes.io/docs/concepts/containers/runtime-class/#runtime-class) to specify a different runtime for Pods.
+
+```bash
+$ cat > runtime.yaml <<EOF
+apiVersion: node.k8s.io/v1
+kind: RuntimeClass
+metadata:
+  name: kata
+handler: kata
+EOF
+
+$ sudo -E kubectl apply -f runtime.yaml
+```
+
+### Run pod in Kata Containers
+
+If a pod has the `runtimeClassName` set to `kata`, the CRI plugin runs the pod with the
+[Kata Containers runtime](../../src/runtime/README.md).
+
+- Create an pod configuration that using Kata Containers runtime
+
+  ```bash
+  $ cat << EOF | tee nginx-kata.yaml
+  apiVersion: v1
+  kind: Pod
+  metadata:
+    name: nginx-kata
+  spec:
+    runtimeClassName: kata
+    containers:
+    - name: nginx
+      image: nginx
+  EOF
+  ```
+
+- Create the pod
+
+  ```bash
+  $ sudo -E kubectl apply -f nginx-kata.yaml
+  ```
+
+- Check pod is running
+
+  ```bash
+  $ sudo -E kubectl get pods
+  ```
+
+- Check hypervisor is running
+
+  ```bash
+  $ ps aux | grep qemu
+  ```
+
+### Delete created pod
+
+```bash
+$ sudo -E kubectl delete -f nginx-kata.yaml
+```

@@ -21,20 +21,7 @@ CONFIG_X86_SGX_KVM=y
 * [Intel SGX Kubernetes device plugin](https://github.com/intel/intel-device-plugins-for-kubernetes/tree/main/cmd/sgx_plugin#deploying-with-pre-built-images)
 
 > Note: Kata Containers supports creating VM sandboxes with Intel® SGX enabled
-> using [cloud-hypervisor](https://github.com/cloud-hypervisor/cloud-hypervisor/) VMM only. QEMU support is waiting to get the
-> Intel SGX enabled QEMU upstream release.
-
-## Installation
-
-### Kata Containers Guest Kernel
-
-Follow the instructions to [setup](../../tools/packaging/kernel/README.md#setup-kernel-source-code) and [build](../../tools/packaging/kernel/README.md#build-the-kernel) the experimental guest kernel. Then, install as:
-
-```sh
-$ sudo cp kata-linux-experimental-*/vmlinux /opt/kata/share/kata-containers/vmlinux.sgx
-$ sudo sed -i 's|vmlinux.container|vmlinux.sgx|g' \
-    /opt/kata/share/defaults/kata-containers/configuration-clh.toml
-```
+> using [cloud-hypervisor](https://github.com/cloud-hypervisor/cloud-hypervisor/) and [QEMU](https://www.qemu.org/) VMMs only.
 
 ### Kata Containers Configuration
 
@@ -48,6 +35,8 @@ to the `sandbox` are: `["io.katacontainers.*", "sgx.intel.com/epc"]`.
 
 With the following sample job deployed using `kubectl apply -f`:
 
+> Note: Change the `runtimeClassName` option accordingly, only `kata-clh` and `kata-qemu` support Intel® SGX.
+
 ```yaml
 apiVersion: batch/v1
 kind: Job

@@ -8,8 +8,8 @@ use std::fs::File;
 use std::os::unix::io::RawFd;
 use tokio::sync::mpsc::Sender;
 
+use nix::errno::Errno;
 use nix::fcntl::{fcntl, FcntlArg, OFlag};
-use nix::sys::signal::{self, Signal};
 use nix::sys::wait::{self, WaitStatus};
 use nix::unistd::{self, Pid};
 use nix::Result;
@@ -80,7 +80,7 @@ pub struct Process {
 pub trait ProcessOperations {
     fn pid(&self) -> Pid;
     fn wait(&self) -> Result<WaitStatus>;
-    fn signal(&self, sig: Signal) -> Result<()>;
+    fn signal(&self, sig: libc::c_int) -> Result<()>;
 }
 
 impl ProcessOperations for Process {
@@ -92,8 +92,10 @@ impl ProcessOperations for Process {
         wait::waitpid(Some(self.pid()), None)
     }
 
-    fn signal(&self, sig: Signal) -> Result<()> {
-        signal::kill(self.pid(), Some(sig))
+    fn signal(&self, sig: libc::c_int) -> Result<()> {
+        let res = unsafe { libc::kill(self.pid().into(), sig) };
+
+        Errno::result(res).map(drop)
     }
 }
@@ -281,6 +283,6 @@ mod tests {
         // signal to every process in the process
         // group of the calling process.
         process.pid = 0;
-        assert!(process.signal(Signal::SIGCONT).is_ok());
+        assert!(process.signal(libc::SIGCONT).is_ok());
     }
 }

@@ -19,6 +19,7 @@ use ttrpc::{
 };
 
 use anyhow::{anyhow, Context, Result};
+use cgroups::freezer::FreezerState;
 use oci::{LinuxNamespace, Root, Spec};
 use protobuf::{Message, RepeatedField, SingularPtrField};
 use protocols::agent::{
@@ -39,9 +40,9 @@ use rustjail::specconv::CreateOpts;
 
 use nix::errno::Errno;
 use nix::mount::MsFlags;
-use nix::sys::signal::Signal;
 use nix::sys::stat;
 use nix::unistd::{self, Pid};
+use rustjail::cgroups::Manager;
 use rustjail::process::ProcessOperations;
 
 use sysinfo::{DiskExt, System, SystemExt};
@@ -69,7 +70,6 @@ use tracing_opentelemetry::OpenTelemetrySpanExt;
 use tracing::instrument;
 
 use libc::{self, c_char, c_ushort, pid_t, winsize, TIOCSWINSZ};
-use std::convert::TryFrom;
 use std::fs;
 use std::os::unix::fs::MetadataExt;
 use std::os::unix::prelude::PermissionsExt;
@@ -389,7 +389,6 @@ impl AgentService {
         let cid = req.container_id.clone();
         let eid = req.exec_id.clone();
         let s = self.sandbox.clone();
-        let mut sandbox = s.lock().await;
 
         info!(
             sl!(),
@@ -398,27 +397,93 @@ impl AgentService {
             "exec-id" => eid.clone(),
         );
 
-        let p = sandbox.find_container_process(cid.as_str(), eid.as_str())?;
-
-        let mut signal = Signal::try_from(req.signal as i32).map_err(|e| {
-            anyhow!(e).context(format!(
-                "failed to convert {:?} to signal (container-id: {}, exec-id: {})",
-                req.signal, cid, eid
-            ))
-        })?;
-
-        // For container initProcess, if it hasn't installed handler for "SIGTERM" signal,
-        // it will ignore the "SIGTERM" signal sent to it, thus send it "SIGKILL" signal
-        // instead of "SIGTERM" to terminate it.
-        if p.init && signal == Signal::SIGTERM && !is_signal_handled(p.pid, req.signal) {
-            signal = Signal::SIGKILL;
-        }
-
-        p.signal(signal)?;
+        let mut sig: libc::c_int = req.signal as libc::c_int;
+        {
+            let mut sandbox = s.lock().await;
+            let p = sandbox.find_container_process(cid.as_str(), eid.as_str())?;
+            // For container initProcess, if it hasn't installed handler for "SIGTERM" signal,
+            // it will ignore the "SIGTERM" signal sent to it, thus send it "SIGKILL" signal
+            // instead of "SIGTERM" to terminate it.
+            if p.init && sig == libc::SIGTERM && !is_signal_handled(p.pid, sig as u32) {
+                sig = libc::SIGKILL;
+            }
+            p.signal(sig)?;
+        }
+
+        if eid.is_empty() {
+            // eid is empty, signal all the remaining processes in the container cgroup
+            info!(
+                sl!(),
+                "signal all the remaining processes";
+                "container-id" => cid.clone(),
+                "exec-id" => eid.clone(),
+            );
+
+            if let Err(err) = self.freeze_cgroup(&cid, FreezerState::Frozen).await {
+                warn!(
+                    sl!(),
+                    "freeze cgroup failed";
+                    "container-id" => cid.clone(),
+                    "exec-id" => eid.clone(),
+                    "error" => format!("{:?}", err),
+                );
+            }
+
+            let pids = self.get_pids(&cid).await?;
+            for pid in pids.iter() {
+                let res = unsafe { libc::kill(*pid, sig) };
+                if let Err(err) = Errno::result(res).map(drop) {
+                    warn!(
+                        sl!(),
+                        "signal failed";
+                        "container-id" => cid.clone(),
+                        "exec-id" => eid.clone(),
+                        "pid" => pid,
+                        "error" => format!("{:?}", err),
+                    );
+                }
+            }
+            if let Err(err) = self.freeze_cgroup(&cid, FreezerState::Thawed).await {
+                warn!(
+                    sl!(),
+                    "unfreeze cgroup failed";
+                    "container-id" => cid.clone(),
+                    "exec-id" => eid.clone(),
+                    "error" => format!("{:?}", err),
+                );
+            }
+        }
         Ok(())
     }
 
+    async fn freeze_cgroup(&self, cid: &str, state: FreezerState) -> Result<()> {
+        let s = self.sandbox.clone();
+        let mut sandbox = s.lock().await;
+        let ctr = sandbox
+            .get_container(cid)
+            .ok_or_else(|| anyhow!("Invalid container id {}", cid))?;
+        let cm = ctr
+            .cgroup_manager
+            .as_ref()
+            .ok_or_else(|| anyhow!("cgroup manager not exist"))?;
+        cm.freeze(state)?;
+        Ok(())
+    }
+
+    async fn get_pids(&self, cid: &str) -> Result<Vec<i32>> {
+        let s = self.sandbox.clone();
+        let mut sandbox = s.lock().await;
+        let ctr = sandbox
+            .get_container(cid)
+            .ok_or_else(|| anyhow!("Invalid container id {}", cid))?;
+        let cm = ctr
+            .cgroup_manager
+            .as_ref()
+            .ok_or_else(|| anyhow!("cgroup manager not exist"))?;
+        let pids = cm.get_pids()?;
+        Ok(pids)
+    }
+
 #[instrument]
 async fn do_wait_process(
     &self,

@@ -589,12 +589,10 @@ $(GENERATED_FILES): %: %.in $(MAKEFILE_LIST) VERSION .git-commit
 
 generate-config: $(CONFIGS)
 
-test: install-hook go-test
+test: hook go-test
 
-install-hook:
+hook:
 	make -C virtcontainers hook
-	echo "installing mock hook"
-	sudo -E make -C virtcontainers install
 
 go-test: $(GENERATED_FILES)
 	go clean -testcache

@@ -21,7 +21,7 @@ import (
 const defaultListenAddress = "127.0.0.1:8090"
 
 var monitorListenAddr = flag.String("listen-address", defaultListenAddress, "The address to listen on for HTTP requests.")
-var runtimeEndpoint = flag.String("runtime-endpoint", "/run/containerd/containerd.sock", `Endpoint of CRI container runtime service. (default: "/run/containerd/containerd.sock")`)
+var runtimeEndpoint = flag.String("runtime-endpoint", "/run/containerd/containerd.sock", "Endpoint of CRI container runtime service.")
 var logLevel = flag.String("log-level", "info", "Log level of logrus(trace/debug/info/warn/error/fatal/panic).")
 
 // These values are overridden via ldflags

@@ -776,6 +776,8 @@ func (s *service) Kill(ctx context.Context, r *taskAPI.KillRequest) (_ *ptypes.E
 			return empty, errors.New("The exec process does not exist")
 		}
 		processStatus = execs.status
+	} else {
+		r.All = true
 	}
 
 	// According to CRI specs, kubelet will call StopPodSandbox()

@@ -8,12 +8,14 @@ package containerdshim
 import (
 	"context"
 	"fmt"
+
+	"github.com/sirupsen/logrus"
 
 	"github.com/containerd/containerd/api/types/task"
 	"github.com/kata-containers/kata-containers/src/runtime/pkg/katautils"
 )
 
 func startContainer(ctx context.Context, s *service, c *container) (retErr error) {
+	shimLog.WithField("container", c.id).Debug("start container")
 	defer func() {
 		if retErr != nil {
 			// notify the wait goroutine to continue
@@ -78,7 +80,8 @@ func startContainer(ctx context.Context, s *service, c *container) (retErr error
 		return err
 	}
 	c.ttyio = tty
-	go ioCopy(c.exitIOch, c.stdinCloser, tty, stdin, stdout, stderr)
+
+	go ioCopy(shimLog.WithField("container", c.id), c.exitIOch, c.stdinCloser, tty, stdin, stdout, stderr)
 } else {
 	// close the io exit channel, since there is no io for this container,
 	// otherwise the following wait goroutine will hang on this channel.
@@ -94,6 +97,10 @@ func startContainer(ctx context.Context, s *service, c *container) (retErr error
 }
 
 func startExec(ctx context.Context, s *service, containerID, execID string) (e *exec, retErr error) {
+	shimLog.WithFields(logrus.Fields{
+		"container": containerID,
+		"exec":      execID,
+	}).Debug("start container execution")
 	// start an exec
 	c, err := s.getContainer(containerID)
 	if err != nil {
@@ -140,7 +147,10 @@ func startExec(ctx context.Context, s *service, containerID, execID string) (e *
 	}
 	execs.ttyio = tty
 
-	go ioCopy(execs.exitIOch, execs.stdinCloser, tty, stdin, stdout, stderr)
+	go ioCopy(shimLog.WithFields(logrus.Fields{
+		"container": c.id,
+		"exec":      execID,
+	}), execs.exitIOch, execs.stdinCloser, tty, stdin, stdout, stderr)
 
 	go wait(ctx, s, c, execID)
 
@@ -12,6 +12,7 @@ import (
 	"syscall"
 
 	"github.com/containerd/fifo"
+	"github.com/sirupsen/logrus"
 )
 
 // The buffer size used to specify the buffer for IO streams copy
@@ -86,18 +87,20 @@ func newTtyIO(ctx context.Context, stdin, stdout, stderr string, console bool) (
 	return ttyIO, nil
 }
 
-func ioCopy(exitch, stdinCloser chan struct{}, tty *ttyIO, stdinPipe io.WriteCloser, stdoutPipe, stderrPipe io.Reader) {
+func ioCopy(shimLog *logrus.Entry, exitch, stdinCloser chan struct{}, tty *ttyIO, stdinPipe io.WriteCloser, stdoutPipe, stderrPipe io.Reader) {
 	var wg sync.WaitGroup
 
 	if tty.Stdin != nil {
 		wg.Add(1)
 		go func() {
+			shimLog.Debug("stdin io stream copy started")
 			p := bufPool.Get().(*[]byte)
 			defer bufPool.Put(p)
 			io.CopyBuffer(stdinPipe, tty.Stdin, *p)
 			// notify that we can close process's io safely.
 			close(stdinCloser)
 			wg.Done()
+			shimLog.Debug("stdin io stream copy exited")
 		}()
 	}
 
@@ -105,6 +108,7 @@ func ioCopy(exitch, stdinCloser chan struct{}, tty *ttyIO, stdinPipe io.WriteClo
 		wg.Add(1)
 		go func() {
+			shimLog.Debug("stdout io stream copy started")
 			p := bufPool.Get().(*[]byte)
 			defer bufPool.Put(p)
 			io.CopyBuffer(tty.Stdout, stdoutPipe, *p)
@@ -113,20 +117,24 @@ func ioCopy(exitch, stdinCloser chan struct{}, tty *ttyIO, stdinPipe io.WriteClo
 				// close stdin to make the other routine stop
 				tty.Stdin.Close()
 			}
+			shimLog.Debug("stdout io stream copy exited")
 		}()
 	}
 
 	if tty.Stderr != nil && stderrPipe != nil {
 		wg.Add(1)
 		go func() {
+			shimLog.Debug("stderr io stream copy started")
 			p := bufPool.Get().(*[]byte)
 			defer bufPool.Put(p)
 			io.CopyBuffer(tty.Stderr, stderrPipe, *p)
 			wg.Done()
+			shimLog.Debug("stderr io stream copy exited")
 		}()
 	}
 
 	wg.Wait()
 	tty.close()
 	close(exitch)
+	shimLog.Debug("all io stream copy goroutines exited")
 }

@@ -7,6 +7,7 @@ package containerdshim
 
 import (
 	"context"
+	"github.com/sirupsen/logrus"
 	"io"
 	"os"
 	"path/filepath"
@@ -179,7 +180,7 @@ func TestIoCopy(t *testing.T) {
 	defer tty.close()
 
 	// start the ioCopy threads : copy from src to dst
-	go ioCopy(exitioch, stdinCloser, tty, dstInW, srcOutR, srcErrR)
+	go ioCopy(logrus.WithContext(context.Background()), exitioch, stdinCloser, tty, dstInW, srcOutR, srcErrR)
 
 	var firstW, secondW, thirdW io.WriteCloser
 	var firstR, secondR, thirdR io.Reader

@@ -15,7 +15,6 @@ import (
 	"github.com/containerd/containerd/api/types/task"
 	"github.com/containerd/containerd/mount"
 	"github.com/sirupsen/logrus"
-	"google.golang.org/grpc/codes"
 
 	"github.com/kata-containers/kata-containers/src/runtime/pkg/oci"
 )
@@ -31,12 +30,17 @@ func wait(ctx context.Context, s *service, c *container, execID string) (int32,
 	if execID == "" {
 		//wait until the io closed, then wait the container
 		<-c.exitIOch
+		shimLog.WithField("container", c.id).Debug("The container io streams closed")
 	} else {
 		execs, err = c.getExec(execID)
 		if err != nil {
 			return exitCode255, err
 		}
 		<-execs.exitIOch
+		shimLog.WithFields(logrus.Fields{
+			"container": c.id,
+			"exec":      execID,
+		}).Debug("The container process io streams closed")
 		//This wait could be triggered before exec start which
 		//will get the exec's id, thus this assignment must after
 		//the exec exit, to make sure it get the exec's id.
@@ -63,6 +67,7 @@ func wait(ctx context.Context, s *service, c *container, execID string) (int32,
 	if c.cType.IsSandbox() {
 		// cancel watcher
 		if s.monitor != nil {
+			shimLog.WithField("sandbox", s.sandbox.ID()).Info("cancel watcher")
 			s.monitor <- nil
 		}
 		if err = s.sandbox.Stop(ctx, true); err != nil {
@@ -82,13 +87,17 @@ func wait(ctx context.Context, s *service, c *container, execID string) (int32,
 		c.exitTime = timeStamp
 
 		c.exitCh <- uint32(ret)
+		shimLog.WithField("container", c.id).Debug("The container status is StatusStopped")
 	} else {
 		execs.status = task.StatusStopped
 		execs.exitCode = ret
 		execs.exitTime = timeStamp
 
 		execs.exitCh <- uint32(ret)
+		shimLog.WithFields(logrus.Fields{
+			"container": c.id,
+			"exec":      execID,
+		}).Debug("The container exec status is StatusStopped")
 	}
 	s.mu.Unlock()
 
@@ -102,6 +111,7 @@ func watchSandbox(ctx context.Context, s *service) {
 		return
 	}
 	err := <-s.monitor
+	shimLog.WithError(err).WithField("sandbox", s.sandbox.ID()).Info("watchSandbox gets an error or stop signal")
 	if err == nil {
 		return
 	}
@@ -147,13 +157,11 @@ func watchOOMEvents(ctx context.Context, s *service) {
 		default:
 			containerID, err := s.sandbox.GetOOMEvent(ctx)
 			if err != nil {
-				shimLog.WithError(err).Warn("failed to get OOM event from sandbox")
-				// If the GetOOMEvent call is not implemented, then the agent is most likely an older version,
-				// stop attempting to get OOM events.
-				// for rust agent, the response code is not found
-				if isGRPCErrorCode(codes.NotFound, err) || err.Error() == "Dead agent" {
+				if err.Error() == "ttrpc: closed" || err.Error() == "Dead agent" {
+					shimLog.WithError(err).Warn("agent has shutdown, return from watching of OOM events")
 					return
 				}
+				shimLog.WithError(err).Warn("failed to get OOM event from sandbox")
 				time.Sleep(defaultCheckInterval)
 				continue
 			}
@@ -20,7 +20,7 @@ import (
 var testKeyHook = "test-key"
 var testContainerIDHook = "test-container-id"
 var testControllerIDHook = "test-controller-id"
-var testBinHookPath = "/usr/bin/virtcontainers/bin/test/hook"
+var testBinHookPath = "../../virtcontainers/hook/mock/hook"
 var testBundlePath = "/test/bundle"
 
 func getMockHookBinPath() string {

@@ -12,6 +12,7 @@ import (
 	"os"
 	"path/filepath"
 	"strconv"
+	"strings"
 	"syscall"
 	"time"
 
@@ -1060,7 +1061,18 @@ func (c *Container) signalProcess(ctx context.Context, processID string, signal
 		return fmt.Errorf("Container not ready, running or paused, impossible to signal the container")
 	}
 
-	return c.sandbox.agent.signalProcess(ctx, c, processID, signal, all)
+	// kill(2) method can return ESRCH in certain cases, which is not handled by containerd cri server in container_stop.go.
+	// CRIO server also doesn't handle ESRCH. So kata runtime will swallow it here.
+	var err error
+	if err = c.sandbox.agent.signalProcess(ctx, c, processID, signal, all); err != nil &&
+		strings.Contains(err.Error(), "ESRCH: No such process") {
+		c.Logger().WithFields(logrus.Fields{
+			"container":  c.id,
+			"process-id": processID,
+		}).Warn("signal encounters ESRCH, process already finished")
+		return nil
+	}
+	return err
 }
 
 func (c *Container) winsizeProcess(ctx context.Context, processID string, height, width uint32) error {

@@ -18,6 +18,8 @@ const (
 	watcherChannelSize = 128
 )
 
+var monitorLog = virtLog.WithField("subsystem", "virtcontainers/monitor")
+
 // nolint: govet
 type monitor struct {
 	watchers []chan error
@@ -33,6 +35,9 @@ type monitor struct {
 }
 
 func newMonitor(s *Sandbox) *monitor {
+	// there should only be one monitor for one sandbox,
+	// so it's safe to let monitorLog as a global variable.
+	monitorLog = monitorLog.WithField("sandbox", s.ID())
 	return &monitor{
 		sandbox:       s,
 		checkInterval: defaultCheckInterval,
@@ -72,6 +77,7 @@ func (m *monitor) newWatcher(ctx context.Context) (chan error, error) {
 }
 
 func (m *monitor) notify(ctx context.Context, err error) {
+	monitorLog.WithError(err).Warn("notify on errors")
 	m.sandbox.agent.markDead(ctx)
 
 	m.Lock()
@@ -85,18 +91,19 @@ func (m *monitor) notify(ctx context.Context, err error) {
 	// but just in case...
 	defer func() {
 		if x := recover(); x != nil {
-			virtLog.Warnf("watcher closed channel: %v", x)
+			monitorLog.Warnf("watcher closed channel: %v", x)
 		}
 	}()
 
 	for _, c := range m.watchers {
+		monitorLog.WithError(err).Warn("write error to watcher")
 		// throw away message can not write to channel
 		// make it not stuck, the first error is useful.
 		select {
 		case c <- err:
 
 		default:
-			virtLog.WithField("channel-size", watcherChannelSize).Warnf("watcher channel is full, throw notify message")
+			monitorLog.WithField("channel-size", watcherChannelSize).Warnf("watcher channel is full, throw notify message")
 		}
 	}
 }
@@ -104,6 +111,7 @@ func (m *monitor) notify(ctx context.Context, err error) {
 func (m *monitor) stop() {
 	// wait outside of monitor lock for the watcher channel to exit.
 	defer m.wg.Wait()
+	monitorLog.Info("stopping monitor")
 
 	m.Lock()
 	defer m.Unlock()
@@ -122,7 +130,7 @@ func (m *monitor) stop() {
 	// but just in case...
 	defer func() {
 		if x := recover(); x != nil {
-			virtLog.Warnf("watcher closed channel: %v", x)
+			monitorLog.Warnf("watcher closed channel: %v", x)
 		}
 	}()
 
@@ -321,6 +321,7 @@ func WaitLocalProcess(pid int, timeoutSecs uint, initialSignal syscall.Signal, l
 	if initialSignal != syscall.Signal(0) {
 		if err = syscall.Kill(pid, initialSignal); err != nil {
 			if err == syscall.ESRCH {
+				logger.WithField("pid", pid).Warnf("kill encounters ESRCH, process already finished")
 				return nil
 			}
 
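Both ESRCH hunks above apply the same rule: a `kill(2)` that fails with `ESRCH` means the target process has already exited, which is a success for a "make this process go away" path. A hypothetical shell equivalent of that check (a demo only, not part of the patches above):

```shell
#!/usr/bin/env bash
# Demo: signalling a process that has already exited fails with ESRCH
# ("No such process"); treat that as success instead of an error.
set -u

sleep 0.1 &
pid=$!
wait "$pid"            # child has exited and been reaped

if kill "$pid" 2>/dev/null; then
    echo "signalled ${pid}"
else
    # kill failed: the process is already gone, which is fine here
    echo "process ${pid} already finished"
fi
```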
@@ -143,7 +143,7 @@ $ kubectl -n kube-system wait --timeout=10m --for=delete -l name=kata-deploy pod
 
 After ensuring kata-deploy has been deleted, cleanup the cluster:
 ```sh
-$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-cleanup/base/kata-cleanup-stabe.yaml
+$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-cleanup/base/kata-cleanup-stable.yaml
 ```
 
 The cleanup daemon-set will run a single time, cleaning up the node-label, which makes it difficult to check in an automated fashion.
@@ -18,7 +18,7 @@ spec:
         katacontainers.io/kata-runtime: cleanup
       containers:
       - name: kube-kata-cleanup
-        image: quay.io/kata-containers/kata-deploy:latest
+        image: quay.io/kata-containers/kata-deploy:2.4.0
         imagePullPolicy: Always
         command: [ "bash", "-c", "/opt/kata-artifacts/scripts/kata-deploy.sh reset" ]
         env:
@@ -16,7 +16,7 @@ spec:
       serviceAccountName: kata-label-node
      containers:
      - name: kube-kata
-        image: quay.io/kata-containers/kata-deploy:latest
+        image: quay.io/kata-containers/kata-deploy:2.4.0
        imagePullPolicy: Always
        lifecycle:
          preStop:
@@ -1 +1 @@
-89
+90
@@ -0,0 +1,81 @@
+From 29c4a3363bf287bb9a7b0342b1bc2dba3661c96c Mon Sep 17 00:00:00 2001
+From: Fabiano Rosas <farosas@linux.ibm.com>
+Date: Fri, 17 Dec 2021 17:57:18 +0100
+Subject: [PATCH] Revert "target/ppc: Move SPR_DSISR setting to powerpc_excp"
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+This reverts commit 336e91f85332dda0ede4c1d15b87a19a0fb898a2.
+
+It breaks the --disable-tcg build:
+
+  ../target/ppc/excp_helper.c:463:29: error: implicit declaration of
+  function ‘cpu_ldl_code’ [-Werror=implicit-function-declaration]
+
+We should not have TCG code in powerpc_excp because some kvm-only
+routines use it indirectly to dispatch interrupts. See
+kvm_handle_debug, spapr_mce_req_event and
+spapr_do_system_reset_on_cpu.
+
+We can re-introduce the change once we have split the interrupt
+injection code between KVM and TCG.
+
+Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
+Message-Id: <20211209173323.2166642-1-farosas@linux.ibm.com>
+Signed-off-by: Cédric Le Goater <clg@kaod.org>
+---
+ target/ppc/excp_helper.c | 21 ++++++++++++---------
+ 1 file changed, 12 insertions(+), 9 deletions(-)
+
+diff --git a/target/ppc/excp_helper.c b/target/ppc/excp_helper.c
+index feb3fd42e2..6ba0840e99 100644
+--- a/target/ppc/excp_helper.c
++++ b/target/ppc/excp_helper.c
+@@ -464,15 +464,13 @@ static inline void powerpc_excp(PowerPCCPU *cpu, int excp_model, int excp)
+             break;
+         }
+     case POWERPC_EXCP_ALIGN:     /* Alignment exception */
++        /* Get rS/rD and rA from faulting opcode */
+         /*
+-         * Get rS/rD and rA from faulting opcode.
+-         * Note: We will only invoke ALIGN for atomic operations,
+-         * so all instructions are X-form.
++         * Note: the opcode fields will not be set properly for a
++         * direct store load/store, but nobody cares as nobody
++         * actually uses direct store segments.
+         */
+-        {
+-            uint32_t insn = cpu_ldl_code(env, env->nip);
+-            env->spr[SPR_DSISR] |= (insn & 0x03FF0000) >> 16;
+-        }
++        env->spr[SPR_DSISR] |= (env->error_code & 0x03FF0000) >> 16;
+         break;
+     case POWERPC_EXCP_PROGRAM:   /* Program exception */
+         switch (env->error_code & ~0xF) {
+@@ -1441,6 +1439,11 @@ void ppc_cpu_do_unaligned_access(CPUState *cs, vaddr vaddr,
+                                  int mmu_idx, uintptr_t retaddr)
+ {
+     CPUPPCState *env = cs->env_ptr;
++    uint32_t insn;
++
++    /* Restore state and reload the insn we executed, for filling in DSISR. */
++    cpu_restore_state(cs, retaddr, true);
++    insn = cpu_ldl_code(env, env->nip);
+
+     switch (env->mmu_model) {
+     case POWERPC_MMU_SOFT_4xx:
+@@ -1456,8 +1459,8 @@ void ppc_cpu_do_unaligned_access(CPUState *cs, vaddr vaddr,
+     }
+
+     cs->exception_index = POWERPC_EXCP_ALIGN;
+-    env->error_code = 0;
+-    cpu_loop_exit_restore(cs, retaddr);
++    env->error_code = insn & 0x03FF0000;
++    cpu_loop_exit(cs);
+ }
+ #endif /* CONFIG_TCG */
+ #endif /* !CONFIG_USER_ONLY */
+--
+GitLab
53	tools/packaging/release/generate_vendor.sh	Executable file
@@ -0,0 +1,53 @@
+#!/usr/bin/env bash
+#
+# Copyright (c) 2022 Intel Corporation
+#
+# SPDX-License-Identifier: Apache-2.0
+#
+
+set -o errexit
+set -o nounset
+set -o pipefail
+
+script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+script_name="$(basename "${BASH_SOURCE[0]}")"
+
+# This is very much error prone in case we re-structure our
+# repos again, but it's also used in a few other places :-/
+repo_dir="${script_dir}/../../.."
+
+function usage() {
+
+	cat <<EOF
+Usage: ${script_name} tarball-name
+
+This script creates a tarball with all the cargo vendored code
+that a distro would need to do a full build of the project in
+a disconnected environment, generating a "tarball-name" file.
+
+EOF
+
+}
+
+create_vendor_tarball() {
+	vendor_dir_list=""
+	pushd ${repo_dir}
+	for i in $(find . -name 'Cargo.lock'); do
+		dir="$(dirname $i)"
+		pushd "${dir}"
+		[ -d .cargo ] || mkdir .cargo
+		cargo vendor >> .cargo/config
+		vendor_dir_list+=" $dir/vendor $dir/.cargo/config"
+		echo "${vendor_dir_list}"
+		popd
+	done
+	popd
+
+	tar -cvzf ${1} ${vendor_dir_list}
+}
+
+main () {
+	[ $# -ne 1 ] && usage && exit 0
+	create_vendor_tarball ${1}
+}
+
+main "$@"
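The `create_vendor_tarball()` function above walks the tree for every `Cargo.lock`, runs `cargo vendor` next to it, and records the matching `vendor/` and `.cargo/config` paths for the tarball. A runnable sketch of just that directory walk, on a mock tree and with the `cargo vendor` call stubbed out (the paths and file contents here are invented for the demo):

```shell
#!/usr/bin/env bash
# Demo of the Cargo.lock walk: collect "<dir>/vendor <dir>/.cargo/config"
# for every crate in a mock repository tree, then archive the collected list.
set -o errexit

repo_dir="$(mktemp -d)"
mkdir -p "${repo_dir}/src/agent" "${repo_dir}/src/runtime-rs"
touch "${repo_dir}/src/agent/Cargo.lock" "${repo_dir}/src/runtime-rs/Cargo.lock"

vendor_dir_list=""
pushd "${repo_dir}" >/dev/null
for i in $(find . -name 'Cargo.lock'); do
    dir="$(dirname "$i")"
    pushd "${dir}" >/dev/null
    [ -d .cargo ] || mkdir .cargo
    # Stand-in for "cargo vendor >> .cargo/config":
    echo '[source.crates-io]' > .cargo/config
    mkdir -p vendor
    vendor_dir_list+=" $dir/vendor $dir/.cargo/config"
    popd >/dev/null
done
popd >/dev/null

echo "would archive:${vendor_dir_list}"
# Unquoted on purpose: the list is a space-separated set of tar members.
tar -C "${repo_dir}" -czf "${repo_dir}/vendor.tar.gz" ${vendor_dir_list}
```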
@@ -68,7 +68,6 @@ generate_kata_deploy_commit() {
 kata-deploy files must be adapted to a new release. The cases where it
 happens are when the release goes from -> to:
 * main -> stable:
-  * kata-deploy / kata-cleanup: change from \"latest\" to \"rc0\"
   * kata-deploy-stable / kata-cleanup-stable: are removed
 
 * stable -> stable:
@@ -161,7 +160,7 @@ bump_repo() {
 	#                    +----------------+----------------+
 	#                    |      from      |       to       |
 	# -------------------+----------------+----------------+
-	# kata-deploy        |    "latest"    |      "rc0"     |
+	# kata-deploy        |    "latest"    |    "latest"    |
 	# -------------------+----------------+----------------+
 	# kata-deploy-stable |    "stable"    |     REMOVED    |
 	# -------------------+----------------+----------------+
@@ -183,29 +182,34 @@ bump_repo() {
 		info "Updating kata-deploy / kata-cleanup image tags"
 		local version_to_replace="${current_version}"
 		local replacement="${new_version}"
-		if [ "${target_branch}" == "main" ]; then
+		local need_commit=false
+		if [ "${target_branch}" == "main" ];then
 			if [[ "${new_version}" =~ "rc" ]]; then
-				## this is the case 2) where we remove te kata-deploy / kata-cleanup stable files
+				## We are bumping from alpha to RC, should drop kata-deploy-stable yamls.
 				git rm "${kata_deploy_stable_yaml}"
 				git rm "${kata_cleanup_stable_yaml}"
-			else
-				## this is the case 1) where we just do nothing
-				replacement="latest"
+
+				need_commit=true
 			fi
-			version_to_replace="latest"
-		fi
-
-		if [ "${version_to_replace}" != "${replacement}" ]; then
-			## this covers case 2) and 3), as on both of them we have changes on kata-deploy / kata-cleanup files
-			sed -i "s#${registry}:${version_to_replace}#${registry}:${new_version}#g" "${kata_deploy_yaml}"
-			sed -i "s#${registry}:${version_to_replace}#${registry}:${new_version}#g" "${kata_cleanup_yaml}"
+		elif [ "${new_version}" != *"rc"* ]; then
+			## We are on a stable branch and creating new stable releases.
+			## Need to change kata-deploy / kata-cleanup to use the stable tags.
+			if [[ "${version_to_replace}" =~ "rc" ]]; then
+				## Coming from "rcX" so from the latest tag.
+				version_to_replace="latest"
+			fi
+			sed -i "s#${registry}:${version_to_replace}#${registry}:${replacement}#g" "${kata_deploy_yaml}"
+			sed -i "s#${registry}:${version_to_replace}#${registry}:${replacement}#g" "${kata_cleanup_yaml}"
 
 			git diff
 
 			git add "${kata_deploy_yaml}"
 			git add "${kata_cleanup_yaml}"
 
+			need_commit=true
+		fi
+
+		if [ "${need_commit}" == "true" ]; then
 			info "Creating the commit with the kata-deploy changes"
 			local commit_msg="$(generate_kata_deploy_commit $new_version)"
 			git commit -s -m "${commit_msg}"
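The image-tag bump above relies on `sed` with `#` as the delimiter, so the slashes in the registry path need no escaping. A standalone sketch of that substitution against a throwaway YAML (registry and versions are just example values; `sed -i` as used here is the GNU form, matching the script):

```shell
#!/usr/bin/env bash
# Demo: swap the kata-deploy image tag the same way bump_repo() does.
set -o errexit

registry="quay.io/kata-containers/kata-deploy"
version_to_replace="latest"
replacement="2.4.0"

yaml="$(mktemp)"
cat > "$yaml" <<EOF
containers:
- name: kube-kata
  image: ${registry}:latest
  imagePullPolicy: Always
EOF

# '#' as the s/// delimiter avoids escaping every '/' in the registry path.
sed -i "s#${registry}:${version_to_replace}#${registry}:${replacement}#g" "$yaml"
grep 'image:' "$yaml"
```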
@@ -250,7 +250,6 @@ generate_qemu_options() {
 	qemu_options+=(size:--disable-auth-pam)
 
 	# Disable unused filesystem support
-	[ "$arch" == x86_64 ] && qemu_options+=(size:--disable-fdt)
 	qemu_options+=(size:--disable-glusterfs)
 	qemu_options+=(size:--disable-libiscsi)
 	qemu_options+=(size:--disable-libnfs)
@@ -303,7 +302,6 @@ generate_qemu_options() {
 		;;
 	esac
 	qemu_options+=(size:--disable-qom-cast-debug)
-	qemu_options+=(size:--disable-tcmalloc)
 
 	# Disable libudev since it is only needed for qemu-pr-helper and USB,
 	# none of which are used with Kata
@@ -208,11 +208,14 @@ Description: Install $kata_project [1] (and optionally $containerd_project [2])
 Options:
 
  -c <version> : Specify containerd version.
+ -d           : Enable debug for all components.
  -f           : Force installation (use with care).
  -h           : Show this help statement.
  -k <version> : Specify Kata Containers version.
  -o           : Only install Kata Containers.
  -r           : Don't cleanup on failure (retain files).
+ -t           : Disable self test (don't try to create a container after install).
+ -T           : Only run self test (do not install anything).
 
 Notes:
 
@@ -402,13 +405,21 @@ install_containerd()
 
 	sudo tar -C /usr/local -xvf "${file}"
 
-	sudo ln -sf /usr/local/bin/ctr "${link_dir}"
+	for file in \
+		/usr/local/bin/containerd \
+		/usr/local/bin/ctr
+	do
+		sudo ln -sf "$file" "${link_dir}"
+	done
 
 	info "$project installed\n"
 }
 
 configure_containerd()
 {
+	local enable_debug="${1:-}"
+	[ -z "$enable_debug" ] && die "no enable debug value"
+
 	local project="$containerd_project"
 
 	info "Configuring $project"
@@ -460,26 +471,55 @@ configure_containerd()
 		info "Backed up $cfg to $original"
 	}
 
+	local modified="false"
+
 	# Add the Kata Containers configuration details:
 
+	local comment_text
+	comment_text=$(printf "%s: Added by %s\n" \
+		"$(date -Iseconds)" \
+		"$script_name")
+
 	sudo grep -q "$kata_runtime_type" "$cfg" || {
 		cat <<-EOT | sudo tee -a "$cfg"
+		# $comment_text
 		[plugins]
 		[plugins."io.containerd.grpc.v1.cri"]
 		[plugins."io.containerd.grpc.v1.cri".containerd]
 		default_runtime_name = "${kata_runtime_name}"
 		[plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
 		[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.${kata_runtime_name}]
 		runtime_type = "${kata_runtime_type}"
 		EOT
 
-		info "Modified $cfg"
+		modified="true"
 	}
 
+	if [ "$enable_debug" = "true" ]
+	then
+		local debug_enabled
+		debug_enabled=$(awk -v RS='' '/\[debug\]/' "$cfg" |\
+			grep -E "^\s*\<level\>\s*=\s*.*\<debug\>" || true)
+
+		[ -n "$debug_enabled" ] || {
+			cat <<-EOT | sudo tee -a "$cfg"
+			# $comment_text
+			[debug]
+				level = "debug"
+			EOT
+		}
+
+		modified="true"
+	fi
+
+	[ "$modified" = "true" ] && info "Modified $cfg"
 	sudo systemctl enable containerd
 	sudo systemctl start containerd
 
-	info "Configured $project\n"
+	local msg="disabled"
+	[ "$enable_debug" = "true" ] && msg="enabled"
+
+	info "Configured $project (debug $msg)\n"
 }
 
 install_kata()
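The debug probe above reads the containerd config with `awk -v RS=''` (paragraph mode), so the `level` check only looks inside the `[debug]` block rather than anywhere in the file. A standalone sketch with an invented sample config:

```shell
#!/usr/bin/env bash
# Demo: detect whether a containerd-style TOML already sets
# level = "debug" inside its [debug] section, as configure_containerd() does.
set -o errexit

cfg="$(mktemp)"
cat > "$cfg" <<'EOF'
version = 2

[debug]
  level = "debug"

[plugins]
EOF

# RS='' makes awk treat blank-line-separated paragraphs as records, so only
# the [debug] paragraph reaches grep; grep then looks for the level line.
debug_enabled=$(awk -v RS='' '/\[debug\]/' "$cfg" |\
    grep -E "^\s*\<level\>\s*=\s*.*\<debug\>" || true)

if [ -n "$debug_enabled" ]; then
    echo "debug already enabled"
else
    echo "would append a [debug] section"
fi
```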
@@ -540,11 +580,48 @@ install_kata()
 	info "$project installed\n"
 }
 
+configure_kata()
+{
+	local enable_debug="${1:-}"
+	[ -z "$enable_debug" ] && die "no enable debug value"
+
+	[ "$enable_debug" = "false" ] && \
+		info "Using default $kata_project configuration" && \
+		return 0
+
+	local config_file='configuration.toml'
+	local kata_dir='/etc/kata-containers'
+
+	sudo mkdir -p "$kata_dir"
+
+	local cfg_from
+	local cfg_to
+
+	cfg_from="${kata_install_dir}/share/defaults/kata-containers/${config_file}"
+	cfg_to="${kata_dir}/${config_file}"
+
+	[ -e "$cfg_from" ] || die "cannot find $kata_project configuration file"
+
+	sudo install -o root -g root -m 0644 "$cfg_from" "$cfg_to"
+
+	sudo sed -i \
+		-e 's/^# *\(enable_debug\).*=.*$/\1 = true/g' \
+		-e 's/^kernel_params = "\(.*\)"/kernel_params = "\1 agent.log=debug initcall_debug"/g' \
+		"$cfg_to"
+
+	info "Configured $kata_project for full debug (delete $cfg_to to use pristine $kata_project configuration)"
+}
+
 handle_kata()
 {
 	local version="${1:-}"
 
-	install_kata "$version"
+	local enable_debug="${2:-}"
+	[ -z "$enable_debug" ] && die "no enable debug value"
+
+	install_kata "$version" "$enable_debug"
+
+	configure_kata "$enable_debug"
 
 	kata-runtime --version
 }
@@ -556,6 +633,9 @@ handle_containerd()
 	local force="${2:-}"
 	[ -z "$force" ] && die "need force value"
 
+	local enable_debug="${3:-}"
+	[ -z "$enable_debug" ] && die "no enable debug value"
+
 	local ret
 
 	if [ "$force" = "true" ]
@@ -572,7 +652,7 @@ handle_containerd()
 		fi
 	fi
 
-	configure_containerd
+	configure_containerd "$enable_debug"
 
 	containerd --version
 }
@@ -617,20 +697,32 @@ handle_installation()
 	local only_kata="${3:-}"
 	[ -z "$only_kata" ] && die "no only Kata value"
 
+	local enable_debug="${4:-}"
+	[ -z "$enable_debug" ] && die "no enable debug value"
+
+	local disable_test="${5:-}"
+	[ -z "$disable_test" ] && die "no disable test value"
+
+	local only_run_test="${6:-}"
+	[ -z "$only_run_test" ] && die "no only run test value"
+
 	# These params can be blank
-	local kata_version="${4:-}"
-	local containerd_version="${5:-}"
+	local kata_version="${7:-}"
+	local containerd_version="${8:-}"
+
+	[ "$only_run_test" = "true" ] && test_installation && return 0
 
 	setup "$cleanup" "$force"
 
-	handle_kata "$kata_version"
+	handle_kata "$kata_version" "$enable_debug"
 
 	[ "$only_kata" = "false" ] && \
 		handle_containerd \
 			"$containerd_version" \
-			"$force"
+			"$force" \
+			"$enable_debug"
 
-	test_installation
+	[ "$disable_test" = "false" ] && test_installation
 
 	if [ "$only_kata" = "true" ]
 	then
@@ -647,21 +739,27 @@ handle_args()
 	local cleanup="true"
 	local force="false"
 	local only_kata="false"
+	local disable_test="false"
+	local only_run_test="false"
+	local enable_debug="false"
 
 	local opt
 
 	local kata_version=""
 	local containerd_version=""
 
-	while getopts "c:fhk:or" opt "$@"
+	while getopts "c:dfhk:ortT" opt "$@"
 	do
 		case "$opt" in
 			c)	containerd_version="$OPTARG" ;;
+			d)	enable_debug="true" ;;
 			f)	force="true" ;;
 			h)	usage; exit 0 ;;
 			k)	kata_version="$OPTARG" ;;
 			o)	only_kata="true" ;;
 			r)	cleanup="false" ;;
+			t)	disable_test="true" ;;
+			T)	only_run_test="true" ;;
 		esac
 	done
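In the getopts spec `c:dfhk:ortT` above, a trailing `:` marks `-c` and `-k` as taking an argument, while `-d`, `-f`, `-h`, `-o`, `-r`, `-t` and `-T` are plain switches. A minimal sketch of that parsing loop, trimmed to the new flags plus the two argument-taking options:

```shell
#!/usr/bin/env bash
# Demo: parse a kata-manager style option string. "c:" and "k:" consume
# an argument (available as $OPTARG); the rest are boolean switches.
set -o errexit

parse() {
    enable_debug="false"; disable_test="false"; only_run_test="false"
    kata_version=""; containerd_version=""
    local opt
    OPTIND=1
    while getopts "c:dfhk:ortT" opt "$@"; do
        case "$opt" in
            c) containerd_version="$OPTARG" ;;
            d) enable_debug="true" ;;
            k) kata_version="$OPTARG" ;;
            t) disable_test="true" ;;
            T) only_run_test="true" ;;
        esac
    done
}

parse -d -k 2.4.0 -t
echo "debug=${enable_debug} kata=${kata_version} test-disabled=${disable_test}"
```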
@@ -674,6 +772,9 @@ handle_args()
 		"$cleanup" \
 		"$force" \
 		"$only_kata" \
+		"$enable_debug" \
+		"$disable_test" \
+		"$only_run_test" \
 		"$kata_version" \
 		"$containerd_version"
 }
@@ -75,7 +75,7 @@ assets:
     url: "https://github.com/cloud-hypervisor/cloud-hypervisor"
     uscan-url: >-
       https://github.com/cloud-hypervisor/cloud-hypervisor/tags.*/v?(\d\S+)\.tar\.gz
-    version: "v22.0"
+    version: "v22.1"
 
   firecracker:
     description: "Firecracker micro-VMM"
@@ -88,8 +88,8 @@ assets:
   qemu:
     description: "VMM that uses KVM"
     url: "https://github.com/qemu/qemu"
-    version: "v6.1.0"
-    tag: "v6.1.0"
+    version: "v6.2.0"
+    tag: "v6.2.0"
     # Do not include any non-full release versions
     # Break the line *without CR or space being appended*, to appease
     # yamllint, and note the deliberate ' ' at the end of the expression.
@@ -153,7 +153,7 @@ assets:
   kernel:
     description: "Linux kernel optimised for virtual machines"
     url: "https://cdn.kernel.org/pub/linux/kernel/v5.x/"
-    version: "v5.15.23"
+    version: "v5.15.26"
   tdx:
     description: "Linux kernel that supports TDX"
     url: "https://github.com/intel/tdx/archive/refs/tags"