Compare commits

...

12 Commits

Author SHA1 Message Date
Alex Lyn
dd0c24d775 kata-deploy: Complete containerd config for erofs snapshotter
Add missing containerd configuration items for erofs snapshotter to
enable fsmerged erofs feature:

- Add differ plugin configuration:
  - mkfs_options: ["-T0","--mkfs-time","--sort=none"]
  - enable_tar_index: false

- Add snapshotter plugin configuration:
  - default_size: "10G"
  - max_unmerged_layers: 1

These configurations align with the documentation in
docs/how-to/how-to-use-fsmerged-erofs-with-kata.md Step 2,
ensuring the CI workflow run-k8s-tests-coco-nontee-with-erofs-snapshotter
can properly configure containerd for erofs fsmerged rootfs.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-04-09 17:36:00 +08:00
Fabiano Fidêncio
47a4816c43 ci: enable erofs tests with runtime-rs
Now that Ya'nan has added the support, let's make sure this is tested.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2026-04-09 06:56:01 +02:00
Alex Lyn
25edd51ab5 docs: Add how-to guide for using fsmerged EROFS rootfs with Kata
Document the end-to-end workflow for using the containerd EROFS
snapshotter with Kata Containers runtime-rs, covering containerd
configuration, Kata QEMU settings, and pod deployment examples
via crictl/ctr/Kubernetes.

Include prerequisites (containerd >= 2.2, runtime-rs main branch),
QEMU VMDK format verification command, architecture diagram,
VMDK descriptor format reference, and troubleshooting guide.

Note that Cloud Hypervisor, Firecracker, and Dragonball do not
support VMDK block devices and are currently unsupported for
fsmerged EROFS rootfs.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-04-08 09:10:48 +08:00
Alex Lyn
fde3272825 agent: Refactor multi-layer EROFS handling with unified flow
Refactor the multi-layer EROFS storage handling to improve code
maintainability and reduce duplication.

Key changes:
(1) Extract update_storage_device() to unify device state management
  for both multi-layer and standard storages
(2) Simplify handle_multi_layer_storage() to focus on device creation,
  returning MultiLayerProcessResult struct instead of managing state
(3) Unify the processing flow in add_storages() with a clear separation
  of concerns
(4) Support multiple EROFS lower layers with dynamic lower-N mount paths
(5) Improve mkdir directive handling with deferred {{ mount 1 }}
  resolution

This reduces code duplication, improves readability, and makes the
storage handling logic more consistent across different storage types.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-04-08 09:10:48 +08:00
Alex Lyn
c06fc44775 agent: Register MultiLayerErofsHandler and process multiple EROFS
Introduce MultiLayerErofsHandler and the handle_multi_layer_storage
method for multi-layer storage:
(1) Register MultiLayerErofsHandler in STORAGE_HANDLERS to handle
multi-layer EROFS storage with driver type 'erofs.multi-layer'.
(2) Add the handle_multi_layer_erofs function to process multiple EROFS
storages marked with X-kata.multi-layer together in the guest.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-04-08 09:10:38 +08:00
Alex Lyn
35d867c5a5 agent: Add support for EROFS rootfs handling in kata-agent
Add multi_layer_erofs.rs implementing the guest-side processing logic
for multi-layer EROFS rootfs with overlay mount support.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-04-07 20:40:58 +08:00
Alex Lyn
0a5d95988a runtime-rs: Add erofs rootfs handling logic in handler_rootfs
Add handling for multi-layer EROFS rootfs to the RootFsResource
handler_rootfs method so that multi-layer EROFS rootfs is handled
correctly.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-04-07 20:40:58 +08:00
Alex Lyn
30303960c2 runtime-rs: Add support for erofs rootfs with multi-layer
Add erofs_rootfs.rs implementing ErofsMultiLayerRootfs for
multi-layer EROFS rootfs with VMDK descriptor generation.

This is the core implementation of EROFS rootfs handling within the runtime.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-04-07 20:40:58 +08:00
Alex Lyn
d98f5fb3af runtime-rs: Change Rootfs::get_storage return type
Change Rootfs::get_storage to return Option<Vec<Storage>>
to support multi-layer rootfs with multiple storages.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-04-07 20:40:58 +08:00
Alex Lyn
d980dbbb0a runtime-rs: Add format argument to hotplug_block_device method
Add a format argument to hotplug_block_device so callers can specify the
block device format. Currently raw and vmdk are supported; more formats
can be added in the future.

Alongside the format itself, add the corresponding handling logic to set
the options required by QMP blockdev-add.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-04-07 20:40:58 +08:00
Alex Lyn
4707bd26c9 runtime-rs: Add BlockDeviceFormat enum to support more block formats
In practice, we need more block formats than just `Raw`.

Add a BlockDeviceFormat enum to support multiple block device formats
(RAW, VMDK, etc.), together with the follow-up changes needed to make
this work, including a format field in BlockConfig.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-04-07 20:40:58 +08:00
Alex Lyn
92248ba6b8 runtime-rs: Add RUNTIME_ALLOW_MOUNTS to RuntimeInfo
Add RUNTIME_ALLOW_MOUNTS annotation to RuntimeInfo to specify
custom mount types allowed by the runtime.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-04-07 20:40:58 +08:00
19 changed files with 1848 additions and 83 deletions

View File

@@ -378,6 +378,7 @@ jobs:
matrix:
vmm:
- qemu-coco-dev
- qemu-coco-dev-runtime-rs
snapshotter:
- erofs
pull-type:

View File

@@ -0,0 +1,70 @@
From 6936ab1bac4567095ad2a62e5871af1a7982c616 Mon Sep 17 00:00:00 2001
From: Alex Lyn <alex.lyn@antgroup.com>
Date: Thu, 9 Apr 2026 15:47:22 +0800
Subject: [PATCH] kata-deploy: Complete containerd config for erofs snapshotter
Add missing containerd configuration items for erofs snapshotter to
enable fsmerged erofs feature:
- Add differ plugin configuration:
- mkfs_options: ["-T0","--mkfs-time","--sort=none"]
- enable_tar_index: false
- Add snapshotter plugin configuration:
- default_size: "10G"
- max_unmerged_layers: 1
These configurations align with the documentation in
docs/how-to/how-to-use-fsmerged-erofs-with-kata.md Step 2,
ensuring the CI workflow run-k8s-tests-coco-nontee-with-erofs-snapshotter
can properly configure containerd for erofs fsmerged rootfs.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
---
.../binary/src/artifacts/snapshotters.rs | 23 +++++++++++++++++++
1 file changed, 23 insertions(+)
diff --git a/tools/packaging/kata-deploy/binary/src/artifacts/snapshotters.rs b/tools/packaging/kata-deploy/binary/src/artifacts/snapshotters.rs
index fb49f35d5..998a881d2 100644
--- a/tools/packaging/kata-deploy/binary/src/artifacts/snapshotters.rs
+++ b/tools/packaging/kata-deploy/binary/src/artifacts/snapshotters.rs
@@ -30,6 +30,19 @@ pub async fn configure_erofs_snapshotter(
"[\"erofs\",\"walking\"]",
)?;
+ // Configure erofs differ plugin
+ toml_utils::set_toml_value(
+ configuration_file,
+ ".plugins.\"io.containerd.differ.v1.erofs\".mkfs_options",
+ "[\"-T0\",\"--mkfs-time\",\"--sort=none\"]",
+ )?;
+ toml_utils::set_toml_value(
+ configuration_file,
+ ".plugins.\"io.containerd.differ.v1.erofs\".enable_tar_index",
+ "false",
+ )?;
+
+ // Configure erofs snapshotter plugin
toml_utils::set_toml_value(
configuration_file,
".plugins.\"io.containerd.snapshotter.v1.erofs\".enable_fsverity",
@@ -40,6 +53,16 @@ pub async fn configure_erofs_snapshotter(
".plugins.\"io.containerd.snapshotter.v1.erofs\".set_immutable",
"true",
)?;
+ toml_utils::set_toml_value(
+ configuration_file,
+ ".plugins.\"io.containerd.snapshotter.v1.erofs\".default_size",
+ "\"10G\"",
+ )?;
+ toml_utils::set_toml_value(
+ configuration_file,
+ ".plugins.\"io.containerd.snapshotter.v1.erofs\".max_unmerged_layers",
+ "1",
+ )?;
Ok(())
}
--
2.34.0

View File

@@ -0,0 +1,336 @@
# Use EROFS Snapshotter with Kata Containers (runtime-rs)
## Project Overview
The [EROFS snapshotter](https://erofs.docs.kernel.org) is a native containerd
snapshotter that converts OCI container image layers into EROFS-formatted blobs.
When used with Kata Containers `runtime-rs`, the EROFS snapshotter enables
**block-level image pass-through** to the guest VM, bypassing virtio-fs / 9p
entirely. This delivers lower overhead, better performance, and smaller memory
footprints compared to traditional shared-filesystem approaches.
## Quick Start Guide
This section provides a quick overview of the steps to get started with EROFS snapshotter and Kata Containers. For detailed instructions, see the [Installation Guide](#installation-guide) section.
### Quick Steps
1. **Install erofs-utils**: Install erofs-utils (version >= 1.7) on your host system
2. **Configure containerd**: Enable EROFS snapshotter and differ in containerd configuration
3. **Configure Kata Containers**: Set up runtime-rs with appropriate hypervisor settings
4. **Run a container**: Use `ctr` or Kubernetes to run containers with EROFS snapshotter
### Prerequisites
| Component | Version Requirement |
|-----------|-------------------|
| Linux kernel | >= 5.4 (with `erofs` module) |
| erofs-utils | >= 1.7 (>= 1.8 recommended) |
| containerd | >= 2.2 (with EROFS snapshotter and differ support) |
| Kata Containers | Latest `main` branch with runtime-rs |
| QEMU | >= 5.0 (VMDK flat-extent support); >= 8.0 recommended |
## Installation Guide
This section provides detailed step-by-step instructions for installing and configuring EROFS snapshotter with Kata Containers.
### Step 1: Install erofs-utils
```bash
# Debian/Ubuntu
$ sudo apt install erofs-utils
# Fedora
$ sudo dnf install erofs-utils
```
Verify the version:
```bash
$ mkfs.erofs --version
# Should show 1.7 or higher
```
Load the kernel module:
```bash
$ sudo modprobe erofs
```
### Step 2: Configure containerd
#### Enable the EROFS snapshotter and differ
Edit your containerd configuration (typically `/etc/containerd/config.toml`):
```toml
version = 3
...
[plugins.'io.containerd.cri.v1.runtime']
...
[plugins.'io.containerd.cri.v1.runtime'.containerd]
...
[plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes]
[plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.kata]
runtime_type = 'io.containerd.kata.v2'
snapshotter = 'erofs'
pod_annotations = ["*"]
container_annotations = ["*"]
privileged_without_host_devices = false
sandboxer = 'podsandbox'
...
[plugins.'io.containerd.differ.v1.erofs']
mkfs_options = ["-T0", "--mkfs-time", "--sort=none"]
enable_tar_index = false
[plugins.'io.containerd.service.v1.diff-service']
default = ['erofs', 'walking']
[plugins.'io.containerd.snapshotter.v1.erofs']
default_size = '<SIZE>' # SIZE=6G or 10G or other size
max_unmerged_layers = 1
```
#### Verify the EROFS plugins are loaded
Check whether the EROFS kernel module is loaded:
```bash
$ lsmod | grep erofs
erofs 188416 0
netfs 614400 1 erofs
```
If not loaded:
```bash
$ sudo modprobe erofs
```
Restart containerd and check the plugin status:
```bash
$ sudo systemctl restart containerd
$ sudo ctr plugins ls | grep erofs
io.containerd.mount-handler.v1 erofs linux/amd64 ok
io.containerd.snapshotter.v1 erofs linux/amd64 ok
io.containerd.differ.v1 erofs linux/amd64 ok
```
Both the `snapshotter` and `differ` plugins should show `ok`.
### Step 3: Configure Kata Containers (runtime-rs)
Edit the Kata configuration file (e.g.,
`configuration-qemu-runtime-rs.toml`):
```toml
[hypervisor.qemu]
# shared_fs can be set to "none" since EROFS layers are passed via
# block devices, not via virtio-fs. If you still need virtio-fs for
# other purposes (e.g., file sharing), keep "virtio-fs".
# For pure block-device EROFS mode:
shared_fs = "none"
```
> **Note**: The `shared_fs = "none"` setting is for the case where all
> container images use the EROFS snapshotter. If you have a mixed environment,
> keep `shared_fs = "virtio-fs"` so that non-EROFS containers can still use
> virtio-fs.
### Quick Test
Once the installation is complete, you can run a quick test using `ctr`:
```bash
# Pull the image
$ sudo ctr image pull docker.io/library/wordpress:latest
# Run with EROFS snapshotter and Kata runtime-rs
$ sudo ctr run --runtime io.containerd.kata.v2 --snapshotter=erofs --rm -t docker.io/library/wordpress:latest test001 date
Wed Apr 1 07:10:53 UTC 2026
$ sudo ctr run --runtime io.containerd.kata.v2 --snapshotter=erofs --rm -t docker.io/library/wordpress:latest test001 lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
vda 254:0 0 256M 0 disk
`-vda1 254:1 0 253M 0 part
vdb 254:16 0 6G 0 disk
vdc 254:32 0 759.7M 0 disk
```
> **Note**: Ensure that the containerd CRI configuration maps the `kata`
> handler to the Kata runtime with `snapshotter = "erofs"` as shown in
> [Step 2](#step-2-configure-containerd).
### Architecture
The following diagram illustrates the data flow:
```
Host Guest VM
==== ========
containerd kata-agent
| |
v v
EROFS snapshotter 1. mount ext4 /dev/vdX
| (writable upper)
|-- Mount[0]: ext4 rw layer |
| (block device on host) 2. mount erofs /dev/vdY
| (read-only lower)
|-- Mount[1]: erofs layers |
| source: layer.erofs 3. overlay mount
| device=extra1.erofs lowerdir=<erofs_mount>
| device=extra2.erofs upperdir=<ext4_mount>/upper
| workdir=<ext4_mount>/work
v |
runtime-rs v
| container rootfs
|-- single erofs: attach as Raw ready
|-- multi erofs: generate VMDK
| descriptor + attach as Vmdk
|
v
QEMU (virtio-blk)
|-- /dev/vdX: ext4 rw layer
|-- /dev/vdY: erofs layer(s)
```
### VMDK flat-extent descriptor (multi-layer case)
When multiple EROFS layers are merged, `runtime-rs` generates a VMDK
descriptor file in the `twoGbMaxExtentFlat` format. The descriptor follows the [VMware Virtual Disk Format specification](https://github.com/libyal/libvmdk/blob/main/documentation/VMWare%20Virtual%20Disk%20Format%20(VMDK).asciidoc):
- Header: `# Disk DescriptorFile` marker and version info
- Extent descriptions: `RW <sectors> FLAT "<filename>" <offset>`
  - `sectors`: number of 512-byte sectors for this extent
  - `filename`: absolute path to the backing file
  - `offset`: starting sector offset within the file (0-based)
- DDB (Disk Data Base): virtual hardware and geometry metadata
Files larger than 2GB are automatically split into multiple extents
(`MAX_2GB_EXTENT_SECTORS` per extent), as required by the `twoGbMaxExtentFlat` format. For example:
```
# Disk DescriptorFile
version=1
CID=fffffffe
parentCID=ffffffff
createType="twoGbMaxExtentFlat"
# Extent description
RW 2048 FLAT "/path/to/fsmeta.erofs" 0
RW 4096 FLAT "/path/to/layer1.erofs" 0
RW 8192 FLAT "/path/to/layer2.erofs" 0
# The Disk Data Base
#DDB
ddb.virtualHWVersion = "4"
ddb.geometry.cylinders = "15"
ddb.geometry.heads = "16"
ddb.geometry.sectors = "63"
ddb.adapterType = "ide"
```
QEMU's VMDK driver reads this descriptor and presents all extents as a
single contiguous block device to the guest. The guest kernel's EROFS driver
then mounts this combined device with multi-device support.
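To make the relationship between blob sizes and extent lines concrete, here is a minimal sketch that derives a descriptor from a list of files. This is not the actual `runtime-rs` implementation: it omits the 2GB extent splitting and the DDB section, and the file names are hypothetical.

```shell
#!/bin/sh
# Sketch: emit a twoGbMaxExtentFlat-style descriptor for a list of erofs blobs.
# Each extent's sector count is the backing file size in 512-byte sectors.
emit_descriptor() {
    printf '# Disk DescriptorFile\nversion=1\nCID=fffffffe\nparentCID=ffffffff\n'
    printf 'createType="twoGbMaxExtentFlat"\n# Extent description\n'
    for blob in "$@"; do
        size=$(wc -c < "$blob")
        sectors=$(( (size + 511) / 512 ))   # round up to whole 512-byte sectors
        printf 'RW %s FLAT "%s" 0\n' "$sectors" "$blob"
    done
}
```

For instance, `emit_descriptor /path/to/fsmeta.erofs /path/to/layer1.erofs` prints one `RW <sectors> FLAT` line per blob, matching the example descriptor above.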
### How it works
The containerd EROFS snapshotter prepares a multi-layer rootfs layout:
```
Mount[0]: ext4 rw layer --> virtio-blk device (writable upper layer)
Mount[1]: erofs layers --> virtio-blk device (read-only, via VMDK for multi-extent)
Mount[2]: overlay --> guest agent mounts overlay combining upper + lower
```
For the EROFS read-only layers:
- **Single layer**: The single `.erofs` blob is attached directly as a raw
virtio-blk device.
- **Multiple layers**: Multiple `.erofs` blobs (the base layer + `device=`
extra layers) are merged into a single virtual block device using a VMDK
flat-extent descriptor (`twoGbMaxExtentFlat` format). QEMU's VMDK driver
parses the descriptor and concatenates all extents transparently.
Inside the guest VM, the kata-agent:
1. Mounts the ext4 block device as the writable upper layer.
2. Mounts the erofs block device as the read-only lower layer.
3. Creates an overlay filesystem combining the two.
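The three steps above can be sketched as the following mount sequence. This is a dry-run sketch: the device names `/dev/vdb` and `/dev/vdc` are hypothetical (kata-agent discovers the real devices via uevents), and the paths mirror the agent's `/run/kata-containers/<cid>/multi-layer` layout.

```shell
#!/bin/sh
# Dry-run sketch of the guest-side mount sequence performed by kata-agent.
# `run` only prints each command; change it to `run() { "$@"; }` to execute
# for real (requires root inside the guest).
run() { echo "+ $*"; }
upper=/run/kata-containers/sandbox/multi-layer/upper
lower=/run/kata-containers/sandbox/multi-layer/lower-0
# 1. Mount the writable ext4 upper layer and prepare the overlay dirs
run mount -t ext4 /dev/vdb "$upper"
run mkdir -p "$upper/upper" "$upper/work"
# 2. Mount the merged EROFS device read-only as the lower layer
run mount -t erofs -o ro /dev/vdc "$lower"
# 3. Overlay the two into the container rootfs
run mount -t overlay overlay \
    -o "upperdir=$upper/upper,lowerdir=$lower,workdir=$upper/work" \
    /run/kata-containers/rootfs
```

With several EROFS lower mounts, `lowerdir` becomes a `:`-joined list (`lower-0:lower-1:...`).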
### Verify QEMU VMDK support
The multi-layer EROFS rootfs relies on QEMU's VMDK block driver to present
a VMDK flat-extent descriptor as a single virtual disk. QEMU must be compiled
with VMDK format support enabled (this is typically on by default, but some
minimal or custom builds may disable it).
Run the following command to check:
```bash
$ qemu-system-x86_64 -drive format=help 2>&1 | grep vmdk
```
You should see `vmdk` in the `Supported formats` list, for example:
```
Supported formats: blkdebug blklogwrites blkreplay blkverify bochs cloop
compress copy-before-write copy-on-read dmg file ftp ftps host_cdrom
host_device http https luks nbd null-aio null-co nvme parallels preallocate
qcow qcow2 qed quorum raw replication snapshot-access ssh throttle vdi vhdx
vmdk vpc vvfat
```
If `vmdk` does not appear, you need to rebuild QEMU with VMDK support enabled.
### Build the guest components
The guest kernel must have `CONFIG_EROFS_FS=y` (or `=m` with the module
auto-loaded). The kata-agent in the guest image must include multi-layer
EROFS support.
Refer to [how-to-use-erofs-build-rootfs.md](how-to-use-erofs-build-rootfs.md)
for building a guest rootfs with EROFS support.
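As a quick sanity check on an existing kernel, the sketch below reports whether EROFS is available. `check_erofs` is a hypothetical helper, and the exact checks vary by distro (e.g. whether `modprobe` is installed).

```shell
#!/bin/sh
# Report whether EROFS is registered, loadable as a module, or absent.
check_erofs() {
    if grep -qw erofs /proc/filesystems 2>/dev/null; then
        echo "erofs: available (already registered with the kernel)"
    elif modprobe -n erofs 2>/dev/null; then
        echo "erofs: available as a module (run 'sudo modprobe erofs')"
    else
        echo "erofs: not found in this kernel"
    fi
}
check_erofs
```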
### Limitations
> **Hypervisor support**: The fsmerged EROFS rootfs feature currently **only
> supports QEMU** as the hypervisor, because it depends on the VMDK
> flat-extent descriptor format for merging multiple EROFS layers into a
> single block device. The following hypervisors do **not** support VMDK
> format block devices at this time, and therefore **cannot** be used with
> fsmerged EROFS rootfs:
>
> - **Cloud Hypervisor (CLH)** — no VMDK block device support ([WIP](https://github.com/cloud-hypervisor/cloud-hypervisor/issues/7167))
> - **Firecracker** — no VMDK block device support ([WIP](https://github.com/firecracker-microvm/firecracker/pull/5741))
> - **Dragonball** — no VMDK block device support (TODO)
>
> For single-layer EROFS (only one `.erofs` blob, no `device=` extra layers),
> the blob is attached as a raw block device without a VMDK descriptor. This
> mode may work with other hypervisors that support raw virtio-blk devices,
> but has not been fully tested.
## References
- [EROFS documentation](https://erofs.docs.kernel.org)
- [Containerd EROFS snapshotter](https://github.com/containerd/containerd/blob/main/docs/snapshotters/erofs.md)
- [Configure Kata to use EROFS build rootfs](how-to-use-erofs-build-rootfs.md)
- [Kata Containers architecture](../design/architecture)

View File

@@ -4,7 +4,7 @@
// SPDX-License-Identifier: Apache-2.0
//
use std::collections::HashMap;
use std::collections::{HashMap, HashSet};
use std::fs;
use std::os::unix::fs::{MetadataExt, PermissionsExt};
use std::path::Path;
@@ -26,6 +26,7 @@ use self::ephemeral_handler::EphemeralHandler;
use self::fs_handler::{OverlayfsHandler, Virtio9pHandler, VirtioFsHandler};
use self::image_pull_handler::ImagePullHandler;
use self::local_handler::LocalHandler;
use self::multi_layer_erofs::{handle_multi_layer_erofs_group, is_multi_layer_storage};
use crate::mount::{baremount, is_mounted, remove_mounts};
use crate::sandbox::Sandbox;
@@ -37,6 +38,7 @@ mod ephemeral_handler;
mod fs_handler;
mod image_pull_handler;
mod local_handler;
mod multi_layer_erofs;
const RW_MASK: u32 = 0o660;
const RO_MASK: u32 = 0o440;
@@ -147,6 +149,7 @@ lazy_static! {
#[cfg(target_arch = "s390x")]
Arc::new(self::block_handler::VirtioBlkCcwHandler {}),
Arc::new(ImagePullHandler {}),
Arc::new(self::multi_layer_erofs::MultiLayerErofsHandler {}),
];
for handler in handlers {
@@ -157,6 +160,88 @@ lazy_static! {
};
}
/// Result of multi-layer storage handling
struct MultiLayerProcessResult {
/// The primary device created
device: Arc<dyn StorageDevice>,
/// All mount points that were processed as part of this group
processed_mount_points: Vec<String>,
}
/// Handle multi-layer storage by creating the overlay device.
/// Returns None if the storage is not a multi-layer storage.
/// Returns Some(Ok(result)) if successfully processed.
/// Returns Some(Err(e)) if there was an error.
async fn handle_multi_layer_storage(
logger: &Logger,
storage: &Storage,
storages: &[Storage],
sandbox: &Arc<Mutex<Sandbox>>,
cid: &Option<String>,
processed_mount_points: &HashSet<String>,
) -> Result<Option<MultiLayerProcessResult>> {
if !is_multi_layer_storage(storage) {
return Ok(None);
}
// Skip if already processed as part of a previous multi-layer group
if processed_mount_points.contains(&storage.mount_point) {
return Ok(None);
}
slog::info!(
logger,
"processing multi-layer EROFS storage";
"mount-point" => &storage.mount_point,
"source" => &storage.source,
"driver" => &storage.driver,
"fstype" => &storage.fstype,
);
let result = handle_multi_layer_erofs_group(storage, storages, cid, sandbox, logger).await?;
// Create device for the mount point
let device = new_device(result.mount_point.clone())?;
Ok(Some(MultiLayerProcessResult {
device,
processed_mount_points: result.processed_mount_points,
}))
}
/// Update sandbox storage with the created device.
/// Handles cleanup on failure.
async fn update_storage_device(
sandbox: &Arc<Mutex<Sandbox>>,
mount_point: &str,
device: Arc<dyn StorageDevice>,
logger: &Logger,
) -> Result<()> {
if let Err(device) = sandbox
.lock()
.await
.update_sandbox_storage(mount_point, device)
{
error!(logger, "failed to update device for storage"; "mount-point" => mount_point);
if let Err(e) = sandbox
.lock()
.await
.remove_sandbox_storage(mount_point)
.await
{
warn!(logger, "failed to remove dummy sandbox storage"; "error" => ?e);
}
if let Err(e) = device.cleanup() {
error!(logger, "failed to clean state for storage device"; "mount-point" => mount_point, "error" => ?e);
}
return Err(anyhow!(
"failed to update device for storage: {}",
mount_point
));
}
Ok(())
}
// add_storages takes a list of storages passed by the caller, and perform the
// associated operations such as waiting for the device to show up, and mount
// it to a specific location, according to the type of handler chosen, and for
@@ -169,8 +254,54 @@ pub async fn add_storages(
cid: Option<String>,
) -> Result<Vec<String>> {
let mut mount_list = Vec::new();
let mut processed_mount_points = HashSet::new();
for storage in storages {
for storage in &storages {
// Try multi-layer storage handling first
if let Some(result) = handle_multi_layer_storage(
&logger,
storage,
&storages,
sandbox,
&cid,
&processed_mount_points,
)
.await?
{
// Register all processed mount points
for mp in &result.processed_mount_points {
processed_mount_points.insert(mp.clone());
}
// Add sandbox storage for each mount point in the group
for mp in &result.processed_mount_points {
let state = sandbox
.lock()
.await
.add_sandbox_storage(mp, storage.shared)
.await;
// Only update device for the first occurrence
if state.ref_count().await == 1 {
update_storage_device(sandbox, mp, result.device.clone(), &logger).await?;
}
}
// Add the primary mount point to the list
if let Some(path) = result.device.path() {
if !path.is_empty() {
mount_list.push(path.to_string());
}
}
continue;
}
// Skip if already processed as part of multi-layer group
if processed_mount_points.contains(&storage.mount_point) {
continue;
}
// Standard storage handling
let path = storage.mount_point.clone();
let state = sandbox
.lock()
@@ -178,68 +309,48 @@ pub async fn add_storages(
.add_sandbox_storage(&path, storage.shared)
.await;
if state.ref_count().await > 1 {
if let Some(path) = state.path() {
if !path.is_empty() {
mount_list.push(path.to_string());
if let Some(p) = state.path() {
if !p.is_empty() {
mount_list.push(p.to_string());
}
}
// The device already exists.
continue;
}
if let Some(handler) = STORAGE_HANDLERS.handler(&storage.driver) {
// Create device using handler
let device = if let Some(handler) = STORAGE_HANDLERS.handler(&storage.driver) {
let logger =
logger.new(o!( "subsystem" => "storage", "storage-type" => storage.driver.clone()));
logger.new(o!("subsystem" => "storage", "storage-type" => storage.driver.clone()));
let mut ctx = StorageContext {
cid: &cid,
logger: &logger,
sandbox,
};
match handler.create_device(storage, &mut ctx).await {
Ok(device) => {
match sandbox
.lock()
.await
.update_sandbox_storage(&path, device.clone())
{
Ok(d) => {
if let Some(path) = device.path() {
if !path.is_empty() {
mount_list.push(path.to_string());
}
}
drop(d);
}
Err(device) => {
error!(logger, "failed to update device for storage");
if let Err(e) = sandbox.lock().await.remove_sandbox_storage(&path).await
{
warn!(logger, "failed to remove dummy sandbox storage {:?}", e);
}
if let Err(e) = device.cleanup() {
error!(
logger,
"failed to clean state for storage device {}, {}", path, e
);
}
return Err(anyhow!("failed to update device for storage"));
}
}
}
Err(e) => {
error!(logger, "failed to create device for storage, error: {e:?}");
if let Err(e) = sandbox.lock().await.remove_sandbox_storage(&path).await {
warn!(logger, "failed to remove dummy sandbox storage {e:?}");
}
return Err(e);
}
}
handler.create_device(storage.clone(), &mut ctx).await
} else {
return Err(anyhow!(
"Failed to find the storage handler {}",
storage.driver
));
};
match device {
Ok(device) => {
update_storage_device(sandbox, &path, device.clone(), &logger).await?;
if let Some(p) = device.path() {
if !p.is_empty() {
mount_list.push(p.to_string());
}
}
}
Err(e) => {
error!(logger, "failed to create device for storage"; "error" => ?e);
if let Err(e) = sandbox.lock().await.remove_sandbox_storage(&path).await {
warn!(logger, "failed to remove dummy sandbox storage"; "error" => ?e);
}
return Err(e);
}
}
}

View File

@@ -0,0 +1,492 @@
// Copyright (c) 2026 Ant Group
//
// SPDX-License-Identifier: Apache-2.0
//
//! Multi-layer EROFS storage handler
//!
//! This handler implements the guest-side processing of multi-layer EROFS rootfs:
//! - Storage with X-kata.overlay-upper: ext4 rw layer (upperdir)
//! - Storage with X-kata.overlay-lower: erofs layers (lowerdir)
//! - Creates overlay to combine them
//! - Supports X-kata.mkdir.path options to create directories in upper layer before overlay mount
use std::fs;
use std::path::{Path, PathBuf};
use std::sync::Arc;
use anyhow::{anyhow, Context, Result};
use kata_sys_util::mount::create_mount_destination;
use kata_types::mount::StorageDevice;
use protocols::agent::Storage;
use regex::Regex;
use slog::Logger;
use tokio::sync::Mutex;
use crate::device::BLOCK;
use crate::mount::baremount;
use crate::sandbox::Sandbox;
use crate::storage::{StorageContext, StorageHandler};
use crate::uevent::{wait_for_uevent, Uevent, UeventMatcher};
/// EROFS Type
const EROFS_TYPE: &str = "erofs";
/// ext4 Type
const EXT4_TYPE: &str = "ext4";
/// Overlay Type
const OVERLAY_TYPE: &str = "overlay";
/// Driver type for multi-layer EROFS
pub const DRIVER_MULTI_LAYER_EROFS: &str = "erofs.multi-layer";
/// Custom storage option markers
const OPT_OVERLAY_UPPER: &str = "X-kata.overlay-upper";
const OPT_OVERLAY_LOWER: &str = "X-kata.overlay-lower";
const OPT_MULTI_LAYER: &str = "X-kata.multi-layer=true";
const OPT_MKDIR_PATH: &str = "X-kata.mkdir.path=";
#[derive(Debug)]
struct VirtioBlkMatcher {
rex: Regex,
}
impl VirtioBlkMatcher {
fn new(devname: &str) -> Self {
let re = format!(r"/virtio[0-9]+/block/{}$", devname);
VirtioBlkMatcher {
rex: Regex::new(&re).expect("Failed to compile VirtioBlkMatcher regex"),
}
}
}
impl UeventMatcher for VirtioBlkMatcher {
fn is_match(&self, uev: &Uevent) -> bool {
uev.subsystem == BLOCK && self.rex.is_match(&uev.devpath) && !uev.devname.is_empty()
}
}
#[derive(Debug)]
pub struct MultiLayerErofsHandler {}
#[derive(Debug, Clone)]
pub struct MultiLayerErofsResult {
pub mount_point: String,
pub processed_mount_points: Vec<String>,
}
#[allow(dead_code)]
#[derive(Debug)]
struct MkdirDirective {
raw_path: String,
mode: Option<String>,
}
#[async_trait::async_trait]
impl StorageHandler for MultiLayerErofsHandler {
fn driver_types(&self) -> &[&str] {
&[DRIVER_MULTI_LAYER_EROFS]
}
async fn create_device(
&self,
storage: Storage,
ctx: &mut StorageContext,
) -> Result<Arc<dyn StorageDevice>> {
// This is called when a single storage has driver="erofs.multi-layer"
// For now, treat it as a regular mount point
slog::info!(
ctx.logger,
"multi-layer EROFS handler invoked for single storage";
"driver" => &storage.driver,
"source" => &storage.source,
"fstype" => &storage.fstype,
"mount-point" => &storage.mount_point,
);
let path = crate::storage::common_storage_handler(ctx.logger, &storage)?;
crate::storage::new_device(path)
}
}
pub fn is_multi_layer_storage(storage: &Storage) -> bool {
storage.options.iter().any(|o| o == OPT_MULTI_LAYER)
|| storage.driver == DRIVER_MULTI_LAYER_EROFS
}
pub async fn handle_multi_layer_erofs_group(
trigger: &Storage,
storages: &[Storage],
cid: &Option<String>,
sandbox: &Arc<Mutex<Sandbox>>,
logger: &Logger,
) -> Result<MultiLayerErofsResult> {
let logger = logger.new(o!(
"subsystem" => "multi-layer-erofs",
"trigger-mount-point" => trigger.mount_point.clone(),
));
let multi_layer_storages: Vec<&Storage> = storages
.iter()
.filter(|s| is_multi_layer_storage(s))
.collect();
if multi_layer_storages.is_empty() {
return Err(anyhow!("no multi-layer storages found"));
}
let mut ext4_storage: Option<&Storage> = None;
let mut erofs_storages: Vec<&Storage> = Vec::new();
let mut mkdir_dirs: Vec<MkdirDirective> = Vec::new();
for storage in &multi_layer_storages {
if is_upper_storage(storage) {
if ext4_storage.is_some() {
return Err(anyhow!(
"multi-layer erofs currently supports exactly one ext4 upper layer"
));
}
ext4_storage = Some(*storage);
// Extract mkdir directories from X-kata.mkdir.path options
for opt in &storage.options {
if let Some(mkdir_spec) = opt.strip_prefix(OPT_MKDIR_PATH) {
mkdir_dirs.push(parse_mkdir_directive(mkdir_spec)?);
}
}
} else if is_lower_storage(storage) {
erofs_storages.push(*storage);
}
}
let ext4 = ext4_storage
.ok_or_else(|| anyhow!("multi-layer erofs missing ext4 upper layer storage"))?;
if erofs_storages.is_empty() {
return Err(anyhow!(
"multi-layer erofs missing erofs lower layer storage"
));
}
slog::info!(
logger,
"handling multi-layer erofs group";
"ext4-device" => &ext4.source,
"erofs-devices" => erofs_storages
.iter()
.map(|s| s.source.as_str())
.collect::<Vec<_>>()
.join(","),
"mount-point" => &ext4.mount_point,
"mkdir-dirs-count" => mkdir_dirs.len(),
);
// Create temporary mount points for upper and lower layers
let cid_str = cid.as_deref().unwrap_or("sandbox");
let temp_base = PathBuf::from(format!("/run/kata-containers/{}/multi-layer", cid_str));
fs::create_dir_all(&temp_base).context("failed to create temp mount base")?;
let upper_mount = temp_base.join("upper");
fs::create_dir_all(&upper_mount).context("failed to create upper mount dir")?;
wait_and_mount_upper(ext4, &upper_mount, sandbox, &logger).await?;
for mkdir_dir in &mkdir_dirs {
// As {{ mount 1 }} refers to the first lower layer, which is not available until we mount it.
// Just skip it for now and handle it in a second pass after mounting the lower layers.
if mkdir_dir.raw_path.contains("{{ mount 1 }}") {
continue;
}
let resolved_path = resolve_mkdir_path(&mkdir_dir.raw_path, &upper_mount, None);
slog::info!(
logger,
"creating mkdir directory in upper layer";
"raw-path" => &mkdir_dir.raw_path,
"resolved-path" => &resolved_path,
);
fs::create_dir_all(&resolved_path)
.with_context(|| format!("failed to create mkdir directory: {}", resolved_path))?;
}
let mut lower_mounts = Vec::new();
for (index, erofs) in erofs_storages.iter().enumerate() {
let lower_mount = temp_base.join(format!("lower-{}", index));
fs::create_dir_all(&lower_mount).with_context(|| {
format!("failed to create lower mount dir {}", lower_mount.display())
})?;
wait_and_mount_lower(erofs, &lower_mount, sandbox, &logger).await?;
lower_mounts.push(lower_mount);
}
// If any mkdir directive refers to {{ mount 1 }}, resolve it now using the first lower mount.
// This matches current supported placeholder behavior without inventing a broader template scheme.
for mkdir_dir in &mkdir_dirs {
if mkdir_dir.raw_path.contains("{{ mount 1 }}") {
let first_lower = lower_mounts
.first()
.ok_or_else(|| anyhow!("lower mount is missing while resolving mkdir path"))?;
let resolved_path =
resolve_mkdir_path(&mkdir_dir.raw_path, &upper_mount, Some(first_lower));
slog::info!(
logger,
"creating deferred mkdir directory";
"raw-path" => &mkdir_dir.raw_path,
"resolved-path" => &resolved_path,
);
fs::create_dir_all(&resolved_path).with_context(|| {
format!(
"failed to create deferred mkdir directory: {}",
resolved_path
)
})?;
}
}
let upperdir = upper_mount.join("upper");
let workdir = upper_mount.join("work");
if !upperdir.exists() {
fs::create_dir_all(&upperdir).context("failed to create upperdir")?;
}
fs::create_dir_all(&workdir).context("failed to create workdir")?;
let lowerdir = lower_mounts
.iter()
.map(|p| p.display().to_string())
.collect::<Vec<_>>()
.join(":");
slog::info!(
logger,
"creating overlay mount";
"upperdir" => upperdir.display(),
"lowerdir" => &lowerdir,
"workdir" => workdir.display(),
"target" => &ext4.mount_point,
);
create_mount_destination(
Path::new(OVERLAY_TYPE),
Path::new(&ext4.mount_point),
"",
OVERLAY_TYPE,
)
.context("failed to create overlay mount destination")?;
let overlay_options = format!(
"upperdir={},lowerdir={},workdir={}",
upperdir.display(),
lowerdir,
workdir.display()
);
baremount(
Path::new(OVERLAY_TYPE),
Path::new(&ext4.mount_point),
OVERLAY_TYPE,
nix::mount::MsFlags::empty(),
&overlay_options,
&logger,
)
.context("failed to mount overlay")?;
slog::info!(
logger,
"multi-layer erofs overlay mounted successfully";
"mount-point" => &ext4.mount_point,
);
// Collect all unique mount points to maintain a clean resource state.
//
// In multi-layer EROFS configurations, upper and lower storages may share
// the same mount point.
// We must deduplicate these entries before processing to prevent:
// 1. Double-incrementing sandbox refcounts for the same resource.
// 2. Redundant bookkeeping operations that could lead to state inconsistency.
//
// Note: We maintain the original order of insertion, which is essential for
// ensuring a predictable and correct sequence during resource cleanup.
let processed_mount_points = multi_layer_storages.iter().fold(Vec::new(), |mut acc, s| {
if !acc.contains(&s.mount_point) {
acc.push(s.mount_point.clone());
}
acc
});
Ok(MultiLayerErofsResult {
mount_point: ext4.mount_point.clone(),
processed_mount_points,
})
}
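The overlay option string assembled above follows the standard overlayfs mount syntax. A minimal standalone sketch of that assembly (hypothetical helper, not part of the agent code):

```rust
/// Build an overlayfs option string from one upperdir, an ordered list of
/// lowerdirs, and a workdir, mirroring the format!() call above.
/// Lower layers are joined with ':'; overlayfs treats the first entry as
/// the topmost lower layer.
fn overlay_options(upperdir: &str, lowerdirs: &[&str], workdir: &str) -> String {
    format!(
        "upperdir={},lowerdir={},workdir={}",
        upperdir,
        lowerdirs.join(":"),
        workdir
    )
}
```

This matches the option string passed to baremount() for the overlay target.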
fn is_upper_storage(storage: &Storage) -> bool {
storage.options.iter().any(|o| o == OPT_OVERLAY_UPPER)
|| (storage.fstype == EXT4_TYPE && storage.options.iter().any(|o| o == OPT_MULTI_LAYER))
}
fn is_lower_storage(storage: &Storage) -> bool {
storage.options.iter().any(|o| o == OPT_OVERLAY_LOWER)
|| (storage.fstype == EROFS_TYPE && storage.options.iter().any(|o| o == OPT_MULTI_LAYER))
}
fn parse_mkdir_directive(spec: &str) -> Result<MkdirDirective> {
let parts: Vec<&str> = spec.splitn(2, ':').collect();
if parts.is_empty() || parts[0].is_empty() {
return Err(anyhow!("invalid X-kata.mkdir.path directive: '{}'", spec));
}
Ok(MkdirDirective {
raw_path: parts[0].to_string(),
mode: parts.get(1).map(|s| s.to_string()),
})
}
fn resolve_mkdir_path(
raw_path: &str,
upper_mount: &Path,
first_lower_mount: Option<&Path>,
) -> String {
let mut resolved = raw_path.replace("{{ mount 0 }}", upper_mount.to_str().unwrap_or(""));
if let Some(lower) = first_lower_mount {
resolved = resolved.replace("{{ mount 1 }}", lower.to_str().unwrap_or(""));
}
resolved
}
async fn wait_and_mount_upper(
ext4: &Storage,
upper_mount: &Path,
sandbox: &Arc<Mutex<Sandbox>>,
logger: &Logger,
) -> Result<()> {
let ext4_devname = extract_device_name(&ext4.source)?;
slog::info!(
logger,
"waiting for ext4 block device to be ready";
"device" => &ext4.source,
"devname" => &ext4_devname,
);
let matcher = VirtioBlkMatcher::new(&ext4_devname);
wait_for_uevent(sandbox, matcher)
.await
.context("timeout waiting for ext4 block device")?;
slog::info!(
logger,
"mounting ext4 upper layer";
"device" => &ext4.source,
"fstype" => &ext4.fstype,
"mount-point" => upper_mount.display(),
"options" => ext4.options.join(","),
);
create_mount_destination(Path::new(&ext4.source), upper_mount, "", &ext4.fstype)
.context("failed to create upper mount destination")?;
// Filter out X-kata.* custom options before mount
// These are metadata markers, not actual mount options
let mount_options: Vec<String> = ext4
.options
.iter()
.filter(|o| !o.starts_with("X-kata."))
.cloned()
.collect();
slog::info!(
logger,
"filtered ext4 mount options";
"original-options" => ext4.options.join(","),
"mount-options" => mount_options.join(","),
);
let (flags, options) = kata_sys_util::mount::parse_mount_options(&mount_options)?;
baremount(
Path::new(&ext4.source),
upper_mount,
&ext4.fstype,
flags,
options.as_str(),
logger,
)
.context("failed to mount ext4 upper layer")?;
Ok(())
}
async fn wait_and_mount_lower(
erofs: &Storage,
lower_mount: &Path,
sandbox: &Arc<Mutex<Sandbox>>,
logger: &Logger,
) -> Result<()> {
let erofs_devname = extract_device_name(&erofs.source)?;
slog::info!(
logger,
"waiting for erofs block device to be ready";
"device" => &erofs.source,
"devname" => &erofs_devname,
);
let matcher = VirtioBlkMatcher::new(&erofs_devname);
wait_for_uevent(sandbox, matcher)
.await
.context("timeout waiting for erofs block device")?;
slog::info!(
logger,
"mounting erofs lower layer";
"device" => &erofs.source,
"mount-point" => lower_mount.display(),
);
create_mount_destination(Path::new(&erofs.source), lower_mount, "", EROFS_TYPE)
.context("failed to create lower mount destination")?;
baremount(
Path::new(&erofs.source),
lower_mount,
EROFS_TYPE,
nix::mount::MsFlags::MS_RDONLY,
"ro",
logger,
)
.context("failed to mount erofs lower layer")?;
Ok(())
}
/// Extract device name from a device path
///
/// Examples:
/// - "/dev/vda" -> "vda"
/// - "/dev/vdb" -> "vdb"
fn extract_device_name(device_path: &str) -> Result<String> {
device_path
.strip_prefix("/dev/")
.map(|s| s.to_string())
.ok_or_else(|| anyhow!("device path '{}' must start with /dev/", device_path))
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_driver_types() {
let handler = MultiLayerErofsHandler {};
assert_eq!(handler.driver_types(), &[DRIVER_MULTI_LAYER_EROFS]);
}
#[test]
fn test_constants() {
assert_eq!(OPT_OVERLAY_UPPER, "X-kata.overlay-upper");
assert_eq!(OPT_OVERLAY_LOWER, "X-kata.overlay-lower");
assert_eq!(OPT_MULTI_LAYER, "X-kata.multi-layer=true");
assert_eq!(OPT_MKDIR_PATH, "X-kata.mkdir.path=");
}
}
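The path helpers above could be covered by a couple more assertions. A sketch with standalone reimplementations of the same logic (the real `extract_device_name` returns `anyhow::Result` rather than `Option`):

```rust
/// Standalone reimplementations, for illustration only.
fn extract_device_name(device_path: &str) -> Option<String> {
    // "/dev/vda" -> "vda"; non-/dev paths are rejected.
    device_path.strip_prefix("/dev/").map(|s| s.to_string())
}

fn resolve_mkdir_path(raw: &str, upper: &str, first_lower: Option<&str>) -> String {
    // "{{ mount 0 }}" maps to the upper mount, "{{ mount 1 }}" to the
    // first lower mount once it is available.
    let mut resolved = raw.replace("{{ mount 0 }}", upper);
    if let Some(lower) = first_lower {
        resolved = resolved.replace("{{ mount 1 }}", lower);
    }
    resolved
}
```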

View File

@@ -24,9 +24,9 @@ pub use vfio::{
pub use vhost_user::{VhostUserConfig, VhostUserDevice, VhostUserType};
pub use vhost_user_net::VhostUserNetDevice;
pub use virtio_blk::{
BlockConfig, BlockDevice, BlockDeviceAio, KATA_BLK_DEV_TYPE, KATA_CCW_DEV_TYPE,
KATA_MMIO_BLK_DEV_TYPE, KATA_NVDIMM_DEV_TYPE, KATA_SCSI_DEV_TYPE, VIRTIO_BLOCK_CCW,
VIRTIO_BLOCK_MMIO, VIRTIO_BLOCK_PCI, VIRTIO_PMEM,
BlockConfig, BlockDevice, BlockDeviceAio, BlockDeviceFormat, KATA_BLK_DEV_TYPE,
KATA_CCW_DEV_TYPE, KATA_MMIO_BLK_DEV_TYPE, KATA_NVDIMM_DEV_TYPE, KATA_SCSI_DEV_TYPE,
VIRTIO_BLOCK_CCW, VIRTIO_BLOCK_MMIO, VIRTIO_BLOCK_PCI, VIRTIO_PMEM,
};
pub use virtio_fs::{
ShareFsConfig, ShareFsDevice, ShareFsMountConfig, ShareFsMountOperation, ShareFsMountType,

View File

@@ -59,6 +59,23 @@ impl std::fmt::Display for BlockDeviceAio {
}
}
#[derive(Debug, Clone, Default, PartialEq, Eq)]
pub enum BlockDeviceFormat {
#[default]
Raw,
Vmdk,
}
impl std::fmt::Display for BlockDeviceFormat {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let s = match *self {
BlockDeviceFormat::Raw => "raw",
BlockDeviceFormat::Vmdk => "vmdk",
};
write!(f, "{s}")
}
#[derive(Debug, Clone, Default)]
pub struct BlockConfig {
/// Path of the drive.
@@ -71,6 +88,9 @@ pub struct BlockConfig {
/// Don't close `path_on_host` file when dropping the device.
pub no_drop: bool,
/// Block device format: raw, vmdk, etc. Defaults to raw if not set.
pub format: BlockDeviceFormat,
/// Specifies cache-related options for block devices.
/// Denotes whether use of O_DIRECT (bypassing the host page cache) is enabled.
/// If not set, the block_device_cache_direct configuration is used.

View File

@@ -866,6 +866,7 @@ impl QemuInner {
),
block_device.config.is_readonly,
block_device.config.no_drop,
&format!("{}", block_device.config.format),
)
.context("hotplug block device")?;

View File

@@ -14,7 +14,8 @@ use kata_types::rootless::is_rootless;
use nix::sys::socket::{sendmsg, ControlMessage, MsgFlags};
use qapi_qmp::{
self as qmp, BlockdevAioOptions, BlockdevOptions, BlockdevOptionsBase,
BlockdevOptionsGenericFormat, BlockdevOptionsRaw, BlockdevRef, MigrationInfo, PciDeviceInfo,
BlockdevOptionsGenericCOWFormat, BlockdevOptionsGenericFormat, BlockdevOptionsRaw, BlockdevRef,
MigrationInfo, PciDeviceInfo,
};
use qapi_qmp::{migrate, migrate_incoming, migrate_set_capabilities};
use qapi_qmp::{MigrationCapability, MigrationCapabilityStatus};
@@ -642,6 +643,7 @@ impl Qmp {
is_direct: Option<bool>,
is_readonly: bool,
no_drop: bool,
format: &str,
) -> Result<(Option<PciPath>, Option<String>)> {
// `blockdev-add`
let node_name = format!("drive-{index}");
@@ -690,27 +692,64 @@ impl Qmp {
}
};
let blockdev_options_raw = BlockdevOptions::raw {
base: BlockdevOptionsBase {
detect_zeroes: None,
cache: None,
discard: None,
force_share: None,
auto_read_only: None,
node_name: Some(node_name.clone()),
read_only: None,
},
raw: BlockdevOptionsRaw {
base: BlockdevOptionsGenericFormat {
file: BlockdevRef::definition(Box::new(blockdev_file)),
},
offset: None,
size: None,
},
let blockdev_options = match format {
"raw" => {
// Use raw format for regular block devices
BlockdevOptions::raw {
base: BlockdevOptionsBase {
detect_zeroes: None,
cache: None,
discard: None,
force_share: None,
auto_read_only: None,
node_name: Some(node_name.clone()),
read_only: None,
},
raw: BlockdevOptionsRaw {
base: BlockdevOptionsGenericFormat {
file: BlockdevRef::definition(Box::new(blockdev_file)),
},
offset: None,
size: None,
},
}
}
"vmdk" => {
// Use VMDK format driver for VMDK descriptor files
// The VMDK driver will parse the descriptor and handle multi-extent files
info!(
sl!(),
"hotplug_block_device: using VMDK format driver for {}", path_on_host
);
BlockdevOptions::vmdk {
base: BlockdevOptionsBase {
detect_zeroes: None,
cache: None,
discard: None,
force_share: None,
auto_read_only: None,
node_name: Some(node_name.clone()),
read_only: None,
},
vmdk: BlockdevOptionsGenericCOWFormat {
base: BlockdevOptionsGenericFormat {
file: BlockdevRef::definition(Box::new(blockdev_file)),
},
backing: None,
},
}
}
other => {
warn!(
sl!(),
"unsupported block device format '{}' for {}", other, path_on_host
);
return Err(anyhow!("unrecognized block device format: {}", other));
}
};
self.qmp
.execute(&qapi_qmp::blockdev_add(blockdev_options_raw))
.execute(&qapi_qmp::blockdev_add(blockdev_options))
.map_err(|e| anyhow!("blockdev-add backend {:?}", e))
.map(|_| ())?;

View File

@@ -137,8 +137,8 @@ impl Rootfs for BlockRootfs {
Ok(vec![self.mount.clone()])
}
async fn get_storage(&self) -> Option<Storage> {
self.storage.clone()
async fn get_storage(&self) -> Option<Vec<Storage>> {
self.storage.clone().map(|s| vec![s])
}
async fn get_device_id(&self) -> Result<Option<String>> {

View File

@@ -0,0 +1,640 @@
// Copyright (c) 2026 Ant Group
//
// SPDX-License-Identifier: Apache-2.0
//
// Handle multi-layer EROFS rootfs:
// Mount[0]: ext4 rw layer -> virtio-blk device (writable)
// Mount[1]: erofs with device= -> virtio-blk via VMDK (read-only)
// Mount[2]: overlay (format/mkdir/overlay) -> host mount OR guest agent
// The overlay mount may be handled by the guest agent if it contains "{{"
// templates in upperdir/workdir.
use super::{Rootfs, ROOTFS};
use crate::share_fs::{do_get_guest_path, do_get_host_path};
use agent::Storage;
use anyhow::{anyhow, Context, Result};
use async_trait::async_trait;
use hypervisor::{
device::{
device_manager::{do_handle_device, get_block_device_info, DeviceManager},
DeviceConfig, DeviceType,
},
BlockConfig, BlockDeviceAio, BlockDeviceFormat,
};
use kata_types::config::hypervisor::{
VIRTIO_BLK_CCW, VIRTIO_BLK_MMIO, VIRTIO_BLK_PCI, VIRTIO_PMEM, VIRTIO_SCSI,
};
use kata_types::mount::Mount;
use oci_spec::runtime as oci;
use std::fs;
use std::io::{BufWriter, Write};
use std::path::{Path, PathBuf};
use std::sync::Arc;
use tokio::sync::RwLock;
/// EROFS rootfs type identifier
pub(crate) const EROFS_ROOTFS_TYPE: &str = "erofs";
/// RW layer rootfs type identifier, used for multi-layer EROFS as the writable upper layer
/// Typically ext4 format, but can be extended to other fs types in the future.
pub(crate) const RW_LAYER_ROOTFS_TYPE: &str = "ext4";
/// VMDK file extension for merged EROFS image
const EROFS_MERGED_VMDK: &str = "merged_fs.vmdk";
/// Maximum number of virtio-blk devices allowed
const MAX_VIRTIO_BLK_DEVICES: usize = 10;
/// Maximum sectors per 2GB extent (2GB / 512 bytes per sector)
const MAX_2GB_EXTENT_SECTORS: u64 = 0x8000_0000 >> 9;
/// Sectors per track for VMDK geometry
const SECTORS_PER_TRACK: u64 = 63;
/// Number of heads for VMDK geometry
const NUMBER_HEADS: u64 = 16;
/// VMDK subformat type (twoGbMaxExtentFlat for large files)
const VMDK_SUBFORMAT: &str = "twoGbMaxExtentFlat";
/// VMDK adapter type
const VMDK_ADAPTER_TYPE: &str = "ide";
/// VMDK hardware version
const VMDK_HW_VERSION: &str = "4";
/// Default shared directory for guest rootfs VMDK files (for multi-layer EROFS)
const DEFAULT_KATA_GUEST_ROOT_SHARED_FS: &str = "/run/kata-containers/";
/// Template for mkdir option in overlay mount (X-containerd.mkdir.path)
const X_CONTAINERD_MKDIR_PATH: &str = "X-containerd.mkdir.path=";
/// Template for mkdir option passed to guest agent (X-kata.mkdir.path)
const X_KATA_MKDIR_PATH: &str = "X-kata.mkdir.path=";
/// Generate merged VMDK file from multiple EROFS devices
///
/// Creates a VMDK descriptor that combines multiple EROFS images into a single
/// virtual block device (flatten device). For a single device, the EROFS image
/// is used directly without a VMDK wrapper.
///
/// `erofs_devices` holds host paths to the EROFS image files (from the `source` and `device=` options).
async fn generate_merged_erofs_vmdk(
sid: &str,
cid: &str,
erofs_devices: &[String],
) -> Result<(String, BlockDeviceFormat)> {
if erofs_devices.is_empty() {
return Err(anyhow!("no EROFS devices provided"));
}
// Validate all device paths exist and are regular files before proceeding.
for dev_path in erofs_devices {
let metadata = fs::metadata(dev_path)
.context(format!("EROFS device path not accessible: {}", dev_path))?;
if !metadata.is_file() {
return Err(anyhow!(
"EROFS device path is not a regular file: {}",
dev_path
));
}
}
// For single device, use it directly with Raw format (no need for VMDK descriptor)
if erofs_devices.len() == 1 {
info!(
sl!(),
"single EROFS device, using directly with Raw format: {}", erofs_devices[0]
);
return Ok((erofs_devices[0].clone(), BlockDeviceFormat::Raw));
}
// For multiple devices, create VMDK descriptor
let sandbox_dir = PathBuf::from(DEFAULT_KATA_GUEST_ROOT_SHARED_FS).join(sid);
let container_dir = sandbox_dir.join(cid);
fs::create_dir_all(&container_dir).context(format!(
"failed to create container directory: {}",
container_dir.display()
))?;
let vmdk_path = container_dir.join(EROFS_MERGED_VMDK);
info!(
sl!(),
"creating VMDK descriptor for {} EROFS devices: {}",
erofs_devices.len(),
vmdk_path.display()
);
// create_vmdk_descriptor uses atomic write (temp + rename) internally,
// so a failure will not leave a corrupt descriptor file.
create_vmdk_descriptor(&vmdk_path, erofs_devices)
.context("failed to create VMDK descriptor")?;
Ok((vmdk_path.display().to_string(), BlockDeviceFormat::Vmdk))
}
/// Create VMDK descriptor for multiple EROFS extents (flatten device)
///
/// Generates a VMDK descriptor file (twoGbMaxExtentFlat format) that references
/// multiple EROFS images as flat extents, allowing them to be treated as a single
/// contiguous block device in the VM.
fn create_vmdk_descriptor(vmdk_path: &Path, erofs_paths: &[String]) -> Result<()> {
if erofs_paths.is_empty() {
return Err(anyhow!(
"empty EROFS path list, cannot create VMDK descriptor"
));
}
// collect extent information without writing anything.
struct ExtentInfo {
path: String,
total_sectors: u64,
}
let mut extents: Vec<ExtentInfo> = Vec::with_capacity(erofs_paths.len());
let mut total_sectors: u64 = 0;
for erofs_path in erofs_paths {
let metadata = fs::metadata(erofs_path)
.context(format!("failed to stat EROFS file: {}", erofs_path))?;
let file_size = metadata.len();
if file_size == 0 {
warn!(sl!(), "EROFS file {} is zero-length, skipping", erofs_path);
continue;
}
// round up to whole sectors to avoid losing tail bytes on non-aligned files.
// VMDK extents are measured in 512-byte sectors; a file that is not sector-aligned
// still needs the last partial sector to be addressable by the VM.
let sectors = file_size.div_ceil(512);
if file_size % 512 != 0 {
warn!(
sl!(),
"EROFS file {} size ({} bytes) is not 512-byte aligned, \
rounding up to {} sectors ({} bytes addressable)",
erofs_path,
file_size,
sectors,
sectors * 512
);
}
total_sectors = total_sectors.checked_add(sectors).ok_or_else(|| {
anyhow!(
"total sector count overflow when adding {} ({} sectors)",
erofs_path,
sectors
)
})?;
extents.push(ExtentInfo {
path: erofs_path.clone(),
total_sectors: sectors,
});
}
if total_sectors == 0 {
return Err(anyhow!(
"no valid EROFS files to create VMDK descriptor (all files are empty)"
));
}
// write descriptor to a temp file, then atomically rename.
let tmp_path = vmdk_path.with_extension("vmdk.tmp");
let file = fs::File::create(&tmp_path).context(format!(
"failed to create temp VMDK file: {}",
tmp_path.display()
))?;
let mut writer = BufWriter::new(file);
// Header
writeln!(writer, "# Disk DescriptorFile")?;
writeln!(writer, "version=1")?;
writeln!(writer, "CID=fffffffe")?;
writeln!(writer, "parentCID=ffffffff")?;
writeln!(writer, "createType=\"{}\"", VMDK_SUBFORMAT)?;
writeln!(writer)?;
// Extent descriptions
writeln!(writer, "# Extent description")?;
for extent in &extents {
let mut remaining = extent.total_sectors;
let mut file_offset: u64 = 0;
while remaining > 0 {
let chunk = remaining.min(MAX_2GB_EXTENT_SECTORS);
writeln!(
writer,
"RW {} FLAT \"{}\" {}",
chunk, extent.path, file_offset
)?;
file_offset += chunk;
remaining -= chunk;
}
info!(
sl!(),
"VMDK extent: {} ({} sectors, {} extent chunk(s))",
extent.path,
extent.total_sectors,
extent.total_sectors.div_ceil(MAX_2GB_EXTENT_SECTORS)
);
}
writeln!(writer)?;
// Disk Data Base (DDB)
// Geometry: cylinders = ceil(total_sectors / (sectors_per_track * heads))
let cylinders = total_sectors.div_ceil(SECTORS_PER_TRACK * NUMBER_HEADS);
writeln!(writer, "# The Disk Data Base")?;
writeln!(writer, "#DDB")?;
writeln!(writer)?;
writeln!(writer, "ddb.virtualHWVersion = \"{}\"", VMDK_HW_VERSION)?;
writeln!(writer, "ddb.geometry.cylinders = \"{}\"", cylinders)?;
writeln!(writer, "ddb.geometry.heads = \"{}\"", NUMBER_HEADS)?;
writeln!(writer, "ddb.geometry.sectors = \"{}\"", SECTORS_PER_TRACK)?;
writeln!(writer, "ddb.adapterType = \"{}\"", VMDK_ADAPTER_TYPE)?;
// Flush the BufWriter to ensure all data is written before rename.
writer.flush().context("failed to flush VMDK descriptor")?;
// Explicitly drop to close the file handle before rename.
drop(writer);
// atomic rename: tmp -> final path.
fs::rename(&tmp_path, vmdk_path).context(format!(
"failed to rename temp VMDK {} -> {}",
tmp_path.display(),
vmdk_path.display()
))?;
info!(
sl!(),
"VMDK descriptor created: {} (total {} sectors, {} extents, {} cylinders)",
vmdk_path.display(),
total_sectors,
extents.len(),
cylinders
);
Ok(())
}
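The 2 GiB extent splitting performed by create_vmdk_descriptor() can be checked with a small standalone sketch (same constant, same loop shape; `extent_lines` is a hypothetical helper, not part of the runtime):

```rust
/// 2 GiB expressed in 512-byte sectors, as in the descriptor code above.
const MAX_2GB_EXTENT_SECTORS: u64 = 0x8000_0000 >> 9; // 4_194_304

/// Produce the `RW <sectors> FLAT "<path>" <offset>` lines for one flat
/// extent file, splitting at the 2 GiB boundary like the loop above.
fn extent_lines(path: &str, total_sectors: u64) -> Vec<String> {
    let mut lines = Vec::new();
    let mut remaining = total_sectors;
    let mut file_offset: u64 = 0;
    while remaining > 0 {
        let chunk = remaining.min(MAX_2GB_EXTENT_SECTORS);
        lines.push(format!("RW {} FLAT \"{}\" {}", chunk, path, file_offset));
        file_offset += chunk;
        remaining -= chunk;
    }
    lines
}
```

A 5 GiB image (10,485,760 sectors) yields three chunks: two full 4,194,304-sector extents and one 2,097,152-sector tail, with offsets measured in sectors from the start of the same backing file.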
/// Extract block device information from hypervisor device info
fn extract_block_device_info(
device_info: &DeviceType,
block_driver: &str,
) -> Result<(String, String, String)> {
if let DeviceType::Block(device) = device_info {
let blk_driver = device.config.driver_option.clone();
let device_id = device.device_id.clone();
// Use virt_path as guest device path (e.g., /dev/vda)
// pci_path is PCI address (e.g., 02/00) which is not a valid mount source
let guest_path = match block_driver {
VIRTIO_BLK_PCI | VIRTIO_BLK_MMIO | VIRTIO_BLK_CCW => {
// virt_path is the correct guest device path for all virtio-blk types
if device.config.virt_path.is_empty() {
return Err(anyhow!("virt_path is empty for block device"));
}
device.config.virt_path.clone()
}
VIRTIO_SCSI | VIRTIO_PMEM => {
return Err(anyhow!(
"Block driver {} not fully supported for EROFS",
block_driver
));
}
_ => {
return Err(anyhow!("Unknown block driver: {}", block_driver));
}
};
Ok((device_id, guest_path, blk_driver))
} else {
Err(anyhow!("Expected block device, got {:?}", device_info))
}
}
/// EROFS Multi-Layer Rootfs with overlay support
///
/// Handles the multi-layer EROFS case, where the rootfs consists of:
/// - Mount[0]: ext4 rw layer (writable container layer) -> virtio-blk device
/// - Mount[1]: erofs layers (fsmeta + flattened layers) -> virtio-blk via VMDK
/// - Mount[2]: overlay (to combine ext4 upper + erofs lower)
pub(crate) struct ErofsMultiLayerRootfs {
guest_path: String,
device_ids: Vec<String>,
mount: oci::Mount,
rwlayer_storage: Option<Storage>, // Writable layer storage (upper layer), typically ext4
erofs_storage: Option<Storage>,
/// Path to generated VMDK descriptor (only set when multiple EROFS devices are merged)
vmdk_path: Option<PathBuf>,
}
impl ErofsMultiLayerRootfs {
pub async fn new(
device_manager: &RwLock<DeviceManager>,
sid: &str,
cid: &str,
rootfs_mounts: &[Mount],
_share_fs: &Option<Arc<dyn crate::share_fs::ShareFs>>,
) -> Result<Self> {
let container_path = do_get_guest_path(ROOTFS, cid, false, false);
let host_path = do_get_host_path(ROOTFS, sid, cid, false, false);
fs::create_dir_all(&host_path)
.map_err(|e| anyhow!("failed to create rootfs dir {}: {:?}", host_path, e))?;
let mut device_ids = Vec::new();
let mut rwlayer_storage: Option<Storage> = None;
let mut erofs_storage: Option<Storage> = None;
let mut vmdk_path: Option<PathBuf> = None;
// Directories to create (X-containerd.mkdir.path)
let mut mkdir_dirs: Vec<String> = Vec::new();
let blkdev_info = get_block_device_info(device_manager).await;
let block_driver = blkdev_info.block_device_driver.clone();
// Process each mount in rootfs_mounts to set up devices and storages
for mount in rootfs_mounts {
match mount.fs_type.as_str() {
fmt if fmt.eq_ignore_ascii_case(RW_LAYER_ROOTFS_TYPE) => {
// Mount[0]: rw layer -> virtio-blk device /dev/vdX1
info!(
sl!(),
"multi-layer erofs: adding rw layer: {}", mount.source
);
let device_config = &mut BlockConfig {
driver_option: block_driver.clone(),
format: BlockDeviceFormat::Raw, // rw layer should be raw format
path_on_host: mount.source.clone(),
blkdev_aio: BlockDeviceAio::new(&blkdev_info.block_device_aio),
..Default::default()
};
let device_info = do_handle_device(
device_manager,
&DeviceConfig::BlockCfg(device_config.clone()),
)
.await
.context("failed to attach rw block device")?;
let (device_id, guest_path, blk_driver) =
extract_block_device_info(&device_info, &block_driver)?;
info!(
sl!(),
"writable block device attached - device_id: {} guest_path: {}",
device_id,
guest_path
);
// Filter out "loop" option which is not needed in VM (device is already /dev/vdX)
let mut options: Vec<String> = mount
.options
.iter()
.filter(|o| *o != "loop")
.cloned()
.collect();
// RW layer is the writable upper layer (marked with X-kata.overlay-upper)
options.push("X-kata.overlay-upper".to_string());
options.push("X-kata.multi-layer=true".to_string());
// Set up storage for rw layer (upper layer)
rwlayer_storage = Some(Storage {
driver: blk_driver,
source: guest_path.clone(),
fs_type: RW_LAYER_ROOTFS_TYPE.to_string(),
mount_point: container_path.clone(),
options,
..Default::default()
});
device_ids.push(device_id);
}
fmt if fmt.eq_ignore_ascii_case(EROFS_ROOTFS_TYPE) => {
// Mount[1]: erofs layers -> virtio-blk via VMDK /dev/vdX2
info!(
sl!(),
"multi-layer erofs: adding erofs layers: {}", mount.source
);
// Collect all EROFS devices: source + `device=` options
let mut erofs_devices = vec![mount.source.clone()];
for opt in &mount.options {
if let Some(device_path) = opt.strip_prefix("device=") {
erofs_devices.push(device_path.to_string());
}
}
info!(sl!(), "EROFS devices count: {}", erofs_devices.len());
// Generate merged VMDK file from all EROFS devices
// Returns (path, format) - format is Vmdk for multiple devices, Raw for single device
let (erofs_path, erofs_format) =
generate_merged_erofs_vmdk(sid, cid, &erofs_devices)
.await
.context("failed to generate EROFS VMDK")?;
// Track VMDK path for cleanup (only when VMDK is actually created)
if erofs_format == BlockDeviceFormat::Vmdk {
vmdk_path = Some(PathBuf::from(&erofs_path));
}
info!(
sl!(),
"EROFS block device config - path: {}, format: {:?}",
erofs_path,
erofs_format
);
let device_config = &mut BlockConfig {
driver_option: block_driver.clone(),
format: erofs_format, // Vmdk for multiple devices, Raw for single device
path_on_host: erofs_path,
blkdev_aio: BlockDeviceAio::new(&blkdev_info.block_device_aio),
is_readonly: true, // EROFS layer is read-only
..Default::default()
};
let device_info = do_handle_device(
device_manager,
&DeviceConfig::BlockCfg(device_config.clone()),
)
.await
.context("failed to attach erofs block device")?;
let (device_id, guest_path, blk_driver) =
extract_block_device_info(&device_info, &block_driver)?;
info!(
sl!(),
"erofs device attached - device_id: {} guest_path: {}",
device_id,
guest_path
);
let mut options: Vec<String> = mount
.options
.iter()
.filter(|o| {
// Filter out options that are not valid erofs mount parameters:
// 1. "loop" - not needed in VM, device is already /dev/vdX
// 2. "device=" prefix - used for VMDK generation only, not for mount
// 3. "X-kata." prefix - metadata markers for kata internals
*o != "loop" && !o.starts_with("device=") && !o.starts_with("X-kata.")
})
.cloned()
.collect();
// Erofs layers are read-only lower layers (marked with X-kata.overlay-lower)
options.push("X-kata.overlay-lower".to_string());
options.push("X-kata.multi-layer=true".to_string());
info!(
sl!(),
"erofs storage options filtered: {:?} -> {:?}", mount.options, options
);
erofs_storage = Some(Storage {
driver: blk_driver,
source: guest_path.clone(),
fs_type: EROFS_ROOTFS_TYPE.to_string(),
mount_point: container_path.clone(),
options,
..Default::default()
});
device_ids.push(device_id);
}
fmt if fmt.eq_ignore_ascii_case("overlay")
|| fmt.eq_ignore_ascii_case("format/overlay")
|| fmt.eq_ignore_ascii_case("format/mkdir/overlay") =>
{
// Mount[2]: overlay to combine rwlayer (upper) + erofs (lower)
info!(
sl!(),
"multi-layer erofs: parsing overlay mount, options: {:?}", mount.options
);
// Parse mkdir options (X-containerd.mkdir.path)
for opt in &mount.options {
if let Some(mkdir_spec) = opt.strip_prefix(X_CONTAINERD_MKDIR_PATH) {
// Keep the full spec (path:mode or path:mode:uid:gid) for guest agent
mkdir_dirs.push(mkdir_spec.to_string());
}
}
}
_ => {
info!(
sl!(),
"multi-layer erofs: ignoring unknown mount type: {}", mount.fs_type
);
}
}
}
if device_ids.is_empty() {
return Err(anyhow!("no devices attached for multi-layer erofs rootfs"));
}
// Check device count limit
if device_ids.len() > MAX_VIRTIO_BLK_DEVICES {
return Err(anyhow!(
"exceeded maximum virtio disk count: {} > {}",
device_ids.len(),
MAX_VIRTIO_BLK_DEVICES
));
}
// Add mkdir directives to rwlayer storage options for guest agent
if let Some(ref mut rwlayer) = rwlayer_storage {
rwlayer.options.extend(
mkdir_dirs
.iter()
.map(|dir| format!("{}{}", X_KATA_MKDIR_PATH, dir)),
);
}
Ok(Self {
guest_path: container_path,
device_ids,
mount: oci::Mount::default(),
rwlayer_storage,
erofs_storage,
vmdk_path,
})
}
}
#[async_trait]
impl Rootfs for ErofsMultiLayerRootfs {
async fn get_guest_rootfs_path(&self) -> Result<String> {
Ok(self.guest_path.clone())
}
async fn get_rootfs_mount(&self) -> Result<Vec<oci::Mount>> {
Ok(vec![self.mount.clone()])
}
async fn get_storage(&self) -> Option<Vec<Storage>> {
// Return all storages for multi-layer EROFS (rw layer + erofs layer) to guest agent.
// Guest agent needs both to create overlay mount
let mut storages = Vec::new();
if let Some(rwlayer) = self.rwlayer_storage.clone() {
storages.push(rwlayer);
}
if let Some(erofs) = self.erofs_storage.clone() {
storages.push(erofs);
}
if storages.is_empty() {
None
} else {
Some(storages)
}
}
async fn get_device_id(&self) -> Result<Option<String>> {
Ok(self.device_ids.first().cloned())
}
async fn cleanup(&self, device_manager: &RwLock<DeviceManager>) -> Result<()> {
let mut dm = device_manager.write().await;
for device_id in &self.device_ids {
dm.try_remove_device(device_id).await?;
}
// Clean up generated VMDK descriptor file if it exists (only for multi-device case)
if let Some(ref vmdk) = self.vmdk_path {
if vmdk.exists() {
if let Err(e) = fs::remove_file(vmdk) {
warn!(
sl!(),
"failed to remove VMDK descriptor {}: {}",
vmdk.display(),
e
);
}
}
}
Ok(())
}
}
/// Check if mounts represent a multi-layer EROFS rootfs (with or without `device=` options):
/// - Must have at least 2 mounts (rw layer + erofs layer)
/// - Multi-layer: erofs with `device=` options
/// - Single-layer: erofs without `device=` options (just layer.erofs)
pub fn is_erofs_multi_layer(rootfs_mounts: &[Mount]) -> bool {
if rootfs_mounts.len() < 2 {
return false;
}
let has_rwlayer = rootfs_mounts.iter().any(|m| {
m.fs_type.eq_ignore_ascii_case(RW_LAYER_ROOTFS_TYPE) && m.options.iter().any(|o| o == "rw")
});
let has_erofs = rootfs_mounts
.iter()
.any(|m| m.fs_type.eq_ignore_ascii_case(EROFS_ROOTFS_TYPE));
// Must have rwlayer + erofs (multi-layer or single-layer)
has_rwlayer && has_erofs
}
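The detection rule can be sketched against a minimal stand-in for `kata_types::mount::Mount` (hypothetical struct, illustration only):

```rust
/// Minimal stand-in for kata_types::mount::Mount (illustration only).
struct Mount {
    fs_type: String,
    options: Vec<String>,
}

/// Same rule as is_erofs_multi_layer(): at least two mounts, one writable
/// ext4 layer (marked "rw") and at least one erofs layer.
fn is_multi_layer(mounts: &[Mount]) -> bool {
    if mounts.len() < 2 {
        return false;
    }
    let has_rw = mounts.iter().any(|m| {
        m.fs_type.eq_ignore_ascii_case("ext4") && m.options.iter().any(|o| o == "rw")
    });
    let has_erofs = mounts
        .iter()
        .any(|m| m.fs_type.eq_ignore_ascii_case("erofs"));
    has_rw && has_erofs
}
```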

View File

@@ -11,6 +11,7 @@ use anyhow::{anyhow, Context, Result};
use async_trait::async_trait;
use kata_types::mount::Mount;
mod block_rootfs;
mod erofs_rootfs;
pub mod virtual_volume;
use hypervisor::{device::device_manager::DeviceManager, Hypervisor};
@@ -19,8 +20,11 @@ use virtual_volume::{is_kata_virtual_volume, VirtualVolume};
use std::{collections::HashMap, sync::Arc, vec::Vec};
use tokio::sync::RwLock;
use self::{block_rootfs::is_block_rootfs, nydus_rootfs::NYDUS_ROOTFS_TYPE};
use crate::share_fs::ShareFs;
use self::{
block_rootfs::is_block_rootfs, erofs_rootfs::ErofsMultiLayerRootfs,
nydus_rootfs::NYDUS_ROOTFS_TYPE,
};
use crate::{rootfs::erofs_rootfs::is_erofs_multi_layer, share_fs::ShareFs};
use oci_spec::runtime as oci;
const ROOTFS: &str = "rootfs";
@@ -31,7 +35,7 @@ const TYPE_OVERLAY_FS: &str = "overlay";
pub trait Rootfs: Send + Sync {
async fn get_guest_rootfs_path(&self) -> Result<String>;
async fn get_rootfs_mount(&self) -> Result<Vec<oci::Mount>>;
async fn get_storage(&self) -> Option<Storage>;
async fn get_storage(&self) -> Option<Vec<Storage>>;
async fn cleanup(&self, device_manager: &RwLock<DeviceManager>) -> Result<()>;
async fn get_device_id(&self) -> Result<Option<String>>;
}
@@ -90,9 +94,26 @@ impl RootFsResource {
Err(anyhow!("share fs is unavailable"))
}
}
mounts_vec if is_single_layer_rootfs(mounts_vec) => {
_ if is_erofs_multi_layer(rootfs_mounts) => {
info!(
sl!(),
"handling multi-layer erofs rootfs with {} mounts",
rootfs_mounts.len()
);
let multi_layer =
ErofsMultiLayerRootfs::new(device_manager, sid, cid, rootfs_mounts, share_fs)
.await
.context("new multi-layer erofs rootfs")?;
let ret = Arc::new(multi_layer);
let mut inner = self.inner.write().await;
inner.rootfs.push(ret.clone());
Ok(ret)
}
_ if is_single_layer_rootfs(rootfs_mounts) => {
// Safe as single_layer_rootfs must have one layer
let layer = &mounts_vec[0];
let layer = &rootfs_mounts[0];
let mut inner = self.inner.write().await;
if is_guest_pull_volume(share_fs, layer) {

View File

@@ -149,8 +149,8 @@ impl Rootfs for NydusRootfs {
Ok(vec![])
}
async fn get_storage(&self) -> Option<Storage> {
Some(self.rootfs.clone())
async fn get_storage(&self) -> Option<Vec<Storage>> {
Some(vec![self.rootfs.clone()])
}
async fn get_device_id(&self) -> Result<Option<String>> {


@@ -73,7 +73,7 @@ impl Rootfs for ShareFsRootfs {
todo!()
}
async fn get_storage(&self) -> Option<Storage> {
async fn get_storage(&self) -> Option<Vec<Storage>> {
None
}


@@ -15,6 +15,7 @@ use oci_spec::runtime as oci;
use serde_json;
use tokio::sync::RwLock;
use agent::Storage;
use hypervisor::device::device_manager::DeviceManager;
use kata_types::{
annotations,
@@ -184,8 +185,8 @@ impl super::Rootfs for VirtualVolume {
Ok(vec![])
}
async fn get_storage(&self) -> Option<agent::Storage> {
Some(self.storages[0].clone())
async fn get_storage(&self) -> Option<Vec<Storage>> {
Some(self.storages.clone())
}
async fn get_device_id(&self) -> Result<Option<String>> {


@@ -167,8 +167,8 @@ impl Container {
);
let mut storages = vec![];
if let Some(storage) = rootfs.get_storage().await {
storages.push(storage);
if let Some(mut storage_list) = rootfs.get_storage().await {
storages.append(&mut storage_list);
}
inner.rootfs.push(rootfs);

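Since `get_storage()` now returns `Option<Vec<Storage>>`, each rootfs can contribute any number of storages and the container code above simply appends them. A self-contained sketch of that aggregation pattern (`Storage` is stubbed as `String` here, since `agent::Storage` is out of scope for the snippet):

```rust
// Stubbed storage type; the real code uses agent::Storage.
type Storage = String;

// Flatten the per-rootfs Option<Vec<Storage>> results into one list,
// mirroring the `if let Some(mut storage_list)` loop in Container.
fn collect_storages(rootfs_storages: Vec<Option<Vec<Storage>>>) -> Vec<Storage> {
    let mut storages = vec![];
    for entry in rootfs_storages {
        if let Some(mut storage_list) = entry {
            storages.append(&mut storage_list);
        }
    }
    storages
}
```

Rootfs implementations that expose no storage (e.g. `ShareFsRootfs`) return `None` and contribute nothing; multi-layer implementations return one entry per layer.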

@@ -26,6 +26,10 @@ use shim::{config, Args, Error, ShimExecutor};
const DEFAULT_TOKIO_RUNTIME_WORKER_THREADS: usize = 2;
// env to config tokio runtime worker threads
const ENV_TOKIO_RUNTIME_WORKER_THREADS: &str = "TOKIO_RUNTIME_WORKER_THREADS";
// RUNTIME_ALLOW_MOUNTS is the annotation key listing the custom mount types
// allowed by the runtime. These types should not be handled by the mount
// manager. To allow a whole family of mount types, use a "/*" suffix, such
// as "format/*".
pub const RUNTIME_ALLOW_MOUNTS: &str = "containerd.io/runtime-allow-mounts";
#[derive(Debug)]
enum Action {
@@ -134,6 +138,10 @@ fn show_info() -> Result<()> {
let mut info = RuntimeInfo::new();
info.name = config::CONTAINERD_RUNTIME_NAME.to_string();
info.version = Some(version).into();
info.annotations.insert(
RUNTIME_ALLOW_MOUNTS.to_string(),
"mkdir/*,format/*,erofs".to_string(),
);
let data = info
.write_to_bytes()

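The annotation value advertised above is `"mkdir/*,format/*,erofs"`. A hypothetical helper illustrating how such an allow-list could be matched against a mount type, honoring the `"/*"` suffix wildcard (the actual containerd-side matching logic may differ):

```rust
// Check whether `mount_type` is permitted by a comma-separated allow-list
// such as "mkdir/*,format/*,erofs". An entry ending in "/*" matches the
// bare prefix and any "prefix/..." subtype; other entries match exactly.
fn mount_type_allowed(allow_list: &str, mount_type: &str) -> bool {
    allow_list.split(',').any(|entry| {
        if let Some(prefix) = entry.strip_suffix("/*") {
            mount_type == prefix
                || mount_type
                    .strip_prefix(prefix)
                    .map_or(false, |rest| rest.starts_with('/'))
        } else {
            entry == mount_type
        }
    })
}
```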

@@ -64,6 +64,7 @@ memdisk
pmem
Sharedfs
Initdata
fsmerged
# Networking & Communication
netns


@@ -30,6 +30,20 @@ pub async fn configure_erofs_snapshotter(
"[\"erofs\",\"walking\"]",
)?;
//// Configure erofs differ plugin
//// mkfs_options requires erofs-utils >= 1.8.2, hence left commented out:
//toml_utils::set_toml_value(
// configuration_file,
// ".plugins.\"io.containerd.differ.v1.erofs\".mkfs_options",
// "[\"-T0\",\"--mkfs-time\",\"--sort=none\"]",
//)?;
toml_utils::set_toml_value(
configuration_file,
".plugins.\"io.containerd.differ.v1.erofs\".enable_tar_index",
"false",
)?;
// Configure erofs snapshotter plugin
toml_utils::set_toml_value(
configuration_file,
".plugins.\"io.containerd.snapshotter.v1.erofs\".enable_fsverity",
@@ -40,6 +54,16 @@ pub async fn configure_erofs_snapshotter(
".plugins.\"io.containerd.snapshotter.v1.erofs\".set_immutable",
"true",
)?;
toml_utils::set_toml_value(
configuration_file,
".plugins.\"io.containerd.snapshotter.v1.erofs\".default_size",
"\"10G\"",
)?;
toml_utils::set_toml_value(
configuration_file,
".plugins.\"io.containerd.snapshotter.v1.erofs\".max_unmerged_layers",
"1",
)?;
Ok(())
}
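Applied to a stock containerd 2.x configuration, the `set_toml_value` calls above should yield roughly this fragment (only the keys and values visible in these hunks are shown; `enable_fsverity` and the commented-out `mkfs_options` are omitted):

```toml
[plugins."io.containerd.differ.v1.erofs"]
  enable_tar_index = false

[plugins."io.containerd.snapshotter.v1.erofs"]
  set_immutable = true
  default_size = "10G"
  max_unmerged_layers = 1
```

These values match the ones documented in `docs/how-to/how-to-use-fsmerged-erofs-with-kata.md` Step 2, per the commit message.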