runtime-rs: virtio-fs: plumb virtio_fs_queue_size to qemu/CH

The shared filesystem device builder in `prepare_virtiofs` was
hardcoding `queue_size = 0` and `queue_num = 0` on the `ShareFsConfig`
it hands to the hypervisor, entirely ignoring the
`SharedFsInfo.virtio_fs_queue_size` value parsed from
`configuration.toml`.
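
For reference, the knob being dropped lives in `configuration.toml`;
a representative excerpt (section name shown for qemu, values
illustrative):

    [hypervisor.qemu]
    shared_fs = "virtio-fs"
    # Parsed into SharedFsInfo.virtio_fs_queue_size, but before this
    # change the value was never forwarded to the VMM.
    virtio_fs_queue_size = 1024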

For qemu, this is silently broken: the cmdline generator's
`DeviceVhostUserFs::set_queue_size` treats 0 as "not set" and skips the
`queue-size=` argument when emitting the `vhost-user-fs-pci` device, so
QEMU falls back to its built-in default of 128, regardless of what the
user configured.
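
To make the failure mode concrete, here is a minimal sketch of that
treat-zero-as-unset pattern. The type and method names mirror the
cmdline generator; the bodies are illustrative, not the actual
implementation:

    // Illustrative only: mirrors the shape of DeviceVhostUserFs's
    // queue-size handling, not the real cmdline generator code.
    struct DeviceVhostUserFs {
        queue_size: u64,
    }

    impl DeviceVhostUserFs {
        fn set_queue_size(&mut self, queue_size: u64) {
            self.queue_size = queue_size;
        }

        fn qemu_params(&self) -> String {
            let mut params = "vhost-user-fs-pci".to_owned();
            // 0 is treated as "not set": queue-size= is skipped and
            // QEMU silently falls back to its built-in default of 128.
            if self.queue_size != 0 {
                params += &format!(",queue-size={}", self.queue_size);
            }
            params
        }
    }

    fn main() {
        let mut dev = DeviceVhostUserFs { queue_size: 0 };
        // No queue-size= emitted; QEMU will use 128 regardless of the
        // toml setting.
        assert_eq!(dev.qemu_params(), "vhost-user-fs-pci");
        dev.set_queue_size(1024);
        assert_eq!(dev.qemu_params(), "vhost-user-fs-pci,queue-size=1024");
    }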

For Cloud Hypervisor it happens to work in practice today, but only
because `ch::handle_share_fs_device` and `TryFrom<ShareFsSettings> for
FsConfig` substitute a hardcoded 1024 when the incoming
`queue_num`/`queue_size` are zero. That fallback masks the real bug; the
toml value still never reaches the VMM.
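
The masking fallback has roughly this shape (simplified stand-in
types; only the zero-check and the hardcoded 1024 reflect the actual
behavior described above):

    // Illustrative only: simplified stand-in for the CH-side
    // conversion, not the real handle_share_fs_device/TryFrom code.
    const DEFAULT_QUEUE: u64 = 1024;

    struct FsConfig {
        num_queues: u64,
        queue_size: u64,
    }

    fn to_fs_config(queue_num: u64, queue_size: u64) -> FsConfig {
        FsConfig {
            // Zero is treated as "unset" and replaced with a hardcoded
            // 1024, so the device comes up even though the toml value
            // was dropped upstream. That is exactly what hid the bug.
            num_queues: if queue_num == 0 { DEFAULT_QUEUE } else { queue_num },
            queue_size: if queue_size == 0 { DEFAULT_QUEUE } else { queue_size },
        }
    }

    fn main() {
        let cfg = to_fs_config(0, 0);
        assert_eq!((cfg.num_queues, cfg.queue_size), (1024, 1024));
    }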

Add a `get_shared_fs_info` accessor on `DeviceManager` mirroring the
existing `get_block_device_info` helper, and use it in
`prepare_virtiofs` to populate `ShareFsConfig.queue_size` from
`SharedFsInfo.virtio_fs_queue_size`. Use a single virtqueue
(`queue_num = 1`), matching what runtime-go hardcodes for both qemu
(govmm `QemuFSParams` does not emit `num-queues=`) and CH
(`numQueues := int32(1)` in `clh.go`).

The CH-side fallback and the CH config template are addressed in a
follow-up commit.

Signed-off-by: Aurélien Bombo <abombo@microsoft.com>

@@ -8,7 +8,7 @@ use std::{collections::HashMap, sync::Arc};
 
 use anyhow::{anyhow, Context, Result};
 use kata_sys_util::rand::RandomBytes;
-use kata_types::config::hypervisor::{BlockDeviceInfo, TopologyConfigInfo, VIRTIO_SCSI};
+use kata_types::config::hypervisor::{BlockDeviceInfo, SharedFsInfo, TopologyConfigInfo, VIRTIO_SCSI};
 use tokio::sync::{Mutex, RwLock};
 
 use crate::{
@@ -121,6 +121,10 @@ impl DeviceManager {
         self.hypervisor.hypervisor_config().await.blockdev_info
     }
 
+    async fn get_shared_fs_info(&self) -> SharedFsInfo {
+        self.hypervisor.hypervisor_config().await.shared_fs
+    }
+
     async fn try_add_device(&mut self, device_id: &str) -> Result<()> {
         // find the device
         let device = self
@@ -623,6 +627,10 @@ pub async fn get_block_device_info(d: &RwLock<DeviceManager>) -> BlockDeviceInfo
     d.read().await.get_block_device_info().await
 }
 
+pub async fn get_shared_fs_info(d: &RwLock<DeviceManager>) -> SharedFsInfo {
+    d.read().await.get_shared_fs_info().await
+}
+
 #[cfg(test)]
 mod tests {
     use super::DeviceManager;

@@ -12,7 +12,7 @@ use tokio::sync::RwLock;
 
 use hypervisor::{
     device::{
-        device_manager::{do_handle_device, do_update_device, DeviceManager},
+        device_manager::{do_handle_device, do_update_device, get_shared_fs_info, DeviceManager},
         driver::{ShareFsMountConfig, ShareFsMountOperation, ShareFsMountType},
         DeviceConfig,
     },
@@ -54,8 +54,12 @@ pub(crate) async fn prepare_virtiofs(
         sock_path: generate_sock_path(root),
         mount_tag: String::from(MOUNT_GUEST_TAG),
         fs_type: fs_type.to_string(),
-        queue_size: 0,
-        queue_num: 0,
+        // Pull virtio-fs queue size from the hypervisor config so the value
+        // configured via `virtio_fs_queue_size` in the toml actually reaches
+        // the VMM device line. There is currently no toml knob for the number
+        // of virtqueues, so we use a single queue (matching the prior default).
+        queue_size: get_shared_fs_info(d).await.virtio_fs_queue_size as u64,
+        queue_num: 1,
         options: vec![],
         mount_config: None,
     };