Compare commits

...

16 Commits

Author SHA1 Message Date
Fupan Li
5e3917fdb2 dragonball: temp debug
temp debug for dragonball

Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
2026-03-22 21:25:43 +08:00
Fupan Li
c6e622f5fa CI: temp debug
a temp debug

Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
2026-03-17 21:26:04 +08:00
Alex Lyn
cf0c064a0a CI: Try to get kata log
DO-NOT-MERGE: Just for debugging

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-03-17 16:39:20 +08:00
Fupan Li
eabb98ecab dragonball: fix the type mismatch caused by the API change
For aarch64, the code should match the updated get_one_reg and
set_one_reg API signatures.

Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
2026-03-16 14:41:53 +08:00
Fupan Li
c1b7069e50 tools: fix the genpolicy build issue
Add the new helper item brought in by the cargo bump

Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
2026-03-12 10:58:04 +00:00
Fupan Li
fddd1e8b6e dragonball: update the Cargo.lock and remove the unused crate
Update the Cargo.lock and remove the unused crate

Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
2026-03-12 10:58:04 +00:00
Fupan Li
d6178d78b1 dragonball: fix the TCP test address (127.0.0.2 -> 127.0.0.1)
Fix TCP test addresses from 127.0.0.2 to 127.0.0.1 for vsock backend
tests

Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
2026-03-12 10:58:04 +00:00
Fupan Li
1c7b14e282 dragonball: Fix the feature gating for host devices
Fix feature-gating for PCI/virtio in dragonball
device_manager and mptable test

Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
2026-03-12 10:58:04 +00:00
Fupan Li
e9bda42b01 dragonball: fix the failing unit tests
Fix dragonball make check: clippy and format errors

Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
2026-03-12 10:58:03 +00:00
Fupan Li
a66c93caaa dragonball: add GuestRegionCollection error
add GuestRegionCollection error variant with proper
error context preservation

Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
2026-03-12 10:58:03 +00:00
Fupan Li
17454c0969 dragonball: Fix remaining warnings
remove unused imports (ioctl_ioc_nr, std::io) that cause -D warnings failures

Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
2026-03-12 10:58:03 +00:00
Fupan Li
f8617241f4 dragonball: Fix compilation errors for upgraded dependencies
Fix dragonball compilation errors for upgraded
dependencies (vm-memory 0.17.1, kvm-ioctls 0.24.0, vfio-ioctls 0.5.2)

Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
2026-03-12 10:58:03 +00:00
Fupan Li
d0f0dc2008 dragonball: fix the dbs-virtio-devices compile errors
Update dbs-virtio-devices to compile with:
- virtio-bindings 0.2.x: VIRTIO_F_VERSION_1, VIRTIO_F_NOTIFY_ON_EMPTY,
  VIRTIO_F_RING_PACKED moved from virtio_blk/virtio_net/virtio_ring to
  virtio_config module.
- virtio-queue 0.17.0: Descriptor no longer exported at top level, use
  desc::split::Descriptor instead.
- vhost 0.15.0: Master->Frontend, VhostUserMaster->VhostUserFrontend,
  MasterReqHandler->FrontendReqHandler,
  VhostUserMasterReqHandler->VhostUserFrontendReqHandler,
  SLAVE_REQ->BACKEND_REQ, SLAVE_SEND_FD->BACKEND_SEND_FD,
  set_slave_request_fd->set_backend_request_fd.
  FS slave messages (VhostUserFSSlaveMsg etc.) removed from vhost crate;
  SlaveReqHandler now implements VhostUserFrontendReqHandler with
  handle_config_change only.
- fuse-backend-rs 0.14.0: Handle CachePolicy::Metadata variant,
  fix get_rootfs() returning tuple, use buffer-based I/O for Ufile
  since ReadVolatile/WriteVolatile are not implemented for Box<dyn Ufile>.
- vm-memory 0.17.1: GuestRegionMmap::new returns Option instead of
  Result, mmap::Error removed.

Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
2026-03-12 10:58:03 +00:00
Fupan Li
3e39c1fad3 dragonball: fix issues from the kvm-bindings upgrade
Fix the compile errors caused by the kvm-bindings and
kvm-ioctls upgrades.

Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
2026-03-12 10:58:03 +00:00
Fupan Li
a6a81124cb runtime-rs: adapt to the vm-memory 0.17.1 API changes
Rename vm-memory GuestMemory methods for the 0.17.1 upgrade:
- Rename read_from -> read_volatile_from, write_to -> write_volatile_to,
  read_exact_from -> read_exact_volatile_from, and write_all_to ->
  write_all_volatile_to across all dragonball Rust source files.
- Change the bitmap() return type from &Self::B to BS<'_, Self::B>.
- Move as_slice/as_mut_slice from the GuestMemoryRegion trait impl to an
  inherent impl block, using get_host_address for mmap regions.
- Update the GuestMemory impl: remove type I and use an impl Iterator
  return type.
- Replace Error with GuestRegionCollectionError for region collection errors.
- Fix the VolatileSlice::with_bitmap call to include the mmap parameter.
- Fix a test: use ptr_guard().as_ptr() instead of the removed as_ptr().

Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
2026-03-12 10:58:03 +00:00
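The volatile-I/O migration above often falls back to staging transfers through an intermediate buffer, as the virtio-blk change later in this series does. A minimal pure-std sketch of that pattern, using a plain byte slice as a hypothetical stand-in for a guest memory region (the real code works against vm-memory's GuestMemory traits):

```rust
use std::io::{Cursor, Read, Write};

// Hypothetical stand-in for a guest memory region: a plain byte buffer.
// When the streaming `read_from`/`write_to` helpers are replaced by the
// volatile variants, one portable fallback is to stage the transfer
// through an intermediate buffer.
fn copy_into_guest(
    guest: &mut [u8],
    addr: usize,
    src: &mut impl Read,
    count: usize,
) -> std::io::Result<()> {
    let mut buf = vec![0u8; count];
    src.read_exact(&mut buf)?; // read from the backing file first
    guest[addr..addr + count].copy_from_slice(&buf); // then copy into guest memory
    Ok(())
}

fn copy_from_guest(
    guest: &[u8],
    addr: usize,
    dst: &mut impl Write,
    count: usize,
) -> std::io::Result<()> {
    dst.write_all(&guest[addr..addr + count]) // stream guest bytes out
}

fn main() -> std::io::Result<()> {
    let mut guest = vec![0u8; 16];
    let mut disk = Cursor::new(b"hello".to_vec());
    copy_into_guest(&mut guest, 4, &mut disk, 5)?;
    let mut out = Vec::new();
    copy_from_guest(&guest, 4, &mut out, 5)?;
    assert_eq!(out, b"hello");
    println!("{}", String::from_utf8_lossy(&out));
    Ok(())
}
```

The double copy costs a buffer allocation per request, which is why vm-memory's ReadVolatile/WriteVolatile traits are preferable when the backing type implements them.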
Fupan Li
8d09a0e7e7 runtime-rs: Bump the rust-vmm related crates
vm-memory 0.10.0 → =0.17.1
vmm-sys-util 0.11.0 → 0.15.0
kvm-bindings 0.6.0 → 0.14.0
kvm-ioctls =0.12.1 → 0.24.0
virtio-queue 0.7.0 → 0.17.0
virtio-bindings 0.1.0 → 0.2.0
fuse-backend-rs 0.10.5 → 0.14.0

Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
2026-03-12 10:58:03 +00:00
47 changed files with 1567 additions and 1380 deletions

Cargo.lock (generated, 1941 lines)

File diff suppressed because it is too large


@@ -56,19 +56,19 @@ exclude = [
[workspace.dependencies]
# Rust-VMM crates
event-manager = "0.2.1"
kvm-bindings = "0.6.0"
kvm-ioctls = "=0.12.1"
linux-loader = "0.8.0"
event-manager = "0.4.0"
kvm-bindings = "0.14.0"
kvm-ioctls = "0.24.0"
linux-loader = "0.13.0"
seccompiler = "0.5.0"
vfio-bindings = "0.3.0"
vfio-ioctls = "0.1.0"
virtio-bindings = "0.1.0"
virtio-queue = "0.7.0"
vm-fdt = "0.2.0"
vm-memory = "0.10.0"
vm-superio = "0.5.0"
vmm-sys-util = "0.11.0"
vfio-bindings = "0.6.1"
vfio-ioctls = "0.5.0"
virtio-bindings = "0.2.0"
virtio-queue = "0.17.0"
vm-fdt = "0.3.0"
vm-memory = "=0.17.1"
vm-superio = "0.8.0"
vmm-sys-util = "0.15.0"
# Local dependencies from Dragonball Sandbox crates
dragonball = { path = "src/dragonball" }


@@ -50,6 +50,7 @@ vm-memory = { workspace = true, features = ["backend-mmap"] }
crossbeam-channel = "0.5.6"
vfio-bindings = { workspace = true, optional = true }
vfio-ioctls = { workspace = true, optional = true }
kata-sys-util = { path = "../libs/kata-sys-util" }
[dev-dependencies]
slog-async = "2.7.0"


@@ -1,16 +1,15 @@
// Copyright (C) 2022 Alibaba Cloud. All rights reserved.
// SPDX-License-Identifier: Apache-2.0
use std::io::{Read, Write};
use std::sync::atomic::Ordering;
use std::sync::Arc;
use vm_memory::bitmap::{Bitmap, BS};
use vm_memory::guest_memory::GuestMemoryIterator;
use vm_memory::mmap::{Error, NewBitmap};
use vm_memory::mmap::NewBitmap;
use vm_memory::{
guest_memory, AtomicAccess, Bytes, FileOffset, GuestAddress, GuestMemory, GuestMemoryRegion,
GuestRegionMmap, GuestUsize, MemoryRegionAddress, VolatileSlice,
GuestRegionCollectionError, GuestRegionMmap, GuestUsize, MemoryRegionAddress, ReadVolatile,
VolatileSlice, WriteVolatile,
};
use crate::GuestRegionRaw;
@@ -67,63 +66,63 @@ impl<B: Bitmap> Bytes<MemoryRegionAddress> for GuestRegionHybrid<B> {
}
}
fn read_from<F>(
fn read_volatile_from<F>(
&self,
addr: MemoryRegionAddress,
src: &mut F,
count: usize,
) -> guest_memory::Result<usize>
where
F: Read,
F: ReadVolatile,
{
match self {
GuestRegionHybrid::Mmap(region) => region.read_from(addr, src, count),
GuestRegionHybrid::Raw(region) => region.read_from(addr, src, count),
GuestRegionHybrid::Mmap(region) => region.read_volatile_from(addr, src, count),
GuestRegionHybrid::Raw(region) => region.read_volatile_from(addr, src, count),
}
}
fn read_exact_from<F>(
fn read_exact_volatile_from<F>(
&self,
addr: MemoryRegionAddress,
src: &mut F,
count: usize,
) -> guest_memory::Result<()>
where
F: Read,
F: ReadVolatile,
{
match self {
GuestRegionHybrid::Mmap(region) => region.read_exact_from(addr, src, count),
GuestRegionHybrid::Raw(region) => region.read_exact_from(addr, src, count),
GuestRegionHybrid::Mmap(region) => region.read_exact_volatile_from(addr, src, count),
GuestRegionHybrid::Raw(region) => region.read_exact_volatile_from(addr, src, count),
}
}
fn write_to<F>(
fn write_volatile_to<F>(
&self,
addr: MemoryRegionAddress,
dst: &mut F,
count: usize,
) -> guest_memory::Result<usize>
where
F: Write,
F: WriteVolatile,
{
match self {
GuestRegionHybrid::Mmap(region) => region.write_to(addr, dst, count),
GuestRegionHybrid::Raw(region) => region.write_to(addr, dst, count),
GuestRegionHybrid::Mmap(region) => region.write_volatile_to(addr, dst, count),
GuestRegionHybrid::Raw(region) => region.write_volatile_to(addr, dst, count),
}
}
fn write_all_to<F>(
fn write_all_volatile_to<F>(
&self,
addr: MemoryRegionAddress,
dst: &mut F,
count: usize,
) -> guest_memory::Result<()>
where
F: Write,
F: WriteVolatile,
{
match self {
GuestRegionHybrid::Mmap(region) => region.write_all_to(addr, dst, count),
GuestRegionHybrid::Raw(region) => region.write_all_to(addr, dst, count),
GuestRegionHybrid::Mmap(region) => region.write_all_volatile_to(addr, dst, count),
GuestRegionHybrid::Raw(region) => region.write_all_volatile_to(addr, dst, count),
}
}
@@ -168,7 +167,7 @@ impl<B: Bitmap> GuestMemoryRegion for GuestRegionHybrid<B> {
}
}
fn bitmap(&self) -> &Self::B {
fn bitmap(&self) -> BS<'_, Self::B> {
match self {
GuestRegionHybrid::Mmap(region) => region.bitmap(),
GuestRegionHybrid::Raw(region) => region.bitmap(),
@@ -189,20 +188,6 @@ impl<B: Bitmap> GuestMemoryRegion for GuestRegionHybrid<B> {
}
}
unsafe fn as_slice(&self) -> Option<&[u8]> {
match self {
GuestRegionHybrid::Mmap(region) => region.as_slice(),
GuestRegionHybrid::Raw(region) => region.as_slice(),
}
}
unsafe fn as_mut_slice(&self) -> Option<&mut [u8]> {
match self {
GuestRegionHybrid::Mmap(region) => region.as_mut_slice(),
GuestRegionHybrid::Raw(region) => region.as_mut_slice(),
}
}
fn get_slice(
&self,
offset: MemoryRegionAddress,
@@ -223,6 +208,39 @@ impl<B: Bitmap> GuestMemoryRegion for GuestRegionHybrid<B> {
}
}
impl<B: Bitmap> GuestRegionHybrid<B> {
/// Returns a slice corresponding to the region.
///
/// # Safety
/// This is safe because we mapped the area at addr ourselves, so this slice will not
/// overflow. However, it is possible to alias.
pub unsafe fn as_slice(&self) -> Option<&[u8]> {
match self {
GuestRegionHybrid::Mmap(region) => {
let addr = region.get_host_address(MemoryRegionAddress(0)).ok()?;
Some(std::slice::from_raw_parts(addr, region.len() as usize))
}
GuestRegionHybrid::Raw(region) => region.as_slice(),
}
}
/// Returns a mutable slice corresponding to the region.
///
/// # Safety
/// This is safe because we mapped the area at addr ourselves, so this slice will not
/// overflow. However, it is possible to alias.
#[allow(clippy::mut_from_ref)]
pub unsafe fn as_mut_slice(&self) -> Option<&mut [u8]> {
match self {
GuestRegionHybrid::Mmap(region) => {
let addr = region.get_host_address(MemoryRegionAddress(0)).ok()?;
Some(std::slice::from_raw_parts_mut(addr, region.len() as usize))
}
GuestRegionHybrid::Raw(region) => region.as_mut_slice(),
}
}
}
/// [`GuestMemory`](trait.GuestMemory.html) implementation that manage hybrid types of guest memory
/// regions.
///
@@ -248,7 +266,9 @@ impl<B: Bitmap> GuestMemoryHybrid<B> {
/// * `regions` - The vector of regions.
/// The regions shouldn't overlap and they should be sorted
/// by the starting address.
pub fn from_regions(mut regions: Vec<GuestRegionHybrid<B>>) -> Result<Self, Error> {
pub fn from_regions(
mut regions: Vec<GuestRegionHybrid<B>>,
) -> Result<Self, GuestRegionCollectionError> {
Self::from_arc_regions(regions.drain(..).map(Arc::new).collect())
}
@@ -264,9 +284,11 @@ impl<B: Bitmap> GuestMemoryHybrid<B> {
/// * `regions` - The vector of `Arc` regions.
/// The regions shouldn't overlap and they should be sorted
/// by the starting address.
pub fn from_arc_regions(regions: Vec<Arc<GuestRegionHybrid<B>>>) -> Result<Self, Error> {
pub fn from_arc_regions(
regions: Vec<Arc<GuestRegionHybrid<B>>>,
) -> Result<Self, GuestRegionCollectionError> {
if regions.is_empty() {
return Err(Error::NoMemoryRegion);
return Err(GuestRegionCollectionError::NoMemoryRegion);
}
for window in regions.windows(2) {
@@ -274,11 +296,11 @@ impl<B: Bitmap> GuestMemoryHybrid<B> {
let next = &window[1];
if prev.start_addr() > next.start_addr() {
return Err(Error::UnsortedMemoryRegions);
return Err(GuestRegionCollectionError::UnsortedMemoryRegions);
}
if prev.last_addr() >= next.start_addr() {
return Err(Error::MemoryRegionOverlap);
return Err(GuestRegionCollectionError::MemoryRegionOverlap);
}
}
@@ -292,7 +314,7 @@ impl<B: Bitmap> GuestMemoryHybrid<B> {
pub fn insert_region(
&self,
region: Arc<GuestRegionHybrid<B>>,
) -> Result<GuestMemoryHybrid<B>, Error> {
) -> Result<GuestMemoryHybrid<B>, GuestRegionCollectionError> {
let mut regions = self.regions.clone();
regions.push(region);
regions.sort_by_key(|x| x.start_addr());
@@ -310,7 +332,7 @@ impl<B: Bitmap> GuestMemoryHybrid<B> {
&self,
base: GuestAddress,
size: GuestUsize,
) -> Result<(GuestMemoryHybrid<B>, Arc<GuestRegionHybrid<B>>), Error> {
) -> Result<(GuestMemoryHybrid<B>, Arc<GuestRegionHybrid<B>>), GuestRegionCollectionError> {
if let Ok(region_index) = self.regions.binary_search_by_key(&base, |x| x.start_addr()) {
if self.regions.get(region_index).unwrap().len() as GuestUsize == size {
let mut regions = self.regions.clone();
@@ -319,32 +341,13 @@ impl<B: Bitmap> GuestMemoryHybrid<B> {
}
}
Err(Error::InvalidGuestRegion)
Err(GuestRegionCollectionError::NoMemoryRegion)
}
}
/// An iterator over the elements of `GuestMemoryHybrid`.
///
/// This struct is created by `GuestMemory::iter()`. See its documentation for more.
pub struct Iter<'a, B>(std::slice::Iter<'a, Arc<GuestRegionHybrid<B>>>);
impl<'a, B> Iterator for Iter<'a, B> {
type Item = &'a GuestRegionHybrid<B>;
fn next(&mut self) -> Option<Self::Item> {
self.0.next().map(AsRef::as_ref)
}
}
impl<'a, B: 'a> GuestMemoryIterator<'a, GuestRegionHybrid<B>> for GuestMemoryHybrid<B> {
type Iter = Iter<'a, B>;
}
impl<B: Bitmap + 'static> GuestMemory for GuestMemoryHybrid<B> {
type R = GuestRegionHybrid<B>;
type I = Self;
fn num_regions(&self) -> usize {
self.regions.len()
}
@@ -359,15 +362,15 @@ impl<B: Bitmap + 'static> GuestMemory for GuestMemoryHybrid<B> {
index.map(|x| self.regions[x].as_ref())
}
fn iter(&self) -> Iter<'_, B> {
Iter(self.regions.iter())
fn iter(&self) -> impl Iterator<Item = &GuestRegionHybrid<B>> {
self.regions.iter().map(AsRef::as_ref)
}
}
#[cfg(test)]
mod tests {
use super::*;
use std::io::Seek;
use std::io::{Read, Seek, Write};
use vm_memory::{GuestMemoryError, MmapRegion};
use vmm_sys_util::tempfile::TempFile;
@@ -654,14 +657,14 @@ mod tests {
// Rewind file pointer after write operation.
file_to_write_mmap_region.rewind().unwrap();
guest_region
.read_from(write_addr, &mut file_to_write_mmap_region, size_of_file)
.read_volatile_from(write_addr, &mut file_to_write_mmap_region, size_of_file)
.unwrap();
let mut file_read_from_mmap_region = TempFile::new().unwrap().into_file();
file_read_from_mmap_region
.set_len(size_of_file as u64)
.unwrap();
guest_region
.write_all_to(write_addr, &mut file_read_from_mmap_region, size_of_file)
.write_all_volatile_to(write_addr, &mut file_read_from_mmap_region, size_of_file)
.unwrap();
// Rewind file pointer after write operation.
file_read_from_mmap_region.rewind().unwrap();
@@ -679,7 +682,7 @@ mod tests {
let invalid_addr = MemoryRegionAddress(0x900);
assert!(matches!(
guest_region
.read_from(invalid_addr, &mut file_to_write_mmap_region, size_of_file)
.read_volatile_from(invalid_addr, &mut file_to_write_mmap_region, size_of_file)
.err()
.unwrap(),
GuestMemoryError::InvalidBackendAddress
@@ -689,7 +692,7 @@ mod tests {
let invalid_addr = MemoryRegionAddress(0x900);
assert!(matches!(
guest_region
.write_to(invalid_addr, &mut file_read_from_mmap_region, size_of_file)
.write_volatile_to(invalid_addr, &mut file_read_from_mmap_region, size_of_file)
.err()
.unwrap(),
GuestMemoryError::InvalidBackendAddress
@@ -719,14 +722,14 @@ mod tests {
// Rewind file pointer after write operation.
file_to_write_mmap_region.rewind().unwrap();
guest_region
.read_from(write_addr, &mut file_to_write_mmap_region, size_of_file)
.read_volatile_from(write_addr, &mut file_to_write_mmap_region, size_of_file)
.unwrap();
let mut file_read_from_mmap_region = TempFile::new().unwrap().into_file();
file_read_from_mmap_region
.set_len(size_of_file as u64)
.unwrap();
guest_region
.write_all_to(write_addr, &mut file_read_from_mmap_region, size_of_file)
.write_all_volatile_to(write_addr, &mut file_read_from_mmap_region, size_of_file)
.unwrap();
// Rewind file pointer after write operation.
file_read_from_mmap_region.rewind().unwrap();
@@ -744,7 +747,7 @@ mod tests {
let invalid_addr = MemoryRegionAddress(0x900);
assert!(matches!(
guest_region
.read_from(invalid_addr, &mut file_to_write_mmap_region, size_of_file)
.read_volatile_from(invalid_addr, &mut file_to_write_mmap_region, size_of_file)
.err()
.unwrap(),
GuestMemoryError::InvalidBackendAddress
@@ -754,7 +757,7 @@ mod tests {
let invalid_addr = MemoryRegionAddress(0x900);
assert!(matches!(
guest_region
.write_to(invalid_addr, &mut file_read_from_mmap_region, size_of_file)
.write_volatile_to(invalid_addr, &mut file_read_from_mmap_region, size_of_file)
.err()
.unwrap(),
GuestMemoryError::InvalidBackendAddress
@@ -788,14 +791,14 @@ mod tests {
.unwrap();
file_to_write_mmap_region.rewind().unwrap();
guest_mmap_region
.read_exact_from(write_addr, &mut file_to_write_mmap_region, size_of_file)
.read_exact_volatile_from(write_addr, &mut file_to_write_mmap_region, size_of_file)
.unwrap();
let mut file_read_from_mmap_region = TempFile::new().unwrap().into_file();
file_read_from_mmap_region
.set_len(size_of_file as u64)
.unwrap();
guest_mmap_region
.write_all_to(write_addr, &mut file_read_from_mmap_region, size_of_file)
.write_all_volatile_to(write_addr, &mut file_read_from_mmap_region, size_of_file)
.unwrap();
file_read_from_mmap_region.rewind().unwrap();
let mut content = String::new();
@@ -818,14 +821,14 @@ mod tests {
.unwrap();
file_to_write_raw_region.rewind().unwrap();
guest_raw_region
.read_exact_from(write_addr, &mut file_to_write_raw_region, size_of_file)
.read_exact_volatile_from(write_addr, &mut file_to_write_raw_region, size_of_file)
.unwrap();
let mut file_read_from_raw_region = TempFile::new().unwrap().into_file();
file_read_from_raw_region
.set_len(size_of_file as u64)
.unwrap();
guest_raw_region
.write_all_to(write_addr, &mut file_read_from_raw_region, size_of_file)
.write_all_volatile_to(write_addr, &mut file_read_from_raw_region, size_of_file)
.unwrap();
file_read_from_raw_region.rewind().unwrap();
let mut content = String::new();
@@ -842,7 +845,11 @@ mod tests {
let invalid_addr = MemoryRegionAddress(0x900);
assert!(matches!(
guest_mmap_region
.read_exact_from(invalid_addr, &mut file_to_write_mmap_region, size_of_file)
.read_exact_volatile_from(
invalid_addr,
&mut file_to_write_mmap_region,
size_of_file
)
.err()
.unwrap(),
GuestMemoryError::InvalidBackendAddress
@@ -852,7 +859,7 @@ mod tests {
let invalid_addr = MemoryRegionAddress(0x900);
assert!(matches!(
guest_mmap_region
.write_all_to(invalid_addr, &mut file_read_from_mmap_region, size_of_file)
.write_all_volatile_to(invalid_addr, &mut file_read_from_mmap_region, size_of_file)
.err()
.unwrap(),
GuestMemoryError::InvalidBackendAddress
@@ -862,7 +869,7 @@ mod tests {
let invalid_addr = MemoryRegionAddress(0x900);
assert!(matches!(
guest_raw_region
.read_exact_from(invalid_addr, &mut file_to_write_raw_region, size_of_file)
.read_exact_volatile_from(invalid_addr, &mut file_to_write_raw_region, size_of_file)
.err()
.unwrap(),
GuestMemoryError::InvalidBackendAddress
@@ -872,7 +879,7 @@ mod tests {
let invalid_addr = MemoryRegionAddress(0x900);
assert!(matches!(
guest_raw_region
.write_all_to(invalid_addr, &mut file_read_from_raw_region, size_of_file)
.write_all_volatile_to(invalid_addr, &mut file_read_from_raw_region, size_of_file)
.err()
.unwrap(),
GuestMemoryError::InvalidBackendAddress
@@ -1076,13 +1083,16 @@ mod tests {
let guest_region = GuestMemoryHybrid::<()>::from_regions(regions);
assert!(matches!(
guest_region.err().unwrap(),
Error::UnsortedMemoryRegions
GuestRegionCollectionError::UnsortedMemoryRegions
));
// Error no memory region case.
let regions = Vec::<GuestRegionHybrid<()>>::new();
let guest_region = GuestMemoryHybrid::<()>::from_regions(regions);
assert!(matches!(guest_region.err().unwrap(), Error::NoMemoryRegion));
assert!(matches!(
guest_region.err().unwrap(),
GuestRegionCollectionError::NoMemoryRegion
));
}
#[test]


@@ -1,7 +1,6 @@
// Copyright (C) 2022 Alibaba Cloud. All rights reserved.
// SPDX-License-Identifier: Apache-2.0
use std::io::{Read, Write};
use std::sync::atomic::Ordering;
use vm_memory::bitmap::{Bitmap, BS};
@@ -9,7 +8,7 @@ use vm_memory::mmap::NewBitmap;
use vm_memory::volatile_memory::compute_offset;
use vm_memory::{
guest_memory, volatile_memory, Address, AtomicAccess, Bytes, FileOffset, GuestAddress,
GuestMemoryRegion, GuestUsize, MemoryRegionAddress, VolatileSlice,
GuestMemoryRegion, GuestUsize, MemoryRegionAddress, ReadVolatile, VolatileSlice, WriteVolatile,
};
/// Guest memory region for virtio-fs DAX window.
@@ -73,67 +72,67 @@ impl<B: Bitmap> Bytes<MemoryRegionAddress> for GuestRegionRaw<B> {
.map_err(Into::into)
}
fn read_from<F>(
fn read_volatile_from<F>(
&self,
addr: MemoryRegionAddress,
src: &mut F,
count: usize,
) -> guest_memory::Result<usize>
where
F: Read,
F: ReadVolatile,
{
let maddr = addr.raw_value() as usize;
self.as_volatile_slice()
.unwrap()
.read_from::<F>(maddr, src, count)
.read_volatile_from::<F>(maddr, src, count)
.map_err(Into::into)
}
fn read_exact_from<F>(
fn read_exact_volatile_from<F>(
&self,
addr: MemoryRegionAddress,
src: &mut F,
count: usize,
) -> guest_memory::Result<()>
where
F: Read,
F: ReadVolatile,
{
let maddr = addr.raw_value() as usize;
self.as_volatile_slice()
.unwrap()
.read_exact_from::<F>(maddr, src, count)
.read_exact_volatile_from::<F>(maddr, src, count)
.map_err(Into::into)
}
fn write_to<F>(
fn write_volatile_to<F>(
&self,
addr: MemoryRegionAddress,
dst: &mut F,
count: usize,
) -> guest_memory::Result<usize>
where
F: Write,
F: WriteVolatile,
{
let maddr = addr.raw_value() as usize;
self.as_volatile_slice()
.unwrap()
.write_to::<F>(maddr, dst, count)
.write_volatile_to::<F>(maddr, dst, count)
.map_err(Into::into)
}
fn write_all_to<F>(
fn write_all_volatile_to<F>(
&self,
addr: MemoryRegionAddress,
dst: &mut F,
count: usize,
) -> guest_memory::Result<()>
where
F: Write,
F: WriteVolatile,
{
let maddr = addr.raw_value() as usize;
self.as_volatile_slice()
.unwrap()
.write_all_to::<F>(maddr, dst, count)
.write_all_volatile_to::<F>(maddr, dst, count)
.map_err(Into::into)
}
@@ -170,8 +169,8 @@ impl<B: Bitmap> GuestMemoryRegion for GuestRegionRaw<B> {
self.guest_base
}
fn bitmap(&self) -> &Self::B {
&self.bitmap
fn bitmap(&self) -> BS<'_, Self::B> {
self.bitmap.slice_at(0)
}
fn get_host_address(&self, addr: MemoryRegionAddress) -> guest_memory::Result<*mut u8> {
@@ -186,18 +185,6 @@ impl<B: Bitmap> GuestMemoryRegion for GuestRegionRaw<B> {
None
}
unsafe fn as_slice(&self) -> Option<&[u8]> {
// This is safe because we mapped the area at addr ourselves, so this slice will not
// overflow. However, it is possible to alias.
Some(std::slice::from_raw_parts(self.addr, self.size))
}
unsafe fn as_mut_slice(&self) -> Option<&mut [u8]> {
// This is safe because we mapped the area at addr ourselves, so this slice will not
// overflow. However, it is possible to alias.
Some(std::slice::from_raw_parts_mut(self.addr, self.size))
}
fn get_slice(
&self,
offset: MemoryRegionAddress,
@@ -216,6 +203,7 @@ impl<B: Bitmap> GuestMemoryRegion for GuestRegionRaw<B> {
(self.addr as usize + offset) as *mut _,
count,
self.bitmap.slice_at(offset),
None,
)
})
}
@@ -226,6 +214,27 @@ impl<B: Bitmap> GuestMemoryRegion for GuestRegionRaw<B> {
}
}
impl<B: Bitmap> GuestRegionRaw<B> {
/// Returns a slice corresponding to the region.
///
/// # Safety
/// This is safe because we mapped the area at addr ourselves, so this slice will not
/// overflow. However, it is possible to alias.
pub unsafe fn as_slice(&self) -> Option<&[u8]> {
Some(std::slice::from_raw_parts(self.addr, self.size))
}
/// Returns a mutable slice corresponding to the region.
///
/// # Safety
/// This is safe because we mapped the area at addr ourselves, so this slice will not
/// overflow. However, it is possible to alias.
#[allow(clippy::mut_from_ref)]
pub unsafe fn as_mut_slice(&self) -> Option<&mut [u8]> {
Some(std::slice::from_raw_parts_mut(self.addr, self.size))
}
}
#[cfg(test)]
mod tests {
extern crate vmm_sys_util;
@@ -348,7 +357,7 @@ mod tests {
unsafe { GuestRegionRaw::<()>::new(GuestAddress(0x10_0000), &mut buf as *mut _, 1024) };
let s = m.get_slice(MemoryRegionAddress(2), 3).unwrap();
assert_eq!(s.as_ptr(), &mut buf[2] as *mut _);
assert_eq!(s.ptr_guard().as_ptr(), &buf[2] as *const _);
}
/*
@@ -600,7 +609,7 @@ mod tests {
File::open(Path::new("c:\\Windows\\system32\\ntoskrnl.exe")).unwrap()
};
gm.write_obj(!0u32, addr).unwrap();
gm.read_exact_from(addr, &mut file, mem::size_of::<u32>())
gm.read_exact_volatile_from(addr, &mut file, mem::size_of::<u32>())
.unwrap();
let value: u32 = gm.read_obj(addr).unwrap();
if cfg!(unix) {
@@ -610,7 +619,7 @@ mod tests {
}
let mut sink = Vec::new();
gm.write_all_to(addr, &mut sink, mem::size_of::<u32>())
gm.write_all_volatile_to(addr, &mut sink, mem::size_of::<u32>())
.unwrap();
if cfg!(unix) {
assert_eq!(sink, vec![0; mem::size_of::<u32>()]);


@@ -113,20 +113,23 @@ arm64_sys_reg!(MPIDR_EL1, 3, 0, 0, 0, 5);
/// * `mem` - Reserved DRAM for current VM.
pub fn setup_regs(vcpu: &VcpuFd, cpu_id: u8, boot_ip: u64, fdt_address: u64) -> Result<()> {
// Get the register index of the PSTATE (Processor State) register.
vcpu.set_one_reg(arm64_core_reg!(pstate), PSTATE_FAULT_BITS_64 as u128)
.map_err(Error::SetCoreRegister)?;
vcpu.set_one_reg(
arm64_core_reg!(pstate),
&(PSTATE_FAULT_BITS_64 as u128).to_le_bytes(),
)
.map_err(Error::SetCoreRegister)?;
// Other vCPUs are powered off initially awaiting PSCI wakeup.
if cpu_id == 0 {
// Setting the PC (Processor Counter) to the current program address (kernel address).
vcpu.set_one_reg(arm64_core_reg!(pc), boot_ip as u128)
vcpu.set_one_reg(arm64_core_reg!(pc), &(boot_ip as u128).to_le_bytes())
.map_err(Error::SetCoreRegister)?;
// Last mandatory thing to set -> the address pointing to the FDT (also called DTB).
// "The device tree blob (dtb) must be placed on an 8-byte boundary and must
// not exceed 2 megabytes in size." -> https://www.kernel.org/doc/Documentation/arm64/booting.txt.
// We are choosing to place it the end of DRAM. See `get_fdt_addr`.
vcpu.set_one_reg(arm64_core_reg!(regs), fdt_address as u128)
vcpu.set_one_reg(arm64_core_reg!(regs), &(fdt_address as u128).to_le_bytes())
.map_err(Error::SetCoreRegister)?;
}
Ok(())
@@ -157,9 +160,10 @@ pub fn is_system_register(regid: u64) -> bool {
///
/// * `vcpu` - Structure for the VCPU that holds the VCPU's fd.
pub fn read_mpidr(vcpu: &VcpuFd) -> Result<u64> {
vcpu.get_one_reg(MPIDR_EL1)
.map(|value| value as u64)
.map_err(Error::GetSysRegister)
let mut reg_data = 0u128.to_le_bytes();
vcpu.get_one_reg(MPIDR_EL1, &mut reg_data)
.map_err(Error::GetSysRegister)?;
Ok(u128::from_le_bytes(reg_data) as u64)
}
#[cfg(test)]
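The setup_regs/read_mpidr changes above reflect that newer kvm-ioctls versions pass vCPU register values through caller-provided little-endian byte buffers instead of plain u128 values. This pure-std sketch shows only the encoding the diff relies on; the VcpuFd calls themselves are omitted:

```rust
// What set_one_reg now receives: a little-endian byte buffer.
fn encode_reg(value: u64) -> [u8; 16] {
    (value as u128).to_le_bytes()
}

// What read_mpidr now reconstructs from the buffer filled by get_one_reg.
fn decode_reg(buf: [u8; 16]) -> u64 {
    u128::from_le_bytes(buf) as u64
}

fn main() {
    let mpidr: u64 = 0x8000_0001;
    let buf = encode_reg(mpidr);
    assert_eq!(decode_reg(buf), mpidr); // round-trips losslessly
    println!("{:#x}", decode_reg(buf));
}
```

A 16-byte buffer covers the widest KVM register size; narrower registers simply occupy the low-order bytes in little-endian order.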


@@ -10,7 +10,6 @@
use libc::c_char;
use std::collections::HashMap;
use std::io;
use std::mem;
use std::result;
use std::slice;
@@ -205,7 +204,7 @@ pub fn setup_mptable<M: GuestMemory>(
return Err(Error::AddressOverflow);
}
mem.read_from(base_mp, &mut io::repeat(0), mp_size)
mem.write_slice(&vec![0u8; mp_size], base_mp)
.map_err(|_| Error::Clear)?;
{
@@ -452,23 +451,11 @@ mod tests {
let mpc_offset = GuestAddress(u64::from(mpf_intel.0.physptr));
let mpc_table: MpcTableWrapper = mem.read_obj(mpc_offset).unwrap();
struct Sum(u8);
impl io::Write for Sum {
fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
for v in buf.iter() {
self.0 = self.0.wrapping_add(*v);
}
Ok(buf.len())
}
fn flush(&mut self) -> io::Result<()> {
Ok(())
}
}
let mut sum = Sum(0);
mem.write_to(mpc_offset, &mut sum, mpc_table.0.length as usize)
let mut buf = Vec::new();
mem.write_volatile_to(mpc_offset, &mut buf, mpc_table.0.length as usize)
.unwrap();
assert_eq!(sum.0, 0);
let sum: u8 = buf.iter().fold(0u8, |acc, &v| acc.wrapping_add(v));
assert_eq!(sum, 0);
}
#[test]
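The mptable test change above replaces a custom `io::Write` checksummer with a plain buffer plus a wrapping-add fold; a valid MP table sums to zero. A minimal sketch of that checksum logic, with toy data:

```rust
// Sum every byte of the table with wrapping arithmetic; a well-formed
// MP table includes a checksum byte that makes the total wrap to 0.
fn mp_checksum(table: &[u8]) -> u8 {
    table.iter().fold(0u8, |acc, &v| acc.wrapping_add(v))
}

fn main() {
    // Toy table: payload bytes plus a checksum byte chosen so the sum wraps to 0.
    let payload = [0x5f_u8, 0x4d, 0x50, 0x5f]; // "_MP_" signature bytes
    let checksum = 0u8.wrapping_sub(mp_checksum(&payload));
    let mut table = payload.to_vec();
    table.push(checksum);
    assert_eq!(mp_checksum(&table), 0);
    println!("checksum ok");
}
```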


@@ -25,7 +25,7 @@ use std::collections::HashMap;
use std::io::{Error, ErrorKind};
use std::sync::{Arc, Mutex};
use kvm_bindings::{kvm_irq_routing, kvm_irq_routing_entry};
use kvm_bindings::{kvm_irq_routing_entry, KvmIrqRouting as KvmIrqRoutingWrapper};
use kvm_ioctls::VmFd;
use super::*;
@@ -196,26 +196,18 @@ impl KvmIrqRouting {
}
fn set_routing(&self, routes: &HashMap<u64, kvm_irq_routing_entry>) -> Result<()> {
// Allocate enough buffer memory.
let elem_sz = std::mem::size_of::<kvm_irq_routing>();
let total_sz = std::mem::size_of::<kvm_irq_routing_entry>() * routes.len() + elem_sz;
let elem_cnt = total_sz.div_ceil(elem_sz);
let mut irq_routings = Vec::<kvm_irq_routing>::with_capacity(elem_cnt);
irq_routings.resize_with(elem_cnt, Default::default);
let mut irq_routing = KvmIrqRoutingWrapper::new(routes.len())
.map_err(|_| Error::other("Failed to create KvmIrqRouting"))?;
// Prepare the irq_routing header.
let irq_routing = &mut irq_routings[0];
irq_routing.nr = routes.len() as u32;
irq_routing.flags = 0;
// Safe because we have just allocated enough memory above.
let irq_routing_entries = unsafe { irq_routing.entries.as_mut_slice(routes.len()) };
for (idx, entry) in routes.values().enumerate() {
irq_routing_entries[idx] = *entry;
{
let irq_routing_entries = irq_routing.as_mut_slice();
for (idx, entry) in routes.values().enumerate() {
irq_routing_entries[idx] = *entry;
}
}
self.vm_fd
.set_gsi_routing(irq_routing)
.set_gsi_routing(&irq_routing)
.map_err(from_sys_util_errno)?;
Ok(())
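The removed set_routing code manually sized a Vec of header-typed elements big enough to hold the kvm_irq_routing header plus its flexible-array entries; KvmIrqRouting (a kvm-bindings FamStructWrapper) now handles that allocation internally. The sizing math being replaced, reproduced with hypothetical stand-in sizes rather than the real struct layouts, looks like this:

```rust
// Number of header-sized elements needed to hold a flexible-array-member
// struct: the header itself plus n_entries trailing entries.
fn elems_needed(header_sz: usize, entry_sz: usize, n_entries: usize) -> usize {
    let total_sz = header_sz + entry_sz * n_entries;
    total_sz.div_ceil(header_sz) // round up to whole header-sized elements
}

fn main() {
    // Hypothetical sizes: 16-byte header, 24-byte routing entries.
    assert_eq!(elems_needed(16, 24, 0), 1); // header alone fits in one element
    assert_eq!(elems_needed(16, 24, 4), 7); // 16 + 96 = 112 -> ceil(112/16) = 7
    println!("sizing ok");
}
```

Wrapping this in a FamStructWrapper removes the unsafe `entries.as_mut_slice` call as well, since the wrapper hands out a safe mutable slice over the entries.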


@@ -11,7 +11,7 @@ use kvm_bindings::{CpuId, __IncompleteArrayField, KVMIO};
use thiserror::Error;
use vmm_sys_util::fam::{FamStruct, FamStructWrapper};
use vmm_sys_util::ioctl::ioctl_with_val;
use vmm_sys_util::{generate_fam_struct_impl, ioctl_ioc_nr, ioctl_iowr_nr};
use vmm_sys_util::{generate_fam_struct_impl, ioctl_iowr_nr};
/// Tdx capability list.
pub type TdxCaps = FamStructWrapper<TdxCapabilities>;


@@ -13,7 +13,7 @@ use std::os::raw::*;
use std::os::unix::io::{AsRawFd, FromRawFd, RawFd};
use vmm_sys_util::ioctl::{ioctl_with_mut_ref, ioctl_with_ref, ioctl_with_val};
use vmm_sys_util::{ioctl_ioc_nr, ioctl_iow_nr};
use vmm_sys_util::ioctl_iow_nr;
use crate::net::net_gen;


@@ -23,15 +23,15 @@ dbs-address-space = { workspace = true }
dbs-boot = { workspace = true }
epoll = ">=4.3.1, <4.3.2"
io-uring = "0.5.2"
fuse-backend-rs = { version = "0.10.5", optional = true }
fuse-backend-rs = { version = "0.14.0", optional = true }
kvm-bindings = { workspace = true }
kvm-ioctls = { workspace = true }
libc = "0.2.119"
log = "0.4.14"
nix = "0.24.3"
nydus-api = "0.3.1"
nydus-rafs = "0.3.2"
nydus-storage = "0.6.4"
nydus-api = "0.4.1"
nydus-rafs = "0.4.1"
nydus-storage = "0.7.2"
rlimit = "0.7.0"
serde = "1.0.27"
serde_json = "1.0.9"
@@ -42,8 +42,9 @@ virtio-queue = { workspace = true }
vmm-sys-util = { workspace = true }
vm-memory = { workspace = true, features = ["backend-mmap"] }
sendfd = "0.4.3"
vhost-rs = { version = "0.6.1", package = "vhost", optional = true }
vhost-rs = { version = "0.15.0", package = "vhost", optional = true }
timerfd = "1.0"
kata-sys-util = { workspace = true}
[dev-dependencies]
vm-memory = { workspace = true, features = ["backend-mmap", "backend-atomic"] }
@@ -63,7 +64,7 @@ virtio-fs-pro = [
]
virtio-mem = ["virtio-mmio"]
virtio-balloon = ["virtio-mmio"]
vhost = ["virtio-mmio", "vhost-rs/vhost-user-master", "vhost-rs/vhost-kern"]
vhost = ["virtio-mmio", "vhost-rs/vhost-user-frontend", "vhost-rs/vhost-kern"]
vhost-net = ["vhost", "vhost-rs/vhost-net"]
vhost-user = ["vhost"]
vhost-user-fs = ["vhost-user"]


@@ -34,7 +34,7 @@ use dbs_utils::epoll_manager::{
use dbs_utils::metric::{IncMetric, SharedIncMetric, SharedStoreMetric, StoreMetric};
use log::{debug, error, info, trace};
use serde::Serialize;
use virtio_bindings::bindings::virtio_blk::VIRTIO_F_VERSION_1;
use virtio_bindings::bindings::virtio_config::VIRTIO_F_VERSION_1;
use virtio_queue::{QueueOwnedT, QueueSync, QueueT};
use vm_memory::{
ByteValued, Bytes, GuestAddress, GuestAddressSpace, GuestMemory, GuestMemoryRegion,

View File

@@ -20,6 +20,7 @@ use dbs_utils::{
};
use log::{debug, error, info, warn};
use virtio_bindings::bindings::virtio_blk::*;
use virtio_bindings::bindings::virtio_config::VIRTIO_F_VERSION_1;
use virtio_queue::QueueT;
use vm_memory::GuestMemoryRegion;
use vmm_sys_util::eventfd::{EventFd, EFD_NONBLOCK};

View File

@@ -2,13 +2,13 @@
// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
use std::io::{self, Seek, SeekFrom, Write};
use std::io::{self, Read, Seek, SeekFrom, Write};
use std::ops::Deref;
use std::result;
use log::error;
use virtio_bindings::bindings::virtio_blk::*;
use virtio_queue::{Descriptor, DescriptorChain};
use virtio_queue::{desc::split::Descriptor, DescriptorChain};
use vm_memory::{ByteValued, Bytes, GuestAddress, GuestMemory, GuestMemoryError};
use crate::{
@@ -231,13 +231,19 @@ impl Request {
for io in data_descs {
match self.request_type {
RequestType::In => {
mem.read_from(GuestAddress(io.data_addr), disk, io.data_len)
let mut buf = vec![0u8; io.data_len];
disk.read_exact(&mut buf)
.map_err(|e| ExecuteError::Read(GuestMemoryError::IOError(e)))?;
mem.write_slice(&buf, GuestAddress(io.data_addr))
.map_err(ExecuteError::Read)?;
len += io.data_len;
}
RequestType::Out => {
mem.write_to(GuestAddress(io.data_addr), disk, io.data_len)
let mut buf = vec![0u8; io.data_len];
mem.read_slice(&mut buf, GuestAddress(io.data_addr))
.map_err(ExecuteError::Write)?;
disk.write_all(&buf)
.map_err(|e| ExecuteError::Write(GuestMemoryError::IOError(e)))?;
}
RequestType::Flush => match disk.flush() {
Ok(_) => {}

View File

@@ -2,9 +2,11 @@
//
// SPDX-License-Identifier: Apache-2.0 AND BSD-3-Clause
use kata_sys_util::netns::NetnsGuard;
use std::any::Any;
use std::collections::HashMap;
use std::ffi::CString;
use std::fs;
use std::fs::File;
use std::io::{BufRead, BufReader, Read};
use std::marker::PhantomData;
@@ -29,7 +31,7 @@ use nydus_api::ConfigV2;
use nydus_rafs::blobfs::{BlobFs, Config as BlobfsConfig};
use nydus_rafs::{fs::Rafs, RafsIoRead};
use rlimit::Resource;
use virtio_bindings::bindings::virtio_blk::VIRTIO_F_VERSION_1;
use virtio_bindings::bindings::virtio_config::VIRTIO_F_VERSION_1;
use virtio_queue::QueueT;
use vm_memory::{
FileOffset, GuestAddress, GuestAddressSpace, GuestRegionMmap, GuestUsize, MmapRegion,
@@ -233,6 +235,7 @@ impl<AS: GuestAddressSpace> VirtioFs<AS> {
CachePolicy::Always => Duration::from_secs(CACHE_ALWAYS_TIMEOUT),
CachePolicy::Never => Duration::from_secs(CACHE_NONE_TIMEOUT),
CachePolicy::Auto => Duration::from_secs(CACHE_AUTO_TIMEOUT),
CachePolicy::Metadata => Duration::from_secs(CACHE_AUTO_TIMEOUT),
}
}
@@ -453,6 +456,19 @@ impl<AS: GuestAddressSpace> VirtioFs<AS> {
prefetch_list_path: Option<String>,
) -> FsResult<()> {
debug!("http_server rafs");
let currentnetns = fs::read_link("/proc/self/ns/net").unwrap_or_default();
info!("========fupan====1==netns={:?}", currentnetns);
let tid = unsafe { libc::syscall(libc::SYS_gettid) as i32 };
let _netns_guard =
NetnsGuard::new("/proc/self/ns/net").map_err(|e| FsError::BackendFs(e.to_string()))?;
let netnspath = format!("/proc/{}/ns/net", tid);
let netns = fs::read_link(netnspath.as_str()).unwrap_or_default();
info!("========fupan====2==netns={:?}", netns);
info!("========fupan====3==config={:?}", config);
let file = Path::new(&source);
let (mut rafs, rafs_cfg) = match config.as_ref() {
Some(cfg) => {
@@ -541,7 +557,7 @@ impl<AS: GuestAddressSpace> VirtioFs<AS> {
)));
}
};
let any_fs = rootfs.deref().as_any();
let any_fs = rootfs.0.deref().as_any();
if let Some(fs_swap) = any_fs.downcast_ref::<Rafs>() {
let mut file = <dyn RafsIoRead>::from_file(&source)
.map_err(|e| FsError::BackendFs(format!("RafsIoRead failed: {e:?}")))?;
@@ -611,8 +627,7 @@ impl<AS: GuestAddressSpace> VirtioFs<AS> {
};
let region = Arc::new(
GuestRegionMmap::new(mmap_region, GuestAddress(guest_addr))
.map_err(Error::InsertMmap)?,
GuestRegionMmap::new(mmap_region, GuestAddress(guest_addr)).ok_or(Error::InsertMmap)?,
);
self.handler.insert_region(region.clone())?;

View File

@@ -245,8 +245,8 @@ pub enum Error {
#[error("set user memory region failed: {0}")]
SetUserMemoryRegion(kvm_ioctls::Error),
/// Inserting mmap region failed.
#[error("inserting mmap region failed: {0}")]
InsertMmap(vm_memory::mmap::Error),
#[error("inserting mmap region failed")]
InsertMmap,
/// Failed to set madvise on guest memory region.
#[error("failed to set madvise() on guest memory region")]
Madvise(#[source] nix::Error),

View File

@@ -30,7 +30,7 @@ use dbs_utils::epoll_manager::{
};
use kvm_ioctls::VmFd;
use log::{debug, error, info, trace, warn};
use virtio_bindings::bindings::virtio_blk::VIRTIO_F_VERSION_1;
use virtio_bindings::bindings::virtio_config::VIRTIO_F_VERSION_1;
use virtio_queue::{DescriptorChain, QueueOwnedT, QueueSync, QueueT};
use vm_memory::{
ByteValued, Bytes, GuestAddress, GuestAddressSpace, GuestMemory, GuestMemoryError,
@@ -1389,7 +1389,7 @@ pub(crate) mod tests {
.map_err(Error::NewMmapRegion)?;
let region =
Arc::new(GuestRegionMmap::new(mmap_region, guest_addr).map_err(Error::InsertMmap)?);
Arc::new(GuestRegionMmap::new(mmap_region, guest_addr).ok_or(Error::InsertMmap)?);
Ok(region)
}

View File

@@ -22,6 +22,7 @@ use dbs_utils::net::{net_gen, MacAddr, Tap};
use dbs_utils::rate_limiter::{BucketUpdate, RateLimiter, TokenType};
use libc;
use log::{debug, error, info, trace, warn};
use virtio_bindings::bindings::virtio_config::VIRTIO_F_VERSION_1;
use virtio_bindings::bindings::virtio_net::*;
use virtio_queue::{QueueOwnedT, QueueSync, QueueT};
use vm_memory::{Bytes, GuestAddress, GuestAddressSpace, GuestMemoryRegion, GuestRegionMmap};

View File

@@ -6,7 +6,7 @@ use log::{debug, error, warn};
use virtio_bindings::bindings::virtio_net::{
virtio_net_ctrl_hdr, virtio_net_ctrl_mq, VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET,
};
use virtio_queue::{Descriptor, DescriptorChain};
use virtio_queue::{desc::split::Descriptor, DescriptorChain};
use vm_memory::{Bytes, GuestMemory};
use crate::{DbsGuestAddressSpace, Error as VirtioError, Result as VirtioResult};

View File

@@ -26,6 +26,7 @@ use vhost_rs::vhost_user::message::VhostUserVringAddrFlags;
#[cfg(not(test))]
use vhost_rs::VhostBackend;
use vhost_rs::{VhostUserMemoryRegionInfo, VringConfigData};
use virtio_bindings::bindings::virtio_config::{VIRTIO_F_NOTIFY_ON_EMPTY, VIRTIO_F_VERSION_1};
use virtio_bindings::bindings::virtio_net::*;
use virtio_bindings::bindings::virtio_ring::*;
use virtio_queue::{DescriptorChain, QueueT};

View File

@@ -25,7 +25,7 @@ use vhost_rs::vhost_user::message::{
VhostUserConfigFlags, VhostUserProtocolFeatures, VhostUserVirtioFeatures,
VHOST_USER_CONFIG_OFFSET,
};
use vhost_rs::vhost_user::{Master, VhostUserMaster};
use vhost_rs::vhost_user::{Frontend, VhostUserFrontend};
use vhost_rs::{Error as VhostError, VhostBackend};
use virtio_bindings::bindings::virtio_blk::{VIRTIO_BLK_F_MQ, VIRTIO_BLK_F_SEG_MAX};
use virtio_queue::QueueT;
@@ -231,7 +231,7 @@ impl VhostUserBlockDevice {
info!("vhost-user-blk: try to connect to {vhost_socket:?}");
// Connect to the vhost-user socket.
let mut master = Master::connect(&vhost_socket, 1).map_err(VirtIoError::VhostError)?;
let mut master = Frontend::connect(&vhost_socket, 1).map_err(VirtIoError::VhostError)?;
info!("vhost-user-blk: get features");
let avail_features = master.get_features().map_err(VirtIoError::VhostError)?;
@@ -290,11 +290,11 @@ impl VhostUserBlockDevice {
})
}
fn reconnect_to_server(&mut self) -> VirtIoResult<Master> {
fn reconnect_to_server(&mut self) -> VirtIoResult<Frontend> {
if !Path::new(self.vhost_socket.as_str()).exists() {
return Err(VirtIoError::InternalError);
}
let master = Master::connect(&self.vhost_socket, 1).map_err(VirtIoError::VhostError)?;
let master = Frontend::connect(&self.vhost_socket, 1).map_err(VirtIoError::VhostError)?;
Ok(master)
}
@@ -360,7 +360,7 @@ impl VhostUserBlockDevice {
if !Path::new(self.vhost_socket.as_str()).exists() {
return Err(ActivateError::InternalError);
}
let master = Master::connect(String::from(self.vhost_socket.as_str()), 1)
let master = Frontend::connect(String::from(self.vhost_socket.as_str()), 1)
.map_err(VirtIoError::VhostError)?;
self.endpoint.set_master(master);
@@ -388,7 +388,7 @@ impl VhostUserBlockDevice {
R: GuestMemoryRegion + Send + Sync + 'static,
>(
&mut self,
master: Master,
master: Frontend,
config: EndpointParam<AS, Q, R>,
ops: &mut EventOps,
) -> std::result::Result<(), VirtIoError> {

View File

@@ -10,10 +10,10 @@ use dbs_utils::epoll_manager::{EventOps, EventSet, Events};
use log::*;
use vhost_rs::vhost_user::message::{VhostUserProtocolFeatures, VhostUserVringAddrFlags};
use vhost_rs::vhost_user::{
Error as VhostUserError, Listener as VhostUserListener, Master, VhostUserMaster,
Error as VhostUserError, Frontend, Listener as VhostUserListener, VhostUserFrontend,
};
use vhost_rs::{Error as VhostError, VhostBackend, VhostUserMemoryRegionInfo, VringConfigData};
use virtio_bindings::bindings::virtio_net::VIRTIO_F_RING_PACKED;
use virtio_bindings::bindings::virtio_config::VIRTIO_F_RING_PACKED;
use virtio_queue::QueueT;
use vm_memory::{
Address, GuestAddress, GuestAddressSpace, GuestMemory, GuestMemoryRegion, MemoryRegionAddress,
@@ -50,7 +50,7 @@ impl Listener {
}
// Wait for an incoming connection until success.
pub fn accept(&self) -> VirtioResult<(Master, u64)> {
pub fn accept(&self) -> VirtioResult<(Frontend, u64)> {
loop {
match self.try_accept() {
Ok(Some((master, mut feature))) => {
@@ -65,14 +65,14 @@ impl Listener {
}
}
pub fn try_accept(&self) -> VirtioResult<Option<(Master, u64)>> {
pub fn try_accept(&self) -> VirtioResult<Option<(Frontend, u64)>> {
let sock = match self.listener.accept() {
Ok(Some(conn)) => conn,
Ok(None) => return Ok(None),
Err(e) => return Err(e.into()),
};
let mut master = Master::from_stream(sock, 1);
let mut master = Frontend::from_stream(sock, 1);
info!("{}: try to get virtio features from slave.", self.name);
match Endpoint::initialize(&mut master) {
Ok(Some(features)) => Ok(Some((master, features))),
@@ -159,8 +159,8 @@ impl<AS: GuestAddressSpace, Q: QueueT, R: GuestMemoryRegion> EndpointParam<'_, A
/// Caller needs to ensure mutually exclusive access to the object.
pub(super) struct Endpoint {
/// Underlying vhost-user communication endpoint.
conn: Option<Master>,
old: Option<Master>,
conn: Option<Frontend>,
old: Option<Frontend>,
/// Token to register epoll event for the underlying socket.
slot: u32,
/// Identifier string for logs.
@@ -168,7 +168,7 @@ pub(super) struct Endpoint {
}
impl Endpoint {
pub fn new(master: Master, slot: u32, name: String) -> Self {
pub fn new(master: Frontend, slot: u32, name: String) -> Self {
Endpoint {
conn: Some(master),
old: None,
@@ -186,7 +186,7 @@ impl Endpoint {
/// * - Ok(Some(avail_features)): virtio features from the slave
/// * - Ok(None): underlying communication channel gets broken during negotiation
/// * - Err(e): error conditions
fn initialize(master: &mut Master) -> VirtioResult<Option<u64>> {
fn initialize(master: &mut Frontend) -> VirtioResult<Option<u64>> {
// 1. Seems that some vhost-user slaves depend on the get_features request to drive their
// internal state machine.
// N.B. it's really TDD, we just found it works in this way. Any spec about this?
@@ -242,7 +242,7 @@ impl Endpoint {
pub fn negotiate<AS: GuestAddressSpace, Q: QueueT, R: GuestMemoryRegion>(
&mut self,
config: &EndpointParam<AS, Q, R>,
mut old: Option<&mut Master>,
mut old: Option<&mut Frontend>,
) -> VirtioResult<()> {
let guard = config.virtio_config.lock_guest_memory();
let mem = guard.deref();
@@ -286,19 +286,19 @@ impl Endpoint {
);
// Setup slave channel if SLAVE_REQ protocol feature is set
if protocol_features.contains(VhostUserProtocolFeatures::SLAVE_REQ) {
if protocol_features.contains(VhostUserProtocolFeatures::BACKEND_REQ) {
match config.slave_req_fd {
Some(fd) => master.set_slave_request_fd(&fd)?,
Some(fd) => master.set_backend_request_fd(&fd)?,
None => {
error!(
"{}: Protocol feature SLAVE_REQ is set but not slave channel fd",
"{}: Protocol feature BACKEND_REQ is set but no backend channel fd",
self.name
);
return Err(VhostError::VhostUserProtocol(VhostUserError::InvalidParam).into());
}
}
} else {
info!("{}: has no SLAVE_REQ protocol feature set", self.name);
info!("{}: has no BACKEND_REQ protocol feature set", self.name);
}
// 6. check number of queues supported
@@ -454,7 +454,7 @@ impl Endpoint {
/// Restore communication with the vhost-user slave on reconnect.
pub fn reconnect<AS: GuestAddressSpace, Q: QueueT, R: GuestMemoryRegion>(
&mut self,
master: Master,
master: Frontend,
config: &EndpointParam<AS, Q, R>,
ops: &mut EventOps,
) -> VirtioResult<()> {
@@ -515,7 +515,11 @@ impl Endpoint {
}
/// Deregister the underlying socket from the epoll controller.
pub fn deregister_epoll_event(&self, master: &Master, ops: &mut EventOps) -> VirtioResult<()> {
pub fn deregister_epoll_event(
&self,
master: &Frontend,
ops: &mut EventOps,
) -> VirtioResult<()> {
info!(
"{}: unregister epoll event for fd {}.",
self.name,
@@ -529,7 +533,7 @@ impl Endpoint {
.map_err(VirtioError::EpollMgr)
}
pub fn set_master(&mut self, master: Master) {
pub fn set_master(&mut self, master: Frontend) {
self.conn = Some(master);
}
}

View File

@@ -3,10 +3,7 @@
// SPDX-License-Identifier: Apache-2.0
use std::any::Any;
use std::io;
use std::marker::PhantomData;
use std::ops::Deref;
use std::os::fd::AsRawFd;
use std::sync::{Arc, Mutex, MutexGuard};
use dbs_device::resources::{DeviceResources, ResourceConstraint};
@@ -15,18 +12,15 @@ use dbs_utils::epoll_manager::{
};
use kvm_bindings::kvm_userspace_memory_region;
use kvm_ioctls::VmFd;
use libc::{c_void, off64_t, pread64, pwrite64};
use log::*;
use vhost_rs::vhost_user::message::{
VhostUserFSSlaveMsg, VhostUserFSSlaveMsgFlags, VhostUserProtocolFeatures,
VhostUserVirtioFeatures, VHOST_USER_FS_SLAVE_ENTRIES,
use vhost_rs::vhost_user::message::{VhostUserProtocolFeatures, VhostUserVirtioFeatures};
use vhost_rs::vhost_user::{
Frontend, FrontendReqHandler, HandlerResult, VhostUserFrontendReqHandler,
};
use vhost_rs::vhost_user::{HandlerResult, Master, MasterReqHandler, VhostUserMasterReqHandler};
use vhost_rs::VhostBackend;
use virtio_queue::QueueT;
use vm_memory::{
GuestAddress, GuestAddressSpace, GuestMemory, GuestMemoryRegion, GuestRegionMmap, GuestUsize,
MmapRegion,
GuestAddress, GuestAddressSpace, GuestMemoryRegion, GuestRegionMmap, GuestUsize, MmapRegion,
};
use crate::ConfigResult;
@@ -50,6 +44,7 @@ const NUM_QUEUE_OFFSET: usize = 1;
const MASTER_SLOT: u32 = 0;
const SLAVE_REQ_SLOT: u32 = 1;
#[allow(dead_code)]
struct SlaveReqHandler<AS: GuestAddressSpace> {
/// the address of memory region allocated for virtiofs
cache_offset: u64,
@@ -69,6 +64,7 @@ struct SlaveReqHandler<AS: GuestAddressSpace> {
impl<AS: GuestAddressSpace> SlaveReqHandler<AS> {
// Make sure request is within cache range
#[allow(dead_code)]
fn is_req_valid(&self, offset: u64, len: u64) -> bool {
// TODO: do we need to validate alignment here?
match offset.checked_add(len) {
@@ -78,274 +74,24 @@ impl<AS: GuestAddressSpace> SlaveReqHandler<AS> {
}
}
impl<AS: GuestAddressSpace> VhostUserMasterReqHandler for SlaveReqHandler<AS> {
impl<AS: GuestAddressSpace> VhostUserFrontendReqHandler for SlaveReqHandler<AS> {
fn handle_config_change(&self) -> HandlerResult<u64> {
trace!(target: "vhost-fs", "{}: SlaveReqHandler::handle_config_change()", self.id);
debug!("{}: unhandled device_config_change event", self.id);
Ok(0)
}
fn fs_slave_map(&self, fs: &VhostUserFSSlaveMsg, fd: &dyn AsRawFd) -> HandlerResult<u64> {
trace!(target: "vhost-fs", "{}: SlaveReqHandler::fs_slave_map()", self.id);
for i in 0..VHOST_USER_FS_SLAVE_ENTRIES {
let offset = fs.cache_offset[i];
let len = fs.len[i];
// Ignore if the length is 0.
if len == 0 {
continue;
}
debug!(
"{}: fs_slave_map: offset={:x} len={:x} cache_size={:x}",
self.id, offset, len, self.cache_size
);
if !self.is_req_valid(offset, len) {
debug!(
"{}: fs_slave_map: Wrong offset or length, offset={:x} len={:x} cache_size={:x}",
self.id, offset, len, self.cache_size
);
return Err(std::io::Error::from_raw_os_error(libc::EINVAL));
}
let addr = self.mmap_cache_addr + offset;
let flags = fs.flags[i];
let ret = unsafe {
libc::mmap(
addr as *mut libc::c_void,
len as usize,
flags.bits() as i32,
libc::MAP_SHARED | libc::MAP_FIXED,
fd.as_raw_fd(),
fs.fd_offset[i] as libc::off_t,
)
};
if ret == libc::MAP_FAILED {
let e = std::io::Error::last_os_error();
error!("{}: fs_slave_map: mmap failed, {}", self.id, e);
return Err(e);
}
let ret = unsafe { libc::close(fd.as_raw_fd()) };
if ret == -1 {
let e = std::io::Error::last_os_error();
error!("{}: fs_slave_map: close failed, {}", self.id, e);
return Err(e);
}
}
Ok(0)
}
fn fs_slave_unmap(&self, fs: &VhostUserFSSlaveMsg) -> HandlerResult<u64> {
trace!(target: "vhost-fs", "{}: SlaveReqHandler::fs_slave_map()", self.id);
for i in 0..VHOST_USER_FS_SLAVE_ENTRIES {
let offset = fs.cache_offset[i];
let mut len = fs.len[i];
// Ignore if the length is 0.
if len == 0 {
continue;
}
debug!(
"{}: fs_slave_unmap: offset={:x} len={:x} cache_size={:x}",
self.id, offset, len, self.cache_size
);
// Need to handle a special case where the slave ask for the unmapping
// of the entire mapping.
if len == 0xffff_ffff_ffff_ffff {
len = self.cache_size;
}
if !self.is_req_valid(offset, len) {
error!(
"{}: fs_slave_map: Wrong offset or length, offset={:x} len={:x} cache_size={:x}",
self.id, offset, len, self.cache_size
);
return Err(std::io::Error::from_raw_os_error(libc::EINVAL));
}
let addr = self.mmap_cache_addr + offset;
#[allow(clippy::unnecessary_cast)]
let ret = unsafe {
libc::mmap(
addr as *mut libc::c_void,
len as usize,
libc::PROT_NONE,
libc::MAP_ANONYMOUS | libc::MAP_PRIVATE | libc::MAP_FIXED,
-1,
0 as libc::off_t,
)
};
if ret == libc::MAP_FAILED {
let e = std::io::Error::last_os_error();
error!("{}: fs_slave_map: mmap failed, {}", self.id, e);
return Err(e);
}
}
Ok(0)
}
fn fs_slave_sync(&self, fs: &VhostUserFSSlaveMsg) -> HandlerResult<u64> {
trace!(target: "vhost-fs", "{}: SlaveReqHandler::fs_slave_sync()", self.id);
for i in 0..VHOST_USER_FS_SLAVE_ENTRIES {
let offset = fs.cache_offset[i];
let len = fs.len[i];
// Ignore if the length is 0.
if len == 0 {
continue;
}
debug!(
"{}: fs_slave_sync: offset={:x} len={:x} cache_size={:x}",
self.id, offset, len, self.cache_size
);
if !self.is_req_valid(offset, len) {
error!(
"{}: fs_slave_map: Wrong offset or length, offset={:x} len={:x} cache_size={:x}",
self.id, offset, len, self.cache_size
);
return Err(std::io::Error::from_raw_os_error(libc::EINVAL));
}
let addr = self.mmap_cache_addr + offset;
let ret =
unsafe { libc::msync(addr as *mut libc::c_void, len as usize, libc::MS_SYNC) };
if ret == -1 {
let e = std::io::Error::last_os_error();
error!("{}: fs_slave_sync: msync failed, {}", self.id, e);
return Err(e);
}
}
Ok(0)
}
fn fs_slave_io(&self, fs: &VhostUserFSSlaveMsg, fd: &dyn AsRawFd) -> HandlerResult<u64> {
trace!(target: "vhost-fs", "{}: SlaveReqHandler::fs_slave_io()", self.id);
let guard = self.mem.memory();
let mem = guard.deref();
let mut done: u64 = 0;
for i in 0..VHOST_USER_FS_SLAVE_ENTRIES {
// Ignore if the length is 0.
if fs.len[i] == 0 {
continue;
}
let mut foffset = fs.fd_offset[i];
let mut len = fs.len[i] as usize;
let gpa = fs.cache_offset[i];
let cache_end = self.cache_offset + self.cache_size;
let efault = libc::EFAULT;
debug!(
"{}: fs_slave_io: gpa={:x} len={:x} foffset={:x} cache_offset={:x} cache_size={:x}",
self.id, gpa, len, foffset, self.cache_offset, self.cache_size
);
let mut ptr = if gpa >= self.cache_offset && gpa < cache_end {
let offset = gpa
.checked_sub(self.cache_offset)
.ok_or_else(|| io::Error::from_raw_os_error(efault))?;
let end = gpa
.checked_add(fs.len[i])
.ok_or_else(|| io::Error::from_raw_os_error(efault))?;
if end >= cache_end {
error!( "{}: fs_slave_io: Wrong gpa or len (gpa={:x} len={:x} cache_offset={:x}, cache_size={:x})", self.id, gpa, len, self.cache_offset, self.cache_size );
return Err(io::Error::from_raw_os_error(efault));
}
self.mmap_cache_addr + offset
} else {
// gpa is a RAM addr.
mem.get_host_address(GuestAddress(gpa))
.map_err(|e| {
error!(
"{}: fs_slave_io: Failed to find RAM region associated with gpa 0x{:x}: {:?}",
self.id, gpa, e
);
io::Error::from_raw_os_error(efault)
})? as u64
};
while len > 0 {
let ret = if (fs.flags[i] & VhostUserFSSlaveMsgFlags::MAP_W)
== VhostUserFSSlaveMsgFlags::MAP_W
{
debug!("{}: write: foffset={:x}, len={:x}", self.id, foffset, len);
unsafe {
pwrite64(
fd.as_raw_fd(),
ptr as *const c_void,
len,
foffset as off64_t,
)
}
} else {
debug!("{}: read: foffset={:x}, len={:x}", self.id, foffset, len);
unsafe { pread64(fd.as_raw_fd(), ptr as *mut c_void, len, foffset as off64_t) }
};
if ret < 0 {
let e = std::io::Error::last_os_error();
if (fs.flags[i] & VhostUserFSSlaveMsgFlags::MAP_W)
== VhostUserFSSlaveMsgFlags::MAP_W
{
error!("{}: fs_slave_io: pwrite failed, {}", self.id, e);
} else {
error!("{}: fs_slave_io: pread failed, {}", self.id, e);
}
return Err(e);
}
if ret == 0 {
// EOF
let e = io::Error::new(
io::ErrorKind::UnexpectedEof,
"failed to access whole buffer",
);
error!("{}: fs_slave_io: IO error, {}", self.id, e);
return Err(e);
}
len -= ret as usize;
foffset += ret as u64;
ptr += ret as u64;
done += ret as u64;
}
let ret = unsafe { libc::close(fd.as_raw_fd()) };
if ret == -1 {
let e = std::io::Error::last_os_error();
error!("{}: fs_slave_io: close failed, {}", self.id, e);
return Err(e);
}
}
Ok(done)
}
}
pub struct VhostUserFsHandler<
AS: GuestAddressSpace,
Q: QueueT,
R: GuestMemoryRegion,
S: VhostUserMasterReqHandler,
S: VhostUserFrontendReqHandler,
> {
config: VirtioDeviceConfig<AS, Q, R>,
device: Arc<Mutex<VhostUserFsDevice>>,
slave_req_handler: Option<MasterReqHandler<S>>,
slave_req_handler: Option<FrontendReqHandler<S>>,
id: String,
}
@@ -354,7 +100,7 @@ where
AS: 'static + GuestAddressSpace + Send + Sync,
Q: QueueT + Send + 'static,
R: GuestMemoryRegion + Send + Sync + 'static,
S: 'static + Send + VhostUserMasterReqHandler,
S: 'static + Send + VhostUserFrontendReqHandler,
{
fn process(&mut self, events: Events, _ops: &mut EventOps) {
trace!(target: "vhost-fs", "{}: VhostUserFsHandler::process({})", self.id, events.data());
@@ -425,7 +171,7 @@ impl VhostUserFsDevice {
// Connect to the vhost-user socket.
info!("{VHOST_USER_FS_NAME}: try to connect to {path:?}");
let num_queues = NUM_QUEUE_OFFSET + req_num_queues;
let master = Master::connect(path, num_queues as u64).map_err(VirtioError::VhostError)?;
let master = Frontend::connect(path, num_queues as u64).map_err(VirtioError::VhostError)?;
info!("{VHOST_USER_FS_NAME}: get features");
let avail_features = master.get_features().map_err(VirtioError::VhostError)?;
@@ -475,7 +221,7 @@ impl VhostUserFsDevice {
let mut features = VhostUserProtocolFeatures::MQ | VhostUserProtocolFeatures::REPLY_ACK;
if self.is_dax_on() {
features |=
VhostUserProtocolFeatures::SLAVE_REQ | VhostUserProtocolFeatures::SLAVE_SEND_FD;
VhostUserProtocolFeatures::BACKEND_REQ | VhostUserProtocolFeatures::BACKEND_SEND_FD;
}
features
}
@@ -484,7 +230,7 @@ impl VhostUserFsDevice {
AS: GuestAddressSpace,
Q: QueueT,
R: GuestMemoryRegion,
S: VhostUserMasterReqHandler,
S: VhostUserFrontendReqHandler,
>(
&mut self,
handler: &VhostUserFsHandler<AS, Q, R, S>,
@@ -621,7 +367,7 @@ where
mem: config.vm_as.clone(),
id: device.device_info.driver_name.clone(),
});
let req_handler = MasterReqHandler::new(vu_master_req_handler)
let req_handler = FrontendReqHandler::new(vu_master_req_handler)
.map_err(|e| ActivateError::VhostActivate(vhost_rs::Error::VhostUserProtocol(e)))?;
Some(req_handler)
@@ -748,7 +494,7 @@ where
let guest_mmap_region = Arc::new(
GuestRegionMmap::new(mmap_region, GuestAddress(guest_addr))
.map_err(VirtioError::InsertMmap)?,
.ok_or(VirtioError::InsertMmap)?,
);
Ok(Some(VirtioSharedMemoryList {

View File

@@ -12,7 +12,7 @@ use dbs_utils::epoll_manager::{EpollManager, EventOps, Events, MutEventSubscribe
use dbs_utils::net::MacAddr;
use log::{debug, error, info, trace, warn};
use vhost_rs::vhost_user::{
Error as VhostUserError, Master, VhostUserProtocolFeatures, VhostUserVirtioFeatures,
Error as VhostUserError, Frontend, VhostUserProtocolFeatures, VhostUserVirtioFeatures,
};
use vhost_rs::Error as VhostError;
use virtio_bindings::bindings::virtio_net::{
@@ -59,7 +59,7 @@ struct VhostUserNetDevice {
impl VhostUserNetDevice {
fn new(
master: Master,
master: Frontend,
mut avail_features: u64,
listener: Listener,
guest_mac: Option<&MacAddr>,

View File

@@ -14,13 +14,14 @@ use vhost_rs::vhost_user::message::{
VhostUserVringAddr, VhostUserVringState, MAX_MSG_SIZE,
};
use vhost_rs::vhost_user::Error;
use vm_memory::ByteValued;
use vmm_sys_util::sock_ctrl_msg::ScmSocket;
use vmm_sys_util::tempfile::TempFile;
pub const MAX_ATTACHED_FD_ENTRIES: usize = 32;
pub(crate) trait Req:
Clone + Copy + Debug + PartialEq + Eq + PartialOrd + Ord + Into<u32>
Clone + Copy + Debug + PartialEq + Eq + PartialOrd + Ord + Into<u32> + Send + Sync
{
fn is_valid(&self) -> bool;
}
@@ -215,6 +216,10 @@ impl<R: Req> Default for VhostUserMsgHeader<R> {
}
}
// SAFETY: VhostUserMsgHeader is a packed struct with only primitive (u32) fields and PhantomData.
// All bit patterns are valid, and it has no padding bytes.
unsafe impl<R: Req> ByteValued for VhostUserMsgHeader<R> {}
/// Unix domain socket endpoint for vhost-user connection.
pub(crate) struct Endpoint<R: Req> {
sock: UnixStream,

View File

@@ -99,13 +99,13 @@ mod tests {
#[test]
fn test_tcp_backend_bind() {
let tcp_sock_addr = String::from("127.0.0.2:9000");
let tcp_sock_addr = String::from("127.0.0.1:9000");
assert!(VsockTcpBackend::new(tcp_sock_addr).is_ok());
}
#[test]
fn test_tcp_backend_accept() {
let tcp_sock_addr = String::from("127.0.0.2:9001");
let tcp_sock_addr = String::from("127.0.0.1:9001");
let mut vsock_backend = VsockTcpBackend::new(tcp_sock_addr.clone()).unwrap();
let _stream = TcpStream::connect(&tcp_sock_addr).unwrap();
@@ -115,7 +115,7 @@ mod tests {
#[test]
fn test_tcp_backend_communication() {
let tcp_sock_addr = String::from("127.0.0.2:9002");
let tcp_sock_addr = String::from("127.0.0.1:9002");
let test_string = String::from("TEST");
let mut buffer = [0; 10];
@@ -139,7 +139,7 @@ mod tests {
#[test]
fn test_tcp_backend_connect() {
let tcp_sock_addr = String::from("127.0.0.2:9003");
let tcp_sock_addr = String::from("127.0.0.1:9003");
let vsock_backend = VsockTcpBackend::new(tcp_sock_addr).unwrap();
// tcp backend doesn't support peer connections
assert!(vsock_backend.connect(0).is_err());
@@ -147,14 +147,14 @@ mod tests {
#[test]
fn test_tcp_backend_type() {
let tcp_sock_addr = String::from("127.0.0.2:9004");
let tcp_sock_addr = String::from("127.0.0.1:9004");
let vsock_backend = VsockTcpBackend::new(tcp_sock_addr).unwrap();
assert_eq!(vsock_backend.r#type(), VsockBackendType::Tcp);
}
#[test]
fn test_tcp_backend_vsock_stream() {
let tcp_sock_addr = String::from("127.0.0.2:9005");
let tcp_sock_addr = String::from("127.0.0.1:9005");
let _vsock_backend = VsockTcpBackend::new(tcp_sock_addr.clone()).unwrap();
let vsock_stream = TcpStream::connect(&tcp_sock_addr).unwrap();

View File

@@ -17,7 +17,7 @@
/// backend.
use std::ops::{Deref, DerefMut};
use virtio_queue::{Descriptor, DescriptorChain};
use virtio_queue::{desc::split::Descriptor, DescriptorChain};
use vm_memory::{Address, GuestMemory};
use super::defs;

View File

@@ -118,11 +118,15 @@ pub enum AddressManagerError {
/// Failure in accessing the memory located at some address.
#[error("address manager failed to access guest memory located at 0x{0:x}")]
AccessGuestMemory(u64, #[source] vm_memory::mmap::Error),
AccessGuestMemory(u64, #[source] vm_memory::GuestMemoryError),
/// Failed to create GuestMemory
#[error("address manager failed to create guest memory object")]
CreateGuestMemory(#[source] vm_memory::Error),
CreateGuestMemory(#[source] vm_memory::GuestMemoryError),
/// Failed to insert/manage guest memory region collection
#[error("address manager failed to manage guest memory region collection")]
GuestRegionCollection(#[source] vm_memory::GuestRegionCollectionError),
/// Failure in initializing guest memory.
#[error("address manager failed to initialize guest memory")]
@@ -328,7 +332,7 @@ impl AddressSpaceMgr {
vm_memory = vm_memory
.insert_region(mmap_reg.clone())
.map_err(AddressManagerError::CreateGuestMemory)?;
.map_err(AddressManagerError::GuestRegionCollection)?;
self.map_to_kvm(res_mgr, &param, reg, mmap_reg)?;
}
@@ -488,8 +492,11 @@ impl AddressSpaceMgr {
self.configure_thp_and_prealloc(&region, &mmap_reg)?;
}
let reg = GuestRegionImpl::new(mmap_reg, region.start_addr())
.map_err(AddressManagerError::CreateGuestMemory)?;
let reg = GuestRegionImpl::new(mmap_reg, region.start_addr()).ok_or(
AddressManagerError::GuestRegionCollection(
vm_memory::GuestRegionCollectionError::NoMemoryRegion,
),
)?;
Ok(Arc::new(reg))
}

View File

@@ -31,7 +31,7 @@ pub enum BalloonDeviceError {
/// guest memory error
#[error("failed to access guest memory, {0}")]
GuestMemoryError(#[source] vm_memory::mmap::Error),
GuestMemoryError(#[source] vm_memory::GuestMemoryError),
/// create balloon device error
#[error("failed to create virtio-balloon device, {0}")]

View File

@@ -557,15 +557,14 @@ impl MemRegionFactory for MemoryRegionFactory {
);
// All values should be valid.
let memory_region = Arc::new(
GuestRegionMmap::new(mmap_region, guest_addr).map_err(VirtioError::InsertMmap)?,
);
let memory_region =
Arc::new(GuestRegionMmap::new(mmap_region, guest_addr).ok_or(VirtioError::InsertMmap)?);
let vm_as_new = self
.vm_as
.memory()
.insert_region(memory_region.clone())
.map_err(VirtioError::InsertMmap)?;
.map_err(|_| VirtioError::InsertMmap)?;
self.vm_as.lock().unwrap().replace(vm_as_new);
self.address_space.insert_region(region).map_err(|e| {
error!(self.logger, "failed to insert address space region: {}", e);

View File

@@ -78,7 +78,7 @@ impl DeviceVirtioRegionHandler {
) -> std::result::Result<(), VirtioError> {
let vm_as_new = self.vm_as.memory().insert_region(region).map_err(|e| {
error!("DeviceVirtioRegionHandler failed to insert guest memory region: {e:?}.");
VirtioError::InsertMmap(e)
VirtioError::InsertMmap
})?;
// Do not expect poisoned lock here, so safe to unwrap().
self.vm_as.lock().unwrap().replace(vm_as_new);

View File

@@ -13,6 +13,7 @@ use arc_swap::ArcSwap;
use dbs_address_space::AddressSpace;
#[cfg(target_arch = "aarch64")]
use dbs_arch::{DeviceType, MMIODeviceInfo};
#[cfg(feature = "host-device")]
use dbs_boot::layout::MMIO_LOW_END;
use dbs_device::device_manager::{Error as IoManagerError, IoManager, IoManagerContext};
use dbs_device::resources::DeviceResources;
@@ -24,7 +25,6 @@ use dbs_legacy_devices::ConsoleHandler;
use dbs_pci::CAPABILITY_BAR_SIZE;
use dbs_utils::epoll_manager::EpollManager;
use kvm_ioctls::VmFd;
use virtio_queue::QueueSync;
#[cfg(feature = "dbs-virtio-devices")]
use dbs_device::resources::ResourceConstraint;
@@ -41,6 +41,7 @@ use dbs_virtio_devices::{
#[cfg(feature = "host-device")]
use dbs_pci::VfioPciDevice;
#[cfg(feature = "host-device")]
use dbs_pci::VirtioPciDevice;
#[cfg(all(feature = "hotplug", feature = "dbs-upcall"))]
use dbs_upcall::{
@@ -59,6 +60,7 @@ use crate::resource_manager::ResourceManager;
use crate::vm::{KernelConfigInfo, Vm, VmConfigInfo};
use crate::IoManagerCached;
#[cfg(feature = "host-device")]
use vm_memory::GuestRegionMmap;
/// Virtual machine console device manager.
@@ -187,18 +189,23 @@ pub enum DeviceMgrError {
/// Error from Vfio Pci
#[error("failed to do vfio pci operation: {0:?}")]
VfioPci(#[source] dbs_pci::VfioPciError),
#[cfg(feature = "host-device")]
/// Error from Virtio Pci
#[error("failed to do virtio pci operation")]
VirtioPci,
#[cfg(feature = "host-device")]
/// PCI system manager error
#[error("Pci system manager error")]
PciSystemManager,
#[cfg(feature = "host-device")]
/// Dragonball pci system error
#[error("pci error: {0:?}")]
PciError(#[source] dbs_pci::Error),
#[cfg(feature = "host-device")]
/// Virtio Pci system error
#[error("virtio pci error: {0:?}")]
VirtioPciError(#[source] dbs_pci::VirtioPciDeviceError),
#[cfg(feature = "host-device")]
/// Unsupported pci device type
#[error("unsupported pci device type")]
InvalidPciDeviceType,
@@ -315,6 +322,7 @@ pub struct DeviceOpContext {
virtio_devices: Vec<Arc<dyn DeviceIo>>,
#[cfg(feature = "host-device")]
vfio_manager: Option<Arc<Mutex<VfioDeviceMgr>>>,
#[cfg(feature = "host-device")]
pci_system_manager: Arc<Mutex<PciSystemManager>>,
vm_config: Option<VmConfigInfo>,
shared_info: Arc<RwLock<InstanceInfo>>,
@@ -366,6 +374,7 @@ impl DeviceOpContext {
shared_info,
#[cfg(feature = "host-device")]
vfio_manager: None,
#[cfg(feature = "host-device")]
pci_system_manager: device_mgr.pci_system_manager.clone(),
}
}
@@ -659,6 +668,7 @@ pub struct DeviceManager {
vhost_user_net_manager: VhostUserNetDeviceMgr,
#[cfg(feature = "host-device")]
pub(crate) vfio_manager: Arc<Mutex<VfioDeviceMgr>>,
#[cfg(feature = "host-device")]
pub(crate) pci_system_manager: Arc<Mutex<PciSystemManager>>,
}
@@ -674,15 +684,21 @@ impl DeviceManager {
let irq_manager = Arc::new(KvmIrqManager::new(vm_fd.clone()));
let io_manager = Arc::new(ArcSwap::new(Arc::new(IoManager::new())));
let io_lock = Arc::new(Mutex::new(()));
#[cfg(feature = "host-device")]
let io_context = DeviceManagerContext::new(io_manager.clone(), io_lock.clone());
#[cfg(feature = "host-device")]
let mut mgr = PciSystemManager::new(irq_manager.clone(), io_context, res_manager.clone())?;
#[cfg(feature = "host-device")]
let requirements = mgr.resource_requirements();
#[cfg(feature = "host-device")]
let resources = res_manager
.allocate_device_resources(&requirements, USE_SHARED_IRQ)
.map_err(DeviceMgrError::ResourceError)?;
#[cfg(feature = "host-device")]
mgr.activate(resources)?;
#[cfg(feature = "host-device")]
let pci_system_manager = Arc::new(Mutex::new(mgr));
Ok(DeviceManager {
@@ -720,6 +736,7 @@ impl DeviceManager {
pci_system_manager.clone(),
logger,
))),
#[cfg(feature = "host-device")]
pci_system_manager,
})
}
@@ -1251,6 +1268,7 @@ impl DeviceManager {
}
/// Create a Virtio PCI transport layer device for the virtio backend device.
#[cfg(feature = "host-device")]
pub fn create_virtio_pci_device(
mut device: DbsVirtioDevice,
ctx: &mut DeviceOpContext,
@@ -1366,6 +1384,7 @@ impl DeviceManager {
}
/// Create a Virtio PCI transport layer device for the virtio backend device.
#[cfg(feature = "host-device")]
pub fn register_virtio_pci_device(
device: Arc<dyn DeviceIo>,
ctx: &DeviceOpContext,
@@ -1385,6 +1404,7 @@ impl DeviceManager {
}
/// Deregister Virtio device from IoManager
#[cfg(feature = "host-device")]
pub fn deregister_virtio_device(
device: &Arc<dyn DeviceIo>,
ctx: &mut DeviceOpContext,
@@ -1405,11 +1425,15 @@ impl DeviceManager {
}
/// Destroy/Deregister resources for a Virtio PCI
#[cfg(feature = "host-device")]
fn destroy_pci_device(
device: Arc<dyn DeviceIo>,
ctx: &mut DeviceOpContext,
dev_id: u8,
) -> std::result::Result<(), DeviceMgrError> {
use virtio_queue::QueueSync;
use vm_memory::GuestRegionMmap;
// unregister IoManager
Self::deregister_virtio_device(&device, ctx)?;
// unregister Resource manager
@@ -1489,6 +1513,7 @@ impl DeviceManager {
}
/// Teardown the Virtio PCI or MMIO transport layer device associated with the virtio backend device.
#[cfg(feature = "dbs-virtio-devices")]
pub fn destroy_virtio_device(
device: Arc<dyn DeviceIo>,
ctx: &mut DeviceOpContext,
@@ -1496,12 +1521,18 @@ impl DeviceManager {
if let Some(mmio_dev) = device.as_any().downcast_ref::<DbsMmioV2Device>() {
Self::destroy_mmio_device(device.clone(), ctx)?;
mmio_dev.remove();
} else if let Some(pci_dev) = device.as_any().downcast_ref::<VirtioPciDevice<
GuestAddressSpaceImpl,
QueueSync,
GuestRegionMmap,
>>() {
Self::destroy_pci_device(device.clone(), ctx, pci_dev.device_id())?;
}
#[cfg(feature = "host-device")]
{
use virtio_queue::QueueSync;
use vm_memory::GuestRegionMmap;
if let Some(pci_dev) = device.as_any().downcast_ref::<VirtioPciDevice<
GuestAddressSpaceImpl,
QueueSync,
GuestRegionMmap,
>>() {
Self::destroy_pci_device(device.clone(), ctx, pci_dev.device_id())?;
}
}
Ok(())
@@ -1572,18 +1603,25 @@ mod tests {
let irq_manager = Arc::new(KvmIrqManager::new(vm_fd.clone()));
let io_manager = Arc::new(ArcSwap::new(Arc::new(IoManager::new())));
let io_lock = Arc::new(Mutex::new(()));
#[cfg(feature = "host-device")]
let io_context = DeviceManagerContext::new(io_manager.clone(), io_lock.clone());
#[cfg(feature = "host-device")]
let mut mgr =
PciSystemManager::new(irq_manager.clone(), io_context, res_manager.clone())
.unwrap();
#[cfg(feature = "host-device")]
let requirements = mgr.resource_requirements();
#[cfg(feature = "host-device")]
let resources = res_manager
.allocate_device_resources(&requirements, USE_SHARED_IRQ)
.map_err(DeviceMgrError::ResourceError)
.unwrap();
#[cfg(feature = "host-device")]
mgr.activate(resources).unwrap();
#[cfg(feature = "host-device")]
let pci_system_manager = Arc::new(Mutex::new(mgr));
DeviceManager {
@@ -1619,6 +1657,7 @@ mod tests {
pci_system_manager.clone(),
&logger,
))),
#[cfg(feature = "host-device")]
pci_system_manager,
logger,

View File

@@ -406,9 +406,11 @@ impl VfioDeviceMgr {
if let Some(vfio_container) = self.vfio_container.as_ref() {
Ok(vfio_container.clone())
} else {
let kvm_dev_fd = Arc::new(self.get_kvm_dev_fd()?);
let vfio_container =
Arc::new(VfioContainer::new(kvm_dev_fd).map_err(VfioDeviceError::VfioIoctlError)?);
let kvm_dev_fd = self.get_kvm_dev_fd()?;
let vfio_dev_fd = Arc::new(vfio_ioctls::VfioDeviceFd::new_from_kvm(kvm_dev_fd));
let vfio_container = Arc::new(
VfioContainer::new(Some(vfio_dev_fd)).map_err(VfioDeviceError::VfioIoctlError)?,
);
self.vfio_container = Some(vfio_container.clone());
Ok(vfio_container)

View File

@@ -43,7 +43,7 @@ impl Vcpu {
#[allow(clippy::too_many_arguments)]
pub fn new_aarch64(
id: u8,
vcpu_fd: Arc<VcpuFd>,
vcpu_fd: VcpuFd,
io_mgr: IoManagerCached,
exit_evt: EventFd,
vcpu_state_event: EventFd,

View File

@@ -274,7 +274,7 @@ enum VcpuEmulation {
/// A wrapper around creating and using a kvm-based VCPU.
pub struct Vcpu {
// vCPU fd used by the vCPU
fd: Arc<VcpuFd>,
fd: VcpuFd,
// vCPU id info
id: u8,
// Io manager Cached for facilitating IO operations
@@ -317,7 +317,7 @@ pub struct Vcpu {
}
// Using this for easier explicit type-casting to help IDEs interpret the code.
type VcpuCell = Cell<Option<*const Vcpu>>;
type VcpuCell = Cell<Option<*mut Vcpu>>;
impl Vcpu {
thread_local!(static TLS_VCPU_PTR: VcpuCell = const { Cell::new(None) });
@@ -332,7 +332,7 @@ impl Vcpu {
if cell.get().is_some() {
return Err(VcpuError::VcpuTlsInit);
}
cell.set(Some(self as *const Vcpu));
cell.set(Some(self as *mut Vcpu));
Ok(())
})
}
@@ -369,13 +369,13 @@ impl Vcpu {
/// dereferencing from pointer an already borrowed `Vcpu`.
unsafe fn run_on_thread_local<F>(func: F) -> Result<()>
where
F: FnOnce(&Vcpu),
F: FnOnce(&mut Vcpu),
{
Self::TLS_VCPU_PTR.with(|cell: &VcpuCell| {
if let Some(vcpu_ptr) = cell.get() {
// Dereferencing here is safe since `TLS_VCPU_PTR` is populated/non-empty,
// and it is being cleared on `Vcpu::drop` so there is no dangling pointer.
let vcpu_ref: &Vcpu = &*vcpu_ptr;
let vcpu_ref: &mut Vcpu = &mut *vcpu_ptr;
func(vcpu_ref);
Ok(())
} else {
@@ -436,7 +436,7 @@ impl Vcpu {
/// Extract the vcpu running logic for test mocking.
#[cfg(not(test))]
pub fn emulate(fd: &VcpuFd) -> std::result::Result<VcpuExit<'_>, kvm_ioctls::Error> {
pub fn emulate(fd: &mut VcpuFd) -> std::result::Result<VcpuExit<'_>, kvm_ioctls::Error> {
fd.run()
}
@@ -444,7 +444,7 @@ impl Vcpu {
///
/// Returns error or enum specifying whether emulation was handled or interrupted.
fn run_emulation(&mut self) -> Result<VcpuEmulation> {
match Vcpu::emulate(&self.fd) {
match Vcpu::emulate(&mut self.fd) {
Ok(run) => {
match run {
#[cfg(target_arch = "x86_64")]
@@ -455,8 +455,9 @@ impl Vcpu {
}
#[cfg(target_arch = "x86_64")]
VcpuExit::IoOut(addr, data) => {
if !self.check_io_port_info(addr, data)? {
let _ = self.io_mgr.pio_write(addr, data);
let data = data.to_vec();
if !self.check_io_port_info(addr, &data)? {
let _ = self.io_mgr.pio_write(addr, &data);
}
self.metrics.exit_io_out.inc();
Ok(VcpuEmulation::Handled)
@@ -493,14 +494,14 @@ impl Vcpu {
VcpuExit::SystemEvent(event_type, event_flags) => match event_type {
KVM_SYSTEM_EVENT_RESET | KVM_SYSTEM_EVENT_SHUTDOWN => {
info!(
"Received KVM_SYSTEM_EVENT: type: {event_type}, event: {event_flags}"
"Received KVM_SYSTEM_EVENT: type: {event_type}, event: {event_flags:?}"
);
Ok(VcpuEmulation::Stopped)
}
_ => {
self.metrics.failures.inc();
error!(
"Received KVM_SYSTEM_EVENT signal type: {event_type}, flag: {event_flags}"
"Received KVM_SYSTEM_EVENT signal type: {event_type}, flag: {event_flags:?}"
);
Err(VcpuError::VcpuUnhandledKvmExit)
}
@@ -765,7 +766,7 @@ impl Vcpu {
/// Get vcpu file descriptor.
pub fn vcpu_fd(&self) -> &VcpuFd {
self.fd.as_ref()
&self.fd
}
pub fn metrics(&self) -> Arc<VcpuMetrics> {
@@ -804,7 +805,7 @@ pub mod tests {
FailEntry(u64, u32),
InternalError,
Unknown,
SystemEvent(u32, u64),
SystemEvent(u32, Vec<u64>),
Error(i32),
}
@@ -813,7 +814,7 @@ pub mod tests {
}
impl Vcpu {
pub fn emulate(_fd: &VcpuFd) -> std::result::Result<VcpuExit<'_>, kvm_ioctls::Error> {
pub fn emulate(_fd: &mut VcpuFd) -> std::result::Result<VcpuExit<'_>, kvm_ioctls::Error> {
let res = &*EMULATE_RES.lock().unwrap();
match res {
EmulationCase::IoIn => Ok(VcpuExit::IoIn(0, &mut [])),
@@ -828,7 +829,8 @@ pub mod tests {
EmulationCase::InternalError => Ok(VcpuExit::InternalError),
EmulationCase::Unknown => Ok(VcpuExit::Unknown),
EmulationCase::SystemEvent(event_type, event_flags) => {
Ok(VcpuExit::SystemEvent(*event_type, *event_flags))
let flags = event_flags.clone().into_boxed_slice();
Ok(VcpuExit::SystemEvent(*event_type, Box::leak(flags)))
}
EmulationCase::Error(e) => Err(kvm_ioctls::Error::new(*e)),
}
@@ -839,7 +841,7 @@ pub mod tests {
fn create_vcpu() -> (Vcpu, Receiver<VcpuStateEvent>) {
let kvm_context = KvmContext::new(None).unwrap();
let vm = kvm_context.kvm().create_vm().unwrap();
let vcpu_fd = Arc::new(vm.create_vcpu(0).unwrap());
let vcpu_fd = vm.create_vcpu(0).unwrap();
let io_manager = IoManagerCached::new(Arc::new(ArcSwap::new(Arc::new(IoManager::new()))));
let supported_cpuid = kvm_context
.supported_cpuid(kvm_bindings::KVM_MAX_CPUID_ENTRIES)
@@ -875,7 +877,7 @@ pub mod tests {
let kvm = Kvm::new().unwrap();
let vm = Arc::new(kvm.create_vm().unwrap());
let _kvm_context = KvmContext::new(Some(kvm.as_raw_fd())).unwrap();
let vcpu_fd = Arc::new(vm.create_vcpu(0).unwrap());
let vcpu_fd = vm.create_vcpu(0).unwrap();
let io_manager = IoManagerCached::new(Arc::new(ArcSwap::new(Arc::new(IoManager::new()))));
let reset_event_fd = EventFd::new(libc::EFD_NONBLOCK).unwrap();
let vcpu_state_event = EventFd::new(libc::EFD_NONBLOCK).unwrap();
@@ -947,17 +949,19 @@ pub mod tests {
assert!(matches!(res, Err(VcpuError::VcpuUnhandledKvmExit)));
// KVM_SYSTEM_EVENT_RESET
*(EMULATE_RES.lock().unwrap()) = EmulationCase::SystemEvent(KVM_SYSTEM_EVENT_RESET, 0);
*(EMULATE_RES.lock().unwrap()) =
EmulationCase::SystemEvent(KVM_SYSTEM_EVENT_RESET, vec![0]);
let res = vcpu.run_emulation();
assert!(matches!(res, Ok(VcpuEmulation::Stopped)));
// KVM_SYSTEM_EVENT_SHUTDOWN
*(EMULATE_RES.lock().unwrap()) = EmulationCase::SystemEvent(KVM_SYSTEM_EVENT_SHUTDOWN, 0);
*(EMULATE_RES.lock().unwrap()) =
EmulationCase::SystemEvent(KVM_SYSTEM_EVENT_SHUTDOWN, vec![0]);
let res = vcpu.run_emulation();
assert!(matches!(res, Ok(VcpuEmulation::Stopped)));
// Other system event
*(EMULATE_RES.lock().unwrap()) = EmulationCase::SystemEvent(0, 0);
*(EMULATE_RES.lock().unwrap()) = EmulationCase::SystemEvent(0, vec![0]);
let res = vcpu.run_emulation();
assert!(matches!(res, Err(VcpuError::VcpuUnhandledKvmExit)));

View File

@@ -189,7 +189,7 @@ pub struct VcpuResizeInfo {
#[derive(Default)]
pub(crate) struct VcpuInfo {
pub(crate) vcpu: Option<Vcpu>,
vcpu_fd: Option<Arc<VcpuFd>>,
vcpu_fd: Option<VcpuFd>,
handle: Option<VcpuHandle>,
tid: u32,
}
@@ -541,18 +541,13 @@ impl VcpuManager {
}
// We will reuse the kvm vcpufd after the first creation, since we can't
// create a vcpufd with the same id in one kvm instance.
let kvm_vcpu = match &self.vcpu_infos[cpu_index as usize].vcpu_fd {
Some(vcpu_fd) => vcpu_fd.clone(),
None => {
let vcpu_fd = Arc::new(
self.vm_fd
.create_vcpu(cpu_index as u64)
.map_err(VcpuError::VcpuFd)
.map_err(VcpuManagerError::Vcpu)?,
);
self.vcpu_infos[cpu_index as usize].vcpu_fd = Some(vcpu_fd.clone());
vcpu_fd
}
let kvm_vcpu = match self.vcpu_infos[cpu_index as usize].vcpu_fd.take() {
Some(vcpu_fd) => vcpu_fd,
None => self
.vm_fd
.create_vcpu(cpu_index as u64)
.map_err(VcpuError::VcpuFd)
.map_err(VcpuManagerError::Vcpu)?,
};
let mut vcpu = self.create_vcpu_arch(cpu_index, kvm_vcpu, request_ts)?;
@@ -777,7 +772,7 @@ impl VcpuManager {
fn create_vcpu_arch(
&self,
cpu_index: u8,
vcpu_fd: Arc<VcpuFd>,
vcpu_fd: VcpuFd,
request_ts: TimestampUs,
) -> Result<Vcpu> {
// It's safe to unwrap because guest_kernel always exist until vcpu manager done
@@ -806,7 +801,7 @@ impl VcpuManager {
fn create_vcpu_arch(
&self,
cpu_index: u8,
vcpu_fd: Arc<VcpuFd>,
vcpu_fd: VcpuFd,
request_ts: TimestampUs,
) -> Result<Vcpu> {
Vcpu::new_aarch64(

View File

@@ -45,7 +45,7 @@ impl Vcpu {
#[allow(clippy::too_many_arguments)]
pub fn new_x86_64(
id: u8,
vcpu_fd: Arc<VcpuFd>,
vcpu_fd: VcpuFd,
io_mgr: IoManagerCached,
cpuid: CpuId,
exit_evt: EventFd,

View File

@@ -642,7 +642,7 @@ impl Vm {
image: &mut F,
) -> std::result::Result<InitrdConfig, LoadInitrdError>
where
F: Read + Seek,
F: Read + Seek + vm_memory::ReadVolatile,
{
use crate::error::LoadInitrdError::*;
@@ -666,7 +666,7 @@ impl Vm {
// Load the image into memory
vm_memory
.read_from(GuestAddress(address), image, size)
.read_volatile_from(GuestAddress(address), image, size)
.map_err(|_| LoadInitrd)?;
Ok(InitrdConfig {
@@ -1132,7 +1132,7 @@ pub mod tests {
let vm_memory = vm.address_space.vm_memory().unwrap();
vm_memory.write_obj(code, load_addr).unwrap();
let vcpu_fd = vm.vm_fd().create_vcpu(0).unwrap();
let mut vcpu_fd = vm.vm_fd().create_vcpu(0).unwrap();
let mut vcpu_sregs = vcpu_fd.get_sregs().unwrap();
assert_ne!(vcpu_sregs.cs.base, 0);
assert_ne!(vcpu_sregs.cs.selector, 0);

View File

@@ -22,7 +22,7 @@ slog = { workspace = true }
slog-scope = { workspace = true }
thiserror = { workspace = true }
tokio = { workspace = true, features = ["sync", "fs", "process", "io-util"] }
vmm-sys-util = "0.11.0"
vmm-sys-util = "0.15.0"
rand = { workspace = true }
path-clean = "1.0.1"
lazy_static = { workspace = true }

View File

@@ -15,7 +15,7 @@ use std::os::unix::io::{AsRawFd, FromRawFd, RawFd};
use libc::ifreq;
use vmm_sys_util::ioctl::{ioctl_with_mut_ref, ioctl_with_ref, ioctl_with_val};
use vmm_sys_util::{ioctl_ioc_nr, ioctl_iow_nr};
use vmm_sys_util::ioctl_iow_nr;
// As defined in the Linux UAPI:
// https://elixir.bootlin.com/linux/v4.17/source/include/uapi/linux/if.h#L33
pub(crate) const IFACE_NAME_MAX_LEN: usize = 16;

View File

@@ -1140,7 +1140,7 @@ dependencies = [
"serde",
"thiserror 1.0.40",
"timerfd",
"vmm-sys-util 0.11.2",
"vmm-sys-util 0.15.0",
]
[[package]]
@@ -1469,9 +1469,9 @@ dependencies = [
[[package]]
name = "event-manager"
version = "0.2.1"
version = "0.4.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "377fa591135fbe23396a18e2655a6d5481bf7c5823cdfa3cc81b01a229cbe640"
checksum = "13bdac971eb2efaceffca0976058ab80c715945cc565c8a4aa1ed3bb0dc8d0e4"
dependencies = [
"libc",
"vmm-sys-util 0.14.0",
@@ -2119,7 +2119,7 @@ dependencies = [
"tracing",
"ttrpc",
"ttrpc-codegen",
"vmm-sys-util 0.11.2",
"vmm-sys-util 0.15.0",
]
[[package]]
@@ -5709,9 +5709,9 @@ dependencies = [
[[package]]
name = "vmm-sys-util"
version = "0.11.2"
version = "0.14.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "48b7b084231214f7427041e4220d77dfe726897a6d41fddee450696e66ff2a29"
checksum = "d21f366bf22bfba3e868349978766a965cbe628c323d58e026be80b8357ab789"
dependencies = [
"bitflags 1.3.2",
"libc",
@@ -5719,9 +5719,9 @@ dependencies = [
[[package]]
name = "vmm-sys-util"
version = "0.14.0"
version = "0.15.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d21f366bf22bfba3e868349978766a965cbe628c323d58e026be80b8357ab789"
checksum = "506c62fdf617a5176827c2f9afbcf1be155b03a9b4bf9617a60dbc07e3a1642f"
dependencies = [
"bitflags 1.3.2",
"libc",

View File

@@ -689,7 +689,11 @@ fn build_auth(reference: &Reference) -> RegistryAuth {
Err(CredentialRetrievalError::ConfigReadError) => {
debug!("build_auth: Cannot read docker credentials - using anonymous access.");
}
Err(CredentialRetrievalError::HelperFailure { stdout, stderr }) => {
Err(CredentialRetrievalError::HelperFailure {
helper: _,
stdout,
stderr,
}) => {
if stdout == "credentials not found in native keychain\n" {
// On WSL, this error is generated when credentials are not
// available in ~/.docker/config.json.

View File

@@ -264,7 +264,11 @@ pub fn build_auth(reference: &Reference) -> Option<AuthConfig> {
Err(CredentialRetrievalError::ConfigReadError) => {
debug!("build_auth: Cannot read docker credentials - using anonymous access.");
}
Err(CredentialRetrievalError::HelperFailure { stdout, stderr }) => {
Err(CredentialRetrievalError::HelperFailure {
helper: _,
stdout,
stderr,
}) => {
if stdout == "credentials not found in native keychain\n" {
// On WSL, this error is generated when credentials are not
// available in ~/.docker/config.json.

View File

@@ -905,7 +905,7 @@ dependencies = [
"serde",
"thiserror 1.0.50",
"timerfd",
"vmm-sys-util 0.11.2",
"vmm-sys-util 0.15.0",
]
[[package]]
@@ -1126,12 +1126,12 @@ dependencies = [
[[package]]
name = "event-manager"
version = "0.2.1"
version = "0.4.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "377fa591135fbe23396a18e2655a6d5481bf7c5823cdfa3cc81b01a229cbe640"
checksum = "13bdac971eb2efaceffca0976058ab80c715945cc565c8a4aa1ed3bb0dc8d0e4"
dependencies = [
"libc",
"vmm-sys-util 0.11.2",
"vmm-sys-util 0.15.0",
]
[[package]]
@@ -1619,7 +1619,7 @@ dependencies = [
"tracing",
"ttrpc",
"ttrpc-codegen",
"vmm-sys-util 0.11.2",
"vmm-sys-util 0.15.0",
]
[[package]]
@@ -1904,7 +1904,7 @@ dependencies = [
"toml",
"url",
"virt_container",
"vmm-sys-util 0.11.2",
"vmm-sys-util 0.15.0",
]
[[package]]
@@ -4564,6 +4564,16 @@ dependencies = [
"libc",
]
[[package]]
name = "vmm-sys-util"
version = "0.15.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "506c62fdf617a5176827c2f9afbcf1be155b03a9b4bf9617a60dbc07e3a1642f"
dependencies = [
"bitflags 1.3.2",
"libc",
]
[[package]]
name = "vsock"
version = "0.3.0"

View File

@@ -34,7 +34,7 @@ kata-sys-util = { path = "../../../src/libs/kata-sys-util/" }
agent = { path = "../../runtime-rs/crates/agent" }
virt_container = { path = "../../runtime-rs/crates/runtimes/virt_container" }
serial_test = "0.10.0"
vmm-sys-util = "0.11.0"
vmm-sys-util = "0.15.0"
epoll = "4.0.1"
libc = "0.2.138"

View File

@@ -34,6 +34,17 @@ setup() {
[ "$pod_name" == "$result" ]
}
#@test "test network performance with nydus" {
# curl_cmd='curl -I -s -o /dev/null \
#-w "code=%{http_code} ip=%{remote_ip} dns=%{time_namelookup}s connect=%{time_connect}s tls=%{time_appconnect}s starttransfer=%{time_starttransfer}s total=%{time_total}s\n" \
#https://ghcr.io/v2/dragonflyoss/image-service/alpine/blobs/sha256:12dba7d4fae4c70e1421021dd1ef3e8a1a4f1a9369074fc912a636dd2afdd640'
#
# kubectl apply -f "${yaml_file}"
# kubectl wait --for jsonpath=status.phase=Succeeded --timeout=$timeout pod "$pod_name"
# result=$(kubectl exec "$pod_name" -- sh -c "$curl_cmd")
# echo "$result"
#}
teardown() {
# Debugging information
kubectl describe "pod/$pod_name"

View File

@@ -183,7 +183,8 @@ function run_test() {
pod=$(sudo -E crictl --timeout=20s runp -r kata-${KATA_HYPERVISOR} $dir_path/nydus-sandbox.yaml)
echo "Pod $pod created"
cnt=$(sudo -E crictl --timeout=20s create $pod $dir_path/nydus-container.yaml $dir_path/nydus-sandbox.yaml)
echo "Container $cnt created"
echo "XXXXXContainer $cnt created"
sudo -E crictl --timeout=20s start $cnt
echo "Container $cnt started"
@@ -203,6 +204,7 @@ function teardown() {
echo "Running teardown"
local rc=0
journalctl -x -t kata --since "10 minutes ago" || true
local pid
for bin in containerd-nydus-grpc nydusd; do
pid=$(pidof $bin)