After #12857, the VFIO-AP hotplug test fails because runtime-rs
unconditionally removes all /dev/vfio/* devices from the OCI spec
before sending it to the kata agent. The agent then rejects
the container creation with:
```
Missing devices in OCI spec
```
Filter devices from the OCI spec conditionally based on the
vfio_mode configuration (e.g. guest-kernel). Also factor the
filtering logic out into a separate function and add unit tests.
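A minimal sketch of the conditional filter, assuming a `VfioMode` enum and a `filter_vfio_devices` helper (both names illustrative, not the actual runtime-rs API): the spec is only stripped of `/dev/vfio/*` entries when the mode tells the guest kernel to drive the device itself.

```rust
// Hypothetical sketch: drop /dev/vfio/* entries from the spec's device
// list only when vfio_mode means the agent must not see them.
#[derive(PartialEq)]
enum VfioMode {
    GuestKernel, // guest kernel drives the device; filter /dev/vfio/*
    Vfio,        // device stays a VFIO group in the spec; keep entries
}

fn filter_vfio_devices(devices: Vec<String>, mode: &VfioMode) -> Vec<String> {
    if *mode != VfioMode::GuestKernel {
        // Leave the spec untouched so the agent can validate the devices.
        return devices;
    }
    devices
        .into_iter()
        .filter(|path| !path.starts_with("/dev/vfio/"))
        .collect()
}
```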
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
Many Rust source files in the runtime-rs runtime, its libraries, and the
agent are unformatted, so a plain `cargo fmt --all` leaves uncommitted
changes behind in the tree.
Format the code to remove this noise.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Add a new runtime-rs configuration template that combines the NVIDIA GPU
cold-plug stack with Intel TDX confidential guest support. This is the
runtime-rs counterpart of the Go runtime's configuration-qemu-nvidia-gpu-tdx
template.
The template merges the GPU NV settings (VFIO cold-plug, Pod Resources API,
NV-specific kernel/image/firmware, extended timeouts) with TDX confidential
guest settings (confidential_guest, OVMF.inteltdx.fd firmware, TDX Quote
Generation Service socket, confidential NV kernel and image).
The Makefile is updated with the new config file registration and the
FIRMWARETDVFPATH_NV variable pointing to OVMF.inteltdx.fd.
Also removes a stray tdx_quote_generation_service_socket_port setting
from the SNP GPU template where it did not belong.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Add a new runtime-rs configuration template that combines the NVIDIA GPU
cold-plug stack with AMD SEV-SNP confidential guest support. This is the
runtime-rs counterpart of the Go runtime's configuration-qemu-nvidia-gpu-snp
template.
The template merges the GPU NV settings (VFIO cold-plug, Pod Resources API,
NV-specific kernel/image/firmware, extended timeouts) with the SNP
confidential guest settings (confidential_guest, sev_snp_guest, SNP ID
block/auth, guest policy, AMDSEV.fd firmware, confidential NV kernel and
image).
The Makefile is updated with the new config file registration, the
CONFIDENTIAL_NV image/kernel variables, and FIRMWARESNPPATH_NV pointing
to AMDSEV.fd.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Add a QEMU configuration template for the NVIDIA GPU runtime-rs shim,
mirroring the Go runtime's configuration-qemu-nvidia-gpu.toml.in. The
template uses _NV-suffixed Makefile variables for kernel, image, and
verity params so the GPU-specific rootfs and kernel are selected at
build time.
Wire the new config into the runtime-rs Makefile: define
FIRMWAREPATH_NV with arch-specific OVMF/AAVMF paths (matching the Go
runtime's PR #12780), add EDK2_NAME for x86_64, and register the config
in CONFIGS/CONFIG_PATHS/SYSCONFIG_PATHS so it gets installed alongside
the other runtime-rs configurations.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Extend the in-guest agent's VFIO device handler to support the cold-plug
flow. When the runtime cold-plugs a GPU before the VM boots, the agent
needs to bind the device to the vfio-pci driver inside the guest and
set up the correct /dev/vfio/ group nodes so the workload can access
the GPU.
This updates the device discovery logic to handle the PCI topology that
QEMU presents for cold-plugged vfio-pci devices and ensures the IOMMU
group is properly resolved from the guest's sysfs.
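The bind step can be sketched with the standard `driver_override` + `drivers_probe` sysfs mechanism; taking the sysfs root as a parameter (instead of hardcoding `/sys`) keeps the helper testable. This is an assumed shape, not the agent's actual handler, which additionally creates the `/dev/vfio/<group>` node afterwards.

```rust
use std::fs;
use std::io;
use std::path::Path;

// Hedged sketch: pin the driver choice for a device, then ask the PCI
// core to (re)probe it so vfio-pci claims it.
fn bind_to_vfio_pci(sysfs_root: &Path, bdf: &str) -> io::Result<()> {
    fs::write(
        sysfs_root.join("bus/pci/devices").join(bdf).join("driver_override"),
        "vfio-pci",
    )?;
    fs::write(sysfs_root.join("bus/pci/drivers_probe"), bdf)
}
```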
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
NVIDIA GPUs often have an HDA audio controller (PCI class 0x0403) in the
same IOMMU group. This device should not be passed through to the guest,
just like Host and PCI bridges.
Change filter_bridge_device() to accept a slice of PCI class bitmasks
and add 0x0403 (audio) to the ignore list alongside 0x0600 (host/PCI
bridge). This matches the Go runtime fix from NVIDIA/kata-containers#26.
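The reworked filter can be sketched as follows, assuming the 24-bit class register from the device's sysfs `class` attribute is compared on its class/subclass half; the exact matching convention is an assumption, as is the function shape.

```rust
// 16-bit class/subclass values to skip during passthrough.
const PCI_CLASS_BRIDGE: u32 = 0x0600; // host bridge
const PCI_CLASS_AUDIO: u32 = 0x0403;  // HDA audio controller

// Returns true when the device should NOT be passed through.
fn filter_bridge_device(class: u32, ignored: &[u32]) -> bool {
    // `class` is the full 24-bit register; drop the prog-if byte before
    // comparing against the class/subclass bitmasks.
    ignored.iter().any(|&c| (class >> 8) == c)
}
```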
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Use BlockCfgModern for rawblock volumes when the hypervisor supports it,
passing logical and physical sector sizes from the volume metadata.
In the container manager, clear Linux.Resources fields (Pids, BlockIO,
Network) that genpolicy expects to be null, and filter VFIO character
devices from Linux.Devices to avoid policy rejection.
Update Dragonball's inner_device to handle the DeviceType::VfioModern
variant in its no-op match arm.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Extend the resource manager to handle VfioModern and BlockModern device
types when building the agent's device list and storage list. For VFIO
modern devices, the manager resolves the container path and sets the
agent Device.id to match what genpolicy expects.
Rework CDI device annotation handling in container_device.rs:
- Strip the "vfio" prefix from device names when building CDI annotation
keys (cdi.k8s.io/vfio0, cdi.k8s.io/vfio1, etc.)
- Remove the per-device index suffix that caused policy mismatches
- Add iommufd cdev path support alongside legacy VFIO group paths
Update the vfio driver to detect iommufd cdev vs legacy group from
the CDI device node path.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Query the kubelet Pod Resources API during sandbox setup to discover
which GPU devices have been allocated to the pod. When cold_plug_vfio
is enabled, the sandbox resolves CDI device specs, extracts host PCI
addresses and IOMMU groups from sysfs, and creates VfioModernCfg
device entries that get passed to the hypervisor for cold-plug.
Add pod-resources and cdi crate dependencies to the runtimes and
virt_container workspace members.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Implement add_device() and remove_device() support for
DeviceType::VfioModern and DeviceType::BlockModern in the QEMU inner
hypervisor layer.
For cold-plug (before VM boot): VfioDeviceConfig/VfioDeviceGroup
structs are constructed from the device's resolved PCI address, IOMMU
group, and bus assignment, then appended to the QEMU command line via
cmdline_generator.
Block devices use VirtioBlkDevice with the modern config's sector size
fields and are always cold-plugged onto the command line.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Bump QMP connection timeout from 10s to 30s and initial read timeout
from 250ms to 5s to accommodate the longer initialization time when
VFIO devices are cold-plugged (IOMMU domain setup and device reset
can be slow for GPUs).
Re-export cmdline_generator types from qemu/mod.rs for downstream use.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Add QEMU command-line parameter types for VFIO device cold-plug:
- ObjectIommufd: /dev/iommu object for iommufd-backed passthrough
- PCIeVfioDevice: vfio-pci device on a PCIe root port or switch port,
supporting both legacy VFIO group and iommufd cdev backends
- FWCfgDevice: firmware config device for fw_cfg blob injection
- VfioDeviceBase/VfioDeviceConfig/VfioDeviceGroup: high-level wrappers
that compose the above into complete QEMU argument sets, resolving
IOMMU groups, device nodes, and per-device fw_cfg entries
Refactor existing cmdline structs (BalloonDevice, VirtioNetDevice,
VirtioBlkDevice, etc.) to use a shared devices_to_params() helper
and align the ToQemuParams implementations.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Extend PCIeTopology to support cold-plug port reservation and release
for VFIO devices. New fields track the topology mode (NoPort, RootPort,
SwitchPort), whether cold-plug dynamic expansion is enabled, and a map
of reserved bus assignments per device.
PCIeTopology::new() now infers the mode from the configured root-port
and switch-port counts, pre-seeds the port structures, and makes
add_root_ports_on_bus() idempotent so that PortDevice::attach can
safely call it again after the topology has already been initialized.
New methods:
- reserve_bus_for_device: allocate a free root port or switch downstream
port for a device, expanding the port map when cold_plug is enabled
- release_bus_for_device: free the previously reserved port
- find_free_root_port / find_free_switch_down_port: internal helpers
- release_root_port / release_switch_down_port: internal helpers
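The reserve/release pair can be sketched over a flat pool of free port ids; the real topology distinguishes root ports from switch downstream ports and can grow the pool when cold-plug expansion is enabled. Struct and field names here are illustrative.

```rust
use std::collections::HashMap;

// Minimal reservation model: a pool of free ports plus a map of which
// device holds which port.
struct Topology {
    free_ports: Vec<u32>,
    reserved: HashMap<String, u32>, // device id -> port
}

impl Topology {
    fn reserve_bus_for_device(&mut self, dev: &str) -> Option<u32> {
        let port = self.free_ports.pop()?;
        self.reserved.insert(dev.to_string(), port);
        Some(port)
    }

    fn release_bus_for_device(&mut self, dev: &str) {
        if let Some(port) = self.reserved.remove(dev) {
            self.free_ports.push(port);
        }
    }
}
```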
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Add DeviceConfig::VfioModernCfg and DeviceConfig::BlockCfgModern
variants so the device manager can accept creation requests for the
modern VFIO and block drivers introduced in the previous commits.
Wire find_device() to look up VfioModern devices by iommu_group_devnode
and BlockModern devices by path_on_host. Add create_block_device_modern()
for BlockConfigModern with the same driver-option normalization and
virt-path assignment as the legacy path.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Add a modern block device driver using the Arc<Mutex> pattern for
interior mutability, matching the VfioDeviceModern approach. The driver
implements the Device trait with attach/detach/hotplug lifecycle
management, and supports BlockConfigModern with logical and physical
sector size fields.
Add the DeviceType::BlockModern enum variant so the driver compiles.
The device_manager and hypervisor cold-plug wiring follow in subsequent
commits.
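The "modern" driver shape described above can be sketched as a cloneable handle wrapping mutable device state behind `Arc<Mutex>`, so every clone observes attach/detach transitions. Type and method names are illustrative, not the actual driver API.

```rust
use std::sync::{Arc, Mutex};

struct BlockState {
    attached: bool,
    logical_block_size: u32,
    physical_block_size: u32,
}

// Cloneable handle; all clones share the same interior state.
#[derive(Clone)]
struct BlockDeviceModernHandle(Arc<Mutex<BlockState>>);

impl BlockDeviceModernHandle {
    fn new(logical: u32, physical: u32) -> Self {
        Self(Arc::new(Mutex::new(BlockState {
            attached: false,
            logical_block_size: logical,
            physical_block_size: physical,
        })))
    }
    fn attach(&self) { self.0.lock().unwrap().attached = true; }
    fn detach(&self) { self.0.lock().unwrap().attached = false; }
    fn is_attached(&self) -> bool { self.0.lock().unwrap().attached }
}
```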
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Add the VfioDeviceModern driver for VFIO device passthrough in
runtime-rs. The driver handles device discovery through sysfs, detects
whether the host uses iommufd cdev or legacy VFIO group interfaces,
resolves PCI BDF addresses and IOMMU groups, and implements the Device
and PCIeDevice traits for hypervisor integration.
The module is structured as:
- core.rs: sysfs discovery, BDF parsing, IOMMU group resolution,
device-node path logic for both iommufd cdev and legacy group paths
- device.rs: VfioDeviceModern/VfioDeviceModernHandle types, Device
and PCIeDevice trait implementations
- mod.rs: host capability detection (iommufd vs legacy), backend
selection logic
The DeviceType::VfioModern enum variant and stub PCIeTopology methods
(reserve_bus_for_device, release_bus_for_device) are added so the
driver compiles; full topology wiring follows in a subsequent commit.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
The vsock connect loop previously ran the blocking connect(2) syscall
directly on a tokio async worker thread, which could stall other async
tasks. Move the socket creation and connect(2) call into
spawn_blocking so the async runtime remains responsive.
Replace the fixed-interval retry loop with an Instant-based deadline
and bounded exponential backoff (10ms-500ms, doubling each attempt).
This avoids hammering the vsock endpoint during slow VM boots while
still converging quickly once the guest agent is ready.
Also improve log messages to include attempt counts and remaining time.
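The timing half of the change can be reduced to a pure function: bounded exponential backoff between 10ms and 500ms, doubling on each attempt. Only the schedule is reproduced here; the actual code additionally runs connect(2) inside tokio's spawn_blocking and checks an Instant-based deadline between attempts.

```rust
use std::time::Duration;

const MIN_BACKOFF: Duration = Duration::from_millis(10); // first retry delay
const MAX_BACKOFF: Duration = Duration::from_millis(500);

// Double the delay on each attempt, capped at MAX_BACKOFF.
fn next_backoff(current: Duration) -> Duration {
    (current * 2).min(MAX_BACKOFF)
}
```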
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
The vfio-ioctls 0.6.0 crate changed the vfio_dma_map signature: the
host address parameter is now a raw pointer (*mut u8) instead of u64,
and the size parameter is usize instead of u64. Since the kernel uses
the host address to set up DMA mappings to physical memory — and the
caller must guarantee the memory behind that pointer remains valid for
the lifetime of the mapping — upstream marked vfio_dma_map as unsafe fn.
Wrap vfio_dma_map calls in unsafe blocks and adjust the type casts
accordingly. vfio_dma_unmap only needed the usize cast for the size
parameter (it does not take a host address, so it remains safe).
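The call-site change looks roughly like this; the mock below only mirrors the 0.6.0 parameter types (raw pointer and usize), not the real vfio-ioctls signature or parameter order, and validation stands in for the actual VFIO_IOMMU_MAP_DMA ioctl.

```rust
// Mock of the new-style API: host address as *mut u8, size as usize.
unsafe fn vfio_dma_map(_iova: u64, size: usize, host_addr: *mut u8) -> Result<(), ()> {
    if host_addr.is_null() || size == 0 { Err(()) } else { Ok(()) }
}

fn map_region(iova: u64, host_addr: u64, size: u64) -> Result<(), ()> {
    // The caller guarantees the memory behind host_addr stays valid for
    // the lifetime of the mapping, hence the unsafe block.
    unsafe { vfio_dma_map(iova, size as usize, host_addr as *mut u8) }
}
```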
Bump workspace dependencies:
- vfio-bindings 0.6.1 -> 0.6.2
- vfio-ioctls 0.5.0 -> 0.6.0
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
The VFIO cold-plug path needs to resolve a PCI device's sysfs address
from its /dev/vfio/ group or iommufd cdev node. Extend the PCI helpers
in kata-sys-util to support this: add a function that walks
/sys/bus/pci/devices to find a device by its IOMMU group, and expose the
guest BDF that the QEMU command line will reference.
These helpers are consumed by the runtime-rs hypervisor crate when
building VFIO device descriptors for the QEMU command line.
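The lookup can be sketched as a walk over the pci devices directory, returning the BDF whose `iommu_group` symlink ends in the requested group number. Taking the root as a parameter (instead of hardcoding `/sys/bus/pci/devices`) keeps the sketch testable; the real kata-sys-util helper may differ.

```rust
use std::fs;
use std::path::Path;

fn find_device_by_iommu_group(devices_root: &Path, group: u32) -> Option<String> {
    let group = group.to_string();
    for entry in fs::read_dir(devices_root).ok()? {
        let entry = entry.ok()?;
        // Each device dir has an iommu_group symlink whose final path
        // component is the group number.
        if let Ok(target) = fs::read_link(entry.path().join("iommu_group")) {
            if target.file_name().and_then(|n| n.to_str()) == Some(group.as_str()) {
                return entry.file_name().into_string().ok();
            }
        }
    }
    None
}
```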
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
The Go runtime already exposes a [runtime] pod_resource_api_sock option
that tells the shim where to find the kubelet Pod Resources API socket.
The runtime-rs VFIO cold-plug code needs the same setting so it can
query assigned GPU devices before the VM starts.
Add the field to RuntimeConfig and wire it through deserialization so
that configuration-*.toml files can set it.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Add a gRPC client crate that speaks the kubelet PodResourcesLister
service (v1). The runtime-rs VFIO cold-plug path needs this to discover
which GPU devices the kubelet has assigned to a pod so they can be
passed through to the guest before the VM boots.
The crate is intentionally kept minimal: it wraps the upstream
pod_resources.proto, exposes a Unix-domain-socket client, and
re-exports the generated types.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
`make vendor` isn't required anymore. People who need vendored code should
use the `tools/packaging/release/generate_vendor.sh` script instead.
Assisted-by: Claude AI
Signed-off-by: Greg Kurz <groug@kaod.org>
Now shipped in the vendored code tarball.
Drop the git tree status check since it isn't needed anymore.
Also stop building with `-mod=vendor`. This requires exposing
GOMODCACHE, as suggested by Fabiano Fidêncio.
Signed-off-by: Greg Kurz <groug@kaod.org>
When not escaped, the `.` character in a regular expression matches
any character. This causes `CopyFileRequest is blocked by policy`
for paths like:
/run/kata-containers/shared/containers/b8d668e556bc5daf7454de26496a419128d182c5c16d5af6ad03a9e2593f96d4-c9126bd2cf103ae6-secrets/rhsm/ca
In this case, the match is `/ca`.
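The fix boils down to escaping regex metacharacters when a literal path is embedded in a policy pattern; `escape_path` below is an illustrative helper, not the actual genpolicy function.

```rust
// Escape regex metacharacters so a path matches literally: an unescaped
// '.' would otherwise match any character.
fn escape_path(path: &str) -> String {
    path.chars()
        .flat_map(|c| {
            if ".+*?()[]{}|^$\\".contains(c) {
                vec!['\\', c]
            } else {
                vec![c]
            }
        })
        .collect()
}
```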
Signed-off-by: Greg Kurz <groug@kaod.org>
1. Ignore PodAffinity's preferredDuringSchedulingIgnoredDuringExecution.
2. Ignore additional PodAffinityTerm fields.
3. Add basic tests for the new fields.
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
Upgrade netlink-packet-route and rtnetlink so IFLA_INET6_CONF matches the
kernel's 240-byte layout (DEVCONF_FORCE_FORWARDING). Adapt to API changes:
NeighbourAttribute::LinkLayerAddress and bool MulticastSnooping.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
NetworkPair::new() always constructed the virtual interface name as
"eth{idx}" and looked it up in the network namespace. This works for
regular veth endpoints created by CNI (which names them eth0, eth1,
etc.), but fails for interfaces injected by Multus with different
names (e.g. "net1" for mlx5 Scalable Functions).
The `name` parameter was only applied after the lookup to override
the stored name, which is too late — the lookup already failed with
"No such device (os error 19)".
Use the provided name directly for the lookup when it is non-empty,
falling back to "eth{idx}" only when no name is given. This also
removes the now-redundant post-creation name override.
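The name selection reduces to a small helper, sketched here with an assumed signature: prefer the name supplied by the network plumbing (e.g. Multus' "net1") and fall back to the "eth{idx}" convention only when none was given.

```rust
// Pick the interface name to look up in the netns.
fn lookup_iface_name(provided: &str, idx: u32) -> String {
    if provided.is_empty() {
        format!("eth{idx}")
    } else {
        provided.to_string()
    }
}
```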
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
On s390x, QEMU uses the CCW bus instead of PCI. The network device
hotplug path was hardcoded to find a PCI slot, which fails with
"no free slots on PCI bridges" on s390x.
Add CCW support to `hotplug_network_device`: when running on a
native CCW bus, allocate a CCW subchannel address and use `devno`
instead of PCI `bus`/`addr`/`vectors`.
Additionally, after hotplugging a network device, the guest kernel
needs time to probe the CCW device before the network interface
appears. Add a retry loop (up to 10 attempts, 100ms apart) to
`handle_interfaces` so that `update_interface` succeeds once the
guest has created the link.
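The retry loop can be sketched as a generic helper (an assumed shape, not the actual `handle_interfaces` code): poll the operation until it succeeds or the attempt budget is exhausted, sleeping between tries.

```rust
use std::thread::sleep;
use std::time::Duration;

// Retry `op` up to `attempts` times, `delay` apart, returning the first
// success or the last error.
fn retry<T, E>(
    mut op: impl FnMut() -> Result<T, E>,
    attempts: u32,
    delay: Duration,
) -> Result<T, E> {
    let mut last = op();
    for _ in 1..attempts {
        if last.is_ok() {
            break;
        }
        sleep(delay);
        last = op();
    }
    last
}
```

In the commit's scenario this would be called with 10 attempts and a 100ms delay around `update_interface`.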
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
With the workspace unification we've bumped anyhow
from 1.0.31 to 1.0.102, so update the code to reflect that
error implements `Display` now in the newer version.
Assisted-by: IBM Bob
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Add trace-forwarder to be a workspace member to simplify the
dependency management.
Assisted-by: IBM Bob
Signed-off-by: stevenhorsman <steven@uk.ibm.com>