The interface "VhostUserDevice" has duplicate functions and fields with
Device, so we can merge them into one interface and manage them with one
group of interfaces.
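A minimal sketch of the merged interface, under assumed names (not the actual virtcontainers definitions):
```
// Sketch only: names are assumptions, not the actual virtcontainers definitions.
package api

// DeviceReceiver is whatever can accept a device, e.g. the hypervisor.
type DeviceReceiver interface {
	AppendDevice(Device) error
}

// Device is the single merged interface; vhost-user devices implement it
// directly instead of a separate VhostUserDevice interface.
type Device interface {
	Attach(DeviceReceiver) error
	Detach(DeviceReceiver) error
	DeviceType() string
}
```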
Signed-off-by: Wei Zhang <zhangwei555@huawei.com>
Fixes: #50
Previously, devices were created with the device manager and later
attached to the hypervisor with "device.Attach()". This worked, but
there was no way to track a reference count per device: plugging the
same device into the hypervisor twice really inserted it twice, even
though it only needs to be inserted once and then shared by multiple
users.
Using the device manager as the consolidated entry point for device management
gives us a way to handle many "references" to a single device, because the
manager stores every device and remembers its use count.
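A reference-counted attach/detach could look roughly like the sketch below; the type and method names are assumptions, not the real implementation:
```
// Illustrative only: a reference-counted attach/detach with assumed names.
package manager

import (
	"fmt"
	"sync"
)

type device interface {
	Attach() error
	Detach() error
}

type deviceManager struct {
	sync.Mutex
	devices  map[string]device
	refCount map[string]int // device ID -> number of users
}

// AttachDevice only really plugs the device on the first reference;
// later calls just bump the counter.
func (dm *deviceManager) AttachDevice(id string) error {
	dm.Lock()
	defer dm.Unlock()

	d, ok := dm.devices[id]
	if !ok {
		return fmt.Errorf("unknown device %q", id)
	}
	if dm.refCount[id] == 0 {
		if err := d.Attach(); err != nil {
			return err
		}
	}
	dm.refCount[id]++
	return nil
}

// DetachDevice is the mirror image: only the last user triggers a real unplug.
func (dm *deviceManager) DetachDevice(id string) error {
	dm.Lock()
	defer dm.Unlock()

	d, ok := dm.devices[id]
	if !ok || dm.refCount[id] == 0 {
		return fmt.Errorf("device %q is not attached", id)
	}
	dm.refCount[id]--
	if dm.refCount[id] == 0 {
		return d.Detach()
	}
	return nil
}
```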
Signed-off-by: Wei Zhang <zhangwei555@huawei.com>
There is a relationship between the maximum number of vCPUs and the
memory footprint: if the QEMU maxcpus option and the kernel nr_cpus
cmdline argument are large, the memory footprint is large as well.
This only occurs when CPU hotplug support is enabled in the kernel,
probably because the kernel needs to allocate resources to watch all
sockets waiting for a CPU to be connected (ACPI event).
For example:
```
+---------------+-----------------------+
| Kernel config | Memory Footprint (KB) |
+---------------+-----------------------+
| NR_CPUS=240   | 186501                |
+---------------+-----------------------+
| NR_CPUS=8     | 110684                |
+---------------+-----------------------+
```
In order not to affect CPU hotplug, while still allowing users to run containers
with as many vCPUs as there are physical CPUs, this patch mitigates the
large memory footprint by using the actual number of physical CPUs as the
maximum number of vCPUs for each container when `default_maxvcpus` is <= 0 in
the runtime configuration file; otherwise `default_maxvcpus` is used as the
maximum number of vCPUs.
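The fallback described above amounts to something like the following sketch; `maxVCPUs` is a hypothetical helper, and `runtime.NumCPU()` stands in for "the actual number of physical CPUs":
```
// Sketch of the default_maxvcpus fallback, with assumed names.
package main

import (
	"fmt"
	"runtime"
)

// maxVCPUs picks the maximum number of vCPUs for a container.
func maxVCPUs(defaultMaxVCPUs int) int {
	if defaultMaxVCPUs <= 0 {
		// Don't make the guest kernel reserve hotplug resources
		// for CPUs that can never be plugged in.
		return runtime.NumCPU()
	}
	return defaultMaxVCPUs
}

func main() {
	fmt.Println(maxVCPUs(0))  // host CPU count
	fmt.Println(maxVCPUs(16)) // value from default_maxvcpus
}
```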
Before this patch, a container with 256MB of RAM:
```
              total        used        free      shared  buff/cache   available
Mem:           195M         40M        113M         26M         41M        112M
Swap:            0B          0B          0B
```
With this patch:
```
              total        used        free      shared  buff/cache   available
Mem:           236M         11M        188M         26M         36M        186M
Swap:            0B          0B          0B
```
Fixes: #295
Signed-off-by: Julio Montes <julio.montes@intel.com>
* Move the makeNameID() func to the virtcontainers/utils file, as it's a generic
  function for making a name and ID.
* Move bindDevicetoVFIO() and bindDevicetoHost() to the vfio driver package.
Signed-off-by: Zhang Wei <zhangwei555@huawei.com>
Fixes: #50
This is done to decouple the device management code from other parts.
It splits device.go into several directories and files:
```
virtcontainers/device
├── api
│   └── interface.go
├── config
│   └── config.go
├── drivers
│   ├── block.go
│   ├── generic.go
│   ├── utils.go
│   ├── vfio.go
│   ├── vhost_user_blk.go
│   ├── vhost_user.go
│   ├── vhost_user_net.go
│   └── vhost_user_scsi.go
└── manager
    ├── manager.go
    └── utils.go
```
* `api` contains the interface definitions for device management: upper-level callers
  should import and use the interfaces, and the lower levels should implement them.
  It is the bridge between device drivers and callers.
* `config` contains structured exported data.
* `drivers` contains the specific device drivers, including block, VFIO and vhost-user
  devices.
* `manager` exposes an external management package with a `DeviceManager` (a usage
  sketch follows this list).
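The sketch below is conceptual only; the type names stand in for whatever the real `api`, `config` and `manager` packages export, and are assumptions rather than the actual API:
```
// Conceptual sketch of how callers, config and drivers fit together.
package main

import "fmt"

// api: what callers program against.
type Device interface {
	Attach() error
}

type DeviceManager interface {
	NewDevice(DeviceInfo) (Device, error)
}

// config: plain exported data shared by callers and drivers.
type DeviceInfo struct {
	HostPath string
	DevType  string // "b" for block, "c" for character, ...
}

// drivers/manager side: a trivial implementation for illustration.
type blockDevice struct{ info DeviceInfo }

func (b *blockDevice) Attach() error {
	fmt.Printf("attaching block device %s\n", b.info.HostPath)
	return nil
}

type deviceManager struct{}

func (dm *deviceManager) NewDevice(info DeviceInfo) (Device, error) {
	return &blockDevice{info: info}, nil
}

func main() {
	var dm DeviceManager = &deviceManager{}
	dev, _ := dm.NewDevice(DeviceInfo{HostPath: "/dev/sda", DevType: "b"})
	_ = dev.Attach()
}
```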
Signed-off-by: Zhang Wei <zhangwei555@huawei.com>
We need to store the bridge address in the state so it can be used
to assign addresses to devices attached to the bridge.
So we need to make sure the bridge pointer is actually assigned
the address.
Signed-off-by: Archana Shinde <archana.m.shinde@intel.com>
Pass the slot address while attaching bridges. This is needed
to determine the PCI(e) address of devices that are attached
to the bridge.
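As a rough illustration, assuming an address format of "<bridge-slot>/<device-slot>" (the helper below is hypothetical, not the exact virtcontainers code):
```
// Hypothetical sketch: compose a device's PCI path from the bridge slot
// on the root bus and the device's slot on that bridge.
package main

import "fmt"

func pciPath(bridgeSlot, deviceSlot uint32) string {
	return fmt.Sprintf("%02x/%02x", bridgeSlot, deviceSlot)
}

func main() {
	// e.g. bridge at slot 2, device at slot 3 -> "02/03"
	fmt.Println(pciPath(2, 3))
}
```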
Fixes: #210
Signed-off-by: Archana Shinde <archana.m.shinde@intel.com>
When imported, the vc files carried the 'full style' Apache
license text, but the standard for Kata is to use the SPDX style.
Update the relevant files to SPDX.
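For reference, an SPDX-style header in a Go source file looks roughly like this (copyright holder and year vary per file):
```
// Copyright (c) 2018 Intel Corporation
//
// SPDX-License-Identifier: Apache-2.0
//
```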
Fixes: #227
Signed-off-by: Graham whaley <graham.whaley@intel.com>
Add a hypervisor configuration option to specify whether IO should
be handled in a separate thread. Add iothread support for
virtio-scsi for now. Since we attach all SCSI drives to the
same SCSI controller, all the drives will be handled in a single separate
IO thread, which should still give better performance.
Going forward we need to assess, based on further performance analysis,
whether we should add more controllers, attach an iothread to each of
them, and distribute the drives among the SCSI controllers.
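A sketch of what the new knob amounts to; `EnableIOThreads` and `qemuSCSIArgs` are assumed names, while the QEMU flags shown are the usual way to bind a virtio-scsi controller to a dedicated iothread:
```
// Sketch only: assumed configuration field and helper names.
package main

import "fmt"

type HypervisorConfig struct {
	EnableIOThreads bool
}

// qemuSCSIArgs returns the extra QEMU arguments for one virtio-scsi
// controller, optionally served by its own iothread.
func qemuSCSIArgs(cfg HypervisorConfig) []string {
	if !cfg.EnableIOThreads {
		return []string{"-device", "virtio-scsi-pci,id=scsi0"}
	}
	return []string{
		"-object", "iothread,id=iothread0",
		"-device", "virtio-scsi-pci,id=scsi0,iothread=iothread0",
	}
}

func main() {
	fmt.Println(qemuSCSIArgs(HypervisorConfig{EnableIOThreads: true}))
}
```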
Signed-off-by: Archana Shinde <archana.m.shinde@intel.com>