Add configuration to decide the number of memory slots that will be used in a VM
- This will limit the number of times that memory can be hotplugged.
- Use the number of memory slots provided by the user.
- tests: align struct
cli: kata-env: Add memory slots info.
- Show the number of memory slots to be added to the VM.
```diff
[Hypervisor]
MachineType = "pc"
Version = "QEMU ..."
Path = "/opt/kata/bin/qemu-system-x86_64"
BlockDeviceDriver = "virtio-scsi"
Msize9p = 8192
+ MemorySlots = 10
Debug = false
UseVSock = false
```
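For illustration, a minimal sketch of how a slot count like the one shown above can bound memory hotplug; the struct and helper below are simplified assumptions, not the runtime's actual types:
```go
package example

import "fmt"

// hypervisorConfig and vm are simplified stand-ins for the real types.
type hypervisorConfig struct {
	MemorySlots uint32 // slots exposed to the guest for memory hotplug
}

type vm struct {
	config    hypervisorConfig
	usedSlots uint32
}

// hotplugMemory refuses to add memory once every slot is in use, which is
// why the slot count limits how many times memory can be hotplugged.
func (v *vm) hotplugMemory(sizeMB uint32) error {
	if v.usedSlots >= v.config.MemorySlots {
		return fmt.Errorf("no free memory slots (max %d)", v.config.MemorySlots)
	}
	v.usedSlots++
	fmt.Printf("hotplugged %dMB into slot %d\n", sizeMB, v.usedSlots)
	return nil
}
```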
Fixes: #751
Signed-off-by: Jose Carlos Venegas Munoz <jose.carlos.venegas.munoz@intel.com>
Add `disable_vhost_net` option to enable or disable the use of
vhost_net. Vhost_net can improve network performance.
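For illustration, a minimal sketch of how such a flag could be mapped onto the QEMU tap netdev argument (`vhost=on` is a standard QEMU option; the helper name is an assumption):
```go
package example

import "fmt"

// tapNetdevArg builds the -netdev value for a tap interface, enabling the
// in-kernel vhost-net data plane unless it has been disabled in the config.
func tapNetdevArg(id, ifname string, disableVhostNet bool) string {
	arg := fmt.Sprintf("tap,id=%s,ifname=%s", id, ifname)
	if !disableVhostNet {
		arg += ",vhost=on"
	}
	return arg
}
```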
Signed-off-by: Ruidong Cao <caoruidong@huawei.com>
Now that we only use the hypervisor config to set these values, they
are not overridden by other configs. So drop the default prefix.
Signed-off-by: Peng Tao <bergwolf@gmail.com>
We can just use the hypervisor config to specify the memory size
of a guest. There is no need to maintain an extra place just
for the memory size.
Fixes: #692
Signed-off-by: Peng Tao <bergwolf@gmail.com>
Add additional `context.Context` parameters and `struct` fields to allow
trace spans to be created by the `virtcontainers` internal functions,
objects and sub-packages.
Note that not every function is traced; we can add more traces as
desired.
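For example, a sketch of the pattern, assuming an OpenTracing-style tracer (the function name is illustrative):
```go
package example

import (
	"context"

	opentracing "github.com/opentracing/opentracing-go"
)

// createSandbox shows how a traced virtcontainers-style function can derive
// a child span from the caller's context and pass the new context down.
func createSandbox(ctx context.Context) error {
	span, ctx := opentracing.StartSpanFromContext(ctx, "createSandbox")
	defer span.Finish()

	// ... real work goes here, passing ctx to sub-calls so they can
	// create their own child spans ...
	_ = ctx
	return nil
}
```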
Fixes: #566
Signed-off-by: James O. D. Hunt <james.o.hunt@intel.com>
We need this configuration due to a limitation in the SeaBIOS
firmware in handling hotplug for PCI devices with large BARs.
Long term, this needs to be fixed in the firmware.
Fixes: #594
Signed-off-by: Archana Shinde <archana.m.shinde@intel.com>
When the hypervisor option `use_vsock` is true, the runtime will check for vsock
support. If vsock is supported, no proxy will be used and the shims
will connect to the VM using vsock. This flag is true by default, so the runtime
will use vsock when possible and no proxy will be started.
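A minimal sketch of the kind of check meant here, assuming vsock support is detected by probing the vhost-vsock device (the helper name is illustrative):
```go
package example

import "os"

// supportsVsock reports whether the host exposes /dev/vhost-vsock, which is
// what the shims need in order to reach the VM without a proxy.
func supportsVsock() bool {
	f, err := os.OpenFile("/dev/vhost-vsock", os.O_RDWR, 0)
	if err != nil {
		return false
	}
	f.Close()
	return true
}
```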
Fixes: #383
Signed-off-by: Jose Carlos Venegas Munoz <jose.carlos.venegas.munoz@intel.com>
Signed-off-by: Julio Montes <julio.montes@intel.com>
Add a `use_vsock` option to enable or disable the use of vsock
for communication between the host and the guest.
Signed-off-by: Jose Carlos Venegas Munoz <jose.carlos.venegas.munoz@intel.com>
Signed-off-by: Julio Montes <julio.montes@intel.com>
Each time a sandbox structure is created, we ensure s.Release()
is called. Then we can keep the qmp connection as long as Sandbox
pointer is alive.
All VC interfaces are still stateless as s.Release() is called before
each API returns.
OTOH, for VCSandbox APIs, FetchSandbox() must be paired with s.Release,
the same as before.
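A sketch of the pairing described above; the types and helper here are simplified stand-ins, not the exact virtcontainers API:
```go
package example

type sandbox struct{ id string }

// Release drops per-sandbox resources such as the cached QMP connection.
func (s *sandbox) Release() error { return nil }

func fetchSandbox(id string) (*sandbox, error) { return &sandbox{id: id}, nil }

func withSandbox(id string) error {
	s, err := fetchSandbox(id)
	if err != nil {
		return err
	}
	// Pair every fetch with Release so the QMP connection stays alive only
	// while the Sandbox pointer is in use.
	defer s.Release()

	// ... operate on the sandbox here ...
	return nil
}
```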
Fixes: #500
Signed-off-by: Peng Tao <bergwolf@gmail.com>
1. Support the QEMU migration save operation.
2. Set up VM templating parameters per hypervisor config.
3. Create the VM storage path when it does not exist. This can happen when
an empty guest is created without a sandbox (see the sketch below).
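A minimal sketch of point 3, assuming a /run/vc/vm-style base directory (the path and helper name are assumptions):
```go
package example

import (
	"os"
	"path/filepath"
)

// ensureVMStoragePath creates the per-VM storage directory if it does not
// exist yet, e.g. when an empty guest is created without a sandbox.
func ensureVMStoragePath(id string) (string, error) {
	dir := filepath.Join("/run/vc/vm", id) // assumed base path
	if err := os.MkdirAll(dir, 0750); err != nil {
		return "", err
	}
	return dir, nil
}
```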
Signed-off-by: Peng Tao <bergwolf@gmail.com>
A hypervisor implementation does not need to depend on a sandbox
structure. Decouple them in preparation for the VM factory.
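A sketch of the resulting shape: the hypervisor receives an ID and a config instead of a sandbox pointer (method names are simplified assumptions):
```go
package example

type hypervisorConfig struct{ /* ... */ }

// hypervisor no longer depends on a sandbox structure, so the vm factory
// can create guests that have no sandbox attached yet.
type hypervisor interface {
	createVM(id string, config *hypervisorConfig) error
	startVM() error
}
```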
Signed-off-by: Peng Tao <bergwolf@gmail.com>
Don't fail if a new container with a CPU constraint is added to
a pod and no more vCPUs are available; instead, apply the constraint
and let the kernel balance the resources.
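A minimal sketch of the fallback (names are illustrative):
```go
package example

// vcpusToAdd clamps a container's requested vCPUs to what the VM can still
// hotplug instead of returning an error; the kernel then balances workloads
// across the vCPUs that are actually available.
func vcpusToAdd(requested, current, max uint32) uint32 {
	if current >= max {
		return 0
	}
	if free := max - current; requested > free {
		return free
	}
	return requested
}
```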
Signed-off-by: Julio Montes <julio.montes@intel.com>
There is a relation between the maximum number of vCPUs and the
memory footprint: if the QEMU maxcpus option and the kernel nr_cpus
cmdline argument are big, then the memory footprint is big. This
issue only occurs if CPU hotplug support is enabled in the kernel,
probably because the kernel needs to allocate resources to watch all
sockets waiting for a CPU to be connected (ACPI event).
For example:
```
+---------------+-------------------------+
| | Memory Footprint (KB) |
+---------------+-------------------------+
| NR_CPUS=240 | 186501 |
+---------------+-------------------------+
| NR_CPUS=8 | 110684 |
+---------------+-------------------------+
```
In order not to affect CPU hotplug and to allow users to have containers
with the same number of vCPUs as physical CPUs, this patch mitigates the
big memory footprint by using the actual number of physical CPUs as the
maximum number of vCPUs for each container if `default_maxvcpus` is <= 0 in
the runtime configuration file; otherwise `default_maxvcpus` is used as the
maximum number of vCPUs (a sketch of this selection logic follows the
measurements below).
Before this patch, a container with 256MB of RAM:
```
total used free shared buff/cache available
Mem: 195M 40M 113M 26M 41M 112M
Swap: 0B 0B 0B
```
With this patch:
```
total used free shared buff/cache available
Mem: 236M 11M 188M 26M 36M 186M
Swap: 0B 0B 0B
```
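A sketch of the selection logic described above; `runtime.NumCPU()` stands in for the actual number of physical CPUs:
```go
package example

import "runtime"

// maxVCPUs picks the VM's maximum vCPU count: the host CPU count when
// default_maxvcpus is unset or invalid, otherwise the configured value.
func maxVCPUs(defaultMaxVCPUs int) int {
	if defaultMaxVCPUs <= 0 {
		return runtime.NumCPU()
	}
	return defaultMaxVCPUs
}
```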
Fixes: #295
Signed-off-by: Julio Montes <julio.montes@intel.com>
A Unix domain socket path is limited to 107 usable bytes on Linux. However,
not all code creating socket paths was checking for this limit.
Created a new `utils.BuildSocketPath()` function (with tests) to
encapsulate the logic and updated all code creating sockets to use it.
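A hedged sketch of the check involved; the real `utils.BuildSocketPath()` may differ in details, but the 107-byte figure is the usable size of `sun_path` on Linux:
```go
package example

import (
	"fmt"
	"path/filepath"
)

const maxSocketPathLen = 107 // usable bytes in sockaddr_un.sun_path on Linux

// buildSocketPath joins the elements and rejects paths the kernel would
// silently truncate.
func buildSocketPath(elements ...string) (string, error) {
	path := filepath.Join(elements...)
	if len(path) > maxSocketPathLen {
		return "", fmt.Errorf("socket path too long (%d > %d): %s",
			len(path), maxSocketPathLen, path)
	}
	return path, nil
}
```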
Fixes: #268
Signed-off-by: James O. D. Hunt <james.o.hunt@intel.com>
* Move the makeNameID() func to the virtcontainers/utils file as it's a
generic function for making names and IDs.
* Move bindDevicetoVFIO() and bindDevicetoHost() to the vfio driver package.
Signed-off-by: Zhang Wei <zhangwei555@huawei.com>
When imported, the vc files carried the 'full style' Apache
license text, but the standard for Kata is to use the SPDX style.
Update the relevant files to SPDX.
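For reference, the SPDX-style header the files were converted to looks roughly like this (the year and copyright holder vary per file):
```go
// Copyright (c) 2018 Intel Corporation
//
// SPDX-License-Identifier: Apache-2.0
```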
Fixes: #227
Signed-off-by: Graham whaley <graham.whaley@intel.com>
As agreed in [the kata containers API
design](https://github.com/kata-containers/documentation/blob/master/design/kata-api-design.md),
we need to rename pod notion to sandbox. The patch is a bit big but the
actual change is done through the script:
```
sed -i -e 's/pod/sandbox/g' -e 's/Pod/Sandbox/g' -e 's/POD/SB/g'
```
The only exceptions are the `pod_sandbox` and `pod_container` annotations;
since we already pushed them to the CRI shims, we have to keep them unchanged.
Fixes: #199
Signed-off-by: Peng Tao <bergwolf@gmail.com>
Add a hypervisor configuration option to specify whether IO should
be handled in a separate thread. Add support for iothreads for
virtio-scsi for now. Since we attach all SCSI drives to the
same SCSI controller, all the drives will be handled in a single
separate IO thread, which still gives better performance.
Going forward, based on more performance analysis, we need to assess
whether we should add more controllers, attach an iothread to each of
them, and distribute the drives among the SCSI controllers.
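A minimal sketch of the QEMU arguments this implies; `-object iothread` and the virtio-scsi `iothread` property are standard QEMU options, while the IDs are illustrative:
```go
package example

// ioThreadArgs attaches the single virtio-scsi controller, and therefore
// every SCSI drive behind it, to one dedicated IO thread.
func ioThreadArgs() []string {
	return []string{
		"-object", "iothread,id=iothread0",
		"-device", "virtio-scsi-pci,id=scsi0,iothread=iothread0",
	}
}
```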
Signed-off-by: Archana Shinde <archana.m.shinde@intel.com>
If an initrd image is configured in HypervisorConfig or passed in by
annotations, append it to the QEMU command line arguments.
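A minimal sketch of the append, assuming the arguments are built as a string slice (the helper name is illustrative):
```go
package example

// appendInitrd adds -initrd to the QEMU arguments when an initrd image is
// configured or passed in by annotations; otherwise it leaves them untouched.
func appendInitrd(args []string, initrdPath string) []string {
	if initrdPath == "" {
		return args
	}
	return append(args, "-initrd", initrdPath)
}
```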
Fixes: #97
Signed-off-by: Peng Tao <bergwolf@gmail.com>
Added magic tags for `gometalinter` to ignore two unused `const`s that
form part of an `iota` sequence.
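A hedged example of the kind of magic tag meant here, assuming gometalinter's `nolint` comment directive; the constants themselves are illustrative:
```go
package example

type vmState int

const (
	vmReady vmState = iota
	vmPaused  // nolint (part of the iota sequence, currently unused)
	vmStopped // nolint (part of the iota sequence, currently unused)
)
```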
Signed-off-by: James O. D. Hunt <james.o.hunt@intel.com>