Compare commits


79 Commits
2.5.1 ... 2.4.1

Author SHA1 Message Date
Fabiano Fidêncio
67d67ab66d Merge pull request #4204 from fidencio/2.4.1-branch-bump
# Kata Containers 2.4.1
2022-05-04 19:11:37 +02:00
Fabiano Fidêncio
99c6726cf6 release: Kata Containers 2.4.1
- stable-2.4 | Second round of backports for the 2.4.1 release
- stable-2.4 | First round of backports for the 2.4.1 release
- stable-2.4 | versions: Upgrade to Cloud Hypervisor v23.0
- stable-2.4 | runtime: Base64 encode the direct volume mountInfo path
- stable-2.4 | agent: Avoid agent panic when reading empty stats

8e076c87 release: Adapt kata-deploy for 2.4.1
b50b091c agent: watchers: ensure uid/gid is preserved on copy/mkdir
03bc89ab clh: Rely on Cloud Hypervisor for generating the device ID
6b2c641f tools: fix typo in clh directory name
81e10fe3 packaging: Fix clh build from source fall-back
8b21c5f7 agent: modify the type of swappiness to u64
3f5c6e71 runtime: Allow mockfs storage to be placed in any directory
0bd1abac runtime: Let MockFSInit create a mock fs driver at any path
3e74243f runtime: Move mockfs control global into mockfs.go
aed4fe6a runtime: Export StoragePathSuffix
e1c4f57c runtime: Don't abuse MockStorageRootPath() for factory tests
c49084f3 runtime: Make bind mount tests better clean up after themselves
4e350f7d runtime: Clean up mock hook logs in tests
415420f6 runtime: Make SetupOCIConfigFile clean up after itself
688b9abd runtime: Don't use fixed /tmp/mountPoint path
dc1288de kata-monitor: add a README file
78edf827 kata-monitor: add some links when generating pages for browsers
eff74fab agent: fsGroup support for direct-assigned volume
01cd5809 proto: fsGroup support for direct-assigned volume
97ad1d55 runtime: fsGroup support for direct-assigned volume
b62cced7 runtime: no need to write virtiofsd error to log
8242cfd2 kata-monitor: update the hrefs in the debug/pprof index page
a37d4e53 agent: best-effort removing mount point
d1197ee8 tools/packaging: Fix error path in 'kata-deploy-binaries.sh -s'
c9c77511 tools/packaging: Fix usage of kata-deploy-binaries.sh
1e622316 tools/packaging/kata-deploy: Copy install_yq.sh in a dedicated script
8fa64e01 packaging: Eliminate TTY_OPT and NO_TTY variables in kata-deploy
8f67f9e3 tools/packaging/kata-deploy/local-build: Add build to gitignore
3049b776 versions: Bump firecracker to v0.23.4
aedfef29 runtime/virtcontainers: Pass the hugepages resources to agent
c9e1f727 agent: Verify that we allocated as many hugepages as we need
ba858e8c agent: Don't attempt to create directories for hugepage configuration
bc32eff7 virtcontainers: clh: Re-generate the client code
984ef538 versions: Upgrade to Cloud Hypervisor v23.0
adf6493b runtime: Base64 encode the direct volume mountInfo path
6b417540 agent: Avoid agent panic when reading empty stats

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-05-04 16:18:45 +02:00
Fabiano Fidêncio
8e076c8701 release: Adapt kata-deploy for 2.4.1
kata-deploy files must be adapted to a new release.  The cases where it
happens are when the release goes from -> to:
* main -> stable:
  * kata-deploy-stable / kata-cleanup-stable: are removed

* stable -> stable:
  * kata-deploy / kata-cleanup: bump the release to the new one.

There are no changes when doing an alpha release, as the files on the
"main" branch always point to the "latest" and "stable" tags.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-05-04 16:18:45 +02:00
Fabiano Fidêncio
b11f7df5ab Merge pull request #4202 from fidencio/topic/second-round-of-backports-for-2.4.1
stable-2.4 | Second round of backports for the 2.4.1 release
2022-05-04 14:30:23 +02:00
Yibo Zhuang
b50b091c87 agent: watchers: ensure uid/gid is preserved on copy/mkdir
Today in agent watchers, when we copy files/symlinks
or create directories, the ownership of the source path
is not preserved which can lead to permission issues.

In copy, ensure that we do a chown of the source path
uid/gid to the destination file/symlink after copy to
ensure that ownership matches the source ownership.
fs::copy() takes care of setting the permissions.

For directory creation, ensure that we set the
permissions of the created directory to the source
directory permissions and also perform a chown of the
source path uid/gid to ensure directory ownership
and permissions matches to the source.

Fixes: #4188

Signed-off-by: Yibo Zhuang <yibzhuang@gmail.com>
(cherry picked from commit 70eda2fa6c)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-05-04 12:39:44 +02:00
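
A minimal illustration of the preserve-ownership-on-copy idea described in the commit above. The real watcher code lives in the Rust agent; this Go sketch uses a hypothetical helper name and only covers the regular-file case (symlinks would additionally need Lchown).

```go
package watcher

import (
	"io"
	"os"
	"syscall"
)

// copyPreservingOwner copies src to dst with the source's permissions, then
// chowns dst to the source's uid/gid so ownership matches the source path.
func copyPreservingOwner(src, dst string) error {
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()

	info, err := in.Stat()
	if err != nil {
		return err
	}

	out, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, info.Mode().Perm())
	if err != nil {
		return err
	}
	defer out.Close()

	if _, err := io.Copy(out, in); err != nil {
		return err
	}

	// The copy alone only carries permissions, not uid/gid, so chown explicitly.
	if st, ok := info.Sys().(*syscall.Stat_t); ok {
		return os.Chown(dst, int(st.Uid), int(st.Gid))
	}
	return nil
}
```
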
Fabiano Fidêncio
03bc89ab0b clh: Rely on Cloud Hypervisor for generating the device ID
We're currently hitting a race condition on the Cloud Hypervisor's
driver code when quickly removing and adding a block device.

This happens because the device removal is an asynchronous operation,
and we currently do *not* monitor events coming from Cloud Hypervisor to
know when the device was actually removed.  Together with this, the
sandbox code doesn't know about that and when a new device is attached
it'll quickly assign what may be the very same ID to the new device,
leading to the Cloud Hypervisor's driver trying to hotplug a device with
the very same ID of the device that was not yet removed.

This is, in a nutshell, why the tests with Cloud Hypervisor and
devmapper have been failing every now and then.

The workaround taken to solve the issue is basically *not* passing down
the device ID to Cloud Hypervisor and simply letting Cloud Hypervisor
itself generate those, as Cloud Hypervisor does it in a manner that
avoids such conflicts.  With this addition we have then to keep a map of
the device ID and the Cloud Hypervisor's generated ID, so we can
properly remove the device.

This workaround will probably stay for a while, at least till someone
has enough cycles to implement a way to watch the device removal event
and then properly act on that.  Spoiler alert, this will be a complex
change that may not even be worth it considering the race can be avoided
with this commit.

Fixes: #4196

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
(cherry picked from commit 33a8b70558)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-05-04 12:39:39 +02:00
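
A hedged Go sketch of the workaround described in the commit above: the runtime stops passing its own device ID to Cloud Hypervisor and instead records the ID Cloud Hypervisor generated, so the device can still be located at removal time. The type and method names are illustrative, not the actual virtcontainers API.

```go
package clh

// clhDeviceIDs tracks the mapping between the runtime's device ID and the ID
// generated by Cloud Hypervisor when the device was hot-plugged.
type clhDeviceIDs struct {
	ids map[string]string // kata device ID -> CLH-generated ID
}

func newCLHDeviceIDs() *clhDeviceIDs {
	return &clhDeviceIDs{ids: make(map[string]string)}
}

// recordHotplug stores the ID returned by Cloud Hypervisor after a successful
// hot-plug (no ID was sent in the request).
func (c *clhDeviceIDs) recordHotplug(kataID, clhID string) {
	c.ids[kataID] = clhID
}

// idForRemoval returns the Cloud Hypervisor ID to use when hot-unplugging.
func (c *clhDeviceIDs) idForRemoval(kataID string) (string, bool) {
	clhID, ok := c.ids[kataID]
	return clhID, ok
}
```
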
Fabiano Fidêncio
d4dccb4900 Merge pull request #4153 from fidencio/wip/first-round-of-backports-for-2.4.1
stable-2.4 | First round of backports for the 2.4.1 release
2022-04-27 11:23:35 +02:00
Greg Kurz
6b2c641f0b tools: fix typo in clh directory name
This allows getting released binaries again.

Fixes: #4151

Signed-off-by: Greg Kurz <groug@kaod.org>
(cherry picked from commit b658dccc5f)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:11:05 +02:00
Greg Kurz
81e10fe34f packaging: Fix clh build from source fall-back
If we fail to download the clh binary, we fall back to building from source.
Unfortunately, `pull_clh_released_binary()` leaves a `cloud_hypervisor`
directory behind, which causes `build_clh_from_source()` not to clone
the git repo:

    [ -d "${repo_dir}" ] || git clone "${cloud_hypervisor_repo}"

When building from a kata-containers git repo, the subsequent calls
to `git` in this function thus apply to the kata-containers repo and
eventually fail, e.g.:

+ git checkout v23.0
error: pathspec 'v23.0' did not match any file(s) known to git

It doesn't quite make sense to keep an existing directory whose content
is arbitrary when we want it to contain a specific version of clh. Just
remove it instead.

Fixes: #4151

Signed-off-by: Greg Kurz <groug@kaod.org>
(cherry picked from commit afbd60da27)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:11:01 +02:00
holyfei
8b21c5f78d agent: modify the type of swappiness to u64
The type of MemorySwappiness in the runtime is uint64, while the type of swappiness in the agent
is int64; if we set max uint64 in the runtime and pass it to the agent, the value ends up equal
to -1. We should modify the agent's swappiness type to u64.

Fixes: #4123

Signed-off-by: holyfei <yangfeiyu20092010@163.com>
(cherry picked from commit 0239502781)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:10:21 +02:00
David Gibson
3f5c6e7182 runtime: Allow mockfs storage to be placed in any directory
Currently EnableMockTesting() takes no arguments and will always place the
mock storage in the fixed location /tmp/vc/mockfs.  This means that one
test run can interfere with the next one if anything isn't cleaned up
(and there are other bugs which mean that happens).  Even if those were
fixed, this would still allow developers testing on the same machine to
interfere with each other.

So, allow the mockfs to be placed at an arbitrary place given as a
parameter to EnableMockTesting().  In TestMain() we place it under our
existing temporary directory, so we don't need any additional cleanup just
for the mockfs.

fixes #4140

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
(cherry picked from commit 1b931f4203)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:10:21 +02:00
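
A minimal sketch of the change described above, assuming simplified names: the mock storage root becomes a parameter of EnableMockTesting(), and TestMain() points it at a per-run temporary directory (os.Exit skips defers, so the cleanup is explicit).

```go
package persisttest

import (
	"os"
	"testing"
)

// mockTestingRootPath is the root used by the mock fs driver; it is no longer
// a fixed /tmp/vc/mockfs but whatever path the test harness chooses.
var mockTestingRootPath string

// EnableMockTesting switches the persist layer to the mock driver, rooted at path.
func EnableMockTesting(path string) {
	mockTestingRootPath = path
}

func TestMain(m *testing.M) {
	dir, err := os.MkdirTemp("", "vc-mockfs-")
	if err != nil {
		panic(err)
	}
	EnableMockTesting(dir)

	code := m.Run()
	os.RemoveAll(dir) // os.Exit bypasses defers, so remove the directory explicitly
	os.Exit(code)
}
```
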
David Gibson
0bd1abac3e runtime: Let MockFSInit create a mock fs driver at any path
Currently MockFSInit always creates the mockfs at the fixed path
/tmp/vc/mockfs.  This change allows it to be initialized at any path
given as a parameter.  This allows the tests in fs_test.go to be
simplified: by using a temporary directory from t.TempDir(), which is
automatically cleaned up, we don't need to manually trigger
initTestDir() (which is misnamed; it's actually a cleanup function).

For now we still use the fixed path when auto-creating the mockfs in
MockAutoInit(), but we'll change that later.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
(cherry picked from commit ef6d54a781)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:10:21 +02:00
David Gibson
3e74243fbe runtime: Move mockfs control global into mockfs.go
virtcontainers/persist/fs/mockfs.go defines a mock filesystem type for
testing.  A global variable in virtcontainers/persist/manager.go is used to
force use of the mock fs rather than a normal one.

This patch moves the global, and the EnableMockTesting() function which
sets it, into mockfs.go.  This is slightly cleaner to begin with, and will
allow some further enhancements.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
(cherry picked from commit 5d8438e939)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:10:21 +02:00
David Gibson
aed4fe6a2e runtime: Export StoragePathSuffix
storagePathSuffix defines the file path suffix - "vc" - used for
Kata's persistent storage information, as a private constant.  We
duplicate this information in fc.go which also needs it.

Export it from fs.go instead, so it can be used in fc.go.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
(cherry picked from commit 963d03ea8a)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:10:21 +02:00
David Gibson
e1c4f57c35 runtime: Don't abuse MockStorageRootPath() for factory tests
A number of unit tests under virtcontainers/factory use
MockStorageRootPath() as a general purpose temporary directory.  This
doesn't make sense: the mockfs driver isn't even in use here since we only
call EnableMockTesting for the base virtcontainers package, not the
subpackages.

Instead use t.TempDir() which is for exactly this purpose.  As a bonus it
also handles the cleanup, so we don't need MockStorageDestroy any more.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
(cherry picked from commit 1719a8b491)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:10:21 +02:00
David Gibson
c49084f303 runtime: Make bind mount tests better clean up after themselves
There are several tests in mount_test.go which perform a sample bind
mount.  These need a corresponding unmount to clean up afterwards or
attempting to delete the temporary files will fail due to the existing
mountpoint.  Most of them had such an unmount, but
TestBindMountInvalidPgtypes was missing one.

In addition, the existing unmounts were done inconsistently - one was
simply inline (so wouldn't be executed if the test fails too early) and one
was a defer.  Change them all to use the t.Cleanup mechanism.

For the dummy mountpoint files, rather than cleaning them up after the
test, the tests were removing them at the beginning of the test.  That
stops the test being messed up by a previous run, but messily.  Since
these are created in a private temporary directory anyway, if there's
something already there, that indicates a problem we shouldn't ignore.
In fact we don't need to explicitly remove these at all - they'll be
removed along with the rest of the private temporary directory.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
(cherry picked from commit bec59f9e39)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:10:21 +02:00
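
A small Go sketch of the cleanup pattern the commit above adopts: register the unmount with t.Cleanup so it runs whether the test passes or fails early. Paths are per-test temporary directories, and the bind mount itself requires root.

```go
package mounttest

import (
	"syscall"
	"testing"
)

func TestBindMountCleanup(t *testing.T) {
	src := t.TempDir()
	dst := t.TempDir()

	if err := syscall.Mount(src, dst, "bind", syscall.MS_BIND, ""); err != nil {
		t.Fatalf("bind mount failed: %v", err)
	}
	// t.Cleanup runs even if the test fails later, unlike an inline unmount.
	t.Cleanup(func() {
		_ = syscall.Unmount(dst, 0)
	})

	// ... test assertions on the bind mount go here ...
}
```
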
David Gibson
4e350f7d53 runtime: Clean up mock hook logs in tests
The tests in hook_test.go run a mock hook binary, which does some debug
logging to /tmp/mock_hook.log.  Currently we don't clean up those logs
when the tests are done.  Use a test cleanup function to do this.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
(cherry picked from commit f7ba21c86f)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:10:21 +02:00
David Gibson
415420f689 runtime: Make SetupOCIConfigFile clean up after itself
SetupOCIConfigFile creates a temporary directory with os.MkdirTemp().  This
means the callers need to register a deferred function to remove it again.
At least one of them was commented out, meaning that a /temp/katatest-
directory was left over after the unit tests ran.

Change to using t.TempDir(), which, as well as better matching other parts
of the tests, means the testing framework will handle cleaning it up.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
(cherry picked from commit 90b2f5b776)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:10:21 +02:00
David Gibson
688b9abd35 runtime: Don't use fixed /tmp/mountPoint path
Several tests in kata_agent_test.go create /tmp/mountPoint as a dummy
directory to mount.  This is not cleaned up after the test.  Although it
is in /tmp, that's still a little messy and can be confusing to a user.
In addition, because it uses the same name every time, it allows for one
run of the test to interfere with the next.

Use the built-in t.TempDir() to get an automatically named and deleted
temporary directory instead.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
(cherry picked from commit 2eeb5dc223)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:10:21 +02:00
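
The same t.TempDir() pattern applies to the two test-cleanup commits above; a tiny sketch, with the old fixed-path approach shown only in comments:

```go
package agenttest

import "testing"

func TestMountWithTempDir(t *testing.T) {
	// Before: os.MkdirAll("/tmp/mountPoint", 0o755) plus manual (and easily
	// forgotten) removal, with one run able to interfere with the next.
	mountPoint := t.TempDir() // unique per test and removed automatically
	_ = mountPoint
	// ... use mountPoint wherever the fixed /tmp/mountPoint path was used ...
}
```
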
Francesco Giudici
dc1288de8d kata-monitor: add a README file
Fixes: #3704

Signed-off-by: Francesco Giudici <fgiudici@redhat.com>
(cherry picked from commit 7b2ff02647)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:10:21 +02:00
bin
78edf827df kata-monitor: add some links when generating pages for browsers
Add some links to the rendered webpages for a better user experience,
so users can jump between pages just by clicking links in their browsers.

Fixes: #4061

Signed-off-by: bin <bin@hyper.sh>
(cherry picked from commit f8cc5d1ad8)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:10:21 +02:00
Yibo Zhuang
eff74fab0e agent: fsGroup support for direct-assigned volume
Adding two functions set_ownership and
recursive_ownership_change to support changing group id
ownership for a mounted volume.

The set_ownership will be called in common_storage_handler
after mount_storage performs the mount for the volume.
set_ownership will be a noop if the FSGroup field in the
Storage struct is not set, which indicates no chown will be
performed. If the FSGroup field is specified, then it will
perform a recursive walk of the mounted volume path to
change ownership of all files and directories to the
desired group id. It will also configure the SetGid bit
so that files created in the directory will have their group
follow the parent directory's group.

If the fsGroupChangePolicy is on root mismatch,
then the group ownership change will be skipped if the root
directory group id already matches the desired group
id and if the SetGid bit is also set on the root directory.

This is the same behavior as what
Kubelet does today when performing the recursive walk
to change ownership.

Fixes #4018

Signed-off-by: Yibo Zhuang <yibzhuang@gmail.com>
(cherry picked from commit 92c00c7e84)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:09:22 +02:00
Yibo Zhuang
01cd58094e proto: fsGroup support for direct-assigned volume
This change adds two fields to the Storage pb

FSGroup which is a group id that the runtime
specifies to indicate to the agent to perform a
chown of the mounted volume to the specified
group id after mounting is complete in the guest.

FSGroupChangePolicy which is a policy to indicate
whether to always perform the group id ownership
change or only if the root directory group id
does not match with the desired group id.

These two fields will allow CSI plugins to indicate
to Kata that after the block device is mounted in
the guest, group id ownership change should be performed
on that volume.

Fixes #4018

Signed-off-by: Yibo Zhuang <yibzhuang@gmail.com>
(cherry picked from commit 6a47b82c81)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:05:29 +02:00
Yibo Zhuang
97ad1d55ff runtime: fsGroup support for direct-assigned volume
The fsGroup will be specified by the fsGroup key in
the direct-assign mountinfo metadata field.
This will be set when invoking the kata-runtime
binary and providing the key, value pair in the metadata
field. Similarly, the fsGroupChangePolicy will also
be provided in the mountinfo metadata field.

Adding two extra fields, FSGroup and FSGroupChangePolicy,
to the Mount construct for container mounts, which will
be populated when creating block devices by parsing
out the mountInfo.json.

And in handleDeviceBlockVolume of the kata-agent client,
it checks if the mount FSGroup is not nil, which
indicates that fsGroup change is required in the guest,
and will provide the FSGroup field in the protobuf to
pass the value to the agent.

Fixes #4018

Signed-off-by: Yibo Zhuang <yibzhuang@gmail.com>
(cherry picked from commit 532d53977e)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:05:29 +02:00
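
A hedged Go sketch of the two extra fields described in the commit above; the exact field names and types in the runtime's Mount construct may differ from this illustration.

```go
package container

// FSGroupChangePolicy mirrors the Kubernetes policy: always change ownership,
// or only when the root of the volume does not already match.
type FSGroupChangePolicy string

const (
	FSGroupChangeAlways         FSGroupChangePolicy = "Always"
	FSGroupChangeOnRootMismatch FSGroupChangePolicy = "OnRootMismatch"
)

// Mount is a trimmed-down illustration of the container mount construct.
type Mount struct {
	Source      string
	Destination string
	Type        string
	Options     []string

	// Populated from the "fsGroup" / "fsGroupChangePolicy" keys parsed out of
	// the direct-assign mountInfo.json metadata; nil means no chown in the guest.
	FSGroup             *uint32
	FSGroupChangePolicy FSGroupChangePolicy
}
```
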
Zhuoyu Tie
b62cced7f4 runtime: no need to write virtiofsd error to log
The scanner reads nothing from the virtiofsd stderr pipe, because the
'--syslog' param redirects stderr to syslog. So there is no need to write
scanner.Text() to the kata log.

Fixes: #4063

Signed-off-by: Zhuoyu Tie <tiezhuoyu@outlook.com>
(cherry picked from commit 6e79042aa0)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:05:29 +02:00
Francesco Giudici
8242cfd2be kata-monitor: update the hrefs in the debug/pprof index page
kata-monitor allows getting data profiles from the kata shim
instances running on the same node by acting as a proxy
(e.g., http://$NODE_ADDRESS:8090/debug/pprof/?sandbox=$MYSANDBOXID).
In order to proxy the requests and the responses to the right shim,
kata-monitor requires the sandbox id to be passed via a query string in
the url.

The profiling index page proxied by kata-monitor contains the links to all
the available data profiles. None of those links, however, carry the
sandbox id included in the request: the links are then broken when
accessed through kata-monitor.
This happens because the profiling index page comes from the kata shim,
which will not include the query string provided in the http request.

Let's add the sandbox id on-the-fly to each href tag returned by the kata
shim index page before providing the proxied page.

Fixes: #4054

Signed-off-by: Francesco Giudici <fgiudici@redhat.com>
(cherry picked from commit 86977ff780)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:05:29 +02:00
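
An illustrative Go sketch of the on-the-fly rewrite described above (not the actual kata-monitor code): each href in the proxied pprof index page gets the sandbox id appended as a query parameter so the links keep working through the proxy.

```go
package monitor

import (
	"regexp"
	"strings"
)

var hrefRe = regexp.MustCompile(`href="([^"]+)"`)

// addSandboxToHrefs rewrites every href in the shim's pprof index page so it
// carries the sandbox id, e.g. href="goroutine" -> href="goroutine?sandbox=<id>".
func addSandboxToHrefs(page, sandboxID string) string {
	return hrefRe.ReplaceAllStringFunc(page, func(match string) string {
		url := hrefRe.FindStringSubmatch(match)[1]
		sep := "?"
		if strings.Contains(url, "?") {
			sep = "&"
		}
		return `href="` + url + sep + "sandbox=" + sandboxID + `"`
	})
}
```
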
Feng Wang
a37d4e538f agent: best-effort removing mount point
During container exit, the agent tries to remove all the mount point directories,
which can fail if it's a readonly filesystem (e.g. device mapper). This commit ignores
the removal failure and logs a warning message.

Fixes: #4043

Signed-off-by: Feng Wang <feng.wang@databricks.com>
(cherry picked from commit aabcebbf58)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:05:29 +02:00
Greg Kurz
d1197ee8e5 tools/packaging: Fix error path in 'kata-deploy-binaries.sh -s'
`make kata-tarball` relies on `kata-deploy-binaries.sh -s` which
silently ignores errors, and you may end up with an incomplete
tarball without noticing it because `make`'s exit status is 0.

`kata-deploy-binaries.sh` does set the `errexit` option and all the
code in the script seems to assume that since it doesn't do error
checking. Unfortunately, bash automatically disables `errexit` when
calling a function from a conditional pipeline, like done in the `-s`
case:

	if [ "${silent}" == true ]; then
		if ! handle_build "${t}" &>"$log_file"; then
                ^^^^^^
           this disables `errexit`

and `handle_build` ends with a `tar tvf` that always succeeds.

Adding error checking all over the place isn't really an option
as it would seriously obfuscate the code. Drop the conditional
pipeline instead and print the final error message from a `trap`
handler on the special ERR signal. This requires the `errtrace`
option as `trap`s aren't propagated to functions by default.

Since all outputs of `handle_build` are redirected to the build
log file, some file descriptor duplication magic is needed for
the handler to be able to write to the original stdout and stderr.

Fixes #3757

Signed-off-by: Greg Kurz <groug@kaod.org>
(cherry picked from commit a779e19bee)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:05:29 +02:00
Greg Kurz
c9c7751184 tools/packaging: Fix usage of kata-deploy-binaries.sh
Add missing documentation for -s .

Signed-off-by: Greg Kurz <groug@kaod.org>
(cherry picked from commit 0baebd2b37)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:05:29 +02:00
Greg Kurz
1e62231610 tools/packaging/kata-deploy: Copy install_yq.sh in a dedicated script
'make kata-tarball' sometimes fails early with:

cp: cannot create regular file '[...]/tools/packaging/kata-deploy/local-build/dockerbuild/install_yq.sh': File exists

This happens because all assets are built in parallel using the same
`kata-deploy-binaries-in-docker.sh` script, and thus all try to copy
the `install_yq.sh` script to the same location with the `cp` command.
This is a well known race condition that cannot be avoided without
serialization of `cp` invocations.

Move the copying of `install_yq.sh` to a separate script and ensure
it is called *before* parallel builds. Make the presence of the copy
a prerequisite for each sub-build so that they still can be triggered
individually. Update the GH release workflow to also call this script
before calling `kata-deploy-binaries-in-docker.sh`.

Fixes #3756

Signed-off-by: Greg Kurz <groug@kaod.org>
(cherry picked from commit 154c8b03d3)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:05:29 +02:00
David Gibson
8fa64e011d packaging: Eliminate TTY_OPT and NO_TTY variables in kata-deploy
NO_TTY configured whether to add the -t option to docker run.  It makes no
sense for the caller to configure this, since whether you need it depends
on the commands you're running.  Since the point here is to run
non-interactive build scripts, we don't need -t, or -i either.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Greg Kurz <groug@kaod.org>
(cherry picked from commit 1ed7da8fc7)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:05:29 +02:00
David Gibson
8f67f9e384 tools/packaging/kata-deploy/local-build: Add build to gitignore
This directory consists entirely of files built during a make kata-tarball,
so it should not be committed to the tree. A symbolic link to this directory
might be created during 'make tarball'; ignore it as well.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
[greg: - rearranged the subject to make the subsystem checker happy
       - also ignore the symbolic link created by
         `kata-deploy-binaries-in-docker.sh`]
Signed-off-by: Greg Kurz <groug@kaod.org>
(cherry picked from commit bad859d2f8)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:05:29 +02:00
Greg Kurz
3049b7760a versions: Bump firecracker to v0.23.4
This release changes the Docker images repository from DockerHub to Amazon
ECR. This resolves the `You have reached your pull rate limit` error
when building the firecracker tarball.

Fixes #4001

Signed-off-by: Greg Kurz <groug@kaod.org>
(cherry picked from commit 0d5f80b803)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:05:29 +02:00
Miao Xia
aedfef29a3 runtime/virtcontainers: Pass the hugepages resources to agent
The hugepages resources claimed by containers should be limited
by cgroup in the guest OS.

Fixes: #3695

Signed-off-by: Miao Xia <xia.miao1@zte.com.cn>
(cherry picked from commit a2f5c1768e)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:05:29 +02:00
David Gibson
c9e1f72785 agent: Verify that we allocated as many hugepages as we need
allocate_hugepages() writes to the kernel sysfs file to allocate hugepages
in the Kata VM.  However, even if the write succeeds, it's not certain that
the kernel will actually be able to allocate as many hugepages as we
requested.

This patch reads back the file after writing it to check if we were able to
allocate all the required hugepages.

fixes #3816

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
(cherry picked from commit 42e35505b0)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:05:29 +02:00
David Gibson
ba858e8cd9 agent: Don't attempt to create directories for hugepage configuration
allocate_hugepages() constructs the path for the sysfs directory containing
hugepage configuration, then attempts to create this directory if it does
not exist.

This doesn't make sense: sysfs is a view into kernel configuration; if the
kernel has support for the hugepage size, the directory will already be
there, and if it doesn't, trying to create it won't help.

For the same reason, attempting to create the "nr_hugepages" file
itself is pointless, so there's no reason to call
OpenOptions::create(true).

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
(cherry picked from commit 608e003abc)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:05:29 +02:00
Fabiano Fidêncio
b784763685 Merge pull request #4120 from likebreath/0420/backport_clh_v23.0
stable-2.4 | versions: Upgrade to Cloud Hypervisor v23.0
2022-04-21 14:33:37 +02:00
Fabiano Fidêncio
df2d57e9b8 Merge pull request #4098 from fengwang666/stable-2.4_backport
stable-2.4 | runtime: Base64 encode the direct volume mountInfo path
2022-04-21 12:54:03 +02:00
Bo Chen
bc32eff7b4 virtcontainers: clh: Re-generate the client code
This patch re-generates the client code for Cloud Hypervisor v23.0.
Note: The client code of cloud-hypervisor's (CLH) OpenAPI is
automatically generated by openapi-generator [1-2].

[1] https://github.com/OpenAPITools/openapi-generator
[2] https://github.com/kata-containers/kata-containers/blob/main/src/runtime/virtcontainers/pkg/cloud-hypervisor/README.md

Signed-off-by: Bo Chen <chen.bo@intel.com>
(cherry picked from commit 29e569aa92)
2022-04-20 15:57:50 -07:00
Bo Chen
984ef5389e versions: Upgrade to Cloud Hypervisor v23.0
Highlights from the Cloud Hypervisor release v23.0: 1) vDPA Support; 2)
Updated OS Support list (Jammy 22.04 added with EOLed versions removed);
3) AArch64 Memory Map Improvements; 4) AMX Support; 5) Bug Fixes;

Details can be found: https://github.com/cloud-hypervisor/cloud-hypervisor/releases/tag/v23.0

Fixes: #4101

Signed-off-by: Bo Chen <chen.bo@intel.com>
(cherry picked from commit 6012c19707)
2022-04-20 15:57:50 -07:00
Feng Wang
adf6493b89 runtime: Base64 encode the direct volume mountInfo path
This is to avoid accidentally deleting multiple volumes.

Fixes #4020

Signed-off-by: Feng Wang <feng.wang@databricks.com>
(cherry picked from commit 354cd3b9b6)
2022-04-13 22:30:53 -07:00
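
A minimal sketch of the idea, assuming a hypothetical helper name: derive the on-disk mountInfo location from a base64 encoding of the volume path rather than the raw path, so deleting one volume's state cannot touch another's.

```go
package volume

import (
	"encoding/base64"
	"path/filepath"
)

// mountInfoFilePath returns where a direct-assigned volume's mountInfo.json is
// stored, keyed by a URL-safe base64 encoding of the volume path.
func mountInfoFilePath(stateDir, volumePath string) string {
	encoded := base64.URLEncoding.EncodeToString([]byte(volumePath))
	return filepath.Join(stateDir, encoded, "mountInfo.json")
}
```
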
Greg Kurz
10bab3c96a Merge pull request #4081 from fidencio/wip/stable-2.4-agent-avoid-panic-when-getting-empty-stats
stable-2.4 | agent: Avoid agent panic when reading empty stats
2022-04-13 14:13:13 +02:00
Fabiano Fidêncio
6b41754018 agent: Avoid agent panic when reading empty stats
This was seen in an issue report, where we'd try to unwrap a None value,
leading to a panic.

Fixes: #4077
Related: #4043

Full backtrace:
```
"thread 'tokio-runtime-worker' panicked at 'called `Option::unwrap()` on a `None` value', rustjail/src/cgroups/fs/mod.rs:593:31"
"stack backtrace:"
"   0:     0x7f0390edcc3a - std::backtrace_rs::backtrace::libunwind::trace::hd5eff4de16dbdd15"
"                               at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/../../backtrace/src/backtrace/libunwind.rs:93:5"
"   1:     0x7f0390edcc3a - std::backtrace_rs::backtrace::trace_unsynchronized::h04a775b4c6ab90d6"
"                               at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/../../backtrace/src/backtrace/mod.rs:66:5"
"   2:     0x7f0390edcc3a - std::sys_common::backtrace::_print_fmt::h3253c3db9f17d826"
"                               at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/sys_common/backtrace.rs:67:5"
"   3:     0x7f0390edcc3a - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::h02bfc712fc868664"
"                               at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/sys_common/backtrace.rs:46:22"
"   4:     0x7f0390a91fbc - core::fmt::write::hfd5090d1132106d8"
"                               at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/core/src/fmt/mod.rs:1149:17"
"   5:     0x7f0390edb804 - std::io::Write::write_fmt::h34acb699c6d6f5a9"
"                               at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/io/mod.rs:1697:15"
"   6:     0x7f0390edbee0 - std::sys_common::backtrace::_print::hfca761479e3d91ed"
"                               at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/sys_common/backtrace.rs:49:5"
"   7:     0x7f0390edbee0 - std::sys_common::backtrace::print::hf666af0b87d2b5ba"
"                               at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/sys_common/backtrace.rs:36:9"
"   8:     0x7f0390edbee0 - std::panicking::default_hook::{{closure}}::hb4617bd1d4a09097"
"                               at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/panicking.rs:211:50"
"   9:     0x7f0390edb2da - std::panicking::default_hook::h84f684d9eff1eede"
"                               at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/panicking.rs:228:9"
"  10:     0x7f0390edb2da - std::panicking::rust_panic_with_hook::h8e784f5c39f46346"
"                               at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/panicking.rs:606:17"
"  11:     0x7f0390f0c416 - std::panicking::begin_panic_handler::{{closure}}::hef496869aa926670"
"                               at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/panicking.rs:500:13"
"  12:     0x7f0390f0c3b6 - std::sys_common::backtrace::__rust_end_short_backtrace::h8e9b039b8ed3e70f"
"                               at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/sys_common/backtrace.rs:139:18"
"  13:     0x7f0390f0c372 - rust_begin_unwind"
"                               at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/panicking.rs:498:5"
"  14:     0x7f03909062c0 - core::panicking::panic_fmt::h568976b83a33ae59"
"                               at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/core/src/panicking.rs:107:14"
"  15:     0x7f039090641c - core::panicking::panic::he2e71cfa6548cc2c"
"                               at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/core/src/panicking.rs:48:5"
"  16:     0x7f0390eb443f - <rustjail::cgroups::fs::Manager as rustjail::cgroups::Manager>::get_stats::h85031fc1c59c53d9"
"  17:     0x7f03909c0138 - <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll::hfa6e6cd7516f8d11"
"  18:     0x7f0390d697e5 - <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll::hffbaa534cfa97d44"
"  19:     0x7f039099c0b3 - <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll::hae3ab083a06d0b4b"
"  20:     0x7f0390af9e1e - std::panic::catch_unwind::h1fdd25c8ebba32e1"
"  21:     0x7f0390b7c4e6 - tokio::runtime::task::raw::poll::hd3ebbd0717dac808"
"  22:     0x7f0390f49f3f - tokio::runtime::thread_pool::worker::Context::run_task::hfdd63cd1e0b17abf"
"  23:     0x7f0390f3a599 - tokio::runtime::task::raw::poll::h62954f6369b1d210"
"  24:     0x7f0390f37863 - std::sys_common::backtrace::__rust_begin_short_backtrace::h1c58f232c078bfe9"
"  25:     0x7f0390f4f3dd - core::ops::function::FnOnce::call_once{{vtable.shim}}::h2d329a84c0feed57"
"  26:     0x7f0390f0e535 - <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once::h137e5243c6233a3b"
"                               at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/alloc/src/boxed.rs:1694:9"
"  27:     0x7f0390f0e535 - <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once::h7331c46863d912b7"
"                               at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/alloc/src/boxed.rs:1694:9"
"  28:     0x7f0390f0e535 - std::sys::unix::thread::Thread::new::thread_start::h1fb20b966cb927ab"
"                               at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/sys/unix/thread.rs:106:17"
```

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
(cherry picked from commit 78f30c33c6)
2022-04-12 18:59:02 +02:00
Fabiano Fidêncio
0ad6f05dee Merge pull request #4024 from bergwolf/2.4.0-branch-bump
# Kata Containers 2.4.0
2022-04-01 13:46:35 +02:00
Peng Tao
4c9c01a124 release: Kata Containers 2.4.0
- stable-2.4 | agent: fix container stop error with signal SIGRTMIN+3
- stable-2.4 | kata-monitor: fix duplicated output when printing usage
- stable-2.4 | runtime: Stop getting OOM events from agent for "ttrpc closed" error
- kata-deploy: fix version bump from -rc to stable
- stable-2.4: release: Include all the rust vendored code into the vendored tarball
- stable-2.4 | tools: release: Do not consider release candidates as stable releases
- agent: Signal the whole process group
- stable-2.4 | docs: Update k8s documentation
- backport main commits to stable 2.4
- stable-2.4: Bump QEMU to 6.2 (bringing then SGX support in)
- runtime: Properly handle ESRCH error when signaling container
- stable-2.4 | versions: Upgrade to Cloud Hypervisor v22.1

f2319d69 release: Adapt kata-deploy for 2.4.0
cae48e9c agent: fix container stop error with signal SIGRTMIN+3
342aa95c kata-monitor: fix duplicated output when printing usage
9f75e226 runtime: add logs around sandbox monitor
363fbed8 runtime: stop getting OOM events when ttrpc: closed error
f840de5a workflows,release: Ship *all* the rust vendored code
952cea5f tools: Add a generate_vendor.sh script
cc965fa0 kata-deploy: fix version bump from -rc to stable
f41cc184 tools: release: Do not consider release candidates as stable releases
e059b50f runtime: Add more debug logs for container io stream copy
71ce6f53 agent: Kill all the container processes of the same cgroup
30fc2c86 docs: Update k8s documentation
24028969 virtcontainers: Run mock hook from build tree rather than system bin dir
4e54aa5a doc: fix filename typo
d815393c manager: Add options to change self test behaviour
4111e1a3 manager: Add option to enable component debug
2918be18 manager: Create containerd link
6b31b068 kernel: fix cve-2022-0847
5589b246 doc: update Intel SGX use cases document
1da88dca tools: update QEMU to 6.2
3e2f9223 runtime: Properly handle ESRCH error when signaling container
4c21cb3e versions: Upgrade to Cloud Hypervisor v22.1

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2022-04-01 06:20:20 +00:00
Peng Tao
f2319d693d release: Adapt kata-deploy for 2.4.0
kata-deploy files must be adapted to a new release.  The cases where it
happens are when the release goes from -> to:
* main -> stable:
  * kata-deploy-stable / kata-cleanup-stable: are removed

* stable -> stable:
  * kata-deploy / kata-cleanup: bump the release to the new one.

There are no changes when doing an alpha release, as the files on the
"main" branch always point to the "latest" and "stable" tags.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2022-04-01 06:20:20 +00:00
Bin Liu
98ccf8f6a1 Merge pull request #4008 from wxx213/stable-2.4
stable-2.4 | agent: fix container stop error with signal SIGRTMIN+3
2022-04-01 11:29:18 +08:00
Wang Xingxing
cae48e9c9b agent: fix container stop error with signal SIGRTMIN+3
The nix::sys::signal::Signal package API cannot deal with SIGRTMIN+3,
so directly use the libc function to send the signal.

Fixes: #3990

Signed-off-by: Wang Xingxing <stellarwxx@163.com>
(cherry picked from commit 0d765bd082)
Signed-off-by: Wang Xingxing <stellarwxx@163.com>
2022-03-31 16:49:06 +08:00
snir911
a36103c759 Merge pull request #4003 from fgiudici/kata-monitor_fix_help_backport
stable-2.4 | kata-monitor: fix duplicated output when printing usage
2022-03-30 18:57:17 +03:00
Fabiano Fidêncio
6abbcc551c Merge pull request #3997 from liubin/backport-2.4
stable-2.4 | runtime: Stop getting OOM events from agent for "ttrpc closed" error
2022-03-30 14:08:55 +02:00
Francesco Giudici
342aa95cc8 kata-monitor: fix duplicated output when printing usage
(default: "/run/containerd/containerd.sock") is duplicated when
printing kata-monitor usage:

[root@kubernetes ~]# kata-monitor --help
Usage of kata-monitor:
  -listen-address string
        The address to listen on for HTTP requests. (default ":8090")
  -log-level string
        Log level of logrus(trace/debug/info/warn/error/fatal/panic). (default "info")
  -runtime-endpoint string
        Endpoint of CRI container runtime service. (default: "/run/containerd/containerd.sock") (default "/run/containerd/containerd.sock")

the golang flag package takes care of adding the defaults when printing
usage. Remove the explicit print of the value so that it is not
printed on screen twice.

Fixes: #3998

Signed-off-by: Francesco Giudici <fgiudici@redhat.com>
(cherry picked from commit a63bbf9793)
2022-03-30 14:02:54 +02:00
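
A short Go illustration of the fix: flag.String already appends the `(default "...")` suffix when printing usage, so the usage string itself must not repeat the default value.

```go
package main

import (
	"flag"
	"fmt"
)

const defaultRuntimeEndpoint = "/run/containerd/containerd.sock"

// Before (default printed twice in --help):
//   flag.String("runtime-endpoint", defaultRuntimeEndpoint,
//       `Endpoint of CRI container runtime service. (default: "/run/containerd/containerd.sock")`)
//
// After: let the flag package add the default on its own.
var runtimeEndpoint = flag.String("runtime-endpoint", defaultRuntimeEndpoint,
	"Endpoint of CRI container runtime service.")

func main() {
	flag.Parse()
	fmt.Println(*runtimeEndpoint)
}
```
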
bin
9f75e226f1 runtime: add logs around sandbox monitor
For debugging purposes, add some logs.

Fixes: #3815

Signed-off-by: bin <bin@hyper.sh>
2022-03-30 17:11:40 +08:00
bin
363fbed804 runtime: stop getting OOM events when ttrpc: closed error
getOOMEvents is a long-running call that retries when it fails.
For cases of agent shutdown, the retry should stop.

Even when we haven't yet detected that the agent has died, we can also check
whether the error is "ttrpc: closed".

Fixes: #3815

Signed-off-by: bin <bin@hyper.sh>
2022-03-30 17:11:35 +08:00
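
A hedged sketch of the stop condition described above; the helper name is illustrative and the real check in the runtime may be structured differently.

```go
package oomwatcher

import "strings"

// stopOOMEventRetry reports whether the long-running getOOMEvents loop should
// give up: either the agent is already known to be dead, or the error shows
// the underlying ttrpc connection was closed.
func stopOOMEventRetry(err error, agentDead bool) bool {
	if agentDead {
		return true
	}
	return err != nil && strings.Contains(err.Error(), "ttrpc: closed")
}
```
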
Fabiano Fidêncio
54a638317a Merge pull request #3988 from bergwolf/github/kata-deploy
kata-deploy: fix version bump from -rc to stable
2022-03-30 11:01:45 +02:00
Peng Tao
8ce6b12b41 Merge pull request #3993 from fidencio/wip/stable-2.4-release-include-all-rust-vendored-code-to-the-vendored-tarball
stable-2.4: release: Include all the rust vendored code into the vendored tarball
2022-03-30 16:10:47 +08:00
Fabiano Fidêncio
f840de5acb workflows,release: Ship *all* the rust vendored code
Instead of only vendoring the code needed by the agent, let's ensure we
vendor all the needed rust code, and let's do it using the newly
introduced generate_vendor.sh script.

Fixes: #3973

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
(cherry picked from commit 3606923ac8)
2022-03-29 23:27:43 +02:00
Fabiano Fidêncio
952cea5f5d tools: Add a generate_vendor.sh script
This script is responsible for generating a tarball with all the rust
vendored code that is needed for fully building kata-containers on a
disconnected environment.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
(cherry picked from commit 2eb07455d0)
2022-03-29 23:27:29 +02:00
Peng Tao
cc965fa0cb kata-deploy: fix version bump from -rc to stable
In such a case, we should bump from the "latest" tag rather than from
current_version.

Fixes: #3986
Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2022-03-29 03:45:27 +00:00
GabyCT
44b1473d0c Merge pull request #3977 from fidencio/wip/backport-fix-for-3847
stable-2.4 | tools: release: Do not consider release candidates as stable releases
2022-03-28 10:38:47 -06:00
Fupan Li
565efd1bf2 Merge pull request #3975 from bergwolf/github/backport-stable-2.4
agent: Signal the whole process group
2022-03-28 18:26:12 +08:00
Fabiano Fidêncio
f41cc18427 tools: release: Do not consider release candidates as stable releases
During the release of 2.4.0-rc0 @egernst noticed an inconsistency in the
way we handle release tags, as release candidates are being taken as
"stable" releases, while both the kata-deploy tests and the release
action consider them as "latest".

Ideally we should have our own tag for "release candidate", but that's
something that could and should be discussed more extensively outside of
the scope of this quick fix.

For now, let's align the code generating the PR for bumping the release
with what we already do as part of the release action and kata-deploy
test, and tag "-rc"  as latest, regardless of which branch it's coming
from.

Fixes: #3847

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
(cherry picked from commit 4adf93ef2c)
2022-03-28 11:01:58 +02:00
Feng Wang
e059b50f5c runtime: Add more debug logs for container io stream copy
This can help with debugging container lifecycle issues.

Fixes: #3913

Signed-off-by: Feng Wang <feng.wang@databricks.com>
2022-03-28 16:22:22 +08:00
Feng Wang
71ce6f537f agent: Kill all the container processes of the same cgroup
Otherwise the container process might leak and cause an unclean exit

Fixes: #3913

Signed-off-by: Feng Wang <feng.wang@databricks.com>
2022-03-28 16:21:51 +08:00
Bin Liu
a2b73b60bd Merge pull request #3960 from cmaf/update-k8s-docs-1-stable-2.4
stable-2.4 | docs: Update k8s documentation
2022-03-25 15:25:25 +08:00
Bin Liu
2ce9ce7b8f Merge pull request #3954 from bergwolf/github/backport-stable-2.4
backport main commits to stable 2.4
2022-03-25 14:45:17 +08:00
Chelsea Mafrica
30fc2c863d docs: Update k8s documentation
Update documentation with missing step to untaint node to enable
scheduling and update the example to run a pod using the kata runtime
class instead of untrusted workloads, which applies to versions of CRI-O
prior to v1.12.

Fixes #3863

Signed-off-by: Chelsea Mafrica <chelsea.e.mafrica@intel.com>
(cherry picked from commit 5c434270d1)
2022-03-24 11:22:18 -07:00
David Gibson
24028969c2 virtcontainers: Run mock hook from build tree rather than system bin dir
Running unit tests should generally have minimal dependencies on
things outside the build tree.  It *definitely* shouldn't modify
system wide things outside the build tree.  Currently the runtime
"make test" target does so, though.

Several of the tests in src/runtime/pkg/katautils/hook_test.go require a
sample hook binary.  They expect this hook in
/usr/bin/virtcontainers/bin/test/hook, so the makefile, as root, installs
the test binary to that location.

Go tests automatically run within the package's directory though, so
there's no need to use a system-wide path.  We can use a relative path to
the binary built within the tree just as easily.

fixes #3941

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2022-03-24 12:02:00 +08:00
Garrett Mahin
4e54aa5a7b doc: fix filename typo
Corrects a filename typo in the cleanup cluster part
of the kata-deploy README.md.

Fixes: #3869
Signed-off-by: Garrett Mahin <garrett.mahin@gmail.com>
2022-03-24 12:00:17 +08:00
James O. D. Hunt
d815393c3e manager: Add options to change self test behaviour
Added new `kata-manager` options to control the self-test behaviour. By
default, after installation the manager will run a test to ensure a Kata
Containers container can be created. New options allow:

- The self test to be disabled.
- Only the self test to be run (no installation).

These features allow changes to be made to the installed system before
the self test is run.

Fixes: #3851.

Signed-off-by: James O. D. Hunt <james.o.hunt@intel.com>
2022-03-24 11:59:48 +08:00
James O. D. Hunt
4111e1a3de manager: Add option to enable component debug
Added a `-d` option to `kata-manager` to enable Kata Containers
and containerd debug.

Signed-off-by: James O. D. Hunt <james.o.hunt@intel.com>
2022-03-24 11:59:33 +08:00
James O. D. Hunt
2918be180f manager: Create containerd link
Make the `kata-manager` create a `containerd` link to ensure the
downloaded containerd systemd service file can find the daemon when
using the GitHub packaged version of containerd.

Signed-off-by: James O. D. Hunt <james.o.hunt@intel.com>
2022-03-24 11:59:26 +08:00
Julio Montes
6b31b06832 kernel: fix cve-2022-0847
bump guest kernel version to fix cve-2022-0847 "Dirty Pipe"

fixes #3852

Signed-off-by: Julio Montes <julio.montes@intel.com>
2022-03-24 11:58:43 +08:00
Fabiano Fidêncio
53a9cf7dc4 Merge pull request #3927 from fidencio/stable-2.4/qemu-bump
stable-2.4: Bump QEMU to 6.2 (bringing then SGX support in)
2022-03-23 07:20:35 +01:00
Julio Montes
5589b246d7 doc: update Intel SGX use cases document
The Installation section is no longer needed because the latest
default kata kernel supports Intel SGX.
Add QEMU to the list of supported hypervisors.

fixes #3911

Signed-off-by: Julio Montes <julio.montes@intel.com>
(cherry picked from commit 24b29310b2)
2022-03-22 08:36:04 +01:00
Julio Montes
1da88dca4b tools: update QEMU to 6.2
bring Intel SGX support

Changes that may impact Kata Containers:
Arm:
The 'virt' machine now supports an emulated ITS
The 'virt' machine now supports more than 123 CPUs in TCG emulation mode
The pl031 real-time clock device now supports sending RTC_CHANGE QMP events

PowerPC:
Improved POWER10 support for the 'powernv' machine
Initial support for POWER10 DD2.0 CPU added
Added support for FORM2 PAPR NUMA descriptions in the "pseries" machine
 type

s390x:
Improved storage key emulation (e.g. fixed address handling, lazy
 storage key enablement for TCG, ...)
New gen16 CPU features are now enabled automatically in the latest
 machine type

KVM:
Support for SGX in the virtual machine, using the /dev/sgx_vepc device
 on the host and the "memory-backend-epc" backend in QEMU.
New "hv-apicv" CPU property (aliased to "hv-avic") sets the
 HV_DEPRECATING_AEOI_RECOMMENDED bit in CPUID[0x40000004].EAX.

virtio-mem:
QEMU now fully supports guest memory dumps with virtio-mem.
QEMU now cleanly supports precopy migration, postcopy migration and
 background snapshots with virtio-mem.

fixes #3902

Signed-off-by: Julio Montes <julio.montes@intel.com>
(cherry picked from commit 18d4d7fb1d)
2022-03-22 08:35:45 +01:00
Peng Tao
8cc2231818 Merge pull request #3892 from fengwang666/my_2.4_pr_backport
runtime: Properly handle ESRCH error when signaling container
2022-03-15 10:11:25 +08:00
GabyCT
63c1498f05 Merge pull request #3891 from likebreath/stable-2.4
stable-2.4 | versions: Upgrade to Cloud Hypervisor v22.1
2022-03-14 17:44:09 -06:00
Feng Wang
3e2f9223b0 runtime: Properly handle ESRCH error when signaling container
Currently kata shim v2 doesn't translate the ESRCH error, causing the container
to fail to stop and the shim to leak.

Fixes: #3874

Signed-off-by: Feng Wang <feng.wang@databricks.com>
(cherry picked from commit aa5ae6b17c)
2022-03-14 13:15:54 -07:00
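
An illustrative Go sketch of the ESRCH handling (not the shim's exact code path): when the process is already gone, kill(2) fails with ESRCH, and treating that as success lets the container stop cleanly instead of leaking the shim.

```go
package shim

import (
	"errors"
	"syscall"
)

// signalProcess sends sig to pid, ignoring ESRCH because a process that has
// already exited has effectively been signaled.
func signalProcess(pid int, sig syscall.Signal) error {
	if err := syscall.Kill(pid, sig); err != nil {
		if errors.Is(err, syscall.ESRCH) {
			return nil // process already gone; nothing to do
		}
		return err
	}
	return nil
}
```
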
Bo Chen
4c21cb3eb1 versions: Upgrade to Cloud Hypervisor v22.1
This is a bug fix release. The following issues have been addressed:
1) VFIO ioctl reordering to fix MSI on AMD platforms; 2) Fix virtio-net
control queue.

Details can be found: https://github.com/cloud-hypervisor/cloud-hypervisor/releases/tag/v22.1

Fixes: #3872

Signed-off-by: Bo Chen <chen.bo@intel.com>
(cherry picked from commit 7a18e32fa7)
2022-03-14 12:34:31 -07:00
94 changed files with 3150 additions and 661 deletions

View File

@@ -26,6 +26,7 @@ jobs:
- name: Build ${{ matrix.asset }}
run: |
./tools/packaging/kata-deploy/local-build/kata-deploy-copy-yq-installer.sh
./tools/packaging/kata-deploy/local-build/kata-deploy-binaries-in-docker.sh --build="${KATA_ASSET}"
build_dir=$(readlink -f build)
# store-artifact does not work with symlink
@@ -140,13 +141,10 @@ jobs:
- uses: actions/checkout@v2
- name: generate-and-upload-tarball
run: |
pushd $GITHUB_WORKSPACE/src/agent
cargo vendor >> .cargo/config
popd
tag=$(echo $GITHUB_REF | cut -d/ -f3-)
tarball="kata-containers-$tag-vendor.tar.gz"
pushd $GITHUB_WORKSPACE
tar -cvzf "${tarball}" src/agent/.cargo/config src/agent/vendor
bash -c "tools/packaging/release/generate_vendor.sh ${tarball}"
GITHUB_TOKEN=${{ secrets.GIT_UPLOAD_TOKEN }} hub release edit -m "" -a "${tarball}" "${tag}"
popd

1
.gitignore vendored
View File

@@ -9,4 +9,5 @@ src/agent/src/version.rs
src/agent/kata-agent.service
src/agent/protocols/src/*.rs
!src/agent/protocols/src/lib.rs
build

View File

@@ -1 +1 @@
2.4.0-rc0
2.4.1

View File

@@ -51,6 +51,7 @@ The `kata-monitor` management agent should be started on each node where the Kat
> **Note**: a *node* running Kata containers will be either a single host system or a worker node belonging to a K8s cluster capable of running Kata pods.
- Aggregate sandbox metrics running on the node, adding the `sandbox_id` label to them.
- Attach the additional `cri_uid`, `cri_name` and `cri_namespace` labels to the sandbox metrics, tracking the `uid`, `name` and `namespace` Kubernetes pod metadata.
- Expose a new Prometheus target, allowing all node metrics coming from the Kata shim to be collected by Prometheus indirectly. This simplifies the targets count in Prometheus and avoids exposing shim's metrics by `ip:port`.
Only one `kata-monitor` process runs in each node.

View File

@@ -104,26 +104,69 @@ $ sudo kubeadm init --ignore-preflight-errors=all --cri-socket /run/containerd/c
$ export KUBECONFIG=/etc/kubernetes/admin.conf
```
You can force Kubelet to use Kata Containers by adding some `untrusted`
annotation to your pod configuration. In our case, this ensures Kata
Containers is the selected runtime to run the described workload.
### Allow pods to run in the master node
`nginx-untrusted.yaml`
```yaml
apiVersion: v1
kind: Pod
By default, the cluster will not schedule pods in the master node. To enable master node scheduling:
```bash
$ sudo -E kubectl taint nodes --all node-role.kubernetes.io/master-
```
### Create runtime class for Kata Containers
Users can use [`RuntimeClass`](https://kubernetes.io/docs/concepts/containers/runtime-class/#runtime-class) to specify a different runtime for Pods.
```bash
$ cat > runtime.yaml <<EOF
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
name: nginx-untrusted
annotations:
io.kubernetes.cri.untrusted-workload: "true"
spec:
containers:
name: kata
handler: kata
EOF
$ sudo -E kubectl apply -f runtime.yaml
```
### Run pod in Kata Containers
If a pod has the `runtimeClassName` set to `kata`, the CRI plugin runs the pod with the
[Kata Containers runtime](../../src/runtime/README.md).
- Create an pod configuration that using Kata Containers runtime
```bash
$ cat << EOF | tee nginx-kata.yaml
apiVersion: v1
kind: Pod
metadata:
name: nginx-kata
spec:
runtimeClassName: kata
containers:
- name: nginx
image: nginx
```
Next, you run your pod:
```
$ sudo -E kubectl apply -f nginx-untrusted.yaml
```
EOF
```
- Create the pod
```bash
$ sudo -E kubectl apply -f nginx-kata.yaml
```
- Check pod is running
```bash
$ sudo -E kubectl get pods
```
- Check hypervisor is running
```bash
$ ps aux | grep qemu
```
### Delete created pod
```bash
$ sudo -E kubectl delete -f nginx-kata.yaml
```

View File

@@ -21,20 +21,7 @@ CONFIG_X86_SGX_KVM=y
* [Intel SGX Kubernetes device plugin](https://github.com/intel/intel-device-plugins-for-kubernetes/tree/main/cmd/sgx_plugin#deploying-with-pre-built-images)
> Note: Kata Containers supports creating VM sandboxes with Intel® SGX enabled
> using [cloud-hypervisor](https://github.com/cloud-hypervisor/cloud-hypervisor/) VMM only. QEMU support is waiting to get the
> Intel SGX enabled QEMU upstream release.
## Installation
### Kata Containers Guest Kernel
Follow the instructions to [setup](../../tools/packaging/kernel/README.md#setup-kernel-source-code) and [build](../../tools/packaging/kernel/README.md#build-the-kernel) the experimental guest kernel. Then, install as:
```sh
$ sudo cp kata-linux-experimental-*/vmlinux /opt/kata/share/kata-containers/vmlinux.sgx
$ sudo sed -i 's|vmlinux.container|vmlinux.sgx|g' \
/opt/kata/share/defaults/kata-containers/configuration-clh.toml
```
> using [cloud-hypervisor](https://github.com/cloud-hypervisor/cloud-hypervisor/) and [QEMU](https://www.qemu.org/) VMMs only.
### Kata Containers Configuration
@@ -48,6 +35,8 @@ to the `sandbox` are: `["io.katacontainers.*", "sgx.intel.com/epc"]`.
With the following sample job deployed using `kubectl apply -f`:
> Note: Change the `runtimeClassName` option accordingly, only `kata-clh` and `kata-qemu` support Intel® SGX.
```yaml
apiVersion: batch/v1
kind: Job

View File

@@ -391,7 +391,7 @@ fn set_memory_resources(cg: &cgroups::Cgroup, memory: &LinuxMemory, update: bool
if let Some(swappiness) = memory.swappiness {
if (0..=100).contains(&swappiness) {
mem_controller.set_swappiness(swappiness as u64)?;
mem_controller.set_swappiness(swappiness)?;
} else {
return Err(anyhow!(
"invalid value:{}. valid memory swappiness range is 0-100",
@@ -590,9 +590,9 @@ fn get_cpuacct_stats(cg: &cgroups::Cgroup) -> SingularPtrField<CpuUsage> {
let h = lines_to_map(&cpuacct.stat);
let usage_in_usermode =
(((*h.get("user").unwrap() * NANO_PER_SECOND) as f64) / *CLOCK_TICKS) as u64;
(((*h.get("user").unwrap_or(&0) * NANO_PER_SECOND) as f64) / *CLOCK_TICKS) as u64;
let usage_in_kernelmode =
(((*h.get("system").unwrap() * NANO_PER_SECOND) as f64) / *CLOCK_TICKS) as u64;
(((*h.get("system").unwrap_or(&0) * NANO_PER_SECOND) as f64) / *CLOCK_TICKS) as u64;
let total_usage = cpuacct.usage;
@@ -623,9 +623,9 @@ fn get_cpuacct_stats(cg: &cgroups::Cgroup) -> SingularPtrField<CpuUsage> {
let cpu_controller: &CpuController = get_controller_or_return_singular_none!(cg);
let stat = cpu_controller.cpu().stat;
let h = lines_to_map(&stat);
let usage_in_usermode = *h.get("user_usec").unwrap();
let usage_in_kernelmode = *h.get("system_usec").unwrap();
let total_usage = *h.get("usage_usec").unwrap();
let usage_in_usermode = *h.get("user_usec").unwrap_or(&0);
let usage_in_kernelmode = *h.get("system_usec").unwrap_or(&0);
let total_usage = *h.get("usage_usec").unwrap_or(&0);
let percpu_usage = vec![];
SingularPtrField::some(CpuUsage {

View File

@@ -265,7 +265,7 @@ pub fn resources_grpc_to_oci(res: &grpc::LinuxResources) -> oci::LinuxResources
swap: Some(mem.Swap),
kernel: Some(mem.Kernel),
kernel_tcp: Some(mem.KernelTCP),
swappiness: Some(mem.Swappiness as i64),
swappiness: Some(mem.Swappiness),
disable_oom_killer: Some(mem.DisableOOMKiller),
})
} else {

View File

@@ -8,8 +8,8 @@ use std::fs::File;
use std::os::unix::io::RawFd;
use tokio::sync::mpsc::Sender;
use nix::errno::Errno;
use nix::fcntl::{fcntl, FcntlArg, OFlag};
use nix::sys::signal::{self, Signal};
use nix::sys::wait::{self, WaitStatus};
use nix::unistd::{self, Pid};
use nix::Result;
@@ -80,7 +80,7 @@ pub struct Process {
pub trait ProcessOperations {
fn pid(&self) -> Pid;
fn wait(&self) -> Result<WaitStatus>;
fn signal(&self, sig: Signal) -> Result<()>;
fn signal(&self, sig: libc::c_int) -> Result<()>;
}
impl ProcessOperations for Process {
@@ -92,8 +92,10 @@ impl ProcessOperations for Process {
wait::waitpid(Some(self.pid()), None)
}
fn signal(&self, sig: Signal) -> Result<()> {
signal::kill(self.pid(), Some(sig))
fn signal(&self, sig: libc::c_int) -> Result<()> {
let res = unsafe { libc::kill(self.pid().into(), sig) };
Errno::result(res).map(drop)
}
}
@@ -281,6 +283,6 @@ mod tests {
// signal to every process in the process
// group of the calling process.
process.pid = 0;
assert!(process.signal(Signal::SIGCONT).is_ok());
assert!(process.signal(libc::SIGCONT).is_ok());
}
}

View File

@@ -16,7 +16,7 @@ use std::sync::Arc;
use tokio::sync::Mutex;
use nix::mount::MsFlags;
use nix::unistd::Gid;
use nix::unistd::{Gid, Uid};
use regex::Regex;
@@ -29,6 +29,7 @@ use crate::device::{
use crate::linux_abi::*;
use crate::pci;
use crate::protocols::agent::Storage;
use crate::protocols::types::FSGroupChangePolicy;
use crate::Sandbox;
#[cfg(target_arch = "s390x")]
use crate::{ccw, device::get_virtio_blk_ccw_device_name};
@@ -43,6 +44,11 @@ pub const MOUNT_GUEST_TAG: &str = "kataShared";
// Allocating an FSGroup that owns the pod's volumes
const FS_GID: &str = "fsgid";
const RW_MASK: u32 = 0o660;
const RO_MASK: u32 = 0o440;
const EXEC_MASK: u32 = 0o110;
const MODE_SETGID: u32 = 0o2000;
#[rustfmt::skip]
lazy_static! {
pub static ref FLAGS: HashMap<&'static str, (bool, MsFlags)> = {
@@ -222,7 +228,7 @@ async fn ephemeral_storage_handler(
let meta = fs::metadata(&storage.mount_point)?;
let mut permission = meta.permissions();
let o_mode = meta.mode() | 0o2000;
let o_mode = meta.mode() | MODE_SETGID;
permission.set_mode(o_mode);
fs::set_permissions(&storage.mount_point, permission)?;
}
@@ -272,7 +278,7 @@ async fn local_storage_handler(
if need_set_fsgid {
// set SetGid mode mask.
o_mode |= 0o2000;
o_mode |= MODE_SETGID;
}
permission.set_mode(o_mode);
@@ -321,26 +327,39 @@ fn allocate_hugepages(logger: &Logger, options: &[String]) -> Result<()> {
// sysfs entry is always of the form hugepages-${pagesize}kB
// Ref: https://www.kernel.org/doc/Documentation/vm/hugetlbpage.txt
let path = Path::new(SYS_FS_HUGEPAGES_PREFIX).join(format!("hugepages-{}kB", pagesize / 1024));
if !path.exists() {
fs::create_dir_all(&path).context("create hugepages-size directory")?;
}
let path = Path::new(SYS_FS_HUGEPAGES_PREFIX)
.join(format!("hugepages-{}kB", pagesize / 1024))
.join("nr_hugepages");
// write numpages to nr_hugepages file.
let path = path.join("nr_hugepages");
let numpages = format!("{}", size / pagesize);
info!(logger, "write {} pages to {:?}", &numpages, &path);
let mut file = OpenOptions::new()
.write(true)
.create(true)
.open(&path)
.context(format!("open nr_hugepages directory {:?}", &path))?;
file.write_all(numpages.as_bytes())
.context(format!("write nr_hugepages failed: {:?}", &path))?;
// Even if the write succeeds, the kernel isn't guaranteed to be
// able to allocate all the pages we requested. Verify that it
// did.
let verify = fs::read_to_string(&path).context(format!("reading {:?}", &path))?;
let allocated = verify
.trim_end()
.parse::<u64>()
.map_err(|_| anyhow!("Unexpected text {:?} in {:?}", &verify, &path))?;
if allocated != size / pagesize {
return Err(anyhow!(
"Only allocated {} of {} hugepages of size {}",
allocated,
numpages,
pagesize
));
}
Ok(())
}
@@ -476,7 +495,9 @@ fn common_storage_handler(logger: &Logger, storage: &Storage) -> Result<String>
// Mount the storage device.
let mount_point = storage.mount_point.to_string();
mount_storage(logger, storage).and(Ok(mount_point))
mount_storage(logger, storage)?;
set_ownership(logger, storage)?;
Ok(mount_point)
}
// nvdimm_storage_handler handles the storage for NVDIMM driver.
@@ -560,6 +581,91 @@ fn mount_storage(logger: &Logger, storage: &Storage) -> Result<()> {
)
}
#[instrument]
pub fn set_ownership(logger: &Logger, storage: &Storage) -> Result<()> {
let logger = logger.new(o!("subsystem" => "mount", "fn" => "set_ownership"));
// If fsGroup is not set, skip performing ownership change
if storage.fs_group.is_none() {
return Ok(());
}
let fs_group = storage.get_fs_group();
let mut read_only = false;
let opts_vec: Vec<String> = storage.options.to_vec();
if opts_vec.contains(&String::from("ro")) {
read_only = true;
}
let mount_path = Path::new(&storage.mount_point);
let metadata = mount_path.metadata().map_err(|err| {
error!(logger, "failed to obtain metadata for mount path";
"mount-path" => mount_path.to_str(),
"error" => err.to_string(),
);
err
})?;
if fs_group.group_change_policy == FSGroupChangePolicy::OnRootMismatch
&& metadata.gid() == fs_group.group_id
{
let mut mask = if read_only { RO_MASK } else { RW_MASK };
mask |= EXEC_MASK;
// With fsGroup change policy to OnRootMismatch, if the current
// gid of the mount path root directory matches the desired gid
// and the current permission of mount path root directory is correct,
// then ownership change will be skipped.
let current_mode = metadata.permissions().mode();
if (mask & current_mode == mask) && (current_mode & MODE_SETGID != 0) {
info!(logger, "skipping ownership change for volume";
"mount-path" => mount_path.to_str(),
"fs-group" => fs_group.group_id.to_string(),
);
return Ok(());
}
}
info!(logger, "performing recursive ownership change";
"mount-path" => mount_path.to_str(),
"fs-group" => fs_group.group_id.to_string(),
);
recursive_ownership_change(
mount_path,
None,
Some(Gid::from_raw(fs_group.group_id)),
read_only,
)
}
#[instrument]
pub fn recursive_ownership_change(
path: &Path,
uid: Option<Uid>,
gid: Option<Gid>,
read_only: bool,
) -> Result<()> {
let mut mask = if read_only { RO_MASK } else { RW_MASK };
if path.is_dir() {
for entry in fs::read_dir(&path)? {
recursive_ownership_change(entry?.path().as_path(), uid, gid, read_only)?;
}
mask |= EXEC_MASK;
mask |= MODE_SETGID;
}
nix::unistd::chown(path, uid, gid)?;
if gid.is_some() {
let metadata = path.metadata()?;
let mut permission = metadata.permissions();
let target_mode = metadata.mode() | mask;
permission.set_mode(target_mode);
fs::set_permissions(path, permission)?;
}
Ok(())
}
/// Looks for `mount_point` entry in the /proc/mounts.
#[instrument]
pub fn is_mounted(mount_point: &str) -> Result<bool> {
@@ -912,6 +1018,8 @@ fn parse_options(option_list: Vec<String>) -> HashMap<String, String> {
mod tests {
use super::*;
use crate::{skip_if_not_root, skip_loop_if_not_root, skip_loop_if_root};
use protobuf::RepeatedField;
use protocols::agent::FSGroup;
use std::fs::File;
use std::fs::OpenOptions;
use std::io::Write;
@@ -1539,4 +1647,212 @@ mod tests {
}
}
}
#[test]
fn test_set_ownership() {
skip_if_not_root!();
let logger = slog::Logger::root(slog::Discard, o!());
#[derive(Debug)]
struct TestData<'a> {
mount_path: &'a str,
fs_group: Option<FSGroup>,
read_only: bool,
expected_group_id: u32,
expected_permission: u32,
}
let tests = &[
TestData {
mount_path: "foo",
fs_group: None,
read_only: false,
expected_group_id: 0,
expected_permission: 0,
},
TestData {
mount_path: "rw_mount",
fs_group: Some(FSGroup {
group_id: 3000,
group_change_policy: FSGroupChangePolicy::Always,
unknown_fields: Default::default(),
cached_size: Default::default(),
}),
read_only: false,
expected_group_id: 3000,
expected_permission: RW_MASK | EXEC_MASK | MODE_SETGID,
},
TestData {
mount_path: "ro_mount",
fs_group: Some(FSGroup {
group_id: 3000,
group_change_policy: FSGroupChangePolicy::OnRootMismatch,
unknown_fields: Default::default(),
cached_size: Default::default(),
}),
read_only: true,
expected_group_id: 3000,
expected_permission: RO_MASK | EXEC_MASK | MODE_SETGID,
},
];
let tempdir = tempdir().expect("failed to create tmpdir");
for (i, d) in tests.iter().enumerate() {
let msg = format!("test[{}]: {:?}", i, d);
let mount_dir = tempdir.path().join(d.mount_path);
fs::create_dir(&mount_dir)
.unwrap_or_else(|_| panic!("{}: failed to create root directory", msg));
let directory_mode = mount_dir.as_path().metadata().unwrap().permissions().mode();
let mut storage_data = Storage::new();
if d.read_only {
storage_data.set_options(RepeatedField::from_slice(&[
"foo".to_string(),
"ro".to_string(),
]));
}
if let Some(fs_group) = d.fs_group.clone() {
storage_data.set_fs_group(fs_group);
}
storage_data.mount_point = mount_dir.clone().into_os_string().into_string().unwrap();
let result = set_ownership(&logger, &storage_data);
assert!(result.is_ok());
assert_eq!(
mount_dir.as_path().metadata().unwrap().gid(),
d.expected_group_id
);
assert_eq!(
mount_dir.as_path().metadata().unwrap().permissions().mode(),
(directory_mode | d.expected_permission)
);
}
}
#[test]
fn test_recursive_ownership_change() {
skip_if_not_root!();
const COUNT: usize = 5;
#[derive(Debug)]
struct TestData<'a> {
// Directory where the recursive ownership change should be performed on
path: &'a str,
// User ID for ownership change
uid: u32,
// Group ID for ownership change
gid: u32,
// Set when the permission should be read-only
read_only: bool,
// The expected permission of all directories after ownership change
expected_permission_directory: u32,
// The expected permission of all files after ownership change
expected_permission_file: u32,
}
let tests = &[
TestData {
path: "no_gid_change",
uid: 0,
gid: 0,
read_only: false,
expected_permission_directory: 0,
expected_permission_file: 0,
},
TestData {
path: "rw_gid_change",
uid: 0,
gid: 3000,
read_only: false,
expected_permission_directory: RW_MASK | EXEC_MASK | MODE_SETGID,
expected_permission_file: RW_MASK,
},
TestData {
path: "ro_gid_change",
uid: 0,
gid: 3000,
read_only: true,
expected_permission_directory: RO_MASK | EXEC_MASK | MODE_SETGID,
expected_permission_file: RO_MASK,
},
];
let tempdir = tempdir().expect("failed to create tmpdir");
for (i, d) in tests.iter().enumerate() {
let msg = format!("test[{}]: {:?}", i, d);
let mount_dir = tempdir.path().join(d.path);
fs::create_dir(&mount_dir)
.unwrap_or_else(|_| panic!("{}: failed to create root directory", msg));
let directory_mode = mount_dir.as_path().metadata().unwrap().permissions().mode();
let mut file_mode: u32 = 0;
// create testing directories and files
for n in 1..COUNT {
let nest_dir = mount_dir.join(format!("nested{}", n));
fs::create_dir(&nest_dir)
.unwrap_or_else(|_| panic!("{}: failed to create nest directory", msg));
for f in 1..COUNT {
let filename = nest_dir.join(format!("file{}", f));
File::create(&filename)
.unwrap_or_else(|_| panic!("{}: failed to create file", msg));
file_mode = filename.as_path().metadata().unwrap().permissions().mode();
}
}
let uid = if d.uid > 0 {
Some(Uid::from_raw(d.uid))
} else {
None
};
let gid = if d.gid > 0 {
Some(Gid::from_raw(d.gid))
} else {
None
};
let result = recursive_ownership_change(&mount_dir, uid, gid, d.read_only);
assert!(result.is_ok());
assert_eq!(mount_dir.as_path().metadata().unwrap().gid(), d.gid);
assert_eq!(
mount_dir.as_path().metadata().unwrap().permissions().mode(),
(directory_mode | d.expected_permission_directory)
);
for n in 1..COUNT {
let nest_dir = mount_dir.join(format!("nested{}", n));
for f in 1..COUNT {
let filename = nest_dir.join(format!("file{}", f));
let file = Path::new(&filename);
assert_eq!(file.metadata().unwrap().gid(), d.gid);
assert_eq!(
file.metadata().unwrap().permissions().mode(),
(file_mode | d.expected_permission_file)
);
}
let dir = Path::new(&nest_dir);
assert_eq!(dir.metadata().unwrap().gid(), d.gid);
assert_eq!(
dir.metadata().unwrap().permissions().mode(),
(directory_mode | d.expected_permission_directory)
);
}
}
}
}

View File

@@ -19,6 +19,7 @@ use ttrpc::{
};
use anyhow::{anyhow, Context, Result};
use cgroups::freezer::FreezerState;
use oci::{LinuxNamespace, Root, Spec};
use protobuf::{Message, RepeatedField, SingularPtrField};
use protocols::agent::{
@@ -39,9 +40,9 @@ use rustjail::specconv::CreateOpts;
use nix::errno::Errno;
use nix::mount::MsFlags;
use nix::sys::signal::Signal;
use nix::sys::stat;
use nix::unistd::{self, Pid};
use rustjail::cgroups::Manager;
use rustjail::process::ProcessOperations;
use sysinfo::{DiskExt, System, SystemExt};
@@ -69,7 +70,6 @@ use tracing_opentelemetry::OpenTelemetrySpanExt;
use tracing::instrument;
use libc::{self, c_char, c_ushort, pid_t, winsize, TIOCSWINSZ};
use std::convert::TryFrom;
use std::fs;
use std::os::unix::fs::MetadataExt;
use std::os::unix::prelude::PermissionsExt;
@@ -389,7 +389,6 @@ impl AgentService {
let cid = req.container_id.clone();
let eid = req.exec_id.clone();
let s = self.sandbox.clone();
let mut sandbox = s.lock().await;
info!(
sl!(),
@@ -398,27 +397,93 @@ impl AgentService {
"exec-id" => eid.clone(),
);
let p = sandbox.find_container_process(cid.as_str(), eid.as_str())?;
let mut signal = Signal::try_from(req.signal as i32).map_err(|e| {
anyhow!(e).context(format!(
"failed to convert {:?} to signal (container-id: {}, exec-id: {})",
req.signal, cid, eid
))
})?;
// For container initProcess, if it hasn't installed handler for "SIGTERM" signal,
// it will ignore the "SIGTERM" signal sent to it, thus send it "SIGKILL" signal
// instead of "SIGTERM" to terminate it.
if p.init && signal == Signal::SIGTERM && !is_signal_handled(p.pid, req.signal) {
signal = Signal::SIGKILL;
let mut sig: libc::c_int = req.signal as libc::c_int;
{
let mut sandbox = s.lock().await;
let p = sandbox.find_container_process(cid.as_str(), eid.as_str())?;
// For container initProcess, if it hasn't installed handler for "SIGTERM" signal,
// it will ignore the "SIGTERM" signal sent to it, thus send it "SIGKILL" signal
// instead of "SIGTERM" to terminate it.
if p.init && sig == libc::SIGTERM && !is_signal_handled(p.pid, sig as u32) {
sig = libc::SIGKILL;
}
p.signal(sig)?;
}
p.signal(signal)?;
if eid.is_empty() {
// eid is empty, signal all the remaining processes in the container cgroup
info!(
sl!(),
"signal all the remaining processes";
"container-id" => cid.clone(),
"exec-id" => eid.clone(),
);
if let Err(err) = self.freeze_cgroup(&cid, FreezerState::Frozen).await {
warn!(
sl!(),
"freeze cgroup failed";
"container-id" => cid.clone(),
"exec-id" => eid.clone(),
"error" => format!("{:?}", err),
);
}
let pids = self.get_pids(&cid).await?;
for pid in pids.iter() {
let res = unsafe { libc::kill(*pid, sig) };
if let Err(err) = Errno::result(res).map(drop) {
warn!(
sl!(),
"signal failed";
"container-id" => cid.clone(),
"exec-id" => eid.clone(),
"pid" => pid,
"error" => format!("{:?}", err),
);
}
}
if let Err(err) = self.freeze_cgroup(&cid, FreezerState::Thawed).await {
warn!(
sl!(),
"unfreeze cgroup failed";
"container-id" => cid.clone(),
"exec-id" => eid.clone(),
"error" => format!("{:?}", err),
);
}
}
Ok(())
}
async fn freeze_cgroup(&self, cid: &str, state: FreezerState) -> Result<()> {
let s = self.sandbox.clone();
let mut sandbox = s.lock().await;
let ctr = sandbox
.get_container(cid)
.ok_or_else(|| anyhow!("Invalid container id {}", cid))?;
let cm = ctr
.cgroup_manager
.as_ref()
.ok_or_else(|| anyhow!("cgroup manager not exist"))?;
cm.freeze(state)?;
Ok(())
}
async fn get_pids(&self, cid: &str) -> Result<Vec<i32>> {
let s = self.sandbox.clone();
let mut sandbox = s.lock().await;
let ctr = sandbox
.get_container(cid)
.ok_or_else(|| anyhow!("Invalid container id {}", cid))?;
let cm = ctr
.cgroup_manager
.as_ref()
.ok_or_else(|| anyhow!("cgroup manager not exist"))?;
let pids = cm.get_pids()?;
Ok(pids)
}
#[instrument]
async fn do_wait_process(
&self,

View File

@@ -149,7 +149,12 @@ impl Sandbox {
pub fn remove_sandbox_storage(&self, path: &str) -> Result<()> {
let mounts = vec![path.to_string()];
remove_mounts(&mounts)?;
fs::remove_dir_all(path).context(format!("failed to remove dir {:?}", path))?;
// "remove_dir" will fail if the mount point is backed by a read-only filesystem.
// This is the case with the device mapper snapshotter, where we mount the block device directly
// at the underlying sandbox path which was provided from the base RO kataShared path from the host.
if let Err(err) = fs::remove_dir(path) {
warn!(self.logger, "failed to remove dir {}, {:?}", path, err);
}
Ok(())
}
@@ -562,19 +567,8 @@ mod tests {
.remove_sandbox_storage(invalid_dir.to_str().unwrap())
.is_err());
// Now, create a double mount as this guarantees the directory cannot
// be deleted after the first umount.
for _i in 0..2 {
assert!(bind_mount(srcdir_path, destdir_path, &logger).is_ok());
}
assert!(bind_mount(srcdir_path, destdir_path, &logger).is_ok());
assert!(
s.remove_sandbox_storage(destdir_path).is_err(),
"Expect fail as deletion cannot happen due to the second mount."
);
// This time it should work as the previous two calls have undone the double
// mount.
assert!(s.remove_sandbox_storage(destdir_path).is_ok());
}

View File

@@ -6,6 +6,7 @@
#![allow(unknown_lints)]
use std::collections::HashMap;
use std::os::unix::fs::MetadataExt;
use std::path::{Path, PathBuf};
use std::sync::Arc;
use std::time::SystemTime;
@@ -13,6 +14,7 @@ use std::time::SystemTime;
use anyhow::{ensure, Context, Result};
use async_recursion::async_recursion;
use nix::mount::{umount, MsFlags};
use nix::unistd::{Gid, Uid};
use slog::{debug, error, info, warn, Logger};
use thiserror::Error;
use tokio::fs;
@@ -80,7 +82,8 @@ impl Drop for Storage {
}
async fn copy(from: impl AsRef<Path>, to: impl AsRef<Path>) -> Result<()> {
if fs::symlink_metadata(&from).await?.file_type().is_symlink() {
let metadata = fs::symlink_metadata(&from).await?;
if metadata.file_type().is_symlink() {
// if source is a symlink, create new symlink with same link source. If
// the symlink exists, remove and create new one:
if fs::symlink_metadata(&to).await.is_ok() {
@@ -88,8 +91,15 @@ async fn copy(from: impl AsRef<Path>, to: impl AsRef<Path>) -> Result<()> {
}
fs::symlink(fs::read_link(&from).await?, &to).await?;
} else {
fs::copy(from, to).await?;
fs::copy(&from, &to).await?;
}
// preserve the source uid and gid to the destination.
nix::unistd::chown(
to.as_ref(),
Some(Uid::from_raw(metadata.uid())),
Some(Gid::from_raw(metadata.gid())),
)?;
Ok(())
}
@@ -106,14 +116,29 @@ impl Storage {
async fn update_target(&self, logger: &Logger, source_path: impl AsRef<Path>) -> Result<()> {
let source_file_path = source_path.as_ref();
let metadata = source_file_path.symlink_metadata()?;
// if we are creating a directory: just create it, nothing more to do
if source_file_path.symlink_metadata()?.file_type().is_dir() {
if metadata.file_type().is_dir() {
let dest_file_path = self.make_target_path(&source_file_path)?;
fs::create_dir_all(&dest_file_path)
.await
.with_context(|| format!("Unable to mkdir all for {}", dest_file_path.display()))?;
// set the directory permissions to match the source directory permissions
fs::set_permissions(&dest_file_path, metadata.permissions())
.await
.with_context(|| {
format!("Unable to set permissions for {}", dest_file_path.display())
})?;
// preserve the source directory uid and gid to the destination.
nix::unistd::chown(
&dest_file_path,
Some(Uid::from_raw(metadata.uid())),
Some(Gid::from_raw(metadata.gid())),
)
.with_context(|| format!("Unable to set ownership for {}", dest_file_path.display()))?;
return Ok(());
}
@@ -504,6 +529,7 @@ mod tests {
use super::*;
use crate::mount::is_mounted;
use crate::skip_if_not_root;
use nix::unistd::{Gid, Uid};
use std::fs;
use std::thread;
@@ -895,20 +921,28 @@ mod tests {
#[tokio::test]
async fn test_copy() {
skip_if_not_root!();
// prepare tmp src/destination
let source_dir = tempfile::tempdir().unwrap();
let dest_dir = tempfile::tempdir().unwrap();
let uid = Uid::from_raw(10);
let gid = Gid::from_raw(200);
// verify copy of a regular file
let src_file = source_dir.path().join("file.txt");
let dst_file = dest_dir.path().join("file.txt");
fs::write(&src_file, "foo").unwrap();
nix::unistd::chown(&src_file, Some(uid), Some(gid)).unwrap();
copy(&src_file, &dst_file).await.unwrap();
// verify destination:
assert!(!fs::symlink_metadata(dst_file)
assert!(!fs::symlink_metadata(&dst_file)
.unwrap()
.file_type()
.is_symlink());
assert_eq!(fs::metadata(&dst_file).unwrap().uid(), uid.as_raw());
assert_eq!(fs::metadata(&dst_file).unwrap().gid(), gid.as_raw());
// verify copy of a symlink
let src_symlink_file = source_dir.path().join("symlink_file.txt");
@@ -916,7 +950,7 @@ mod tests {
tokio::fs::symlink(&src_file, &src_symlink_file)
.await
.unwrap();
copy(src_symlink_file, &dst_symlink_file).await.unwrap();
copy(&src_symlink_file, &dst_symlink_file).await.unwrap();
// verify destination:
assert!(fs::symlink_metadata(&dst_symlink_file)
.unwrap()
@@ -924,6 +958,8 @@ mod tests {
.is_symlink());
assert_eq!(fs::read_link(&dst_symlink_file).unwrap(), src_file);
assert_eq!(fs::read_to_string(&dst_symlink_file).unwrap(), "foo");
assert_ne!(fs::metadata(&dst_symlink_file).unwrap().uid(), uid.as_raw());
assert_ne!(fs::metadata(&dst_symlink_file).unwrap().gid(), gid.as_raw());
}
#[tokio::test]
@@ -1069,6 +1105,8 @@ mod tests {
#[tokio::test]
async fn watch_directory() {
skip_if_not_root!();
// Prepare source directory:
// ./tmp/1.txt
// ./tmp/A/B/2.txt
@@ -1079,7 +1117,9 @@ mod tests {
// A/C is an empty directory
let empty_dir = "A/C";
fs::create_dir_all(source_dir.path().join(empty_dir)).unwrap();
let path = source_dir.path().join(empty_dir);
fs::create_dir_all(&path).unwrap();
nix::unistd::chown(&path, Some(Uid::from_raw(10)), Some(Gid::from_raw(200))).unwrap();
// delay 20 ms between writes to files in order to ensure filesystem timestamps are unique
thread::sleep(Duration::from_millis(20));
@@ -1123,7 +1163,9 @@ mod tests {
// create another empty directory A/C/D
let empty_dir = "A/C/D";
fs::create_dir_all(source_dir.path().join(empty_dir)).unwrap();
let path = source_dir.path().join(empty_dir);
fs::create_dir_all(&path).unwrap();
nix::unistd::chown(&path, Some(Uid::from_raw(10)), Some(Gid::from_raw(200))).unwrap();
assert_eq!(entry.scan(&logger).await.unwrap(), 1);
assert!(dest_dir.path().join(empty_dir).exists());
}

View File

@@ -381,7 +381,7 @@ pub struct LinuxMemory {
#[serde(default, skip_serializing_if = "Option::is_none", rename = "kernelTCP")]
pub kernel_tcp: Option<i64>,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub swappiness: Option<i64>,
pub swappiness: Option<u64>,
#[serde(
default,
skip_serializing_if = "Option::is_none",

View File

@@ -399,6 +399,17 @@ message SetGuestDateTimeRequest {
int64 Usec = 2;
}
// FSGroup consists of the group id and group ownership change policy
// that a volume should have its ownership changed to.
message FSGroup {
// GroupID is the ID that the group ownership of the
// files in the mounted volume will need to be changed to.
uint32 group_id = 2;
// GroupChangePolicy specifies the policy for applying group id
// ownership change on a mounted volume.
types.FSGroupChangePolicy group_change_policy = 3;
}
// Storage represents both the rootfs of the container, and any volume that
// could have been defined through the Mount list of the OCI specification.
message Storage {
@@ -422,11 +433,14 @@ message Storage {
// device, "9p" for shared filesystem, or "tmpfs" for shared /dev/shm.
string fstype = 4;
// Options describes the additional options that might be needed to
// mount properly the storage filesytem.
// mount properly the storage filesystem.
repeated string options = 5;
// MountPoint refers to the path where the storage should be mounted
// inside the VM.
string mount_point = 6;
// FSGroup consists of the group ID and group ownership change policy
// that the mounted volume must have its group ID changed to when specified.
FSGroup fs_group = 7;
}
// Device represents only the devices that could have been defined through the

View File

@@ -16,6 +16,15 @@ enum IPFamily {
v6 = 1;
}
// FSGroupChangePolicy defines the policy for applying group id ownership change on a mounted volume.
enum FSGroupChangePolicy {
// Always indicates that the volume ownership will always be changed.
Always = 0;
// OnRootMismatch indicates that the volume ownership will be changed only
// when the ownership of the root directory does not match with the expected group id for the volume.
OnRootMismatch = 1;
}
message IPAddress {
IPFamily family = 1;
string address = 2;

View File

@@ -589,12 +589,10 @@ $(GENERATED_FILES): %: %.in $(MAKEFILE_LIST) VERSION .git-commit
generate-config: $(CONFIGS)
test: install-hook go-test
test: hook go-test
install-hook:
hook:
make -C virtcontainers hook
echo "installing mock hook"
sudo -E make -C virtcontainers install
go-test: $(GENERATED_FILES)
go clean -testcache

View File

@@ -10,6 +10,7 @@ This repository contains the following components:
|-|-|
| `containerd-shim-kata-v2` | The [shimv2 runtime](../../docs/design/architecture/README.md#runtime) |
| `kata-runtime` | [utility program](../../docs/design/architecture/README.md#utility-program) |
| `kata-monitor` | [metrics collector daemon](cmd/kata-monitor/README.md) |
For details of the other Kata Containers repositories, see the
[repository summary](https://github.com/kata-containers/kata-containers).

View File

@@ -0,0 +1,68 @@
# Kata monitor
## Overview
`kata-monitor` is a daemon able to collect and expose metrics related to all the Kata Containers workloads running on the same host.
Once started, it detects all the running Kata Containers runtimes (`containerd-shim-kata-v2`) on the system and exposes a few HTTP endpoints to allow retrieval of the available data.
The main endpoint is `/metrics`, which aggregates metrics from all the Kata workloads.
Available metrics include:
* Kata runtime metrics
* Kata agent metrics
* Kata guest OS metrics
* Hypervisor metrics
* Firecracker metrics
* Kata monitor metrics
All the provided metrics are in Prometheus format. While `kata-monitor` can run as a standalone daemon on any host running Kata Containers workloads, and can also be used to retrieve profiling data from the running Kata runtimes, its main intended deployment is as a DaemonSet on a Kubernetes cluster, where Prometheus scrapes the metrics from the `kata-monitor` endpoints.
For more information on the Kata Containers metrics architecture and a detailed list of the available metrics provided by Kata monitor, see the [Kata 2.0 Metrics Design](../../../../docs/design/kata-2-0-metrics.md) document.
## Usage
Each `kata-monitor` instance detects and monitors the Kata Containers workloads running on the same node.
### Kata monitor arguments
The `kata-monitor` binary accepts the following arguments:
* `--listen-address` _IP:PORT_
* `--runtime-endpoint` _PATH_TO_THE_CONTAINER_MANAGER_CRI_INTERFACE_
* `--log-level` _[ trace | debug | info | warn | error | fatal | panic ]_
The **listen-address** specifies the IP and TCP port where the kata-monitor HTTP endpoints will be exposed. It defaults to `127.0.0.1:8090`.
The **runtime-endpoint** is the CRI socket of a CRI-compliant container manager: it is used to retrieve the CRI `PodSandboxMetadata` (`uid`, `name` and `namespace`), which is attached to the Kata metrics through the labels `cri_uid`, `cri_name` and `cri_namespace`. It defaults to the containerd socket: `/run/containerd/containerd.sock`.
The **log-level** sets how verbose the logs should be. The default is `info`.
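As an illustrative invocation (a sketch only: the addresses and verbosity below are arbitrary choices, adjust them to your environment), the daemon can be started with any combination of the flags above:

```bash
# Expose the endpoints on all interfaces, keep the default containerd
# socket, and raise the log verbosity to debug.
$ kata-monitor \
    --listen-address=0.0.0.0:8090 \
    --runtime-endpoint=/run/containerd/containerd.sock \
    --log-level=debug
```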
### Kata monitor HTTP endpoints
`kata-monitor` exposes the following endpoints:
* `/metrics` : get Kata sandboxes metrics.
* `/sandboxes` : list all the Kata sandboxes running on the host.
* `/agent-url` : get the agent URL of a Kata sandbox.
* `/debug/vars` : Internal data of the Kata runtime shim.
* `/debug/pprof/` : Golang profiling data of the Kata runtime shim: index page.
* `/debug/pprof/cmdline` : Golang profiling data of the Kata runtime shim: `cmdline` endpoint.
* `/debug/pprof/profile` : Golang profiling data of the Kata runtime shim: `profile` endpoint (CPU profiling).
* `/debug/pprof/symbol` : Golang profiling data of the Kata runtime shim: `symbol` endpoint.
* `/debug/pprof/trace` : Golang profiling data of the Kata runtime shim: `trace` endpoint.
**NOTE: The debug endpoints are available only if the [Kata Containers configuration file](https://github.com/kata-containers/kata-containers/blob/9d5b03a1b70bbd175237ec4b9f821d6ccee0a1f6/src/runtime/config/configuration-qemu.toml.in#L590-L592) includes** `enable_pprof = true` **in the** `[runtime]` **section**.
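For reference, a minimal excerpt of what that setting looks like in the runtime configuration file (a sketch, assuming the stock `configuration-qemu.toml` layout linked above):

```toml
[runtime]
# Enable the shim's pprof debug endpoints so kata-monitor can proxy them.
enable_pprof = true
```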
The `/sandboxes` endpoint lists the _sandbox ID_ of all the detected Kata runtimes. If accessed via a web browser, it provides HTML links to the endpoints available for each sandbox.
In order to retrieve data for a specific Kata workload, the _sandbox ID_ should be passed in the query string using the _sandbox_ key. The `/agent-url` and all the `/debug/*` endpoints require the sandbox ID to be specified in the query string.
<br>
#### Examples
Retrieve the IDs of the available sandboxes:
```bash
$ curl 127.0.0.1:8090/sandboxes
```
output:
```
6fcf0a90b01e90d8747177aa466c3462d02e02a878bc393649df83d4c314af0c
df96b24bd49ec437c872c1a758edc084121d607ce1242ff5d2263a0e1b693343
```
Retrieve the `agent-url` of the sandbox with ID _df96b24bd49ec437c872c1a758edc084121d607ce1242ff5d2263a0e1b693343_:
```bash
$ curl 127.0.0.1:8090/agent-url?sandbox=df96b24bd49ec437c872c1a758edc084121d607ce1242ff5d2263a0e1b693343
```
output:
```
vsock://830455376:1024
```
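The same `sandbox` query key also restricts the `/metrics` endpoint to a single workload. A sketch, reusing the sandbox ID from the example above:
```bash
$ curl "127.0.0.1:8090/metrics?sandbox=df96b24bd49ec437c872c1a758edc084121d607ce1242ff5d2263a0e1b693343"
```
The response contains only that sandbox's metrics, in Prometheus format.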

View File

@@ -21,7 +21,7 @@ import (
const defaultListenAddress = "127.0.0.1:8090"
var monitorListenAddr = flag.String("listen-address", defaultListenAddress, "The address to listen on for HTTP requests.")
var runtimeEndpoint = flag.String("runtime-endpoint", "/run/containerd/containerd.sock", `Endpoint of CRI container runtime service. (default: "/run/containerd/containerd.sock")`)
var runtimeEndpoint = flag.String("runtime-endpoint", "/run/containerd/containerd.sock", "Endpoint of CRI container runtime service.")
var logLevel = flag.String("log-level", "info", "Log level of logrus(trace/debug/info/warn/error/fatal/panic).")
// These values are overridden via ldflags
@@ -175,6 +175,15 @@ func main() {
}
func indexPage(w http.ResponseWriter, r *http.Request) {
htmlResponse := kataMonitor.IfReturnHTMLResponse(w, r)
if htmlResponse {
indexPageHTML(w, r)
} else {
indexPageText(w, r)
}
}
func indexPageText(w http.ResponseWriter, r *http.Request) {
w.Write([]byte("Available HTTP endpoints:\n"))
spacing := 0
@@ -184,13 +193,35 @@ func indexPage(w http.ResponseWriter, r *http.Request) {
}
}
spacing = spacing + 3
formatter := fmt.Sprintf("%%-%ds: %%s\n", spacing)
formattedString := fmt.Sprintf("%%-%ds: %%s\n", spacing)
for _, endpoint := range endpoints {
w.Write([]byte(fmt.Sprintf(formattedString, endpoint.path, endpoint.desc)))
w.Write([]byte(fmt.Sprintf(formatter, endpoint.path, endpoint.desc)))
}
}
func indexPageHTML(w http.ResponseWriter, r *http.Request) {
w.Write([]byte("<h1>Available HTTP endpoints:</h1>\n"))
var formattedString string
needLinkPaths := []string{"/metrics", "/sandboxes"}
w.Write([]byte("<ul>"))
for _, endpoint := range endpoints {
formattedString = fmt.Sprintf("<b>%s</b>: %s\n", endpoint.path, endpoint.desc)
for _, linkPath := range needLinkPaths {
if linkPath == endpoint.path {
formattedString = fmt.Sprintf("<b><a href='%s'>%s</a></b>: %s\n", endpoint.path, endpoint.path, endpoint.desc)
break
}
}
formattedString = fmt.Sprintf("<li>%s</li>", formattedString)
w.Write([]byte(formattedString))
}
w.Write([]byte("</ul>"))
}
// initLog setup logger
func initLog() {
kataMonitorLog := logrus.WithFields(logrus.Fields{

View File

@@ -54,7 +54,10 @@ var addCommand = cli.Command{
},
},
Action: func(c *cli.Context) error {
return volume.Add(volumePath, mountInfo)
if err := volume.Add(volumePath, mountInfo); err != nil {
return cli.NewExitError(err.Error(), 1)
}
return nil
},
}
@@ -69,7 +72,10 @@ var removeCommand = cli.Command{
},
},
Action: func(c *cli.Context) error {
return volume.Remove(volumePath)
if err := volume.Remove(volumePath); err != nil {
return cli.NewExitError(err.Error(), 1)
}
return nil
},
}
@@ -86,9 +92,8 @@ var statsCommand = cli.Command{
Action: func(c *cli.Context) (string, error) {
stats, err := Stats(volumePath)
if err != nil {
return "", err
return "", cli.NewExitError(err.Error(), 1)
}
return string(stats), nil
},
}
@@ -109,7 +114,10 @@ var resizeCommand = cli.Command{
},
},
Action: func(c *cli.Context) error {
return Resize(volumePath, size)
if err := Resize(volumePath, size); err != nil {
return cli.NewExitError(err.Error(), 1)
}
return nil
},
}

View File

@@ -50,7 +50,6 @@ func TestCreateSandboxSuccess(t *testing.T) {
}()
tmpdir, bundlePath, ociConfigFile := ktu.SetupOCIConfigFile(t)
// defer os.RemoveAll(tmpdir)
runtimeConfig, err := newTestRuntimeConfig(tmpdir, testConsole, true)
assert.NoError(err)
@@ -99,7 +98,6 @@ func TestCreateSandboxFail(t *testing.T) {
assert := assert.New(t)
tmpdir, bundlePath, ociConfigFile := ktu.SetupOCIConfigFile(t)
defer os.RemoveAll(tmpdir)
runtimeConfig, err := newTestRuntimeConfig(tmpdir, testConsole, true)
assert.NoError(err)
@@ -137,7 +135,6 @@ func TestCreateSandboxConfigFail(t *testing.T) {
assert := assert.New(t)
tmpdir, bundlePath, _ := ktu.SetupOCIConfigFile(t)
defer os.RemoveAll(tmpdir)
runtimeConfig, err := newTestRuntimeConfig(tmpdir, testConsole, true)
assert.NoError(err)
@@ -187,7 +184,6 @@ func TestCreateContainerSuccess(t *testing.T) {
}
tmpdir, bundlePath, ociConfigFile := ktu.SetupOCIConfigFile(t)
defer os.RemoveAll(tmpdir)
runtimeConfig, err := newTestRuntimeConfig(tmpdir, testConsole, true)
assert.NoError(err)
@@ -227,7 +223,6 @@ func TestCreateContainerFail(t *testing.T) {
assert := assert.New(t)
tmpdir, bundlePath, ociConfigFile := ktu.SetupOCIConfigFile(t)
defer os.RemoveAll(tmpdir)
runtimeConfig, err := newTestRuntimeConfig(tmpdir, testConsole, true)
assert.NoError(err)
@@ -278,7 +273,6 @@ func TestCreateContainerConfigFail(t *testing.T) {
}()
tmpdir, bundlePath, ociConfigFile := ktu.SetupOCIConfigFile(t)
defer os.RemoveAll(tmpdir)
runtimeConfig, err := newTestRuntimeConfig(tmpdir, testConsole, true)
assert.NoError(err)

View File

@@ -7,7 +7,6 @@
package containerdshim
import (
"os"
"testing"
taskAPI "github.com/containerd/containerd/runtime/v2/task"
@@ -25,8 +24,8 @@ func TestDeleteContainerSuccessAndFail(t *testing.T) {
MockID: testSandboxID,
}
rootPath, bundlePath, _ := ktu.SetupOCIConfigFile(t)
defer os.RemoveAll(rootPath)
_, bundlePath, _ := ktu.SetupOCIConfigFile(t)
_, err := compatoci.ParseConfigJSON(bundlePath)
assert.NoError(err)

View File

@@ -776,6 +776,8 @@ func (s *service) Kill(ctx context.Context, r *taskAPI.KillRequest) (_ *ptypes.E
return empty, errors.New("The exec process does not exist")
}
processStatus = execs.status
} else {
r.All = true
}
// According to CRI specs, kubelet will call StopPodSandbox()

View File

@@ -41,8 +41,7 @@ func TestServiceCreate(t *testing.T) {
assert := assert.New(t)
tmpdir, bundleDir, _ := ktu.SetupOCIConfigFile(t)
defer os.RemoveAll(tmpdir)
_, bundleDir, _ := ktu.SetupOCIConfigFile(t)
ctx := context.Background()

View File

@@ -8,12 +8,14 @@ package containerdshim
import (
"context"
"fmt"
"github.com/sirupsen/logrus"
"github.com/containerd/containerd/api/types/task"
"github.com/kata-containers/kata-containers/src/runtime/pkg/katautils"
)
func startContainer(ctx context.Context, s *service, c *container) (retErr error) {
shimLog.WithField("container", c.id).Debug("start container")
defer func() {
if retErr != nil {
// notify the wait goroutine to continue
@@ -78,7 +80,8 @@ func startContainer(ctx context.Context, s *service, c *container) (retErr error
return err
}
c.ttyio = tty
go ioCopy(c.exitIOch, c.stdinCloser, tty, stdin, stdout, stderr)
go ioCopy(shimLog.WithField("container", c.id), c.exitIOch, c.stdinCloser, tty, stdin, stdout, stderr)
} else {
// close the io exit channel, since there is no io for this container,
// otherwise the following wait goroutine will hang on this channel.
@@ -94,6 +97,10 @@ func startContainer(ctx context.Context, s *service, c *container) (retErr error
}
func startExec(ctx context.Context, s *service, containerID, execID string) (e *exec, retErr error) {
shimLog.WithFields(logrus.Fields{
"container": containerID,
"exec": execID,
}).Debug("start container execution")
// start an exec
c, err := s.getContainer(containerID)
if err != nil {
@@ -140,7 +147,10 @@ func startExec(ctx context.Context, s *service, containerID, execID string) (e *
}
execs.ttyio = tty
go ioCopy(execs.exitIOch, execs.stdinCloser, tty, stdin, stdout, stderr)
go ioCopy(shimLog.WithFields(logrus.Fields{
"container": c.id,
"exec": execID,
}), execs.exitIOch, execs.stdinCloser, tty, stdin, stdout, stderr)
go wait(ctx, s, c, execID)

View File

@@ -12,6 +12,7 @@ import (
"syscall"
"github.com/containerd/fifo"
"github.com/sirupsen/logrus"
)
// The buffer size used to specify the buffer for IO streams copy
@@ -86,18 +87,20 @@ func newTtyIO(ctx context.Context, stdin, stdout, stderr string, console bool) (
return ttyIO, nil
}
func ioCopy(exitch, stdinCloser chan struct{}, tty *ttyIO, stdinPipe io.WriteCloser, stdoutPipe, stderrPipe io.Reader) {
func ioCopy(shimLog *logrus.Entry, exitch, stdinCloser chan struct{}, tty *ttyIO, stdinPipe io.WriteCloser, stdoutPipe, stderrPipe io.Reader) {
var wg sync.WaitGroup
if tty.Stdin != nil {
wg.Add(1)
go func() {
shimLog.Debug("stdin io stream copy started")
p := bufPool.Get().(*[]byte)
defer bufPool.Put(p)
io.CopyBuffer(stdinPipe, tty.Stdin, *p)
// notify that we can close process's io safely.
close(stdinCloser)
wg.Done()
shimLog.Debug("stdin io stream copy exited")
}()
}
@@ -105,6 +108,7 @@ func ioCopy(exitch, stdinCloser chan struct{}, tty *ttyIO, stdinPipe io.WriteClo
wg.Add(1)
go func() {
shimLog.Debug("stdout io stream copy started")
p := bufPool.Get().(*[]byte)
defer bufPool.Put(p)
io.CopyBuffer(tty.Stdout, stdoutPipe, *p)
@@ -113,20 +117,24 @@ func ioCopy(exitch, stdinCloser chan struct{}, tty *ttyIO, stdinPipe io.WriteClo
// close stdin to make the other routine stop
tty.Stdin.Close()
}
shimLog.Debug("stdout io stream copy exited")
}()
}
if tty.Stderr != nil && stderrPipe != nil {
wg.Add(1)
go func() {
shimLog.Debug("stderr io stream copy started")
p := bufPool.Get().(*[]byte)
defer bufPool.Put(p)
io.CopyBuffer(tty.Stderr, stderrPipe, *p)
wg.Done()
shimLog.Debug("stderr io stream copy exited")
}()
}
wg.Wait()
tty.close()
close(exitch)
shimLog.Debug("all io stream copy goroutines exited")
}

View File

@@ -7,6 +7,7 @@ package containerdshim
import (
"context"
"github.com/sirupsen/logrus"
"io"
"os"
"path/filepath"
@@ -179,7 +180,7 @@ func TestIoCopy(t *testing.T) {
defer tty.close()
// start the ioCopy threads : copy from src to dst
go ioCopy(exitioch, stdinCloser, tty, dstInW, srcOutR, srcErrR)
go ioCopy(logrus.WithContext(context.Background()), exitioch, stdinCloser, tty, dstInW, srcOutR, srcErrR)
var firstW, secondW, thirdW io.WriteCloser
var firstR, secondR, thirdR io.Reader

View File

@@ -15,7 +15,6 @@ import (
"github.com/containerd/containerd/api/types/task"
"github.com/containerd/containerd/mount"
"github.com/sirupsen/logrus"
"google.golang.org/grpc/codes"
"github.com/kata-containers/kata-containers/src/runtime/pkg/oci"
)
@@ -31,12 +30,17 @@ func wait(ctx context.Context, s *service, c *container, execID string) (int32,
if execID == "" {
//wait until the io closed, then wait the container
<-c.exitIOch
shimLog.WithField("container", c.id).Debug("The container io streams closed")
} else {
execs, err = c.getExec(execID)
if err != nil {
return exitCode255, err
}
<-execs.exitIOch
shimLog.WithFields(logrus.Fields{
"container": c.id,
"exec": execID,
}).Debug("The container process io streams closed")
//This wait could be triggered before exec start which
//will get the exec's id, thus this assignment must after
//the exec exit, to make sure it get the exec's id.
@@ -63,6 +67,7 @@ func wait(ctx context.Context, s *service, c *container, execID string) (int32,
if c.cType.IsSandbox() {
// cancel watcher
if s.monitor != nil {
shimLog.WithField("sandbox", s.sandbox.ID()).Info("cancel watcher")
s.monitor <- nil
}
if err = s.sandbox.Stop(ctx, true); err != nil {
@@ -82,13 +87,17 @@ func wait(ctx context.Context, s *service, c *container, execID string) (int32,
c.exitTime = timeStamp
c.exitCh <- uint32(ret)
shimLog.WithField("container", c.id).Debug("The container status is StatusStopped")
} else {
execs.status = task.StatusStopped
execs.exitCode = ret
execs.exitTime = timeStamp
execs.exitCh <- uint32(ret)
shimLog.WithFields(logrus.Fields{
"container": c.id,
"exec": execID,
}).Debug("The container exec status is StatusStopped")
}
s.mu.Unlock()
@@ -102,6 +111,7 @@ func watchSandbox(ctx context.Context, s *service) {
return
}
err := <-s.monitor
shimLog.WithError(err).WithField("sandbox", s.sandbox.ID()).Info("watchSandbox gets an error or stop signal")
if err == nil {
return
}
@@ -147,13 +157,11 @@ func watchOOMEvents(ctx context.Context, s *service) {
default:
containerID, err := s.sandbox.GetOOMEvent(ctx)
if err != nil {
shimLog.WithError(err).Warn("failed to get OOM event from sandbox")
// If the GetOOMEvent call is not implemented, then the agent is most likely an older version,
// stop attempting to get OOM events.
// for rust agent, the response code is not found
if isGRPCErrorCode(codes.NotFound, err) || err.Error() == "Dead agent" {
if err.Error() == "ttrpc: closed" || err.Error() == "Dead agent" {
shimLog.WithError(err).Warn("agent has shutdown, return from watching of OOM events")
return
}
shimLog.WithError(err).Warn("failed to get OOM event from sandbox")
time.Sleep(defaultCheckInterval)
continue
}

View File

@@ -6,17 +6,36 @@
package volume
import (
b64 "encoding/base64"
"encoding/json"
"errors"
"fmt"
"io/ioutil"
"os"
"path/filepath"
"strings"
)
const (
mountInfoFileName = "mountInfo.json"
FSGroupMetadataKey = "fsGroup"
FSGroupChangePolicyMetadataKey = "fsGroupChangePolicy"
)
// FSGroupChangePolicy holds policies that will be used for applying fsGroup to a volume.
// This type and the allowed values are tracking the PodFSGroupChangePolicy defined in
// https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/api/core/v1/types.go
// It is up to the client using the direct-assigned volume feature (e.g. CSI drivers) to determine
// the optimal setting for this change policy (i.e. from Pod spec or assuming volume ownership
// based on the storage offering).
type FSGroupChangePolicy string
const (
// FSGroupChangeAlways indicates that volume's ownership should always be changed.
FSGroupChangeAlways FSGroupChangePolicy = "Always"
// FSGroupChangeOnRootMismatch indicates that volume's ownership will be changed
// only when ownership of root directory does not match with the desired group id.
FSGroupChangeOnRootMismatch FSGroupChangePolicy = "OnRootMismatch"
)
var kataDirectVolumeRootPath = "/run/kata-containers/shared/direct-volumes"
@@ -37,19 +56,20 @@ type MountInfo struct {
// Add writes the mount info of a direct volume into a filesystem path known to Kata Container.
func Add(volumePath string, mountInfo string) error {
volumeDir := filepath.Join(kataDirectVolumeRootPath, volumePath)
volumeDir := filepath.Join(kataDirectVolumeRootPath, b64.URLEncoding.EncodeToString([]byte(volumePath)))
stat, err := os.Stat(volumeDir)
if err != nil && !errors.Is(err, os.ErrNotExist) {
return err
}
if stat != nil && !stat.IsDir() {
return fmt.Errorf("%s should be a directory", volumeDir)
}
if errors.Is(err, os.ErrNotExist) {
if err != nil {
if !errors.Is(err, os.ErrNotExist) {
return err
}
if err := os.MkdirAll(volumeDir, 0700); err != nil {
return err
}
}
if stat != nil && !stat.IsDir() {
return fmt.Errorf("%s should be a directory", volumeDir)
}
var deserialized MountInfo
if err := json.Unmarshal([]byte(mountInfo), &deserialized); err != nil {
return err
@@ -60,14 +80,12 @@ func Add(volumePath string, mountInfo string) error {
// Remove deletes the direct volume path including all the files inside it.
func Remove(volumePath string) error {
// Find the base of the volume path to delete the whole volume path
base := strings.SplitN(volumePath, string(os.PathSeparator), 2)[0]
return os.RemoveAll(filepath.Join(kataDirectVolumeRootPath, base))
return os.RemoveAll(filepath.Join(kataDirectVolumeRootPath, b64.URLEncoding.EncodeToString([]byte(volumePath))))
}
// VolumeMountInfo retrieves the mount info of a direct volume.
func VolumeMountInfo(volumePath string) (*MountInfo, error) {
mountInfoFilePath := filepath.Join(kataDirectVolumeRootPath, volumePath, mountInfoFileName)
mountInfoFilePath := filepath.Join(kataDirectVolumeRootPath, b64.URLEncoding.EncodeToString([]byte(volumePath)), mountInfoFileName)
if _, err := os.Stat(mountInfoFilePath); err != nil {
return nil, err
}
@@ -84,16 +102,17 @@ func VolumeMountInfo(volumePath string) (*MountInfo, error) {
// RecordSandboxId associates a sandbox id with a direct volume.
func RecordSandboxId(sandboxId string, volumePath string) error {
mountInfoFilePath := filepath.Join(kataDirectVolumeRootPath, volumePath, mountInfoFileName)
encodedPath := b64.URLEncoding.EncodeToString([]byte(volumePath))
mountInfoFilePath := filepath.Join(kataDirectVolumeRootPath, encodedPath, mountInfoFileName)
if _, err := os.Stat(mountInfoFilePath); err != nil {
return err
}
return ioutil.WriteFile(filepath.Join(kataDirectVolumeRootPath, volumePath, sandboxId), []byte(""), 0600)
return ioutil.WriteFile(filepath.Join(kataDirectVolumeRootPath, encodedPath, sandboxId), []byte(""), 0600)
}
func GetSandboxIdForVolume(volumePath string) (string, error) {
files, err := ioutil.ReadDir(filepath.Join(kataDirectVolumeRootPath, volumePath))
files, err := ioutil.ReadDir(filepath.Join(kataDirectVolumeRootPath, b64.URLEncoding.EncodeToString([]byte(volumePath))))
if err != nil {
return "", err
}

View File

@@ -6,6 +6,7 @@
package volume
import (
b64 "encoding/base64"
"encoding/json"
"errors"
"os"
@@ -22,12 +23,15 @@ func TestAdd(t *testing.T) {
assert.Nil(t, err)
defer os.RemoveAll(kataDirectVolumeRootPath)
var volumePath = "/a/b/c"
var basePath = "a"
actual := MountInfo{
VolumeType: "block",
Device: "/dev/sda",
FsType: "ext4",
Options: []string{"journal_dev", "noload"},
Metadata: map[string]string{
FSGroupMetadataKey: "3000",
FSGroupChangePolicyMetadataKey: string(FSGroupChangeOnRootMismatch),
},
Options: []string{"journal_dev", "noload"},
}
buf, err := json.Marshal(actual)
assert.Nil(t, err)
@@ -41,15 +45,17 @@ func TestAdd(t *testing.T) {
assert.Equal(t, expected.Device, actual.Device)
assert.Equal(t, expected.FsType, actual.FsType)
assert.Equal(t, expected.Options, actual.Options)
assert.Equal(t, expected.Metadata, actual.Metadata)
_, err = os.Stat(filepath.Join(kataDirectVolumeRootPath, b64.URLEncoding.EncodeToString([]byte(volumePath))))
assert.Nil(t, err)
// Remove the file
err = Remove(volumePath)
assert.Nil(t, err)
_, err = os.Stat(filepath.Join(kataDirectVolumeRootPath, basePath))
_, err = os.Stat(filepath.Join(kataDirectVolumeRootPath, b64.URLEncoding.EncodeToString([]byte(volumePath))))
assert.True(t, errors.Is(err, os.ErrNotExist))
// Test invalid mount info json
assert.Error(t, Add(volumePath, "{invalid json}"))
_, err = os.Stat(filepath.Join(kataDirectVolumeRootPath))
assert.Nil(t, err)
}
func TestRecordSandboxId(t *testing.T) {

View File

@@ -78,6 +78,21 @@ func (km *KataMonitor) ProcessMetricsRequest(w http.ResponseWriter, r *http.Requ
scrapeDurationsHistogram.Observe(float64(time.Since(start).Nanoseconds() / int64(time.Millisecond)))
}()
// this is likely the same as `kata-runtime metrics <SANDBOX>`.
sandboxID, err := getSandboxIDFromReq(r)
if err == nil && sandboxID != "" {
metrics, err := GetSandboxMetrics(sandboxID)
if err != nil {
w.WriteHeader(http.StatusInternalServerError)
w.Write([]byte(err.Error()))
return
}
w.Write([]byte(metrics))
return
}
// if no sandbox is provided, gather the metrics of all sandboxes.
// prepare writer for writing response.
contentType := expfmt.Negotiate(r.Header)

View File

@@ -27,6 +27,7 @@ const (
RuntimeCRIO = "cri-o"
fsMonitorRetryDelaySeconds = 60
podCacheRefreshDelaySeconds = 5
contentTypeHtml = "text/html"
)
// SetLogger sets the logger for katamonitor package.
@@ -194,7 +195,41 @@ func (km *KataMonitor) GetAgentURL(w http.ResponseWriter, r *http.Request) {
// ListSandboxes list all sandboxes running in Kata
func (km *KataMonitor) ListSandboxes(w http.ResponseWriter, r *http.Request) {
sandboxes := km.sandboxCache.getSandboxList()
htmlResponse := IfReturnHTMLResponse(w, r)
if htmlResponse {
listSandboxesHtml(sandboxes, w)
} else {
listSandboxesText(sandboxes, w)
}
}
func listSandboxesText(sandboxes []string, w http.ResponseWriter) {
for _, s := range sandboxes {
w.Write([]byte(fmt.Sprintf("%s\n", s)))
}
}
func listSandboxesHtml(sandboxes []string, w http.ResponseWriter) {
w.Write([]byte("<h1>Sandbox list</h1>\n"))
w.Write([]byte("<ul>\n"))
for _, s := range sandboxes {
w.Write([]byte(fmt.Sprintf("<li>%s: <a href='/debug/pprof/?sandbox=%s'>pprof</a>, <a href='/metrics?sandbox=%s'>metrics</a>, <a href='/agent-url?sandbox=%s'>agent-url</a></li>\n", s, s, s, s)))
}
w.Write([]byte("</ul>\n"))
}
// IfReturnHTMLResponse returns true if request accepts html response
// NOTE: IfReturnHTMLResponse will also set response header to `text/html`
func IfReturnHTMLResponse(w http.ResponseWriter, r *http.Request) bool {
accepts := r.Header["Accept"]
for _, accept := range accepts {
fields := strings.Split(accept, ",")
for _, field := range fields {
if field == contentTypeHtml {
w.Header().Set("Content-Type", contentTypeHtml)
return true
}
}
}
return false
}

View File

@@ -10,6 +10,8 @@ import (
"io"
"net"
"net/http"
"regexp"
"strings"
cdshim "github.com/containerd/containerd/runtime/v2/shim"
@@ -33,7 +35,13 @@ func (km *KataMonitor) composeSocketAddress(r *http.Request) (string, error) {
return shim.SocketAddress(sandbox), nil
}
func (km *KataMonitor) proxyRequest(w http.ResponseWriter, r *http.Request) {
func (km *KataMonitor) proxyRequest(w http.ResponseWriter, r *http.Request,
proxyResponse func(req *http.Request, w io.Writer, r io.Reader) error) {
if proxyResponse == nil {
proxyResponse = copyResponse
}
w.Header().Set("X-Content-Type-Options", "nosniff")
socketAddress, err := km.composeSocketAddress(r)
@@ -55,8 +63,10 @@ func (km *KataMonitor) proxyRequest(w http.ResponseWriter, r *http.Request) {
}
uri := fmt.Sprintf("http://shim%s", r.URL.String())
monitorLog.Debugf("proxyRequest to: %s, uri: %s", socketAddress, uri)
resp, err := client.Get(uri)
if err != nil {
serveError(w, http.StatusInternalServerError, fmt.Sprintf("failed to request %s through %s", uri, socketAddress))
return
}
@@ -73,38 +83,68 @@ func (km *KataMonitor) proxyRequest(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Disposition", contentDisposition)
}
io.Copy(w, output)
err = proxyResponse(r, w, output)
if err != nil {
monitorLog.WithError(err).Errorf("failed proxying %s from %s", uri, socketAddress)
serveError(w, http.StatusInternalServerError, "error retrieving resource")
}
}
// ExpvarHandler handles other `/debug/vars` requests
func (km *KataMonitor) ExpvarHandler(w http.ResponseWriter, r *http.Request) {
km.proxyRequest(w, r)
km.proxyRequest(w, r, nil)
}
// PprofIndex handles other `/debug/pprof/` requests
func (km *KataMonitor) PprofIndex(w http.ResponseWriter, r *http.Request) {
km.proxyRequest(w, r)
if len(strings.TrimPrefix(r.URL.Path, "/debug/pprof/")) == 0 {
km.proxyRequest(w, r, copyResponseAddingSandboxIdToHref)
} else {
km.proxyRequest(w, r, nil)
}
}
// PprofCmdline handles other `/debug/cmdline` requests
func (km *KataMonitor) PprofCmdline(w http.ResponseWriter, r *http.Request) {
km.proxyRequest(w, r)
km.proxyRequest(w, r, nil)
}
// PprofProfile handles other `/debug/profile` requests
func (km *KataMonitor) PprofProfile(w http.ResponseWriter, r *http.Request) {
km.proxyRequest(w, r)
km.proxyRequest(w, r, nil)
}
// PprofSymbol handles other `/debug/symbol` requests
func (km *KataMonitor) PprofSymbol(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "text/plain; charset=utf-8")
km.proxyRequest(w, r)
km.proxyRequest(w, r, nil)
}
// PprofTrace handles other `/debug/trace` requests
func (km *KataMonitor) PprofTrace(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/octet-stream")
w.Header().Set("Content-Disposition", `attachment; filename="trace"`)
km.proxyRequest(w, r)
km.proxyRequest(w, r, nil)
}
func copyResponse(req *http.Request, w io.Writer, r io.Reader) error {
_, err := io.Copy(w, r)
return err
}
func copyResponseAddingSandboxIdToHref(req *http.Request, w io.Writer, r io.Reader) error {
sb, err := getSandboxIDFromReq(req)
if err != nil {
monitorLog.WithError(err).Warning("missing sandbox query in pprof url")
return copyResponse(req, w, r)
}
buf, err := io.ReadAll(r)
if err != nil {
return err
}
re := regexp.MustCompile(`<a href=(['"])(\w+)\?(\w+=\w+)['"]>`)
outHtml := re.ReplaceAllString(string(buf), fmt.Sprintf("<a href=$1$2?sandbox=%s&$3$1>", sb))
w.Write([]byte(outHtml))
return nil
}

View File

@@ -0,0 +1,117 @@
// Copyright (c) 2022 Red Hat Inc.
//
// SPDX-License-Identifier: Apache-2.0
//
package katamonitor
import (
"bytes"
"net/http"
"net/url"
"strings"
"testing"
"github.com/stretchr/testify/assert"
)
func TestCopyResponseAddingSandboxIdToHref(t *testing.T) {
assert := assert.New(t)
htmlIn := strings.NewReader(`
<html>
<head>
<title>/debug/pprof/</title>
<style>
.profile-name{
display:inline-block;
width:6rem;
}
</style>
</head>
<body>
/debug/pprof/<br>
<br>
Types of profiles available:
<table>
<thead><td>Count</td><td>Profile</td></thead>
<tr><td>27</td><td><a href='allocs?debug=1'>allocs</a></td></tr>
<tr><td>0</td><td><a href='block?debug=1'>block</a></td></tr>
<tr><td>0</td><td><a href='cmdline?debug=1'>cmdline</a></td></tr>
<tr><td>39</td><td><a href='goroutine?debug=1'>goroutine</a></td></tr>
<tr><td>27</td><td><a href='heap?debug=1'>heap</a></td></tr>
<tr><td>0</td><td><a href='mutex?debug=1'>mutex</a></td></tr>
<tr><td>0</td><td><a href='profile?debug=1'>profile</a></td></tr>
<tr><td>10</td><td><a href='threadcreate?debug=1'>threadcreate</a></td></tr>
<tr><td>0</td><td><a href='trace?debug=1'>trace</a></td></tr>
</table>
<a href="goroutine?debug=2">full goroutine stack dump</a>
<br>
<p>
Profile Descriptions:
<ul>
<li><div class=profile-name>allocs: </div> A sampling of all past memory allocations</li>
<li><div class=profile-name>block: </div> Stack traces that led to blocking on synchronization primitives</li>
<li><div class=profile-name>cmdline: </div> The command line invocation of the current program</li>
<li><div class=profile-name>goroutine: </div> Stack traces of all current goroutines</li>
<li><div class=profile-name>heap: </div> A sampling of memory allocations of live objects. You can specify the gc GET parameter to run GC before taking the heap sample.</li>
<li><div class=profile-name>mutex: </div> Stack traces of holders of contended mutexes</li>
<li><div class=profile-name>profile: </div> CPU profile. You can specify the duration in the seconds GET parameter. After you get the profile file, use the go tool pprof command to investigate the profile.</li>
<li><div class=profile-name>threadcreate: </div> Stack traces that led to the creation of new OS threads</li>
<li><div class=profile-name>trace: </div> A trace of execution of the current program. You can specify the duration in the seconds GET parameter. After you get the trace file, use the go tool trace command to investigate the trace.</li>
</ul>
</p>
</body>
</html>`)
htmlExpected := bytes.NewBufferString(`
<html>
<head>
<title>/debug/pprof/</title>
<style>
.profile-name{
display:inline-block;
width:6rem;
}
</style>
</head>
<body>
/debug/pprof/<br>
<br>
Types of profiles available:
<table>
<thead><td>Count</td><td>Profile</td></thead>
<tr><td>27</td><td><a href='allocs?sandbox=1234567890&debug=1'>allocs</a></td></tr>
<tr><td>0</td><td><a href='block?sandbox=1234567890&debug=1'>block</a></td></tr>
<tr><td>0</td><td><a href='cmdline?sandbox=1234567890&debug=1'>cmdline</a></td></tr>
<tr><td>39</td><td><a href='goroutine?sandbox=1234567890&debug=1'>goroutine</a></td></tr>
<tr><td>27</td><td><a href='heap?sandbox=1234567890&debug=1'>heap</a></td></tr>
<tr><td>0</td><td><a href='mutex?sandbox=1234567890&debug=1'>mutex</a></td></tr>
<tr><td>0</td><td><a href='profile?sandbox=1234567890&debug=1'>profile</a></td></tr>
<tr><td>10</td><td><a href='threadcreate?sandbox=1234567890&debug=1'>threadcreate</a></td></tr>
<tr><td>0</td><td><a href='trace?sandbox=1234567890&debug=1'>trace</a></td></tr>
</table>
<a href="goroutine?sandbox=1234567890&debug=2">full goroutine stack dump</a>
<br>
<p>
Profile Descriptions:
<ul>
<li><div class=profile-name>allocs: </div> A sampling of all past memory allocations</li>
<li><div class=profile-name>block: </div> Stack traces that led to blocking on synchronization primitives</li>
<li><div class=profile-name>cmdline: </div> The command line invocation of the current program</li>
<li><div class=profile-name>goroutine: </div> Stack traces of all current goroutines</li>
<li><div class=profile-name>heap: </div> A sampling of memory allocations of live objects. You can specify the gc GET parameter to run GC before taking the heap sample.</li>
<li><div class=profile-name>mutex: </div> Stack traces of holders of contended mutexes</li>
<li><div class=profile-name>profile: </div> CPU profile. You can specify the duration in the seconds GET parameter. After you get the profile file, use the go tool pprof command to investigate the profile.</li>
<li><div class=profile-name>threadcreate: </div> Stack traces that led to the creation of new OS threads</li>
<li><div class=profile-name>trace: </div> A trace of execution of the current program. You can specify the duration in the seconds GET parameter. After you get the trace file, use the go tool trace command to investigate the trace.</li>
</ul>
</p>
</body>
</html>`)
req := &http.Request{URL: &url.URL{RawQuery: "sandbox=1234567890"}}
buf := bytes.NewBuffer(nil)
copyResponseAddingSandboxIdToHref(req, buf, htmlIn)
assert.Equal(htmlExpected, buf)
}

View File

@@ -346,11 +346,10 @@ func IsInGitHubActions() bool {
func SetupOCIConfigFile(t *testing.T) (rootPath string, bundlePath, ociConfigFile string) {
assert := assert.New(t)
tmpdir, err := os.MkdirTemp("", "katatest-")
assert.NoError(err)
tmpdir := t.TempDir()
bundlePath = filepath.Join(tmpdir, "bundle")
err = os.MkdirAll(bundlePath, testDirMode)
err := os.MkdirAll(bundlePath, testDirMode)
assert.NoError(err)
ociConfigFile = filepath.Join(bundlePath, "config.json")

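The change above (and several of the test diffs that follow) applies one recurring pattern: instead of creating temporary directories with os.MkdirTemp and removing them manually, the tests use testing.T.TempDir(), which the Go testing package deletes automatically once the test and its subtests finish. A minimal, hypothetical sketch of the pattern, not taken from the repository:

package example

import (
	"os"
	"path/filepath"
	"testing"
)

func TestTempDirCleanup(t *testing.T) {
	dir := t.TempDir() // created per test, removed automatically when the test ends

	cfg := filepath.Join(dir, "config.json")
	if err := os.WriteFile(cfg, []byte("{}"), 0o600); err != nil {
		t.Fatal(err)
	}
	// No "defer os.RemoveAll(dir)" is needed, unlike with os.MkdirTemp.
}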
View File

@@ -216,7 +216,6 @@ func TestCreateSandboxConfigFail(t *testing.T) {
assert := assert.New(t)
tmpdir, bundlePath, _ := ktu.SetupOCIConfigFile(t)
defer os.RemoveAll(tmpdir)
runtimeConfig, err := newTestRuntimeConfig(tmpdir, testConsole, true)
assert.NoError(err)
@@ -250,7 +249,6 @@ func TestCreateSandboxFail(t *testing.T) {
assert := assert.New(t)
tmpdir, bundlePath, _ := ktu.SetupOCIConfigFile(t)
defer os.RemoveAll(tmpdir)
runtimeConfig, err := newTestRuntimeConfig(tmpdir, testConsole, true)
assert.NoError(err)
@@ -273,7 +271,6 @@ func TestCreateSandboxAnnotations(t *testing.T) {
assert := assert.New(t)
tmpdir, bundlePath, _ := ktu.SetupOCIConfigFile(t)
defer os.RemoveAll(tmpdir)
runtimeConfig, err := newTestRuntimeConfig(tmpdir, testConsole, true)
assert.NoError(err)
@@ -350,8 +347,7 @@ func TestCheckForFips(t *testing.T) {
func TestCreateContainerContainerConfigFail(t *testing.T) {
assert := assert.New(t)
tmpdir, bundlePath, ociConfigFile := ktu.SetupOCIConfigFile(t)
defer os.RemoveAll(tmpdir)
_, bundlePath, ociConfigFile := ktu.SetupOCIConfigFile(t)
spec, err := compatoci.ParseConfigJSON(bundlePath)
assert.NoError(err)
@@ -378,8 +374,7 @@ func TestCreateContainerContainerConfigFail(t *testing.T) {
func TestCreateContainerFail(t *testing.T) {
assert := assert.New(t)
tmpdir, bundlePath, ociConfigFile := ktu.SetupOCIConfigFile(t)
defer os.RemoveAll(tmpdir)
_, bundlePath, ociConfigFile := ktu.SetupOCIConfigFile(t)
spec, err := compatoci.ParseConfigJSON(bundlePath)
assert.NoError(err)
@@ -413,8 +408,7 @@ func TestCreateContainer(t *testing.T) {
mockSandbox.CreateContainerFunc = nil
}()
tmpdir, bundlePath, ociConfigFile := ktu.SetupOCIConfigFile(t)
defer os.RemoveAll(tmpdir)
_, bundlePath, ociConfigFile := ktu.SetupOCIConfigFile(t)
spec, err := compatoci.ParseConfigJSON(bundlePath)
assert.NoError(err)

View File

@@ -20,8 +20,9 @@ import (
var testKeyHook = "test-key"
var testContainerIDHook = "test-container-id"
var testControllerIDHook = "test-controller-id"
var testBinHookPath = "/usr/bin/virtcontainers/bin/test/hook"
var testBinHookPath = "../../virtcontainers/hook/mock/hook"
var testBundlePath = "/test/bundle"
var mockHookLogFile = "/tmp/mock_hook.log"
func getMockHookBinPath() string {
return testBinHookPath
@@ -49,12 +50,17 @@ func createWrongHook() specs.Hook {
}
}
func cleanMockHookLogFile() {
_ = os.Remove(mockHookLogFile)
}
func TestRunHook(t *testing.T) {
if tc.NotValid(ktu.NeedRoot()) {
t.Skip(ktu.TestDisabledNeedRoot)
}
assert := assert.New(t)
t.Cleanup(cleanMockHookLogFile)
ctx := context.Background()
spec := specs.Spec{}
@@ -87,6 +93,7 @@ func TestPreStartHooks(t *testing.T) {
}
assert := assert.New(t)
t.Cleanup(cleanMockHookLogFile)
ctx := context.Background()
@@ -129,6 +136,7 @@ func TestPostStartHooks(t *testing.T) {
}
assert := assert.New(t)
t.Cleanup(cleanMockHookLogFile)
ctx := context.Background()
@@ -173,6 +181,7 @@ func TestPostStopHooks(t *testing.T) {
assert := assert.New(t)
ctx := context.Background()
t.Cleanup(cleanMockHookLogFile)
// Hooks field is nil
spec := specs.Spec{}

View File

@@ -165,6 +165,7 @@ type cloudHypervisor struct {
APIClient clhClient
ctx context.Context
id string
devicesIds map[string]string
vmconfig chclient.VmConfig
state CloudHypervisorState
config HypervisorConfig
@@ -360,6 +361,7 @@ func (clh *cloudHypervisor) CreateVM(ctx context.Context, id string, network Net
clh.id = id
clh.state.state = clhNotReady
clh.devicesIds = make(map[string]string)
clh.Logger().WithField("function", "CreateVM").Info("creating Sandbox")
@@ -667,7 +669,6 @@ func (clh *cloudHypervisor) hotplugAddBlockDevice(drive *config.BlockDrive) erro
clhDisk := *chclient.NewDiskConfig(drive.File)
clhDisk.Readonly = &drive.ReadOnly
clhDisk.VhostUser = func(b bool) *bool { return &b }(false)
clhDisk.Id = &driveID
pciInfo, _, err := cl.VmAddDiskPut(ctx, clhDisk)
@@ -675,6 +676,7 @@ func (clh *cloudHypervisor) hotplugAddBlockDevice(drive *config.BlockDrive) erro
return fmt.Errorf("failed to hotplug block device %+v %s", drive, openAPIClientError(err))
}
clh.devicesIds[driveID] = pciInfo.GetId()
drive.PCIPath, err = clhPciInfoToPath(pciInfo)
return err
@@ -688,11 +690,11 @@ func (clh *cloudHypervisor) hotPlugVFIODevice(device *config.VFIODev) error {
// Create the clh device config via the constructor to ensure default values are properly assigned
clhDevice := *chclient.NewVmAddDevice()
clhDevice.Path = &device.SysfsDev
clhDevice.Id = &device.ID
pciInfo, _, err := cl.VmAddDevicePut(ctx, clhDevice)
if err != nil {
return fmt.Errorf("Failed to hotplug device %+v %s", device, openAPIClientError(err))
}
clh.devicesIds[device.ID] = pciInfo.GetId()
// clh doesn't use bridges, so the PCI path is simply the slot
// number of the device. This will break if clh starts using
@@ -753,13 +755,15 @@ func (clh *cloudHypervisor) HotplugRemoveDevice(ctx context.Context, devInfo int
ctx, cancel := context.WithTimeout(context.Background(), clhHotPlugAPITimeout*time.Second)
defer cancel()
originalDeviceID := clh.devicesIds[deviceID]
remove := *chclient.NewVmRemoveDevice()
remove.Id = &deviceID
remove.Id = &originalDeviceID
_, err := cl.VmRemoveDevicePut(ctx, remove)
if err != nil {
err = fmt.Errorf("failed to hotplug remove (unplug) device %+v: %s", devInfo, openAPIClientError(err))
}
delete(clh.devicesIds, deviceID)
return nil, err
}

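The devicesIds map introduced above records, for each device the runtime hot-plugs, the ID that Cloud Hypervisor itself generated, so the unplug path can hand Cloud Hypervisor its own ID back. A stand-alone sketch of that bookkeeping (the type, method names, and IDs are illustrative, not the hypervisor API):

package example

import "fmt"

// deviceIDs maps the runtime's device ID to the ID generated by the
// hypervisor when the device was hot-plugged.
type deviceIDs map[string]string

func (d deviceIDs) added(runtimeID, hypervisorID string) {
	d[runtimeID] = hypervisorID
}

// removing returns the hypervisor's own ID, which is what the remove call
// must reference, and forgets the mapping.
func (d deviceIDs) removing(runtimeID string) string {
	id := d[runtimeID]
	delete(d, runtimeID)
	return id
}

func main() {
	ids := deviceIDs{}
	ids.added("drive-0", "_disk2") // "_disk2" stands in for the ID returned on hotplug
	fmt.Println(ids.removing("drive-0"))
}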
View File

@@ -384,6 +384,7 @@ func TestCloudHypervisorHotplugAddBlockDevice(t *testing.T) {
clh := &cloudHypervisor{}
clh.config = clhConfig
clh.APIClient = &clhClientMock{}
clh.devicesIds = make(map[string]string)
clh.config.BlockDeviceDriver = config.VirtioBlock
err = clh.hotplugAddBlockDevice(&config.BlockDrive{Pmem: false})
@@ -406,6 +407,7 @@ func TestCloudHypervisorHotplugRemoveDevice(t *testing.T) {
clh := &cloudHypervisor{}
clh.config = clhConfig
clh.APIClient = &clhClientMock{}
clh.devicesIds = make(map[string]string)
_, err = clh.HotplugRemoveDevice(context.Background(), &config.BlockDrive{}, BlockDev)
assert.NoError(err, "Hotplug remove block device expected no error")

View File

@@ -12,6 +12,7 @@ import (
"os"
"path/filepath"
"strconv"
"strings"
"syscall"
"time"
@@ -638,6 +639,26 @@ func (c *Container) createBlockDevices(ctx context.Context) error {
c.mounts[i].Type = mntInfo.FsType
c.mounts[i].Options = mntInfo.Options
c.mounts[i].ReadOnly = readonly
for key, value := range mntInfo.Metadata {
switch key {
case volume.FSGroupMetadataKey:
gid, err := strconv.Atoi(value)
if err != nil {
c.Logger().WithError(err).Errorf("invalid group id value %s provided for key %s", value, volume.FSGroupMetadataKey)
continue
}
c.mounts[i].FSGroup = &gid
case volume.FSGroupChangePolicyMetadataKey:
if _, exists := mntInfo.Metadata[volume.FSGroupMetadataKey]; !exists {
c.Logger().Errorf("%s specified without provding the group id with key %s", volume.FSGroupChangePolicyMetadataKey, volume.FSGroupMetadataKey)
continue
}
c.mounts[i].FSGroupChangePolicy = volume.FSGroupChangePolicy(value)
default:
c.Logger().Warnf("Ignoring unsupported direct-assignd volume metadata key: %s, value: %s", key, value)
}
}
}
var stat unix.Stat_t
@@ -1060,7 +1081,18 @@ func (c *Container) signalProcess(ctx context.Context, processID string, signal
return fmt.Errorf("Container not ready, running or paused, impossible to signal the container")
}
return c.sandbox.agent.signalProcess(ctx, c, processID, signal, all)
// kill(2) method can return ESRCH in certain cases, which is not handled by containerd cri server in container_stop.go.
// CRIO server also doesn't handle ESRCH. So kata runtime will swallow it here.
var err error
if err = c.sandbox.agent.signalProcess(ctx, c, processID, signal, all); err != nil &&
strings.Contains(err.Error(), "ESRCH: No such process") {
c.Logger().WithFields(logrus.Fields{
"container": c.id,
"process-id": processID,
}).Warn("signal encounters ESRCH, process already finished")
return nil
}
return err
}
func (c *Container) winsizeProcess(ctx context.Context, processID string, height, width uint32) error {

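The new signalProcess logic above treats ESRCH from the agent as success, since neither the containerd CRI server nor CRI-O handles that error when stopping a container. A reduced, hypothetical sketch of the rule, outside the runtime's own types:

package example

import "strings"

// swallowESRCH returns nil when the error says the process is already gone,
// mirroring the behaviour added in Container.signalProcess above.
func swallowESRCH(err error) error {
	if err != nil && strings.Contains(err.Error(), "ESRCH: No such process") {
		return nil // the process already exited; treat the signal as delivered
	}
	return err
}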
View File

@@ -13,14 +13,12 @@ import (
vc "github.com/kata-containers/kata-containers/src/runtime/virtcontainers"
"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/factory/direct"
"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/persist/fs"
)
func TestTemplateFactory(t *testing.T) {
assert := assert.New(t)
testDir := fs.MockStorageRootPath()
defer fs.MockStorageDestroy()
testDir := t.TempDir()
hyperConfig := vc.HypervisorConfig{
KernelPath: testDir,

View File

@@ -12,14 +12,12 @@ import (
"github.com/stretchr/testify/assert"
vc "github.com/kata-containers/kata-containers/src/runtime/virtcontainers"
"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/persist/fs"
)
func TestTemplateFactory(t *testing.T) {
assert := assert.New(t)
testDir := fs.MockStorageRootPath()
defer fs.MockStorageDestroy()
testDir := t.TempDir()
hyperConfig := vc.HypervisorConfig{
KernelPath: testDir,

View File

@@ -12,7 +12,6 @@ import (
vc "github.com/kata-containers/kata-containers/src/runtime/virtcontainers"
"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/factory/base"
"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/persist/fs"
"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/pkg/mock"
"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/utils"
"github.com/sirupsen/logrus"
@@ -39,10 +38,10 @@ func TestNewFactory(t *testing.T) {
_, err = NewFactory(ctx, config, false)
assert.Error(err)
defer fs.MockStorageDestroy()
testDir := t.TempDir()
config.VMConfig.HypervisorConfig = vc.HypervisorConfig{
KernelPath: fs.MockStorageRootPath(),
ImagePath: fs.MockStorageRootPath(),
KernelPath: testDir,
ImagePath: testDir,
}
// direct
@@ -69,7 +68,7 @@ func TestNewFactory(t *testing.T) {
defer hybridVSockTTRPCMock.Stop()
config.Template = true
config.TemplatePath = fs.MockStorageRootPath()
config.TemplatePath = testDir
f, err = NewFactory(ctx, config, false)
assert.Nil(err)
f.CloseFactory(ctx)
@@ -134,8 +133,7 @@ func TestCheckVMConfig(t *testing.T) {
err = checkVMConfig(config1, config2)
assert.Nil(err)
testDir := fs.MockStorageRootPath()
defer fs.MockStorageDestroy()
testDir := t.TempDir()
config1.HypervisorConfig = vc.HypervisorConfig{
KernelPath: testDir,
@@ -155,8 +153,7 @@ func TestCheckVMConfig(t *testing.T) {
func TestFactoryGetVM(t *testing.T) {
assert := assert.New(t)
testDir := fs.MockStorageRootPath()
defer fs.MockStorageDestroy()
testDir := t.TempDir()
hyperConfig := vc.HypervisorConfig{
KernelPath: testDir,
@@ -321,8 +318,7 @@ func TestDeepCompare(t *testing.T) {
config.VMConfig = vc.VMConfig{
HypervisorType: vc.MockHypervisor,
}
testDir := fs.MockStorageRootPath()
defer fs.MockStorageDestroy()
testDir := t.TempDir()
config.VMConfig.HypervisorConfig = vc.HypervisorConfig{
KernelPath: testDir,

View File

@@ -16,7 +16,6 @@ import (
"github.com/stretchr/testify/assert"
vc "github.com/kata-containers/kata-containers/src/runtime/virtcontainers"
"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/persist/fs"
"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/pkg/mock"
)
@@ -32,8 +31,7 @@ func TestTemplateFactory(t *testing.T) {
templateWaitForAgent = 1 * time.Microsecond
testDir := fs.MockStorageRootPath()
defer fs.MockStorageDestroy()
testDir := t.TempDir()
hyperConfig := vc.HypervisorConfig{
KernelPath: testDir,

View File

@@ -27,6 +27,7 @@ import (
hv "github.com/kata-containers/kata-containers/src/runtime/pkg/hypervisors"
"github.com/kata-containers/kata-containers/src/runtime/pkg/katautils/katatrace"
"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/device/config"
"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/persist/fs"
"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/pkg/firecracker/client"
models "github.com/kata-containers/kata-containers/src/runtime/virtcontainers/pkg/firecracker/client/models"
ops "github.com/kata-containers/kata-containers/src/runtime/virtcontainers/pkg/firecracker/client/operations"
@@ -84,8 +85,6 @@ const (
fcMetricsFifo = "metrics.fifo"
defaultFcConfig = "fcConfig.json"
// storagePathSuffix mirrors persist/fs/fs.go:storagePathSuffix
storagePathSuffix = "vc"
)
// Specify the minimum version of firecracker supported
@@ -244,7 +243,7 @@ func (fc *firecracker) setPaths(hypervisorConfig *HypervisorConfig) {
// <cgroups_base>/<exec_file_name>/<id>/
hypervisorName := filepath.Base(hypervisorConfig.HypervisorPath)
//fs.RunStoragePath cannot be used as we need exec perms
fc.chrootBaseDir = filepath.Join("/run", storagePathSuffix)
fc.chrootBaseDir = filepath.Join("/run", fs.StoragePathSuffix)
fc.vmPath = filepath.Join(fc.chrootBaseDir, hypervisorName, fc.id)
fc.jailerRoot = filepath.Join(fc.vmPath, "root") // auto created by jailer

View File

@@ -19,6 +19,7 @@ import (
"time"
"github.com/docker/go-units"
volume "github.com/kata-containers/kata-containers/src/runtime/pkg/direct-volume"
"github.com/kata-containers/kata-containers/src/runtime/pkg/katautils/katatrace"
resCtrl "github.com/kata-containers/kata-containers/src/runtime/pkg/resourcecontrol"
"github.com/kata-containers/kata-containers/src/runtime/pkg/uuid"
@@ -167,6 +168,15 @@ func getPagesizeFromOpt(fsOpts []string) string {
return ""
}
func getFSGroupChangePolicy(policy volume.FSGroupChangePolicy) pbTypes.FSGroupChangePolicy {
switch policy {
case volume.FSGroupChangeOnRootMismatch:
return pbTypes.FSGroupChangePolicy_OnRootMismatch
default:
return pbTypes.FSGroupChangePolicy_Always
}
}
// Shared path handling:
// 1. create three directories for each sandbox:
// -. /run/kata-containers/shared/sandboxes/$sbx_id/mounts/, a directory to hold all host/guest shared mounts
@@ -910,7 +920,6 @@ func (k *kataAgent) constrainGRPCSpec(grpcSpec *grpc.Spec, passSeccomp bool, str
grpcSpec.Linux.Resources.Devices = nil
grpcSpec.Linux.Resources.Pids = nil
grpcSpec.Linux.Resources.BlockIO = nil
grpcSpec.Linux.Resources.HugepageLimits = nil
grpcSpec.Linux.Resources.Network = nil
if grpcSpec.Linux.Resources.CPU != nil {
grpcSpec.Linux.Resources.CPU.Cpus = ""
@@ -1469,6 +1478,12 @@ func (k *kataAgent) handleDeviceBlockVolume(c *Container, m Mount, device api.De
if len(vol.Options) == 0 {
vol.Options = m.Options
}
if m.FSGroup != nil {
vol.FsGroup = &grpc.FSGroup{
GroupId: uint32(*m.FSGroup),
GroupChangePolicy: getFSGroupChangePolicy(m.FSGroupChangePolicy),
}
}
return vol, nil
}
@@ -1549,11 +1564,9 @@ func (k *kataAgent) handleBlkOCIMounts(c *Container, spec *specs.Spec) ([]*grpc.
// Each device will be mounted at a unique location within the VM only once. Mounting
// to the container specific location is handled within the OCI spec. Let's ensure that
// the storage mount point is unique for each device. This is then utilized as the source
// in the OCI spec. If multiple containers mount the same block device, it's refcounted inside
// in the OCI spec. If multiple containers mount the same block device, it's ref-counted inside
// the guest by Kata agent.
filename := b64.StdEncoding.EncodeToString([]byte(vol.Source))
// Make the base64 encoding path safe.
filename = strings.ReplaceAll(filename, "/", "_")
filename := b64.URLEncoding.EncodeToString([]byte(vol.Source))
path := filepath.Join(kataGuestSandboxStorageDir(), filename)
// Update applicable OCI mount source

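The filename change above switches from StdEncoding plus a manual strings.ReplaceAll to URLEncoding, whose alphabet never contains '/' and is therefore safe to use directly as a path component. A small illustration (the input path is made up):

package main

import (
	b64 "encoding/base64"
	"fmt"
)

func main() {
	src := "/dev/disk/by-id/some-volume"
	// The standard alphabet can emit '+' and '/', so the result is not
	// necessarily a single valid path component without further escaping.
	fmt.Println(b64.StdEncoding.EncodeToString([]byte(src)))
	// The URL-safe alphabet uses '-' and '_' instead, so no replacement pass
	// is needed before joining it into a guest path.
	fmt.Println(b64.URLEncoding.EncodeToString([]byte(src)))
}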
View File

@@ -23,6 +23,7 @@ import (
"github.com/stretchr/testify/assert"
"code.cloudfoundry.org/bytefmt"
volume "github.com/kata-containers/kata-containers/src/runtime/pkg/direct-volume"
"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/device/api"
"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/device/config"
"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/device/drivers"
@@ -190,8 +191,7 @@ func TestKataAgentSendReq(t *testing.T) {
func TestHandleEphemeralStorage(t *testing.T) {
k := kataAgent{}
var ociMounts []specs.Mount
mountSource := "/tmp/mountPoint"
os.Mkdir(mountSource, 0755)
mountSource := t.TempDir()
mount := specs.Mount{
Type: KataEphemeralDevType,
@@ -211,8 +211,7 @@ func TestHandleEphemeralStorage(t *testing.T) {
func TestHandleLocalStorage(t *testing.T) {
k := kataAgent{}
var ociMounts []specs.Mount
mountSource := "/tmp/mountPoint"
os.Mkdir(mountSource, 0755)
mountSource := t.TempDir()
mount := specs.Mount{
Type: KataLocalDevType,
@@ -234,6 +233,7 @@ func TestHandleLocalStorage(t *testing.T) {
}
func TestHandleDeviceBlockVolume(t *testing.T) {
var gid = 2000
k := kataAgent{}
// nolint: govet
@@ -315,6 +315,27 @@ func TestHandleDeviceBlockVolume(t *testing.T) {
Source: testSCSIAddr,
},
},
{
BlockDeviceDriver: config.VirtioBlock,
inputMount: Mount{
FSGroup: &gid,
FSGroupChangePolicy: volume.FSGroupChangeOnRootMismatch,
},
inputDev: &drivers.BlockDevice{
BlockDrive: &config.BlockDrive{
PCIPath: testPCIPath,
VirtPath: testVirtPath,
},
},
resultVol: &pb.Storage{
Driver: kataBlkDevType,
Source: testPCIPath.String(),
FsGroup: &pb.FSGroup{
GroupId: uint32(gid),
GroupChangePolicy: pbTypes.FSGroupChangePolicy_OnRootMismatch,
},
},
},
}
for _, test := range tests {
@@ -609,7 +630,7 @@ func TestConstrainGRPCSpec(t *testing.T) {
assert.NotNil(g.Linux.Resources.Memory)
assert.Nil(g.Linux.Resources.Pids)
assert.Nil(g.Linux.Resources.BlockIO)
assert.Nil(g.Linux.Resources.HugepageLimits)
assert.Len(g.Linux.Resources.HugepageLimits, 0)
assert.Nil(g.Linux.Resources.Network)
assert.NotNil(g.Linux.Resources.CPU)
assert.Equal(g.Process.SelinuxLabel, "")
@@ -665,8 +686,7 @@ func TestHandleShm(t *testing.T) {
// In case the type of mount is ephemeral, the container mount is not
// shared with the sandbox shm.
ociMounts[0].Type = KataEphemeralDevType
mountSource := "/tmp/mountPoint"
os.Mkdir(mountSource, 0755)
mountSource := t.TempDir()
ociMounts[0].Source = mountSource
k.handleShm(ociMounts, sandbox)

View File

@@ -18,6 +18,8 @@ const (
watcherChannelSize = 128
)
var monitorLog = virtLog.WithField("subsystem", "virtcontainers/monitor")
// nolint: govet
type monitor struct {
watchers []chan error
@@ -33,6 +35,9 @@ type monitor struct {
}
func newMonitor(s *Sandbox) *monitor {
// there should only be one monitor per sandbox,
// so it's safe to use monitorLog as a global variable.
monitorLog = monitorLog.WithField("sandbox", s.ID())
return &monitor{
sandbox: s,
checkInterval: defaultCheckInterval,
@@ -72,6 +77,7 @@ func (m *monitor) newWatcher(ctx context.Context) (chan error, error) {
}
func (m *monitor) notify(ctx context.Context, err error) {
monitorLog.WithError(err).Warn("notify on errors")
m.sandbox.agent.markDead(ctx)
m.Lock()
@@ -85,18 +91,19 @@ func (m *monitor) notify(ctx context.Context, err error) {
// but just in case...
defer func() {
if x := recover(); x != nil {
virtLog.Warnf("watcher closed channel: %v", x)
monitorLog.Warnf("watcher closed channel: %v", x)
}
}()
for _, c := range m.watchers {
monitorLog.WithError(err).Warn("write error to watcher")
// throw away the message if it can not be written to the channel,
// so the notifier does not get stuck; the first error is the useful one.
select {
case c <- err:
default:
virtLog.WithField("channel-size", watcherChannelSize).Warnf("watcher channel is full, throw notify message")
monitorLog.WithField("channel-size", watcherChannelSize).Warnf("watcher channel is full, throw notify message")
}
}
}
@@ -104,6 +111,7 @@ func (m *monitor) notify(ctx context.Context, err error) {
func (m *monitor) stop() {
// wait outside of monitor lock for the watcher channel to exit.
defer m.wg.Wait()
monitorLog.Info("stopping monitor")
m.Lock()
defer m.Unlock()
@@ -122,7 +130,7 @@ func (m *monitor) stop() {
// but just in case...
defer func() {
if x := recover(); x != nil {
virtLog.Warnf("watcher closed channel: %v", x)
monitorLog.Warnf("watcher closed channel: %v", x)
}
}()

View File

@@ -14,6 +14,7 @@ import (
"syscall"
merr "github.com/hashicorp/go-multierror"
volume "github.com/kata-containers/kata-containers/src/runtime/pkg/direct-volume"
"github.com/kata-containers/kata-containers/src/runtime/pkg/katautils/katatrace"
"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/utils"
"github.com/pkg/errors"
@@ -325,6 +326,7 @@ func bindMountContainerRootfs(ctx context.Context, shareDir, cid, cRootFs string
}
// Mount describes a container mount.
// nolint: govet
type Mount struct {
// Source is the source of the mount.
Source string
@@ -352,6 +354,14 @@ type Mount struct {
// ReadOnly specifies if the mount should be read only or not
ReadOnly bool
// FSGroup is a group ID that the group ownership of the files for the mounted
// volume will be changed to, when set.
FSGroup *int
// FSGroupChangePolicy specifies the policy that will be used when applying
// group id ownership change for a volume.
FSGroupChangePolicy volume.FSGroupChangePolicy
}
func isSymlink(path string) bool {

View File

@@ -383,6 +383,12 @@ func TestBindMountFailingMount(t *testing.T) {
assert.Error(err)
}
func cleanupFooMount() {
dest := filepath.Join(testDir, "fooDirDest")
syscall.Unmount(dest, 0)
}
func TestBindMountSuccessful(t *testing.T) {
assert := assert.New(t)
if tc.NotValid(ktu.NeedRoot()) {
@@ -391,9 +397,7 @@ func TestBindMountSuccessful(t *testing.T) {
source := filepath.Join(testDir, "fooDirSrc")
dest := filepath.Join(testDir, "fooDirDest")
syscall.Unmount(dest, 0)
os.Remove(source)
os.Remove(dest)
t.Cleanup(cleanupFooMount)
err := os.MkdirAll(source, mountPerm)
assert.NoError(err)
@@ -403,8 +407,6 @@ func TestBindMountSuccessful(t *testing.T) {
err = bindMount(context.Background(), source, dest, false, "private")
assert.NoError(err)
syscall.Unmount(dest, 0)
}
func TestBindMountReadonlySuccessful(t *testing.T) {
@@ -415,9 +417,7 @@ func TestBindMountReadonlySuccessful(t *testing.T) {
source := filepath.Join(testDir, "fooDirSrc")
dest := filepath.Join(testDir, "fooDirDest")
syscall.Unmount(dest, 0)
os.Remove(source)
os.Remove(dest)
t.Cleanup(cleanupFooMount)
err := os.MkdirAll(source, mountPerm)
assert.NoError(err)
@@ -428,8 +428,6 @@ func TestBindMountReadonlySuccessful(t *testing.T) {
err = bindMount(context.Background(), source, dest, true, "private")
assert.NoError(err)
defer syscall.Unmount(dest, 0)
// should not be able to create file in read-only mount
destFile := filepath.Join(dest, "foo")
_, err = os.OpenFile(destFile, os.O_CREATE, mountPerm)
@@ -444,9 +442,7 @@ func TestBindMountInvalidPgtypes(t *testing.T) {
source := filepath.Join(testDir, "fooDirSrc")
dest := filepath.Join(testDir, "fooDirDest")
syscall.Unmount(dest, 0)
os.Remove(source)
os.Remove(dest)
t.Cleanup(cleanupFooMount)
err := os.MkdirAll(source, mountPerm)
assert.NoError(err)

View File

@@ -29,11 +29,11 @@ const dirMode = os.FileMode(0700) | os.ModeDir
// fileMode is the permission bits used for creating a file
const fileMode = os.FileMode(0600)
// storagePathSuffix is the suffix used for all storage paths
// StoragePathSuffix is the suffix used for all storage paths
//
// Note: this very brief path represents "virtcontainers". It is as
// terse as possible to minimise path length.
const storagePathSuffix = "vc"
const StoragePathSuffix = "vc"
// sandboxPathSuffix is the suffix used for sandbox storage
const sandboxPathSuffix = "sbs"
@@ -64,7 +64,7 @@ func Init() (persistapi.PersistDriver, error) {
return &FS{
sandboxState: &persistapi.SandboxState{},
containerState: make(map[string]persistapi.ContainerState),
storageRootPath: filepath.Join("/run", storagePathSuffix),
storageRootPath: filepath.Join("/run", StoragePathSuffix),
driverName: "fs",
}, nil
}

View File

@@ -14,8 +14,8 @@ import (
"github.com/stretchr/testify/assert"
)
func getFsDriver() (*FS, error) {
driver, err := MockFSInit()
func getFsDriver(t *testing.T) (*FS, error) {
driver, err := MockFSInit(t.TempDir())
if err != nil {
return nil, fmt.Errorf("failed to init fs driver")
}
@@ -27,16 +27,8 @@ func getFsDriver() (*FS, error) {
return fs.FS, nil
}
func initTestDir() func() {
return func() {
os.RemoveAll(MockStorageRootPath())
}
}
func TestFsLockShared(t *testing.T) {
defer initTestDir()()
fs, err := getFsDriver()
fs, err := getFsDriver(t)
assert.Nil(t, err)
assert.NotNil(t, fs)
@@ -61,9 +53,7 @@ func TestFsLockShared(t *testing.T) {
}
func TestFsLockExclusive(t *testing.T) {
defer initTestDir()()
fs, err := getFsDriver()
fs, err := getFsDriver(t)
assert.Nil(t, err)
assert.NotNil(t, fs)
@@ -89,9 +79,7 @@ func TestFsLockExclusive(t *testing.T) {
}
func TestFsDriver(t *testing.T) {
defer initTestDir()()
fs, err := getFsDriver()
fs, err := getFsDriver(t)
assert.Nil(t, err)
assert.NotNil(t, fs)
@@ -162,12 +150,10 @@ func TestFsDriver(t *testing.T) {
}
func TestGlobalReadWrite(t *testing.T) {
defer initTestDir()()
relPath := "test/123/aaa.json"
data := "hello this is testing global read write"
fs, err := getFsDriver()
fs, err := getFsDriver(t)
assert.Nil(t, err)
assert.NotNil(t, fs)

View File

@@ -7,19 +7,27 @@ package fs
import (
"fmt"
"os"
"path/filepath"
persistapi "github.com/kata-containers/kata-containers/src/runtime/virtcontainers/persist/api"
)
var mockRootPath = ""
type MockFS struct {
// inherit from FS. Overwrite if needed.
*FS
}
func EnableMockTesting(rootPath string) {
mockRootPath = rootPath
}
func MockStorageRootPath() string {
return filepath.Join(os.TempDir(), "vc", "mockfs")
if mockRootPath == "" {
panic("Using uninitialized mock storage root path")
}
return mockRootPath
}
func MockRunStoragePath() string {
@@ -30,11 +38,7 @@ func MockRunVMStoragePath() string {
return filepath.Join(MockStorageRootPath(), vmPathSuffix)
}
func MockStorageDestroy() {
os.RemoveAll(MockStorageRootPath())
}
func MockFSInit() (persistapi.PersistDriver, error) {
func MockFSInit(rootPath string) (persistapi.PersistDriver, error) {
driver, err := Init()
if err != nil {
return nil, fmt.Errorf("Could not create Mock FS driver: %v", err)
@@ -45,8 +49,15 @@ func MockFSInit() (persistapi.PersistDriver, error) {
return nil, fmt.Errorf("Could not create Mock FS driver")
}
fsDriver.storageRootPath = MockStorageRootPath()
fsDriver.storageRootPath = rootPath
fsDriver.driverName = "mockfs"
return &MockFS{fsDriver}, nil
}
func MockAutoInit() (persistapi.PersistDriver, error) {
if mockRootPath != "" {
return MockFSInit(MockStorageRootPath())
}
return nil, nil
}

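With the reworked mock persist driver above, a test opts into the mockfs driver by supplying its own root path instead of relying on a fixed location under /tmp. A hedged usage sketch based only on the functions shown in this diff (EnableMockTesting and MockAutoInit); the test name is hypothetical:

package example

import (
	"testing"

	"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/persist/fs"
)

func TestUsesMockPersistDriver(t *testing.T) {
	// Point the mock driver at a per-test directory; t.TempDir() is removed
	// automatically, so no MockStorageDestroy-style cleanup is required.
	fs.EnableMockTesting(t.TempDir())

	driver, err := fs.MockAutoInit() // returns the mock driver because a root path is set
	if err != nil {
		t.Fatal(err)
	}
	if driver == nil {
		t.Fatal("expected the mockfs driver, got nil")
	}
}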
View File

@@ -0,0 +1,34 @@
// Copyright Red Hat.
//
// SPDX-License-Identifier: Apache-2.0
//
package fs
import (
"testing"
"github.com/stretchr/testify/assert"
)
func TestMockAutoInit(t *testing.T) {
assert := assert.New(t)
orgMockRootPath := mockRootPath
defer func() {
mockRootPath = orgMockRootPath
}()
mockRootPath = ""
fsd, err := MockAutoInit()
assert.Nil(fsd)
assert.NoError(err)
// Testing mock driver
mockRootPath = t.TempDir()
fsd, err = MockAutoInit()
assert.NoError(err)
expectedFS, err := MockFSInit(MockStorageRootPath())
assert.NoError(err)
assert.Equal(expectedFS, fsd)
}

View File

@@ -28,13 +28,8 @@ var (
RootFSName: fs.Init,
RootlessFSName: fs.RootlessInit,
}
mockTesting = false
)
func EnableMockTesting() {
mockTesting = true
}
// GetDriver returns new PersistDriver according to driver name
func GetDriverByName(name string) (persistapi.PersistDriver, error) {
if expErr != nil {
@@ -56,8 +51,9 @@ func GetDriver() (persistapi.PersistDriver, error) {
return nil, expErr
}
if mockTesting {
return fs.MockFSInit()
mock, err := fs.MockAutoInit()
if mock != nil || err != nil {
return mock, err
}
if rootless.IsRootless() {

View File

@@ -27,12 +27,6 @@ func TestGetDriverByName(t *testing.T) {
func TestGetDriver(t *testing.T) {
assert := assert.New(t)
orgMockTesting := mockTesting
defer func() {
mockTesting = orgMockTesting
}()
mockTesting = false
fsd, err := GetDriver()
assert.NoError(err)
@@ -46,12 +40,4 @@ func TestGetDriver(t *testing.T) {
assert.NoError(err)
assert.Equal(expectedFS, fsd)
// Testing mock driver
mockTesting = true
fsd, err = GetDriver()
assert.NoError(err)
expectedFS, err = fs.MockFSInit()
assert.NoError(err)
assert.Equal(expectedFS, fsd)
}

View File

@@ -1988,6 +1988,52 @@ func (m *SetGuestDateTimeRequest) XXX_DiscardUnknown() {
var xxx_messageInfo_SetGuestDateTimeRequest proto.InternalMessageInfo
// FSGroup consists of the group id and group ownership change policy
// that a volume should have its ownership changed to.
type FSGroup struct {
// GroupID is the ID that the group ownership of the
// files in the mounted volume will need to be changed to.
GroupId uint32 `protobuf:"varint,2,opt,name=group_id,json=groupId,proto3" json:"group_id,omitempty"`
// GroupChangePolicy specifies the policy for applying group id
// ownership change on a mounted volume.
GroupChangePolicy protocols.FSGroupChangePolicy `protobuf:"varint,3,opt,name=group_change_policy,json=groupChangePolicy,proto3,enum=types.FSGroupChangePolicy" json:"group_change_policy,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *FSGroup) Reset() { *m = FSGroup{} }
func (*FSGroup) ProtoMessage() {}
func (*FSGroup) Descriptor() ([]byte, []int) {
return fileDescriptor_712ce9a559fda969, []int{47}
}
func (m *FSGroup) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *FSGroup) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_FSGroup.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *FSGroup) XXX_Merge(src proto.Message) {
xxx_messageInfo_FSGroup.Merge(m, src)
}
func (m *FSGroup) XXX_Size() int {
return m.Size()
}
func (m *FSGroup) XXX_DiscardUnknown() {
xxx_messageInfo_FSGroup.DiscardUnknown(m)
}
var xxx_messageInfo_FSGroup proto.InternalMessageInfo
// Storage represents both the rootfs of the container, and any volume that
// could have been defined through the Mount list of the OCI specification.
type Storage struct {
@@ -2011,11 +2057,14 @@ type Storage struct {
// device, "9p" for shared filesystem, or "tmpfs" for shared /dev/shm.
Fstype string `protobuf:"bytes,4,opt,name=fstype,proto3" json:"fstype,omitempty"`
// Options describes the additional options that might be needed to
// mount properly the storage filesytem.
// mount properly the storage filesystem.
Options []string `protobuf:"bytes,5,rep,name=options,proto3" json:"options,omitempty"`
// MountPoint refers to the path where the storage should be mounted
// inside the VM.
MountPoint string `protobuf:"bytes,6,opt,name=mount_point,json=mountPoint,proto3" json:"mount_point,omitempty"`
MountPoint string `protobuf:"bytes,6,opt,name=mount_point,json=mountPoint,proto3" json:"mount_point,omitempty"`
// FSGroup consists of the group ID and group ownership change policy
// that the mounted volume must have its group ID changed to when specified.
FsGroup *FSGroup `protobuf:"bytes,7,opt,name=fs_group,json=fsGroup,proto3" json:"fs_group,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
@@ -2024,7 +2073,7 @@ type Storage struct {
func (m *Storage) Reset() { *m = Storage{} }
func (*Storage) ProtoMessage() {}
func (*Storage) Descriptor() ([]byte, []int) {
return fileDescriptor_712ce9a559fda969, []int{47}
return fileDescriptor_712ce9a559fda969, []int{48}
}
func (m *Storage) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -2095,7 +2144,7 @@ type Device struct {
func (m *Device) Reset() { *m = Device{} }
func (*Device) ProtoMessage() {}
func (*Device) Descriptor() ([]byte, []int) {
return fileDescriptor_712ce9a559fda969, []int{48}
return fileDescriptor_712ce9a559fda969, []int{49}
}
func (m *Device) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -2136,7 +2185,7 @@ type StringUser struct {
func (m *StringUser) Reset() { *m = StringUser{} }
func (*StringUser) ProtoMessage() {}
func (*StringUser) Descriptor() ([]byte, []int) {
return fileDescriptor_712ce9a559fda969, []int{49}
return fileDescriptor_712ce9a559fda969, []int{50}
}
func (m *StringUser) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -2193,7 +2242,7 @@ type CopyFileRequest struct {
func (m *CopyFileRequest) Reset() { *m = CopyFileRequest{} }
func (*CopyFileRequest) ProtoMessage() {}
func (*CopyFileRequest) Descriptor() ([]byte, []int) {
return fileDescriptor_712ce9a559fda969, []int{50}
return fileDescriptor_712ce9a559fda969, []int{51}
}
func (m *CopyFileRequest) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -2231,7 +2280,7 @@ type GetOOMEventRequest struct {
func (m *GetOOMEventRequest) Reset() { *m = GetOOMEventRequest{} }
func (*GetOOMEventRequest) ProtoMessage() {}
func (*GetOOMEventRequest) Descriptor() ([]byte, []int) {
return fileDescriptor_712ce9a559fda969, []int{51}
return fileDescriptor_712ce9a559fda969, []int{52}
}
func (m *GetOOMEventRequest) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -2270,7 +2319,7 @@ type OOMEvent struct {
func (m *OOMEvent) Reset() { *m = OOMEvent{} }
func (*OOMEvent) ProtoMessage() {}
func (*OOMEvent) Descriptor() ([]byte, []int) {
return fileDescriptor_712ce9a559fda969, []int{52}
return fileDescriptor_712ce9a559fda969, []int{53}
}
func (m *OOMEvent) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -2309,7 +2358,7 @@ type AddSwapRequest struct {
func (m *AddSwapRequest) Reset() { *m = AddSwapRequest{} }
func (*AddSwapRequest) ProtoMessage() {}
func (*AddSwapRequest) Descriptor() ([]byte, []int) {
return fileDescriptor_712ce9a559fda969, []int{53}
return fileDescriptor_712ce9a559fda969, []int{54}
}
func (m *AddSwapRequest) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -2347,7 +2396,7 @@ type GetMetricsRequest struct {
func (m *GetMetricsRequest) Reset() { *m = GetMetricsRequest{} }
func (*GetMetricsRequest) ProtoMessage() {}
func (*GetMetricsRequest) Descriptor() ([]byte, []int) {
return fileDescriptor_712ce9a559fda969, []int{54}
return fileDescriptor_712ce9a559fda969, []int{55}
}
func (m *GetMetricsRequest) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -2386,7 +2435,7 @@ type Metrics struct {
func (m *Metrics) Reset() { *m = Metrics{} }
func (*Metrics) ProtoMessage() {}
func (*Metrics) Descriptor() ([]byte, []int) {
return fileDescriptor_712ce9a559fda969, []int{55}
return fileDescriptor_712ce9a559fda969, []int{56}
}
func (m *Metrics) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -2426,7 +2475,7 @@ type VolumeStatsRequest struct {
func (m *VolumeStatsRequest) Reset() { *m = VolumeStatsRequest{} }
func (*VolumeStatsRequest) ProtoMessage() {}
func (*VolumeStatsRequest) Descriptor() ([]byte, []int) {
return fileDescriptor_712ce9a559fda969, []int{56}
return fileDescriptor_712ce9a559fda969, []int{57}
}
func (m *VolumeStatsRequest) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -2467,7 +2516,7 @@ type ResizeVolumeRequest struct {
func (m *ResizeVolumeRequest) Reset() { *m = ResizeVolumeRequest{} }
func (*ResizeVolumeRequest) ProtoMessage() {}
func (*ResizeVolumeRequest) Descriptor() ([]byte, []int) {
return fileDescriptor_712ce9a559fda969, []int{57}
return fileDescriptor_712ce9a559fda969, []int{58}
}
func (m *ResizeVolumeRequest) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -2546,6 +2595,7 @@ func init() {
proto.RegisterType((*GuestDetailsResponse)(nil), "grpc.GuestDetailsResponse")
proto.RegisterType((*MemHotplugByProbeRequest)(nil), "grpc.MemHotplugByProbeRequest")
proto.RegisterType((*SetGuestDateTimeRequest)(nil), "grpc.SetGuestDateTimeRequest")
proto.RegisterType((*FSGroup)(nil), "grpc.FSGroup")
proto.RegisterType((*Storage)(nil), "grpc.Storage")
proto.RegisterType((*Device)(nil), "grpc.Device")
proto.RegisterType((*StringUser)(nil), "grpc.StringUser")
@@ -2564,198 +2614,203 @@ func init() {
}
var fileDescriptor_712ce9a559fda969 = []byte{
// 3055 bytes of a gzipped FileDescriptorProto
// 3127 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xc4, 0x1a, 0xcb, 0x72, 0x24, 0x47,
0xd1, 0xf3, 0x90, 0x66, 0x26, 0xe7, 0xa5, 0x69, 0x69, 0xb5, 0xb3, 0x63, 0x5b, 0xac, 0x7b, 0xed,
0xf5, 0xda, 0xc6, 0x92, 0xbd, 0x76, 0xb0, 0x7e, 0x84, 0x59, 0x24, 0xad, 0x2c, 0xc9, 0xb6, 0xbc,
0x43, 0xcb, 0xc2, 0x04, 0x04, 0x74, 0xb4, 0xba, 0x6b, 0x47, 0x65, 0x4d, 0x77, 0xb5, 0xab, 0xab,
0xb5, 0x92, 0x89, 0x20, 0x38, 0xc1, 0x8d, 0x23, 0x37, 0x7e, 0x80, 0xe0, 0xc6, 0x91, 0x0b, 0x07,
0x0e, 0x0e, 0x4e, 0x1c, 0x39, 0x11, 0x78, 0x3f, 0x81, 0x2f, 0x20, 0xea, 0xd5, 0x5d, 0x3d, 0x0f,
0x19, 0x14, 0x1b, 0xc1, 0x65, 0xa2, 0x33, 0x2b, 0x2b, 0x5f, 0x55, 0x99, 0x95, 0x59, 0x35, 0x30,
0x1c, 0x61, 0x76, 0x92, 0x1e, 0xaf, 0xfb, 0x24, 0xdc, 0x38, 0xf5, 0x98, 0xf7, 0xba, 0x4f, 0x22,
0xe6, 0xe1, 0x08, 0xd1, 0x64, 0x0a, 0x4e, 0xa8, 0xbf, 0x31, 0xc6, 0xc7, 0xc9, 0x46, 0x4c, 0x09,
0x23, 0x3e, 0x19, 0xab, 0xaf, 0x64, 0xc3, 0x1b, 0xa1, 0x88, 0xad, 0x0b, 0xc0, 0xaa, 0x8e, 0x68,
0xec, 0x0f, 0x1a, 0xc4, 0xc7, 0x12, 0x31, 0x68, 0xf8, 0x89, 0xfe, 0x6c, 0xb2, 0x8b, 0x18, 0x25,
0x0a, 0x78, 0x76, 0x44, 0xc8, 0x68, 0x8c, 0x24, 0x8f, 0xe3, 0xf4, 0xd1, 0x06, 0x0a, 0x63, 0x76,
0x21, 0x07, 0xed, 0xdf, 0x97, 0x61, 0x75, 0x9b, 0x22, 0x8f, 0xa1, 0x6d, 0xad, 0x80, 0x83, 0xbe,
0x4c, 0x51, 0xc2, 0xac, 0x17, 0xa0, 0x95, 0x29, 0xe5, 0xe2, 0xa0, 0x5f, 0xba, 0x59, 0xba, 0xd3,
0x70, 0x9a, 0x19, 0x6e, 0x3f, 0xb0, 0xae, 0x43, 0x0d, 0x9d, 0x23, 0x9f, 0x8f, 0x96, 0xc5, 0xe8,
0x22, 0x07, 0xf7, 0x03, 0xeb, 0x4d, 0x68, 0x26, 0x8c, 0xe2, 0x68, 0xe4, 0xa6, 0x09, 0xa2, 0xfd,
0xca, 0xcd, 0xd2, 0x9d, 0xe6, 0xdd, 0xa5, 0x75, 0xae, 0xf2, 0xfa, 0xa1, 0x18, 0x38, 0x4a, 0x10,
0x75, 0x20, 0xc9, 0xbe, 0xad, 0xdb, 0x50, 0x0b, 0xd0, 0x19, 0xf6, 0x51, 0xd2, 0xaf, 0xde, 0xac,
0xdc, 0x69, 0xde, 0x6d, 0x49, 0xf2, 0x07, 0x02, 0xe9, 0xe8, 0x41, 0xeb, 0x15, 0xa8, 0x27, 0x8c,
0x50, 0x6f, 0x84, 0x92, 0xfe, 0x82, 0x20, 0x6c, 0x6b, 0xbe, 0x02, 0xeb, 0x64, 0xc3, 0xd6, 0x73,
0x50, 0x79, 0xb8, 0xbd, 0xdf, 0x5f, 0x14, 0xd2, 0x41, 0x51, 0xc5, 0xc8, 0x77, 0x38, 0xda, 0xba,
0x05, 0xed, 0xc4, 0x8b, 0x82, 0x63, 0x72, 0xee, 0xc6, 0x38, 0x88, 0x92, 0x7e, 0xed, 0x66, 0xe9,
0x4e, 0xdd, 0x69, 0x29, 0xe4, 0x90, 0xe3, 0xec, 0xf7, 0xe0, 0xda, 0x21, 0xf3, 0x28, 0xbb, 0x82,
0x77, 0xec, 0x23, 0x58, 0x75, 0x50, 0x48, 0xce, 0xae, 0xe4, 0xda, 0x3e, 0xd4, 0x18, 0x0e, 0x11,
0x49, 0x99, 0x70, 0x6d, 0xdb, 0xd1, 0xa0, 0xfd, 0xc7, 0x12, 0x58, 0x3b, 0xe7, 0xc8, 0x1f, 0x52,
0xe2, 0xa3, 0x24, 0xf9, 0x3f, 0x2d, 0xd7, 0xcb, 0x50, 0x8b, 0xa5, 0x02, 0xfd, 0xaa, 0x20, 0x57,
0xab, 0xa0, 0xb5, 0xd2, 0xa3, 0xf6, 0x17, 0xb0, 0x72, 0x88, 0x47, 0x91, 0x37, 0x7e, 0x8a, 0xfa,
0xae, 0xc2, 0x62, 0x22, 0x78, 0x0a, 0x55, 0xdb, 0x8e, 0x82, 0xec, 0x21, 0x58, 0x9f, 0x7b, 0x98,
0x3d, 0x3d, 0x49, 0xf6, 0xeb, 0xb0, 0x5c, 0xe0, 0x98, 0xc4, 0x24, 0x4a, 0x90, 0x50, 0x80, 0x79,
0x2c, 0x4d, 0x04, 0xb3, 0x05, 0x47, 0x41, 0x36, 0x81, 0xd5, 0xa3, 0x38, 0xb8, 0x62, 0x34, 0xdd,
0x85, 0x06, 0x45, 0x09, 0x49, 0x29, 0x8f, 0x81, 0xb2, 0x70, 0xea, 0x8a, 0x74, 0xea, 0x27, 0x38,
0x4a, 0xcf, 0x1d, 0x3d, 0xe6, 0xe4, 0x64, 0x6a, 0x7f, 0xb2, 0xe4, 0x2a, 0xfb, 0xf3, 0x3d, 0xb8,
0x36, 0xf4, 0xd2, 0xe4, 0x2a, 0xba, 0xda, 0xef, 0xf3, 0xbd, 0x9d, 0xa4, 0xe1, 0x95, 0x26, 0xff,
0xa1, 0x04, 0xf5, 0xed, 0x38, 0x3d, 0x4a, 0xbc, 0x11, 0xb2, 0xbe, 0x03, 0x4d, 0x46, 0x98, 0x37,
0x76, 0x53, 0x0e, 0x0a, 0xf2, 0xaa, 0x03, 0x02, 0x25, 0x09, 0x5e, 0x80, 0x56, 0x8c, 0xa8, 0x1f,
0xa7, 0x8a, 0xa2, 0x7c, 0xb3, 0x72, 0xa7, 0xea, 0x34, 0x25, 0x4e, 0x92, 0xac, 0xc3, 0xb2, 0x18,
0x73, 0x71, 0xe4, 0x9e, 0x22, 0x1a, 0xa1, 0x71, 0x48, 0x02, 0x24, 0x36, 0x47, 0xd5, 0xe9, 0x89,
0xa1, 0xfd, 0xe8, 0xe3, 0x6c, 0xc0, 0x7a, 0x15, 0x7a, 0x19, 0x3d, 0xdf, 0xf1, 0x82, 0xba, 0x2a,
0xa8, 0xbb, 0x8a, 0xfa, 0x48, 0xa1, 0xed, 0x5f, 0x42, 0xe7, 0xb3, 0x13, 0x4a, 0x18, 0x1b, 0xe3,
0x68, 0xf4, 0xc0, 0x63, 0x1e, 0x0f, 0xcd, 0x18, 0x51, 0x4c, 0x82, 0x44, 0x69, 0xab, 0x41, 0xeb,
0x35, 0xe8, 0x31, 0x49, 0x8b, 0x02, 0x57, 0xd3, 0x94, 0x05, 0xcd, 0x52, 0x36, 0x30, 0x54, 0xc4,
0x2f, 0x41, 0x27, 0x27, 0xe6, 0xc1, 0xad, 0xf4, 0x6d, 0x67, 0xd8, 0xcf, 0x70, 0x88, 0xec, 0x33,
0xe1, 0x2b, 0xb1, 0xc8, 0xd6, 0x6b, 0xd0, 0xc8, 0xfd, 0x50, 0x12, 0x3b, 0xa4, 0x23, 0x77, 0x88,
0x76, 0xa7, 0x53, 0xcf, 0x9c, 0xf2, 0x01, 0x74, 0x59, 0xa6, 0xb8, 0x1b, 0x78, 0xcc, 0x2b, 0x6e,
0xaa, 0xa2, 0x55, 0x4e, 0x87, 0x15, 0x60, 0xfb, 0x7d, 0x68, 0x0c, 0x71, 0x90, 0x48, 0xc1, 0x7d,
0xa8, 0xf9, 0x29, 0xa5, 0x28, 0x62, 0xda, 0x64, 0x05, 0x5a, 0x2b, 0xb0, 0x30, 0xc6, 0x21, 0x66,
0xca, 0x4c, 0x09, 0xd8, 0x04, 0xe0, 0x00, 0x85, 0x84, 0x5e, 0x08, 0x87, 0xad, 0xc0, 0x82, 0xb9,
0xb8, 0x12, 0xb0, 0x9e, 0x85, 0x46, 0xe8, 0x9d, 0x67, 0x8b, 0xca, 0x47, 0xea, 0xa1, 0x77, 0x2e,
0x95, 0xef, 0x43, 0xed, 0x91, 0x87, 0xc7, 0x7e, 0xc4, 0x94, 0x57, 0x34, 0x98, 0x0b, 0xac, 0x9a,
0x02, 0xff, 0x5a, 0x86, 0xa6, 0x94, 0x28, 0x15, 0x5e, 0x81, 0x05, 0xdf, 0xf3, 0x4f, 0x32, 0x91,
0x02, 0xb0, 0x6e, 0x6b, 0x45, 0xca, 0x66, 0x86, 0xcb, 0x35, 0xd5, 0xaa, 0x6d, 0x00, 0x24, 0x8f,
0xbd, 0x58, 0xe9, 0x56, 0x99, 0x43, 0xdc, 0xe0, 0x34, 0x52, 0xdd, 0xb7, 0xa0, 0x25, 0xf7, 0x9d,
0x9a, 0x52, 0x9d, 0x33, 0xa5, 0x29, 0xa9, 0xe4, 0xa4, 0x5b, 0xd0, 0x4e, 0x13, 0xe4, 0x9e, 0x60,
0x44, 0x3d, 0xea, 0x9f, 0x5c, 0xf4, 0x17, 0xe4, 0x01, 0x94, 0x26, 0x68, 0x4f, 0xe3, 0xac, 0xbb,
0xb0, 0xc0, 0x73, 0x4b, 0xd2, 0x5f, 0x14, 0x67, 0xdd, 0x73, 0x26, 0x4b, 0x61, 0xea, 0xba, 0xf8,
0xdd, 0x89, 0x18, 0xbd, 0x70, 0x24, 0xe9, 0xe0, 0x1d, 0x80, 0x1c, 0x69, 0x2d, 0x41, 0xe5, 0x14,
0x5d, 0xa8, 0x38, 0xe4, 0x9f, 0xdc, 0x39, 0x67, 0xde, 0x38, 0xd5, 0x5e, 0x97, 0xc0, 0x7b, 0xe5,
0x77, 0x4a, 0xb6, 0x0f, 0xdd, 0xad, 0xf1, 0x29, 0x26, 0xc6, 0xf4, 0x15, 0x58, 0x08, 0xbd, 0x2f,
0x08, 0xd5, 0x9e, 0x14, 0x80, 0xc0, 0xe2, 0x88, 0x50, 0xcd, 0x42, 0x00, 0x56, 0x07, 0xca, 0x24,
0x16, 0xfe, 0x6a, 0x38, 0x65, 0x12, 0xe7, 0x82, 0xaa, 0x86, 0x20, 0xfb, 0x9f, 0x55, 0x80, 0x5c,
0x8a, 0xe5, 0xc0, 0x00, 0x13, 0x37, 0x41, 0x94, 0x9f, 0xef, 0xee, 0xf1, 0x05, 0x43, 0x89, 0x4b,
0x91, 0x9f, 0xd2, 0x04, 0x9f, 0xf1, 0xf5, 0xe3, 0x66, 0x5f, 0x93, 0x66, 0x4f, 0xe8, 0xe6, 0x5c,
0xc7, 0xe4, 0x50, 0xce, 0xdb, 0xe2, 0xd3, 0x1c, 0x3d, 0xcb, 0xda, 0x87, 0x6b, 0x39, 0xcf, 0xc0,
0x60, 0x57, 0xbe, 0x8c, 0xdd, 0x72, 0xc6, 0x2e, 0xc8, 0x59, 0xed, 0xc0, 0x32, 0x26, 0xee, 0x97,
0x29, 0x4a, 0x0b, 0x8c, 0x2a, 0x97, 0x31, 0xea, 0x61, 0xf2, 0x43, 0x31, 0x21, 0x67, 0x33, 0x84,
0x1b, 0x86, 0x95, 0x3c, 0xdc, 0x0d, 0x66, 0xd5, 0xcb, 0x98, 0xad, 0x66, 0x5a, 0xf1, 0x7c, 0x90,
0x73, 0xfc, 0x08, 0x56, 0x31, 0x71, 0x1f, 0x7b, 0x98, 0x4d, 0xb2, 0x5b, 0xf8, 0x16, 0x23, 0xf9,
0x89, 0x56, 0xe4, 0x25, 0x8d, 0x0c, 0x11, 0x1d, 0x15, 0x8c, 0x5c, 0xfc, 0x16, 0x23, 0x0f, 0xc4,
0x84, 0x9c, 0xcd, 0x26, 0xf4, 0x30, 0x99, 0xd4, 0xa6, 0x76, 0x19, 0x93, 0x2e, 0x26, 0x45, 0x4d,
0xb6, 0xa0, 0x97, 0x20, 0x9f, 0x11, 0x6a, 0x6e, 0x82, 0xfa, 0x65, 0x2c, 0x96, 0x14, 0x7d, 0xc6,
0xc3, 0xfe, 0x29, 0xb4, 0xf6, 0xd2, 0x11, 0x62, 0xe3, 0xe3, 0x2c, 0x19, 0x3c, 0xb5, 0xfc, 0x63,
0xff, 0xbb, 0x0c, 0xcd, 0xed, 0x11, 0x25, 0x69, 0x5c, 0xc8, 0xc9, 0x32, 0x48, 0x27, 0x73, 0xb2,
0x20, 0x11, 0x39, 0x59, 0x12, 0xbf, 0x0d, 0xad, 0x50, 0x84, 0xae, 0xa2, 0x97, 0x79, 0xa8, 0x37,
0x15, 0xd4, 0x4e, 0x33, 0x34, 0x92, 0xd9, 0x3a, 0x40, 0x8c, 0x83, 0x44, 0xcd, 0x91, 0xe9, 0xa8,
0xab, 0xca, 0x2d, 0x9d, 0xa2, 0x9d, 0x46, 0x9c, 0x65, 0xeb, 0x37, 0xa1, 0x79, 0xcc, 0x9d, 0xa4,
0x26, 0x14, 0x92, 0x51, 0xee, 0x3d, 0x07, 0x8e, 0xf3, 0x20, 0xdc, 0x83, 0xf6, 0x89, 0x74, 0x99,
0x9a, 0x24, 0xf7, 0xd0, 0x2d, 0x65, 0x49, 0x6e, 0xef, 0xba, 0xe9, 0x59, 0xb9, 0x00, 0xad, 0x13,
0x03, 0x35, 0x38, 0x84, 0xde, 0x14, 0xc9, 0x8c, 0x1c, 0x74, 0xc7, 0xcc, 0x41, 0xcd, 0xbb, 0x96,
0x14, 0x64, 0xce, 0x34, 0xf3, 0xd2, 0x6f, 0xcb, 0xd0, 0xfa, 0x14, 0xb1, 0xc7, 0x84, 0x9e, 0x4a,
0x7d, 0x2d, 0xa8, 0x46, 0x5e, 0x88, 0x14, 0x47, 0xf1, 0x6d, 0xdd, 0x80, 0x3a, 0x3d, 0x97, 0x09,
0x44, 0xad, 0x67, 0x8d, 0x9e, 0x8b, 0xc4, 0x60, 0x3d, 0x0f, 0x40, 0xcf, 0xdd, 0xd8, 0xf3, 0x4f,
0x91, 0xf2, 0x60, 0xd5, 0x69, 0xd0, 0xf3, 0xa1, 0x44, 0xf0, 0xad, 0x40, 0xcf, 0x5d, 0x44, 0x29,
0xa1, 0x89, 0xca, 0x55, 0x75, 0x7a, 0xbe, 0x23, 0x60, 0x35, 0x37, 0xa0, 0x24, 0x8e, 0x51, 0x20,
0x72, 0xb4, 0x98, 0xfb, 0x40, 0x22, 0xb8, 0x54, 0xa6, 0xa5, 0x2e, 0x4a, 0xa9, 0x2c, 0x97, 0xca,
0x72, 0xa9, 0x35, 0x39, 0x93, 0x99, 0x52, 0x59, 0x26, 0xb5, 0x2e, 0xa5, 0x32, 0x43, 0x2a, 0xcb,
0xa5, 0x36, 0xf4, 0x5c, 0x25, 0xd5, 0xfe, 0x4d, 0x09, 0x56, 0x27, 0x0b, 0x3f, 0x55, 0x9b, 0xbe,
0x0d, 0x2d, 0x5f, 0xac, 0x57, 0x61, 0x4f, 0xf6, 0xa6, 0x56, 0xd2, 0x69, 0xfa, 0xc6, 0x36, 0xbe,
0x07, 0xed, 0x48, 0x3a, 0x38, 0xdb, 0x9a, 0x95, 0x7c, 0x5d, 0x4c, 0xdf, 0x3b, 0xad, 0xc8, 0x80,
0xec, 0x00, 0xac, 0xcf, 0x29, 0x66, 0xe8, 0x90, 0x51, 0xe4, 0x85, 0x4f, 0xa3, 0xba, 0xb7, 0xa0,
0x2a, 0xaa, 0x15, 0xbe, 0x4c, 0x2d, 0x47, 0x7c, 0xdb, 0x2f, 0xc3, 0x72, 0x41, 0x8a, 0xb2, 0x75,
0x09, 0x2a, 0x63, 0x14, 0x09, 0xee, 0x6d, 0x87, 0x7f, 0xda, 0x1e, 0xf4, 0x1c, 0xe4, 0x05, 0x4f,
0x4f, 0x1b, 0x25, 0xa2, 0x92, 0x8b, 0xb8, 0x03, 0x96, 0x29, 0x42, 0xa9, 0xa2, 0xb5, 0x2e, 0x19,
0x5a, 0x3f, 0x84, 0xde, 0xf6, 0x98, 0x24, 0xe8, 0x90, 0x05, 0x38, 0x7a, 0x1a, 0xed, 0xc8, 0x2f,
0x60, 0xf9, 0x33, 0x76, 0xf1, 0x39, 0x67, 0x96, 0xe0, 0xaf, 0xd0, 0x53, 0xb2, 0x8f, 0x92, 0xc7,
0xda, 0x3e, 0x4a, 0x1e, 0xf3, 0xe6, 0xc6, 0x27, 0xe3, 0x34, 0x8c, 0x44, 0x28, 0xb4, 0x1d, 0x05,
0xd9, 0x5b, 0xd0, 0x92, 0x35, 0xf4, 0x01, 0x09, 0xd2, 0x31, 0x9a, 0x19, 0x83, 0x6b, 0x00, 0xb1,
0x47, 0xbd, 0x10, 0x31, 0x44, 0xe5, 0x1e, 0x6a, 0x38, 0x06, 0xc6, 0xfe, 0x5d, 0x19, 0x56, 0xe4,
0x7d, 0xc3, 0xa1, 0x6c, 0xb3, 0xb5, 0x09, 0x03, 0xa8, 0x9f, 0x90, 0x84, 0x19, 0x0c, 0x33, 0x98,
0xab, 0xc8, 0xfb, 0x73, 0xc9, 0x8d, 0x7f, 0x16, 0x2e, 0x01, 0x2a, 0x97, 0x5f, 0x02, 0x4c, 0xb5,
0xf9, 0xd5, 0xe9, 0x36, 0x9f, 0x47, 0x9b, 0x26, 0xc2, 0x32, 0xc6, 0x1b, 0x4e, 0x43, 0x61, 0xf6,
0x03, 0xeb, 0x36, 0x74, 0x47, 0x5c, 0x4b, 0xf7, 0x84, 0x90, 0x53, 0x37, 0xf6, 0xd8, 0x89, 0x08,
0xf5, 0x86, 0xd3, 0x16, 0xe8, 0x3d, 0x42, 0x4e, 0x87, 0x1e, 0x3b, 0xb1, 0xde, 0x85, 0x8e, 0x2a,
0x03, 0x43, 0xe1, 0xa2, 0x44, 0x1d, 0x7e, 0x2a, 0x8a, 0x4c, 0xef, 0x39, 0xed, 0x53, 0x03, 0x4a,
0xec, 0xeb, 0x70, 0xed, 0x01, 0x4a, 0x18, 0x25, 0x17, 0x45, 0xc7, 0xd8, 0xdf, 0x07, 0xd8, 0x8f,
0x18, 0xa2, 0x8f, 0x3c, 0x1f, 0x25, 0xd6, 0x1b, 0x26, 0xa4, 0x8a, 0xa3, 0xa5, 0x75, 0x79, 0xdd,
0x93, 0x0d, 0x38, 0x06, 0x8d, 0xbd, 0x0e, 0x8b, 0x0e, 0x49, 0x79, 0x3a, 0x7a, 0x51, 0x7f, 0xa9,
0x79, 0x2d, 0x35, 0x4f, 0x20, 0x1d, 0x35, 0x66, 0xef, 0xe9, 0x16, 0x36, 0x67, 0xa7, 0x96, 0x68,
0x1d, 0x1a, 0x58, 0xe3, 0x54, 0x56, 0x99, 0x16, 0x9d, 0x93, 0xd8, 0xef, 0xc3, 0xb2, 0xe4, 0x24,
0x39, 0x6b, 0x36, 0x2f, 0xc2, 0x22, 0xd5, 0x6a, 0x94, 0xf2, 0x7b, 0x1e, 0x45, 0xa4, 0xc6, 0xb8,
0x3f, 0x3e, 0xc1, 0x09, 0xcb, 0x0d, 0xd1, 0xfe, 0x58, 0x86, 0x1e, 0x1f, 0x28, 0xf0, 0xb4, 0x3f,
0x84, 0xd6, 0xa6, 0x33, 0xfc, 0x14, 0xe1, 0xd1, 0xc9, 0x31, 0xcf, 0x9e, 0xdf, 0x2b, 0xc2, 0xca,
0x60, 0x4b, 0x69, 0x6b, 0x0c, 0x39, 0x05, 0x3a, 0xfb, 0x23, 0x58, 0xdd, 0x0c, 0x02, 0x13, 0xa5,
0xb5, 0x7e, 0x03, 0x1a, 0x91, 0xc1, 0xce, 0x38, 0xb3, 0x0a, 0xd4, 0x39, 0x91, 0xfd, 0x33, 0x58,
0x7e, 0x18, 0x8d, 0x71, 0x84, 0xb6, 0x87, 0x47, 0x07, 0x28, 0xcb, 0x45, 0x16, 0x54, 0x79, 0xcd,
0x26, 0x78, 0xd4, 0x1d, 0xf1, 0xcd, 0x83, 0x33, 0x3a, 0x76, 0xfd, 0x38, 0x4d, 0xd4, 0x65, 0xcf,
0x62, 0x74, 0xbc, 0x1d, 0xa7, 0x09, 0x3f, 0x5c, 0x78, 0x71, 0x41, 0xa2, 0xf1, 0x85, 0x88, 0xd0,
0xba, 0x53, 0xf3, 0xe3, 0xf4, 0x61, 0x34, 0xbe, 0xb0, 0xbf, 0x2b, 0x3a, 0x70, 0x84, 0x02, 0xc7,
0x8b, 0x02, 0x12, 0x3e, 0x40, 0x67, 0x86, 0x84, 0xac, 0xdb, 0xd3, 0x99, 0xe8, 0xeb, 0x12, 0xb4,
0x36, 0x47, 0x28, 0x62, 0x0f, 0x10, 0xf3, 0xf0, 0x58, 0x74, 0x74, 0x67, 0x88, 0x26, 0x98, 0x44,
0x2a, 0xdc, 0x34, 0xc8, 0x1b, 0x72, 0x1c, 0x61, 0xe6, 0x06, 0x1e, 0x0a, 0x49, 0x24, 0xb8, 0xd4,
0x1d, 0xe0, 0xa8, 0x07, 0x02, 0x63, 0xbd, 0x0c, 0x5d, 0x79, 0x19, 0xe7, 0x9e, 0x78, 0x51, 0x30,
0xe6, 0x81, 0x5e, 0x11, 0xa1, 0xd9, 0x91, 0xe8, 0x3d, 0x85, 0xb5, 0x5e, 0x81, 0x25, 0x15, 0x86,
0x39, 0x65, 0x55, 0x50, 0x76, 0x15, 0xbe, 0x40, 0x9a, 0xc6, 0x31, 0xa1, 0x2c, 0x71, 0x13, 0xe4,
0xfb, 0x24, 0x8c, 0x55, 0x3b, 0xd4, 0xd5, 0xf8, 0x43, 0x89, 0xb6, 0x47, 0xb0, 0xbc, 0xcb, 0xed,
0x54, 0x96, 0xe4, 0xdb, 0xaa, 0x13, 0xa2, 0xd0, 0x3d, 0x1e, 0x13, 0xff, 0xd4, 0xe5, 0xc9, 0x51,
0x79, 0x98, 0x17, 0x5c, 0x5b, 0x1c, 0x79, 0x88, 0xbf, 0x12, 0x9d, 0x3f, 0xa7, 0x3a, 0x21, 0x2c,
0x1e, 0xa7, 0x23, 0x37, 0xa6, 0xe4, 0x18, 0x29, 0x13, 0xbb, 0x21, 0x0a, 0xf7, 0x24, 0x7e, 0xc8,
0xd1, 0xf6, 0x9f, 0x4b, 0xb0, 0x52, 0x94, 0xa4, 0x52, 0xfd, 0x06, 0xac, 0x14, 0x45, 0xa9, 0xe3,
0x5f, 0x96, 0x97, 0x3d, 0x53, 0xa0, 0x2c, 0x04, 0xee, 0x41, 0x5b, 0x5c, 0xdd, 0xba, 0x81, 0xe4,
0x54, 0x2c, 0x7a, 0xcc, 0x75, 0x71, 0x5a, 0x9e, 0xb9, 0x4a, 0xef, 0xc2, 0x0d, 0x65, 0xbe, 0x3b,
0xad, 0xb6, 0xdc, 0x10, 0xab, 0x8a, 0xe0, 0x60, 0x42, 0xfb, 0x4f, 0xa0, 0x9f, 0xa3, 0xb6, 0x2e,
0x04, 0x32, 0xdf, 0xcc, 0xcb, 0x13, 0xc6, 0x6e, 0x06, 0x01, 0x15, 0x51, 0x52, 0x75, 0x66, 0x0d,
0xd9, 0xf7, 0xe1, 0xfa, 0x21, 0x62, 0xd2, 0x1b, 0x1e, 0x53, 0x9d, 0x88, 0x64, 0xb6, 0x04, 0x95,
0x43, 0xe4, 0x0b, 0xe3, 0x2b, 0x0e, 0xff, 0xe4, 0x1b, 0xf0, 0x28, 0x41, 0xbe, 0xb0, 0xb2, 0xe2,
0x88, 0x6f, 0xfb, 0x4f, 0x25, 0xa8, 0xa9, 0xe4, 0xcc, 0x0f, 0x98, 0x80, 0xe2, 0x33, 0x44, 0xd5,
0xd6, 0x53, 0x90, 0xf5, 0x12, 0x74, 0xe4, 0x97, 0x4b, 0x62, 0x86, 0x49, 0x96, 0xf2, 0xdb, 0x12,
0xfb, 0x50, 0x22, 0xc5, 0xe5, 0x9b, 0xb8, 0xfe, 0x52, 0x9d, 0xa6, 0x82, 0x38, 0xfe, 0x51, 0xc2,
0x23, 0x5c, 0xa4, 0xf8, 0x86, 0xa3, 0x20, 0xbe, 0xd5, 0x35, 0xbf, 0x05, 0xc1, 0x4f, 0x83, 0x7c,
0xab, 0x87, 0x24, 0x8d, 0x98, 0x1b, 0x13, 0x1c, 0x31, 0x95, 0xd3, 0x41, 0xa0, 0x86, 0x1c, 0x63,
0xff, 0xba, 0x04, 0x8b, 0xf2, 0x02, 0x9a, 0xf7, 0xb6, 0xd9, 0xc9, 0x5a, 0xc6, 0xa2, 0x4a, 0x11,
0xb2, 0xe4, 0x69, 0x2a, 0xbe, 0x79, 0x1c, 0x9f, 0x85, 0xf2, 0x7c, 0x50, 0xaa, 0x9d, 0x85, 0xe2,
0x60, 0x78, 0x09, 0x3a, 0xf9, 0x01, 0x2d, 0xc6, 0xa5, 0x8a, 0xed, 0x0c, 0x2b, 0xc8, 0xe6, 0x6a,
0x6a, 0xff, 0x98, 0xb7, 0xf4, 0xd9, 0xe5, 0xeb, 0x12, 0x54, 0xd2, 0x4c, 0x19, 0xfe, 0xc9, 0x31,
0xa3, 0xec, 0x68, 0xe7, 0x9f, 0xd6, 0x6d, 0xe8, 0x78, 0x41, 0x80, 0xf9, 0x74, 0x6f, 0xbc, 0x8b,
0x83, 0x2c, 0x48, 0x8b, 0x58, 0xfb, 0x6f, 0x25, 0xe8, 0x6e, 0x93, 0xf8, 0xe2, 0x43, 0x3c, 0x46,
0x46, 0x06, 0x11, 0x4a, 0xaa, 0x93, 0x9d, 0x7f, 0xf3, 0x6a, 0xf5, 0x11, 0x1e, 0x23, 0x19, 0x5a,
0x72, 0x65, 0xeb, 0x1c, 0x21, 0xc2, 0x4a, 0x0f, 0x66, 0xd7, 0x6e, 0x6d, 0x39, 0x78, 0x40, 0x02,
0x51, 0x97, 0x07, 0x98, 0xba, 0xd9, 0x25, 0x5b, 0xdb, 0xa9, 0x05, 0x98, 0x8a, 0x21, 0x65, 0xc8,
0x82, 0xb8, 0x44, 0x35, 0x0d, 0x59, 0x94, 0x18, 0x6e, 0xc8, 0x2a, 0x2c, 0x92, 0x47, 0x8f, 0x12,
0xc4, 0x44, 0x05, 0x5d, 0x71, 0x14, 0x94, 0xa5, 0xb9, 0xba, 0x91, 0xe6, 0x56, 0xc0, 0xda, 0x45,
0xec, 0xe1, 0xc3, 0x83, 0x9d, 0x33, 0x14, 0x31, 0x7d, 0x3a, 0xbc, 0x0e, 0x75, 0x8d, 0xfa, 0x6f,
0xae, 0x27, 0x5f, 0x85, 0xce, 0x66, 0x10, 0x1c, 0x3e, 0xf6, 0x62, 0xed, 0x8f, 0x3e, 0xd4, 0x86,
0xdb, 0xfb, 0x43, 0xe9, 0x92, 0x0a, 0x37, 0x40, 0x81, 0xfc, 0x34, 0xda, 0x45, 0xec, 0x00, 0x31,
0x8a, 0xfd, 0xec, 0x34, 0xba, 0x05, 0x35, 0x85, 0xe1, 0x33, 0x43, 0xf9, 0xa9, 0xd3, 0xac, 0x02,
0xed, 0x1f, 0x80, 0xf5, 0x23, 0x5e, 0x57, 0x21, 0x59, 0x54, 0x2b, 0x49, 0xaf, 0x42, 0xef, 0x4c,
0x60, 0x5d, 0x59, 0x70, 0x18, 0xcb, 0xd0, 0x95, 0x03, 0x22, 0x06, 0x85, 0xec, 0x23, 0x58, 0x96,
0x65, 0xa0, 0xe4, 0x73, 0x05, 0x16, 0xdc, 0x87, 0xd9, 0x7a, 0x56, 0x1d, 0xf1, 0x7d, 0xf7, 0x2f,
0x3d, 0x75, 0x54, 0xa8, 0x5b, 0x07, 0x6b, 0x17, 0xba, 0x13, 0x4f, 0x44, 0x96, 0xba, 0x86, 0x9a,
0xfd, 0x72, 0x34, 0x58, 0x5d, 0x97, 0x4f, 0x4e, 0xeb, 0xfa, 0xc9, 0x69, 0x7d, 0x27, 0x8c, 0xd9,
0x85, 0xb5, 0x03, 0x9d, 0xe2, 0x63, 0x8a, 0xf5, 0xac, 0xae, 0xda, 0x66, 0x3c, 0xb1, 0xcc, 0x65,
0xb3, 0x0b, 0xdd, 0x89, 0x77, 0x15, 0xad, 0xcf, 0xec, 0xe7, 0x96, 0xb9, 0x8c, 0xee, 0x43, 0xd3,
0x78, 0x48, 0xb1, 0xfa, 0x92, 0xc9, 0xf4, 0xdb, 0xca, 0x5c, 0x06, 0xdb, 0xd0, 0x2e, 0xbc, 0x6d,
0x58, 0x03, 0x65, 0xcf, 0x8c, 0x07, 0x8f, 0xb9, 0x4c, 0xb6, 0xa0, 0x69, 0x3c, 0x31, 0x68, 0x2d,
0xa6, 0xdf, 0x31, 0x06, 0x37, 0x66, 0x8c, 0xa8, 0x13, 0x69, 0x17, 0xba, 0x13, 0xef, 0x0e, 0xda,
0x25, 0xb3, 0x9f, 0x23, 0xe6, 0x2a, 0xf3, 0xb1, 0x58, 0x22, 0xa3, 0xad, 0x34, 0x96, 0x68, 0xfa,
0x95, 0x61, 0xf0, 0xdc, 0xec, 0x41, 0xa5, 0xd5, 0x0e, 0x74, 0x8a, 0x0f, 0x0c, 0x9a, 0xd9, 0xcc,
0x67, 0x87, 0xcb, 0xd7, 0xbb, 0xf0, 0xd6, 0x90, 0xaf, 0xf7, 0xac, 0x27, 0x88, 0xb9, 0x8c, 0x36,
0x01, 0x54, 0x13, 0x19, 0xe0, 0x28, 0x73, 0xf4, 0x54, 0xf3, 0x9a, 0x39, 0x7a, 0x46, 0xc3, 0x79,
0x1f, 0x40, 0xf6, 0x7e, 0x01, 0x49, 0x99, 0x75, 0x5d, 0xab, 0x31, 0xd1, 0x70, 0x0e, 0xfa, 0xd3,
0x03, 0x53, 0x0c, 0x10, 0xa5, 0x57, 0x61, 0xf0, 0x01, 0x40, 0xde, 0x53, 0x6a, 0x06, 0x53, 0x5d,
0xe6, 0x25, 0x3e, 0x68, 0x99, 0x1d, 0xa4, 0xa5, 0x6c, 0x9d, 0xd1, 0x55, 0x5e, 0xc2, 0xa2, 0x3b,
0xd1, 0x21, 0x14, 0x37, 0xdb, 0x64, 0xe3, 0x30, 0x98, 0xea, 0x12, 0xac, 0x7b, 0xd0, 0x32, 0x5b,
0x03, 0xad, 0xc5, 0x8c, 0x76, 0x61, 0x50, 0x68, 0x0f, 0xac, 0xfb, 0xd0, 0x29, 0xb6, 0x05, 0x7a,
0x4b, 0xcd, 0x6c, 0x16, 0x06, 0xea, 0xd2, 0xcb, 0x20, 0x7f, 0x0b, 0x20, 0x6f, 0x1f, 0xb4, 0xfb,
0xa6, 0x1a, 0x8a, 0x09, 0xa9, 0xbb, 0xd0, 0x9d, 0x68, 0x0b, 0xb4, 0xc5, 0xb3, 0xbb, 0x85, 0xb9,
0xae, 0x7b, 0x1b, 0x20, 0x3f, 0x2e, 0xb4, 0xf4, 0xa9, 0x03, 0x64, 0xd0, 0xd6, 0x17, 0x82, 0x92,
0x6e, 0x1b, 0xda, 0x85, 0x9e, 0x59, 0xa7, 0x99, 0x59, 0x8d, 0xf4, 0x65, 0xc9, 0xb7, 0xd8, 0x60,
0x6a, 0xcf, 0xcd, 0x6c, 0x3b, 0x2f, 0xdb, 0x3f, 0x66, 0x57, 0xa3, 0x57, 0x6e, 0x46, 0xa7, 0xf3,
0x2d, 0xf1, 0x6c, 0x76, 0x2e, 0x46, 0x3c, 0xcf, 0x68, 0x68, 0xe6, 0x32, 0xda, 0x83, 0xee, 0xae,
0x2e, 0x4a, 0x55, 0xc1, 0xac, 0xd4, 0x99, 0xd1, 0x20, 0x0c, 0x06, 0xb3, 0x86, 0x54, 0x50, 0x7d,
0x0c, 0xbd, 0xa9, 0x62, 0xd9, 0x5a, 0xcb, 0xae, 0x65, 0x67, 0x56, 0xd1, 0x73, 0xd5, 0xda, 0x87,
0xa5, 0xc9, 0x5a, 0xd9, 0x7a, 0x5e, 0x25, 0xca, 0xd9, 0x35, 0xf4, 0x5c, 0x56, 0xef, 0x42, 0x5d,
0xd7, 0x66, 0x96, 0xba, 0xfe, 0x9e, 0xa8, 0xd5, 0xe6, 0x4e, 0xbd, 0x07, 0x4d, 0xa3, 0x14, 0xd2,
0xd9, 0x6e, 0xba, 0x3a, 0x1a, 0xa8, 0xdb, 0xea, 0x8c, 0xf2, 0x1e, 0xd4, 0x54, 0xf9, 0x63, 0xad,
0x64, 0x9b, 0xdc, 0xa8, 0x86, 0x2e, 0xdb, 0x61, 0xbb, 0x88, 0x19, 0x45, 0x8d, 0x16, 0x3a, 0x5d,
0xe7, 0xe8, 0x14, 0x5b, 0x18, 0x51, 0x6b, 0xb1, 0x09, 0x2d, 0xb3, 0xac, 0xd1, 0x4b, 0x3a, 0xa3,
0xd4, 0x99, 0xa7, 0xc9, 0xd6, 0xf9, 0xd7, 0xdf, 0xac, 0x3d, 0xf3, 0x8f, 0x6f, 0xd6, 0x9e, 0xf9,
0xd5, 0x93, 0xb5, 0xd2, 0xd7, 0x4f, 0xd6, 0x4a, 0x7f, 0x7f, 0xb2, 0x56, 0xfa, 0xd7, 0x93, 0xb5,
0xd2, 0x4f, 0x7e, 0xfe, 0x3f, 0xfe, 0x0f, 0x87, 0xa6, 0x11, 0xc3, 0x21, 0xda, 0x38, 0xc3, 0x94,
0x19, 0x43, 0xf1, 0xe9, 0x48, 0xfe, 0x19, 0xc7, 0xf8, 0x8f, 0x0e, 0xd7, 0xf2, 0x78, 0x51, 0xc0,
0x6f, 0xfd, 0x27, 0x00, 0x00, 0xff, 0xff, 0x5c, 0x4e, 0xa6, 0xdc, 0xf0, 0x23, 0x00, 0x00,
0xb5, 0x6c, 0x63, 0xc9, 0x5e, 0x3b, 0x58, 0x3f, 0xc2, 0x2c, 0x92, 0x56, 0x96, 0x64, 0x5b, 0xde,
0xa1, 0x65, 0x61, 0x02, 0x02, 0x3a, 0x7a, 0xba, 0x4b, 0x33, 0x65, 0x4d, 0x77, 0xb5, 0xab, 0xab,
0xb5, 0x1a, 0x13, 0x41, 0x70, 0x82, 0x1b, 0x47, 0x6e, 0xfc, 0x00, 0xc1, 0x1f, 0x70, 0xe1, 0xc0,
0xc1, 0xc1, 0x89, 0x23, 0x17, 0x08, 0xbc, 0x9f, 0xc0, 0x17, 0x10, 0xf5, 0xea, 0xc7, 0x3c, 0x64,
0x50, 0x6c, 0x04, 0x97, 0x89, 0xce, 0xac, 0xac, 0x7c, 0x55, 0x65, 0x56, 0x66, 0xd5, 0x40, 0x7f,
0x88, 0xd9, 0x28, 0x1e, 0x6c, 0xb9, 0xc4, 0xdf, 0x3e, 0x77, 0x98, 0xf3, 0xba, 0x4b, 0x02, 0xe6,
0xe0, 0x00, 0xd1, 0x68, 0x06, 0x8e, 0xa8, 0xbb, 0x3d, 0xc6, 0x83, 0x68, 0x3b, 0xa4, 0x84, 0x11,
0x97, 0x8c, 0xd5, 0x57, 0xb4, 0xed, 0x0c, 0x51, 0xc0, 0xb6, 0x04, 0x60, 0x94, 0x87, 0x34, 0x74,
0x7b, 0x35, 0xe2, 0x62, 0x89, 0xe8, 0xd5, 0xdc, 0x48, 0x7f, 0xd6, 0xd9, 0x24, 0x44, 0x91, 0x02,
0x9e, 0x1d, 0x12, 0x32, 0x1c, 0x23, 0xc9, 0x63, 0x10, 0x9f, 0x6d, 0x23, 0x3f, 0x64, 0x13, 0x39,
0x68, 0xfe, 0xbe, 0x08, 0xeb, 0x7b, 0x14, 0x39, 0x0c, 0xed, 0x69, 0x05, 0x2c, 0xf4, 0x65, 0x8c,
0x22, 0x66, 0xbc, 0x00, 0x8d, 0x44, 0x29, 0x1b, 0x7b, 0xdd, 0xc2, 0xed, 0xc2, 0x66, 0xcd, 0xaa,
0x27, 0xb8, 0x23, 0xcf, 0xb8, 0x09, 0x15, 0x74, 0x89, 0x5c, 0x3e, 0x5a, 0x14, 0xa3, 0xcb, 0x1c,
0x3c, 0xf2, 0x8c, 0x37, 0xa1, 0x1e, 0x31, 0x8a, 0x83, 0xa1, 0x1d, 0x47, 0x88, 0x76, 0x4b, 0xb7,
0x0b, 0x9b, 0xf5, 0x7b, 0x2b, 0x5b, 0x5c, 0xe5, 0xad, 0x13, 0x31, 0x70, 0x1a, 0x21, 0x6a, 0x41,
0x94, 0x7c, 0x1b, 0x77, 0xa1, 0xe2, 0xa1, 0x0b, 0xec, 0xa2, 0xa8, 0x5b, 0xbe, 0x5d, 0xda, 0xac,
0xdf, 0x6b, 0x48, 0xf2, 0x87, 0x02, 0x69, 0xe9, 0x41, 0xe3, 0x15, 0xa8, 0x46, 0x8c, 0x50, 0x67,
0x88, 0xa2, 0xee, 0x92, 0x20, 0x6c, 0x6a, 0xbe, 0x02, 0x6b, 0x25, 0xc3, 0xc6, 0x73, 0x50, 0x7a,
0xb4, 0x77, 0xd4, 0x5d, 0x16, 0xd2, 0x41, 0x51, 0x85, 0xc8, 0xb5, 0x38, 0xda, 0xb8, 0x03, 0xcd,
0xc8, 0x09, 0xbc, 0x01, 0xb9, 0xb4, 0x43, 0xec, 0x05, 0x51, 0xb7, 0x72, 0xbb, 0xb0, 0x59, 0xb5,
0x1a, 0x0a, 0xd9, 0xe7, 0x38, 0xf3, 0x3d, 0xb8, 0x71, 0xc2, 0x1c, 0xca, 0xae, 0xe1, 0x1d, 0xf3,
0x14, 0xd6, 0x2d, 0xe4, 0x93, 0x8b, 0x6b, 0xb9, 0xb6, 0x0b, 0x15, 0x86, 0x7d, 0x44, 0x62, 0x26,
0x5c, 0xdb, 0xb4, 0x34, 0x68, 0xfe, 0xb1, 0x00, 0xc6, 0xfe, 0x25, 0x72, 0xfb, 0x94, 0xb8, 0x28,
0x8a, 0xfe, 0x4f, 0xcb, 0xf5, 0x32, 0x54, 0x42, 0xa9, 0x40, 0xb7, 0x2c, 0xc8, 0xd5, 0x2a, 0x68,
0xad, 0xf4, 0xa8, 0xf9, 0x05, 0xac, 0x9d, 0xe0, 0x61, 0xe0, 0x8c, 0x9f, 0xa2, 0xbe, 0xeb, 0xb0,
0x1c, 0x09, 0x9e, 0x42, 0xd5, 0xa6, 0xa5, 0x20, 0xb3, 0x0f, 0xc6, 0xe7, 0x0e, 0x66, 0x4f, 0x4f,
0x92, 0xf9, 0x3a, 0xac, 0xe6, 0x38, 0x46, 0x21, 0x09, 0x22, 0x24, 0x14, 0x60, 0x0e, 0x8b, 0x23,
0xc1, 0x6c, 0xc9, 0x52, 0x90, 0x49, 0x60, 0xfd, 0x34, 0xf4, 0xae, 0x19, 0x4d, 0xf7, 0xa0, 0x46,
0x51, 0x44, 0x62, 0xca, 0x63, 0xa0, 0x28, 0x9c, 0xba, 0x26, 0x9d, 0xfa, 0x09, 0x0e, 0xe2, 0x4b,
0x4b, 0x8f, 0x59, 0x29, 0x99, 0xda, 0x9f, 0x2c, 0xba, 0xce, 0xfe, 0x7c, 0x0f, 0x6e, 0xf4, 0x9d,
0x38, 0xba, 0x8e, 0xae, 0xe6, 0xfb, 0x7c, 0x6f, 0x47, 0xb1, 0x7f, 0xad, 0xc9, 0x7f, 0x28, 0x40,
0x75, 0x2f, 0x8c, 0x4f, 0x23, 0x67, 0x88, 0x8c, 0xef, 0x40, 0x9d, 0x11, 0xe6, 0x8c, 0xed, 0x98,
0x83, 0x82, 0xbc, 0x6c, 0x81, 0x40, 0x49, 0x82, 0x17, 0xa0, 0x11, 0x22, 0xea, 0x86, 0xb1, 0xa2,
0x28, 0xde, 0x2e, 0x6d, 0x96, 0xad, 0xba, 0xc4, 0x49, 0x92, 0x2d, 0x58, 0x15, 0x63, 0x36, 0x0e,
0xec, 0x73, 0x44, 0x03, 0x34, 0xf6, 0x89, 0x87, 0xc4, 0xe6, 0x28, 0x5b, 0x1d, 0x31, 0x74, 0x14,
0x7c, 0x9c, 0x0c, 0x18, 0xaf, 0x42, 0x27, 0xa1, 0xe7, 0x3b, 0x5e, 0x50, 0x97, 0x05, 0x75, 0x5b,
0x51, 0x9f, 0x2a, 0xb4, 0xf9, 0x4b, 0x68, 0x7d, 0x36, 0xa2, 0x84, 0xb1, 0x31, 0x0e, 0x86, 0x0f,
0x1d, 0xe6, 0xf0, 0xd0, 0x0c, 0x11, 0xc5, 0xc4, 0x8b, 0x94, 0xb6, 0x1a, 0x34, 0x5e, 0x83, 0x0e,
0x93, 0xb4, 0xc8, 0xb3, 0x35, 0x4d, 0x51, 0xd0, 0xac, 0x24, 0x03, 0x7d, 0x45, 0xfc, 0x12, 0xb4,
0x52, 0x62, 0x1e, 0xdc, 0x4a, 0xdf, 0x66, 0x82, 0xfd, 0x0c, 0xfb, 0xc8, 0xbc, 0x10, 0xbe, 0x12,
0x8b, 0x6c, 0xbc, 0x06, 0xb5, 0xd4, 0x0f, 0x05, 0xb1, 0x43, 0x5a, 0x72, 0x87, 0x68, 0x77, 0x5a,
0xd5, 0xc4, 0x29, 0x1f, 0x40, 0x9b, 0x25, 0x8a, 0xdb, 0x9e, 0xc3, 0x9c, 0xfc, 0xa6, 0xca, 0x5b,
0x65, 0xb5, 0x58, 0x0e, 0x36, 0xdf, 0x87, 0x5a, 0x1f, 0x7b, 0x91, 0x14, 0xdc, 0x85, 0x8a, 0x1b,
0x53, 0x8a, 0x02, 0xa6, 0x4d, 0x56, 0xa0, 0xb1, 0x06, 0x4b, 0x63, 0xec, 0x63, 0xa6, 0xcc, 0x94,
0x80, 0x49, 0x00, 0x8e, 0x91, 0x4f, 0xe8, 0x44, 0x38, 0x6c, 0x0d, 0x96, 0xb2, 0x8b, 0x2b, 0x01,
0xe3, 0x59, 0xa8, 0xf9, 0xce, 0x65, 0xb2, 0xa8, 0x7c, 0xa4, 0xea, 0x3b, 0x97, 0x52, 0xf9, 0x2e,
0x54, 0xce, 0x1c, 0x3c, 0x76, 0x03, 0xa6, 0xbc, 0xa2, 0xc1, 0x54, 0x60, 0x39, 0x2b, 0xf0, 0x2f,
0x45, 0xa8, 0x4b, 0x89, 0x52, 0xe1, 0x35, 0x58, 0x72, 0x1d, 0x77, 0x94, 0x88, 0x14, 0x80, 0x71,
0x57, 0x2b, 0x52, 0xcc, 0x66, 0xb8, 0x54, 0x53, 0xad, 0xda, 0x36, 0x40, 0xf4, 0xd8, 0x09, 0x95,
0x6e, 0xa5, 0x05, 0xc4, 0x35, 0x4e, 0x23, 0xd5, 0x7d, 0x0b, 0x1a, 0x72, 0xdf, 0xa9, 0x29, 0xe5,
0x05, 0x53, 0xea, 0x92, 0x4a, 0x4e, 0xba, 0x03, 0xcd, 0x38, 0x42, 0xf6, 0x08, 0x23, 0xea, 0x50,
0x77, 0x34, 0xe9, 0x2e, 0xc9, 0x03, 0x28, 0x8e, 0xd0, 0xa1, 0xc6, 0x19, 0xf7, 0x60, 0x89, 0xe7,
0x96, 0xa8, 0xbb, 0x2c, 0xce, 0xba, 0xe7, 0xb2, 0x2c, 0x85, 0xa9, 0x5b, 0xe2, 0x77, 0x3f, 0x60,
0x74, 0x62, 0x49, 0xd2, 0xde, 0x3b, 0x00, 0x29, 0xd2, 0x58, 0x81, 0xd2, 0x39, 0x9a, 0xa8, 0x38,
0xe4, 0x9f, 0xdc, 0x39, 0x17, 0xce, 0x38, 0xd6, 0x5e, 0x97, 0xc0, 0x7b, 0xc5, 0x77, 0x0a, 0xa6,
0x0b, 0xed, 0xdd, 0xf1, 0x39, 0x26, 0x99, 0xe9, 0x6b, 0xb0, 0xe4, 0x3b, 0x5f, 0x10, 0xaa, 0x3d,
0x29, 0x00, 0x81, 0xc5, 0x01, 0xa1, 0x9a, 0x85, 0x00, 0x8c, 0x16, 0x14, 0x49, 0x28, 0xfc, 0x55,
0xb3, 0x8a, 0x24, 0x4c, 0x05, 0x95, 0x33, 0x82, 0xcc, 0x7f, 0x96, 0x01, 0x52, 0x29, 0x86, 0x05,
0x3d, 0x4c, 0xec, 0x08, 0x51, 0x7e, 0xbe, 0xdb, 0x83, 0x09, 0x43, 0x91, 0x4d, 0x91, 0x1b, 0xd3,
0x08, 0x5f, 0xf0, 0xf5, 0xe3, 0x66, 0xdf, 0x90, 0x66, 0x4f, 0xe9, 0x66, 0xdd, 0xc4, 0xe4, 0x44,
0xce, 0xdb, 0xe5, 0xd3, 0x2c, 0x3d, 0xcb, 0x38, 0x82, 0x1b, 0x29, 0x4f, 0x2f, 0xc3, 0xae, 0x78,
0x15, 0xbb, 0xd5, 0x84, 0x9d, 0x97, 0xb2, 0xda, 0x87, 0x55, 0x4c, 0xec, 0x2f, 0x63, 0x14, 0xe7,
0x18, 0x95, 0xae, 0x62, 0xd4, 0xc1, 0xe4, 0x87, 0x62, 0x42, 0xca, 0xa6, 0x0f, 0xb7, 0x32, 0x56,
0xf2, 0x70, 0xcf, 0x30, 0x2b, 0x5f, 0xc5, 0x6c, 0x3d, 0xd1, 0x8a, 0xe7, 0x83, 0x94, 0xe3, 0x47,
0xb0, 0x8e, 0x89, 0xfd, 0xd8, 0xc1, 0x6c, 0x9a, 0xdd, 0xd2, 0xb7, 0x18, 0xc9, 0x4f, 0xb4, 0x3c,
0x2f, 0x69, 0xa4, 0x8f, 0xe8, 0x30, 0x67, 0xe4, 0xf2, 0xb7, 0x18, 0x79, 0x2c, 0x26, 0xa4, 0x6c,
0x76, 0xa0, 0x83, 0xc9, 0xb4, 0x36, 0x95, 0xab, 0x98, 0xb4, 0x31, 0xc9, 0x6b, 0xb2, 0x0b, 0x9d,
0x08, 0xb9, 0x8c, 0xd0, 0xec, 0x26, 0xa8, 0x5e, 0xc5, 0x62, 0x45, 0xd1, 0x27, 0x3c, 0xcc, 0x9f,
0x42, 0xe3, 0x30, 0x1e, 0x22, 0x36, 0x1e, 0x24, 0xc9, 0xe0, 0xa9, 0xe5, 0x1f, 0xf3, 0xdf, 0x45,
0xa8, 0xef, 0x0d, 0x29, 0x89, 0xc3, 0x5c, 0x4e, 0x96, 0x41, 0x3a, 0x9d, 0x93, 0x05, 0x89, 0xc8,
0xc9, 0x92, 0xf8, 0x6d, 0x68, 0xf8, 0x22, 0x74, 0x15, 0xbd, 0xcc, 0x43, 0x9d, 0x99, 0xa0, 0xb6,
0xea, 0x7e, 0x26, 0x99, 0x6d, 0x01, 0x84, 0xd8, 0x8b, 0xd4, 0x1c, 0x99, 0x8e, 0xda, 0xaa, 0xdc,
0xd2, 0x29, 0xda, 0xaa, 0x85, 0x49, 0xb6, 0x7e, 0x13, 0xea, 0x03, 0xee, 0x24, 0x35, 0x21, 0x97,
0x8c, 0x52, 0xef, 0x59, 0x30, 0x48, 0x83, 0xf0, 0x10, 0x9a, 0x23, 0xe9, 0x32, 0x35, 0x49, 0xee,
0xa1, 0x3b, 0xca, 0x92, 0xd4, 0xde, 0xad, 0xac, 0x67, 0xe5, 0x02, 0x34, 0x46, 0x19, 0x54, 0xef,
0x04, 0x3a, 0x33, 0x24, 0x73, 0x72, 0xd0, 0x66, 0x36, 0x07, 0xd5, 0xef, 0x19, 0x52, 0x50, 0x76,
0x66, 0x36, 0x2f, 0xfd, 0xb6, 0x08, 0x8d, 0x4f, 0x11, 0x7b, 0x4c, 0xe8, 0xb9, 0xd4, 0xd7, 0x80,
0x72, 0xe0, 0xf8, 0x48, 0x71, 0x14, 0xdf, 0xc6, 0x2d, 0xa8, 0xd2, 0x4b, 0x99, 0x40, 0xd4, 0x7a,
0x56, 0xe8, 0xa5, 0x48, 0x0c, 0xc6, 0xf3, 0x00, 0xf4, 0xd2, 0x0e, 0x1d, 0xf7, 0x1c, 0x29, 0x0f,
0x96, 0xad, 0x1a, 0xbd, 0xec, 0x4b, 0x04, 0xdf, 0x0a, 0xf4, 0xd2, 0x46, 0x94, 0x12, 0x1a, 0xa9,
0x5c, 0x55, 0xa5, 0x97, 0xfb, 0x02, 0x56, 0x73, 0x3d, 0x4a, 0xc2, 0x10, 0x79, 0x22, 0x47, 0x8b,
0xb9, 0x0f, 0x25, 0x82, 0x4b, 0x65, 0x5a, 0xea, 0xb2, 0x94, 0xca, 0x52, 0xa9, 0x2c, 0x95, 0x5a,
0x91, 0x33, 0x59, 0x56, 0x2a, 0x4b, 0xa4, 0x56, 0xa5, 0x54, 0x96, 0x91, 0xca, 0x52, 0xa9, 0x35,
0x3d, 0x57, 0x49, 0x35, 0x7f, 0x53, 0x80, 0xf5, 0xe9, 0xc2, 0x4f, 0xd5, 0xa6, 0x6f, 0x43, 0xc3,
0x15, 0xeb, 0x95, 0xdb, 0x93, 0x9d, 0x99, 0x95, 0xb4, 0xea, 0x6e, 0x66, 0x1b, 0xdf, 0x87, 0x66,
0x20, 0x1d, 0x9c, 0x6c, 0xcd, 0x52, 0xba, 0x2e, 0x59, 0xdf, 0x5b, 0x8d, 0x20, 0x03, 0x99, 0x1e,
0x18, 0x9f, 0x53, 0xcc, 0xd0, 0x09, 0xa3, 0xc8, 0xf1, 0x9f, 0x46, 0x75, 0x6f, 0x40, 0x59, 0x54,
0x2b, 0x7c, 0x99, 0x1a, 0x96, 0xf8, 0x36, 0x5f, 0x86, 0xd5, 0x9c, 0x14, 0x65, 0xeb, 0x0a, 0x94,
0xc6, 0x28, 0x10, 0xdc, 0x9b, 0x16, 0xff, 0x34, 0x1d, 0xe8, 0x58, 0xc8, 0xf1, 0x9e, 0x9e, 0x36,
0x4a, 0x44, 0x29, 0x15, 0xb1, 0x09, 0x46, 0x56, 0x84, 0x52, 0x45, 0x6b, 0x5d, 0xc8, 0x68, 0xfd,
0x08, 0x3a, 0x7b, 0x63, 0x12, 0xa1, 0x13, 0xe6, 0xe1, 0xe0, 0x69, 0xb4, 0x23, 0xbf, 0x80, 0xd5,
0xcf, 0xd8, 0xe4, 0x73, 0xce, 0x2c, 0xc2, 0x5f, 0xa1, 0xa7, 0x64, 0x1f, 0x25, 0x8f, 0xb5, 0x7d,
0x94, 0x3c, 0xe6, 0xcd, 0x8d, 0x4b, 0xc6, 0xb1, 0x1f, 0x88, 0x50, 0x68, 0x5a, 0x0a, 0x32, 0x77,
0xa1, 0x21, 0x6b, 0xe8, 0x63, 0xe2, 0xc5, 0x63, 0x34, 0x37, 0x06, 0x37, 0x00, 0x42, 0x87, 0x3a,
0x3e, 0x62, 0x88, 0xca, 0x3d, 0x54, 0xb3, 0x32, 0x18, 0xf3, 0x77, 0x45, 0x58, 0x93, 0xf7, 0x0d,
0x27, 0xb2, 0xcd, 0xd6, 0x26, 0xf4, 0xa0, 0x3a, 0x22, 0x11, 0xcb, 0x30, 0x4c, 0x60, 0xae, 0x22,
0xef, 0xcf, 0x25, 0x37, 0xfe, 0x99, 0xbb, 0x04, 0x28, 0x5d, 0x7d, 0x09, 0x30, 0xd3, 0xe6, 0x97,
0x67, 0xdb, 0x7c, 0x1e, 0x6d, 0x9a, 0x08, 0xcb, 0x18, 0xaf, 0x59, 0x35, 0x85, 0x39, 0xf2, 0x8c,
0xbb, 0xd0, 0x1e, 0x72, 0x2d, 0xed, 0x11, 0x21, 0xe7, 0x76, 0xe8, 0xb0, 0x91, 0x08, 0xf5, 0x9a,
0xd5, 0x14, 0xe8, 0x43, 0x42, 0xce, 0xfb, 0x0e, 0x1b, 0x19, 0xef, 0x42, 0x4b, 0x95, 0x81, 0xbe,
0x70, 0x51, 0xa4, 0x0e, 0x3f, 0x15, 0x45, 0x59, 0xef, 0x59, 0xcd, 0xf3, 0x0c, 0x14, 0x99, 0x37,
0xe1, 0xc6, 0x43, 0x14, 0x31, 0x4a, 0x26, 0x79, 0xc7, 0x98, 0xdf, 0x07, 0x38, 0x0a, 0x18, 0xa2,
0x67, 0x8e, 0x8b, 0x22, 0xe3, 0x8d, 0x2c, 0xa4, 0x8a, 0xa3, 0x95, 0x2d, 0x79, 0xdd, 0x93, 0x0c,
0x58, 0x19, 0x1a, 0x73, 0x0b, 0x96, 0x2d, 0x12, 0xf3, 0x74, 0xf4, 0xa2, 0xfe, 0x52, 0xf3, 0x1a,
0x6a, 0x9e, 0x40, 0x5a, 0x6a, 0xcc, 0x3c, 0xd4, 0x2d, 0x6c, 0xca, 0x4e, 0x2d, 0xd1, 0x16, 0xd4,
0xb0, 0xc6, 0xa9, 0xac, 0x32, 0x2b, 0x3a, 0x25, 0x31, 0xdf, 0x87, 0x55, 0xc9, 0x49, 0x72, 0xd6,
0x6c, 0x5e, 0x84, 0x65, 0xaa, 0xd5, 0x28, 0xa4, 0xf7, 0x3c, 0x8a, 0x48, 0x8d, 0x71, 0x7f, 0x7c,
0x82, 0x23, 0x96, 0x1a, 0xa2, 0xfd, 0xb1, 0x0a, 0x1d, 0x3e, 0x90, 0xe3, 0x69, 0x7e, 0x08, 0x8d,
0x1d, 0xab, 0xff, 0x29, 0xc2, 0xc3, 0xd1, 0x80, 0x67, 0xcf, 0xef, 0xe5, 0x61, 0x65, 0xb0, 0xa1,
0xb4, 0xcd, 0x0c, 0x59, 0x39, 0x3a, 0xf3, 0x23, 0x58, 0xdf, 0xf1, 0xbc, 0x2c, 0x4a, 0x6b, 0xfd,
0x06, 0xd4, 0x82, 0x0c, 0xbb, 0xcc, 0x99, 0x95, 0xa3, 0x4e, 0x89, 0xcc, 0x9f, 0xc1, 0xea, 0xa3,
0x60, 0x8c, 0x03, 0xb4, 0xd7, 0x3f, 0x3d, 0x46, 0x49, 0x2e, 0x32, 0xa0, 0xcc, 0x6b, 0x36, 0xc1,
0xa3, 0x6a, 0x89, 0x6f, 0x1e, 0x9c, 0xc1, 0xc0, 0x76, 0xc3, 0x38, 0x52, 0x97, 0x3d, 0xcb, 0xc1,
0x60, 0x2f, 0x8c, 0x23, 0x7e, 0xb8, 0xf0, 0xe2, 0x82, 0x04, 0xe3, 0x89, 0x88, 0xd0, 0xaa, 0x55,
0x71, 0xc3, 0xf8, 0x51, 0x30, 0x9e, 0x98, 0xdf, 0x15, 0x1d, 0x38, 0x42, 0x9e, 0xe5, 0x04, 0x1e,
0xf1, 0x1f, 0xa2, 0x8b, 0x8c, 0x84, 0xa4, 0xdb, 0xd3, 0x99, 0xe8, 0xeb, 0x02, 0x34, 0x76, 0x86,
0x28, 0x60, 0x0f, 0x11, 0x73, 0xf0, 0x58, 0x74, 0x74, 0x17, 0x88, 0x46, 0x98, 0x04, 0x2a, 0xdc,
0x34, 0xc8, 0x1b, 0x72, 0x1c, 0x60, 0x66, 0x7b, 0x0e, 0xf2, 0x49, 0x20, 0xb8, 0x54, 0x2d, 0xe0,
0xa8, 0x87, 0x02, 0x63, 0xbc, 0x0c, 0x6d, 0x79, 0x19, 0x67, 0x8f, 0x9c, 0xc0, 0x1b, 0xf3, 0x40,
0x2f, 0x89, 0xd0, 0x6c, 0x49, 0xf4, 0xa1, 0xc2, 0x1a, 0xaf, 0xc0, 0x8a, 0x0a, 0xc3, 0x94, 0xb2,
0x2c, 0x28, 0xdb, 0x0a, 0x9f, 0x23, 0x8d, 0xc3, 0x90, 0x50, 0x16, 0xd9, 0x11, 0x72, 0x5d, 0xe2,
0x87, 0xaa, 0x1d, 0x6a, 0x6b, 0xfc, 0x89, 0x44, 0x9b, 0x43, 0x58, 0x3d, 0xe0, 0x76, 0x2a, 0x4b,
0xd2, 0x6d, 0xd5, 0xf2, 0x91, 0x6f, 0x0f, 0xc6, 0xc4, 0x3d, 0xb7, 0x79, 0x72, 0x54, 0x1e, 0xe6,
0x05, 0xd7, 0x2e, 0x47, 0x9e, 0xe0, 0xaf, 0x44, 0xe7, 0xcf, 0xa9, 0x46, 0x84, 0x85, 0xe3, 0x78,
0x68, 0x87, 0x94, 0x0c, 0x90, 0x32, 0xb1, 0xed, 0x23, 0xff, 0x50, 0xe2, 0xfb, 0x1c, 0x6d, 0xfe,
0xa9, 0x00, 0x6b, 0x79, 0x49, 0x2a, 0xd5, 0x6f, 0xc3, 0x5a, 0x5e, 0x94, 0x3a, 0xfe, 0x65, 0x79,
0xd9, 0xc9, 0x0a, 0x94, 0x85, 0xc0, 0x7d, 0x68, 0x8a, 0xab, 0x5b, 0xdb, 0x93, 0x9c, 0xf2, 0x45,
0x4f, 0x76, 0x5d, 0xac, 0x86, 0x93, 0x5d, 0xa5, 0x77, 0xe1, 0x96, 0x32, 0xdf, 0x9e, 0x55, 0x5b,
0x6e, 0x88, 0x75, 0x45, 0x70, 0x3c, 0xa5, 0xfd, 0x27, 0xd0, 0x4d, 0x51, 0xbb, 0x13, 0x81, 0x4c,
0x37, 0xf3, 0xea, 0x94, 0xb1, 0x3b, 0x9e, 0x47, 0x45, 0x94, 0x94, 0xad, 0x79, 0x43, 0xe6, 0x03,
0xb8, 0x79, 0x82, 0x98, 0xf4, 0x86, 0xc3, 0x54, 0x27, 0x22, 0x99, 0xad, 0x40, 0xe9, 0x04, 0xb9,
0xc2, 0xf8, 0x92, 0xc5, 0x3f, 0xf9, 0x06, 0x3c, 0x8d, 0x90, 0x2b, 0xac, 0x2c, 0x59, 0xe2, 0xdb,
0x0c, 0xa1, 0xf2, 0xe1, 0xc9, 0x01, 0xaf, 0x37, 0xf8, 0xa6, 0x96, 0xf5, 0x89, 0x3a, 0x8b, 0x9a,
0x56, 0x45, 0xc0, 0x47, 0x9e, 0xf1, 0x11, 0xac, 0xca, 0x21, 0x77, 0xe4, 0x04, 0x43, 0x64, 0x87,
0x64, 0x8c, 0x5d, 0xb9, 0xf5, 0x5b, 0xf7, 0x7a, 0x2a, 0x7c, 0x15, 0x9f, 0x3d, 0x41, 0xd2, 0x17,
0x14, 0x56, 0x67, 0x38, 0x8d, 0x32, 0xff, 0x51, 0x80, 0x8a, 0x3a, 0x0e, 0xf8, 0x91, 0xe6, 0x51,
0x7c, 0x81, 0xa8, 0xda, 0xec, 0x0a, 0x32, 0x5e, 0x82, 0x96, 0xfc, 0xb2, 0x49, 0xc8, 0x30, 0x49,
0x0e, 0x99, 0xa6, 0xc4, 0x3e, 0x92, 0x48, 0x71, 0xdd, 0x27, 0x2e, 0xdc, 0x54, 0x6f, 0xab, 0x20,
0x8e, 0x3f, 0x8b, 0xb8, 0x52, 0xe2, 0x50, 0xa9, 0x59, 0x0a, 0xe2, 0xc1, 0xa5, 0xf9, 0x2d, 0x09,
0x7e, 0x1a, 0xe4, 0xc1, 0xe5, 0x93, 0x38, 0x60, 0x76, 0x48, 0x70, 0xc0, 0xd4, 0x29, 0x02, 0x02,
0xd5, 0xe7, 0x18, 0x63, 0x13, 0xaa, 0x67, 0x91, 0x2d, 0xac, 0x11, 0x15, 0x63, 0x72, 0xb2, 0x29,
0xab, 0xad, 0xca, 0x59, 0x24, 0x3e, 0xcc, 0x5f, 0x17, 0x60, 0x59, 0x5e, 0x8e, 0xf3, 0xbe, 0x3b,
0x39, 0xf5, 0x8b, 0x58, 0x54, 0x50, 0x42, 0x2b, 0x79, 0xd2, 0x8b, 0x6f, 0x9e, 0x63, 0x2e, 0x7c,
0x79, 0x76, 0x29, 0x23, 0x2e, 0x7c, 0x71, 0x68, 0xbd, 0x04, 0xad, 0xb4, 0x78, 0x10, 0xe3, 0xd2,
0x98, 0x66, 0x82, 0x15, 0x64, 0x0b, 0x6d, 0x32, 0x7f, 0x0c, 0x90, 0x5e, 0x12, 0xf3, 0xed, 0x10,
0x27, 0xca, 0xf0, 0x4f, 0x8e, 0x19, 0x26, 0x65, 0x07, 0xff, 0x34, 0xee, 0x42, 0xcb, 0xf1, 0x3c,
0xcc, 0xa7, 0x3b, 0xe3, 0x03, 0xec, 0x25, 0x09, 0x24, 0x8f, 0x35, 0xff, 0x5a, 0x80, 0xf6, 0x1e,
0x09, 0x27, 0x1f, 0xe2, 0x31, 0xca, 0x64, 0x37, 0xa1, 0xa4, 0xaa, 0x3a, 0xf8, 0x37, 0xaf, 0xa4,
0xcf, 0xf0, 0x18, 0xc9, 0xb0, 0x97, 0xbb, 0xae, 0xca, 0x11, 0x22, 0xe4, 0xf5, 0x60, 0x72, 0x25,
0xd8, 0x94, 0x83, 0xc7, 0xc4, 0x13, 0x3d, 0x83, 0x87, 0xa9, 0x9d, 0x5c, 0x00, 0x36, 0xad, 0x8a,
0x87, 0xa9, 0x18, 0x52, 0x86, 0x2c, 0x89, 0x0b, 0xde, 0xac, 0x21, 0xcb, 0x12, 0xc3, 0x0d, 0x59,
0x87, 0x65, 0x72, 0x76, 0x16, 0x21, 0x26, 0xd6, 0xaa, 0x64, 0x29, 0x28, 0x49, 0xc1, 0xd5, 0x4c,
0x0a, 0x5e, 0x03, 0xe3, 0x00, 0xb1, 0x47, 0x8f, 0x8e, 0xf7, 0x2f, 0x50, 0xc0, 0xf4, 0xc9, 0xf5,
0x3a, 0x54, 0x35, 0xea, 0xbf, 0xb9, 0x3a, 0x7d, 0x15, 0x5a, 0x3b, 0x9e, 0x77, 0xf2, 0xd8, 0x09,
0xb5, 0x3f, 0xba, 0x50, 0xe9, 0xef, 0x1d, 0xf5, 0xa5, 0x4b, 0x4a, 0xdc, 0x00, 0x05, 0xf2, 0x93,
0xf2, 0x00, 0xb1, 0x63, 0xc4, 0x28, 0x76, 0x93, 0x93, 0xf2, 0x0e, 0x54, 0x14, 0x86, 0xcf, 0xf4,
0xe5, 0xa7, 0x3e, 0x02, 0x14, 0x68, 0xfe, 0x00, 0x8c, 0x1f, 0xf1, 0x9a, 0x0f, 0xc9, 0x82, 0x5f,
0x49, 0x7a, 0x15, 0x3a, 0x17, 0x02, 0x6b, 0xcb, 0x62, 0x28, 0xb3, 0x0c, 0x6d, 0x39, 0x20, 0xf2,
0x83, 0x90, 0x7d, 0x0a, 0xab, 0xb2, 0x44, 0x95, 0x7c, 0xae, 0xc1, 0x82, 0xfb, 0x30, 0x59, 0xcf,
0xb2, 0x25, 0xbe, 0xef, 0xfd, 0xb9, 0xa3, 0x8e, 0x31, 0x75, 0x23, 0x62, 0x1c, 0x40, 0x7b, 0xea,
0xf9, 0xca, 0x50, 0x57, 0x64, 0xf3, 0x5f, 0xb5, 0x7a, 0xeb, 0x5b, 0xf2, 0x39, 0x6c, 0x4b, 0x3f,
0x87, 0x6d, 0xed, 0xfb, 0x21, 0x9b, 0x18, 0xfb, 0xd0, 0xca, 0x3f, 0xf4, 0x18, 0xcf, 0xea, 0x8a,
0x72, 0xce, 0xf3, 0xcf, 0x42, 0x36, 0x07, 0xd0, 0x9e, 0x7a, 0xf3, 0xd1, 0xfa, 0xcc, 0x7f, 0x0a,
0x5a, 0xc8, 0xe8, 0x01, 0xd4, 0x33, 0x8f, 0x3c, 0x46, 0x57, 0x32, 0x99, 0x7d, 0xf7, 0x59, 0xc8,
0x60, 0x0f, 0x9a, 0xb9, 0x77, 0x17, 0xa3, 0xa7, 0xec, 0x99, 0xf3, 0x18, 0xb3, 0x90, 0xc9, 0x2e,
0xd4, 0x33, 0xcf, 0x1f, 0x5a, 0x8b, 0xd9, 0x37, 0x96, 0xde, 0xad, 0x39, 0x23, 0xea, 0xb4, 0x3c,
0x80, 0xf6, 0xd4, 0x9b, 0x88, 0x76, 0xc9, 0xfc, 0xa7, 0x92, 0x85, 0xca, 0x7c, 0x2c, 0x96, 0x28,
0xd3, 0xf2, 0x66, 0x96, 0x68, 0xf6, 0x05, 0xa4, 0xf7, 0xdc, 0xfc, 0x41, 0xa5, 0xd5, 0x3e, 0xb4,
0xf2, 0x8f, 0x1f, 0x9a, 0xd9, 0xdc, 0x27, 0x91, 0xab, 0xd7, 0x3b, 0xf7, 0x0e, 0x92, 0xae, 0xf7,
0xbc, 0xe7, 0x91, 0x85, 0x8c, 0x76, 0x00, 0x54, 0x83, 0xeb, 0xe1, 0x20, 0x71, 0xf4, 0x4c, 0x63,
0x9d, 0x38, 0x7a, 0x4e, 0x33, 0xfc, 0x00, 0x40, 0xf6, 0xa5, 0x1e, 0x89, 0x99, 0x71, 0x53, 0xab,
0x31, 0xd5, 0x0c, 0xf7, 0xba, 0xb3, 0x03, 0x33, 0x0c, 0x10, 0xa5, 0xd7, 0x61, 0xf0, 0x01, 0x40,
0xda, 0xef, 0x6a, 0x06, 0x33, 0x1d, 0xf0, 0x15, 0x3e, 0x68, 0x64, 0xbb, 0x5b, 0x43, 0xd9, 0x3a,
0xa7, 0xe3, 0xbd, 0x82, 0x45, 0x7b, 0xaa, 0x7b, 0xc9, 0x6f, 0xb6, 0xe9, 0xa6, 0xa6, 0x37, 0xd3,
0xc1, 0x18, 0xf7, 0xa1, 0x91, 0x6d, 0x5b, 0xb4, 0x16, 0x73, 0x5a, 0x99, 0x5e, 0xae, 0x75, 0x31,
0x1e, 0x40, 0x2b, 0xdf, 0xb2, 0xe8, 0x2d, 0x35, 0xb7, 0x91, 0xe9, 0xa9, 0x0b, 0xb9, 0x0c, 0xf9,
0x5b, 0x00, 0x69, 0x6b, 0xa3, 0xdd, 0x37, 0xd3, 0xec, 0x4c, 0x49, 0x3d, 0x80, 0xf6, 0x54, 0xcb,
0xa2, 0x2d, 0x9e, 0xdf, 0xc9, 0x2c, 0x74, 0xdd, 0xdb, 0x00, 0xe9, 0x71, 0xa1, 0xa5, 0xcf, 0x1c,
0x20, 0xbd, 0xa6, 0xbe, 0xac, 0x94, 0x74, 0x7b, 0xd0, 0xcc, 0xf5, 0xf3, 0x3a, 0xcd, 0xcc, 0x6b,
0xf2, 0xaf, 0x4a, 0xbe, 0xf9, 0xe6, 0x57, 0x7b, 0x6e, 0x6e, 0x4b, 0x7c, 0xd5, 0xfe, 0xc9, 0x76,
0x5c, 0x7a, 0xe5, 0xe6, 0x74, 0x61, 0xdf, 0x12, 0xcf, 0xd9, 0xae, 0x2a, 0x13, 0xcf, 0x73, 0x9a,
0xad, 0x85, 0x8c, 0x0e, 0xa1, 0x7d, 0xa0, 0x0b, 0x66, 0x55, 0xcc, 0x2b, 0x75, 0xe6, 0x34, 0x2f,
0xbd, 0xde, 0xbc, 0x21, 0x15, 0x54, 0x1f, 0x43, 0x67, 0xa6, 0x90, 0x37, 0x36, 0x92, 0x2b, 0xe3,
0xb9, 0x15, 0xfe, 0x42, 0xb5, 0x8e, 0x60, 0x65, 0xba, 0x8e, 0x37, 0x9e, 0x57, 0x89, 0x72, 0x7e,
0x7d, 0xbf, 0x90, 0xd5, 0xbb, 0x50, 0xd5, 0xb5, 0x99, 0xa1, 0xae, 0xe6, 0xa7, 0x6a, 0xb5, 0x85,
0x53, 0xef, 0x43, 0x3d, 0x53, 0x0a, 0xe9, 0x6c, 0x37, 0x5b, 0x1d, 0xf5, 0xd4, 0x4d, 0x7a, 0x42,
0x79, 0x1f, 0x2a, 0xaa, 0xfc, 0x31, 0xd6, 0x92, 0x4d, 0x9e, 0xa9, 0x86, 0xae, 0xda, 0x61, 0x07,
0x88, 0x65, 0x8a, 0x1a, 0x2d, 0x74, 0xb6, 0xce, 0xd1, 0x29, 0x36, 0x37, 0xa2, 0xd6, 0x62, 0x07,
0x1a, 0xd9, 0xb2, 0x46, 0x2f, 0xe9, 0x9c, 0x52, 0x67, 0x91, 0x26, 0xbb, 0x97, 0x5f, 0x7f, 0xb3,
0xf1, 0xcc, 0xdf, 0xbf, 0xd9, 0x78, 0xe6, 0x57, 0x4f, 0x36, 0x0a, 0x5f, 0x3f, 0xd9, 0x28, 0xfc,
0xed, 0xc9, 0x46, 0xe1, 0x5f, 0x4f, 0x36, 0x0a, 0x3f, 0xf9, 0xf9, 0xff, 0xf8, 0x1f, 0x21, 0x1a,
0x07, 0x0c, 0xfb, 0x68, 0xfb, 0x02, 0x53, 0x96, 0x19, 0x0a, 0xcf, 0x87, 0xf2, 0x8f, 0x42, 0x99,
0xff, 0x0f, 0x71, 0x2d, 0x07, 0xcb, 0x02, 0x7e, 0xeb, 0x3f, 0x01, 0x00, 0x00, 0xff, 0xff, 0x6c,
0xf4, 0x1d, 0x49, 0x8c, 0x24, 0x00, 0x00,
}
func (m *CreateContainerRequest) Marshal() (dAtA []byte, err error) {
@@ -5108,6 +5163,43 @@ func (m *SetGuestDateTimeRequest) MarshalToSizedBuffer(dAtA []byte) (int, error)
return len(dAtA) - i, nil
}
func (m *FSGroup) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
if err != nil {
return nil, err
}
return dAtA[:n], nil
}
func (m *FSGroup) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
func (m *FSGroup) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
_ = i
var l int
_ = l
if m.XXX_unrecognized != nil {
i -= len(m.XXX_unrecognized)
copy(dAtA[i:], m.XXX_unrecognized)
}
if m.GroupChangePolicy != 0 {
i = encodeVarintAgent(dAtA, i, uint64(m.GroupChangePolicy))
i--
dAtA[i] = 0x18
}
if m.GroupId != 0 {
i = encodeVarintAgent(dAtA, i, uint64(m.GroupId))
i--
dAtA[i] = 0x10
}
return len(dAtA) - i, nil
}
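// A minimal round-trip sketch (not part of the generated output, and assuming it is
// compiled alongside this generated agent package): it exercises the FSGroup
// marshalling above. The literal policy value 1 corresponds to OnRootMismatch in the
// types-package enum registered with this protocol.
func exampleFSGroupRoundTrip() {
	fg := &FSGroup{GroupId: 2000, GroupChangePolicy: 1}
	raw, err := fg.Marshal()
	if err != nil {
		panic(err)
	}
	// Per MarshalToSizedBuffer above, raw[0] is the GroupId tag 0x10 and the
	// GroupChangePolicy tag 0x18 follows the GroupId varint.
	var decoded FSGroup
	if err := decoded.Unmarshal(raw); err != nil {
		panic(err)
	}
	_ = decoded
}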
func (m *Storage) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
@@ -5132,6 +5224,18 @@ func (m *Storage) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i -= len(m.XXX_unrecognized)
copy(dAtA[i:], m.XXX_unrecognized)
}
if m.FsGroup != nil {
{
size, err := m.FsGroup.MarshalToSizedBuffer(dAtA[:i])
if err != nil {
return 0, err
}
i -= size
i = encodeVarintAgent(dAtA, i, uint64(size))
}
i--
dAtA[i] = 0x3a
}
if len(m.MountPoint) > 0 {
i -= len(m.MountPoint)
copy(dAtA[i:], m.MountPoint)
@@ -5452,20 +5556,20 @@ func (m *AddSwapRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) {
copy(dAtA[i:], m.XXX_unrecognized)
}
if len(m.PCIPath) > 0 {
dAtA26 := make([]byte, len(m.PCIPath)*10)
var j25 int
dAtA27 := make([]byte, len(m.PCIPath)*10)
var j26 int
for _, num := range m.PCIPath {
for num >= 1<<7 {
dAtA26[j25] = uint8(uint64(num)&0x7f | 0x80)
dAtA27[j26] = uint8(uint64(num)&0x7f | 0x80)
num >>= 7
j25++
j26++
}
dAtA26[j25] = uint8(num)
j25++
dAtA27[j26] = uint8(num)
j26++
}
i -= j25
copy(dAtA[i:], dAtA26[:j25])
i = encodeVarintAgent(dAtA, i, uint64(j25))
i -= j26
copy(dAtA[i:], dAtA27[:j26])
i = encodeVarintAgent(dAtA, i, uint64(j26))
i--
dAtA[i] = 0xa
}
@@ -6684,6 +6788,24 @@ func (m *SetGuestDateTimeRequest) Size() (n int) {
return n
}
func (m *FSGroup) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
if m.GroupId != 0 {
n += 1 + sovAgent(uint64(m.GroupId))
}
if m.GroupChangePolicy != 0 {
n += 1 + sovAgent(uint64(m.GroupChangePolicy))
}
if m.XXX_unrecognized != nil {
n += len(m.XXX_unrecognized)
}
return n
}
func (m *Storage) Size() (n int) {
if m == nil {
return 0
@@ -6718,6 +6840,10 @@ func (m *Storage) Size() (n int) {
if l > 0 {
n += 1 + l + sovAgent(uint64(l))
}
if m.FsGroup != nil {
l = m.FsGroup.Size()
n += 1 + l + sovAgent(uint64(l))
}
if m.XXX_unrecognized != nil {
n += len(m.XXX_unrecognized)
}
@@ -7631,6 +7757,18 @@ func (this *SetGuestDateTimeRequest) String() string {
}, "")
return s
}
func (this *FSGroup) String() string {
if this == nil {
return "nil"
}
s := strings.Join([]string{`&FSGroup{`,
`GroupId:` + fmt.Sprintf("%v", this.GroupId) + `,`,
`GroupChangePolicy:` + fmt.Sprintf("%v", this.GroupChangePolicy) + `,`,
`XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
`}`,
}, "")
return s
}
func (this *Storage) String() string {
if this == nil {
return "nil"
@@ -7642,6 +7780,7 @@ func (this *Storage) String() string {
`Fstype:` + fmt.Sprintf("%v", this.Fstype) + `,`,
`Options:` + fmt.Sprintf("%v", this.Options) + `,`,
`MountPoint:` + fmt.Sprintf("%v", this.MountPoint) + `,`,
`FsGroup:` + strings.Replace(this.FsGroup.String(), "FSGroup", "FSGroup", 1) + `,`,
`XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
`}`,
}, "")
@@ -14422,6 +14561,95 @@ func (m *SetGuestDateTimeRequest) Unmarshal(dAtA []byte) error {
}
return nil
}
func (m *FSGroup) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowAgent
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
return fmt.Errorf("proto: FSGroup: wiretype end group for non-group")
}
if fieldNum <= 0 {
return fmt.Errorf("proto: FSGroup: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 2:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field GroupId", wireType)
}
m.GroupId = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowAgent
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
m.GroupId |= uint32(b&0x7F) << shift
if b < 0x80 {
break
}
}
case 3:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field GroupChangePolicy", wireType)
}
m.GroupChangePolicy = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowAgent
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
m.GroupChangePolicy |= protocols.FSGroupChangePolicy(b&0x7F) << shift
if b < 0x80 {
break
}
}
default:
iNdEx = preIndex
skippy, err := skipAgent(dAtA[iNdEx:])
if err != nil {
return err
}
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthAgent
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
iNdEx += skippy
}
}
if iNdEx > l {
return io.ErrUnexpectedEOF
}
return nil
}
func (m *Storage) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
@@ -14643,6 +14871,42 @@ func (m *Storage) Unmarshal(dAtA []byte) error {
}
m.MountPoint = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
case 7:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field FsGroup", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowAgent
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
if msglen < 0 {
return ErrInvalidLengthAgent
}
postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthAgent
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
if m.FsGroup == nil {
m.FsGroup = &FSGroup{}
}
if err := m.FsGroup.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipAgent(dAtA[iNdEx:])


@@ -49,6 +49,35 @@ func (IPFamily) EnumDescriptor() ([]byte, []int) {
return fileDescriptor_f715d0876e8f65d3, []int{0}
}
// FSGroupChangePolicy defines the policy for applying group id ownership change on a mounted volume.
type FSGroupChangePolicy int32
const (
// Always indicates that the volume ownership will always be changed.
FSGroupChangePolicy_Always FSGroupChangePolicy = 0
// OnRootMismatch indicates that the volume ownership will be changed only
// when the ownership of the root directory does not match with the expected group id for the volume.
FSGroupChangePolicy_OnRootMismatch FSGroupChangePolicy = 1
)
var FSGroupChangePolicy_name = map[int32]string{
0: "Always",
1: "OnRootMismatch",
}
var FSGroupChangePolicy_value = map[string]int32{
"Always": 0,
"OnRootMismatch": 1,
}
func (x FSGroupChangePolicy) String() string {
return proto.EnumName(FSGroupChangePolicy_name, int32(x))
}
func (FSGroupChangePolicy) EnumDescriptor() ([]byte, []int) {
return fileDescriptor_f715d0876e8f65d3, []int{1}
}
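// A short usage sketch (not part of the generated output): the String method above
// resolves names through FSGroupChangePolicy_name, and the agent-side Storage.FsGroup
// message carries this policy so group ownership is only rewritten when the mounted
// volume's root directory does not already have the requested group id.
func exampleFSGroupChangePolicy() {
	policy := FSGroupChangePolicy_OnRootMismatch
	// Yields "OnRootMismatch"; FSGroupChangePolicy_Always (0) would yield "Always".
	name := policy.String()
	_ = name
}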
type IPAddress struct {
Family IPFamily `protobuf:"varint,1,opt,name=family,proto3,enum=types.IPFamily" json:"family,omitempty"`
Address string `protobuf:"bytes,2,opt,name=address,proto3" json:"address,omitempty"`
@@ -230,6 +259,7 @@ var xxx_messageInfo_ARPNeighbor proto.InternalMessageInfo
func init() {
proto.RegisterEnum("types.IPFamily", IPFamily_name, IPFamily_value)
proto.RegisterEnum("types.FSGroupChangePolicy", FSGroupChangePolicy_name, FSGroupChangePolicy_value)
proto.RegisterType((*IPAddress)(nil), "types.IPAddress")
proto.RegisterType((*Interface)(nil), "types.Interface")
proto.RegisterType((*Route)(nil), "types.Route")
@@ -241,38 +271,40 @@ func init() {
}
var fileDescriptor_f715d0876e8f65d3 = []byte{
// 482 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x94, 0x93, 0x3f, 0x8b, 0xdb, 0x30,
0x18, 0xc6, 0xa3, 0x38, 0xf6, 0xc5, 0x0a, 0xd7, 0x06, 0x51, 0x0e, 0xd1, 0x82, 0x31, 0x59, 0x6a,
0x0a, 0x8d, 0x21, 0x2d, 0xdd, 0xaf, 0xc3, 0x41, 0x96, 0x62, 0xb4, 0xb5, 0x4b, 0x91, 0x1d, 0xc5,
0x31, 0xb1, 0x2d, 0x23, 0xc9, 0x09, 0xd9, 0xfa, 0x45, 0xba, 0xf5, 0xc3, 0xdc, 0xd8, 0xb1, 0xe3,
0x25, 0x9f, 0xa4, 0x48, 0x72, 0x52, 0xf7, 0x0f, 0x85, 0x9b, 0xf2, 0xfe, 0x5e, 0x49, 0x79, 0x9f,
0xe7, 0x91, 0x05, 0x93, 0xbc, 0x50, 0x9b, 0x36, 0x9d, 0x67, 0xbc, 0x8a, 0xb7, 0x54, 0xd1, 0xd7,
0x19, 0xaf, 0x15, 0x2d, 0x6a, 0x26, 0xe4, 0x5f, 0x2c, 0x45, 0x16, 0x97, 0x45, 0x2a, 0xe3, 0x46,
0x70, 0xc5, 0x33, 0x5e, 0x76, 0x95, 0x8c, 0xd5, 0xa1, 0x61, 0x72, 0x6e, 0x00, 0xb9, 0x06, 0x66,
0x29, 0xf4, 0x97, 0xc9, 0xed, 0x6a, 0x25, 0x98, 0x94, 0xe8, 0x25, 0xf4, 0xd6, 0xb4, 0x2a, 0xca,
0x03, 0x06, 0x21, 0x88, 0x9e, 0x2c, 0x9e, 0xce, 0xed, 0x89, 0x65, 0x72, 0x67, 0xda, 0xa4, 0x5b,
0x46, 0x18, 0x5e, 0x51, 0x7b, 0x06, 0x0f, 0x43, 0x10, 0xf9, 0xe4, 0x8c, 0x08, 0xc1, 0x51, 0x45,
0xe5, 0x16, 0x3b, 0xa6, 0x6d, 0xea, 0xd9, 0x03, 0x80, 0xfe, 0xb2, 0x56, 0x4c, 0xac, 0x69, 0xc6,
0xd0, 0x0d, 0xf4, 0x56, 0x6c, 0x57, 0x64, 0xcc, 0x0c, 0xf1, 0x49, 0x47, 0xfa, 0x64, 0x4d, 0x2b,
0xd6, 0xfd, 0xa1, 0xa9, 0xd1, 0x02, 0x4e, 0x2e, 0xea, 0x98, 0xc4, 0x4e, 0xe8, 0x44, 0x93, 0xc5,
0xf4, 0xa2, 0xaa, 0x5b, 0x21, 0xfd, 0x4d, 0x68, 0x0a, 0x9d, 0x4a, 0xb5, 0x78, 0x14, 0x82, 0x68,
0x44, 0x74, 0xa9, 0x27, 0x6e, 0xf6, 0x7a, 0x03, 0x76, 0xed, 0x44, 0x4b, 0xda, 0x45, 0x93, 0x15,
0x09, 0x55, 0x1b, 0xec, 0x59, 0x17, 0x1d, 0x6a, 0x2d, 0x7a, 0x06, 0xbe, 0xb2, 0x5a, 0x74, 0x8d,
0x5e, 0x40, 0x5f, 0xd0, 0xfd, 0xe7, 0x75, 0x49, 0x73, 0x89, 0xc7, 0x21, 0x88, 0xae, 0xc9, 0x58,
0xd0, 0xfd, 0x9d, 0xe6, 0xd9, 0x37, 0x00, 0x5d, 0xc2, 0x5b, 0x65, 0x6c, 0xac, 0x98, 0x54, 0x9d,
0x39, 0x53, 0xeb, 0x41, 0x39, 0x55, 0x6c, 0x4f, 0x0f, 0xe7, 0xb8, 0x3a, 0xec, 0x85, 0xe1, 0xfc,
0x16, 0xc6, 0x0d, 0xf4, 0x24, 0x6f, 0x45, 0xc6, 0x8c, 0x0f, 0x9f, 0x74, 0x84, 0x9e, 0x41, 0x57,
0x66, 0xbc, 0x61, 0xc6, 0xc9, 0x35, 0xb1, 0xd0, 0xbb, 0x37, 0xef, 0xbf, 0xf7, 0x36, 0xfb, 0x0a,
0xe0, 0xe4, 0x96, 0x24, 0x1f, 0x58, 0x91, 0x6f, 0x52, 0x2e, 0x74, 0xbe, 0x8a, 0x5f, 0xc2, 0x33,
0x9a, 0xff, 0x99, 0x6f, 0x6f, 0x53, 0x4f, 0xf2, 0xf0, 0x4f, 0xc9, 0x65, 0xa9, 0x3f, 0x83, 0xb3,
0x15, 0x4b, 0x46, 0xb2, 0xa2, 0xca, 0x3a, 0x71, 0x89, 0x05, 0xdd, 0xb5, 0x49, 0xba, 0xb6, 0x6b,
0xe0, 0xd5, 0x73, 0x38, 0x3e, 0x6b, 0x46, 0x1e, 0x1c, 0xee, 0xde, 0x4e, 0x07, 0xe6, 0xf7, 0xdd,
0x14, 0xbc, 0x97, 0xf7, 0xc7, 0x60, 0xf0, 0xe3, 0x18, 0x0c, 0xbe, 0x9c, 0x02, 0x70, 0x7f, 0x0a,
0xc0, 0xf7, 0x53, 0x00, 0x1e, 0x4e, 0x01, 0xf8, 0xf4, 0xf1, 0x91, 0x8f, 0x43, 0xb4, 0xb5, 0x2a,
0x2a, 0x16, 0xef, 0x0a, 0xa1, 0x7a, 0x4b, 0xcd, 0x36, 0x8f, 0x69, 0xce, 0x6a, 0xf5, 0xeb, 0xe1,
0xa4, 0x9e, 0x29, 0xdf, 0xfc, 0x0c, 0x00, 0x00, 0xff, 0xff, 0xae, 0x92, 0x74, 0xed, 0x80, 0x03,
0x00, 0x00,
// 527 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x94, 0x93, 0xcd, 0x8e, 0xd3, 0x30,
0x14, 0x85, 0xeb, 0x69, 0x93, 0x69, 0x5c, 0xcd, 0x10, 0x19, 0x34, 0x8a, 0x40, 0x8a, 0xaa, 0x6e,
0xa8, 0x46, 0xa2, 0x91, 0xca, 0xcf, 0xbe, 0x20, 0x15, 0x75, 0x01, 0x44, 0x66, 0x05, 0x1b, 0xe4,
0xa6, 0x6e, 0x62, 0x35, 0x89, 0x23, 0xdb, 0x69, 0xd5, 0x1d, 0x2f, 0xc2, 0x8e, 0x87, 0x99, 0x25,
0x4b, 0x96, 0x33, 0x7d, 0x12, 0x64, 0x3b, 0x2d, 0xe1, 0x47, 0x48, 0xac, 0x7a, 0xbf, 0x6b, 0xbb,
0xf7, 0x9c, 0xe3, 0x18, 0xc6, 0x29, 0x53, 0x59, 0xbd, 0x9c, 0x24, 0xbc, 0x88, 0x36, 0x44, 0x91,
0x27, 0x09, 0x2f, 0x15, 0x61, 0x25, 0x15, 0xf2, 0x0f, 0x96, 0x22, 0x89, 0x72, 0xb6, 0x94, 0x51,
0x25, 0xb8, 0xe2, 0x09, 0xcf, 0x9b, 0x4a, 0x46, 0x6a, 0x5f, 0x51, 0x39, 0x31, 0x80, 0x1c, 0x03,
0xa3, 0x25, 0xf4, 0x16, 0xf1, 0x6c, 0xb5, 0x12, 0x54, 0x4a, 0xf4, 0x18, 0xba, 0x6b, 0x52, 0xb0,
0x7c, 0x1f, 0x80, 0x21, 0x18, 0x5f, 0x4e, 0xef, 0x4d, 0xec, 0x89, 0x45, 0x3c, 0x37, 0x6d, 0xdc,
0x2c, 0xa3, 0x00, 0x9e, 0x13, 0x7b, 0x26, 0x38, 0x1b, 0x82, 0xb1, 0x87, 0x8f, 0x88, 0x10, 0xec,
0x15, 0x44, 0x6e, 0x82, 0xae, 0x69, 0x9b, 0x7a, 0x74, 0x0b, 0xa0, 0xb7, 0x28, 0x15, 0x15, 0x6b,
0x92, 0x50, 0x74, 0x05, 0xdd, 0x15, 0xdd, 0xb2, 0x84, 0x9a, 0x21, 0x1e, 0x6e, 0x48, 0x9f, 0x2c,
0x49, 0x41, 0x9b, 0x3f, 0x34, 0x35, 0x9a, 0xc2, 0xc1, 0x49, 0x1d, 0x95, 0x41, 0x77, 0xd8, 0x1d,
0x0f, 0xa6, 0xfe, 0x49, 0x55, 0xb3, 0x82, 0xdb, 0x9b, 0x90, 0x0f, 0xbb, 0x85, 0xaa, 0x83, 0xde,
0x10, 0x8c, 0x7b, 0x58, 0x97, 0x7a, 0x62, 0xb6, 0xd3, 0x1b, 0x02, 0xc7, 0x4e, 0xb4, 0xa4, 0x5d,
0x54, 0x09, 0x8b, 0x89, 0xca, 0x02, 0xd7, 0xba, 0x68, 0x50, 0x6b, 0xd1, 0x33, 0x82, 0x73, 0xab,
0x45, 0xd7, 0xe8, 0x11, 0xf4, 0x04, 0xd9, 0x7d, 0x5a, 0xe7, 0x24, 0x95, 0x41, 0x7f, 0x08, 0xc6,
0x17, 0xb8, 0x2f, 0xc8, 0x6e, 0xae, 0x79, 0xf4, 0x15, 0x40, 0x07, 0xf3, 0x5a, 0x19, 0x1b, 0x2b,
0x2a, 0x55, 0x63, 0xce, 0xd4, 0x7a, 0x50, 0x4a, 0x14, 0xdd, 0x91, 0xfd, 0x31, 0xae, 0x06, 0x5b,
0x61, 0x74, 0x7f, 0x09, 0xe3, 0x0a, 0xba, 0x92, 0xd7, 0x22, 0xa1, 0xc6, 0x87, 0x87, 0x1b, 0x42,
0x0f, 0xa0, 0x23, 0x13, 0x5e, 0x51, 0xe3, 0xe4, 0x02, 0x5b, 0x68, 0xdd, 0x9b, 0xfb, 0xcf, 0x7b,
0x1b, 0x7d, 0x01, 0x70, 0x30, 0xc3, 0xf1, 0x5b, 0xca, 0xd2, 0x6c, 0xc9, 0x85, 0xce, 0x57, 0xf1,
0x53, 0x78, 0x46, 0xf3, 0x5f, 0xf3, 0x6d, 0x6d, 0x6a, 0x49, 0x3e, 0xfb, 0x5d, 0x72, 0x9e, 0xeb,
0xcf, 0xe0, 0x68, 0xc5, 0x92, 0x91, 0xac, 0x88, 0xb2, 0x4e, 0x1c, 0x6c, 0x41, 0x77, 0x6d, 0x92,
0x8e, 0xed, 0x1a, 0xb8, 0x7e, 0x08, 0xfb, 0x47, 0xcd, 0xc8, 0x85, 0x67, 0xdb, 0x67, 0x7e, 0xc7,
0xfc, 0xbe, 0xf0, 0xc1, 0xf5, 0x73, 0x78, 0x7f, 0xfe, 0xfe, 0xb5, 0xe0, 0x75, 0xf5, 0x2a, 0x23,
0x65, 0x4a, 0x63, 0x9e, 0xb3, 0x64, 0x8f, 0x20, 0x74, 0x67, 0xf9, 0x8e, 0xec, 0xa5, 0xdf, 0x41,
0x08, 0x5e, 0xbe, 0x2b, 0x31, 0xe7, 0xea, 0x0d, 0x93, 0x05, 0x51, 0x49, 0xe6, 0x83, 0x97, 0xf2,
0xe6, 0x2e, 0xec, 0x7c, 0xbf, 0x0b, 0x3b, 0x9f, 0x0f, 0x21, 0xb8, 0x39, 0x84, 0xe0, 0xdb, 0x21,
0x04, 0xb7, 0x87, 0x10, 0x7c, 0xfc, 0xf0, 0x9f, 0x6f, 0x4a, 0xd4, 0xa5, 0x62, 0x05, 0x8d, 0xb6,
0x4c, 0xa8, 0xd6, 0x52, 0xb5, 0x49, 0x23, 0x92, 0xd2, 0x52, 0xfd, 0x7c, 0x6f, 0x4b, 0xd7, 0x94,
0x4f, 0x7f, 0x04, 0x00, 0x00, 0xff, 0xff, 0xd7, 0x28, 0x03, 0xf1, 0xb7, 0x03, 0x00, 0x00,
}
func (m *IPAddress) Marshal() (dAtA []byte, err error) {


@@ -10,6 +10,7 @@ docs/BalloonConfig.md
docs/CmdLineConfig.md
docs/ConsoleConfig.md
docs/CpuAffinity.md
docs/CpuFeatures.md
docs/CpuTopology.md
docs/CpusConfig.md
docs/DefaultApi.md
@@ -35,6 +36,7 @@ docs/SendMigrationData.md
docs/SgxEpcConfig.md
docs/TdxConfig.md
docs/TokenBucket.md
docs/VdpaConfig.md
docs/VmAddDevice.md
docs/VmConfig.md
docs/VmInfo.md
@@ -51,6 +53,7 @@ model_balloon_config.go
model_cmd_line_config.go
model_console_config.go
model_cpu_affinity.go
model_cpu_features.go
model_cpu_topology.go
model_cpus_config.go
model_device_config.go
@@ -75,6 +78,7 @@ model_send_migration_data.go
model_sgx_epc_config.go
model_tdx_config.go
model_token_bucket.go
model_vdpa_config.go
model_vm_add_device.go
model_vm_config.go
model_vm_info.go


@@ -92,6 +92,7 @@ Class | Method | HTTP request | Description
*DefaultApi* | [**VmAddFsPut**](docs/DefaultApi.md#vmaddfsput) | **Put** /vm.add-fs | Add a new virtio-fs device to the VM
*DefaultApi* | [**VmAddNetPut**](docs/DefaultApi.md#vmaddnetput) | **Put** /vm.add-net | Add a new network device to the VM
*DefaultApi* | [**VmAddPmemPut**](docs/DefaultApi.md#vmaddpmemput) | **Put** /vm.add-pmem | Add a new pmem device to the VM
*DefaultApi* | [**VmAddVdpaPut**](docs/DefaultApi.md#vmaddvdpaput) | **Put** /vm.add-vdpa | Add a new vDPA device to the VM
*DefaultApi* | [**VmAddVsockPut**](docs/DefaultApi.md#vmaddvsockput) | **Put** /vm.add-vsock | Add a new vsock device to the VM
*DefaultApi* | [**VmCountersGet**](docs/DefaultApi.md#vmcountersget) | **Get** /vm.counters | Get counters from the VM
*DefaultApi* | [**VmInfoGet**](docs/DefaultApi.md#vminfoget) | **Get** /vm.info | Returns general information about the cloud-hypervisor Virtual Machine (VM) instance.
@@ -111,6 +112,7 @@ Class | Method | HTTP request | Description
- [CmdLineConfig](docs/CmdLineConfig.md)
- [ConsoleConfig](docs/ConsoleConfig.md)
- [CpuAffinity](docs/CpuAffinity.md)
- [CpuFeatures](docs/CpuFeatures.md)
- [CpuTopology](docs/CpuTopology.md)
- [CpusConfig](docs/CpusConfig.md)
- [DeviceConfig](docs/DeviceConfig.md)
@@ -135,6 +137,7 @@ Class | Method | HTTP request | Description
- [SgxEpcConfig](docs/SgxEpcConfig.md)
- [TdxConfig](docs/TdxConfig.md)
- [TokenBucket](docs/TokenBucket.md)
- [VdpaConfig](docs/VdpaConfig.md)
- [VmAddDevice](docs/VmAddDevice.md)
- [VmConfig](docs/VmConfig.md)
- [VmInfo](docs/VmInfo.md)


@@ -306,6 +306,28 @@ paths:
"500":
description: The new device could not be added to the VM instance.
summary: Add a new vsock device to the VM
/vm.add-vdpa:
put:
requestBody:
content:
application/json:
schema:
$ref: '#/components/schemas/VdpaConfig'
description: The details of the new vDPA device
required: true
responses:
"200":
content:
application/json:
schema:
$ref: '#/components/schemas/PciDeviceInfo'
description: The new vDPA device was successfully added to the VM instance.
"204":
description: The new vDPA device was successfully (cold) added to the VM
instance.
"500":
description: The new vDPA device could not be added to the VM instance.
summary: Add a new vDPA device to the VM
/vm.snapshot:
put:
requestBody:
@@ -386,7 +408,7 @@ components:
VmInfo:
description: Virtual Machine information
example:
memory_actual_size: 3
memory_actual_size: 7
state: Created
config:
console:
@@ -472,6 +494,8 @@ components:
refill_time: 0
id: id
cpus:
features:
amx: true
topology:
dies_per_package: 5
threads_per_core: 1
@@ -500,37 +524,48 @@ components:
id: id
kernel:
path: path
vdpa:
- pci_segment: 7
path: path
num_queues: 3
iommu: false
id: id
- pci_segment: 7
path: path
num_queues: 3
iommu: false
id: id
numa:
- distances:
- distance: 4
destination: 0
- distance: 4
destination: 0
- distance: 7
destination: 8
- distance: 7
destination: 8
cpus:
- 6
- 6
- 4
- 4
sgx_epc_sections:
- sgx_epc_sections
- sgx_epc_sections
memory_zones:
- memory_zones
- memory_zones
guest_numa_id: 7
guest_numa_id: 0
- distances:
- distance: 4
destination: 0
- distance: 4
destination: 0
- distance: 7
destination: 8
- distance: 7
destination: 8
cpus:
- 6
- 6
- 4
- 4
sgx_epc_sections:
- sgx_epc_sections
- sgx_epc_sections
memory_zones:
- memory_zones
- memory_zones
guest_numa_id: 7
guest_numa_id: 0
tdx:
firmware: firmware
rng:
@@ -538,10 +573,10 @@ components:
src: /dev/urandom
sgx_epc:
- prefault: false
size: 0
size: 6
id: id
- prefault: false
size: 0
size: 6
id: id
fs:
- pci_segment: 6
@@ -568,9 +603,9 @@ components:
cid: 3
platform:
iommu_segments:
- 7
- 7
num_pci_segments: 8
- 3
- 3
num_pci_segments: 3
pmem:
- pci_segment: 6
mergeable: false
@@ -801,6 +836,8 @@ components:
refill_time: 0
id: id
cpus:
features:
amx: true
topology:
dies_per_package: 5
threads_per_core: 1
@@ -829,37 +866,48 @@ components:
id: id
kernel:
path: path
vdpa:
- pci_segment: 7
path: path
num_queues: 3
iommu: false
id: id
- pci_segment: 7
path: path
num_queues: 3
iommu: false
id: id
numa:
- distances:
- distance: 4
destination: 0
- distance: 4
destination: 0
- distance: 7
destination: 8
- distance: 7
destination: 8
cpus:
- 6
- 6
- 4
- 4
sgx_epc_sections:
- sgx_epc_sections
- sgx_epc_sections
memory_zones:
- memory_zones
- memory_zones
guest_numa_id: 7
guest_numa_id: 0
- distances:
- distance: 4
destination: 0
- distance: 4
destination: 0
- distance: 7
destination: 8
- distance: 7
destination: 8
cpus:
- 6
- 6
- 4
- 4
sgx_epc_sections:
- sgx_epc_sections
- sgx_epc_sections
memory_zones:
- memory_zones
- memory_zones
guest_numa_id: 7
guest_numa_id: 0
tdx:
firmware: firmware
rng:
@@ -867,10 +915,10 @@ components:
src: /dev/urandom
sgx_epc:
- prefault: false
size: 0
size: 6
id: id
- prefault: false
size: 0
size: 6
id: id
fs:
- pci_segment: 6
@@ -897,9 +945,9 @@ components:
cid: 3
platform:
iommu_segments:
- 7
- 7
num_pci_segments: 8
- 3
- 3
num_pci_segments: 3
pmem:
- pci_segment: 6
mergeable: false
@@ -1007,6 +1055,10 @@ components:
items:
$ref: '#/components/schemas/DeviceConfig'
type: array
vdpa:
items:
$ref: '#/components/schemas/VdpaConfig'
type: array
vsock:
$ref: '#/components/schemas/VsockConfig'
sgx_epc:
@@ -1044,6 +1096,13 @@ components:
type: integer
type: array
type: object
CpuFeatures:
example:
amx: true
properties:
amx:
type: boolean
type: object
CpuTopology:
example:
dies_per_package: 5
@@ -1062,6 +1121,8 @@ components:
type: object
CpusConfig:
example:
features:
amx: true
topology:
dies_per_package: 5
threads_per_core: 1
@@ -1096,6 +1157,8 @@ components:
items:
$ref: '#/components/schemas/CpuAffinity'
type: array
features:
$ref: '#/components/schemas/CpuFeatures'
required:
- boot_vcpus
- max_vcpus
@@ -1103,9 +1166,9 @@ components:
PlatformConfig:
example:
iommu_segments:
- 7
- 7
num_pci_segments: 8
- 3
- 3
num_pci_segments: 3
properties:
num_pci_segments:
format: int16
@@ -1579,6 +1642,31 @@ components:
required:
- path
type: object
VdpaConfig:
example:
pci_segment: 7
path: path
num_queues: 3
iommu: false
id: id
properties:
path:
type: string
num_queues:
default: 1
type: integer
iommu:
default: false
type: boolean
pci_segment:
format: int16
type: integer
id:
type: string
required:
- num_queues
- path
type: object
VsockConfig:
example:
pci_segment: 7
@@ -1610,7 +1698,7 @@ components:
SgxEpcConfig:
example:
prefault: false
size: 0
size: 6
id: id
properties:
id:
@@ -1638,8 +1726,8 @@ components:
type: object
NumaDistance:
example:
distance: 4
destination: 0
distance: 7
destination: 8
properties:
destination:
format: int32
@@ -1654,20 +1742,20 @@ components:
NumaConfig:
example:
distances:
- distance: 4
destination: 0
- distance: 4
destination: 0
- distance: 7
destination: 8
- distance: 7
destination: 8
cpus:
- 6
- 6
- 4
- 4
sgx_epc_sections:
- sgx_epc_sections
- sgx_epc_sections
memory_zones:
- memory_zones
- memory_zones
guest_numa_id: 7
guest_numa_id: 0
properties:
guest_numa_id:
format: int32


@@ -1385,6 +1385,117 @@ func (a *DefaultApiService) VmAddPmemPutExecute(r ApiVmAddPmemPutRequest) (PciDe
return localVarReturnValue, localVarHTTPResponse, nil
}
type ApiVmAddVdpaPutRequest struct {
ctx _context.Context
ApiService *DefaultApiService
vdpaConfig *VdpaConfig
}
// The details of the new vDPA device
func (r ApiVmAddVdpaPutRequest) VdpaConfig(vdpaConfig VdpaConfig) ApiVmAddVdpaPutRequest {
r.vdpaConfig = &vdpaConfig
return r
}
func (r ApiVmAddVdpaPutRequest) Execute() (PciDeviceInfo, *_nethttp.Response, error) {
return r.ApiService.VmAddVdpaPutExecute(r)
}
/*
VmAddVdpaPut Add a new vDPA device to the VM
@param ctx _context.Context - for authentication, logging, cancellation, deadlines, tracing, etc. Passed from http.Request or context.Background().
@return ApiVmAddVdpaPutRequest
*/
func (a *DefaultApiService) VmAddVdpaPut(ctx _context.Context) ApiVmAddVdpaPutRequest {
return ApiVmAddVdpaPutRequest{
ApiService: a,
ctx: ctx,
}
}
// Execute executes the request
// @return PciDeviceInfo
func (a *DefaultApiService) VmAddVdpaPutExecute(r ApiVmAddVdpaPutRequest) (PciDeviceInfo, *_nethttp.Response, error) {
var (
localVarHTTPMethod = _nethttp.MethodPut
localVarPostBody interface{}
localVarFormFileName string
localVarFileName string
localVarFileBytes []byte
localVarReturnValue PciDeviceInfo
)
localBasePath, err := a.client.cfg.ServerURLWithContext(r.ctx, "DefaultApiService.VmAddVdpaPut")
if err != nil {
return localVarReturnValue, nil, GenericOpenAPIError{error: err.Error()}
}
localVarPath := localBasePath + "/vm.add-vdpa"
localVarHeaderParams := make(map[string]string)
localVarQueryParams := _neturl.Values{}
localVarFormParams := _neturl.Values{}
if r.vdpaConfig == nil {
return localVarReturnValue, nil, reportError("vdpaConfig is required and must be specified")
}
// to determine the Content-Type header
localVarHTTPContentTypes := []string{"application/json"}
// set Content-Type header
localVarHTTPContentType := selectHeaderContentType(localVarHTTPContentTypes)
if localVarHTTPContentType != "" {
localVarHeaderParams["Content-Type"] = localVarHTTPContentType
}
// to determine the Accept header
localVarHTTPHeaderAccepts := []string{"application/json"}
// set Accept header
localVarHTTPHeaderAccept := selectHeaderAccept(localVarHTTPHeaderAccepts)
if localVarHTTPHeaderAccept != "" {
localVarHeaderParams["Accept"] = localVarHTTPHeaderAccept
}
// body params
localVarPostBody = r.vdpaConfig
req, err := a.client.prepareRequest(r.ctx, localVarPath, localVarHTTPMethod, localVarPostBody, localVarHeaderParams, localVarQueryParams, localVarFormParams, localVarFormFileName, localVarFileName, localVarFileBytes)
if err != nil {
return localVarReturnValue, nil, err
}
localVarHTTPResponse, err := a.client.callAPI(req)
if err != nil || localVarHTTPResponse == nil {
return localVarReturnValue, localVarHTTPResponse, err
}
localVarBody, err := _ioutil.ReadAll(localVarHTTPResponse.Body)
localVarHTTPResponse.Body.Close()
localVarHTTPResponse.Body = _ioutil.NopCloser(bytes.NewBuffer(localVarBody))
if err != nil {
return localVarReturnValue, localVarHTTPResponse, err
}
if localVarHTTPResponse.StatusCode >= 300 {
newErr := GenericOpenAPIError{
body: localVarBody,
error: localVarHTTPResponse.Status,
}
return localVarReturnValue, localVarHTTPResponse, newErr
}
err = a.client.decode(&localVarReturnValue, localVarBody, localVarHTTPResponse.Header.Get("Content-Type"))
if err != nil {
newErr := GenericOpenAPIError{
body: localVarBody,
error: err.Error(),
}
return localVarReturnValue, localVarHTTPResponse, newErr
}
return localVarReturnValue, localVarHTTPResponse, nil
}
type ApiVmAddVsockPutRequest struct {
ctx _context.Context
ApiService *DefaultApiService


@@ -0,0 +1,56 @@
# CpuFeatures
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**Amx** | Pointer to **bool** | | [optional]
## Methods
### NewCpuFeatures
`func NewCpuFeatures() *CpuFeatures`
NewCpuFeatures instantiates a new CpuFeatures object
This constructor will assign default values to properties that have it defined,
and makes sure properties required by API are set, but the set of arguments
will change when the set of required properties is changed
### NewCpuFeaturesWithDefaults
`func NewCpuFeaturesWithDefaults() *CpuFeatures`
NewCpuFeaturesWithDefaults instantiates a new CpuFeatures object
This constructor will only assign default values to properties that have it defined,
but it doesn't guarantee that properties required by API are set
### GetAmx
`func (o *CpuFeatures) GetAmx() bool`
GetAmx returns the Amx field if non-nil, zero value otherwise.
### GetAmxOk
`func (o *CpuFeatures) GetAmxOk() (*bool, bool)`
GetAmxOk returns a tuple with the Amx field if it's non-nil, zero value otherwise
and a boolean to check if the value has been set.
### SetAmx
`func (o *CpuFeatures) SetAmx(v bool)`
SetAmx sets Amx field to given value.
### HasAmx
`func (o *CpuFeatures) HasAmx() bool`
HasAmx returns a boolean if a field has been set.
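A brief usage sketch, not produced by the generator, built only from the constructor and accessors documented above; the `./openapi` import path mirrors the convention used in the generated DefaultApi examples:

```go
package main

import (
	"fmt"

	openapiclient "./openapi"
)

func main() {
	features := openapiclient.NewCpuFeatures()
	features.SetAmx(true)

	// HasAmx reports whether the optional field was set; GetAmx then returns its value.
	if features.HasAmx() {
		fmt.Println("amx requested:", features.GetAmx())
	}
}
```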
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)


@@ -9,6 +9,7 @@ Name | Type | Description | Notes
**Topology** | Pointer to [**CpuTopology**](CpuTopology.md) | | [optional]
**MaxPhysBits** | Pointer to **int32** | | [optional]
**Affinity** | Pointer to [**[]CpuAffinity**](CpuAffinity.md) | | [optional]
**Features** | Pointer to [**CpuFeatures**](CpuFeatures.md) | | [optional]
## Methods
@@ -144,6 +145,31 @@ SetAffinity sets Affinity field to given value.
HasAffinity returns a boolean if a field has been set.
### GetFeatures
`func (o *CpusConfig) GetFeatures() CpuFeatures`
GetFeatures returns the Features field if non-nil, zero value otherwise.
### GetFeaturesOk
`func (o *CpusConfig) GetFeaturesOk() (*CpuFeatures, bool)`
GetFeaturesOk returns a tuple with the Features field if it's non-nil, zero value otherwise
and a boolean to check if the value has been set.
### SetFeatures
`func (o *CpusConfig) SetFeatures(v CpuFeatures)`
SetFeatures sets Features field to given value.
### HasFeatures
`func (o *CpusConfig) HasFeatures() bool`
HasFeatures returns a boolean if a field has been set.
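Continuing the CpuFeatures sketch, a fragment (not produced by the generator) that attaches the features block to a `CpusConfig`; it assumes the usual `NewCpusConfigWithDefaults` constructor the generator emits for every model:

```go
cpus := openapiclient.NewCpusConfigWithDefaults()

features := openapiclient.NewCpuFeatures()
features.SetAmx(true)
cpus.SetFeatures(*features)

// GetFeaturesOk distinguishes an unset field from its zero value.
if f, ok := cpus.GetFeaturesOk(); ok {
	fmt.Println("amx:", f.GetAmx())
}
```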
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)


@@ -18,6 +18,7 @@ Method | HTTP request | Description
[**VmAddFsPut**](DefaultApi.md#VmAddFsPut) | **Put** /vm.add-fs | Add a new virtio-fs device to the VM
[**VmAddNetPut**](DefaultApi.md#VmAddNetPut) | **Put** /vm.add-net | Add a new network device to the VM
[**VmAddPmemPut**](DefaultApi.md#VmAddPmemPut) | **Put** /vm.add-pmem | Add a new pmem device to the VM
[**VmAddVdpaPut**](DefaultApi.md#VmAddVdpaPut) | **Put** /vm.add-vdpa | Add a new vDPA device to the VM
[**VmAddVsockPut**](DefaultApi.md#VmAddVsockPut) | **Put** /vm.add-vsock | Add a new vsock device to the VM
[**VmCountersGet**](DefaultApi.md#VmCountersGet) | **Get** /vm.counters | Get counters from the VM
[**VmInfoGet**](DefaultApi.md#VmInfoGet) | **Get** /vm.info | Returns general information about the cloud-hypervisor Virtual Machine (VM) instance.
@@ -870,6 +871,70 @@ No authorization required
[[Back to README]](../README.md)
## VmAddVdpaPut
> PciDeviceInfo VmAddVdpaPut(ctx).VdpaConfig(vdpaConfig).Execute()
Add a new vDPA device to the VM
### Example
```go
package main
import (
"context"
"fmt"
"os"
openapiclient "./openapi"
)
func main() {
vdpaConfig := *openapiclient.NewVdpaConfig("Path_example", int32(123)) // VdpaConfig | The details of the new vDPA device
configuration := openapiclient.NewConfiguration()
api_client := openapiclient.NewAPIClient(configuration)
resp, r, err := api_client.DefaultApi.VmAddVdpaPut(context.Background()).VdpaConfig(vdpaConfig).Execute()
if err != nil {
fmt.Fprintf(os.Stderr, "Error when calling `DefaultApi.VmAddVdpaPut``: %v\n", err)
fmt.Fprintf(os.Stderr, "Full HTTP response: %v\n", r)
}
// response from `VmAddVdpaPut`: PciDeviceInfo
fmt.Fprintf(os.Stdout, "Response from `DefaultApi.VmAddVdpaPut`: %v\n", resp)
}
```
### Path Parameters
### Other Parameters
Other parameters are passed through a pointer to an apiVmAddVdpaPutRequest struct via the builder pattern
Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
**vdpaConfig** | [**VdpaConfig**](VdpaConfig.md) | The details of the new vDPA device |
### Return type
[**PciDeviceInfo**](PciDeviceInfo.md)
### Authorization
No authorization required
### HTTP request headers
- **Content-Type**: application/json
- **Accept**: application/json
[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints)
[[Back to Model list]](../README.md#documentation-for-models)
[[Back to README]](../README.md)
## VmAddVsockPut
> PciDeviceInfo VmAddVsockPut(ctx).VsockConfig(vsockConfig).Execute()


@@ -0,0 +1,150 @@
# VdpaConfig
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**Path** | **string** | |
**NumQueues** | **int32** | | [default to 1]
**Iommu** | Pointer to **bool** | | [optional] [default to false]
**PciSegment** | Pointer to **int32** | | [optional]
**Id** | Pointer to **string** | | [optional]
## Methods
### NewVdpaConfig
`func NewVdpaConfig(path string, numQueues int32, ) *VdpaConfig`
NewVdpaConfig instantiates a new VdpaConfig object
This constructor will assign default values to properties that have it defined,
and makes sure properties required by API are set, but the set of arguments
will change when the set of required properties is changed
### NewVdpaConfigWithDefaults
`func NewVdpaConfigWithDefaults() *VdpaConfig`
NewVdpaConfigWithDefaults instantiates a new VdpaConfig object
This constructor will only assign default values to properties that have it defined,
but it doesn't guarantee that properties required by API are set
### GetPath
`func (o *VdpaConfig) GetPath() string`
GetPath returns the Path field if non-nil, zero value otherwise.
### GetPathOk
`func (o *VdpaConfig) GetPathOk() (*string, bool)`
GetPathOk returns a tuple with the Path field if it's non-nil, zero value otherwise
and a boolean to check if the value has been set.
### SetPath
`func (o *VdpaConfig) SetPath(v string)`
SetPath sets Path field to given value.
### GetNumQueues
`func (o *VdpaConfig) GetNumQueues() int32`
GetNumQueues returns the NumQueues field if non-nil, zero value otherwise.
### GetNumQueuesOk
`func (o *VdpaConfig) GetNumQueuesOk() (*int32, bool)`
GetNumQueuesOk returns a tuple with the NumQueues field if it's non-nil, zero value otherwise
and a boolean to check if the value has been set.
### SetNumQueues
`func (o *VdpaConfig) SetNumQueues(v int32)`
SetNumQueues sets NumQueues field to given value.
### GetIommu
`func (o *VdpaConfig) GetIommu() bool`
GetIommu returns the Iommu field if non-nil, zero value otherwise.
### GetIommuOk
`func (o *VdpaConfig) GetIommuOk() (*bool, bool)`
GetIommuOk returns a tuple with the Iommu field if it's non-nil, zero value otherwise
and a boolean to check if the value has been set.
### SetIommu
`func (o *VdpaConfig) SetIommu(v bool)`
SetIommu sets Iommu field to given value.
### HasIommu
`func (o *VdpaConfig) HasIommu() bool`
HasIommu returns a boolean if a field has been set.
### GetPciSegment
`func (o *VdpaConfig) GetPciSegment() int32`
GetPciSegment returns the PciSegment field if non-nil, zero value otherwise.
### GetPciSegmentOk
`func (o *VdpaConfig) GetPciSegmentOk() (*int32, bool)`
GetPciSegmentOk returns a tuple with the PciSegment field if it's non-nil, zero value otherwise
and a boolean to check if the value has been set.
### SetPciSegment
`func (o *VdpaConfig) SetPciSegment(v int32)`
SetPciSegment sets PciSegment field to given value.
### HasPciSegment
`func (o *VdpaConfig) HasPciSegment() bool`
HasPciSegment returns a boolean if a field has been set.
### GetId
`func (o *VdpaConfig) GetId() string`
GetId returns the Id field if non-nil, zero value otherwise.
### GetIdOk
`func (o *VdpaConfig) GetIdOk() (*string, bool)`
GetIdOk returns a tuple with the Id field if it's non-nil, zero value otherwise
and a boolean to check if the value has been set.
### SetId
`func (o *VdpaConfig) SetId(v string)`
SetId sets Id field to given value.
### HasId
`func (o *VdpaConfig) HasId() bool`
HasId returns a boolean if a field has been set.
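A brief usage sketch, not produced by the generator, built from the constructor and setters documented above; the `/dev/vhost-vdpa-0` device path is purely illustrative:

```go
package main

import (
	"fmt"

	openapiclient "./openapi"
)

func main() {
	// Path and NumQueues are the two required fields; everything else is optional.
	vdpa := openapiclient.NewVdpaConfig("/dev/vhost-vdpa-0", 1)
	vdpa.SetId("vdpa0")

	fmt.Println(vdpa.GetPath(), vdpa.GetNumQueues(), vdpa.GetId())
}
```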
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)


@@ -18,6 +18,7 @@ Name | Type | Description | Notes
**Serial** | Pointer to [**ConsoleConfig**](ConsoleConfig.md) | | [optional]
**Console** | Pointer to [**ConsoleConfig**](ConsoleConfig.md) | | [optional]
**Devices** | Pointer to [**[]DeviceConfig**](DeviceConfig.md) | | [optional]
**Vdpa** | Pointer to [**[]VdpaConfig**](VdpaConfig.md) | | [optional]
**Vsock** | Pointer to [**VsockConfig**](VsockConfig.md) | | [optional]
**SgxEpc** | Pointer to [**[]SgxEpcConfig**](SgxEpcConfig.md) | | [optional]
**Tdx** | Pointer to [**TdxConfig**](TdxConfig.md) | | [optional]
@@ -400,6 +401,31 @@ SetDevices sets Devices field to given value.
HasDevices returns a boolean if a field has been set.
### GetVdpa
`func (o *VmConfig) GetVdpa() []VdpaConfig`
GetVdpa returns the Vdpa field if non-nil, zero value otherwise.
### GetVdpaOk
`func (o *VmConfig) GetVdpaOk() (*[]VdpaConfig, bool)`
GetVdpaOk returns a tuple with the Vdpa field if it's non-nil, zero value otherwise
and a boolean to check if the value has been set.
### SetVdpa
`func (o *VmConfig) SetVdpa(v []VdpaConfig)`
SetVdpa sets Vdpa field to given value.
### HasVdpa
`func (o *VmConfig) HasVdpa() bool`
HasVdpa returns a boolean if a field has been set.
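A fragment (not produced by the generator) showing the accessors above; it assumes a `vdpa` value built as in the VdpaConfig example and a `vmConfig` created with the generated `NewVmConfig`/`NewVmConfigWithDefaults` constructor, which is not shown in this diff:

```go
// Attach one or more vDPA devices to the VM configuration before boot.
vmConfig.SetVdpa([]openapiclient.VdpaConfig{*vdpa})

if devices, ok := vmConfig.GetVdpaOk(); ok {
	fmt.Println("vdpa devices configured:", len(*devices))
}
```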
### GetVsock
`func (o *VmConfig) GetVsock() VsockConfig`


@@ -0,0 +1,113 @@
/*
Cloud Hypervisor API
Local HTTP based API for managing and inspecting a cloud-hypervisor virtual machine.
API version: 0.3.0
*/
// Code generated by OpenAPI Generator (https://openapi-generator.tech); DO NOT EDIT.
package openapi
import (
"encoding/json"
)
// CpuFeatures struct for CpuFeatures
type CpuFeatures struct {
Amx *bool `json:"amx,omitempty"`
}
// NewCpuFeatures instantiates a new CpuFeatures object
// This constructor will assign default values to properties that have it defined,
// and makes sure properties required by API are set, but the set of arguments
// will change when the set of required properties is changed
func NewCpuFeatures() *CpuFeatures {
this := CpuFeatures{}
return &this
}
// NewCpuFeaturesWithDefaults instantiates a new CpuFeatures object
// This constructor will only assign default values to properties that have it defined,
// but it doesn't guarantee that properties required by API are set
func NewCpuFeaturesWithDefaults() *CpuFeatures {
this := CpuFeatures{}
return &this
}
// GetAmx returns the Amx field value if set, zero value otherwise.
func (o *CpuFeatures) GetAmx() bool {
if o == nil || o.Amx == nil {
var ret bool
return ret
}
return *o.Amx
}
// GetAmxOk returns a tuple with the Amx field value if set, nil otherwise
// and a boolean to check if the value has been set.
func (o *CpuFeatures) GetAmxOk() (*bool, bool) {
if o == nil || o.Amx == nil {
return nil, false
}
return o.Amx, true
}
// HasAmx returns a boolean if a field has been set.
func (o *CpuFeatures) HasAmx() bool {
if o != nil && o.Amx != nil {
return true
}
return false
}
// SetAmx gets a reference to the given bool and assigns it to the Amx field.
func (o *CpuFeatures) SetAmx(v bool) {
o.Amx = &v
}
func (o CpuFeatures) MarshalJSON() ([]byte, error) {
toSerialize := map[string]interface{}{}
if o.Amx != nil {
toSerialize["amx"] = o.Amx
}
return json.Marshal(toSerialize)
}
type NullableCpuFeatures struct {
value *CpuFeatures
isSet bool
}
func (v NullableCpuFeatures) Get() *CpuFeatures {
return v.value
}
func (v *NullableCpuFeatures) Set(val *CpuFeatures) {
v.value = val
v.isSet = true
}
func (v NullableCpuFeatures) IsSet() bool {
return v.isSet
}
func (v *NullableCpuFeatures) Unset() {
v.value = nil
v.isSet = false
}
func NewNullableCpuFeatures(val *CpuFeatures) *NullableCpuFeatures {
return &NullableCpuFeatures{value: val, isSet: true}
}
func (v NullableCpuFeatures) MarshalJSON() ([]byte, error) {
return json.Marshal(v.value)
}
func (v *NullableCpuFeatures) UnmarshalJSON(src []byte) error {
v.isSet = true
return json.Unmarshal(src, &v.value)
}


@@ -21,6 +21,7 @@ type CpusConfig struct {
Topology *CpuTopology `json:"topology,omitempty"`
MaxPhysBits *int32 `json:"max_phys_bits,omitempty"`
Affinity *[]CpuAffinity `json:"affinity,omitempty"`
Features *CpuFeatures `json:"features,omitempty"`
}
// NewCpusConfig instantiates a new CpusConfig object
@@ -190,6 +191,38 @@ func (o *CpusConfig) SetAffinity(v []CpuAffinity) {
o.Affinity = &v
}
// GetFeatures returns the Features field value if set, zero value otherwise.
func (o *CpusConfig) GetFeatures() CpuFeatures {
if o == nil || o.Features == nil {
var ret CpuFeatures
return ret
}
return *o.Features
}
// GetFeaturesOk returns a tuple with the Features field value if set, nil otherwise
// and a boolean to check if the value has been set.
func (o *CpusConfig) GetFeaturesOk() (*CpuFeatures, bool) {
if o == nil || o.Features == nil {
return nil, false
}
return o.Features, true
}
// HasFeatures returns a boolean if a field has been set.
func (o *CpusConfig) HasFeatures() bool {
if o != nil && o.Features != nil {
return true
}
return false
}
// SetFeatures gets a reference to the given CpuFeatures and assigns it to the Features field.
func (o *CpusConfig) SetFeatures(v CpuFeatures) {
o.Features = &v
}
func (o CpusConfig) MarshalJSON() ([]byte, error) {
toSerialize := map[string]interface{}{}
if true {
@@ -207,6 +240,9 @@ func (o CpusConfig) MarshalJSON() ([]byte, error) {
if o.Affinity != nil {
toSerialize["affinity"] = o.Affinity
}
if o.Features != nil {
toSerialize["features"] = o.Features
}
return json.Marshal(toSerialize)
}


@@ -0,0 +1,249 @@
/*
Cloud Hypervisor API
Local HTTP based API for managing and inspecting a cloud-hypervisor virtual machine.
API version: 0.3.0
*/
// Code generated by OpenAPI Generator (https://openapi-generator.tech); DO NOT EDIT.
package openapi
import (
"encoding/json"
)
// VdpaConfig struct for VdpaConfig
type VdpaConfig struct {
Path string `json:"path"`
NumQueues int32 `json:"num_queues"`
Iommu *bool `json:"iommu,omitempty"`
PciSegment *int32 `json:"pci_segment,omitempty"`
Id *string `json:"id,omitempty"`
}
// NewVdpaConfig instantiates a new VdpaConfig object
// This constructor will assign default values to properties that have it defined,
// and makes sure properties required by API are set, but the set of arguments
// will change when the set of required properties is changed
func NewVdpaConfig(path string, numQueues int32) *VdpaConfig {
this := VdpaConfig{}
this.Path = path
this.NumQueues = numQueues
var iommu bool = false
this.Iommu = &iommu
return &this
}
// NewVdpaConfigWithDefaults instantiates a new VdpaConfig object
// This constructor will only assign default values to properties that have it defined,
// but it doesn't guarantee that properties required by API are set
func NewVdpaConfigWithDefaults() *VdpaConfig {
this := VdpaConfig{}
var numQueues int32 = 1
this.NumQueues = numQueues
var iommu bool = false
this.Iommu = &iommu
return &this
}
// GetPath returns the Path field value
func (o *VdpaConfig) GetPath() string {
if o == nil {
var ret string
return ret
}
return o.Path
}
// GetPathOk returns a tuple with the Path field value
// and a boolean to check if the value has been set.
func (o *VdpaConfig) GetPathOk() (*string, bool) {
if o == nil {
return nil, false
}
return &o.Path, true
}
// SetPath sets field value
func (o *VdpaConfig) SetPath(v string) {
o.Path = v
}
// GetNumQueues returns the NumQueues field value
func (o *VdpaConfig) GetNumQueues() int32 {
if o == nil {
var ret int32
return ret
}
return o.NumQueues
}
// GetNumQueuesOk returns a tuple with the NumQueues field value
// and a boolean to check if the value has been set.
func (o *VdpaConfig) GetNumQueuesOk() (*int32, bool) {
if o == nil {
return nil, false
}
return &o.NumQueues, true
}
// SetNumQueues sets field value
func (o *VdpaConfig) SetNumQueues(v int32) {
o.NumQueues = v
}
// GetIommu returns the Iommu field value if set, zero value otherwise.
func (o *VdpaConfig) GetIommu() bool {
if o == nil || o.Iommu == nil {
var ret bool
return ret
}
return *o.Iommu
}
// GetIommuOk returns a tuple with the Iommu field value if set, nil otherwise
// and a boolean to check if the value has been set.
func (o *VdpaConfig) GetIommuOk() (*bool, bool) {
if o == nil || o.Iommu == nil {
return nil, false
}
return o.Iommu, true
}
// HasIommu returns a boolean if a field has been set.
func (o *VdpaConfig) HasIommu() bool {
if o != nil && o.Iommu != nil {
return true
}
return false
}
// SetIommu gets a reference to the given bool and assigns it to the Iommu field.
func (o *VdpaConfig) SetIommu(v bool) {
o.Iommu = &v
}
// GetPciSegment returns the PciSegment field value if set, zero value otherwise.
func (o *VdpaConfig) GetPciSegment() int32 {
if o == nil || o.PciSegment == nil {
var ret int32
return ret
}
return *o.PciSegment
}
// GetPciSegmentOk returns a tuple with the PciSegment field value if set, nil otherwise
// and a boolean to check if the value has been set.
func (o *VdpaConfig) GetPciSegmentOk() (*int32, bool) {
if o == nil || o.PciSegment == nil {
return nil, false
}
return o.PciSegment, true
}
// HasPciSegment returns a boolean if a field has been set.
func (o *VdpaConfig) HasPciSegment() bool {
if o != nil && o.PciSegment != nil {
return true
}
return false
}
// SetPciSegment gets a reference to the given int32 and assigns it to the PciSegment field.
func (o *VdpaConfig) SetPciSegment(v int32) {
o.PciSegment = &v
}
// GetId returns the Id field value if set, zero value otherwise.
func (o *VdpaConfig) GetId() string {
if o == nil || o.Id == nil {
var ret string
return ret
}
return *o.Id
}
// GetIdOk returns a tuple with the Id field value if set, nil otherwise
// and a boolean to check if the value has been set.
func (o *VdpaConfig) GetIdOk() (*string, bool) {
if o == nil || o.Id == nil {
return nil, false
}
return o.Id, true
}
// HasId returns a boolean if a field has been set.
func (o *VdpaConfig) HasId() bool {
if o != nil && o.Id != nil {
return true
}
return false
}
// SetId gets a reference to the given string and assigns it to the Id field.
func (o *VdpaConfig) SetId(v string) {
o.Id = &v
}
func (o VdpaConfig) MarshalJSON() ([]byte, error) {
toSerialize := map[string]interface{}{}
if true {
toSerialize["path"] = o.Path
}
if true {
toSerialize["num_queues"] = o.NumQueues
}
if o.Iommu != nil {
toSerialize["iommu"] = o.Iommu
}
if o.PciSegment != nil {
toSerialize["pci_segment"] = o.PciSegment
}
if o.Id != nil {
toSerialize["id"] = o.Id
}
return json.Marshal(toSerialize)
}
type NullableVdpaConfig struct {
value *VdpaConfig
isSet bool
}
func (v NullableVdpaConfig) Get() *VdpaConfig {
return v.value
}
func (v *NullableVdpaConfig) Set(val *VdpaConfig) {
v.value = val
v.isSet = true
}
func (v NullableVdpaConfig) IsSet() bool {
return v.isSet
}
func (v *NullableVdpaConfig) Unset() {
v.value = nil
v.isSet = false
}
func NewNullableVdpaConfig(val *VdpaConfig) *NullableVdpaConfig {
return &NullableVdpaConfig{value: val, isSet: true}
}
func (v NullableVdpaConfig) MarshalJSON() ([]byte, error) {
return json.Marshal(v.value)
}
func (v *NullableVdpaConfig) UnmarshalJSON(src []byte) error {
v.isSet = true
return json.Unmarshal(src, &v.value)
}
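
A short usage sketch for the new model (same illustrative import path as above): `NewVdpaConfig` takes the two required fields and pre-populates the defaulted ones, while the remaining optional fields stay out of the JSON until a caller sets them. The device path used here is a placeholder.

```go
package main

import (
	"encoding/json"
	"fmt"

	// Same illustrative import path as the earlier sketches.
	openapi "github.com/kata-containers/kata-containers/src/runtime/virtcontainers/pkg/cloud-hypervisor/client"
)

func main() {
	// Placeholder path; real vDPA devices appear on the host as
	// /dev/vhost-vdpa-* nodes.
	vdpa := openapi.NewVdpaConfig("/dev/vhost-vdpa-0", 1)

	// The constructor already defaulted Iommu to false; Id and PciSegment
	// remain unset until assigned explicitly.
	vdpa.SetId("vdpa0")

	out, _ := json.Marshal(vdpa)
	fmt.Println(string(out)) // path, num_queues, iommu and id; no pci_segment
}
```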

View File

@@ -30,6 +30,7 @@ type VmConfig struct {
Serial *ConsoleConfig `json:"serial,omitempty"`
Console *ConsoleConfig `json:"console,omitempty"`
Devices *[]DeviceConfig `json:"devices,omitempty"`
Vdpa *[]VdpaConfig `json:"vdpa,omitempty"`
Vsock *VsockConfig `json:"vsock,omitempty"`
SgxEpc *[]SgxEpcConfig `json:"sgx_epc,omitempty"`
Tdx *TdxConfig `json:"tdx,omitempty"`
@@ -516,6 +517,38 @@ func (o *VmConfig) SetDevices(v []DeviceConfig) {
o.Devices = &v
}
// GetVdpa returns the Vdpa field value if set, zero value otherwise.
func (o *VmConfig) GetVdpa() []VdpaConfig {
if o == nil || o.Vdpa == nil {
var ret []VdpaConfig
return ret
}
return *o.Vdpa
}
// GetVdpaOk returns a tuple with the Vdpa field value if set, nil otherwise
// and a boolean to check if the value has been set.
func (o *VmConfig) GetVdpaOk() (*[]VdpaConfig, bool) {
if o == nil || o.Vdpa == nil {
return nil, false
}
return o.Vdpa, true
}
// HasVdpa returns a boolean if a field has been set.
func (o *VmConfig) HasVdpa() bool {
if o != nil && o.Vdpa != nil {
return true
}
return false
}
// SetVdpa gets a reference to the given []VdpaConfig and assigns it to the Vdpa field.
func (o *VmConfig) SetVdpa(v []VdpaConfig) {
o.Vdpa = &v
}
// GetVsock returns the Vsock field value if set, zero value otherwise.
func (o *VmConfig) GetVsock() VsockConfig {
if o == nil || o.Vsock == nil {
@@ -784,6 +817,9 @@ func (o VmConfig) MarshalJSON() ([]byte, error) {
if o.Devices != nil {
toSerialize["devices"] = o.Devices
}
if o.Vdpa != nil {
toSerialize["vdpa"] = o.Vdpa
}
if o.Vsock != nil {
toSerialize["vsock"] = o.Vsock
}
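
And wiring it into the VM configuration (again with the illustrative import path): `SetVdpa` keeps a pointer to the supplied slice, and a `vdpa` entry only shows up in the marshaled config when that slice has been set.

```go
package main

import (
	"encoding/json"
	"fmt"

	// Same illustrative import path as the earlier sketches.
	openapi "github.com/kata-containers/kata-containers/src/runtime/virtcontainers/pkg/cloud-hypervisor/client"
)

func main() {
	// Only the new field is exercised here; a real VmConfig would also carry
	// the boot source, CPU, memory and other sections.
	vm := openapi.VmConfig{}

	vdpa := openapi.NewVdpaConfig("/dev/vhost-vdpa-0", 1)
	vm.SetVdpa([]openapi.VdpaConfig{*vdpa})

	if vm.HasVdpa() {
		fmt.Println("vdpa devices:", len(vm.GetVdpa()))
	}

	out, _ := json.Marshal(vm)
	fmt.Println(string(out)) // now contains a "vdpa":[...] entry
}
```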

View File

@@ -325,7 +325,28 @@ paths:
description: The new device was successfully (cold) added to the VM instance.
500:
description: The new device could not be added to the VM instance.
/vm.add-vdpa:
put:
summary: Add a new vDPA device to the VM
requestBody:
description: The details of the new vDPA device
content:
application/json:
schema:
$ref: '#/components/schemas/VdpaConfig'
required: true
responses:
200:
description: The new vDPA device was successfully added to the VM instance.
content:
application/json:
schema:
$ref: '#/components/schemas/PciDeviceInfo'
204:
description: The new vDPA device was successfully (cold) added to the VM instance.
500:
description: The new vDPA device could not be added to the VM instance.
/vm.snapshot:
put:
@@ -505,6 +526,10 @@ components:
type: array
items:
$ref: '#/components/schemas/DeviceConfig'
vdpa:
type: array
items:
$ref: '#/components/schemas/VdpaConfig'
vsock:
$ref: '#/components/schemas/VsockConfig'
sgx_epc:
@@ -537,6 +562,12 @@ components:
items:
type: integer
CpuFeatures:
type: object
properties:
amx:
type: boolean
CpuTopology:
type: object
properties:
@@ -571,6 +602,8 @@ components:
type: array
items:
$ref: '#/components/schemas/CpuAffinity'
features:
$ref: '#/components/schemas/CpuFeatures'
PlatformConfig:
type: object
@@ -921,6 +954,26 @@ components:
id:
type: string
VdpaConfig:
required:
- path
- num_queues
type: object
properties:
path:
type: string
num_queues:
type: integer
default: 1
iommu:
type: boolean
default: false
pci_segment:
type: integer
format: int16
id:
type: string
VsockConfig:
required:
- cid
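
For orientation, here is a sketch of a client hitting the new endpoint directly rather than through the generated bindings. The socket path and the `/api/v1` prefix are assumptions for illustration: Cloud Hypervisor serves this REST API over the UNIX domain socket passed via `--api-socket`.

```go
package main

import (
	"bytes"
	"context"
	"fmt"
	"net"
	"net/http"
)

func main() {
	// Assumed socket path; adjust to wherever --api-socket points.
	const apiSocket = "/tmp/cloud-hypervisor.sock"

	client := &http.Client{
		Transport: &http.Transport{
			// Route all requests to the UNIX socket instead of TCP.
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", apiSocket)
			},
		},
	}

	// Body mirrors the VdpaConfig schema: path and num_queues are required.
	body := []byte(`{"path": "/dev/vhost-vdpa-0", "num_queues": 1}`)

	req, err := http.NewRequest(http.MethodPut,
		"http://localhost/api/v1/vm.add-vdpa", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/json")

	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Per the spec above: 200 returns a PciDeviceInfo payload, 204 means the
	// device was cold-added with no body, 500 signals failure.
	fmt.Println("status:", resp.Status)
}
```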

View File

@@ -321,6 +321,7 @@ func WaitLocalProcess(pid int, timeoutSecs uint, initialSignal syscall.Signal, l
if initialSignal != syscall.Signal(0) {
if err = syscall.Kill(pid, initialSignal); err != nil {
if err == syscall.ESRCH {
logger.WithField("pid", pid).Warnf("kill encounters ESRCH, process already finished")
return nil
}
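
The hunk above makes `WaitLocalProcess` treat `ESRCH` from kill(2) as "the process is already gone" rather than as an error. A standalone sketch of the same pattern follows; the helper name is made up for illustration.

```go
package main

import (
	"fmt"
	"os"
	"syscall"
)

// signalIfRunning applies the same rule as the hunk above: kill(2) failing
// with ESRCH means the target has already exited, so the caller treats it
// as a no-op instead of an error.
func signalIfRunning(pid int, sig syscall.Signal) error {
	if err := syscall.Kill(pid, sig); err != nil {
		if err == syscall.ESRCH {
			fmt.Printf("pid %d already finished, nothing to signal\n", pid)
			return nil
		}
		return err
	}
	return nil
}

func main() {
	// Signal 0 performs the existence check without delivering a signal;
	// our own PID is used so the call is guaranteed to succeed.
	if err := signalIfRunning(os.Getpid(), syscall.Signal(0)); err != nil {
		fmt.Println("unexpected error:", err)
	}
}
```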

View File

@@ -15,7 +15,6 @@ import (
"syscall"
"testing"
"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/persist"
"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/persist/fs"
"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/utils"
"github.com/sirupsen/logrus"
@@ -58,8 +57,6 @@ var testHyperstartTtySocket = ""
// cleanUp Removes any stale sandbox/container state that can affect
// the next test to run.
func cleanUp() {
os.RemoveAll(fs.MockRunStoragePath())
os.RemoveAll(fs.MockRunVMStoragePath())
syscall.Unmount(GetSharePath(testSandboxID), syscall.MNT_DETACH|UmountNoFollow)
os.RemoveAll(testDir)
os.MkdirAll(testDir, DirMode)
@@ -108,8 +105,6 @@ func setupClh() {
func TestMain(m *testing.M) {
var err error
persist.EnableMockTesting()
flag.Parse()
logger := logrus.NewEntry(logrus.New())
@@ -126,6 +121,8 @@ func TestMain(m *testing.M) {
panic(err)
}
fs.EnableMockTesting(filepath.Join(testDir, "mockfs"))
fmt.Printf("INFO: Creating virtcontainers test directory %s\n", testDir)
err = os.MkdirAll(testDir, DirMode)
if err != nil {

View File

@@ -6,7 +6,6 @@
package virtcontainers
import (
"bufio"
"context"
"fmt"
"net"
@@ -136,24 +135,14 @@ func (v *virtiofsd) Start(ctx context.Context, onQuit onQuitFunc) (int, error) {
v.Logger().WithField("path", v.path).Info()
v.Logger().WithField("args", strings.Join(args, " ")).Info()
stderr, err := cmd.StderrPipe()
if err != nil {
return pid, err
}
if err = utils.StartCmd(cmd); err != nil {
return pid, err
}
// Monitor virtiofsd's stderr and stop sandbox if virtiofsd quits
go func() {
scanner := bufio.NewScanner(stderr)
for scanner.Scan() {
v.Logger().WithField("source", "virtiofsd").Info(scanner.Text())
}
v.Logger().Info("virtiofsd quits")
// Wait to release resources of virtiofsd process
cmd.Process.Wait()
v.Logger().Info("virtiofsd quits")
if onQuit != nil {
onQuit()
}

View File

@@ -400,7 +400,7 @@ fn memory_oci_to_ttrpc(
Swap: mem.swap.unwrap_or(0),
Kernel: mem.kernel.unwrap_or(0),
KernelTCP: mem.kernel_tcp.unwrap_or(0),
Swappiness: mem.swappiness.unwrap_or(0) as u64,
Swappiness: mem.swappiness.unwrap_or(0),
DisableOOMKiller: mem.disable_oom_killer.unwrap_or(false),
unknown_fields: protobuf::UnknownFields::new(),
cached_size: protobuf::CachedSize::default(),

View File

@@ -143,7 +143,7 @@ $ kubectl -n kube-system wait --timeout=10m --for=delete -l name=kata-deploy pod
After ensuring kata-deploy has been deleted, cleanup the cluster:
```sh
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-cleanup/base/kata-cleanup-stabe.yaml
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-cleanup/base/kata-cleanup-stable.yaml
```
The cleanup daemon-set will run a single time, cleaning up the node-label, which makes it difficult to check in an automated fashion.

View File

@@ -18,7 +18,7 @@ spec:
katacontainers.io/kata-runtime: cleanup
containers:
- name: kube-kata-cleanup
image: quay.io/kata-containers/kata-deploy:latest
image: quay.io/kata-containers/kata-deploy:2.4.1
imagePullPolicy: Always
command: [ "bash", "-c", "/opt/kata-artifacts/scripts/kata-deploy.sh reset" ]
env:

View File

@@ -16,7 +16,7 @@ spec:
serviceAccountName: kata-label-node
containers:
- name: kube-kata
image: quay.io/kata-containers/kata-deploy:latest
image: quay.io/kata-containers/kata-deploy:2.4.1
imagePullPolicy: Always
lifecycle:
preStop:

View File

@@ -0,0 +1 @@
build/

View File

@@ -15,8 +15,11 @@ endef
kata-tarball: | all-parallel merge-builds
all-parallel:
${MAKE} -f $(MK_PATH) all -j$$(( $$(nproc) - 1 )) NO_TTY="true" V=
$(MK_DIR)/dockerbuild/install_yq.sh:
$(MK_DIR)/kata-deploy-copy-yq-installer.sh
all-parallel: $(MK_DIR)/dockerbuild/install_yq.sh
${MAKE} -f $(MK_PATH) all -j$$(( $$(nproc) - 1 )) V=
all: cloud-hypervisor-tarball \
firecracker-tarball \
@@ -26,7 +29,7 @@ all: cloud-hypervisor-tarball \
rootfs-initrd-tarball \
shim-v2-tarball
%-tarball-build:
%-tarball-build: $(MK_DIR)/dockerbuild/install_yq.sh
$(call BUILD,$*)
cloud-hypervisor-tarball:

View File

@@ -16,25 +16,17 @@ kata_deploy_create="${script_dir}/kata-deploy-binaries.sh"
uid=$(id -u ${USER})
gid=$(id -g ${USER})
TTY_OPT="-i"
NO_TTY="${NO_TTY:-false}"
[ -t 1 ] && [ "${NO_TTY}" == "false" ] && TTY_OPT="-it"
if [ "${script_dir}" != "${PWD}" ]; then
ln -sf "${script_dir}/build" "${PWD}/build"
fi
install_yq_script_path="${script_dir}/../../../../ci/install_yq.sh"
cp "${install_yq_script_path}" "${script_dir}/dockerbuild/install_yq.sh"
docker build -q -t build-kata-deploy \
--build-arg IMG_USER="${USER}" \
--build-arg UID=${uid} \
--build-arg GID=${gid} \
"${script_dir}/dockerbuild/"
docker run ${TTY_OPT} \
docker run \
-v /var/run/docker.sock:/var/run/docker.sock \
--user ${uid}:${gid} \
--env USER=${USER} -v "${kata_dir}:${kata_dir}" \

View File

@@ -8,6 +8,7 @@
set -o errexit
set -o nounset
set -o pipefail
set -o errtrace
readonly project="kata-containers"
@@ -64,6 +65,7 @@ version: The kata version that will be use to create the tarball
options:
-h|--help : Show this help
-s : Silent mode (produce output in case of failure only)
--build=<asset> :
all
cloud-hypervisor
@@ -195,6 +197,18 @@ handle_build() {
tar tvf "${tarball_name}"
}
silent_mode_error_trap() {
local stdout="$1"
local stderr="$2"
local t="$3"
local log_file="$4"
exec 1>&${stdout}
exec 2>&${stderr}
error "Failed to build: $t, logs:"
cat "${log_file}"
exit 1
}
main() {
local build_targets
local silent
@@ -247,11 +261,15 @@ main() {
(
cd "${builddir}"
if [ "${silent}" == true ]; then
if ! handle_build "${t}" &>"$log_file"; then
error "Failed to build: $t, logs:"
cat "${log_file}"
exit 1
fi
local stdout
local stderr
# Save stdout and stderr, to be restored
# by silent_mode_error_trap() in case of
# build failure.
exec {stdout}>&1
exec {stderr}>&2
trap "silent_mode_error_trap $stdout $stderr $t \"$log_file\"" ERR
handle_build "${t}" &>"$log_file"
else
handle_build "${t}"
fi

View File

@@ -0,0 +1,16 @@
#!/usr/bin/env bash
#
# Copyright (c) 2018-2021 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#
set -o errexit
set -o nounset
set -o pipefail
set -o errtrace
script_dir=$(dirname "$(readlink -f "$0")")
install_yq_script_path="${script_dir}/../../../../ci/install_yq.sh"
cp "${install_yq_script_path}" "${script_dir}/dockerbuild/install_yq.sh"

View File

@@ -1 +1 @@
89
90

View File

@@ -0,0 +1,81 @@
From 29c4a3363bf287bb9a7b0342b1bc2dba3661c96c Mon Sep 17 00:00:00 2001
From: Fabiano Rosas <farosas@linux.ibm.com>
Date: Fri, 17 Dec 2021 17:57:18 +0100
Subject: [PATCH] Revert "target/ppc: Move SPR_DSISR setting to powerpc_excp"
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
This reverts commit 336e91f85332dda0ede4c1d15b87a19a0fb898a2.
It breaks the --disable-tcg build:
../target/ppc/excp_helper.c:463:29: error: implicit declaration of
function cpu_ldl_code [-Werror=implicit-function-declaration]
We should not have TCG code in powerpc_excp because some kvm-only
routines use it indirectly to dispatch interrupts. See
kvm_handle_debug, spapr_mce_req_event and
spapr_do_system_reset_on_cpu.
We can re-introduce the change once we have split the interrupt
injection code between KVM and TCG.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20211209173323.2166642-1-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
---
target/ppc/excp_helper.c | 21 ++++++++++++---------
1 file changed, 12 insertions(+), 9 deletions(-)
diff --git a/target/ppc/excp_helper.c b/target/ppc/excp_helper.c
index feb3fd42e2..6ba0840e99 100644
--- a/target/ppc/excp_helper.c
+++ b/target/ppc/excp_helper.c
@@ -464,15 +464,13 @@ static inline void powerpc_excp(PowerPCCPU *cpu, int excp_model, int excp)
break;
}
case POWERPC_EXCP_ALIGN: /* Alignment exception */
+ /* Get rS/rD and rA from faulting opcode */
/*
- * Get rS/rD and rA from faulting opcode.
- * Note: We will only invoke ALIGN for atomic operations,
- * so all instructions are X-form.
+ * Note: the opcode fields will not be set properly for a
+ * direct store load/store, but nobody cares as nobody
+ * actually uses direct store segments.
*/
- {
- uint32_t insn = cpu_ldl_code(env, env->nip);
- env->spr[SPR_DSISR] |= (insn & 0x03FF0000) >> 16;
- }
+ env->spr[SPR_DSISR] |= (env->error_code & 0x03FF0000) >> 16;
break;
case POWERPC_EXCP_PROGRAM: /* Program exception */
switch (env->error_code & ~0xF) {
@@ -1441,6 +1439,11 @@ void ppc_cpu_do_unaligned_access(CPUState *cs, vaddr vaddr,
int mmu_idx, uintptr_t retaddr)
{
CPUPPCState *env = cs->env_ptr;
+ uint32_t insn;
+
+ /* Restore state and reload the insn we executed, for filling in DSISR. */
+ cpu_restore_state(cs, retaddr, true);
+ insn = cpu_ldl_code(env, env->nip);
switch (env->mmu_model) {
case POWERPC_MMU_SOFT_4xx:
@@ -1456,8 +1459,8 @@ void ppc_cpu_do_unaligned_access(CPUState *cs, vaddr vaddr,
}
cs->exception_index = POWERPC_EXCP_ALIGN;
- env->error_code = 0;
- cpu_loop_exit_restore(cs, retaddr);
+ env->error_code = insn & 0x03FF0000;
+ cpu_loop_exit(cs);
}
#endif /* CONFIG_TCG */
#endif /* !CONFIG_USER_ONLY */
--
GitLab

View File

@@ -0,0 +1,53 @@
#!/usr/bin/env bash
#
# Copyright (c) 2022 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#
set -o errexit
set -o nounset
set -o pipefail
script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
script_name="$(basename "${BASH_SOURCE[0]}")"
# This is very much error prone in case we re-structure our
# repos again, but it's also used in a few other places :-/
repo_dir="${script_dir}/../../.."
function usage() {
cat <<EOF
Usage: ${script_name} tarball-name
This script creates a tarball with all the cargo vendored code
that a distro would need to do a full build of the project in
a disconnected environment, generating a "tarball-name" file.
EOF
}
create_vendor_tarball() {
vendor_dir_list=""
pushd ${repo_dir}
for i in $(find . -name 'Cargo.lock'); do
dir="$(dirname $i)"
pushd "${dir}"
[ -d .cargo ] || mkdir .cargo
cargo vendor >> .cargo/config
vendor_dir_list+=" $dir/vendor $dir/.cargo/config"
echo "${vendor_dir_list}"
popd
done
popd
tar -cvzf ${1} ${vendor_dir_list}
}
main () {
[ $# -ne 1 ] && usage && exit 0
create_vendor_tarball ${1}
}
main "$@"

View File

@@ -68,7 +68,6 @@ generate_kata_deploy_commit() {
kata-deploy files must be adapted to a new release. The cases where it
happens are when the release goes from -> to:
* main -> stable:
* kata-deploy / kata-cleanup: change from \"latest\" to \"rc0\"
* kata-deploy-stable / kata-cleanup-stable: are removed
* stable -> stable:
@@ -161,7 +160,7 @@ bump_repo() {
# +----------------+----------------+
# | from | to |
# -------------------+----------------+----------------+
# kata-deploy | "latest" | "rc0" |
# kata-deploy | "latest" | "latest" |
# -------------------+----------------+----------------+
# kata-deploy-stable | "stable" | REMOVED |
# -------------------+----------------+----------------+
@@ -183,29 +182,34 @@ bump_repo() {
info "Updating kata-deploy / kata-cleanup image tags"
local version_to_replace="${current_version}"
local replacement="${new_version}"
if [ "${target_branch}" == "main" ]; then
local need_commit=false
if [ "${target_branch}" == "main" ];then
if [[ "${new_version}" =~ "rc" ]]; then
## this is the case 2) where we remove te kata-deploy / kata-cleanup stable files
## We are bumping from alpha to RC, should drop kata-deploy-stable yamls.
git rm "${kata_deploy_stable_yaml}"
git rm "${kata_cleanup_stable_yaml}"
else
## this is the case 1) where we just do nothing
replacement="latest"
need_commit=true
fi
version_to_replace="latest"
fi
if [ "${version_to_replace}" != "${replacement}" ]; then
## this covers case 2) and 3), as on both of them we have changes on kata-deploy / kata-cleanup files
sed -i "s#${registry}:${version_to_replace}#${registry}:${new_version}#g" "${kata_deploy_yaml}"
sed -i "s#${registry}:${version_to_replace}#${registry}:${new_version}#g" "${kata_cleanup_yaml}"
elif [ "${new_version}" != *"rc"* ]; then
## We are on a stable branch and creating new stable releases.
## Need to change kata-deploy / kata-cleanup to use the stable tags.
if [[ "${version_to_replace}" =~ "rc" ]]; then
## Coming from "rcX" so from the latest tag.
version_to_replace="latest"
fi
sed -i "s#${registry}:${version_to_replace}#${registry}:${replacement}#g" "${kata_deploy_yaml}"
sed -i "s#${registry}:${version_to_replace}#${registry}:${replacement}#g" "${kata_cleanup_yaml}"
git diff
git add "${kata_deploy_yaml}"
git add "${kata_cleanup_yaml}"
need_commit=true
fi
if [ "${need_commit}" == "true" ]; then
info "Creating the commit with the kata-deploy changes"
local commit_msg="$(generate_kata_deploy_commit $new_version)"
git commit -s -m "${commit_msg}"

View File

@@ -250,7 +250,6 @@ generate_qemu_options() {
qemu_options+=(size:--disable-auth-pam)
# Disable unused filesystem support
[ "$arch" == x86_64 ] && qemu_options+=(size:--disable-fdt)
qemu_options+=(size:--disable-glusterfs)
qemu_options+=(size:--disable-libiscsi)
qemu_options+=(size:--disable-libnfs)
@@ -303,7 +302,6 @@ generate_qemu_options() {
;;
esac
qemu_options+=(size:--disable-qom-cast-debug)
qemu_options+=(size:--disable-tcmalloc)
# Disable libudev since it is only needed for qemu-pr-helper and USB,
# none of which are used with Kata

View File

@@ -41,14 +41,15 @@ pull_clh_released_binary() {
curl --fail -L ${cloud_hypervisor_binary} -o cloud-hypervisor-static || return 1
mkdir -p cloud-hypervisor
mv -f cloud-hypervisor-static cloud-hypervisor/cloud-hypervisor
chmod +x cloud_hypervisor/cloud-hypervisor
chmod +x cloud-hypervisor/cloud-hypervisor
}
build_clh_from_source() {
info "Build ${cloud_hypervisor_repo} version: ${cloud_hypervisor_version}"
repo_dir=$(basename "${cloud_hypervisor_repo}")
repo_dir="${repo_dir//.git}"
[ -d "${repo_dir}" ] || git clone "${cloud_hypervisor_repo}"
rm -rf "${repo_dir}"
git clone "${cloud_hypervisor_repo}"
pushd "${repo_dir}"
git fetch || true
git checkout "${cloud_hypervisor_version}"

View File

@@ -208,11 +208,14 @@ Description: Install $kata_project [1] (and optionally $containerd_project [2])
Options:
-c <version> : Specify containerd version.
-d : Enable debug for all components.
-f : Force installation (use with care).
-h : Show this help statement.
-k <version> : Specify Kata Containers version.
-o : Only install Kata Containers.
-r : Don't cleanup on failure (retain files).
-t : Disable self test (don't try to create a container after install).
-T : Only run self test (do not install anything).
Notes:
@@ -402,13 +405,21 @@ install_containerd()
sudo tar -C /usr/local -xvf "${file}"
sudo ln -sf /usr/local/bin/ctr "${link_dir}"
for file in \
/usr/local/bin/containerd \
/usr/local/bin/ctr
do
sudo ln -sf "$file" "${link_dir}"
done
info "$project installed\n"
}
configure_containerd()
{
local enable_debug="${1:-}"
[ -z "$enable_debug" ] && die "no enable debug value"
local project="$containerd_project"
info "Configuring $project"
@@ -460,26 +471,55 @@ configure_containerd()
info "Backed up $cfg to $original"
}
local modified="false"
# Add the Kata Containers configuration details:
local comment_text
comment_text=$(printf "%s: Added by %s\n" \
"$(date -Iseconds)" \
"$script_name")
sudo grep -q "$kata_runtime_type" "$cfg" || {
cat <<-EOT | sudo tee -a "$cfg"
[plugins]
[plugins."io.containerd.grpc.v1.cri"]
[plugins."io.containerd.grpc.v1.cri".containerd]
default_runtime_name = "${kata_runtime_name}"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.${kata_runtime_name}]
runtime_type = "${kata_runtime_type}"
EOT
# $comment_text
[plugins]
[plugins."io.containerd.grpc.v1.cri"]
[plugins."io.containerd.grpc.v1.cri".containerd]
default_runtime_name = "${kata_runtime_name}"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.${kata_runtime_name}]
runtime_type = "${kata_runtime_type}"
EOT
info "Modified $cfg"
modified="true"
}
if [ "$enable_debug" = "true" ]
then
local debug_enabled
debug_enabled=$(awk -v RS='' '/\[debug\]/' "$cfg" |\
grep -E "^\s*\<level\>\s*=\s*.*\<debug\>" || true)
[ -n "$debug_enabled" ] || {
cat <<-EOT | sudo tee -a "$cfg"
# $comment_text
[debug]
level = "debug"
EOT
}
modified="true"
fi
[ "$modified" = "true" ] && info "Modified $cfg"
sudo systemctl enable containerd
sudo systemctl start containerd
info "Configured $project\n"
local msg="disabled"
[ "$enable_debug" = "true" ] && msg="enabled"
info "Configured $project (debug $msg)\n"
}
install_kata()
@@ -540,11 +580,48 @@ install_kata()
info "$project installed\n"
}
configure_kata()
{
local enable_debug="${1:-}"
[ -z "$enable_debug" ] && die "no enable debug value"
[ "$enable_debug" = "false" ] && \
info "Using default $kata_project configuration" && \
return 0
local config_file='configuration.toml'
local kata_dir='/etc/kata-containers'
sudo mkdir -p "$kata_dir"
local cfg_from
local cfg_to
cfg_from="${kata_install_dir}/share/defaults/kata-containers/${config_file}"
cfg_to="${kata_dir}/${config_file}"
[ -e "$cfg_from" ] || die "cannot find $kata_project configuration file"
sudo install -o root -g root -m 0644 "$cfg_from" "$cfg_to"
sudo sed -i \
-e 's/^# *\(enable_debug\).*=.*$/\1 = true/g' \
-e 's/^kernel_params = "\(.*\)"/kernel_params = "\1 agent.log=debug initcall_debug"/g' \
"$cfg_to"
info "Configured $kata_project for full debug (delete $cfg_to to use pristine $kata_project configuration)"
}
handle_kata()
{
local version="${1:-}"
install_kata "$version"
local enable_debug="${2:-}"
[ -z "$enable_debug" ] && die "no enable debug value"
install_kata "$version" "$enable_debug"
configure_kata "$enable_debug"
kata-runtime --version
}
@@ -556,6 +633,9 @@ handle_containerd()
local force="${2:-}"
[ -z "$force" ] && die "need force value"
local enable_debug="${3:-}"
[ -z "$enable_debug" ] && die "no enable debug value"
local ret
if [ "$force" = "true" ]
@@ -572,7 +652,7 @@ handle_containerd()
fi
fi
configure_containerd
configure_containerd "$enable_debug"
containerd --version
}
@@ -617,20 +697,32 @@ handle_installation()
local only_kata="${3:-}"
[ -z "$only_kata" ] && die "no only Kata value"
local enable_debug="${4:-}"
[ -z "$enable_debug" ] && die "no enable debug value"
local disable_test="${5:-}"
[ -z "$disable_test" ] && die "no disable test value"
local only_run_test="${6:-}"
[ -z "$only_run_test" ] && die "no only run test value"
# These params can be blank
local kata_version="${4:-}"
local containerd_version="${5:-}"
local kata_version="${7:-}"
local containerd_version="${8:-}"
[ "$only_run_test" = "true" ] && test_installation && return 0
setup "$cleanup" "$force"
handle_kata "$kata_version"
handle_kata "$kata_version" "$enable_debug"
[ "$only_kata" = "false" ] && \
handle_containerd \
"$containerd_version" \
"$force"
"$force" \
"$enable_debug"
test_installation
[ "$disable_test" = "false" ] && test_installation
if [ "$only_kata" = "true" ]
then
@@ -647,21 +739,27 @@ handle_args()
local cleanup="true"
local force="false"
local only_kata="false"
local disable_test="false"
local only_run_test="false"
local enable_debug="false"
local opt
local kata_version=""
local containerd_version=""
while getopts "c:fhk:or" opt "$@"
while getopts "c:dfhk:ortT" opt "$@"
do
case "$opt" in
c) containerd_version="$OPTARG" ;;
d) enable_debug="true" ;;
f) force="true" ;;
h) usage; exit 0 ;;
k) kata_version="$OPTARG" ;;
o) only_kata="true" ;;
r) cleanup="false" ;;
t) disable_test="true" ;;
T) only_run_test="true" ;;
esac
done
@@ -674,6 +772,9 @@ handle_args()
"$cleanup" \
"$force" \
"$only_kata" \
"$enable_debug" \
"$disable_test" \
"$only_run_test" \
"$kata_version" \
"$containerd_version"
}

View File

@@ -75,7 +75,7 @@ assets:
url: "https://github.com/cloud-hypervisor/cloud-hypervisor"
uscan-url: >-
https://github.com/cloud-hypervisor/cloud-hypervisor/tags.*/v?(\d\S+)\.tar\.gz
version: "v22.0"
version: "v23.0"
firecracker:
description: "Firecracker micro-VMM"
@@ -83,13 +83,13 @@ assets:
uscan-url: >-
https://github.com/firecracker-microvm/firecracker/tags
.*/v?(\d\S+)\.tar\.gz
version: "v0.23.1"
version: "v0.23.4"
qemu:
description: "VMM that uses KVM"
url: "https://github.com/qemu/qemu"
version: "v6.1.0"
tag: "v6.1.0"
version: "v6.2.0"
tag: "v6.2.0"
# Do not include any non-full release versions
# Break the line *without CR or space being appended*, to appease
# yamllint, and note the deliberate ' ' at the end of the expression.
@@ -153,7 +153,7 @@ assets:
kernel:
description: "Linux kernel optimised for virtual machines"
url: "https://cdn.kernel.org/pub/linux/kernel/v5.x/"
version: "v5.15.23"
version: "v5.15.26"
tdx:
description: "Linux kernel that supports TDX"
url: "https://github.com/intel/tdx/archive/refs/tags"