Compare commits

..

38 Commits

Author SHA1 Message Date
Steve Horsman
a655605e8f Merge pull request #12566 from manuelh-dev/mahuber/fail-exp-timeout
tests: Extend fail timeout for failure test
2026-02-25 16:11:53 +00:00
Zvonko Kaiser
7294719e1c Merge pull request #12559 from fidencio/topic/kata-deploy-fix-custom-runtime-no-snapshotter
kata-deploy: a few guard-rails to avoid failures if components are not set in the values.yaml file
2026-02-25 08:03:28 -05:00
Steve Horsman
7ffb7719b5 Merge pull request #12562 from kata-containers/prep-for-go-1.25-switch
Prep for go 1.25 switch
2026-02-25 11:13:30 +00:00
dependabot[bot]
7cc2e9710b build(deps): bump github.com/BurntSushi/toml in /src/runtime
Bumps [github.com/BurntSushi/toml](https://github.com/BurntSushi/toml) from 1.3.2 to 1.5.0.
- [Release notes](https://github.com/BurntSushi/toml/releases)
- [Commits](https://github.com/BurntSushi/toml/compare/v1.3.2...v1.5.0)

---
updated-dependencies:
- dependency-name: github.com/BurntSushi/toml
  dependency-version: 1.5.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-25 10:00:36 +01:00
Hyounggyu Choi
78d19a4402 Merge pull request #12569 from BbolroC/fix-assertion-guest-pull-runtime-rs
tests: Improve assertion handling for runtime-rs hypervisor
2026-02-24 16:34:40 +01:00
stevenhorsman
ef1b0b2913 runtime: Fix mismatch in receiver names
Fix: `ST1016: methods on the same type should have the same receiver name`

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2026-02-24 14:33:04 +00:00
stevenhorsman
1b2ca678e5 runtime: Fix identifier names
Fix identifiers that are non-compliant with Go's conventions,
e.g. not capitalising initialisms

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2026-02-24 14:33:04 +00:00
stevenhorsman
69fea195f9 runtime: Fix arm unit test
I think that c727332b0e
broke the arm unit test by removing the arm specific overrides,
so update the expected output

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2026-02-24 14:33:04 +00:00
stevenhorsman
b187983f84 workflows: build_checks: skip the go install if installed
Some of our static checks are hitting issues with duplicate
go versions installed. Given that in go.mod we set the
version to match our required toolchain, if go is already installed
we can let go handle the toolchain version management instead
of installing a second version

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2026-02-24 14:33:04 +00:00
stevenhorsman
8f7a2b3d5d runtime: Add copyright & licenses
Add missing headers

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2026-02-24 14:33:04 +00:00
stevenhorsman
9b307a5fa6 metrics: Uncapitalise error strings
Fix `ST1005: error strings should not be capitalized (staticcheck)`
This is to comply with go conventions: errors are normally appended
to other messages, so there would be a spurious capitalisation in the
middle of the combined message

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2026-02-24 14:33:04 +00:00
stevenhorsman
6eb67327d0 tests: Use ReplaceAll over Replace
strings.ReplaceAll was introduced in Go 1.12 as a more readable and self-documenting way to say "replace everything".

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2026-02-24 14:33:04 +00:00
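The two spellings are equivalent; a minimal sketch (the helper name is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// dashesToSlashes shows the rewrite: a count of -1 means "replace all",
// so strings.ReplaceAll states the intent directly.
func dashesToSlashes(s string) string {
	return strings.ReplaceAll(s, "-", "/") // was: strings.Replace(s, "-", "/", -1)
}

func main() {
	fmt.Println(dashesToSlashes("a-b-c")) // a/b/c
}
```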
stevenhorsman
8fc6280f5e log-parser: Use time.IsZero() to check
Using time.IsZero() to check for uninitialised times is clearer

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2026-02-24 14:33:04 +00:00
stevenhorsman
c1117bc831 log-parser: Use ReplaceAll over Replace
strings.ReplaceAll was introduced in Go 1.12 as a more readable and self-documenting way to say "replace everything".

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2026-02-24 14:33:04 +00:00
stevenhorsman
8311dffce3 log-parser: Apply De Morgan's law
QF1001: distributing the negation across the terms and flipping the
operators makes it easier to read each term on its own, rather than
evaluating a whole block and then negating it, and can allow for an
earlier exit

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2026-02-24 14:33:04 +00:00
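A minimal sketch of the transformation (the function is invented for illustration):

```go
package main

import "fmt"

// notBoth shows De Morgan's law: both forms are logically identical,
// but the second reads term by term and short-circuits as soon as
// the first negated term is true.
func notBoth(a, b bool) bool {
	return !a || !b // was: !(a && b)
}

func main() {
	fmt.Println(notBoth(true, true), notBoth(true, false)) // false true
}
```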
stevenhorsman
f24765562d csi-kata-directvolume: Fix error messages
Error messages get appended and prepended, so it's against convention
to end them with punctuation

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2026-02-24 14:33:04 +00:00
stevenhorsman
f84b462b95 runtime: Fix typo in comment
Fix `requiered` is a misspelling of `required` (misspell)

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2026-02-24 14:33:04 +00:00
stevenhorsman
15813564f7 runtime: Avoid using fmt.Sprintf("%s", x)
It's more efficient and concise to just call .String()

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2026-02-24 14:33:04 +00:00
stevenhorsman
a577685a8a runtime: Apply De Morgan's law
QF1001: Distributing negation across terms and flipping operators, makes it
easy for humans to process expressions at a time, vs evaluating a whole block
and then flipping it and can allow for earlier exit

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2026-02-24 14:33:04 +00:00
stevenhorsman
e86338c9c0 runtime: Remove explicit types in variable declarations
QF1011 - use the short declaration as the type can be inferred

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2026-02-24 14:33:04 +00:00
stevenhorsman
f60ee411f0 runtime: Update poorly chosen Duration names
ST1011 - time.Duration values with variable names suffixed MS/Secs
are misleading

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2026-02-24 14:33:04 +00:00
stevenhorsman
6562ec5b61 runtime: Merge conditional assignment
Fix `QF1007: could merge conditional assignment into variable declaration`

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2026-02-24 14:33:04 +00:00
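The shape of a QF1007 fix, as a minimal invented example:

```go
package main

import "fmt"

// isVerbose shows QF1007: the pattern
//   verbose := false
//   if level > 3 { verbose = true }
// collapses into the declaration itself.
func isVerbose(level int) bool {
	verbose := level > 3
	return verbose
}

func main() {
	fmt.Println(isVerbose(5), isVerbose(1)) // true false
}
```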
stevenhorsman
a0ccb63f47 runtime: Use ReplaceAll over Replace
strings.ReplaceAll was introduced in Go 1.12 as a more readable and self-documenting way to say "replace everything".

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2026-02-24 14:33:04 +00:00
stevenhorsman
a78d212dfc kata-monitor: Switch to switch statements
Resolve: `QF1003: could use tagged switch`

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2026-02-24 14:33:04 +00:00
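The shape of a QF1003 rewrite, sketched on an invented function (the name and cases are illustrative, not the actual kata-monitor code):

```go
package main

import "fmt"

// releaseKind replaces an if / else-if chain that repeatedly compares
// the same value with a tagged switch on that value.
func releaseKind(major uint64) string {
	switch major {
	case 0:
		return "invalid"
	case 1:
		return "legacy"
	default:
		return "current"
	}
}

func main() {
	fmt.Println(releaseKind(0), releaseKind(1), releaseKind(3)) // invalid legacy current
}
```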
stevenhorsman
6f438bfb19 runtime: Improve receiver name
Update from `this` to fix:
```
ST1006: receiver name should be a reflection of its identity; don't use generic names such as "this" or "self" (staticcheck)
```

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2026-02-24 14:33:04 +00:00
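A minimal sketch of the ST1006 fix, modelled on the SocketAddress change in this series (the struct body is simplified):

```go
package main

import (
	"encoding/json"
	"fmt"
)

type SocketAddress struct {
	Type string `json:"type"`
}

// ST1006: the receiver takes a short name derived from the type,
// not a generic "this" (was: func (this *SocketAddress) String() string).
func (s *SocketAddress) String() string {
	b, err := json.Marshal(*s)
	if err != nil {
		return ""
	}
	return string(b)
}

func main() {
	fmt.Println(&SocketAddress{Type: "unix"}) // {"type":"unix"}
}
```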
stevenhorsman
f1960103d1 runtime: Improve split statement
strings.SplitN(s, sep, -1) is functionally identical to strings.Split(s, sep)
as -1 says to return all substrings, so choose the more concise version

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2026-02-24 14:33:04 +00:00
stevenhorsman
8cd3aa8c84 runtime: Remove embedded field from selector
GenericDevice is an embedded (anonymous) field in the device struct, so its fields
and methods are "promoted" to the outer struct and can be accessed directly.

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2026-02-24 14:33:04 +00:00
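Field promotion in brief (a simplified sketch of the real structs, with an invented example value):

```go
package main

import "fmt"

type GenericDevice struct {
	HostPath string
}

// VFIODevice embeds GenericDevice anonymously, so GenericDevice's
// fields and methods are promoted to VFIODevice.
type VFIODevice struct {
	GenericDevice
}

func main() {
	d := VFIODevice{GenericDevice{HostPath: "/dev/vfio/12"}}
	// d.GenericDevice.HostPath and d.HostPath name the same field;
	// the promoted form is shorter.
	fmt.Println(d.GenericDevice.HostPath == d.HostPath, d.HostPath) // true /dev/vfio/12
}
```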
stevenhorsman
4351a61f67 runtime: Fix error string formatting
Resolve `ST1005: error strings should not end with punctuation or newlines (staticcheck)`

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2026-02-24 14:33:04 +00:00
stevenhorsman
312567a137 runtime: Fix double imports
Remove one of the double imports to tidy up the code

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2026-02-24 14:33:04 +00:00
stevenhorsman
93c77a7d4e runtime: Improve print statement
fix `QF1012: Use fmt.Fprintf(...) instead of Write([]byte(fmt.Sprintf(...))) (staticcheck)`

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2026-02-24 14:33:04 +00:00
stevenhorsman
cff8994336 runtime: Switch to switch statements
Resolve: `QF1003: could use tagged switch on major (staticcheck)`
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2026-02-24 14:22:10 +00:00
stevenhorsman
487f530d89 ci: Update golangci configuration
Add a setting to skip the
`ST1005: error strings should not be capitalized (staticcheck)`
rule to avoid impacting our error strings

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2026-02-24 14:22:09 +00:00
Hyounggyu Choi
3d71be3dd3 tests: Improve assertion handling for runtime-rs hypervisor
Since runtime-rs added support for virtio-blk-ccw on s390x in #12531,
the assertion in k8s-guest-pull-image.bats should be generalized
to apply to all hypervisors ending with `-runtime-rs`.

Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
2026-02-24 11:48:07 +01:00
stevenhorsman
5ca4c34a34 kata-monitor: Fix golangci-lint warning
QF1012: Use fmt.Fprintf(...) instead of Write([]byte(fmt.Sprintf(...))) (staticcheck)
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2026-02-24 10:02:48 +00:00
stevenhorsman
2ac89f4569 versions: Update golangci-lint
Bump to the latest version to pick up support for Go 1.25

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2026-02-24 10:02:48 +00:00
Manuel Huber
e15c18f05c tests: Extend fail timeout for failure test
Extend the timeout for the assert_pod_fail function call for the
test case "Test we cannot pull a large image that pull time exceeds
createcontainer timeout inside the guest" when the experimental
force guest-pull method is being used. In this method, the image is
first pulled on the host before the pod sandbox is created. Since
image pull times can suddenly spike, we could otherwise time out in
the assert_pod_fail function before the image has even been pulled
on the host.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2026-02-23 12:56:23 -08:00
Fabiano Fidêncio
b082cf1708 kata-deploy: validate defaultShim is enabled before propagating it
getDefaultShimForArch previously returned whatever string was set in
defaultShim.<arch> without any validation. A typo, a non-existent shim,
or a shim that is disabled via disableAll would all silently produce a
bogus DEFAULT_SHIM_* env var, causing kata-deploy to fail at runtime.

Guard the return value by checking whether the configured shim is
present in the list of shims that are both enabled and support the
requested architecture. If not, return empty string so the env var is
simply omitted.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2026-02-21 14:01:11 +01:00
Fabiano Fidêncio
4ff7f67278 kata-deploy: fix nil pointer when custom runtime omits containerd/crio
Using `$runtime.containerd.snapshotter` and `$runtime.crio.pullType`
panics with a nil pointer error when the containerd or crio block is
absent from the custom runtime definition.

Let's use the `dig` function which safely traverses nested keys and
returns an empty string as the default when any key in the path is
missing.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2026-02-21 13:59:41 +01:00
82 changed files with 559 additions and 714 deletions

View File

@@ -97,8 +97,10 @@ jobs:
- name: Install golang
if: contains(matrix.component.needs, 'golang')
run: |
./tests/install_go.sh -f -p
echo "/usr/local/go/bin" >> "$GITHUB_PATH"
if ! command -v go; then
./tests/install_go.sh -f -p
echo "/usr/local/go/bin" >> "$GITHUB_PATH"
fi
- name: Setup rust
if: contains(matrix.component.needs, 'rust')
run: |

View File

@@ -258,98 +258,6 @@ jobs:
timeout-minutes: 5
run: bash tests/integration/kubernetes/gha-run.sh delete-csi-driver
run-k8s-tests-coco-nontee-crio:
name: run-k8s-tests-coco-nontee-crio
strategy:
fail-fast: false
matrix:
vmm:
- qemu-coco-dev
runs-on: fidencio-crio
permissions:
contents: read
environment: ci
env:
DOCKER_REGISTRY: ${{ inputs.registry }}
DOCKER_REPO: ${{ inputs.repo }}
DOCKER_TAG: ${{ inputs.tag }}
GH_PR_NUMBER: ${{ inputs.pr-number }}
KATA_HYPERVISOR: ${{ matrix.vmm }}
KBS: "true"
KBS_INGRESS: "nodeport"
KUBERNETES: "vanilla"
PULL_TYPE: "guest-pull"
AUTHENTICATED_IMAGE_USER: ${{ vars.AUTHENTICATED_IMAGE_USER }}
AUTHENTICATED_IMAGE_PASSWORD: ${{ secrets.AUTHENTICATED_IMAGE_PASSWORD }}
SNAPSHOTTER: ""
EXPERIMENTAL_FORCE_GUEST_PULL: ""
AUTO_GENERATE_POLICY: "yes"
K8S_TEST_HOST_TYPE: "all"
CONTAINER_ENGINE: "crio"
CONTAINER_RUNTIME: "crio"
CONTAINER_ENGINE_VERSION: "active"
GH_TOKEN: ${{ github.token }}
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: get-kata-tools-tarball
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: kata-tools-static-tarball-amd64${{ inputs.tarball-suffix }}
path: kata-tools-artifacts
- name: Install kata-tools
run: bash tests/integration/kubernetes/gha-run.sh install-kata-tools kata-tools-artifacts
- name: Deploy Kata
timeout-minutes: 20
run: bash tests/integration/kubernetes/gha-run.sh deploy-kata
- name: Deploy CoCo KBS
timeout-minutes: 10
run: bash tests/integration/kubernetes/gha-run.sh deploy-coco-kbs
- name: Install `kbs-client`
timeout-minutes: 10
run: bash tests/integration/kubernetes/gha-run.sh install-kbs-client
- name: Deploy CSI driver
timeout-minutes: 5
run: bash tests/integration/kubernetes/gha-run.sh deploy-csi-driver
- name: Run tests
timeout-minutes: 80
run: bash tests/integration/kubernetes/gha-run.sh run-tests
- name: Report tests
if: always()
run: bash tests/integration/kubernetes/gha-run.sh report-tests
- name: Delete kata-deploy
if: always()
timeout-minutes: 15
run: bash tests/integration/kubernetes/gha-run.sh cleanup
- name: Delete CoCo KBS
if: always()
timeout-minutes: 10
run: bash tests/integration/kubernetes/gha-run.sh delete-coco-kbs
- name: Delete CSI driver
if: always()
timeout-minutes: 5
run: bash tests/integration/kubernetes/gha-run.sh delete-csi-driver
# Generate jobs for testing CoCo on non-TEE environments with erofs-snapshotter
run-k8s-tests-coco-nontee-with-erofs-snapshotter:
name: run-k8s-tests-coco-nontee-with-erofs-snapshotter

View File

@@ -196,7 +196,7 @@ func indexPageText(w http.ResponseWriter, r *http.Request) {
formatter := fmt.Sprintf("%%-%ds: %%s\n", spacing)
for _, endpoint := range endpoints {
w.Write([]byte(fmt.Sprintf(formatter, endpoint.path, endpoint.desc)))
fmt.Fprintf(w, formatter, endpoint.path, endpoint.desc)
}
}

View File

@@ -63,7 +63,7 @@ func setCPUtype(hypervisorType vc.HypervisorType) error {
cpuType = getCPUtype()
if cpuType == cpuTypeUnknown {
return fmt.Errorf("Unknow CPU Type")
return fmt.Errorf("Unknown CPU Type")
} else if cpuType == cpuTypeIntel {
var kvmIntelParams map[string]string
onVMM, err := vc.RunningOnVMM(procCPUInfo)

View File

@@ -55,18 +55,17 @@ func TestCCCheckCLIFunction(t *testing.T) {
var moduleData []testModuleData
cpuType = getCPUtype()
if cpuType == cpuTypeIntel {
moduleData = []testModuleData{}
switch cpuType {
case cpuTypeIntel:
cpuData = []testCPUData{
{archGenuineIntel, "lm vmx sse4_1", false},
}
moduleData = []testModuleData{}
} else if cpuType == cpuTypeAMD {
case cpuTypeAMD:
cpuData = []testCPUData{
{archAuthenticAMD, "lm svm sse4_1", false},
}
moduleData = []testModuleData{}
}
genericCheckCLIFunction(t, cpuData, moduleData)
@@ -276,7 +275,8 @@ func TestCheckHostIsVMContainerCapable(t *testing.T) {
var moduleData []testModuleData
cpuType = getCPUtype()
if cpuType == cpuTypeIntel {
switch cpuType {
case cpuTypeIntel:
cpuData = []testCPUData{
{"", "", true},
{"Intel", "", true},
@@ -292,7 +292,7 @@ func TestCheckHostIsVMContainerCapable(t *testing.T) {
{filepath.Join(sysModuleDir, "kvm_intel/parameters/nested"), "Y", false},
{filepath.Join(sysModuleDir, "kvm_intel/parameters/unrestricted_guest"), "Y", false},
}
} else if cpuType == cpuTypeAMD {
case cpuTypeAMD:
cpuData = []testCPUData{
{"", "", true},
{"AMD", "", true},
@@ -340,7 +340,7 @@ func TestCheckHostIsVMContainerCapable(t *testing.T) {
// Write the following into the denylist file
// blacklist <mod>
// install <mod> /bin/false
_, err = denylistFile.WriteString(fmt.Sprintf("blacklist %s\ninstall %s /bin/false\n", mod, mod))
_, err = fmt.Fprintf(denylistFile, "blacklist %s\ninstall %s /bin/false\n", mod, mod)
assert.Nil(err)
}
denylistFile.Close()
@@ -505,9 +505,10 @@ func TestSetCPUtype(t *testing.T) {
assert.NotEmpty(archRequiredKernelModules)
cpuType = getCPUtype()
if cpuType == cpuTypeIntel {
switch cpuType {
case cpuTypeIntel:
assert.Equal(archRequiredCPUFlags["vmx"], "Virtualization support")
} else if cpuType == cpuTypeAMD {
case cpuTypeAMD:
assert.Equal(archRequiredCPUFlags["svm"], "Virtualization support")
}

View File

@@ -17,7 +17,6 @@ import (
"testing"
"github.com/kata-containers/kata-containers/src/runtime/pkg/katatestutils"
ktu "github.com/kata-containers/kata-containers/src/runtime/pkg/katatestutils"
"github.com/kata-containers/kata-containers/src/runtime/pkg/katautils"
vc "github.com/kata-containers/kata-containers/src/runtime/virtcontainers"
"github.com/sirupsen/logrus"
@@ -509,7 +508,7 @@ func TestCheckCheckCPUAttribs(t *testing.T) {
}
func TestCheckHaveKernelModule(t *testing.T) {
if tc.NotValid(ktu.NeedRoot()) {
if tc.NotValid(katatestutils.NeedRoot()) {
t.Skip(testDisabledAsNonRoot)
}
@@ -638,8 +637,8 @@ func TestCheckCheckKernelModules(t *testing.T) {
func TestCheckCheckKernelModulesUnreadableFile(t *testing.T) {
assert := assert.New(t)
if tc.NotValid(ktu.NeedNonRoot()) {
t.Skip(ktu.TestDisabledNeedNonRoot)
if tc.NotValid(katatestutils.NeedNonRoot()) {
t.Skip(katatestutils.TestDisabledNeedNonRoot)
}
dir := t.TempDir()

View File

@@ -56,9 +56,10 @@ func TestEnvGetEnvInfoSetsCPUType(t *testing.T) {
assert.NotEmpty(archRequiredKernelModules)
cpuType = getCPUtype()
if cpuType == cpuTypeIntel {
switch cpuType {
case cpuTypeIntel:
assert.Equal(archRequiredCPUFlags["vmx"], "Virtualization support")
} else if cpuType == cpuTypeAMD {
case cpuTypeAMD:
assert.Equal(archRequiredCPUFlags["svm"], "Virtualization support")
}

View File

@@ -14,7 +14,6 @@ import (
"path"
"path/filepath"
"runtime"
goruntime "runtime"
"strings"
"testing"
@@ -184,7 +183,7 @@ func genericGetExpectedHostDetails(tmpdir string, expectedVendor string, expecte
}
const expectedKernelVersion = "99.1"
const expectedArch = goruntime.GOARCH
const expectedArch = runtime.GOARCH
expectedDistro := DistroInfo{
Name: "Foo",
@@ -254,7 +253,7 @@ VERSION_ID="%s"
}
}
if goruntime.GOARCH == "arm64" {
if runtime.GOARCH == "arm64" {
expectedHostDetails.CPU.Vendor = "ARM Limited"
expectedHostDetails.CPU.Model = "v8"
}

View File

@@ -55,9 +55,9 @@ var getIPTablesCommand = cli.Command{
return err
}
url := containerdshim.IPTablesUrl
url := containerdshim.IPTablesURL
if isIPv6 {
url = containerdshim.IP6TablesUrl
url = containerdshim.IP6TablesURL
}
body, err := shimclient.DoGet(sandboxID, defaultTimeout, url)
if err != nil {
@@ -108,9 +108,9 @@ var setIPTablesCommand = cli.Command{
return err
}
url := containerdshim.IPTablesUrl
url := containerdshim.IPTablesURL
if isIPv6 {
url = containerdshim.IP6TablesUrl
url = containerdshim.IP6TablesURL
}
if err = shimclient.DoPut(sandboxID, defaultTimeout, url, "application/octet-stream", buf); err != nil {

View File

@@ -62,7 +62,7 @@ var setPolicyCommand = cli.Command{
return err
}
url := containerdshim.PolicyUrl
url := containerdshim.PolicyURL
if err = shimclient.DoPut(sandboxID, defaultTimeout, url, "application/octet-stream", buf); err != nil {
return fmt.Errorf("Error observed when making policy-set request(%s): %s", policyFile, err)

View File

@@ -126,7 +126,7 @@ var resizeCommand = cli.Command{
// Stats retrieves the filesystem stats of the direct volume inside the guest.
func Stats(volumePath string) ([]byte, error) {
sandboxId, err := volume.GetSandboxIdForVolume(volumePath)
sandboxID, err := volume.GetSandboxIDForVolume(volumePath)
if err != nil {
return nil, err
}
@@ -136,8 +136,8 @@ func Stats(volumePath string) ([]byte, error) {
}
urlSafeDevicePath := url.PathEscape(volumeMountInfo.Device)
body, err := shimclient.DoGet(sandboxId, defaultTimeout,
fmt.Sprintf("%s?%s=%s", containerdshim.DirectVolumeStatUrl, containerdshim.DirectVolumePathKey, urlSafeDevicePath))
body, err := shimclient.DoGet(sandboxID, defaultTimeout,
fmt.Sprintf("%s?%s=%s", containerdshim.DirectVolumeStatURL, containerdshim.DirectVolumePathKey, urlSafeDevicePath))
if err != nil {
return nil, err
}
@@ -146,7 +146,7 @@ func Stats(volumePath string) ([]byte, error) {
// Resize resizes a direct volume inside the guest.
func Resize(volumePath string, size uint64) error {
sandboxId, err := volume.GetSandboxIdForVolume(volumePath)
sandboxID, err := volume.GetSandboxIDForVolume(volumePath)
if err != nil {
return err
}
@@ -163,5 +163,5 @@ func Resize(volumePath string, size uint64) error {
if err != nil {
return err
}
return shimclient.DoPost(sandboxId, defaultTimeout, containerdshim.DirectVolumeResizeUrl, "application/json", encoded)
return shimclient.DoPost(sandboxID, defaultTimeout, containerdshim.DirectVolumeResizeURL, "application/json", encoded)
}

View File

@@ -94,11 +94,12 @@ func releaseURLIsValid(url string) error {
func getReleaseURL(currentVersion semver.Version) (url string, err error) {
major := currentVersion.Major
if major == 0 {
switch major {
case 0:
return "", fmt.Errorf("invalid current version: %v", currentVersion)
} else if major == 1 {
case 1:
url = kataLegacyReleaseURL
} else {
default:
url = kataReleaseURL
}

View File

@@ -8,7 +8,7 @@ go 1.24.13
require (
code.cloudfoundry.org/bytefmt v0.0.0-20211005130812-5bb3c17173e5
github.com/BurntSushi/toml v1.5.0
github.com/BurntSushi/toml v1.6.0
github.com/blang/semver v3.5.1+incompatible
github.com/blang/semver/v4 v4.0.0
github.com/container-orchestrated-devices/container-device-interface v0.6.0

View File

@@ -8,8 +8,9 @@ github.com/AdaLogics/go-fuzz-headers v0.0.0-20230811130428-ced1acdcaa24/go.mod h
github.com/AdamKorcz/go-118-fuzz-build v0.0.0-20230306123547-8075edf89bb0 h1:59MxjQVfjXsBpLy+dbd2/ELV5ofnUkUZBvWSC85sheA=
github.com/AdamKorcz/go-118-fuzz-build v0.0.0-20230306123547-8075edf89bb0/go.mod h1:OahwfttHWG6eJ0clwcfBAHoDI6X/LV/15hx/wlMZSrU=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/toml v1.5.0 h1:W5quZX/G/csjUnuI8SUYlsHs9M38FC7znL0lIO+DvMg=
github.com/BurntSushi/toml v1.5.0/go.mod h1:ukJfTF/6rtPPRCnwkur4qwRxa8vTRFBF0uk2lLoLwho=
github.com/BurntSushi/toml v1.6.0 h1:dRaEfpa2VI55EwlIW72hMRHdWouJeRF7TPYhI+AUQjk=
github.com/BurntSushi/toml v1.6.0/go.mod h1:ukJfTF/6rtPPRCnwkur4qwRxa8vTRFBF0uk2lLoLwho=
github.com/Masterminds/semver/v3 v3.4.0 h1:Zog+i5UMtVoCU8oKka5P7i9q9HgrJeGzI9SA1Xbatp0=
github.com/Masterminds/semver/v3 v3.4.0/go.mod h1:4V+yj/TJE1HU9XfppCwVMZq3I84lprf4nC11bSS5beM=
github.com/Microsoft/go-winio v0.6.2 h1:F2VQgta7ecxGYO8k3ZZz3RS8fVIXVxONVUPlNERoyfY=

View File

@@ -40,7 +40,6 @@ import (
"github.com/kata-containers/kata-containers/src/runtime/pkg/katautils"
"github.com/kata-containers/kata-containers/src/runtime/pkg/katautils/katatrace"
"github.com/kata-containers/kata-containers/src/runtime/pkg/oci"
vc "github.com/kata-containers/kata-containers/src/runtime/virtcontainers"
"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/pkg/compatoci"
"tags.cncf.io/container-device-interface/pkg/cdi"
)
@@ -52,7 +51,7 @@ var defaultStartManagementServerFunc startManagementServerFunc = func(s *service
shimLog.Info("management server started")
}
func copyLayersToMounts(rootFs *vc.RootFs, spec *specs.Spec) error {
func copyLayersToMounts(rootFs *virtcontainers.RootFs, spec *specs.Spec) error {
for _, o := range rootFs.Options {
if !strings.HasPrefix(o, annotations.FileSystemLayer) {
continue
@@ -75,7 +74,7 @@ func copyLayersToMounts(rootFs *vc.RootFs, spec *specs.Spec) error {
}
func create(ctx context.Context, s *service, r *taskAPI.CreateTaskRequest) (*container, error) {
rootFs := vc.RootFs{}
rootFs := virtcontainers.RootFs{}
if len(r.Rootfs) == 1 {
m := r.Rootfs[0]
rootFs.Source = m.Source
@@ -108,7 +107,7 @@ func create(ctx context.Context, s *service, r *taskAPI.CreateTaskRequest) (*con
}
switch containerType {
case vc.PodSandbox, vc.SingleContainer:
case virtcontainers.PodSandbox, virtcontainers.SingleContainer:
if s.sandbox != nil {
return nil, fmt.Errorf("cannot create another sandbox in sandbox: %s", s.sandbox.ID())
}
@@ -151,7 +150,7 @@ func create(ctx context.Context, s *service, r *taskAPI.CreateTaskRequest) (*con
// 2. If this is not a sandbox infrastructure container, but instead a standalone single container (analogous to "docker run..."),
// then the container spec itself will contain appropriate sizing information for the entire sandbox (since it is
// a single container.
if containerType == vc.PodSandbox {
if containerType == virtcontainers.PodSandbox {
s.config.SandboxCPUs, s.config.SandboxMemMB = oci.CalculateSandboxSizing(ociSpec)
} else {
s.config.SandboxCPUs, s.config.SandboxMemMB = oci.CalculateContainerSizing(ociSpec)
@@ -203,7 +202,7 @@ func create(ctx context.Context, s *service, r *taskAPI.CreateTaskRequest) (*con
defaultStartManagementServerFunc(s, ctx, ociSpec)
}
case vc.PodContainer:
case virtcontainers.PodContainer:
span, ctx := katatrace.Trace(s.ctx, shimLog, "create", shimTracingTags)
defer span.End()
@@ -325,7 +324,7 @@ func checkAndMount(s *service, r *taskAPI.CreateTaskRequest) (bool, error) {
return false, nil
}
if vc.IsNydusRootFSType(m.Type) {
if virtcontainers.IsNydusRootFSType(m.Type) {
// if kata + nydus, do not mount
return false, nil
}
@@ -361,7 +360,7 @@ func doMount(mounts []*containerd_types.Mount, rootfs string) error {
return nil
}
func configureNonRootHypervisor(runtimeConfig *oci.RuntimeConfig, sandboxId string) error {
func configureNonRootHypervisor(runtimeConfig *oci.RuntimeConfig, sandboxID string) error {
userName, err := utils.CreateVmmUser()
if err != nil {
return err
@@ -370,7 +369,7 @@ func configureNonRootHypervisor(runtimeConfig *oci.RuntimeConfig, sandboxId stri
if err != nil {
shimLog.WithFields(logrus.Fields{
"user_name": userName,
"sandbox_id": sandboxId,
"sandbox_id": sandboxID,
}).WithError(err).Warn("configure non root hypervisor failed, delete the user")
if err2 := utils.RemoveVmmUser(userName); err2 != nil {
shimLog.WithField("userName", userName).WithError(err).Warn("failed to remove user")
@@ -398,7 +397,7 @@ func configureNonRootHypervisor(runtimeConfig *oci.RuntimeConfig, sandboxId stri
"user_name": userName,
"uid": uid,
"gid": gid,
"sandbox_id": sandboxId,
"sandbox_id": sandboxID,
}).Debug("successfully created a non root user for the hypervisor")
userTmpDir := path.Join("/run/user/", fmt.Sprint(uid))
@@ -410,7 +409,7 @@ func configureNonRootHypervisor(runtimeConfig *oci.RuntimeConfig, sandboxId stri
}
}
if err = os.Mkdir(userTmpDir, vc.DirMode); err != nil {
if err = os.Mkdir(userTmpDir, virtcontainers.DirMode); err != nil {
return err
}
defer func() {

View File

@@ -34,13 +34,13 @@ import (
const (
DirectVolumePathKey = "path"
AgentUrl = "/agent-url"
DirectVolumeStatUrl = "/direct-volume/stats"
DirectVolumeResizeUrl = "/direct-volume/resize"
IPTablesUrl = "/iptables"
PolicyUrl = "/policy"
IP6TablesUrl = "/ip6tables"
MetricsUrl = "/metrics"
AgentURL = "/agent-url"
DirectVolumeStatURL = "/direct-volume/stats"
DirectVolumeResizeURL = "/direct-volume/resize"
IPTablesURL = "/iptables"
PolicyURL = "/policy"
IP6TablesURL = "/ip6tables"
MetricsURL = "/metrics"
)
var (
@@ -288,13 +288,13 @@ func (s *service) startManagementServer(ctx context.Context, ociSpec *specs.Spec
// bind handler
m := http.NewServeMux()
m.Handle(MetricsUrl, http.HandlerFunc(s.serveMetrics))
m.Handle(AgentUrl, http.HandlerFunc(s.agentURL))
m.Handle(DirectVolumeStatUrl, http.HandlerFunc(s.serveVolumeStats))
m.Handle(DirectVolumeResizeUrl, http.HandlerFunc(s.serveVolumeResize))
m.Handle(IPTablesUrl, http.HandlerFunc(s.ipTablesHandler))
m.Handle(PolicyUrl, http.HandlerFunc(s.policyHandler))
m.Handle(IP6TablesUrl, http.HandlerFunc(s.ip6TablesHandler))
m.Handle(MetricsURL, http.HandlerFunc(s.serveMetrics))
m.Handle(AgentURL, http.HandlerFunc(s.agentURL))
m.Handle(DirectVolumeStatURL, http.HandlerFunc(s.serveVolumeStats))
m.Handle(DirectVolumeResizeURL, http.HandlerFunc(s.serveVolumeResize))
m.Handle(IPTablesURL, http.HandlerFunc(s.ipTablesHandler))
m.Handle(PolicyURL, http.HandlerFunc(s.policyHandler))
m.Handle(IP6TablesURL, http.HandlerFunc(s.ip6TablesHandler))
s.mountPprofHandle(m, ociSpec)
// register shim metrics
@@ -373,7 +373,7 @@ func ClientSocketAddress(id string) (string, error) {
if _, err := os.Stat(socketPath); err != nil {
socketPath = SocketPathRust(id)
if _, err := os.Stat(socketPath); err != nil {
return "", fmt.Errorf("It fails to stat both %s and %s with error %v.", SocketPathGo(id), SocketPathRust(id), err)
return "", fmt.Errorf("it fails to stat both %s and %s with error %v", SocketPathGo(id), SocketPathRust(id), err)
}
}

View File

@@ -139,7 +139,7 @@ func (device *VFIODevice) Detach(ctx context.Context, devReceiver api.DeviceRece
}
}()
if device.GenericDevice.DeviceInfo.ColdPlug {
if device.DeviceInfo.ColdPlug {
// nothing to detach, device was cold plugged
deviceLogger().WithFields(logrus.Fields{
"device-group": device.DeviceInfo.HostPath,
@@ -264,7 +264,7 @@ func GetVFIODetails(deviceFileName, iommuDevicesPath string) (deviceBDF, deviceS
// getMediatedBDF returns the BDF of a VF
// Expected input string format is /sys/devices/pci0000:d7/BDF0/BDF1/.../MDEVBDF/UUID
func getMediatedBDF(deviceSysfsDev string) string {
tokens := strings.SplitN(deviceSysfsDev, "/", -1)
tokens := strings.Split(deviceSysfsDev, "/")
if len(tokens) < 4 {
return ""
}

View File

@@ -59,15 +59,11 @@ func NewDeviceManager(blockDriver string, vhostUserStoreEnabled bool, vhostUserS
vhostUserReconnectTimeout: vhostUserReconnect,
devices: make(map[string]api.Device),
}
if blockDriver == config.VirtioMmio {
dm.blockDriver = config.VirtioMmio
} else if blockDriver == config.VirtioBlock {
dm.blockDriver = config.VirtioBlock
} else if blockDriver == config.Nvdimm {
dm.blockDriver = config.Nvdimm
} else if blockDriver == config.VirtioBlockCCW {
dm.blockDriver = config.VirtioBlockCCW
} else {
switch blockDriver {
case config.VirtioMmio, config.VirtioBlock, config.Nvdimm, config.VirtioBlockCCW:
dm.blockDriver = blockDriver
default:
dm.blockDriver = config.VirtioSCSI
}

View File

@@ -99,18 +99,18 @@ func VolumeMountInfo(volumePath string) (*MountInfo, error) {
return &mountInfo, nil
}
// RecordSandboxId associates a sandbox id with a direct volume.
func RecordSandboxId(sandboxId string, volumePath string) error {
// RecordSandboxID associates a sandbox id with a direct volume.
func RecordSandboxID(sandboxID string, volumePath string) error {
encodedPath := b64.URLEncoding.EncodeToString([]byte(volumePath))
mountInfoFilePath := filepath.Join(kataDirectVolumeRootPath, encodedPath, mountInfoFileName)
if _, err := os.Stat(mountInfoFilePath); err != nil {
return err
}
return os.WriteFile(filepath.Join(kataDirectVolumeRootPath, encodedPath, sandboxId), []byte(""), 0600)
return os.WriteFile(filepath.Join(kataDirectVolumeRootPath, encodedPath, sandboxID), []byte(""), 0600)
}
func GetSandboxIdForVolume(volumePath string) (string, error) {
func GetSandboxIDForVolume(volumePath string) (string, error) {
files, err := os.ReadDir(filepath.Join(kataDirectVolumeRootPath, b64.URLEncoding.EncodeToString([]byte(volumePath))))
if err != nil {
return "", err

View File

@@ -56,7 +56,7 @@ func TestAdd(t *testing.T) {
assert.Nil(t, err)
}
func TestRecordSandboxId(t *testing.T) {
func TestRecordSandboxID(t *testing.T) {
var err error
kataDirectVolumeRootPath = t.TempDir()
@@ -73,22 +73,22 @@ func TestRecordSandboxId(t *testing.T) {
// Add the mount info
assert.Nil(t, Add(volumePath, string(buf)))
sandboxId := uuid.Generate().String()
err = RecordSandboxId(sandboxId, volumePath)
sandboxID := uuid.Generate().String()
err = RecordSandboxID(sandboxID, volumePath)
assert.Nil(t, err)
id, err := GetSandboxIdForVolume(volumePath)
id, err := GetSandboxIDForVolume(volumePath)
assert.Nil(t, err)
assert.Equal(t, sandboxId, id)
assert.Equal(t, sandboxID, id)
}
func TestRecordSandboxIdNoMountInfoFile(t *testing.T) {
func TestRecordSandboxIDNoMountInfoFile(t *testing.T) {
var err error
kataDirectVolumeRootPath = t.TempDir()
var volumePath = "/a/b/c"
sandboxId := uuid.Generate().String()
err = RecordSandboxId(sandboxId, volumePath)
sandboxID := uuid.Generate().String()
err = RecordSandboxID(sandboxID, volumePath)
assert.Error(t, err)
assert.True(t, errors.Is(err, os.ErrNotExist))
}
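The renames in this block follow Go's initialism convention (`ID`, not `Id`), while the underlying logic keeps encoding the volume path with URL-safe base64 so it can serve as a single directory name. A minimal sketch of that encoding step, with a hypothetical root path and helper name:

```go
package main

import (
	b64 "encoding/base64"
	"fmt"
	"path/filepath"
)

// encodeVolumePath mirrors how the direct-volume code derives a
// filesystem-safe directory name from a volume path: the raw path is
// URL-safe base64 encoded so it can be used as one path component.
func encodeVolumePath(root, volumePath string) string {
	encoded := b64.URLEncoding.EncodeToString([]byte(volumePath))
	return filepath.Join(root, encoded)
}

func main() {
	fmt.Println(encodeVolumePath("/run/direct-volumes", "/a/b/c"))
}
```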


@@ -496,8 +496,8 @@ type TdxQomObject struct {
Debug *bool `json:"debug,omitempty"`
}
func (this *SocketAddress) String() string {
b, err := json.Marshal(*this)
func (s *SocketAddress) String() string {
b, err := json.Marshal(*s)
if err != nil {
log.Fatalf("Unable to marshal SocketAddress object: %s", err.Error())
@@ -507,8 +507,8 @@ func (this *SocketAddress) String() string {
return string(b)
}
func (this *TdxQomObject) String() string {
b, err := json.Marshal(*this)
func (t *TdxQomObject) String() string {
b, err := json.Marshal(*t)
if err != nil {
log.Fatalf("Unable to marshal TDX QOM object: %s", err.Error())
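These hunks fix staticcheck's ST1016 ("methods on the same type should have the same receiver name") by replacing the Java-ism `this` with a short receiver derived from the type name. A trimmed-down sketch, using an illustrative field set rather than the real QEMU `SocketAddress` definition:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// SocketAddress is a stand-in for the runtime's QEMU type; the fields
// here are illustrative only.
type SocketAddress struct {
	Type string `json:"type"`
	CID  string `json:"cid,omitempty"`
}

// String uses a short, consistent receiver name ("s") rather than
// "this"; ST1016 is satisfied when every method on the type agrees.
func (s *SocketAddress) String() string {
	b, err := json.Marshal(*s)
	if err != nil {
		log.Fatalf("Unable to marshal SocketAddress object: %s", err)
	}
	return string(b)
}

func main() {
	fmt.Println((&SocketAddress{Type: "vsock", CID: "3"}).String())
}
```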


@@ -259,7 +259,7 @@ func (km *KataMonitor) aggregateSandboxMetrics(encoder expfmt.Encoder, filterFam
}
func getParsedMetrics(sandboxID string, sandboxMetadata sandboxCRIMetadata) ([]*dto.MetricFamily, error) {
body, err := shimclient.DoGet(sandboxID, defaultTimeout, containerdshim.MetricsUrl)
body, err := shimclient.DoGet(sandboxID, defaultTimeout, containerdshim.MetricsURL)
if err != nil {
return nil, err
}
@@ -269,7 +269,7 @@ func getParsedMetrics(sandboxID string, sandboxMetadata sandboxCRIMetadata) ([]*
// GetSandboxMetrics will get sandbox's metrics from shim
func GetSandboxMetrics(sandboxID string) (string, error) {
body, err := shimclient.DoGet(sandboxID, defaultTimeout, containerdshim.MetricsUrl)
body, err := shimclient.DoGet(sandboxID, defaultTimeout, containerdshim.MetricsURL)
if err != nil {
return "", err
}


@@ -138,9 +138,11 @@ func TestEncodeMetricFamily(t *testing.T) {
continue
}
// only check kata_monitor_running_shim_count and kata_monitor_scrape_count
if fields[0] == "kata_monitor_running_shim_count" {
switch fields[0] {
case "kata_monitor_running_shim_count":
assert.Equal("11", fields[1], "kata_monitor_running_shim_count should be 11")
} else if fields[0] == "kata_monitor_scrape_count" {
case "kata_monitor_scrape_count":
assert.Equal("2", fields[1], "kata_monitor_scrape_count should be 2")
}
}
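The test refactor above swaps an if/else-if chain that compares the same value against multiple constants for a tagged switch, the form gocritic and staticcheck prefer. A small sketch with a hypothetical helper (the metric names are the ones checked in the diff):

```go
package main

import "fmt"

// classifyMetric shows the switch form: one tagged switch on fields[0]
// reads better than a chain of if/else-if comparisons on it.
func classifyMetric(name string) string {
	switch name {
	case "kata_monitor_running_shim_count":
		return "shim count"
	case "kata_monitor_scrape_count":
		return "scrape count"
	default:
		return "unchecked"
	}
}

func main() {
	fmt.Println(classifyMetric("kata_monitor_scrape_count"))
}
```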


@@ -184,7 +184,7 @@ func (km *KataMonitor) GetAgentURL(w http.ResponseWriter, r *http.Request) {
return
}
data, err := shimclient.DoGet(sandboxID, defaultTimeout, containerdshim.AgentUrl)
data, err := shimclient.DoGet(sandboxID, defaultTimeout, containerdshim.AgentURL)
if err != nil {
commonServeError(w, http.StatusBadRequest, err)
return
@@ -206,14 +206,14 @@ func (km *KataMonitor) ListSandboxes(w http.ResponseWriter, r *http.Request) {
func listSandboxesText(sandboxes []string, w http.ResponseWriter) {
for _, s := range sandboxes {
w.Write([]byte(fmt.Sprintf("%s\n", s)))
fmt.Fprintf(w, "%s\n", s)
}
}
func listSandboxesHtml(sandboxes []string, w http.ResponseWriter) {
w.Write([]byte("<h1>Sandbox list</h1>\n"))
w.Write([]byte("<ul>\n"))
for _, s := range sandboxes {
w.Write([]byte(fmt.Sprintf("<li>%s: <a href='/debug/pprof/?sandbox=%s'>pprof</a>, <a href='/metrics?sandbox=%s'>metrics</a>, <a href='/agent-url?sandbox=%s'>agent-url</a></li>\n", s, s, s, s)))
fmt.Fprintf(w, "<li>%s: <a href='/debug/pprof/?sandbox=%s'>pprof</a>, <a href='/metrics?sandbox=%s'>metrics</a>, <a href='/agent-url?sandbox=%s'>agent-url</a></li>\n", s, s, s, s)
}
w.Write([]byte("</ul>\n"))
}


@@ -98,7 +98,7 @@ func (km *KataMonitor) ExpvarHandler(w http.ResponseWriter, r *http.Request) {
// PprofIndex handles other `/debug/pprof/` requests
func (km *KataMonitor) PprofIndex(w http.ResponseWriter, r *http.Request) {
if len(strings.TrimPrefix(r.URL.Path, "/debug/pprof/")) == 0 {
km.proxyRequest(w, r, copyResponseAddingSandboxIdToHref)
km.proxyRequest(w, r, copyResponseAddingSandboxIDToHref)
} else {
km.proxyRequest(w, r, nil)
}
@@ -132,7 +132,7 @@ func copyResponse(req *http.Request, w io.Writer, r io.Reader) error {
return err
}
func copyResponseAddingSandboxIdToHref(req *http.Request, w io.Writer, r io.Reader) error {
func copyResponseAddingSandboxIDToHref(req *http.Request, w io.Writer, r io.Reader) error {
sb, err := getSandboxIDFromReq(req)
if err != nil {
monitorLog.WithError(err).Warning("missing sandbox query in pprof url")


@@ -15,7 +15,7 @@ import (
"github.com/stretchr/testify/assert"
)
func TestCopyResponseAddingSandboxIdToHref(t *testing.T) {
func TestCopyResponseAddingSandboxIDToHref(t *testing.T) {
assert := assert.New(t)
htmlIn := strings.NewReader(`
@@ -112,6 +112,6 @@ Profile Descriptions:
req := &http.Request{URL: &url.URL{RawQuery: "sandbox=1234567890"}}
buf := bytes.NewBuffer(nil)
copyResponseAddingSandboxIdToHref(req, buf, htmlIn)
copyResponseAddingSandboxIDToHref(req, buf, htmlIn)
assert.Equal(htmlExpected, buf)
}


@@ -98,8 +98,8 @@ func getKernelVersion() (string, error) {
// These kernel versions can't be parsed by the current lib and lead to a panic,
// therefore the '+' should be removed.
func fixKernelVersion(version string) string {
version = strings.Replace(version, "_", "-", -1)
return strings.Replace(version, "+", "", -1)
version = strings.ReplaceAll(version, "_", "-")
return strings.ReplaceAll(version, "+", "")
}
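`strings.ReplaceAll(s, old, new)` is defined in the standard library as `strings.Replace(s, old, new, -1)`, so this gocritic-driven change is purely cosmetic. A sketch of the helper as it reads after the diff:

```go
package main

import (
	"fmt"
	"strings"
)

// fixKernelVersion mirrors the helper above: underscores become
// hyphens and '+' suffixes are dropped so the version string parses.
func fixKernelVersion(version string) string {
	version = strings.ReplaceAll(version, "_", "-")
	return strings.ReplaceAll(version, "+", "")
}

func main() {
	fmt.Println(fixKernelVersion("5.15.0_generic+"))
}
```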
// handleKernelVersion checks that the current kernel version is compatible with


@@ -23,7 +23,7 @@ const (
testDirMode = os.FileMode(0750)
testFileMode = os.FileMode(0640)
busyboxConfigJson = `
busyboxConfigJSON = `
{
"ociVersion": "1.0.1-dev",
"process": {
@@ -359,7 +359,7 @@ func SetupOCIConfigFile(t *testing.T) (rootPath string, bundlePath, ociConfigFil
assert.NoError(err)
ociConfigFile = filepath.Join(bundlePath, "config.json")
err = os.WriteFile(ociConfigFile, []byte(busyboxConfigJson), testFileMode)
err = os.WriteFile(ociConfigFile, []byte(busyboxConfigJSON), testFileMode)
assert.NoError(err)
return tmpdir, bundlePath, ociConfigFile


@@ -22,7 +22,6 @@ import (
govmmQemu "github.com/kata-containers/kata-containers/src/runtime/pkg/govmm/qemu"
"github.com/kata-containers/kata-containers/src/runtime/pkg/katautils/katatrace"
"github.com/kata-containers/kata-containers/src/runtime/pkg/oci"
"github.com/kata-containers/kata-containers/src/runtime/virtcontainers"
vc "github.com/kata-containers/kata-containers/src/runtime/virtcontainers"
exp "github.com/kata-containers/kata-containers/src/runtime/virtcontainers/experimental"
"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/types"
@@ -1900,8 +1899,8 @@ func checkConfig(config oci.RuntimeConfig) error {
// checkPCIeConfig ensures the PCIe configuration is valid.
// Only allow one of the following settings for cold-plug:
// no-port, root-port, switch-port
func checkPCIeConfig(coldPlug config.PCIePort, hotPlug config.PCIePort, machineType string, hypervisorType virtcontainers.HypervisorType) error {
if hypervisorType != virtcontainers.QemuHypervisor && hypervisorType != virtcontainers.ClhHypervisor {
func checkPCIeConfig(coldPlug config.PCIePort, hotPlug config.PCIePort, machineType string, hypervisorType vc.HypervisorType) error {
if hypervisorType != vc.QemuHypervisor && hypervisorType != vc.ClhHypervisor {
kataUtilsLogger.Warn("Advanced PCIe Topology only available for QEMU/CLH hypervisor, ignoring hot(cold)_vfio_port setting")
return nil
}
@@ -1917,7 +1916,7 @@ func checkPCIeConfig(coldPlug config.PCIePort, hotPlug config.PCIePort, machineT
if machineType != "q35" && machineType != "virt" {
return nil
}
if hypervisorType == virtcontainers.ClhHypervisor {
if hypervisorType == vc.ClhHypervisor {
if coldPlug != config.NoPort {
return fmt.Errorf("cold-plug not supported on CLH")
}


@@ -21,7 +21,6 @@ import (
config "github.com/kata-containers/kata-containers/src/runtime/pkg/device/config"
ktu "github.com/kata-containers/kata-containers/src/runtime/pkg/katatestutils"
"github.com/kata-containers/kata-containers/src/runtime/pkg/oci"
"github.com/kata-containers/kata-containers/src/runtime/virtcontainers"
vc "github.com/kata-containers/kata-containers/src/runtime/virtcontainers"
"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/pkg/compatoci"
"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/pkg/vcmock"
@@ -427,7 +426,7 @@ func TestVfioChecksClh(t *testing.T) {
// Check valid CLH vfio configs
f := func(coldPlug, hotPlug config.PCIePort) error {
return checkPCIeConfig(coldPlug, hotPlug, defaultMachineType, virtcontainers.ClhHypervisor)
return checkPCIeConfig(coldPlug, hotPlug, defaultMachineType, vc.ClhHypervisor)
}
assert.NoError(f(config.NoPort, config.NoPort))
assert.NoError(f(config.NoPort, config.RootPort))
@@ -441,7 +440,7 @@ func TestVfioCheckQemu(t *testing.T) {
// Check valid Qemu vfio configs
f := func(coldPlug, hotPlug config.PCIePort) error {
return checkPCIeConfig(coldPlug, hotPlug, defaultMachineType, virtcontainers.QemuHypervisor)
return checkPCIeConfig(coldPlug, hotPlug, defaultMachineType, vc.QemuHypervisor)
}
assert.NoError(f(config.NoPort, config.NoPort))


@@ -90,7 +90,7 @@ func TestNewSystemLogHook(t *testing.T) {
output := string(bytes)
output = strings.TrimSpace(output)
output = strings.Replace(output, `"`, "", -1)
output = strings.ReplaceAll(output, `"`, "")
fields := strings.Fields(output)


@@ -1143,7 +1143,7 @@ func TestParseAnnotationBoolConfiguration(t *testing.T) {
ocispec := specs.Spec{
Annotations: map[string]string{tc.annotationKey: annotaionValue},
}
var val bool = false
val := false
err := newAnnotationConfiguration(ocispec, tc.annotationKey).setBool(func(v bool) {
val = v


@@ -47,8 +47,8 @@ func buildUnixSocketClient(socketAddr string, timeout time.Duration) (*http.Clie
return client, nil
}
func DoGet(sandboxID string, timeoutInSeconds time.Duration, urlPath string) ([]byte, error) {
client, err := BuildShimClient(sandboxID, timeoutInSeconds)
func DoGet(sandboxID string, timeout time.Duration, urlPath string) ([]byte, error) {
client, err := BuildShimClient(sandboxID, timeout)
if err != nil {
return nil, err
}
@@ -71,8 +71,8 @@ func DoGet(sandboxID string, timeoutInSeconds time.Duration, urlPath string) ([]
}
// DoPut will make a PUT request to the shim endpoint that handles the given sandbox ID
func DoPut(sandboxID string, timeoutInSeconds time.Duration, urlPath, contentType string, payload []byte) error {
client, err := BuildShimClient(sandboxID, timeoutInSeconds)
func DoPut(sandboxID string, timeout time.Duration, urlPath, contentType string, payload []byte) error {
client, err := BuildShimClient(sandboxID, timeout)
if err != nil {
return err
}
@@ -103,8 +103,8 @@ func DoPut(sandboxID string, timeoutInSeconds time.Duration, urlPath, contentTyp
}
// DoPost will make a POST request to the shim endpoint that handles the given sandbox ID
func DoPost(sandboxID string, timeoutInSeconds time.Duration, urlPath, contentType string, payload []byte) error {
client, err := BuildShimClient(sandboxID, timeoutInSeconds)
func DoPost(sandboxID string, timeout time.Duration, urlPath, contentType string, payload []byte) error {
client, err := BuildShimClient(sandboxID, timeout)
if err != nil {
return err
}


@@ -1,7 +1,7 @@
TOML stands for Tom's Obvious, Minimal Language. This Go package provides a
reflection interface similar to Go's standard library `json` and `xml` packages.
Compatible with TOML version [v1.0.0](https://toml.io/en/v1.0.0).
Compatible with TOML version [v1.1.0](https://toml.io/en/v1.1.0).
Documentation: https://pkg.go.dev/github.com/BurntSushi/toml


@@ -206,6 +206,13 @@ func markDecodedRecursive(md *MetaData, tmap map[string]any) {
markDecodedRecursive(md, tmap)
md.context = md.context[0 : len(md.context)-1]
}
if tarr, ok := tmap[key].([]map[string]any); ok {
for _, elm := range tarr {
md.context = append(md.context, key)
markDecodedRecursive(md, elm)
md.context = md.context[0 : len(md.context)-1]
}
}
}
}
@@ -423,7 +430,7 @@ func (md *MetaData) unifyString(data any, rv reflect.Value) error {
if i, ok := data.(int64); ok {
rv.SetString(strconv.FormatInt(i, 10))
} else if f, ok := data.(float64); ok {
rv.SetString(strconv.FormatFloat(f, 'f', -1, 64))
rv.SetString(strconv.FormatFloat(f, 'g', -1, 64))
} else {
return md.badtype("string", data)
}


@@ -228,9 +228,9 @@ func (enc *Encoder) eElement(rv reflect.Value) {
}
switch v.Location() {
default:
enc.wf(v.Format(format))
enc.write(v.Format(format))
case internal.LocalDatetime, internal.LocalDate, internal.LocalTime:
enc.wf(v.In(time.UTC).Format(format))
enc.write(v.In(time.UTC).Format(format))
}
return
case Marshaler:
@@ -279,40 +279,40 @@ func (enc *Encoder) eElement(rv reflect.Value) {
case reflect.String:
enc.writeQuoted(rv.String())
case reflect.Bool:
enc.wf(strconv.FormatBool(rv.Bool()))
enc.write(strconv.FormatBool(rv.Bool()))
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
enc.wf(strconv.FormatInt(rv.Int(), 10))
enc.write(strconv.FormatInt(rv.Int(), 10))
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
enc.wf(strconv.FormatUint(rv.Uint(), 10))
enc.write(strconv.FormatUint(rv.Uint(), 10))
case reflect.Float32:
f := rv.Float()
if math.IsNaN(f) {
if math.Signbit(f) {
enc.wf("-")
enc.write("-")
}
enc.wf("nan")
enc.write("nan")
} else if math.IsInf(f, 0) {
if math.Signbit(f) {
enc.wf("-")
enc.write("-")
}
enc.wf("inf")
enc.write("inf")
} else {
enc.wf(floatAddDecimal(strconv.FormatFloat(f, 'f', -1, 32)))
enc.write(floatAddDecimal(strconv.FormatFloat(f, 'g', -1, 32)))
}
case reflect.Float64:
f := rv.Float()
if math.IsNaN(f) {
if math.Signbit(f) {
enc.wf("-")
enc.write("-")
}
enc.wf("nan")
enc.write("nan")
} else if math.IsInf(f, 0) {
if math.Signbit(f) {
enc.wf("-")
enc.write("-")
}
enc.wf("inf")
enc.write("inf")
} else {
enc.wf(floatAddDecimal(strconv.FormatFloat(f, 'f', -1, 64)))
enc.write(floatAddDecimal(strconv.FormatFloat(f, 'g', -1, 64)))
}
case reflect.Array, reflect.Slice:
enc.eArrayOrSliceElement(rv)
@@ -330,27 +330,32 @@ func (enc *Encoder) eElement(rv reflect.Value) {
// By the TOML spec, all floats must have a decimal with at least one number on
// either side.
func floatAddDecimal(fstr string) string {
if !strings.Contains(fstr, ".") {
return fstr + ".0"
for _, c := range fstr {
if c == 'e' { // Exponent syntax
return fstr
}
if c == '.' {
return fstr
}
}
return fstr
return fstr + ".0"
}
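The rewritten `floatAddDecimal` matters because the encoder now formats floats with the `'g'` verb, which can emit exponent notation; appending `.0` to `1e+21` would produce invalid TOML, so the helper returns early when it sees either `.` or `e`. A self-contained copy of that logic (condition merged into one check, behavior unchanged):

```go
package main

import (
	"fmt"
	"strconv"
)

// floatAddDecimal reproduces the updated TOML encoder helper: ".0" is
// appended only when the float string has neither a decimal point nor
// an exponent, since TOML requires a digit on both sides of the dot.
func floatAddDecimal(fstr string) string {
	for _, c := range fstr {
		if c == '.' || c == 'e' {
			return fstr
		}
	}
	return fstr + ".0"
}

func main() {
	fmt.Println(floatAddDecimal(strconv.FormatFloat(3, 'g', -1, 64)))
	fmt.Println(floatAddDecimal(strconv.FormatFloat(1e21, 'g', -1, 64)))
}
```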
func (enc *Encoder) writeQuoted(s string) {
enc.wf("\"%s\"", dblQuotedReplacer.Replace(s))
enc.write(`"` + dblQuotedReplacer.Replace(s) + `"`)
}
func (enc *Encoder) eArrayOrSliceElement(rv reflect.Value) {
length := rv.Len()
enc.wf("[")
enc.write("[")
for i := 0; i < length; i++ {
elem := eindirect(rv.Index(i))
enc.eElement(elem)
if i != length-1 {
enc.wf(", ")
enc.write(", ")
}
}
enc.wf("]")
enc.write("]")
}
func (enc *Encoder) eArrayOfTables(key Key, rv reflect.Value) {
@@ -363,7 +368,7 @@ func (enc *Encoder) eArrayOfTables(key Key, rv reflect.Value) {
continue
}
enc.newline()
enc.wf("%s[[%s]]", enc.indentStr(key), key)
enc.writef("%s[[%s]]", enc.indentStr(key), key)
enc.newline()
enc.eMapOrStruct(key, trv, false)
}
@@ -376,7 +381,7 @@ func (enc *Encoder) eTable(key Key, rv reflect.Value) {
enc.newline()
}
if len(key) > 0 {
enc.wf("%s[%s]", enc.indentStr(key), key)
enc.writef("%s[%s]", enc.indentStr(key), key)
enc.newline()
}
enc.eMapOrStruct(key, rv, false)
@@ -422,7 +427,7 @@ func (enc *Encoder) eMap(key Key, rv reflect.Value, inline bool) {
if inline {
enc.writeKeyValue(Key{mapKey.String()}, val, true)
if trailC || i != len(mapKeys)-1 {
enc.wf(", ")
enc.write(", ")
}
} else {
enc.encode(key.add(mapKey.String()), val)
@@ -431,12 +436,12 @@ func (enc *Encoder) eMap(key Key, rv reflect.Value, inline bool) {
}
if inline {
enc.wf("{")
enc.write("{")
}
writeMapKeys(mapKeysDirect, len(mapKeysSub) > 0)
writeMapKeys(mapKeysSub, false)
if inline {
enc.wf("}")
enc.write("}")
}
}
@@ -534,7 +539,7 @@ func (enc *Encoder) eStruct(key Key, rv reflect.Value, inline bool) {
if inline {
enc.writeKeyValue(Key{keyName}, fieldVal, true)
if fieldIndex[0] != totalFields-1 {
enc.wf(", ")
enc.write(", ")
}
} else {
enc.encode(key.add(keyName), fieldVal)
@@ -543,14 +548,14 @@ func (enc *Encoder) eStruct(key Key, rv reflect.Value, inline bool) {
}
if inline {
enc.wf("{")
enc.write("{")
}
l := len(fieldsDirect) + len(fieldsSub)
writeFields(fieldsDirect, l)
writeFields(fieldsSub, l)
if inline {
enc.wf("}")
enc.write("}")
}
}
@@ -700,7 +705,7 @@ func isEmpty(rv reflect.Value) bool {
func (enc *Encoder) newline() {
if enc.hasWritten {
enc.wf("\n")
enc.write("\n")
}
}
@@ -722,14 +727,22 @@ func (enc *Encoder) writeKeyValue(key Key, val reflect.Value, inline bool) {
enc.eElement(val)
return
}
enc.wf("%s%s = ", enc.indentStr(key), key.maybeQuoted(len(key)-1))
enc.writef("%s%s = ", enc.indentStr(key), key.maybeQuoted(len(key)-1))
enc.eElement(val)
if !inline {
enc.newline()
}
}
func (enc *Encoder) wf(format string, v ...any) {
func (enc *Encoder) write(s string) {
_, err := enc.w.WriteString(s)
if err != nil {
encPanic(err)
}
enc.hasWritten = true
}
func (enc *Encoder) writef(format string, v ...any) {
_, err := fmt.Fprintf(enc.w, format, v...)
if err != nil {
encPanic(err)


@@ -13,7 +13,6 @@ type itemType int
const (
itemError itemType = iota
itemNIL // used in the parser to indicate no type
itemEOF
itemText
itemString
@@ -47,14 +46,13 @@ func (p Position) String() string {
}
type lexer struct {
input string
start int
pos int
line int
state stateFn
items chan item
tomlNext bool
esc bool
input string
start int
pos int
line int
state stateFn
items chan item
esc bool
// Allow for backing up up to 4 runes. This is necessary because TOML
// contains 3-rune tokens (""" and ''').
@@ -90,14 +88,13 @@ func (lx *lexer) nextItem() item {
}
}
func lex(input string, tomlNext bool) *lexer {
func lex(input string) *lexer {
lx := &lexer{
input: input,
state: lexTop,
items: make(chan item, 10),
stack: make([]stateFn, 0, 10),
line: 1,
tomlNext: tomlNext,
input: input,
state: lexTop,
items: make(chan item, 10),
stack: make([]stateFn, 0, 10),
line: 1,
}
return lx
}
@@ -108,7 +105,7 @@ func (lx *lexer) push(state stateFn) {
func (lx *lexer) pop() stateFn {
if len(lx.stack) == 0 {
return lx.errorf("BUG in lexer: no states to pop")
panic("BUG in lexer: no states to pop")
}
last := lx.stack[len(lx.stack)-1]
lx.stack = lx.stack[0 : len(lx.stack)-1]
@@ -305,6 +302,8 @@ func lexTop(lx *lexer) stateFn {
return lexTableStart
case eof:
if lx.pos > lx.start {
// TODO: never reached? I think this can only occur on a bug in the
// lexer(?)
return lx.errorf("unexpected EOF")
}
lx.emit(itemEOF)
@@ -392,8 +391,6 @@ func lexTableNameStart(lx *lexer) stateFn {
func lexTableNameEnd(lx *lexer) stateFn {
lx.skip(isWhitespace)
switch r := lx.next(); {
case isWhitespace(r):
return lexTableNameEnd
case r == '.':
lx.ignore()
return lexTableNameStart
@@ -412,7 +409,7 @@ func lexTableNameEnd(lx *lexer) stateFn {
// Lexes only one part, e.g. only 'a' inside 'a.b'.
func lexBareName(lx *lexer) stateFn {
r := lx.next()
if isBareKeyChar(r, lx.tomlNext) {
if isBareKeyChar(r) {
return lexBareName
}
lx.backup()
@@ -420,23 +417,23 @@ func lexBareName(lx *lexer) stateFn {
return lx.pop()
}
// lexBareName lexes one part of a key or table.
//
// It assumes that at least one valid character for the table has already been
// read.
// lexQuotedName lexes one part of a quoted key or table name. It assumes that
// it starts lexing at the quote itself (" or ').
//
// Lexes only one part, e.g. only '"a"' inside '"a".b'.
func lexQuotedName(lx *lexer) stateFn {
r := lx.next()
switch {
case isWhitespace(r):
return lexSkip(lx, lexValue)
case r == '"':
lx.ignore() // ignore the '"'
return lexString
case r == '\'':
lx.ignore() // ignore the "'"
return lexRawString
// TODO: I don't think any of the below conditions can ever be reached?
case isWhitespace(r):
return lexSkip(lx, lexValue)
case r == eof:
return lx.errorf("unexpected EOF; expected value")
default:
@@ -464,17 +461,19 @@ func lexKeyStart(lx *lexer) stateFn {
func lexKeyNameStart(lx *lexer) stateFn {
lx.skip(isWhitespace)
switch r := lx.peek(); {
case r == '=' || r == eof:
return lx.errorf("unexpected '='")
case r == '.':
return lx.errorf("unexpected '.'")
default:
lx.push(lexKeyEnd)
return lexBareName
case r == '"' || r == '\'':
lx.ignore()
lx.push(lexKeyEnd)
return lexQuotedName
default:
lx.push(lexKeyEnd)
return lexBareName
// TODO: I think these can never be reached?
case r == '=' || r == eof:
return lx.errorf("unexpected '='")
case r == '.':
return lx.errorf("unexpected '.'")
}
}
@@ -485,7 +484,7 @@ func lexKeyEnd(lx *lexer) stateFn {
switch r := lx.next(); {
case isWhitespace(r):
return lexSkip(lx, lexKeyEnd)
case r == eof:
case r == eof: // TODO: never reached
return lx.errorf("unexpected EOF; expected key separator '='")
case r == '.':
lx.ignore()
@@ -628,10 +627,7 @@ func lexInlineTableValue(lx *lexer) stateFn {
case isWhitespace(r):
return lexSkip(lx, lexInlineTableValue)
case isNL(r):
if lx.tomlNext {
return lexSkip(lx, lexInlineTableValue)
}
return lx.errorPrevLine(errLexInlineTableNL{})
return lexSkip(lx, lexInlineTableValue)
case r == '#':
lx.push(lexInlineTableValue)
return lexCommentStart
@@ -653,10 +649,7 @@ func lexInlineTableValueEnd(lx *lexer) stateFn {
case isWhitespace(r):
return lexSkip(lx, lexInlineTableValueEnd)
case isNL(r):
if lx.tomlNext {
return lexSkip(lx, lexInlineTableValueEnd)
}
return lx.errorPrevLine(errLexInlineTableNL{})
return lexSkip(lx, lexInlineTableValueEnd)
case r == '#':
lx.push(lexInlineTableValueEnd)
return lexCommentStart
@@ -664,10 +657,7 @@ func lexInlineTableValueEnd(lx *lexer) stateFn {
lx.ignore()
lx.skip(isWhitespace)
if lx.peek() == '}' {
if lx.tomlNext {
return lexInlineTableValueEnd
}
return lx.errorf("trailing comma not allowed in inline tables")
return lexInlineTableValueEnd
}
return lexInlineTableValue
case r == '}':
@@ -855,9 +845,6 @@ func lexStringEscape(lx *lexer) stateFn {
r := lx.next()
switch r {
case 'e':
if !lx.tomlNext {
return lx.error(errLexEscape{r})
}
fallthrough
case 'b':
fallthrough
@@ -878,9 +865,6 @@ func lexStringEscape(lx *lexer) stateFn {
case '\\':
return lx.pop()
case 'x':
if !lx.tomlNext {
return lx.error(errLexEscape{r})
}
return lexHexEscape
case 'u':
return lexShortUnicodeEscape
@@ -928,19 +912,9 @@ func lexLongUnicodeEscape(lx *lexer) stateFn {
// lexBaseNumberOrDate can differentiate base prefixed integers from other
// types.
func lexNumberOrDateStart(lx *lexer) stateFn {
r := lx.next()
switch r {
case '0':
if lx.next() == '0' {
return lexBaseNumberOrDate
}
if !isDigit(r) {
// The only way to reach this state is if the value starts
// with a digit, so specifically treat anything else as an
// error.
return lx.errorf("expected a digit but got %q", r)
}
return lexNumberOrDate
}
@@ -1196,13 +1170,13 @@ func lexSkip(lx *lexer, nextState stateFn) stateFn {
}
func (s stateFn) String() string {
if s == nil {
return "<nil>"
}
name := runtime.FuncForPC(reflect.ValueOf(s).Pointer()).Name()
if i := strings.LastIndexByte(name, '.'); i > -1 {
name = name[i+1:]
}
if s == nil {
name = "<nil>"
}
return name + "()"
}
@@ -1210,8 +1184,6 @@ func (itype itemType) String() string {
switch itype {
case itemError:
return "Error"
case itemNIL:
return "NIL"
case itemEOF:
return "EOF"
case itemText:
@@ -1226,18 +1198,22 @@ func (itype itemType) String() string {
return "Float"
case itemDatetime:
return "DateTime"
case itemTableStart:
return "TableStart"
case itemTableEnd:
return "TableEnd"
case itemKeyStart:
return "KeyStart"
case itemKeyEnd:
return "KeyEnd"
case itemArray:
return "Array"
case itemArrayEnd:
return "ArrayEnd"
case itemTableStart:
return "TableStart"
case itemTableEnd:
return "TableEnd"
case itemArrayTableStart:
return "ArrayTableStart"
case itemArrayTableEnd:
return "ArrayTableEnd"
case itemKeyStart:
return "KeyStart"
case itemKeyEnd:
return "KeyEnd"
case itemCommentStart:
return "CommentStart"
case itemInlineTableStart:
@@ -1266,7 +1242,7 @@ func isDigit(r rune) bool { return r >= '0' && r <= '9' }
func isBinary(r rune) bool { return r == '0' || r == '1' }
func isOctal(r rune) bool { return r >= '0' && r <= '7' }
func isHex(r rune) bool { return (r >= '0' && r <= '9') || (r|0x20 >= 'a' && r|0x20 <= 'f') }
func isBareKeyChar(r rune, tomlNext bool) bool {
func isBareKeyChar(r rune) bool {
return (r >= 'A' && r <= 'Z') || (r >= 'a' && r <= 'z') ||
(r >= '0' && r <= '9') || r == '_' || r == '-'
}


@@ -3,7 +3,6 @@ package toml
import (
"fmt"
"math"
"os"
"strconv"
"strings"
"time"
@@ -17,7 +16,6 @@ type parser struct {
context Key // Full key for the current hash in scope.
currentKey string // Base key name for everything except hashes.
pos Position // Current position in the TOML file.
tomlNext bool
ordered []Key // List of keys in the order that they appear in the TOML data.
@@ -32,8 +30,6 @@ type keyInfo struct {
}
func parse(data string) (p *parser, err error) {
_, tomlNext := os.LookupEnv("BURNTSUSHI_TOML_110")
defer func() {
if r := recover(); r != nil {
if pErr, ok := r.(ParseError); ok {
@@ -73,10 +69,9 @@ func parse(data string) (p *parser, err error) {
p = &parser{
keyInfo: make(map[string]keyInfo),
mapping: make(map[string]any),
lx: lex(data, tomlNext),
lx: lex(data),
ordered: make([]Key, 0),
implicits: make(map[string]struct{}),
tomlNext: tomlNext,
}
for {
item := p.next()
@@ -350,17 +345,14 @@ func (p *parser) valueFloat(it item) (any, tomlType) {
var dtTypes = []struct {
fmt string
zone *time.Location
next bool
}{
{time.RFC3339Nano, time.Local, false},
{"2006-01-02T15:04:05.999999999", internal.LocalDatetime, false},
{"2006-01-02", internal.LocalDate, false},
{"15:04:05.999999999", internal.LocalTime, false},
// tomlNext
{"2006-01-02T15:04Z07:00", time.Local, true},
{"2006-01-02T15:04", internal.LocalDatetime, true},
{"15:04", internal.LocalTime, true},
{time.RFC3339Nano, time.Local},
{"2006-01-02T15:04:05.999999999", internal.LocalDatetime},
{"2006-01-02", internal.LocalDate},
{"15:04:05.999999999", internal.LocalTime},
{"2006-01-02T15:04Z07:00", time.Local},
{"2006-01-02T15:04", internal.LocalDatetime},
{"15:04", internal.LocalTime},
}
func (p *parser) valueDatetime(it item) (any, tomlType) {
@@ -371,9 +363,6 @@ func (p *parser) valueDatetime(it item) (any, tomlType) {
err error
)
for _, dt := range dtTypes {
if dt.next && !p.tomlNext {
continue
}
t, err = time.ParseInLocation(dt.fmt, it.val, dt.zone)
if err == nil {
if missingLeadingZero(it.val, dt.fmt) {
@@ -644,6 +633,11 @@ func (p *parser) setValue(key string, value any) {
// Note that since it has already been defined (as a hash), we don't
// want to overwrite it. So our business is done.
if p.isArray(keyContext) {
if !p.isImplicit(keyContext) {
if _, ok := hash[key]; ok {
p.panicf("Key '%s' has already been defined.", keyContext)
}
}
p.removeImplicit(keyContext)
hash[key] = value
return
@@ -802,10 +796,8 @@ func (p *parser) replaceEscapes(it item, str string) string {
b.WriteByte(0x0d)
skip = 1
case 'e':
if p.tomlNext {
b.WriteByte(0x1b)
skip = 1
}
b.WriteByte(0x1b)
skip = 1
case '"':
b.WriteByte(0x22)
skip = 1
@@ -815,11 +807,9 @@ func (p *parser) replaceEscapes(it item, str string) string {
// The lexer guarantees the correct number of characters are present;
// don't need to check here.
case 'x':
if p.tomlNext {
escaped := p.asciiEscapeToUnicode(it, str[i+2:i+4])
b.WriteRune(escaped)
skip = 3
}
escaped := p.asciiEscapeToUnicode(it, str[i+2:i+4])
b.WriteRune(escaped)
skip = 3
case 'u':
escaped := p.asciiEscapeToUnicode(it, str[i+2:i+6])
b.WriteRune(escaped)


@@ -13,7 +13,7 @@ github.com/AdaLogics/go-fuzz-headers
# github.com/AdamKorcz/go-118-fuzz-build v0.0.0-20230306123547-8075edf89bb0
## explicit; go 1.18
github.com/AdamKorcz/go-118-fuzz-build/testing
# github.com/BurntSushi/toml v1.5.0
# github.com/BurntSushi/toml v1.6.0
## explicit; go 1.18
github.com/BurntSushi/toml
github.com/BurntSushi/toml/internal


@@ -1372,10 +1372,7 @@ func (clh *cloudHypervisor) terminate(ctx context.Context, waitOnly bool) (err e
defer span.End()
pid := clh.state.PID
pidRunning := true
if pid == 0 {
pidRunning = false
}
pidRunning := pid != 0
defer func() {
clh.Logger().Debug("Cleanup VM")
@@ -1761,10 +1758,10 @@ func (clh *cloudHypervisor) addNet(e Endpoint) error {
return errors.New("net Pair to be added is nil, needed to get TAP file descriptors")
}
if len(netPair.TapInterface.VMFds) == 0 {
if len(netPair.VMFds) == 0 {
return errors.New("The file descriptors for the network pair are not present")
}
clh.netDevicesFiles[mac] = netPair.TapInterface.VMFds
clh.netDevicesFiles[mac] = netPair.VMFds
netRateLimiterConfig := clh.getNetRateLimiterConfig()
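This hunk applies two tidy-ups: deriving a boolean straight from its condition (`pidRunning := pid != 0` instead of declare-then-reset in an if-block), and dropping the redundant `TapInterface` selector, which works because the pair type embeds `TapInterface` and Go promotes its fields. A sketch with trimmed stand-in types (the real virtcontainers structs carry more fields and store `*os.File` descriptors):

```go
package main

import "fmt"

// TapInterface and NetworkInterfacePair are stand-ins for the
// virtcontainers types; ints replace *os.File to keep the sketch small.
type TapInterface struct {
	Name  string
	VMFds []int
}

type NetworkInterfacePair struct {
	TapInterface // embedded, so its fields are promoted
}

func main() {
	p := NetworkInterfacePair{TapInterface{Name: "tap0", VMFds: []int{3, 4}}}
	// p.VMFds and p.TapInterface.VMFds name the same field, so the
	// diff's shorter selector is purely cosmetic.
	fmt.Println(len(p.VMFds) == len(p.TapInterface.VMFds))
}
```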


@@ -148,7 +148,7 @@ func TestCloudHypervisorAddNetCheckNetConfigListValues(t *testing.T) {
e := &VethEndpoint{}
e.NetPair.TAPIface.HardAddr = macTest
e.NetPair.TapInterface.VMFds = vmFds
e.NetPair.VMFds = vmFds
err = clh.addNet(e)
assert.Nil(err)
@@ -183,7 +183,7 @@ func TestCloudHypervisorAddNetCheckEnpointTypes(t *testing.T) {
validVeth := &VethEndpoint{}
validVeth.NetPair.TAPIface.HardAddr = macTest
validVeth.NetPair.TapInterface.VMFds = vmFds
validVeth.NetPair.VMFds = vmFds
type args struct {
e Endpoint
@@ -224,7 +224,7 @@ func TestCloudHypervisorNetRateLimiter(t *testing.T) {
vmFds = append(vmFds, file)
validVeth := &VethEndpoint{}
validVeth.NetPair.TapInterface.VMFds = vmFds
validVeth.NetPair.VMFds = vmFds
type args struct {
bwMaxRate int64


@@ -20,7 +20,6 @@ import (
"github.com/kata-containers/kata-containers/src/runtime/pkg/device/config"
deviceUtils "github.com/kata-containers/kata-containers/src/runtime/pkg/device/drivers"
"github.com/kata-containers/kata-containers/src/runtime/pkg/device/manager"
deviceManager "github.com/kata-containers/kata-containers/src/runtime/pkg/device/manager"
volume "github.com/kata-containers/kata-containers/src/runtime/pkg/direct-volume"
"github.com/kata-containers/kata-containers/src/runtime/pkg/katautils/katatrace"
@@ -635,7 +634,7 @@ func (c *Container) createBlockDevices(ctx context.Context) error {
if mntInfo != nil {
// Write out sandbox info file on the mount source to allow CSI to communicate with the runtime
if err := volume.RecordSandboxId(c.sandboxID, c.mounts[i].Source); err != nil {
if err := volume.RecordSandboxID(c.sandboxID, c.mounts[i].Source); err != nil {
c.Logger().WithError(err).Error("error writing sandbox info")
}
@@ -1505,8 +1504,8 @@ func (c *Container) update(ctx context.Context, resources specs.LinuxResources)
return err
}
if state := c.state.State; !(state == types.StateRunning || state == types.StateReady) {
return fmt.Errorf("Container(%s) not running or ready, impossible to update", state)
if state := c.state.State; state != types.StateRunning && state != types.StateReady {
return fmt.Errorf("container(%s) not running or ready, impossible to update", state)
}
if c.config.Resources.CPU == nil {
@@ -1683,7 +1682,7 @@ func (c *Container) plugDevice(ctx context.Context, devicePath string) error {
// isDriveUsed checks if a drive has been used for container rootfs
func (c *Container) isDriveUsed() bool {
return !(c.state.Fstype == "")
return c.state.Fstype != ""
}
func (c *Container) removeDrive(ctx context.Context) (err error) {
@@ -1692,7 +1691,7 @@ func (c *Container) removeDrive(ctx context.Context) (err error) {
devID := c.state.BlockDeviceID
err := c.sandbox.devManager.DetachDevice(ctx, devID, c.sandbox)
if err != nil && err != manager.ErrDeviceNotAttached {
if err != nil && err != deviceManager.ErrDeviceNotAttached {
return err
}
@@ -1703,7 +1702,7 @@ func (c *Container) removeDrive(ctx context.Context) (err error) {
}).WithError(err).Error("remove device failed")
// ignore the device not exist error
if err != manager.ErrDeviceNotExist {
if err != deviceManager.ErrDeviceNotExist {
return err
}
}
@@ -1731,7 +1730,7 @@ func (c *Container) attachDevices(ctx context.Context) error {
func (c *Container) detachDevices(ctx context.Context) error {
for _, dev := range c.devices {
err := c.sandbox.devManager.DetachDevice(ctx, dev.ID, c.sandbox)
if err != nil && err != manager.ErrDeviceNotAttached {
if err != nil && err != deviceManager.ErrDeviceNotAttached {
return err
}
@@ -1742,7 +1741,7 @@ func (c *Container) detachDevices(ctx context.Context) error {
}).WithError(err).Error("remove device failed")
// ignore the device not exist error
if err != manager.ErrDeviceNotExist {
if err != deviceManager.ErrDeviceNotExist {
return err
}
}
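The hunks above replace negated parenthesised comparisons with direct ones. A minimal sketch of the two patterns, with function and state names simplified from the Container methods:

```go
package main

import "fmt"

// gosimple S1002/S1003-style cleanups, as in the diff above: compare
// directly instead of negating a parenthesised expression, and apply
// De Morgan's law to a negated OR.
func isDriveUsed(fstype string) bool {
	// was: return !(fstype == "")
	return fstype != ""
}

func updatable(state string) bool {
	// was: if !(state == "running" || state == "ready") { return error }
	// after De Morgan, the failure check is: state != "running" && state != "ready"
	return state == "running" || state == "ready"
}

func main() {
	fmt.Println(isDriveUsed("ext4")) // true
	fmt.Println(isDriveUsed(""))     // false
	fmt.Println(updatable("paused")) // false
}
```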

View File

@@ -119,8 +119,8 @@ func TestSaveLoadIfPair(t *testing.T) {
// Since VMFds and VhostFds aren't saved, netPair and loadedIfPair are not equal.
assert.False(t, reflect.DeepEqual(netPair, loadedIfPair))
netPair.TapInterface.VMFds = nil
netPair.TapInterface.VhostFds = nil
netPair.VMFds = nil
netPair.VhostFds = nil
// They are equal now.
assert.True(t, reflect.DeepEqual(netPair, loadedIfPair))
}
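The test change works because `TapInterface` is an embedded field, so its fields are promoted to the outer struct. A self-contained sketch (the real `VMFds` holds `*os.File`; ints keep this runnable):

```go
package main

import "fmt"

// Simplified sketch of the netPair types: TapInterface is embedded in
// NetworkInterfacePair, so pair.VMFds and pair.TapInterface.VMFds name
// the same field.
type TapInterface struct {
	Name  string
	VMFds []int
}

type NetworkInterfacePair struct {
	TapInterface // embedded field: no field name, members are promoted
}

func main() {
	pair := NetworkInterfacePair{TapInterface{Name: "tap0", VMFds: []int{3, 4}}}
	fmt.Println(len(pair.VMFds)) // 2
	pair.VMFds = nil             // identical to pair.TapInterface.VMFds = nil
	fmt.Println(pair.TapInterface.VMFds == nil) // true
}
```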

View File

@@ -937,7 +937,7 @@ func (fc *firecracker) fcAddNetDevice(ctx context.Context, endpoint Endpoint) {
// VMFds are not used by Firecracker, as it opens the tuntap
// device by its name. Let's just close those.
for _, f := range endpoint.NetworkPair().TapInterface.VMFds {
for _, f := range endpoint.NetworkPair().VMFds {
f.Close()
}
@@ -987,7 +987,7 @@ func (fc *firecracker) fcAddNetDevice(ctx context.Context, endpoint Endpoint) {
ifaceCfg := &models.NetworkInterface{
GuestMac: endpoint.HardwareAddr(),
IfaceID: &ifaceID,
HostDevName: &endpoint.NetworkPair().TapInterface.TAPIface.Name,
HostDevName: &endpoint.NetworkPair().TAPIface.Name,
RxRateLimiter: &rxRateLimiter,
TxRateLimiter: &txRateLimiter,
}

View File

@@ -325,7 +325,8 @@ func (f *FilesystemShare) ShareFile(ctx context.Context, c *Container, m *Mount)
return err
}
if !(info.Mode().IsRegular() || info.Mode().IsDir() || (info.Mode()&os.ModeSymlink) == os.ModeSymlink) {
mode := info.Mode()
if !mode.IsRegular() && !mode.IsDir() && mode&os.ModeSymlink != os.ModeSymlink {
f.Logger().WithField("ignored-file", srcPath).Debug("Ignoring file as FS sharing not supported")
if srcPath == srcRoot {
// Ignore the mount if this is not a regular file (excludes socket, device, ...) as it cannot be handled by
@@ -693,17 +694,17 @@ func (f *FilesystemShare) ShareRootFilesystem(ctx context.Context, c *Container)
f.Logger().Error("malformed block drive")
return nil, fmt.Errorf("malformed block drive")
}
switch {
case f.sandbox.config.HypervisorConfig.BlockDeviceDriver == config.VirtioMmio:
switch f.sandbox.config.HypervisorConfig.BlockDeviceDriver {
case config.VirtioMmio:
rootfsStorage.Driver = kataMmioBlkDevType
rootfsStorage.Source = blockDrive.VirtPath
case f.sandbox.config.HypervisorConfig.BlockDeviceDriver == config.VirtioBlockCCW:
case config.VirtioBlockCCW:
rootfsStorage.Driver = kataBlkCCWDevType
rootfsStorage.Source = blockDrive.DevNo
case f.sandbox.config.HypervisorConfig.BlockDeviceDriver == config.VirtioBlock:
case config.VirtioBlock:
rootfsStorage.Driver = kataBlkDevType
rootfsStorage.Source = blockDrive.PCIPath.String()
case f.sandbox.config.HypervisorConfig.BlockDeviceDriver == config.VirtioSCSI:
case config.VirtioSCSI:
rootfsStorage.Driver = kataSCSIDevType
rootfsStorage.Source = blockDrive.SCSIAddr
default:
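Switching on the driver expression itself evaluates it once and drops the repeated `f.sandbox.config.HypervisorConfig.BlockDeviceDriver ==` from every case. A sketch with illustrative driver and device-type strings (not the runtime's actual constants):

```go
package main

import "fmt"

// Tagged switch, as in the ShareRootFilesystem refactor above: one tag
// expression, plain constants in the cases.
func storageDriverFor(blockDeviceDriver string) (string, error) {
	switch blockDeviceDriver {
	case "virtio-mmio":
		return "mmioblk", nil
	case "virtio-blk-ccw":
		return "blk-ccw", nil
	case "virtio-blk":
		return "blk", nil
	case "virtio-scsi":
		return "scsi", nil
	default:
		return "", fmt.Errorf("unsupported block device driver %q", blockDeviceDriver)
	}
}

func main() {
	d, _ := storageDriverFor("virtio-scsi")
	fmt.Println(d) // scsi
}
```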

View File

@@ -46,7 +46,6 @@ import (
"github.com/sirupsen/logrus"
"golang.org/x/sys/unix"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
grpcStatus "google.golang.org/grpc/status"
"google.golang.org/protobuf/encoding/protojson"
"google.golang.org/protobuf/proto"
@@ -361,15 +360,11 @@ func KataAgentKernelParams(config KataAgentConfig) []Param {
}
func (k *kataAgent) handleTraceSettings(config KataAgentConfig) bool {
disableVMShutdown := false
if config.Trace {
// Agent tracing requires that the agent be able to shutdown
// cleanly. This is the only scenario where the agent is
// responsible for stopping the VM: normally this is handled
// by the runtime.
disableVMShutdown = true
}
// Agent tracing requires that the agent be able to shutdown
// cleanly. This is the only scenario where the agent is
// responsible for stopping the VM: normally this is handled
// by the runtime.
disableVMShutdown := config.Trace
return disableVMShutdown
}
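The `handleTraceSettings` change assigns the condition directly instead of using an `if cond { x = true }` block, keeping the explanatory comment. A runnable sketch with the config struct reduced to the one field needed here:

```go
package main

import "fmt"

// KataAgentConfig reduced to the single field this sketch uses.
type KataAgentConfig struct{ Trace bool }

func handleTraceSettings(config KataAgentConfig) bool {
	// Agent tracing requires that the agent be able to shut down cleanly,
	// the only scenario where the agent (not the runtime) stops the VM.
	disableVMShutdown := config.Trace
	return disableVMShutdown
}

func main() {
	fmt.Println(handleTraceSettings(KataAgentConfig{Trace: true}))  // true
	fmt.Println(handleTraceSettings(KataAgentConfig{Trace: false})) // false
}
```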
@@ -586,7 +581,7 @@ func (k *kataAgent) exec(ctx context.Context, sandbox *Sandbox, c Container, cmd
if _, err := k.sendReq(ctx, req); err != nil {
if err.Error() == context.DeadlineExceeded.Error() {
return nil, status.Errorf(codes.DeadlineExceeded, "ExecProcessRequest timed out")
return nil, grpcStatus.Errorf(codes.DeadlineExceeded, "ExecProcessRequest timed out")
}
return nil, err
}
@@ -630,7 +625,7 @@ func (k *kataAgent) updateInterface(ctx context.Context, ifc *pbTypes.Interface)
"resulting-interface": fmt.Sprintf("%+v", resultingInterface),
}).WithError(err).Error("update interface request failed")
if err.Error() == context.DeadlineExceeded.Error() {
return nil, status.Errorf(codes.DeadlineExceeded, "UpdateInterfaceRequest timed out")
return nil, grpcStatus.Errorf(codes.DeadlineExceeded, "UpdateInterfaceRequest timed out")
}
}
if resultInterface, ok := resultingInterface.(*pbTypes.Interface); ok {
@@ -662,7 +657,7 @@ func (k *kataAgent) updateRoutes(ctx context.Context, routes []*pbTypes.Route) (
"resulting-routes": fmt.Sprintf("%+v", resultingRoutes),
}).WithError(err).Error("update routes request failed")
if err.Error() == context.DeadlineExceeded.Error() {
return nil, status.Errorf(codes.DeadlineExceeded, "UpdateRoutesRequest timed out")
return nil, grpcStatus.Errorf(codes.DeadlineExceeded, "UpdateRoutesRequest timed out")
}
}
resultRoutes, ok := resultingRoutes.(*grpc.Routes)
@@ -683,7 +678,7 @@ func (k *kataAgent) updateEphemeralMounts(ctx context.Context, storages []*grpc.
if _, err := k.sendReq(ctx, storagesReq); err != nil {
k.Logger().WithError(err).Error("update mounts request failed")
if err.Error() == context.DeadlineExceeded.Error() {
return status.Errorf(codes.DeadlineExceeded, "UpdateEphemeralMountsRequest timed out")
return grpcStatus.Errorf(codes.DeadlineExceeded, "UpdateEphemeralMountsRequest timed out")
}
return err
}
@@ -708,7 +703,7 @@ func (k *kataAgent) addARPNeighbors(ctx context.Context, neighs []*pbTypes.ARPNe
return nil
}
if err.Error() == context.DeadlineExceeded.Error() {
return status.Errorf(codes.DeadlineExceeded, "AddARPNeighborsRequest timed out")
return grpcStatus.Errorf(codes.DeadlineExceeded, "AddARPNeighborsRequest timed out")
}
k.Logger().WithFields(logrus.Fields{
"arpneighbors-requested": fmt.Sprintf("%+v", neighs),
@@ -724,7 +719,7 @@ func (k *kataAgent) listInterfaces(ctx context.Context) ([]*pbTypes.Interface, e
resultingInterfaces, err := k.sendReq(ctx, req)
if err != nil {
if err.Error() == context.DeadlineExceeded.Error() {
return nil, status.Errorf(codes.DeadlineExceeded, "ListInterfacesRequest timed out")
return nil, grpcStatus.Errorf(codes.DeadlineExceeded, "ListInterfacesRequest timed out")
}
return nil, err
}
@@ -740,7 +735,7 @@ func (k *kataAgent) listRoutes(ctx context.Context) ([]*pbTypes.Route, error) {
resultingRoutes, err := k.sendReq(ctx, req)
if err != nil {
if err.Error() == context.DeadlineExceeded.Error() {
return nil, status.Errorf(codes.DeadlineExceeded, "ListRoutesRequest timed out")
return nil, grpcStatus.Errorf(codes.DeadlineExceeded, "ListRoutesRequest timed out")
}
return nil, err
}
@@ -859,7 +854,7 @@ func (k *kataAgent) startSandbox(ctx context.Context, sandbox *Sandbox) error {
_, err = k.sendReq(ctx, req)
if err != nil {
if err.Error() == context.DeadlineExceeded.Error() {
return status.Errorf(codes.DeadlineExceeded, "CreateSandboxRequest timed out")
return grpcStatus.Errorf(codes.DeadlineExceeded, "CreateSandboxRequest timed out")
}
return err
}
@@ -966,7 +961,7 @@ func (k *kataAgent) stopSandbox(ctx context.Context, sandbox *Sandbox) error {
if _, err := k.sendReq(ctx, req); err != nil {
if err.Error() == context.DeadlineExceeded.Error() {
return status.Errorf(codes.DeadlineExceeded, "DestroySandboxRequest timed out")
return grpcStatus.Errorf(codes.DeadlineExceeded, "DestroySandboxRequest timed out")
}
return err
}
@@ -1499,7 +1494,7 @@ func (k *kataAgent) createContainer(ctx context.Context, sandbox *Sandbox, c *Co
if _, err = k.sendReq(ctx, req); err != nil {
if err.Error() == context.DeadlineExceeded.Error() {
return nil, status.Errorf(codes.DeadlineExceeded, "CreateContainerRequest timed out")
return nil, grpcStatus.Errorf(codes.DeadlineExceeded, "CreateContainerRequest timed out")
}
return nil, err
}
@@ -1590,21 +1585,21 @@ func (k *kataAgent) handleEphemeralStorage(mounts []specs.Mount) ([]*grpc.Storag
var epheStorages []*grpc.Storage
for idx, mnt := range mounts {
if mnt.Type == KataEphemeralDevType {
origin_src := mounts[idx].Source
originSrc := mounts[idx].Source
stat := syscall.Stat_t{}
err := syscall.Stat(origin_src, &stat)
err := syscall.Stat(originSrc, &stat)
if err != nil {
k.Logger().WithError(err).Errorf("failed to stat %s", origin_src)
k.Logger().WithError(err).Errorf("failed to stat %s", originSrc)
return nil, err
}
var dir_options []string
var dirOptions []string
// if the volume's gid isn't the root group (the default group), a
// specific fsGroup is set on this local volume, so it should be passed
// to the guest.
if stat.Gid != 0 {
dir_options = append(dir_options, fmt.Sprintf("%s=%d", fsGid, stat.Gid))
dirOptions = append(dirOptions, fmt.Sprintf("%s=%d", fsGid, stat.Gid))
}
// Set the mount source path to a path that resides inside the VM
@@ -1619,7 +1614,7 @@ func (k *kataAgent) handleEphemeralStorage(mounts []specs.Mount) ([]*grpc.Storag
Source: "tmpfs",
Fstype: "tmpfs",
MountPoint: mounts[idx].Source,
Options: dir_options,
Options: dirOptions,
}
epheStorages = append(epheStorages, epheStorage)
}
@@ -1633,21 +1628,21 @@ func (k *kataAgent) handleLocalStorage(mounts []specs.Mount, sandboxID string, r
var localStorages []*grpc.Storage
for idx, mnt := range mounts {
if mnt.Type == KataLocalDevType {
origin_src := mounts[idx].Source
originSrc := mounts[idx].Source
stat := syscall.Stat_t{}
err := syscall.Stat(origin_src, &stat)
err := syscall.Stat(originSrc, &stat)
if err != nil {
k.Logger().WithError(err).Errorf("failed to stat %s", origin_src)
k.Logger().WithError(err).Errorf("failed to stat %s", originSrc)
return nil, err
}
dir_options := localDirOptions
dirOptions := localDirOptions
// if the volume's gid isn't the root group (the default group), a
// specific fsGroup is set on this local volume, so it should be passed
// to the guest.
if stat.Gid != 0 {
dir_options = append(dir_options, fmt.Sprintf("%s=%d", fsGid, stat.Gid))
dirOptions = append(dirOptions, fmt.Sprintf("%s=%d", fsGid, stat.Gid))
}
// Set the mount source path to the desired directory in the VM.
@@ -1664,7 +1659,7 @@ func (k *kataAgent) handleLocalStorage(mounts []specs.Mount, sandboxID string, r
Source: KataLocalDevType,
Fstype: KataLocalDevType,
MountPoint: mounts[idx].Source,
Options: dir_options,
Options: dirOptions,
}
localStorages = append(localStorages, localStorage)
}
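The renames above (`origin_src` to `originSrc`, `RecordSandboxId` to `RecordSandboxID`) follow Go's naming conventions: mixedCaps rather than underscores, and initialisms in a consistent case. A small sketch mirroring both renames (the function body is illustrative, not the runtime's):

```go
package main

import "fmt"

// Effective Go naming: mixedCaps, no snake_case, initialisms kept
// uppercase (ID, not Id).
func recordSandboxID(sandboxID, source string) string {
	// was: origin_src
	originSrc := source
	return fmt.Sprintf("sandbox %s mounted from %s", sandboxID, originSrc)
}

func main() {
	fmt.Println(recordSandboxID("sb-1", "/var/lib/kata"))
}
```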
@@ -1721,21 +1716,21 @@ func getContainerTypeforCRI(c *Container) (string, string) {
}
func handleImageGuestPullBlockVolume(c *Container, virtualVolumeInfo *types.KataVirtualVolume, vol *grpc.Storage) (*grpc.Storage, error) {
container_annotations := c.GetAnnotations()
containerAnnotations := c.GetAnnotations()
containerType, criContainerType := getContainerTypeforCRI(c)
var image_ref string
var imageRef string
if containerType == string(PodSandbox) {
image_ref = "pause"
imageRef = "pause"
} else {
const kubernetesCRIImageName = "io.kubernetes.cri.image-name"
const kubernetesCRIOImageName = "io.kubernetes.cri-o.ImageName"
switch criContainerType {
case ctrAnnotations.ContainerType:
image_ref = container_annotations[kubernetesCRIImageName]
imageRef = containerAnnotations[kubernetesCRIImageName]
case crioAnnotations.ContainerType:
image_ref = container_annotations[kubernetesCRIOImageName]
imageRef = containerAnnotations[kubernetesCRIOImageName]
default:
// There are cases, like when using nerdctl, where the criContainerType
// will never be set, leading to this code path.
@@ -1746,17 +1741,17 @@ func handleImageGuestPullBlockVolume(c *Container, virtualVolumeInfo *types.Kata
//
// With this in mind, let's "fallback" to the default k8s cri image-name
// annotation, as documented on our image-pull documentation.
image_ref = container_annotations[kubernetesCRIImageName]
imageRef = containerAnnotations[kubernetesCRIImageName]
}
if image_ref == "" {
if imageRef == "" {
return nil, fmt.Errorf("Failed to get image name from annotations")
}
}
virtualVolumeInfo.Source = image_ref
virtualVolumeInfo.Source = imageRef
// merge virtualVolumeInfo.ImagePull.Metadata and containerAnnotations
for k, v := range container_annotations {
for k, v := range containerAnnotations {
virtualVolumeInfo.ImagePull.Metadata[k] = v
}
@@ -1975,7 +1970,7 @@ func (k *kataAgent) startContainer(ctx context.Context, sandbox *Sandbox, c *Con
_, err := k.sendReq(ctx, req)
if err != nil && err.Error() == context.DeadlineExceeded.Error() {
return status.Errorf(codes.DeadlineExceeded, "StartContainerRequest timed out")
return grpcStatus.Errorf(codes.DeadlineExceeded, "StartContainerRequest timed out")
}
return err
}
@@ -1986,7 +1981,7 @@ func (k *kataAgent) stopContainer(ctx context.Context, sandbox *Sandbox, c Conta
_, err := k.sendReq(ctx, &grpc.RemoveContainerRequest{ContainerId: c.id})
if err != nil && err.Error() == context.DeadlineExceeded.Error() {
return status.Errorf(codes.DeadlineExceeded, "RemoveContainerRequest timed out")
return grpcStatus.Errorf(codes.DeadlineExceeded, "RemoveContainerRequest timed out")
}
return err
}
@@ -2005,7 +2000,7 @@ func (k *kataAgent) signalProcess(ctx context.Context, c *Container, processID s
_, err := k.sendReq(ctx, req)
if err != nil && err.Error() == context.DeadlineExceeded.Error() {
return status.Errorf(codes.DeadlineExceeded, "SignalProcessRequest timed out")
return grpcStatus.Errorf(codes.DeadlineExceeded, "SignalProcessRequest timed out")
}
return err
}
@@ -2020,7 +2015,7 @@ func (k *kataAgent) winsizeProcess(ctx context.Context, c *Container, processID
_, err := k.sendReq(ctx, req)
if err != nil && err.Error() == context.DeadlineExceeded.Error() {
return status.Errorf(codes.DeadlineExceeded, "TtyWinResizeRequest timed out")
return grpcStatus.Errorf(codes.DeadlineExceeded, "TtyWinResizeRequest timed out")
}
return err
}
@@ -2038,7 +2033,7 @@ func (k *kataAgent) updateContainer(ctx context.Context, sandbox *Sandbox, c Con
_, err = k.sendReq(ctx, req)
if err != nil && err.Error() == context.DeadlineExceeded.Error() {
return status.Errorf(codes.DeadlineExceeded, "UpdateContainerRequest timed out")
return grpcStatus.Errorf(codes.DeadlineExceeded, "UpdateContainerRequest timed out")
}
return err
}
@@ -2050,7 +2045,7 @@ func (k *kataAgent) pauseContainer(ctx context.Context, sandbox *Sandbox, c Cont
_, err := k.sendReq(ctx, req)
if err != nil && err.Error() == context.DeadlineExceeded.Error() {
return status.Errorf(codes.DeadlineExceeded, "PauseContainerRequest timed out")
return grpcStatus.Errorf(codes.DeadlineExceeded, "PauseContainerRequest timed out")
}
return err
}
@@ -2062,7 +2057,7 @@ func (k *kataAgent) resumeContainer(ctx context.Context, sandbox *Sandbox, c Con
_, err := k.sendReq(ctx, req)
if err != nil && err.Error() == context.DeadlineExceeded.Error() {
return status.Errorf(codes.DeadlineExceeded, "ResumeContainerRequest timed out")
return grpcStatus.Errorf(codes.DeadlineExceeded, "ResumeContainerRequest timed out")
}
return err
}
@@ -2089,7 +2084,7 @@ func (k *kataAgent) memHotplugByProbe(ctx context.Context, addr uint64, sizeMB u
_, err := k.sendReq(ctx, req)
if err != nil && err.Error() == context.DeadlineExceeded.Error() {
return status.Errorf(codes.DeadlineExceeded, "MemHotplugByProbeRequest timed out")
return grpcStatus.Errorf(codes.DeadlineExceeded, "MemHotplugByProbeRequest timed out")
}
return err
}
@@ -2103,7 +2098,7 @@ func (k *kataAgent) onlineCPUMem(ctx context.Context, cpus uint32, cpuOnly bool)
_, err := k.sendReq(ctx, req)
if err != nil && err.Error() == context.DeadlineExceeded.Error() {
return status.Errorf(codes.DeadlineExceeded, "OnlineCPUMemRequest timed out")
return grpcStatus.Errorf(codes.DeadlineExceeded, "OnlineCPUMemRequest timed out")
}
return err
}
@@ -2117,7 +2112,7 @@ func (k *kataAgent) statsContainer(ctx context.Context, sandbox *Sandbox, c Cont
if err != nil {
if err.Error() == context.DeadlineExceeded.Error() {
return nil, status.Errorf(codes.DeadlineExceeded, "StatsContainerRequest timed out")
return nil, grpcStatus.Errorf(codes.DeadlineExceeded, "StatsContainerRequest timed out")
}
return nil, err
}
@@ -2201,7 +2196,7 @@ func (k *kataAgent) check(ctx context.Context) error {
_, err := k.sendReq(ctx, &grpc.CheckRequest{})
if err != nil {
if err.Error() == context.DeadlineExceeded.Error() {
return status.Errorf(codes.DeadlineExceeded, "CheckRequest timed out")
return grpcStatus.Errorf(codes.DeadlineExceeded, "CheckRequest timed out")
}
err = fmt.Errorf("Failed to Check if grpc server is working: %s", err)
}
@@ -2218,7 +2213,7 @@ func (k *kataAgent) waitProcess(ctx context.Context, c *Container, processID str
})
if err != nil {
if err.Error() == context.DeadlineExceeded.Error() {
return 0, status.Errorf(codes.DeadlineExceeded, "WaitProcessRequest timed out")
return 0, grpcStatus.Errorf(codes.DeadlineExceeded, "WaitProcessRequest timed out")
}
return 0, err
}
@@ -2235,7 +2230,7 @@ func (k *kataAgent) writeProcessStdin(ctx context.Context, c *Container, Process
if err != nil {
if err.Error() == context.DeadlineExceeded.Error() {
return 0, status.Errorf(codes.DeadlineExceeded, "WriteStreamRequest timed out")
return 0, grpcStatus.Errorf(codes.DeadlineExceeded, "WriteStreamRequest timed out")
}
return 0, err
}
@@ -2249,7 +2244,7 @@ func (k *kataAgent) closeProcessStdin(ctx context.Context, c *Container, Process
ExecId: ProcessID,
})
if err != nil && err.Error() == context.DeadlineExceeded.Error() {
return status.Errorf(codes.DeadlineExceeded, "CloseStdinRequest timed out")
return grpcStatus.Errorf(codes.DeadlineExceeded, "CloseStdinRequest timed out")
}
return err
}
@@ -2259,7 +2254,7 @@ func (k *kataAgent) reseedRNG(ctx context.Context, data []byte) error {
Data: data,
})
if err != nil && err.Error() == context.DeadlineExceeded.Error() {
return status.Errorf(codes.DeadlineExceeded, "ReseedRandomDevRequest timed out")
return grpcStatus.Errorf(codes.DeadlineExceeded, "ReseedRandomDevRequest timed out")
}
return err
}
@@ -2267,7 +2262,7 @@ func (k *kataAgent) reseedRNG(ctx context.Context, data []byte) error {
func (k *kataAgent) removeStaleVirtiofsShareMounts(ctx context.Context) error {
_, err := k.sendReq(ctx, &grpc.RemoveStaleVirtiofsShareMountsRequest{})
if err != nil && err.Error() == context.DeadlineExceeded.Error() {
return status.Errorf(codes.DeadlineExceeded, "removeStaleVirtiofsShareMounts timed out")
return grpcStatus.Errorf(codes.DeadlineExceeded, "removeStaleVirtiofsShareMounts timed out")
}
return err
}
@@ -2502,7 +2497,7 @@ func (k *kataAgent) getGuestDetails(ctx context.Context, req *grpc.GuestDetailsR
resp, err := k.sendReq(ctx, req)
if err != nil {
if err.Error() == context.DeadlineExceeded.Error() {
return nil, status.Errorf(codes.DeadlineExceeded, "GuestDetailsRequest request timed out")
return nil, grpcStatus.Errorf(codes.DeadlineExceeded, "GuestDetailsRequest request timed out")
}
return nil, err
}
@@ -2516,7 +2511,7 @@ func (k *kataAgent) setGuestDateTime(ctx context.Context, tv time.Time) error {
Usec: int64(tv.Nanosecond() / 1e3),
})
if err != nil && err.Error() == context.DeadlineExceeded.Error() {
return status.Errorf(codes.DeadlineExceeded, "SetGuestDateTimeRequest request timed out")
return grpcStatus.Errorf(codes.DeadlineExceeded, "SetGuestDateTimeRequest request timed out")
}
return err
}
@@ -2571,7 +2566,7 @@ func (k *kataAgent) copyFile(ctx context.Context, src, dst string) error {
if cpReq.FileSize == 0 {
_, err := k.sendReq(ctx, cpReq)
if err != nil && err.Error() == context.DeadlineExceeded.Error() {
return status.Errorf(codes.DeadlineExceeded, "CopyFileRequest timed out")
return grpcStatus.Errorf(codes.DeadlineExceeded, "CopyFileRequest timed out")
}
return err
}
@@ -2590,7 +2585,7 @@ func (k *kataAgent) copyFile(ctx context.Context, src, dst string) error {
if _, err = k.sendReq(ctx, cpReq); err != nil {
if err.Error() == context.DeadlineExceeded.Error() {
return status.Errorf(codes.DeadlineExceeded, "CopyFileRequest timed out")
return grpcStatus.Errorf(codes.DeadlineExceeded, "CopyFileRequest timed out")
}
return fmt.Errorf("Could not send CopyFile request: %v", err)
}
@@ -2609,7 +2604,7 @@ func (k *kataAgent) addSwap(ctx context.Context, PCIPath types.PciPath) error {
_, err := k.sendReq(ctx, &grpc.AddSwapRequest{PCIPath: PCIPath.ToArray()})
if err != nil && err.Error() == context.DeadlineExceeded.Error() {
return status.Errorf(codes.DeadlineExceeded, "AddSwapRequest timed out")
return grpcStatus.Errorf(codes.DeadlineExceeded, "AddSwapRequest timed out")
}
return err
}
@@ -2638,7 +2633,7 @@ func (k *kataAgent) getOOMEvent(ctx context.Context) (string, error) {
result, err := k.sendReq(ctx, req)
if err != nil {
if err.Error() == context.DeadlineExceeded.Error() {
return "", status.Errorf(codes.DeadlineExceeded, "GetOOMEventRequest timed out")
return "", grpcStatus.Errorf(codes.DeadlineExceeded, "GetOOMEventRequest timed out")
}
return "", err
}
@@ -2652,7 +2647,7 @@ func (k *kataAgent) getAgentMetrics(ctx context.Context, req *grpc.GetMetricsReq
resp, err := k.sendReq(ctx, req)
if err != nil {
if err.Error() == context.DeadlineExceeded.Error() {
return nil, status.Errorf(codes.DeadlineExceeded, "GetMetricsRequest timed out")
return nil, grpcStatus.Errorf(codes.DeadlineExceeded, "GetMetricsRequest timed out")
}
return nil, err
}
@@ -2664,7 +2659,7 @@ func (k *kataAgent) getIPTables(ctx context.Context, isIPv6 bool) ([]byte, error
resp, err := k.sendReq(ctx, &grpc.GetIPTablesRequest{IsIpv6: isIPv6})
if err != nil {
if err.Error() == context.DeadlineExceeded.Error() {
return nil, status.Errorf(codes.DeadlineExceeded, "GetIPTablesRequest timed out")
return nil, grpcStatus.Errorf(codes.DeadlineExceeded, "GetIPTablesRequest timed out")
}
return nil, err
}
@@ -2679,7 +2674,7 @@ func (k *kataAgent) setIPTables(ctx context.Context, isIPv6 bool, data []byte) e
if err != nil {
k.Logger().WithError(err).Errorf("setIPTables request to agent failed")
if err.Error() == context.DeadlineExceeded.Error() {
return status.Errorf(codes.DeadlineExceeded, "SetIPTablesRequest timed out")
return grpcStatus.Errorf(codes.DeadlineExceeded, "SetIPTablesRequest timed out")
}
}
@@ -2690,7 +2685,7 @@ func (k *kataAgent) getGuestVolumeStats(ctx context.Context, volumeGuestPath str
result, err := k.sendReq(ctx, &grpc.VolumeStatsRequest{VolumeGuestPath: volumeGuestPath})
if err != nil {
if err.Error() == context.DeadlineExceeded.Error() {
return nil, status.Errorf(codes.DeadlineExceeded, "VolumeStatsRequest timed out")
return nil, grpcStatus.Errorf(codes.DeadlineExceeded, "VolumeStatsRequest timed out")
}
return nil, err
}
@@ -2706,7 +2701,7 @@ func (k *kataAgent) getGuestVolumeStats(ctx context.Context, volumeGuestPath str
func (k *kataAgent) resizeGuestVolume(ctx context.Context, volumeGuestPath string, size uint64) error {
_, err := k.sendReq(ctx, &grpc.ResizeVolumeRequest{VolumeGuestPath: volumeGuestPath, Size: size})
if err != nil && err.Error() == context.DeadlineExceeded.Error() {
return status.Errorf(codes.DeadlineExceeded, "ResizeVolumeRequest timed out")
return grpcStatus.Errorf(codes.DeadlineExceeded, "ResizeVolumeRequest timed out")
}
return err
}
@@ -2714,7 +2709,7 @@ func (k *kataAgent) resizeGuestVolume(ctx context.Context, volumeGuestPath strin
func (k *kataAgent) setPolicy(ctx context.Context, policy string) error {
_, err := k.sendReq(ctx, &grpc.SetPolicyRequest{Policy: policy})
if err != nil && err.Error() == context.DeadlineExceeded.Error() {
return status.Errorf(codes.DeadlineExceeded, "SetPolicyRequest timed out")
return grpcStatus.Errorf(codes.DeadlineExceeded, "SetPolicyRequest timed out")
}
return err
}
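Throughout this file the `status` package is imported as `grpcStatus`, because a local variable named `status` elsewhere in the package would otherwise shadow it. A stdlib-only sketch of the same shadowing problem, using `sort` in place of `status`:

```go
package main

import (
	"fmt"
	stdSort "sort" // aliased: the local "sort" below would shadow an unaliased import
)

// Mirrors the kata_agent.go change: the package gets an alias so an
// identically named local identifier cannot shadow it in this scope.
func median(xs []int) int {
	sort := func(v []int) { stdSort.Ints(v) } // local name collides with the package name
	sort(xs)
	return xs[len(xs)/2]
}

func main() {
	fmt.Println(median([]int{9, 1, 5})) // 5
}
```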

View File

@@ -10,6 +10,7 @@ import (
"time"
"context"
persistapi "github.com/kata-containers/kata-containers/src/runtime/virtcontainers/persist/api"
pbTypes "github.com/kata-containers/kata-containers/src/runtime/virtcontainers/pkg/agent/protocols"
"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/pkg/agent/protocols/grpc"
@@ -260,14 +261,14 @@ func (n *mockAgent) resizeGuestVolume(ctx context.Context, volumeGuestPath strin
return nil
}
func (k *mockAgent) getIPTables(ctx context.Context, isIPv6 bool) ([]byte, error) {
func (n *mockAgent) getIPTables(ctx context.Context, isIPv6 bool) ([]byte, error) {
return nil, nil
}
func (k *mockAgent) setIPTables(ctx context.Context, isIPv6 bool, data []byte) error {
func (n *mockAgent) setIPTables(ctx context.Context, isIPv6 bool, data []byte) error {
return nil
}
func (k *mockAgent) setPolicy(ctx context.Context, policy string) error {
func (n *mockAgent) setPolicy(ctx context.Context, policy string) error {
return nil
}
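The mockAgent hunks fix `ST1016: methods on the same type should have the same receiver name`, switching stray `k` receivers (left over from kataAgent) to the `n` used elsewhere on the type. An illustrative sketch with a stand-in type:

```go
package main

import "fmt"

// staticcheck ST1016: every method on a type should use one receiver name.
type counter struct{ n int }

func (c *counter) add(v int) { c.n += v }

// was: func (x *counter) value() int — inconsistent receiver name
func (c *counter) value() int { return c.n }

func main() {
	c := &counter{}
	c.add(2)
	c.add(3)
	fmt.Println(c.value()) // 5
}
```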

View File

@@ -240,7 +240,7 @@ func (n *LinuxNetwork) addSingleEndpoint(ctx context.Context, s *Sandbox, netInf
}
func (n *LinuxNetwork) removeSingleEndpoint(ctx context.Context, s *Sandbox, endpoint Endpoint, hotplug bool) error {
var idx int = len(n.eps)
idx := len(n.eps)
for i, val := range n.eps {
if val.HardwareAddr() == endpoint.HardwareAddr() {
idx = i
@@ -293,7 +293,7 @@ func (n *LinuxNetwork) endpointAlreadyAdded(netInfo *NetworkInfo) bool {
}
pair := ep.NetworkPair()
// Existing virtual endpoints
if pair != nil && (pair.TapInterface.Name == netInfo.Iface.Name || pair.TapInterface.TAPIface.Name == netInfo.Iface.Name || pair.VirtIface.Name == netInfo.Iface.Name) {
if pair != nil && (pair.Name == netInfo.Iface.Name || pair.TAPIface.Name == netInfo.Iface.Name || pair.VirtIface.Name == netInfo.Iface.Name) {
return true
}
}
@@ -1299,7 +1299,7 @@ func addRxRateLimiter(endpoint Endpoint, maxRate uint64) error {
switch ep := endpoint.(type) {
case *VethEndpoint, *IPVlanEndpoint, *TuntapEndpoint, *MacvlanEndpoint:
netPair := endpoint.NetworkPair()
linkName = netPair.TapInterface.TAPIface.Name
linkName = netPair.TAPIface.Name
case *MacvtapEndpoint, *TapEndpoint:
linkName = endpoint.Name()
default:
@@ -1467,7 +1467,7 @@ func addTxRateLimiter(endpoint Endpoint, maxRate uint64) error {
}
return addHTBQdisc(link.Attrs().Index, maxRate)
case NetXConnectMacVtapModel, NetXConnectNoneModel:
linkName = netPair.TapInterface.TAPIface.Name
linkName = netPair.TAPIface.Name
default:
return fmt.Errorf("Unsupported inter-networking model %v for adding tx rate limiter", netPair.NetInterworkingModel)
}
@@ -1502,7 +1502,7 @@ func addTxRateLimiter(endpoint Endpoint, maxRate uint64) error {
func removeHTBQdisc(linkName string) error {
link, err := netlink.LinkByName(linkName)
if err != nil {
return fmt.Errorf("Get link %s by name failed: %v", linkName, err)
return fmt.Errorf("get link %s by name failed: %v", linkName, err)
}
qdiscs, err := netlink.QdiscList(link)
@@ -1529,7 +1529,7 @@ func removeRxRateLimiter(endpoint Endpoint, networkNSPath string) error {
switch ep := endpoint.(type) {
case *VethEndpoint, *IPVlanEndpoint, *TuntapEndpoint, *MacvlanEndpoint:
netPair := endpoint.NetworkPair()
linkName = netPair.TapInterface.TAPIface.Name
linkName = netPair.TAPIface.Name
case *MacvtapEndpoint, *TapEndpoint:
linkName = endpoint.Name()
default:
@@ -1560,7 +1560,7 @@ func removeTxRateLimiter(endpoint Endpoint, networkNSPath string) error {
}
return nil
case NetXConnectMacVtapModel, NetXConnectNoneModel:
linkName = netPair.TapInterface.TAPIface.Name
linkName = netPair.TAPIface.Name
}
case *MacvtapEndpoint, *TapEndpoint:
linkName = endpoint.Name()
@@ -1571,7 +1571,7 @@ func removeTxRateLimiter(endpoint Endpoint, networkNSPath string) error {
if err := doNetNS(networkNSPath, func(_ ns.NetNS) error {
link, err := netlink.LinkByName(linkName)
if err != nil {
return fmt.Errorf("Get link %s by name failed: %v", linkName, err)
return fmt.Errorf("get link %s by name failed: %v", linkName, err)
}
if err := removeRedirectTCFilter(link); err != nil {
@@ -1591,7 +1591,7 @@ func removeTxRateLimiter(endpoint Endpoint, networkNSPath string) error {
// remove ifb interface
ifbLink, err := netlink.LinkByName("ifb0")
if err != nil {
return fmt.Errorf("Get link %s by name failed: %v", linkName, err)
return fmt.Errorf("get link %s by name failed: %v", linkName, err)
}
if err := netHandle.LinkSetDown(ifbLink); err != nil {

View File

@@ -38,14 +38,14 @@ const (
nydusdStopTimeoutSecs = 5
defaultHttpClientTimeoutSecs = 30 * time.Second
contentType = "application/json"
defaultHttpClientTimeout = 30 * time.Second
contentType = "application/json"
maxIdleConns = 10
idleConnTimeoutSecs = 10 * time.Second
dialTimoutSecs = 5 * time.Second
keepAliveSecs = 5 * time.Second
expectContinueTimeoutSecs = 1 * time.Second
maxIdleConns = 10
idleConnTimeout = 10 * time.Second
dialTimout = 5 * time.Second
keepAlive = 5 * time.Second
expectContinueTimeout = 1 * time.Second
// Registry Acceleration File System (RAFS), which nydus provides to accelerate image loading
nydusRafs = "rafs"
@@ -345,7 +345,7 @@ func NewNydusClient(sock string) (Interface, error) {
}
return &NydusClient{
httpClient: &http.Client{
Timeout: defaultHttpClientTimeoutSecs,
Timeout: defaultHttpClientTimeout,
Transport: transport,
},
}, nil
@@ -370,12 +370,12 @@ func buildTransport(sock string) (http.RoundTripper, error) {
}
return &http.Transport{
MaxIdleConns: maxIdleConns,
IdleConnTimeout: idleConnTimeoutSecs,
ExpectContinueTimeout: expectContinueTimeoutSecs,
IdleConnTimeout: idleConnTimeout,
ExpectContinueTimeout: expectContinueTimeout,
DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
dialer := &net.Dialer{
Timeout: dialTimoutSecs,
KeepAlive: keepAliveSecs,
Timeout: dialTimout,
KeepAlive: keepAlive,
}
return dialer.DialContext(ctx, "unix", sock)
},

View File

@@ -24,7 +24,6 @@ import (
otelLabel "go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/trace"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
grpcStatus "google.golang.org/grpc/status"
"github.com/containerd/ttrpc"
@@ -132,7 +131,7 @@ func TraceUnaryClientInterceptor() ttrpc.UnaryClientInterceptor {
span.SetAttributes(otelLabel.Key("RPC_ERROR").Bool(true))
}
// err can be nil, that will return an OK response code
if status, _ := status.FromError(err); status != nil {
if status, _ := grpcStatus.FromError(err); status != nil {
span.SetAttributes(otelLabel.Key("RPC_CODE").Int((int)(status.Code())))
span.SetAttributes(otelLabel.Key("RPC_MESSAGE").String(status.Message()))
}
@@ -400,7 +399,7 @@ func HybridVSockDialer(sock string, timeout time.Duration) (net.Conn, error) {
// Once the connection is opened, the following command MUST BE sent:
// the hypervisor needs to know the port number where the agent is
// listening in order to create the connection.
if _, err = conn.Write([]byte(fmt.Sprintf("CONNECT %d\n", port))); err != nil {
if _, err = fmt.Fprintf(conn, "CONNECT %d\n", port); err != nil {
conn.Close()
return nil, err
}
@@ -457,7 +456,7 @@ func HybridVSockDialer(sock string, timeout time.Duration) (net.Conn, error) {
func RemoteSockDialer(sock string, timeout time.Duration) (net.Conn, error) {
s := strings.Split(sock, ":")
if !(len(s) == 2 && s[0] == RemoteSockScheme) {
if len(s) != 2 || s[0] != RemoteSockScheme {
return nil, fmt.Errorf("failed to parse remote sock: %q", sock)
}
socketPath := s[1]

View File

@@ -186,7 +186,7 @@ func TestParseConfigJSON(t *testing.T) {
var ociSpec compatOCISpec
var configByte []byte
ociSpec.Spec.Version = "1.0.0"
ociSpec.Version = "1.0.0"
ociSpec.Process = &compatOCIProcess{}
ociSpec.Process.Capabilities = map[string]interface{}{
"bounding": []interface{}{"CAP_KILL"},

View File

@@ -236,7 +236,7 @@ func (s CPUSet) String() string {
if r.start == r.end {
result.WriteString(strconv.Itoa(r.start))
} else {
result.WriteString(fmt.Sprintf("%d-%d", r.start, r.end))
fmt.Fprintf(&result, "%d-%d", r.start, r.end)
}
result.WriteString(",")
}

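The `result.WriteString(fmt.Sprintf(...))` → `fmt.Fprintf(&result, ...)` rewrite above works because `*strings.Builder` implements `io.Writer`, so the formatted text lands directly in the builder with no temporary string. A simplified sketch of the range-formatting loop; the `cpuRange` type and the separator handling here are assumptions, not the exact virtcontainers implementation:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

type cpuRange struct{ start, end int }

// formatCPUSet renders ranges in the style of the CPUSet String method:
// single CPUs as "n", spans as "a-b", comma separated.
func formatCPUSet(ranges []cpuRange) string {
	var b strings.Builder
	for i, r := range ranges {
		if i > 0 {
			b.WriteString(",")
		}
		if r.start == r.end {
			b.WriteString(strconv.Itoa(r.start))
		} else {
			// Fprintf writes straight into the builder.
			fmt.Fprintf(&b, "%d-%d", r.start, r.end)
		}
	}
	return b.String()
}

func main() {
	fmt.Println(formatCPUSet([]cpuRange{{0, 3}, {5, 5}, {7, 9}})) // prints: 0-3,5,7-9
}
```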

@@ -431,15 +431,16 @@ func (q *qemu) buildDevices(ctx context.Context, kernelPath string) ([]govmmQemu
return nil, nil, nil, err
}
if assetType == types.ImageAsset {
switch assetType {
case types.ImageAsset:
devices, err = q.arch.appendImage(ctx, devices, assetPath)
if err != nil {
return nil, nil, nil, err
}
} else if assetType == types.InitrdAsset {
case types.InitrdAsset:
// InitrdAsset, need to set kernel initrd path
kernel.InitrdPath = assetPath
} else if assetType == types.SecureBootAsset {
case types.SecureBootAsset:
// SecureBootAsset, no need to set image or initrd path
q.Logger().Info("For IBM Z Secure Execution, initrd path should not be set")
kernel.InitrdPath = ""
@@ -621,7 +622,7 @@ func (q *qemu) CreateVM(ctx context.Context, id string, network Network, hypervi
// memory.
if q.config.SharedFS == config.VirtioFS || q.config.SharedFS == config.VirtioFSNydus ||
q.config.FileBackedMemRootDir != "" {
if !(q.config.BootToBeTemplate || q.config.BootFromTemplate) {
if !q.config.BootToBeTemplate && !q.config.BootFromTemplate {
q.setupFileBackedMem(&knobs, &memory)
} else {
return errors.New("VM templating has been enabled with either virtio-fs or file backed memory and this configuration will not work")
@@ -1914,11 +1915,12 @@ func (q *qemu) hotplugVFIODevice(ctx context.Context, device *config.VFIODev, op
// In case HotplugVFIOOnRootBus is true, devices are hotplugged on the root bus
// for pc machine type instead of bridge. This is useful for devices that require
// a large PCI BAR, which is currently a limitation with PCI bridges.
if q.state.HotPlugVFIO == config.RootPort {
switch q.state.HotPlugVFIO {
case config.RootPort:
err = q.hotplugVFIODeviceRootPort(ctx, device)
} else if q.state.HotPlugVFIO == config.SwitchPort {
case config.SwitchPort:
err = q.hotplugVFIODeviceSwitchPort(ctx, device)
} else if q.state.HotPlugVFIO == config.BridgePort {
case config.BridgePort:
err = q.hotplugVFIODeviceBridgePort(ctx, device)
}
if err != nil {

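Both qemu hunks above replace `if`/`else if` chains that repeatedly compare the same value with a `switch`, the idiomatic Go form for multi-way dispatch on one subject. A hedged sketch of the pattern; the stand-in asset constants below are illustrative, not the real `types` package:

```go
package main

import "fmt"

type assetType int

const (
	imageAsset assetType = iota
	initrdAsset
	secureBootAsset
)

// describe shows the if/else-if chain rewritten as a switch:
// one subject, one case per constant, identical control flow.
func describe(t assetType) string {
	switch t {
	case imageAsset:
		return "append image device"
	case initrdAsset:
		return "set kernel initrd path"
	case secureBootAsset:
		return "clear initrd path for Secure Execution"
	default:
		return "unknown asset"
	}
}

func main() {
	fmt.Println(describe(initrdAsset)) // prints: set kernel initrd path
}
```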

@@ -97,10 +97,7 @@ func newQemuArch(config HypervisorConfig) (qemuArch, error) {
return nil, fmt.Errorf("unrecognised machinetype: %v", machineType)
}
factory := false
if config.BootToBeTemplate || config.BootFromTemplate {
factory = true
}
factory := config.BootToBeTemplate || config.BootFromTemplate
// IOMMU and Guest Protection require a split IRQ controller for handling interrupts
// otherwise QEMU won't be able to create the kernel irqchip
@@ -141,9 +138,9 @@ func newQemuArch(config HypervisorConfig) (qemuArch, error) {
return nil, err
}
if !q.qemuArchBase.disableNvdimm {
if !q.disableNvdimm {
hvLogger.WithField("subsystem", "qemuAmd64").Warn("Nvdimm is not supported with confidential guest, disabling it.")
q.qemuArchBase.disableNvdimm = true
q.disableNvdimm = true
}
}


@@ -122,6 +122,7 @@ func TestQemuArm64AppendImage(t *testing.T) {
ID: "mem0",
MemPath: f.Name(),
Size: (uint64)(imageStat.Size()),
ReadOnly: true,
},
}


@@ -1569,7 +1569,7 @@ func (s *Sandbox) CreateContainer(ctx context.Context, contConfig ContainerConfi
// Sandbox is responsible to update VM resources needed by Containers
// Update resources after having added containers to the sandbox, since
// container status is requiered to know if more resources should be added.
// container status is required to know if more resources should be added.
if err = s.updateResources(ctx); err != nil {
return nil, err
}
@@ -2559,7 +2559,7 @@ func (s *Sandbox) resourceControllerDelete() error {
// Keep MoveTo for the case of using cgroupfs paths and for the
// non-sandbox_cgroup_only mode. In that mode, Kata may use an overhead
// cgroup in which case an explicit MoveTo is used to drain tasks.
if !(resCtrl.IsSystemdCgroup(s.state.SandboxCgroupPath) && s.config.SandboxCgroupOnly) {
if !resCtrl.IsSystemdCgroup(s.state.SandboxCgroupPath) || !s.config.SandboxCgroupOnly {
resCtrlParent := sandboxController.Parent()
if err := sandboxController.MoveTo(resCtrlParent); err != nil {
return err
@@ -2577,7 +2577,7 @@ func (s *Sandbox) resourceControllerDelete() error {
}
// See comment at above MoveTo: Avoid this action as systemd moves tasks on unit deletion.
if !(resCtrl.IsSystemdCgroup(s.state.OverheadCgroupPath) && s.config.SandboxCgroupOnly) {
if !resCtrl.IsSystemdCgroup(s.state.OverheadCgroupPath) || !s.config.SandboxCgroupOnly {
resCtrlParent := overheadController.Parent()
if err := s.overheadController.MoveTo(resCtrlParent); err != nil {
return err

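The sandbox.go hunks above (and the earlier `BootToBeTemplate`/`RemoteSockDialer` ones) apply De Morgan's laws, which the linter flags as simplifiable: `!(a && b)` is `!a || !b`, and `!(a || b)` is `!a && !b`. A tiny exhaustive check of the equivalence over all boolean inputs:

```go
package main

import "fmt"

func main() {
	// Verify both De Morgan rewrites for every combination of inputs.
	for _, a := range []bool{false, true} {
		for _, b := range []bool{false, true} {
			if !(a && b) != (!a || !b) {
				panic("!(a && b) must equal !a || !b")
			}
			if !(a || b) != (!a && !b) {
				panic("!(a || b) must equal !a && !b")
			}
		}
	}
	fmt.Println("De Morgan equivalences hold") // prints: De Morgan equivalences hold
}
```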

@@ -23,7 +23,6 @@ import (
exp "github.com/kata-containers/kata-containers/src/runtime/virtcontainers/experimental"
"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/persist/fs"
"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/pkg/annotations"
vcAnnotations "github.com/kata-containers/kata-containers/src/runtime/virtcontainers/pkg/annotations"
"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/types"
specs "github.com/opencontainers/runtime-spec/specs-go"
@@ -694,43 +693,43 @@ func TestSandboxCreateAssets(t *testing.T) {
{
types.FirmwareAsset,
map[string]string{
annotations.FirmwarePath: filename,
annotations.FirmwareHash: assetContentHash,
vcAnnotations.FirmwarePath: filename,
vcAnnotations.FirmwareHash: assetContentHash,
},
},
{
types.HypervisorAsset,
map[string]string{
annotations.HypervisorPath: filename,
annotations.HypervisorHash: assetContentHash,
vcAnnotations.HypervisorPath: filename,
vcAnnotations.HypervisorHash: assetContentHash,
},
},
{
types.ImageAsset,
map[string]string{
annotations.ImagePath: filename,
annotations.ImageHash: assetContentHash,
vcAnnotations.ImagePath: filename,
vcAnnotations.ImageHash: assetContentHash,
},
},
{
types.InitrdAsset,
map[string]string{
annotations.InitrdPath: filename,
annotations.InitrdHash: assetContentHash,
vcAnnotations.InitrdPath: filename,
vcAnnotations.InitrdHash: assetContentHash,
},
},
{
types.JailerAsset,
map[string]string{
annotations.JailerPath: filename,
annotations.JailerHash: assetContentHash,
vcAnnotations.JailerPath: filename,
vcAnnotations.JailerHash: assetContentHash,
},
},
{
types.KernelAsset,
map[string]string{
annotations.KernelPath: filename,
annotations.KernelHash: assetContentHash,
vcAnnotations.KernelPath: filename,
vcAnnotations.KernelHash: assetContentHash,
},
},
}
@@ -774,7 +773,7 @@ func TestSandboxCreateAssets(t *testing.T) {
imagePathData := &testData{
assetType: types.ImageAsset,
annotations: map[string]string{
annotations.ImagePath: "rhel9-os",
vcAnnotations.ImagePath: "rhel9-os",
},
}
@@ -1413,13 +1412,13 @@ func TestSandbox_Cgroups(t *testing.T) {
}
sandboxContainer := ContainerConfig{}
sandboxContainer.Annotations = make(map[string]string)
sandboxContainer.Annotations[annotations.ContainerTypeKey] = string(PodSandbox)
sandboxContainer.Annotations[vcAnnotations.ContainerTypeKey] = string(PodSandbox)
emptyJSONLinux := ContainerConfig{
CustomSpec: newEmptySpec(),
}
emptyJSONLinux.Annotations = make(map[string]string)
emptyJSONLinux.Annotations[annotations.ContainerTypeKey] = string(PodSandbox)
emptyJSONLinux.Annotations[vcAnnotations.ContainerTypeKey] = string(PodSandbox)
cloneSpec1 := newEmptySpec()
cloneSpec1.Linux.CgroupsPath = "/myRuntime/myContainer"
@@ -1427,7 +1426,7 @@ func TestSandbox_Cgroups(t *testing.T) {
CustomSpec: cloneSpec1,
}
successfulContainer.Annotations = make(map[string]string)
successfulContainer.Annotations[annotations.ContainerTypeKey] = string(PodSandbox)
successfulContainer.Annotations[vcAnnotations.ContainerTypeKey] = string(PodSandbox)
// nolint: govet
tests := []struct {


@@ -541,10 +541,10 @@ func (s *stratovirt) appendNetwork(ctx context.Context, devices []VirtioDev, end
devices = append(devices, netDevice{
devType: "tap",
id: name,
ifname: endpoint.NetworkPair().TapInterface.TAPIface.Name,
ifname: endpoint.NetworkPair().TAPIface.Name,
netdev: name,
deviceID: name,
FDs: endpoint.NetworkPair().TapInterface.VMFds,
FDs: endpoint.NetworkPair().VMFds,
mac: endpoint.HardwareAddr(),
driver: mmioBus,
})
@@ -1130,7 +1130,7 @@ func (s *stratovirt) AddDevice(ctx context.Context, devInfo interface{}, devType
s.fds = append(s.fds, v.VhostFd)
s.svConfig.devices = s.appendVhostVsock(ctx, s.svConfig.devices, v)
case Endpoint:
s.fds = append(s.fds, v.NetworkPair().TapInterface.VMFds...)
s.fds = append(s.fds, v.NetworkPair().VMFds...)
s.svConfig.devices = s.appendNetwork(ctx, s.svConfig.devices, v)
case config.BlockDrive:
s.svConfig.devices = s.appendBlock(ctx, s.svConfig.devices)


@@ -170,7 +170,7 @@ func createTuntapNetworkEndpoint(idx int, ifName string, hwName net.HardwareAddr
Name: fmt.Sprintf("eth%d", idx),
TAPIface: NetworkInterface{
Name: fmt.Sprintf("tap%d_kata", idx),
HardAddr: fmt.Sprintf("%s", hwName), //nolint:gosimple
HardAddr: hwName.String(),
},
},
EndpointType: TuntapEndpointType,


@@ -25,8 +25,8 @@ const (
// IsSandbox determines if the container type can be considered as a sandbox.
// We can consider a sandbox in case we have a PodSandbox or a "regular" container
func (cType ContainerType) IsSandbox() bool {
return cType == PodSandbox || cType == SingleContainer
func (t ContainerType) IsSandbox() bool {
return t == PodSandbox || t == SingleContainer
}
func (t ContainerType) IsCriSandbox() bool {


@@ -1,3 +1,8 @@
// Copyright (c) 2024 Kata Contributors
//
// SPDX-License-Identifier: Apache-2.0
//
package types
import (


@@ -1,3 +1,8 @@
// Copyright (c) 2024 Kata Contributors
//
// SPDX-License-Identifier: Apache-2.0
//
package types
import (


@@ -18,8 +18,8 @@ type RetryableFunc func() error
var (
DefaultAttempts = uint(10)
DefaultDelayMS = 100 * time.Millisecond
DefaultMaxJitterMS = 100 * time.Millisecond
DefaultDelay = 100 * time.Millisecond
DefaultMaxJitter = 100 * time.Millisecond
DefaultOnRetry = func(n uint, err error) {}
DefaultRetryIf = IsRecoverable
DefaultDelayType = CombineDelay(BackOffDelay, RandomDelay)
@@ -177,8 +177,8 @@ func Do(retryableFunc RetryableFunc, opts ...Option) error {
// default
config := &Config{
attempts: DefaultAttempts,
delay: DefaultDelayMS,
maxJitter: DefaultMaxJitterMS,
delay: DefaultDelay,
maxJitter: DefaultMaxJitter,
onRetry: DefaultOnRetry,
retryIf: DefaultRetryIf,
delayType: DefaultDelayType,


@@ -148,7 +148,7 @@ func findVhostUserNetSocketPath(netInfo NetworkInfo) (string, error) {
// Check for socket file existence at known location.
for _, addr := range netInfo.Addrs {
socketPath := fmt.Sprintf(hostSocketSearchPath, addr.IPNet.IP)
socketPath := fmt.Sprintf(hostSocketSearchPath, addr.IP)
if _, err := os.Stat(socketPath); err == nil {
return socketPath, nil
}


@@ -79,7 +79,7 @@ func MkPathIfNotExit(path string) (*string, error) {
return nil, errors.New("stat path failed")
} else if !exist {
if err := os.MkdirAll(path, PERM); err != nil {
return nil, errors.New("mkdir all failed.")
return nil, errors.New("mkdir all failed")
}
klog.Infof("mkdir full path successfully")
}
@@ -94,7 +94,7 @@ func MakeFullPath(path string) error {
return errors.New("stat path failed with not exist")
}
if err := os.MkdirAll(path, PERM); err != nil {
return errors.New("mkdir all failed.")
return errors.New("mkdir all failed")
}
}


@@ -51,33 +51,12 @@ default WriteStreamRequest := false
# them and inspect OPA logs for the root cause of a failure.
default AllowRequestsFailingPolicy := false
# Constants (containerd keys; CRI-O uses different keys, see *_CRIO below)
# Constants
S_NAME_KEY = "io.kubernetes.cri.sandbox-name"
S_NAMESPACE_KEY = "io.kubernetes.cri.sandbox-namespace"
S_NAME_KEY_CRIO = "io.kubernetes.cri-o.SandboxName"
S_NAMESPACE_KEY_CRIO = "io.kubernetes.cri-o.Namespace"
SANDBOX_ID_KEY = "io.kubernetes.cri.sandbox-id"
SANDBOX_ID_KEY_CRIO = "io.kubernetes.cri-o.SandboxID"
C_TYPE_KEY = "io.kubernetes.cri.container-type"
C_TYPE_KEY_CRIO = "io.kubernetes.cri-o.ContainerType"
CONTAINER_NAME_KEY = "io.kubernetes.cri.container-name"
CONTAINER_NAME_KEY_CRIO = "io.kubernetes.cri-o.ContainerName"
IMAGE_NAME_KEY = "io.kubernetes.cri.image-name"
IMAGE_NAME_KEY_CRIO = "io.kubernetes.cri-o.ImageName"
SANDBOX_LOG_DIR_KEY = "io.kubernetes.cri.sandbox-log-directory"
SANDBOX_LOG_DIR_KEY_CRIO = "io.kubernetes.cri-o.LogPath"
CDI_VFIO_ANNOTATION_PREFIX = "cdi.k8s.io/vfio"
VFIO_PCI_ADDRESS_REGEX = "^[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[01][0-9a-fA-F]\\.[0-7]=[0-9a-fA-F]{2}/[0-9a-fA-F]{2}$"
# Get annotation value from input OCI: accept either CRI (containerd) or CRI-O key.
get_input_anno(i_oci, cri_key, crio_key) := v if {
v := i_oci.Annotations[cri_key]
}
get_input_anno(i_oci, cri_key, crio_key) := v if {
not i_oci.Annotations[cri_key]
v := i_oci.Annotations[crio_key]
}
CreateContainerRequest := {"ops": ops, "allowed": true} if {
# Check if the input request should be rejected even before checking the
# policy_data.containers information.
@@ -90,8 +69,8 @@ CreateContainerRequest := {"ops": ops, "allowed": true} if {
# array of possible state operations
ops_builder := []
# check sandbox name (containerd or CRI-O)
sandbox_name := get_input_anno(i_oci, S_NAME_KEY, S_NAME_KEY_CRIO)
# check sandbox name
sandbox_name = i_oci.Annotations[S_NAME_KEY]
add_sandbox_name_to_state := state_allows("sandbox_name", sandbox_name)
ops_builder1 := concat_op_if_not_null(ops_builder, add_sandbox_name_to_state)
@@ -106,9 +85,9 @@ CreateContainerRequest := {"ops": ops, "allowed": true} if {
p_oci := p_container.OCI
# check namespace (containerd or CRI-O)
# check namespace
p_namespace := p_oci.Annotations[S_NAMESPACE_KEY]
i_namespace := get_input_anno(i_oci, S_NAMESPACE_KEY, S_NAMESPACE_KEY_CRIO)
i_namespace := i_oci.Annotations[S_NAMESPACE_KEY]
print("CreateContainerRequest: p_namespace =", p_namespace, "i_namespace =", i_namespace)
add_namespace_to_state := allow_namespace(p_namespace, i_namespace)
ops_builder2 := concat_op_if_not_null(ops_builder1, add_namespace_to_state)
@@ -270,13 +249,9 @@ allow_anno_key_value(i_key, i_value, p_container) if {
print("allow_anno_key_value 1: i key =", i_key)
startswith(i_key, "io.kubernetes.cri.")
print("allow_anno_key_value 1: true")
}
allow_anno_key_value(i_key, i_value, p_container) if {
print("allow_anno_key_value 1b: i key =", i_key)
startswith(i_key, "io.kubernetes.cri-o.")
print("allow_anno_key_value 1b: true")
}
allow_anno_key_value(i_key, i_value, p_container) if {
print("allow_anno_key_value 2: i key =", i_key)
@@ -297,17 +272,17 @@ allow_anno_key_value(i_key, i_value, p_container) if {
print("allow_anno_key_value 3: true")
}
# Get the value of the sandbox name/namespace annotations (containerd or CRI-O) and
# correlate with other annotations and process fields.
# Get the value of the S_NAME_KEY annotation and
# correlate it with other annotations and process fields.
allow_by_anno(p_oci, i_oci, p_storages, i_storages) if {
print("allow_by_anno 1: start")
not p_oci.Annotations[S_NAME_KEY]
i_s_name := get_input_anno(i_oci, S_NAME_KEY, S_NAME_KEY_CRIO)
i_s_name := i_oci.Annotations[S_NAME_KEY]
print("allow_by_anno 1: i_s_name =", i_s_name)
i_s_namespace := get_input_anno(i_oci, S_NAMESPACE_KEY, S_NAMESPACE_KEY_CRIO)
i_s_namespace := i_oci.Annotations[S_NAMESPACE_KEY]
print("allow_by_anno 1: i_s_namespace =", i_s_namespace)
allow_by_sandbox_name(p_oci, i_oci, p_storages, i_storages, i_s_name, i_s_namespace)
@@ -318,12 +293,12 @@ allow_by_anno(p_oci, i_oci, p_storages, i_storages) if {
print("allow_by_anno 2: start")
p_s_name := p_oci.Annotations[S_NAME_KEY]
i_s_name := get_input_anno(i_oci, S_NAME_KEY, S_NAME_KEY_CRIO)
i_s_name := i_oci.Annotations[S_NAME_KEY]
print("allow_by_anno 2: i_s_name =", i_s_name, "p_s_name =", p_s_name)
allow_sandbox_name(p_s_name, i_s_name)
i_s_namespace := get_input_anno(i_oci, S_NAMESPACE_KEY, S_NAMESPACE_KEY_CRIO)
i_s_namespace := i_oci.Annotations[S_NAMESPACE_KEY]
print("allow_by_anno 2: i_s_namespace =", i_s_namespace)
allow_by_sandbox_name(p_oci, i_oci, p_storages, i_storages, i_s_name, i_s_namespace)
@@ -334,7 +309,7 @@ allow_by_anno(p_oci, i_oci, p_storages, i_storages) if {
allow_by_sandbox_name(p_oci, i_oci, p_storages, i_storages, s_name, s_namespace) if {
print("allow_by_sandbox_name: start")
i_namespace := get_input_anno(i_oci, S_NAMESPACE_KEY, S_NAMESPACE_KEY_CRIO)
i_namespace := i_oci.Annotations[S_NAMESPACE_KEY]
allow_by_container_types(p_oci, i_oci, s_name, i_namespace)
allow_by_bundle_or_sandbox_id(p_oci, i_oci, p_storages, i_storages)
@@ -350,14 +325,18 @@ allow_sandbox_name(p_s_name, i_s_name) if {
print("allow_sandbox_name: true")
}
# Check that the container-type annotation (containerd or CRI-O) and
# "io.katacontainers.pkg.oci.container_type" designate the expected type -
# either "sandbox" or "container". Then validate other annotations accordingly.
# Check that the "io.kubernetes.cri.container-type" and
# "io.katacontainers.pkg.oci.container_type" annotations designate the
# expected type - either a "sandbox" or a "container". Then, validate
# other annotations based on the actual "sandbox" or "container" value
# from the input container.
allow_by_container_types(p_oci, i_oci, s_name, s_namespace) if {
print("allow_by_container_types: checking container-type")
print("allow_by_container_types: checking io.kubernetes.cri.container-type")
p_cri_type := p_oci.Annotations[C_TYPE_KEY]
i_cri_type := get_input_anno(i_oci, C_TYPE_KEY, C_TYPE_KEY_CRIO)
c_type := "io.kubernetes.cri.container-type"
p_cri_type := p_oci.Annotations[c_type]
i_cri_type := i_oci.Annotations[c_type]
print("allow_by_container_types: p_cri_type =", p_cri_type, "i_cri_type =", i_cri_type)
p_cri_type == i_cri_type
@@ -396,52 +375,42 @@ allow_by_container_type(i_cri_type, p_oci, i_oci, s_name, s_namespace) if {
print("allow_by_container_type 2: true")
}
# Container name: sandbox has none; container must match (containerd or CRI-O key).
# "io.kubernetes.cri.container-name" annotation
allow_sandbox_container_name(p_oci, i_oci) if {
print("allow_sandbox_container_name: start")
container_annotation_missing_cri_crio(p_oci, i_oci, CONTAINER_NAME_KEY, CONTAINER_NAME_KEY_CRIO)
container_annotation_missing(p_oci, i_oci, "io.kubernetes.cri.container-name")
print("allow_sandbox_container_name: true")
}
allow_container_name(p_oci, i_oci) if {
print("allow_container_name: start")
allow_container_annotation_cri_crio(p_oci, i_oci, CONTAINER_NAME_KEY, CONTAINER_NAME_KEY_CRIO)
allow_container_annotation(p_oci, i_oci, "io.kubernetes.cri.container-name")
print("allow_container_name: true")
}
container_annotation_missing(p_oci, i_oci, key) if {
print("container_annotation_missing:", key)
not p_oci.Annotations[key]
not i_oci.Annotations[key]
print("container_annotation_missing: true")
}
# Both policy and input lack the annotation (input checked for both CRI and CRI-O keys).
container_annotation_missing_cri_crio(p_oci, i_oci, cri_key, crio_key) if {
print("container_annotation_missing_cri_crio:", cri_key)
not p_oci.Annotations[cri_key]
not i_oci.Annotations[cri_key]
not i_oci.Annotations[crio_key]
print("container_annotation_missing_cri_crio: true")
print("container_annotation_missing: true")
}
allow_container_annotation(p_oci, i_oci, key) if {
print("allow_container_annotation: key =", key)
p_value := p_oci.Annotations[key]
i_value := i_oci.Annotations[key]
print("allow_container_annotation: p_value =", p_value, "i_value =", i_value)
p_value == i_value
print("allow_container_annotation: true")
}
# Policy uses CRI key; input may have CRI or CRI-O key.
allow_container_annotation_cri_crio(p_oci, i_oci, cri_key, crio_key) if {
print("allow_container_annotation_cri_crio: cri_key =", cri_key)
p_value := p_oci.Annotations[cri_key]
i_value := get_input_anno(i_oci, cri_key, crio_key)
print("allow_container_annotation_cri_crio: p_value =", p_value, "i_value =", i_value)
p_value == i_value
print("allow_container_annotation_cri_crio: true")
print("allow_container_annotation: true")
}
# "nerdctl/network-namespace" annotation
@@ -470,16 +439,18 @@ allow_net_namespace(p_oci, i_oci) if {
print("allow_net_namespace: true")
}
# Sandbox log directory (containerd or CRI-O: cri-o uses LogPath)
# "io.kubernetes.cri.sandbox-log-directory" annotation
allow_sandbox_log_directory(p_oci, i_oci, s_name, s_namespace) if {
print("allow_sandbox_log_directory: start")
p_dir := p_oci.Annotations[SANDBOX_LOG_DIR_KEY]
key := "io.kubernetes.cri.sandbox-log-directory"
p_dir := p_oci.Annotations[key]
regex1 := replace(p_dir, "$(sandbox-name)", s_name)
regex2 := replace(regex1, "$(sandbox-namespace)", s_namespace)
print("allow_sandbox_log_directory: regex2 =", regex2)
i_dir := get_input_anno(i_oci, SANDBOX_LOG_DIR_KEY, SANDBOX_LOG_DIR_KEY_CRIO)
i_dir := i_oci.Annotations[key]
print("allow_sandbox_log_directory: i_dir =", i_dir)
regex.match(regex2, i_dir)
@@ -489,9 +460,12 @@ allow_sandbox_log_directory(p_oci, i_oci, s_name, s_namespace) if {
allow_log_directory(p_oci, i_oci) if {
print("allow_log_directory: start")
not p_oci.Annotations[SANDBOX_LOG_DIR_KEY]
not i_oci.Annotations[SANDBOX_LOG_DIR_KEY]
not i_oci.Annotations[SANDBOX_LOG_DIR_KEY_CRIO]
key := "io.kubernetes.cri.sandbox-log-directory"
not p_oci.Annotations[key]
not i_oci.Annotations[key]
print("allow_log_directory: true")
}
@@ -802,25 +776,22 @@ allow_linux_sysctl(p_linux, i_linux) if {
print("allow_linux_sysctl 2: true")
}
# Check sandbox_id and derive bundle_id from guest root path (CRI-agnostic: works for containerd and CRI-O).
# Bundle path on the host is runtime-specific; root path in the guest is stable, so we extract bundle_id from it.
# Check the consistency of the input "io.katacontainers.pkg.oci.bundle_path"
# and io.kubernetes.cri.sandbox-id" values with other fields.
allow_by_bundle_or_sandbox_id(p_oci, i_oci, p_storages, i_storages) if {
print("allow_by_bundle_or_sandbox_id: start")
p_regex := p_oci.Annotations[SANDBOX_ID_KEY]
sandbox_id := get_input_anno(i_oci, SANDBOX_ID_KEY, SANDBOX_ID_KEY_CRIO)
bundle_path := i_oci.Annotations["io.katacontainers.pkg.oci.bundle_path"]
bundle_id := replace(bundle_path, "/run/containerd/io.containerd.runtime.v2.task/k8s.io/", "")
key := "io.kubernetes.cri.sandbox-id"
p_regex := p_oci.Annotations[key]
sandbox_id := i_oci.Annotations[key]
print("allow_by_bundle_or_sandbox_id: sandbox_id =", sandbox_id, "regex =", p_regex)
regex.match(p_regex, sandbox_id)
# Derive bundle_id from guest root path (e.g. /run/kata-containers/<bundle_id>/rootfs).
# Match 64-char hex (real runtimes) or any single path segment (e.g. test data: bundle-id, gpu-container, dummy).
i_root := i_oci.Root.Path
p_root_pattern1 := p_oci.Root.Path
p_root_pattern2 := replace(p_root_pattern1, "$(root_path)", policy_data.common.root_path)
p_root_pattern3 := replace(p_root_pattern2, "$(bundle-id)", "([0-9a-f]{64}|[^/]+)")
print("allow_by_bundle_or_sandbox_id: i_root =", i_root, "regex =", p_root_pattern3)
bundle_id := regex.find_all_string_submatch_n(p_root_pattern3, i_root, 1)[0][1]
allow_root_path(p_oci, i_oci, bundle_id)
# Match each input mount with a Policy mount.


@@ -24,7 +24,7 @@ func checkValid(value string) error {
}
for _, ch := range value {
if !(unicode.IsPrint(ch) || unicode.IsSpace(ch)) {
if !unicode.IsPrint(ch) && !unicode.IsSpace(ch) {
return fmt.Errorf("character %v (%x) in value %v not printable", ch, ch, value)
}
}


@@ -68,7 +68,7 @@ func (r *HexByteReader) Read(p []byte) (n int, err error) {
// perform the conversion
s := string(bytes)
result := strings.Replace(s, `\x`, `\\x`, -1)
result := strings.ReplaceAll(s, `\x`, `\\x`)
// store the data
r.data = []byte(result)


@@ -129,7 +129,7 @@ func (le LogEntry) Check(ignoreMissingFields bool) error {
return fmt.Errorf("missing line number: %+v", le)
}
if le.Time == (time.Time{}) {
if le.Time.IsZero() {
return fmt.Errorf("missing timestamp: %+v", le)
}


@@ -294,7 +294,7 @@ func TestParseTime(t *testing.T) {
}
for i, d := range data {
if d.timeString != "" && d.t == (time.Time{}) {
if d.timeString != "" && d.t.IsZero() {
t, err := time.Parse(time.RFC3339Nano, d.timeString)
assert.NoError(err)
d.t = t


@@ -1,34 +1,45 @@
# Copyright (c) 2017 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
version: "2"
run:
concurrency: 4
deadline: 600s
issues:
exclude-dirs:
- vendor
exclude-files:
- ".*\\.pb\\.go$"
linters:
disable-all: true
default: none
enable:
- gocyclo
- gofmt
- gosimple
- govet
- ineffassign
- misspell
- staticcheck
- typecheck
- unused
linters-settings:
gocyclo:
min_complexity: 15
unused:
check-exported: true
govet:
enable:
exclusions:
generated: lax
presets:
- comments
- common-false-positives
- legacy
- std-error-handling
paths:
- .*\.pb\.go$
- vendor
- third_party$
- builtin$
- examples$
settings:
staticcheck:
checks:
- all
- "-ST1005" # ST1005: error strings should not be capitalized (staticcheck)
- "-ST1003" # ST1003 - Poorly chosen identifier is a non-default rule
formatters:
enable:
- gofmt
exclusions:
generated: lax
paths:
- .*\.pb\.go$
- vendor
- third_party$
- builtin$
- examples$


@@ -79,7 +79,7 @@ func createHeadingID(headingName string) (id string, err error) {
id = strings.Map(validHeadingIDChar, headingName)
id = strings.ToLower(id)
id = strings.Replace(id, " ", "-", -1)
id = strings.ReplaceAll(id, " ", "-")
return id, nil
}


@@ -8,8 +8,8 @@ package main
import "strings"
func cleanString(s string) string {
result := strings.Replace(s, "\n", " ", -1)
result = strings.Replace(result, "\t", "\\t", -1)
result := strings.ReplaceAll(s, "\n", " ")
result = strings.ReplaceAll(result, "\t", "\\t")
result = strings.TrimSpace(result)
return result

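`strings.ReplaceAll(s, old, new)` is defined as `strings.Replace(s, old, new, -1)`, so the rewrites above (here and in `createHeadingID` and the `HexByteReader`) are pure readability changes with identical behaviour. A short self-contained check mirroring the `cleanString` helper shown:

```go
package main

import (
	"fmt"
	"strings"
)

// cleanString mirrors the helper above: newlines become spaces,
// tab characters become a literal "\t", and the result is trimmed.
func cleanString(s string) string {
	result := strings.ReplaceAll(s, "\n", " ")
	result = strings.ReplaceAll(result, "\t", "\\t")
	return strings.TrimSpace(result)
}

func main() {
	fmt.Printf("%q\n", cleanString(" a\nb\tc\n")) // prints "a b\\tc"
}
```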

@@ -810,17 +810,16 @@ function install_nydus_snapshotter() {
rm -f "${tarball_name}"
}
# version: the CRI-O version to be installed (major.minor, e.g. 1.35)
# Repo: https://github.com/cri-o/packaging (OpenSUSE Build Service, not pkgs.k8s.io)
# version: the CRI-O version to be installed
function install_crio() {
local version=${1}
sudo mkdir -p /etc/apt/keyrings
sudo mkdir -p /etc/apt/sources.list.d
curl -fsSL https://download.opensuse.org/repositories/isv:/cri-o:/stable:/v${version}/deb/Release.key | \
curl -fsSL https://pkgs.k8s.io/addons:/cri-o:/stable:/v${version}/deb/Release.key | \
sudo gpg --dearmor -o /etc/apt/keyrings/cri-o-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/cri-o-apt-keyring.gpg] https://download.opensuse.org/repositories/isv:/cri-o:/stable:/v${version}/deb/ /" | \
echo "deb [signed-by=/etc/apt/keyrings/cri-o-apt-keyring.gpg] https://pkgs.k8s.io/addons:/cri-o:/stable:/v${version}/deb/ /" | \
sudo tee /etc/apt/sources.list.d/cri-o.list
sudo apt update


@@ -397,19 +397,13 @@ EOF
# Deploy k8s using kubeadm with CreateContainerRequest (CRI) timeout set to 600s,
# mainly for CoCo (Confidential Containers) tests (attestation, policy, image pull, VM start).
local cri_socket
case "${CONTAINER_ENGINE:-containerd}" in
crio) cri_socket="/var/run/crio/crio.sock" ;;
containerd) cri_socket="/run/containerd/containerd.sock" ;;
*) cri_socket="/run/containerd/containerd.sock" ;;
esac
local kubeadm_config
kubeadm_config="$(mktemp --tmpdir kubeadm-config.XXXXXX.yaml)"
cat <<EOF | tee "${kubeadm_config}"
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
criSocket: "${cri_socket}"
criSocket: "/run/containerd/containerd.sock"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
@@ -433,29 +427,8 @@ EOF
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
}
# Try to install CRI-O for the given k8s-matching version (major.minor); if the repo/package
# is not available yet (k8s released before CRI-O), try previous minor (x.y-1).
function try_install_crio_for_k8s() {
local version="${1}"
local major minor
major="${version%%.*}"
minor="${version##*.}"
if install_crio "${version}"; then
return 0
fi
if [[ "${minor}" -gt 0 ]]; then
minor=$((minor - 1))
echo "CRI-O v${version} not available yet, trying v${major}.${minor}"
install_crio "${major}.${minor}"
else
echo "CRI-O v${version} failed and no fallback (minor would be < 0)" >&2
return 1
fi
}
# container_engine: containerd or crio
# container_engine_version: for containerd: major.minor or lts/active; for crio: major.minor (e.g. 1.31) or active
# container_engine: containerd (only containerd is supported for now, support for crio is welcome)
# container_engine_version: major.minor (and then we'll install the latest patch release matching that major.minor)
function deploy_vanilla_k8s() {
container_engine="${1}"
container_engine_version="${2}"
@@ -463,18 +436,10 @@ function deploy_vanilla_k8s() {
[[ -z "${container_engine}" ]] && die "container_engine is required"
[[ -z "${container_engine_version}" ]] && die "container_engine_version is required"
# Export so do_deploy_k8s can pick the right CRI socket
export CONTAINER_ENGINE="${container_engine}"
# Resolve lts/active to the actual version from versions.yaml (e.g. v1.7, v2.1)
case "${container_engine_version}" in
case "${container_engine_version}" in
lts|active)
if [[ "${container_engine}" == "containerd" ]]; then
container_engine_version=$(get_from_kata_deps ".externals.containerd.${container_engine_version}")
else
# CRI-O version matches k8s: use latest k8s stable major.minor (e.g. 1.31)
container_engine_version=$(curl -Ls https://dl.k8s.io/release/stable.txt | sed -e 's/^v//' | cut -d. -f-2)
fi
container_engine_version=$(get_from_kata_deps ".externals.containerd.${container_engine_version}")
;;
*) ;;
esac
@@ -489,11 +454,6 @@ function deploy_vanilla_k8s() {
sudo mkdir -p /etc/containerd
containerd config default | sed -e 's/SystemdCgroup = false/SystemdCgroup = true/' | sudo tee /etc/containerd/config.toml
;;
crio)
# CRI-O version is major.minor (e.g. 1.31) for download.opensuse.org/isv:cri-o:stable
# If k8s was released before CRI-O, try previous minor (x.y-1)
try_install_crio_for_k8s "${container_engine_version}"
;;
*) die "${container_engine} is not a container engine supported by this script" ;;
esac
sudo systemctl daemon-reload && sudo systemctl restart "${container_engine}"


@@ -10,15 +10,14 @@ load "${BATS_TEST_DIRNAME}/confidential_common.sh"
export KBS="${KBS:-false}"
export SNAPSHOTTER="${SNAPSHOTTER:-}"
export EXPERIMENTAL_FORCE_GUEST_PULL="${EXPERIMENTAL_FORCE_GUEST_PULL:-}"
export PULL_TYPE="${PULL_TYPE:-}"
setup() {
if ! is_confidential_runtime_class; then
skip "Test not supported for ${KATA_HYPERVISOR}."
fi
if [ "${SNAPSHOTTER}" != "nydus" ] && [ -z "${EXPERIMENTAL_FORCE_GUEST_PULL}" ] && [ "${PULL_TYPE}" != "guest-pull" ]; then
skip "Either SNAPSHOTTER=nydus, EXPERIMENTAL_FORCE_GUEST_PULL, or PULL_TYPE=guest-pull must be set for this test"
if [ "${SNAPSHOTTER}" != "nydus" ] && [ -z "${EXPERIMENTAL_FORCE_GUEST_PULL}" ]; then
skip "Either SNAPSHOTTER=nydus or EXPERIMENTAL_FORCE_GUEST_PULL must be set for this test"
fi
setup_common || die "setup_common failed"
@@ -175,8 +174,8 @@ teardown() {
skip "Test not supported for ${KATA_HYPERVISOR}."
fi
if [ "${SNAPSHOTTER}" != "nydus" ] && [ -z "${EXPERIMENTAL_FORCE_GUEST_PULL}" ] && [ "${PULL_TYPE}" != "guest-pull" ]; then
skip "Either SNAPSHOTTER=nydus, EXPERIMENTAL_FORCE_GUEST_PULL, or PULL_TYPE=guest-pull must be set for this test"
if [ "${SNAPSHOTTER}" != "nydus" ] && [ -z "${EXPERIMENTAL_FORCE_GUEST_PULL}" ]; then
skip "Either SNAPSHOTTER=nydus or EXPERIMENTAL_FORCE_GUEST_PULL must be set for this test"
fi
confidential_teardown_common "${node}" "${node_start_time:-}"


@@ -11,15 +11,14 @@ load "${BATS_TEST_DIRNAME}/confidential_common.sh"
 export KBS="${KBS:-false}"
 export SNAPSHOTTER="${SNAPSHOTTER:-}"
 export EXPERIMENTAL_FORCE_GUEST_PULL="${EXPERIMENTAL_FORCE_GUEST_PULL:-}"
-export PULL_TYPE="${PULL_TYPE:-}"
 setup() {
 if ! is_confidential_runtime_class; then
 skip "Test not supported for ${KATA_HYPERVISOR}."
 fi
-if [ "${SNAPSHOTTER}" != "nydus" ] && [ -z "${EXPERIMENTAL_FORCE_GUEST_PULL}" ] && [ "${PULL_TYPE}" != "guest-pull" ]; then
-skip "Either SNAPSHOTTER=nydus, EXPERIMENTAL_FORCE_GUEST_PULL, or PULL_TYPE=guest-pull must be set for this test"
+if [ "${SNAPSHOTTER}" != "nydus" ] && [ -z "${EXPERIMENTAL_FORCE_GUEST_PULL}" ]; then
+skip "Either SNAPSHOTTER=nydus or EXPERIMENTAL_FORCE_GUEST_PULL must be set for this test"
 fi
 tag_suffix=""
@@ -244,8 +243,8 @@ teardown() {
 skip "Test not supported for ${KATA_HYPERVISOR}."
 fi
-if [ "${SNAPSHOTTER}" != "nydus" ] && [ -z "${EXPERIMENTAL_FORCE_GUEST_PULL}" ] && [ "${PULL_TYPE}" != "guest-pull" ]; then
-skip "Either SNAPSHOTTER=nydus, EXPERIMENTAL_FORCE_GUEST_PULL, or PULL_TYPE=guest-pull must be set for this test"
+if [ "${SNAPSHOTTER}" != "nydus" ] && [ -z "${EXPERIMENTAL_FORCE_GUEST_PULL}" ]; then
+skip "Either SNAPSHOTTER=nydus or EXPERIMENTAL_FORCE_GUEST_PULL must be set for this test"
 fi
 teardown_common "${node}" "${node_start_time:-}"

View File

@@ -10,15 +10,14 @@ load "${BATS_TEST_DIRNAME}/confidential_common.sh"
 export SNAPSHOTTER="${SNAPSHOTTER:-}"
 export EXPERIMENTAL_FORCE_GUEST_PULL="${EXPERIMENTAL_FORCE_GUEST_PULL:-}"
-export PULL_TYPE="${PULL_TYPE:-}"
 setup() {
 if ! is_confidential_runtime_class; then
 skip "Test not supported for ${KATA_HYPERVISOR}."
 fi
-if [ "${SNAPSHOTTER}" != "nydus" ] && [ -z "${EXPERIMENTAL_FORCE_GUEST_PULL}" ] && [ "${PULL_TYPE}" != "guest-pull" ]; then
-skip "Either SNAPSHOTTER=nydus, EXPERIMENTAL_FORCE_GUEST_PULL, or PULL_TYPE=guest-pull must be set for this test"
+if [ "${SNAPSHOTTER}" != "nydus" ] && [ -z "${EXPERIMENTAL_FORCE_GUEST_PULL}" ]; then
+skip "Either SNAPSHOTTER=nydus or EXPERIMENTAL_FORCE_GUEST_PULL must be set for this test"
 fi
 setup_common || die "setup_common failed"
@@ -171,10 +170,13 @@ setup() {
 cat $pod_config
 # The pod should be failed because the image is too large to be pulled in the timeout
-assert_pod_fail "$pod_config"
+local fail_timeout=120
+# In this case, the host pulls first. Sometimes pull times spike, so allow longer to observe the failure
+[[ -n "${EXPERIMENTAL_FORCE_GUEST_PULL}" ]] && fail_timeout=360
+assert_pod_fail "$pod_config" "$fail_timeout"
 # runtime-rs has its dedicated error message, we need handle it separately.
-if [ "${KATA_HYPERVISOR}" == "qemu-coco-dev-runtime-rs" ]; then
+if [[ "${KATA_HYPERVISOR}" == *-runtime-rs ]]; then
 pod_name="large-image-pod"
 kubectl describe "pod/$pod_name" | grep "agent create container"
 kubectl describe "pod/$pod_name" | grep "timeout"
@@ -229,8 +231,8 @@ teardown() {
 skip "Test not supported for ${KATA_HYPERVISOR}."
 fi
-if [ "${SNAPSHOTTER}" != "nydus" ] && [ -z "${EXPERIMENTAL_FORCE_GUEST_PULL}" ] && [ "${PULL_TYPE}" != "guest-pull" ]; then
-skip "Either SNAPSHOTTER=nydus, EXPERIMENTAL_FORCE_GUEST_PULL, or PULL_TYPE=guest-pull must be set for this test"
+if [ "${SNAPSHOTTER}" != "nydus" ] && [ -z "${EXPERIMENTAL_FORCE_GUEST_PULL}" ]; then
+skip "Either SNAPSHOTTER=nydus or EXPERIMENTAL_FORCE_GUEST_PULL must be set for this test"
 fi
 teardown_common "${node}" "${node_start_time:-}"

View File
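The timeout selection added in the hunk above can be sketched standalone; the 120s/360s values come from the diff, while the `pick_fail_timeout` helper is ours. Per the diff comment, when the experimental force-guest-pull override is active the host pulls first and pull times occasionally spike, so the window in which the expected failure must be observed is widened:

```shell
#!/bin/sh
# Select the failure-observation timeout based on whether the
# experimental force-guest-pull override is in effect.
pick_fail_timeout() {
    # $1: EXPERIMENTAL_FORCE_GUEST_PULL value ("" when unset)
    timeout=120
    # Host-pull-first setups get a wider window for the failure to appear
    [ -n "$1" ] && timeout=360
    echo "$timeout"
}

pick_fail_timeout ""    # default window -> 120
pick_fail_timeout "1"   # widened window -> 360
```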

@@ -192,7 +192,7 @@ func main() {
 if context.GlobalString("metricsdir") == "" {
 log.Error("Must supply metricsdir argument")
-return errors.New("Must supply metricsdir argument")
+return errors.New("must supply metricsdir argument")
 }
 baseFilePath = context.GlobalString("basefile")

View File

@@ -141,11 +141,21 @@ Uses null-based defaults for disableAll support:
 {{- end -}}
 {{/*
-Get default shim for a specific architecture from structured config
+Get default shim for a specific architecture from structured config.
+Returns the configured default shim only if it is actually enabled and
+supports the requested architecture. Returns empty string otherwise so
+that callers can skip setting the env var rather than propagating a
+bogus value that would cause kata-deploy to fail at runtime.
 */}}
 {{- define "kata-deploy.getDefaultShimForArch" -}}
 {{- $arch := .arch -}}
-{{- index .root.Values.defaultShim $arch -}}
+{{- $defaultShim := index .root.Values.defaultShim $arch -}}
+{{- if $defaultShim -}}
+{{- $enabledShims := include "kata-deploy.getEnabledShimsForArch" (dict "root" .root "arch" $arch) | trim | splitList " " -}}
+{{- if has $defaultShim $enabledShims -}}
+{{- $defaultShim -}}
+{{- end -}}
+{{- end -}}
 {{- end -}}
 {{/*

View File
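The Helm guard-rail above boils down to a membership check: emit the configured default shim only if it is in the enabled-shims list, and emit nothing otherwise so the caller can skip setting the env var. A plain-shell sketch of the same logic (function name and shim lists are illustrative):

```shell
#!/bin/sh
# Emit the default shim only when it appears in the space-separated
# enabled-shims list; print nothing rather than a bogus value otherwise.
pick_default_shim() {
    # $1: configured default shim, $2: space-separated enabled shims
    case " $2 " in
        *" $1 "*) printf '%s\n' "$1" ;;
        *) ;;  # not enabled (or empty): stay silent
    esac
}

pick_default_shim "qemu" "clh qemu dragonball"   # enabled  -> "qemu"
pick_default_shim "fc"   "clh qemu dragonball"   # disabled -> ""
```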

@@ -28,7 +28,7 @@ data:
 {{- end }}
 {{- end }}
 {{- if $handler }}
-{{ $handler }}:{{ $runtime.baseConfig }}:{{ $runtime.containerd.snapshotter | default "" }}:{{ $runtime.crio.pullType | default "" }}
+{{ $handler }}:{{ $runtime.baseConfig }}:{{ dig "containerd" "snapshotter" "" $runtime }}:{{ dig "crio" "pullType" "" $runtime }}
 {{- end }}
 {{- end }}
 {{- /* Generate drop-in files for each runtime */ -}}

View File
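The switch to `dig` above matters because plain indexing like `$runtime.containerd.snapshotter` aborts template rendering when the `containerd` sub-map is absent from values.yaml, while `dig` falls back to `""` and keeps the generated line well-formed. A shell analogue of the rendered output (helper name and runtime names are illustrative):

```shell
#!/bin/sh
# Build a handler line in the configmap's handler:baseConfig:snapshotter:pullType
# format, substituting "" for any unset optional field instead of failing.
emit_runtime_line() {
    # $1: handler, $2: base config, $3: snapshotter or "", $4: pullType or ""
    printf '%s:%s:%s:%s\n' "$1" "$2" "${3:-}" "${4:-}"
}

emit_runtime_line "kata-qemu" "config.toml" "" ""          # no optional fields set
emit_runtime_line "kata-qemu-nydus" "config.toml" "nydus" ""
```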

@@ -487,12 +487,12 @@ languages:
 description: "golangci-lint"
 notes: "'version' is the default minimum version used by this project."
 url: "github.com/golangci/golangci-lint"
-version: "1.64.8"
+version: "2.9.0"
 meta:
 description: |
 'newest-version' is the latest version known to work when
 building Kata
-newest-version: "1.64.8"
+newest-version: "2.9.0"
 docker_images:
 description: "Docker images used for testing"