Commit Graph

18546 Commits

Author SHA1 Message Date
Fabiano Fidêncio
b64673196a ci: cache: qemu: Take configure-hypervisor.sh into account
The script is used to change the options used to build QEMU and **must**
be taken into account whenever it changes; otherwise the CI would keep
using the old cached QEMU, ignoring any newly added flags.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2026-04-20 14:52:57 +02:00
Fabiano Fidêncio
07731cde21 Merge pull request #12879 from stevenhorsman/confidential-tests-fixes
Confidential tests fixes
2026-04-20 14:33:02 +02:00
stevenhorsman
c75c432c01 ci: Update TEE scope
`k8s-confidential.bats` technically doesn't need attestation, but only runs
on TEE hardware, so include it in the attestation list so we can test it in PRs

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2026-04-20 09:36:10 +01:00
stevenhorsman
7179e92142 tests/confidentials: Remove pointless skip
The skip conditional is wrong, but it's not needed anyway, as the setup
and teardown already restrict the test to confidential hardware.

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2026-04-20 09:36:10 +01:00
Fupan Li
2629df2785 Merge pull request #12763 from Apokleos/fsmerged-erofs-rs
runtime-rs: support erofs snapshotter with Fsmerge enabled
2026-04-20 11:54:19 +08:00
Alex Lyn
e975b3158b Merge pull request #12837 from stevenhorsman/rand-bump-GHSA-cq8v-f236-94qc
versions: Bump rand crate where possible
2026-04-20 10:05:19 +08:00
Fabiano Fidêncio
d6f0b15578 ci: erofs: restrict to runtime-rs only
The erofs snapshotter configuration is node-wide (a single containerd
drop-in) and cannot be split per runtime handler.  The Go runtime does
not support fsmerged EROFS — it rejects fsmeta.erofs mount sources with
"unsupported mount source" — so erofs is only usable with runtime-rs.

Drop qemu-coco-dev (Go) from the erofs CI matrix and add a check in
kata-deploy's configure_erofs_snapshotter() that inspects the
SNAPSHOTTER_HANDLER_MAPPING: if any Go shim is explicitly mapped to
erofs, emit a prominent warning and bail out with a clear error telling
the operator to fix the mapping.

Since all shims are now guaranteed to be runtime-rs when erofs is
active, remove the conditional is_rust_shim gating and always emit the
full erofs configuration (differ options, default_size,
max_unmerged_layers=1).

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2026-04-19 13:24:31 +02:00
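The mapping guard described above can be sketched as a small shell function. The helper name, the comma-separated "shim:snapshotter" mapping format, and the assumption that runtime-rs shims carry a `-runtime-rs` suffix are illustrative guesses, not kata-deploy's actual code:

```shell
# Hypothetical sketch of the configure_erofs_snapshotter() guard.
# Assumes SNAPSHOTTER_HANDLER_MAPPING looks like
# "shim:snapshotter[,shim:snapshotter...]" and that runtime-rs shims
# are named with a -runtime-rs suffix.
check_erofs_mapping() {
    local mapping="$1" entry shim snapshotter
    for entry in $(echo "$mapping" | tr ',' ' '); do
        shim="${entry%%:*}"
        snapshotter="${entry#*:}"
        # erofs is only usable with runtime-rs; bail out on Go shims.
        if [ "$snapshotter" = "erofs" ] && [ "${shim%-runtime-rs}" = "$shim" ]; then
            echo "ERROR: shim '$shim' is a Go shim; erofs requires runtime-rs" >&2
            return 1
        fi
    done
    return 0
}
```

A mapping such as `qemu-coco-dev-runtime-rs:erofs,qemu:overlayfs` passes, while `qemu-coco-dev:erofs` is rejected with a clear error, matching the "warn and bail out" behavior the commit describes.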
Fabiano Fidêncio
cf1e6f82f2 tests: Show full kata-deploy pod logs in CI
Remove --tail=N limits from `kubectl logs` for kata-deploy pods so
the complete output is visible in CI job logs for debugging.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2026-04-19 13:24:31 +02:00
Alex Lyn
c26f647a3a test: Improve process verification and robustness in kill test
During tests, the following error was seen:
```
..k8s-kill-all-process-in-container.bats: line 40: [: too many arguments
```
This commit addresses the issue as follows:
(1) Update process query command to "ps aux || ps" to ensure
  compatibility across different container images while maximizing
  process visibility.
(2) Use "[t]ail" in grep to reliably match the process without
  self-matching.
(3) Quote variable in assertion to resolve "too many arguments" bash
  error.
(4) Improve test reliability by ensuring the process list is actually
  visible to the verification logic.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-04-19 13:24:31 +02:00
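The "[t]ail" trick from item (2) can be seen in a standalone sketch (the `ps` output below is fabricated for illustration): the bracket expression still matches the literal string "tail", but not grep's own command line, which contains "[t]ail" instead.

```shell
# Simulated `ps aux` output; in the real test this comes from the
# container. Note the grep process shows "[t]ail", not "tail".
ps_output="root     1  0.0 tail -f /dev/null
root     7  0.0 grep [t]ail"

# The regex "[t]ail" matches "tail" but not the literal "[t]ail",
# so grep never matches its own entry in the process list.
match=$(echo "$ps_output" | grep "[t]ail")

# Quoting "$match" avoids "[: too many arguments" when the variable
# holds multiple whitespace-separated fields (item 3 above).
[ -n "$match" ] && echo "process found"
```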
Alex Lyn
f4f6c78e9e tests: Update expectation for no-layer-image test case
The 'no-layer-image' test case was failing because the underlying shim
returned a "unsupported rootfs mounts count" error instead of the
expected application-level "file not found" or "ENOENT" error.

This change updates the BATS test to accept the shim-level rootfs
validation error as a valid failure condition for this unsupported
image scenario, ensuring the CI remains green while reflecting
current runtime behavior.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-04-19 13:24:31 +02:00
Alex Lyn
be47c2e932 runtime-rs: Avoid share-rw on readonly virtio-scsi/blk devices
Hotplugging a readonly block device could fail with:

  Block node is read-only

The backend block node was created readonly, but the virtio-scsi/blk
frontend path still forced share-rw=true. This is unnecessary and can
cause QEMU to reject the attach because the frontend configuration
does not match the readonly backend.

Fix the virtio-scsi/blk hotplug path by:
- setting read-only for readonly devices where supported
- skipping share-rw for readonly devices

Readonly handling remains in the backend block node configuration,
while the frontend keeps normal disk semantics for block devices.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-04-19 13:24:31 +02:00
Alex Lyn
02f975f88b runtime-rs: Enforce read-only and shared access for RO block devices
Explicitly configure `read_only` and `force_share` for readonly block
devices to ensure consistency between the image's read-only state and
QEMU's access mode.

Motivation:
Previously, EROFS images were being accessed in a way that triggered
QEMU's exclusive locking (e.g., the 'resize' lock), even when the images
were intended to be read-only. This conflicted with external processes
(e.g., containerd snapshotter) that held read-only handles, resulting in
"Failed to get shared 'resize' lock" errors during blockdev-add.

Changes:
- Set `read_only=true` and `force_share=true` on both format and file
  nodes for VMDK descriptors and Raw images.
- This ensures QEMU requests shared locks, correctly matching the
  read-only nature of EROFS filesystems and preventing write-mode
  locking conflicts with concurrent processes.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-04-19 13:24:31 +02:00
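In QMP terms, the change amounts to requesting shared locks on both the format and file nodes of the `blockdev-add` call. A sketch of such a call (node name and filename invented for illustration):

```json
{
  "execute": "blockdev-add",
  "arguments": {
    "driver": "raw",
    "node-name": "erofs-layer-0",
    "read-only": true,
    "force-share": true,
    "file": {
      "driver": "file",
      "filename": "/path/to/layer-0.erofs",
      "read-only": true,
      "force-share": true
    }
  }
}
```

With `read-only` plus `force-share` set, QEMU takes shared rather than exclusive file locks, so a concurrent read-only handle held by the snapshotter no longer triggers the "Failed to get shared 'resize' lock" failure.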
Fabiano Fidêncio
9c803d86a6 ci: erofs: Bump containerd to v2.3
To ensure we're using the latest released version of the project, as I
think we're missing patches on v2.2.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2026-04-19 13:24:31 +02:00
Fabiano Fidêncio
cdd09c3c65 ci: enable erofs tests with runtime-rs
Now that erofs snapshotter support has been added, let's make sure it is tested.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2026-04-19 13:24:31 +02:00
Alex Lyn
7f7cca16fa kata-deploy: Complete containerd config for erofs snapshotter
Add missing containerd configuration items for erofs snapshotter to
enable fsmerged erofs feature:

Add snapshotter plugin configuration:
 - default_size: "10G" # can be customized
 - max_unmerged_layers: 1 # Fixed with 1

These configurations align with the documentation in
docs/how-to/how-to-use-fsmerged-erofs-with-kata.md Step 2,
ensuring the CI workflow run-k8s-tests-coco-nontee-with-erofs-snapshotter
can properly configure containerd for erofs fsmerged rootfs.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-04-19 13:24:31 +02:00
Fabiano Fidêncio
04e0f1c403 qemu: Enable VMDK block format support
The multi-layer EROFS rootfs feature relies on QEMU's VMDK flat-extent
driver to merge multiple EROFS layers into a single virtual block
device. Replace --disable-vmdk with an explicit --enable-vmdk so the
Kata static QEMU build includes VMDK support.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2026-04-19 13:24:31 +02:00
Alex Lyn
27341f45f1 docs: Add how-to guide for using fsmerged EROFS rootfs with Kata
Document the end-to-end workflow for using the containerd EROFS
snapshotter with Kata Containers runtime-rs, covering containerd
configuration, Kata QEMU settings, and pod deployment examples
via crictl/ctr/Kubernetes.

Include prerequisites (containerd >= 2.2, runtime-rs main branch),
QEMU VMDK format verification command, architecture diagram,
VMDK descriptor format reference, and troubleshooting guide.

Note that Cloud Hypervisor, Firecracker, and Dragonball do not
support VMDK block devices and are currently unsupported for
fsmerged EROFS rootfs.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-04-19 13:24:31 +02:00
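For readers unfamiliar with the format, a VMDK flat-extent descriptor is a small text file that stitches raw extent files into one virtual disk. A hypothetical two-layer example (createType, sector counts, and file names invented; see the referenced how-to document for the actual format):

```
# Disk DescriptorFile
version=1
CID=fffffffe
parentCID=ffffffff
createType="monolithicFlat"

# One flat extent per EROFS layer, each mapping a raw file into the disk
RW 204800 FLAT "layer-0.erofs" 0
RW 204800 FLAT "layer-1.erofs" 0
```

Each extent line maps a raw file into a sector range of the virtual disk, which is how multiple EROFS layers can be presented to the guest as a single block device.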
Alex Lyn
526126904e runtime-rs: Add support for handling vmdk hotplugging with scsi
We should also support the virtio-scsi driver for handling VMDK-format
block devices, which will help cover more cases.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-04-19 13:24:31 +02:00
Alex Lyn
ce3473d272 agent: Kill processes before removing container directory in destroy()
When using multi-layer EROFS snapshotter, the destroy() method fails to
kill container processes, causing process leaks in shared PID namespace
scenarios.

Problem Background:
1. Multi-layer EROFS creates temporary mount points under the container's
  root directory:
  - /run/kata-containers/<cid>/multi-layer/upper (ext4, writable)
  - /run/kata-containers/<cid>/multi-layer/lower-0 (EROFS, read-only)
2. The original destroy() method executed in this order:
  (1) umount rootfs
  (2) fs::remove_dir_all(&self.root) <- FAILS with "Read-only file system"
  (3) cgroup cleanup and process killing <- NEVER EXECUTED
3. When remove_dir_all() encounters the read-only EROFS mount point, it
  returns EROFS error (os error 30), causing destroy() to exit early
  without killing processes.

Why This Fix:
1. The test case k8s-kill-all-process-in-container.bats creates an init
  container with a background process (tail -f /dev/null), expecting it
  to be killed when the init container is destroyed.
2. With shared PID namespace (shareProcessNamespace: true), the orphaned
  process continues running, causing the test to fail.

Solution:
1. Reorder the destroy() method to kill processes BEFORE attempting to
  remove the container directory:
  (1) Get PIDs from cgroup and send SIGKILL
  (2) Destroy cgroup
  (3) umount rootfs
  (4) fs::remove_dir_all(&self.root)
2. This ensures processes are always killed regardless of filesystem
  cleanup status, matching the behavior of overlayfs snapshotter.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-04-19 13:24:31 +02:00
Alex Lyn
c745d18e00 agent: Add virtio-scsi for multilayer erofs storage handler
It aims to support the virtio-scsi driver for handling vmdk and rwlayer
storage in kata-agent.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-04-19 13:24:31 +02:00
Alex Lyn
37a542c20f agent: Refactor multi-layer EROFS handling with unified flow
Refactor the multi-layer EROFS storage handling to improve code
maintainability and reduce duplication.

Key changes:
(1) Extract update_storage_device() to unify device state management
  for both multi-layer and standard storages
(2) Simplify handle_multi_layer_storage() to focus on device creation,
  returning MultiLayerProcessResult struct instead of managing state
(3) Unify the processing flow in add_storages() with clear separation
(4) Support multiple EROFS lower layers with dynamic lower-N mount paths
(5) Improve mkdir directive handling with deferred {{ mount 1 }}
  resolution

This reduces code duplication, improves readability, and makes the
storage handling logic more consistent across different storage types.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-04-19 13:24:31 +02:00
Alex Lyn
27c59f15a0 agent: Register MultiLayerErofsHandler and process multiple EROFS
Introduce MultiLayerErofsHandler and the handle_multi_layer_storage
method for multi-layer storage:
(1) Register MultiLayerErofsHandler to STORAGE_HANDLERS to handle
multi-layer EROFS storage with driver type 'multi-layer-erofs'.
(2) Add handle_multi_layer_erofs function to process multiple EROFS
storages with X-kata.multi-layer marker together in guest.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-04-19 13:24:31 +02:00
Alex Lyn
6ce9180333 agent: Add support for EROFS rootfs handling in kata-agent
Add multi_layer_erofs.rs implementing the guest-side processing logic
for multi-layer EROFS rootfs with overlay mount support.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-04-19 13:24:31 +02:00
Alex Lyn
d8db044c63 runtime-rs: Add erofs rootfs handling logic in handler_rootfs
Add handling for multi-layer EROFS rootfs in the RootFsResource
handler_rootfs method, so that multi-layer EROFS rootfs is handled
correctly.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-04-19 13:24:31 +02:00
Alex Lyn
8d7051436a runtime-rs: Add support for erofs rootfs with multi-layer
Add erofs_rootfs.rs implementing ErofsMultiLayerRootfs for
multi-layer EROFS rootfs with VMDK descriptor generation.

It's the core implementation of the EROFS rootfs within the runtime.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-04-19 13:24:31 +02:00
Alex Lyn
cb706219ae runtime-rs: Change Rootfs::get_storage return type
Change Rootfs::get_storage to return Option<Vec<Storage>>
to support multi-layer rootfs with multiple storages.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-04-18 22:46:33 +02:00
Alex Lyn
c06bc388c2 runtime-rs: Add format argument to hotplug_block_device method
Add format argument to hotplug_block_device for flexibly specifying
different block formats.
With this, we can support multiple formats; currently raw and vmdk are
supported, and other formats will follow in the future.

Besides the format itself, the corresponding handling logic is also
required to properly set the options each format needs in QMP
blockdev-add.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-04-18 22:46:33 +02:00
Alex Lyn
15740439eb runtime-rs: Add BlockDeviceFormat enum to support more block formats
In practice, we need more kinds of block formats, not just `Raw`.

This commit adds a BlockDeviceFormat enum to support multiple block
device formats, such as RAW and VMDK, and makes the accompanying changes
needed for this to work, including a format field in BlockConfig.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-04-18 19:00:44 +02:00
Alex Lyn
8ed4fa1406 runtime-rs: Add RUNTIME_ALLOW_MOUNTS to RuntimeInfo
Add RUNTIME_ALLOW_MOUNTS annotation to RuntimeInfo to specify
custom mount types allowed by the runtime.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-04-18 19:00:44 +02:00
Fabiano Fidêncio
614cd0618e Merge pull request #12841 from kata-containers/topic/arm-add-qemu-coco-dev
runtime-rs: arm64: ci: Enable qemu-coco-dev tests
2026-04-18 12:22:58 +02:00
Fabiano Fidêncio
edfaeec316 tests: arm64: Skip tests which do not have a multi-arch image
The image used has some special (read: weird) properties that are being
taken advantage of to implement policy-related tests.

Changing the image is a no-go at this point, otherwise we break the
tests ... so let's just skip those for now.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2026-04-18 00:48:13 +02:00
Fabiano Fidêncio
d04bb98e09 runtime-rs: Increase reconnect_timeout_ms for confidential VMs
The Go runtime's CoCo dev config uses dial_timeout = 45s, but all
runtime-rs confidential VM configs had reconnect_timeout_ms set to
3000ms (3s) or 5000ms (SE). This is too short for confidential VMs,
especially on arm64 where UEFI firmware (AAVMF) adds significant
boot time on top of the measured boot process, causing ECONNRESET
errors on the vsock connection before the agent is ready.

Bump reconnect_timeout_ms to 45000ms across all confidential VM
configs (coco-dev, SNP, TDX, SE) to match the Go runtime.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Made-with: Cursor
2026-04-18 00:48:13 +02:00
Fabiano Fidêncio
35e48fdfd1 ci: run qemu-coco-dev-runtime-rs tests on arm64
Add qemu-coco-dev-runtime-rs to the arm64 k8s test matrix so that the
CoCo non-TEE configuration is exercised on aarch64 runners.

Also enable auto-generated policy for qemu-coco-dev on aarch64 (matching
the existing x86_64 behavior) and register the new job as a required
gatekeeper check.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Made-with: Cursor
2026-04-18 00:48:13 +02:00
Fabiano Fidêncio
588a67a3fb kata-deploy: add arm64 support for qemu-coco-dev shims
Add aarch64/arm64 to the list of supported architectures for
qemu-coco-dev and qemu-coco-dev-runtime-rs shims across kata-deploy
configuration, Helm chart values, and test helper scripts.

Note that guest-components and the related build dependencies are not
yet wired for arm64 in these configurations; those will be addressed
separately.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Made-with: Cursor
2026-04-18 00:48:13 +02:00
Fabiano Fidêncio
861f15cdc4 build: add arm64 coco-dev build dependencies
Build coco-guest-components, pause-image, and rootfs-image-confidential
for arm64, which are required by qemu-coco-dev-runtime-rs.

Enable MEASURED_ROOTFS on the arm64 shim-v2 build, add the aarch64 case
to install_kernel() so the default kernel is built as a unified kernel
(with confidential guest support, like x86_64), and adjust the kernel
install naming so only CCA builds get the -confidential suffix.

Also wire rootfs-image-confidential-tarball into the aarch64 local-build
Makefile.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Made-with: Cursor
2026-04-18 00:48:13 +02:00
Fabiano Fidêncio
e1f8b8e8b4 build: add arm64 tools build (genpolicy only)
The arm64 build workflow was missing the tools build entirely.
Add build-tools-asset and create-kata-tools-tarball jobs mirroring
the amd64 workflow so that genpolicy and the other tools are
available for coco-dev tests that need auto-generated policy.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Made-with: Cursor
2026-04-18 00:48:02 +02:00
Fabiano Fidêncio
0ee556a40a Merge pull request #12874 from fidencio/topic/nydus-update-to-v0.15.15
versions: Update nydus-snapshotter to v0.15.15
2026-04-17 22:21:34 +02:00
Saul Paredes
6f6e45522e Merge pull request #11562 from Apokleos/clh-initdata
runtime-rs: Add CoCo/protected device for initdata within runtime-rs/Cloud Hypervisor
2026-04-17 11:09:19 -07:00
Fabiano Fidêncio
690f5a2b62 Merge pull request #12862 from fidencio/topic/runtime-rs-enable-measured-rootfs-tests
runtime-rs: enable measured rootfs for qemu-coco-dev-runtime-rs
2026-04-17 18:48:47 +02:00
Fabiano Fidêncio
3512241cbb versions: Update nydus-snapshotter to v0.15.15
The release brings in CVE and security fixes for nydus-snapshotter deps.
See: https://github.com/containerd/nydus-snapshotter/releases/tag/v0.15.15

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2026-04-17 18:04:59 +02:00
stevenhorsman
35be1a938d versions: Bump rand crate where possible
Update all versions of rand that are controlled by us to remediate
GHSA-cq8v-f236-94qc.

Note: There are still some usages of rand 0.8.5 that come from
transitive dependencies which we can't currently update:
- fail
- phf_generator
- opentelemetry
due to them being archived, or our usage being 17 versions out of date.

Also update for the rand API breakages, e.g.:
- rand::thread_rng() → rand::rng() (function renamed)
- rand::distributions::Alphanumeric → rand::distr::Alphanumeric (module renamed)
- rng.gen_range() → rng.random_range() (function renamed)

Assisted-by: IBM Bob
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2026-04-17 15:58:58 +01:00
Fabiano Fidêncio
1ec0e344e5 runtime-rs: enable measured rootfs for qemu-coco-dev-runtime-rs
Add kernel_verity_params to the qemu-coco-dev-runtime-rs configuration
so the runtime can assemble dm-verity kernel parameters, and remove the
test skip that was disabling measured rootfs tests for this hypervisor.

Fixes: #12851

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2026-04-17 15:22:17 +02:00
Fabiano Fidêncio
fd8973d1c0 Merge pull request #11826 from squarti/termination-logs
agent: termination logs for share_fs=none
2026-04-17 15:16:14 +02:00
Fabiano Fidêncio
7205fd8579 tests: add integration tests for termination log via GetDiagnosticData
Add BATS tests for the GetDiagnosticData termination log feature on
CoCo platforms where shared_fs=none.

Three test cases cover:
- Successful exit (exit 0): termination message is propagated when
  GetDiagnosticDataRequest is allowed by policy.
- Failed exit (exit 1): termination message is propagated when
  GetDiagnosticDataRequest is allowed by policy.
- Policy denied: with default CoCo policy (GetDiagnosticDataRequest
  is false), the container stops cleanly but no termination message
  is propagated (best-effort behavior).

Tests are skipped on non-CoCo platforms where shared_fs is not "none".

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2026-04-17 13:16:25 +02:00
Fabiano Fidêncio
eda3bc6190 runtime-rs: wire GetDiagnosticData for termination logs
Add runtime-rs support for the GetDiagnosticData RPC. This extends
the Agent trait, types, and protocol translation layer with the new
request/response types.

During container stop, when shared_fs is "none" and the
terminationMessagePolicy annotation is "File", the runtime copies
the termination log from the guest via GetDiagnosticData. The call
is best-effort to avoid blocking container teardown.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2026-04-17 13:16:25 +02:00
Fabiano Fidêncio
411f8cf583 genpolicy: policy-gate GetDiagnosticDataRequest
Add policy rules for the new GetDiagnosticDataRequest RPC.
The request is denied by default in genpolicy-generated policies,
ensuring CoCo workloads do not expose diagnostic data unless
explicitly opted in via policy_data.request_defaults.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Signed-off-by: Silenio Quarti <silenio_quarti@ca.ibm.com>
2026-04-17 13:16:25 +02:00
Fabiano Fidêncio
64c139208f agent: add GetDiagnosticData RPC with termination log support
Add a new extensible GetDiagnosticData RPC that retrieves diagnostic
information from the guest VM. The request carries a log_type string
field to specify what kind of data is requested, and a container_id
field to identify the target container.

The first supported log_type is "termination_log", which reads the
Kubernetes termination message file from inside the guest. This is
needed for shared_fs=none configurations where the host cannot
directly access the guest filesystem.

On the Go runtime side, the container stop() path now calls
GetDiagnosticData to copy the termination message to the host
when running with NoSharedFS and the terminationMessagePolicy
annotation is set to "File". The call is best-effort: failures
are logged as warnings rather than blocking container teardown.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Signed-off-by: Silenio Quarti <silenio_quarti@ca.ibm.com>
2026-04-17 13:01:13 +02:00
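Based on the fields named above, the RPC shape might look roughly like the following protobuf sketch; the field numbers and the response layout are assumptions, not the actual agent protocol definition:

```proto
// Hypothetical sketch of the GetDiagnosticData RPC described above.
message GetDiagnosticDataRequest {
    string log_type = 1;      // e.g. "termination_log"
    string container_id = 2;  // target container
}

message GetDiagnosticDataResponse {
    bytes data = 1;           // requested diagnostic payload
}
```

Keeping `log_type` a free-form string is what makes the RPC extensible: new diagnostic kinds can be added later without changing the message shape.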
Steve Horsman
1db12f8ccf Merge pull request #12812 from stevenhorsman/tee-test-refactor
ci: Refactor confidential TEE support
2026-04-17 11:12:13 +01:00
Steve Horsman
e4b3ba56dd Merge pull request #12855 from stevenhorsman/increase-stale-issues-frequency
ci: increase stale issues workflow frequency
2026-04-17 08:37:20 +01:00
stevenhorsman
1dc57c6cef ci: increase stale issues workflow frequency
Update the stale issues workflow to run more frequently:
- Weekdays: every six hours (4x per day) at 00:00, 06:00, 12:00, 18:00 UTC
- Weekends: every hour (24x per day)

Previously it ran once daily at midnight UTC. This change reduces the
time it will take for us to get through our backlog, particularly by
increasing the runs at the weekend, when we should have less other CI
running that the extra runs could impact via GH API rate limiting.

Assisted-by: IBM Bob
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2026-04-16 20:50:38 +01:00
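In GitHub Actions cron terms, a schedule matching the times listed above could be expressed as follows; this is a sketch, not the project's actual workflow file:

```yaml
on:
  schedule:
    # Weekdays (Mon-Fri): 00:00, 06:00, 12:00, 18:00 UTC
    - cron: '0 0,6,12,18 * * 1-5'
    # Weekends (Sun=0, Sat=6): every hour
    - cron: '0 * * * 0,6'
```

Multiple `cron` entries under `schedule` are additive, which is how a single workflow can run on different cadences for weekdays and weekends.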