# Cloud Storage for Snapshots

Kubeshark can upload and download snapshots to cloud object storage, enabling cross-cluster sharing, backup/restore, and long-term retention.

Supported providers: **Amazon S3** (`s3`), **Azure Blob Storage** (`azblob`), and **Google Cloud Storage** (`gcs`).

## Helm Values

```yaml
tap:
  snapshots:
    cloud:
      provider: ""    # "s3", "azblob", or "gcs" (empty = disabled)
      prefix: ""      # key prefix in the bucket/container (e.g. "snapshots/")
      configMaps: []  # names of pre-existing ConfigMaps with cloud config env vars
      secrets: []     # names of pre-existing Secrets with cloud credentials
      s3:
        bucket: ""
        region: ""
        accessKey: ""
        secretKey: ""
        roleArn: ""
        externalId: ""
      azblob:
        storageAccount: ""
        container: ""
        storageKey: ""
      gcs:
        bucket: ""
        project: ""
        credentialsJson: ""
```

- `provider` selects which cloud backend to use. Leave it empty to disable cloud storage.
- `configMaps` and `secrets` are lists of names of existing ConfigMap/Secret resources. They are mounted as `envFrom` on the hub pod, injecting all of their keys as environment variables.

### Inline Values (Alternative to External ConfigMaps/Secrets)

Instead of creating ConfigMap and Secret resources manually, you can set cloud storage configuration directly in `values.yaml` or via `--set` flags; the Helm chart then creates the necessary ConfigMap and Secret resources automatically. Both approaches can be used together — inline values are additive to external `configMaps`/`secrets` references.
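As a sketch of the external-resources approach for S3 (the resource names here are illustrative), a ConfigMap can carry the non-sensitive settings and a Secret the credentials; because both are mounted via `envFrom`, every key becomes an environment variable on the hub pod:

```yaml
# ConfigMap with non-sensitive S3 settings (name is illustrative)
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubeshark-s3-config
data:
  SNAPSHOT_AWS_BUCKET: my-kubeshark-snapshots
  SNAPSHOT_AWS_REGION: us-east-1
---
# Secret with static credentials (omit entirely when using IRSA)
apiVersion: v1
kind: Secret
metadata:
  name: kubeshark-s3-creds
type: Opaque
stringData:
  SNAPSHOT_AWS_ACCESS_KEY: AKIA...
  SNAPSHOT_AWS_SECRET_KEY: wJal...
```

Both resources are then referenced by name in `values.yaml`:

```yaml
tap:
  snapshots:
    cloud:
      provider: "s3"
      configMaps:
        - kubeshark-s3-config
      secrets:
        - kubeshark-s3-creds
```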
---

## Amazon S3

### Environment Variables

| Variable | Required | Description |
|----------|----------|-------------|
| `SNAPSHOT_AWS_BUCKET` | Yes | S3 bucket name |
| `SNAPSHOT_AWS_REGION` | No | AWS region (uses the SDK default if empty) |
| `SNAPSHOT_AWS_ACCESS_KEY` | No | Static access key ID (empty = use the default credential chain) |
| `SNAPSHOT_AWS_SECRET_KEY` | No | Static secret access key |
| `SNAPSHOT_AWS_ROLE_ARN` | No | IAM role ARN to assume via STS (for cross-account access) |
| `SNAPSHOT_AWS_EXTERNAL_ID` | No | External ID for the STS AssumeRole call |
| `SNAPSHOT_CLOUD_PREFIX` | No | Key prefix in the bucket (e.g. `snapshots/`) |

### Authentication Methods

Credentials are resolved in this order:

1. **Static credentials** -- If `SNAPSHOT_AWS_ACCESS_KEY` is set, static credentials are used directly.
2. **STS AssumeRole** -- If `SNAPSHOT_AWS_ROLE_ARN` is also set, the static (or default) credentials are used to assume the given IAM role. This is useful for cross-account S3 access.
3. **AWS default credential chain** -- When no static credentials are provided, the SDK default chain is used:
   - **IRSA** (EKS service account token) -- recommended for production on EKS
   - EC2 instance profile
   - Standard AWS environment variables (`AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, etc.)
   - Shared credentials file (`~/.aws/credentials`)

The provider validates bucket access on startup via `HeadBucket`. If the bucket is inaccessible, the hub will fail to start.

### Example: Inline Values (simplest approach)

```yaml
tap:
  snapshots:
    cloud:
      provider: "s3"
      s3:
        bucket: my-kubeshark-snapshots
        region: us-east-1
```

Or with static credentials via `--set`:

```bash
helm install kubeshark kubeshark/kubeshark \
  --set tap.snapshots.cloud.provider=s3 \
  --set tap.snapshots.cloud.s3.bucket=my-kubeshark-snapshots \
  --set tap.snapshots.cloud.s3.region=us-east-1 \
  --set tap.snapshots.cloud.s3.accessKey=AKIA... \
  --set tap.snapshots.cloud.s3.secretKey=wJal...
```

### Example: IRSA (recommended for EKS)

[IAM Roles for Service Accounts (IRSA)](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) lets EKS pods assume an IAM role without static credentials. EKS injects a short-lived token into the pod automatically.

**Prerequisites:**

1. Your EKS cluster must have an [OIDC provider](https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html) associated with it.
2. An IAM role with a trust policy that allows the Kubeshark service account to assume it.

**Step 1 — Create an IAM policy scoped to your bucket:**

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:GetObjectVersion",
        "s3:DeleteObjectVersion",
        "s3:ListBucket",
        "s3:ListBucketVersions",
        "s3:GetBucketLocation",
        "s3:GetBucketVersioning"
      ],
      "Resource": [
        "arn:aws:s3:::my-kubeshark-snapshots",
        "arn:aws:s3:::my-kubeshark-snapshots/*"
      ]
    }
  ]
}
```

> For read-only access, remove `s3:PutObject`, `s3:DeleteObject`, and `s3:DeleteObjectVersion`.

**Step 2 — Create an IAM role with an IRSA trust policy:**

```bash
# Get your cluster's OIDC provider URL
OIDC_PROVIDER=$(aws eks describe-cluster --name CLUSTER_NAME \
  --query "cluster.identity.oidc.issuer" --output text | sed 's|https://||')

# Create a trust policy.
# The default K8s SA name is "<release>-service-account" (e.g. "kubeshark-service-account").
# Replace ACCOUNT_ID and NAMESPACE with your values.
cat > trust-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::ACCOUNT_ID:oidc-provider/${OIDC_PROVIDER}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${OIDC_PROVIDER}:sub": "system:serviceaccount:NAMESPACE:kubeshark-service-account",
          "${OIDC_PROVIDER}:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}
EOF

# Create the role and attach the policy from Step 1 (role name is illustrative)
aws iam create-role --role-name kubeshark-snapshots \
  --assume-role-policy-document file://trust-policy.json
aws iam attach-role-policy --role-name kubeshark-snapshots \
  --policy-arn POLICY_ARN
```

**Step 3 — Set Helm values** — the `tap.annotations` field adds the IRSA annotation to the service account:

```yaml
tap:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::ACCOUNT_ID:role/kubeshark-snapshots
  snapshots:
    cloud:
      provider: "s3"
      s3:
        bucket: my-kubeshark-snapshots
        region: us-east-1
```

---

## Google Cloud Storage

### Example: Workload Identity (recommended for GKE)

Bind the Kubernetes service account to the GCP service account:

```bash
# The default K8s SA name is "<release>-service-account" (default: "kubeshark-service-account")
gcloud iam service-accounts add-iam-policy-binding \
  kubeshark-gcs@PROJECT_ID.iam.gserviceaccount.com \
  --role="roles/iam.workloadIdentityUser" \
  --member="serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/kubeshark-service-account]"
```

Set Helm values — the `tap.annotations` field adds the Workload Identity annotation to the service account:

```yaml
tap:
  annotations:
    iam.gke.io/gcp-service-account: kubeshark-gcs@PROJECT_ID.iam.gserviceaccount.com
  snapshots:
    cloud:
      provider: "gcs"
      configMaps:
        - kubeshark-gcs-config
```

Or via `--set`:

```bash
helm install kubeshark kubeshark/kubeshark \
  --set tap.snapshots.cloud.provider=gcs \
  --set tap.snapshots.cloud.gcs.bucket=BUCKET_NAME \
  --set tap.snapshots.cloud.gcs.project=PROJECT_ID \
  --set tap.annotations."iam\.gke\.io/gcp-service-account"=kubeshark-gcs@PROJECT_ID.iam.gserviceaccount.com
```

No `credentialsJson` secret is needed — GKE injects credentials automatically via the Workload Identity metadata server.

### Example: Service Account Key

Create a Secret with the service account JSON key:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: kubeshark-gcs-creds
type: Opaque
stringData:
  SNAPSHOT_GCS_CREDENTIALS_JSON: |
    {
      "type": "service_account",
      "project_id": "my-gcp-project",
      "private_key_id": "...",
      "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
      "client_email": "kubeshark@my-gcp-project.iam.gserviceaccount.com",
      ...
    }
```

Create a ConfigMap with bucket configuration:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubeshark-gcs-config
data:
  SNAPSHOT_GCS_BUCKET: my-kubeshark-snapshots
  SNAPSHOT_GCS_PROJECT: my-gcp-project
```

Set Helm values:

```yaml
tap:
  snapshots:
    cloud:
      provider: "gcs"
      configMaps:
        - kubeshark-gcs-config
      secrets:
        - kubeshark-gcs-creds
```
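Before pasting a service-account key into the Secret, you can sanity-check that it is valid JSON and carries the fields GCS client libraries expect. This is a quick illustrative check, not part of Kubeshark:

```python
import json

# Minimal fields a GCP service-account key file is expected to carry.
REQUIRED_FIELDS = {"type", "project_id", "private_key", "client_email"}

def check_sa_key(raw: str) -> list[str]:
    """Return a list of problems found in a service-account key JSON string."""
    problems = []
    try:
        key = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"not valid JSON: {exc}"]
    if key.get("type") != "service_account":
        problems.append('"type" should be "service_account"')
    missing = REQUIRED_FIELDS - key.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    return problems

# Example: a key that is missing its private_key field
sample = (
    '{"type": "service_account", "project_id": "my-gcp-project",'
    ' "client_email": "x@y.iam.gserviceaccount.com"}'
)
print(check_sa_key(sample))  # -> ["missing fields: ['private_key']"]
```

An empty list means the key at least has the expected shape; it does not prove the key is active or has access to the bucket.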