mirror of https://github.com/kubeshark/kubeshark.git
synced 2026-05-17 17:14:23 +00:00

Add install skill for Kubeshark deployment guidance (#1933)

* Add install skill for Kubeshark deployment guidance

  New skill that helps users install and configure Kubeshark with a clear
  CLI vs Helm decision tree, opinionated production defaults, and
  platform-specific storage class recommendations.

* Add user-invocable flag to install skill frontmatter

* Add backup/overwrite check guidance for ~/.kubeshark/ config files

Co-authored-by: Alon Girmonsky <alongir@Alons-Mac-Studio.local>
@@ -14,6 +14,8 @@ compatible agents.

| Skill | Description |
|-------|-------------|
| [`network-rca`](network-rca/) | Network Root Cause Analysis. Retrospective traffic analysis via snapshots, with two investigation routes: PCAP (for Wireshark/compliance) and Dissection (for AI-driven API-level investigation). |
| [`kfl`](kfl/) | KFL2 (Kubeshark Filter Language) expert. Complete reference for writing, debugging, and optimizing CEL-based traffic filters across all supported protocols. |
| [`security-audit`](security-audit/) | Network Security Audit. Systematic 8-phase threat detection across MITRE ATT&CK tactics — C2, exfiltration, lateral movement, credential theft, cryptomining, protocol abuse — using snapshot-based traffic analysis. |
| [`install`](install/) | Installation & Deployment. Guides CLI and Helm installation, builds custom values files, handles platform-specific config (EKS/GKE/AKS/OpenShift/KinD), auth, ingress, cloud storage, and troubleshooting. |

## Prerequisites
489 skills/install/SKILL.md Normal file

@@ -0,0 +1,489 @@
---
name: install
user-invocable: true
description: >
  Kubeshark installation and deployment skill. Use this skill whenever the user wants
  to install Kubeshark, deploy Kubeshark to a Kubernetes cluster, set up Kubeshark,
  configure Kubeshark helm values, generate a Kubeshark config file, customize
  Kubeshark deployment, troubleshoot Kubeshark installation, upgrade Kubeshark,
  uninstall Kubeshark, or manage the Kubeshark Helm release. Also trigger when
  the user mentions "kubeshark tap", "kubeshark clean", "helm install kubeshark",
  "get kubeshark running", "set up traffic capture", "deploy kubeshark",
  "kubeshark not starting", "kubeshark pods not ready", "configure namespaces",
  "persistent storage", "cloud storage for snapshots", "kubeshark ingress",
  "kubeshark auth", "kubeshark SAML", "kubeshark license", "kubeshark config",
  "custom helm values", "kubeshark on EKS/GKE/AKS", "kubeshark on OpenShift",
  "kubeshark on KinD/minikube/k3s", "air-gapped", "offline install",
  or any request related to getting Kubeshark installed, configured, and running
  in a Kubernetes cluster.
---

# Kubeshark Installation & Deployment

You are a Kubeshark deployment specialist. Your job is to help users install,
configure, and deploy Kubeshark to their Kubernetes cluster — tailoring the
configuration to their specific environment, requirements, and use case.

Kubeshark deploys via Helm. The CLI (`kubeshark tap`) is a thin wrapper that
installs a basic Helm chart and establishes a port-forward — nothing more.
For larger or production clusters, use Helm directly with a custom values file.

## Decision: CLI or Helm?

**Use the CLI** when:

- Quick install on a dev/test cluster (minikube, KinD, k3s)
- Personal environment, single user
- Just want to try Kubeshark quickly

**Use Helm directly** when:

- Larger cluster (staging, production)
- Need custom configuration (ingress, auth, storage, namespaces)
- GitOps / infrastructure-as-code workflows
- Team environment
## Path A: CLI (Dev/Test Clusters)

### Step 1 — Install the CLI

Check if Kubeshark is already installed:

```bash
kubeshark version
```

If not installed, offer one of these methods:

**Homebrew (easiest, where available):**

```bash
brew tap kubeshark/kubeshark
brew install kubeshark
```

**Binary download:**

For the full list of platforms and architectures, see https://docs.kubeshark.com/en/install

```bash
# Linux (amd64)
curl -Lo kubeshark https://github.com/kubeshark/kubeshark/releases/latest/download/kubeshark_linux_amd64
chmod +x kubeshark
sudo mv kubeshark /usr/local/bin/

# Linux (arm64)
curl -Lo kubeshark https://github.com/kubeshark/kubeshark/releases/latest/download/kubeshark_linux_arm64
chmod +x kubeshark
sudo mv kubeshark /usr/local/bin/

# macOS (Apple Silicon)
curl -Lo kubeshark https://github.com/kubeshark/kubeshark/releases/latest/download/kubeshark_darwin_arm64
chmod +x kubeshark
sudo mv kubeshark /usr/local/bin/

# macOS (Intel)
curl -Lo kubeshark https://github.com/kubeshark/kubeshark/releases/latest/download/kubeshark_darwin_amd64
chmod +x kubeshark
sudo mv kubeshark /usr/local/bin/
```
### Step 2 — Check for Updates

**Always check for updates before using the CLI.** This is critical — Kubeshark
releases frequently and running an outdated version can cause issues.

```bash
# Homebrew
brew upgrade kubeshark

# Binary — check the latest release and re-download if newer
kubeshark version
# Compare with https://github.com/kubeshark/kubeshark/releases/latest
```

### Step 3 — Deploy with `kubeshark tap`

```bash
kubeshark tap
```

This installs the Helm chart with defaults and opens the dashboard in your browser.
That's it for dev/test clusters.
### Step 4 — Reconnect if Connection Breaks

If the port-forward drops (laptop sleep, network change, terminal closed):

```bash
kubeshark proxy
```

This re-establishes the port-forward and reopens the dashboard. It does **not**
reinstall — Kubeshark is still running in the cluster.

### Step 5 — Clean Up After Use

**Always clean up when done.** Kubeshark runs eBPF probes and DaemonSet workers
on every node — leaving it running wastes cluster resources.

```bash
kubeshark clean
```

Always remind the user to run `kubeshark clean` when they're finished. This is
easy to forget but important.
## Path B: Helm (Larger / Production Clusters)

### Step 1 — Add and Update the Helm Repo

**Always update the Helm repo first.** This is the most important first step —
running an outdated chart can cause issues.

```bash
helm repo add kubeshark https://helm.kubeshark.com
helm repo update
```
### Step 2 — Create a Config Directory

Store all configuration files in `~/.kubeshark/`:

```bash
mkdir -p ~/.kubeshark
```

**Before writing any file to `~/.kubeshark/`, check if it already exists.**
If `~/.kubeshark/values.yaml` (or any target filename) already exists, **ask the
user** before overwriting. Either:

1. Back up the existing file first: `cp ~/.kubeshark/values.yaml ~/.kubeshark/values.yaml.bak.$(date +%s)`
2. Use a descriptive name for the new file (e.g., `values-production.yaml`, `values-staging.yaml`)

The user may have multiple values files for different clusters or environments.
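The backup option above can be sketched as a small shell helper. This is a hypothetical illustration (not part of the Kubeshark CLI): it writes a values file, but first makes a timestamped backup of any existing copy so the user's previous configuration is never silently lost.

```shell
#!/bin/sh
# Hypothetical helper: write a values file, backing up any existing copy first.
safe_write_values() {
  target="$1"
  content="$2"
  if [ -f "$target" ]; then
    # Timestamped backup, mirroring the cp command suggested above
    cp "$target" "$target.bak.$(date +%s)"
  fi
  printf '%s\n' "$content" > "$target"
}
```

Usage would look like `safe_write_values ~/.kubeshark/values.yaml "$generated_yaml"`.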
### Step 3 — Build the Values File

Walk through the following configuration areas with the user. Each section
explains what the value does and what to recommend.

#### Pod Targeting (CRITICAL)

```yaml
tap:
  regex: .*
  namespaces: []
  excludedNamespaces: []
```

**This is one of the most important configuration decisions.** By default,
Kubeshark monitors the entire cluster's traffic. On a large cluster this is a
huge undertaking that consumes significant CPU and memory on every node.

**Always set namespace targeting.** Ask the user which namespaces contain the
workloads they care about, and set those explicitly:

```yaml
tap:
  namespaces:
    - production
    - staging
```

Alternatively, use `excludedNamespaces` to monitor everything except specific
namespaces:

```yaml
tap:
  excludedNamespaces:
    - kube-system
    - monitoring
    - kubeshark
```

The `regex` field filters by pod name within the targeted namespaces. Leave as
`.*` unless the user wants to focus on specific pods.

Setting pod targeting rules causes Kubeshark to focus only on specific workloads,
which moderates compute consumption significantly.
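The two mechanisms compose: `regex` narrows pods within the targeted namespaces. A sketch (the namespace and pod-name pattern here are illustrative):

```yaml
tap:
  regex: "checkout-.*"   # only pods whose names start with "checkout-"
  namespaces:
    - production
```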
#### Docker Registry (Air-Gapped Environments)

```yaml
tap:
  docker:
    registry: docker.io/kubeshark
    tag: ""
```

- `tap.docker.registry` — Change this for air-gapped environments where there's
  no access to `docker.io`. Point to your internal registry. Additional config
  may be needed (pull secrets, registry credentials).
- `tap.docker.tag` — Set a specific version. If a patch version is missing, the
  latest patch in that minor version is used. **Leave empty (recommended)** to
  use the version matching the Helm chart.

For air-gapped clusters, also set:

```yaml
internetConnectivity: false
```

This is the **most important setting for air-gapped clusters** — it disables all
outbound connectivity checks (license validation, telemetry, update checks).
#### Capture & Dissection

```yaml
tap:
  capture:
    dissection:
      enabled: true
      stopAfter: 5m
    raw:
      enabled: true
      storageSize: 1Gi
    dbMaxSize: 500Mi
```

**`tap.capture.dissection.enabled`** — Controls real-time dissection (L7 protocol
parsing on production nodes). Real-time dissection consumes significant compute
resources from production nodes. **Recommend starting with `false` (disabled).**
This can be toggled on-demand from the dashboard when needed, so it's used only
when necessary and doesn't consume resources the rest of the time.

Dissection is independent from raw capture + snapshots. Raw capture is lightweight
and runs continuously; dissection is the heavy operation.

**`tap.capture.dissection.stopAfter`** — Time after which dissection automatically
disables once all client connections end. Set to `0` to never auto-disable (manual
control only).

**`tap.capture.raw.enabled`** — Keep this `true`. Raw capture consumes very little
production resources yet captures all traffic. This is what powers snapshots and
retrospective analysis.

**`tap.capture.raw.storageSize`** — The FIFO buffer for raw capture per node.
**Recommend 100Gi** for production. The larger this is, the further back in time
snapshots can reach.

**`tap.capture.dbMaxSize`** — Size of the database holding dissected API calls.
Bigger = more history kept. Adjust based on how much queryable history the user needs.

**`tap.capture.captureSelf`** — Debug option. Ignore during installation.

**`bpfOverride`** — Debug option. Ignore during installation.
#### Delayed Dissection

```yaml
tap:
  delayedDissection:
    cpu: "1"
    memory: 4Gi
```

Delayed dissection is the process on the Hub that dissects raw capture data within
a snapshot. It runs on the Hub node (not production nodes) and is triggered when
a delayed dissection operation is requested on a snapshot.

**Give this as many resources as possible.** Recommend `cpu: "5"` and `memory: 5Gi`.
This speeds up snapshot analysis significantly.
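The recommendation above, written out as a values fragment:

```yaml
tap:
  delayedDissection:
    cpu: "5"      # recommended: as many cores as the Hub node can spare
    memory: 5Gi
```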
#### Snapshot Storage (Local)

```yaml
tap:
  snapshots:
    local:
      storageClass: ""
      storageSize: 20Gi
```

This is where snapshots are stored locally. **Be very generous with this.**
**Recommend 2Ti (2TB)** for production environments that will accumulate snapshots.

**`storageClass`** — Must match a valid storage class in the cluster. Suggest
based on the cloud provider:

| Provider | Recommended Storage Class |
|----------|---------------------------|
| EKS (AWS) | `gp2` or `gp3` |
| GKE (Google) | `standard` or `premium-rwo` |
| AKS (Azure) | `managed-csi` or `managed-premium` |
| OpenShift | Check `kubectl get sc` — varies by provider |
| KinD / minikube | `standard` (default) |
| Private / bare metal | Ask the user for their storage class |

Always verify available storage classes with `kubectl get sc`.
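For example, a production-leaning fragment on EKS following the sizing advice above (this assumes the cluster exposes a `gp3` storage class; always confirm with `kubectl get sc`):

```yaml
tap:
  snapshots:
    local:
      storageClass: gp3   # must exist in the cluster (kubectl get sc)
      storageSize: 2Ti    # generous; snapshots accumulate over time
```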
#### Cloud Storage (Long-Term Retention)

Cloud storage enables uploading snapshots to S3, GCS, or Azure Blob for long-term
retention, cross-cluster sharing, and backup/restore.

For detailed configuration per provider (including IRSA, Workload Identity, static
credentials, and ConfigMap/Secret setup), see `references/cloud-storage.md`.

Summary of provider values:

```yaml
tap:
  snapshots:
    cloud:
      provider: ""    # "s3", "azblob", or "gcs" (empty = disabled)
      prefix: ""      # Key prefix in bucket
      configMaps: []  # Pre-existing ConfigMaps with cloud config
      secrets: []     # Pre-existing Secrets with cloud credentials
```

Help the user select the right provider based on where their cluster runs and
walk them through the authentication setup.
#### Resources

For a first installation, **do not change the resource defaults.** Let the user
run Kubeshark with defaults first and tune based on actual usage patterns later.

The defaults are reasonable starting points. Resource consumption depends heavily
on how much traffic is processed, which is controlled by pod targeting rules.
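If tuning does become necessary later, limits can be overridden per component. The numbers below are illustrative, not recommendations:

```yaml
tap:
  resources:
    sniffer:
      limits:
        cpu: "2"      # illustrative cap for the per-node sniffer
        memory: 2Gi
```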
#### Node Selectors

```yaml
tap:
  nodeSelectorTerms:
    workers:
      - matchExpressions:
          - key: kubernetes.io/os
            operator: In
            values: [linux]
```

Use `nodeSelectorTerms` when the user wants to focus on specific nodes. The less
workload processed by Kubeshark, the less CPU and memory it consumes. The goal is
to process workloads of interest, not the entire cluster.
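For example, to restrict workers to nodes carrying a particular label (the label key and value here are hypothetical):

```yaml
tap:
  nodeSelectorTerms:
    workers:
      - matchExpressions:
          - key: workload-tier    # hypothetical node label
            operator: In
            values: [apps]
```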
#### Ingress (STRONGLY RECOMMENDED)

```yaml
tap:
  ingress:
    enabled: false
    className: ""
    host: ks.svc.cluster.local
    path: /
    tls: []
    annotations: {}
```

**Ingress is the strongly preferred access method.** While port-forward is available,
it is **strongly discouraged** for anything beyond quick local testing. Port-forward
is fragile, drops connections, and doesn't scale for team use.

**Always help the user configure ingress.** Ask them about their ingress controller
(nginx, ALB, Traefik, etc.) and build the ingress config:

```yaml
tap:
  ingress:
    enabled: true
    className: nginx
    host: kubeshark.example.com
    tls:
      - secretName: kubeshark-tls
        hosts:
          - kubeshark.example.com
    annotations: {}
```

For ALB on AWS:

```yaml
tap:
  ingress:
    enabled: true
    className: alb
    host: kubeshark.example.com
    annotations:
      alb.ingress.kubernetes.io/scheme: internal
      alb.ingress.kubernetes.io/target-type: ip
```

#### Air-Gapped Clusters

For air-gapped environments, two settings are essential:

```yaml
tap:
  docker:
    registry: your-internal-registry.example.com/kubeshark
internetConnectivity: false
```

`internetConnectivity: false` is the **single most important option** for
air-gapped clusters. Without it, Kubeshark will attempt outbound connections
that will fail and cause issues.
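If the internal registry requires authentication, a pull secret can be referenced as well. The secret name below is hypothetical, and the exact list format should be checked against the chart's `tap.docker.imagePullSecrets` value:

```yaml
tap:
  docker:
    registry: your-internal-registry.example.com/kubeshark
    imagePullSecrets:
      - name: internal-registry-creds   # hypothetical pre-created secret
internetConnectivity: false
```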
### Step 4 — Install

```bash
helm install kubeshark kubeshark/kubeshark \
  -f ~/.kubeshark/values.yaml \
  -n kubeshark --create-namespace
```

### Step 5 — Upgrade

When upgrading, **always update the Helm repo first**:

```bash
helm repo update
helm upgrade kubeshark kubeshark/kubeshark \
  -f ~/.kubeshark/values.yaml \
  -n kubeshark
```
## Uninstalling

**Via CLI:**

```bash
kubeshark clean
kubeshark clean -s kubeshark   # Specific namespace
```

**Via Helm:**

```bash
helm uninstall kubeshark -n kubeshark
```

PersistentVolumeClaims are not deleted by default. Remove manually if needed:

```bash
kubectl delete pvc -l app.kubernetes.io/name=kubeshark -n kubeshark
```
## Troubleshooting

- **Pods not starting**: Check `kubectl get pods -l app.kubernetes.io/name=kubeshark -n <ns>`
  and `kubectl describe pod`. Common: ImagePullBackOff (registry), Pending (storage/resources),
  CrashLoopBackOff (check `kubectl logs`).
- **No traffic**: Verify namespaces have running pods, check pod regex, ensure eBPF supported
  (kernel 4.14+, 5.4+ recommended).
- **Permissions**: Requires privileged containers with NET_RAW, NET_ADMIN, SYS_ADMIN,
  SYS_PTRACE, SYS_RESOURCE, IPC_LOCK capabilities.
- **Storage**: Verify storage class exists (`kubectl get sc`), PVC is bound (`kubectl get pvc`).
## Setup Reference

### Kubeshark MCP for AI Agents

After installation, connect the Kubeshark MCP so AI agents can interact with Kubeshark:

```bash
# Claude Code
claude mcp add kubeshark -- kubeshark mcp

# Direct URL (no kubectl needed)
claude mcp add kubeshark -- kubeshark mcp --url https://kubeshark.example.com
```
96 skills/install/references/cloud-storage.md Normal file

@@ -0,0 +1,96 @@
# Cloud Storage for Snapshots

This is a pointer to the authoritative cloud storage documentation maintained in
the Helm chart:

**Source of truth**: `helm-chart/docs/snapshots_cloud_storage.md`

Always read that file for the latest configuration details, including:

- Amazon S3 (static credentials, IRSA, cross-account AssumeRole)
- Azure Blob Storage (storage key, Workload Identity / DefaultAzureCredential)
- Google Cloud Storage (service account JSON, GKE Workload Identity)
- IAM permissions and trust policy examples
- ConfigMap and Secret setup patterns
- Inline values vs. external ConfigMap/Secret approaches

## Quick Reference

### Helm Values Structure

```yaml
tap:
  snapshots:
    cloud:
      provider: ""    # "s3", "azblob", or "gcs" (empty = disabled)
      prefix: ""      # Key prefix in the bucket/container
      configMaps: []  # Pre-existing ConfigMaps with cloud config env vars
      secrets: []     # Pre-existing Secrets with cloud credentials
      s3:
        bucket: ""
        region: ""
        accessKey: ""
        secretKey: ""
        roleArn: ""
        externalId: ""
      azblob:
        storageAccount: ""
        container: ""
        storageKey: ""
      gcs:
        bucket: ""
        project: ""
        credentialsJson: ""
```

### Recommended Auth Per Provider

| Provider | Production Recommendation |
|----------|---------------------------|
| S3 (EKS) | IRSA (IAM Roles for Service Accounts) — no static credentials |
| S3 (non-EKS) | Static credentials via Secret, or default AWS credential chain |
| Azure Blob (AKS) | Workload Identity / Managed Identity |
| Azure Blob (non-AKS) | Storage account key via Secret |
| GCS (GKE) | GKE Workload Identity — no JSON key file |
| GCS (non-GKE) | Service account JSON key via Secret |

### Inline Values (Simplest Approach)

Set credentials directly in values.yaml. The Helm chart creates the necessary
ConfigMap/Secret resources automatically.

**S3:**

```yaml
tap:
  snapshots:
    cloud:
      provider: "s3"
      s3:
        bucket: my-kubeshark-snapshots
        region: us-east-1
```

**GCS:**

```yaml
tap:
  snapshots:
    cloud:
      provider: "gcs"
      gcs:
        bucket: my-kubeshark-snapshots
        project: my-gcp-project
```

**Azure Blob:**

```yaml
tap:
  snapshots:
    cloud:
      provider: "azblob"
      azblob:
        storageAccount: mykubesharksa
        container: snapshots
```

For production setups with proper IAM integration, see the full documentation
in `helm-chart/docs/snapshots_cloud_storage.md`.
376 skills/install/references/helm-values.md Normal file

@@ -0,0 +1,376 @@
# Kubeshark Helm Values Reference

Complete reference for all Kubeshark Helm chart values. Use this when building
custom `values.yaml` files or `--set` flags.

## Docker Images

```yaml
tap:
  docker:
    registry: docker.io/kubeshark  # Docker registry
    tag: ""                        # Image tag (empty = chart appVersion)
    tagLocked: true                # Lock to specific tag
    imagePullPolicy: Always        # Always, IfNotPresent, Never
    imagePullSecrets: []           # Registry pull secrets
    overrideImage:                 # Override individual component images
      worker: ""
      hub: ""
      front: ""
    overrideTag:                   # Override individual component tags
      worker: ""
      hub: ""
      front: ""
```

## Proxy / Port-Forward

```yaml
tap:
  proxy:
    worker:
      srvPort: 48999
    hub:
      srvPort: 8898
    front:
      port: 8899       # Local port for port-forward
    host: 127.0.0.1    # Bind address
```

## Pod Targeting

```yaml
tap:
  regex: .*               # Pod name regex filter
  namespaces: []          # Target namespaces (empty = all)
  excludedNamespaces: []  # Namespaces to exclude
  bpfOverride: ""         # Custom BPF filter override
```

## Capture & Dissection

```yaml
tap:
  capture:
    dissection:
      enabled: true      # Enable L7 dissection
      stopAfter: 5m      # Auto-stop dissection after duration
    captureSelf: false   # Capture Kubeshark's own traffic
    raw:
      enabled: true      # Enable raw packet capture (needed for snapshots)
      storageSize: 1Gi   # FIFO buffer size per node
    dbMaxSize: 500Mi     # Max L7 database size per node
  delayedDissection:
    cpu: "1"             # CPU for delayed dissection jobs
    memory: 4Gi          # Memory for delayed dissection jobs
    storageSize: ""      # Storage for delayed dissection
    storageClass: ""     # Storage class for delayed dissection
```

## Snapshots

```yaml
tap:
  snapshots:
    local:
      storageClass: ""   # Storage class for local snapshots
      storageSize: 20Gi  # PVC size for local snapshots
    cloud:
      provider: ""       # s3, gcs, or azblob
      prefix: ""         # Path prefix in bucket
      configMaps: []     # Additional ConfigMaps to mount
      secrets: []        # Additional Secrets to mount
      s3:
        bucket: ""
        region: ""
        accessKey: ""
        secretKey: ""
        roleArn: ""      # IAM role ARN (IRSA)
        externalId: ""   # STS external ID
      azblob:
        storageAccount: ""
        container: ""
        storageKey: ""
      gcs:
        bucket: ""
        project: ""
        credentialsJson: ""  # Service account JSON
```
## Helm Release

```yaml
tap:
  release:
    repo: https://helm.kubeshark.com  # Helm chart repository
    name: kubeshark                   # Release name
    namespace: default                # Release namespace
    helmChartPath: ""                 # Path to local chart (overrides repo)
```

## Storage

```yaml
tap:
  persistentStorage: false                    # Enable PVC for worker data
  persistentStorageStatic: false              # Static provisioning
  persistentStoragePvcVolumeMode: FileSystem  # FileSystem or Block
  efsFileSytemIdAndPath: ""                   # EFS file system ID (EKS)
  secrets: []                                 # Additional secrets to mount
  storageLimit: 10Gi                          # Max storage per node
  storageClass: standard                      # Default storage class
```

## Resources

```yaml
tap:
  resources:
    hub:
      limits:
        cpu: "0"       # 0 = no limit
        memory: 5Gi
      requests:
        cpu: 50m
        memory: 50Mi
    sniffer:
      limits:
        cpu: "0"
        memory: 5Gi
      requests:
        cpu: 50m
        memory: 50Mi
    tracer:
      limits:
        cpu: "0"
        memory: 5Gi
      requests:
        cpu: 50m
        memory: 50Mi
```

## Health Probes

```yaml
tap:
  probes:
    hub:
      initialDelaySeconds: 5
      periodSeconds: 5
      successThreshold: 1
      failureThreshold: 3
    sniffer:
      initialDelaySeconds: 5
      periodSeconds: 5
      successThreshold: 1
      failureThreshold: 3
```

## TLS & Service Mesh

```yaml
tap:
  serviceMesh: true    # Capture mTLS traffic (service mesh)
  tls: true            # Capture OpenSSL/Go TLS traffic
  disableTlsLog: true  # Suppress TLS debug logging
  packetCapture: best  # Capture method: best, af_packet, pcap
```

## Labels, Annotations & Scheduling

```yaml
tap:
  labels: {}       # Additional labels for all pods
  annotations: {}  # Additional annotations for all pods
  nodeSelectorTerms:
    hub:           # Hub pod node selector
      - matchExpressions:
          - key: kubernetes.io/os
            operator: In
            values: [linux]
    workers:       # Worker DaemonSet node selector
      - matchExpressions:
          - key: kubernetes.io/os
            operator: In
            values: [linux]
    front:         # Frontend pod node selector
      - matchExpressions:
          - key: kubernetes.io/os
            operator: In
            values: [linux]
  tolerations:
    hub: []
    workers:
      - operator: Exists
        effect: NoExecute  # Workers tolerate NoExecute by default
    front: []
  priorityClass: ""        # PriorityClassName for pods
```
## Authentication

```yaml
tap:
  auth:
    enabled: false
    type: saml            # Only SAML supported currently
    roles:
      admin:
        filter: ""        # KFL filter restricting visible traffic
        canDownloadPCAP: true
        canUseScripting: true
        scriptingPermissions:
          canSave: true
          canActivate: true
          canDelete: true
        canUpdateTargetedPods: true
        canStopTrafficCapturing: true
        canControlDissection: true
        showAdminConsoleLink: true
    rolesClaim: role      # SAML attribute for role mapping
    defaultRole: ""       # Role for users without a role claim
    defaultFilter: ""     # Default KFL filter for all users
    saml:
      idpMetadataUrl: ""  # SAML IdP metadata URL
      x509crt: ""         # SP certificate
      x509key: ""         # SP private key
```

## Ingress

```yaml
tap:
  ingress:
    enabled: false
    className: ""    # nginx, alb, traefik, etc.
    host: ks.svc.cluster.local
    path: /
    tls: []          # TLS configuration
    annotations: {}  # Ingress annotations
```

## Protocol Dissectors

```yaml
tap:
  enabledDissectors:
    - amqp
    - dns
    - http
    - icmp
    - kafka
    - mongodb
    - mysql
    - postgresql
    - redis
    - ws
    - ldap
    - radius
    - diameter
    - udp-flow
    - tcp-flow
    - udp-conn
    - tcp-conn
  portMapping:  # Default port-to-protocol mappings
    http: [80, 443, 8080]
    amqp: [5671, 5672]
    kafka: [9092]
    mongodb: [27017]
    mysql: [3306]
    postgresql: [5432]
    redis: [6379]
    ldap: [389]
    diameter: [3868]
  customMacros:
    https: "tls and (http or http2)"
```
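Trimming `enabledDissectors` to just the protocols actually in use reduces dissection overhead. For example, HTTP and DNS only:

```yaml
tap:
  enabledDissectors:
    - http
    - dns
```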
## Networking & Security

```yaml
tap:
  hostNetwork: true  # Use host network (required for capture)
  ipv6: true         # Enable IPv6 support
  mountBpf: true     # Mount BPF filesystem
  securityContext:
    privileged: true
    appArmorProfile:
      type: ""
      localhostProfile: ""
    seLinuxOptions:
      level: ""
      role: ""
      type: ""
      user: ""
  capabilities:
    networkCapture: [NET_RAW, NET_ADMIN]
    serviceMeshCapture: [SYS_ADMIN, SYS_PTRACE, DAC_OVERRIDE]
    ebpfCapture: [SYS_ADMIN, SYS_PTRACE, SYS_RESOURCE, IPC_LOCK]
```

## Dashboard

```yaml
tap:
  dashboard:
    streamingType: connect-rpc
    completeStreamingEnabled: true
    clusterWideMapEnabled: false
    entriesLimit: "300000"
  routing:
    front:
      basePath: ""  # Base path for reverse proxy
```

## Scripting

```yaml
scripting:
  enabled: false
  env: {}             # Environment variables for scripts
  source: ""          # Git repo for scripts
  sources: []         # Multiple script sources
  watchScripts: true  # Watch for script changes
  active: []          # Active scripts
  console: true       # Enable script console
```

## Misc

```yaml
tap:
  dryRun: false      # Preview targeted pods without deploying
  debug: false       # Enable debug mode
  telemetry:
    enabled: true    # Anonymous usage telemetry
  resourceGuard:
    enabled: false   # Resource usage guard
  watchdog:
    enabled: false   # Watchdog process
  gitops:
    enabled: false   # GitOps mode
  defaultFilter: ""  # Default KFL display filter
  globalFilter: ""   # Global KFL filter (cannot be overridden)
  dns:
    nameservers: []  # Custom DNS nameservers
    searches: []     # Custom DNS search domains
    options: []      # Custom DNS options
  misc:
    jsonTTL: 5m               # TTL for JSON entries
    pcapTTL: "0"              # TTL for PCAP files (0 = no TTL)
    trafficSampleRate: 100    # Traffic sampling rate (1-100)
    resolutionStrategy: auto  # IP resolution: auto, dns, k8s
    detectDuplicates: false   # Detect duplicate packets
    staleTimeoutSeconds: 30   # Timeout for stale connections
    tcpFlowTimeout: 1200      # TCP flow idle timeout (seconds)
    udpFlowTimeout: 1200      # UDP flow idle timeout (seconds)

headless: false    # Suppress browser auto-open
license: ""        # Kubeshark Pro license key
timezone: ""       # Override timezone
logLevel: warning  # Log level: debug, info, warning, error

kube:
  configPath: ""   # Custom kubeconfig path
  context: ""      # Kubernetes context name
```