* Migrate auth.saml.roles to unified auth.roles
Follows the hub-side introduction of the backend-neutral AUTH_ROLES /
AUTH_ROLES_CLAIM / AUTH_DEFAULT_ROLE config (hub commit 51177bcb).
The CLI and Helm chart now surface the unified location:
- tap.auth.roles: map of role -> permissions (shared SAML/OIDC)
- tap.auth.rolesClaim: token/assertion claim name carrying roles
- tap.auth.defaultRole: fallback role for authenticated users with
  no matching role in their token
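For example, a values.yaml along these lines (the permission keys under
admin are illustrative, not an authoritative list):

```yaml
tap:
  auth:
    roles:
      admin:
        # Per-role permissions; the key names here are illustrative.
        canDownloadPCAP: true
        canUpdateTargetedPods: true
    # Token/assertion claim carrying the user's roles.
    rolesClaim: "roles"
    # Fallback role for authenticated users with no matching role.
    defaultRole: "admin"
```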
Helm ConfigMap template emits AUTH_ROLES / AUTH_ROLES_CLAIM /
AUTH_DEFAULT_ROLE and no longer emits AUTH_SAML_ROLES or
AUTH_SAML_ROLE_ATTRIBUTE. Hub's back-compat fallback still reads those
keys from any existing ConfigMap that hasn't been helm-upgraded.
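The emitted ConfigMap data might then look roughly like this (a sketch;
the role-map serialization is an assumption, not the template's exact
output):

```yaml
data:
  AUTH_ROLES: '{"admin":{"canDownloadPCAP":true}}'   # serialized role map (illustrative)
  AUTH_ROLES_CLAIM: "roles"
  AUTH_DEFAULT_ROLE: "admin"
  # AUTH_SAML_ROLES and AUTH_SAML_ROLE_ATTRIBUTE are no longer emitted.
```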
Legacy struct fields (SamlConfig.Roles, SamlConfig.RoleAttribute) stay
in place so existing values.yaml files with auth.saml.roles still parse
without errors, but the CLI and the chart ignore them. A follow-up
release can remove the struct fields once telemetry confirms the
migration.
This change is breaking for users with a customized auth.saml.roles in
their values.yaml: the customization is masked by the new default
auth.roles.admin and must be migrated to auth.roles for the custom
permissions to take effect. Documented in the chart README and release
notes.
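A before/after sketch of that migration (role content illustrative):

```yaml
# Before: still parsed, but ignored after this change.
tap:
  auth:
    saml:
      roles:
        admin:
          canDownloadPCAP: true

# After: the unified location the CLI and chart actually read.
tap:
  auth:
    roles:
      admin:
        canDownloadPCAP: true
```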
Part of authz-refactoring (Step 2 of hub-oidc-rbac.md, CLI side).
* Remove legacy
* Align CLI + Helm chart with hub AUTH_TYPE rename
Follows hub commit 11564fef. The canonical AUTH_TYPE is now `oidc` for
generic OIDC; `dex` is a permanent alias; `descope` is a new explicit
label. This change surfaces the new vocabulary in the CLI config struct
and the Helm chart, and renames the nested `auth.dexOidc` values.yaml
field to `auth.oidc` for consistency.
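In values.yaml terms, a sketch of the new vocabulary on tap.auth.type:

```yaml
tap:
  auth:
    # Valid values after this change: default, oidc (canonical),
    # dex (permanent alias of oidc), descope.
    type: oidc
```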
Helm chart:
- 12-config-map.yaml: AUTH_OIDC_* keys now read `.Values.tap.auth.oidc.*`
instead of `.Values.tap.auth.dexOidc.*`. The cloud-license override that forced
AUTH_TYPE=default unless the admin picked `dex` now accepts `oidc` too.
- 13-secret.yaml: OIDC_CLIENT_ID / OIDC_CLIENT_SECRET read from
`auth.oidc.*` (was `auth.dexOidc.*`).
- 06-front-deployment.yaml: REACT_APP_AUTH_ENABLED / REACT_APP_AUTH_TYPE
conditionals accept both `oidc` and `dex` where they previously only
matched `dex`.
- values.yaml: comment on `tap.auth.type` lists valid values and flags
the breaking change.
- README.md: `tap.auth.type` row lists valid values. All `dexOidc`
references renamed to `oidc`. Sample values.yaml blocks now show
`type: oidc` as the canonical form.
CLI:
- config/configStructs/tapConfig.go: AuthConfig.Type documented with the
full list of valid values and the migration hint.
Breaking changes (repeated in release notes):
1. `tap.auth.type: oidc` now routes to the generic OIDC middleware
(previously Descope). Switch to `tap.auth.type: descope` or `default`
if you were using `oidc` for Descope.
2. `tap.auth.dexOidc.*` values are no longer read. Rename to
`tap.auth.oidc.*`. No fallback.
3. `tap.auth.type: dex` continues to work — permanent alias of `oidc`.
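A sketch of the rename for a values.yaml still carrying the old keys
(the clientId field is illustrative, inferred from the keys the secret
template reads):

```yaml
# Old: no longer read, no fallback.
tap:
  auth:
    type: oidc          # previously routed to Descope
    dexOidc:
      clientId: "kubeshark"

# New:
tap:
  auth:
    type: oidc          # now the generic OIDC middleware
    oidc:
      clientId: "kubeshark"
```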
Part of authz-refactoring (Step 4 of hub-oidc-rbac.md, CLI/Helm side).
* Default KFL
* Authz Refactoring: Step 8: namespaces-list role filter
Align with hub PR kubeshark/hub#756. Per-role auth.roles[].filter (KFL)
is replaced by auth.roles[].namespaces (comma-separated list with "*",
literal, and glob semantics). Standalone tap.auth.defaultFilter knob
removed.
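A sketch of the replacement field (role and namespace names
illustrative):

```yaml
tap:
  auth:
    roles:
      admin:
        namespaces: "*"               # all namespaces
      dev:
        namespaces: "staging,team-*"  # literal and glob, comma-separated
```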
helm-chart/values.yaml
- admin role example uses namespaces: "*" instead of filter: "".
- Comment block explains the new namespaces semantics.
- defaultFilter: "" entry + accompanying comment block deleted.
helm-chart/templates/12-config-map.yaml
- AUTH_DEFAULT_FILTER ConfigMap entry removed (hub no longer reads it).
helm-chart/README.md
- tap.auth.defaultFilter row removed.
- tap.auth.roles default value example updated: filter: "" → namespaces: "*";
description gains the per-role namespaces semantics legend.
Network Observability for SREs & AI Agents
Kubeshark indexes cluster-wide network traffic at the kernel level using eBPF — delivering instant answers to any query using network, API, and Kubernetes semantics.
What you can do:
- Download Retrospective PCAPs — cluster-wide packet captures filtered by nodes, time, workloads, and IPs. Store PCAPs for long-term retention and later investigation.
- Visualize Network Data — explore traffic matching queries with API, Kubernetes, or network semantics through a real-time dashboard.
- See Encrypted Traffic in Plain Text — automatically decrypt TLS/mTLS traffic using eBPF, with no key management or sidecars required.
- Integrate with AI — connect your favorite AI assistant (e.g. Claude, Copilot) to include network data in AI-driven workflows like incident response and root cause analysis.
Get Started
helm repo add kubeshark https://helm.kubeshark.com
helm install kubeshark kubeshark/kubeshark
kubectl port-forward svc/kubeshark-front 8899:80
Open http://localhost:8899 in your browser. You're capturing traffic.
For production use, we recommend using an ingress controller instead of port-forward.
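For instance, the chart's ingress could be enabled with values roughly
like the following; the exact key names are an assumption, so verify
them against the chart's values.yaml:

```yaml
# Assumed key names; check the chart's values.yaml before use.
tap:
  ingress:
    enabled: true
    host: "kubeshark.example.com"
```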
Connect an AI agent via MCP:
brew install kubeshark
claude mcp add kubeshark -- kubeshark mcp
Network Data for AI Agents
Kubeshark exposes cluster-wide network data via MCP — enabling AI agents to query traffic, investigate API calls, and perform root cause analysis through natural language.
"Why did checkout fail at 2:15 PM?" "Which services have error rates above 1%?" "Show TCP retransmission rates across all node-to-node paths" "Trace request abc123 through all services"
Works with Claude Code, Cursor, and any MCP-compatible AI.
AI Skills
Open-source, reusable skills that teach AI agents domain-specific workflows on top of Kubeshark's MCP tools:
| Skill | Description |
|---|---|
| Network RCA | Retrospective root cause analysis — snapshots, dissection, PCAP extraction, trend comparison |
| KFL | KFL (Kubeshark Filter Language) expert — writes, debugs, and optimizes traffic filters |
Install as a Claude Code plugin:
/plugin marketplace add kubeshark/kubeshark
/plugin install kubeshark
Or clone and use directly — skills trigger automatically based on conversation context.
Query with API, Kubernetes, and Network Semantics
Kubeshark indexes cluster-wide network traffic by parsing it according to protocol specifications, with support for HTTP, gRPC, Redis, Kafka, DNS, and more. A single KFL query can combine all three semantic layers — Kubernetes identity, API context, and network attributes — to pinpoint exactly the traffic you need. No code instrumentation required.
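For instance, a single query touching all three layers might look like
this (an illustrative sketch; see the KFL reference for the exact field
names):

```
http and src.namespace == "checkout" and dst.port == 8080 and response.status >= 500
```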
KFL reference → · Traffic indexing →
Workload Dependency Map
A visual map of how workloads communicate, showing dependencies, traffic volume, and protocol usage across the cluster.
Traffic Retention & PCAP Export
Capture and retain raw network traffic cluster-wide, including decrypted TLS. Download PCAPs scoped by time range, nodes, workloads, and IPs — ready for Wireshark or any PCAP-compatible tool. Store snapshots in cloud storage (S3, Azure Blob, GCS) for long-term retention and cross-cluster sharing.
Snapshots guide → · Cloud storage →
Features
| Feature | Description |
|---|---|
| Traffic Snapshots | Point-in-time snapshots with cloud storage (S3, Azure Blob, GCS), PCAP export for Wireshark |
| Traffic Indexing | Real-time and delayed L7 indexing with request/response matching and full payloads |
| Protocol Support | HTTP, gRPC, GraphQL, Redis, Kafka, DNS, and more |
| TLS Decryption | eBPF-based decryption without key management, included in snapshots |
| AI Integration | MCP server + open-source AI skills for network RCA and traffic filtering |
| KFL Query Language | CEL-based query language with Kubernetes, API, and network semantics |
| 100% On-Premises | Air-gapped support, no external dependencies |
Install
| Method | Command |
|---|---|
| Helm | helm repo add kubeshark https://helm.kubeshark.com && helm install kubeshark kubeshark/kubeshark |
| Homebrew | brew install kubeshark && kubeshark tap |
| Binary | Download |
Contributing
We welcome contributions. See CONTRIBUTING.md.