Volodymyr Stoiko 7b5954ea00 helm: grant hub tokenreviews and label worker pods for internal auth (#1926)
* helm: grant hub tokenreviews and pass trusted controllers

Adds RBAC for hub to call the authentication.k8s.io/v1 TokenReview
endpoint, used by the new internalauth middleware to validate projected
ServiceAccountTokens presented by in-cluster gRPC callers.
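A minimal sketch of what such RBAC looks like (the ClusterRole name here is an assumption, not necessarily the chart's actual manifest):

```yaml
# Illustrative ClusterRole: lets the hub's ServiceAccount create TokenReviews.
# TokenReview is a create-only, cluster-scoped resource in authentication.k8s.io/v1.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubeshark-hub-tokenreview   # name is illustrative
rules:
  - apiGroups: ["authentication.k8s.io"]
    resources: ["tokenreviews"]
    verbs: ["create"]
```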

Adds tap.internalAuth.trustedControllers value (empty by default),
threaded through to hub's -trusted-controllers flag as a CSV. Listing
a controller here lets pods owned by it authenticate to hub via the
projected SA token (audience kubeshark-hub). Hub-spawned Jobs are
always trusted regardless of this list. Hub matches OwnerReferences
by name AND UID, so a name-only forgery does not grant trust.
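The name-AND-UID match described above can be sketched as follows (local types and function names are illustrative, not the hub's actual code):

```go
package main

import "fmt"

// ownerRef mirrors the two OwnerReference fields that matter for the trust
// check (illustrative local type, not the real Kubernetes API struct).
type ownerRef struct {
	Name string
	UID  string
}

// trusted returns true only when an owner's name AND UID both match a known
// controller, so a name-only forgery is rejected.
func trusted(known map[string]string, refs []ownerRef) bool {
	for _, r := range refs {
		if uid, ok := known[r.Name]; ok && uid == r.UID {
			return true
		}
	}
	return false
}

func main() {
	known := map[string]string{"kubeshark-worker-daemon-set": "uid-123"}
	fmt.Println(trusted(known, []ownerRef{{Name: "kubeshark-worker-daemon-set", UID: "uid-123"}}))   // matching name+UID
	fmt.Println(trusted(known, []ownerRef{{Name: "kubeshark-worker-daemon-set", UID: "uid-forged"}})) // name-only forgery
}
```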

Sub-issue of kubeshark/hub#656.

* helm: inline trusted controllers in hub deployment template

The chart already knows its own controller names (worker DaemonSet
metadata.name is the literal "kubeshark-worker-daemon-set" in
09-worker-daemon-set.yaml). Pasting the same literal into a user-facing
tap.internalAuth.trustedControllers value adds a step without buying
anything — if the worker DS were renamed, the deployment template would have
to change in lockstep regardless.

Drop the values knob, render the flag unconditionally with the literal
worker DS name (matching the convention used elsewhere in this chart,
e.g. the hub deployment's {{ include "kubeshark.name" . }}-hub).

* helm: drop redundant comment on tokenreviews RBAC

* helm: drop -trusted-controllers flag (no caller today)

The flag was wiring forward-prep for a hypothetical worker->hub gRPC
caller from the DaemonSet. Hub-spawned Jobs (dissection-job) are
admitted via internalauth.RegisterSpawnedJob, not via this flag.
Re-add when an actual DaemonSet-deployed caller materializes.

* helm: label worker DS pods for hub internal auth

Worker pods don't call hub gRPC today, but pre-labeling the DS pod
template means a future worker->hub gRPC caller is one PR (worker-side)
away from working — no chart change required. Matches the generic
label-driven trust model in hub#783.

* helm: rename trust label to kubeshark.io/internal-auth

Matches the hub rename. Generic name so the same label can mark pods
trusted by future kubeshark services beyond hub.
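Taken together, the labeling plus a projected-token mount might look like this in the worker DaemonSet pod template (a hedged sketch: only the label key and the kubeshark-hub audience come from the commits above; the label value convention and the volume details are assumptions):

```yaml
spec:
  template:
    metadata:
      labels:
        kubeshark.io/internal-auth: "true"   # value convention is an assumption
    spec:
      volumes:
        - name: hub-token
          projected:
            sources:
              - serviceAccountToken:
                  audience: kubeshark-hub    # audience named in the commit message
                  expirationSeconds: 3600
                  path: token
```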
2026-05-13 10:53:20 -07:00

Kubeshark


Network Observability for SREs & AI Agents

Live Demo · Docs


Kubeshark indexes cluster-wide network traffic at the kernel level using eBPF — delivering instant answers to any query using network, API, and Kubernetes semantics.

What you can do:

  • Download Retrospective PCAPs — cluster-wide packet captures filtered by nodes, time, workloads, and IPs. Store PCAPs for long-term retention and later investigation.
  • Visualize Network Data — explore traffic matching queries with API, Kubernetes, or network semantics through a real-time dashboard.
  • See Encrypted Traffic in Plain Text — automatically decrypt TLS/mTLS traffic using eBPF, with no key management or sidecars required.
  • Integrate with AI — connect your favorite AI assistant (e.g. Claude, Copilot) to include network data in AI-driven workflows like incident response and root cause analysis.

Kubeshark


Get Started

helm repo add kubeshark https://helm.kubeshark.com
helm install kubeshark kubeshark/kubeshark
kubectl port-forward svc/kubeshark-front 8899:80

Open http://localhost:8899 in your browser. You're capturing traffic.

For production use, we recommend using an ingress controller instead of port-forward.

Connect an AI agent via MCP:

brew install kubeshark
claude mcp add kubeshark -- kubeshark mcp

MCP setup guide →


Network Data for AI Agents

Kubeshark exposes cluster-wide network data via MCP — enabling AI agents to query traffic, investigate API calls, and perform root cause analysis through natural language.

  • "Why did checkout fail at 2:15 PM?"
  • "Which services have error rates above 1%?"
  • "Show TCP retransmission rates across all node-to-node paths"
  • "Trace request abc123 through all services"

Works with Claude Code, Cursor, and any MCP-compatible AI.

MCP Demo

MCP setup guide →

AI Skills

Open-source, reusable skills that teach AI agents domain-specific workflows on top of Kubeshark's MCP tools:

  • Network RCA: Retrospective root cause analysis — snapshots, dissection, PCAP extraction, trend comparison
  • KFL: KFL (Kubeshark Filter Language) expert — writes, debugs, and optimizes traffic filters

Install as a Claude Code plugin:

/plugin marketplace add kubeshark/kubeshark
/plugin install kubeshark

Or clone and use directly — skills trigger automatically based on conversation context.

AI Skills docs →


Query with API, Kubernetes, and Network Semantics

Kubeshark indexes cluster-wide network traffic by parsing it according to protocol specifications, with support for HTTP, gRPC, Redis, Kafka, DNS, and more. A single KFL query can combine all three semantic layers — Kubernetes identity, API context, and network attributes — to pinpoint exactly the traffic you need. No code instrumentation required.
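For example, a single query of roughly this shape combines all three layers (an illustrative sketch — check the KFL reference for the exact field names and grammar):

```
http and response.status >= 500 and src.namespace == "checkout" and dst.ip == "10.0.0.12"
```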

KFL query combining API, Kubernetes, and network semantics

KFL reference → · Traffic indexing →

Workload Dependency Map

A visual map of how workloads communicate, showing dependencies, traffic volume, and protocol usage across the cluster.

Service Map

Learn more →

Traffic Retention & PCAP Export

Capture and retain raw network traffic cluster-wide, including decrypted TLS. Download PCAPs scoped by time range, nodes, workloads, and IPs — ready for Wireshark or any PCAP-compatible tool. Store snapshots in cloud storage (S3, Azure Blob, GCS) for long-term retention and cross-cluster sharing.

Traffic Retention

Snapshots guide → · Cloud storage →


Features

  • Traffic Snapshots: Point-in-time snapshots with cloud storage (S3, Azure Blob, GCS), PCAP export for Wireshark
  • Traffic Indexing: Real-time and delayed L7 indexing with request/response matching and full payloads
  • Protocol Support: HTTP, gRPC, GraphQL, Redis, Kafka, DNS, and more
  • TLS Decryption: eBPF-based decryption without key management, included in snapshots
  • AI Integration: MCP server + open-source AI skills for network RCA and traffic filtering
  • KFL Query Language: CEL-based query language with Kubernetes, API, and network semantics
  • 100% On-Premises: Air-gapped support, no external dependencies

Install

  • Helm: helm repo add kubeshark https://helm.kubeshark.com && helm install kubeshark kubeshark/kubeshark
  • Homebrew: brew install kubeshark && kubeshark tap
  • Binary: Download

Installation guide →


Contributing

We welcome contributions. See CONTRIBUTING.md.

License

Apache-2.0

Description
The API traffic analyzer for Kubernetes, providing real-time protocol-level visibility: it captures and monitors all traffic and payloads going in, out, and across containers, pods, nodes, and clusters. Inspired by Wireshark, purpose-built for Kubernetes.