* helm: grant hub tokenreviews and pass trusted controllers
Adds RBAC for hub to call the authentication.k8s.io/v1 TokenReview
endpoint, used by the new internalauth middleware to validate projected
ServiceAccountTokens presented by in-cluster gRPC callers.
Adds tap.internalAuth.trustedControllers value (empty by default),
threaded through to hub's -trusted-controllers flag as a CSV. Listing
a controller here lets pods owned by it authenticate to hub via the
projected SA token (audience kubeshark-hub). Hub-spawned Jobs are
always trusted regardless of this list. Hub matches OwnerReferences
by name AND UID, so a name-only forgery does not grant trust.
Sub-issue of kubeshark/hub#656.
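As a sketch, the TokenReview grant needs only the cluster-scoped `create` verb (the ClusterRole name here is illustrative, not the chart's actual object name):

```yaml
# Sketch: minimal RBAC for hub to call the TokenReview endpoint.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubeshark-hub-tokenreview   # illustrative name
rules:
  - apiGroups: ["authentication.k8s.io"]
    resources: ["tokenreviews"]
    verbs: ["create"]   # TokenReview is create-only; there is no get/list
```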
* helm: inline trusted controllers in hub deployment template
The chart already knows its own controller names (worker DaemonSet
metadata.name is the literal "kubeshark-worker-daemon-set" in
09-worker-daemon-set.yaml). Pasting the same literal into a user-facing
tap.internalAuth.trustedControllers value adds a step without buying
anything — if the worker DS were renamed, the deployment template would
have to change in lockstep regardless.
Drop the values knob, render the flag unconditionally with the literal
worker DS name (matching the convention used elsewhere in this chart,
e.g. the hub deployment's {{ include "kubeshark.name" . }}-hub).
* helm: drop redundant comment on tokenreviews RBAC
* helm: drop -trusted-controllers flag (no caller today)
The flag was wiring forward-prep for a hypothetical worker->hub gRPC
caller from the DaemonSet. Hub-spawned Jobs (dissection-job) are
admitted via internalauth.RegisterSpawnedJob, not via this flag.
Re-add when an actual DaemonSet-deployed caller materializes.
* helm: label worker DS pods for hub internal auth
Worker pods don't call hub gRPC today, but pre-labeling the DS pod
template means a future worker->hub gRPC caller is one PR (worker-side)
away from working — no chart change required. Matches the generic
label-driven trust model in hub#783.
* helm: rename trust label to kubeshark.io/internal-auth
Matches the hub rename. Generic name so the same label can mark pods
trusted by future kubeshark services beyond hub.
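A minimal sketch of the DS pod-template addition (only the label key comes from these commits; the value is an assumption):

```yaml
# 09-worker-daemon-set.yaml pod template, sketch only
spec:
  template:
    metadata:
      labels:
        kubeshark.io/internal-auth: "true"   # value assumed; key from this commit
```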
* Migrate auth.saml.roles to unified auth.roles
Follows the hub-side introduction of the backend-neutral AUTH_ROLES /
AUTH_ROLES_CLAIM / AUTH_DEFAULT_ROLE config (hub commit 51177bcb).
CLI and Helm chart now surface the unified location:
tap.auth.roles — map of role -> permissions (shared SAML/OIDC)
tap.auth.rolesClaim — token/assertion claim name carrying roles
tap.auth.defaultRole — fallback role for authenticated users with
no matching role in their token
Helm ConfigMap template emits AUTH_ROLES / AUTH_ROLES_CLAIM /
AUTH_DEFAULT_ROLE and no longer emits AUTH_SAML_ROLES or
AUTH_SAML_ROLE_ATTRIBUTE. Hub's back-compat fallback still reads those
keys from any existing ConfigMap that hasn't been helm-upgraded.
Legacy struct fields (SamlConfig.Roles, SamlConfig.RoleAttribute) stay
in place so existing values.yaml files with auth.saml.roles still parse
without errors, but the CLI and the chart ignore them. Follow-up release
can remove the struct fields once telemetry confirms migration.
Breaking for users with customized auth.saml.roles in their values.yaml
— the customization is masked by the new default auth.roles.admin and
must be migrated to auth.roles for the custom permissions to take
effect. Documented in the chart README and release notes.
Part of authz-refactoring (Step 2 of hub-oidc-rbac.md, CLI side).
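A hedged values.yaml sketch of the unified location (the role name, permission key, and claim value are illustrative, not taken from the chart):

```yaml
tap:
  auth:
    roles:
      admin:                    # example role name
        canDownloadPCAP: true   # permission keys are assumptions
    rolesClaim: groups          # token/assertion claim carrying roles (example)
    defaultRole: ""             # fallback for users with no matching role
```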
* Remove legacy
* Align CLI + Helm chart with hub AUTH_TYPE rename
Follows hub commit 11564fef. The canonical AUTH_TYPE is now `oidc` for
generic OIDC; `dex` is a permanent alias; `descope` is a new explicit
label. This change surfaces the new vocabulary in the CLI config struct
and the Helm chart, and renames the nested `auth.dexOidc` values.yaml
field to `auth.oidc` for consistency.
Helm chart:
- 12-config-map.yaml: AUTH_OIDC_* keys now read `.Values.tap.auth.oidc.*`
instead of `auth.dexOidc.*`. The cloud-license override that forced
AUTH_TYPE=default unless the admin picked `dex` now accepts `oidc` too.
- 13-secret.yaml: OIDC_CLIENT_ID / OIDC_CLIENT_SECRET read from
`auth.oidc.*` (was `auth.dexOidc.*`).
- 06-front-deployment.yaml: REACT_APP_AUTH_ENABLED / REACT_APP_AUTH_TYPE
conditionals accept both `oidc` and `dex` where they previously only
matched `dex`.
- values.yaml: comment on `tap.auth.type` lists valid values and flags
the breaking change.
- README.md: `tap.auth.type` row lists valid values. All `dexOidc`
references renamed to `oidc`. Sample values.yaml blocks now show
`type: oidc` as the canonical form.
CLI:
- config/configStructs/tapConfig.go: AuthConfig.Type documented with the
full list of valid values and the migration hint.
Breaking changes (repeated in release notes):
1. `tap.auth.type: oidc` now routes to the generic OIDC middleware
(previously Descope). Switch to `tap.auth.type: descope` or `default`
if you were using `oidc` for Descope.
2. `tap.auth.dexOidc.*` values are no longer read. Rename to
`tap.auth.oidc.*`. No fallback.
3. `tap.auth.type: dex` continues to work — permanent alias of `oidc`.
Part of authz-refactoring (Step 4 of hub-oidc-rbac.md, CLI/Helm side).
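For illustration, a minimal values fragment under the new vocabulary (the issuer URL and client field names are placeholders, not from the source):

```yaml
tap:
  auth:
    enabled: true
    type: oidc                # canonical; "dex" remains a permanent alias
    oidc:                     # renamed from dexOidc; sub-keys are placeholders
      issuer: https://idp.example.com
      clientId: kubeshark
```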
* default kfl
* Authz Refactoring: Step 8: namespaces-list role filter
Align with hub PR kubeshark/hub#756. Per-role auth.roles[].filter (KFL)
is replaced by auth.roles[].namespaces (comma-separated list with "*",
literal, and glob semantics). Standalone tap.auth.defaultFilter knob
removed.
helm-chart/values.yaml
- admin role example uses namespaces: "*" instead of filter: "".
- Comment block explains the new namespaces semantics.
- defaultFilter: "" entry + accompanying comment block deleted.
helm-chart/templates/12-config-map.yaml
- AUTH_DEFAULT_FILTER ConfigMap entry removed (hub no longer reads it).
helm-chart/README.md
- tap.auth.defaultFilter row removed.
- tap.auth.roles default value example updated: filter: "" → namespaces: "*";
description gains the per-role namespaces semantics legend.
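The resulting per-role shape, as a sketch (only `namespaces: "*"` on the admin role is from the commit; the second role illustrates the literal and glob semantics):

```yaml
tap:
  auth:
    roles:
      admin:
        namespaces: "*"               # all namespaces
      dev:                            # example role, not in the chart
        namespaces: "staging,team-*"  # comma-separated literal + glob
```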
* fix(release-pr): sync bumped Chart.yaml to kubeshark.github.io
The release-pr target was switching back to master (and pulling)
BEFORE copying helm-chart/ into ../kubeshark.github.io/charts/chart.
That reverted the working tree to the pre-bump Chart.yaml, so the
kubeshark.github.io PR shipped the previous version and the
chart-releaser action failed trying to recreate an existing tag.
Copy the bumped chart from the release/vX.Y.Z working tree, then
switch kubeshark back to master at the end of the target.
Also consolidate iterative robustness improvements: VERSION
validation, idempotent sibling-repo tagging, idempotent branch /
commit / push / PR creation, and a "nothing to commit" guard so
reruns of release-pr do not fail.
* refactor(release): split release-pr into three rerunnable targets
Before, release-pr did three things in one recipe: tag sibling
repos, create the kubeshark release PR, and create the helm chart
PR. If any step failed, the whole target had to be rerun, even for
the parts that had already succeeded, and some sub-steps (like
tagging worker/hub/front after a docker-image-only rebuild) had no
standalone entry point.
Split into:
- release-siblings : tag worker, hub, front
- release-pr-kubeshark : bump Chart.yaml, build, open kubeshark PR
- release-pr-helm : sync chart to kubeshark.github.io, open helm PR
- release-pr : orchestrates all three in order
Each is idempotent and can be rerun independently. release-siblings
is now the canonical entry point for tagging sibling repos when
refreshing docker images without a full release.
release-pr-helm checks out release/v$(VERSION) (fetching from origin
if absent) before copying helm-chart/, so it has the bumped Chart.yaml
regardless of whether it runs right after release-pr-kubeshark or
days later in a separate invocation.
A shared _release-check-version prerequisite validates VERSION once
per target invocation.
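The split can be pictured as a Makefile skeleton (recipe bodies elided; the target names are the ones above, but the prerequisite wiring is an assumption):

```makefile
# Sketch only: orchestration shape, not the real recipes.
release-siblings: _release-check-version
	# tag worker, hub, front (idempotent)

release-pr-kubeshark: _release-check-version
	# bump Chart.yaml, build, open kubeshark PR

release-pr-helm: _release-check-version
	# checkout release/v$(VERSION), sync chart, open helm PR

release-pr: release-siblings release-pr-kubeshark release-pr-helm
```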
* fix(release): make branch creation and push truly idempotent
Delete and recreate local release/helm branches instead of conditionally
checking out, and use --force-with-lease push to handle local/remote
divergence on reruns.
---------
Co-authored-by: Alon Girmonsky <alongir@Alons-Mac-Studio.local>
Add mongodb to the enabled dissectors list and port mapping (27017)
in both Go config defaults and Helm chart values.
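A hedged sketch of the Helm side (the key paths are assumptions; only the dissector name and port 27017 come from the commit):

```yaml
tap:
  enabledDissectors:    # key name assumed
    - mongodb
  portMapping:          # key name and shape assumed
    mongodb: "27017"
```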
Co-authored-by: Alon Girmonsky <alongir@Alons-Mac-Studio.local>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace deprecated resolve_workload/resolve_ip references with the new
list_workloads and list_ips tools that support both singular lookup
(name+namespace or IP) and filtered scan (namespace/regex/label filters
against snapshots).
Ref: kubeshark/hub#687
Co-authored-by: Alon Girmonsky <alongir@Alons-Mac-Studio.local>
* 💄 Improve README with AI skills, KFL semantics image, and cloud storage
- Add AI Skills section with Network RCA and KFL skills, Claude Code plugin install
- Rename "Network Traffic Indexing" to "Query with API, Kubernetes, and Network Semantics" with new KFL semantics image showing how a single query combines all three layers
- Add cloud storage providers (S3, Azure Blob, GCS) and decrypted TLS to Traffic Retention section
- Update Features table: add AI Skills, KFL query language, cloud storage, delayed indexing
* 🔒 Add encrypted traffic visibility to README "What you can do" section
* 🎨 Update snapshots image in README
---------
Co-authored-by: Alon Girmonsky <alongir@Alons-Mac-Studio.local>
- Fix macOS sed -i requiring empty backup extension argument
- Checkout master after creating kubeshark release PR
- Checkout master in kubeshark.github.io before and after creating helm PR
- Run all kubeshark.github.io operations in a single shell to avoid lost cd context
Co-authored-by: Alon Girmonsky <alongir@Alons-Mac-Studio.local>
* Use local timezone instead of UTC in Network RCA skill output
Add a Timezone Handling section that instructs the agent to detect the
local timezone, present local time as the primary reference with UTC in
parentheses, and convert UTC tool responses before presenting to users.
Update all example timestamps to demonstrate the local+UTC format.
Closes #1879
* Ensure agent proactively starts dissection for workload/API queries
The agent was waiting for dissection to complete without ever starting it.
Add explicit instructions: check dissection status first, start it if
missing, and default to the Dissection route for any non-PCAP question.
Only PCAP-specific requests can skip dissection.
* Translate every API/Kubernetes question into a fresh list_api_calls query
Add "Every Question Is a Query" section: each user prompt with API or
Kubernetes semantics should map to a list_api_calls call with the
appropriate KFL filter. Includes examples of natural language to KFL
translation. Agent should never answer from memory or stale results.
---------
Co-authored-by: Alon Girmonsky <alongir@Alons-Mac-Studio.local>
* Revamp README intro, sections, and descriptions
Rewrite the opening description to focus on indexing and querying.
Replace "What's captured" with actionable "What you can do" bullets.
Add port-forward step and ingress recommendation to Get Started.
Rename and tighten section descriptions: Network Data for AI Agents,
Network Traffic Indexing, Workload Dependency Map, Traffic Retention
& PCAP Export.
* Remove Raw Capture from features table
mcp-publisher login github uses the device flow (interactive OAuth) which
requires a human to visit a URL - this can never work in CI. Switch to
github-oidc which uses the OIDC token provided by GitHub Actions.
* Reapply "Add get_file_url and download_file MCP tools"
This reverts commit a46f05c4aa.
* Use dedicated HTTP client for file downloads to support large files
The default httpClient has a 30s total timeout that would fail for
large PCAP downloads (up to 10GB). Use a separate client with only
connection-level timeouts (TLS handshake, response headers) so the
body can stream without a deadline.
Allow users to specify a local Helm chart folder via CLI flag or config,
which takes precedence over the KUBESHARK_HELM_CHART_PATH env variable and
the remote Helm repo. Also update nginx proxy config to disable buffering
for better streaming and large snapshot support.
When tools like export_snapshot_pcap return a relative file path,
the MCP client needs a way to resolve it to a full URL or download
the file locally. These two new tools bridge that gap.
* Update README with new structure and AI focus
* Update AI section: AI-Powered Root Cause Analysis with agents
* updated links
* added an image to the API context
* some fixes to the readme
* Remove TODO comments - using real images
* Fix broken MCP Registry links in mcp/README.md
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
The Hub API expects a 'name' field but the MCP server was sending 'tool'.
This caused all Hub-forwarded tools (list_l4_flows, get_l4_flow_summary,
list_api_calls, etc.) to fail with 'tool name is required' error.
Local tools like check_kubeshark_status were unaffected as they don't
call the Hub API.