
## serve

The `serve` command allows you to run k8sgpt in gRPC server mode. It is typically enabled through `k8sgpt serve`, and this is how the in-cluster k8sgpt deployment functions when managed by the k8sgpt-operator.

The gRPC interface that is served is hosted on Buf, and the repository for it is here.

## grpcurl

A fantastic tool for local debugging and development is `grpcurl`. It allows you to form curl-like requests over HTTP/2, e.g.:

```shell
grpcurl -plaintext -d '{"namespace": "k8sgpt", "explain": "true"}' localhost:8080 schema.v1.ServerService/Analyze
```

```shell
grpcurl -plaintext localhost:8080 schema.v1.ServerService/ListIntegrations
{
  "integrations": [
    "trivy"
  ]
}
```

```shell
grpcurl -plaintext -d '{"integrations":{"trivy":{"enabled":"true","namespace":"default","skipInstall":"false"}}}' localhost:8080 schema.v1.ServerService/AddConfig
```