## serve

The `serve` command allows you to run k8sgpt in gRPC server mode. This is typically enabled through `k8sgpt serve`, and it is how the in-cluster k8sgpt deployment functions when managed by the [k8sgpt-operator](https://github.com/k8sgpt-ai/k8sgpt-operator).
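
For local development you can start the server directly. A minimal sketch; the `--port` flag is an assumption, chosen to match the `localhost:8080` address the grpcurl examples below target:

```sh
# Start k8sgpt in gRPC server mode; --port is an assumed flag and
# 8080 matches the localhost:8080 address used in the examples below.
k8sgpt serve --port 8080
```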

The gRPC interface that is served is hosted on [buf](https://buf.build/k8sgpt-ai/k8sgpt), and the schema repository is [here](https://github.com/k8sgpt-ai/schemas).
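
If you would rather work from a local copy of the schema than rely on server reflection, the buf CLI can export it. A minimal sketch; the buf module path below is an assumption based on the organisation name:

```sh
# Export the protobuf definitions to a local directory
# (the module path buf.build/k8sgpt-ai/k8sgpt is an assumption).
buf export buf.build/k8sgpt-ai/k8sgpt -o ./proto
```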

### grpcurl

A fantastic tool for local debugging and development is [grpcurl](https://github.com/fullstorydev/grpcurl). It allows you to form curl-like requests over HTTP/2, e.g.:

```sh
grpcurl -plaintext -d '{"namespace": "k8sgpt", "explain" : "true"}' localhost:8080 schema.v1.ServiceAnalyzeService/Analyze
```

```sh
grpcurl -plaintext localhost:8080 schema.v1.ServiceConfigService/ListIntegrations
```

```json
{
  "integrations": [
    "trivy"
  ]
}
```

```sh
grpcurl -plaintext -d '{"integrations":{"trivy":{"enabled":"true","namespace":"default","skipInstall":"false"}}}' localhost:8080 schema.v1.ServiceConfigService/AddConfig
```
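
If you are unsure of the exact service or method names, grpcurl can discover them through server reflection, which the plain calls above already rely on; `list` and `describe` are standard grpcurl verbs:

```sh
# List all services exposed via server reflection.
grpcurl -plaintext localhost:8080 list

# Describe the methods and message types of a specific service.
grpcurl -plaintext localhost:8080 describe schema.v1.ServiceConfigService
```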