# serve
The `serve` command allows you to run k8sgpt in gRPC server mode.
It is typically enabled by running `k8sgpt serve`, and this is how the in-cluster k8sgpt deployment functions when managed by the k8sgpt-operator.

The gRPC interface that is served is hosted on buf, and the schema definitions are maintained in their own repository.
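
A minimal sketch of starting the server locally; the `--port` flag and its default of 8080 are assumptions here, so check `k8sgpt serve --help` for the flags your build supports:

```sh
# Run k8sgpt as a gRPC server on port 8080
# (matching the localhost:8080 endpoint used in the examples below).
k8sgpt serve --port 8080
```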
## grpcurl
A fantastic tool for local debugging and development is `grpcurl`.
It allows you to form curl-like requests over HTTP/2, e.g.:
```sh
grpcurl -plaintext -d '{"namespace": "k8sgpt", "explain": "true"}' localhost:8080 schema.v1.ServiceAnalyzeService/Analyze
```
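
Because the examples here resolve method names without any local `.proto` files, they rely on the server exposing gRPC reflection. You can use the same mechanism to discover what is available (the service name below is taken from the examples in this README):

```sh
# List every service the server exposes via reflection.
grpcurl -plaintext localhost:8080 list

# Describe a service to see its methods and message types.
grpcurl -plaintext localhost:8080 describe schema.v1.ServiceConfigService
```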
```sh
grpcurl -plaintext localhost:8080 schema.v1.ServiceConfigService/ListIntegrations
{
  "integrations": [
    "trivy"
  ]
}
```
grpcurl -plaintext -d '{"integrations":{"trivy":{"enabled":"true","namespace":"default","skipInstall":"false"}}}' localhost:8080 schema.v1.ServiceConfigService/AddConfig