serve
The serve command allows you to run k8sgpt in gRPC server mode.
This is typically enabled by running k8sgpt serve, and it is how the in-cluster k8sgpt deployment functions when managed by the k8sgpt-operator.
The gRPC interface that is served is hosted on Buf, and the repository for it is here.
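As a minimal sketch of a local session (the default listen port of 8080 is an assumption here, chosen to match the localhost:8080 examples later in this page):

```shell
# Run k8sgpt in gRPC server mode. The examples below assume it is
# reachable on localhost:8080 over plaintext HTTP/2.
k8sgpt serve
```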
grpcurl
A fantastic tool for local debugging and development is grpcurl.
It allows you to form curl-like requests over HTTP/2.
e.g.
grpcurl -plaintext -d '{"namespace": "k8sgpt", "explain" : "true"}' localhost:8080 schema.v1.ServerService/Analyze
grpcurl -plaintext localhost:8080 schema.v1.ServerService/ListIntegrations
{
"integrations": [
"trivy"
]
}
grpcurl -plaintext -d '{"integrations":{"trivy":{"enabled":"true","namespace":"default","skipInstall":"false"}}}' localhost:8080 schema.v1.ServerService/AddConfig
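Because the examples above call grpcurl without passing any .proto files, the server is assumed to expose gRPC reflection. If that holds, grpcurl's built-in list and describe verbs can be used to discover the available services and methods before calling them:

```shell
# List all services exposed via server reflection.
grpcurl -plaintext localhost:8080 list

# Show the methods and message types of the ServerService.
grpcurl -plaintext localhost:8080 describe schema.v1.ServerService
```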