## serve
The `serve` command allows you to run k8sgpt in gRPC server mode.
This is typically enabled by running `k8sgpt serve`, and it is how the in-cluster k8sgpt deployment functions when managed by the k8sgpt-operator.
The gRPC interface that is served is hosted on buf, and the repository for it is here.
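
As a minimal sketch, starting the server locally looks like the following; the port is an assumption based on the `localhost:8080` endpoints used in the examples below.

```sh
# Start k8sgpt in gRPC server mode.
# The grpcurl examples below assume the server is reachable on localhost:8080.
k8sgpt serve
```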
### grpcurl
A fantastic tool for local debugging and development is `grpcurl`.
It allows you to form curl-like requests over HTTP/2, e.g.:
```sh
grpcurl -plaintext -d '{"namespace": "k8sgpt", "explain" : "true"}' localhost:8080 schema.v1.ServiceAnalyzeService/Analyze
```
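
If the server registers gRPC reflection (an assumption, not confirmed here), grpcurl can also discover the available services and methods:

```sh
# List the services exposed by the server (requires gRPC server reflection).
grpcurl -plaintext localhost:8080 list

# Describe a specific service to see its methods and message types.
grpcurl -plaintext localhost:8080 describe schema.v1.ServiceConfigService
```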
```sh
grpcurl -plaintext localhost:8080 schema.v1.ServiceConfigService/ListIntegrations
```

```json
{
  "integrations": [
    "prometheus"
  ]
}
```
```sh
grpcurl -plaintext -d '{"integrations":{"prometheus":{"enabled":"true","namespace":"default","skipInstall":"false"}}}' localhost:8080 schema.v1.ServiceConfigService/AddConfig
```
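
A simple sanity check after `AddConfig` is to call `ListIntegrations` again, as in the earlier example; whether its output reflects the newly applied configuration, rather than only the integrations that are available, is an assumption not confirmed by this document.

```sh
# Re-query the config service after adding the prometheus integration.
grpcurl -plaintext localhost:8080 schema.v1.ServiceConfigService/ListIntegrations
```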