Mirror of https://github.com/k8sgpt-ai/k8sgpt.git, synced 2025-09-16 15:20:38 +00:00
docs: fix README (#345)
Signed-off-by: Harshit Mehta <hdm23061993@gmail.com>
Co-authored-by: Matthis <99146727+matthisholleville@users.noreply.github.com>
@@ -311,6 +311,8 @@ _Analysis with serve mode_
curl -X GET "http://localhost:8080/analyze?namespace=k8sgpt&explain=false"
```
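The same endpoint takes `explain` as a boolean query parameter, so an `explain=true` variant should ask the configured AI backend to explain each result. The call below is a sketch inferred from the parameter shown above, and it assumes a backend has already been set up (for example via `k8sgpt auth`):

```
curl -X GET "http://localhost:8080/analyze?namespace=k8sgpt&explain=true"
```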
</details>

## Running local models

To run local models, it is possible to use OpenAI-compatible APIs, for instance [LocalAI](https://github.com/go-skynet/LocalAI), which uses [llama.cpp](https://github.com/ggerganov/llama.cpp) and [ggml](https://github.com/ggerganov/ggml) to run inference on consumer-grade hardware. Models supported by LocalAI include, for instance, Vicuna, Alpaca, LLaMA, Cerebras, GPT4All, GPT4All-J, and Koala.
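As a rough sketch of how this fits together, you would start LocalAI and then point k8sgpt at its OpenAI-compatible endpoint. The container image, flags, and `<model_name>` placeholder below are assumptions based on the upstream LocalAI and k8sgpt docs of the time, so verify them against `k8sgpt auth --help` and the LocalAI README for your versions:

```
# Start LocalAI; it serves an OpenAI-compatible API on port 8080 by default
# (image name and --models-path flag are assumptions, check the LocalAI docs)
docker run -p 8080:8080 -v $PWD/models:/models quay.io/go-skynet/local-ai:latest --models-path /models

# Point k8sgpt at the local endpoint
# (--backend/--model/--baseurl flags are assumptions, verify with `k8sgpt auth --help`)
k8sgpt auth --backend localai --model <model_name> --baseurl http://localhost:8080/v1

# Run an analysis whose explanations are generated by the local model
k8sgpt analyze --explain --backend localai
```

The key point is that LocalAI exposes the same API shape as OpenAI, so k8sgpt only needs a different base URL and model name rather than any code changes.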