feat: openAI explicit value for maxToken and temperature (#659)

* feat: openAI explicit value for maxToken and temp

Because when k8sgpt talks to vLLM, the default MaxToken is 16,
which is too small.
Since most models support at least 2048 tokens (e.g. Llama 1),
2048 is used here as a safe value.

Signed-off-by: Peter Pan <Peter.Pan@daocloud.io>

* feat: make temperature a flag

Signed-off-by: Peter Pan <Peter.Pan@daocloud.io>

---------

Signed-off-by: Peter Pan <Peter.Pan@daocloud.io>
Author: Peter Pan
Date: 2023-09-18 20:14:43 +08:00 (committed via GitHub)
Parent: 54caff837d
Commit: f55946d60e
8 changed files with 66 additions and 26 deletions

@@ -19,11 +19,12 @@ import (
)
 var (
-	backend  string
-	password string
-	baseURL  string
-	model    string
-	engine   string
+	backend     string
+	password    string
+	baseURL     string
+	model       string
+	engine      string
+	temperature float32
 )
var configAI ai.AIConfiguration
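
The diff above adds a `temperature` variable alongside the other CLI-level settings. A minimal sketch of the idea using Go's standard `flag` package (k8sgpt itself wires these through cobra/viper, and the flag names and defaults here are illustrative, not the project's actual values):

```go
package main

import (
	"flag"
	"fmt"
)

// newAIFlags declares explicit defaults for max tokens and temperature
// instead of inheriting the backend's implicit ones — vLLM defaults
// max_tokens to 16, which truncates most completions.
func newAIFlags() (*flag.FlagSet, *int, *float64) {
	fs := flag.NewFlagSet("analyze", flag.ContinueOnError)
	maxTokens := fs.Int("max-tokens", 2048, "maximum completion tokens")
	temperature := fs.Float64("temperature", 0.7, "sampling temperature")
	return fs, maxTokens, temperature
}

func main() {
	fs, maxTokens, temperature := newAIFlags()
	// Users can override the temperature; max-tokens keeps its safe default.
	fs.Parse([]string{"--temperature", "0.2"})
	fmt.Printf("maxTokens=%d temperature=%.1f\n", *maxTokens, *temperature)
}
```

The point of the explicit default is that a backend like vLLM never gets to apply its own tiny `max_tokens`, while still letting users tune both values per invocation.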