mirror of
https://github.com/k8sgpt-ai/k8sgpt.git
synced 2025-09-17 07:41:25 +00:00
feat: openAI explicit value for maxToken and temperature (#659)

* feat: openAI explicit values for maxToken and temp

When k8sgpt talks to vLLM, the default MaxToken is 16, which is far too small. Since most models support 2048 tokens (e.g. Llama 1), 2048 is used here as a safe explicit value.

Signed-off-by: Peter Pan <Peter.Pan@daocloud.io>

* feat: make temperature a flag

Signed-off-by: Peter Pan <Peter.Pan@daocloud.io>

---------

Signed-off-by: Peter Pan <Peter.Pan@daocloud.io>
@@ -19,11 +19,12 @@ import (
 )
 
 var (
-	backend  string
-	password string
-	baseURL  string
-	model    string
-	engine   string
+	backend     string
+	password    string
+	baseURL     string
+	model       string
+	engine      string
+	temperature float32
 )
 
 var configAI ai.AIConfiguration