* add amazonbedrock AI provider
Signed-off-by: Su Wei <suwei007@gmail.com>
* add amazonbedrock, change the model list to a const var
Signed-off-by: Su Wei <suwei007@gmail.com>
* update IAI config and auth cmd, add providerRegion
Signed-off-by: Wei Su <wsuam@amazon.com>
* fix wrong filename
Signed-off-by: Wei Su <wsuam@amazon.com>
* chore: added some doc info
Signed-off-by: Alex Jones <alexsimonjones@gmail.com>
---------
Signed-off-by: Su Wei <suwei007@gmail.com>
Signed-off-by: Wei Su <wsuam@amazon.com>
Signed-off-by: Alex Jones <alexsimonjones@gmail.com>
Co-authored-by: Wei Su <wsuam@amazon.com>
Co-authored-by: Aris Boutselis <aris.boutselis@senseon.io>
Co-authored-by: Alex Jones <alexsimonjones@gmail.com>
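A rough usage sketch of the new backend (the Bedrock model ID and region below are illustrative assumptions; only the --providerRegion flag is taken from this change):

    $ k8sgpt auth add --backend amazonbedrock --providerRegion us-east-1 --model anthropic.claude-v2
    # model ID and region are placeholders; --providerRegion is the new flag added here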
The `auth add` cmd should use `backend` and `model` default values when the user doesn't specify them
Closes: #567
Signed-off-by: Jian Zhang <jiazha@redhat.com>
Co-authored-by: Thomas Schuetz <38893055+thschue@users.noreply.github.com>
Co-authored-by: Alex Jones <alexsimonjones@gmail.com>
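A minimal sketch of the intended behaviour (the specific defaults named in the comment are assumptions, not part of this change):

    $ k8sgpt auth add
    # with neither --backend nor --model given, the command now falls back to the
    # built-in defaults (e.g. the openai backend with its default model)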
* feat: openAI explicit value for maxToken and temp
When k8sgpt talks to vLLM, the default MaxToken is 16,
which is too small.
Since most models support 2048 tokens (e.g. Llama 1), use that
as a safe default value.
Signed-off-by: Peter Pan <Peter.Pan@daocloud.io>
* feat: make temperature a flag
Signed-off-by: Peter Pan <Peter.Pan@daocloud.io>
---------
Signed-off-by: Peter Pan <Peter.Pan@daocloud.io>
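A hedged example of the resulting behaviour (exact values are assumptions; the change only states that MaxToken gets an explicit safe value and temperature becomes a flag):

    $ k8sgpt auth add --backend openai --temperature 0.7
    # temperature is now user-configurable via a flag, and MaxToken is sent with an
    # explicit safe value (on the order of 2048) instead of the 16-token default seen with vLLM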
* feat: rename "auth new" to "auth add"
This change makes the command more consistent with the rest of the code
Signed-off-by: Matthis Holleville <matthish29@gmail.com>
* feat: rework "auth remove" to be more consistent with other remove commands like "filters remove"
Signed-off-by: Matthis Holleville <matthish29@gmail.com>
* feat: update documentation
Signed-off-by: Matthis Holleville <matthish29@gmail.com>
---------
Signed-off-by: Matthis Holleville <matthish29@gmail.com>
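For reference, a sketch of the renamed commands (the exact shape of auth remove is an assumption based on the stated goal of mirroring filters remove):

    $ k8sgpt auth add --backend openai        # previously: k8sgpt auth new
    $ k8sgpt auth remove --backends openai    # reworked to match other remove commands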