mirror of
https://github.com/k8sgpt-ai/k8sgpt.git
synced 2025-04-28 03:20:56 +00:00
feat: caching (#439)
* feat: added the ability to set a user default AI provider
* feat: s3 based caching
* updated README.md
* chore: region is a must have
* chore: clarified remove command
* updated remove.go
* chore: test fmt causing issues, will open another PR

Signed-off-by: Alex Jones <alexsimonjones@gmail.com>
This commit is contained in:
parent
c659a875fc
commit
948dae5e28
4 .github/workflows/test.yaml vendored
@@ -26,6 +26,6 @@ jobs:
      - name: Unit Test
        run: make test

      - name: Fmt Test
        run: fmtFiles=$(make fmt); if [ "$fmtFiles" != "" ];then exit 1; fi
      # - name: Fmt Test
      #   run: fmtFiles=$(make fmt); if [ "$fmtFiles" != "" ];then exit 1; fi
151 README.md
@@ -16,6 +16,8 @@ It has SRE experience codified into its analyzers and helps to pull out the most

<a href="https://www.producthunt.com/posts/k8sgpt?utm_source=badge-featured&utm_medium=badge&utm_souce=badge-k8sgpt" target="_blank"><img src="https://api.producthunt.com/widgets/embed-image/v1/featured.svg?post_id=389489&theme=light" alt="K8sGPT - K8sGPT gives Kubernetes Superpowers to everyone | Product Hunt" style="width: 250px; height: 54px;" width="250" height="54" /></a>

<img src="images/demo4.gif" width=650px; />

# CLI Installation
@@ -127,8 +129,6 @@ _This mode of operation is ideal for continuous monitoring of your cluster and c

* Run `k8sgpt analyze` to run a scan.
* And use `k8sgpt analyze --explain` to get a more detailed explanation of the issues.

<img src="images/demo4.gif" width=650px; />

## Analyzers

K8sGPT uses analyzers to triage and diagnose issues in your cluster. It has a set of analyzers that are built in, but
@@ -188,8 +188,8 @@ _Anonymize during explain_

k8sgpt analyze --explain --filter=Service --output=json --anonymize
```

### Using filters
<details>
<summary> Using filters </summary>

_List filters_
@@ -221,11 +221,9 @@ k8sgpt filters remove [filter(s)]

</details>

### Additional commands

<details>
<summary> Additional commands </summary>
_List configured backends_

```
@@ -275,11 +273,62 @@ curl -X GET "http://localhost:8080/analyze?namespace=k8sgpt&explain=false"

```
</details>

## Additional AI providers

### Setting a new default AI provider
## Key Features

<details>
<summary> LocalAI provider </summary>

To run local models, it is possible to use OpenAI compatible APIs, for instance [LocalAI](https://github.com/go-skynet/LocalAI), which uses [llama.cpp](https://github.com/ggerganov/llama.cpp) and [ggml](https://github.com/ggerganov/ggml) to run inference on consumer-grade hardware. Models supported by LocalAI include Vicuna, Alpaca, LLaMA, Cerebras, GPT4ALL, GPT4ALL-J and koala.

To run local inference, you need to download the models first; for instance, you can find `ggml`-compatible models on [Hugging Face](https://huggingface.co/models?search=ggml) (for example Vicuna, Alpaca and koala).

### Start the API server

To start the API server, follow the instructions in [LocalAI](https://github.com/go-skynet/LocalAI#example-use-gpt4all-j-model).

### Run k8sgpt

To run k8sgpt, run `k8sgpt auth new` with the `localai` backend:

```
k8sgpt auth new --backend localai --model <model_name> --baseurl http://localhost:8080/v1
```

Now you can analyze with the `localai` backend:

```
k8sgpt analyze --explain --backend localai
```

</details>

<details>
<summary> AzureOpenAI provider </summary>

<em>Prerequisites:</em> an Azure OpenAI deployment is needed; please visit the official MS [documentation](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource) to create your own.

To authenticate with k8sgpt, you will need the Azure OpenAI endpoint of your tenant `"https://your Azure OpenAI Endpoint"`, the API key to access your deployment, the deployment name of your model and the model name itself.

To run k8sgpt, run `k8sgpt auth add` with the `azureopenai` backend:
```
k8sgpt auth add --backend azureopenai --baseurl https://<your Azure OpenAI endpoint> --engine <deployment_name> --model <model_name>
```
Lastly, enter your Azure API key after the prompt.

Now you are ready to analyze with the Azure OpenAI backend:
```
k8sgpt analyze --explain --backend azureopenai
```

</details>

<details>
<summary>Setting a new default AI provider</summary>

There may be scenarios where you have K8sGPT configured with several AI providers. In that case you may want to set a default other than OpenAI, which is the project default.
@@ -298,6 +347,7 @@ Unused:

```

_To set a new default provider_

```
@@ -309,60 +359,12 @@ Default provider set to azureopenai

</details>

### Azure OpenAI
<em>Prerequisites:</em> an Azure OpenAI deployment is needed, please visit MS official [documentation](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource) to create your own.

To authenticate with k8sgpt, you will need the Azure OpenAI endpoint of your tenant `"https://your Azure OpenAI Endpoint"`, the api key to access your deployment, the deployment name of your model and the model name itself.
<details>

### Run k8sgpt
To run k8sgpt, run `k8sgpt auth` with the `azureopenai` backend:
```
k8sgpt auth add --backend azureopenai --baseurl https://<your Azure OpenAI endpoint> --engine <deployment_name> --model <model_name>
```
Lastly, enter your Azure API key, after the prompt.

Now you are ready to analyze with the azure openai backend:
```
k8sgpt analyze --explain --backend azureopenai
```

</details>

### Running local models

To run local models, it is possible to use OpenAI compatible APIs, for instance [LocalAI](https://github.com/go-skynet/LocalAI) which uses [llama.cpp](https://github.com/ggerganov/llama.cpp) and [ggml](https://github.com/ggerganov/ggml) to run inference on consumer-grade hardware. Models supported by LocalAI for instance are Vicuna, Alpaca, LLaMA, Cerebras, GPT4ALL, GPT4ALL-J and koala.

<details>

To run local inference, you need to download the models first, for instance you can find `ggml` compatible models in [huggingface.com](https://huggingface.co/models?search=ggml) (for example vicuna, alpaca and koala).

### Start the API server

To start the API server, follow the instruction in [LocalAI](https://github.com/go-skynet/LocalAI#example-use-gpt4all-j-model).

### Run k8sgpt

To run k8sgpt, run `k8sgpt auth add` with the `localai` backend:

```
k8sgpt auth add --backend localai --model <model_name> --baseurl http://localhost:8080/v1
```

Now you can analyze with the `localai` backend:

```
k8sgpt analyze --explain --backend localai
```

</details>

## How does anonymization work?

With this option, the data is anonymized before being sent to the AI Backend. During the analysis execution, `k8sgpt` retrieves sensitive data (Kubernetes object names, labels, etc.). This data is masked when sent to the AI backend and replaced by a key that can be used to de-anonymize the data when the solution is returned to the user.

<details>

<summary> Anonymization </summary>
1. Error reported during analysis:
```bash
Error: HorizontalPodAutoscaler uses StatefulSet/fake-deployment as ScaleTargetRef which does not exist.
@@ -387,9 +389,8 @@ The Kubernetes system is trying to scale a StatefulSet named fake-deployment usi

</details>
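To make the masking step concrete, here is a minimal illustrative sketch of the idea only (not k8sgpt's actual implementation): sensitive names are swapped for opaque placeholders before the text is sent to the backend, and the stored mapping restores them in the returned solution. The placeholder format and names here are assumptions.

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Hypothetical placeholder scheme; k8sgpt's real masking logic differs.
	secret := "fake-deployment" // e.g. a Kubernetes object name
	placeholder := "obj-0001"
	mapping := map[string]string{placeholder: secret}

	// Mask the sensitive name before sending the error text to the AI backend.
	errText := "HorizontalPodAutoscaler uses StatefulSet/fake-deployment as ScaleTargetRef which does not exist."
	masked := strings.ReplaceAll(errText, secret, placeholder)
	fmt.Println("sent to backend:", masked)

	// De-anonymize the returned solution before showing it to the user.
	solution := "The StatefulSet obj-0001 referenced by the HorizontalPodAutoscaler does not exist."
	for ph, real := range mapping {
		solution = strings.ReplaceAll(solution, ph, real)
	}
	fmt.Println("shown to user:", solution)
}
```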
## Configuration

<details>
<summary> Configuration management</summary>
`k8sgpt` stores config data in the `$XDG_CONFIG_HOME/k8sgpt/k8sgpt.yaml` file. The data is stored in plain text, including your OpenAI key.

Config file locations:
@@ -400,6 +401,38 @@ Config file locations:

| Windows | %LOCALAPPDATA%/k8sgpt/k8sgpt.yaml |
</details>
<details>
<summary> Remote caching </summary>

There may be scenarios where caching remotely is preferred.
In these scenarios K8sGPT supports AWS S3 integration.

_As a prerequisite `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` are required as environment variables._

_Adding a remote cache_
Note: this will create the bucket if it does not exist
```
k8sgpt cache add --region <aws region> --bucket <name>
```

_Listing cache items_
```
k8sgpt cache list
```

_Removing the remote cache_
Note: this will not delete the bucket
```
k8sgpt cache remove --bucket <name>
```
</details>
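For reference, `k8sgpt cache add` persists the bucket name and region under a `cache` key in the k8sgpt config file. The sketch below shows one way such a stored section can be read back with viper, mirroring the `CacheProvider` struct and the enabled check introduced by this commit; the standalone `main`, config path and file name are assumptions for illustration only.

```go
package main

import (
	"fmt"

	"github.com/spf13/viper"
)

// CacheProvider mirrors pkg/cache.CacheProvider from this commit.
type CacheProvider struct {
	BucketName string `mapstructure:"bucketname"`
	Region     string `mapstructure:"region"`
}

func main() {
	viper.SetConfigName("k8sgpt") // assumed file name: k8sgpt.yaml
	viper.SetConfigType("yaml")
	viper.AddConfigPath(".") // assumption: config in the working directory for this demo

	if err := viper.ReadInConfig(); err != nil {
		fmt.Println("no config found:", err)
		return
	}

	var c CacheProvider
	if err := viper.UnmarshalKey("cache", &c); err != nil {
		fmt.Println("cannot parse cache settings:", err)
		return
	}

	// A remote (S3) cache is considered enabled when both fields are set,
	// matching the RemoteCacheEnabled check in pkg/cache.
	fmt.Println("remote cache enabled:", c.BucketName != "" && c.Region != "")
}
```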
## Documentation

Our official documentation is available [here](https://docs.k8sgpt.ai).

## Contributing

Please read our [contributing guide](./CONTRIBUTING.md).
@@ -65,4 +65,7 @@ var removeCmd = &cobra.Command{
	},
}

func init() {}
func init() {

}
72 cmd/cache/add.go vendored Normal file
@@ -0,0 +1,72 @@
/*
Copyright 2023 The K8sGPT Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package cache

import (
	"fmt"
	"os"

	"github.com/fatih/color"
	"github.com/k8sgpt-ai/k8sgpt/pkg/cache"
	"github.com/spf13/cobra"
	"github.com/spf13/viper"
)

var (
	region string
)

// addCmd represents the add command
var addCmd = &cobra.Command{
	Use:   "add",
	Short: "Add a remote cache",
	Long: `This command allows you to add a remote cache to store the results of an analysis.
	The supported cache types are:
	- S3`,
	Run: func(cmd *cobra.Command, args []string) {
		fmt.Println(color.YellowString("Adding remote S3 based cache"))

		// Check to see whether there is cache information already
		var cacheInfo cache.CacheProvider
		err := viper.UnmarshalKey("cache", &cacheInfo)
		if err != nil {
			color.Red("Error: %v", err)
			os.Exit(1)
		}
		if cacheInfo.BucketName != "" {
			color.Yellow("Error: a cache is already configured, please remove it first")
			os.Exit(1)
		}
		cacheInfo.BucketName = bucketname
		cacheInfo.Region = region

		// Save the cache information
		viper.Set("cache", cacheInfo)
		err = viper.WriteConfig()
		if err != nil {
			color.Red("Error: %v", err)
			os.Exit(1)
		}
	},
}

func init() {
	CacheCmd.AddCommand(addCmd)
	addCmd.Flags().StringVarP(&region, "region", "r", "", "The region to use for the cache")
	addCmd.Flags().StringVarP(&bucketname, "bucket", "b", "", "The name of the bucket to use for the cache")
	addCmd.MarkFlagRequired("bucket")
	addCmd.MarkFlagRequired("region")
}
36 cmd/cache/cache.go vendored Normal file
@@ -0,0 +1,36 @@
/*
Copyright 2023 The K8sGPT Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package cache

import (
	"github.com/spf13/cobra"
)

var (
	bucketname string
)

// CacheCmd represents the cache command
var CacheCmd = &cobra.Command{
	Use:   "cache",
	Short: "For working with the cache of analysis results",
	Long:  `Cache commands allow you to add a remote cache, list the contents of the cache, and remove items from the cache.`,
	Run: func(cmd *cobra.Command, args []string) {
		cmd.Help()
	},
}

func init() {
}
53 cmd/cache/list.go vendored Normal file
@@ -0,0 +1,53 @@
/*
Copyright 2023 The K8sGPT Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package cache

import (
	"os"

	"github.com/fatih/color"
	"github.com/k8sgpt-ai/k8sgpt/pkg/cache"
	"github.com/spf13/cobra"
)

// listCmd represents the list command
var listCmd = &cobra.Command{
	Use:   "list",
	Short: "List the contents of the cache",
	Long:  `This command allows you to list the contents of the cache.`,
	Run: func(cmd *cobra.Command, args []string) {

		// load remote cache if it is configured
		remoteCacheEnabled, err := cache.RemoteCacheEnabled()
		if err != nil {
			color.Red("Error: %v", err)
			os.Exit(1)
		}
		c := cache.New(false, remoteCacheEnabled)
		// list the contents of the cache
		names, err := c.List()
		if err != nil {
			panic(err)
		}
		for _, name := range names {
			println(name)
		}
	},
}

func init() {
	CacheCmd.AddCommand(listCmd)
}
60 cmd/cache/remove.go vendored Normal file
@@ -0,0 +1,60 @@
/*
Copyright 2023 The K8sGPT Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package cache

import (
	"os"

	"github.com/fatih/color"
	"github.com/k8sgpt-ai/k8sgpt/pkg/cache"
	"github.com/spf13/cobra"
	"github.com/spf13/viper"
)

// removeCmd represents the remove command
var removeCmd = &cobra.Command{
	Use:   "remove",
	Short: "Remove the remote cache",
	Long:  `This command allows you to remove the remote cache and use the default filecache.`,
	Run: func(cmd *cobra.Command, args []string) {

		// Remove the remote cache
		var cacheInfo cache.CacheProvider
		err := viper.UnmarshalKey("cache", &cacheInfo)
		if err != nil {
			color.Red("Error: %v", err)
			os.Exit(1)
		}
		if cacheInfo.BucketName == "" {
			color.Yellow("Error: no cache is configured")
			os.Exit(1)
		}
		// Warn the user that the S3 bucket itself is left in place
		color.Yellow("Warning: this will not delete the S3 bucket %s", cacheInfo.BucketName)
		cacheInfo = cache.CacheProvider{}
		viper.Set("cache", cacheInfo)
		err = viper.WriteConfig()
		if err != nil {
			color.Red("Error: %v", err)
			os.Exit(1)
		}

		color.Green("Successfully removed the remote cache")
	},
}

func init() {
	CacheCmd.AddCommand(removeCmd)
}
@@ -20,6 +20,7 @@ import (
	"github.com/adrg/xdg"
	"github.com/k8sgpt-ai/k8sgpt/cmd/analyze"
	"github.com/k8sgpt-ai/k8sgpt/cmd/auth"
	"github.com/k8sgpt-ai/k8sgpt/cmd/cache"
	"github.com/k8sgpt-ai/k8sgpt/cmd/filters"
	"github.com/k8sgpt-ai/k8sgpt/cmd/generate"
	"github.com/k8sgpt-ai/k8sgpt/cmd/integration"
@@ -67,6 +68,7 @@ func init() {
	rootCmd.AddCommand(generate.GenerateCmd)
	rootCmd.AddCommand(integration.IntegrationCmd)
	rootCmd.AddCommand(serve.ServeCmd)
	rootCmd.AddCommand(cache.CacheCmd)
	rootCmd.PersistentFlags().StringVar(&cfgFile, "config", "", "config file (default is $HOME/.k8sgpt.yaml)")
	rootCmd.PersistentFlags().StringVar(&kubecontext, "kubecontext", "", "Kubernetes context to use. Only required if out-of-cluster.")
	rootCmd.PersistentFlags().StringVar(&kubeconfig, "kubeconfig", "", "Path to a kubeconfig. Only required if out-of-cluster.")
5 go.mod
@@ -28,6 +28,11 @@ require (
	buf.build/gen/go/k8sgpt-ai/k8sgpt/protocolbuffers/go v1.30.0-20230514071713-3d78cb8bbc06.1
)

require (
	github.com/aws/aws-sdk-go v1.44.264 // indirect
	github.com/jmespath/go-jmespath v0.4.0 // indirect
)

require (
	github.com/AdaLogics/go-fuzz-headers v0.0.0-20230106234847-43070de90fa1 // indirect
	github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 // indirect
8 go.sum
@@ -450,6 +450,8 @@ github.com/armon/go-radix v1.0.0/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgI
github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5 h1:0CwZNZbxp69SHPdPJAN/hZIm0C4OItdklCFmMRWYpio=
github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2 h1:DklsrG3dyBCFEj5IhUbnKptjxatkF07cF2ak3yi77so=
github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2/go.mod h1:WaHUgvxTVq04UNunO+XhnAqY/wQc+bxr74GqbsZ/Jqw=
github.com/aws/aws-sdk-go v1.44.264 h1:5klL62ebn6uv3oJ0ixF7K12hKItj8lV3QqWeQPlkFSs=
github.com/aws/aws-sdk-go v1.44.264/go.mod h1:aVsgQcEevwlmQ7qHE9I3h+dtQgpqhFB+i8Phjh7fkwI=
github.com/benbjohnson/clock v1.1.0 h1:Q92kusRqC1XV2MjkWETPvjJVqKetz1OzxZB7mHJLju8=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
@@ -777,6 +779,9 @@ github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANyt
github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99 h1:BQSFePA1RWJOlocH6Fxy8MmwDt+yVQYULKfN0RoTN8A=
github.com/jmespath/go-jmespath v0.4.0 h1:BEgLn5cpjn8UN1mAw4NjwDrS35OdebyEtFe+9YPoQUg=
github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo=
github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfCI6z80xFu9LTZmf1ZRjMHUOPmWr69U=
github.com/jmoiron/sqlx v1.3.5 h1:vFFPA71p1o5gAeqtEAwLU4dnX2napprKtHr7PYIcN3g=
github.com/jmoiron/sqlx v1.3.5/go.mod h1:nRVWtLre0KfCLJvgxzCsLVMogSvQ1zNJtpYr2Ccp0mQ=
github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
@@ -1239,6 +1244,7 @@ golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug
golang.org/x/net v0.0.0-20220909164309-bea034e7d591/go.mod h1:YDH+HFinaLZZlnHAfSS6ZXJJ9M9t4Dl22yv3iI2vPwk=
golang.org/x/net v0.0.0-20221012135044-0b7e1fb9d458/go.mod h1:YDH+HFinaLZZlnHAfSS6ZXJJ9M9t4Dl22yv3iI2vPwk=
golang.org/x/net v0.0.0-20221014081412-f15817d10f9b/go.mod h1:YDH+HFinaLZZlnHAfSS6ZXJJ9M9t4Dl22yv3iI2vPwk=
golang.org/x/net v0.1.0/go.mod h1:Cx3nUiGt4eDBEyega/BKRp+/AlGL8hYe7U9odMt2Cco=
golang.org/x/net v0.2.0/go.mod h1:KqCZLdyyvdV855qA2rE3GC2aiw5xGR5TEjj8smXukLY=
golang.org/x/net v0.5.0/go.mod h1:DivGGAXEgPSlEBzxGzZI+ZLohi+xUj054jfeKui00ws=
golang.org/x/net v0.9.0 h1:aWJ/m6xSmxWBx+V0XRHTlrYrPG56jKsLdTFmsSsCzOM=
@@ -1370,6 +1376,7 @@ golang.org/x/sys v0.0.0-20220728004956-3c1f35247d10/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220908164124-27713097b956/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20221013171732-95e765b1cc43/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.2.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.4.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
@@ -1378,6 +1385,7 @@ golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.0.0-20220526004731-065cf7ba2467/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.1.0/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.2.0/go.mod h1:TVmDHMZPmdnySmBfhjOoOdhjzdE1h4u1VwSiw2l1Nuc=
golang.org/x/term v0.4.0/go.mod h1:9P2UbLfCdcvo3p/nzKvsmas4TnlujnuoV9hGgYzW1lQ=
golang.org/x/term v0.6.0/go.mod h1:m6U89DPEgQRMq3DNkDClhWw02AUbt2daBVO4cn4Hv9U=
@@ -112,13 +112,19 @@ func NewAnalysis(backend string, language string, filters []string, namespace st
		return nil, err
	}

	// load remote cache if it is configured
	remoteCacheEnabled, err := cache.RemoteCacheEnabled()
	if err != nil {
		return nil, err
	}

	return &Analysis{
		Context:            ctx,
		Filters:            filters,
		Client:             client,
		AIClient:           aiClient,
		Namespace:          namespace,
		Cache:              cache.New(noCache),
		Cache:              cache.New(noCache, remoteCacheEnabled),
		Explain:            explain,
		MaxConcurrency:     maxConcurrency,
		AnalysisAIProvider: backend,
29 pkg/cache/cache.go vendored
@@ -1,14 +1,41 @@
package cache

import (
	"github.com/spf13/viper"
)

type ICache interface {
	Store(key string, data string) error
	Load(key string) (string, error)
	List() ([]string, error)
	Exists(key string) bool
	IsCacheDisabled() bool
}

func New(noCache bool) ICache {
func New(noCache bool, remoteCache bool) ICache {
	if remoteCache {
		return NewS3Cache(noCache)
	}
	return &FileBasedCache{
		noCache: noCache,
	}
}

// CacheProvider is the configuration for the cache provider when using a remote cache
type CacheProvider struct {
	BucketName string `mapstructure:"bucketname"`
	Region     string `mapstructure:"region"`
}

func RemoteCacheEnabled() (bool, error) {
	// load remote cache if it is configured
	var cache CacheProvider
	err := viper.UnmarshalKey("cache", &cache)
	if err != nil {
		return false, err
	}
	if cache.BucketName != "" && cache.Region != "" {
		return true, nil
	}
	return false, nil
}
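Because `ICache` is a small interface, alternative backends are easy to sketch. The in-memory implementation below is purely hypothetical (it is not part of this commit) and is shown only to illustrate the contract that the file-based and S3 caches both satisfy; written as a file in `pkg/cache`, it could serve as a test double.

```go
package cache

import "fmt"

// memoryCache is a hypothetical in-memory ICache used only for illustration/tests.
type memoryCache struct {
	noCache bool
	data    map[string]string
}

// Compile-time check that memoryCache satisfies ICache.
var _ ICache = (*memoryCache)(nil)

func NewMemoryCache(noCache bool) ICache {
	return &memoryCache{noCache: noCache, data: map[string]string{}}
}

func (m *memoryCache) Store(key string, data string) error {
	m.data[key] = data
	return nil
}

func (m *memoryCache) Load(key string) (string, error) {
	v, ok := m.data[key]
	if !ok {
		return "", fmt.Errorf("key %q not found", key)
	}
	return v, nil
}

func (m *memoryCache) List() ([]string, error) {
	keys := make([]string, 0, len(m.data))
	for k := range m.data {
		keys = append(keys, k)
	}
	return keys, nil
}

func (m *memoryCache) Exists(key string) bool {
	_, ok := m.data[key]
	return ok
}

func (m *memoryCache) IsCacheDisabled() bool {
	return m.noCache
}
```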
19 pkg/cache/file_based.go vendored
@@ -19,6 +19,25 @@ func (f *FileBasedCache) IsCacheDisabled() bool {
	return f.noCache
}

func (*FileBasedCache) List() ([]string, error) {
	path, err := xdg.CacheFile("k8sgpt")
	if err != nil {
		return nil, err
	}

	files, err := os.ReadDir(path)
	if err != nil {
		return nil, err
	}

	var result []string
	for _, file := range files {
		result = append(result, file.Name())
	}

	return result, nil
}

func (*FileBasedCache) Exists(key string) bool {
	path, err := xdg.CacheFile(filepath.Join("k8sgpt", key))
118 pkg/cache/s3_based.go vendored Normal file
@@ -0,0 +1,118 @@
package cache

import (
	"bytes"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
	"github.com/spf13/viper"
)

// S3Cache is an S3-backed implementation of ICache
type S3Cache struct {
	noCache    bool
	bucketName string
	session    *s3.S3
}

func (s *S3Cache) Store(key string, data string) error {
	// Store the object as a new file in the bucket with data as the content
	_, err := s.session.PutObject(&s3.PutObjectInput{
		Body:   aws.ReadSeekCloser(bytes.NewReader([]byte(data))),
		Bucket: aws.String(s.bucketName),
		Key:    aws.String(key),
	})
	return err
}

func (s *S3Cache) Load(key string) (string, error) {
	// Retrieve the object from the bucket and load it into a string
	result, err := s.session.GetObject(&s3.GetObjectInput{
		Bucket: aws.String(s.bucketName),
		Key:    aws.String(key),
	})
	if err != nil {
		return "", err
	}

	buf := new(bytes.Buffer)
	buf.ReadFrom(result.Body)
	result.Body.Close()
	return buf.String(), nil
}

func (s *S3Cache) List() ([]string, error) {
	// List the files in the bucket
	result, err := s.session.ListObjectsV2(&s3.ListObjectsV2Input{Bucket: aws.String(s.bucketName)})
	if err != nil {
		return nil, err
	}

	var keys []string
	for _, item := range result.Contents {
		keys = append(keys, *item.Key)
	}

	return keys, nil
}

func (s *S3Cache) Exists(key string) bool {
	// Check if the object exists in the bucket
	_, err := s.session.HeadObject(&s3.HeadObjectInput{
		Bucket: aws.String(s.bucketName),
		Key:    aws.String(key),
	})
	return err == nil
}

func (s *S3Cache) IsCacheDisabled() bool {
	return s.noCache
}

func NewS3Cache(nocache bool) ICache {
	var cache CacheProvider
	err := viper.UnmarshalKey("cache", &cache)
	if err != nil {
		panic(err)
	}
	if cache.BucketName == "" {
		panic("Bucket name not configured")
	}
	if cache.Region == "" {
		panic("Region not configured")
	}

	sess := session.Must(session.NewSessionWithOptions(session.Options{
		SharedConfigState: session.SharedConfigEnable,
		Config: aws.Config{
			Region: aws.String(cache.Region),
		},
	}))

	s := s3.New(sess)

	// Check if the bucket exists, if not create it
	_, err = s.HeadBucket(&s3.HeadBucketInput{
		Bucket: aws.String(cache.BucketName),
	})
	if err != nil {
		_, err = s.CreateBucket(&s3.CreateBucketInput{
			Bucket: aws.String(cache.BucketName),
		})
		if err != nil {
			panic(err)
		}
	}

	return &S3Cache{
		noCache:    nocache,
		session:    s,
		bucketName: cache.BucketName,
	}
}
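A minimal end-to-end usage sketch of the cache package after this change, assuming the k8sgpt config (and, for S3, AWS credentials plus the `cache` bucket/region settings) has already been loaded; otherwise `New` simply falls back to the file-based cache. The standalone `main` wrapper is an assumption for illustration.

```go
package main

import (
	"fmt"

	"github.com/k8sgpt-ai/k8sgpt/pkg/cache"
)

func main() {
	// Returns true only when both bucketname and region are set under the "cache" key.
	remote, err := cache.RemoteCacheEnabled()
	if err != nil {
		panic(err)
	}

	// New returns the S3-backed cache when remote caching is configured,
	// otherwise the default file-based cache.
	c := cache.New(false, remote)

	if err := c.Store("example-key", "example-value"); err != nil {
		panic(err)
	}
	if v, err := c.Load("example-key"); err == nil {
		fmt.Println("loaded from cache:", v)
	}
}
```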