Compare commits


3 Commits

Author SHA1 Message Date
copilot-swe-agent[bot]
b92512aec0 Complete PR #1602 review - all tests passing
Co-authored-by: AlexsJones <1235925+AlexsJones@users.noreply.github.com>
2026-01-30 16:59:09 +00:00
copilot-swe-agent[bot]
52f7808a3a Apply PR #1602: improve ConfigMap usage detection for sidecar patterns
Co-authored-by: AlexsJones <1235925+AlexsJones@users.noreply.github.com>
2026-01-30 16:52:40 +00:00
copilot-swe-agent[bot]
1f5d9a8fe0 Initial plan
2026-01-30 16:47:59 +00:00
20 changed files with 330 additions and 1222 deletions

View File

@@ -279,7 +279,6 @@ you will be able to write your own analyzers.
- [x] OperatorGroup
- [x] InstallPlan
- [x] Subscription
- [x] **CustomResource** - Generic analyzer for any CRD (cert-manager, ArgoCD, Kafka, etc.) [Documentation](docs/CRD_ANALYZER.md)
## Examples

View File

@@ -42,7 +42,7 @@ var (
	mcpPort string
	mcpHTTP bool
	// filters can be injected into the server (repeatable flag)
	filters []string
)
var ServeCmd = &cobra.Command{

View File

@@ -1,252 +0,0 @@
# Generic CRD Analyzer Configuration Examples
The Generic CRD Analyzer enables K8sGPT to automatically analyze custom resources from any CRD installed in your Kubernetes cluster. This provides observability for operator-managed resources such as cert-manager Certificates, ArgoCD Applications, Kafka clusters, and more.
## Basic Configuration
The CRD analyzer is configured via the K8sGPT configuration file (typically `~/.config/k8sgpt/k8sgpt.yaml`). Here's a minimal example:
```yaml
crd_analyzer:
  enabled: true
```
With this basic configuration, the analyzer will:
- Discover all CRDs installed in your cluster
- Apply generic health checks based on common Kubernetes patterns
- Report issues with resources that have unhealthy status conditions
## Configuration Options
### Complete Example
```yaml
crd_analyzer:
  enabled: true
  include:
    - name: certificates.cert-manager.io
      statusPath: ".status.conditions"
      readyCondition:
        type: "Ready"
        expectedStatus: "True"
    - name: applications.argoproj.io
      statusPath: ".status.health.status"
      expectedValue: "Healthy"
    - name: kafkas.kafka.strimzi.io
      readyCondition:
        type: "Ready"
        expectedStatus: "True"
  exclude:
    - name: kafkatopics.kafka.strimzi.io
    - name: servicemonitors.monitoring.coreos.com
```
### Configuration Fields
#### `enabled` (boolean)
- **Default**: `false`
- **Description**: Master switch to enable/disable the CRD analyzer
- **Example**: `enabled: true`
#### `include` (array)
- **Description**: List of CRDs with custom health check configurations
- **Fields**:
  - `name` (string, required): The full CRD name (e.g., `certificates.cert-manager.io`)
  - `statusPath` (string, optional): JSONPath to the status field to check (e.g., `.status.health.status`)
  - `readyCondition` (object, optional): Configuration for checking a Ready-style condition
    - `type` (string): The condition type to check (e.g., `"Ready"`)
    - `expectedStatus` (string): Expected status value (e.g., `"True"`)
  - `expectedValue` (string, optional): Expected value at the statusPath (requires `statusPath`)
#### `exclude` (array)
- **Description**: List of CRDs to skip during analysis
- **Fields**:
  - `name` (string): The full CRD name to exclude
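As a rough sketch, these fields map onto Go types that the analyzer unmarshals from the configuration file. The shapes below are an approximation mirroring the `common.CRDAnalyzerConfig`, `CRDIncludeConfig`, `CRDExcludeConfig`, and `CRDReadyCondition` types referenced by the analyzer; the `excluded` helper is illustrative, not the project's API:

```go
package main

import "fmt"

// Approximate shapes of the configuration types; the real definitions live in
// the project's common package. mapstructure tags mirror the YAML keys above.
type CRDReadyCondition struct {
	Type           string `mapstructure:"type"`
	ExpectedStatus string `mapstructure:"expectedStatus"`
}

type CRDIncludeConfig struct {
	Name           string             `mapstructure:"name"`
	StatusPath     string             `mapstructure:"statusPath"`
	ReadyCondition *CRDReadyCondition `mapstructure:"readyCondition"`
	ExpectedValue  string             `mapstructure:"expectedValue"`
}

type CRDExcludeConfig struct {
	Name string `mapstructure:"name"`
}

type CRDAnalyzerConfig struct {
	Enabled bool               `mapstructure:"enabled"`
	Include []CRDIncludeConfig `mapstructure:"include"`
	Exclude []CRDExcludeConfig `mapstructure:"exclude"`
}

// excluded reports whether a CRD name appears in the exclude list.
func excluded(cfg CRDAnalyzerConfig, crdName string) bool {
	for _, e := range cfg.Exclude {
		if e.Name == crdName {
			return true
		}
	}
	return false
}

func main() {
	cfg := CRDAnalyzerConfig{
		Enabled: true,
		Include: []CRDIncludeConfig{{
			Name:           "certificates.cert-manager.io",
			ReadyCondition: &CRDReadyCondition{Type: "Ready", ExpectedStatus: "True"},
		}},
		Exclude: []CRDExcludeConfig{{Name: "kafkatopics.kafka.strimzi.io"}},
	}
	// An excluded CRD is skipped even when the analyzer is enabled.
	fmt.Println(excluded(cfg, "kafkatopics.kafka.strimzi.io"))
}
```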
## Use Cases
### 1. cert-manager Certificate Analysis
Detect certificates that are not ready or have issuance failures:
```yaml
crd_analyzer:
  enabled: true
  include:
    - name: certificates.cert-manager.io
      readyCondition:
        type: "Ready"
        expectedStatus: "True"
```
**Detected Issues:**
- Certificates with `Ready=False`
- Certificate renewal failures
- Invalid certificate configurations
### 2. ArgoCD Application Health
Monitor ArgoCD application sync and health status:
```yaml
crd_analyzer:
  enabled: true
  include:
    - name: applications.argoproj.io
      statusPath: ".status.health.status"
      expectedValue: "Healthy"
```
**Detected Issues:**
- Applications in `Degraded` state
- Sync failures
- Missing resources
### 3. Kafka Operator Resources
Check Kafka cluster health with Strimzi operator:
```yaml
crd_analyzer:
  enabled: true
  include:
    - name: kafkas.kafka.strimzi.io
      readyCondition:
        type: "Ready"
        expectedStatus: "True"
  exclude:
    - name: kafkatopics.kafka.strimzi.io # Exclude topics to reduce noise
```
**Detected Issues:**
- Kafka clusters not ready
- Broker failures
- Configuration issues
### 4. Prometheus Operator
Monitor Prometheus instances:
```yaml
crd_analyzer:
  enabled: true
  include:
    - name: prometheuses.monitoring.coreos.com
      readyCondition:
        type: "Available"
        expectedStatus: "True"
```
**Detected Issues:**
- Prometheus instances not available
- Configuration reload failures
- Storage issues
## Generic Health Checks
When a CRD is not explicitly configured in the `include` list, the analyzer applies generic health checks:
### Supported Patterns
1. **status.conditions** - Standard Kubernetes conditions
   - Flags `Ready` conditions with status != `"True"`
   - Flags any condition type containing "failed" with status = `"True"`
2. **status.phase** - Phase-based resources
   - Flags resources with phase = `"Failed"` or `"Error"`
3. **status.health.status** - ArgoCD-style health
   - Flags resources with health status != `"Healthy"` (except `"Unknown"`)
4. **status.state** - State-based resources
   - Flags resources with state = `"Failed"` or `"Error"`
5. **Deletion with Finalizers** - Stuck resources
   - Flags resources with `deletionTimestamp` set but still having finalizers
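A minimal standalone sketch of the status-based checks (patterns 1-4), operating on a resource's `.status` as a plain nested map as decoded from JSON or YAML. This is an illustration only; the production implementation is `analyzeGenericHealth` in `pkg/analyzer/crd.go`, and pattern 5 is omitted here because it reads object metadata rather than status:

```go
package main

import (
	"fmt"
	"strings"
)

// genericFailures applies the generic status patterns described above and
// returns one message per detected problem.
func genericFailures(status map[string]interface{}) []string {
	var failures []string

	// 1. status.conditions: Ready != "True", or a *failed* condition that is "True"
	if conds, ok := status["conditions"].([]interface{}); ok {
		for _, c := range conds {
			cond, ok := c.(map[string]interface{})
			if !ok {
				continue
			}
			condType, _ := cond["type"].(string)
			condStatus, _ := cond["status"].(string)
			if condType == "Ready" && condStatus != "True" {
				failures = append(failures, fmt.Sprintf("Condition Ready is %s", condStatus))
			} else if strings.Contains(strings.ToLower(condType), "failed") && condStatus == "True" {
				failures = append(failures, fmt.Sprintf("Condition %s is True", condType))
			}
		}
	}

	// 2. status.phase: Failed/Error
	if phase, ok := status["phase"].(string); ok {
		if p := strings.ToLower(phase); p == "failed" || p == "error" {
			failures = append(failures, fmt.Sprintf("Resource phase is %s", phase))
		}
	}

	// 3. status.health.status: anything but Healthy/Unknown (ArgoCD style)
	if health, ok := status["health"].(map[string]interface{}); ok {
		if hs, ok := health["status"].(string); ok && hs != "" && hs != "Healthy" && hs != "Unknown" {
			failures = append(failures, fmt.Sprintf("Health status is %s", hs))
		}
	}

	// 4. status.state: Failed/Error
	if state, ok := status["state"].(string); ok {
		if s := strings.ToLower(state); s == "failed" || s == "error" {
			failures = append(failures, fmt.Sprintf("Resource state is %s", state))
		}
	}

	return failures
}

func main() {
	fmt.Println(genericFailures(map[string]interface{}{"phase": "Failed"}))
}
```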
## Running the Analyzer
### Enable in Configuration
Add the CRD analyzer to your active filters:
```bash
# Add CustomResource filter
k8sgpt filters add CustomResource

# List active filters to verify
k8sgpt filters list
```
### Run Analysis
```bash
# Basic analysis
k8sgpt analyze --explain

# With specific filter
k8sgpt analyze --explain --filter=CustomResource

# In a specific namespace
k8sgpt analyze --explain --filter=CustomResource --namespace=production
```
### Example Output
```
AI Provider: openai

0: CustomResource/Certificate(default/example-cert)
- Error: Condition Ready is False (reason: Failed): Certificate issuance failed
- Details: The certificate 'example-cert' in namespace 'default' failed to issue.
  The Let's Encrypt challenge validation failed due to DNS propagation issues.
  Recommendation: Check DNS records and retry certificate issuance.

1: CustomResource/Application(argocd/my-app)
- Error: Health status is Degraded
- Details: The ArgoCD application 'my-app' is in a Degraded state.
  This typically indicates that deployed resources are not healthy.
  Recommendation: Check application logs and pod status.
```
## Best Practices
### 1. Start with Generic Checks
Begin with just `enabled: true` to see what issues are detected across all CRDs.
### 2. Add Specific Configurations Gradually
Add custom configurations for critical CRDs that need specialized health checks.
### 3. Use Exclusions to Reduce Noise
Exclude CRDs that generate false positives or are less critical.
### 4. Combine with Other Analyzers
Use the CRD analyzer alongside built-in analyzers for comprehensive cluster observability.
### 5. Monitor Performance
If you have many CRDs, the analysis may take longer. Use exclusions to optimize.
## Troubleshooting
### Analyzer Not Running
- Verify `enabled: true` is set in configuration
- Check that `CustomResource` is in active filters: `k8sgpt filters list`
- Ensure configuration file is in the correct location
### No Issues Detected
- Verify CRDs are actually installed: `kubectl get crds`
- Check if custom resources exist: `kubectl get <crd-name> --all-namespaces`
- Review generic health check patterns - your CRDs may use different status fields
### Too Many False Positives
- Add specific configurations for problematic CRDs in the `include` section
- Use the `exclude` list to skip noisy CRDs
- Review the status patterns your CRDs use and configure accordingly
### Configuration Not Applied
- Restart K8sGPT after configuration changes
- Verify YAML syntax is correct
- Check K8sGPT logs for configuration parsing errors

View File

@@ -1,45 +0,0 @@
# Example K8sGPT Configuration with CRD Analyzer
# Place this file at ~/.config/k8sgpt/k8sgpt.yaml
# CRD Analyzer Configuration
crd_analyzer:
  enabled: true

  # Specific CRD configurations with custom health checks
  include:
    # cert-manager certificates
    - name: certificates.cert-manager.io
      readyCondition:
        type: "Ready"
        expectedStatus: "True"

    # ArgoCD applications
    - name: applications.argoproj.io
      statusPath: ".status.health.status"
      expectedValue: "Healthy"

    # Strimzi Kafka clusters
    - name: kafkas.kafka.strimzi.io
      readyCondition:
        type: "Ready"
        expectedStatus: "True"

    # Prometheus instances
    - name: prometheuses.monitoring.coreos.com
      readyCondition:
        type: "Available"
        expectedStatus: "True"

  # CRDs to skip during analysis
  exclude:
    - name: kafkatopics.kafka.strimzi.io
    - name: servicemonitors.monitoring.coreos.com
    - name: podmonitors.monitoring.coreos.com
    - name: prometheusrules.monitoring.coreos.com

# Other K8sGPT configuration...
# ai:
#   providers:
#     - name: openai
#       model: gpt-4
#       # ... other AI config

View File

@@ -20,11 +20,11 @@ type MockBedrockClient struct {
func (m *MockBedrockClient) GetInferenceProfile(ctx context.Context, params *bedrock.GetInferenceProfileInput, optFns ...func(*bedrock.Options)) (*bedrock.GetInferenceProfileOutput, error) {
	args := m.Called(ctx, params)
	if args.Get(0) == nil {
		return nil, args.Error(1)
	}
	return args.Get(0).(*bedrock.GetInferenceProfileOutput), args.Error(1)
}
@@ -35,11 +35,11 @@ type MockBedrockRuntimeClient struct {
func (m *MockBedrockRuntimeClient) InvokeModel(ctx context.Context, params *bedrockruntime.InvokeModelInput, optFns ...func(*bedrockruntime.Options)) (*bedrockruntime.InvokeModelOutput, error) {
	args := m.Called(ctx, params)
	if args.Get(0) == nil {
		return nil, args.Error(1)
	}
	return args.Get(0).(*bedrockruntime.InvokeModelOutput), args.Error(1)
}
@@ -59,21 +59,21 @@ func TestBedrockInferenceProfileARNWithMocks(t *testing.T) {
			},
		},
	}

	// Create a client with test models
	client := &AmazonBedRockClient{models: testModels}

	// Create mock clients
	mockMgmtClient := new(MockBedrockClient)
	mockRuntimeClient := new(MockBedrockRuntimeClient)

	// Inject mock clients into the AmazonBedRockClient
	client.mgmtClient = mockMgmtClient
	client.client = mockRuntimeClient

	// Test with a valid inference profile ARN
	inferenceProfileARN := "arn:aws:bedrock:us-east-1:123456789012:inference-profile/my-profile"

	// Setup mock response for GetInferenceProfile
	mockMgmtClient.On("GetInferenceProfile", mock.Anything, &bedrock.GetInferenceProfileInput{
		InferenceProfileIdentifier: aws.String("my-profile"),
@@ -84,20 +84,20 @@ func TestBedrockInferenceProfileARNWithMocks(t *testing.T) {
			},
		},
	}, nil)

	// Configure the client with the inference profile ARN
	config := AIProvider{
		Model:          inferenceProfileARN,
		ProviderRegion: "us-east-1",
	}

	// Test the Configure method with the inference profile ARN
	err := client.Configure(&config)

	// Verify that the configuration was successful
	assert.NoError(t, err)
	assert.Equal(t, inferenceProfileARN, client.model.Config.ModelName)

	// Verify that the mock was called
	mockMgmtClient.AssertExpectations(t)
}

View File

@@ -45,7 +45,7 @@ func (p *ViperConfigProvider) UnmarshalKey(key string, rawVal interface{}) error
// Default instances to be used
var (
	DefaultClientFactory  = &DefaultAIClientFactory{}
	DefaultConfigProvider = &ViperConfigProvider{}
)
@@ -84,4 +84,4 @@ func SetTestConfigProvider(provider ConfigProvider) {
func ResetTestImplementations() {
	testAIClientFactory = nil
	testConfigProvider = nil
}

View File

@@ -94,11 +94,11 @@ func (c *OpenAIClient) GetCompletion(ctx context.Context, prompt string) (string
				Content: prompt,
			},
		},
		Temperature:         c.temperature,
		MaxCompletionTokens: maxToken,
		PresencePenalty:     presencePenalty,
		FrequencyPenalty:    frequencyPenalty,
		TopP:                c.topP,
	})
if err != nil {
return "", err

View File

@@ -64,7 +64,6 @@ var additionalAnalyzerMap = map[string]common.IAnalyzer{
	"InstallPlan":    InstallPlanAnalyzer{},
	"CatalogSource":  CatalogSourceAnalyzer{},
	"OperatorGroup":  OperatorGroupAnalyzer{},
	"CustomResource": CRDAnalyzer{},
}
func ListFilters() ([]string, []string, []string) {

View File

@@ -17,6 +17,7 @@ import (
"fmt"
"github.com/k8sgpt-ai/k8sgpt/pkg/common"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
@@ -80,6 +81,17 @@ func (ConfigMapAnalyzer) Analyze(a common.Analyzer) ([]common.Result, error) {
	for _, cm := range configMaps.Items {
		var failures []common.Failure
		// Check if ConfigMap is dynamically loaded by sidecars
		if isKnownSidecarPattern(cm) {
			usedConfigMaps[cm.Name] = true
			continue
		}
		// Check if usage check should be skipped
		if shouldSkipUsageCheck(cm) {
			continue
		}
		// Check for unused ConfigMaps
		if !usedConfigMaps[cm.Name] {
			failures = append(failures, common.Failure{
@@ -123,3 +135,33 @@ func (ConfigMapAnalyzer) Analyze(a common.Analyzer) ([]common.Result, error) {
	return results, nil
}
// isKnownSidecarPattern detects ConfigMaps that are dynamically loaded by sidecar containers.
// These ConfigMaps are not directly referenced in Pod specs but are watched via the Kubernetes API.
func isKnownSidecarPattern(cm v1.ConfigMap) bool {
	// Common sidecar patterns
	knownLabels := []string{
		"grafana_dashboard",  // Grafana sidecar dashboard loader
		"grafana_datasource", // Grafana sidecar datasource loader
		"prometheus_rule",    // Prometheus operator rule loader
		"fluentd_config",     // Fluentd config reloader
	}
	for _, label := range knownLabels {
		if _, exists := cm.Labels[label]; exists {
			return true
		}
	}
	// User-defined marker for dynamically loaded ConfigMaps
	if cm.Labels["k8sgpt.ai/dynamically-loaded"] == "true" {
		return true
	}
	return false
}

// shouldSkipUsageCheck allows users to opt out of usage checking
func shouldSkipUsageCheck(cm v1.ConfigMap) bool {
	return cm.Annotations["k8sgpt.ai/skip-usage-check"] == "true"
}

View File

@@ -147,3 +147,134 @@ func TestConfigMapAnalyzer(t *testing.T) {
})
}
}
// TestConfigMapAnalyzer_SidecarPatterns tests known sidecar patterns and skip annotations
func TestConfigMapAnalyzer_SidecarPatterns(t *testing.T) {
	tests := []struct {
		name           string
		namespace      string
		configMaps     []v1.ConfigMap
		pods           []v1.Pod
		expectedErrors int
	}{
		{
			name:      "grafana dashboard configmap should not be flagged as unused",
			namespace: "monitoring",
			configMaps: []v1.ConfigMap{
				{
					ObjectMeta: metav1.ObjectMeta{
						Name:      "grafana-dashboard",
						Namespace: "monitoring",
						Labels: map[string]string{
							"grafana_dashboard": "1",
						},
					},
					Data: map[string]string{
						"dashboard.json": `{"title": "My Dashboard"}`,
					},
				},
			},
			pods:           []v1.Pod{},
			expectedErrors: 0,
		},
		{
			name:      "configmap with skip annotation should be ignored",
			namespace: "default",
			configMaps: []v1.ConfigMap{
				{
					ObjectMeta: metav1.ObjectMeta{
						Name:      "ignored-cm",
						Namespace: "default",
						Annotations: map[string]string{
							"k8sgpt.ai/skip-usage-check": "true",
						},
					},
					Data: map[string]string{
						"key": "value",
					},
				},
			},
			expectedErrors: 0,
		},
		{
			name:      "normal unused configmap should still be flagged",
			namespace: "default",
			configMaps: []v1.ConfigMap{
				{
					ObjectMeta: metav1.ObjectMeta{
						Name:      "unused-cm",
						Namespace: "default",
					},
					Data: map[string]string{
						"key": "value",
					},
				},
			},
			expectedErrors: 1,
		},
		{
			name:      "prometheus rule configmap should not be flagged",
			namespace: "monitoring",
			configMaps: []v1.ConfigMap{
				{
					ObjectMeta: metav1.ObjectMeta{
						Name:      "prometheus-rules",
						Namespace: "monitoring",
						Labels: map[string]string{
							"prometheus_rule": "1",
						},
					},
					Data: map[string]string{
						"rules.yaml": "groups: []",
					},
				},
			},
			expectedErrors: 0,
		},
		{
			name:      "custom dynamically-loaded label should work",
			namespace: "default",
			configMaps: []v1.ConfigMap{
				{
					ObjectMeta: metav1.ObjectMeta{
						Name:      "custom-sidecar-cm",
						Namespace: "default",
						Labels: map[string]string{
							"k8sgpt.ai/dynamically-loaded": "true",
						},
					},
					Data: map[string]string{
						"config": "value",
					},
				},
			},
			expectedErrors: 0,
		},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			client := fake.NewSimpleClientset()
			for _, cm := range tt.configMaps {
				_, err := client.CoreV1().ConfigMaps(tt.namespace).Create(context.TODO(), &cm, metav1.CreateOptions{})
				assert.NoError(t, err)
			}
			for _, pod := range tt.pods {
				_, err := client.CoreV1().Pods(tt.namespace).Create(context.TODO(), &pod, metav1.CreateOptions{})
				assert.NoError(t, err)
			}
			analyzer := ConfigMapAnalyzer{}
			results, err := analyzer.Analyze(common.Analyzer{
				Client:    &kubernetes.Client{Client: client},
				Context:   context.TODO(),
				Namespace: tt.namespace,
			})
			assert.NoError(t, err)
			assert.Equal(t, tt.expectedErrors, len(results), "Expected %d errors but got %d", tt.expectedErrors, len(results))
		})
	}
}

View File

@@ -1,330 +0,0 @@
/*
Copyright 2023 The K8sGPT Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package analyzer

import (
	"fmt"
	"strings"

	"github.com/k8sgpt-ai/k8sgpt/pkg/common"
	"github.com/spf13/viper"
	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextensionsclientset "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

type CRDAnalyzer struct{}

func (CRDAnalyzer) Analyze(a common.Analyzer) ([]common.Result, error) {
	// Load CRD analyzer configuration
	var config common.CRDAnalyzerConfig
	if err := viper.UnmarshalKey("crd_analyzer", &config); err != nil {
		// If no config or error, disable the analyzer
		return nil, nil
	}
	if !config.Enabled {
		return nil, nil
	}
	// Create apiextensions client to discover CRDs
	apiExtClient, err := apiextensionsclientset.NewForConfig(a.Client.GetConfig())
	if err != nil {
		return nil, fmt.Errorf("failed to create apiextensions client: %w", err)
	}
	// List all CRDs in the cluster
	crdList, err := apiExtClient.ApiextensionsV1().CustomResourceDefinitions().List(a.Context, metav1.ListOptions{})
	if err != nil {
		return nil, fmt.Errorf("failed to list CRDs: %w", err)
	}
	var results []common.Result
	// Process each CRD
	for _, crd := range crdList.Items {
		// Check if CRD should be excluded
		if shouldExcludeCRD(crd.Name, config.Exclude) {
			continue
		}
		// Get the CRD configuration (if specified)
		crdConfig := getCRDConfig(crd.Name, config.Include)
		// Analyze resources for this CRD
		crdResults, err := analyzeCRDResources(a, crd, crdConfig)
		if err != nil {
			// Log error but continue with other CRDs
			continue
		}
		results = append(results, crdResults...)
	}
	return results, nil
}
// shouldExcludeCRD checks if a CRD should be excluded from analysis
func shouldExcludeCRD(crdName string, excludeList []common.CRDExcludeConfig) bool {
	for _, exclude := range excludeList {
		if exclude.Name == crdName {
			return true
		}
	}
	return false
}

// getCRDConfig returns the configuration for a specific CRD if it exists
func getCRDConfig(crdName string, includeList []common.CRDIncludeConfig) *common.CRDIncludeConfig {
	for _, include := range includeList {
		if include.Name == crdName {
			return &include
		}
	}
	return nil
}
// analyzeCRDResources analyzes all instances of a CRD
func analyzeCRDResources(a common.Analyzer, crd apiextensionsv1.CustomResourceDefinition, config *common.CRDIncludeConfig) ([]common.Result, error) {
	if a.Client.GetDynamicClient() == nil {
		return nil, fmt.Errorf("dynamic client is nil")
	}
	// Get the preferred version (typically the storage version)
	var version string
	for _, v := range crd.Spec.Versions {
		if v.Storage {
			version = v.Name
			break
		}
	}
	if version == "" && len(crd.Spec.Versions) > 0 {
		version = crd.Spec.Versions[0].Name
	}
	// Construct GVR
	gvr := schema.GroupVersionResource{
		Group:    crd.Spec.Group,
		Version:  version,
		Resource: crd.Spec.Names.Plural,
	}
	// List resources
	var list *unstructured.UnstructuredList
	var err error
	if crd.Spec.Scope == apiextensionsv1.NamespaceScoped {
		if a.Namespace != "" {
			list, err = a.Client.GetDynamicClient().Resource(gvr).Namespace(a.Namespace).List(a.Context, metav1.ListOptions{LabelSelector: a.LabelSelector})
		} else {
			list, err = a.Client.GetDynamicClient().Resource(gvr).Namespace(metav1.NamespaceAll).List(a.Context, metav1.ListOptions{LabelSelector: a.LabelSelector})
		}
	} else {
		// Cluster-scoped
		list, err = a.Client.GetDynamicClient().Resource(gvr).List(a.Context, metav1.ListOptions{LabelSelector: a.LabelSelector})
	}
	if err != nil {
		return nil, err
	}
	var results []common.Result
	// Analyze each resource instance
	for _, item := range list.Items {
		failures := analyzeResource(item, crd, config)
		if len(failures) > 0 {
			resourceName := item.GetName()
			if item.GetNamespace() != "" {
				resourceName = item.GetNamespace() + "/" + resourceName
			}
			results = append(results, common.Result{
				Kind:  crd.Spec.Names.Kind,
				Name:  resourceName,
				Error: failures,
			})
		}
	}
	return results, nil
}
// analyzeResource analyzes a single CR instance for issues
func analyzeResource(item unstructured.Unstructured, crd apiextensionsv1.CustomResourceDefinition, config *common.CRDIncludeConfig) []common.Failure {
	var failures []common.Failure
	// Check for deletion with finalizers (resource stuck in deletion)
	if item.GetDeletionTimestamp() != nil && len(item.GetFinalizers()) > 0 {
		failures = append(failures, common.Failure{
			Text: fmt.Sprintf("Resource is being deleted but has finalizers: %v", item.GetFinalizers()),
		})
	}
	// If custom config is provided, use it
	if config != nil {
		configFailures := analyzeWithConfig(item, config)
		failures = append(failures, configFailures...)
		return failures
	}
	// Otherwise, use generic health checks based on common patterns
	genericFailures := analyzeGenericHealth(item)
	failures = append(failures, genericFailures...)
	return failures
}
// analyzeWithConfig analyzes a resource using custom configuration
func analyzeWithConfig(item unstructured.Unstructured, config *common.CRDIncludeConfig) []common.Failure {
	var failures []common.Failure
	// Check ReadyCondition if specified
	if config.ReadyCondition != nil {
		conditions, found, err := unstructured.NestedSlice(item.Object, "status", "conditions")
		if !found || err != nil {
			failures = append(failures, common.Failure{
				Text: "Expected status.conditions not found",
			})
			return failures
		}
		ready := false
		var conditionMessages []string
		for _, cond := range conditions {
			condMap, ok := cond.(map[string]interface{})
			if !ok {
				continue
			}
			condType, _, _ := unstructured.NestedString(condMap, "type")
			status, _, _ := unstructured.NestedString(condMap, "status")
			message, _, _ := unstructured.NestedString(condMap, "message")
			if condType == config.ReadyCondition.Type {
				if status == config.ReadyCondition.ExpectedStatus {
					ready = true
				} else {
					conditionMessages = append(conditionMessages, fmt.Sprintf("%s=%s: %s", condType, status, message))
				}
			}
		}
		if !ready {
			msg := fmt.Sprintf("Ready condition not met: expected %s=%s", config.ReadyCondition.Type, config.ReadyCondition.ExpectedStatus)
			if len(conditionMessages) > 0 {
				msg += "; " + strings.Join(conditionMessages, "; ")
			}
			failures = append(failures, common.Failure{
				Text: msg,
			})
		}
	}
	// Check ExpectedValue if specified and StatusPath provided
	if config.ExpectedValue != "" && config.StatusPath != "" {
		pathParts := strings.Split(config.StatusPath, ".")
		// Remove leading dot if present
		if len(pathParts) > 0 && pathParts[0] == "" {
			pathParts = pathParts[1:]
		}
		actualValue, found, err := unstructured.NestedString(item.Object, pathParts...)
		if !found || err != nil {
			failures = append(failures, common.Failure{
				Text: fmt.Sprintf("Expected field %s not found", config.StatusPath),
			})
		} else if actualValue != config.ExpectedValue {
			failures = append(failures, common.Failure{
				Text: fmt.Sprintf("Field %s has value '%s', expected '%s'", config.StatusPath, actualValue, config.ExpectedValue),
			})
		}
	}
	return failures
}
// analyzeGenericHealth applies generic health checks based on common Kubernetes patterns
func analyzeGenericHealth(item unstructured.Unstructured) []common.Failure {
	var failures []common.Failure
	// Check for status.conditions (common pattern)
	conditions, found, err := unstructured.NestedSlice(item.Object, "status", "conditions")
	if found && err == nil && len(conditions) > 0 {
		for _, cond := range conditions {
			condMap, ok := cond.(map[string]interface{})
			if !ok {
				continue
			}
			condType, _, _ := unstructured.NestedString(condMap, "type")
			status, _, _ := unstructured.NestedString(condMap, "status")
			reason, _, _ := unstructured.NestedString(condMap, "reason")
			message, _, _ := unstructured.NestedString(condMap, "message")
			// Check for common failure patterns
			if condType == "Ready" && status != "True" {
				msg := fmt.Sprintf("Condition Ready is %s", status)
				if reason != "" {
					msg += fmt.Sprintf(" (reason: %s)", reason)
				}
				if message != "" {
					msg += fmt.Sprintf(": %s", message)
				}
				failures = append(failures, common.Failure{Text: msg})
			} else if strings.Contains(strings.ToLower(condType), "failed") && status == "True" {
				msg := fmt.Sprintf("Condition %s is True", condType)
				if message != "" {
					msg += fmt.Sprintf(": %s", message)
				}
				failures = append(failures, common.Failure{Text: msg})
			}
		}
	}
	// Check for status.phase (common pattern)
	phase, found, _ := unstructured.NestedString(item.Object, "status", "phase")
	if found && phase != "" {
		lowerPhase := strings.ToLower(phase)
		if lowerPhase == "failed" || lowerPhase == "error" {
			failures = append(failures, common.Failure{
				Text: fmt.Sprintf("Resource phase is %s", phase),
			})
		}
	}
	// Check for status.health.status (ArgoCD pattern)
	healthStatus, found, _ := unstructured.NestedString(item.Object, "status", "health", "status")
	if found && healthStatus != "" {
		if healthStatus != "Healthy" && healthStatus != "Unknown" {
			failures = append(failures, common.Failure{
				Text: fmt.Sprintf("Health status is %s", healthStatus),
			})
		}
	}
	// Check for status.state (common pattern)
	state, found, _ := unstructured.NestedString(item.Object, "status", "state")
	if found && state != "" {
		lowerState := strings.ToLower(state)
		if lowerState == "failed" || lowerState == "error" {
			failures = append(failures, common.Failure{
				Text: fmt.Sprintf("Resource state is %s", state),
			})
		}
	}
	return failures
}

View File

@@ -1,410 +0,0 @@
/*
Copyright 2023 The K8sGPT Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package analyzer

import (
	"context"
	"strings"
	"testing"

	"github.com/k8sgpt-ai/k8sgpt/pkg/common"
	"github.com/k8sgpt-ai/k8sgpt/pkg/kubernetes"
	"github.com/spf13/viper"
	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/client-go/rest"
)
// TestCRDAnalyzer_Disabled tests that analyzer returns nil when disabled
func TestCRDAnalyzer_Disabled(t *testing.T) {
	viper.Reset()
	viper.Set("crd_analyzer", map[string]interface{}{
		"enabled": false,
	})
	a := common.Analyzer{
		Context: context.TODO(),
		Client:  &kubernetes.Client{},
	}
	res, err := (CRDAnalyzer{}).Analyze(a)
	if err != nil {
		t.Fatalf("Analyze error: %v", err)
	}
	if res != nil {
		t.Fatalf("expected nil result when disabled, got %d results", len(res))
	}
}

// TestCRDAnalyzer_NoConfig tests that analyzer returns nil when no config exists
func TestCRDAnalyzer_NoConfig(t *testing.T) {
	viper.Reset()
	a := common.Analyzer{
		Context: context.TODO(),
		Client:  &kubernetes.Client{},
	}
	res, err := (CRDAnalyzer{}).Analyze(a)
	if err != nil {
		t.Fatalf("Analyze error: %v", err)
	}
	if res != nil {
		t.Fatalf("expected nil result when no config, got %d results", len(res))
	}
}
// TestAnalyzeGenericHealth_ReadyConditionFalse tests detection of Ready=False condition
func TestAnalyzeGenericHealth_ReadyConditionFalse(t *testing.T) {
	item := unstructured.Unstructured{
		Object: map[string]interface{}{
			"apiVersion": "cert-manager.io/v1",
			"kind":       "Certificate",
			"metadata": map[string]interface{}{
				"name":      "example-cert",
				"namespace": "default",
			},
			"status": map[string]interface{}{
				"conditions": []interface{}{
					map[string]interface{}{
						"type":    "Ready",
						"status":  "False",
						"reason":  "Failed",
						"message": "Certificate issuance failed",
					},
				},
			},
		},
	}
	failures := analyzeGenericHealth(item)
	if len(failures) != 1 {
		t.Fatalf("expected 1 failure, got %d", len(failures))
	}
	if !strings.Contains(failures[0].Text, "Ready is False") {
		t.Errorf("expected 'Ready is False' in failure text, got: %s", failures[0].Text)
	}
	if !strings.Contains(failures[0].Text, "Failed") {
		t.Errorf("expected 'Failed' reason in failure text, got: %s", failures[0].Text)
	}
}
// TestAnalyzeGenericHealth_FailedPhase tests detection of Failed phase
func TestAnalyzeGenericHealth_FailedPhase(t *testing.T) {
	item := unstructured.Unstructured{
		Object: map[string]interface{}{
			"apiVersion": "example.io/v1",
			"kind":       "CustomJob",
			"metadata": map[string]interface{}{
				"name":      "failed-job",
				"namespace": "default",
			},
			"status": map[string]interface{}{
				"phase": "Failed",
			},
		},
	}
	failures := analyzeGenericHealth(item)
	if len(failures) != 1 {
		t.Fatalf("expected 1 failure, got %d", len(failures))
	}
	if !strings.Contains(failures[0].Text, "phase is Failed") {
		t.Errorf("expected 'phase is Failed' in failure text, got: %s", failures[0].Text)
	}
}
// TestAnalyzeGenericHealth_UnhealthyHealthStatus tests ArgoCD-style health status
func TestAnalyzeGenericHealth_UnhealthyHealthStatus(t *testing.T) {
	item := unstructured.Unstructured{
		Object: map[string]interface{}{
			"apiVersion": "argoproj.io/v1alpha1",
			"kind":       "Application",
			"metadata": map[string]interface{}{
				"name":      "my-app",
				"namespace": "argocd",
			},
			"status": map[string]interface{}{
				"health": map[string]interface{}{
					"status": "Degraded",
				},
			},
		},
	}
	failures := analyzeGenericHealth(item)
	if len(failures) != 1 {
		t.Fatalf("expected 1 failure, got %d", len(failures))
	}
	if !strings.Contains(failures[0].Text, "Health status is Degraded") {
		t.Errorf("expected 'Health status is Degraded' in failure text, got: %s", failures[0].Text)
	}
}
// TestAnalyzeGenericHealth_HealthyResource tests that healthy resources are not flagged
func TestAnalyzeGenericHealth_HealthyResource(t *testing.T) {
	item := unstructured.Unstructured{
		Object: map[string]interface{}{
			"apiVersion": "cert-manager.io/v1",
			"kind":       "Certificate",
			"metadata": map[string]interface{}{
				"name":      "healthy-cert",
				"namespace": "default",
			},
			"status": map[string]interface{}{
				"conditions": []interface{}{
					map[string]interface{}{
						"type":   "Ready",
						"status": "True",
					},
				},
			},
		},
	}
	failures := analyzeGenericHealth(item)
	if len(failures) != 0 {
		t.Fatalf("expected 0 failures for healthy resource, got %d", len(failures))
	}
}
// TestAnalyzeResource_DeletionWithFinalizers tests detection of stuck deletion
func TestAnalyzeResource_DeletionWithFinalizers(t *testing.T) {
	deletionTimestamp := metav1.Now()
	item := unstructured.Unstructured{
		Object: map[string]interface{}{
			"apiVersion": "example.io/v1",
			"kind":       "CustomResource",
			"metadata": map[string]interface{}{
				"name":              "stuck-resource",
				"namespace":         "default",
				"deletionTimestamp": deletionTimestamp.Format("2006-01-02T15:04:05Z"),
				"finalizers":        []interface{}{"example.io/finalizer"},
			},
		},
	}
	item.SetDeletionTimestamp(&deletionTimestamp)
	item.SetFinalizers([]string{"example.io/finalizer"})
	crd := apiextensionsv1.CustomResourceDefinition{}
	failures := analyzeResource(item, crd, nil)
	if len(failures) != 1 {
		t.Fatalf("expected 1 failure for stuck deletion, got %d", len(failures))
	}
	if !strings.Contains(failures[0].Text, "being deleted") {
		t.Errorf("expected 'being deleted' in failure text, got: %s", failures[0].Text)
	}
	if !strings.Contains(failures[0].Text, "finalizers") {
		t.Errorf("expected 'finalizers' in failure text, got: %s", failures[0].Text)
	}
}
// TestAnalyzeWithConfig_ReadyConditionCheck tests custom ready condition checking
func TestAnalyzeWithConfig_ReadyConditionCheck(t *testing.T) {
item := unstructured.Unstructured{
Object: map[string]interface{}{
"apiVersion": "cert-manager.io/v1",
"kind": "Certificate",
"metadata": map[string]interface{}{
"name": "test-cert",
"namespace": "default",
},
"status": map[string]interface{}{
"conditions": []interface{}{
map[string]interface{}{
"type": "Ready",
"status": "False",
"message": "Certificate not issued",
},
},
},
},
}
config := &common.CRDIncludeConfig{
ReadyCondition: &common.CRDReadyCondition{
Type: "Ready",
ExpectedStatus: "True",
},
}
failures := analyzeWithConfig(item, config)
if len(failures) != 1 {
t.Fatalf("expected 1 failure, got %d", len(failures))
}
if !strings.Contains(failures[0].Text, "Ready condition not met") {
t.Errorf("expected 'Ready condition not met' in failure text, got: %s", failures[0].Text)
}
}
// TestAnalyzeWithConfig_ExpectedValueCheck tests custom status path value checking
func TestAnalyzeWithConfig_ExpectedValueCheck(t *testing.T) {
item := unstructured.Unstructured{
Object: map[string]interface{}{
"apiVersion": "argoproj.io/v1alpha1",
"kind": "Application",
"metadata": map[string]interface{}{
"name": "my-app",
"namespace": "argocd",
},
"status": map[string]interface{}{
"health": map[string]interface{}{
"status": "Degraded",
},
},
},
}
config := &common.CRDIncludeConfig{
StatusPath: "status.health.status",
ExpectedValue: "Healthy",
}
failures := analyzeWithConfig(item, config)
if len(failures) != 1 {
t.Fatalf("expected 1 failure, got %d", len(failures))
}
if !strings.Contains(failures[0].Text, "Degraded") {
t.Errorf("expected 'Degraded' in failure text, got: %s", failures[0].Text)
}
if !strings.Contains(failures[0].Text, "expected 'Healthy'") {
t.Errorf("expected 'expected Healthy' in failure text, got: %s", failures[0].Text)
}
}
// TestShouldExcludeCRD tests exclusion logic
func TestShouldExcludeCRD(t *testing.T) {
excludeList := []common.CRDExcludeConfig{
{Name: "kafkatopics.kafka.strimzi.io"},
{Name: "prometheuses.monitoring.coreos.com"},
}
if !shouldExcludeCRD("kafkatopics.kafka.strimzi.io", excludeList) {
t.Error("expected kafkatopics to be excluded")
}
if shouldExcludeCRD("certificates.cert-manager.io", excludeList) {
t.Error("expected certificates not to be excluded")
}
}
// TestGetCRDConfig tests configuration retrieval
func TestGetCRDConfig(t *testing.T) {
includeList := []common.CRDIncludeConfig{
{
Name: "certificates.cert-manager.io",
StatusPath: "status.conditions",
ReadyCondition: &common.CRDReadyCondition{
Type: "Ready",
ExpectedStatus: "True",
},
},
}
config := getCRDConfig("certificates.cert-manager.io", includeList)
if config == nil {
t.Fatal("expected config to be found")
}
if config.StatusPath != "status.conditions" {
t.Errorf("expected StatusPath 'status.conditions', got %s", config.StatusPath)
}
config = getCRDConfig("nonexistent.crd.io", includeList)
if config != nil {
t.Error("expected nil config for non-existent CRD")
}
}
// TestAnalyzeGenericHealth_MultipleConditionTypes tests handling multiple condition types
func TestAnalyzeGenericHealth_MultipleConditionTypes(t *testing.T) {
item := unstructured.Unstructured{
Object: map[string]interface{}{
"apiVersion": "example.io/v1",
"kind": "CustomResource",
"metadata": map[string]interface{}{
"name": "multi-cond",
"namespace": "default",
},
"status": map[string]interface{}{
"conditions": []interface{}{
map[string]interface{}{
"type": "Available",
"status": "True",
},
map[string]interface{}{
"type": "Ready",
"status": "False",
"reason": "Pending",
"message": "Waiting for dependencies",
},
},
},
},
}
failures := analyzeGenericHealth(item)
if len(failures) != 1 {
t.Fatalf("expected 1 failure (Ready=False), got %d", len(failures))
}
if !strings.Contains(failures[0].Text, "Ready is False") {
t.Errorf("expected 'Ready is False' in failure text, got: %s", failures[0].Text)
}
}
// TestAnalyzeGenericHealth_NoStatusFields tests resource without any status fields
func TestAnalyzeGenericHealth_NoStatusFields(t *testing.T) {
item := unstructured.Unstructured{
Object: map[string]interface{}{
"apiVersion": "example.io/v1",
"kind": "CustomResource",
"metadata": map[string]interface{}{
"name": "no-status",
"namespace": "default",
},
},
}
failures := analyzeGenericHealth(item)
if len(failures) != 0 {
t.Fatalf("expected 0 failures for resource without status, got %d", len(failures))
}
}
// TestCRDAnalyzer_NilClientConfig tests that the analyzer handles errors gracefully
func TestCRDAnalyzer_NilClientConfig(t *testing.T) {
viper.Reset()
viper.Set("crd_analyzer", map[string]interface{}{
"enabled": true,
})
// Create a client with a config that will cause an error when trying to create apiextensions client
a := common.Analyzer{
Context: context.TODO(),
Client: &kubernetes.Client{Config: &rest.Config{}},
}
// The analyzer should handle the error gracefully without panicking
results, err := (CRDAnalyzer{}).Analyze(a)
// We expect either an error or no results, but no panic
if err != nil {
// Error is expected in this case - that's fine
if results != nil {
t.Errorf("Expected nil results when error occurs, got %v", results)
}
}
// The important thing is that we didn't panic
}
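The condition and health checks exercised by the tests above can be sketched as a small self-contained program. This is an illustrative reconstruction inferred from the test expectations, not the actual `analyzeGenericHealth` implementation: it operates on plain maps rather than `unstructured.Unstructured`, and the `Failure`/`genericHealth` names here are stand-ins.

```go
package main

import "fmt"

// Failure mirrors the shape asserted on in the tests above (sketch only).
type Failure struct{ Text string }

// genericHealth is a minimal sketch of the health logic the tests exercise:
// flag any status.conditions entry whose status is not "True", and an
// ArgoCD-style status.health.status that is not "Healthy".
func genericHealth(obj map[string]interface{}) []Failure {
	var failures []Failure
	status, ok := obj["status"].(map[string]interface{})
	if !ok {
		// No status fields at all: nothing to flag (see TestAnalyzeGenericHealth_NoStatusFields).
		return nil
	}
	if conds, ok := status["conditions"].([]interface{}); ok {
		for _, c := range conds {
			cond, ok := c.(map[string]interface{})
			if !ok {
				continue
			}
			if cond["status"] != "True" {
				failures = append(failures, Failure{
					Text: fmt.Sprintf("Condition %v is %v", cond["type"], cond["status"]),
				})
			}
		}
	}
	if health, ok := status["health"].(map[string]interface{}); ok {
		if s, _ := health["status"].(string); s != "" && s != "Healthy" {
			failures = append(failures, Failure{Text: "Health status is " + s})
		}
	}
	return failures
}

func main() {
	degraded := map[string]interface{}{
		"status": map[string]interface{}{
			"health": map[string]interface{}{"status": "Degraded"},
		},
	}
	got := genericHealth(degraded)
	fmt.Println(len(got), got[0].Text) // prints: 1 Health status is Degraded
}
```

As in TestAnalyzeGenericHealth_MultipleConditionTypes, a resource with `Available=True` and `Ready=False` yields exactly one failure under this sketch, since only non-"True" conditions are flagged.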



@@ -55,7 +55,7 @@ func (d DeploymentAnalyzer) Analyze(a common.Analyzer) ([]common.Result, error)
for _, deployment := range deployments.Items {
var failures []common.Failure
if *deployment.Spec.Replicas != deployment.Status.ReadyReplicas {
if deployment.Status.Replicas > *deployment.Spec.Replicas {
doc := apiDoc.GetApiDocV2("spec.replicas")
failures = append(failures, common.Failure{
@@ -88,7 +88,7 @@ func (d DeploymentAnalyzer) Analyze(a common.Analyzer) ([]common.Result, error)
Masked: util.MaskString(deployment.Name),
},
}})
}
}
if len(failures) > 0 {
preAnalysis[fmt.Sprintf("%s/%s", deployment.Namespace, deployment.Name)] = common.PreAnalysis{


@@ -1,60 +1,60 @@
package cache
import (
"os"
"testing"
"github.com/spf13/viper"
"github.com/stretchr/testify/require"
)
func TestNewReturnsExpectedCache(t *testing.T) {
require.IsType(t, &FileBasedCache{}, New("file"))
require.IsType(t, &AzureCache{}, New("azure"))
require.IsType(t, &GCSCache{}, New("gcs"))
require.IsType(t, &S3Cache{}, New("s3"))
require.IsType(t, &InterplexCache{}, New("interplex"))
// default fallback
require.IsType(t, &FileBasedCache{}, New("unknown"))
}
func TestNewCacheProvider_InterplexAndInvalid(t *testing.T) {
// valid: interplex
cp, err := NewCacheProvider("interplex", "", "", "localhost:1", "", "", "", false)
require.NoError(t, err)
require.Equal(t, "interplex", cp.CurrentCacheType)
require.Equal(t, "localhost:1", cp.Interplex.ConnectionString)
// invalid type
_, err = NewCacheProvider("not-a-type", "", "", "", "", "", "", false)
require.Error(t, err)
}
func TestAddRemoveRemoteCacheAndGet(t *testing.T) {
// isolate viper with temp config file
tmpFile, err := os.CreateTemp("", "k8sgpt-cache-config-*.yaml")
require.NoError(t, err)
defer func() {
_ = os.Remove(tmpFile.Name())
}()
viper.Reset()
viper.SetConfigFile(tmpFile.Name())
// add interplex remote cache
cp := CacheProvider{}
cp.CurrentCacheType = "interplex"
cp.Interplex.ConnectionString = "localhost:1"
require.NoError(t, AddRemoteCache(cp))
// read back via GetCacheConfiguration
c, err := GetCacheConfiguration()
require.NoError(t, err)
require.IsType(t, &InterplexCache{}, c)
// remove remote cache
require.NoError(t, RemoveRemoteCache())
// now default should be file-based
c2, err := GetCacheConfiguration()
require.NoError(t, err)
require.IsType(t, &FileBasedCache{}, c2)
}


@@ -1,77 +1,77 @@
package cache
import (
"os"
"path/filepath"
"testing"
"github.com/adrg/xdg"
"github.com/stretchr/testify/require"
)
// withTempCacheHome sets XDG_CACHE_HOME to a temp dir for test isolation.
func withTempCacheHome(t *testing.T) func() {
t.Helper()
tmp, err := os.MkdirTemp("", "k8sgpt-cache-test-*")
require.NoError(t, err)
old := os.Getenv("XDG_CACHE_HOME")
require.NoError(t, os.Setenv("XDG_CACHE_HOME", tmp))
return func() {
_ = os.Setenv("XDG_CACHE_HOME", old)
_ = os.RemoveAll(tmp)
}
}
func TestFileBasedCache_BasicOps(t *testing.T) {
cleanup := withTempCacheHome(t)
defer cleanup()
c := &FileBasedCache{}
// Configure should be a no-op
require.NoError(t, c.Configure(CacheProvider{}))
require.Equal(t, "file", c.GetName())
require.False(t, c.IsCacheDisabled())
c.DisableCache()
require.True(t, c.IsCacheDisabled())
key := "testkey"
data := "hello"
// Store
require.NoError(t, c.Store(key, data))
// Exists
require.True(t, c.Exists(key))
// Load
got, err := c.Load(key)
require.NoError(t, err)
require.Equal(t, data, got)
// List should include our key file
items, err := c.List()
require.NoError(t, err)
// ensure at least one item and that one matches our key
found := false
for _, it := range items {
if it.Name == key {
found = true
break
}
}
require.True(t, found)
// Remove
require.NoError(t, c.Remove(key))
require.False(t, c.Exists(key))
}
func TestFileBasedCache_PathShape(t *testing.T) {
cleanup := withTempCacheHome(t)
defer cleanup()
// Verify xdg.CacheFile path shape (directory and filename)
p, err := xdg.CacheFile(filepath.Join("k8sgpt", "abc"))
require.NoError(t, err)
require.Equal(t, "abc", filepath.Base(p))
require.Contains(t, p, "k8sgpt")
}


@@ -97,32 +97,6 @@ type Sensitive struct {
Masked string
}
// CRDAnalyzerConfig defines the configuration for the generic CRD analyzer
type CRDAnalyzerConfig struct {
Enabled bool `yaml:"enabled" json:"enabled"`
Include []CRDIncludeConfig `yaml:"include" json:"include"`
Exclude []CRDExcludeConfig `yaml:"exclude" json:"exclude"`
}
// CRDIncludeConfig defines configuration for a specific CRD to analyze
type CRDIncludeConfig struct {
Name string `yaml:"name" json:"name"`
StatusPath string `yaml:"statusPath" json:"statusPath"`
ReadyCondition *CRDReadyCondition `yaml:"readyCondition" json:"readyCondition"`
ExpectedValue string `yaml:"expectedValue" json:"expectedValue"`
}
// CRDReadyCondition defines the expected ready condition
type CRDReadyCondition struct {
Type string `yaml:"type" json:"type"`
ExpectedStatus string `yaml:"expectedStatus" json:"expectedStatus"`
}
// CRDExcludeConfig defines a CRD to exclude from analysis
type CRDExcludeConfig struct {
Name string `yaml:"name" json:"name"`
}
type (
SourceType string
AvailabilityMode string
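The yaml/json struct tags on the removed types above imply a configuration shape like the following. Values here are illustrative, taken from the tests in this PR (the `certificates.cert-manager.io` ready-condition check, the `applications.argoproj.io` health-status check, and the `kafkatopics.kafka.strimzi.io` exclusion):

```yaml
crd_analyzer:
  enabled: true
  include:
    - name: certificates.cert-manager.io
      statusPath: status.conditions
      readyCondition:
        type: Ready
        expectedStatus: "True"
    - name: applications.argoproj.io
      statusPath: status.health.status
      expectedValue: Healthy
  exclude:
    - name: kafkatopics.kafka.strimzi.io
```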


@@ -1,41 +1,41 @@
package custom
import (
"context"
"testing"
schemav1 "buf.build/gen/go/k8sgpt-ai/k8sgpt/protocolbuffers/go/schema/v1"
"github.com/stretchr/testify/require"
"google.golang.org/grpc"
)
// mockAnalyzerClient implements rpc.CustomAnalyzerServiceClient for testing
type mockAnalyzerClient struct {
resp *schemav1.RunResponse
err error
}
func (m *mockAnalyzerClient) Run(ctx context.Context, in *schemav1.RunRequest, opts ...grpc.CallOption) (*schemav1.RunResponse, error) {
return m.resp, m.err
}
func TestClientRunMapsResponse(t *testing.T) {
// prepare fake response
resp := &schemav1.RunResponse{
Result: &schemav1.Result{
Name: "AnalyzerA",
Kind: "Pod",
Details: "details",
ParentObject: "Deployment/foo",
},
}
cli := &Client{analyzerClient: &mockAnalyzerClient{resp: resp}}
got, err := cli.Run()
require.NoError(t, err)
require.Equal(t, "AnalyzerA", got.Name)
require.Equal(t, "Pod", got.Kind)
require.Equal(t, "details", got.Details)
require.Equal(t, "Deployment/foo", got.ParentObject)
require.Len(t, got.Error, 0)
}


@@ -251,10 +251,10 @@ func (s *K8sGptMCPServer) handleListNamespaces(ctx context.Context, request mcp.
// handleListEvents lists Kubernetes events
func (s *K8sGptMCPServer) handleListEvents(ctx context.Context, request mcp.CallToolRequest) (*mcp.CallToolResult, error) {
var req struct {
Namespace string `json:"namespace,omitempty"`
InvolvedObjectName string `json:"involvedObjectName,omitempty"`
InvolvedObjectKind string `json:"involvedObjectKind,omitempty"`
Limit int64 `json:"limit,omitempty"`
}
if err := request.BindArguments(&req); err != nil {
return mcp.NewToolResultErrorf("Failed to parse request arguments: %v", err), nil


@@ -307,4 +307,4 @@ func TestQuery_GetCompletionError(t *testing.T) {
mockAI.AssertExpectations(t)
mockFactory.AssertExpectations(t)
mockConfig.AssertExpectations(t)
}


@@ -55,10 +55,10 @@ type Config struct {
QueryHandler *query.Handler
Logger *zap.Logger
// Filters can be injected into the server to limit analysis to specific analyzers
Filters []string
metricsServer *http.Server
listener net.Listener
EnableHttp bool
}
type Health struct {