kubeshark/cli/kubernetes/provider.go
M. Mert Yıldıran 366c1d0c6c Refactor Mizu, define an extension API and add new protocols: AMQP, Kafka (#224)
* Separate HTTP related code into `extensions/http` as a Go plugin

* Move `extensions` folder into `tap` folder

* Move HTTP files into `tap/extensions/lib` for now

* Replace `orcaman/concurrent-map` with `sync.Map`

* Remove `grpc_assembler.go`

* Remove `github.com/up9inc/mizu/tap/extensions/http/lib`

* Add a build script to automatically build extensions from a known path and load them

* Start to define the extension API

* Implement the `run()` function for the TCP stream

* Add support of defining multiple ports to the extension API

* Set the extension name inside the extension

* Declare the `Dissect` function in the extension API

* Dissect HTTP request from inside the HTTP extension

* Make the distinction of outbound and inbound ports

* Dissect HTTP response from inside the HTTP extension

* Bring back the HTTP request-response pair matcher

* Return a `*api.RequestResponsePair` from the dissection

* Bring back the gRPC-HTTP/2 parser

* Fix the issues in `handleHTTP1ClientStream` and `handleHTTP1ServerStream`

* Call a function pointer to emit dissected data back to the `tap` package

* Roee's changes - trying to fix the agent to work with the "api" object - ***still not working***

* Fix a small mistake made while resolving the conflicts

* Fix the issues that are introduced by the merge conflict

* Add `Emitter` interface to the API and send `OutputChannelItem`(s) to `OutputChannel`

* Fix the `HTTP1` handlers

* Set `ConnectionInfo` in HTTP handlers

* Fix the `Dockerfile` to build the extensions

* Remove some unwanted code

* no message

* Re-enable `getStreamProps` function

* Migrate back from `gopacket/tcpassembly` to `gopacket/reassembly`

* Introduce `HTTPPayload` struct and `HTTPPayloader` interface to `MarshalJSON()` all the data structures that are returned by the HTTP protocol

* Read `socketHarOutChannel` instead of `filteredHarChannel`

* Connect `OutputChannelItem` to the last WebSocket, which means the web UI finally started to work again

* Add `.env.example` to React app

* Marshal and unmarshal `*http.Request`, `*http.Response` pairs

* Move `loadExtensions` into `main.go` and map extensions into `extensionsMap`

* Add `Summarize()` method to the `Dissector` interface

* Add `Analyze` method to the `Dissector` interface and `MizuEntry` to the extension API

* Add `Protocol` struct and make it effect the UI

* Refactor `BaseEntryDetails` struct and display the source and destination ports in the UI

* Display the protocol name inside the details layout

* Add `Represent` method to the `Dissector` interface and manipulate the UI through this method

* Make the protocol color affect the details layout color and write protocol abbreviation vertically

* Remove everything HTTP related from the `tap` package and make the extension system fully functional

* Fix the TypeScript warnings

* Bring the AMQP-related files into the `amqp` directory

* Add `--nodefrag` flag to the tapper and bring in the main AMQP code

* Implement the AMQP `BasicPublish` and fix some issues in the UI when the response payload is missing

* Implement `representBasicPublish` method

* Fix several minor issues

* Implement the AMQP `BasicDeliver`

* Implement the AMQP `QueueDeclare`

* Implement the AMQP `ExchangeDeclare`

* Implement the AMQP `ConnectionStart`

* Implement the AMQP `ConnectionClose`

* Implement the AMQP `QueueBind`

* Implement the AMQP `BasicConsume`

* Fix an issue in `ConnectionStart`

* Fix a linter error

* Bring the Kafka-related files into the `kafka` directory

* Fix the build errors in Kafka Go files

* Implement `Dissect` method of Kafka and adapt request-response pair matcher to asynchronous client-server stream

* Do the "Is reversed?" check inside `getStreamProps` and fix an issue in the Kafka `Dissect` method

* Implement `Analyze`, `Summarize` methods of Kafka

* Implement the representations for Kafka `Metadata`, `RequestHeader` and `ResponseHeader`

* Refactor the AMQP and Kafka implementations to create the summary string only inside the `Analyze` method

* Implement the representations for Kafka `ApiVersions`

* Implement the representations for Kafka `Produce`

* Implement the representations for Kafka `Fetch`

* Implement the representations for Kafka `ListOffsets`, `CreateTopics` and `DeleteTopics`

* Fix the encoding of AMQP `BasicPublish` and `BasicDeliver` body

* Remove the unnecessary logging

* Remove more logging

* Introduce `Version` field to `Protocol` struct for dynamically switching the HTTP protocol to HTTP/2

* Fix the issues in analysis and representation of HTTP/2 (gRPC) protocol

* Fix the issues in summary section of details layout for HTTP/2 (gRPC) protocol

* Fix the read errors that freeze the sniffer in HTTP and Kafka

* Fix the issues in HTTP POST data

* Fix one more issue in HTTP POST data

* Fix an infinite loop in Kafka

* Fix another freezing issue in Kafka

* Revert "UI Infra - Support multiple entry types + refactoring (#211)"

This reverts commit f74a52d4dc.

* Fix more issues that are introduced by the merge

* Fix the status code in the summary section

* Add the cleaner again (why was it removed?) and add a TODO on the extension loop

* Fix the Dockerfile (remove the deletion of the .env file) - it is listed in .dockerignore and the build fails if the file does not exist

* Fix GetEntrties (the "/entries" endpoint) - work with `tapApi.BaseEntryDetail` (moved from shared)

* Fix an issue in the UI summary section

* Refactor the protocol payload structs

* Fix a log message in the passive tapper

* Adapt the `APP_PORTS` environment variable to the new extension system and change its format to `APP_PORTS='{"http": ["8001"]}'`

* Revert "fix dockerfile (remove deleting .env file) - it is found in dockerignore and fails to build if the file not exists"

This reverts commit 4f514ae1f4.

* Bring in the necessary changes from f74a52d4dc

* Open the API server URL in the web browser as soon as Mizu is ready

* Make the TCP reader consist of a single goroutine (instead of two) and try to dissect in both client and server mode by rewinding

* Swap `TcpID` without overwriting it

* Sort extensions by priority

* Try to dissect by looping through all the extensions

* Fix the `getStreamProps` function (it should be passed from the CLI as it was before)

* Turn the TCP reader back into two goroutines (client and server)

* Fix a typo

* Learn `isClient` from the TCP stream

* Set `viewer` style `overflow: "auto"`

* Fix the memory leaks in AMQP and Kafka dissectors

* Revert some of the changes in be7c65eb6d

* Remove `allExtensionPorts` since it's no longer needed

* Remove `APP_PORTS` since it's no longer needed

* Fix all of the minor issues in the React code

* Check Kafka header size and fail-fast

* Break the dissectors loop upon a successful dissection

* Don't break the dissector loop. Protocols might collide

* Improve the HTTP request-response counter (still not perfect)

* Make the HTTP request-response counter perfect

* Revert "Revert some of the changes in be7c65eb6d3fb657a059707da3ca559937e59739"

This reverts commit 08e7d786d8.

* Bring back `filterItems` and `isHealthCheckByUserAgent` functions

* Remove some development artifacts

* Remove unused and commented-out lines that are no longer relevant

* Fix the performance in TCP stream factory. Make it create two `tcpReader`(s) per extension

* Change a log to debug

* Make `*api.CounterPair` a field of `tcpReader`

* Set `isTapTarget` to always `true` again since `filterAuthorities` implementation has problems

* Remove a variable that's only used for logging, even though it was not introduced by this branch

* Bring back the `NumberOfRules` field of `ApplicableRules` struct

* Remove the unused `NewEntry` function

* Move `k8sResolver == nil` check to a more appropriate place

* Make the default `healthChecksUserAgentHeaders` an empty array (matching the default config value)

* Remove a spammy console.log

* Fix the Rules button crashing the app (the service was accessed via an incorrect property)

* Ignore all .env* files in the Docker build

* Improve caching in the Dockerfile: copy only go.mod before `go mod download`

* Check for errors while loading an extension

* Add a comment about why `Protocol` is not a pointer

* Bring back the call to `deleteOlderThan`

* Remove the `nil` check

* Reduce the maximum allowed AMQP message from 128MB to 1MB

* Fix an error that only occurs when a Kafka broker is initiating

* Revert the change in b2abd7b990

* Fix the service name resolution in all protocols

* Remove the `anydirection` flag and fix the issue in `filterAuthorities`

* Pass `sync.Map` by reference to `deleteOlderThan` method

* Fix the packet capture issue in standalone mode that's introduced by the removal of `anydirection`

* Temporarily resolve the memory exhaustion in AMQP

* Fix a nil pointer dereference error

* Fix the CLI build error

* Fix a memory leak that's identified by `pprof`

Co-authored-by: Roee Gadot <roee.gadot@up9.com>
Co-authored-by: Nimrod Gilboa Markevich <nimrod@up9.com>
2021-09-02 14:34:06 +03:00

782 lines
27 KiB
Go

package kubernetes
import (
"bytes"
"context"
"encoding/json"
"errors"
"fmt"
"io"
"path/filepath"
"regexp"
"strconv"

"github.com/up9inc/mizu/cli/config/configStructs"
"github.com/up9inc/mizu/cli/logger"
"github.com/up9inc/mizu/cli/mizu"
"github.com/up9inc/mizu/shared"

core "k8s.io/api/core/v1"
rbac "k8s.io/api/rbac/v1"
k8serrors "k8s.io/apimachinery/pkg/api/errors"
resource "k8s.io/apimachinery/pkg/api/resource"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/util/intstr"
"k8s.io/apimachinery/pkg/watch"
applyconfapp "k8s.io/client-go/applyconfigurations/apps/v1"
applyconfcore "k8s.io/client-go/applyconfigurations/core/v1"
applyconfmeta "k8s.io/client-go/applyconfigurations/meta/v1"
"k8s.io/client-go/kubernetes"
// blank imports register the cloud-provider auth plugins (side effects only)
_ "k8s.io/client-go/plugin/pkg/client/auth/azure"
_ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
_ "k8s.io/client-go/plugin/pkg/client/auth/oidc"
_ "k8s.io/client-go/plugin/pkg/client/auth/openstack"
restclient "k8s.io/client-go/rest"
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/tools/clientcmd"
_ "k8s.io/client-go/tools/portforward"
watchtools "k8s.io/client-go/tools/watch"
)
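// Provider wraps a Kubernetes clientset together with the client configuration it was created from and exposes the cluster operations used by the Mizu CLI.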
type Provider struct {
clientSet *kubernetes.Clientset
kubernetesConfig clientcmd.ClientConfig
clientConfig restclient.Config
Namespace string
}
const (
fieldManagerName = "mizu-manager"
)
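// NewProvider loads the kube config at kubeConfigPath, builds a clientset from it and returns a Provider, wrapping configuration errors with a hint about the kube-config-path config field.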
func NewProvider(kubeConfigPath string) (*Provider, error) {
kubernetesConfig := loadKubernetesConfiguration(kubeConfigPath)
restClientConfig, err := kubernetesConfig.ClientConfig()
if err != nil {
if clientcmd.IsEmptyConfig(err) {
return nil, fmt.Errorf("couldn't find the kube config file, or file is empty (%s)\n"+
"you can set alternative kube config file path by adding the kube-config-path field to the mizu config file, err: %w", kubeConfigPath, err)
}
if clientcmd.IsConfigurationInvalid(err) {
return nil, fmt.Errorf("invalid kube config file (%s)\n"+
"you can set alternative kube config file path by adding the kube-config-path field to the mizu config file, err: %w", kubeConfigPath, err)
}
return nil, fmt.Errorf("error while using kube config (%s)\n"+
"you can set alternative kube config file path by adding the kube-config-path field to the mizu config file, err: %w", kubeConfigPath, err)
}
clientSet, err := getClientSet(restClientConfig)
if err != nil {
return nil, fmt.Errorf("error while using kube config (%s)\n"+
"you can set alternative kube config file path by adding the kube-config-path field to the mizu config file, err: %w", kubeConfigPath, err)
}
return &Provider{
clientSet: clientSet,
kubernetesConfig: kubernetesConfig,
clientConfig: *restClientConfig,
}, nil
}
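// CurrentNamespace returns the namespace selected by the current kube config context.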
func (provider *Provider) CurrentNamespace() string {
ns, _, _ := provider.kubernetesConfig.Namespace()
return ns
}
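// WaitUtilNamespaceDeleted blocks until the named namespace no longer exists or the context is cancelled, using a field-selector list/watch on the namespace.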
func (provider *Provider) WaitUtilNamespaceDeleted(ctx context.Context, name string) error {
fieldSelector := fmt.Sprintf("metadata.name=%s", name)
var limit int64 = 1
lw := &cache.ListWatch{
ListFunc: func(options metav1.ListOptions) (runtime.Object, error) {
options.FieldSelector = fieldSelector
options.Limit = limit
return provider.clientSet.CoreV1().Namespaces().List(ctx, options)
},
WatchFunc: func(options metav1.ListOptions) (watch.Interface, error) {
options.FieldSelector = fieldSelector
options.Limit = limit
return provider.clientSet.CoreV1().Namespaces().Watch(ctx, options)
},
}
var preconditionFunc watchtools.PreconditionFunc = func(store cache.Store) (bool, error) {
_, exists, err := store.Get(&core.Namespace{ObjectMeta: metav1.ObjectMeta{Name: name}})
if err != nil {
return false, err
}
if exists {
return false, nil
}
return true, nil
}
conditionFunc := func(e watch.Event) (bool, error) {
if e.Type == watch.Deleted {
return true, nil
}
return false, nil
}
obj := &core.Namespace{}
_, err := watchtools.UntilWithSync(ctx, lw, obj, preconditionFunc, conditionFunc)
return err
}
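// GetPodWatcher returns a watcher over the pods in the given namespace. Note that it panics if the watch cannot be established.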
func (provider *Provider) GetPodWatcher(ctx context.Context, namespace string) watch.Interface {
watcher, err := provider.clientSet.CoreV1().Pods(namespace).Watch(ctx, metav1.ListOptions{Watch: true})
if err != nil {
panic(err.Error())
}
return watcher
}
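// CreateNamespace creates a namespace with the given name.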
func (provider *Provider) CreateNamespace(ctx context.Context, name string) (*core.Namespace, error) {
namespaceSpec := &core.Namespace{
ObjectMeta: metav1.ObjectMeta{
Name: name,
},
}
return provider.clientSet.CoreV1().Namespaces().Create(ctx, namespaceSpec, metav1.CreateOptions{})
}
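// ApiServerOptions groups the parameters used to create the Mizu API server pod.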
type ApiServerOptions struct {
Namespace string
PodName string
PodImage string
ServiceAccountName string
IsNamespaceRestricted bool
MizuApiFilteringOptions *shared.TrafficFilteringOptions
MaxEntriesDBSizeBytes int64
Resources configStructs.Resources
ImagePullPolicy core.PullPolicy
}
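// CreateMizuApiServerPod creates the API server pod: it mounts the rule-policy ConfigMap, passes the filtering options and the entries DB size limit via environment variables, and applies the configured resource requests and limits.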
func (provider *Provider) CreateMizuApiServerPod(ctx context.Context, opts *ApiServerOptions) (*core.Pod, error) {
marshaledFilteringOptions, err := json.Marshal(opts.MizuApiFilteringOptions)
if err != nil {
return nil, err
}
configMapVolumeName := &core.ConfigMapVolumeSource{}
configMapVolumeName.Name = mizu.ConfigMapName
configMapOptional := true
configMapVolumeName.Optional = &configMapOptional
cpuLimit, err := resource.ParseQuantity(opts.Resources.CpuLimit)
if err != nil {
return nil, fmt.Errorf("invalid cpu limit for %s container", opts.PodName)
}
memLimit, err := resource.ParseQuantity(opts.Resources.MemoryLimit)
if err != nil {
return nil, fmt.Errorf("invalid memory limit for %s container", opts.PodName)
}
cpuRequests, err := resource.ParseQuantity(opts.Resources.CpuRequests)
if err != nil {
return nil, fmt.Errorf("invalid cpu request for %s container", opts.PodName)
}
memRequests, err := resource.ParseQuantity(opts.Resources.MemoryRequests)
if err != nil {
return nil, fmt.Errorf("invalid memory request for %s container", opts.PodName)
}
command := []string{"./mizuagent", "--api-server"}
if opts.IsNamespaceRestricted {
command = append(command, "--namespace", opts.Namespace)
}
pod := &core.Pod{
ObjectMeta: metav1.ObjectMeta{
Name: opts.PodName,
Namespace: opts.Namespace,
Labels: map[string]string{"app": opts.PodName},
},
Spec: core.PodSpec{
Containers: []core.Container{
{
Name: opts.PodName,
Image: opts.PodImage,
ImagePullPolicy: opts.ImagePullPolicy,
VolumeMounts: []core.VolumeMount{
{
Name: mizu.ConfigMapName,
MountPath: shared.RulePolicyPath,
},
},
Command: command,
Env: []core.EnvVar{
{
Name: shared.HostModeEnvVar,
Value: "1",
},
{
Name: shared.MizuFilteringOptionsEnvVar,
Value: string(marshaledFilteringOptions),
},
{
Name: shared.MaxEntriesDBSizeBytesEnvVar,
Value: strconv.FormatInt(opts.MaxEntriesDBSizeBytes, 10),
},
},
Resources: core.ResourceRequirements{
Limits: core.ResourceList{
"cpu": cpuLimit,
"memory": memLimit,
},
Requests: core.ResourceList{
"cpu": cpuRequests,
"memory": memRequests,
},
},
},
},
Volumes: []core.Volume{
{
Name: mizu.ConfigMapName,
VolumeSource: core.VolumeSource{
ConfigMap: configMapVolumeName,
},
},
},
DNSPolicy: core.DNSClusterFirstWithHostNet,
TerminationGracePeriodSeconds: new(int64),
},
}
//define the service account only when it exists to prevent pod crash
if opts.ServiceAccountName != "" {
pod.Spec.ServiceAccountName = opts.ServiceAccountName
}
return provider.clientSet.CoreV1().Pods(opts.Namespace).Create(ctx, pod, metav1.CreateOptions{})
}
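// CreateService creates a ClusterIP service that exposes port 80 and targets port 8899 on pods matching the given app label.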
func (provider *Provider) CreateService(ctx context.Context, namespace string, serviceName string, appLabelValue string) (*core.Service, error) {
service := core.Service{
ObjectMeta: metav1.ObjectMeta{
Name: serviceName,
Namespace: namespace,
},
Spec: core.ServiceSpec{
Ports: []core.ServicePort{{TargetPort: intstr.FromInt(8899), Port: 80}},
Type: core.ServiceTypeClusterIP,
Selector: map[string]string{"app": appLabelValue},
},
}
return provider.clientSet.CoreV1().Services(namespace).Create(ctx, &service, metav1.CreateOptions{})
}
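// The Does*Exist helpers below check whether a specific resource exists; they all delegate error classification to doesResourceExist.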
func (provider *Provider) DoesServiceAccountExist(ctx context.Context, namespace string, serviceAccountName string) (bool, error) {
serviceAccount, err := provider.clientSet.CoreV1().ServiceAccounts(namespace).Get(ctx, serviceAccountName, metav1.GetOptions{})
return provider.doesResourceExist(serviceAccount, err)
}
func (provider *Provider) DoesConfigMapExist(ctx context.Context, namespace string, name string) (bool, error) {
resource, err := provider.clientSet.CoreV1().ConfigMaps(namespace).Get(ctx, name, metav1.GetOptions{})
return provider.doesResourceExist(resource, err)
}
func (provider *Provider) DoesServicesExist(ctx context.Context, namespace string, name string) (bool, error) {
resource, err := provider.clientSet.CoreV1().Services(namespace).Get(ctx, name, metav1.GetOptions{})
return provider.doesResourceExist(resource, err)
}
func (provider *Provider) DoesNamespaceExist(ctx context.Context, name string) (bool, error) {
resource, err := provider.clientSet.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{})
return provider.doesResourceExist(resource, err)
}
func (provider *Provider) DoesClusterRoleExist(ctx context.Context, name string) (bool, error) {
resource, err := provider.clientSet.RbacV1().ClusterRoles().Get(ctx, name, metav1.GetOptions{})
return provider.doesResourceExist(resource, err)
}
func (provider *Provider) DoesClusterRoleBindingExist(ctx context.Context, name string) (bool, error) {
resource, err := provider.clientSet.RbacV1().ClusterRoleBindings().Get(ctx, name, metav1.GetOptions{})
return provider.doesResourceExist(resource, err)
}
func (provider *Provider) DoesRoleExist(ctx context.Context, namespace string, name string) (bool, error) {
resource, err := provider.clientSet.RbacV1().Roles(namespace).Get(ctx, name, metav1.GetOptions{})
return provider.doesResourceExist(resource, err)
}
func (provider *Provider) DoesRoleBindingExist(ctx context.Context, namespace string, name string) (bool, error) {
resource, err := provider.clientSet.RbacV1().RoleBindings(namespace).Get(ctx, name, metav1.GetOptions{})
return provider.doesResourceExist(resource, err)
}
func (provider *Provider) DoesPodExist(ctx context.Context, namespace string, name string) (bool, error) {
resource, err := provider.clientSet.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
return provider.doesResourceExist(resource, err)
}
func (provider *Provider) DoesDaemonSetExist(ctx context.Context, namespace string, name string) (bool, error) {
resource, err := provider.clientSet.AppsV1().DaemonSets(namespace).Get(ctx, name, metav1.GetOptions{})
return provider.doesResourceExist(resource, err)
}
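// doesResourceExist treats a NotFound status error as a negative result rather than a failure and reports any other error as-is.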
func (provider *Provider) doesResourceExist(resource interface{}, err error) (bool, error) {
var statusError *k8serrors.StatusError
if errors.As(err, &statusError) {
// expected behavior when resource does not exist
if statusError.ErrStatus.Reason == metav1.StatusReasonNotFound {
return false, nil
}
}
if err != nil {
return false, err
}
return resource != nil, nil
}
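// CreateMizuRBAC creates a ServiceAccount, a ClusterRole and a ClusterRoleBinding that allow listing, getting and watching pods, services and endpoints cluster-wide. AlreadyExists errors are ignored.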
func (provider *Provider) CreateMizuRBAC(ctx context.Context, namespace string, serviceAccountName string, clusterRoleName string, clusterRoleBindingName string, version string) error {
serviceAccount := &core.ServiceAccount{
ObjectMeta: metav1.ObjectMeta{
Name: serviceAccountName,
Namespace: namespace,
Labels: map[string]string{"mizu-cli-version": version},
},
}
clusterRole := &rbac.ClusterRole{
ObjectMeta: metav1.ObjectMeta{
Name: clusterRoleName,
Labels: map[string]string{"mizu-cli-version": version},
},
Rules: []rbac.PolicyRule{
{
APIGroups: []string{"", "extensions", "apps"},
Resources: []string{"pods", "services", "endpoints"},
Verbs: []string{"list", "get", "watch"},
},
},
}
clusterRoleBinding := &rbac.ClusterRoleBinding{
ObjectMeta: metav1.ObjectMeta{
Name: clusterRoleBindingName,
Labels: map[string]string{"mizu-cli-version": version},
},
RoleRef: rbac.RoleRef{
Name: clusterRoleName,
Kind: "ClusterRole",
APIGroup: "rbac.authorization.k8s.io",
},
Subjects: []rbac.Subject{
{
Kind: "ServiceAccount",
Name: serviceAccountName,
Namespace: namespace,
},
},
}
_, err := provider.clientSet.CoreV1().ServiceAccounts(namespace).Create(ctx, serviceAccount, metav1.CreateOptions{})
if err != nil && !k8serrors.IsAlreadyExists(err) {
return err
}
_, err = provider.clientSet.RbacV1().ClusterRoles().Create(ctx, clusterRole, metav1.CreateOptions{})
if err != nil && !k8serrors.IsAlreadyExists(err) {
return err
}
_, err = provider.clientSet.RbacV1().ClusterRoleBindings().Create(ctx, clusterRoleBinding, metav1.CreateOptions{})
if err != nil && !k8serrors.IsAlreadyExists(err) {
return err
}
return nil
}
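// CreateMizuRBACNamespaceRestricted is the namespace-scoped variant of CreateMizuRBAC, creating a Role and a RoleBinding instead of their cluster-wide counterparts.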
func (provider *Provider) CreateMizuRBACNamespaceRestricted(ctx context.Context, namespace string, serviceAccountName string, roleName string, roleBindingName string, version string) error {
serviceAccount := &core.ServiceAccount{
ObjectMeta: metav1.ObjectMeta{
Name: serviceAccountName,
Namespace: namespace,
Labels: map[string]string{"mizu-cli-version": version},
},
}
role := &rbac.Role{
ObjectMeta: metav1.ObjectMeta{
Name: roleName,
Labels: map[string]string{"mizu-cli-version": version},
},
Rules: []rbac.PolicyRule{
{
APIGroups: []string{"", "extensions", "apps"},
Resources: []string{"pods", "services", "endpoints"},
Verbs: []string{"list", "get", "watch"},
},
},
}
roleBinding := &rbac.RoleBinding{
ObjectMeta: metav1.ObjectMeta{
Name: roleBindingName,
Labels: map[string]string{"mizu-cli-version": version},
},
RoleRef: rbac.RoleRef{
Name: roleName,
Kind: "Role",
APIGroup: "rbac.authorization.k8s.io",
},
Subjects: []rbac.Subject{
{
Kind: "ServiceAccount",
Name: serviceAccountName,
Namespace: namespace,
},
},
}
_, err := provider.clientSet.CoreV1().ServiceAccounts(namespace).Create(ctx, serviceAccount, metav1.CreateOptions{})
if err != nil && !k8serrors.IsAlreadyExists(err) {
return err
}
_, err = provider.clientSet.RbacV1().Roles(namespace).Create(ctx, role, metav1.CreateOptions{})
if err != nil && !k8serrors.IsAlreadyExists(err) {
return err
}
_, err = provider.clientSet.RbacV1().RoleBindings(namespace).Create(ctx, roleBinding, metav1.CreateOptions{})
if err != nil && !k8serrors.IsAlreadyExists(err) {
return err
}
return nil
}
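// The Remove* helpers below delete a resource only if it exists, which makes removal idempotent.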
func (provider *Provider) RemoveNamespace(ctx context.Context, name string) error {
if isFound, err := provider.DoesNamespaceExist(ctx, name); err != nil {
return err
} else if !isFound {
return nil
}
return provider.clientSet.CoreV1().Namespaces().Delete(ctx, name, metav1.DeleteOptions{})
}
func (provider *Provider) RemoveNonNamespacedResources(ctx context.Context, clusterRoleName string, clusterRoleBindingName string) error {
if err := provider.RemoveClusterRole(ctx, clusterRoleName); err != nil {
return err
}
if err := provider.RemoveClusterRoleBinding(ctx, clusterRoleBindingName); err != nil {
return err
}
return nil
}
func (provider *Provider) RemoveClusterRole(ctx context.Context, name string) error {
if isFound, err := provider.DoesClusterRoleExist(ctx, name); err != nil {
return err
} else if !isFound {
return nil
}
return provider.clientSet.RbacV1().ClusterRoles().Delete(ctx, name, metav1.DeleteOptions{})
}
func (provider *Provider) RemoveClusterRoleBinding(ctx context.Context, name string) error {
if isFound, err := provider.DoesClusterRoleBindingExist(ctx, name); err != nil {
return err
} else if !isFound {
return nil
}
return provider.clientSet.RbacV1().ClusterRoleBindings().Delete(ctx, name, metav1.DeleteOptions{})
}
func (provider *Provider) RemoveRoleBinding(ctx context.Context, namespace string, name string) error {
if isFound, err := provider.DoesRoleBindingExist(ctx, namespace, name); err != nil {
return err
} else if !isFound {
return nil
}
return provider.clientSet.RbacV1().RoleBindings(namespace).Delete(ctx, name, metav1.DeleteOptions{})
}
func (provider *Provider) RemoveRole(ctx context.Context, namespace string, name string) error {
if isFound, err := provider.DoesRoleExist(ctx, namespace, name); err != nil {
return err
} else if !isFound {
return nil
}
return provider.clientSet.RbacV1().Roles(namespace).Delete(ctx, name, metav1.DeleteOptions{})
}
func (provider *Provider) RemoveServicAccount(ctx context.Context, namespace string, name string) error {
if isFound, err := provider.DoesServiceAccountExist(ctx, namespace, name); err != nil {
return err
} else if !isFound {
return nil
}
return provider.clientSet.CoreV1().ServiceAccounts(namespace).Delete(ctx, name, metav1.DeleteOptions{})
}
func (provider *Provider) RemovePod(ctx context.Context, namespace string, podName string) error {
if isFound, err := provider.DoesPodExist(ctx, namespace, podName); err != nil {
return err
} else if !isFound {
return nil
}
return provider.clientSet.CoreV1().Pods(namespace).Delete(ctx, podName, metav1.DeleteOptions{})
}
func (provider *Provider) RemoveConfigMap(ctx context.Context, namespace string, configMapName string) error {
if isFound, err := provider.DoesConfigMapExist(ctx, namespace, configMapName); err != nil {
return err
} else if !isFound {
return nil
}
return provider.clientSet.CoreV1().ConfigMaps(namespace).Delete(ctx, configMapName, metav1.DeleteOptions{})
}
func (provider *Provider) RemoveService(ctx context.Context, namespace string, serviceName string) error {
if isFound, err := provider.DoesServicesExist(ctx, namespace, serviceName); err != nil {
return err
} else if !isFound {
return nil
}
return provider.clientSet.CoreV1().Services(namespace).Delete(ctx, serviceName, metav1.DeleteOptions{})
}
func (provider *Provider) RemoveDaemonSet(ctx context.Context, namespace string, daemonSetName string) error {
if isFound, err := provider.DoesDaemonSetExist(ctx, namespace, daemonSetName); err != nil {
return err
} else if !isFound {
return nil
}
return provider.clientSet.AppsV1().DaemonSets(namespace).Delete(ctx, daemonSetName, metav1.DeleteOptions{})
}
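// CreateConfigMap stores the rule-policy data in a ConfigMap; it is a no-op when data is empty.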
func (provider *Provider) CreateConfigMap(ctx context.Context, namespace string, configMapName string, data string) error {
if data == "" {
return nil
}
configMapData := make(map[string]string, 0)
configMapData[shared.RulePolicyFileName] = data
configMap := &core.ConfigMap{
TypeMeta: metav1.TypeMeta{
Kind: "ConfigMap",
APIVersion: "v1",
},
ObjectMeta: metav1.ObjectMeta{
Name: configMapName,
Namespace: namespace,
},
Data: configMapData,
}
if _, err := provider.clientSet.CoreV1().ConfigMaps(namespace).Create(ctx, configMap, metav1.CreateOptions{}); err != nil {
return err
}
return nil
}
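// ApplyMizuTapperDaemonSet server-side-applies a DaemonSet that runs the privileged tapper agent, pinned via node affinity to the nodes that host tapped pods and tolerating NoExecute and NoSchedule taints.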
func (provider *Provider) ApplyMizuTapperDaemonSet(ctx context.Context, namespace string, daemonSetName string, podImage string, tapperPodName string, apiServerPodIp string, nodeToTappedPodIPMap map[string][]string, serviceAccountName string, resources configStructs.Resources, imagePullPolicy core.PullPolicy) error {
logger.Log.Debugf("Applying tapper daemon set for %d nodes, ns: %s, daemonSetName: %s, podImage: %s, tapperPodName: %s", len(nodeToTappedPodIPMap), namespace, daemonSetName, podImage, tapperPodName)
if len(nodeToTappedPodIPMap) == 0 {
return fmt.Errorf("daemon set %s must tap at least 1 pod", daemonSetName)
}
nodeToTappedPodIPMapJsonStr, err := json.Marshal(nodeToTappedPodIPMap)
if err != nil {
return err
}
mizuCmd := []string{
"./mizuagent",
"-i", "any",
"--tap",
"--api-server-address", fmt.Sprintf("ws://%s/wsTapper", apiServerPodIp),
"--nodefrag",
}
agentContainer := applyconfcore.Container()
agentContainer.WithName(tapperPodName)
agentContainer.WithImage(podImage)
agentContainer.WithImagePullPolicy(imagePullPolicy)
agentContainer.WithSecurityContext(applyconfcore.SecurityContext().WithPrivileged(true))
agentContainer.WithCommand(mizuCmd...)
agentContainer.WithEnv(
applyconfcore.EnvVar().WithName(shared.HostModeEnvVar).WithValue("1"),
applyconfcore.EnvVar().WithName(shared.TappedAddressesPerNodeDictEnvVar).WithValue(string(nodeToTappedPodIPMapJsonStr)),
)
agentContainer.WithEnv(
applyconfcore.EnvVar().WithName(shared.NodeNameEnvVar).WithValueFrom(
applyconfcore.EnvVarSource().WithFieldRef(
applyconfcore.ObjectFieldSelector().WithAPIVersion("v1").WithFieldPath("spec.nodeName"),
),
),
)
cpuLimit, err := resource.ParseQuantity(resources.CpuLimit)
if err != nil {
return fmt.Errorf("invalid cpu limit for %s container", tapperPodName)
}
memLimit, err := resource.ParseQuantity(resources.MemoryLimit)
if err != nil {
return fmt.Errorf("invalid memory limit for %s container", tapperPodName)
}
cpuRequests, err := resource.ParseQuantity(resources.CpuRequests)
if err != nil {
return fmt.Errorf("invalid cpu request for %s container", tapperPodName)
}
memRequests, err := resource.ParseQuantity(resources.MemoryRequests)
if err != nil {
return fmt.Errorf("invalid memory request for %s container", tapperPodName)
}
agentResourceLimits := core.ResourceList{
"cpu": cpuLimit,
"memory": memLimit,
}
agentResourceRequests := core.ResourceList{
"cpu": cpuRequests,
"memory": memRequests,
}
agentResources := applyconfcore.ResourceRequirements().WithRequests(agentResourceRequests).WithLimits(agentResourceLimits)
agentContainer.WithResources(agentResources)
nodeNames := make([]string, 0, len(nodeToTappedPodIPMap))
for nodeName := range nodeToTappedPodIPMap {
nodeNames = append(nodeNames, nodeName)
}
nodeSelectorRequirement := applyconfcore.NodeSelectorRequirement()
nodeSelectorRequirement.WithKey("kubernetes.io/hostname")
nodeSelectorRequirement.WithOperator(core.NodeSelectorOpIn)
nodeSelectorRequirement.WithValues(nodeNames...)
nodeSelectorTerm := applyconfcore.NodeSelectorTerm()
nodeSelectorTerm.WithMatchExpressions(nodeSelectorRequirement)
nodeSelector := applyconfcore.NodeSelector()
nodeSelector.WithNodeSelectorTerms(nodeSelectorTerm)
nodeAffinity := applyconfcore.NodeAffinity()
nodeAffinity.WithRequiredDuringSchedulingIgnoredDuringExecution(nodeSelector)
affinity := applyconfcore.Affinity()
affinity.WithNodeAffinity(nodeAffinity)
noExecuteToleration := applyconfcore.Toleration()
noExecuteToleration.WithOperator(core.TolerationOpExists)
noExecuteToleration.WithEffect(core.TaintEffectNoExecute)
noScheduleToleration := applyconfcore.Toleration()
noScheduleToleration.WithOperator(core.TolerationOpExists)
noScheduleToleration.WithEffect(core.TaintEffectNoSchedule)
podSpec := applyconfcore.PodSpec()
podSpec.WithHostNetwork(true)
podSpec.WithDNSPolicy(core.DNSClusterFirstWithHostNet)
podSpec.WithTerminationGracePeriodSeconds(0)
if serviceAccountName != "" {
podSpec.WithServiceAccountName(serviceAccountName)
}
podSpec.WithContainers(agentContainer)
podSpec.WithAffinity(affinity)
podSpec.WithTolerations(noExecuteToleration, noScheduleToleration)
podTemplate := applyconfcore.PodTemplateSpec()
podTemplate.WithLabels(map[string]string{"app": tapperPodName})
podTemplate.WithSpec(podSpec)
labelSelector := applyconfmeta.LabelSelector()
labelSelector.WithMatchLabels(map[string]string{"app": tapperPodName})
daemonSet := applyconfapp.DaemonSet(daemonSetName, namespace)
daemonSet.WithSpec(applyconfapp.DaemonSetSpec().WithSelector(labelSelector).WithTemplate(podTemplate))
_, err = provider.clientSet.AppsV1().DaemonSets(namespace).Apply(ctx, daemonSet, metav1.ApplyOptions{FieldManager: fieldManagerName})
return err
}
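// ListAllPodsMatchingRegex lists the pods in the given namespaces and keeps only those whose name matches the regex.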
func (provider *Provider) ListAllPodsMatchingRegex(ctx context.Context, regex *regexp.Regexp, namespaces []string) ([]core.Pod, error) {
var pods []core.Pod
for _, namespace := range namespaces {
namespacePods, err := provider.clientSet.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{})
if err != nil {
return nil, fmt.Errorf("failed to get pods in ns: [%s], %w", namespace, err)
}
pods = append(pods, namespacePods.Items...)
}
matchingPods := make([]core.Pod, 0)
for _, pod := range pods {
if regex.MatchString(pod.Name) {
matchingPods = append(matchingPods, pod)
}
}
return matchingPods, nil
}
func (provider *Provider) ListAllRunningPodsMatchingRegex(ctx context.Context, regex *regexp.Regexp, namespaces []string) ([]core.Pod, error) {
pods, err := provider.ListAllPodsMatchingRegex(ctx, regex, namespaces)
if err != nil {
return nil, err
}
matchingPods := make([]core.Pod, 0)
for _, pod := range pods {
if isPodRunning(&pod) {
matchingPods = append(matchingPods, pod)
}
}
return matchingPods, nil
}
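// GetPodLogs streams the logs of the given pod and returns them as a single string.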
func (provider *Provider) GetPodLogs(namespace string, podName string, ctx context.Context) (string, error) {
podLogOpts := core.PodLogOptions{}
req := provider.clientSet.CoreV1().Pods(namespace).GetLogs(podName, &podLogOpts)
podLogs, err := req.Stream(ctx)
if err != nil {
return "", fmt.Errorf("error opening log stream on ns: %s, pod: %s, %w", namespace, podName, err)
}
defer podLogs.Close()
buf := new(bytes.Buffer)
if _, err = io.Copy(buf, podLogs); err != nil {
return "", fmt.Errorf("error copy information from podLogs to buf, ns: %s, pod: %s, %w", namespace, podName, err)
}
str := buf.String()
return str, nil
}
func (provider *Provider) GetNamespaceEvents(namespace string, ctx context.Context) (string, error) {
eventsOpts := metav1.ListOptions{}
eventList, err := provider.clientSet.CoreV1().Events(namespace).List(ctx, eventsOpts)
if err != nil {
return "", fmt.Errorf("error getting events on ns: %s, %w", namespace, err)
}
return eventList.String(), nil
}
func getClientSet(config *restclient.Config) (*kubernetes.Clientset, error) {
clientSet, err := kubernetes.NewForConfig(config)
if err != nil {
return nil, err
}
return clientSet, nil
}
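// loadKubernetesConfiguration accepts either a single kube config path or a KUBECONFIG-style list of paths (split with filepath.SplitList).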
func loadKubernetesConfiguration(kubeConfigPath string) clientcmd.ClientConfig {
logger.Log.Debugf("Using kube config %s", kubeConfigPath)
configPathList := filepath.SplitList(kubeConfigPath)
configLoadingRules := &clientcmd.ClientConfigLoadingRules{}
if len(configPathList) <= 1 {
configLoadingRules.ExplicitPath = kubeConfigPath
} else {
configLoadingRules.Precedence = configPathList
}
contextName := ""
return clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
configLoadingRules,
&clientcmd.ConfigOverrides{
CurrentContext: contextName,
},
)
}
func isPodRunning(pod *core.Pod) bool {
return pod.Status.Phase == core.PodRunning
}
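
The sketch below is a minimal, hypothetical usage of this package and is not part of the original file. It assumes the package is importable as github.com/up9inc/mizu/cli/kubernetes and that shared.TrafficFilteringOptions can be left zero-valued; the kube config path, namespace, names, image tag and resource figures are placeholders. It simply chains NewProvider, CreateNamespace, CreateMizuApiServerPod and CreateService in the order a CLI command might.

package main

import (
	"context"
	"log"

	"github.com/up9inc/mizu/cli/config/configStructs"
	"github.com/up9inc/mizu/cli/kubernetes"
	"github.com/up9inc/mizu/shared"
	core "k8s.io/api/core/v1"
)

func main() {
	ctx := context.Background()

	// Hypothetical kube config path.
	provider, err := kubernetes.NewProvider("/home/user/.kube/config")
	if err != nil {
		log.Fatal(err)
	}

	if _, err := provider.CreateNamespace(ctx, "mizu"); err != nil {
		log.Fatal(err)
	}

	// Hypothetical image and sizing values.
	if _, err := provider.CreateMizuApiServerPod(ctx, &kubernetes.ApiServerOptions{
		Namespace:               "mizu",
		PodName:                 "mizu-api-server",
		PodImage:                "up9inc/mizu:latest",
		MizuApiFilteringOptions: &shared.TrafficFilteringOptions{},
		MaxEntriesDBSizeBytes:   200 * 1000 * 1000,
		Resources: configStructs.Resources{
			CpuLimit:       "750m",
			MemoryLimit:    "512Mi",
			CpuRequests:    "50m",
			MemoryRequests: "50Mi",
		},
		ImagePullPolicy: core.PullAlways,
	}); err != nil {
		log.Fatal(err)
	}

	// The service selects the pod by its "app" label, which CreateMizuApiServerPod sets to the pod name.
	if _, err := provider.CreateService(ctx, "mizu", "mizu-api-server", "mizu-api-server"); err != nil {
		log.Fatal(err)
	}
}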