K8s audit evts (#450)

* Add new json/webserver libs, embedded webserver

Add two new external libraries:

 - nlohmann-json is a better json library, with stronger use of C++
   features such as type deduction and cleaner conversion from STL
   structures. We'll use it to hold generic json objects instead of
   jsoncpp.

 - civetweb is an embeddable webserver that will allow us to accept
   posted json data.

New files webserver.{cpp,h} start an embedded webserver that listens for
POSTs on a configurable URL and passes the posted json data to the falco
engine.

New falco config items are under webserver:
  - enabled: true|false. Whether to start the embedded webserver.
  - listen_port: Port on which the webserver listens.
  - k8s_audit_endpoint: URI on which to accept POSTed k8s audit events.
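
For example, a falco.yaml webserver section might look like the
following (the port and endpoint values here are illustrative, not
necessarily the shipped defaults):

  webserver:
    enabled: true
    listen_port: 8765
    k8s_audit_endpoint: /k8s_audit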

(This commit doesn't compile entirely on its own, but we're grouping
these related changes into one commit for clarity).

* Don't use relative paths to find lua code

You can look directly below PROJECT_SOURCE_DIR.

* Reorganize compiler lua code

The lua compiler code is generic enough to work on more than just
sinsp-based rules, so move the parts of the compiler related to event
types and filterchecks out into a standalone lua file
sinsp_rule_utils.lua.

The checks for event types/filterchecks are now done from rule_loader,
and are dependent on a "source" attribute of the rule being
"sinsp". We'll be adding additional types of events next that come from
sources other than system calls.

* Manage separate syscall/k8s audit rulesets

Add the ability to manage separate sets of rules (syscall and
k8s_audit). Stop using the sinsp_evttype_filter object from the sysdig
repo, replacing it with falco_ruleset/falco_sinsp_ruleset from
ruleset.{cpp,h}. It has the same methods to add rules, associate them
with rulesets, and (for syscall) quickly find the relevant rules for a
given syscall/event type.

At the falco engine level, there are new parallel interfaces for both
types of rules (syscall and k8s_audit) to:
  - add a rule: add_k8s_audit_filter/add_sinsp_filter
  - match an event against rules, possibly returning a result:
    process_sinsp_event/process_k8s_audit_event

At the rule loading level, the mechanics of creating filtercheck
objects are handled by two factories (sinsp_filter_factory and
json_event_filter_factory), both of which are held by the engine.

* Handle multiple rule types when parsing rules

Modify the steps of parsing a rule's filter expression to handle
multiple types of rules. Notable changes:

 - In the rule loader/ast traversal, pass a filter api object down,
   which is passed back up in the lua parser api calls like nest(),
   bool_op(), rel_expr(), etc.
 - The filter api object is either the sinsp factory or k8s audit
   factory, depending on the rule type.
 - When the rule is complete, the complete filter is passed to the
   engine using either add_sinsp_filter() or add_k8s_audit_filter().

* Add multiple output formatting types

Add support for multiple output formatters. Notable changes:

 - The falco engine is passed along to falco_formats to gain access to
   the engine's factories.
 - When creating a formatter, the source of the rule is passed along
   with the format string, which controls which kind of output formatter
   is created.

Also clean up exception handling a bit so all lua callbacks catch all
exceptions and convert them into lua errors.

* Add support for json, k8s audit filter fields

With some corresponding changes in sysdig, you can now create general
purpose filter fields and events, which can be tied together with
nesting, expressions, and relational operators. The classes here
represent an instance of these fields devoted to generic json objects as
well as k8s audit events. Notable changes:

 - json_event: holds a json object, used by all of the below

 - json_event_filter_check: has the ability to extract values out of a
   json_event object and to define macros that associate a field like
   "group.field" with a json pointer expression that extracts a single
   property's value out of the json object. The basic field definition
   also allows creating an index, e.g. group.field[index], where a
   std::function is responsible for performing the indexing. This class
   has virtual methods and must be subclassed to be used.

 - jevt_filter_check: subclass of json_event_filter_check that defines
   the following fields:
     - jevt.time/jevt.rawtime: extracts the time from the underlying
       json object.
     - jevt.value[<json pointer>]: general purpose way to extract any
       json value out of the underlying object. <json pointer> is a json
       pointer expression.
     - jevt.obj: returns the entire object, stringified.

 - k8s_audit_filter_check: implements fields that extract values from
   k8s audit events. Most of the implementation is in the form of macros
   like ka.user.name, ka.uri, ka.target.name, etc. that just use json
   pointers to extract the appropriate value from a k8s audit event. More
   advanced fields like ka.uri.param and ka.req.container.image use
   indexing to extract individual values out of maps or arrays. (A short
   example follows this list.)

 - json_event_filter_factory: used by things like the lua parser api,
   output formatter, etc. to create the necessary objects and return
   them.

 - json_event_formatter: given a format string, creates the necessary
   fields that will be used to produce a resolved string when given a
   json_event object.
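
As an illustration of these fields in use, here is a hypothetical rule
(the fields are the ones described above; the rule itself is invented
for this example and is not part of the shipped rule set):

  - rule: Example JSON Event Rule
    desc: Illustrative use of jevt/ka fields only
    condition: jevt.value[/stage]=ResponseComplete and ka.verb=create
    output: k8s event at %jevt.time (user=%ka.user.name uri=%ka.uri command=%ka.uri.param[command])
    priority: INFO
    source: k8s_audit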

* Add ability to list fields

Similar to sysdig's -l option, add --list (<source>) to list the fields
supported by falco. With no source specified, it prints all
fields. Source can be "syscall" for inspector fields, e.g. what is
supported by sysdig, or "k8s_audit" to list fields supported only by the
k8s audit support in falco.
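
For example, listing only the k8s audit fields (flag and source name as
described above):

  falco --list k8s_audit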

* Initial set of k8s audit rules

Add an initial set of k8s audit rules. They're broken into 3 classes of
rules:

 - Suspicious activity: this includes things like:
    - A disallowed k8s user performing an operation
    - A disallowed container being used in a pod
    - A pod created with a privileged container
    - A pod created with a sensitive mount
    - A pod using host networking
    - Creating a NodePort service
    - A configmap containing private credentials
    - A request being made by an unauthenticated user
    - Attach/exec to a pod. (We eventually want to also cover privileged
      pods, but that will require some state management that we don't
      currently have.)
    - Creating a new namespace outside of an allowed set
    - Creating a pod in either of the kube-system/kube-public namespaces
    - Creating a serviceaccount in either of the kube-system/kube-public
      namespaces
    - Modifying any role starting with "system:"
    - Creating a clusterrolebinding to the cluster-admin role
    - Creating a role that wildcards verbs or resources
    - Creating a role with writable permissions/pod exec permissions
 - Resource tracking: noting when a deployment, service, configmap,
   cluster role, service account, etc. is created or destroyed.
 - Audit tracking: tracks all audit events.

To support these rules, add macros/new indexing functions as needed to
support the required fields and ways to index the results.

* Add ability to read trace files of k8s audit evts

Expand the use of the -e flag to cover both .scap files containing
system calls and jsonl files containing k8s audit events:

If a trace file is specified, first try to read it using the
inspector. If that throws an exception, try to read the first line as
json. If both fail, return an error.

Based on the results of the open, the main loop either calls
do_inspect(), looping over system events, or
read_k8s_audit_trace_file(), reading each line as json and passing it to
the engine and outputs.
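
For reference, a single line of such a jsonl trace might look like the
following (hand-abridged for illustration; field names follow the k8s
audit event schema, which is what the ka.* fields extract from):

  {"kind":"Event","apiVersion":"audit.k8s.io/v1beta1","stage":"ResponseComplete","verb":"create","user":{"username":"minikube-user"},"objectRef":{"resource":"pods","namespace":"default","name":"nginx"},"requestURI":"/api/v1/namespaces/default/pods","responseStatus":{"code":201}}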

* Example showing how to enable k8s audit logs.

An example of how to enable k8s audit logging for minikube.
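
At a high level, this means pointing the apiserver's audit webhook
backend at falco's embedded webserver. A minimal webhook configuration
(the kubeconfig-style format that kube-apiserver's
--audit-webhook-config-file option expects) might look like the
following sketch, assuming the illustrative webserver settings shown
earlier:

  apiVersion: v1
  kind: Config
  clusters:
  - name: falco
    cluster:
      server: http://localhost:8765/k8s_audit
  contexts:
  - context:
      cluster: falco
      user: ""
    name: default-context
  current-context: default-context
  users: []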

* Add unit tests for k8s audit support

Initial unit test support for k8s audit events. A new multiplex file
falco_k8s_audit_tests.yaml defines the tests. Traces (jsonl files) are
in trace_files/k8s_audit and new rules files are in
test/rules/k8s_audit.

Current test cases include:

- User outside allowed set
- Creating disallowed pod.
- Creating a pod explicitly on the allowed list
- Creating a pod w/ a privileged container (or second container), or a
  pod with no privileged container.
- Creating a pod w/ a sensitive mount container (or second container), or a
  pod with no sensitive mount.
- Cases for a trace without the relevant property, for the container
  being on the trusted list, and host network tests.
- Tests that create a Service w/ and w/o a NodePort type.
- Tests for configmaps: one set tries each disallowed string, ensuring
  each is detected; another uses a configmap with no disallowed string,
  ensuring it is not detected.
- The anonymous user creating a namespace.
- Tests for all kactivity rules, i.e. those that track create/delete of
  resources, as compared to suspicious activity.
- Exec/Attach to Pod
- Creating a namespace outside of an allowed set
- Creating a pod/serviceaccount in kube-system/kube-public namespaces
- Deleting/modifying a system cluster role
- Creating a binding to the cluster-admin role
- Creating a cluster role binding that wildcards verbs or resources
- Creating a cluster role with write/pod exec privileges

* Don't manually install gcc 4.8

gcc 4.8 should already be installed by default on the VM we use for
Travis.
Commit 1f28f85bdf (parent ff4f7ca13b), Mark Stemm, 2018-11-09 10:15:39 -08:00.
85 changed files with 4105 additions and 427 deletions.


@@ -23,6 +23,7 @@ if(NOT DEFINED FALCO_RULES_DEST_FILENAME)
   set(FALCO_RULES_DEST_FILENAME "falco_rules.yaml")
   set(FALCO_LOCAL_RULES_DEST_FILENAME "falco_rules.local.yaml")
   set(FALCO_APP_RULES_DEST_FILENAME "application_rules.yaml")
+  set(FALCO_K8S_AUDIT_RULES_DEST_FILENAME "k8s_audit_rules.yaml")
 endif()
 
 if(DEFINED FALCO_COMPONENT)
@@ -47,6 +48,10 @@ install(FILES falco_rules.local.yaml
         DESTINATION "${FALCO_ETC_DIR}"
         RENAME "${FALCO_LOCAL_RULES_DEST_FILENAME}")
 
+install(FILES k8s_audit_rules.yaml
+        DESTINATION "${FALCO_ETC_DIR}"
+        RENAME "${FALCO_K8S_AUDIT_RULES_DEST_FILENAME}")
+
 install(FILES application_rules.yaml
         DESTINATION "/etc/falco/rules.available"
         RENAME "${FALCO_APP_RULES_DEST_FILENAME}")

rules/k8s_audit_rules.yaml (new file, 418 lines)

@@ -0,0 +1,418 @@
# Generally only consider audit events once the response has completed
- list: k8s_audit_stages
  items: ["ResponseComplete"]

# Generally exclude users starting with "system:"
- macro: non_system_user
  condition: (not ka.user.name startswith "system:")

# This macro selects the set of Audit Events used by the below rules.
- macro: kevt
  condition: (jevt.value[/stage] in (k8s_audit_stages))

- macro: kevt_started
  condition: (jevt.value[/stage]=ResponseStarted)

# If you wish to restrict activity to a specific set of users, override/append to this list.
- list: allowed_k8s_users
  items: ["minikube", "minikube-user"]

- rule: Disallowed K8s User
  desc: Detect any k8s operation by users outside of an allowed set of users.
  condition: kevt and non_system_user and not ka.user.name in (allowed_k8s_users)
  output: K8s Operation performed by user not in allowed list of users (user=%ka.user.name target=%ka.target.name/%ka.target.resource verb=%ka.verb uri=%ka.uri resp=%ka.response.code)
  priority: WARNING
  source: k8s_audit
  tags: [k8s]
# In a local/user rules file, you could override this macro to
# explicitly enumerate the container images that you want to run in
# your environment. In this main falco rules file, there isn't any way
# to know all the containers that can run, so any container is
# allowed, by using a filter that is guaranteed to evaluate to true
# (the event time existing). In the overridden macro, the condition
# would look something like (ka.req.container.image.repository=my-repo/my-image)
- macro: allowed_k8s_containers
  condition: (jevt.rawtime exists)
- macro: response_successful
  condition: (ka.response.code startswith 2)

- macro: create
  condition: ka.verb=create

- macro: modify
  condition: (ka.verb in (create,update,patch))

- macro: delete
  condition: ka.verb=delete

- macro: pod
  condition: ka.target.resource=pods and not ka.target.subresource exists

- macro: pod_subresource
  condition: ka.target.resource=pods and ka.target.subresource exists

- macro: deployment
  condition: ka.target.resource=deployments

- macro: service
  condition: ka.target.resource=services

- macro: configmap
  condition: ka.target.resource=configmaps

- macro: namespace
  condition: ka.target.resource=namespaces

- macro: serviceaccount
  condition: ka.target.resource=serviceaccounts

- macro: clusterrole
  condition: ka.target.resource=clusterroles

- macro: clusterrolebinding
  condition: ka.target.resource=clusterrolebindings

- macro: role
  condition: ka.target.resource=roles

- macro: health_endpoint
  condition: ka.uri=/healthz
- rule: Create Disallowed Pod
  desc: >
    Detect an attempt to start a pod with a container image outside of a list of allowed images.
  condition: kevt and pod and create and not allowed_k8s_containers
  output: Pod started with container not in allowed list (user=%ka.user.name pod=%ka.resp.name ns=%ka.target.namespace image=%ka.req.container.image)
  priority: WARNING
  source: k8s_audit
  tags: [k8s]

- list: trusted_k8s_containers
  items: [sysdig/agent, sysdig/falco, quay.io/coreos/flannel, calico/node, rook/toolbox,
          gcr.io/google_containers/hyperkube, gcr.io/google_containers/kube-proxy,
          openshift3/ose-sti-builder,
          registry.access.redhat.com/openshift3/logging-fluentd,
          registry.access.redhat.com/openshift3/logging-elasticsearch,
          registry.access.redhat.com/openshift3/metrics-cassandra,
          registry.access.redhat.com/openshift3/ose-sti-builder,
          registry.access.redhat.com/openshift3/ose-docker-builder,
          registry.access.redhat.com/openshift3/image-inspector,
          cloudnativelabs/kube-router, istio/proxy,
          datadog/docker-dd-agent, datadog/agent,
          docker/ucp-agent,
          gliderlabs/logspout]

- rule: Create Privileged Pod
  desc: >
    Detect an attempt to start a pod with a privileged container
  condition: kevt and pod and create and ka.req.container.privileged=true and not ka.req.container.image.repository in (trusted_k8s_containers)
  output: Pod started with privileged container (user=%ka.user.name pod=%ka.resp.name ns=%ka.target.namespace image=%ka.req.container.image)
  priority: WARNING
  source: k8s_audit
  tags: [k8s]
- macro: sensitive_vol_mount
  condition: >
    (ka.req.volume.hostpath[/proc*]=true or
     ka.req.volume.hostpath[/var/run/docker.sock]=true or
     ka.req.volume.hostpath[/]=true or
     ka.req.volume.hostpath[/etc]=true or
     ka.req.volume.hostpath[/root*]=true)

- rule: Create Sensitive Mount Pod
  desc: >
    Detect an attempt to start a pod with a volume from a sensitive host directory (e.g. /proc).
    Exceptions are made for known trusted images.
  condition: kevt and pod and create and sensitive_vol_mount and not ka.req.container.image.repository in (trusted_k8s_containers)
  output: Pod started with sensitive mount (user=%ka.user.name pod=%ka.resp.name ns=%ka.target.namespace image=%ka.req.container.image mounts=%jevt.value[/requestObject/spec/volumes])
  priority: WARNING
  source: k8s_audit
  tags: [k8s]
# Corresponds to K8s CIS Benchmark 1.7.4
- rule: Create HostNetwork Pod
  desc: Detect an attempt to start a pod using the host network.
  condition: kevt and pod and create and ka.req.container.host_network=true and not ka.req.container.image.repository in (trusted_k8s_containers)
  output: Pod started using host network (user=%ka.user.name pod=%ka.resp.name ns=%ka.target.namespace image=%ka.req.container.image)
  priority: WARNING
  source: k8s_audit
  tags: [k8s]

- rule: Create NodePort Service
  desc: >
    Detect an attempt to start a service with a NodePort service type
  condition: kevt and service and create and ka.req.service.type=NodePort
  output: NodePort Service Created (user=%ka.user.name service=%ka.target.name ns=%ka.target.namespace ports=%ka.req.service.ports)
  priority: WARNING
  source: k8s_audit
  tags: [k8s]

- macro: contains_private_credentials
  condition: >
    (ka.req.configmap.obj contains "aws_access_key_id" or
     ka.req.configmap.obj contains "aws-access-key-id" or
     ka.req.configmap.obj contains "aws_s3_access_key_id" or
     ka.req.configmap.obj contains "aws-s3-access-key-id" or
     ka.req.configmap.obj contains "password" or
     ka.req.configmap.obj contains "passphrase")

- rule: Create/Modify Configmap With Private Credentials
  desc: >
    Detect creating/modifying a configmap containing a private credential (aws key, password, etc.)
  condition: kevt and configmap and modify and contains_private_credentials
  output: K8s configmap with private credential (user=%ka.user.name verb=%ka.verb configmap=%ka.req.configmap.name config=%ka.req.configmap.obj)
  priority: WARNING
  source: k8s_audit
  tags: [k8s]
# Corresponds to K8s CIS Benchmark, 1.1.1.
- rule: Anonymous Request Allowed
  desc: >
    Detect any request made by the anonymous user that was allowed
  condition: kevt and ka.user.name=system:anonymous and ka.auth.decision!=reject and not health_endpoint
  output: Request by anonymous user allowed (user=%ka.user.name verb=%ka.verb uri=%ka.uri reason=%ka.auth.reason)
  priority: WARNING
  source: k8s_audit
  tags: [k8s]
# Roughly corresponds to K8s CIS Benchmark, 1.1.12. In this case, it
# notifies on an attempt to exec/attach to a privileged container.
# Ideally, we'd add a more stringent rule that detects attaches/execs
# to a privileged pod, but that requires the engine for k8s audit
# events to be stateful, so it could know if a container named in an
# attach request was created privileged or not. For now, we have a
# less severe rule that detects attaches/execs to any pod.
- rule: Attach/Exec Pod
  desc: >
    Detect any attempt to attach/exec to a pod
  condition: kevt_started and pod_subresource and create and ka.target.subresource in (exec,attach)
  output: Attach/Exec to pod (user=%ka.user.name pod=%ka.target.name ns=%ka.target.namespace action=%ka.target.subresource command=%ka.uri.param[command])
  priority: NOTICE
  source: k8s_audit
  tags: [k8s]
# In a local/user rules file, you can append to this list to add additional allowed namespaces
- list: allowed_namespaces
  items: [kube-system, kube-public, default]

- rule: Create Disallowed Namespace
  desc: Detect any attempt to create a namespace outside of a set of known namespaces
  condition: kevt and namespace and create and not ka.target.name in (allowed_namespaces)
  output: Disallowed namespace created (user=%ka.user.name ns=%ka.target.name)
  priority: WARNING
  source: k8s_audit
  tags: [k8s]
# Detect any new pod created in the kube-system namespace
- rule: Pod Created in Kube Namespace
  desc: Detect any attempt to create a pod in the kube-system or kube-public namespaces
  condition: kevt and pod and create and ka.target.namespace in (kube-system, kube-public)
  output: Pod created in kube namespace (user=%ka.user.name pod=%ka.resp.name ns=%ka.target.namespace image=%ka.req.container.image)
  priority: WARNING
  source: k8s_audit
  tags: [k8s]

# Detect creating a service account in the kube-system/kube-public namespace
- rule: Service Account Created in Kube Namespace
  desc: Detect any attempt to create a serviceaccount in the kube-system or kube-public namespaces
  condition: kevt and serviceaccount and create and ka.target.namespace in (kube-system, kube-public)
  output: Service account created in kube namespace (user=%ka.user.name serviceaccount=%ka.target.name ns=%ka.target.namespace)
  priority: WARNING
  source: k8s_audit
  tags: [k8s]

# Detect any modify/delete to any ClusterRole starting with
# "system:". "system:coredns" is excluded as changes are expected in
# normal operation.
- rule: System ClusterRole Modified/Deleted
  desc: Detect any attempt to modify/delete a ClusterRole/Role starting with system
  condition: kevt and (role or clusterrole) and (modify or delete) and (ka.target.name startswith "system:") and ka.target.name!="system:coredns"
  output: System ClusterRole/Role modified or deleted (user=%ka.user.name role=%ka.target.name ns=%ka.target.namespace action=%ka.verb)
  priority: WARNING
  source: k8s_audit
  tags: [k8s]
# Detect any attempt to create a ClusterRoleBinding to the cluster-admin user
# (expand this to any built-in cluster role that does "sensitive" things)
- rule: Attach to cluster-admin Role
  desc: Detect any attempt to create a ClusterRoleBinding to the cluster-admin user
  condition: kevt and clusterrolebinding and create and ka.req.binding.role=cluster-admin
  output: Cluster Role Binding to cluster-admin role (user=%ka.user.name subject=%ka.req.binding.subjects)
  priority: WARNING
  source: k8s_audit
  tags: [k8s]
- rule: ClusterRole With Wildcard Created
  desc: Detect any attempt to create a Role/ClusterRole with wildcard resources or verbs
  condition: kevt and (role or clusterrole) and create and (ka.req.role.rules.resources contains '"*"' or ka.req.role.rules.verbs contains '"*"')
  output: Created Role/ClusterRole with wildcard (user=%ka.user.name role=%ka.target.name rules=%ka.req.role.rules)
  priority: WARNING
  source: k8s_audit
  tags: [k8s]

- macro: writable_verbs
  condition: >
    (ka.req.role.rules.verbs contains create or
     ka.req.role.rules.verbs contains update or
     ka.req.role.rules.verbs contains patch or
     ka.req.role.rules.verbs contains delete or
     ka.req.role.rules.verbs contains deletecollection)

- rule: ClusterRole With Write Privileges Created
  desc: Detect any attempt to create a Role/ClusterRole that can perform write-related actions
  condition: kevt and (role or clusterrole) and create and writable_verbs
  output: Created Role/ClusterRole with write privileges (user=%ka.user.name role=%ka.target.name rules=%ka.req.role.rules)
  priority: NOTICE
  source: k8s_audit
  tags: [k8s]

- rule: ClusterRole With Pod Exec Created
  desc: Detect any attempt to create a Role/ClusterRole that can exec to pods
  condition: kevt and (role or clusterrole) and create and ka.req.role.rules.resources contains "pods/exec"
  output: Created Role/ClusterRole with pod exec privileges (user=%ka.user.name role=%ka.target.name rules=%ka.req.role.rules)
  priority: WARNING
  source: k8s_audit
  tags: [k8s]
# The rules below this point are less discriminatory and generally
# represent a stream of activity for a cluster. If you wish to disable
# these events, modify the following macro.
- macro: consider_activity_events
  condition: (jevt.rawtime exists)

- macro: kactivity
  condition: (kevt and consider_activity_events)

- rule: K8s Deployment Created
  desc: Detect any attempt to create a deployment
  condition: (kactivity and create and deployment and response_successful)
  output: K8s Deployment Created (user=%ka.user.name deployment=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
  priority: INFO
  source: k8s_audit
  tags: [k8s]

- rule: K8s Deployment Deleted
  desc: Detect any attempt to delete a deployment
  condition: (kactivity and delete and deployment and response_successful)
  output: K8s Deployment Deleted (user=%ka.user.name deployment=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
  priority: INFO
  source: k8s_audit
  tags: [k8s]

- rule: K8s Service Created
  desc: Detect any attempt to create a service
  condition: (kactivity and create and service and response_successful)
  output: K8s Service Created (user=%ka.user.name service=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
  priority: INFO
  source: k8s_audit
  tags: [k8s]

- rule: K8s Service Deleted
  desc: Detect any attempt to delete a service
  condition: (kactivity and delete and service and response_successful)
  output: K8s Service Deleted (user=%ka.user.name service=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
  priority: INFO
  source: k8s_audit
  tags: [k8s]

- rule: K8s ConfigMap Created
  desc: Detect any attempt to create a configmap
  condition: (kactivity and create and configmap and response_successful)
  output: K8s ConfigMap Created (user=%ka.user.name configmap=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
  priority: INFO
  source: k8s_audit
  tags: [k8s]

- rule: K8s ConfigMap Deleted
  desc: Detect any attempt to delete a configmap
  condition: (kactivity and delete and configmap and response_successful)
  output: K8s ConfigMap Deleted (user=%ka.user.name configmap=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
  priority: INFO
  source: k8s_audit
  tags: [k8s]
- rule: K8s Namespace Created
  desc: Detect any attempt to create a namespace
  condition: (kactivity and create and namespace and response_successful)
  output: K8s Namespace Created (user=%ka.user.name namespace=%ka.target.name resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
  priority: INFO
  source: k8s_audit
  tags: [k8s]

- rule: K8s Namespace Deleted
  desc: Detect any attempt to delete a namespace
  condition: (kactivity and non_system_user and delete and namespace and response_successful)
  output: K8s Namespace Deleted (user=%ka.user.name namespace=%ka.target.name resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
  priority: INFO
  source: k8s_audit
  tags: [k8s]

- rule: K8s Serviceaccount Created
  desc: Detect any attempt to create a service account
  condition: (kactivity and create and serviceaccount and response_successful)
  output: K8s Serviceaccount Created (user=%ka.user.name serviceaccount=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
  priority: INFO
  source: k8s_audit
  tags: [k8s]

- rule: K8s Serviceaccount Deleted
  desc: Detect any attempt to delete a service account
  condition: (kactivity and delete and serviceaccount and response_successful)
  output: K8s Serviceaccount Deleted (user=%ka.user.name serviceaccount=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
  priority: INFO
  source: k8s_audit
  tags: [k8s]

- rule: K8s Role/Clusterrole Created
  desc: Detect any attempt to create a cluster role/role
  condition: (kactivity and create and (clusterrole or role) and response_successful)
  output: K8s Cluster Role Created (user=%ka.user.name role=%ka.target.name rules=%ka.req.role.rules resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
  priority: INFO
  source: k8s_audit
  tags: [k8s]

- rule: K8s Role/Clusterrole Deleted
  desc: Detect any attempt to delete a cluster role/role
  condition: (kactivity and delete and (clusterrole or role) and response_successful)
  output: K8s Cluster Role Deleted (user=%ka.user.name role=%ka.target.name resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
  priority: INFO
  source: k8s_audit
  tags: [k8s]

- rule: K8s Role/Clusterrolebinding Created
  desc: Detect any attempt to create a clusterrolebinding
  condition: (kactivity and create and clusterrolebinding and response_successful)
  output: K8s Cluster Role Binding Created (user=%ka.user.name binding=%ka.target.name subjects=%ka.req.binding.subjects role=%ka.req.binding.role resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason foo=%ka.req.binding.subject.has_name[cluster-admin])
  priority: INFO
  source: k8s_audit
  tags: [k8s]

- rule: K8s Role/Clusterrolebinding Deleted
  desc: Detect any attempt to delete a clusterrolebinding
  condition: (kactivity and delete and clusterrolebinding and response_successful)
  output: K8s Cluster Role Binding Deleted (user=%ka.user.name binding=%ka.target.name resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
  priority: INFO
  source: k8s_audit
  tags: [k8s]
# This rule generally matches all events, and as a result is disabled
# by default. If you wish to enable these events, modify the
# following macro, e.g. to:
#   condition: (jevt.rawtime exists)
- macro: consider_all_events
  condition: (not jevt.rawtime exists)

- macro: kall
  condition: (kevt and consider_all_events)

- rule: All K8s Audit Events
  desc: Match all K8s Audit Events
  condition: kall
  output: K8s Audit Event received (user=%ka.user.name verb=%ka.verb uri=%ka.uri obj=%jevt.obj)
  priority: DEBUG
  source: k8s_audit
  tags: [k8s]