mirror of
https://github.com/falcosecurity/falco.git
synced 2026-03-25 06:02:18 +00:00
Compare commits
35 Commits
Commit SHAs: ff376d312b, 205ce3c517, 807c00b827, 1c95644d17, 780129fa1b, 3026f3946e, cd32cceff8, 68211daffa, 43bfaecff5, de8b92fa05, 24b4d83eec, 7a56f1c2d9, e91bc497ac, ffc3da3873, f23e956a8d, 2c8c381dae, 969374fcc7, 732d530202, 21ba0eeb11, 7a25405ed5, ddd7e5b93f, 45241e74c8, 12d0f4589e, 8bd98c16e9, 93d5164efe, c844b5632f, 537e4b7e8d, f3e4d7cce0, f2adedec2f, 35a8392e6f, 78b9bd6e98, 6a6342adc6, bd0ca4f5a7, 3306941cce, f561f41065
CHANGELOG.md (48 changed lines)
@@ -2,6 +2,52 @@
This file documents all notable changes to Falco. The release numbering uses [semantic versioning](http://semver.org).

## v0.15.1

Released 2019-06-07

## Major Changes

* Drop unnecessary events at the kernel level instead of userspace, which should improve performance [[#635](https://github.com/falcosecurity/falco/pull/635)]

## Minor Changes

* Add instructions for k8s audit support in >= 1.13 [[#608](https://github.com/falcosecurity/falco/pull/608)]
* Fix security issues reported by GitHub on Anchore integration [[#592](https://github.com/falcosecurity/falco/pull/592)]
* Several docs/readme improvements [[#620](https://github.com/falcosecurity/falco/pull/620)] [[#616](https://github.com/falcosecurity/falco/pull/616)] [[#631](https://github.com/falcosecurity/falco/pull/631)] [[#639](https://github.com/falcosecurity/falco/pull/639)] [[#642](https://github.com/falcosecurity/falco/pull/642)]
* Better tracking of rule counts per ruleset [[#645](https://github.com/falcosecurity/falco/pull/645)]

## Bug Fixes

* Handle rule patterns that are invalid regexes [[#636](https://github.com/falcosecurity/falco/pull/636)]
* Fix kernel module builds on newer kernels [[#646](https://github.com/falcosecurity/falco/pull/646)] [[#sysdig/1413](https://github.com/draios/sysdig/pull/1413)]

## Rule Changes

* New rule `Launch Remote File Copy Tools in Container` could be used to identify exfiltration attacks [[#600](https://github.com/falcosecurity/falco/pull/600)]
* New rule `Create Symlink Over Sensitive Files` can help detect attacks like [[CVE-2018-15664](https://nvd.nist.gov/vuln/detail/CVE-2018-15664)] [[#613](https://github.com/falcosecurity/falco/pull/613)] [[#637](https://github.com/falcosecurity/falco/pull/637)]
* Let etcd-manager write to /etc/hosts. [[#613](https://github.com/falcosecurity/falco/pull/613)]
* Let additional processes spawned by google-accounts-daemon access sensitive files [[#593](https://github.com/falcosecurity/falco/pull/593)]
* Add Sematext Monitoring & Logging agents to trusted k8s containers [[#594](https://github.com/falcosecurity/falco/pull/594/)]
* Add additional coverage for `Netcat Remote Code Execution in Container` rule. [[#617](https://github.com/falcosecurity/falco/pull/617/)]
* Fix `egrep` typo. [[#617](https://github.com/falcosecurity/falco/pull/617/)]
* Allow Ansible to run using Python 3 [[#625](https://github.com/falcosecurity/falco/pull/625/)]
* Additional `Write below etc` exceptions for nginx, rancher [[#637](https://github.com/falcosecurity/falco/pull/637)] [[#648](https://github.com/falcosecurity/falco/pull/648)] [[#652](https://github.com/falcosecurity/falco/pull/652)]
* Add rules for running with IBM Cloud Kubernetes Service [[#634](https://github.com/falcosecurity/falco/pull/634)]

## v0.15.0

Released 2019-05-13
@@ -10,7 +56,7 @@ Released 2019-05-13
* **Actions and alerts for dropped events**: Falco can now take actions, including sending alerts/logging messages, and/or even exiting Falco, when it detects dropped system call events. [[#561](https://github.com/falcosecurity/falco/pull/561)] [[#571](https://github.com/falcosecurity/falco/pull/571)]
* **Support for Containerd/CRI-O**: Falco now supports containerd/cri-o containers. [[#585](https://github.com/falcosecurity/falco/pull/585)] [[#591](https://github.com/falcosecurity/falco/pull/591)] [[#599](https://github.com/falcosecurity/falco/pull/599)] [[#sysdig/1376](https://github.com/draios/sysdig/pull/1376)] [[#sysdig/1310](https://github.com/draios/sysdig/pull/1310)] [[#sysdig/1399](https://github.com/draios/sysdig/pull/1399)]
* **Perform docker metadata fetches asynchronously**: When new containers are discovered, fetch metadata about the container asynchronously, which should significantly reduce the likelihood of dropped system call events. [[#sysdig/1326](https://github.com/draios/sysdig/pull/1326)] [[#550](https://github.com/falcosecurity/falco/pull/550)] [[#570](https://github.com/falcosecurity/falco/pull/570)]
@@ -1,9 +1,11 @@
Current maintainers:

@mstemm - Mark Stemm <mark.stemm@sysdig.com>
@ldegio - Loris Degioanni <loris@sysdig.com>
@fntlnz - Lorenzo Fontana <lo@sysdig.com>
@leodido - Leonardo Di Donato <leo@sysdig.com>

Community Management:

@mfdii - Michael Ducy <michael@sysdig.com>

Emeritus maintainers:

@henridf - Henri Dubois-Ferriere <henri.dubois-ferriere@sysdig.com>
README.md (38 changed lines)
@@ -2,44 +2,44 @@
#### Latest release

**v0.15.1**
Read the [change log](https://github.com/falcosecurity/falco/blob/dev/CHANGELOG.md)

Dev Branch: [](https://travis-ci.com/falcosecurity/falco)<br />
Master Branch: [](https://travis-ci.com/falcosecurity/falco)<br />
CII Best Practices: [](https://bestpractices.coreinfrastructure.org/projects/2317)

## Overview
Falco is a behavioral activity monitor designed to detect anomalous activity in your applications. Powered by [sysdig’s](https://github.com/draios/sysdig) system call capture infrastructure, Falco lets you continuously monitor and detect container, application, host, and network activity, all in one place, from one source of data, with one set of rules.

Falco is hosted by the Cloud Native Computing Foundation (CNCF) as a sandbox level project. If you are an organization that wants to help shape the evolution of technologies that are container-packaged, dynamically-scheduled and microservices-oriented, consider joining the CNCF. For details read the [Falco CNCF project proposal](https://github.com/cncf/toc/tree/master/proposals/falco.adoc).
#### What kind of behaviors can Falco detect?

Falco can detect and alert on any behavior that involves making Linux system calls. Falco alerts can be triggered by the use of specific system calls, their arguments, and by properties of the calling process. For example, Falco can easily detect incidents including but not limited to:

- A shell is running inside a container.
- A container is running in privileged mode, or is mounting a sensitive path, such as `/proc`, from the host.
- A server process is spawning a child process of an unexpected type.
- Unexpected read of a sensitive file, such as `/etc/shadow`.
- A non-device file is written to `/dev`.
- A standard system binary, such as `ls`, is making an outbound network connection.

#### How does Falco compare with other security tools?

One of the questions we often get when we talk about Falco is “How does Falco differ from other Linux security tools such as SELinux, AppArmor, Auditd, etc.?”. We wrote a [blog post](https://sysdig.com/blog/selinux-seccomp-falco-technical-discussion/) comparing Falco with other tools.
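To make the detection model concrete, a Falco rule that flags a shell spawned inside a container looks roughly like the following. This is an illustrative sketch, not the exact rule shipped in falco_rules.yaml; the rule name, condition, and output format here are simplified:

```
- rule: Shell Spawned in Container (illustrative)
  desc: Detect a shell process started inside a container
  condition: evt.type = execve and evt.dir = < and container.id != host and proc.name = bash
  output: Shell spawned in container (user=%user.name container=%container.id command=%proc.cmdline)
  priority: WARNING
```

Rules like this are loaded from YAML files and evaluated against the stream of events captured at the syscall level.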
Documentation
---
See [Falco Documentation](https://falco.org/docs/) to quickly get started using Falco.

Join the Community
---
* [Website](https://falco.org) for Falco.
* We are working on a blog for the Falco project. In the meantime you can find [Falco](https://sysdig.com/blog/tag/falco/) posts over on the Sysdig blog.
* Join our [Public Slack](https://slack.sysdig.com) channel for open source Sysdig and Falco announcements and discussions.

License Terms
---
@@ -48,11 +48,11 @@ Falco is licensed to you under the [Apache 2.0](./COPYING) open source license.
Contributor License Agreements
---
### Background
We are formalizing the way that we accept contributions of code from the contributing community. We must now ask that contributions to Falco be provided subject to the terms and conditions of a [Contributor License Agreement (CLA)](./cla). The CLA comes in two forms, applicable to contributions by individuals, or by legal entities such as corporations and their employees. We recognize that entering into a CLA with us involves real consideration on your part, and we’ve tried to make this process as clear and simple as possible.

We’ve modeled our CLA on industry standards, such as [the CLA used by Kubernetes](https://github.com/kubernetes/kubernetes/blob/master/CONTRIBUTING.md). Note that this agreement is not a transfer of copyright ownership; it is simply a license agreement for contributions, intended to clarify the intellectual property license granted with contributions from any person or entity. It is for your protection as a contributor as well as the protection of Falco; it does not change your rights to use your own contributions for any other purpose.

For some background on why contributor license agreements are necessary, you can read FAQs from many other open source projects:

- [Django’s excellent CLA FAQ](https://www.djangoproject.com/foundation/cla/faq/)
- [A well-written chapter from Karl Fogel’s Producing Open Source Software on CLAs](http://producingoss.com/en/copyright-assignment.html)
@@ -62,7 +62,7 @@ As always, we are grateful for your past and present contributions to falco.
### What do I need to do in order to contribute code?

First, base your changes on the dev branch, not the master branch.

**Individual contributions**: Individuals who wish to make contributions must review the [Individual Contributor License Agreement](./cla/falco_contributor_agreement.txt) and indicate agreement by adding the following line to every GIT commit message:
@@ -1,54 +1,136 @@
# Introduction
This page describes how to get [Kubernetes Auditing](https://kubernetes.io/docs/tasks/debug-application-cluster/audit) working with Falco, either using static audit backends in Kubernetes 1.11, or, in Kubernetes 1.13, using a dynamic sink that configures webhook backends through an AuditSink API object.

<!-- toc -->

- [Instructions for Kubernetes 1.11](#instructions-for-kubernetes-111)
  * [Deploy Falco to your Kubernetes cluster](#deploy-falco-to-your-kubernetes-cluster)
  * [Define your audit policy and webhook configuration](#define-your-audit-policy-and-webhook-configuration)
  * [Restart the API Server to enable Audit Logging](#restart-the-api-server-to-enable-audit-logging)
  * [Observe Kubernetes audit events at falco](#observe-kubernetes-audit-events-at-falco)
- [Instructions for Kubernetes 1.13](#instructions-for-kubernetes-113)
  * [Deploy Falco to your Kubernetes cluster](#deploy-falco-to-your-kubernetes-cluster-1)
  * [Restart the API Server to enable Audit Logging](#restart-the-api-server-to-enable-audit-logging-1)
  * [Deploy AuditSink objects](#deploy-auditsink-objects)
  * [Observe Kubernetes audit events at falco](#observe-kubernetes-audit-events-at-falco-1)
- [Instructions for Kubernetes 1.13 with dynamic webhook and local log file](#instructions-for-kubernetes-113-with-dynamic-webhook-and-local-log-file)

<!-- tocstop -->
## Instructions for Kubernetes 1.11

The main steps are:

1. Deploy Falco to your Kubernetes cluster
1. Define your audit policy and webhook configuration
1. Restart the API Server to enable Audit Logging
1. Observe Kubernetes audit events at falco

### Deploy Falco to your Kubernetes cluster

Follow the [Kubernetes Using Daemonset](../../integrations/k8s-using-daemonset/README.md) instructions to create a falco service account, service, configmap, and daemonset.

### Define your audit policy and webhook configuration

The files in this directory can be used to configure Kubernetes audit logging. The relevant files are:

* [audit-policy.yaml](./audit-policy.yaml): The Kubernetes audit log configuration we used to create the rules in [k8s_audit_rules.yaml](../../rules/k8s_audit_rules.yaml).
* [webhook-config.yaml.in](./webhook-config.yaml.in): A (templated) webhook configuration that sends audit events to an IP associated with the falco service, port 8765. It is templated in that the *actual* IP is defined in an environment variable `FALCO_SERVICE_CLUSTERIP`, which can be plugged in using a program like `envsubst`.

Run the following to fill in the template file with the `ClusterIP` IP address you created with the `falco-service` service above. Although services like `falco-service.default.svc.cluster.local` cannot be resolved from the kube-apiserver container within the minikube VM (they're run as pods but not *really* a part of the cluster), the `ClusterIP`s associated with those services are routable.

```
FALCO_SERVICE_CLUSTERIP=$(kubectl get service falco-service -o=jsonpath={.spec.clusterIP}) envsubst < webhook-config.yaml.in > webhook-config.yaml
```
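The templating step above relies on `envsubst` (from GNU gettext). If it is unavailable, `sed` can perform the same substitution. The snippet below is a self-contained offline sketch: the template content and the IP address are stand-ins, not the real webhook-config.yaml.in:

```
# Stand-in template mimicking the $FALCO_SERVICE_CLUSTERIP placeholder.
cat > /tmp/webhook-config.yaml.in <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: falco
  cluster:
    server: http://$FALCO_SERVICE_CLUSTERIP:8765/k8s_audit
EOF

# Substitute a made-up ClusterIP; in a real cluster this would come from kubectl.
FALCO_SERVICE_CLUSTERIP="10.96.0.12"
sed "s/\$FALCO_SERVICE_CLUSTERIP/$FALCO_SERVICE_CLUSTERIP/g" \
  /tmp/webhook-config.yaml.in > /tmp/webhook-config.yaml

grep "server:" /tmp/webhook-config.yaml
```

The rendered file then points the webhook at the substituted address.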
### Restart the API Server to enable Audit Logging

A script [enable-k8s-audit.sh](./enable-k8s-audit.sh) performs the necessary steps to enable audit log support for the apiserver, including copying the audit policy/webhook files to the apiserver machine and modifying the apiserver command line to add arguments such as `--audit-log-path` and `--audit-policy-file`. (For minikube, ideally you'd be able to pass all these options directly on the `minikube start` command line, but manual patching is necessary. See [this issue](https://github.com/kubernetes/minikube/issues/2741) for more details.)

It is run as `bash ./enable-k8s-audit.sh <variant> static`. `<variant>` can be one of the following:

* `minikube`
* `kops`

When running with `variant` equal to `kops`, you must either modify the script to specify the kops apiserver hostname or set it via the environment: `APISERVER_HOST=api.my-kops-cluster.com bash ./enable-k8s-audit.sh kops`

Its output looks like this:

```
$ bash enable-k8s-audit.sh minikube static
***Copying apiserver config patch script to apiserver...
apiserver-config.patch.sh 100% 1190 1.2MB/s 00:00
***Copying audit policy/webhook files to apiserver...
audit-policy.yaml 100% 2519 1.2MB/s 00:00
webhook-config.yaml 100% 248 362.0KB/s 00:00
***Modifying k8s apiserver config (will result in apiserver restarting)...
***Done!
$
```

### Observe Kubernetes audit events at falco

Kubernetes audit events will then be routed to the falco daemonset within the cluster, which you can observe via `kubectl logs -f $(kubectl get pods -l app=falco-example -o jsonpath={.items[0].metadata.name})`.
## Instructions for Kubernetes 1.13

The main steps are:

1. Deploy Falco to your Kubernetes cluster
2. Restart the API Server to enable Audit Logging
3. Deploy the AuditSink object for your audit policy and webhook configuration
4. Observe Kubernetes audit events at falco

### Deploy Falco to your Kubernetes cluster

Follow the [Kubernetes Using Daemonset](../../integrations/k8s-using-daemonset/README.md) instructions to create a Falco service account, service, configmap, and daemonset.

### Restart the API Server to enable Audit Logging

A script [enable-k8s-audit.sh](./enable-k8s-audit.sh) performs the necessary steps to enable dynamic audit support for the apiserver by modifying the apiserver command line to add arguments such as `--audit-dynamic-configuration` and `--feature-gates=DynamicAuditing=true`. (For minikube, ideally you'd be able to pass all these options directly on the `minikube start` command line, but manual patching is necessary. See [this issue](https://github.com/kubernetes/minikube/issues/2741) for more details.)

It is run as `bash ./enable-k8s-audit.sh <variant> dynamic`. `<variant>` can be one of the following:

* `minikube`
* `kops`

When running with `variant` equal to `kops`, you must either modify the script to specify the kops apiserver hostname or set it via the environment: `APISERVER_HOST=api.my-kops-cluster.com bash ./enable-k8s-audit.sh kops`

Its output looks like this:

```
$ bash enable-k8s-audit.sh minikube dynamic
***Copying apiserver config patch script to apiserver...
apiserver-config.patch.sh 100% 1190 1.2MB/s 00:00
***Modifying k8s apiserver config (will result in apiserver restarting)...
***Done!
$
```
### Deploy AuditSink objects

[audit-sink.yaml.in](./audit-sink.yaml.in), in this directory, is a template audit sink configuration that defines the dynamic audit policy and webhook to route Kubernetes audit events to Falco.

Run the following to fill in the template file with the `ClusterIP` IP address you created with the `falco-service` service above. Although services like `falco-service.default.svc.cluster.local` cannot be resolved from the kube-apiserver container within the minikube VM (they're run as pods but not *really* a part of the cluster), the `ClusterIP`s associated with those services are routable.

```
FALCO_SERVICE_CLUSTERIP=$(kubectl get service falco-service -o=jsonpath={.spec.clusterIP}) envsubst < audit-sink.yaml.in > audit-sink.yaml
```

### Observe Kubernetes audit events at falco

Kubernetes audit events will then be routed to the falco daemonset within the cluster, which you can observe via `kubectl logs -f $(kubectl get pods -l app=falco-example -o jsonpath={.items[0].metadata.name})`.
## Instructions for Kubernetes 1.13 with dynamic webhook and local log file

If you want to use a mix of `AuditSink` for remote audit events as well as a local audit log file, you can run `enable-k8s-audit.sh` with the `dynamic+log` argument, e.g. `bash ./enable-k8s-audit.sh <variant> dynamic+log`. This enables dynamic audit logs as well as a static audit log to a local file. Its output looks like this:

```
***Copying apiserver config patch script to apiserver...
apiserver-config.patch.sh 100% 2211 662.9KB/s 00:00
***Copying audit policy file to apiserver...
audit-policy.yaml 100% 2519 847.7KB/s 00:00
***Modifying k8s apiserver config (will result in apiserver restarting)...
***Done!
```

The audit log will be available on the apiserver host at `/var/lib/k8s_audit/audit.log`.
From apiserver-config.patch.sh (the script that patches the apiserver manifest):

```
#!/usr/bin/env bash

set -euo pipefail

IFS=''

FILENAME=${1:-/etc/kubernetes/manifests/kube-apiserver.yaml}
VARIANT=${2:-minikube}
AUDIT_TYPE=${3:-static}

if [ "$AUDIT_TYPE" == "static" ]; then
    if grep audit-webhook-config-file "$FILENAME" ; then
        echo audit-webhook patch already applied
        exit 0
    fi
else
    if grep audit-dynamic-configuration "$FILENAME" ; then
        echo audit-dynamic-configuration patch already applied
        exit 0
    fi
fi

TMPFILE="/tmp/kube-apiserver.yaml.patched"
```
@@ -16,29 +26,42 @@ rm -f "$TMPFILE"
```
APISERVER_PREFIX=" -"
APISERVER_LINE="- kube-apiserver"

if [ "$VARIANT" == "kops" ]; then
    APISERVER_PREFIX=" "
    APISERVER_LINE="/usr/local/bin/kube-apiserver"
fi

while read -r LINE
do
    echo "$LINE" >> "$TMPFILE"
    case "$LINE" in
        *$APISERVER_LINE*)
            if [[ ($AUDIT_TYPE == "static" || $AUDIT_TYPE == "dynamic+log") ]]; then
                echo "$APISERVER_PREFIX --audit-log-path=/var/lib/k8s_audit/audit.log" >> "$TMPFILE"
                echo "$APISERVER_PREFIX --audit-policy-file=/var/lib/k8s_audit/audit-policy.yaml" >> "$TMPFILE"
                if [[ $AUDIT_TYPE == "static" ]]; then
                    echo "$APISERVER_PREFIX --audit-webhook-config-file=/var/lib/k8s_audit/webhook-config.yaml" >> "$TMPFILE"
                    echo "$APISERVER_PREFIX --audit-webhook-batch-max-wait=5s" >> "$TMPFILE"
                fi
            fi
            if [[ ($AUDIT_TYPE == "dynamic" || $AUDIT_TYPE == "dynamic+log") ]]; then
                echo "$APISERVER_PREFIX --audit-dynamic-configuration" >> "$TMPFILE"
                echo "$APISERVER_PREFIX --feature-gates=DynamicAuditing=true" >> "$TMPFILE"
                echo "$APISERVER_PREFIX --runtime-config=auditregistration.k8s.io/v1alpha1=true" >> "$TMPFILE"
            fi
            ;;
        *"volumeMounts:"*)
            if [[ ($AUDIT_TYPE == "static" || $AUDIT_TYPE == "dynamic+log") ]]; then
                echo " - mountPath: /var/lib/k8s_audit/" >> "$TMPFILE"
                echo " name: data" >> "$TMPFILE"
            fi
            ;;
        *"volumes:"*)
            if [[ ($AUDIT_TYPE == "static" || $AUDIT_TYPE == "dynamic+log") ]]; then
                echo " - hostPath:" >> "$TMPFILE"
                echo " path: /var/lib/k8s_audit" >> "$TMPFILE"
                echo " name: data" >> "$TMPFILE"
            fi
            ;;
    esac
```
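The core technique of the patch script above (stream the manifest line by line with an empty `IFS` so indentation survives, and inject flags immediately after the `kube-apiserver` command line) can be exercised on its own. The manifest below is a minimal stand-in, not a real apiserver manifest:

```
# Minimal stand-in manifest; only the structure matters here.
cat > /tmp/demo-apiserver.yaml <<'EOF'
spec:
  containers:
  - command:
    - kube-apiserver
    - --authorization-mode=Node,RBAC
EOF

TMPFILE="/tmp/demo-apiserver.yaml.patched"
rm -f "$TMPFILE"

# IFS= preserves leading whitespace; read -r keeps backslashes literal.
while IFS= read -r LINE
do
    echo "$LINE" >> "$TMPFILE"
    case "$LINE" in
        *"- kube-apiserver"*)
            # Inject an extra flag right after the command line, matching its indent.
            echo "    - --audit-log-path=/var/lib/k8s_audit/audit.log" >> "$TMPFILE"
            ;;
    esac
done < /tmp/demo-apiserver.yaml

grep -- --audit-log-path "$TMPFILE"
```

The patched copy contains the original five lines plus the injected flag, with indentation intact.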
examples/k8s_audit_config/audit-sink.yaml.in (new file, 16 lines):

```
apiVersion: auditregistration.k8s.io/v1alpha1
kind: AuditSink
metadata:
  name: falco-audit-sink
spec:
  policy:
    level: RequestResponse
    stages:
    - ResponseComplete
    - ResponseStarted
  webhook:
    throttle:
      qps: 10
      burst: 15
    clientConfig:
      url: "http://$FALCO_SERVICE_CLUSTERIP:8765/k8s_audit"
```
From enable-k8s-audit.sh:

```
#!/usr/bin/env bash

set -euo pipefail

VARIANT=${1:-minikube}
AUDIT_TYPE=${2:-static}

if [ "$VARIANT" == "minikube" ]; then
    APISERVER_HOST=$(minikube ip)
    SSH_KEY=$(minikube ssh-key)
    SSH_USER="docker"
    MANIFEST="/etc/kubernetes/manifests/kube-apiserver.yaml"
fi

if [ "$VARIANT" == "kops" ]; then
    # APISERVER_HOST=api.your-kops-cluster-name.com
    SSH_KEY=~/.ssh/id_rsa
    SSH_USER="admin"
    MANIFEST=/etc/kubernetes/manifests/kube-apiserver.manifest

    if [ -z "${APISERVER_HOST+xxx}" ]; then
```
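The `${APISERVER_HOST+xxx}` expansion in the script above is a set-test idiom: it expands to `xxx` only when the variable is set (even if set to the empty string), so `[ -z ... ]` detects a genuinely unset variable. A quick standalone demonstration, using a hypothetical variable name:

```
# Unset variable: the + expansion yields nothing, so -z is true.
unset DEMO_HOST
if [ -z "${DEMO_HOST+xxx}" ]; then
    echo "DEMO_HOST is unset"
fi

# Set-but-empty variable: the + expansion yields "xxx", so -n is true.
DEMO_HOST=""
if [ -n "${DEMO_HOST+xxx}" ]; then
    echo "DEMO_HOST is set (possibly empty)"
fi
```

This is why the script can distinguish "the user forgot to set APISERVER_HOST" from "the user set it to an empty value".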
@@ -23,14 +24,23 @@ if [ $VARIANT == "kops" ]; then
```
    fi
fi

echo "***Copying apiserver config patch script to apiserver..."
ssh -i $SSH_KEY "$SSH_USER@$APISERVER_HOST" "sudo mkdir -p /var/lib/k8s_audit && sudo chown $SSH_USER /var/lib/k8s_audit"
scp -i $SSH_KEY apiserver-config.patch.sh "$SSH_USER@$APISERVER_HOST:/var/lib/k8s_audit"

if [ "$AUDIT_TYPE" == "static" ]; then
    echo "***Copying audit policy/webhook files to apiserver..."
    scp -i $SSH_KEY audit-policy.yaml "$SSH_USER@$APISERVER_HOST:/var/lib/k8s_audit"
    scp -i $SSH_KEY webhook-config.yaml "$SSH_USER@$APISERVER_HOST:/var/lib/k8s_audit"
fi

if [ "$AUDIT_TYPE" == "dynamic+log" ]; then
    echo "***Copying audit policy file to apiserver..."
    scp -i $SSH_KEY audit-policy.yaml "$SSH_USER@$APISERVER_HOST:/var/lib/k8s_audit"
fi

echo "***Modifying k8s apiserver config (will result in apiserver restarting)..."

ssh -i $SSH_KEY "$SSH_USER@$APISERVER_HOST" "sudo bash /var/lib/k8s_audit/apiserver-config.patch.sh $MANIFEST $VARIANT $AUDIT_TYPE"

echo "***Done!"
```
@@ -1,4 +1,4 @@
-#Demo of falco with man-in-the-middle attacks on installation scripts
+# Demo of falco with man-in-the-middle attacks on installation scripts
 
 For context, see the corresponding [blog post](http://sysdig.com/blog/making-curl-to-bash-safer) for this demo.
@@ -69,6 +69,9 @@
 - macro: spawned_process
   condition: evt.type = execve and evt.dir=<
 
+- macro: create_symlink
+  condition: evt.type in (symlink, symlinkat) and evt.dir=<
+
 # File categories
 - macro: bin_dir
   condition: fd.directory in (/bin, /sbin, /usr/bin, /usr/sbin)

@@ -284,6 +287,9 @@
 - list: sensitive_file_names
   items: [/etc/shadow, /etc/sudoers, /etc/pam.conf, /etc/security/pwquality.conf]
 
+- list: sensitive_directory_names
+  items: [/, /etc, /etc/, /root, /root/]
+
 - macro: sensitive_files
   condition: >
     fd.name startswith /etc and
@@ -522,7 +528,7 @@
 # compatiblity with some widely used rules files.
 # Begin Deprecated
 - macro: parent_ansible_running_python
-  condition: (proc.pname in (python, pypy) and proc.pcmdline contains ansible)
+  condition: (proc.pname in (python, pypy, python3) and proc.pcmdline contains ansible)
 
 - macro: parent_bro_running_python
   condition: (proc.pname=python and proc.cmdline contains /usr/share/broctl)

@@ -604,7 +610,7 @@
 ## End Deprecated
 
 - macro: ansible_running_python
-  condition: (proc.name in (python, pypy) and proc.cmdline contains ansible)
+  condition: (proc.name in (python, pypy, python3) and proc.cmdline contains ansible)
 
 - macro: python_running_chef
   condition: (proc.name=python and (proc.cmdline contains yum-dump.py or proc.cmdline="python /usr/bin/chef-monitor.py"))

@@ -643,7 +649,8 @@
 - macro: run_by_google_accounts_daemon
   condition: >
     (proc.aname[1] startswith google_accounts or
-     proc.aname[2] startswith google_accounts)
+     proc.aname[2] startswith google_accounts or
+     proc.aname[3] startswith google_accounts)
 
 # Chef is similar.
 - macro: run_by_chef

@@ -774,7 +781,10 @@
   condition: (proc.cmdline startswith "java LiveUpdate" and fd.name in (/etc/liveupdate.conf, /etc/Product.Catalog.JavaLiveUpdate))
 
 - macro: rancher_agent
-  condition: (proc.name = agent and container.image.repository = rancher/agent)
+  condition: (proc.name=agent and container.image.repository contains "rancher/agent")
+
+- macro: rancher_network_manager
+  condition: (proc.name=rancher-bridge and container.image.repository contains "rancher/network-manager")
 
 - macro: sosreport_writing_files
   condition: >

@@ -808,7 +818,7 @@
   condition: (veritas_progs and (fd.name startswith /etc/vx or fd.name startswith /etc/opt/VRTS or fd.name startswith /etc/vom))
 
 - macro: nginx_writing_conf
-  condition: (proc.name in (nginx,nginx-ingress-c) and fd.name startswith /etc/nginx)
+  condition: (proc.name in (nginx,nginx-ingress-c,nginx-ingress) and (fd.name startswith /etc/nginx or fd.name startswith /etc/ingress-controller))
 
 - macro: nginx_writing_certs
   condition: >
@@ -872,6 +882,16 @@
 - macro: cassandra_writing_state
   condition: (java_running_cassandra and fd.directory=/root/.cassandra)
 
+# Istio
+- macro: galley_writing_state
+  condition: (proc.name=galley and fd.name in (known_istio_files))
+
+- list: known_istio_files
+  items: [/healthready, /healthliveness]
+
+- macro: calico_writing_state
+  condition: (proc.name=kube-controller and fd.name startswith /status.json and k8s.pod.name startswith calico)
+
 - list: repository_files
   items: [sources.list]

@@ -1023,11 +1043,21 @@
     and fd.name startswith "/etc/dd-agent")
 
 - macro: rancher_writing_conf
-  condition: (container.image.repository in (rancher_images)
-    and proc.name in (lib-controller,rancher-dns,healthcheck,rancher-metadat)
-    and (fd.name startswith "/etc/haproxy" or
-         fd.name startswith "/etc/rancher-dns")
-    )
+  condition: ((proc.name in (healthcheck, lb-controller, rancher-dns)) and
+              (container.image.repository contains "rancher/healthcheck" or
+               container.image.repository contains "rancher/lb-service-haproxy" or
+               container.image.repository contains "rancher/dns") and
+              (fd.name startswith "/etc/haproxy" or fd.name startswith "/etc/rancher-dns"))
+
+- macro: rancher_writing_root
+  condition: (proc.name=rancher-metadat and
+              (container.image.repository contains "rancher/metadata" or container.image.repository contains "rancher/lb-service-haproxy") and
+              fd.name startswith "/answers.json")
+
+- macro: checkpoint_writing_state
+  condition: (proc.name=checkpoint and
+              container.image.repository contains "coreos/pod-checkpointer" and
+              fd.name startswith "/etc/kubernetes")
 
 - macro: jboss_in_container_writing_passwd
   condition: >

@@ -1099,6 +1129,12 @@
 - macro: openshift_writing_conf
   condition: (proc.name=oc and fd.name startswith /etc/origin/node)
 
+- macro: keepalived_writing_conf
+  condition: (proc.name=keepalived and fd.name=/etc/keepalived/keepalived.conf)
+
+- macro: etcd_manager_updating_dns
+  condition: (container and proc.name=etcd-manager and fd.name=/etc/hosts)
+
 # Add conditions to this macro (probably in a separate file,
 # overwriting this macro) to allow for specific combinations of
 # programs writing below specific directories below

@@ -1204,8 +1240,11 @@
     and not calico_writing_conf
     and not prometheus_conf_writing_conf
     and not openshift_writing_conf
+    and not keepalived_writing_conf
     and not rancher_writing_conf
+    and not checkpoint_writing_state
     and not jboss_in_container_writing_passwd
+    and not etcd_manager_updating_dns
 
 - rule: Write below etc
   desc: an attempt to write to any file below /etc

@@ -1285,6 +1324,9 @@
     and not chef_writing_conf
     and not kubectl_writing_state
     and not cassandra_writing_state
+    and not galley_writing_state
+    and not calico_writing_state
+    and not rancher_writing_root
     and not known_root_conditions
     and not user_known_write_root_conditions
   output: "File below / or /root opened for writing (user=%user.name command=%proc.cmdline parent=%proc.pname file=%fd.name program=%proc.name)"
@@ -1343,6 +1385,7 @@
     and not proc.cmdline contains /usr/bin/mandb
     and not run_by_qualys
     and not run_by_chef
+    and not run_by_google_accounts_daemon
     and not user_read_sensitive_file_conditions
     and not perl_running_plesk
     and not perl_running_updmap

@@ -1436,12 +1479,14 @@
     and not proc.name in (docker_binaries, k8s_binaries, lxd_binaries, sysdigcloud_binaries,
                           sysdig, nsenter, calico, oci-umount)
     and not proc.name in (user_known_change_thread_namespace_binaries)
-    and not proc.name startswith "runc:"
+    and not proc.name startswith "runc"
+    and not proc.cmdline startswith "containerd"
     and not proc.pname in (sysdigcloud_binaries)
     and not python_running_sdchecks
     and not java_running_sdjagent
     and not kubelet_running_loopback
     and not rancher_agent
+    and not rancher_network_manager
   output: >
     Namespace change (setns) by unexpected program (user=%user.name command=%proc.cmdline
     parent=%proc.pname %container.info)

@@ -1978,6 +2023,7 @@
     and not somebody_becoming_themself
     and not proc.name in (known_setuid_binaries, userexec_binaries, mail_binaries, docker_binaries,
                           nomachine_binaries)
+    and not proc.name startswith "runc:"
     and not java_running_sdjagent
     and not nrpe_becoming_nagios
     and not user_known_non_sudo_setuid_conditions

@@ -2096,7 +2142,7 @@
   items: [nc, ncat, nmap, dig, tcpdump, tshark, ngrep]
 
 - macro: network_tool_procs
-  condition: proc.name in (network_tool_binaries)
+  condition: (proc.name in (network_tool_binaries))
 
 # Container is supposed to be immutable. Package management should be done in building the image.
 - rule: Launch Package Management Process in Container

@@ -2114,7 +2160,8 @@
   condition: >
     spawned_process and container and
     ((proc.name = "nc" and (proc.args contains "-e" or proc.args contains "-c")) or
-     (proc.name = "ncat" and (proc.args contains "--sh-exec" or proc.args contains "--exec"))
+     (proc.name = "ncat" and (proc.args contains "--sh-exec" or proc.args contains "--exec" or proc.args contains "-e "
+                              or proc.args contains "-c " or proc.args contains "--lua-exec"))
     )
   output: >
     Netcat runs inside container that allows remote code execution (user=%user.name

@@ -2122,7 +2169,7 @@
   priority: WARNING
   tags: [network, process, mitre_execution]
 
-- rule: Lauch Suspicious Network Tool in Container
+- rule: Launch Suspicious Network Tool in Container
   desc: Detect network tools launched inside container
   condition: >
     spawned_process and container and network_tool_procs

@@ -2151,7 +2198,7 @@
   tags: [network, process, mitre_discovery, mitre_exfiltration]
 
 - list: grep_binaries
-  items: [grep, egre, fgrep]
+  items: [grep, egrep, fgrep]
 
 - macro: grep_commands
   condition: (proc.name in (grep_binaries))

@@ -2269,6 +2316,32 @@
     NOTICE
   tag: [file, mitre_persistence]
 
+- list: remote_file_copy_binaries
+  items: [rsync, scp, sftp, dcp]
+
+- macro: remote_file_copy_procs
+  condition: (proc.name in (remote_file_copy_binaries))
+
+- rule: Launch Remote File Copy Tools in Container
+  desc: Detect remote file copy tools launched in container
+  condition: >
+    spawned_process and container and remote_file_copy_procs
+  output: >
+    Remote file copy tool launched in container (user=%user.name command=%proc.cmdline parent_process=%proc.pname
+    container_id=%container.id container_name=%container.name image=%container.image.repository:%container.image.tag)
+  priority: NOTICE
+  tags: [network, process, mitre_lateral_movement, mitre_exfiltration]
+
+- rule: Create Symlink Over Sensitive Files
+  desc: Detect symlink created over sensitive files
+  condition: >
+    create_symlink and
+    (evt.arg.target in (sensitive_file_names) or evt.arg.target in (sensitive_directory_names))
+  output: >
+    Symlinks created over sensitive files (user=%user.name command=%proc.cmdline target=%evt.arg.target linkpath=%evt.arg.linkpath parent_process=%proc.pname)
+  priority: NOTICE
+  tags: [file, mitre_exfiltration]
+
 # Application rules have moved to application_rules.yaml. Please look
 # there if you want to enable them by adding to
 # falco_rules.local.yaml.
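The widened netcat condition above matches the short flags with a trailing space (`"-e "`, `"-c "`) so that a plain substring check does not also fire on longer flags such as `--exec` or `--sh-exec`, which contain `-e` without a trailing space. A minimal sketch of that substring logic (in Python rather than Falco's condition language; `ncat_allows_exec` and `EXEC_FLAGS` are hypothetical names for illustration):

```python
# Exec-style flags from the rule above; short flags keep a trailing space
# so "-e " only matches a standalone short flag, not e.g. "--sh-exec".
EXEC_FLAGS = ["--sh-exec", "--exec", "-e ", "-c ", "--lua-exec"]

def ncat_allows_exec(args):
    """Return True if the argument string contains an exec-style flag.

    A sketch of the 'proc.args contains ...' checks in the rule; this is
    not Falco's evaluator, just the same substring idea.
    """
    return any(flag in args for flag in EXEC_FLAGS)
```

With this shape, `ncat -l 4444 -e /bin/sh` is flagged while harmless invocations without exec flags are not.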
@@ -118,10 +118,16 @@
     registry.access.redhat.com/openshift3/ose-sti-builder,
     registry.access.redhat.com/openshift3/ose-docker-builder,
     registry.access.redhat.com/openshift3/image-inspector,
+    registry.access.redhat.com/sematext/sematext-agent-docker,
+    registry.access.redhat.com/sematext/agent,
+    registry.access.redhat.com/sematext/logagent,
     cloudnativelabs/kube-router, istio/proxy,
     datadog/docker-dd-agent, datadog/agent,
     docker/ucp-agent,
-    gliderlabs/logspout]
+    gliderlabs/logspout,
+    sematext/agent,
+    sematext/logagent,
+    sematext/sematext-agent-docker]
 
 - rule: Create Privileged Pod
   desc: >
@@ -251,6 +251,14 @@ uint16_t falco_engine::find_ruleset_id(const std::string &ruleset)
 	return it->second;
 }
 
+uint64_t falco_engine::num_rules_for_ruleset(const std::string &ruleset)
+{
+	uint16_t ruleset_id = find_ruleset_id(ruleset);
+
+	return m_sinsp_rules->num_rules_for_ruleset(ruleset_id) +
+		m_k8s_audit_rules->num_rules_for_ruleset(ruleset_id);
+}
+
 void falco_engine::evttypes_for_ruleset(std::vector<bool> &evttypes, const std::string &ruleset)
 {
 	uint16_t ruleset_id = find_ruleset_id(ruleset);
@@ -106,6 +106,11 @@ public:
 	//
 	uint16_t find_ruleset_id(const std::string &ruleset);
 
+	//
+	// Return the number of falco rules enabled for the provided ruleset
+	//
+	uint64_t num_rules_for_ruleset(const std::string &ruleset);
+
 	//
 	// Print details on the given rule. If rule is NULL, print
 	// details on all rules.
@@ -41,6 +41,7 @@ falco_ruleset::~falco_ruleset()
 }
 
 falco_ruleset::ruleset_filters::ruleset_filters()
+	: m_num_filters(0)
 {
 }
 

@@ -58,10 +59,14 @@ falco_ruleset::ruleset_filters::~ruleset_filters()
 
 void falco_ruleset::ruleset_filters::add_filter(filter_wrapper *wrap)
 {
+	bool added = false;
+
 	for(uint32_t etag = 0; etag < wrap->event_tags.size(); etag++)
 	{
 		if(wrap->event_tags[etag])
 		{
+			added = true;
+
 			if(m_filter_by_event_tag.size() <= etag)
 			{
 				m_filter_by_event_tag.resize(etag+1);

@@ -75,10 +80,17 @@ void falco_ruleset::ruleset_filters::add_filter(filter_wrapper *wrap)
 			m_filter_by_event_tag[etag]->push_back(wrap);
 		}
 	}
+
+	if(added)
+	{
+		m_num_filters++;
+	}
 }
 
 void falco_ruleset::ruleset_filters::remove_filter(filter_wrapper *wrap)
 {
+	bool removed = false;
+
 	for(uint32_t etag = 0; etag < wrap->event_tags.size(); etag++)
 	{
 		if(wrap->event_tags[etag])

@@ -88,22 +100,38 @@ void falco_ruleset::ruleset_filters::remove_filter(filter_wrapper *wrap)
 			list<filter_wrapper *> *l = m_filter_by_event_tag[etag];
 			if(l)
 			{
-				l->erase(remove(l->begin(),
-						l->end(),
-						wrap),
-					 l->end());
+				auto it = remove(l->begin(),
+						 l->end(),
+						 wrap);
 
-				if(l->size() == 0)
+				if(it != l->end())
 				{
-					delete l;
-					m_filter_by_event_tag[etag] = NULL;
+					removed = true;
+
+					l->erase(it,
+						 l->end());
+
+					if(l->size() == 0)
+					{
+						delete l;
+						m_filter_by_event_tag[etag] = NULL;
+					}
 				}
 			}
 		}
 	}
+
+	if(removed)
+	{
+		m_num_filters--;
+	}
 }
 
+uint64_t falco_ruleset::ruleset_filters::num_filters()
+{
+	return m_num_filters;
+}
+
 bool falco_ruleset::ruleset_filters::run(gen_event *evt, uint32_t etag)
 {

@@ -176,7 +204,16 @@ void falco_ruleset::add(string &name,
 
 void falco_ruleset::enable(const string &pattern, bool enabled, uint16_t ruleset)
 {
-	regex re(pattern);
+	regex re;
+	bool match_using_regex = true;
+
+	try {
+		re.assign(pattern);
+	}
+	catch (std::regex_error e)
+	{
+		match_using_regex = false;
+	}
 
 	while (m_rulesets.size() < (size_t) ruleset + 1)
 	{

@@ -185,7 +222,16 @@ void falco_ruleset::enable(const string &pattern, bool enabled, uint16_t ruleset
 
 	for(const auto &val : m_filters)
 	{
-		if (regex_match(val.first, re))
+		bool matches;
+		if(match_using_regex)
+		{
+			matches = regex_match(val.first, re);
+		}
+		else
+		{
+			matches = (val.first.find(pattern) != string::npos);
+		}
+		if (matches)
 		{
 			if(enabled)
 			{

@@ -222,6 +268,16 @@ void falco_ruleset::enable_tags(const set<string> &tags, bool enabled, uint16_t
 	}
 }
 
+uint64_t falco_ruleset::num_rules_for_ruleset(uint16_t ruleset)
+{
+	while (m_rulesets.size() < (size_t) ruleset + 1)
+	{
+		m_rulesets.push_back(new ruleset_filters());
+	}
+
+	return m_rulesets[ruleset]->num_filters();
+}
+
 bool falco_ruleset::run(gen_event *evt, uint32_t etag, uint16_t ruleset)
 {
 	if(m_rulesets.size() < (size_t) ruleset + 1)
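The `falco_ruleset::enable` change above first tries to compile the pattern as a regex and, if compilation throws, falls back to plain substring matching so that rule names containing regex metacharacters can still be enabled. A minimal sketch of the same logic (in Python rather than the engine's C++; `match_rules` is a hypothetical helper for illustration):

```python
import re

def match_rules(pattern, rule_names):
    """Return the rule names selected by pattern.

    Tries to compile pattern as a regex and match the full name (like
    std::regex_match); if compilation fails, falls back to substring
    matching, mirroring the enable() behavior in the diff above.
    """
    try:
        compiled = re.compile(pattern)
        use_regex = True
    except re.error:
        use_regex = False

    selected = []
    for name in rule_names:
        if use_regex:
            hit = compiled.fullmatch(name) is not None
        else:
            hit = pattern in name  # plain substring fallback
        if hit:
            selected.append(name)
    return selected
```

So a pattern like `(setns` (an invalid regex) still selects a rule whose name contains that literal text, instead of raising an error.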
@@ -61,6 +61,10 @@ public:
 	// enable_tags.
 	void enable_tags(const std::set<std::string> &tags, bool enabled, uint16_t ruleset = 0);
 
+
+	// Return the number of falco rules enabled for the provided ruleset
+	uint64_t num_rules_for_ruleset(uint16_t ruleset = 0);
+
 	// Match all filters against the provided event.
 	bool run(gen_event *evt, uint32_t etag, uint16_t ruleset = 0);

@@ -89,11 +93,15 @@ private:
 		void add_filter(filter_wrapper *wrap);
 		void remove_filter(filter_wrapper *wrap);
 
+		uint64_t num_filters();
+
 		bool run(gen_event *evt, uint32_t etag);
 
 		void event_tags_for_ruleset(std::vector<bool> &event_tags);
 
 	private:
+		uint64_t m_num_filters;
+
 		// Maps from event tag to a list of filters. There can
 		// be multiple filters for a given event tag.
 		std::vector<std::list<filter_wrapper *> *> m_filter_by_event_tag;
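The new `m_num_filters` counter tracked in these changes is only adjusted when a filter is actually added to, or removed from, at least one event-tag list (the `added`/`removed` flags), so double-removals do not drive the count negative. A small Python stand-in for that bookkeeping (class and method names are hypothetical, chosen to mirror `ruleset_filters`):

```python
class TaggedFilters:
    """Filters indexed per event tag, plus a count of distinct filters.

    The count changes only when membership actually changes, as in the
    add_filter()/remove_filter() diff above.
    """
    def __init__(self):
        self.by_tag = {}       # tag -> list of filters for that tag
        self.num_filters = 0

    def add_filter(self, filt, tags):
        added = False
        for tag in tags:
            added = True
            self.by_tag.setdefault(tag, []).append(filt)
        if added:               # counted once, even if under many tags
            self.num_filters += 1

    def remove_filter(self, filt):
        removed = False
        for tag, filters in list(self.by_tag.items()):
            if filt in filters:
                removed = True
                filters[:] = [f for f in filters if f is not filt]
                if not filters:
                    del self.by_tag[tag]  # drop empty tag lists
        if removed:             # no-op removal leaves the count alone
            self.num_filters -= 1
```

`num_rules_for_ruleset` can then just return this counter instead of walking every tag list.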
@@ -23,6 +23,7 @@ include_directories("${PROJECT_SOURCE_DIR}/../sysdig/userspace/libsinsp")
 include_directories("${PROJECT_SOURCE_DIR}/../sysdig/userspace/sysdig")
 include_directories("${PROJECT_SOURCE_DIR}/userspace/engine")
 include_directories("${PROJECT_BINARY_DIR}/userspace/falco")
+include_directories("${PROJECT_BINARY_DIR}/driver/src")
 include_directories("${CURL_INCLUDE_DIR}")
 include_directories("${TBB_INCLUDE_DIR}")
 include_directories("${NJSON_INCLUDE}")
@@ -856,7 +856,6 @@ int falco_init(int argc, char **argv)
 	if(!all_events)
 	{
 		inspector->set_drop_event_flags(EF_DROP_FALCO);
-		inspector->start_dropping_mode(1);
 	}
 
 	if (describe_all_rules)

@@ -964,6 +963,12 @@ int falco_init(int argc, char **argv)
 		}
 	}
 
+	// This must be done after the open
+	if(!all_events)
+	{
+		inspector->start_dropping_mode(1);
+	}
+
 	// If daemonizing, do it here so any init errors will
 	// be returned in the foreground process.
 	if (daemon && !g_daemonized) {