Compare commits

...

35 Commits

Author SHA1 Message Date
Mark Stemm
6acb13e6bb Merge branch 'dev' 2018-07-24 17:33:24 -07:00
Mark Stemm
fdbe62fdae Prepare for 0.11.0 (#393)
Updating CHANGELOG.md and README
2018-07-24 17:27:17 -07:00
Mark Stemm
d63542d8ff Rule updates 2018 07.v1 (#388)
* Add dpkg-divert as a debian package mgmt program.

* Add pip3 as a package mgmt program.

* Let ucpagent write config

Since the name is fairly generic (apiserver), require that it runs in a
container with image docker/ucp-agent.

* Let iscsi admin programs write config

* Add parent to some output strings

Will aid in addressing false positives.

* Let update-ca-trust write to pki files

* Add additional root writing programs

- zap: web application security tool
- airflow: apache app for managing data pipelines
- rpm can sometimes write below /root/.rpmdb
- maven can write groovy files

* Expand redis etc files

Additional program redis-launcher(.sh) and path /etc/redis.

* Add additional root directories

/root/workspace could be used by jenkins, /root/oradiag_root could be
used by Oracle 11 SQL*Net.

* Add pam-config as an auth program

* Add additional trusted containers

openshift image inspector, alternate name for datadog agent, docker ucp
agent, gliderlabs logspout.

* Add microdnf as a rpm binary.

https://github.com/rpm-software-management/microdnf

* Let coreos update-ssh-keys write /home/core/.ssh

* Allow additional writes below /etc/iscsi

Allow any path starting with /etc/iscsi (see the prefix-match sketch after this commit entry).

* Add additional /root write paths

Additional files, with /root/workspace changing from a directory to a
path prefix.

* Add additional openshift trusted container.

* Also allow grandparents for ms_oms_writing_conf

In some cases the program spawns intermediate shells, for example:

07:15:30.756713513: Error File below /etc opened for writing (user= command=StatusReport.sh /opt/microsoft/omsconfig/Scripts/StatusReport.sh D34448EA-363A-42C2-ACE0-ACD6C1514CF1 EndTime parent=sh pcmdline=sh -c /opt/microsoft/omsconfig/Scripts/StatusReport.sh D34448EA-363A-42C2-ACE0-ACD6C1514CF1 EndTime file=/etc/opt/omi/conf/omsconfig/last_statusreport program=StatusReport.sh gparent=omiagent ggparent=omiagent gggparent=omiagent) k8s.pod= container=host k8s.pod= container=host

This should fix #387.
2018-07-24 13:14:35 -07:00
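The /etc/iscsi change above uses a prefix match rather than an exact path. A minimal macro sketch of that pattern (the macro and program names here are illustrative, not the shipped rule):

```yaml
- macro: write_etc_iscsi
  condition: (proc.name in (iscsiadm, iscsid) and fd.name startswith /etc/iscsi)
```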
Brett Bertocci
7289315837 Ensure the /lib/modules symlink to /host/lib/modules is set correctly
If /lib/modules exists in the base image, the symlink will get created at
/lib/modules/modules. This removes any existing empty directory but will
fail if we try to remove a non-empty /lib/modules. (Punting on how to
handle non-empty base image dirs for now)
2018-07-16 13:42:41 -07:00
Jorge Salamero Sanz
25efce033b Merge pull request #391 from nestorsalceda/move-examples-to-integrations
Move existing integrations in the examples directory to their own directory
2018-07-16 16:47:51 +02:00
Néstor Salceda
8bc4a5e38f Move puppet module from examples to integrations 2018-07-13 13:09:13 +02:00
Néstor Salceda
c05319927a Move kubernetes manifests from examples to integrations 2018-07-13 13:08:38 +02:00
Néstor Salceda
1e32d637b2 Move logrotate from examples to integrations 2018-07-13 13:02:26 +02:00
Jorge Salamero Sanz
ccf35552dd Merge pull request #389 from nestorsalceda/kubernetes-response-engine
Add Kubernetes response engine
2018-07-12 18:55:07 +02:00
Jorge Salamero Sanz
ec0c109d2a Merge pull request #390 from nestorsalceda/anchore-falco
Add integration between Falco and Anchore
2018-07-12 18:52:54 +02:00
Néstor Salceda
46b0fd833c Add a README 2018-07-12 17:56:59 +02:00
Néstor Salceda
bed5993500 Create Falco rule from Anchore policy result
When we try to run an image with a negative policy result from
Anchore, Falco will alert us.
2018-07-12 17:15:21 +02:00
Néstor Salceda
bed360497e Remove repeated configurations and other stuff
Since this PR has been merged, this is no longer needed:

https://github.com/kubernetes/charts/pull/6600
2018-07-11 17:52:11 +02:00
Néstor Salceda
3afe04629a Move kubernetes_response_engine under integrations
A top-level directory for this integration could lead to confusion.
2018-07-11 17:49:25 +02:00
Néstor Salceda
bebdff3d67 This rule does not add any value to the integration
It was just an example for cryptomining.
2018-07-11 17:18:56 +02:00
Jorge Salamero Sanz
9543514270 Update README.md 2018-07-10 18:29:02 +02:00
Néstor Salceda
46405510e2 Update link target 2018-07-10 18:19:20 +02:00
Néstor Salceda
42285687d4 Add a README for Kubernetes infrastructure 2018-07-10 18:16:57 +02:00
Néstor Salceda
8b82a08148 Add Kubernetes manifests for deploying Nats + Falco + Kubeless 2018-07-10 18:11:38 +02:00
Jorge Salamero Sanz
19d251ef4b Update README.md 2018-07-10 18:08:54 +02:00
Néstor Salceda
66ba09ea3b Add a README for playbooks 2018-07-10 17:38:26 +02:00
Néstor Salceda
4867c47d4b Upload playbooks code 2018-07-10 16:41:56 +02:00
Néstor Salceda
526f32b54b Add a README for falco-nats output 2018-07-10 16:22:58 +02:00
Néstor Salceda
26ca866162 Add nats output for Falco 2018-07-10 16:22:18 +02:00
Néstor Salceda
893554e0f0 Add README for the kubernetes response engine 2018-07-10 13:44:02 +02:00
Mark Stemm
c5523d89a7 Rule updates 2018 04.v2 (#366)
* Add alternatives as a binary dir writer

It can set symlinks below binary dirs.

* Let userhelper read sens.files/write below /etc

Part of usermode package, can be used by oVirt.

* Let package mgmt progs' urlgrabber write pki files

Some package management programs run urlgrabber-ext-{down} to update pki
files.

* Add additional root directory

for Jupyter-notebook

* Let brandbot write to /etc/os-release

Used on centos

* Add an additional veritas conf directory.

Also /etc/opt/VRTS...

* Let appdynamics spawn shells

Java, so we look at parent cmdline.

* Add more ancestors to output

In an attempt to track down the source of some additional shell
spawners, add additional parents.

* Let chef write below bin dirs/rpm database

Rename an existing macro chef_running_yum_dump to python_running_chef
and add additional variants.

Also add chef-client as a package management binary.

* Remove dangling macro.

No longer in use.

* Add additional volume mgmt progs

Add pvscan as a volume management program and add an additional
directory below /etc. Also rename the macro to make it more generic.

* Let openldap write below /etc/openldap

Only program is run-openldap.sh for now.

* Add additional veritas directory

Also /etc/vom.

* Let sed write /etc/sedXXXXX files

These are often seen in install scripts for rpm/deb packages. The test
only checks for /etc/sed, as we don't have anything like a regex match
or glob operator.

* Let dse (DataStax Search) write to /root

Only file is /root/tmp__.

* Add additional mysql programs and directories

Add run-mysqld and /etc/my.cnf.d directory.

* Let redis write its config below /etc.

* Let id program open network connections

Seen using port 111 (sun-rpc, but really user lookups).

* Opt-in rule for protecting tomcat shell spawns

Some users want to consider any shell spawned by tomcat suspect, for
example to protect against the famous Apache Struts attack
CVE-2017-5638, while others do not.

Split the difference by adding a macro
possibly_parent_java_running_tomcat, but disabling it by default (see
the opt-in sketch after this commit entry).

* Add ossec-syscheckd to read_sensitive_file_binaries

* Add "Write below monitored directory"

Take the technique used by "Write below binary dir", and make it more
general, expanding to a list of "monitored directories". This contains
common directories like /boot, /lib, etc.

It has a small workaround to look for home ssh directories without using
the glob operator, which has a pending fix in
https://github.com/draios/sysdig/pull/1153.

* Fix FPs

Move monitored_dir to after evt type checks and allow mkinitramfs to
write below /boot

* Addl boot writers.
2018-07-06 13:17:17 -07:00
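The tomcat protection above is opt-in. A sketch of how a user could enable it in a local rules file, assuming the shipped macro defaults to a never-true condition (the override condition below is illustrative, not the shipped definition):

```yaml
# Shipped default: the macro never matches, so the rule stays quiet.
- macro: possibly_parent_java_running_tomcat
  condition: (never_true)

# Local override to opt in: treat shells whose parent is a java
# process running tomcat as suspect.
- macro: possibly_parent_java_running_tomcat
  condition: (proc.pname=java and proc.pcmdline contains tomcat)
```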
Andrea Kao
81dcee23a9 edit Falco license info so that GitHub recognizes it (#380)
GitHub uses a library called Licensee to identify a project's license
type. It shows this information in the status bar and via the API if it
can unambiguously identify the license.

This commit updates the COPYING file so that it contains only the full
text of the GPL 2.0 license. The info that pertains to OpenSSL has now
been moved to the "License Terms" section in the README.

Collectively, these changes allow Licensee to successfully identify the
license type of Falco as GPL 2.0.

falco-CLA-1.0-signed-off-by: Andrea Kao <eirinikos@gmail.com>
2018-06-18 09:44:07 -07:00
Michael Ducy
81a38fb909 add gcc-6 to Dockerfiles: (#382) 2018-06-12 13:07:15 -07:00
Mattia Pagnozzi
e9e9bd85c3 Add libcurl include directory in CMakeLists (#374)
It's used in sinsp.
2018-06-07 17:59:02 -07:00
Mark Stemm
70f768d9ea Enable all rules (#379)
* Proactively enable rules instead of only disabling

Previously, rules were enabled by default. Some performance improvements
in https://github.com/draios/sysdig/pull/1126 broke this, requiring that
each rule is explicitly enabled or disabled for a given ruleset.

So if enabled is true, explicitly enable the rule for the default ruleset.

* Get rid of shadowed res variable.

It was used both for the inspector loop and the falco result.
2018-06-07 17:16:30 -07:00
Gianluca Borello
c3b0f0d96d Fix Travis CI 2018-05-09 14:15:10 -07:00
Gianluca Borello
2a7851c77b eBPF support for Falco 2018-05-09 14:15:10 -07:00
Mark Stemm
512a36dfe1 Conditional rules (#364)
* Add ability to skip rules for unknown filters

Add the ability to skip a rule if its condition refers to a filtercheck
that doesn't exist. This allows defining a rules file that contains new
conditions while still retaining limited backward compatibility with
older falco versions.

When compiling a filter, return a list of filtercheck names that are
present in the ast (which also includes filterchecks from any
macros). This set of filtercheck names is matched against the set of
filterchecks known to sinsp, expressed as lua patterns, and in the
global table defined_filters. If no match is found, the rule loader
throws an error.

The pattern changes slightly depending on whether the filter has
arguments or not. Two filters (proc.apid/proc.aname) can work with or
without arguments, so both styles of patterns are used.

If the rule has an attribute "skip-if-unknown-filter", the rule will be
skipped instead (see the rule sketch after this commit entry).

* Unit tests for skipping unknown filter

New unit test for skipping unknown filter. Test cases:

 - A rule that refers to an unknown filter results in an error.
 - A rule that refers to an unknown filter, but has
   "skip-if-unknown-filter: true", can be read, but doesn't match any events.
 - A rule that refers to an unknown filter, but has
   "skip-if-unknown-filter: false", returns an error.

Also test the case of a filtercheck like evt.arg.xxx working properly
with the embedded patterns as well as proc.aname/apid which work both ways.
2018-05-03 14:24:32 -07:00
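A minimal rules-file sketch of the attribute described above (the filtercheck name fd.some_new_field is hypothetical):

```yaml
- rule: Uses A New Filtercheck
  desc: Demo rule; fd.some_new_field does not exist in older falco versions.
  condition: evt.type=open and fd.some_new_field=x
  output: open with new field (command=%proc.cmdline)
  priority: WARNING
  # Older falco versions that don't know fd.some_new_field skip this
  # rule instead of refusing to load the whole file.
  skip-if-unknown-filter: true
```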
David Archer
73e1ae616a Don't make driver compilation fail when kernel is compiled with CONFIG_ORC_UNWINDER or CONFIG_STACK_VALIDATION. (#362)
sysdig-CLA-1.0-signed-off-by: David Archer <darcher@gmail.com>
2018-04-30 14:40:28 -07:00
David Archer
b496116fe3 Don't make driver compilation fail when kernel is compiled with CONFIG_ORC_UNWINDER or CONFIG_STACK_VALIDATION. (#362)
sysdig-CLA-1.0-signed-off-by: David Archer <darcher@gmail.com>
2018-04-30 14:30:39 -07:00
80 changed files with 2787 additions and 65 deletions

View File

@@ -10,7 +10,7 @@ before_install:
- sudo apt-get update
install:
- sudo apt-get --force-yes install g++-4.8
- sudo apt-get install rpm linux-headers-$(uname -r)
- sudo apt-get install rpm linux-headers-$(uname -r) libelf-dev
- git clone https://github.com/draios/sysdig.git ../sysdig
- sudo apt-get install -y python-pip libvirt-dev jq dkms
- cd ..

View File

@@ -2,6 +2,45 @@
This file documents all notable changes to Falco. The release numbering uses [semantic versioning](http://semver.org).
## v0.11.0
Released 2018-07-24
## Major Changes
* **EBPF Support** (Beta): Falco can now read events via an ebpf program loaded into the kernel instead of the `falco-probe` kernel module. Full docs [here](https://github.com/draios/sysdig/wiki/eBPF-(beta)). [[#365](https://github.com/draios/falco/pull/365)]
## Minor Changes
* Rules may now have a `skip-if-unknown-filter` property. If set to true, a rule will be skipped if its condition/output property refers to a filtercheck (e.g. `fd.some-new-attribute`) that is not present in the current falco version. [[#364](https://github.com/draios/falco/pull/364)] [[#345](https://github.com/draios/falco/issues/345)]
* Small changes to Falco `COPYING` file so github automatically recognizes license [[#380](https://github.com/draios/falco/pull/380)]
* New example integration showing how to connect Falco with Anchore to dynamically create falco rules based on negative scan results [[#390](https://github.com/draios/falco/pull/390)]
* New example integration showing how to connect Falco, [nats](https://nats.io/), and K8s to run flexible "playbooks" based on Falco events [[#389](https://github.com/draios/falco/pull/389)]
## Bug Fixes
* Ensure all rules are enabled by default [[#379](https://github.com/draios/falco/pull/379)]
* Fix libcurl compilation problems [[#374](https://github.com/draios/falco/pull/374)]
* Add gcc-6 to docker container, which improves compatibility when building kernel module [[#382](https://github.com/draios/falco/pull/382)] [[#371](https://github.com/draios/falco/issues/371)]
* Ensure the /lib/modules symlink to /host/lib/modules is set correctly [[#392](https://github.com/draios/falco/issues/392)]
## Rule Changes
* Add additional binary writing programs [[#366](https://github.com/draios/falco/pull/366)]
* Add additional package management programs [[#388](https://github.com/draios/falco/pull/388)] [[#366](https://github.com/draios/falco/pull/366)]
* Expand write_below_etc handling for additional programs [[#388](https://github.com/draios/falco/pull/388)] [[#366](https://github.com/draios/falco/pull/366)]
* Expand set of programs allowed to write to `/etc/pki` [[#388](https://github.com/draios/falco/pull/388)]
* Expand set of root written directories/files [[#388](https://github.com/draios/falco/pull/388)] [[#366](https://github.com/draios/falco/pull/366)]
* Let pam-config read sensitive files [[#388](https://github.com/draios/falco/pull/388)]
* Add additional trusted containers: openshift, datadog, docker ucp agent, gliderlabs logspout [[#388](https://github.com/draios/falco/pull/388)]
* Let coreos update-ssh-keys write to /home/core/.ssh [[#388](https://github.com/draios/falco/pull/388)]
* Expand coverage for MS OMS [[#388](https://github.com/draios/falco/issues/388)] [[#387](https://github.com/draios/falco/issues/387)]
* Expand the set of shell spawning programs [[#366](https://github.com/draios/falco/pull/366)]
* Add additional mysql programs/directories [[#366](https://github.com/draios/falco/pull/366)]
* Let program `id` open network connections [[#366](https://github.com/draios/falco/pull/366)]
* Opt-in rule for protecting tomcat shell spawns [[#366](https://github.com/draios/falco/pull/366)]
* New rule `Write below monitored directory` [[#366](https://github.com/draios/falco/pull/366)]
## v0.10.0
Released 2018-04-24

COPYING (12 changes)
View File

@@ -277,18 +277,6 @@ YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.
* In addition, as a special exception, the copyright holders give
* permission to link the code of portions of this program with the
* OpenSSL library under certain conditions as described in each
* individual source file, and distribute linked combinations
* including the two.
* You must obey the GNU General Public License in all respects
* for all of the code used other than OpenSSL. If you modify
* file(s) with this exception, you may extend this exception to your
* version of the file(s), but you are not obligated to do so. If you
* do not wish to do so, delete this exception statement from your
* version.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs

View File

@@ -2,7 +2,7 @@
#### Latest release
**v0.10.0**
**v0.11.0**
Read the [change log](https://github.com/draios/falco/blob/dev/CHANGELOG.md)
Dev Branch: [![Build Status](https://travis-ci.org/draios/falco.svg?branch=dev)](https://travis-ci.org/draios/falco)<br />
@@ -41,6 +41,10 @@ License Terms
---
Falco is licensed to you under the [GPL 2.0](./COPYING) open source license.
In addition, as a special exception, the copyright holders give permission to link the code of portions of this program with the OpenSSL library under certain conditions as described in each individual source file, and distribute linked combinations including the two.
You must obey the GNU General Public License in all respects for all of the code used other than OpenSSL. If you modify file(s) with this exception, you may extend this exception to your version of the file(s), but you are not obligated to do so. If you do not wish to do so, delete this exception statement from your version.
Contributor License Agreements
---
### Background

View File

@@ -17,18 +17,30 @@ ADD http://download.draios.com/apt-draios-priority /etc/apt/preferences.d/
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
bash-completion \
curl \
jq \
gnupg2 \
bc \
clang-7 \
ca-certificates \
curl \
gnupg2 \
gcc \
gcc-5 \
gdb && rm -rf /var/lib/apt/lists/*
gcc-6 \
gdb \
jq \
libc6-dev \
libelf-dev \
llvm-7 \
&& rm -rf /var/lib/apt/lists/*
# Since our base Debian image ships with GCC 7 which breaks older kernels, revert the
# default to gcc-5.
RUN rm -rf /usr/bin/gcc && ln -s /usr/bin/gcc-5 /usr/bin/gcc
RUN rm -rf /usr/bin/clang \
&& rm -rf /usr/bin/llc \
&& ln -s /usr/bin/clang-7 /usr/bin/clang \
&& ln -s /usr/bin/llc-7 /usr/bin/llc
RUN curl -s https://s3.amazonaws.com/download.draios.com/DRAIOS-GPG-KEY.public | apt-key add - \
&& curl -s -o /etc/apt/sources.list.d/draios.list http://download.draios.com/$FALCO_REPOSITORY/deb/draios.list \
&& apt-get update \
@@ -36,7 +48,11 @@ RUN curl -s https://s3.amazonaws.com/download.draios.com/DRAIOS-GPG-KEY.public |
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
RUN ln -s $SYSDIG_HOST_ROOT/lib/modules /lib/modules
# Some base images have an empty /lib/modules by default
# If it's not empty, docker build will fail instead of
# silently overwriting the existing directory
RUN rm -df /lib/modules \
&& ln -s $SYSDIG_HOST_ROOT/lib/modules /lib/modules
COPY ./docker-entrypoint.sh /

View File

@@ -17,19 +17,35 @@ ADD http://download.draios.com/apt-draios-priority /etc/apt/preferences.d/
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
bash-completion \
curl \
jq \
gnupg2 \
bc \
clang-7 \
ca-certificates \
curl \
dkms \
gnupg2 \
gcc \
gcc-5 \
dkms && rm -rf /var/lib/apt/lists/*
gcc-6 \
jq \
libc6-dev \
libelf-dev \
llvm-7 \
&& rm -rf /var/lib/apt/lists/*
# Since our base Debian image ships with GCC 7 which breaks older kernels, revert the
# default to gcc-5.
RUN rm -rf /usr/bin/gcc && ln -s /usr/bin/gcc-5 /usr/bin/gcc
RUN ln -s $SYSDIG_HOST_ROOT/lib/modules /lib/modules
RUN rm -rf /usr/bin/clang \
&& rm -rf /usr/bin/llc \
&& ln -s /usr/bin/clang-7 /usr/bin/clang \
&& ln -s /usr/bin/llc-7 /usr/bin/llc
# Some base images have an empty /lib/modules by default
# If it's not empty, docker build will fail instead of
# silently overwriting the existing directory
RUN rm -df /lib/modules \
&& ln -s $SYSDIG_HOST_ROOT/lib/modules /lib/modules
ADD falco-${FALCO_VERSION}-x86_64.deb /
RUN dpkg -i /falco-${FALCO_VERSION}-x86_64.deb

View File

@@ -17,17 +17,29 @@ ADD http://download.draios.com/apt-draios-priority /etc/apt/preferences.d/
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
bash-completion \
curl \
jq \
bc \
clang-7 \
ca-certificates \
curl \
gnupg2 \
gcc \
gcc-5 && rm -rf /var/lib/apt/lists/*
gcc-5 \
gcc-6 \
jq \
libc6-dev \
libelf-dev \
llvm-7 \
&& rm -rf /var/lib/apt/lists/*
# Since our base Debian image ships with GCC 7 which breaks older kernels, revert the
# default to gcc-5.
RUN rm -rf /usr/bin/gcc && ln -s /usr/bin/gcc-5 /usr/bin/gcc
RUN rm -rf /usr/bin/clang \
&& rm -rf /usr/bin/llc \
&& ln -s /usr/bin/clang-7 /usr/bin/clang \
&& ln -s /usr/bin/llc-7 /usr/bin/llc
RUN curl -s https://s3.amazonaws.com/download.draios.com/DRAIOS-GPG-KEY.public | apt-key add - \
&& curl -s -o /etc/apt/sources.list.d/draios.list http://download.draios.com/$FALCO_REPOSITORY/deb/draios.list \
&& apt-get update \
@@ -35,7 +47,11 @@ RUN curl -s https://s3.amazonaws.com/download.draios.com/DRAIOS-GPG-KEY.public |
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
RUN ln -s $SYSDIG_HOST_ROOT/lib/modules /lib/modules
# Some base images have an empty /lib/modules by default
# If it's not empty, docker build will fail instead of
# silently overwriting the existing directory
RUN rm -df /lib/modules \
&& ln -s $SYSDIG_HOST_ROOT/lib/modules /lib/modules
COPY ./docker-entrypoint.sh /

View File

@@ -0,0 +1,13 @@
FROM python:3-stretch
RUN pip install pipenv
WORKDIR /app
ADD Pipfile /app/Pipfile
ADD Pipfile.lock /app/Pipfile.lock
RUN pipenv install --system --deploy
ADD . /app
CMD ["python", "main.py"]

View File

@@ -0,0 +1,16 @@
[[source]]
url = "https://pypi.python.org/simple"
verify_ssl = true
name = "pypi"
[dev-packages]
doublex-expects = "==0.7.0rc2"
doublex = "*"
mamba = "*"
expects = "*"
[packages]
requests = "*"
[requires]
python_version = "3.6"

integrations/anchore-falco/Pipfile.lock (generated, 156 additions)
View File

@@ -0,0 +1,156 @@
{
"_meta": {
"hash": {
"sha256": "f2737a14e8f562cf355e13ae09f1eed0f80415effd2aa01b86125e94523da345"
},
"pipfile-spec": 6,
"requires": {
"python_version": "3.6"
},
"sources": [
{
"name": "pypi",
"url": "https://pypi.python.org/simple",
"verify_ssl": true
}
]
},
"default": {
"certifi": {
"hashes": [
"sha256:13e698f54293db9f89122b0581843a782ad0934a4fe0172d2a980ba77fc61bb7",
"sha256:9fa520c1bacfb634fa7af20a76bcbd3d5fb390481724c597da32c719a7dca4b0"
],
"version": "==2018.4.16"
},
"chardet": {
"hashes": [
"sha256:84ab92ed1c4d4f16916e05906b6b75a6c0fb5db821cc65e70cbd64a3e2a5eaae",
"sha256:fc323ffcaeaed0e0a02bf4d117757b98aed530d9ed4531e3e15460124c106691"
],
"version": "==3.0.4"
},
"idna": {
"hashes": [
"sha256:156a6814fb5ac1fc6850fb002e0852d56c0c8d2531923a51032d1b70760e186e",
"sha256:684a38a6f903c1d71d6d5fac066b58d7768af4de2b832e426ec79c30daa94a16"
],
"version": "==2.7"
},
"requests": {
"hashes": [
"sha256:63b52e3c866428a224f97cab011de738c36aec0185aa91cfacd418b5d58911d1",
"sha256:ec22d826a36ed72a7358ff3fe56cbd4ba69dd7a6718ffd450ff0e9df7a47ce6a"
],
"index": "pypi",
"version": "==2.19.1"
},
"urllib3": {
"hashes": [
"sha256:a68ac5e15e76e7e5dd2b8f94007233e01effe3e50e8daddf69acfd81cb686baf",
"sha256:b5725a0bd4ba422ab0e66e89e030c806576753ea3ee08554382c14e685d117b5"
],
"version": "==1.23"
}
},
"develop": {
"args": {
"hashes": [
"sha256:a785b8d837625e9b61c39108532d95b85274acd679693b71ebb5156848fcf814"
],
"version": "==0.1.0"
},
"clint": {
"hashes": [
"sha256:05224c32b1075563d0b16d0015faaf9da43aa214e4a2140e51f08789e7a4c5aa"
],
"version": "==0.5.1"
},
"coverage": {
"hashes": [
"sha256:03481e81d558d30d230bc12999e3edffe392d244349a90f4ef9b88425fac74ba",
"sha256:0b136648de27201056c1869a6c0d4e23f464750fd9a9ba9750b8336a244429ed",
"sha256:104ab3934abaf5be871a583541e8829d6c19ce7bde2923b2751e0d3ca44db60a",
"sha256:15b111b6a0f46ee1a485414a52a7ad1d703bdf984e9ed3c288a4414d3871dcbd",
"sha256:198626739a79b09fa0a2f06e083ffd12eb55449b5f8bfdbeed1df4910b2ca640",
"sha256:1c383d2ef13ade2acc636556fd544dba6e14fa30755f26812f54300e401f98f2",
"sha256:28b2191e7283f4f3568962e373b47ef7f0392993bb6660d079c62bd50fe9d162",
"sha256:2eb564bbf7816a9d68dd3369a510be3327f1c618d2357fa6b1216994c2e3d508",
"sha256:337ded681dd2ef9ca04ef5d93cfc87e52e09db2594c296b4a0a3662cb1b41249",
"sha256:3a2184c6d797a125dca8367878d3b9a178b6fdd05fdc2d35d758c3006a1cd694",
"sha256:3c79a6f7b95751cdebcd9037e4d06f8d5a9b60e4ed0cd231342aa8ad7124882a",
"sha256:3d72c20bd105022d29b14a7d628462ebdc61de2f303322c0212a054352f3b287",
"sha256:3eb42bf89a6be7deb64116dd1cc4b08171734d721e7a7e57ad64cc4ef29ed2f1",
"sha256:4635a184d0bbe537aa185a34193898eee409332a8ccb27eea36f262566585000",
"sha256:56e448f051a201c5ebbaa86a5efd0ca90d327204d8b059ab25ad0f35fbfd79f1",
"sha256:5a13ea7911ff5e1796b6d5e4fbbf6952381a611209b736d48e675c2756f3f74e",
"sha256:69bf008a06b76619d3c3f3b1983f5145c75a305a0fea513aca094cae5c40a8f5",
"sha256:6bc583dc18d5979dc0f6cec26a8603129de0304d5ae1f17e57a12834e7235062",
"sha256:701cd6093d63e6b8ad7009d8a92425428bc4d6e7ab8d75efbb665c806c1d79ba",
"sha256:7608a3dd5d73cb06c531b8925e0ef8d3de31fed2544a7de6c63960a1e73ea4bc",
"sha256:76ecd006d1d8f739430ec50cc872889af1f9c1b6b8f48e29941814b09b0fd3cc",
"sha256:7aa36d2b844a3e4a4b356708d79fd2c260281a7390d678a10b91ca595ddc9e99",
"sha256:7d3f553904b0c5c016d1dad058a7554c7ac4c91a789fca496e7d8347ad040653",
"sha256:7e1fe19bd6dce69d9fd159d8e4a80a8f52101380d5d3a4d374b6d3eae0e5de9c",
"sha256:8c3cb8c35ec4d9506979b4cf90ee9918bc2e49f84189d9bf5c36c0c1119c6558",
"sha256:9d6dd10d49e01571bf6e147d3b505141ffc093a06756c60b053a859cb2128b1f",
"sha256:9e112fcbe0148a6fa4f0a02e8d58e94470fc6cb82a5481618fea901699bf34c4",
"sha256:ac4fef68da01116a5c117eba4dd46f2e06847a497de5ed1d64bb99a5fda1ef91",
"sha256:b8815995e050764c8610dbc82641807d196927c3dbed207f0a079833ffcf588d",
"sha256:be6cfcd8053d13f5f5eeb284aa8a814220c3da1b0078fa859011c7fffd86dab9",
"sha256:c1bb572fab8208c400adaf06a8133ac0712179a334c09224fb11393e920abcdd",
"sha256:de4418dadaa1c01d497e539210cb6baa015965526ff5afc078c57ca69160108d",
"sha256:e05cb4d9aad6233d67e0541caa7e511fa4047ed7750ec2510d466e806e0255d6",
"sha256:e4d96c07229f58cb686120f168276e434660e4358cc9cf3b0464210b04913e77",
"sha256:f3f501f345f24383c0000395b26b726e46758b71393267aeae0bd36f8b3ade80",
"sha256:f8a923a85cb099422ad5a2e345fe877bbc89a8a8b23235824a93488150e45f6e"
],
"version": "==4.5.1"
},
"doublex": {
"hashes": [
"sha256:062af49d9e4148bc47b7512d3fdc8e145dea4671d074ffd54b2464a19d3757ab"
],
"index": "pypi",
"version": "==1.8.4"
},
"doublex-expects": {
"hashes": [
"sha256:5421bd92319c77ccc5a81d595d06e9c9f7f670de342b33e8007a81e70f9fade8"
],
"index": "pypi",
"version": "==0.7.0rc2"
},
"expects": {
"hashes": [
"sha256:37538d7b0fa9c0d53e37d07b0e8c07d89754d3deec1f0f8ed1be27f4f10363dd"
],
"index": "pypi",
"version": "==0.8.0"
},
"mamba": {
"hashes": [
"sha256:63e70a8666039cf143a255000e23f29be4ea4b5b8169f2b053f94eb73a2ea9e2"
],
"index": "pypi",
"version": "==0.9.3"
},
"pyhamcrest": {
"hashes": [
"sha256:6b672c02fdf7470df9674ab82263841ce8333fb143f32f021f6cb26f0e512420",
"sha256:7a4bdade0ed98c699d728191a058a60a44d2f9c213c51e2dd1e6fb42f2c6128a",
"sha256:8ffaa0a53da57e89de14ced7185ac746227a8894dbd5a3c718bf05ddbd1d56cd",
"sha256:bac0bea7358666ce52e3c6c85139632ed89f115e9af52d44b3c36e0bf8cf16a9",
"sha256:f30e9a310bcc1808de817a92e95169ffd16b60cbc5a016a49c8d0e8ababfae79"
],
"version": "==1.9.0"
},
"six": {
"hashes": [
"sha256:70e8a77beed4562e7f14fe23a786b54f6296e34344c23bc42f07b15018ff98e9",
"sha256:832dc0e10feb1aa2c68dcc57dbb658f1c7e65b9b61af69048abc87a2db00a0eb"
],
"version": "==1.11.0"
}
}
}

View File

@@ -0,0 +1,89 @@
# Create Falco rule from Anchore policy result
This integration creates a rule for Sysdig Falco based on Anchore policy results,
so that when we try to run an image whose final action result in Anchore
is ```stop```, Falco will alert us.
## Getting started
### Prerequisites
To run this integration you will need:
* Python 3.6
* pipenv
* An [anchore-engine](https://github.com/anchore/anchore-engine) running
### Configuration
This integration uses the [same environment variables as anchore-cli](https://github.com/anchore/anchore-cli#configuring-the-anchore-cli):
* ANCHORE_CLI_USER: The user used to connect to anchore-engine. The default is ```admin```
* ANCHORE_CLI_PASS: The password used to connect to anchore-engine.
* ANCHORE_CLI_URL: The URL where anchore-engine listens. Make sure it does not end with a slash. The default is ```http://localhost:8228/v1```
* ANCHORE_CLI_SSL_VERIFY: Flag that controls whether the HTTP client verifies SSL. The default is ```true```
### Running
This is a Python program which generates a Falco rule based on anchore-engine
information:
```
pipenv run python main.py
```
And this will output something like:
```yaml
- macro: anchore_stop_policy_evaluation_containers
condition: container.image.id in ("8626492fecd368469e92258dfcafe055f636cb9cbc321a5865a98a0a6c99b8dd", "e86d9bb526efa0b0401189d8df6e3856d0320a3d20045c87b4e49c8a8bdb22c1")
- rule: Run Anchore Containers with Stop Policy Evaluation
desc: Detect containers which do not receive a positive Policy Evaluation from Anchore Engine.
condition: evt.type=execve and proc.vpid=1 and container and anchore_stop_policy_evaluation_containers
output: A stop policy evaluation container from anchore has started (%container.info image=%container.image)
priority: INFO
tags: [container]
```
You can save that output to ```/etc/falco/rules.d/anchore-integration-rules.yaml```
and Falco will start checking this rule.
Since the information in anchore-engine can change, it's a good idea to run this
integration **periodically** and keep the rule synchronized with the anchore-engine
policy evaluation result.
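One way to schedule that in Kubernetes is a CronJob. A sketch, assuming shipping the generated rule into Falco's ```/etc/falco/rules.d/``` is wired up separately (names, schedule, and the credentials secret are illustrative):

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: anchore-falco-sync
spec:
  schedule: "*/30 * * * *"   # regenerate the rule every 30 minutes
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: anchore-falco
            image: sysdig/anchore-falco
            env:
            - name: ANCHORE_CLI_URL
              value: http://anchore-engine:8228/v1
            - name: ANCHORE_CLI_PASS
              valueFrom:
                secretKeyRef:
                  name: anchore-credentials   # hypothetical secret
                  key: password
```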
## Tests
Since these are contract tests against anchore-engine, they need a working
anchore-engine and its environment variables.
```
pipenv install -d
pipenv run mamba --format=documentation
```
## Docker support
### Build the image
```
docker build -t sysdig/anchore-falco .
```
### Running the image
An image is available on Docker Hub as ```sysdig/anchore-falco```,
so you can run it directly with Docker:
```
docker run --rm -e ANCHORE_CLI_USER=<user-for-custom-anchore-engine> \
-e ANCHORE_CLI_PASS=<password-for-user-for-custom-anchore-engine> \
-e ANCHORE_CLI_URL=http://<custom-anchore-engine-host>:8228/v1 \
sysdig/anchore-falco
```
And this will output the Falco rule based on *custom-anchore-engine-host*.

View File

@@ -0,0 +1,25 @@
import string
FALCO_RULE_TEMPLATE = string.Template('''
- macro: anchore_stop_policy_evaluation_containers
condition: container.image.id in ($images)
- rule: Run Anchore Containers with Stop Policy Evaluation
desc: Detect containers which do not receive a positive Policy Evaluation from Anchore Engine.
condition: evt.type=execve and proc.vpid=1 and container and anchore_stop_policy_evaluation_containers
output: A stop policy evaluation container from anchore has started (%container.info image=%container.image)
priority: INFO
tags: [container]
''')
class CreateFalcoRuleFromAnchoreStopPolicyResults:
def __init__(self, anchore_client):
self._anchore_client = anchore_client
def run(self):
images = self._anchore_client.get_images_with_policy_result('stop')
images = ['"{}"'.format(image) for image in images]
return FALCO_RULE_TEMPLATE.substitute(images=', '.join(images))

View File

@@ -0,0 +1,39 @@
import requests
class AnchoreClient:
def __init__(self, user, password, url, ssl_verify):
self._user = user
self._password = password
self._url = url
self._ssl_verify = ssl_verify
def get_images_with_policy_result(self, policy_result):
results = []
for image in self._get_all_images():
final_action = self._evaluate_image(image)
if final_action == 'stop':
results.append(image['image_id'])
return results
def _get_all_images(self):
response = self._do_get_request(self._url + '/images')
return [
{
'image_id': image['image_detail'][0]['imageId'],
'image_digest': image['image_detail'][0]['imageDigest'],
'full_tag': image['image_detail'][0]['fulltag']
} for image in response.json()]
def _do_get_request(self, url):
return requests.get(url,
auth=(self._user, self._password),
verify=self._ssl_verify,
headers={'Content-Type': 'application/json'})
def _evaluate_image(self, image):
response = self._do_get_request(self._url + '/images/{}/check?tag={}'.format(image['image_digest'], image['full_tag']))
if response.status_code == 200:
return response.json()[0][image['image_digest']][image['full_tag']][0]['detail']['result']['final_action']

View File

@@ -0,0 +1,21 @@
import os
import actions, infrastructure
def main():
anchore_client = infrastructure.AnchoreClient(
os.environ.get('ANCHORE_CLI_USER', 'admin'),
os.environ['ANCHORE_CLI_PASS'],
os.environ.get('ANCHORE_CLI_URL', 'http://localhost:8228/v1'),
os.environ.get('ANCHORE_CLI_SSL_VERIFY', True)
)
action = actions.CreateFalcoRuleFromAnchoreStopPolicyResults(anchore_client)
result = action.run()
print(result)
if __name__ == '__main__':
main()

View File

@@ -0,0 +1,21 @@
from mamba import description, it, before
from expects import expect, contain
from doublex import Stub, when
import actions
import infrastructure
with description(actions.CreateFalcoRuleFromAnchoreStopPolicyResults) as self:
with before.each:
self.anchore_client = Stub(infrastructure.AnchoreClient)
self.action = actions.CreateFalcoRuleFromAnchoreStopPolicyResults(self.anchore_client)
with it('queries Anchore Server for images with Stop as policy results'):
image_id = 'any image id'
when(self.anchore_client).get_images_with_policy_result('stop').returns([image_id])
result = self.action.run()
expect(result).to(contain(image_id))

View File

@@ -0,0 +1,19 @@
from mamba import description, it
from expects import expect, have_length, be_above
import os
import infrastructure
with description(infrastructure.AnchoreClient) as self:
with it('retrieves images with stop policy results'):
user = os.environ['ANCHORE_CLI_USER']
password = os.environ['ANCHORE_CLI_PASS']
url = os.environ['ANCHORE_CLI_URL']
client = infrastructure.AnchoreClient(user, password, url, True)
result = client.get_images_with_policy_result('stop')
expect(result).to(have_length(be_above(1)))

View File

@@ -0,0 +1,18 @@
# Kubernetes Response Engine for Sysdig Falco
A response engine for Falco that processes security events by executing playbooks to respond to security threats.
## Architecture
* *[Falco](https://sysdig.com/opensource/falco/)* monitors containers and processes to alert on unexpected behavior. This is defined through the runtime policy built from multiple rules that define what the system should and shouldn't do.
* *falco-nats* forwards the alert to a message broker service on a topic composed as `falco.<severity>.<rule_name_slugified>`.
* *[NATS](https://nats.io/)*, our message broker, delivers the alert to any subscribers to the different topics.
* *[Kubeless](https://kubeless.io/)*, a FaaS framework that runs in Kubernetes, receives the security events and executes the configured playbooks (a trigger sketch follows the glossary).
## Glossary
* *Security event*: Alert sent by Falco when a configured rule matches the behaviour on that host.
* *Playbook*: Each piece of code executed when an alert is received to respond to that threat in an automated way; some examples include:
- sending an alert to Slack
- stopping the pod by killing the container
- tainting the specific node where the pod is running
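For instance, binding a playbook function to one of the alert topics could look like the following NATSTrigger sketch (assuming the trigger spec follows kubeless's usual functionSelector/topic convention; the function and topic names are illustrative):

```yaml
apiVersion: kubeless.io/v1beta1
kind: NATSTrigger
metadata:
  name: falco-delete-pod
  namespace: kubeless
spec:
  functionSelector:
    matchLabels:
      function: delete-pod        # hypothetical playbook function
  topic: falco.error.terminal_shell_in_container
```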

View File

@@ -0,0 +1,9 @@
deploy:
kubectl apply -f nats/
kubectl apply -f kubeless/
kubectl apply -f network-policy.yaml
clean:
kubectl delete -f kubeless/
kubectl delete -f nats/
kubectl delete -f network-policy.yaml

View File

@@ -0,0 +1,20 @@
# Kubernetes Manifests for Kubernetes Response Engine
This directory contains the manifests for creating the required infrastructure in the
Kubernetes cluster.
## Deploy
To deploy NATS, Falco + falco-nats output, and Kubeless, just run the default Makefile target:
```
make
```
## Clean
You can clean your cluster with:
```
make clean
```

View File

@@ -0,0 +1,5 @@
---
apiVersion: v1
kind: Namespace
metadata:
name: kubeless

View File

@@ -0,0 +1,369 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: controller-acct
namespace: kubeless
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: kubeless-controller-deployer
rules:
- apiGroups:
- ""
resources:
- services
- configmaps
verbs:
- create
- get
- delete
- list
- update
- patch
- apiGroups:
- apps
- extensions
resources:
- deployments
verbs:
- create
- get
- delete
- list
- update
- patch
- apiGroups:
- ""
resources:
- pods
verbs:
- list
- delete
- apiGroups:
- ""
resourceNames:
- kubeless-registry-credentials
resources:
- secrets
verbs:
- get
- apiGroups:
- kubeless.io
resources:
- functions
- httptriggers
- cronjobtriggers
verbs:
- get
- list
- watch
- update
- delete
- apiGroups:
- batch
resources:
- cronjobs
- jobs
verbs:
- create
- get
- delete
- deletecollection
- list
- update
- patch
- apiGroups:
- autoscaling
resources:
- horizontalpodautoscalers
verbs:
- create
- get
- delete
- list
- update
- patch
- apiGroups:
- apiextensions.k8s.io
resources:
- customresourcedefinitions
verbs:
- get
- list
- apiGroups:
- monitoring.coreos.com
resources:
- alertmanagers
- prometheuses
- servicemonitors
verbs:
- '*'
- apiGroups:
- extensions
resources:
- ingresses
verbs:
- create
- get
- list
- update
- delete
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: kubeless-controller-deployer
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kubeless-controller-deployer
subjects:
- kind: ServiceAccount
name: controller-acct
namespace: kubeless
---
apiVersion: apiextensions.k8s.io/v1beta1
description: Kubernetes Native Serverless Framework
kind: CustomResourceDefinition
metadata:
name: functions.kubeless.io
spec:
group: kubeless.io
names:
kind: Function
plural: functions
singular: function
scope: Namespaced
version: v1beta1
---
apiVersion: apiextensions.k8s.io/v1beta1
description: CRD object for HTTP trigger type
kind: CustomResourceDefinition
metadata:
name: httptriggers.kubeless.io
spec:
group: kubeless.io
names:
kind: HTTPTrigger
plural: httptriggers
singular: httptrigger
scope: Namespaced
version: v1beta1
---
apiVersion: apiextensions.k8s.io/v1beta1
description: CRD object for HTTP trigger type
kind: CustomResourceDefinition
metadata:
name: cronjobtriggers.kubeless.io
spec:
group: kubeless.io
names:
kind: CronJobTrigger
plural: cronjobtriggers
singular: cronjobtrigger
scope: Namespaced
version: v1beta1
---
apiVersion: v1
data:
builder-image: kubeless/function-image-builder:v1.0.0-alpha.6
builder-image-secret: ""
deployment: '{}'
enable-build-step: "false"
function-registry-tls-verify: "true"
ingress-enabled: "false"
provision-image: kubeless/unzip@sha256:f162c062973cca05459834de6ed14c039d45df8cdb76097f50b028a1621b3697
provision-image-secret: ""
runtime-images: |-
[
{
"ID": "python",
"compiled": false,
"versions": [
{
"name": "python27",
"version": "2.7",
"runtimeImage": "kubeless/python@sha256:07cfb0f3d8b6db045dc317d35d15634d7be5e436944c276bf37b1c630b03add8",
"initImage": "python:2.7"
},
{
"name": "python34",
"version": "3.4",
"runtimeImage": "kubeless/python@sha256:f19640c547a3f91dbbfb18c15b5e624029b4065c1baf2892144e07c36f0a7c8f",
"initImage": "python:3.4"
},
{
"name": "python36",
"version": "3.6",
"runtimeImage": "kubeless/python@sha256:0c9f8f727d42625a4e25230cfe612df7488b65f283e7972f84108d87e7443d72",
"initImage": "python:3.6"
}
],
"depName": "requirements.txt",
"fileNameSuffix": ".py"
},
{
"ID": "nodejs",
"compiled": false,
"versions": [
{
"name": "node6",
"version": "6",
"runtimeImage": "kubeless/nodejs@sha256:013facddb0f66c150844192584d823d7dfb2b5b8d79fd2ae98439c86685da657",
"initImage": "node:6.10"
},
{
"name": "node8",
"version": "8",
"runtimeImage": "kubeless/nodejs@sha256:b155d7e20e333044b60009c12a25a97c84eed610f2a3d9d314b47449dbdae0e5",
"initImage": "node:8"
}
],
"depName": "package.json",
"fileNameSuffix": ".js"
},
{
"ID": "nodejs_distroless",
"compiled": false,
"versions": [
{
"name": "node8",
"version": "8",
"runtimeImage": "henrike42/kubeless/runtimes/nodejs/distroless:0.0.2",
"initImage": "node:8"
}
],
"depName": "package.json",
"fileNameSuffix": ".js"
},
{
"ID": "ruby",
"compiled": false,
"versions": [
{
"name": "ruby24",
"version": "2.4",
"runtimeImage": "kubeless/ruby@sha256:01665f1a32fe4fab4195af048627857aa7b100e392ae7f3e25a44bd296d6f105",
"initImage": "bitnami/ruby:2.4"
}
],
"depName": "Gemfile",
"fileNameSuffix": ".rb"
},
{
"ID": "php",
"compiled": false,
"versions": [
{
"name": "php72",
"version": "7.2",
"runtimeImage": "kubeless/php@sha256:9b86066b2640bedcd88acb27f43dfaa2b338f0d74d9d91131ea781402f7ec8ec",
"initImage": "composer:1.6"
}
],
"depName": "composer.json",
"fileNameSuffix": ".php"
},
{
"ID": "go",
"compiled": true,
"versions": [
{
"name": "go1.10",
"version": "1.10",
"runtimeImage": "kubeless/go@sha256:e2fd49f09b6ff8c9bac6f1592b3119ea74237c47e2955a003983e08524cb3ae5",
"initImage": "kubeless/go-init@sha256:983b3f06452321a2299588966817e724d1a9c24be76cf1b12c14843efcdff502"
}
],
"depName": "Gopkg.toml",
"fileNameSuffix": ".go"
},
{
"ID": "dotnetcore",
"compiled": true,
"versions": [
{
"name": "dotnetcore2.0",
"version": "2.0",
"runtimeImage": "allantargino/kubeless-dotnetcore@sha256:1699b07d9fc0276ddfecc2f823f272d96fd58bbab82d7e67f2fd4982a95aeadc",
"initImage": "allantargino/aspnetcore-build@sha256:0d60f845ff6c9c019362a68b87b3920f3eb2d32f847f2d75e4d190cc0ce1d81c"
}
],
"depName": "project.csproj",
"fileNameSuffix": ".cs"
},
{
"ID": "java",
"compiled": true,
"versions": [
{
"name": "java1.8",
"version": "1.8",
"runtimeImage": "kubeless/java@sha256:debf9502545f4c0e955eb60fabb45748c5d98ed9365c4a508c07f38fc7fefaac",
"initImage": "kubeless/java-init@sha256:7e5e4376d3ab76c336d4830c9ed1b7f9407415feca49b8c2bf013e279256878f"
}
],
"depName": "pom.xml",
"fileNameSuffix": ".java"
},
{
"ID": "ballerina",
"compiled": true,
"versions": [
{
"name": "ballerina0.975.0",
"version": "0.975.0",
"runtimeImage": "kubeless/ballerina@sha256:83e51423972f4b0d6b419bee0b4afb3bb87d2bf1b604ebc4366c430e7cc28a35",
"initImage": "kubeless/ballerina-init@sha256:05857ce439a7e290f9d86f8cb38ea3b574670c0c0e91af93af06686fa21ecf4f"
}
],
"depName": "",
"fileNameSuffix": ".bal"
}
]
service-type: ClusterIP
kind: ConfigMap
metadata:
name: kubeless-config
namespace: kubeless
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
labels:
kubeless: controller
name: kubeless-controller-manager
namespace: kubeless
spec:
selector:
matchLabels:
kubeless: controller
template:
metadata:
labels:
kubeless: controller
spec:
containers:
- env:
- name: KUBELESS_INGRESS_ENABLED
valueFrom:
configMapKeyRef:
key: ingress-enabled
name: kubeless-config
- name: KUBELESS_SERVICE_TYPE
valueFrom:
configMapKeyRef:
key: service-type
name: kubeless-config
- name: KUBELESS_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: KUBELESS_CONFIG
value: kubeless-config
image: bitnami/kubeless-controller-manager:v1.0.0-alpha.6
imagePullPolicy: IfNotPresent
name: kubeless-controller-manager
serviceAccountName: controller-acct

View File

@@ -0,0 +1,74 @@
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: nats-controller-deployer
rules:
- apiGroups:
- ""
resources:
- services
- configmaps
verbs:
- get
- list
- apiGroups:
- kubeless.io
resources:
- functions
- natstriggers
verbs:
- get
- list
- watch
- update
- delete
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: nats-controller-deployer
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: nats-controller-deployer
subjects:
- kind: ServiceAccount
name: controller-acct
namespace: kubeless
---
apiVersion: apiextensions.k8s.io/v1beta1
description: CRD object for NATS trigger type
kind: CustomResourceDefinition
metadata:
name: natstriggers.kubeless.io
spec:
group: kubeless.io
names:
kind: NATSTrigger
plural: natstriggers
singular: natstrigger
scope: Namespaced
version: v1beta1
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
labels:
kubeless: nats-trigger-controller
name: nats-trigger-controller
namespace: kubeless
spec:
selector:
matchLabels:
kubeless: nats-trigger-controller
template:
metadata:
labels:
kubeless: nats-trigger-controller
spec:
containers:
- image: bitnami/nats-trigger-controller:v1.0.0-alpha.6
imagePullPolicy: IfNotPresent
name: nats-trigger-controller
serviceAccountName: controller-acct

View File

@@ -0,0 +1,82 @@
---
apiVersion: v1
kind: Namespace
metadata:
name: nats-io
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: nats-operator
namespace: nats-io
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: nats-operator
namespace: nats-io
spec:
replicas: 1
selector:
matchLabels:
name: nats-operator
template:
metadata:
labels:
name: nats-operator
spec:
serviceAccountName: nats-operator
containers:
- name: nats-operator
image: connecteverything/nats-operator:0.2.2-v1alpha2
imagePullPolicy: Always
env:
- name: MY_POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: MY_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: nats-io:nats-operator-binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: nats-io:nats-operator
subjects:
- kind: ServiceAccount
name: nats-operator
namespace: nats-io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: nats-io:nats-operator
rules:
# Allow creating CRDs
- apiGroups:
- apiextensions.k8s.io
resources:
- customresourcedefinitions
verbs: ["*"]
# Allow all actions on NatsClusters
- apiGroups:
- nats.io
resources:
- natsclusters
verbs: ["*"]
# Allow actions on basic Kubernetes objects
- apiGroups: [""]
resources:
- configmaps
- secrets
- pods
- services
- endpoints
- events
verbs: ["*"]

View File

@@ -0,0 +1,8 @@
apiVersion: "nats.io/v1alpha2"
kind: "NatsCluster"
metadata:
name: "nats"
namespace: "nats-io"
spec:
size: 3
version: "1.1.0"

View File

@@ -0,0 +1,11 @@
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: isolate
spec:
podSelector:
matchLabels:
isolated: 'true'
policyTypes:
- Ingress
- Egress

View File

@@ -0,0 +1 @@
falco-nats

View File

@@ -0,0 +1,5 @@
FROM alpine:latest
COPY ./falco-nats /bin/
CMD ["/bin/falco-nats"]

View File

@@ -0,0 +1,12 @@
build:
GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -ldflags="-s" -o falco-nats main.go
deps:
go get -u github.com/nats-io/go-nats
clean:
rm falco-nats
docker: build
docker build -t sysdig/falco-nats .
docker push sysdig/falco-nats

View File

@@ -0,0 +1,27 @@
# NATS output for Sysdig Falco
As Falco does not support a NATS output natively, we have created this small
golang utility which reads Falco alerts from a named pipe and sends them to a
NATS server.
This utility is designed to be run in a sidecar container in the same
Pod as Falco.
## Configuration
A [complete Kubernetes manifest is available](https://github.com/draios/falco/tree/kubernetes-response-engine/deployment/falco/falco-daemonset.yaml) for further reference.
Take a look at the sidecar container and at the initContainers directive,
which creates the shared pipe between containers.
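A sketch of the relevant fragments (abridged; the volume and container names are illustrative):

```yaml
spec:
  initContainers:
  # Create the named pipe on a volume shared by both containers.
  - name: init-pipe
    image: busybox
    command: ['sh', '-c', 'mkfifo /var/run/falco/nats']
    volumeMounts:
    - mountPath: /var/run/falco
      name: shared-pipe
  containers:
  - name: falco-nats
    image: sysdig/falco-nats
    volumeMounts:
    - mountPath: /var/run/falco
      name: shared-pipe
  volumes:
  - name: shared-pipe
    emptyDir: {}
```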
### Container image
This adapter is available as a container image named *sysdig/falco-nats*.
### Parameters Reference
* -s: Specifies the NATS server URL where messages will be published. The default
is: *nats://nats.nats-io.svc.cluster.local:4222*
* -f: Specifies the named pipe path where Falco publishes its alerts. The default
is: */var/run/falco/nats*

View File

@@ -0,0 +1,100 @@
// Copyright 2012-2018 The NATS Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// +build ignore
package main
import (
"bufio"
"encoding/json"
"flag"
"github.com/nats-io/go-nats"
"log"
"os"
"regexp"
"strings"
)
var slugRegularExpression = regexp.MustCompile("[^a-z0-9]+")
func main() {
var urls = flag.String("s", "nats://nats.nats-io.svc.cluster.local:4222", "The nats server URLs (separated by comma)")
var pipePath = flag.String("f", "/var/run/falco/nats", "The named pipe path")
log.SetFlags(0)
flag.Usage = usage
flag.Parse()
nc, err := nats.Connect(*urls)
if err != nil {
log.Fatal(err)
}
defer nc.Close()
pipe, err := os.OpenFile(*pipePath, os.O_RDONLY, 0600)
if err != nil {
log.Fatal(err)
}
log.Printf("Opened pipe %s", *pipePath)
reader := bufio.NewReader(pipe)
scanner := bufio.NewScanner(reader)
log.Printf("Scanning %s", *pipePath)
for scanner.Scan() {
msg := []byte(scanner.Text())
subj, err := subjectAndRuleSlug(msg)
if err != nil {
log.Fatal(err)
}
nc.Publish(subj, msg)
nc.Flush()
if err := nc.LastError(); err != nil {
log.Fatal(err)
} else {
log.Printf("Published [%s] : '%s'\n", subj, msg)
}
}
}
func usage() {
log.Fatalf("Usage: nats-pub [-s server (%s)] <subject> <msg> \n", nats.DefaultURL)
}
type parsedAlert struct {
Priority string `json:"priority"`
Rule string `json:"rule"`
}
func subjectAndRuleSlug(alert []byte) (string, error) {
var result parsedAlert
err := json.Unmarshal(alert, &result)
if err != nil {
return "", err
}
subject := "falco." + result.Priority + "." + slugify(result.Rule)
subject = strings.ToLower(subject)
return subject, nil
}
func slugify(input string) string {
return strings.Trim(slugRegularExpression.ReplaceAllString(strings.ToLower(input), "_"), "_")
}

View File

@@ -0,0 +1,104 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/
.pytest_cache/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# Jupyter Notebook
.ipynb_checkpoints
# pyenv
.python-version
# celery beat schedule file
celerybeat-schedule
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/

View File

@@ -0,0 +1,19 @@
[[source]]
url = "https://pypi.python.org/simple"
verify_ssl = true
name = "pypi"
[dev-packages]
mamba = "*"
expects = "*"
doublex = "*"
doublex-expects = "==0.7.0rc2"
[packages]
kubernetes = "*"
requests = "*"
"e1839a8" = {path = ".", editable = true}
maya = "*"
[requires]
python_version = "3.6"

View File

@@ -0,0 +1,367 @@
{
"_meta": {
"hash": {
"sha256": "00ca5a9cb1f462d534a06bca990e987e75a05b7baf6ba5ddac529f03312135e6"
},
"pipfile-spec": 6,
"requires": {
"python_version": "3.6"
},
"sources": [
{
"name": "pypi",
"url": "https://pypi.python.org/simple",
"verify_ssl": true
}
]
},
"default": {
"cachetools": {
"hashes": [
"sha256:90f1d559512fc073483fe573ef5ceb39bf6ad3d39edc98dc55178a2b2b176fa3",
"sha256:d1c398969c478d336f767ba02040fa22617333293fb0b8968e79b16028dfee35"
],
"version": "==2.1.0"
},
"certifi": {
"hashes": [
"sha256:13e698f54293db9f89122b0581843a782ad0934a4fe0172d2a980ba77fc61bb7",
"sha256:9fa520c1bacfb634fa7af20a76bcbd3d5fb390481724c597da32c719a7dca4b0"
],
"version": "==2018.4.16"
},
"chardet": {
"hashes": [
"sha256:84ab92ed1c4d4f16916e05906b6b75a6c0fb5db821cc65e70cbd64a3e2a5eaae",
"sha256:fc323ffcaeaed0e0a02bf4d117757b98aed530d9ed4531e3e15460124c106691"
],
"version": "==3.0.4"
},
"dateparser": {
"hashes": [
"sha256:940828183c937bcec530753211b70f673c0a9aab831e43273489b310538dff86",
"sha256:b452ef8b36cd78ae86a50721794bc674aa3994e19b570f7ba92810f4e0a2ae03"
],
"version": "==0.7.0"
},
"e1839a8": {
"editable": true,
"path": "."
},
"google-auth": {
"hashes": [
"sha256:1745c9066f698eac3da99cef082914495fb71bc09597ba7626efbbb64c4acc57",
"sha256:82a34e1a59ad35f01484d283d2a36b7a24c8c404a03a71b3afddd0a4d31e169f"
],
"version": "==1.5.0"
},
"humanize": {
"hashes": [
"sha256:a43f57115831ac7c70de098e6ac46ac13be00d69abbf60bdcac251344785bb19"
],
"version": "==0.5.1"
},
"idna": {
"hashes": [
"sha256:156a6814fb5ac1fc6850fb002e0852d56c0c8d2531923a51032d1b70760e186e",
"sha256:684a38a6f903c1d71d6d5fac066b58d7768af4de2b832e426ec79c30daa94a16"
],
"version": "==2.7"
},
"ipaddress": {
"hashes": [
"sha256:64b28eec5e78e7510698f6d4da08800a5c575caa4a286c93d651c5d3ff7b6794",
"sha256:b146c751ea45cad6188dd6cf2d9b757f6f4f8d6ffb96a023e6f2e26eea02a72c"
],
"version": "==1.0.22"
},
"kubernetes": {
"hashes": [
"sha256:b370ab4abd925309db69a14a4723487948e9a83de60ca92782ec14992b741c89",
"sha256:c80dcf531deca2037105df09c933355c80830ffbf9e496b5e6a3967ac6809ef7"
],
"index": "pypi",
"version": "==6.0.0"
},
"maya": {
"hashes": [
"sha256:6f63bc69aa77309fc220bc02618da8701a21da87c2e7a747ee5ccd56a907c3a5",
"sha256:f526bc8596d993f4bd9755668f66aaf61d635bb4149e084d4a2bc0ebe42aa0b6"
],
"index": "pypi",
"version": "==0.5.0"
},
"oauthlib": {
"hashes": [
"sha256:ac35665a61c1685c56336bda97d5eefa246f1202618a1d6f34fccb1bdd404162",
"sha256:d883b36b21a6ad813953803edfa563b1b579d79ca758fe950d1bc9e8b326025b"
],
"version": "==2.1.0"
},
"pendulum": {
"hashes": [
"sha256:4173ce3e81ad0d9d61dbce86f4286c43a26a398270df6a0a89f501f0c28ad27d",
"sha256:56a347d0457859c84b8cdba161fc37c7df5db9b3becec7881cd770e9d2058b3c",
"sha256:738878168eb26e5446da5d1f7b3312ae993a542061be8882099c00ef4866b1a2",
"sha256:95536b33ae152e3c831eb236c1bf9ac9dcfb3b5b98fdbe8e9e601eab6c373897",
"sha256:c04fcf955e622e97e405e5f6d1b1f4a7adc69d79d82f3609643de69283170d6d",
"sha256:dd6500d27bb7ccc029d497da4f9bd09549bd3c0ea276dad894ea2fdf309e83f3",
"sha256:ddaf97a061eb5e2ae37857a8cb548e074125017855690d20e443ad8d9f31e164",
"sha256:e7df37447824f9af0b58c7915a4caf349926036afd86ad38e7529a6b2f8fc34b",
"sha256:e9732b8bb214fad2c72ddcbfec07542effa8a8b704e174347ede1ff8dc679cce",
"sha256:f4eee1e1735487d9d25cc435c519fd4380cb1f82cde3ebad1efbc2fc30deca5b"
],
"version": "==1.5.1"
},
"pyasn1": {
"hashes": [
"sha256:2f57960dc7a2820ea5a1782b872d974b639aa3b448ac6628d1ecc5d0fe3986f2",
"sha256:3651774ca1c9726307560792877db747ba5e8a844ea1a41feb7670b319800ab3",
"sha256:602fda674355b4701acd7741b2be5ac188056594bf1eecf690816d944e52905e",
"sha256:8fb265066eac1d3bb5015c6988981b009ccefd294008ff7973ed5f64335b0f2d",
"sha256:9334cb427609d2b1e195bb1e251f99636f817d7e3e1dffa150cb3365188fb992",
"sha256:9a15cc13ff6bf5ed29ac936ca941400be050dff19630d6cd1df3fb978ef4c5ad",
"sha256:a66dcda18dbf6e4663bde70eb30af3fc4fe1acb2d14c4867a861681887a5f9a2",
"sha256:ba77f1e8d7d58abc42bfeddd217b545fdab4c1eeb50fd37c2219810ad56303bf",
"sha256:cdc8eb2eaafb56de66786afa6809cd9db2df1b3b595dcb25aa5b9dc61189d40a",
"sha256:d01fbba900c80b42af5c3fe1a999acf61e27bf0e452e0f1ef4619065e57622da",
"sha256:f281bf11fe204f05859225ec2e9da7a7c140b65deccd8a4eb0bc75d0bd6949e0",
"sha256:fb81622d8f3509f0026b0683fe90fea27be7284d3826a5f2edf97f69151ab0fc"
],
"version": "==0.4.3"
},
"pyasn1-modules": {
"hashes": [
"sha256:041e9fbafac548d095f5b6c3b328b80792f006196e15a232b731a83c93d59493",
"sha256:0cdca76a68dcb701fff58c397de0ef9922b472b1cb3ea9695ca19d03f1869787",
"sha256:0cea139045c38f84abaa803bcb4b5e8775ea12a42af10019d942f227acc426c3",
"sha256:0f2e50d20bc670be170966638fa0ae603f0bc9ed6ebe8e97a6d1d4cef30cc889",
"sha256:47fb6757ab78fe966e7c58b2030b546854f78416d653163f0ce9290cf2278e8b",
"sha256:598a6004ec26a8ab40a39ea955068cf2a3949ad9c0030da970f2e1ca4c9f1cc9",
"sha256:72fd8b0c11191da088147c6e4678ec53e573923ecf60b57eeac9e97433e09fc2",
"sha256:854700bbdd01394e2ada9c1bfbd0ed9f5d0c551350dbbd023e88b11d2771ae06",
"sha256:af00ea8f2022b6287dc375b2c70f31ab5af83989fc6fe9eacd4976ce26cd7ccc",
"sha256:b1f395cae2d669e0830cb023aa86f9f283b7a9aa32317d7f80d8e78aa2745812",
"sha256:c6747146e95d2b14cc2a8399b2b0bde3f93778f8f9ec704690d2b589c376c137",
"sha256:f53fe5bcebdf318f51399b250fe8325ef3a26d927f012cc0c8e0f9e9af7f9deb"
],
"version": "==0.2.1"
},
"python-dateutil": {
"hashes": [
"sha256:1adb80e7a782c12e52ef9a8182bebeb73f1d7e24e374397af06fb4956c8dc5c0",
"sha256:e27001de32f627c22380a688bcc43ce83504a7bc5da472209b4c70f02829f0b8"
],
"version": "==2.7.3"
},
"pytz": {
"hashes": [
"sha256:65ae0c8101309c45772196b21b74c46b2e5d11b6275c45d251b150d5da334555",
"sha256:c06425302f2cf668f1bba7a0a03f3c1d34d4ebeef2c72003da308b3947c7f749"
],
"version": "==2018.4"
},
"pytzdata": {
"hashes": [
"sha256:1d936da41ee06216d89fdc7ead1ee9a5da2811a8787515a976b646e110c3f622",
"sha256:e4ef42e82b0b493c5849eed98b5ab49d6767caf982127e9a33167f1153b36cc5"
],
"version": "==2018.5"
},
"pyyaml": {
"hashes": [
"sha256:0c507b7f74b3d2dd4d1322ec8a94794927305ab4cebbe89cc47fe5e81541e6e8",
"sha256:16b20e970597e051997d90dc2cddc713a2876c47e3d92d59ee198700c5427736",
"sha256:3262c96a1ca437e7e4763e2843746588a965426550f3797a79fca9c6199c431f",
"sha256:326420cbb492172dec84b0f65c80942de6cedb5233c413dd824483989c000608",
"sha256:4474f8ea030b5127225b8894d626bb66c01cda098d47a2b0d3429b6700af9fd8",
"sha256:592766c6303207a20efc445587778322d7f73b161bd994f227adaa341ba212ab",
"sha256:5ac82e411044fb129bae5cfbeb3ba626acb2af31a8d17d175004b70862a741a7",
"sha256:5f84523c076ad14ff5e6c037fe1c89a7f73a3e04cf0377cb4d017014976433f3",
"sha256:827dc04b8fa7d07c44de11fabbc888e627fa8293b695e0f99cb544fdfa1bf0d1",
"sha256:b4c423ab23291d3945ac61346feeb9a0dc4184999ede5e7c43e1ffb975130ae6",
"sha256:bc6bced57f826ca7cb5125a10b23fd0f2fff3b7c4701d64c439a300ce665fff8",
"sha256:c01b880ec30b5a6e6aa67b09a2fe3fb30473008c85cd6a67359a1b15ed6d83a4",
"sha256:ca233c64c6e40eaa6c66ef97058cdc80e8d0157a443655baa1b2966e812807ca",
"sha256:e863072cdf4c72eebf179342c94e6989c67185842d9997960b3e69290b2fa269"
],
"version": "==3.12"
},
"regex": {
"hashes": [
"sha256:0201b4cb42f03842a75044a3d08b62a79114f753b33ee421182c631d9f5c81f5",
"sha256:204524604456e3e0e25c3f24da4efc43db78edfe7623f1049e03d3aa51ddda48",
"sha256:24c0e838bde42fe9d4d5650e75bff2d4bb5867968fb9409331dbe39154f6e8e2",
"sha256:4360143da844cd985effb7fb9af04beaa2d371ab13e4a1996424aa2f6fbfb877",
"sha256:4b8c6fd44dbd46cdbf755c20a7b9dedb32b8d15b707a0e470dfa66ba5df00a35",
"sha256:4fb5622987f3863cfa76c40ab3338a7dc8ed2bac236bb53e638b21ea397a3252",
"sha256:5eebefef6e3d97e4c1f9f77eac6555c32ed3afbd769955a9f7339256a4d50d6c",
"sha256:7222204c6acb9e52688678ec7306b2dfd84df68bc8eb251be74fec4e9dd85bf9",
"sha256:809cbbcbe291cf7bc9cf6aeac6a9a400a71318292d0a2a07effaf4b4782203a0",
"sha256:9c9075c727afec23eab196be51737eedb00cd67bb4a2e0170fa8dc65163838f3",
"sha256:a105b1d7287d412e8fe99959c1b80f7cbd76184b6466d63579b6d256a406a76e",
"sha256:c3d9cfd214a3e5a25f2da9817c389e32069e210b067ebb901e10f3270da9b259",
"sha256:c3ebfb5ec2dd750f7861734b25ea7d5ae89d6f33b427cccf3cafa36a1511d862",
"sha256:c670acd71d975b0c91579d40ae7f703d0daa1c871f12e46394a2c7be0ec8e217",
"sha256:e371482ee3e6e5ca19ea83cdfc84bf69cac230e3cb1073c8c3bebf3f143cd7a5"
],
"version": "==2018.6.9"
},
"requests": {
"hashes": [
"sha256:63b52e3c866428a224f97cab011de738c36aec0185aa91cfacd418b5d58911d1",
"sha256:ec22d826a36ed72a7358ff3fe56cbd4ba69dd7a6718ffd450ff0e9df7a47ce6a"
],
"index": "pypi",
"version": "==2.19.1"
},
"requests-oauthlib": {
"hashes": [
"sha256:8886bfec5ad7afb391ed5443b1f697c6f4ae98d0e5620839d8b4499c032ada3f",
"sha256:e21232e2465808c0e892e0e4dbb8c2faafec16ac6dc067dd546e9b466f3deac8",
"sha256:fe3282f48fb134ee0035712159f5429215459407f6d5484013343031ff1a400d"
],
"version": "==1.0.0"
},
"rsa": {
"hashes": [
"sha256:25df4e10c263fb88b5ace923dd84bf9aa7f5019687b5e55382ffcdb8bede9db5",
"sha256:43f682fea81c452c98d09fc316aae12de6d30c4b5c84226642cf8f8fd1c93abd"
],
"version": "==3.4.2"
},
"six": {
"hashes": [
"sha256:70e8a77beed4562e7f14fe23a786b54f6296e34344c23bc42f07b15018ff98e9",
"sha256:832dc0e10feb1aa2c68dcc57dbb658f1c7e65b9b61af69048abc87a2db00a0eb"
],
"version": "==1.11.0"
},
"snaptime": {
"hashes": [
"sha256:e3f1eb89043d58d30721ab98cb65023f1a4c2740e3b197704298b163c92d508b"
],
"version": "==0.2.4"
},
"tzlocal": {
"hashes": [
"sha256:4ebeb848845ac898da6519b9b31879cf13b6626f7184c496037b818e238f2c4e"
],
"version": "==1.5.1"
},
"urllib3": {
"hashes": [
"sha256:a68ac5e15e76e7e5dd2b8f94007233e01effe3e50e8daddf69acfd81cb686baf",
"sha256:b5725a0bd4ba422ab0e66e89e030c806576753ea3ee08554382c14e685d117b5"
],
"version": "==1.23"
},
"websocket-client": {
"hashes": [
"sha256:18f1170e6a1b5463986739d9fd45c4308b0d025c1b2f9b88788d8f69e8a5eb4a",
"sha256:db70953ae4a064698b27ae56dcad84d0ee68b7b43cb40940f537738f38f510c1"
],
"version": "==0.48.0"
}
},
"develop": {
"args": {
"hashes": [
"sha256:a785b8d837625e9b61c39108532d95b85274acd679693b71ebb5156848fcf814"
],
"version": "==0.1.0"
},
"clint": {
"hashes": [
"sha256:05224c32b1075563d0b16d0015faaf9da43aa214e4a2140e51f08789e7a4c5aa"
],
"version": "==0.5.1"
},
"coverage": {
"hashes": [
"sha256:03481e81d558d30d230bc12999e3edffe392d244349a90f4ef9b88425fac74ba",
"sha256:0b136648de27201056c1869a6c0d4e23f464750fd9a9ba9750b8336a244429ed",
"sha256:104ab3934abaf5be871a583541e8829d6c19ce7bde2923b2751e0d3ca44db60a",
"sha256:15b111b6a0f46ee1a485414a52a7ad1d703bdf984e9ed3c288a4414d3871dcbd",
"sha256:198626739a79b09fa0a2f06e083ffd12eb55449b5f8bfdbeed1df4910b2ca640",
"sha256:1c383d2ef13ade2acc636556fd544dba6e14fa30755f26812f54300e401f98f2",
"sha256:28b2191e7283f4f3568962e373b47ef7f0392993bb6660d079c62bd50fe9d162",
"sha256:2eb564bbf7816a9d68dd3369a510be3327f1c618d2357fa6b1216994c2e3d508",
"sha256:337ded681dd2ef9ca04ef5d93cfc87e52e09db2594c296b4a0a3662cb1b41249",
"sha256:3a2184c6d797a125dca8367878d3b9a178b6fdd05fdc2d35d758c3006a1cd694",
"sha256:3c79a6f7b95751cdebcd9037e4d06f8d5a9b60e4ed0cd231342aa8ad7124882a",
"sha256:3d72c20bd105022d29b14a7d628462ebdc61de2f303322c0212a054352f3b287",
"sha256:3eb42bf89a6be7deb64116dd1cc4b08171734d721e7a7e57ad64cc4ef29ed2f1",
"sha256:4635a184d0bbe537aa185a34193898eee409332a8ccb27eea36f262566585000",
"sha256:56e448f051a201c5ebbaa86a5efd0ca90d327204d8b059ab25ad0f35fbfd79f1",
"sha256:5a13ea7911ff5e1796b6d5e4fbbf6952381a611209b736d48e675c2756f3f74e",
"sha256:69bf008a06b76619d3c3f3b1983f5145c75a305a0fea513aca094cae5c40a8f5",
"sha256:6bc583dc18d5979dc0f6cec26a8603129de0304d5ae1f17e57a12834e7235062",
"sha256:701cd6093d63e6b8ad7009d8a92425428bc4d6e7ab8d75efbb665c806c1d79ba",
"sha256:7608a3dd5d73cb06c531b8925e0ef8d3de31fed2544a7de6c63960a1e73ea4bc",
"sha256:76ecd006d1d8f739430ec50cc872889af1f9c1b6b8f48e29941814b09b0fd3cc",
"sha256:7aa36d2b844a3e4a4b356708d79fd2c260281a7390d678a10b91ca595ddc9e99",
"sha256:7d3f553904b0c5c016d1dad058a7554c7ac4c91a789fca496e7d8347ad040653",
"sha256:7e1fe19bd6dce69d9fd159d8e4a80a8f52101380d5d3a4d374b6d3eae0e5de9c",
"sha256:8c3cb8c35ec4d9506979b4cf90ee9918bc2e49f84189d9bf5c36c0c1119c6558",
"sha256:9d6dd10d49e01571bf6e147d3b505141ffc093a06756c60b053a859cb2128b1f",
"sha256:9e112fcbe0148a6fa4f0a02e8d58e94470fc6cb82a5481618fea901699bf34c4",
"sha256:ac4fef68da01116a5c117eba4dd46f2e06847a497de5ed1d64bb99a5fda1ef91",
"sha256:b8815995e050764c8610dbc82641807d196927c3dbed207f0a079833ffcf588d",
"sha256:be6cfcd8053d13f5f5eeb284aa8a814220c3da1b0078fa859011c7fffd86dab9",
"sha256:c1bb572fab8208c400adaf06a8133ac0712179a334c09224fb11393e920abcdd",
"sha256:de4418dadaa1c01d497e539210cb6baa015965526ff5afc078c57ca69160108d",
"sha256:e05cb4d9aad6233d67e0541caa7e511fa4047ed7750ec2510d466e806e0255d6",
"sha256:e4d96c07229f58cb686120f168276e434660e4358cc9cf3b0464210b04913e77",
"sha256:f3f501f345f24383c0000395b26b726e46758b71393267aeae0bd36f8b3ade80",
"sha256:f8a923a85cb099422ad5a2e345fe877bbc89a8a8b23235824a93488150e45f6e"
],
"version": "==4.5.1"
},
"doublex": {
"hashes": [
"sha256:062af49d9e4148bc47b7512d3fdc8e145dea4671d074ffd54b2464a19d3757ab"
],
"index": "pypi",
"version": "==1.8.4"
},
"doublex-expects": {
"hashes": [
"sha256:5421bd92319c77ccc5a81d595d06e9c9f7f670de342b33e8007a81e70f9fade8"
],
"index": "pypi",
"version": "==0.7.0rc2"
},
"expects": {
"hashes": [
"sha256:37538d7b0fa9c0d53e37d07b0e8c07d89754d3deec1f0f8ed1be27f4f10363dd"
],
"index": "pypi",
"version": "==0.8.0"
},
"mamba": {
"hashes": [
"sha256:63e70a8666039cf143a255000e23f29be4ea4b5b8169f2b053f94eb73a2ea9e2"
],
"index": "pypi",
"version": "==0.9.3"
},
"pyhamcrest": {
"hashes": [
"sha256:6b672c02fdf7470df9674ab82263841ce8333fb143f32f021f6cb26f0e512420",
"sha256:7a4bdade0ed98c699d728191a058a60a44d2f9c213c51e2dd1e6fb42f2c6128a",
"sha256:8ffaa0a53da57e89de14ced7185ac746227a8894dbd5a3c718bf05ddbd1d56cd",
"sha256:bac0bea7358666ce52e3c6c85139632ed89f115e9af52d44b3c36e0bf8cf16a9",
"sha256:f30e9a310bcc1808de817a92e95169ffd16b60cbc5a016a49c8d0e8ababfae79"
],
"version": "==1.9.0"
},
"six": {
"hashes": [
"sha256:70e8a77beed4562e7f14fe23a786b54f6296e34344c23bc42f07b15018ff98e9",
"sha256:832dc0e10feb1aa2c68dcc57dbb658f1c7e65b9b61af69048abc87a2db00a0eb"
],
"version": "==1.11.0"
}
}
}


@@ -0,0 +1,164 @@
# Playbooks
Following [OWASP ideas](https://owaspsummit.org/Working-Sessions/Security-Playbooks/index.html),
playbooks are workflows and prescriptive instructions on how to handle specific
security activities or incidents.
More specifically, our playbooks are actions that are executed when Falco
detects unexpected behavior in our Kubernetes cluster. We have implemented them
in Python, and since several Serverless concepts fit well with playbooks, we
use [Kubeless](https://kubeless.io/) to deploy them.
## Requirements
* A working Kubernetes cluster
* [kubeless cli executable](https://kubeless.io/docs/quick-start/)
* Python 3.6
* pipenv
## Deploying a playbook
Deploying a playbook involves a couple of components: the function that is
going to be run by Kubeless and a trigger for that function.
We have automated those steps in a generic script, *deploy_playbook*, which
packages the playbook and its dependencies, uploads them to Kubernetes and
creates the Kubeless trigger.
```
./deploy_playbook -p slack -e SLACK_WEBHOOK_URL="https://..." -t "falco.error.*" -t "falco.info.*"
```
### Parameters
* -p: The playbook to deploy; it must match the name of the top-level script.
In this example, *slack.py* contains the wiring between the playbook and the
Kubeless function.
* -e: Sets a configuration setting for the playbook, in this case the URL
where we have to post messages. You can specify multiple *-e* flags.
* -t: Topic to subscribe to. You can specify multiple *-t* flags, and a
trigger will be created for each topic, so our function runs whenever a
message is received on one of those topics. In this case, the playbook runs
when a falco.error or falco.info alert is raised.
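For reference, the top-level script is just a thin Kubeless handler that
receives the Falco alert under *event['data']*. A minimal sketch (the file
name *example.py* and the printed fields are illustrative, not part of this
repository):
```
# example.py -- hypothetical minimal playbook wiring for Kubeless
def handler(event, context):
    alert = event['data']  # the Falco alert, parsed from the NATS message
    print(alert['rule'], alert['priority'],
          alert['output_fields']['k8s.pod.name'])
```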
### Kubeless 101
Under the hood, kubeless offers several useful commands for checking function state.
We can retrieve all functions deployed in our cluster:
```
kubeless function list
```
We can also see several interesting stats about function usage:
```
kubeless function top
```
And we can list the bindings between functions and NATS topics:
```
kubeless trigger nats list
```
### Undeploying a function
You have to delete every component using the kubeless cli tool.
Generally, it takes two steps: removing the triggers and removing the function.
Remove the triggers:
```
kubeless trigger nats delete trigger-name
```
If you have deployed with the script, the trigger names look like
*falco-<playbook>-trigger-<index>*, where index is the index of the topic.
In any case, you can list all triggers and select the right name.
Remove the function:
```
kubeless function delete function-name
```
If you have deployed with the script, the function name will start with
*falco-<playbook>*; again, you can list all functions and select the right name.
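For example, if the *slack* playbook was deployed subscribed to a single
topic, the cleanup would look like:
```
kubeless trigger nats delete falco-slack-trigger-0
kubeless function delete falco-slack
```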
## Testing
One of the goals of the project is that playbooks are well tested.
You can execute the tests with:
```
pipenv --three install -d
export KUBERNETES_LOAD_KUBE_CONFIG=1
pipenv run mamba --format=documentation
```
The first line installs the development dependencies, which include the test
runner and the assertion libraries. The second one tells the Kubernetes client
to use the same configuration as kubectl, and the third one runs the tests.
The tests under *specs/infrastructure* run against a real Kubernetes cluster,
but those under *spec/reactions* can be run without any kind of infrastructure.
## Available Playbooks
### Delete a Pod
This playbook kills a pod using the Kubernetes API.
```
./deploy_playbook -p delete -t "falco.notice.terminal_shell_in_container"
```
In this example, every time we receive a *Terminal shell in container* alert
from Falco, the offending pod will be deleted.
### Send message to Slack
This playbook posts a message to Slack.
```
./deploy_playbook -p slack -t "falco.error.*" -e SLACK_WEBHOOK_URL="https://..."
```
#### Parameters
* SLACK_WEBHOOK_URL: This is the webhook URL used for posting messages to Slack.
In this example, whenever Falco raises an error alert, we are notified in Slack.
### Taint a Node
This playbook taints the node where the offending pod is running.
```
./deploy_playbook -p taint -t "falco.notice.contact_k8s_api_server_from_container"
```
#### Parameters
* TAINT_KEY: This is the taint key. Default value: falco/alert
* TAINT_VALUE: This is the taint value. Default value: true
* TAINT_EFFECT: This is the taint effect. Default value: NoSchedule
In this example, we avoid scheduling on the node that originated the *Contact
K8S API server from container* alert. For a more aggressive approach, we can
pass -e TAINT_EFFECT=NoExecute.
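With the default parameters, the applied taint is equivalent to running the
following by hand (substitute a real node name):
```
kubectl taint nodes <node-name> falco/alert=true:NoSchedule
```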
### Network isolate a Pod
This playbook denies all ingress/egress traffic to and from a Pod. It is
intended to be used with Calico or similar projects that manage networking in
Kubernetes.
```
./deploy_playbook -p isolate -t "falco.notice.write_below_binary_dir" -t "falco.error.write_below_etc"
```
As soon as we notice that someone wrote below a binary directory or below
/etc, we disconnect that pod. It acts like a trap for attackers.
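Note that the playbook itself only labels the pod with *isolated=true*; the
actual traffic blocking comes from a network policy that selects that label.
A minimal sketch of such a deny-all policy (the policy name is illustrative,
and a NetworkPolicy-capable network plugin such as Calico is assumed):
```
# Hypothetical deny-all policy matching pods labeled by the isolate playbook
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: isolate-labeled-pods
spec:
  podSelector:
    matchLabels:
      isolated: "true"
  policyTypes:
  - Ingress
  - Egress
```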


@@ -0,0 +1,16 @@
import sys
import os.path
sys.path.append(os.path.join(os.path.abspath(os.path.dirname(__file__))))
import os
import playbooks
from playbooks import infrastructure
playbook = playbooks.DeletePod(
infrastructure.KubernetesClient()
)
def handler(event, context):
playbook.run(event['data'])


@@ -0,0 +1,68 @@
#!/bin/bash
#
# Deploys a playbook
set -e
function usage() {
cat<<EOF
Usage: $0 [options]
-p playbook Playbook to be deployed. It is the name of the top-level script for Kubeless: slack, taint, isolate.
-e environment Environment variables for the Kubeless function. You can pass multiple environment variables by passing several -e parameters.
-t topic NATS topic to subscribe the function to. You can bind to multiple topics by passing several -t parameters.
You must pass the playbook and at least one topic to subscribe to.
Example:
deploy_playbook -p slack -t "falco.error.*" -e SLACK_WEBHOOK_URL=http://foobar.com/...
EOF
exit 1
}
function join { local IFS="$1"; shift; echo "$*"; }
playbook=""
environment=()
topics=()
while getopts "p:e:t:" arg; do
case $arg in
p)
playbook="${OPTARG}"
;;
e)
environment+=("${OPTARG}")
;;
t)
topics+=("${OPTARG}")
;;
*)
usage
;;
esac
done
if [[ "${playbook}" == "" || ${#topics[@]} -eq 0 ]]; then
usage
fi
pipenv lock --requirements | sed '/^-/ d' > requirements.txt
zip "${playbook}".zip -r playbooks/*.py "${playbook}".py
kubeless function deploy --from-file "${playbook}".zip \
--dependencies requirements.txt \
--env "$(join , ${environment[*]})" \
--runtime python3.6 \
--handler "${playbook}".handler \
falco-"${playbook}"
rm requirements.txt ${playbook}.zip
for index in ${!topics[*]}; do
kubeless trigger nats create falco-"${playbook}"-trigger-"${index}" \
--function-selector created-by=kubeless,function=falco-${playbook} \
--trigger-topic "${topics[$index]}"
done


@@ -0,0 +1,16 @@
import sys
import os.path
sys.path.append(os.path.join(os.path.abspath(os.path.dirname(__file__))))
import os
import playbooks
from playbooks import infrastructure
playbook = playbooks.NetworkIsolatePod(
infrastructure.KubernetesClient()
)
def handler(event, context):
playbook.run(event['data'])


@@ -0,0 +1,101 @@
import maya
class DeletePod:
def __init__(self, k8s_client):
self._k8s_client = k8s_client
def run(self, alert):
pod_name = alert['output_fields']['k8s.pod.name']
self._k8s_client.delete_pod(pod_name)
class AddMessageToSlack:
def __init__(self, slack_client):
self._slack_client = slack_client
def run(self, alert):
message = self._build_slack_message(alert)
self._slack_client.post_message(message)
return message
def _build_slack_message(self, alert):
return {
'text': self._output(alert),
'attachments': [{
'color': self._color_from(alert['priority']),
'fields': [
{
'title': 'Rule',
'value': alert['rule'],
'short': False
},
{
'title': 'Priority',
'value': alert['priority'],
'short': True
},
{
'title': 'Time',
'value': str(maya.parse(alert['time'])),
'short': True
},
{
'title': 'Kubernetes Pod Name',
'value': alert['output_fields']['k8s.pod.name'],
'short': True
},
{
'title': 'Container Id',
'value': alert['output_fields']['container.id'],
'short': True
}
]
}]
}
def _output(self, alert):
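# alert['output'] looks like '<time>: <Priority> <message>'; split on ': '
# to drop the timestamp, then strip the leading priority word and its space.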
output = alert['output'].split(': ')[1]
priority_plus_whitespace_length = len(alert['priority']) + 1
return output[priority_plus_whitespace_length:]
_COLORS = {
'Emergency': '#b12737',
'Alert': '#f24141',
'Critical': '#fc7335',
'Error': '#f28143',
'Warning': '#f9c414',
'Notice': '#397ec3',
'Informational': '#8fc0e7',
'Debug': '#8fc0e7',
}
def _color_from(self, priority):
return self._COLORS.get(priority, '#eeeeee')
class TaintNode:
def __init__(self, k8s_client, key, value, effect):
self._k8s_client = k8s_client
self._key = key
self._value = value
self._effect = effect
def run(self, alert):
pod = alert['output_fields']['k8s.pod.name']
node = self._k8s_client.find_node_running_pod(pod)
self._k8s_client.taint_node(node, self._key, self._value, self._effect)
class NetworkIsolatePod:
def __init__(self, k8s_client):
self._k8s_client = k8s_client
def run(self, alert):
pod = alert['output_fields']['k8s.pod.name']
self._k8s_client.add_label_to_pod(pod, 'isolated', 'true')


@@ -0,0 +1,74 @@
import os
import json
from kubernetes import client, config
import requests
class KubernetesClient:
def __init__(self):
if 'KUBERNETES_LOAD_KUBE_CONFIG' in os.environ:
config.load_kube_config()
else:
config.load_incluster_config()
self._v1 = client.CoreV1Api()
def delete_pod(self, name):
namespace = self._find_pod_namespace(name)
body = client.V1DeleteOptions()
self._v1.delete_namespaced_pod(name=name,
namespace=namespace,
body=body)
def exists_pod(self, name):
response = self._v1.list_pod_for_all_namespaces(watch=False)
for item in response.items:
if item.metadata.name == name:
if item.metadata.deletion_timestamp is None:
return True
return False
def _find_pod_namespace(self, name):
response = self._v1.list_pod_for_all_namespaces(watch=False)
for item in response.items:
if item.metadata.name == name:
return item.metadata.namespace
def find_node_running_pod(self, name):
response = self._v1.list_pod_for_all_namespaces(watch=False)
for item in response.items:
if item.metadata.name == name:
return item.spec.node_name
def taint_node(self, name, key, value, effect):
body = client.V1Node(
spec=client.V1NodeSpec(
taints=[
client.V1Taint(key=key, value=value, effect=effect)
]
)
)
return self._v1.patch_node(name, body)
def add_label_to_pod(self, name, label, value):
namespace = self._find_pod_namespace(name)
body = client.V1Pod(
metadata=client.V1ObjectMeta(
labels={label: value}
)
)
return self._v1.patch_namespaced_pod(name, namespace, body)
class SlackClient:
def __init__(self, slack_webhook_url):
self._slack_webhook_url = slack_webhook_url
def post_message(self, message):
requests.post(self._slack_webhook_url,
data=json.dumps(message))


@@ -0,0 +1,11 @@
from setuptools import setup
setup(name='playbooks',
version='0.1',
description='A set of playbooks for Falco alerts',
url='http://github.com/draios/falco-playbooks',
author='Néstor Salceda',
author_email='nestor.salceda@sysdig.com',
license='',
packages=['playbooks'],
zip_safe=False)


@@ -0,0 +1,16 @@
import sys
import os.path
sys.path.append(os.path.join(os.path.abspath(os.path.dirname(__file__))))
import os
import playbooks
from playbooks import infrastructure
playbook = playbooks.AddMessageToSlack(
infrastructure.SlackClient(os.environ['SLACK_WEBHOOK_URL'])
)
def handler(event, context):
playbook.run(event['data'])


@@ -0,0 +1,65 @@
from mamba import description, context, it, before
from expects import expect, be_false, be_true, start_with, equal, have_key
import subprocess
import os.path
from playbooks import infrastructure
with description(infrastructure.KubernetesClient) as self:
with before.each:
self.kubernetes_client = infrastructure.KubernetesClient()
with context('when checking if a pod exists'):
with before.each:
self._create_nginx_pod()
with context('and pod exists'):
with it('returns true'):
expect(self.kubernetes_client.exists_pod('nginx')).to(be_true)
with context('and pod does not exist'):
with it('returns false'):
self.kubernetes_client.delete_pod('nginx')
expect(self.kubernetes_client.exists_pod('nginx')).to(be_false)
with it('finds node running pod'):
self._create_nginx_pod()
node = self.kubernetes_client.find_node_running_pod('nginx')
expect(node).to(start_with('gke-sysdig-work-default-pool'))
with it('taints node'):
self._create_nginx_pod()
node_name = self.kubernetes_client.find_node_running_pod('nginx')
node = self.kubernetes_client.taint_node(node_name,
'playbooks',
'true',
'NoSchedule')
expect(node.spec.taints[0].effect).to(equal('NoSchedule'))
expect(node.spec.taints[0].key).to(equal('playbooks'))
expect(node.spec.taints[0].value).to(equal('true'))
with it('adds labels to a pod'):
self._create_nginx_pod()
pod = self.kubernetes_client.add_label_to_pod('nginx',
'testing',
'true')
expect(pod.metadata.labels).to(have_key('testing', 'true'))
def _create_nginx_pod(self):
current_directory = os.path.dirname(os.path.realpath(__file__))
pod_manifesto = os.path.join(current_directory,
'..',
'support',
'deployment.yaml')
subprocess.run(['kubectl', 'create', '-f', pod_manifesto])


@@ -0,0 +1,16 @@
from mamba import description, it
import os
from playbooks import infrastructure
with description(infrastructure.SlackClient) as self:
with it('posts a message to #kubeless-demo channel'):
slack_client = infrastructure.SlackClient(os.environ['SLACK_WEBHOOK_URL'])
message = {
'text': 'Hello from Python! :metal:'
}
slack_client.post_message(message)


@@ -0,0 +1,62 @@
from mamba import description, it, before, context
from expects import expect, have_key, have_keys, contain
from doublex import Spy
from doublex_expects import have_been_called_with
from playbooks import infrastructure
import playbooks
with description(playbooks.AddMessageToSlack) as self:
with before.each:
self.slack_client = Spy(infrastructure.SlackClient)
self.playbook = playbooks.AddMessageToSlack(self.slack_client)
with context('when publishing a message to slack'):
with before.each:
self.alert = {
"output": "10:22:15.576767292: Notice Unexpected setuid call by non-sudo, non-root program (user=bin cur_uid=2 parent=event_generator command=event_generator uid=root) k8s.pod=falco-event-generator-6fd89678f9-cdkvz container=1c76f49f40b4",
"output_fields": {
"container.id": "1c76f49f40b4",
"evt.arg.uid": "root",
"evt.time": 1527157335576767292,
"k8s.pod.name": "falco-event-generator-6fd89678f9-cdkvz",
"proc.cmdline": "event_generator ",
"proc.pname": "event_generator",
"user.name": "bin",
"user.uid": 2
},
"priority": "Notice",
"rule": "Non sudo setuid",
"time": "2018-05-24T10:22:15.576767292Z"
}
self.message = self.playbook.run(self.alert)
with it('publishes message to slack'):
expect(self.slack_client.post_message).to(have_been_called_with(self.message))
with it('includes falco output'):
falco_output = 'Unexpected setuid call by non-sudo, non-root program (user=bin cur_uid=2 parent=event_generator command=event_generator uid=root) k8s.pod=falco-event-generator-6fd89678f9-cdkvz container=1c76f49f40b4'
expect(self.message).to(have_key('text', falco_output))
with it('includes color based on priority'):
expect(self.message['attachments'][0]).to(have_key('color'))
with it('includes priority'):
expect(self.message['attachments'][0]['fields']).to(contain(have_keys(title='Priority', value='Notice')))
with it('includes rule name'):
expect(self.message['attachments'][0]['fields']).to(contain(have_keys(title='Rule', value='Non sudo setuid')))
with it('includes time when alert happened'):
expect(self.message['attachments'][0]['fields']).to(contain(have_keys(title='Time', value='Thu, 24 May 2018 10:22:15 GMT')))
with it('includes kubernetes pod name'):
expect(self.message['attachments'][0]['fields']).to(contain(have_keys(title='Kubernetes Pod Name', value='falco-event-generator-6fd89678f9-cdkvz')))
with it('includes container id'):
expect(self.message['attachments'][0]['fields']).to(contain(have_keys(title='Container Id', value='1c76f49f40b4')))


@@ -0,0 +1,22 @@
from mamba import description, it, before
from expects import expect
from doublex import Spy
from doublex_expects import have_been_called_with
from playbooks import infrastructure
import playbooks
with description(playbooks.DeletePod) as self:
with before.each:
self.k8s_client = Spy(infrastructure.KubernetesClient)
self.playbook = playbooks.DeletePod(self.k8s_client)
with it('deletes a pod'):
pod_name = 'a pod name'
alert = {'output_fields': {'k8s.pod.name': pod_name}}
self.playbook.run(alert)
expect(self.k8s_client.delete_pod).to(have_been_called_with(pod_name))


@@ -0,0 +1,22 @@
from mamba import description, it, before
from expects import expect
from doublex import Spy
from doublex_expects import have_been_called
from playbooks import infrastructure
import playbooks
with description(playbooks.NetworkIsolatePod) as self:
with before.each:
self.k8s_client = Spy(infrastructure.KubernetesClient)
self.playbook = playbooks.NetworkIsolatePod(self.k8s_client)
with it('adds isolation label to pod'):
pod_name = 'any pod name'
alert = {'output_fields': {'k8s.pod.name': pod_name}}
self.playbook.run(alert)
expect(self.k8s_client.add_label_to_pod).to(have_been_called)


@@ -0,0 +1,34 @@
from mamba import description, it, before
from expects import expect
from doublex import Spy, when
from doublex_expects import have_been_called_with
from playbooks import infrastructure
import playbooks
with description(playbooks.TaintNode) as self:
with before.each:
self.k8s_client = Spy(infrastructure.KubernetesClient)
self.key = 'falco/alert'
self.value = 'true'
self.effect = 'NoSchedule'
self.playbook = playbooks.TaintNode(self.k8s_client,
self.key,
self.value,
self.effect)
with it('taints the node'):
pod_name = 'any pod name'
alert = {'output_fields': {'k8s.pod.name': pod_name}}
node = 'any node'
when(self.k8s_client).find_node_running_pod(pod_name).returns(node)
self.playbook.run(alert)
expect(self.k8s_client.taint_node).to(have_been_called_with(node,
self.key,
self.value,
self.effect))


@@ -0,0 +1,10 @@
apiVersion: v1
kind: Pod
metadata:
name: nginx
spec:
containers:
- name: nginx
image: nginx:latest
ports:
- containerPort: 80


@@ -0,0 +1,19 @@
import sys
import os.path
sys.path.append(os.path.join(os.path.abspath(os.path.dirname(__file__))))
import os
import playbooks
from playbooks import infrastructure
playbook = playbooks.TaintNode(
infrastructure.KubernetesClient(),
os.environ.get('TAINT_KEY', 'falco/alert'),
os.environ.get('TAINT_VALUE', 'true'),
os.environ.get('TAINT_EFFECT', 'NoSchedule')
)
def handler(event, context):
playbook.run(event['data'])


@@ -152,13 +152,13 @@
- list: rpm_binaries
items: [dnf, rpm, rpmkey, yum, '"75-system-updat"', rhsmcertd-worke, subscription-ma,
repoquery, rpmkeys, rpmq, yum-cron, yum-config-mana, yum-debug-dump,
abrt-action-sav, rpmdb_stat]
abrt-action-sav, rpmdb_stat, microdnf]
- macro: rpm_procs
condition: proc.name in (rpm_binaries) or proc.name in (salt-minion)
- list: deb_binaries
items: [dpkg, dpkg-preconfigu, dpkg-reconfigur, apt, apt-get, aptitude,
items: [dpkg, dpkg-preconfigu, dpkg-reconfigur, dpkg-divert, apt, apt-get, aptitude,
frontend, preinst, add-apt-reposit, apt-auto-remova, apt-key,
apt-listchanges, unattended-upgr, apt-add-reposit
]
@@ -166,11 +166,14 @@
# The truncated dpkg-preconfigu is intentional, process names are
# truncated at the sysdig level.
- list: package_mgmt_binaries
items: [rpm_binaries, deb_binaries, update-alternat, gem, pip, sane-utils.post]
items: [rpm_binaries, deb_binaries, update-alternat, gem, pip, pip3, sane-utils.post, alternatives, chef-client]
- macro: package_mgmt_procs
condition: proc.name in (package_mgmt_binaries)
- macro: coreos_write_ssh_dir
condition: (proc.name=update-ssh-keys and fd.name startswith /home/core/.ssh)
- macro: run_by_package_mgmt_binaries
condition: proc.aname in (package_mgmt_binaries, needrestart)
@@ -375,6 +378,9 @@
(proc.cmdline startswith "sed -ri" or proc.cmdline startswith "sed -i") and
(fd.name startswith /etc/httpd/conf.d/ or fd.name startswith /etc/httpd/conf))
- macro: userhelper_writing_etc_security
condition: (proc.name=userhelper and fd.name startswith /etc/security)
- macro: parent_Xvfb_running_xkbcomp
condition: (proc.pname=Xvfb and proc.cmdline startswith 'sh -c "/usr/bin/xkbcomp"')
@@ -395,18 +401,13 @@
- list: known_shell_spawn_binaries
items: []
- macro: shell_spawning_containers
condition: (container.image startswith jenkins or
container.image startswith gitlab/gitlab-ce or
container.image startswith gitlab/gitlab-ee)
## End Deprecated
- macro: ansible_running_python
condition: (proc.name in (python, pypy) and proc.cmdline contains ansible)
- macro: chef_running_yum_dump
condition: (proc.name=python and proc.cmdline contains yum-dump.py)
- macro: python_running_chef
condition: (proc.name=python and (proc.cmdline contains yum-dump.py or proc.cmdline="python /usr/bin/chef-monitor.py"))
- macro: python_running_denyhosts
condition: >
@@ -490,9 +491,13 @@
- macro: htpasswd_writing_passwd
condition: (proc.name=htpasswd and fd.name=/etc/nginx/.htpasswd)
- macro: lvprogs_writing_lvm_archive
condition: (proc.name in (dmeventd,lvcreate) and (fd.name startswith /etc/lvm/archive or
fd.name startswith /etc/lvm/backup))
- macro: lvprogs_writing_conf
condition: >
(proc.name in (dmeventd,lvcreate,pvscan) and
(fd.name startswith /etc/lvm/archive or
fd.name startswith /etc/lvm/backup or
fd.name startswith /etc/lvm/cache))
- macro: ovsdb_writing_openvswitch
condition: (proc.name=ovsdb-server and fd.directory=/etc/openvswitch)
@@ -517,10 +522,14 @@
- macro: countly_writing_nginx_conf
condition: (proc.cmdline startswith "nodejs /opt/countly/bin" and fd.name startswith /etc/nginx)
- list: ms_oms_binaries
items: [omi.postinst, omsconfig.posti, scx.postinst, omsadmin.sh, omiagent]
- macro: ms_oms_writing_conf
condition: >
((proc.name in (omiagent,omsagent,in_heartbeat_r*,omsadmin.sh,PerformInventor)
or proc.pname in (omi.postinst,omsconfig.posti,scx.postinst,omsadmin.sh,omiagent))
or proc.pname in (ms_oms_binaries)
or proc.aname[2] in (ms_oms_binaries))
and (fd.name startswith /etc/opt/omi or fd.name startswith /etc/opt/microsoft/omsagent))
- macro: ms_scx_writing_conf
@@ -541,6 +550,15 @@
- macro: slapadd_writing_conf
condition: (proc.name=slapadd and fd.name startswith /etc/ldap)
- macro: openldap_writing_conf
condition: (proc.pname=run-openldap.sh and fd.name startswith /etc/openldap)
- macro: ucpagent_writing_conf
condition: (proc.name=apiserver and container.image startswith docker/ucp-agent and fd.name=/etc/authorization_config.cfg)
- macro: iscsi_writing_conf
condition: (proc.name=iscsiadm and fd.name startswith /etc/iscsi)
- macro: symantec_writing_conf
condition: >
((proc.name=symcfgd and fd.name startswith /etc/symantec) or
@@ -554,6 +572,17 @@
(proc.name=urlgrabber-ext- and proc.aname[3]=sosreport and
(fd.name startswith /etc/pkt/nssdb or fd.name startswith /etc/pki/nssdb))
- macro: pkgmgmt_progs_writing_pki
condition: >
(proc.name=urlgrabber-ext- and proc.pname in (yum, yum-cron, repoquery) and
(fd.name startswith /etc/pkt/nssdb or fd.name startswith /etc/pki/nssdb))
- macro: update_ca_trust_writing_pki
condition: (proc.pname=update-ca-trust and proc.name=trust and fd.name startswith /etc/pki)
- macro: brandbot_writing_os_release
condition: proc.name=brandbot and fd.name=/etc/os-release
- macro: selinux_writing_conf
condition: (proc.name in (semodule,genhomedircon,sefcontext_comp) and fd.name startswith /etc/selinux)
@@ -567,7 +596,7 @@
condition: (proc.name in (veritas_binaries) or veritas_driver_script)
- macro: veritas_writing_config
condition: (veritas_progs and fd.name startswith /etc/vx)
condition: (veritas_progs and (fd.name startswith /etc/vx or fd.name startswith /etc/opt/VRTS or fd.name startswith /etc/vom))
- macro: nginx_writing_conf
condition: (proc.name=nginx and fd.name startswith /etc/nginx)
@@ -593,6 +622,11 @@
- macro: exe_running_docker_save
condition: (proc.cmdline startswith "exe /var/lib/docker" and proc.pname in (dockerd, docker))
# Ideally we'd have a length check here as well but sysdig
# filterchecks don't have operators like len()
- macro: sed_temporary_file
condition: (proc.name=sed and fd.name startswith "/etc/sed")
- macro: python_running_get_pip
condition: (proc.cmdline startswith "python get-pip.py")
@@ -602,6 +636,21 @@
- macro: gugent_writing_guestagent_log
condition: (proc.name=gugent and fd.name=GuestAgent.log)
- macro: dse_writing_tmp
condition: (proc.name=dse-entrypoint and fd.name=/root/tmp__)
- macro: zap_writing_state
condition: (proc.name=java and proc.cmdline contains "jar /zap" and fd.name startswith /root/.ZAP)
- macro: airflow_writing_state
condition: (proc.name=airflow and fd.name startswith /root/airflow)
- macro: rpm_writing_root_rpmdb
condition: (proc.name=rpm and fd.directory=/root/.rpmdb)
- macro: maven_writing_groovy
condition: (proc.name=java and proc.cmdline contains "classpath /usr/local/apache-maven" and fd.name startswith /root/.groovy)
- rule: Write below binary dir
desc: an attempt to write to any file below a set of binary directories
condition: >
@@ -616,6 +665,45 @@
priority: ERROR
tags: [filesystem]
# If you'd like to generally monitor a wider set of directories on top
# of the ones covered by the rule Write below binary dir, you can use
# the following rule and lists.
- list: monitored_directories
items: [/boot, /lib, /lib64, /usr/lib, /usr/local/lib, /usr/local/sbin, /usr/local/bin, /root/.ssh, /etc/cardserver]
# Until https://github.com/draios/sysdig/pull/1153, which fixes
# https://github.com/draios/sysdig/issues/1152, is widely available,
# we can't use glob operators to match pathnames. Until then, we do a
# looser check to match ssh directories.
# When fixed, we will use "fd.name glob '/home/*/.ssh/*'"
- macro: user_ssh_directory
condition: (fd.name startswith '/home' and fd.name contains '.ssh')
- macro: mkinitramfs_writing_boot
condition: (proc.pname in (mkinitramfs, update-initramf) and fd.directory=/boot)
- macro: monitored_dir
condition: >
(fd.directory in (monitored_directories)
or user_ssh_directory)
and not mkinitramfs_writing_boot
- rule: Write below monitored dir
desc: an attempt to write to any file below a set of binary directories
condition: >
evt.dir = < and open_write and monitored_dir
and not package_mgmt_procs
and not coreos_write_ssh_dir
and not exe_running_docker_save
and not python_running_get_pip
and not python_running_ms_oms
output: >
File below a monitored directory opened for writing (user=%user.name
command=%proc.cmdline file=%fd.name parent=%proc.pname pcmdline=%proc.pcmdline gparent=%proc.aname[2])
priority: ERROR
tags: [filesystem]
- list: safe_etc_dirs
items: [/etc/cassandra, /etc/ssl/certs/java, /etc/logstash, /etc/nginx/conf.d, /etc/container_environment, /etc/hrmconfig]
@@ -677,7 +765,13 @@
condition: (proc.name=httpd and fd.name startswith /etc/httpd/)
- macro: mysql_writing_conf
condition: ((proc.name=start-mysql.sh or proc.pname=start-mysql.sh) and fd.name startswith /etc/mysql)
condition: >
((proc.name in (start-mysql.sh, run-mysqld) or proc.pname=start-mysql.sh) and
(fd.name startswith /etc/mysql or fd.directory=/etc/my.cnf.d))
- macro: redis_writing_conf
condition: >
(proc.name in (run-redis, redis-launcher.) and fd.name=/etc/redis.conf or fd.name startswith /etc/redis)
- macro: openvpn_writing_conf
condition: (proc.name in (openvpn,openvpn-entrypo) and fd.name startswith /etc/openvpn)
@@ -733,6 +827,7 @@
and not proc.pname in (sysdigcloud_binaries, mail_config_binaries, hddtemp.postins, sshkit_script_binaries, locales.postins, deb_binaries, dhcp_binaries)
and not fd.name pmatch (safe_etc_dirs)
and not fd.name in (/etc/container_environment.sh, /etc/container_environment.json, /etc/motd, /etc/motd.svc)
and not sed_temporary_file
and not exe_running_docker_save
and not ansible_running_python
and not python_running_denyhosts
@@ -754,7 +849,7 @@
and not supervise_writing_status
and not pki_realm_writing_realms
and not htpasswd_writing_passwd
and not lvprogs_writing_lvm_archive
and not lvprogs_writing_conf
and not ovsdb_writing_openvswitch
and not datadog_writing_conf
and not curl_writing_pki_db
@@ -791,6 +886,14 @@
and not cockpit_writing_conf
and not ipsec_writing_conf
and not httpd_writing_ssl_conf
and not userhelper_writing_etc_security
and not pkgmgmt_progs_writing_pki
and not update_ca_trust_writing_pki
and not brandbot_writing_os_release
and not redis_writing_conf
and not openldap_writing_conf
and not ucpagent_writing_conf
and not iscsi_writing_conf
- rule: Write below etc
desc: an attempt to write to any file below /etc
@@ -802,7 +905,7 @@
- list: known_root_files
items: [/root/.monit.state, /root/.auth_tokens, /root/.bash_history, /root/.ash_history, /root/.aws/credentials,
/root/.viminfo.tmp, /root/.lesshst, /root/.bzr.log, /root/.gitconfig.lock, /root/.babel.json, /root/.localstack,
/root/.node_repl_history, /root/.mongorc.js, /root/.dbshell, /root/.augeas/history, /root/.rnd]
/root/.node_repl_history, /root/.mongorc.js, /root/.dbshell, /root/.augeas/history, /root/.rnd, /root/.wget-hsts]
- list: known_root_directories
items: [/root/.oracle_jre_usage, /root/.ssh, /root/.subversion, /root/.nami]
@@ -837,7 +940,12 @@
or fd.name startswith /root/.dbus
or fd.name startswith /root/.composer
or fd.name startswith /root/.gconf
or fd.name startswith /root/.nv)
or fd.name startswith /root/.nv
or fd.name startswith /root/.local/share/jupyter
or fd.name startswith /root/oradiag_root
or fd.name startswith /root/workspace
or fd.name startswith /root/jvm
or fd.name startswith /root/.node-gyp)
- rule: Write below root
desc: an attempt to write to any file directly below / or /root
@@ -847,6 +955,11 @@
and not fd.directory in (known_root_directories)
and not exe_running_docker_save
and not gugent_writing_guestagent_log
and not dse_writing_tmp
and not zap_writing_state
and not airflow_writing_state
and not rpm_writing_root_rpmdb
and not maven_writing_groovy
and not known_root_conditions
output: "File below / or /root opened for writing (user=%user.name command=%proc.cmdline parent=%proc.pname file=%fd.name program=%proc.name)"
priority: ERROR
@@ -871,8 +984,8 @@
items: [
iptables, ps, lsb_release, check-new-relea, dumpe2fs, accounts-daemon, sshd,
vsftpd, systemd, mysql_install_d, psql, screen, debconf-show, sa-update,
pam-auth-update, /usr/sbin/spamd, polkit-agent-he, lsattr, file, sosreport,
scxcimservera, adclient, rtvscand, cockpit-session
pam-auth-update, pam-config, /usr/sbin/spamd, polkit-agent-he, lsattr, file, sosreport,
scxcimservera, adclient, rtvscand, cockpit-session, userhelper, ossec-syscheckd
]
# Add conditions to this macro (probably in a separate file,
@@ -918,8 +1031,8 @@
# Only let rpm-related programs write to the rpm database
- rule: Write below rpm database
desc: an attempt to write to the rpm database by any non-rpm related program
condition: fd.name startswith /var/lib/rpm and open_write and not rpm_procs and not ansible_running_python and not chef_running_yum_dump
output: "Rpm database opened for writing by a non-rpm program (command=%proc.cmdline file=%fd.name)"
condition: fd.name startswith /var/lib/rpm and open_write and not rpm_procs and not ansible_running_python and not python_running_chef
output: "Rpm database opened for writing by a non-rpm program (command=%proc.cmdline file=%fd.name parent=%proc.pname pcmdline=%proc.pcmdline)"
priority: ERROR
tags: [filesystem, software_mgmt]
@@ -940,6 +1053,9 @@
- macro: rabbitmqctl_running_scripts
condition: (proc.aname[2]=rabbitmqctl and proc.cmdline startswith "sh -c ")
- macro: run_by_appdynamics
condition: (proc.pname=java and proc.pcmdline startswith "java -jar -Dappdynamics")
- rule: DB program spawned process
desc: >
a database-server related program spawned a new process other than itself.
@@ -960,7 +1076,7 @@
condition: (bin_dir_rename) and modify and not package_mgmt_procs and not exe_running_docker_save
output: >
File below known binary directory renamed/removed (user=%user.name command=%proc.cmdline
operation=%evt.type file=%fd.name %evt.args)
pcmdline=%proc.pcmdline operation=%evt.type file=%fd.name %evt.args)
priority: ERROR
tags: [filesystem]
@@ -1072,6 +1188,17 @@
- macro: possibly_node_in_container
condition: (never_true and (proc.pname=node and proc.aname[3]=docker-containe))
# Similarly, you may want to consider any shell spawned by apache
# tomcat as suspect. The famous apache struts attack (CVE-2017-5638)
# could be exploited to do things like spawn shells.
#
# However, many applications *do* use tomcat to run arbitrary shells,
# as a part of build pipelines, etc.
#
# Like for node, we make this case opt-in.
- macro: possibly_parent_java_running_tomcat
condition: (never_true and proc.pname=java and proc.pcmdline contains org.apache.catalina.startup.Bootstrap)
- macro: protected_shell_spawner
condition: >
(proc.aname in (protected_shell_spawning_binaries)
@@ -1084,6 +1211,7 @@
or parent_java_running_glassfish
or parent_java_running_hadoop
or parent_java_running_datastax
or possibly_parent_java_running_tomcat
or possibly_node_in_container)
- list: mesos_shell_binaries
@@ -1122,11 +1250,12 @@
and not redis_running_prepost_scripts
and not rabbitmq_running_scripts
and not rabbitmqctl_running_scripts
and not run_by_appdynamics
and not user_shell_container_exclusions
output: >
Shell spawned by untrusted binary (user=%user.name shell=%proc.name parent=%proc.pname
cmdline=%proc.cmdline pcmdline=%proc.pcmdline gparent=%proc.aname[2] ggparent=%proc.aname[3]
gggparent=%proc.aname[4] ggggparent=%proc.aname[5])
aname[4]=%proc.aname[4] aname[5]=%proc.aname[5] aname[6]=%proc.aname[6] aname[7]=%proc.aname[7])
priority: DEBUG
tags: [shell]
@@ -1146,11 +1275,16 @@
container.image startswith registry.access.redhat.com/openshift3/metrics-cassandra or
container.image startswith openshift3/ose-sti-builder or
container.image startswith registry.access.redhat.com/openshift3/ose-sti-builder or
container.image startswith registry.access.redhat.com/openshift3/ose-docker-builder or
container.image startswith registry.access.redhat.com/openshift3/image-inspector or
container.image startswith cloudnativelabs/kube-router or
container.image startswith "consul:" or
container.image startswith mesosphere/mesos-slave or
container.image startswith istio/proxy_ or
container.image startswith datadog/docker-dd-agent)
container.image startswith datadog/docker-dd-agent or
container.image startswith datadog/agent or
container.image startswith docker/ucp-agent or
container.image startswith gliderlabs/logspout)
# Add conditions to this macro (probably in a separate file,
# overwriting this macro) to specify additional containers that are
@@ -1330,7 +1464,7 @@
condition: >
(fd.sockfamily = ip and system_procs)
and (inbound_outbound)
and not proc.name in (systemd, hostid)
and not proc.name in (systemd, hostid, id)
and not login_doing_dns_lookup
output: >
Known system binary sent/received network traffic


@@ -713,3 +713,30 @@ trace_files: !mux
- open_dev_null: 1
dev_null: 0
trace_file: trace_files/cat_write.scap
skip_unknown_noevt:
detect: False
stdout_contains: Skipping rule "Contains Unknown Event And Skipping" that contains unknown filter proc.nobody
rules_file:
- rules/skip_unknown_evt.yaml
trace_file: trace_files/cat_write.scap
skip_unknown_prefix:
detect: False
rules_file:
- rules/skip_unknown_prefix.yaml
trace_file: trace_files/cat_write.scap
skip_unknown_error:
exit_status: 1
stderr_contains: Rule "Contains Unknown Event And Not Skipping" contains unknown filter proc.nobody. Exiting.
rules_file:
- rules/skip_unknown_error.yaml
trace_file: trace_files/cat_write.scap
skip_unknown_unspec_error:
exit_status: 1
stderr_contains: Rule "Contains Unknown Event And Unspecified" contains unknown filter proc.nobody. Exiting.
rules_file:
- rules/skip_unknown_unspec.yaml
trace_file: trace_files/cat_write.scap


@@ -0,0 +1,6 @@
- rule: Contains Unknown Event And Not Skipping
desc: Contains an unknown event
condition: proc.nobody=cat
output: Never
skip-if-unknown-filter: false
priority: INFO


@@ -0,0 +1,6 @@
- rule: Contains Unknown Event And Skipping
desc: Contains an unknown event
condition: evt.type=open and proc.nobody=cat
output: Never
skip-if-unknown-filter: true
priority: INFO


@@ -0,0 +1,8 @@
- rule: Contains Prefix of Filter
desc: Testing matching filter prefixes
condition: >
evt.type=open and evt.arg.path="foo" and evt.arg[0]="foo"
and proc.aname="ls" and proc.aname[1]="ls"
and proc.apid=10 and proc.apid[1]=10
output: Never
priority: INFO


@@ -0,0 +1,5 @@
- rule: Contains Unknown Event And Unspecified
desc: Contains an unknown event
condition: proc.nobody=cat
output: Never
priority: INFO


@@ -3,6 +3,7 @@ include_directories("${PROJECT_SOURCE_DIR}/../sysdig/userspace/libscap")
include_directories("${PROJECT_SOURCE_DIR}/../sysdig/userspace/libsinsp")
include_directories("${PROJECT_BINARY_DIR}/userspace/engine")
include_directories("${LUAJIT_INCLUDE}")
include_directories("${CURL_INCLUDE_DIR}")
add_library(falco_engine STATIC rules.cpp falco_common.cpp falco_engine.cpp token_bucket.cpp formats.cpp)


@@ -322,6 +322,21 @@ function get_evttypes_syscalls(name, ast, source, warn_evttypes)
return evttypes, syscallnums
end
function get_filters(ast)
local filters = {}
function cb(node)
if node.type == "FieldName" then
filters[node.value] = 1
end
end
parser.traverse_ast(ast.filter.value, {FieldName=1} , cb)
return filters
end
function compiler.expand_lists_in(source, list_defs)
for name, def in pairs(list_defs) do
@@ -408,7 +423,9 @@ function compiler.compile_filter(name, source, macro_defs, list_defs, warn_evtty
evttypes, syscallnums = get_evttypes_syscalls(name, ast, source, warn_evttypes)
return ast, evttypes, syscallnums
filters = get_filters(ast)
return ast, evttypes, syscallnums, filters
end


@@ -275,6 +275,12 @@ function load_rules(rules_content, rules_mgr, verbose, all_events, extra, replac
error ("Missing name in rule")
end
-- By default, if a rule's condition refers to an unknown
-- filter like evt.type, etc the loader throws an error.
if v['skip-if-unknown-filter'] == nil then
v['skip-if-unknown-filter'] = false
end
-- Possibly append to the condition field of an existing rule
append = false
@@ -378,9 +384,34 @@ function load_rules(rules_content, rules_mgr, verbose, all_events, extra, replac
warn_evttypes = v['warn_evttypes']
end
local filter_ast, evttypes, syscallnums = compiler.compile_filter(v['rule'], v['condition'],
state.macros, state.lists,
warn_evttypes)
local filter_ast, evttypes, syscallnums, filters = compiler.compile_filter(v['rule'], v['condition'],
state.macros, state.lists,
warn_evttypes)
-- If a filter in the rule doesn't exist, either skip the rule
-- or raise an error, depending on the value of
-- skip-if-unknown-filter.
for filter, _ in pairs(filters) do
found = false
for pat, _ in pairs(defined_filters) do
if string.match(filter, pat) ~= nil then
found = true
break
end
end
if not found then
if v['skip-if-unknown-filter'] then
if verbose then
print("Skipping rule \""..v['rule'].."\" that contains unknown filter "..filter)
end
goto next_rule
else
error("Rule \""..v['rule'].."\" contains unknown filter "..filter)
end
end
end
if (filter_ast.type == "Rule") then
state.n_rules = state.n_rules + 1
@@ -418,6 +449,8 @@ function load_rules(rules_content, rules_mgr, verbose, all_events, extra, replac
if (v['enabled'] == false) then
falco_rules.enable_rule(rules_mgr, v['rule'], 0)
else
falco_rules.enable_rule(rules_mgr, v['rule'], 1)
end
-- If the format string contains %container.info, replace it
@@ -452,6 +485,8 @@ function load_rules(rules_content, rules_mgr, verbose, all_events, extra, replac
else
error ("Unexpected type in load_rule: "..filter_ast.type)
end
::next_rule::
end
if verbose then


@@ -258,6 +258,63 @@ void falco_rules::load_rules(const string &rules_content,
lua_setglobal(m_ls, m_lua_ignored_syscalls.c_str());
// Create a table containing all filtercheck names.
lua_newtable(m_ls);
vector<const filter_check_info*> fc_plugins;
sinsp::get_filtercheck_fields_info(&fc_plugins);
for(uint32_t j = 0; j < fc_plugins.size(); j++)
{
const filter_check_info* fci = fc_plugins[j];
if(fci->m_flags & filter_check_info::FL_HIDDEN)
{
continue;
}
for(int32_t k = 0; k < fci->m_nfields; k++)
{
const filtercheck_field_info* fld = &fci->m_fields[k];
if(fld->m_flags & EPF_TABLE_ONLY ||
fld->m_flags & EPF_PRINT_ONLY)
{
continue;
}
// Some filters can work with or without an argument
std::set<string> flexible_filters = {
"^proc.aname",
"^proc.apid"
};
std::list<string> fields;
std::string field_base = string("^") + fld->m_name;
if(fld->m_flags & EPF_REQUIRES_ARGUMENT ||
flexible_filters.find(field_base) != flexible_filters.end())
{
fields.push_back(field_base + "[%[%.]");
}
if(!(fld->m_flags & EPF_REQUIRES_ARGUMENT) ||
flexible_filters.find(field_base) != flexible_filters.end())
{
fields.push_back(field_base + "$");
}
for(auto &field : fields)
{
lua_pushstring(m_ls, field.c_str());
lua_pushnumber(m_ls, 1);
lua_settable(m_ls, -3);
}
}
}
lua_setglobal(m_ls, m_lua_defined_filters.c_str());
lua_pushstring(m_ls, rules_content.c_str());
lua_pushlightuserdata(m_ls, this);
lua_pushboolean(m_ls, (verbose ? 1 : 0));


@@ -56,6 +56,7 @@ class falco_rules
string m_lua_load_rules = "load_rules";
string m_lua_ignored_syscalls = "ignored_syscalls";
string m_lua_ignored_events = "ignored_events";
string m_lua_defined_filters = "defined_filters";
string m_lua_events = "events";
string m_lua_syscalls = "syscalls";
string m_lua_describe_rule = "describe_rule";


@@ -158,7 +158,7 @@ uint64_t do_inspect(falco_engine *engine,
bool all_events)
{
uint64_t num_evts = 0;
int32_t res;
int32_t rc;
sinsp_evt* ev;
StatsFileWriter writer;
uint64_t duration_start = 0;
@@ -179,7 +179,7 @@ uint64_t do_inspect(falco_engine *engine,
while(1)
{
res = inspector->next(&ev);
rc = inspector->next(&ev);
writer.handle();
@@ -193,21 +193,21 @@ uint64_t do_inspect(falco_engine *engine,
{
break;
}
else if(res == SCAP_TIMEOUT)
else if(rc == SCAP_TIMEOUT)
{
continue;
}
else if(res == SCAP_EOF)
else if(rc == SCAP_EOF)
{
break;
}
else if(res != SCAP_SUCCESS)
else if(rc != SCAP_SUCCESS)
{
//
// Event read error.
// Notify the chisels that we're exiting, and then die with an error.
//
cerr << "res = " << res << endl;
cerr << "rc = " << rc << endl;
throw sinsp_exception(inspector->getlasterr().c_str());
}