falco/test/run_regression_tests.sh
Mark Stemm 917d66e9e8 Create embeddable falco engine.
Create standalone classes falco_engine/falco_outputs that can be
embedded in other programs. falco_engine is responsible for matching
events against rules, and falco_outputs is responsible for formatting an
alert string for a given event and writing that alert string to all
configured outputs.

falco_engine's main interfaces are:

 - load_rules/load_rules_file: Given a path to a rules file or a string
   containing a set of rules, load the rules. Also loads needed lua code.
 - process_event(): check the event against the set of rules and return
   the results of a match, if any.
 - describe_rule(): print details on a specific rule or all rules.
 - print_stats(): print stats on the rules that matched.
 - enable_rule(): enable/disable any rules matching a pattern. New falco
   command line option -D allows you to disable one or more rules on the
   command line.
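
For illustration, a rough sketch of how an embedding program might drive
falco_engine follows. The method names come from the list above, but the
header name, the exact signatures, and the shape of the match result are
assumptions, not the real falco API:

    // Hypothetical embedding; headers, signatures and result type are assumed.
    #include <sinsp.h>
    #include "falco_engine.h"

    void run(sinsp *inspector, falco_engine &engine)
    {
        engine.load_rules_file("/etc/falco_rules.yaml");
        engine.enable_rule("some_rule_pattern", false); // roughly what "falco -D <pattern>" does

        sinsp_evt *ev;
        while(inspector->next(&ev) == SCAP_SUCCESS)
        {
            auto res = engine.process_event(ev); // assumed: empty/false result when nothing matches
            if(res)
            {
                // hand the match to falco_outputs (see the next sketch)
            }
        }
        engine.print_stats();
    }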

falco_outputs' main interfaces are:
 - init(): load needed lua code.
 - add_output(): add an output channel for alert notifications.
 - handle_event(): given an event that matches one or more rules, format
   an alert message and send it to any output channels.
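
A similarly hedged sketch of the output side; the argument lists here are
guesses based only on the description above:

    // Hypothetical use of falco_outputs; every signature is an assumption.
    #include <sinsp.h>
    #include "falco_outputs.h"

    void setup_outputs(falco_outputs &outputs)
    {
        outputs.init(); // load the needed lua code
        // outputs.add_output(<channel config>);  // once per configured output
        //                                        // channel; real arguments unknown
    }

    void alert_on_match(falco_outputs &outputs, sinsp_evt *ev)
    {
        // format an alert for an event that matched one or more rules and
        // send it to every registered output channel
        outputs.handle_event(ev); // assumed: may also take details of the match
    }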

falco_engine and falco_outputs each maintain a separate lua state and
load separate sets of lua files. The code that creates and initializes
the lua state lives in a common base class, falco_common.
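
Roughly, that split could look like the following; the member and method
names here are invented for illustration, and the real falco_common almost
certainly differs:

    // Hypothetical shape only.
    #include <lua.hpp>

    class falco_common
    {
    protected:
        lua_State *m_ls = nullptr;

        void init_lua()
        {
            m_ls = luaL_newstate(); // each subclass gets its own lua state
            luaL_openlibs(m_ls);
            // ...then load that subclass's set of lua files into m_ls
        }
    };

    class falco_engine  : public falco_common { /* rule loading and matching */ };
    class falco_outputs : public falco_common { /* alert formatting and output */ };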

falco_engine no longer logs anything; in the case of errors, it throws
exceptions. falco_logger is now used only as a logging mechanism for
falco itself and as an output method for alert messages. (These two
roles should probably be split, but it's ok for now.)

falco_engine contains a sinsp_evttype_filter object holding the set
of eventtype filters. Instead of calling
m_inspector->add_evttype_filter() to add a filter created by the
compiler, call falco_engine::add_evttype_filter(). This means
that the inspector runs with a NULL filter and all events are returned
from do_inspect. This depends on
https://github.com/draios/sysdig/pull/633, which adds a wrapper around a
set of eventtype filters.
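
A hedged sketch of the new wiring (the parameter list of
add_evttype_filter() is assumed here, not taken from the actual code):

    // Hypothetical; exact parameters of add_evttype_filter() are assumed.
    #include <cstdint>
    #include <set>
    #include <string>
    #include <sinsp.h>
    #include "falco_engine.h"

    void register_rule_filter(falco_engine &engine,
                              const std::string &rule_name,
                              const std::set<uint32_t> &evttypes,
                              sinsp_filter *filter)
    {
        // Previously the compiled filter went straight onto the inspector:
        //   m_inspector->add_evttype_filter(rule_name, evttypes, filter);
        // Now the engine owns the sinsp_evttype_filter collection, the
        // inspector runs unfiltered, and every event returned by do_inspect()
        // is offered to the engine.
        engine.add_evttype_filter(rule_name, evttypes, filter);
    }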

Some additional changes along with creating these classes:

- Clean up unnecessary header files, cmake include_directories() calls,
  etc., so that only the needed headers are included, and so that headers
  are only included in header files when required.

- Avoid 'using namespace std' in header files, and avoid assuming that
  someone else has done so; instead, generally add 'using namespace std'
  to source files.

- Instead of using sinsp_exception for all errors, define a
  falco_engine_exception class for exceptions coming from the falco
  engine and use it instead. For falco program code, switch to general
  exceptions under std::exception, and catch and display an error for all
  exceptions, not just sinsp_exceptions (a sketch of this pattern appears
  after this list).

- Remove fields.{cpp,h}. This was dead code.

- Start tracking counts of rules by priority string (i.e. what's in the
  falco rules file) as compared to priority level (i.e. roughly
  corresponding to a syslog level). This keeps the rule processing and
  rule output halves separate. This led to some test changes. The regex
  used in the test is now case insensitive to be a bit more flexible.

- Now that https://github.com/draios/sysdig/pull/632 is merged, we can
  delete the rules object (and its lua_parser) safely.

- Move loading the initial lua script to the constructor. Otherwise,
  calling load_rules() twice re-loads the lua script and throws away any
  state like the mapping from rule index to rule.

- Allow an empty rules file.
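
As noted in the exceptions item above, here is a minimal sketch of the
catch-all pattern; nothing beyond what() via std::exception is assumed
about falco_engine_exception, and the load_rules_file() signature is a
guess:

    #include <cstdio>
    #include <cstdlib>
    #include <exception>
    #include "falco_engine.h"   // assumed to define falco_engine_exception

    void load_or_exit(falco_engine &engine, const char *rules_path)
    {
        try
        {
            engine.load_rules_file(rules_path);
        }
        catch(falco_engine_exception &e)
        {
            // engine-specific failure, e.g. a rules file that doesn't parse
            fprintf(stderr, "Falco engine error: %s\n", e.what());
            exit(1);
        }
        catch(std::exception &e)
        {
            // anything else thrown by falco program code
            fprintf(stderr, "Error: %s\n", e.what());
            exit(1);
        }
    }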

Finally, fix most memory leaks found by valgrind:

 - falco_configuration wasn't deleting the allocated m_config yaml
   config.
 - several ifstreams were being created simply to test which falco
   config file to use.
 - In the lua output methods, an event formatter was being created using
   falco.formatter() but there was no corresponding free_formatter().

This depends on changes in https://github.com/draios/sysdig/pull/640.
2016-10-24 15:56:45 -07:00


#!/bin/bash

SCRIPT=$(readlink -f $0)
SCRIPTDIR=$(dirname $SCRIPT)
MULT_FILE=$SCRIPTDIR/falco_tests.yaml
BRANCH=$1

function download_trace_files() {
    echo "branch=$BRANCH"

    # Fetch the branch-specific trace archives when they exist, falling back
    # to the generic archives, then unpack them next to this script.
    for TRACE in traces-positive traces-negative traces-info ; do
        rm -rf $SCRIPTDIR/$TRACE
        curl -fso $SCRIPTDIR/$TRACE.zip https://s3.amazonaws.com/download.draios.com/falco-tests/$TRACE-$BRANCH.zip || curl -fso $SCRIPTDIR/$TRACE.zip https://s3.amazonaws.com/download.draios.com/falco-tests/$TRACE.zip &&
        unzip -d $SCRIPTDIR $SCRIPTDIR/$TRACE.zip &&
        rm -rf $SCRIPTDIR/$TRACE.zip
    done
}

function prepare_multiplex_fileset() {
    dir=$1
    detect=$2
    detect_level=$3
    json_output=$4

    # Append one multiplex entry per trace file. The heredoc body is indented
    # so each entry becomes a nested mapping in the generated YAML.
    for trace in $SCRIPTDIR/$dir/*.scap ; do
        [ -e "$trace" ] || continue
        NAME=`basename $trace .scap`
        cat << EOF >> $MULT_FILE
  $NAME-detect-$detect-json-$json_output:
    detect: $detect
    detect_level: $detect_level
    trace_file: $trace
    json_output: $json_output
EOF
    done
}

function prepare_multiplex_file() {
    cp $SCRIPTDIR/falco_tests.yaml.in $MULT_FILE

    prepare_multiplex_fileset traces-positive True WARNING False
    prepare_multiplex_fileset traces-negative False WARNING True
    prepare_multiplex_fileset traces-info True INFO False
    prepare_multiplex_fileset traces-positive True WARNING True
    prepare_multiplex_fileset traces-info True INFO True

    echo "Contents of $MULT_FILE:"
    cat $MULT_FILE
}

function run_tests() {
    CMD="avocado run --multiplex $MULT_FILE --job-results-dir $SCRIPTDIR/job-results -- $SCRIPTDIR/falco_test.py"
    echo "Running: $CMD"
    $CMD
    TEST_RC=$?
}

function print_test_failure_details() {
    echo "Showing full job logs for any tests that failed:"
    jq '.tests[] | select(.status != "PASS") | .logfile' $SCRIPTDIR/job-results/latest/results.json | xargs cat
}

download_trace_files
prepare_multiplex_file
run_tests

if [ $TEST_RC -ne 0 ]; then
    print_test_failure_details
fi

exit $TEST_RC