Instead of having FALCO_SHARE_DIR be a relative path, fully specify it
by prepending CMAKE_INSTALL_PREFIX in the top-level CMakeLists.txt, and
stop prepending CMAKE_INSTALL_PREFIX in config_falco_engine.h.in. This
makes it consistent with its use in the agent.
Collect stats on the number of events processed and dropped. When run
with -v, print these stats. This duplicates sysdig behavior and can be
useful when diagnosing problems related to dropped events throwing off
internal state tracking.
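As a rough sketch (assuming sysdig's scap_stats struct and
sinsp::get_capture_stats(), which is how sysdig itself reads these
numbers), the verbose stats report amounts to:

    #include <cinttypes>
    #include <cstdio>
    #include <sinsp.h>

    // Sketch: print processed/dropped counts from the inspector.
    // scap_stats and get_capture_stats() are the assumed sysdig APIs.
    void print_event_stats(sinsp* inspector)
    {
        scap_stats s;
        inspector->get_capture_stats(&s);

        fprintf(stderr, "Events processed: %" PRIu64 "\n", s.n_evts);
        fprintf(stderr, "Events dropped:   %" PRIu64 "\n", s.n_drops);
    }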
Bring over functionality from sysdig to write trace files. This is
easy, as all of the code that actually writes the files is in the
inspector; this change just handles the -w option and arguments.
This can be useful to write a trace file in parallel with live event
monitoring so you can reproduce it later.
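A minimal sketch of the wiring, assuming sinsp::autodump_start() (the
inspector call sysdig uses for dump files):

    #include <string>
    #include <sinsp.h>

    // Sketch: -w <outfile> starts a parallel trace dump; every event
    // seen during live monitoring is also written to the file.
    void start_trace_dump(sinsp* inspector, const std::string& outfile)
    {
        inspector->autodump_start(outfile, false /* no compression */);
    }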
The logic for detecting whether a file exists was backwards. It would
treat a file as existing if it could *not* be opened. Reverse that
logic so it works.
This fixes https://github.com/draios/falco/issues/135.
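The corrected check reduces to "exists only if it can actually be
opened"; a standalone equivalent:

    #include <fstream>
    #include <string>

    // Returns true only when the file can be opened for reading.
    // (The bug was returning true when the open *failed*.)
    static bool file_exists(const std::string& path)
    {
        std::ifstream f(path);
        return f.is_open();
    }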
Copy handling of the -pk/-pm/-pc/-k/-m arguments from sysdig. All of
the relevant code was already in the inspector, so this was easy.
The information from k8s/mesos/containers is used in two ways (a sketch
of the substitution follows the list):
- In rule outputs, if the format string contains %container.info, that
is replaced with the value from -pk/-pm/-pc, if one of those options
was provided. If no option was provided, %container.info is replaced
with a generic %container.name (id=%container.id) instead.
- If the format string does not contain %container.info, and one of
-pk/-pm/-pc was provided, that is added to the end of the formatting
string.
- If -p was specified with a general value (i.e. not
kubernetes/mesos/container), the value is simply added to the end and
any %container.info is replaced with the generic value.
There are a lot of command line options now, so sort them alphabetically
in the usage and getopt handling to make them easier to find.
Also rename -p <pidfile> to -P <pidfile>, thinking ahead to the next
commit.
If a rule has an enabled attribute, and the value is false, call the
engine's enable_rule() method to disable the rule. Like add_filter,
there's a static method which takes the object as the first argument and
a non-static method that calls the engine.
This fixes #72.
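The shape of the pair of methods, sketched with approximate signatures
(mirroring the add_filter pattern):

    #include <string>

    // Hypothetical sketch of the two entry points.
    class falco_engine
    {
    public:
        // Enable/disable all rules whose name matches the pattern.
        void enable_rule(std::string& pattern, bool enabled);

        // Static wrapper callable from lua: takes the engine object as
        // the first argument and forwards to the member function.
        static void engine_enable_rule(falco_engine* engine,
                                       std::string& pattern,
                                       bool enabled)
        {
            engine->enable_rule(pattern, enabled);
        }
    };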
The falco engine changes broke the output methods that take
configuration (like the filename for file output, or the program for
program output). Fix that by properly passing the options argument to
each method's output function.
Add tests that cover reading from multiple sets of rule files and
disabling rules. Specific changes:
- Modify falco to allow multiple -r arguments to read from multiple
files.
- In the test multiplex file, add a disabled_rules attribute,
containing a sequence of rules to disable. These result in -D
arguments when running falco.
- In the test multiplex file, 'rules_file' can be a sequence. It
results in multiple -r arguments when running falco.
- In the test multiplex file, 'detect_level' can be a sequence of
multiple severity levels. All levels will be checked for in the
output.
- Move all test rules files to a rules subdirectory and all trace files
to a traces subdirectory.
- Add a small trace file for a simple cat of /dev/null. Used by the
new tests.
- Add the following new tests:
- Reading from multiple files, with the first file being
empty. Ensure that the rules from the second file are properly
loaded.
- Reading from multiple files with the last being empty. Ensures
that the empty file doesn't overwrite anything from the first
file.
- Reading from multiple files with varying severity levels for each
rule. Ensures that both files are properly read.
- Disabling rules from a rules file, both with full rule names
and regexes. Will result in not detecting anything.
Add the ability to drop events at the falco engine level in a way that
can scale with the dropping that already occurs at the kernel/inspector
level.
New inline function should_drop_evt() controls whether or not events are
matched against the set of rules, and is controlled by two
values--sampling ratio and sampling multiplier.
Here's how the sampling ratio and multiplier influence whether or not an
event is dropped in should_drop_evt(). The intent is that
m_sampling_ratio generally changes external to the engine, e.g. in
the main inspector class based on how busy the inspector is. A sampling
ratio of 1 implies no dropping. Values > 1 imply increasing levels of
dropping. External to the engine, the sampling ratio results in events
being dropped at the kernel/inspector interface. The sampling
multiplier is an amplification to the sampling factor in
m_sampling_ratio. If 0, no additional events are dropped other than
those that might be dropped by the kernel/inspector interface. If 1,
events that make it past the kernel module are subject to an additional
level of dropping at the falco engine, scaling with the sampling ratio
in m_sampling_ratio.
Unlike the dropping that occurs at the kernel level, where the events in
the first part of each second are dropped, this dropping is random.
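A standalone sketch of the sampling logic described above (member names
taken from the text; the random test is illustrative):

    #include <cstdlib>
    #include <cstdint>

    uint32_t m_sampling_ratio = 1;      // 1 == no dropping
    double m_sampling_multiplier = 0;   // 0 == no engine-level dropping

    // Returns true if this event should be skipped (not matched
    // against the rules). Unlike the kernel-level dropping, which
    // drops the first part of each second, this drops at random.
    inline bool should_drop_evt()
    {
        if(m_sampling_multiplier == 0)
        {
            return false;
        }

        double coin = (random() % 100000) / 100000.0;
        return (coin >= (1.0 / (m_sampling_ratio * m_sampling_multiplier)));
    }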
Move the c++ and lua code implementing falco engine/falco common to its
own directory userspace/engine. It's compiled as a static library
libfalco_engine.a, and has its own CMakeLists.txt so it can be included
by other projects.
The engine's CMakeLists.txt has a add_subdirectory for the falco rules
directory, so including the engine also builds the rules.
The variables you need to set to use the engine's CMakeLists.txt are:
- CMAKE_INSTALL_PREFIX: the root directory below which everything is
installed.
- FALCO_ETC_DIR: where to install the rules file.
- FALCO_SHARE_DIR: where to install lua code, relative to the
install/package root.
- LUAJIT_INCLUDE: where to find header files for lua.
- FALCO_SINSP_LIBRARY: the library containing sinsp code. It will be
considered a dependency of the engine.
- LPEG_LIB/LYAML_LIB/LIBYAML_LIB: locations for third-party libraries.
- FALCO_COMPONENT: if set, will be included as a part of any install()
commands.
Instead of specifying /usr/share/falco in config_falco_*.h.in, use
CMAKE_INSTALL_PREFIX and FALCO_SHARE_DIR.
The lua code for the engine has also moved, so the two lua source
directories (userspace/engine/lua and userspace/falco/lua) need to be
available separately via falco_common, so make it an argument to
falco_common::init.
As a part of making it easy to include in another project, also clean up
LPEG build/defs. Modify build-lpeg to add a PREFIX argument to allow for
object files/libraries being in an alternate location, and when building
lpeg, put object files in a build/ subdirectory.
Create standalone classes falco_engine/falco_outputs that can be
embedded in other programs. falco_engine is responsible for matching
events against rules, and falco_outputs is responsible for formatting an
alert string given an event and writing the alert string to all
configured outputs.
falco_engine's main interfaces are:
- load_rules/load_rules_file: Given a path to a rules file or a string
containing a set of rules, load the rules. Also loads needed lua code.
- process_event(): check the event against the set of rules and return
the results of a match, if any.
- describe_rule(): print details on a specific rule or all rules.
- print_stats(): print stats on the rules that matched.
- enable_rule(): enable/disable any rules matching a pattern. New falco
command line option -D allows you to disable one or more rules on the
command line.
falco_outputs' main interfaces are (a combined usage sketch follows the
list):
- init(): load needed lua code.
- add_output(): add an output channel for alert notifications.
- handle_event(): given an event that matches one or more rules, format
an alert message and send it to any output channels.
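Putting the two classes together, an embedding program would look
roughly like this (a sketch only; the signatures and rule_result fields
are approximations of the interfaces described above):

    #include <memory>
    #include <sinsp.h>
    #include "falco_engine.h"
    #include "falco_outputs.h"

    // Sketch: drive the engine from an inspector loop and hand any
    // matches to the outputs object.
    void process_events(sinsp* inspector, falco_engine* engine, falco_outputs* outputs)
    {
        sinsp_evt* ev;
        while(inspector->next(&ev) == SCAP_SUCCESS)
        {
            // Check the event against the loaded rules...
            std::unique_ptr<falco_engine::rule_result> res(engine->process_event(ev));

            // ...and on a match, format the alert and send it to all
            // configured output channels.
            if(res)
            {
                outputs->handle_event(res->evt, res->rule, res->priority, res->format);
            }
        }
    }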
Each of falco_engine/falco_outputs maintains a separate lua state and
loads separate sets of lua files. The code to create and initialize the
lua state is in a base class falco_common.
falco_engine no longer logs anything. In the case of errors, it throws
exceptions. falco_logger is now only used as a logging mechanism for
falco itself and as an output method for alert messages. (This should
really probably be split, but it's ok for now).
falco_engine contains an sinsp_evttype_filter object containing the set
of eventtype filters. Instead of calling
m_inspector->add_evttype_filter() to add a filter created by the
compiler, call falco_engine::add_evttype_filter() instead. This means
that the inspector runs with a NULL filter and all events are returned
from do_inspect. This depends on
https://github.com/draios/sysdig/pull/633 which has a wrapper around a
set of eventtype filters.
Some additional changes along with creating these classes:
- Some cleanups of unnecessary header files, cmake include_directories(),
etc to only include necessary includes and only include them in header
files when required.
- Try to avoid 'using namespace std' in header files, or assuming
someone else has done that. Generally add 'using namespace std' to all
source files.
- Instead of using sinsp_exception for all errors, define a
falco_engine_exception class for exceptions coming from the falco
engine and use it instead. For falco program code, switch to general
exceptions under std::exception and catch + display an error for all
exceptions, not just sinsp_exceptions.
- Remove fields.{cpp,h}. This was dead code.
- Start tracking counts of rules by priority string (i.e. what's in the
falco rules file) as compared to priority level (i.e. roughly
corresponding to a syslog level). This keeps the rule processing and
rule output halves separate. This led to some test changes. The regex
used in the test is now case insensitive to be a bit more flexible.
- Now that https://github.com/draios/sysdig/pull/632 is merged, we can
delete the rules object (and its lua_parser) safely.
- Move loading the initial lua script to the constructor. Otherwise,
calling load_rules() twice re-loads the lua script and throws away any
state like the mapping from rule index to rule.
- Allow an empty rules file.
Finally, fix most memory leaks found by valgrind:
- falco_configuration wasn't deleting the allocated m_config yaml
config.
- several ifstreams were being created simply to test which falco
config file to use.
- In the lua output methods, an event formatter was being created using
falco.formatter() but there was no corresponding free_formatter().
This depends on changes in https://github.com/draios/sysdig/pull/640.
New command line option -A, tied to the boolean all_events, instructs
falco to run on all events, not just those without the EF_DROP_FALCO
flag set.
When all_events is true, the checks for ignored events/syscalls are
skipped when loading rules.
Add a new output type "program" that writes a formatted event to a
configurable program, using io.popen().
Each notification results in one invocation of the program.
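The engine-side behavior maps directly onto popen(3); a standalone C++
equivalent of what the lua does:

    #include <cstdio>
    #include <string>

    // Sketch: send one formatted alert to a program. Each notification
    // opens, writes to, and closes a new pipe, matching the
    // one-invocation-per-notification behavior.
    static void program_output(const std::string& program, const std::string& msg)
    {
        FILE* p = popen(program.c_str(), "w");
        if(p != nullptr)
        {
            fprintf(p, "%s\n", msg.c_str());
            pclose(p);
        }
    }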
Instead of combining all rules into one huge filter expression and
giving it to the inspector, keep each filter expression separate and
annotate it with the events for which the rule applies.
This uses the capabilities in draios/sysdig#627
to have multiple sets of event-specific filters.
Change traverse_ast to allow a set of node types instead of a single
node type.
Within the compiler, a new pass over the ast, get_evttypes, looks for
evt.type clauses, converts each evt.type string to the event type ids
for which it may apply, and passes those back with the compiled rule.
As rule conditions may refer to evt.types in negative
contexts (i.e. evt.type != XXX, or not evt.type = XXX), this pass
prefers rules that list event type checks at the beginning of
conditions, and allows other rules with a warning.
When traversing the ast looking for evt.type checks, once any "!=" or
"not ..." is seen, no other evt.type checks are "allowed". If one
is found, the rule is considered ambiguous wrt event types. In this
case, a warning is printed and the rule is associated with a catchall
set that runs for all event types.
Also, instead of rejecting rules with no event type check, print a
warning and associate it with the catchall set.
In the rule loader, create a new global table, events, that maps each
event name as a string to the list of event ids for which it may
apply. Instead of
calling install_filter once after all rules have been loaded, call a new
function add_filter for each rule. In turn, it passes the rule and list
of event ids to the inspector using add_evttype_filter().
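On the C++ side, the per-rule registration then looks roughly like this
(signature assumed from the sysdig PR referenced above; an empty event
type list would be the catchall case):

    #include <cstdint>
    #include <list>
    #include <string>
    #include <sinsp.h>

    // Sketch: hand one compiled rule to the inspector, tagged with the
    // event type ids it can match.
    void register_rule(sinsp* inspector, std::string& rule,
                       std::list<uint32_t>& evttypes, sinsp_filter* filter)
    {
        inspector->add_evttype_filter(rule, evttypes, filter);
    }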
Also, with -v (verbose), print the exact set of event types found for
each rule. This is used by an upcoming change to the set of unit
tests.
Add a verbose flag -v which implies printing additional info. This is
passed down to lua during load_rules and sets the per-module verbose
value for the compiler and parser modules.
Later commits will use this to print additional info when loading rules.
https://github.com/draios/sysdig/pull/623 adds support for a startswith
operator to allow for string prefix matching. Modify the parser to
recognize that operator, and use it for rules that really want to check
the beginning of a pathname, directory, etc. (e.g. fd.directory
startswith /etc rather than a contains match) to make them faster and
avoid FPs.
Once sysdig adds support for handling "in (...)" filter expressions as
set membership tests, it will be advantageous to combine lists of items
together into a single list so they can all be checked in a single set
membership test.
This commit adds support for a new yaml item type "list" containing a
field "name" and field "items" containing a list of items. These are
represented as a yaml list, which allows yaml to handle some of the
initial parsing with the list items maintained natively in lua.
Allow lists to contain list references by expanding any references to
the items in the list, before storing the list items in
state.lists.
When parsing macro or rule conditions, replace all references to a list
name with the list items as a comma separated string.
Modify the falco rules to switch to lists whenever possible. The
new convention is to use the suffix _binaries for lists of program names
and _procs for macros that define a filter expression using the list.
Instead of using sysdig's json output, which only contains the fields
from the format string without any formatting text, use the string
output to build a json object containing the format string, rule name,
severity, and the event time (converted to a json-friendly ISO8601).
This fixes https://github.com/draios/falco/issues/82.
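A standalone sketch of building that object (field names as described
above; the ISO8601 conversion shown for an event timestamp in
nanoseconds):

    #include <cstdint>
    #include <ctime>
    #include <string>

    // Sketch: wrap the formatted output string in a json object with
    // the rule name, severity, and a json-friendly time.
    static std::string json_alert(uint64_t evt_ns, const std::string& rule,
                                  const std::string& priority,
                                  const std::string& output)
    {
        time_t secs = (time_t)(evt_ns / 1000000000);
        char tbuf[32];
        strftime(tbuf, sizeof(tbuf), "%Y-%m-%dT%H:%M:%SZ", gmtime(&secs));

        // Real code should json-escape the strings; omitted here.
        return "{\"time\": \"" + std::string(tbuf) +
               "\", \"rule\": \"" + rule +
               "\", \"priority\": \"" + priority +
               "\", \"output\": \"" + output + "\"}";
    }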
Add additional rules related to using pipe installers within a fbash
session:
- Modify write_etc to only trigger if *not* in a fbash session. There's
a new rule write_etc_installer which has the same conditions when in
a fbash session, logging at INFO severity.
- A new rule write_rpm_database warns if any non package management
program tries to write below /var/lib/rpm.
- Add a new warning if any program below a fbash session tries to open
an outbound network connection on ports other than http(s) and dns.
- Add INFO level messages when programs in a fbash session try to run
package management binaries (rpm,yum,etc) or service
management (systemctl,chkconfig,etc) binaries.
In order to test these new INFO level rules, make up a third class of
trace files traces-info.zip containing trace files that should result in
info-level messages.
To differentiate warning and info level detection, add an attribute to
the multiplex file "detect_level", which is "Warning" for the files in
traces-positive and "Info" for the files in traces-info. Modify
falco_test.py to look specifically for a non-zero count for the given
detect_level.
Doing this exposed a bug in the way the level-specific counts were being
recorded--they were keeping counts by level name, not number. Fix that.
Start using the Avocado framework for automated regression
testing. Create a test FalcoTest in falco_test.py which can run on a
collection of trace files. The script test/run_regression_tests.sh is
responsible for pulling zip files containing the positive (falco should
detect) and negative (falco should not detect) trace files, creating an
Avocado multiplex file that defines all the tests (one for each trace
file), running avocado on all the trace files, and showing full logs for
any test that didn't pass.
The old regression script, which simply ran falco, has been removed.
Modify falco's stats output to show the total number of events detected
for use in the tests.
In travis.yml, pull a known stable version of avocado and build it,
including installing any dependencies, as a part of the build process.
At shutdown, print stats on the number of rules triggered by severity
and rule name. This is done by a lua function print_stats and the
associated table rule_output_counts.
When passing rules to outputs, update the counts in rule_output_counts.
Add signal handlers for SIGINT/SIGTERM that set a shutdown
flag. Initialize the live inspector with a timeout so the main loop can
watch the flag set by the signal handlers.
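A minimal standalone version of that pattern:

    #include <csignal>
    #include <cstdint>
    #include <sinsp.h>

    static volatile sig_atomic_t g_terminate = 0;

    static void signal_callback(int)
    {
        g_terminate = 1;
    }

    void run(sinsp* inspector)
    {
        signal(SIGINT, signal_callback);
        signal(SIGTERM, signal_callback);

        sinsp_evt* ev;
        while(g_terminate == 0)
        {
            // With a timeout set on the inspector, next() returns
            // SCAP_TIMEOUT periodically, so the flag gets checked
            // even when no events arrive.
            int32_t rc = inspector->next(&ev);
            if(rc == SCAP_TIMEOUT)
            {
                continue;
            }
            if(rc != SCAP_SUCCESS)
            {
                break;
            }
            // ...process ev...
        }
    }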
Make sure that references to variables that may be paths (which in turn
may contain spaces) are quoted, so cmake won't break on the spaces.
This fixes https://github.com/draios/falco/issues/79.
When run with -l <rule>, falco will print the name/description of the
single rule <rule> and exit. With -L, falco will print the
name/description of all rules.
All the work is done in lua in the rule loader. A new lua function
describe_rule calls the local function describe_single_rule once or
multiple times depending on -l/-L. describe_single_rule prints the rule
name and a wrapped version of the rule description.
Add name and description fields to all rules. The name field is actually
a field called 'rule', which corresponds to the 'macro' field for
macros.
Within the rule loader, the state changes slightly. There are two
indices into the set of rules 'rules_by_name' and
'rules_by_idx' (formerly 'outputs'). They both now contain the original
table from the yaml parse. One field 'level' is added which is the
priority mapped to a number.
Get rid of the notion of default priority or output. Every rule must now
provide both.
Go through all current rules and add names and descriptions.
Remove the old use of the '-o' command line option; it wasn't being
used.
Allow any config file option to be overridden on the command line, via
--option/-o. These options are applied to the configuration object after
reading the file, ensuring the command line options override anything in
the config file.
To support this, add some methods to yaml_configuration that allow you
to set the value for a top level key or key + subkey, and methods to
falco_configuration that allow providing a set of command line arguments
alongside the config file.
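The command line side reduces to splitting each -o argument at '='; a
sketch with a hypothetical helper name:

    #include <string>
    #include <utility>

    // Hypothetical helper: parse "key=value" or "key.subkey=value" as
    // passed to --option/-o into a (key, value) pair, to be applied to
    // the configuration after the file is read.
    static std::pair<std::string, std::string> parse_override(const std::string& arg)
    {
        size_t eq = arg.find('=');
        if(eq == std::string::npos)
        {
            return std::make_pair(arg, "");
        }
        return std::make_pair(arg.substr(0, eq), arg.substr(eq + 1));
    }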
Ensure that any fatal error is always printed to stderr even if stderr
logging is not enabled. This makes sure that falco won't silently exit
on an error. This is especially important when daemonizing, where an
initial fatal error could otherwise go unseen.
As a part of this, change all fatal errors to throw exceptions instead,
so all fatal errors get routed through the exception handler.
Improve daemonization by reopening stdin/stdout/stderr to /dev/null so
you don't have to worry about writing to a closed stderr on exit.
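The /dev/null trick is the standard one; a standalone sketch of the
daemonization path:

    #include <cstdio>
    #include <cstdlib>
    #include <unistd.h>

    // Minimal daemonization sketch: fork, detach from the controlling
    // terminal, and point the standard streams at /dev/null so later
    // writes to stdout/stderr can't fail on a closed descriptor.
    static void daemonize()
    {
        if(fork() != 0)
        {
            exit(0);        // parent exits
        }
        setsid();           // new session, no controlling tty

        freopen("/dev/null", "r", stdin);
        freopen("/dev/null", "w", stdout);
        freopen("/dev/null", "w", stderr);
    }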
Henri pointed out that events may also be flagged as ignored. So
populate a second table with the set of ignored events, rename
check_for_ignored_syscalls to check_for_ignored_syscalls_events, and
separately check each table based on whether the LHS of the expression
is evt.type or syscall.type.
Add support for daemonizing via the --daemon flag. If daemonized, the
pid is written to the file provided via the --pidfile flag. When
daemonized, falco immediately returns an error if stderr output or
logging was chosen on the command line.
Clean up handling of outputs to match the expected use case (daemon):
- syslog output is enabled by default
- stdout output is disabled by default
- If not configured at all, both outputs are enabled.
Also fix some bugs I found while running via packages:
- There were still some references to the old rules filename
falco_rules.conf.
- The redhat package mistakenly defined some system directories like
/etc, /etc/init.d. Add them to the exclusion list (See
https://cmake.org/Bug/view.php?id=13609 for context).
- Clean up some of the error messages to be more consistent.
After this I was able to build and install debian and rpm
packages. Starting the falco service ran falco as a daemon with syslog
output.
Create a table containing the filtered syscalls and set it as the lua
global ignored_syscalls (m_lua_ignored_syscalls on the C++ side).
In the parser, add a general purpose ast traversal function
traverse_ast that calls a callback for all nodes of a specific type.
In the compiler, add a new function check_for_ignored_syscalls that uses
the traversal function to be called back for all "BinaryRelOp"
nodes (i.e. X = Y, X in [a, b, c], etc). For those nodes, if the lhs is
a field 'evt.type' or 'syscall.type' and the rhs contains one of the
ignored syscalls, throw an error.
Call check_for_ignored_syscalls after parsing any macro or rule
filter. The thrown error will contain the macro or rule that had the
ignored syscall.
In the next commit I'll change the rules to skip the ignored syscalls.
Uses a yaml parsing lib to parse a yaml file consisting of a list of
macros and rules, like:
- macro: bin_dir
  condition: fd.directory in (/bin, /sbin, /usr/bin, /usr/sbin)
- macro: core_binaries
  condition: proc.name in (ls, mkdir, cat, less, ps)
- condition: (fd.typechar = 4 or fd.typechar = 6) and core_binaries
  output: "%evt.time: %proc.name network with %fd.l4proto"
- condition: evt.type = write and bin_dir
  output: "%evt.time: System binary modified (file '%fd.filename' written by process %proc.name)"
- condition: container.id != host and proc.name = bash
  output: "%evt.time: Shell running in container (%proc.name, %container.id)"