Start packaging (and building when necessary) a falco-specific kernel
module in falco releases. Previously, falco would depend on sysdig and
use its kernel module instead.
The kernel module was already templated to some degree in various
places, so we just had to change the templated name from
sysdig/sysdig-probe to falco/falco-probe.
In containers, run falco-probe-loader instead of
sysdig-probe-loader. This is actually a script in the sysdig repository
which is modified in https://github.com/draios/sysdig/pull/789, and uses
the filename to indicate what kernel module to build and/or load.
For the falco package itself, don't depend on sysdig any longer but instead
depend on dkms and its dependencies, using sysdig as a guide on the set
of required packages.
Additionally, the package pre-install/post-install scripts now run
falco-probe-loader.
Finally, add a --version argument to falco so it can pass the desired
version string to falco-probe-loader.
Use the sinsp_evt_formatter_cache added in
https://github.com/draios/sysdig/pull/771 instead of a local cache. This
simplifies the lua side quite a bit, as it only needs to call
format_output(), and clean up everything via free_formatters() in
output_cleanup().
On the C side, use a sinsp_evt_formatter object and use it in
format_event().
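For illustration, here's a rough sketch of how the cache gets used (the
tostring() interface is assumed from that PR, so names/signatures may
differ slightly):

    // One cache per inspector, reused across events.
    sinsp_evt_formatter_cache cache(inspector);

    std::string output;
    // On first use of a given format string the cache builds a
    // sinsp_evt_formatter; later events with the same format reuse it.
    cache.tostring(evt, format_string, &output);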
In C functions that implement lua functions, don't directly throw
falco_exceptions, which results in opaque error messages like:
Mon Feb 27 10:09:58 2017: Runtime error: Error invoking function output:
C++ exception. Exiting.
Instead, return lua errors via lua_error().
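The standard Lua C API pattern for this looks roughly like the
following (the function name is illustrative):

    static int format_output(lua_State *ls)
    {
        try
        {
            // ... do the actual formatting work ...
        }
        catch(std::exception &e)
        {
            // Surface the real message as a lua error instead of letting
            // a C++ exception escape through the lua VM.
            lua_pushstring(ls, e.what());
            lua_error(ls);
        }
        return 1;
    }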
- Instead of having a possibly null string pointer as the argument to
enable_* and process_event, have wrapper versions that assume a
default falco ruleset (see the sketch after this list). The default
ruleset name is a static member of the falco_engine class, and the
default ruleset id is created/found in the constructor.
- This makes the whole mechanism simple enough that it doesn't require
separate testing, so remove the capability within falco to read a
ruleset from the environment and remove automated tests that specify
a ruleset.
- Make pattern/tags/ruleset arguments to enable_* functions const.
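As a sketch, a wrapper can be as simple as (member names assumed):

    void falco_engine::enable_rule(const std::string &pattern, bool enabled)
    {
        // Delegate to the ruleset-aware version using the default ruleset.
        enable_rule(pattern, enabled, s_default_ruleset);
    }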
- in lua, look for a tags attribute to each rule. This is passed up in
add_filter as a tags argument (as a lua table). If not present, an
empty table is used. The tags table is iterated to populate a set
of tags as strings, which is passed to add_filter().
- A new method falco_engine::enable_rule_by_tag is similar to
enable_rule(), but is given a set of tag strings. Any rule containing
one of the tags is enabled/disabled.
- The list of event types has been changed to a set to more accurately
reflect its purpose.
- New argument to falco -T allows disabling all rules matching a given
tag, via enable_rule_by_tag(). It can be provided multiple times.
- New argument to falco -t allows running those rules matching a given
tag. If provided, all rules are first disabled. It can be
provided multiple times, but can not be combined with -T or
-D (disable rules by name).
- falco_engine supports the notion of a ruleset. The idea is that you
can choose a set of rules that are enabled/disabled by using
enable_rule()/enable_rule_by_tag() in combination with a
ruleset. Later, in process_event() you include that ruleset and the
rules you had previously enabled will be run (see the sketch after this list).
- Rulesets are provided as strings in enable_rule()/enable_rule_by_tag()
and as numbers in process_event()--this avoids the overhead of string
lookups per-event. Ruleset ids are created on the fly as needed. A
utility method find_ruleset_id() looks up the ruleset id for a given
name. The default ruleset is the NULL string/0 numeric id if not provided.
- Although the ruleset is a useful falco engine feature, it isn't that
important to the falco standalone program, so it's not
documented. However, you can change the ruleset by providing
FALCO_RULESET in the environment.
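Putting the ruleset pieces together, an embedding program might use the
interface roughly like this (a sketch; exact signatures may differ):

    // Enable all rules tagged "filesystem", but only in a custom ruleset.
    std::set<std::string> tags = {"filesystem"};
    engine->enable_rule_by_tag(tags, true, "my-ruleset");

    // Resolve the ruleset name to its numeric id once, outside the event loop.
    uint16_t ruleset_id = engine->find_ruleset_id("my-ruleset");

    // Per event: only rules enabled in this ruleset are evaluated.
    engine->process_event(ev, ruleset_id);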
Prefix output strings with * so they are always permissive in the
engine.
In falco outputs, which adds its own prefix, remove any leading * before
adding the custom prefix.
Allow any list/macro/rule to be overridden by a subsequent file. The
persistent state that lives across invocations of load_rules consists
of the 3 arrays ordered_{list,macro,rule}_names, which have the
lists/macros/rules in the order in which they first appear, and tables
{rules,macros,lists}_by_name, which maps from a name to a yaml object.
With each call to load_rules, the set of loaded rules is reset and the
state of expanded lists, compiled macros, compiled rules, and rule
metadata are recreated from scratch, using the ordered_*_names arrays
and *_by_name tables. That way, any list/macro/rule can be redefined in
a subsequent file with new values.
Add the ability to clear the set of loaded rules from lua. It simply
recreates the sinsp_evttype_filter instance m_evttype_filter, which is
now a unique_ptr.
sinsp_utils::get_current_time_ns() has the same purpose as
get_epoch_ns(), and now that we're including the token bucket in
falco_engine, it's easy to package the dependency. So use that function
instead.
Add token-bucket based rate limiting for falco notifications.
The token bucket is implemented in token_bucket.cpp (actually in the
engine directory, just to make it easier to include in other
programs). It maintains a current count of tokens (i.e. right to send a
notification). Its main method is claim(), which attempts to claim a
token and returns true if one was claimed successfully. It has a
configurable max burst size and rate. The token bucket
gains "rate" tokens per second, up to a maximum of max_burst tokens.
These parameters are configurable in falco.yaml via the config
options (defaults shown):
outputs:
  rate: 1
  max_burst: 1000
In falco_outputs::handle_event(), try to claim a token, and if
unsuccessful, log a debug message and return immediately.
Previously, log messages had levels, but they only influenced the level
argument passed to syslog(). Now, add the ability to control log level
from falco itself.
New falco.yaml argument "log_level" can be one of the strings
corresponding to the well-known syslog levels, which is converted to a
syslog-style level as integer.
In falco_logger::log(), skip messages below the specified level.
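The check in falco_logger::log() amounts to something like this sketch:

    void falco_logger::log(int priority, const std::string &msg)
    {
        // Syslog levels run LOG_EMERG(0)..LOG_DEBUG(7), i.e. higher numbers
        // are less severe, so anything above the configured level is skipped.
        if(priority > falco_logger::level)
        {
            return;
        }
        // ... write to syslog and/or stderr as configured ...
    }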
Instead of creating a formatter for each event, cache them and create
them only when needed. A new function output_cleanup cleans up the
cached formatters, and is called in the destructor if init() was called.
When run via scripts like run_performance_tests.sh, it's useful to
include extra info in the stats file, like the test being run and the
specific program variant. Support that via the environment: for any
environment key starting with FALCO_STATS_EXTRA_, the rest of the
key and the environment value are added to the stats file.
It's undocumented as I doubt other programs will need this functionality
and it keeps the docs simpler.
With -s, periodically fetch capture stats from the inspector and write
them to the provided file.
Separate class StatsFileWriter handles the details. It does rely on a
timer + SIGALRM handler so you can only practically create a single
object, but it does keep the code/state separate.
The output format has a sample number, the set of current stats, a
delta with the difference from the prior sample, and the percentage of
events dropped during that sample.
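The timer wiring follows the usual SIGALRM pattern; this sketch (details
assumed) also shows why only a single object is practical--ITIMER_REAL is
process-wide:

    static bool g_save_sample = false;
    static void timer_handler(int signum)
    {
        g_save_sample = true;   // just set a flag; sampling happens in the loop
    }

    // In StatsFileWriter setup:
    signal(SIGALRM, timer_handler);
    struct itimerval tv = {{1, 0}, {1, 0}};   // fire once per second
    setitimer(ITIMER_REAL, &tv, NULL);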
Change falco_engine::process_event to return a unique_ptr that wraps the
rule result, so it won't be leaked if this method throws an exception.
This means that callers don't need to create their own unique_ptr to
manage the result's lifetime.
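A caller now looks roughly like this (the rule_result fields are
assumed):

    std::unique_ptr<falco_engine::rule_result> res = engine->process_event(ev);
    if(res)
    {
        // res is freed automatically, even if handle_event() throws.
        outputs->handle_event(res->evt, res->rule, res->priority, res->format);
    }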
Validate rule outputs when loading rules by attempting to create a
formatter based on the rule's output field. If there's an error, it will
propagate up through load_rules and cause falco to exit rather than
discovering the problem only when an event matches and the rule's
output field is used to format it.
This required moving formats.{cpp,h} into the falco engine directory
from the falco general directory. Note that these functions are loaded
twice in the two lua states used by falco (engine and outputs).
There are also a couple of minor cleanups:
- falco_formats had a private instance variable that was unused, remove
it.
- rename the package for the falco_formats functions to formats instead
of falco so it's more standalone.
- don't throw a C++ exception in falco_formats::formatter. Instead
generate a lua error, which is handled more cleanly.
- free_formatter doesn't return any values, so set the return value of
the function to 0.
container.info handling used to be handled by the falco_outputs
object. However, this caused problems for applications that only used
the falco engine, doing their own output formatting for matching events.
Fix this by moving output formatting into the falco engine itself. The
part that replaces %container.info/adds extra formatting to the end of a
rule's output now happens while loading the rule.
Related to the changes in https://github.com/draios/agent/pull/267,
improve error messages when trying to load sets of rules with errors:
- Check that yaml parsing of rules_content actually resulted in
something.
- Return an error for rules that have an empty name.
- Return an error for yaml objects that aren't a rule/macro/list.
- When compiling, don't print an error message, simply return one,
including a wrapper "can not compile ..." string.
Instead of having FALCO_SHARE_DIR be a relative path, fully specify it
by prepending CMAKE_INSTALL_PREFIX in the top level CMakeLists.txt and
don't prepend CMAKE_INSTALL_PREFIX in config_falco_engine.h.in. This
makes it consistent with its use in the agent.
Collect stats on the number of events processed and dropped. When run
with -v, print these stats. This duplicates sysdig behavior and can be
useful when diagnosing problems related to dropped events throwing off
internal state tracking.
Bring over functionality from sysdig to write trace files. This is easy
as all of the code to actually write the files is in the inspector. This
just handles the -w option and arguments.
This can be useful to write a trace file in parallel with live event
monitoring so you can reproduce it later.
The logic for detecting if a file exists was backwards. It would treat a
file as existing if it could *not* be opened. Reverse that logic so it
works.
This fixes https://github.com/draios/falco/issues/135.
Copy handling of -pk/-pm/-pc/-k/-m arguments from sysdig. All of the
relevant code was already in the inspector so that was easy.
The information from k8s/mesos/containers is used in two ways:
- In rule outputs, if the format string contains %container.info, that
is replaced with the value from -pk/-pm/-pc, if one of those options
was provided. If no option was provided, %container.info is replaced
with a generic %container.name (id=%container.id) instead.
- If the format string does not contain %container.info, and one of
-pk/-pm/-pc was provided, that is added to the end of the formatting
string.
- If -p was specified with a general value (i.e. not
kubernetes/mesos/container), the value is simply added to the end and
any %container.info is replaced with the generic value.
There are a lot of command line options now, so sort them alphabetically
in the usage and getopt handling to make them easier to find.
Also rename -p <pidfile> to -P <pidfile>, thinking ahead to the next
commit.
If a rule has an enabled attribute, and if the value is false, call the
engine's enable_rule() method to disable the rule. Like add_filter,
there's a static method which takes the object as the first argument and
a non-static method that calls the engine.
This fixes #72.
The falco engine changes broke the output methods that take
configuration (like the filename for file output, or the program for
program output). Fix that by properly passing the options argument to
each method's output function.
Add tests that cover reading from multiple sets of rule files and
disabling rules. Specific changes:
- Modify falco to allow multiple -r arguments to read from multiple
files.
- In the test multiplex file, add a disabled_rules attribute,
containing a sequence of rules to disable. These result in -D
arguments when running falco.
- In the test multiplex file, 'rules_file' can be a sequence. It
results in multiple -r arguments when running falco.
- In the test multiplex file, 'detect_level' can be a sequence of
multiple severity levels. All levels will be checked for in the
output.
- Move all test rules files to a rules subdirectory and all trace files
to a traces subdirectory.
- Add a small trace file for a simple cat of /dev/null. Used by the
new tests.
- Add the following new tests:
- Reading from multiple files, with the first file being
empty. Ensure that the rules from the second file are properly
loaded.
- Reading from multiple files with the last being empty. Ensures
that the empty file doesn't overwrite anything from the first
file.
- Reading from multiple files with varying severity levels for each
rule. Ensures that both files are properly read.
- Disabling rules from a rules file, both with full rule names
and regexes. Will result in not detecting anything.
Add the ability to drop events at the falco engine level in a way that
can scale with the dropping that already occurs at the kernel/inspector
level.
New inline function should_drop_evt() controls whether or not events are
matched against the set of rules, and is controlled by two
values--sampling ratio and sampling multiplier.
Here's how the sampling ratio and multiplier influence whether or not an
event is dropped in should_drop_evt(). The intent is that
m_sampling_ratio is generally changed external to the engine, e.g. in
the main inspector class based on how busy the inspector is. A sampling
ratio of 1 implies no dropping. Values > 1 imply increasing levels of
dropping. External to the engine, the sampling ratio results in events
being dropped at the kernel/inspector interface. The sampling
multiplier is an amplification to the sampling factor in
m_sampling_ratio. If 0, no additional events are dropped other than
those that might be dropped by the kernel/inspector interface. If 1,
events that make it past the kernel module are subject to an additional
level of dropping at the falco engine, scaling with the sampling ratio
in m_sampling_ratio.
Unlike the dropping that occurs at the kernel level, where the events in
the first part of each second are dropped, this dropping is random.
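A sketch of the check (member names assumed):

    inline bool falco_engine::should_drop_evt()
    {
        if(m_sampling_multiplier == 0)
        {
            return false;   // no additional engine-level dropping
        }
        // Keep a 1/(ratio * multiplier) fraction of the events that made it
        // past the kernel/inspector interface; drop the rest at random.
        double coin = (random() % 100000) / 100000.0;
        return (coin >= (1.0 / (m_sampling_ratio * m_sampling_multiplier)));
    }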
Move the c++ and lua code implementing falco engine/falco common to its
own directory userspace/engine. It's compiled as a static library
libfalco_engine.a, and has its own CMakeLists.txt so it can be included
by other projects.
The engine's CMakeLists.txt has an add_subdirectory for the falco rules
directory, so including the engine also builds the rules.
The variables you need to set to use the engine's CMakeLists.txt are:
- CMAKE_INSTALL_PREFIX: the root directory below which everything is
installed.
- FALCO_ETC_DIR: where to install the rules file.
- FALCO_SHARE_DIR: where to install lua code, relative to the
install/package root.
- LUAJIT_INCLUDE: where to find header files for lua.
- FALCO_SINSP_LIBRARY: the library containing sinsp code. It will be
considered a dependency of the engine.
- LPEG_LIB/LYAML_LIB/LIBYAML_LIB: locations for third-party libraries.
- FALCO_COMPONENT: if set, will be included as a part of any install()
commands.
Instead of specifying /usr/share/falco in config_falco_*.h.in, use
CMAKE_INSTALL_PREFIX and FALCO_SHARE_DIR.
The lua code for the engine has also moved, so the two lua source
directories (userspace/engine/lua and userspace/falco/lua) need to be
available separately via falco_common; make the directory an argument to
falco_common::init.
As a part of making it easy to include in another project, also clean up
LPEG build/defs. Modify build-lpeg to add a PREFIX argument to allow for
object files/libraries being in an alternate location, and when building
lpeg, put object files in a build/ subdirectory.
Create standalone classes falco_engine/falco_outputs that can be
embedded in other programs. falco_engine is responsible for matching
events against rules, and falco_outputs is responsible for formatting an
alert string given an event and writing the alert string to all
configured outputs.
falco_engine's main interfaces are:
- load_rules/load_rules_file: Given a path to a rules file or a string
containing a set of rules, load the rules. Also loads needed lua code.
- process_event(): check the event against the set of rules and return
the results of a match, if any.
- describe_rule(): print details on a specific rule or all rules.
- print_stats(): print stats on the rules that matched.
- enable_rule(): enable/disable any rules matching a pattern. New falco
command line option -D allows you to disable one or more rules on the
command line.
falco_outputs' main interfaces are:
- init(): load needed lua code.
- add_output(): add an output channel for alert notifications.
- handle_event(): given an event that matches one or more rules, format
an alert message and send it to any output channels.
Each of falco_engine/falco_output maintains a separate lua state and
loads separate sets of lua files. The code to create and initialize the
lua state is in a base class falco_common.
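The shared initialization in falco_common is the standard Lua embedding
sequence, roughly:

    // Sketch; path handling elided.
    lua_State *ls = luaL_newstate();   // one state per engine/outputs object
    luaL_openlibs(ls);                 // standard libraries
    // Then extend package.path with the caller-provided lua source
    // directory and run the main lua script via luaL_loadfile()/lua_pcall().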
falco_engine no longer logs anything. In the case of errors, it throws
exceptions. falco_logger is now only used as a logging mechanism for
falco itself and as an output method for alert messages. (This should
really probably be split, but it's ok for now).
falco_engine contains a sinsp_evttype_filter object containing the set
of eventtype filters. Instead of calling
m_inspector->add_evttype_filter() to add a filter created by the
compiler, call falco_engine::add_evttype_filter() instead. This means
that the inspector runs with a NULL filter and all events are returned
from do_inspect. This depends on
https://github.com/draios/sysdig/pull/633 which has a wrapper around a
set of eventtype filters.
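The compiler's output is therefore handed to the engine rather than the
inspector; a sketch of the pass-through (signature assumed):

    void falco_engine::add_evttype_filter(std::string &rule,
                                          std::set<uint32_t> &evttypes,
                                          sinsp_filter *filter)
    {
        // The engine owns the set of eventtype filters; the inspector
        // itself runs with a NULL filter and returns every event.
        m_evttype_filter->add(rule, evttypes, filter);
    }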
Some additional changes along with creating these classes:
- Some cleanups of unnecessary header files, cmake include_directory()s,
etc to only include necessary includes and only include them in header
files when required.
- Try to avoid 'using namespace std' in header files, or assuming
someone else has done that. Generally add 'using namespace std' to all
source files.
- Instead of using sinsp_exception for all errors, define a
falco_engine_exception class for exceptions coming from the falco
engine and use it instead. For falco program code, switch to general
exceptions under std::exception and catch + display an error for all
exceptions, not just sinsp_exceptions.
- Remove fields.{cpp,h}. This was dead code.
- Start tracking counts of rules by priority string (i.e. what's in the
falco rules file) as compared to priority level (i.e. roughly
corresponding to a syslog level). This keeps the rule processing and
rule output halves separate. This led to some test changes. The regex
used in the test is now case insensitive to be a bit more flexible.
- Now that https://github.com/draios/sysdig/pull/632 is merged, we can
delete the rules object (and its lua_parser) safely.
- Move loading the initial lua script to the constructor. Otherwise,
calling load_rules() twice re-loads the lua script and throws away any
state like the mapping from rule index to rule.
- Allow an empty rules file.
Finally, fix most memory leaks found by valgrind:
- falco_configuration wasn't deleting the allocated m_config yaml
config.
- several ifstreams were being created simply to test which falco
config file to use.
- In the lua output methods, an event formatter was being created using
falco.formatter() but there was no corresponding free_formatter().
This depends on changes in https://github.com/draios/sysdig/pull/640.
New command line option -A, tied to the boolean all_events, instructs
falco to run on all events, not just those without the EF_DROP_FALCO
flag set.
When all_events is true, the checks for ignored events/syscalls are
skipped when loading rules.
Add a new output type "program" that writes a formatted event to a
configurable program, using io.popen().
Each notification results in one invocation of the program.
Instead of combining all rules into one huge filter expression and
giving it to the inspector, keep each filter expression separate and
annotate it with the events for which the rule applies.
This uses the capabilities in draios/sysdig#627
to have multiple sets of event-specific filters.
Change traverse_ast to allow a set of node types instead of a single
node type.
Within the compiler, a new pass over the ast, get_evttypes, looks for
evt.type clauses, converts each evt.type string to the event type
ids for which it may apply, and passes those back with the compiled
rule.
As rule conditions may refer to evt.types in negative
contexts (i.e. evt.type != XXX, or not evt.type = XXX), this pass
prefers rules that list event type checks at the beginning of
conditions, and allows other rules with a warning.
When traversing the ast looking for evt.type checks, once any "!=" or
"not ..." is seen, no other evt.type checks are "allowed". If one
is found, the rule is considered ambiguous wrt event types. In this
case, a warning is printed and the rule is associated with a catchall
set that runs for all event types.
Also, instead of rejecting rules with no event type check, print a
warning and associate it with the catchall set.
In the rule loader, create a new global, events, that maps each event
name to the list of event ids for which it may apply. Instead of
calling install_filter once after all rules have been loaded, call a new
function add_filter for each rule. In turn, it passes the rule and list
of event ids to the inspector using add_evttype_filter().
Also, with -v (verbose), print the exact set of events found for
each event type. This is used by an upcoming change to the set of unit
tests.
Add a verbose flag -v which implies printing additional info. This is
passed down to lua during load_rules and sets the per-module verbose
value for the compiler and parser modules.
Later commits will use this to print additional info when loading rules.
https://github.com/draios/sysdig/pull/623 adds support for a startswith
operator to allow for string prefix matching. Modify the parser to
recognize that operator, and use that operator for rules that really
want to check the beginning of a pathname, directory, etc. to make them
faster and avoid FPs.
Once sysdig adds support for handling "in (...)" filter expressions as
set membership tests, it will be advantageous to combine lists of items
together into a single list so they can all be checked in a single set
membership test.
This commit adds support for a new yaml item type "list" containing a
field "name" and field "items" containing a list of items. These are
represented as a yaml list, which allows yaml to handle some of the
initial parsing with the list items maintained natively in lua.
Allow lists to contain references to other lists, expanding any such
references to that list's items before storing the items in
state.lists.
When parsing macro or rule conditions, replace all references to a list
name with the list items as a comma separated string. For example, with
a list shell_binaries containing bash and zsh, the condition
proc.name in (shell_binaries) becomes proc.name in (bash, zsh).
Modify the falco rules to switch to lists whenever possible. The
new convention is to use the suffix _binaries for lists of program names
and _procs for macros that define a filter expression using the list.
Instead of using sysdig's json output, which only contains the fields
from the format string without any formatting text, use the string
output to build a json object containing the format string, rule name,
severity, and the event time (converted to a json-friendly ISO8601).
This fixes https://github.com/draios/falco/issues/82.
Add additional rules related to using pipe installers within a fbash
session:
- Modify write_etc to only trigger if *not* in a fbash session. There's
a new rule write_etc_installer which has the same conditions when in
a fbash session, logging at INFO severity.
- A new rule write_rpm_database warns if any non package management
program tries to write below /var/lib/rpm.
- Add a new warning if any program below a fbash session tries to open
an outbound network connection on ports other than http(s) and dns.
- Add INFO level messages when programs in a fbash session try to run
package management binaries (rpm, yum, etc.) or service
management binaries (systemctl, chkconfig, etc.).
In order to test these new INFO level rules, make up a third class of
trace files traces-info.zip containing trace files that should result in
info-level messages.
To differentiate warning and info level detection, add an attribute to
the multiplex file "detect_level", which is "Warning" for the files in
traces-positive and "Info" for the files in traces-info. Modify
falco_test.py to look specifically for a non-zero count for the given
detect_level.
Doing this exposed a bug in the way the level-specific counts were being
recorded--they were keeping counts by level name, not number. Fix that.
Start using the Avocado framework for automated regression
testing. Create a test FalcoTest in falco_test.py which can run on a
collection of trace files. The script test/run_regression_tests.sh is
responsible for pulling zip files containing the positive (falco should
detect) and negative (falco should not detect) trace files, creating an
Avocado multiplex file that defines all the tests (one for each trace
file), running avocado on all the trace files, and showing full logs for
any test that didn't pass.
The old regression script, which simply ran falco, has been removed.
Modify falco's stats output to show the total number of events detected
for use in the tests.
In travis.yml, pull a known stable version of avocado and build it,
including installing any dependencies, as a part of the build process.
At shutdown, print stats on the number of rules triggered by severity
and rule name. This is done by a lua function print_stats and the
associated table rule_output_counts.
When passing rules to outputs, update the counts in rule_output_counts.
Add signal handlers for SIGINT/SIGTERM that set a shutdown
flag. Initialize the live inspector with a timeout so the main loop can
watch the flag set by the signal handlers.
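The resulting structure, as a sketch:

    static bool g_terminate = false;
    static void signal_callback(int signal)
    {
        g_terminate = true;   // only set the flag; the main loop does the rest
    }

    signal(SIGINT, signal_callback);
    signal(SIGTERM, signal_callback);

    while(!g_terminate)
    {
        // Because the inspector was opened with a timeout, next() returns
        // periodically even with no events, so the flag is noticed promptly.
        sinsp_evt *ev;
        int32_t rc = inspector->next(&ev);
        // ... handle rc/ev ...
    }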
Make sure that references to variables that may be paths (which in turn
may contain spaces) are quoted, so cmake won't break on the spaces.
This fixes https://github.com/draios/falco/issues/79.
When run with -l <rule>, falco will print the name/description of the
single rule <rule> and exit. With -L, falco will print the
name/description of all rules.
All the work is done in lua in the rule loader. A new lua function
describe_rule calls the local function describe_single_rule once or
multiple times depending on -l/-L. describe_single_rule prints the rule
name and a wrapped version of the rule description.
Add name and description fields to all rules. The name field is actually
a field called 'rule', which corresponds to the 'macro' field for
macros.
Within the rule loader, the state changes slightly. There are two
indices into the set of rules 'rules_by_name' and
'rules_by_idx' (formerly 'outputs'). They both now contain the original
table from the yaml parse. One field 'level' is added which is the
priority mapped to a number.
Get rid of the notion of default priority or output. Every rule must now
provide both.
Go through all current rules and add names and descriptions.
Remove the old use of the '-o' command line option, it wasn't being
used.
Allow any config file option to be overridden on the command line, via
--option/-o. These options are applied to the configuration object after
reading the file, ensuring the command line options override anything in
the config file.
To support this, add some methods to yaml_configuration that allow you
to set the value for a top level key or key + subkey, and methods to
falco_configuration that allow providing a set of command line arguments
alongside the config file.
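A sketch of applying the overrides (helper names like set_scalar are
assumed):

    // For each "-o key[.subkey]=value" argument:
    for(const std::string &opt : cmdline_options)
    {
        size_t eq = opt.find('=');
        if(eq == std::string::npos)
        {
            throw std::invalid_argument("Invalid option: " + opt);
        }
        std::string key = opt.substr(0, eq);
        std::string val = opt.substr(eq + 1);

        size_t dot = key.find('.');
        if(dot == std::string::npos)
        {
            m_config->set_scalar(key, val);              // top-level key
        }
        else
        {
            m_config->set_scalar(key.substr(0, dot),     // key + subkey
                                 key.substr(dot + 1), val);
        }
    }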
Ensure that any fatal error is always printed to stderr even if stderr
logging is not enabled. This makes sure that falco won't silently exit
on an error. This is especially important when daemonizing and when an
initial fatal error occurs first.
As a part of this, change all fatal errors to throw exceptions instead,
so all fatal errors get routed through the exception handler.
Improve daemonization by reopening stdin/stdout/stderr to /dev/null so
you don't have to worry about writing to a closed stderr on exit.
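The daemonization sequence is the classic one, sketched here with error
handling omitted:

    pid_t pid = fork();
    if(pid > 0)
    {
        exit(0);    // parent exits; child continues as the daemon
    }
    setsid();       // new session, detached from the controlling terminal

    // Reopen stdio on /dev/null so later writes (e.g. a fatal error at
    // exit) can't hit a closed descriptor.
    freopen("/dev/null", "r", stdin);
    freopen("/dev/null", "w", stdout);
    freopen("/dev/null", "w", stderr);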
Henri pointed out that events may also be flagged as ignored. So
populate a second table with the set of ignored events, rename
check_for_ignored_syscalls to check_for_ignored_syscalls_events, and
separately check each table based on whether the LHS of the expression
is evt.type or syscall.type.
Add support for daemonizing via the --daemon flag. If daemonized, the
pid is written to the file provided via the --pidfile flag. When
daemonized, falco immediately returns an error if stderr output or
logging was chosen on the command line.
Clean up handling of outputs to match the expected use case (daemon):
- syslog output is enabled by default
- stdout output is disabled by default
- If not configured at all, both outputs are enabled.
Also fix some bugs I found while running via packages:
- There were still some references to the old rules filename
falco_rules.conf.
- The redhat package mistakenly defined some system directories like
/etc, /etc/init.d. Add them to the exclusion list (See
https://cmake.org/Bug/view.php?id=13609 for context).
- Clean up some of the error messages to be more consistent.
After this I was able to build and install debian and rpm
packages. Starting the falco service ran falco as a daemon with syslog
output.
Create a table containing the filtered syscalls and set it as the lua
global ignored_syscalls (referred to on the C++ side as
m_lua_ignored_syscalls).
In the parser, add a general purpose ast traversal function
traverse_ast that calls a callback for all nodes of a specific type.
In the compiler, add a new function check_for_ignored_syscalls that uses
the traversal function to be called back for all "BinaryRelOp"
nodes (i.e. X = Y, X in [a, b, c], etc). For those nodes, if the lhs is
a field 'evt.type' or 'syscall.type' and the rhs contains one of the
ignored syscalls, throw an error.
Call check_for_ignored_syscalls after parsing any macro or rule
filter. The thrown error will contain the macro or rule that had the
ignored syscall.
In the next commit I'll change the rules to skip the ignored syscalls.
Uses yaml parsing lib to parse a yaml file consisting of a list of
macros and rules, like:
- macro: bin_dir
  condition: fd.directory in (/bin, /sbin, /usr/bin, /usr/sbin)
- macro: core_binaries
  condition: proc.name in (ls, mkdir, cat, less, ps)
- condition: (fd.typechar = 4 or fd.typechar=6) and core_binaries
  output: "%evt.time: %proc.name network with %fd.l4proto"
- condition: evt.type = write and bin_dir
  output: "%evt.time: System binary modified (file '%fd.filename' written by process %proc.name)"
- condition: container.id != host and proc.name = bash
  output: "%evt.time: Shell running in container (%proc.name, %container.id)"
Rather than do include_directories() on the whole sysdig repo, just do
it for driver, libscap, and libsinsp.
This is a step on the way to building a digwatch package.
As pointed out by Loris, timestamping output messages should be a
responsibility of the output/collection system.
So as a first step towards this, add timestamps automatically for output
formats, and remove them from rules.
The high-level change is that events matching a rule are now sent to a
lua "on_event" function for handling, rather than doing the handling
down in C++.
More specifics:
Before, the lua "load_rule" function registered formatters with
associated IDs on the C++ side, which later used this state to
reconcile events with formats and print output accordingly.
Now, no such state is kept on the C++ side. The lua "load_rule" function
maintains the id->formatters map, and uses it to print outputs when it
receives events.
This change simplifies the existing flow and will also make the forthcoming
implementation of function outputs far simpler than it would have been
in the current setup.
This change adds syntax support for function call outputs. For example:
... | syslog(evt, WARN)
Regular outputs are still allowed and parsed in the same way.
Before this change, events were only printed if they had all the
fields (same behavior as with sysdig when the output format doesn't have
a leading "*"). With this change, all events are printed; those that
don't have all fields are prefixed with a notification.
Move compiler loading out of libsinsp/lua_parser.cpp and into a new
class in digwatch/rules.cpp.
This way the libsinsp support is strictly about providing a lua API for
scripts to set up filters. Loading the actual parser and rules is logic
that belongs in the app (digwatch in this case, maybe sysdig down the
line) rather than there.