Support the notion of a single message that applies to all fields in a
class, and make sure it's wrapped like the other field descriptions.
This is used to display a single message about how indexing works for
ka.* filter fields and what IDX_ALLOWED/IDX_NUMERIC/IDX_KEY mean,
rather than repeating the same text over and over in every field.
The wrapping is handled by a function falco::utils::wrap_text.
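As a rough sketch of the mechanism (the real falco::utils::wrap_text
signature may differ, so the helper below is a hypothetical stand-in),
the shared message is wrapped once and shown alongside the ka.* field
list instead of inside every field description:

    #include <iostream>
    #include <sstream>
    #include <string>

    // Hypothetical stand-in for falco::utils::wrap_text: wrap `in` at
    // `line_len` columns, indenting continuation lines by `indent` spaces.
    static std::string wrap_text(const std::string &in, size_t indent, size_t line_len)
    {
        std::istringstream words(in);
        std::string word, line, out;

        while (words >> word)
        {
            if (!line.empty() && line.size() + 1 + word.size() > line_len)
            {
                out += line + "\n" + std::string(indent, ' ');
                line.clear();
            }
            line += (line.empty() ? "" : " ") + word;
        }
        return out + line;
    }

    int main()
    {
        // One message shared by every ka.* field, printed a single time
        // instead of being repeated in each field's description.
        std::string shared_msg =
            "IDX_ALLOWED means an index is optional, IDX_NUMERIC means the "
            "index must be a number, and IDX_KEY means the index is a key.";

        std::cout << "ka.* fields:\n  " << wrap_text(shared_msg, 2, 60) << "\n";
        return 0;
    }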
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Refactor how JSON event/k8s audit events extract values in two important
ways:
1. An event can now extract multiple values.
2. The extracted value is a class json_event_value instead of a simple
string.
The driver for 1. was that some filtercheck fields like
"ka.req.container.privileged" actually should extract multiple values,
as a pod can have multiple containers and it doesn't make sense to
summarize that down to a single value.
The driver for 2. is that by having an object represent a single
extracted value, you can also hold things like numbers e.g. ports, uids,
gids, etc. and ranges e.g. [0:3]. With an object, you can override
operators ==, <, etc. to do comparisons between the numbers and ranges,
or even set membership tests between extracted numbers and sets of
ranges.
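For illustration only (the real json_event_value in falco is more
involved), an extracted value that can hold a string, a number, or a
[low:high] range, with comparison operators that treat a number as
matching a range it falls inside, might look roughly like:

    #include <cstdint>
    #include <iostream>
    #include <string>

    // Illustrative sketch only; not the actual falco class.
    class json_event_value
    {
    public:
        enum class type { STRING, NUMBER, RANGE };

        json_event_value(std::string s) : m_type(type::STRING), m_str(std::move(s)) {}
        json_event_value(int64_t n) : m_type(type::NUMBER), m_low(n), m_high(n) {}
        json_event_value(int64_t low, int64_t high) : m_type(type::RANGE), m_low(low), m_high(high) {}

        // A number compares equal to a range when it falls inside it, which
        // is the shape of PSP-style checks on ports, uids, gids, etc.
        bool operator==(const json_event_value &o) const
        {
            if (m_type == type::STRING || o.m_type == type::STRING)
            {
                return m_type == o.m_type && m_str == o.m_str;
            }
            return m_low >= o.m_low && m_high <= o.m_high;
        }

        bool operator<(const json_event_value &o) const
        {
            if (m_type == type::STRING || o.m_type == type::STRING)
            {
                return m_str < o.m_str;
            }
            return m_low < o.m_low;
        }

    private:
        type m_type;
        std::string m_str;
        int64_t m_low = 0;
        int64_t m_high = 0;
    };

    int main()
    {
        // uid 100 "matches" the acceptable range [0:1000] from a PSP.
        std::cout << (json_event_value(int64_t(100)) == json_event_value(0, 1000)) << "\n";
        return 0;
    }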
This is really handy for a lot of the new fields implemented as part
of PSP support, where you end up having to check for overlaps between
the paths, images, ports, uids, etc. in a K8s Audit Event and the
acceptable values, ranges, and path prefixes enumerated in a PSP.
Implementing these changes also involves an overhaul of how aliases are
implemented. Instead of having an optional "formatting" function, where
arguments to the formatting function were expressed as text within the
index, define optional extraction and indexing functions. If an
extraction function is defined, it's responsible for taking the full
json object and calling add_extracted_value() to add values. There's a
default extraction function that uses a list of json_pointers with
automatic iteration over array values returned by a json pointer.
There's still a notion of filter fields supporting indexes--that's
simply handled within the default extraction or custom extraction
function. And for most fields, there won't be a need to write a custom
extraction function simply to implement indexing.
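A hedged sketch of such a default extraction function, assuming
nlohmann::json and a stand-in add_extracted_value() that just collects
strings:

    #include <iostream>
    #include <string>
    #include <vector>

    #include <nlohmann/json.hpp>

    using nlohmann::json;

    // Stand-in for the real add_extracted_value(); here we just collect strings.
    static void add_extracted_value(std::vector<std::string> &values, const json &v)
    {
        values.push_back(v.is_string() ? v.get<std::string>() : v.dump());
    }

    // Default extraction: walk a list of json pointers and, when a pointer
    // lands on an array, iterate over its elements so one field can yield
    // multiple values.
    static void default_extract(const json &ev,
                                const std::vector<json::json_pointer> &ptrs,
                                std::vector<std::string> &values)
    {
        for (const auto &ptr : ptrs)
        {
            if (!ev.contains(ptr))
            {
                continue;
            }
            const json &node = ev.at(ptr);
            if (node.is_array())
            {
                for (const auto &item : node)
                {
                    add_extracted_value(values, item);
                }
            }
            else
            {
                add_extracted_value(values, node);
            }
        }
    }

    int main()
    {
        json ev = json::parse(R"({"requestObject":{"spec":
            {"securityContext":{"supplementalGroups":[1000,2000]}}}})");

        std::vector<std::string> values;
        default_extract(ev,
                        {json::json_pointer("/requestObject/spec/securityContext/supplementalGroups")},
                        values);

        for (const auto &v : values)
        {
            std::cout << v << "\n"; // 1000, then 2000
        }
        return 0;
    }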
Within a json_event_filter_check object, instead of having a single
extracted value as a string, hold a vector of extracted json_event_value
objects (vector because order matters) and a set of json_event_value
objects (for set comparisons) as m_evalues. Values on the right hand
side of the expression are held as a set m_values.
json_event_filter_check::compare now supports IN/INTERSECTS as set
comparisons. It also supports PMATCH using path_prefix_search objects,
which simplifies checks like ka.req.pod.volumes.hostpath--now they can
be expressed as "ka.req.pod.volumes.hostpath intersects (/proc,
/var/run/docker.sock, /, /etc, /root)" instead of
"ka.req.volume.hostpath[/proc]=true or
ka.req.volume.hostpath[/root]=true or ...".
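The pmatch-style prefix comparison can be pictured as below, a
simplified stand-in for path_prefix_search (which really uses a prefix
tree): does any extracted path start with any of the configured
prefixes at a path-component boundary?

    #include <iostream>
    #include <set>
    #include <string>

    // Simplified stand-in for a path_prefix_search test: does `path` start
    // with `prefix`, breaking only at a path-component boundary?
    static bool has_path_prefix(const std::string &path, const std::string &prefix)
    {
        if (path.compare(0, prefix.size(), prefix) != 0)
        {
            return false;
        }
        return prefix == "/" ||
               path.size() == prefix.size() ||
               path[prefix.size()] == '/';
    }

    // Intersects-style check: true if any extracted hostPath volume matches
    // any of the sensitive prefixes.
    static bool any_path_matches(const std::set<std::string> &paths,
                                 const std::set<std::string> &prefixes)
    {
        for (const auto &p : paths)
        {
            for (const auto &pre : prefixes)
            {
                if (has_path_prefix(p, pre))
                {
                    return true;
                }
            }
        }
        return false;
    }

    int main()
    {
        std::set<std::string> extracted = {"/var/run/docker.sock"};
        std::set<std::string> sensitive = {"/proc", "/var/run/docker.sock", "/etc", "/root"};

        std::cout << any_path_matches(extracted, sensitive) << "\n"; // prints 1
        return 0;
    }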
Define ~10 new filtercheck fields that extract pod properties like
hostIpc, readOnlyRootFilesystem, etc. that are relevant for PSP validation.
As a part of these changes, also clarify the names of filter fields
related to pods to always have a .pod in the name. Furthermore, fields
dealing with containers in a pod always have a .pod.containers prefix in
the name.
Finally, change the comparisons for existing k8s audit rules to use
"intersects" and/or "in" when appropriate instead of a single equality
comparison.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Related to the changes in https://github.com/draios/sysdig/pull/1501,
add support for an "intersects" operator that verifies if any of the
values in the rhs of an expression are found in the set of extracted
values.
For example:
(a,b,c) in (a,b) is false, but (a,b,c) intersects (a,b) is true.
The code that implements CO_INTERSECTS is in a different commit.
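The CO_INTERSECTS implementation is elsewhere, but the intended
semantics can be sketched with std::set: "in" is a subset test, while
"intersects" only needs a non-empty intersection.

    #include <algorithm>
    #include <iostream>
    #include <set>
    #include <string>

    // "in": every extracted value must appear on the right hand side.
    static bool op_in(const std::set<std::string> &extracted,
                      const std::set<std::string> &rhs)
    {
        return std::includes(rhs.begin(), rhs.end(), extracted.begin(), extracted.end());
    }

    // "intersects": at least one extracted value must appear on the right hand side.
    static bool op_intersects(const std::set<std::string> &extracted,
                              const std::set<std::string> &rhs)
    {
        return std::any_of(extracted.begin(), extracted.end(),
                           [&rhs](const std::string &v) { return rhs.count(v) > 0; });
    }

    int main()
    {
        std::set<std::string> extracted = {"a", "b", "c"};
        std::set<std::string> rhs = {"a", "b"};

        std::cout << "in: " << op_in(extracted, rhs) << "\n";                 // 0 (false)
        std::cout << "intersects: " << op_intersects(extracted, rhs) << "\n"; // 1 (true)
        return 0;
    }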
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
If this works as intended, the PR will automatically get area labels
depending on the files it modifies.
If they want, users can still apply other area labels manually, via a
slash command, or by editing the PR template when opening the PR.
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
Properly parse multi-document yaml files, i.e. blocks separated by
---. This is easily handled by lyaml itself--you just need to pass the
option all = true to yaml.load, and each document will be provided as
a table.
This does break the table iteration a bit, so some more refactoring:
- Create a load_state table that holds context like the current
  document index, the required_engine_version, etc.
- Pull out the parts that parse a single document to load_rules_doc(),
  which is given the table for a single document + load_state.
- Simplify get_orig_yaml_obj to just provide a single row index and
  return all rows from that point to the next blank line or line
  starting with '-'.
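The loader itself is Lua, but the shape of the refactor can be
sketched in C++, using yaml-cpp's multi-document YAML::LoadAll as an
analogue for lyaml's all = true (load_state and load_rules_doc below
mirror the Lua names):

    #include <iostream>
    #include <string>
    #include <vector>

    #include <yaml-cpp/yaml.h>

    // Mirrors the Lua load_state table: context shared across documents.
    struct load_state
    {
        int doc_index = 0;
        int required_engine_version = 0;
    };

    // Mirrors load_rules_doc(): handle the rules/macros/lists of one document.
    static void load_rules_doc(const YAML::Node &doc, load_state &state)
    {
        if (doc.IsMap() && doc["required_engine_version"])
        {
            state.required_engine_version = doc["required_engine_version"].as<int>();
        }
        // ... parse rules, macros, and lists from `doc` here ...
    }

    int main()
    {
        const std::string rules =
            "required_engine_version: 2\n"
            "---\n"
            "- macro: container\n"
            "  condition: container.id != host\n";

        // The analogue of yaml.load(..., {all = true}): one node per document.
        std::vector<YAML::Node> docs = YAML::LoadAll(rules);

        load_state state;
        for (const auto &doc : docs)
        {
            state.doc_index++;
            load_rules_doc(doc, state);
        }

        std::cout << "documents: " << docs.size()
                  << ", required_engine_version: " << state.required_engine_version << "\n";
        return 0;
    }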
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Make additional improvements to display relevant context when validating
files. This handles cases where a macro/rule overwrites a prior rule.
- Instead of saving the index into the array of lines for each rule,
save the rule yaml itself, as a property 'context' for each object.
- When appending rules, the context of the base macro/rule and the
context of the appended rule/macro are concatenated.
- New functions get_orig_yaml_obj, build_error, and
build_error_with_context handle building the error string.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Fix a couple of small bugs when verifying macro/rule objects:
1) Yaml can have document separators "---", and those were mistakenly
being considered array items.
2) When reading macros and rules and using array position to find the
right document offset, the overall object order should be
used (e.g. this is the 5th object from the file) and not the array
position (e.g. this is the 3rd rule from the file).
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Given the compiler we currently use, you can't actually enable/disable
rules in falco_engine::enable_rule using a regex pattern. The regex
either fails to compile or compiles but doesn't actually match
strings. This is noted in the C++11 compatibility notes for gcc 4.8.2:
https://gcc.gnu.org/onlinedocs/gcc-4.8.2/libstdc++/manual/manual/status.html#status.iso.2011.
The only existing use of enable_rule treated the regex pattern as a
substring match anyway, so we can change the engine to treat the
pattern as a substring.
So change the method/supporting sub-classes to note that the argument is
a substring match, and change falco itself to refer to substrings
instead of patterns.
This fixes https://github.com/falcosecurity/falco/issues/742.
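A sketch of the promised behavior, using a plain std::string::find
rather than the engine's real rule bookkeeping:

    #include <iostream>
    #include <string>
    #include <vector>

    // Enable every rule whose name contains `substring`; with gcc 4.8.2's
    // incomplete <regex>, a plain substring test is both reliable and enough
    // for how enable_rule was actually being used.
    static std::vector<std::string> matching_rules(const std::vector<std::string> &rule_names,
                                                   const std::string &substring)
    {
        std::vector<std::string> matched;
        for (const auto &name : rule_names)
        {
            if (name.find(substring) != std::string::npos)
            {
                matched.push_back(name);
            }
        }
        return matched;
    }

    int main()
    {
        std::vector<std::string> rules = {
            "Terminal shell in container",
            "Write below etc",
            "Write below root"};

        for (const auto &r : matching_rules(rules, "Write below"))
        {
            std::cout << r << "\n";
        }
        return 0;
    }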
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Instead of relying on lua errors to pass back parse errors, pass back an
explicit true + required engine version or false + error message.
Also clean up the error message to display info + context on the
error. When the error is related to yaml parsing, use the row number
passed back in lyaml's error string to print the specific line with
the error.
When parsing rules/macros/lists, print the object being parsed alongside
the error.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
To speed up list expansion, instead of using regexes to replace a list
name with its contents, do string searches followed by examining the
preceding/following characters for the proper delimiter.
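The expansion lives in the Lua loader; expressed in C++ for
illustration, the technique is: find each occurrence of the list name,
accept it only when the surrounding characters are delimiters, and
splice in the contents.

    #include <iostream>
    #include <string>

    // True if `c` can legitimately delimit a list name inside a condition.
    static bool is_delimiter(char c)
    {
        return c == ' ' || c == '(' || c == ')' || c == ',' || c == '=';
    }

    // Replace whole-word occurrences of `list_name` in `condition` with
    // `contents`, using plain string searches instead of a regex per list.
    static std::string expand_list(std::string condition,
                                   const std::string &list_name,
                                   const std::string &contents)
    {
        size_t pos = 0;
        while ((pos = condition.find(list_name, pos)) != std::string::npos)
        {
            bool start_ok = (pos == 0) || is_delimiter(condition[pos - 1]);
            size_t end = pos + list_name.size();
            bool end_ok = (end == condition.size()) || is_delimiter(condition[end]);

            if (start_ok && end_ok)
            {
                condition.replace(pos, list_name.size(), contents);
                pos += contents.size();
            }
            else
            {
                pos = end;
            }
        }
        return condition;
    }

    int main()
    {
        std::cout << expand_list("proc.name in (shell_binaries)",
                                 "shell_binaries", "bash, zsh, sh")
                  << "\n"; // proc.name in (bash, zsh, sh)
        return 0;
    }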
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
We shouldn't need to clean up strings via a cleanup function and don't
need to do it via a bunch of string.gsub() functions.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Instead of iterating over the entire list of filters and doing pattern
matches against each defined filter, perform table lookups.
For filters that take arguments e.g. proc.aname[3] or evt.arg.xxx, split
the filtercheck string on bracket/dot and check the values against a
table.
There are now two tables of defined filters: defined_arg_filters and
defined_noarg_filters. Each filter is put into a table depending on
whether the filter takes an argument or not.
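Again the real lookup is Lua; an equivalent C++ sketch with the two
tables, splitting on the bracket or trying progressively shorter
dot-prefixes for argument-taking filters:

    #include <iostream>
    #include <string>
    #include <unordered_set>

    // Two lookup tables instead of pattern-matching every defined filter.
    static const std::unordered_set<std::string> defined_noarg_filters = {
        "proc.name", "fd.name", "evt.type"};
    static const std::unordered_set<std::string> defined_arg_filters = {
        "proc.aname", "evt.arg"};

    static bool is_defined_filter(const std::string &field)
    {
        // Exact match for filters that take no argument.
        if (defined_noarg_filters.count(field))
        {
            return true;
        }

        // Bracket form, e.g. proc.aname[3]: look up the part before '['.
        size_t bracket = field.find('[');
        if (bracket != std::string::npos)
        {
            return defined_arg_filters.count(field.substr(0, bracket)) > 0;
        }

        // Dot form, e.g. evt.arg.xxx: look up progressively shorter prefixes.
        for (size_t dot = field.rfind('.'); dot != std::string::npos;
             dot = field.rfind('.', dot - 1))
        {
            if (defined_arg_filters.count(field.substr(0, dot)))
            {
                return true;
            }
            if (dot == 0)
            {
                break;
            }
        }
        return false;
    }

    int main()
    {
        std::cout << is_defined_filter("proc.aname[3]") << " "  // 1
                  << is_defined_filter("evt.arg.uid") << " "    // 1
                  << is_defined_filter("bogus.field") << "\n";  // 0
        return 0;
    }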
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Json-related filtercheck fields supported indexing with brackets, but
when looking at the field descriptions you couldn't tell if a field
allowed an index, required an index, or did not allow an index.
This information was available, but it was a part of the protected
aliases map within the class.
Move this to the public field information so it can be used outside the
class.
Also add m_ prefixes for member names, now that the struct isn't
trivial.
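The resulting public field information looks roughly like the sketch
below (illustrative; the real falco struct differs in detail), with
the index behaviour carried on the field itself and members renamed
with m_ prefixes:

    #include <string>

    enum class index_mode
    {
        IDX_NONE,     // field does not allow an index
        IDX_ALLOWED,  // index is optional
        IDX_REQUIRED  // field must be used with an index
    };

    enum class index_type
    {
        IDX_KEY,     // bracket argument is a key, e.g. ka.req.volume.hostpath[/proc]
        IDX_NUMERIC  // bracket argument is a number
    };

    struct field_info
    {
        std::string m_name;
        std::string m_desc;
        index_mode m_idx_mode = index_mode::IDX_NONE;
        index_type m_idx_type = index_type::IDX_KEY;
    };

    // Example field description (hypothetical description text).
    static const field_info hostpath_field = {
        "ka.req.pod.volumes.hostpath",
        "Host paths used by hostPath volumes in the pod",
        index_mode::IDX_ALLOWED,
        index_type::IDX_KEY};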
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Add more accurate tracking of the number of falco rules loaded per
ruleset, which is made available via the engine method
::num_rules_for_ruleset().
In the ruleset objects, keep track of whether a filter wrapper is
actually added/removed and, if so, increment/decrement the count.
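A simplified sketch of that bookkeeping (not the real falco_ruleset
class): the per-ruleset count only moves when a filter wrapper is
genuinely inserted or erased.

    #include <cstdint>
    #include <iostream>
    #include <map>
    #include <set>
    #include <string>

    class ruleset_counter
    {
    public:
        void add_rule(uint16_t ruleset, const std::string &rule)
        {
            // Count the rule only if it was actually added (not a duplicate).
            if (m_rules[ruleset].insert(rule).second)
            {
                m_counts[ruleset]++;
            }
        }

        void remove_rule(uint16_t ruleset, const std::string &rule)
        {
            // Decrement only if something was really removed.
            if (m_rules[ruleset].erase(rule) > 0)
            {
                m_counts[ruleset]--;
            }
        }

        uint64_t num_rules_for_ruleset(uint16_t ruleset) const
        {
            auto it = m_counts.find(ruleset);
            return it == m_counts.end() ? 0 : it->second;
        }

    private:
        std::map<uint16_t, std::set<std::string>> m_rules;
        std::map<uint16_t, uint64_t> m_counts;
    };

    int main()
    {
        ruleset_counter rc;
        rc.add_rule(0, "Terminal shell in container");
        rc.add_rule(0, "Terminal shell in container"); // duplicate, not counted twice
        std::cout << rc.num_rules_for_ruleset(0) << "\n"; // 1
        return 0;
    }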
* Update engine fields checksum for fd.dev.*
There are new fd.dev.* fields, so update the fields checksum.
* Print a message explaining why the trace file can't be read.
At debug level only, but better than nothing.
* Adjust tests to match new container_started macro
Now that the container_started macro works either on the container event
or the first process being spawned in a container, we need to adjust the
counts for some rules to handle both cases.
* Add option to display times in ISO 8601 UTC
ISO 8601 time is useful when, say, running falco in a container, which
may have a different /etc/localtime than the host system.
A new config option time_format_iso_8601 controls whether log message
and event times are displayed in ISO 8601 in UTC or in local time. The
default is false (display times in local time).
This option is passed to logger init as well as outputs. For outputs it
eventually changes the time format field from %evt.time/%jevt.time to
%evt.time.iso8601/%jevt.time.iso8601.
Adding this field changes the falco engine version so increment it.
This depends on https://github.com/draios/sysdig/pull/1317.
* Unit test for ISO 8601 output
A unit test for ISO 8601 output ensures that both the log time and
the event time are in ISO 8601 format.
* Use ISO 8601 output by default in containers
Now that we have an option that controls iso 8601 output, use it by
default in containers. We do this by changing the value of
time_format_iso_8601 in falco.yaml in the container.
* Handle errors in strftime/asctime/gmtime
When they fail, a placeholder "N/A" is used in log messages instead.
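Roughly what the fallback looks like, as a sketch rather than falco's
exact code: check the return values of gmtime_r and strftime and
substitute "N/A" when either fails.

    #include <cstdio>
    #include <ctime>
    #include <string>

    // Format `now` as an ISO 8601 UTC timestamp, or return "N/A" if
    // gmtime_r/strftime fail for any reason.
    static std::string iso8601_utc(time_t now)
    {
        struct tm tm_utc;
        if (gmtime_r(&now, &tm_utc) == nullptr)
        {
            return "N/A";
        }

        char buf[sizeof "2019-01-01T00:00:00Z"];
        if (strftime(buf, sizeof(buf), "%Y-%m-%dT%H:%M:%SZ", &tm_utc) == 0)
        {
            return "N/A";
        }
        return buf;
    }

    int main()
    {
        printf("%s\n", iso8601_utc(time(nullptr)).c_str());
        return 0;
    }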
* Add support for container metaevent to detect container spawning
Create a new macro "container_started" that covers both the old check
and the new one.
Also, only look for execve exit events with vpid=1.
* Use TBB_INCLUDE_DIR for consistency w sysdig,agent
Previously it was a mix of TBB_INCLUDE and TBB_INCLUDE_DIR.
* Build using matching sysdig branch, if exists
* Expose required_engine_version when loading rules
When loading a rules file, have alternate methods that return the
required_engine_version. The existing methods remain unchanged and just
call the new methods with a dummy placeholder.
* Add --support argument to print support bundle
Add an argument --support that can be used as a single way to collect
necessary support information, including the falco version, config,
commandline, and all rules files.
There might be a bit of extra structure to the rules files, as they
actually support an array of "variants", but we're thinking ahead to
cases where there might be a comprehensive library of rules files and
choices, so we're adding the extra structure.
* Add ability to print field names only
Add ability to print field names only, instead of all information
about fields (description, etc.), using the -N cmdline option.
This will be used to add some versioning support steps that check for a
changed set of fields.
* Add an engine version that changes w/ filter flds
Add a method falco_engine::engine_version() that returns the current
engine version (e.g. set of supported fields, rules objects, operators,
etc.). It's defined in falco_engine_version.h, starts at 2 and should be
updated whenever a breaking change is made.
The most common reason for an engine change will be an update to the set
of filter fields. To make this easy to diagnose, add a build time check
that compares the sha256 output of "falco --list -N" against a value
that's embedded in falco_engine_version.h. A mismatch fails the build.
* Check engine version when loading rules
A rules file can now have a field "required_engine_version N". If
present, the number is compared to the falco engine version. If the
falco engine version is less, an error is thrown.
* Unit tests for engine versioning
Add a required_engine_version of 2 to one rules file to check the
positive case, and add a new test that verifies that a too-new rules
file won't be loaded.
* Rename falco test docker image
Rename sysdig/falco to falcosecurity/falco in unit tests.
* Don't pin falco_rules.yaml to an engine version
Currently, falco_rules.yaml is compatible with versions <= 0.13.1 other
than the required_engine_version object itself, so keep that line
commented out so users can use this rules file with older falco
versions.
We'll uncomment it with the first incompatible falco engine change.