The context no longer holds a column pointer: it is either a line, for
yaml parse errors, or an individual object (rule, macro, etc.) from the
parsed yaml.
Ensure that compiling filters for rules or macros doesn't throw Lua
errors. Instead, return a boolean status plus the return value(s). If
the status is false, the next return value is an error message.
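The status-plus-value convention can be sketched as follows. This is a minimal Python illustration (the real loader is Lua, and `compile_filter` and its toy validity check are hypothetical, not the actual compiler):

```python
# Instead of throwing on a bad filter, compilation returns a boolean
# status plus either the compiled value or an error message.
def compile_filter(source):
    # Toy validity check standing in for the real filter compiler.
    if source.count("(") != source.count(")"):
        return False, "unbalanced parentheses in filter: " + source
    return True, {"source": source}  # stand-in for a compiled filter

ok, value = compile_filter("proc.name = bash and (evt.type = open")
if not ok:
    print("filter error:", value)
```

Callers branch on the first return value rather than wrapping the call in an error handler.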
Add tests for specific parse failures and their expected output. Covered
cases:
- Input that can't be parsed as yaml
- Input that isn't yaml at all (lyaml handles this slightly
differently).
In each case, the return value and the validation output on stdout are checked.
Change the semantics of the Lua load_rules function to return a
success/failure status instead of throwing errors when loading
fails. On success, load_rules returns [true, required_engine_version];
on failure, it returns [false, row, col, error string]. The
row/col will be used to include context with the error.
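The new contract can be sketched like this. A hedged Python illustration (the real function is Lua, and the tab check is a toy stand-in for an actual yaml parse error):

```python
# Success yields (true, required_engine_version); failure yields
# (false, row, col, error string) so callers can point at the problem.
def load_rules(rules_text):
    for row, line in enumerate(rules_text.splitlines(), start=1):
        if "\t" in line:  # toy stand-in for a yaml parse failure
            col = line.index("\t") + 1
            return (False, row, col, "tab characters are not allowed")
    return (True, 2)  # 2 stands in for required_engine_version
```

The caller inspects the status flag instead of relying on a thrown error propagating up.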
Falco now prints validation results to stdout, which makes it easier
to capture the output and pass it along. Log messages continue to go
to stderr.
Still need to finish going through load_rules, returning all errors
directly instead of throwing a Lua error.
New test options stdout_is/stderr_is do a direct comparison between
stdout/stderr and the provided value.
Test option validate_rules_file maps to -V arguments, which validate
the rules file and exit.
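A hypothetical example of how such a test entry might look; only validate_rules_file and stdout_is are named in this change, and the surrounding structure, file paths, and expected text are assumed:

```yaml
# Sketch of a regression-test entry using the new options.
invalid-yaml:
  validate_rules_file: rules/not_yaml.yaml   # runs falco -V rules/not_yaml.yaml
  stdout_is: "<expected validation output>"
```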
If the rules file can't be parsed as yaml, lyaml returns a line and
column number. Add some context showing the lines around the line number
and a pointer to the column.
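The context rendering can be sketched as below. A Python illustration with a hypothetical helper name (the real code lives in the Lua loader); given the source text and lyaml's 1-based row/column, it prints the surrounding lines plus a caret under the offending column:

```python
# Show `around` lines on each side of the error line, with a pointer
# under the reported column.
def error_context(text, row, col, around=2):
    lines = text.splitlines()
    lo, hi = max(0, row - 1 - around), min(len(lines), row + around)
    out = []
    for i in range(lo, hi):
        out.append(lines[i])
        if i == row - 1:
            out.append(" " * (col - 1) + "^")
    return "\n".join(out)
```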
To speed up list expansion, instead of using regexes to replace a list
name with its contents, do string searches followed by examining the
preceding/following characters for the proper delimiter.
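The search-and-check approach can be sketched as follows. A hedged Python illustration (the real implementation is Lua; the delimiter set and function name are assumptions): a plain substring search locates each candidate occurrence, and the characters on either side decide whether it is really a standalone list name.

```python
DELIMS = set(" ,()=")  # assumed delimiter set for illustration

def expand_list(condition, name, items):
    repl = ", ".join(items)
    out, start = [], 0
    while True:
        idx = condition.find(name, start)
        if idx == -1:
            out.append(condition[start:])
            break
        before = condition[idx - 1] if idx > 0 else " "
        end = idx + len(name)
        after = condition[end] if end < len(condition) else " "
        if before in DELIMS and after in DELIMS:
            # Standalone occurrence: substitute the list contents.
            out.append(condition[start:idx] + repl)
        else:
            # Part of a longer identifier: leave it untouched.
            out.append(condition[start:end])
        start = end
    return "".join(out)
```

This avoids compiling a regex per list name while still rejecting matches embedded in longer identifiers.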
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
We shouldn't need to clean up strings via a dedicated cleanup function,
and we don't need a bunch of string.gsub() calls to do it.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Instead of iterating over the entire list of filters and doing pattern
matches against each defined filter, perform table lookups.
For filters that take arguments, e.g. proc.aname[3] or evt.arg.xxx,
split the filtercheck string on bracket/dot and check the values
against a table.
There are now two tables of defined filters: defined_arg_filters and
defined_noarg_filters. Each filter is put into a table depending on
whether the filter takes an argument or not.
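The two-table lookup can be sketched as below. A Python illustration (the real code is Lua; the table contents and the helper name are illustrative):

```python
# One table for filters that never take an argument, one for filters
# that do; membership tests replace per-filter pattern matching.
defined_noarg_filters = {"proc.name", "evt.type"}
defined_arg_filters = {"proc.aname", "evt.arg"}

def is_defined_filter(check):
    if check in defined_noarg_filters:
        return True
    # Bracket-argument form: proc.aname[3] -> proc.aname
    bracket = check.find("[")
    if bracket != -1 and check.endswith("]"):
        return check[:bracket] in defined_arg_filters
    # Dotted-argument form: evt.arg.xxx -> try shrinking prefixes
    parts = check.split(".")
    for i in range(len(parts) - 1, 0, -1):
        if ".".join(parts[:i]) in defined_arg_filters:
            return True
    return False
```

Each check is a constant number of table lookups instead of a scan over every defined filter.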
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
JSON-related filtercheck fields supported indexing with brackets, but
when looking at the field descriptions you couldn't tell if a field
allowed an index, required an index, or did not allow an index.
This information was available, but it was a part of the protected
aliases map within the class.
Move this to the public field information so it can be used outside the
class.
Also add m_ prefixes for member names, now that the struct isn't
trivial.
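The shape of the exposed field information can be sketched like this. A Python illustration only; the real code is a C++ struct, and the enum values and field name here are assumptions, not the actual identifiers:

```python
from enum import Enum

class IdxMode(Enum):
    IDX_NONE = 0      # field does not take an index
    IDX_ALLOWED = 1   # field may take a bracket index
    IDX_REQUIRED = 2  # field must have a bracket index

class FieldInfo:
    def __init__(self, m_name, m_idx_mode):
        # m_ prefixes mirror the member-naming change described above.
        self.m_name = m_name
        self.m_idx_mode = m_idx_mode
```

With the index mode in the public description, callers outside the class can tell how a field may be indexed without touching the protected aliases map.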
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Some refinements and improvements to the GitHub PR template.
Co-authored-by: Lorenzo Fontana <lo@linux.com>
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
This coding convention's sole goal is to approximately match the current code style.
It MUST NOT be interpreted in any other way until a real and definitive coding convention is put in place.
Co-authored-by: Lorenzo Fontana <lo@linux.com>
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>