Commit Graph

30 Commits

Author SHA1 Message Date
Mark Stemm
cb51522423 Skip plugins list/load/tests for MUSL_OPTIMIZED_BUILD
When MUSL_OPTIMIZED_BUILD is specified, falco is statically linked under
musl, and can't dlopen() files: see
https://inbox.vuxu.org/musl/20200423162406.GV11469@brightrain.aerifal.cx/T/

So skip listing/loading/testing plugins when MUSL_OPTIMIZED_BUILD is specified.

Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
2021-11-12 18:27:59 +01:00
Mark Stemm
2a4e4d555d Add automated tests for plugins
Test infrastructure and sample confs/rules/traces for plugins
automated tests:

New test cases are in falco_tests_plugins.yaml and cover:
- Listing plugins and fields when plugins are loaded.
- Basic cloudtrail + json plugin on a fake cloudtrail json file and a
  sample rule that uses both plugins.
- Conflicts between source/extractor plugins
- Incompatible plugin API
- Wrong plugin path
- Checking for warnings when reading rules with unknown sources (e.g. when plugins are not loaded)

Some test-only plugins written in C are in test/plugins and built on
the fly. (They aren't included in packages of course).

The test framework needed some small changes to handle these tests:
- Add a mode to not check detection counts at all (for --list/--list-plugins)
- addl_cmdline_opts to allow specifying --list/--list-plugins
- Using DOTALL when matching stderr/stdout (allows multi-line matches more easily)
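
As a rough illustration of the DOTALL change (the sample output text below is
made up, not actual falco output):

    import re

    # With re.DOTALL, '.' also matches newlines, so one pattern can span
    # several lines of captured stdout/stderr.
    stdout = "Loaded plugins:\n  cloudtrail\n  json\n"
    assert re.search(r"Loaded plugins:.*json", stdout, re.DOTALL)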

Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
2021-11-12 18:27:59 +01:00
Lorenzo Fontana
f4ff2ed072 chore(test): replace bucket url with official distribution url
Signed-off-by: Lorenzo Fontana <lo@linux.com>
2021-04-09 10:23:42 +02:00
Leonardo Grasso
aeca36bdaf update(test): update regression tests fixture URL
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2021-04-08 12:36:09 +02:00
Mark Stemm
b4eb5b87b6 Automated tests for exceptions
Cover various positive and negative cases. The tests should exercise every
error and warning path when reading exception objects or rule exception
fields, plus various positive cases of using exceptions to prevent
alerts.

Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
2021-01-19 10:37:55 +01:00
Lorenzo Fontana
f5c1e7c165 build: fix build directory for xunit tests
Co-Authored-By: Leonardo Di Donato <leodidonato@gmail.com>
Signed-off-by: Lorenzo Fontana <lo@linux.com>
2020-11-05 11:49:40 -05:00
Lorenzo Fontana
aaf6816821 build: make our integration tests report clear steps for CircleCI UI
inspection via collected test data [0]

[0] https://circleci.com/docs/2.0/collect-test-data/

Signed-off-by: Lorenzo Fontana <lo@linux.com>
2020-11-05 11:49:40 -05:00
Lorenzo Fontana
7e9ca5c540 build: run_regression_tests.sh skip package tests if asked
Co-Authored-By: Leonardo Grasso <me@leonardograsso.com>
Signed-off-by: Lorenzo Fontana <fontanalorenz@gmail.com>
2020-09-10 15:01:07 +02:00
Mark Stemm
f32bb84851 Start versioning trace files
Start versioning trace files with a unique date. Any time we need to
create new trace files, change TRACE_FILES_VERSION in this script and
copy to traces-{positive,negative,info}-<VERSION>.zip.

The zip file should unzip to traces-{positive,negative,info}, without
any version.

Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
2020-09-03 18:56:51 +02:00
Leonardo Grasso
e0b66ecae9 revert: "build: temporary remove falco_traces.yaml from integration test suite"
This reverts commit 7a2708de09.

Co-Authored-By: Lorenzo Fontana <fontanalorenz@gmail.com>
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2020-08-24 20:32:24 +02:00
Lorenzo Fontana
7a2708de09 build: temporary remove falco_traces.yaml from integration test suite
The tests fail because the file descriptor paths have been fixed
in commit [0].
However, the scap file fixtures we have for the tests still contain
the old paths, causing this problem.

We are commenting out those tests and opening an issue to get this fixed
later.

[0] 37aab8debf

Co-Authored-By: Leonardo Di Donato <leodidonato@gmail.com>
Signed-off-by: Lorenzo Fontana <fontanalorenz@gmail.com>
2020-08-20 19:26:56 +02:00
Leonardo Di Donato
a83b91fc53 new(test): run_regression_tests.sh -h
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-07-20 22:48:00 +02:00
Leonardo Di Donato
d8faa95702 fix(test): run_regression_tests.sh must generate falco_traces test suite in a non-interactive way
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-07-20 22:48:00 +02:00
Leonardo Di Donato
bb1282c7be update(test): make run_regression_tests.sh script accept different
options

The following options have been added:
* -v (verbose)
* -p (prepare falco_traces test suite)
* -b (specify custom branch for downloading trace files)
* -d (specify the build directory)

Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-07-20 22:48:00 +02:00
Mark Stemm
89121527da Add automated tests for K8s PSP Support
Add ~74 new automated tests that verify K8s PSP Support.

For each PSP attribute, add both positive and negative test cases. For
some of the more complicated attributes like runAsUser/Group/etc.,
include cases where the UIDs are specified both at the container
security context level and the pod security context level and then combined
with mayRunAs/mustRunAs, etc.

Also, some existing tests are updated to handle proper use of "in" and
"intersects" in expressions.

Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
2019-10-15 19:45:31 +02:00
Lorenzo Fontana
c76518c681 update: license headers
Co-Authored-By: Leonardo Di Donato <leodidonato@gmail.com>
Signed-off-by: Lorenzo Fontana <lo@linux.com>
2019-10-08 16:02:26 +02:00
Mark Stemm
6e11e75c15 Pass the build dir along when running tests
As of 0e1c436d14, the build directory is
an argument to run_regression_tests.sh. However, the build directory in
falco_tests.yaml is currently hard-coded to /build, with the build
variant influencing the subdirectory.

Clean this up so the build directory passed to
run_regression_tests.sh is forwarded to avocado and used as the build
directory.

Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
2019-08-30 07:25:23 -07:00
Leonardo Di Donato
4224329905 fix(test): correct bash shebangs
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2019-07-26 03:23:01 +02:00
Mark Stemm
0e1c436d14 Add jenkins checks (#584)
* Supporting files to build/test via jenkins

Changes to build/test via jenkins, which also means running all tests in
a container instead of directly on the host:

- Jenkinsfile controls the stages, build.sh does the build and
  run-tests.sh does the regression tests.

- Create a new container falcosecurity/falco-tester that includes the
  dependencies required to run the regression tests. This is a different
  image than falco-builder because it doesn't need to be CentOS 6 based,
  doesn't install any compiler/etc, and installs the test-running
  framework we use (avocado). We now use a newer version of avocado,
  which resulted in some small changes to how it is run and how YAML
  options are parsed.

- Modify run_regression_tests.sh to download trace files to the build
  directory, and only if they are not already present. Also honor
  BUILD_TYPE/BUILD_DIR, which are provided via the docker run command.

- The package tests are now moved to a separate falco_tests_package.yaml
  file. They use RPM installs by default instead of Debian packages. Also
  add the ability to install RPMs in addition to Debian packages.

- Automate the process of creating the docker local package by: 1)
  adding CMake rules to copy the Dockerfile and entrypoint to the build
  directory and 2) copying test trace files and rules into the build
  directory. This allows running the docker build command from
  build/docker/local instead of the source directory.

- Modify the way the container test is run a bit to use the trace
  files/rules copied into the container directly instead of host-mounted
  trace files.

* Use container builder + tester for travis

We'll probably be using jenkins soon, but this will allow switching back
to travis later if we want.

* Use download.draios.com for binutils packages

That way we won't be dependent on snapshot.debian.org.
2019-04-26 12:24:15 -07:00
Mark Stemm
1f28f85bdf K8s audit evts (#450)
* Add new json/webserver libs, embedded webserver

Add two new external libraries:

 - nlohmann-json is a better JSON library that makes stronger use of C++
   features like type deduction, better conversion from STL structures,
   etc. We'll use it to hold generic JSON objects instead of jsoncpp.

 - civetweb is an embeddable webserver that will allow us to accept
   POSTed JSON data.

New files webserver.{cpp,h} start an embedded webserver that listens for
POSTs on a configurable URL and passes the JSON data to the falco
engine.

New falco config items are under webserver:
  - enabled: true|false. Whether to start the embedded webserver or not.
  - listen_port: port that the webserver listens on.
  - k8s_audit_endpoint: URI on which to accept POSTed k8s audit events.
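
For illustration only (the port and endpoint below are example values, not
necessarily the defaults), posting an audit event to the embedded webserver
looks roughly like this:

    import json, urllib.request

    # Example values; use whatever webserver.listen_port and
    # webserver.k8s_audit_endpoint are actually set to in falco.yaml.
    event = {"kind": "Event", "stage": "ResponseComplete", "verb": "create"}
    req = urllib.request.Request(
        "http://localhost:8765/k8s_audit",
        data=json.dumps(event).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)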

(This commit doesn't compile entirely on its own, but we're grouping
these related changes into one commit for clarity).

* Don't use relative paths to find lua code

You can look directly below PROJECT_SOURCE_DIR.

* Reorganize compiler lua code

The lua compiler code is generic enough to work on more than just
sinsp-based rules, so move the parts of the compiler related to event
types and filterchecks out into a standalone lua file
sinsp_rule_utils.lua.

The checks for event types/filterchecks are now done from rule_loader,
and are dependent on a "source" attribute of the rule being
"sinsp". We'll be adding additional types of events next that come from
sources other than system calls.

* Manage separate syscall/k8s audit rulesets

Add the ability to manage separate sets of rules (syscall and
k8s_audit). Stop using the sinsp_evttype_filter object from the sysdig
repo, replacing it with falco_ruleset/falco_sinsp_ruleset from
ruleset.{cpp,h}. It has the same methods to add rules, associate them
with rulesets, and (for syscall) quickly find the relevant rules for a
given syscall/event type.

At the falco engine level, there are new parallel interfaces for both
types of rules (syscall and k8s_audit) to:
  - add a rule: add_k8s_audit_filter/add_sinsp_filter
  - match an event against rules, possibly returning a result:
    process_sinsp_event/process_k8s_audit_event

At the rule loading level, the mechanics of creating filtercheck
objects are handled by two factories (sinsp_filter_factory and
json_event_filter_factory), both of which are held by the engine.

* Handle multiple rule types when parsing rules

Modify the steps of parsing a rule's filter expression to handle
multiple types of rules. Notable changes:

 - In the rule loader/ast traversal, pass a filter api object down,
   which is passed back up in the lua parser api calls like nest(),
   bool_op(), rel_expr(), etc.
 - The filter api object is either the sinsp factory or k8s audit
   factory, depending on the rule type.
 - When the rule is complete, the complete filter is passed to the
   engine using either add_sinsp_filter() or add_k8s_audit_filter().

* Add multiple output formatting types

Add support for multiple output formatters. Notable changes:

 - The falco engine is passed along to falco_formats to gain access to
   the engine's factories.
 - When creating a formatter, the source of the rule is passed along
   with the format string, which controls which kind of output formatter
   is created.

Also clean up exception handling a bit so all lua callbacks catch all
exceptions and convert them into lua errors.

* Add support for json, k8s audit filter fields

With some corresponding changes in sysdig, you can now create general
purpose filter fields and events, which can be tied together with
nesting, expressions, and relational operators. The classes here
represent an instance of these fields devoted to generic json objects as
well as k8s audit events. Notable changes:

 - json_event: holds a json object, used by all of the below

 - json_event_filter_check: can extract values out of a json_event object
   and can define macros that associate a field like "group.field" with a
   json pointer expression that extracts a single property's value out of
   the json object. The basic field definition also allows creating an
   index, e.g. group.field[index], where a std::function is responsible for
   performing the indexing. This class has virtual void methods, so it
   must be subclassed.

 - jevt_filter_check: subclass of json_event_filter_check and defines
   the following fields:
     - jevt.time/jevt.rawtime: extracts the time from the underlying json object.
     - jevt.value[<json pointer>]: general purpose way to extract any
       json value out of the underlying object. <json pointer> is a json
       pointer expression
     - jevt.obj: Return the entire object, stringified.

 - k8s_audit_filter_check: implements fields that extract values from
   k8s audit events. Most of the implementation is in the form of macros
   like ka.user.name, ka.uri, ka.target.name, etc. that just use json
   pointers to extract the appropriate value from a k8s audit event. More
   advanced fields like ka.uri.param, ka.req.container.image use
   indexing to extract individual values out of maps or arrays (see the
   sketch after this list).

 - json_event_filter_factory: used by things like the lua parser api,
   output formatter, etc to create the necessary objects and return
   them.

 - json_event_formatter: given a format string, create the necessary
   fields that will be used to create a resolved string when given a
   json_event object.
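
A minimal Python sketch of the json-pointer style extraction described above
(the pointer and the event are illustrative; the real ka.* macros may use
different paths):

    import json

    def json_pointer_get(obj, pointer):
        # Resolve an RFC 6901-style JSON pointer against a parsed JSON object.
        value = obj
        for token in pointer.lstrip("/").split("/"):
            token = token.replace("~1", "/").replace("~0", "~")
            value = value[int(token)] if isinstance(value, list) else value[token]
        return value

    audit_event = json.loads('{"user": {"username": "system:anonymous"}}')
    print(json_pointer_get(audit_event, "/user/username"))  # system:anonymous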

* Add ability to list fields

Similar to sysdig's -l option, add --list (<source>) to list the fields
supported by falco. With no source specified, it prints all
fields. Source can be "syscall" for inspector fields (i.e. what is
supported by sysdig), or "k8s_audit" to list fields supported only by the
k8s audit support in falco.

* Initial set of k8s audit rules

Add an initial set of k8s audit rules. They're broken into 3 classes of
rules:

 - Suspicious activity: this includes things like:
    - A disallowed k8s user performing an operation
    - A disallowed container being used in a pod.
    - A pod created with a privileged container.
    - A pod created with a sensitive mount.
    - A pod using host networking
    - Creating a NodePort Service
    - A configmap containing private credentials
    - A request being made by an unauthenticated user.
    - Attach/exec to a pod. (We eventually want to also do privileged
      pods, but that will require some state management that we don't
      currently have).
    - Creating a new namespace outside of an allowed set
    - Creating a pod in either of the kube-system/kube-public namespaces
    - Creating a serviceaccount in either of the kube-system/kube-public
      namespaces
    - Modifying any role starting with "system:"
    - Creating a clusterrolebinding to the cluster-admin role
    - Creating a role that wildcards verbs or resources
    - Creating a role with writable permissions/pod exec permissions.
 - Resource tracking. This includes noting when a deployment, service,
   configmap, cluster role, service account, etc. are created or destroyed.
 - Audit tracking: This tracks all audit events.

To support these rules, add macros/new indexing functions as needed to
support the required fields and ways to index the results.

* Add ability to read trace files of k8s audit evts

Expand the use of the -e flag to cover both .scap files containing
system calls and jsonl files containing k8s audit events:

If a trace file is specified, first try to read it using the
inspector. If that throws an exception, try to read the first line as
json. If both fail, return an error.

Based on the results of the open, the main loop either calls
do_inspect(), looping over system events, or
read_k8s_audit_trace_file(), reading each line as json and passing it to
the engine and outputs.
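
The fallback logic, sketched in Python for illustration (open_with_inspector
is a stand-in for the real scap open, not an actual falco function):

    import json

    def open_with_inspector(path):
        # Stand-in for the real inspector open(); assume it raises on non-scap input.
        raise RuntimeError("not a scap file")

    def open_trace(path):
        try:
            return ("syscall", open_with_inspector(path))
        except Exception:
            with open(path) as f:
                first_line = f.readline()
            json.loads(first_line)  # raises if the first line isn't JSON either
            return ("k8s_audit", path)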

* Example showing how to enable k8s audit logs.

An example of how to enable k8s audit logging for minikube.

* Add unit tests for k8s audit support

Initial unit test support for k8s audit events. A new multiplex file
falco_k8s_audit_tests.yaml defines the tests. Traces (jsonl files) are
in trace_files/k8s_audit and new rules files are in
test/rules/k8s_audit.

Current test cases include:

- User outside allowed set
- Creating disallowed pod.
- Creating a pod explicitly on the allowed list
- Creating a pod w/ a privileged container (or second container), or a
  pod with no privileged container.
- Creating a pod w/ a sensitive mount container (or second container), or a
  pod with no sensitive mount.
- Cases for a trace w/o the relevant property + the container being
  trusted, and hostnetwork tests.
- Tests that create a Service w/ and w/o a NodePort type.
- Tests for configmaps: one set tries each disallowed string, ensuring each
  is detected, and another uses a configmap with no disallowed string,
  ensuring it is not detected.
- The anonymous user creating a namespace.
- Tests for all kactivity rules e.g. those that create/delete
  resources as compared to suspicious activity.
- Exec/Attach to Pod
- Creating a namespace outside of an allowed set
- Creating a pod/serviceaccount in kube-system/kube-public namespaces
- Deleting/modifying a system cluster role
- Creating a binding to the cluster-admin role
- Creating a cluster role binding that wildcards verbs or resources
- Creating a cluster role with write/pod exec privileges

* Don't manually install gcc 4.8

gcc 4.8 should already be installed by default on the vm we use for
travis.
2018-11-09 10:15:39 -08:00
Mark Stemm
6445cdb950 Better copyright notices (#426)
* Use correct copyright years.

Also include the start year.

* Improve copyright notices.

Use the proper start year instead of just 2018.

Add the right owner Draios dba Sysdig.

Add copyright notices to some files that were missing them.
2018-09-26 19:49:19 -07:00
Mark Stemm
2352b96d6b Change license to Apache 2.0 (#419)
Replace references to the GNU Public License with the Apache license in:

 - COPYING file
 - README
 - all source code below falco
 - rules files
 - rules and code below test directory
 - code below falco directory
 - entrypoint for docker containers (but not the Dockerfiles)

I didn't generally add copyright notices to all the example files, as
they aren't core falco. If they did refer to the GPL, I changed them to
Apache.
2018-09-20 11:47:10 -07:00
Mark Stemm
5bafa198c6 Update automated tests to handle new priority lvls
The default falco ruleset now has a wider variety of priorities, so
adjust the automated tests to match:

 - Instead of creating a generic test yaml entry for every trace file in
   traces-{positive,negative,info} with assumptions about detect levels,
   add a new falco_traces.yaml.in multiplex file that has specific
   information about the detect priorities and rule detect counts for each
   trace file.
 - If a given trace file doesn't have a corresponding entry in
   falco_traces.yaml.in, a generic entry is added with a simple
   detect: (True|False) value and level. That way you can get specific
   detect levels/counts for existing trace files, but if you forget to
   add a trace to falco_traces.yaml.in, you'll still get some coverage (see the sketch below).
 - falco_tests.yaml.in is no longer appended to, so rename it to
   falco_tests.yaml.
 - Avocado is now run twice, once on each YAML file. The final test
   passes if both avocado runs pass.
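
A rough sketch of the generic-entry fallback (field names and values are
approximate, not the exact multiplex schema):

    import os

    def generic_entry(trace_path, should_detect, detect_level):
        # Fallback entry for a trace file with no hand-written entry in
        # falco_traces.yaml.in; keys are approximate.
        name = os.path.splitext(os.path.basename(trace_path))[0]
        return name, {
            "detect": should_detect,
            "detect_level": detect_level,
            "trace_file": trace_path,
        }

    # e.g. generic_entry("traces-positive/some_trace.scap", True, "Warning")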
2017-05-25 12:15:35 -07:00
Mark Stemm
897df28036 Add regression tests for configurable outputs.
- In the regression tests, make the config file configurable in the
   multiplex file via 'conf_file'.
 - A new multiplex file item 'outputs' containing a list of <filename>:
   <regex> tuples. For each item, the test reads the file and matches
   each line against the regex. A match must be found for the test to
   pass (see the sketch below).
 - Add 2 new tests that test file output and program output. They write
   to files below /tmp/falco_outputs/ and the contents are checked to
   ensure that alerts are written.
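
Roughly, the 'outputs' check behaves like this (the file name and pattern in
the example call are illustrative):

    import re

    def check_outputs(outputs):
        # For each (filename, regex) pair, at least one line of the file
        # must match the pattern, otherwise the test fails.
        for filename, pattern in outputs:
            with open(filename) as f:
                if not any(re.search(pattern, line) for line in f):
                    raise AssertionError(
                        "no line in {} matches {!r}".format(filename, pattern))

    # e.g. check_outputs([("/tmp/falco_outputs/file_output.txt", r"Warning .* shell")])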
2016-10-24 15:56:45 -07:00
Mark Stemm
917d66e9e8 Create embeddable falco engine.
Create standalone classes falco_engine/falco_outputs that can be
embedded in other programs. falco_engine is responsible for matching
events against rules, and falco_output is responsible for formatting an
alert string given an event and writing the alert string to all
configured outputs.

falco_engine's main interfaces are:

 - load_rules/load_rules_file: Given a path to a rules file or a string
   containing a set of rules, load the rules. Also loads needed lua code.
 - process_event(): check the event against the set of rules and return
   the results of a match, if any.
 - describe_rule(): print details on a specific rule or all rules.
 - print_stats(): print stats on the rules that matched.
 - enable_rule(): enable/disable any rules matching a pattern. New falco
   command line option -D allows you to disable one or more rules on the
   command line.

falco_output's main interfaces are:
 - init(): load needed lua code.
 - add_output(): add an output channel for alert notifications.
 - handle_event(): given an event that matches one or more rules, format
   an alert message and send it to any output channels.

Each of falco_engine/falco_output maintains a separate lua state and
loads separate sets of lua files. The code to create and initialize the
lua state is in a base class falco_common.

falco_engine no longer logs anything. In the case of errors, it throws
exceptions. falco_logger is now only used as a logging mechanism for
falco itself and as an output method for alert messages. (This should
really probably be split, but it's ok for now).

falco_engine contains an sinsp_evttype_filter object containing the set
of eventtype filters. Instead of calling
m_inspector->add_evttype_filter() to add a filter created by the
compiler, call falco_engine::add_evttype_filter() instead. This means
that the inspector runs with a NULL filter and all events are returned
from do_inspect. This depends on
https://github.com/draios/sysdig/pull/633 which has a wrapper around a
set of eventtype filters.

Some additional changes along with creating these classes:

- Some cleanups of unnecessary header files, CMake include_directory()s,
  etc. to only pull in necessary includes and only include them in header
  files when required.

- Try to avoid 'using namespace std' in header files, or assuming
  someone else has done that. Generally add 'using namespace std' to all
  source files.

- Instead of using sinsp_exception for all errors, define a
  falco_engine_exception class for exceptions coming from the falco
  engine and use it instead. For falco program code, switch to general
  exceptions under std::exception and catch + display an error for all
  exceptions, not just sinsp_exceptions.

- Remove fields.{cpp,h}. This was dead code.

- Start tracking counts of rules by priority string (i.e. what's in the
  falco rules file) as compared to priority level (i.e. roughly
  corresponding to a syslog level). This keeps the rule processing and
  rule output halves separate. This led to some test changes. The regex
  used in the test is now case insensitive to be a bit more flexible.

- Now that https://github.com/draios/sysdig/pull/632 is merged, we can
  delete the rules object (and its lua_parser) safely.

- Move loading the initial lua script to the constructor. Otherwise,
  calling load_rules() twice re-loads the lua script and throws away any
  state like the mapping from rule index to rule.

- Allow an empty rules file.

Finally, fix most memory leaks found by valgrind:

 - falco_configuration wasn't deleting the allocated m_config yaml
   config.
 - several ifstreams were being created simply to test which falco
   config file to use.
 - In the lua output methods, an event formatter was being created using
   falco.formatter() but there was no corresponding free_formatter().

This depends on changes in https://github.com/draios/sysdig/pull/640.
2016-10-24 15:56:45 -07:00
Mark Stemm
7b68fc2692 Add tests for event type rule identification
Add tests that verify that the event type identification functionality
is working. Notable changes:

 - Modify falco_test.py to additionally check for warnings when loading
   any set of rules and verify that the event types for each rule match
   expected values. This is controlled by the new multiplex fields
   "rules_warning" and "rules_events".

 - Instead of starting from an empty falco_tests.yaml generated from
   the downloaded trace files, use a checked-in version which defines
   two tests:
     - Loading the checked-in falco_rules.yaml and verify that no rules
       have warnings.
     - A sample falco_rules_warnings.yaml that has ~30 different
       mutations of rule filtering expressions. The test verifies for each
       rule whether or not the rule should result in a warning and what the
       extracted event types are.
   The generated tests from the trace files are appended to this file.

- Add an empty .scap file to use with the above tests.
2016-07-18 11:26:28 -07:00
Mark Stemm
8ffb553c75 Add ability to run branch-specific trace files.
Pass the Travis branch to run_regression_tests.sh. When downloading
trace files, first look for a file traces-XXX-$BRANCH and, if found,
download it. This allows testing out a set of changes with a trace file
specific to that branch, which can be moved to the normal file once
the PR is merged.
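
Sketched in Python for illustration (the real script is shell, and the base
URL below is a placeholder):

    import urllib.error, urllib.request

    def trace_url(base, name, branch):
        # Prefer a branch-specific archive if it exists, else fall back to the
        # default one.
        candidate = "{}/traces-{}-{}.zip".format(base, name, branch)
        try:
            urllib.request.urlopen(urllib.request.Request(candidate, method="HEAD"))
            return candidate
        except urllib.error.URLError:
            return "{}/traces-{}.zip".format(base, name)

    # e.g. trace_url("https://example.com/downloads", "positive", "my-branch")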

Also increase the timeout for the spawned falco process from 1 to 3
minutes. In debug mode, the kubernetes demo was taking slightly over 1
minute.
2016-07-12 08:22:29 -07:00
Mark Stemm
995e61210e Add regression tests for json output.
Modify falco_test.py to look for a boolean multiplex attribute
'json_output'. If true, examine the lines of the output and, for any line
that begins with '{', parse it as JSON and ensure it has the four
attributes we expect.
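
An illustrative version of that check (the expected attribute set is an
example; adjust to whatever the JSON output actually carries):

    import json

    EXPECTED_KEYS = {"time", "rule", "priority", "output"}  # example set

    def check_json_lines(path):
        with open(path) as f:
            for line in f:
                if line.startswith("{"):
                    event = json.loads(line)
                    assert EXPECTED_KEYS.issubset(event)

    # e.g. check_json_lines("/tmp/falco_stdout.log")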

Modify run_regression_tests to have a utility function
prepare_multiplex_fileset that does the work of looping over files in a
directory, along with detect, level, and json output arguments. The
appropriate multiplex attributes are added for each file.

Use that utility function to test JSON output for the positive and
informational directories along with non-JSON output. The negative
directory is only tested once.
2016-06-07 14:04:53 -07:00
Mark Stemm
fc6d775e5b Add additional rules/tests for pipe installers.
Add additional rules related to using pipe installers within a fbash
session:

 - Modify write_etc to only trigger if *not* in a fbash session. There's
   a new rule write_etc_installer which has the same conditions when in
   a fbash session, logging at INFO severity.

 - A new rule write_rpm_database warns if any non package management
   program tries to write below /var/lib/rpm.

 - Add a new warning if any program below a fbash session tries to open
   an outbound network connection on ports other than http(s) and dns.

 - Add INFO level messages when programs in a fbash session try to run
   package management binaries (rpm,yum,etc) or service
   management (systemctl,chkconfig,etc) binaries.

In order to test these new INFO level rules, make up a third class of
trace files traces-info.zip containing trace files that should result in
info-level messages.

To differentiate warning and info level detection, add an attribute to
the multiplex file "detect_level", which is "Warning" for the files in
traces-positive and "Info" for the files in traces-info. Modify
falco_test.py to look specifically for a non-zero count for the given
detect_level.
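
In other words, something along these lines (the counts and level are made up):

    detect_counts = {"Warning": 0, "Info": 2}  # parsed from falco's stats output
    detect_level = "Info"                      # from the multiplex file
    assert detect_counts.get(detect_level, 0) > 0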

Doing this exposed a bug in the way the level-specific counts were being
recorded: they were keeping counts by level name, not number. Fix that.
2016-06-06 10:29:41 -07:00
Mark Stemm
4751546c03 Add correctness tests using Avocado
Start using the Avocado framework for automated regression
testing. Create a test FalcoTest in falco_test.py which can run on a
collection of trace files. The script test/run_regression_tests.sh is
responsible for pulling zip files containing the positive (falco should
detect) and negative (falco should not detect) trace files, creating an
Avocado multiplex file that defines all the tests (one for each trace
file), running avocado on all the trace files, and showing full logs for
any test that didn't pass.

The old regression script, which simply ran falco, has been removed.

Modify falco's stats output to show the total number of events detected
for use in the tests.

In travis.yml, pull a known stable version of avocado and build it,
including installing any dependencies, as a part of the build process.
2016-05-24 13:56:48 -07:00