* Make stats file interval configurable
New argument --stats_interval=<msec> controls the interval at which
statistics are written to the stats file. The default is 5000 ms (5 sec),
which matches the prior hardcoded interval.
The stats interval is driven by signals, so an interval below ~250 ms
will probably interfere with falco's behavior.
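For example, to write stats every second (paths are hypothetical):
$ falco -c /etc/falco/falco.yaml -s /var/log/falco_stats.log --stats_interval=1000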
* Add ability to emit general purpose messages
A new method falco_outputs::handle_msg allows emitting generic messages
that have "rule", message, and output fields but aren't tied to any
specific event and aren't passed through an event formatter.
This allows falco to emit "events" based on internal checks like kernel
buffer overflow detection.
* Clean up newline handling for logging
Log messages from falco_logger::log may or may not have trailing
newlines. Handle both cases by always ensuring stderr log lines end with
a newline and always stripping any trailing newline from syslog logs.
* Add method to get sequence from subkey
New variant of get_sequence that allows fetching a list of items from a
key + subkey, for example:
key:
  subkey:
    - list
    - items
    - here
Both use a shared method get_sequence_from_node().
* Monitor syscall event drops + optional actions
Start actively monitoring the kernel buffer for syscall event drops,
which are visible in scap_stats.n_drops, and add the ability
to take actions when events are dropped. The -v (verbose) and
-s (stats filename) arguments already print information on dropped
events, but only as log output, without taking any action.
In the falco config you can specify one or more of the following actions
to take when falco detects system call drops (see the config sketch
after this list):
- ignore (do nothing)
- log a critical message
- emit an "internal" falco alert. It looks like any other alert, with
time, "rule", message, and output fields, but is not tied to any rule
in falco_rules.yaml or other rules files.
- exit falco (the idea being that a restart would be handled elsewhere).
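A minimal sketch of the corresponding falco.yaml section; the key name
syscall_event_drops is assumed here, and the action names are the ones
listed above:
syscall_event_drops:
  # one or more of: ignore, log, alert, exit
  actions:
    - log
    - alert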
A new module syscall_event_drop_mgr is called for every event and
collects scap stats every second. If in the prior second there were
drops, perform_actions() handles the actions.
To prevent potential flooding in high drop rate environments, actions
are governed by a token bucket with a rate of 1 action per 30 seconds
and a maximum burst of 10. We might tune this later based on
experience in busy environments.
This might be considered a fix for
https://github.com/falcosecurity/falco/issues/545. It doesn't
specifically flag falco rules alerts when there are drops, but it does
make drops easier to notice.
* Add unit test for syscall event drop detection
Add unit tests for syscall event drop detection. First, add a config
option that artificially increments the drop count every second. (This
is only used for testing.)
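A sketch of what enabling that test-only option might look like in
falco.yaml; the option name simulate_drops is an assumption here:
syscall_event_drops:
  # assumed name of the test-only option that fakes drops every second
  simulate_drops: true
  actions:
    - log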
Then add test cases for each of the following:
- No dropped events: should not see any log messages or alerts.
- ignore action: should note the drops but neither log messages nor alert.
- log action: should only see log messages for the dropped events.
- alert action: should only see alerts for the dropped events.
- exit action: should see a log message noting the dropped events, and
falco should exit with rc=1.
A new trace file ping_sendto.scap has 10 seconds' worth of events to
allow the periodic tracking of drops to kick in.
+ Add the user_known_write_root_conditions macro to allow custom conditions in the "Write below root" rule
+ Add the user_known_non_sudo_setuid_conditions macro to allow custom conditions in the "Non sudo setuid" rule
falco-CLA-1.0-contributing-entity: Coveo Solutions Inc.
falco-CLA-1.0-signed-off-by: Jean-Philippe Lachance <jplachance@coveo.com>
When using host networking, containers can't resolve kubernetes.default and therefore can't fetch metadata like pod name, namespace, etc. Using the KUBERNETES_SERVICE_HOST environment variable, which points to the current cluster's API server, makes that metadata available.
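For example, a daemonset can pass the API server URL to falco's -k option
via that variable instead of hardcoding kubernetes.default (a sketch; the
exact args depend on your spec):
args:
  - /usr/bin/falco
  - -K
  - /var/run/secrets/kubernetes.io/serviceaccount/token
  - -k
  - https://$(KUBERNETES_SERVICE_HOST)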
* Add support for container metaevent to detect container spawning
Create a new macro "container_started" that covers both the old check
and the new one.
Also, only look for execve exit events with vpid=1.
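A sketch of what such a macro might look like in rules yaml (the exact
condition text may differ):
- macro: container_started
  condition: >
    (evt.type = container or
     (evt.type = execve and evt.dir = < and proc.vpid = 1))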
* Use TBB_INCLUDE_DIR for consistency with sysdig/agent
Previously it was a mix of TBB_INCLUDE and TBB_INCLUDE_DIR.
* Build using matching sysdig branch, if one exists
! Make sure we add the Sysdig repo and call an update before trying to install Falco
! Remove the require in the service class to fix a dependency loop
* Bump the version to 0.4.0
falco-CLA-1.0-contributing-entity: Coveo Solutions Inc.
falco-CLA-1.0-signed-off-by: Jean-Philippe Lachance <jplachance@coveo.com>
* Update the Puppet module:
* Apply puppet-lint recommendations
* Update the README since the project moved from draios to falcosecurity in GitHub
* Move parameters into their own file
+ Add the DEB repository automatically
+ Add the EPEL repository automatically
+ Add a logrotate configuration
* Update the configuration file with all the latest updates
falco-CLA-1.0-contributing-entity: Coveo Solutions Inc.
falco-CLA-1.0-signed-off-by: Jean-Philippe Lachance <jplachance@coveo.com>
* Set required module versions properly
* Set dependencies between classes
* Set the class order
* Apply mstemm's code review
* Drop Puppet 3 support
* Use a working version of puppetlabs-apt
* Use dependencies to be compatible with Puppet 4.7 and above
* Move kubernetes-response-engine to falcosecurity/kubernetes-response-engine
As long as Falco and the Response Engine have different release cycles,
they are kept separate.
* Add a README explaining that the repository has been moved
@mfdii is absolutely right about this on #539
Related to https://github.com/falcosecurity/falco/pull/526, it turns out
attempting to build a kernel module on the default debian-based ami used
by kops tries to invoke gcc-6:
-----
* Setting up /usr/src links from host
* Unloading falco-probe, if present
* Running dkms install for falco
Kernel preparation unnecessary for this kernel. Skipping...
Building module:
cleaning build area...
make -j8 KERNELRELEASE=4.9.0-7-amd64 -C /lib/modules/4.9.0-7-amd64/build
M=/var/lib/dkms/falco/0.14.0/build...(bad exit status: 2)
Error! Bad return status for module build on kernel:
4.9.0-7-amd64 (x86_64)
Consult /var/lib/dkms/falco/0.14.0/build/make.log for more information.
* Running dkms build failed, dumping
/var/lib/dkms/falco/0.14.0/build/make.log
DKMS make.log for falco-0.14.0 for kernel 4.9.0-7-amd64 (x86_64)
Wed Feb 13 01:02:01 UTC 2019
make: Entering directory '/host/usr/src/linux-headers-4.9.0-7-amd64'
arch/x86/Makefile:140: CONFIG_X86_X32 enabled but no binutils support
/host/usr/src/linux-headers-4.9.0-7-common/scripts/gcc-version.sh:
line 25: gcc-6: command not found
-----
So manually add back gcc-6 and its dependencies.
To allow for a more portable build environment, create a builder image
based on CentOS 6 with devtoolset-2 for a reference g++.
In that image, install all required packages and run a script that can
run either cmake or make.
The image depends on the following parameters:
FALCO_VERSION: the version to give any built packages
BUILD_TYPE: Debug or Release
BUILD_DRIVER/BUILD_BPF: whether or not to build the kernel module/bpf
program when building. These should usually be OFF, as the kernel module
would be built against the kernel in the CentOS image, not the host's.
BUILD_WARNINGS_AS_ERRORS: consider all build warnings fatal
MAKE_JOBS: passed to the -j argument of make
A typical way to run this builder is the following. Assumes you have
checked out falco and sysdig to directories below /home/user/src, and
want to use a build directory of /home/user/build/falco:
$ docker run --user $(id -u):$(id -g) -v /etc/passwd:/etc/passwd:ro -e MAKE_JOBS=4 -it -v /home/user/src:/source -v /home/user/build/falco:/build falcosecurity/falco-builder cmake
$ docker run --user $(id -u):$(id -g) -v /etc/passwd:/etc/passwd:ro -e MAKE_JOBS=4 -it -v /home/user/src:/source -v /home/user/build/falco:/build falcosecurity/falco-builder package
* Expose required_engine_version when loading rules
When loading a rules file, have alternate methods that return the
required_engine_version. The existing methods remain unchanged and just
call the new methods with a dummy placeholder.
* Add --support argument to print support bundle
Add an argument --support that can be used as a single way to collect
necessary support information, including the falco version, config,
commandline, and all rules files.
There might be a bit of extra structure to the rules files, as they
actually support an array of "variants", but we're thinking ahead to
cases where there might be a comprehensive library of rules files and
choices, so we're adding the extra structure.
Instead of using container.image, which always reports the raw string
used to spawn the container, switch to the more reliable
container.image.{repository,tag}, since they are guaranteed to report
the actual repository/tag of the container image.
This also gives a small performance improvement, since a single 'in'
predicate can now be used instead of a sequence of startswith checks.
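For example, a check that previously chained container.image startswith
comparisons can become a single 'in' over repositories (macro name and
image list are illustrative only):
- macro: trusted_images
  condition: (container.image.repository in (k8s.gcr.io/kube-proxy, quay.io/coreos/flannel))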
* Add ability to print field names only
Add the ability to print field names only, instead of all information
about fields (description, etc.), using the -N cmdline option.
This will be used to add some versioning support steps that check for a
changed set of fields.
* Add an engine version that changes w/ filter flds
Add a method falco_engine::engine_version() that returns the current
engine version (e.g. set of supported fields, rules objects, operators,
etc.). It's defined in falco_engine_version.h, starts at 2 and should be
updated whenever a breaking change is made.
The most common reason for an engine change will be an update to the set
of filter fields. To make this easy to diagnose, add a build time check
that compares the sha256 output of "falco --list -N" against a value
that's embedded in falco_engine_version.h. A mismatch fails the build.
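A rough way to reproduce the value being compared:
$ falco --list -N | sha256sum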
* Check engine version when loading rules
A rules file can now have a field "required_engine_version N". If
present, the number is compared to the falco engine version. If the
falco engine version is less, an error is thrown.
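For example, declared at the top of a rules file as a yaml list item:
- required_engine_version: 2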
* Unit tests for engine versioning
Add a required_engine_version of 2 to one rules file to check the
positive case, and add a new test that verifies that a too-new rules
file won't be loaded.
* Rename falco test docker image
Rename sysdig/falco to falcosecurity/falco in unit tests.
* Don't pin falco_rules.yaml to an engine version
Currently, falco_rules.yaml is compatible with versions <= 0.13.1 other
than the required_engine_version object itself, so keep that line
commented out so users can use this rules file with older falco
versions.
We'll uncomment it with the first incompatible falco engine change.
* Allow SSL for k8s audit endpoint
Allow enabling SSL for the Kubernetes audit log web server. This
required adding two new configuration options: webserver.ssl_enabled and
webserver.ssl_certificate. To enable SSL add the below to the webserver
section of the falco.yaml config:
webserver:
  enabled: true
  listen_port: 8765s
  k8s_audit_endpoint: /k8s_audit
  ssl_enabled: true
  ssl_certificate: /etc/falco/falco.pem
Note that the port number has an "s" appended to indicate SSL for the
port, which is how civetweb expects SSL ports to be denoted. We could
change this to dynamically add the "s" when ssl_enabled: true.
The ssl_certificate is a combined SSL certificate and corresponding
key contained in a single file. You can generate a key/cert as follows:
$ openssl req -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem
$ cat certificate.pem key.pem > falco.pem
$ sudo cp falco.pem /etc/falco/falco.pem
Fix SSL option handling.
* Add notes on how to create ssl certificate
Add notes on how to create the ssl certificate to the config comments.
gcc 5 is no longer included in debian unstable, but we need it to build
centos kernels, which are 3.x based and explicitly want a gcc version 3,
4, or 5 compiler.
So grab copies we've saved from debian snapshots with the prefix
https://snapshot.debian.org/archive/debian/20190122T000000Z. They're
stored at downloads.draios.com and installed in a dpkg -i step after the
main packages are installed, but before any other by-hand packages are
installed.
A recent sysdig change added support for CRI and also added new external
dependencies (cri uses grpc to communicate between the client/server).
Add those dependencies.
* Add falco service to k8s install/update labels
Update the instructions for K8s RBAC installation to also create a
service that maps to port 8765 of the falco pod. This allows other
services to access the embedded webserver within falco.
Also clean up the set of labels to use a consistent app: falco-example,
role: security for each object.
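A sketch of such a service; the service name is illustrative, while the
port and labels follow the ones mentioned above:
apiVersion: v1
kind: Service
metadata:
  name: falco-service
  labels:
    app: falco-example
    role: security
spec:
  selector:
    app: falco-example
    role: security
  ports:
    - protocol: TCP
      port: 8765
      targetPort: 8765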
* Change K8s Audit Example to use falco daemonset
Change the K8s Audit Example instructions to use minikube in conjunction
with a falco daemonset running inside of minikube. (We're going to start
prebuilding kernel modules for recent minikube variants to make this
possible).
When running inside of minikube in conjunction with a service, you have
to go through some additional steps to find the ClusterIP associated
with the falco service and use that ip when configuring the k8s audit
webhook. Overall it's still a more self-contained set of instructions,
though.
In the common case, falco doesn't generate much output, so it's
desirable to not buffer it in case you're tail -fing some logs.
So change the default for buffered outputs to false.
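Assuming the option keeps its existing falco.yaml name, the new default
corresponds to:
buffered_outputs: false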
* Improved inbound/outbound macros
Improved versions of inbound/outbound macros that add coverage for
recvfrom/recvmsg, sendto/sendmsg and also ignore non-blocking syscalls
in a different way.
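A simplified sketch of the inbound side, showing only the broader event
coverage (the non-blocking handling and other details are omitted):
- macro: inbound
  condition: >
    ((evt.type in (accept, listen) and evt.dir = <) or
     (evt.type in (recvfrom, recvmsg) and evt.dir = <))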
* Let nginx-ingress-c(ontroller) write to /etc/nginx
(The process name appears truncated to "nginx-ingress-c" because of the
comm length limit.)
Also fix some parentheses for another write_etc_common macro.
* Let calico call setns as well.
* Let prometheus-conf write its config
Let prometheus-conf write its config below /etc/prometheus.
* Let openshift oc write to /etc/origin/node
As per https://github.com/draios/sysdig/pull/1275, the gen_event class
mandates the implementation of two new methods.
This change aims to simplify the implementation of a generic event
processing infrastructure that can handle both sinsp and json
events.