If this works as intended, the PR will automatically get area labels depending on the files it modifies.
In case the user wants, other area labels can still be applied manually, via slash command, or by editing the PR template when opening the PR.
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
Added `libmpx2` to be installed during `apt-get install`; it is a dependency for `dpkg: libgcc-6-dev:amd64`.
Signed-off-by: Sumit Kumar <sumitsaiwal@gmail.com>
This is a temporary fix for Travis CI (which is where we use the
falco-builder docker image).
This was already done in the past (see
9285aa59c1 (diff-354f30a63fb0907d4ad57269548329e3)).
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
* Supporting files to build/test via jenkins
Changes to build/test via jenkins, which also means running all tests in
a container instead of directly on the host; a rough invocation sketch
follows the list:
- Jenkinsfile controls the stages, build.sh does the build and
run-tests.sh does the regression tests.
- Create a new container falcosecurity/falco-tester that includes the
dependencies required to run the regression tests. This is a different
image than falco-builder because it doesn't need to be centos 6 based,
doesn't install any compiler/etc, and installs the test running
framework we use (avocado). We now use a newer version of avocado,
which resulted in some small changes to how it is run and how yaml
options are parsed.
- Modify run_regression_tests.sh to download trace files to the build
directory and only if not present. Also honor BUILD_TYPE/BUILD_DIR,
which is provided via the docker run cmd.
- The package tests are now moved to a separate falco_tests_package.yaml
file. They will use rpm installs by default instead of debian
packages. Also add the ability to install rpms in addition to debian
packages.
- Automate the process of creating the docker local package by: 1)
Adding CMake rules to copy the Dockerfile, entrypoint to the build
directory and 2) Copy test trace files and rules into the build
directory. This allows running the docker build command from
build/docker/local instead of the source directory.
- Modify the way the container test is run a bit to use the trace
files/rules copied into the container directly instead of host-mounted
trace files.
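A rough sketch of how the stages end up driving the two images; the image
names come from this change, while the mount points, tags, and environment
values below are illustrative assumptions rather than the exact
Jenkinsfile/build.sh contents:
# build stage: configure, build, and package inside falco-builder
$ docker run -v /home/user/src:/source -v /home/user/build/falco:/build \
      falcosecurity/falco-builder cmake
$ docker run -v /home/user/src:/source -v /home/user/build/falco:/build \
      falcosecurity/falco-builder package
# test stage: run the avocado regression tests inside falco-tester,
# pointing it at the same build directory
$ docker run -v /home/user/src:/source -v /home/user/build/falco:/build \
      -e BUILD_TYPE=Release falcosecurity/falco-tester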
* Use container builder + tester for travis
We'll probably be using jenkins soon, but this will allow switching back
to travis later if we want.
* Use download.draios.com for binutils packages
That way we won't be dependent on snapshot.debian.org.
* Add option to display times in ISO 8601 UTC
ISO 8601 time is useful when, say, running falco in a container, which
may have a different /etc/localtime than the host system.
A new config option time_format_iso_8601 controls whether log message
and event times are displayed in ISO 8601 in UTC or in local time. The
default is false (display times in local time).
This option is passed to logger init as well as outputs. For outputs it
eventually changes the time format field from %evt.time/%jevt.time to
%evt.time.iso8601/%jevt.time.iso8601.
Adding this field changes the falco engine version so increment it.
This depends on https://github.com/draios/sysdig/pull/1317.
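As a user-facing sketch, enabling the new option amounts to flipping it in
falco.yaml (the path below is an assumption; adjust for your install):
$ sed -i 's/^time_format_iso_8601: false$/time_format_iso_8601: true/' /etc/falco/falco.yaml
With the option on, log message and event times are rendered in ISO 8601
UTC via the substituted %evt.time.iso8601/%jevt.time.iso8601 fields.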
* Unit test for ISO 8601 output
A unit test for ISO 8601 output ensures that both the log time and event
time are in ISO 8601 format.
* Use ISO 8601 output by default in containers
Now that we have an option that controls ISO 8601 output, use it by
default in containers. We do this by changing the value of
time_format_iso_8601 in falco.yaml in the container.
* Handle errors in strftime/asctime/gmtime
A placeholder "N/A" is used in log messages instead.
Related to https://github.com/falcosecurity/falco/pull/526, it turns out
that attempting to build the kernel module on the default debian-based
AMI used by kops tries to invoke gcc-6:
-----
* Setting up /usr/src links from host
* Unloading falco-probe, if present
* Running dkms install for falco
Kernel preparation unnecessary for this kernel. Skipping...
Building module:
cleaning build area...
make -j8 KERNELRELEASE=4.9.0-7-amd64 -C /lib/modules/4.9.0-7-amd64/build
M=/var/lib/dkms/falco/0.14.0/build...(bad exit status: 2)
Error! Bad return status for module build on kernel:
4.9.0-7-amd64 (x86_64)
Consult /var/lib/dkms/falco/0.14.0/build/make.log for more information.
* Running dkms build failed, dumping
/var/lib/dkms/falco/0.14.0/build/make.log
DKMS make.log for falco-0.14.0 for kernel 4.9.0-7-amd64 (x86_64)
Wed Feb 13 01:02:01 UTC 2019
make: Entering directory '/host/usr/src/linux-headers-4.9.0-7-amd64'
arch/x86/Makefile:140: CONFIG_X86_X32 enabled but no binutils support
/host/usr/src/linux-headers-4.9.0-7-common/scripts/gcc-version.sh:
line 25: gcc-6: command not found
-----
So manually add back gcc-6 and its dependencies.
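In Dockerfile terms this amounts to something like the following; the
exact package list is an assumption (libgcc-6-dev and its libmpx2
dependency are the ones called out earlier in this log):
$ apt-get update && apt-get install -y --no-install-recommends gcc-6 libgcc-6-dev libmpx2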
To allow for a more portable build environment, create a builder image
that is based on centos 6 with devtoolset-2 for a reference g++.
In that image, install all required packages and run a script that can
either run cmake or make.
The image depends on the following parameters:
FALCO_VERSION: the version to give any built packages
BUILD_TYPE: Debug or Release
BUILD_DRIVER/BPF: whether or not to build the kernel module/bpf program when
building. This should usually be OFF, as the kernel module would be
built for the files in the centos image, not the host.
BUILD_WARNINGS_AS_ERRORS: consider all build warnings fatal
MAKE_JOBS: passed to the -j argument of make
A typical way to run this builder is the following. It assumes you have
checked out falco and sysdig to directories below /home/user/src, and
want to use a build directory of /home/user/build/falco:
$ docker run --user $(id -u):$(id -g) -v /etc/passwd:/etc/passwd:ro \
      -e MAKE_JOBS=4 -it \
      -v /home/user/src:/source -v /home/user/build/falco:/build \
      falco-builder cmake
$ docker run --user $(id -u):$(id -g) -v /etc/passwd:/etc/passwd:ro \
      -e MAKE_JOBS=4 -it \
      -v /home/user/src:/source -v /home/user/build/falco:/build \
      falcosecurity/falco-builder package
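The remaining parameters are passed the same way. An illustrative (not
prescriptive) invocation, assuming the BPF switch is exposed as BUILD_BPF:
$ docker run --user $(id -u):$(id -g) -v /etc/passwd:/etc/passwd:ro -it \
      -e FALCO_VERSION=0.14.0 -e BUILD_TYPE=Release \
      -e BUILD_DRIVER=OFF -e BUILD_BPF=OFF \
      -e BUILD_WARNINGS_AS_ERRORS=ON -e MAKE_JOBS=4 \
      -v /home/user/src:/source -v /home/user/build/falco:/build \
      falcosecurity/falco-builder cmake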
gcc 5 is no longer included in debian unstable, but we need it to build
centos kernels, which are 3.x based and explicitly want a gcc version 3,
4, or 5 compiler.
So grab copies we've saved from debian snapshots with the prefix
https://snapshot.debian.org/archive/debian/20190122T000000Z. They're
stored at downloads.draios.com and installed in a dpkg -i step after the
main packages are installed, but before any other by-hand packages are
installed.
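That step looks roughly like the following; the URL path, package names,
and version strings are illustrative assumptions, not the actual filenames:
$ for pkg in gcc-5_5.5.0-12_amd64.deb cpp-5_5.5.0-12_amd64.deb libgcc-5-dev_5.5.0-12_amd64.deb; do
      curl -sLO "https://downloads.draios.com/dependencies/${pkg}"
  done
$ dpkg -i ./*.deb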
* Use correct copyright years.
Also include the start year.
* Improve copyright notices.
Use the proper start year instead of just 2018.
Add the right owner Draios dba Sysdig.
Add copyright notices to some files that were missing them.
Replace references to the GNU Public License with the Apache license in:
- COPYING file
- README
- all source code below falco
- rules files
- rules and code below test directory
- code below falco directory
- entrypoint for docker containers (but not the Dockerfiles)
I didn't generally add copyright notices to all the example files, as
they aren't core falco. If they did refer to the GPL I changed them to
Apache.
debian:unstable head contains binutils 2.31, which generates binaries
that are incompatible with kernels < 4.16.
To fix this, after installing everything, downgrade binutils to
2.30-22. This has to be done as the last step as it introduces conflicts
in other dependencies of the various gcc versions and some of the
packages already in the image.
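A sketch of that final step, assuming the pinned 2.30-22 .deb files are
fetched the same way as the other saved packages (filenames illustrative):
# done last, since it conflicts with dependencies pulled in by the gcc packages above
$ dpkg -i binutils_2.30-22_amd64.deb binutils-common_2.30-22_amd64.deb libbinutils_2.30-22_amd64.deb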
If /lib/modules exists in the base image, the symlink will get created at
/lib/modules/modules instead. This change removes any existing empty
directory but will still fail if we try to remove a non-empty
/lib/modules. (Punting on how to handle non-empty base image dirs for now.)
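A sketch of the entrypoint logic being described, assuming the host's
modules are mounted under /host (the mount point is an assumption):
# rmdir only removes an empty directory, so a non-empty /lib/modules
# in the base image still makes this step fail
if [ -d /lib/modules ]; then
    rmdir /lib/modules
fi
ln -s /host/lib/modules /lib/modules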
* Add Rule for unexpected udp traffic
New rule Unexpected UDP Traffic checks for udp traffic not on a list of
expected ports. Currently blocked on
https://github.com/draios/falco/issues/308.
* Add sendto/recvfrom in inbound/outbound macros
Expand the inbound/outbound macros to handle sendto/recvfrom events, so
they can work on unconnected udp sockets. In order to avoid a flood of
events, they also depend on fd.name_changed to only consider
sendto/recvfrom when the connection tuple changes.
Also make the check for protocol a positive check for udp instead of not tcp,
to avoid a warning about event type filters potentially appearing before
a negative condition. This makes filtering rules by event type easier.
This depends on https://github.com/draios/sysdig/pull/1052.
* Add additional restrictions for inbound/outbound
- only look for fd.name_changed on unconnected sockets.
- skip connections where both ips are 0.0.0.0 or localhost network.
- only look for successful or non-blocking actions that are in progress
* Add a combined inbound/outbound macro
Add a combined inbound/outbound macro so you don't have to do all the
other net/result related tests more than once.
* Fix evt generator for new in/outbound restrictions
The new rules skip localhost, so instead connect a udp socket to a
non-local port. That still triggers the inbound/outbound macros.
* Address FPs in regression tests
In some cases, an app may make a udp connection to an address with a
port of 0, or to an address with an application's port, before making a
tcp connection that actually sends/receives traffic. Allow these
connects.
Also, check both the server and client port and only consider the
traffic unexpected if neither port is in range.
* Refactor shell rules to avoid FPs.
Refactoring the shell related rules to avoid FPs. Instead of considering
all shells suspicious and trying to carve out exceptions for the
legitimate uses of shells, only consider shells spawned below certain
processes suspicious.
The set of processes is a collection of commonly used web servers,
databases, nosql document stores, mail programs, message queues, process
monitors, application servers, etc.
runsv is also considered a top-level process that denotes a service.
This gives more flexible servers, such as ad-hoc nodejs express apps, a
way to denote themselves as a full server process.
* Update event generator to reflect new shell rules
spawn_shell is now a silent action. Its replacement is
spawn_shell_under_httpd, which respawns itself as httpd and then runs a
shell.
db_program_spawn_binaries now runs ls instead of a shell so it only
matches db_program_spawn_process.
* Comment out old shell related rules
* Modify nodejs example to work w/ new shell rules
Start the express server using runit's runsv, which allows falco to
consider any shells run by it as suspicious.
* Use the updated argument for mkdir
In https://github.com/draios/sysdig/pull/757 the path argument for mkdir
moved to the second argument. This only became visible in the unit tests
once the trace files were updated to reflect the other shell rule
changes--the trace files had the old format.
* Update unit tests for shell rules changes
Shell in container doesn't exist any longer and its functionality has
been subsumed by run shell untrusted.
* Allow git binaries to run shells
In some cases, these are run below a service runsv so we still need
exceptions for them.
* Let consul agent spawn curl for health checks
* Don't protect tomcat
There's enough evidence of people spawning general commands that we
can't protect it.
* Reorder exceptions, add rabbitmq exception
Move the nginx exception to the main rule instead of the
protected_shell_spawner macro. Also add erl_child_setup (related to
rabbitmq) as an allowed shell spawner.
* Add additional spawn binaries
All of these are either below nginx, httpd, or runsv but should still
be allowed to spawn shells.
* Exclude shells when ancestor is a pkg mgmt binary
Skip shells when any process ancestor (parent, gparent, etc) is a
package management binary. This includes the program needrestart. This
is a deep search but should prevent a lot of other more detailed
exceptions trying to find the specific scripts run as a part of
installations.
* Skip shells related to serf
Serf is a service discovery tool and can in some cases be spawned by
apache/nginx. Also allow shells that are just checking the status of
pids via kill -0.
* Add several exclusions back
Add several exclusions back from the shell in container rule. These are
all allowed shell spawns that happen to be below
nginx/fluentd/apache/etc.
* Remove commented-out rules
This saves space and cleans things up. I haven't yet removed the
macros/lists used by these rules and not used anywhere else. I'll do
that cleanup in a separate step.
* Also exclude based on command lines
Add back the exclusions based on command lines, using the existing set
of command lines.
* Add addl exclusions for shells
Of note is runsv, which can directly run shells (the ./run and ./finish
scripts), but the things it runs cannot.
* Don't trigger on shells spawning shells
We'll detect the first shell and not any other shells it spawns.
* Allow "runc:" parents to count as a cont entrypnt
In some cases, the initial process for a container can have a parent
"runc:[0:PARENT]", so also allow those cases to count as a container
entrypoint.
* Use container_entrypoint macro
Use the container_entrypoint macro to denote entering a container and
also allow exe to be one of the processes that's the parent of an
entrypoint.
Start packaging (and building when necessary) a falco-specific kernel
module in falco releases. Previously, falco would depend on sysdig and
use its kernel module instead.
The kernel module was already templated to some degree in various
places, so we just had to change the templated name from
sysdig/sysdig-probe to falco/falco-probe.
In containers, run falco-probe-loader instead of
sysdig-probe-loader. This is actually a script in the sysdig repository
which is modified in https://github.com/draios/sysdig/pull/789, and uses
the filename to indicate what kernel module to build and/or load.
For the falco package itself, don't depend on sysdig any longer but instead
depend on dkms and its dependencies, using sysdig as a guide on the set
of required packages.
Additionally, for the package pre-install/post-install scripts start
running falco-probe-loader.
Finally, add a --version argument to falco so it can pass the desired
version string to falco-probe-loader.
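Roughly, the user-visible pieces fit together like this (how and when the
loader is invoked by the scripts is described above; the direct calls here
are just for illustration):
$ falco --version        # prints the falco version string
$ falco-probe-loader     # builds and/or loads falco-probe via dkms for the running kernel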
Small changes to improve the use of falco_event_generator with falco:
- In event_generator, some actions like exec_ls won't trigger
notifications on their own. So exclude them from -a all.
- For all actions, print details on what the action will do.
- For actions that won't result in a falco notification in containers,
note that in the output.
- The short version of --once wasn't working, fix the getopt.
- Explicitly saying -a all wasn't working, fix.
- Don't rely on an external ruleset in the nodejs docker-compose
demo--the built in rules are sufficient now.
This helps when running on a system which has the module loaded, but getting
access to the module file is hard for some reason. Since I know that the right
version of the module is loaded I just want falco to connect.
I tested this with this run command:
docker run -e SYSDIG_SKIP_LOAD=1 -it -v /dev:/host/dev -v /proc:/host/proc --privileged falco
And it successfully connected to Sysdig and started printing out warnings for my
system.
falco-CLA-1.0-signed-off-by: Carl Sverre accounts@carlsverre.com
Add jq to the docker image containing falco. jq is very handy for
transforming json, which comes into play if you want to post to
slack (or other) webhooks.
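For example, a minimal sketch of such a pipeline, assuming falco is
configured for JSON output (one alert object per line on stdout) and a
webhook URL exported as SLACK_WEBHOOK_URL; both are assumptions:
$ falco | jq --unbuffered -c '{text: .output}' |
      while read -r payload; do
          curl -s -X POST -H 'Content-Type: application/json' -d "$payload" "$SLACK_WEBHOOK_URL"
      done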
Add an exfiltration action that reads /etc/shadow and sends the contents
to an arbitrary ip address and port via a udp datagram.
Add the ability to specify actions via the environment instead of the
command line. If actions are specified via the environment, they replace
any actions specified on the command line.
C++ program that performs bad activities related to the current falco
ruleset. There are configurable actions for almost all of the current
ruleset, via the --action argument.
By default runs in a loop forever. Can be overridden via --once.
Also add a Dockerfile that compiles event_generator.cpp within an alpine
linux image and copies it to /usr/local/bin. This image has been pushed
to docker hub as "sysdig/falco-event-generator:latest".
Add a Makefile that runs the right docker build command.
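For example, assuming the image's entrypoint is the event generator binary
(an assumption; otherwise invoke the binary explicitly):
$ docker run --rm sysdig/falco-event-generator:latest --once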
Instead of running bash as the sysdig container does, run falco. This
makes sense as falco doesn't have a general purpose use like sysdig
does.
To make it easier to run both in docker and as a daemon using the
default command line, enable both syslog and stdout/stderr output by
default. Now that falco dups stdout/stderr to /dev/null when
daemonizing, the stdout/stderr is just thrown away. And when running in
docker, the syslog output will just be discarded unless someone plumbs
the container's syslog output.
Update README.md to reflect that specifying the falco command is not
necessary.
Based on the Dockerfiles from the sysdig repository. The only change
from the sysdig versions is to use environment variable FALCO_REPOSITORY
and to install falco instead of sysdig.
Note that the entrypoint still uses sysdig-probe-loader and
SYSDIG_HOST_ROOT, as it's building the kernel module for sysdig.
I verified I could create and run an image using the dev version using
"docker build ." from docker/dev, and run it using:
docker run -i -t --name falco --privileged \
    -v /var/run/docker.sock:/host/var/run/docker.sock \
    -v /dev:/host/dev -v /proc:/host/proc:ro -v /boot:/host/boot:ro \
    -v /lib/modules:/host/lib/modules:ro -v /usr:/host/usr:ro \
    sysdig/falco falco -r /etc/falco_rules.conf
I still need to update jenkins to create a release build.