Fix broken outputs.proto link: it previously pointed to a nonexistent
branch, and now points to the master branch.
Signed-off-by: deepskyblue86 <angelopuglisi86@gmail.com>
Please note that the `HOME` env variable has been added for consistency with the main docker image.
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
The BUILD_BYPRODUCTS declaration for the civetweb target
is needed so that, when Falco is built using Ninja,
the falco target knows which target builds the civetweb lib
and builds it automatically, without having to run
`ninja civetweb` first.
Signed-off-by: Lorenzo Fontana <lo@linux.com>
Attempting to start falco on a host that had a similarly named module
(e.g., "falcon") would cause the falco-driver-loader to loop attempting
to rmmod falco when falco was not loaded.
falco-driver-loader will now inspect only the first column of lsmod
output and require the whole search string to match
Fixes #1468
Signed-off-by: Dominic Evans <dominic.evans@uk.ibm.com>
Besides all the other improvements, we are really interested
in getting the Make options right for ISAs other than x86_64 when
compiling abseil [0].
This is what happens on aarch64:
```
make[4]: *** [Makefile:2968: /root/falco/build-musl/grpc-prefix/src/grpc/objs/opt/third_party/abseil-cpp/absl/base/internal/thread_identity.o] Error 1
c++: error: unrecognized command line option '-maes'
c++: error: unrecognized command line option '-msse4'
c++: error: unrecognized command line option '-msse4'
c++: error: unrecognized command line option '-maes'
```
[0] bf87ec9e44
Signed-off-by: Lorenzo Fontana <fontanalorenz@gmail.com>
This is needed because LuaJIT does not support many architectures,
such as aarch64 and ppc64le.
Note: some operating systems, such as Alpine, already use moonjit as a drop-in
replacement for luajit.
Signed-off-by: Lorenzo Fontana <fontanalorenz@gmail.com>
`multipath`, which is run by `systemd-udevd`, writes to
`/etc/multipath/wwids`, `/etc/multipath/bindings` and a few other paths
under `/etc/multipath` as part of its normal operation.
Signed-off-by: Nicolas Marier <nmarier@coveo.com>
Use the right list name in the rule Full K8s Administrative Access: it
was using the nonexistent list admin_k8s_users, so it was just matching
the literal string "admin_k8s_users".
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Previously, formatters were freed by Lua code when re-opening outputs.
Since outputs no longer control the falco_formats class (see #1412), we now free formatters only if they were already initialized.
That is needed when the engine restarts (see #1446).
By doing so, we also ensure that the correct inspector instance is set in the formatter cache.
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
Like other rules that rely on a process name for exceptions, don't
trigger an event if the process name is missing (e.g. "<NA>").
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Also ignore docker programs, which prevents cases where the path is
expressed within the container filesystem (/.bash_history) rather than the
host filesystem (/var/lib/docker/overlay/.../.bash_history).
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
This will be used by the static build to load lua files from
alternate directories that are not tied to the compile flags
Signed-off-by: Radu Andries <radu.andries@sysdig.com>
It turns out that if you read this rules file with falco versions 0.24.0 and
earlier, it can't parse the bare string containing colons
(ignore the misleading error context; that's a different problem):
```
Thu Sep 10 10:31:23 2020: Falco initialized with configuration file
/etc/falco/falco.yaml
Thu Sep 10 10:31:23 2020: Loading rules from file
/tmp/k8s_audit_rules.yaml:
Thu Sep 10 10:31:23 2020: Runtime error: found unexpected ':'
---
source: k8s_audit
tags: [k8s]
# In a local/user rules file, you could override this macro to
```
I think the change in 0.25.0 to use a bundled libyaml fixed the problem,
as it also upgraded libyaml to a version that fixed
https://github.com/yaml/libyaml/pull/104.
Work around the problem with earlier falco releases by quoting the colon.
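For illustration (list name and values are placeholders), the failing and working forms look like:
```
# Fails to parse on falco <= 0.24.0 (bare colon inside a flow sequence):
#   items: [system:kube-controller]
# Parses on all versions once the colon-bearing value is quoted:
- list: example_users
  items: ["system:kube-controller"]
```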
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
In some cases, when removing a container, dockerd will itself remove the
entire overlay filesystem, including a shell history file:
---
Shell history had been deleted or renamed (user=root type=unlinkat
command=dockerd -H fd://
... name=/var/lib/docker/overlay2/.../root/.bash_history ..
---
To avoid these FPs, skip paths starting with /var/lib/docker.
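A hedged sketch of such an exception (macro name and fields may differ from the shipped rule):
```
- macro: var_lib_docker_filepath
  condition: (evt.arg.name startswith /var/lib/docker or fd.name startswith /var/lib/docker)
```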
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
The falco-driver-loader script calls dkms to compile the kernel
module using the default gcc.
In some systems, and in the falcosecurity/falco container image,
the default gcc is not the right one to compile it.
The script will now try to compile the module by cycling through all the
available GCCs, starting from the default one, until the module compiles
for the first time.
The default gcc has the highest priority while trying;
newer GCCs take priority over older GCCs.
Co-Authored-By: Leonardo Di Donato <leodidonato@gmail.com>
Signed-off-by: Lorenzo Fontana <fontanalorenz@gmail.com>
Start versioning trace files with a unique date. Any time we need to
create new trace files, change TRACE_FILES_VERSION in this script and
copy to traces-{positive,negative,info}-<VERSION>.zip.
The zip file should unzip to traces-{positive,negative,info}, without
any version.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Add system:managed-certificate-controller as a system role that can be
modified. Can be changed as a part of upgrades.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Add several images seen in GKE environments that can run in the
kube-system namespace.
Also change the names of the lists to be more specific. The old names
are kept around for backwards compatibility.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Add a set of images known to run in the host network. Mostly related to
GKE, sometimes plus metrics collection.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Sort the items in the list falco_privileged_images alphabetically
and also separate them into individual lines. This makes it easier to
track changes to the entries in the list using git blame.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Previously, any write to a file called sources.list would match the
access_repositories condition, even a file /usr/tmp/..../sources.list.
Change the macro so that files in repository_files must be somewhere
below one of repository_directories.
Also allow programs spawned by package management programs to change
these files, using package_mgmt_ancestor_procs.
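A hedged sketch of the tightened condition, reusing the names above (operators and layout are illustrative, not the verbatim macro):
```
- macro: access_repositories
  condition: >
    (open_write and fd.name pmatch (repository_directories)
     and fd.filename in (repository_files))
```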
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Let programs spawned by linux-bench (CIS Linux Benchmark program) read
/etc/shadow. Tests in the benchmark check for permissions of the file
and accounts in the contents of the file.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
In some cases, dropped events around the time a new container is started
can result in missing the exec/clone for a process that does a setns to
enter the namespace of a container. Here's an example from an oss
capture:
```
282273 09:01:22.098095673 30 runc:[0:PARENT] (168555) < setns res=0
282283 09:01:22.098138869 30 runc:[0:PARENT] (168555) < setns res=0
282295 09:01:22.098179685 30 runc:[0:PARENT] (168555) < setns res=0
517284 09:01:30.128723777 13 <NA> (168909) < setns res=0
517337 09:01:30.129054963 13 <NA> (168909) < setns res=0
517451 09:01:30.129560037 2 <NA> (168890) < setns res=0
524597 09:01:30.162741004 19 <NA> (168890) < setns res=0
527433 09:01:30.179786170 18 runc:[0:PARENT] (168927) < setns res=0
527448 09:01:30.179852428 18 runc:[0:PARENT] (168927) < setns res=0
535566 09:01:30.232420372 25 nsenter (168938) < setns res=0
537412 09:01:30.246200357 0 nsenter (168941) < setns res=0
554163 09:01:30.347158783 17 nsenter (168950) < setns res=0
659908 09:01:31.064622960 12 runc:[0:PARENT] (169023) < setns res=0
659919 09:01:31.064665759 12 runc:[0:PARENT] (169023) < setns res=0
732062 09:01:31.608297074 4 nsenter (169055) < setns res=0
812985 09:01:32.217527319 6 runc:[0:PARENT] (169077) < setns res=0
812991 09:01:32.217579396 6 runc:[0:PARENT] (169077) < setns res=0
813000 09:01:32.217632211 6 runc:[0:PARENT] (169077) < setns res=0
```
When this happens, it can cause false positives for the "Change thread
namespace" rule, whose exceptions allow certain process names like "runc",
"containerd", etc. to perform setns calls; a missing name never matches them.
Other rules already use the proc_name_exists macro to require that the
process name exists. This change adds proc_name_exists to the Change
Thread Namespace rule as well.
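For reference, a minimal sketch of the helper and where it lands (the shipped rule carries many more exceptions):
```
- macro: proc_name_exists
  condition: (proc.name != "<NA>")

# condition fragment for the rule:
#   evt.type = setns and proc_name_exists and not proc.name in (...)
```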
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
This update will provide information as to which process uid initiated the event. This is really important for processes that are started
by a different user name.
Signed-off-by: Chuck Schweizer <chuck.schweizer.lvk2@statefarm.com>
This happens because the file descriptor paths have been fixed
in this commit [0].
However, the scap file fixtures we have for the tests still contain
the old paths, causing this problem.
We are commenting out those tests and opening an issue to get this fixed
later.
[0] 37aab8debf
Co-Authored-By: Leonardo Di Donato <leodidonato@gmail.com>
Signed-off-by: Lorenzo Fontana <fontanalorenz@gmail.com>
This change was needed because gRPC was using some internal classes
to do vector operations in 0.25.0.
Those operations were leading to SIGSEGV under certain operating
systems, like Ubuntu 18.04.
In 0.27.0 they swapped their internal libraries with abseil-cpp.
I tested this and our gRPC server works very well with this new version,
as does the CRI API.
I didn't go to 0.31.0 yet because it's very different now and it will
require more iterations to get there, specifically on the CRI API code.
Signed-off-by: Lorenzo Fontana <fontanalorenz@gmail.com>
In their readme, jq claims that you don't have
to do autoreconf -fi when downloading a released tarball.
However, they forgot to push the released makefiles
into their release tarball.
For this reason, we have to mirror their release after
doing the configuration ourselves.
This is needed because many distros do not ship the right
version of autoreconf, making it virtually impossible to build
Falco on them.
Here is how it was created:
git clone https://github.com/stedolan/jq.git
cd jq
git checkout tags/jq-1.6
git submodule update --init
autoreconf -fi
Signed-off-by: Lorenzo Fontana <fontanalorenz@gmail.com>
dockerd and docker have a "-current" suffix on CentOS and RHEL. This
macro did not match them, causing false positives on multiple rules
using it.
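A hedged sketch of the widened match (macro name is illustrative; see the diff for the real one):
```
- macro: docker_procs
  condition: proc.name in (docker, dockerd, docker-current, dockerd-current)
```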
Signed-off-by: Radu Andries <radu@sysdig.com>
Add 'docker.io/falcosecurity/falco' image to 'falco_privileged_images' macro. This prevents messages like this when booting up falco:
```
Warning Pod started with privileged container (user=system:serviceaccount:kube-system:daemon-set-controller pod=falco-42brw ns=monitoring images=docker.io/falcosecurity/falco:0.24.0)
```
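Illustrative fragment of the list change (existing entries elided):
```
- list: falco_privileged_images
  # ...existing entries retained...
  items: [docker.io/falcosecurity/falco]
```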
Signed-off-by: Nicolas Vanheuverzwijn <nicolas.vanheu@gmail.com>
The following options have been added:
* -v (verbose)
* -p (prepare falco_traces test suite)
* -b (specify custom branch for downloading trace files)
* -d (specify the build directory)
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
This make target calls the `trace-files-psp`, `trace-files-k8s-audit`,
`trace-files-base-scap` targets to place all the integration test
fixtures in the proper position.
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
Do it only when userspace instrumentation is not enabled and
the syscall input source is enabled (!disable_syscall)
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
This driver version, among other things (like userspace instrumentation
support) includes a fix for building the eBPF driver on CentOS 8
machines too.
Co-authored-by: Lorenzo Fontana <lo@linux.com>
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
means "auto")
The 0 ("auto") value sets the threadiness to the number of online cores
automatically.
Co-authored-by: Lorenzo Fontana <lo@linux.com>
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
Removed nonexistent labels and made the error message a bit more
verbose to tell people what to expect next.
Signed-off-by: Lorenzo Fontana <lo@linux.com>
GitLab is now using Falco to provide Container Host Security protection
Co-Authored-By: Kris Nova <kris@nivenly.com>
Signed-off-by: Kris Nova <kris@nivenly.com>
kops 1.17 adds a kube-apiserver-healthcheck user: https://github.com/kubernetes/kops/tree/master/cmd/kube-apiserver-healthcheck
Logs are currently spammed with:
```
{"output":"18:02:15.466580992: Warning K8s Operation performed by user not in allowed list of users (user=kube-apiserver-healthcheck target=<NA>/<NA> verb=get uri=/healthz resp=200)","priority":"Warning","rule":"Disallowed K8s User","time":"2020-06-29T18:02:15.466580992Z", "output_fields": {"jevt.time":"18:02:15.466580992","ka.response.code":"200","ka.target.name":"<NA>","ka.target.resource":"<NA>","ka.uri":"/healthz","ka.user.name":"kube-apiserver-healthcheck","ka.verb":"get"}}
```
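One hedged way to quiet this, assuming the allowed_k8s_users list from the shipped k8s audit ruleset (other entries elided):
```
- list: allowed_k8s_users
  items: ["minikube-user", "kubelet", "kops", "kube-apiserver-healthcheck"]
```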
Signed-off-by: Antoine Deschênes <antoine.deschenes@equisoft.com>
These application binaries raise events in the `Change thread namespace`
rule as part of their normal operation.
Here are more details regarding each binary:
- `protokube` : See [this](https://github.com/kubernetes/kops/tree/master/protokube)
- `dockerd` : The `dockerd` process name is whitelisted already in this
rule, but not if it is the parent, which will happen if you are doing
docker-in-docker.
- `tini` : See [this](https://github.com/krallin/tini)
- `aws` : This one I noticed because Falco itself uses the AWS CLI to
send events to SNS, which was triggering this rule.
Signed-off-by: Nicolas Marier <nmarier@coveo.com>
While using Falco, I noticed we were getting many events that were
virtually identical to those that were previously filtered out by the
`exe_running_docker_save` macro, but where the `cmdline` was something
like `exe /var/run/docker/netns/cc5c7b9bb110 all false`. I believe this
is caused by the use of docker-in-docker.
Signed-off-by: Nicolas Marier <nmarier@coveo.com>
A macro like this is useful because configuration management software
may need to run containers with an attached terminal to perform some of
its duties, and users may want to ignore this behavior.
Signed-off-by: Nicolas Marier <nmarier@coveo.com>
This macro is useful to allow binaries to be installed under certain
circumstances. For example, it may be fine to install a binary during a
build in a ci/cd pipeline.
Signed-off-by: Nicolas Marier <nmarier@coveo.com>
What this PR does / why we need it:
Updating ADOPTERS.md with new adopter details.
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
re-issuing the PR from #1235 (due to change of owner, per request by @leogr)
Does this PR introduce a user-facing change?:
NONE
/assign @leogr
Signed-off-by: Dotan Horovits <dotan.horovits@gmail.com>
Since `evt.arg[1]` does not work for all syscalls, switch to:
- `evt.arg.path` for `rmdir` and `unlink` (used by `remove` macro)
- `evt.arg.name` for `unlinkat` (used by `remove` macro)
- `evt.arg.oldpath/newpath` for `rename` and `renameat` (used by `rename` macro)
That ensures `Modify binary dirs` works properly.
Note that we cannot yet use `renameat2` (not supported by sinsp, see https://github.com/draios/sysdig/issues/1603 )
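A condensed, hypothetical sketch of which field carries the path for each syscall (the shipped macros are factored differently):
```
- macro: bin_dir_removed_or_renamed
  condition: >
    ((evt.type in (rmdir, unlink) and evt.arg.path startswith /bin/) or
     (evt.type = unlinkat and evt.arg.name startswith /bin/) or
     (evt.type in (rename, renameat) and
      (evt.arg.oldpath startswith /bin/ or evt.arg.newpath startswith /bin/)))
```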
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
Since the dir's path is found:
- in `evt.arg[1]` for `mkdir`
- but in `evt.arg[2]` for `mkdirat`
switch to `evt.arg.path` to catch both.
That ensures `Mkdir binary dirs` works properly.
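A hedged sketch (directory list trimmed; the shipped macro covers more dirs):
```
- macro: bin_dir_mkdir
  condition: >
    (evt.arg.path startswith /bin/ or evt.arg.path startswith /sbin/ or
     evt.arg.path startswith /usr/bin/ or evt.arg.path startswith /usr/sbin/)
```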
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
This macro will be useful because it will make it possible to filter out
events with a higher degree of granularity than is currently possible
for the `Set Setuid or Setgid bit` rule.
For example, if some application is expected to set the setuid or the
setgid bit under a specific condition, like if it's started with a
specific command, then the `user_known_chmod_applications` list is not
enough because we don't want to filter out _all_ events by this
application, only specific ones. This macro allows that.
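For example, a user could override the new macro in a local rules file along these lines (macro name and condition are hypothetical):
```
- macro: user_known_set_setuid_or_setgid_bit_conditions
  condition: (proc.cmdline startswith "myinstaller --setcap")
```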
Signed-off-by: Nicolas Marier <nmarier@coveo.com>
The CMake module downloads `string-view-lite` from
https://github.com/martinmoene/string-view-lite
It is a single-file header-only version of C++17-like `string_view` for
C++98, C++03, C++11, and later.
Notice that it also provides C++20 extensions like:
- empty()
- starts_with()
- ends_with()
- etc.
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler
Example alert:
---
K8s Operation performed by user not in allowed list of
users (user=vpa-recommender target=vpa-recommender/endpoints verb=update
uri=core/v1/namespaces/kube-system/endpoints/vpa-recommender resp=200)
K8s Operation performed by user not in allowed list of
users (user=vpa-updater target=vpa-updater/endpoints verb=update
uri=core/v1/namespaces/kube-system/endpoints/vpa-updater resp=200)
---
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Example event. I'm pretty sure the full file in this case is /etc/lvm/cache:
---
File below /etc opened for writing (user=root command=lvs --noheadings
--readonly --separator=";" -a -o
lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size parent=ceph-volume
pcmdline=ceph-volume /usr/sbin/ceph-volume inventory --format json file=/etc/lvm/c...
---
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
"The Azure's NPM is a a daemonset that supports network policies as
defined by the Kubernetes policy specification."
Example event:
---
Log files were tampered (user=root command=azure-npm
file=/var/log/iptables.conf CID1 image=mcr.microsoft.com/containernetworking/azure-npm)
---
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
- Highlights scope of Falco
- Highlights subprojects and groups evolution
- Defines build artifacts
- Defines artifact naming convention
- Dictates that we take action to make these changes happen
Signed-off-by: Kris Nova <kris@nivenly.com>
A new unit test file test_rulesets adds tests for the following:
- enabling/disabling rules based on substrings
- enabling/disabling rules based on exact matches
- enabling/disabling rules based on tags
There are variants that test for default and non-default rulesets.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Currently, when calling enable_rule, the provided rule name pattern is a
substring match; that is, if the rules file has a rule "My fantastic
rule" and you call engine->enable_rule("fantastic", true), the rule
will be enabled.
This can cause problems if one rule name is a complete subset of another
rule name e.g. rules "My rule" and "My rule is great", and calling
engine->enable_rule("My rule", true).
To allow for this case, add an alternate method enable_rule_exact() in
both default ruleset and ruleset variants. In this case, the rule name
must be an exact match.
In the underlying ruleset code, add a "match_exact" option to
falco_ruleset::enable() that denotes whether the match is exact
or substring.
This doesn't change the default behavior of falco in any way, as the
existing calls still use enable_rule().
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
This driver version contains a fix for kernels < 3.17
Co-authored-by: Lorenzo Fontana <lo@linux.com>
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
DBG stands for Drivers Build Grid, a repository holding a set of
prebuilt drivers (both Falco kernel modules and Falco eBPF probes).
Co-authored-by: Lorenzo Fontana <lo@linux.com>
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
eBPF probes coming from the drivers build grid
Co-authored-by: Lorenzo Fontana <lo@linux.com>
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
The new Falco kernel modules URLs are:
`<base_url>/kernel-module/<driver_version>/falco_<target_id>_<kernel_release>_<kernel_version>`
Co-authored-by: Lorenzo Fontana <lo@linux.com>
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
Instead of using the request object to identify service account tokens,
exclude any secrets activity by system users (e.g. users starting with
"system:"). This allows the rules to work on k8s audit events at
Metadata level instead of RequestResponse level.
Also change the example objects for automated tests to ones collected at
Metadata level.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Add tests to verify the new rules for creating/deleting secrets. New trace
files for creating a secret/deleting a secret, and test cases that
verify that the rules trigger. Two additional test cases/trace files
track creating a service account token secret/kube-system secret and
ensure that the rules do *not* trigger.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
New rules K8s Secret Created/K8s Secret Deleted detect creating/deleting
secrets, following the pattern of the other "K8s XXX Created/Deleted"
rules. One minor difference is that service account token secrets are
excluded, as those are created automatically as namespaces are created.
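A hedged sketch of one of the rules, following the pattern described above (macro and field names are illustrative):
```
- rule: K8s Secret Created
  desc: Detect any attempt to create a secret (service account token secrets excluded).
  condition: kevt and secret and kcreate and response_successful
  output: K8s Secret Created (user=%ka.user.name secret=%ka.target.name ns=%ka.target.namespace)
  priority: INFO
  source: k8s_audit
  tags: [k8s]
```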
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Replace rbac.authorization.k8s.io/v1beta1 with rbac.authorization.k8s.io/v1, as per the changelog.
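The header change, for reference:
```
apiVersion: rbac.authorization.k8s.io/v1   # was: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
```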
Signed-off-by: maxgio92 <massimiliano.giovagnoli.1992@gmail.com>
Replace extensions/v1beta1 with the 1.16-supported apps/v1 version, as per the release announcement.
BREAKING CHANGE: spec.rollbackTo is removed, spec.selector is now required and immutable after
creation, spec.progressDeadlineSeconds now defaults to 600 seconds, spec.revisionHistoryLimit now
defaults to 10, maxSurge and maxUnavailable now default to 25%
issue #1043
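A fragment of the migrated manifest (metadata and pod template omitted; the label is a placeholder):
```
apiVersion: apps/v1          # was: extensions/v1beta1
kind: Deployment
spec:
  selector:                  # now required and immutable after creation
    matchLabels:
      app: example
```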
Signed-off-by: maxgio92 <massimiliano.giovagnoli.1992@gmail.com>
The libsinsp cri interface prepends (at runtime) the `HOST_ROOT` prefix.
Thus, even if the CRI socket has been mounted on
`/host/var/run/containerd/containerd.sock`, the correct `--cri` flag
value is `/var/run/containerd/containerd.sock`.
Co-authored-by: Lorenzo Fontana <lo@linux.com>
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
It's useful to ignore some system binaries that use the network under
certain conditions, so this should be overridable by the user.
Signed-off-by: Nicolas Marier <nmarier@coveo.com>
This makes it more convenient to add more allowed procs and many other
rules have a similar mechanism to whitelist certain processes.
Signed-off-by: Nicolas Marier <nmarier@coveo.com>
The HOST_ROOT environment variable was incorrectly detected when
deploying Falco inside a container.
Co-Authored-By: Leonardo Di Donato <leodidonato@gmail.com>
Signed-off-by: Lorenzo Fontana <lo@linux.com>
This avoids the `FALCO_VERSION` variable being equal to `latest` while
`falco --version` correctly returns 0.21.0.
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
It may be necessary to override a Falco version package update when
the release process stopped for causes not depending on Falco itself.
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
It can happen that the bintray API is unresponsive. In this case, we may
need to re-run the CI job manually and be able to not be blocked by
already created versions for a given git tag.
Same for _development_ releases (from master).
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
Add a deployment yaml that allows running the event generator in a k8s
cluster:
- Change the event generator to create/delete objects in a namespace
"falco-eg-sandbox" instead of "falco-event-generator". That way you
separate the generator from the resources it modifies (mostly, the
exception being the rolebinding).
- Create a serviceaccount, clusterrole, and rolebinding that allows the
event generator to create/list/delete objects in the falco-eg-sandbox
namespace. The list of permissions is fairly broad mostly so the
event generator can delete all resources without explicitly naming
them. The binding does limit permissions to the falco-eg-sandbox
namespace, though.
A one-line way to run this would be:
kubectl create namespace falco-event-generator && \
kubectl create namespace falco-eg-sandbox && \
kubectl apply -f event-generator-role-rolebinding-serviceaccount.yaml && \
kubectl apply -f event-generator-k8saudit-deployment.yaml
I haven't actually pushed a new docker image to replace the current
event generator yet; the deployment yaml refers to a placeholder
falcosecurity/falco-event-generator:eg-sandbox image. Once the review is
done I'll rebase this to change the image to latest before merging.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Add a Daemonset yaml that allows running the falco event generator on
syscalls. It will run on any non-master node.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
The driver version was also set up in the wrong cmake file.
Co-Authored-By: Leonardo Di Donato <leodidonato@gmail.com>
Signed-off-by: Lorenzo Fontana <lo@linux.com>
Using the VERSION_BUCKET build argument at docker build time, users can now choose which Falco version to build the images from.
Co-authored-by: Lorenzo Fontana <lo@linux.com>
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
- Updating stable image to pull from `debian:stable`
- Updating maintainer label in all Dockerfiles to include `LABEL maintainer="cncf-falco-dev@lists.cncf.io"`
Signed-off-by: Kris Nova <kris@nivenly.com>
Also, it seemed that any value of -DSYSDIG_VERSION
failed to propagate from the first cmake to the second cmake.
Signed-off-by: Anders F Björklund <anders.f.bjorklund@gmail.com>
This updates the `CONTRIBUTING.md` in order to require including `"banned.h"` in
every cpp file; it invalidates certain functions, hence, banned.
Fixes #1035
Signed-off-by: Vaibhav <vrongmeal@gmail.com>
BAN_ALTERNATIVE is the same as BAN, but the message also provides an
alternative function that the user could use instead of the banned function.
Fixes #1035
Signed-off-by: Vaibhav <vrongmeal@gmail.com>
These include:
* vsprintf()
* sprintf()
* strcat()
* strncat()
* strncpy()
* swprintf()
* vswprintf()
This also changes `userspace/falco/logger.cpp` to remove a `sprintf`
statement. The statement did not affect the codebase in any form so
it was simply removed rather than being substituted.
Fixes #1035
Signed-off-by: Vaibhav <vrongmeal@gmail.com>
This defines certain functions as invalid tokens, i.e., when
compiled, the compiler throws an error.
Currently only `strcpy` is included as a banned function.
Fixes #788
Signed-off-by: Vaibhav <vrongmeal@gmail.com>
Sample Falco alert:
```
Shell spawned by untrusted binary (user=git shell=sh parent=puma reactor
cmdline=sh -c pgrep -fl "unicorn.* worker\[.*?\]" pcmdline=puma reactor
gparent=puma ggparent=runsv aname[4]=ru...
```
https://github.com/puma/puma says it is "A Ruby/Rack web server built
for concurrency".
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Sample falco alert:
```
File below /etc opened for writing (user=root command=cp
/run/secrets/kubernetes.io/serviceaccount/ca.crt
/etc/pki/ca-trust/source/anchors/openshift-ca.crt parent=bash
pcmdline=bash -c #!/bin/bash\nset -euo pipefail\n\n# set by the node
image\nunset KUB...
```
The exception is conditioned on containers.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Some targets are inherited: we can run `make sinsp` and `make scap` from the falco build directory too
Co-authored-by: Lorenzo Fontana <lo@linux.com>
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
Currently, the falco event generator only generates system call
activity. This adds support for k8s_audit events by adding a script +
supporting k8s object files that generate activity that matches the k8s
audit event ruleset.
The main script is k8s_event_generator.sh, which loops over the files in
the yaml subdirectory, running kubectl apply -f for each.
In the interests of keeping things self-contained, all objects are
created in a `falco-event-generator` namespace. This means that some
activity related with cluster roles/cluster role bindings is not
performed.
Each k8s object has annotations that note:
1. The specific falco rules that should trigger.
2. A user-friendly message to print when apply-ing the file.
You can provide a specific rule name to the script. If provided, only
those objects related to that rule are created, so only that rule
triggers. The default is "all", meaning that all objects are created.
The script loops forever, deleting the falco-event-generator namespace
after each iteration.
Additionally, the docker image has been updated to also copy the script
+ supporting files, as well as fetching the latest available `kubectl`
binary. The entrypoint is now a script that allows choosing between:
- syscall activity: run with .... "syscall"
- k8s_audit activity: run with .... "k8s_audit"
- spawn a shell: run with .... "bash"
The default is "syscall" to preserve existing behavior.
In most cases, you'll need to provide kube config
files/directories that allow access to your cluster. A
command like the following will work:
```
docker run -v $HOME/.kube:/root/.kube -it falcosecurity/falco-event-generator
k8s_audit
```
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Callers aren't expected to catch exceptions and instead rely on the
bool return value to indicate whether or not the parsing was successful.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Currently, the json object POSTed to the /k8s_audit endpoint is assumed
to be an object, with a "type" of either "Event" or "EventList". When the
K8s API Server POSTs events, it aggregates them into an EventList,
ensuring that there is always a single object.
However, we're going to add some intermediate tools that tail log files
and send them to the endpoint, and the easiest way to send a batch of
events is to pass them as a json array instead of a single object.
To properly handle this, modify parse_k8s_audit_event_json to also
handle a json array. For arrays, it iterates over the objects, calling
parse_k8s_audit_json recursively. It only iterates over the initial top-level
array, to avoid excessive recursion/attacks involving degenerate
json objects with excessively nested arrays.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
With cmake FALCO_Coverage=on, the --coverage option
is passed to both clang and gcc to help analyze untested
portions of the code base. It produces gcov files.
These files can be analyzed by many tools, such as lcov,
gcovr, etc.
Here is an example of one such tool, lcov:
lcov --directory . --capture --output-file coverage.info
lcov --extract coverage.info '/source/*' --output-file coverage.info
genhtml coverage.info
Signed-off-by: Chris Goller <goller@gmail.com>
* Use the user_known_package_manager_in_container_conditions macro in the "Launch Package Management Process in Container" rule
Signed-off-by: Jean-Philippe Lachance <jplachance@coveo.com>
In all extraction functions, always catch json type errors alongside
json out of range errors. Both cases result in not extracting any value
from the event.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
The rule detects the execution of the k8s client tool in a container and
logs it with WARNING priority.
Signed-off-by: David de Torres <detorres.david@gmail.com>
First of a handful of PRs to start clarifying the independence of Falco
I don't see any breaking changes here, just cosmetic changes.
Signed-off-by: Kris Nova <kris@nivenly.com>
The call to rule_loader.load_rules only returns 2 values, so only pop
two values from the stack. This fixes #906.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Use falcoctl, which properly handles psp names containing
spaces/dashes. Also add tests that verify that the resulting rules are
valid.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Use the changes in https://github.com/falcosecurity/falcoctl/pull/25
that make sure rules, macros, lists, and rule names all have a unique
prefix. In this case the prefix is based on the psp name, so make sure
the psp name actually reflects what it does: there were a few
cut-and-paste carryovers.
This test assumes that falcoctl will be tagged/released as 0.0.3; the
tests won't pass until the falcoctl PR is merged and there's a release.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Add tests that verify that this falco is backwards compatible with the
v4 k8s audit rules file. It includes tests for:
- checking images by repository/image:
ka.req.container.image/ka.req.container.image.repository
- checking privileged status of any container in a pod:
ka.req.container.privileged
- checking host_network: ka.req.container.host_network
The tests were copied from the v5 versions of the tests, when necessary
adding back v4-compatible versions of macros like
allowed_k8s_containers.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
As a part of the changes in
https://github.com/falcosecurity/falco/pull/826/, we added several
breaking changes to rules files like renaming/removing some filter
fields. This isn't ideal for customers who are using their own rules
files.
We shouldn't break older rules files in this way, so add some minimal
backwards compatibility which adds back the fields that were
removed *and* actually used in k8s_audit_rules.yaml. They have the same
functionality as before. One exception is
ka.req.binding.subject.has_name, which was only used in a single output
field for debugging and shouldn't have been in the rules file in the
first place. This always returns the string "N/A".
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Instead of using a psp_conv binary built in the falco build, download
falcoctl 0.0.2 and use its "falcoctl convert psp" subcommand to perform
the conversion.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Add ~74 new automated tests that verify K8s PSP Support.
For each PSP attribute, add both positive and negative test cases. For
some of the more complicated attributes like runAsUser/Group/etc,
include cases where the uids are specified both at the container
security context level and pod security context level and then combined
with mayRunAs/mustRunAs, etc.
Also, some existing tests are updated to handle proper use of "in" and
"intersects" in expressions.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Support the notion of a message for all fields in a single class, and
make sure it's wrapped like the other fields.
This is used to display a single message about how indexing works for
ka.* filter fields and what IDX_ALLOWED/IDX_NUMERIC/IDX_KEY means,
rather than repeating the same text over and over in every field.
The wrapping is handled by a function falco::utils::wrap_text.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Refactor how JSON event/k8s audit events extract values in two important
ways:
1. An event can now extract multiple values.
2. The extracted value is a class json_event_value instead of a simple
string.
The driver for 1. was that some filtercheck fields like
"ka.req.container.privileged" actually should extract multiple values,
as a pod can have multiple containers and it doesn't make sense to
summarize that down to a single value.
The driver for 2. is that by having an object represent a single
extracted value, you can also hold things like numbers e.g. ports, uids,
gids, etc. and ranges e.g. [0:3]. With an object, you can override
operators ==, <, etc. to do comparisons between the numbers and ranges,
or even set membership tests between extracted numbers and sets of
ranges.
This is really handy for a lot of new fields implemented as a part of
PSP support, where you end up having to check for overlaps between the
paths, images, ports, uids, etc in a K8s Audit Event and the acceptable
values, ranges, path prefixes enumerated in a PSP.
Implementing these changes also involved an overhaul of how aliases are
implemented. Instead of having an optional "formatting" function, where
arguments to the formatting function were expressed as text within the
index, define optional extraction and indexing functions. If an
extraction function is defined, it's responsible for taking the full
json object and calling add_extracted_value() to add values. There's a
default extraction function that uses a list of json_pointers with
automatic iteration over array values returned by a json pointer.
There's still a notion of filter fields supporting indexes: that's
simply handled within the default extraction or custom extraction
function. And for most fields, there won't be a need to write a custom
extraction function simply to implement indexing.
Within a json_event_filter_check object, instead of having a single
extracted value as a string, hold a vector of extracted json_event_value
objects (vector because order matters) and a set of json_event_value
objects (for set comparisons) as m_evalues. Values on the right hand
side of the expression are held as a set m_values.
json_event_filter_check::compare now supports IN/INTERSECTS as set
comparisons. It also supports PMATCH using path_prefix_search objects,
which simplifies checks like ka.req.pod.volumes.hostpath: now they can
be expressed as "ka.req.pod.volumes.hostpath intersects (/proc,
/var/run/docker.sock, /, /etc, /root)" instead of
"ka.req.volume.hostpath[/proc]=true or
ka.req.volume.hostpath[/root]=true or ...".
Define ~10 new filtercheck fields that extract pod properties like
hostIpc, readOnlyRootFilesystem, etc. that are relevant for PSP validation.
As a part of these changes, also clarify the names of filter fields
related to pods to always have a .pod in the name. Furthermore, fields
dealing with containers in a pod always have a .pod.containers prefix in
the name.
Finally, change the comparisons for existing k8s audit rules to use
"intersects" and/or "in" when appropriate instead of a single equality
comparison.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Related to the changes in https://github.com/draios/sysdig/pull/1501,
add support for an "intersects" operator that verifies if any of the
values in the rhs of an expression are found in the set of extracted
values.
For example:
(a,b,c) in (a,b) is false, but (a,b,c) intersects (a,b) is true.
The code that implements CO_INTERSECTS is in a different commit.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Without this, as ecs-agent starts we get a bunch of errors that look
like this (reformatted for readability):
Notice Container with sensitive mount started (
user=root
command=init -- /agent ecs-agent (id=19d4e98bb0dc)
image=amazon/amazon-ecs-agent:latest
mounts=/proc:/host/proc:ro:false:rprivate,$lotsofthings
)
ecs-agent needs those to work properly, so this can cause lots of false
positives when starting a new instance.
Signed-off-by: Felipe Bessa Coelho <fcoelho.9@gmail.com>
When I try to build the dev branch using the docker builder, the tests
target isn't properly checking out and building catch2 for the
dependency catch2.hpp. Adding this explicit dependency allowed the build
to succeed.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
I wasn't able to compile the dev branch with gcc 5.4 (e.g. not using the
builder), getting this error:
```
.../falco/userspace/falco/grpc_server.cpp:40:109: error: specialization of ‘template<class Request, class Response> void falco::grpc::request_stream_context<Request, Response>::start(falco::grpc::server*)’ in different namespace [-fpermissive]
void falco::grpc::request_stream_context<falco::output::request, falco::output::response>::start(server* srv)
^
In file included from .../falco/userspace/falco/grpc_server.cpp:26:0:
.../falco/userspace/falco/grpc_server.h:102:7: error: from definition of ‘template<class Request, class Response> void falco::grpc::request_stream_context<Request, Response>::start(falco::grpc::server*)’ [-fpermissive]
void start(server* srv);
```
It looks like gcc 5.4 doesn't handle a declaration inside namespace blocks
combined with a definition that uses namespace qualifiers on the
function. https://gcc.gnu.org/bugzilla/show_bug.cgi?id=56480 has more
detail.
A workaround is to add `namespace falco {` and `namespace grpc {` around
the declarations.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
If this works as intended, PRs will automatically get the area labels depending on the files they modify.
If they want, users can still apply other areas manually, by slash command, or by editing the PR template while opening the PR.
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
Added `libmpx2` to be installed during `apt-get install`; it is a dependency for `dpkg: libgcc-6-dev:amd64`.
Signed-off-by: Sumit Kumar <sumitsaiwal@gmail.com>
As of 0e1c436d14, the build directory is
an argument to run_regression_tests.sh. However, the build directory in
falco_tests.yaml is currently hard-coded to /build, with the build
variant influencing the subdirectory.
Clean this up so the entire build directory passed to
run_regression_tests.sh is passed to avocado and used for the build
directory.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
New automated tests for testing parsing of multiple-doc rules files:
- invalid_{overwrite,append}_{macro,rule}_multiple_docs are just like
the previous versions, but with the multiple files combined into a
single multi-document file.
- multiple_docs combines the rules file from multiple_rules
They expect the same results and output as the multiple-file versions.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Properly parse multi-document yaml files, e.g. blocks separated by
---. This is easily handled by lyaml itself: you just need to pass the
option all = true to yaml.load, and each document will be provided as a table.
This does break the table iteration a bit, so some more refactoring:
- Create a load_state table that holds context like the current
  document index, the required_engine_version, etc.
- Pull out the parts that parse a single document to load_rules_doc(),
  which is given the table for a single document + load_state.
- Simplify get_orig_yaml_obj to just provide a single row index and
  return all rows from that point to the next blank line or line
  starting with '-'
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
A recent sysdig change resulted in container info embedded in capture
files being reported as events. In turn, this caused some tests that
were depending on empty.scap not having any events to fail.
So recreate empty.scap from an environment where no containers were
running. As a result they won't be included in the capture file and
there won't be any container events.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Add new tests that ensure that validation across files and involving
multiple macro/rule objects display the right context. When appending,
both objects are displayed. When overwriting, the overwritten object is
displayed.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Make additional improvements to display relevant context when validating
files. This handles cases where a macro/rule overwrites a prior rule.
- Instead of saving the index into the array of lines for each rule,
save the rule yaml itself, as a property 'context' for each object.
- When appending rules, the context of the base macro/rule and the
context of the appended rule/macro are concatenated.
- New functions get_orig_yaml_obj, build_error, and
build_error_with_context handle building the error string.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
GKE regularly accesses /exec.fifo both at a system level and within
individual falco pods. As is, this triggers errors multiple times every
hour. This change adds /exec.fifo to the expected files below root that
will be accessed.
Signed-off-by: Jonathan McGowan <jonnymcgow7@gmail.com>
Fix a couple of small bugs when verifying macro/rule objects:
1) Yaml can have document separators "---", and those were mistakenly
being considered array items.
2) When reading macros and rules and using array position to find the
right document offset, the overall object order should be
used (e.g. this is the 5th object from the file) and not the array
position (e.g. this is the 3rd rule from the file).
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Modify the disabled_rules_using_regex test to
disabled_rules_using_substring with an appropriate substring.
Also add a test where rule names have regex chars and allow rule names
to have regex chars when parsing falco's output in tests. These changes
are future-looking in case we want to add back support for rule
enabling/disabling using regexes.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Given the compiler we currently use, you can't actually enable/disable
regexes in falco_engine::enable_rule using a regex pattern. The regex
either will fail to compile or will compile but not actually match
strings. This is noted on the c++11 compatibility notes for gcc 4.8.2:
https://gcc.gnu.org/onlinedocs/gcc-4.8.2/libstdc++/manual/manual/status.html#status.iso.2011.
The only use of enable_rule was treating the regex pattern as a
substring match anyway, so we can change the engine to treat the pattern
as a substring.
So change the method/supporting sub-classes to note that the argument is
a substring match, and change falco itself to refer to substrings
instead of patterns.
This fixes https://github.com/falcosecurity/falco/issues/742.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Ideally, I'd like to have 3.5 as the minimum version.
Nevertheless, for the moment, I bump this to 3.3.2 to match the CMake
version of the internal Jenkins CI.
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
Falco version respects the following rules:
If the current commit matches (exactly) a git tag then the
FALCO_VERSION equals it (with the initial "v" stripped out).
Otherwise FALCO_VERSION is 0.<commit hash>[.-dirty].
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
Add a bunch of additional test cases for validating rules files. Each
has a specific kind of parse failure and checks for the appropriate
error info on stdout.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Instead of relying on lua errors to pass back parse errors, pass back an
explicit true + required engine version or false + error message.
Also clean up the error message to display info + context on the
error. When the error relates to yaml parsing, use the row number passed
back in lyaml's error string to print the specific line with the error.
When parsing rules/macros/lists, print the object being parsed alongside
the error.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
When parsing rules files with -V (validate), print info on the result of
loading the rules file to stdout. That way a caller can capture stdout
to pass along any rules parsing error.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
New test options stdout_is/stderr_is do a direct comparison between
stdout/stderr and the provided value.
Test option validate_rules_file maps to -V arguments, which validate
rules and exits.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
To speed up list expansion, instead of using regexes to replace a list
name with its contents, do string searches followed by examining the
preceding/following characters for the proper delimiter.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
We shouldn't need to clean up strings via a cleanup function and don't
need to do it via a bunch of string.gsub() functions.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Instead of iterating over the entire list of filters and doing pattern
matches against each defined filter, perform table lookups.
For filters that take arguments e.g. proc.aname[3] or evt.arg.xxx, split
the filtercheck string on bracket/dot and check the values against a
table.
There are now two tables of defined filters: defined_arg_filters and
defined_noarg_filters. Each filter is put into a table depending on
whether the filter takes an argument or not.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Json-related filtercheck fields supported indexing with brackets, but
when looking at the field descriptions you couldn't tell if a field
allowed an index, required an index, or did not allow an index.
This information was available, but it was a part of the protected
aliases map within the class.
Move this to the public field information so it can be used outside the
class.
Also add m_ prefixes for member names, now that the struct isn't
trivial.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Some refinements and improvements to the GitHub PR template.
Co-authored-by: Lorenzo Fontana <lo@linux.com>
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
This coding convention's sole goal is to approximately match the current code style.
It MUST NOT be interpreted in any other way until a real and definitive coding convention is put in place.
Co-authored-by: Lorenzo Fontana <lo@linux.com>
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
1. Extend macro mkdir with syscall mkdirat (#337)
2. Add placeholder for whitelist in rule Clear Log Activities (#632)
Signed-off-by: kaizhe <derek0405@gmail.com>
add docker.io/ to the trusted images list
Signed-off-by: kaizhe <derek0405@gmail.com>
rule update: add container.id and image in the rule output except those rules with "not container" in condition
Signed-off-by: kaizhe <derek0405@gmail.com>
Remove empty line
Signed-off-by: Kaizhe Huang <derek0405@gmail.com>
The main changes are to use falco_rules.yaml when using
k8s_audit_rules.yaml, as it now depends on it, and to modify one of the
tests to add granular exceptions instead of a single trusted list.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Start using a falco_ prefix for falco-provided lists/macros. Not
changing existing object names to retain compatibility.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Define macros k8s_audit_always_true/k8s_audit_never_true that work for
k8s audit events. Use them in macros that were asserting true/false values.
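A hedged sketch of such macros (the shipped expressions may differ):
```
# True for any k8s audit event: the raw event timestamp always exists.
- macro: k8s_audit_always_true
  condition: (jevt.rawtime exists)

- macro: k8s_audit_never_true
  condition: (jevt.rawtime=0)
```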
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Previously, the exceptions for Launch Privileged Container/Launch
Sensitive Mount Container came from a list of "trusted" images and/or a
macro that defined "trusted" containers. We want more fine-grained
control over the exceptions for these rules, so split them into
exception lists/macros that are specific to each rule. This defines:
- falco_privileged_images: only those images that are known to require
privileged=true
- falco_privileged_containers: uses privileged_images and (for now) still
allows all openshift images
- user_privileged_containers: allows user exceptions
- falco_sensitive_mount_images: only those images that are known to perform
sensitive mounts
- falco_sensitive_mount_containers: uses sensitive_mount_images
- user_sensitive_mount_containers: allows user exceptions
For backwards compatibility purposes only, we keep the trusted_images
list and user_trusted_containers macro and they are still used as
exceptions for both rules. Comments recommend using the more
fine-grained alternatives, though.
While defining these lists, also do another survey to see if the images still
require these permissions, and remove them if they don't. Removed:
- quay.io/coreos/flannel
- consul
Moved to sensitive mount only:
- gcr.io/google_containers/hyperkube
- datadog
- gliderlabs/logspout
Finally, get rid of the k8s audit-specific lists of privileged/sensitive
mount images, relying on the ones in falco_rules.yaml.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Add more accurate tracking of the number of falco rules loaded per
ruleset, which are made available via the engine method
::num_rules_for_ruleset().
In the ruleset objects, keep track of whether a filter wrapper is actually
added/removed and, if so, increment/decrement the count.
* Allow containerd to start containers
Needed for IBM Cloud Kubernetes Service
* Whitelist state checks for galley(istio)
Galley is a component of istio
https://istio.io/docs/reference/commands/galley/
* Whitelist calico scratching /status.json
This is the observed behaviour on IBM Cloud Kubernetes Service
* Add whitelisting for keepalived config file
Some newer distros default to Python 3, not 2, which causes Ansible to trigger these rules.
falco-CLA-1.0-contributing-entity: 1500 Services Ltd
falco-CLA-1.0-signed-off-by: Chris Northwood <chris.northwood@1500cloud.com>
Please note
registry.access.redhat.com/sematext/agent,
registry.access.redhat.com/sematext/logagent
are not available yet, but we are in the process of certification ...
I have made minor language edits to fix the following:
* Punctuation
* Typos
* Parallelism
* Clarity.
Example: Such as (inclusion) vs Like (comparison).
falco-CLA-1.0-signed-off-by: Radhika Puthiyetath <radhika.pc@gmail.com>
For a while, falco has set the inspector drop mode to 1, which should
discard several classes of events that weren't necessary to use most
falco rules.
However, it was mistakenly being called before the inspector was opened,
which meant it wasn't actually doing anything.
Fix this by setting the dropping mode after the inspector open.
On some spot testing on a moderately loaded environment, this results in
a 30-40% drop in the number of system calls processed per second, and
should result in a nice boost in performance.
In K8s 1.13, there's a new mechanism for k8s audit logs using Audit
Sinks, which can be created and managed like other k8s objects.
Add instructions for enabling k8s audit logging for 1.13. The patching
script is still required, as dynamic audit is not a GA feature and needs
to be enabled. Also, the audit sink config is a template and needs to be
filled in with the cluster ip address, like the webhook config for 1.11.
* update(integrations): CRI flag
Co-authored-by: Lorenzo Fontana <lo@linux.com>
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
* fix(integrations): set the containerd socket
Co-Authored-By: Lorenzo Fontana <lo@linux.com>
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
* Fix parentheses for rpm_procs macro
Ensures a preceding not will apply to the whole macro
* Let anything write to /etc/fluent/configs.d
It looks like a lot of scripted programs (shell scripts running cp, sed,
arbitrary ruby programs) are run by fluentd to set up config. They're
too generic to identify, so just add /etc/fluent/configs.d to
safe_etc_dirs, sadly.
* Let java setup write to /etc/passwd in containers
/opt/jboss/container/java/run/run-java.sh and /opt/run-java/run-java.sh
write to /etc/passwd in a container, probably to add a user. Add an
exception for them.
* Update engine fields checksum for fd.dev.*
New fields fd.dev.*, so updating the fields checksum.
* Print a message why the trace file can't be read.
At debug level only, but better than nothing.
* Adjust tests to match new container_started macro
Now that the container_started macro works either on the container event
or the first process being spawned in a container, we need to adjust the
counts for some rules to handle both cases.
* Supporting files to build/test via jenkins
Changes to build/test via jenkins, which also means running all tests in
a container instead of directly on the host:
- Jenkinsfile controls the stages, build.sh does the build and
run-tests.sh does the regression tests.
- Create a new container falcosecurity/falco-tester that includes the
dependencies required to run the regression tests. This is a different
image than falco-builder because it doesn't need to be centos 6 based,
doesn't install any compiler/etc, and installs the test running
framework we use (avocado). We now use a newer version of avocado,
which resulted in some small changes to how it is run and how yaml
options are parsed.
- Modify run_regression_tests.sh to download trace files to the build
directory and only if not present. Also honor BUILD_TYPE/BUILD_DIR,
which is provided via the docker run cmd.
- The package tests are now moved to a separate falco_tests_package.yaml
file. They will use rpm installs by default instead of debian
packages. Also add the ability to install rpms in addition to debian
packages.
- Automate the process of creating the docker local package by: 1)
Adding CMake rules to copy the Dockerfile, entrypoint to the build
directory and 2) Copy test trace files and rules into the build
directory. This allows running the docker build command from
build/docker/local instead of the source directory.
- Modify the way the container test is run a bit to use the trace
files/rules copied into the container directly instead of host-mounted
trace files.
* Use container builder + tester for travis
We'll probably be using jenkins soon, but this will allow switching back
to travis later if we want.
* Use download.draios.com for binutils packages
That way we won't be dependent on snapshot.debian.org.
* Remove netstat as a generic network program
We'll try to limit the list to programs that can broadly see activity or
actually create traffic.
* Rules for inbound conn sources, not outbound
Replace "Unexpected outbound connection source" with "Unexpected inbound
connection source" to watch inbound connections by source instead of
outbound connections by source. The rule itself is pretty much unchanged
other than switching to using cip/cnet instead of sip/snet.
Expand the supporting macros so they include outbound/inbound in the
name, to make it clearer.
* rules update: add rules for mitre framework
* rules update: add mitre persistence rules
* minor changes
* add exclude hidden directories list
* limit hidden files creation in container
* minor fix
* minor fix
* tune rules to have only_check_container macro
* rules update: add rules for remove data from disk and clear log
* minor changes
* minor fix rule name
* add check_container_only macro
* addresses comments
* add rule for updating package repos
* Don't consider dd a bulk writer
There are enough legitimate cases to exclude it.
* Make cron/chmod policies opt-in
They have enough legitimate uses that we shouldn't run by default.
* minor fix
* Fix mistake in always_true macro
comparison operator was wrong.
* Whitespace diffs
* Add opt-in rules for interp procs + networking
New rules "Interpreted procs inbound network activity" and "Interpreted
procs outbound network activity" check for any network activity being
done by interpreted programs like ruby, python, etc. They aren't enabled
by default, as there are many legitimate cases where these programs
might perform inbound or outbound networking. Macros
"consider_interpreted_inbound" and "consider_interpreted_outbound" can
be used to enable them.
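Since both rules ship disabled, opting in is a one-line override of each
gating macro in a local rules file, e.g. this sketch:
```
- macro: consider_interpreted_inbound
  condition: (always_true)

- macro: consider_interpreted_outbound
  condition: (always_true)
```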
* Opt-in rule for running network tools on host
New rule Lauch Suspicious Network Tool on Host is similar to "Lauch
Suspicious Network Tool in Container" [sic] but works on the host. It's
not enabled by default, but can be enabled using the macro
consider_network_tools_on_host.
* Add parens around container macro
* Make Modify User Context generic to shell configs
Rename Modify User Context to Modify Shell Configuration File to note
that it's limited to shell configuration files, and expand the set of
files to cover a collection of file names and files for zsh, csh, and
bash.
* Also prevent shells from directly opening conns
Bash can directly open network connections by writing to
/dev/{tcp,udp}/<addr>/<port>. These aren't actual files, but are
interpreted by bash as instructions to open network connections.
* Add rule to detect shell config reads
New rule Read Shell Configuration File is analogous to Write Shell
Configuration File, but handles reads by programs other than shell
programs. It's also disabled by default using consider_shell_config_reads.
* Add rule to check ssh directory/file reads
New rule Read ssh information looks for any open of a file or directory
below /root/.ssh or a user ssh directory. ssh binaries (new list
ssh_binaries) are excluded.
The rule is also opt-in via the macro consider_ssh_reads.
* Rule to check for disallowed ssh proxies
New rule "Program run with disallowed http proxy env" looks for spawned
programs that have a HTTP_PROXY environment variable, but the value of
the HTTP_PROXY is not an expected value.
This handles attempts to redirect traffic to unexpected locations.
* Add rules showing how to categorize outbound conns
New rules Unexpected outbound connection destination and Unexpected
outbound connection source show how to categorize network connections by
either destination or source ip address, netmask, or domain name.
In order to be effective, they require a comprehensive set of allowed
sources and/or destinations, so they both require customization and are
gated by the macro consider_all_outbound_conns.
* Add .bash_history to bash config files
* Restrict http proxy rule to specific procs
Only considering wget, curl for now.
* Shell programs can directly modify config
Most notably .bash_history.
* Use right system_procs/binaries
system_binaries doesn't exist, so use system_procs + an additional test
for shell_binaries.
* rule update: add MITRE tags for rules
* update mitre tags with all lower case and add two more rules
* add two more mitre_persistence rules plus minor changes
* replace contains with icontains
* limit search passwd in container
* Add option to display times in ISO 8601 UTC
ISO 8601 time is useful when, say, running falco in a container, which
may have a different /etc/localtime than the host system.
A new config option time_format_iso_8601 controls whether log message
and event times are displayed in ISO 8601 in UTC or in local time. The
default is false (display times in local time).
This option is passed to logger init as well as outputs. For outputs it
eventually changes the time format field from %evt.time/%jevt.time to
%evt.time.iso8601/%jevt.time.iso8601.
Adding this field changes the falco engine version so increment it.
This depends on https://github.com/draios/sysdig/pull/1317.
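Enabling it is a single line in falco.yaml (only the option name comes
from this change; the value shown flips the default):
```
# Display log and event times in ISO 8601 UTC rather than local time.
time_format_iso_8601: true
```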
* Unit test for ISO 8601 output
A unit test for ISO 8601 output ensures that both the log and event time
is in ISO 8601 format.
* Use ISO 8601 output by default in containers
Now that we have an option that controls iso 8601 output, use it by
default in containers. We do this by changing the value of
time_format_iso_8601 in falco.yaml in the container.
* Handle errors in strftime/asctime/gmtime
A placeholder "N/A" is used in log messages instead.
* Also let dockerd-current setns()
* Add additional setns programs
Let oci-umount (https://github.com/containers/oci-umount) setns().
* Let Openscap RPM probes touch rpm db
Define a list openscap_rpm_binaries containing openscap probes related
to rpm and let those binaries touch the rpm database.
* Let oc write to more directories below /etc
Make the prefix more general, allowing any path below /etc/origin/node.
When creating syscall event drop alerts, instead of including just the
total and dropped event count, include all possible causes of drops as
well as whether bpf is enabled.
* Skip incomplete container info for container start
In the container_started macro, ensure that the container metadata is
complete after either the container event (very unlikely) or after the
exec of the first process into the container (very likely now that
container metadata fetches are async).
When using these rules with older falco versions, this macro will still
work as the synchronous container metadata fetch will result in a
repository that isn't "incomplete".
* Update test traces to have full container info
Some test trace files used for regression tests didn't have full
container info, and once we started looking for those fields, the tests
stopped working.
So update the traces, and event counts to match.
Bringing over the top CMakeLists.txt change in
https://github.com/draios/sysdig/pull/1349 to define GRPC_CPP_PLUGIN so
it can be referred to when autogenerating grpc code.
* Make stats file interval configurable
New argument --stats_interval=<msec> controls the interval at which
statistics are written to the stats file. The default is 5000 ms (5 sec)
which matches the prior hardcoded interval.
The stats interval is triggered via signals, so an interval below ~250ms
will probably interfere with falco's behavior.
* Add ability to emit general purpose messages
A new method falco_outputs::handle_msg allows emitting generic messages
that have a "rule", message, and output fields, but aren't exactly tied
to any event and aren't passed through an event formatter.
This allows falco to emit "events" based on internal checks like kernel
buffer overflow detection.
* Clean up newline handling for logging
Log messages from falco_logger::log may or may not have trailing
newlines. Handle both by always adding a newline to stderr logs and
always removing any newline from syslog logs.
* Add method to get sequence from subkey
New variant of get_sequence that allows fetching a list of items from a
key + subkey, for example:
key:
  subkey:
    - list
    - items
    - here
Both use a shared method get_sequence_from_node().
* Monitor syscall event drops + optional actions
Start actively monitoring the kernel buffer for syscall event drops,
which are visible in scap_stats.n_drops, and add the ability
to take actions when events are dropped. The -v (verbose) and
-s (stats filename) arguments also print out information on dropped
events, but they were only printed/logged without any actions.
In falco config you can specify one or more of the following actions to
take when falco notes system call drops:
- ignore (do nothing)
- log a critical message
- emit an "internal" falco alert. It looks like any other alert with a
time, "rule", message, and output fields but is not related to any
rule in falco_rules.yaml/other rules files.
- exit falco (the idea being that the restart would be monitored
elsewhere).
A new module syscall_event_drop_mgr is called for every event and
collects scap stats every second. If in the prior second there were
drops, perform_actions() handles the actions.
To prevent potential flooding in high drop rate environments, actions
are governed by a token bucket with a rate of 1 action per 30 seconds,
with a max burst of 10 seconds. We might tune this later based on
experience in busy environments.
This might be considered a fix for
https://github.com/falcosecurity/falco/issues/545. It doesn't
specifically flag falco rules alerts when there are drops, but does
make it easier to notice when there are drops.
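A sketch of how this could be expressed in falco.yaml. The action names
come from the list above; the exact key names and the rate/max_burst
knobs for the token bucket are assumptions:
```
syscall_event_drops:
  actions:
    - log
    - alert
  # Token bucket: roughly one action per 30 seconds (1/30 ~= 0.03333),
  # with a max burst of 10.
  rate: 0.03333
  max_burst: 10
```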
* Add unit test for syscall event drop detection
Add unit tests for syscall event drop detection. First, add an optional
config option that artificially increments the drop count every
second. (This is only used for testing).
Then add test cases for each of the following:
- No dropped events: should not see any log messages or alerts.
- ignore action: should note the drops but not log messages or alert.
- log action: should only see log messages for the dropped events.
- alert action: should only see alerts for the dropped events.
- exit action: should see log message noting the dropped event and exit
with rc=1
A new trace file ping_sendto.scap has 10 seconds worth of events to
allow the periodic tracking of drops to kick in.
+ Add the user_known_write_root_conditions macro to allow custom conditions in the "Write below root" rule
+ Add the user_known_non_sudo_setuid_conditions to allow custom conditions in the "Non sudo setuid" rule
falco-CLA-1.0-contributing-entity: Coveo Solutions Inc.
falco-CLA-1.0-signed-off-by: Jean-Philippe Lachance <jplachance@coveo.com>
When using host network, the containers can't resolve kubernetes.default, thus not getting the metadata like pod name, namespace, etc. Using the environment variable KUBERNETES_SERVICE_HOST, which points to the current cluster API server, will allow that.
* Add support for container metaevent to detect container spawning
Create a new macro "container_started" to check both the old and
the new check.
Also, only look for execve exit events with vpid=1.
* Use TBB_INCLUDE_DIR for consistency w sysdig,agent
Previously it was a mix of TBB_INCLUDE and TBB_INCLUDE_DIR.
* Build using matching sysdig branch, if exists
! Make sure we add the Sysdig repo and call an update before trying to install Falco
! Remove the require in the service class to fix a dependencies loop
* Bump the version to 0.4.0
falco-CLA-1.0-contributing-entity: Coveo Solutions Inc.
falco-CLA-1.0-signed-off-by: Jean-Philippe Lachance <jplachance@coveo.com>
* Update the Puppet module:
* Apply puppet-lint recommendations
* Update the README since the project moved from draios to falcosecurity in GitHub
* Move parameters in their own file
+ Add the DEB repository automatically
+ Add the EPEL repository automatically
+ Add a logrotate configuration
* Update the configuration file with all the latest updates
falco-CLA-1.0-contributing-entity: Coveo Solutions Inc.
falco-CLA-1.0-signed-off-by: Jean-Philippe Lachance <jplachance@coveo.com>
* * Set required module versions properly
* Set dependencies between classes
* Set the class order
* Apply mstemm's code review
* * Drop the Puppet 3 support
* Use a working version of puppetlabs-apt
* Use dependencies to be compatible with Puppet 4.7 and above
* Move kubernetes-response-engine to falcosecurity/kubernetes-response-engine
As long as Falco and Response Engine have different release cycle, they
are separated.
* Add a README explaining that repository has been moved
@mfdii is absolutely right about this on #539
Related to https://github.com/falcosecurity/falco/pull/526, it turns out
attempting to build a kernel module on the default debian-based ami used
by kops tries to invoke gcc-6:
-----
* Setting up /usr/src links from host
* Unloading falco-probe, if present
* Running dkms install for falco
Kernel preparation unnecessary for this kernel. Skipping...
Building module:
cleaning build area...
make -j8 KERNELRELEASE=4.9.0-7-amd64 -C /lib/modules/4.9.0-7-amd64/build
M=/var/lib/dkms/falco/0.14.0/build...(bad exit status: 2)
Error! Bad return status for module build on kernel:
4.9.0-7-amd64 (x86_64)
Consult /var/lib/dkms/falco/0.14.0/build/make.log for more information.
* Running dkms build failed, dumping
/var/lib/dkms/falco/0.14.0/build/make.log
DKMS make.log for falco-0.14.0 for kernel 4.9.0-7-amd64 (x86_64)
Wed Feb 13 01:02:01 UTC 2019
make: Entering directory '/host/usr/src/linux-headers-4.9.0-7-amd64'
arch/x86/Makefile:140: CONFIG_X86_X32 enabled but no binutils support
/host/usr/src/linux-headers-4.9.0-7-common/scripts/gcc-version.sh:
line 25: gcc-6: command not found
-----
So manually add back gcc-6 and its dependencies.
To allow for a more portable build environment, create a builder image
that is based on centos 6 with devtoolset-2 for a reference g++.
In that image, install all required packages and run a script that can
either run cmake or make.
The image depends on the following parameters:
FALCO_VERSION: the version to give any built packages
BUILD_TYPE: Debug or Release
BUILD_DRIVER/BPF: whether or not to build the kernel module/bpf program when
building. This should usually be OFF, as the kernel module would be
built for the files in the centos image, not the host.
BUILD_WARNINGS_AS_ERRORS: consider all build warnings fatal
MAKE_JOBS: passed to the -j argument of make
A typical way to run this builder is the following. Assumes you have
checked out falco and sysdig to directories below /home/user/src, and
want to use a build directory of /home/user/build/falco:
$ docker run --user $(id -u):$(id -g) -v /etc/passwd:/etc/passwd:ro -e MAKE_JOBS=4 -it -v /home/user/src:/source -v /home/user/build/falco:/build falcosecurity/falco-builder cmake
$ docker run --user $(id -u):$(id -g) -v /etc/passwd:/etc/passwd:ro -e MAKE_JOBS=4 -it -v /home/user/src:/source -v /home/user/build/falco:/build falcosecurity/falco-builder package
* Expose required_engine_version when loading rules
When loading a rules file, have alternate methods that return the
required_engine_version. The existing methods remain unchanged and just
call the new methods with a dummy placeholder.
* Add --support argument to print support bundle
Add an argument --support that can be used as a single way to collect
necessary support information, including the falco version, config,
commandline, and all rules files.
There might be a bit of extra structure to the rules files, as they
actually support an array of "variants", but we're thinking ahead to
cases where there might be a comprehensive library of rules files and
choices, so we're adding the extra structure.
Instead of using container.image, which always reports the raw string
used to spawn the container, switch to the more reliable
container.image.{repository,tag}, since they are guaranteed to report
the actual repository/tag of the container image.
This also gives a little performance improvement since a single 'in'
predicate can now be used instead of a sequence of startswith.
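As an illustration, the chained startswith checks collapse into
something like this sketch (the macro and image names are hypothetical):
```
# Single 'in' predicate on the actual repository of the image.
- macro: allowed_images
  condition: (container.image.repository in (falcosecurity/falco, sysdig/agent))
```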
* Add ability to print field names only
Add ability to print field names only instead of all information about
fields (description, etc) using -N cmdline option.
This will be used to add some versioning support steps that check for a
changed set of fields.
* Add an engine version that changes w/ filter flds
Add a method falco_engine::engine_version() that returns the current
engine version (e.g. set of supported fields, rules objects, operators,
etc.). It's defined in falco_engine_version.h, starts at 2 and should be
updated whenever a breaking change is made.
The most common reason for an engine change will be an update to the set
of filter fields. To make this easy to diagnose, add a build time check
that compares the sha256 output of "falco --list -N" against a value
that's embedded in falco_engine_version.h. A mismatch fails the build.
* Check engine version when loading rules
A rules file can now have a field "required_engine_version N". If
present, the number is compared to the falco engine version. If the
falco engine version is less, an error is thrown.
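In a rules file this is a single top-level object, e.g.:
```
- required_engine_version: 2
```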
* Unit tests for engine versioning
Add a required version: 2 to one trace file to check the positive case
and add a new test that verifies that a too-new rules file won't be loaded.
* Rename falco test docker image
Rename sysdig/falco to falcosecurity/falco in unit tests.
* Don't pin falco_rules.yaml to an engine version
Currently, falco_rules.yaml is compatible with versions <= 0.13.1 other
than the required_engine_version object itself, so keep that line
commented out so users can use this rules file with older falco
versions.
We'll uncomment it with the first incompatible falco engine change.
* Allow SSL for k8s audit endpoint
Allow enabling SSL for the Kubernetes audit log web server. This
required adding two new configuration options: webserver.ssl_enabled and
webserver.ssl_certificate. To enable SSL add the below to the webserver
section of the falco.yaml config:
webserver:
  enabled: true
  listen_port: 8765s
  k8s_audit_endpoint: /k8s_audit
  ssl_enabled: true
  ssl_certificate: /etc/falco/falco.pem
Note that the port number has an s appended to indicate SSL
for the port, which is how civetweb expects SSL ports to be denoted. We
could change this to dynamically add the s if ssl_enabled: true.
The ssl_certificate is a combination SSL Certificate and corresponding
key contained in a single file. You can generate a key/cert as follows:
$ openssl req -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem
$ cat certificate.pem key.pem > falco.pem
$ sudo cp falco.pem /etc/falco/falco.pem
fix ssl option handling
* Add notes on how to create ssl certificate
Add notes on how to create the ssl certificate to the config comments.
gcc 5 is no longer included in debian unstable, but we need it to build
centos kernels, which are 3.x based and explicitly want a gcc version 3,
4, or 5 compiler.
So grab copies we've saved from debian snapshots with the prefix
https://snapshot.debian.org/archive/debian/20190122T000000Z. They're
stored at downloads.draios.com and installed in a dpkg -i step after the
main packages are installed, but before any other by-hand packages are
installed.
A recent sysdig change added support for CRI and also added new external
dependencies (cri uses grpc to communicate between the client/server).
Add those dependencies.
* Add falco service to k8s install/update labels
Update the instructions for K8s RBAC installation to also create a
service that maps to port 8765 of the falco pod. This allows other
services to access the embedded webserver within falco.
Also clean up the set of labels to use a consistent app: falco-example,
role:security for each object.
* Change K8s Audit Example to use falco daemonset
Change the K8s Audit Example instructions to use minikube in conjunction
with a falco daemonset running inside of minikube. (We're going to start
prebuilding kernel modules for recent minikube variants to make this
possible).
When running inside of minikube in conjunction with a service, you have
to go through some additional steps to find the ClusterIP associated
with the falco service and use that ip when configuring the k8s audit
webhook. Overall it's still a more self-contained set of instructions,
though.
In the common case, falco doesn't generate much output, so it's
desirable to not buffer it in case you're tail -fing some logs.
So change the default for buffered outputs to false.
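In falco.yaml (the key name is assumed from the option's description):
```
# Don't buffer outputs, so alerts appear immediately when tailing logs.
buffered_outputs: false
```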
* Improved inbound/outbound macros
Improved versions of inbound/outbound macros that add coverage for
recvfrom/recvmsg, sendto/sendmsg and also ignore non-blocking syscalls
in a different way.
* Let nginx-ingress-c(ontroller) write to /etc/nginx
The process name is truncated due to the comm length limit.
Also fix some parentheses for another write_etc_common macro.
* Let calico setns also.
* Let prometheus-conf write its config
Let prometheus-conf write its config below /etc/prometheus.
* Let openshift oc write to /etc/origin/node
As per https://github.com/draios/sysdig/pull/1275, the gen_event class
mandates the implementation of two new methods.
This change aims to simplify the implementation of a generic event
processing infrastructure, that could handle both sinsp and json
events.
The -Wextra compile-time option will enable additional diagnostic
warnings. The -Werror option will cause the compiler to treat warnings
as errors. This change adds a build time option,
BUILD_WARNINGS_AS_ERRORS, to conditionally enable those flags. Note
that depending on the compiler you're using, if you enable this option,
compilation may fail (some compiler versions have additional warnings
that have not yet been resolved).
Testing with these options in place identified a destructor that was
throwing an exception. C++11 doesn't allow destructors to throw
exceptions, so those throw's would have resulted in calls to
terminate(). I replaced them with an error log and a call to assert().
It's possible to call event_tags_for_ruleset/evttypes_for_ruleset for a
ruleset that hasn't been loaded. In this case, it's possible to go past
the end of the m_rulesets array.
After fixing that, it's also possible to go past the end of the
event_tags array in event_tags_for_ruleset().
So in both cases, check the index against the array size before
indexing.
Add k8s audit rules to falco's config so they are read by default.
Rename some generic macros like modify, create, delete in the k8s audit
rules so they don't overlap with macros in the main rules file.
* Add mounts of /var/lib/kubelet* to the sensitive mount list
* Fix GKE/Istio false positives
- Allow kubectl to write below /root/.kube
- Allow loopback/bridge (e.g. /home/kubernetes/bin/) to setns.
- Let istio pilot-agent write to /etc/istio.
- Let google_accounts(_daemon) write user .ssh files.
- Add /health as an allowed file below /.
This fixes https://github.com/falcosecurity/falco/issues/439.
* Improve ufw/cloud-init exceptions
Tie them to both the program and the file being written.
Also move the cloud-init exception to monitored_directory.
* Add new json/webserver libs, embedded webserver
Add two new external libraries:
- nlohmann-json is a better json library that has stronger use of c++
features like type deduction, better conversion from stl structures,
etc. We'll use it to hold generic json objects instead of jsoncpp.
- civetweb is an embeddable webserver that will allow us to accept
posted json data.
New files webserver.{cpp,h} start an embedded webserver that listens for
POSTS on a configurable url and passes the json data to the falco
engine.
New falco config items are under webserver:
- enabled: true|false. Whether to start the embedded webserver or not.
- listen_port. Port that webserver listens on
- k8s_audit_endpoint: uri on which to accept POSTed k8s audit events.
(This commit doesn't compile entirely on its own, but we're grouping
these related changes into one commit for clarity).
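Putting the new config items together, the webserver section of
falco.yaml looks roughly like this (values illustrative):
```
webserver:
  enabled: true
  listen_port: 8765
  k8s_audit_endpoint: /k8s_audit
```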
* Don't use relative paths to find lua code
You can look directly below PROJECT_SOURCE_DIR.
* Reorganize compiler lua code
The lua compiler code is generic enough to work on more than just
sinsp-based rules, so move the parts of the compiler related to event
types and filterchecks out into a standalone lua file
sinsp_rule_utils.lua.
The checks for event types/filterchecks are now done from rule_loader,
and are dependent on a "source" attribute of the rule being
"sinsp". We'll be adding additional types of events next that come from
sources other than system calls.
* Manage separate syscall/k8s audit rulesets
Add the ability to manage separate sets of rules (syscall and
k8s_audit). Stop using the sinsp_evttype_filter object from the sysdig
repo, replacing it with falco_ruleset/falco_sinsp_ruleset from
ruleset.{cpp,h}. It has the same methods to add rules, associate them
with rulesets, and (for syscall) quickly find the relevant rules for a
given syscall/event type.
At the falco engine level, there are new parallel interfaces for both
types of rules (syscall and k8s_audit) to:
- add a rule: add_k8s_audit_filter/add_sinsp_filter
- match an event against rules, possibly returning a result:
process_sinsp_event/process_k8s_audit_event
At the rule loading level, the mechanics of creating filtercheck
objects is handled by two factories (sinsp_filter_factory and
json_event_filter_factory), both of which are held by the engine.
* Handle multiple rule types when parsing rules
Modify the steps of parsing a rule's filter expression to handle
multiple types of rules. Notable changes:
- In the rule loader/ast traversal, pass a filter api object down,
which is passed back up in the lua parser api calls like nest(),
bool_op(), rel_expr(), etc.
- The filter api object is either the sinsp factory or k8s audit
factory, depending on the rule type.
- When the rule is complete, the complete filter is passed to the
engine using either add_sinsp_filter()/add_k8s_audit_filter().
* Add multiple output formatting types
Add support for multiple output formatters. Notable changes:
- The falco engine is passed along to falco_formats to gain access to
the engine's factories.
- When creating a formatter, the source of the rule is passed along
with the format string, which controls which kind of output formatter
is created.
Also clean up exception handling a bit so all lua callbacks catch all
exceptions and convert them into lua errors.
* Add support for json, k8s audit filter fields
With some corresponding changes in sysdig, you can now create general
purpose filter fields and events, which can be tied together with
nesting, expressions, and relational operators. The classes here
represent an instance of these fields devoted to generic json objects as
well as k8s audit events. Notable changes:
- json_event: holds a json object, used by all of the below
- json_event_filter_check: Has the ability to extract values out of a
json_event object and has the ability to define macros that associate
a field like "group.field" with a json pointer expression that
extracts a single property's value out of the json object. The basic
field definition also allows creating an index
e.g. group.field[index], where a std::function is responsible for
performing the indexing. This class has virtual void methods so it
must be overridden.
- jevt_filter_check: subclass of json_event_filter_check and defines
the following fields:
- jevt.time/jevt.rawtime: extracts the time from the underlying json object.
- jevt.value[<json pointer>]: general purpose way to extract any
json value out of the underlying object. <json pointer> is a json
pointer expression
- jevt.obj: Return the entire object, stringified.
- k8s_audit_filter_check: implements fields that extract values from
k8s audit events. Most of the implementation is in the form of macros
like ka.user.name, ka.uri, ka.target.name, etc. that just use json
pointers to extract the appropriate value from a k8s audit event. More
advanced fields like ka.uri.param, ka.req.container.image use
indexing to extract individual values out of maps or arrays.
- json_event_filter_factory: used by things like the lua parser api,
output formatter, etc to create the necessary objects and return
them.
- json_event_formatter: given a format string, create the necessary
fields that will be used to create a resolved string when given a
json_event object.
* Add ability to list fields
Similar to sysdig's -l option, add --list (<source>) to list the fields
supported by falco. With no source specified, will print all
fields. Source can be "syscall" for inspector fields e.g. what is
supported by sysdig, or "k8s_audit" to list fields supported only by the
k8s audit support in falco.
* Initial set of k8s audit rules
Add an initial set of k8s audit rules. They're broken into 3 classes of
rules:
- Suspicious activity: this includes things like:
- A disallowed k8s user performing an operation
- A disallowed container being used in a pod.
- A pod created with a privileged container.
- A pod created with a sensitive mount.
- A pod using host networking
- Creating a NodePort Service
- A configmap containing private credentials
- A request being made by an unauthenticated user.
- Attach/exec to a pod. (We eventually want to also do privileged
pods, but that will require some state management that we don't
currently have).
- Creating a new namespace outside of an allowed set
- Creating a pod in either of the kube-system/kube-public namespaces
- Creating a serviceaccount in either of the kube-system/kube-public
namespaces
- Modifying any role starting with "system:"
- Creating a clusterrolebinding to the cluster-admin role
- Creating a role that wildcards verbs or resources
- Creating a role with writable permissions/pod exec permissions.
- Resource tracking. This includes noting when a deployment, service,
  configmap, cluster role, service account, etc are created or destroyed.
- Audit tracking: This tracks all audit events.
To support these rules, add macros/new indexing functions as needed to
support the required fields and ways to index the results.
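For a flavor of the suspicious-activity class, a pared-down sketch of
the disallowed-user rule (the list name is hypothetical; ka.user.name
and ka.uri are among the fields described above):
```
- rule: Disallowed K8s User
  desc: Detect any K8s operation by a user outside an allowed set of users
  condition: not ka.user.name in (allowed_k8s_users)
  output: K8s operation performed by disallowed user (user=%ka.user.name uri=%ka.uri)
  priority: WARNING
  source: k8s_audit
```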
* Add ability to read trace files of k8s audit evts
Expand the use of the -e flag to cover both .scap files containing
system calls as well as jsonl files containing k8s audit events:
If a trace file is specified, first try to read it using the
inspector. If that throws an exception, try to read the first line as
json. If both fail, return an error.
Based on the results of the open, the main loop either calls
do_inspect(), looping over system events, or
read_k8s_audit_trace_file(), reading each line as json and passing it to
the engine and outputs.
* Example showing how to enable k8s audit logs.
An example of how to enable k8s audit logging for minikube.
* Add unit tests for k8s audit support
Initial unit test support for k8s audit events. A new multiplex file
falco_k8s_audit_tests.yaml defines the tests. Traces (jsonl files) are
in trace_files/k8s_audit and new rules files are in
test/rules/k8s_audit.
Current test cases include:
- User outside allowed set
- Creating disallowed pod.
- Creating a pod explicitly on the allowed list
- Creating a pod w/ a privileged container (or second container), or a
pod with no privileged container.
- Creating a pod w/ a sensitive mount container (or second container), or a
pod with no sensitive mount.
- Cases for a trace w/o the relevant property + the container being
trusted, and hostnetwork tests.
- Tests that create a Service w/ and w/o a NodePort type.
- Tests for configmaps: tries each disallowed string, ensuring each is
detected, and the other has a configmap with no disallowed string,
ensuring it is not detected.
- The anonymous user creating a namespace.
- Tests for all kactivity rules e.g. those that create/delete
resources as compared to suspicious activity.
- Exec/Attach to Pod
- Creating a namespace outside of an allowed set
- Creating a pod/serviceaccount in kube-system/kube-public namespaces
- Deleting/modifying a system cluster role
- Creating a binding to the cluster-admin role
- Creating a cluster role binding that wildcards verbs or resources
- Creating a cluster role with write/pod exec privileges
* Don't manually install gcc 4.8
gcc 4.8 should already be installed by default on the vm we use for
travis.
* Add a falco-sns utility which publishes to an AWS SNS topic
* Add a script for deploying the function in AWS Lambda
* Bump dependencies
* Use an empty topic and pass AWS_DEFAULT_REGION environment variable
* Add gitignore
* Install ca-certificates.
Are used when we publish to a SNS topic.
* Add myself as a maintainer
* Decode events from SNS based messages
* Add Terraform manifests for getting an EKS up and running
Please pay attention to setting up kubectl and how to join workers:
https://www.terraform.io/docs/providers/aws/guides/eks-getting-started.html#obtaining-kubectl-configuration-from-terraform
https://www.terraform.io/docs/providers/aws/guides/eks-getting-started.html#required-kubernetes-configuration-to-join-worker-nodes
* Ignore terraform generated files
* Remove autogenerated files
* Also publish MessageAttributes, which allow the use of Filter Policies
This allows subscribing only to errors, or warnings, or several
priorities, or by rule name.
It covers the same functionality as the NATS publisher does.
* Add kubeconfig and aws-iam-authenticator from heptio to Lambda environment
* Add role trust from cluster creator to lambda role
* Enable CloudWatch for Lambda stuff
* Generate kubeconfig, kubeconfig for lambdas and the lambda arn
This is used by deployment script
* Just a cosmetic change
* Add a Makefile which creates the cluster and configures it
* Use terraform and artifacts which belong to this repository for deploying
* Move CNCF related deployment to its own directory
* Create only SNS and Lambda stuff.
Assume that the EKS cluster will be created outside
* Bridge IAM with RBAC
This allows using the lambda role to authenticate against
Kubernetes
* Do not rely on terraform for deploying a playbook in lambda
* Clean whitespace
* Move rebased playbooks to functions
* Fix rebase issues with deployment and rbac stuff
* Add a clean target to Makefile
* Inject sys.path modification to Kubeless function deployment
* Add documentation and instructions
* Load/unload kernel module on start/stop
When falco is started, load the kernel module. (The falco binary also
will do a modprobe if it can't open the inspector, as a backup).
When falco is stopped, unload the kernel module.
This fixes https://github.com/falcosecurity/falco/issues/418.
* Put script execute line in right place.
Add a signal handler for SIGHUP that sets a global variable g_restart.
All the real execution of falco was already centralized in a standalone
function falco_init(), so simply exit on g_restart=true and call
falco_init() in a loop that restarts if g_restart is set to true.
Take care to not daemonize more than once and to reset the getopt index
to 1 on restart.
This fixes https://github.com/falcosecurity/falco/issues/432.
Update the express version to mitigate some security vulnerabilities.
Update the port to match the one used by demo.yml.
Change to /usr/src/app so npm install works as expected.
* Also add endswith to lua parser
Add endswith as a symbol so it can be parsed in filter expressions.
* Unit test for endswith support
Add a test case for endswith support, based on the filename ending with null.
* Add a Phantom Client which creates containers in Phantom server
* Add a playbook for creating events in Phantom using a Falco alert
* Add a flag for configuring SSL checking
* Add a deployable playbook with Kubeless for integrating with Phantom
* Add a README for Phantom integration
* Use named arguments as real parameters.
Just cosmetic, for clarification
* Call to lower() to allow case-insensitive comparison
* Add the playbook which creates a container in Phantom
I lost it when rebasing the branch :P
* Add additional rpm writing programs
rhn_check, yumdb.
* Add 11-dhclient as a dhcp binary
* Let runuser read below pam
It reads those files to check permissions.
* Let chef write to /root/.chef*
Some deployments write directly below /root.
* Refactor openshift privileged images
Rework how openshift images are handled:
Many customers deploy to a private registry, which would normally
involve duplicating the image list for the new registry. Now, split the
image prefix search (e.g. <host>/openshift3) from the check of the image
name. The prefix search is in allowed_openshift_registry_root, and can
be easily overridden to add a new private registry hostname. The image
list check is in openshift_image, is conditioned on
allowed_openshift_registry_root, and does a contains search instead of a
prefix match.
Also try to get a more comprehensive set of possible openshift3 images,
using online docs as a guide.
* Also let sdchecks directly setns
A new macro python_running_sdchecks is similar to
parent_python_running_sdchecks but works on the process itself.
Add this as an exception to Change thread namespace.
* Fix spec name
* Add a playbook for capturing stuff using sysdig in a container
* Add event-name to job name to avoid collisions among captures
* Implement job for starting container in Pod in Kubernetes Client
We are going to pick data for the whole Pod, not limited to one container
* Use the sysdig/capturer image to capture and upload the capture to s3
* There is a bug with environment string splitting in kubeless
https://github.com/kubeless/kubeless/issues/824
So here is a workaround which uses multiple --env flags, one for each
environment variable.
* Use shorter job name. Kubernetes limit is 64 characters.
* Add a deployable playbook with Kubeless for capturing stuff with Sysdig
* Document the integration with Sysdig capture
* Add Dockerfile for creating sysdig-capturer
* Create a DemistoClient for publishing Falco alerts to Demisto
* Extract a function for extracting description from Falco output
* Add a playbook which creates a Falco alert as a Demisto incident
* Add a Kubeless Demisto Handler for Demisto integration
* Document the integration with Demisto
* Allow changing SSL certificate verification
* Fix naming for playbook specs
* Call to lower() before checking the value of VERIFY_SSL, to allow case-insensitive values.
* Use correct copyright years.
Also include the start year.
* Improve copyright notices.
Use the proper start year instead of just 2018.
Add the right owner Draios dba Sysdig.
Add copyright notices to some files that were missing them.
Replace references to GNU Public License to Apache license in:
- COPYING file
- README
- all source code below falco
- rules files
- rules and code below test directory
- code below falco directory
- entrypoint for docker containers (but not the Dockerfiles)
I didn't generally add copyright notices to all the examples files, as
they aren't core falco. If they did refer to the gpl I changed them to
apache.
debian:unstable head contains binutils 2.31, which generates binaries
that are incompatible with kernels < 4.16.
To fix this, after installing everything, downgrade binutils to
2.30-22. This has to be done as the last step as it introduces conflicts
in other dependencies of the various gcc versions and some of the
packages already in the image.
* Add dpkg-divert as a debian package mgmt program.
* Add pip3 as a package mgmt program.
* Let ucpagent write config
Since the name is fairly generic (apiserver), require that it runs in a
container with image docker/ucp-agent.
* Let iscsi admin programs write config
* Add parent to some output strings
Will aid in addressing false positives.
* Let update-ca-trust write to pki files
* Add additional root writing programs
- zap: web application security tool
- airflow: apache app for managing data pipelines
- rpm can sometimes write below /root/.rpmdb
- maven can write groovy files
* Expand redis etc files
Additional program redis-launcher.(sh) and path /etc/redis.
* Add additional root directories
/root/workspace could be used by jenkins, /root/oradiag_root could be
used by Oracle 11 SQL*Net.
* Add pam-config as an auth program
* Add additional trusted containers
openshift image inspector, alternate name for datadog agent, docker ucp
agent, gliderlabs logspout.
* Add microdnf as a rpm binary.
https://github.com/rpm-software-management/microdnf
* Let coreos update-ssh-keys write /home/core/.ssh
* Allow additional writes below /etc/iscsi
Allow any path starting with /etc/iscsi.
* Add additional /root write paths
Additional files, with /root/workspace changing from a directory to a
path prefix.
* Add additional openshift trusted container.
* Also allow grandparents for ms_oms_writing_conf
In some cases the program spawns intermediate shells, for example:
07:15:30.756713513: Error File below /etc opened for writing (user= command=StatusReport.sh /opt/microsoft/omsconfig/Scripts/StatusReport.sh D34448EA-363A-42C2-ACE0-ACD6C1514CF1 EndTime parent=sh pcmdline=sh -c /opt/microsoft/omsconfig/Scripts/StatusReport.sh D34448EA-363A-42C2-ACE0-ACD6C1514CF1 EndTime file=/etc/opt/omi/conf/omsconfig/last_statusreport program=StatusReport.sh gparent=omiagent ggparent=omiagent gggparent=omiagent) k8s.pod= container=host k8s.pod= container=host
This should fix #387.
If /lib/modules exists in the base image, the symlink will get created at
/lib/modules/modules. This removes any existing empty directory but will
fail if we try to remove a non-empty /lib/modules. (Punting on how to
handle non-empty base image dirs for now)
* Add alternatives as a binary dir writer
It can set symlinks below binary dirs.
* Let userhelper read sens.files/write below /etc
Part of usermode package, can be used by oVirt.
* Let package mgmt progs urlgrabber pki files
Some package management programs run urlgrabber-ext-{down} to update pki
files.
* Add additional root directory
for Jupyter-notebook
* Let brandbot write to /etc/os-release
Used on centos
* Add an additional veritas conf directory.
Also /etc/opt/VRTS...
* Let appdynamics spawn shells
Java, so we look at parent cmdline.
* Add more ancestors to output
In an attempt to track down the source of some additional shell
spawners, add additional parents.
* Let chef write below bin dirs/rpm database
Rename an existing macro chef_running_yum_dump to python_running_chef
and add additional variants.
Also add chef-client as a package management binary.
* Remove dangling macro.
No longer in use.
* Add additional volume mgmt progs
Add pvscan as a volume management program and add an additional
directory below /etc. Also rename the macro to make it more generic.
* Let openldap write below /etc/openldap
Only program is run-openldap.sh for now.
* Add additional veritas directory
Also /etc/vom.
* Let sed write /etc/sedXXXXX files
These are often seen in install scripts for rpm/deb packages. The test
only checks for /etc/sed, as we don't have anything like a regex match
or glob operator.
* Let dse (DataStax Search) write to /root
Only file is /root/tmp__.
* Add additional mysql programs and directories
Add run-mysqld and /etc/my.cnf.d directory.
* Let redis write its config below /etc.
* Let id program open network connections
Seen using port 111 (sun-rpc, but really user lookups).
* Opt-in rule for protecting tomcat shell spawns
Some users want to consider any shell spawned by tomcat suspect for
example, protecting against the famous apache struts attack
CVE-2017-5638, while others do not.
Split the difference by adding a macro
possibly_parent_java_running_tomcat, but disabling it by default.
* added ossec-syscheckd to read_sensitive_file_binaries
* Add "Write below monitored directory"
Take the technique used by "Write below binary dir", and make it more
general, expanding to a list of "monitored directories". This contains
common directories like /boot, /lib, etc.
It has a small workaround to look for home ssh directories without using
the glob operator, which has a pending fix in
https://github.com/draios/sysdig/pull/1153.
* Fix FPs
Move monitored_dir to after evt type checks and allow mkinitramfs to
write below /boot
* Addl boot writers.
GitHub uses a library called Licensee to identify a project's license
type. It shows this information in the status bar and via the API if it
can unambiguously identify the license.
This commit updates the COPYING file so that it contains only the full
text of the GPL 2.0 license. The info that pertains to OpenSSL has now
been moved to the "License Terms" section in the README.
Collectively, these changes allow Licensee to successfully identify the
license type of Falco as GPL 2.0.
falco-CLA-1.0-signed-off-by: Andrea Kao <eirinikos@gmail.com>
* Proactively enable rules instead of only disabling
Previously, rules were enabled by default. Some performance improvements
in https://github.com/draios/sysdig/pull/1126 broke this, requiring that
each rule is explicitly enabled or disabled for a given ruleset.
So if enabled is true, explicitly enable the rule for the default ruleset.
* Get rid of shadowed res variable.
It was used both for the inspector loop and the falco result.
* Add ability to skip rules for unknown filters
Add the ability to skip a rule if its condition refers to a filtercheck
that doesn't exist. This allows defining a rules file that contains new
conditions while still retaining limited backward compatibility with
older falco versions.
When compiling a filter, return a list of filtercheck names that are
present in the ast (which also includes filterchecks from any
macros). This set of filtercheck names is matched against the set of
filterchecks known to sinsp, expressed as lua patterns, and in the
global table defined_filters. If no match is found, the rule loader
throws an error.
The pattern changes slightly depending on whether the filter has
arguments or not. Two filters (proc.apid/proc.aname) can work with or
without arguments, so both styles of patterns are used.
If the rule has an attribute "skip-if-unknown-filter", the rule will be
skipped instead.
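A sketch of a rule using the new attribute (the field proc.new_field is
hypothetical, standing in for a filtercheck an older engine wouldn't
know about):
```
- rule: Uses A New Field
  desc: Relies on a filtercheck that older falco versions don't have
  condition: evt.type=open and proc.new_field=true
  output: open with new field set (command=%proc.cmdline)
  priority: WARNING
  # Older engines skip this rule instead of erroring out.
  skip-if-unknown-filter: true
```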
* Unit tests for skipping unknown filter
New unit test for skipping unknown filter. Test cases:
- A rule that refers to an unknown filter results in an error.
- A rule that refers to an unknown filter, but has
"skip-if-unknown-filter: true", can be read, but doesn't match any events.
- A rule that refers to an unknown filter, but has
"skip-if-unknown-filter: false", returns an error.
Also test the case of a filtercheck like evt.arg.xxx working properly
with the embedded patterns as well as proc.aname/apid which work both ways.
* Use better way to skip falco events
Use the new method falco_consider() to determine which events to
skip. This centralizes the logic in a single function. All events will
still be considered if falco was run with -A.
This depends on https://github.com/draios/sysdig/pull/1105.
* Add ability to specify -A flag in tests
A test attribute all_events corresponds to the -A flag. Add it for some
tests that would normally refer to skipped events.
* Improve compatibility with falco 0.9.0
Temporarily remove some rules features that are not compatible with
falco 0.9.0. We'll release a new falco soon, after which we'll add these
rules features back.
* Disable the unexpected udp traffic rule by default
Some applications will connect a udp socket to an address only to
test connectivity. Assuming the udp connect works, they will follow
up with a tcp connect that actually sends/receives data.
This occurs often enough that we don't want to enable the Unexpected UDP
Traffic rule by default, so add a macro do_unexpected_udp_check which is
set to never_true. To opt-in, override the macro to use the condition
always_true.
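Opting in is a one-macro override in a local rules file:
```
- macro: do_unexpected_udp_check
  condition: (always_true)
```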
* added new command lines for RabbitMQ
* added httpd_writing_ssl_conf macro and add it to write_etc_common
* modified httpd_writing_ssl_conf to add additional files
* added additional command to httpd_writing_ssl_conf
* Wrap condition
Wrap condition with folded style.
* Consolidate test connect ports into one list
There were several exceptions for apps that do a udp connect on an
address simply to see if it works, followed by a tcp connect that
actually sends/receives data.
Unify these exceptions into a single list test_connect_ports, and add
port 9 (discard, used by dockerd).
* Only check whole rule names when matching counts
Tweak the regex so a rule my_great_rule doesn't pick up event counts for
a rule "great_rule: nnn".
* Add ability to skip evttype warnings for rules
A new attribute warn_evttypes, if present, suppresses printing warnings
related to a rule not matching any event type. Useful if you have a rule
where not including an event type is intentional.
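A sketch of a rule using the new attribute (the rule and process names
are hypothetical):
```
- rule: Catchall For Some Proc
  desc: Intentionally has no evt.type clause in its condition
  condition: proc.name=some_proc
  output: some_proc activity (command=%proc.cmdline)
  priority: NOTICE
  # Suppress the "rule matches no event type" warning.
  warn_evttypes: false
```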
* Add test for preserving rule order
Test the fix for https://github.com/draios/falco/issues/354. A rules
file has a event-specific rule first and a catchall rule second. Without
the changes in https://github.com/draios/sysdig/pull/1103, the first
rule does not match the event.
* Add Rule for unexpected udp traffic
New rule Unexpected UDP Traffic checks for udp traffic not on a list of
expected ports. Currently blocked on
https://github.com/draios/falco/issues/308.
* Add sendto/recvfrom in inbound/outbound macros
Expand the inbound/outbound macros to handle sendto/recvfrom events, so
they can work on unconnected udp sockets. In order to avoid a flood of
events, they also depend on fd.name_changed to only consider
sendto/recvfrom when the connection tuple changes.
Also make the check for protocol a positive check for udp instead of not tcp,
to avoid a warning about event type filters potentially appearing before
a negative condition. This makes filtering rules by event type easier.
This depends on https://github.com/draios/sysdig/pull/1052.
* Add additional restrictions for inbound/outbound
- only look for fd.name_changed on unconnected sockets.
- skip connections where both ips are 0.0.0.0 or localhost network.
- only look for successful or non-blocking actions that are in progress
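Folding the macro changes and these restrictions together, the outbound
macro ends up shaped roughly like this sketch (the real macro in
falco_rules.yaml differs in detail):
```
- macro: outbound
  condition: >
    (((evt.type = connect and evt.dir=<) or
      (evt.type in (sendto,sendmsg) and evt.dir=< and
       fd.l4proto = udp and fd.name_changed=true)) and
     (fd.ip != "0.0.0.0" and fd.net != "127.0.0.0/8") and
     (evt.rawres >= 0 or evt.res = EINPROGRESS))
```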
* Add a combined inbound/outbound macro
Add a combined inbound/outbound macro so you don't have to do all the
other net/result related tests more than once.
* Fix evt generator for new in/outbound restrictions
The new rules skip localhost, so instead connect a udp socket to a
non-local port. That still triggers the inbound/outbound macros.
* Address FPs in regression tests
In some cases, an app may make a udp connection to an address with a
port of 0, or to an address with an application's port, before making a
tcp connection that actually sends/receives traffic. Allow these
connects.
Also, check both the server and client port and only consider the
traffic unexpected if neither port is in range.
* Properly support syscalls in filter conditions
Syscalls have their own numbers but they weren't really handled within
falco. This meant that there wasn't a way to handle filters with
evt.type=xxx clauses where xxx was a value that didn't have a
corresponding event entry (like "madvise", for example), or where a
syscall like open could also be done indirectly via syscall(__NR_open,
...).
First, add a new top-level global syscalls that maps from a string like
"madvise" to all the syscall nums for that id, just as we do for event
names/numbers.
In the compiler, when traversing the AST for evt.type=XXX or evt.type in
(XXX, ...) clauses, also try to match XXX against the global syscalls
table, and return any ids in a standalone table.
Also throw an error if an XXX doesn't match any event name or syscall name.
The syscall numbers are passed as an argument to sinsp_evttype_filter so
it can preindex the filters by syscall number.
This depends on https://github.com/draios/sysdig/pull/1100
* Add unit test for syscall support
This does a madvise, which doesn't have a ppm event type, both directly
and indirectly via syscall(__NR_madvise, ...), as well as an open
directly + indirectly. The corresponding rules file matches on madvise
and open.
The test ensures that both opens and both madvises are detected.
* Also check evt.abspath in "Modify binary dirs" rule
For unlinkat evt.arg[1] is not the path of the file/dir removed.
* Monitor renameat too in "Modify binary dirs" rule
To further reduce falco's cpu usage, start setting the inspector in
"autodrop" mode with a sampling ratio of 1. When autodrop mode is
enabled, a second class of events (those having EF_ALWAYS_DROP in the
syscall table, or those syscalls that do not have specific handling in
the syscall table) are also excluded.
* Add ability to read rules files from directories
When the argument to -r <path> or an entry in falco.yaml's rules_file
list is a directory, read all files in the directory and add them to the
rules file list. The files in the directory are sorted alphabetically
before being added to the list.
The installed falco adds directories /etc/falco/rules.available and
/etc/falco/rules.d and moves /etc/falco/application_rules.yaml to
/etc/falco/rules.available. /etc/falco/rules.d is empty, but the idea is
that admins can symlink to /etc/falco/rules.available for applications
they want to enable.
This will make it easier to add application-specific rulesets that
admins can opt-in to.
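In falco.yaml, a directory entry sits alongside file entries in the
rules_file list:
```
rules_file:
  - /etc/falco/falco_rules.yaml
  # All files in this directory are read, sorted alphabetically.
  - /etc/falco/rules.d
```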
* Unit test for reading rules from directory
Copy the rules/trace file from the test multiple_rules to a new test
rules_directory. The rules files are in rules/rules_dir/{000,001}*.yaml,
and the test uses a rules_file argument of rules_dir. Ensure that the
same events are detected.
* Reopen file/program outputs on SIGUSR1
When signaled with SIGUSR1, close and reopen file and program based
outputs. This is useful when combined with logrotate to rotate logs.
* Example logrotate config
Example logrotate config that relies on SIGUSR1 to rotate logs.
* Ensure options exist for all outputs
Options may not be provided for some outputs (like stdout), so create an
empty set of options in that case.
* Allow appending to skipped rules
If a rule has an append attribute but the original rule was skipped (due
to having lower priority than the configured priority), silently skip
the appending rule instead of returning an error.
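The scenario looks like this sketch, with the configured priority set to
ERROR (rule and process names are hypothetical):
```
# first rules file: skipped entirely, as WARNING is below the configured priority
- rule: Quiet Rule
  desc: Fires at WARNING
  condition: evt.type=open
  output: an open (command=%proc.cmdline)
  priority: WARNING

# second rules file: this append is now silently skipped instead of erroring
- rule: Quiet Rule
  condition: and proc.name=foo
  append: true
```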
* Unit test for appending to skipped rules
Unit test verifies fix for appending to skipped rules. One rules file
defines a rule with priority WARNING, a second rules file appends to
that rules file, and the configured priority is ERROR.
Ensures that falco reads the rules without errors.
* add common fluentd command, let docker modify
Add a common fluentd command, and let docker operations modify bin dir
* Add abrt-action-sav(...) as a rpm program
https://linux.die.net/man/1/abrt-action-save-package-data
* Add etc writers for more ms-on-linux svcs
Microsoft SCX and Azure Network Watcher Agent.
* Let nginx write its own config.
* Let chef-managed gitlab write gitlab config
* Let docker modify container filesystems outside of containers
The docker process can also be outside of a container when doing actions
like docker save, etc, so drop the docker requirement.
* Expand the set of haproxy configs.
Let the parent process also be haproxy_reload and add an additional
directory.
* Add an additional node-related file below /root
For node cli.
* Let adclient read sensitive files
Active Directory Client.
* Let mesos docker executor write shells
* Add additional privileged containers.
A few more openshift-related containers and datadog.
* Add a kafka admin command line as allowed shell
In this case, run by cassandra
* Add additional ignored root directories
gradle and crashlytics
* Add back mesos shell spawning binaries back
This list will be limited only to those binaries known to spawn
shells. Add mesos-slave/mesos-health-ch.
* Add addl trusted containers
Consul and mesos-slave.
* Add additional config writers for sosreport
Can also write files below /etc/pki/nssdb.
* Expand selinux config progs
Rename macro to selinux_writing_conf and add additional programs.
* Let rtvscand read sensitive files
Symantec av cli program.
* Let nginx-launch write its own certificates
Sometimes directly, sometimes by invoking openssl.
* Add addl haproxy config writers
Also allow the general prefix /etc/haproxy.
* Add additional root files.
Mongodb-related.
* Add additional rpm binaries
rpmdb_stat
* Let python running get-pip.py modify binary files
Used as a part of directly running get-pip.py.
* Let centrify scripts read sensitive files
Scripts start with /usr/share/centrifydc
* Let centrify progs write krb info
Specifically, adjoin and addns.
* Let ansible run below /root/.ansible
* Let ms oms-run progs manage users
The parent process is generally omsagent-<version> or scx-<version>.
* Combine & expand omiagent/omsagent macros
Combine the two macros into a single ms_oms_writing_conf and add both
direct and parent binaries.
* Let python scripts rltd to ms oms write binaries
Python scripts below /var/lib/waagent.
* Let google accounts daemon modify users
Parent process is google_accounts(_daemon).
* Let update-rc.d modify files below /etc
* Let dhcp binaries write indirectly to etc
This allows them to run programs like sed, cp, etc.
* Add istio as a trusted container.
* Add addl user management progs
Related to post-install steps for systemd/udev.
* Let azure-related scripts write below etc
Directory is /etc/azure, scripts are below /var/lib/waagent.
* Let cockpit write its config
http://www.cockpit-project.org/
* Add openshift's cassandra as a trusted container
* Let ipsec write config
Related to strongswan (https://strongswan.org/).
* Let consul-template write to addl /etc files
It may spawn intermediate shells and write below /etc/ssl.
* Add openvpn-entrypo(int) as an openvpn program
Also allow subdirectories below /etc/openvpn.
* Add additional files/directories below /root
* Add cockpit-session as a sensitive file reader
* Add puppet macro back
Still used in some people's user rules files.
* Rename name= to program=
Some users pointed out that name= was ambiguous, especially when the
event includes files being acted upon. Change to program=.
* Also let omiagent run progs that write oms config
It can run things like python scripts.
* Allow writes below /root/.android
Add an example puppet module for falco. This module configures the main
falco configuration file /etc/falco/falco.yaml, providing templates for
all configuration options.
It installs falco using debian/rpm packages and installs/manages it as a
systemd service.
* Add option to exclude output property in json fmt
New falco.yaml option json_include_output_property controls where the
formatted string "output" is included in the json object when json
output is enabled. By default the string is included.
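A minimal falco.yaml sketch of the two keys involved (names as described above):
```
json_output: true
# When false, drop the formatted "output" string and keep only the
# structured fields in each json notification.
json_include_output_property: false
```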
* Add tests for new json output option
New test sets json_include_output_property to false and then verifies
that the json output does *not* contain the surrounding text "Warning an
open...".
* Add the ability to validate multiple rules files
Allow multiple -V arguments just as we do with multiple -r arguments.
* With verbose output, print dangling macros/lists
Start tracking whether or not a given macro/list is actually used when
compiling the set of rules. Every macro/list has an attribute used,
which defaults to false and is set to true whenever it is referred to in
a macro/rule/list.
When run with -v, any macro/list that still has used=false results in a
warning message.
Also, it turns out the fix for
https://github.com/draios/falco/issues/197 wasn't being applied to
macros. Fix that.
* Let OMS agent for linux write config
Programs are omiagent/omsagent/PerformInventor/in_heartbeat_r* and files
are below /etc/opt/omi and /etc/opt/microsoft/omsagent.
* Handle really long classpath lines for cassandra
Some cassandra cmdlines are so long the classpath truncates the cmdline
before the actual entry class gets named. In those cases also look for
cassandra-specific config options.
* Let postgres binaries read sensitive files
Also add a couple of postgres cluster management programs.
* Add apt-add-reposit(ory) as a debian mgmt program
* Add addl info to debug writing sensitive files
Add parent/grandparent process info.
* Require root directory files to contain /
In some cases, a file below root might be detected but the file itself
has no directory component at all. This might be a bug with dropped
events. Make the test more strict by requiring that the file actually
contains a "/".
* Let updmap read sensitive files
Part of texlive (https://www.tug.org/texlive/)
* For selected rules, require proc name to exist
Some rules such as reading sensitive files and writing below etc have
many exceptions that depend on the process name. In very busy
environments, system call events might end up being dropped, which
causes the process name to be missing.
In these cases, we'll let the sensitive file read/write below etc to
occur. That's handled by a macro proc_name_exists, which ensures that
proc.name is not "<NA>" (the placeholder when it doesn't exist).
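A minimal sketch of that macro, using the "<NA>" placeholder described above:
```
# Matches only when the process name was successfully resolved.
- macro: proc_name_exists
  condition: (proc.name!="<NA>")
```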
* Let ucf write generally below /etc
ucf is a general purpose config copying program, so let it generally
write below /etc, as long as it in turn is run by the apt program
"frontend".
* Add new conf writers for couchdb/texmf/slapadd
Each has specific subdirectories below /etc
* Let sed write to addl temp files below /etc
Let sed write to additional temporary files (some directory + "sed")
below /etc. All generally related to package installation scripts.
* Let rabbitmq(ctl) spawn limited shells
Let rabbitmq spawn limited shells that perform read-only tasks like
reading processes/ifaces.
Let rabbitmqctl generally spawn shells.
* Let redis run startup/shutdown scripts
Let redis run specific startup/shutdown scripts that trigger at
start/stop. They generally reside below /etc/redis, but we just look for
the names redis-server.{pre,post}-up in the command line.
* Let erlexec spawn shells
https://github.com/saleyn/erlexec, "Execute and control OS processes
from Erlang/OTP."
* Handle updated trace files
As a part of these changes, we updated some of the positive trace files
to properly include a process name. These newer trace files have
additional opens, so update the expected event counts to match.
* Let yum-debug-dump write to rpm database
* Additional config writers
Symantec AV for Linux, sosreport, semodule (selinux), all with their
config files.
* Tidy up comments a bit.
* Try protecting node apps again
Try improving coverage of run shell untrusted by looking for shells
below node processes again. Want to see how many FPs this causes before
fully committing to it.
* Let node run directly by docker count as a service
Generally, we don't want to consider all uses of node as a service wrt
spawned shells. But we might be able to consider node run directly by
docker as a "service". So add that to protected_shell_spawner.
* Also add PM2 as a protected shell spawner
This should handle cases where PM2 manages node apps.
* Remove dangling macros/lists
Do a pass over the set of macros/lists, removing most of those that are
no longer referred to by any macro/list. The bulk of the macros/lists
were related to the rule Run Shell Untrusted, which was refactored to
only detect shells run below specific programs. With that change, many
of these exceptions were no longer needed.
* Add a "never_true" macro
Add a never_true macro that will never match any event. Useful if you
want to disable a rule/macro/etc.
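A minimal sketch, assuming event numbers are always positive so the comparison can never hold:
```
# evt.num starts at 1 for real events, so this condition never matches.
- macro: never_true
  condition: (evt.num=0)
```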
* Add missing case to write_below_etc
Add the macro veritas_writing_config to write_below_etc, which was
mistakenly not added before.
* Make tracking shells spawned by node optional
The change to generally consider node run directly in a container as a
protected shell spawner was too permissive, causing false
positives. However, there are some deployments that want to track shells
spawned by node as suspect. To address this, create a macro
possibly_node_in_container which defaults to never matching (via the
never_true macro). In a user rules file, you can override the macro to
remove the never_true clause, reverting to the old behavior.
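Roughly, the default definition and a user override might look like the following (the exact node condition here is illustrative, not the shipped one):
```
# Default: never_true makes this macro a no-op.
- macro: possibly_node_in_container
  condition: (never_true and (container and proc.pname=node))

# User rules file: drop never_true to treat node as a shell spawner again.
- macro: possibly_node_in_container
  condition: (container and proc.pname=node)
```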
* Add some dangling macros/lists back
Some macros/lists are still referred to by some widely used user rules
files, so add them back temporarily.
* Use kubernetes.default to reach k8s api server
Originally raised in #296, but since then we documented rbac and
without-rbac methods, so mirroring the change here.
* Mount docker socket/dev read-write
This matches the direct docker run commands, which also mount those
resources read-write.
* Add additional allowed files below root.
These are related to node.js apps.
* Let yum-config-mana(ger) write to rpm database.
* Let gugent write to (root) + GuestAgent.log
vRA7 Guest Agent writes to GuestAgent.log with a cwd of root.
* Let cron-start write to pam_env.conf
* Add additional root files and directories
All seen in legitimate cases.
* Let nginx run aws s3 cp
Possibly seen as a part of consul deployments and/or openresty.
* Add rule for disallowed ssh connections
New rule "Disallowed SSH Connection" detects ssh connection attempts
other than those allowed by the macro allowed_ssh_hosts. The default
version of the macro allows any ssh connection, so the rule never
triggers by default.
The macro could be overridden in a local/user rules file, though.
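For example, a local rules file might restrict connections to specific bastion hosts (the IPs here are hypothetical):
```
# Only ssh connections to these hosts are considered allowed.
- macro: allowed_ssh_hosts
  condition: (fd.sip=10.0.0.21 or fd.sip=10.0.0.22)
```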
* Detect contacting NodePort svcs in containers
New rule "Unexpected K8s NodePort Connection" detects attempts to
contact K8s NodePort services (i.e. ports >=30000) from within
containers.
It requires overriding a macro nodeport_containers which specifies a
set of containers that are allowed to use these port ranges. By default
every container is allowed.
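An override might whitelist specific images (image name hypothetical):
```
# Containers from this image may contact NodePort services (ports >= 30000).
- macro: nodeport_containers
  condition: (container.image startswith registry.example.com/ingress)
```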
* Remove remaining fbash references.
No longer relevant after all the installer rules were removed.
* Detect contacting EC2 metadata svc from containers
Add a rule that detects attempts to contact the ec2 metadata service
from containers. By default, the rule does not trigger unless a list of
explicitly allowed containers is provided.
* Detect contacting K8S API Server from container
New rule "Contact K8S API Server From Container" looks for connections
to the K8s API Server. The ip/port for the K8s API Server is in the
macro k8s_api_server and contains an ip/port that's not likely to occur
in practice, so the rule is effectively disabled by default.
* Additional rpm writers, root directories
salt-minion can also touch the rpm database, and some node packages
write below /root/.config/configstore.
* Add smbd as a protected shell spawner.
It's a server-like program.
* Also handle .ash_history
default shell for alpine linux
* Add exceptions for veritas
Let many veritas programs write below /etc/vx.
Let one veritas-related perl script read sensitive files.
* Allow postgres to run wal-e
https://github.com/wal-e/wal-e, archiving program for postgres.
* Let consul (agent) run addl scripts
Also let consul (agent, but the distinction is in the command line args)
run nc in addition to curl. Also rename the macro.
* Let postgres setuid to itself
Let postgres setuid to itself. Seen by archiving programs like wal-e.
* Also allow consul to run alert check scripts
"sh -c /bin/consul-alerts watch checks --alert-addr 0.0.0.0:9000 ..."
* Add additional privileged containers.
Openshift's logging support containers generally run privileged.
* Let addl progs write below /etc/lvm
Add lvcreate as a program that can write below /etc/lvm and rename the
macro to lvprogs_writing_lvm_archive.
* Let glide write below root
https://glide.sh/, package management for go.
* Let sosreport read sensitive files.
* Let scom server read sensitive files.
Microsoft System Center Operations Manager (SCOM).
* Let kube-router run privileged.
https://github.com/cloudnativelabs/kube-router
* Let needrestart_binaries spawn shells
Was included in prior version of shell rules, adding back.
* Let splunk spawn shells below /opt/splunkforwarder
* Add yum-cron as a rpm binary
* Add a different way to run denyhosts.
Oddly, the program name is denyhosts.py, but this was observed in actual
environments.
* Let nrpe setuid to nagios.
* Also let postgres run wal-e wrt shells
Previously added as an exception for db program spawned process, need to
add as an exception for run shell untrusted.
* Remove installer shell-related rules
They aren't used that often and removing them cleans up space for new
rules we want to add soon.
* Let kubelet running loopback spawn shells
Seen by @JPLachance, thanks for the heads up!
* Let docker's "exe" broadly write to files.
As a part of some docker commands like "docker save", etc, the program
exe can write from files on the host filesystem /var/lib/docker/... to a
variety of files within the container.
Allow this via a macro exe_running_docker_save that checks the
commandline as well as the parent and use it as an exclusion for the
write below binary dir/root/etc rules.
* Let chef perform more tasks
- Let chef-client generally read sensitive files and write below /etc.
- Let python running a chef script yum-dump.py write the rpm database.
Rename user_known_container_shell_spawn_binaries to
user_known_shell_spawn_binaries (the container distinction doesn't exist
any longer) and add it as an exception for run shell untrusted.
That way others can easily exclude shell spawning programs in a second
rules file.
* Refactor shell rules to avoid FPs.
Refactoring the shell related rules to avoid FPs. Instead of considering
all shells suspicious and trying to carve out exceptions for the
legitimate uses of shells, only consider shells spawned below certain
processes suspicious.
The set of processes is a collection of commonly used web servers,
databases, nosql document stores, mail programs, message queues, process
monitors, application servers, etc.
Also, runsv is considered a top-level process that denotes a
service. This allows a way for more flexible servers like ad-hoc nodejs
express apps, etc to denote themselves as a full server process.
* Update event generator to reflect new shell rules
spawn_shell is now a silent action. its replacement is
spawn_shell_under_httpd, which respawns itself as httpd and then runs a
shell.
db_program_spawn_binaries now runs ls instead of a shell so it only
matches db_program_spawn_process.
* Comment out old shell related rules
* Modify nodejs example to work w/ new shell rules
Start the express server using runit's runsv, which allows falco to
consider any shells run by it as suspicious.
* Use the updated argument for mkdir
In https://github.com/draios/sysdig/pull/757 the path argument for mkdir
moved to the second argument. This only became visible in the unit tests
once the trace files were updated to reflect the other shell rule
changes--the trace files had the old format.
* Update unit tests for shell rules changes
Shell in container doesn't exist any longer and its functionality has
been subsumed by run shell untrusted.
* Allow git binaries to run shells
In some cases, these are run below a service runsv so we still need
exceptions for them.
* Let consul agent spawn curl for health checks
* Don't protect tomcat
There's enough evidence of people spawning general commands that we
can't protect it.
* Reorder exceptions, add rabbitmq exception
Move the nginx exception to the main rule instead of the
protected_shell_spawner macro. Also add erl_child_setup (related to
rabbitmq) as an allowed shell spawner.
* Add additional spawn binaries
All of these are either below nginx, httpd, or runsv but should still
be allowed to spawn shells.
* Exclude shells when ancestor is a pkg mgmt binary
Skip shells when any process ancestor (parent, gparent, etc) is a
package management binary. This includes the program needrestart. This
is a deep search but should prevent a lot of other more detailed
exceptions trying to find the specific scripts run as a part of
installations.
* Skip shells related to serf
Serf is a service discovery tool and can in some cases be spawned by
apache/nginx. Also allow shells that are just checking the status of
pids via kill -0.
* Add several exclusions back
Add several exclusions back from the shell in container rule. These are
all allowed shell spawns that happen to be below
nginx/fluentd/apache/etc.
* Remove commented-out rules
This saves space as well as cleanup. I haven't yet removed the
macros/lists used by these rules and not used anywhere else. I'll do
that cleanup in a separate step.
* Also exclude based on command lines
Add back the exclusions based on command lines, using the existing set
of command lines.
* Add addl exclusions for shells
Of note is runsv, which means it can directly run shells (the ./run and
./finish scripts), but the things it runs can not.
* Don't trigger on shells spawning shells
We'll detect the first shell and not any other shells it spawns.
* Allow "runc:" parents to count as a cont entrypnt
In some cases, the initial process for a container can have a parent
"runc:[0:PARENT]", so also allow those cases to count as a container
entrypoint.
* Use container_entrypoint macro
Use the container_entrypoint macro to denote entering a container and
also allow exe to be one of the processes that's the parent of an
entrypoint.
* Let supervisor write more generally below /etc
* Let perl+plesk scripts run shells/write below etc
* Allow spaces after some cmdlines
* Add additional shell spawner.
* Add addl package mgmt binaries.
* Add addl cases for java + jenkins
Addl jar files to consider.
* Add addl jenkins-related cmdlines
Mostly related to node scripts run by jenkins
* Let python running some mesos tasks spawn shells
In this case marathon run by python
* Let ucf write below etc
Only below /etc/gconf for now.
* Let dpkg-reconfigur indirectly write below /etc
It may run programs that modify files below /etc
* Add files/dirs/prefixes for writes below root
Build a set of acceptable files/dirs/prefixes for writes below
/root. Mostly triggered by apps that run directly as root.
* Add addl shell spawn binaries.
* Also let java + sbt spawn shells in containers
Not seen only at host level
* Make sure the file below etc is /etc/
Make sure the file below /etc is really below the directory etc aka
/etc/xxx. Otherwise it would match a file /etcfoo.
* Let rancher healthcheck spawn shells
The name healthcheck is relatively innocuous so also look at the parent
process.
* Add addl shell container shell spawn binaries
* Add addl x2go binaries
* Let rabbitmq write its config files
* Let rook write below /etc
toolbox.sh is fairly generic so add a condition based on the image name.
* Let consul-template spawn shells
* Add rook/toolbox as a trusted container
Their github pages recommend running privileged.
* Add addl mail binary that can setuid
* Let plesk autoinstaller spawn shells
The name autoinstaller is fairly generic so also look at the parent.
* Let php handlers write its config
* Let addl pkg-* binary write to /etc indirectly
* Add additional shell spawning binaries.
* Add ability to specify user trusted containers
New macro user_trusted_containers allows a user-provided set of
containers that are trusted and are allowed to run privileged.
* If npm runs node, let node spawn shells
* Let python run airflow via a shell.
* Add addl passenger commandlines (for shells)
* Add addl ways datadog can be run
* Let find run shells in containers.
* Add rpmq as a rpm binary
* Let httpd write below /etc/httpd/
* Let awstats/sa-update spawn shells
* Add container entrypoint as a shell
Some images have an extra shell level for image entrypoints.
* Add an additional jenkins commandline
* Let mysql write its config
* Let openvpn write its config
* Add addl root dirs/files
Also move /root/.java to be a general prefix.
* Let mysql_upgrade/opkg-cl spawn shells
* Allow login to perform dns lookups
When run with -h <host> to specify a remote host, some versions of login
will do a dns lookup to try to resolve the host.
* Let consul-template write haproxy config.
* Also let mysql indirectly edit its config
It might spawn a program to edit the config in addition to directly.
* Allow certain sed temp files below /etc/
* Allow debian binaries to indirectly write to /etc
They may spawn programs like sed, touch, etc to change files below /etc.
* Add additional root file
* Let rancher healthcheck be run more indirectly
The grandparent as well as parent of healthcheck can be tini.
* Add more cases for haproxy writing config
Allow more files as well as more scripts to update the config.
* Let vmtoolsd spawn shells on the host
* Add an additional innocuous entrypoint shell
* Let peer-finder (mongodb) spawn shells
* Split application rules to separate file.
Move the contents of application rules, which have never been enabled by
default, to a separate file. It's only installed in the main falco packages.
* Add more build-related command lines
* Let perl running openresty spawn shells
* Let countly write nginx config
* Let confd spawn shells
* Also let aws spawn shells in containers.
The terminal shell in container rule has always been less permissive
than the other shell rules, mostly because we expect terminal-attached
shells to be less common. However, they might run innocuous commands,
especially from scripting languages like python. So allow the innocuous
commands to run.
* Let luajit spawn shells.
* Start support for db mgmt programs
Add support for db management programs that tend to spawn
shells. Starting with two lists
mysql_mgmt_binaries/postgres_mgmt_binaries which are combined into
db_mgmt_binaries. db_mgmt_binaries is added to both shell spawning rules
and the individual programs are removed.
* Let apache beam spawn shells
The program is "python pipeline.py" but it appears to be related to
https://github.com/apache/beam/blob/master/sdks/python/apache_beam/pipeline.py.
* Better support for dovecot
Allow dovecot to setuid by adding to mail_binaries.
Allow the program auth, when run by dovecot, to spawn shells.
* Better support for plesk
Create a list plesk_binaries and allow them to run shells.
Also let them write to files below /etc/sw/keys.
* Let strongswan spawn shells.
Specifically the program starter. Using the full command line to be more
specific.
* Let proftpd modify files below /etc.
* Let chef binaries write below /etc
* Let mandb read sensitive files
* Let specific phusion passenger binaries run shells
The program is "my_init", which is fairly generic, so capture it by the
full command line.
* Make git-remote-http more permissive.
* Let networkmanager modify /etc/resolv.conf
specifically nm-dispatcher
* Let hostid open network connections
It might perform dns lookups as a part of resolving ip addresses.
* Let uwsgi spawn shells
* Add docker-runc-cur as a docker binary.
truncated version of docker-runc-current.
* Add rule for allowed containers
New rule Launch Disallowed Container triggers when a container is
started that does not match the macro allowed_containers. In the main
falco rules file, this macro never matches, so it never
triggers. However, in a second rules file the macro allowed_containers
could be filled in with the specific images that match.
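A second rules file could then fill the macro in, along these lines (registry and image name hypothetical):
```
# Only containers from this registry may be launched.
- macro: allowed_containers
  condition: (container.image startswith registry.example.com/prod/)
```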
* Also let foreman spawn shells
Used by Red Hat Satellite.
* Let confluence run shells.
Appears as java program, so look for the classpath.
* Make allowed_containers macro more foolproof.
In some cases, the container image might not be known/is NULL, so the
comparison against "dummy-not-allowed-container-image" doesn't work.
Replace this with proc.vpid=1, which is in the main rule Launch
Disallowed Container. Ensures it will only trigger when the
allowed_containers macro is overridden.
* Let tomcat spawn shells.
It's java so you need to look at the classpath.
* Let pip install software.
* Add another yarn command line.
* Let add-shell write to /etc/shells.tmp
* Let more plesk binaries setuid.
* Add imap-login as a mail binary.
* Fix plesk writing keys macro
Should be testing proc.name, not proc.cmdline.
* Let screen read sensitive files.
* Add more shell spawners.
S99qualys-cloud is the init script, cfn-signal is cloudformation.
* Exclude nologin from user mgmt programs.
* Let programs run by locales.postins write to /etc
It can run scripts like sed to modify files before writing the final
file.
* Let install4j java progs spawn shells.
Again, searching by classpath.
* Let some shell cmds be spawned outside containers
We had a list known_container_shell_spawn_cmdlines that contained
innocuous commandlines, but it only worked for containers.
Split this list into container-specific and general commandlines, and
add an exception for the general commandlines for the Run Shell
Untrusted rule.
* Add addl ruby-based passenger spawners
Add a different way to identify ruby run by phusion passenger.
* Allow bundle ruby cmds to be identified by name
In some cases, bundle runs ruby scripts by direct script
name (foo.rb). Also allow that to spawn shells.
* Let nginx spawn shells.
* Skip setuid rules for containers.
For now, entirely skip the setuid rule for containers. Will add back
once I can find a way to check for unknown users.
* Let PassengerWatchd run shells
* Add additional foreman shells
Let the direct parent also be scl when the ancestor is tfm-rake,tfm-ruby.
* Add additional innocuous command lines.
* Also let cron spawn shells in containers
Seen when using things like phusion passenger.
* Also let run-parts run cmp/cp for sensitive files
Might be a case of a missing process but might also be legitimate.
* Let erlexec spawn shells.
* Add additional innocuous shell cmdlines.
* Add suexec as a userexec binary.
* Add imap/mailmng-core as mail binaries.
Also split list across multiple lines.
* Let perl spawn shells when run by cpanm
* Let apache_control_ spawn shells
* Let ics_start/stop running java spawn shells
java is the direct parent, ics_start/stop are ancestors.
* Let PassengerAgent setuid.
It setuids to nobody.
* Let multilog write below /etc if run by supervise
* Let bwrap setuid
A container setup utility.
* Detect writes below /, /root
New rule Write below root detects writes either directly below / or
anywhere below /root.
* Don't let shells directly open network connections
In addition to system binaries, don't let shells directly open network
connections. Bash has /dev/{tcp,udp} which allows direct connections.
* Add additional sensitive mounts.
Add additional sensitive mounts, including the docker socket, /,
anywhere below /root, or anywhere below /etc.
* Let pki-realm write below /etc/pki/realms
Appears to be an ansible script.
* Let sgdisk write below dev
* Let debconf-show read sensitive files.
* Additional case for build-related scripts.
* Add additional mail binaries.
* Let ruby running discourse spawn shells.
* Let beam.smp and paster run shells
* Temporarily undo shells opening net conns update
At some customers, at container create time events are being lost, and
for that reason programs spawned by the shell that perform network
connections are being misattributed to the shell.
* Make the actual sensitive files a list.
Make the actual sensitive files used by the sensitive files macro a list
so it can be easily extended.
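Sketched shape of the list (entries drawn from files the macro already covered), so a later rules file can extend it rather than redefining the whole macro:
```
- list: sensitive_file_names
  items: [/etc/shadow, /etc/sudoers, /etc/pam.conf]
```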
* Print mounts in Launch Sensitive Mount Container
Add the full list of mounts to the output of Launch Sensitive Mount
Container, so it's easy to see which sensitive mount was used.
* Add container.image to container-related rules.
Helps in diagnosis.
* Add sw-engine-kv as a plesk binary.
* Allow sa-update to read sensitive files
SpamAssassin updater.
* Add additional shell spawners.
* Allow sumologic secureFiles to run user mgmt progs
See https://help.sumologic.com/Send-Data/Installed-Collectors/05Reference-Information-for-Collector-Installation/08Enhanced-File-System-Security-for-Installed-Collectors.
* Only consider full mounts of /etc as sensitive
A legitimate case is k8s mounting /etc/kubernetes/ssl, which was
matching /etc*. The glob matcher we have isn't a full regex so you can't
exclude strings, only characters.
* Let htpasswd write below /etc
Part of nginx
* Let pam-auth-update read sensitive files
* Let hawkular-metric spawn shells.
* Generalize jenkins scripts spawning shells
Generalize jenkins_script_sh to jenkins_scripts and add additional
cases.
* Let php run by assemble spawn shells
Better than globally letting php spawn shells.
* Add additional setuid binaries.
* Add additional package mgmt prog
rhsmcertd-worke(r), red hat subscription manager
* Add additional yarn cmdlines.
* Let dmeventd write below etc.
device mapper event daemon.
* Let rhsmcertd-worke(r) spawn shells.
* Let node spawn bitnami-related shells.
* Add user allowed sensitive mounts
New macro user_sensitive_mount_containers allows a second rules file to
specify containers/images that can perform sensitive mounts.
* Add start-stop-daemon as setuid program
It has -g/-u args to change gid/uid.
Also move some other single setuid programs to the list
known_setuid_binaries.
* Add additional shell spawners/cmdlines.
* Let python running localstack spawn shells.
* Add additional chef binaries.
* Let fluentd spawn shells.
* Don't consider unix_chkpwd to be a user mgmt prog
It only checks passwords.
* Get setuid for NULL user in container working
Reorganize the unknown_user_in_container macro to get it working again
in containers. Previously, it was being skipped entirely due to a
problem with handling of unknown users, which get returned as NULL.
The new macro is known_user_in_container, which tests the user.name
against "N/A". It happens that if user.name is NULL, the comparison
fails, so it has the same effect as if the string "N/A" were being
returned. Any valid user name won't match the string "N/A", so known
users will cause the macro to return true.
The setuid rule needs an additional check for not container, so add that.
* Add exceptions for Write below root
Add lists of files/directories that are acceptable to write.
@ret2libc reported that osx builds were failing with the current version
of libcurl. Update to the latest version and add the necessary configure
arguments.
Also use https links for all dependencies downloads.
The rules CMakeLists.txt, which controls the installation of the falco
rules files, was in the engine CMakeLists.txt, which meant that programs
that included the engine would also include rules files.
This may not always be desired, so move the rules CMakeLists.txt to the
main falco CMakeLists.txt instead.
Work around https://github.com/draios/sysdig/issues/954, which relates
to not always knowing the proper user name in containers, by not running
the rule when in a container and the user name is "<NA>". This won't
address cases where the uid from inside the container maps to a user
name outside the container that is different than the user inside the
container, but it will help a bit.
Add crlutil as a program that can modify below etc.
Let centrify programs modify below etc.
Add more info for writes below etc to track etc writers through scripts.
Increase the level of debugging for shells.
Add more run_by_xxx macros for h2o/phusion passenger. Handles cases
where the ancestor has a name, but the direct parent is a general
scripting language like ruby/perl/etc.
New macro run_by_chef is similar to run_by_qualys in that it looks in
various places in the process hierarchy. Use that macro to allow writes
below etc. Will probably add in more places soon.
Qualys seems to run a variety of shell subprocesses, at various
levels. Add a macro run_by_qualys that checks at a few levels without
the cost of a full proc.aname, which traverses the full parent
hierarchy.
Add a macro user_shell_container_exclusions that allows a second rules
file to easily extend the shell in container rule without overriding
the entire rule.
Also add an exclusion node_running_edi_dynamodb which can be used for
that macro.
Add more specific controls of files below /etc, allowing specific
combinations of programs and files:
- start-fluentd can write to /etc/fluent/fluent.conf
- locales.postins can write to /etc/locale.gen
Use pmatch, which compares a file against a set of prefix paths, instead
of fd.directory. This allows the directories in safe_etc_dirs to be a
prefix of a file instead of just the directory containing a file.
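A rough sketch of the pattern, with hypothetical list entries:
```
- list: safe_etc_dirs
  items: [/etc/cassandra, /etc/ssl/certs]

# pmatch treats each entry as a path prefix, so this matches files anywhere
# below those directories, not only files directly inside them.
- macro: fd_in_safe_etc_dirs
  condition: (fd.name pmatch (safe_etc_dirs))
```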
Move entrypoint detection to its own macro. Also consider something the
entrypoint if its parent is runc:[0:PARENT]. There's a race where
runc:[0:PARENT] exits in parallel with the root program being execd, so
the parent might not exist or might have this name.
Combine parent_php_running_builds and parent_ruby_running_gcc into a
single parent_scripting_running_builds which handles the general case of
some script running some make/compilation related program. Also add some
build-related command line prefixes.
Allow supervisor-related programs to spawn shells and access sensitive
files.
Allow sendmail config binaries to write below etc directly (their
children already could).
Add some directories related to phusion (system-as-a-container).
For a few rules add parent programs in the output so it's easier to
diagnose the context for an event.
Let varnishd spawn shells.
- Move qualys-cloud-ag to the monitoring_binaries list
- Add a new list sendmail_config_binaries containing programs that can
modify files.
- Make parent_php_running_git a bit more generic for
parent_php_running_builds and add some additional sub-commands.
- Allow several combinations of scripting programs (ruby, python, etc.)
to run other build-ish commands.
- Let mysql_install_d(b) spawn shells and access sensitive files.
- Let qualys-cloud-ag(ent) spawn shells
- Add a few additional innocuous commandlines
- Let postfix setuid to itself
A new (empty) list user_known_container_shell_spawn_binaries allows
additional files to add additional programs that are allowed to spawn
shells in containers.
Add additional shell spawning command lines.
Allow package management binaries in containers--lots of people seem to
do it. Also allow pycompile/py3compile.
I need to refactor the shell spawners to more clearly isolate shell
spawners that we don't want to occur in a container from ones that can
run both inside and outside of a container.
A new falco.yaml option buffered_outputs, also controlled by
-U/--unbuffered, sets unbuffered outputs for the output methods. This is
especially useful with keep_alive files/programs where you want the
output right away.
Also add cleanup methods for the output channels that ensure output to
the file/program is flushed and closed.
Add the ability to keep file/program outputs open (i.e. writing to the
same open file/program for multiple notifications). A new option to the
file/program output "keep_alive", if true, keeps the file/program pipe
open across events.
This makes the need for unbuffered output aka
https://github.com/draios/falco/issues/211 more pressing. Will add that next.
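Put together, a falco.yaml sketch of both options (filename hypothetical):
```
# Flush each notification immediately instead of buffering.
buffered_outputs: false

file_output:
  enabled: true
  # Keep the file open across events instead of reopening per notification.
  keep_alive: true
  filename: /var/log/falco_events.txt
```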
When json output is set, add a sub-object called output_fields to the
json output that contains the individual templated fields from the
output string. Makes it easier to parse those fields.
This fixes https://github.com/draios/falco/issues/261.
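Illustrative shape of the resulting json (rule name and field values hypothetical):
```
{
  "output": "Warning Sensitive file opened for reading (user=root file=/etc/shadow)",
  "priority": "Warning",
  "rule": "Read sensitive file untrusted",
  "time": "2017-02-27T10:09:58.000000000Z",
  "output_fields": {
    "user.name": "root",
    "fd.name": "/etc/shadow"
  }
}
```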
These changes allow for a local rules file that will be preserved across
upgrades and allows the main rules file to be overwritten across upgrades.
- Move all config/rules files below /etc/falco/
- Add a "local rules" file /etc/falco/falco_rules.local.yaml. The intent
is that it contains modifications/deltas to the main rules file
/etc/falco/falco_rules.yaml. The main falco_rules.yaml should be
treated as immutable.
- All config files are flagged so they are not overwritten on upgrade.
- Change the handling of the config item "rules_file" in falco.yaml to
allow a list of files. By default, this list contains:
[/etc/falco/falco_rules.yaml, /etc/falco/falco_rules.local.yaml].
Also change rpm/debian packaging to ensure that the above files are
preserved across upgrades:
- Use relative paths for share/bin dirs. This ensures that when packaged
as rpms they won't be flagged as config files.
- Add CMAKE_INSTALL_PREFIX to FALCO_ENGINE_LUA_DIR now that it's relative.
- In debian packaging, flag
/etc/falco/{falco.yaml,falco_rules.yaml,falco_rules.local.yaml} as
conffiles. That way they are preserved across upgrades if modified.
- In rpm packaging when using cmake, any files installed with an
absolute path are automatically flagged as %config. The only files
directly installed are now the config files, so that addresses the problem.
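The resulting default in falco.yaml, as described above:
```
rules_file:
  - /etc/falco/falco_rules.yaml
  - /etc/falco/falco_rules.local.yaml
```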
Add CMAKE_INSTALL_PREFIX to lua dir.
Clean up the handling of priority levels within rules. It used to be a
mix of strings handled in various places. Now, in falco_common.h there's
a consistent type for priority-as-number as well as a list of
priority-as-string values. Priorities are passed around as numbers
instead of strings. It's still permissive about capitalization.
Also add the ability to load rules by severity. New falco
config option "priority=<val>"/-o priority=<val> specifies the minimum
priority level of rules that will be loaded.
Add unit tests for same. The test suppresses INFO notifications for a
rule/trace file combination that would otherwise generate them.
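As a sketch, the config-file form of the option (the same value can also be passed as -o priority=error on the command line):
```
# Skip loading any rule whose priority is below this level.
priority: error
```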
Add the ability to append to rules/macros, like we already do with
lists. For rules/macros, if the object has an append: true key, the
condition value is appended to the condition of an existing rule/macro
with the same name.
Like lists, it's an error to specify append: true without there being an
existing rule/macro.
Also add tests that test the same kind of things we did for lists:
- That append: true really does append
- That append: false overwrites the rule/macro
- That it's an error to append without a prior rule/macro existing.
List nodes can now have an 'append' key. If present and true, any values
in this list will be appended to the end of any existing list with the
same name.
It is an error to have a list with 'append' true that has a name that is
not an existing list.
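A small sketch of both append mechanisms, with hypothetical list/macro names:
```
- list: my_shells
  items: [bash, sh]

# Appends to the existing list instead of replacing it.
- list: my_shells
  append: true
  items: [zsh]

- macro: spawned_shell
  condition: (proc.name in (my_shells))

# The condition text is concatenated onto the existing macro's condition.
- macro: spawned_shell
  append: true
  condition: and container.id != host
```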
Add new unit tests to check that list substitution is working as
expected, with test cases for the list substitution occurring at the
beginning, middle, and end of a condition.
Also add tests that verify that overrides on list/macro/rule names
always occur in order.
When performing list substitution, only replace a list name when it is
surrounded by whitespace or expected punctuation characters. Lua
patterns don't have a notion of this-or-that patterns e.g. (^|abc), so
we have 3 versions of the substitution depending on whether the list name
occurs in the beginning, middle, or end of a string.
This fixes #197.
Also validate macros when they are parsed. Macros are also validated as
a part of rules being parsed, but it's possible to have an individual
rules file containing only macros, or a macro not explicitly tied to any
rule. In this case, it's useful to be able to check the macro to see if
it contains dangling macro references.
When parsing condition expressions, if the type of an ast node is
String (aka quoted string), don't trim whitespace from the value. This
ensures that conditions that want to match exact strings e.g. command
lines with leading/trailing spaces will work properly.
This fixes#253.
* Updates from beta customers.
- add anacron as a cron program
* Reorganize package management binaries
Split package_management_binaries into two separate lists rpm_binaries
and deb_binaries. unattended-upgr is common to both worlds so it's still
in package_management_binaries.
Also change Write below rpm database to use rpm_binaries instead of its
own list.
Also add 75-system-updat (truncated) as a shell spawner.
* Add rules for jenkins
Add rules that allow jenkins to spawn shells, both in containers and
directly on the host.
Also handle jenkins slaves that run /tmp/slave.jar.
* Allow npm to run shells.
Not yet allowing node to run shells itself, although we want to add
something to reduce node-related FPs.
* Allow urlgrabber/git-remote to access /etc
urlgrabber and git-remote both try to access the RHEL nss database,
containing shared certificates. I may change this in a more general way
by changing open_read/open_write to only look for successful opens.
* Only look for successful open_read/open_writes
Change the macros open_read/open_write to only trigger on successful
opens (when fd.num > 0). This is a pretty big change to behavior, but
is more intuitive.
This required a small update to the open counts for a couple of unit
tests, but otherwise they still all passed with this change.
* Allow rename_device to write below /dev
Part of udev.
* Allow cloud-init to spawn shells.
Part of https://cloud-init.io/
* Allow python to run a shell that runs sdchecks
sdchecks is a part of the sysdig monitor agent.
* Allow dev creation binaries to write below etc.
Specifically this includes blkid and /etc/blkid/blkid.tab.
* Allow git binaries to spawn shells.
They were already allowed to run shells in a container.
* Add /dev/kmsg as an allowed /dev file
Allows userspace programs to write to kernel log.
* Allow other make programs to spawn shells.
Also allow gmake/cmake to spawn shells and put them in their own list
make_binaries.
* Add better mesos support.
Mesos slaves appear to be in a container due to their cgroup and can run
programs mesos-health-check/mesos-docker-exec to monitor the containers
on the slave, so allow them to run shells.
Add mesos-agent, mesos-logrotate, mesos-fetch as shell spawners both in
and out of containers.
Add gen_resolvconf. (short for gen_resolvconf.py) as a program that can
write to /etc.
Add toybox (used by mesos, part of http://landley.net/toybox/about.html)
as a shell spawner.
* systemd can listen on network ports.
Systemd can listen on network ports to launch daemons on demand, so
allow it to perform network activity.
* Let docker binaries setuid.
Let docker binaries setuid and add docker-entrypoi (truncation
intentional) to the set of docker binaries.
* Change cis-related rules to be less noisy
Change the two cis-related falco rules "File Open by Privileged
Container" and "Sensitive Mount by Container" to be less noisy. We found
in practice that tracking every open still results in too many falco
notifications.
For now, change the rules to only track the initial process start in the
container by looking for vpid=1. This should result in only triggering
when a privileged/sensitive mount container is started. This is slightly
less coverage but is far less noisy.
* Add quay.io/sysdig as trusted containers
These are used for sysdig cloud onpremise deployments.
* Add gitlab-runner-b(uild) as a gitlab binary.
Add gitlab-runner-b (truncated gitlab-runner-build) as a gitlab binary.
* Add ceph as a shell spawner.
Also allow ceph to spawn shells in a container.
* Allow some shells by command line.
For some mesos containers, where the container doesn't have an image and
is just a tarball in a cgroup/namespace, we don't have any image to work
with. In those cases, allow specific command lines.
* Allow user 'nobody' to setuid.
Allow the user nobody to setuid. This depends on the user nobody being
set up in the first place to have no access, but that should be an ok
assumption.
* Additional allowed shell commandlines
* Add additional shells.
* Allow multiple users to become themselves.
Add rule somebody_becoming_themself that handles cases of nobody and
www-data trying to setuid to themselves. The sysdig filter language
doesn't support template/variable values to allow "user.name=X and
evt.arg.uid=X for a given X", so we have to enumerate the users.
* More known spawn command lines
* Let make binaries be run in containers.
Some CI/CD pipelines build in containers.
* Add additional shell spawning command lines
* Add additional apt program apt-listchanges.
* Add gitlab-ce as shell spawning container.
* Allow PM2 to spawn shells in containers.
Was already in the general list, seen in some customers, so adding to
the in containers list.
* Clean up pass to fix long lines.
Take a pass through the rules making sure each line is < 120 characters.
* Change tests for privileged container rules.
Change unit tests to reflect the new privileged/sensitive mount
container rules that only detect container launch.
The default falco ruleset now has a wider variety of priorities, so
adjust the automated tests to match:
- Instead of creating a generic test yaml entry for every trace file in
traces-{positive,negative,info} with assumptions about detect levels,
add a new falco_traces.yaml.in multiplex file that has specific
information about the detect priorities and rule detect counts for each
trace file.
- If a given trace file doesn't have a corresponding entry in
falco_traces.yaml.in, a generic entry is added with a simple
detect: (True|False) value and level. That way you can get specific
detect levels/counts for existing trace files, but if you forget to
add a trace to falco_traces.yaml.in, you'll still get some coverage.
- falco_tests.yaml.in isn't added to any longer, so rename it to
falco_tests.yaml.
- Avocado is now run twice--once on each yaml file. The final test
passes if both avocado runs pass.
Review the priorities used by each rule and try to use a consistent set
that uses more of the possible priorities. The general guidelines I used
were:
- If a rule is related to a write of state (i.e. filesystem, etc.),
its priority is ERROR.
- If a rule is related to an unauthorized read of state (i.e. reading
sensitive files, etc.), its priority is WARNING.
- If a rule is related to unexpected behavior (spawning an unexpected
shell in a container, opening an unexpected network connection, etc.), its priority
is NOTICE.
- If a rule is related to behaving against good practices (unexpected
privileged containers, containers with sensitive mounts, running
interactive commands as root), its priority is INFO.
One exception is that the most FP-prone rule (Run shell untrusted) has a
priority of DEBUG.
Allow the sysdig cloud agent to call setns to collect java process
metrics.
We've also seen cases where some of the intermediate processes created
below runc appear to call setns. It appears that this only should happen
if some events (like the execve that spawns the intermediate processes)
are lost, but just to be safe allow processes starting with "runc:" to
call setns.
Add a new falco rule "Terminal shell in container" that looks for shells
spawned in a container with an attached terminal. This is similar to the
existing "Run shell in container" rule, but doesn't have as many
exceptions as we expect this to be even less rare.
Add automated tests for running falco from a package and container. As a
result, this will also test building the kernel module as well as
running falco-probe-loader as a backup.
In travis.yml, switch to the docker-enabled vm and install dkms. This
changed the environment slightly, so change how avocado's python
dependencies are installed. After building falco, copy the .deb package
to docker/local and build a local docker image based on that package.
Add the following new tests:
- docker_package: this uses "docker run" to run the image created in
travis.yml. This includes using dkms to build the kernel module and
load it. In addition, the conf directory is mounted to /host/conf, the
rules directory is mounted to /host/rules, and the traces directory is
mounted to /host/traces.
- docker_package_local_driver: this disables dkms via a volume mount
that maps /dev/null to /usr/sbin/dkms and copies the kernel module by
hand into the container to /root/.sysdig/falco-probe-....ko. As a
result, falco-probe-loader will use the local kernel module instead
of building one itself.
- debian_package: this installs the .deb package and runs the installed
version of falco.
Ideally, there'd also be a test for downloading the driver, but since
the driver depends on the kernel as well as the falco version string,
you can't put a single driver on download.draios.com that will work
long-term.
These tests depend on the following new test attributes:
- package: if present, this points to the docker image/debian package
to install.
- addl_docker_run_args: if present, will be added to the docker run
command.
- copy_local_driver: if present, will copy the built kernel module to
~/.sysdig. ~/.sysdig/* is always cleared out before each test.
- run_duration: maps to falco's -M <secs> flag
- trace_file is now optional.
Also add some misc general test changes:
- Clean up our use of process.run. By default it will fail a test if the
run program returns non-zero, so we don't have to grab the exit
status. In addition, get rid of sudo in the command lines and use the
sudo attribute instead.
- Fix some tests that were writing to files below /tmp/falco_outputs
by creating the directory first. Useful when running avocado directly.
If a daemonset specifies a command, this overrides the entrypoint. In
falco's case, the entrypoint handles the details of loading the kernel
driver, so specifying a command accidentally prevents the driver from
being loaded.
This happens to work if you had a previously loaded sysdig_probe driver
lying around.
The fix is to specify args instead. In this case, the driver will be
loaded via the entrypoint.
This fixes https://github.com/draios/falco/issues/225.
Start packaging (and building when necessary) a falco-specific kernel
module in falco releases. Previously, falco would depend on sysdig and
use its kernel module instead.
The kernel module was already templated to some degree in various
places, so we just had to change the templated name from
sysdig/sysdig-probe to falco/falco-probe.
In containers, run falco-probe-loader instead of
sysdig-probe-loader. This is actually a script in the sysdig repository
which is modified in https://github.com/draios/sysdig/pull/789, and uses
the filename to indicate what kernel module to build and/or load.
For the falco package itself, don't depend on sysdig any longer but instead
depend on dkms and its dependencies, using sysdig as a guide on the set
of required packages.
Additionally, the package pre-install/post-install scripts now run
falco-probe-loader.
Finally, add a --version argument to falco so it can pass the desired
version string to falco-probe-loader.
Add example k8s yaml files that allow for running falco as a k8s
daemonset and the event generator as a deployment, running on 1 node.
Falco is configured to send its output to a slack webhook corresponding
to the #demo-falco-alerts channel on sysdig's public slack channel.
The output is k8s-friendly by using -pk, -k (k8s api server), and
-K (credentials to communicate with api server).
Use the sinsp_evt_formatter_cache added in
https://github.com/draios/sysdig/pull/771 instead of a local cache. This
simplifies the lua side quite a bit, as it only needs to call
format_output(), and clean up everything via free_formatters() in
output_cleanup().
On the C side, use a sinsp_evt_formatter object and use it in
format_event().
In C functions that implement lua functions, don't directly throw
falco_exceptions, which results in opaque error messages like:
Mon Feb 27 10:09:58 2017: Runtime error: Error invoking function output:
C++ exception. Exiting.
Instead, return lua errors via lua_error().
- Sometimes systemd changes its process name to '(systemd)', probably
for a forked daemon process. Add that version to login_binaries.
- Add sv (part of runit) as a program that can write below /etc.
- Allow all /dev/tty* files by moving /dev/tty from the list to a
"startswith /dev/tty" condition.
- Instead of having a possibly null string pointer as the argument to
enable_* and process_event, have wrapper versions that assume a
default falco ruleset. The default ruleset name is a static member of
the falco_engine class, and the default ruleset id is created/found
in the constructor.
- This makes the whole mechanism simple enough that it doesn't require
separate testing, so remove the capability within falco to read a
ruleset from the environment and remove automated tests that specify
a ruleset.
- Make pattern/tags/ruleset arguments to enable_* functions const.
(I'll squash this down before I commit)
Tag the existing ruleset to group tags in a meaningful way. The added
tags are:
- filesystem: the rule relates to reading/writing files
- software_mgmt: the rule relates to any software/package management
tool like rpm, dpkg, etc.
- process: the rule relates to starting a new process or changing the
state of a current process.
- database: the rule relates to databases
- host: the rule *only* works outside of containers
- shell: the rule specifically relates to starting shells
- container: the rule *only* works inside containers
- cis: the rule is related to the CIS Docker benchmark.
- users: the rule relates to management of users or changing the
identity of a running process.
- network: the rule relates to network activity
Rules can have multiple tags if they relate to multiple of the
above. Rules do not have to have tags, although all the current rules do.
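For instance, a rule carrying one of the tags above might look like this (condition macro assumed to be defined elsewhere):
```
- rule: Write below etc
  desc: An attempt to write to any file below /etc
  condition: write_etc_common
  output: "File below /etc opened for writing (user=%user.name command=%proc.cmdline file=%fd.name)"
  priority: ERROR
  tags: [filesystem]
```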
Add automated tests that verify the ability to tag sets of rules,
disable them with -T, and run them with -t, works:
- New test option disable_tags adds -T <tag> arguments to the falco
command line, and run_tags adds -t <tag> arguments to the falco command
line.
- A new trace file open-multiple-files.scap opens 13 different files,
and a new rules file has 13 different rules with all combinations of
the tags a, b, c (both forward and backward), a rule with an empty
list of tags, a rule with no tags field, and a rule with a completely
different tag d.
Using the above, add tests for:
- Both disabling all combinations of a, b, c using disable_tags as well as
run all combinations of a, b, c, using run_tags.
- Specifying both disabled (-T/-D) and enabled (-t) rules. Not allowed.
- Specifying a ruleset while having tagged rules enabled, rules based
on a name disabled, and no particular rules enabled or disabled.
- in lua, look for a tags attribute to each rule. This is passed up in
add_filter as a tags argument (as a lua table). If not present, an
empty table is used. The tags table is iterated to populate a set
of tags as strings, which is passed to add_filter().
- A new method falco_engine::enable_rule_by_tag is similar to
enable_rule(), but is given a set of tag strings. Any rules containing
one of the tags are enabled/disabled.
- The list of event types has been changed to a set to more accurately
reflect its purpose.
- New argument to falco -T allows disabling all rules matching a given
tag, via enable_rule_by_tag(). It can be provided multiple times.
- New argument to falco -t allows running those rules matching a given
tag. If provided all rules are first disabled. It can be
provided multiple times, but can not be combined with -T or
-D (disable rules by name)
- falco_engine supports the notion of a ruleset. The idea is that you
can choose a set of rules that are enabled/disabled by using
enable_rule()/enable_rule_by_tag() in combination with a
ruleset. Later, in process_event() you include that ruleset and the
rules you had previously enabled will be run.
- rulesets are provided as strings in enable_rule()/enable_rule_by_tag()
and as numbers in process_event()--this avoids the overhead of string
lookups per-event. Ruleset ids are created on the fly as needed. A
utility method find_ruleset_id() looks up the ruleset id for a given
name. The default ruleset is NULL string/0 numeric if not provided.
- Although the ruleset is a useful falco engine feature, it isn't that
important to the falco standalone program, so it's not
documented. However, you can change the ruleset by providing
FALCO_RULESET in the environment.
- Add flanneld as a privileged container.
- Add parentheses grouping around many of the "x running y"
containers. I haven't found this strictly necessary with their
current use in rules, but this ensures they will be isolated when
used.
- Allow denyhosts to spawn shells--it runs iptables to add/remove hosts
from its deny list.
This is a rework of a PR made by @juju4 that had a bunch of additions
related to running other security/monitoring products, including aide,
bro, icinga2, nagios, ansible, etc.
This overlapped a lot with changes I had been making to reduce
noisiness, so rather than have @juju4 deal with the conflicts I took the
changes and made a separate commit with the non-conflicting additions.
A summary of the changes:
- Add docker-compose as a docker binary.
- Add showq/critical-stack as setuid binaries.
- Add lxd binaries
- Add some additional package management binaries.
- Add support for host intrusion detection systems like aide.
- Add support for network intrusion detection systems like bro.
- Add support for monitoring systems like nagios, icinga2, npcd.
- Other one-off additions to other lists of mail/etc programs.
A new trace file falco-event-generator.scap contains the result of
running the falco event generator in docker, via:
docker run --security-opt seccomp=unconfined sysdig/falco-event-generator:latest /usr/local/bin/event_generator --once
Make sure this trace file detects the exact set of events we expect for
each rule. This required adding a new verification method
check_detections_by_rule that finds the per-rule counts and compares
them to the expected counts, which are included in the test description
under the key "detect_counts".
This is the first time a trace file for a test is actually in one of the
downloaded zip files. This means it will be tested twice (one for simple
detect-or-not, once for actual counts).
Adding this test showed a problem with Run shell in container
rule--since sysdig/falco-event-generator startswith sysdig/falco, it was
being treated as a trusted container. Modify the macro
trusted_containers to not allow falco-event-generator to be trusted.
Small changes to improve the use of falco_event_generator with falco:
- In event_generator, some actions like exec_ls won't trigger
notifications on their own. So exclude them from -a all.
- For all actions, print details on what the action will do.
- For actions that won't result in a falco notification in containers,
note that in the output.
- The short version of --once wasn't working, fix the getopt.
- Explicitly saying -a all wasn't working, fix.
- Don't rely on an external ruleset in the nodejs docker-compose
demo--the built in rules are sufficient now.
- Add a second possible location for denyhosts
- Add PM2 (http://pm2.keymetrics.io/) as a shell spawner.
- There was a bug in use of ansible_running_python. We actually need
two variants depending on whether ansible is the parent or current
process. parent_ansible_running_python is used for Run shell
untrusted, ansible_running_python is used for other rules.
We had added this image while the changes in
https://github.com/draios/falco/pull/177 made it to everyone. This is in
a release now, so we'll remove it from the rule set.
Within the sysdig code there are several ASSERTS() that can occur for
error paths that aren't truly critical, such as:
17:33:52 DEBUG| [stderr] falco: /home/travis/build/draios/sysdig/userspace/libsinsp/parsers.cpp:1657: static void sinsp_parser::parse_openat_dir(sinsp_evt*, char*, int64_t, std::string*): Assertion `false' failed.
Looking at the code, it's not a truly fatal error, just an inability to
find fd information:
----
if(evt->m_fdinfo == NULL)
{
	ASSERT(false);
	*sdir = "<UNKNOWN>";
}
----
When running regression tests in travis, we don't want these ASSERTs to
cause falco to exit.
To allow this, in CMakeLists.txt only set DRAIOS_DEBUG_FLAGS if it
wasn't already set, and in travis's cmake, add -DNDEBUG to
DRAIOS_DEBUG_FLAGS.
Several changes to reduce spurious alerts when managing machines via
ansible:
- Add ansible_running_python (that is, ansible-spawned python scripts)
as scripts that can read sensitive files and write below
/etc. Notably this is the user ansible module.
- Also add comments to ansible_running_python suggesting users make it
more strict by specifically naming the root directory for ansible
scripts.
- Add pypy as a python variant that can run ansible-related scripts.
Also other changes to reduce FPs:
- add apt-add-reposit, apt-auto-remova (truncation intentional),
apt-get, apt, apt-key as package management programs, and add package
management binaries to the set of shell spawners. The overlapping
binaries that were in known_shell_spawn_binaries were removed.
- add passwd_binaries, gpg, insserv, apparmor_parser, update-mime,
tzdata.{config,postinst}, systemd-machine, and debconf-show to
the set of binaries that can write below /etc.
- Add vsftpd as a program that can read sensitive files.
- Add additional programs (incl. python support programs like pip,
pycompile) as ones that can spawn shells.
- Allow privileged containers to spawn shells.
- Break out the set of files below /dev that are written to with O_CREAT
into a separate list, and add /dev/random,urandom,console to the list.
- Add python running denyhosts as a program that can write below /etc.
- Also add binaries starting with linux-image- as ones that can spawn
shells. These are perl scripts run as a part of installing
linux-image-N.N packages.
Changes to allow shells spawned by ansible. In general this is actually
pretty difficult--on the remote managed machine, ansible performs
actions simply by running python over ssh without any explicit ansible
helper or command line.
One (weak) hint is that the python scripts being run are usually under a
directory with ansible in the name. So use that as the basis for a macro
ansible_running_python. In turn, that macro is used as a negative
condition for the run shell untrusted rule.
This is a pretty fragile and easily exploited condition, so add a note
to the macro saying so.
Feedback from a falco user:
--
to more findings from last night:
logrotate cronjob (Debian default):
Shell spawned by untrusted binary (user=root shell=sh parent=logrotate cmdline=sh -c invoke-rc.d rsyslog rotate > /dev/null logrotate_script /var/log/syslog)
passwd cronjob (Debian default):
Sensitive file opened for reading by non-trusted program (user=root name=cmp command=cmp -s shadow.bak /etc/shadow file=/etc/shadow)
--
New macro cmp_cp_by_passwd allows cmp/cp to be run by passwd to examine
sensitive files. Add logrotate as a program that can spawn a shell.
Also do some cleanups, moving items to lists and splitting long
single-line conditions into multiple lines.
agent-master went out of sync, probably because some rebase/force-push
happened on dev. Used `git merge -s ours agent-master` here to put all the
commits of agent-master on dev while ignoring anything from agent-master.
So now we can merge from dev to agent-master with fast-forward and no
conflicts.
Add a test that specifically tests truncated outputs. A rule contains an
output field %fd.cport which has no value for an open event. Ensure that
the rule's output has <NA> for the cport and the remainder of the rule's
output is filled in.
Prefix output strings with * so they are always permissive in the
engine.
In falco outputs, which adds its own prefix, remove any leading * before
adding the custom prefix.
Add cchh/sysdig as a trusted container. We'll probably remove this once
the next agent release occurs that has the fix
https://github.com/draios/falco/pull/177.
Also reformat to avoid long lines.
New tests that test every possible override:
- Overriding a rule with one that doesn't match
- Overriding a macro to one that doesn't match
- Overriding a top level list to a binary that doesn't match
- Overriding an embedded list to one that doesn't match
In each case, the override results in no longer matching an open by the
program "cat".
Allow any list/macro/rule to be overridden by a subsequent file. The
persistent state that lives across invocations of load_rules are the 3
arrays ordered_{list,macro,rule}_names, which have the
lists/macros/rules in the order in which they first appear, and tables
{rules,macros,lists}_by_name, which maps from a name to a yaml object.
With each call to load_rules, the set of loaded rules is reset and the
state of expanded lists, compiled macros, compiled rules, and rule
metadata are recreated from scratch, using the ordered_*_names arrays
and *_by_name tables. That way, any list/macro/rule can be redefined in
a subsequent file with new values.
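As a rough C++ analog of the Lua bookkeeping described above (the real code is Lua; names mirror the description), the override behavior boils down to recording first-appearance order once while always keeping the latest definition:
```
// Hedged sketch: later files override earlier definitions by name, while
// the original ordering is preserved for recompilation.
#include <map>
#include <string>
#include <vector>

struct loader_state
{
	std::vector<std::string> ordered_rule_names;      // first-appearance order
	std::map<std::string, std::string> rules_by_name; // name -> yaml object

	void add_rule(const std::string &name, const std::string &yaml)
	{
		if(rules_by_name.find(name) == rules_by_name.end())
		{
			ordered_rule_names.push_back(name); // first sighting: record order
		}
		rules_by_name[name] = yaml; // always: the latest definition wins
	}
};
```
On each call to load_rules, the compiled state is rebuilt by walking ordered_rule_names and compiling whatever rules_by_name currently holds for each name.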
Add the ability to clear the set of loaded rules from lua. It simply
recreates the sinsp_evttype_filter instance m_evttype_filter, which is
now a unique_ptr.
Periodically both apt and apt-get will spawn shells to update success timestamps and motd.
falco-CLA-1.0-signed-off-by: Jonathan Coetzee <jon@thancoetzee.com>
SSH'ing into an Ubuntu 16.04 box triggers a bunch of "Sensitive file opened for reading by non-trusted program" errors caused by systemd
falco-CLA-1.0-signed-off-by: Jonathan Coetzee jon@thancoetzee.com
sinsp_utils::get_current_time_ns() has the same purpose as
get_epoch_ns(), and now that we're including the token bucket in
falco_engine, it's easy to package the dependency. So use that function
instead.
Add token-bucket based rate limiting for falco notifications.
The token bucket is implemented in token_bucket.cpp (actually in the
engine directory, just to make it easier to include in other
programs). It maintains a current count of tokens (i.e. right to send a
notification). Its main method is claim(), which attempts to claim a
token and returns true if one was claimed successfully. It has a
configurable max burst size and rate. The token bucket
gains "rate" tokens per second, up to a maximum of max_burst tokens.
These parameters are configurable in falco.yaml via the config
options (defaults shown):
outputs:
  rate: 1
  max_burst: 1000
In falco_outputs::handle_event(), try to claim a token, and if
unsuccessful log a debug message and return immediately.
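For illustration, a token bucket along these lines can be sketched in a few lines (a simplified stand-in; the real code in token_bucket.cpp may differ in details):
```
// Hedged sketch of a token bucket: gains "rate" tokens per second, capped
// at max_burst; claim() consumes one token when available.
#include <algorithm>
#include <chrono>

class token_bucket
{
public:
	token_bucket(double rate, double max_burst)
		: m_rate(rate), m_max_tokens(max_burst), m_tokens(max_burst),
		  m_last(std::chrono::steady_clock::now()) {}

	// Attempt to claim one token; returns true if one was claimed.
	bool claim()
	{
		auto now = std::chrono::steady_clock::now();
		double elapsed = std::chrono::duration<double>(now - m_last).count();
		m_last = now;

		// Refill based on elapsed time, capped at the burst size.
		m_tokens = std::min(m_max_tokens, m_tokens + elapsed * m_rate);

		if(m_tokens >= 1.0)
		{
			m_tokens -= 1.0;
			return true;
		}
		return false;
	}

private:
	double m_rate;       // tokens gained per second
	double m_max_tokens; // maximum burst size
	double m_tokens;     // current token count
	std::chrono::steady_clock::time_point m_last;
};
```
With the defaults above, up to 1000 notifications can burst before the 1-per-second refill takes over.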
Add google_containers/kube-proxy as a trusted image (can be run
privileged, can mount sensitive filesystems). While our k8s deployments
run kube-proxy via the hyperkube image, evidently it's sometimes run via
its own image.
This is one of the fixes for #156.
Also update the output message for this rule.
Previously, log messages had levels, but it only influenced the level
argument passed to syslog(). Now, add the ability to control log level
from falco itself.
New falco.yaml argument "log_level" can be one of the strings
corresponding to the well-known syslog levels, which is converted to an
integer syslog-style level.
In falco_logger::log(), skip messages below the specified level.
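A sketch of that check (variable and function names here are illustrative; the real logic lives in falco_logger::log()):
```
// Hedged sketch: skip messages below the configured syslog-style level.
#include <string>
#include <syslog.h>

static int s_log_level = LOG_INFO; // parsed from falco.yaml "log_level"

void log_msg(int priority, const std::string &msg)
{
	// syslog levels are numerically inverted (LOG_EMERG=0 ... LOG_DEBUG=7),
	// so "below the specified level" means a larger priority value.
	if(priority > s_log_level)
	{
		return;
	}
	syslog(priority, "%s", msg.c_str());
}
```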
Instead of creating a formatter for each event, cache them and create
them only when needed. A new function output_cleanup cleans up the
cached formatters, and is called in the destructor if init() was called.
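The caching idea reduces to a lazily populated map keyed by format string; here is a minimal sketch with a stand-in formatter type (the real code caches sinsp-based formatters):
```
// Hedged sketch: create a formatter per format string on first use only.
#include <map>
#include <memory>
#include <string>

struct formatter
{
	explicit formatter(const std::string &f) : fmt(f) {}
	std::string fmt; // the rule's output format string
};

static std::map<std::string, std::unique_ptr<formatter>> s_formatters;

formatter *get_formatter(const std::string &fmt)
{
	auto it = s_formatters.find(fmt);
	if(it == s_formatters.end())
	{
		it = s_formatters.emplace(fmt, std::make_unique<formatter>(fmt)).first;
	}
	return it->second.get();
}

// output_cleanup() then amounts to s_formatters.clear().
```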
New argument --metric, which can be cpu|drops, controls whether to graph
cpu usage or event drop percentage. Titles/axis labels/etc. change
appropriately.
When run via scripts like run_performance_tests.sh, it's useful to
include extra info like the test being run and the specific program
variant to the stats file. So support that via the
environment. Environment keys starting with FALCO_STATS_EXTRA_XXX will
have the XXX and environment value added to the stats file.
It's undocumented as I doubt other programs will need this functionality
and it keeps the docs simpler.
With -s, periodically fetch capture stats from the inspector and write
them to the provided file.
Separate class StatsFileWriter handles the details. It does rely on a
timer + SIGALRM handler so you can only practically create a single
object, but it does keep the code/state separate.
The output format has a sample number, the set of current stats, a
delta with the difference from the prior sample, and the percentage of
events dropped during that sample.
Change falco_engine::process_event to return a unique_ptr that wraps the
rule result, so it won't be leaked if this method throws an exception.
This means that callers don't need to create their own.
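The ownership pattern, in a hedged sketch (rule_result's fields are illustrative, not the exact class):
```
// Hedged sketch: the result is owned by a unique_ptr, so it is freed even
// if an exception propagates out of the caller.
#include <memory>
#include <string>

struct rule_result
{
	std::string rule;
	std::string priority;
	std::string format;
};

std::unique_ptr<rule_result> process_event(/* sinsp_evt *ev */)
{
	// ... match the event against the loaded rules ...
	auto res = std::make_unique<rule_result>();
	// Callers never call delete; a real implementation would return
	// nullptr when no rule matched.
	return res;
}
```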
Add the ability to check falco's return code with exit_status and to
generally match stderr with stderr_contains in a test.
Use those to create a test that has an invalid output expression using
%not_a_real_field. It expects falco to exit with 1 and the output to
contain a message about the invalid output.
Validate rule outputs when loading rules by attempting to create a
formatter based on the rule's output field. If there's an error, it will
propagate up through load_rules and cause falco to exit rather than
discover the problem only when trying to format the event and the rule's
output field.
This required moving formats.{cpp,h} into the falco engine directory
from the falco general directory. Note that these functions are loaded
twice in the two lua states used by falco (engine and outputs).
There's also a couple of minor cleanups:
- falco_formats had a private instance variable that was unused, remove
it.
- rename the package for the falco_formats functions to formats instead
of falco so it's more standalone.
- don't throw a c++ exception in falco_formats::formatter. Instead
generate a lua error, which is handled more cleanly.
- free_formatter doesn't return any values, so set the return value of
the function to 0.
container.info handling used to be done by the falco_outputs
object. However, this caused problems for applications that only used
the falco engine, doing their own output formatting for matching events.
Fix this by moving output formatting into the falco engine itself. The
part that replaces %container.info/adds extra formatting to the end of a
rule's output now happens while loading the rule.
Make necessary changes to allow run_performance_tests to invoke the
'test_mm' program we use internally.
Also add ability to run with a build directory separate from the source
directory and to specify an alternate rules file.
Finally, set up the kubernetes demo using sudo, a result of recent changes.
Related to the changes in https://github.com/draios/agent/pull/267,
improve error messages when trying to load sets of rules with errors:
- Check that yaml parsing of rules_content actually resulted in
something.
- Return an error for rules that have an empty name.
- Return an error for yaml objects that aren't a rule/macro/list.
- When compiling, don't print an error message, simply return one,
including a wrapper "can not compile ..." string.
Instead of having FALCO_SHARE_DIR be a relative path, fully specify it
by prepending CMAKE_INSTALL_PREFIX in the top level CMakeLists.txt and
don't prepend CMAKE_INSTALL_PREFIX in config_falco_engine.h.in. This
makes it consistent with its use in the agent.
Honor a USE_BUNDLED_DEPS option for third-party libraries which can be
applied globally. There are also USE_BUNDLED_XXX options that can be
used individually for each library.
Verified that this works by first building with USE_BUNDLED_DEPS=ON (the
default), installing external packages ncurses-dev libssl-dev
libcurl4-openssl-dev so CMake's find_package could use them, modifying
the CMakeLists.txt to add "PATHS ${PROJECT_BINARY_DIR}/..." options to
each find_path()/find_library() command to point to the previously
installed third party libraries. It found them as expected.
The sysdig fix in https://github.com/draios/sysdig/pull/672 forced this
change, but it does also happen to fix a falco feature request
https://github.com/draios/falco/issues/144.
This helps when running on a system which has the module loaded, but getting
access to the module file is hard for some reason. Since I know that the right
version of the module is loaded I just want falco to connect.
I tested this with this run command:
docker run -e SYSDIG_SKIP_LOAD=1 -it -v /dev:/host/dev -v /proc:/host/proc --privileged falco
And it successfully connected to Sysdig and started printing out warnings for my
system.
falco-CLA-1.0-signed-off-by: Carl Sverre accounts@carlsverre.com
Make sure falco doesn't detect the things draios-agent does as
suspicious. It's possible that you might run open source falco alongside
sysdig cloud.
App checks spawned by sysdig cloud binaries might also change namespace,
so also allow children of sysdigcloud binaries to call setns.
Collect stats on the number of events processed and dropped. When run
with -v, print these stats. This duplicates sysdig behavior and can be
useful when diagnosing problems related to dropped events throwing off
internal state tracking.
Bring over functionality from sysdig to write trace files. This is easy
as all of the code to actually write the files is in the inspector. This
just handles the -w option and arguments.
This can be useful to write a trace file in parallel with live event
monitoring so you can reproduce it later.
Add a new list k8s_binaries and allow those binaries to do things like
setns/spawn shells. It's not the case that all of these binaries
actually do these things, but keeping it as a single list makes
management easier.
The logic for detecting if a file exists was backwards. It would treat a
file as existing if it could *not* be opened. Reverse that logic so it
works.
This fixes https://github.com/draios/falco/issues/135.
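In sketch form, the corrected check looks like this (a minimal stand-in for the actual code):
```
// Hedged sketch: a file exists when it *can* be opened.
#include <fstream>
#include <string>

static bool file_exists(const std::string &path)
{
	std::ifstream f(path);
	return f.good(); // the old code inverted this result
}
```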
Copy handling of -pk/-pm/-pc/-k/-m arguments from sysdig. All of the
relevant code was already in the inspector so that was easy.
The information from k8s/mesos/containers is used in two ways:
- In rule outputs, if the format string contains %container.info, that
is replaced with the value from -pk/-pm/-pc, if one of those options
was provided. If no option was provided, %container.info is replaced
with a generic %container.name (id=%container.id) instead.
- If the format string does not contain %container.info, and one of
-pk/-pm/-pc was provided, that is added to the end of the formatting
string.
- If -p was specified with a general value (i.e. not
kubernetes/mesos/container), the value is simply added to the end and
any %container.info is replaced with the generic value.
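Put together, the substitution rules above amount to something like the following hedged sketch (function name and exact strings are illustrative):
```
// Hedged sketch of %container.info handling in rule output strings.
#include <string>

std::string apply_container_info(std::string fmt, const std::string &cinfo,
				 bool cinfo_given)
{
	const std::string token = "%container.info";
	const std::string generic = "%container.name (id=%container.id)";

	size_t pos = fmt.find(token);
	if(pos != std::string::npos)
	{
		// Replace %container.info with the -pk/-pm/-pc value, or the
		// generic form when no option was provided.
		fmt.replace(pos, token.size(), cinfo_given ? cinfo : generic);
	}
	else if(cinfo_given)
	{
		// No placeholder in the rule's output: append to the end.
		fmt += " " + cinfo;
	}
	return fmt;
}
```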
There are a lot of command line options now, so sort them alphabetically
in the usage and getopt handling to make them easier to find.
Also rename -p <pidfile> to -P <pidfile>, thinking ahead to the next
commit.
Add jq to the docker image containing falco. jq is very handy for
transforming json, which comes into play if you want to post to
slack (or other) webhooks.
Add an exfiltration action that reads /etc/shadow and sends the contents
to an arbitrary ip address and port via a udp datagram.
Add the ability to specify actions via the environment instead of the
command line. If actions are specified via the environment, they replace
any actions specified on the command line.
The new privileged falco rule was noisy when running kubernetes, which
can run privileged. Add it to the trusted_containers list.
Also eliminate a couple spurious warnings related to spawning shells in
containers.
New rule 'File Open by Privileged Container' triggers when a container
that is running privileged opens a file.
New rule 'Sensitive Mount by Container' triggers when a container that
has a sensitive mount opens a file. Currently, a sensitive mount is a
mount of /proc.
This depends on https://github.com/draios/sysdig/pull/655.
If a rule has an enabled attribute, and if the value is false, call the
engine's enable_rule() method to disable the rule. Like add_filter,
there's a static method which takes the object as the first argument and
a non-static method that calls the engine.
This fixes #72.
- In the regression tests, make the config file configurable in the
multiplex file via 'conf_file'.
- A new multiplex file item 'outputs' containing a list of <filename>:
<regex> tuples. For each item, the test reads the file and matches
each line against the regex. A match must be found for the test to
pass.
- Add 2 new tests that test file output and program output. They write
to files below /tmp/falco_outputs/ and the contents are checked to
ensure that alerts are written.
The falco engine changes broke the output methods that take
configuration (like the filename for file output, or the program for
program output). Fix that by properly passing the options argument to
each method's output function.
Falco itself spawns a shell when using program notifications, so add
falco to the set of trusted programs. (Also add some other programs like
make, awk, configure, that are run while building).
New variable FALCO_RULES_DEST_FILENAME allows the rules file to be
installed with a different filename. Not set in the falco repo, but in
the agent repo it's installed as falco_rules.default.yaml.
Improve ruleset after using with falco event_generator:
- Instead of assuming all shells are bash, add a list shell_binaries
and macro shell_procs, and replace references to bash with
shell_procs. This revealed some other programs that can spawn shells.
- Add "login" as an interactive command. systemd-login isn't in alpine
linux, which is the linux distro used for the container.
- Move read_sensitive_file_untrusted before
read_sensitive_file_trusted_after_startup, so it can hit first.
C++ program that performs bad activities related to the current falco
ruleset. There are configurable actions for almost all of the current
ruleset, via the --action argument.
By default runs in a loop forever. Can be overridden via --once.
Also add a Dockerfile that compiles event_generator.cpp within an alpine
linux image and copies it to /usr/local/bin. This image has been pushed
to docker hub as "sysdig/falco-event-generator:latest".
Add a Makefile that runs the right docker build command.
Docker 1.12 split docker into docker and dockerd, so add dockerd as a
docker binary. Also be consistent about using docker_binaries instead of
just references to docker.
Also add ldconfig as a program that can write to files below /etc.
Add test that cover reading from multiple sets of rule files and
disabling rules. Specific changes:
- Modify falco to allow multiple -r arguments to read from multiple
files.
- In the test multiplex file, add a disabled_rules attribute,
containing a sequence of rules to disable. This results in -D arguments
when running falco.
- In the test multiplex file, 'rules_file' can be a sequence. It
results in multiple -r arguments when running falco.
- In the test multiplex file, 'detect_level' can be a sequence of
multiple severity levels. All levels will be checked for in the
output.
- Move all test rules files to a rules subdirectory and all trace files
to a traces subdirectory.
- Add a small trace file for a simple cat of /dev/null. Used by the
new tests.
- Add the following new tests:
- Reading from multiple files, with the first file being
empty. Ensure that the rules from the second file are properly
loaded.
- Reading from multiple files with the last being empty. Ensures
that the empty file doesn't overwrite anything from the first
file.
- Reading from multiple files with varying severity levels for each
rule. Ensures that both files are properly read.
- Disabling rules from a rules file, both with full rule names
and regexes. Will result in not detecting anything.
Add the ability to drop events at the falco engine level in a way that
can scale with the dropping that already occurs at the kernel/inspector
level.
New inline function should_drop_evt() controls whether or not events are
matched against the set of rules, and is controlled by two
values--sampling ratio and sampling multiplier.
Here's how the sampling ratio and multiplier influence whether or not an
event is dropped in should_drop_evt(). The intent is that
m_sampling_ratio is generally changing external to the engine e.g. in
the main inspector class based on how busy the inspector is. A sampling
ratio of 1 implies no dropping. Values > 1 imply increasing levels of
dropping. External to the engine, the sampling ratio results in events
being dropped at the kernel/inspector interface. The sampling
multiplier is an amplification to the sampling factor in
m_sampling_ratio. If 0, no additional events are dropped other than
those that might be dropped by the kernel/inspector interface. If 1,
events that make it past the kernel module are subject to an additional
level of dropping at the falco engine, scaling with the sampling ratio
in m_sampling_ratio.
Unlike the dropping that occurs at the kernel level, where the events in
the first part of each second are dropped, this dropping is random.
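In sketch form (member variables paraphrased as parameters; rand() stands in for the engine's RNG):
```
// Hedged sketch of should_drop_evt(): keep roughly one in
// (ratio * multiplier) events, chosen at random rather than by position
// within each second.
#include <cstdint>
#include <cstdlib>

inline bool should_drop_evt(uint32_t sampling_ratio, uint32_t sampling_multiplier)
{
	if(sampling_multiplier == 0)
	{
		return false; // only kernel/inspector-level dropping applies
	}

	double coin = (double)rand() / RAND_MAX;
	return coin >= 1.0 / ((double)sampling_ratio * sampling_multiplier);
}
```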
Move the c++ and lua code implementing falco engine/falco common to its
own directory userspace/engine. It's compiled as a static library
libfalco_engine.a, and has its own CMakeLists.txt so it can be included
by other projects.
The engine's CMakeLists.txt has a add_subdirectory for the falco rules
directory, so including the engine also builds the rules.
The variables you need to set to use the engine's CMakeLists.txt are:
- CMAKE_INSTALL_PREFIX: the root directory below which everything is
installed.
- FALCO_ETC_DIR: where to install the rules file.
- FALCO_SHARE_DIR: where to install lua code, relative to the
  install/package root.
- LUAJIT_INCLUDE: where to find header files for lua.
- FALCO_SINSP_LIBRARY: the library containing sinsp code. It will be
  considered a dependency of the engine.
- LPEG_LIB/LYAML_LIB/LIBYAML_LIB: locations for third-party libraries.
- FALCO_COMPONENT: if set, will be included as a part of any install()
commands.
Instead of specifying /usr/share/falco in config_falco_*.h.in, use
CMAKE_INSTALL_PREFIX and FALCO_SHARE_DIR.
The lua code for the engine has also moved, so the two lua source
directories (userspace/engine/lua and userspace/falco/lua) need to be
available separately via falco_common, so make it an argument to
falco_common::init.
As a part of making it easy to include in another project, also clean up
LPEG build/defs. Modify build-lpeg to add a PREFIX argument to allow for
object files/libraries being in an alternate location, and when building
lpeg, put object files in a build/ subdirectory.
Create standalone classes falco_engine/falco_outputs that can be
embedded in other programs. falco_engine is responsible for matching
events against rules, and falco_output is responsible for formatting an
alert string given an event and writing the alert string to all
configured outputs.
falco_engine's main interfaces are:
- load_rules/load_rules_file: Given a path to a rules file or a string
containing a set of rules, load the rules. Also loads needed lua code.
- process_event(): check the event against the set of rules and return
the results of a match, if any.
- describe_rule(): print details on a specific rule or all rules.
- print_stats(): print stats on the rules that matched.
- enable_rule(): enable/disable any rules matching a pattern. New falco
command line option -D allows you to disable one or more rules on the
command line.
falco_output's main interfaces are:
- init(): load needed lua code.
- add_output(): add an output channel for alert notifications.
- handle_event(): given an event that matches one or more rules, format
an alert message and send it to any output channels.
Each of falco_engine/falco_output maintains a separate lua state and
loads separate sets of lua files. The code to create and initialize the
lua state is in a base class falco_common.
falco_engine no longer logs anything. In the case of errors, it throws
exceptions. falco_logger is now only used as a logging mechanism for
falco itself and as an output method for alert messages. (This should
really probably be split, but it's ok for now).
falco_engine contains an sinsp_evttype_filter object containing the set
of eventtype filters. Instead of calling
m_inspector->add_evttype_filter() to add a filter created by the
compiler, call falco_engine::add_evttype_filter() instead. This means
that the inspector runs with a NULL filter and all events are returned
from do_inspect. This depends on
https://github.com/draios/sysdig/pull/633 which has a wrapper around a
set of eventtype filters.
Some additional changes along with creating these classes:
- Some cleanups of unnecessary header files, cmake include_directory()s,
etc to only include necessary includes and only include them in header
files when required.
- Try to avoid 'using namespace std' in header files, or assuming
someone else has done that. Generally add 'using namespace std' to all
source files.
- Instead of using sinsp_exception for all errors, define a
falco_engine_exception class for exceptions coming from the falco
engine and use it instead. For falco program code, switch to general
exceptions under std::exception and catch + display an error for all
exceptions, not just sinsp_exceptions.
- Remove fields.{cpp,h}. This was dead code.
- Start tracking counts of rules by priority string (i.e. what's in the
falco rules file) as compared to priority level (i.e. roughly
corresponding to a syslog level). This keeps the rule processing and
rule output halves separate. This led to some test changes. The regex
used in the test is now case insensitive to be a bit more flexible.
- Now that https://github.com/draios/sysdig/pull/632 is merged, we can
delete the rules object (and its lua_parser) safely.
- Move loading the initial lua script to the constructor. Otherwise,
calling load_rules() twice re-loads the lua script and throws away any
state like the mapping from rule index to rule.
- Allow an empty rules file.
Finally, fix most memory leaks found by valgrind:
- falco_configuration wasn't deleting the allocated m_config yaml
config.
- several ifstreams were being created simply to test which falco
config file to use.
- In the lua output methods, an event formatter was being created using
falco.formatter() but there was no corresponding free_formatter().
This depends on changes in https://github.com/draios/sysdig/pull/640.
<!-- Thanks for sending a pull request! Here are some tips for you:
1. If this is your first time, please read our contributor guidelines in the [CONTRIBUTING.md](CONTRIBUTING.md) file and learn how to compile Falco from source [here](https://falco.org/docs/source).
2. Please label this pull request according to what type of issue you are addressing.
3. Please add a release note!
4. If the PR is unfinished while opening it specify a wip in the title before the actual title, for example, "wip: my awesome feature"
-->
**What type of PR is this?**
> Uncomment one (or more) `/kind <>` lines:
> /kind bug
> /kind cleanup
> /kind design
> /kind documentation
> /kind failing-test
> /kind feature
> If contributing rules or changes to rules, please make sure to also uncomment one of the following lines:
> /kind rule-update
> /kind rule-create
<!--
Please remove the leading whitespace before the `/kind <>` you uncommented.
-->
**Any specific area of the project related to this PR?**
> Uncomment one (or more) `/area <>` lines:
> /area build
> /area engine
> /area rules
> /area tests
> /area proposals
<!--
Please remove the leading whitespace before the `/area <>` you uncommented.
-->
**What this PR does / why we need it**:
**Which issue(s) this PR fixes**:
<!--
Automatically closes linked issue when PR is merged.
Usage: `Fixes #<issue number>`, or `Fixes (paste link of issue)`.
If PR is `kind/failing-tests` or `kind/flaky-test`, please post the related issues/tests in a comment and do not use `Fixes`.
-->
Fixes #
**Special notes for your reviewer**:
**Does this PR introduce a user-facing change?**:
<!--
If no, just write "NONE" in the release-note block below.
If yes, a release note is required:
Enter your extended release note in the block below.
If the PR requires additional action from users switching to the new release, prepend the string "action required:".
For example, `action required: change the API interface of the rule engine`.
-->
This is a list of production adopters of Falco (in alphabetical order):
* [Booz Allen Hamilton](https://www.boozallen.com/) - BAH leverages Falco as part of their Kubernetes environment to verify that workloads behave as they did in their CD DevSecOps pipelines. BAH offers a solution to internal developers to easily build DevSecOps pipelines for projects. This makes it easy for developers to incorporate Security principles early on in the development cycle. In production, Falco is used to verify that the code the developer ships does not violate any of the production security requirements. BAH [are speaking at Kubecon NA 2019](https://kccncna19.sched.com/event/UaWr/building-reusable-devsecops-pipelines-on-a-secure-kubernetes-platform-steven-terrana-booz-allen-hamilton-michael-ducy-sysdig) on their use of Falco.
* [Coveo](https://www.coveo.com/) - Coveo stitches together content and data, learning from every interaction, to tailor every experience using AI to drive growth, satisfy customers and develop employee proficiency. All Falco events are centralized in our SIEM for analysis. Understanding what is running on production servers, and the context around why things are running is even more tricky now that we have further abstractions with containers and orchestration systems. Falco is giving us a good visibility inside containers and complement other Host and Network Intrusion Detection Systems. In a near future, we expect to deploy serverless functions to take action when Falco identifies patterns worth taking action for.
* [Frame.io](https://frame.io/) - Frame.io is a cloud-based (SaaS) video review and collaboration platform that enables users to securely upload source media, work-in-progress edits, dailies, and more into private workspaces where they can invite their team and clients to collaborate on projects. Understanding what is running on production servers, and the context around why things are running is even more tricky now that we have further abstractions like Docker and Kubernetes. To get this needed visibility into our system, we rely on Falco. Falco's ability to collect raw system calls such as open, connect, exec, along with their arguments offer key insights on what is happening on the production system and became the foundation of our intrusion detection and alerting system.
* [GitLab](https://about.gitlab.com/direction/defend/container_host_security/) - GitLab is a complete DevOps platform, delivered as a single application, fundamentally changing the way Development, Security, and Ops teams collaborate. GitLab Ultimate provides the single tool teams need to find, triage, and fix vulnerabilities in applications, services, and cloud-native environments enabling them to manage their risk. This provides them with repeatable, defensible processes that automate security and compliance policies. GitLab includes a tight integration with Falco, allowing users to defend their containerized applications from attacks while running in production.
* [League](https://league.com/ca/) - League provides health benefits management services to help employees understand and get the most from their benefits, and employers to provide effective, efficient plans. Falco is used to monitor our deployed services on Kubernetes, protecting against malicious access to containers which could lead to leaks of PHI or other sensitive data. The Falco alerts are logged in Stackdriver for grouping and further analysis. In the future, we're hoping for integrations with Prometheus and AlertManager as well.
* [Logz.io](https://logz.io/) - Logz.io is a cloud observability platform for modern engineering teams. The Logz.io platform consists of three products — Log Management, Infrastructure Monitoring, and Cloud SIEM — that work together to unify the jobs of monitoring, troubleshooting, and security. We empower engineers to deliver better software by offering the world's most popular open source observability tools — the ELK Stack, Grafana, and Jaeger — in a single, easy to use, and powerful platform purpose-built for monitoring distributed cloud environments. Cloud SIEM supports data from multiple sources, including Falco's alerts, and offers useful rules and dashboards content to visualize and manage incidents across your systems in a unified UI.
* [Preferral](https://www.preferral.com) - Preferral is a HIPAA-compliant platform for Referral Management and Online Referral Forms. Preferral streamlines the referral process for patients, specialists and their referral partners. By automating the referral process, referring practices spend less time on the phone, manual efforts are eliminated, and patients get the right care from the right specialist. Preferral leverages Falco to provide a Host Intrusion Detection System to meet their HIPAA compliance requirements.
* [Shopify](https://www.shopify.com) - Shopify is the leading multi-channel commerce platform. Merchants use Shopify to design, set up, and manage their stores across multiple sales channels, including mobile, web, social media, marketplaces, brick-and-mortar locations, and pop-up shops. The platform also provides merchants with a powerful back-office and a single view of their business, from payments to shipping. The Shopify platform was engineered for reliability and scale, making enterprise-level technology available to businesses of all sizes. Shopify uses Falco to complement its Host and Network Intrusion Detection Systems.
* [Sight Machine](https://www.sightmachine.com) - Sight Machine is the category leader for manufacturing analytics and used by Global 500 companies to make better, faster decisions about their operations. Sight Machine uses Falco to help enforce SOC2 compliance as well as a tool for real time security monitoring and alerting in Kubernetes.
* [Skyscanner](https://www.skyscanner.net) - Skyscanner is the world's travel search engine for flights, hotels and car rentals. Most of our infrastructure is based on Kubernetes, and our Security team is using Falco to monitor anomalies at runtime, integrating Falco's findings with our internal ChatOps tooling to provide insight on the behavior of our machines in production. We also postprocess and store Falco's results to generate dashboards for auditing purposes.
* [Sumo Logic](https://www.sumologic.com/) - Sumo Logic provides a SaaS based log aggregation service that provides dashboards and applications to easily identify and analyze problems in your application and infrastructure. Sumo Logic provides native integrations for many CNCF projects, such as Falco, that allow end users to easily collect Falco events and analyze Falco events on DevSecOps focused dashboards.
* [Sysdig](https://www.sysdig.com/) - Sysdig originally created Falco in 2016 to detect unexpected or suspicious activity using a rules engine on top of the data that comes from the sysdig kernel system call probe. Sysdig provides tooling to help with vulnerability management, compliance, detection, incident response and forensics in Cloud-native environments. Sysdig Secure has extended falco to include: a rule library, the ability to update macros, lists & rules via the user interface and API, automated tuning of rules, and rule creation based on profiling known system behavior. On top of the basic Falco rules, Sysdig Secure implements the concept of a "Security policy" that can comprise several rules which are evaluated for a user-defined infrastructure scope like Kubernetes namespaces, OpenShift clusters, deployment workload, cloud regions etc.
Want to talk? Join us on the [#falco](https://kubernetes.slack.com/archives/CMWH3EH32) channel in the [Kubernetes Slack](https://slack.k8s.io).
## Overview
Sysdig Falco is a behavioral activity monitor designed to detect anomalous activity in your applications. Powered by sysdig’s system call capture infrastructure, falco lets you continuously monitor and detect container, application, host, and network activity... all in one place, from one source of data, with one set of rules.
### Latest releases
Read the [change log](CHANGELOG.md).
#### What kind of behaviors can Falco detect?
Falco can detect and alert on any behavior that involves making Linux system calls. Thanks to Sysdig's core decoding and state tracking functionality, falco alerts can be triggered by the use of specific system calls, their arguments, and by properties of the calling process. For example, you can easily detect things like:
- A container is running in privileged mode, or is mounting a sensitive path like `/proc` from the host.
- A server process spawns a child process of an unexpected type
- Unexpected read of a sensitive file (like `/etc/shadow`)
- A non-device file is written to `/dev`
- A standard system binary (like `ls`) makes an outbound network connection
Documentation
---
[Visit the wiki](https://github.com/draios/falco/wiki) for full documentation on falco.
Join the Community
---
* Contact the [official mailing list](https://groups.google.com/forum/#!forum/falco) for support and to talk with other users.
* Follow us on [Twitter](https://twitter.com/sysdig) for general falco and sysdig news.
* This is our [blog](https://sysdig.com/blog/), where you can find the latest [falco](https://sysdig.com/blog/tag/falco/) posts.
* Join our [Public Slack](https://sysdig.slack.com) channel for sysdig and falco announcements and discussions.
The Falco Project, originally created by [Sysdig](https://sysdig.com), is an incubating [CNCF](https://cncf.io) open source cloud native runtime security tool.
Falco makes it easy to consume kernel events, and enrich those events with information from Kubernetes and the rest of the cloud native stack.
Falco has a rich rule set of security rules specifically built for Kubernetes, Linux, and cloud-native.
If a rule is violated in a system, Falco will send an alert notifying the user of the violation and its severity.
License Terms
---
Falco is licensed to you under the [GPL 2.0](./COPYING) open source license.
Contributor License Agreements
---
### Background
As we did for sysdig, we are formalizing the way that we accept contributions of code from the contributing community. We must now ask that contributions to falco be provided subject to the terms and conditions of a [Contributor License Agreement (CLA)](./cla). The CLA comes in two forms, applicable to contributions by individuals, or by legal entities such as corporations and their employees. We recognize that entering into a CLA with us involves real consideration on your part, and we’ve tried to make this process as clear and simple as possible.
We’ve modeled our CLA off of industry standards, such as [the CLA used by Kubernetes](https://github.com/kubernetes/kubernetes/blob/master/CONTRIBUTING.md). Note that this agreement is not a transfer of copyright ownership; it is simply a license agreement for contributions, intended to clarify the intellectual property license granted with contributions from any person or entity. It is for your protection as a contributor as well as the protection of falco; it does not change your rights to use your own contributions for any other purpose.
For some background on why contributor license agreements are necessary, you can read FAQs from many other open source projects:
- [A well-written chapter from Karl Fogel’s Producing Open Source Software on CLAs](http://producingoss.com/en/copyright-assignment.html)
- [The Wikipedia article on CLAs](http://en.wikipedia.org/wiki/Contributor_license_agreement)
As always, we are grateful for your past and present contributions to falco.
### What do I need to do in order to contribute code?
**Individual contributions**: Individuals who wish to make contributions must review the [Individual Contributor License Agreement](./cla/falco_contributor_agreement.txt) and indicate agreement by adding the following line to every GIT commit message:
falco-CLA-1.0-signed-off-by: Joe Smith <joe.smith@email.com>
Use your real name; pseudonyms or anonymous contributions are not allowed.
**Corporate contributions**: Employees of corporations, members of LLCs or LLPs, or others acting on behalf of a contributing entity, must review the [Corporate Contributor License Agreement](./cla/falco_corp_contributor_agreement.txt), must be an authorized representative of the contributing entity, and indicate agreement to it on behalf of the contributing entity by adding the following lines to every GIT commit message:
```
falco-CLA-1.0-contributing-entity: Full Legal Name of Entity
falco-CLA-1.0-signed-off-by: Joe Smith <joe.smith@email.com>
```
Use a real name of a natural person who is an authorized representative of the contributing entity; pseudonyms or anonymous contributions are not allowed.
### Installing Falco
If you would like to run Falco in **production** please adhere to the [official installation guide](https://falco.org/docs/installation/).
##### Kubernetes
### Developing
Falco is designed to be extensible such that it can be built into cloud-native applications and infrastructure.
Falco has a [gRPC](https://falco.org/docs/grpc/) endpoint and an API defined in [protobuf](https://github.com/falcosecurity/falco/blob/master/userspace/falco/outputs.proto).
The Falco Project supports various SDKs for this endpoint.
##### SDKs
### What can Falco detect?
Falco can detect and alert on any behavior that involves making Linux system calls.
Falco alerts can be triggered by the use of specific system calls, their arguments, and by properties of the calling process.
For example, Falco can easily detect incidents including but not limited to:
- A shell is running inside a container or pod in Kubernetes.
- A container is running in privileged mode, or is mounting a sensitive path, such as `/proc`, from the host.
- A server process is spawning a child process of an unexpected type.
- Unexpected read of a sensitive file, such as `/etc/shadow`.
- A non-device file is written to `/dev`.
- A standard system binary, such as `ls`, is making an outbound network connection.
- A privileged pod is started in a Kubernetes cluster.
### Documentation
The [Official Documentation](https://falco.org/docs/) is the best resource to learn about Falco.
### Join the Community
To get involved with The Falco Project please visit [the community repository](https://github.com/falcosecurity/community) to find more.
How to reach out?
- Join the #falco channel on the [Kubernetes Slack](https://slack.k8s.io)
- [Join the Falco mailing list](https://lists.cncf.io/g/cncf-falco-dev)
- [Read the Falco documentation](https://falco.org/docs/)
### Contributing
See the [CONTRIBUTING.md](https://github.com/falcosecurity/.github/blob/master/CONTRIBUTING.md).
### Security Audit
A third party security audit was performed by Cure53; you can see the full report [here](./audits/SECURITY_AUDIT_2019_07.pdf).
### Reporting security vulnerabilities
Please report security vulnerabilities following the community process documented [here](https://github.com/falcosecurity/.github/blob/master/SECURITY.md).
### License Terms
Falco is licensed to you under the [Apache 2.0](./COPYING) open source license.
Our release process is mostly automated, but we still need some manual steps to initiate and complete it.
Changes and new features are grouped in [milestones](https://github.com/falcosecurity/falco/milestones), the milestone with the next version represents what is going to be released.
A release happens every two months ([as per community discussion](https://github.com/falcosecurity/community/blob/master/meeting-notes/2020-09-30.md#agenda)), and we need to assign owners for each (usually we pair a new person with an experienced one). Assignees and the due date are proposed during the [weekly community call](https://github.com/falcosecurity/community). Note that hotfix releases can happen as soon as it is needed.
Finally, on the proposed due date the assignees for the upcoming release proceed with the processes described below.
## Pre-Release Checklist
Before cutting a release we need to do some homework in the Falco repository. This should take 5 minutes using the GitHub UI.
### 1. Release notes
- Find the previous release and set `YYYY-MM-DD` to the day before the date of the [latest release](https://github.com/falcosecurity/falco/releases)
- Check the release note block of every PR matching the `is:pr is:merged closed:>YYYY-MM-DD` [filter](https://github.com/falcosecurity/falco/pulls?q=is%3Apr+is%3Amerged+closed%3A%3EYYYY-MM-DD)
- Ensure the release note block follows the [commit convention](https://github.com/falcosecurity/falco/blob/master/CONTRIBUTING.md#commit-convention), otherwise fix its content
- If the PR has no milestone, assign it to the milestone currently undergoing release
- Check issues without a milestone (using [is:pr is:merged no:milestone closed:>YYYY-MM-DD](https://github.com/falcosecurity/falco/pulls?q=is%3Apr+is%3Amerged+no%3Amilestone+closed%3A%3EYYYY-MM-DD) filter) and add them to the milestone currently undergoing release
- Double-check that there are no more merged PRs without the target milestone assigned with the `is:pr is:merged no:milestone closed:>YYYY-MM-DD` [filters](https://github.com/falcosecurity/falco/pulls?q=is%3Apr+is%3Amerged+no%3Amilestone+closed%3A%3EYYYY-MM-DD), if any, fix them
### 2. Milestones
- Move the [tasks not completed](https://github.com/falcosecurity/falco/pulls?q=is%3Apr+is%3Aopen) to a new minor milestone
### 3. Release PR
- Double-check that no hard-coded version number is present in the code; there should be none anywhere:
- If any, manually correct it then open an issue to automate version number bumping later
- The versions table in the `README.md` updates itself automatically
- Generate the change log with https://github.com/leodido/rn2md, or use https://fs.fntlnz.wtf/falco/milestones-changelog.txt for the lazy (it updates every 5 minutes)
- If you run into timeout errors with `rn2md`, generate a GitHub OAuth access token and pass it with `-t`
- Add the latest changes on top of the previous `CHANGELOG.md`
- Submit a PR with the above modifications
- Await PR approval
- Close the completed milestone as soon as the PR is merged
## Release
Now assume `x.y.z` is the new version.
### 1. Create a tag
- Once the release PR has been merged, and the CI has done its job on master, git tag the new release
```
git pull
git checkout master
git tag x.y.z
git push origin x.y.z
```
> **N.B.**: do NOT use an annotated tag
- Wait for the CI to complete
### 2. Update the GitHub release
- [Draft a new release](https://github.com/falcosecurity/falco/releases/new)
- Use `x.y.z` both as tag version and release title
- Use the following template to fill the release description:
```
<!-- Substitute x.y.z with the current release version -->
| deb | [](https://dl.bintray.com/falcosecurity/deb/stable/falco-x.y.z-x86_64.deb) |
<!-- Copy the relevant part of the changelog here -->
### Statistics
| Merged PRs | Number |
| --------------- | ------ |
| Not user-facing | x |
| Release note | x |
| Total | x |
<!-- Calculate stats and fill the above table -->
```
- Finally, publish the release!
### 3. Update the meeting notes
For each release we archive the meeting notes in git for historical purposes.
- The notes from the Falco meetings can be [found here](https://hackmd.io/6sEAlInlSaGnLz2FnFz21A).
- Note: There may be other notes from working groups that can optionally be added as well as needed.
- Add the entire content of the document as a new file labeled `release-x.y.z.md` in [github.com/falcosecurity/community/tree/master/meeting-notes](https://github.com/falcosecurity/community/tree/master/meeting-notes)
- Open up a pull request with the new change.
## Post-Release tasks
Announce the new release to the world!
- Send an announcement to cncf-falco-dev@lists.cncf.io (plain text, please)
- Let folks in the slack #falco channel know that a new release came out
This document describes The Falco Project's branding guidelines, language, and message.
Content in this document can be used to publicly share about Falco.
### Logo
There are 3 logos available for use in this directory. Use the primary logo unless required otherwise due to background issues, or printing.
The Falco logo is Apache 2 licensed and free to use in media and publication for the CNCF Falco project.
### Colors
| Name | PMS | RGB |
|-----------|------|-------------|
| Teal | 3125 | 0 174 199 |
| Cool Gray | 11 | 83 86 90 |
| Black | | 0 0 0 |
| Blue-Gray | 7700 | 22 92 125 |
| Gold | 1375 | 255 158 27 |
| Orange | 171 | 255 92 57 |
| Emerald | 3278 | 0 155 119 |
| Green | 360 | 108 194 74 |
The primary colors are those in the first two rows.
### Slogan
> Cloud Native Runtime Security
### What is Falco?
Falco is a runtime security project originally created by Sysdig, Inc.
Falco was contributed to the CNCF in October 2018.
The CNCF now owns The Falco Project.
### What is Runtime Security?
Runtime security refers to an approach to preventing unwanted activity on a computer system.
With runtime security, an operator deploys **both** prevention tooling (access control, policy enforcement, etc) and detection tooling (systems observability, anomaly detection, etc).
Runtime security is the practice of using detection tooling to detect unwanted behavior, such that it can then be prevented using prevention techniques.
Runtime security is a holistic approach to defense, and is useful in scenarios where prevention tooling either was unaware of an exploit or attack vector, or where defective applications are run in even the most secure environments.
### What does Falco do?
Falco consumes signals from the Linux kernel, and container management tools such as Docker and Kubernetes.
Falco parses the signals and asserts them against security rules.
If a rule has been violated, Falco triggers an alert.
### How does Falco work?
Falco traces kernel events and reports information about the system calls being executed at runtime.
Falco leverages the extended Berkeley Packet Filter (eBPF), a kernel feature designed for dynamic, crash-resilient, and secure code execution in the kernel.
Falco enriches these kernel events with information about containers running on the system.
Falco can also consume signals from other input streams, such as the containerd socket, the Kubernetes API server, and the Kubernetes audit log.
At runtime, Falco will reason about these events and assert them against configured security rules.
Based on the severity of a violation an alert is triggered.
These alerts are configurable and extensible, for instance sending a notification or [plumbing through to other projects like Prometheus](https://github.com/falcosecurity/falco-exporter).
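For instance, here is a minimal sketch of consuming alerts as structured data; it assumes a standard installation with `jq` available, and uses `-o` to override configuration keys at startup:
```
# Emit alerts as JSON instead of plain text and pretty-print them;
# assumes a driver (kernel module or eBPF probe) is already loaded.
falco -o json_output=true | jq .
```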
### Benefits of using Falco
- **Strengthen Security** Create security rules driven by a context-rich and flexible engine to define unexpected application behavior.
- **Reduce Risk** Immediately respond to policy violation alerts by plugging Falco into your current security response workflows and processes.
- **Leverage up-to-date Rules** Alert using community-sourced detections of malicious activity and CVE exploits.
### Falco and securing Kubernetes
Securing Kubernetes requires putting controls in place to detect unexpected behavior that could be malicious or harmful to a cluster or application(s).
Examples of malicious behavior include:
- Exploits of unpatched and new vulnerabilities in applications or Kubernetes itself.
- Insecure configurations in applications or Kubernetes itself.
- Leaked or weak credentials or secret material.
- Insider threats from adjacent applications running at the same layer.
Falco is capable of [consuming the Kubernetes audit logs](https://kubernetes.io/docs/tasks/debug-application-cluster/falco/#use-falco-to-collect-audit-events).
By adding Kubernetes application context and Kubernetes audit logs, teams can understand who did what.
### Writing about Falco
##### Yes
Notice the capitalization of the following terms.
- The Falco Project
- Falco
##### No
- falco
- the falco project
- the Falco project
### Encouraged Phrasing
Below are phrases that the project has reviewed, and found to be effective ways of messaging Falco's value add.
Even when processes are in place for vulnerability scanning and implementing pod security and network policies, not every risk will be addressed. You still need mechanisms to confirm these security barriers are effective, help configure them, and provide a last line of defense when they fail.
##### Falco as a factory
This term refers to the concept that Falco is a stateless processing engine. A large amount of data comes into the engine, but meticulously crafted security alerts come out.
##### The engine that powers...
Falco ultimately is a security engine. It reasons about signals coming from a system at runtime, and can alert if an anomaly is detected.
##### Anomaly detection
This refers to an event in which something unusual, concerning, or odd occurs.
We can associate anomalies with unwanted behavior, and alert in their presence.
##### Detection tooling
Falco does not prevent unwanted behavior.
Falco however alerts when unusual behavior occurs.
This is commonly referred to as **detection** or **forensics**.
---
# Glossary
#### Probe
Used to describe the `.o` object that would be dynamically loaded into the kernel as a secure and stable (e)BPF probe.
This is one option used to pass kernel events up to userspace for Falco to consume.
Sometimes this word is incorrectly used to refer to a `module`.
#### Module
Used to describe the `.ko` object that would be loaded into the kernel as a potentially risky kernel module.
This is one option used to pass kernel events up to userspace for Falco to consume.
Sometimes this word is incorrectly used to refer to a `probe`.
#### Driver
The umbrella term for the software that sends events from the kernel, such as the eBPF `probe` or the kernel `module`.
#### Falco
The name of the project, and also the name of [the main engine](https://github.com/falcosecurity/falco) that the rest of the project is built on.
#### Sysdig, Inc
The name of the company that originally created The Falco Project and later donated it to the CNCF.
#### sysdig
A [CLI tool](https://github.com/draios/sysdig) used to evaluate kernel system events at runtime.
DRAIOS, INC. – OPEN SOURCE CONTRIBUTION LICENSE AGREEMENT (“Agreement”)
Draios, Inc. dba Sysdig (“Draios” or “Sysdig”) welcomes you to work on our open source software projects. In order to clarify the intellectual property license granted with Contributions from any person or entity, you must agree to the license terms below in order to contribute code back to our repositories. This license is for your protection as a Contributor as well as the protection of Sysdig; it does not change your rights to use your own Contributions for any other purpose. To indicate your Agreement, follow the procedure set forth below under TO AGREE, after reading this Agreement.
You accept and agree to the following terms and conditions for Your present and future Contributions submitted to Draios/Sysdig. Except for the license granted herein to Draios/Sysdig and recipients of software distributed by Draios/Sysdig, You reserve all right, title, and interest in and to Your Contributions.
1. Definitions. "You" (or "Your") shall mean the individual natural person and copyright owner who is making this Agreement with Draios/Sysdig. “You” excludes legal entities such as corporations, and Draios/Sysdig provides a separate CLA for corporations or other entities. "Contribution" shall mean any original work of authorship, including any modifications or additions to an existing work, that is intentionally submitted by You to Draios/Sysdig for inclusion in, or documentation of, any of the products owned or managed by Draios/Sysdig (the "Work"). For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to Draios/Sysdig or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, Draios/Sysdig for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by You as "Not a Contribution."
2. Grant of Copyright License. Subject to the terms and conditions of this Agreement, You hereby grant to Draios/Sysdig and to recipients of software distributed by Draios/Sysdig a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare derivative works of, publicly display, publicly perform, sublicense, and distribute Your Contributions and such derivative works.
3. Grant of Patent License. Subject to the terms and conditions of this Agreement, You hereby grant to Draios/Sysdig and to recipients of software distributed by Draios/Sysdig a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims that You have the right to license and that are necessarily infringed by Your Contribution(s) alone or by combination of Your Contribution(s) with the Work to which such Contribution(s) was submitted. If any entity other than Draios/Sysdig institutes patent litigation against You or any other entity (including a cross-claim or counterclaim in a lawsuit) alleging that your Contribution, or the Work to which you have contributed, constitutes direct or contributory patent infringement, then any patent licenses granted to that entity under this Agreement for that Contribution or Work shall terminate as of the date such litigation is filed.
4. You represent to Draios/Sysdig that You are legally entitled to grant the licenses set forth above.
5. You represent that each of Your Contributions is Your original creation unless you act according to section 7 below. You represent that Your Contribution submissions include complete details of any third-party license or other restriction (including, but not limited to, related patents and trademarks) of which You are personally aware and which are associated with any part of Your Contributions. You represent that Your sign-off indicating assent to this Agreement includes your real name and not a pseudonym, and that you shall not attempt or make an anonymous Contribution.
6. You are not expected to provide support for Your Contributions, except to the extent You desire to provide support. You may provide support for free, for a fee, or not at all. Unless required by applicable law or agreed to in writing, You provide Your Contributions to Draios/Sysdig on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE.
7. If You wish to submit work that is not Your original creation, You may submit it to Draios/Sysdig separately from any Contribution, identifying the complete details of its source and of any license or other restriction (including, but not limited to, related patents, trademarks, and license agreements) of which you are personally aware, and conspicuously marking the work as "Submitted on behalf of a third-party: [named here]".
8. You agree to notify Draios/Sysdig of any facts or circumstances of which you become aware that would make these representations inaccurate in any respect.
9. You understand and agree that this project and Your Contribution are public and that a record of the contribution, including all personal information that I submit with it, including my sign-off, may be stored by Draios/Sysdig indefinitely and may be redistributed to others. You understand and agree that Draios/Sysdig has no obligation to use any Contribution in any Draios/Sysdig project or product, and Draios/Sysdig may decline to accept Your Contributions or Draios/Sysdig may remove Your Contributions from Draios/Sysdig projects or products at any time without notice. You understand and agree that Draios/Sysdig is not and will not pay you any form of compensation, in currency, equity or otherwise, in exchange for Your Contributions or for Your assent to this Agreement. You understand and agree that you are independent of Draios/Sysdig and you are not, by entering into this Agreement or providing Your Contributions, becoming employed, hired as an independent contractor, or forming any other relationship with Draios/Sysdig relating to employment, compensation or ownership or involving any fiduciary obligation.
TO AGREE:
Add the following line to every GIT commit message:
falco-CLA-1.0-signed-off-by: Joe Smith <joe.smith@email.com>
Use your real name; pseudonyms or anonymous contributions are not allowed.
DRAIOS, INC. – OPEN SOURCE CONTRIBUTION LICENSE AGREEMENT FOR CONTRIBUTING ENTITIES (SUCH AS CORPORATIONS) (“Agreement”)
Draios, Inc. dba Sysdig (“Draios” or “Sysdig”) welcomes you to work on our open source software projects. In order to clarify the intellectual property license granted with Contributions from any person or entity, you must agree to the license terms below in order to contribute code back to our repositories. This license is for your protection as a Contributor as well as the protection of Sysdig; it does not change your rights to use your own Contributions for any other purpose. To indicate your Agreement, follow the procedure set forth below under TO AGREE, after reading this Agreement.
A “contributing entity” means a corporation, limited liability company, partnership, or other entity that is organized and recognized under the laws of a state of the United States or another country (a “contributing entity”). We provide a separate CLA for individual contributors.
You accept and agree to the following terms and conditions for Your present and future Contributions that are submitted to Draios/Sysdig. Except for the license granted herein to Draios/Sysdig and recipients of software distributed by Draios/Sysdig, You reserve all right, title, and interest in and to Your Contributions.
1. Definitions. "You" (or "Your") shall mean the contributing entity that owns for copyright purposes or otherwise has the right to contribute the Contribution, and that is making this Agreement with Draios/Sysdig, and all other entities that control, are controlled by, or are under common control with the contributing entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "Contribution" shall mean any original work of authorship, including any modifications or additions to an existing work, that is intentionally submitted by You to Draios/Sysdig for inclusion in, or documentation of, any of the products owned or managed by Draios/Sysdig (the "Work"). For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to Draios/Sysdig or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, Draios/Sysdig for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by You as "Not a Contribution."
2. Grant of Copyright License. Subject to the terms and conditions of this Agreement, You hereby grant to Draios/Sysdig and to recipients of software distributed by Draios/Sysdig a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare derivative works of, publicly display, publicly perform, sublicense, and distribute Your Contributions and such derivative works.
3. Grant of Patent License. Subject to the terms and conditions of this Agreement, You hereby grant to Draios/Sysdig and to recipients of software distributed by Draios/Sysdig a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims that You have the right to license and that are necessarily infringed by Your Contribution(s) alone or by combination of Your Contribution(s) with the Work to which such Contribution(s) was submitted. If any entity other than Draios/Sysdig institutes patent litigation against You or any other entity (including a cross-claim or counterclaim in a lawsuit) alleging that your Contribution, or the Work to which you have contributed, constitutes direct or contributory patent infringement, then any patent licenses granted to that entity under this Agreement for that Contribution or Work shall terminate as of the date such litigation is filed.
4. You represent to Draios/Sysdig that You own or have the right to contribute Your Contributions to Draios/Sysdig, and that You are legally entitled to grant the licenses set forth above.
5. You represent that each of Your Contributions is Your original creation (see section 7 for submissions on behalf of others). You represent that Your Contribution submissions include complete details of any third-party license or other restriction (including, but not limited to, related patents and trademarks) of which You are personally aware and which are associated with any part of Your Contributions. You represent that Your sign-off indicating assent to this Agreement includes the real name of a natural person who is an authorized representative of You, and not a pseudonym, and that You are not attempting or making an anonymous Contribution.
6. You are not expected to provide support for Your Contributions, except to the extent You desire to provide support. You may provide support for free, for a fee, or not at all. Unless required by applicable law or agreed to in writing, You provide Your Contributions to Draios/Sysdig on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE.
7. If You wish to submit work that is not Your original creation, You may submit it to Draios/Sysdig separately from any Contribution, identifying the complete details of its source and of any license or other restriction (including, but not limited to, related patents, trademarks, and license agreements) of which You are aware, and conspicuously marking the work as "Submitted on behalf of a third-party: [named here]".
8. You agree to notify Draios/Sysdig of any facts or circumstances of which you become aware that would make these representations inaccurate in any respect.
9. You understand and agree that this project and Your Contribution are public and that a record of the contribution, including all personal information that You submit with it, including the sign-off of Your authorized representative, may be stored by Draios/Sysdig indefinitely and may be redistributed to others. You understand and agree that Draios/Sysdig has no obligation to use any Contribution in any Draios/Sysdig project or product, and Draios/Sysdig may decline to accept Your Contributions or Draios/Sysdig may remove Your Contributions from Draios/Sysdig projects or products at any time without notice. You understand and agree that Draios/Sysdig is not and will not pay You any form of compensation, in currency, equity or otherwise, in exchange for Your Contributions or for Your assent to this Agreement. You understand and agree that You are independent of Draios/Sysdig and You are not, by entering into this Agreement or providing Your Contributions, becoming employed, hired as an independent contractor, or forming any other relationship with Draios/Sysdig relating to employment, compensation or ownership or involving any fiduciary obligation.
TO AGREE:
Add the following lines to every GIT commit message:
falco-CLA-1.0-contributing-entity: Full Legal Name of Entity
falco-CLA-1.0-signed-off-by: Joe Smith <joe.smith@email.com>
Use a real name of a natural person who is an authorized representative of the contributing entity; pseudonyms or anonymous contributions are not allowed.
message(STATUS"cppcheck command not found, static code analysis using cppcheck will not be available.")
else()
message(STATUS"cppcheck found at: ${CPPCHECK}")
# we are aware that cppcheck can be run
# along with the software compilation in a single step
# using the CMAKE_CXX_CPPCHECK variables.
# However, for practical needs we want to keep the
# two things separated and have a specific target for it.
# Our cppcheck target reads the compilation database produced by CMake
set(CMAKE_EXPORT_COMPILE_COMMANDSOn)
add_custom_target(
cppcheck
COMMAND${CPPCHECK}
"--enable=all"
"--force"
"--inconclusive"
"--inline-suppr"# allows to specify suppressions directly in source code
"--project=${CMAKE_CURRENT_BINARY_DIR}/compile_commands.json"# use the compilation database as source
"--quiet"
"--xml"# we want to generate a report
"--output-file=${CMAKE_CURRENT_BINARY_DIR}/static-analysis-reports/cppcheck/cppcheck.xml"# generate the report under the reports folder in the build folder
"-i${CMAKE_CURRENT_BINARY_DIR}"# exclude the build folder
)
endif()# CPPCHECK
if(NOTCPPCHECK_HTMLREPORT)
message(STATUS"cppcheck-htmlreport command not found, will not be able to produce html reports for cppcheck results")
else()
message(STATUS"cppcheck-htmlreport found at: ${CPPCHECK_HTMLREPORT}")
- snprintf(error, SCAP_LASTERR_SIZE, "Too many sysdig instances attached to device %s. Current value for /sys/module/" PROBE_DEVICE_NAME "_probe/parameters/max_consumers is '%"PRIu32"'.", filename, curr_max_consumers);
+ snprintf(error, SCAP_LASTERR_SIZE, "Too many Falco instances attached to device %s. Current value for /sys/module/" PROBE_DEVICE_NAME "/parameters/max_consumers is '%"PRIu32"'.", filename, curr_max_consumers);
This directory contains various ways to package Falco as a container and related tools.
## Currently Supported Images
| Name | Directory | Description |
|---|---|---|
| [falcosecurity/falco:latest](https://hub.docker.com/repository/docker/falcosecurity/falco), [falcosecurity/falco:_tag_](https://hub.docker.com/repository/docker/falcosecurity/falco), [falcosecurity/falco:master](https://hub.docker.com/repository/docker/falcosecurity/falco) | docker/falco | Falco (DEB built from a git tag or from master) with the complete build toolchain. |
| [falcosecurity/falco-driver-loader:latest](https://hub.docker.com/repository/docker/falcosecurity/falco-driver-loader), [falcosecurity/falco-driver-loader:_tag_](https://hub.docker.com/repository/docker/falcosecurity/falco-driver-loader), [falcosecurity/falco-driver-loader:master](https://hub.docker.com/repository/docker/falcosecurity/falco-driver-loader) | docker/driver-loader | `falco-driver-loader` as entrypoint, with the build toolchain. |
| [falcosecurity/falco-no-driver:latest](https://hub.docker.com/repository/docker/falcosecurity/falco-no-driver), [falcosecurity/falco-no-driver:_tag_](https://hub.docker.com/repository/docker/falcosecurity/falco-no-driver), [falcosecurity/falco-no-driver:master](https://hub.docker.com/repository/docker/falcosecurity/falco-no-driver) | docker/no-driver | Falco (TGZ built from a git tag or from master) without the build toolchain. |
| [falcosecurity/falco-builder:latest](https://hub.docker.com/repository/docker/falcosecurity/falco-builder) | docker/builder | The complete build tool chain for compiling Falco from source. See [the documentation](https://falco.org/docs/source/) for more details on building from source. Used to build Falco (CI). |
| [falcosecurity/falco-tester:latest](https://hub.docker.com/repository/docker/falcosecurity/falco-tester) | docker/tester | Container image for running the Falco test suite. Used to run Falco integration tests (CI). |
| _not to be published_ | docker/local | Built on the fly and used by falco-tester. |
> Note: `falco-builder`, `falco-tester` (and the `docker/local` image that it's built on the fly) are not integrated into the release process because they are development and CI tools that need to be manually pushed only when updated.
In the docker-compose output, you'll see the following Falco warnings:
```
falco | 23:19:56.528652447: Warning Outbound connection on non-http(s) port by a process in a fbash session (command=curl -so ./botnet_client.py http://localhost:9090/botnet_client.py connection=127.0.0.1:43639->127.0.0.1:9090)
falco | 23:19:56.528667589: Warning Outbound connection on non-http(s) port by a process in a fbash session (command=curl -so ./botnet_client.py http://localhost:9090/botnet_client.py connection=)
falco | 23:19:56.530758087: Warning Outbound connection on non-http(s) port by a process in a fbash session (command=curl -so ./botnet_client.py http://localhost:9090/botnet_client.py connection=::1:41996->::1:9090)
falco | 23:19:56.605318716: Warning Unexpected listen call by a process in a fbash session (command=python ./botnet_client.py)
falco | 23:19:56.605323967: Warning Unexpected listen call by a process in a fbash session (command=python ./botnet_client.py)
```
# Demo of Falco with bash exec via a poorly designed REST API
## Introduction
This example shows how a server could have a poorly designed API that
allowed a client to execute arbitrary programs on the server, and how
that behavior can be detected using Sysdig Falco.
`server.js` in this directory defines the server. The poorly designed
API is this route handler:
```javascript
router.get('/exec/:cmd', function(req, res) {
    var output = child_process.execSync(req.params.cmd);
    res.send(output);
});

app.use('/api', router);
```
It blindly takes the URL portion after `/api/exec/<cmd>` and tries to
execute it. A horrible design choice(!), but it allows us to easily show
Sysdig Falco's capabilities.
## Demo architecture
### Start everything using docker-compose
From this directory, run the following:
```
$ docker-compose -f demo.yml up
```
This starts the following containers:
* express_server: simple express server exposing a REST API under the endpoint `/api/exec/<cmd>`.
* falco: will detect when you execute a shell via the express server.
### Access URLs under `/api/exec/<cmd>` to run arbitrary commands.
Run commands like the following to execute arbitrary programs such as 'ls', 'pwd', etc.:
```
$ curl http://localhost:8080/api/exec/ls
demo.yml
node_modules
package.json
README.md
server.js
```
```
$ curl http://localhost:8080/api/exec/pwd
.../examples/nodejs-bad-rest-api
```
### Try to run bash via `/api/exec/bash`; Falco sends an alert.
If you try to run bash via `/api/exec/bash`, Falco will generate an alert:
```
falco | 22:26:53.536628076: Warning Shell spawned in a container other than entrypoint (user=root container_id=6f339b8aeb0a container_name=express_server shell=bash parent=sh cmdline=bash )
```
We intend to build a simple gRPC server and SDKs - e.g., [falco#785](https://github.com/falcosecurity/falco/issues/785) - to allow users to receive and consume the alerts regarding violated rules.
## Motivation
The most valuable information that Falco can give to its users are the alerts.
An alert is an "output" when it goes over a transport, and it is emitted by Falco every time a rule is matched.
At the moment, however, Falco can only deliver alerts in very basic ways, for example by dumping them to standard output.
For this reason, many Falco users have asked - e.g., via issues such as [falco#528](https://github.com/falcosecurity/falco/issues/528) - or in the [slack channel](https://slack.k8s.io) whether we could find a more consumable and extensible way to implement Falco outputs.
The motivation behind this proposal is to design a new output implementation that can meet our users' needs.
### Goals
- To decouple the outputs from the Falco code base
- To design and implement an additional output mode by means of a gRPC **streaming** server
- To keep it as simple as possible
- To have a simple contract interface
- To only have the responsibility to route Falco output requests and responses
- To continue supporting the old output formats by implementing the same interface they use
- To be secure by default (**mutual TLS** authentication)
- To be **asynchronous** and **non-blocking**
- To provide a connection over unix socket (no authentication)
- To implement a Go client
- To implement a Rust client
- To implement a Python client
### Non-Goals
- To substitute existing outputs (stdout, syslog, etc.)
- To support queuing systems other than the default (round-robin) one
- To support queuing mechanisms for message retransmission
- Users can have a local gRPC relay server along with Falco that multiplexes connections and handles retries and backoff
- To change the output format
- To make the message context (text, fields, etc.) and format configurable
- Users can already override rules changing their output messages
- To act as an orchestrator for Falco instances
## Proposal
### Use cases
- Receive Falco events with a well-defined contract over the wire
- Integrate Falco events with existing alerting/paging mechanisms
- Integrate Falco events with existing monitoring infrastructures/tools
- Falco outputs SDKs for different languages
### Diagrams
The following sequence diagram illustrates the flow happening for a single rule being matched and the consequent alert through the gRPC output client.
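Purely as an illustration of the intended consumption model, a client could subscribe to the alert stream over the unix socket with a generic gRPC CLI such as `grpcurl`; the service/method names and socket path below are placeholders, not the contract this proposal defines:
```
# Illustrative only: "falco.outputs.service/subscribe" and the socket
# path are assumptions, not the final API surface.
grpcurl -plaintext -unix /var/run/falco.sock falco.outputs.service/subscribe
```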
# Support for K8s Pod Security Policies (PSPs) in Falco
<!-- toc -->
- [Summary](#summary)
- [Motivation](#motivation)
* [Goals](#goals)
* [Non-Goals](#non-goals)
- [Proposal](#proposal)
* [Use cases](#use-cases)
* [Diagrams](#diagrams)
* [Design Details](#design-details)
<!-- tocstop -->
## Summary
We want to make it easier for K8s cluster operators to author Pod Security Policies by providing a way to read a PSP, convert it to a set of Falco rules, and then run Falco with those rules.
## Motivation
PSPs provide a rich powerful framework to restrict the behavior of pods and apply consistent security policies across a cluster, but it’s difficult to know the gap between what you want your security policy to be and what your cluster is actually doing. Additionally, since PSPs enforce once applied, they might prevent pods from running, and the process of tuning a PSP live on a cluster can be disruptive and painful.
That's where Falco comes in. We want to make it possible for Falco to perform a "dry run" evaluation of a PSP, translating it to Falco rules that observe the behaviour of deployed pods and sending alerts for violations, *without* blocking. This helps accelerate the authoring cycle, providing a complete authoring framework for PSPs without deploying straight to the cluster.
### Goals
Transparently read a candidate PSP into an equivalent set of Falco rules that can look for the conditions in the PSP.
The PSP is converted into a set of Falco rules which can either be saved to a file for later use/inspection, or loaded directly so they can monitor system calls and k8s audit activity.
### Non-Goals
Falco will not automatically read PSPs from a cluster, will not install PSPs, and will not provide guidance on the parts of your infrastructure that are already covered by PSPs. This feature only helps with the testing part of a candidate PSP. For coming up with an initial PSP, you can use tools like [Kube PSP Advisor](https://github.com/sysdiglabs/kube-psp-advisor).
The use case here is for cluster operators who want to author PSPs, but don't want to just put one in a cluster and see what breaks. For example, if your PSP sets privileged to false, but it turns out some of your pods are running privileged, those pods won't be able to start.
With this feature, they could iterate without enforcement until they have a PSP that matches the actual behaviour of their cluster. Some of that will come from changing the PSP, some of that will come from changing the behaviour of the cluster. The important part is that it's not mistakenly preventing things from running while you're figuring it out.
## Proposal
### Use cases
You'll be able to run Falco with a `--psp` argument that provides a single PSP yaml file. Falco will automatically convert the PSP into an equivalent set of Falco rules, load the rules, and then run with the loaded rules. You can optionally provide a `--psp_save=<path>` command line option to save the converted rules to a file, for example:
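A minimal sketch, with illustrative file names and the flags proposed above:
```
# Dry-run a candidate PSP: convert it to Falco rules, save them for
# inspection, and run Falco with the converted rules loaded.
falco --psp ./candidate-psp.yaml --psp_save=./psp_converted_rules.yaml
```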
### Diagrams
No diagrams yet.
### Design Details
* We'll use [inja](https://github.com/pantor/inja) as the templating engine.
* For the most part, we can rely on the existing framework of rules, filter expressions, and output expressions that already exist in Falco. One significant change will be that filter fields can extract more than one "value" per event, and we'll need to define new operators to perform set comparisons between values in an event and values in the comparison right-hand side.
* This will rely heavily on existing support for [K8s Audit Events](https://falco.org/docs/event-sources/kubernetes-audit/) in Falco.
This is a proposal to better structure the Falco API.
The Falco API is a set of contracts describing how users can interact with Falco.
By defining a set of interfaces, the Falco Authors intend to decouple Falco from other software and data (e.g., from the input sources) and, at the same time, make it more extensible.
Thus, this document's intent is to propose the list of services that constitute the Falco API (targeting the first stable version of Falco, v1.0.0).
## Motivation
We want to enable users to use third-party clients to interface with Falco outputs, inputs, rules, and configurations.
Such ability would enable the community to create a whole set of OSS tools, built on top of Falco.
Propose some basic naming conventions when new lists, macros, rules are introduced.
## Motivation
We want to help people from the community contribute to Falco rules. This will help improve the security content provided by Falco out of the box. Since people have different preferences for naming things, it's necessary to set forth some basic naming conventions for people to follow when creating new rules, macros, and lists.
### Goals
People will have to follow the naming conventions when introducing new Falco rules, macros, and lists.
### Non-Goals
This proposal does not intend to cover Falco rule syntax.
## Proposal
### Use cases
When new PRs are created in the area of rules, reviewers need to examine whether new rules, macros, or lists are introduced. If so, they check whether those follow the naming conventions.
### Diagrams
N/A
### Design Details
#### Rule
- Rule Name: Use a phrase with every word capitalized except prepositions (e.g. `Search Private Keys or Passwords`)
- Description: Use a sentence, always starting with "Detect" and ending with a period (e.g. `Detect grep private keys or passwords activity.`)
- Output: Use a sentence. It must at least include the output fields (user=%user.name command=%proc.cmdline container_id=%container.id)
- Tags: Use at least one of the following: [network, process, filesystem]. Using mitre_* tags where applicable is also encouraged
#### Macro
- Macro Name: Use lowercase_separated_by_underscores (e.g. `parent_java_running_zookeeper`)
#### List
- List Name: Use lowercase_separated_by_underscores (e.g. `protected_shell_spawning_binaries`)
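A minimal sketch tying the conventions together; the list, macro, and rule below are illustrative examples, not shipped content:
```
# Write an example rules file that follows the naming conventions above.
cat > example_rules.yaml <<'EOF'
- list: protected_shell_spawning_binaries
  items: [login, sshd]

- macro: parent_java_running_zookeeper
  condition: (proc.pname=java and proc.pcmdline contains zookeeper)

- rule: Search Private Keys or Passwords
  desc: Detect grep private keys or passwords activity.
  condition: proc.name=grep and proc.args contains "private"
  output: Grep private keys or passwords activity (user=%user.name command=%proc.cmdline container_id=%container.id)
  priority: WARNING
  tags: [process, filesystem, mitre_credential_access]
EOF
```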
The **Falco Artifact Scope** proposal is divided into two parts:
1. Part 1 - *this document*: the state of the art of Falco artifacts
2. [Part 2](./20200506-artifacts-scope-part-2.md): the intended state moving forward
## Summary
As a project we would like to support the following artifacts.
Everything else will be moved to [contrib](https://github.com/falcosecurity/contrib).
As a project we will build, change, rename, and move files, documents, scripts, and configurations according to the new state of the art described in [Part 2](./20200506-artifacts-scope-part-2.md).
Inspired by many previous issues and many of the weekly community calls.
## Terms
**falco**
*The Falco binary*
**driver**
*System call provider from the Linux kernel. Either (`bpf`, `module`)*
**falco-driver-loader**
*The bash script found [here](https://github.com/falcosecurity/falco/blob/master/scripts/falco-driver-loader) that tries to compile the driver (kernel module or eBPF probe), falling back to downloading it.*
**package**
*An installable artifact that is operating system specific. All packages MUST be hosted on [bintray](https://bintray.com/falcosecurity).*
**image**
*OCI compliant container image hosted on dockerhub with tags for every release and the current master branch.*
# Packages
List of the currently official packages (x86 64-bit only):
- `falco-x.y.z-x86_64.deb` for Debian-like systems; it installs the kernel module by default
- `falco-x.y.z-x86_64.rpm` for RPM-like systems; it installs the kernel module by default
- `falco-x.y.z-x86_64.tar.gz` for binary installation; it contains the `falco` binary, the `falco-driver-loader` script, the drivers source, and related dependencies
# Images
List of the currently official container images (x86 64-bit only):
| Name | Directory | Description |
|---|---|---|
| [falcosecurity/falco:latest](https://hub.docker.com/repository/docker/falcosecurity/falco), [falcosecurity/falco:_tag_](https://hub.docker.com/repository/docker/falcosecurity/falco), [falcosecurity/falco:master](https://hub.docker.com/repository/docker/falcosecurity/falco) | docker/stable | Falco (DEB built from a git tag or from master) with the complete build toolchain. |
| [falcosecurity/falco:latest-slim](https://hub.docker.com/repository/docker/falcosecurity/falco), [falcosecurity/falco:_tag_-slim](https://hub.docker.com/repository/docker/falcosecurity/falco), [falcosecurity/falco:master-slim](https://hub.docker.com/repository/docker/falcosecurity/falco) | docker/slim | Falco (DEB built from a git tag or from master) without the build toolchain. |
| [falcosecurity/falco-driver-loader:latest](https://hub.docker.com/repository/docker/falcosecurity/falco-driver-loader), [falcosecurity/falco-driver-loader:_tag_](https://hub.docker.com/repository/docker/falcosecurity/falco-driver-loader), [falcosecurity/falco-driver-loader:master](https://hub.docker.com/repository/docker/falcosecurity/falco-driver-loader) | docker/driver-loader | `falco-driver-loader` as entrypoint, with the build toolchain. |
| [falcosecurity/falco-builder:latest](https://hub.docker.com/repository/docker/falcosecurity/falco-builder) | docker/builder | The complete build tool chain for compiling Falco from source. See [the documentation](https://falco.org/docs/source/) for more details on building from source. Used to build Falco (CI). |
| [falcosecurity/falco-tester:latest](https://hub.docker.com/repository/docker/falcosecurity/falco-tester) | docker/tester | Container image for running the Falco test suite. Used to run Falco integration tests (CI). |
| _not to be published_ | docker/local | Built on the fly and used by falco-tester. |
**Note**: `falco-builder`, `falco-tester` (and the `docker/local` image, which is built on the fly by `falco-tester`) are not integrated into the release process because they are development and CI tools that only need to be pushed manually when updated.
# Falco Project Evolution
We will model a loosely defined adoption of the Kubernetes and CNCF incubator efforts.
The criteria will remain loose, and will tighten as needed at the discretion of the Falco open source community.
### contrib
"_Sandbox level_"
This new [contrib](https://github.com/falcosecurity/contrib) repository will be equivalent to the `Falco Sandbox` and serves as a place for the community to `test-drive` ideas/projects/code.
### repository
"_Incubating level_" projects such as [falco-exporter](https://github.com/falco-exporter) can be promoted from `contrib` to their own repository.
This is done as needed, and can best be measured by the need to cut a release and use the GitHub release features. Again, this is at the discretion of the Falco open source community.
### official support
As the need for a project grows, it can ultimately achieve the highest and most coveted status within The Falco Project: "_Official support_".
The artifacts listed above are part of the official Falco release process. These artifacts will be refined and amended by [Part 2](./20200506-artifacts-scope-part-2.md).
# Action
*Part 1* is mainly intended as a cleanup process.
For each item not listed above, ask if it needs to be moved or deleted.
After the cleanup process, all items will match *Part 1* of this proposal.
### Action Items
Here are SOME of the items that would need to be done, for example:
- Remove `minimal` from the `falco` repository (it's almost identical to `slim`; we don't need two images for the same purpose)
- Rename the `driverloader` image to `falco-driver-loader` (since it has not been released yet, we can rename it without breaking things)
- Move everything else to contrib
- Move [/integrations](https://github.com/falcosecurity/falco/tree/master/integrations) to contrib
- Move [/examples](https://github.com/falcosecurity/falco/tree/master/examples) to contrib
- Old documentation
### Documentation
Update documentation in [falco-website#184](https://github.com/falcosecurity/falco-website/pull/184).
### Adjusting projects
- YAML manifest documentation to be moved to `contrib`
- Minikube, Kind, Puppet, Ansible, etc. documentation to be moved to `contrib`