It turns out that if you read this rules file with falco versions 0.24.0 and
earlier, it can't parse the bare string containing colons
(ignore the misleading error context; that's a different problem):
```
Thu Sep 10 10:31:23 2020: Falco initialized with configuration file
/etc/falco/falco.yaml
Thu Sep 10 10:31:23 2020: Loading rules from file
/tmp/k8s_audit_rules.yaml:
Thu Sep 10 10:31:23 2020: Runtime error: found unexpected ':'
---
source: k8s_audit
tags: [k8s]
# In a local/user rules file, you could override this macro to
```
I think the change in 0.25.0 to use a bundled libyaml fixed the problem,
as it also upgraded libyaml to a version that fixed
https://github.com/yaml/libyaml/pull/104.
Work around the problem with earlier falco releases by quoting the colon.
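As a sketch of the workaround (the list name and value below are illustrative, not the exact line from k8s_audit_rules.yaml):
```
# A bare value containing a colon inside a flow sequence trips libyaml
# versions that lack the yaml/libyaml#104 fix:
#   items: [system:masters]
# Quoting the value parses on falco 0.24.0 and earlier as well:
- list: example_allowed_k8s_roles
  items: ["system:masters"]
```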
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
In some cases, when removing a container, dockerd will itself remove the
entire overlay filesystem, including a shell history file:
---
Shell history had been deleted or renamed (user=root type=unlinkat
command=dockerd -H fd://
... name=/var/lib/docker/overlay2/.../root/.bash_history ..
---
To avoid these FPs, skip paths starting with /var/lib/docker.
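Roughly, the exception looks like this (the macro name and exact condition are an assumption, not a quote from falco_rules.yaml):
```
# Skip files that dockerd removes while tearing down an overlay filesystem.
- macro: var_lib_docker_filepath
  condition: (evt.arg.name startswith /var/lib/docker or fd.name startswith /var/lib/docker)
```
The shell-history rule can then add `and not var_lib_docker_filepath` to its condition.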
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Add system:managed-certificate-controller as a system role that can be
modified. It can be changed as part of upgrades.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Add several images seen in GKE environments that can run in the
kube-system namespace.
Also change the names of the lists to be more specific. The old names
are kept around for backwards compatibility.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Add a set of images known to run in the host network. Mostly related to
GKE, plus some metrics-collection images.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Sort the items in the list falco_privileged_images alphabetically
and separate them onto individual lines. This makes it easier to track
changes to individual entries using git blame.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Previously, any write to a file called sources.list would match the
access_repositories condition, even a file like /usr/tmp/..../sources.list.
Change the macro so that files in repository_files must be somewhere
below one of the repository_directories.
Also allow programs spawned by package management programs to change
these files, using package_mgmt_ancestor_procs.
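A minimal sketch of the tightened condition, using the list/macro names mentioned above (the paths and the choice of operator are illustrative, and the ancestor-process exception is omitted):
```
- list: repository_files
  items: [sources.list]

- list: repository_directories
  items: [/etc/apt, /etc/yum.repos.d]

- macro: access_repositories
  # The written file must live below one of the repository directories,
  # not merely be named sources.list anywhere on the filesystem.
  condition: (fd.name pmatch (repository_directories) and fd.filename in (repository_files))
```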
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Let programs spawned by linux-bench (the CIS Linux Benchmark program) read
/etc/shadow. The benchmark's tests check the file's permissions and the
accounts listed in its contents.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
In some cases, dropped events around the time a new container is started
can result in missing the exec/clone for a process that does a setns to
enter the namespace of a container. Here's an example from an oss
capture:
```
282273 09:01:22.098095673 30 runc:[0:PARENT] (168555) < setns res=0
282283 09:01:22.098138869 30 runc:[0:PARENT] (168555) < setns res=0
282295 09:01:22.098179685 30 runc:[0:PARENT] (168555) < setns res=0
517284 09:01:30.128723777 13 <NA> (168909) < setns res=0
517337 09:01:30.129054963 13 <NA> (168909) < setns res=0
517451 09:01:30.129560037 2 <NA> (168890) < setns res=0
524597 09:01:30.162741004 19 <NA> (168890) < setns res=0
527433 09:01:30.179786170 18 runc:[0:PARENT] (168927) < setns res=0
527448 09:01:30.179852428 18 runc:[0:PARENT] (168927) < setns res=0
535566 09:01:30.232420372 25 nsenter (168938) < setns res=0
537412 09:01:30.246200357 0 nsenter (168941) < setns res=0
554163 09:01:30.347158783 17 nsenter (168950) < setns res=0
659908 09:01:31.064622960 12 runc:[0:PARENT] (169023) < setns res=0
659919 09:01:31.064665759 12 runc:[0:PARENT] (169023) < setns res=0
732062 09:01:31.608297074 4 nsenter (169055) < setns res=0
812985 09:01:32.217527319 6 runc:[0:PARENT] (169077) < setns res=0
812991 09:01:32.217579396 6 runc:[0:PARENT] (169077) < setns res=0
813000 09:01:32.217632211 6 runc:[0:PARENT] (169077) < setns res=0
```
When this happens, it can cause false positives for the "Change thread
namespace" rule as it allows certain process names like "runc",
"containerd", etc to perform setns calls.
Other rules already use the proc_name_exists macro to require that the
process name exists. This change adds proc_name_exists to the Change
Thread Namespace rule as well.
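The equivalent change, expressed as an append fragment for brevity (the actual commit edits the rule condition in place; the macro definition shown is the usual one but is written here from memory):
```
# Ensure proc.name is populated; "<NA>" is the placeholder when it is not.
- macro: proc_name_exists
  condition: (proc.name != "<NA>")

# Require a known process name before the per-process exceptions apply.
- rule: Change thread namespace
  append: true
  condition: and proc_name_exists
```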
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
This update adds the uid of the process that initiated the event to the
output. This is really important for processes that are started by a
different user name.
Signed-off-by: Chuck Schweizer <chuck.schweizer.lvk2@statefarm.com>
dockerd and docker have a "-current" suffix on CentOS and RHEL. This
macro does not match those names, causing false positives in multiple
rules that use it.
Signed-off-by: Radu Andries <radu@sysdig.com>
Add the 'docker.io/falcosecurity/falco' image to the 'falco_privileged_images' macro. This prevents messages like this when falco starts up:
```
Warning Pod started with privileged container (user=system:serviceaccount:kube-system:daemon-set-controller pod=falco-42brw ns=monitoring images=docker.io/falcosecurity/falco:0.24.0)
```
Signed-off-by: Nicolas Vanheuverzwijn <nicolas.vanheu@gmail.com>
kops 1.17 adds a kube-apiserver-healthcheck user: https://github.com/kubernetes/kops/tree/master/cmd/kube-apiserver-healthcheck
Logs are currently spammed with:
```
{"output":"18:02:15.466580992: Warning K8s Operation performed by user not in allowed list of users (user=kube-apiserver-healthcheck target=<NA>/<NA> verb=get uri=/healthz resp=200)","priority":"Warning","rule":"Disallowed K8s User","time":"2020-06-29T18:02:15.466580992Z", "output_fields": {"jevt.time":"18:02:15.466580992","ka.response.code":"200","ka.target.name":"<NA>","ka.target.resource":"<NA>","ka.uri":"/healthz","ka.user.name":"kube-apiserver-healthcheck","ka.verb":"get"}}
```
Signed-off-by: Antoine Deschênes <antoine.deschenes@equisoft.com>
These application binaries raise events in the `Change thread namespace`
rule as part of their normal operation.
Here are more details regarding each binary:
- `protokube` : See [this](https://github.com/kubernetes/kops/tree/master/protokube)
- `dockerd` : The `dockerd` process name is whitelisted already in this
rule, but not if it is the parent, which will happen if you are doing
docker-in-docker.
- `tini` : See [this](https://github.com/krallin/tini)
- `aws` : This one I noticed because Falco itself uses the AWS CLI to
send events to SNS, which was triggering this rule.
Signed-off-by: Nicolas Marier <nmarier@coveo.com>
While using Falco, I noticed we were getting many events that were
virtually identical to those that were previously filtered out by the
`exe_running_docker_save` macro, but where the `cmdline` was something
like `exe /var/run/docker/netns/cc5c7b9bb110 all false`. I believe this
is caused by the use of docker-in-docker.
Signed-off-by: Nicolas Marier <nmarier@coveo.com>
A macro like this is useful because configuration management software
may need to run containers with an attached terminal to perform some of
its duties, and users may want to ignore this behavior.
Signed-off-by: Nicolas Marier <nmarier@coveo.com>
This macro is useful to allow binaries to be installed under certain
circumstances. For example, it may be fine to install a binary during a
build in a ci/cd pipeline.
Signed-off-by: Nicolas Marier <nmarier@coveo.com>
Since `evt.arg[1]` does not work for all syscalls, switch to:
- `evt.arg.path` for `rmdir` and `unlink` (used by `remove` macro)
- `evt.arg.name` for `unlinkat` (used by `remove` macro)
- `evt.arg.oldpath/newpath` for `rename` and `renameat` (used by `rename` macro)
That ensures `Modify binary dirs` works properly.
Note that we cannot yet use `renameat2` (not supported by sinsp, see https://github.com/draios/sysdig/issues/1603 )
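A sketch of what the affected macros end up looking like (the macro names and the prefix list are illustrative; only the argument fields come from the change described above):
```
# Path of the object being removed, across rmdir/unlink/unlinkat.
- macro: bin_dir_rm
  condition: >
    (evt.arg.path startswith /bin/ or evt.arg.path startswith /usr/bin/ or
     evt.arg.name startswith /bin/ or evt.arg.name startswith /usr/bin/)

# Old/new path of the object being renamed, across rename/renameat.
- macro: bin_dir_rename
  condition: >
    (evt.arg.oldpath startswith /bin/ or evt.arg.oldpath startswith /usr/bin/ or
     evt.arg.newpath startswith /bin/ or evt.arg.newpath startswith /usr/bin/)
```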
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
Since the dir's path is found:
- in `evt.arg[1]` for `mkdir`
- but in `evt.arg[2]` for `mkdirat`
switch to `evt.arg.path` to catch both.
That ensures `Mkdir binary dirs` works properly.
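Sketched the same way (macro name and directory list illustrative):
```
# evt.arg.path holds the new directory's path for both mkdir and mkdirat.
- macro: bin_dir_mkdir
  condition: >
    (evt.arg.path startswith /bin/ or evt.arg.path startswith /sbin/ or
     evt.arg.path startswith /usr/bin/ or evt.arg.path startswith /usr/sbin/)
```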
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
This macro will be useful because it will make it possible to filter out
events with a higher degree of granularity than is currently possible
for the `Set Setuid or Setgid bit` rule.
For example, if some application is expected to set the setuid or the
setgid bit under a specific condition, like if it's started with a
specific command, then the `user_known_chmod_applications` list is not
enough because we don't want to filter out _all_ events by this
application, only specific ones. This macro allows that.
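For example, a local/user rules file could override the new macro to ignore one specific invocation only (the macro name and condition below are assumptions for illustration):
```
# Hypothetical override: only ignore the setuid bit being set by this exact
# command line, not every event from the application.
- macro: user_known_set_setuid_or_setgid_bit_conditions
  condition: (proc.cmdline startswith "chmod u+s /opt/myapp/bin/helper")
```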
Signed-off-by: Nicolas Marier <nmarier@coveo.com>
https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler
Example alert:
---
K8s Operation performed by user not in allowed list of
users (user=vpa-recommender target=vpa-recommender/endpoints verb=update
uri=core/v1/namespaces/kube-system/endpoints/vpa-recommender resp=200)
K8s Operation performed by user not in allowed list of
users (user=vpa-updater target=vpa-updater/endpoints verb=update
uri=core/v1/namespaces/kube-system/endpoints/vpa-updater resp=200)
---
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Example event. I'm pretty sure the full file in this case is /etc/lvm/cache:
---
File below /etc opened for writing (user=root command=lvs --noheadings
--readonly --separator=";" -a -o
lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size parent=ceph-volume
pcmdline=ceph-volume /usr/sbin/ceph-volume inventory --format json file=/etc/lvm/c...
---
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
"The Azure's NPM is a a daemonset that supports network policies as
defined by the Kubernetes policy specification."
Example event:
---
Log files were tampered (user=root command=azure-npm
file=/var/log/iptables.conf CID1 image=mcr.microsoft.com/containernetworking/azure-npm)
---
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Instead of using the request object to identify service account tokens,
exclude any secrets activity by system users (e.g. users starting with
"system:"). This allows the rules to work on k8s audit events at
Metadata level instead of RequestResponse level.
Also change the example objects for automated tests to ones collected at
Metadata level.
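The exclusion boils down to a check like the following (the macro name and exact expression are an assumption):
```
# True for any user that is not a Kubernetes system user.
- macro: non_system_user
  condition: (not ka.user.name startswith "system:")
```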
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
New rules K8s Secret Created/K8s Secret Deleted detect creating/deleting
secrets, following the pattern of the other "K8s XXX Created/Deleted"
rules. One minor difference is that service account token secrets are
excluded, as those are created automatically as namespaces are created.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
It's useful to ignore some system binaries that use the network under
certain conditions, so this should be overridable by the user.
Signed-off-by: Nicolas Marier <nmarier@coveo.com>
This makes it more convenient to add more allowed procs and many other
rules have a similar mechanism to whitelist certain processes.
Signed-off-by: Nicolas Marier <nmarier@coveo.com>
Sample Falco alert:
```
Shell spawned by untrusted binary (user=git shell=sh parent=puma reactor
cmdline=sh -c pgrep -fl "unicorn.* worker\[.*?\]" pcmdline=puma reactor
gparent=puma ggparent=runsv aname[4]=ru...
```
https://github.com/puma/puma says it is "A Ruby/Rack web server built
for concurrency".
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Sample falco alert:
```
File below /etc opened for writing (user=root command=cp
/run/secrets/kubernetes.io/serviceaccount/ca.crt
/etc/pki/ca-trust/source/anchors/openshift-ca.crt parent=bash
pcmdline=bash -c #!/bin/bash\nset -euo pipefail\n\n# set by the node
image\nunset KUB...
```
The exception is conditioned on running in a container.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
* Use the user_known_package_manager_in_container_conditions macro in the "Launch Package Management Process in Container" rule
Signed-off-by: Jean-Philippe Lachance <jplachance@coveo.com>
The rule detects the execution of the k8s client tool in a container and
logs it with WARNING priority.
Signed-off-by: David de Torres <detorres.david@gmail.com>
Refactor how JSON event/k8s audit events extract values in two important
ways:
1. An event can now extract multiple values.
2. The extracted value is a class json_event_value instead of a simple
string.
The driver for 1. was that some filtercheck fields like
"ka.req.container.privileged" actually should extract multiple values,
as a pod can have multiple containers and it doesn't make sense to
summarize that down to a single value.
The driver for 2. is that by having an object represent a single
extracted value, you can also hold things like numbers e.g. ports, uids,
gids, etc. and ranges e.g. [0:3]. With an object, you can override
operators ==, <, etc. to do comparisons between the numbers and ranges,
or even set membership tests between extracted numbers and sets of
ranges.
This is really handy for a lot of new fields implemented as a part of
PSP support, where you end up having to check for overlaps between the
paths, images, ports, uids, etc in a K8s Audit Event and the acceptable
values, ranges, path prefixes enumerated in a PSP.
Implementing these changes also involves an overhaul of how aliases are
implemented. Instead of having an optional "formatting" function, where
arguments to the formatting function were expressed as text within the
index, define optional extraction and indexing functions. If an
extraction function is defined, it's responsible for taking the full
json object and calling add_extracted_value() to add values. There's a
default extraction function that uses a list of json_pointers with
automatic iteration over array values returned by a json pointer.
There's still a notion of filter fields supporting indexes--that's
simply handled within the default extraction or custom extraction
function. And for most fields, there won't be a need to write a custom
extraction function simply to implement indexing.
Within a json_event_filter_check object, instead of having a single
extracted value as a string, hold a vector of extracted json_event_value
objects (vector because order matters) and a set of json_event_value
objects (for set comparisons) as m_evalues. Values on the right hand
side of the expression are held as a set m_values.
json_event_filter_check::compare now supports IN/INTERSECTS as set
comparisons. It also supports PMATCH using path_prefix_search objects,
which simplifies checks like ka.req.pod.volumes.hostpath--now they can
be expressed as "ka.req.pod.volumes.hostpath intersects (/proc,
/var/run/docker.sock, /, /etc, /root)" instead of
"ka.req.volume.hostpath[/proc]=true or
ka.req.volume.hostpath[/root]=true or ...".
Define ~10 new filtercheck fields that extract pod properties like
hostIpc, readOnlyRootFilesystem, etc. that are relevant for PSP validation.
As a part of these changes, also clarify the names of filter fields
related to pods to always have a .pod in the name. Furthermore, fields
dealing with containers in a pod always have a .pod.containers prefix in
the name.
Finally, change the comparisons for existing k8s audit rules to use
"intersects" and/or "in" when appropriate instead of a single equality
comparison.
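As an example of the new comparison style, the hostpath check mentioned above can be written directly in a macro (the condition is copied from the description; the macro name is illustrative):
```
- macro: sensitive_vol_mount
  condition: >
    (ka.req.pod.volumes.hostpath intersects (/proc, /var/run/docker.sock, /, /etc, /root))
```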
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Without this, as ecs-agent starts we get a bunch of errors that look
like this (reformatted for readability):
Notice Container with sensitive mount started (
user=root
command=init -- /agent ecs-agent (id=19d4e98bb0dc)
image=amazon/amazon-ecs-agent:latest
mounts=/proc:/host/proc:ro:false:rprivate,$lotsofthings
)
ecs-agent needs those to work properly, so this can cause lots of false
positives when starting a new instance.
Signed-off-by: Felipe Bessa Coelho <fcoelho.9@gmail.com>
If this works as intended, PRs will automatically get area labels depending on the files they modify.
If desired, the author can still apply other area labels manually, by slash command or by editing the PR template when opening the PR.
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
GKE regularly calls /exec.fifo from both a system level, and within
individual falco pods. As is this triggers errors multiple times every
hour. This change adds /exec.fifo to the expected files below root that
will be called.
Signed-off-by: Jonathan McGowan <jonnymcgow7@gmail.com>
1. Extend macro mkdir with syscall mkdirat (#337)
2. add placeholder for whitelist in rule Clear Log Activities (#632)
Signed-off-by: kaizhe <derek0405@gmail.com>
add docker.io/ to the trusted images list
Signed-off-by: kaizhe <derek0405@gmail.com>
rule update: add container.id and image to the rule output, except for rules with "not container" in the condition
Signed-off-by: kaizhe <derek0405@gmail.com>
Remove empty line
Signed-off-by: Kaizhe Huang <derek0405@gmail.com>
Start using a falco_ prefix for falco-provided lists/macros. Not
changing existing object names to retain compatibility.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Define macros k8s_audit_always_true/k8s_audit_never_true that work for
k8s audit events. Use them in macros that were asserting true/false values.
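A plausible implementation, mirroring the trick used on the syscall side (the exact conditions are an assumption):
```
# jevt.rawtime is always present on a k8s audit event, so these macros
# trivially always/never match.
- macro: k8s_audit_always_true
  condition: (jevt.rawtime exists)

- macro: k8s_audit_never_true
  condition: (jevt.rawtime=0)
```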
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Previously, the exceptions for Launch Privileged Container/Launch
Sensitive Mount Container came from a list of "trusted" images and/or a
macro that defined "trusted" containers. We want more fine-grained
control over the exceptions for these rules, so split them into
exception lists/macros that are specific to each rule. This defines:
- falco_privileged_images: only those images that are known to require
privileged=true
- falco_privileged_containers: uses privileged_images and (for now) still
allows all openshift images
- user_privileged_containers: allows user exceptions
- falco_sensitive_mount_images: only those images that are known to perform
sensitive mounts
- falco_sensitive_mount_containers: uses sensitive_mount_images
- user_sensitive_mount_containers: allows user exceptions
For backwards compatibility purposes only, we keep the trusted_images
list and user_trusted_containers macro and they are still used as
exceptions for both rules. Comments recommend using the more
fine-grained alternatives, though.
While defining these lists, also do another survey to see if the images still
require these permissions, and remove the ones that don't. Removed:
- quay.io/coreos/flannel
- consul
Moved to sensitive mount only:
- gcr.io/google_containers/hyperkube
- datadog
- gliderlabs/logspout
Finally, get rid of the k8s audit-specific lists of privileged/sensitive
mount images, relying on the ones in falco_rules.yaml.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
* Allow containerd to start containers
Needed for IBM Cloud Kubernetes Service
* Whitelist state checks for galley(istio)
Galley is a component of istio
https://istio.io/docs/reference/commands/galley/
* Whitelist calico writing /status.json
This is the observed behaviour on IBM Cloud Kubernetes Service
* Add whitelisting for keepalived config file
Some newer distros default to Python 3 rather than Python 2, which causes Ansible to trigger these rules.
falco-CLA-1.0-contributing-entity: 1500 Services Ltd
falco-CLA-1.0-signed-off-by: Chris Northwood <chris.northwood@1500cloud.com>
Please note that
registry.access.redhat.com/sematext/agent and
registry.access.redhat.com/sematext/logagent
are not available yet; we are in the process of certification ...
* Fix parentheses for rpm_procs macro
Ensures a preceding not will apply to the whole macro
* Let anything write to /etc/fluent/configs.d
It looks like a lot of scripted programs (shell scripts running cp, sed,
arbitrary ruby programs) are run by fluentd to set up config. They're
too generic to identify, so just add /etc/fluent/configs.d to
safe_etc_dirs, sadly.
* Let java setup write to /etc/passwd in containers
/opt/jboss/container/java/run/run-java.sh and /opt/run-java/run-java.sh
write to /etc/passwd in a container, probably to add a user. Add an
exception for them.
* Remove netstat as a generic network program
We'll try to limit the list to programs that can broadly see activity or
actually create traffic.
* Rules for inbound conn sources, not outbound
Replace "Unexpected outbound connection source" with "Unexpected inbound
connection source" to watch inbound connections by source instead of
outbound connections by source. The rule itself is pretty much unchanged
other than switching to using cip/cnet instead of sip/snet.
Expand the supporting macros so they include outbound/inbound in the
name, to make it clearer.
* rules update: add rules for mitre framework
* rules update: add mitre persistence rules
* minor changes
* add exclude hidden directories list
* limit hidden files creation in container
* minor fix
* minor fix
* tune rules to have only_check_container macro
* rules update: add rules for remove data from disk and clear log
* minor changes
* minor fix rule name
* add check_container_only macro
* addresses comments
* add rule for updating package repos
* Don't consider dd a bulk writer
There are enough legitimate cases to exclude it.
* Make cron/chmod policies opt-in
They have enough legitimate uses that we shouldn't run by default.
* minor fix
* Fix mistake in always_true macro
comparison operator was wrong.
* Whitespace diffs
* Add opt-in rules for interp procs + networking
New rules "Interpreted procs inbound network activity" and "Interpreted
procs outbound network activity" check for any network activity being
done by interpreted programs like ruby, python, etc. They aren't enabled
by default, as there are many legitimate cases where these programs
might perform inbound or outbound networking. Macros
"consider_interpreted_inbound" and "consider_interpreted_outbound" can
be used to enable them.
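To opt in, a local/user rules file would override the macros, e.g. (a sketch; the shipped defaults presumably use never_true):
```
- macro: consider_interpreted_outbound
  condition: (always_true)

- macro: consider_interpreted_inbound
  condition: (always_true)
```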
* Opt-in rule for running network tools on host
New rule Lauch Suspicious Network Tool on Host is similar to "Lauch
Suspicious Network Tool in Container" [sic] but works on the host. It's
not enabled by default, but can be enabled using the macro
consider_network_tools_on_host.
* Add parens around container macro
* Make Modify User Context generic to shell configs
Rename Modify User Context to Modify Shell Configuration File to note
that it's limited to shell configuration files, and expand the set of
files to cover a collection of file names and files for zsh, csh, and
bash.
* Also prevent shells from directly opening conns
Bash can directly open network connections by writing to
/dev/{tcp,udp}/<addr>/<port>. These aren't actual files, but are
interpreted by bash as instructions to open network connections.
* Add rule to detect shell config reads
New rule Read Shell Configuration File is analogous to Write Shell
Configuration File, but handles reads by programs other than shell
programs. It's also disabled by default using consider_shell_config_reads.
* Add rule to check ssh directory/file reads
New rule Read ssh information looks for any open of a file or directory
below /root/.ssh or a user ssh directory. ssh binaries (new list
ssh_binaries) are excluded.
The rule is also opt-in via the macro consider_ssh_reads.
* Rule to check for disallowed http proxies
New rule "Program run with disallowed http proxy env" looks for spawned
programs that have a HTTP_PROXY environment variable, but the value of
the HTTP_PROXY is not an expected value.
This handles attempts to redirect traffic to unexpected locations.
* Add rules showing how to categorize outbound conns
New rules Unexpected outbound connection destination and Unexpected
outbound connection source show how to categorize network connections by
either destination or source ip address, netmask, or domain name.
In order to be effective, they require a comprehensive set of allowed
sources and/or destinations, so they both require customization and are
gated by the macro consider_all_outbound_conns.
* Add .bash_history to bash config files
* Restrict http proxy rule to specific procs
Only considering wget, curl for now.
* Shell programs can directly modify config
Most notably .bash_history.
* Use right system_procs/binaries
system_binaries doesn't exist, so use system_procs + an additional test
for shell_binaries.
* rule update: add MITRE tags for rules
* update mitre tags with all lower case and add two more rules
* add two more mitre_persistence rules plus minor changes
* replace contains with icontains
* limit search passwd in container
* Also let dockerd-current setns()
* Add additional setns programs
Let oci-umount (https://github.com/containers/oci-umount) setns().
* Let Openscap RPM probes touch rpm db
Define a list openscap_rpm_binaries containing openscap probes related
to rpm and let those binaries touch the rpm database.
* Let oc write to more directories below /etc
Make the prefix more general, allowing any path below /etc/origin/node.
* Skip incomplete container info for container start
In the container_started macro, ensure that the container metadata is
complete after either the container event (very unlikely) or after the
exec of the first process into the container (very likely now that
container metadata fetches are async).
When using these rules with older falco versions, this macro will still
work as the synchronous container metadata fetch will result in a
repository that isn't "incomplete".
* Update test traces to have full container info
Some test trace files used for regression tests didn't have full
container info, and once we started looking for those fields, the tests
stopped working.
So update the traces, and event counts to match.
+ Add the user_known_write_root_conditions macro to allow custom conditions in the "Write below root" rule
+ Add the user_known_non_sudo_setuid_conditions to allow custom conditions in the "Non sudo setuid" rule
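Both are meant to be overridden in a local/user rules file, roughly like this (the conditions are made-up examples):
```
- macro: user_known_write_root_conditions
  condition: (fd.name = /root/.my_tool_state)

- macro: user_known_non_sudo_setuid_conditions
  condition: (proc.name = my-trusted-daemon)
```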
falco-CLA-1.0-contributing-entity: Coveo Solutions Inc.
falco-CLA-1.0-signed-off-by: Jean-Philippe Lachance <jplachance@coveo.com>
* Add support for container metaevent to detect container spawning
Create a new macro "container_started" that checks both the old and
the new condition.
Also, only look for execve exit events with vpid=1.
* Use TBB_INCLUDE_DIR for consistency w sysdig,agent
Previously it was a mix of TBB_INCLUDE and TBB_INCLUDE_DIR.
* Build using matching sysdig branch, if exists
Instead of using container.image, which always reports the raw string
used to spawn the container, switch to the more reliable
container.image.{repository,tag}, since they are guaranteed to report
the actual repository/tag of the container image.
This also gives a small performance improvement since a single 'in'
predicate can now be used instead of a sequence of startswith checks.
* Add ability to print field names only
Add ability to print field names only instead of all information about
fields (description, etc) using -N cmdline option.
This will be used to add some versioning support steps that check for a
changed set of fields.
* Add an engine version that changes w/ filter flds
Add a method falco_engine::engine_version() that returns the current
engine version (e.g. set of supported fields, rules objects, operators,
etc.). It's defined in falco_engine_version.h, starts at 2 and should be
updated whenever a breaking change is made.
The most common reason for an engine change will be an update to the set
of filter fields. To make this easy to diagnose, add a build time check
that compares the sha256 output of "falco --list -N" against a value
that's embedded in falco_engine_version.h. A mismatch fails the build.
* Check engine version when loading rules
A rules file can now have a field "required_engine_version N". If
present, the number is compared to the falco engine version. If the
falco engine version is less, an error is thrown.
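In a rules file this is expressed as a top-level item, e.g. (version number illustrative):
```
# Refuse to load this file on falco engines older than version 2.
- required_engine_version: 2
```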
* Unit tests for engine versioning
Add a required_engine_version of 2 to one test's rules file to check the positive case
and add a new test that verifies that a too-new rules file won't be loaded.
* Rename falco test docker image
Rename sysdig/falco to falcosecurity/falco in unit tests.
* Don't pin falco_rules.yaml to an engine version
Currently, falco_rules.yaml is compatible with versions <= 0.13.1 other
than the required_engine_version object itself, so keep that line
commented out so users can use this rules file with older falco
versions.
We'll uncomment it with the first incompatible falco engine change.
* Improved inbound/outbound macros
Improved versions of inbound/outbound macros that add coverage for
recvfrom/recvmsg, sendto/sendmsg and also ignore non-blocking syscalls
in a different way.
* Let nginx-ingress-c(ontroller) write to /etc/nginx
The process name is truncated due to the comm length limit.
Also fix some parentheses for another write_etc_common macro.
* Let calico setns also.
* Let prometheus-conf write its config
Let prometheus-conf write its config below /etc/prometheus.
* Let openshift oc write to /etc/origin/node
Add k8s audit rules to falco's config so they are read by default.
Rename some generic macros like modify, create, delete in the k8s audit
rules so they don't overlap with macros in the main rules file.
* Add mounting to /var/lib/kubelet* as a sensitive mount
* Fix GKE/Istio false positives
- Allow kubectl to write below /root/.kube
- Allow loopback/bridge (e.g. /home/kubernetes/bin/) to setns.
- Let istio pilot-agent write to /etc/istio.
- Let google_accounts(_daemon) write user .ssh files.
- Add /health as an allowed file below /.
This fixes https://github.com/falcosecurity/falco/issues/439.
* Improve ufw/cloud-init exceptions
Tie them to both the program and the file being written.
Also move the cloud-init exception to monitored_directory.
* Add new json/webserver libs, embedded webserver
Add two new external libraries:
- nlohmann-json is a better json library that has stronger use of c++
features like type deduction, better conversion from stl structures,
etc. We'll use it to hold generic json objects instead of jsoncpp.
- civetweb is an embeddable webserver that will allow us to accept
posted json data.
New files webserver.{cpp,h} start an embedded webserver that listens for
POSTS on a configurable url and passes the json data to the falco
engine.
New falco config items are under webserver:
- enabled: true|false. Whether to start the embedded webserver or not.
- listen_port: port that the webserver listens on.
- k8s_audit_endpoint: uri on which to accept POSTed k8s audit events.
(This commit doesn't compile entirely on its own, but we're grouping
these related changes into one commit for clarity).
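In falco.yaml the new block looks roughly like this (the port and endpoint values are illustrative defaults):
```
webserver:
  enabled: true
  listen_port: 8765
  k8s_audit_endpoint: /k8s_audit
```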
* Don't use relative paths to find lua code
You can look directly below PROJECT_SOURCE_DIR.
* Reorganize compiler lua code
The lua compiler code is generic enough to work on more than just
sinsp-based rules, so move the parts of the compiler related to event
types and filterchecks out into a standalone lua file
sinsp_rule_utils.lua.
The checks for event types/filterchecks are now done from rule_loader,
and are dependent on a "source" attribute of the rule being
"sinsp". We'll be adding additional types of events next that come from
sources other than system calls.
* Manage separate syscall/k8s audit rulesets
Add the ability to manage separate sets of rules (syscall and
k8s_audit). Stop using the sinsp_evttype_filter object from the sysdig
repo, replacing it with falco_ruleset/falco_sinsp_ruleset from
ruleset.{cpp,h}. It has the same methods to add rules, associate them
with rulesets, and (for syscall) quickly find the relevant rules for a
given syscall/event type.
At the falco engine level, there are new parallel interfaces for both
types of rules (syscall and k8s_audit) to:
- add a rule: add_k8s_audit_filter/add_sinsp_filter
- match an event against rules, possibly returning a result:
process_sinsp_event/process_k8s_audit_event
At the rule loading level, the mechanics of creating filterchecks
objects is handled by two factories (sinsp_filter_factory and
json_event_filter_factory), both of which are held by the engine.
* Handle multiple rule types when parsing rules
Modify the steps of parsing a rule's filter expression to handle
multiple types of rules. Notable changes:
- In the rule loader/ast traversal, pass a filter api object down,
which is passed back up in the lua parser api calls like nest(),
bool_op(), rel_expr(), etc.
- The filter api object is either the sinsp factory or k8s audit
factory, depending on the rule type.
- When the rule is complete, the complete filter is passed to the
engine using either add_sinsp_filter()/add_k8s_audit_filter().
* Add multiple output formatting types
Add support for multiple output formatters. Notable changes:
- The falco engine is passed along to falco_formats to gain access to
the engine's factories.
- When creating a formatter, the source of the rule is passed along
with the format string, which controls which kind of output formatter
is created.
Also clean up exception handling a bit so all lua callbacks catch all
exceptions and convert them into lua errors.
* Add support for json, k8s audit filter fields
With some corresponding changes in sysdig, you can now create general
purpose filter fields and events, which can be tied together with
nesting, expressions, and relational operators. The classes here
represent an instance of these fields devoted to generic json objects as
well as k8s audit events. Notable changes:
- json_event: holds a json object, used by all of the below
- json_event_filter_check: Has the ability to extract values out of a
json_event object and has the ability to define macros that associate
a field like "group.field" with a json pointer expression that
extracts a single property's value out of the json object. The basic
field definition also allows creating an index
e.g. group.field[index], where a std::function is responsible for
performing the indexing. This class has virtual void methods so it
must be overridden.
- jevt_filter_check: subclass of json_event_filter_check and defines
the following fields:
- jevt.time/jevt.rawtime: extracts the time from the underlying json object.
- jevt.value[<json pointer>]: general purpose way to extract any
json value out of the underlying object. <json pointer> is a json
pointer expression
- jevt.obj: Return the entire object, stringified.
- k8s_audit_filter_check: implements fields that extract values from
k8s audit events. Most of the implementation is in the form of macros
like ka.user.name, ka.uri, ka.target.name, etc. that just use json
pointers to extract the appropriate value from a k8s audit event. More
advanced fields like ka.uri.param, ka.req.container.image use
indexing to extract individual values out of maps or arrays.
- json_event_filter_factory: used by things like the lua parser api,
output formatter, etc to create the necessary objects and return
them.
- json_event_formatter: given a format string, create the necessary
fields that will be used to create a resolved string when given a
json_event object.
* Add ability to list fields
Similar to sysdig's -l option, add --list (<source>) to list the fields
supported by falco. With no source specified, will print all
fields. Source can be "syscall" for inspector fields e.g. what is
supported by sysdig, or "k8s_audit" to list fields supported only by the
k8s audit support in falco.
* Initial set of k8s audit rules
Add an initial set of k8s audit rules. They're broken into 3 classes of
rules:
- Suspicious activity: this includes things like:
- A disallowed k8s user performing an operation
- A disallowed container being used in a pod.
- A pod created with a privileged container.
- A pod created with a sensitive mount.
- A pod using host networking
- Creating a NodePort Service
- A configmap containing private credentials
- A request being made by an unauthenticated user.
- Attach/exec to a pod. (We eventually want to also do privileged
pods, but that will require some state management that we don't
currently have).
- Creating a new namespace outside of an allowed set
- Creating a pod in either of the kube-system/kube-public namespaces
- Creating a serviceaccount in either of the kube-system/kube-public
namespaces
- Modifying any role starting with "system:"
- Creating a clusterrolebinding to the cluster-admin role
- Creating a role that wildcards verbs or resources
- Creating a role with writable permissions/pod exec permissions.
- Resource tracking. This includes noting when a deployment, service,
configmap, cluster role, service account, etc. are created or destroyed.
- Audit tracking: This tracks all audit events.
To support these rules, add macros/new indexing functions as needed to
support the required fields and ways to index the results.
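For a flavor of these rules, here is a sketch of the disallowed-user case (the rule name and output follow the alert shown earlier in this log; the list contents are assumptions, and the real rule gates on additional audit-event macros):
```
- list: allowed_k8s_users
  items: [minikube, kubelet, kube-proxy]

- rule: Disallowed K8s User
  desc: Detect any k8s operation by a user outside of an allowed set of users.
  condition: not ka.user.name in (allowed_k8s_users)
  output: K8s Operation performed by user not in allowed list of users (user=%ka.user.name verb=%ka.verb uri=%ka.uri resp=%ka.response.code)
  priority: WARNING
  source: k8s_audit
  tags: [k8s]
```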
* Add ability to read trace files of k8s audit evts
Expand the use of the -e flag to cover both .scap files containing
system calls as well as jsonl files containing k8s audit events:
If a trace file is specified, first try to read it using the
inspector. If that throws an exception, try to read the first line as
json. If both fail, return an error.
Based on the results of the open, the main loop either calls
do_inspect(), looping over system events, or
read_k8s_audit_trace_file(), reading each line as json and passing it to
the engine and outputs.
* Example showing how to enable k8s audit logs.
An example of how to enable k8s audit logging for minikube.
* Add unit tests for k8s audit support
Initial unit test support for k8s audit events. A new multiplex file
falco_k8s_audit_tests.yaml defines the tests. Traces (jsonl files) are
in trace_files/k8s_audit and new rules files are in
test/rules/k8s_audit.
Current test cases include:
- User outside allowed set
- Creating disallowed pod.
- Creating a pod explicitly on the allowed list
- Creating a pod w/ a privileged container (or second container), or a
pod with no privileged container.
- Creating a pod w/ a sensitive mount container (or second container), or a
pod with no sensitive mount.
- Cases for a trace w/o the relevant property + the container being
trusted, and hostnetwork tests.
- Tests that create a Service w/ and w/o a NodePort type.
- Tests for configmaps: tries each disallowed string, ensuring each is
detected, and the other has a configmap with no disallowed string,
ensuring it is not detected.
- The anonymous user creating a namespace.
- Tests for all kactivity rules e.g. those that create/delete
resources as compared to suspicious activity.
- Exec/Attach to Pod
- Creating a namespace outside of an allowed set
- Creating a pod/serviceaccount in kube-system/kube-public namespaces
- Deleting/modifying a system cluster role
- Creating a binding to the cluster-admin role
- Creating a cluster role binding that wildcards verbs or resources
- Creating a cluster role with write/pod exec privileges
* Don't manually install gcc 4.8
gcc 4.8 should already be installed by default on the vm we use for
travis.
* Add additional rpm writing programs
rhn_check, yumdb.
* Add 11-dhclient as a dhcp binary
* Let runuser read below pam
It reads those files to check permissions.
* Let chef write to /root/.chef*
Some deployments write directly below /root.
* Refactor openshift privileged images
Rework how openshift images are handled:
Many customers deploy to a private registry, which would normally
involve duplicating the image list for the new registry. Now, split the
image prefix search (e.g. <host>/openshift3) from the check of the image
name. The prefix search is in allowed_openshift_registry_root, and can
be easily overridden to add a new private registry hostname. The image
list check is in openshift_image, is conditioned on
allowed_openshift_registry_root, and does a contains search instead of a
prefix match.
Also try to get a more comprehensive set of possible openshift3 images,
using online docs as a guide.
* Also let sdchecks directly setns
A new macro python_running_sdchecks is similar to
parent_python_running_sdchecks but works on the process itself.
Add this as an exception to Change thread namespace.
* Use correct copyright years.
Also include the start year.
* Improve copyright notices.
Use the proper start year instead of just 2018.
Add the right owner Draios dba Sysdig.
Add copyright notices to some files that were missing them.
Replace references to GNU Public License to Apache license in:
- COPYING file
- README
- all source code below falco
- rules files
- rules and code below test directory
- code below falco directory
- entrypoint for docker containers (but not the Dockerfiles)
I didn't generally add copyright notices to all the examples files, as
they aren't core falco. If they did refer to the gpl I changed them to
apache.
* Add dpkg-divert as a debian package mgmt program.
* Add pip3 as a package mgmt program.
* Let ucpagent write config
Since the name is fairly generic (apiserver), require that it runs in a
container with image docker/ucp-agent.
* Let iscsi admin programs write config
* Add parent to some output strings
Will aid in addressing false positives.
* Let update-ca-trust write to pki files
* Add additional root writing programs
- zap: web application security tool
- airflow: apache app for managing data pipelines
- rpm can sometimes write below /root/.rpmdb
- maven can write groovy files
* Expand redis etc files
Additional program redis-launcher.(sh) and path /etc/redis.
* Add additional root directories
/root/workspace could be used by jenkins, /root/oradiag_root could be
used by Oracle 11 SQL*Net.
* Add pam-config as an auth program
* Add additional trusted containers
openshift image inspector, alternate name for datadog agent, docker ucp
agent, gliderlabs logspout.
* Add microdnf as a rpm binary.
https://github.com/rpm-software-management/microdnf
* Let coreos update-ssh-keys write /home/core/.ssh
* Allow additional writes below /etc/iscsi
Allow any path starting with /etc/iscsi.
* Add additional /root write paths
Additional files, with /root/workspace changing from a directory to a
path prefix.
* Add additional openshift trusted container.
* Also allow grandparents for ms_oms_writing_conf
In some cases the program spawns intermediate shells, for example:
07:15:30.756713513: Error File below /etc opened for writing (user= command=StatusReport.sh /opt/microsoft/omsconfig/Scripts/StatusReport.sh D34448EA-363A-42C2-ACE0-ACD6C1514CF1 EndTime parent=sh pcmdline=sh -c /opt/microsoft/omsconfig/Scripts/StatusReport.sh D34448EA-363A-42C2-ACE0-ACD6C1514CF1 EndTime file=/etc/opt/omi/conf/omsconfig/last_statusreport program=StatusReport.sh gparent=omiagent ggparent=omiagent gggparent=omiagent) k8s.pod= container=host k8s.pod= container=host
This should fix #387.
* Add alternatives as a binary dir writer
It can set symlinks below binary dirs.
* Let userhelper read sens.files/write below /etc
Part of usermode package, can be used by oVirt.
* Let package mgmt progs urlgrabber pki files
Some package management programs run urlgrabber-ext-{down} to update pki
files.
* Add additional root directory
for Jupyter-notebook
* Let brandbot write to /etc/os-release
Used on centos
* Add an additional veritas conf directory.
Also /etc/opt/VRTS...
* Let appdynamics spawn shells
Java, so we look at parent cmdline.
* Add more ancestors to output
In an attempt to track down the source of some additional shell
spawners, add additional parents.
* Let chef write below bin dirs/rpm database
Rename an existing macro chef_running_yum_dump to python_running_chef
and add additional variants.
Also add chef-client as a package management binary.
* Remove dangling macro.
No longer in use.
* Add additional volume mgmt progs
Add pvscan as a volume management program and add an additional
directory below /etc. Also rename the macro to make it more generic.
* Let openldap write below /etc/openldap
Only program is run-openldap.sh for now.
* Add additional veritas directory
Also /etc/vom.
* Let sed write /etc/sedXXXXX files
These are often seen in install scripts for rpm/deb packages. The test
only checks for /etc/sed, as we don't have anything like a regex match
or glob operator.
* Let dse (DataStax Search) write to /root
Only file is /root/tmp__.
* Add additional mysql programs and directories
Add run-mysqld and /etc/my.cnf.d directory.
* Let redis write its config below /etc.
* Let id program open network connections
Seen using port 111 (sun-rpc, but really user lookups).
* Opt-in rule for protecting tomcat shell spawns
Some users want to consider any shell spawned by tomcat suspect for
example, protecting against the famous apache struts attack
CVE-2017-5638, while others do not.
Split the difference by adding a macro
possibly_parent_java_running_tomcat, but disabling it by default.
* added ossec-syscheckd to read_sensitive_file_binaries
* Add "Write below monitored directory"
Take the technique used by "Write below binary dir", and make it more
general, expanding to a list of "monitored directories". This contains
common directories like /boot, /lib, etc.
It has a small workaround to look for home ssh directories without using
the glob operator, which has a pending fix in
https://github.com/draios/sysdig/pull/1153.
* Fix FPs
Move monitored_dir to after evt type checks and allow mkinitramfs to
write below /boot
* Addl boot writers.
* Improve compatibility with falco 0.9.0
Temporarily remove some rules features that are not compatible with
falco 0.9.0. We'll release a new falco soon, after which we'll add these
rules features back.
* Disable the unexpected udp traffic rule by default
Some applications will connect a udp socket to an address only to
test connectivity. Assuming the udp connect works, they will follow
up with a tcp connect that actually sends/receives data.
This occurs often enough that we don't want to enable the Unexpected UDP
Traffic rule by default, so add a macro do_unexpected_udp_check which is
set to never_true. To opt-in, override the macro to use the condition
always_true.
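Concretely, the opt-in is an override in a local/user rules file (sketch):
```
# The shipped default is `condition: (never_true)`. To opt in, redefine the
# macro in a local/user rules file:
- macro: do_unexpected_udp_check
  condition: (always_true)
```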
* added new command lines for rabbitMQ
* added httpd_writing_ssl_conf macro and add it to write_etc_common
* modified httpd_writing_ssl_conf to add additional files
* added additional command to httpd_writing_ssl_conf
* Wrap condition
Wrap condition with folded style.
* Consolidate test connect ports into one list
There were several exceptions for apps that do a udp connect on an
address simply to see if it works, followed by a tcp connect that
actually sends/receives data.
Unify these exceptions into a single list test_connect_ports, and add
port 9 (discard, used by dockerd).
* Add Rule for unexpected udp traffic
New rule Unexpected UDP Traffic checks for udp traffic not on a list of
expected ports. Currently blocked on
https://github.com/draios/falco/issues/308.
* Add sendto/recvfrom in inbound/outbound macros
Expand the inbound/outbound macros to handle sendto/recvfrom events, so
they can work on unconnected udp sockets. In order to avoid a flood of
events, they also depend on fd.name_changed to only consider
sendto/recvfrom when the connection tuple changes.
Also make the check for protocol a positive check for udp instead of not tcp,
to avoid a warning about event type filters potentially appearing before
a negative condition. This makes filtering rules by event type easier.
This depends on https://github.com/draios/sysdig/pull/1052.
* Add additional restrictions for inbound/outbound
- only look for fd.name_changed on unconnected sockets.
- skip connections where both ips are 0.0.0.0 or localhost network.
- only look for successful or non-blocking actions that are in progress
* Add a combined inbound/outbound macro
Add a combined inbound/outbound macro so you don't have to do all the
other net/result related tests more than once.
* Fix evt generator for new in/outbound restrictions
The new rules skip localhost, so instead connect a udp socket to a
non-local port. That still triggers the inbound/outbound macros.
* Address FPs in regression tests
In some cases, an app may make a udp connection to an address with a
port of 0, or to an address with an application's port, before making a
tcp connection that actually sends/receives traffic. Allow these
connects.
Also, check both the server and client port and only consider the
traffic unexpected if neither port is in range.
* Also check evt.abspath in "Modify binary dirs" rule
For unlinkat evt.arg[1] is not the path of the file/dir removed.
* Monitor renameat too in "Modify binary dirs" rule
* Add ability to read rules files from directories
When the argument to -r <path> or an entry in falco.yaml's rules_file
list is a directory, read all files in the directory and add them to the
rules file list. The files in the directory are sorted alphabetically
before being added to the list.
The installed falco adds directories /etc/falco/rules.available and
/etc/falco/rules.d and moves /etc/falco/application_rules.yaml to
/etc/falco/rules.available. /etc/falco/rules.d is empty, but the idea is
that admins can symlink to /etc/falco/rules.available for applications
they want to enable.
This will make it easier to add application-specific rulesets that
admins can opt-in to.
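In falco.yaml, a rules_file entry can now name a directory as well as a file, e.g. (paths from the description above; the exact shipped config may differ):
```
rules_file:
  - /etc/falco/falco_rules.yaml
  # A directory entry: every file inside is loaded, sorted alphabetically.
  - /etc/falco/rules.d
```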
* Unit test for reading rules from directory
Copy the rules/trace file from the test multiple_rules to a new test
rules_directory. The rules files are in rules/rules_dir/{000,001}*.yaml,
and the test uses a rules_file argument of rules_dir. Ensure that the
same events are detected.
* add common fluentd command, let docker modify
Add a common fluentd command, and let docker operations modify bin dir
* Add abrt-action-sav(...) as a rpm program
https://linux.die.net/man/1/abrt-action-save-package-data
* Add etc writers for more ms-on-linux svcs
Microsoft SCX and Azure Network Watcher Agent.
* Let nginx write its own config.
* Let chef-managed gitlab write gitlab config
* Let docker container fsen outside of containers
The docker process can also be outside of a container when doing actions
like docker save, etc, so drop the docker requirement.
* Expand the set of haproxy configs.
Let the parent process also be haproxy_reload and add an additional
directory.
* Add an additional node-related file below /root
For node cli.
* Let adclient read sensitive files
Active Directory Client.
* Let mesos docker executor write shells
* Add additional privileged containers.
A few more openshift-related containers and datadog.
* Add a kafka admin command line as allowed shell
In this case, run by cassandra
* Add additional ignored root directories
gradle and crashlytics
* Add mesos shell spawning binaries back
This list will be limited only to those binaries known to spawn
shells. Add mesos-slave/mesos-health-ch.
* Add addl trusted containers
Consul and mesos-slave.
* Add additional config writers for sosreport
Can also write files below /etc/pki/nssdb.
* Expand selinux config progs
Rename macro to selinux_writing_conf and add additional programs.
* Let rtvscand read sensitive files
Symantec av cli program.
* Let nginx-launch write its own certificates
Sometimes directly, sometimes by invoking openssl.
* Add addl haproxy config writers
Also allow the general prefix /etc/haproxy.
* Add additional root files.
Mongodb-related.
* Add additional rpm binaries
rpmdb_stat
* Let python running get-pip.py modify binary files
Used as a part of directly running get-pip.py.
* Let centrify scripts read sensitive files
Scripts start with /usr/share/centrifydc
* Let centrify progs write krb info
Specifically, adjoin and addns.
* Let ansible run below /root/.ansible
* Let ms oms-run progs manage users
The parent process is generally omsagent-<version> or scx-<version>.
* Combine & expand omiagent/omsagent macros
Combine the two macros into a single ms_oms_writing_conf and add both
direct and parent binaries.
* Let python scripts related to ms oms write binaries
Python scripts below /var/lib/waagent.
* Let google accounts daemon modify users
Parent process is google_accounts(_daemon).
* Let update-rc.d modify files below /etc
* Let dhcp binaries write indirectly to etc
This allows them to run programs like sed, cp, etc.
* Add istio as a trusted container.
* Add addl user management progs
Related to post-install steps for systemd/udev.
* Let azure-related scripts write below etc
Directory is /etc/azure, scripts are below /var/lib/waagent.
* Let cockpit write its config
http://www.cockpit-project.org/
* Add openshift's cassandra as a trusted container
* Let ipsec write config
Related to strongswan (https://strongswan.org/).
* Let consul-template write to addl /etc files
It may spawn intermediate shells and write below /etc/ssl.
* Add openvpn-entrypo(int) as an openvpn program
Also allow subdirectories below /etc/openvpn.
* Add additional files/directories below /root
* Add cockpit-session as a sensitive file reader
* Add puppet macro back
Still used in some people's user rules files.
* Rename name= to program=
Some users pointed out that name= was ambiguous, especially when the
event includes files being acted upon. Change to program=.
* Also let omiagent run progs that write oms config
It can run things like python scripts.
* Allow writes below /root/.android
* Let OMS agent for linux write config
Programs are omiagent/omsagent/PerformInventor/in_heartbeat_r* and files
are below /etc/opt/omi and /etc/opt/microsoft/omsagent.
* Handle really long classpath lines for cassandra
Some cassandra cmdlines are so long the classpath truncates the cmdline
before the actual entry class gets named. In those cases also look for
cassandra-specific config options.
* Let postgres binaries read sensitive files
Also add a couple of postgres cluster management programs.
* Add apt-add-reposit(ory) as a debian mgmt program
* Add addl info to debug writing sensitive files
Add parent/grandparent process info.
* Require root directory files to contain /
In some cases, a file below root might be detected but the file itself
has no directory component at all. This might be a bug with dropped
events. Make the test more strict by requiring that the file actually
contains a "/".
* Let updmap read sensitive files
Part of texlive (https://www.tug.org/texlive/)
* For selected rules, require proc name to exist
Some rules such as reading sensitive files and writing below etc have
many exceptions that depend on the process name. In very busy
environments, system call events might end up being dropped, which
causes the process name to be missing.
In these cases, we'll let the sensitive file read/write below etc to
occur. That's handled by a macro proc_name_exists, which ensures that
proc.name is not "<NA>" (the placeholder when it doesn't exist).
* Let ucf write generally below /etc
ucf is a general purpose config copying program, so let it generally
write below /etc, as long as it in turn is run by the apt program
"frontend".
* Add new conf writers for couchdb/texmf/slapadd
Each has specific subdirectories below /etc
* Let sed write to addl temp files below /etc
Let sed write to additional temporary files (some directory + "sed")
below /etc. All generally related to package installation scripts.
* Let rabbitmq(ctl) spawn limited shells
Let rabbitmq spawn limited shells that perform read-only tasks like
reading processes/ifaces.
Let rabbitmqctl generally spawn shells.
* Let redis run startup/shutdown scripts
Let redis run specific startup/shutdown scripts that trigger at
start/stop. They generally reside below /etc/redis, but just looking for
the names redis-server.{pre,post}-up in the commandline.
* Let erlexec spawn shells
https://github.com/saleyn/erlexec, "Execute and control OS processes
from Erlang/OTP."
* Handle updated trace files
As a part of these changes, we updated some of the positive trace files
to properly include a process name. These newer trace files have
additional opens, so update the expected event counts to match.
* Let yum-debug-dump write to rpm database
* Additional config writers
Symantec AV for Linux, sosreport, semodule (selinux), all with their
config files.
* Tidy up comments a bit.
* Try protecting node apps again
Try improving coverage of run shell untrusted by looking for shells
below node processes again. Want to see how many FPs this causes before
fully committing to it.
* Let node run directly by docker count as a service
Generally, we don't want to consider all uses of node as a service wrt
spawned shells. But we might be able to consider node run directly by
docker as a "service". So add that to protected_shell_spawner.
* Also add PM2 as a protected shell spawner
This should handle cases where PM2 manages node apps.
* Remove dangling macros/lists
Do a pass over the set of macros/lists, removing most of those that are
no longer referred to by any macro/list. The bulk of the macros/lists
were related to the rule Run Shell Untrusted, which was refactored to
only detect shells run below specific programs. With that change, many
of these exceptions were no longer needed.
* Add a "never_true" macro
Add a never_true macro that will never match any event. Useful if you
want to disable a rule/macro/etc.
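A common way to express such a macro (the exact condition is an assumption; any condition that can never hold works):
```
# evt.num is never zero, so this condition never matches any event.
- macro: never_true
  condition: (evt.num=0)
```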
* Add missing case to write_below_etc
Add the macro veritas_writing_config to write_below_etc, which was
mistakenly not added before.
* Make tracking shells spawned by node optional
The change to generally consider node run directly in a container as a
protected shell spawner was too permissive, causing false
positives. However, there are some deployments that want to track shells
spawned by node as suspect. To address this, create a macro
possibly_node_in_container which defaults to never matching (via the
never_true macro). In a user rules file, you can override the macro to
remove the never_true clause, reverting to the old behavior.
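Sketch of the default and the user-level override; the clause after
never_true is illustrative, so when overriding, copy the real default and
simply drop never_true:
```
# Shipped default (sketch): effectively disabled by the never_true guard.
- macro: possibly_node_in_container
  condition: (never_true and container and proc.pname=node)

# Local/user rules file (sketch): drop never_true to revert to tracking
# shells spawned below node running in a container.
- macro: possibly_node_in_container
  condition: (container and proc.pname=node)
```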
* Add some dangling macros/lists back
Some macros/lists are still referred to by some widely used user rules
files, so add them back temporarily.
* Add additional allowed files below root.
These are related to node.js apps.
* Let yum-config-mana(ger) write to rpm database.
* Let gugent write to (root) + GuestAgent.log
vRA7 Guest Agent writes to GuestAgent.log with a cwd of root.
* Let cron-start write to pam_env.conf
* Add additional root files and directories
All seen in legitimate cases.
* Let nginx run aws s3 cp
Possibly seen as a part of consul deployments and/or openresty.
* Add rule for disallowed ssh connections
New rule "Disallowed SSH Connection" detects ssh connection attempts
other than those allowed by the macro allowed_ssh_hosts. The default
version of the macro allows any ssh connection, so the rule never
triggers by default.
The macro could be overridden in a local/user rules file, though.
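For example, a local/user rules file might restrict ssh connections like this
(the address is a placeholder; the default macro simply matches everything):
```
- macro: allowed_ssh_hosts
  # Illustrative override: only ssh connections to this bastion host are
  # allowed; anything else fires Disallowed SSH Connection.
  condition: (fd.sip="10.2.3.4")
```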
* Detect contacting NodePort svcs in containers
New rule "Unexpected K8s NodePort Connection" detects attempts to
contact K8s NodePort services (i.e. ports >=30000) from within
containers.
It requires overriding a macro nodeport_containers, which specifies a
set of containers that are allowed to use these port ranges. By default
every container is allowed.
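An override might look like this (the image name is a placeholder):
```
- macro: nodeport_containers
  # Illustrative override: only this image may contact NodePort services
  # (ports >= 30000); other containers trigger the rule.
  condition: (container.image startswith my-registry/ingress-proxy)
```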
* Remove remaining fbash references.
No longer relevant after all the installer rules were removed.
* Detect contacting EC2 metadata svc from containers
Add a rule that detects attempts to contact the ec2 metadata service
from containers. By default, the rule does not trigger unless a list of
explicitly allowed containers is provided.
* Detect contacting K8S API Server from container
New rule "Contact K8S API Server From Container" looks for connections
to the K8s API Server. The ip/port for the K8s API Server is in the
macro k8s_api_server and contains an ip/port that's not likely to occur
in practice, so the rule is effectively disabled by default.
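To enable the rule, a local/user rules file can point the macro at the real
API server endpoint (the IP and port below are placeholders):
```
- macro: k8s_api_server
  # Illustrative override: the cluster's actual API server address/port.
  condition: (fd.sip="10.100.0.1" and fd.sport=443)
```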
* Additional rpm writers, root directories
salt-minion can also touch the rpm database, and some node packages
write below /root/.config/configstore.
* Add smbd as a protected shell spawner.
It's a server-like program.
* Also handle .ash_history
default shell for alpine linux
* Add exceptions for veritas
Let many veritas programs write below /etc/vx.
Let one veritas-related perl script read sensitive files.
* Allow postgres to run wal-e
https://github.com/wal-e/wal-e, archiving program for postgres.
* Let consul (agent) run addl scripts
Also let consul (agent, but the distinction is in the command line args)
run nc in addition to curl. Also rename the macro.
* Let postgres setuid to itself
Let postgres setuid to itself. Seen by archiving programs like wal-e.
* Also allow consul to run alert check scripts
"sh -c /bin/consul-alerts watch checks --alert-addr 0.0.0.0:9000 ..."
* Add additional privileged containers.
Openshift's logging support containers generally run privileged.
* Let addl progs write below /etc/lvm
Add lvcreate as a program that can write below /etc/lvm and rename the
macro to lvprogs_writing_lvm_archive.
* Let glide write below root
https://glide.sh/, package management for go.
* Let sosreport read sensitive files.
* Let scom server read sensitive files.
Microsoft System Center Operations Manager (SCOM).
* Let kube-router run privileged.
https://github.com/cloudnativelabs/kube-router
* Let needrestart_binaries spawn shells
These were included in a prior version of the shell rules; adding them back.
* Let splunk spawn shells below /opt/splunkforwarder
* Add yum-cron as a rpm binary
* Add a different way to run denyhosts.
It's odd that the program name is denyhosts.py, but this was observed in
actual environments.
* Let nrpe setuid to nagios.
* Also let postgres run wal-e wrt shells
Previously added as an exception for db program spawned process; it also
needs to be added as an exception for run shell untrusted.
* Remove installer shell-related rules
They aren't used that often and removing them cleans up space for new
rules we want to add soon.
* Let kubelet running loopback spawn shells
Seen by @JPLachance, thanks for the heads up!
* Let docker's "exe" broadly write to files.
As a part of some docker commands like "docker save", etc., the program
exe can copy data from files on the host filesystem (/var/lib/docker/...) to
a variety of files within the container.
Allow this via a macro exe_running_docker_save that checks the
commandline as well as the parent and use it as an exclusion for the
write below binary dir/root/etc rules.
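The idea behind the macro, sketched with illustrative strings (the shipped
condition inspects the command line and parent in more detail):
```
- macro: exe_running_docker_save
  # Sketch: the writer is docker's "exe" helper, identified by its name,
  # a command line referencing /var/lib/docker, and a docker parent.
  condition: >
    (proc.name=exe and
     proc.cmdline contains "/var/lib/docker" and
     proc.pname in (dockerd, docker))
```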
* Let chef perform more tasks
- Let chef-client generally read sensitive files and write below /etc.
- Let python running a chef script yum-dump.py write the rpm database.
Rename user_known_container_shell_spawn_binaries to
user_known_shell_spawn_binaries (the container distinction doesn't exist
any longer) and add it as an exception for run shell untrusted.
That way others can easily exclude shell spawning programs in a second
rules file.
* Refactor shell rules to avoid FPs.
Refactoring the shell related rules to avoid FPs. Instead of considering
all shells suspicious and trying to carve out exceptions for the
legitimate uses of shells, only consider shells spawned below certain
processes suspicious.
The set of processes is a collection of commonly used web servers,
databases, nosql document stores, mail programs, message queues, process
monitors, application servers, etc.
Also, runsv is considered a top-level process that denotes a service. This
gives more flexible servers, like ad-hoc nodejs express apps, a way to
denote themselves as a full server process.
* Update event generator to reflect new shell rules
spawn_shell is now a silent action. Its replacement is
spawn_shell_under_httpd, which respawns itself as httpd and then runs a
shell.
db_program_spawn_binaries now runs ls instead of a shell so it only
matches db_program_spawn_process.
* Comment out old shell related rules
* Modify nodejs example to work w/ new shell rules
Start the express server using runit's runsv, which allows falco to
consider any shells run by it as suspicious.
* Use the updated argument for mkdir
In https://github.com/draios/sysdig/pull/757 the path argument for mkdir
moved to the second argument. This only became visible in the unit tests
once the trace files were updated to reflect the other shell rule
changes--the trace files had the old format.
* Update unit tests for shell rules changes
Shell in container doesn't exist any longer and its functionality has
been subsumed by run shell untrusted.
* Allow git binaries to run shells
In some cases, these are run below a service runsv so we still need
exceptions for them.
* Let consul agent spawn curl for health checks
* Don't protect tomcat
There's enough evidence of people spawning general commands that we
can't protect it.
* Reorder exceptions, add rabbitmq exception
Move the nginx exception to the main rule instead of the
protected_shell_spawner macro. Also add erl_child_setup (related to
rabbitmq) as an allowed shell spawner.
* Add additional spawn binaries
All of these are either below nginx, httpd, or runsv but should still
be allowed to spawn shells.
* Exclude shells when ancestor is a pkg mgmt binary
Skip shells when any process ancestor (parent, gparent, etc) is a
package management binary. This includes the program needrestart. This
is a deep search but should prevent a lot of other more detailed
exceptions trying to find the specific scripts run as a part of
installations.
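The ancestor walk can be sketched like this; the macro name, the list name
package_mgmt_binaries, and the depth checked are assumptions for
illustration:
```
- macro: shell_spawned_by_package_mgmt
  # Hypothetical sketch of a deep ancestor check: match if the parent or
  # any of the next few ancestors is a package management binary.
  condition: >
    (proc.pname in (package_mgmt_binaries) or
     proc.aname[2] in (package_mgmt_binaries) or
     proc.aname[3] in (package_mgmt_binaries) or
     proc.aname[4] in (package_mgmt_binaries))
```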
* Skip shells related to serf
Serf is a service discovery tool and can in some cases be spawned by
apache/nginx. Also allow shells that are just checking the status of
pids via kill -0.
* Add several exclusions back
Add several exclusions back from the shell in container rule. These are
all allowed shell spawns that happen to be below
nginx/fluentd/apache/etc.
* Remove commented-out rules
This saves space as well as cleanup. I haven't yet removed the
macros/lists used by these rules and not used anywhere else. I'll do
that cleanup in a separate step.
* Also exclude based on command lines
Add back the exclusions based on command lines, using the existing set
of command lines.
* Add addl exclusions for shells
Of note is runsv, which can directly run shells (the ./run and ./finish
scripts), but the things it runs cannot.
* Don't trigger on shells spawning shells
We'll detect the first shell and not any other shells it spawns.
* Allow "runc:" parents to count as a cont entrypnt
In some cases, the initial process for a container can have a parent
"runc:[0:PARENT]", so also allow those cases to count as a container
entrypoint.
* Use container_entrypoint macro
Use the container_entrypoint macro to denote entering a container and
also allow exe to be one of the processes that's the parent of an
entrypoint.
* Let supervisor write more generally below /etc
* Let perl+plesk scripts run shells/write below etc
* Allow spaces after some cmdlines
* Add additional shell spawner.
* Add addl package mgmt binaries.
* Add addl cases for java + jenkins
Addl jar files to consider.
* Add addl jenkins-related cmdlines
Mostly related to node scripts run by jenkins
* Let python running some mesos tasks spawn shells
In this case marathon run by python
* Let ucf write below etc
Only below /etc/gconf for now.
* Let dpkg-reconfigur indirectly write below /etc
It may run programs that modify files below /etc
* Add files/dirs/prefixes for writes below root
Build a set of acceptable files/dirs/prefixes for writes below
/root. Mostly triggered by apps that run directly as root.
* Add addl shell spawn binaries.
* Also let java + sbt spawn shells in containers
This combination is not seen only at the host level.
* Make sure the file below etc is /etc/
Make sure the file below /etc is really below the directory etc aka
/etc/xxx. Otherwise it would match a file /etcfoo.
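In condition syntax, the stricter prefix check amounts to (sketch, using an
illustrative macro name):
```
- macro: etc_dir
  # Match only paths truly under /etc/ (e.g. /etc/passwd), not /etcfoo.
  condition: (fd.name startswith /etc/)
```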
* Let rancher healthcheck spawn shells
The name healthcheck is relatively innocuous so also look at the parent
process.
* Add addl container shell spawn binaries
* Add addl x2go binaries
* Let rabbitmq write its config files
* Let rook write below /etc
toolbox.sh is fairly generic so add a condition based on the image name.
* Let consul-template spawn shells
* Add rook/toolbox as a trusted container
Their github pages recommend running privileged.
* Add addl mail binary that can setuid
* Let plesk autoinstaller spawn shells
The name autoinstaller is fairly generic so also look at the parent.
* Let php handlers write their config
* Let addl pkg-* binary write to /etc indirectly
* Add additional shell spawning binaries.
* Add ability to specify user trusted containers
New macro user_trusted_containers allows a user-provided set of
containers that are trusted and are allowed to run privileged.
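An override in a local/user rules file might look like this (the image name
is a placeholder):
```
- macro: user_trusted_containers
  # Illustrative override: containers from this image are trusted and
  # allowed to run privileged.
  condition: (container.image startswith my-registry/trusted-priv-app)
```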
* If npm runs node, let node spawn shells
* Let python run airflow via a shell.
* Add addl passenger commandlines (for shells)
* Add addl ways datadog can be run
* Let find run shells in containers.
* Add rpmq as a rpm binary
* Let httpd write below /etc/httpd/
* Let awstats/sa-update spawn shells
* Add container entrypoint as a shell
Some images have an extra shell level for image entrypoints.
* Add an additional jenkins commandline
* Let mysql write its config
* Let openvpn write its config
* Add addl root dirs/files
Also move /root/.java to be a general prefix.
* Let mysql_upgrade/opkg-cl spawn shells
* Allow login to perform dns lookups
When run with -h <host> to specify a remote host, some versions of login
will do a dns lookup to try to resolve the host.
* Let consul-template write haproxy config.
* Also let mysql indirectly edit its config
It might spawn a program to edit the config in addition to editing it directly.
* Allow certain sed temp files below /etc/
* Allow debian binaries to indirectly write to /etc
They may spawn programs like sed, touch, etc to change files below /etc.
* Add additional root file
* Let rancher healthcheck be run more indirectly
The grandparent as well as parent of healthcheck can be tini.
* Add more cases for haproxy writing config
Allow more files as well as more scripts to update the config.
* Let vmtoolsd spawn shells on the host
* Add an additional innocuous entrypoint shell
* Let peer-finder (mongodb) spawn shells
* Split application rules to separate file.
Move the contents of application rules, which have never been enabled by
default, to a separate file. It's only installed in the main falco packages.
* Add more build-related command lines
* Let perl running openresty spawn shells
* Let countly write nginx config
* Let confd spawn shells
* Also let aws spawn shells in containers.
The terminal shell in container rule has always been less permissive
than the other shell rules, mostly because we expect terminal-attached
shells to be less common. However, terminal-attached shells might run
innocuous commands, especially from scripting languages like python. So
allow those innocuous commands to run.