1. Extend macro mkdir with syscall mkdirat (#337)
2. Add placeholder for whitelist in rule Clear Log Activities (#632)
Signed-off-by: kaizhe <derek0405@gmail.com>
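For illustration, a minimal sketch of what these two changes could look like in falco_rules.yaml; the condition bodies and the placeholder macro name (user_known_clear_log_activities) are assumptions, not the exact upstream definitions:

```yaml
# Sketch only: the mkdir macro extended to also match the mkdirat syscall (#337)
- macro: mkdir
  condition: (evt.type in (mkdir, mkdirat))

# Sketch only: an opt-in placeholder for whitelisting in Clear Log Activities (#632);
# the macro name here is assumed for illustration
- macro: user_known_clear_log_activities
  condition: (never_true)
```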
add docker.io/ to the trusted images list
Signed-off-by: kaizhe <derek0405@gmail.com>
rule update: add container.id and image to the rule output of all rules except those with "not container" in the condition
Signed-off-by: kaizhe <derek0405@gmail.com>
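As an illustrative fragment only (the rule shown is generic, not a specific upstream rule), an output string gains container context like this:

```yaml
# Illustrative fragment: appending container context to a rule's output string
output: >
  File below /etc opened for writing (user=%user.name command=%proc.cmdline
  file=%fd.name container_id=%container.id image=%container.image)
```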
Remove empty line
Signed-off-by: Kaizhe Huang <derek0405@gmail.com>
Start using a falco_ prefix for falco-provided lists/macros. Not
changing existing object names to retain compatibility.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
Define macros k8s_audit_always_true/k8s_audit_never_true that work for
k8s audit events. Use them in macros that were asserting true/false values.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
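A minimal sketch of how such macros can be written purely in terms of k8s audit event fields, using the jevt.rawtime field described later in this changelog; the exact upstream conditions may differ:

```yaml
# Sketch: always/never-true macros that only reference k8s audit event fields
- macro: k8s_audit_always_true
  condition: (jevt.rawtime exists)

- macro: k8s_audit_never_true
  condition: (jevt.rawtime=0)
```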
Previously, the exceptions for Launch Privileged Container/Launch
Sensitive Mount Container came from a list of "trusted" images and/or a
macro that defined "trusted" containers. We want more fine-grained
control over the exceptions for these rules, so split them into
exception lists/macros that are specific to each rule. This defines:
- falco_privileged_images: only those images that are known to require
privileged=true
- falco_privileged_containers: uses falco_privileged_images and (for now) still
allows all openshift images
- user_privileged_containers: allows user exceptions
- falco_sensitive_mount_images: only those images that are known to perform
sensitive mounts
- falco_sensitive_mount_containers: uses falco_sensitive_mount_images
- user_sensitive_mount_containers: allows user exceptions
For backwards compatibility purposes only, we keep the trusted_images
list and user_trusted_containers macro and they are still used as
exceptions for both rules. Comments recommend using the more
fine-grained alternatives, though.
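A sketch of the shape of these lists/macros (image entries are illustrative, and openshift_image is assumed to be an existing macro matching openshift images):

```yaml
- list: falco_privileged_images
  items: [docker.io/sysdig/agent]   # illustrative entry only

- macro: falco_privileged_containers
  condition: (openshift_image or container.image.repository in (falco_privileged_images))

- macro: user_privileged_containers
  condition: (never_true)   # placeholder for user-supplied exceptions

- list: falco_sensitive_mount_images
  items: [docker.io/sysdig/agent, gcr.io/google_containers/hyperkube]   # illustrative entries

- macro: falco_sensitive_mount_containers
  condition: (container.image.repository in (falco_sensitive_mount_images))

- macro: user_sensitive_mount_containers
  condition: (never_true)   # placeholder for user-supplied exceptions
```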
While defining these lists, also survey the images again to see if they still
require these permissions, and remove those that no longer do. Removed:
- quay.io/coreos/flannel
- consul
Moved to sensitive mount only:
- gcr.io/google_containers/hyperkube
- datadog
- gliderlabs/logspout
Finally, get rid of the k8s audit-specific lists of privileged/sensitive
mount images, relying on the ones in falco_rules.yaml.
Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
* Allow containerd to start containers
Needed for IBM Cloud Kubernetes Service
* Whitelist state checks for galley (istio)
Galley is a component of istio
https://istio.io/docs/reference/commands/galley/
* Whitelist calico writing /status.json
This is the observed behaviour on IBM Cloud Kubernetes Service
* Add whitelisting for keepalived config file
Some newer distros default to Python 3 rather than 2, which causes Ansible to trigger these rules.
falco-CLA-1.0-contributing-entity: 1500 Services Ltd
falco-CLA-1.0-signed-off-by: Chris Northwood <chris.northwood@1500cloud.com>
Please note that
registry.access.redhat.com/sematext/agent and
registry.access.redhat.com/sematext/logagent
are not available yet; we are in the process of certification.
* Fix parentheses for rpm_procs macro
Ensures that a preceding "not" applies to the whole macro.
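The fix amounts to wrapping the condition in outer parentheses, so that `not rpm_procs` negates the whole expression; the macro body below is illustrative:

```yaml
- macro: rpm_procs
  condition: (proc.name in (rpm_binaries) or proc.name in (salt-minion))
```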
* Let anything write to /etc/fluent/configs.d
It looks like a lot of scripted programs (shell scripts running cp, sed,
arbitrary ruby programs) are run by fluentd to set up config. They're
too generic to identify, so just add /etc/fluent/configs.d to
safe_etc_dirs, sadly.
* Let java setup write to /etc/passwd in containers
/opt/jboss/container/java/run/run-java.sh and /opt/run-java/run-java.sh
write to /etc/passwd in a container, probably to add a user. Add an
exception for them.
* Remove netstat as a generic network program
We'll try to limit the list to programs that can broadly see activity or
actually create traffic.
* Rules for inbound conn sources, not outbound
Replace "Unexpected outbound connection source" with "Unexpected inbound
connection source" to watch inbound connections by source instead of
outbound connections by source. The rule itself is pretty much unchanged
other than switching to using cip/cnet instead of sip/snet.
Expand the supporting macros so they include outbound/inbound in the
name, to make it clearer.
* rules update: add rules for mitre framework
* rules update: add mitre persistence rules
* minor changes
* add exclude hidden directories list
* limit hidden files creation in container
* minor fix
* minor fix
* tune rules to have only_check_container macro
* rules update: add rules for removing data from disk and clearing logs
* minor changes
* minor fix rule name
* add check_container_only macro
* addresses comments
* add rule for updating package repos
* Don't consider dd a bulk writer
There are enough legitimate cases that we exclude it.
* Make cron/chmod policies opt-in
They have enough legitimate uses that we shouldn't run by default.
* minor fix
* Fix mistake in always_true macro
The comparison operator was wrong.
* Whitespace diffs
* Add opt-in rules for interp procs + networking
New rules "Interpreted procs inbound network activity" and "Interpreted
procs outbound network activity" check for any network activity being
done by interpreted programs like ruby, python, etc. They aren't enabled
by default, as there are many legitimate cases where these programs
might perform inbound or outbound networking. Macros
"consider_interpreted_inbound" and "consider_interpreted_outbound" can
be used to enable them.
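A sketch of the opt-in pattern these rules follow; the interpreted_procs macro, output text, and priority are assumptions for illustration:

```yaml
- macro: consider_interpreted_outbound
  condition: (never_true)

- rule: Interpreted procs outbound network activity
  desc: Interpreted programs (python, ruby, etc.) performing outbound network activity (opt-in sketch).
  condition: (consider_interpreted_outbound and outbound and interpreted_procs)
  output: >
    Interpreted program performed outbound network activity
    (command=%proc.cmdline connection=%fd.name)
  priority: NOTICE
  tags: [network]
```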
* Opt-in rule for running network tools on host
New rule Lauch Suspicious Network Tool on Host is similar to "Lauch
Suspicious Network Tool in Container" [sic] but works on the host. It's
not enabled by default, but can be enabled using the macro
consider_network_tools_on_host.
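The host-side rule follows the same opt-in gating; network_tool_procs is assumed here to be the macro shared with the container rule, and the output/priority are illustrative:

```yaml
- macro: consider_network_tools_on_host
  condition: (never_true)

- rule: Lauch Suspicious Network Tool on Host
  desc: Detect network tools launched on the host (opt-in sketch).
  condition: (consider_network_tools_on_host and spawned_process and not container and network_tool_procs)
  output: Network tool launched on host (user=%user.name command=%proc.cmdline parent=%proc.pname)
  priority: NOTICE
  tags: [network]
```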
* Add parens around container macro
* Make Modify User Context generic to shell configs
Rename Modify User Context to Modify Shell Configuration File to note
that it's limited to shell configuration files, and expand the set of
files to cover a collection of file names and files for zsh, csh, and
bash.
* Also prevent shells from directly opening conns
Bash can directly open network connections by writing to
/dev/{tcp,udp}/<addr>/<port>. These aren't actual files, but are
interpreted by bash as instructions to open network connections.
* Add rule to detect shell config reads
New rule Read Shell Configuration File is analogous to Write Shell
Configuration File, but handles reads by programs other than shell
programs. It's disabled by default and can be enabled via consider_shell_config_reads.
* Add rule to check ssh directory/file reads
New rule Read ssh information looks for any open of a file or directory
below /root/.ssh or a user ssh directory. ssh binaries (new list
ssh_binaries) are excluded.
The rule is also opt-in via the macro consider_ssh_reads.
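A sketch of the rule's shape; the ssh_binaries entries and the exact condition are illustrative, with open_read assumed to be the existing read-open macro:

```yaml
- list: ssh_binaries
  items: [ssh, scp, sftp, sshd, ssh-agent, ssh-keygen, ssh-keyscan]   # illustrative entries

- macro: consider_ssh_reads
  condition: (never_true)

- rule: Read ssh information
  desc: Any open of a file below an ssh directory by a non-ssh program (opt-in sketch).
  condition: >
    (consider_ssh_reads and open_read and
     (fd.name startswith /root/.ssh or fd.directory contains /.ssh) and
     not proc.name in (ssh_binaries))
  output: ssh-related file opened by non-ssh program (user=%user.name command=%proc.cmdline file=%fd.name)
  priority: WARNING
  tags: [filesystem]
```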
* Rule to check for disallowed http proxies
New rule "Program run with disallowed http proxy env" looks for spawned
programs that have an HTTP_PROXY environment variable whose value is not
an expected one.
This handles attempts to redirect traffic to unexpected locations.
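A sketch of the shape of this rule, assuming the proc.env field; the allowed proxy value, the allowed_http_proxy_values macro name, and the restriction to curl/wget (added in a later change below) are illustrative:

```yaml
- macro: allowed_http_proxy_values
  condition: (proc.env icontains "HTTP_PROXY=http://allowed.proxy.example:3128")   # illustrative value

- rule: Program run with disallowed http proxy env
  desc: A program was spawned with an HTTP_PROXY value outside the expected set (sketch).
  condition: (spawned_process and proc.name in (curl, wget) and proc.env icontains HTTP_PROXY and not allowed_http_proxy_values)
  output: Program run with disallowed HTTP_PROXY environment variable (command=%proc.cmdline env=%proc.env)
  priority: NOTICE
  tags: [host, users]
```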
* Add rules showing how to categorize outbound conns
New rules Unexpected outbound connection destination and Unexpected
outbound connection source show how to categorize network connections by
either destination or source ip address, netmask, or domain name.
In order to be effective, they require a comprehensive set of allowed
sources and/or destinations, so they both require customization and are
gated by the macro consider_all_outbound_conns.
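A sketch of the destination variant, assuming the fd.sip/fd.snet/fd.sip.name fields; the allowed_* list names are illustrative placeholders that users would populate:

```yaml
- macro: consider_all_outbound_conns
  condition: (never_true)

- rule: Unexpected outbound connection destination
  desc: Outbound connection to a destination outside the allowed set (sketch; list names are illustrative).
  condition: >
    (consider_all_outbound_conns and outbound and not
     (fd.sip in (allowed_outbound_destination_ipaddrs) or
      fd.snet in (allowed_outbound_destination_networks) or
      fd.sip.name in (allowed_outbound_destination_domains)))
  output: Disallowed outbound connection destination (command=%proc.cmdline connection=%fd.name)
  priority: NOTICE
  tags: [network]
```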
* Add .bash_history to bash config files
* Restrict http proxy rule to specific procs
Only considering wget, curl for now.
* Shell programs can directly modify config
Most notably .bash_history.
* Use right system_procs/binaries
system_binaries doesn't exist, so use system_procs + an additional test
for shell_binaries.
* rule update: add MITRE tags for rules
* update mitre tags with all lower case and add two more rules
* add two more mitre_persistence rules plus minor changes
* replace contains with icontains
* limit search passwd in container
* Also let dockerd-current setns()
* Add additional setns programs
Let oci-umount (https://github.com/containers/oci-umount) setns().
* Let Openscap RPM probes touch rpm db
Define a list openscap_rpm_binaries containing openscap probes related
to rpm and let those binaries touch the rpm database.
* Let oc write to more directories below /etc
Make the prefix more general, allowing any path below /etc/origin/node.
* Skip incomplete container info for container start
In the container_started macro, ensure that the container metadata is
complete after either the container event (very unlikely) or after the
exec of the first process into the container (very likely now that
container metadata fetches are async).
When using these rules with older falco versions, this macro will still
work as the synchronous container metadata fetch will result in a
repository that isn't "incomplete".
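A sketch of the container_started macro described here, combining the container event, the exec of the first process (vpid=1), and the completeness check on the image repository; details may differ from the upstream macro:

```yaml
- macro: container_started
  condition: >
    ((evt.type = container or
      (evt.type = execve and evt.dir = < and proc.vpid = 1)) and
     container.image.repository != incomplete)
```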
* Update test traces to have full container info
Some test trace files used for regression tests didn't have full
container info, and once we started looking for those fields, the tests
stopped working.
So update the traces and the expected event counts to match.
+ Add the user_known_write_root_conditions macro to allow custom conditions in the "Write below root" rule
+ Add the user_known_non_sudo_setuid_conditions macro to allow custom conditions in the "Non sudo setuid" rule
falco-CLA-1.0-contributing-entity: Coveo Solutions Inc.
falco-CLA-1.0-signed-off-by: Jean-Philippe Lachance <jplachance@coveo.com>
* Add support for container metaevent to detect container spawning
Create a new macro "container_started" to check both the old and
the new check.
Also, only look for execve exit events with vpid=1.
* Use TBB_INCLUDE_DIR for consistency w sysdig,agent
Previously it was a mix of TBB_INCLUDE and TBB_INCLUDE_DIR.
* Build using the matching sysdig branch, if it exists
Instead of using container.image, that always reports the raw string
used to spawn the container, switch to the more reliable
container.image.{repository,tag}, since they are guaranteed to report
the actual repository/tag of the container image.
This also gives a small performance improvement, since a single 'in'
predicate can now be used instead of a sequence of startswith checks.
* Add ability to print field names only
Add the ability to print only field names, instead of all information about
fields (description, etc.), using the -N command-line option.
This will be used to add some versioning support steps that check for a
changed set of fields.
* Add an engine version that changes w/ filter flds
Add a method falco_engine::engine_version() that returns the current
engine version (e.g. set of supported fields, rules objects, operators,
etc.). It's defined in falco_engine_version.h, starts at 2 and should be
updated whenever a breaking change is made.
The most common reason for an engine change will be an update to the set
of filter fields. To make this easy to diagnose, add a build time check
that compares the sha256 output of "falco --list -N" against a value
that's embedded in falco_engine_version.h. A mismatch fails the build.
* Check engine version when loading rules
A rules file can now have a field "required_engine_version N". If
present, the number is compared to the falco engine version. If the
falco engine version is less, an error is thrown.
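For example, a rules file that requires engine version 2 or newer would start with:

```yaml
# First line of a rules file that needs falco engine version 2 or newer
- required_engine_version: 2
```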
* Unit tests for engine versioning
Add a required version: 2 to one trace file to check the positive case
and add a new test that verifies that a too-new rules file won't be loaded.
* Rename falco test docker image
Rename sysdig/falco to falcosecurity/falco in unit tests.
* Don't pin falco_rules.yaml to an engine version
Currently, falco_rules.yaml is compatible with versions <= 0.13.1 other
than the required_engine_version object itself, so keep that line
commented out so users can use this rules file with older falco
versions.
We'll uncomment it with the first incompatible falco engine change.
* Improved inbound/outbound macros
Improved versions of inbound/outbound macros that add coverage for
recvfrom/recvmsg, sendto/sendmsg and also ignore non-blocking syscalls
in a different way.
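A rough sketch of the shape of the improved outbound macro (the exact field checks upstream may differ; this only illustrates covering sendto/sendmsg and tolerating non-blocking connects):

```yaml
- macro: outbound
  condition: >
    (((evt.type = connect and evt.dir = <) or
      (evt.type in (sendto, sendmsg) and evt.dir = < and
       fd.l4proto != tcp and fd.connected = false and fd.name_changed = true)) and
     (fd.typechar = 4 or fd.typechar = 6) and
     (fd.ip != "0.0.0.0" and fd.net != "127.0.0.0/8") and
     (evt.rawres >= 0 or evt.res = EINPROGRESS))
```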
* Let nginx-ingress-c(ontroller) write to /etc/nginx
The process name is truncated due to the comm length limit.
Also fix some parentheses for another write_etc_common macro.
* Let calico setns also.
* Let prometheus-conf write its config
Let prometheus-conf write its config below /etc/prometheus.
* Let openshift oc write to /etc/origin/node
Add k8s audit rules to falco's config so they are read by default.
Rename some generic macros like modify, create, delete in the k8s audit
rules so they don't overlap with macros in the main rules file.
* Treat mounts of /var/lib/kubelet* as sensitive mounts
* Fix GKE/Istio false positives
- Allow kubectl to write below /root/.kube
- Allow loopback/bridge (e.g. /home/kubernetes/bin/) to setns.
- Let istio pilot-agent write to /etc/istio.
- Let google_accounts(_daemon) write user .ssh files.
- Add /health as an allowed file below /.
This fixes https://github.com/falcosecurity/falco/issues/439.
* Improve ufw/cloud-init exceptions
Tie them to both the program and the file being written.
Also move the cloud-init exception to monitored_directory.
* Add new json/webserver libs, embedded webserver
Add two new external libraries:
- nlohmann-json is a better json library that has stronger use of c++
features like type deduction, better conversion from stl structures,
etc. We'll use it to hold generic json objects instead of jsoncpp.
- civetweb is an embeddable webserver that will allow us to accept
posted json data.
New files webserver.{cpp,h} start an embedded webserver that listens for
POSTs on a configurable URL and passes the json data to the falco
engine.
New falco config items are under webserver:
- enabled: true|false. Whether to start the embedded webserver or not.
- listen_port: the port that the webserver listens on.
- k8s_audit_endpoint: uri on which to accept POSTed k8s audit events.
(This commit doesn't compile entirely on its own, but we're grouping
these related changes into one commit for clarity).
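The corresponding falco.yaml section might look like the following; the port and endpoint values are illustrative:

```yaml
webserver:
  enabled: true
  listen_port: 8765
  k8s_audit_endpoint: /k8s_audit
```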
* Don't use relative paths to find lua code
You can look directly below PROJECT_SOURCE_DIR.
* Reorganize compiler lua code
The lua compiler code is generic enough to work on more than just
sinsp-based rules, so move the parts of the compiler related to event
types and filterchecks out into a standalone lua file
sinsp_rule_utils.lua.
The checks for event types/filterchecks are now done from rule_loader,
and are dependent on a "source" attribute of the rule being
"sinsp". We'll be adding additional types of events next that come from
sources other than system calls.
* Manage separate syscall/k8s audit rulesets
Add the ability to manage separate sets of rules (syscall and
k8s_audit). Stop using the sinsp_evttype_filter object from the sysdig
repo, replacing it with falco_ruleset/falco_sinsp_ruleset from
ruleset.{cpp,h}. It has the same methods to add rules, associate them
with rulesets, and (for syscall) quickly find the relevant rules for a
given syscall/event type.
At the falco engine level, there are new parallel interfaces for both
types of rules (syscall and k8s_audit) to:
- add a rule: add_k8s_audit_filter/add_sinsp_filter
- match an event against rules, possibly returning a result:
process_sinsp_event/process_k8s_audit_event
At the rule loading level, the mechanics of creating filterchecks
objects is handled by two factories (sinsp_filter_factory and
json_event_filter_factory), both of which are held by the engine.
* Handle multiple rule types when parsing rules
Modify the steps of parsing a rule's filter expression to handle
multiple types of rules. Notable changes:
- In the rule loader/ast traversal, pass a filter api object down,
which is passed back up in the lua parser api calls like nest(),
bool_op(), rel_expr(), etc.
- The filter api object is either the sinsp factory or k8s audit
factory, depending on the rule type.
- When the rule is complete, the complete filter is passed to the
engine using either add_sinsp_filter()/add_k8s_audit_filter().
* Add multiple output formatting types
Add support for multiple output formatters. Notable changes:
- The falco engine is passed along to falco_formats to gain access to
the engine's factories.
- When creating a formatter, the source of the rule is passed along
with the format string, which controls which kind of output formatter
is created.
Also clean up exception handling a bit so all lua callbacks catch all
exceptions and convert them into lua errors.
* Add support for json, k8s audit filter fields
With some corresponding changes in sysdig, you can now create general
purpose filter fields and events, which can be tied together with
nesting, expressions, and relational operators. The classes here
represent an instance of these fields devoted to generic json objects as
well as k8s audit events. Notable changes:
- json_event: holds a json object, used by all of the below
- json_event_filter_check: Has the ability to extract values out of a
json_event object and has the ability to define macros that associate
a field like "group.field" with a json pointer expression that
extracts a single property's value out of the json object. The basic
field definition also allows creating an index
e.g. group.field[index], where a std::function is responsible for
performing the indexing. This class has virtual methods, so it
must be overridden.
- jevt_filter_check: subclass of json_event_filter_check and defines
the following fields:
- jevt.time/jevt.rawtime: extracts the time from the underlying json object.
- jevt.value[<json pointer>]: general purpose way to extract any
json value out of the underlying object. <json pointer> is a json
pointer expression
- jevt.obj: Return the entire object, stringified.
- k8s_audit_filter_check: implements fields that extract values from
k8s audit events. Most of the implementation is in the form of macros
like ka.user.name, ka.uri, ka.target.name, etc. that just use json
pointers to extract the appropriate value from a k8s audit event. More
advanced fields like ka.uri.param, ka.req.container.image use
indexing to extract individual values out of maps or arrays.
- json_event_filter_factory: used by things like the lua parser api,
output formatter, etc to create the necessary objects and return
them.
- json_event_formatter: given a format string, create the necessary
fields that will be used to create a resolved string when given a
json_event object.
* Add ability to list fields
Similar to sysdig's -l option, add --list (<source>) to list the fields
supported by falco. With no source specified, it prints all
fields. Source can be "syscall" for inspector fields e.g. what is
supported by sysdig, or "k8s_audit" to list fields supported only by the
k8s audit support in falco.
* Initial set of k8s audit rules
Add an initial set of k8s audit rules. They're broken into 3 classes of
rules:
- Suspicious activity: this includes things like:
- A disallowed k8s user performing an operation
- A disallowed container being used in a pod.
- A pod created with a privileged container.
- A pod created with a sensitive mount.
- A pod using host networking
- Creating a NodePort Service
- A configmap containing private credentials
- A request being made by an unauthenticated user.
- Attach/exec to a pod. (We eventually want to also do privileged
pods, but that will require some state management that we don't
currently have).
- Creating a new namespace outside of an allowed set
- Creating a pod in either of the kube-system/kube-public namespaces
- Creating a serviceaccount in either of the kube-system/kube-public
namespaces
- Modifying any role starting with "system:"
- Creating a clusterrolebinding to the cluster-admin role
- Creating a role that wildcards verbs or resources
- Creating a role with writable permissions/pod exec permissions.
- Resource tracking. This includes noting when a deployment, service,
  configmap, cluster role, service account, etc. are created or destroyed.
- Audit tracking: This tracks all audit events.
To support these rules, add macros/new indexing functions as needed to
support the required fields and ways to index the results.
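As an illustration of the shape of these rules, a sketch built from the k8s audit fields described above; kevt, pod, and kcreate are assumed helper macros, and ka.req.container.privileged is an assumed field:

```yaml
- rule: Create Privileged Pod
  desc: Detect an attempt to start a pod with a privileged container (sketch).
  condition: (kevt and pod and kcreate and ka.req.container.privileged = true and not ka.req.container.image in (falco_privileged_images))
  output: Pod started with privileged container (user=%ka.user.name pod=%ka.target.name images=%ka.req.container.image)
  priority: WARNING
  source: k8s_audit
  tags: [k8s]
```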
* Add ability to read trace files of k8s audit evts
Expand the use of the -e flag to cover both .scap files containing
system calls as well as jsonl files containing k8s audit events:
If a trace file is specified, first try to read it using the
inspector. If that throws an exception, try to read the first line as
json. If both fail, return an error.
Based on the results of the open, the main loop either calls
do_inspect(), looping over system events, or
read_k8s_audit_trace_file(), reading each line as json and passing it to
the engine and outputs.
* Example showing how to enable k8s audit logs.
An example of how to enable k8s audit logging for minikube.
* Add unit tests for k8s audit support
Initial unit test support for k8s audit events. A new multiplex file
falco_k8s_audit_tests.yaml defines the tests. Traces (jsonl files) are
in trace_files/k8s_audit and new rules files are in
test/rules/k8s_audit.
Current test cases include:
- User outside allowed set
- Creating disallowed pod.
- Creating a pod explicitly on the allowed list
- Creating a pod w/ a privileged container (or second container), or a
pod with no privileged container.
- Creating a pod w/ a sensitive mount container (or second container), or a
pod with no sensitive mount.
- Cases for a trace w/o the relevant property + the container being
trusted, and hostnetwork tests.
- Tests that create a Service w/ and w/o a NodePort type.
- Tests for configmaps: tries each disallowed string, ensuring each is
detected, and the other has a configmap with no disallowed string,
ensuring it is not detected.
- The anonymous user creating a namespace.
- Tests for all kactivity rules e.g. those that create/delete
resources as compared to suspicious activity.
- Exec/Attach to Pod
- Creating a namespace outside of an allowed set
- Creating a pod/serviceaccount in kube-system/kube-public namespaces
- Deleting/modifying a system cluster role
- Creating a binding to the cluster-admin role
- Creating a cluster role binding that wildcards verbs or resources
- Creating a cluster role with write/pod exec privileges
* Don't manually install gcc 4.8
GCC 4.8 should already be installed by default on the VM we use for
Travis.