* Allow SSL for k8s audit endpoint
Allow enabling SSL for the Kubernetes audit log web server. This
required adding two new configuration options: webserver.ssl_enabled and
webserver.ssl_certificate. To enable SSL, add the following to the webserver
section of the falco.yaml config:
webserver:
  enabled: true
  listen_port: 8765s
  k8s_audit_endpoint: /k8s_audit
  ssl_enabled: true
  ssl_certificate: /etc/falco/falco.pem
Note that the port number has an "s" appended to indicate SSL for the
port, which is how civetweb expects SSL ports to be denoted. We could
change this to dynamically add the "s" when ssl_enabled is true.
The ssl_certificate is a combined SSL certificate and corresponding
key contained in a single file. You can generate a key/cert as follows:
$ openssl req -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem
$ cat certificate.pem key.pem > falco.pem
$ sudo cp falco.pem /etc/falco/falco.pem
Fix SSL option handling.
* Add notes on how to create the SSL certificate
Add notes on how to create the SSL certificate to the config comments.
gcc 5 is no longer included in Debian unstable, but we need it to build
CentOS kernels, which are 3.x based and explicitly want a gcc version 3,
4, or 5 compiler.
So grab copies we've saved from Debian snapshots with the prefix
https://snapshot.debian.org/archive/debian/20190122T000000Z. They're
stored at downloads.draios.com and installed in a dpkg -i step after the
main packages are installed, but before any other by-hand packages are
installed.
A recent sysdig change added support for CRI and also added new external
dependencies (CRI uses grpc to communicate between the client and server).
Add those dependencies.
* Add falco service to k8s install/update labels
Update the instructions for K8s RBAC installation to also create a
service that maps to port 8765 of the falco pod. This allows other
services to access the embedded webserver within falco.
Also clean up the set of labels to use a consistent app: falco-example,
role: security for each object.
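For illustration, a minimal sketch of what such a Service could look like
(the service name and label values here are assumptions following the
app: falco-example / role: security convention above, not the exact
manifest from the instructions):
apiVersion: v1
kind: Service
metadata:
  name: falco-service        # hypothetical name
  labels:
    app: falco-example
    role: security
spec:
  selector:
    app: falco-example
    role: security
  ports:
    - protocol: TCP
      port: 8765
      targetPort: 8765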
* Change K8s Audit Example to use falco daemonset
Change the K8s Audit Example instructions to use minikube in conjunction
with a falco daemonset running inside of minikube. (We're going to start
prebuilding kernel modules for recent minikube variants to make this
possible).
When running inside of minikube in conjunction with a service, you have
to go through some additional steps to find the ClusterIP associated
with the falco service and use that IP when configuring the k8s audit
webhook. Overall it's still a more self-contained set of instructions,
though.
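As an illustration of that step, the k8s audit webhook configuration is a
kubeconfig-format file pointing at the service's ClusterIP; a minimal
sketch, assuming a placeholder ClusterIP of 10.107.1.1 (substitute the
address reported for the falco service):
apiVersion: v1
kind: Config
clusters:
- name: falco
  cluster:
    # placeholder ClusterIP of the falco service, plus the embedded webserver port/endpoint
    server: http://10.107.1.1:8765/k8s_audit
contexts:
- context:
    cluster: falco
    user: ""
  name: default-context
current-context: default-context
preferences: {}
users: []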
In the common case, falco doesn't generate much output, so it's
desirable not to buffer it in case you're tailing logs with tail -f.
So change the default for buffered outputs to false.
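In falco.yaml this corresponds to the buffered_outputs option; a minimal
sketch of the new default:
# Whether or not output to any of the output channels is buffered.
# Defaults to false so output appears immediately, e.g. when tailing logs.
buffered_outputs: false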
* Improved inbound/outbound macros
Improved versions of inbound/outbound macros that add coverage for
recvfrom/recvmsg, sendto/sendmsg and also ignore non-blocking syscalls
in a different way.
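As a rough sketch of the shape of such a macro (simplified, not the exact
rule text from falco_rules.yaml), the inbound macro now also considers the
recv-family syscalls:
- macro: inbound
  condition: >
    (((evt.type in (accept,listen) and evt.dir=<) or
      (evt.type in (recvfrom,recvmsg) and evt.dir=< and fd.l4proto != tcp)) and
     (fd.typechar = 4 or fd.typechar = 6) and
     (evt.rawres >= 0 or evt.res = EINPROGRESS))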
* Let nginx-ingress-c(ontroller) write to /etc/nginx
The process name is truncated to nginx-ingress-c due to the comm length limit.
Also fix some parentheses in another write_etc_common macro.
* Also let calico setns.
* Let prometheus-conf write its config
Let prometheus-conf write its config below /etc/prometheus.
* Let openshift oc write to /etc/origin/node
The -Wextra compile-time option will enable additional diagnostic
warnings. The -Werror option will cause the compiler to treat warnings
as errors. This change adds a build time option,
BUILD_WARNINGS_AS_ERRORS, to conditionally enable those flags. Note
that depending on the compiler you're using, if you enable this option,
compilation may fail (some compiler versions have additional warnings
that have not yet been resolved).
Testing with these options in place identified a destructor that was
throwing an exception. C++11 doesn't allow destructors to throw
exceptions, so those throws would have resulted in calls to
terminate(). I replaced them with an error log and a call to assert().
It's possible to call event_tags_for_ruleset/evttypes_for_ruleset for a
ruleset that hasn't been loaded. In this case, it's possible to go past
the end of the m_rulesets array.
After fixing that, it's also possible to go past the end of the
event_tags array in event_tags_for_ruleset().
So in both cases, check the index against the array size before
indexing.
Add k8s audit rules to falco's config so they are read by default.
Rename some generic macros like modify, create, delete in the k8s audit
rules so they don't overlap with macros in the main rules file.
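For example, the rules_file list in falco.yaml now also points at the k8s
audit rules file (the paths below assume the default /etc/falco layout):
rules_file:
  - /etc/falco/falco_rules.yaml
  - /etc/falco/falco_rules.local.yaml
  - /etc/falco/k8s_audit_rules.yaml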
* Add mounting of /var/lib/kubelet* to sensitive mounts
* Fix GKE/Istio false positives
- Allow kubectl to write below /root/.kube
- Allow loopback/bridge (e.g. /home/kubernetes/bin/) to setns.
- Let istio pilot-agent write to /etc/istio.
- Let google_accounts(_daemon) write user .ssh files.
- Add /health as an allowed file below /.
This fixes https://github.com/falcosecurity/falco/issues/439.
* Improve ufw/cloud-init exceptions
Tie them to both the program and the file being written.
Also move the cloud-init exception to monitored_directory.
* Add new json/webserver libs, embedded webserver
Add two new external libraries:
- nlohmann-json is a better json library that makes stronger use of C++
features like type deduction, better conversion from stl structures,
etc. We'll use it to hold generic json objects instead of jsoncpp.
- civetweb is an embeddable webserver that will allow us to accept
posted json data.
New files webserver.{cpp,h} start an embedded webserver that listens for
POSTs on a configurable url and passes the json data to the falco
engine.
New falco config items are under webserver:
- enabled: true|false. Whether to start the embedded webserver or not.
- listen_port: Port that the webserver listens on.
- k8s_audit_endpoint: uri on which to accept POSTed k8s audit events.
(This commit doesn't compile entirely on its own, but we're grouping
these related changes into one commit for clarity).
* Don't use relative paths to find lua code
You can look directly below PROJECT_SOURCE_DIR.
* Reorganize compiler lua code
The lua compiler code is generic enough to work on more than just
sinsp-based rules, so move the parts of the compiler related to event
types and filterchecks out into a standalone lua file
sinsp_rule_utils.lua.
The checks for event types/filterchecks are now done from rule_loader,
and are dependent on a "source" attribute of the rule being
"sinsp". We'll be adding additional types of events next that come from
sources other than system calls.
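A minimal, hypothetical sketch of a rule carrying the source attribute
(the k8s_audit source and the ka.* fields are introduced in the items
below; the rule name and condition here are illustrative only):
- rule: Example K8s Audit Rule
  desc: Illustrative rule driven by k8s audit events rather than syscalls
  condition: ka.user.name = "system:anonymous"
  output: "K8s operation by anonymous user (uri=%ka.uri)"
  priority: WARNING
  source: k8s_audit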
* Manage separate syscall/k8s audit rulesets
Add the ability to manage separate sets of rules (syscall and
k8s_audit). Stop using the sinsp_evttype_filter object from the sysdig
repo, replacing it with falco_ruleset/falco_sinsp_ruleset from
ruleset.{cpp,h}. It has the same methods to add rules, associate them
with rulesets, and (for syscall) quickly find the relevant rules for a
given syscall/event type.
At the falco engine level, there are new parallel interfaces for both
types of rules (syscall and k8s_audit) to:
- add a rule: add_k8s_audit_filter/add_sinsp_filter
- match an event against rules, possibly returning a result:
process_sinsp_event/process_k8s_audit_event
At the rule loading level, the mechanics of creating filtercheck
objects are handled by two factories (sinsp_filter_factory and
json_event_filter_factory), both of which are held by the engine.
* Handle multiple rule types when parsing rules
Modify the steps of parsing a rule's filter expression to handle
multiple types of rules. Notable changes:
- In the rule loader/ast traversal, pass a filter api object down,
which is passed back up in the lua parser api calls like nest(),
bool_op(), rel_expr(), etc.
- The filter api object is either the sinsp factory or k8s audit
factory, depending on the rule type.
- When the rule is complete, the complete filter is passed to the
engine using either add_sinsp_filter() or add_k8s_audit_filter().
* Add multiple output formatting types
Add support for multiple output formatters. Notable changes:
- The falco engine is passed along to falco_formats to gain access to
the engine's factories.
- When creating a formatter, the source of the rule is passed along
with the format string, which controls which kind of output formatter
is created.
Also clean up exception handling a bit so all lua callbacks catch all
exceptions and convert them into lua errors.
* Add support for json, k8s audit filter fields
With some corresponding changes in sysdig, you can now create general
purpose filter fields and events, which can be tied together with
nesting, expressions, and relational operators. The classes here
represent an instance of these fields devoted to generic json objects as
well as k8s audit events. Notable changes:
- json_event: holds a json object, used by all of the below
- json_event_filter_check: has the ability to extract values out of a
json_event object and to define macros that associate a field like
"group.field" with a json pointer expression that extracts a single
property's value out of the json object. The basic field definition
also allows creating an index, e.g. group.field[index], where a
std::function is responsible for performing the indexing. This class
has virtual methods, so it must be overridden.
- jevt_filter_check: subclass of json_event_filter_check and defines
the following fields:
- jevt.time/jevt.rawtime: extracts the time from the underlying json object.
- jevt.value[<json pointer>]: a general purpose way to extract any
json value out of the underlying object. <json pointer> is a json
pointer expression (see the condition sketch after this list).
- jevt.obj: Return the entire object, stringified.
- k8s_audit_filter_check: implements fields that extract values from
k8s audit events. Most of the implementation is in the form of macros
like ka.user.name, ka.uri, ka.target.name, etc. that just use json
pointers to extract the appropriate value from a k8s audit event. More
advanced fields like ka.uri.param, ka.req.container.image use
indexing to extract individual values out of maps or arrays.
- json_event_filter_factory: used by things like the lua parser api,
output formatter, etc to create the necessary objects and return
them.
- json_event_formatter: given a format string, create the necessary
fields that will be used to create a resolved string when given a
json_event object.
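As a brief illustration of how these generic fields can appear in a rule
condition (the json pointer path /user/username and the rule name below
are hypothetical examples, not taken from the shipped rules):
- rule: Example Generic JSON Rule
  desc: Illustrative rule matching on an arbitrary json property
  condition: jevt.value[/user/username] = "some-admin-user"
  output: "Matched json event (time=%jevt.time)"
  priority: NOTICE
  source: k8s_audit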
* Add ability to list fields
Similar to sysdig's -l option, add --list (<source>) to list the fields
supported by falco. With no source specified, it will print all
fields. Source can be "syscall" for inspector fields, e.g. what is
supported by sysdig, or "k8s_audit" to list fields supported only by the
k8s audit support in falco.
* Initial set of k8s audit rules
Add an initial set of k8s audit rules. They're broken into 3 classes of
rules:
- Suspicious activity: this includes things like:
- A disallowed k8s user performing an operation
- A disallowed container being used in a pod.
- A pod created with a privileged container.
- A pod created with a sensitive mount.
- A pod using host networking
- Creating a NodePort Service
- A configmap containing private credentials
- A request being made by an unauthenticated user.
- Attach/exec to a pod. (We eventually want to also do privileged
pods, but that will require some state management that we don't
currently have).
- Creating a new namespace outside of an allowed set
- Creating a pod in either of the kube-system/kube-public namespaces
- Creating a serviceaccount in either of the kube-system/kube-public
namespaces
- Modifying any role starting with "system:"
- Creating a clusterrolebinding to the cluster-admin role
- Creating a role that wildcards verbs or resources
- Creating a role with writable permissions/pod exec permissions.
- Resource tracking: This includes noting when a deployment, service,
configmap, cluster role, service account, etc. are created or destroyed.
- Audit tracking: This tracks all audit events.
To support these rules, add macros/new indexing functions as needed to
support the required fields and ways to index the results.
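To give a flavor of these rules, a simplified sketch of a "disallowed k8s
user" style rule (the list contents and exact names here are illustrative,
not necessarily what ships in k8s_audit_rules.yaml):
- list: allowed_k8s_users
  items: ["minikube-user", "kubelet", "kube-scheduler"]
- rule: Disallowed K8s User
  desc: Detect any k8s operation by a user outside of an allowed set of users
  condition: not ka.user.name in (allowed_k8s_users)
  output: "K8s operation by a user not in the allowed set (user=%ka.user.name verb=%ka.verb uri=%ka.uri)"
  priority: WARNING
  source: k8s_audit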
* Add ability to read trace files of k8s audit evts
Expand the use of the -e flag to cover both .scap files containing
system calls as well as jsonl files containing k8s audit events:
If a trace file is specified, first try to read it using the
inspector. If that throws an exception, try to read the first line as
json. If both fail, return an error.
Based on the results of the open, the main loop either calls
do_inspect(), looping over system events, or
read_k8s_audit_trace_file(), reading each line as json and passing it to
the engine and outputs.
* Example showing how to enable k8s audit logs.
An example of how to enable k8s audit logging for minikube.
* Add unit tests for k8s audit support
Initial unit test support for k8s audit events. A new multiplex file
falco_k8s_audit_tests.yaml defines the tests. Traces (jsonl files) are
in trace_files/k8s_audit and new rules files are in
test/rules/k8s_audit.
Current test cases include:
- User outside allowed set
- Creating disallowed pod.
- Creating a pod explicitly on the allowed list
- Creating a pod w/ a privileged container (or second container), or a
pod with no privileged container.
- Creating a pod w/ a sensitive mount container (or second container), or a
pod with no sensitive mount.
- Cases for a trace without the relevant property, plus the container being
trusted, and hostnetwork tests.
- Tests that create a Service w/ and w/o a NodePort type.
- Tests for configmaps: tries each disallowed string, ensuring each is
detected, and the other has a configmap with no disallowed string,
ensuring it is not detected.
- The anonymous user creating a namespace.
- Tests for all kactivity rules, e.g. those that create/delete
resources as compared to suspicious activity.
- Exec/Attach to Pod
- Creating a namespace outside of an allowed set
- Creating a pod/serviceaccount in kube-system/kube-public namespaces
- Deleting/modifying a system cluster role
- Creating a binding to the cluster-admin role
- Creating a cluster role binding that wildcards verbs or resources
- Creating a cluster role with write/pod exec privileges
* Don't manually install gcc 4.8
gcc 4.8 should already be installed by default on the VM we use for
Travis.
* Add a falco-sns utility which publishes to an AWS SNS topic
* Add a script for deploying a function in AWS Lambda
* Bump dependencies
* Use an empty topic and pass AWS_DEFAULT_REGION environment variable
* Add gitignore
* Install ca-certificates.
They are used when we publish to an SNS topic.
* Add myself as a maintainer
* Decode events from SNS based messages
* Add Terraform manifests for getting an EKS up and running
Please pay attention to how to set up kubectl and how to join the workers:
https://www.terraform.io/docs/providers/aws/guides/eks-getting-started.html#obtaining-kubectl-configuration-from-terraform
https://www.terraform.io/docs/providers/aws/guides/eks-getting-started.html#required-kubernetes-configuration-to-join-worker-nodes
* Ignore terraform generated files
* Remove autogenerated files
* Also publish MessageAttributes, which allows the use of Filter Policies
This makes it possible to subscribe only to errors, or warnings, or several
priorities, or by rule name.
It covers the same functionality as the NATS publisher does.
* Add kubeconfig and aws-iam-authenticator from heptio to Lambda environment
* Add role trust from cluster creator to lambda role
* Enable CloudWatch for Lambda stuff
* Generate kubeconfig, kubeconfig for lambdas and the lambda ARN
These are used by the deployment script
* Just a cosmetic change
* Add a Makefile which creates the cluster and configures it
* Use terraform and artifacts which belongs to this repository for deploying
* Move CNCF related deployment to its own directory
* Create only SNS and Lambda stuff.
Assume that the EKS cluster will be created elsewhere.
* Bridge IAM with RBAC
This allows the Lambda role to be used to authenticate against
Kubernetes
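On EKS this kind of IAM-to-RBAC bridge is typically expressed through the
aws-auth ConfigMap in kube-system; a hypothetical sketch (the role ARN,
username, and group below are placeholders, and the group still needs an
RBAC binding granting whatever access the function requires):
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # placeholder ARN of the IAM role the Lambda function assumes
    - rolearn: arn:aws:iam::123456789012:role/falco-lambda-role
      username: falco-lambda
      groups:
        - falco-lambda-group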
* Do not rely on terraform for deploying a playbook in lambda
* Clean whitespace
* Move rebased playbooks to functions
* Fix rebase issues with deployment and rbac stuff
* Add a clean target to Makefile
* Inject sys.path modification to Kubeless function deployment
* Add documentation and instructions
* Load/unload kernel module on start/stop
When falco is started, load the kernel module. (As a backup, the falco
binary will also do a modprobe if it can't open the inspector.)
When falco is stopped, unload the kernel module.
This fixes https://github.com/falcosecurity/falco/issues/418.
* Put the script execute line in the right place.
Add a signal handler for SIGHUP that sets a global variable g_restart.
All the real execution of falco was already centralized in a standalone
function falco_init(), so simply exit on g_restart=true and call
falco_init() in a loop that restarts if g_restart is set to true.
Take care to not daemonize more than once and to reset the getopt index
to 1 on restart.
This fixes https://github.com/falcosecurity/falco/issues/432.
Update the express version to mitigate some security vulnerabilities.
Update the port to match the one used by demo.yml.
Change to /usr/src/app so npm install works as expected.