Compare commits

..

106 Commits

Author SHA1 Message Date
Leonardo Grasso
7ab327749f chore(userspace/engine): format lua source code
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2020-07-29 14:44:55 +02:00
Leonardo Grasso
4450fd3c4c revert(rules): remove require_engine_version at rule level
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2020-07-29 14:44:54 +02:00
Lorenzo Fontana
5cca1a6589 rule(Create Disallowed Pod): required_engine_version 5
rule(Create Privileged Pod): required_engine_version 5
rule(Create Sensitive Mount Pod): required_engine_version 5
rule(Create HostNetwork Pod): required_engine_version 5
rule(Pod Created in Kube Namespace): required_engine_version 5
rule(ClusterRole With Wildcard Created): required_engine_version 5
rule(ClusterRole With Write Privileges Created): required_engine_version 5
rule(ClusterRole With Pod Exec Created): required_engine_version 5

Co-Authored-By: Leonardo Di Donato <leodidonato@gmail.com>
Signed-off-by: Lorenzo Fontana <lo@linux.com>
2020-07-29 14:44:51 +02:00
Lorenzo Fontana
130126f170 rules(Container Drift Detected (open+create)): specify that rule is only
compatible with engine 6

Co-Authored-By: Leonardo Di Donato <leodidonato@gmail.com>
Signed-off-by: Lorenzo Fontana <lo@linux.com>
2020-07-29 14:43:26 +02:00
Lorenzo Fontana
c886debf83 rules: the required_engine_version is now on by default
Co-Authored-By: Leonardo Di Donato <leodidonato@gmail.com>
Signed-off-by: Lorenzo Fontana <lo@linux.com>
2020-07-29 14:42:05 +02:00
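For readers unfamiliar with the field: `required_engine_version` is declared once, at the top level of a rules file, and Falco refuses to load the file when its engine is older than the stated version. A minimal sketch of the syntax (the example rule and the version number are illustrative, not taken from falco_rules.yaml):

```yaml
# Declared once per rules file; checked by the engine at load time.
- required_engine_version: 5

# Placeholder rule, only to show where the field sits relative to rules.
- rule: Example Rule
  desc: illustrative rule showing file layout
  condition: evt.type = execve
  output: "example exec (command=%proc.cmdline)"
  priority: NOTICE
```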
Antoine Deschênes
0a600253ac falco-driver-loader: fix conflicting $1 argument usage
Signed-off-by: Antoine Deschênes <antoine@antoinedeschenes.com>
2020-07-28 09:58:39 +02:00
kaizhe
571f8a28e7 add macro user_read_sensitive_file_containers
Signed-off-by: kaizhe <derek0405@gmail.com>
2020-07-25 08:53:06 +02:00
kaizhe
6bb0bba68a rules update(Read sensitive file untrusted): add trusted images into whitelist
Signed-off-by: kaizhe <derek0405@gmail.com>
2020-07-25 08:53:06 +02:00
Leonardo Grasso
f1a42cf259 rule(list allowed_k8s_users): add "kubernetes-admin" user
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2020-07-25 08:51:13 +02:00
Nicolas Vanheuverzwijn
427c15f257 rule(macro falco_privileged_images): add 'docker.io/falcosecurity/falco'
Add the 'docker.io/falcosecurity/falco' image to the 'falco_privileged_images' macro. This prevents messages like the following when booting up Falco:

```
Warning Pod started with privileged container (user=system:serviceaccount:kube-system:daemon-set-controller pod=falco-42brw ns=monitoring images=docker.io/falcosecurity/falco:0.24.0)
```

Signed-off-by: Nicolas Vanheuverzwijn <nicolas.vanheu@gmail.com>
2020-07-23 20:49:57 +02:00
kaizhe
a9b4e6c73e add sysdig/agent-slim to the user_trusted_images macro
Signed-off-by: kaizhe <derek0405@gmail.com>
2020-07-20 23:41:47 +02:00
kaizhe
b32853798f rule update (macro: user_trusted_containers): add sysdig/node-image-analyzer to macro user_trusted_containers
Signed-off-by: kaizhe <derek0405@gmail.com>
2020-07-20 23:41:47 +02:00
Shane Lawrence
b86bc4a857 Use ISO 8601 format for changelog dates.
Signed-off-by: Shane Lawrence <shane@lawrence.dev>
2020-07-20 23:25:30 +02:00
Leo Di Donato
23224355a5 docs(test): integration tests intended to be run against a release build of Falco
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>

Co-authored-by: Leonardo Grasso <me@leonardograsso.com>
2020-07-20 22:48:00 +02:00
Leo Di Donato
84fbac0863 chore(.circleci): switch back to falcosecurity/falco-tester:latest runner for integration tests
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-07-20 22:48:00 +02:00
Leonardo Di Donato
3814b2e81b docs(test): run all the test suites at once
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-07-20 22:48:00 +02:00
Leonardo Di Donato
a83b91fc53 new(test): run_regression_tests.sh -h
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-07-20 22:48:00 +02:00
Leonardo Di Donato
e618f005b6 update(docker/tester): use the new run_regression_tests.sh CLI flags
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-07-20 22:48:00 +02:00
Leonardo Di Donato
d8faa95702 fix(test): run_regression_tests.sh must generate falco_traces test suite in a non-interactive way
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-07-20 22:48:00 +02:00
Leonardo Di Donato
ef5e71598a docs(test): instruction to run falco_tests_package integration test suite locally
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-07-20 22:48:00 +02:00
Leonardo Di Donato
bb1282c7be update(test): make run_regression_tests.sh script accept different
options

The following options have been added:
* -v (verbose)
* -p (prepare falco_traces test suite)
* -b (specify custom branch for downloading trace files)
* -d (specify the build directory)

Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-07-20 22:48:00 +02:00
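The flag handling above can be sketched with `getopts` (illustrative only — the variable names and defaults below are assumptions, not the actual implementation of run_regression_tests.sh):

```shell
# Illustrative getopts sketch of the flags listed in the commit message;
# variable names and defaults are assumptions, not the script's real code.
parse_opts() {
  VERBOSE=0; PREPARE=0; BRANCH="master"; BUILD_DIR="build"
  local OPTIND opt
  while getopts "vpb:d:" opt; do
    case "$opt" in
      v) VERBOSE=1 ;;            # -v (verbose)
      p) PREPARE=1 ;;            # -p (prepare falco_traces test suite)
      b) BRANCH="$OPTARG" ;;     # -b (custom branch for trace files)
      d) BUILD_DIR="$OPTARG" ;;  # -d (build directory)
    esac
  done
}

parse_opts -v -b my-branch -d /tmp/build
echo "$VERBOSE $PREPARE $BRANCH $BUILD_DIR"
```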
Leonardo Di Donato
8f07189ede docs(test): instructions for executing falco_traces integration test suite
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-07-20 22:48:00 +02:00
Leonardo Di Donato
dec2ff7d72 docs(test): prepare the local environment for running integration test suites
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-07-20 22:48:00 +02:00
Leonardo Di Donato
f3022e0abf build(test): target test-traces files
This make target calls the `trace-files-psp`, `trace-files-k8s-audit`,
`trace-files-base-scap` targets to place all the integration test
fixtures in the proper position.

Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-07-20 22:48:00 +02:00
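The aggregation described above can be sketched as a Makefile fragment (only the three prerequisite target names come from the commit; the aggregate target's name and recipe are approximated from the commit message):

```make
# Aggregate target: places all integration-test fixtures by delegating to
# the per-suite targets named in the commit (recipe is illustrative).
trace-files: trace-files-psp trace-files-k8s-audit trace-files-base-scap
	@echo "integration test fixtures in place"
```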
Leonardo Di Donato
9b42b20e1c build(test/trace_files): target trace-files-base-scap
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-07-20 22:48:00 +02:00
Leonardo Di Donato
850a49989f build(test/trace_files/psp): target trace-files-psp
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-07-20 22:48:00 +02:00
Leonardo Di Donato
0dc2a6abd3 build(test/traces_file/k8s_audit): target trace-files-k8s-audit
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-07-20 22:48:00 +02:00
Leonardo Grasso
4346e98f20 feat(userspace/falco): print version at startup
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2020-07-16 22:35:56 +02:00
Lorenzo Fontana
38009f23b4 build: remove libyaml from cpack rpm
Signed-off-by: Lorenzo Fontana <lo@linux.com>
2020-07-16 19:34:39 +02:00
Lorenzo Fontana
324a3b88e7 build: remove libyaml-0-2 as dependency in packages and dockerfiles
Signed-off-by: Lorenzo Fontana <lo@linux.com>
2020-07-16 19:34:39 +02:00
Lorenzo Fontana
c03f563450 build: libyaml in bundled deps
Signed-off-by: Lorenzo Fontana <lo@linux.com>
2020-07-16 19:34:39 +02:00
Leonardo Di Donato
c4b7f17271 docs: refinements to the release process docs
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-07-16 16:38:15 +02:00
Leonardo Di Donato
ebb0c47524 docs: 0.24.0 changelog entries
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-07-16 16:38:15 +02:00
Lorenzo Fontana
a447b6996e fix(userspace): rethrow inspector open exceptions
Co-Authored-By: Leonardo Di Donato <leodidonato@gmail.com>
Signed-off-by: Lorenzo Fontana <fontanalorenz@gmail.com>
2020-07-15 18:33:50 +02:00
Leonardo Di Donato
596e7ee303 fix(userspace/falco): try to insert kernel module driver conditionally
Do it only when userspace instrumentation is not enabled and the
syscall input source is enabled (i.e., !disable_syscall).

Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-07-15 18:33:50 +02:00
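The condition described above reduces to a small predicate — a sketch of the logic only, since the real check lives in Falco's C++ startup code:

```shell
# Sketch: insert the kernel module only when userspace instrumentation is
# off AND the syscall input source is enabled (i.e. !disable_syscall).
should_insert_module() {
  local userspace="$1" disable_syscall="$2"
  if [ "$userspace" = "false" ] && [ "$disable_syscall" = "false" ]; then
    echo "yes"
  else
    echo "no"
  fi
}

should_insert_module false false   # kernel driver path
should_insert_module true false    # userspace instrumentation enabled
```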
Leonardo Di Donato
8ae6aa51b9 chore: onetbb dependency is back
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-07-15 18:33:50 +02:00
Leo Di Donato
1343fd7e92 update(userspace/falco): userspace instrumentation help line
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-07-15 18:33:50 +02:00
Kris Nova
1954cf3af3 update(userspace/falco): edits to the falco CLI
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
Co-authored-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-07-15 18:33:50 +02:00
Kris Nova
bc8f9a5692 feat(cli): adding -u to the usage text
Signed-off-by: Kris Nova <kris@nivenly.com>
2020-07-15 18:33:50 +02:00
Kris Nova
1af1226566 feat(build): fixing MD5 of tpp for udig/pdig build
Signed-off-by: Kris Nova <kris@nivenly.com>
2020-07-15 18:33:50 +02:00
Loris Degioanni
c743f1eb68 feat(cli): adding -u to flip inspector method calls
udig support through the -u command line flag

Signed-off-by: Kris Nóva <kris@nivenly.com>
Co-authored-by: Kris Nóva <kris@nivenly.com>
2020-07-15 18:33:50 +02:00
Leonardo Grasso
bca98e0419 update(rules): disable drift detection rules by default
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2020-07-15 18:01:57 +02:00
Nicolas Marier
32bae35de2 rule(list package_mgmt_binaries): add snapd to list
Snap is a package manager by Canonical which was not in the
`package_mgmt_binaries` list.

Signed-off-by: Nicolas Marier <nmarier@coveo.com>
2020-07-10 10:04:26 +02:00
Leonardo Grasso
de147447ed update(userspace/falco): rename --stats_interval to --stats-interval
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2020-07-08 17:55:16 +02:00
Leonardo Di Donato
825e249294 update(userspace/falco): rename --stats_interval to --stats-interval
To match the style of other long flags of the Falco CLI.

Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-07-08 17:55:16 +02:00
Leonardo Di Donato
00689a5d97 fix(userspace/falco): allow stats interval greater than 999 milliseconds

Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-07-08 17:55:16 +02:00
Leonardo Grasso
4d31784a83 fix(docker): correct syntax error in the entrypoint script
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2020-07-08 12:11:33 +02:00
Leonardo Di Donato
2848eceb03 build(cmake/modules): update driver version to 85c889
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-07-07 21:19:08 +02:00
Leonardo Di Donato
c7ac1ef61b update(userspace/engine): const correctness for json_event class
Co-authored-by: Nathan Baker <nathan.baker@sysdig.com>
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-07-07 21:19:08 +02:00
Leonardo Di Donato
5fd3c38422 build(cmake/modules): update driver version to 33c00f
This driver version, among other things (like userspace instrumentation
support) includes a fix for building the eBPF driver on CentOS 8
machines too.

Co-authored-by: Lorenzo Fontana <lo@linux.com>
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-07-07 18:41:01 +02:00
Leo Di Donato
3bad1d2a56 docs: auto threadiness comment into Falco config
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-07-07 13:42:09 +02:00
Leonardo Di Donato
8ad5c4f834 update: default grpc server threadiness is 0 now ("auto")
Co-authored-by: Lorenzo Fontana <lo@linux.com>
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-07-07 13:42:09 +02:00
Leonardo Di Donato
553856ad68 chore(userspace): log the gRPC threadiness
Co-authored-by: Lorenzo Fontana <lo@linux.com>
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-07-07 13:42:09 +02:00
Leonardo Di Donato
2d52be603d update(userspace/falco): gRPC server threadiness 0 by default (which
means "auto")

The 0 ("auto") value sets the threadiness to the number of online cores
automatically.

Co-authored-by: Lorenzo Fontana <lo@linux.com>
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-07-07 13:42:09 +02:00
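The "auto" resolution can be sketched as follows (illustrative; the actual implementation is a C++ helper wrapping what `std::thread::hardware_concurrency()` reports):

```shell
# Sketch of the threadiness resolution: a configured value of 0 ("auto")
# expands to the number of online cores, as reported here by nproc.
effective_threadiness() {
  local configured="$1"
  if [ "$configured" -eq 0 ]; then
    nproc
  else
    echo "$configured"
  fi
}

effective_threadiness 8   # explicit value is kept as-is
effective_threadiness 0   # "auto": number of online cores
```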
Leonardo Di Donato
75e62269c3 new: hardware_concurrency helper
Co-authored-by: Lorenzo Fontana <lo@linux.com>
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-07-07 13:42:09 +02:00
Lorenzo Fontana
3d1f27d082 build: stale bot adjustments
Removed non-existent labels and made the error message a bit more
verbose to tell people what to expect next.

Signed-off-by: Lorenzo Fontana <lo@linux.com>
2020-07-07 12:20:55 +02:00
Leonardo Grasso
ad960a9485 chore(docker): rename SKIP_MODULE_LOAD to SKIP_DRIVER_LOADER
As per https://github.com/falcosecurity/falco/blob/master/proposals/20200506-artifacts-scope-part-2.md#action-items

Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2020-07-07 12:17:20 +02:00
kaizhe
d8d218230d rules update: create placeholder macros for customization
Signed-off-by: kaizhe <derek0405@gmail.com>
2020-07-03 20:54:36 +02:00
Leonardo Grasso
b7e7a10035 docs: add myself to owners
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2020-07-03 16:37:17 +02:00
Leonardo Grasso
fecf1a9fea fix(userspace/falco/lua): correct argument
This explains why `buffered_output: false` was not honored for stdout.

Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2020-07-03 11:45:00 +02:00
Leonardo Di Donato
54a6d5c523 build: do not download lyaml and lpeg from draios S3 anymore
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-07-02 06:01:12 +02:00
Leonardo Di Donato
9fe78bf658 build: fetch libb64 and luajit from github, not from draios repos
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-07-02 06:01:12 +02:00
Leonardo Di Donato
727755e276 build: fetch openssl, curl, njson dependencies from github not draios
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-07-02 06:01:12 +02:00
Lorenzo Fontana
352307431a fix: update k8s audit endpoint to /k8s-audit everywhere
Co-Authored-By: Leonardo Di Donato <leodidonato@gmail.com>
Signed-off-by: Lorenzo Fontana <lo@linux.com>
2020-07-01 13:29:51 +02:00
Leonardo Grasso
6cfb0ec2b8 update(test): setup bidi gRPC integration test
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2020-06-30 13:04:03 +02:00
Leonardo Grasso
4af769f84c new(test): add gRPC unix socket support
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2020-06-30 13:04:03 +02:00
Leonardo Grasso
82e0b5f217 fix(userspace/falco): honor -M also when using a trace file
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2020-06-30 13:04:03 +02:00
Leonardo Di Donato
b4d005eb51 new(test): read grpc config fields
Co-authored-by: Lorenzo Fontana <lo@linux.com>
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-06-30 13:04:03 +02:00
Leonardo Di Donato
061c5f5ac9 new(test): setup gRPC output test case
Co-authored-by: Lorenzo Fontana <lo@linux.com>
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-06-30 13:04:03 +02:00
Leonardo Di Donato
c06ccf8378 update(docker/tester): grpcurl
Co-authored-by: Lorenzo Fontana <lo@linux.com>
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-06-30 13:04:03 +02:00
samwhite-gl
3408ea9164 Add GitLab to ADOPTERS.md
GitLab is now using Falco to provide Container Host Security protection

Co-Authored-By: Kris Nova <kris@nivenly.com>
Signed-off-by: Kris Nova <kris@nivenly.com>
2020-06-30 11:45:58 +02:00
samwhite-gl
51aea00be8 Add GitLab to ADOPTERS.md
GitLab is now using Falco to provide Container Host Security protection

Co-Authored-By: Kris Nova <kris@nivenly.com>
Signed-off-by: Kris Nova <kris@nivenly.com>
2020-06-30 11:45:58 +02:00
Antoine Deschênes
a5cadbf5fa rule(Disallowed K8s User): whitelist kube-apiserver-healthcheck
kops 1.17 adds a kube-apiserver-healthcheck user: https://github.com/kubernetes/kops/tree/master/cmd/kube-apiserver-healthcheck

Logs are currently spammed with:
```
{"output":"18:02:15.466580992: Warning K8s Operation performed by user not in allowed list of users (user=kube-apiserver-healthcheck target=<NA>/<NA> verb=get uri=/healthz resp=200)","priority":"Warning","rule":"Disallowed K8s User","time":"2020-06-29T18:02:15.466580992Z", "output_fields": {"jevt.time":"18:02:15.466580992","ka.response.code":"200","ka.target.name":"<NA>","ka.target.resource":"<NA>","ka.uri":"/healthz","ka.user.name":"kube-apiserver-healthcheck","ka.verb":"get"}}
```

Signed-off-by: Antoine Deschênes <antoine.deschenes@equisoft.com>
2020-06-30 11:44:11 +02:00
Lorenzo Fontana
9eb0b7fb5f update(userspace/falco): avoid memory allocation for falco output
response

Co-Authored-By: Leonardo Di Donato <leodidonato@gmail.com>
Signed-off-by: Lorenzo Fontana <lo@linux.com>
2020-06-29 20:42:50 +02:00
Lorenzo Fontana
869d883dc7 update(userspace/falco): better gRPC server logging
Co-Authored-By: Leonardo Di Donato <leodidonato@gmail.com>
Signed-off-by: Lorenzo Fontana <lo@linux.com>
2020-06-29 20:42:50 +02:00
Leonardo Di Donato
b88767f558 bc(userspace/falco): the Falco gRPC Outputs API are now "falco.outputs.service/get" and "falco.outputs.service/sub"
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-06-29 20:42:50 +02:00
Leonardo Di Donato
bdbdf7b830 update(userspace/falco): pluralize Falco output proto and service
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-06-29 20:42:50 +02:00
Leonardo Di Donato
4e2f3e2c71 update(proposals): keep Falco gRPC Outputs proposal in sync
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-06-29 20:42:50 +02:00
Lorenzo Fontana
3d9bc8f67b update(userspace/falco): remove keepalive from output request
Co-Authored-By: Leonardo Di Donato <leodidonato@gmail.com>
Signed-off-by: Lorenzo Fontana <lo@linux.com>
2020-06-29 20:42:50 +02:00
Lorenzo Fontana
c89c11c3c4 update(userspace/falco): remove output queue size
Co-Authored-By: Leonardo Di Donato <leodidonato@gmail.com>
Signed-off-by: Lorenzo Fontana <lo@linux.com>
2020-06-29 20:42:50 +02:00
Lorenzo Fontana
5bd9ba0529 update(userspace/falco/grpc): simpler bidirectional context state
transitions

Co-Authored-By: Leonardo Di Donato <leodidonato@gmail.com>
Signed-off-by: Lorenzo Fontana <lo@linux.com>
2020-06-29 20:42:50 +02:00
Lorenzo Fontana
b9e6d65e69 update(userspace/falco/grpc): bidirectional sub implementation
Co-Authored-By: Leonardo Di Donato <leodidonato@gmail.com>
Signed-off-by: Lorenzo Fontana <lo@linux.com>
2020-06-29 20:42:50 +02:00
Lorenzo Fontana
0d194f2b40 update(userspace/falco/grpc): for stream contexts use a flag to detect
if it is still running or not

Co-Authored-By: Leonardo Di Donato <leodidonato@gmail.com>
Signed-off-by: Lorenzo Fontana <lo@linux.com>
2020-06-29 20:42:50 +02:00
Lorenzo Fontana
d9f2cda8cf update(userspace/falco/grpc): dealing with multiple streaming requests
Co-Authored-By: Leonardo Di Donato <leodidonato@gmail.com>
Signed-off-by: Lorenzo Fontana <lo@linux.com>
2020-06-29 20:42:50 +02:00
Leonardo Di Donato
2ebc55f897 wip(userspace/falco): bidirectional gRPC outputs logic (initial)
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-06-29 20:42:50 +02:00
Leonardo Di Donato
01ae8701d9 new(userspace/falco): concrete initial implementation of the subscribe gRPC service
Co-authored-by: Lorenzo Fontana <lo@linux.com>
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-06-29 20:42:50 +02:00
Leonardo Di Donato
be6c4b273d new(userspace/falco): gRPC context for bidirectional services
Co-authored-by: Lorenzo Fontana <lo@linux.com>
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-06-29 20:42:50 +02:00
Leonardo Di Donato
a72f27c028 new(userspace/falco): macro to REGISTER_BIDI gRPC services
Co-authored-by: Lorenzo Fontana <lo@linux.com>
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-06-29 20:42:50 +02:00
Leonardo Di Donato
58adc5b60c new(userspace/falco): output gRPC service to provide a server streaming method and a bidirectional method to obtain Falco alerts
Co-authored-by: Lorenzo Fontana <lo@linux.com>
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-06-29 20:42:50 +02:00
Leonardo Di Donato
cf31712fad update(userspace/falco): context class for bidirectional gRPC services
Co-authored-by: Lorenzo Fontana <lo@linux.com>
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-06-29 20:42:50 +02:00
Leonardo Di Donato
a568c42adb update(userspace/falco): unsafe_size() method for falco::output::queue
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-06-29 20:42:50 +02:00
Leonardo Di Donato
05dd170d70 fix(userspace/falco): virtual destructor of base grpc context
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-06-29 20:42:50 +02:00
kaizhe
e29a4c8560 rule(list network_tool_binaries): add zmap to the list
Signed-off-by: kaizhe <derek0405@gmail.com>
2020-06-29 18:17:28 +02:00
Lorenzo Fontana
c5ba95deff docs: teal logo is svg
Signed-off-by: Lorenzo Fontana <lo@linux.com>
2020-06-29 09:14:50 -07:00
Leonardo Grasso
27037e64cc chore(rules): remove redundant condition from root_dir macro
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2020-06-26 19:57:18 +02:00
Leonardo Grasso
1859552834 fix(rules): correct root_dir macro to avoid unwanted matching
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2020-06-26 19:57:18 +02:00
Nicolas Marier
298ba29c88 rule(Change thread namespace): whitelist protokube, dockerd, tini and aws
These application binaries raise events in the `Change thread namespace`
rule as part of their normal operation.

Here are more details regarding each binary:

- `protokube`: See [this](https://github.com/kubernetes/kops/tree/master/protokube)
- `dockerd`: The `dockerd` process name is whitelisted already in this
  rule, but not if it is the parent, which will happen if you are doing
  docker-in-docker.
- `tini`: See [this](https://github.com/krallin/tini)
- `aws`: This one I noticed because Falco itself uses the AWS CLI to
  send events to SNS, which was triggering this rule.

Signed-off-by: Nicolas Marier <nmarier@coveo.com>
2020-06-24 11:02:12 +02:00
Nicolas Marier
0272b94bb1 rule(macro exe_running_docker_save): add new cmdline
While using Falco, I noticed we were getting many events that were
virtually identical to those that were previously filtered out by the
`exe_running_docker_save` macro, but where the `cmdline` was something
like `exe /var/run/docker/netns/cc5c7b9bb110 all false`. I believe this
is caused by the use of docker-in-docker.

Signed-off-by: Nicolas Marier <nmarier@coveo.com>
2020-06-24 11:02:12 +02:00
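A hedged sketch of a condition that would match such cmdlines (the macro's actual definition lives in falco_rules.yaml and is not reproduced here; the condition below is illustrative only):

```yaml
# Illustrative only: match the docker-in-docker "exe .../netns/... all false"
# cmdlines described above; the real exe_running_docker_save macro may differ.
- macro: exe_running_docker_save
  condition: >
    proc.name = "exe" and
    (proc.cmdline contains "/var/lib/docker" or
     proc.cmdline contains "/var/run/docker")
```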
Nicolas Marier
dbd86234ad rule(macro user_expected_terminal_shell_in_container_conditions): create the macro
A macro like this is useful because configuration management software
may need to run containers with an attached terminal to perform some of
its duties, and users may want to ignore this behavior.

Signed-off-by: Nicolas Marier <nmarier@coveo.com>
2020-06-23 21:53:41 +02:00
Nicolas Marier
b69bde6bd4 rule(macro user_known_write_below_binary_dir_activities): Create the macro
This macro is useful to allow binaries to be installed under certain
circumstances. For example, it may be fine to install a binary during a
build in a ci/cd pipeline.

Signed-off-by: Nicolas Marier <nmarier@coveo.com>
2020-06-22 16:19:07 +02:00
Leonardo Di Donato
d2f0ad7c07 fix(rules): exclude runc writing /var/lib/docker for container drift
detected rules

Co-authored-by: Lorenzo Fontana <lo@linux.com>
Co-authored-by: Leonardo Grasso <me@leonardograsso.com>
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2020-06-22 12:24:59 +02:00
Omer Azaria
70b9bfe1d6 rule(Container Drift Detected): detect new exec created in a container
Signed-off-by: Omer Azaria <omer.azaria@sysdig.com>
2020-06-22 12:24:59 +02:00
Dotan Horovits
17f6da7885 Add Logz.io to Falco's adopters list ADOPTERS.md (continuing commit #1235)
What this PR does / why we need it:
updating ADOPTERS.md with a new adopter details

Special notes for your reviewer:
re-issuing the PR from #1235 (due to change of owner, per request by @leogr)

Signed-off-by: Dotan Horovits dotan.horovits@gmail.com
2020-06-19 15:37:55 +02:00
kaizhe
dee0cc67f3 rule update (Anonymous Request Allowed): check that the auth decision equals allow
Signed-off-by: kaizhe <derek0405@gmail.com>
2020-06-19 15:32:58 +02:00
Leonardo Grasso
8429256e37 fix(falco.yaml): correct k8s audit endpoint
Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
2020-06-19 15:31:17 +02:00
Shane Lawrence
00884ef581 Log modified copy instead of original message.
Signed-off-by: Shane Lawrence <shane@lawrence.dev>
2020-06-19 15:28:42 +02:00
75 changed files with 3123 additions and 1899 deletions

.github/stale.yml

@@ -6,7 +6,6 @@ daysUntilClose: 7
 exemptLabels:
 - cncf
 - roadmap
-- enhancement
 - "help wanted"
 # Label to use when marking an issue as stale
 staleLabel: wontfix
@@ -15,5 +14,7 @@ markComment: >
   This issue has been automatically marked as stale because it has not had
   recent activity. It will be closed if no further activity occurs. Thank you
   for your contributions.
+  Issues labeled "cncf", "roadmap" and "help wanted" will not be automatically closed.
+  Please refer to a maintainer to get such label added if you think this should be kept open.
 # Comment to post when closing a stale issue. Set to `false` to disable
 closeComment: false

ADOPTERS.md

@@ -8,8 +8,13 @@ This is a list of production adopters of Falco (in alphabetical order):
 * [Frame.io](https://frame.io/) - Frame.io is a cloud-based (SaaS) video review and collaboration platform that enables users to securely upload source media, work-in-progress edits, dailies, and more into private workspaces where they can invite their team and clients to collaborate on projects. Understanding what is running on production servers, and the context around why things are running is even more tricky now that we have further abstractions like Docker and Kubernetes. To get this needed visibility into our system, we rely on Falco. Falco's ability to collect raw system calls such as open, connect, exec, along with their arguments offer key insights on what is happening on the production system and became the foundation of our intrusion detection and alerting system.
+* [GitLab](https://about.gitlab.com/direction/defend/container_host_security/) - GitLab is a complete DevOps platform, delivered as a single application, fundamentally changing the way Development, Security, and Ops teams collaborate. GitLab Ultimate provides the single tool teams need to find, triage, and fix vulnerabilities in applications, services, and cloud-native environments enabling them to manage their risk. This provides them with repeatable, defensible processes that automate security and compliance policies. GitLab includes a tight integration with Falco, allowing users to defend their containerized applications from attacks while running in production.
 * [League](https://league.com/ca/) - League provides health benefits management services to help employees understand and get the most from their benefits, and employers to provide effective, efficient plans. Falco is used to monitor our deployed services on Kubernetes, protecting against malicious access to containers which could lead to leaks of PHI or other sensitive data. The Falco alerts are logged in Stackdriver for grouping and further analysis. In the future, we're hoping for integrations with Prometheus and AlertManager as well.
+* [Logz.io](https://logz.io/) - Logz.io is a cloud observability platform for modern engineering teams. The Logz.io platform consists of three products — Log Management, Infrastructure Monitoring, and Cloud SIEM — that work together to unify the jobs of monitoring, troubleshooting, and security. We empower engineers to deliver better software by offering the world's most popular open source observability tools — the ELK Stack, Grafana, and Jaeger — in a single, easy to use, and powerful platform purpose-built for monitoring distributed cloud environments. Cloud SIEM supports data from multiple sources, including Falco's alerts, and offers useful rules and dashboards content to visualize and manage incidents across your systems in a unified UI.
+  * https://logz.io/blog/k8s-security-with-falco-and-cloud-siem/
 * [Preferral](https://www.preferral.com) - Preferral is a HIPAA-compliant platform for Referral Management and Online Referral Forms. Preferral streamlines the referral process for patients, specialists and their referral partners. By automating the referral process, referring practices spend less time on the phone, manual efforts are eliminated, and patients get the right care from the right specialist. Preferral leverages Falco to provide a Host Intrusion Detection System to meet their HIPPA compliance requirements.
 * https://hipaa.preferral.com/01-preferral_hipaa_compliance/

CHANGELOG.md

@@ -2,9 +2,114 @@
 This file documents all notable changes to Falco. The release numbering uses [semantic versioning](http://semver.org).
+## v0.24.0
+Released on 2020-07-16
+### Major Changes
+* new: Falco now supports userspace instrumentation with the -u flag [[#1195](https://github.com/falcosecurity/falco/pull/1195)]
+* BREAKING CHANGE: --stats_interval is now --stats-interval [[#1308](https://github.com/falcosecurity/falco/pull/1308)]
+* new: auto threadiness for gRPC server [[#1271](https://github.com/falcosecurity/falco/pull/1271)]
+* BREAKING CHANGE: server streaming gRPC outputs method is now `falco.outputs.service/get` [[#1241](https://github.com/falcosecurity/falco/pull/1241)]
+* new: new bi-directional async streaming gRPC outputs (`falco.outputs.service/sub`) [[#1241](https://github.com/falcosecurity/falco/pull/1241)]
+* new: unix socket for the gRPC server [[#1217](https://github.com/falcosecurity/falco/pull/1217)]
+### Minor Changes
+* update: driver version is 85c88952b018fdbce2464222c3303229f5bfcfad now [[#1305](https://github.com/falcosecurity/falco/pull/1305)]
+* update: `SKIP_MODULE_LOAD` renamed to `SKIP_DRIVER_LOADER` [[#1297](https://github.com/falcosecurity/falco/pull/1297)]
+* docs: add leogr to OWNERS [[#1300](https://github.com/falcosecurity/falco/pull/1300)]
+* update: default threadiness to 0 ("auto" behavior) [[#1271](https://github.com/falcosecurity/falco/pull/1271)]
+* update: k8s audit endpoint now defaults to /k8s-audit everywhere [[#1292](https://github.com/falcosecurity/falco/pull/1292)]
+* update(falco.yaml): `webserver.k8s_audit_endpoint` default value changed from `/k8s_audit` to `/k8s-audit` [[#1261](https://github.com/falcosecurity/falco/pull/1261)]
+* docs(test): instructions to run regression test suites locally [[#1234](https://github.com/falcosecurity/falco/pull/1234)]
+### Bug Fixes
+* fix: --stats-interval correctly accepts values >= 999 (ms) [[#1308](https://github.com/falcosecurity/falco/pull/1308)]
+* fix: make the eBPF driver build work on CentOS 8 [[#1301](https://github.com/falcosecurity/falco/pull/1301)]
+* fix(userspace/falco): correct options handling for `buffered_output: false` which was not honored for the `stdout` output [[#1296](https://github.com/falcosecurity/falco/pull/1296)]
+* fix(userspace/falco): honor -M also when using a trace file [[#1245](https://github.com/falcosecurity/falco/pull/1245)]
+* fix: high CPU usage when using server streaming gRPC outputs [[#1241](https://github.com/falcosecurity/falco/pull/1241)]
+* fix: missing newline from some log messages (eg., token bucket depleted) [[#1257](https://github.com/falcosecurity/falco/pull/1257)]
+### Rule Changes
+* rule(Container Drift Detected (chmod)): disabled by default [[#1316](https://github.com/falcosecurity/falco/pull/1316)]
+* rule(Container Drift Detected (open+create)): disabled by default [[#1316](https://github.com/falcosecurity/falco/pull/1316)]
* rule(Write below etc): allow snapd to write its unit files [[#1289](https://github.com/falcosecurity/falco/pull/1289)]
* rule(macro remote_file_copy_procs): fix reference to remote_file_copy_binaries [[#1224](https://github.com/falcosecurity/falco/pull/1224)]
* rule(list allowed_k8s_users): whitelisted kube-apiserver-healthcheck user created by kops >= 1.17.0 for the kube-apiserver-healthcheck sidecar [[#1286](https://github.com/falcosecurity/falco/pull/1286)]
* rule(Change thread namespace): Allow `protokube`, `dockerd`, `tini` and `aws` binaries to change thread namespace. [[#1222](https://github.com/falcosecurity/falco/pull/1222)]
* rule(macro exe_running_docker_save): to filter out cmdlines containing `/var/run/docker`. [[#1222](https://github.com/falcosecurity/falco/pull/1222)]
* rule(macro user_known_cron_jobs): new macro to be overridden to list known cron jobs [[#1294](https://github.com/falcosecurity/falco/pull/1294)]
* rule(Schedule Cron Jobs): exclude known cron jobs [[#1294](https://github.com/falcosecurity/falco/pull/1294)]
* rule(macro user_known_update_package_registry): new macro to be overridden to list known package registry update [[#1294](https://github.com/falcosecurity/falco/pull/1294)]
* rule(Update Package Registry): exclude known package registry update [[#1294](https://github.com/falcosecurity/falco/pull/1294)]
* rule(macro user_known_read_ssh_information_activities): new macro to be overridden to list known activities that read SSH info [[#1294](https://github.com/falcosecurity/falco/pull/1294)]
* rule(Read ssh information): do not throw for activities known to read SSH info [[#1294](https://github.com/falcosecurity/falco/pull/1294)]
* rule(macro user_known_read_sensitive_files_activities): new macro to be overridden to list activities known to read sensitive files [[#1294](https://github.com/falcosecurity/falco/pull/1294)]
* rule(Read sensitive file trusted after startup): do not throw for activities known to read sensitive files [[#1294](https://github.com/falcosecurity/falco/pull/1294)]
* rule(Read sensitive file untrusted): do not throw for activities known to read sensitive files [[#1294](https://github.com/falcosecurity/falco/pull/1294)]
* rule(macro user_known_write_rpm_database_activities): new macro to be overridden to list activities known to write RPM database [[#1294](https://github.com/falcosecurity/falco/pull/1294)]
* rule(Write below rpm database): do not throw for activities known to write RPM database [[#1294](https://github.com/falcosecurity/falco/pull/1294)]
* rule(macro user_known_db_spawned_processes): new macro to be overridden to list processes known to spawn DB [[#1294](https://github.com/falcosecurity/falco/pull/1294)]
* rule(DB program spawned process): do not throw for processes known to spawn DB [[#1294](https://github.com/falcosecurity/falco/pull/1294)]
* rule(macro user_known_modify_bin_dir_activities): new macro to be overridden to list activities known to modify bin directories [[#1294](https://github.com/falcosecurity/falco/pull/1294)]
* rule(Modify binary dirs): do not throw for activities known to modify bin directories [[#1294](https://github.com/falcosecurity/falco/pull/1294)]
* rule(macro user_known_mkdir_bin_dir_activities): new macro to be overridden to list activities known to create directories below bin directories [[#1294](https://github.com/falcosecurity/falco/pull/1294)]
* rule(Mkdir binary dirs): do not throw for activities known to create directories below bin directories [[#1294](https://github.com/falcosecurity/falco/pull/1294)]
* rule(macro user_known_system_user_login): new macro to exclude known system user logins [[#1294](https://github.com/falcosecurity/falco/pull/1294)]
* rule(System user interactive): do not throw for known system user logins [[#1294](https://github.com/falcosecurity/falco/pull/1294)]
* rule(macro user_known_user_management_activities): new macro to be overridden to list known user management activities [[#1294](https://github.com/falcosecurity/falco/pull/1294)]
* rule(User mgmt binaries): do not throw for known user management activities [[#1294](https://github.com/falcosecurity/falco/pull/1294)]
* rule(macro user_known_create_files_below_dev_activities): new macro to be overridden to list activities known to create files below dev [[#1294](https://github.com/falcosecurity/falco/pull/1294)]
* rule(Create files below dev): do not throw for activities known to create files below dev [[#1294](https://github.com/falcosecurity/falco/pull/1294)]
* rule(macro user_known_contact_k8s_api_server_activities): new macro to be overridden to list activities known to contact Kubernetes API server [[#1294](https://github.com/falcosecurity/falco/pull/1294)]
* rule(Contact K8S API Server From Container): do not throw for activities known to contact Kubernetes API server [[#1294](https://github.com/falcosecurity/falco/pull/1294)]
* rule(macro user_known_network_tool_activities): new macro to be overridden to list activities known to spawn/use network tools [[#1294](https://github.com/falcosecurity/falco/pull/1294)]
* rule(Launch Suspicious Network Tool in Container): do not throw for activities known to spawn/use network tools [[#1294](https://github.com/falcosecurity/falco/pull/1294)]
* rule(macro user_known_remove_data_activities): new macro to be overridden to list activities known to perform data remove commands [[#1294](https://github.com/falcosecurity/falco/pull/1294)]
* rule(Remove Bulk Data from Disk): do not throw for activities known to perform data remove commands [[#1294](https://github.com/falcosecurity/falco/pull/1294)]
* rule(macro user_known_create_hidden_file_activities): new macro to be overridden to list activities known to create hidden files [[#1294](https://github.com/falcosecurity/falco/pull/1294)]
* rule(Create Hidden Files or Directories): do not throw for activities known to create hidden files [[#1294](https://github.com/falcosecurity/falco/pull/1294)]
* rule(macro user_known_stand_streams_redirect_activities): new macro to be overridden to list activities known to redirect stream to network connection (in containers) [[#1294](https://github.com/falcosecurity/falco/pull/1294)]
* rule(Redirect STDOUT/STDIN to Network Connection in Container): do not throw for activities known to redirect stream to network connection (in containers) [[#1294](https://github.com/falcosecurity/falco/pull/1294)]
* rule(macro user_known_container_drift_activities): new macro to be overridden to list activities known to create executables in containers [[#1294](https://github.com/falcosecurity/falco/pull/1294)]
* rule(Container Drift Detected (chmod)): do not throw for activities known to give execution permissions to files in containers [[#1294](https://github.com/falcosecurity/falco/pull/1294)]
* rule(Container Drift Detected (open+create)): do not throw for activities known to create executables in containers [[#1294](https://github.com/falcosecurity/falco/pull/1294)]
* rule(macro user_known_node_port_service): new macro to be overridden to list services known to use a NodePort service type (k8s) [[#1294](https://github.com/falcosecurity/falco/pull/1294)]
* rule(Create NodePort Service): do not throw for services known to use a NodePort service type (k8s) [[#1294](https://github.com/falcosecurity/falco/pull/1294)]
* rule(macro user_known_exec_pod_activities): do not throw for activities known to attach/exec to a pod (k8s) [[#1294](https://github.com/falcosecurity/falco/pull/1294)]
* rule(Attach/Exec Pod): do not throw for activities known to attach/exec to a pod (k8s) [[#1294](https://github.com/falcosecurity/falco/pull/1294)]
* rule(macro trusted_pod): defines trusted pods by an image list [[#1294](https://github.com/falcosecurity/falco/pull/1294)]
* rule(Pod Created in Kube Namespace): do not throw for trusted pods [[#1294](https://github.com/falcosecurity/falco/pull/1294)]
* rule(macro trusted_sa): define trusted ServiceAccount [[#1294](https://github.com/falcosecurity/falco/pull/1294)]
* rule(Service Account Created in Kube Namespace): do not throw for trusted ServiceAccount [[#1294](https://github.com/falcosecurity/falco/pull/1294)]
* rule(list network_tool_binaries): add zmap to the list [[#1284](https://github.com/falcosecurity/falco/pull/1284)]
* rule(macro root_dir): correct macro to exactly match the `/root` dir and not other with just `/root` as a prefix [[#1279](https://github.com/falcosecurity/falco/pull/1279)]
* rule(macro user_expected_terminal_shell_in_container_conditions): allow whitelisting terminals in containers under specific conditions [[#1154](https://github.com/falcosecurity/falco/pull/1154)]
* rule(macro user_known_write_below_binary_dir_activities): allow writing to a binary dir in some conditions [[#1260](https://github.com/falcosecurity/falco/pull/1260)]
* rule(macro trusted_logging_images): Add addl fluentd image [[#1230](https://github.com/falcosecurity/falco/pull/1230)]
* rule(macro trusted_logging_images): Let azure-npm image write to /var/log [[#1230](https://github.com/falcosecurity/falco/pull/1230)]
* rule(macro lvprogs_writing_conf): Add lvs as a lvm program [[#1230](https://github.com/falcosecurity/falco/pull/1230)]
* rule(macro user_known_k8s_client_container): Allow hcp-tunnelfront to run kubectl in containers [[#1230](https://github.com/falcosecurity/falco/pull/1230)]
* rule(list allowed_k8s_users): Add vertical pod autoscaler as known k8s users [[#1230](https://github.com/falcosecurity/falco/pull/1230)]
* rule(Anonymous Request Allowed): update to checking auth decision equals to allow [[#1267](https://github.com/falcosecurity/falco/pull/1267)]
* rule(Container Drift Detected (chmod)): new rule to detect if an existing file gets exec permissions in a container [[#1254](https://github.com/falcosecurity/falco/pull/1254)]
* rule(Container Drift Detected (open+create)): new rule to detect if a new file with execution permission is created in a container [[#1254](https://github.com/falcosecurity/falco/pull/1254)]
* rule(Mkdir binary dirs): correct condition in macro `bin_dir_mkdir` to catch `mkdirat` syscall [[#1250](https://github.com/falcosecurity/falco/pull/1250)]
* rule(Modify binary dirs): correct condition in macro `bin_dir_rename` to catch `rename`, `renameat`, and `unlinkat` syscalls [[#1250](https://github.com/falcosecurity/falco/pull/1250)]
* rule(Create files below dev): correct condition to catch `openat` syscall [[#1250](https://github.com/falcosecurity/falco/pull/1250)]
* rule(macro user_known_set_setuid_or_setgid_bit_conditions): create macro [[#1213](https://github.com/falcosecurity/falco/pull/1213)]
## v0.23.0
-Released on 2020-18-05
+Released on 2020-05-18
### Major Changes
@@ -46,7 +151,7 @@ Released on 2020-18-05
## v0.22.1
-Released on 2020-17-04
+Released on 2020-04-17
### Major Changes
@@ -66,7 +171,7 @@ Released on 2020-17-04
## v0.22.0
-Released on 2020-16-04
+Released on 2020-04-16
### Major Changes
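Many of the v0.24.0 rule changes listed above add `user_known_*` placeholder macros that are meant to be overridden in a local rules file. A minimal sketch of such an override, assuming a hypothetical file path and an illustrative condition:

```shell
#!/usr/bin/env bash
# Sketch: override the user_known_cron_jobs placeholder macro in a local
# rules file so the "Schedule Cron Jobs" rule stops firing for a known job.
# The path /tmp/falco_rules.local.yaml and the condition are illustrative
# assumptions, not the project's defaults.
cat > /tmp/falco_rules.local.yaml <<'EOF'
- macro: user_known_cron_jobs
  condition: (proc.cmdline startswith "crontab -l" or proc.pname = "backup.sh")
EOF
echo "override written to /tmp/falco_rules.local.yaml"
```

Loading such a file after the default ruleset (for example via an additional `-r` argument) replaces the placeholder's always-false condition with the site-specific one.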


@@ -93,7 +93,7 @@ message(STATUS "Using bundled nlohmann-json in '${NJSON_SRC}'")
set(NJSON_INCLUDE "${NJSON_SRC}/single_include")
ExternalProject_Add(
  njson
-  URL "https://s3.amazonaws.com/download.draios.com/dependencies/njson-3.3.0.tar.gz"
+  URL "https://github.com/nlohmann/json/archive/v3.3.0.tar.gz"
  URL_HASH "SHA256=2fd1d207b4669a7843296c41d3b6ac5b23d00dec48dba507ba051d14564aa801"
  CONFIGURE_COMMAND ""
  BUILD_COMMAND ""
@@ -106,14 +106,15 @@ find_package(Curses REQUIRED)
message(STATUS "Found ncurses: include: ${CURSES_INCLUDE_DIR}, lib: ${CURSES_LIBRARIES}")
# libb64
set(B64_SRC "${PROJECT_BINARY_DIR}/b64-prefix/src/b64")
message(STATUS "Using bundled b64 in '${B64_SRC}'")
set(B64_INCLUDE "${B64_SRC}/include")
set(B64_LIB "${B64_SRC}/src/libb64.a")
ExternalProject_Add(
  b64
-  URL "https://s3.amazonaws.com/download.draios.com/dependencies/libb64-1.2.src.zip"
-  URL_HASH "SHA256=343d8d61c5cbe3d3407394f16a5390c06f8ff907bd8d614c16546310b689bfd3"
+  URL "https://github.com/libb64/libb64/archive/v1.2.zip"
+  URL_HASH "SHA256=665134c2b600098a7ebd3d00b6a866cb34909a6d48e0e37a0eda226a4ad2638a"
  CONFIGURE_COMMAND ""
  BUILD_COMMAND ${CMD_MAKE}
  BUILD_IN_SOURCE 1
@@ -135,8 +136,8 @@ set(LUAJIT_INCLUDE "${LUAJIT_SRC}")
set(LUAJIT_LIB "${LUAJIT_SRC}/libluajit.a")
ExternalProject_Add(
  luajit
-  URL "https://s3.amazonaws.com/download.draios.com/dependencies/LuaJIT-2.0.3.tar.gz"
-  URL_HASH "SHA256=55be6cb2d101ed38acca32c5b1f99ae345904b365b642203194c585d27bebd79"
+  URL "https://github.com/LuaJIT/LuaJIT/archive/v2.0.3.tar.gz"
+  URL_HASH "SHA256=8da3d984495a11ba1bce9a833ba60e18b532ca0641e7d90d97fafe85ff014baa"
  CONFIGURE_COMMAND ""
  BUILD_COMMAND ${CMD_MAKE}
  BUILD_IN_SOURCE 1
@@ -151,20 +152,15 @@ list(APPEND LPEG_DEPENDENCIES "luajit")
ExternalProject_Add(
  lpeg
  DEPENDS ${LPEG_DEPENDENCIES}
-  URL "https://s3.amazonaws.com/download.draios.com/dependencies/lpeg-1.0.0.tar.gz"
-  URL_HASH "SHA256=10190ae758a22a16415429a9eb70344cf29cbda738a6962a9f94a732340abf8e"
+  URL "http://www.inf.puc-rio.br/~roberto/lpeg/lpeg-1.0.2.tar.gz"
+  URL_HASH "SHA256=48d66576051b6c78388faad09b70493093264588fcd0f258ddaab1cdd4a15ffe"
  BUILD_COMMAND LUA_INCLUDE=${LUAJIT_INCLUDE} "${PROJECT_SOURCE_DIR}/scripts/build-lpeg.sh" "${LPEG_SRC}/build"
  BUILD_IN_SOURCE 1
  CONFIGURE_COMMAND ""
  INSTALL_COMMAND "")
# libyaml
-find_library(LIBYAML_LIB NAMES libyaml.so)
-if(LIBYAML_LIB)
-  message(STATUS "Found libyaml: lib: ${LIBYAML_LIB}")
-else()
-  message(FATAL_ERROR "Couldn't find system libyaml")
-endif()
+include(libyaml)
# lyaml
set(LYAML_SRC "${PROJECT_BINARY_DIR}/lyaml-prefix/src/lyaml/ext/yaml")
@@ -175,7 +171,7 @@ list(APPEND LYAML_DEPENDENCIES "luajit")
ExternalProject_Add(
  lyaml
  DEPENDS ${LYAML_DEPENDENCIES}
-  URL "https://s3.amazonaws.com/download.draios.com/dependencies/lyaml-release-v6.0.tar.gz"
+  URL "https://github.com/gvvaughan/lyaml/archive/release-v6.0.tar.gz"
  URL_HASH "SHA256=9d7cf74d776999ff6f758c569d5202ff5da1f303c6f4229d3b41f71cd3a3e7a7"
  BUILD_COMMAND ${CMD_MAKE}
  BUILD_IN_SOURCE 1

OWNERS

@@ -3,6 +3,7 @@ approvers:
- kris-nova
- leodido
- mstemm
+- leogr
reviewers:
- fntlnz
- kaizhe
@@ -10,3 +11,4 @@ reviewers:
- leodido
- mfdii
- mstemm
+- leogr


@@ -2,7 +2,7 @@
Our release process is mostly automated, but we still need some manual steps to initiate and complete it.
Changes and new features are grouped in [milestones](https://github.com/falcosecurity/falco/milestones), the milestone with the next version represents what is going to be released.
Releases happen on a monthly cadence, towards the 16th of the on-going month, and we need to assign owners for each (usually we pair a new person with an experienced one). Assignees and the due date are proposed during the [weekly community call](https://github.com/falcosecurity/community). Note that hotfix releases can happen as soon as it is needed.
@@ -19,18 +19,19 @@ Finally, on the proposed due date the assignees for the upcoming release proceed
- Double-check that there are no more merged PRs without the target milestone assigned with the `is:pr is:merged no:milestone closed:>YYYT-MM-DD` [filters](https://github.com/falcosecurity/falco/pulls?q=is%3Apr+is%3Amerged+no%3Amilestone+closed%3A%3EYYYT-MM-DD), if any, fix them
### 2. Milestones
- Move the [tasks not completed](https://github.com/falcosecurity/falco/pulls?q=is%3Apr+is%3Aopen) to a new minor milestone
-- Close the completed milestone
### 3. Release PR
- Double-check if any hard-coded version number is present in the code, it should be not present anywhere:
- If any, manually correct it then open an issue to automate version number bumping later
- Versions table in the `README.md` update itself automatically
- Generate the change log https://github.com/leodido/rn2md, or https://fs.fntlnz.wtf/falco/milestones-changelog.txt for the lazy people (it updates every 5 minutes)
- Add the lastest changes on top the previous `CHANGELOG.md`
- Submit a PR with the above modifications
- Await PR approval
+- Close the completed milestone as soon PR is merged
## Release
@@ -52,6 +53,7 @@ Let `x.y.z` the new version.
- Wait for the CI to complete
### 2. Update the GitHub release
- [Draft a new release](https://github.com/falcosecurity/falco/releases/new)
- Use `x.y.z` both as tag version and release title
- Use the following template to fill the release description:


(binary image file changed; size 4.2 KiB both before and after)


@@ -30,14 +30,14 @@ set(CPACK_GENERATOR DEB RPM TGZ)
set(CPACK_DEBIAN_PACKAGE_SECTION "utils")
set(CPACK_DEBIAN_PACKAGE_ARCHITECTURE "amd64")
set(CPACK_DEBIAN_PACKAGE_HOMEPAGE "https://www.falco.org")
-set(CPACK_DEBIAN_PACKAGE_DEPENDS "dkms (>= 2.1.0.0), libyaml-0-2")
+set(CPACK_DEBIAN_PACKAGE_DEPENDS "dkms (>= 2.1.0.0)")
set(CPACK_DEBIAN_PACKAGE_CONTROL_EXTRA
  "${CMAKE_BINARY_DIR}/scripts/debian/postinst;${CMAKE_BINARY_DIR}/scripts/debian/prerm;${CMAKE_BINARY_DIR}/scripts/debian/postrm;${PROJECT_SOURCE_DIR}/cmake/cpack/debian/conffiles"
)
set(CPACK_RPM_PACKAGE_LICENSE "Apache v2.0")
set(CPACK_RPM_PACKAGE_URL "https://www.falco.org")
-set(CPACK_RPM_PACKAGE_REQUIRES "dkms, kernel-devel, libyaml, ncurses")
+set(CPACK_RPM_PACKAGE_REQUIRES "dkms, kernel-devel, ncurses")
set(CPACK_RPM_POST_INSTALL_SCRIPT_FILE "${CMAKE_BINARY_DIR}/scripts/rpm/postinstall")
set(CPACK_RPM_PRE_UNINSTALL_SCRIPT_FILE "${CMAKE_BINARY_DIR}/scripts/rpm/preuninstall")
set(CPACK_RPM_POST_UNINSTALL_SCRIPT_FILE "${CMAKE_BINARY_DIR}/scripts/rpm/postuninstall")


@@ -32,8 +32,8 @@ else()
ExternalProject_Add(
  openssl
  # START CHANGE for CVE-2017-3735, CVE-2017-3731, CVE-2017-3737, CVE-2017-3738, CVE-2017-3736
-  URL "https://s3.amazonaws.com/download.draios.com/dependencies/openssl-1.0.2n.tar.gz"
-  URL_HASH "SHA256=370babb75f278c39e0c50e8c4e7493bc0f18db6867478341a832a982fd15a8fe"
+  URL "https://github.com/openssl/openssl/archive/OpenSSL_1_0_2n.tar.gz"
+  URL_HASH "SHA256=4f4bc907caff1fee6ff8593729e5729891adcee412049153a3bb4db7625e8364"
  # END CHANGE for CVE-2017-3735, CVE-2017-3731, CVE-2017-3737, CVE-2017-3738, CVE-2017-3736
  CONFIGURE_COMMAND ./config shared --prefix=${OPENSSL_INSTALL_DIR}
  BUILD_COMMAND ${CMD_MAKE}


@@ -31,7 +31,7 @@ else()
  curl
  DEPENDS openssl
  # START CHANGE for CVE-2017-8816, CVE-2017-8817, CVE-2017-8818, CVE-2018-1000007
-  URL "https://s3.amazonaws.com/download.draios.com/dependencies/curl-7.61.0.tar.bz2"
+  URL "https://github.com/curl/curl/releases/download/curl-7_61_0/curl-7.61.0.tar.bz2"
  URL_HASH "SHA256=5f6f336921cf5b84de56afbd08dfb70adeef2303751ffb3e570c936c6d656c9c"
  # END CHANGE for CVE-2017-8816, CVE-2017-8817, CVE-2017-8818, CVE-2018-1000007
  CONFIGURE_COMMAND


@@ -0,0 +1,32 @@
#
# Copyright (C) 2020 The Falco Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
#
if(NOT USE_BUNDLED_DEPS)
  find_library(LIBYAML_LIB NAMES libyaml.so)
  if(LIBYAML_LIB)
    message(STATUS "Found libyaml: lib: ${LIBYAML_LIB}")
  else()
    message(FATAL_ERROR "Couldn't find system libyaml")
  endif()
else()
  set(LIBYAML_SRC "${PROJECT_BINARY_DIR}/libyaml-prefix/src/libyaml")
  message(STATUS "Using bundled libyaml in '${LIBYAML_SRC}'")
  set(LIBYAML_LIB "${LIBYAML_SRC}/src/.libs/libyaml.a")
  ExternalProject_Add(
    libyaml
    URL "https://github.com/yaml/libyaml/releases/download/0.2.5/yaml-0.2.5.tar.gz"
    URL_HASH "SHA256=c642ae9b75fee120b2d96c712538bd2cf283228d2337df2cf2988e3c02678ef4"
    CONFIGURE_COMMAND ./configure --enable-static=true --enable-shared=false
    BUILD_COMMAND ${CMD_MAKE}
    BUILD_IN_SOURCE 1
    INSTALL_COMMAND "")
endif()


@@ -26,8 +26,8 @@ file(MAKE_DIRECTORY ${SYSDIG_CMAKE_WORKING_DIR})
# To update sysdig version for the next release, change the default below
# In case you want to test against another sysdig version just pass the variable - ie., `cmake -DSYSDIG_VERSION=dev ..`
if(NOT SYSDIG_VERSION)
-  set(SYSDIG_VERSION "96bd9bc560f67742738eb7255aeb4d03046b8045")
-  set(SYSDIG_CHECKSUM "SHA256=766e8952a36a4198fd976b9d848523e6abe4336612188e4fc911e217d8e8a00d")
+  set(SYSDIG_VERSION "85c88952b018fdbce2464222c3303229f5bfcfad")
+  set(SYSDIG_CHECKSUM "SHA256=6c3f5f2d699c9540e281f50cbc5cb6b580f0fc689798bc65d4a77f57f932a71c")
endif()
set(PROBE_VERSION "${SYSDIG_VERSION}")


@@ -1,45 +0,0 @@
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: falco
  namespace: falco
  labels:
    app: falco
spec:
  selector:
    matchLabels:
      app: falco
  template:
    metadata:
      labels:
        app: falco
    spec:
      tolerations:
        - operator: Exists
      hostPID: true
      hostNetwork: true
      containers:
        - name: falco-init
          image: alpine
          imagePullPolicy: Always
          securityContext:
            privileged: true
          lifecycle:
            preStop:
              exec:
                command:
                  - "nsenter"
                  - "-t"
                  - "1"
                  - "-m"
                  - "--"
                  - "/bin/sh"
                  - "-c"
                  - |
                    #!/bin/bash
                    curl -s https://falco.org/repo/falcosecurity-3672BA8F.asc | apt-key add -
                    echo "deb https://dl.bintray.com/falcosecurity/deb stable main" | tee -a /etc/apt/sources.list.d/falcosecurity.list
                    apt-get update -y
                    apt-get -y install linux-headers-$(uname -r)
                    apt-get install -y falco
                    exit 0


@@ -1,6 +1,6 @@
#!/usr/bin/env bash
#
-# Copyright (C) 2019 The Falco Authors.
+# Copyright (C) 2020 The Falco Authors.
#
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -16,10 +16,14 @@
# limitations under the License.
#
+# todo(leogr): remove deprecation notice within a couple of releases
+if [[ ! -z "${SKIP_MODULE_LOAD}" ]]; then
+	echo "* SKIP_MODULE_LOAD is deprecated and will be removed soon, use SKIP_DRIVER_LOADER instead"
+fi
-# Set the SKIP_MODULE_LOAD variable to skip loading the kernel module
-if [[ -z "${SKIP_MODULE_LOAD}" ]]; then
+# Set the SKIP_DRIVER_LOADER variable to skip loading the driver
+if [[ -z "${SKIP_DRIVER_LOADER}" ]] && [[ -z "${SKIP_MODULE_LOAD}" ]]; then
echo "* Setting up /usr/src links from host"
for i in "$HOST_ROOT/usr/src"/*
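The hunk above keeps the deprecated `SKIP_MODULE_LOAD` variable working while warning about it. A standalone sketch of that fallback pattern, with the driver-loading body reduced to an `echo` for illustration:

```shell
#!/usr/bin/env bash
# Sketch of the deprecation-fallback pattern: honor the old SKIP_MODULE_LOAD
# variable but warn, preferring the new SKIP_DRIVER_LOADER name.
# load_driver's body is a placeholder, not the real loader logic.
load_driver() {
  if [[ -n "${SKIP_MODULE_LOAD}" ]]; then
    echo "* SKIP_MODULE_LOAD is deprecated, use SKIP_DRIVER_LOADER instead"
  fi
  if [[ -z "${SKIP_DRIVER_LOADER}" ]] && [[ -z "${SKIP_MODULE_LOAD}" ]]; then
    echo "loading driver"
  else
    echo "skipping driver load"
  fi
}

# Setting either variable skips the load; only the old name triggers a warning.
SKIP_MODULE_LOAD=1 load_driver
```

Checking both variables in the skip condition means existing deployments keep their behavior, while the warning gives them a release or two to migrate.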


@@ -1,6 +1,6 @@
#!/usr/bin/env bash
#
-# Copyright (C) 2019 The Falco Authors.
+# Copyright (C) 2020 The Falco Authors.
#
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -17,9 +17,9 @@
#
-# Set the SKIP_MODULE_LOAD variable to skip loading the kernel module
-if [[ -z "${SKIP_MODULE_LOAD}" ]]; then
+# Set the SKIP_DRIVER_LOADER variable to skip loading the driver
+if [[ -z "${SKIP_DRIVER_LOADER}" ]]; then
echo "* Setting up /usr/src links from host"
for i in "$HOST_ROOT/usr/src"/*


@@ -13,7 +13,7 @@ WORKDIR /
ADD https://bintray.com/api/ui/download/falcosecurity/${VERSION_BUCKET}/x86_64/falco-${FALCO_VERSION}-x86_64.tar.gz /
RUN apt-get update -y && \
-    apt-get install -y libyaml-0-2 binutils && \
+    apt-get install -y binutils && \
    tar -xvf falco-${FALCO_VERSION}-x86_64.tar.gz && \
    rm -f falco-${FALCO_VERSION}-x86_64.tar.gz && \
    mv falco-${FALCO_VERSION}-x86_64 falco && \
@@ -43,9 +43,6 @@ COPY --from=ubuntu /lib/x86_64-linux-gnu/libanl.so.1 \
COPY --from=ubuntu /usr/lib/x86_64-linux-gnu/libstdc++.so.6 \
    /usr/lib/x86_64-linux-gnu/libstdc++.so.6
-COPY --from=ubuntu /usr/lib/x86_64-linux-gnu/libyaml-0.so.2.0.5 \
-    /usr/lib/x86_64-linux-gnu/libyaml-0.so.2
COPY --from=ubuntu /etc/ld.so.cache \
    /etc/nsswitch.conf \
    /etc/ld.so.cache \


@@ -1,16 +1,20 @@
FROM fedora:31
LABEL name="falcosecurity/falco-tester"
-LABEL usage="docker run -v /boot:/boot:ro -v /var/run/docker.sock:/var/run/docker.sock -v $PWD/..:/source -v $PWD/build:/build -e FALCO_VERSION=<current_falco_version> --name <name> falcosecurity/falco-tester test"
+LABEL usage="docker run -v /boot:/boot:ro -v /var/run/docker.sock:/var/run/docker.sock -v $PWD/..:/source -v $PWD/build:/build --name <name> falcosecurity/falco-tester test"
LABEL maintainer="cncf-falco-dev@lists.cncf.io"
ENV FALCO_VERSION=
ENV BUILD_TYPE=release
+ADD https://github.com/fullstorydev/grpcurl/releases/download/v1.6.0/grpcurl_1.6.0_linux_x86_64.tar.gz /
RUN dnf install -y python-pip python docker findutils jq unzip && dnf clean all
ENV PATH="/root/.local/bin/:${PATH}"
RUN pip install --user avocado-framework==69.0
RUN pip install --user avocado-framework-plugin-varianter-yaml-to-mux==69.0
+RUN pip install --user watchdog==0.10.2
+RUN pip install --user pathtools==0.1.2
+RUN tar -C /usr/bin -xvf grpcurl_1.6.0_linux_x86_64.tar.gz
COPY ./root /


@@ -6,7 +6,7 @@ RUN test -n FALCO_VERSION
 ENV FALCO_VERSION ${FALCO_VERSION}
 RUN apt update -y
-RUN apt install dkms libyaml-0-2 -y
+RUN apt install dkms -y
 ADD falco-${FALCO_VERSION}-x86_64.deb /
 RUN dpkg -i /falco-${FALCO_VERSION}-x86_64.deb


@@ -6,7 +6,7 @@ RUN test -n FALCO_VERSION
 ENV FALCO_VERSION ${FALCO_VERSION}
 RUN apt update -y
-RUN apt install dkms libyaml-0-2 curl -y
+RUN apt install dkms curl -y
 ADD falco-${FALCO_VERSION}-x86_64.tar.gz /
 RUN cp -R /falco-${FALCO_VERSION}-x86_64/* /


@@ -69,7 +69,7 @@ case "$CMD" in
     # run tests
     echo "Running regression tests ..."
     cd "$SOURCE_DIR/falco/test"
-    ./run_regression_tests.sh "$BUILD_DIR/$BUILD_TYPE"
+    ./run_regression_tests.sh -d "$BUILD_DIR/$BUILD_TYPE"
     # clean docker images
     clean_image "deb"


@@ -139,7 +139,7 @@ stdout_output:
 webserver:
   enabled: true
   listen_port: 8765
-  k8s_audit_endpoint: /k8s_audit
+  k8s_audit_endpoint: /k8s-audit
   ssl_enabled: false
   ssl_certificate: /etc/falco/falco.pem
@@ -182,7 +182,8 @@ http_output:
 # grpc:
 #   enabled: true
 #   bind_address: "0.0.0.0:5060"
-#   threadiness: 8
+#   # when threadiness is 0, Falco sets it by automatically figuring out the number of online cores
+#   threadiness: 0
 #   private_key: "/etc/falco/certs/server.key"
 #   cert_chain: "/etc/falco/certs/server.crt"
 #   root_certs: "/etc/falco/certs/ca.crt"
@@ -191,7 +192,8 @@ http_output:
 grpc:
   enabled: false
   bind_address: "unix:///var/run/falco.sock"
-  threadiness: 8
+  # when threadiness is 0, Falco automatically guesses it depending on the number of online cores
+  threadiness: 0
 
 # gRPC output service.
 # By default it is off.
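The hunks above change the default `threadiness` from `8` to `0`, where `0` means Falco sizes the gRPC thread pool from the number of online cores. A minimal sketch of that resolution logic (a hypothetical helper for illustration, not Falco's actual C++ implementation):

```python
import os

def resolve_threadiness(configured: int) -> int:
    """Return the effective gRPC server thread count.

    A configured value of 0 means "auto": fall back to the number of
    online cores, as the new falco.yaml comments describe.
    """
    if configured > 0:
        return configured
    # sched_getaffinity reflects the cores this process may actually use;
    # os.cpu_count() is the portable fallback on platforms without it.
    try:
        return len(os.sched_getaffinity(0))
    except AttributeError:
        return os.cpu_count() or 1

print(resolve_threadiness(8))  # an explicit value always wins
print(resolve_threadiness(0))  # auto: whatever core count this host reports
```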


@@ -1,4 +1,4 @@
-# gRPC Falco Output
+# Falco gRPC Outputs
 
 <!-- toc -->
@@ -25,7 +25,7 @@ An alert is an "output" when it goes over a transport, and it is emitted by Falc
 At the current moment, however, Falco can deliver alerts in a very basic way, for example by dumping them to standard output.
 
-For this reason, many Falco users asked, with issues - eg., [falco#528](https://github.com/falcosecurity/falco/issues/528) - or in the [slack channel](https://sysdig.slack.com) if we can find a more consumable way to implement Falco outputs in an extensible way.
+For this reason, many Falco users asked, with issues - eg., [falco#528](https://github.com/falcosecurity/falco/issues/528) - or in the [slack channel](https://slack.k8s.io) if we can find a more consumable way to implement Falco outputs in an extensible way.
 
 The motivation behind this proposal is to design a new output implementation that can meet our user's needs.
@@ -39,7 +39,10 @@ The motivation behind this proposal is to design a new output implementation tha
 - To continue supporting the old output formats by implementing their same interface
 - To be secure by default (**mutual TLS** authentication)
 - To be **asynchronous** and **non-blocking**
-- To implement a Go SDK
+- To provide a connection over unix socket (no authentication)
+- To implement a Go client
+- To implement a Rust client
+- To implement a Python client
 
 ### Non-Goals
@@ -77,26 +80,25 @@ syntax = "proto3";
 import "google/protobuf/timestamp.proto";
 import "schema.proto";
 
-package falco.output;
+package falco.outputs;
 
-option go_package = "github.com/falcosecurity/client-go/pkg/api/output";
+option go_package = "github.com/falcosecurity/client-go/pkg/api/outputs";
 
-// The `subscribe` service defines the RPC call
-// to perform an output `request` which will lead to obtain an output `response`.
+// This service defines the RPC methods
+// to `request` a stream of output `response`s.
 service service {
-    rpc subscribe(request) returns (stream response);
+    // Subscribe to a stream of Falco outputs by sending a stream of requests.
+    rpc sub(stream request) returns (stream response);
+    // Get all the Falco outputs present in the system up to this call.
+    rpc get(request) returns (stream response);
 }
 
 // The `request` message is the logical representation of the request model.
-// It is the input of the `subscribe` service.
-// It is used to configure the kind of subscription to the gRPC streaming server.
+// It is the input of the `output.service` service.
 message request {
-    bool keepalive = 1;
-    // string duration = 2; // TODO(leodido, fntlnz): not handled yet but keeping for reference.
-    // repeated string tags = 3; // TODO(leodido, fntlnz): not handled yet but keeping for reference.
 }
 
-// The `response` message is the logical representation of the output model.
+// The `response` message is the representation of the output model.
 // It contains all the elements that Falco emits in an output along with the
 // definitions for priorities and source.
 message response {
@@ -106,7 +108,7 @@ message response {
     string rule = 4;
     string output = 5;
     map<string, string> output_fields = 6;
-    // repeated string tags = 7; // TODO(leodido,fntlnz): tags not supported yet, keeping for reference
+    string hostname = 7;
 }
 ```
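The reworked proto above splits the old `subscribe` RPC into two methods: `sub`, which follows the stream as new outputs arrive, and `get`, which returns only the outputs accumulated up to the call. A toy in-memory model of those two semantics (illustrative only; the real service streams protobuf `response` messages over gRPC, and these class and method names are invented for the sketch):

```python
from collections import deque

class OutputsBuffer:
    """Toy model of the `get` vs `sub` semantics of the outputs service."""

    def __init__(self):
        self._events = deque()

    def emit(self, event: str):
        self._events.append(event)

    def get(self):
        # 'get' semantics: everything present in the system up to this call.
        return list(self._events)

    def sub(self):
        # 'sub' semantics: a stream that keeps draining events as they come.
        while self._events:
            yield self._events.popleft()

buf = OutputsBuffer()
buf.emit("Warning An open was seen")
buf.emit("Notice Attach/Exec to pod")
print(buf.get())        # snapshot: events stay buffered
print(list(buf.sub()))  # stream: consuming it drains the buffer
```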

File diff suppressed because it is too large.


@@ -44,9 +44,18 @@
   items: ["vpa-recommender", "vpa-updater"]
 
 - list: allowed_k8s_users
-  items: [
-    "minikube", "minikube-user", "kubelet", "kops", "admin", "kube", "kube-proxy",
-    vertical_pod_autoscaler_users,
-    ]
+  items:
+    [
+      "minikube",
+      "minikube-user",
+      "kubelet",
+      "kops",
+      "admin",
+      "kube",
+      "kube-proxy",
+      "kube-apiserver-healthcheck",
+      "kubernetes-admin",
+      vertical_pod_autoscaler_users,
+    ]
 
 - rule: Disallowed K8s User
@@ -114,6 +123,7 @@
 - macro: health_endpoint
   condition: ka.uri=/healthz
 
+# requires FALCO_ENGINE_VERSION 5
 - rule: Create Disallowed Pod
   desc: >
     Detect an attempt to start a pod with a container image outside of a list of allowed images.
@@ -123,6 +133,7 @@
   source: k8s_audit
   tags: [k8s]
 
+# requires FALCO_ENGINE_VERSION 5
 - rule: Create Privileged Pod
   desc: >
     Detect an attempt to start a pod with a privileged container
@@ -135,7 +146,8 @@
 - macro: sensitive_vol_mount
   condition: >
     (ka.req.pod.volumes.hostpath intersects (/proc, /var/run/docker.sock, /, /etc, /root, /var/run/crio/crio.sock, /home/admin, /var/lib/kubelet, /var/lib/kubelet/pki, /etc/kubernetes, /etc/kubernetes/manifests))
 
+# requires FALCO_ENGINE_VERSION 5
 - rule: Create Sensitive Mount Pod
   desc: >
     Detect an attempt to start a pod with a volume from a sensitive host directory (i.e. /proc).
@@ -147,6 +159,7 @@
   tags: [k8s]
 
 # Corresponds to K8s CIS Benchmark 1.7.4
+# requires FALCO_ENGINE_VERSION 5
 - rule: Create HostNetwork Pod
   desc: Detect an attempt to start a pod using the host network.
   condition: kevt and pod and kcreate and ka.req.pod.host_network intersects (true) and not ka.req.pod.containers.image.repository in (falco_hostnetwork_images)
@@ -155,10 +168,13 @@
   source: k8s_audit
   tags: [k8s]
 
+- macro: user_known_node_port_service
+  condition: (k8s_audit_never_true)
+
 - rule: Create NodePort Service
   desc: >
     Detect an attempt to start a service with a NodePort service type
-  condition: kevt and service and kcreate and ka.req.service.type=NodePort
+  condition: kevt and service and kcreate and ka.req.service.type=NodePort and not user_known_node_port_service
   output: NodePort Service Created (user=%ka.user.name service=%ka.target.name ns=%ka.target.namespace ports=%ka.req.service.ports)
   priority: WARNING
   source: k8s_audit
@@ -175,7 +191,7 @@
 - rule: Create/Modify Configmap With Private Credentials
   desc: >
     Detect creating/modifying a configmap containing a private credential (aws key, password, etc.)
   condition: kevt and configmap and kmodify and contains_private_credentials
   output: K8s configmap with private credential (user=%ka.user.name verb=%ka.verb configmap=%ka.req.configmap.name config=%ka.req.configmap.obj)
   priority: WARNING
@@ -186,7 +202,7 @@
 - rule: Anonymous Request Allowed
   desc: >
     Detect any request made by the anonymous user that was allowed
-  condition: kevt and ka.user.name=system:anonymous and ka.auth.decision!=reject and not health_endpoint
+  condition: kevt and ka.user.name=system:anonymous and ka.auth.decision="allow" and not health_endpoint
   output: Request by anonymous user allowed (user=%ka.user.name verb=%ka.verb uri=%ka.uri reason=%ka.auth.reason))
   priority: WARNING
   source: k8s_audit
@@ -201,10 +217,13 @@
 # attach request was created privileged or not. For now, we have a
 # less severe rule that detects attaches/execs to any pod.
 
+- macro: user_known_exec_pod_activities
+  condition: (k8s_audit_never_true)
+
 - rule: Attach/Exec Pod
   desc: >
     Detect any attempt to attach/exec to a pod
-  condition: kevt_started and pod_subresource and kcreate and ka.target.subresource in (exec,attach)
+  condition: kevt_started and pod_subresource and kcreate and ka.target.subresource in (exec,attach) and not user_known_exec_pod_activities
   output: Attach/Exec to pod (user=%ka.user.name pod=%ka.target.name ns=%ka.target.namespace action=%ka.target.subresource command=%ka.uri.param[command])
   priority: NOTICE
   source: k8s_audit
@@ -222,19 +241,32 @@
   source: k8s_audit
   tags: [k8s]
 
+- list: user_trusted_image_list
+  items: []
+
+- macro: trusted_pod
+  condition: (ka.req.pod.containers.image.repository in (user_trusted_image_list))
+
 # Detect any new pod created in the kube-system namespace
+# requires FALCO_ENGINE_VERSION 5
 - rule: Pod Created in Kube Namespace
   desc: Detect any attempt to create a pod in the kube-system or kube-public namespaces
-  condition: kevt and pod and kcreate and ka.target.namespace in (kube-system, kube-public)
+  condition: kevt and pod and kcreate and ka.target.namespace in (kube-system, kube-public) and not trusted_pod
   output: Pod created in kube namespace (user=%ka.user.name pod=%ka.resp.name ns=%ka.target.namespace images=%ka.req.pod.containers.image)
   priority: WARNING
   source: k8s_audit
   tags: [k8s]
 
+- list: user_known_sa_list
+  items: []
+
+- macro: trusted_sa
+  condition: (ka.target.name in (user_known_sa_list))
+
 # Detect creating a service account in the kube-system/kube-public namespace
 - rule: Service Account Created in Kube Namespace
   desc: Detect any attempt to create a serviceaccount in the kube-system or kube-public namespaces
-  condition: kevt and serviceaccount and kcreate and ka.target.namespace in (kube-system, kube-public) and response_successful
+  condition: kevt and serviceaccount and kcreate and ka.target.namespace in (kube-system, kube-public) and response_successful and not trusted_sa
   output: Service account created in kube namespace (user=%ka.user.name serviceaccount=%ka.target.name ns=%ka.target.namespace)
   priority: WARNING
   source: k8s_audit
@@ -261,6 +293,7 @@
   source: k8s_audit
   tags: [k8s]
 
+# requires FALCO_ENGINE_VERSION 5
 - rule: ClusterRole With Wildcard Created
   desc: Detect any attempt to create a Role/ClusterRole with wildcard resources or verbs
   condition: kevt and (role or clusterrole) and kcreate and (ka.req.role.rules.resources intersects ("*") or ka.req.role.rules.verbs intersects ("*"))
@@ -273,6 +306,7 @@
   condition: >
     (ka.req.role.rules.verbs intersects (create, update, patch, delete, deletecollection))
 
+# requires FALCO_ENGINE_VERSION 5
 - rule: ClusterRole With Write Privileges Created
   desc: Detect any attempt to create a Role/ClusterRole that can perform write-related actions
   condition: kevt and (role or clusterrole) and kcreate and writable_verbs
@@ -281,6 +315,7 @@
   source: k8s_audit
   tags: [k8s]
 
+# requires FALCO_ENGINE_VERSION 5
 - rule: ClusterRole With Pod Exec Created
   desc: Detect any attempt to create a Role/ClusterRole that can exec to pods
   condition: kevt and (role or clusterrole) and kcreate and ka.req.role.rules.resources intersects ("pods/exec")
@@ -444,20 +479,26 @@
   source: k8s_audit
   tags: [k8s]
 
 # This macro disables following rule, change to k8s_audit_never_true to enable it
 - macro: allowed_full_admin_users
   condition: (k8s_audit_always_true)
 
 # This list includes some of the default user names for an administrator in several K8s installations
 - list: full_admin_k8s_users
-  items: ["admin", "kubernetes-admin", "kubernetes-admin@kubernetes", "kubernetes-admin@cluster.local", "minikube-user"]
+  items:
+    [
+      "admin",
+      "kubernetes-admin",
+      "kubernetes-admin@kubernetes",
+      "kubernetes-admin@cluster.local",
+      "minikube-user",
+    ]
 
 # This rules detect an operation triggered by an user name that is
 # included in the list of those that are default administrators upon
 # cluster creation. This may signify a permission setting too broader.
 # As we can't check for role of the user on a general ka.* event, this
 # may or may not be an administrator. Customize the full_admin_k8s_users
 # list to your needs, and activate at your discrection.
 # # How to test:
@@ -476,8 +517,6 @@
   source: k8s_audit
   tags: [k8s]
 
 - macro: ingress
   condition: ka.target.resource=ingresses
@@ -509,12 +548,10 @@
   output: >
     K8s Ingress Without TLS Cert Created (user=%ka.user.name ingress=%ka.target.name
     namespace=%ka.target.namespace)
   source: k8s_audit
   priority: WARNING
   tags: [k8s, network]
 
 - macro: node
   condition: ka.target.resource=nodes
@@ -557,4 +594,3 @@
   priority: WARNING
   source: k8s_audit
   tags: [k8s]
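Several of the hunks above follow the same pattern: a rule condition gains an `and not user_known_...` (or `and not trusted_...`) clause whose macro defaults to `(k8s_audit_never_true)` or an empty list, so the rule fires exactly as before until a user overrides the macro with their own exceptions. A minimal boolean sketch of that evaluation (hypothetical names, not the Falco rules engine):

```python
def rule_fires(base_condition: bool, user_known: bool = False) -> bool:
    # The default macro is "never true", so the appended
    # "and not user_known_..." clause is a no-op out of the box.
    return base_condition and not user_known

# Stock rules: behavior is unchanged by the new clause.
print(rule_fires(True))                    # True
# A user overrides the macro to whitelist known-good activity.
print(rule_fires(True, user_known=True))   # False
```

This is why the change is backward compatible: shipping `k8s_audit_never_true` as the default keeps every existing detection intact while giving users a single override point.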


@@ -473,9 +473,8 @@ else
     FALCO_DRIVER_CURL_OPTIONS=-fsS
 fi
 
-MAX_RMMOD_WAIT=60
-if [[ $# -ge 1 ]]; then
-    MAX_RMMOD_WAIT=$1
+if [[ -z "$MAX_RMMOD_WAIT" ]]; then
+    MAX_RMMOD_WAIT=60
 fi
 
 DRIVER_VERSION="@PROBE_VERSION@"
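The fix above stops reading `MAX_RMMOD_WAIT` from `$1` (which clashed with the script's positional argument) and honors an environment variable with a default instead. The same pattern, shown in Python for comparison (illustrative only; the real script is bash):

```python
import os

def max_rmmod_wait(default: int = 60) -> int:
    """Read MAX_RMMOD_WAIT from the environment with a fallback default,
    mirroring the `if [[ -z "$MAX_RMMOD_WAIT" ]]` guard in the script."""
    raw = os.environ.get("MAX_RMMOD_WAIT", "")
    return int(raw) if raw.isdigit() else default

os.environ.pop("MAX_RMMOD_WAIT", None)
print(max_rmmod_wait())   # falls back to the default: 60
os.environ["MAX_RMMOD_WAIT"] = "10"
print(max_rmmod_wait())   # environment override wins: 10
```

Keeping configuration in the environment leaves the positional arguments free for the script's own subcommands, which is exactly what the conflicting `$1` usage broke.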


@@ -1 +1,4 @@
 add_subdirectory(trace_files)
+
+add_custom_target(test-trace-files ALL)
+add_dependencies(test-trace-files trace-files-base-scap trace-files-psp trace-files-k8s-audit)


@@ -7,13 +7,25 @@ You can find instructions on how to run this test suite on the Falco website [he
 ## Test suites
 
 - [falco_tests](./falco_tests.yaml)
-- [falco_traces](./falco_traces.yaml)
+- [falco_traces](./falco_traces.yaml.in)
 - [falco_tests_package](./falco_tests_package.yaml)
 - [falco_k8s_audit_tests](./falco_k8s_audit_tests.yaml)
 - [falco_tests_psp](./falco_tests_psp.yaml)
 
 ## Running locally
 
+This step assumes you already built Falco.
+Note that the tests are intended to be run against a [release build](https://falco.org/docs/source/#specify-the-build-type) of Falco, at the moment.
+
+Also, it assumes you prepared [falco_traces](#falco_traces) (see the section below) and you already run the following command from the build directory:
+
+```console
+make test-trace-files
+```
+
+It prepares the fixtures (`json` and `scap` files) needed by the integration tests.
+
 Using `virtualenv` the steps to locally run a specific test suite are the following ones (from this directory):
 
 ```console
@@ -32,8 +44,72 @@ In case you want to only execute a specific test case, use the `--mux-filter-onl
 BUILD_DIR="../build" avocado run --mux-yaml falco_tests.yaml --job-results-dir /tmp/job-results --mux-filter-only /run/trace_files/program_output -- falco_test.py
 ```
 
-To obtain the path of all the available variants, execute:
+To obtain the path of all the available variants for a given test suite, execute:
 
 ```console
-avocado variants --mux-yaml falco_test.yaml
+avocado variants --mux-yaml falco_tests.yaml
 ```
 
+### falco_traces
+
+The `falco_traces.yaml` test suite gets generated through the `falco_traces.yaml.in` file and some fixtures (`scap` files) downloaded from the web at execution time.
+
+1. Ensure you have `unzip` and `xargs` utilities
+2. Prepare the test suite with the following command:
+
+```console
+bash run_regression_tests.sh -p -v
+```
+
+### falco_tests_package
+
+The `falco_tests_package.yaml` test suite requires some additional setup steps to be succesfully run on your local machine.
+In particular, it requires some runners (ie., docker images) to be already built and present into your local machine.
+
+1. Ensure you have `docker` up and running
+2. Ensure you build Falco (with bundled deps)
+
+The recommended way of doing it by running the `falcosecurity/falco-builder` docker image from the project root:
+
+```console
+docker run -v $PWD/..:/source -v $PWD/mybuild:/build falcosecurity/falco-builder cmake
+docker run -v $PWD/..:/source -v $PWD/mybuild:/build falcosecurity/falco-builder falco
+```

+3. Ensure you build the Falco packages from the Falco above:
+
+```console
+docker run -v $PWD/..:/source -v $PWD/mybuild:/build falcosecurity/falco-builder package
+```
+
+4. Ensure you build the runners:
+
+```console
+FALCO_VERSION=$(./mybuild/release/userspace/falco/falco --version | head -n 1 | cut -d' ' -f3 | tr -d '\r')
+mkdir -p /tmp/runners-rootfs
+cp -R ./test/rules /tmp/runners-rootfs
+cp -R ./test/trace_files /tmp/runners-rootfs
+cp ./mybuild/release/falco-${FALCO_VERSION}-x86_64.{deb,rpm,tar.gz} /tmp/runners-rootfs
+docker build -f docker/tester/root/runners/deb.Dockerfile --build-arg FALCO_VERSION=${FALCO_VERSION} -t falcosecurity/falco:test-deb /tmp/runners-rootfs
+docker build -f docker/tester/root/runners/rpm.Dockerfile --build-arg FALCO_VERSION=${FALCO_VERSION} -t falcosecurity/falco:test-rpm /tmp/runners-rootfs
+docker build -f docker/tester/root/runners/tar.gz.Dockerfile --build-arg FALCO_VERSION=${FALCO_VERSION} -t falcosecurity/falco:test-tar.gz /tmp/runners-rootfs
+```
+
+5. Run the `falco_tests_package.yaml` test suite from the `test` directory
+
+```console
+cd test
+BUILD_DIR="../mybuild" avocado run --mux-yaml falco_tests_package.yaml --job-results-dir /tmp/job-results -- falco_test.py
+```
+
+### Execute all the test suites
+
+In case you want to run all the test suites at once, you can directly use the `run_regression_tests.sh` runner script.
+
+```console
+cd test
+./run_regression_tests.sh -v
+```
+
+Just make sure you followed all the previous setup steps.


@@ -0,0 +1,38 @@
+#
+# Copyright (C) 2020 The Falco Authors.
+#
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+# Whether to output events in json or text.
+json_output: false
+
+# Send information logs to stderr and/or syslog
+# Note these are *not* security notification logs!
+# These are just Falco lifecycle (and possibly error) logs.
+log_stderr: false
+log_syslog: false
+
+# Where security notifications should go.
+stdout_output:
+  enabled: false
+
+# gRPC server using an unix socket.
+grpc:
+  enabled: true
+  bind_address: "unix:///tmp/falco/falco.sock"
+  threadiness: 8
+
+grpc_output:
+  enabled: true


@@ -136,7 +136,7 @@ stdout_output:
 webserver:
   enabled: true
   listen_port: 8765
-  k8s_audit_endpoint: /k8s_audit
+  k8s_audit_endpoint: /k8s-audit
   ssl_enabled: false
   ssl_certificate: /etc/falco/falco.pem


@@ -20,17 +20,17 @@ set -euo pipefail
 BUILD_DIR=$1
 
 SCRIPT=$(readlink -f $0)
-SCRIPTDIR=$(dirname $SCRIPT)
+SCRIPTDIR=$(dirname "$SCRIPT")
 RUNNERDIR="${SCRIPTDIR}/runner"
 
 FALCO_VERSION=$(cat ${BUILD_DIR}/userspace/falco/config_falco.h | grep 'FALCO_VERSION ' | cut -d' ' -f3 | sed -e 's/^"//' -e 's/"$//')
 DRIVER_VERSION=$(cat ${BUILD_DIR}/userspace/falco/config_falco.h | grep 'DRIVER_VERSION ' | cut -d' ' -f3 | sed -e 's/^"//' -e 's/"$//')
 FALCO_PACKAGE="falco-${FALCO_VERSION}-x86_64.tar.gz"
 
 cp "${BUILD_DIR}/${FALCO_PACKAGE}" "${RUNNERDIR}"
 
-pushd ${RUNNERDIR}
+pushd "${RUNNERDIR}"
 docker build --build-arg FALCO_VERSION="$FALCO_VERSION" \
     -t falcosecurity/falco:test-driver-loader \
-    -f "${RUNNERDIR}/Dockerfile" ${RUNNERDIR}
+    -f "${RUNNERDIR}/Dockerfile" "${RUNNERDIR}"
 popd
 
 rm -f "${RUNNERDIR}/${FALCO_PACKAGE}"


@@ -10,7 +10,6 @@ ENV HOST_ROOT=/host
 RUN apt-get update -y
 RUN apt-get install -y --no-install-recommends \
     ca-certificates \
-    libyaml-0-2 \
     dkms \
     curl \
     gcc \


@@ -28,6 +28,8 @@ import urllib.request
 from avocado import Test
 from avocado import main
 from avocado.utils import process
+from watchdog.observers import Observer
+from watchdog.events import PatternMatchingEventHandler
 
 class FalcoTest(Test):
@@ -195,6 +197,24 @@ class FalcoTest(Test):
             os.makedirs(filedir)
 
         self.outputs = outputs
+        self.grpcurl_res = None
+        self.grpc_observer = None
+        self.grpc_address = self.params.get('address', 'grpc/*', default='/var/run/falco.sock')
+        if self.grpc_address.startswith("unix://"):
+            self.is_grpc_using_unix_socket = True
+            self.grpc_address = self.grpc_address[len("unix://"):]
+        else:
+            self.is_grpc_using_unix_socket = False
+        self.grpc_proto = self.params.get('proto', 'grpc/*', default='')
+        self.grpc_service = self.params.get('service', 'grpc/*', default='')
+        self.grpc_method = self.params.get('method', 'grpc/*', default='')
+        self.grpc_results = self.params.get('results', 'grpc/*', default='')
+        if self.grpc_results == '':
+            self.grpc_results = []
+        else:
+            if type(self.grpc_results) == str:
+                self.grpc_results = [self.grpc_results]
 
         self.disable_tags = self.params.get('disable_tags', '*', default='')
         if self.disable_tags == '':
@@ -417,6 +437,48 @@ class FalcoTest(Test):
             self.log.debug("Copying {} to {}".format(driver_path, module_path))
             shutil.copyfile(driver_path, module_path)
 
+    def init_grpc_handler(self):
+        self.grpcurl_res = None
+        if len(self.grpc_results) > 0:
+            if not self.is_grpc_using_unix_socket:
+                self.fail("This test suite supports gRPC with unix socket only")
+
+            cmdline = "grpcurl -import-path ../userspace/falco " \
+                "-proto {} -plaintext -unix {} " \
+                "{}/{}".format(self.grpc_proto, self.grpc_address, self.grpc_service, self.grpc_method)
+            that = self
+
+            class GRPCUnixSocketEventHandler(PatternMatchingEventHandler):
+                def on_created(self, event):
+                    # that.log.info("EVENT: {}", event)
+                    that.grpcurl_res = process.run(cmdline)
+
+            path = os.path.dirname(self.grpc_address)
+            process.run("mkdir -p {}".format(path))
+            event_handler = GRPCUnixSocketEventHandler(patterns=['*'],
+                                                       ignore_directories=True)
+            self.grpc_observer = Observer()
+            self.grpc_observer.schedule(event_handler, path, recursive=False)
+            self.grpc_observer.start()
+
+    def check_grpc(self):
+        if self.grpc_observer is not None:
+            self.grpc_observer.stop()
+            self.grpc_observer = None
+            if self.grpcurl_res is None:
+                self.fail("gRPC responses not found")
+
+            for exp_result in self.grpc_results:
+                found = False
+                for line in self.grpcurl_res.stdout.decode("utf-8").splitlines():
+                    match = re.search(exp_result, line)
+                    if match is not None:
+                        found = True
+                if found == False:
+                    self.fail("Could not find a line '{}' in gRPC responses".format(exp_result))
+
     def test(self):
         self.log.info("Trace file %s", self.trace_file)
@@ -424,6 +486,8 @@ class FalcoTest(Test):
 
         self.possibly_copy_driver()
 
+        self.init_grpc_handler()
+
         if self.package != 'None':
             # This sets falco_binary_path as a side-effect.
             self.install_package()
@@ -526,6 +590,7 @@ class FalcoTest(Test):
         self.check_detections_by_rule(res)
         self.check_json_output(res)
         self.check_outputs()
+        self.check_grpc()
         pass


@@ -672,6 +672,22 @@ trace_files: !mux
  outputs:
    - /tmp/falco_outputs/program_output.txt: Warning An open was seen
grpc_unix_socket_outputs:
  detect: True
  detect_level: WARNING
  rules_file:
    - rules/single_rule.yaml
  conf_file: confs/grpc_unix_socket.yaml
  trace_file: trace_files/cat_write.scap
  run_duration: 5
  grpc:
    address: unix:///tmp/falco/falco.sock
    proto: outputs.proto
    service: falco.outputs.service
    method: get
  results:
    - "Warning An open was seen"
detect_counts:
  detect: True
  detect_level: WARNING


@@ -1,5 +1,5 @@
#
-# Copyright (C) 2019 The Falco Authors.
+# Copyright (C) 2020 The Falco Authors.
#
#
# Licensed under the Apache License, Version 2.0 (the "License");


@@ -3,9 +3,11 @@ avocado-framework-plugin-varianter-yaml-to-mux==69.0
certifi==2020.4.5.1
chardet==3.0.4
idna==2.9
pathtools==0.1.2
pbr==5.4.5
PyYAML==5.3.1
requests==2.23.0
six==1.14.0
stevedore==1.32.0
urllib3==1.25.9
watchdog==0.10.2


@@ -1,3 +1,2 @@
- macro: allowed_k8s_containers
  condition: (ka.req.pod.containers.image.repository in (nginx))


@@ -1,6 +1,6 @@
#!/usr/bin/env bash
#
-# Copyright (C) 2019 The Falco Authors.
+# Copyright (C) 2020 The Falco Authors.
#
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -18,45 +18,46 @@
set -euo pipefail

SCRIPT=$(readlink -f $0)
-SCRIPTDIR=$(dirname $SCRIPT)
+SCRIPTDIR=$(dirname "$SCRIPT")
-BUILD_DIR=$1
-BRANCH=${2:-none}
-TRACE_DIR=$BUILD_DIR/test
-mkdir -p $TRACE_DIR

function download_trace_files() {
-	echo "branch=$BRANCH"
	for TRACE in traces-positive traces-negative traces-info ; do
-		if [ ! -e $TRACE_DIR/$TRACE ]; then
+		if [ ! -e "$TRACE_DIR/$TRACE" ]; then
-			if [ $BRANCH != "none" ]; then
+			if [ "$OPT_BRANCH" != "none" ]; then
-				curl -fso $TRACE_DIR/$TRACE.zip https://s3.amazonaws.com/download.draios.com/falco-tests/$TRACE-$BRANCH.zip
+				curl -fso "$TRACE_DIR/$TRACE.zip" https://s3.amazonaws.com/download.draios.com/falco-tests/$TRACE-$OPT_BRANCH.zip
			else
-				curl -fso $TRACE_DIR/$TRACE.zip https://s3.amazonaws.com/download.draios.com/falco-tests/$TRACE.zip
+				curl -fso "$TRACE_DIR/$TRACE.zip" https://s3.amazonaws.com/download.draios.com/falco-tests/$TRACE.zip
			fi
-			unzip -d $TRACE_DIR $TRACE_DIR/$TRACE.zip
-			rm -rf $TRACE_DIR/$TRACE.zip
+			unzip -d "$TRACE_DIR" "$TRACE_DIR/$TRACE.zip"
+			rm -rf "$TRACE_DIR/$TRACE.zip"
-		fi
+		else
+			if ${OPT_VERBOSE}; then
+				echo "Trace directory $TRACE_DIR/$TRACE already exist: skipping"
+			fi
+		fi
	done
}

function prepare_multiplex_fileset() {
	dir=$1
	detect=$2

-	for trace in $TRACE_DIR/$dir/*.scap ; do
+	for trace in "$TRACE_DIR/$dir"/*.scap ; do
		[ -e "$trace" ] || continue
-		NAME=`basename $trace .scap`
+		NAME=$(basename "$trace" .scap)

-		# falco_traces.yaml might already have an entry for this trace
-		# file, with specific detection levels and counts. If so, skip
-		# it. Otherwise, add a generic entry showing whether or not to
-		# detect anything.
-		grep -q "$NAME:" $SCRIPTDIR/falco_traces.yaml && continue
+		# falco_traces.yaml might already have an entry for this trace file, with specific detection levels and counts.
+		# If so, skip it.
+		# Otherwise, add a generic entry showing whether or not to detect anything.
+		if grep -q "$NAME:" "$SCRIPTDIR/falco_traces.yaml"; then
+			if ${OPT_VERBOSE}; then
+				echo "Entry $NAME already exist: skipping"
+			fi
+			continue
+		fi

-		cat << EOF >> $SCRIPTDIR/falco_traces.yaml
+		cat << EOF >> "$SCRIPTDIR/falco_traces.yaml"
$NAME:
detect: $detect
detect_level: WARNING
@@ -66,41 +67,96 @@ EOF
}

function prepare_multiplex_file() {
-	cp $SCRIPTDIR/falco_traces.yaml.in $SCRIPTDIR/falco_traces.yaml
+	/bin/cp -f "$SCRIPTDIR/falco_traces.yaml.in" "$SCRIPTDIR/falco_traces.yaml"

	prepare_multiplex_fileset traces-positive True
	prepare_multiplex_fileset traces-negative False
	prepare_multiplex_fileset traces-info True

-	echo "Contents of $SCRIPTDIR/falco_traces.yaml:"
-	cat $SCRIPTDIR/falco_traces.yaml
+	if ${OPT_VERBOSE}; then
+		echo "Contents of $SCRIPTDIR/falco_traces.yaml"
+		cat "$SCRIPTDIR/falco_traces.yaml"
+	fi
}

function print_test_failure_details() {
	echo "Showing full job logs for any tests that failed:"
-	jq '.tests[] | select(.status != "PASS") | .logfile' $SCRIPTDIR/job-results/latest/results.json | xargs cat
+	jq '.tests[] | select(.status != "PASS") | .logfile' "$SCRIPTDIR/job-results/latest/results.json" | xargs cat
}

function run_tests() {
	rm -rf /tmp/falco_outputs
	mkdir /tmp/falco_outputs
-	# If we got this far, we can undo set -e, as we're watching the
-	# return status when running avocado.
+	# If we got this far, we can undo set -e,
+	# as we're watching the return status when running avocado.
	set +e
	TEST_RC=0
	for mult in $SCRIPTDIR/falco_traces.yaml $SCRIPTDIR/falco_tests.yaml $SCRIPTDIR/falco_tests_package.yaml $SCRIPTDIR/falco_k8s_audit_tests.yaml $SCRIPTDIR/falco_tests_psp.yaml; do
		CMD="avocado run --mux-yaml $mult --job-results-dir $SCRIPTDIR/job-results -- $SCRIPTDIR/falco_test.py"
-		echo "Running: $CMD"
-		BUILD_DIR=${BUILD_DIR} $CMD
+		echo "Running $CMD"
+		BUILD_DIR=${OPT_BUILD_DIR} $CMD
		RC=$?
-		TEST_RC=$((TEST_RC+$RC))
+		TEST_RC=$((TEST_RC+RC))
		if [ $RC -ne 0 ]; then
			print_test_failure_details
		fi
	done
}
OPT_ONLY_PREPARE="false"
OPT_VERBOSE="false"
OPT_BUILD_DIR="$(dirname "$SCRIPTDIR")/build"
OPT_BRANCH="none"
while getopts ':p :h :v :b: :d:' 'OPTKEY'; do
case ${OPTKEY} in
'p')
OPT_ONLY_PREPARE="true"
;;
'h')
/bin/bash usage
exit 0
;;
'v')
OPT_VERBOSE="true"
;;
'd')
OPT_BUILD_DIR=${OPTARG}
;;
'b')
OPT_BRANCH=${OPTARG}
;;
'?')
echo "Invalid option: ${OPTARG}." >&2
/bin/bash usage
exit 1
;;
':')
echo "Missing argument for option: ${OPTARG}." >&2
/bin/bash usage
exit 1
;;
*)
echo "Unimplemented option: ${OPTKEY}." >&2
/bin/bash usage
exit 1
;;
esac
done
TRACE_DIR=$OPT_BUILD_DIR/test
if ${OPT_VERBOSE}; then
echo "Build directory = $OPT_BUILD_DIR"
echo "Trace directory = $TRACE_DIR"
echo "Custom branch = $OPT_BRANCH"
fi
mkdir -p "$TRACE_DIR"
download_trace_files
prepare_multiplex_file

-run_tests
-exit $TEST_RC
+if ! ${OPT_ONLY_PREPARE}; then
+	run_tests
+	exit $TEST_RC
+fi
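The script's `getopts` loop (`-p`, `-v`, `-b`, `-d`) replaces the old positional `$1`/`$2` arguments with named, defaulted options. The same interface can be sketched in Python with argparse; the flag names mirror the script, but the defaults here are illustrative:

```python
import argparse

def parse_args(argv=None):
    """Sketch mirroring the bash getopts flags of run_regression_tests.sh."""
    p = argparse.ArgumentParser(description="Falco integration tests runner (sketch)")
    p.add_argument("-p", dest="only_prepare", action="store_true",
                   help="prepare the falco_traces suite without running tests")
    p.add_argument("-v", dest="verbose", action="store_true",
                   help="verbose output")
    p.add_argument("-b", dest="branch", default="none",
                   help="custom branch for downloading trace fixtures")
    p.add_argument("-d", dest="build_dir", default="../build",
                   help="build directory where Falco has been built")
    return p.parse_args(argv)

args = parse_args(["-v", "-b", "my-branch"])
```

Like the bash version, every option has a default, so the script can now be invoked with no arguments at all, which the old positional `BUILD_DIR=$1` form did not allow.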


@@ -1,5 +1,6 @@
add_subdirectory(k8s_audit)
add_subdirectory(psp)

# Note: list of traces is created at cmake time, not build time
file(GLOB test_trace_files
     "${CMAKE_CURRENT_SOURCE_DIR}/*.scap")
@@ -11,4 +12,8 @@ foreach(trace_file_path ${test_trace_files})
  add_custom_command(OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/${trace_file}
                     COMMAND ${CMAKE_COMMAND} -E copy ${trace_file_path} ${CMAKE_CURRENT_BINARY_DIR}/${trace_file}
                     DEPENDS ${trace_file_path})
  list(APPEND BASE_SCAP_TRACE_FILES_TARGETS test-trace-${trace_file})
endforeach()
add_custom_target(trace-files-base-scap ALL)
add_dependencies(trace-files-base-scap ${BASE_SCAP_TRACE_FILES_TARGETS})


@@ -9,4 +9,8 @@ foreach(trace_file_path ${test_trace_files})
  add_custom_command(OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/${trace_file}
                     COMMAND ${CMAKE_COMMAND} -E copy ${trace_file_path} ${CMAKE_CURRENT_BINARY_DIR}/${trace_file}
                     DEPENDS ${trace_file_path})
  list(APPEND K8S_AUDIT_TRACE_FILES_TARGETS test-trace-${trace_file})
endforeach()
add_custom_target(trace-files-k8s-audit ALL)
add_dependencies(trace-files-k8s-audit ${K8S_AUDIT_TRACE_FILES_TARGETS})


@@ -10,4 +10,8 @@ foreach(trace_file_path ${test_trace_files})
  add_custom_command(OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/${trace_file}
                     COMMAND ${CMAKE_COMMAND} -E copy ${trace_file_path} ${CMAKE_CURRENT_BINARY_DIR}/${trace_file}
                     DEPENDS ${trace_file_path})
  list(APPEND PSP_TRACE_FILES_TARGETS test-trace-${trace_file})
endforeach()
add_custom_target(trace-files-psp ALL)
add_dependencies(trace-files-psp ${PSP_TRACE_FILES_TARGETS})

test/usage Executable file

@@ -0,0 +1,32 @@
#!/usr/bin/env bash
#
# Copyright (C) 2020 The Falco Authors.
#
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
cat <<EOF
Hello, this is Falco integration tests runner.
SYNOPSIS
bash run_regression_tests.sh [-h] [-v] [-p] [-d=<build directory>] [-b=<custom branch>]
DESCRIPTION
-h Display usage instructions
-v Verbose output
-p Prepare the falco_traces integration test suite
-b=CUSTOM_BRANCH Specify a custom branch for downloading falco_traces fixtures (defaults to "none")
-d=BUILD_DIRECTORY Specify the build directory where Falco has been built (defaults to $SCRIPTDIR/../build)
EOF


@@ -1,22 +0,0 @@
# Userspace
Here is where the main Falco engine lives.
There are two libraries here that are roughly separated in the following way.
### falco
This is the beloved `main()` function of the Falco program, as well as the logic for various falco outputs.
An output is just a way of delivering a Falco alert, the most simple output is the Falco stdout log.
### engine
This is the processing engine that connects the inbound stream of system calls to the rules engine.
This is the main powerhouse behind Falco, and does the assertion at runtime that compares system call events to rules.
### CMake
If you are adding new files to either library you must define the `.cpp` file in the associated CMakeLists.txt file such that the linker will know where to find your new file.


@@ -16,7 +16,6 @@ set(FALCO_ENGINE_SOURCE_FILES
  falco_engine.cpp
  falco_utils.cpp
  json_evt.cpp
-  prettyprint.cpp
  ruleset.cpp
  token_bucket.cpp
  formats.cpp)
@@ -24,6 +23,10 @@ set(FALCO_ENGINE_SOURCE_FILES
add_library(falco_engine STATIC ${FALCO_ENGINE_SOURCE_FILES})

add_dependencies(falco_engine njson lyaml lpeg string-view-lite)

if(USE_BUNDLED_DEPS)
  add_dependencies(falco_engine libyaml)
endif()

target_include_directories(
  falco_engine
  PUBLIC


@@ -22,7 +22,6 @@ limitations under the License.
#include "falco_engine.h"
#include "falco_utils.h"
#include "falco_engine_version.h"
-#include "prettyprint.h"
#include "config_falco_engine.h"
#include "formats.h"
@@ -317,9 +316,6 @@ unique_ptr<falco_engine::rule_result> falco_engine::process_sinsp_event(sinsp_ev
		string err = "Error invoking function output: " + string(lerr);
		throw falco_exception(err);
	}
-	prettyprint::sinsp_event(ev, "Raw event just before popping to Lua");
	res->evt = ev;
	const char *p = lua_tostring(m_ls, -3);
	res->rule = p;


@@ -16,9 +16,9 @@ limitations under the License.
// The version of rules/filter fields/etc supported by this falco
// engine.
-#define FALCO_ENGINE_VERSION (5)
+#define FALCO_ENGINE_VERSION (6)

// This is the result of running "falco --list -N | sha256sum" and
// represents the fields supported by this version of falco. It's used
// at build time to detect a changed set of fields.
-#define FALCO_FIELDS_CHECKSUM "ca9e75fa41fe4480cdfad8cf275cdbbc334e656569f070c066d87cbd2955c1ae"
+#define FALCO_FIELDS_CHECKSUM "2f324e2e66d4b423f53600e7e0fcf2f0ff72e4a87755c490f2ae8f310aba9386"
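Per the comment above, the checksum is simply the sha256 of the `falco --list -N` output, so any change to the supported field set changes the constant and is caught at build time. A minimal sketch of that derivation (the field-list string here is a stand-in, not Falco's real output):

```python
import hashlib

def fields_checksum(field_list_text):
    """Sketch of `falco --list -N | sha256sum`: a sha256 hex digest over the
    textual list of supported fields, used to detect a changed field set."""
    return hashlib.sha256(field_list_text.encode("utf-8")).hexdigest()

digest = fields_checksum("evt.type\nproc.name\ncontainer.id\n")
```

Because the digest is deterministic, comparing it against the committed constant is enough to flag an out-of-date `FALCO_FIELDS_CHECKSUM` without diffing the whole field list.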


@@ -52,6 +52,12 @@ std::string wrap_text(const std::string& str, uint32_t initial_pos, uint32_t ind
	return ret;
}

uint32_t hardware_concurrency()
{
	auto hc = std::thread::hardware_concurrency();
	return hc ? hc : 1;
}

void readfile(const std::string& filename, std::string& data)
{
	std::ifstream file(filename.c_str(), std::ios::in);
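The new helper guards against `std::thread::hardware_concurrency()` returning 0, which the C++ standard permits when the value is not computable. Python's `os.cpu_count()` has the analogous quirk (it may return `None`), so the same fallback pattern applies:

```python
import os

def hardware_concurrency():
    """Same fallback as the C++ helper: os.cpu_count() may return None,
    just as std::thread::hardware_concurrency() may return 0, so
    treat the unknown case as a single core."""
    return os.cpu_count() or 1
```

Callers can then size thread pools with `hardware_concurrency()` without a separate zero/None check.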


@@ -21,6 +21,7 @@ limitations under the License.
#include <fstream>
#include <iostream>
#include <string>
#include <thread>

#include <nonstd/string_view.hpp>

#pragma once
@@ -34,6 +35,9 @@ namespace utils
std::string wrap_text(const std::string& str, uint32_t initial_pos, uint32_t indent, uint32_t line_len);
void readfile(const std::string& filename, std::string& data);
uint32_t hardware_concurrency();

namespace network
{
static const std::string UNIX_SCHEME("unix://");


@@ -45,7 +45,7 @@ const json &json_event::jevt()
	return m_jevt;
}

-uint64_t json_event::get_ts()
+uint64_t json_event::get_ts() const
{
	return m_event_ts;
}


@@ -38,14 +38,14 @@ public:
	void set_jevt(nlohmann::json &evt, uint64_t ts);
	const nlohmann::json &jevt();

-	uint64_t get_ts();
+	uint64_t get_ts() const;

-	inline uint16_t get_source()
+	inline uint16_t get_source() const
	{
		return ESRC_K8S_AUDIT;
	}

-	inline uint16_t get_type()
+	inline uint16_t get_type() const
	{
		// All k8s audit events have the single tag "1". - see falco_engine::process_k8s_audit_event
		return 1;


@@ -1,4 +1,4 @@
-- Copyright (C) 2020 The Falco Authors.
--
-- Licensed under the Apache License, Version 2.0 (the "License");
-- you may not use this file except in compliance with the License.
@@ -11,25 +11,24 @@
-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-- See the License for the specific language governing permissions and
-- limitations under the License.

local parser = require("parser")
local compiler = {}

compiler.trim = parser.trim

function map(f, arr)
	local res = {}
	for i, v in ipairs(arr) do
		res[i] = f(v)
	end
	return res
end

function foldr(f, acc, arr)
	for i, v in pairs(arr) do
		acc = f(acc, v)
	end
	return acc
end
--[[
@@ -47,181 +46,192 @@ end
--]]
function copy_ast_obj(obj)
	if type(obj) ~= 'table' then
		return obj
	end
	local res = {}
	for k, v in pairs(obj) do
		res[copy_ast_obj(k)] = copy_ast_obj(v)
	end
	return res
end
function expand_macros(ast, defs, changed)
	if (ast.type == "Rule") then
		return expand_macros(ast.filter, defs, changed)
	elseif ast.type == "Filter" then
		if (ast.value.type == "Macro") then
			if (defs[ast.value.value] == nil) then
				return false, "Undefined macro '" .. ast.value.value .. "' used in filter."
			end
			defs[ast.value.value].used = true
			ast.value = copy_ast_obj(defs[ast.value.value].ast)
			changed = true
			return true, changed
		end
		return expand_macros(ast.value, defs, changed)
	elseif ast.type == "BinaryBoolOp" then
		if (ast.left.type == "Macro") then
			if (defs[ast.left.value] == nil) then
				return false, "Undefined macro '" .. ast.left.value .. "' used in filter."
			end
			defs[ast.left.value].used = true
			ast.left = copy_ast_obj(defs[ast.left.value].ast)
			changed = true
		end
		if (ast.right.type == "Macro") then
			if (defs[ast.right.value] == nil) then
				return false, "Undefined macro " .. ast.right.value .. " used in filter."
			end
			defs[ast.right.value].used = true
			ast.right = copy_ast_obj(defs[ast.right.value].ast)
			changed = true
		end
		local status, changed_left = expand_macros(ast.left, defs, false)
		if status == false then
			return false, changed_left
		end
		local status, changed_right = expand_macros(ast.right, defs, false)
		if status == false then
			return false, changed_right
		end
		return true, changed or changed_left or changed_right
	elseif ast.type == "UnaryBoolOp" then
		if (ast.argument.type == "Macro") then
			if (defs[ast.argument.value] == nil) then
				return false, "Undefined macro " .. ast.argument.value .. " used in filter."
			end
			defs[ast.argument.value].used = true
			ast.argument = copy_ast_obj(defs[ast.argument.value].ast)
			changed = true
		end
		return expand_macros(ast.argument, defs, changed)
	end
	return true, changed
end
function get_macros(ast, set)
	if (ast.type == "Macro") then
		set[ast.value] = true
		return set
	end
	if ast.type == "Filter" then
		return get_macros(ast.value, set)
	end
	if ast.type == "BinaryBoolOp" then
		local left = get_macros(ast.left, {})
		local right = get_macros(ast.right, {})
		for m, _ in pairs(left) do
			set[m] = true
		end
		for m, _ in pairs(right) do
			set[m] = true
		end
		return set
	end
	if ast.type == "UnaryBoolOp" then
		return get_macros(ast.argument, set)
	end
	return set
end
function get_filters(ast)
	local filters = {}

	function cb(node)
		if node.type == "FieldName" then
			filters[node.value] = 1
		end
	end

	parser.traverse_ast(ast.filter.value, {
		FieldName = 1
	}, cb)

	return filters
end
function compiler.expand_lists_in(source, list_defs)
	for name, def in pairs(list_defs) do
		local bpos = string.find(source, name, 1, true)
		while bpos ~= nil do
			def.used = true
			local epos = bpos + string.len(name)
			-- The characters surrounding the name must be delimiters of beginning/end of string
			if (bpos == 1 or string.match(string.sub(source, bpos - 1, bpos - 1), "[%s(),=]")) and
				(epos > string.len(source) or string.match(string.sub(source, epos, epos), "[%s(),=]")) then
				new_source = ""
				if bpos > 1 then
					new_source = new_source .. string.sub(source, 1, bpos - 1)
				end
				sub = table.concat(def.items, ", ")
				new_source = new_source .. sub
				if epos <= string.len(source) then
					new_source = new_source .. string.sub(source, epos, string.len(source))
				end
				source = new_source
				bpos = bpos + (string.len(sub) - string.len(name))
			end
			bpos = string.find(source, name, bpos + 1, true)
		end
	end
	return source
end
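`expand_lists_in` only substitutes a list name when it is bounded by whitespace, parentheses, commas, `=`, or the string boundary, so a list name embedded in a longer identifier is left alone. A compact Python sketch of that same delimiter rule (the condition strings and list names are illustrative):

```python
import re

def expand_lists(source, list_defs):
    """Delimiter-aware list substitution, mirroring compiler.expand_lists_in:
    a list name is replaced only when bounded by [\\s(),=] or the string
    boundary, so it is never rewritten inside a longer word."""
    for name, items in list_defs.items():
        pattern = r"(?:(?<=[\s(),=])|^)" + re.escape(name) + r"(?=[\s(),=]|$)"
        source = re.sub(pattern, ", ".join(items), source)
    return source

cond = "proc.name in (shell_binaries)"
expanded = expand_lists(cond, {"shell_binaries": ["bash", "sh"]})
# expanded == "proc.name in (bash, sh)"
```

The Lua version does the same check character-by-character with `string.match` and re-scans after each splice, since Lua patterns lack lookaround.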
function compiler.compile_macro(line, macro_defs, list_defs)
	line = compiler.expand_lists_in(line, list_defs)
	local ast, error_msg = parser.parse_filter(line)

	if (error_msg) then
		msg = "Compilation error when compiling \"" .. line .. "\": " .. error_msg
		return false, msg
	end

	-- Simply as a validation step, try to expand all macros in this
	-- macro's condition. This changes the ast, so we make a copy
	-- first.
	local ast_copy = copy_ast_obj(ast)

	if (ast.type == "Rule") then
		-- Line is a filter, so expand macro references
		repeat
			status, expanded = expand_macros(ast_copy, macro_defs, false)
			if status == false then
				msg = "Compilation error when compiling \"" .. line .. "\": " .. expanded
				return false, msg
			end
		until expanded == false
	else
		return false, "Unexpected top-level AST type: " .. ast.type
	end

	return true, ast
end
--[[
@@ -229,32 +239,31 @@ end
--]]
function compiler.compile_filter(name, source, macro_defs, list_defs)
	source = compiler.expand_lists_in(source, list_defs)
	local ast, error_msg = parser.parse_filter(source)

	if (error_msg) then
		msg = "Compilation error when compiling \"" .. source .. "\": " .. error_msg
		return false, msg
	end

	if (ast.type == "Rule") then
		-- Line is a filter, so expand macro references
		repeat
			status, expanded = expand_macros(ast, macro_defs, false)
			if status == false then
				return false, expanded
			end
		until expanded == false
	else
		return false, "Unexpected top-level AST type: " .. ast.type
	end

	filters = get_filters(ast)

	return true, ast, filters
end

return compiler
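Both `compile_macro` and `compile_filter` drive `expand_macros` in a `repeat ... until expanded == false` loop: one pass may splice in a macro body that itself references other macros, so passes repeat until a fixed point. A token-level Python sketch of that loop (the macro names and token representation are illustrative, not the engine's AST):

```python
def expand_macros_once(tokens, macros):
    """One pass: splice in the definition of any token that names a macro."""
    out, changed = [], False
    for tok in tokens:
        if tok in macros:
            out.extend(macros[tok])
            changed = True
        else:
            out.append(tok)
    return out, changed

def expand_macros(tokens, macros):
    """Repeat single passes until a fixed point, mirroring the compiler's
    `repeat ... until expanded == false` loop; nested macros need several
    rounds. (No recursion guard, like the original: a macro that expands
    to itself would never terminate.)"""
    changed = True
    while changed:
        tokens, changed = expand_macros_once(tokens, macros)
    return tokens

macros = {
    "spawned_process": ["evt.type", "=", "execve"],
}
result = expand_macros(["spawned_process", "and", "proc.name", "=", "sh"], macros)
```

In the real compiler the substitution rewrites AST nodes rather than token lists, and an undefined macro aborts compilation with an error instead of being passed through.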


@@ -1,4 +1,4 @@
-- Copyright (C) 2020 The Falco Authors.
--
-- Licensed under the Apache License, Version 2.0 (the "License");
-- you may not use this file except in compliance with the License.
@@ -12,7 +12,6 @@
-- See the License for the specific language governing permissions and
-- limitations under the License.
--

--[[
Falco grammar and parser.
@@ -40,232 +39,258 @@ local space = lpeg.space
-- creates an error message for the input string
local function syntaxerror(errorinfo, pos, msg)
	local error_msg = "%s: syntax error, %s"
	return string.format(error_msg, pos, msg)
end

-- gets the farthest failure position
local function getffp(s, i, t)
	return t.ffp or i, t
end

-- gets the table that contains the error information
local function geterrorinfo()
	return Cmt(Carg(1), getffp) * (C(V "OneWord") + Cc("EOF")) / function(t, u)
		t.unexpected = u
		return t
	end
end

-- creates an error message using the farthest failure position
local function errormsg()
	return geterrorinfo() / function(t)
		local p = t.ffp or 1
		local msg = "unexpected '%s', expecting %s"
		msg = string.format(msg, t.unexpected, t.expected)
		return nil, syntaxerror(t, p, msg)
	end
end

-- reports a syntactic error
local function report_error()
	return errormsg()
end
--- sets the farthest failure position and the expected tokens
local function setffp(s, i, t, n)
	if not t.ffp or i > t.ffp then
		t.ffp = i
		t.list = {}
		t.list[n] = n
		t.expected = "'" .. n .. "'"
	elseif i == t.ffp then
		if not t.list[n] then
			t.list[n] = n
			t.expected = "'" .. n .. "', " .. t.expected
		end
	end
	return false
end

local function updateffp(name)
	return Cmt(Carg(1) * Cc(name), setffp)
end

-- regular combinators and auxiliary functions
local function token(pat, name)
	return pat * V "Skip" + updateffp(name) * P(false)
end

local function symb(str)
	return token(P(str), str)
end

local function kw(str)
	return token(P(str) * -V "idRest", str)
end

local function list(pat, sep)
	return Ct(pat ^ -1 * (sep * pat ^ 0) ^ 0) / function(elements)
		return {
			type = "List",
			elements = elements
		}
	end
end

-- http://lua-users.org/wiki/StringTrim
function trim(s)
	if (type(s) ~= "string") then
		return s
	end
	return (s:gsub("^%s*(.-)%s*$", "%1"))
end
parser.trim = trim

local function terminal(tag)
	-- Rather than trim the whitespace in this way, it would be nicer to exclude it from the capture...
	return token(V(tag), tag) / function(tok)
		val = tok
		if tag ~= "String" then
			val = trim(tok)
		end
		return {
			type = tag,
			value = val
		}
	end
end

local function unaryboolop(op, e)
	return {
		type = "UnaryBoolOp",
		operator = op,
		argument = e
	}
end

local function unaryrelop(e, op)
	return {
		type = "UnaryRelOp",
		operator = op,
		argument = e
	}
end

local function binaryop(e1, op, e2)
	if not op then
		return e1
	else
		return {
			type = "BinaryBoolOp",
			operator = op,
			left = e1,
			right = e2
		}
	end
end

local function bool(pat, sep)
	return Cf(pat * Cg(sep * pat) ^ 0, binaryop)
end

local function rel(left, sep, right)
	return left * sep * right / function(e1, op, e2)
		return {
			type = "BinaryRelOp",
			operator = op,
			left = e1,
			right = e2
		}
	end
end
-- grammar -- grammar
local function filter(e) local function filter(e)
return {type = "Filter", value = e} return {
type = "Filter",
value = e
}
end end
local function rule(filter) local function rule(filter)
return {type = "Rule", filter = filter} return {
type = "Rule",
filter = filter
}
end end
local G = { local G = {
V "Start", -- Entry rule V "Start", -- Entry rule
Start = V "Skip" * (V "Comment" + V "Rule" / rule) ^ -1 * -1 + report_error(), Start = V "Skip" * (V "Comment" + V "Rule" / rule) ^ -1 * -1 + report_error(),
-- Grammar -- Grammar
Comment = P "#" * P(1) ^ 0, Comment = P "#" * P(1) ^ 0,
Rule = V "Filter" / filter * ((V "Skip") ^ -1), Rule = V "Filter" / filter * ((V "Skip") ^ -1),
Filter = V "OrExpression", Filter = V "OrExpression",
OrExpression = bool(V "AndExpression", V "OrOp"), OrExpression = bool(V "AndExpression", V "OrOp"),
AndExpression = bool(V "NotExpression", V "AndOp"), AndExpression = bool(V "NotExpression", V "AndOp"),
NotExpression = V "UnaryBoolOp" * V "NotExpression" / unaryboolop + V "ExistsExpression", NotExpression = V "UnaryBoolOp" * V "NotExpression" / unaryboolop + V "ExistsExpression",
ExistsExpression = terminal "FieldName" * V "ExistsOp" / unaryrelop + V "MacroExpression", ExistsExpression = terminal "FieldName" * V "ExistsOp" / unaryrelop + V "MacroExpression",
MacroExpression = terminal "Macro" + V "RelationalExpression", MacroExpression = terminal "Macro" + V "RelationalExpression",
RelationalExpression = rel(terminal "FieldName", V "RelOp", V "Value") + RelationalExpression = rel(terminal "FieldName", V "RelOp", V "Value") +
rel(terminal "FieldName", V "SetOp", V "InList") + rel(terminal "FieldName", V "SetOp", V "InList") + V "PrimaryExp",
V "PrimaryExp", PrimaryExp = symb("(") * V "Filter" * symb(")"),
PrimaryExp = symb("(") * V "Filter" * symb(")"), FuncArgs = symb("(") * list(V "Value", symb(",")) * symb(")"),
FuncArgs = symb("(") * list(V "Value", symb(",")) * symb(")"), -- Terminals
-- Terminals Value = terminal "Number" + terminal "String" + terminal "BareString",
Value = terminal "Number" + terminal "String" + terminal "BareString", InList = symb("(") * list(V "Value", symb(",")) * symb(")"),
InList = symb("(") * list(V "Value", symb(",")) * symb(")"), -- Lexemes
-- Lexemes Space = space ^ 1,
Space = space ^ 1, Skip = (V "Space") ^ 0,
Skip = (V "Space") ^ 0, idStart = alpha + P("_"),
idStart = alpha + P("_"), idRest = alnum + P("_"),
idRest = alnum + P("_"), Identifier = V "idStart" * V "idRest" ^ 0,
Identifier = V "idStart" * V "idRest" ^ 0, Macro = V "idStart" * V "idRest" ^ 0 * -P ".",
Macro = V "idStart" * V "idRest" ^ 0 * -P ".", Int = digit ^ 1,
Int = digit ^ 1, PathString = (alnum + S ",.-_/*?") ^ 1,
PathString = (alnum + S ",.-_/*?") ^ 1, PortRangeString = (V "Int" + S ":,") ^ 1,
PortRangeString = (V "Int" + S ":,") ^ 1, Index = V "PortRangeString" + V "Int" + V "PathString",
Index = V "PortRangeString" + V "Int" + V "PathString", FieldName = V "Identifier" * (P "." + V "Identifier") ^ 1 * (P "[" * V "Index" * P "]") ^ -1,
FieldName = V "Identifier" * (P "." + V "Identifier") ^ 1 * (P "[" * V "Index" * P "]") ^ -1, Name = C(V "Identifier") * -V "idRest",
Name = C(V "Identifier") * -V "idRest", Hex = (P("0x") + P("0X")) * xdigit ^ 1,
Hex = (P("0x") + P("0X")) * xdigit ^ 1, Expo = S("eE") * S("+-") ^ -1 * digit ^ 1,
Expo = S("eE") * S("+-") ^ -1 * digit ^ 1, Float = (((digit ^ 1 * P(".") * digit ^ 0) + (P(".") * digit ^ 1)) * V "Expo" ^ -1) + (digit ^ 1 * V "Expo"),
Float = (((digit ^ 1 * P(".") * digit ^ 0) + (P(".") * digit ^ 1)) * V "Expo" ^ -1) + (digit ^ 1 * V "Expo"), Number = C(V "Hex" + V "Float" + V "Int") / function(n)
Number = C(V "Hex" + V "Float" + V "Int") / function(n) return tonumber(n)
return tonumber(n) end,
end, String = (P '"' * C(((P "\\" * P(1)) + (P(1) - P '"')) ^ 0) * P '"' + P "'" *
String = (P '"' * C(((P "\\" * P(1)) + (P(1) - P '"')) ^ 0) * P '"' + C(((P "\\" * P(1)) + (P(1) - P "'")) ^ 0) * P "'"),
P "'" * C(((P "\\" * P(1)) + (P(1) - P "'")) ^ 0) * P "'"), BareString = C((P(1) - S " (),=") ^ 1),
BareString = C((P(1) - S " (),=") ^ 1), OrOp = kw("or") / "or",
OrOp = kw("or") / "or", AndOp = kw("and") / "and",
AndOp = kw("and") / "and", Colon = kw(":"),
Colon = kw(":"), RelOp = symb("=") / "=" + symb("==") / "==" + symb("!=") / "!=" + symb("<=") / "<=" + symb(">=") / ">=" + symb("<") /
RelOp = symb("=") / "=" + symb("==") / "==" + symb("!=") / "!=" + symb("<=") / "<=" + symb(">=") / ">=" + "<" + symb(">") / ">" + symb("contains") / "contains" + symb("icontains") / "icontains" + symb("glob") / "glob" +
symb("<") / "<" + symb("startswith") / "startswith" + symb("endswith") / "endswith",
symb(">") / ">" + SetOp = kw("in") / "in" + kw("intersects") / "intersects" + kw("pmatch") / "pmatch",
symb("contains") / "contains" + UnaryBoolOp = kw("not") / "not",
symb("icontains") / "icontains" + ExistsOp = kw("exists") / "exists",
symb("glob") / "glob" + -- for error reporting
symb("startswith") / "startswith" + OneWord = V "Name" + V "Number" + V "String" + P(1)
symb("endswith") / "endswith",
SetOp = kw("in") / "in" + kw("intersects") / "intersects" + kw("pmatch") / "pmatch",
UnaryBoolOp = kw("not") / "not",
ExistsOp = kw("exists") / "exists",
-- for error reporting
OneWord = V "Name" + V "Number" + V "String" + P(1)
} }
--[[ --[[
Parses a single filter and returns the AST. Parses a single filter and returns the AST.
--]] --]]
function parser.parse_filter(subject) function parser.parse_filter(subject)
local errorinfo = {subject = subject} local errorinfo = {
lpeg.setmaxstack(1000) subject = subject
local ast, error_msg = lpeg.match(G, subject, nil, errorinfo) }
return ast, error_msg lpeg.setmaxstack(1000)
local ast, error_msg = lpeg.match(G, subject, nil, errorinfo)
return ast, error_msg
end end
function print_ast(ast, level) function print_ast(ast, level)
local t = ast.type local t = ast.type
level = level or 0 level = level or 0
local prefix = string.rep(" ", level * 4) local prefix = string.rep(" ", level * 4)
level = level + 1 level = level + 1
if t == "Rule" then if t == "Rule" then
print_ast(ast.filter, level) print_ast(ast.filter, level)
elseif t == "Filter" then elseif t == "Filter" then
print_ast(ast.value, level) print_ast(ast.value, level)
elseif t == "BinaryBoolOp" or t == "BinaryRelOp" then elseif t == "BinaryBoolOp" or t == "BinaryRelOp" then
print(prefix .. ast.operator) print(prefix .. ast.operator)
print_ast(ast.left, level) print_ast(ast.left, level)
print_ast(ast.right, level) print_ast(ast.right, level)
elseif t == "UnaryRelOp" or t == "UnaryBoolOp" then elseif t == "UnaryRelOp" or t == "UnaryBoolOp" then
print(prefix .. ast.operator) print(prefix .. ast.operator)
print_ast(ast.argument, level) print_ast(ast.argument, level)
elseif t == "List" then elseif t == "List" then
for i, v in ipairs(ast.elements) do for i, v in ipairs(ast.elements) do
print_ast(v, level) print_ast(v, level)
end end
elseif t == "FieldName" or t == "Number" or t == "String" or t == "BareString" or t == "Macro" then elseif t == "FieldName" or t == "Number" or t == "String" or t == "BareString" or t == "Macro" then
print(prefix .. t .. " " .. ast.value) print(prefix .. t .. " " .. ast.value)
elseif t == "MacroDef" then elseif t == "MacroDef" then
-- don't print for now -- don't print for now
else else
error("Unexpected type in print_ast: " .. t) error("Unexpected type in print_ast: " .. t)
end end
end end
parser.print_ast = print_ast parser.print_ast = print_ast
@@ -275,32 +300,32 @@ parser.print_ast = print_ast
-- cb(ast_node, ctx) -- cb(ast_node, ctx)
-- ctx is optional. -- ctx is optional.
function traverse_ast(ast, node_types, cb, ctx) function traverse_ast(ast, node_types, cb, ctx)
local t = ast.type local t = ast.type
if node_types[t] ~= nil then if node_types[t] ~= nil then
cb(ast, ctx) cb(ast, ctx)
end end
if t == "Rule" then if t == "Rule" then
traverse_ast(ast.filter, node_types, cb, ctx) traverse_ast(ast.filter, node_types, cb, ctx)
elseif t == "Filter" then elseif t == "Filter" then
traverse_ast(ast.value, node_types, cb, ctx) traverse_ast(ast.value, node_types, cb, ctx)
elseif t == "BinaryBoolOp" or t == "BinaryRelOp" then elseif t == "BinaryBoolOp" or t == "BinaryRelOp" then
traverse_ast(ast.left, node_types, cb, ctx) traverse_ast(ast.left, node_types, cb, ctx)
traverse_ast(ast.right, node_types, cb, ctx) traverse_ast(ast.right, node_types, cb, ctx)
elseif t == "UnaryRelOp" or t == "UnaryBoolOp" then elseif t == "UnaryRelOp" or t == "UnaryBoolOp" then
traverse_ast(ast.argument, node_types, cb, ctx) traverse_ast(ast.argument, node_types, cb, ctx)
elseif t == "List" then elseif t == "List" then
for i, v in ipairs(ast.elements) do for i, v in ipairs(ast.elements) do
traverse_ast(v, node_types, cb, ctx) traverse_ast(v, node_types, cb, ctx)
end end
elseif t == "MacroDef" then elseif t == "MacroDef" then
traverse_ast(ast.value, node_types, cb, ctx) traverse_ast(ast.value, node_types, cb, ctx)
elseif t == "FieldName" or t == "Number" or t == "String" or t == "BareString" or t == "Macro" then elseif t == "FieldName" or t == "Number" or t == "String" or t == "BareString" or t == "Macro" then
-- do nothing, no traversal needed -- do nothing, no traversal needed
else else
error("Unexpected type in traverse_ast: " .. t) error("Unexpected type in traverse_ast: " .. t)
end end
end end
parser.traverse_ast = traverse_ast parser.traverse_ast = traverse_ast
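The module above exposes `parse_filter` and `traverse_ast` together; a minimal usage sketch, assuming `parser.lua` (this module) and the `lpeg` rock are on `package.path`, with an illustrative condition string:

```lua
-- Sketch: parse a Falco condition and collect the field names it references.
-- Assumes parser.lua (the module above) and lpeg are installed; the condition
-- string and the `fields` helper table are illustrative, not from the source.
local parser = require("parser")

local ast, err = parser.parse_filter("evt.type = open and fd.name contains /etc")
assert(ast, err)

-- traverse_ast fires the callback for every node whose type is a key
-- of the node_types table; here we only ask for FieldName nodes.
local fields = {}
parser.traverse_ast(ast, {FieldName = 1}, function(node)
    fields[node.value] = true
end)
-- fields now holds "evt.type" and "fd.name" as keys
```

The same pattern (a `node_types` set plus a callback) is what `sinsp_rule_utils` uses below to find `evt.type` restrictions in rule conditions.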

File diff suppressed because it is too large

@@ -1,4 +1,4 @@
-- Copyright (C) 2020 The Falco Authors.
--
-- Licensed under the Apache License, Version 2.0 (the "License");
-- you may not use this file except in compliance with the License.
@@ -12,55 +12,52 @@
-- See the License for the specific language governing permissions and
-- limitations under the License.
--
local parser = require("parser")

local sinsp_rule_utils = {}

function sinsp_rule_utils.check_for_ignored_syscalls_events(ast, filter_type, source)
    function check_syscall(val)
        if ignored_syscalls[val] then
            error("Ignored syscall \"" .. val .. "\" in " .. filter_type .. ": " .. source)
        end
    end

    function check_event(val)
        if ignored_events[val] then
            error("Ignored event \"" .. val .. "\" in " .. filter_type .. ": " .. source)
        end
    end

    function cb(node)
        if node.left.type == "FieldName" and (node.left.value == "evt.type" or node.left.value == "syscall.type") then
            if (node.operator == "in" or node.operator == "intersects" or node.operator == "pmatch") then
                for i, v in ipairs(node.right.elements) do
                    if v.type == "BareString" then
                        if node.left.value == "evt.type" then
                            check_event(v.value)
                        else
                            check_syscall(v.value)
                        end
                    end
                end
            else
                if node.right.type == "BareString" then
                    if node.left.value == "evt.type" then
                        check_event(node.right.value)
                    else
                        check_syscall(node.right.value)
                    end
                end
            end
        end
    end

    parser.traverse_ast(ast, {
        BinaryRelOp = 1
    }, cb)
end

-- Examine the ast and find the event types/syscalls for which the
@@ -75,125 +72,129 @@ end
function sinsp_rule_utils.get_evttypes_syscalls(name, ast, source, warn_evttypes, verbose)
    local evttypes = {}
    local syscallnums = {}
    local evtnames = {}
    local found_event = false
    local found_not = false
    local found_event_after_not = false

    function cb(node)
        if node.type == "UnaryBoolOp" then
            if node.operator == "not" then
                found_not = true
            end
        else
            if node.operator == "!=" then
                found_not = true
            end
            if node.left.type == "FieldName" and node.left.value == "evt.type" then
                found_event = true
                if found_not then
                    found_event_after_not = true
                end
                if (node.operator == "in" or node.operator == "intersects" or node.operator == "pmatch") then
                    for i, v in ipairs(node.right.elements) do
                        if v.type == "BareString" then
                            -- The event must be a known event
                            if events[v.value] == nil and syscalls[v.value] == nil then
                                error("Unknown event/syscall \"" .. v.value .. "\" in filter: " .. source)
                            end
                            evtnames[v.value] = 1
                            if events[v.value] ~= nil then
                                for id in string.gmatch(events[v.value], "%S+") do
                                    evttypes[id] = 1
                                end
                            end
                            if syscalls[v.value] ~= nil then
                                for id in string.gmatch(syscalls[v.value], "%S+") do
                                    syscallnums[id] = 1
                                end
                            end
                        end
                    end
                else
                    if node.right.type == "BareString" then
                        -- The event must be a known event
                        if events[node.right.value] == nil and syscalls[node.right.value] == nil then
                            error("Unknown event/syscall \"" .. node.right.value .. "\" in filter: " .. source)
                        end
                        evtnames[node.right.value] = 1
                        if events[node.right.value] ~= nil then
                            for id in string.gmatch(events[node.right.value], "%S+") do
                                evttypes[id] = 1
                            end
                        end
                        if syscalls[node.right.value] ~= nil then
                            for id in string.gmatch(syscalls[node.right.value], "%S+") do
                                syscallnums[id] = 1
                            end
                        end
                    end
                end
            end
        end
    end

    parser.traverse_ast(ast.filter.value, {
        BinaryRelOp = 1,
        UnaryBoolOp = 1
    }, cb)

    if not found_event then
        if warn_evttypes == true then
            io.stderr:write("Rule " .. name .. ": warning (no-evttype):\n")
            io.stderr:write(source .. "\n")
            io.stderr:write(
                " did not contain any evt.type restriction, meaning it will run for all event types.\n")
            io.stderr:write(
                " This has a significant performance penalty. Consider adding an evt.type restriction if possible.\n")
        end
        evttypes = {}
        syscallnums = {}
        evtnames = {}
    end

    if found_event_after_not then
        if warn_evttypes == true then
            io.stderr:write("Rule " .. name .. ": warning (trailing-evttype):\n")
            io.stderr:write(source .. "\n")
            io.stderr:write(" does not have all evt.type restrictions at the beginning of the condition,\n")
            io.stderr:write(" or uses a negative match (i.e. \"not\"/\"!=\") for some evt.type restriction.\n")
            io.stderr:write(
                " This has a performance penalty, as the rule can not be limited to specific event types.\n")
            io.stderr:write(" Consider moving all evt.type restrictions to the beginning of the rule and/or\n")
            io.stderr:write(" replacing negative matches with positive matches if possible.\n")
        end
        evttypes = {}
        syscallnums = {}
        evtnames = {}
    end

    evtnames_only = {}
    local num_evtnames = 0
    for name, dummy in pairs(evtnames) do
        table.insert(evtnames_only, name)
        num_evtnames = num_evtnames + 1
    end

    if num_evtnames == 0 then
        table.insert(evtnames_only, "all")
    end

    table.sort(evtnames_only)

    if verbose then
        io.stderr:write("Event types/Syscalls for rule " .. name .. ": " .. table.concat(evtnames_only, ",") .. "\n")
    end

    return evttypes, syscallnums
end

return sinsp_rule_utils


@@ -1,82 +0,0 @@
/*
Copyright (C) 2019 The Falco Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
#include "prettyprint.h"
/**
* sinsp_event will pretty print a pointer to a sinsp_evt.
*
* This can be used for debugging an event at various times during development.
* This should never be turned on in production. Feel free to add fields below
* as we need them, and we can just dump an event in here whenever we need while
* debugging.
*
* sinsp_events are blue because they are happy.
*/
void prettyprint::sinsp_event(sinsp_evt *ev, const char* note)
{
ev->get_type()
prettyprint::warning();
printf("\033[0;34m"); // Start Blue
printf("\n*************************************************************\n");
printf("[Sinsp Event: %s]\n\n", note);
printf("name: %s\n", ev->get_name());
for(uint32_t i = 0; i <= ev->get_num_params(); i++){
}
for(int64_t j = 0; j <= ev->get_fd_num(); j++) {
printf("%s: %s\n", ev->get_param_name(j), ev->get_param_value_str(j, true).c_str());
};
// One off fields
//printf("fdinfo: %s\n", ev->get_fd_info()->tostring_clean().c_str());
//printf("type: %d\n", ev->get_type());
/*
printf("k8s.ns.name: %s\n", ev->get_param_value_str("k8s.ns.name", true).c_str());
printf("k8s %s\n", ev->get_param_value_str("k8s", true).c_str());
printf("container: %s\n", ev->get_param_value_str("container", true).c_str());
printf("proc.pid: %s\n", ev->get_param_value_str("%proc.pid", true).c_str());
printf("proc: %s\n", ev->get_param_value_str("%proc", true).c_str());
printf("data: %s\n", ev->get_param_value_str("data", true).c_str());
printf("cpu: %s\n", ev->get_param_value_str("cpu", true).c_str());
printf("fd: %s\n", ev->get_param_value_str("fd", true).c_str());
printf("fd: %s\n", ev->get_param_value_str("evt.arg.fd", true).c_str());
printf("user: %s\n", ev->get_param_value_str("user", true).c_str());
*/
printf("*************************************************************\n");
printf("\033[0m");
}
/**
* has_alerted controls our one time preliminary alert for using pretty print which is debug only
*/
bool prettyprint::has_alerted = false;
/**
* Warnings are red
*/
void prettyprint::warning() {
if (!prettyprint::has_alerted) {
printf("\033[0;31m"); // Start Red
printf("\n\n");
printf("*************************************************************\n");
printf(" [Pretty Printing Debugging is Enabled] \n");
printf(" This should never be used in production, by anyone, ever. \n");
printf("*************************************************************\n");
printf("\033[0m");
prettyprint::has_alerted = true;
}
}


@@ -1,42 +0,0 @@
/*
Copyright (C) 2019 The Falco Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
#include <string>
#include <set>
#include <vector>
#include <list>
#include <map>
#include "sinsp.h"
#include "filter.h"
#include "event.h"
#include "gen_filter.h"
#ifndef FALCO_FALCO_USERSPACE_PRETTYPRINT_H_
#define FALCO_FALCO_USERSPACE_PRETTYPRINT_H_
class prettyprint {
public:
static void sinsp_event(sinsp_evt *ev, const char* note = "");
private:
static bool has_alerted;
static void warning();
};
#endif //FALCO_FALCO_USERSPACE_PRETTYPRINT_H_


@@ -19,23 +19,24 @@ add_custom_command(
${CMAKE_CURRENT_BINARY_DIR}/version.grpc.pb.h
${CMAKE_CURRENT_BINARY_DIR}/version.pb.cc
${CMAKE_CURRENT_BINARY_DIR}/version.pb.h
${CMAKE_CURRENT_BINARY_DIR}/outputs.grpc.pb.cc
${CMAKE_CURRENT_BINARY_DIR}/outputs.grpc.pb.h
${CMAKE_CURRENT_BINARY_DIR}/outputs.pb.cc
${CMAKE_CURRENT_BINARY_DIR}/outputs.pb.h
${CMAKE_CURRENT_BINARY_DIR}/schema.pb.cc
${CMAKE_CURRENT_BINARY_DIR}/schema.pb.h
COMMENT "Generate gRPC API"
# Falco gRPC Version API
DEPENDS ${CMAKE_CURRENT_SOURCE_DIR}/version.proto
COMMAND ${PROTOC} -I ${CMAKE_CURRENT_SOURCE_DIR} --cpp_out=. ${CMAKE_CURRENT_SOURCE_DIR}/version.proto
COMMAND ${PROTOC} -I ${CMAKE_CURRENT_SOURCE_DIR} --grpc_out=. --plugin=protoc-gen-grpc=${GRPC_CPP_PLUGIN}
        ${CMAKE_CURRENT_SOURCE_DIR}/version.proto
# Falco gRPC Outputs API
DEPENDS ${CMAKE_CURRENT_SOURCE_DIR}/outputs.proto
COMMAND ${PROTOC} -I ${CMAKE_CURRENT_SOURCE_DIR} --cpp_out=. ${CMAKE_CURRENT_SOURCE_DIR}/outputs.proto
        ${CMAKE_CURRENT_SOURCE_DIR}/schema.proto
COMMAND ${PROTOC} -I ${CMAKE_CURRENT_SOURCE_DIR} --grpc_out=. --plugin=protoc-gen-grpc=${GRPC_CPP_PLUGIN}
        ${CMAKE_CURRENT_SOURCE_DIR}/outputs.proto
WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR})

add_executable(
@@ -54,8 +55,8 @@ add_executable(
grpc_server.cpp
${CMAKE_CURRENT_BINARY_DIR}/version.grpc.pb.cc
${CMAKE_CURRENT_BINARY_DIR}/version.pb.cc
${CMAKE_CURRENT_BINARY_DIR}/outputs.grpc.pb.cc
${CMAKE_CURRENT_BINARY_DIR}/outputs.pb.cc
${CMAKE_CURRENT_BINARY_DIR}/schema.pb.cc)

add_dependencies(falco civetweb string-view-lite)


@@ -1,5 +1,5 @@
/*
Copyright (C) 2020 The Falco Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -20,6 +20,7 @@ limitations under the License.
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>

#include "falco_utils.h"

#include "configuration.h"
#include "logger.h"
@@ -32,7 +33,7 @@ falco_configuration::falco_configuration():
	m_time_format_iso_8601(false),
	m_webserver_enabled(false),
	m_webserver_listen_port(8765),
	m_webserver_k8s_audit_endpoint("/k8s-audit"),
	m_webserver_ssl_enabled(false),
	m_config(NULL)
{
@@ -148,11 +149,12 @@ void falco_configuration::init(string conf_filename, list<string> &cmdline_optio
	m_grpc_enabled = m_config->get_scalar<bool>("grpc", "enabled", false);
	m_grpc_bind_address = m_config->get_scalar<string>("grpc", "bind_address", "0.0.0.0:5060");
	m_grpc_threadiness = m_config->get_scalar<uint32_t>("grpc", "threadiness", 0);
	if(m_grpc_threadiness == 0)
	{
		m_grpc_threadiness = falco::utils::hardware_concurrency();
	}
	// todo > else limit threadiness to avoid oversubscription?
	m_grpc_private_key = m_config->get_scalar<string>("grpc", "private_key", "/etc/falco/certs/server.key");
	m_grpc_cert_chain = m_config->get_scalar<string>("grpc", "cert_chain", "/etc/falco/certs/server.crt");
	m_grpc_root_certs = m_config->get_scalar<string>("grpc", "root_certs", "/etc/falco/certs/ca.crt");
@@ -198,7 +200,7 @@ void falco_configuration::init(string conf_filename, list<string> &cmdline_optio
	m_webserver_enabled = m_config->get_scalar<bool>("webserver", "enabled", false);
	m_webserver_listen_port = m_config->get_scalar<uint32_t>("webserver", "listen_port", 8765);
	m_webserver_k8s_audit_endpoint = m_config->get_scalar<string>("webserver", "k8s_audit_endpoint", "/k8s-audit");
	m_webserver_ssl_enabled = m_config->get_scalar<bool>("webserver", "ssl_enabled", false);
	m_webserver_ssl_certificate = m_config->get_scalar<string>("webserver", "ssl_certificate", "/etc/falco/falco.pem");
@@ -344,4 +346,4 @@ void falco_configuration::set_cmdline_option(const string &opt)
	{
		m_config->set_scalar(keyval.first, keyval.second);
	}
}


@@ -206,7 +206,7 @@ public:
	bool m_time_format_iso_8601;

	bool m_grpc_enabled;
	uint32_t m_grpc_threadiness;
	std::string m_grpc_bind_address;
	std::string m_grpc_private_key;
	std::string m_grpc_cert_chain;


@@ -140,9 +140,9 @@ static void usage()
" -P, --pidfile <pid_file> When run as a daemon, write pid to specified file\n"
" -r <rules_file> Rules file/directory (defaults to value set in configuration file, or /etc/falco_rules.yaml).\n"
" Can be specified multiple times to read from multiple files/directories.\n"
-" -s <stats_file> If specified, write statistics related to falco's reading/processing of events\n"
-" to this file. (Only useful in live mode).\n"
-" --stats_interval <msec> When using -s <stats_file>, write statistics every <msec> ms.\n"
+" -s <stats_file> If specified, append statistics related to Falco's reading/processing of events\n"
+" to this file (only useful in live mode).\n"
+" --stats-interval <msec> When using -s <stats_file>, write statistics every <msec> ms.\n"
" This uses signals, so don't recommend intervals below 200 ms.\n"
" Defaults to 5000 (5 seconds).\n"
" -S <len>, --snaplen <len>\n"
@@ -158,6 +158,8 @@ static void usage()
" This causes every single line emitted by falco to be flushed,\n"
" which generates higher CPU usage but is useful when piping those outputs\n"
" into another process or into a script.\n"
+" -u, --userspace Parse events from userspace.\n"
+" To be used in conjunction with the ptrace(2) based driver (pdig).\n"
" -V, --validate <rules_file> Read the contents of the specified rules(s) file and exit.\n"
" Can be specified multiple times to validate multiple files.\n"
" -v Verbose output.\n"
@@ -443,6 +445,7 @@ int falco_init(int argc, char **argv)
set<string> disable_sources;
bool disable_syscall = false;
bool disable_k8s_audit = false;
+bool userspace = false;
// Used for writing trace files
int duration_seconds = 0;
@@ -479,9 +482,10 @@ int falco_init(int argc, char **argv)
{"print-base64", no_argument, 0, 'b'},
{"print", required_argument, 0, 'p'},
{"snaplen", required_argument, 0, 'S'},
-{"stats_interval", required_argument, 0},
+{"stats-interval", required_argument, 0},
{"support", no_argument, 0},
{"unbuffered", no_argument, 0, 'U'},
+{"userspace", no_argument, 0, 'u'},
{"validate", required_argument, 0, 'V'},
{"version", no_argument, 0, 0},
{"writefile", required_argument, 0, 'w'},
@@ -500,7 +504,7 @@ int falco_init(int argc, char **argv)
// Parse the args
//
while((op = getopt_long(argc, argv,
-"hc:AbdD:e:F:ik:K:Ll:m:M:No:P:p:r:S:s:T:t:UvV:w:",
+"hc:AbdD:e:F:ik:K:Ll:m:M:No:P:p:r:S:s:T:t:UuvV:w:",
long_options, &long_index)) != -1)
{
switch(op)
@@ -607,6 +611,9 @@ int falco_init(int argc, char **argv)
buffered_outputs = false;
buffered_cmdline = true;
break;
+case 'u':
+userspace = true;
+break;
case 'v':
verbose = true;
break;
@@ -646,7 +653,7 @@ int falco_init(int argc, char **argv)
list_flds_source = optarg;
}
}
-else if (string(long_options[long_index].name) == "stats_interval")
+else if (string(long_options[long_index].name) == "stats-interval")
{
stats_interval = atoi(optarg);
}
@@ -795,12 +802,16 @@ int falco_init(int argc, char **argv)
falco_logger::set_time_format_iso_8601(config.m_time_format_iso_8601);
+// log after config init because config determines where logs go
+falco_logger::log(LOG_INFO, "Falco version " + std::string(FALCO_VERSION) + " (driver version " + std::string(DRIVER_VERSION) + ")\n");
falco_logger::log(LOG_INFO, "Falco initialized with configuration file " + conf_filename + "\n");
}
else
{
config.init(cmdline_options);
falco_logger::set_time_format_iso_8601(config.m_time_format_iso_8601);
+// log after config init because config determines where logs go
+falco_logger::log(LOG_INFO, "Falco version " + std::string(FALCO_VERSION) + " (driver version " + std::string(DRIVER_VERSION) + ")\n");
falco_logger::log(LOG_INFO, "Falco initialized. No configuration file found, proceeding with defaults\n");
}
@@ -1091,7 +1102,17 @@ int falco_init(int argc, char **argv)
}
else
{
-open_t open_cb = [](sinsp* inspector) {
+open_t open_cb = [&userspace](sinsp* inspector)
+{
+if(userspace)
+{
+// open_udig() is the underlying method used in the capture code to parse userspace events from the kernel.
+//
+// Falco uses a ptrace(2) based userspace implementation.
+// Regardless of the implementation, the underlying method remains the same.
+inspector->open_udig();
+return;
+}
inspector->open();
};
open_t open_nodriver_cb = [](sinsp* inspector) {
@@ -1116,11 +1137,17 @@ int falco_init(int argc, char **argv)
}
catch(sinsp_exception &e)
{
+// If syscall input source is enabled and not through userspace instrumentation
+if (!disable_syscall && !userspace)
+{
+// Try to insert the Falco kernel module
if(system("modprobe " PROBE_NAME " > /dev/null 2> /dev/null"))
{
falco_logger::log(LOG_ERR, "Unable to load the driver. Exiting.\n");
}
open_f(inspector);
+}
+rethrow_exception(current_exception());
}
}
@@ -1139,7 +1166,7 @@ int falco_init(int argc, char **argv)
duration = ((double)clock()) / CLOCKS_PER_SEC;
//
-// run k8s, if required
+// Run k8s, if required
//
if(k8s_api)
{
@@ -1178,7 +1205,7 @@ int falco_init(int argc, char **argv)
}
//
-// run mesos, if required
+// Run mesos, if required
//
if(mesos_api)
{
@@ -1206,6 +1233,7 @@ int falco_init(int argc, char **argv)
// gRPC server
if(config.m_grpc_enabled)
{
+falco_logger::log(LOG_INFO, "gRPC server threadiness equals to " + to_string(config.m_grpc_threadiness) + "\n");
// TODO(fntlnz,leodido): when we want to spawn multiple threads we need to have a queue per thread, or implement
// different queuing mechanisms, round robin, fanout? What we want to achieve?
grpc_server.init(
@@ -1260,6 +1288,14 @@ int falco_init(int argc, char **argv)
}
+// Honor -M also when using a trace file.
+// Since inspection stops as soon as all events have been consumed
+// just await the given duration is reached, if needed.
+if(!trace_filename.empty() && duration_to_tot>0)
+{
+std::this_thread::sleep_for(std::chrono::seconds(duration_to_tot));
+}
inspector->close();
engine->print_stats();
sdropmgr.print_stats();

---

@@ -22,11 +22,10 @@ limitations under the License.
#include "formats.h"
#include "logger.h"
-#include "falco_output_queue.h"
+#include "falco_outputs_queue.h"
#include "banned.h" // This raises a compilation error when certain functions are used
using namespace std;
-using namespace falco::output;
const static struct luaL_reg ll_falco_outputs [] =
{
@@ -145,8 +144,6 @@ void falco_outputs::handle_event(gen_event *ev, string &rule, string &source,
return;
}
std::lock_guard<std::mutex> guard(m_ls_semaphore);
lua_getglobal(m_ls, m_lua_output_event.c_str());
@@ -318,7 +315,7 @@ int falco_outputs::handle_grpc(lua_State *ls)
lua_error(ls);
}
-response grpc_res = response();
+falco::outputs::response grpc_res;
// time
gen_event *evt = (gen_event *)lua_topointer(ls, 1);
@@ -368,7 +365,7 @@ int falco_outputs::handle_grpc(lua_State *ls)
auto host = grpc_res.mutable_hostname();
*host = (char *)lua_tostring(ls, 7);
-falco::output::queue::get().push(grpc_res);
+falco::outputs::queue::get().push(grpc_res);
return 1;
}

---

@@ -16,12 +16,12 @@ limitations under the License.
#pragma once
-#include "output.pb.h"
+#include "outputs.pb.h"
#include "tbb/concurrent_queue.h"
namespace falco
{
-namespace output
+namespace outputs
{
typedef tbb::concurrent_queue<response> response_cq;

---

@@ -36,7 +36,7 @@ class context
{
public:
context(::grpc::ServerContext* ctx);
-~context() = default;
+virtual ~context() = default;
void get_metadata(std::string key, std::string& val);
@@ -50,7 +50,7 @@ class stream_context : public context
public:
stream_context(::grpc::ServerContext* ctx):
context(ctx){};
-~stream_context() = default;
+virtual ~stream_context() = default;
enum : char
{
@@ -61,6 +61,15 @@ public:
mutable void* m_stream = nullptr; // todo(fntlnz, leodido) > useful in the future
mutable bool m_has_more = false;
+mutable bool m_is_running = true;
+};
+class bidi_context : public stream_context
+{
+public:
+bidi_context(::grpc::ServerContext* ctx):
+stream_context(ctx){};
+virtual ~bidi_context() = default;
};
} // namespace grpc

---

@@ -24,12 +24,12 @@ namespace grpc
{
template<>
-void request_stream_context<falco::output::service, falco::output::request, falco::output::response>::start(server* srv)
+void request_stream_context<outputs::service, outputs::request, outputs::response>::start(server* srv)
{
m_state = request_context_base::REQUEST;
m_srv_ctx.reset(new ::grpc::ServerContext);
auto srvctx = m_srv_ctx.get();
-m_res_writer.reset(new ::grpc::ServerAsyncWriter<output::response>(srvctx));
+m_res_writer.reset(new ::grpc::ServerAsyncWriter<outputs::response>(srvctx));
m_stream_ctx.reset();
m_req.Clear();
auto cq = srv->m_completion_queue.get();
@@ -38,7 +38,7 @@ void request_stream_context<falco::output::service, falco::output::request, falc
}
template<>
-void request_stream_context<falco::output::service, falco::output::request, falco::output::response>::process(server* srv)
+void request_stream_context<outputs::service, outputs::request, outputs::response>::process(server* srv)
{
// When it is the 1st process call
if(m_state == request_context_base::REQUEST)
@@ -48,40 +48,46 @@ void request_stream_context<falco::output::service, falco::output::request, falc
}
// Processing
-output::response res;
-(srv->*m_process_func)(*m_stream_ctx, m_req, res); // subscribe()
+outputs::response res;
+(srv->*m_process_func)(*m_stream_ctx, m_req, res); // get()
+if(!m_stream_ctx->m_is_running)
+{
+m_state = request_context_base::FINISH;
+m_res_writer->Finish(::grpc::Status::OK, this);
+return;
+}
// When there are still more responses to stream
if(m_stream_ctx->m_has_more)
{
// todo(leodido) > log "write: tag=this, state=m_state"
m_res_writer->Write(res, this);
+return;
}
// No more responses to stream
-else
-{
-// Communicate to the gRPC runtime that we have finished.
-// The memory address of "this" instance uniquely identifies the event.
-m_state = request_context_base::FINISH;
-// todo(leodido) > log "finish: tag=this, state=m_state"
-m_res_writer->Finish(::grpc::Status::OK, this);
-}
+// Communicate to the gRPC runtime that we have finished.
+// The memory address of "this" instance uniquely identifies the event.
+m_state = request_context_base::FINISH;
+// todo(leodido) > log "finish: tag=this, state=m_state"
+m_res_writer->Finish(::grpc::Status::OK, this);
}
template<>
-void request_stream_context<falco::output::service, falco::output::request, falco::output::response>::end(server* srv, bool errored)
+void request_stream_context<outputs::service, outputs::request, outputs::response>::end(server* srv, bool error)
{
if(m_stream_ctx)
{
-if(errored)
+if(error)
{
// todo(leodido) > log error "error streaming: tag=this, state=m_state, stream=m_stream_ctx->m_stream"
}
-m_stream_ctx->m_status = errored ? stream_context::ERROR : stream_context::SUCCESS;
+m_stream_ctx->m_status = error ? stream_context::ERROR : stream_context::SUCCESS;
// Complete the processing
-output::response res;
-(srv->*m_process_func)(*m_stream_ctx, m_req, res); // subscribe()
+outputs::response res;
+(srv->*m_process_func)(*m_stream_ctx, m_req, res); // get()
}
else
{
@@ -98,7 +104,7 @@ void request_stream_context<falco::output::request...
}
template<>
-void falco::grpc::request_context<falco::version::service, falco::version::request, falco::version::response>::start(server* srv)
+void request_context<version::service, version::request, version::response>::start(server* srv)
{
m_state = request_context_base::REQUEST;
m_srv_ctx.reset(new ::grpc::ServerContext);
@@ -113,7 +119,7 @@ void falco::grpc::request_context<falco::version::reque
}
template<>
-void falco::grpc::request_context<falco::version::service, falco::version::request, falco::version::response>::process(server* srv)
+void request_context<version::service, version::request, version::response>::process(server* srv)
{
version::response res;
(srv->*m_process_func)(m_srv_ctx.get(), m_req, res);
@@ -125,13 +131,85 @@ void falco::grpc::request_context<falco::version::reque
}
template<>
-void falco::grpc::request_context<falco::version::service, falco::version::request, falco::version::response>::end(server* srv, bool errored)
+void request_context<version::service, version::request, version::response>::end(server* srv, bool error)
{
// todo(leodido) > handle processing errors here
// Ask to start processing requests
start(srv);
}
+template<>
+void request_bidi_context<outputs::service, outputs::request, outputs::response>::start(server* srv)
+{
+m_state = request_context_base::REQUEST;
+m_srv_ctx.reset(new ::grpc::ServerContext);
+auto srvctx = m_srv_ctx.get();
+m_reader_writer.reset(new ::grpc::ServerAsyncReaderWriter<outputs::response, outputs::request>(srvctx));
+m_req.Clear();
+auto cq = srv->m_completion_queue.get();
+// Request to start processing given requests.
+// Using "this" - ie., the memory address of this context - as the tag that uniquely identifies the request.
+// In this way, different contexts can serve different requests concurrently.
+(srv->m_output_svc.*m_request_func)(srvctx, m_reader_writer.get(), cq, cq, this);
+};
+template<>
+void request_bidi_context<outputs::service, outputs::request, outputs::response>::process(server* srv)
+{
+switch(m_state)
+{
+case request_context_base::REQUEST:
+m_bidi_ctx.reset(new bidi_context(m_srv_ctx.get()));
+m_bidi_ctx->m_status = bidi_context::STREAMING;
+m_state = request_context_base::WRITE;
+m_reader_writer->Read(&m_req, this);
+return;
+case request_context_base::WRITE:
+// Processing
+{
+outputs::response res;
+(srv->*m_process_func)(*m_bidi_ctx, m_req, res); // sub()
+if(!m_bidi_ctx->m_is_running)
+{
+m_state = request_context_base::FINISH;
+m_reader_writer->Finish(::grpc::Status::OK, this);
+return;
+}
+if(m_bidi_ctx->m_has_more)
+{
+m_state = request_context_base::WRITE;
+m_reader_writer->Write(res, this);
+return;
+}
+m_state = request_context_base::WRITE;
+m_reader_writer->Read(&m_req, this);
+}
+return;
+default:
+return;
+}
+};
+template<>
+void request_bidi_context<outputs::service, outputs::request, outputs::response>::end(server* srv, bool error)
+{
+if(m_bidi_ctx)
+{
+m_bidi_ctx->m_status = error ? bidi_context::ERROR : bidi_context::SUCCESS;
+// Complete the processing
+outputs::response res;
+(srv->*m_process_func)(*m_bidi_ctx, m_req, res); // sub()
+}
+// Ask to start processing requests
+start(srv);
+};
} // namespace grpc
} // namespace falco

---

@@ -29,7 +29,8 @@ class request_context_base
{
public:
request_context_base() = default;
-~request_context_base() = default;
+// virtual to guarantee that the derived classes are destructed properly
+virtual ~request_context_base() = default;
std::unique_ptr<::grpc::ServerContext> m_srv_ctx;
enum : char
@@ -39,6 +40,7 @@ public:
WRITE,
FINISH
} m_state = UNKNOWN;
virtual void start(server* srv) = 0;
virtual void process(server* srv) = 0;
virtual void end(server* srv, bool isError) = 0;
@@ -63,7 +65,7 @@ public:
void start(server* srv);
void process(server* srv);
-void end(server* srv, bool isError);
+void end(server* srv, bool error);
private:
std::unique_ptr<::grpc::ServerAsyncWriter<Response>> m_res_writer;
@@ -90,11 +92,37 @@ public:
void start(server* srv);
void process(server* srv);
-void end(server* srv, bool isError);
+void end(server* srv, bool error);
private:
std::unique_ptr<::grpc::ServerAsyncResponseWriter<Response>> m_res_writer;
Request m_req;
};
+template<class Service, class Request, class Response>
+class request_bidi_context : public request_context_base
+{
+public:
+request_bidi_context():
+m_process_func(nullptr),
+m_request_func(nullptr){};
+~request_bidi_context() = default;
+// Pointer to function that does actual processing
+void (server::*m_process_func)(const bidi_context&, const Request&, Response&);
+// Pointer to function that requests the system to start processing given requests
+void (Service::AsyncService::*m_request_func)(::grpc::ServerContext*, ::grpc::ServerAsyncReaderWriter<Response, Request>*, ::grpc::CompletionQueue*, ::grpc::ServerCompletionQueue*, void*);
+void start(server* srv);
+void process(server* srv);
+void end(server* srv, bool error);
+private:
+std::unique_ptr<::grpc::ServerAsyncReaderWriter<Response, Request>> m_reader_writer;
+std::unique_ptr<bidi_context> m_bidi_ctx;
+Request m_req;
+};
} // namespace grpc
} // namespace falco

---

@@ -44,6 +44,15 @@ limitations under the License.
c.start(this); \
}
+#define REGISTER_BIDI(req, res, svc, rpc, impl, num) \
+std::vector<request_bidi_context<svc, req, res>> rpc##_contexts(num); \
+for(request_bidi_context<svc, req, res> & c : rpc##_contexts) \
+{ \
+c.m_process_func = &server::impl; \
+c.m_request_func = &svc::AsyncService::Request##rpc; \
+c.start(this); \
+}
static void gpr_log_dispatcher_func(gpr_log_func_args* args)
{
int priority;
@@ -60,7 +69,10 @@ static void gpr_log_dispatcher_func(gpr_log_func_args* args)
break;
}
-falco_logger::log(priority, args->message);
+string copy = "grpc: ";
+copy.append(args->message);
+copy.push_back('\n');
+falco_logger::log(priority, copy);
}
void falco::grpc::server::thread_process(int thread_index)
@@ -199,7 +211,8 @@ void falco::grpc::server::run()
// todo(leodido) > take a look at thread_stress_test.cc into grpc repository
REGISTER_UNARY(version::request, version::response, version::service, version, version, context_num)
-REGISTER_STREAM(output::request, output::response, output::service, subscribe, subscribe, context_num)
+REGISTER_STREAM(outputs::request, outputs::response, outputs::service, get, get, context_num)
+REGISTER_BIDI(outputs::request, outputs::response, outputs::service, sub, sub, context_num)
m_threads.resize(m_threadiness);
int thread_idx = 0;
@@ -211,7 +224,7 @@ void falco::grpc::server::run()
while(server_impl::is_running())
{
-sleep(1);
+std::this_thread::sleep_for(std::chrono::milliseconds(100));
}
// todo(leodido) > log "stopping gRPC server"
stop();

---

@@ -44,7 +44,7 @@ public:
void run();
void stop();
-output::service::AsyncService m_output_svc;
+outputs::service::AsyncService m_output_svc;
version::service::AsyncService m_version_svc;
std::unique_ptr<::grpc::ServerCompletionQueue> m_completion_queue;

---

@@ -16,7 +16,8 @@ limitations under the License.
#include "config_falco.h"
#include "grpc_server_impl.h"
-#include "falco_output_queue.h"
+#include "falco_outputs_queue.h"
+#include "logger.h"
#include "banned.h" // This raises a compilation error when certain functions are used
bool falco::grpc::server_impl::is_running()
@@ -28,29 +29,39 @@ bool falco::grpc::server_impl::is_running()
return true;
}
-void falco::grpc::server_impl::subscribe(const stream_context& ctx, const output::request& req, output::response& res)
+void falco::grpc::server_impl::get(const stream_context& ctx, const outputs::request& req, outputs::response& res)
{
if(ctx.m_status == stream_context::SUCCESS || ctx.m_status == stream_context::ERROR)
{
// todo(leodido) > log "status=ctx->m_status, stream=ctx->m_stream"
ctx.m_stream = nullptr;
+return;
}
-else
-{
-// Start or continue streaming
-// todo(leodido) > check for m_status == stream_context::STREAMING?
-// todo(leodido) > set m_stream
-if(output::queue::get().try_pop(res) && !req.keepalive())
-{
-ctx.m_has_more = true;
-return;
-}
-while(is_running() && !output::queue::get().try_pop(res) && req.keepalive())
-{
-}
-ctx.m_has_more = !is_running() ? false : req.keepalive();
-}
+ctx.m_is_running = is_running();
+// Start or continue streaming
+// m_status == stream_context::STREAMING?
+// todo(leodido) > set m_stream
+ctx.m_has_more = outputs::queue::get().try_pop(res);
+}
+void falco::grpc::server_impl::sub(const bidi_context& ctx, const outputs::request& req, outputs::response& res)
+{
+if(ctx.m_status == stream_context::SUCCESS || ctx.m_status == stream_context::ERROR)
+{
+ctx.m_stream = nullptr;
+return;
+}
+ctx.m_is_running = is_running();
+// Start or continue streaming
+// m_status == stream_context::STREAMING?
+// todo(leodido) > set m_stream
+ctx.m_has_more = outputs::queue::get().try_pop(res);
}
void falco::grpc::server_impl::version(const context& ctx, const version::request&, version::response& res)

---

@@ -17,7 +17,7 @@ limitations under the License.
#pragma once
#include <atomic>
-#include "output.grpc.pb.h"
+#include "outputs.grpc.pb.h"
#include "version.grpc.pb.h"
#include "grpc_context.h"
@@ -36,8 +36,11 @@ public:
protected:
bool is_running();
-void subscribe(const stream_context& ctx, const output::request& req, output::response& res);
+// Outputs
+void get(const stream_context& ctx, const outputs::request& req, outputs::response& res);
+void sub(const bidi_context& ctx, const outputs::request& req, outputs::response& res);
+// Version
void version(const context& ctx, const version::request& req, version::response& res);
private:

---

@@ -134,7 +134,7 @@ void falco_logger::log(int priority, const string msg)
if(gtm != NULL &&
(strftime(buf, sizeof(buf), "%FT%T%z", gtm) != 0))
{
-fprintf(stderr, "%s: %s", buf, msg.c_str());
+fprintf(stderr, "%s: %s", buf, copy.c_str());
}
}
else
@@ -151,7 +151,7 @@ void falco_logger::log(int priority, const string msg)
{
tstr = "N/A";
}
-fprintf(stderr, "%s: %s", tstr.c_str(), msg.c_str());
+fprintf(stderr, "%s: %s", tstr.c_str(), copy.c_str());
}
}
}

---

@@ -18,7 +18,7 @@ local mod = {}
local outputs = {}
function mod.stdout(event, rule, source, priority, priority_num, msg, format, hostname, options)
-mod.stdout_message(priority, priority_num, msg, outputs)
+mod.stdout_message(priority, priority_num, msg, options)
end
function mod.stdout_message(priority, priority_num, msg, options)

---

@@ -1,40 +0,0 @@
-syntax = "proto3";
-import "google/protobuf/timestamp.proto";
-import "schema.proto";
-package falco.output;
-option go_package = "github.com/falcosecurity/client-go/pkg/api/output";
-// The `subscribe` service defines the RPC call
-// to perform an output `request` which will lead to obtain an output `response`.
-service service {
-rpc subscribe(request) returns (stream response);
-}
-// The `request` message is the logical representation of the request model.
-// It is the input of the `subscribe` service.
-// It is used to configure the kind of subscription to the gRPC streaming server.
-//
-// By default the request asks to the server to only receive the accumulated events.
-// In case you want to wait indefinitely for new events to come set the keepalive option to true.
-message request {
-bool keepalive = 1;
-// string duration = 2; // TODO(leodido, fntlnz): not handled yet but keeping for reference.
-// repeated string tags = 3; // TODO(leodido, fntlnz): not handled yet but keeping for reference.
-}
-// The `response` message is the logical representation of the output model.
-// It contains all the elements that Falco emits in an output along with the
-// definitions for priorities and source.
-message response {
-google.protobuf.Timestamp time = 1;
-falco.schema.priority priority = 2;
-falco.schema.source source = 3;
-string rule = 4;
-string output = 5;
-map<string, string> output_fields = 6;
-string hostname = 7;
-// repeated string tags = 8; // TODO(leodido,fntlnz): tags not supported yet, keeping for reference
-}

---

@@ -0,0 +1,55 @@
+/*
+Copyright (C) 2020 The Falco Authors.
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+syntax = "proto3";
+import "google/protobuf/timestamp.proto";
+import "schema.proto";
+package falco.outputs;
+option go_package = "github.com/falcosecurity/client-go/pkg/api/outputs";
+// This service defines the RPC methods
+// to `request` a stream of output `response`s.
+service service {
+// Subscribe to a stream of Falco outputs by sending a stream of requests.
+rpc sub(stream request) returns (stream response);
+// Get all the Falco outputs present in the system up to this call.
+rpc get(request) returns (stream response);
+}
+// The `request` message is the logical representation of the request model.
+// It is the input of the `output.service` service.
+message request {
+// TODO(leodido,fntlnz): tags not supported yet, keeping it for reference.
+// repeated string tags = 1;
+}
+// The `response` message is the representation of the output model.
+// It contains all the elements that Falco emits in an output along with the
+// definitions for priorities and source.
+message response {
+google.protobuf.Timestamp time = 1;
+falco.schema.priority priority = 2;
+falco.schema.source source = 3;
+string rule = 4;
+string output = 5;
+map<string, string> output_fields = 6;
+string hostname = 7;
+// TODO(leodido,fntlnz): tags not supported yet, keeping it for reference.
+// repeated string tags = 8;
+}

---

@@ -1,3 +1,19 @@
+/*
+Copyright (C) 2020 The Falco Authors.
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
syntax = "proto3";
package falco.schema;

---

@@ -58,8 +58,8 @@ bool StatsFileWriter::init(sinsp *inspector, string &filename, uint32_t interval
return false;
}
-timer.it_value.tv_sec = 0;
-timer.it_value.tv_usec = interval_msec * 1000;
+timer.it_value.tv_sec = interval_msec / 1000;
+timer.it_value.tv_usec = (interval_msec % 1000) * 1000;
timer.it_interval = timer.it_value;
if (setitimer(ITIMER_REAL, &timer, NULL) == -1)
{

---

@@ -1,3 +1,19 @@
+/*
+Copyright (C) 2020 The Falco Authors.
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
syntax = "proto3";
package falco.version;