Compare commits

51 Commits

Author SHA1 Message Date
Lorenzo Fontana
f1d676f949 new(userspace/falco): constants and header file for utils module
Signed-off-by: Lorenzo Fontana <lo@linux.com>
Co-Authored-By: Leonardo Di Donato <leodidonato@gmail.com>
Signed-off-by: Lorenzo Fontana <lo@linux.com>
2019-08-30 11:51:15 +02:00
Leonardo Di Donato
73f70cd0ef fix(userspace): close module files before leaving scope
Co-authored-by: Lorenzo Fontana <lo@linux.com>
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2019-08-30 08:45:32 +00:00
Leonardo Di Donato
b1edc405c2 update: check module only when syscall source is enabled
Co-authored-by: Lorenzo Fontana <lo@linux.com>
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2019-08-30 08:39:52 +00:00
Leonardo Di Donato
efe39b4360 update(userspace): polyfill helper types (_t) for c++11
Co-authored-by: Lorenzo Fontana <lo@linux.com>
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2019-08-30 08:37:47 +00:00
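C++14 introduced the `*_t` alias templates (`std::enable_if_t`, `std::remove_reference_t`, ...); under `-std=c++11` they have to be polyfilled. The sketch below is one plausible shape for such a polyfill, not the actual falco header — the names mirror the standard aliases, but which ones the commit defines is an assumption.

```cpp
#include <type_traits>

namespace polyfill {
// C++11 spellings of the C++14 helper aliases.
template<bool B, class T = void>
using enable_if_t = typename std::enable_if<B, T>::type;

template<class T>
using remove_reference_t = typename std::remove_reference<T>::type;

template<bool B, class T, class F>
using conditional_t = typename std::conditional<B, T, F>::type;
}

// Usage: an overload selected only for integral types via SFINAE.
template<class T, polyfill::enable_if_t<std::is_integral<T>::value, int> = 0>
bool is_integral_value(T) { return true; }
```

The aliases are pure sugar over the C++11 `typename ...::type` spelling, so dropping to the C++11 standard (see the `--std=c++11` build change below in this compare) costs nothing at runtime.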
Leonardo Di Donato
a04ac1def3 build: using c++11 standard
Co-authored-by: Lorenzo Fontana <lo@linux.com>

Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2019-08-30 08:34:38 +00:00
Leonardo Di Donato
f710edcde2 wip(userspace): checking module using event timestamps rather than an external timer
This approach does not sound good to me, since events can lack
timestamps.

Furthermore, it is logically wrong to check that the module is sending
events by using the events ...

Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2019-08-30 08:32:43 +00:00
Leonardo Di Donato
7a3d5c62a0 docs: configuration opts for kernel module check
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2019-08-29 10:43:37 +00:00
Leonardo Di Donato
435a3b01db fix: improvements to the gitignore for integration tests
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2019-08-29 10:43:37 +00:00
Leonardo Di Donato
acd3e7f23a fix: check module in main loop
This way it will be able to detect events (and signals etc).
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2019-08-29 10:43:37 +00:00
Leonardo Di Donato
deaae756c0 new: helper to insert module
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2019-08-29 10:43:37 +00:00
Leonardo Di Donato
5a6c7af0c5 new: make backoff maximum wait per run configurable
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2019-08-29 10:43:37 +00:00
Leonardo Di Donato
05565f3524 update: minimum frequency for module check
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2019-08-29 10:43:37 +00:00
Leonardo Di Donato
980fb2f3a9 new: read module check configs
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2019-08-29 10:43:37 +00:00
Leonardo Di Donato
ba5e59964d new: method to grab nested (3 levels) configs
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2019-08-29 10:43:37 +00:00
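The "nested (3 levels) configs" method above presumably resolves `key.subkey.subsubkey` lookups against the parsed YAML tree with a fallback default. A stdlib-only sketch of that lookup contract follows — the function name and the flattened-map stand-in for the YAML tree are hypothetical; the real helper walks yaml-cpp nodes.

```cpp
#include <map>
#include <string>

// Illustrative stand-in for the parsed config: keys flattened to dotted
// paths such as "module_check.backoff.max_attempts".
using config_map = std::map<std::string, std::string>;

// Fetch a value three levels deep, returning a default when any level is
// missing -- the contract a get_scalar_value(key, subkey, subsubkey,
// default) style helper would offer.
std::string get_nested(const config_map& cfg,
                       const std::string& key,
                       const std::string& subkey,
                       const std::string& subsubkey,
                       const std::string& fallback)
{
    auto it = cfg.find(key + "." + subkey + "." + subsubkey);
    return it != cfg.end() ? it->second : fallback;
}
```

The three-level depth matches the `module_check.backoff.*` options added to falco.yaml later in this compare.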
Leonardo Di Donato
60721d52cb new: default falco config for module checking
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2019-08-29 10:43:36 +00:00
Leonardo Di Donato
8d9f88d45a new: lively check module every x seconds
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2019-08-29 10:43:36 +00:00
Leonardo Di Donato
4c04821d48 chore: bash improvements to engine fields verifier
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2019-08-29 10:43:36 +00:00
Leonardo Di Donato
fc2c1ac6cb new: generic exponential backoff helper
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2019-08-29 10:43:36 +00:00
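A generic exponential backoff helper of the kind this commit introduces typically doubles a delay up to a cap, for a bounded number of attempts. The defaults added to falco.yaml in this series (`init_delay: 100`, `max_delay: 3000`, `max_attempts: 5`) suggest the shape below; the class name and interface are an assumption, not the actual falco code.

```cpp
#include <algorithm>
#include <cstdint>

// Minimal exponential backoff: the delay doubles after each attempt,
// capped at max_delay, for at most max_attempts tries.
class exponential_backoff {
public:
    exponential_backoff(uint64_t init_delay_ms, uint64_t max_delay_ms,
                        uint64_t max_attempts)
        : m_delay(init_delay_ms), m_max_delay(max_delay_ms),
          m_max_attempts(max_attempts), m_attempts(0) {}

    // Returns false once the attempt budget is exhausted; otherwise
    // writes the delay to wait before the next retry and advances.
    bool next(uint64_t& delay_ms)
    {
        if (m_attempts >= m_max_attempts) return false;
        delay_ms = m_delay;
        m_delay = std::min(m_delay * 2, m_max_delay);
        ++m_attempts;
        return true;
    }

private:
    uint64_t m_delay, m_max_delay, m_max_attempts, m_attempts;
};
```

With the defaults above, it yields waits of 100, 200, 400, 800 and 1600 ms before giving up — which is what "make backoff maximum wait per run configurable" bounds.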
Leonardo Di Donato
295c7afc32 new: helper to check module is inserted and loaded
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2019-08-29 10:43:36 +00:00
Leonardo Di Donato
f10b170174 new: timer
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2019-08-29 10:43:36 +00:00
Leonardo Di Donato
9f9d0e751b fix: remove polyfill for make_unique
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2019-08-29 10:43:36 +00:00
Leonardo Di Donato
322a2cdd25 build: get SYSDIG_DIR realpath
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2019-08-29 10:43:36 +00:00
Leonardo Di Donato
5c5c2e3309 build: compile using the 2014 ISO C++ standard
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2019-08-29 10:43:36 +00:00
Leonardo Di Donato
71832bc3ad new: explicitly check module is present at startup
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2019-08-29 10:43:35 +00:00
Leonardo Di Donato
93a3d14c41 fix(userspace): re-throw exceptions coming from sinsp
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2019-08-29 10:40:54 +00:00
Leonardo Di Donato
c7e7a868ed build: set SYSDIG_DIR to its real path
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2019-08-29 10:29:41 +00:00
Leonardo Di Donato
193f33cd40 fix: office hours are bi-weekly
Co-authored-by: Lorenzo Fontana <lo@linux.com>
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2019-08-21 17:28:30 +02:00
Leonardo Di Donato
14853597d3 docs: office hours zoom link
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
Co-authored-by: Lorenzo Fontana <lo@linux.com>
2019-08-21 17:08:03 +02:00
Leonardo Di Donato
49c4ef5d8c feat(userspace): open the event source/s depending on the flags
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
Co-authored-by: Lorenzo Fontana <lo@linux.com>
2019-08-21 17:08:03 +02:00
Leonardo Di Donato
1eeb059e10 feat(userspace): can not disable both the event sources
Co-authored-by: Lorenzo Fontana <lo@linux.com>
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2019-08-21 17:08:03 +02:00
Leonardo Di Donato
870c17e31d feat: flag to disable sources (syscall, k8s_audit)
Co-authored-by: Lorenzo Fontana <lo@linux.com>
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2019-08-21 17:08:03 +02:00
Kris Nova
c713b89542 Adding OSS changes to README
Signed-off-by: Kris Nova <kris@nivenly.com>
2019-08-21 15:38:59 +02:00
Lorenzo Fontana
7d8e1dee9b fix(docker/local): fix build dependencies
Signed-off-by: Lorenzo Fontana <lo@linux.com>
Co-Authored-By: Leonardo Di Donato <leodidonato@gmail.com>
2019-08-21 14:45:37 +02:00
Lorenzo Fontana
39b51562ed fix(rules): modification of a file should trigger as if it was opened or created
Signed-off-by: Lorenzo Fontana <lo@linux.com>
2019-08-20 09:45:08 +02:00
Lorenzo Fontana
f05d18a847 new: download all dependencies over https
Signed-off-by: Lorenzo Fontana <lo@linux.com>
2019-08-17 17:36:43 +02:00
Guangming Wang
731e197108 cleanup: fix misspelled words in readme.md
Signed-off-by: Guangming Wang <guangming.wang@daocloud.io>
2019-08-16 18:13:42 +02:00
Lorenzo Fontana
e229cecbe1 fix(rules): make chmod rules enabled by default
Signed-off-by: Lorenzo Fontana <lo@linux.com>
2019-08-16 10:23:28 +02:00
Lorenzo Fontana
3ea98b05dd fix(rules/Set Setuid or Setgid bit): use chmod syscalls instead of chmod command
Signed-off-by: Lorenzo Fontana <lo@linux.com>
2019-08-16 10:23:28 +02:00
Lorenzo Fontana
7bc3fa165f new: add @kris-nova to owners
Signed-off-by: Lorenzo Fontana <lo@linux.com>
2019-08-13 22:42:43 +02:00
Leonardo Di Donato
3a1ab88111 new: webserver unit test skeleton
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2019-08-13 15:48:06 +02:00
Leonardo Di Donato
2439e97da6 update(tests): setup unit tests for userspace/falco too
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2019-08-13 15:48:06 +02:00
Leonardo Di Donato
8c62ec5472 fix(userspace): webserver must not fail with input that exceeds the expected ranges
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2019-08-13 15:48:06 +02:00
Leonardo Di Donato
c9cd6eebf7 update(userspace): falco webserver must catch json type errors (exceptions)
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2019-08-13 15:48:06 +02:00
Leonardo Di Donato
723bc1cabf fix(userspace): accessing a (json) object can throw exceptions because of wrong types
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2019-08-13 15:48:06 +02:00
Leonardo Di Donato
330d7ef2d7 fix: ignore build files generated by the regression tests
Signed-off-by: Leonardo Di Donato <leodidonato@gmail.com>
2019-08-13 15:48:06 +02:00
kaizhe
1fc509d78b rule update: fine grained sending to mining domain
Signed-off-by: kaizhe <derek0405@gmail.com>
2019-08-12 17:37:01 +02:00
kaizhe
a7ee01103d rule update: add rules for crypto mining
Signed-off-by: kaizhe <derek0405@gmail.com>
2019-08-12 17:37:01 +02:00
Lorenzo Fontana
03fbf432f1 fix: make sure that when deleting shell history the system call is taken into account
Signed-off-by: Lorenzo Fontana <lo@linux.com>
2019-08-07 15:38:22 +02:00
Mark Stemm
94d89eaea2 New tests for handling multi-doc files
New automated tests for testing parsing of multiple-doc rules files:

 - invalid_{overwrite,append}_{macro,rule}_multiple_docs are just like
   the previous versions, but with the multiple files combined into a
   single multi-document file.

 - multiple_docs combines the rules file from multiple_rules

They expect the same results and output as the multiple-file versions.

Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
2019-08-02 11:01:59 -07:00
Mark Stemm
76f64f5d79 Properly parse multi-document yaml files
Properly parse multi-document yaml files, i.e. blocks separated by
---. lyaml handles this itself: just pass the option all = true to
yaml.load, and each document will be provided as a table.

This does break the table iteration a bit, so some more refactoring:

 - Create a load_state table that holds context like the current
   document index, the required_engine_version, etc.
 - Pull out the parts that parse a single document into load_rules_doc(),
   which is given the table for a single document + load_state.
 - Simplify get_orig_yaml_obj to just take a single row index and
   return all rows from that point to the next blank line or line
   starting with '-'.

Signed-off-by: Mark Stemm <mark.stemm@gmail.com>
2019-08-02 11:01:59 -07:00
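For the multi-document parsing described above, lyaml does the splitting natively once `all = true` is passed to `yaml.load`. Purely as an illustration of the `---` framing (and not the falco code path, which stays in Lua), a minimal document splitter can be sketched as:

```cpp
#include <sstream>
#include <string>
#include <vector>

// Split a multi-document YAML string on "---" separator lines, yielding
// one string per document. Real parsers (lyaml with all = true, or
// yaml-cpp's LoadAll) do this natively; this only shows the framing.
std::vector<std::string> split_yaml_docs(const std::string& input)
{
    std::vector<std::string> docs(1);
    std::istringstream ss(input);
    std::string line;
    while (std::getline(ss, line)) {
        if (line == "---") {
            docs.emplace_back();        // start a new document
        } else {
            docs.back() += line + "\n"; // accumulate into current document
        }
    }
    return docs;
}
```

Each resulting document is then handed to the per-document parser, which is exactly the load_rules_doc() refactor the commit describes.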
kaizhe
3dbd43749a rule update: add exception for write below rpm (#745)
Signed-off-by: kaizhe <derek0405@gmail.com>
2019-08-01 20:07:24 +02:00
35 changed files with 1152 additions and 243 deletions

.gitignore (vendored)

@@ -3,12 +3,15 @@
 *.pyc
 test/falco_tests.yaml
+test/falco_traces.yaml
 test/traces-negative
 test/traces-positive
 test/traces-info
 test/job-results
+test/build
 test/.phoronix-test-suite
 test/results*.json.*
-test/build
 userspace/falco/lua/re.lua
 userspace/falco/lua/lpeg.so

CMakeLists.txt

@@ -20,7 +20,7 @@ cmake_minimum_required(VERSION 3.3.2)
 project(falco)
 if(NOT SYSDIG_DIR)
 get_filename_component(SYSDIG_DIR "${PROJECT_SOURCE_DIR}/../sysdig" REALPATH)
 endif()
 # Custom CMake modules

@@ -53,7 +53,7 @@ if(BUILD_WARNINGS_AS_ERRORS)
 endif()
 set(CMAKE_C_FLAGS "${CMAKE_COMMON_FLAGS}")
-set(CMAKE_CXX_FLAGS "--std=c++0x ${CMAKE_COMMON_FLAGS}")
+set(CMAKE_CXX_FLAGS "--std=c++11 ${CMAKE_COMMON_FLAGS}")
 set(CMAKE_C_FLAGS_DEBUG "${DRAIOS_DEBUG_FLAGS}")
 set(CMAKE_CXX_FLAGS_DEBUG "${DRAIOS_DEBUG_FLAGS}")

@@ -129,7 +129,7 @@ else()
 set(ZLIB_LIB "${ZLIB_SRC}/libz.a")
 ExternalProject_Add(zlib
 # START CHANGE for CVE-2016-9840, CVE-2016-9841, CVE-2016-9842, CVE-2016-9843
-URL "http://s3.amazonaws.com/download.draios.com/dependencies/zlib-1.2.11.tar.gz"
+URL "https://s3.amazonaws.com/download.draios.com/dependencies/zlib-1.2.11.tar.gz"
 URL_MD5 "1c9f62f0778697a09d36121ead88e08e"
 # END CHANGE for CVE-2016-9840, CVE-2016-9841, CVE-2016-9842, CVE-2016-9843
 CONFIGURE_COMMAND "./configure"

@@ -156,7 +156,7 @@ else()
 set(JQ_INCLUDE "${JQ_SRC}")
 set(JQ_LIB "${JQ_SRC}/.libs/libjq.a")
 ExternalProject_Add(jq
-URL "http://s3.amazonaws.com/download.draios.com/dependencies/jq-1.5.tar.gz"
+URL "https://s3.amazonaws.com/download.draios.com/dependencies/jq-1.5.tar.gz"
 URL_MD5 "0933532b086bd8b6a41c1b162b1731f9"
 CONFIGURE_COMMAND ./configure --disable-maintainer-mode --enable-all-static --disable-dependency-tracking
 BUILD_COMMAND ${CMD_MAKE} LDFLAGS=-all-static

@@ -188,7 +188,7 @@ else()
 message(STATUS "Using bundled nlohmann-json in '${NJSON_SRC}'")
 set(NJSON_INCLUDE "${NJSON_SRC}/single_include")
 ExternalProject_Add(njson
-URL "http://download.draios.com/dependencies/njson-3.3.0.tar.gz"
+URL "https://s3.amazonaws.com/download.draios.com/dependencies/njson-3.3.0.tar.gz"
 URL_MD5 "e26760e848656a5da400662e6c5d999a"
 CONFIGURE_COMMAND ""
 BUILD_COMMAND ""

@@ -212,7 +212,7 @@ else()
 set(CURSES_LIBRARIES "${CURSES_BUNDLE_DIR}/lib/libncurses.a")
 message(STATUS "Using bundled ncurses in '${CURSES_BUNDLE_DIR}'")
 ExternalProject_Add(ncurses
-URL "http://s3.amazonaws.com/download.draios.com/dependencies/ncurses-6.0-20150725.tgz"
+URL "https://s3.amazonaws.com/download.draios.com/dependencies/ncurses-6.0-20150725.tgz"
 URL_MD5 "32b8913312e738d707ae68da439ca1f4"
 CONFIGURE_COMMAND ./configure --without-cxx --without-cxx-binding --without-ada --without-manpages --without-progs --without-tests --with-terminfo-dirs=/etc/terminfo:/lib/terminfo:/usr/share/terminfo
 BUILD_COMMAND ${CMD_MAKE}

@@ -239,7 +239,7 @@ else()
 set(B64_INCLUDE "${B64_SRC}/include")
 set(B64_LIB "${B64_SRC}/src/libb64.a")
 ExternalProject_Add(b64
-URL "http://s3.amazonaws.com/download.draios.com/dependencies/libb64-1.2.src.zip"
+URL "https://s3.amazonaws.com/download.draios.com/dependencies/libb64-1.2.src.zip"
 URL_MD5 "a609809408327117e2c643bed91b76c5"
 CONFIGURE_COMMAND ""
 BUILD_COMMAND ${CMD_MAKE}

@@ -292,7 +292,7 @@ else()
 ExternalProject_Add(openssl
 # START CHANGE for CVE-2017-3735, CVE-2017-3731, CVE-2017-3737, CVE-2017-3738, CVE-2017-3736
-URL "http://s3.amazonaws.com/download.draios.com/dependencies/openssl-1.0.2n.tar.gz"
+URL "https://s3.amazonaws.com/download.draios.com/dependencies/openssl-1.0.2n.tar.gz"
 URL_MD5 "13bdc1b1d1ff39b6fd42a255e74676a4"
 # END CHANGE for CVE-2017-3735, CVE-2017-3731, CVE-2017-3737, CVE-2017-3738, CVE-2017-3736
 CONFIGURE_COMMAND ./config shared --prefix=${OPENSSL_INSTALL_DIR}

@@ -325,7 +325,7 @@ else()
 ExternalProject_Add(curl
 DEPENDS openssl
 # START CHANGE for CVE-2017-8816, CVE-2017-8817, CVE-2017-8818, CVE-2018-1000007
-URL "http://s3.amazonaws.com/download.draios.com/dependencies/curl-7.61.0.tar.bz2"
+URL "https://s3.amazonaws.com/download.draios.com/dependencies/curl-7.61.0.tar.bz2"
 URL_MD5 "31d0a9f48dc796a7db351898a1e5058a"
 # END CHANGE for CVE-2017-8816, CVE-2017-8817, CVE-2017-8818, CVE-2018-1000007
 CONFIGURE_COMMAND ./configure ${CURL_SSL_OPTION} --disable-shared --enable-optimize --disable-curldebug --disable-rt --enable-http --disable-ftp --disable-file --disable-ldap --disable-ldaps --disable-rtsp --disable-telnet --disable-tftp --disable-pop3 --disable-imap --disable-smb --disable-smtp --disable-gopher --disable-sspi --disable-ntlm-wb --disable-tls-srp --without-winssl --without-darwinssl --without-polarssl --without-cyassl --without-nss --without-axtls --without-ca-path --without-ca-bundle --without-libmetalink --without-librtmp --without-winidn --without-libidn2 --without-libpsl --without-nghttp2 --without-libssh2 --disable-threaded-resolver --without-brotli

@@ -360,7 +360,7 @@ else()
 set(LUAJIT_INCLUDE "${LUAJIT_SRC}")
 set(LUAJIT_LIB "${LUAJIT_SRC}/libluajit.a")
 ExternalProject_Add(luajit
-URL "http://s3.amazonaws.com/download.draios.com/dependencies/LuaJIT-2.0.3.tar.gz"
+URL "https://s3.amazonaws.com/download.draios.com/dependencies/LuaJIT-2.0.3.tar.gz"
 URL_MD5 "f14e9104be513913810cd59c8c658dc0"
 CONFIGURE_COMMAND ""
 BUILD_COMMAND ${CMD_MAKE}

@@ -390,7 +390,7 @@ else()
 endif()
 ExternalProject_Add(lpeg
 DEPENDS ${LPEG_DEPENDENCIES}
-URL "http://s3.amazonaws.com/download.draios.com/dependencies/lpeg-1.0.0.tar.gz"
+URL "https://s3.amazonaws.com/download.draios.com/dependencies/lpeg-1.0.0.tar.gz"
 URL_MD5 "0aec64ccd13996202ad0c099e2877ece"
 BUILD_COMMAND LUA_INCLUDE=${LUAJIT_INCLUDE} "${PROJECT_SOURCE_DIR}/scripts/build-lpeg.sh" "${LPEG_SRC}/build"
 BUILD_IN_SOURCE 1

@@ -425,7 +425,7 @@ else()
 set(LIBYAML_LIB "${LIBYAML_SRC}/.libs/libyaml.a")
 message(STATUS "Using bundled libyaml in '${LIBYAML_SRC}'")
 ExternalProject_Add(libyaml
-URL "http://s3.amazonaws.com/download.draios.com/dependencies/libyaml-0.1.4.tar.gz"
+URL "https://s3.amazonaws.com/download.draios.com/dependencies/libyaml-0.1.4.tar.gz"
 URL_MD5 "4a4bced818da0b9ae7fc8ebc690792a7"
 BUILD_COMMAND ${CMD_MAKE}
 BUILD_IN_SOURCE 1

@@ -461,7 +461,7 @@ else()
 ExternalProject_Add(lyaml
 DEPENDS ${LYAML_DEPENDENCIES}
-URL "http://s3.amazonaws.com/download.draios.com/dependencies/lyaml-release-v6.0.tar.gz"
+URL "https://s3.amazonaws.com/download.draios.com/dependencies/lyaml-release-v6.0.tar.gz"
 URL_MD5 "dc3494689a0dce7cf44e7a99c72b1f30"
 BUILD_COMMAND ${CMD_MAKE}
 BUILD_IN_SOURCE 1

@@ -486,7 +486,7 @@ else()
 set(TBB_INCLUDE_DIR "${TBB_SRC}/include/")
 set(TBB_LIB "${TBB_SRC}/build/lib_release/libtbb.a")
 ExternalProject_Add(tbb
-URL "http://s3.amazonaws.com/download.draios.com/dependencies/tbb-2018_U5.tar.gz"
+URL "https://s3.amazonaws.com/download.draios.com/dependencies/tbb-2018_U5.tar.gz"
 URL_MD5 "ff3ae09f8c23892fbc3008c39f78288f"
 CONFIGURE_COMMAND ""
 BUILD_COMMAND ${CMD_MAKE} tbb_build_dir=${TBB_SRC}/build tbb_build_prefix=lib extra_inc=big_iron.inc

@@ -518,7 +518,7 @@ else()
 endif()
 ExternalProject_Add(civetweb
 DEPENDS ${CIVETWEB_DEPENDENCIES}
-URL "http://s3.amazonaws.com/download.draios.com/dependencies/civetweb-1.11.tar.gz"
+URL "https://s3.amazonaws.com/download.draios.com/dependencies/civetweb-1.11.tar.gz"
 URL_MD5 "b6d2175650a27924bccb747cbe084cd4"
 CONFIGURE_COMMAND ${CMAKE_COMMAND} -E make_directory ${CIVETWEB_SRC}/install/lib
 COMMAND ${CMAKE_COMMAND} -E make_directory ${CIVETWEB_SRC}/install/include

@@ -606,7 +606,7 @@ else()
 ExternalProject_Add(grpc
 DEPENDS protobuf zlib c-ares
-URL "http://download.draios.com/dependencies/grpc-1.8.1.tar.gz"
+URL "https://s3.amazonaws.com/download.draios.com/dependencies/grpc-1.8.1.tar.gz"
 URL_MD5 "2fc42c182a0ed1b48ad77397f76bb3bc"
 CONFIGURE_COMMAND ""
 # TODO what if using system openssl, protobuf or cares?

OWNERS

@@ -1,11 +1,12 @@
 approvers:
-- leodido
 - fntlnz
+- kris-nova
+- leodido
 - mstemm
 reviewers:
-- leodido
 - fntlnz
-- mfdii
 - kaizhe
+- kris-nova
+- leodido
+- mfdii
 - mstemm

README.md

@@ -45,10 +45,17 @@ See [Falco Documentation](https://falco.org/docs/) to quickly get started using
 Join the Community
 ---
+* [Join the mailing list](http://bit.ly/2Mu0wXA) for news and a Google calendar invite for our Falco open source meetings. Note: this is the only way to get a calendar invite for our open meetings.
 * [Website](https://falco.org) for Falco.
+* We are working on a blog for the Falco project. In the meantime you can find [Falco](https://sysdig.com/blog/tag/falco/) posts over on the Sysdig blog.
 * Join our [Public Slack](https://slack.sysdig.com) channel for open source Sysdig and Falco announcements and discussions.
+
+Office hours
+---
+
+Falco has bi-weekly office hour style meetings where we plan our work on the project. You can get a Google calendar invite by joining the mailing list. It will automatically be sent.
+Wednesdays at 8am Pacific on [Zoom](https://sysdig.zoom.us/j/213235330).
 License Terms
 ---
 Falco is licensed to you under the [Apache 2.0](./COPYING) open source license.

Dockerfile (docker/local)

@@ -16,22 +16,32 @@ RUN cp /etc/skel/.bashrc /root && cp /etc/skel/.profile /root
 ADD http://download.draios.com/apt-draios-priority /etc/apt/preferences.d/
 RUN apt-get update \
 && apt-get install -y --no-install-recommends \
 bash-completion \
 bc \
 clang-7 \
 ca-certificates \
 curl \
 dkms \
 gnupg2 \
 gcc \
 jq \
 libc6-dev \
 libelf-dev \
 llvm-7 \
 netcat \
 xz-utils \
-&& rm -rf /var/lib/apt/lists/*
+libmpc3 \
+binutils \
+libgomp1 \
+libitm1 \
+libatomic1 \
+liblsan0 \
+libtsan0 \
+libmpx2 \
+libquadmath0 \
+libcc1-0 \
+&& rm -rf /var/lib/apt/lists/*
 # gcc 6 is no longer included in debian unstable, but we need it to
 # build kernel modules on the default debian-based ami used by

falco.yaml

@@ -80,7 +80,6 @@ buffered_outputs: false
 # The rate at which log/alert messages are emitted is governed by a
 # token bucket. The rate corresponds to one message every 30 seconds
 # with a burst of 10 messages.
 syscall_event_drops:
 actions:
 - log

@@ -88,6 +87,21 @@ syscall_event_drops:
 rate: .03333
 max_burst: 10
+# Options to configure the kernel module check.
+# Falco uses a kernel module to obtain the info to match against the rules.
+# To behave correctly, it needs to ensure that the kernel module is always present and well behaved.
+# The following options configure:
+# - the frequency at which it should check for the kernel module
+# - the maximum number of consecutive failures after which it should stop
+# - the exponential backoff mechanism it has to use to check for the module and eventually try to re-insert it automatically.
+module_check:
+  frequency: 10
+  max_consecutive_failures: 3
+  backoff:
+    max_attempts: 5
+    init_delay: 100
+    max_delay: 3000

@@ -99,14 +113,12 @@ syscall_event_drops:
 # an initial quiet period, and then up to 1 notification per second
 # afterward. It would gain the full burst back after 1000 seconds of
 # no activity.
 outputs:
 rate: 1
 max_burst: 1000
 # Where security notifications should go.
 # Multiple outputs can be enabled.
 syslog_output:
 enabled: true

@@ -117,7 +129,6 @@ syslog_output:
 #
 # Also, the file will be closed and reopened if falco is signaled with
 # SIGUSR1.
 file_output:
 enabled: false
 keep_alive: false

@@ -136,7 +147,6 @@ stdout_output:
 # $ openssl req -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem
 # $ cat certificate.pem key.pem > falco.pem
 # $ sudo cp falco.pem /etc/falco/falco.pem
 webserver:
 enabled: true
 listen_port: 8765

README.md (anchore integration)

@@ -18,7 +18,7 @@ For running this integration you will need:
 This integration uses the [same environment variables that anchore-cli](https://github.com/anchore/anchore-cli#configuring-the-anchore-cli):
-* ANCHORE_CLI_USER: The user used to conect to anchore-engine. By default is ```admin```
+* ANCHORE_CLI_USER: The user used to connect to anchore-engine. By default is ```admin```
 * ANCHORE_CLI_PASS: The password used to connect to anchore-engine.
 * ANCHORE_CLI_URL: The url where anchore-engine listens. Make sure does not end with a slash. By default is ```http://localhost:8228/v1```
 * ANCHORE_CLI_SSL_VERIFY: Flag for enabling if HTTP client verifies SSL. By default is ```true```

@@ -81,7 +81,7 @@ So you can run directly with Docker:
 ```
 docker run --rm -e ANCHORE_CLI_USER=<user-for-custom-anchore-engine> \
--e ANCHORE_CLI_PASS=<passsword-for-user-for-custom-anchore-engine> \
+-e ANCHORE_CLI_PASS=<password-for-user-for-custom-anchore-engine> \
 -e ANCHORE_CLI_URL=http://<custom-anchore-engine-host>:8228/v1 \
 sysdig/anchore-falco
 ```

falco_rules.yaml

@@ -72,6 +72,9 @@
 - macro: create_symlink
   condition: evt.type in (symlink, symlinkat) and evt.dir=<
+- macro: chmod
+  condition: (evt.type in (chmod, fchmod, fchmodat) and evt.dir=<)
 # File categories
 - macro: bin_dir
   condition: fd.directory in (/bin, /sbin, /usr/bin, /usr/sbin)

@@ -905,12 +908,15 @@
 - macro: access_repositories
   condition: (fd.filename in (repository_files) or fd.directory in (repository_directories))
+- macro: modify_repositories
+  condition: (evt.arg.newpath pmatch (repository_directories))
 - rule: Update Package Repository
   desc: Detect package repositories get updated
   condition: >
-    open_write and access_repositories and not package_mgmt_procs
+    ((open_write and access_repositories) or (modify and modify_repositories)) and not package_mgmt_procs
   output: >
-    Repository files get updated (user=%user.name command=%proc.cmdline file=%fd.name container_id=%container.id image=%container.image.repository)
+    Repository files get updated (user=%user.name command=%proc.cmdline file=%fd.name newpath=%evt.arg.newpath container_id=%container.id image=%container.image.repository)
   priority:
     NOTICE
   tags: [filesystem, mitre_persistence]

@@ -1412,6 +1418,12 @@
   priority: WARNING
   tags: [filesystem, mitre_credential_access, mitre_discovery]
+- macro: amazon_linux_running_python_yum
+  condition: >
+    (proc.name = python and
+    proc.pcmdline = "python -m amazon_linux_extras system_motd" and
+    proc.cmdline startswith "python -c import yum;")
 # Only let rpm-related programs write to the rpm database
 - rule: Write below rpm database
   desc: an attempt to write to the rpm database by any non-rpm related program

@@ -1421,6 +1433,7 @@
   and not ansible_running_python
   and not python_running_chef
   and not exe_running_docker_save
+  and not amazon_linux_running_python_yum
   output: "Rpm database opened for writing by a non-rpm program (command=%proc.cmdline file=%fd.name parent=%proc.pname pcmdline=%proc.pcmdline container_id=%container.id image=%container.image.repository)"
   priority: ERROR
   tags: [filesystem, software_mgmt, mitre_persistence]

@@ -2374,29 +2387,48 @@
     WARNING
   tags: [process, mitre_persistence]
-- rule: Delete Bash History
-  desc: Detect bash history deletion
+- rule: Delete or rename shell history
+  desc: Detect shell history deletion
   condition: >
-    ((spawned_process and proc.name in (shred, rm, mv) and proc.args contains "bash_history") or
-    (open_write and fd.name contains "bash_history" and evt.arg.flags contains "O_TRUNC"))
+    (modify and (
+      evt.arg.name contains "bash_history" or
+      evt.arg.name contains "zsh_history" or
+      evt.arg.name contains "fish_read_history" or
+      evt.arg.name endswith "fish_history" or
+      evt.arg.oldpath contains "bash_history" or
+      evt.arg.oldpath contains "zsh_history" or
+      evt.arg.oldpath contains "fish_read_history" or
+      evt.arg.oldpath endswith "fish_history" or
+      evt.arg.path contains "bash_history" or
+      evt.arg.path contains "zsh_history" or
+      evt.arg.path contains "fish_read_history" or
+      evt.arg.path endswith "fish_history")) or
+    (open_write and (
+      fd.name contains "bash_history" or
+      fd.name contains "zsh_history" or
+      fd.name contains "fish_read_history" or
+      fd.name endswith "fish_history") and evt.arg.flags contains "O_TRUNC")
   output: >
-    Bash history has been deleted (user=%user.name command=%proc.cmdline file=%fd.name %container.info)
+    Shell history had been deleted or renamed (user=%user.name type=%evt.type command=%proc.cmdline fd.name=%fd.name name=%evt.arg.name path=%evt.arg.path oldpath=%evt.arg.oldpath %container.info)
priority: priority:
WARNING WARNING
tag: [process, mitre_defense_evation] tag: [process, mitre_defense_evation]
- macro: consider_all_chmods - macro: consider_all_chmods
condition: (never_true) condition: (always_true)
- list: user_known_chmod_applications
items: []
- rule: Set Setuid or Setgid bit - rule: Set Setuid or Setgid bit
desc: > desc: >
When the setuid or setgid bits are set for an application, When the setuid or setgid bits are set for an application,
this means that the application will run with the privileges of the owning user or group respectively. this means that the application will run with the privileges of the owning user or group respectively.
Detect setuid or setgid bits set via chmod Detect setuid or setgid bits set via chmod
condition: consider_all_chmods and spawned_process and proc.name = "chmod" and (proc.args contains "+s" or proc.args contains "4777") condition: consider_all_chmods and chmod and (evt.arg.mode contains "S_ISUID" or evt.arg.mode contains "S_ISGID") and not proc.cmdline in (user_known_chmod_applications)
output: > output: >
Setuid or setgid bit is set via chmod (user=%user.name command=%proc.cmdline Setuid or setgid bit is set via chmod (fd=%evt.arg.fd filename=%evt.arg.filename mode=%evt.arg.mode user=%user.name
container_id=%container.id container_name=%container.name image=%container.image.repository:%container.image.tag) command=%proc.cmdline container_id=%container.id container_name=%container.name image=%container.image.repository:%container.image.tag)
priority: priority:
NOTICE NOTICE
tag: [process, mitre_persistence] tag: [process, mitre_persistence]
@@ -2411,12 +2443,14 @@
- rule: Create Hidden Files or Directories - rule: Create Hidden Files or Directories
desc: Detect hidden files or directories created desc: Detect hidden files or directories created
condition: > condition: >
((mkdir and consider_hidden_file_creation and evt.arg.path contains "/.") or (consider_hidden_file_creation and (
(open_write and consider_hidden_file_creation and evt.arg.flags contains "O_CREAT" and (modify and evt.arg.newpath contains "/.") or
fd.name contains "/." and not fd.name pmatch (exclude_hidden_directories))) (mkdir and evt.arg.path contains "/.") or
(open_write and evt.arg.flags contains "O_CREAT" and fd.name contains "/." and not fd.name pmatch (exclude_hidden_directories)))
)
output: > output: >
Hidden file or directory created (user=%user.name command=%proc.cmdline Hidden file or directory created (user=%user.name command=%proc.cmdline
file=%fd.name container_id=%container.id container_name=%container.name image=%container.image.repository:%container.image.tag) file=%fd.name newpath=%evt.arg.newpath container_id=%container.id container_name=%container.name image=%container.image.repository:%container.image.tag)
priority: priority:
NOTICE NOTICE
tag: [file, mitre_persistence] tag: [file, mitre_persistence]
@@ -2446,6 +2480,103 @@
Symlinks created over senstivie files (user=%user.name command=%proc.cmdline target=%evt.arg.target linkpath=%evt.arg.linkpath parent_process=%proc.pname) Symlinks created over senstivie files (user=%user.name command=%proc.cmdline target=%evt.arg.target linkpath=%evt.arg.linkpath parent_process=%proc.pname)
priority: NOTICE priority: NOTICE
tags: [file, mitre_exfiltration] tags: [file, mitre_exfiltration]
- list: miner_ports
items: [
25, 3333, 3334, 3335, 3336, 3357, 4444,
5555, 5556, 5588, 5730, 6099, 6666, 7777,
7778, 8000, 8001, 8008, 8080, 8118, 8333,
8888, 8899, 9332, 9999, 14433, 14444,
45560, 45700
]
- list: miner_domains
items: [
"asia1.ethpool.org","ca.minexmr.com",
"cn.stratum.slushpool.com","de.minexmr.com",
"eth-ar.dwarfpool.com","eth-asia.dwarfpool.com",
"eth-asia1.nanopool.org","eth-au.dwarfpool.com",
"eth-au1.nanopool.org","eth-br.dwarfpool.com",
"eth-cn.dwarfpool.com","eth-cn2.dwarfpool.com",
"eth-eu.dwarfpool.com","eth-eu1.nanopool.org",
"eth-eu2.nanopool.org","eth-hk.dwarfpool.com",
"eth-jp1.nanopool.org","eth-ru.dwarfpool.com",
"eth-ru2.dwarfpool.com","eth-sg.dwarfpool.com",
"eth-us-east1.nanopool.org","eth-us-west1.nanopool.org",
"eth-us.dwarfpool.com","eth-us2.dwarfpool.com",
"eu.stratum.slushpool.com","eu1.ethermine.org",
"eu1.ethpool.org","fr.minexmr.com",
"mine.moneropool.com","mine.xmrpool.net",
"pool.minexmr.com","pool.monero.hashvault.pro",
"pool.supportxmr.com","sg.minexmr.com",
"sg.stratum.slushpool.com","stratum-eth.antpool.com",
"stratum-ltc.antpool.com","stratum-zec.antpool.com",
"stratum.antpool.com","us-east.stratum.slushpool.com",
"us1.ethermine.org","us1.ethpool.org",
"us2.ethermine.org","us2.ethpool.org",
"xmr-asia1.nanopool.org","xmr-au1.nanopool.org",
"xmr-eu1.nanopool.org","xmr-eu2.nanopool.org",
"xmr-jp1.nanopool.org","xmr-us-east1.nanopool.org",
"xmr-us-west1.nanopool.org","xmr.crypto-pool.fr",
"xmr.pool.minergate.com"
]
- list: https_miner_domains
items: [
"ca.minexmr.com",
"cn.stratum.slushpool.com",
"de.minexmr.com",
"fr.minexmr.com",
"mine.moneropool.com",
"mine.xmrpool.net",
"pool.minexmr.com",
"sg.minexmr.com",
"stratum-eth.antpool.com",
"stratum-ltc.antpool.com",
"stratum-zec.antpool.com",
"stratum.antpool.com",
"xmr.crypto-pool.fr"
]
- list: http_miner_domains
items: [
"ca.minexmr.com",
"de.minexmr.com",
"fr.minexmr.com",
"mine.moneropool.com",
"mine.xmrpool.net",
"pool.minexmr.com",
"sg.minexmr.com",
"xmr.crypto-pool.fr"
]
# Add rule based on crypto mining IOCs
- macro: minerpool_https
condition: (fd.sport="443" and fd.sip.name in (https_miner_domains))
- macro: minerpool_http
condition: (fd.sport="80" and fd.sip.name in (http_miner_domains))
- macro: minerpool_other
condition: (fd.sport in (miner_ports) and fd.sip.name in (miner_domains))
- macro: net_miner_pool
condition: (evt.type in (sendto, sendmsg) and evt.dir=< and ((minerpool_http) or (minerpool_https) or (minerpool_other)))
- rule: Detect outbound connections to common miner pool ports
desc: Miners typically connect to miner pools on common ports.
condition: net_miner_pool
output: Outbound connection to IP/Port flagged by cryptoioc.ch (command=%proc.cmdline port=%fd.rport ip=%fd.rip container=%container.info image=%container.image.repository)
priority: CRITICAL
tags: [network, mitre_execution]
- rule: Detect crypto miners using the Stratum protocol
desc: Miners typically specify the mining pool to connect to with a URI that begins with 'stratum+tcp'
condition: spawned_process and proc.cmdline contains "stratum+tcp"
output: Possible miner running (command=%proc.cmdline container=%container.info image=%container.image.repository)
priority: CRITICAL
tags: [process, mitre_execution]
 # Application rules have moved to application_rules.yaml. Please look
 # there if you want to enable them by adding to
 # falco_rules.local.yaml.
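
Several of the rules above are gated on tuning macros like `consider_all_chmods` and exception lists like `user_known_chmod_applications`. As the comment notes, local overrides belong in `falco_rules.local.yaml`; a sketch of such a file (the item values here are illustrative, not shipped defaults):

```yaml
# falco_rules.local.yaml -- illustrative local overrides

# Opt back out of the chmod rule entirely...
- macro: consider_all_chmods
  condition: (never_true)

# ...or keep it enabled but exempt known tooling (example values only)
- list: user_known_chmod_applications
  items: [my-provisioning-script]
```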


@@ -19,7 +19,7 @@ configure_file(debian/postinst.in debian/postinst)
 configure_file(debian/prerm.in debian/prerm)
 if(NOT SYSDIG_DIR)
-  set(SYSDIG_DIR "${PROJECT_SOURCE_DIR}/../sysdig")
+  get_filename_component(SYSDIG_DIR "${PROJECT_SOURCE_DIR}/../sysdig" REALPATH)
 endif()
 file(COPY "${PROJECT_SOURCE_DIR}/scripts/debian/falco"

test/.gitignore vendored

@@ -1 +0,0 @@
-falco_traces.yaml


@@ -40,6 +40,8 @@ class FalcoTest(Test):
         build_type = "debug" if build_type == "debug" else "release"
         build_dir = os.path.join('/build', build_type)
+        if not os.path.exists(build_dir):
+            build_dir = '../build'
         self.falcodir = self.params.get('falcodir', '/', default=os.path.join(self.basedir, build_dir))
         self.stdout_is = self.params.get('stdout_is', '*', default='')


@@ -123,6 +123,18 @@ trace_files: !mux
     trace_file: trace_files/cat_write.scap
     all_events: True
multiple_docs:
detect: True
detect_level:
- WARNING
- INFO
- ERROR
rules_file:
- rules/single_rule.yaml
- rules/double_rule.yaml
trace_file: trace_files/cat_write.scap
all_events: True
   rules_directory:
     detect: True
     detect_level:
@@ -435,6 +447,35 @@ trace_files: !mux
       - rules/invalid_append_macro.yaml
     trace_file: trace_files/cat_write.scap
invalid_overwrite_macro_multiple_docs:
exit_status: 1
stdout_is: |+
Compilation error when compiling "foo": Undefined macro 'foo' used in filter.
---
- macro: some macro
condition: foo
append: false
---
validate_rules_file:
- rules/invalid_overwrite_macro_multiple_docs.yaml
trace_file: trace_files/cat_write.scap
invalid_append_macro_multiple_docs:
exit_status: 1
stdout_is: |+
Compilation error when compiling "evt.type=execve foo": 17: syntax error, unexpected 'foo', expecting 'or', 'and'
---
- macro: some macro
condition: evt.type=execve
- macro: some macro
condition: foo
append: true
---
validate_rules_file:
- rules/invalid_append_macro_multiple_docs.yaml
trace_file: trace_files/cat_write.scap
   invalid_overwrite_rule:
     exit_status: 1
     stdout_contains: |+
@@ -477,6 +518,44 @@ trace_files: !mux
       - rules/invalid_append_rule.yaml
     trace_file: trace_files/cat_write.scap
invalid_overwrite_rule_multiple_docs:
exit_status: 1
stdout_is: |+
Undefined macro 'bar' used in filter.
---
- rule: some rule
desc: some desc
condition: bar
output: some output
priority: INFO
append: false
---
validate_rules_file:
- rules/invalid_overwrite_rule_multiple_docs.yaml
trace_file: trace_files/cat_write.scap
invalid_append_rule_multiple_docs:
exit_status: 1
stdout_contains: |+
Compilation error when compiling "evt.type=open bar": 15: syntax error, unexpected 'bar', expecting 'or', 'and'
---
- rule: some rule
desc: some desc
condition: evt.type=open
output: some output
priority: INFO
- rule: some rule
desc: some desc
condition: bar
output: some output
priority: INFO
append: true
---
validate_rules_file:
- rules/invalid_append_rule_multiple_docs.yaml
trace_file: trace_files/cat_write.scap
   invalid_missing_rule_name:
     exit_status: 1
     stdout_is: |+


@@ -0,0 +1,8 @@
---
- macro: some macro
condition: evt.type=execve
---
- macro: some macro
condition: foo
append: true


@@ -0,0 +1,13 @@
---
- rule: some rule
desc: some desc
condition: evt.type=open
output: some output
priority: INFO
---
- rule: some rule
desc: some desc
condition: bar
output: some output
priority: INFO
append: true


@@ -0,0 +1,8 @@
---
- macro: some macro
condition: evt.type=execve
---
- macro: some macro
condition: foo
append: false


@@ -0,0 +1,13 @@
---
- rule: some rule
desc: some desc
condition: evt.type=open
output: some output
priority: INFO
---
- rule: some rule
desc: some desc
condition: bar
output: some output
priority: INFO
append: false


@@ -0,0 +1,66 @@
---
#
# Copyright (C) 2016-2018 Draios Inc dba Sysdig.
#
# This file is part of falco.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
- required_engine_version: 2
- list: cat_binaries
items: [cat]
- list: cat_capable_binaries
items: [cat_binaries]
- macro: is_cat
condition: proc.name in (cat_capable_binaries)
- rule: open_from_cat
desc: A process named cat does an open
condition: evt.type=open and is_cat
output: "An open was seen (command=%proc.cmdline)"
priority: WARNING
---
#
# Copyright (C) 2016-2018 Draios Inc dba Sysdig.
#
# This file is part of falco.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# This ruleset depends on the is_cat macro defined in single_rule.yaml
- rule: exec_from_cat
desc: A process named cat does execve
condition: evt.type=execve and is_cat
output: "An exec was seen (command=%proc.cmdline)"
priority: ERROR
- rule: access_from_cat
desc: A process named cat does an access
condition: evt.type=access and is_cat
output: "An access was seen (command=%proc.cmdline)"
priority: INFO


@@ -15,7 +15,7 @@
 # License for the specific language governing permissions and limitations under
 # the License.
 #
-set(FALCO_TESTS_SOURCES test_base.cpp engine/test_token_bucket.cpp)
+set(FALCO_TESTS_SOURCES test_base.cpp engine/test_token_bucket.cpp falco/test_webserver.cpp)
 set(FALCO_TESTED_LIBRARIES falco_engine)
@@ -38,7 +38,10 @@ if(FALCO_BUILD_TESTS)
   falco_test
   PUBLIC "${CATCH2_INCLUDE}"
          "${FAKEIT_INCLUDE}"
-         "${PROJECT_SOURCE_DIR}/userspace/engine")
+         "${PROJECT_SOURCE_DIR}/userspace/engine"
+         "${YAMLCPP_INCLUDE_DIR}"
+         "${CIVETWEB_INCLUDE_DIR}"
+         "${PROJECT_SOURCE_DIR}/userspace/falco")
 include(CMakeParseArguments)
 include(CTest)


@@ -0,0 +1,31 @@
/*
Copyright (C) 2016-2019 Draios Inc dba Sysdig.
This file is part of falco.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
#include "webserver.h"
#include <catch.hpp>
TEST_CASE("webserver must accept invalid data", "[!hide][webserver][k8s_audit_handler][accept_data]")
{
// falco_engine* engine = new falco_engine();
// falco_outputs* outputs = new falco_outputs(engine);
// std::string errstr;
// std::string input("{\"kind\": 0}");
//k8s_audit_handler::accept_data(engine, outputs, input, errstr);
REQUIRE(1 == 1);
}


@@ -16,7 +16,7 @@
 # limitations under the License.
 if(NOT SYSDIG_DIR)
-  set(SYSDIG_DIR "${PROJECT_SOURCE_DIR}/../sysdig")
+  get_filename_component(SYSDIG_DIR "${PROJECT_SOURCE_DIR}/../sysdig" REALPATH)
 endif()
 set(FALCO_ENGINE_SOURCE_FILES


@@ -365,46 +365,54 @@ unique_ptr<falco_engine::rule_result> falco_engine::process_k8s_audit_event(json
 bool falco_engine::parse_k8s_audit_json(nlohmann::json &j, std::list<json_event> &evts)
 {
-	// If the Kind is EventList, split it into individual events.
-	if(j.value("kind", "<NA>") == "EventList")
-	{
-		for(auto &je : j["items"])
-		{
-			evts.emplace_back();
-			je["kind"] = "Event";
-			uint64_t ns = 0;
-			if(!sinsp_utils::parse_iso_8601_utc_string(je.value(k8s_audit_time, "<NA>"), ns))
-			{
-				return false;
-			}
-			std::string tmp;
-			sinsp_utils::ts_to_string(ns, &tmp, false, true);
-			evts.back().set_jevt(je, ns);
-		}
-		return true;
-	}
-	else if(j.value("kind", "<NA>") == "Event")
-	{
-		evts.emplace_back();
-		uint64_t ns = 0;
-		if(!sinsp_utils::parse_iso_8601_utc_string(j.value(k8s_audit_time, "<NA>"), ns))
-		{
-			return false;
-		}
-		evts.back().set_jevt(j, ns);
-		return true;
-	}
-	else
-	{
-		return false;
-	}
+	// Note that nlohmann::basic_json::value can throw nlohmann::basic_json::type_error (302, 306)
+	try
+	{
+		// If the kind is EventList, split it into individual events
+		if(j.value("kind", "<NA>") == "EventList")
+		{
+			for(auto &je : j["items"])
+			{
+				evts.emplace_back();
+				je["kind"] = "Event";
+				uint64_t ns = 0;
+				if(!sinsp_utils::parse_iso_8601_utc_string(je.value(k8s_audit_time, "<NA>"), ns))
+				{
+					return false;
+				}
+				std::string tmp;
+				sinsp_utils::ts_to_string(ns, &tmp, false, true);
+				evts.back().set_jevt(je, ns);
+			}
+			return true;
+		}
+		else if(j.value("kind", "<NA>") == "Event")
+		{
+			evts.emplace_back();
+			uint64_t ns = 0;
+			if(!sinsp_utils::parse_iso_8601_utc_string(j.value(k8s_audit_time, "<NA>"), ns))
+			{
+				return false;
+			}
+			evts.back().set_jevt(j, ns);
+			return true;
+		}
+		else
+		{
+			return false;
+		}
+	}
+	catch(exception &e)
+	{
+		// Propagate the exception
+		rethrow_exception(current_exception());
+		return false;
+	}
 }

 unique_ptr<falco_engine::rule_result> falco_engine::process_k8s_audit_event(json_event *ev)
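
For reference, the `EventList` branch above splits payloads shaped like the following Kubernetes audit fragment (an illustrative subset of fields; the exact timestamp key is whichever field `k8s_audit_time` names):

```json
{
  "kind": "EventList",
  "apiVersion": "audit.k8s.io/v1",
  "items": [
    { "kind": "Event", "stageTimestamp": "2019-08-30T08:00:00Z" },
    { "kind": "Event", "stageTimestamp": "2019-08-30T08:00:01Z" }
  ]
}
```

Each item is re-tagged as `kind: "Event"` and queued as an individual `json_event`, so the rest of the engine never sees a list.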


@@ -225,7 +225,23 @@ function split_lines(rules_content)
 	return lines, indices
 end

-function get_orig_yaml_obj(rules_lines, row, num_lines)
+function get_orig_yaml_obj(rules_lines, row)
+	local ret = ""
+	idx = row
+	while (idx <= #rules_lines) do
+		ret = ret..rules_lines[idx].."\n"
+		idx = idx + 1
+		if idx > #rules_lines or rules_lines[idx] == "" or string.sub(rules_lines[idx], 1, 1) == '-' then
+			break
+		end
+	end
+	return ret
+end
+
+function get_lines(rules_lines, row, num_lines)
 	local ret = ""
 	idx = row
@@ -238,7 +254,7 @@ function get_orig_yaml_obj(rules_lines, row, num_lines)
 end

 function build_error(rules_lines, row, num_lines, err)
-	local ret = err.."\n---\n"..get_orig_yaml_obj(rules_lines, row, num_lines).."---"
+	local ret = err.."\n---\n"..get_lines(rules_lines, row, num_lines).."---"
 	return ret
 end
@@ -248,65 +264,19 @@ function build_error_with_context(ctx, err)
 	return ret
 end
-function load_rules(sinsp_lua_parser,
-		    json_lua_parser,
-		    rules_content,
-		    rules_mgr,
-		    verbose,
-		    all_events,
-		    extra,
-		    replace_container_info,
-		    min_priority)
-
-	local required_engine_version = 0
-
-	local lines, indices = split_lines(rules_content)
-
-	local status, rules = pcall(yaml.load, rules_content)
-
-	if status == false then
-		local pat = "^([%d]+):([%d]+): "
-		-- rules is actually an error string
-		local row = 0
-		local col = 0
-
-		row, col = string.match(rules, pat)
-		if row ~= nil and col ~= nil then
-			rules = string.gsub(rules, pat, "")
-		end
-
-		row = tonumber(row)
-		col = tonumber(col)
-
-		return false, build_error(lines, row, 3, rules)
-	end
-
-	if rules == nil then
-		-- An empty rules file is acceptable
-		return true, required_engine_version
-	end
-
-	if type(rules) ~= "table" then
-		return false, build_error(lines, 1, 1, "Rules content is not yaml")
-	end
-
-	-- Look for non-numeric indices--implies that document is not array
-	-- of objects.
-	for key, val in pairs(rules) do
-		if type(key) ~= "number" then
-			return false, build_error(lines, 1, 1, "Rules content is not yaml array of objects")
-		end
-	end
+function load_rules_doc(rules_mgr, doc, load_state)

 	-- Iterate over yaml list. In this pass, all we're doing is
 	-- populating the set of rules, macros, and lists. We're not
 	-- expanding/compiling anything yet. All that will happen in a
 	-- second pass
-	for i,v in ipairs(rules) do
+	for i,v in ipairs(doc) do
+
+		load_state.cur_item_idx = load_state.cur_item_idx + 1

 		-- Save back the original object as it appeared in the file. Will be used to provide context.
-		local context = get_orig_yaml_obj(lines, indices[i], (indices[i+1]-indices[i]))
+		local context = get_orig_yaml_obj(load_state.lines,
+						  load_state.indices[load_state.cur_item_idx])

 		if (not (type(v) == "table")) then
 			return false, build_error_with_context(context, "Unexpected element of type " ..type(v)..". Each element should be a yaml associative array.")
@@ -315,8 +285,8 @@ function load_rules(sinsp_lua_parser,
 		v['context'] = context

 		if (v['required_engine_version']) then
-			required_engine_version = v['required_engine_version']
-			if type(required_engine_version) ~= "number" then
+			load_state.required_engine_version = v['required_engine_version']
+			if type(load_state.required_engine_version) ~= "number" then
 				return false, build_error_with_context(v['context'], "Value of required_engine_version must be a number")
 			end
@@ -458,7 +428,7 @@ function load_rules(sinsp_lua_parser,
 			error("Invalid priority level: "..v['priority'])
 		end

-		if v['priority_num'] <= min_priority then
+		if v['priority_num'] <= load_state.min_priority then
 			-- Note that we can overwrite rules, but the rules are still
 			-- loaded in the order in which they first appeared,
 			-- potentially across multiple files.
@@ -483,7 +453,74 @@ function load_rules(sinsp_lua_parser,
 		end
 	end

-	-- We've now loaded all the rules, macros, and list. Now
+	return true, ""
+end
+
+function load_rules(sinsp_lua_parser,
+		    json_lua_parser,
+		    rules_content,
+		    rules_mgr,
+		    verbose,
+		    all_events,
+		    extra,
+		    replace_container_info,
+		    min_priority)
+
+	local load_state = {lines={}, indices={}, cur_item_idx=0, min_priority=min_priority, required_engine_version=0}
+
+	load_state.lines, load_state.indices = split_lines(rules_content)
+
+	local status, docs = pcall(yaml.load, rules_content, { all = true })
+
+	if status == false then
+		local pat = "^([%d]+):([%d]+): "
+		-- docs is actually an error string
+		local row = 0
+		local col = 0
+
+		row, col = string.match(docs, pat)
+		if row ~= nil and col ~= nil then
+			docs = string.gsub(docs, pat, "")
+		end
+
+		row = tonumber(row)
+		col = tonumber(col)
+
+		return false, build_error(load_state.lines, row, 3, docs)
+	end
+
+	if docs == nil then
+		-- An empty rules file is acceptable
+		return true, load_state.required_engine_version
+	end
+
+	if type(docs) ~= "table" then
+		return false, build_error(load_state.lines, 1, 1, "Rules content is not yaml")
+	end
+
+	for docidx, doc in ipairs(docs) do
+
+		if type(doc) ~= "table" then
+			return false, build_error(load_state.lines, 1, 1, "Rules content is not yaml")
+		end
+
+		-- Look for non-numeric indices--implies that document is not array
+		-- of objects.
+		for key, val in pairs(doc) do
+			if type(key) ~= "number" then
+				return false, build_error(load_state.lines, 1, 1, "Rules content is not yaml array of objects")
+			end
+		end
+
+		res, errstr = load_rules_doc(rules_mgr, doc, load_state)
+
+		if not res then
+			return res, errstr
+		end
+	end
+
+	-- We've now loaded all the rules, macros, and lists. Now
 	-- compile/expand the rules, macros, and lists. We use
 	-- ordered_rule_{lists,macros,names} to compile them in the order
 	-- in which they appeared in the file(s).
@@ -705,7 +742,7 @@ function load_rules(sinsp_lua_parser,
 	io.flush()

-	return true, required_engine_version
+	return true, load_state.required_engine_version
 end

 local rule_fmt = "%-50s %s"


@@ -16,7 +16,7 @@
 # limitations under the License.
 #
 if(NOT SYSDIG_DIR)
-  set(SYSDIG_DIR "${PROJECT_SOURCE_DIR}/../sysdig")
+  get_filename_component(SYSDIG_DIR "${PROJECT_SOURCE_DIR}/../sysdig" REALPATH)
 endif()
 configure_file("${SYSDIG_DIR}/userspace/sysdig/config_sysdig.h.in" config_sysdig.h)
@@ -27,6 +27,8 @@ add_executable(falco
   falco_outputs.cpp
   event_drops.cpp
   statsfilewriter.cpp
+  timer.cpp
+  module_utils.cpp
   falco.cpp
   "${SYSDIG_DIR}/userspace/sysdig/fields_info.cpp"
   webserver.cpp)
@@ -49,16 +51,11 @@ target_link_libraries(falco
 configure_file(config_falco.h.in config_falco.h)
 add_custom_command(TARGET falco
-  COMMAND bash ${CMAKE_CURRENT_SOURCE_DIR}/verify_engine_fields.sh ${CMAKE_SOURCE_DIR}
+  COMMAND bash ${CMAKE_CURRENT_SOURCE_DIR}/verify_engine_fields ${CMAKE_SOURCE_DIR}
   WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
   COMMENT "Comparing engine fields checksum in falco_engine.h to actual fields"
 )
-# add_custom_target(verify_engine_fields
-#   DEPENDS verify_engine_fields.sh falco_engine.h)
-# add_dependencies(verify_engine_fields falco)
 install(TARGETS falco DESTINATION ${FALCO_BIN_DIR})
 install(DIRECTORY lua
   DESTINATION ${FALCO_SHARE_DIR}


@@ -27,4 +27,4 @@ limitations under the License.
 #define FALCO_INSTALL_CONF_FILE "/etc/falco/falco.yaml"
 #define FALCO_SOURCE_LUA_DIR "${PROJECT_SOURCE_DIR}/userspace/falco/lua/"
-#define PROBE_NAME "${PROBE_NAME}"
+#define PROBE_NAME "@PROBE_NAME@"


@@ -220,6 +220,15 @@ void falco_configuration::init(string conf_filename, list<string> &cmdline_optio
 	m_syscall_evt_drop_rate = m_config->get_scalar<double>("syscall_event_drops", "rate", 0.3333);
 	m_syscall_evt_drop_max_burst = m_config->get_scalar<double>("syscall_event_drops", "max_burst", 10);
+	m_module_check_frequency = m_config->get_scalar<uint64_t>("module_check", "frequency", 10);
+	if(m_module_check_frequency < 10) {
+		throw invalid_argument("Module check frequency must be at least 10 seconds");
+	}
+	m_module_check_max_consecutive_failures = m_config->get_scalar<int>("module_check", "max_consecutive_failures", 3);
+	m_module_check_backoff_max_attempts = m_config->get_scalar<int>("module_check", "backoff", "max_attempts", 5);
+	m_module_check_backoff_init_delay = m_config->get_scalar<uint64_t>("module_check", "backoff", "init_delay", 100);
+	m_module_check_backoff_max_delay = m_config->get_scalar<uint64_t>("module_check", "backoff", "max_delay", 3000);
 	m_syscall_evt_simulate_drops = m_config->get_scalar<bool>("syscall_event_drops", "simulate_drops", false);
 }
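
The defaults read above correspond to a `falco.yaml` block like the following. The key names are taken directly from the `get_scalar` calls; the block itself (and the millisecond unit for the delays) is a sketch, not shipped documentation:

```yaml
module_check:
  frequency: 10                 # seconds between checks; values below 10 are rejected
  max_consecutive_failures: 3
  backoff:
    max_attempts: 5
    init_delay: 100             # assumed milliseconds
    max_delay: 3000
```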


@@ -133,6 +133,47 @@ public:
 	}
 }

+	/**
+	 * Get a scalar value defined inside a 3 level nested structure like:
+	 *
+	 *   module_check:
+	 *     backoff:
+	 *       max_attempts: 5
+	 *
+	 * get_scalar<int>("module_check", "backoff", "max_attempts", 5)
+	 */
+	template<typename T>
+	const T get_scalar(const std::string& key, const std::string& subkey, const std::string& subsubkey, const T& default_value)
+	{
+		try
+		{
+			auto node = m_root[key][subkey][subsubkey];
+			if (node.IsDefined())
+			{
+				return node.as<T>();
+			}
+		}
+		catch (const YAML::BadConversion& ex)
+		{
+			std::cerr << "Cannot read config file (" + m_path + "): wrong type at key " + key + "\n";
+			throw;
+		}
+		return default_value;
+	}
+
+	/**
+	 * Set the third-level node identified by m_root[key][subkey][subsubkey] to value.
+	 */
+	template<typename T>
+	void set_scalar(const std::string& key, const std::string& subkey, const std::string& subsubkey, const T& value)
+	{
+		auto node = m_root;
+		if (node.IsDefined())
+		{
+			node[key][subkey][subsubkey] = value;
+		}
+	}
 	// called with the last variadic arg (where the sequence is expected to be found)
 	template <typename T>
 	void get_sequence_from_node(T& ret, const YAML::Node &node)
@@ -216,6 +257,12 @@ class falco_configuration
 	double m_syscall_evt_drop_rate;
 	double m_syscall_evt_drop_max_burst;
+	uint64_t m_module_check_frequency;
+	int m_module_check_max_consecutive_failures;
+	int m_module_check_backoff_max_attempts;
+	uint64_t m_module_check_backoff_init_delay;
+	uint64_t m_module_check_backoff_max_delay;
 	// Only used for testing
 	bool m_syscall_evt_simulate_drops;


@@ -26,6 +26,7 @@ limitations under the License.
#include <vector>
#include <algorithm>
#include <string>
#include <functional>
#include <signal.h>
#include <fcntl.h>
#include <sys/utsname.h>
@@ -46,6 +47,11 @@ limitations under the License.
#include "config_falco.h"
#include "statsfilewriter.h"
#include "webserver.h"
#include "timer.h"
#include "retry.h"
#include "module_utils.h"
typedef function<void(sinsp* inspector)> open_t;
bool g_terminate = false;
bool g_reopen_outputs = false;
@@ -82,25 +88,27 @@ static void usage()
" -h, --help Print this page\n"
" -c Configuration file (default " FALCO_SOURCE_CONF_FILE ", " FALCO_INSTALL_CONF_FILE ")\n"
" -A Monitor all events, including those with EF_DROP_FALCO flag.\n"
" -b, --print-base64 Print data buffers in base64.\n"
" This is useful for encoding binary data that needs to be used over media designed to.\n"
" --cri <path> Path to CRI socket for container metadata.\n"
" Use the specified socket to fetch data from a CRI-compatible runtime.\n"
" -d, --daemon Run as a daemon.\n"
" --disable-source <event_source>\n"
" Disable a specific event source.\n"
" Available event sources are: syscall, k8s_audit.\n"
" It can be passed multiple times.\n"
" Can not disable both the event sources.\n"
" -D <substring> Disable any rules with names having the substring <substring>. Can be specified multiple times.\n"
" Can not be specified with -t.\n"
" -e <events_file> Read the events from <events_file> (in .scap format for sinsp events, or jsonl for\n"
" k8s audit events) instead of tapping into live.\n"
" -k <url>, --k8s-api <url>\n"
" Enable Kubernetes support by connecting to the API server specified as argument.\n"
" E.g. \"http://admin:password@127.0.0.1:8080\".\n"
" The API server can also be specified via the environment variable FALCO_K8S_API.\n"
" -K <bt_file> | <cert_file>:<key_file[#password]>[:<ca_cert_file>], --k8s-api-cert <bt_file> | <cert_file>:<key_file[#password]>[:<ca_cert_file>]\n"
" Use the provided files names to authenticate user and (optionally) verify the K8S API server identity.\n"
" Each entry must specify full (absolute, or relative to the current directory) path to the respective file.\n"
" Private key password is optional (needed only if key is password protected).\n"
" CA certificate is optional. For all files, only PEM file format is supported. \n"
" Specifying CA certificate only is obsoleted - when single entry is provided \n"
@@ -111,51 +119,47 @@ static void usage()
" -l <rule> Show the name and description of the rule with name <rule> and exit.\n"
" --list [<source>] List all defined fields. If <source> is provided, only list those fields for\n"
" the source <source>. Current values for <source> are \"syscall\", \"k8s_audit\"\n"
" -m <url[,marathon_url]>, --mesos-api <url[,marathon_url]>\n"
" Enable Mesos support by connecting to the API server\n"
" specified as argument. E.g. \"http://admin:password@127.0.0.1:5050\".\n"
" Marathon url is optional and defaults to Mesos address, port 8080.\n"
" The API servers can also be specified via the environment variable FALCO_MESOS_API.\n"
" -M <num_seconds> Stop collecting after <num_seconds> reached.\n"
" -N When used with --list, only print field names.\n"
" -o, --option <key>=<val> Set the value of option <key> to <val>. Overrides values in configuration file.\n"
" <key> can be a two-part <key>.<subkey>\n"
" -p <output_format>, --print <output_format>\n"
" Add additional information to each falco notification's output.\n"
" With -pc or -pcontainer will use a container-friendly format.\n"
" With -pk or -pkubernetes will use a kubernetes-friendly format.\n"
" With -pm or -pmesos will use a mesos-friendly format.\n"
" Additionally, specifying -pc/-pk/-pm will change the interpretation\n"
" of %%container.info in rule output fields.\n"
" See the examples section below for more info.\n"
" -P, --pidfile <pid_file> When run as a daemon, write pid to specified file\n"
" -r <rules_file> Rules file/directory (defaults to value set in configuration file, or /etc/falco_rules.yaml).\n"
" Can be specified multiple times to read from multiple files/directories.\n"
" -s <stats_file> If specified, write statistics related to falco's reading/processing of events\n"
" to this file. (Only useful in live mode).\n"
" --stats_interval <msec> When using -s <stats_file>, write statistics every <msec> ms.\n"
" This uses signals, so don't recommend intervals below 200 ms.\n"
" Defaults to 5000 (5 seconds).\n"
" -S <len>, --snaplen <len>\n"
" Capture the first <len> bytes of each I/O buffer.\n"
" By default, the first 80 bytes are captured. Use this\n"
" option with caution, it can generate huge trace files.\n"
" --support Print support information including version, rules files used, etc. and exit.\n"
" -T <tag> Disable any rules with a tag=<tag>. Can be specified multiple times.\n"
" Can not be specified with -t.\n"
" -t <tag> Only run those rules with a tag=<tag>. Can be specified multiple times.\n"
" Can not be specified with -T/-D.\n"
" -U,--unbuffered Turn off output buffering to configured outputs.\n"
" This causes every single line emitted by falco to be flushed,\n"
" which generates higher CPU usage but is useful when piping those outputs\n"
" into another process or into a script.\n"
" -V, --validate <rules_file> Read the contents of the specified rules(s) file and exit.\n"
" Can be specified multiple times to validate multiple files.\n"
" -v Verbose output.\n"
" --version Print version number.\n"
"\n"
);
}
@@ -227,6 +231,8 @@ uint64_t do_inspect(falco_engine *engine,
string &stats_filename,
uint64_t stats_interval,
bool all_events,
bool verbose,
bool disable_syscall,
int &result)
{
uint64_t num_evts = 0;
@@ -252,11 +258,42 @@ uint64_t do_inspect(falco_engine *engine,
}
}
// Module check settings
utils::timer t;
t.reset();
uint64_t frequency = config.m_module_check_frequency;
auto num_failures = 0;
auto max_failures = config.m_module_check_max_consecutive_failures;
auto max_attempts = config.m_module_check_backoff_max_attempts;
auto ini_delay = config.m_module_check_backoff_init_delay;
auto max_delay = config.m_module_check_backoff_max_delay;
//
// Loop through the events
//
while(true)
{
// Check module every x seconds
if(!disable_syscall && t.seconds_elapsed() > frequency)
{
// Check that the module is present and loaded, with exponential backoff (e.g., 100, 200, 400, ... ms)
// When the module is missing or unloaded, try to insert it
// Retries at most <max_attempts> times
// Stops early if the module is found
auto found = utils::retry(max_attempts, ini_delay, max_delay, utils::module_predicate, utils::has_module, verbose, true);
// Count how many intervals the module has been missing; reset the counter when the module is found
num_failures = found ? 0 : num_failures + 1;
// Stop falco if the module has been missing for <max_failures> consecutive checks
if (num_failures >= max_failures)
{
result = EXIT_FAILURE;
break;
}
// Reset timer
t.reset();
}
rc = inspector->next(&ev);
@@ -270,7 +307,7 @@ uint64_t do_inspect(falco_engine *engine,
if (g_terminate || g_restart)
{
falco_logger::log(LOG_INFO, "SIGHUP received, restarting...\n");
break;
}
else if(rc == SCAP_TIMEOUT)
@@ -291,10 +328,11 @@ uint64_t do_inspect(falco_engine *engine,
throw sinsp_exception(inspector->getlasterr().c_str());
}
if(duration_start == 0)
{
duration_start = ev->get_ts();
}
else if(duration_to_tot_ns > 0)
{
if(ev->get_ts() - duration_start >= duration_to_tot_ns)
{
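The module check above waits between retries with an exponentially growing delay, computed as `min(initial_delay << retries, max_backoff)`. A small sketch of that schedule (`backoff_schedule` is a hypothetical helper, not part of the patch):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Backoff schedule used by the retry helper (a sketch): the delay doubles on
// each attempt, starting from the initial delay and capped at the maximum.
std::vector<uint64_t> backoff_schedule(int attempts, uint64_t initial_ms, uint64_t max_ms)
{
	std::vector<uint64_t> delays;
	for(int retries = 0; retries < attempts; retries++)
	{
		delays.push_back(std::min(initial_ms << retries, max_ms));
	}
	return delays;
}
```

With an initial delay of 100 ms and a 500 ms cap, four attempts wait 100, 200, 400, and 500 ms respectively.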
@@ -428,6 +466,9 @@ int falco_init(int argc, char **argv)
string list_flds_source = "";
bool print_support = false;
string cri_socket_path;
set<string> disable_sources;
bool disable_syscall = false;
bool disable_k8s_audit = false;
// Used for writing trace files
int duration_seconds = 0;
@@ -447,25 +488,26 @@ int falco_init(int argc, char **argv)
static struct option long_options[] =
{
{"cri", required_argument, 0},
{"daemon", no_argument, 0, 'd'},
{"disable-source", required_argument, 0},
{"help", no_argument, 0, 'h'},
{"ignored-events", no_argument, 0, 'i'},
{"k8s-api-cert", required_argument, 0, 'K'},
{"k8s-api", required_argument, 0, 'k'},
{"list", optional_argument, 0},
{"mesos-api", required_argument, 0, 'm'},
{"option", required_argument, 0, 'o'},
{"pidfile", required_argument, 0, 'P'},
{"print-base64", no_argument, 0, 'b'},
{"print", required_argument, 0, 'p'},
{"snaplen", required_argument, 0, 'S'},
{"stats_interval", required_argument, 0},
{"support", no_argument, 0},
{"unbuffered", no_argument, 0, 'U'},
{"validate", required_argument, 0, 'V'},
{"version", no_argument, 0, 0},
{"writefile", required_argument, 0, 'w'},
{0, 0, 0, 0}
};
@@ -609,7 +651,10 @@ int falco_init(int argc, char **argv)
}
else if (string(long_options[long_index].name) == "cri")
{
if(optarg != NULL)
{
cri_socket_path = optarg;
}
}
else if (string(long_options[long_index].name) == "list")
{
@@ -627,6 +672,13 @@ int falco_init(int argc, char **argv)
{
print_support = true;
}
else if (string(long_options[long_index].name) == "disable-source")
{
if(optarg != NULL)
{
disable_sources.insert(optarg);
}
}
break;
default:
@@ -669,6 +721,25 @@ int falco_init(int argc, char **argv)
return EXIT_SUCCESS;
}
if(disable_sources.size() > 0)
{
auto it = disable_sources.begin();
while(it != disable_sources.end())
{
if(*it != "syscall" && *it != "k8s_audit")
{
it = disable_sources.erase(it);
continue;
}
++it;
}
disable_syscall = disable_sources.count("syscall") > 0;
disable_k8s_audit = disable_sources.count("k8s_audit") > 0;
if (disable_syscall && disable_k8s_audit)
{
throw std::invalid_argument("The event sources \"syscall\" and \"k8s_audit\" can not be disabled together");
}
}
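The validation above first drops unknown source names, then rejects the one invalid combination (disabling every source). As a standalone sketch (`sources_valid` is a hypothetical helper, not part of the patch):

```cpp
#include <iterator>
#include <set>
#include <string>

// Keep only the known sources, then reject the combination
// that would leave falco with no event source at all.
bool sources_valid(std::set<std::string> disabled)
{
	for(auto it = disabled.begin(); it != disabled.end();)
	{
		// Unknown names are silently discarded, mirroring the loop above
		it = (*it != "syscall" && *it != "k8s_audit") ? disabled.erase(it) : std::next(it);
	}
	return !(disabled.count("syscall") && disabled.count("k8s_audit"));
}
```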
outputs = new falco_outputs(engine);
outputs->set_inspector(inspector);
@@ -972,7 +1043,7 @@ int falco_init(int argc, char **argv)
g_daemonized = true;
}
if(trace_filename.size())
{
// Try to open the trace file as a sysdig
// capture file first.
@@ -1016,17 +1087,43 @@ int falco_init(int argc, char **argv)
}
else
{
open_t open_cb = [](sinsp* inspector) {
inspector->open();
};
open_t open_nodriver_cb = [](sinsp* inspector) {
inspector->open_nodriver();
};
open_t open_f;
// Default mode: both event sources enabled
if (!disable_syscall && !disable_k8s_audit) {
open_f = open_cb;
}
if (disable_syscall) {
open_f = open_nodriver_cb;
}
if (disable_k8s_audit) {
open_f = open_cb;
}
// Check that the kernel module is present at startup, otherwise try to add it
if(!utils::has_module(verbose, false))
{
falco_logger::log(LOG_ERR, "Module not found. Trying to load it ...\n");
if(!utils::ins_module())
{
result = EXIT_FAILURE;
goto exit;
}
}
try
{
open_f(inspector);
}
catch(sinsp_exception &e)
{
rethrow_exception(current_exception());
}
}
@@ -1101,7 +1198,7 @@ int falco_init(int argc, char **argv)
delete mesos_api;
mesos_api = 0;
if(trace_filename.empty() && config.m_webserver_enabled && !disable_k8s_audit)
{
std::string ssl_option = (config.m_webserver_ssl_enabled ? " (SSL)" : "");
falco_logger::log(LOG_INFO, "Starting internal webserver, listening on port " + to_string(config.m_webserver_listen_port) + ssl_option + "\n");
@@ -1117,9 +1214,7 @@ int falco_init(int argc, char **argv)
}
else
{
uint64_t num_evts = do_inspect(engine,
outputs,
inspector,
config,
@@ -1128,6 +1223,8 @@ int falco_init(int argc, char **argv)
stats_filename,
stats_interval,
all_events,
verbose,
disable_syscall,
result);
duration = ((double)clock()) / CLOCKS_PER_SEC - duration;
@@ -1136,11 +1233,11 @@ int falco_init(int argc, char **argv)
if(verbose)
{
fprintf(stdout, "Driver events: %" PRIu64 "\nDriver drops: %" PRIu64 "\n",
cstats.n_evts,
cstats.n_drops);
fprintf(stdout, "Elapsed time: %.3lf\nCaptured events: %" PRIu64 "\nEps: %.2lf\n",
duration,
num_evts,
num_evts / duration);


@@ -0,0 +1,120 @@
/*
Copyright (C) 2016-2019 Draios Inc dba Sysdig.
This file is part of falco.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
#include "config_falco.h"
#include "logger.h"
#include "module_utils.h"
#include <algorithm>
#include <fstream>
#include <functional>
#include <iterator>
#include <sstream>
#include <vector>
namespace utils
{
bool has_module(bool verbose, bool strict)
{
// Comparator treating underscores (ASCII 95) and dashes (ASCII 45) as equal, in both directions
std::function<bool(const char &, const char &)> comparator = [](const char &a, const char &b) {
return a == b || (a == 45 && b == 95) || (a == 95 && b == 45);
};
std::ifstream modules(db);
std::string line;
while(std::getline(modules, line))
{
bool shorter = module.length() <= line.length();
if(shorter && std::equal(module.begin(), module.end(), line.begin(), comparator))
{
bool result = true;
if(!strict)
{
falco_logger::log(LOG_INFO, "Kernel module found: true (not strict)\n");
modules.close();
return result;
}
std::istringstream iss(line);
std::vector<std::string> cols(std::istream_iterator<std::string>{iss}, std::istream_iterator<std::string>());
// Check the module's number of instances - i.e., whether it is loaded or not
auto ninstances = cols.at(2);
result = result && std::stoi(ninstances) > 0;
// Check the module's load state
auto state = cols.at(4);
std::transform(state.begin(), state.end(), state.begin(), ::tolower);
result = result && (state == module_state_live);
if(verbose)
{
falco_logger::log(LOG_INFO, "Kernel module instances: " + ninstances + "\n");
falco_logger::log(LOG_INFO, "Kernel module load state: " + state + "\n");
}
// Check the module's taint state
if(cols.size() > 6)
{
auto taint = cols.at(6);
auto died = taint.find(taint_die) != std::string::npos;
auto warn = taint.find(taint_warn) != std::string::npos;
auto unloaded = taint.find(taint_forced_rmmod) != std::string::npos;
result = result && !died && !warn && !unloaded;
if(verbose)
{
taint.erase(0, taint.find_first_not_of('('));
taint.erase(taint.find_last_not_of(')') + 1);
falco_logger::log(LOG_INFO, "Kernel module taint state: " + taint + "\n");
std::ostringstream message;
message << std::boolalpha << "Kernel module presence: " << result << "\n";
falco_logger::log(LOG_INFO, message.str());
}
}
modules.close();
return result;
}
}
modules.close();
return false;
}
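`has_module` splits each `/proc/modules` line into whitespace-separated columns, then inspects the instance count (column 2) and the load state (column 4). A sketch of that parsing (`split_columns` and `looks_loaded` are illustrative helpers; the state comparison here is simplified to an exact match rather than the lowercasing done above):

```cpp
#include <iterator>
#include <sstream>
#include <string>
#include <vector>

// One /proc/modules line has the form:
// "<name> <size> <instances> <deps> <state> <address> [(<taint>)]"
std::vector<std::string> split_columns(const std::string &line)
{
	std::istringstream iss(line);
	return std::vector<std::string>(std::istream_iterator<std::string>{iss},
	                                std::istream_iterator<std::string>());
}

// Loaded means at least one instance and a "Live" state.
bool looks_loaded(const std::string &line)
{
	auto cols = split_columns(line);
	if(cols.size() < 5)
	{
		return false;
	}
	return std::stoi(cols.at(2)) > 0 && (cols.at(4) == "Live" || cols.at(4) == "live");
}
```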
bool ins_module()
{
if(system("modprobe " PROBE_NAME " > /dev/null 2> /dev/null"))
{
// todo > fall back to a custom directory in which to look for the module, using `modprobe -d build/driver`
falco_logger::log(LOG_ERR, "Unable to load the module.\n");
return false;
}
return true;
}
bool module_predicate(bool has_module)
{
if(has_module)
{
return false;
}
// Retry only when we were not able to insert the module
return !ins_module();
}
} // namespace utils


@@ -0,0 +1,37 @@
/*
Copyright (C) 2016-2019 Draios Inc dba Sysdig.
This file is part of falco.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
#pragma once
#include "config_falco.h"
#include <string>
namespace utils
{
const std::string db("/proc/modules");
const std::string module(PROBE_NAME);
const std::string module_state_live("live");
// Module's taint state constants
// see: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/kernel/panic.c#n351
const std::string taint_die("D");
const std::string taint_forced_rmmod("R");
const std::string taint_warn("W");
bool has_module(bool verbose, bool strict);
bool ins_module();
bool module_predicate(bool has_module);
} // namespace utils

userspace/falco/retry.h (new file)

@@ -0,0 +1,85 @@
/*
Copyright (C) 2016-2019 Draios Inc dba Sysdig.
This file is part of falco.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
#pragma once
#include "logger.h"
#include <algorithm>
#include <chrono>
#include <cstdint>
#include <sstream>
#include <thread>
#include <type_traits>
#if __cplusplus < 201402L
// Polyfill the C++14 helper aliases when building with C++11.
// Note: std::result_of takes a single function type (F(Args...)), matching
// the result_of_t<Callable &(Args...)> uses below.
template<class T>
using decay_t = typename std::decay<T>::type;
template<bool B, class T = void>
using enable_if_t = typename std::enable_if<B, T>::type;
template<class T>
using result_of_t = typename std::result_of<T>::type;
#endif
namespace utils
{
template<
typename Predicate,
typename Callable,
typename... Args,
// figure out the callable return type
typename R = decay_t<result_of_t<Callable &(Args...)>>,
// require that Predicate is actually a Predicate
enable_if_t<std::is_convertible<result_of_t<Predicate &(R)>, bool>::value, int> = 0>
R retry(int max_retries,
uint64_t initial_delay_ms,
uint64_t max_backoff_ms,
Predicate &&retriable,
Callable &&callable,
Args &&... args)
{
int retries = 0;
while(true)
{
falco_logger::log(LOG_INFO, "Retry no.: " + std::to_string(retries) + "\n");
bool result = callable(std::forward<Args>(args)...);
if(!retriable(result))
{
return result;
}
if(retries >= max_retries)
{
return result;
}
int64_t delay = 0;
if(initial_delay_ms > 0)
{
delay = std::min(initial_delay_ms << retries, max_backoff_ms);
}
std::ostringstream message;
message << "Waiting " << delay << "ms ... \n";
falco_logger::log(LOG_INFO, message.str());
// Blocking for `delay` ms
std::this_thread::sleep_for(std::chrono::milliseconds(delay));
retries++;
}
}
}
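Stripped of logging and sleeping, the retry loop reduces to: invoke the callable, ask the predicate whether the result is retriable, and stop on a non-retriable result or when the attempt budget is spent. A sketch under those assumptions (`retry_sketch` is not the real helper; the result type is fixed to `bool` for brevity):

```cpp
// Retry a callable until the predicate says its result is no longer
// retriable, allowing at most max_retries additional attempts.
template<typename Predicate, typename Callable>
bool retry_sketch(int max_retries, Predicate retriable, Callable callable)
{
	int retries = 0;
	while(true)
	{
		bool result = callable();
		if(!retriable(result) || retries >= max_retries)
		{
			return result;
		}
		retries++;
	}
}
```

In the patch, `module_predicate` plays the role of `retriable`: it returns false (stop) when the module is present, and otherwise retries only if re-inserting the module failed.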

userspace/falco/timer.cpp (new file)

@@ -0,0 +1,34 @@
/*
Copyright (C) 2016-2019 Draios Inc dba Sysdig.
This file is part of falco.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
#include "timer.h"
#include <chrono>
namespace utils
{
void timer::reset()
{
start = clock::now();
}
unsigned long long timer::seconds_elapsed() const
{
return std::chrono::duration_cast<seconds>(clock::now() - start).count();
}
} // namespace utils
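The timer is just a saved `steady_clock` start point plus a `duration_cast` on read; a self-contained equivalent (`elapsed_timer` is an illustrative name, not the patch's `utils::timer`):

```cpp
#include <chrono>

// Capture a steady_clock start point and report whole seconds since then.
struct elapsed_timer
{
	std::chrono::steady_clock::time_point start = std::chrono::steady_clock::now();

	void reset()
	{
		start = std::chrono::steady_clock::now();
	}

	unsigned long long seconds_elapsed() const
	{
		return std::chrono::duration_cast<std::chrono::seconds>(
			std::chrono::steady_clock::now() - start).count();
	}
};
```

`steady_clock` is the right choice here because it is monotonic: a wall-clock adjustment cannot make the module check fire early or never.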

userspace/falco/timer.h (new file)

@@ -0,0 +1,37 @@
/*
Copyright (C) 2016-2019 Draios Inc dba Sysdig.
This file is part of falco.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
#pragma once
#include <chrono>
namespace utils
{
struct timer
{
typedef std::chrono::steady_clock clock;
typedef std::chrono::seconds seconds;
void reset();
unsigned long long seconds_elapsed() const;
private:
clock::time_point start;
};
} // namespace utils


@@ -1,6 +1,6 @@
#!/usr/bin/env bash
set -eu -o pipefail
SOURCE_DIR=$1
OPENSSL=../../openssl-prefix/src/openssl/target/bin/openssl
@@ -11,13 +11,13 @@ if ! command -v ${OPENSSL} version > /dev/null 2>&1; then
fi
NEW_CHECKSUM=$(./falco --list -N | ${OPENSSL} dgst -sha256 | awk '{print $2}')
CUR_CHECKSUM=$(grep FALCO_FIELDS_CHECKSUM "${SOURCE_DIR}/userspace/engine/falco_engine_version.h" | awk '{print $3}' | sed -e 's/"//g')
if [ "$NEW_CHECKSUM" != "$CUR_CHECKSUM" ]; then
echo "Set of fields supported by falco/sysdig libraries has changed (new checksum $NEW_CHECKSUM != old checksum $CUR_CHECKSUM)."
echo "Update checksum and/or version in falco_engine_version.h."
exit 1
fi
exit 0


@@ -44,16 +44,31 @@ bool k8s_audit_handler::accept_data(falco_engine *engine,
std::list<json_event> jevts;
json j;
try
{
j = json::parse(data);
}
catch(json::parse_error &e)
{
errstr = string("Could not parse data: ") + e.what();
return false;
}
catch(json::out_of_range &e)
{
errstr = string("Could not parse data: ") + e.what();
return false;
}
bool ok;
try
{
ok = engine->parse_k8s_audit_json(j, jevts);
}
catch(json::type_error &e)
{
ok = false;
}
if(!ok)
{
errstr = string("Data not recognized as a k8s audit event");
return false;
@@ -160,12 +175,6 @@ void falco_webserver::init(falco_configuration *config,
m_outputs = outputs;
}
template<typename T, typename ...Args>
std::unique_ptr<T> make_unique( Args&& ...args )
{
return std::unique_ptr<T>( new T( std::forward<Args>(args)... ) );
}
void falco_webserver::start()
{
if(m_server)


@@ -25,6 +25,14 @@ limitations under the License.
#include "falco_engine.h"
#include "falco_outputs.h"
#if __cplusplus < 201402L
template<typename T, typename... Ts>
std::unique_ptr<T> make_unique(Ts&&... params)
{
return std::unique_ptr<T>(new T(std::forward<Ts>(params)...));
}
#endif
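For reference, the polyfill simply forwards its arguments into a new-expression wrapped in `unique_ptr`, which is what `std::make_unique` does in C++14. A renamed copy to show usage (`make_unique_sketch` avoids colliding with `std::make_unique` on C++14 builds):

```cpp
#include <memory>
#include <string>
#include <utility>

// C++11 polyfill mirroring the one above (std::make_unique arrives in C++14).
template<typename T, typename... Ts>
std::unique_ptr<T> make_unique_sketch(Ts&&... params)
{
	return std::unique_ptr<T>(new T(std::forward<Ts>(params)...));
}
```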
class k8s_audit_handler : public CivetHandler
{
public: