* Also add endswith to the Lua parser
Add endswith as a symbol so it can be parsed in filter expressions.
* Unit test for endswith support
Add a test case for endswith support, based on the filename ending with null.
* Add a Phantom Client which creates containers in the Phantom server
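A minimal sketch of what such a client might look like, assuming Phantom's REST API creates containers via POST /rest/container authenticated with a ph-auth-token header; the class name, fields, and defaults are illustrative, not the actual implementation:
```python
import requests


class PhantomClient:
    """Sketch of a client that creates containers in a Phantom server."""

    def __init__(self, base_url, auth_token, verify_ssl=True):
        self.base_url = base_url.rstrip("/")
        self.auth_token = auth_token
        self.verify_ssl = verify_ssl  # SSL checking is configurable (see the flag below)

    def create_container(self, name, description, severity="medium"):
        # Assumed endpoint and auth header for Phantom's REST API.
        response = requests.post(
            f"{self.base_url}/rest/container",
            headers={"ph-auth-token": self.auth_token},
            json={"name": name, "description": description, "severity": severity},
            verify=self.verify_ssl,
        )
        response.raise_for_status()
        return response.json()
```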
* Add a playbook for creating events in Phantom using a Falco alert
* Add a flag for configuring SSL checking
* Add a deployable playbook with Kubeless for integrating with Phantom
* Add a README for Phantom integration
* Use named arguments as real parameters.
Purely cosmetic, for clarification.
* Call lower() before the check, to make the comparison case-insensitive
* Add the playbook which creates a container in Phantom
It was lost when rebasing the branch :P
* Add additional rpm writing programs
rhn_check, yumdb.
* Add 11-dhclient as a dhcp binary
* Let runuser read below pam
It reads those files to check permissions.
* Let chef write to /root/.chef*
Some deployments write directly below /root.
* Refactor openshift privileged images
Rework how openshift images are handled:
Many customers deploy to a private registry, which would normally
involve duplicating the image list for the new registry. Now, split the
image prefix search (e.g. <host>/openshift3) from the check of the image
name. The prefix search is in allowed_openshift_registry_root, and can
be easily overridden to add a new private registry hostname. The image
list check is in openshift_image; it is conditioned on
allowed_openshift_registry_root and does a contains search instead of a
prefix match.
Also try to get a more comprehensive set of possible openshift3 images,
using online docs as a guide.
* Also let sdchecks directly setns
A new macro python_running_sdchecks is similar to
parent_python_running_sdchecks but works on the process itself.
Add this as an exception to the "Change thread namespace" rule.
* Fix spec name
* Add a playbook for capturing stuff using sysdig in a container
* Add event-name to the job name to avoid collisions among captures
* Implement a job for starting a container in a Pod in the Kubernetes client
We are going to collect data for the whole Pod, not limited to one container
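A rough sketch of how such a job could be created with the official Kubernetes Python client; the image, names, environment variables, and S3 details below are placeholders rather than the actual playbook code:
```python
from kubernetes import client, config


def start_capture_job(event_name, pod_name, namespace="default"):
    # Including the event name in the job name helps avoid collisions among
    # concurrent captures; keep it short to respect the Kubernetes name limit.
    job_name = f"falco-capture-{event_name}-{pod_name}"[:63]

    container = client.V1Container(
        name="capturer",
        image="sysdig/capturer",  # image that takes the capture and uploads it to S3
        env=[
            client.V1EnvVar(name="CAPTURE_DURATION", value="120"),   # placeholder
            client.V1EnvVar(name="S3_BUCKET", value="my-captures"),  # placeholder
        ],
    )
    template = client.V1PodTemplateSpec(
        spec=client.V1PodSpec(containers=[container], restart_policy="Never")
    )
    job = client.V1Job(
        api_version="batch/v1",
        kind="Job",
        metadata=client.V1ObjectMeta(name=job_name),
        spec=client.V1JobSpec(template=template),
    )

    config.load_incluster_config()  # or config.load_kube_config() outside the cluster
    client.BatchV1Api().create_namespaced_job(namespace=namespace, body=job)
```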
* Use the sysdig/capturer image to take the capture and upload it to S3
* There is a bug with environment string splitting in kubeless
https://github.com/kubeless/kubeless/issues/824
So here is a workaround which uses multiple --env flags, one for each
environment variable.
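For illustration, the deployment can pass one --env flag per variable instead of a single comma-separated value; the function name, runtime, and values below are made up:
```python
import subprocess

env_vars = {"PHANTOM_URL": "https://phantom.example.com", "VERIFY_SSL": "false"}

cmd = [
    "kubeless", "function", "deploy", "falco-playbook",
    "--runtime", "python3.6",
    "--from-file", "playbook.py",
    "--handler", "playbook.handler",
]
# Workaround for kubeless issue 824: one --env flag per variable,
# rather than a single comma-separated --env value.
for key, value in env_vars.items():
    cmd += ["--env", f"{key}={value}"]

subprocess.run(cmd, check=True)
```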
* Use a shorter job name. The Kubernetes limit is 64 characters.
* Add a deployable playbook with Kubeless for capturing stuff with Sysdig
* Document the integration with Sysdig capture
* Add Dockerfile for creating sysdig-capturer
* Create a DemistoClient for publishing Falco alerts to Demisto
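A minimal sketch of such a client, assuming Demisto accepts incident creation via POST /incident with the API key in an Authorization header; the class name and incident fields are illustrative:
```python
import requests


class DemistoClient:
    """Sketch of a client that publishes Falco alerts as Demisto incidents."""

    def __init__(self, base_url, api_key, verify_ssl=True):
        self.base_url = base_url.rstrip("/")
        self.api_key = api_key
        self.verify_ssl = verify_ssl

    def create_incident(self, name, details, severity=1):
        # Assumed endpoint, header, and fields for Demisto's REST API.
        response = requests.post(
            f"{self.base_url}/incident",
            headers={"Authorization": self.api_key, "Content-Type": "application/json"},
            json={
                "name": name,        # e.g. the Falco rule name
                "details": details,  # e.g. the description extracted from the Falco output
                "severity": severity,
                "createInvestigation": True,
            },
            verify=self.verify_ssl,
        )
        response.raise_for_status()
        return response.json()
```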
* Extract a function for getting the description from Falco output
* Add a playbook which creates a Falco alert as a Demisto incident
* Add a Kubeless Demisto Handler for Demisto integration
* Document the integration with Demisto
* Allow changing SSL certificate verification
* Fix naming for playbook specs
* Call lower() before checking the value of VERIFY_SSL, to allow case-insensitive values.
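As an illustration, the case-insensitive check might look like this (the variable name and default are assumptions):
```python
import os

# Lower the value so "True", "TRUE", and "true" are all treated the same.
verify_ssl = os.environ.get("VERIFY_SSL", "true").lower() == "true"
```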
* Use correct copyright years.
Also include the start year.
* Improve copyright notices.
Use the proper start year instead of just 2018.
Add the right owner Draios dba Sysdig.
Add copyright notices to some files that were missing them.
Replace references to the GNU Public License with the Apache license in:
- COPYING file
- README
- all source code below falco
- rules files
- rules and code below test directory
- code below falco directory
- entrypoint for docker containers (but not the Dockerfiles)
I didn't generally add copyright notices to the examples files, as
they aren't core falco. Where they referred to the GPL, I changed them to
Apache.
* debian:unstable head contains binutils 2.31, which generates binaries
that are incompatible with kernels < 4.16.
To fix this, after installing everything, downgrade binutils to
2.30-22. This has to be done as the last step as it introduces conflicts
in other dependencies of the various gcc versions and some of the
packages already in the image.
* Add dpkg-divert as a debian package mgmt program.
* Add pip3 as a package mgmt program.
* Let ucpagent write config
Since the name is fairly generic (apiserver), require that it runs in a
container with image docker/ucp-agent.
* Let iscsi admin programs write config
* Add parent to some output strings
Will aid in addressing false positives.
* Let update-ca-trust write to pki files
* Add additional root writing programs
- zap: web application security tool
- airflow: apache app for managing data pipelines
- rpm can sometimes write below /root/.rpmdb
- maven can write groovy files
* Expand redis etc files
Additional program redis-launcher.(sh) and path /etc/redis.
* Add additional root directories
/root/workspace could be used by jenkins, /root/oradiag_root could be
used by Oracle 11 SQL*Net.
* Add pam-config as an auth program
* Add additional trusted containers
openshift image inspector, alternate name for datadog agent, docker ucp
agent, gliderlabs logspout.
* Add microdnf as a rpm binary.
https://github.com/rpm-software-management/microdnf
* Let coreos update-ssh-keys write /home/core/.ssh
* Allow additional writes below /etc/iscsi
Allow any path starting with /etc/iscsi.
* Add additional /root write paths
Additional files, with /root/workspace changing from a directory to a
path prefix.
* Add additional openshift trusted container.
* Also allow grandparents for ms_oms_writing_conf
In some cases the program spawns intermediate shells, for example:
07:15:30.756713513: Error File below /etc opened for writing (user= command=StatusReport.sh /opt/microsoft/omsconfig/Scripts/StatusReport.sh D34448EA-363A-42C2-ACE0-ACD6C1514CF1 EndTime parent=sh pcmdline=sh -c /opt/microsoft/omsconfig/Scripts/StatusReport.sh D34448EA-363A-42C2-ACE0-ACD6C1514CF1 EndTime file=/etc/opt/omi/conf/omsconfig/last_statusreport program=StatusReport.sh gparent=omiagent ggparent=omiagent gggparent=omiagent) k8s.pod= container=host k8s.pod= container=host
This should fix #387.
* If /lib/modules exists in the base image, the symlink will get created at
/lib/modules/modules. This removes any existing empty directory but will
fail if we try to remove a non-empty /lib/modules. (Punting on how to
handle non-empty base image dirs for now)