Mirror of https://github.com/falcosecurity/falco.git (synced 2025-10-22)
* Fix spec name
* Add a playbook for capturing data with sysdig in a container
* Add the event name to the job name to avoid collisions among captures
* Implement a job that starts the container in a Pod via the Kubernetes client; data is collected for the whole Pod, not limited to a single container
* Use the sysdig/capturer image to take the capture and upload it to S3
* Work around a bug in kubeless that splits environment strings incorrectly (https://github.com/kubeless/kubeless/issues/824) by passing multiple --env flags, one per variable, as sketched below
* Use a shorter job name; Kubernetes limits names to 64 characters
* Add a deployable playbook with Kubeless for capturing with Sysdig
* Document the integration with Sysdig capture
* Add a Dockerfile for building the sysdig-capturer image
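A minimal sketch of the kubeless workaround mentioned above: because kubeless splits a single comma-separated environment string incorrectly (see the linked issue), each variable gets its own --env flag. The function name, runtime, handler, and file names below are illustrative assumptions, not taken from the playbook itself:

# Hypothetical deploy command; note one --env flag per variable
# instead of a single comma-separated list.
kubeless function deploy falco-capturer \
  --runtime python2.7 \
  --handler capture.handler \
  --from-file capture.py \
  --env AWS_ACCESS_KEY_ID="$AWS_ACCESS_KEY_ID" \
  --env AWS_SECRET_ACCESS_KEY="$AWS_SECRET_ACCESS_KEY" \
  --env AWS_S3_BUCKET="$AWS_S3_BUCKET"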
16 lines · 393 B · Bash · Executable File
#!/bin/bash
#
# Capture system activity with sysdig for a fixed duration and upload
# the compressed trace to an S3 bucket.

# Exit on errors and unset variables, trace commands, and propagate
# failures through pipelines.
set -exuo pipefail

echo "* Setting up /usr/src links from host"

# Link the host's kernel sources into the container so that
# sysdig-probe-loader can build the kernel module.
for src in "$SYSDIG_HOST_ROOT"/usr/src/*
do
    ln -s "$src" /usr/src/"$(basename "$src")"
done

# Build and load the sysdig kernel probe for the running kernel.
/usr/bin/sysdig-probe-loader

# Capture for $CAPTURE_DURATION seconds with Kubernetes-friendly
# output (-pk), compressed (-z), written to the capture file (-w).
sysdig -S -M "$CAPTURE_DURATION" -pk -z -w "$CAPTURE_FILE_NAME.scap.gz"

# Upload the compressed capture to the configured S3 bucket.
s3cmd --access_key="$AWS_ACCESS_KEY_ID" --secret_key="$AWS_SECRET_ACCESS_KEY" \
    put "$CAPTURE_FILE_NAME.scap.gz" "$AWS_S3_BUCKET"
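A hedged usage sketch for running this script in the sysdig/capturer image named in the commit message. The mount paths, the SYSDIG_HOST_ROOT value, the duration, and the bucket name are assumptions: sysdig containers conventionally mount host directories under /host so the probe loader can find the kernel sources and modules.

# Illustrative invocation; the image name comes from the commit
# message above, all other values are placeholders.
docker run --rm --privileged \
  -v /usr/src:/host/usr/src:ro \
  -v /lib/modules:/host/lib/modules:ro \
  -e SYSDIG_HOST_ROOT=/host \
  -e CAPTURE_DURATION=120 \
  -e CAPTURE_FILE_NAME=falco-event-capture \
  -e AWS_ACCESS_KEY_ID=... \
  -e AWS_SECRET_ACCESS_KEY=... \
  -e AWS_S3_BUCKET=s3://my-captures/ \
  sysdig/capturer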