* Update the Puppet module:
* Apply puppet-lint recommendations
* Update the README since the project moved from draios to falcosecurity in GitHub
* Move parameters into their own file
* Add the DEB repository automatically
* Add the EPEL repository automatically
* Add a logrotate configuration
* Update the configuration file with all the latest changes
* Set required module versions properly
* Set dependencies between classes
* Set the class order
* Apply mstemm's code review
* Drop Puppet 3 support
* Use a working version of puppetlabs-apt
* Use dependencies to be compatible with Puppet 4.7 and above
* Move kubernetes-response-engine to falcosecurity/kubernetes-response-engine
As long as Falco and the Response Engine have different release cycles, they
are kept separate.
* Add a README explaining that the repository has been moved
@mfdii is absolutely right about this on #539
* Add falco service to k8s install/update labels
Update the instructions for K8s RBAC installation to also create a
service that maps to port 8765 of the falco pod. This allows other
services to access the embedded webserver within falco.
Also clean up the set of labels to use a consistent app: falco-example,
role:security for each object.
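For illustration only (not the manifest shipped with the instructions), a service mapping port 8765 of the falco pods with those labels could be created roughly as follows with the Python Kubernetes client; the service name and namespace are assumptions:

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

labels = {"app": "falco-example", "role": "security"}
service = client.V1Service(
    metadata=client.V1ObjectMeta(name="falco-service", labels=labels),  # name is an assumption
    spec=client.V1ServiceSpec(
        selector=labels,
        ports=[client.V1ServicePort(port=8765, target_port=8765, protocol="TCP")],
    ),
)
v1.create_namespaced_service(namespace="default", body=service)
```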
* Change the K8s Audit Example to use a falco daemonset
Change the K8s Audit Example instructions to use minikube in conjunction
with a falco daemonset running inside of minikube. (We're going to start
prebuilding kernel modules for recent minikube variants to make this
possible).
When running inside of minikube in conjunction with a service, you have
to go through some additional steps to find the ClusterIP associated
with the falco service and use that IP when configuring the k8s audit
webhook. Overall it's still a more self-contained set of instructions,
though.
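As a sketch of that extra step (service name and namespace are assumptions; the webhook path has to match the k8s_audit_endpoint setting in falco.yaml), the ClusterIP lookup could be scripted with the Python Kubernetes client:

```python
from kubernetes import client, config

config.load_kube_config()  # uses the minikube context
v1 = client.CoreV1Api()

svc = v1.read_namespaced_service(name="falco-service", namespace="default")  # name is an assumption
cluster_ip = svc.spec.cluster_ip

# The path must match k8s_audit_endpoint in falco.yaml
webhook_url = "http://{}:8765/k8s_audit".format(cluster_ip)
print(webhook_url)
```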
* Add a falco-sns utility which publishes to an AWS SNS topic
* Add a script for deploying the function in AWS Lambda
* Bump dependencies
* Use an empty topic and pass the AWS_DEFAULT_REGION environment variable
* Add gitignore
* Install ca-certificates.
They are used when we publish to an SNS topic.
* Add myself as a maintainer
* Decode events from SNS based messages
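On the consuming side, the Falco alert travels as a JSON string inside the SNS envelope that Lambda receives; a minimal decoding sketch:

```python
import json

def falco_alert_from_sns_event(event):
    # Lambda receives the SNS envelope; the original Falco alert is the
    # JSON string carried in Records[0].Sns.Message.
    return json.loads(event["Records"][0]["Sns"]["Message"])
```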
* Add Terraform manifests for getting an EKS up and running
Please pay attention to how to set up kubectl and how to join the workers:
https://www.terraform.io/docs/providers/aws/guides/eks-getting-started.html#obtaining-kubectl-configuration-from-terraform
https://www.terraform.io/docs/providers/aws/guides/eks-getting-started.html#required-kubernetes-configuration-to-join-worker-nodes
* Ignore terraform generated files
* Remove autogenerated files
* Also publish MessageAttributes, which makes it possible to use Filter Policies
This allows subscribing only to errors, only to warnings, to several
priorities, or by rule names.
It covers the same functionality as the NATS publisher does.
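The idea, sketched here in Python with boto3 (the actual utility may be implemented differently), is to attach the rule name and priority as message attributes so subscriptions can filter on them:

```python
import json
import boto3

sns = boto3.client("sns")  # region taken from AWS_DEFAULT_REGION

def publish_alert(topic_arn, alert):
    # alert is a parsed Falco alert: {"rule": ..., "priority": ..., "output": ..., ...}
    sns.publish(
        TopicArn=topic_arn,
        Message=json.dumps(alert),
        MessageAttributes={
            "rule": {"DataType": "String", "StringValue": alert["rule"]},
            "priority": {"DataType": "String", "StringValue": alert["priority"]},
        },
    )
```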
* Add kubeconfig and aws-iam-authenticator from heptio to Lambda environment
* Add role trust from cluster creator to lambda role
* Enable CloudWatch for Lambda stuff
* Generate the kubeconfig, the kubeconfig for lambdas, and the lambda ARN
These are used by the deployment script
* Just a cosmetic change
* Add a Makefile which creates the cluster and configures it
* Use terraform and artifacts which belong to this repository for deploying
* Move CNCF related deployment to its own directory
* Create only the SNS and Lambda resources.
Assume that the EKS cluster will be created externally
* Bridge IAM with RBAC
This allows the lambda role to be used for authenticating against
Kubernetes
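On EKS this bridge goes through the aws-auth ConfigMap; a rough sketch of adding the lambda role to it (the role ARN, username, and group below are placeholders, not the values used by the manifests):

```python
import yaml
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

cm = v1.read_namespaced_config_map("aws-auth", "kube-system")
cm.data = cm.data or {}
roles = yaml.safe_load(cm.data.get("mapRoles", "[]")) or []
roles.append({
    "rolearn": "arn:aws:iam::123456789012:role/falco-lambda-role",  # placeholder ARN
    "username": "falco-lambda",                                     # placeholder username
    "groups": ["falco-response-engine"],                            # placeholder RBAC group
})
cm.data["mapRoles"] = yaml.safe_dump(roles, default_flow_style=False)
v1.replace_namespaced_config_map("aws-auth", "kube-system", cm)
```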
* Do not rely on terraform for deploying a playbook in lambda
* Clean whitespace
* Move rebased playbooks to functions
* Fix rebase issues with deployment and rbac stuff
* Add a clean target to Makefile
* Inject sys.path modification to Kubeless function deployment
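The injected snippet is essentially of this shape (the exact path is an assumption based on where Kubeless mounts function code):

```python
# Prepended to the function source at deployment time so that the
# dependencies shipped next to the function can be imported.
import sys
sys.path.append("/kubeless")  # assumed mount point of the function code
```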
* Add documentation and instructions
* Add a Phantom Client which creates containers in Phantom server
* Add a playbook for creating events in Phantom using a Falco alert
* Add a flag for configuring SSL checking
* Add a deployable playbook with Kubeless for integrating with Phantom
* Add a README for Phantom integration
* Use named arguments as real parameters.
Just a cosmetic change for clarification
* Call lower() before checking, for a case-insensitive comparison
* Add the playbook which creates a container in Phantom
I lost it when rebasing the branch :P
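Putting the Phantom pieces above together, a rough illustrative sketch of such a client (not the actual implementation; the endpoint and header reflect my understanding of the Phantom REST API, and the VERIFY_SSL handling shows why the lower() call matters):

```python
import os
import requests

class PhantomClient:
    """Minimal client that creates containers through the Phantom REST API."""

    def __init__(self, base_url, auth_token, verify_ssl=True):
        self.base_url = base_url
        self.headers = {"ph-auth-token": auth_token}
        self.verify_ssl = verify_ssl

    def create_container(self, name, description, severity="medium"):
        response = requests.post(
            self.base_url + "/rest/container",
            json={"name": name, "description": description, "severity": severity},
            headers=self.headers,
            verify=self.verify_ssl,
        )
        response.raise_for_status()
        return response.json()

# Case-insensitive flag parsing: "False", "false" and "FALSE" all disable verification
verify_ssl = os.environ.get("VERIFY_SSL", "true").lower() != "false"
```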
* Fix spec name
* Add a playbook for capturing stuff using sysdig in a container
* Add the event name to the job name to avoid collisions among captures
* Implement a job for starting a container in a Pod in the Kubernetes client
We are going to collect data for the whole Pod, not limited to one container
* Use the sysdig/capturer image to capture and upload the capture to S3
* There is a bug with environment string splitting in kubeless
https://github.com/kubeless/kubeless/issues/824
So here is a workaround which uses multiple --env flags, one for each
environment variable.
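The workaround, sketched here with hypothetical file and handler names, builds the kubeless deploy command with one --env flag per variable:

```python
import subprocess

def deploy_playbook(name, env):
    cmd = [
        "kubeless", "function", "deploy", name,
        "--runtime", "python2.7",
        "--from-file", "playbook.py",      # hypothetical file
        "--handler", "playbook.handler",   # hypothetical handler
    ]
    # Workaround for kubeless#824: pass one --env flag per variable instead
    # of a single comma-separated string.
    for key, value in env.items():
        cmd += ["--env", "{}={}".format(key, value)]
    subprocess.run(cmd, check=True)
```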
* Use a shorter job name. The Kubernetes limit is 64 characters.
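For instance, the job name could be derived along these lines (the prefix is hypothetical):

```python
JOB_NAME_MAX = 64  # Kubernetes limit mentioned above

def build_job_name(event_name):
    # Include the event name to avoid collisions between concurrent captures,
    # then truncate to respect the Kubernetes name length limit.
    return "sysdig-capture-{}".format(event_name.lower().replace("_", "-"))[:JOB_NAME_MAX]
```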
* Add a deployable playbook with Kubeless for capturing stuff with Sysdig
* Document the integration with Sysdig capture
* Add Dockerfile for creating sysdig-capturer
* Create a DemistoClient for publishing Falco alerts to Demisto
* Extract a function that retrieves the description from the Falco output
* Add a playbook which creates a Falco alert as a Demisto incident
* Add a Kubeless Demisto Handler for Demisto integration
* Document the integration with Demisto
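As a loose illustration of the description extraction and the incident payload (not the actual playbook code; field mapping is an assumption):

```python
def extract_description(alert):
    # A Falco alert in JSON form carries the full message in "output",
    # prefixed with a timestamp; keep only the human-readable part.
    output = alert["output"]
    _, _, description = output.partition(": ")
    return description or output

def incident_from_alert(alert):
    # Hypothetical shape of the incident sent to Demisto.
    return {
        "name": alert["rule"],
        "details": extract_description(alert),
        "severity": alert["priority"],
    }
```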
* Allow changing SSL certificate verification
* Fix naming for playbook specs
* Call lower() before checking the value of VERIFY_SSL, to allow case-insensitive values
* Replace references to the GNU Public License with the Apache license in:
- COPYING file
- README
- all source code below falco
- rules files
- rules and code below the test directory
- code below the falco directory
- entrypoint for docker containers (but not the Dockerfiles)
I didn't generally add copyright notices to all the example files, as
they aren't core Falco. If they did refer to the GPL, I changed them to
Apache.