Falco APIs

Summary

This proposal presents the intent to create a Falco API.

It also lists the set of gRPC services such an API will expose.

Motivation

We want users to interface with Falco outputs, inputs, rules, and configurations via third-party clients.

Such an ability would enable the community to create a whole set of OSS tools built on top of Falco.

Some examples, already in place, are:

Goals

  • To design and implement an API to make Falco more configurable
  • To design and implement an API to make Falco more consumable
  • To design a set of simple contract interfaces
  • To have a secure-by-default Falco API

Non-Goals

Proposal

Use cases

  • Receive Falco events with a well-defined contract over the wire --> outputs (see the sketch after this list)
  • Receive Falco drops with a well-defined contract over the wire --> drops
  • Receive the current Falco version and related meta information (commit hash, build type, etc.) --> version
  • Configure Falco via API (CRUD) --> configs
  • Inject and/or modify Falco rules via API (CRUD) --> rules
  • Send input events to Falco over the wire --> inputs
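
As an illustration of what a well-defined contract over the wire could look like for the outputs use case, the sketch below shows one possible shape of the streamed event message. The field names and types are assumptions made for the sake of the example, not a final definition.

    // outputs.proto (illustrative sketch, fields are assumptions)
    // A single Falco event as streamed by the outputs RPC.
    message response
    {
      string time = 1;                        // when the event was generated
      source source = 2;                      // event source, see the source enum in schema.proto
      priority priority = 3;                  // rule priority, see the priority enum in schema.proto
      string rule = 4;                        // name of the rule that triggered
      string output = 5;                      // formatted output string
      map<string, string> output_fields = 6;  // individual output fields as key/value pairs
    }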

Unary services

Version

Rules

Configs

Streaming (server or client) services

Outputs

Drops

Bidirectional streaming services

Inputs

Architecture

We propose to have:

  • 1 gRPC server
  • 1 service per RPC type, grouping the RPCs as follows:
    • 1 service for unary RPCs
    • 1 service for server or client streaming RPCs
    • 1 service for bidirectional streaming RPCs

This translates into having the following set of proto files:

  • a .proto containing the request and response messages for each service - e.g., version.proto, outputs.proto, etc.

    For example,

    // version.proto
    message request
    {
      ...
    }

    // The `response` message contains the version of Falco.
    // It provides the whole version as a string and also
    // its parts as per the semver 2.0 specification (https://semver.org).
    message response
    {
      string version = 1;
      uint32 major = 2;
      uint32 minor = 3;
      uint32 patch = 4;
      string prerelease = 5;
      string build = 6;
    }
    
  • one or more .proto containing the common models - i.e., the already existing schema.proto containing the source enum, etc.

    // schema.proto
    enum priority {
      ..
    }
    enum source {
      option allow_alias = true;
      SYSCALL = 0;
      syscall = 0;
      Syscall = 0;
      K8S_AUDIT = 1;
      k8s_audit = 1;
      K8s_audit = 1;
      K8S_audit = 1;
    }
    
  • a .proto for service definitions (a sketch of the bidirectional case follows this list)

    For example,

    service unary {
      rpc version(request) returns (response);
      ...
    }
    service streaming {
      rpc outputs(request) returns (stream response);
      rpc drops(request) returns (stream response);
    }
    service bidirectional {
      ...
    }
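
To make the third grouping concrete, the snippet below sketches how the bidirectional inputs RPC could be declared. The gRPC syntax (stream on both the request and the response) is standard; the message names are placeholders, since this proposal does not define the inputs contract yet.

    // inputs service (illustrative sketch, message names are placeholders)
    service bidirectional {
      // The client streams input events to Falco while, on the same call,
      // Falco streams responses back.
      rpc inputs(stream request) returns (stream response);
    }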
    

Benefits

We think this structure makes the implementation easier to write and maintain because it maps directly to the abstractions implemented in the code - i.e., one service per RPC type, a pool of contexts per RPC type.

Limitations

Having multiple RPCs of the same type (e.g., outputs and drops) share the same service means:

  • they will share the number of simultaneous completion queue requests the gRPC server can handle

This assertion is true in the current implementation. Should this prove to have a significant impact on the performance we want to achieve, creating one completion queue per RPC would solve it.

Diagrams