Refactor Mizu, define an extension API and add new protocols: AMQP, Kafka (#224)

* Separate HTTP related code into `extensions/http` as a Go plugin

* Move `extensions` folder into `tap` folder

* Move HTTP files into `tap/extensions/lib` for now

* Replace `orcaman/concurrent-map` with `sync.Map`

* Remove `grpc_assembler.go`

* Remove `github.com/up9inc/mizu/tap/extensions/http/lib`

* Add a build script to automatically build extensions from a known path and load them

* Start to define the extension API

* Implement the `run()` function for the TCP stream

* Add support of defining multiple ports to the extension API

* Set the extension name inside the extension

* Declare the `Dissect` function in the extension API

* Dissect HTTP request from inside the HTTP extension

* Make the distinction of outbound and inbound ports

* Dissect HTTP response from inside the HTTP extension

* Bring back the HTTP request-response pair matcher

* Return a `*api.RequestResponsePair` from the dissection
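
The request-response pair matcher can be sketched as follows. This illustrates only the idea (the `matcher` struct and its method names mirror the style of the Kafka code further down, but are hypothetical here): whichever side arrives first is parked under a connection-derived key, and the late side completes the pair.

```go
package main

import (
	"fmt"
	"sync"
)

// requestResponsePair is a stand-in for api.RequestResponsePair.
type requestResponsePair struct {
	Request  interface{}
	Response interface{}
}

// matcher pairs the two halves of an exchange under the same key,
// e.g. "srcIP:srcPort->dstIP:dstPort".
type matcher struct {
	openMessages sync.Map
}

func (m *matcher) registerRequest(key string, req interface{}) *requestResponsePair {
	if res, ok := m.openMessages.LoadAndDelete(key); ok {
		return &requestResponsePair{Request: req, Response: res}
	}
	// No response seen yet: park the request until one arrives.
	m.openMessages.Store(key, req)
	return nil
}

func (m *matcher) registerResponse(key string, res interface{}) *requestResponsePair {
	if req, ok := m.openMessages.LoadAndDelete(key); ok {
		return &requestResponsePair{Request: req, Response: res}
	}
	// Out-of-order arrival: park the response instead.
	m.openMessages.Store(key, res)
	return nil
}

func main() {
	m := &matcher{}
	if pair := m.registerRequest("1.2.3.4:1111->5.6.7.8:80", "GET /"); pair != nil {
		panic("unexpected early match")
	}
	pair := m.registerResponse("1.2.3.4:1111->5.6.7.8:80", "200 OK")
	fmt.Println(pair.Request, pair.Response)
}
```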

* Bring back the gRPC-HTTP/2 parser

* Fix the issues in `handleHTTP1ClientStream` and `handleHTTP1ServerStream`

* Call a function pointer to emit dissected data back to the `tap` package

* Roee's changes - trying to make the agent work with the "api" object - ***still not working***

* Fix a small mistake from the merge conflicts

* Fix the issues that are introduced by the merge conflict

* Add `Emitter` interface to the API and send `OutputChannelItem`(s) to `OutputChannel`
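
A hedged sketch of what this bullet describes: dissectors emit through an `Emitter` interface, and the `tap` package's implementation forwards each `OutputChannelItem` to `OutputChannel`. The field set of `OutputChannelItem` is simplified here.

```go
package main

import "fmt"

// OutputChannelItem is a simplified stand-in for the item a dissector emits.
type OutputChannelItem struct {
	Protocol string
	Summary  string
}

// Emitter is implemented by the tap package; extensions only see the interface.
type Emitter interface {
	Emit(item *OutputChannelItem)
}

// channelEmitter forwards emitted items to OutputChannel.
type channelEmitter struct {
	OutputChannel chan *OutputChannelItem
}

func (e *channelEmitter) Emit(item *OutputChannelItem) {
	e.OutputChannel <- item
}

func main() {
	out := make(chan *OutputChannelItem, 1)
	var emitter Emitter = &channelEmitter{OutputChannel: out}
	emitter.Emit(&OutputChannelItem{Protocol: "http", Summary: "GET /health"})
	fmt.Println((<-out).Summary)
}
```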

* Fix the `HTTP1` handlers

* Set `ConnectionInfo` in HTTP handlers

* Fix the `Dockerfile` to build the extensions

* Remove some unwanted code

* no message

* Re-enable `getStreamProps` function

* Migrate back from `gopacket/tcpassembly` to `gopacket/reassembly`

* Introduce `HTTPPayload` struct and `HTTPPayloader` interface to `MarshalJSON()` all the data structures that are returned by the HTTP protocol

* Read `socketHarOutChannel` instead of `filteredHarChannel`

* Connect `OutputChannelItem` to the last WebSocket, which finally makes the web UI work again

* Add `.env.example` to React app

* Marshal and unmarshal `*http.Request`, `*http.Response` pairs

* Move `loadExtensions` into `main.go` and map extensions into `extensionsMap`

* Add `Summarize()` method to the `Dissector` interface

* Add `Analyze` method to the `Dissector` interface and `MizuEntry` to the extension API

* Add `Protocol` struct and make it affect the UI

* Refactor `BaseEntryDetails` struct and display the source and destination ports in the UI

* Display the protocol name inside the details layout

* Add `Represent` method to the `Dissector` interface and manipulate the UI through this method

* Make the protocol color affect the details layout color and write protocol abbreviation vertically

* Remove everything HTTP related from the `tap` package and make the extension system fully functional

* Fix the TypeScript warnings

* Bring in the files related to AMQP into `amqp` directory

* Add `--nodefrag` flag to the tapper and bring in the main AMQP code

* Implement the AMQP `BasicPublish` and fix some issues in the UI when the response payload is missing

* Implement `representBasicPublish` method

* Fix several minor issues

* Implement the AMQP `BasicDeliver`

* Implement the AMQP `QueueDeclare`

* Implement the AMQP `ExchangeDeclare`

* Implement the AMQP `ConnectionStart`

* Implement the AMQP `ConnectionClose`

* Implement the AMQP `QueueBind`

* Implement the AMQP `BasicConsume`

* Fix an issue in `ConnectionStart`

* Fix a linter error

* Bring in the files related to Kafka into `kafka` directory

* Fix the build errors in Kafka Go files

* Implement `Dissect` method of Kafka and adapt request-response pair matcher to asynchronous client-server stream

* Do the "Is reversed?" check inside `getStreamProps` and fix an issue in the Kafka `Dissect` method

* Implement `Analyze`, `Summarize` methods of Kafka

* Implement the representations for Kafka `Metadata`, `RequestHeader` and `ResponseHeader`

* Refactor the AMQP and Kafka implementations to create the summary string only inside the `Analyze` method

* Implement the representations for Kafka `ApiVersions`

* Implement the representations for Kafka `Produce`

* Implement the representations for Kafka `Fetch`

* Implement the representations for Kafka `ListOffsets`, `CreateTopics` and `DeleteTopics`

* Fix the encoding of AMQP `BasicPublish` and `BasicDeliver` body

* Remove the unnecessary logging

* Remove more logging

* Introduce `Version` field to `Protocol` struct for dynamically switching the HTTP protocol to HTTP/2
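
A sketch of the `Protocol` struct the UI consumes. Only `Version` is named by the commit messages; the other fields are plausible guesses suggested by the abbreviation and color bullets above.

```go
package main

import "fmt"

// Protocol describes a dissector to the UI. Field names other than
// Version are illustrative assumptions, not the actual mizu definition.
type Protocol struct {
	Name            string `json:"name"`
	Version         string `json:"version"`
	Abbreviation    string `json:"abbr"`
	BackgroundColor string `json:"backgroundColor"`
	Priority        int    `json:"priority"`
}

func main() {
	http := Protocol{Name: "http", Version: "1.1", Abbreviation: "HTTP", BackgroundColor: "#205cf5", Priority: 0}
	// Dynamically switching an entry to HTTP/2 (gRPC) is a copy plus
	// a Version rewrite; the original struct is untouched.
	grpc := http
	grpc.Version = "2.0"
	grpc.Abbreviation = "gRPC"
	fmt.Printf("%s/%s\n", grpc.Abbreviation, grpc.Version)
}
```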

* Fix the issues in analysis and representation of HTTP/2 (gRPC) protocol

* Fix the issues in summary section of details layout for HTTP/2 (gRPC) protocol

* Fix the read errors that freeze the sniffer in HTTP and Kafka

* Fix the issues in HTTP POST data

* Fix one more issue in HTTP POST data

* Fix an infinite loop in Kafka

* Fix another freezing issue in Kafka

* Revert "UI Infra - Support multiple entry types + refactoring (#211)"

This reverts commit f74a52d4dc.

* Fix more issues that are introduced by the merge

* Fix the status code in the summary section

* Add the cleaner again (why was it removed?) and add a TODO on the extension loop

* Fix the Dockerfile (remove the deletion of the `.env` file) - it is listed in `.dockerignore`, and the build fails if the file does not exist

* Fix `GetEntries` (the `/entries` endpoint) - it now works with `tapApi.BaseEntryDetails` (moved from `shared`)

* Fix an issue in the UI summary section

* Refactor the protocol payload structs

* Fix a log message in the passive tapper

* Adapt `APP_PORTS` environment variable to the new extension system and change its format to `APP_PORTS='{"http": ["8001"]}' `
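
The new `APP_PORTS` format is plain JSON mapping an extension name to its ports, so decoding it is a small `encoding/json` job. Sketch only: `parseAppPorts` is a hypothetical helper, and a later commit in this series removes `APP_PORTS` altogether.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// parseAppPorts decodes APP_PORTS='{"http": ["8001"]}' into a map of
// extension name -> port list.
func parseAppPorts(env string) (map[string][]string, error) {
	portsByExtension := make(map[string][]string)
	if err := json.Unmarshal([]byte(env), &portsByExtension); err != nil {
		return nil, err
	}
	return portsByExtension, nil
}

func main() {
	ports, err := parseAppPorts(`{"http": ["8001"]}`)
	if err != nil {
		panic(err)
	}
	fmt.Println(ports["http"][0])
}
```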

* Revert "fix dockerfile (remove deleting .env file) - it is found in dockerignore and fails to build if the file not exists"

This reverts commit 4f514ae1f4.

* Bring in the necessary changes from f74a52d4dc

* Open the API server URL in the web browser as soon as Mizu is ready

* Make the TCP reader consist of a single Goroutine (instead of two) and try to dissect in both client and server mode by rewinding

* Swap `TcpID` without overwriting it

* Sort extensions by priority

* Try to dissect with looping through all the extensions

* Fix the `getStreamProps` function (it should be passed from the CLI as it was before)

* Turn TCP reader back into two Goroutines (client and server)

* Fix a typo

* Learn `isClient` from the TCP stream

* Set `viewer` style `overflow: "auto"`

* Fix the memory leaks in AMQP and Kafka dissectors

* Revert some of the changes in be7c65eb6d

* Remove `allExtensionPorts` since it's no longer needed

* Remove `APP_PORTS` since it's no longer needed

* Fix all of the minor issues in the React code

* Check Kafka header size and fail-fast

* Break the dissectors loop upon a successful dissection

* Don't break the dissector loop. Protocols might collide

* Improve the HTTP request-response counter (still not perfect)

* Make the HTTP request-response counter perfect
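
A sketch of what a correct counter has to guarantee: requests and responses are numbered independently but atomically per stream, so the N-th request pairs with the N-th response even with a client reader and a server reader running concurrently. The real type is `api.CounterPair`; this layout is assumed.

```go
package main

import (
	"fmt"
	"sync"
)

// CounterPair numbers requests and responses per TCP stream so that the
// N-th request can be matched with the N-th response. The locking
// discipline here is illustrative, not the actual mizu code.
type CounterPair struct {
	mutex    sync.Mutex
	Request  uint
	Response uint
}

func (c *CounterPair) NextRequest() uint {
	c.mutex.Lock()
	defer c.mutex.Unlock()
	c.Request++
	return c.Request
}

func (c *CounterPair) NextResponse() uint {
	c.mutex.Lock()
	defer c.mutex.Unlock()
	c.Response++
	return c.Response
}

func main() {
	counter := &CounterPair{}
	reqNum := counter.NextRequest()
	resNum := counter.NextResponse()
	fmt.Println(reqNum == resNum) // the pair shares an index
}
```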

* Revert "Revert some of the changes in be7c65eb6d3fb657a059707da3ca559937e59739"

This reverts commit 08e7d786d8.

* Bring back `filterItems` and `isHealthCheckByUserAgent` functions

* Remove some development artifacts

* Remove unused and commented-out lines that are no longer relevant

* Fix the performance in TCP stream factory. Make it create two `tcpReader`(s) per extension

* Change a log to debug

* Make `*api.CounterPair` a field of `tcpReader`

* Set `isTapTarget` to always `true` again since `filterAuthorities` implementation has problems

* Remove a variable that's only used for logging even though not introduced by this branch

* Bring back the `NumberOfRules` field of `ApplicableRules` struct

* Remove the unused `NewEntry` function

* Move `k8sResolver == nil` check to a more appropriate place

* Default `healthChecksUserAgentHeaders` to an empty array (matching the default config value)

* Remove a spammy `console.log`

* Fix a crash caused by the Rules button (the service was accessed via an incorrect property)

* Ignore all `.env*` files in the Docker build

* Improve caching in the Dockerfile: copy only `go.mod` before running `go mod download`

* Check for errors while loading an extension

* Add a comment about why `Protocol` is not a pointer

* Bring back the call to `deleteOlderThan`

* Remove the `nil` check

* Reduce the maximum allowed AMQP message from 128MB to 1MB

* Fix an error that only occurs when a Kafka broker is initiating

* Revert the change in b2abd7b990

* Fix the service name resolution in all protocols

* Remove the `anydirection` flag and fix the issue in `filterAuthorities`

* Pass `sync.Map` by reference to `deleteOlderThan` method

* Fix the packet capture issue in standalone mode that's introduced by the removal of `anydirection`

* Temporarily resolve the memory exhaustion in AMQP

* Fix a nil pointer dereference error

* Fix the CLI build error

* Fix a memory leak that's identified by `pprof`

Co-authored-by: Roee Gadot <roee.gadot@up9.com>
Co-authored-by: Nimrod Gilboa Markevich <nimrod@up9.com>
Commit 366c1d0c6c by M. Mert Yıldıran, committed by GitHub on 2021-09-02 14:34:06 +03:00
Parent: 17fa163ee3
111 changed files with 14396 additions and 1947 deletions


@@ -0,0 +1,343 @@
package main

import (
	"fmt"
	"io"
	"reflect"
	"time"

	"github.com/up9inc/mizu/tap/api"
)

type Response struct {
	Size          int32
	CorrelationID int32
	Payload       interface{}
}

func ReadResponse(r io.Reader, tcpID *api.TcpID, emitter api.Emitter) (err error) {
	d := &decoder{reader: r, remain: 4}
	size := d.readInt32()

	if size > 1000000 {
		return fmt.Errorf("A Kafka message cannot be bigger than 1MB")
	}

	if size < 4 {
		return fmt.Errorf("A Kafka response header cannot be smaller than 4 bytes")
	}

	if err = d.err; err != nil {
		err = dontExpectEOF(err)
		return err
	}

	d.remain = int(size)
	correlationID := d.readInt32()

	var payload interface{}
	response := &Response{
		Size:          size,
		CorrelationID: correlationID,
		Payload:       payload,
	}

	key := fmt.Sprintf(
		"%s:%s->%s:%s::%d",
		tcpID.DstIP,
		tcpID.DstPort,
		tcpID.SrcIP,
		tcpID.SrcPort,
		correlationID,
	)
	reqResPair := reqResMatcher.registerResponse(key, response)
	if reqResPair == nil {
		return fmt.Errorf("Couldn't match a Kafka response to a Kafka request in 3 seconds!")
	}
	apiKey := reqResPair.Request.ApiKey
	apiVersion := reqResPair.Request.ApiVersion
	switch apiKey {
	case Metadata:
		var mt interface{}
		var metadataResponse interface{}
		if apiVersion >= 11 {
			types := makeTypes(reflect.TypeOf(&MetadataResponseV11{}).Elem())
			mt = types[0]
			metadataResponse = &MetadataResponseV11{}
		} else if apiVersion >= 10 {
			types := makeTypes(reflect.TypeOf(&MetadataResponseV10{}).Elem())
			mt = types[0]
			metadataResponse = &MetadataResponseV10{}
		} else if apiVersion >= 8 {
			types := makeTypes(reflect.TypeOf(&MetadataResponseV8{}).Elem())
			mt = types[0]
			metadataResponse = &MetadataResponseV8{}
		} else if apiVersion >= 7 {
			types := makeTypes(reflect.TypeOf(&MetadataResponseV7{}).Elem())
			mt = types[0]
			metadataResponse = &MetadataResponseV7{}
		} else if apiVersion >= 5 {
			types := makeTypes(reflect.TypeOf(&MetadataResponseV5{}).Elem())
			mt = types[0]
			metadataResponse = &MetadataResponseV5{}
		} else if apiVersion >= 3 {
			types := makeTypes(reflect.TypeOf(&MetadataResponseV3{}).Elem())
			mt = types[0]
			metadataResponse = &MetadataResponseV3{}
		} else if apiVersion >= 2 {
			types := makeTypes(reflect.TypeOf(&MetadataResponseV2{}).Elem())
			mt = types[0]
			metadataResponse = &MetadataResponseV2{}
		} else if apiVersion >= 1 {
			types := makeTypes(reflect.TypeOf(&MetadataResponseV1{}).Elem())
			mt = types[0]
			metadataResponse = &MetadataResponseV1{}
		} else {
			types := makeTypes(reflect.TypeOf(&MetadataResponseV0{}).Elem())
			mt = types[0]
			metadataResponse = &MetadataResponseV0{}
		}
		mt.(messageType).decode(d, valueOf(metadataResponse))
		reqResPair.Response.Payload = metadataResponse
	case ApiVersions:
		var mt interface{}
		var apiVersionsResponse interface{}
		if apiVersion >= 1 {
			types := makeTypes(reflect.TypeOf(&ApiVersionsResponseV1{}).Elem())
			mt = types[0]
			apiVersionsResponse = &ApiVersionsResponseV1{}
		} else {
			types := makeTypes(reflect.TypeOf(&ApiVersionsResponseV0{}).Elem())
			mt = types[0]
			apiVersionsResponse = &ApiVersionsResponseV0{}
		}
		mt.(messageType).decode(d, valueOf(apiVersionsResponse))
		reqResPair.Response.Payload = apiVersionsResponse
	case Produce:
		var mt interface{}
		var produceResponse interface{}
		if apiVersion >= 8 {
			types := makeTypes(reflect.TypeOf(&ProduceResponseV8{}).Elem())
			mt = types[0]
			produceResponse = &ProduceResponseV8{}
		} else if apiVersion >= 5 {
			types := makeTypes(reflect.TypeOf(&ProduceResponseV5{}).Elem())
			mt = types[0]
			produceResponse = &ProduceResponseV5{}
		} else if apiVersion >= 2 {
			types := makeTypes(reflect.TypeOf(&ProduceResponseV2{}).Elem())
			mt = types[0]
			produceResponse = &ProduceResponseV2{}
		} else if apiVersion >= 1 {
			types := makeTypes(reflect.TypeOf(&ProduceResponseV1{}).Elem())
			mt = types[0]
			produceResponse = &ProduceResponseV1{}
		} else {
			types := makeTypes(reflect.TypeOf(&ProduceResponseV0{}).Elem())
			mt = types[0]
			produceResponse = &ProduceResponseV0{}
		}
		mt.(messageType).decode(d, valueOf(produceResponse))
		reqResPair.Response.Payload = produceResponse
	case Fetch:
		var mt interface{}
		var fetchResponse interface{}
		if apiVersion >= 11 {
			types := makeTypes(reflect.TypeOf(&FetchResponseV11{}).Elem())
			mt = types[0]
			fetchResponse = &FetchResponseV11{}
		} else if apiVersion >= 7 {
			types := makeTypes(reflect.TypeOf(&FetchResponseV7{}).Elem())
			mt = types[0]
			fetchResponse = &FetchResponseV7{}
		} else if apiVersion >= 5 {
			types := makeTypes(reflect.TypeOf(&FetchResponseV5{}).Elem())
			mt = types[0]
			fetchResponse = &FetchResponseV5{}
		} else if apiVersion >= 4 {
			types := makeTypes(reflect.TypeOf(&FetchResponseV4{}).Elem())
			mt = types[0]
			fetchResponse = &FetchResponseV4{}
		} else if apiVersion >= 1 {
			types := makeTypes(reflect.TypeOf(&FetchResponseV1{}).Elem())
			mt = types[0]
			fetchResponse = &FetchResponseV1{}
		} else {
			types := makeTypes(reflect.TypeOf(&FetchResponseV0{}).Elem())
			mt = types[0]
			fetchResponse = &FetchResponseV0{}
		}
		mt.(messageType).decode(d, valueOf(fetchResponse))
		reqResPair.Response.Payload = fetchResponse
	case ListOffsets:
		var mt interface{}
		var listOffsetsResponse interface{}
		if apiVersion >= 4 {
			types := makeTypes(reflect.TypeOf(&ListOffsetsResponseV4{}).Elem())
			mt = types[0]
			listOffsetsResponse = &ListOffsetsResponseV4{}
		} else if apiVersion >= 2 {
			types := makeTypes(reflect.TypeOf(&ListOffsetsResponseV2{}).Elem())
			mt = types[0]
			listOffsetsResponse = &ListOffsetsResponseV2{}
		} else if apiVersion >= 1 {
			types := makeTypes(reflect.TypeOf(&ListOffsetsResponseV1{}).Elem())
			mt = types[0]
			listOffsetsResponse = &ListOffsetsResponseV1{}
		} else {
			types := makeTypes(reflect.TypeOf(&ListOffsetsResponseV0{}).Elem())
			mt = types[0]
			listOffsetsResponse = &ListOffsetsResponseV0{}
		}
		mt.(messageType).decode(d, valueOf(listOffsetsResponse))
		reqResPair.Response.Payload = listOffsetsResponse
	case CreateTopics:
		var mt interface{}
		var createTopicsResponse interface{}
		if apiVersion >= 7 {
			types := makeTypes(reflect.TypeOf(&CreateTopicsResponseV0{}).Elem())
			mt = types[0]
			createTopicsResponse = &CreateTopicsResponseV0{}
		} else if apiVersion >= 5 {
			types := makeTypes(reflect.TypeOf(&CreateTopicsResponseV5{}).Elem())
			mt = types[0]
			createTopicsResponse = &CreateTopicsResponseV5{}
		} else if apiVersion >= 2 {
			types := makeTypes(reflect.TypeOf(&CreateTopicsResponseV2{}).Elem())
			mt = types[0]
			createTopicsResponse = &CreateTopicsResponseV2{}
		} else if apiVersion >= 1 {
			types := makeTypes(reflect.TypeOf(&CreateTopicsResponseV1{}).Elem())
			mt = types[0]
			createTopicsResponse = &CreateTopicsResponseV1{}
		} else {
			types := makeTypes(reflect.TypeOf(&CreateTopicsResponseV0{}).Elem())
			mt = types[0]
			createTopicsResponse = &CreateTopicsResponseV0{}
		}
		mt.(messageType).decode(d, valueOf(createTopicsResponse))
		reqResPair.Response.Payload = createTopicsResponse
	case DeleteTopics:
		var mt interface{}
		var deleteTopicsResponse interface{}
		if apiVersion >= 6 {
			types := makeTypes(reflect.TypeOf(&DeleteTopicsReponseV6{}).Elem())
			mt = types[0]
			deleteTopicsResponse = &DeleteTopicsReponseV6{}
		} else if apiVersion >= 5 {
			types := makeTypes(reflect.TypeOf(&DeleteTopicsReponseV5{}).Elem())
			mt = types[0]
			deleteTopicsResponse = &DeleteTopicsReponseV5{}
		} else if apiVersion >= 1 {
			types := makeTypes(reflect.TypeOf(&DeleteTopicsReponseV1{}).Elem())
			mt = types[0]
			deleteTopicsResponse = &DeleteTopicsReponseV1{}
		} else {
			types := makeTypes(reflect.TypeOf(&DeleteTopicsReponseV0{}).Elem())
			mt = types[0]
			deleteTopicsResponse = &DeleteTopicsReponseV0{}
		}
		mt.(messageType).decode(d, valueOf(deleteTopicsResponse))
		reqResPair.Response.Payload = deleteTopicsResponse
	default:
		return fmt.Errorf("(Response) Not implemented: %s", apiKey)
	}
	connectionInfo := &api.ConnectionInfo{
		ClientIP:   tcpID.SrcIP,
		ClientPort: tcpID.SrcPort,
		ServerIP:   tcpID.DstIP,
		ServerPort: tcpID.DstPort,
		IsOutgoing: true,
	}

	item := &api.OutputChannelItem{
		Protocol:       _protocol,
		Timestamp:      time.Now().UnixNano() / int64(time.Millisecond),
		ConnectionInfo: connectionInfo,
		Pair: &api.RequestResponsePair{
			Request: api.GenericMessage{
				IsRequest:   true,
				CaptureTime: time.Now(),
				Payload: KafkaPayload{
					Data: &KafkaWrapper{
						Method:  apiNames[apiKey],
						Url:     "",
						Details: reqResPair.Request,
					},
				},
			},
			Response: api.GenericMessage{
				IsRequest:   false,
				CaptureTime: time.Now(),
				Payload: KafkaPayload{
					Data: &KafkaWrapper{
						Method:  apiNames[apiKey],
						Url:     "",
						Details: reqResPair.Response,
					},
				},
			},
		},
	}
	emitter.Emit(item)

	if i := int(apiKey); i < 0 || i >= len(apiTypes) {
		err = fmt.Errorf("unsupported api key: %d", i)
		return err
	}

	t := &apiTypes[apiKey]
	if t == nil {
		err = fmt.Errorf("unsupported api: %s", apiNames[apiKey])
		return err
	}

	d.discardAll()

	return nil
}
func WriteResponse(w io.Writer, apiVersion int16, correlationID int32, msg Message) error {
	apiKey := msg.ApiKey()

	if i := int(apiKey); i < 0 || i >= len(apiTypes) {
		return fmt.Errorf("unsupported api key: %d", i)
	}

	t := &apiTypes[apiKey]
	if t == nil {
		return fmt.Errorf("unsupported api: %s", apiNames[apiKey])
	}

	minVersion := t.minVersion()
	maxVersion := t.maxVersion()

	if apiVersion < minVersion || apiVersion > maxVersion {
		return fmt.Errorf("unsupported %s version: v%d not in range v%d-v%d", apiKey, apiVersion, minVersion, maxVersion)
	}

	r := &t.responses[apiVersion-minVersion]
	v := valueOf(msg)
	b := newPageBuffer()
	defer b.unref()

	e := &encoder{writer: b}
	e.writeInt32(0) // placeholder for the response size
	e.writeInt32(correlationID)
	r.encode(e, v)

	err := e.err
	if err == nil {
		size := packUint32(uint32(b.Size()) - 4)
		b.WriteAt(size[:], 0)
		_, err = b.WriteTo(w)
	}

	return err
}