Refactor Mizu, define an extension API and add new protocols: AMQP, Kafka (#224)

* Separate the HTTP-related code into `extensions/http` as a Go plugin

* Move `extensions` folder into `tap` folder

* Move HTTP files into `tap/extensions/lib` for now

* Replace `orcaman/concurrent-map` with `sync.Map`

* Remove `grpc_assembler.go`

* Remove `github.com/up9inc/mizu/tap/extensions/http/lib`

* Add a build script to automatically build extensions from a known path and load them

* Start to define the extension API

* Implement the `run()` function for the TCP stream

* Add support for defining multiple ports to the extension API

* Set the extension name inside the extension

* Declare the `Dissect` function in the extension API

* Dissect HTTP request from inside the HTTP extension

* Distinguish between outbound and inbound ports

* Dissect HTTP response from inside the HTTP extension

* Bring back the HTTP request-response pair matcher

* Return a `*api.RequestResponsePair` from the dissection

* Bring back the gRPC-HTTP/2 parser

* Fix the issues in `handleHTTP1ClientStream` and `handleHTTP1ServerStream`

* Call a function pointer to emit dissected data back to the `tap` package

* Roee's changes: trying to fix the agent to work with the "api" object - ***still not working***

* Fix a small mistake in the merge conflicts

* Fix the issues that are introduced by the merge conflict

* Add `Emitter` interface to the API and send `OutputChannelItem`(s) to `OutputChannel`
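
A rough sketch of that emitter contract, with names taken from this commit; the field shapes are assumptions, the authoritative definitions live in `tap/api`:

```go
package api

// OutputChannelItem is a stand-in for the real struct, which also carries
// protocol and connection metadata alongside the dissected pair.
type OutputChannelItem struct {
	Protocol  string
	Timestamp int64
	Pair      interface{}
}

// Emitter is how a protocol extension hands dissected items back to the tap package.
type Emitter interface {
	Emit(item *OutputChannelItem)
}

// Emitting is a minimal Emitter backed by a channel that the tap package drains.
type Emitting struct {
	OutputChannel chan *OutputChannelItem
}

func (e *Emitting) Emit(item *OutputChannelItem) {
	e.OutputChannel <- item
}
```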

* Fix the `HTTP1` handlers

* Set `ConnectionInfo` in HTTP handlers

* Fix the `Dockerfile` to build the extensions

* Remove some unwanted code

* Re-enable `getStreamProps` function

* Migrate back from `gopacket/tcpassembly` to `gopacket/reassembly`

* Introduce `HTTPPayload` struct and `HTTPPayloader` interface to `MarshalJSON()` all the data structures that are returned by the HTTP protocol
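
A minimal sketch of that marshalling pattern; the type names come from this commit, the field layout and the demo in `main` are assumptions:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// HTTPPayloader is implemented by every structure the HTTP extension returns,
// so the tap package can serialize all of them uniformly.
type HTTPPayloader interface {
	MarshalJSON() ([]byte, error)
}

// HTTPPayload wraps the dissected data with a type tag.
type HTTPPayload struct {
	Type string
	Data interface{}
}

func (p HTTPPayload) MarshalJSON() ([]byte, error) {
	return json.Marshal(map[string]interface{}{"type": p.Type, "data": p.Data})
}

func main() {
	out, _ := HTTPPayload{Type: "http_request", Data: map[string]string{"method": "GET"}}.MarshalJSON()
	fmt.Println(string(out)) // {"data":{"method":"GET"},"type":"http_request"}
}
```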

* Read `socketHarOutChannel` instead of `filteredHarChannel`

* Connect `OutputChannelItem` to the latest WebSocket connection, which finally makes the web UI work again

* Add `.env.example` to React app

* Marshal and unmarshal `*http.Request`, `*http.Response` pairs

* Move `loadExtensions` into `main.go` and map extensions into `extensionsMap`

* Add `Summarize()` method to the `Dissector` interface

* Add `Analyze` method to the `Dissector` interface and `MizuEntry` to the extension API

* Add a `Protocol` struct and make it affect the UI

* Refactor `BaseEntryDetails` struct and display the source and destination ports in the UI

* Display the protocol name inside the details layout

* Add `Represent` method to the `Dissector` interface and manipulate the UI through this method
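
Together, `Dissect`, `Analyze`, `Summarize` and `Represent` outline the dissector contract. A sketch with signatures reconstructed from these commit titles rather than copied from the source, reusing the `Emitter` and `OutputChannelItem` sketched above:

```go
package api

import "bufio"

// Stub types standing in for the real definitions in tap/api.
type (
	TcpID            struct{ SrcIP, SrcPort, DstIP, DstPort string }
	MizuEntry        struct{}
	BaseEntryDetails struct{}
)

// Dissector is the interface every protocol extension (HTTP, AMQP, Kafka)
// implements; signatures are approximate.
type Dissector interface {
	// Dissect consumes one direction of a TCP stream and emits request-response pairs.
	Dissect(b *bufio.Reader, isClient bool, tcpID *TcpID, emitter Emitter) error
	// Analyze turns an emitted item into a MizuEntry for storage.
	Analyze(item *OutputChannelItem, entryID string) *MizuEntry
	// Summarize produces the one-line summary shown in the entries list.
	Summarize(entry *MizuEntry) *BaseEntryDetails
	// Represent renders an entry into the generic building blocks the UI draws.
	Represent(entry *MizuEntry) ([]byte, error)
}
```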

* Make the protocol color affect the details layout color and write protocol abbreviation vertically

* Remove everything HTTP related from the `tap` package and make the extension system fully functional

* Fix the TypeScript warnings

* Bring the AMQP-related files into the `amqp` directory

* Add `--nodefrag` flag to the tapper and bring in the main AMQP code

* Implement the AMQP `BasicPublish` and fix some issues in the UI when the response payload is missing

* Implement `representBasicPublish` method

* Fix several minor issues

* Implement the AMQP `BasicDeliver`

* Implement the AMQP `QueueDeclare`

* Implement the AMQP `ExchangeDeclare`

* Implement the AMQP `ConnectionStart`

* Implement the AMQP `ConnectionClose`

* Implement the AMQP `QueueBind`

* Implement the AMQP `BasicConsume`

* Fix an issue in `ConnectionStart`

* Fix a linter error

* Bring the Kafka-related files into the `kafka` directory

* Fix the build errors in Kafka Go files

* Implement `Dissect` method of Kafka and adapt request-response pair matcher to asynchronous client-server stream
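
Kafka multiplexes many in-flight requests over one connection, so the matcher cannot assume strict request/response ordering; it has to key on the correlation ID. A sketch under assumed names:

```go
package kafka

import (
	"fmt"
	"sync"
)

// requestResponseMatcher pairs Kafka requests with responses that may arrive
// out of order. This is a sketch; the names approximate the extension's internals.
type requestResponseMatcher struct {
	openMessages sync.Map // matcher key -> whichever side arrived first
}

// matcherKey joins the connection 4-tuple with the correlation ID, e.g.
// "10.0.0.1:51000->10.0.0.2:9092::7".
func matcherKey(srcIP, srcPort, dstIP, dstPort string, correlationID int32) string {
	return fmt.Sprintf("%s:%s->%s:%s::%d", srcIP, srcPort, dstIP, dstPort, correlationID)
}

// registerRequest returns the matching response if it was seen first,
// otherwise it parks the request and returns nil.
func (m *requestResponseMatcher) registerRequest(key string, request interface{}) interface{} {
	if response, ok := m.openMessages.LoadAndDelete(key); ok {
		return response
	}
	m.openMessages.Store(key, request)
	return nil
}
```

Whichever half arrives last completes the pair; the response side builds the same key with source and destination swapped.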

* Do the "Is reversed?" checked inside `getStreamProps` and fix an issue in Kafka `Dissect` method

* Implement `Analyze`, `Summarize` methods of Kafka

* Implement the representations for Kafka `Metadata`, `RequestHeader` and `ResponseHeader`

* Refactor the AMQP and Kafka implementations to create the summary string only inside the `Analyze` method

* Implement the representations for Kafka `ApiVersions`

* Implement the representations for Kafka `Produce`

* Implement the representations for Kafka `Fetch`

* Implement the representations for Kafka `ListOffsets`, `CreateTopics` and `DeleteTopics`

* Fix the encoding of AMQP `BasicPublish` and `BasicDeliver` body

* Remove the unnecessary logging

* Remove more logging

* Introduce `Version` field to `Protocol` struct for dynamically switching the HTTP protocol to HTTP/2
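
A plausible shape for that struct; only `Version` and the fields implied by the earlier UI commits (colors, abbreviation, ports, priority) are grounded here, the rest is an assumption:

```go
package api

// Protocol describes a dissector to the web UI. Version is the part this
// commit adds, letting the HTTP extension flip a stream from "1.1" to "2.0"
// when it detects gRPC. Field names are approximate.
type Protocol struct {
	Name            string   `json:"name"`
	Abbreviation    string   `json:"abbreviation"`
	Version         string   `json:"version"`
	BackgroundColor string   `json:"backgroundColor"`
	ForegroundColor string   `json:"foregroundColor"`
	Ports           []string `json:"ports"`
	Priority        int      `json:"priority"`
}
```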

* Fix the issues in analysis and representation of HTTP/2 (gRPC) protocol

* Fix the issues in summary section of details layout for HTTP/2 (gRPC) protocol

* Fix the read errors that freeze the sniffer in HTTP and Kafka

* Fix the issues in HTTP POST data

* Fix one more issue in HTTP POST data

* Fix an infinite loop in Kafka

* Fix another freezing issue in Kafka

* Revert "UI Infra - Support multiple entry types + refactoring (#211)"

This reverts commit f74a52d4dc.

* Fix more issues that are introduced by the merge

* Fix the status code in the summary section

* Add the cleaner again (why was it removed?) and add a TODO on the extension loop

* Fix the Dockerfile (remove the deletion of the .env file): the file is listed in .dockerignore and the build fails if it does not exist

* Fix `GetEntries` (the "/entries" endpoint) to work with `tapApi.BaseEntryDetails` (moved from shared)

* Fix an issue in the UI summary section

* Refactor the protocol payload structs

* Fix a log message in the passive tapper

* Adapt the `APP_PORTS` environment variable to the new extension system and change its format to `APP_PORTS='{"http": ["8001"]}'`
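
In the new format the variable holds a JSON object mapping extension names to port lists, so the tapper can read it with a single `json.Unmarshal`. A sketch, not the exact tapper code; note the variable is removed again further down:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	// e.g. APP_PORTS='{"http": ["8001"]}'
	raw := os.Getenv("APP_PORTS")
	portsByExtension := make(map[string][]string)
	if raw != "" {
		if err := json.Unmarshal([]byte(raw), &portsByExtension); err != nil {
			fmt.Fprintf(os.Stderr, "invalid APP_PORTS: %v\n", err)
			os.Exit(1)
		}
	}
	fmt.Println(portsByExtension) // map[http:[8001]]
}
```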

* Revert "fix dockerfile (remove deleting .env file) - it is found in dockerignore and fails to build if the file not exists"

This reverts commit 4f514ae1f4.

* Bring in the necessary changes from f74a52d4dc

* Open the API server URL in the web browser as soon as Mizu is ready

* Make the TCP reader consist of a single goroutine (instead of two) and try to dissect in both client and server mode by rewinding

* Swap `TcpID` without overwriting it

* Sort extension by priority

* Try to dissect with looping through all the extensions

* Fix the `getStreamProps` function (it should be passed from the CLI as it was before)

* Turn the TCP reader back into two goroutines (client and server)

* Fix a typo

* Learn `isClient` from the TCP stream

* Set `viewer` style `overflow: "auto"`

* Fix the memory leaks in AMQP and Kafka dissectors

* Revert some of the changes in be7c65eb6d

* Remove `allExtensionPorts` since it's no longer needed

* Remove `APP_PORTS` since it's no longer needed

* Fix all of the minor issues in the React code

* Check Kafka header size and fail-fast

* Break the dissectors loop upon a successful dissection

* Don't break the dissector loop. Protocols might collide

* Improve the HTTP request-response counter (still not perfect)

* Make the HTTP request-response counter perfect

* Revert "Revert some of the changes in be7c65eb6d3fb657a059707da3ca559937e59739"

This reverts commit 08e7d786d8.

* Bring back `filterItems` and `isHealthCheckByUserAgent` functions

* Remove some development artifacts

* Remove unused and commented-out lines that are no longer relevant

* Fix the performance of the TCP stream factory. Make it create two `tcpReader`(s) per extension

* Change a log to debug

* Make `*api.CounterPair` a field of `tcpReader`

* Set `isTapTarget` to always `true` again since `filterAuthorities` implementation has problems

* Remove a variable that's only used for logging, even though it was not introduced by this branch

* Bring back the `NumberOfRules` field of `ApplicableRules` struct

* Remove the unused `NewEntry` function

* Move `k8sResolver == nil` check to a more appropriate place

* Make the default `healthChecksUserAgentHeaders` an empty array (like the default config value)

* Remove spammy `console.log` calls

* Fix the Rules button crashing the app (the service was accessed via an incorrect property)

* Ignore all .env* files in the Docker build

* Improve caching in the Dockerfile: copy only go.mod before `go mod download`

* Check for errors while loading an extension

* Add a comment about why `Protocol` is not a pointer

* Bring back the call to `deleteOlderThan`

* Remove the `nil` check

* Reduce the maximum allowed AMQP message from 128MB to 1MB

* Fix an error that only occurs when a Kafka broker is initiating

* Revert the change in b2abd7b990

* Fix the service name resolution in all protocols

* Remove the `anydirection` flag and fix the issue in `filterAuthorities`

* Pass `sync.Map` by reference to `deleteOlderThan` method
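
The pointer matters: `sync.Map` embeds a mutex and must not be copied after first use, which `go vet`'s copylocks check flags. A sketch assuming the map stores per-stream last-activity timestamps:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// deleteOlderThan takes the map by pointer: passing a sync.Map by value
// copies its internal lock and the copy silently diverges from the original.
func deleteOlderThan(streams *sync.Map, cutoff time.Time) int {
	deleted := 0
	streams.Range(func(key, value interface{}) bool {
		if lastSeen, ok := value.(time.Time); ok && lastSeen.Before(cutoff) {
			streams.Delete(key)
			deleted++
		}
		return true
	})
	return deleted
}

func main() {
	var streams sync.Map
	streams.Store("10.0.0.1:51000->10.0.0.2:9092", time.Now().Add(-time.Hour))
	fmt.Println(deleteOlderThan(&streams, time.Now().Add(-time.Minute))) // 1
}
```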

* Fix the packet capture issue in standalone mode that's introduced by the removal of `anydirection`

* Temporarily resolve the memory exhaustion in AMQP

* Fix a nil pointer dereference error

* Fix the CLI build error

* Fix a memory leak that's identified by `pprof`

Co-authored-by: Roee Gadot <roee.gadot@up9.com>
Co-authored-by: Nimrod Gilboa Markevich <nimrod@up9.com>

@@ -0,0 +1,290 @@
package main

import (
	"fmt"
	"io"
	"reflect"

	"github.com/up9inc/mizu/tap/api"
)

type Request struct {
	Size          int32
	ApiKey        ApiKey
	ApiVersion    int16
	CorrelationID int32
	ClientID      string
	Payload       interface{}
}
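
// ReadRequest reads a single Kafka request from r: it validates the size,
// decodes the request header, decodes the version-specific payload, and
// registers the request with the matcher under a connection + correlation ID
// key so that the corresponding response can be paired with it later.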
func ReadRequest(r io.Reader, tcpID *api.TcpID) (apiKey ApiKey, apiVersion int16, err error) {
	d := &decoder{reader: r, remain: 4}
	size := d.readInt32()

	if size > 1000000 {
		return 0, 0, fmt.Errorf("A Kafka message cannot be bigger than 1MB")
	}
	if size < 8 {
		return 0, 0, fmt.Errorf("A Kafka request header cannot be smaller than 8 bytes")
	}

	if err = d.err; err != nil {
		err = dontExpectEOF(err)
		return 0, 0, err
	}

	d.remain = int(size)
	apiKey = ApiKey(d.readInt16())
	apiVersion = d.readInt16()
	correlationID := d.readInt32()
	clientID := d.readString()

	if i := int(apiKey); i < 0 || i >= len(apiTypes) {
		err = fmt.Errorf("unsupported api key: %d", i)
		return apiKey, apiVersion, err
	}

	if err = d.err; err != nil {
		err = dontExpectEOF(err)
		return apiKey, apiVersion, err
	}

	t := &apiTypes[apiKey]
	if t == nil {
		err = fmt.Errorf("unsupported api: %s", apiNames[apiKey])
		return apiKey, apiVersion, err
	}
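
	// Decode the version-specific payload: each case instantiates the newest
	// request struct that the advertised apiVersion still supports and decodes
	// the wire bytes into it.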
	var payload interface{}
	switch apiKey {
	case Metadata:
		var mt interface{}
		var metadataRequest interface{}
		if apiVersion >= 11 {
			types := makeTypes(reflect.TypeOf(&MetadataRequestV11{}).Elem())
			mt = types[0]
			metadataRequest = &MetadataRequestV11{}
		} else if apiVersion >= 10 {
			types := makeTypes(reflect.TypeOf(&MetadataRequestV10{}).Elem())
			mt = types[0]
			metadataRequest = &MetadataRequestV10{}
		} else if apiVersion >= 8 {
			types := makeTypes(reflect.TypeOf(&MetadataRequestV8{}).Elem())
			mt = types[0]
			metadataRequest = &MetadataRequestV8{}
		} else if apiVersion >= 4 {
			types := makeTypes(reflect.TypeOf(&MetadataRequestV4{}).Elem())
			mt = types[0]
			metadataRequest = &MetadataRequestV4{}
		} else {
			types := makeTypes(reflect.TypeOf(&MetadataRequestV0{}).Elem())
			mt = types[0]
			metadataRequest = &MetadataRequestV0{}
		}
		mt.(messageType).decode(d, valueOf(metadataRequest))
		payload = metadataRequest
	case ApiVersions:
		var mt interface{}
		var apiVersionsRequest interface{}
		if apiVersion >= 3 {
			types := makeTypes(reflect.TypeOf(&ApiVersionsRequestV3{}).Elem())
			mt = types[0]
			apiVersionsRequest = &ApiVersionsRequestV3{}
		} else {
			types := makeTypes(reflect.TypeOf(&ApiVersionsRequestV0{}).Elem())
			mt = types[0]
			apiVersionsRequest = &ApiVersionsRequestV0{}
		}
		mt.(messageType).decode(d, valueOf(apiVersionsRequest))
		payload = apiVersionsRequest
	case Produce:
		var mt interface{}
		var produceRequest interface{}
		if apiVersion >= 3 {
			types := makeTypes(reflect.TypeOf(&ProduceRequestV3{}).Elem())
			mt = types[0]
			produceRequest = &ProduceRequestV3{}
		} else {
			types := makeTypes(reflect.TypeOf(&ProduceRequestV0{}).Elem())
			mt = types[0]
			produceRequest = &ProduceRequestV0{}
		}
		mt.(messageType).decode(d, valueOf(produceRequest))
		payload = produceRequest
	case Fetch:
		var mt interface{}
		var fetchRequest interface{}
		if apiVersion >= 11 {
			types := makeTypes(reflect.TypeOf(&FetchRequestV11{}).Elem())
			mt = types[0]
			fetchRequest = &FetchRequestV11{}
		} else if apiVersion >= 9 {
			types := makeTypes(reflect.TypeOf(&FetchRequestV9{}).Elem())
			mt = types[0]
			fetchRequest = &FetchRequestV9{}
		} else if apiVersion >= 7 {
			types := makeTypes(reflect.TypeOf(&FetchRequestV7{}).Elem())
			mt = types[0]
			fetchRequest = &FetchRequestV7{}
		} else if apiVersion >= 5 {
			types := makeTypes(reflect.TypeOf(&FetchRequestV5{}).Elem())
			mt = types[0]
			fetchRequest = &FetchRequestV5{}
		} else if apiVersion >= 4 {
			types := makeTypes(reflect.TypeOf(&FetchRequestV4{}).Elem())
			mt = types[0]
			fetchRequest = &FetchRequestV4{}
		} else if apiVersion >= 3 {
			types := makeTypes(reflect.TypeOf(&FetchRequestV3{}).Elem())
			mt = types[0]
			fetchRequest = &FetchRequestV3{}
		} else {
			types := makeTypes(reflect.TypeOf(&FetchRequestV0{}).Elem())
			mt = types[0]
			fetchRequest = &FetchRequestV0{}
		}
		mt.(messageType).decode(d, valueOf(fetchRequest))
		payload = fetchRequest
	case ListOffsets:
		var mt interface{}
		var listOffsetsRequest interface{}
		if apiVersion >= 4 {
			types := makeTypes(reflect.TypeOf(&ListOffsetsRequestV4{}).Elem())
			mt = types[0]
			listOffsetsRequest = &ListOffsetsRequestV4{}
		} else if apiVersion >= 2 {
			types := makeTypes(reflect.TypeOf(&ListOffsetsRequestV2{}).Elem())
			mt = types[0]
			listOffsetsRequest = &ListOffsetsRequestV2{}
		} else if apiVersion >= 1 {
			types := makeTypes(reflect.TypeOf(&ListOffsetsRequestV1{}).Elem())
			mt = types[0]
			listOffsetsRequest = &ListOffsetsRequestV1{}
		} else {
			types := makeTypes(reflect.TypeOf(&ListOffsetsRequestV0{}).Elem())
			mt = types[0]
			listOffsetsRequest = &ListOffsetsRequestV0{}
		}
		mt.(messageType).decode(d, valueOf(listOffsetsRequest))
		payload = listOffsetsRequest
	case CreateTopics:
		var mt interface{}
		var createTopicsRequest interface{}
		if apiVersion >= 1 {
			types := makeTypes(reflect.TypeOf(&CreateTopicsRequestV1{}).Elem())
			mt = types[0]
			createTopicsRequest = &CreateTopicsRequestV1{}
		} else {
			types := makeTypes(reflect.TypeOf(&CreateTopicsRequestV0{}).Elem())
			mt = types[0]
			createTopicsRequest = &CreateTopicsRequestV0{}
		}
		mt.(messageType).decode(d, valueOf(createTopicsRequest))
		payload = createTopicsRequest
	case DeleteTopics:
		var mt interface{}
		var deleteTopicsRequest interface{}
		if apiVersion >= 6 {
			types := makeTypes(reflect.TypeOf(&DeleteTopicsRequestV6{}).Elem())
			mt = types[0]
			deleteTopicsRequest = &DeleteTopicsRequestV6{}
		} else {
			types := makeTypes(reflect.TypeOf(&DeleteTopicsRequestV0{}).Elem())
			mt = types[0]
			deleteTopicsRequest = &DeleteTopicsRequestV0{}
		}
		mt.(messageType).decode(d, valueOf(deleteTopicsRequest))
		payload = deleteTopicsRequest
	default:
		return apiKey, 0, fmt.Errorf("(Request) Not implemented: %s", apiKey)
	}

	request := &Request{
		Size:          size,
		ApiKey:        apiKey,
		ApiVersion:    apiVersion,
		CorrelationID: correlationID,
		ClientID:      clientID,
		Payload:       payload,
	}

	key := fmt.Sprintf(
		"%s:%s->%s:%s::%d",
		tcpID.SrcIP,
		tcpID.SrcPort,
		tcpID.DstIP,
		tcpID.DstPort,
		correlationID,
	)
	reqResMatcher.registerRequest(key, request)

	d.discardAll()

	return apiKey, apiVersion, nil
}
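
// WriteRequest serializes msg as a Kafka request: a 4-byte size placeholder,
// the request header (api key, version, correlation ID, client ID), then the
// encoded payload. The real size is patched into the placeholder before the
// buffer is written out to w.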
func WriteRequest(w io.Writer, apiVersion int16, correlationID int32, clientID string, msg Message) error {
	apiKey := msg.ApiKey()

	if i := int(apiKey); i < 0 || i >= len(apiTypes) {
		return fmt.Errorf("unsupported api key: %d", i)
	}

	t := &apiTypes[apiKey]
	if t == nil {
		return fmt.Errorf("unsupported api: %s", apiNames[apiKey])
	}

	minVersion := t.minVersion()
	maxVersion := t.maxVersion()

	if apiVersion < minVersion || apiVersion > maxVersion {
		return fmt.Errorf("unsupported %s version: v%d not in range v%d-v%d", apiKey, apiVersion, minVersion, maxVersion)
	}

	r := &t.requests[apiVersion-minVersion]
	v := valueOf(msg)
	b := newPageBuffer()
	defer b.unref()

	e := &encoder{writer: b}
	e.writeInt32(0) // placeholder for the request size
	e.writeInt16(int16(apiKey))
	e.writeInt16(apiVersion)
	e.writeInt32(correlationID)

	if r.flexible {
		// Flexible messages use a nullable string for the client ID, then extra space for a
		// tag buffer, which begins with a size value. Since we're not writing any fields into
		// the latter, we can just write zero for now.
		//
		// See
		// https://cwiki.apache.org/confluence/display/KAFKA/KIP-482%3A+The+Kafka+Protocol+should+Support+Optional+Tagged+Fields
		// for details.
		e.writeNullString(clientID)
		e.writeUnsignedVarInt(0)
	} else {
		// Technically, recent versions of kafka interpret this field as a nullable
		// string, however kafka 0.10 expected a non-nullable string and fails with
		// a NullPointerException when it receives a null client id.
		e.writeString(clientID)
	}

	r.encode(e, v)

	err := e.err
	if err == nil {
		size := packUint32(uint32(b.Size()) - 4)
		b.WriteAt(size[:], 0)
		_, err = b.WriteTo(w)
	}
	return err
}