kubeshark/tap/extensions/kafka/response.go
M. Mert Yıldıran d2fe3f6620 Migrate from SQLite to Basenine and introduce a new filtering syntax (#279)
* Fix the OOMKilled error by calling `debug.FreeOSMemory` periodically

* Remove `MAX_NUMBER_OF_GOROUTINES` environment variable

* Change the line

* Increase the default value of `TCP_STREAM_CHANNEL_TIMEOUT_MS` to `10000`

* Write the client and integrate to the new real-time database

* Refactor the WebSocket implementation for `/ws`

* Adapt the UI to the new filtering system

* Fix the rest of the issues in the UI

* Increase the buffer of the scanner

* Implement accessing single records

* Increase the buffer of another scanner

* Populate `Request` and `Response` fields of `MizuEntry`

* Add syntax highlighting for the query

* Add database to `Dockerfile`

* Fix some issues

* Update the `realtime_dbms` Git module commit hash

* Upgrade Gin version and print the query string

* Revert "Upgrade Gin version and print the query string"

This reverts commit aa09f904ee.

* Use the WebSocket itself to query instead of the query string

* Fix some errors related to conversion to HAR

* Fix the issues caused by the latest merge

* Fix the build error

* Fix PR validation GitHub workflow

* Replace the Git submodule with the latest Basenine version, `0.1.0`

Remove `realtime_client.go` and use the official client library `github.com/up9inc/basenine/client/go` instead.

* Move Basenine host and port constants to `shared` module

* Reliably execute and wait for Basenine to become available

* Upgrade Basenine version

* Properly close WebSocket and data channel

* Fix the issues caused by the recent merge commit

* Clean up the TypeScript code

* Update `.gitignore`

* Limit the database size

* Add `Macros` method signature to `Dissector` interface and set the macros provided by the protocol extensions (a rough sketch follows at the end of this list)

* Run `go mod tidy` on `agent`

* Upgrade `github.com/up9inc/basenine/client/go` version

* Implement a mechanism to update the query using click events in the UI and use it for protocol macros

* Update the query on click to timestamps

* Fix some issues in the WebSocket and channel handling

* Update the query on clicks to status code

* Update the query on clicks to method, path and service

* Update the query on clicks to is outgoing, source and destination ports

* Add an API endpoint to validate the query against syntax errors

* Move the query background color state into `TrafficPage`

* Fix the logic in `setQuery`

* Display a toast message in case of a syntax error in the query

* Remove a call to `fmt.Printf`

* Upgrade Basenine version to `0.1.3`

* Fix an issue related to getting `MAX_ENTRIES_DB_BYTES` environment variable

* Add the `path` key to the request details in HTTP

* Rearrange the HTTP headers for querying

* Do the same thing for `cookies` and `queryString`

* Update the query on click to table elements

Add the selectors for `TABLE` type representations in HTTP extension.

* Update the query on click to `bodySize` and `elapsedTime` in `EntryTitle`

* Add the selectors for `TABLE` type representations in AMQP extension

* Add the selectors for `TABLE` type representations in Kafka extension

* Add the selectors for `TABLE` type representations in Redis extension

* Define a struct in `tap/api.go` for the section representation data

* Add the selectors for `BODY` type representations

* Add `request.path` to the HTTP request details

* Change the summary string's field name from `path` to `summary`

* Introduce `queryable` CSS class for queryable UI elements and underline them on hover

* Instead of `N requests` at the bottom, make it `Displaying N results (queried X/Y)` and live update the values

Upgrade Basenine version to `0.2.0`.

* Verify the sha256sum of Basenine executable inside `Dockerfile`

* Pass the start time to web UI through WebSocket and always show the `EntriesList` footer

* Pipe the `stderr` of Basenine as well

* Fix the layout issues related to `CodeEditor` in the UI

* Use the correct `shasum` command in `Dockerfile`

* Upgrade Basenine version to `0.2.1`

* Limit the height of `CodeEditor` container

* Remove the `Paused` value from the `ConnectionStatus` enum in the UI

* Fix the issue caused by the recent merge

* Add the filtering guide (cheatsheet)

* Update open cheatsheet button's title

* Update cheatsheet content

* Remove the old SQLite code, adapt the `--analyze` related code to Basenine

* Change the method signature of `NewEntry`

* Change the method signature of `Represent`

* Introduce `HTTPPair` field in `MizuEntry` specific to HTTP

* Remove `Entry`, `EntryId` and `EstimatedSizeBytes` fields from `MizuEntry`

Also remove the `getEstimatedEntrySizeBytes` method.

* Remove `gorm.io/gorm` dependency

* Remove unused `sensitiveDataFiltering` folder

* Increase the left margin of open cheatsheet button

* Add `overflow: auto` to the cheatsheet `Modal`

* Fix `GetEntry` method

* Fix the macro for gRPC

* Fix an interface conversion in case of AMQP

* Fix two more interface conversion errors in AMQP

* Make the `syncEntriesImpl` method blocking

* Fix a grammar mistake in the cheatsheet

* Adapt to the changes in the recent merge commit

* Improve the cheatsheet text

* Always display the timestamp in `en-US`

* Upgrade Basenine version to `0.2.2`

* Fix the order of closing Basenine connections and channels

* Don't close the Basenine channels at all

* Upgrade Basenine version to `0.2.3`

* Set the initial filter to `rlimit(100)`

* Make Basenine persistent

* Upgrade Basenine version to `0.2.4`

* Update `debug.Dockerfile`

* Fix a failing test

* Upgrade Basenine version to `0.2.5`

* Revert "Do not show play icon when disconnected (#428)"

This reverts commit 8af2e562f8.

* Upgrade Basenine version to `0.2.6`

* Make all non-informative things informative

* Make `100` a constant

* Use `===` in JavaScript no matter what

* Remove a forgotten `console.log`

* Add a comment and update the `query` in `syncEntriesImpl`

* Don't call `panic` in `GetEntry`

* Replace `panic` calls in `startBasenineServer` with `logger.Log.Panicf`

* Remove unnecessary `\n` characters in the logs
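
The `Macros` bullet above adds a way for each protocol extension to contribute query shorthands to the new filtering syntax. A minimal, hypothetical sketch of the idea follows; the type name, receiver, and macro string are illustrative only, and the real `Dissector` interface in `tap/api.go` carries many more methods:

package main

import "fmt"

// dissectingExample is a stand-in for an extension's Dissector implementation;
// the name is hypothetical and used only for this sketch.
type dissectingExample struct{}

// Macros maps a query shorthand to the filter expression it expands to on the
// Basenine side. The concrete strings here are illustrative.
func (d dissectingExample) Macros() map[string]string {
	return map[string]string{
		"kafka": `proto.name == "kafka"`,
	}
}

func main() {
	fmt.Println(dissectingExample{}.Macros())
}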
2021-11-09 19:54:48 +03:00


package main

import (
	"fmt"
	"io"
	"reflect"
	"time"

	"github.com/up9inc/mizu/tap/api"
)

type Response struct {
	Size          int32       `json:"size"`
	CorrelationID int32       `json:"correlationID"`
	Payload       interface{} `json:"payload"`
	CaptureTime   time.Time   `json:"captureTime"`
}
// ReadResponse reads a single Kafka response from r, matches it to the request
// registered under the same correlation ID, decodes the payload according to the
// request's API key and version, and emits the request/response pair.
func ReadResponse(r io.Reader, tcpID *api.TcpID, superTimer *api.SuperTimer, emitter api.Emitter) (err error) {
	d := &decoder{reader: r, remain: 4}
	size := d.readInt32()

	if size > 1000000 {
		return fmt.Errorf("A Kafka message cannot be bigger than 1MB")
	}

	// `size` excludes the 4-byte length prefix, so it must at least cover the correlation ID.
	if size < 4 {
		return fmt.Errorf("A Kafka response header cannot be smaller than 8 bytes")
	}

	if err = d.err; err != nil {
		err = dontExpectEOF(err)
		return err
	}

	d.remain = int(size)
	correlationID := d.readInt32()

	var payload interface{}
	response := &Response{
		Size:          size,
		CorrelationID: correlationID,
		Payload:       payload,
		CaptureTime:   superTimer.CaptureTime,
	}

	// The response flows server->client, so destination and source are swapped here
	// so the key matches the one built on the request side.
	key := fmt.Sprintf(
		"%s:%s->%s:%s::%d",
		tcpID.DstIP,
		tcpID.DstPort,
		tcpID.SrcIP,
		tcpID.SrcPort,
		correlationID,
	)
	reqResPair := reqResMatcher.registerResponse(key, response)
	if reqResPair == nil {
		return fmt.Errorf("Couldn't match a Kafka response to a Kafka request in 3 seconds!")
	}
	apiKey := reqResPair.Request.ApiKey
	apiVersion := reqResPair.Request.ApiVersion

	switch apiKey {
	case Metadata:
		var mt interface{}
		var metadataResponse interface{}
		if apiVersion >= 11 {
			types := makeTypes(reflect.TypeOf(&MetadataResponseV11{}).Elem())
			mt = types[0]
			metadataResponse = &MetadataResponseV11{}
		} else if apiVersion >= 10 {
			types := makeTypes(reflect.TypeOf(&MetadataResponseV10{}).Elem())
			mt = types[0]
			metadataResponse = &MetadataResponseV10{}
		} else if apiVersion >= 8 {
			types := makeTypes(reflect.TypeOf(&MetadataResponseV8{}).Elem())
			mt = types[0]
			metadataResponse = &MetadataResponseV8{}
		} else if apiVersion >= 7 {
			types := makeTypes(reflect.TypeOf(&MetadataResponseV7{}).Elem())
			mt = types[0]
			metadataResponse = &MetadataResponseV7{}
		} else if apiVersion >= 5 {
			types := makeTypes(reflect.TypeOf(&MetadataResponseV5{}).Elem())
			mt = types[0]
			metadataResponse = &MetadataResponseV5{}
		} else if apiVersion >= 3 {
			types := makeTypes(reflect.TypeOf(&MetadataResponseV3{}).Elem())
			mt = types[0]
			metadataResponse = &MetadataResponseV3{}
		} else if apiVersion >= 2 {
			types := makeTypes(reflect.TypeOf(&MetadataResponseV2{}).Elem())
			mt = types[0]
			metadataResponse = &MetadataResponseV2{}
		} else if apiVersion >= 1 {
			types := makeTypes(reflect.TypeOf(&MetadataResponseV1{}).Elem())
			mt = types[0]
			metadataResponse = &MetadataResponseV1{}
		} else {
			types := makeTypes(reflect.TypeOf(&MetadataResponseV0{}).Elem())
			mt = types[0]
			metadataResponse = &MetadataResponseV0{}
		}
		mt.(messageType).decode(d, valueOf(metadataResponse))
		reqResPair.Response.Payload = metadataResponse
		break
	case ApiVersions:
		var mt interface{}
		var apiVersionsResponse interface{}
		if apiVersion >= 1 {
			types := makeTypes(reflect.TypeOf(&ApiVersionsResponseV1{}).Elem())
			mt = types[0]
			apiVersionsResponse = &ApiVersionsResponseV1{}
		} else {
			types := makeTypes(reflect.TypeOf(&ApiVersionsResponseV0{}).Elem())
			mt = types[0]
			apiVersionsResponse = &ApiVersionsResponseV0{}
		}
		mt.(messageType).decode(d, valueOf(apiVersionsResponse))
		reqResPair.Response.Payload = apiVersionsResponse
		break
	case Produce:
		var mt interface{}
		var produceResponse interface{}
		if apiVersion >= 8 {
			types := makeTypes(reflect.TypeOf(&ProduceResponseV8{}).Elem())
			mt = types[0]
			produceResponse = &ProduceResponseV8{}
		} else if apiVersion >= 5 {
			types := makeTypes(reflect.TypeOf(&ProduceResponseV5{}).Elem())
			mt = types[0]
			produceResponse = &ProduceResponseV5{}
		} else if apiVersion >= 2 {
			types := makeTypes(reflect.TypeOf(&ProduceResponseV2{}).Elem())
			mt = types[0]
			produceResponse = &ProduceResponseV2{}
		} else if apiVersion >= 1 {
			types := makeTypes(reflect.TypeOf(&ProduceResponseV1{}).Elem())
			mt = types[0]
			produceResponse = &ProduceResponseV1{}
		} else {
			types := makeTypes(reflect.TypeOf(&ProduceResponseV0{}).Elem())
			mt = types[0]
			produceResponse = &ProduceResponseV0{}
		}
		mt.(messageType).decode(d, valueOf(produceResponse))
		reqResPair.Response.Payload = produceResponse
		break
	case Fetch:
		var mt interface{}
		var fetchResponse interface{}
		if apiVersion >= 11 {
			types := makeTypes(reflect.TypeOf(&FetchResponseV11{}).Elem())
			mt = types[0]
			fetchResponse = &FetchResponseV11{}
		} else if apiVersion >= 7 {
			types := makeTypes(reflect.TypeOf(&FetchResponseV7{}).Elem())
			mt = types[0]
			fetchResponse = &FetchResponseV7{}
		} else if apiVersion >= 5 {
			types := makeTypes(reflect.TypeOf(&FetchResponseV5{}).Elem())
			mt = types[0]
			fetchResponse = &FetchResponseV5{}
		} else if apiVersion >= 4 {
			types := makeTypes(reflect.TypeOf(&FetchResponseV4{}).Elem())
			mt = types[0]
			fetchResponse = &FetchResponseV4{}
		} else if apiVersion >= 1 {
			types := makeTypes(reflect.TypeOf(&FetchResponseV1{}).Elem())
			mt = types[0]
			fetchResponse = &FetchResponseV1{}
		} else {
			types := makeTypes(reflect.TypeOf(&FetchResponseV0{}).Elem())
			mt = types[0]
			fetchResponse = &FetchResponseV0{}
		}
		mt.(messageType).decode(d, valueOf(fetchResponse))
		reqResPair.Response.Payload = fetchResponse
		break
	case ListOffsets:
		var mt interface{}
		var listOffsetsResponse interface{}
		if apiVersion >= 4 {
			types := makeTypes(reflect.TypeOf(&ListOffsetsResponseV4{}).Elem())
			mt = types[0]
			listOffsetsResponse = &ListOffsetsResponseV4{}
		} else if apiVersion >= 2 {
			types := makeTypes(reflect.TypeOf(&ListOffsetsResponseV2{}).Elem())
			mt = types[0]
			listOffsetsResponse = &ListOffsetsResponseV2{}
		} else if apiVersion >= 1 {
			types := makeTypes(reflect.TypeOf(&ListOffsetsResponseV1{}).Elem())
			mt = types[0]
			listOffsetsResponse = &ListOffsetsResponseV1{}
		} else {
			types := makeTypes(reflect.TypeOf(&ListOffsetsResponseV0{}).Elem())
			mt = types[0]
			listOffsetsResponse = &ListOffsetsResponseV0{}
		}
		mt.(messageType).decode(d, valueOf(listOffsetsResponse))
		reqResPair.Response.Payload = listOffsetsResponse
	case CreateTopics:
		var mt interface{}
		var createTopicsResponse interface{}
		// Note: versions >= 7 are currently decoded with the V0 schema, as written.
		if apiVersion >= 7 {
			types := makeTypes(reflect.TypeOf(&CreateTopicsResponseV0{}).Elem())
			mt = types[0]
			createTopicsResponse = &CreateTopicsResponseV0{}
		} else if apiVersion >= 5 {
			types := makeTypes(reflect.TypeOf(&CreateTopicsResponseV5{}).Elem())
			mt = types[0]
			createTopicsResponse = &CreateTopicsResponseV5{}
		} else if apiVersion >= 2 {
			types := makeTypes(reflect.TypeOf(&CreateTopicsResponseV2{}).Elem())
			mt = types[0]
			createTopicsResponse = &CreateTopicsResponseV2{}
		} else if apiVersion >= 1 {
			types := makeTypes(reflect.TypeOf(&CreateTopicsResponseV1{}).Elem())
			mt = types[0]
			createTopicsResponse = &CreateTopicsResponseV1{}
		} else {
			types := makeTypes(reflect.TypeOf(&CreateTopicsResponseV0{}).Elem())
			mt = types[0]
			createTopicsResponse = &CreateTopicsResponseV0{}
		}
		mt.(messageType).decode(d, valueOf(createTopicsResponse))
		reqResPair.Response.Payload = createTopicsResponse
		break
	case DeleteTopics:
		var mt interface{}
		var deleteTopicsResponse interface{}
		if apiVersion >= 6 {
			types := makeTypes(reflect.TypeOf(&DeleteTopicsReponseV6{}).Elem())
			mt = types[0]
			deleteTopicsResponse = &DeleteTopicsReponseV6{}
		} else if apiVersion >= 5 {
			types := makeTypes(reflect.TypeOf(&DeleteTopicsReponseV5{}).Elem())
			mt = types[0]
			deleteTopicsResponse = &DeleteTopicsReponseV5{}
		} else if apiVersion >= 1 {
			types := makeTypes(reflect.TypeOf(&DeleteTopicsReponseV1{}).Elem())
			mt = types[0]
			deleteTopicsResponse = &DeleteTopicsReponseV1{}
		} else {
			types := makeTypes(reflect.TypeOf(&DeleteTopicsReponseV0{}).Elem())
			mt = types[0]
			deleteTopicsResponse = &DeleteTopicsReponseV0{}
		}
		mt.(messageType).decode(d, valueOf(deleteTopicsResponse))
		reqResPair.Response.Payload = deleteTopicsResponse
	default:
		return fmt.Errorf("(Response) Not implemented: %s", apiKey)
	}
	connectionInfo := &api.ConnectionInfo{
		ClientIP:   tcpID.SrcIP,
		ClientPort: tcpID.SrcPort,
		ServerIP:   tcpID.DstIP,
		ServerPort: tcpID.DstPort,
		IsOutgoing: true,
	}

	item := &api.OutputChannelItem{
		Protocol:       _protocol,
		Timestamp:      reqResPair.Request.CaptureTime.UnixNano() / int64(time.Millisecond),
		ConnectionInfo: connectionInfo,
		Pair: &api.RequestResponsePair{
			Request: api.GenericMessage{
				IsRequest:   true,
				CaptureTime: reqResPair.Request.CaptureTime,
				Payload: KafkaPayload{
					Data: &KafkaWrapper{
						Method:  apiNames[apiKey],
						Url:     "",
						Details: reqResPair.Request,
					},
				},
			},
			Response: api.GenericMessage{
				IsRequest:   false,
				CaptureTime: reqResPair.Response.CaptureTime,
				Payload: KafkaPayload{
					Data: &KafkaWrapper{
						Method:  apiNames[apiKey],
						Url:     "",
						Details: reqResPair.Response,
					},
				},
			},
		},
	}
	emitter.Emit(item)

	if i := int(apiKey); i < 0 || i >= len(apiTypes) {
		err = fmt.Errorf("unsupported api key: %d", i)
		return err
	}

	t := &apiTypes[apiKey]
	if t == nil {
		err = fmt.Errorf("unsupported api: %s", apiNames[apiKey])
		return err
	}

	d.discardAll()

	return nil
}
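
For reference, the wire layout `ReadResponse` consumes is a 4-byte big-endian size (which excludes the size field itself), followed by a 4-byte correlation ID and the API-specific response body. A minimal stand-alone sketch that builds such a frame with the standard library; `buildResponseFrame` is a hypothetical helper used only for this illustration:

package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// buildResponseFrame assembles size + correlation ID + body, the layout
// that ReadResponse expects on the wire.
func buildResponseFrame(correlationID int32, body []byte) []byte {
	var buf bytes.Buffer
	size := int32(4 + len(body)) // correlation ID plus body; excludes the size field itself
	binary.Write(&buf, binary.BigEndian, size)
	binary.Write(&buf, binary.BigEndian, correlationID)
	buf.Write(body)
	return buf.Bytes()
}

func main() {
	frame := buildResponseFrame(42, nil)
	fmt.Printf("% x\n", frame) // 00 00 00 04 00 00 00 2a
}
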
// WriteResponse encodes msg as a Kafka response for the given API version and
// correlation ID, prefixing the frame with its size.
func WriteResponse(w io.Writer, apiVersion int16, correlationID int32, msg Message) error {
	apiKey := msg.ApiKey()

	if i := int(apiKey); i < 0 || i >= len(apiTypes) {
		return fmt.Errorf("unsupported api key: %d", i)
	}

	t := &apiTypes[apiKey]
	if t == nil {
		return fmt.Errorf("unsupported api: %s", apiNames[apiKey])
	}

	minVersion := t.minVersion()
	maxVersion := t.maxVersion()

	if apiVersion < minVersion || apiVersion > maxVersion {
		return fmt.Errorf("unsupported %s version: v%d not in range v%d-v%d", apiKey, apiVersion, minVersion, maxVersion)
	}

	r := &t.responses[apiVersion-minVersion]
	v := valueOf(msg)
	b := newPageBuffer()
	defer b.unref()

	e := &encoder{writer: b}
	e.writeInt32(0) // placeholder for the response size
	e.writeInt32(correlationID)
	r.encode(e, v)

	err := e.err
	if err == nil {
		size := packUint32(uint32(b.Size()) - 4)
		b.WriteAt(size[:], 0)
		_, err = b.WriteTo(w)
	}

	return err
}
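
`WriteResponse` reserves four bytes up front, encodes the message, and then patches the real size in at offset 0 once the total length is known. A minimal stand-alone sketch of the same back-patching technique using only the standard library, rather than the package's `pageBuffer` and `encoder`; `frameWithSizePrefix` is a hypothetical helper for this illustration:

package main

import (
	"encoding/binary"
	"fmt"
)

// frameWithSizePrefix writes a zero placeholder, appends the payload, then
// back-patches the size (which excludes the 4-byte prefix) at offset 0.
func frameWithSizePrefix(payload []byte) []byte {
	out := make([]byte, 4, 4+len(payload)) // placeholder for the size
	out = append(out, payload...)
	binary.BigEndian.PutUint32(out[:4], uint32(len(out)-4))
	return out
}

func main() {
	frame := frameWithSizePrefix([]byte{0x00, 0x00, 0x00, 0x2a}) // e.g. just a correlation ID
	fmt.Printf("% x\n", frame) // 00 00 00 04 00 00 00 2a
}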