Mirror of https://github.com/kubeshark/kubeshark.git (synced 2025-09-27 05:23:06 +00:00)
* Fix the OOMKilled error by calling `debug.FreeOSMemory` periodically (see the sketch after this list)
* Remove `MAX_NUMBER_OF_GOROUTINES` environment variable
* Change the line
* Increase the default value of `TCP_STREAM_CHANNEL_TIMEOUT_MS` to `10000`
* Write the client and integrate it with the new real-time database
* Refactor the WebSocket implementation for `/ws`
* Adapt the UI to the new filtering system
* Fix the rest of the issues in the UI
* Increase the buffer of the scanner
* Implement accessing single records
* Increase the buffer of another scanner
* Populate `Request` and `Response` fields of `MizuEntry`
* Add syntax highlighting for the query
* Add database to `Dockerfile`
* Fix some issues
* Update the `realtime_dbms` Git module commit hash
* Upgrade Gin version and print the query string
* Revert "Upgrade Gin version and print the query string" (reverts commit aa09f904ee)
* Use the WebSocket itself to query instead of the query string
* Fix some errors related to the conversion to HAR
* Fix the issues caused by the latest merge
* Fix the build error
* Fix PR validation GitHub workflow
* Replace the Git submodule with the latest Basenine version `0.1.0`; remove `realtime_client.go` and use the official client library `github.com/up9inc/basenine/client/go` instead
* Move Basenine host and port constants to the `shared` module
* Reliably execute and wait for Basenine to become available
* Upgrade Basenine version
* Properly close the WebSocket and the data channel
* Fix the issues caused by the recent merge commit
* Clean up the TypeScript code
* Update `.gitignore`
* Limit the database size
* Add a `Macros` method signature to the `Dissector` interface and set the macros provided by the protocol extensions
* Run `go mod tidy` on `agent`
* Upgrade `github.com/up9inc/basenine/client/go` version
* Implement a mechanism to update the query using click events in the UI and use it for protocol macros
* Update the query on clicks to timestamps
* Fix some issues in the WebSocket and channel handling
* Update the query on clicks to the status code
* Update the query on clicks to method, path and service
* Update the query on clicks to is outgoing, source and destination ports
* Add an API endpoint to validate the query against syntax errors
* Move the query background color state into `TrafficPage`
* Fix the logic in `setQuery`
* Display a toast message in case of a syntax error in the query
* Remove a call to `fmt.Printf`
* Upgrade Basenine version to `0.1.3`
* Fix an issue related to getting the `MAX_ENTRIES_DB_BYTES` environment variable
* Have the `path` key in the request details, in HTTP
* Rearrange the HTTP headers for the querying
* Do the same thing for `cookies` and `queryString`
* Update the query on clicks to table elements; add the selectors for `TABLE` type representations in the HTTP extension
* Update the query on clicks to `bodySize` and `elapsedTime` in `EntryTitle`
* Add the selectors for `TABLE` type representations in the AMQP extension
* Add the selectors for `TABLE` type representations in the Kafka extension
* Add the selectors for `TABLE` type representations in the Redis extension
* Define a struct in `tap/api.go` for the section representation data
* Add the selectors for `BODY` type representations
* Add `request.path` to the HTTP request details
* Change the summary string's field name from `path` to `summary`
* Introduce a `queryable` CSS class for queryable UI elements and underline them on hover
* Instead of `N requests` at the bottom, make it `Displaying N results (queried X/Y)` and live update the values; upgrade Basenine version to `0.2.0`
* Verify the sha256sum of the Basenine executable inside `Dockerfile`
* Pass the start time to the web UI through WebSocket and always show the `EntriesList` footer
* Pipe the `stderr` of Basenine as well
* Fix the layout issues related to `CodeEditor` in the UI
* Use the correct `shasum` command in `Dockerfile`
* Upgrade Basenine version to `0.2.1`
* Limit the height of the `CodeEditor` container
* Remove `Paused` from the `ConnectionStatus` enum in the UI
* Fix the issue caused by the recent merge
* Add the filtering guide (cheatsheet)
* Update the open cheatsheet button's title
* Update the cheatsheet content
* Remove the old SQLite code and adapt the `--analyze` related code to Basenine
* Change the method signature of `NewEntry`
* Change the method signature of `Represent`
* Introduce an `HTTPPair` field in `MizuEntry` specific to HTTP
* Remove the `Entry`, `EntryId` and `EstimatedSizeBytes` fields from `MizuEntry`; also remove the `getEstimatedEntrySizeBytes` method
* Remove the `gorm.io/gorm` dependency
* Remove the unused `sensitiveDataFiltering` folder
* Increase the left margin of the open cheatsheet button
* Add `overflow: auto` to the cheatsheet `Modal`
* Fix the `GetEntry` method
* Fix the macro for gRPC
* Fix an interface conversion in case of AMQP
* Fix two more interface conversion errors in AMQP
* Make the `syncEntriesImpl` method blocking
* Fix a grammar mistake in the cheatsheet
* Adapt to the changes in the recent merge commit
* Improve the cheatsheet text
* Always display the timestamp in `en-US`
* Upgrade Basenine version to `0.2.2`
* Fix the order of closing the Basenine connections and channels
* Don't close the Basenine channels at all
* Upgrade Basenine version to `0.2.3`
* Set the initial filter to `rlimit(100)`
* Make Basenine persistent
* Upgrade Basenine version to `0.2.4`
* Update `debug.Dockerfile`
* Fix a failing test
* Upgrade Basenine version to `0.2.5`
* Revert "Do not show play icon when disconnected (#428)" (reverts commit 8af2e562f8)
* Upgrade Basenine version to `0.2.6`
* Make all non-informative things informative
* Make `100` a constant
* Use `===` in JavaScript no matter what
* Remove a forgotten `console.log`
* Add a comment and update the `query` in `syncEntriesImpl`
* Don't call `panic` in `GetEntry`
* Replace the `panic` calls in `startBasenineServer` with `logger.Log.Panicf`
* Remove unnecessary `\n` characters in the logs
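A minimal sketch of the periodic `debug.FreeOSMemory` approach from the first item above, assuming a background goroutine; the function name and the 10-second interval are illustrative, not the values used in this change:

```go
package main

import (
	"runtime/debug"
	"time"
)

// startMemoryReliefLoop periodically forces a GC cycle and returns freed pages
// to the OS, keeping the process's resident set below the container's memory
// limit. The interval is an assumption for illustration.
func startMemoryReliefLoop() {
	go func() {
		ticker := time.NewTicker(10 * time.Second)
		defer ticker.Stop()
		for range ticker.C {
			debug.FreeOSMemory()
		}
	}()
}

func main() {
	startMemoryReliefLoop()
	select {} // block forever; stands in for the agent's real work
}
```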
294 lines
8.3 KiB
Go
package main

import (
	"fmt"
	"io"
	"reflect"
	"time"

	"github.com/up9inc/mizu/tap/api"
)

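// Request represents a single captured Kafka request: the frame size, the
// header fields and the version-specific payload, plus its capture time.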
type Request struct {
	Size          int32       `json:"size"`
	ApiKey        ApiKey      `json:"apiKey"`
	ApiVersion    int16       `json:"apiVersion"`
	CorrelationID int32       `json:"correlationID"`
	ClientID      string      `json:"clientID"`
	Payload       interface{} `json:"payload"`
	CaptureTime   time.Time   `json:"captureTime"`
}

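// ReadRequest decodes one Kafka request frame from r, registers it with the
// request-response matcher under the stream's TCP ID and correlation ID, and
// returns the parsed api key and api version.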
func ReadRequest(r io.Reader, tcpID *api.TcpID, superTimer *api.SuperTimer) (apiKey ApiKey, apiVersion int16, err error) {
	d := &decoder{reader: r, remain: 4}
	size := d.readInt32()

	if size > 1000000 {
		return 0, 0, fmt.Errorf("A Kafka message cannot be bigger than 1MB")
	}

	if size < 8 {
		return 0, 0, fmt.Errorf("A Kafka request header cannot be smaller than 8 bytes")
	}

|
|
|
|
if err = d.err; err != nil {
|
|
err = dontExpectEOF(err)
|
|
return 0, 0, err
|
|
}
|
|
|
|
d.remain = int(size)
|
|
apiKey = ApiKey(d.readInt16())
|
|
apiVersion = d.readInt16()
|
|
correlationID := d.readInt32()
|
|
clientID := d.readString()
|
|
|
|
if i := int(apiKey); i < 0 || i >= len(apiTypes) {
|
|
err = fmt.Errorf("unsupported api key: %d", i)
|
|
return apiKey, apiVersion, err
|
|
}
|
|
|
|
if err = d.err; err != nil {
|
|
err = dontExpectEOF(err)
|
|
return apiKey, apiVersion, err
|
|
}
|
|
|
|
t := &apiTypes[apiKey]
|
|
if t == nil {
|
|
err = fmt.Errorf("unsupported api: %s", apiNames[apiKey])
|
|
return apiKey, apiVersion, err
|
|
}
|
|
|
|
var payload interface{}
|
|
|
|
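	// For each supported api key, decode with the newest request schema that the
	// advertised api version satisfies, falling back through older versions.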
	switch apiKey {
	case Metadata:
		var mt interface{}
		var metadataRequest interface{}
		if apiVersion >= 11 {
			types := makeTypes(reflect.TypeOf(&MetadataRequestV11{}).Elem())
			mt = types[0]
			metadataRequest = &MetadataRequestV11{}
		} else if apiVersion >= 10 {
			types := makeTypes(reflect.TypeOf(&MetadataRequestV10{}).Elem())
			mt = types[0]
			metadataRequest = &MetadataRequestV10{}
		} else if apiVersion >= 8 {
			types := makeTypes(reflect.TypeOf(&MetadataRequestV8{}).Elem())
			mt = types[0]
			metadataRequest = &MetadataRequestV8{}
		} else if apiVersion >= 4 {
			types := makeTypes(reflect.TypeOf(&MetadataRequestV4{}).Elem())
			mt = types[0]
			metadataRequest = &MetadataRequestV4{}
		} else {
			types := makeTypes(reflect.TypeOf(&MetadataRequestV0{}).Elem())
			mt = types[0]
			metadataRequest = &MetadataRequestV0{}
		}
		mt.(messageType).decode(d, valueOf(metadataRequest))
		payload = metadataRequest
	case ApiVersions:
		var mt interface{}
		var apiVersionsRequest interface{}
		if apiVersion >= 3 {
			types := makeTypes(reflect.TypeOf(&ApiVersionsRequestV3{}).Elem())
			mt = types[0]
			apiVersionsRequest = &ApiVersionsRequestV3{}
		} else {
			types := makeTypes(reflect.TypeOf(&ApiVersionsRequestV0{}).Elem())
			mt = types[0]
			apiVersionsRequest = &ApiVersionsRequestV0{}
		}
		mt.(messageType).decode(d, valueOf(apiVersionsRequest))
		payload = apiVersionsRequest
	case Produce:
		var mt interface{}
		var produceRequest interface{}
		if apiVersion >= 3 {
			types := makeTypes(reflect.TypeOf(&ProduceRequestV3{}).Elem())
			mt = types[0]
			produceRequest = &ProduceRequestV3{}
		} else {
			types := makeTypes(reflect.TypeOf(&ProduceRequestV0{}).Elem())
			mt = types[0]
			produceRequest = &ProduceRequestV0{}
		}
		mt.(messageType).decode(d, valueOf(produceRequest))
		payload = produceRequest
	case Fetch:
		var mt interface{}
		var fetchRequest interface{}
		if apiVersion >= 11 {
			types := makeTypes(reflect.TypeOf(&FetchRequestV11{}).Elem())
			mt = types[0]
			fetchRequest = &FetchRequestV11{}
		} else if apiVersion >= 9 {
			types := makeTypes(reflect.TypeOf(&FetchRequestV9{}).Elem())
			mt = types[0]
			fetchRequest = &FetchRequestV9{}
		} else if apiVersion >= 7 {
			types := makeTypes(reflect.TypeOf(&FetchRequestV7{}).Elem())
			mt = types[0]
			fetchRequest = &FetchRequestV7{}
		} else if apiVersion >= 5 {
			types := makeTypes(reflect.TypeOf(&FetchRequestV5{}).Elem())
			mt = types[0]
			fetchRequest = &FetchRequestV5{}
		} else if apiVersion >= 4 {
			types := makeTypes(reflect.TypeOf(&FetchRequestV4{}).Elem())
			mt = types[0]
			fetchRequest = &FetchRequestV4{}
		} else if apiVersion >= 3 {
			types := makeTypes(reflect.TypeOf(&FetchRequestV3{}).Elem())
			mt = types[0]
			fetchRequest = &FetchRequestV3{}
		} else {
			types := makeTypes(reflect.TypeOf(&FetchRequestV0{}).Elem())
			mt = types[0]
			fetchRequest = &FetchRequestV0{}
		}
		mt.(messageType).decode(d, valueOf(fetchRequest))
		payload = fetchRequest
	case ListOffsets:
		var mt interface{}
		var listOffsetsRequest interface{}
		if apiVersion >= 4 {
			types := makeTypes(reflect.TypeOf(&ListOffsetsRequestV4{}).Elem())
			mt = types[0]
			listOffsetsRequest = &ListOffsetsRequestV4{}
		} else if apiVersion >= 2 {
			types := makeTypes(reflect.TypeOf(&ListOffsetsRequestV2{}).Elem())
			mt = types[0]
			listOffsetsRequest = &ListOffsetsRequestV2{}
		} else if apiVersion >= 1 {
			types := makeTypes(reflect.TypeOf(&ListOffsetsRequestV1{}).Elem())
			mt = types[0]
			listOffsetsRequest = &ListOffsetsRequestV1{}
		} else {
			types := makeTypes(reflect.TypeOf(&ListOffsetsRequestV0{}).Elem())
			mt = types[0]
			listOffsetsRequest = &ListOffsetsRequestV0{}
		}
		mt.(messageType).decode(d, valueOf(listOffsetsRequest))
		payload = listOffsetsRequest
	case CreateTopics:
		var mt interface{}
		var createTopicsRequest interface{}
		if apiVersion >= 1 {
			types := makeTypes(reflect.TypeOf(&CreateTopicsRequestV1{}).Elem())
			mt = types[0]
			createTopicsRequest = &CreateTopicsRequestV1{}
		} else {
			types := makeTypes(reflect.TypeOf(&CreateTopicsRequestV0{}).Elem())
			mt = types[0]
			createTopicsRequest = &CreateTopicsRequestV0{}
		}
		mt.(messageType).decode(d, valueOf(createTopicsRequest))
		payload = createTopicsRequest
	case DeleteTopics:
		var mt interface{}
		var deleteTopicsRequest interface{}
		if apiVersion >= 6 {
			types := makeTypes(reflect.TypeOf(&DeleteTopicsRequestV6{}).Elem())
			mt = types[0]
			deleteTopicsRequest = &DeleteTopicsRequestV6{}
		} else {
			types := makeTypes(reflect.TypeOf(&DeleteTopicsRequestV0{}).Elem())
			mt = types[0]
			deleteTopicsRequest = &DeleteTopicsRequestV0{}
		}
		mt.(messageType).decode(d, valueOf(deleteTopicsRequest))
		payload = deleteTopicsRequest
	default:
		return apiKey, apiVersion, fmt.Errorf("(Request) Not implemented: %s", apiKey)
	}

	request := &Request{
		Size:          size,
		ApiKey:        apiKey,
		ApiVersion:    apiVersion,
		CorrelationID: correlationID,
		ClientID:      clientID,
		CaptureTime:   superTimer.CaptureTime,
		Payload:       payload,
	}

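	// Key the request by the stream's endpoints plus the correlation ID so that
	// the matching response can be paired with this request later.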
	key := fmt.Sprintf(
		"%s:%s->%s:%s::%d",
		tcpID.SrcIP,
		tcpID.SrcPort,
		tcpID.DstIP,
		tcpID.DstPort,
		correlationID,
	)
	reqResMatcher.registerRequest(key, request)

	d.discardAll()

	return apiKey, apiVersion, nil
}

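// WriteRequest encodes msg as a Kafka request frame (size placeholder, header,
// payload) into a page buffer, patches in the real size, and writes it to w.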
func WriteRequest(w io.Writer, apiVersion int16, correlationID int32, clientID string, msg Message) error {
	apiKey := msg.ApiKey()

	if i := int(apiKey); i < 0 || i >= len(apiTypes) {
		return fmt.Errorf("unsupported api key: %d", i)
	}

	t := &apiTypes[apiKey]
	if t == nil {
		return fmt.Errorf("unsupported api: %s", apiNames[apiKey])
	}

	minVersion := t.minVersion()
	maxVersion := t.maxVersion()

	if apiVersion < minVersion || apiVersion > maxVersion {
		return fmt.Errorf("unsupported %s version: v%d not in range v%d-v%d", apiKey, apiVersion, minVersion, maxVersion)
	}

	r := &t.requests[apiVersion-minVersion]
	v := valueOf(msg)
	b := newPageBuffer()
	defer b.unref()

	e := &encoder{writer: b}
	e.writeInt32(0) // placeholder for the request size
	e.writeInt16(int16(apiKey))
	e.writeInt16(apiVersion)
	e.writeInt32(correlationID)

	if r.flexible {
		// Flexible messages use a nullable string for the client ID, then extra space for a
		// tag buffer, which begins with a size value. Since we're not writing any fields into the
		// latter, we can just write zero for now.
		//
		// See
		// https://cwiki.apache.org/confluence/display/KAFKA/KIP-482%3A+The+Kafka+Protocol+should+Support+Optional+Tagged+Fields
		// for details.
		e.writeNullString(clientID)
		e.writeUnsignedVarInt(0)
	} else {
		// Technically, recent versions of kafka interpret this field as a nullable
		// string, however kafka 0.10 expected a non-nullable string and fails with
		// a NullPointerException when it receives a null client id.
		e.writeString(clientID)
	}

	r.encode(e, v)
	err := e.err

	if err == nil {
		size := packUint32(uint32(b.Size()) - 4)
		b.WriteAt(size[:], 0)
		_, err = b.WriteTo(w)
	}

	return err
}