Mirror of https://github.com/kubeshark/kubeshark.git (synced 2025-09-13 05:11:34 +00:00)
Add Go crypto/tls eBPF tracer for TLS connections (#1120)
* Run `go generate tls_tapper.go`
* Add `golang_uprobes.c`
* Add Golang hooks and offsets
* Add `golangConnection` struct and implement `pollGolangReadWrite` method
* Upgrade `github.com/cilium/ebpf` version to `v0.8.1`
* Fix the linter error
* Move map related stuff to `maps.h` and run `go generate tls_tapper.go`
* Remove unused parameter
* Add an environment variable to test Golang locally
* Replace `Libssl` occurrences with `Ssllib` for consistency
* Fix exe path finding
* Temporarily disable OpenSSL
* Fix the mixed offsets and dissection preparation
* Change the read symbol from `net/http.(*persistConn).Read` to `crypto/tls.(*Conn).Read`
* Remove `len` and `cap` fields
* Fix the indent
* Fix the read data address
* Make `golang_dial_writes` key `__u64` and include the PID
* Fix the read data address one more time
* Temporarily disable the PCAP capture
* Add a uprobe for `net/http.(*gzipReader).Read` to read the chunked HTTP response body
* Cancel `golang_crypto_tls_read_uprobe` if it's a gzip read
* Make hash map names more meaningful
* Pass the connection address from `write` to `gzip` through a common address between `gzip` and `dial`
* Fix the probed line number links
* Add `golangReader` struct and implement its `Read` method
* Have a single counter pair and request-response matcher per Golang connection
* Add `MIZU_GLOBAL_GOLANG_PATH` environment variable
* `NULL` terminate the bytes with `unix.ByteSliceToString`
* Temporarily reject the gzip chunks
* Add malformed TODOs
* Revert "`NULL` terminate the bytes with `unix.ByteSliceToString`". This reverts commit 7ee7ef7e44.
* Bring back `len` and `cap` fields
* Set `len` and `cap` in `golang_net_http_gzipreader_read_uprobe` as well
* Remove two `TODO`s
* Fix the `key_gzip` offsets
* Compress if it's gzip chunk (probably wrong!)
* Revert "Compress if it's gzip chunk (probably wrong!)". This reverts commit 094a7c3da4.
* Remove `golang_net_http_gzipreader_read_uprobe`
* Read constant 4KiB
* Use constant read length
* Get the correct len of bytes (saw the second entry)
* Set all buffer sizes to `CHUNK_SIZE`
* Remove a `TODO`
* Revert "Temporarily disable the PCAP capture". This reverts commit a2da15ef2d.
* Update `golang_crypto_tls_read_uprobe`
* Set the `reader` field of `tlsStream` to fix a `nil pointer dereference` error
* Don't export any fields of `golangConnection`
* Close the reader when we drop the connection
* Add a tracepoint for `sys_enter_close` to detect socket closes
* Rename `socket` struct to `golang_socket`
* Call `should_tap` in Golang uprobes
* Add `log_error` calls
* Revert "Temporarily disable OpenSSL". This reverts commit f54d9a453f.
* Fix linter
* Revert "Revert "Temporarily disable OpenSSL"". This reverts commit 2433d867af.
* Change `golang_read_writes` map type from `BPF_RINGBUF` to `BPF_PERF_OUTPUT`
* Rename `golang_read_write` to `golang_event`
* Define an error
* Add comments
* Revert "Revert "Revert "Temporarily disable OpenSSL""". This reverts commit e5a1de9c71.
* Fix `pollGolang`
* Revert "Revert "Revert "Revert "Temporarily disable OpenSSL"""". This reverts commit 6e1bd5d4f3.
* Fix `panic: send on closed channel`
* Revert "Revert "Revert "Revert "Revert "Temporarily disable OpenSSL""""". This reverts commit 57d0584655.
* Use `findLibraryByPid`
* Revert "Revert "Revert "Revert "Revert "Revert "Temporarily disable OpenSSL"""""". This reverts commit 46f3d290b0.
* Revert "Revert "Revert "Revert "Revert "Revert "Revert "Temporarily disable OpenSSL""""""". This reverts commit 775c833c06.
* Log tapping Golang
* Fix `Poll`
* Refactor `golang_net_http_dialconn_uprobe`
* Remove an excess error check
* Fix the `can only use path@version syntax with 'go get' and 'go install' in module-aware mode` error in `tap/tlstapper/bpf-builder/build.sh`
* Unify Golang and OpenSSL under a single perf event buffer and `tls_chunk` struct
* Generate the `tlsTapperChunkType` type (enum) as well
* Use the kernel page size for the `sys_closes` perf buffer
* Fix the linter error
* Fix the `MIZU_GLOBAL_GOLANG_PID` environment variable's functionality
* Rely on tracepoints for file descriptor retrieval in the Golang implementation
* Remove the unnecessary changes
* Move common functions into `common.c`
* Declare the `lookup_ssl_info` function to reduce duplication
* Fix linter
* Add comments and TODOs
* Remove the `MIZU_GLOBAL_GOLANG_PATH` environment variable
* Update the object files
* Fix indentation
* Update object files
* Add `go_abi_internal.h`
* Fix `lookup_ssl_info`
* Convert indentation to spaces
* Add header guard comment
* Add more comments
* Find the `ret` instructions using Capstone Engine and `uprobe` the `return` statements
* Implement `get_fd_from_tcp_conn` function
* Separate the SSL contexts into OpenSSL and Go
* Move `get_count_bytes` from `common.c` to `openssl_uprobes.c`
* Rename everything that contains Golang to Go
* Reduce duplication in `go_uprobes.c`
* Update the comments
* Install Capstone in CI and Docker native builds
* Update `devops/install-capstone.sh`
* Add Capstone to the AArch64 cross-compilation target
* Fix some of the issues on ARM64
* Delete the map element in `_ex_uprobe`
* Remove an unused `LOG_` macro
* Rename `aquynh` to `capstone-engine`
* Add comment
* Revert "Fix some of the issues on ARM64". This reverts commit 0b3eceddf4.
* Revert "Revert "Fix some of the issues on ARM64"". This reverts commit 681534ada1.
* Update object files
* Remove unnecessary return
* Increase timeout
* #run_acceptance_tests
* #run_acceptance_tests
* Fix the `arm64v8` sourced builds
* #run_acceptance_tests
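
One bullet above, finding the `ret` instructions with Capstone Engine and attaching a `uprobe` to each of them, is the trick that makes return-probing safe for Go binaries (see the README comment in `go_uprobes.c` below). The commit implements it in Go inside `go_offsets.go`; what follows is only a minimal C sketch of the same idea using Capstone's C API, and the helper name `find_ret_offsets` is invented for illustration. Every offset it returns becomes the attach point of an `_ex` uprobe instead of a single `uretprobe`.

// Minimal sketch (not the repository's go_offsets.go, which does this in Go):
// given the raw bytes of one ELF symbol, collect the offsets of its `ret`
// instructions so plain uprobes can be attached there instead of a uretprobe.
#include <capstone/capstone.h>
#include <stddef.h>
#include <stdint.h>

// `code`/`code_size` are the symbol's bytes, `sym_offset` its offset in the binary.
// Returns how many ret offsets were written into `out` (at most `max`).
static size_t find_ret_offsets(const uint8_t *code, size_t code_size,
                               uint64_t sym_offset, uint64_t *out, size_t max) {
    csh handle;
    cs_insn *insn;
    size_t found = 0;

    if (cs_open(CS_ARCH_X86, CS_MODE_64, &handle) != CS_ERR_OK)
        return 0;

    size_t count = cs_disasm(handle, code, code_size, sym_offset, 0, &insn);
    for (size_t i = 0; i < count && found < max; i++) {
        if (insn[i].id == X86_INS_RET)
            out[found++] = insn[i].address; // offset to attach a uprobe at
    }

    if (count > 0)
        cs_free(insn, count);
    cs_close(&handle);
    return found;
}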
tap/tlstapper/bpf/common.c (new file, 145 lines)
@@ -0,0 +1,145 @@
/*
Note: This file is licenced differently from the rest of the project
SPDX-License-Identifier: GPL-2.0
Copyright (C) UP9 Inc.
*/

#include "include/headers.h"
#include "include/util.h"
#include "include/maps.h"
#include "include/log.h"
#include "include/logger_messages.h"
#include "include/common.h"


static __always_inline int add_address_to_chunk(struct pt_regs *ctx, struct tls_chunk* chunk, __u64 id, __u32 fd) {
    __u32 pid = id >> 32;
    __u64 key = (__u64) pid << 32 | fd;

    struct fd_info *fdinfo = bpf_map_lookup_elem(&file_descriptor_to_ipv4, &key);

    if (fdinfo == NULL) {
        return 0;
    }

    int err = bpf_probe_read(chunk->address, sizeof(chunk->address), fdinfo->ipv4_addr);
    chunk->flags |= (fdinfo->flags & FLAGS_IS_CLIENT_BIT);

    if (err != 0) {
        log_error(ctx, LOG_ERROR_READING_FD_ADDRESS, id, err, 0l);
        return 0;
    }

    return 1;
}

static __always_inline void send_chunk_part(struct pt_regs *ctx, __u8* buffer, __u64 id,
    struct tls_chunk* chunk, int start, int end) {
    size_t recorded = MIN(end - start, sizeof(chunk->data));

    if (recorded <= 0) {
        return;
    }

    chunk->recorded = recorded;
    chunk->start = start;

    // This ugly trick is for the ebpf verifier's happiness
    //
    long err = 0;
    if (chunk->recorded == sizeof(chunk->data)) {
        err = bpf_probe_read(chunk->data, sizeof(chunk->data), buffer + start);
    } else {
        recorded &= (sizeof(chunk->data) - 1); // Buffer size must be a power of two
        err = bpf_probe_read(chunk->data, recorded, buffer + start);
    }

    if (err != 0) {
        log_error(ctx, LOG_ERROR_READING_FROM_SSL_BUFFER, id, err, 0l);
        return;
    }

    bpf_perf_event_output(ctx, &chunks_buffer, BPF_F_CURRENT_CPU, chunk, sizeof(struct tls_chunk));
}

static __always_inline void send_chunk(struct pt_regs *ctx, __u8* buffer, __u64 id, struct tls_chunk* chunk) {
    // ebpf loops must be bounded at compile time, we can't use (i < chunk->len / CHUNK_SIZE)
    //
    // https://lwn.net/Articles/794934/
    //
    // However, we want to run on kernels older than 5.3, hence we use "#pragma unroll" anyway
    //
#pragma unroll
    for (int i = 0; i < MAX_CHUNKS_PER_OPERATION; i++) {
        if (chunk->len <= (CHUNK_SIZE * i)) {
            break;
        }

        send_chunk_part(ctx, buffer, id, chunk, CHUNK_SIZE * i, chunk->len);
    }
}

static __always_inline void output_ssl_chunk(struct pt_regs *ctx, struct ssl_info* info, int count_bytes, __u64 id, __u32 flags) {
    if (count_bytes <= 0) {
        return;
    }

    if (count_bytes > (CHUNK_SIZE * MAX_CHUNKS_PER_OPERATION)) {
        log_error(ctx, LOG_ERROR_BUFFER_TOO_BIG, id, count_bytes, 0l);
        return;
    }

    struct tls_chunk* chunk;
    int zero = 0;

    // If another thread running on the same CPU gets to this point at the same time as us (context switch),
    // the data will be corrupted - protection may be added in the future
    //
    chunk = bpf_map_lookup_elem(&heap, &zero);

    if (!chunk) {
        log_error(ctx, LOG_ERROR_ALLOCATING_CHUNK, id, 0l, 0l);
        return;
    }

    chunk->flags = flags;
    chunk->pid = id >> 32;
    chunk->tgid = id;
    chunk->len = count_bytes;
    chunk->fd = info->fd;

    if (!add_address_to_chunk(ctx, chunk, id, chunk->fd)) {
        // Without an address, we drop the chunk because there is not much to do with it in Go
        //
        return;
    }

    send_chunk(ctx, info->buffer, id, chunk);
}

static __always_inline struct ssl_info new_ssl_info() {
    struct ssl_info info = { .fd = invalid_fd, .created_at_nano = bpf_ktime_get_ns() };
    return info;
}

static __always_inline struct ssl_info lookup_ssl_info(struct pt_regs *ctx, struct bpf_map_def* map_fd, __u64 pid_tgid) {
    struct ssl_info *infoPtr = bpf_map_lookup_elem(map_fd, &pid_tgid);
    struct ssl_info info = new_ssl_info();

    if (infoPtr != NULL) {
        long err = bpf_probe_read(&info, sizeof(struct ssl_info), infoPtr);

        if (err != 0) {
            log_error(ctx, LOG_ERROR_READING_SSL_CONTEXT, pid_tgid, err, ORIGIN_SSL_UPROBE_CODE);
        }

        if ((bpf_ktime_get_ns() - info.created_at_nano) > SSL_INFO_MAX_TTL_NANO) {
            // If the ssl info is too old, we don't want to use its info because it may be incorrect.
            //
            info.fd = invalid_fd;
            info.created_at_nano = bpf_ktime_get_ns();
        }
    }

    return info;
}
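
In `send_chunk` and `send_chunk_part` above, one read or write of `len` bytes is emitted as up to `MAX_CHUNKS_PER_OPERATION` perf events, each carrying the slice `[start, start + recorded)` of the original buffer. The sketch below only illustrates what those fields mean for a consumer; the real consumer is the Go tapper, and `apply_chunk_part` is a made-up helper, not code from this commit.

// Hedged sketch: reassembling the parts of one logical SSL read/write on the
// consumer side, based on the start/recorded/len semantics of send_chunk_part().
#include <stdint.h>
#include <string.h>

// `assembled` must be at least `total_len` (chunk->len) bytes; call once per part.
static void apply_chunk_part(uint8_t *assembled, uint32_t total_len,
                             uint32_t start, uint32_t recorded,
                             const uint8_t *part_data) {
    if (start >= total_len)
        return;
    if (recorded > total_len - start) // clamp defensively; parts should not overlap
        recorded = total_len - start;
    memcpy(assembled + start, part_data, recorded);
}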
@@ -28,7 +28,7 @@ void sys_enter_read(struct sys_enter_read_ctx *ctx) {
         return;
     }
 
-    struct ssl_info *infoPtr = bpf_map_lookup_elem(&ssl_read_context, &id);
+    struct ssl_info *infoPtr = bpf_map_lookup_elem(&openssl_read_context, &id);
 
     if (infoPtr == NULL) {
         return;
@@ -44,7 +44,7 @@ void sys_enter_read(struct sys_enter_read_ctx *ctx) {
 
     info.fd = ctx->fd;
 
-    err = bpf_map_update_elem(&ssl_read_context, &id, &info, BPF_ANY);
+    err = bpf_map_update_elem(&openssl_read_context, &id, &info, BPF_ANY);
 
     if (err != 0) {
         log_error(ctx, LOG_ERROR_PUTTING_FILE_DESCRIPTOR, id, err, ORIGIN_SYS_ENTER_READ_CODE);
@@ -68,7 +68,7 @@ void sys_enter_write(struct sys_enter_write_ctx *ctx) {
         return;
     }
 
-    struct ssl_info *infoPtr = bpf_map_lookup_elem(&ssl_write_context, &id);
+    struct ssl_info *infoPtr = bpf_map_lookup_elem(&openssl_write_context, &id);
 
     if (infoPtr == NULL) {
         return;
@@ -84,7 +84,7 @@ void sys_enter_write(struct sys_enter_write_ctx *ctx) {
 
     info.fd = ctx->fd;
 
-    err = bpf_map_update_elem(&ssl_write_context, &id, &info, BPF_ANY);
+    err = bpf_map_update_elem(&openssl_write_context, &id, &info, BPF_ANY);
 
     if (err != 0) {
         log_error(ctx, LOG_ERROR_PUTTING_FILE_DESCRIPTOR, id, err, ORIGIN_SYS_ENTER_WRITE_CODE);
tap/tlstapper/bpf/go_uprobes.c (new file, 154 lines)
@@ -0,0 +1,154 @@
/*
Note: This file is licenced differently from the rest of the project
SPDX-License-Identifier: GPL-2.0
Copyright (C) UP9 Inc.


---

README

Go does not follow any platform ABI like the x86-64 System V ABI.
Before 1.17, Go followed the stack-based Plan 9 (Bell Labs) calling convention. (ABI0)
After 1.17, Go switched to an internal register-based calling convention. (ABIInternal)
For now, the probes in this file support only ABIInternal (Go 1.17+).

`uretprobe` in the Linux kernel uses a trampoline pattern to jump to the original return
address of the probed function. A goroutine's stack starts at 2 KB, while a C thread's stack is 2 MB on Linux.
If the stack grows beyond its current size, the Go runtime relocates it. That makes the
return address patched by `uretprobe` incorrect, and the probed Go program crashes.
Therefore `uretprobe` CAN'T BE USED for a Go program.

The `_ex_uprobe` suffixed probes, which would normally be `uretprobe`(s), are actually `uprobe`(s)
because of the non-standard ABI of Go. Therefore we probe all `ret` mnemonics under the symbol,
finding them automatically by reading the ELF binary and disassembling the symbols.
The disassembly-related code is located in the `go_offsets.go` file and uses Capstone Engine.
Solution based on: https://github.com/iovisor/bcc/issues/1320#issuecomment-407927542
*Example*: We probe an arbitrary point in a function body (offset +559):
https://github.com/golang/go/blob/go1.17.6/src/crypto/tls/conn.go#L1299

We get the file descriptor from the $rax register, which holds the address
of `go.itab.*net.TCPConn,net.Conn`, through a series of dereferences
done with `bpf_probe_read` calls in the `go_crypto_tls_get_fd_from_tcp_conn` function.

---

SOURCES:

Tracing Go Functions with eBPF (before 1.17): https://www.grant.pizza/blog/tracing-go-functions-with-ebpf-part-2/
Challenges of BPF Tracing Go: https://blog.0x74696d.com/posts/challenges-of-bpf-tracing-go/
x86 calling conventions: https://en.wikipedia.org/wiki/X86_calling_conventions
Plan 9 from Bell Labs: https://en.wikipedia.org/wiki/Plan_9_from_Bell_Labs
The issue for the calling convention change in Go: https://github.com/golang/go/issues/40724
Proposal of the register-based Go calling convention: https://go.googlesource.com/proposal/+/master/design/40724-register-calling.md
Go internal ABI (1.17) specification: https://go.googlesource.com/go/+/refs/heads/dev.regabi/src/cmd/compile/internal-abi.md
Go internal ABI (current) specification: https://go.googlesource.com/go/+/refs/heads/master/src/cmd/compile/abi-internal.md
A Quick Guide to Go's Assembler: https://go.googlesource.com/go/+/refs/heads/dev.regabi/doc/asm.html
Dissecting Go Binaries: https://www.grant.pizza/blog/dissecting-go-binaries/
Capstone Engine: https://www.capstone-engine.org/
*/

#include "include/headers.h"
#include "include/util.h"
#include "include/maps.h"
#include "include/log.h"
#include "include/logger_messages.h"
#include "include/pids.h"
#include "include/common.h"
#include "include/go_abi_internal.h"
#include "include/go_types.h"

static __always_inline __u32 go_crypto_tls_get_fd_from_tcp_conn(struct pt_regs *ctx) {
    struct go_interface conn;
    long err = bpf_probe_read(&conn, sizeof(conn), (void*)GO_ABI_INTERNAL_PT_REGS_R1(ctx));
    if (err != 0) {
        return invalid_fd;
    }

    void* net_fd_ptr;
    err = bpf_probe_read(&net_fd_ptr, sizeof(net_fd_ptr), conn.ptr);
    if (err != 0) {
        return invalid_fd;
    }

    __u32 fd;
    err = bpf_probe_read(&fd, sizeof(fd), net_fd_ptr + 0x10);
    if (err != 0) {
        return invalid_fd;
    }

    return fd;
}

static __always_inline void go_crypto_tls_uprobe(struct pt_regs *ctx, struct bpf_map_def* go_context) {
    __u64 pid_tgid = bpf_get_current_pid_tgid();
    __u64 pid = pid_tgid >> 32;
    if (!should_tap(pid)) {
        return;
    }

    struct ssl_info info = new_ssl_info();

    info.buffer_len = GO_ABI_INTERNAL_PT_REGS_R2(ctx);
    info.buffer = (void*)GO_ABI_INTERNAL_PT_REGS_R4(ctx);
    info.fd = go_crypto_tls_get_fd_from_tcp_conn(ctx);

    // GO_ABI_INTERNAL_PT_REGS_GP is the goroutine address
    __u64 pid_fp = pid << 32 | GO_ABI_INTERNAL_PT_REGS_GP(ctx);
    long err = bpf_map_update_elem(go_context, &pid_fp, &info, BPF_ANY);

    if (err != 0) {
        log_error(ctx, LOG_ERROR_PUTTING_SSL_CONTEXT, pid_tgid, err, 0l);
    }

    return;
}

static __always_inline void go_crypto_tls_ex_uprobe(struct pt_regs *ctx, struct bpf_map_def* go_context, __u32 flags) {
    __u64 pid_tgid = bpf_get_current_pid_tgid();
    __u64 pid = pid_tgid >> 32;
    if (!should_tap(pid)) {
        return;
    }

    // GO_ABI_INTERNAL_PT_REGS_GP is the goroutine address
    __u64 pid_fp = pid << 32 | GO_ABI_INTERNAL_PT_REGS_GP(ctx);
    struct ssl_info *info_ptr = bpf_map_lookup_elem(go_context, &pid_fp);

    if (info_ptr == NULL) {
        return;
    }
    bpf_map_delete_elem(go_context, &pid_fp);

    struct ssl_info info;
    long err = bpf_probe_read(&info, sizeof(struct ssl_info), info_ptr);

    if (err != 0) {
        log_error(ctx, LOG_ERROR_READING_SSL_CONTEXT, pid_tgid, err, ORIGIN_SSL_URETPROBE_CODE);
        return;
    }

    output_ssl_chunk(ctx, &info, info.buffer_len, pid_tgid, flags);

    return;
}

SEC("uprobe/go_crypto_tls_write")
void BPF_KPROBE(go_crypto_tls_write) {
    go_crypto_tls_uprobe(ctx, &go_write_context);
}

SEC("uprobe/go_crypto_tls_write_ex")
void BPF_KPROBE(go_crypto_tls_write_ex) {
    go_crypto_tls_ex_uprobe(ctx, &go_write_context, 0);
}

SEC("uprobe/go_crypto_tls_read")
void BPF_KPROBE(go_crypto_tls_read) {
    go_crypto_tls_uprobe(ctx, &go_read_context);
}

SEC("uprobe/go_crypto_tls_read_ex")
void BPF_KPROBE(go_crypto_tls_read_ex) {
    go_crypto_tls_ex_uprobe(ctx, &go_read_context, FLAGS_IS_READ_BIT);
}
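
As the README comment in this file explains, the `_ex` probes cannot be `uretprobe`s, so userspace attaches them as ordinary uprobes at every `ret` offset produced by the disassembly step. In this commit the attachment is done from Go via `github.com/cilium/ebpf`; purely as an illustration of the pattern, this is how the same attachment would look with libbpf's C API (`prog_entry`, `prog_ex`, `binary_path` and `ret_offsets` are placeholder names, and error handling is simplified).

// Hedged illustration with libbpf (not the repository's code, which attaches
// from Go): one uprobe at the symbol entry plus plain uprobes (retprobe=false)
// at every `ret` offset, replacing the single uretprobe that would crash Go.
#include <bpf/libbpf.h>
#include <stddef.h>

static int attach_go_probes(struct bpf_program *prog_entry,
                            struct bpf_program *prog_ex,
                            const char *binary_path,
                            size_t entry_offset,
                            const size_t *ret_offsets, size_t ret_count) {
    // pid = -1 attaches to every process that maps this binary;
    // the returned links are intentionally kept alive for the process lifetime.
    struct bpf_link *link = bpf_program__attach_uprobe(prog_entry, false, -1,
                                                       binary_path, entry_offset);
    if (link == NULL)
        return -1;

    for (size_t i = 0; i < ret_count; i++) {
        // A plain uprobe on the `ret` instruction replaces the uretprobe, so
        // goroutine stack relocation cannot corrupt a trampoline return address.
        link = bpf_program__attach_uprobe(prog_ex, false, -1,
                                          binary_path, ret_offsets[i]);
        if (link == NULL)
            return -1;
    }
    return 0;
}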
tap/tlstapper/bpf/include/common.h (new file, 19 lines)
@@ -0,0 +1,19 @@
/*
Note: This file is licenced differently from the rest of the project
SPDX-License-Identifier: GPL-2.0
Copyright (C) UP9 Inc.
*/

#ifndef __COMMON__
#define __COMMON__

const __s32 invalid_fd = -1;

static int add_address_to_chunk(struct pt_regs *ctx, struct tls_chunk* chunk, __u64 id, __u32 fd);
static void send_chunk_part(struct pt_regs *ctx, __u8* buffer, __u64 id, struct tls_chunk* chunk, int start, int end);
static void send_chunk(struct pt_regs *ctx, __u8* buffer, __u64 id, struct tls_chunk* chunk);
static void output_ssl_chunk(struct pt_regs *ctx, struct ssl_info* info, int count_bytes, __u64 id, __u32 flags);
static struct ssl_info new_ssl_info();
static struct ssl_info lookup_ssl_info(struct pt_regs *ctx, struct bpf_map_def* map_fd, __u64 pid_tgid);

#endif /* __COMMON__ */
tap/tlstapper/bpf/include/go_abi_internal.h (new file, 141 lines)
@@ -0,0 +1,141 @@
/*
Note: This file is licenced differently from the rest of the project
SPDX-License-Identifier: GPL-2.0
Copyright (C) UP9 Inc.
*/

#ifndef __GO_ABI_INTERNAL__
#define __GO_ABI_INTERNAL__

/*
Go internal ABI specification
https://go.googlesource.com/go/+/refs/heads/master/src/cmd/compile/abi-internal.md
*/

/* Scan the ARCH passed in from ARCH env variable */
#if defined(__TARGET_ARCH_x86)
#define bpf_target_x86
#define bpf_target_defined
#elif defined(__TARGET_ARCH_s390)
#define bpf_target_s390
#define bpf_target_defined
#elif defined(__TARGET_ARCH_arm)
#define bpf_target_arm
#define bpf_target_defined
#elif defined(__TARGET_ARCH_arm64)
#define bpf_target_arm64
#define bpf_target_defined
#elif defined(__TARGET_ARCH_mips)
#define bpf_target_mips
#define bpf_target_defined
#elif defined(__TARGET_ARCH_powerpc)
#define bpf_target_powerpc
#define bpf_target_defined
#elif defined(__TARGET_ARCH_sparc)
#define bpf_target_sparc
#define bpf_target_defined
#else
#undef bpf_target_defined
#endif

/* Fall back to what the compiler says */
#ifndef bpf_target_defined
#if defined(__x86_64__)
#define bpf_target_x86
#elif defined(__s390__)
#define bpf_target_s390
#elif defined(__arm__)
#define bpf_target_arm
#elif defined(__aarch64__)
#define bpf_target_arm64
#elif defined(__mips__)
#define bpf_target_mips
#elif defined(__powerpc__)
#define bpf_target_powerpc
#elif defined(__sparc__)
#define bpf_target_sparc
#endif
#endif

#if defined(bpf_target_x86)

#ifdef __i386__

/*
https://go.googlesource.com/go/+/refs/heads/dev.regabi/src/cmd/compile/internal-abi.md#amd64-architecture
https://github.com/golang/go/blob/go1.17.6/src/cmd/compile/internal/ssa/gen/AMD64Ops.go#L100
*/
#define GO_ABI_INTERNAL_PT_REGS_R1(x) ((x)->eax)
#define GO_ABI_INTERNAL_PT_REGS_P2(x) ((x)->ecx)
#define GO_ABI_INTERNAL_PT_REGS_P3(x) ((x)->edx)
#define GO_ABI_INTERNAL_PT_REGS_P4(x) 0
#define GO_ABI_INTERNAL_PT_REGS_P5(x) 0
#define GO_ABI_INTERNAL_PT_REGS_P6(x) 0
#define GO_ABI_INTERNAL_PT_REGS_SP(x) ((x)->esp)
#define GO_ABI_INTERNAL_PT_REGS_FP(x) ((x)->ebp)
#define GO_ABI_INTERNAL_PT_REGS_GP(x) ((x)->e14)

#else

#define GO_ABI_INTERNAL_PT_REGS_R1(x) ((x)->rax)
#define GO_ABI_INTERNAL_PT_REGS_R2(x) ((x)->rcx)
#define GO_ABI_INTERNAL_PT_REGS_R3(x) ((x)->rdx)
#define GO_ABI_INTERNAL_PT_REGS_R4(x) ((x)->rbx)
#define GO_ABI_INTERNAL_PT_REGS_R5(x) ((x)->rbp)
#define GO_ABI_INTERNAL_PT_REGS_R6(x) ((x)->rsi)
#define GO_ABI_INTERNAL_PT_REGS_SP(x) ((x)->rsp)
#define GO_ABI_INTERNAL_PT_REGS_FP(x) ((x)->rbp)
#define GO_ABI_INTERNAL_PT_REGS_GP(x) ((x)->r14)

#endif

#elif defined(bpf_target_arm)

/*
https://go.googlesource.com/go/+/refs/heads/master/src/cmd/compile/abi-internal.md#arm64-architecture
https://github.com/golang/go/blob/go1.17.6/src/cmd/compile/internal/ssa/gen/ARM64Ops.go#L129-L131
*/
#define GO_ABI_INTERNAL_PT_REGS_R1(x) ((x)->uregs[0])
#define GO_ABI_INTERNAL_PT_REGS_R2(x) ((x)->uregs[1])
#define GO_ABI_INTERNAL_PT_REGS_R3(x) ((x)->uregs[2])
#define GO_ABI_INTERNAL_PT_REGS_R4(x) ((x)->uregs[3])
#define GO_ABI_INTERNAL_PT_REGS_R5(x) ((x)->uregs[4])
#define GO_ABI_INTERNAL_PT_REGS_R6(x) ((x)->uregs[5])
#define GO_ABI_INTERNAL_PT_REGS_SP(x) ((x)->uregs[14])
#define GO_ABI_INTERNAL_PT_REGS_FP(x) ((x)->uregs[29])
#define GO_ABI_INTERNAL_PT_REGS_GP(x) ((x)->uregs[28])

#elif defined(bpf_target_arm64)

/* arm64 provides struct user_pt_regs instead of struct pt_regs to userspace */
struct pt_regs;
#define PT_REGS_ARM64 const volatile struct user_pt_regs
#define GO_ABI_INTERNAL_PT_REGS_R1(x) (((PT_REGS_ARM64 *)(x))->regs[0])
#define GO_ABI_INTERNAL_PT_REGS_R2(x) (((PT_REGS_ARM64 *)(x))->regs[1])
#define GO_ABI_INTERNAL_PT_REGS_R3(x) (((PT_REGS_ARM64 *)(x))->regs[2])
#define GO_ABI_INTERNAL_PT_REGS_R4(x) (((PT_REGS_ARM64 *)(x))->regs[3])
#define GO_ABI_INTERNAL_PT_REGS_R5(x) (((PT_REGS_ARM64 *)(x))->regs[4])
#define GO_ABI_INTERNAL_PT_REGS_R6(x) (((PT_REGS_ARM64 *)(x))->regs[5])
#define GO_ABI_INTERNAL_PT_REGS_SP(x) (((PT_REGS_ARM64 *)(x))->regs[30])
#define GO_ABI_INTERNAL_PT_REGS_FP(x) (((PT_REGS_ARM64 *)(x))->regs[29])
#define GO_ABI_INTERNAL_PT_REGS_GP(x) (((PT_REGS_ARM64 *)(x))->regs[28])

#elif defined(bpf_target_powerpc)

/*
https://go.googlesource.com/go/+/refs/heads/master/src/cmd/compile/abi-internal.md#ppc64-architecture
https://github.com/golang/go/blob/go1.17.6/src/cmd/compile/internal/ssa/gen/PPC64Ops.go#L125-L127
*/
#define GO_ABI_INTERNAL_PT_REGS_R1(x) ((x)->gpr[3])
#define GO_ABI_INTERNAL_PT_REGS_R2(x) ((x)->gpr[4])
#define GO_ABI_INTERNAL_PT_REGS_R3(x) ((x)->gpr[5])
#define GO_ABI_INTERNAL_PT_REGS_R4(x) ((x)->gpr[6])
#define GO_ABI_INTERNAL_PT_REGS_R5(x) ((x)->gpr[7])
#define GO_ABI_INTERNAL_PT_REGS_R6(x) ((x)->gpr[8])
#define GO_ABI_INTERNAL_PT_REGS_SP(x) ((x)->sp)
#define GO_ABI_INTERNAL_PT_REGS_FP(x) ((x)->gpr[12])
#define GO_ABI_INTERNAL_PT_REGS_GP(x) ((x)->gpr[30])

#endif

#endif /* __GO_ABI_INTERNAL__ */
tap/tlstapper/bpf/include/go_types.h (new file, 15 lines)
@@ -0,0 +1,15 @@
/*
Note: This file is licenced differently from the rest of the project
SPDX-License-Identifier: GPL-2.0
Copyright (C) UP9 Inc.
*/

#ifndef __GO_TYPES__
#define __GO_TYPES__

struct go_interface {
    __s64 type;
    void* ptr;
};

#endif /* __GO_TYPES__ */
@@ -16,11 +16,15 @@ Copyright (C) UP9 Inc.
 // One minute in nano seconds. Chosen by gut feeling.
 #define SSL_INFO_MAX_TTL_NANO (1000000000l * 60l)
 
+#define MAX_ENTRIES_HASH (1 << 12) // 4096
+#define MAX_ENTRIES_PERF_OUTPUT (1 << 10) // 1024
+#define MAX_ENTRIES_LRU_HASH (1 << 14) // 16384
+
 // The same struct can be found in chunk.go
 //
 // Be careful when editing, alignment and padding should be exactly the same in go/c.
 //
-struct tlsChunk {
+struct tls_chunk {
     __u32 pid;
     __u32 tgid;
     __u32 len;
@@ -34,6 +38,7 @@ struct tlsChunk {
 
 struct ssl_info {
     void* buffer;
+    __u32 buffer_len;
     __u32 fd;
     __u64 created_at_nano;
 
@@ -48,6 +53,16 @@ struct fd_info {
     __u8 flags;
 };
 
+// Heap-like area for eBPF programs - stack size limited to 512 bytes, we must use maps for bigger (chunk) objects.
+//
+struct {
+    __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
+    __uint(max_entries, 1);
+    __type(key, int);
+    __type(value, struct tls_chunk);
+} heap SEC(".maps");
+
+
 #define BPF_MAP(_name, _type, _key_type, _value_type, _max_entries) \
     struct bpf_map_def SEC("maps") _name = { \
         .type = _type, \
@@ -57,19 +72,26 @@ struct fd_info {
     };
 
 #define BPF_HASH(_name, _key_type, _value_type) \
-    BPF_MAP(_name, BPF_MAP_TYPE_HASH, _key_type, _value_type, 4096)
+    BPF_MAP(_name, BPF_MAP_TYPE_HASH, _key_type, _value_type, MAX_ENTRIES_HASH)
 
 #define BPF_PERF_OUTPUT(_name) \
-    BPF_MAP(_name, BPF_MAP_TYPE_PERF_EVENT_ARRAY, int, __u32, 1024)
-
-#define BPF_LRU_HASH(_name, _key_type, _value_type) \
-    BPF_MAP(_name, BPF_MAP_TYPE_LRU_HASH, _key_type, _value_type, 16384)
+    BPF_MAP(_name, BPF_MAP_TYPE_PERF_EVENT_ARRAY, int, __u32, MAX_ENTRIES_PERF_OUTPUT)
+
+#define BPF_LRU_HASH(_name, _key_type, _value_type) \
+    BPF_MAP(_name, BPF_MAP_TYPE_LRU_HASH, _key_type, _value_type, MAX_ENTRIES_LRU_HASH)
 
+// Generic
 BPF_HASH(pids_map, __u32, __u32);
-BPF_LRU_HASH(ssl_write_context, __u64, struct ssl_info);
-BPF_LRU_HASH(ssl_read_context, __u64, struct ssl_info);
 BPF_LRU_HASH(file_descriptor_to_ipv4, __u64, struct fd_info);
 BPF_PERF_OUTPUT(chunks_buffer);
 BPF_PERF_OUTPUT(log_buffer);
 
+// OpenSSL specific
+BPF_LRU_HASH(openssl_write_context, __u64, struct ssl_info);
+BPF_LRU_HASH(openssl_read_context, __u64, struct ssl_info);
+
+// Go specific
+BPF_LRU_HASH(go_write_context, __u64, struct ssl_info);
+BPF_LRU_HASH(go_read_context, __u64, struct ssl_info);
 
 #endif /* __MAPS__ */
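
Both the OpenSSL probes and the new Go probes now publish into the single `chunks_buffer` perf output declared above. The commit reads it from Go with cilium/ebpf's perf reader; the sketch below only illustrates the shape of that data path using the libbpf 1.0 C API, and `map_fd`, the page count and the callback bodies are assumptions for illustration rather than the project's actual code.

// Hedged sketch of a userspace consumer for the chunks_buffer perf output,
// written against the libbpf 1.0 perf_buffer API (the real consumer is the Go tapper).
#include <bpf/libbpf.h>
#include <stdio.h>

static void handle_chunk(void *ctx, int cpu, void *data, __u32 size) {
    // `data` points to one struct tls_chunk as laid out in maps.h / chunk.go.
    printf("got %u bytes of TLS chunk data on cpu %d\n", size, cpu);
}

static void handle_lost(void *ctx, int cpu, __u64 cnt) {
    fprintf(stderr, "lost %llu chunks on cpu %d\n", (unsigned long long)cnt, cpu);
}

static int poll_chunks(int map_fd) {
    struct perf_buffer *pb =
        perf_buffer__new(map_fd, 64 /* pages per CPU */, handle_chunk,
                         handle_lost, NULL, NULL);
    if (!pb)
        return -1;

    while (perf_buffer__poll(pb, 100 /* ms */) >= 0)
        ; // handle_chunk runs once per record

    perf_buffer__free(pb);
    return 0;
}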
@@ -10,149 +10,35 @@ Copyright (C) UP9 Inc.
 #include "include/log.h"
 #include "include/logger_messages.h"
 #include "include/pids.h"
+#include "include/common.h"
 
-// Heap-like area for eBPF programs - stack size limited to 512 bytes, we must use maps for bigger (chunk) objects.
-//
-struct {
-    __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
-    __uint(max_entries, 1);
-    __type(key, int);
-    __type(value, struct tlsChunk);
-} heap SEC(".maps");
-
 static __always_inline int get_count_bytes(struct pt_regs *ctx, struct ssl_info* info, __u64 id) {
     int returnValue = PT_REGS_RC(ctx);
 
     if (info->count_ptr == NULL) {
         // ssl_read and ssl_write return the number of bytes written/read
         //
         return returnValue;
     }
 
     // ssl_read_ex and ssl_write_ex return 1 for success
     //
     if (returnValue != 1) {
         return 0;
     }
 
     // ssl_read_ex and ssl_write_ex write the number of bytes to an arg named *count
     //
     size_t countBytes;
     long err = bpf_probe_read(&countBytes, sizeof(size_t), (void*) info->count_ptr);
 
     if (err != 0) {
         log_error(ctx, LOG_ERROR_READING_BYTES_COUNT, id, err, 0l);
         return 0;
     }
 
     return countBytes;
 }
 
-static __always_inline int add_address_to_chunk(struct pt_regs *ctx, struct tlsChunk* chunk, __u64 id, __u32 fd) {
-    __u32 pid = id >> 32;
-    __u64 key = (__u64) pid << 32 | fd;
-
-    struct fd_info *fdinfo = bpf_map_lookup_elem(&file_descriptor_to_ipv4, &key);
-
-    if (fdinfo == NULL) {
-        return 0;
-    }
-
-    int err = bpf_probe_read(chunk->address, sizeof(chunk->address), fdinfo->ipv4_addr);
-    chunk->flags |= (fdinfo->flags & FLAGS_IS_CLIENT_BIT);
-
-    if (err != 0) {
-        log_error(ctx, LOG_ERROR_READING_FD_ADDRESS, id, err, 0l);
-        return 0;
-    }
-
-    return 1;
-}
-
-static __always_inline void send_chunk_part(struct pt_regs *ctx, __u8* buffer, __u64 id,
-    struct tlsChunk* chunk, int start, int end) {
-    size_t recorded = MIN(end - start, sizeof(chunk->data));
-
-    if (recorded <= 0) {
-        return;
-    }
-
-    chunk->recorded = recorded;
-    chunk->start = start;
-
-    // This ugly trick is for the ebpf verifier happiness
-    //
-    long err = 0;
-    if (chunk->recorded == sizeof(chunk->data)) {
-        err = bpf_probe_read(chunk->data, sizeof(chunk->data), buffer + start);
-    } else {
-        recorded &= (sizeof(chunk->data) - 1); // Buffer must be N^2
-        err = bpf_probe_read(chunk->data, recorded, buffer + start);
-    }
-
-    if (err != 0) {
-        log_error(ctx, LOG_ERROR_READING_FROM_SSL_BUFFER, id, err, 0l);
-        return;
-    }
-
-    bpf_perf_event_output(ctx, &chunks_buffer, BPF_F_CURRENT_CPU, chunk, sizeof(struct tlsChunk));
-}
-
-static __always_inline void send_chunk(struct pt_regs *ctx, __u8* buffer, __u64 id, struct tlsChunk* chunk) {
-    // ebpf loops must be bounded at compile time, we can't use (i < chunk->len / CHUNK_SIZE)
-    //
-    // https://lwn.net/Articles/794934/
-    //
-    // However we want to run in kernel older than 5.3, hence we use "#pragma unroll" anyway
-    //
-#pragma unroll
-    for (int i = 0; i < MAX_CHUNKS_PER_OPERATION; i++) {
-        if (chunk->len <= (CHUNK_SIZE * i)) {
-            break;
-        }
-
-        send_chunk_part(ctx, buffer, id, chunk, CHUNK_SIZE * i, chunk->len);
-    }
-}
-
-static __always_inline void output_ssl_chunk(struct pt_regs *ctx, struct ssl_info* info, __u64 id, __u32 flags) {
-    int countBytes = get_count_bytes(ctx, info, id);
-
-    if (countBytes <= 0) {
-        return;
-    }
-
-    if (countBytes > (CHUNK_SIZE * MAX_CHUNKS_PER_OPERATION)) {
-        log_error(ctx, LOG_ERROR_BUFFER_TOO_BIG, id, countBytes, 0l);
-        return;
-    }
-
-    struct tlsChunk* chunk;
-    int zero = 0;
-
-    // If other thread, running on the same CPU get to this point at the same time like us (context switch)
-    // the data will be corrupted - protection may be added in the future
-    //
-    chunk = bpf_map_lookup_elem(&heap, &zero);
-
-    if (!chunk) {
-        log_error(ctx, LOG_ERROR_ALLOCATING_CHUNK, id, 0l, 0l);
-        return;
-    }
-
-    chunk->flags = flags;
-    chunk->pid = id >> 32;
-    chunk->tgid = id;
-    chunk->len = countBytes;
-    chunk->fd = info->fd;
-
-    if (!add_address_to_chunk(ctx, chunk, id, chunk->fd)) {
-        // Without an address, we drop the chunk because there is not much to do with it in Go
-        //
-        return;
-    }
-
-    send_chunk(ctx, info->buffer, id, chunk);
-}
-
 static __always_inline void ssl_uprobe(struct pt_regs *ctx, void* ssl, void* buffer, int num, struct bpf_map_def* map_fd, size_t *count_ptr) {
@@ -163,25 +49,7 @@ static __always_inline void ssl_uprobe(struct pt_regs *ctx, void* ssl, void* buf
     }
 
-    struct ssl_info *infoPtr = bpf_map_lookup_elem(map_fd, &id);
-    struct ssl_info info = {};
-
-    if (infoPtr == NULL) {
-        info.fd = -1;
-        info.created_at_nano = bpf_ktime_get_ns();
-    } else {
-        long err = bpf_probe_read(&info, sizeof(struct ssl_info), infoPtr);
-
-        if (err != 0) {
-            log_error(ctx, LOG_ERROR_READING_SSL_CONTEXT, id, err, ORIGIN_SSL_UPROBE_CODE);
-        }
-
-        if ((bpf_ktime_get_ns() - info.created_at_nano) > SSL_INFO_MAX_TTL_NANO) {
-            // If the ssl info is too old, we don't want to use its info because it may be incorrect.
-            //
-            info.fd = -1;
-            info.created_at_nano = bpf_ktime_get_ns();
-        }
-    }
+    struct ssl_info info = lookup_ssl_info(ctx, &openssl_write_context, id);
 
     info.count_ptr = count_ptr;
     info.buffer = buffer;
@@ -227,50 +95,52 @@ static __always_inline void ssl_uretprobe(struct pt_regs *ctx, struct bpf_map_de
         return;
     }
 
-    if (info.fd == -1) {
+    if (info.fd == invalid_fd) {
         log_error(ctx, LOG_ERROR_MISSING_FILE_DESCRIPTOR, id, 0l, 0l);
         return;
     }
 
-    output_ssl_chunk(ctx, &info, id, flags);
+    int count_bytes = get_count_bytes(ctx, &info, id);
+
+    output_ssl_chunk(ctx, &info, count_bytes, id, flags);
 }
 
 SEC("uprobe/ssl_write")
 void BPF_KPROBE(ssl_write, void* ssl, void* buffer, int num) {
-    ssl_uprobe(ctx, ssl, buffer, num, &ssl_write_context, 0);
+    ssl_uprobe(ctx, ssl, buffer, num, &openssl_write_context, 0);
 }
 
 SEC("uretprobe/ssl_write")
 void BPF_KPROBE(ssl_ret_write) {
-    ssl_uretprobe(ctx, &ssl_write_context, 0);
+    ssl_uretprobe(ctx, &openssl_write_context, 0);
 }
 
 SEC("uprobe/ssl_read")
 void BPF_KPROBE(ssl_read, void* ssl, void* buffer, int num) {
-    ssl_uprobe(ctx, ssl, buffer, num, &ssl_read_context, 0);
+    ssl_uprobe(ctx, ssl, buffer, num, &openssl_read_context, 0);
 }
 
 SEC("uretprobe/ssl_read")
 void BPF_KPROBE(ssl_ret_read) {
-    ssl_uretprobe(ctx, &ssl_read_context, FLAGS_IS_READ_BIT);
+    ssl_uretprobe(ctx, &openssl_read_context, FLAGS_IS_READ_BIT);
 }
 
 SEC("uprobe/ssl_write_ex")
 void BPF_KPROBE(ssl_write_ex, void* ssl, void* buffer, size_t num, size_t *written) {
-    ssl_uprobe(ctx, ssl, buffer, num, &ssl_write_context, written);
+    ssl_uprobe(ctx, ssl, buffer, num, &openssl_write_context, written);
 }
 
 SEC("uretprobe/ssl_write_ex")
 void BPF_KPROBE(ssl_ret_write_ex) {
-    ssl_uretprobe(ctx, &ssl_write_context, 0);
+    ssl_uretprobe(ctx, &openssl_write_context, 0);
 }
 
 SEC("uprobe/ssl_read_ex")
 void BPF_KPROBE(ssl_read_ex, void* ssl, void* buffer, size_t num, size_t *readbytes) {
-    ssl_uprobe(ctx, ssl, buffer, num, &ssl_read_context, readbytes);
+    ssl_uprobe(ctx, ssl, buffer, num, &openssl_read_context, readbytes);
 }
 
 SEC("uretprobe/ssl_read_ex")
 void BPF_KPROBE(ssl_ret_read_ex) {
-    ssl_uretprobe(ctx, &ssl_read_context, FLAGS_IS_READ_BIT);
+    ssl_uretprobe(ctx, &openssl_read_context, FLAGS_IS_READ_BIT);
 }
@@ -13,7 +13,9 @@ Copyright (C) UP9 Inc.
 
 // To avoid multiple .o files
 //
+#include "common.c"
 #include "openssl_uprobes.c"
+#include "go_uprobes.c"
 #include "fd_tracepoints.c"
 #include "fd_to_address_tracepoints.c"
 