mirror of https://github.com/hwchase17/langchain.git, synced 2025-09-04 20:46:45 +00:00
community: add truncation params when an openai assistant's run is created (#28158)
**Description:** When an OpenAI assistant is invoked, it creates a run by default and lets users set only a few request fields. The truncation strategy defaults to `auto`, which packs previous thread messages in with the current question until the context length is reached, so token usage grows with every turn: consumed_tokens = previous_consumed_tokens + current_consumed_tokens. This PR adds support for user-defined truncation settings (`truncation_strategy` and `max_prompt_tokens`), giving better control over token consumption.

**Issue:** High token consumption.
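For reference, these fields map to the `truncation_strategy` and `max_prompt_tokens` parameters of the OpenAI Assistants v2 runs API. A minimal sketch of the raw client call is below; the thread and assistant ids are placeholders, not values from this PR.

```python
# Sketch of the underlying OpenAI Assistants v2 call the new fields are forwarded to.
# "thread_..." and "asst_..." are placeholder ids.
from openai import OpenAI

client = OpenAI()
run = client.beta.threads.runs.create(
    "thread_...",                    # existing thread id (placeholder)
    assistant_id="asst_...",         # assistant id (placeholder)
    # Keep only the last 3 thread messages in the prompt instead of "auto".
    truncation_strategy={"type": "last_messages", "last_messages": 3},
    # Upper bound on the prompt tokens this run may use.
    max_prompt_tokens=1000,
)
```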
@@ -543,11 +543,16 @@ class OpenAIAssistantV2Runnable(OpenAIAssistantRunnable):
         Returns:
             Any: The created run object.
         """
-        params = {
-            k: v
-            for k, v in input.items()
-            if k in ("instructions", "model", "tools", "tool_resources", "run_metadata")
-        }
+        allowed_assistant_params = (
+            "instructions",
+            "model",
+            "tools",
+            "tool_resources",
+            "run_metadata",
+            "truncation_strategy",
+            "max_prompt_tokens",
+        )
+        params = {k: v for k, v in input.items() if k in allowed_assistant_params}
         return self.client.beta.threads.runs.create(
             input["thread_id"],
             assistant_id=self.assistant_id,
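As a usage sketch (not part of the PR), the new fields can be passed straight through the runnable's input dict. `_create_run` is the branch taken when an existing `thread_id` is supplied and no `run_id` is present, so the example below includes one; the assistant and thread ids are placeholders, and the import path is the one used by `langchain_community` at the time of this change.

```python
# Illustrative only: placeholder ids, hypothetical prompt.
from langchain_community.agents.openai_assistant import OpenAIAssistantV2Runnable

assistant = OpenAIAssistantV2Runnable(assistant_id="asst_...", as_agent=False)

response = assistant.invoke(
    {
        "content": "Summarize our discussion so far in one sentence.",
        "thread_id": "thread_...",  # existing thread -> routed through _create_run
        # Keep only the last 2 thread messages instead of the default "auto" strategy.
        "truncation_strategy": {"type": "last_messages", "last_messages": 2},
        # Cap the prompt tokens consumed by this run.
        "max_prompt_tokens": 2000,
    }
)
```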