fix(docs): chatbot.ipynb trimming regression (#32561)

Supersedes #32544

Changes to the `trimmer` behavior caused the query `"What math
problem was asked?"` to no longer see the relevant message: the new
query's own tokens counted against the token budget and pushed the
earlier exchange out of the window. Adjusted the example so trimming no
longer drops the relevant part of the message history. Also added print
statements around the trimmer to increase observability into what is
leaving the context window.

Add a note to the trimming tutorial & format bare links as inline
Author: Mason Daugherty, 2025-08-15 10:47:22 -04:00 (committed by GitHub)
Parent: b2b835cb36 · Commit: fe740a9397
2 changed files with 39 additions and 11 deletions
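For context, the tutorial builds its trimmer with `trim_messages`. The sketch below is illustrative rather than the tutorial's exact cell: the `max_tokens` value and the character-based counter are stand-ins (the tutorial passes a chat model as `token_counter`). It also shows why the regression happened: the new query's own tokens count against this budget, so a tight budget can evict the very message the question refers to.

```python
from langchain_core.messages import trim_messages


def approx_token_counter(messages) -> int:
    # Rough stand-in: ~4 characters per token plus per-message overhead.
    # The tutorial passes a chat model here so counts match its tokenizer.
    return sum(len(str(m.content)) // 4 + 3 for m in messages)


# Called without a messages argument, trim_messages returns a Runnable trimmer.
trimmer = trim_messages(
    max_tokens=65,  # illustrative budget; tune for your model
    strategy="last",  # keep the most recent messages
    token_counter=approx_token_counter,
    include_system=True,  # always keep the SystemMessage
    allow_partial=False,
    start_on="human",  # trimmed history must start on a HumanMessage
)
```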

File 1 of 2

@@ -53,7 +53,7 @@
 "\n",
 "To keep the most recent messages, we set `strategy=\"last\"`. We'll also set `include_system=True` to include the `SystemMessage`, and `start_on=\"human\"` to make sure the resulting chat history is valid. \n",
 "\n",
-"This is a good default configuration when using `trim_messages` based on token count. Remember to adjust `token_counter` and `max_tokens` for your use case.\n",
+"This is a good default configuration when using `trim_messages` based on token count. Remember to adjust `token_counter` and `max_tokens` for your use case. Keep in mind that new queries added to the chat history will be included in the token count unless you trim prior to adding the new query.\n",
 "\n",
 "Notice that for our `token_counter` we can pass in a function (more on that below) or a language model (since language models have a message token counting method). It makes sense to pass in a model when you're trimming your messages to fit into the context window of that specific model:"
 ]
@@ -525,7 +525,7 @@
 "id": "4d91d390-e7f7-467b-ad87-d100411d7a21",
 "metadata": {},
 "source": [
-"Looking at the LangSmith trace we can see that before the messages are passed to the model they are first trimmed: https://smith.langchain.com/public/65af12c4-c24d-4824-90f0-6547566e59bb/r\n",
+"Looking at [the LangSmith trace](https://smith.langchain.com/public/65af12c4-c24d-4824-90f0-6547566e59bb/r) we can see that before the messages are passed to the model they are first trimmed.\n",
 "\n",
 "Looking at just the trimmer, we can see that it's a Runnable object that can be invoked like all Runnables:"
 ]
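Since the trimmer is a Runnable, you can invoke it directly to see what survives a trim. A quick sketch, using the chat history that appears in the output cells below and assuming the `trimmer` configured in the sketch above:

```python
from langchain_core.messages import AIMessage, HumanMessage, SystemMessage

messages = [
    SystemMessage(content="you're a good assistant"),
    HumanMessage(content="whats 2 + 2"),
    AIMessage(content="4"),
    HumanMessage(content="thanks"),
    AIMessage(content="no problem!"),
    HumanMessage(content="having fun?"),
    AIMessage(content="yes!"),
]

# With strategy="last", the oldest non-system messages are dropped
# first once the token budget is exceeded.
print(trimmer.invoke(messages))
```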
@@ -620,7 +620,7 @@
 "id": "556b7b4c-43cb-41de-94fc-1a41f4ec4d2e",
 "metadata": {},
 "source": [
-"Looking at the LangSmith trace we can see that we retrieve all of our messages but before the messages are passed to the model they are trimmed to be just the system message and last human message: https://smith.langchain.com/public/17dd700b-9994-44ca-930c-116e00997315/r"
+"Looking at [the LangSmith trace](https://smith.langchain.com/public/17dd700b-9994-44ca-930c-116e00997315/r) we can see that we retrieve all of our messages but before the messages are passed to the model they are trimmed to be just the system message and last human message."
 ]
},
{
@@ -630,7 +630,7 @@
 "source": [
 "## API reference\n",
 "\n",
-"For a complete description of all arguments head to the API reference: https://python.langchain.com/api_reference/core/messages/langchain_core.messages.utils.trim_messages.html"
+"For a complete description of all arguments head to the [API reference](https://python.langchain.com/api_reference/core/messages/langchain_core.messages.utils.trim_messages.html)."
 ]
}
],

File 2 of 2

@@ -720,7 +720,7 @@
 " AIMessage(content='yes!', additional_kwargs={}, response_metadata={})]"
 ]
},
-"execution_count": 23,
+"execution_count": 109,
 "metadata": {},
 "output_type": "execute_result"
}
@@ -771,8 +771,13 @@
 "\n",
 "\n",
 "def call_model(state: State):\n",
+"    print(f\"Messages before trimming: {len(state['messages'])}\")\n",
 "    # highlight-start\n",
 "    trimmed_messages = trimmer.invoke(state[\"messages\"])\n",
+"    print(f\"Messages after trimming: {len(trimmed_messages)}\")\n",
+"    print(\"Remaining messages:\")\n",
+"    for msg in trimmed_messages:\n",
+"        print(f\"  {type(msg).__name__}: {msg.content}\")\n",
 "    prompt = prompt_template.invoke(\n",
 "        {\"messages\": trimmed_messages, \"language\": state[\"language\"]}\n",
 "    )\n",
@@ -792,7 +797,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Now if we try asking the model our name, it won't know it since we trimmed that part of the chat history:"
+"Now if we try asking the model our name, it won't know it since we trimmed that part of the chat history. (By defining our trim strategy as `'last'`, we are only keeping the most recent messages that fit within `max_tokens`.)"
 ]
},
{
@@ -804,9 +809,20 @@
 "name": "stdout",
 "output_type": "stream",
 "text": [
+"Messages before trimming: 12\n",
+"Messages after trimming: 8\n",
+"Remaining messages:\n",
+"  SystemMessage: you're a good assistant\n",
+"  HumanMessage: whats 2 + 2\n",
+"  AIMessage: 4\n",
+"  HumanMessage: thanks\n",
+"  AIMessage: no problem!\n",
+"  HumanMessage: having fun?\n",
+"  AIMessage: yes!\n",
+"  HumanMessage: What is my name?\n",
 "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
 "\n",
-"I don't know your name. You haven't told me yet!\n"
+"I don't know your name. If you'd like to share it, feel free!\n"
 ]
}
],
@@ -840,15 +856,27 @@
 "name": "stdout",
 "output_type": "stream",
 "text": [
+"Messages before trimming: 12\n",
+"Messages after trimming: 8\n",
+"Remaining messages:\n",
+"  SystemMessage: you're a good assistant\n",
+"  HumanMessage: whats 2 + 2\n",
+"  AIMessage: 4\n",
+"  HumanMessage: thanks\n",
+"  AIMessage: no problem!\n",
+"  HumanMessage: having fun?\n",
+"  AIMessage: yes!\n",
+"  HumanMessage: What math problem was asked?\n",
 "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
 "\n",
-"You asked what 2 + 2 equals.\n"
+"The math problem that was asked was \"what's 2 + 2.\"\n"
 ]
}
],
"source": [
 "config = {\"configurable\": {\"thread_id\": \"abc678\"}}\n",
-"query = \"What math problem did I ask?\"\n",
+"\n",
+"query = \"What math problem was asked?\"\n",
 "language = \"English\"\n",
 "\n",
 "input_messages = messages + [HumanMessage(query)]\n",
@@ -890,9 +918,9 @@
 "text": [
 "|Hi| Todd|!| Here|s| a| joke| for| you|:\n",
 "\n",
-"|Why| don|t| skeleton|s| fight| each| other|?\n",
+"|Why| don't| scientists| trust| atoms|?\n",
 "\n",
-"|Because| they| don|t| have| the| guts|!||"
+"|Because| they| make| up| everything|!||"
 ]
}
],