mirror of https://github.com/hwchase17/langchain.git, synced 2025-06-22 23:00:00 +00:00

docs: custom callback handlers page (#20494)

**Description:** Update to the Callbacks page on custom callback handlers
**Issue:** #20493
**Dependencies:** None

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>

parent 5da9dd1195
commit 9e694963a4
@@ -7,12 +7,22 @@
    "source": [
     "# Custom callback handlers\n",
     "\n",
-    "You can create a custom handler to set on the object as well. In the example below, we'll implement streaming with a custom handler."
+    "To create a custom callback handler we need to determine the [event(s)](/docs/modules/callbacks/) we want our callback handler to handle as well as what we want our callback handler to do when the event is triggered. Then all we need to do is attach the callback handler to the object, either as a constructor callback or a request callback (see [callback types](/docs/modules/callbacks/))."
    ]
   },
+  {
+   "cell_type": "markdown",
+   "id": "428d5e5f",
+   "metadata": {},
+   "source": [
+    "In the example below, we'll implement streaming with a custom handler.\n",
+    "\n",
+    "In our custom callback handler `MyCustomHandler`, we implement `on_llm_new_token` to print the token we have just received. We then attach our custom handler to the model object as a constructor callback."
+   ]
+  },
   {
    "cell_type": "code",
-   "execution_count": 3,
+   "execution_count": 5,
    "id": "ed9e8756",
    "metadata": {},
    "outputs": [
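The pattern the new docs text describes (pick the events to handle, override only those hooks, attach the handler at construction time) can be sketched without the LangChain packages at all. The sketch below is a minimal, dependency-free illustration of that pattern: `BaseCallbackHandler` here is a stand-in for the real `langchain_core.callbacks.BaseCallbackHandler`, and `FakeStreamingModel` is a hypothetical model substituting for `ChatOpenAI` so the example runs without an API key.

```python
# Minimal sketch of the callback-handler pattern from the notebook, with
# hypothetical stand-ins for the LangChain classes (no network access needed).

class BaseCallbackHandler:
    """Stand-in base class: every event hook is a no-op by default."""

    def on_llm_new_token(self, token, **kwargs):
        pass

    def on_llm_end(self, response, **kwargs):
        pass


class MyCustomHandler(BaseCallbackHandler):
    """Override only the event we care about: a new streamed token."""

    def __init__(self):
        self.seen = []

    def on_llm_new_token(self, token, **kwargs):
        self.seen.append(token)
        print(f"My custom handler, token: {token}")


class FakeStreamingModel:
    """Hypothetical model that streams canned tokens through its callbacks."""

    def __init__(self, callbacks=None):
        # Constructor callbacks: attached for the lifetime of the object.
        self.callbacks = callbacks or []

    def invoke(self, prompt):
        tokens = ["Why", " do", " bears", " have", " hairy", " coats", "?"]
        for tok in tokens:
            for cb in self.callbacks:
                cb.on_llm_new_token(tok)
        text = "".join(tokens)
        for cb in self.callbacks:
            cb.on_llm_end(text)
        return text


handler = MyCustomHandler()
model = FakeStreamingModel(callbacks=[handler])
result = model.invoke("Tell me a joke about bears")
```

The real notebook does the same thing with `ChatOpenAI(streaming=True, callbacks=[MyCustomHandler()])`; the only moving parts are which hooks the subclass overrides and where the handler list is attached.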
@@ -22,38 +32,25 @@
     "text": [
      "My custom handler, token: \n",
      "My custom handler, token: Why\n",
-     "My custom handler, token: don\n",
-     "My custom handler, token: 't\n",
-     "My custom handler, token: scientists\n",
-     "My custom handler, token: trust\n",
-     "My custom handler, token: atoms\n",
+     "My custom handler, token: do\n",
+     "My custom handler, token: bears\n",
+     "My custom handler, token: have\n",
+     "My custom handler, token: hairy\n",
+     "My custom handler, token: coats\n",
      "My custom handler, token: ?\n",
-     "My custom handler, token: \n",
-     "\n",
-     "\n",
-     "My custom handler, token: Because\n",
-     "My custom handler, token: they\n",
-     "My custom handler, token: make\n",
-     "My custom handler, token: up\n",
-     "My custom handler, token: everything\n",
-     "My custom handler, token: .\n",
+     "My custom handler, token: F\n",
+     "My custom handler, token: ur\n",
+     "My custom handler, token: protection\n",
+     "My custom handler, token: !\n",
      "My custom handler, token: \n"
     ]
-   },
-   {
-    "data": {
-     "text/plain": [
-      "AIMessage(content=\"Why don't scientists trust atoms? \\n\\nBecause they make up everything.\", additional_kwargs={}, example=False)"
-     ]
-    },
-    "execution_count": 3,
-    "metadata": {},
-    "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain_core.callbacks import BaseCallbackHandler\n",
    "from langchain_core.messages import HumanMessage\n",
    "from langchain_core.prompts import ChatPromptTemplate\n",
    "from langchain_openai import ChatOpenAI\n",
    "\n",
    "\n",
@@ -62,27 +59,23 @@
    "        print(f\"My custom handler, token: {token}\")\n",
    "\n",
    "\n",
-   "# To enable streaming, we pass in `streaming=True` to the ChatModel constructor\n",
-   "# Additionally, we pass in a list with our custom handler\n",
-   "chat = ChatOpenAI(max_tokens=25, streaming=True, callbacks=[MyCustomHandler()])\n",
+   "prompt = ChatPromptTemplate.from_messages([\"Tell me a joke about {animal}\"])\n",
    "\n",
-   "chat.invoke([HumanMessage(content=\"Tell me a joke\")])"
+   "# To enable streaming, we pass in `streaming=True` to the ChatModel constructor\n",
+   "# Additionally, we pass in our custom handler as a list to the callbacks parameter\n",
+   "model = ChatOpenAI(streaming=True, callbacks=[MyCustomHandler()])\n",
+   "\n",
+   "chain = prompt | model\n",
+   "\n",
+   "response = chain.invoke({\"animal\": \"bears\"})"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "67ef5548",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
-  "display_name": "venv",
+  "display_name": "Python 3 (ipykernel)",
   "language": "python",
-  "name": "venv"
+  "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
@@ -94,7 +87,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-  "version": "3.11.3"
+  "version": "3.9.1"
  }
 },
 "nbformat": 4,