docs: Fix classification notebook small mistake (#32636)

Fix a minor issue in the Classification notebook: some of the sample code
still uses a hard-coded OpenAI model instead of the chat model selected
earlier in the tutorial.

Specifically, on the page [Classify Text into
Labels](https://python.langchain.com/docs/tutorials/classification/),
we select a chat model at the start and initialize it with `init_chat_model`:
<img width="1262" height="576" alt="image"
src="https://github.com/user-attachments/assets/14eb436b-d2ef-4074-96d8-71640a13c0f7"
/>
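
For context, the model-selection cell looks roughly like the sketch below. The provider and model name are just the OpenAI example; the point of the tutorial is that `llm` is whichever chat model the reader picked.

```python
# Rough sketch of the tutorial's model-selection step (assumes the OpenAI
# provider was chosen; any supported provider works the same way).
from langchain.chat_models import init_chat_model

llm = init_chat_model("gpt-4o-mini", model_provider="openai")
```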

But the sample code that follows still uses a hard-coded OpenAI model, which
in my case is not runnable at all (I have no OpenAI API key):
<img width="1263" height="543" alt="image"
src="https://github.com/user-attachments/assets/d13846aa-1c4b-4dee-b9c1-c66570ba3461"
/>
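
The fix simply reuses the previously initialized `llm`. As a plain-Python sketch of the corrected cell (the prompt text and the `Classification` schema fields are paraphrased from the tutorial and may differ slightly):

```python
from langchain.chat_models import init_chat_model
from langchain_core.prompts import ChatPromptTemplate
from pydantic import BaseModel, Field

# The chat model selected earlier in the tutorial (example provider only).
llm = init_chat_model("gpt-4o-mini", model_provider="openai")

tagging_prompt = ChatPromptTemplate.from_template(
    """
Extract the desired information from the following passage.

Only extract the properties mentioned in the 'Classification' function.

Passage:
{input}
"""
)


# Schema paraphrased from the tutorial; field names may not match exactly.
class Classification(BaseModel):
    sentiment: str = Field(description="The sentiment of the text")
    aggressiveness: int = Field(description="How aggressive the text is, on a scale of 1 to 10")
    language: str = Field(description="The language the text is written in")


# The fix: reuse the selected `llm` instead of a hard-coded ChatOpenAI.
structured_llm = llm.with_structured_output(Classification)

inp = "Estoy increiblemente contento de haberte conocido! Creo que seremos muy buenos amigos!"
prompt = tagging_prompt.invoke({"input": inp})
structured_llm.invoke(prompt)
```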
Author: chen-assert
Date: 2025-09-11 10:43:44 +08:00
Committed by: GitHub
Parent: 653b0908af
Commit: d72da29c0b


@@ -95,7 +95,6 @@
 "outputs": [],
 "source": [
 "from langchain_core.prompts import ChatPromptTemplate\n",
-"from langchain_openai import ChatOpenAI\n",
 "from pydantic import BaseModel, Field\n",
 "\n",
 "tagging_prompt = ChatPromptTemplate.from_template(\n",
@@ -253,9 +252,7 @@
 "\"\"\"\n",
 ")\n",
 "\n",
-"llm = ChatOpenAI(temperature=0, model=\"gpt-4o-mini\").with_structured_output(\n",
-"    Classification\n",
-")"
+"structured_llm = llm.with_structured_output(Classification)"
 ]
 },
 {
@@ -286,7 +283,7 @@
 "source": [
 "inp = \"Estoy increiblemente contento de haberte conocido! Creo que seremos muy buenos amigos!\"\n",
 "prompt = tagging_prompt.invoke({\"input\": inp})\n",
-"llm.invoke(prompt)"
+"structured_llm.invoke(prompt)"
 ]
 },
 {
@@ -309,7 +306,7 @@
 "source": [
 "inp = \"Estoy muy enojado con vos! Te voy a dar tu merecido!\"\n",
 "prompt = tagging_prompt.invoke({\"input\": inp})\n",
-"llm.invoke(prompt)"
+"structured_llm.invoke(prompt)"
 ]
 },
 {
@@ -332,7 +329,7 @@
 "source": [
 "inp = \"Weather is ok here, I can go outside without much more than a coat\"\n",
 "prompt = tagging_prompt.invoke({\"input\": inp})\n",
-"llm.invoke(prompt)"
+"structured_llm.invoke(prompt)"
 ]
 },
 {