langchain[patch]: deprecate various chains (#25310)

- [x] NatbotChain: move to community, deprecate langchain version.
Update to use `prompt | llm | output_parser` instead of LLMChain.
- [x] LLMMathChain: deprecate + add langgraph replacement example to API
ref
- [x] HypotheticalDocumentEmbedder (retriever): update to use `prompt |
llm | output_parser` instead of LLMChain
- [x] FlareChain: update to use `prompt | llm | output_parser` instead
of LLMChain
- [x] ConstitutionalChain: deprecate + add langgraph replacement example
to API ref
- [x] LLMChainExtractor (document compressor): update to use `prompt |
llm | output_parser` instead of LLMChain
- [x] LLMChainFilter (document compressor): update to use `prompt | llm
| output_parser` instead of LLMChain
- [x] RePhraseQueryRetriever (retriever): update to use `prompt | llm |
output_parser` instead of LLMChain
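The `prompt | llm | output_parser` replacement named throughout this list is LCEL pipe composition. A minimal sketch of how that composition behaves, using a stubbed model and a hypothetical stand-in `Runnable` class rather than the real LangChain runnables (no langchain install or API key assumed):

```python
class Runnable:
    """Hypothetical stand-in for a LangChain runnable: supports `|` composition."""

    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # prompt | llm | parser runs left to right, each stage feeding the next
        return Runnable(lambda value: other.invoke(self.invoke(value)))


# Stubbed stand-ins for the three pipeline stages
prompt = Runnable(lambda q: f"Q: {q} A:")  # format the prompt
llm = Runnable(lambda p: p + " 42")  # stubbed model call
output_parser = Runnable(lambda text: text.split("A:")[-1].strip())

chain = prompt | llm | output_parser
print(chain.invoke("What is 6 x 7?"))  # prints "42"
```

Unlike `LLMChain`, there is no wrapper object to configure: the pipeline is just the composition of its stages, so any stage can be swapped independently.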
This commit is contained in:
ccurme
2024-08-15 10:49:26 -04:00
committed by GitHub
parent 66e30efa61
commit 8afbab4cf6
19 changed files with 1166 additions and 126 deletions


@@ -0,0 +1,332 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "b57124cc-60a0-4c18-b7ce-3e483d1024a2",
"metadata": {},
"source": [
"---\n",
"title: Migrating from ConstitutionalChain\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "ce8457ed-c0b1-4a74-abbd-9d3d2211270f",
"metadata": {},
"source": [
"[ConstitutionalChain](https://api.python.langchain.com/en/latest/chains/langchain.chains.constitutional_ai.base.ConstitutionalChain.html) allowed an LLM to critique and revise generations based on [principles](https://api.python.langchain.com/en/latest/chains/langchain.chains.constitutional_ai.models.ConstitutionalPrinciple.html), structured as combinations of critique and revision requests. For example, a principle might include a request to identify harmful content, and a request to rewrite the content.\n",
"\n",
"In `ConstitutionalChain`, this structure of critique requests and associated revisions was formatted into an LLM prompt and parsed out of string responses. This is more naturally achieved via the [structured output](/docs/how_to/structured_output/) features of chat models. We can construct a simple chain in [LangGraph](https://langchain-ai.github.io/langgraph/) for this purpose. Some advantages of this approach include:\n",
"\n",
"- Leverage tool-calling capabilities of chat models that have been fine-tuned for this purpose;\n",
"- Reduce parsing errors from extracting expressions out of string LLM responses;\n",
"- Delegate instructions to [message roles](/docs/concepts/#messages) (e.g., chat models can understand what a `ToolMessage` represents without additional prompting);\n",
"- Support streaming, both of individual tokens and of chain steps."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b99b47ec",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-openai"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "717c8673",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from getpass import getpass\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass()"
]
},
{
"cell_type": "markdown",
"id": "e3621b62-a037-42b8-8faa-59575608bb8b",
"metadata": {},
"source": [
"## Legacy\n",
"\n",
"<details open>"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "f91c9809-8ee7-4e38-881d-0ace4f6ea883",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import ConstitutionalChain, LLMChain\n",
"from langchain.chains.constitutional_ai.models import ConstitutionalPrinciple\n",
"from langchain_core.prompts import PromptTemplate\n",
"from langchain_openai import OpenAI\n",
"\n",
"llm = OpenAI()\n",
"\n",
"qa_prompt = PromptTemplate(\n",
" template=\"Q: {question} A:\",\n",
" input_variables=[\"question\"],\n",
")\n",
"qa_chain = LLMChain(llm=llm, prompt=qa_prompt)\n",
"\n",
"constitutional_chain = ConstitutionalChain.from_llm(\n",
" llm=llm,\n",
" chain=qa_chain,\n",
" constitutional_principles=[\n",
" ConstitutionalPrinciple(\n",
" critique_request=\"Tell if this answer is good.\",\n",
" revision_request=\"Give a better answer.\",\n",
" )\n",
" ],\n",
" return_intermediate_steps=True,\n",
")\n",
"\n",
"result = constitutional_chain.invoke(\"What is the meaning of life?\")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "fa3d11a1-ac1f-4a9a-9ab3-b7b244daa506",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'question': 'What is the meaning of life?',\n",
" 'output': 'The meaning of life is a deeply personal and ever-evolving concept. It is a journey of self-discovery and growth, and can be different for each individual. Some may find meaning in relationships, others in achieving their goals, and some may never find a concrete answer. Ultimately, the meaning of life is what we make of it.',\n",
" 'initial_output': ' The meaning of life is a subjective concept that can vary from person to person. Some may believe that the purpose of life is to find happiness and fulfillment, while others may see it as a journey of self-discovery and personal growth. Ultimately, the meaning of life is something that each individual must determine for themselves.',\n",
" 'critiques_and_revisions': [('This answer is good in that it recognizes and acknowledges the subjective nature of the question and provides a valid and thoughtful response. However, it could have also mentioned that the meaning of life is a complex and deeply personal concept that can also change and evolve over time for each individual. Critique Needed.',\n",
" 'The meaning of life is a deeply personal and ever-evolving concept. It is a journey of self-discovery and growth, and can be different for each individual. Some may find meaning in relationships, others in achieving their goals, and some may never find a concrete answer. Ultimately, the meaning of life is what we make of it.')]}"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result"
]
},
{
"cell_type": "markdown",
"id": "374ae108-f1a0-4723-9237-5259c8123c04",
"metadata": {},
"source": [
"Above, we've returned intermediate steps showing:\n",
"\n",
"- The original question;\n",
"- The initial output;\n",
"- Critiques and revisions;\n",
"- The final output (matching a revision)."
]
},
{
"cell_type": "markdown",
"id": "cdc3b527-c09e-4c77-9711-c3cc4506cd95",
"metadata": {},
"source": [
"</details>\n",
"\n",
"## LangGraph\n",
"\n",
"<details open>\n",
"\n",
"Below, we use the [.with_structured_output](/docs/how_to/structured_output/) method to simultaneously generate (1) a judgment of whether a critique is needed, and (2) the critique itself. We surface all prompts involved for clarity and ease of customization.\n",
"\n",
"Note that this implementation also lets us stream intermediate steps, so that we can monitor and, if needed, intervene during execution."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "917fdb73-2411-4fcc-9add-c32dc5c745da",
"metadata": {},
"outputs": [],
"source": [
"from typing import List, Optional, Tuple\n",
"\n",
"from langchain.chains.constitutional_ai.models import ConstitutionalPrinciple\n",
"from langchain.chains.constitutional_ai.prompts import (\n",
" CRITIQUE_PROMPT,\n",
" REVISION_PROMPT,\n",
")\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_openai import ChatOpenAI\n",
"from langgraph.graph import END, START, StateGraph\n",
"from typing_extensions import Annotated, TypedDict\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4o-mini\")\n",
"\n",
"\n",
"class Critique(TypedDict):\n",
" \"\"\"Generate a critique, if needed.\"\"\"\n",
"\n",
" critique_needed: Annotated[bool, ..., \"Whether or not a critique is needed.\"]\n",
" critique: Annotated[str, ..., \"If needed, the critique.\"]\n",
"\n",
"\n",
"critique_prompt = ChatPromptTemplate.from_template(\n",
" \"Critique this response according to the critique request. \"\n",
" \"If no critique is needed, specify that.\\n\\n\"\n",
" \"Query: {query}\\n\\n\"\n",
" \"Response: {response}\\n\\n\"\n",
" \"Critique request: {critique_request}\"\n",
")\n",
"\n",
"revision_prompt = ChatPromptTemplate.from_template(\n",
"    \"Revise this response according to the critique and revision request.\\n\\n\"\n",
" \"Query: {query}\\n\\n\"\n",
" \"Response: {response}\\n\\n\"\n",
" \"Critique request: {critique_request}\\n\\n\"\n",
" \"Critique: {critique}\\n\\n\"\n",
" \"If the critique does not identify anything worth changing, ignore the \"\n",
" \"revision request and return 'No revisions needed'. If the critique \"\n",
" \"does identify something worth changing, revise the response based on \"\n",
" \"the revision request.\\n\\n\"\n",
" \"Revision Request: {revision_request}\"\n",
")\n",
"\n",
"chain = llm | StrOutputParser()\n",
"critique_chain = critique_prompt | llm.with_structured_output(Critique)\n",
"revision_chain = revision_prompt | llm | StrOutputParser()\n",
"\n",
"\n",
"class State(TypedDict):\n",
" query: str\n",
" constitutional_principles: List[ConstitutionalPrinciple]\n",
" initial_response: str\n",
" critiques_and_revisions: List[Tuple[str, str]]\n",
" response: str\n",
"\n",
"\n",
"async def generate_response(state: State):\n",
" \"\"\"Generate initial response.\"\"\"\n",
" response = await chain.ainvoke(state[\"query\"])\n",
" return {\"response\": response, \"initial_response\": response}\n",
"\n",
"\n",
"async def critique_and_revise(state: State):\n",
" \"\"\"Critique and revise response according to principles.\"\"\"\n",
" critiques_and_revisions = []\n",
" response = state[\"initial_response\"]\n",
" for principle in state[\"constitutional_principles\"]:\n",
" critique = await critique_chain.ainvoke(\n",
" {\n",
" \"query\": state[\"query\"],\n",
" \"response\": response,\n",
" \"critique_request\": principle.critique_request,\n",
" }\n",
" )\n",
" if critique[\"critique_needed\"]:\n",
" revision = await revision_chain.ainvoke(\n",
" {\n",
" \"query\": state[\"query\"],\n",
" \"response\": response,\n",
" \"critique_request\": principle.critique_request,\n",
" \"critique\": critique[\"critique\"],\n",
" \"revision_request\": principle.revision_request,\n",
" }\n",
" )\n",
" response = revision\n",
" critiques_and_revisions.append((critique[\"critique\"], revision))\n",
" else:\n",
" critiques_and_revisions.append((critique[\"critique\"], \"\"))\n",
" return {\n",
" \"critiques_and_revisions\": critiques_and_revisions,\n",
" \"response\": response,\n",
" }\n",
"\n",
"\n",
"graph = StateGraph(State)\n",
"graph.add_node(\"generate_response\", generate_response)\n",
"graph.add_node(\"critique_and_revise\", critique_and_revise)\n",
"\n",
"graph.add_edge(START, \"generate_response\")\n",
"graph.add_edge(\"generate_response\", \"critique_and_revise\")\n",
"graph.add_edge(\"critique_and_revise\", END)\n",
"app = graph.compile()"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "01aac88d-464e-431f-b92e-746dcb743e1b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{}\n",
"{'initial_response': 'Finding purpose, connection, and joy in our experiences and relationships.', 'response': 'Finding purpose, connection, and joy in our experiences and relationships.'}\n",
"{'initial_response': 'Finding purpose, connection, and joy in our experiences and relationships.', 'critiques_and_revisions': [(\"The response exceeds the 10-word limit, providing a more elaborate answer than requested. A concise response, such as 'To seek purpose and joy in life,' would better align with the query.\", 'To seek purpose and joy in life.')], 'response': 'To seek purpose and joy in life.'}\n"
]
}
],
"source": [
"constitutional_principles = [\n",
" ConstitutionalPrinciple(\n",
" critique_request=\"Tell if this answer is good.\",\n",
" revision_request=\"Give a better answer.\",\n",
" )\n",
"]\n",
"\n",
"query = \"What is the meaning of life? Answer in 10 words or fewer.\"\n",
"\n",
"async for step in app.astream(\n",
" {\"query\": query, \"constitutional_principles\": constitutional_principles},\n",
" stream_mode=\"values\",\n",
"):\n",
" subset = [\"initial_response\", \"critiques_and_revisions\", \"response\"]\n",
" print({k: v for k, v in step.items() if k in subset})"
]
},
{
"cell_type": "markdown",
"id": "b2717810",
"metadata": {},
"source": [
"</details>\n",
"\n",
"## Next steps\n",
"\n",
"See guides for generating structured output [here](/docs/how_to/structured_output/).\n",
"\n",
"Check out the [LangGraph documentation](https://langchain-ai.github.io/langgraph/) for detail on building with LangGraph."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
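The critique-and-revise loop in the notebook above reduces to a simple control flow: generate once, then for each principle critique the current response and revise only when the structured critique says a revision is needed. A sketch of that flow in plain Python, with stubbed functions standing in for the three chains (no langgraph, langchain, or API key assumed; the stub outputs are illustrative, not real model responses):

```python
# Stubbed stand-ins for the three chains in the notebook above.
def generate(query):
    return "Life is about everything and nothing."


def critique(query, response, critique_request):
    # Mimics structured output: a judgment flag plus the critique text.
    return {"critique_needed": True, "critique": "Too vague."}


def revise(query, response, critique_text, revision_request):
    return "To seek purpose and joy in life."


def critique_and_revise(query, principles):
    """Apply each principle in turn, revising only when a critique is needed."""
    response = generate(query)
    critiques_and_revisions = []
    for principle in principles:
        result = critique(query, response, principle["critique_request"])
        if result["critique_needed"]:
            response = revise(
                query, response, result["critique"], principle["revision_request"]
            )
            critiques_and_revisions.append((result["critique"], response))
        else:
            critiques_and_revisions.append((result["critique"], ""))
    return {"critiques_and_revisions": critiques_and_revisions, "response": response}


principles = [
    {
        "critique_request": "Tell if this answer is good.",
        "revision_request": "Give a better answer.",
    }
]
print(critique_and_revise("What is the meaning of life?", principles))
```

The graph version adds to this loop what plain functions lack: streaming of intermediate state after each node and a natural place to pause or intervene between steps.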


@@ -45,5 +45,7 @@ The below pages assist with migration from various specific chains to LCEL and L
- [RefineDocumentsChain](/docs/versions/migrating_chains/refine_docs_chain)
- [LLMRouterChain](/docs/versions/migrating_chains/llm_router_chain)
- [MultiPromptChain](/docs/versions/migrating_chains/multi_prompt_chain)
- [LLMMathChain](/docs/versions/migrating_chains/llm_math_chain)
- [ConstitutionalChain](/docs/versions/migrating_chains/constitutional_chain)
Check out the [LCEL conceptual docs](/docs/concepts/#langchain-expression-language-lcel) and [LangGraph docs](https://langchain-ai.github.io/langgraph/) for more background information.
