diff --git a/docs/docs/versions/migrating_chains/constitutional_chain.ipynb b/docs/docs/versions/migrating_chains/constitutional_chain.ipynb
new file mode 100644
index 00000000000..c3729b67c5a
--- /dev/null
+++ b/docs/docs/versions/migrating_chains/constitutional_chain.ipynb
@@ -0,0 +1,332 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "b57124cc-60a0-4c18-b7ce-3e483d1024a2",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "title: Migrating from ConstitutionalChain\n",
+ "---"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "ce8457ed-c0b1-4a74-abbd-9d3d2211270f",
+ "metadata": {},
+ "source": [
+ "[ConstitutionalChain](https://api.python.langchain.com/en/latest/chains/langchain.chains.constitutional_ai.base.ConstitutionalChain.html) allowed for a LLM to critique and revise generations based on [principles](https://api.python.langchain.com/en/latest/chains/langchain.chains.constitutional_ai.models.ConstitutionalPrinciple.html), structured as combinations of critique and revision requests. For example, a principle might include a request to identify harmful content, and a request to rewrite the content.\n",
+ "\n",
+ "In `ConstitutionalChain`, this structure of critique requests and associated revisions was formatted into a LLM prompt and parsed out of string responses. This is more naturally achieved via [structured output](/docs/how_to/structured_output/) features of chat models. We can construct a simple chain in [LangGraph](https://langchain-ai.github.io/langgraph/) for this purpose. Some advantages of this approach include:\n",
+ "\n",
+ "- Leverage tool-calling capabilities of chat models that have been fine-tuned for this purpose;\n",
+ "- Reduce parsing errors from extracting expression from a string LLM response;\n",
+ "- Delegation of instructions to [message roles](/docs/concepts/#messages) (e.g., chat models can understand what a `ToolMessage` represents without the need for additional prompting);\n",
+ "- Support for streaming, both of individual tokens and chain steps."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "b99b47ec",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "%pip install --upgrade --quiet langchain-openai"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "id": "717c8673",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import os\n",
+ "from getpass import getpass\n",
+ "\n",
+ "os.environ[\"OPENAI_API_KEY\"] = getpass()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "e3621b62-a037-42b8-8faa-59575608bb8b",
+ "metadata": {},
+ "source": [
+ "## Legacy\n",
+ "\n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "id": "f91c9809-8ee7-4e38-881d-0ace4f6ea883",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from langchain.chains import ConstitutionalChain, LLMChain\n",
+ "from langchain.chains.constitutional_ai.models import ConstitutionalPrinciple\n",
+ "from langchain_core.prompts import PromptTemplate\n",
+ "from langchain_openai import OpenAI\n",
+ "\n",
+ "llm = OpenAI()\n",
+ "\n",
+ "qa_prompt = PromptTemplate(\n",
+ " template=\"Q: {question} A:\",\n",
+ " input_variables=[\"question\"],\n",
+ ")\n",
+ "qa_chain = LLMChain(llm=llm, prompt=qa_prompt)\n",
+ "\n",
+ "constitutional_chain = ConstitutionalChain.from_llm(\n",
+ " llm=llm,\n",
+ " chain=qa_chain,\n",
+ " constitutional_principles=[\n",
+ " ConstitutionalPrinciple(\n",
+ " critique_request=\"Tell if this answer is good.\",\n",
+ " revision_request=\"Give a better answer.\",\n",
+ " )\n",
+ " ],\n",
+ " return_intermediate_steps=True,\n",
+ ")\n",
+ "\n",
+ "result = constitutional_chain.invoke(\"What is the meaning of life?\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "id": "fa3d11a1-ac1f-4a9a-9ab3-b7b244daa506",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "{'question': 'What is the meaning of life?',\n",
+ " 'output': 'The meaning of life is a deeply personal and ever-evolving concept. It is a journey of self-discovery and growth, and can be different for each individual. Some may find meaning in relationships, others in achieving their goals, and some may never find a concrete answer. Ultimately, the meaning of life is what we make of it.',\n",
+ " 'initial_output': ' The meaning of life is a subjective concept that can vary from person to person. Some may believe that the purpose of life is to find happiness and fulfillment, while others may see it as a journey of self-discovery and personal growth. Ultimately, the meaning of life is something that each individual must determine for themselves.',\n",
+ " 'critiques_and_revisions': [('This answer is good in that it recognizes and acknowledges the subjective nature of the question and provides a valid and thoughtful response. However, it could have also mentioned that the meaning of life is a complex and deeply personal concept that can also change and evolve over time for each individual. Critique Needed.',\n",
+ " 'The meaning of life is a deeply personal and ever-evolving concept. It is a journey of self-discovery and growth, and can be different for each individual. Some may find meaning in relationships, others in achieving their goals, and some may never find a concrete answer. Ultimately, the meaning of life is what we make of it.')]}"
+ ]
+ },
+ "execution_count": 4,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "result"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "374ae108-f1a0-4723-9237-5259c8123c04",
+ "metadata": {},
+ "source": [
+ "Above, we've returned intermediate steps showing:\n",
+ "\n",
+ "- The original question;\n",
+ "- The initial output;\n",
+ "- Critiques and revisions;\n",
+ "- The final output (matching a revision)."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "cdc3b527-c09e-4c77-9711-c3cc4506cd95",
+ "metadata": {},
+ "source": [
+ " \n",
+ "\n",
+ "## LangGraph\n",
+ "\n",
+ "\n",
+ "\n",
+ "Below, we use the [.with_structured_output](/docs/how_to/structured_output/) method to simultaneously generate (1) a judgment of whether a critique is needed, and (2) the critique. We surface all prompts involved for clarity and ease of customizability.\n",
+ "\n",
+ "Note that we are also able to stream intermediate steps with this implementation, so we can monitor and if needed intervene during its execution."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "id": "917fdb73-2411-4fcc-9add-c32dc5c745da",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from typing import List, Optional, Tuple\n",
+ "\n",
+ "from langchain.chains.constitutional_ai.models import ConstitutionalPrinciple\n",
+ "from langchain.chains.constitutional_ai.prompts import (\n",
+ " CRITIQUE_PROMPT,\n",
+ " REVISION_PROMPT,\n",
+ ")\n",
+ "from langchain_core.output_parsers import StrOutputParser\n",
+ "from langchain_core.prompts import ChatPromptTemplate\n",
+ "from langchain_openai import ChatOpenAI\n",
+ "from langgraph.graph import END, START, StateGraph\n",
+ "from typing_extensions import Annotated, TypedDict\n",
+ "\n",
+ "llm = ChatOpenAI(model=\"gpt-4o-mini\")\n",
+ "\n",
+ "\n",
+ "class Critique(TypedDict):\n",
+ " \"\"\"Generate a critique, if needed.\"\"\"\n",
+ "\n",
+ " critique_needed: Annotated[bool, ..., \"Whether or not a critique is needed.\"]\n",
+ " critique: Annotated[str, ..., \"If needed, the critique.\"]\n",
+ "\n",
+ "\n",
+ "critique_prompt = ChatPromptTemplate.from_template(\n",
+ " \"Critique this response according to the critique request. \"\n",
+ " \"If no critique is needed, specify that.\\n\\n\"\n",
+ " \"Query: {query}\\n\\n\"\n",
+ " \"Response: {response}\\n\\n\"\n",
+ " \"Critique request: {critique_request}\"\n",
+ ")\n",
+ "\n",
+ "revision_prompt = ChatPromptTemplate.from_template(\n",
+ " \"Revise this response according to the critique and reivsion request.\\n\\n\"\n",
+ " \"Query: {query}\\n\\n\"\n",
+ " \"Response: {response}\\n\\n\"\n",
+ " \"Critique request: {critique_request}\\n\\n\"\n",
+ " \"Critique: {critique}\\n\\n\"\n",
+ " \"If the critique does not identify anything worth changing, ignore the \"\n",
+ " \"revision request and return 'No revisions needed'. If the critique \"\n",
+ " \"does identify something worth changing, revise the response based on \"\n",
+ " \"the revision request.\\n\\n\"\n",
+ " \"Revision Request: {revision_request}\"\n",
+ ")\n",
+ "\n",
+ "chain = llm | StrOutputParser()\n",
+ "critique_chain = critique_prompt | llm.with_structured_output(Critique)\n",
+ "revision_chain = revision_prompt | llm | StrOutputParser()\n",
+ "\n",
+ "\n",
+ "class State(TypedDict):\n",
+ " query: str\n",
+ " constitutional_principles: List[ConstitutionalPrinciple]\n",
+ " initial_response: str\n",
+ " critiques_and_revisions: List[Tuple[str, str]]\n",
+ " response: str\n",
+ "\n",
+ "\n",
+ "async def generate_response(state: State):\n",
+ " \"\"\"Generate initial response.\"\"\"\n",
+ " response = await chain.ainvoke(state[\"query\"])\n",
+ " return {\"response\": response, \"initial_response\": response}\n",
+ "\n",
+ "\n",
+ "async def critique_and_revise(state: State):\n",
+ " \"\"\"Critique and revise response according to principles.\"\"\"\n",
+ " critiques_and_revisions = []\n",
+ " response = state[\"initial_response\"]\n",
+ " for principle in state[\"constitutional_principles\"]:\n",
+ " critique = await critique_chain.ainvoke(\n",
+ " {\n",
+ " \"query\": state[\"query\"],\n",
+ " \"response\": response,\n",
+ " \"critique_request\": principle.critique_request,\n",
+ " }\n",
+ " )\n",
+ " if critique[\"critique_needed\"]:\n",
+ " revision = await revision_chain.ainvoke(\n",
+ " {\n",
+ " \"query\": state[\"query\"],\n",
+ " \"response\": response,\n",
+ " \"critique_request\": principle.critique_request,\n",
+ " \"critique\": critique[\"critique\"],\n",
+ " \"revision_request\": principle.revision_request,\n",
+ " }\n",
+ " )\n",
+ " response = revision\n",
+ " critiques_and_revisions.append((critique[\"critique\"], revision))\n",
+ " else:\n",
+ " critiques_and_revisions.append((critique[\"critique\"], \"\"))\n",
+ " return {\n",
+ " \"critiques_and_revisions\": critiques_and_revisions,\n",
+ " \"response\": response,\n",
+ " }\n",
+ "\n",
+ "\n",
+ "graph = StateGraph(State)\n",
+ "graph.add_node(\"generate_response\", generate_response)\n",
+ "graph.add_node(\"critique_and_revise\", critique_and_revise)\n",
+ "\n",
+ "graph.add_edge(START, \"generate_response\")\n",
+ "graph.add_edge(\"generate_response\", \"critique_and_revise\")\n",
+ "graph.add_edge(\"critique_and_revise\", END)\n",
+ "app = graph.compile()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "id": "01aac88d-464e-431f-b92e-746dcb743e1b",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "{}\n",
+ "{'initial_response': 'Finding purpose, connection, and joy in our experiences and relationships.', 'response': 'Finding purpose, connection, and joy in our experiences and relationships.'}\n",
+ "{'initial_response': 'Finding purpose, connection, and joy in our experiences and relationships.', 'critiques_and_revisions': [(\"The response exceeds the 10-word limit, providing a more elaborate answer than requested. A concise response, such as 'To seek purpose and joy in life,' would better align with the query.\", 'To seek purpose and joy in life.')], 'response': 'To seek purpose and joy in life.'}\n"
+ ]
+ }
+ ],
+ "source": [
+ "constitutional_principles = [\n",
+ " ConstitutionalPrinciple(\n",
+ " critique_request=\"Tell if this answer is good.\",\n",
+ " revision_request=\"Give a better answer.\",\n",
+ " )\n",
+ "]\n",
+ "\n",
+ "query = \"What is the meaning of life? Answer in 10 words or fewer.\"\n",
+ "\n",
+ "async for step in app.astream(\n",
+ " {\"query\": query, \"constitutional_principles\": constitutional_principles},\n",
+ " stream_mode=\"values\",\n",
+ "):\n",
+ " subset = [\"initial_response\", \"critiques_and_revisions\", \"response\"]\n",
+ " print({k: v for k, v in step.items() if k in subset})"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b2717810",
+ "metadata": {},
+ "source": [
+ " \n",
+ "\n",
+ "## Next steps\n",
+ "\n",
+ "See guides for generating structured output [here](/docs/how_to/structured_output/).\n",
+ "\n",
+ "Check out the [LangGraph documentation](https://langchain-ai.github.io/langgraph/) for detail on building with LangGraph."
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.4"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/docs/docs/versions/migrating_chains/index.mdx b/docs/docs/versions/migrating_chains/index.mdx
index 4f809972e0a..69f6e24c8ef 100644
--- a/docs/docs/versions/migrating_chains/index.mdx
+++ b/docs/docs/versions/migrating_chains/index.mdx
@@ -45,5 +45,7 @@ The below pages assist with migration from various specific chains to LCEL and L
- [RefineDocumentsChain](/docs/versions/migrating_chains/refine_docs_chain)
- [LLMRouterChain](/docs/versions/migrating_chains/llm_router_chain)
- [MultiPromptChain](/docs/versions/migrating_chains/multi_prompt_chain)
+- [LLMMathChain](/docs/versions/migrating_chains/llm_math_chain)
+- [ConstitutionalChain](/docs/versions/migrating_chains/constitutional_chain)
Check out the [LCEL conceptual docs](/docs/concepts/#langchain-expression-language-lcel) and [LangGraph docs](https://langchain-ai.github.io/langgraph/) for more background information.
\ No newline at end of file
diff --git a/docs/docs/versions/migrating_chains/llm_math_chain.ipynb b/docs/docs/versions/migrating_chains/llm_math_chain.ipynb
new file mode 100644
index 00000000000..87f2511085e
--- /dev/null
+++ b/docs/docs/versions/migrating_chains/llm_math_chain.ipynb
@@ -0,0 +1,281 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "b57124cc-60a0-4c18-b7ce-3e483d1024a2",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "title: Migrating from LLMMathChain\n",
+ "---"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "ce8457ed-c0b1-4a74-abbd-9d3d2211270f",
+ "metadata": {},
+ "source": [
+ "[`LLMMathChain`](https://api.python.langchain.com/en/latest/chains/langchain.chains.llm_math.base.LLMMathChain.html) enabled the evaluation of mathematical expressions generated by a LLM. Instructions for generating the expressions were formatted into the prompt, and the expressions were parsed out of the string response before evaluation using the [numexpr](https://numexpr.readthedocs.io/en/latest/user_guide.html) library.\n",
+ "\n",
+ "This is more naturally achieved via [tool calling](/docs/concepts/#functiontool-calling). We can equip a chat model with a simple calculator tool leveraging `numexpr` and construct a simple chain around it using [LangGraph](https://langchain-ai.github.io/langgraph/). Some advantages of this approach include:\n",
+ "\n",
+ "- Leverage tool-calling capabilities of chat models that have been fine-tuned for this purpose;\n",
+ "- Reduce parsing errors from extracting expression from a string LLM response;\n",
+ "- Delegation of instructions to [message roles](/docs/concepts/#messages) (e.g., chat models can understand what a `ToolMessage` represents without the need for additional prompting);\n",
+ "- Support for streaming, both of individual tokens and chain steps."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "b99b47ec",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "%pip install --upgrade --quiet numexpr"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "id": "717c8673",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import os\n",
+ "from getpass import getpass\n",
+ "\n",
+ "os.environ[\"OPENAI_API_KEY\"] = getpass()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "e3621b62-a037-42b8-8faa-59575608bb8b",
+ "metadata": {},
+ "source": [
+ "## Legacy\n",
+ "\n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "id": "f91c9809-8ee7-4e38-881d-0ace4f6ea883",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "{'question': 'What is 551368 divided by 82?', 'answer': 'Answer: 6724.0'}"
+ ]
+ },
+ "execution_count": 1,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "from langchain.chains import LLMMathChain\n",
+ "from langchain_core.prompts import ChatPromptTemplate\n",
+ "from langchain_openai import ChatOpenAI\n",
+ "\n",
+ "llm = ChatOpenAI(model=\"gpt-4o-mini\")\n",
+ "\n",
+ "chain = LLMMathChain.from_llm(llm)\n",
+ "\n",
+ "chain.invoke(\"What is 551368 divided by 82?\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "cdc3b527-c09e-4c77-9711-c3cc4506cd95",
+ "metadata": {},
+ "source": [
+ " \n",
+ "\n",
+ "## LangGraph\n",
+ "\n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "id": "f0903025-9aa8-4a53-8336-074341c00e59",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import math\n",
+ "from typing import Annotated, Sequence\n",
+ "\n",
+ "import numexpr\n",
+ "from langchain_core.messages import BaseMessage\n",
+ "from langchain_core.runnables import RunnableConfig\n",
+ "from langchain_core.tools import tool\n",
+ "from langchain_openai import ChatOpenAI\n",
+ "from langgraph.graph import END, StateGraph\n",
+ "from langgraph.graph.message import add_messages\n",
+ "from langgraph.prebuilt.tool_node import ToolNode\n",
+ "from typing_extensions import TypedDict\n",
+ "\n",
+ "\n",
+ "@tool\n",
+ "def calculator(expression: str) -> str:\n",
+ " \"\"\"Calculate expression using Python's numexpr library.\n",
+ "\n",
+ " Expression should be a single line mathematical expression\n",
+ " that solves the problem.\n",
+ "\n",
+ " Examples:\n",
+ " \"37593 * 67\" for \"37593 times 67\"\n",
+ " \"37593**(1/5)\" for \"37593^(1/5)\"\n",
+ " \"\"\"\n",
+ " local_dict = {\"pi\": math.pi, \"e\": math.e}\n",
+ " return str(\n",
+ " numexpr.evaluate(\n",
+ " expression.strip(),\n",
+ " global_dict={}, # restrict access to globals\n",
+ " local_dict=local_dict, # add common mathematical functions\n",
+ " )\n",
+ " )\n",
+ "\n",
+ "\n",
+ "llm = ChatOpenAI(model=\"gpt-4o-mini\", temperature=0)\n",
+ "tools = [calculator]\n",
+ "llm_with_tools = llm.bind_tools(tools, tool_choice=\"any\")\n",
+ "\n",
+ "\n",
+ "class ChainState(TypedDict):\n",
+ " \"\"\"LangGraph state.\"\"\"\n",
+ "\n",
+ " messages: Annotated[Sequence[BaseMessage], add_messages]\n",
+ "\n",
+ "\n",
+ "async def acall_chain(state: ChainState, config: RunnableConfig):\n",
+ " last_message = state[\"messages\"][-1]\n",
+ " response = await llm_with_tools.ainvoke(state[\"messages\"], config)\n",
+ " return {\"messages\": [response]}\n",
+ "\n",
+ "\n",
+ "async def acall_model(state: ChainState, config: RunnableConfig):\n",
+ " response = await llm.ainvoke(state[\"messages\"], config)\n",
+ " return {\"messages\": [response]}\n",
+ "\n",
+ "\n",
+ "graph_builder = StateGraph(ChainState)\n",
+ "graph_builder.add_node(\"call_tool\", acall_chain)\n",
+ "graph_builder.add_node(\"execute_tool\", ToolNode(tools))\n",
+ "graph_builder.add_node(\"call_model\", acall_model)\n",
+ "graph_builder.set_entry_point(\"call_tool\")\n",
+ "graph_builder.add_edge(\"call_tool\", \"execute_tool\")\n",
+ "graph_builder.add_edge(\"execute_tool\", \"call_model\")\n",
+ "graph_builder.add_edge(\"call_model\", END)\n",
+ "chain = graph_builder.compile()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "id": "d0a8a81a-328b-497d-956b-4d16b2efea0e",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "image/jpeg": "/9j/4AAQSkZJRgABAQAAAQABAAD/4gHYSUNDX1BST0ZJTEUAAQEAAAHIAAAAAAQwAABtbnRyUkdCIFhZWiAH4AABAAEAAAAAAABhY3NwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQAA9tYAAQAAAADTLQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAlkZXNjAAAA8AAAACRyWFlaAAABFAAAABRnWFlaAAABKAAAABRiWFlaAAABPAAAABR3dHB0AAABUAAAABRyVFJDAAABZAAAAChnVFJDAAABZAAAAChiVFJDAAABZAAAAChjcHJ0AAABjAAAADxtbHVjAAAAAAAAAAEAAAAMZW5VUwAAAAgAAAAcAHMAUgBHAEJYWVogAAAAAAAAb6IAADj1AAADkFhZWiAAAAAAAABimQAAt4UAABjaWFlaIAAAAAAAACSgAAAPhAAAts9YWVogAAAAAAAA9tYAAQAAAADTLXBhcmEAAAAAAAQAAAACZmYAAPKnAAANWQAAE9AAAApbAAAAAAAAAABtbHVjAAAAAAAAAAEAAAAMZW5VUwAAACAAAAAcAEcAbwBvAGcAbABlACAASQBuAGMALgAgADIAMAAxADb/2wBDAAMCAgMCAgMDAwMEAwMEBQgFBQQEBQoHBwYIDAoMDAsKCwsNDhIQDQ4RDgsLEBYQERMUFRUVDA8XGBYUGBIUFRT/2wBDAQMEBAUEBQkFBQkUDQsNFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBT/wAARCAGDAH0DASIAAhEBAxEB/8QAHQABAAICAwEBAAAAAAAAAAAAAAYIBQcCAwQJAf/EAFQQAAEDAwEEAwgMCQkHBQAAAAECAwQABREGBxITIQgWMRQVIkFRVpTTFyM3VWF2k5Wz0dLUMjZTVHFzgZGSCUJSYnR1obGyJDM1Q3J3tSVEg6TB/8QAGgEBAQADAQEAAAAAAAAAAAAAAAECAwQFBv/EADYRAAIAAwMJBQcFAQAAAAAAAAABAgMREiGRBBMUMUFRUmHRBSNxocEzQlOBorHhFSIy4vCS/9oADAMBAAIRAxEAPwD6p0pSgFcXHENIK1qCEDmVKOAKxF8vL8Z9m3W1pEi6yAVJDueFHbHa67jnujsCRzWrkMDeUnxo0FbZKw/eArUEvJPEuQDiE58SGsbiB4uSc+Uk5NboYIUrUbp9y03mSVqizIUQq7QUkdoMlH11+darL78QPSUfXX4nSdjQkJTZrelI7AIqMD/Cv3qrZfeeB6Mj6qy7nn5C4darL78QPSUfXTrVZffiB6Sj66dVbL7zwPRkfVTqrZfeeB6Mj6qdzz8i3DrVZffiB6Sj66darL78QPSUfXTqrZfeeB6Mj6qdVbL7zwPRkfVTuefkLh1qsvvxA9JR9dd8W926csIjT4shZ/mtPJUf8DXR1VsvvPA9GR9VdEvRGnZyNyRYba8nGBvxGzj9Bxyp3PPyJcZulRZy0TNJpMm0Lkz7egDiWh1ziEJHaWFq8IK/qKUUnGBuZzUhgT2LnDZlxXA9HeSFoWOWR+g8wfgPMVhFBRWoXVf7WKHopSlaiClKUBGND4uKLne14U9PluoSrnkMMrU20n9HgqVjyuK8tSeoxs7Hc+nlwVZDsGZJjLBGOx5RSf2oUg/oNSeujKPaxLZs8NnkV6xWJ1Xqu0aG07Pv1+ntWy0QW+LIlPZ3UJyAOzJJJIAABJJAHM1lqhO2m02i+7L7/AvtkueobU80gPW+ytlcxftiSlTISQStCgFjBz4HLPYechCtc9KjS+mtFWrUlqbnXmHMv0WyOJNtmNOMKcWjiKU2Wd/eS2sKSgpBWSEpyTipJqfpC6E0bb7VNvV1lwGbnHMuOly0zC4GR+EtxsMlbQGeZcCceOtGzG9oeptkEuRPteo9QQtN6ytdys/fa3CPe7hbY77Dru/HASVOJIcCSUpUsJzjJ5yDaVqW/a11fbHZFo2iR9CTbGtUK36fhSIUp65cdxCkTSndcZRww2UBxSGzvqKjyxQG3dT7dNDaQFgNyvqc3+MuZaRDjPSzOaQGyotBlCis4dbISOZBJAIBxHNM9JGyak2w3bQzcG5MmPEgvxZirXMAeW+l1akuZYCWAlKEYU4oBRUoDmkgav2JaLv0K49HU3PT1ziL0/pm8wJypcNxAhSAqM0lK1EYTvhDm4c+GnJTkVP2ZFw0N0ntRzZenr1OtOrLXaYsO522CuTGYeYckJcTIWkHggB5Ct5WBjPPligN4UpSgFRjTuLXqm/WdG6mOeFc2UDPgcdTgcH7XGlrPwuGpPUZtqe69oN7kpzw4sKLDJIwOJvOuqGfHhK2v31vl/xjXL1RVtJNSlK0EFKUoCOXOM9p+7vXuGw5JiyEpTcYrCSt07owh9tI/CUkclJHhKSE7uSgJXxv2l9J7VLJHavNrtWqbUl3jtNzGUSWkuAFO8AoEBQClDyjJFSWsDcdE2ufLXMQl+3znDlcm3SFx1rOMZXuEBZx/TB7B5BW9RQxpKO5rb1Lr1kVHRt2UJCgNm+lgFDBAtLHMdv9H4BWU0zsW0Bou7t3WwaLsVluTaVJRLgW9pl1IUMKAUlIIyOVezqQ+OSdU35I8Q47R/xLeadSZHnVfvlmfVVc3L4/Jii3kopUX6kyPOq/fLM+qrUvSUvWodk2jLJdLJqe6LkzNQQLY4Jamlp4Tzm6vADY8LHYf8KZuXx+TFFvLBV558CNdYMmFMYblQ5LamXmHkhSHEKBCkqB5EEEgj4aj/UmR51X75Zn1VOpMjzqv3yzPqqZuXx+TFFvMA30btlLS0rRs40uhaSClSbSwCD5R4NGujfsqYdQ43s50uhxBCkqTaWAQR2EeDWf6kyPOq/fLM+qp1ED2Eyr/fZTeMFBm8HI/S0EH/GliXx+TFFvPdetRphPd74CET704nLUNK8BAPY46RnhtjxqIycYSFKwD36fsqbFb+BxTIkOuLfkSCMF11Z3lKxk4GTgDJwkJHYK7LRY4Fhjli3xW4zajvL3B4S1f0lKPNR+EkmvfWMUSSsQavuPAUpStJBSlKAUpSgFKUoBVd+nD7mOlvjhaPpqsRVd+nD7mOlvjhaPpqAsRSlKAUpSgFKUoBSlKAUpSgFKUoBSlKAVXfpw+5jpb44Wj6arEVXfpw+5jpb44Wj6agLEUpSgFKUoBSlKAUpSgFKUoBSuDrqGG1uOLS22gFSlqOAkDtJNQ3rdfrqhMm0WuEi3uAKZcuMlxt11J7FFtLZ3AeRAJzg8wk8q3S5UU2tktKk1pUI7+6w/MLH6W96unf3WH5hY/S3vV1u0WPesUKE3r4v9N3YUdhu3G5sQo3B03eiblaykeAhCj7YyOWBuL3gB27pQT219ae/usPzCx+lverrUfSM2DzeknYbNbr9GtUJy1zkymZcWS6XeGcB1nJb5JWkDmOwpSeeMFose9YoUIl/JobGpGz3Y3M1XPS
tqfq9xuQhlWRuRGd9LBx5VFxxefGlSKuBUCgz9UWyFHhxLTYI0WO2llllqS8lDaEjCUgBvkAABiu7v7rD8wsfpb3q6aLHvWKFCb0qEd/dYfmFj9Le9XTv7rD8wsfpb3q6aLHvWKFCb0qOWDVEiZONtu0NuBcSgutcB4usvoBAUUqKUkKGRlJHjGCoZxI6544IpbpENQpSlayClKUBgNoCijQepFA4Itskg/wDxKrx20AW6KAMANJ5D9Ar17QvxB1L/AHZJ+iVXhiupYtLLqslKGAo4GTgJr0ZXsfm/sjLYeulaS2Iz9d7TbJYdoVy1iiFaLsFy06Wi21hUdqMSoNoL5HFLgG6pSt7GcgJHbUI0ztn1UradpJ+NfrtqnQ2pLu/a0S5tjiwoJ9qeW2qI4lfHXulrBU4kpWN4gjlUtGJYxGsbA5CjzE3y2qiSJfcDMgS2y25J3y3wEqzgub4Kdwc94EYyKzFVBtHuM6C/7sD/AM1Iq31VOoOPEQHA3vJ4hG8E5548uP2iuVaAk6eu03pmSpEfVM+BGb0nDkrhsx4ykOMiY6lUclbRUEKKVKKgQvKyAoAACNL2sa+9jJ7bH1iZRYW7uWhpDve1wjBTP7kIL+OLx8AryFbucJ3KloFpK8cW82+dcJsCNOjSJ0EoEqM08lTkcrTvI4iQcp3k8xnGRzFVtv207aDJ0dr7aZbtRsW+0aWusyNG0uuA0tqXHhu8N3jPKHFS45urI3FAJ8HkrnU52RSkztuO2OSgEIeXZXEhQwQDABGf30tVBsyYca40rjxrkg/o4J+oVPKgU38eNKfrJP0CqntYZV7nh6sr2ClKVxEFKUoCP7QvxB1L/dkn6JVeS3f8Pjfqk/5Cs/eLai82idb3SUtS2FsKIGcBSSk/51CGb1KsUdqHdLVcjJYSG1PQYLspp3AxvpU2k4BxnBAIzivRkfvl2IddTLWqEQ0psCteiLw0/ZdR6lg2RmSuUzpluenvY2pZUVJSjc39wlSjw9/dyeysZaOjBp+yyNPljUOplQtOz0z7NbnJyFRoGCctITw/CQUqUj2wqUEqISpOc1sPrnG97L98yS/VU65xvey/fMkv1VbsxHwsWXuIVK6OWm5Gn73ZUXC9RYNwu4vsdLEwJVa5gd4vEiHdPDy4SrB3hknAFZWTedpMOS7Hh6P0/PhtLLbMqVqd1p15AOErWgQVBKiMEgEgEkZPbUg65xvey/fMkv1VY2+7WNP6XitSbyblaYzzyIzb061yWULdWcIQCpsAqUeQHaaZiZshZLLMXctlXW7Ulj1jOnXHSmqosQQ5SLBcEuMvs8TicBxTjPtiArJBCUK8I1inejTpl27qdVc753gVc+/CtLd2J71mXxOLv8Pc393ie2cPf3N7nu1O+ucb3sv3zJL9VTrnG97L98yS/VUzEfCy2XuIJfujVprUN2ujr1zvseyXaaLjc9NRpiUW2dIykqW4jcK/CKUlSUrSlRGSDUutuza22jaNdtZQ5M5ifdYbUSdCS8O5Hy3gNvFsjIcSkFAIIG6TkE4I9vXON72X75kl+qp1zje9l++ZJfqqZiPhYsvcds38eNKfrJP0CqntQqxxJOoNQQ7s7CkQIMBDoZTLRw3XnFgJ3tw80pCd4eFgkq7MDJmtcmUtVhh3L1b9SMUpSuMgpSlAKUpQClKUAqu/Th9zHS3xwtH01WIqu/Th9zHS3xwtH01AWIpSlAKUpQClKUApSlAKUpQClKUApSlAKrv04fcx0t8cLR9NViKrv04fcx0t8cLR9NQFiKUpQClKUApSlAKUpQClK4rcQ2MrUEj+scUBypXV3Uz+Wb/iFO6mfyzf8Qq0YO2ldXdTP5Zv+IU7qZ/LN/xClGDtr5jdKjpy3PUslzQd42cd4LnpzUUeW+536L4cVFdJ3Ugx0eCvxL8hBwc19NO6mfyzf8Qr51fymuwFybqOw7RdORDLkXRxuz3KPGTvKW/jEdzA5kqA4ZJ5eA2O1VKMFmuiT0qJXSjt2pZ69Gr0tCtDrDDT5uHdaJTiwtS0g8JvdKAlsnt/3g7PHYCtWdG3ZJA2D7HbBpNtyOZzLXdFxfbUMPS14Lqs+MA4SCf5qE1s7upn8s3/ABClGDtpXV3Uz+Wb/iFO6mfyzf8AEKUYO2ldXdTP5Zv+IU7qZ/LN/wAQpRg7aUpUB5bpN722yXL3d7gMrd3fLupJ/wDyteWvSVqv1uiXK82+JeLlKZQ89JnMJeVlQBKU7w8FA7AkYGB5cmpzqr8WLx/Y3v8AQaj2mvxctX9ka/0CvSyduCW4oXR1MtSPF7H2lvNqz+gNfZp7H2lvNqz+gNfZqPL2+aBb1V1eVqFoXLusQN7gPdzd05xwO6Nzg8TPLc397PLGeVex7bJo+PCmSXLvupiXdNidZ7le44nKUAlhLW5vqJ3gQUpIKfCB3edbc/M43iSr3mV9j7S3m1Z/QGvs09j7S3m1Z/QGvs1H9Q7e9BaU1C9ZbpqFuLOYWhuQrud5bEVa8bqXn0oLbRO8DhaknBB8ddmsdueiNBXoWi9XsM3INJfcjxor8pTDZ/BW7wkK4ST4ivdFM/M43iKveZz2PtLebVn9Aa+zT2PtLebVn9Aa+zXUnaNp5S9UIFw8LTKUruw4Dn+zBTAfH83w/alBXgb3bjt5VBT0krAvapZdJsx50iHdbM1dY1zYt0twLU842llO6lkhKCle8XFEJSfBVunNM/MXvvEVe8n/ALH2lvNqz+gNfZp7H2lvNqz+gNfZqPXjb5oKwaldsU/ULTE9l5EZ9XAeVHYdVjdbdkBBabWcjwVLB5jlWwKufmP33iKveYD2PtLebVn9Aa+zT2PtLebVn9Aa+zUNtm3q1zttV92euQpzUi3sxSzLRBkuIeddDpWlSg1uNpSG04WpW6sqIBykislH296ClavGmWtQtLuypZgJHAd7nVJHawJG5wi7yI3Avezyxmpn5nG8RV7yQex9pbzas/oDX2aDZ/pdJBGm7QCOwiA19mo/N296Ct2rTpqRqFpF1TKRBWAw6Y7chWN1lcgI4SXDkDcUsKyQMZrs2ba8uGsdRbQIE1mM0zp++d7IqmEqCltdzMO5XlRyredUMjAwBy8ZZ+ZxPEVe8kdgaa0vquFarehMa2T4zy+42+TTTjZbwptOMJBSpQIGBkJIGd4md1BHPdC05/Zpn+TVTuuXKr3DE9bXq0GYvVX4sXj+xvf6DUe01+Llq/sjX+gVJNRsrkaeujTaSpxcV1KUjxkoIFRrS60uaatKknKVRGSD5RuCs5PsX4+g2FT9l2y6FbrNb9n2uNMbRp12j3FTb78W4TzYpKe6C63Lyl4MJT+CspwFBQPgk1nbnZb4rb+7tdRouW7p2BLRYV2/uJ/vg+AlTZvCGO1W4V8JPglRZ31DlirTUqWCFQrfs6j2m9610vrXTO0W8d+7/MlMP6fnTu9U+HKc3gXQ08lltSQopcS4BkJ5b2am2mp03YRtB2gMTdG6kv0O+zWJ9qudigKncVlEVpkRnVA+1qbLZALhCSFZyKsPSrZoCtGsHLxpXUG3KP1T1Ddl6
wgsv2dy125chp1QtqYy21uJ8FpSVoJIWQSCN3ePKuemmbvs71nsyv0/TN9m253QLFgf7229yQ7DlpWw5uvtpG82MBQ3iMApIOKspSlkFPtO7Molve1BonXOmNot3euV9lL7os0+f3nnxZMguJec4byWG8JX7YlQB8EnCiat+22Gm0ITndSAkZOTgfDXKoDM2A7NLhLflStA6ckSX1qddedtbKlrWo5Uokp5kkk5olTUCKMvz9E9JTUMyVYbxNtWqbZa4sS5W6CuRHYdZckJcS+tOeEAHkK3lYGM8+Vapt+nNSew/p/Y2nSF8a1LAvzC3r6uERbkMtXDupU5Mr8BRWgfgg7+8sgira2u1w7HbYtvt8VmDBitpZYjR0BDbSEjCUpSOQAHLAr1UsgqDe9OakZ2R6w2Pt6PvcnUt3vspyNe0wlKtzjT87uhE1yV+AkoQRlJO/vNgAHtrcWySBctO7UdqlvuFpnsM3K7t3iFclMnuSSyqKw0UpcHLiJW0rKDg4weytuUooaAw7nuhac/s0z/ACaqd1BigubQdP7ozuRJi1cuxOWRn95H76nNYZT7nh6sr2ConK2fJ47i7Ze7lY2VqKzFhhhbIUeZKUutL3cnnhJAyScc6llK5oJkUv8AixWhDeoFw88738hC+706gXDzzvfyEL7vUypW7SZnLBdBUhvUC4eed7+Qhfd6dQLh553v5CF93qZUppMzlgugqQ3qBcPPO9/IQvu9ar6Rd31Lsg0fZbtaNVT5UibfoNrcRNjRFIDTzm6tQ3WUneA7OePgNWGqu/Th9zHS3xwtH01NJmcsF0FTa3UC4eed7+Qhfd6dQLh553v5CF93qZUppMzlgugqQ3qBcPPO9/IQvu9OoFw88738hC+71MqU0mZywXQVIb1AuHnne/kIX3ev0aBuAIPXK9H4CxC5/wD16mNKaTM5YLoKmHsOmI1hLroefnTXgEuzZakqdWB2J8EBKUjJO6kAZJOMkmsxSlc8UUUbtRO8gpSlYgUpSgFKUoBVd+nD7mOlvjhaPpqsRVd+nD7mOlvjhaPpqAsRSlKAUpSgFKUoBSlKAUpSgFKUoBSlKAVXfpw+5jpb44Wj6arEV8U+mdsN9gnbjdrbEj8HT9z/APU7VujwEMuKOWh5OGsKRjt3QkntoD7WUqm38mNsfl6E2RXTV1wDjMnVzzTjEdYxuxWOIlpeO3K1OOnyFO4R21cmgFKUoBSlKAUpSgFcXFpaQpa1BCEglSlHAA8prlWB18tTWhNRrScKTbZJB8h4SqzghtxKHeVXsxSta3e4jj2axMSYKubT9wnKiqdT/SCA0sgHxb2CQc4FcetGrfNyz/PTv3WvTbUhNuihIAAaQAB4uQr016VJSusLF9S1W4xvWjVvm5Z/np37rTrRq3zcs/z0791rJVh0axsDkKPMTfLaqJIl9wMyBLbLbknfLfASrOC5vgp3Bz3gRjIp3Xw1jF1JXkd3WjVvm5Z/np37rWlek9sGufSbsdhhXK2WqzyrTOEhuczc3HVqZVgPMAGMMBYCTvc8FCTg8wd+Up3Xw19XUV5GCtFy1DYbVCtlv0pZIkCEwiNHjt3l0JabQkJSkDuXsAAH7K9fWjVvm5Z/np37rWSpTuvhr6uoryMb1o1b5uWf56d+6060at83LP8APTv3Wu653m32Rpl24zo0Bp55uM2uU8ltK3VqCUNpKiMqUogBI5knAr2U7r4axi6ivIxvWjVvm5Z/np37rWVsGqXLlLVAuEE2y5BBdQ0HeK28gEAqQvAzgkZBAIyOWCDXgi3m3zrhNgRp0aROglAlRmnkqcjlad5HESDlO8nmM4yOYrxTFFOuNLY5FSpKSfg4JOP3gfuqOCXGmlClc3dXYq7Wy6yeUpSvLMRUf2hfiDqX+7JP0SqkFR/aF+IOpf7sk/RKrdI9rD4r7lWtHhiupYtLLqslKGAo4GTgJrTuxGfrvabZLDtCuWsUQrRdguWnS0W2sKjtRiVBtBfI4pcA3VKVvYzkBI7a3Lbv+Hxv1Sf8hWuNKbArXoi8NP2XUepYNkZkrlM6Zbnp72NqWVFSUo3N/cJUo8Pf3cnsrsiraIaq0ztn1UradpJ+NfrtqnQ2pLu/a0S5tjiwoJ9qeW2qI4lfHXulrBU4kpWN4gjlWFtHuM6C/wC7A/8ANSK25aOjBp+yyNPljUOplQtOz0z7NbnJyFRoGCctITw/CQUqUj2wqUEqISpOc17pXRy03I0/e7Ki4XqLBuF3F9jpYmBKrXMDvF4kQ7p4eXCVYO8Mk4ArXRg2pVfdd6m1zc9fbULdZNYr07b9LWGHdIrTNtjvqcecbkqKVqcSfazwBkDwuYwpOCDsOTedpMOS7Hh6P0/PhtLLbMqVqd1p15AOErWgQVBKiMEgEgEkZPbX7b9mTFxlaovd2S9AvGq7WxbbnEiy0vsR0NJeSngrLSCTh9eVKTg4HgjBzk79QNTWTavrPSqtn9+v9+Go7dq7TU28yLUmAzHTCdZiIlJTHUgb5SQpSCHFLPYcjsrsse0DX+nbTsu1pqDU7F6tmtpkSLLsDVuaaagiWyt1kx3EjiKLZCUq3yveBURu1teLsZsUZWg8vTH0aNgOW6C28ptSX2lx0x1ccbnhHcQPwd0ZJ5Y5VhNLdG/TulrzZJabtfrpb7CtTllstynB2FbVFJSC0ncCiUpUpKeIpe6DyxWNIgaS1DetZ7S9negNoV11OhuyXrWFnfjaXYgNcKNHNwQlnL+OIp3ASpRJ3ckgJHI1l5m1ja1re5aruujLfe3I1pu0q2W23RrfbHLfKMdwtnul16QiQCtSVZLYTuBQwFkZOxm+ivp2M9EZi6h1PEsUO7s3qJp5qc33vjPtvh8BCFNlQbK85RvYG8cYOCMw9sAtTeq7jerTqLUunGrnMFwuFqtFwDMOXI5bzqklBUlS90b24pO9jnmllgxGyF52Rtw2xOvsGM+tdlU4wVBRbUYAJTkcjg8s1syb+PGlP1kn6BVY63bN7bado921nElTmJ92iNxJ0MPAxHy3gNvFBGQ4lIKAQQN0nIJ5jIzfx40p+sk/QKrfLuteEX2ZUT2lKV5RBXivdsRerNPty1biJcdyOpQGcBSSknH7a9tKqbhaaBrpjUYskZqHeIc6LOYQltzhQnn2nCBjebcQgpUk4z4iMjeCTyrl17tPkuHzXK9XWw6V3aRLd7gdfH8MyuNede7T5Lh81yvV0692nyXD5rlerrYdKaRK4Hj+CXGvOvdp8lw+a5Xq68V22raYsLDb9znPW5lx1LKHJUGQ0lTijhKAVIGVE9g7TW0Krv04fcx0t8cLR9NTSJXA8fwLjYvXu0+S4fNcr1dOvdp8lw+a5Xq62HSmkSuB4/gXGvOvdp8lw+a5Xq6de7T5Lh81yvV1sOlNIlcDx/AuNede7T5Lh81yvV167Iy7qXUcG6IjSItut6XdxyWyplb7qwE+ChYCgkJ3vCIAJIxnmROKVIsoho1BDRvnX0QqtgpSlcJBSlKAUpSgFKUoBVd+nD7mOlvjhaPpqsRVd+nD7mOlvjhaPpqAsRSl
KAUpSgFKUoBSlKAUpSgFKUoBSlKAVXfpw+5jpb44Wj6arEV8w+k707J2r0jRF32cL09ddPahjy5ObxxwVxXSVNj2hPJRHJXZjng0B9PKVX7ol9K1zpSRdTSRpBzTEazLjtodVO7qTJU4HCoA8JABQG057f8AeDs8dgaAUpSgFKUoBSlKAUpSgFK4rWltClrUEoSMlSjgAeWtJ6y2sz74+5G0/JNvtYOO7kJBekeUo3hhCPIcbx7QU8s9uS5HNyyKzL2a3sRTd1KqhJgNzllcxx+c4e1yXIW8o/pKia6e8Fu/NG/3V9Auwbr5v0/klUW1r5s/ynOwB2NqyybRrFBU6m9OItdxaYQVKVLAwwvAGSVoG5+ltPjVW8e8Fu/NG/3U7wW780b/AHVf0FfF+n+wqjbvRe2KsbBNjFj0xuo75lHdl0dTg8SW4AXOY7QnCUA+NKBW2KqV3gt35o3+6neC3fmjf7qfoK+L9P8AYVRbWlVK7wW8f+0bH7KytpuVz084ly03WZBKexrjF1k/AWl5T8GQAfIRyrCPsFpfsmVfNU9WLi0FKhOzzaM3q9K4U1tuJemUlamWydx5AIHEbzzxkgFJyUkjmQQTNq+anSY5EblzFRoClKVpApSlAa3243tyFpyJamV7i7q/wnSDg8BKSpwD/qISg/As1qGtlbe4qw/pmb/yUuvxif6y0BSf8GlVrWvv+yIYYckha1utcafYRbBSlK9kwItqjahpjRs8Qrtc+BK4fGU01HdfLTfPC3OGlXDTyPNWByPkrpvO1vSdiktR5V2C33oiJ7TcSO7JU5HWVBLqQ0lW8nwFZI7OROARnWWodPOWHabq6bebXrC4QLyY8iDI0vJlBB3GUtrZdQy4kJIKchS+RCu0VKNG6QTpraqyi32uXCsUfScaHHU8lSktqEl1XCLhJBWAQSMk4I8Vecp06KJqiV9Ntf8AMpLLvtO0xY7JbLtKuzfcNz3e4lsIW+uTkbw4aEJUpXLmcDl48V4tlOvl7RbPd7ieAYzF2lQ4q2EKRvsNrw2pQUSd4g8+z9ArUugrZeNBN7Pr/ctOXedCjWmba3YsWEt2TAdXJC0OFnG/uqQndyByGPEa2VsQjzG7PqWRMt0y1mbqKfLaYnslpzhrc3kq3T4iPGOVJU6ZMjhtXXavkgbFpSleiQ/O+cixPMXaIT3Vb1iSgA43wn8JB+BSd5J/6vF21aOO+3KYbeaVvtOJC0qHjBGQaqld3A1apiiM4aVgYzk45DHjzVorDBXbLHbobhy5HjNtKOc80pAP+VfKdvQw0lxbb18rv98zNaj30pSvkgKUpQGF1jpdjWOn5NsfWWS5hbT6RktOJIUhYHjwQMjxjI7DVd5kOXaLg7bbkyItxYALjOcgg9i0H+cg4OFfAQcEEC0VYfUukbTq6Khi6w0yA2SppwEocaJ7ShYwpPYM4PPx5r2uzu0XkbcEarA/Ia7mVDk7INDTJDsh/SNlefdWXHHFwWypaickk45kmur2F9A+Zlj+b2vs1YSRsDjb3+yahuTKPEl5DTuP27gP7810+wGrznl+itV9Cu0Oz3e6f8voKczVsCBGtUJiHDjtxYjCA20wykJQ2kDASAOQAHirvrZfsBq855forVPYDV5zy/RWq3fquRr3/J9BZ5mtKwWodCac1a+09e7Fb7s80ncbXMjIdUhOc4BUDgZrdHsBq855forVPYDV5zy/RWqkXamRRKkUVfk+gs8zQXsMaCxjqbY8f3e19msxp3RGn9IqfVY7JAtCnwA6YUdDRcAzjO6BnGT++ty+wGrznl+itfVWVtWwyxRHUuXGTNveP+TLWlLJ555oQlIV+hWR8FaX2nkEv90N75LrQU5kK2Z6Md1bd2Lk+2U2OC6HA4rkJbyTySnyoSoZUrsJASM+Hu77rgyy3HaQ00hLTSEhKEIGEpA5AAeIVzr5TLcsjyyZbiuS1LcBSlK4AKUpQClKUApSlAKUpQClKUApSlAKUpQClKUB/9k=",
+ "text/plain": [
+ ""
+ ]
+ },
+ "execution_count": 4,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "# Visualize chain:\n",
+ "\n",
+ "from IPython.display import Image\n",
+ "\n",
+ "Image(chain.get_graph().draw_mermaid_png())"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "id": "3ea1d71f-e31d-4722-be39-9a2b16d72f5f",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "================================\u001b[1m Human Message \u001b[0m=================================\n",
+ "\n",
+ "What is 551368 divided by 82\n",
+ "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
+ "Tool Calls:\n",
+ " calculator (call_1ic3gjuII0Aq9vxlSYiwvjSb)\n",
+ " Call ID: call_1ic3gjuII0Aq9vxlSYiwvjSb\n",
+ " Args:\n",
+ " expression: 551368 / 82\n",
+ "=================================\u001b[1m Tool Message \u001b[0m=================================\n",
+ "Name: calculator\n",
+ "\n",
+ "6724.0\n",
+ "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
+ "\n",
+ "551368 divided by 82 equals 6724.\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Stream chain steps:\n",
+ "\n",
+ "example_query = \"What is 551368 divided by 82\"\n",
+ "\n",
+ "events = chain.astream(\n",
+ " {\"messages\": [(\"user\", example_query)]},\n",
+ " stream_mode=\"values\",\n",
+ ")\n",
+ "async for event in events:\n",
+ " event[\"messages\"][-1].pretty_print()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b2717810",
+ "metadata": {},
+ "source": [
+ " \n",
+ "\n",
+ "## Next steps\n",
+ "\n",
+ "See guides for building and working with tools [here](/docs/how_to/#tools).\n",
+ "\n",
+ "Check out the [LangGraph documentation](https://langchain-ai.github.io/langgraph/) for detail on building with LangGraph."
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.4"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/libs/community/langchain_community/chains/natbot/__init__.py b/libs/community/langchain_community/chains/natbot/__init__.py
new file mode 100644
index 00000000000..aeec86f8bf2
--- /dev/null
+++ b/libs/community/langchain_community/chains/natbot/__init__.py
@@ -0,0 +1,8 @@
+"""Implement a GPT-3 driven browser.
+
+Heavily influenced from https://github.com/nat/natbot
+"""
+
+from langchain_community.chains.natbot.base import NatBotChain
+
+__all__ = ["NatBotChain"]
diff --git a/libs/community/langchain_community/chains/natbot/base.py b/libs/community/langchain_community/chains/natbot/base.py
new file mode 100644
index 00000000000..7cb575e90e4
--- /dev/null
+++ b/libs/community/langchain_community/chains/natbot/base.py
@@ -0,0 +1,3 @@
+from langchain.chains import NatBotChain
+
+__all__ = ["NatBotChain"]
diff --git a/libs/community/langchain_community/chains/natbot/crawler.py b/libs/community/langchain_community/chains/natbot/crawler.py
new file mode 100644
index 00000000000..5c2c7657127
--- /dev/null
+++ b/libs/community/langchain_community/chains/natbot/crawler.py
@@ -0,0 +1,7 @@
+from langchain.chains.natbot.crawler import (
+ Crawler,
+ ElementInViewPort,
+ black_listed_elements,
+)
+
+__all__ = ["ElementInViewPort", "Crawler", "black_listed_elements"]
diff --git a/libs/community/langchain_community/chains/natbot/prompt.py b/libs/community/langchain_community/chains/natbot/prompt.py
new file mode 100644
index 00000000000..0ea63d5bbe2
--- /dev/null
+++ b/libs/community/langchain_community/chains/natbot/prompt.py
@@ -0,0 +1,3 @@
+from langchain.chains.natbot.prompt import PROMPT
+
+__all__ = ["PROMPT"]
diff --git a/libs/community/tests/integration_tests/retrievers/document_compressors/test_chain_extract.py b/libs/community/tests/integration_tests/retrievers/document_compressors/test_chain_extract.py
index ded7e5149be..aa167487172 100644
--- a/libs/community/tests/integration_tests/retrievers/document_compressors/test_chain_extract.py
+++ b/libs/community/tests/integration_tests/retrievers/document_compressors/test_chain_extract.py
@@ -6,14 +6,6 @@ from langchain_core.documents import Document
from langchain_community.chat_models import ChatOpenAI
-def test_llm_construction_with_kwargs() -> None:
- llm_chain_kwargs = {"verbose": True}
- compressor = LLMChainExtractor.from_llm(
- ChatOpenAI(), llm_chain_kwargs=llm_chain_kwargs
- )
- assert compressor.llm_chain.verbose is True
-
-
def test_llm_chain_extractor() -> None:
texts = [
"The Roman Empire followed the Roman Republic.",
diff --git a/libs/langchain/tests/unit_tests/chains/test_natbot.py b/libs/community/tests/unit_tests/chains/test_natbot.py
similarity index 99%
rename from libs/langchain/tests/unit_tests/chains/test_natbot.py
rename to libs/community/tests/unit_tests/chains/test_natbot.py
index 3f1f79da2e8..2b85ebbc209 100644
--- a/libs/langchain/tests/unit_tests/chains/test_natbot.py
+++ b/libs/community/tests/unit_tests/chains/test_natbot.py
@@ -2,11 +2,10 @@
from typing import Any, Dict, List, Optional
+from langchain.chains.natbot.base import NatBotChain
from langchain_core.callbacks.manager import CallbackManagerForLLMRun
from langchain_core.language_models.llms import LLM
-from langchain.chains.natbot.base import NatBotChain
-
class FakeLLM(LLM):
"""Fake LLM wrapper for testing purposes."""
diff --git a/libs/langchain/langchain/chains/constitutional_ai/base.py b/libs/langchain/langchain/chains/constitutional_ai/base.py
index bd86b57ed27..a095bf4047d 100644
--- a/libs/langchain/langchain/chains/constitutional_ai/base.py
+++ b/libs/langchain/langchain/chains/constitutional_ai/base.py
@@ -2,6 +2,7 @@
from typing import Any, Dict, List, Optional
+from langchain_core._api import deprecated
from langchain_core.callbacks import CallbackManagerForChainRun
from langchain_core.language_models import BaseLanguageModel
from langchain_core.prompts import BasePromptTemplate
@@ -13,9 +14,151 @@ from langchain.chains.constitutional_ai.prompts import CRITIQUE_PROMPT, REVISION
from langchain.chains.llm import LLMChain
+@deprecated(
+ since="0.2.13",
+ message=(
+ "This class is deprecated and will be removed in langchain 1.0. "
+ "See API reference for replacement: "
+ "https://api.python.langchain.com/en/latest/chains/langchain.chains.constitutional_ai.base.ConstitutionalChain.html" # noqa: E501
+ ),
+ removal="1.0",
+)
class ConstitutionalChain(Chain):
"""Chain for applying constitutional principles.
+ Note: this class is deprecated. See below for a replacement implementation
+ using LangGraph. The benefits of this implementation are:
+
+ - Uses LLM tool calling features instead of parsing string responses;
+ - Support for both token-by-token and step-by-step streaming;
+ - Support for checkpointing and memory of chat history;
+ - Easier to modify or extend (e.g., with additional tools, structured responses, etc.)
+
+ Install LangGraph with:
+
+ .. code-block:: bash
+
+ pip install -U langgraph
+
+ .. code-block:: python
+
+ from typing import List, Optional, Tuple
+
+ from langchain.chains.constitutional_ai.prompts import (
+ CRITIQUE_PROMPT,
+ REVISION_PROMPT,
+ )
+ from langchain.chains.constitutional_ai.models import ConstitutionalPrinciple
+ from langchain_core.output_parsers import StrOutputParser
+ from langchain_core.prompts import ChatPromptTemplate
+ from langchain_openai import ChatOpenAI
+ from langgraph.graph import END, START, StateGraph
+ from typing_extensions import Annotated, TypedDict
+
+ llm = ChatOpenAI(model="gpt-4o-mini")
+
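+ # `.with_structured_output(Critique)` builds its schema from this TypedDict's
+ # docstring and the Annotated field descriptions below.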
+ class Critique(TypedDict):
+ \"\"\"Generate a critique, if needed.\"\"\"
+ critique_needed: Annotated[bool, ..., "Whether or not a critique is needed."]
+ critique: Annotated[str, ..., "If needed, the critique."]
+
+ critique_prompt = ChatPromptTemplate.from_template(
+ "Critique this response according to the critique request. "
+ "If no critique is needed, specify that.\\n\\n"
+ "Query: {query}\\n\\n"
+ "Response: {response}\\n\\n"
+ "Critique request: {critique_request}"
+ )
+
+ revision_prompt = ChatPromptTemplate.from_template(
+ "Revise this response according to the critique and reivsion request.\\n\\n"
+ "Query: {query}\\n\\n"
+ "Response: {response}\\n\\n"
+ "Critique request: {critique_request}\\n\\n"
+ "Critique: {critique}\\n\\n"
+ "If the critique does not identify anything worth changing, ignore the "
+ "revision request and return 'No revisions needed'. If the critique "
+ "does identify something worth changing, revise the response based on "
+ "the revision request.\\n\\n"
+ "Revision Request: {revision_request}"
+ )
+
+ chain = llm | StrOutputParser()
+ critique_chain = critique_prompt | llm.with_structured_output(Critique)
+ revision_chain = revision_prompt | llm | StrOutputParser()
+
+
+ class State(TypedDict):
+ query: str
+ constitutional_principles: List[ConstitutionalPrinciple]
+ initial_response: str
+ critiques_and_revisions: List[Tuple[str, str]]
+ response: str
+
+
+ async def generate_response(state: State):
+ \"\"\"Generate initial response.\"\"\"
+ response = await chain.ainvoke(state["query"])
+ return {"response": response, "initial_response": response}
+
+ async def critique_and_revise(state: State):
+ \"\"\"Critique and revise response according to principles.\"\"\"
+ critiques_and_revisions = []
+ response = state["initial_response"]
+ for principle in state["constitutional_principles"]:
+ critique = await critique_chain.ainvoke(
+ {
+ "query": state["query"],
+ "response": response,
+ "critique_request": principle.critique_request,
+ }
+ )
+ if critique["critique_needed"]:
+ revision = await revision_chain.ainvoke(
+ {
+ "query": state["query"],
+ "response": response,
+ "critique_request": principle.critique_request,
+ "critique": critique["critique"],
+ "revision_request": principle.revision_request,
+ }
+ )
+ response = revision
+ critiques_and_revisions.append((critique["critique"], revision))
+ else:
+ critiques_and_revisions.append((critique["critique"], ""))
+ return {
+ "critiques_and_revisions": critiques_and_revisions,
+ "response": response,
+ }
+
+ graph = StateGraph(State)
+ graph.add_node("generate_response", generate_response)
+ graph.add_node("critique_and_revise", critique_and_revise)
+
+ graph.add_edge(START, "generate_response")
+ graph.add_edge("generate_response", "critique_and_revise")
+ graph.add_edge("critique_and_revise", END)
+ app = graph.compile()
+
+ .. code-block:: python
+
+ constitutional_principles = [
+ ConstitutionalPrinciple(
+ critique_request="Tell if this answer is good.",
+ revision_request="Give a better answer.",
+ )
+ ]
+
+ query = "What is the meaning of life? Answer in 10 words or fewer."
+
+ async for step in app.astream(
+ {"query": query, "constitutional_principles": constitutional_principles},
+ stream_mode="values",
+ ):
+ subset = ["initial_response", "critiques_and_revisions", "response"]
+ print({k: v for k, v in step.items() if k in subset})
+
Example:
.. code-block:: python
@@ -44,7 +187,7 @@ class ConstitutionalChain(Chain):
)
constitutional_chain.run(question="What is the meaning of life?")
- """
+ """ # noqa: E501
chain: LLMChain
constitutional_principles: List[ConstitutionalPrinciple]
diff --git a/libs/langchain/langchain/chains/flare/base.py b/libs/langchain/langchain/chains/flare/base.py
index 8a100ed0595..1d55bed468b 100644
--- a/libs/langchain/langchain/chains/flare/base.py
+++ b/libs/langchain/langchain/chains/flare/base.py
@@ -1,7 +1,6 @@
from __future__ import annotations
import re
-from abc import abstractmethod
from typing import Any, Dict, List, Optional, Sequence, Tuple
import numpy as np
@@ -9,10 +8,12 @@ from langchain_core.callbacks import (
CallbackManagerForChainRun,
)
from langchain_core.language_models import BaseLanguageModel
-from langchain_core.outputs import Generation
+from langchain_core.messages import AIMessage
+from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import BasePromptTemplate
from langchain_core.pydantic_v1 import Field
from langchain_core.retrievers import BaseRetriever
+from langchain_core.runnables import Runnable
from langchain.chains.base import Chain
from langchain.chains.flare.prompts import (
@@ -23,51 +24,14 @@ from langchain.chains.flare.prompts import (
from langchain.chains.llm import LLMChain
-class _ResponseChain(LLMChain):
- """Base class for chains that generate responses."""
-
- prompt: BasePromptTemplate = PROMPT
-
- @classmethod
- def is_lc_serializable(cls) -> bool:
- return False
-
- @property
- def input_keys(self) -> List[str]:
- return self.prompt.input_variables
-
- def generate_tokens_and_log_probs(
- self,
- _input: Dict[str, Any],
- *,
- run_manager: Optional[CallbackManagerForChainRun] = None,
- ) -> Tuple[Sequence[str], Sequence[float]]:
- llm_result = self.generate([_input], run_manager=run_manager)
- return self._extract_tokens_and_log_probs(llm_result.generations[0])
-
- @abstractmethod
- def _extract_tokens_and_log_probs(
- self, generations: List[Generation]
- ) -> Tuple[Sequence[str], Sequence[float]]:
- """Extract tokens and log probs from response."""
-
-
-class _OpenAIResponseChain(_ResponseChain):
- """Chain that generates responses from user input and context."""
-
- llm: BaseLanguageModel
-
- def _extract_tokens_and_log_probs(
- self, generations: List[Generation]
- ) -> Tuple[Sequence[str], Sequence[float]]:
- tokens = []
- log_probs = []
- for gen in generations:
- if gen.generation_info is None:
- raise ValueError
- tokens.extend(gen.generation_info["logprobs"]["tokens"])
- log_probs.extend(gen.generation_info["logprobs"]["token_logprobs"])
- return tokens, log_probs
+def _extract_tokens_and_log_probs(response: AIMessage) -> Tuple[List[str], List[float]]:
+ """Extract tokens and log probabilities from chat model response."""
+ tokens = []
+ log_probs = []
+ for token in response.response_metadata["logprobs"]["content"]:
+ tokens.append(token["token"])
+ log_probs.append(token["logprob"])
+ return tokens, log_probs
class QuestionGeneratorChain(LLMChain):
@@ -111,9 +75,9 @@ class FlareChain(Chain):
"""Chain that combines a retriever, a question generator,
and a response generator."""
- question_generator_chain: QuestionGeneratorChain
+ question_generator_chain: Runnable
"""Chain that generates questions from uncertain spans."""
- response_chain: _ResponseChain
+ response_chain: Runnable
"""Chain that generates responses from user input and context."""
output_parser: FinishedOutputParser = Field(default_factory=FinishedOutputParser)
"""Parser that determines whether the chain is finished."""
@@ -152,12 +116,16 @@ class FlareChain(Chain):
for question in questions:
docs.extend(self.retriever.invoke(question))
context = "\n\n".join(d.page_content for d in docs)
- result = self.response_chain.predict(
- user_input=user_input,
- context=context,
- response=response,
- callbacks=callbacks,
+ result = self.response_chain.invoke(
+ {
+ "user_input": user_input,
+ "context": context,
+ "response": response,
+ },
+ {"callbacks": callbacks},
)
+ if isinstance(result, AIMessage):
+ result = result.content
marginal, finished = self.output_parser.parse(result)
return marginal, finished
@@ -178,13 +146,18 @@ class FlareChain(Chain):
for span in low_confidence_spans
]
callbacks = _run_manager.get_child()
- question_gen_outputs = self.question_generator_chain.apply(
- question_gen_inputs, callbacks=callbacks
- )
- questions = [
- output[self.question_generator_chain.output_keys[0]]
- for output in question_gen_outputs
- ]
+ if isinstance(self.question_generator_chain, LLMChain):
+ question_gen_outputs = self.question_generator_chain.apply(
+ question_gen_inputs, callbacks=callbacks
+ )
+ questions = [
+ output[self.question_generator_chain.output_keys[0]]
+ for output in question_gen_outputs
+ ]
+ else:
+ questions = self.question_generator_chain.batch(
+ question_gen_inputs, config={"callbacks": callbacks}
+ )
_run_manager.on_text(
f"Generated Questions: {questions}", color="yellow", end="\n"
)
@@ -206,8 +179,10 @@ class FlareChain(Chain):
f"Current Response: {response}", color="blue", end="\n"
)
_input = {"user_input": user_input, "context": "", "response": response}
- tokens, log_probs = self.response_chain.generate_tokens_and_log_probs(
- _input, run_manager=_run_manager
+ tokens, log_probs = _extract_tokens_and_log_probs(
+ self.response_chain.invoke(
+ _input, {"callbacks": _run_manager.get_child()}
+ )
)
low_confidence_spans = _low_confidence_spans(
tokens,
@@ -251,18 +226,16 @@ class FlareChain(Chain):
FlareChain class with the given language model.
"""
try:
- from langchain_openai import OpenAI
+ from langchain_openai import ChatOpenAI
except ImportError:
raise ImportError(
"OpenAI is required for FlareChain. "
"Please install langchain-openai."
"pip install langchain-openai"
)
- question_gen_chain = QuestionGeneratorChain(llm=llm)
- response_llm = OpenAI(
- max_tokens=max_generation_len, model_kwargs={"logprobs": 1}, temperature=0
- )
- response_chain = _OpenAIResponseChain(llm=response_llm)
+ llm = ChatOpenAI(max_tokens=max_generation_len, logprobs=True, temperature=0)
+ response_chain = PROMPT | llm
+ question_gen_chain = QUESTION_GENERATOR_PROMPT | llm | StrOutputParser()
return cls(
question_generator_chain=question_gen_chain,
response_chain=response_chain,
diff --git a/libs/langchain/langchain/chains/hyde/base.py b/libs/langchain/langchain/chains/hyde/base.py
index 851e76c1599..833999127b6 100644
--- a/libs/langchain/langchain/chains/hyde/base.py
+++ b/libs/langchain/langchain/chains/hyde/base.py
@@ -11,7 +11,9 @@ import numpy as np
from langchain_core.callbacks import CallbackManagerForChainRun
from langchain_core.embeddings import Embeddings
from langchain_core.language_models import BaseLanguageModel
+from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import BasePromptTemplate
+from langchain_core.runnables import Runnable
from langchain.chains.base import Chain
from langchain.chains.hyde.prompts import PROMPT_MAP
@@ -25,7 +27,7 @@ class HypotheticalDocumentEmbedder(Chain, Embeddings):
"""
base_embeddings: Embeddings
- llm_chain: LLMChain
+ llm_chain: Runnable
class Config:
arbitrary_types_allowed = True
@@ -34,12 +36,15 @@ class HypotheticalDocumentEmbedder(Chain, Embeddings):
@property
def input_keys(self) -> List[str]:
"""Input keys for Hyde's LLM chain."""
- return self.llm_chain.input_keys
+ return self.llm_chain.input_schema.schema()["required"]
@property
def output_keys(self) -> List[str]:
"""Output keys for Hyde's LLM chain."""
- return self.llm_chain.output_keys
+ if isinstance(self.llm_chain, LLMChain):
+ return self.llm_chain.output_keys
+ else:
+ return ["text"]
def embed_documents(self, texts: List[str]) -> List[List[float]]:
"""Call the base embeddings."""
@@ -51,9 +56,12 @@ class HypotheticalDocumentEmbedder(Chain, Embeddings):
def embed_query(self, text: str) -> List[float]:
"""Generate a hypothetical document and embedded it."""
- var_name = self.llm_chain.input_keys[0]
- result = self.llm_chain.generate([{var_name: text}])
- documents = [generation.text for generation in result.generations[0]]
+ var_name = self.input_keys[0]
+ result = self.llm_chain.invoke({var_name: text})
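+ # A legacy LLMChain returns a dict keyed by its output key; an LCEL chain
+ # (prompt | llm | StrOutputParser()) returns the generated string directly.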
+ if isinstance(self.llm_chain, LLMChain):
+ documents = [result[self.output_keys[0]]]
+ else:
+ documents = [result]
embeddings = self.embed_documents(documents)
return self.combine_embeddings(embeddings)
@@ -64,7 +72,9 @@ class HypotheticalDocumentEmbedder(Chain, Embeddings):
) -> Dict[str, str]:
"""Call the internal llm chain."""
_run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()
- return self.llm_chain(inputs, callbacks=_run_manager.get_child())
+ return self.llm_chain.invoke(
+ inputs, config={"callbacks": _run_manager.get_child()}
+ )
@classmethod
def from_llm(
@@ -86,7 +96,7 @@ class HypotheticalDocumentEmbedder(Chain, Embeddings):
f"of {list(PROMPT_MAP.keys())}."
)
- llm_chain = LLMChain(llm=llm, prompt=prompt)
+ llm_chain = prompt | llm | StrOutputParser()
return cls(base_embeddings=base_embeddings, llm_chain=llm_chain, **kwargs)
@property
diff --git a/libs/langchain/langchain/chains/llm_math/base.py b/libs/langchain/langchain/chains/llm_math/base.py
index 0733b0079b3..e7fd89dcd54 100644
--- a/libs/langchain/langchain/chains/llm_math/base.py
+++ b/libs/langchain/langchain/chains/llm_math/base.py
@@ -7,6 +7,7 @@ import re
import warnings
from typing import Any, Dict, List, Optional
+from langchain_core._api import deprecated
from langchain_core.callbacks import (
AsyncCallbackManagerForChainRun,
CallbackManagerForChainRun,
@@ -20,16 +21,132 @@ from langchain.chains.llm import LLMChain
from langchain.chains.llm_math.prompt import PROMPT
+@deprecated(
+ since="0.2.13",
+ message=(
+ "This class is deprecated and will be removed in langchain 1.0. "
+ "See API reference for replacement: "
+ "https://api.python.langchain.com/en/latest/chains/langchain.chains.llm_math.base.LLMMathChain.html" # noqa: E501
+ ),
+ removal="1.0",
+)
class LLMMathChain(Chain):
"""Chain that interprets a prompt and executes python code to do math.
+ Note: this class is deprecated. See below for a replacement implementation
+ using LangGraph. The benefits of this implementation are:
+
+ - Uses LLM tool calling features;
+ - Support for both token-by-token and step-by-step streaming;
+ - Support for checkpointing and memory of chat history;
+ - Easier to modify or extend (e.g., with additional tools, structured responses, etc.)
+
+ Install LangGraph with:
+
+ .. code-block:: bash
+
+ pip install -U langgraph
+
+ .. code-block:: python
+
+ import math
+ from typing import Annotated, Sequence
+
+ from langchain_core.messages import BaseMessage
+ from langchain_core.runnables import RunnableConfig
+ from langchain_core.tools import tool
+ from langchain_openai import ChatOpenAI
+ from langgraph.graph import END, StateGraph
+ from langgraph.graph.message import add_messages
+ from langgraph.prebuilt.tool_node import ToolNode
+ import numexpr
+ from typing_extensions import TypedDict
+
+ @tool
+ def calculator(expression: str) -> str:
+ \"\"\"Calculate expression using Python's numexpr library.
+
+ Expression should be a single line mathematical expression
+ that solves the problem.
+
+ Examples:
+ "37593 * 67" for "37593 times 67"
+ "37593**(1/5)" for "37593^(1/5)"
+ \"\"\"
+ local_dict = {"pi": math.pi, "e": math.e}
+ return str(
+ numexpr.evaluate(
+ expression.strip(),
+ global_dict={}, # restrict access to globals
+ local_dict=local_dict, # add common mathematical functions
+ )
+ )
+
+ llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
+ tools = [calculator]
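+ # tool_choice="any" requires the model to call at least one tool, so the
+ # first model turn always produces a calculator call.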
+ llm_with_tools = llm.bind_tools(tools, tool_choice="any")
+
+ class ChainState(TypedDict):
+ \"\"\"LangGraph state.\"\"\"
+
+ messages: Annotated[Sequence[BaseMessage], add_messages]
+
+ async def acall_chain(state: ChainState, config: RunnableConfig):
+ last_message = state["messages"][-1]
+ response = await llm_with_tools.ainvoke(state["messages"], config)
+ return {"messages": [response]}
+
+ async def acall_model(state: ChainState, config: RunnableConfig):
+ response = await llm.ainvoke(state["messages"], config)
+ return {"messages": [response]}
+
+ graph_builder = StateGraph(ChainState)
+ graph_builder.add_node("call_tool", acall_chain)
+ graph_builder.add_node("execute_tool", ToolNode(tools))
+ graph_builder.add_node("call_model", acall_model)
+ graph_builder.set_entry_point("call_tool")
+ graph_builder.add_edge("call_tool", "execute_tool")
+ graph_builder.add_edge("execute_tool", "call_model")
+ graph_builder.add_edge("call_model", END)
+ chain = graph_builder.compile()
+
+ .. code-block:: python
+
+ example_query = "What is 551368 divided by 82"
+
+ events = chain.astream(
+ {"messages": [("user", example_query)]},
+ stream_mode="values",
+ )
+ async for event in events:
+ event["messages"][-1].pretty_print()
+
+ .. code-block:: none
+
+ ================================ Human Message =================================
+
+ What is 551368 divided by 82
+ ================================== Ai Message ==================================
+ Tool Calls:
+ calculator (call_MEiGXuJjJ7wGU4aOT86QuGJS)
+ Call ID: call_MEiGXuJjJ7wGU4aOT86QuGJS
+ Args:
+ expression: 551368 / 82
+ ================================= Tool Message =================================
+ Name: calculator
+
+ 6724.0
+ ================================== Ai Message ==================================
+
+ 551368 divided by 82 equals 6724.
+
Example:
.. code-block:: python
from langchain.chains import LLMMathChain
from langchain_community.llms import OpenAI
llm_math = LLMMathChain.from_llm(OpenAI())
- """
+ """ # noqa: E501
llm_chain: LLMChain
llm: Optional[BaseLanguageModel] = None
diff --git a/libs/langchain/langchain/chains/natbot/base.py b/libs/langchain/langchain/chains/natbot/base.py
index 910e03f7d4f..e92131ff35c 100644
--- a/libs/langchain/langchain/chains/natbot/base.py
+++ b/libs/langchain/langchain/chains/natbot/base.py
@@ -5,15 +5,27 @@ from __future__ import annotations
import warnings
from typing import Any, Dict, List, Optional
+from langchain_core._api import deprecated
from langchain_core.callbacks import CallbackManagerForChainRun
from langchain_core.language_models import BaseLanguageModel
+from langchain_core.output_parsers import StrOutputParser
from langchain_core.pydantic_v1 import root_validator
+from langchain_core.runnables import Runnable
from langchain.chains.base import Chain
-from langchain.chains.llm import LLMChain
from langchain.chains.natbot.prompt import PROMPT
+@deprecated(
+ since="0.2.13",
+ message=(
+ "Importing NatBotChain from langchain is deprecated and will be removed in "
+ "langchain 1.0. Please import from langchain_community instead: "
+ "from langchain_community.chains.natbot import NatBotChain. "
+ "You may need to pip install -U langchain-community."
+ ),
+ removal="1.0",
+)
class NatBotChain(Chain):
"""Implement an LLM driven browser.
@@ -37,7 +49,7 @@ class NatBotChain(Chain):
natbot = NatBotChain.from_default("Buy me a new hat.")
"""
- llm_chain: LLMChain
+ llm_chain: Runnable
objective: str
"""Objective that NatBot is tasked with completing."""
llm: Optional[BaseLanguageModel] = None
@@ -60,7 +72,7 @@ class NatBotChain(Chain):
"class method."
)
if "llm_chain" not in values and values["llm"] is not None:
- values["llm_chain"] = LLMChain(llm=values["llm"], prompt=PROMPT)
+ values["llm_chain"] = PROMPT | values["llm"] | StrOutputParser()
return values
@classmethod
@@ -77,7 +89,7 @@ class NatBotChain(Chain):
cls, llm: BaseLanguageModel, objective: str, **kwargs: Any
) -> NatBotChain:
"""Load from LLM."""
- llm_chain = LLMChain(llm=llm, prompt=PROMPT)
+ llm_chain = PROMPT | llm | StrOutputParser()
return cls(llm_chain=llm_chain, objective=objective, **kwargs)
@property
@@ -104,12 +116,14 @@ class NatBotChain(Chain):
_run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()
url = inputs[self.input_url_key]
browser_content = inputs[self.input_browser_content_key]
- llm_cmd = self.llm_chain.predict(
- objective=self.objective,
- url=url[:100],
- previous_command=self.previous_command,
- browser_content=browser_content[:4500],
- callbacks=_run_manager.get_child(),
+ llm_cmd = self.llm_chain.invoke(
+ {
+ "objective": self.objective,
+ "url": url[:100],
+ "previous_command": self.previous_command,
+ "browser_content": browser_content[:4500],
+ },
+ config={"callbacks": _run_manager.get_child()},
)
llm_cmd = llm_cmd.strip()
self.previous_command = llm_cmd
diff --git a/libs/langchain/langchain/retrievers/document_compressors/chain_extract.py b/libs/langchain/langchain/retrievers/document_compressors/chain_extract.py
index 95a56677cc4..cc86f2be49b 100644
--- a/libs/langchain/langchain/retrievers/document_compressors/chain_extract.py
+++ b/libs/langchain/langchain/retrievers/document_compressors/chain_extract.py
@@ -8,8 +8,9 @@ from typing import Any, Callable, Dict, Optional, Sequence, cast
from langchain_core.callbacks.manager import Callbacks
from langchain_core.documents import Document
from langchain_core.language_models import BaseLanguageModel
-from langchain_core.output_parsers import BaseOutputParser
+from langchain_core.output_parsers import BaseOutputParser, StrOutputParser
from langchain_core.prompts import PromptTemplate
+from langchain_core.runnables import Runnable
from langchain.chains.llm import LLMChain
from langchain.retrievers.document_compressors.base import BaseDocumentCompressor
@@ -49,12 +50,15 @@ class LLMChainExtractor(BaseDocumentCompressor):
"""Document compressor that uses an LLM chain to extract
the relevant parts of documents."""
- llm_chain: LLMChain
+ llm_chain: Runnable
"""LLM wrapper to use for compressing documents."""
get_input: Callable[[str, Document], dict] = default_get_input
"""Callable for constructing the chain input from the query and a Document."""
+ class Config:
+ arbitrary_types_allowed = True
+
def compress_documents(
self,
documents: Sequence[Document],
@@ -65,10 +69,13 @@ class LLMChainExtractor(BaseDocumentCompressor):
compressed_docs = []
for doc in documents:
_input = self.get_input(query, doc)
- output_dict = self.llm_chain.invoke(_input, config={"callbacks": callbacks})
- output = output_dict[self.llm_chain.output_key]
- if self.llm_chain.prompt.output_parser is not None:
- output = self.llm_chain.prompt.output_parser.parse(output)
+ output_ = self.llm_chain.invoke(_input, config={"callbacks": callbacks})
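+ # A legacy LLMChain returns a dict keyed by output_key; an LCEL pipeline returns the parsed string directly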
+ if isinstance(self.llm_chain, LLMChain):
+ output = output_[self.llm_chain.output_key]
+ if self.llm_chain.prompt.output_parser is not None:
+ output = self.llm_chain.prompt.output_parser.parse(output)
+ else:
+ output = output_
if len(output) == 0:
continue
compressed_docs.append(
@@ -85,9 +92,7 @@ class LLMChainExtractor(BaseDocumentCompressor):
"""Compress page content of raw documents asynchronously."""
outputs = await asyncio.gather(
*[
- self.llm_chain.apredict_and_parse(
- **self.get_input(query, doc), callbacks=callbacks
- )
+ self.llm_chain.ainvoke(self.get_input(query, doc), config={"callbacks": callbacks})
for doc in documents
]
)
@@ -111,5 +116,9 @@ class LLMChainExtractor(BaseDocumentCompressor):
"""Initialize from LLM."""
_prompt = prompt if prompt is not None else _get_default_chain_prompt()
_get_input = get_input if get_input is not None else default_get_input
- llm_chain = LLMChain(llm=llm, prompt=_prompt, **(llm_chain_kwargs or {}))
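+ # Reuse the prompt's own output parser when it has one; otherwise fall back to plain strings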
+ if _prompt.output_parser is not None:
+ parser = _prompt.output_parser
+ else:
+ parser = StrOutputParser()
+ llm_chain = _prompt | llm | parser
return cls(llm_chain=llm_chain, get_input=_get_input) # type: ignore[arg-type]
diff --git a/libs/langchain/langchain/retrievers/document_compressors/chain_filter.py b/libs/langchain/langchain/retrievers/document_compressors/chain_filter.py
index 1efaef7abf0..2db6f5be3a7 100644
--- a/libs/langchain/langchain/retrievers/document_compressors/chain_filter.py
+++ b/libs/langchain/langchain/retrievers/document_compressors/chain_filter.py
@@ -5,7 +5,9 @@ from typing import Any, Callable, Dict, Optional, Sequence
from langchain_core.callbacks.manager import Callbacks
from langchain_core.documents import Document
from langchain_core.language_models import BaseLanguageModel
+from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import BasePromptTemplate, PromptTemplate
+from langchain_core.runnables import Runnable
from langchain_core.runnables.config import RunnableConfig
from langchain.chains import LLMChain
@@ -32,13 +34,16 @@ def default_get_input(query: str, doc: Document) -> Dict[str, Any]:
class LLMChainFilter(BaseDocumentCompressor):
"""Filter that drops documents that aren't relevant to the query."""
- llm_chain: LLMChain
+ llm_chain: Runnable
"""LLM wrapper to use for filtering documents.
The chain prompt is expected to have a BooleanOutputParser."""
get_input: Callable[[str, Document], dict] = default_get_input
"""Callable for constructing the chain input from the query and a Document."""
+ class Config:
+ arbitrary_types_allowed = True
+
def compress_documents(
self,
documents: Sequence[Document],
@@ -56,11 +61,15 @@ class LLMChainFilter(BaseDocumentCompressor):
documents,
)
- for output_dict, doc in outputs:
+ for output_, doc in outputs:
include_doc = None
- output = output_dict[self.llm_chain.output_key]
- if self.llm_chain.prompt.output_parser is not None:
- include_doc = self.llm_chain.prompt.output_parser.parse(output)
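+ # A legacy LLMChain yields a dict to parse; a Runnable pipeline with a boolean-returning parser yields the bool directly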
+ if isinstance(self.llm_chain, LLMChain):
+ output = output_[self.llm_chain.output_key]
+ if self.llm_chain.prompt.output_parser is not None:
+ include_doc = self.llm_chain.prompt.output_parser.parse(output)
+ else:
+ if isinstance(output_, bool):
+ include_doc = output_
if include_doc:
filtered_docs.append(doc)
@@ -82,11 +91,15 @@ class LLMChainFilter(BaseDocumentCompressor):
),
documents,
)
- for output_dict, doc in outputs:
+ for output_, doc in outputs:
include_doc = None
- output = output_dict[self.llm_chain.output_key]
- if self.llm_chain.prompt.output_parser is not None:
- include_doc = self.llm_chain.prompt.output_parser.parse(output)
+ if isinstance(self.llm_chain, LLMChain):
+ output = output_[self.llm_chain.output_key]
+ if self.llm_chain.prompt.output_parser is not None:
+ include_doc = self.llm_chain.prompt.output_parser.parse(output)
+ else:
+ if isinstance(output_, bool):
+ include_doc = output_
if include_doc:
filtered_docs.append(doc)
@@ -110,5 +123,9 @@ class LLMChainFilter(BaseDocumentCompressor):
A LLMChainFilter that uses the given language model.
"""
_prompt = prompt if prompt is not None else _get_default_chain_prompt()
- llm_chain = LLMChain(llm=llm, prompt=_prompt)
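+ # Keep the prompt's own parser when present (the default filter prompt uses a BooleanOutputParser); otherwise fall back to plain strings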
+ if _prompt.output_parser is not None:
+ parser = _prompt.output_parser
+ else:
+ parser = StrOutputParser()
+ llm_chain = _prompt | llm | parser
return cls(llm_chain=llm_chain, **kwargs)
diff --git a/libs/langchain/langchain/retrievers/re_phraser.py b/libs/langchain/langchain/retrievers/re_phraser.py
index 5fdc47d10f2..55cb054e997 100644
--- a/libs/langchain/langchain/retrievers/re_phraser.py
+++ b/libs/langchain/langchain/retrievers/re_phraser.py
@@ -7,11 +7,11 @@ from langchain_core.callbacks import (
)
from langchain_core.documents import Document
from langchain_core.language_models import BaseLLM
+from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import BasePromptTemplate
from langchain_core.prompts.prompt import PromptTemplate
from langchain_core.retrievers import BaseRetriever
-
-from langchain.chains.llm import LLMChain
+from langchain_core.runnables import Runnable
logger = logging.getLogger(__name__)
@@ -30,7 +30,7 @@ class RePhraseQueryRetriever(BaseRetriever):
Then, retrieve docs for the re-phrased query."""
retriever: BaseRetriever
- llm_chain: LLMChain
+ llm_chain: Runnable
@classmethod
def from_llm(
@@ -51,8 +51,7 @@ class RePhraseQueryRetriever(BaseRetriever):
Returns:
RePhraseQueryRetriever
"""
-
- llm_chain = LLMChain(llm=llm, prompt=prompt)
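+ # Pipe the prompt into the LLM with a string parser, replacing the legacy LLMChain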
+ llm_chain = prompt | llm | StrOutputParser()
return cls(
retriever=retriever,
llm_chain=llm_chain,
@@ -72,8 +71,9 @@ class RePhraseQueryRetriever(BaseRetriever):
Returns:
Relevant documents for re-phrased question
"""
- response = self.llm_chain(query, callbacks=run_manager.get_child())
- re_phrased_question = response["text"]
+ re_phrased_question = self.llm_chain.invoke(
+ query, config={"callbacks": run_manager.get_child()}
+ )
logger.info(f"Re-phrased question: {re_phrased_question}")
docs = self.retriever.invoke(
re_phrased_question, config={"callbacks": run_manager.get_child()}
diff --git a/libs/langchain/tests/unit_tests/retrievers/document_compressors/test_chain_extract.py b/libs/langchain/tests/unit_tests/retrievers/document_compressors/test_chain_extract.py
new file mode 100644
index 00000000000..1e4afed1eec
--- /dev/null
+++ b/libs/langchain/tests/unit_tests/retrievers/document_compressors/test_chain_extract.py
@@ -0,0 +1,84 @@
+from langchain_core.documents import Document
+from langchain_core.language_models import FakeListChatModel
+
+from langchain.retrievers.document_compressors import LLMChainExtractor
+
+
+def test_llm_chain_extractor() -> None:
+ documents = [
+ Document(
+ page_content=(
+ "The sky is blue. Candlepin bowling is popular in New England."
+ ),
+ metadata={"a": 1},
+ ),
+ Document(
+ page_content=(
+ "Mercury is the closest planet to the Sun. "
+ "Candlepin bowling balls are smaller."
+ ),
+ metadata={"b": 2},
+ ),
+ Document(page_content="The moon is round.", metadata={"c": 3}),
+ ]
+ llm = FakeListChatModel(
+ responses=[
+ "Candlepin bowling is popular in New England.",
+ "Candlepin bowling balls are smaller.",
+ "NO_OUTPUT",
+ ]
+ )
+ doc_compressor = LLMChainExtractor.from_llm(llm)
+ output = doc_compressor.compress_documents(
+ documents, "Tell me about Candlepin bowling."
+ )
+ expected = [
+ Document(
+ page_content="Candlepin bowling is popular in New England.",
+ metadata={"a": 1},
+ ),
+ Document(
+ page_content="Candlepin bowling balls are smaller.", metadata={"b": 2}
+ ),
+ ]
+ assert output == expected
+
+
+async def test_llm_chain_extractor_async() -> None:
+ documents = [
+ Document(
+ page_content=(
+ "The sky is blue. Candlepin bowling is popular in New England."
+ ),
+ metadata={"a": 1},
+ ),
+ Document(
+ page_content=(
+ "Mercury is the closest planet to the Sun. "
+ "Candlepin bowling balls are smaller."
+ ),
+ metadata={"b": 2},
+ ),
+ Document(page_content="The moon is round.", metadata={"c": 3}),
+ ]
+ llm = FakeListChatModel(
+ responses=[
+ "Candlepin bowling is popular in New England.",
+ "Candlepin bowling balls are smaller.",
+ "NO_OUTPUT",
+ ]
+ )
+ doc_compressor = LLMChainExtractor.from_llm(llm)
+ output = await doc_compressor.acompress_documents(
+ documents, "Tell me about Candlepin bowling."
+ )
+ expected = [
+ Document(
+ page_content="Candlepin bowling is popular in New England.",
+ metadata={"a": 1},
+ ),
+ Document(
+ page_content="Candlepin bowling balls are smaller.", metadata={"b": 2}
+ ),
+ ]
+ assert output == expected
diff --git a/libs/langchain/tests/unit_tests/retrievers/document_compressors/test_chain_filter.py b/libs/langchain/tests/unit_tests/retrievers/document_compressors/test_chain_filter.py
new file mode 100644
index 00000000000..4020694afa6
--- /dev/null
+++ b/libs/langchain/tests/unit_tests/retrievers/document_compressors/test_chain_filter.py
@@ -0,0 +1,46 @@
+from langchain_core.documents import Document
+from langchain_core.language_models import FakeListChatModel
+
+from langchain.retrievers.document_compressors import LLMChainFilter
+
+
+def test_llm_chain_filter() -> None:
+ documents = [
+ Document(
+ page_content="Candlepin bowling is popular in New England.",
+ metadata={"a": 1},
+ ),
+ Document(
+ page_content="Candlepin bowling balls are smaller.",
+ metadata={"b": 2},
+ ),
+ Document(page_content="The moon is round.", metadata={"c": 3}),
+ ]
+ llm = FakeListChatModel(responses=["YES", "YES", "NO"])
+ doc_compressor = LLMChainFilter.from_llm(llm)
+ output = doc_compressor.compress_documents(
+ documents, "Tell me about Candlepin bowling."
+ )
+ expected = documents[:2]
+ assert output == expected
+
+
+async def test_llm_chain_filter_async() -> None:
+ documents = [
+ Document(
+ page_content="Candlepin bowling is popular in New England.",
+ metadata={"a": 1},
+ ),
+ Document(
+ page_content="Candlepin bowling balls are smaller.",
+ metadata={"b": 2},
+ ),
+ Document(page_content="The moon is round.", metadata={"c": 3}),
+ ]
+ llm = FakeListChatModel(responses=["YES", "YES", "NO"])
+ doc_compressor = LLMChainFilter.from_llm(llm)
+ output = await doc_compressor.acompress_documents(
+ documents, "Tell me about Candlepin bowling."
+ )
+ expected = documents[:2]
+ assert output == expected