docs[patch]: Fix docs bugs in response to feedback (#23649)

- Update Meta Llama 3 cookbook link
- Add prereq section and information on `messages_modifier` to LangGraph
migration guide
- Update `PydanticToolsParser` explanation and entrypoint in tool
calling guide
- Add more obvious warning to `OllamaFunctions`
- Fix Wikidata tool install flow
- Update Bedrock LLM initialization

@baskaryan can you add a bit of information on how to authenticate into
the `ChatBedrock` and `BedrockLLM` models? I wasn't able to figure it
out :(
Jacob Lee 2024-06-28 17:24:55 -07:00 committed by GitHub
parent af2c05e5f3
commit 72175c57bd
8 changed files with 91 additions and 62 deletions

View File

@@ -1055,7 +1055,7 @@ See several videos and cookbooks showcasing RAG with LangGraph:
- [Cookbooks for RAG using LangGraph](https://github.com/langchain-ai/langgraph/tree/main/examples/rag)
See our LangGraph RAG recipes with partners:
- [Meta](https://github.com/meta-llama/llama-recipes/tree/main/recipes/use_cases/agents/langchain)
- [Meta](https://github.com/meta-llama/llama-recipes/tree/main/recipes/3p_integrations/langchain)
- [Mistral](https://github.com/mistralai/cookbook/tree/main/third_party/langchain)
:::

View File

@@ -21,7 +21,16 @@
"source": [
"# How to migrate from legacy LangChain agents to LangGraph\n",
"\n",
"Here we focus on how to move from legacy LangChain agents to LangGraph agents.\n",
":::info Prerequisites\n",
"\n",
"This guide assumes familiarity with the following concepts:\n",
"- [Agents](/docs/concepts/#agents)\n",
"- [LangGraph](https://langchain-ai.github.io/langgraph/)\n",
"- [Tool calling](/docs/how_to/tool_calling/)\n",
"\n",
":::\n",
"\n",
"Here we focus on how to move from legacy LangChain agents to more flexible [LangGraph](https://langchain-ai.github.io/langgraph/) agents.\n",
"LangChain agents (the [AgentExecutor](https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.AgentExecutor.html#langchain.agents.agent.AgentExecutor) in particular) have multiple configuration parameters.\n",
"In this notebook we will show how those parameters map to the LangGraph react agent executor using the [create_react_agent](https://langchain-ai.github.io/langgraph/reference/prebuilt/#create_react_agent) prebuilt helper method.\n",
"\n",
@@ -209,7 +218,7 @@
"\n",
"Let's take a look at all of these below. We will pass in custom instructions to get the agent to respond in Spanish.\n",
"\n",
"First up, using AgentExecutor:"
"First up, using `AgentExecutor`:"
]
},
{
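As a point of comparison, a hedged sketch of the legacy pattern, reusing `model` and `tools` from the sketch above; the prompt wording is an assumption:

```python
# Hedged sketch: legacy AgentExecutor with custom instructions in the prompt.
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant. Respond only in Spanish."),
        ("human", "{input}"),
        # agent_scratchpad holds the agent's intermediate tool calls/results
        ("placeholder", "{agent_scratchpad}"),
    ]
)

agent = create_tool_calling_agent(model, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)
agent_executor.invoke({"input": "what is the value of magic_function(3)?"})
```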
@@ -252,7 +261,16 @@
"id": "bd5f5500-5ae4-4000-a9fd-8c5a2cc6404d",
"metadata": {},
"source": [
"Now, let's pass a custom system message to [react agent executor](https://langchain-ai.github.io/langgraph/reference/prebuilt/#create_react_agent). This can either be a string or a LangChain SystemMessage."
"Now, let's pass a custom system message to [react agent executor](https://langchain-ai.github.io/langgraph/reference/prebuilt/#create_react_agent).\n",
"\n",
"LangGraph's prebuilt `create_react_agent` does not take a prompt template directly as a parameter, but instead takes a [`messages_modifier`](https://langchain-ai.github.io/langgraph/reference/prebuilt/#create_react_agent) parameter. This modifies messages before they are passed into the model, and can be one of four values:\n",
"\n",
"- A `SystemMessage`, which is added to the beginning of the list of messages.\n",
"- A `string`, which is converted to a `SystemMessage` and added to the beginning of the list of messages.\n",
"- A `Callable`, which should take in a list of messages. The output is then passed to the language model.\n",
"- Or a [`Runnable`](/docs/concepts/#langchain-expression-language-lcel), which should should take in a list of messages. The output is then passed to the language model.\n",
"\n",
"Here's how it looks in action:"
]
},
{
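Hedged sketches of the four accepted forms, again reusing `model` and `tools` from the sketches above; the exact instruction wording is illustrative:

```python
# Hedged sketch: the four accepted messages_modifier values.
from langchain_core.messages import SystemMessage
from langchain_core.runnables import RunnableLambda
from langgraph.prebuilt import create_react_agent

# 1. A SystemMessage, prepended to the message list.
app = create_react_agent(
    model, tools, messages_modifier=SystemMessage(content="Respond only in Spanish.")
)

# 2. A bare string, converted to a SystemMessage and prepended.
app = create_react_agent(model, tools, messages_modifier="Respond only in Spanish.")


# 3. A callable over the full message list; its return value goes to the model.
def add_instructions(messages):
    return [SystemMessage(content="Respond only in Spanish.")] + messages


app = create_react_agent(model, tools, messages_modifier=add_instructions)

# 4. A Runnable over the message list, e.g. wrapping the callable above.
app = create_react_agent(
    model, tools, messages_modifier=RunnableLambda(add_instructions)
)
```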
@@ -1226,6 +1244,18 @@
"except GraphRecursionError as e:\n",
" print(\"Stopping agent prematurely due to triggering stop condition\")"
]
},
{
"cell_type": "markdown",
"id": "41377eb8",
"metadata": {},
"source": [
"## Next steps\n",
"\n",
"You've now learned how to migrate your LangChain agent executors to LangGraph.\n",
"\n",
"Next, check out other [LangGraph how-to guides](https://langchain-ai.github.io/langgraph/how-tos/)."
]
}
],
"metadata": {

View File

@@ -24,6 +24,7 @@
"This guide assumes familiarity with the following concepts:\n",
"- [Chat models](/docs/concepts/#chat-models)\n",
"- [LangChain Tools](/docs/concepts/#tools)\n",
"- [Output parsers](/docs/concepts/#output-parsers)\n",
"\n",
":::\n",
"\n",
@@ -51,6 +52,12 @@
"parameters matching the desired schema, then treat the generated output as your final \n",
"result.\n",
"\n",
":::note\n",
"\n",
"If you only need formatted values, try the [.with_structured_output()](/docs/how_to/structured_output/#the-with_structured_output-method) chat model method as a simpler entrypoint.\n",
"\n",
":::\n",
"\n",
"However, tool calling goes beyond [structured output](/docs/how_to/structured_output/)\n",
"since you can pass responses from called tools back to the model to create longer interactions.\n",
"For instance, given a search engine tool, an LLM might handle a \n",
@@ -85,7 +92,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
@@ -116,7 +123,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
@@ -164,7 +171,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
@@ -185,16 +192,16 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_z9fCZJPE8AmtnIiD08Q6Najb', 'function': {'arguments': '{\"a\":3,\"b\":12}', 'name': 'Multiply'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 18, 'prompt_tokens': 95, 'total_tokens': 113}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-7564c56a-3760-41f0-94fa-5c646bb11e60-0', tool_calls=[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_z9fCZJPE8AmtnIiD08Q6Najb'}], usage_metadata={'input_tokens': 95, 'output_tokens': 18, 'total_tokens': 113})"
"AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_g4RuAijtDcSeM96jXyCuiLSN', 'function': {'arguments': '{\"a\":3,\"b\":12}', 'name': 'Multiply'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 18, 'prompt_tokens': 95, 'total_tokens': 113}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-5157d15a-7e0e-4ab1-af48-3d98010cd152-0', tool_calls=[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_g4RuAijtDcSeM96jXyCuiLSN'}], usage_metadata={'input_tokens': 95, 'output_tokens': 18, 'total_tokens': 113})"
]
},
"execution_count": 8,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
@@ -235,7 +242,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 5,
"metadata": {},
"outputs": [
{
@@ -243,14 +250,15 @@
"text/plain": [
"[{'name': 'Multiply',\n",
" 'args': {'a': 3, 'b': 12},\n",
" 'id': 'call_KquHA7mSbgtAkpkmRPaFnJKa'},\n",
" 'id': 'call_TnadLbWJu9HwDULRb51RNSMw'},\n",
" {'name': 'Add',\n",
" 'args': {'a': 11, 'b': 49},\n",
" 'id': 'call_Fl0hQi4IBTzlpaJYlM5kPQhE'}]"
" 'id': 'call_Q9vt1up05sOQScXvUYWzSpCg'}]"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "display_data"
"output_type": "execute_result"
}
],
"source": [
@@ -271,12 +279,13 @@
"a name, string arguments, identifier, and error message.\n",
"\n",
"If desired, [output parsers](/docs/how_to#output-parsers) can further \n",
"process the output. For example, we can convert back to the original Pydantic class:"
"process the output. For example, we can convert existing values populated on the `.tool_calls` attribute back to the original Pydantic class using the\n",
"[PydanticToolsParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.openai_tools.PydanticToolsParser.html):"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 6,
"metadata": {},
"outputs": [
{
@@ -285,12 +294,13 @@
"[Multiply(a=3, b=12), Add(a=11, b=49)]"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "display_data"
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.output_parsers.openai_tools import PydanticToolsParser\n",
"from langchain_core.output_parsers import PydanticToolsParser\n",
"\n",
"chain = llm_with_tools | PydanticToolsParser(tools=[Multiply, Add])\n",
"chain.invoke(query)"
@@ -333,7 +343,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.10.5"
}
},
"nbformat": 4,

View File

@@ -29,7 +29,7 @@
"> your data using techniques such as fine-tuning and `Retrieval Augmented Generation` (`RAG`), and build \n",
"> agents that execute tasks using your enterprise systems and data sources. Since `Amazon Bedrock` is \n",
"> serverless, you don't have to manage any infrastructure, and you can securely integrate and deploy \n",
"> generative AI capabilities into your applications using the AWS services you are already familiar with.\n"
"> generative AI capabilities into your applications using the AWS services you are already familiar with."
]
},
{
@@ -47,7 +47,7 @@
}
],
"source": [
"%pip install --upgrade --quiet langchain-aws"
"%pip install --upgrade --quiet langchain-aws"
]
},
{

View File

@@ -20,6 +20,12 @@
"Note that more powerful and capable models will perform better with complex schema and/or multiple functions. The examples below use llama3 and phi3 models.\n",
"For a complete list of supported models and model variants, see the [Ollama model library](https://ollama.ai/library).\n",
"\n",
":::warning\n",
"\n",
"This is an experimental wrapper that attempts to bolt-on tool calling support to models that do not natively support it. Use with caution.\n",
"\n",
":::\n",
"\n",
"## Setup\n",
"\n",
"Follow [these instructions](https://github.com/jmorganca/ollama) to set up and run a local Ollama instance.\n",

View File

@@ -34,7 +34,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet boto3"
"%pip install --upgrade --quiet langchain_aws"
]
},
{
@@ -45,9 +45,9 @@
},
"outputs": [],
"source": [
"from langchain_community.llms import Bedrock\n",
"from langchain_aws import BedrockLLM\n",
"\n",
"llm = Bedrock(\n",
"llm = BedrockLLM(\n",
" credentials_profile_name=\"bedrock-admin\", model_id=\"amazon.titan-text-express-v1\"\n",
")"
]
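On the authentication question raised in the description: a hedged sketch based on standard `boto3` behavior, since `langchain-aws` delegates credential resolution to it; the region and model id are illustrative:

```python
# Hedged sketch: BedrockLLM resolves credentials via the boto3 chain:
#   1. AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY (and AWS_SESSION_TOKEN) env vars
#   2. a named profile in ~/.aws/credentials, via credentials_profile_name
#   3. an attached IAM role when running on AWS infrastructure
from langchain_aws import BedrockLLM

llm = BedrockLLM(
    model_id="amazon.titan-text-express-v1",
    region_name="us-east-1",  # assumption: or set AWS_DEFAULT_REGION instead
)
print(llm.invoke("Tell me a joke about polar bears."))
```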
@@ -65,7 +65,7 @@
"metadata": {},
"outputs": [],
"source": [
"custom_llm = Bedrock(\n",
"custom_llm = BedrockLLM(\n",
" credentials_profile_name=\"bedrock-admin\",\n",
" provider=\"cohere\",\n",
" model_id=\"<Custom model ARN>\", # ARN like 'arn:aws:bedrock:...' obtained via provisioning the custom model\n",
@@ -108,7 +108,7 @@
"\n",
"\n",
"# Guardrails for Amazon Bedrock with trace\n",
"llm = Bedrock(\n",
"llm = BedrockLLM(\n",
" credentials_profile_name=\"bedrock-admin\",\n",
" model_id=\"<Model_ID>\",\n",
" model_kwargs={},\n",
@@ -134,7 +134,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.7"
"version": "3.10.5"
}
},
"nbformat": 4,

View File

@@ -18,16 +18,6 @@ pip install langchain-community boto3
### Bedrock Chat
See a [usage example](/docs/integrations/chat/bedrock).
```python
from langchain_aws import ChatBedrock
```
## LLMs
### Bedrock
>[Amazon Bedrock](https://aws.amazon.com/bedrock/) is a fully managed service that offers a choice of
> high-performing foundation models (FMs) from leading AI companies like `AI21 Labs`, `Anthropic`, `Cohere`,
> `Meta`, `Stability AI`, and `Amazon` via a single API, along with a broad set of capabilities you need to
@@ -38,6 +28,15 @@ from langchain_aws import ChatBedrock
> serverless, you don't have to manage any infrastructure, and you can securely integrate and deploy
> generative AI capabilities into your applications using the AWS services you are already familiar with.
See a [usage example](/docs/integrations/chat/bedrock).
```python
from langchain_aws import ChatBedrock
```
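
A hedged sketch of the chat model, which uses the same boto3 credential chain as the LLM class; the model id and profile name are illustrative:

```python
# Hedged sketch: ChatBedrock authenticates like BedrockLLM (boto3 chain).
from langchain_aws import ChatBedrock

chat = ChatBedrock(
    model_id="anthropic.claude-3-sonnet-20240229-v1:0",
    credentials_profile_name="bedrock-admin",  # assumption: named AWS profile
)
chat.invoke("Sing a ballad of LangChain.")
```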
## LLMs
### Bedrock
See a [usage example](/docs/integrations/llms/bedrock).

View File

@@ -23,12 +23,12 @@
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet wikibase-rest-api-client mediawikiapi"
"%pip install --upgrade --quiet \"wikibase-rest-api-client<0.2\" mediawikiapi"
]
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 2,
"id": "955988a1-ebc2-4c9a-9298-c493fe842de1",
"metadata": {
"execution": {
@@ -39,26 +39,6 @@
"shell.execute_reply.started": "2024-03-06T22:55:15.973114Z"
}
},
"outputs": [],
"source": [
"from langchain_community.tools.wikidata.tool import WikidataAPIWrapper, WikidataQueryRun\n",
"\n",
"wikidata = WikidataQueryRun(api_wrapper=WikidataAPIWrapper())"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "9926a8a7-3e4e-4a97-ba43-7e5a274b9561",
"metadata": {
"execution": {
"iopub.execute_input": "2024-03-06T22:54:38.551998Z",
"iopub.status.busy": "2024-03-06T22:54:38.551266Z",
"iopub.status.idle": "2024-03-06T22:54:51.913177Z",
"shell.execute_reply": "2024-03-06T22:54:51.911636Z",
"shell.execute_reply.started": "2024-03-06T22:54:38.551955Z"
}
},
"outputs": [
{
"name": "stdout",
@@ -77,7 +57,7 @@
"sport: athletics\n",
"place of birth: Maida Vale, Warrington Lodge\n",
"educated at: King's College, Princeton University, Sherborne School, Hazlehurst Community Primary School\n",
"employer: Victoria University of Manchester, Government Communications Headquarters, University of Cambridge, National Physical Laboratory\n",
"employer: Victoria University of Manchester, Government Communications Headquarters, University of Cambridge, National Physical Laboratory (United Kingdom)\n",
"place of death: Wilmslow\n",
"field of work: cryptanalysis, computer science, mathematics, logic, cryptography\n",
"cause of death: cyanide poisoning\n",
@@ -98,13 +78,17 @@
}
],
"source": [
"from langchain_community.tools.wikidata.tool import WikidataAPIWrapper, WikidataQueryRun\n",
"\n",
"wikidata = WikidataQueryRun(api_wrapper=WikidataAPIWrapper())\n",
"\n",
"print(wikidata.run(\"Alan Turing\"))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2762aa55-92bd-4e50-b433-8c5c37da465f",
"id": "7188d62f",
"metadata": {},
"outputs": [],
"source": []
@@ -126,7 +110,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.10.5"
}
},
"nbformat": 4,