Update Documentation: Corrected Typos and Improved Clarity (#11725)
Docs updates

---------

Co-authored-by: Advaya <126754021+bluevayes@users.noreply.github.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
commit 8fa960641a (parent e165daa0ae)
@@ -38,7 +38,7 @@ It uses the ReAct framework to decide which tool to use, and uses memory to reme
 ## [Self-ask with search](/docs/modules/agents/agent_types/self_ask_with_search)
 
 This agent utilizes a single tool that should be named `Intermediate Answer`.
-This tool should be able to lookup factual answers to questions. This agent
+This tool should be able to look up factual answers to questions. This agent
 is equivalent to the original [self-ask with search paper](https://ofir.io/self-ask.pdf),
 where a Google search API was provided as the tool.
 
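For context on the hunk above, a minimal sketch of how a self-ask agent is typically wired up, assuming a SerpAPIWrapper-backed search tool and the `initialize_agent` helper; the tool name `Intermediate Answer` is required by the agent, the rest is illustrative:

```python
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.llms import OpenAI
from langchain.utilities import SerpAPIWrapper

llm = OpenAI(temperature=0)
search = SerpAPIWrapper()  # assumes SERPAPI_API_KEY is set in the environment

tools = [
    Tool(
        name="Intermediate Answer",  # the self-ask agent expects exactly this tool name
        func=search.run,
        description="useful for when you need to ask with search",
    )
]

self_ask = initialize_agent(tools, llm, agent=AgentType.SELF_ASK_WITH_SEARCH, verbose=True)
self_ask.run("What is the hometown of the reigning men's U.S. Open champion?")
```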
@@ -46,7 +46,7 @@ where a Google search API was provided as the tool.
 
 This agent uses the ReAct framework to interact with a docstore. Two tools must
 be provided: a `Search` tool and a `Lookup` tool (they must be named exactly as so).
-The `Search` tool should search for a document, while the `Lookup` tool should lookup
+The `Search` tool should search for a document, while the `Lookup` tool should look up
 a term in the most recently found document.
 This agent is equivalent to the
 original [ReAct paper](https://arxiv.org/pdf/2210.03629.pdf), specifically the Wikipedia example.
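A sketch of the docstore setup described in this hunk, assuming the Wikipedia docstore and the `DocstoreExplorer` wrapper; the `Search` and `Lookup` tool names are required, everything else is illustrative:

```python
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.agents.react.base import DocstoreExplorer
from langchain.docstore import Wikipedia
from langchain.llms import OpenAI

docstore = DocstoreExplorer(Wikipedia())
tools = [
    Tool(
        name="Search",  # must be named exactly "Search"
        func=docstore.search,
        description="useful for when you need to find a document",
    ),
    Tool(
        name="Lookup",  # must be named exactly "Lookup"
        func=docstore.lookup,
        description="useful for when you need to look up a term in the current document",
    ),
]

llm = OpenAI(temperature=0)
react = initialize_agent(tools, llm, agent=AgentType.REACT_DOCSTORE, verbose=True)
react.run("Author David Chanoff has collaborated with which U.S. Navy admiral?")
```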
@ -1,4 +1,4 @@
|
||||
# Custom LLM agent
|
||||
# Custom LLM Agent
|
||||
|
||||
This notebook goes through how to create your own custom LLM agent.
|
||||
|
||||
|
@ -1,13 +1,13 @@
|
||||
# Custom LLM Agent (with a ChatModel)
|
||||
# Custom LLM Chat Agent
|
||||
|
||||
This notebook goes through how to create your own custom agent based on a chat model.
|
||||
This notebook explains how to create your own custom agent based on a chat model.
|
||||
|
||||
An LLM chat agent consists of three parts:
|
||||
An LLM chat agent consists of four key components:
|
||||
|
||||
- `PromptTemplate`: This is the prompt template that can be used to instruct the language model on what to do
|
||||
- `ChatModel`: This is the language model that powers the agent
|
||||
- `stop` sequence: Instructs the LLM to stop generating as soon as this string is found
|
||||
- `OutputParser`: This determines how to parse the LLM output into an `AgentAction` or `AgentFinish` object
|
||||
- `PromptTemplate`: This is the prompt template that instructs the language model on what to do.
|
||||
- `ChatModel`: This is the language model that powers the agent.
|
||||
- `stop` sequence: Instructs the LLM to stop generating as soon as this string is found.
|
||||
- `OutputParser`: This determines how to parse the LLM output into an `AgentAction` or `AgentFinish` object.
|
||||
|
||||
The LLM Agent is used in an `AgentExecutor`. This `AgentExecutor` can largely be thought of as a loop that:
|
||||
1. Passes user input and any previous steps to the Agent (in this case, the LLM Agent)
|
||||
|
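To make the `OutputParser` component listed in this hunk concrete, here is a minimal sketch of a custom parser that maps the model's text to `AgentAction` or `AgentFinish`; the `Final Answer:` / `Action:` text format is an assumed ReAct-style convention, not something fixed by the docs above:

```python
import re
from typing import Union

from langchain.agents import AgentOutputParser
from langchain.schema import AgentAction, AgentFinish


class CustomOutputParser(AgentOutputParser):
    def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]:
        # If the model declares a final answer, end the agent loop.
        if "Final Answer:" in llm_output:
            return AgentFinish(
                return_values={"output": llm_output.split("Final Answer:")[-1].strip()},
                log=llm_output,
            )
        # Otherwise expect "Action: <tool>\nAction Input: <input>".
        match = re.search(r"Action\s*:(.*?)\nAction\s*Input\s*:(.*)", llm_output, re.DOTALL)
        if not match:
            raise ValueError(f"Could not parse LLM output: `{llm_output}`")
        tool = match.group(1).strip()
        tool_input = match.group(2).strip(" ").strip('"')
        return AgentAction(tool=tool, tool_input=tool_input, log=llm_output)
```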
@ -3,7 +3,7 @@
|
||||
This walkthrough demonstrates how to replicate the [MRKL](https://arxiv.org/pdf/2205.00445.pdf) system using agents.
|
||||
|
||||
This uses the example Chinook database.
|
||||
To set it up follow the instructions on https://database.guide/2-sample-databases-sqlite/, placing the `.db` file in a notebooks folder at the root of this repository.
|
||||
To set it up, follow the instructions on https://database.guide/2-sample-databases-sqlite/ and place the `.db` file in a "notebooks" folder at the root of this repository.
|
||||
|
||||
```python
|
||||
from langchain.chains import LLMMathChain
|
||||
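As a rough sketch of the MRKL setup this walkthrough builds (a calculator tool backed by `LLMMathChain`; the SQL tool over the Chinook `.db` file is omitted here, and the tool names are illustrative):

```python
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.chains import LLMMathChain
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
llm_math_chain = LLMMathChain.from_llm(llm=llm, verbose=True)

tools = [
    Tool(
        name="Calculator",
        func=llm_math_chain.run,
        description="useful for when you need to answer questions about math",
    ),
    # a SQL tool over the Chinook database would be added here as well
]

mrkl = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
mrkl.run("What is 25 raised to the 0.43 power?")
```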
@@ -127,7 +127,7 @@ mrkl.run("What is the full name of the artist who recently released an album cal
 
 </CodeOutputBlock>
 
-## With a chat model
+## Using a Chat Model
 
 ```python
 from langchain.chat_models import ChatOpenAI
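The chat-model section referenced in this hunk swaps the LLM for a chat model; a minimal sketch, assuming the chat-optimized zero-shot agent type and the same calculator-style tool as above:

```python
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.chains import LLMMathChain
from langchain.chat_models import ChatOpenAI

chat = ChatOpenAI(temperature=0)
llm_math_chain = LLMMathChain.from_llm(llm=chat, verbose=True)
tools = [
    Tool(
        name="Calculator",
        func=llm_math_chain.run,
        description="useful for when you need to answer questions about math",
    ),
]

mrkl_chat = initialize_agent(tools, chat, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
mrkl_chat.run("What is 25 raised to the 0.43 power?")
```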
@ -4,17 +4,17 @@ sidebar_position: 2
|
||||
# Tools
|
||||
|
||||
:::info
|
||||
Head to [Integrations](/docs/integrations/tools/) for documentation on built-in tool integrations.
|
||||
For documentation on built-in tool integrations, visit [Integrations](/docs/integrations/tools/).
|
||||
:::
|
||||
|
||||
Tools are interfaces that an agent can use to interact with the world.
|
||||
|
||||
## Get started
|
||||
## Getting Started
|
||||
|
||||
Tools are functions that agents can use to interact with the world.
|
||||
These tools can be generic utilities (e.g. search), other chains, or even other agents.
|
||||
|
||||
Currently, tools can be loaded with the following snippet:
|
||||
Currently, tools can be loaded using the following snippet:
|
||||
|
||||
```python
|
||||
from langchain.agents import load_tools
|
||||
|
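A short sketch of the `load_tools` snippet this hunk leads into; `"llm-math"` is one of the built-in tool names and needs an LLM passed in, the rest is illustrative:

```python
from langchain.agents import load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)

# "llm-math" is a built-in tool name; some tools (like this one) require an LLM
tools = load_tools(["llm-math"], llm=llm)
print([tool.name for tool in tools])
```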
@ -4,7 +4,7 @@ sidebar_position: 3
|
||||
# Toolkits
|
||||
|
||||
:::info
|
||||
Head to [Integrations](/docs/integrations/toolkits/) for documentation on built-in toolkit integrations.
|
||||
For documentation on built-in toolkit integrations, visit [Integrations](/docs/integrations/toolkits/).
|
||||
:::
|
||||
|
||||
Toolkits are collections of tools that are designed to be used together for specific tasks and have convenience loading methods.
|
||||
Toolkits are collections of tools that are designed to be used together for specific tasks and have convenient loading methods.
|
||||
|
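To illustrate the "convenient loading methods" mentioned in this hunk, a minimal sketch using the SQL toolkit over the Chinook database from the MRKL walkthrough; the file path is an assumption:

```python
from langchain.agents.agent_toolkits import SQLDatabaseToolkit, create_sql_agent
from langchain.llms import OpenAI
from langchain.sql_database import SQLDatabase

db = SQLDatabase.from_uri("sqlite:///notebooks/Chinook.db")  # assumed path to the Chinook .db file
llm = OpenAI(temperature=0)

# The toolkit bundles the SQL tools; create_sql_agent wires them into an agent executor.
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
agent_executor = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True)
agent_executor.run("How many employees are there?")
```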