mirror of
https://github.com/hwchase17/langchain.git
synced 2025-09-08 06:23:20 +00:00
docs: agents & callbacks fixes (#10066)
Various improvements to the Agents & Callbacks sections of the documentation, including formatting, spelling, and grammar fixes to improve readability.
@@ -1,6 +1,8 @@
-Install openai,google-search-results packages which are required as the langchain packages call them internally
+Install `openai`, `google-search-results` packages which are required as the LangChain packages call them internally.

->pip install openai google-search-results
+```bash
+pip install openai google-search-results
+```

 ```python
 from langchain import LLMMathChain, OpenAI, SerpAPIWrapper, SQLDatabase, SQLDatabaseChain
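For context, the hunk's imports feed a tool list that the diff elides. A minimal sketch of that wiring against the era-appropriate API shown above (the tool names and descriptions are illustrative, and API keys are assumed to be set in the environment):

```python
from langchain import LLMMathChain, OpenAI, SerpAPIWrapper
from langchain.agents import Tool

llm = OpenAI(temperature=0)          # assumes OPENAI_API_KEY is set
search = SerpAPIWrapper()            # assumes SERPAPI_API_KEY is set
llm_math_chain = LLMMathChain.from_llm(llm=llm, verbose=True)

tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events",
    ),
    Tool(
        name="Calculator",
        func=llm_math_chain.run,
        description="useful for when you need to answer questions about math",
    ),
]
```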
@@ -53,7 +53,7 @@ executor = load_agent_executor(model, tools, verbose=True)
 agent = PlanAndExecute(planner=planner, executor=executor, verbose=True)
 ```

-## Run Example
+## Run example


 ```python
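The `planner` and `executor` referenced in this hunk are constructed a few lines earlier in the guide. A sketch of the full setup, assuming the `langchain_experimental.plan_and_execute` helpers this page documents and the `tools` list from the sketch above:

```python
from langchain.chat_models import ChatOpenAI
from langchain_experimental.plan_and_execute import (
    PlanAndExecute,
    load_agent_executor,
    load_chat_planner,
)

model = ChatOpenAI(temperature=0)
planner = load_chat_planner(model)                          # decides the steps
executor = load_agent_executor(model, tools, verbose=True)  # runs each step with tools
agent = PlanAndExecute(planner=planner, executor=executor, verbose=True)

agent.run("What is the population of Canada multiplied by 2?")  # illustrative query
```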
@@ -202,7 +202,7 @@ print(response)

 ## Adding in memory

-Here is how you add in memory to this agent
+Here is how you add in memory to this agent:


 ```python
@@ -1,12 +1,12 @@
 This will go over how to get started building an agent.
 We will use a LangChain agent class, but show how to customize it to give it specific context.
-We will then define custom tools, and then run it all in the standard LangChain AgentExecutor.
+We will then define custom tools, and then run it all in the standard LangChain `AgentExecutor`.

 ### Set up the agent

-We will use the OpenAIFunctionsAgent.
+We will use the `OpenAIFunctionsAgent`.
 This is easiest and best agent to get started with.
-It does however require usage of ChatOpenAI models.
+It does however require usage of `ChatOpenAI` models.
 If you want to use a different language model, we would recommend using the [ReAct](/docs/modules/agents/agent_types/react) agent.

 For this guide, we will construct a custom agent that has access to a custom tool.
@@ -40,7 +40,7 @@ tools = [get_word_length]

 Now let us create the prompt.
 We can use the `OpenAIFunctionsAgent.create_prompt` helper function to create a prompt automatically.
-This allows for a few different ways to customize, including passing in a custom SystemMessage, which we will do.
+This allows for a few different ways to customize, including passing in a custom `SystemMessage`, which we will do.

 ```python
 from langchain.schema import SystemMessage
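Putting the hunk together, the prompt construction reads roughly as follows (a sketch; the system message wording is illustrative):

```python
from langchain.agents import OpenAIFunctionsAgent
from langchain.schema import SystemMessage

system_message = SystemMessage(
    content="You are a very powerful assistant, but bad at calculating lengths of words."
)
# create_prompt accepts a custom SystemMessage, as the text above describes
prompt = OpenAIFunctionsAgent.create_prompt(system_message=system_message)
```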
@@ -55,7 +55,7 @@ Putting those pieces together, we can now create the agent.
 agent = OpenAIFunctionsAgent(llm=llm, tools=tools, prompt=prompt)
 ```

-Finally, we create the AgentExecutor - the runtime for our agent.
+Finally, we create the `AgentExecutor` - the runtime for our agent.

 ```python
 from langchain.agents import AgentExecutor
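The wiring that follows this import is short. A sketch, assuming the `agent` and `tools` built in the preceding steps:

```python
from langchain.agents import AgentExecutor

# The executor is the runtime loop: call the agent, run the chosen tool, repeat.
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.run("How many letters are in the word 'education'?")  # illustrative input
```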
@@ -97,7 +97,7 @@ Let's fix that by adding in memory.
 In order to do this, we need to do two things:

 1. Add a place for memory variables to go in the prompt
-2. Add memory to the AgentExecutor (note that we add it here, and NOT to the agent, as this is the outermost chain)
+2. Add memory to the `AgentExecutor` (note that we add it here, and NOT to the agent, as this is the outermost chain)

 First, let's add a place for memory in the prompt.
 We do this by adding a placeholder for messages with the key `"chat_history"`.
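Concretely, the two steps map onto code roughly like this (a sketch reusing the `llm`, `tools`, and `system_message` from earlier; the `"chat_history"` key matches the placeholder described above):

```python
from langchain.agents import AgentExecutor, OpenAIFunctionsAgent
from langchain.memory import ConversationBufferMemory
from langchain.prompts import MessagesPlaceholder

MEMORY_KEY = "chat_history"

# 1. A slot in the prompt where past messages get injected
prompt = OpenAIFunctionsAgent.create_prompt(
    system_message=system_message,
    extra_prompt_messages=[MessagesPlaceholder(variable_name=MEMORY_KEY)],
)

# 2. Memory attached to the AgentExecutor (the outermost chain), not the agent
memory = ConversationBufferMemory(memory_key=MEMORY_KEY, return_messages=True)
agent = OpenAIFunctionsAgent(llm=llm, tools=tools, prompt=prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, memory=memory, verbose=True)
```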
@@ -1,5 +1,5 @@
-The LLMAgent is used in an AgentExecutor. This AgentExecutor can largely be thought of as a loop that:
-1. Passes user input and any previous steps to the Agent (in this case, the LLMAgent)
+The LLM Agent is used in an `AgentExecutor`. This `AgentExecutor` can largely be thought of as a loop that:
+1. Passes user input and any previous steps to the Agent (in this case, the LLM Agent)
 2. If the Agent returns an `AgentFinish`, then return that directly to the user
 3. If the Agent returns an `AgentAction`, then use that to call a tool and get an `Observation`
 4. Repeat, passing the `AgentAction` and `Observation` back to the Agent until an `AgentFinish` is emitted.
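The four-step loop can be sketched in a few lines of pseudocode (not the actual `AgentExecutor` implementation, just an illustration of the control flow described above):

```python
from langchain.schema import AgentAction, AgentFinish

def run_agent_loop(agent, tools, user_input):
    """Illustrative version of the AgentExecutor loop."""
    intermediate_steps = []                        # (AgentAction, observation) pairs
    name_to_tool = {t.name: t for t in tools}
    while True:
        # 1. Pass user input and any previous steps to the Agent
        output = agent.plan(intermediate_steps, input=user_input)
        # 2. AgentFinish: return the result directly to the user
        if isinstance(output, AgentFinish):
            return output.return_values
        # 3. AgentAction: call the chosen tool to get an Observation
        assert isinstance(output, AgentAction)
        observation = name_to_tool[output.tool].run(output.tool_input)
        # 4. Feed the (action, observation) pair back in and repeat
        intermediate_steps.append((output, observation))
```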
@@ -43,7 +43,7 @@ tools = [
 ]
 ```

-## Prompt Template
+## Prompt template

 This instructs the agent on what to do. Generally, the template should incorporate:

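The elided template and `CustomPromptTemplate` class look roughly like this (a sketch close to what the guide defines; the ReAct-style template wording is the guide's convention):

```python
from typing import List
from langchain.agents import Tool
from langchain.prompts import StringPromptTemplate

template = """Answer the following questions as best you can. You have access to the following tools:

{tools}

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Begin!

Question: {input}
{agent_scratchpad}"""

class CustomPromptTemplate(StringPromptTemplate):
    template: str
    tools: List[Tool]

    def format(self, **kwargs) -> str:
        # Render the (action, observation) pairs so far into the scratchpad
        intermediate_steps = kwargs.pop("intermediate_steps")
        thoughts = ""
        for action, observation in intermediate_steps:
            thoughts += action.log
            thoughts += f"\nObservation: {observation}\nThought: "
        kwargs["agent_scratchpad"] = thoughts
        kwargs["tools"] = "\n".join(f"{t.name}: {t.description}" for t in self.tools)
        kwargs["tool_names"] = ", ".join(t.name for t in self.tools)
        return self.template.format(**kwargs)

prompt = CustomPromptTemplate(
    template=template,
    tools=tools,
    input_variables=["input", "intermediate_steps"],
)
```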
@@ -112,11 +112,11 @@ prompt = CustomPromptTemplate(
 )
 ```

-## Output Parser
+## Output parser

 The output parser is responsible for parsing the LLM output into `AgentAction` and `AgentFinish`. This usually depends heavily on the prompt used.

-This is where you can change the parsing to do retries, handle whitespace, etc
+This is where you can change the parsing to do retries, handle whitespace, etc.


 ```python
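A sketch of such a parser, close to what the guide defines (the regex assumes the ReAct-style `Action:`/`Action Input:` format from the prompt template above):

```python
import re
from typing import Union
from langchain.agents import AgentOutputParser
from langchain.schema import AgentAction, AgentFinish

class CustomOutputParser(AgentOutputParser):
    def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]:
        # The agent is finished once it emits a "Final Answer:" line
        if "Final Answer:" in llm_output:
            return AgentFinish(
                return_values={"output": llm_output.split("Final Answer:")[-1].strip()},
                log=llm_output,
            )
        # Otherwise expect an "Action: <tool>" / "Action Input: <input>" pair
        regex = r"Action\s*\d*\s*:(.*?)\nAction\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)"
        match = re.search(regex, llm_output, re.DOTALL)
        if not match:
            raise ValueError(f"Could not parse LLM output: `{llm_output}`")
        action = match.group(1).strip()
        action_input = match.group(2).strip(" ").strip('"')
        return AgentAction(tool=action, tool_input=action_input, log=llm_output)

output_parser = CustomOutputParser()
```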
@@ -164,7 +164,7 @@ This depends heavily on the prompt and model you are using. Generally, you want

 ## Set up the Agent

-We can now combine everything to set up our agent
+We can now combine everything to set up our agent:


 ```python
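The combination step is short. A sketch, assuming the `prompt`, `output_parser`, and `tools` built in the sections above:

```python
from langchain.agents import AgentExecutor, LLMSingleActionAgent
from langchain.chains import LLMChain
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
llm_chain = LLMChain(llm=llm, prompt=prompt)

agent = LLMSingleActionAgent(
    llm_chain=llm_chain,
    output_parser=output_parser,
    stop=["\nObservation:"],              # stop generation once a tool is "called"
    allowed_tools=[t.name for t in tools],
)
agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)
```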
@@ -225,7 +225,7 @@ agent_executor.run("How many people live in canada as of 2023?")

 If you want to add memory to the agent, you'll need to:

-1. Add a place in the custom prompt for the chat_history
+1. Add a place in the custom prompt for the `chat_history`
 2. Add a memory object to the agent executor.

@@ -1,5 +1,5 @@
-The LLMAgent is used in an AgentExecutor. This AgentExecutor can largely be thought of as a loop that:
-1. Passes user input and any previous steps to the Agent (in this case, the LLMAgent)
+The LLM Agent is used in an `AgentExecutor`. This `AgentExecutor` can largely be thought of as a loop that:
+1. Passes user input and any previous steps to the Agent (in this case, the LLM Agent)
 2. If the Agent returns an `AgentFinish`, then return that directly to the user
 3. If the Agent returns an `AgentAction`, then use that to call a tool and get an `Observation`
 4. Repeat, passing the `AgentAction` and `Observation` back to the Agent until an `AgentFinish` is emitted.
@@ -35,7 +35,7 @@ import re
 from getpass import getpass
 ```

-## Set up tool
+## Set up tools

 Set up any tools the agent may want to use. This may be necessary to put in the prompt (so that the agent knows to use these tools).

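The tool list that follows looks roughly like this (a sketch; the single `Search` tool mirrors the guide's setup, and a SerpAPI key is assumed):

```python
from langchain.agents import Tool
from langchain.utilities import SerpAPIWrapper

search = SerpAPIWrapper()  # assumes SERPAPI_API_KEY is set
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events",
    )
]
```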
@@ -57,7 +57,7 @@ tools = [
 ]
 ```

-## Prompt Template
+## Prompt template

 This instructs the agent on what to do. Generally, the template should incorporate:

@@ -131,11 +131,11 @@ prompt = CustomPromptTemplate(
 )
 ```

-## Output Parser
+## Output parser

 The output parser is responsible for parsing the LLM output into `AgentAction` and `AgentFinish`. This usually depends heavily on the prompt used.

-This is where you can change the parsing to do retries, handle whitespace, etc
+This is where you can change the parsing to do retries, handle whitespace, etc.


 ```python
@@ -188,7 +188,7 @@ This depends heavily on the prompt and model you are using. Generally, you want

 ## Set up the Agent

-We can now combine everything to set up our agent
+We can now combine everything to set up our agent:


 ```python
@@ -72,7 +72,7 @@ class BaseCallbackHandler:

 LangChain provides a few built-in handlers that you can use to get started. These are available in the `langchain/callbacks` module. The most basic handler is the `StdOutCallbackHandler`, which simply logs all events to `stdout`.

-**Note** when the `verbose` flag on the object is set to true, the `StdOutCallbackHandler` will be invoked even without being explicitly passed in.
+**Note**: when the `verbose` flag on the object is set to true, the `StdOutCallbackHandler` will be invoked even without being explicitly passed in.

 ```python
 from langchain.callbacks import StdOutCallbackHandler
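A minimal sketch of the handler in use (the `1 + {number} = ` prompt is illustrative):

```python
from langchain.callbacks import StdOutCallbackHandler
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

handler = StdOutCallbackHandler()
llm = OpenAI()
prompt = PromptTemplate.from_template("1 + {number} = ")

# Passing the handler explicitly behaves like constructing the chain with verbose=True
chain = LLMChain(llm=llm, prompt=prompt, callbacks=[handler])
chain.run(number=2)
```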
@@ -137,6 +137,6 @@ The `verbose` argument is available on most objects throughout the API (Chains,

 ### When do you want to use each of these?

-- Constructor callbacks are most useful for use cases such as logging, monitoring, etc., which are _not specific to a single request_, but rather to the entire chain. For example, if you want to log all the requests made to an LLMChain, you would pass a handler to the constructor.
+- Constructor callbacks are most useful for use cases such as logging, monitoring, etc., which are _not specific to a single request_, but rather to the entire chain. For example, if you want to log all the requests made to an `LLMChain`, you would pass a handler to the constructor.
 - Request callbacks are most useful for use cases such as streaming, where you want to stream the output of a single request to a specific websocket connection, or other similar use cases. For example, if you want to stream the output of a single request to a websocket, you would pass a handler to the `call()` method
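The distinction in code, sketched with the same `StdOutCallbackHandler` as above (any handler works the same way):

```python
from langchain.callbacks import StdOutCallbackHandler
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

handler = StdOutCallbackHandler()
llm = OpenAI()
prompt = PromptTemplate.from_template("1 + {number} = ")

# Constructor callback: fires for every request this chain ever handles
chain = LLMChain(llm=llm, prompt=prompt, callbacks=[handler])
chain.run(number=2)

# Request callback: scoped to this single call only
chain = LLMChain(llm=llm, prompt=prompt)
chain.run(number=2, callbacks=[handler])
```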