Harrison/agent intro (#8138)
Co-authored-by: Bagatur <baskaryan@gmail.com>

This will go over how to get started building an agent.
We will use a LangChain agent class, but show how to customize it to give it specific context.
We will then define custom tools, and then run it all in the standard LangChain AgentExecutor.

### Set up the agent

We will use the OpenAIFunctionsAgent.
This is the easiest and best agent to get started with.
It does, however, require the use of ChatOpenAI models.
If you want to use a different language model, we recommend the [ReAct](/docs/modules/agents/agent_types/react) agent.

For this guide, we will construct a custom agent that has access to a custom tool.
We are choosing this example because we think for most use cases you will NEED to customize either the agent or the tools.
The tool we will give the agent is a tool to calculate the length of a word.
This is useful because it is actually something LLMs can get wrong due to tokenization.
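
To see why, remember that a model sees subword tokens rather than individual characters. The segmentation below is a toy illustration (it is NOT OpenAI's actual tokenizer), but it shows how a word's token count can differ from its letter count:

```python
# Toy illustration: a model sees subword tokens, not characters.
# This hypothetical segmentation is NOT OpenAI's actual tokenizer.
def toy_tokenize(word: str) -> list:
    """Split a word into chunks of up to three characters."""
    return [word[i:i + 3] for i in range(0, len(word), 3)]

word = "educa"
tokens = toy_tokenize(word)
print(tokens)       # ['edu', 'ca']
print(len(tokens))  # 2 tokens
print(len(word))    # 5 letters
```

A model reasoning over two tokens has no direct view of the five underlying characters, which is exactly the kind of question our tool will answer reliably.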

We will first create it WITHOUT memory, but we will then show how to add memory in.
Memory is needed to enable conversation.

First, let's load the language model we're going to use to control the agent.

```python
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(temperature=0)
```

Next, let's define some tools to use.
Let's write a really simple Python function to calculate the length of a word that is passed in.

```python
from langchain.agents import tool


@tool
def get_word_length(word: str) -> int:
    """Returns the length of a word."""
    return len(word)


tools = [get_word_length]
```

Now let's create the prompt.
We can use the `OpenAIFunctionsAgent.create_prompt` helper function to create a prompt automatically.
This allows for a few different ways to customize it, including passing in a custom SystemMessage, which we will do.

```python
from langchain.agents import OpenAIFunctionsAgent
from langchain.schema import SystemMessage

system_message = SystemMessage(content="You are a very powerful assistant, but bad at calculating lengths of words.")
prompt = OpenAIFunctionsAgent.create_prompt(system_message=system_message)
```

Putting those pieces together, we can now create the agent.

```python
from langchain.agents import OpenAIFunctionsAgent

agent = OpenAIFunctionsAgent(llm=llm, tools=tools, prompt=prompt)
```

Finally, we create the AgentExecutor - the runtime for our agent.

```python
from langchain.agents import AgentExecutor

agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
```
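
Conceptually, the AgentExecutor is a loop: ask the agent what to do next, run the chosen tool, feed the observation back, and repeat until the agent returns a final answer. The sketch below illustrates that loop in plain Python (it is a simplified stand-in, not LangChain's actual implementation, and `fake_agent` is a hypothetical scripted agent):

```python
# Simplified sketch of the AgentExecutor loop (NOT LangChain's real code).
# The "agent" here is a plain function that inspects the scratchpad and
# either picks a tool action or returns a final answer.

def fake_agent(question, scratchpad):
    """Hypothetical agent: call the tool once, then answer."""
    if not scratchpad:
        return ("tool", "get_word_length", "educa")
    observation = scratchpad[-1]
    return ("finish", f'There are {observation} letters in the word "educa".', None)

def get_word_length(word):
    return len(word)

def run_executor(question, tools, max_iterations=5):
    scratchpad = []
    for _ in range(max_iterations):
        kind, value, tool_input = fake_agent(question, scratchpad)
        if kind == "finish":
            return value
        observation = tools[value](tool_input)  # run the chosen tool
        scratchpad.append(observation)          # feed the result back to the agent
    raise RuntimeError("agent did not finish")

answer = run_executor("how many letters in the word educa?", {"get_word_length": get_word_length})
print(answer)  # There are 5 letters in the word "educa".
```

The real executor also handles parsing the model's function-call output, error handling, and early stopping, but the control flow is the same.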

Now let's test it out!

```python
agent_executor.run("how many letters in the word educa?")
```

<CodeOutputBlock lang="python">

```

> Entering new AgentExecutor chain...

Invoking: `get_word_length` with `{'word': 'educa'}`


5

There are 5 letters in the word "educa".

> Finished chain.

'There are 5 letters in the word "educa".'
```

</CodeOutputBlock>

This is great - we have an agent!
However, this agent is stateless: it doesn't remember anything about previous interactions.
This means you can't easily ask follow-up questions.
Let's fix that by adding in memory.

In order to do this, we need to do two things:

1. Add a place for memory variables to go in the prompt
2. Add memory to the AgentExecutor (note that we add it here, and NOT to the agent, as this is the outermost chain)

First, let's add a place for memory in the prompt.
We do this by adding a placeholder for messages with the key `"chat_history"`.

```python
from langchain.prompts import MessagesPlaceholder

MEMORY_KEY = "chat_history"
prompt = OpenAIFunctionsAgent.create_prompt(
    system_message=system_message,
    extra_prompt_messages=[MessagesPlaceholder(variable_name=MEMORY_KEY)],
)
```

Next, let's create a memory object.
We will do this by using `ConversationBufferMemory`.
Importantly, we set `memory_key` equal to `"chat_history"` (to align it with the prompt) and set `return_messages=True` (to make it return messages rather than a string).

```python
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(memory_key=MEMORY_KEY, return_messages=True)
```
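
Conceptually, this buffer memory just keeps an ordered list of the messages exchanged so far and hands the whole list back under the configured key each turn. A minimal pure-Python stand-in (NOT the real LangChain class, `ToyBufferMemory` is a hypothetical name) might look like:

```python
# Minimal stand-in for what a conversation buffer memory does conceptually
# (NOT the real LangChain class): store every exchanged message and
# return the full list under the configured memory key.

class ToyBufferMemory:
    def __init__(self, memory_key):
        self.memory_key = memory_key
        self.messages = []

    def save_context(self, user_input, ai_output):
        # each turn appends one human message and one AI message
        self.messages.append(("human", user_input))
        self.messages.append(("ai", ai_output))

    def load_memory_variables(self):
        # return_messages=True corresponds to returning the messages
        # themselves rather than one concatenated string
        return {self.memory_key: list(self.messages)}

toy = ToyBufferMemory("chat_history")
toy.save_context("how many letters in the word educa?", 'There are 5 letters in the word "educa".')
print(toy.load_memory_variables()["chat_history"][0])  # ('human', 'how many letters in the word educa?')
```

Because the prompt has a `MessagesPlaceholder` with the same key, the stored messages get injected into the prompt on every subsequent call, which is what makes follow-up questions work.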

We can then put it all together!

```python
agent = OpenAIFunctionsAgent(llm=llm, tools=tools, prompt=prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, memory=memory, verbose=True)

agent_executor.run("how many letters in the word educa?")
agent_executor.run("is that a real word?")
```