# Quickstart

## Installation

To install LangChain, run:

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import Install from "@snippets/get_started/quickstart/installation.mdx"

<Install/>
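At its simplest, this is just `pip install langchain`.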
For more details, see our [Installation guide](/docs/get_started/installation.html).

## Environment setup

Using LangChain will usually require integrations with one or more model providers, data stores, APIs, etc. For this example, we'll use OpenAI's model APIs.

import OpenAISetup from "@snippets/get_started/quickstart/openai_setup.mdx"

<OpenAISetup/>
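As a rough sketch of what that setup involves: the OpenAI integrations read the `OPENAI_API_KEY` environment variable, or you can pass the key explicitly when constructing the model object (the key values here are placeholders).

```python
import os

# Option 1: set the environment variable (placeholder value)
os.environ["OPENAI_API_KEY"] = "..."

# Option 2: pass the key directly when initializing the model
from langchain.chat_models import ChatOpenAI
chat_model = ChatOpenAI(openai_api_key="...")
```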
## Building an application

Now we can start building our language model application. LangChain provides many modules that can be used to build language model applications.
Modules can be used as standalone components in simple applications, and they can be combined for more complex use cases.

The core building block of LangChain applications is the LLMChain.
It combines three things:

- LLM: The language model is the core reasoning engine. To work with LangChain, you need to understand the different types of language models and how to work with them.
- Prompt Templates: These provide instructions to the language model. They control what the language model outputs, so understanding how to construct prompts and the different prompting strategies is crucial.
- Output Parsers: These translate the raw response from the LLM into a more workable format, making it easy to use the output downstream.

In this getting started guide we will cover those three components by themselves, and then cover the LLMChain, which combines all of them.
Understanding these concepts will set you up well for using and customizing LangChain applications.
Most LangChain applications allow you to configure the LLM and/or the prompt used, so knowing how to take advantage of this will be a big enabler.
## LLMs

There are two types of language models, which in LangChain are called:

- LLMs: a language model which takes a string as input and returns a string
- ChatModels: a language model which takes a list of messages as input and returns a message

The input/output for LLMs is simple and easy to understand: a string.
But what about ChatModels? There, the input is a list of `ChatMessage`s and the output is a single `ChatMessage`.
A `ChatMessage` has two required components:

- `content`: the content of the message.
- `role`: the role of the entity the `ChatMessage` is coming from.

LangChain provides several objects to easily distinguish between the different roles:

- `HumanMessage`: a `ChatMessage` coming from a human/user.
- `AIMessage`: a `ChatMessage` coming from an AI/assistant.
- `SystemMessage`: a `ChatMessage` coming from the system.
- `FunctionMessage`: a `ChatMessage` coming from a function call.

If none of those roles sounds right, there is also a `ChatMessage` class where you can specify the role manually.
For more information on how to use these different messages most effectively, see our prompting guide.
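To make this concrete, here is a small sketch of constructing these message types (the custom role `"critic"` is just an illustrative choice):

```python
from langchain.schema import AIMessage, ChatMessage, HumanMessage, SystemMessage

messages = [
    # Sets the assistant's behavior for the conversation
    SystemMessage(content="You are a helpful naming consultant."),
    # A message from the user
    HumanMessage(content="What is a good name for a company that makes colorful socks?"),
    # A prior response from the model
    AIMessage(content="Socktastic!"),
    # A manually specified role, for providers that support arbitrary roles
    ChatMessage(role="critic", content="Too whimsical - try again."),
]
```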
LangChain exposes a standard interface for both, but it's useful to understand this difference in order to construct prompts for a given language model.
The standard interface that LangChain exposes has two methods:

- `predict`: Takes in a string, returns a string.
- `predict_messages`: Takes in a list of messages, returns a message.

Let's see how to work with these different types of models and these different types of inputs.
First, let's import an LLM and a ChatModel.

import ImportLLMs from "@snippets/get_started/quickstart/import_llms.mdx"

<ImportLLMs/>
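A minimal sketch of those imports, assuming the OpenAI integration is installed:

```python
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI

llm = OpenAI()
chat_model = ChatOpenAI()
```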
The `OpenAI` and `ChatOpenAI` objects are basically just configuration objects.
You can initialize them with parameters like `temperature`, and pass them around.

Next, let's use the `predict` method to run over a string input.

import InputString from "@snippets/get_started/quickstart/input_string.mdx"

<InputString/>
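A sketch of what that looks like (the model outputs shown are illustrative, not fixed):

```python
text = "What would be a good company name for a company that makes colorful socks?"

llm.predict(text)
# e.g. "Feetful of Fun"

chat_model.predict(text)
# e.g. "Socks O'Color"
```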
Finally, let's use the `predict_messages` method to run over a list of messages.

import InputMessages from "@snippets/get_started/quickstart/input_messages.mdx"

<InputMessages/>
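Roughly, assuming the `llm` and `chat_model` objects from above (again, outputs are illustrative):

```python
from langchain.schema import HumanMessage

text = "What would be a good company name for a company that makes colorful socks?"
messages = [HumanMessage(content=text)]

llm.predict_messages(messages)
# e.g. AIMessage(content="Feetful of Fun")

chat_model.predict_messages(messages)
# e.g. AIMessage(content="Socks O'Color")
```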
For both of these methods, you can also pass in parameters as keyword arguments.
For example, you could pass in `temperature=0` to adjust the temperature from the value the object was configured with.
Whatever values are passed in at runtime will always override what the object was configured with.
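For example, a sketch of a runtime override (illustrative code built from the paragraph above, not a verbatim snippet):

```python
# The object was configured with temperature=0.9...
llm = OpenAI(temperature=0.9)

# ...but this call runs with temperature=0, overriding the configured value
llm.predict("What would be a good company name for a company that makes colorful socks?", temperature=0)
```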
## Prompt templates

Most LLM applications do not pass user input directly into an LLM. Usually they will add the user input to a larger piece of text, called a prompt template, that provides additional context on the specific task at hand.

In the previous example, the text we passed to the model contained instructions to generate a company name. For our application, it'd be great if the user only had to provide the description of a company/product, without having to worry about giving the model instructions.

PromptTemplates help with exactly this!
They bundle up all the logic for going from user input to a fully formatted prompt.
This can start off very simple - for example, a prompt to produce the above string would just be:

import PromptTemplateLLM from "@snippets/get_started/quickstart/prompt_templates_llms.mdx"
import PromptTemplateChatModel from "@snippets/get_started/quickstart/prompt_templates_chat_models.mdx"

<PromptTemplateLLM/>
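A sketch of what such a template can look like:

```python
from langchain.prompts import PromptTemplate

# Build a template with one input variable, then fill it in
prompt = PromptTemplate.from_template("What is a good name for a company that makes {product}?")
prompt.format(product="colorful socks")
# 'What is a good name for a company that makes colorful socks?'
```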
However, using these over raw string formatting has several advantages.
You can "partial" out variables - e.g. you can format only some of the variables at a time.
You can compose them together, easily combining different templates into a single prompt.
For more detail on these functionalities, see the [section on prompts](/docs/modules/model_io/prompts).

PromptTemplates can also be used to produce a list of messages.
In this case, the prompt contains information not only about the content, but also about each message (its role, its position in the list, etc.).
Most often, a ChatPromptTemplate is a list of ChatMessageTemplates.
Each ChatMessageTemplate contains instructions for how to format that ChatMessage - its role, and then also its content.
Let's take a look at this below:

<PromptTemplateChatModel/>
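A sketch of a chat prompt template, using a translation example (the exact snippet may differ):

```python
from langchain.prompts.chat import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    SystemMessagePromptTemplate,
)

system_template = "You are a helpful assistant that translates {input_language} to {output_language}."
human_template = "{text}"

# Each message template carries a role plus a content template
chat_prompt = ChatPromptTemplate.from_messages([
    SystemMessagePromptTemplate.from_template(system_template),
    HumanMessagePromptTemplate.from_template(human_template),
])

chat_prompt.format_messages(input_language="English", output_language="French", text="I love programming.")
# [SystemMessage(content='You are a helpful assistant that translates English to French.'),
#  HumanMessage(content='I love programming.')]
```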
ChatPromptTemplates can also include things other than ChatMessageTemplates - see the [section on prompts](/docs/modules/model_io/prompts) for more detail.

## Output Parsers

OutputParsers convert the raw output of an LLM into a format that can be used downstream.
There are a few main types of OutputParsers, including ones that:

- convert text from an LLM into structured information (e.g. JSON)
- convert a ChatMessage into just a string
- convert the extra information returned from a call besides the message (like OpenAI function invocation) into a string

For full information on this, see the [section on output parsers](/docs/modules/model_io/output_parsers).

In this getting started guide, we will write our own output parser - one that converts a comma-separated list into a list.

import OutputParser from "@snippets/get_started/quickstart/output_parser.mdx"

<OutputParser/>
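A sketch of such a parser, built on LangChain's `BaseOutputParser`:

```python
from langchain.schema import BaseOutputParser

class CommaSeparatedListOutputParser(BaseOutputParser):
    """Parse the output of an LLM call into a comma-separated list."""

    def parse(self, text: str):
        """Split the raw model output on commas."""
        return text.strip().split(", ")

CommaSeparatedListOutputParser().parse("hi, bye")
# ['hi', 'bye']
```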
## LLMChain

We can now combine all these into one chain.
This chain will take input variables, pass those to a prompt template to create a prompt, pass the prompt to an LLM, and then pass the output through an (optional) output parser.
This is a convenient way to bundle up a modular piece of logic.
Let's see it in action!

import LLMChain from "@snippets/get_started/quickstart/llm_chain.mdx"

<LLMChain/>
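A sketch of the whole pipeline, reusing the `CommaSeparatedListOutputParser` from the sketch above (the prompt wording is illustrative):

```python
from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    SystemMessagePromptTemplate,
)

# A prompt that instructs the model to answer with a comma-separated list
system_template = (
    "You are a helpful assistant who generates comma separated lists. "
    "A user will pass in a category, and you should generate 5 objects "
    "in that category in a comma separated list. "
    "ONLY return a comma separated list, and nothing more."
)
chat_prompt = ChatPromptTemplate.from_messages([
    SystemMessagePromptTemplate.from_template(system_template),
    HumanMessagePromptTemplate.from_template("{text}"),
])

# Prompt template -> LLM -> output parser, bundled into one chain
chain = LLMChain(
    llm=ChatOpenAI(),
    prompt=chat_prompt,
    output_parser=CommaSeparatedListOutputParser(),
)
chain.run("colors")
# e.g. ['red', 'blue', 'green', 'yellow', 'orange']
```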
## Next Steps

This is it!
We've now gone over how to create the core building block of LangChain applications - the LLMChain.
There is a lot more nuance in all these components (LLMs, prompts, output parsers), and many more components to learn about as well.
To continue on your journey:

- [Dive deeper](/docs/modules/model_io) into LLMs, prompts, and output parsers
- Learn the other [key components](/docs/modules)
- Check out our [helpful guides](/docs/guides) for detailed walkthroughs on particular topics
- Explore [end-to-end use cases](/docs/use_cases)