docs: fix model i/o index links (#15421)
commit a3d47b4f19 (parent 5a43e0e885)
```diff
@@ -21,7 +21,7 @@
     "In chains, a sequence of actions is hardcoded (in code).\n",
     "In agents, a language model is used as a reasoning engine to determine which actions to take and in which order.\n",
     "\n",
-    "## [Quick Start](/docs/modules/agents/quick_start)\n",
+    "## [Quickstart](/docs/modules/agents/quick_start)\n",
     "\n",
     "For a quick start to working with agents, please check out [this getting started guide](/docs/modules/agents/quick_start). This covers basics like initializing an agent, creating tools, and adding memory.\n",
     "\n",
```
```diff
@@ -7,7 +7,7 @@
    "source": [
     "---\n",
     "sidebar_position: 0\n",
-    "title: Quick Start\n",
+    "title: Quickstart\n",
     "---"
    ]
   },
```
```diff
@@ -16,7 +16,7 @@
    "id": "f4c03f40-1328-412d-8a48-1db0cd481b77",
    "metadata": {},
    "source": [
-    "# Quick Start\n",
+    "# Quickstart\n",
     "\n",
     "To best understand the agent framework, let's build an agent that has two tools: one to look things up online, and one to look up specific data that we've loaded into a index.\n",
     "\n",
```
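The notebook retitled here walks through building an agent with two tools: one for online lookup and one for retrieving data loaded into a local index. A minimal sketch of that pattern, assuming the langchain 0.1-era agent API, an OpenAI key, and Tavily search as the online tool (the notebook's exact tool setup may differ):

```python
from langchain import hub
from langchain.agents import AgentExecutor, create_openai_functions_agent
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_openai import ChatOpenAI

# Online lookup tool; the notebook also adds a retriever tool over an index
# of loaded documents, omitted here for brevity.
tools = [TavilySearchResults(max_results=1)]

# Pull a published prompt for OpenAI-functions-style agents from the hub.
prompt = hub.pull("hwchase17/openai-functions-agent")
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

# The model acts as the reasoning engine that picks tools; the executor runs the loop.
agent = create_openai_functions_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

agent_executor.invoke({"input": "What is LangChain?"})
```

Memory, also mentioned in the guide, is typically added by threading prior chat history into the agent's input.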
```diff
@@ -686,7 +686,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.10.1"
+   "version": "3.9.1"
   }
  },
  "nbformat": 4,
```
```diff
@@ -25,4 +25,4 @@ This includes:
 
 - [How to cache ChatModel responses](./chat_model_caching)
 - [How to stream responses from a ChatModel](./streaming)
-- [How to track token usage in a ChatModel call)(./token_usage_tracking)
+- [How to track token usage in a ChatModel call](./token_usage_tracking)
```
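The link repaired here points to the token-usage-tracking guide. A minimal sketch of that pattern, assuming an OpenAI chat model and the `get_openai_callback` helper (illustrative, not the guide's exact code):

```python
from langchain.callbacks import get_openai_callback
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo")

# Every OpenAI call made inside the context manager is tallied on the callback.
with get_openai_callback() as cb:
    llm.invoke("Tell me a joke about token counting")
    print(cb.prompt_tokens, cb.completion_tokens, cb.total_tokens, cb.total_cost)
```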
```diff
@@ -11,27 +11,27 @@ The core element of any language model application is...the model. LangChain giv
 
 
-## [Conceptual Guide](./concepts)
+## [Conceptual Guide](/docs/modules/model_io/concepts)
 
 A conceptual explanation of messages, prompts, LLMs vs ChatModels, and output parsers. You should read this before getting started.
 
-## [Quick Start](./quick_start)
+## [Quickstart](/docs/modules/model_io/quick_start)
 
 Covers the basics of getting started working with different types of models. You should walk through [this section] if you want to get an overview of the functionality.
 
-## [Prompts](./prompts)
+## [Prompts](/docs/modules/model_io/prompts/)
 
-[This section](./prompts) deep dives into the different types of prompt templates and how to use them.
+[This section](/docs/modules/model_io/prompts/) deep dives into the different types of prompt templates and how to use them.
 
-## [LLMs](./llms)
+## [LLMs](/docs/modules/model_io/llms/)
 
-[This section](./llms) covers functionality related to the LLM class. This is a type of model that takes a text string as input and returns a text string.
+[This section](/docs/modules/model_io/llms/) covers functionality related to the LLM class. This is a type of model that takes a text string as input and returns a text string.
 
-## [ChatModels](./chat)
+## [ChatModels](/docs/modules/model_io/chat/)
 
-[This section](./chat) covers functionality related to the ChatModel class. This is a type of model that takes a list of messages as input and returns a message.
+[This section](/docs/modules/model_io/chat/) covers functionality related to the ChatModel class. This is a type of model that takes a list of messages as input and returns a message.
 
-## [Output Parsers](./output_parsers)
+## [Output Parsers](/docs/modules/model_io/output_parsers/)
 
-Output parsers are responsible for transforming the output of LLMs and ChatModels into more structured data. [This section](./output_parsers) covers the different types of output parsers.
+Output parsers are responsible for transforming the output of LLMs and ChatModels into more structured data. [This section](/docs/modules/model_io/output_parsers/) covers the different types of output parsers.
```
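The LLM and ChatModel pages re-linked above describe the two interfaces: an LLM maps a text string to a text string, while a ChatModel maps a list of messages to a message. A minimal sketch of the difference, assuming the OpenAI integrations (the model choices and prompt are illustrative):

```python
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI, OpenAI

# LLM interface: text string in, text string out.
llm = OpenAI()
tagline: str = llm.invoke("Write a tagline for a coffee shop.")

# ChatModel interface: list of messages in, a single message out.
chat = ChatOpenAI()
reply = chat.invoke([HumanMessage(content="Write a tagline for a coffee shop.")])

print(tagline)
print(reply.content)  # the returned AIMessage carries its text in .content
```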
```diff
@@ -8,7 +8,7 @@ guide the model's response, helping it understand the context and generate relevant
 and coherent language-based output, such as answering questions, completing sentences,
 or engaging in a conversation.
 
-## [Quick Start](./quick_start)
+## [Quickstart](./quick_start)
 
 This [quick start](./quick_start) provides a basic overview of how to work with prompts.
```
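The prompts index retitled here covers prompt templates. A minimal sketch of the two common template types, assuming the langchain_core prompt classes (the example text is illustrative):

```python
from langchain_core.prompts import ChatPromptTemplate, PromptTemplate

# String template: fills variables into one block of text (for LLMs).
prompt = PromptTemplate.from_template("Summarize the following text:\n\n{text}")
print(prompt.format(text="LangChain is a framework for building LLM applications."))

# Chat template: produces a list of messages (for ChatModels).
chat_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a concise assistant."),
    ("human", "Summarize: {text}"),
])
print(chat_prompt.format_messages(text="LangChain is a framework for building LLM applications."))
```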
```diff
@@ -1,3 +1,7 @@
+---
+sidebar_position: 0
+---
+
 # Quickstart
 
 The quick start will cover the basics of working with language models. It will introduce the two different types of models - LLMs and ChatModels. It will then cover how to use PromptTemplates to format the inputs to these models, and how to use Output Parsers to work with the outputs. For a deeper conceptual guide into these topics - please see [this documentation](./concepts)
```
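The quickstart gaining front matter here promises to show PromptTemplates formatting model inputs and Output Parsers handling the outputs. A minimal sketch of the three pieces composed together, assuming an OpenAI chat model and the LCEL pipe syntax (the topic and prompt wording are illustrative):

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Tell me a short joke about {topic}")
model = ChatOpenAI()
parser = StrOutputParser()  # converts the returned chat message into a plain string

# Prompt -> model -> output parser, composed with the | operator.
chain = prompt | model | parser
print(chain.invoke({"topic": "bears"}))
```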