From 91b37b2d812eea7fbd2e5c8338c0776826a54c8b Mon Sep 17 00:00:00 2001
From: Dismas Banda
Date: Wed, 10 Jul 2024 16:35:54 +0200
Subject: [PATCH] docs: fix spelling mistake in concepts.mdx: Fouth -> Fourth (#24067)

**Description:** Corrected the spelling for fourth.

**Twitter handle:** @dismasbanda
---
 docs/docs/concepts.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/docs/concepts.mdx b/docs/docs/concepts.mdx
index aa904ca79aa..23c886dc28e 100644
--- a/docs/docs/concepts.mdx
+++ b/docs/docs/concepts.mdx
@@ -1022,7 +1022,7 @@ See our [blog post overview](https://blog.langchain.dev/query-construction/) and
 
 #### Indexing
 
-Fouth, consider the design of your document index. A simple and powerful idea is to **decouple the documents that you index for retrieval from the documents that you pass to the LLM for generation.** Indexing frequently uses embedding models with vector stores, which [compress the semantic information in documents to fixed-size vectors](/docs/concepts/#embedding-models).
+Fourth, consider the design of your document index. A simple and powerful idea is to **decouple the documents that you index for retrieval from the documents that you pass to the LLM for generation.** Indexing frequently uses embedding models with vector stores, which [compress the semantic information in documents to fixed-size vectors](/docs/concepts/#embedding-models).
 
 Many RAG approaches focus on splitting documents into chunks and retrieving some number based on similarity to an input question for the LLM. But chunk size and chunk number can be difficult to set and affect results if they do not provide full context for the LLM to answer a question. Furthermore, LLMs are increasingly capable of processing millions of tokens.
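
One concrete way to realize the decoupling the changed passage describes is LangChain's `ParentDocumentRetriever`, which indexes small chunks for retrieval but returns the full parent documents for generation. The sketch below assumes OpenAI embeddings and a Chroma vector store; those choices, the chunk size, and the sample document are illustrative, not part of this patch.

```python
from langchain.retrievers import ParentDocumentRetriever
from langchain.storage import InMemoryStore
from langchain_community.vectorstores import Chroma
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Small chunks are embedded and indexed: they match queries precisely.
child_splitter = RecursiveCharacterTextSplitter(chunk_size=400)

# The vector store holds the small chunks; the docstore holds the full
# parent documents that are actually handed to the LLM.
vectorstore = Chroma(
    collection_name="full_documents",
    embedding_function=OpenAIEmbeddings(),
)
docstore = InMemoryStore()

retriever = ParentDocumentRetriever(
    vectorstore=vectorstore,
    docstore=docstore,
    child_splitter=child_splitter,
)

# Index a (hypothetical) document: chunks go to the vector store,
# the full text goes to the docstore.
docs = [Document(page_content="...a long document about index design...")]
retriever.add_documents(docs)

# Retrieval searches over the small chunks but returns the parent document,
# giving the LLM full context without tuning chunk size for generation.
retrieved = retriever.invoke("How should I design my document index?")
```

This keeps the retrieval granularity (small, precisely matchable chunks) independent of the generation granularity (complete documents), which is the decoupling the doc passage recommends.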