From b7225fd0107dd506bf2573642b6c893e1026dde9 Mon Sep 17 00:00:00 2001
From: Nicolas
Date: Fri, 13 Jan 2023 22:31:33 -0300
Subject: [PATCH] docs: fix small typo (#611)

---
 docs/use_cases/combine_docs.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/use_cases/combine_docs.md b/docs/use_cases/combine_docs.md
index 07884f7a48d..5b2067bc96d 100644
--- a/docs/use_cases/combine_docs.md
+++ b/docs/use_cases/combine_docs.md
@@ -55,7 +55,7 @@ There are two big issues to deal with in fetching:

 ### Text Splitting
 One big issue with all of these methods is how to make sure you are working with pieces of text that are not too
 large. This is important because most language models have a context length, and so you cannot (yet) just pass a
-large document in as context. Therefor, it is important to not only fetch relevant data but also make sure it is
+large document in as context. Therefore, it is important to not only fetch relevant data but also make sure it is
 in small enough chunks. LangChain provides some utilities to help with splitting up larger pieces of data. This
 comes in the form of the TextSplitter class.
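
The context lines of this patch describe chunking a document so each piece fits within a model's context length. The idea behind such a text splitter can be sketched as plain Python — this is a hypothetical illustration of the chunking concept, not LangChain's actual `TextSplitter` implementation; the function name, `chunk_size`, and `overlap` parameters are assumptions:

```python
# Hypothetical sketch of the chunking idea: break a long text into
# pieces of at most chunk_size characters, with a small overlap so
# context is not lost at chunk boundaries.

def split_text(text: str, chunk_size: int = 100, overlap: int = 20) -> list[str]:
    chunks = []
    start = 0
    step = chunk_size - overlap  # advance less than chunk_size to overlap
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += step
    return chunks

# A 250-character document splits into pieces no larger than 100 chars.
chunks = split_text("x" * 250, chunk_size=100, overlap=20)
```

Each chunk can then be passed to the model individually, with the overlap reducing the chance that a sentence relevant to the query is cut in half at a boundary.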