Harrison/updating docs (#1196)
@@ -82,7 +82,7 @@ for language models.

 ## Augmenting

 So you've fetched your relevant data - now what? How do you pass them to the language model in a format it can understand?

 For a detailed overview of the different ways of doing so, and the tradeoffs between them, please see
-[this documentation](../modules/chains/combine_docs.md)
+[this documentation](../modules/indexes/combine_docs.md)

 ## Use Cases

 LangChain supports the above three methods of augmenting LLMs with external data.
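
The hunk above only retargets a link, but the question it leaves open - how the fetched documents actually reach the model - is worth one concrete illustration. Below is a minimal sketch, assuming the era-appropriate `load_qa_chain` loader, an `OpenAI` key in the environment, and made-up document contents; the `chain_type` argument picks between the combine-documents strategies the linked page compares.

```python
# Minimal sketch (assumed era API): pass fetched documents to an LLM via a
# combine-documents chain. Requires OPENAI_API_KEY in the environment; the
# document contents below are placeholders.
from langchain.chains.question_answering import load_qa_chain
from langchain.docstore.document import Document
from langchain.llms import OpenAI

docs = [
    Document(page_content="LangChain helps developers build applications around LLMs."),
    Document(page_content="Combine-documents chains feed retrieved text into a prompt."),
]

# "stuff" concatenates every document into a single prompt; "map_reduce"
# processes each document separately and then combines the partial results.
chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff")
print(chain.run(input_documents=docs, question="What is LangChain for?"))
```
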
@@ -12,8 +12,8 @@ chain.run(input_documents=docs, question=query)
 ```

 The following resources exist:
-- [Question Answering Notebook](/modules/chains/combine_docs_examples/question_answering.ipynb): A notebook walking through how to accomplish this task.
-- [VectorDB Question Answering Notebook](/modules/chains/combine_docs_examples/vector_db_qa.ipynb): A notebook walking through how to do question answering over a vector database. This can often be useful for when you have a LOT of documents, and you don't want to pass them all to the LLM, but rather first want to do some semantic search over embeddings.
+- [Question Answering Notebook](/modules/indexes/chain_examples/question_answering.ipynb): A notebook walking through how to accomplish this task.
+- [VectorDB Question Answering Notebook](/modules/indexes/chain_examples/vector_db_qa.ipynb): A notebook walking through how to do question answering over a vector database. This can often be useful for when you have a LOT of documents, and you don't want to pass them all to the LLM, but rather first want to do some semantic search over embeddings.
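
To make the VectorDB bullet concrete, here is a hedged sketch of the search-then-answer pattern it describes: embed a few texts, retrieve only the closest matches, and pass just those to the QA chain whose final call appears in the hunk header. The texts and query are invented, and the `FAISS` and `OpenAIEmbeddings` wrappers (plus the `faiss-cpu` package) are assumed to be available.

```python
# Minimal sketch (assumed era API): semantic search over a small FAISS index,
# then question answering over only the retrieved chunks.
from langchain.chains.question_answering import load_qa_chain
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.vectorstores import FAISS

texts = [
    "The president said the economy grew last quarter.",
    "The bill focuses on new infrastructure spending.",
    "An unrelated paragraph about national parks.",
]

# Embed the texts into an in-memory FAISS index (requires the faiss-cpu package).
docsearch = FAISS.from_texts(texts, OpenAIEmbeddings())

query = "What did the president say about the economy?"

# Retrieve only the most relevant chunks instead of sending every document...
docs = docsearch.similarity_search(query, k=2)

# ...then hand just those chunks to the QA chain.
chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff")
print(chain.run(input_documents=docs, question=query))
```
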

 ### Adding in sources
@@ -28,12 +28,12 @@ chain({"input_documents": docs, "question": query}, return_only_outputs=True)
 ```

 The following resources exist:
-- [QA With Sources Notebook](/modules/chains/combine_docs_examples/qa_with_sources.ipynb): A notebook walking through how to accomplish this task.
-- [VectorDB QA With Sources Notebook](/modules/chains/combine_docs_examples/vector_db_qa_with_sources.ipynb): A notebook walking through how to do question answering with sources over a vector database. This can often be useful for when you have a LOT of documents, and you don't want to pass them all to the LLM, but rather first want to do some semantic search over embeddings.
+- [QA With Sources Notebook](/modules/indexes/chain_examples/qa_with_sources.ipynb): A notebook walking through how to accomplish this task.
+- [VectorDB QA With Sources Notebook](/modules/indexes/chain_examples/vector_db_qa_with_sources.ipynb): A notebook walking through how to do question answering with sources over a vector database. This can often be useful for when you have a LOT of documents, and you don't want to pass them all to the LLM, but rather first want to do some semantic search over embeddings.
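
As a companion to the bullets above, here is a minimal sketch of answering with sources, assuming the era's `load_qa_with_sources_chain` loader; the documents and their `source` metadata are invented for illustration, and the final call mirrors the one shown in the hunk header.

```python
# Minimal sketch (assumed era API): question answering that also cites which
# source each answer came from. Each Document carries a "source" metadata key
# so the chain can report it; the contents are placeholders.
from langchain.chains.qa_with_sources import load_qa_with_sources_chain
from langchain.docstore.document import Document
from langchain.llms import OpenAI

docs = [
    Document(page_content="The talk covered agents and tools.",
             metadata={"source": "talk-notes.txt"}),
    Document(page_content="The blog post covered text splitters.",
             metadata={"source": "blog-post.md"}),
]

chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="stuff")

# return_only_outputs=True drops the echoed inputs and returns just the
# generated answer text (with its sources appended).
query = "What did the talk cover?"
print(chain({"input_documents": docs, "question": query}, return_only_outputs=True))
```
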

 ### Additional Related Resources

 Additional related resources include:
 - [Utilities for working with Documents](/modules/utils/how_to_guides.rst): Guides on how to use several of the utilities which will prove helpful for this task, including Text Splitters (for splitting up long documents) and Embeddings & Vectorstores (useful for the above Vector DB example).
-- [CombineDocuments Chains](/modules/chains/combine_docs.md): A conceptual overview of specific types of chains by which you can accomplish this task.
+- [CombineDocuments Chains](/modules/indexes/combine_docs.md): A conceptual overview of specific types of chains by which you can accomplish this task.
 - [Data Augmented Generation](combine_docs.md): An overview of data augmented generation, which is the general concept of combining external data with LLMs (of which this is a subset).
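
The utilities bullet above mentions Text Splitters and Embeddings & Vectorstores; the sketch below shows how they might fit together, assuming that release's `CharacterTextSplitter`, `OpenAIEmbeddings`, and `FAISS` wrappers. The filename is a placeholder for any long local text file.

```python
# Minimal sketch (assumed era API): split a long document, embed the chunks,
# and index them in a vector store so they can back the VectorDB examples above.
# Requires the faiss-cpu package; the filename is a placeholder.
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import FAISS

with open("state_of_the_union.txt") as f:
    long_document = f.read()

# Split the long document into smaller, overlapping chunks.
splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
texts = splitter.split_text(long_document)

# Embed each chunk and store the vectors in an in-memory FAISS index.
docsearch = FAISS.from_texts(texts, OpenAIEmbeddings())

# The index can now serve semantic search for the QA chains above.
relevant = docsearch.similarity_search("What was said about the economy?", k=4)
print(relevant[0].page_content[:200])
```
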
@@ -12,9 +12,9 @@ chain.run(docs)
 ```

 The following resources exist:
-- [Summarization Notebook](../modules/chains/combine_docs_examples/summarize.ipynb): A notebook walking through how to accomplish this task.
+- [Summarization Notebook](../modules/indexes/chain_examples/summarize.ipynb): A notebook walking through how to accomplish this task.
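
To accompany the summarization bullet, here is a minimal sketch built around the `chain.run(docs)` call from the hunk header, assuming the era's `load_summarize_chain` loader; the document contents are placeholders.

```python
# Minimal sketch (assumed era API): summarize a list of documents with the
# map_reduce strategy, ending in the chain.run(docs) call shown above.
from langchain.chains.summarize import load_summarize_chain
from langchain.docstore.document import Document
from langchain.llms import OpenAI

docs = [
    Document(page_content="First section of a long report about quarterly results."),
    Document(page_content="Second section describing next year's roadmap."),
]

# map_reduce summarizes each document separately, then combines those partial
# summaries, which keeps long inputs within the model's context window.
chain = load_summarize_chain(OpenAI(temperature=0), chain_type="map_reduce")
print(chain.run(docs))
```
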

 Additional related resources include:
 - [Utilities for working with Documents](../modules/utils/how_to_guides.rst): Guides on how to use several of the utilities which will prove helpful for this task, including Text Splitters (for splitting up long documents).
-- [CombineDocuments Chains](../modules/chains/combine_docs.md): A conceptual overview of specific types of chains by which you can accomplish this task.
+- [CombineDocuments Chains](../modules/indexes/combine_docs.md): A conceptual overview of specific types of chains by which you can accomplish this task.
 - [Data Augmented Generation](./combine_docs.md): An overview of data augmented generation, which is the general concept of combining external data with LLMs (of which this is a subset).