mirror of
https://github.com/hwchase17/langchain.git
synced 2025-09-05 04:55:14 +00:00
improve documentation on how to pass in custom prompts (#561)
@@ -3,6 +3,14 @@
Question answering involves fetching multiple documents and then asking a question of them.
The LLM response will contain the answer to your question, based on the content of those documents.

The recommended way to get started with a question answering chain is:

```python
from langchain.chains.question_answering import load_qa_chain
chain = load_qa_chain(llm, chain_type="stuff")
chain.run(input_documents=docs, question=query)
```

The following resources exist:

- [Question Answering Notebook](/modules/chains/combine_docs_examples/question_answering.ipynb): A notebook walking through how to accomplish this task.
- [VectorDB Question Answering Notebook](/modules/chains/combine_docs_examples/vector_db_qa.ipynb): A notebook walking through how to do question answering over a vector database. This is often useful when you have a large number of documents and don't want to pass them all to the LLM, but instead want to first do a semantic search over embeddings.
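The `"stuff"` chain type simply "stuffs" all of the input documents into a single prompt. A minimal, library-free sketch of that idea (the template and helper name below are illustrative, not LangChain internals):

```python
# Sketch of what a "stuff"-style QA chain does: concatenate every document
# into one prompt and append the question. The template and function name
# here are illustrative, not LangChain's actual internals.

def build_stuff_qa_prompt(docs: list[str], question: str) -> str:
    """Join all documents into a single context block, then append the question."""
    context = "\n\n".join(docs)
    return (
        "Use the following pieces of context to answer the question.\n\n"
        f"{context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )

docs = ["Paris is the capital of France.", "France is in Europe."]
prompt = build_stuff_qa_prompt(docs, "What is the capital of France?")
```

Because everything goes into one prompt, `"stuff"` works best when the combined documents fit within the model's context window; the other chain types exist for when they don't.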
@@ -11,6 +19,14 @@ The following resources exist:

There is also a variant of this, where, in addition to responding with the answer, the language model will also cite its sources (i.e., which of the documents passed in it used).

The recommended way to get started with a question answering with sources chain is:

```python
from langchain.chains.qa_with_sources import load_qa_with_sources_chain
chain = load_qa_with_sources_chain(llm, chain_type="stuff")
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
```

The following resources exist:

- [QA With Sources Notebook](/modules/chains/combine_docs_examples/qa_with_sources.ipynb): A notebook walking through how to accomplish this task.
- [VectorDB QA With Sources Notebook](/modules/chains/combine_docs_examples/vector_db_qa_with_sources.ipynb): A notebook walking through how to do question answering with sources over a vector database. This is often useful when you have a large number of documents and don't want to pass them all to the LLM, but instead want to first do a semantic search over embeddings.
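Conceptually, the "with sources" variant works by tagging each document with a source identifier, asking the model to cite the sources it used, and then splitting the citation line off from the answer. A rough, library-free sketch of that flow (the function names and prompt shape are made up for illustration, not LangChain internals):

```python
# Illustrative sketch of the "with sources" idea: each document carries a
# source identifier, and the model's reply ends with a "SOURCES:" line that
# gets split off from the answer. Names are made up, not LangChain internals.

def format_docs_with_sources(docs: list[tuple[str, str]]) -> str:
    """Render (content, source) pairs so the model can cite sources."""
    return "\n\n".join(f"Content: {text}\nSource: {src}" for text, src in docs)

def split_answer_and_sources(reply: str) -> tuple[str, str]:
    """Split a reply of the form '<answer>\\nSOURCES: <ids>' into its parts."""
    answer, _, sources = reply.partition("SOURCES:")
    return answer.strip(), sources.strip()

# Example of parsing a hypothetical model reply:
reply = "The capital of France is Paris.\nSOURCES: doc-1"
answer, sources = split_answer_and_sources(reply)
```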
@@ -1,7 +1,15 @@
# Summarization

Summarization involves creating a smaller summary of multiple longer documents.
This can be useful for distilling long documents into the core pieces of information.

The recommended way to get started with a summarization chain is:

```python
from langchain.chains.summarize import load_summarize_chain
chain = load_summarize_chain(llm, chain_type="map_reduce")
chain.run(docs)
```

The following resources exist:

- [Summarization Notebook](/modules/chains/combine_docs_examples/summarize.ipynb): A notebook walking through how to accomplish this task.
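The `"map_reduce"` chain type first summarizes each document independently (the map step), then combines those intermediate summaries into a final summary (the reduce step). A toy sketch of that control flow, using a stand-in "summarizer" that just takes the first sentence rather than calling an LLM (not LangChain's implementation):

```python
# Toy sketch of map-reduce summarization control flow. The summarize()
# function is a stand-in (first sentence only); a real chain would call
# an LLM at both the map and reduce steps. Not LangChain internals.

def summarize(text: str) -> str:
    """Stand-in for an LLM call: return the first sentence."""
    return text.split(". ")[0].rstrip(".") + "."

def map_reduce_summarize(docs: list[str]) -> str:
    # Map step: summarize each document independently.
    partial_summaries = [summarize(doc) for doc in docs]
    # Reduce step: join the partial summaries and summarize the result.
    return summarize(" ".join(partial_summaries))

docs = [
    "LangChain provides chains. Chains combine components.",
    "Summarization condenses documents. It keeps the key points.",
]
final = map_reduce_summarize(docs)
```

Because each document is summarized on its own before the combine step, this pattern scales to document sets that would not fit in a single prompt, at the cost of extra LLM calls.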