mirror of https://github.com/hwchase17/langchain.git
parent 5171c3bcca
commit 4a94f56258
@@ -65,7 +65,7 @@ index.query(question)
 
 
 
-Of course, some users do not wnat this level of abstraction.
+Of course, some users do not want this level of abstraction.
 
 Below, we will discuss each stage in more detail.
 
@@ -113,13 +113,13 @@ Here are the three pieces together:
 
 #### 1.2.1 Integrations
 
-`Data Loaders`
+`Document Loaders`
 
 * Browse the > 120 data loader integrations [here](https://integrations.langchain.com/).
 
 * See further documentation on loaders [here](https://python.langchain.com/docs/modules/data_connection/document_loaders/).
 
-`Data Transformers`
+`Document Transformers`
 
 * All can ingest loaded `Documents` and process them (e.g., split).
 
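As a quick illustration of the load-then-split flow that the renamed `Document Loaders` / `Document Transformers` sections describe, a minimal sketch (the loader class, URL, and chunk size are illustrative choices, not taken from this diff):

```python
# Minimal sketch, assuming `langchain` and `beautifulsoup4` are installed.
# WebBaseLoader and RecursiveCharacterTextSplitter stand in for any of the
# >120 loader and transformer integrations; the URL and chunk size are arbitrary.
from langchain.document_loaders import WebBaseLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter

loader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")
data = loader.load()  # -> list of `Document` objects

splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)
splits = splitter.split_documents(data)  # transformers ingest loaded Documents and split them
```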
@@ -133,7 +133,7 @@ Here are the three pieces together:
 
 #### 1.2.2 Retaining metadata
 
-`Context-aware splitters` keep the location or "context" of each split in the origional `Document`:
+`Context-aware splitters` keep the location ("context") of each split in the origional `Document`:
 
 * [Markdown files](https://python.langchain.com/docs/use_cases/question_answering/document-context-aware-QA)
 * [Code (py or js)](https://python.langchain.com/docs/modules/data_connection/document_loaders/integrations/source_code)
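A minimal sketch of what retaining the split's "context" means in practice, using `MarkdownHeaderTextSplitter` (the headers chosen to split on and the sample text are illustrative):

```python
# Minimal sketch: each split keeps the Markdown headers it fell under
# as metadata, i.e. its location ("context") in the original document.
from langchain.text_splitter import MarkdownHeaderTextSplitter

md = "# Title\n\n## Section A\n\nText about A.\n\n## Section B\n\nText about B."
splitter = MarkdownHeaderTextSplitter(
    headers_to_split_on=[("#", "Header 1"), ("##", "Header 2")]
)
splits = splitter.split_text(md)
print(splits[0].metadata)  # e.g. {'Header 1': 'Title', 'Header 2': 'Section A'}
```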
@@ -171,7 +171,7 @@ For example, SVMs (see thread [here](https://twitter.com/karpathy/status/1647025
 
 LangChain [has many retrievers](https://python.langchain.com/docs/modules/data_connection/retrievers/) including, but not limited to, vectorstores.
 
-All retrievers implement some common, useful methods, such as `get_relevant_documents()`.
+All retrievers implement some common methods, such as `get_relevant_documents()`.
 
 
 ```python
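To show the shared retriever interface that the reworded line refers to, a minimal sketch backed by a vectorstore (Chroma, OpenAIEmbeddings, and the query are illustrative choices, assuming `chromadb` and an OpenAI API key; `splits` continues from the loader sketch above):

```python
# Minimal sketch: any retriever exposes get_relevant_documents();
# a vectorstore is just one way to back it.
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma

vectorstore = Chroma.from_documents(documents=splits, embedding=OpenAIEmbeddings())
retriever = vectorstore.as_retriever()
docs = retriever.get_relevant_documents("What are the approaches to task decomposition?")
len(docs)
```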
@@ -222,7 +222,7 @@ len(unique_docs)
 
 ### 3.1 Getting started
 
-Distill the retried documents into an answer using an LLM (e.g., `gpt-3.5-turbo`) with `RetrievalQA` chain.
+Distill the retrieved documents into an answer using an LLM (e.g., `gpt-3.5-turbo`) with `RetrievalQA` chain.
 
 
 ```python
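A minimal sketch of the `RetrievalQA` usage this hunk describes (model name and query are illustrative; `vectorstore` continues from the retriever sketch above, and an OpenAI API key is assumed):

```python
# Minimal sketch: distill the retrieved documents into an answer with an LLM.
from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQA

llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
qa_chain = RetrievalQA.from_chain_type(llm, retriever=vectorstore.as_retriever())
qa_chain({"query": "What are the approaches to task decomposition?"})
```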
@@ -247,9 +247,9 @@ qa_chain({"query": question})
 
 `LLMs`
 
-* Browse the > 55 model integrations [here](https://integrations.langchain.com/).
+* Browse the > 55 LLM integrations [here](https://integrations.langchain.com/).
 
-* See further documentation on vectorstores [here](https://python.langchain.com/docs/modules/model_io/models/).
+* See further documentation on LLMs [here](https://python.langchain.com/docs/modules/model_io/models/).
 
 #### 3.2.2 Running LLMs locally
 
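Since the hunk also touches "3.2.2 Running LLMs locally", a minimal sketch of swapping a local model into the same chain; `LlamaCpp`, the model path, and `n_ctx` are illustrative assumptions (requires `llama-cpp-python` and a locally downloaded model file), with `vectorstore` continuing from the sketches above:

```python
# Minimal sketch, assuming llama-cpp-python and a local model file.
# The path below is hypothetical.
from langchain.llms import LlamaCpp
from langchain.chains import RetrievalQA

llm = LlamaCpp(model_path="/path/to/llama-model.bin", n_ctx=2048)
qa_chain = RetrievalQA.from_chain_type(llm, retriever=vectorstore.as_retriever())
```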
@@ -355,7 +355,7 @@ result
 
 
 
-#### 3.2.5 Customizing how pass retrieved documents to the LLM
+#### 3.2.5 Customizing retrieved docs in the LLM prompt
 
 Retrieved documents can be fed to an LLM for answer distillation in a few different ways.
 
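To make "a few different ways" concrete, a minimal sketch of two built-in document-combining strategies selected via the `chain_type` option (`llm` and `vectorstore` continue from the sketches above):

```python
# Minimal sketch: "stuff" packs all retrieved docs into one prompt, while
# "refine" feeds them one at a time and iteratively updates the answer.
from langchain.chains import RetrievalQA

stuff_chain = RetrievalQA.from_chain_type(
    llm, retriever=vectorstore.as_retriever(), chain_type="stuff"
)
refine_chain = RetrievalQA.from_chain_type(
    llm, retriever=vectorstore.as_retriever(), chain_type="refine"
)
```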