docs[patch]: Update docs links (#23013)

Jacob Lee 2024-06-17 15:58:28 -07:00 committed by GitHub
parent c2b2e3266c
commit 6605ae22f6
2 changed files with 8 additions and 8 deletions


@@ -907,8 +907,8 @@ Second, consider the data sources available to your RAG system. You want to quer
 | Name | When to use | Description |
 |------------------|--------------------------------------------|-------------|
-| [Logical routing](/docs/how_to/routing/#using-a-runnablebranch) | When you can prompt an LLM with rules to decide where to route the input. | Logical routing can use an LLM to reason about the query and choose which datastore is most appropriate. |
-| [Semantic routing](/docs/how_to/routing/#using-a-runnablebranch) | When semantic similarity is an effective way to determine where to route the input. | Semantic routing embeds both the query and, typically, a set of prompts. It then chooses the appropriate prompt based upon similarity. |
+| [Logical routing](/docs/how_to/routing/) | When you can prompt an LLM with rules to decide where to route the input. | Logical routing can use an LLM to reason about the query and choose which datastore is most appropriate. |
+| [Semantic routing](/docs/how_to/routing/#routing-by-semantic-similarity) | When semantic similarity is an effective way to determine where to route the input. | Semantic routing embeds both the query and, typically, a set of prompts. It then chooses the appropriate prompt based upon similarity. |
 :::tip
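
For context on the routing table above, here is a minimal sketch of logical routing, assuming an OpenAI chat model and two illustrative datastore labels; none of this code is part of the commit. The idea is to have the LLM emit a structured choice of datastore rather than free text.

```python
from typing import Literal

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field


class RouteQuery(BaseModel):
    """Structured routing decision returned by the LLM."""

    datasource: Literal["python_docs", "js_docs"] = Field(
        description="The datastore best suited to answer the question."
    )


# Hypothetical model choice; any chat model supporting structured output works.
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "Route the user question to the most relevant datastore."),
        ("human", "{question}"),
    ]
)

# Pipe the prompt into a structured-output wrapper around the model.
router = prompt | llm.with_structured_output(RouteQuery)
decision = router.invoke({"question": "How do I use LCEL in Python?"})
print(decision.datasource)  # e.g. "python_docs"
```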
@@ -961,13 +961,13 @@ Fifth, consider ways to improve the quality of your similarity search itself. Em
 ![](/img/colbert.png)
-There are some additional tricks to improve the quality of your retrieval. Embeddings excel at capturing semantic information, but may struggle with keyword-based queries. Many [vector stores](https://python.langchain.com/v0.2/docs/integrations/retrievers/pinecone_hybrid_search/) offer built-in [hybrid-search](https://docs.pinecone.io/guides/data/understanding-hybrid-search) to combine keyword and semantic similarity, which marries the benefits of both approaches. Furthermore, many vector stores have [maximal marginal relevance](https://python.langchain.com/v0.1/docs/modules/model_io/prompts/example_selectors/mmr/), which attempts to diversify the results of a search to avoid returning similar and redundant documents.
+There are some additional tricks to improve the quality of your retrieval. Embeddings excel at capturing semantic information, but may struggle with keyword-based queries. Many [vector stores](/docs/integrations/retrievers/pinecone_hybrid_search/) offer built-in [hybrid-search](https://docs.pinecone.io/guides/data/understanding-hybrid-search) to combine keyword and semantic similarity, which marries the benefits of both approaches. Furthermore, many vector stores have [maximal marginal relevance](https://python.langchain.com/v0.1/docs/modules/model_io/prompts/example_selectors/mmr/), which attempts to diversify the results of a search to avoid returning similar and redundant documents.
 | Name | When to use | Description |
 |-------------------|----------------------------------------------------------|-------------|
 | [ColBERT](/docs/integrations/providers/ragatouille/#using-colbert-as-a-reranker) | When higher granularity embeddings are needed. | ColBERT uses contextually influenced embeddings for each token in the document and query to get a granular query-document similarity score. |
 | [Hybrid search](/docs/integrations/retrievers/pinecone_hybrid_search/) | When combining keyword-based and semantic similarity. | Hybrid search combines keyword and semantic similarity, marrying the benefits of both approaches. |
-| [Maximal Marginal Relevance (MMR) ](/docs/integrations/vectorstores/pinecone/#maximal-marginal-relevance-searches) | When needing to diversify search results. | MMR attempts to diversify the results of a search to avoid returning similar and redundant documents. |
+| [Maximal Marginal Relevance (MMR)](/docs/integrations/vectorstores/pinecone/#maximal-marginal-relevance-searches) | When needing to diversify search results. | MMR attempts to diversify the results of a search to avoid returning similar and redundant documents. |
 :::tip
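
As a rough illustration of the MMR row above (the vector store, corpus, and parameter values are assumptions, not taken from this commit), most LangChain vector stores expose MMR through `as_retriever`:

```python
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings

# Illustrative toy corpus; any vector store with MMR support behaves similarly.
vectorstore = FAISS.from_texts(
    [
        "Hybrid search combines keyword and semantic similarity.",
        "MMR diversifies results to avoid redundant documents.",
        "ColBERT scores queries against per-token document embeddings.",
    ],
    embedding=OpenAIEmbeddings(),
)

# fetch_k candidates are retrieved by similarity, then k diverse ones are kept.
retriever = vectorstore.as_retriever(
    search_type="mmr",
    search_kwargs={"k": 2, "fetch_k": 10},
)
docs = retriever.invoke("How can I avoid near-duplicate search results?")
```

Where supported, a `lambda_mult` entry in `search_kwargs` trades relevance against diversity (closer to 1 favors pure similarity, closer to 0 favors diversity).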
@@ -1012,7 +1012,7 @@ We've found that graphs are a great way to reliably express logical flows and ha
 See several videos and cookbooks showcasing RAG with LangGraph:
 - [LangGraph Corrective RAG](https://www.youtube.com/watch?v=E2shqsYwxck)
 - [LangGraph combining Adaptive, Self-RAG, and Corrective RAG](https://www.youtube.com/watch?v=-ROS6gfYIts)
-- [Cookbooks for RAG using LangGraph ](https://github.com/langchain-ai/langgraph/tree/main/examples/rag)
+- [Cookbooks for RAG using LangGraph](https://github.com/langchain-ai/langgraph/tree/main/examples/rag)
 See our LangGraph RAG recipes with partners:
 - [Meta](https://github.com/meta-llama/llama-recipes/tree/main/recipes/use_cases/agents/langchain)


@@ -323,7 +323,7 @@
 "id": "fa0f589d",
 "metadata": {},
 "source": [
-"# Routing by semantic similarity\n",
+"## Routing by semantic similarity\n",
 "\n",
 "One especially useful technique is to use embeddings to route a query to the most relevant prompt. Here's an example."
 ]
@@ -371,7 +371,7 @@
 "chain = (\n",
 " {\"query\": RunnablePassthrough()}\n",
 " | RunnableLambda(prompt_router)\n",
-" | ChatAnthropic(model_name=\"claude-3-haiku-20240307\")\n",
+" | ChatAnthropic(model=\"claude-3-haiku-20240307\")\n",
 " | StrOutputParser()\n",
 ")"
 ]
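
The notebook hunk above pipes a `prompt_router` into the chat model but does not show its definition. A plausible sketch of such an embedding-based router, with the candidate prompts and variable names assumed for illustration rather than taken from the notebook, is:

```python
import numpy as np

from langchain_core.prompts import PromptTemplate
from langchain_openai import OpenAIEmbeddings

# Two illustrative candidate prompts; the query is routed to the closer one.
physics_prompt = PromptTemplate.from_template(
    "You are a physics professor. Answer concisely:\n\n{query}"
)
math_prompt = PromptTemplate.from_template(
    "You are a mathematician. Reason step by step:\n\n{query}"
)

embeddings = OpenAIEmbeddings()
prompt_templates = [physics_prompt, math_prompt]
prompt_embeddings = np.array(
    embeddings.embed_documents([p.template for p in prompt_templates])
)


def prompt_router(input):
    # Embed the incoming query and return the most similar prompt template.
    query_embedding = np.array(embeddings.embed_query(input["query"]))
    sims = prompt_embeddings @ query_embedding / (
        np.linalg.norm(prompt_embeddings, axis=1) * np.linalg.norm(query_embedding)
    )
    return prompt_templates[int(sims.argmax())]
```

Because LCEL invokes a runnable returned from a `RunnableLambda` with the original input, the selected `PromptTemplate` is formatted with the incoming `{"query": ...}` before being passed on to `ChatAnthropic` in the chain shown in the hunk.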