mirror of https://github.com/hwchase17/langchain.git
synced 2025-09-01 19:12:42 +00:00
docs,langchain-community: Fix typos in docs and code (#30541)
Fix typos
@@ -127,7 +127,7 @@
 "id": "c89e2045-9244-43e6-bf3f-59af22658529",
 "metadata": {},
 "source": [
-"Now that we've got a [model](/docs/concepts/chat_models/), [retriver](/docs/concepts/retrievers/) and [prompt](/docs/concepts/prompt_templates/), let's chain them all together. Following the how-to guide on [adding citations](/docs/how_to/qa_citations) to a RAG application, we'll make it so our chain returns both the answer and the retrieved Documents. This uses the same [LangGraph](/docs/concepts/architecture/#langgraph) implementation as in the [RAG Tutorial](/docs/tutorials/rag)."
+"Now that we've got a [model](/docs/concepts/chat_models/), [retriever](/docs/concepts/retrievers/) and [prompt](/docs/concepts/prompt_templates/), let's chain them all together. Following the how-to guide on [adding citations](/docs/how_to/qa_citations) to a RAG application, we'll make it so our chain returns both the answer and the retrieved Documents. This uses the same [LangGraph](/docs/concepts/architecture/#langgraph) implementation as in the [RAG Tutorial](/docs/tutorials/rag)."
 ]
 },
 {
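The cell edited above describes a chain that returns both the answer and the retrieved Documents. A minimal sketch of that idea in plain LCEL, assuming the `retriever`, `prompt` (with "context" and "question" variables), and `llm` objects defined earlier in that notebook; the notebook itself uses the LangGraph implementation from the RAG tutorial, so this is only an illustration:

from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough


def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)


# Step 1: add the retrieved Documents to the input dict under "context".
# Step 2: add the generated answer while leaving the Documents in the output.
rag_chain = RunnablePassthrough.assign(
    context=lambda x: retriever.invoke(x["question"])
) | RunnablePassthrough.assign(
    answer=(
        {
            "context": lambda x: format_docs(x["context"]),
            "question": lambda x: x["question"],
        }
        | prompt
        | llm
        | StrOutputParser()
    )
)

result = rag_chain.invoke({"question": "What is task decomposition?"})
# result["answer"] is the generated text; result["context"] is the list of retrieved Documents.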
@@ -270,7 +270,7 @@
 "source": [
 "## Retrieval with query analysis\n",
 "\n",
-"So how would we include this in a chain? One thing that will make this a lot easier is if we call our retriever asyncronously - this will let us loop over the queries and not get blocked on the response time."
+"So how would we include this in a chain? One thing that will make this a lot easier is if we call our retriever asynchronously - this will let us loop over the queries and not get blocked on the response time."
 ]
 },
 {
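Because retrievers implement the Runnable interface, the asynchronous calls that cell describes can be sketched with `ainvoke` plus `asyncio.gather`; this assumes a `retriever` and a list of queries produced by the query-analysis step:

import asyncio


async def retrieve_all(queries: list[str]):
    # Fire off one retrieval per generated query and await them concurrently,
    # so total latency is roughly that of the slowest call rather than the sum.
    results = await asyncio.gather(*(retriever.ainvoke(q) for q in queries))
    # Flatten and de-duplicate documents across queries (by page content).
    seen, docs = set(), []
    for result in results:
        for doc in result:
            if doc.page_content not in seen:
                seen.add(doc.page_content)
                docs.append(doc)
    return docs


# In a notebook cell: docs = await retrieve_all(["query one", "query two"])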
@@ -24,7 +24,7 @@
 "\n",
 "Note that the map step is typically parallelized over the input documents. This strategy is especially effective when understanding of a sub-document does not rely on preceeding context. For example, when summarizing a corpus of many, shorter documents.\n",
 "\n",
-"[LangGraph](https://langchain-ai.github.io/langgraph/), built on top of `langchain-core`, suports [map-reduce](https://langchain-ai.github.io/langgraph/how-tos/map-reduce/) workflows and is well-suited to this problem:\n",
+"[LangGraph](https://langchain-ai.github.io/langgraph/), built on top of `langchain-core`, supports [map-reduce](https://langchain-ai.github.io/langgraph/how-tos/map-reduce/) workflows and is well-suited to this problem:\n",
 "\n",
 "- LangGraph allows for individual steps (such as successive summarizations) to be streamed, allowing for greater control of execution;\n",
 "- LangGraph's [checkpointing](https://langchain-ai.github.io/langgraph/how-tos/persistence/) supports error recovery, extending with human-in-the-loop workflows, and easier incorporation into conversational applications.\n",
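For reference, a compact sketch of the map-reduce pattern that cell points to, using LangGraph's Send API. It assumes an `llm` chat model is already defined; the state and node names are illustrative, not the tutorial's exact code:

import operator
from typing import Annotated, TypedDict

from langgraph.constants import Send
from langgraph.graph import StateGraph, START, END


class OverallState(TypedDict):
    contents: list[str]                            # input documents (map step input)
    summaries: Annotated[list[str], operator.add]  # per-document summaries, accumulated
    final_summary: str


class SummaryState(TypedDict):
    content: str


def map_summaries(state: OverallState):
    # Fan out: one `summarize` invocation per document, executed in parallel.
    return [Send("summarize", {"content": c}) for c in state["contents"]]


def summarize(state: SummaryState):
    response = llm.invoke(f"Summarize the following text:\n\n{state['content']}")
    return {"summaries": [response.content]}


def reduce_summaries(state: OverallState):
    joined = "\n\n".join(state["summaries"])
    response = llm.invoke(f"Combine these summaries into one:\n\n{joined}")
    return {"final_summary": response.content}


graph = StateGraph(OverallState)
graph.add_node("summarize", summarize)
graph.add_node("reduce", reduce_summaries)
graph.add_conditional_edges(START, map_summaries, ["summarize"])
graph.add_edge("summarize", "reduce")
graph.add_edge("reduce", END)
app = graph.compile()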
@@ -48,7 +48,7 @@
 "print(f\"Response provided by LLM with system prompt set is : {sys_resp}\")\n",
 "\n",
 "# The top_responses parameter can give multiple responses based on its parameter value\n",
-"# This below code retrive top 10 miner's response all the response are in format of json\n",
+"# This below code retrieve top 10 miner's response all the response are in format of json\n",
 "\n",
 "# Json response structure is\n",
 "\"\"\" {\n",
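The comment being fixed here describes NIBittensorLLM returning the top-N miners' answers as JSON when `top_responses` is set. A hedged sketch of that usage; the parameter behavior and JSON shape are taken from the notebook text, so verify against the current integration docs:

import json

from langchain_community.llms import NIBittensorLLM

# Per the notebook text: top_responses asks for the top-N miner responses,
# which come back as a JSON-formatted string rather than a single completion.
multi_response_llm = NIBittensorLLM(top_responses=10)
raw = multi_response_llm.invoke("What is the capital of France?")
top_miner_responses = json.loads(raw)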
@@ -198,7 +198,7 @@
 "\n",
 " Args:\n",
 " query: str: The search query to be used. Try to keep this specific and short, e.g. a specific topic or author name\n",
-" itemType: Optional. Type of item to search for (e.g. \"book\" or \"journalArticle\"). Multiple types can be passed as a string seperated by \"||\", e.g. \"book || journalArticle\". Defaults to all types.\n",
+" itemType: Optional. Type of item to search for (e.g. \"book\" or \"journalArticle\"). Multiple types can be passed as a string separated by \"||\", e.g. \"book || journalArticle\". Defaults to all types.\n",
 " tag: Optional. For searching over tags attached to library items. If documents tagged with multiple tags are to be retrieved, pass them as a list. If documents with any of the tags are to be retrieved, pass them as a string separated by \"||\", e.g. \"tag1 || tag2\"\n",
 " qmode: Search mode to use. Changes what the query searches over. \"everything\" includes full-text content. \"titleCreatorYear\" to search over title, authors and year. Defaults to \"everything\".\n",
 " since: Return only objects modified after the specified library version. Defaults to return everything.\n",
@@ -70,7 +70,7 @@
 "Gathers all schema information for the connected database or a specific schema. Critical for the agent when determining actions. \n",
 "\n",
 "### `cassandra_db_select_table_data`\n",
-"Selects data from a specific keyspace and table. The agent can pass paramaters for a predicate and limits on the number of returned records. \n",
+"Selects data from a specific keyspace and table. The agent can pass parameters for a predicate and limits on the number of returned records. \n",
 "\n",
 "### `cassandra_db_query`\n",
 "Expiriemental alternative to `cassandra_db_select_table_data` which takes a query string completely formed by the agent instead of parameters. *Warning*: This can lead to unusual queries that may not be as performant(or even work). This may be removed in future releases. If it does something cool, we want to know about that too. You never know!"
@@ -123,7 +123,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Finally, Query and retrive data"
+"Finally, Query and retrieve data"
 ]
 },
 {
@@ -436,7 +436,7 @@
 }
 ],
 "source": [
-"##delete and get function need to maintian docids\n",
+"##delete and get function need to maintain docids\n",
 "##your docid\n",
 "\n",
 "res_d = vearch_standalone.delete(\n",
@@ -19,7 +19,7 @@
 "\n",
 "In the reduce step, `MapReduceDocumentsChain` supports a recursive \"collapsing\" of the summaries: the inputs would be partitioned based on a token limit, and summaries would be generated of the partitions. This step would be repeated until the total length of the summaries was within a desired limit, allowing for the summarization of arbitrary-length text. This is particularly useful for models with smaller context windows.\n",
 "\n",
-"LangGraph suports [map-reduce](https://langchain-ai.github.io/langgraph/how-tos/map-reduce/) workflows, and confers a number of advantages for this problem:\n",
+"LangGraph supports [map-reduce](https://langchain-ai.github.io/langgraph/how-tos/map-reduce/) workflows, and confers a number of advantages for this problem:\n",
 "\n",
 "- LangGraph allows for individual steps (such as successive summarizations) to be streamed, allowing for greater control of execution;\n",
 "- LangGraph's [checkpointing](https://langchain-ai.github.io/langgraph/how-tos/persistence/) supports error recovery, extending with human-in-the-loop workflows, and easier incorporation into conversational applications.\n",
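As a companion to the token-limit collapsing that cell describes, a small sketch of partitioning summaries by a token budget. It assumes an `llm` and a list of summary Documents named `summary_docs`; `split_list_of_docs` is a helper in `langchain.chains.combine_documents.reduce`, and the budget value is illustrative:

from langchain_core.documents import Document
from langchain.chains.combine_documents.reduce import split_list_of_docs


def length_function(docs: list[Document]) -> int:
    # Total token count for a batch of documents, using the model's own tokenizer.
    return sum(llm.get_num_tokens(doc.page_content) for doc in docs)


token_max = 1000  # illustrative per-batch budget
# Partition the running summaries into batches that each fit the budget; each
# batch is then re-summarized, and the process repeats until one summary fits.
batches = split_list_of_docs(summary_docs, length_function, token_max)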