Mirror of https://github.com/hwchase17/langchain.git, synced 2026-02-05 00:30:18 +00:00

Comparing commits: eugene/tra ... eugene/ind (16 commits)

- 89b0cd8d6c
- d18651768a
- 2c180d645e
- f152d6ed3d
- 4d6f28cdde
- bf8d4716a7
- 4ec5fdda8d
- ee579c77c1
- 9787552b00
- 8b84457b17
- e0186df56b
- fcd018be47
- 0990ab146c
- ee8aa54f53
- 5b7d5f7729
- e0889384d9
@@ -85,8 +85,13 @@ Input and output schemas give every LCEL chain Pydantic and JSONSchema schemas i

As your chains get more and more complex, it becomes increasingly important to understand what exactly is happening at every step.

With LCEL, **all** steps are automatically logged to [LangSmith](https://docs.smith.langchain.com/) for maximum observability and debuggability.

[**Seamless LangServe deployment**](/docs/langserve)

Any chain created with LCEL can be easily deployed using [LangServe](/docs/langserve).

LCEL aims to provide consistency around behavior and customization over legacy subclassed chains such as `LLMChain` and
`ConversationalRetrievalChain`. Many of these legacy chains hide important details like prompts, and as a wider variety
of viable models emerges, customization has become increasingly important.

If you are currently using one of these legacy chains, see [this guide on how to migrate](/docs/how_to/migrate_chains/).

For guides on how to do specific tasks with LCEL, check out [the relevant how-to guides](/docs/how_to/#langchain-expression-language-lcel).
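As a minimal sketch of the kind of consistency LCEL provides (not part of this diff; it assumes an OpenAI chat model is configured), the same three components that a legacy `LLMChain` hid are composed explicitly:

```python
# A hedged sketch: an explicit prompt | model | parser pipeline replacing
# the opaque internals of a legacy subclassed chain.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

chain = ChatPromptTemplate.from_template("Summarize: {text}") | ChatOpenAI() | StrOutputParser()
chain.invoke({"text": "LCEL composes runnables with the | operator."})
```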
### Runnable interface
<span data-heading-keywords="invoke,runnable"></span>
@@ -516,6 +521,8 @@ Generally, when designing tools to be used by a chat model or LLM, it is importa

For specifics on how to use tools, see the [relevant how-to guides here](/docs/how_to/#tools).

To use an existing pre-built tool, see [here](/docs/integrations/tools/) for a list of pre-built tools.

### Toolkits

Toolkits are collections of tools that are designed to be used together for specific tasks. They have convenient loading methods.
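As an illustrative sketch of that loading pattern (the `SQLDatabaseToolkit` shown here is just one example, and the database URI is a placeholder):

```python
# A hedged sketch: toolkits bundle related tools and expose them via get_tools().
from langchain_community.agent_toolkits import SQLDatabaseToolkit
from langchain_community.utilities import SQLDatabase
from langchain_openai import ChatOpenAI

db = SQLDatabase.from_uri("sqlite:///example.db")  # placeholder database
toolkit = SQLDatabaseToolkit(db=db, llm=ChatOpenAI())
tools = toolkit.get_tools()  # a list of related, ready-to-use tools
```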
@@ -43,6 +43,7 @@ This highlights functionality that is core to using LangChain.

- [How to: create a dynamic (self-constructing) chain](/docs/how_to/dynamic_chain/)
- [How to: inspect runnables](/docs/how_to/inspect)
- [How to: add fallbacks to a runnable](/docs/how_to/fallbacks)
- [How to: migrate chains to LCEL](/docs/how_to/migrate_chains)

## Components
@@ -182,7 +183,7 @@ Indexing is the process of keeping your vectorstore in-sync with the underlying

### Tools

LangChain [Tools](/docs/concepts/#tools) contain a description of the tool (to pass to the language model) as well as the implementation of the function to call.
LangChain [Tools](/docs/concepts/#tools) contain a description of the tool (to pass to the language model) as well as the implementation of the function to call. Refer [here](/docs/integrations/tools/) for a list of pre-built tools. A sketch of a custom tool follows this list.

- [How to: create custom tools](/docs/how_to/custom_tools)
- [How to: use built-in tools and built-in toolkits](/docs/how_to/tools_builtin)
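As a minimal sketch of a custom tool (the function, its docstring, and the sample call are illustrative):

```python
# The @tool decorator derives the tool's name, description, and argument
# schema from the function, so a model can decide when to call it.
from langchain_core.tools import tool


@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b


multiply.invoke({"a": 6, "b": 7})  # -> 42
```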
@@ -204,7 +205,7 @@ LangChain [Tools](/docs/concepts/#tools) contain a description of the tool (to p

:::note

For in-depth how-to guides for agents, please check out the [LangGraph](https://github.com/langchain-ai/langgraph) documentation.
For in-depth how-to guides for agents, please check out the [LangGraph](https://langchain-ai.github.io/langgraph/) documentation.

:::
798 docs/docs/how_to/migrate_chains.ipynb Normal file
@@ -0,0 +1,798 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "f331037f-be3f-4782-856f-d55dab952488",
   "metadata": {},
   "source": [
    "# How to migrate chains to LCEL\n",
    "\n",
    ":::info Prerequisites\n",
    "\n",
    "This guide assumes familiarity with the following concepts:\n",
    "- [LangChain Expression Language](/docs/concepts#langchain-expression-language-lcel)\n",
    "\n",
    ":::\n",
    "\n",
    "LCEL is designed to streamline the process of building useful apps with LLMs and combining related components. It does this by providing:\n",
    "\n",
    "1. **A unified interface**: Every LCEL object implements the `Runnable` interface, which defines a common set of invocation methods (`invoke`, `batch`, `stream`, `ainvoke`, ...). This makes it possible to also automatically and consistently support useful operations like streaming of intermediate steps and batching, since every chain composed of LCEL objects is itself an LCEL object.\n",
    "2. **Composition primitives**: LCEL provides a number of primitives that make it easy to compose chains, parallelize components, add fallbacks, dynamically configure chain internals, and more.\n",
    "\n",
    "LangChain maintains a number of legacy abstractions. Many of these can be reimplemented via short combinations of LCEL primitives. Doing so confers some general advantages:\n",
    "\n",
    "- The resulting chains typically implement the full `Runnable` interface, including streaming and asynchronous support where appropriate;\n",
    "- The chains may be more easily extended or modified;\n",
    "- The parameters of the chain are typically surfaced for easier customization (e.g., prompts) over previous versions, which tended to be subclasses and had opaque parameters and internals.\n",
    "\n",
    "The LCEL implementations can be slightly more verbose, but there are significant benefits in transparency and customizability.\n",
    "\n",
    "In this guide we review LCEL implementations of common legacy abstractions. Where appropriate, we link out to separate guides with more detail."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b99b47ec",
   "metadata": {},
   "outputs": [],
   "source": [
    "%pip install --upgrade --quiet langchain-community langchain langchain-openai faiss-cpu"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "717c8673",
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "from getpass import getpass\n",
    "\n",
    "os.environ[\"OPENAI_API_KEY\"] = getpass()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e3621b62-a037-42b8-8faa-59575608bb8b",
   "metadata": {},
   "source": [
    "## `LLMChain`\n",
    "<span data-heading-keywords=\"llmchain\"></span>\n",
    "\n",
    "[`LLMChain`](https://api.python.langchain.com/en/latest/chains/langchain.chains.llm.LLMChain.html) combined a prompt template, LLM, and output parser into a class.\n",
    "\n",
    "Some advantages of switching to the LCEL implementation are:\n",
    "\n",
    "- Clarity around contents and parameters. The legacy `LLMChain` contains a default output parser and other options.\n",
    "- Easier streaming. `LLMChain` only supports streaming via callbacks.\n",
    "- Easier access to raw message outputs, if desired. `LLMChain` only exposes these via a parameter or a callback.\n",
    "\n",
    "import { ColumnContainer, Column } from \"@theme/Columns\";\n",
    "\n",
    "<ColumnContainer>\n",
    "\n",
    "<Column>\n",
    "\n",
    "#### Legacy\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "e628905c-430e-4e4a-9d7c-c91d2f42052e",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'adjective': 'funny',\n",
       " 'text': \"Why couldn't the bicycle find its way home?\\n\\nBecause it lost its bearings!\"}"
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain.chains import LLMChain\n",
    "from langchain_core.prompts import ChatPromptTemplate\n",
    "from langchain_openai import ChatOpenAI\n",
    "\n",
    "prompt = ChatPromptTemplate.from_messages(\n",
    "    [(\"user\", \"Tell me a {adjective} joke\")],\n",
    ")\n",
    "\n",
    "chain = LLMChain(llm=ChatOpenAI(), prompt=prompt)\n",
    "\n",
    "chain({\"adjective\": \"funny\"})"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cdc3b527-c09e-4c77-9711-c3cc4506cd95",
   "metadata": {},
   "source": [
    "\n",
    "</Column>\n",
    "\n",
    "<Column>\n",
    "\n",
    "#### LCEL\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "0d2a7cf8-1bc7-405c-bb0d-f2ab2ba3b6ab",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\"Why couldn't the bicycle stand up by itself?\\n\\nBecause it was two tired!\""
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain_core.output_parsers import StrOutputParser\n",
    "from langchain_core.prompts import ChatPromptTemplate\n",
    "from langchain_openai import ChatOpenAI\n",
    "\n",
    "prompt = ChatPromptTemplate.from_messages(\n",
    "    [(\"user\", \"Tell me a {adjective} joke\")],\n",
    ")\n",
    "\n",
    "chain = prompt | ChatOpenAI() | StrOutputParser()\n",
    "\n",
    "chain.invoke({\"adjective\": \"funny\"})"
   ]
  },
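  {
   "cell_type": "markdown",
   "id": "lcel-interface-sketch",
   "metadata": {},
   "source": [
    "Because the LCEL chain above is a `Runnable`, the rest of the unified interface comes for free. A minimal sketch (reusing `chain` from the previous cell):\n",
    "\n",
    "```python\n",
    "chain.batch([{\"adjective\": \"funny\"}, {\"adjective\": \"silly\"}])  # parallel batch\n",
    "\n",
    "for chunk in chain.stream({\"adjective\": \"funny\"}):  # incremental streaming\n",
    "    print(chunk, end=\"\")\n",
    "```"
   ]
  },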
  {
   "cell_type": "markdown",
   "id": "3c0b0513-77b8-4371-a20e-3e487cec7e7f",
   "metadata": {},
   "source": [
    "\n",
    "</Column>\n",
    "</ColumnContainer>\n",
    "\n",
    "Note that `LLMChain` by default returns a `dict` containing both the input and the output. If this behavior is desired, we can replicate it using another LCEL primitive, [`RunnablePassthrough`](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "529206c5-abbe-4213-9e6c-3b8586c8000d",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'adjective': 'funny',\n",
       " 'text': \"Why couldn't the bicycle stand up by itself?\\n\\nBecause it was two tired!\"}"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain_core.runnables import RunnablePassthrough\n",
    "\n",
    "outer_chain = RunnablePassthrough().assign(text=chain)\n",
    "\n",
    "outer_chain.invoke({\"adjective\": \"funny\"})"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "29d2e26c-2854-4971-9c2b-613450993921",
   "metadata": {},
   "source": [
    "See [this tutorial](/docs/tutorials/llm_chain) for more detail on building with prompt templates, LLMs, and output parsers."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "00df631d-5121-4918-94aa-b88acce9b769",
   "metadata": {},
   "source": [
    "## `ConversationChain`\n",
    "<span data-heading-keywords=\"conversationchain\"></span>\n",
    "\n",
    "[`ConversationChain`](https://api.python.langchain.com/en/latest/chains/langchain.chains.conversation.base.ConversationChain.html) incorporates a memory of previous messages to sustain a stateful conversation.\n",
    "\n",
    "Some advantages of switching to the LCEL implementation are:\n",
    "\n",
    "- Innate support for threads/separate sessions. To make this work with `ConversationChain`, you'd need to instantiate a separate memory class outside the chain.\n",
    "- More explicit parameters. `ConversationChain` contains a hidden default prompt, which can cause confusion.\n",
    "- Streaming support. `ConversationChain` only supports streaming via callbacks.\n",
    "\n",
    "`RunnableWithMessageHistory` implements sessions via configuration parameters. It should be instantiated with a callable that returns a [chat message history](https://api.python.langchain.com/en/latest/chat_history/langchain_core.chat_history.BaseChatMessageHistory.html). By default, it expects this function to take a single argument, `session_id`.\n",
    "\n",
    "<ColumnContainer>\n",
    "<Column>\n",
    "\n",
    "#### Legacy\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "4f2cc6dc-d70a-4c13-9258-452f14290da6",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'input': 'how are you?',\n",
       " 'history': '',\n",
       " 'response': \"Arrr, I be doin' well, me matey! Just sailin' the high seas in search of treasure and adventure. How can I assist ye today?\"}"
      ]
     },
     "execution_count": 15,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain.chains import ConversationChain\n",
    "from langchain.memory import ConversationBufferMemory\n",
    "from langchain_core.prompts import ChatPromptTemplate\n",
    "from langchain_openai import ChatOpenAI\n",
    "\n",
    "template = \"\"\"\n",
    "You are a pirate. Answer the following questions as best you can.\n",
    "Chat history: {history}\n",
    "Question: {input}\n",
    "\"\"\"\n",
    "\n",
    "prompt = ChatPromptTemplate.from_template(template)\n",
    "\n",
    "memory = ConversationBufferMemory()\n",
    "\n",
    "chain = ConversationChain(\n",
    "    llm=ChatOpenAI(),\n",
    "    memory=memory,\n",
    "    prompt=prompt,\n",
    ")\n",
    "\n",
    "chain({\"input\": \"how are you?\"})"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f8e36b0e-c7dc-4130-a51b-189d4b756c7f",
   "metadata": {},
   "source": [
    "</Column>\n",
    "\n",
    "<Column>\n",
    "\n",
    "#### LCEL\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "173e1a9c-2a18-4669-b0de-136f39197786",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\"Arr, matey! I be sailin' the high seas with me crew, searchin' for buried treasure and adventure! How be ye doin' on this fine day?\""
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain_core.chat_history import InMemoryChatMessageHistory\n",
    "from langchain_core.output_parsers import StrOutputParser\n",
    "from langchain_core.prompts import ChatPromptTemplate\n",
    "from langchain_core.runnables.history import RunnableWithMessageHistory\n",
    "from langchain_openai import ChatOpenAI\n",
    "\n",
    "prompt = ChatPromptTemplate.from_messages(\n",
    "    [\n",
    "        (\"system\", \"You are a pirate. Answer the following questions as best you can.\"),\n",
    "        (\"placeholder\", \"{chat_history}\"),\n",
    "        (\"human\", \"{input}\"),\n",
    "    ]\n",
    ")\n",
    "\n",
    "history = InMemoryChatMessageHistory()\n",
    "\n",
    "chain = prompt | ChatOpenAI() | StrOutputParser()\n",
    "\n",
    "wrapped_chain = RunnableWithMessageHistory(chain, lambda x: history)\n",
    "\n",
    "wrapped_chain.invoke(\n",
    "    {\"input\": \"how are you?\"},\n",
    "    config={\"configurable\": {\"session_id\": \"42\"}},\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6b386ce6-895e-442c-88f3-7bec0ab9f401",
   "metadata": {},
   "source": [
    "\n",
    "</Column>\n",
    "</ColumnContainer>\n",
    "\n",
    "The above example uses the same `history` for all sessions. The example below shows how to use a different chat history for each session."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "4e05994f-1fbc-4699-bf2e-62cb0e4deeb8",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "AIMessage(content=\"Ahoy there! What be ye wantin' from this old pirate?\", response_metadata={'token_usage': {'completion_tokens': 15, 'prompt_tokens': 29, 'total_tokens': 44}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-1846d5f5-0dda-43b6-bb49-864e541f9c29-0', usage_metadata={'input_tokens': 29, 'output_tokens': 15, 'total_tokens': 44})"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain_core.chat_history import BaseChatMessageHistory\n",
    "from langchain_core.runnables.history import RunnableWithMessageHistory\n",
    "\n",
    "store = {}\n",
    "\n",
    "\n",
    "def get_session_history(session_id: str) -> BaseChatMessageHistory:\n",
    "    if session_id not in store:\n",
    "        store[session_id] = InMemoryChatMessageHistory()\n",
    "    return store[session_id]\n",
    "\n",
    "\n",
    "chain = prompt | ChatOpenAI() | StrOutputParser()\n",
    "\n",
    "wrapped_chain = RunnableWithMessageHistory(chain, get_session_history)\n",
    "\n",
    "wrapped_chain.invoke(\"Hello!\", config={\"configurable\": {\"session_id\": \"abc123\"}})"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c36ebecb",
   "metadata": {},
   "source": [
    "See [this tutorial](/docs/tutorials/chatbot) for a more end-to-end guide on building with [`RunnableWithMessageHistory`](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.history.RunnableWithMessageHistory.html).\n",
    "\n",
    "## `RetrievalQA`\n",
    "<span data-heading-keywords=\"retrievalqa\"></span>\n",
    "\n",
    "The [`RetrievalQA`](https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval_qa.base.RetrievalQA.html) chain performed natural-language question answering over a data source using retrieval-augmented generation.\n",
    "\n",
    "Some advantages of switching to the LCEL implementation are:\n",
    "\n",
    "- Easier customizability. Details such as the prompt and how documents are formatted are only configurable via specific parameters in the `RetrievalQA` chain.\n",
    "- More easily return source documents.\n",
    "- Support for runnable methods like streaming and async operations.\n",
    "\n",
    "Now let's look at them side-by-side. We'll use the same ingestion code to load a [blog post by Lilian Weng](https://lilianweng.github.io/posts/2023-06-23-agent/) on autonomous agents into a local vector store:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "1efbe16e",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Load docs\n",
    "from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
    "from langchain_community.document_loaders import WebBaseLoader\n",
    "from langchain_community.vectorstores import FAISS\n",
    "from langchain_openai.chat_models import ChatOpenAI\n",
    "from langchain_openai.embeddings import OpenAIEmbeddings\n",
    "\n",
    "loader = WebBaseLoader(\"https://lilianweng.github.io/posts/2023-06-23-agent/\")\n",
    "data = loader.load()\n",
    "\n",
    "# Split into chunks\n",
    "text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)\n",
    "all_splits = text_splitter.split_documents(data)\n",
    "\n",
    "# Store splits\n",
    "vectorstore = FAISS.from_documents(documents=all_splits, embedding=OpenAIEmbeddings())\n",
    "\n",
    "# LLM\n",
    "llm = ChatOpenAI()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c7e16438",
   "metadata": {},
   "source": [
    "<ColumnContainer>\n",
    "\n",
    "<Column>\n",
    "\n",
    "#### Legacy"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "id": "43bf55a0",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'query': 'What are autonomous agents?',\n",
       " 'result': 'Autonomous agents are LLM-empowered agents that handle autonomous design, planning, and performance of complex tasks, such as scientific experiments. These agents can browse the Internet, read documentation, execute code, call robotics experimentation APIs, and leverage other LLMs. They are capable of reasoning and planning ahead for complicated tasks by breaking them down into smaller steps.'}"
      ]
     },
     "execution_count": 22,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain import hub\n",
    "from langchain.chains import RetrievalQA\n",
    "\n",
    "# See full prompt at https://smith.langchain.com/hub/rlm/rag-prompt\n",
    "prompt = hub.pull(\"rlm/rag-prompt\")\n",
    "\n",
    "qa_chain = RetrievalQA.from_llm(\n",
    "    llm, retriever=vectorstore.as_retriever(), prompt=prompt\n",
    ")\n",
    "\n",
    "qa_chain(\"What are autonomous agents?\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "081948e5",
   "metadata": {},
   "source": [
    "</Column>\n",
    "\n",
    "<Column>\n",
    "\n",
    "#### LCEL\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "id": "9efcc931",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'Autonomous agents are agents that can handle autonomous design, planning, and performance of complex tasks, such as scientific experiments. They can browse the Internet, read documentation, execute code, call robotics experimentation APIs, and leverage other language model models. These agents use reasoning steps to develop solutions to specific tasks, like creating a novel anticancer drug.'"
      ]
     },
     "execution_count": 17,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain import hub\n",
    "from langchain_core.output_parsers import StrOutputParser\n",
    "from langchain_core.runnables import RunnablePassthrough\n",
    "\n",
    "# See full prompt at https://smith.langchain.com/hub/rlm/rag-prompt\n",
    "prompt = hub.pull(\"rlm/rag-prompt\")\n",
    "\n",
    "\n",
    "def format_docs(docs):\n",
    "    return \"\\n\\n\".join(doc.page_content for doc in docs)\n",
    "\n",
    "\n",
    "qa_chain = (\n",
    "    {\n",
    "        \"context\": vectorstore.as_retriever() | format_docs,\n",
    "        \"question\": RunnablePassthrough(),\n",
    "    }\n",
    "    | prompt\n",
    "    | llm\n",
    "    | StrOutputParser()\n",
    ")\n",
    "\n",
    "qa_chain.invoke(\"What are autonomous agents?\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d6f44fe8",
   "metadata": {},
   "source": [
    "</Column>\n",
    "</ColumnContainer>\n",
    "\n",
    "The LCEL implementation exposes the internals of what's happening around retrieving, formatting documents, and passing them through a prompt to the LLM, but it is more verbose. You can customize and wrap this composition logic in a helper function, or use the higher-level [`create_retrieval_chain`](https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval.create_retrieval_chain.html) and [`create_stuff_documents_chain`](https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.stuff.create_stuff_documents_chain.html) helper methods:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "id": "5fe42761",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'input': 'What are autonomous agents?',\n",
       " 'context': [Document(page_content='Boiko et al. (2023) also looked into LLM-empowered agents for scientific discovery, to handle autonomous design, planning, and performance of complex scientific experiments. This agent can use tools to browse the Internet, read documentation, execute code, call robotics experimentation APIs and leverage other LLMs.\\nFor example, when requested to \"develop a novel anticancer drug\", the model came up with the following reasoning steps:', metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agent’s brain, complemented by several key components:', 'language': 'en'}),\n",
       " Document(page_content='Weng, Lilian. (Jun 2023). “LLM-powered Autonomous Agents”. Lil’Log. https://lilianweng.github.io/posts/2023-06-23-agent/.', metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agent’s brain, complemented by several key components:', 'language': 'en'}),\n",
       " Document(page_content='Fig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#', metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agent’s brain, complemented by several key components:', 'language': 'en'}),\n",
       " Document(page_content=\"LLM Powered Autonomous Agents | Lil'Log\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nLil'Log\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nPosts\\n\\n\\n\\n\\nArchive\\n\\n\\n\\n\\nSearch\\n\\n\\n\\n\\nTags\\n\\n\\n\\n\\nFAQ\\n\\n\\n\\n\\nemojisearch.app\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n LLM Powered Autonomous Agents\\n \\nDate: June 23, 2023 | Estimated Reading Time: 31 min | Author: Lilian Weng\\n\\n\\n \\n\\n\\nTable of Contents\\n\\n\\n\\nAgent System Overview\\n\\nComponent One: Planning\\n\\nTask Decomposition\\n\\nSelf-Reflection\\n\\n\\nComponent Two: Memory\\n\\nTypes of Memory\\n\\nMaximum Inner Product Search (MIPS)\", metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agent’s brain, complemented by several key components:', 'language': 'en'})],\n",
       " 'answer': 'Autonomous agents are entities that can operate independently, making decisions and taking actions without direct human intervention. These agents can perform tasks such as planning, executing complex experiments, and leveraging various tools and resources to achieve objectives. In the context provided, LLM-powered autonomous agents are specifically designed for scientific discovery, capable of handling tasks like designing novel anticancer drugs through reasoning steps.'}"
      ]
     },
     "execution_count": 20,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain import hub\n",
    "from langchain.chains import create_retrieval_chain\n",
    "from langchain.chains.combine_documents import create_stuff_documents_chain\n",
    "\n",
    "# See full prompt at https://smith.langchain.com/hub/langchain-ai/retrieval-qa-chat\n",
    "retrieval_qa_chat_prompt = hub.pull(\"langchain-ai/retrieval-qa-chat\")\n",
    "\n",
    "combine_docs_chain = create_stuff_documents_chain(llm, retrieval_qa_chat_prompt)\n",
    "rag_chain = create_retrieval_chain(vectorstore.as_retriever(), combine_docs_chain)\n",
    "\n",
    "rag_chain.invoke({\"input\": \"What are autonomous agents?\"})"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2772f4e9",
   "metadata": {},
   "source": [
    "## `ConversationalRetrievalChain`\n",
    "<span data-heading-keywords=\"conversationalretrievalchain\"></span>\n",
    "\n",
    "The [`ConversationalRetrievalChain`](https://api.python.langchain.com/en/latest/chains/langchain.chains.conversational_retrieval.base.ConversationalRetrievalChain.html) was an all-in-one chain that combined retrieval-augmented generation with chat history, allowing you to \"chat with\" your documents.\n",
    "\n",
    "Advantages of switching to the LCEL implementation are similar to those in the `RetrievalQA` section above:\n",
    "\n",
    "- Clearer internals. The `ConversationalRetrievalChain` chain hides an entire question-rephrasing step which dereferences the initial query against the chat history.\n",
    "  - This means the class contains two sets of configurable prompts, LLMs, etc.\n",
    "- More easily return source documents.\n",
    "- Support for runnable methods like streaming and async operations.\n",
    "\n",
    "Here are side-by-side implementations with custom prompts. We'll reuse the loaded documents and vector store from the previous section:"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8bc06416",
   "metadata": {},
   "source": [
    "<ColumnContainer>\n",
    "\n",
    "<Column>\n",
    "\n",
    "#### Legacy"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "id": "54eb9576",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'question': 'What are autonomous agents?',\n",
       " 'chat_history': '',\n",
       " 'answer': 'Autonomous agents are powered by Large Language Models (LLMs) to handle tasks like scientific discovery and complex experiments autonomously. These agents can browse the internet, read documentation, execute code, and leverage other LLMs to perform tasks. They can reason and plan ahead to decompose complicated tasks into manageable steps.'}"
      ]
     },
     "execution_count": 31,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain.chains import ConversationalRetrievalChain\n",
    "\n",
    "condense_question_template = \"\"\"\n",
    "Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question.\n",
    "\n",
    "Chat History:\n",
    "{chat_history}\n",
    "Follow Up Input: {question}\n",
    "Standalone question:\"\"\"\n",
    "\n",
    "condense_question_prompt = ChatPromptTemplate.from_template(condense_question_template)\n",
    "\n",
    "qa_template = \"\"\"\n",
    "You are an assistant for question-answering tasks.\n",
    "Use the following pieces of retrieved context to answer\n",
    "the question. If you don't know the answer, say that you\n",
    "don't know. Use three sentences maximum and keep the\n",
    "answer concise.\n",
    "\n",
    "Chat History:\n",
    "{chat_history}\n",
    "\n",
    "Other context:\n",
    "{context}\n",
    "\n",
    "Question: {question}\n",
    "\"\"\"\n",
    "\n",
    "qa_prompt = ChatPromptTemplate.from_template(qa_template)\n",
    "\n",
    "convo_qa_chain = ConversationalRetrievalChain.from_llm(\n",
    "    llm,\n",
    "    vectorstore.as_retriever(),\n",
    "    condense_question_prompt=condense_question_prompt,\n",
    "    combine_docs_chain_kwargs={\n",
    "        \"prompt\": qa_prompt,\n",
    "    },\n",
    ")\n",
    "\n",
    "convo_qa_chain(\n",
    "    {\n",
    "        \"question\": \"What are autonomous agents?\",\n",
    "        \"chat_history\": \"\",\n",
    "    }\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "43a8a23c",
   "metadata": {},
   "source": [
    "</Column>\n",
    "\n",
    "<Column>\n",
    "\n",
    "#### LCEL\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "id": "c884b138",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'input': 'What are autonomous agents?',\n",
       " 'chat_history': [],\n",
       " 'context': [Document(page_content='Boiko et al. (2023) also looked into LLM-empowered agents for scientific discovery, to handle autonomous design, planning, and performance of complex scientific experiments. This agent can use tools to browse the Internet, read documentation, execute code, call robotics experimentation APIs and leverage other LLMs.\\nFor example, when requested to \"develop a novel anticancer drug\", the model came up with the following reasoning steps:', metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agent’s brain, complemented by several key components:', 'language': 'en'}),\n",
       " Document(page_content='Weng, Lilian. (Jun 2023). “LLM-powered Autonomous Agents”. Lil’Log. https://lilianweng.github.io/posts/2023-06-23-agent/.', metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agent’s brain, complemented by several key components:', 'language': 'en'}),\n",
       " Document(page_content='Fig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#', metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agent’s brain, complemented by several key components:', 'language': 'en'}),\n",
       " Document(page_content='Or\\n@article{weng2023agent,\\n title = \"LLM-powered Autonomous Agents\",\\n author = \"Weng, Lilian\",\\n journal = \"lilianweng.github.io\",\\n year = \"2023\",\\n month = \"Jun\",\\n url = \"https://lilianweng.github.io/posts/2023-06-23-agent/\"\\n}\\nReferences#\\n[1] Wei et al. “Chain of thought prompting elicits reasoning in large language models.” NeurIPS 2022\\n[2] Yao et al. “Tree of Thoughts: Dliberate Problem Solving with Large Language Models.” arXiv preprint arXiv:2305.10601 (2023).', metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agent’s brain, complemented by several key components:', 'language': 'en'})],\n",
       " 'answer': 'Autonomous agents are entities capable of acting independently, making decisions, and performing tasks without direct human intervention. These agents can interact with their environment, perceive information, and take actions based on their goals or objectives. They often use artificial intelligence techniques to navigate and accomplish tasks in complex or dynamic environments.'}"
      ]
     },
     "execution_count": 25,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain.chains import create_history_aware_retriever, create_retrieval_chain\n",
    "\n",
    "condense_question_system_template = (\n",
    "    \"Given a chat history and the latest user question \"\n",
    "    \"which might reference context in the chat history, \"\n",
    "    \"formulate a standalone question which can be understood \"\n",
    "    \"without the chat history. Do NOT answer the question, \"\n",
    "    \"just reformulate it if needed and otherwise return it as is.\"\n",
    ")\n",
    "\n",
    "condense_question_prompt = ChatPromptTemplate.from_messages(\n",
    "    [\n",
    "        (\"system\", condense_question_system_template),\n",
    "        (\"placeholder\", \"{chat_history}\"),\n",
    "        (\"human\", \"{input}\"),\n",
    "    ]\n",
    ")\n",
    "history_aware_retriever = create_history_aware_retriever(\n",
    "    llm, vectorstore.as_retriever(), condense_question_prompt\n",
    ")\n",
    "\n",
    "system_prompt = (\n",
    "    \"You are an assistant for question-answering tasks. \"\n",
    "    \"Use the following pieces of retrieved context to answer \"\n",
    "    \"the question. If you don't know the answer, say that you \"\n",
    "    \"don't know. Use three sentences maximum and keep the \"\n",
    "    \"answer concise.\"\n",
    "    \"\\n\\n\"\n",
    "    \"{context}\"\n",
    ")\n",
    "\n",
    "qa_prompt = ChatPromptTemplate.from_messages(\n",
    "    [\n",
    "        (\"system\", system_prompt),\n",
    "        (\"placeholder\", \"{chat_history}\"),\n",
    "        (\"human\", \"{input}\"),\n",
    "    ]\n",
    ")\n",
    "qa_chain = create_stuff_documents_chain(llm, qa_prompt)\n",
    "\n",
    "convo_qa_chain = create_retrieval_chain(history_aware_retriever, qa_chain)\n",
    "\n",
    "convo_qa_chain.invoke(\n",
    "    {\n",
    "        \"input\": \"What are autonomous agents?\",\n",
    "        \"chat_history\": [],\n",
    "    }\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b2717810",
   "metadata": {},
   "source": [
    "</Column>\n",
    "\n",
    "</ColumnContainer>\n",
    "\n",
    "## Next steps\n",
    "\n",
    "You've now seen how to migrate existing usage of some legacy chains to LCEL.\n",
    "\n",
    "Next, check out the [LCEL conceptual docs](/docs/concepts/#langchain-expression-language-lcel) for more background information."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
@@ -41,7 +41,7 @@
    "\n",
    "### Installation\n",
    "\n",
    "The LangChain OpenAI integration lives in the `langchain-community` and `llama-cpp-python` packages:"
    "The LangChain LlamaCpp integration lives in the `langchain-community` and `llama-cpp-python` packages:"
   ]
  },
  {
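As a minimal sketch of that installation step (notebook-style, matching the `%pip` convention used elsewhere in these docs):

```python
%pip install --upgrade --quiet langchain-community llama-cpp-python
```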
@@ -60,6 +60,7 @@
    "**PebbloRetrievalQA chain supports the following vector databases:**\n",
    "- Qdrant\n",
    "- Pinecone\n",
    "- Postgres (utilizing the pgvector extension)\n",
    "\n",
    "\n",
    "**Load vector database with authorization and semantic information in metadata:**\n",
@@ -7,7 +7,7 @@
   "source": [
    "# Annoy\n",
    "\n",
    "> [Annoy](https://github.com/spotify/annoy) (`Approximate Nearest Neighbors Oh Yeah`) is a C++ library with Python bindings to search for points in space that are close to a given query point. It also creates large read-only file-based data structures that are mmapped into memory so that many processes may share the same data.\n",
    "> [Annoy](https://github.com/spotify/annoy) (`Approximate Nearest Neighbors Oh Yeah`) is a C++ library with Python bindings to search for points in space that are close to a given query point. It also creates large read-only file-based data structures that are mapped into memory so that many processes may share the same data.\n",
    "\n",
    "You'll need to install `langchain-community` with `pip install -qU langchain-community` to use this integration.\n",
    "\n",
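As a brief usage sketch (the two example texts are illustrative, and `OpenAIEmbeddings` stands in for any embedding model):

```python
# Build a small in-memory Annoy index and query it.
from langchain_community.vectorstores import Annoy
from langchain_openai import OpenAIEmbeddings

texts = ["Annoy builds static, file-backed indexes.", "Queries find nearby points."]
vector_store = Annoy.from_texts(texts, OpenAIEmbeddings())
vector_store.similarity_search("How does Annoy store data?", k=1)
```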
@@ -270,8 +270,10 @@
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_community.chat_message_histories import ChatMessageHistory\n",
    "from langchain_core.chat_history import BaseChatMessageHistory\n",
    "from langchain_core.chat_history import (\n",
    "    BaseChatMessageHistory,\n",
    "    InMemoryChatMessageHistory,\n",
    ")\n",
    "from langchain_core.runnables.history import RunnableWithMessageHistory\n",
    "\n",
    "store = {}\n",
@@ -279,7 +281,7 @@
    "\n",
    "def get_session_history(session_id: str) -> BaseChatMessageHistory:\n",
    "    if session_id not in store:\n",
    "        store[session_id] = ChatMessageHistory()\n",
    "        store[session_id] = InMemoryChatMessageHistory()\n",
    "    return store[session_id]\n",
    "\n",
    "\n",
@@ -568,6 +568,8 @@ Removal: 0.3.0

Alternative: [RunnableSequence](/docs/how_to/sequence/), e.g., `prompt | llm`

This [migration guide](/docs/how_to/migrate_chains/#llmchain) has a side-by-side comparison.
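A minimal sketch of the suggested replacement (the prompt text is illustrative):

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
llm = ChatOpenAI()
chain = prompt | llm  # a RunnableSequence
chain.invoke({"topic": "bears"})
```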
#### LLMSingleActionAgent

@@ -754,6 +756,7 @@ Removal: 0.3.0

Alternative: [create_retrieval_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval.create_retrieval_chain.html#langchain-chains-retrieval-create-retrieval-chain)

This [migration guide](/docs/how_to/migrate_chains/#retrievalqa) has a side-by-side comparison.

#### load_agent_from_config
@@ -820,6 +823,7 @@ Removal: 0.3.0

Alternative: [create_history_aware_retriever](https://api.python.langchain.com/en/latest/chains/langchain.chains.history_aware_retriever.create_history_aware_retriever.html) together with [create_retrieval_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval.create_retrieval_chain.html#langchain-chains-retrieval-create-retrieval-chain) (see example in docstring)

This [migration guide](/docs/how_to/migrate_chains/#conversationalretrievalchain) has a side-by-side comparison.

#### create_extraction_chain_pydantic
@@ -11,6 +11,7 @@ LangChain v0.2 was released in May 2024. This release includes a number of [brea

:::note Reference

- [Breaking Changes & Deprecations](/docs/versions/v0_2/deprecations)
- [Migrating legacy chains to LCEL](/docs/how_to/migrate_chains/)
- [Migrating to Astream Events v2](/docs/versions/v0_2/migrating_astream_events)

:::
@@ -10,7 +10,7 @@ export function ColumnContainer({children}) {

export function Column({children}) {
  return (
    <div style={{ flex: "1 0 300px", padding: "10px", overflowX: "clip", zoom: '80%' }}>
    <div style={{ flex: "1 0 300px", padding: "10px", overflowX: "scroll", zoom: '80%' }}>
      {children}
    </div>
  )
@@ -272,14 +272,11 @@ class PebbloRetrievalQA(Chain):
        """
        Validate that the vectorstore of the retriever is one of the supported vectorstores.
        """
        if not any(
            isinstance(retriever.vectorstore, supported_class)
            for supported_class in SUPPORTED_VECTORSTORES
        ):
        if retriever.vectorstore.__class__.__name__ not in SUPPORTED_VECTORSTORES:
            raise ValueError(
                f"Vectorstore must be an instance of one of the supported "
                f"vectorstores: {SUPPORTED_VECTORSTORES}. "
                f"Got {type(retriever.vectorstore).__name__} instead."
                f"Got '{retriever.vectorstore.__class__.__name__}' instead."
            )
        return retriever
@@ -13,7 +13,7 @@ The methods in this module are designed to work with different types of vector s
|
||||
"""
|
||||
|
||||
import logging
|
||||
from typing import List, Optional, Union
|
||||
from typing import Any, List, Optional, Union
|
||||
|
||||
from langchain_core.vectorstores import VectorStoreRetriever
|
||||
|
||||
@@ -21,11 +21,33 @@ from langchain_community.chains.pebblo_retrieval.models import (
|
||||
AuthContext,
|
||||
SemanticContext,
|
||||
)
|
||||
from langchain_community.vectorstores import Pinecone, Qdrant
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
SUPPORTED_VECTORSTORES = [Pinecone, Qdrant]
|
||||
PINECONE = "Pinecone"
|
||||
QDRANT = "Qdrant"
|
||||
PGVECTOR = "PGVector"
|
||||
|
||||
SUPPORTED_VECTORSTORES = {PINECONE, QDRANT, PGVECTOR}
|
||||
|
||||
|
||||
def clear_enforcement_filters(retriever: VectorStoreRetriever) -> None:
|
||||
"""
|
||||
Clear the identity and semantic enforcement filters in the retriever search_kwargs.
|
||||
"""
|
||||
if retriever.vectorstore.__class__.__name__ == PGVECTOR:
|
||||
search_kwargs = retriever.search_kwargs
|
||||
if "filter" in search_kwargs:
|
||||
filters = search_kwargs["filter"]
|
||||
_pgvector_clear_pebblo_filters(
|
||||
search_kwargs, filters, "authorized_identities"
|
||||
)
|
||||
_pgvector_clear_pebblo_filters(
|
||||
search_kwargs, filters, "pebblo_semantic_topics"
|
||||
)
|
||||
_pgvector_clear_pebblo_filters(
|
||||
search_kwargs, filters, "pebblo_semantic_entities"
|
||||
)
|
||||
|
||||
|
||||
def set_enforcement_filters(
|
||||
@@ -36,6 +58,8 @@ def set_enforcement_filters(
|
||||
"""
|
||||
Set identity and semantic enforcement filters in the retriever.
|
||||
"""
|
||||
# Clear existing enforcement filters
|
||||
clear_enforcement_filters(retriever)
|
||||
if auth_context is not None:
|
||||
_set_identity_enforcement_filter(retriever, auth_context)
|
||||
if semantic_context is not None:
|
||||
@@ -233,6 +257,244 @@ def _apply_pinecone_authorization_filter(
|
||||
}
|
||||
|
||||
|
||||
def _apply_pgvector_filter(
|
||||
search_kwargs: dict, filters: Optional[Any], pebblo_filter: dict
|
||||
) -> None:
|
||||
"""
|
||||
Apply pebblo filters in the search_kwargs filters.
|
||||
"""
|
||||
if isinstance(filters, dict):
|
||||
if len(filters) == 1:
|
||||
# The only operators allowed at the top level are $and, $or, and $not
|
||||
# First check if an operator or a field
|
||||
key, value = list(filters.items())[0]
|
||||
if key.startswith("$"):
|
||||
# Then it's an operator
|
||||
if key.lower() not in ["$and", "$or", "$not"]:
|
||||
raise ValueError(
|
||||
f"Invalid filter condition. Expected $and, $or or $not "
|
||||
f"but got: {key}"
|
||||
)
|
||||
if not isinstance(value, list):
|
||||
raise ValueError(
|
||||
f"Expected a list, but got {type(value)} for value: {value}"
|
||||
)
|
||||
|
||||
# Here we handle the $and, $or, and $not operators(Semantic filters)
|
||||
if key.lower() == "$and":
|
||||
# Add pebblo_filter to the $and list as it is
|
||||
value.append(pebblo_filter)
|
||||
elif key.lower() == "$not":
|
||||
# Check if pebblo_filter is an operator or a field
|
||||
_key, _value = list(pebblo_filter.items())[0]
|
||||
if _key.startswith("$"):
|
||||
# Then it's a operator
|
||||
if _key.lower() == "$not":
|
||||
# It's Semantic filter, add it's value to filters
|
||||
value.append(_value)
|
||||
logger.warning(
|
||||
"Adding $not operator to the existing $not operator"
|
||||
)
|
||||
return
|
||||
else:
|
||||
# Only $not operator is supported in pebblo_filter
|
||||
raise ValueError(
|
||||
f"Invalid filter key. Expected '$not' but got: {_key}"
|
||||
)
|
||||
else:
|
||||
# Then it's a field(Auth filter), move filters into $and
|
||||
search_kwargs["filter"] = {"$and": [filters, pebblo_filter]}
|
||||
return
|
||||
elif key.lower() == "$or":
|
||||
search_kwargs["filter"] = {"$and": [filters, pebblo_filter]}
|
||||
else:
|
||||
# Then it's a field and we can check pebblo_filter now
|
||||
# Check if pebblo_filter is an operator or a field
|
||||
_key, _ = list(pebblo_filter.items())[0]
|
||||
if _key.startswith("$"):
|
||||
# Then it's a operator
|
||||
if _key.lower() == "$not":
|
||||
# It's a $not operator(Semantic filter), move filters into $and
|
||||
search_kwargs["filter"] = {"$and": [filters, pebblo_filter]}
|
||||
return
|
||||
else:
|
||||
# Only $not operator is allowed in pebblo_filter
|
||||
raise ValueError(
|
||||
f"Invalid filter key. Expected '$not' but got: {_key}"
|
||||
)
|
||||
else:
|
||||
# Then it's a field(This handles Auth filter)
|
||||
filters.update(pebblo_filter)
|
||||
return
|
||||
elif len(filters) > 1:
|
||||
# Then all keys have to be fields (they cannot be operators)
|
||||
for key in filters.keys():
|
||||
if key.startswith("$"):
|
||||
raise ValueError(
|
||||
f"Invalid filter condition. Expected a field but got: {key}"
|
||||
)
|
||||
# filters should all be fields and we can check pebblo_filter now
|
||||
# Check if pebblo_filter is an operator or a field
|
||||
_key, _ = list(pebblo_filter.items())[0]
|
||||
if _key.startswith("$"):
|
||||
# Then it's a operator
|
||||
if _key.lower() == "$not":
|
||||
# It's a $not operator(Semantic filter), move filters into '$and'
|
||||
search_kwargs["filter"] = {"$and": [filters, pebblo_filter]}
|
||||
return
|
||||
else:
|
||||
# Only $not operator is supported in pebblo_filter
|
||||
raise ValueError(
|
||||
f"Invalid filter key. Expected '$not' but got: {_key}"
|
||||
)
|
||||
else:
|
||||
# Then it's a field(This handles Auth filter)
|
||||
filters.update(pebblo_filter)
|
||||
return
|
||||
else:
|
||||
# Got an empty dictionary for filters, set pebblo_filter in filter
|
||||
search_kwargs.setdefault("filter", {}).update(pebblo_filter)
|
||||
elif filters is None:
|
||||
# If filters is None, set pebblo_filter as a new filter
|
||||
search_kwargs.setdefault("filter", {}).update(pebblo_filter)
|
||||
else:
|
||||
raise ValueError(
|
||||
f"Invalid filter. Expected a dictionary/None but got type: {type(filters)}"
|
||||
)
|
||||
|
||||
|
||||
def _pgvector_clear_pebblo_filters(
    search_kwargs: dict, filters: dict, pebblo_filter_key: str
) -> None:
    """
    Remove pebblo filters from the search_kwargs filters.
    """
    if isinstance(filters, dict):
        if len(filters) == 1:
            # The only operators allowed at the top level are $and, $or, and $not
            # First check if an operator or a field
            key, value = list(filters.items())[0]
            if key.startswith("$"):
                # Then it's an operator
                # Validate the operator's key and value type
                if key.lower() not in ["$and", "$or", "$not"]:
                    raise ValueError(
                        f"Invalid filter condition. Expected $and, $or or $not "
                        f"but got: {key}"
                    )
                elif not isinstance(value, list):
                    raise ValueError(
                        f"Expected a list, but got {type(value)} for value: {value}"
                    )

                # Here we handle the $and, $or, and $not operators
                if key.lower() == "$and":
                    # Remove the pebblo filter from the $and list
                    for i, _filter in enumerate(value):
                        if pebblo_filter_key in _filter:
                            # This handles Auth filter
                            value.pop(i)
                            break
                        # Check for $not operator with Semantic filter
                        if "$not" in _filter:
                            sem_filter_found = False
                            # This handles Semantic filter
                            for j, nested_filter in enumerate(_filter["$not"]):
                                if pebblo_filter_key in nested_filter:
                                    if len(_filter["$not"]) == 1:
                                        # If only one filter is left,
                                        # then remove the $not operator
                                        value.pop(i)
                                    else:
                                        value[i]["$not"].pop(j)
                                    sem_filter_found = True
                                    break
                            if sem_filter_found:
                                break
                    if len(value) == 1:
                        # If only one filter is left, then remove the $and operator
                        search_kwargs["filter"] = value[0]
                elif key.lower() == "$not":
                    # Remove the pebblo filter from the $not list
                    for i, _filter in enumerate(value):
                        if pebblo_filter_key in _filter:
                            # This removes Semantic filter
                            value.pop(i)
                            break
                    if len(value) == 0:
                        # If no filter is left, then unset the filter
                        search_kwargs["filter"] = {}
                elif key.lower() == "$or":
                    # If $or, pebblo filter will not be present
                    return
            else:
                # Then it's a field, check if it's a pebblo filter
                if key == pebblo_filter_key:
                    filters.pop(key)
                return
        elif len(filters) > 1:
            # Then all keys have to be fields (they cannot be operators)
            if pebblo_filter_key in filters:
                # This handles Auth filter
                filters.pop(pebblo_filter_key)
            return
        else:
            # Got an empty dictionary for filters, ignore the filter
            return
    elif filters is None:
        # If filters is None, ignore the filter
        return
    else:
        raise ValueError(
            f"Invalid filter. Expected a dictionary/None but got type: {type(filters)}"
        )

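To see what the clearing logic above does, consider an `$and` filter that mixes one user-supplied condition with the Pebblo auth filter. A hedged sketch, assuming `authorized_identities` is the `pebblo_filter_key` (the values are invented):

```python
# Hypothetical input: a user filter plus the Pebblo auth filter under $and.
search_kwargs = {
    "filter": {
        "$and": [
            {"authorized_identities": {"$eq": ["user@acme.org"]}},  # Pebblo auth filter
            {"source": {"$eq": "hr-docs"}},  # user-supplied filter
        ]
    }
}

# _pgvector_clear_pebblo_filters(search_kwargs, search_kwargs["filter"],
# "authorized_identities") pops the Pebblo entry from the $and list; since a
# single filter then remains, the $and operator itself is unwrapped:
expected_after_clear = {"source": {"$eq": "hr-docs"}}
```
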
def _apply_pgvector_semantic_filter(
    search_kwargs: dict, semantic_context: Optional[SemanticContext]
) -> None:
    """
    Set semantic enforcement filter in search_kwargs for PGVector vectorstore.
    """
    # Check if semantic_context is provided
    if semantic_context is not None:
        _semantic_filters = []
        filters = search_kwargs.get("filter")
        if semantic_context.pebblo_semantic_topics is not None:
            # Add pebblo_semantic_topics filter to search_kwargs
            topic_filter: dict = {
                "pebblo_semantic_topics": {
                    "$eq": semantic_context.pebblo_semantic_topics.deny
                }
            }
            _semantic_filters.append(topic_filter)

        if semantic_context.pebblo_semantic_entities is not None:
            # Add pebblo_semantic_entities filter to search_kwargs
            entity_filter: dict = {
                "pebblo_semantic_entities": {
                    "$eq": semantic_context.pebblo_semantic_entities.deny
                }
            }
            _semantic_filters.append(entity_filter)

        if len(_semantic_filters) > 0:
            semantic_filter: dict = {"$not": _semantic_filters}
            _apply_pgvector_filter(search_kwargs, filters, semantic_filter)

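For reference, when both deny lists are present, the function above hands a filter of the following shape to `_apply_pgvector_filter` (the deny values here are examples, not real classifications):

```python
# Shape of the semantic enforcement filter built above; values are illustrative.
semantic_filter = {
    "$not": [
        {"pebblo_semantic_topics": {"$eq": ["harmful-advice"]}},
        {"pebblo_semantic_entities": {"$eq": ["us-ssn"]}},
    ]
}
```
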
def _apply_pgvector_authorization_filter(
    search_kwargs: dict, auth_context: Optional[AuthContext]
) -> None:
    """
    Set identity enforcement filter in search_kwargs for PGVector vectorstore.
    """
    if auth_context is not None:
        auth_filter: dict = {"authorized_identities": {"$eq": auth_context.user_auth}}
        filters = search_kwargs.get("filter")
        _apply_pgvector_filter(search_kwargs, filters, auth_filter)

def _set_identity_enforcement_filter(
    retriever: VectorStoreRetriever, auth_context: Optional[AuthContext]
) -> None:

@@ -243,10 +505,12 @@ def _set_identity_enforcement_filter(
     of the retriever based on the type of the vectorstore.
     """
     search_kwargs = retriever.search_kwargs
-    if isinstance(retriever.vectorstore, Pinecone):
+    if retriever.vectorstore.__class__.__name__ == PINECONE:
         _apply_pinecone_authorization_filter(search_kwargs, auth_context)
-    elif isinstance(retriever.vectorstore, Qdrant):
+    elif retriever.vectorstore.__class__.__name__ == QDRANT:
         _apply_qdrant_authorization_filter(search_kwargs, auth_context)
+    elif retriever.vectorstore.__class__.__name__ == PGVECTOR:
+        _apply_pgvector_authorization_filter(search_kwargs, auth_context)


def _set_semantic_enforcement_filter(

@@ -259,7 +523,9 @@ def _set_semantic_enforcement_filter(
     of the retriever based on the type of the vectorstore.
     """
     search_kwargs = retriever.search_kwargs
-    if isinstance(retriever.vectorstore, Pinecone):
+    if retriever.vectorstore.__class__.__name__ == PINECONE:
         _apply_pinecone_semantic_filter(search_kwargs, semantic_context)
-    elif isinstance(retriever.vectorstore, Qdrant):
+    elif retriever.vectorstore.__class__.__name__ == QDRANT:
         _apply_qdrant_semantic_filter(search_kwargs, semantic_context)
+    elif retriever.vectorstore.__class__.__name__ == PGVECTOR:
+        _apply_pgvector_semantic_filter(search_kwargs, semantic_context)

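The switch from `isinstance` checks to class-name comparison in both hunks above is worth noting: matching `__class__.__name__` against string constants means the optional Pinecone, Qdrant, and PGVector packages no longer need to be importable just to route a filter. A minimal sketch of the pattern, assuming the constants mirror the class names (the dispatch function is invented for illustration):

```python
# Assumed string constants, mirroring PINECONE/QDRANT/PGVECTOR used above.
PINECONE = "Pinecone"
QDRANT = "Qdrant"
PGVECTOR = "PGVector"


def describe_dispatch(vectorstore: object) -> str:
    """Route on the class name without importing any vectorstore package."""
    name = vectorstore.__class__.__name__
    if name == PINECONE:
        return "apply pinecone filter"
    elif name == QDRANT:
        return "apply qdrant filter"
    elif name == PGVECTOR:
        return "apply pgvector filter"
    return "unsupported vectorstore"
```
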
@@ -9,8 +9,8 @@ from typing import Any, Callable, Dict, List, Union
 
 from langchain_core._api.deprecation import deprecated
 from langchain_core.outputs import ChatResult
-from langchain_core.pydantic_v1 import BaseModel, Field, root_validator
-from langchain_core.utils import get_from_dict_or_env
+from langchain_core.pydantic_v1 import BaseModel, Field
+from langchain_core.utils import get_from_dict_or_env, pre_init
 
 from langchain_community.chat_models.openai import ChatOpenAI
 from langchain_community.utils.openai import is_openai_v1
@@ -106,7 +106,7 @@ class AzureChatOpenAI(ChatOpenAI):
         """Get the namespace of the langchain object."""
         return ["langchain", "chat_models", "azure_openai"]
 
-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that api key and python package exists in environment."""
         if values["n"] < 1:

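The hunk above is the template for nearly every change that follows: the pydantic v1 `@root_validator()` decorator is swapped for `pre_init` from `langchain_core.utils`, which runs the same `validate_environment(cls, values)` hook before field validation. A minimal sketch of the new pattern (the model and its default are invented for illustration, not taken from the diff):

```python
from typing import Dict

from langchain_core.pydantic_v1 import BaseModel
from langchain_core.utils import pre_init


class ExampleClient(BaseModel):  # hypothetical model
    api_key: str = ""

    @pre_init
    def validate_environment(cls, values: Dict) -> Dict:
        # Runs before field validation, much like root_validator(pre=True),
        # so missing values can still be filled in here.
        values.setdefault("api_key", "key-from-environment")
        return values
```
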
@@ -50,11 +50,10 @@ from langchain_core.pydantic_v1 import (
     Extra,
     Field,
     SecretStr,
-    root_validator,
 )
 from langchain_core.runnables import Runnable, RunnableMap, RunnablePassthrough
 from langchain_core.tools import BaseTool
-from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env
+from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env, pre_init
 from langchain_core.utils.function_calling import convert_to_openai_tool
 
 from langchain_community.utilities.requests import Requests
@@ -300,7 +299,7 @@ class ChatEdenAI(BaseChatModel):
 
         extra = Extra.forbid
 
-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that api key exists in environment."""
         values["edenai_api_key"] = convert_to_secret_str(

@@ -21,8 +21,8 @@ from langchain_core.outputs import (
     ChatGeneration,
     ChatResult,
 )
-from langchain_core.pydantic_v1 import BaseModel, SecretStr, root_validator
-from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env
+from langchain_core.pydantic_v1 import BaseModel, SecretStr
+from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env, pre_init
 from tenacity import (
     before_sleep_log,
     retry,
@@ -261,7 +261,7 @@ class ChatGooglePalm(BaseChatModel, BaseModel):
         """Get the namespace of the langchain object."""
         return ["langchain", "chat_models", "google_palm"]
 
-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate api key, python package exists, temperature, top_p, and top_k."""
         google_api_key = convert_to_secret_str(

@@ -29,6 +29,7 @@ from langchain_core.utils import (
     convert_to_secret_str,
     get_from_dict_or_env,
     get_pydantic_field_names,
+    pre_init,
 )
 
 logger = logging.getLogger(__name__)
@@ -190,7 +191,7 @@ class ChatHunyuan(BaseChatModel):
         values["model_kwargs"] = extra
         return values
 
-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         values["hunyuan_api_base"] = get_from_dict_or_env(
             values,

@@ -45,6 +45,7 @@ from langchain_core.utils import (
     convert_to_secret_str,
     get_from_dict_or_env,
     get_pydantic_field_names,
+    pre_init,
 )
 from tenacity import (
     before_sleep_log,
@@ -218,7 +219,7 @@ class JinaChat(BaseChatModel):
         values["model_kwargs"] = extra
         return values
 
-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that api key and python package exists in environment."""
         values["jinachat_api_key"] = convert_to_secret_str(

@@ -11,6 +11,8 @@ from importlib.metadata import version
 from pathlib import Path
 from typing import TYPE_CHECKING, Any, Dict, List, Optional, cast
 
+from langchain_core.utils import pre_init
+
 if TYPE_CHECKING:
     import gpudb
 
@@ -24,7 +26,7 @@ from langchain_core.messages import (
 )
 from langchain_core.output_parsers.transform import BaseOutputParser
 from langchain_core.outputs import ChatGeneration, ChatResult, Generation
-from langchain_core.pydantic_v1 import BaseModel, Field, root_validator
+from langchain_core.pydantic_v1 import BaseModel, Field
 
 LOG = logging.getLogger(__name__)
 
@@ -341,7 +343,7 @@ class ChatKinetica(BaseChatModel):
     kdbc: Any = Field(exclude=True)
     """ Kinetica DB connection. """
 
-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Pydantic object validator."""


@@ -23,8 +23,8 @@ from langchain_core.callbacks import (
 )
 from langchain_core.messages import AIMessageChunk, BaseMessage
 from langchain_core.outputs import ChatGenerationChunk, ChatResult
-from langchain_core.pydantic_v1 import Field, SecretStr, root_validator
-from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env
+from langchain_core.pydantic_v1 import Field, SecretStr
+from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env, pre_init
 
 from langchain_community.adapters.openai import (
     convert_message_to_dict,
@@ -85,7 +85,7 @@ class ChatKonko(ChatOpenAI):
     max_tokens: int = 20
     """Maximum number of tokens to generate."""
 
-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that api key and python package exists in environment."""
         values["konko_api_key"] = convert_to_secret_str(

@@ -48,10 +48,10 @@ from langchain_core.outputs import (
     ChatGenerationChunk,
     ChatResult,
 )
-from langchain_core.pydantic_v1 import BaseModel, Field, root_validator
+from langchain_core.pydantic_v1 import BaseModel, Field
 from langchain_core.runnables import Runnable
 from langchain_core.tools import BaseTool
-from langchain_core.utils import get_from_dict_or_env
+from langchain_core.utils import get_from_dict_or_env, pre_init
 from langchain_core.utils.function_calling import convert_to_openai_tool
 
 logger = logging.getLogger(__name__)
@@ -249,7 +249,7 @@ class ChatLiteLLM(BaseChatModel):
 
         return _completion_with_retry(**kwargs)
 
-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate api key, python package exists, temperature, top_p, and top_k."""
         try:

@@ -2,10 +2,10 @@
 
 from typing import Dict
 
-from langchain_core.pydantic_v1 import root_validator
 from langchain_core.utils import (
     convert_to_secret_str,
     get_from_dict_or_env,
+    pre_init,
 )
 
 from langchain_community.chat_models import ChatOpenAI
@@ -29,7 +29,7 @@ class MoonshotChat(MoonshotCommon, ChatOpenAI):  # type: ignore[misc]
     moonshot = MoonshotChat(model="moonshot-v1-8k")
     """
 
-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that the environment is set up correctly."""
         values["moonshot_api_key"] = convert_to_secret_str(

@@ -2,8 +2,8 @@
 
 from typing import Dict
 
-from langchain_core.pydantic_v1 import Field, SecretStr, root_validator
-from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env
+from langchain_core.pydantic_v1 import Field, SecretStr
+from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env, pre_init
 
 from langchain_community.chat_models.openai import ChatOpenAI
 from langchain_community.utils.openai import is_openai_v1
@@ -48,7 +48,7 @@ class ChatOctoAI(ChatOpenAI):
     def is_lc_serializable(cls) -> bool:
         return False
 
-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that api key and python package exists in environment."""
         values["octoai_api_base"] = get_from_dict_or_env(

@@ -49,6 +49,7 @@ from langchain_core.runnables import Runnable
 from langchain_core.utils import (
     get_from_dict_or_env,
     get_pydantic_field_names,
+    pre_init,
 )
 
 from langchain_community.adapters.openai import (
@@ -274,7 +275,7 @@ class ChatOpenAI(BaseChatModel):
         values["model_kwargs"] = extra
         return values
 
-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that api key and python package exists in environment."""
         if values["n"] < 1:

@@ -40,9 +40,8 @@ from langchain_core.pydantic_v1 import (
     Extra,
     Field,
     SecretStr,
-    root_validator,
 )
-from langchain_core.utils import get_from_dict_or_env
+from langchain_core.utils import get_from_dict_or_env, pre_init
 
 if TYPE_CHECKING:
     from premai.api.chat_completions.v1_chat_completions_create import (
@@ -249,7 +248,7 @@ class ChatPremAI(BaseChatModel, BaseModel):
         allow_population_by_field_name = True
         arbitrary_types_allowed = True
 
-    @root_validator()
+    @pre_init
     def validate_environments(cls, values: Dict) -> Dict:
         """Validate that the package is installed and that the API token is valid"""
         try:

@@ -16,6 +16,7 @@ from langchain_core.utils import (
     convert_to_secret_str,
     get_from_dict_or_env,
     get_pydantic_field_names,
+    pre_init,
 )
 from langchain_core.utils.utils import build_extra_kwargs
 
@@ -135,7 +136,7 @@ class ChatSnowflakeCortex(BaseChatModel):
         )
         return values
 
-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         try:
             from snowflake.snowpark import Session

@@ -3,8 +3,8 @@
 from typing import Dict
 
 from langchain_core._api import deprecated
-from langchain_core.pydantic_v1 import Field, root_validator
-from langchain_core.utils import get_from_dict_or_env
+from langchain_core.pydantic_v1 import Field
+from langchain_core.utils import get_from_dict_or_env, pre_init
 
 from langchain_community.chat_models import ChatOpenAI
 from langchain_community.llms.solar import SOLAR_SERVICE_URL_BASE, SolarCommon
@@ -37,7 +37,7 @@ class SolarChat(SolarCommon, ChatOpenAI):
         arbitrary_types_allowed = True
         extra = "ignore"
 
-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that the environment is set up correctly."""
         values["solar_api_key"] = get_from_dict_or_env(

@@ -49,10 +49,10 @@ from langchain_core.outputs import (
     ChatGenerationChunk,
     ChatResult,
 )
-from langchain_core.pydantic_v1 import BaseModel, Field, SecretStr, root_validator
+from langchain_core.pydantic_v1 import BaseModel, Field, SecretStr
 from langchain_core.runnables import Runnable
 from langchain_core.tools import BaseTool
-from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env
+from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env, pre_init
 from langchain_core.utils.function_calling import convert_to_openai_tool
 from requests.exceptions import HTTPError
 from tenacity import (
@@ -431,7 +431,7 @@ class ChatTongyi(BaseChatModel):
         """Return type of llm."""
         return "tongyi"
 
-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that api key and python package exists in environment."""
         values["dashscope_api_key"] = convert_to_secret_str(

@@ -27,7 +27,7 @@ from langchain_core.messages import (
     SystemMessage,
 )
 from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult
-from langchain_core.pydantic_v1 import root_validator
+from langchain_core.utils import pre_init
 
 from langchain_community.llms.vertexai import (
     _VertexAICommon,
@@ -225,7 +225,7 @@ class ChatVertexAI(_VertexAICommon, BaseChatModel):
         """Get the namespace of the langchain object."""
         return ["langchain", "chat_models", "vertexai"]
 
-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that the python package exists in environment."""
         is_gemini = is_gemini_model(values["model_name"])

@@ -44,6 +44,7 @@ from langchain_core.pydantic_v1 import BaseModel, Field, root_validator
 from langchain_core.utils import (
     get_from_dict_or_env,
     get_pydantic_field_names,
+    pre_init,
 )
 from tenacity import (
     before_sleep_log,
@@ -166,7 +167,7 @@ class ChatYuan2(BaseChatModel):
         values["model_kwargs"] = extra
         return values
 
-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that api key and python package exists in environment."""
         values["yuan2_api_key"] = get_from_dict_or_env(

@@ -89,6 +89,8 @@ class PebbloSafeLoader(BaseLoader):
             list: Documents fetched from load method of the wrapped `loader`.
         """
         self.docs = self.loader.load()
+        # Add pebblo-specific metadata to docs
+        self._add_pebblo_specific_metadata()
         if not self.load_semantic:
             self._classify_doc(self.docs, loading_end=True)
         return self.docs
@@ -123,6 +125,8 @@ class PebbloSafeLoader(BaseLoader):
                     self.docs = []
                     break
                 self.docs = list((doc,))
+                # Add pebblo-specific metadata to docs
+                self._add_pebblo_specific_metadata()
                 if not self.load_semantic:
                     self._classify_doc(self.docs, loading_end=True)
                 yield self.docs[0]
@@ -517,3 +521,13 @@ class PebbloSafeLoader(BaseLoader):
                 classified_doc.get("topics", {}).keys()
             )
         return doc
+
+    def _add_pebblo_specific_metadata(self) -> None:
+        """Add Pebblo specific metadata to documents."""
+        for doc in self.docs:
+            doc_metadata = doc.metadata
+            doc_metadata["full_path"] = get_full_path(
+                doc_metadata.get(
+                    "full_path", doc_metadata.get("source", self.source_path)
+                )
+            )

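The new `_add_pebblo_specific_metadata` hook normalizes a `full_path` entry on every loaded document before classification runs. A sketch of the fallback chain it applies (the paths are illustrative; `get_full_path` and `source_path` are names from the loader's module, not defined here):

```python
# Illustrative metadata, mirroring the fallback chain in the hook above.
doc_metadata = {"source": "data/report.pdf"}
source_path = "/tmp/data"  # hypothetical PebbloSafeLoader.source_path

candidate = doc_metadata.get("full_path", doc_metadata.get("source", source_path))
# get_full_path(candidate) would expand this to an absolute path, and the
# result is stored back under doc_metadata["full_path"].
```
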
@@ -4,8 +4,8 @@ from __future__ import annotations
 
 from typing import Dict
 
-from langchain_core.pydantic_v1 import Field, SecretStr, root_validator
-from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env
+from langchain_core.pydantic_v1 import Field, SecretStr
+from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env, pre_init
 
 from langchain_community.embeddings.openai import OpenAIEmbeddings
 from langchain_community.utils.openai import is_openai_v1
@@ -34,7 +34,7 @@ class AnyscaleEmbeddings(OpenAIEmbeddings):
         "anyscale_api_key": "ANYSCALE_API_KEY",
     }
 
-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: dict) -> dict:
         """Validate that api key and python package exists in environment."""
         values["anyscale_api_key"] = convert_to_secret_str(

@@ -4,8 +4,8 @@ import logging
 from typing import Any, Dict, List, Optional
 
 from langchain_core.embeddings import Embeddings
-from langchain_core.pydantic_v1 import BaseModel, Field, root_validator
-from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env
+from langchain_core.pydantic_v1 import BaseModel, Field
+from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env, pre_init
 
 logger = logging.getLogger(__name__)
 
@@ -48,7 +48,7 @@ class QianfanEmbeddingsEndpoint(BaseModel, Embeddings):
     model_kwargs: Dict[str, Any] = Field(default_factory=dict)
     """extra params for model invoke using with `do`."""
 
-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """
         Validate whether qianfan_ak and qianfan_sk in the environment variables or

@@ -2,8 +2,8 @@ from typing import Any, Dict, List, Mapping, Optional
 
 import requests
 from langchain_core.embeddings import Embeddings
-from langchain_core.pydantic_v1 import BaseModel, Extra, root_validator
-from langchain_core.utils import get_from_dict_or_env
+from langchain_core.pydantic_v1 import BaseModel, Extra
+from langchain_core.utils import get_from_dict_or_env, pre_init
 
 DEFAULT_MODEL_ID = "sentence-transformers/clip-ViT-B-32"
 MAX_BATCH_SIZE = 1024
@@ -59,7 +59,7 @@ class DeepInfraEmbeddings(BaseModel, Embeddings):
 
         extra = Extra.forbid
 
-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that api key and python package exists in environment."""
         deepinfra_api_token = get_from_dict_or_env(

@@ -6,9 +6,8 @@ from langchain_core.pydantic_v1 import (
     Extra,
     Field,
     SecretStr,
-    root_validator,
 )
-from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env
+from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env, pre_init
 
 from langchain_community.utilities.requests import Requests
 
@@ -35,7 +34,7 @@ class EdenAiEmbeddings(BaseModel, Embeddings):
 
        extra = Extra.forbid
 
-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that api key exists in environment."""
         values["edenai_api_key"] = convert_to_secret_str(

@@ -2,8 +2,8 @@ from typing import Any, Dict, List, Mapping, Optional
 
 import requests
 from langchain_core.embeddings import Embeddings
-from langchain_core.pydantic_v1 import BaseModel, Extra, SecretStr, root_validator
-from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env
+from langchain_core.pydantic_v1 import BaseModel, Extra, SecretStr
+from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env, pre_init
 from requests.adapters import HTTPAdapter, Retry
 from typing_extensions import NotRequired, TypedDict
 
@@ -61,7 +61,7 @@ class EmbaasEmbeddings(BaseModel, Embeddings):
 
         extra = Extra.forbid
 
-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that api key and python package exists in environment."""
         embaas_api_key = convert_to_secret_str(

@@ -6,9 +6,9 @@ from typing import Dict, List, Optional
 import requests
 from langchain_core._api.deprecation import deprecated
 from langchain_core.embeddings import Embeddings
-from langchain_core.pydantic_v1 import BaseModel, root_validator
+from langchain_core.pydantic_v1 import BaseModel
 from langchain_core.runnables.config import run_in_executor
-from langchain_core.utils import get_from_dict_or_env
+from langchain_core.utils import get_from_dict_or_env, pre_init
 
 logger = logging.getLogger(__name__)
 
@@ -31,7 +31,7 @@ class ErnieEmbeddings(BaseModel, Embeddings):
 
     _lock = threading.Lock()
 
-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         values["ernie_api_base"] = get_from_dict_or_env(
             values, "ernie_api_base", "ERNIE_API_BASE", "https://aip.baidubce.com"

@@ -2,7 +2,8 @@ from typing import Any, Dict, List, Literal, Optional
 
 import numpy as np
 from langchain_core.embeddings import Embeddings
-from langchain_core.pydantic_v1 import BaseModel, Extra, root_validator
+from langchain_core.pydantic_v1 import BaseModel, Extra
+from langchain_core.utils import pre_init
 
 
 class FastEmbedEmbeddings(BaseModel, Embeddings):
@@ -54,7 +55,7 @@ class FastEmbedEmbeddings(BaseModel, Embeddings):
 
         extra = Extra.forbid
 
-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that FastEmbed has been installed."""
         model_name = values.get("model_name")

@@ -5,7 +5,8 @@ from functools import cached_property
 from typing import Any, Dict, List, Optional
 
 from langchain_core.embeddings import Embeddings
-from langchain_core.pydantic_v1 import BaseModel, root_validator
+from langchain_core.pydantic_v1 import BaseModel
+from langchain_core.utils import pre_init
 
 logger = logging.getLogger(__name__)
 
@@ -77,7 +78,7 @@ class GigaChatEmbeddings(BaseModel, Embeddings):
             key_file_password=self.key_file_password,
         )
 
-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate authenticate data in environment and python package is installed."""
         try:

@@ -4,8 +4,8 @@ import logging
 from typing import Any, Callable, Dict, List, Optional
 
 from langchain_core.embeddings import Embeddings
-from langchain_core.pydantic_v1 import BaseModel, root_validator
-from langchain_core.utils import get_from_dict_or_env
+from langchain_core.pydantic_v1 import BaseModel
+from langchain_core.utils import get_from_dict_or_env, pre_init
 from tenacity import (
     before_sleep_log,
     retry,
@@ -62,7 +62,7 @@ class GooglePalmEmbeddings(BaseModel, Embeddings):
     show_progress_bar: bool = False
     """Whether to show a tqdm progress bar. Must have `tqdm` installed."""
 
-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate api key, python package exists."""
         google_api_key = get_from_dict_or_env(

@@ -2,7 +2,8 @@ from typing import Any, Dict, List, Optional
 
 import numpy as np
 from langchain_core.embeddings import Embeddings
-from langchain_core.pydantic_v1 import BaseModel, Extra, root_validator
+from langchain_core.pydantic_v1 import BaseModel, Extra
+from langchain_core.utils import pre_init
 
 LASER_MULTILINGUAL_MODEL: str = "laser2"
 
@@ -41,7 +42,7 @@ class LaserEmbeddings(BaseModel, Embeddings):
 
         extra = Extra.forbid
 
-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that laser_encoders has been installed."""
         try:

@@ -4,8 +4,8 @@ from typing import Dict, List, Optional
 
 import requests
 from langchain_core.embeddings import Embeddings
-from langchain_core.pydantic_v1 import BaseModel, Extra, SecretStr, root_validator
-from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env
+from langchain_core.pydantic_v1 import BaseModel, Extra, SecretStr
+from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env, pre_init
 
 
 class LLMRailsEmbeddings(BaseModel, Embeddings):
@@ -37,7 +37,7 @@ class LLMRailsEmbeddings(BaseModel, Embeddings):
 
         extra = Extra.forbid
 
-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that api key exists in environment."""
         api_key = convert_to_secret_str(

@@ -17,7 +17,11 @@ from typing import (
 
 from langchain_core.embeddings import Embeddings
 from langchain_core.pydantic_v1 import BaseModel, Extra, Field, root_validator
-from langchain_core.utils import get_from_dict_or_env, get_pydantic_field_names
+from langchain_core.utils import (
+    get_from_dict_or_env,
+    get_pydantic_field_names,
+    pre_init,
+)
 from tenacity import (
     AsyncRetrying,
     before_sleep_log,
@@ -193,7 +197,7 @@ class LocalAIEmbeddings(BaseModel, Embeddings):
         values["model_kwargs"] = extra
         return values
 
-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that api key and python package exists in environment."""
         values["openai_api_key"] = get_from_dict_or_env(

@@ -5,8 +5,8 @@ from typing import Any, Callable, Dict, List, Optional
 
 import requests
 from langchain_core.embeddings import Embeddings
-from langchain_core.pydantic_v1 import BaseModel, Extra, SecretStr, root_validator
-from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env
+from langchain_core.pydantic_v1 import BaseModel, Extra, SecretStr
+from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env, pre_init
 from tenacity import (
     before_sleep_log,
     retry,
@@ -84,7 +84,7 @@ class MiniMaxEmbeddings(BaseModel, Embeddings):
 
         extra = Extra.forbid
 
-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that group id and api key exists in environment."""
         minimax_group_id = get_from_dict_or_env(

@@ -8,7 +8,8 @@ import aiohttp
 import requests
 from langchain_core._api.deprecation import deprecated
 from langchain_core.embeddings import Embeddings
-from langchain_core.pydantic_v1 import BaseModel, root_validator
+from langchain_core.pydantic_v1 import BaseModel
+from langchain_core.utils import pre_init
 
 
 def is_endpoint_live(url: str, headers: Optional[dict], payload: Any) -> bool:
@@ -58,7 +59,7 @@ class NeMoEmbeddings(BaseModel, Embeddings):
     model: str = "NV-Embed-QA-003"
     api_endpoint_url: str = "http://localhost:8088/v1/embeddings"
 
-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that the end point is alive using the values that are provided."""
 

@@ -1,8 +1,8 @@
 from typing import Any, Dict, List
 
 from langchain_core.embeddings import Embeddings
-from langchain_core.pydantic_v1 import BaseModel, root_validator
-from langchain_core.utils import get_from_dict_or_env
+from langchain_core.pydantic_v1 import BaseModel
+from langchain_core.utils import get_from_dict_or_env, pre_init
 
 
 class NLPCloudEmbeddings(BaseModel, Embeddings):
@@ -30,7 +30,7 @@ class NLPCloudEmbeddings(BaseModel, Embeddings):
     ) -> None:
         super().__init__(model_name=model_name, gpu=gpu, **kwargs)
 
-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that api key and python package exists in environment."""
         nlpcloud_api_key = get_from_dict_or_env(

@@ -2,7 +2,8 @@ from enum import Enum
 from typing import Any, Dict, Iterator, List, Mapping, Optional
 
 from langchain_core.embeddings import Embeddings
-from langchain_core.pydantic_v1 import BaseModel, Extra, root_validator
+from langchain_core.pydantic_v1 import BaseModel, Extra
+from langchain_core.utils import pre_init
 
 CUSTOM_ENDPOINT_PREFIX = "ocid1.generativeaiendpoint"
 
@@ -89,7 +90,7 @@ class OCIGenAIEmbeddings(BaseModel, Embeddings):
 
         extra = Extra.forbid
 
-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:  # pylint: disable=no-self-argument
         """Validate that OCI config and python package exists in environment."""


@@ -1,7 +1,7 @@
 from typing import Dict
 
-from langchain_core.pydantic_v1 import Field, SecretStr, root_validator
-from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env
+from langchain_core.pydantic_v1 import Field, SecretStr
+from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env, pre_init
 
 from langchain_community.embeddings.openai import OpenAIEmbeddings
 from langchain_community.utils.openai import is_openai_v1
@@ -38,7 +38,7 @@ class OctoAIEmbeddings(OpenAIEmbeddings):
     def lc_secrets(self) -> Dict[str, str]:
         return {"octoai_api_token": "OCTOAI_API_TOKEN"}
 
-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: dict) -> dict:
         """Validate that api key and python package exists in environment."""
         values["endpoint_url"] = get_from_dict_or_env(

@@ -22,7 +22,11 @@ import numpy as np
 from langchain_core._api.deprecation import deprecated
 from langchain_core.embeddings import Embeddings
 from langchain_core.pydantic_v1 import BaseModel, Extra, Field, root_validator
-from langchain_core.utils import get_from_dict_or_env, get_pydantic_field_names
+from langchain_core.utils import (
+    get_from_dict_or_env,
+    get_pydantic_field_names,
+    pre_init,
+)
 from tenacity import (
     AsyncRetrying,
     before_sleep_log,
@@ -282,7 +286,7 @@ class OpenAIEmbeddings(BaseModel, Embeddings):
         values["model_kwargs"] = extra
         return values
 
-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that api key and python package exists in environment."""
         values["openai_api_key"] = get_from_dict_or_env(

@@ -5,8 +5,8 @@ from typing import Any, Callable, Dict, List, Optional, Union
 
 from langchain_core.embeddings import Embeddings
 from langchain_core.language_models.llms import create_base_retry_decorator
-from langchain_core.pydantic_v1 import BaseModel, SecretStr, root_validator
-from langchain_core.utils import get_from_dict_or_env
+from langchain_core.pydantic_v1 import BaseModel, SecretStr
+from langchain_core.utils import get_from_dict_or_env, pre_init
 
 logger = logging.getLogger(__name__)
 
@@ -32,7 +32,7 @@ class PremAIEmbeddings(BaseModel, Embeddings):
 
     client: Any
 
-    @root_validator()
+    @pre_init
     def validate_environments(cls, values: Dict) -> Dict:
         """Validate that the package is installed and that the API token is valid"""
         try:

@@ -1,7 +1,8 @@
 from typing import Any, Dict, List, Optional
 
 from langchain_core.embeddings import Embeddings
-from langchain_core.pydantic_v1 import BaseModel, Extra, root_validator
+from langchain_core.pydantic_v1 import BaseModel, Extra
+from langchain_core.utils import pre_init
 
 from langchain_community.llms.sagemaker_endpoint import ContentHandlerBase
 
@@ -115,7 +116,7 @@ class SagemakerEndpointEmbeddings(BaseModel, Embeddings):
         extra = Extra.forbid
         arbitrary_types_allowed = True
 
-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Dont do anything if client provided externally"""
         if values.get("client") is not None:

@@ -3,8 +3,8 @@ from typing import Dict, Generator, List, Optional
 
 import requests
 from langchain_core.embeddings import Embeddings
-from langchain_core.pydantic_v1 import BaseModel, root_validator
-from langchain_core.utils import get_from_dict_or_env
+from langchain_core.pydantic_v1 import BaseModel
+from langchain_core.utils import get_from_dict_or_env, pre_init
 
 
 class SambaStudioEmbeddings(BaseModel, Embeddings):
@@ -64,7 +64,7 @@ class SambaStudioEmbeddings(BaseModel, Embeddings):
     batch_size: int = 32
     """Batch size for the embedding models"""
 
-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that api key and python package exists in environment."""
         values["sambastudio_embeddings_base_url"] = get_from_dict_or_env(

@@ -6,8 +6,8 @@ from typing import Any, Callable, Dict, List, Optional
 import requests
 from langchain_core._api import deprecated
 from langchain_core.embeddings import Embeddings
-from langchain_core.pydantic_v1 import BaseModel, Extra, SecretStr, root_validator
-from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env
+from langchain_core.pydantic_v1 import BaseModel, Extra, SecretStr
+from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env, pre_init
 from tenacity import (
     before_sleep_log,
     retry,
@@ -80,7 +80,7 @@ class SolarEmbeddings(BaseModel, Embeddings):
 
         extra = Extra.forbid
 
-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate api key exists in environment."""
         solar_api_key = convert_to_secret_str(

@@ -8,7 +8,7 @@ from typing import Any, Dict, List, Literal, Optional, Tuple
 from langchain_core._api.deprecation import deprecated
 from langchain_core.embeddings import Embeddings
 from langchain_core.language_models.llms import create_base_retry_decorator
-from langchain_core.pydantic_v1 import root_validator
+from langchain_core.utils import pre_init
 
 from langchain_community.llms.vertexai import _VertexAICommon
 from langchain_community.utilities.vertexai import raise_vertex_import_error
@@ -33,7 +33,7 @@ class VertexAIEmbeddings(_VertexAICommon, Embeddings):
     show_progress_bar: bool = False
     """Whether to show a tqdm progress bar. Must have `tqdm` installed."""
 
-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validates that the python package exists in environment."""
         cls._try_init_vertexai(values)

@@ -4,8 +4,8 @@ import logging
 from typing import Any, Dict, List, Optional
 
 from langchain_core.embeddings import Embeddings
-from langchain_core.pydantic_v1 import BaseModel, root_validator
-from langchain_core.utils import get_from_dict_or_env
+from langchain_core.pydantic_v1 import BaseModel
+from langchain_core.utils import get_from_dict_or_env, pre_init
 
 logger = logging.getLogger(__name__)
 
@@ -43,7 +43,7 @@ class VolcanoEmbeddings(BaseModel, Embeddings):
     client: Any
     """volcano client"""
 
-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """
         Validate whether volcano_ak and volcano_sk in the environment variables or

@@ -7,8 +7,8 @@ import time
 from typing import Any, Callable, Dict, List, Sequence
 
 from langchain_core.embeddings import Embeddings
-from langchain_core.pydantic_v1 import BaseModel, Field, SecretStr, root_validator
-from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env
+from langchain_core.pydantic_v1 import BaseModel, Field, SecretStr
+from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env, pre_init
 from tenacity import (
     before_sleep_log,
     retry,
@@ -76,7 +76,7 @@ class YandexGPTEmbeddings(BaseModel, Embeddings):
 
         allow_population_by_field_name = True
 
-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that iam token exists in environment."""


@@ -3,8 +3,8 @@ from typing import Any, Dict, List, Optional, cast
 import requests
 from langchain_core.callbacks import CallbackManagerForLLMRun
 from langchain_core.language_models.llms import LLM
-from langchain_core.pydantic_v1 import BaseModel, Extra, SecretStr, root_validator
-from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env
+from langchain_core.pydantic_v1 import BaseModel, Extra, SecretStr
+from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env, pre_init
 
 
 class AI21PenaltyData(BaseModel):
@@ -73,7 +73,7 @@ class AI21(LLM):
 
         extra = Extra.forbid
 
-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that api key exists in environment."""
         ai21_api_key = convert_to_secret_str(

@@ -2,8 +2,8 @@ from typing import Any, Dict, List, Optional, Sequence
 
 from langchain_core.callbacks import CallbackManagerForLLMRun
 from langchain_core.language_models.llms import LLM
-from langchain_core.pydantic_v1 import Extra, root_validator
-from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env
+from langchain_core.pydantic_v1 import Extra, SecretStr
+from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env, pre_init
 
 from langchain_community.llms.utils import enforce_stop_tokens
 
@@ -129,7 +129,7 @@ class AlephAlpha(LLM):
     """Stop sequences to use."""
 
     # Client params
-    aleph_alpha_api_key: Optional[str] = None
+    aleph_alpha_api_key: Optional[SecretStr] = None
     """API key for Aleph Alpha API."""
     host: str = "https://api.aleph-alpha.com"
     """The hostname of the API host.

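Besides the validator migration, the hunk above also tightens the Aleph Alpha key field from `Optional[str]` to `Optional[SecretStr]`, so the key is masked in reprs and logs. A short sketch of the behavior (the key value is fake):

```python
from langchain_core.utils import convert_to_secret_str

key = convert_to_secret_str("fake-api-key")  # returns a pydantic SecretStr
print(key)                     # **********  (masked in str/repr)
print(key.get_secret_value())  # fake-api-key (raw value on explicit access)
```
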
@@ -167,7 +167,7 @@ class AlephAlpha(LLM):
|
||||
|
||||
extra = Extra.forbid
|
||||
|
||||
@root_validator()
|
||||
@pre_init
|
||||
def validate_environment(cls, values: Dict) -> Dict:
|
||||
"""Validate that api key and python package exists in environment."""
|
||||
values["aleph_alpha_api_key"] = convert_to_secret_str(
|
||||
|
||||
@@ -25,6 +25,7 @@ from langchain_core.utils import (
|
||||
check_package_version,
|
||||
get_from_dict_or_env,
|
||||
get_pydantic_field_names,
|
||||
pre_init,
|
||||
)
|
||||
from langchain_core.utils.utils import build_extra_kwargs, convert_to_secret_str
|
||||
|
||||
@@ -74,7 +75,7 @@ class _AnthropicCommon(BaseLanguageModel):
|
||||
)
|
||||
return values
|
||||
|
||||
@root_validator()
|
||||
@pre_init
|
||||
def validate_environment(cls, values: Dict) -> Dict:
|
||||
"""Validate that api key and python package exists in environment."""
|
||||
values["anthropic_api_key"] = convert_to_secret_str(
|
||||
@@ -185,7 +186,7 @@ class Anthropic(LLM, _AnthropicCommon):
|
||||
allow_population_by_field_name = True
|
||||
arbitrary_types_allowed = True
|
||||
|
||||
@root_validator()
|
||||
@pre_init
|
||||
def raise_warning(cls, values: Dict) -> Dict:
|
||||
"""Raise warning that this class is deprecated."""
|
||||
warnings.warn(
|
||||
|
||||
@@ -14,8 +14,8 @@ from langchain_core.callbacks import (
|
||||
CallbackManagerForLLMRun,
|
||||
)
|
||||
from langchain_core.outputs import Generation, GenerationChunk, LLMResult
|
||||
from langchain_core.pydantic_v1 import Field, SecretStr, root_validator
|
||||
from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env
|
||||
from langchain_core.pydantic_v1 import Field, SecretStr
|
||||
from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env, pre_init
|
||||
|
||||
from langchain_community.llms.openai import (
|
||||
BaseOpenAI,
|
||||
@@ -93,7 +93,7 @@ class Anyscale(BaseOpenAI):
|
||||
def is_lc_serializable(cls) -> bool:
|
||||
return False
|
||||
|
||||
@root_validator()
|
||||
@pre_init
|
||||
def validate_environment(cls, values: Dict) -> Dict:
|
||||
"""Validate that api key and python package exists in environment."""
|
||||
values["anyscale_api_base"] = get_from_dict_or_env(
|
||||
|
||||
@@ -3,7 +3,8 @@ from typing import Any, Dict, List, Optional
|
||||
from langchain_core.callbacks import CallbackManagerForLLMRun
|
||||
from langchain_core.language_models import BaseLLM
|
||||
from langchain_core.outputs import Generation, LLMResult
|
||||
from langchain_core.pydantic_v1 import Field, root_validator
|
||||
from langchain_core.pydantic_v1 import Field
|
||||
from langchain_core.utils import pre_init
|
||||
|
||||
|
||||
class Aphrodite(BaseLLM):
|
||||
@@ -157,7 +158,7 @@ class Aphrodite(BaseLLM):
|
||||
|
||||
client: Any #: :meta private:
|
||||
|
||||
@root_validator()
|
||||
@pre_init
|
||||
def validate_environment(cls, values: Dict) -> Dict:
|
||||
"""Validate that python package exists in environment."""
|
||||
|
||||
|
||||
@@ -7,8 +7,8 @@ from typing import Any, Dict, List, Optional
|
||||
import requests
|
||||
from langchain_core.callbacks import CallbackManagerForLLMRun
|
||||
from langchain_core.language_models.llms import LLM
|
||||
from langchain_core.pydantic_v1 import Field, SecretStr, root_validator
|
||||
from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env
|
||||
from langchain_core.pydantic_v1 import Field, SecretStr
|
||||
from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env, pre_init
|
||||
|
||||
from langchain_community.llms.utils import enforce_stop_tokens
|
||||
|
||||
@@ -31,7 +31,7 @@ class BaichuanLLM(LLM):
|
||||
baichuan_api_host: Optional[str] = None
|
||||
baichuan_api_key: Optional[SecretStr] = None
|
||||
|
||||
@root_validator()
|
||||
@pre_init
|
||||
def validate_environment(cls, values: Dict) -> Dict:
|
||||
values["baichuan_api_key"] = convert_to_secret_str(
|
||||
get_from_dict_or_env(values, "baichuan_api_key", "BAICHUAN_API_KEY")
|
||||
|
||||
@@ -16,8 +16,8 @@ from langchain_core.callbacks import (
|
||||
)
|
||||
from langchain_core.language_models.llms import LLM
|
||||
from langchain_core.outputs import GenerationChunk
|
||||
from langchain_core.pydantic_v1 import Field, root_validator
|
||||
from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env
|
||||
from langchain_core.pydantic_v1 import Field
|
||||
from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env, pre_init
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
@@ -76,7 +76,7 @@ class QianfanLLMEndpoint(LLM):
|
||||
In the case of other model, passing these params will not affect the result.
|
||||
"""
|
||||
|
||||
@root_validator()
|
||||
@pre_init
|
||||
def validate_environment(cls, values: Dict) -> Dict:
|
||||
values["qianfan_ak"] = convert_to_secret_str(
|
||||
get_from_dict_or_env(
|
||||
|
||||
@@ -4,7 +4,7 @@ from typing import Any, Dict, List, Mapping, Optional, cast
|
||||
from langchain_core.callbacks import CallbackManagerForLLMRun
|
||||
from langchain_core.language_models.llms import LLM
|
||||
from langchain_core.pydantic_v1 import Extra, Field, SecretStr, root_validator
|
||||
from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env
|
||||
from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env, pre_init
|
||||
|
||||
from langchain_community.llms.utils import enforce_stop_tokens
|
||||
|
||||
@@ -63,7 +63,7 @@ class Banana(LLM):
|
||||
values["model_kwargs"] = extra
|
||||
return values
|
||||
|
||||
@root_validator()
|
||||
@pre_init
|
||||
def validate_environment(cls, values: Dict) -> Dict:
|
||||
"""Validate that api key and python package exists in environment."""
|
||||
banana_api_key = convert_to_secret_str(
|
||||
|
||||
@@ -10,7 +10,7 @@ import requests
|
||||
from langchain_core.callbacks import CallbackManagerForLLMRun
|
||||
from langchain_core.language_models.llms import LLM
|
||||
from langchain_core.pydantic_v1 import Extra, Field, root_validator
|
||||
from langchain_core.utils import get_from_dict_or_env
|
||||
from langchain_core.utils import get_from_dict_or_env, pre_init
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
@@ -95,7 +95,7 @@ class Beam(LLM):
|
||||
values["model_kwargs"] = extra
|
||||
return values
|
||||
|
||||
@root_validator()
|
||||
@pre_init
|
||||
def validate_environment(cls, values: Dict) -> Dict:
|
||||
"""Validate that api key and python package exists in environment."""
|
||||
beam_client_id = get_from_dict_or_env(
|
||||
|
||||
@@ -21,8 +21,8 @@ from langchain_core.callbacks import (
|
||||
)
|
||||
from langchain_core.language_models.llms import LLM
|
||||
from langchain_core.outputs import GenerationChunk
|
||||
from langchain_core.pydantic_v1 import BaseModel, Extra, Field, root_validator
|
||||
from langchain_core.utils import get_from_dict_or_env
|
||||
from langchain_core.pydantic_v1 import BaseModel, Extra, Field
|
||||
from langchain_core.utils import get_from_dict_or_env, pre_init
|
||||
|
||||
from langchain_community.llms.utils import enforce_stop_tokens
|
||||
from langchain_community.utilities.anthropic import (
|
||||
@@ -389,7 +389,7 @@ class BedrockBase(BaseModel, ABC):
|
||||
...Logic to handle guardrail intervention...
|
||||
""" # noqa: E501
|
||||
|
||||
@root_validator()
|
||||
@pre_init
|
||||
def validate_environment(cls, values: Dict) -> Dict:
|
||||
"""Validate that AWS credentials to and python package exists in environment."""
|
||||
|
||||
@@ -743,7 +743,7 @@ class Bedrock(LLM, BedrockBase):
|
||||
|
||||
"""
|
||||
|
||||
@root_validator()
|
||||
@pre_init
|
||||
def validate_environment(cls, values: Dict) -> Dict:
|
||||
model_id = values["model_id"]
|
||||
if model_id.startswith("anthropic.claude-3"):
|
||||
|
||||
@@ -5,7 +5,7 @@ import requests
|
||||
from langchain_core.callbacks import CallbackManagerForLLMRun
|
||||
from langchain_core.language_models.llms import LLM
|
||||
from langchain_core.pydantic_v1 import Extra, Field, SecretStr, root_validator
|
||||
from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env
|
||||
from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env, pre_init
|
||||
|
||||
from langchain_community.llms.utils import enforce_stop_tokens
|
||||
|
||||
@@ -62,7 +62,7 @@ class CerebriumAI(LLM):
|
||||
values["model_kwargs"] = extra
|
||||
return values
|
||||
|
||||
@root_validator()
|
||||
@pre_init
|
||||
def validate_environment(cls, values: Dict) -> Dict:
|
||||
"""Validate that api key and python package exists in environment."""
|
||||
cerebriumai_api_key = convert_to_secret_str(
|
||||
|
||||
@@ -4,7 +4,8 @@ from typing import Any, Dict, List, Optional
|
||||
from langchain_core.callbacks import CallbackManagerForLLMRun
|
||||
from langchain_core.language_models.llms import LLM
|
||||
from langchain_core.outputs import Generation, LLMResult
|
||||
from langchain_core.pydantic_v1 import Extra, Field, root_validator
|
||||
from langchain_core.pydantic_v1 import Extra, Field
|
||||
from langchain_core.utils import pre_init
|
||||
|
||||
from langchain_community.llms.utils import enforce_stop_tokens
|
||||
|
||||
@@ -53,7 +54,7 @@ class Clarifai(LLM):
|
||||
|
||||
extra = Extra.forbid
|
||||
|
||||
@root_validator()
|
||||
@pre_init
|
||||
def validate_environment(cls, values: Dict) -> Dict:
|
||||
"""Validate that we have all required info to access Clarifai
|
||||
platform and python package exists in environment."""
|
||||
|
||||
@@ -10,8 +10,8 @@ from langchain_core.callbacks import (
|
||||
)
|
||||
from langchain_core.language_models.llms import LLM
|
||||
from langchain_core.load.serializable import Serializable
|
||||
from langchain_core.pydantic_v1 import Extra, Field, SecretStr, root_validator
|
||||
from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env
|
||||
from langchain_core.pydantic_v1 import Extra, Field, SecretStr
|
||||
from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env, pre_init
|
||||
from tenacity import (
|
||||
before_sleep_log,
|
||||
retry,
|
||||
@@ -95,7 +95,7 @@ class BaseCohere(Serializable):
|
||||
user_agent: str = "langchain"
|
||||
"""Identifier for the application making the request."""
|
||||
|
||||
@root_validator()
|
||||
@pre_init
|
||||
def validate_environment(cls, values: Dict) -> Dict:
|
||||
"""Validate that api key and python package exists in environment."""
|
||||
try:
|
||||
|
||||
@@ -6,7 +6,7 @@ from langchain_core.callbacks import (
|
||||
CallbackManagerForLLMRun,
|
||||
)
|
||||
from langchain_core.language_models.llms import LLM
|
||||
from langchain_core.pydantic_v1 import root_validator
|
||||
from langchain_core.utils import pre_init
|
||||
|
||||
|
||||
class CTransformers(LLM):
|
||||
@@ -57,7 +57,7 @@ class CTransformers(LLM):
|
||||
"""Return type of llm."""
|
||||
return "ctransformers"
|
||||
|
||||
@root_validator()
|
||||
@pre_init
|
||||
def validate_environment(cls, values: Dict) -> Dict:
|
||||
"""Validate that ``ctransformers`` package is installed."""
|
||||
try:
|
||||
|
||||
@@ -3,7 +3,8 @@ from typing import Any, Dict, List, Optional, Union
|
||||
from langchain_core.callbacks import CallbackManagerForLLMRun
|
||||
from langchain_core.language_models.llms import BaseLLM
|
||||
from langchain_core.outputs import Generation, LLMResult
|
||||
from langchain_core.pydantic_v1 import Field, root_validator
|
||||
from langchain_core.pydantic_v1 import Field
|
||||
from langchain_core.utils import pre_init
|
||||
|
||||
|
||||
class CTranslate2(BaseLLM):
|
||||
@@ -50,7 +51,7 @@ class CTranslate2(BaseLLM):
|
||||
explicitly specified.
|
||||
"""
|
||||
|
||||
@root_validator()
|
||||
@pre_init
|
||||
def validate_environment(cls, values: Dict) -> Dict:
|
||||
"""Validate that python package exists in environment."""
|
||||
|
||||
|
||||
@@ -8,8 +8,8 @@ from langchain_core.callbacks import (
|
||||
)
|
||||
from langchain_core.language_models.llms import LLM
|
||||
from langchain_core.outputs import GenerationChunk
|
||||
from langchain_core.pydantic_v1 import Extra, root_validator
|
||||
from langchain_core.utils import get_from_dict_or_env
|
||||
from langchain_core.pydantic_v1 import Extra
|
||||
from langchain_core.utils import get_from_dict_or_env, pre_init
|
||||
|
||||
from langchain_community.utilities.requests import Requests
|
||||
|
||||
@@ -43,7 +43,7 @@ class DeepInfra(LLM):
|
||||
|
||||
extra = Extra.forbid
|
||||
|
||||
@root_validator()
|
||||
@pre_init
|
||||
def validate_environment(cls, values: Dict) -> Dict:
|
||||
"""Validate that api key and python package exists in environment."""
|
||||
deepinfra_api_token = get_from_dict_or_env(
|
||||
|
||||
@@ -1,12 +1,18 @@
|
||||
# flake8: noqa
|
||||
from langchain_core.utils import pre_init
|
||||
from typing import Any, AsyncIterator, Dict, Iterator, List, Optional, Union
|
||||
from langchain_core.utils import pre_init
|
||||
from langchain_core.pydantic_v1 import root_validator
|
||||
from langchain_core.utils import pre_init
|
||||
from langchain_core.callbacks import (
|
||||
AsyncCallbackManagerForLLMRun,
|
||||
CallbackManagerForLLMRun,
|
||||
)
|
||||
from langchain_core.utils import pre_init
|
||||
from langchain_core.language_models.llms import LLM
|
||||
from langchain_core.utils import pre_init
|
||||
from langchain_community.llms.utils import enforce_stop_tokens
|
||||
from langchain_core.utils import pre_init
|
||||
from langchain_core.outputs import GenerationChunk
|
||||
|
||||
|
||||
@@ -55,7 +61,7 @@ class DeepSparse(LLM):
|
||||
"""Return type of llm."""
|
||||
return "deepsparse"
|
||||
|
||||
@root_validator()
|
||||
@pre_init
|
||||
def validate_environment(cls, values: Dict) -> Dict:
|
||||
"""Validate that ``deepsparse`` package is installed."""
|
||||
try:
|
||||
|
||||
@@ -10,7 +10,7 @@ from langchain_core.callbacks import (
 )
 from langchain_core.language_models.llms import LLM
 from langchain_core.pydantic_v1 import Extra, Field, root_validator
-from langchain_core.utils import get_from_dict_or_env
+from langchain_core.utils import get_from_dict_or_env, pre_init

 from langchain_community.llms.utils import enforce_stop_tokens
 from langchain_community.utilities.requests import Requests
@@ -73,7 +73,7 @@ class EdenAI(LLM):

         extra = Extra.forbid

-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that api key exists in environment."""
         values["edenai_api_key"] = get_from_dict_or_env(
@@ -3,7 +3,8 @@ from typing import Any, Dict, Iterator, List, Optional
 from langchain_core.callbacks import CallbackManagerForLLMRun
 from langchain_core.language_models import LLM
 from langchain_core.outputs import GenerationChunk
-from langchain_core.pydantic_v1 import Field, root_validator
+from langchain_core.pydantic_v1 import Field
+from langchain_core.utils import pre_init


 class ExLlamaV2(LLM):
@@ -58,7 +59,7 @@ class ExLlamaV2(LLM):
     disallowed_tokens: List[int] = Field(None)
     """List of tokens to disallow during generation."""

-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict[str, Any]) -> Dict[str, Any]:
         try:
             import torch
@@ -9,8 +9,8 @@ from langchain_core.callbacks import (
 )
 from langchain_core.language_models.llms import BaseLLM, create_base_retry_decorator
 from langchain_core.outputs import Generation, GenerationChunk, LLMResult
-from langchain_core.pydantic_v1 import Field, SecretStr, root_validator
-from langchain_core.utils import convert_to_secret_str
+from langchain_core.pydantic_v1 import Field, SecretStr
+from langchain_core.utils import convert_to_secret_str, pre_init
 from langchain_core.utils.env import get_from_dict_or_env


@@ -61,7 +61,7 @@ class Fireworks(BaseLLM):
         """Get the namespace of the langchain object."""
         return ["langchain", "llms", "fireworks"]

-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that api key in environment."""
         try:
@@ -10,7 +10,8 @@ from langchain_core.callbacks.manager import (
 from langchain_core.language_models.llms import LLM
 from langchain_core.load.serializable import Serializable
 from langchain_core.outputs import GenerationChunk, LLMResult
-from langchain_core.pydantic_v1 import Field, SecretStr, root_validator
+from langchain_core.pydantic_v1 import Field, SecretStr
+from langchain_core.utils import pre_init
 from langchain_core.utils.env import get_from_dict_or_env
 from langchain_core.utils.utils import convert_to_secret_str

@@ -66,7 +67,7 @@ class BaseFriendli(Serializable):
     # is used by default.
     top_p: Optional[float] = None

-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate if personal access token is provided in environment."""
         try:
@@ -11,7 +11,7 @@ from langchain_core.callbacks import (
 from langchain_core.language_models.llms import BaseLLM
 from langchain_core.load.serializable import Serializable
 from langchain_core.outputs import Generation, GenerationChunk, LLMResult
-from langchain_core.pydantic_v1 import root_validator
+from langchain_core.utils import pre_init

 if TYPE_CHECKING:
     import gigachat
@@ -113,7 +113,7 @@ class _BaseGigaChat(Serializable):
             verbose=self.verbose,
         )

-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate authenticate data in environment and python package is installed."""
         try:
@@ -6,8 +6,8 @@ from langchain_core._api.deprecation import deprecated
 from langchain_core.callbacks import CallbackManagerForLLMRun
 from langchain_core.language_models import LanguageModelInput
 from langchain_core.outputs import Generation, GenerationChunk, LLMResult
-from langchain_core.pydantic_v1 import BaseModel, SecretStr, root_validator
-from langchain_core.utils import get_from_dict_or_env
+from langchain_core.pydantic_v1 import BaseModel, SecretStr
+from langchain_core.utils import get_from_dict_or_env, pre_init

 from langchain_community.llms import BaseLLM
 from langchain_community.utilities.vertexai import create_retry_decorator
@@ -107,7 +107,7 @@ class GooglePalm(BaseLLM, BaseModel):
         """Get the namespace of the langchain object."""
         return ["langchain", "llms", "google_palm"]

-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate api key, python package exists."""
         google_api_key = get_from_dict_or_env(
@@ -4,7 +4,7 @@ from typing import Any, Dict, List, Mapping, Optional
 from langchain_core.callbacks import CallbackManagerForLLMRun
 from langchain_core.language_models.llms import LLM
 from langchain_core.pydantic_v1 import Extra, Field, SecretStr, root_validator
-from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env
+from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env, pre_init

 logger = logging.getLogger(__name__)

@@ -86,7 +86,7 @@ class GooseAI(LLM):
         values["model_kwargs"] = extra
         return values

-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that api key and python package exists in environment."""
         gooseai_api_key = convert_to_secret_str(
@@ -3,7 +3,8 @@ from typing import Any, Dict, List, Mapping, Optional, Set

 from langchain_core.callbacks import CallbackManagerForLLMRun
 from langchain_core.language_models.llms import LLM
-from langchain_core.pydantic_v1 import Extra, Field, root_validator
+from langchain_core.pydantic_v1 import Extra, Field
+from langchain_core.utils import pre_init

 from langchain_community.llms.utils import enforce_stop_tokens

@@ -127,7 +128,7 @@ class GPT4All(LLM):
             "streaming": self.streaming,
         }

-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that the python package exists in the environment."""
         try:
@@ -10,7 +10,11 @@ from langchain_core.callbacks import (
 from langchain_core.language_models.llms import LLM
 from langchain_core.outputs import GenerationChunk
 from langchain_core.pydantic_v1 import Extra, Field, root_validator
-from langchain_core.utils import get_from_dict_or_env, get_pydantic_field_names
+from langchain_core.utils import (
+    get_from_dict_or_env,
+    get_pydantic_field_names,
+    pre_init,
+)

 logger = logging.getLogger(__name__)

@@ -162,7 +166,7 @@ class HuggingFaceEndpoint(LLM):
         values["model"] = values.get("endpoint_url") or values.get("repo_id")
         return values

-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that package is installed and that the API token is valid."""
         try:
@@ -4,8 +4,8 @@ from typing import Any, Dict, List, Mapping, Optional
 from langchain_core._api.deprecation import deprecated
 from langchain_core.callbacks import CallbackManagerForLLMRun
 from langchain_core.language_models.llms import LLM
-from langchain_core.pydantic_v1 import Extra, root_validator
-from langchain_core.utils import get_from_dict_or_env
+from langchain_core.pydantic_v1 import Extra
+from langchain_core.utils import get_from_dict_or_env, pre_init

 from langchain_community.llms.utils import enforce_stop_tokens

@@ -61,7 +61,7 @@ class HuggingFaceHub(LLM):

         extra = Extra.forbid

-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that api key and python package exists in environment."""
         huggingfacehub_api_token = get_from_dict_or_env(
@@ -9,7 +9,7 @@ from langchain_core.callbacks import (
 )
 from langchain_core.language_models.llms import LLM
 from langchain_core.outputs import GenerationChunk
 from langchain_core.pydantic_v1 import Extra, Field, root_validator
-from langchain_core.utils import get_pydantic_field_names
+from langchain_core.utils import get_pydantic_field_names, pre_init

 logger = logging.getLogger(__name__)

@@ -134,7 +134,7 @@ class HuggingFaceTextGenInference(LLM):
         values["model_kwargs"] = extra
         return values

-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that python package exists in environment."""
@@ -8,7 +8,7 @@ from langchain_core.callbacks import CallbackManagerForLLMRun
 from langchain_core.language_models.llms import LLM
 from langchain_core.outputs import GenerationChunk
 from langchain_core.pydantic_v1 import Field, root_validator
-from langchain_core.utils import get_pydantic_field_names
+from langchain_core.utils import get_pydantic_field_names, pre_init
 from langchain_core.utils.utils import build_extra_kwargs

 logger = logging.getLogger(__name__)
@@ -133,7 +133,7 @@ class LlamaCpp(LLM):
     verbose: bool = True
     """Print verbose output to stderr."""

-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that llama-cpp-python library is installed."""
         try:
@@ -2,7 +2,8 @@ from typing import Any, Dict, List, Mapping, Optional

 from langchain_core.callbacks import CallbackManagerForLLMRun
 from langchain_core.language_models.llms import LLM
-from langchain_core.pydantic_v1 import Extra, root_validator
+from langchain_core.pydantic_v1 import Extra
+from langchain_core.utils import pre_init


 class ManifestWrapper(LLM):
@@ -16,7 +17,7 @@ class ManifestWrapper(LLM):

         extra = Extra.forbid

-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that python package exists in environment."""
         try:
@@ -16,7 +16,7 @@ from langchain_core.callbacks import (
 )
 from langchain_core.language_models.llms import LLM
 from langchain_core.pydantic_v1 import BaseModel, Field, SecretStr, root_validator
-from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env
+from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env, pre_init

 from langchain_community.llms.utils import enforce_stop_tokens

@@ -72,7 +72,7 @@ class MinimaxCommon(BaseModel):
    minimax_group_id: Optional[str] = None
    minimax_api_key: Optional[SecretStr] = None

-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that api key and python package exists in environment."""
         values["minimax_api_key"] = convert_to_secret_str(
@@ -4,7 +4,7 @@ import requests
 from langchain_core.callbacks import CallbackManagerForLLMRun
 from langchain_core.language_models import LLM
 from langchain_core.pydantic_v1 import BaseModel, Field, SecretStr, root_validator
-from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env
+from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env, pre_init

 from langchain_community.llms.utils import enforce_stop_tokens

@@ -79,7 +79,7 @@ class MoonshotCommon(BaseModel):
         """
         return values

-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that api key and python package exists in environment."""
         values["moonshot_api_key"] = convert_to_secret_str(
@@ -3,8 +3,8 @@ from typing import Any, Dict, List, Mapping, Optional
 import requests
 from langchain_core.callbacks import CallbackManagerForLLMRun
 from langchain_core.language_models.llms import LLM
-from langchain_core.pydantic_v1 import Extra, root_validator
-from langchain_core.utils import get_from_dict_or_env
+from langchain_core.pydantic_v1 import Extra
+from langchain_core.utils import get_from_dict_or_env, pre_init

 from langchain_community.llms.utils import enforce_stop_tokens

@@ -64,7 +64,7 @@ class MosaicML(LLM):

         extra = Extra.forbid

-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that api key and python package exists in environment."""
         mosaicml_api_token = get_from_dict_or_env(
@@ -2,8 +2,8 @@ from typing import Any, Dict, List, Mapping, Optional

 from langchain_core.callbacks import CallbackManagerForLLMRun
 from langchain_core.language_models.llms import LLM
-from langchain_core.pydantic_v1 import Extra, SecretStr, root_validator
-from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env
+from langchain_core.pydantic_v1 import Extra, SecretStr
+from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env, pre_init


 class NLPCloud(LLM):
@@ -56,7 +56,7 @@ class NLPCloud(LLM):

         extra = Extra.forbid

-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that api key and python package exists in environment."""
         values["nlpcloud_api_key"] = convert_to_secret_str(
@@ -4,8 +4,8 @@ from typing import Any, Dict, List, Optional
 import requests
 from langchain_core.callbacks import CallbackManagerForLLMRun
 from langchain_core.language_models.llms import LLM
-from langchain_core.pydantic_v1 import Field, root_validator
-from langchain_core.utils import get_from_dict_or_env
+from langchain_core.pydantic_v1 import Field
+from langchain_core.utils import get_from_dict_or_env, pre_init

 logger = logging.getLogger(__name__)

@@ -47,7 +47,7 @@ class OCIModelDeploymentLLM(LLM):
     """Stop words to use when generating. Model output is cut off
     at the first occurrence of any of these substrings."""

-    @root_validator()
+    @pre_init
     def validate_environment(  # pylint: disable=no-self-argument
         cls, values: Dict
     ) -> Dict:
@@ -8,7 +8,8 @@ from typing import Any, Dict, Iterator, List, Mapping, Optional
 from langchain_core.callbacks import CallbackManagerForLLMRun
 from langchain_core.language_models.llms import LLM
 from langchain_core.outputs import GenerationChunk
-from langchain_core.pydantic_v1 import BaseModel, Extra, root_validator
+from langchain_core.pydantic_v1 import BaseModel, Extra
+from langchain_core.utils import pre_init

 from langchain_community.llms.utils import enforce_stop_tokens

@@ -99,7 +100,7 @@ class OCIGenAIBase(BaseModel, ABC):
     is_stream: bool = False
     """Whether to stream back partial progress"""

-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that OCI config and python package exists in environment."""
@@ -1,7 +1,7 @@
 from typing import Any, Dict

-from langchain_core.pydantic_v1 import Field, SecretStr, root_validator
-from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env
+from langchain_core.pydantic_v1 import Field, SecretStr
+from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env, pre_init

 from langchain_community.llms.openai import BaseOpenAI
 from langchain_community.utils.openai import is_openai_v1
@@ -66,7 +66,7 @@ class OctoAIEndpoint(BaseOpenAI):
         """Return type of llm."""
         return "octoai_endpoint"

-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that api key and python package exists in environment."""
         values["octoai_api_base"] = get_from_dict_or_env(
@@ -5,8 +5,8 @@ from langchain_core.callbacks import CallbackManagerForLLMRun
 from langchain_core.language_models import BaseLanguageModel
 from langchain_core.language_models.llms import LLM
 from langchain_core.messages import AIMessage
-from langchain_core.pydantic_v1 import Extra, root_validator
-from langchain_core.utils import get_from_dict_or_env
+from langchain_core.pydantic_v1 import Extra
+from langchain_core.utils import get_from_dict_or_env, pre_init

 logger = logging.getLogger(__name__)

@@ -38,7 +38,7 @@ class OpaquePrompts(LLM):

         extra = Extra.forbid

-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validates that the OpaquePrompts API key and the Python package exist."""
         try:
@@ -29,7 +29,11 @@ from langchain_core.callbacks import (
 from langchain_core.language_models.llms import BaseLLM, create_base_retry_decorator
 from langchain_core.outputs import Generation, GenerationChunk, LLMResult
 from langchain_core.pydantic_v1 import Field, root_validator
-from langchain_core.utils import get_from_dict_or_env, get_pydantic_field_names
+from langchain_core.utils import (
+    get_from_dict_or_env,
+    get_pydantic_field_names,
+    pre_init,
+)
 from langchain_core.utils.utils import build_extra_kwargs

 from langchain_community.utils.openai import is_openai_v1
@@ -269,7 +273,7 @@ class BaseOpenAI(BaseLLM):
         )
         return values

-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that api key and python package exists in environment."""
         if values["n"] < 1:
@@ -818,7 +822,7 @@ class AzureOpenAI(BaseOpenAI):
         """Get the namespace of the langchain object."""
         return ["langchain", "llms", "openai"]

-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that api key and python package exists in environment."""
         if values["n"] < 1:
@@ -1025,7 +1029,7 @@ class OpenAIChat(BaseLLM):
         values["model_kwargs"] = extra
         return values

-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that api key and python package exists in environment."""
         openai_api_key = get_from_dict_or_env(
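Note that several modules in this diff (GooseAI, HuggingFaceEndpoint, HuggingFaceTextGenInference, LlamaCpp, the OpenAI family, Petals, PipelineAI, Replicate) keep `root_validator` in their imports: only the bare `@root_validator()` on `validate_environment` is migrated, while `pre=True` hooks such as the `build_extra` validators are left untouched. A rough sketch of the two coexisting on one class follows; `BarLLM` is hypothetical and illustrative only.

```python
from typing import Any, Dict, List, Optional

from langchain_core.callbacks import CallbackManagerForLLMRun
from langchain_core.language_models.llms import LLM
from langchain_core.pydantic_v1 import Field, root_validator
from langchain_core.utils import get_pydantic_field_names, pre_init


class BarLLM(LLM):
    """Hypothetical LLM mixing an unmigrated pre=True hook with @pre_init."""

    model_kwargs: Dict[str, Any] = Field(default_factory=dict)
    """Holds any model parameters valid for the upstream API."""

    @root_validator(pre=True)
    def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:
        # Unmigrated hook: sweep unrecognized kwargs into model_kwargs
        # before pydantic rejects them as extra fields.
        field_names = get_pydantic_field_names(cls)
        extra = values.get("model_kwargs", {})
        for name in list(values):
            if name not in field_names:
                extra[name] = values.pop(name)
        values["model_kwargs"] = extra
        return values

    @pre_init  # migrated hook: formerly @root_validator()
    def validate_environment(cls, values: Dict) -> Dict:
        """Validate the runtime environment before the model is used."""
        return values

    @property
    def _llm_type(self) -> str:
        return "bar"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        return prompt
```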
@@ -1,6 +1,6 @@
 from typing import Any, Dict

-from langchain_core.pydantic_v1 import root_validator
+from langchain_core.utils import pre_init

 from langchain_community.llms.openai import BaseOpenAI

@@ -16,7 +16,7 @@ class OpenLM(BaseOpenAI):
     def _invocation_params(self) -> Dict[str, Any]:
         return {**{"model": self.model_name}, **super()._invocation_params}

-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         try:
             import openlm
@@ -6,8 +6,7 @@ import requests
 from langchain_core.callbacks import CallbackManagerForLLMRun
 from langchain_core.language_models.llms import LLM
 from langchain_core.outputs import GenerationChunk
-from langchain_core.pydantic_v1 import root_validator
-from langchain_core.utils import get_from_dict_or_env
+from langchain_core.utils import get_from_dict_or_env, pre_init

 from langchain_community.llms.utils import enforce_stop_tokens

@@ -52,7 +51,7 @@ class PaiEasEndpoint(LLM):

     version: Optional[str] = "2.0"

-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that api key and python package exists in environment."""
         values["eas_service_url"] = get_from_dict_or_env(
@@ -4,7 +4,7 @@ from typing import Any, Dict, List, Mapping, Optional
 from langchain_core.callbacks import CallbackManagerForLLMRun
 from langchain_core.language_models.llms import LLM
 from langchain_core.pydantic_v1 import Extra, Field, SecretStr, root_validator
-from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env
+from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env, pre_init

 from langchain_community.llms.utils import enforce_stop_tokens

@@ -86,7 +86,7 @@ class Petals(LLM):
         values["model_kwargs"] = extra
         return values

-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that api key and python package exists in environment."""
         huggingface_api_key = convert_to_secret_str(
@@ -10,7 +10,7 @@ from langchain_core.pydantic_v1 import (
     SecretStr,
     root_validator,
 )
-from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env
+from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env, pre_init

 from langchain_community.llms.utils import enforce_stop_tokens

@@ -65,7 +65,7 @@ class PipelineAI(LLM, BaseModel):
         values["pipeline_kwargs"] = extra
         return values

-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that api key and python package exists in environment."""
         pipeline_api_key = convert_to_secret_str(
@@ -3,8 +3,8 @@ from typing import Any, Dict, List, Optional

 from langchain_core.callbacks import CallbackManagerForLLMRun
 from langchain_core.language_models.llms import LLM
-from langchain_core.pydantic_v1 import Extra, root_validator
-from langchain_core.utils import get_from_dict_or_env
+from langchain_core.pydantic_v1 import Extra
+from langchain_core.utils import get_from_dict_or_env, pre_init

 from langchain_community.llms.utils import enforce_stop_tokens

@@ -53,7 +53,7 @@ class PredictionGuard(LLM):

         extra = Extra.forbid

-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that the access token and python package exists in environment."""
         token = get_from_dict_or_env(values, "token", "PREDICTIONGUARD_TOKEN")
@@ -7,7 +7,7 @@ from langchain_core.callbacks import CallbackManagerForLLMRun
 from langchain_core.language_models.llms import LLM
 from langchain_core.outputs import GenerationChunk
 from langchain_core.pydantic_v1 import Extra, Field, root_validator
-from langchain_core.utils import get_from_dict_or_env
+from langchain_core.utils import get_from_dict_or_env, pre_init

 if TYPE_CHECKING:
     from replicate.prediction import Prediction
@@ -97,7 +97,7 @@ class Replicate(LLM):
         values["model_kwargs"] = extra
         return values

-    @root_validator()
+    @pre_init
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that api key and python package exists in environment."""
         replicate_api_token = get_from_dict_or_env(
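Construction-time behavior is intended to be unchanged by this migration: `validate_environment` still runs when the model object is instantiated, so a missing package or credential fails fast rather than at generation time. A usage sketch against the hypothetical `FooLLM` defined earlier:

```python
# Instantiation triggers validate_environment via the @pre_init hook,
# so a missing SDK surfaces here, before any generation call is made.
try:
    llm = FooLLM()
    print(llm.invoke("Hello"))
except ImportError as err:
    print(f"Environment validation failed: {err}")
```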
Some files were not shown because too many files have changed in this diff.