Compare commits

...

71 Commits

Author SHA1 Message Date
Eugene Yurtsev
59c03901fa update 2024-08-02 12:40:28 -04:00
Eugene Yurtsev
444ae9d03a SNAPSHOT: Update test_grpah.ambr some additional descriptions + correct handling of Optional attribute 2024-08-02 12:33:25 -04:00
Eugene Yurtsev
c6399cdb19 MANUAL: test_runnable update tests and name snapshots 2024-08-02 12:32:45 -04:00
Eugene Yurtsev
2b38e3c606 SNAPSHOT: UPDATE SNAPSHOT FOR CHAT PROMPT TEMPLATE -- some re-arrangement and handling of optional values 2024-08-02 12:32:14 -04:00
Eugene Yurtsev
489c451437 MANUAL: Update test_chat test to use old style ChatMessage.from_messages 2024-08-02 12:30:56 -04:00
Eugene Yurtsev
545d40ebd8 MANUAL: SCHEMA REMAPPING UPDATE REMOVE __default__ from top level 2024-08-02 12:30:37 -04:00
Eugene Yurtsev
b6d3f85deb MANUAL: Update get_name() for runnables to handle pydantic generic subclass 2024-08-02 12:09:22 -04:00
Eugene Yurtsev
3973470667 MANUAL: (PORT NOW) Remove optional name in runnable serializable (overloaded) 2024-08-02 11:01:11 -04:00
Eugene Yurtsev
5b463effbf MANUAL: Update Serializable to get correct name from generic Pydantic 2024-08-02 11:00:49 -04:00
Eugene Yurtsev
229b9b962f MANUAL: Update schema remapping, and update tests (new test fixture is more correct) 2024-08-02 10:29:24 -04:00
Eugene Yurtsev
3c967cb45c update 2024-08-02 10:14:46 -04:00
Eugene Yurtsev
2937808d1c x 2024-08-01 22:27:10 -04:00
Eugene Yurtsev
0802d0024e MANUAL: Hard-code RunnableBranch name 2024-08-01 22:02:25 -04:00
Eugene Yurtsev
95f7169fc7 MANUAL: FIX create_schema_from_function to handle classmethods and methods 2024-08-01 17:39:58 -04:00
Eugene Yurtsev
2f18fb68b8 MANUAL: (NO NEED TO PORT) Removed an accidentally commited test 2024-08-01 17:23:25 -04:00
Eugene Yurtsev
b71e3424c6 MANUAL: SkipValidation in ChatPromptTemplate 2024-08-01 17:21:06 -04:00
Eugene Yurtsev
64e6996b57 Fix unit test 2024-08-01 17:20:51 -04:00
Eugene Yurtsev
701a303432 x 2024-08-01 17:11:48 -04:00
Eugene Yurtsev
08b4fce4e3 update community 2024-08-01 10:33:12 -04:00
Eugene Yurtsev
3cf432849e MANUAL: TEMPORARILY ADD v1 REPR to a base abstractions 2024-08-01 09:10:22 -04:00
Eugene Yurtsev
161610fcd6 MANUAL: Fix one more unit test 2024-07-31 22:12:39 -04:00
Eugene Yurtsev
949e28a7c2 MANUAL: Adjust test_prompt to not show default None 2024-07-31 22:07:46 -04:00
Eugene Yurtsev
ce30623462 MANUAL: Fix unit tests for test_history 2024-07-31 22:03:02 -04:00
Eugene Yurtsev
74ea68b840 MANUAL: FIX LLM Result (PORT NOW) Needs discriminated union 2024-07-31 22:02:43 -04:00
Eugene Yurtsev
2d671a90d9 MANUAL: Fixes for test function calling (White space changes?!) 2024-07-31 21:44:09 -04:00
Eugene Yurtsev
ae827201ef MANUAL: Add required fields 2024-07-31 21:39:33 -04:00
Eugene Yurtsev
272a81af2e MANUAL: Better schema conversion for unit tests 2024-07-31 21:34:52 -04:00
Eugene Yurtsev
725e42c7f0 MANUAL: Runnable (1) use get_type_hints (portable), (2) Use mro() and bases for resolving generic 2024-07-31 21:34:28 -04:00
Eugene Yurtsev
db2f1a9c02 MANUAL: Fix output type on base output parser to use mro() 2024-07-31 21:33:56 -04:00
Eugene Yurtsev
1100b82a49 MANUAL: Fix OutputType inference of OutputParsers 2024-07-31 18:04:06 -04:00
Eugene Yurtsev
6c2baa98eb MANUA: SCHEMA REMAPPING 2024-07-31 18:03:44 -04:00
Eugene Yurtsev
372e799bf3 MANUAL: HANDLE MORE STUFF IN SCHEMA REMAPPING 2024-07-31 16:23:17 -04:00
Eugene Yurtsev
e077eecef4 MANUAL: Set verbose to repr=False in chat model (TODO?) 2024-07-31 16:11:48 -04:00
Eugene Yurtsev
d0ff30e6af MANUAL: Update create_base_model to use type 2024-07-31 16:06:24 -04:00
Eugene Yurtsev
00576dcee3 MANUAL: FIX get_pydantic_field_names to work with pydantic 2 2024-07-31 13:08:10 -04:00
Eugene Yurtsev
6988b63d43 temporary changes to community 2024-07-31 13:07:22 -04:00
Eugene Yurtsev
03aa6c19fa LANGCHAIN PACKAGE CHANGES 2024-07-31 11:09:51 -04:00
Eugene Yurtsev
dcea8b20a5 MANUAL (LANGCHAIN): Update pydantic imports 2024-07-31 09:29:42 -04:00
Eugene Yurtsev
9a9db37e74 MANUAL: Add required fields in schema and fix comparison test for AIMessage in test_runnables.py 2024-07-31 09:21:50 -04:00
Eugene Yurtsev
52960f792e MANUAL: Schema remapping 2024-07-31 09:04:55 -04:00
Eugene Yurtsev
8a507e816e MANUAL: Restore one more missing RootModel instead of __custom_root_model_ check 2024-07-30 16:52:26 -04:00
Eugene Yurtsev
8a11819583 MANUAL: Fix one prompt test 2024-07-30 16:41:35 -04:00
Eugene Yurtsev
5d6266f3ca Remove accidentally commmitted test 2024-07-30 16:01:54 -04:00
Eugene Yurtsev
5810a4b90c Remove accidentally committed test 2024-07-30 16:01:12 -04:00
Eugene Yurtsev
a2e9c45303 MANUAL: Fix input schema into Runnable Lambdas that comes from ast traversal 2024-07-30 16:00:00 -04:00
Eugene Yurtsev
16108c94e2 MANUAL: Replace __custom_root_type with issubclass(obj, RootModel) check 2024-07-30 15:32:22 -04:00
Eugene Yurtsev
0de8e86282 MANUAL: Handle Unserializable attributes on RootModel + schema generator 2024-07-30 15:09:59 -04:00
Eugene Yurtsev
a828f8c2a1 MANUAL: Update create_schea_from_function for tools 2024-07-30 13:47:02 -04:00
Eugene Yurtsev
84db231f1e MANUAL: Exclude verbose from representation? 2024-07-30 12:40:24 -04:00
Eugene Yurtsev
054edda0c5 MANUAL: Test optional subset model re-write add =None to optionals 2024-07-30 12:32:40 -04:00
Eugene Yurtsev
bda6650d7a MANUAL: Fix REPR for ToolMessage 2024-07-30 12:11:27 -04:00
Eugene Yurtsev
fe9d818fe5 MANUAL: Update Serializable to handle default factory for is field useful 2024-07-30 12:06:29 -04:00
Eugene Yurtsev
0482a810bc STASHED CHANGES 2024-07-30 11:52:46 -04:00
Eugene Yurtsev
3a76e0f2ae MANUAL: Fix RunnableAssign parameterization 2024-07-30 10:19:32 -04:00
Eugene Yurtsev
fef70302fa STASHED CHANGE 2024-07-29 17:25:23 -04:00
Eugene Yurtsev
72eab997c0 MANUAL: Add missing AnyID import 2024-07-29 16:49:50 -04:00
Eugene Yurtsev
ffcb025567 MANUAL: Update @pre_init decorator to work with pydantic 2 2024-07-29 15:03:32 -04:00
Eugene Yurtsev
1d9fb23517 MANUAL: Fix REPR for Document in a test 2024-07-29 14:54:45 -04:00
Eugene Yurtsev
45422703fa MANUAL Handle AnyID in unit tests 2024-07-29 14:45:50 -04:00
Eugene Yurtsev
01e5621d28 qxqxqx 2024-07-16 16:12:39 -04:00
Eugene Yurtsev
20e5695ef9 MANUAL: Update create_model to use new RootModel (INCOMPLETE) 2024-07-16 14:54:09 -04:00
Eugene Yurtsev
5c108706dd MANUAL: Force model_rebuild, bring Optional into scope 2024-07-16 11:53:04 -04:00
Eugene Yurtsev
edf848ce9d MANUAL: Update serializable to handle excluded fields correctly 2024-07-16 11:30:29 -04:00
Eugene Yurtsev
209bf78167 MANUAL: Add Optional for model_rebuild for StrOutputParser 2024-07-16 11:21:49 -04:00
Eugene Yurtsev
0ae0175d4e MANUAL: Update create subset model 2024-07-16 11:20:59 -04:00
Eugene Yurtsev
28abed9b0a MANUAL: Replace config in tools. Determine why it inherits from Serializable? 2024-07-16 11:20:18 -04:00
Eugene Yurtsev
ba34dd1017 MANUAL: Update BaseLanaguageModel with model_config = ConfigDict(arbitrary_types_allowed=True) 2024-07-16 11:19:17 -04:00
Eugene Yurtsev
c77f2a4d99 x 2024-07-16 11:17:41 -04:00
Eugene Yurtsev
25ba40fcf2 Use model_rebuild instead of update_forward_references 2024-07-16 11:15:51 -04:00
Eugene Yurtsev
5ba3d79d75 Update to import from pydantic instead of langchain_core.pydantic_v1 2024-07-16 11:14:37 -04:00
Eugene Yurtsev
6a6fb56b43 auto config update 2024-07-16 11:12:27 -04:00
729 changed files with 8493 additions and 6843 deletions

View File

@@ -25,7 +25,7 @@ from langchain_core.messages import (
     SystemMessage,
     ToolMessage,
 )
-from langchain_core.pydantic_v1 import BaseModel
+from pydantic import BaseModel
 from typing_extensions import Literal

View File

@@ -2,8 +2,8 @@ from __future__ import annotations
 from typing import TYPE_CHECKING, List, Literal, Optional
-from langchain_core.pydantic_v1 import root_validator
 from langchain_core.tools import BaseToolkit
+from pydantic import ConfigDict, root_validator
 from langchain_community.tools import BaseTool
 from langchain_community.tools.ainetwork.app import AINAppOps
@@ -53,13 +53,7 @@ class AINetworkToolkit(BaseToolkit):
         values["interface"] = authenticate(network=values.get("network", "testnet"))
         return values
-    class Config:
-        """Pydantic config."""
-        # Allow extra fields. This is needed for the `interface` field.
-        validate_all = True
-        # Allow arbitrary types. This is needed for the `interface` field.
-        arbitrary_types_allowed = True
+    model_config = ConfigDict(validate_default=True, arbitrary_types_allowed=True)
     def get_tools(self) -> List[BaseTool]:
         """Get the tools in the toolkit."""

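This hunk shows the central migration pattern that repeats throughout the diff: pydantic v1's inner `class Config` becomes a `model_config = ConfigDict(...)` class attribute in v2, and the v1 key `validate_all` is renamed `validate_default`. A minimal sketch of the equivalence (the toolkit name and field below are hypothetical stand-ins, and pydantic v2 is assumed to be installed):

```python
from pydantic import BaseModel, ConfigDict

# pydantic v1 spelled model-wide options as an inner class:
#
#     class Config:
#         validate_all = True            # renamed in v2
#         arbitrary_types_allowed = True
#
# pydantic v2 replaces it with a plain class attribute:

class ExampleToolkit(BaseModel):  # hypothetical stand-in for AINetworkToolkit
    model_config = ConfigDict(validate_default=True, arbitrary_types_allowed=True)

    network: str = "testnet"

print(ExampleToolkit().network)  # testnet
```

`model_config` is inherited like any other class attribute, so a shared base class can carry the ConfigDict for a whole family of toolkits.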
View File

@@ -3,8 +3,8 @@ from __future__ import annotations
 from typing import TYPE_CHECKING, List, Optional
 from langchain_core.language_models import BaseLanguageModel
-from langchain_core.pydantic_v1 import Field
 from langchain_core.tools import BaseToolkit
+from pydantic import ConfigDict, Field
 from langchain_community.tools import BaseTool
 from langchain_community.tools.amadeus.closest_airport import AmadeusClosestAirport
@@ -25,12 +25,7 @@ class AmadeusToolkit(BaseToolkit):
     client: Client = Field(default_factory=authenticate)
     llm: Optional[BaseLanguageModel] = Field(default=None)
-    class Config:
-        """Pydantic config."""
-        # Allow extra fields. This is needed for the `client` field.
-        arbitrary_types_allowed = True
+    model_config = ConfigDict(arbitrary_types_allowed=True)
     def get_tools(self) -> List[BaseTool]:
         """Get the tools in the toolkit."""

View File

@@ -2,7 +2,7 @@
 from typing import List
-from langchain_core.pydantic_v1 import Field
+from pydantic import ConfigDict, Field
 from langchain_community.agent_toolkits.base import BaseToolkit
 from langchain_community.tools import BaseTool
@@ -23,12 +23,7 @@ class CassandraDatabaseToolkit(BaseToolkit):
     """
     db: CassandraDatabase = Field(exclude=True)
-    class Config:
-        """Configuration for this pydantic object."""
-        # Allow arbitrary types. This is needed for the `db` field.
-        arbitrary_types_allowed = True
+    model_config = ConfigDict(arbitrary_types_allowed=True)
     def get_tools(self) -> List[BaseTool]:
         """Get the tools in the toolkit."""

View File

@@ -1,7 +1,7 @@
 from typing import List
-from langchain_core.pydantic_v1 import root_validator
 from langchain_core.tools import BaseTool, BaseToolkit
+from pydantic import root_validator
 from langchain_community.tools.connery import ConneryService

View File

@@ -2,8 +2,8 @@ from __future__ import annotations
 from typing import Dict, List, Optional, Type
-from langchain_core.pydantic_v1 import root_validator
 from langchain_core.tools import BaseToolkit
+from pydantic import root_validator
 from langchain_community.tools import BaseTool
 from langchain_community.tools.file_management.copy import CopyFileTool

View File

@@ -2,8 +2,8 @@
 from typing import Dict, List
-from langchain_core.pydantic_v1 import BaseModel, Field
 from langchain_core.tools import BaseToolkit
+from pydantic import BaseModel, Field
 from langchain_community.tools import BaseTool
 from langchain_community.tools.github.prompt import (

View File

@@ -2,8 +2,8 @@ from __future__ import annotations
 from typing import TYPE_CHECKING, List
-from langchain_core.pydantic_v1 import Field
 from langchain_core.tools import BaseToolkit
+from pydantic import ConfigDict, Field
 from langchain_community.tools import BaseTool
 from langchain_community.tools.gmail.create_draft import GmailCreateDraft
@@ -44,11 +44,7 @@ class GmailToolkit(BaseToolkit):
     """
     api_resource: Resource = Field(default_factory=build_resource_service)
-    class Config:
-        """Pydantic config."""
-        arbitrary_types_allowed = True
+    model_config = ConfigDict(arbitrary_types_allowed=True)
     def get_tools(self) -> List[BaseTool]:
         """Get the tools in the toolkit."""

View File

@@ -5,6 +5,7 @@ from __future__ import annotations
 from typing import List
 from langchain_core.tools import BaseToolkit
+from pydantic import ConfigDict
 from langchain_community.tools import BaseTool
 from langchain_community.tools.multion.close_session import MultionCloseSession
@@ -25,10 +26,7 @@ class MultionToolkit(BaseToolkit):
     See https://python.langchain.com/docs/security for more information.
     """
-    class Config:
-        """Pydantic config."""
-        arbitrary_types_allowed = True
+    model_config = ConfigDict(arbitrary_types_allowed=True)
     def get_tools(self) -> List[BaseTool]:
         """Get the tools in the toolkit."""

View File

@@ -3,8 +3,8 @@ from __future__ import annotations
 from typing import Any, List, Optional, Sequence
 from langchain_core.language_models import BaseLanguageModel
-from langchain_core.pydantic_v1 import Field
 from langchain_core.tools import BaseTool, BaseToolkit
+from pydantic import Field
 from langchain_community.agent_toolkits.nla.tool import NLATool
 from langchain_community.tools.openapi.utils.openapi_utils import OpenAPISpec

View File

@@ -2,8 +2,8 @@ from __future__ import annotations
 from typing import TYPE_CHECKING, List
-from langchain_core.pydantic_v1 import Field
 from langchain_core.tools import BaseToolkit
+from pydantic import ConfigDict, Field
 from langchain_community.tools import BaseTool
 from langchain_community.tools.office365.create_draft_message import (
@@ -39,11 +39,7 @@ class O365Toolkit(BaseToolkit):
     """
     account: Account = Field(default_factory=authenticate)
-    class Config:
-        """Pydantic config."""
-        arbitrary_types_allowed = True
+    model_config = ConfigDict(arbitrary_types_allowed=True)
     def get_tools(self) -> List[BaseTool]:
         """Get the tools in the toolkit."""

View File

@@ -9,8 +9,8 @@ import yaml
 from langchain_core.callbacks import BaseCallbackManager
 from langchain_core.language_models import BaseLanguageModel
 from langchain_core.prompts import BasePromptTemplate, PromptTemplate
-from langchain_core.pydantic_v1 import Field
 from langchain_core.tools import BaseTool, Tool
+from pydantic import Field
 from langchain_community.agent_toolkits.openapi.planner_prompt import (
     API_CONTROLLER_PROMPT,
@@ -69,6 +69,7 @@ class RequestsGetToolWithParsing(BaseRequestsTool, BaseTool):
     name: str = "requests_get"
     """Tool name."""
+    # TODO[pydantic]: add type annotation
     description = REQUESTS_GET_TOOL_DESCRIPTION
     """Tool description."""
     response_length: int = MAX_RESPONSE_LENGTH
@@ -103,6 +104,7 @@ class RequestsPostToolWithParsing(BaseRequestsTool, BaseTool):
     name: str = "requests_post"
     """Tool name."""
+    # TODO[pydantic]: add type annotation
     description = REQUESTS_POST_TOOL_DESCRIPTION
     """Tool description."""
     response_length: int = MAX_RESPONSE_LENGTH
@@ -134,6 +136,7 @@ class RequestsPatchToolWithParsing(BaseRequestsTool, BaseTool):
     name: str = "requests_patch"
     """Tool name."""
+    # TODO[pydantic]: add type annotation
     description = REQUESTS_PATCH_TOOL_DESCRIPTION
     """Tool description."""
     response_length: int = MAX_RESPONSE_LENGTH
@@ -167,6 +170,7 @@ class RequestsPutToolWithParsing(BaseRequestsTool, BaseTool):
     name: str = "requests_put"
     """Tool name."""
+    # TODO[pydantic]: add type annotation
     description = REQUESTS_PUT_TOOL_DESCRIPTION
     """Tool description."""
     response_length: int = MAX_RESPONSE_LENGTH
@@ -198,6 +202,7 @@ class RequestsDeleteToolWithParsing(BaseRequestsTool, BaseTool):
     name: str = "requests_delete"
     """The name of the tool."""
+    # TODO[pydantic]: add type annotation
     description = REQUESTS_DELETE_TOOL_DESCRIPTION
     """The description of the tool."""

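The `TODO[pydantic]: add type annotation` markers flag a real v2 behavior change: overriding an inherited field with a bare `description = ...` assignment is rejected, because pydantic v2 requires every non-`ClassVar` model attribute to carry a type annotation. A hedged sketch (class names here are hypothetical stand-ins, not the library's real classes; assumes pydantic v2):

```python
from pydantic import BaseModel

class ToolBase(BaseModel):  # hypothetical stand-in for BaseTool
    description: str = "base description"

# In v2 a subclass must repeat the annotation when overriding the default.
# A bare `description = "..."` (no annotation), as in the diff above, is
# reported as a non-annotated attribute at class-creation time.
class GetTool(ToolBase):
    description: str = "requests_get tool"  # annotated override: valid in v2

print(GetTool().description)  # requests_get tool
```

This is why the TODOs sit on the `description = ...` lines: each one still needs the `description: str = ...` form to be a field in v2.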
View File

@@ -4,8 +4,8 @@ from __future__ import annotations
 from typing import TYPE_CHECKING, List, Optional, Type, cast
-from langchain_core.pydantic_v1 import Extra, root_validator
 from langchain_core.tools import BaseTool, BaseToolkit
+from pydantic import ConfigDict, root_validator
 from langchain_community.tools.playwright.base import (
     BaseBrowserTool,
@@ -67,12 +67,7 @@ class PlayWrightBrowserToolkit(BaseToolkit):
     sync_browser: Optional["SyncBrowser"] = None
     async_browser: Optional["AsyncBrowser"] = None
-    class Config:
-        """Configuration for this pydantic object."""
-        extra = Extra.forbid
-        arbitrary_types_allowed = True
+    model_config = ConfigDict(extra="forbid", arbitrary_types_allowed=True)
     @root_validator(pre=True)
     def validate_imports_and_browser_provided(cls, values: dict) -> dict:

View File

@@ -13,8 +13,8 @@ from langchain_core.prompts.chat import (
     HumanMessagePromptTemplate,
     SystemMessagePromptTemplate,
 )
-from langchain_core.pydantic_v1 import Field
 from langchain_core.tools import BaseToolkit
+from pydantic import ConfigDict, Field
 from langchain_community.tools import BaseTool
 from langchain_community.tools.powerbi.prompt import (
@@ -62,11 +62,7 @@ class PowerBIToolkit(BaseToolkit):
     callback_manager: Optional[BaseCallbackManager] = None
     output_token_limit: Optional[int] = None
     tiktoken_model_name: Optional[str] = None
-    class Config:
-        """Configuration for this pydantic object."""
-        arbitrary_types_allowed = True
+    model_config = ConfigDict(arbitrary_types_allowed=True)
     def get_tools(self) -> List[BaseTool]:
         """Get the tools in the toolkit."""

View File

@@ -2,8 +2,8 @@ from __future__ import annotations
 from typing import TYPE_CHECKING, List
-from langchain_core.pydantic_v1 import Field
 from langchain_core.tools import BaseToolkit
+from pydantic import ConfigDict, Field
 from langchain_community.tools import BaseTool
 from langchain_community.tools.slack.get_channel import SlackGetChannel
@@ -24,11 +24,7 @@ class SlackToolkit(BaseToolkit):
     """
     client: WebClient = Field(default_factory=login)
-    class Config:
-        """Pydantic config."""
-        arbitrary_types_allowed = True
+    model_config = ConfigDict(arbitrary_types_allowed=True)
     def get_tools(self) -> List[BaseTool]:
         """Get the tools in the toolkit."""

View File

@@ -3,8 +3,8 @@
 from typing import List
 from langchain_core.language_models import BaseLanguageModel
-from langchain_core.pydantic_v1 import Field
 from langchain_core.tools import BaseToolkit
+from pydantic import ConfigDict, Field
 from langchain_community.tools import BaseTool
 from langchain_community.tools.spark_sql.tool import (
@@ -26,11 +26,7 @@ class SparkSQLToolkit(BaseToolkit):
     db: SparkSQL = Field(exclude=True)
     llm: BaseLanguageModel = Field(exclude=True)
-    class Config:
-        """Configuration for this pydantic object."""
-        arbitrary_types_allowed = True
+    model_config = ConfigDict(arbitrary_types_allowed=True)
     def get_tools(self) -> List[BaseTool]:
         """Get the tools in the toolkit."""

View File

@@ -26,11 +26,6 @@ from langchain_community.agent_toolkits.sql.prompt import (
     SQL_FUNCTIONS_SUFFIX,
     SQL_PREFIX,
 )
-from langchain_community.agent_toolkits.sql.toolkit import SQLDatabaseToolkit
-from langchain_community.tools.sql_database.tool import (
-    InfoSQLDatabaseTool,
-    ListSQLDatabaseTool,
-)
 if TYPE_CHECKING:
     from langchain.agents.agent import AgentExecutor
@@ -39,6 +34,7 @@ if TYPE_CHECKING:
     from langchain_core.language_models import BaseLanguageModel
     from langchain_core.tools import BaseTool
+    from langchain_community.agent_toolkits.sql.toolkit import SQLDatabaseToolkit
     from langchain_community.utilities.sql_database import SQLDatabase
@@ -115,6 +111,7 @@ def create_sql_agent(
         agent_executor = create_sql_agent(llm, db=db, agent_type="tool-calling", verbose=True)
     """ # noqa: E501
     from langchain.agents import (
         create_openai_functions_agent,
         create_openai_tools_agent,
@@ -128,6 +125,11 @@ def create_sql_agent(
     )
     from langchain.agents.agent_types import AgentType
+    from langchain_community.tools.sql_database.tool import (
+        InfoSQLDatabaseTool,
+        ListSQLDatabaseTool,
+    )
     if toolkit is None and db is None:
         raise ValueError(
             "Must provide exactly one of 'toolkit' or 'db'. Received neither."

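The hunks above also rearrange imports rather than migrating pydantic: the `SQLDatabaseToolkit` import moves behind `TYPE_CHECKING`, and the sql_database tool imports move into the body of `create_sql_agent`. Deferring imports this way is a standard fix for circular imports between packages. A generic sketch of the pattern (the stdlib `json` module is just a stand-in for the deferred dependency):

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # only evaluated by type checkers, so it can never create an import
    # cycle at runtime; useful for annotation-only dependencies
    import json

def create_agent_sketch(payload: dict) -> str:
    # deferred: resolved when the function is *called*, not when this
    # module is imported, which keeps the module out of import-time cycles
    import json
    return json.dumps(payload)

print(create_agent_sketch({"dialect": "sqlite"}))  # {"dialect": "sqlite"}
```

The trade-off is a tiny per-call lookup in `sys.modules` after the first import, which is usually negligible next to what functions like this actually do.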
View File

@@ -3,8 +3,8 @@
 from typing import List
 from langchain_core.language_models import BaseLanguageModel
-from langchain_core.pydantic_v1 import Field
 from langchain_core.tools import BaseToolkit
+from pydantic import ConfigDict, Field
 from langchain_community.tools import BaseTool
 from langchain_community.tools.sql_database.tool import (
@@ -32,10 +32,7 @@ class SQLDatabaseToolkit(BaseToolkit):
         """Return string representation of SQL dialect to use."""
         return self.db.dialect
-    class Config:
-        """Configuration for this pydantic object."""
-        arbitrary_types_allowed = True
+    model_config = ConfigDict(arbitrary_types_allowed=True)
     def get_tools(self) -> List[BaseTool]:
         """Get the tools in the toolkit."""

View File

@@ -15,10 +15,10 @@ from langchain.agents.openai_assistant.base import OpenAIAssistantRunnable, Outp
 from langchain_core._api import beta
 from langchain_core.callbacks import CallbackManager
 from langchain_core.load import dumpd
-from langchain_core.pydantic_v1 import BaseModel, Field, root_validator
 from langchain_core.runnables import RunnableConfig, ensure_config
 from langchain_core.tools import BaseTool
 from langchain_core.utils.function_calling import convert_to_openai_tool
+from pydantic import BaseModel, Field, root_validator
 if TYPE_CHECKING:
     import openai

View File

@@ -22,8 +22,8 @@ from langchain_core.output_parsers import (
     BaseOutputParser,
 )
 from langchain_core.prompts import BasePromptTemplate
-from langchain_core.pydantic_v1 import BaseModel
 from langchain_core.runnables import Runnable
+from pydantic import BaseModel
 from langchain_community.output_parsers.ernie_functions import (
     JsonOutputFunctionsParser,
View File

@@ -10,7 +10,7 @@ from langchain.chains.llm import LLMChain
 from langchain_core.callbacks import CallbackManagerForChainRun
 from langchain_core.language_models import BaseLanguageModel
 from langchain_core.prompts import BasePromptTemplate
-from langchain_core.pydantic_v1 import Field
+from pydantic import Field
 from langchain_community.chains.graph_qa.prompts import (
     AQL_FIX_PROMPT,

View File

@@ -9,7 +9,7 @@ from langchain.chains.llm import LLMChain
 from langchain_core.callbacks.manager import CallbackManagerForChainRun
 from langchain_core.language_models import BaseLanguageModel
 from langchain_core.prompts import BasePromptTemplate
-from langchain_core.pydantic_v1 import Field
+from pydantic import Field
 from langchain_community.chains.graph_qa.prompts import (
     ENTITY_EXTRACTION_PROMPT,

View File

@@ -22,8 +22,8 @@ from langchain_core.prompts import (
     HumanMessagePromptTemplate,
     MessagesPlaceholder,
 )
-from langchain_core.pydantic_v1 import Field
 from langchain_core.runnables import Runnable
+from pydantic import Field
 from langchain_community.chains.graph_qa.cypher_utils import (
     CypherQueryCorrector,

View File

@@ -10,7 +10,7 @@ from langchain.chains.llm import LLMChain
 from langchain_core.callbacks import CallbackManagerForChainRun
 from langchain_core.language_models import BaseLanguageModel
 from langchain_core.prompts import BasePromptTemplate
-from langchain_core.pydantic_v1 import Field
+from pydantic import Field
 from langchain_community.chains.graph_qa.prompts import (
     CYPHER_GENERATION_PROMPT,

View File

@@ -10,7 +10,7 @@ from langchain_core.callbacks.manager import CallbackManager, CallbackManagerFor
 from langchain_core.language_models import BaseLanguageModel
 from langchain_core.prompts import BasePromptTemplate
 from langchain_core.prompts.prompt import PromptTemplate
-from langchain_core.pydantic_v1 import Field
+from pydantic import Field
 from langchain_community.chains.graph_qa.prompts import (
     CYPHER_QA_PROMPT,

View File

@@ -9,7 +9,7 @@ from langchain.chains.llm import LLMChain
 from langchain_core.callbacks import CallbackManagerForChainRun
 from langchain_core.language_models import BaseLanguageModel
 from langchain_core.prompts import BasePromptTemplate
-from langchain_core.pydantic_v1 import Field
+from pydantic import Field
 from langchain_community.chains.graph_qa.prompts import (
     CYPHER_QA_PROMPT,

View File

@@ -10,7 +10,7 @@ from langchain.chains.llm import LLMChain
 from langchain_core.callbacks import CallbackManagerForChainRun
 from langchain_core.language_models import BaseLanguageModel
 from langchain_core.prompts import BasePromptTemplate
-from langchain_core.pydantic_v1 import Field
+from pydantic import Field
 from langchain_community.chains.graph_qa.prompts import (
     CYPHER_QA_PROMPT,

View File

@@ -9,7 +9,7 @@ from langchain.chains.llm import LLMChain
 from langchain_core.callbacks import CallbackManagerForChainRun
 from langchain_core.language_models import BaseLanguageModel
 from langchain_core.prompts import BasePromptTemplate
-from langchain_core.pydantic_v1 import Field
+from pydantic import Field
 from langchain_community.chains.graph_qa.prompts import (
     CYPHER_QA_PROMPT,

View File

@@ -9,7 +9,7 @@ from langchain.chains.prompt_selector import ConditionalPromptSelector
 from langchain_core.callbacks import CallbackManagerForChainRun
 from langchain_core.language_models import BaseLanguageModel
 from langchain_core.prompts.base import BasePromptTemplate
-from langchain_core.pydantic_v1 import Field
+from pydantic import Field
 from langchain_community.chains.graph_qa.prompts import (
     CYPHER_QA_PROMPT,

View File

@@ -12,7 +12,7 @@ from langchain_core.callbacks.manager import CallbackManagerForChainRun
 from langchain_core.language_models import BaseLanguageModel
 from langchain_core.prompts.base import BasePromptTemplate
 from langchain_core.prompts.prompt import PromptTemplate
-from langchain_core.pydantic_v1 import Field
+from pydantic import Field
 from langchain_community.chains.graph_qa.prompts import SPARQL_QA_PROMPT
 from langchain_community.graphs import NeptuneRdfGraph

View File

@@ -12,7 +12,7 @@ from langchain.chains.llm import LLMChain
 from langchain_core.callbacks.manager import CallbackManager, CallbackManagerForChainRun
 from langchain_core.language_models import BaseLanguageModel
 from langchain_core.prompts.base import BasePromptTemplate
-from langchain_core.pydantic_v1 import Field
+from pydantic import Field
 from langchain_community.chains.graph_qa.prompts import (
     GRAPHDB_QA_PROMPT,

View File

@@ -11,7 +11,7 @@ from langchain.chains.llm import LLMChain
 from langchain_core.callbacks import CallbackManagerForChainRun
 from langchain_core.language_models import BaseLanguageModel
 from langchain_core.prompts.base import BasePromptTemplate
-from langchain_core.pydantic_v1 import Field
+from pydantic import Field
 from langchain_community.chains.graph_qa.prompts import (
     SPARQL_GENERATION_SELECT_PROMPT,

View File

@@ -7,7 +7,7 @@ from typing import Any, Dict, List, Optional
 from langchain.chains import LLMChain
 from langchain.chains.base import Chain
 from langchain_core.callbacks import CallbackManagerForChainRun
-from langchain_core.pydantic_v1 import Extra, Field, root_validator
+from pydantic import ConfigDict, Field, root_validator
 from langchain_community.utilities.requests import TextRequestsWrapper
@@ -37,12 +37,7 @@ class LLMRequestsChain(Chain):
     requests_key: str = "requests_result" #: :meta private:
     input_key: str = "url" #: :meta private:
     output_key: str = "output" #: :meta private:
-    class Config:
-        """Configuration for this pydantic object."""
-        extra = Extra.forbid
-        arbitrary_types_allowed = True
+    model_config = ConfigDict(extra="forbid", arbitrary_types_allowed=True)
     @property
     def input_keys(self) -> List[str]:

View File

@@ -11,7 +11,7 @@ from langchain.chains.base import Chain
 from langchain.chains.llm import LLMChain
 from langchain_core.callbacks import CallbackManagerForChainRun, Callbacks
 from langchain_core.language_models import BaseLanguageModel
-from langchain_core.pydantic_v1 import BaseModel, Field
+from pydantic import BaseModel, Field
 from requests import Response
 from langchain_community.tools.openapi.utils.api_models import APIOperation
@@ -30,14 +30,14 @@ class OpenAPIEndpointChain(Chain, BaseModel):
     """Chain interacts with an OpenAPI endpoint using natural language."""
     api_request_chain: LLMChain
-    api_response_chain: Optional[LLMChain]
+    api_response_chain: Optional[LLMChain] = None
     api_operation: APIOperation
     requests: Requests = Field(exclude=True, default_factory=Requests)
     param_mapping: _ParamMapping = Field(alias="param_mapping")
     return_intermediate_steps: bool = False
     instructions_key: str = "instructions" #: :meta private:
     output_key: str = "output" #: :meta private:
-    max_text_length: Optional[int] = Field(ge=0) #: :meta private:
+    max_text_length: Optional[int] = Field(None, ge=0) #: :meta private:
     @property
     def input_keys(self) -> List[str]:

View File

@@ -19,8 +19,8 @@ from langchain_core.callbacks import (
 )
 from langchain_core.documents import Document
 from langchain_core.language_models import BaseLanguageModel
-from langchain_core.pydantic_v1 import Extra, Field, validator
 from langchain_core.vectorstores import VectorStoreRetriever
+from pydantic import ConfigDict, Field, validator
 from langchain_community.chains.pebblo_retrieval.enforcement_filters import (
     SUPPORTED_VECTORSTORES,
@@ -185,12 +185,9 @@ class PebbloRetrievalQA(Chain):
         else:
             return {self.output_key: answer}
-    class Config:
-        """Configuration for this pydantic object."""
-        extra = Extra.forbid
-        arbitrary_types_allowed = True
-        allow_population_by_field_name = True
+    model_config = ConfigDict(
+        extra="forbid", arbitrary_types_allowed=True, populate_by_name=True
+    )
     @property
     def input_keys(self) -> List[str]:

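This hunk shows two more Config keys being renamed: `allow_population_by_field_name` becomes `populate_by_name`, and `extra = Extra.forbid` becomes the plain string `extra="forbid"` (v2 drops the `Extra` enum in favor of string literals). A small sketch of both options together (model and field names below are hypothetical; assumes pydantic v2):

```python
from pydantic import BaseModel, ConfigDict, Field, ValidationError

class ChainSketch(BaseModel):  # hypothetical stand-in for PebbloRetrievalQA
    model_config = ConfigDict(extra="forbid", populate_by_name=True)

    combine_key: str = Field(default="docs", alias="combine_documents_key")

# populate_by_name=True accepts the field name in addition to its alias
print(ChainSketch(combine_key="x").combine_key)
print(ChainSketch(combine_documents_key="y").combine_key)

# extra="forbid" rejects unknown keys at construction time
try:
    ChainSketch(unknown_key=1)
except ValidationError:
    print("rejected extra field")
```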
View File

@@ -2,7 +2,7 @@
 from typing import Any, List, Optional, Union
-from langchain_core.pydantic_v1 import BaseModel
+from pydantic import BaseModel
 class AuthContext(BaseModel):
@@ -89,17 +89,17 @@ class Framework(BaseModel):
 class Model(BaseModel):
-    vendor: Optional[str]
-    name: Optional[str]
+    vendor: Optional[str] = None
+    name: Optional[str] = None
 class PkgInfo(BaseModel):
-    project_home_page: Optional[str]
-    documentation_url: Optional[str]
-    pypi_url: Optional[str]
-    liscence_type: Optional[str]
-    installed_via: Optional[str]
-    location: Optional[str]
+    project_home_page: Optional[str] = None
+    documentation_url: Optional[str] = None
+    pypi_url: Optional[str] = None
+    liscence_type: Optional[str] = None
+    installed_via: Optional[str] = None
+    location: Optional[str] = None
 class VectorDB(BaseModel):
@@ -111,14 +111,14 @@ class VectorDB(BaseModel):
 class Chains(BaseModel):
     name: str
-    model: Optional[Model]
-    vector_dbs: Optional[List[VectorDB]]
+    model: Optional[Model] = None
+    vector_dbs: Optional[List[VectorDB]] = None
 class App(BaseModel):
     name: str
     owner: str
-    description: Optional[str]
+    description: Optional[str] = None
     runtime: Runtime
     framework: Framework
     chains: List[Chains]
@@ -126,8 +126,8 @@ class App(BaseModel):
 class Context(BaseModel):
-    retrieved_from: Optional[str]
-    doc: Optional[str]
+    retrieved_from: Optional[str] = None
+    doc: Optional[str] = None
     vector_db: str
@@ -138,9 +138,9 @@ class Prompt(BaseModel):
 class Qa(BaseModel):
     name: str
     context: Union[List[Optional[Context]], Optional[Context]]
-    prompt: Optional[Prompt]
-    response: Optional[Prompt]
+    prompt: Optional[Prompt] = None
+    response: Optional[Prompt] = None
     prompt_time: str
     user: str
-    user_identities: Optional[List[str]]
+    user_identities: Optional[List[str]] = None
     classifier_location: str

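The long run of `= None` additions above reflects another v1→v2 change: in pydantic v1 a bare `Optional[str]` field was implicitly optional with a default of `None`, while v2 treats it as *required* (it merely accepts `None` as a value). A minimal sketch (the class below is a hypothetical stand-in for the pebblo `Model` class; assumes pydantic v2):

```python
from typing import Optional
from pydantic import BaseModel, ValidationError

class ModelInfo(BaseModel):
    vendor: Optional[str] = None  # v2: default must now be explicit
    name: Optional[str]           # v2: required; may be None, but must be passed

# vendor can be omitted, name cannot:
print(ModelInfo(name=None).vendor)  # None

try:
    ModelInfo()  # `name` missing
except ValidationError as exc:
    print("missing:", exc.errors()[0]["loc"])
```

This is why the diff touches every bare `Optional[...]` attribute rather than only the Config blocks.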
View File

@@ -20,6 +20,7 @@ from langchain_core.messages import (
 )
 from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult
 from langchain_core.prompt_values import PromptValue
+from pydantic import ConfigDict
 from langchain_community.llms.anthropic import _AnthropicCommon
@@ -91,11 +92,7 @@ class ChatAnthropic(BaseChatModel, _AnthropicCommon):
             model = ChatAnthropic(model="<model_name>", anthropic_api_key="my-api-key")
     """
-    class Config:
-        """Configuration for this pydantic object."""
-        allow_population_by_field_name = True
-        arbitrary_types_allowed = True
+    model_config = ConfigDict(populate_by_name=True, arbitrary_types_allowed=True)
     @property
     def lc_secrets(self) -> Dict[str, str]:

View File

@@ -9,8 +9,8 @@ from typing import TYPE_CHECKING, Dict, Optional, Set
 import requests
 from langchain_core.messages import BaseMessage
-from langchain_core.pydantic_v1 import Field, SecretStr, root_validator
 from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env
+from pydantic import Field, SecretStr, root_validator
 from langchain_community.adapters.openai import convert_message_to_dict
 from langchain_community.chat_models.openai import (

View File

@@ -9,8 +9,8 @@ from typing import Any, Callable, Dict, List, Union
 from langchain_core._api.deprecation import deprecated
 from langchain_core.outputs import ChatResult
-from langchain_core.pydantic_v1 import BaseModel, Field
 from langchain_core.utils import get_from_dict_or_env, pre_init
+from pydantic import BaseModel, Field
 from langchain_community.chat_models.openai import ChatOpenAI
 from langchain_community.utils.openai import is_openai_v1


@@ -26,12 +26,12 @@ from langchain_core.messages import (
SystemMessageChunk,
)
from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult
from langchain_core.pydantic_v1 import Field, SecretStr, root_validator
from langchain_core.utils import (
convert_to_secret_str,
get_from_dict_or_env,
get_pydantic_field_names,
)
from pydantic import ConfigDict, Field, SecretStr, root_validator
logger = logging.getLogger(__name__)
@@ -147,11 +147,7 @@ class ChatBaichuan(BaseChatModel):
Whether to use search enhance, default is False."""
model_kwargs: Dict[str, Any] = Field(default_factory=dict)
"""Holds any model parameters valid for API call not explicitly specified."""
class Config:
"""Configuration for this pydantic object."""
allow_population_by_field_name = True
model_config = ConfigDict(populate_by_name=True)
@root_validator(pre=True)
def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:


@@ -39,11 +39,11 @@ from langchain_core.output_parsers.openai_tools import (
PydanticToolsParser,
)
from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult
from langchain_core.pydantic_v1 import BaseModel, Field, SecretStr, root_validator
from langchain_core.runnables import Runnable, RunnableMap, RunnablePassthrough
from langchain_core.tools import BaseTool
from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env
from langchain_core.utils.function_calling import convert_to_openai_tool
from pydantic import BaseModel, ConfigDict, Field, SecretStr, root_validator
logger = logging.getLogger(__name__)
@@ -351,11 +351,7 @@ class QianfanChatEndpoint(BaseChatModel):
endpoint: Optional[str] = None
"""Endpoint of the Qianfan LLM, required if custom model used."""
class Config:
"""Configuration for this pydantic object."""
allow_population_by_field_name = True
model_config = ConfigDict(populate_by_name=True)
@root_validator(pre=True)
def validate_environment(cls, values: Dict) -> Dict:


@@ -16,7 +16,7 @@ from langchain_core.messages import (
SystemMessage,
)
from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult
from langchain_core.pydantic_v1 import Extra
from pydantic import ConfigDict
from langchain_community.chat_models.anthropic import (
convert_messages_to_prompt_anthropic,
@@ -232,10 +232,7 @@ class BedrockChat(BaseChatModel, BedrockBase):
return attributes
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
model_config = ConfigDict(extra="forbid")
def _stream(
self,

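In the BedrockChat hunk above, v1's `extra = Extra.forbid` becomes the string literal `"forbid"` inside `ConfigDict`. A small sketch of the behavior (illustrative model, assuming pydantic v2):

```python
from pydantic import BaseModel, ConfigDict, ValidationError


class StrictModel(BaseModel):  # illustrative only
    model_config = ConfigDict(extra="forbid")

    region: str = "us-east-1"

try:
    StrictModel(region="us-west-2", regin="typo")  # unknown keyword is rejected
    rejected = False
except ValidationError:
    rejected = True
```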

@@ -19,6 +19,7 @@ from langchain_core.messages import (
SystemMessage,
)
from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult
from pydantic import ConfigDict
from langchain_community.llms.cohere import BaseCohere
@@ -117,11 +118,7 @@ class ChatCohere(BaseChatModel, BaseCohere):
chat.invoke(messages)
"""
class Config:
"""Configuration for this pydantic object."""
allow_population_by_field_name = True
arbitrary_types_allowed = True
model_config = ConfigDict(populate_by_name=True, arbitrary_types_allowed=True)
@property
def _llm_type(self) -> str:


@@ -19,11 +19,11 @@ from langchain_core.messages import (
HumanMessageChunk,
)
from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult
from langchain_core.pydantic_v1 import Field, SecretStr, root_validator
from langchain_core.utils import (
convert_to_secret_str,
get_from_dict_or_env,
)
from pydantic import ConfigDict, Field, SecretStr, root_validator
logger = logging.getLogger(__name__)
@@ -110,11 +110,7 @@ class ChatCoze(BaseChatModel):
true: set to true, partial message deltas will be sent .
"Streaming response" will provide real-time response of the model to the client, and
the client needs to assemble the final reply based on the type of message. """
class Config:
"""Configuration for this pydantic object."""
allow_population_by_field_name = True
model_config = ConfigDict(populate_by_name=True)
@root_validator(pre=True)
def validate_environment(cls, values: Dict) -> Dict:

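Several hunks above import `root_validator` directly from `pydantic`. Pydantic v2 still re-exports it for backward compatibility, but it is deprecated (and `pre=False` validators must pass `skip_on_failure=True`); the v2-native replacement is `model_validator`. A sketch of the equivalent pattern (hypothetical `Settings` model, assuming pydantic v2):

```python
from pydantic import BaseModel, model_validator


class Settings(BaseModel):  # hypothetical, mirrors the validate_environment pattern
    host: str = "localhost"
    url: str = ""

    @model_validator(mode="before")
    @classmethod
    def build_url(cls, values: dict) -> dict:
        # Derive `url` from `host` before field validation, the same shape as the
        # build_extra / validate_environment root validators in the hunks above.
        if isinstance(values, dict):
            values.setdefault("url", f"http://{values.get('host', 'localhost')}")
        return values

settings = Settings(host="example.com")
```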

@@ -13,8 +13,8 @@ from langchain_core.messages import (
BaseMessage,
)
from langchain_core.outputs import ChatGeneration, ChatResult
from langchain_core.pydantic_v1 import Extra, Field, SecretStr, root_validator
from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env
from pydantic import ConfigDict, Field, SecretStr, root_validator
from langchain_community.utilities.requests import Requests
@@ -69,11 +69,7 @@ class ChatDappierAI(BaseChatModel):
dappier_model: str = "dm_01hpsxyfm2fwdt2zet9cg6fdxt"
dappier_api_key: Optional[SecretStr] = Field(None, description="Dappier API Token")
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
model_config = ConfigDict(extra="forbid")
@root_validator(pre=True)
def validate_environment(cls, values: Dict) -> Dict:


@@ -54,11 +54,11 @@ from langchain_core.outputs import (
ChatGenerationChunk,
ChatResult,
)
from langchain_core.pydantic_v1 import BaseModel, Field, root_validator
from langchain_core.runnables import Runnable
from langchain_core.tools import BaseTool
from langchain_core.utils import get_from_dict_or_env
from langchain_core.utils.function_calling import convert_to_openai_tool
from pydantic import BaseModel, Field, root_validator
from langchain_community.utilities.requests import Requests


@@ -47,16 +47,16 @@ from langchain_core.output_parsers.openai_tools import (
PydanticToolsParser,
)
from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult
from langchain_core.pydantic_v1 import (
BaseModel,
Extra,
Field,
SecretStr,
)
from langchain_core.runnables import Runnable, RunnableMap, RunnablePassthrough
from langchain_core.tools import BaseTool
from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env, pre_init
from langchain_core.utils.function_calling import convert_to_openai_tool
from pydantic import (
BaseModel,
ConfigDict,
Field,
SecretStr,
)
from langchain_community.utilities.requests import Requests
@@ -295,11 +295,7 @@ class ChatEdenAI(BaseChatModel):
edenai_api_url: str = "https://api.edenai.run/v2"
edenai_api_key: Optional[SecretStr] = Field(None, description="EdenAI API Token")
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
model_config = ConfigDict(extra="forbid")
@pre_init
def validate_environment(cls, values: Dict) -> Dict:


@@ -13,8 +13,8 @@ from langchain_core.messages import (
HumanMessage,
)
from langchain_core.outputs import ChatGeneration, ChatResult
from langchain_core.pydantic_v1 import root_validator
from langchain_core.utils import get_from_dict_or_env
from pydantic import root_validator
logger = logging.getLogger(__name__)


@@ -7,8 +7,8 @@ import sys
from typing import TYPE_CHECKING, Dict, Optional, Set
from langchain_core.messages import BaseMessage
from langchain_core.pydantic_v1 import Field, root_validator
from langchain_core.utils import get_from_dict_or_env
from pydantic import Field, root_validator
from langchain_community.adapters.openai import convert_message_to_dict
from langchain_community.chat_models.openai import (


@@ -32,9 +32,9 @@ from langchain_core.messages import (
SystemMessageChunk,
)
from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult
from langchain_core.pydantic_v1 import Field, SecretStr, root_validator
from langchain_core.utils import convert_to_secret_str
from langchain_core.utils.env import get_from_dict_or_env
from pydantic import Field, SecretStr, root_validator
from langchain_community.adapters.openai import convert_message_to_dict


@@ -21,8 +21,8 @@ from langchain_core.outputs import (
ChatGeneration,
ChatResult,
)
from langchain_core.pydantic_v1 import BaseModel, SecretStr
from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env, pre_init
from pydantic import BaseModel, SecretStr
from tenacity import (
before_sleep_log,
retry,
@@ -231,7 +231,7 @@ class ChatGooglePalm(BaseChatModel, BaseModel):
"""
client: Any #: :meta private:
client: Any = None #: :meta private:
model_name: str = "models/chat-bison-001"
"""Model name to use."""
google_api_key: Optional[SecretStr] = None


@@ -30,8 +30,8 @@ from langchain_core.language_models.chat_models import (
from langchain_core.language_models.llms import create_base_retry_decorator
from langchain_core.messages import AIMessageChunk, BaseMessage, BaseMessageChunk
from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult
from langchain_core.pydantic_v1 import BaseModel, Field, SecretStr, root_validator
from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env
from pydantic import BaseModel, Field, SecretStr, root_validator
from langchain_community.adapters.openai import (
convert_dict_to_message,
@@ -150,7 +150,7 @@ class GPTRouter(BaseChatModel):
"""
client: Any = Field(default=None, exclude=True) #: :meta private:
models_priority_list: List[GPTRouterModel] = Field(min_items=1)
models_priority_list: List[GPTRouterModel] = Field(min_length=1)
gpt_router_api_base: str = Field(default=None)
"""WriteSonic GPTRouter custom endpoint"""
gpt_router_api_key: Optional[SecretStr] = None
@@ -167,7 +167,6 @@ class GPTRouter(BaseChatModel):
"""Number of chat completions to generate for each prompt."""
max_tokens: int = 256
@root_validator(allow_reuse=True)
def validate_environment(cls, values: Dict) -> Dict:
values["gpt_router_api_base"] = get_from_dict_or_env(
values,

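The `Field(min_items=1)` to `Field(min_length=1)` change in the GPTRouter hunk above reflects pydantic v2 folding the v1 sequence constraints (`min_items`/`max_items`) into `min_length`/`max_length`. A sketch (hypothetical stand-in model, assuming pydantic v2):

```python
from typing import List

from pydantic import BaseModel, Field, ValidationError


class Router(BaseModel):  # hypothetical stand-in for GPTRouter
    models_priority_list: List[str] = Field(min_length=1)  # v1 spelling: min_items=1

try:
    Router(models_priority_list=[])  # violates the minimum-length constraint
    empty_allowed = True
except ValidationError:
    empty_allowed = False

ok = Router(models_priority_list=["gpt-4"])
```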

@@ -25,7 +25,7 @@ from langchain_core.outputs import (
ChatResult,
LLMResult,
)
from langchain_core.pydantic_v1 import root_validator
from pydantic import root_validator
from langchain_community.llms.huggingface_endpoint import HuggingFaceEndpoint
from langchain_community.llms.huggingface_hub import HuggingFaceHub


@@ -15,7 +15,7 @@ from langchain_core.messages import (
messages_to_dict,
)
from langchain_core.outputs import ChatGeneration, ChatResult
from langchain_core.pydantic_v1 import Field
from pydantic import Field
from langchain_community.llms.utils import enforce_stop_tokens


@@ -24,13 +24,13 @@ from langchain_core.messages import (
HumanMessageChunk,
)
from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult
from langchain_core.pydantic_v1 import Field, SecretStr, root_validator
from langchain_core.utils import (
convert_to_secret_str,
get_from_dict_or_env,
get_pydantic_field_names,
pre_init,
)
from pydantic import ConfigDict, Field, SecretStr, root_validator
logger = logging.getLogger(__name__)
@@ -159,11 +159,7 @@ class ChatHunyuan(BaseChatModel):
model_kwargs: Dict[str, Any] = Field(default_factory=dict)
"""Holds any model parameters valid for API call not explicitly specified."""
class Config:
"""Configuration for this pydantic object."""
allow_population_by_field_name = True
model_config = ConfigDict(populate_by_name=True)
@root_validator(pre=True)
def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:


@@ -18,7 +18,7 @@ from langchain_core.outputs import (
ChatGeneration,
ChatResult,
)
from langchain_core.pydantic_v1 import BaseModel, Extra, Field, SecretStr
from pydantic import BaseModel, ConfigDict, Extra, Field, SecretStr
logger = logging.getLogger(__name__)
@@ -67,11 +67,7 @@ class ChatJavelinAIGateway(BaseChatModel):
javelin_api_key: Optional[SecretStr] = Field(None, alias="api_key")
"""The API key for the Javelin AI Gateway."""
class Config:
"""Configuration for this pydantic object."""
allow_population_by_field_name = True
model_config = ConfigDict(populate_by_name=True)
def __init__(self, **kwargs: Any):
try:


@@ -40,13 +40,13 @@ from langchain_core.messages import (
SystemMessageChunk,
)
from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult
from langchain_core.pydantic_v1 import Field, SecretStr, root_validator
from langchain_core.utils import (
convert_to_secret_str,
get_from_dict_or_env,
get_pydantic_field_names,
pre_init,
)
from pydantic import ConfigDict, Field, SecretStr, root_validator
from tenacity import (
before_sleep_log,
retry,
@@ -187,11 +187,7 @@ class JinaChat(BaseChatModel):
"""Whether to stream the results or not."""
max_tokens: Optional[int] = None
"""Maximum number of tokens to generate."""
class Config:
"""Configuration for this pydantic object."""
allow_population_by_field_name = True
model_config = ConfigDict(populate_by_name=True)
@root_validator(pre=True)
def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:


@@ -12,6 +12,7 @@ from pathlib import Path
from typing import TYPE_CHECKING, Any, Dict, List, Optional, cast
from langchain_core.utils import pre_init
from pydantic import ConfigDict
if TYPE_CHECKING:
import gpudb
@@ -26,7 +27,7 @@ from langchain_core.messages import (
)
from langchain_core.output_parsers.transform import BaseOutputParser
from langchain_core.outputs import ChatGeneration, ChatResult, Generation
from langchain_core.pydantic_v1 import BaseModel, Field
from pydantic import BaseModel, Field
LOG = logging.getLogger(__name__)
@@ -78,7 +79,7 @@ class _KdtSuggestContext(BaseModel):
class _KdtSuggestPayload(BaseModel):
"""pydantic API request type"""
question: Optional[str]
question: Optional[str] = None
context: List[_KdtSuggestContext]
def get_system_str(self) -> str:
@@ -542,11 +543,7 @@ class KineticaSqlResponse(BaseModel):
# dataframe: "pd.DataFrame" = Field(default=None)
dataframe: Any = Field(default=None)
"""The Pandas dataframe containing the fetched data."""
class Config:
"""Configuration for this pydantic object."""
arbitrary_types_allowed = True
model_config = ConfigDict(arbitrary_types_allowed=True)
class KineticaSqlOutputParser(BaseOutputParser[KineticaSqlResponse]):
@@ -583,11 +580,7 @@ class KineticaSqlOutputParser(BaseOutputParser[KineticaSqlResponse]):
kdbc: Any = Field(exclude=True)
""" Kinetica DB connection. """
class Config:
"""Configuration for this pydantic object."""
arbitrary_types_allowed = True
model_config = ConfigDict(arbitrary_types_allowed=True)
def parse(self, text: str) -> KineticaSqlResponse:
df = self.kdbc.to_df(text)


@@ -23,8 +23,8 @@ from langchain_core.callbacks import (
)
from langchain_core.messages import AIMessageChunk, BaseMessage
from langchain_core.outputs import ChatGenerationChunk, ChatResult
from langchain_core.pydantic_v1 import Field, SecretStr
from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env, pre_init
from pydantic import Field, SecretStr
from langchain_community.adapters.openai import (
convert_message_to_dict,


@@ -48,11 +48,11 @@ from langchain_core.outputs import (
ChatGenerationChunk,
ChatResult,
)
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.runnables import Runnable
from langchain_core.tools import BaseTool
from langchain_core.utils import get_from_dict_or_env, pre_init
from langchain_core.utils.function_calling import convert_to_openai_tool
from pydantic import BaseModel, Field
logger = logging.getLogger(__name__)


@@ -21,8 +21,8 @@ from langchain_core.messages import (
SystemMessage,
)
from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult
from langchain_core.pydantic_v1 import root_validator
from langchain_core.utils import get_pydantic_field_names
from pydantic import ConfigDict, root_validator
logger = logging.getLogger(__name__)
@@ -83,11 +83,7 @@ class LlamaEdgeChatService(BaseChatModel):
"""model name, default is `NA`."""
streaming: bool = False
"""Whether to stream the results or not."""
class Config:
"""Configuration for this pydantic object."""
allow_population_by_field_name = True
model_config = ConfigDict(populate_by_name=True)
@root_validator(pre=True)
def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:


@@ -46,10 +46,10 @@ from langchain_core.output_parsers.openai_tools import (
parse_tool_call,
)
from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult
from langchain_core.pydantic_v1 import BaseModel, Field, root_validator
from langchain_core.runnables import Runnable, RunnableMap, RunnablePassthrough
from langchain_core.tools import BaseTool
from langchain_core.utils.function_calling import convert_to_openai_tool
from pydantic import BaseModel, Field, root_validator
class ChatLlamaCpp(BaseChatModel):


@@ -16,7 +16,7 @@ from langchain_core.messages import (
SystemMessage,
)
from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult
from langchain_core.pydantic_v1 import Field
from pydantic import Field
from requests import Response
from requests.exceptions import HTTPError


@@ -25,8 +25,8 @@ from langchain_core.messages import (
SystemMessage,
)
from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult
from langchain_core.pydantic_v1 import BaseModel, Field, SecretStr, root_validator
from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env
from pydantic import BaseModel, ConfigDict, Field, SecretStr, root_validator
logger = logging.getLogger(__name__)
@@ -285,11 +285,7 @@ class MiniMaxChat(BaseChatModel):
"""Minimax API Key"""
streaming: bool = False
"""Whether to stream the results or not."""
class Config:
"""Configuration for this pydantic object."""
allow_population_by_field_name = True
model_config = ConfigDict(populate_by_name=True)
@root_validator(pre=True, allow_reuse=True)
def validate_environment(cls, values: Dict) -> Dict:


@@ -19,11 +19,11 @@ from langchain_core.messages import (
SystemMessageChunk,
)
from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult
-from langchain_core.pydantic_v1 import (
+from langchain_core.runnables import RunnableConfig
+from pydantic import (
     Field,
     PrivateAttr,
 )
-from langchain_core.runnables import RunnableConfig
logger = logging.getLogger(__name__)


@@ -18,7 +18,7 @@ from langchain_core.outputs import (
ChatGeneration,
ChatResult,
)
from langchain_core.pydantic_v1 import BaseModel, Extra
from pydantic import BaseModel, Extra
logger = logging.getLogger(__name__)


@@ -16,7 +16,7 @@ from langchain_core.messages import (
SystemMessage,
)
from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult
from langchain_core.pydantic_v1 import Extra
from pydantic import ConfigDict
from langchain_community.llms.oci_generative_ai import OCIGenAIBase
from langchain_community.llms.utils import enforce_stop_tokens
@@ -220,10 +220,7 @@ class ChatOCIGenAI(BaseChatModel, OCIGenAIBase):
""" # noqa: E501
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
model_config = ConfigDict(extra="forbid")
@property
def _llm_type(self) -> str:


@@ -2,8 +2,8 @@
from typing import Dict
from langchain_core.pydantic_v1 import Field, SecretStr
from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env, pre_init
from pydantic import Field, SecretStr
from langchain_community.chat_models.openai import ChatOpenAI
from langchain_community.utils.openai import is_openai_v1


@@ -44,13 +44,13 @@ from langchain_core.messages import (
ToolMessageChunk,
)
from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult
from langchain_core.pydantic_v1 import BaseModel, Field, root_validator
from langchain_core.runnables import Runnable
from langchain_core.utils import (
get_from_dict_or_env,
get_pydantic_field_names,
pre_init,
)
from pydantic import BaseModel, ConfigDict, Field, root_validator
from langchain_community.adapters.openai import (
convert_dict_to_message,
@@ -243,11 +243,7 @@ class ChatOpenAI(BaseChatModel):
# [httpx documentation](https://www.python-httpx.org/api/#client) for more details.
http_client: Union[Any, None] = None
"""Optional httpx.Client."""
class Config:
"""Configuration for this pydantic object."""
allow_population_by_field_name = True
model_config = ConfigDict(populate_by_name=True)
@root_validator(pre=True)
def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:


@@ -17,8 +17,8 @@ from langchain_core.messages import (
SystemMessage,
)
from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult
from langchain_core.pydantic_v1 import root_validator
from langchain_core.utils import get_from_dict_or_env
from pydantic import root_validator
from langchain_community.llms.utils import enforce_stop_tokens


@@ -35,8 +35,8 @@ from langchain_core.messages import (
ToolMessageChunk,
)
from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult
from langchain_core.pydantic_v1 import Field, root_validator
from langchain_core.utils import get_from_dict_or_env, get_pydantic_field_names
from pydantic import ConfigDict, Field, root_validator
logger = logging.getLogger(__name__)
@@ -80,11 +80,7 @@ class ChatPerplexity(BaseChatModel):
"""Whether to stream the results or not."""
max_tokens: Optional[int] = None
"""Maximum number of tokens to generate."""
class Config:
"""Configuration for this pydantic object."""
allow_population_by_field_name = True
model_config = ConfigDict(populate_by_name=True)
@property
def lc_secrets(self) -> Dict[str, str]:
@@ -116,7 +112,6 @@ class ChatPerplexity(BaseChatModel):
values["model_kwargs"] = extra
return values
@root_validator(allow_reuse=True)
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that api key and python package exists in environment."""
values["pplx_api_key"] = get_from_dict_or_env(


@@ -35,13 +35,13 @@ from langchain_core.messages import (
SystemMessageChunk,
)
from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult
-from langchain_core.pydantic_v1 import (
+from langchain_core.utils import get_from_dict_or_env, pre_init
+from pydantic import (
     BaseModel,
-    Extra,
+    ConfigDict,
     Field,
     SecretStr,
 )
-from langchain_core.utils import get_from_dict_or_env, pre_init
if TYPE_CHECKING:
from premai.api.chat_completions.v1_chat_completions_create import (
@@ -239,14 +239,10 @@ class ChatPremAI(BaseChatModel, BaseModel):
streaming: Optional[bool] = False
"""Whether to stream the responses or not."""
client: Any
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
allow_population_by_field_name = True
arbitrary_types_allowed = True
client: Any = None
model_config = ConfigDict(
extra="forbid", populate_by_name=True, arbitrary_types_allowed=True
)
@pre_init
def validate_environments(cls, values: Dict) -> Dict:


@@ -11,7 +11,6 @@ from langchain_core.messages import (
SystemMessage,
)
from langchain_core.outputs import ChatGeneration, ChatResult
from langchain_core.pydantic_v1 import Field, SecretStr, root_validator
from langchain_core.utils import (
convert_to_secret_str,
get_from_dict_or_env,
@@ -19,6 +18,7 @@ from langchain_core.utils import (
pre_init,
)
from langchain_core.utils.utils import build_extra_kwargs
from pydantic import Field, SecretStr, root_validator
SUPPORTED_ROLES: List[str] = [
"system",


@@ -3,8 +3,8 @@
from typing import Dict
from langchain_core._api import deprecated
from langchain_core.pydantic_v1 import Field
from langchain_core.utils import get_from_dict_or_env, pre_init
from pydantic import ConfigDict, Field
from langchain_community.chat_models import ChatOpenAI
from langchain_community.llms.solar import SOLAR_SERVICE_URL_BASE, SolarCommon
@@ -28,14 +28,9 @@ class SolarChat(SolarCommon, ChatOpenAI):
"""
max_tokens: int = Field(default=1024)
# this is needed to match ChatOpenAI superclass
class Config:
"""Configuration for this pydantic object."""
allow_population_by_field_name = True
arbitrary_types_allowed = True
extra = "ignore"
model_config = ConfigDict(
populate_by_name=True, arbitrary_types_allowed=True, extra="ignore"
)
@pre_init
def validate_environment(cls, values: Dict) -> Dict:


@@ -35,11 +35,11 @@ from langchain_core.outputs import (
ChatGenerationChunk,
ChatResult,
)
from langchain_core.pydantic_v1 import Field, root_validator
from langchain_core.utils import (
get_from_dict_or_env,
get_pydantic_field_names,
)
from pydantic import ConfigDict, Field, root_validator
logger = logging.getLogger(__name__)
@@ -162,11 +162,7 @@ class ChatSparkLLM(BaseChatModel):
"""What search sampling control to use."""
model_kwargs: Dict[str, Any] = Field(default_factory=dict)
"""Holds any model parameters valid for API call not explicitly specified."""
class Config:
"""Configuration for this pydantic object."""
allow_population_by_field_name = True
model_config = ConfigDict(populate_by_name=True)
@root_validator(pre=True)
def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:


@@ -53,11 +53,11 @@ from langchain_core.outputs import (
ChatGenerationChunk,
ChatResult,
)
from langchain_core.pydantic_v1 import BaseModel, Field, SecretStr
from langchain_core.runnables import Runnable, RunnableMap, RunnablePassthrough
from langchain_core.tools import BaseTool
from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env, pre_init
from langchain_core.utils.function_calling import convert_to_openai_tool
from pydantic import BaseModel, ConfigDict, Field, SecretStr
from requests.exceptions import HTTPError
from tenacity import (
before_sleep_log,
@@ -451,11 +451,7 @@ class ChatTongyi(BaseChatModel):
max_retries: int = 10
"""Maximum number of retries to make when generating."""
class Config:
"""Configuration for this pydantic object."""
allow_population_by_field_name = True
model_config = ConfigDict(populate_by_name=True)
@property
def _llm_type(self) -> str:


@@ -40,12 +40,12 @@ from langchain_core.messages import (
SystemMessageChunk,
)
from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult
from langchain_core.pydantic_v1 import BaseModel, Field, root_validator
from langchain_core.utils import (
get_from_dict_or_env,
get_pydantic_field_names,
pre_init,
)
from pydantic import BaseModel, ConfigDict, Field, root_validator
from tenacity import (
before_sleep_log,
retry,
@@ -119,11 +119,7 @@ class ChatYuan2(BaseChatModel):
repeat_penalty: Optional[float] = 1.18
"""The penalty to apply to repeated tokens."""
class Config:
"""Configuration for this pydantic object."""
allow_population_by_field_name = True
model_config = ConfigDict(populate_by_name=True)
@property
def lc_secrets(self) -> Dict[str, str]:


@@ -31,8 +31,8 @@ from langchain_core.messages import (
SystemMessageChunk,
)
from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult
from langchain_core.pydantic_v1 import BaseModel, Field, root_validator
from langchain_core.utils import get_from_dict_or_env
from pydantic import BaseModel, ConfigDict, Field, root_validator
logger = logging.getLogger(__name__)
@@ -371,11 +371,7 @@ class ChatZhipuAI(BaseChatModel):
"""Whether to stream the results or not."""
max_tokens: Optional[int] = None
"""Maximum number of tokens to generate."""
class Config:
"""Configuration for this pydantic object."""
allow_population_by_field_name = True
model_config = ConfigDict(populate_by_name=True)
@root_validator(pre=True)
def validate_environment(cls, values: Dict[str, Any]) -> Dict[str, Any]:


@@ -1,7 +1,7 @@
from difflib import SequenceMatcher
from typing import List, Tuple
from langchain_core.pydantic_v1 import BaseModel
from pydantic import BaseModel
from langchain_community.cross_encoders.base import BaseCrossEncoder


@@ -1,6 +1,6 @@
from typing import Any, Dict, List, Tuple
from langchain_core.pydantic_v1 import BaseModel, Extra, Field
from pydantic import BaseModel, ConfigDict, Field
from langchain_community.cross_encoders.base import BaseCrossEncoder
@@ -23,7 +23,7 @@ class HuggingFaceCrossEncoder(BaseModel, BaseCrossEncoder):
)
"""
client: Any #: :meta private:
client: Any = None #: :meta private:
model_name: str = DEFAULT_MODEL_NAME
"""Model name to use."""
model_kwargs: Dict[str, Any] = Field(default_factory=dict)
@@ -45,10 +45,7 @@ class HuggingFaceCrossEncoder(BaseModel, BaseCrossEncoder):
self.model_name, **self.model_kwargs
)
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
model_config = ConfigDict(extra="forbid")
def score(self, text_pairs: List[Tuple[str, str]]) -> List[float]:
"""Compute similarity scores using a HuggingFace transformer model.


@@ -1,7 +1,7 @@
import json
from typing import Any, Dict, List, Optional, Tuple
from langchain_core.pydantic_v1 import BaseModel, Extra, root_validator
from pydantic import BaseModel, ConfigDict, root_validator
from langchain_community.cross_encoders.base import BaseCrossEncoder
@@ -61,7 +61,7 @@ class SagemakerEndpointCrossEncoder(BaseModel, BaseCrossEncoder):
credentials_profile_name=credentials_profile_name
)
"""
client: Any #: :meta private:
client: Any = None #: :meta private:
endpoint_name: str = ""
"""The name of the endpoint from the deployed Sagemaker model.
@@ -88,12 +88,7 @@ class SagemakerEndpointCrossEncoder(BaseModel, BaseCrossEncoder):
function. See `boto3`_. docs for more info.
.. _boto3: <https://boto3.amazonaws.com/v1/documentation/api/latest/index.html>
"""
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
arbitrary_types_allowed = True
model_config = ConfigDict(extra="forbid", arbitrary_types_allowed=True)
@root_validator(pre=True)
def validate_environment(cls, values: Dict) -> Dict:


@@ -5,8 +5,8 @@ from typing import Any, Dict, List, Optional, Sequence, Union
from langchain_core.callbacks.base import Callbacks
from langchain_core.documents import BaseDocumentCompressor, Document
from langchain_core.pydantic_v1 import Extra, Field, root_validator
from langchain_core.utils import get_from_dict_or_env
from pydantic import ConfigDict, Field, root_validator
class DashScopeRerank(BaseDocumentCompressor):
@@ -24,13 +24,9 @@ class DashScopeRerank(BaseDocumentCompressor):
dashscope_api_key: Optional[str] = Field(None, alias="api_key")
"""DashScope API key. Must be specified directly or via environment variable
DASHSCOPE_API_KEY."""
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
arbitrary_types_allowed = True
allow_population_by_field_name = True
model_config = ConfigDict(
extra="forbid", arbitrary_types_allowed=True, populate_by_name=True
)
@root_validator(pre=True)
def validate_environment(cls, values: Dict) -> Dict:


@@ -4,7 +4,7 @@ from typing import TYPE_CHECKING, Dict, Optional, Sequence
from langchain_core.callbacks.manager import Callbacks
from langchain_core.documents import BaseDocumentCompressor, Document
from langchain_core.pydantic_v1 import Extra, root_validator
from pydantic import ConfigDict, root_validator
if TYPE_CHECKING:
from flashrank import Ranker, RerankRequest
@@ -28,12 +28,7 @@ class FlashrankRerank(BaseDocumentCompressor):
"""Number of documents to return."""
model: Optional[str] = None
"""Model to use for reranking."""
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
arbitrary_types_allowed = True
model_config = ConfigDict(extra="forbid", arbitrary_types_allowed=True)
@root_validator(pre=True)
def validate_environment(cls, values: Dict) -> Dict:


@@ -6,8 +6,8 @@ from typing import Any, Dict, List, Optional, Sequence, Union
import requests
from langchain_core.callbacks import Callbacks
from langchain_core.documents import BaseDocumentCompressor, Document
from langchain_core.pydantic_v1 import Extra, root_validator
from langchain_core.utils import get_from_dict_or_env
from pydantic import ConfigDict, root_validator
JINA_API_URL: str = "https://api.jina.ai/v1/rerank"
@@ -26,12 +26,7 @@ class JinaRerank(BaseDocumentCompressor):
JINA_API_KEY."""
user_agent: str = "langchain"
"""Identifier for the application making the request."""
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
arbitrary_types_allowed = True
model_config = ConfigDict(extra="forbid", arbitrary_types_allowed=True)
@root_validator(pre=True)
def validate_environment(cls, values: Dict) -> Dict:

View File

@@ -8,7 +8,7 @@ from langchain_core.documents import Document
from langchain_core.documents.compressor import (
BaseDocumentCompressor,
)
from langchain_core.pydantic_v1 import root_validator
from pydantic import ConfigDict, root_validator
DEFAULT_LLM_LINGUA_INSTRUCTION = (
"Given these documents, please answer the final question"
@@ -71,11 +71,7 @@ class LLMLinguaCompressor(BaseDocumentCompressor):
)
return values
class Config:
"""Configuration for this pydantic object."""
extra = "forbid"
arbitrary_types_allowed = True
model_config = ConfigDict(extra="forbid", arbitrary_types_allowed=True)
@staticmethod
def _format_context(docs: Sequence[Document]) -> List[str]:

View File

@@ -5,7 +5,7 @@ import numpy as np
from langchain_core.callbacks import Callbacks
from langchain_core.documents import Document
from langchain_core.documents.compressor import BaseDocumentCompressor
from langchain_core.pydantic_v1 import Field
from pydantic import Field
class RerankRequest:

View File

@@ -7,8 +7,8 @@ from typing import TYPE_CHECKING, Any, Dict, Optional, Sequence
from langchain.retrievers.document_compressors.base import BaseDocumentCompressor
from langchain_core.callbacks.manager import Callbacks
from langchain_core.documents import Document
from langchain_core.pydantic_v1 import Extra, Field, PrivateAttr, root_validator
from langchain_core.utils import get_from_dict_or_env
from pydantic import ConfigDict, Field, PrivateAttr, root_validator
if TYPE_CHECKING:
from rank_llm.data import Candidate, Query, Request
@@ -35,12 +35,7 @@ class RankLLMRerank(BaseDocumentCompressor):
gpt_model: str = Field(default="gpt-3.5-turbo")
"""OpenAI model name."""
_retriever: Any = PrivateAttr()
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
arbitrary_types_allowed = True
model_config = ConfigDict(extra="forbid", arbitrary_types_allowed=True)
@root_validator(pre=True)
def validate_environment(cls, values: Dict) -> Dict:

View File
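The `RankLLMRerank` hunk keeps `_retriever: Any = PrivateAttr()` untouched; private attributes need no migration because they sit outside validation and serialization in both pydantic versions. A minimal sketch (class name hypothetical):

```python
from typing import Any, Optional

from pydantic import BaseModel, ConfigDict, PrivateAttr


class RankerSketch(BaseModel):
    model_config = ConfigDict(extra="forbid", arbitrary_types_allowed=True)

    gpt_model: str = "gpt-3.5-turbo"
    # Private attributes are ignored by validation, by model_dump(), and by
    # the extra="forbid" check, so they carry over from v1 unchanged.
    _retriever: Optional[Any] = PrivateAttr(default=None)


r = RankerSketch()
r._retriever = object()  # assigned after construction, never validated
```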

@@ -5,8 +5,8 @@ from typing import Any, Dict, List, Optional, Sequence, Union
from langchain_core.callbacks.base import Callbacks
from langchain_core.documents import BaseDocumentCompressor, Document
from langchain_core.pydantic_v1 import Extra, root_validator
from langchain_core.utils import get_from_dict_or_env
from pydantic import ConfigDict, root_validator
class VolcengineRerank(BaseDocumentCompressor):
@@ -31,13 +31,9 @@ class VolcengineRerank(BaseDocumentCompressor):
top_n: Optional[int] = 3
"""Number of documents to return."""
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
arbitrary_types_allowed = True
allow_population_by_field_name = True
model_config = ConfigDict(
extra="forbid", arbitrary_types_allowed=True, populate_by_name=True
)
@root_validator(pre=True)
def validate_environment(cls, values: Dict) -> Dict:

View File
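These `validate_environment` hooks keep the `@root_validator(pre=True)` decorator but import it from `pydantic` directly. Under pydantic v2 that v1-style decorator still runs (with a deprecation warning); the v2-native replacement would be `@model_validator(mode="before")`. A hypothetical sketch of the pattern:

```python
from typing import Any, Dict

from pydantic import BaseModel, root_validator


class EnvSketch(BaseModel):
    api_key: str

    # Deprecated-but-working v1-compat spelling; pydantic emits a
    # deprecation warning at class-definition time.
    @root_validator(pre=True)
    def validate_environment(cls, values: Dict[str, Any]) -> Dict[str, Any]:
        # Stand-in for get_from_dict_or_env: fall back to a default
        # when the caller did not supply api_key.
        values.setdefault("api_key", "from-env")
        return values


filled = EnvSketch()
explicit = EnvSketch(api_key="explicit")
```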

@@ -1,7 +1,7 @@
from typing import Any, Callable, Dict, List
from langchain_core.documents import Document
from langchain_core.pydantic_v1 import BaseModel, root_validator
from pydantic import BaseModel, root_validator
from langchain_community.document_loaders.base import BaseLoader
@@ -26,7 +26,7 @@ class ApifyDatasetLoader(BaseLoader, BaseModel):
documents = loader.load()
""" # noqa: E501
apify_client: Any
apify_client: Any = None
"""An instance of the ApifyClient class from the apify-client Python package."""
dataset_id: str
"""The ID of the dataset on the Apify platform."""

View File

@@ -10,13 +10,8 @@ from enum import Enum
from pathlib import Path, PurePath
from typing import TYPE_CHECKING, Any, Dict, Iterable, List, Sequence, Union
from langchain_core.pydantic_v1 import (
BaseModel,
BaseSettings,
Field,
FilePath,
SecretStr,
)
from pydantic import BaseModel, Field, FilePath, SecretStr
from pydantic_settings import BaseSettings, SettingsConfigDict
from langchain_community.document_loaders.base import BaseLoader
from langchain_community.document_loaders.blob_loaders.file_system import (
@@ -34,13 +29,11 @@ CHUNK_SIZE = 1024 * 1024 * 5
class _O365Settings(BaseSettings):
client_id: str = Field(..., env="O365_CLIENT_ID")
client_secret: SecretStr = Field(..., env="O365_CLIENT_SECRET")
class Config:
env_prefix = ""
case_sentive = False
env_file = ".env"
client_id: str = Field(..., validation_alias="O365_CLIENT_ID")
client_secret: SecretStr = Field(..., validation_alias="O365_CLIENT_SECRET")
model_config = SettingsConfigDict(
env_prefix="", case_sensitive=False, env_file=".env"
)
class _O365TokenStorage(BaseSettings):

View File

@@ -8,7 +8,7 @@ from typing import Any, Dict, List, Mapping, Optional, Sequence, Union
import requests
from langchain_core._api.deprecation import deprecated
from langchain_core.documents import Document
from langchain_core.pydantic_v1 import BaseModel, root_validator
from pydantic import BaseModel, root_validator
from langchain_community.document_loaders.base import BaseLoader
@@ -44,13 +44,13 @@ class DocugamiLoader(BaseLoader, BaseModel):
access_token: Optional[str] = os.environ.get("DOCUGAMI_API_KEY")
"""The Docugami API access token to use."""
max_text_length = 4096
max_text_length: int = 4096
"""Max length of chunk text returned."""
min_text_length: int = 32
"""Threshold under which chunks are appended to the next chunk to avoid over-chunking."""
max_metadata_length = 512
max_metadata_length: int = 512
"""Max length of metadata text returned."""
include_xml_tags: bool = False
@@ -69,13 +69,13 @@ class DocugamiLoader(BaseLoader, BaseModel):
"""Set to False if you want the full whitespace formatting of the original
XML doc, including indentation."""
docset_id: Optional[str]
docset_id: Optional[str] = None
"""The Docugami API docset ID to use."""
document_ids: Optional[Sequence[str]]
document_ids: Optional[Sequence[str]] = None
"""The Docugami API document IDs to use."""
file_paths: Optional[Sequence[Union[Path, str]]]
file_paths: Optional[Sequence[Union[Path, str]]] = None
"""The local file paths to use."""
include_project_metadata_in_doc_metadata: bool = True

View File
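The `DocugamiLoader` hunk shows the two annotation rules that bite when moving off `pydantic_v1`: a bare assignment like `max_text_length = 4096` is no longer inferred as a field (v2 raises an error for non-annotated attributes), and `Optional[...]` fields no longer get an implicit `None` default. A minimal illustrative model:

```python
from typing import Optional

from pydantic import BaseModel


class LoaderSketch(BaseModel):
    # v1 inferred ``int`` from ``max_text_length = 4096``; v2 refuses
    # non-annotated attributes outright, so the annotation is required.
    max_text_length: int = 4096

    # v1 treated ``Optional[str]`` as implicitly defaulting to None;
    # v2 makes the field required unless ``= None`` is written out.
    docset_id: Optional[str] = None


loader = LoaderSketch()
```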

@@ -12,7 +12,7 @@ from pathlib import Path
from typing import Any, Dict, List, Optional
from langchain_core.documents import Document
from langchain_core.pydantic_v1 import BaseModel, root_validator
from pydantic import BaseModel, root_validator
from langchain_community.document_loaders.base import BaseLoader

View File

@@ -5,8 +5,8 @@ from typing import Callable, Dict, Iterator, List, Literal, Optional, Union
import requests
from langchain_core.documents import Document
from langchain_core.pydantic_v1 import BaseModel, root_validator, validator
from langchain_core.utils import get_from_dict_or_env
from pydantic import BaseModel, root_validator, validator
from langchain_community.document_loaders.base import BaseLoader
@@ -181,7 +181,7 @@ class GithubFileLoader(BaseGitHubLoader, ABC):
file_extension: str = ".md"
branch: str = "main"
file_filter: Optional[Callable[[str], bool]]
file_filter: Optional[Callable[[str], bool]] = None
def get_file_paths(self) -> List[Dict]:
base_url = (

View File

@@ -13,7 +13,7 @@ from typing import Any, Dict, List, Optional, Sequence, Union
from langchain_core._api.deprecation import deprecated
from langchain_core.documents import Document
from langchain_core.pydantic_v1 import BaseModel, root_validator, validator
from pydantic import BaseModel, root_validator, validator
from langchain_community.document_loaders.base import BaseLoader

View File

@@ -6,7 +6,7 @@ import logging
from typing import TYPE_CHECKING, Iterator, List, Optional, Sequence, Union
from langchain_core.documents import Document
from langchain_core.pydantic_v1 import Field
from pydantic import Field
from langchain_community.document_loaders.base_o365 import (
O365BaseLoader,

View File

@@ -4,7 +4,7 @@ import tempfile
from typing import TYPE_CHECKING, List
from langchain_core.documents import Document
from langchain_core.pydantic_v1 import BaseModel, Field
from pydantic import BaseModel, ConfigDict, Field
from langchain_community.document_loaders.base import BaseLoader
from langchain_community.document_loaders.unstructured import UnstructuredFileLoader
@@ -20,11 +20,7 @@ class OneDriveFileLoader(BaseLoader, BaseModel):
file: File = Field(...)
"""The file to load."""
class Config:
arbitrary_types_allowed = True
"""Allow arbitrary types. This is needed for the File type. Default is True.
See https://pydantic-docs.helpmanual.io/usage/types/#arbitrary-types-allowed"""
model_config = ConfigDict(arbitrary_types_allowed=True)
def load(self) -> List[Document]:
"""Load Documents"""

View File
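`OneDriveFileLoader` keeps a non-pydantic `File` object as a field, which is why `arbitrary_types_allowed=True` must survive the migration. Sketched here with a stand-in class (the real `File` comes from the O365 SDK):

```python
from pydantic import BaseModel, ConfigDict


class File:
    """Plain Python class standing in for the O365 SDK's File type."""

    def __init__(self, name: str) -> None:
        self.name = name


class FileLoaderSketch(BaseModel):
    # Without arbitrary_types_allowed=True, pydantic cannot build a schema
    # for ``File`` and raises at class-definition time; with it, validation
    # is a plain isinstance() check.
    model_config = ConfigDict(arbitrary_types_allowed=True)

    file: File


loader = FileLoaderSketch(file=File("report.docx"))
```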

@@ -5,33 +5,24 @@ from typing import Dict, Iterator, List, Optional
import requests
from langchain_core.documents import Document
from langchain_core.pydantic_v1 import (
BaseModel,
BaseSettings,
Field,
FilePath,
SecretStr,
)
from pydantic import BaseModel, Field, FilePath, SecretStr
from langchain_community.document_loaders.base import BaseLoader
# from pydantic_settings import BaseSettings, SettingsConfigDict
class _OneNoteGraphSettings(BaseSettings):
client_id: str = Field(..., env="MS_GRAPH_CLIENT_ID")
client_secret: SecretStr = Field(..., env="MS_GRAPH_CLIENT_SECRET")
class Config:
"""Config for OneNoteGraphSettings."""
env_prefix = ""
case_sentive = False
env_file = ".env"
#
# class _OneNoteGraphSettings(BaseSettings):
# client_id: str = Field(..., validation_alias="MS_GRAPH_CLIENT_ID")
# client_secret: SecretStr = Field(..., validation_alias="MS_GRAPH_CLIENT_SECRET")
# model_config = SettingsConfigDict(env_prefix="", case_sensitive=False, env_file=".env")
#
class OneNoteLoader(BaseLoader, BaseModel):
"""Load pages from OneNote notebooks."""
settings: _OneNoteGraphSettings = Field(default_factory=_OneNoteGraphSettings) # type: ignore[arg-type]
# settings: _OneNoteGraphSettings = Field(default_factory=_OneNoteGraphSettings) # type: ignore[arg-type]
"""Settings for the Microsoft Graph API client."""
auth_with_token: bool = False
"""Whether to authenticate with a token or not. Defaults to False."""
@@ -39,7 +30,7 @@ class OneNoteLoader(BaseLoader, BaseModel):
"""Personal access token"""
onenote_api_base_url: str = "https://graph.microsoft.com/v1.0/me/onenote"
"""URL of Microsoft Graph API for OneNote"""
authority_url = "https://login.microsoftonline.com/consumers/"
authority_url: str = "https://login.microsoftonline.com/consumers/"
"""A URL that identifies a token authority"""
token_path: FilePath = Path.home() / ".credentials" / "onenote_graph_token.txt"
"""Path to the file where the access token is stored"""

View File

@@ -9,7 +9,7 @@ from typing import Any, Iterator, List, Optional, Sequence
import requests # type: ignore
from langchain_core.document_loaders import BaseLoader
from langchain_core.documents import Document
from langchain_core.pydantic_v1 import Field
from pydantic import Field
from langchain_community.document_loaders.base_o365 import (
O365BaseLoader,

View File

@@ -9,8 +9,8 @@ from typing import Any, Dict, Generator, List, Optional, Sequence, Union
from urllib.parse import parse_qs, urlparse
from langchain_core.documents import Document
from langchain_core.pydantic_v1 import root_validator
from langchain_core.pydantic_v1.dataclasses import dataclass
from pydantic import root_validator
from pydantic.dataclasses import dataclass
from langchain_community.document_loaders.base import BaseLoader

View File
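The YouTube loader hunk swaps `langchain_core.pydantic_v1.dataclasses` for `pydantic.dataclasses` directly; the decorator's behavior is unchanged — a stdlib-style dataclass whose constructor validates and coerces its arguments. A hypothetical example:

```python
from pydantic.dataclasses import dataclass


@dataclass
class VideoInfoSketch:
    # Made-up record for illustration; the real loader parses YouTube URLs.
    video_id: str
    length_seconds: int


# In lax (default) mode, the numeric string is coerced to int on construction.
info = VideoInfoSketch(video_id="abc123", length_seconds="42")
```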

@@ -5,7 +5,7 @@ from typing import Any, Callable, List, Sequence
import numpy as np
from langchain_core.documents import BaseDocumentTransformer, Document
from langchain_core.embeddings import Embeddings
from langchain_core.pydantic_v1 import BaseModel, Field
from pydantic import BaseModel, ConfigDict, Field
from langchain_community.utils.math import cosine_similarity
@@ -152,11 +152,7 @@ class EmbeddingsRedundantFilter(BaseDocumentTransformer, BaseModel):
similarity_threshold: float = 0.95
"""Threshold for determining when two documents are similar enough
to be considered redundant."""
class Config:
"""Configuration for this pydantic object."""
arbitrary_types_allowed = True
model_config = ConfigDict(arbitrary_types_allowed=True)
def transform_documents(
self, documents: Sequence[Document], **kwargs: Any
@@ -202,11 +198,7 @@ class EmbeddingsClusteringFilter(BaseDocumentTransformer, BaseModel):
This could dramatically reduce results when there is a lot of overlap between
clusters.
"""
class Config:
"""Configuration for this pydantic object."""
arbitrary_types_allowed = True
model_config = ConfigDict(arbitrary_types_allowed=True)
def transform_documents(
self, documents: Sequence[Document], **kwargs: Any

Some files were not shown because too many files have changed in this diff.