Mirror of https://github.com/hwchase17/langchain.git, synced 2025-08-11 13:55:03 +00:00.
Some providers include (legacy) function calls in `additional_kwargs` in addition to tool calls. We currently unpack both function calls and tool calls if present, but OpenAI will raise 400 in this case. This can come up if providers are mixed in a tool-calling loop. Example:

```python
from langchain.chat_models import init_chat_model
from langchain_core.messages import HumanMessage
from langchain_core.tools import tool


@tool
def get_weather(location: str) -> str:
    """Get weather at a location."""
    return "It's sunny."


gemini = init_chat_model("google_genai:gemini-2.0-flash-001").bind_tools([get_weather])
openai = init_chat_model("openai:gpt-4.1-mini").bind_tools([get_weather])

input_message = HumanMessage("What's the weather in Boston?")

tool_call_message = gemini.invoke([input_message])
assert len(tool_call_message.tool_calls) == 1
tool_call = tool_call_message.tool_calls[0]
tool_message = get_weather.invoke(tool_call)

response = openai.invoke(  # currently raises 400 / BadRequestError
    [input_message, tool_call_message, tool_message]
)
```

Here we ignore function calls if tool calls are present.
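The intended behavior can be sketched with plain dictionaries standing in for message payloads. The helper name and the simplified message shape below are illustrative only, not the actual LangChain internals; the real change lives in the OpenAI message-conversion code:

```python
def drop_legacy_function_call(msg: dict) -> dict:
    """Strip a legacy `function_call` entry when proper tool calls are present.

    `msg` is a simplified stand-in for an AIMessage payload (hypothetical
    shape): sending both `tool_calls` and a `function_call` to OpenAI
    triggers a 400, so the legacy entry is dropped.
    """
    if msg.get("tool_calls") and "function_call" in msg.get("additional_kwargs", {}):
        cleaned = {
            k: v for k, v in msg["additional_kwargs"].items() if k != "function_call"
        }
        return {**msg, "additional_kwargs": cleaned}
    return msg


# A message (e.g. from Gemini) that carries both representations:
mixed = {
    "tool_calls": [{"name": "get_weather", "args": {"location": "Boston"}}],
    "additional_kwargs": {"function_call": {"name": "get_weather"}},
}
cleaned = drop_legacy_function_call(mixed)
assert "function_call" not in cleaned["additional_kwargs"]
```

Messages with no tool calls keep their `function_call` untouched, so purely legacy flows are unaffected.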
# langchain-openai

This package contains the LangChain integrations for OpenAI through their `openai` SDK.
## Installation and Setup

- Install the LangChain partner package:

  ```bash
  pip install langchain-openai
  ```

- Get an OpenAI API key and set it as an environment variable (`OPENAI_API_KEY`).
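In a POSIX shell, the environment variable can be set before starting your application (the key value below is a placeholder):

```shell
export OPENAI_API_KEY="sk-your-key-here"
```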
## Chat model

See a usage example.

```python
from langchain_openai import ChatOpenAI
```

If you are using a model hosted on Azure, you should use a different wrapper:

```python
from langchain_openai import AzureChatOpenAI
```

For a more detailed walkthrough of the Azure wrapper, see here.
## Text Embedding Model

See a usage example.

```python
from langchain_openai import OpenAIEmbeddings
```

If you are using a model hosted on Azure, you should use a different wrapper:

```python
from langchain_openai import AzureOpenAIEmbeddings
```

For a more detailed walkthrough of the Azure wrapper, see here.
## LLM (Legacy)

LLM refers to the legacy text-completion models that preceded chat models. See a usage example.

```python
from langchain_openai import OpenAI
```

If you are using a model hosted on Azure, you should use a different wrapper:

```python
from langchain_openai import AzureOpenAI
```

For a more detailed walkthrough of the Azure wrapper, see here.