# langchain-mistralai

This package contains the LangChain integrations for MistralAI through their `mistralai` SDK.
## Installation

```bash
pip install -U langchain-mistralai
```
## Chat Models

This package contains the `ChatMistralAI` class, which is the recommended way to interface with MistralAI models.

To use, install the requirements and configure your environment:

```bash
export MISTRAL_API_KEY=your-api-key
```
Then initialize:

```python
from langchain_core.messages import HumanMessage
from langchain_mistralai.chat_models import ChatMistralAI

chat = ChatMistralAI(model="mistral-small")
messages = [HumanMessage(content="say a brief hello")]
chat.invoke(messages)
```
`ChatMistralAI` also supports async and streaming functionality:

```python
# For async...
await chat.ainvoke(messages)

# For streaming...
for chunk in chat.stream(messages):
    print(chunk.content, end="", flush=True)
```
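Async streaming is supported as well via the standard `astream` method that LangChain chat models expose. A minimal sketch, assuming the same `chat` and `messages` objects as above:

```python
# For async streaming...
async for chunk in chat.astream(messages):
    print(chunk.content, end="", flush=True)
```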
## Embeddings

With `MistralAIEmbeddings`, you can directly use the default model `mistral-embed`, or set a different one if available.
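The snippets below assume an `embedding` instance has already been created. A minimal sketch, assuming `MISTRAL_API_KEY` is set in your environment (the key can also be passed explicitly at construction):

```python
from langchain_mistralai import MistralAIEmbeddings

# Assumes the MISTRAL_API_KEY environment variable is set for authentication
embedding = MistralAIEmbeddings()
```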
### Choose model

```python
embedding.model = 'mistral-embed'
```
### Simple query

```python
res_query = embedding.embed_query("The test information")
```
### Documents

```python
res_document = embedding.embed_documents(["test1", "another test"])
```
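Async variants of both calls are available through LangChain's base embeddings interface (`aembed_query` and `aembed_documents`). A minimal sketch, assuming the same `embedding` instance as above:

```python
# Async equivalents of the two calls above
res_query = await embedding.aembed_query("The test information")
res_document = await embedding.aembed_documents(["test1", "another test"])
```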