# langchain-mistralai

This package contains the LangChain integrations for MistralAI through their `mistralai` SDK.

## Installation

```bash
pip install -U langchain-mistralai
```

## Chat Models

This package contains the `ChatMistralAI` class, which is the recommended way to interface with MistralAI models.

To use it, install the requirements and configure your environment:

```bash
export MISTRAL_API_KEY=your-api-key
```
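If you prefer not to export the variable, the key can also be passed at construction time. A minimal sketch, assuming the constructor's `api_key` argument available in recent versions of the package:

```python
import os

from langchain_mistralai.chat_models import ChatMistralAI

# Pass the key explicitly instead of relying on MISTRAL_API_KEY being read
# from the environment by the client itself.
chat = ChatMistralAI(model="mistral-small", api_key=os.environ["MISTRAL_API_KEY"])
```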

Then initialize:

```python
from langchain_core.messages import HumanMessage
from langchain_mistralai.chat_models import ChatMistralAI

chat = ChatMistralAI(model="mistral-small")
messages = [HumanMessage(content="say a brief hello")]
chat.invoke(messages)
```
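`invoke` returns an `AIMessage`; the generated text is available on its `content` attribute:

```python
response = chat.invoke(messages)
print(response.content)  # the model's reply as a string
```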

`ChatMistralAI` also supports async and streaming functionality:

```python
# For async...
await chat.ainvoke(messages)

# For streaming...
for chunk in chat.stream(messages):
    print(chunk.content, end="", flush=True)
```
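Streaming also works in async code through `astream`, part of the standard Runnable interface that LangChain chat models implement:

```python
# Async streaming: chunks are yielded as the model generates them
async for chunk in chat.astream(messages):
    print(chunk.content, end="", flush=True)
```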

## Embeddings

With `MistralAIEmbeddings`, you can directly use the default model `mistral-embed`, or set a different one if available.

### Choose model

```python
from langchain_mistralai import MistralAIEmbeddings

embedding = MistralAIEmbeddings()

# The default model is used unless you set another one explicitly
embedding.model = "mistral-embed"
```

### Simple query

```python
res_query = embedding.embed_query("The test information")
```

### Documents

```python
res_document = embedding.embed_documents(["test1", "another test"])
```
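Both methods return plain lists of floats: one vector for `embed_query`, and one vector per input document for `embed_documents`. A quick sanity check (the 1024-dimension figure for `mistral-embed` is an assumption worth verifying against the current API docs):

```python
# embed_query -> List[float]; embed_documents -> List[List[float]]
print(len(res_query))                           # vector size, e.g. 1024 for mistral-embed
print(len(res_document), len(res_document[0]))  # 2 documents, each the same vector size
```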