Mirror of https://github.com/hwchase17/langchain.git, synced 2025-08-10 13:27:36 +00:00
We are implementing a token-counting callback handler in `langchain-core` that is intended to work with all chat models supporting usage metadata. The callback will aggregate usage metadata by model, which requires each response to include the model name in its metadata. To support this, if a model `returns_usage_metadata`, we check that it includes a string model name under the `"model_name"` key of its `response_metadata`. More context: https://github.com/langchain-ai/langchain/pull/30487
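As a rough sketch of the aggregation the callback performs (plain-Python illustration with hypothetical dict-shaped responses, not the actual `langchain-core` callback API), usage is keyed by the `"model_name"` string in each response's `response_metadata`:

```python
# Illustrative sketch only: the response dicts below stand in for chat
# model outputs carrying usage_metadata and response_metadata.
from collections import defaultdict

def aggregate_usage(responses):
    """Sum input/output token counts per model name."""
    totals = defaultdict(lambda: {"input_tokens": 0, "output_tokens": 0})
    for resp in responses:
        # The model name must be present as a string under "model_name".
        model = resp["response_metadata"]["model_name"]
        usage = resp["usage_metadata"]
        totals[model]["input_tokens"] += usage["input_tokens"]
        totals[model]["output_tokens"] += usage["output_tokens"]
    return dict(totals)

responses = [
    {"response_metadata": {"model_name": "gpt-4o-mini"},
     "usage_metadata": {"input_tokens": 10, "output_tokens": 5}},
    {"response_metadata": {"model_name": "gpt-4o-mini"},
     "usage_metadata": {"input_tokens": 7, "output_tokens": 3}},
]
print(aggregate_usage(responses))
```

Without a string model name in `response_metadata`, usage from different models could not be separated, which is why the check described above is enforced.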
# __package_name__

This package contains the LangChain integration with __ModuleName__.
## Installation

```bash
pip install -U __package_name__
```
And you should configure credentials by setting the following environment variables:
- TODO: fill this out
## Chat Models

`Chat__ModuleName__` class exposes chat models from __ModuleName__.

```python
from __module_name__ import Chat__ModuleName__

llm = Chat__ModuleName__()
llm.invoke("Sing a ballad of LangChain.")
```
## Embeddings

`__ModuleName__Embeddings` class exposes embeddings from __ModuleName__.

```python
from __module_name__ import __ModuleName__Embeddings

embeddings = __ModuleName__Embeddings()
embeddings.embed_query("What is the meaning of life?")
```
## LLMs

`__ModuleName__LLM` class exposes LLMs from __ModuleName__.

```python
from __module_name__ import __ModuleName__LLM

llm = __ModuleName__LLM()
llm.invoke("The meaning of life is")
```