mirror of https://github.com/hwchase17/langchain.git
synced 2025-06-23 07:09:31 +00:00

docs: update LiteLLM integration docs for router migration to langchain-litellm (#31063)

# What's Changed?
- [x] 1. docs: **docs/docs/integrations/chat/litellm.ipynb**: Updated with docs for litellm_router, since it has been moved into the [langchain-litellm](https://github.com/Akshay-Dongare/langchain-litellm) package along with ChatLiteLLM
- [x] 2. docs: **docs/docs/integrations/chat/litellm_router.ipynb**: Deleted to avoid redundancy
- [x] 3. docs: **docs/docs/integrations/providers/litellm.mdx**: Updated to reflect the inclusion of the ChatLiteLLMRouter class
- [x] Lint and test: Done

# Issue
- [x] Related to https://github.com/langchain-ai/langchain/issues/30368

# About me
- [x] 🔗 LinkedIn: [akshay-dongare](https://www.linkedin.com/in/akshay-dongare/)

This commit is contained in:
parent 275ba2ec37
commit 0b8e9868e6
docs/docs/integrations/chat/litellm.ipynb
@@ -19,18 +19,50 @@
 "id": "5bcea387"
 },
 "source": [
-"# ChatLiteLLM\n",
+"# ChatLiteLLM and ChatLiteLLMRouter\n",
 "\n",
 "[LiteLLM](https://github.com/BerriAI/litellm) is a library that simplifies calling Anthropic, Azure, Huggingface, Replicate, etc.\n",
 "\n",
 "This notebook covers how to get started with using Langchain + the LiteLLM I/O library.\n",
 "\n",
+"This integration contains two main classes:\n",
+"\n",
+"- ```ChatLiteLLM```: The main Langchain wrapper for basic usage of LiteLLM ([docs](https://docs.litellm.ai/docs/)).\n",
+"- ```ChatLiteLLMRouter```: A ```ChatLiteLLM``` wrapper that leverages LiteLLM's Router ([docs](https://docs.litellm.ai/docs/routing))."
+]
+},
+{
+"cell_type": "markdown",
+"id": "2ddb7fd3",
+"metadata": {},
+"source": [
+"## Table of Contents\n",
+"1. [Overview](#overview)\n",
+"   - [Integration Details](#integration-details)\n",
+"   - [Model Features](#model-features)\n",
+"2. [Setup](#setup)\n",
+"3. [Credentials](#credentials)\n",
+"4. [Installation](#installation)\n",
+"5. [Instantiation](#instantiation)\n",
+"   - [ChatLiteLLM](#chatlitellm)\n",
+"   - [ChatLiteLLMRouter](#chatlitellmrouter)\n",
+"6. [Invocation](#invocation)\n",
+"7. [Async and Streaming Functionality](#async-and-streaming-functionality)\n",
+"8. [API Reference](#api-reference)"
+]
+},
+{
+"cell_type": "markdown",
+"id": "37be6ef8",
+"metadata": {},
+"source": [
 "## Overview\n",
 "### Integration details\n",
 "\n",
 "| Class | Package | Local | Serializable | JS support| Package downloads | Package latest |\n",
 "| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
-"| [ChatLiteLLM](https://python.langchain.com/docs/integrations/chat/litellm/) | [langchain-litellm](https://pypi.org/project/langchain-litellm/)| ❌ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-litellm?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-litellm?style=flat-square&label=%20) |\n",
+"| [ChatLiteLLM](https://python.langchain.com/docs/integrations/chat/litellm/#chatlitellm) | [langchain-litellm](https://pypi.org/project/langchain-litellm/)| ❌ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-litellm?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-litellm?style=flat-square&label=%20) |\n",
+"| [ChatLiteLLMRouter](https://python.langchain.com/docs/integrations/chat/litellm/#chatlitellmrouter) | [langchain-litellm](https://pypi.org/project/langchain-litellm/)| ❌ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-litellm?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-litellm?style=flat-square&label=%20) |\n",
 "\n",
 "### Model features\n",
 "| [Tool calling](https://python.langchain.com/docs/how_to/tool_calling/) | [Structured output](https://python.langchain.com/docs/how_to/structured_output/) | JSON mode | Image input | Audio input | Video input | [Token-level streaming](https://python.langchain.com/docs/integrations/chat/litellm/#chatlitellm-also-supports-async-and-streaming-functionality) | [Native async](https://python.langchain.com/docs/integrations/chat/litellm/#chatlitellm-also-supports-async-and-streaming-functionality) | [Token usage](https://python.langchain.com/docs/how_to/chat_token_usage_tracking/) | [Logprobs](https://python.langchain.com/docs/how_to/logprobs/) |\n",
@@ -38,7 +70,7 @@
 "| ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |\n",
 "\n",
 "### Setup\n",
-"To access ChatLiteLLM models you'll need to install the `langchain-litellm` package and create an OpenAI, Anthropic, Azure, Replicate, OpenRouter, Hugging Face, Together AI or Cohere account. Then you have to get an API key, and export it as an environment variable."
+"To access ```ChatLiteLLM``` and ```ChatLiteLLMRouter``` models, you'll need to install the `langchain-litellm` package and create an OpenAI, Anthropic, Azure, Replicate, OpenRouter, Hugging Face, Together AI, or Cohere account. Then, you have to get an API key and export it as an environment variable."
 ]
 },
 {
@@ -53,23 +85,23 @@
 "You have to choose the LLM provider you want and sign up with them to get their API key.\n",
 "\n",
 "### Example - Anthropic\n",
-"Head to https://console.anthropic.com/ to sign up for Anthropic and generate an API key. Once you've done this set the ANTHROPIC_API_KEY environment variable.\n",
+"Head to https://console.anthropic.com/ to sign up for Anthropic and generate an API key. Once you've done this, set the ANTHROPIC_API_KEY environment variable.\n",
 "\n",
 "\n",
 "### Example - OpenAI\n",
-"Head to https://platform.openai.com/api-keys to sign up for OpenAI and generate an API key. Once you've done this set the OPENAI_API_KEY environment variable."
+"Head to https://platform.openai.com/api-keys to sign up for OpenAI and generate an API key. Once you've done this, set the OPENAI_API_KEY environment variable."
 ]
 },
 {
 "cell_type": "code",
-"execution_count": 1,
+"execution_count": null,
 "id": "7595eddf",
 "metadata": {
 "id": "7595eddf"
 },
 "outputs": [],
 "source": [
-"## set ENV variables\n",
+"## Set ENV variables\n",
 "import os\n",
 "\n",
 "os.environ[\"OPENAI_API_KEY\"] = \"your-openai-key\"\n",
@@ -85,7 +117,7 @@
 "source": [
 "### Installation\n",
 "\n",
-"The LangChain LiteLLM integration lives in the `langchain-litellm` package:"
+"The LangChain LiteLLM integration is available in the `langchain-litellm` package:"
 ]
 },
 {
@@ -107,13 +139,21 @@
 "id": "bc1182b4"
 },
 "source": [
-"## Instantiation\n",
-"Now we can instantiate our model object and generate chat completions:"
+"## Instantiation"
 ]
 },
+{
+"cell_type": "markdown",
+"id": "d439241a",
+"metadata": {},
+"source": [
+"### ChatLiteLLM\n",
+"You can instantiate a ```ChatLiteLLM``` model by providing a ```model``` name [supported by LiteLLM](https://docs.litellm.ai/docs/providers)."
+]
+},
 {
 "cell_type": "code",
-"execution_count": 3,
+"execution_count": null,
 "id": "d4a7c55d-b235-4ca4-a579-c90cc9570da9",
 "metadata": {
 "id": "d4a7c55d-b235-4ca4-a579-c90cc9570da9",
@@ -123,7 +163,50 @@
 "source": [
 "from langchain_litellm import ChatLiteLLM\n",
 "\n",
-"llm = ChatLiteLLM(model=\"gpt-3.5-turbo\")"
+"llm = ChatLiteLLM(model=\"gpt-4.1-nano\", temperature=0.1)"
+]
+},
+{
+"cell_type": "markdown",
+"id": "3d0ed306",
+"metadata": {},
+"source": [
+"### ChatLiteLLMRouter\n",
+"You can also leverage LiteLLM's routing capabilities by defining your model list as specified [here](https://docs.litellm.ai/docs/routing)."
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"id": "8d26393a",
+"metadata": {},
+"outputs": [],
+"source": [
+"from langchain_litellm import ChatLiteLLMRouter\n",
+"from litellm import Router\n",
+"\n",
+"model_list = [\n",
+"    {\n",
+"        \"model_name\": \"gpt-4.1\",\n",
+"        \"litellm_params\": {\n",
+"            \"model\": \"azure/gpt-4.1\",\n",
+"            \"api_key\": \"<your-api-key>\",\n",
+"            \"api_version\": \"2024-10-21\",\n",
+"            \"api_base\": \"https://<your-endpoint>.openai.azure.com/\",\n",
+"        },\n",
+"    },\n",
+"    {\n",
+"        \"model_name\": \"gpt-4o\",\n",
+"        \"litellm_params\": {\n",
+"            \"model\": \"azure/gpt-4o\",\n",
+"            \"api_key\": \"<your-api-key>\",\n",
+"            \"api_version\": \"2024-10-21\",\n",
+"            \"api_base\": \"https://<your-endpoint>.openai.azure.com/\",\n",
+"        },\n",
+"    },\n",
+"]\n",
+"litellm_router = Router(model_list=model_list)\n",
+"llm = ChatLiteLLMRouter(router=litellm_router, model_name=\"gpt-4.1\", temperature=0.1)"
 ]
 },
 {
@@ -133,7 +216,8 @@
 "id": "63d98454"
 },
 "source": [
-"## Invocation"
+"## Invocation\n",
+"Whether you've instantiated a `ChatLiteLLM` or a `ChatLiteLLMRouter`, you can now use the ChatModel through Langchain's API."
 ]
 },
 {
@@ -171,7 +255,8 @@
 "id": "c361ab1e-8c0c-4206-9e3c-9d1424a12b9c"
 },
 "source": [
-"## `ChatLiteLLM` also supports async and streaming functionality:"
+"## Async and Streaming Functionality\n",
+"`ChatLiteLLM` and `ChatLiteLLMRouter` also support async and streaming functionality:"
 ]
 },
 {
@@ -212,7 +297,7 @@
 },
 "source": [
 "## API reference\n",
-"For detailed documentation of all `ChatLiteLLM` features and configurations head to the API reference: https://github.com/Akshay-Dongare/langchain-litellm"
+"For detailed documentation of all `ChatLiteLLM` and `ChatLiteLLMRouter` features and configurations, head to the API reference: https://github.com/Akshay-Dongare/langchain-litellm"
 ]
 }
 ],
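The router configuration added to the notebook above keys each entry by `model_name` and nests the provider settings under `litellm_params`. A minimal stdlib-only sketch of that shape follows; the `params_for` helper is purely illustrative and not part of LiteLLM or LangChain.

```python
# Illustrative sketch of LiteLLM's Router model_list shape, as used in the
# updated notebook. `params_for` is a hypothetical helper for demonstration.

model_list = [
    {
        "model_name": "gpt-4.1",
        "litellm_params": {"model": "azure/gpt-4.1", "api_version": "2024-10-21"},
    },
    {
        "model_name": "gpt-4o",
        "litellm_params": {"model": "azure/gpt-4o", "api_version": "2024-10-21"},
    },
]


def params_for(name: str) -> dict:
    """Return the litellm_params entry registered under a given model_name."""
    for entry in model_list:
        if entry["model_name"] == name:
            return entry["litellm_params"]
    raise KeyError(name)


print(params_for("gpt-4.1")["model"])  # azure/gpt-4.1
```

The Router dispatches requests the same way: the `model_name` you pass to `ChatLiteLLMRouter` selects which deployment's `litellm_params` are used.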
docs/docs/integrations/chat/litellm_router.ipynb (deleted)
@@ -1,218 +0,0 @@
-{
-"cells": [
-{
-"cell_type": "raw",
-"id": "59148044",
-"metadata": {},
-"source": [
-"---\n",
-"sidebar_label: LiteLLM Router\n",
-"---"
-]
-},
-{
-"cell_type": "markdown",
-"id": "247da7a6",
-"metadata": {},
-"source": []
-},
-{
-"attachments": {},
-"cell_type": "markdown",
-"id": "bf733a38-db84-4363-89e2-de6735c37230",
-"metadata": {},
-"source": [
-"# ChatLiteLLMRouter\n",
-"\n",
-"[LiteLLM](https://github.com/BerriAI/litellm) is a library that simplifies calling Anthropic, Azure, Huggingface, Replicate, etc. \n",
-"\n",
-"This notebook covers how to get started with using Langchain + the LiteLLM Router I/O library. "
-]
-},
-{
-"cell_type": "code",
-"execution_count": 1,
-"id": "d4a7c55d-b235-4ca4-a579-c90cc9570da9",
-"metadata": {
-"tags": []
-},
-"outputs": [],
-"source": [
-"from langchain_community.chat_models import ChatLiteLLMRouter\n",
-"from langchain_core.messages import HumanMessage\n",
-"from litellm import Router"
-]
-},
-{
-"cell_type": "code",
-"execution_count": 2,
-"id": "70cf04e8-423a-4ff6-8b09-f11fb711c817",
-"metadata": {
-"tags": []
-},
-"outputs": [],
-"source": [
-"model_list = [\n",
-"    {\n",
-"        \"model_name\": \"gpt-4\",\n",
-"        \"litellm_params\": {\n",
-"            \"model\": \"azure/gpt-4-1106-preview\",\n",
-"            \"api_key\": \"<your-api-key>\",\n",
-"            \"api_version\": \"2023-05-15\",\n",
-"            \"api_base\": \"https://<your-endpoint>.openai.azure.com/\",\n",
-"        },\n",
-"    },\n",
-"    {\n",
-"        \"model_name\": \"gpt-35-turbo\",\n",
-"        \"litellm_params\": {\n",
-"            \"model\": \"azure/gpt-35-turbo\",\n",
-"            \"api_key\": \"<your-api-key>\",\n",
-"            \"api_version\": \"2023-05-15\",\n",
-"            \"api_base\": \"https://<your-endpoint>.openai.azure.com/\",\n",
-"        },\n",
-"    },\n",
-"]\n",
-"litellm_router = Router(model_list=model_list)\n",
-"chat = ChatLiteLLMRouter(router=litellm_router, model_name=\"gpt-35-turbo\")"
-]
-},
-{
-"cell_type": "code",
-"execution_count": 3,
-"id": "8199ef8f-eb8b-4253-9ea0-6c24a013ca4c",
-"metadata": {
-"tags": []
-},
-"outputs": [
-{
-"data": {
-"text/plain": [
-"AIMessage(content=\"J'aime programmer.\")"
-]
-},
-"execution_count": 3,
-"metadata": {},
-"output_type": "execute_result"
-}
-],
-"source": [
-"messages = [\n",
-"    HumanMessage(\n",
-"        content=\"Translate this sentence from English to French. I love programming.\"\n",
-"    )\n",
-"]\n",
-"chat(messages)"
-]
-},
-{
-"attachments": {},
-"cell_type": "markdown",
-"id": "c361ab1e-8c0c-4206-9e3c-9d1424a12b9c",
-"metadata": {},
-"source": [
-"## `ChatLiteLLMRouter` also supports async and streaming functionality:"
-]
-},
-{
-"cell_type": "code",
-"execution_count": 4,
-"id": "93a21c5c-6ef9-4688-be60-b2e1f94842fb",
-"metadata": {
-"tags": []
-},
-"outputs": [],
-"source": [
-"from langchain_core.callbacks import CallbackManager, StreamingStdOutCallbackHandler"
-]
-},
-{
-"cell_type": "code",
-"execution_count": 5,
-"id": "c5fac0e9-05a4-4fc1-a3b3-e5bbb24b971b",
-"metadata": {
-"tags": []
-},
-"outputs": [
-{
-"data": {
-"text/plain": [
-"LLMResult(generations=[[ChatGeneration(text=\"J'adore programmer.\", generation_info={'finish_reason': 'stop'}, message=AIMessage(content=\"J'adore programmer.\"))]], llm_output={'token_usage': {'completion_tokens': 6, 'prompt_tokens': 19, 'total_tokens': 25}, 'model_name': None}, run=[RunInfo(run_id=UUID('75003ec9-1e2b-43b7-a216-10dcc0f75e00'))])"
-]
-},
-"execution_count": 5,
-"metadata": {},
-"output_type": "execute_result"
-}
-],
-"source": [
-"await chat.agenerate([messages])"
-]
-},
-{
-"cell_type": "code",
-"execution_count": 6,
-"id": "025be980-e50d-4a68-93dc-c9c7b500ce34",
-"metadata": {
-"tags": []
-},
-"outputs": [
-{
-"name": "stdout",
-"output_type": "stream",
-"text": [
-"J'adore programmer."
-]
-},
-{
-"data": {
-"text/plain": [
-"AIMessage(content=\"J'adore programmer.\")"
-]
-},
-"execution_count": 6,
-"metadata": {},
-"output_type": "execute_result"
-}
-],
-"source": [
-"chat = ChatLiteLLMRouter(\n",
-"    router=litellm_router,\n",
-"    model_name=\"gpt-35-turbo\",\n",
-"    streaming=True,\n",
-"    verbose=True,\n",
-"    callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),\n",
-")\n",
-"chat(messages)"
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"id": "c253883f",
-"metadata": {},
-"outputs": [],
-"source": []
-}
-],
-"metadata": {
-"kernelspec": {
-"display_name": "Python 3 (ipykernel)",
-"language": "python",
-"name": "python3"
-},
-"language_info": {
-"codemirror_mode": {
-"name": "ipython",
-"version": 3
-},
-"file_extension": ".py",
-"mimetype": "text/x-python",
-"name": "python",
-"nbconvert_exporter": "python",
-"pygments_lexer": "ipython3",
-"version": "3.11.9"
-}
-},
-"nbformat": 4,
-"nbformat_minor": 5
-}
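The deleted notebook's final cell streamed tokens to stdout through `StreamingStdOutCallbackHandler`. The callback pattern it relies on can be sketched in plain Python; the classes below are stand-ins for illustration, not the LangChain API.

```python
# Illustrative sketch of the streaming-callback pattern: the handler receives
# each token as it is generated, instead of waiting for the full reply.
# Both names here are hypothetical stand-ins, not LangChain classes.

class CollectingHandler:
    """Stand-in for a streaming callback handler that records tokens."""

    def __init__(self) -> None:
        self.tokens: list[str] = []

    def on_llm_new_token(self, token: str) -> None:
        self.tokens.append(token)


def fake_stream(text: str, handler: CollectingHandler, size: int = 4) -> str:
    """Stand-in for a streaming chat call: emit fixed-size chunks."""
    for i in range(0, len(text), size):
        handler.on_llm_new_token(text[i : i + size])
    return text


handler = CollectingHandler()
result = fake_stream("J'adore programmer.", handler)
# The handler saw the reply incrementally; joined, it equals the full text.
assert "".join(handler.tokens) == result
```

In the real notebook the same role was played by `CallbackManager([StreamingStdOutCallbackHandler()])`, which printed each token as it arrived.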
docs/docs/integrations/providers/litellm.mdx
@@ -12,7 +12,10 @@ pip install langchain-litellm
 ```python
 from langchain_litellm import ChatLiteLLM
 ```
+```python
+from langchain_litellm import ChatLiteLLMRouter
+```
 See more detail in the guide [here](/docs/integrations/chat/litellm).

 ## API reference
-For detailed documentation of all `ChatLiteLLM` features and configurations head to the API reference: https://github.com/Akshay-Dongare/langchain-litellm
+For detailed documentation of all `ChatLiteLLM` and `ChatLiteLLMRouter` features and configurations head to the API reference: https://github.com/Akshay-Dongare/langchain-litellm
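Per the guide these docs link to, both classes expect provider credentials to be exported as environment variables before instantiation. A minimal sketch with placeholder values (stdlib only; the keys shown are not real):

```python
import os

# Placeholder credentials, mirroring the notebook's setup cell. LiteLLM
# selects the provider (and hence which key it reads) from the model name,
# e.g. "azure/gpt-4o" vs. a plain OpenAI model name.
os.environ["OPENAI_API_KEY"] = "your-openai-key"
os.environ["ANTHROPIC_API_KEY"] = "your-anthropic-key"
```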