mirror of https://github.com/hwchase17/langchain.git
synced 2025-04-28 11:55:21 +00:00

docs, cli[patch]: chat model doc template (#22290)

Updates the ChatModel integration doc template and integration docstring, and adds a langchain-cli command to easily create just the doc (for updating existing integrations):

```bash
langchain-cli integration create-doc --name "foo-bar"
```

parent f40e341a03
commit 627a337887
@@ -135,25 +135,17 @@
"source": [
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4o\", temperature=0)"
]
},
{
"cell_type": "markdown",
"id": "4e5fe97e",
"metadata": {},
"source": [
"The above cell assumes that your OpenAI API key is set in your environment variables. If you would prefer, you can specify credentials like the API key, organization ID, base URL, etc. as init params:\n",
"\n",
"```python\n",
"llm = ChatOpenAI(\n",
"    model=\"gpt-4o\",\n",
"    temperature=0,\n",
"    api_key=\"YOUR_API_KEY\",\n",
"    organization=\"YOUR_ORGANIZATION_ID\",\n",
"    base_url=\"YOUR_BASE_URL\"\n",
")\n",
"```"
"    model=\"gpt-4o\",\n",
"    temperature=0,\n",
"    max_tokens=None,\n",
"    timeout=None,\n",
"    max_retries=2,\n",
"    # api_key=\"...\",  # if you prefer to pass your API key in directly instead of using env vars\n",
"    # base_url=\"...\",\n",
"    # organization=\"...\",\n",
"    # other params...\n",
")"
]
},
{
@@ -220,7 +212,7 @@
"source": [
"## Chaining\n",
"\n",
"We can chain our model with a prompt template like so:"
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
@@ -17,20 +17,122 @@
"source": [
"# Chat__ModuleName__\n",
"\n",
"This notebook covers how to get started with __ModuleName__ chat models.\n",
"- TODO: Make sure API reference link is correct.\n",
"\n",
"## Installation"
"This notebook provides a quick overview for getting started with __ModuleName__ [chat models](/docs/concepts/#chat-models). For detailed documentation of all Chat__ModuleName__ features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/chat_models/__module_name__.chat_models.Chat__ModuleName__.html).\n",
"\n",
"- TODO: Add any other relevant links, like information about models, prices, context windows, etc. See https://python.langchain.com/v0.2/docs/integrations/chat/openai/ for an example.\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"- TODO: Fill in table features.\n",
"- TODO: Remove JS support link if not relevant, otherwise ensure link is correct.\n",
"- TODO: Make sure API reference links are correct.\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/v0.2/docs/integrations/chat/__package_name_short_snake__) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [Chat__ModuleName__](https://api.python.langchain.com/en/latest/chat_models/__module_name__.chat_models.Chat__ModuleName__.html) | [__package_name__](https://api.python.langchain.com/en/latest/__package_name_short_snake___api_reference.html) | ✅/❌ | beta/❌ | ✅/❌ |  |  |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling/) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Native streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅/❌ | ✅/❌ | ✅/❌ | ✅/❌ | ✅/❌ | ✅/❌ | ✅/❌ | ✅/❌ | ✅/❌ | ✅/❌ |\n",
"\n",
"## Setup\n",
"\n",
"- TODO: Update with relevant info.\n",
"\n",
"To access __ModuleName__ models you'll need to create a/an __ModuleName__ account, get an API key, and install the `__package_name__` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"- TODO: Update with relevant info.\n",
"\n",
"Head to (TODO: link) to sign up for __ModuleName__ and generate an API key. Once you've done this, set the __MODULE_NAME___API_KEY environment variable:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4c3bef91",
"id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94",
"metadata": {},
"outputs": [],
"source": [
"# install package\n",
"!pip install -U __package_name__"
"import getpass\n",
"import os\n",
"\n",
"os.environ[\"__MODULE_NAME___API_KEY\"] = getpass.getpass(\"Enter your __ModuleName__ API key: \")"
]
},
{
"cell_type": "markdown",
"id": "72ee0c4b-9764-423a-9dbf-95129e185210",
"metadata": {},
"source": [
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain __ModuleName__ integration lives in the `__package_name__` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU __package_name__"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:\n",
"\n",
"- TODO: Update model instantiation with relevant params."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
"source": [
"from __module_name__ import Chat__ModuleName__\n",
"\n",
"llm = Chat__ModuleName__(\n",
"    model=\"model-name\",\n",
"    temperature=0,\n",
"    max_tokens=None,\n",
"    timeout=None,\n",
"    max_retries=2,\n",
"    # other params...\n",
")"
]
},
{
@@ -38,13 +140,9 @@
"id": "2b4f3e15",
"metadata": {},
"source": [
"## Environment Setup\n",
"## Invocation\n",
"\n",
"Make sure to set the following environment variables:\n",
"\n",
"- TODO: fill out relevant environment variables or secrets\n",
"\n",
"## Usage"
"- TODO: Run cells so output can be seen."
]
},
{
@@ -56,20 +154,86 @@
},
"outputs": [],
"source": [
"from __module_name__.chat_models import Chat__ModuleName__\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"messages = [\n",
"    (\n",
"        \"system\",\n",
"        \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
"    ),\n",
"    (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"metadata": {},
"outputs": [],
"source": [
"print(ai_msg.content)"
]
},
{
"cell_type": "markdown",
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"chat = Chat__ModuleName__()\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:\n",
"\n",
"- TODO: Run cells so output can be seen."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
"    [\n",
"        (\"system\", \"You are a helpful assistant that translates English to French.\"),\n",
"        (\"human\", \"Translate this sentence from English to French. {english_text}.\"),\n",
"        (\n",
"            \"system\",\n",
"            \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
"        ),\n",
"        (\"human\", \"{input}\"),\n",
"    ]\n",
")\n",
"\n",
"chain = prompt | chat\n",
"chain.invoke({\"english_text\": \"Hello, how are you?\"})"
"chain = prompt | llm\n",
"chain.invoke(\n",
"    {\n",
"        \"input_language\": \"English\",\n",
"        \"output_language\": \"German\",\n",
"        \"input\": \"I love programming.\",\n",
"    }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "d1ee55bc-ffc8-4cfa-801c-993953a08cfd",
"metadata": {},
"source": [
"## TODO: Any functionality specific to this model provider\n",
"\n",
"E.g. creating/using fine-tuned models via this provider. Delete if not relevant."
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all Chat__ModuleName__ features and configurations head to the API reference: https://api.python.langchain.com/en/latest/chat_models/__module_name__.chat_models.Chat__ModuleName__.html"
]
}
],
@@ -89,7 +253,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.5"
"version": "3.9.1"
}
},
"nbformat": 4,
@@ -1,34 +1,262 @@
"""__ModuleName__ chat models."""

from typing import Any, AsyncIterator, Iterator, List, Optional
from typing import Any, List, Optional

from langchain_core.callbacks import (
    AsyncCallbackManagerForLLMRun,
    CallbackManagerForLLMRun,
)
from langchain_core.language_models.chat_models import BaseChatModel
from langchain_core.messages import BaseMessage
from langchain_core.outputs import ChatGenerationChunk, ChatResult
from langchain_core.outputs import ChatResult


class Chat__ModuleName__(BaseChatModel):
    """Chat__ModuleName__ chat model.
    # TODO: Replace all TODOs in docstring. See example docstring:
    # https://github.com/langchain-ai/langchain/blob/7ff05357bac6eaedf5058a2af88f23a1817d40fe/libs/partners/openai/langchain_openai/chat_models/base.py#L1120
    """__ModuleName__ chat model integration.

    Example:
    # TODO: Replace with relevant packages, env vars.
    Setup:
        Install ``__package_name__`` and set environment variable ``__MODULE_NAME___API_KEY``.

        .. code-block:: bash

            pip install -U __package_name__
            export __MODULE_NAME___API_KEY="your-api-key"

    # TODO: Populate with relevant params.
    Key init args — completion params:
        model: str
            Name of __ModuleName__ model to use.
        temperature: float
            Sampling temperature.
        max_tokens: Optional[int]
            Max number of tokens to generate.

    # TODO: Populate with relevant params.
    Key init args — client params:
        timeout:
            Timeout for requests.
        max_retries:
            Max number of retries.
        api_key:
            __ModuleName__ API key. If not passed in, will be read from env var __MODULE_NAME___API_KEY.

    See full list of supported init args and their descriptions in the params section.

    # TODO: Replace with relevant init params.
    Instantiate:
        .. code-block:: python

            from langchain_core.messages import HumanMessage

            from __module_name__ import Chat__ModuleName__

            model = Chat__ModuleName__()
            model.invoke([HumanMessage(content="Come up with 10 names for a song about parrots.")])
    """  # noqa: E501

            llm = Chat__ModuleName__(
                model="...",
                temperature=0,
                max_tokens=None,
                timeout=None,
                max_retries=2,
                # api_key="...",
                # other params...
            )

    @property
    def _llm_type(self) -> str:
        """Return type of chat model."""
        return "chat-__package_name_short__"

    Invoke:
        .. code-block:: python

            messages = [
                ("system", "You are a helpful translator. Translate the user sentence to French."),
                ("human", "I love programming."),
            ]
            llm.invoke(messages)

        .. code-block:: python

            # TODO: Example output.

    # TODO: Delete if token-level streaming isn't supported.
    Stream:
        .. code-block:: python

            for chunk in llm.stream(messages):
                print(chunk)

        .. code-block:: python

            # TODO: Example output.

        .. code-block:: python

            stream = llm.stream(messages)
            full = next(stream)
            for chunk in stream:
                full += chunk
            full

        .. code-block:: python

            # TODO: Example output.

    # TODO: Delete if native async isn't supported.
    Async:
        .. code-block:: python

            await llm.ainvoke(messages)

            # stream:
            # async for chunk in llm.astream(messages)

            # batch:
            # await llm.abatch([messages])

        .. code-block:: python

            # TODO: Example output.

    # TODO: Delete if .bind_tools() isn't supported.
    Tool calling:
        .. code-block:: python

            from langchain_core.pydantic_v1 import BaseModel, Field

            class GetWeather(BaseModel):
                '''Get the current weather in a given location'''

                location: str = Field(..., description="The city and state, e.g. San Francisco, CA")

            class GetPopulation(BaseModel):
                '''Get the current population in a given location'''

                location: str = Field(..., description="The city and state, e.g. San Francisco, CA")

            llm_with_tools = llm.bind_tools([GetWeather, GetPopulation])
            ai_msg = llm_with_tools.invoke("Which city is hotter today and which is bigger: LA or NY?")
            ai_msg.tool_calls

        .. code-block:: python

            # TODO: Example output.

        See ``Chat__ModuleName__.bind_tools()`` method for more.

    # TODO: Delete if .with_structured_output() isn't supported.
    Structured output:
        .. code-block:: python

            from typing import Optional

            from langchain_core.pydantic_v1 import BaseModel, Field

            class Joke(BaseModel):
                '''Joke to tell user.'''

                setup: str = Field(description="The setup of the joke")
                punchline: str = Field(description="The punchline to the joke")
                rating: Optional[int] = Field(description="How funny the joke is, from 1 to 10")

            structured_llm = llm.with_structured_output(Joke)
            structured_llm.invoke("Tell me a joke about cats")

        .. code-block:: python

            # TODO: Example output.

        See ``Chat__ModuleName__.with_structured_output()`` for more.

    # TODO: Delete if JSON mode response format isn't supported.
    JSON mode:
        .. code-block:: python

            # TODO: Replace with appropriate bind arg.
            json_llm = llm.bind(response_format={"type": "json_object"})
            ai_msg = json_llm.invoke("Return a JSON object with key 'random_ints' and a value of 10 random ints in [0-99]")
            ai_msg.content

        .. code-block:: python

            # TODO: Example output.

    # TODO: Delete if image inputs aren't supported.
    Image input:
        .. code-block:: python

            import base64
            import httpx
            from langchain_core.messages import HumanMessage

            image_url = "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
            image_data = base64.b64encode(httpx.get(image_url).content).decode("utf-8")
            # TODO: Replace with appropriate message content format.
            message = HumanMessage(
                content=[
                    {"type": "text", "text": "describe the weather in this image"},
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/jpeg;base64,{image_data}"},
                    },
                ],
            )
            ai_msg = llm.invoke([message])
            ai_msg.content

        .. code-block:: python

            # TODO: Example output.

    # TODO: Delete if audio inputs aren't supported.
    Audio input:
        .. code-block:: python

            # TODO: Example input

        .. code-block:: python

            # TODO: Example output

    # TODO: Delete if video inputs aren't supported.
    Video input:
        .. code-block:: python

            # TODO: Example input

        .. code-block:: python

            # TODO: Example output

    # TODO: Delete if token usage metadata isn't supported.
    Token usage:
        .. code-block:: python

            ai_msg = llm.invoke(messages)
            ai_msg.usage_metadata

        .. code-block:: python

            {'input_tokens': 28, 'output_tokens': 5, 'total_tokens': 33}

    # TODO: Delete if logprobs aren't supported.
    Logprobs:
        .. code-block:: python

            # TODO: Replace with appropriate bind arg.
            logprobs_llm = llm.bind(logprobs=True)
            ai_msg = logprobs_llm.invoke(messages)
            ai_msg.response_metadata["logprobs"]

        .. code-block:: python

            # TODO: Example output.

    Response metadata:
        .. code-block:: python

            ai_msg = llm.invoke(messages)
            ai_msg.response_metadata

        .. code-block:: python

            # TODO: Example output.

    """  # noqa: E501

    # TODO: This method must be implemented to generate chat responses.
    def _generate(
@@ -38,36 +266,36 @@ class Chat__ModuleName__(BaseChatModel):
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> ChatResult:
        raise NotImplementedError
        raise NotImplementedError()
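As a point of reference for template users, a minimal `_generate` body might look like the sketch below. This is an editorial illustration, not part of the diff: the canned `AIMessage` reply stands in for a real provider API call, and only the return shape (a `ChatResult` wrapping a `ChatGeneration`) follows the `langchain_core` contract.

```python
# Hypothetical minimal _generate for the template (illustration only).
from langchain_core.messages import AIMessage
from langchain_core.outputs import ChatGeneration, ChatResult


def _generate(self, messages, stop=None, run_manager=None, **kwargs):
    # A real integration would call the provider's API with `messages` here;
    # this canned reply only demonstrates the required return shape.
    reply = AIMessage(content="Hello from Chat__ModuleName__!")
    return ChatResult(generations=[ChatGeneration(message=reply)])
```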
    # TODO: Implement if Chat__ModuleName__ supports streaming. Otherwise delete method.
    def _stream(
        self,
        messages: List[BaseMessage],
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> Iterator[ChatGenerationChunk]:
        raise NotImplementedError
    # def _stream(
    #     self,
    #     messages: List[BaseMessage],
    #     stop: Optional[List[str]] = None,
    #     run_manager: Optional[CallbackManagerForLLMRun] = None,
    #     **kwargs: Any,
    # ) -> Iterator[ChatGenerationChunk]:

    # TODO: Implement if Chat__ModuleName__ supports async streaming. Otherwise delete
    # method.
    async def _astream(
        self,
        messages: List[BaseMessage],
        stop: Optional[List[str]] = None,
        run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> AsyncIterator[ChatGenerationChunk]:
        raise NotImplementedError
    # TODO: Implement if Chat__ModuleName__ supports async streaming. Otherwise delete.
    # async def _astream(
    #     self,
    #     messages: List[BaseMessage],
    #     stop: Optional[List[str]] = None,
    #     run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
    #     **kwargs: Any,
    # ) -> AsyncIterator[ChatGenerationChunk]:

    # TODO: Implement if Chat__ModuleName__ supports async generation. Otherwise delete
    # method.
    async def _agenerate(
        self,
        messages: List[BaseMessage],
        stop: Optional[List[str]] = None,
        run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> ChatResult:
        raise NotImplementedError
    # TODO: Implement if Chat__ModuleName__ supports async generation. Otherwise delete.
    # async def _agenerate(
    #     self,
    #     messages: List[BaseMessage],
    #     stop: Optional[List[str]] = None,
    #     run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
    #     **kwargs: Any,
    # ) -> ChatResult:

    @property
    def _llm_type(self) -> str:
        """Return type of chat model."""
        return "chat-__package_name_short__"
@@ -1,7 +1,6 @@
"""
Develop integration packages for LangChain.
"""

import re
import shutil
import subprocess
@@ -11,7 +10,7 @@ from typing import Optional
import typer
from typing_extensions import Annotated, TypedDict

from langchain_cli.utils.find_replace import replace_glob
from langchain_cli.utils.find_replace import replace_file, replace_glob

integration_cli = typer.Typer(no_args_is_help=True, add_completion=False)

@@ -21,6 +20,7 @@ Replacements = TypedDict(
        "__package_name__": str,
        "__module_name__": str,
        "__ModuleName__": str,
        "__MODULE_NAME__": str,
        "__package_name_short__": str,
    },
)
@@ -46,7 +46,9 @@ def _process_name(name: str):
            "__package_name__": f"langchain-{preprocessed}",
            "__module_name__": "langchain_" + preprocessed.replace("-", "_"),
            "__ModuleName__": preprocessed.title().replace("-", ""),
            "__MODULE_NAME__": preprocessed.upper().replace("-", ""),
            "__package_name_short__": preprocessed,
            "__package_name_short_snake__": preprocessed.replace("-", "_"),
        }
    )
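For concreteness, given the `--name "foo-bar"` example from the PR description, the replacement rules above would yield roughly the following mapping (assuming `"foo-bar"` survives preprocessing unchanged):

```python
# Expected result of _process_name("foo-bar"), derived from the rules above.
{
    "__package_name__": "langchain-foo-bar",
    "__module_name__": "langchain_foo_bar",
    "__ModuleName__": "FooBar",
    "__MODULE_NAME__": "FOOBAR",
    "__package_name_short__": "foo-bar",
    "__package_name_short_snake__": "foo_bar",
}
```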
@@ -57,7 +59,7 @@ def new(
        str,
        typer.Option(
            help="The name of the integration to create (e.g. `my-integration`)",
            prompt=True,
            prompt="The name of the integration to create (e.g. `my-integration`)",
        ),
    ],
    name_class: Annotated[
@@ -121,3 +123,81 @@ def new(
        ["poetry", "install", "--with", "lint,test,typing,test_integration"],
        cwd=destination_dir,
    )


@integration_cli.command()
def create_doc(
    name: Annotated[
        str,
        typer.Option(
            help=(
                "The kebab-case name of the integration (e.g. `openai`, "
                "`google-vertexai`). Do not include a 'langchain-' prefix."
            ),
            prompt=(
                "The kebab-case name of the integration (e.g. `openai`, "
                "`google-vertexai`). Do not include a 'langchain-' prefix."
            ),
        ),
    ],
    name_class: Annotated[
        Optional[str],
        typer.Option(
            help=(
                "The PascalCase name of the integration (e.g. `OpenAI`, "
                "`VertexAI`). Do not include a 'Chat', 'VectorStore', etc. "
                "prefix/suffix."
            ),
        ),
    ] = None,
    component_type: Annotated[
        str,
        typer.Option(
            help="The type of component. Currently only 'ChatModel' is supported.",
        ),
    ] = "ChatModel",
    destination_dir: Annotated[
        str,
        typer.Option(
            help="The relative path to the docs directory to place the new file in.",
            prompt="The relative path to the docs directory to place the new file in.",
        ),
    ] = "docs/docs/integrations/chat/",
):
    """
    Creates a new integration doc.
    """
    try:
        replacements = _process_name(name)
    except ValueError as e:
        typer.echo(e)
        raise typer.Exit(code=1)

    if name_class:
        if not re.match(r"^[A-Z][a-zA-Z0-9]*$", name_class):
            typer.echo(
                "Name should only contain letters (a-z, A-Z) and numbers, "
                "and start with a capital letter."
            )
            raise typer.Exit(code=1)
        replacements["__ModuleName__"] = name_class
    else:
        replacements["__ModuleName__"] = typer.prompt(
            (
                "The PascalCase name of the integration (e.g. `OpenAI`, `VertexAI`). "
                "Do not include a 'Chat', 'VectorStore', etc. prefix/suffix."
            ),
            default=replacements["__ModuleName__"],
        )
    destination_path = (
        Path.cwd()
        / destination_dir
        / (replacements["__package_name_short_snake__"] + ".ipynb")
    )

    # copy over the template from ../integration_template
    docs_template = Path(__file__).parents[1] / "integration_template/docs/chat.ipynb"
    shutil.copy(docs_template, destination_path)

    # apply placeholder replacements in the copied file
    replace_file(destination_path, replacements)
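`replace_file` itself isn't shown in this diff; assuming it mirrors `replace_glob` but operates on a single path, its behavior is roughly the following sketch (the name comes from the import above, but the signature and body are inferred, not confirmed by the diff):

```python
# Plausible sketch of replace_file: read the copied template, apply every
# placeholder -> value substitution, and write the result back in place.
from pathlib import Path
from typing import Dict


def replace_file(source: Path, replacements: Dict[str, str]) -> None:
    content = source.read_text()
    for placeholder, value in replacements.items():
        content = content.replace(placeholder, value)
    source.write_text(content)
```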
@@ -55,7 +55,7 @@ watch = "poetry run ptw"
version = "poetry version --short"
bump = ["_bump_1", "_bump_2"]
lint = ["_lint", "_check_formatting"]
format = ["_lint_fix", "_format"]
format = ["_format", "_lint_fix"]

_bump_2.shell = """sed -i "" "/^__version__ =/c\\ \n__version__ = \\"$version\\"\n" langchain_cli/cli.py"""
_bump_2.uses = { version = "version" }