Writer partners integration docs (#29961)
**Documentation of Writer provider and additional features**

* [PyPi langchain-writer web-page](https://pypi.org/project/langchain-writer/)
* [GitHub langchain-writer repo](https://github.com/writer/langchain-writer)

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
commit 47e1a384f7 (parent 820a4c068c)
Changed paths: docs/docs/integrations/{chat, document_loaders/parsers, providers, splitters, tools}, libs
@ -1,160 +1,135 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "raw",
|
||||
"id": "85e07aae70a15572",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"---\n",
|
||||
"sidebar_label: Writer\n",
|
||||
"---"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "cb4dd00a-8893-4a45-96f7-9a9fc341cd61",
|
||||
"id": "e815de6298bf07ca",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# ChatWriter\n",
|
||||
"# Chat Writer\n",
|
||||
"\n",
|
||||
"This notebook provides a quick overview for getting started with Writer [chat models](/docs/concepts/chat_models).\n",
"This notebook provides a quick overview for getting started with Writer [chat models](/docs/concepts/chat_models/).\n",
|
||||
"\n",
|
||||
"Writer has several chat models. You can find information about their latest models and their costs, context windows, and supported input types in the [Writer docs](https://dev.writer.com/home).\n",
|
||||
"\n",
|
||||
"Writer has several chat models. You can find information about their latest models and their costs, context windows, and supported input types in the [Writer docs](https://dev.writer.com/home/models).\n",
|
||||
"\n",
|
||||
":::"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "617a6e98205ab7c8",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Overview\n",
|
||||
"\n",
|
||||
"### Integration details\n",
|
||||
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
|
||||
"| :--- | :--- | :---: | :---: |:----------:| :---: | :---: |\n",
|
||||
"| ChatWriter | langchain-community | ❌ | ❌ | ❌ | ❌ | ❌ |\n",
|
||||
"\n",
|
||||
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
|
||||
"|:--------------------------------------------------------------------------------------------------------------------------------------------|:-----------------| :---: | :---: |:----------:|:------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------:|\n",
|
||||
"| ChatWriter | [langchain-writer](https://pypi.org/project/langchain-writer/) | ❌ | ❌ | ❌ |  |  |\n",
|
||||
"### Model features\n",
|
||||
"| [Tool calling](/docs/how_to/tool_calling) | Structured output | JSON mode | Image input | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | Logprobs |\n",
|
||||
"| :---: |:-----------------:| :---: | :---: | :---: | :---: | :---: | :---: |:--------------------------------:|:--------:|\n",
|
||||
"| ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |\n",
|
||||
"\n",
|
||||
"## Setup\n",
|
||||
"\n",
|
||||
"To access Writer models you'll need to create a Writer account, get an API key, and install the `writer-sdk` and `langchain-community` packages.\n",
|
||||
"\n",
|
||||
"| ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "3fd9903e685808d9",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Credentials\n",
|
||||
"\n",
|
||||
"Head to [Writer AI Studio](https://app.writer.com/aistudio/signup?utm_campaign=devrel) to sign up to OpenAI and generate an API key. Once you've done this set the WRITER_API_KEY environment variable:"
|
||||
"Sign up for [Writer AI Studio](https://app.writer.com/aistudio/signup?utm_campaign=devrel) and follow this [Quickstart](https://dev.writer.com/api-guides/quickstart) to obtain an API key. Then, set the WRITER_API_KEY environment variable:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"id": "e817fe2e-4f1d-4533-b19e-2400b1cf6ce8",
|
||||
"metadata": {
|
||||
"ExecuteTime": {
|
||||
"end_time": "2024-11-14T09:46:26.800627Z",
|
||||
"start_time": "2024-11-14T09:27:59.652281Z"
|
||||
}
|
||||
},
|
||||
"execution_count": null,
|
||||
"id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import getpass\n",
|
||||
"import os\n",
|
||||
"\n",
|
||||
"if not os.environ.get(\"WRITER_API_KEY\"):\n",
|
||||
" os.environ[\"WRITER_API_KEY\"] = getpass.getpass(\"Enter your Writer API key:\")"
|
||||
"if not os.getenv(\"WRITER_API_KEY\"):\n",
|
||||
" os.environ[\"WRITER_API_KEY\"] = getpass.getpass(\"Enter your Writer API key: \")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "c59722a9-6dbb-45f7-ae59-5be50ca5733d",
|
||||
"id": "72ee0c4b-9764-423a-9dbf-95129e185210",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"If you want to get automated tracing of your model calls, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
|
||||
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Installation\n",
|
||||
"\n",
|
||||
"The LangChain Writer integration lives in the `langchain-community` package:"
|
||||
"`ChatWriter` is available from the `langchain-writer` package. Install it with:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"id": "2113471c-75d7-45df-b784-d78da4ef7aba",
|
||||
"metadata": {
|
||||
"ExecuteTime": {
|
||||
"end_time": "2024-11-14T09:46:32.415354Z",
|
||||
"start_time": "2024-11-14T09:46:26.826112Z"
|
||||
}
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"\r\n",
|
||||
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m24.2\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m24.3.1\u001b[0m\r\n",
|
||||
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\r\n",
|
||||
"Note: you may need to restart the kernel to use updated packages.\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"execution_count": null,
|
||||
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%pip install -qU langchain-community writer-sdk"
|
||||
"%pip install -qU langchain-writer"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "1098bc9d-ce83-462b-8c19-f85bf3a159dc",
|
||||
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Instantiation\n",
|
||||
"### Instantiation\n",
|
||||
"\n",
|
||||
"Now we can instantiate our model object and generate chat completions:"
|
||||
"Now we can instantiate our model object in order to generate chat completions:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"id": "522686de",
|
||||
"metadata": {
|
||||
"ExecuteTime": {
|
||||
"end_time": "2024-11-14T09:46:33.504711Z",
|
||||
"start_time": "2024-11-14T09:46:32.574505Z"
|
||||
},
|
||||
"tags": []
|
||||
},
|
||||
"execution_count": null,
|
||||
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_community.chat_models.writer import ChatWriter\n",
|
||||
"from langchain_writer import ChatWriter\n",
|
||||
"\n",
|
||||
"llm = ChatWriter(\n",
|
||||
" model=\"palmyra-x-004\",\n",
|
||||
" temperature=0.7,\n",
|
||||
" max_tokens=1000,\n",
|
||||
" # other params...\n",
|
||||
" temperature=0,\n",
|
||||
" max_tokens=None,\n",
|
||||
" timeout=None,\n",
|
||||
" max_retries=2,\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "6511982a-734a-4193-a47d-254f8dcaff5e",
|
||||
"id": "2b4f3e15",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Invocation"
|
||||
"## Usage\n",
|
||||
"\n",
|
||||
"To use the model, you pass in a list of messages and call the `invoke` method:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"id": "ce16ad78-8e6f-48cd-954e-98be75eb5836",
|
||||
"execution_count": null,
|
||||
"id": "62e0dbc3",
|
||||
"metadata": {
|
||||
"ExecuteTime": {
|
||||
"end_time": "2024-11-14T09:46:38.856174Z",
|
||||
"start_time": "2024-11-14T09:46:33.520062Z"
|
||||
},
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
@ -162,129 +137,74 @@
|
||||
"messages = [\n",
|
||||
" (\n",
|
||||
" \"system\",\n",
|
||||
" \"You are a helpful assistant that writes poems about the Python programming language.\",\n",
|
||||
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
|
||||
" ),\n",
|
||||
" (\"human\", \"Write a poem about Python.\"),\n",
|
||||
" (\"human\", \"I love programming.\"),\n",
|
||||
"]\n",
|
||||
"ai_msg = llm.invoke(messages)"
|
||||
"ai_msg = llm.invoke(messages)\n",
|
||||
"ai_msg"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "5cf7293d",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Then, you can access the content of the message:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"id": "2cd224b8-4499-41fb-a604-d53a7ff17b2e",
|
||||
"metadata": {
|
||||
"ExecuteTime": {
|
||||
"end_time": "2024-11-14T09:46:38.866651Z",
|
||||
"start_time": "2024-11-14T09:46:38.863817Z"
|
||||
}
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"In realms of code, where logic weaves and flows,\n",
|
||||
"A language rises, Python by its name,\n",
|
||||
"With syntax clear, where elegance it shows,\n",
|
||||
"A serpent, wise, that time and space can tame.\n",
|
||||
"\n",
|
||||
"Born from the mind of Guido, pure and bright,\n",
|
||||
"Its beauty lies in simplicity and grace,\n",
|
||||
"A tool of power, yet gentle in its might,\n",
|
||||
"In every programmer's heart, a cherished place.\n",
|
||||
"\n",
|
||||
"It dances through the data, vast and deep,\n",
|
||||
"With libraries that span the digital realm,\n",
|
||||
"From machine learning's secrets to keep,\n",
|
||||
"To web development, it wields the helm.\n",
|
||||
"\n",
|
||||
"In the hands of the novice and the sage,\n",
|
||||
"Python spins the threads of digital dreams,\n",
|
||||
"A language that can turn the age,\n",
|
||||
"With a gentle learning curve, its appeal gleams.\n",
|
||||
"\n",
|
||||
"It's more than code, a community it builds,\n",
|
||||
"Where knowledge freely flows, and all are heard,\n",
|
||||
"In Python's world, the future unfolds,\n",
|
||||
"A language of the people, for the world.\n",
|
||||
"\n",
|
||||
"So here's to Python, in its gentle might,\n",
|
||||
"A master of the modern coding art,\n",
|
||||
"May it continue to light our path each night,\n",
|
||||
"In the vast, evolving world of code, its heart.\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"execution_count": null,
|
||||
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"print(ai_msg.content)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "35b3a5b3dabef65",
|
||||
"id": "4391289ce0a80e19",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Streaming"
|
||||
"## Streaming\n",
|
||||
"\n",
|
||||
"You can also stream the response. First, create a stream:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"id": "2725770182bf96dc",
|
||||
"metadata": {
|
||||
"ExecuteTime": {
|
||||
"end_time": "2024-11-14T09:46:38.914883Z",
|
||||
"start_time": "2024-11-14T09:46:38.912564Z"
|
||||
}
|
||||
},
|
||||
"execution_count": null,
|
||||
"id": "4a0f2112b3a4c79e",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"ai_stream = llm.stream(messages)"
|
||||
"messages = [\n",
|
||||
" (\n",
|
||||
" \"system\",\n",
|
||||
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
|
||||
" ),\n",
|
||||
" (\"human\", \"I love programming. Sing a song about it\"),\n",
|
||||
"]\n",
|
||||
"ai_stream = llm.stream(messages)\n",
|
||||
"ai_stream"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "23cc74b6",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Then, iterate over the stream to get the chunks:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"id": "a48410d9488162e3",
|
||||
"metadata": {
|
||||
"ExecuteTime": {
|
||||
"end_time": "2024-11-14T09:46:43.226449Z",
|
||||
"start_time": "2024-11-14T09:46:38.955512Z"
|
||||
}
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"In realms of code where logic weaves,\n",
|
||||
"A language rises, Python, it breezes,\n",
|
||||
"With syntax clear and simple to read,\n",
|
||||
"Through its elegance, our spirits are fed.\n",
|
||||
"\n",
|
||||
"Like rivers flowing, smooth and serene,\n",
|
||||
"Its structure harmonious, a coder's dream,\n",
|
||||
"Indentations guide the flow of control,\n",
|
||||
"In Python's world, confusion takes no toll.\n",
|
||||
"\n",
|
||||
"A vast library, a treasure trove so bright,\n",
|
||||
"For web and data, it offers its might,\n",
|
||||
"With modules and packages, a rich array,\n",
|
||||
"Python empowers us to code in play.\n",
|
||||
"\n",
|
||||
"From AI to scripts, in flexibility it thrives,\n",
|
||||
"A language of the future, as many now derive,\n",
|
||||
"Its community, a beacon of support and cheer,\n",
|
||||
"With Python, the possibilities are vast, far and near.\n",
|
||||
"\n",
|
||||
"So here's to Python, in its gentle grace,\n",
|
||||
"A tool that enhances, a language that embraces,\n",
|
||||
"The art of coding, with a fluent, flowing pen,\n",
|
||||
"In the Python world, we code, and we begin."
|
||||
]
|
||||
}
|
||||
],
|
||||
"execution_count": null,
|
||||
"id": "8c4b7b9b9308c757",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"for chunk in ai_stream:\n",
|
||||
" print(chunk.content, end=\"\")"
|
||||
@ -292,71 +212,16 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "778f912a-66ea-4a5d-b3de-6c7db4baba26",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Chaining\n",
|
||||
"\n",
|
||||
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"id": "fbb043e6",
|
||||
"metadata": {
|
||||
"ExecuteTime": {
|
||||
"end_time": "2024-11-14T09:46:50.721645Z",
|
||||
"start_time": "2024-11-14T09:46:43.234590Z"
|
||||
},
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"AIMessageChunk(content='In the realm of code, where logic weaves and flows, \\nA language rises, like a phoenix from the code\\'s throes. \\nJava, the name, a cup of coffee\\'s steam, \\nBrewed in the minds, where digital dreams gleam.\\n\\nWith syntax clear, like morning\\'s misty hue, \\nIn classes and objects, it spins a tale so true. \\nA platform agnostic, with a byte to spare, \\nAcross the devices, it journeys everywhere.\\n\\nInheritance and polymorphism, its power\\'s core, \\nLike ancient runes, in every line they bore. \\nEncapsulation, a shield, with data it does hide, \\nIn the vast jungle of code, it stands as a guide.\\n\\nFrom applets small, to vast, server-side apps, \\nIts threads run swift, through the computing traps. \\nA language of the people, by the people, for the people’s use, \\nBuilt on the principle, \"write once, run anywhere, with no excuse.\"\\n\\nIn the heart of Android, it beats, a steady drum, \\nCrafting experiences, in every smartphone\\'s hum. \\nIn the cloud, in the enterprise, its presence is vast, \\nA cornerstone of computing, built to last.\\n\\nOh Java, thy elegance, thy robust design, \\nA language that stands, in any computing line. \\nWith every update, with every new release, \\nThy community grows, with a vibrant, diverse peace.\\n\\nSo here\\'s to Java, the versatile, the grand, \\nA language that shapes the digital land. \\nMay it continue to evolve, to grow, to inspire, \\nIn the endless quest of turning thoughts into digital fire.', additional_kwargs={}, response_metadata={'token_usage': {'completion_tokens': 345, 'prompt_tokens': 33, 'total_tokens': 378, 'completion_tokens_details': None, 'prompt_token_details': None}, 'model_name': 'palmyra-x-004', 'system_fingerprint': 'v1', 'finish_reason': 'stop'}, id='run-a5b4be59-0eb0-41bd-80f7-72477861b0bd-0')"
|
||||
]
|
||||
},
|
||||
"execution_count": 8,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from langchain_core.prompts import ChatPromptTemplate\n",
|
||||
"\n",
|
||||
"prompt = ChatPromptTemplate.from_messages(\n",
|
||||
" [\n",
|
||||
" (\n",
|
||||
" \"system\",\n",
|
||||
" \"You are a helpful assistant that writes poems about the {input_language} programming language.\",\n",
|
||||
" ),\n",
|
||||
" (\"human\", \"{input}\"),\n",
|
||||
" ]\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"chain = prompt | llm\n",
|
||||
"chain.invoke(\n",
|
||||
" {\n",
|
||||
" \"input_language\": \"Java\",\n",
|
||||
" \"input\": \"Write a poem about Java.\",\n",
|
||||
" }\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "0b1b52a5-b58d-40c9-bcdd-88eb8fb351e2",
|
||||
"id": "e632bf7d0873f933",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Tool calling\n",
|
||||
"\n",
|
||||
"Writer supports [tool calling](https://dev.writer.com/api-guides/tool-calling), which lets you describe tools and their arguments, and have the model return a JSON object with a tool to invoke and the inputs to that tool.\n",
|
||||
"Writer models like Palmyra X 004 support [tool calling](https://dev.writer.com/api-guides/tool-calling), which lets you describe tools and their arguments. The model will return a JSON object with a tool to invoke and the inputs to that tool.\n",
|
||||
"\n",
|
||||
"### ChatWriter.bind_tools()\n",
|
||||
"### Binding tools\n",
|
||||
"\n",
|
||||
"With `ChatWriter.bind_tools`, we can easily pass in Pydantic classes, dict schemas, LangChain tools, or even functions as tools to the model. Under the hood these are converted to tool schemas, which looks like:\n",
|
||||
"With `ChatWriter.bind_tools`, you can easily pass in Pydantic classes, dictionary schemas, LangChain tools, or even functions as tools to the model. Under the hood, these are converted to tool schemas, which look like this:\n",
|
||||
"```\n",
|
||||
"{\n",
|
||||
" \"name\": \"...\",\n",
|
||||
@ -364,19 +229,16 @@
|
||||
" \"parameters\": {...} # JSONSchema\n",
|
||||
"}\n",
|
||||
"```\n",
|
||||
"and passed in every model invocation."
|
||||
"These are passed in every model invocation.\n",
|
||||
"\n",
|
||||
"For example, to use a tool that gets the weather in a given location, you can define a Pydantic class and pass it to `ChatWriter.bind_tools`:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"id": "b7ea7690-ec7a-4337-b392-e87d1f39a6ec",
|
||||
"metadata": {
|
||||
"ExecuteTime": {
|
||||
"end_time": "2024-11-14T09:46:50.891937Z",
|
||||
"start_time": "2024-11-14T09:46:50.733463Z"
|
||||
}
|
||||
},
|
||||
"execution_count": null,
|
||||
"id": "47e2f0faceca533",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from pydantic import BaseModel, Field\n",
|
||||
@ -388,86 +250,173 @@
|
||||
" location: str = Field(..., description=\"The city and state, e.g. San Francisco, CA\")\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"llm_with_tools = llm.bind_tools([GetWeather])"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 10,
|
||||
"id": "1d1ab955-6a68-42f8-bb5d-86eb1111478a",
|
||||
"metadata": {
|
||||
"ExecuteTime": {
|
||||
"end_time": "2024-11-14T09:46:51.725422Z",
|
||||
"start_time": "2024-11-14T09:46:50.904699Z"
|
||||
}
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"ai_msg = llm_with_tools.invoke(\n",
|
||||
" \"what is the weather like in New York City\",\n",
|
||||
")"
|
||||
"llm.bind_tools([GetWeather])"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "768d1ae4-4b1a-48eb-a329-c8d5051067a3",
|
||||
"id": "68e22d3b",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### AIMessage.tool_calls\n",
"Notice that the AIMessage has a `tool_calls` attribute. This contains tool calls in a standardized ToolCall format that is model-provider agnostic."
|
||||
"Then, you can invoke the model with the tool:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 11,
|
||||
"id": "166cb7ce-831d-4a7c-9721-abc107f11084",
|
||||
"metadata": {
|
||||
"ExecuteTime": {
|
||||
"end_time": "2024-11-14T09:46:51.744202Z",
|
||||
"start_time": "2024-11-14T09:46:51.738431Z"
|
||||
}
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"[{'name': 'GetWeather',\n",
|
||||
" 'args': {'location': 'New York City, NY'},\n",
|
||||
" 'id': 'chatcmpl-tool-fe70912c800d40fc8700d604d4823001',\n",
|
||||
" 'type': 'tool_call'}]"
|
||||
]
|
||||
},
|
||||
"execution_count": 11,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"execution_count": null,
|
||||
"id": "765527dd533ec967",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"ai_msg = llm.invoke(\n",
|
||||
" \"what is the weather like in New York City\",\n",
|
||||
")\n",
|
||||
"ai_msg"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "57544bdf",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Finally, you can access the tool calls and proceed to execute your functions:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "f361c4769e772fe",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"print(ai_msg.tool_calls)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "e082c9ac-c7c7-4aff-a8ec-8e220262a59c",
|
||||
"id": "3baf53021834d2ff",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"For more on binding tools and tool call outputs, head to the [tool calling](/docs/how_to/function_calling) docs."
|
||||
"### A note on tool binding\n",
|
||||
"\n",
"The `ChatWriter.bind_tools()` method does not create a new instance with bound tools; instead, it stores the received `tools` and `tool_choice` as attributes of the original instance and passes them as parameters on each Palmyra LLM call made through `ChatWriter`. This approach allows support for different tool types, e.g. `function` and `graph`. `Graph` is one of the remotely called Writer Palmyra tools. For further information, visit our [docs](https://dev.writer.com/api-guides/knowledge-graph#knowledge-graph).\n",
|
||||
"\n",
|
||||
"For more information about tool usage in LangChain, visit the [LangChain tool calling documentation](https://python.langchain.com/docs/concepts/tool_calling/)."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "a796d728-971b-408b-88d5-440015bbb941",
|
||||
"id": "a4674b1b82ce9d1f",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Batching\n",
|
||||
"\n",
|
||||
"You can also batch requests and set the `max_concurrency`:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "c8a217f6190747fe",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"ai_batch = llm.batch(\n",
|
||||
" [\n",
|
||||
" \"How to cook pancakes?\",\n",
" \"How to compose a poem?\",\n",
|
||||
" \"How to run faster?\",\n",
|
||||
" ],\n",
|
||||
" config={\"max_concurrency\": 3},\n",
|
||||
")\n",
|
||||
"ai_batch"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "2eb81e1d",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Then, iterate over the batch to get the results:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "b6a228d448f3df23",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"for batch in ai_batch:\n",
|
||||
" print(batch.content)\n",
|
||||
" print(\"-\" * 100)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "58a9ab241fe09a71",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Asynchronous usage\n",
|
||||
"\n",
"All features above (invocation, streaming, batching, tool calling) also support asynchronous usage, as sketched below."
|
||||
]
|
||||
},
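{
"cell_type": "markdown",
"id": "async-usage-sketch",
"metadata": {},
"source": [
"A minimal sketch of asynchronous usage, reusing the `messages` defined above; it assumes the standard LangChain async methods (`ainvoke`, `astream`) exposed by chat models:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "async-usage-sketch-code",
"metadata": {},
"outputs": [],
"source": [
"# Async invocation (assumes the standard LangChain async API).\n",
"ai_msg = await llm.ainvoke(messages)\n",
"print(ai_msg.content)\n",
"\n",
"# Async streaming of the same request.\n",
"async for chunk in llm.astream(messages):\n",
"    print(chunk.content, end=\"\")"
]
},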
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Prompt templates\n",
|
||||
"\n",
"[Prompt templates](https://python.langchain.com/docs/concepts/prompt_templates/) help translate user input and parameters into instructions for a language model. You can use `ChatWriter` with a prompt template like so:\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_core.prompts import ChatPromptTemplate\n",
|
||||
"\n",
|
||||
"prompt = ChatPromptTemplate(\n",
|
||||
" [\n",
|
||||
" (\n",
|
||||
" \"system\",\n",
|
||||
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
|
||||
" ),\n",
|
||||
" (\"human\", \"{input}\"),\n",
|
||||
" ]\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"chain = prompt | llm\n",
|
||||
"chain.invoke(\n",
|
||||
" {\n",
|
||||
" \"input_language\": \"English\",\n",
|
||||
" \"output_language\": \"German\",\n",
|
||||
" \"input\": \"I love programming.\",\n",
|
||||
" }\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## API reference\n",
|
||||
"For detailed documentation of all ChatWriter features and configurations head to the [API reference](https://python.langchain.com/api_reference/writer/chat_models/langchain_writer.chat_models.ChatWriter.html#langchain_writer.chat_models.ChatWriter).\n",
|
||||
"\n",
|
||||
"For detailed documentation of all Writer features, head to our [API reference](https://dev.writer.com/api-guides/api-reference/completion-api/chat-completion)."
|
||||
"## Additional resources\n",
|
||||
"You can find information about Writer's models (including costs, context windows, and supported input types) and tools in the [Writer docs](https://dev.writer.com/home)."
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": ".venv",
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
@ -481,7 +430,7 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.11.4"
|
||||
"version": "3.11.9"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
|
@ -0,0 +1,196 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "db23d51760310705",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Writer PDF Parser\n",
|
||||
"\n",
|
||||
"This notebook provides a quick overview for getting started with the Writer `PDFParser` [document loader](/docs/concepts/document_loaders/).\n",
|
||||
"\n",
"Writer's [PDF Parser](https://dev.writer.com/api-guides/api-reference/tool-api/pdf-parser#parse-pdf) converts PDF documents into other formats like text or Markdown. This is particularly useful when you need to extract and process text content from PDF files for further analysis or integration into your workflow. In `langchain-writer`, we expose Writer's PDF Parser as a LangChain document parser.\n",
|
||||
"\n",
|
||||
"## Overview\n",
|
||||
"\n",
|
||||
"### Integration details\n",
|
||||
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
|
||||
"|:------------------------------------------------------------------------------------------------------------------------------------------|:-----------------| :---: | :---: |:----------:|:------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------:|\n",
|
||||
"| PDFParser | [langchain-writer](https://pypi.org/project/langchain-writer/) | ❌ | ❌ | ❌ |  |  |"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "c5f08d23df5dc127",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Setup\n",
|
||||
"\n",
|
||||
"The `PDFParser` is available in the `langchain-writer` package:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "a8d653f15b7ee32d",
|
||||
"metadata": {
|
||||
"jupyter": {
|
||||
"is_executing": true
|
||||
}
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%pip install --quiet -U langchain-writer"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "3b9709c26797edf",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Credentials\n",
|
||||
"\n",
|
||||
"Sign up for [Writer AI Studio](https://app.writer.com/aistudio/signup?utm_campaign=devrel) to generate an API key (you can follow this [Quickstart](https://dev.writer.com/api-guides/quickstart)). Then, set the WRITER_API_KEY environment variable:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "2983e19c9d555e58",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import getpass\n",
|
||||
"import os\n",
|
||||
"\n",
|
||||
"if not os.getenv(\"WRITER_API_KEY\"):\n",
|
||||
" os.environ[\"WRITER_API_KEY\"] = getpass.getpass(\"Enter your Writer API key: \")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "92a22c77f03d43dc",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"It's also helpful (but not needed) to set up [LangSmith](https://smith.langchain.com/) for best-in-class observability. If you wish to do so, you can set the `LANGCHAIN_TRACING_V2` and `LANGCHAIN_API_KEY` environment variables:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "98d8422ecee77403",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
|
||||
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "67ab78950a3da8ba",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Instantiation\n",
|
||||
"\n",
|
"Next, create an instance of the Writer PDF Parser with the desired output format:"
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "787b3ba8af32533f",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_writer.pdf_parser import PDFParser\n",
|
||||
"\n",
|
||||
"parser = PDFParser(format=\"markdown\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "d91c6f752fd31cee",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Usage\n",
|
||||
"\n",
|
||||
"There are two ways to use the PDF Parser, either synchronously or asynchronously. In either case, the PDF Parser will return a list of `Document` objects, each containing the parsed content of a page from the PDF file.\n",
|
||||
"\n",
|
||||
"### Synchronous usage\n",
|
||||
"\n",
|
||||
"To invoke the PDF Parser synchronously, pass a `Blob` object to the `parse` method referencing the PDF file you want to parse:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "d1a24b81a8a96f09",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_core.documents.base import Blob\n",
|
||||
"\n",
|
||||
"file = Blob.from_path(\"../../data/page_to_parse.pdf\")\n",
|
||||
"\n",
|
||||
"parsed_pages = parser.parse(blob=file)\n",
|
||||
"parsed_pages"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "f89c048c7d23807a",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Asynchronous usage\n",
|
||||
"\n",
|
||||
"To invoke the PDF Parser asynchronously, pass a `Blob` object to the `aparse` method referencing the PDF file you want to parse:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "e2f7fd52b7188c6c",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"parsed_pages_async = await parser.aparse(blob=file)\n",
|
||||
"parsed_pages_async"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "ab25a3bed8437a05",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## API reference\n",
|
||||
"\n",
|
||||
"For detailed documentation of all `PDFParser` features and configurations, head to the [API reference](https://python.langchain.com/api_reference/writer/pdf_parser/langchain_writer.pdf_parser.PDFParser.html#langchain_writer.pdf_parser.PDFParser).\n",
|
||||
"\n",
|
||||
"## Additional resources\n",
|
||||
"You can find information about Writer's models (including costs, context windows, and supported input types) and tools in the [Writer docs](https://dev.writer.com/home).\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 2
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython2",
|
||||
"version": "2.7.6"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
@ -1,16 +1,53 @@
|
||||
# Writer
|
||||
---
|
||||
keywords: [writer]
|
||||
---
|
||||
|
||||
This page covers how to use the Writer ecosystem within LangChain.
|
||||
It is broken into two parts: installation and setup, and then references to specific Writer wrappers.
|
||||
# Writer, Inc.
|
||||
|
||||
All functionality related to Writer
|
||||
|
||||
|
||||
>This page covers how to use the [Writer](https://writer.com/) ecosystem within LangChain. For further information see Writer [docs](https://dev.writer.com/home/introduction).
|
||||
>[Palmyra](https://writer.com/blog/palmyra/) is a Large Language Model (LLM) developed by `Writer, Inc`.
|
||||
>
|
||||
>The [Writer API](https://dev.writer.com/api-guides/introduction) is powered by a diverse set of Palmyra sub-models with different capabilities and price points.
|
||||
|
||||
## Installation and Setup
|
||||
- Get a Writer API key and set it as an environment variable (`WRITER_API_KEY`)
|
||||
|
||||
## Wrappers
|
||||
Install the integration package with
|
||||
```bash
|
||||
pip install langchain-writer
|
||||
```
|
||||
|
||||
### LLM
|
||||
Get a Writer API key and set it as an environment variable (`WRITER_API_KEY`)
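For example, a minimal sketch of setting the key from Python, using the same `getpass` pattern as the notebooks in this PR:

```python
import getpass
import os

# Prompt for the key only if it is not already set in the environment.
if not os.getenv("WRITER_API_KEY"):
    os.environ["WRITER_API_KEY"] = getpass.getpass("Enter your Writer API key: ")
```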
|
||||
|
||||
## Chat model
|
||||
|
||||
There exists a Writer LLM wrapper, which you can access with
|
||||
```python
|
||||
from langchain_community.llms import Writer
|
||||
```
|
||||
```python
from langchain_writer import ChatWriter
|
||||
```
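A minimal usage sketch, mirroring the ChatWriter notebook above (assumes `WRITER_API_KEY` is set; the prompt is just an illustrative example):

```python
from langchain_writer import ChatWriter

# Instantiate the chat model and run a single completion.
llm = ChatWriter(model="palmyra-x-004")
response = llm.invoke("Write a haiku about programming.")
print(response.content)
```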
|
||||
|
||||
## PDF Parser
|
||||
|
||||
|
||||
```python
|
||||
from langchain_writer.pdf_parser import PDFParser
|
||||
```
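A minimal usage sketch, following the PDF Parser notebook in this PR (the file path is a hypothetical placeholder):

```python
from langchain_core.documents.base import Blob
from langchain_writer.pdf_parser import PDFParser

# Parse a PDF into a list of Document objects rendered as Markdown.
parser = PDFParser(format="markdown")
file = Blob.from_path("path/to/file.pdf")  # hypothetical path
parsed_pages = parser.parse(blob=file)
```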
|
||||
|
||||
## Text splitter
|
||||
|
||||
```python
|
||||
from langchain_writer.text_splitter import WriterTextSplitter
|
||||
```
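A minimal usage sketch, following the text splitter notebook in this PR (`long_text` stands in for your own document, up to roughly 4000 words):

```python
from langchain_writer.text_splitter import WriterTextSplitter

# Split a long document into context-aware chunks.
splitter = WriterTextSplitter(strategy="fast_split")
long_text = "..."  # replace with your document text
chunks = splitter.split_text(long_text)
print(len(chunks))
```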
|
||||
|
||||
## Tool calling
|
||||
|
||||
### Functions
|
||||
|
||||
Basic function calls can be defined via dicts, Pydantic models, Python functions, etc., as in the sketch below.
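A minimal sketch of binding a function tool, reusing the Pydantic example from the ChatWriter notebook in this PR:

```python
from langchain_writer import ChatWriter
from pydantic import BaseModel, Field


class GetWeather(BaseModel):
    """Get the current weather in a given location"""

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")


# Dicts, @tool-decorated functions, and plain Python callables can be bound the same way.
chat = ChatWriter()
chat.bind_tools([GetWeather])
```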
|
||||
|
||||
### Graphs
|
||||
|
||||
```python
|
||||
from langchain_writer.tools import GraphTool
|
||||
```
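A minimal sketch of binding a Knowledge Graph tool, following the tools notebook in this PR (the graph ID is a hypothetical placeholder):

```python
from langchain_writer import ChatWriter
from langchain_writer.tools import GraphTool

# Bind a remotely executed Knowledge Graph tool to the chat model.
graph_tool = GraphTool(graph_ids=["your-graph-id"])  # hypothetical graph ID
chat = ChatWriter()
chat.bind_tools([graph_tool])
```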
|
||||
|
docs/docs/integrations/splitters/writer_text_splitter.ipynb (new file, 296 lines)
@ -0,0 +1,296 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "db23d51760310705",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Writer Text Splitter\n",
|
||||
"\n",
|
||||
"This notebook provides a quick overview for getting started with Writer's [text splitter](/docs/concepts/text_splitters/).\n",
|
||||
"\n",
"Writer's [context-aware splitting endpoint](https://dev.writer.com/api-guides/tools#context-aware-text-splitting) provides intelligent text splitting capabilities for long documents (up to 4000 words). Unlike simple character-based splitting, it preserves the semantic meaning and context between chunks, making it ideal for processing long-form content while maintaining coherence. In `langchain-writer`, we expose Writer's context-aware splitting endpoint as a LangChain text splitter.\n",
|
||||
"\n",
|
||||
"## Overview\n",
|
||||
"\n",
|
||||
"### Integration details\n",
|
||||
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
|
||||
"|:-----------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------| :---: | :---: |:----------:|:------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------:|\n",
|
||||
"| WriterTextSplitter | [langchain-writer](https://pypi.org/project/langchain-writer/) | ❌ | ❌ | ❌ |  |  |"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "c5f08d23df5dc127",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Setup\n",
|
||||
"\n",
|
||||
"The `WriterTextSplitter` is available in the `langchain-writer` package:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "a8d653f15b7ee32d",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": "%pip install --quiet -U langchain-writer"
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "3b9709c26797edf",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Credentials\n",
|
||||
"\n",
|
||||
"Sign up for [Writer AI Studio](https://app.writer.com/aistudio/signup?utm_campaign=devrel) to generate an API key (you can follow this [Quickstart](https://dev.writer.com/api-guides/quickstart)). Then, set the WRITER_API_KEY environment variable:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "2983e19c9d555e58",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import getpass\n",
|
||||
"import os\n",
|
||||
"\n",
|
||||
"if not os.getenv(\"WRITER_API_KEY\"):\n",
|
||||
" os.environ[\"WRITER_API_KEY\"] = getpass.getpass(\"Enter your Writer API key: \")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "92a22c77f03d43dc",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"It's also helpful (but not needed) to set up [LangSmith](https://smith.langchain.com/) for best-in-class observability. If you wish to do so, you can set the `LANGCHAIN_TRACING_V2` and `LANGCHAIN_API_KEY` environment variables:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "98d8422ecee77403",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
|
||||
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "67ab78950a3da8ba",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Instantiation\n",
|
||||
"\n",
"Create an instance of `WriterTextSplitter` with the `strategy` parameter set to one of the following:\n",
|
||||
"\n",
"- `llm_split`: Uses a language model for precise semantic splitting\n",
"- `fast_split`: Uses a heuristic-based approach for quick splitting\n",
|
||||
"- `hybrid_split`: Combines both approaches\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "787b3ba8af32533f",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_writer.text_splitter import WriterTextSplitter\n",
|
||||
"\n",
|
||||
"splitter = WriterTextSplitter(strategy=\"fast_split\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "d91c6f752fd31cee",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Usage\n",
|
||||
"The `WriterTextSplitter` can be used synchronously or asynchronously.\n",
|
||||
"\n",
|
||||
"### Synchronous usage\n",
|
||||
"To use the `WriterTextSplitter` synchronously, call the `split_text` method with the text you want to split:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "d1a24b81a8a96f09",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"text = \"\"\"Reeeeeeeeeeeeeeeeeeeeeaally long text you want to divide into smaller chunks. For example you can add a poem multiple times:\n",
|
||||
"Two roads diverged in a yellow wood,\n",
|
||||
"And sorry I could not travel both\n",
|
||||
"And be one traveler, long I stood\n",
|
||||
"And looked down one as far as I could\n",
|
||||
"To where it bent in the undergrowth;\n",
|
||||
"\n",
|
||||
"Then took the other, as just as fair,\n",
|
||||
"And having perhaps the better claim,\n",
|
||||
"Because it was grassy and wanted wear;\n",
|
||||
"Though as for that the passing there\n",
|
||||
"Had worn them really about the same,\n",
|
||||
"\n",
|
||||
"And both that morning equally lay\n",
|
||||
"In leaves no step had trodden black.\n",
|
||||
"Oh, I kept the first for another day!\n",
|
||||
"Yet knowing how way leads on to way,\n",
|
||||
"I doubted if I should ever come back.\n",
|
||||
"\n",
|
||||
"I shall be telling this with a sigh\n",
|
||||
"Somewhere ages and ages hence:\n",
|
||||
"Two roads diverged in a wood, and I—\n",
|
||||
"I took the one less traveled by,\n",
|
||||
"And that has made all the difference.\n",
|
||||
"\n",
|
||||
"Two roads diverged in a yellow wood,\n",
|
||||
"And sorry I could not travel both\n",
|
||||
"And be one traveler, long I stood\n",
|
||||
"And looked down one as far as I could\n",
|
||||
"To where it bent in the undergrowth;\n",
|
||||
"\n",
|
||||
"Then took the other, as just as fair,\n",
|
||||
"And having perhaps the better claim,\n",
|
||||
"Because it was grassy and wanted wear;\n",
|
||||
"Though as for that the passing there\n",
|
||||
"Had worn them really about the same,\n",
|
||||
"\n",
|
||||
"And both that morning equally lay\n",
|
||||
"In leaves no step had trodden black.\n",
|
||||
"Oh, I kept the first for another day!\n",
|
||||
"Yet knowing how way leads on to way,\n",
|
||||
"I doubted if I should ever come back.\n",
|
||||
"\n",
|
||||
"I shall be telling this with a sigh\n",
|
||||
"Somewhere ages and ages hence:\n",
|
||||
"Two roads diverged in a wood, and I—\n",
|
||||
"I took the one less traveled by,\n",
|
||||
"And that has made all the difference.\n",
|
||||
"\n",
|
||||
"Two roads diverged in a yellow wood,\n",
|
||||
"And sorry I could not travel both\n",
|
||||
"And be one traveler, long I stood\n",
|
||||
"And looked down one as far as I could\n",
|
||||
"To where it bent in the undergrowth;\n",
|
||||
"\n",
|
||||
"Then took the other, as just as fair,\n",
|
||||
"And having perhaps the better claim,\n",
|
||||
"Because it was grassy and wanted wear;\n",
|
||||
"Though as for that the passing there\n",
|
||||
"Had worn them really about the same,\n",
|
||||
"\n",
|
||||
"And both that morning equally lay\n",
|
||||
"In leaves no step had trodden black.\n",
|
||||
"Oh, I kept the first for another day!\n",
|
||||
"Yet knowing how way leads on to way,\n",
|
||||
"I doubted if I should ever come back.\n",
|
||||
"\n",
|
||||
"I shall be telling this with a sigh\n",
|
||||
"Somewhere ages and ages hence:\n",
|
||||
"Two roads diverged in a wood, and I—\n",
|
||||
"I took the one less traveled by,\n",
|
||||
"And that has made all the difference.\n",
|
||||
"\"\"\"\n",
|
||||
"\n",
|
||||
"chunks = splitter.split_text(text)\n",
|
||||
"chunks"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "e6d09fcf",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"You can print the length of the chunks to see how many chunks were created:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "a470daa875d99006",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"print(len(chunks))"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "f89c048c7d23807a",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Asynchronous usage\n",
|
||||
"To use the `WriterTextSplitter` asynchronously, call the `asplit_text` method with the text you want to split:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "e2f7fd52b7188c6c",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"async_chunks = await splitter.asplit_text(text)\n",
|
||||
"async_chunks"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "cb669ce7",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Print the length of the chunks to see how many chunks were created:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "a1439db14e687fa4",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"print(len(async_chunks))"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "ab25a3bed8437a05",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## API reference\n",
|
||||
"For detailed documentation of all `WriterTextSplitter` features and configurations head to the [API reference](https://python.langchain.com/api_reference/writer/text_splitter/langchain_writer.text_splitter.WriterTextSplitter.html#langchain_writer.text_splitter.WriterTextSplitter).\n",
|
||||
"\n",
|
||||
"## Additional resources\n",
|
||||
"You can find information about Writer's models (including costs, context windows, and supported input types) and tools in the [Writer docs](https://dev.writer.com/home)."
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 2
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython2",
|
||||
"version": "2.7.6"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
docs/docs/integrations/tools/writer.ipynb (new file, 366 lines)
@ -0,0 +1,366 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "a6f91f20",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Writer Tools\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"This notebook provides a quick overview for getting started with Writer [tools](https://python.langchain.com/docs/concepts/tools/). For detailed documentation of all Writer features and configurations head to the [Writer docs](https://dev.writer.com/home).\n",
|
||||
"\n",
|
||||
"## Overview\n",
|
||||
"\n",
|
||||
"### Integration details\n",
|
||||
"\n",
|
||||
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
|
||||
"|:-------------------------------------------------------------------------------------------------------------------------------------------|:-----------------| :---: | :---: |:----------:|:------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------:|\n",
|
||||
"| GraphTool | [langchain-writer](https://pypi.org/project/langchain-writer/) | ❌ | ❌ | ❌ |  |  |\n",
|
||||
"\n",
|
||||
"### Features\n",
|
||||
"\n",
"We support two types of tools for use with `ChatWriter`: `function` and `graph`.\n",
|
||||
"\n",
|
||||
"#### Function\n",
|
||||
"\n",
"Functions are the most common type of tool; they let the LLM call external APIs, fetch data from databases, and generally perform any external action you want. Visit our [tool calling docs](https://dev.writer.com/api-guides/tool-calling#tool-calling) for additional information.\n",
|
||||
"\n",
|
||||
"#### Graph\n",
|
||||
"\n",
"The `Graph` tool is Writer's graph-based retrieval-augmented generation (RAG) tool, called Knowledge Graph. It lets developers simply pass a graph ID to the model, and the model answers the question in the prompt using that graph. To learn more, see our [Knowledge Graph API docs](https://dev.writer.com/api-guides/knowledge-graph)."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "40136062a4c267f3",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Setup\n",
|
||||
"\n",
|
||||
"Sign up for [Writer AI Studio](https://app.writer.com/aistudio/signup?utm_campaign=devrel) to generate an API key (you can follow this [Quickstart](https://dev.writer.com/api-guides/quickstart)). Then, set the WRITER_API_KEY environment variable:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "80d4e1a791aaa8",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import getpass\n",
|
||||
"import os\n",
|
||||
"\n",
|
||||
"if not os.getenv(\"WRITER_API_KEY\"):\n",
|
||||
" os.environ[\"WRITER_API_KEY\"] = getpass.getpass(\"Enter your Writer API key: \")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "1c97218f-f366-479d-8bf7-fe9f2f6df73f",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Usage\n",
|
||||
"\n",
|
||||
"You can bind graph or function tools to `ChatWriter`."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "570e4abffc12774",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Graph Tools\n",
|
||||
"\n",
|
||||
"To bind graph tools, first create and initialize a `GraphTool` instance with the `graph_ids` you want to use as sources:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "6faaae25509f0f28",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_writer.chat_models import ChatWriter\n",
|
||||
"from langchain_writer.tools import GraphTool\n",
|
||||
"\n",
|
||||
"chat = ChatWriter()\n",
|
||||
"\n",
|
||||
"graph_id = getpass.getpass(\"Enter Writer Knowledge Graph ID: \")\n",
|
||||
"graph_tool = GraphTool(graph_ids=[graph_id])"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "50ea16fc3a382cf",
|
||||
"metadata": {},
|
||||
"source": "## Instantiation"
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "e98d7deedb0e5c6f",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from typing import Optional\n",
|
||||
"\n",
|
||||
"from langchain_core.tools import tool\n",
|
||||
"from pydantic import BaseModel, Field\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"@tool\n",
|
||||
"def get_supercopa_trophies_count(club_name: str) -> Optional[int]:\n",
|
||||
" \"\"\"Returns information about supercopa trophies count.\n",
|
||||
"\n",
|
||||
" Args:\n",
|
||||
" club_name: Club you want to investigate info of supercopa trophies about\n",
|
||||
"\n",
|
||||
" Returns:\n",
|
||||
" Number of supercopa trophies or None if there is no info about requested club\n",
|
||||
" \"\"\"\n",
|
||||
"\n",
|
||||
" if club_name == \"Barcelona\":\n",
|
||||
" return 15\n",
|
||||
" elif club_name == \"Real Madrid\":\n",
|
||||
" return 13\n",
|
||||
" elif club_name == \"Atletico Madrid\":\n",
|
||||
" return 2\n",
|
||||
" else:\n",
|
||||
" return None\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"class GetWeather(BaseModel):\n",
|
||||
" \"\"\"Get the current weather in a given location\"\"\"\n",
|
||||
"\n",
|
||||
" location: str = Field(..., description=\"The city and state, e.g. San Francisco, CA\")\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"get_product_info = {\n",
|
||||
" \"type\": \"function\",\n",
|
||||
" \"function\": {\n",
|
||||
" \"name\": \"get_product_info\",\n",
|
||||
" \"description\": \"Get information about a product by its id\",\n",
|
||||
" \"parameters\": {\n",
|
||||
" \"type\": \"object\",\n",
|
||||
" \"properties\": {\n",
|
||||
" \"product_id\": {\n",
|
||||
" \"type\": \"number\",\n",
|
||||
" \"description\": \"The unique identifier of the product to retrieve information for\",\n",
|
||||
" }\n",
|
||||
" },\n",
|
||||
" \"required\": [\"product_id\"],\n",
|
||||
" },\n",
|
||||
" },\n",
|
||||
"}"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "1753ca46",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Binding tools\n",
|
||||
"Then, you can simply bind all tools to the `ChatWriter` instance:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "a4833f2597a87777",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chat.bind_tools(\n",
|
||||
" [graph_tool, get_supercopa_trophies_count, GetWeather, get_product_info]\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "a300614e244f4aaf",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"All tools are stored in the `tools` attribute of the `ChatWriter` instance:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "ccb61b945a56672b",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chat.tools"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "52dfec8a",
|
||||
"metadata": {},
|
||||
"source": [
"The tool choice mode is stored in the `tool_choice` attribute, which is `auto` by default:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "381f0d4b9a8357a4",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chat.tool_choice"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "74147a1a",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Invocation\n",
|
||||
"\n",
"The model will automatically choose the tool during invocation in all modes (streaming/non-streaming, sync/async)."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "74df06b58b5dc2e9",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_core.messages import HumanMessage\n",
|
||||
"\n",
|
||||
"messages = [\n",
|
||||
" HumanMessage(\n",
" \"Use the knowledge graph tool to compose this answer. Tell me what the first line of the documents stored in your KG is. Also I want to know: how many SuperCopa trophies have Barcelona won?\"\n",
|
||||
" )\n",
|
||||
"]\n",
|
||||
"\n",
|
||||
"response = chat.invoke(messages)\n",
|
||||
"messages.append(response)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "b015fdb2",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"In the case of function tools, you will receive an assistant message with the tool call request."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "e271e0fc677446b2",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"print(response.tool_calls)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "aeff6a17ce3176b1",
|
||||
"metadata": {},
|
||||
"source": [
"Then you can manually handle the tool call request, send the result back to the model, and receive the final response:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "156b58108aa9b367",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"for tool_call in response.tool_calls:\n",
|
||||
" selected_tool = {\n",
|
||||
" \"get_supercopa_trophies_count\": get_supercopa_trophies_count,\n",
|
||||
" }[tool_call[\"name\"].lower()]\n",
|
||||
" tool_msg = selected_tool.invoke(tool_call)\n",
|
||||
" messages.append(tool_msg)\n",
|
||||
"\n",
|
||||
"response = chat.invoke(messages)\n",
|
||||
"print(response.content)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "ff9347b2",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"With a `GraphTool`, the model will call it remotely and return usage info in the `additional_kwargs` under the `graph_data` key:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "4b3c6f05096fc9e3",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"print(response.additional_kwargs[\"graph_data\"])"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "f001d2ca",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"The `content` attribute contains the final response:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "eb6e0da74b10b8fc",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"print(response.content)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "602631cd878e5dbe",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Chaining\n",
|
||||
"\n",
|
||||
"#TODO: fill chaining section"
|
||||
]
|
||||
},
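{
"cell_type": "markdown",
"id": "chaining-sketch",
"metadata": {},
"source": [
"A minimal chaining sketch, piping a prompt template into the `chat` instance defined above (this follows the pattern used in the ChatWriter notebook and is only an illustrative placeholder for the section above):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "chaining-sketch-code",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"# Chain a prompt template with the tool-aware chat model defined earlier.\n",
"prompt = ChatPromptTemplate(\n",
"    [\n",
"        (\"system\", \"You are a helpful assistant.\"),\n",
"        (\"human\", \"{input}\"),\n",
"    ]\n",
")\n",
"\n",
"chain = prompt | chat\n",
"chain.invoke({\"input\": \"How many SuperCopa trophies have Barcelona won?\"})"
]
},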
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "4ac8146c",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## API reference\n",
|
||||
"For detailed documentation of all `GraphTool` features and configurations, head to the [API reference](https://python.langchain.com/api_reference/writer/tools/langchain_writer.tools.GraphTool.html#langchain_writer.tools.GraphTool)."
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.11.9"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
@ -462,3 +462,8 @@ packages:
|
||||
- name: langchain-permit
|
||||
path: .
|
||||
repo: permitio/langchain-permit
|
||||
- name: langchain-writer
|
||||
path: .
|
||||
repo: writer/langchain-writer
|
||||
downloads: 0
|
||||
downloads_updated_at: '2025-02-24T13:19:19.816059+00:00'
|