diff --git a/docs/docs/integrations/chat/cohere.ipynb b/docs/docs/integrations/chat/cohere.ipynb
index 707bcf91e84..875413b0fb2 100644
--- a/docs/docs/integrations/chat/cohere.ipynb
+++ b/docs/docs/integrations/chat/cohere.ipynb
@@ -2,7 +2,7 @@
  "cells": [
   {
    "cell_type": "raw",
-   "id": "53fbf15f",
+   "id": "afaf8039",
    "metadata": {},
    "source": [
     "---\n",
@@ -12,103 +12,129 @@
   },
   {
    "cell_type": "markdown",
-   "id": "bf733a38-db84-4363-89e2-de6735c37230",
+   "id": "e49f1e0d",
    "metadata": {},
    "source": [
-    "# Cohere\n",
+    "# ChatCohere\n",
     "\n",
-    "This notebook covers how to get started with [Cohere chat models](https://cohere.com/chat).\n",
+    "This doc will help you get started with Cohere [chat models](/docs/concepts/#chat-models). For detailed documentation of all ChatCohere features and configurations, head to the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_cohere.chat_models.ChatCohere.html).\n",
+    "\n",
+    "For an overview of all Cohere models, head to the [Cohere docs](https://docs.cohere.com/docs/models).\n",
+    "\n",
+    "## Overview\n",
+    "### Integration details\n",
+    "\n",
+    "| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/v0.2/docs/integrations/chat/cohere) | Package downloads | Package latest |\n",
+    "| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
+    "| [ChatCohere](https://api.python.langchain.com/en/latest/chat_models/langchain_cohere.chat_models.ChatCohere.html) | [langchain-cohere](https://api.python.langchain.com/en/latest/cohere_api_reference.html) | ❌ | beta | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-cohere?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-cohere?style=flat-square&label=%20) |\n",
+    "\n",
+    "### Model features\n",
+    "| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
+    "| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
+    "| ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | \n",
     "\n",
-    "Head to the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.cohere.ChatCohere.html) for detailed documentation of all attributes and methods."
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "id": "3607d67e-e56c-4102-bbba-df2edc0e109e",
-   "metadata": {},
-   "source": [
     "## Setup\n",
     "\n",
-    "The integration lives in the `langchain-cohere` package. We can install these with:\n",
+    "To access Cohere models, you'll need to create a Cohere account, get an API key, and install the `langchain-cohere` integration package.\n",
     "\n",
-    "```bash\n",
-    "pip install -U langchain-cohere\n",
-    "```\n",
+    "### Credentials\n",
     "\n",
-    "We'll also need to get a [Cohere API key](https://cohere.com/) and set the `COHERE_API_KEY` environment variable:"
+    "Head to https://dashboard.cohere.com/welcome/login to sign up to Cohere and generate an API key. Once you've done this, set the COHERE_API_KEY environment variable:"
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": 11,
-   "id": "2108b517-1e8d-473d-92fa-4f930e8072a7",
+   "execution_count": null,
+   "id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94",
    "metadata": {},
    "outputs": [],
    "source": [
     "import getpass\n",
     "import os\n",
     "\n",
-    "os.environ[\"COHERE_API_KEY\"] = getpass.getpass()"
+    "os.environ[\"COHERE_API_KEY\"] = getpass.getpass(\"Enter your Cohere API key: \")"
    ]
   },
   {
    "cell_type": "markdown",
-   "id": "cf690fbb",
+   "id": "72ee0c4b-9764-423a-9dbf-95129e185210",
    "metadata": {},
    "source": [
-    "It's also helpful (but not needed) to set up [LangSmith](https://smith.langchain.com/) for best-in-class observability"
+    "If you want to get automated tracing of your model calls, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting the lines below:"
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": 12,
-   "id": "7f11de02",
+   "execution_count": null,
+   "id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
    "metadata": {},
    "outputs": [],
    "source": [
-    "# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
-    "# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()"
+    "# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
+    "# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
    ]
   },
   {
    "cell_type": "markdown",
-   "id": "4c26754b-b3c9-4d93-8f36-43049bd943bf",
+   "id": "0730d6a1-c893-4840-9817-5e5251676d5d",
    "metadata": {},
    "source": [
-    "## Usage\n",
+    "### Installation\n",
     "\n",
-    "ChatCohere supports all [ChatModel](/docs/how_to#chat-models) functionality:"
+    "The LangChain Cohere integration lives in the `langchain-cohere` package:"
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": 13,
-   "id": "d4a7c55d-b235-4ca4-a579-c90cc9570da9",
-   "metadata": {
-    "tags": []
-   },
+   "execution_count": null,
+   "id": "652d6238-1f87-422a-b135-f5abbb8652fc",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "%pip install -qU langchain-cohere"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "a38cde65-254d-4219-a441-068766c0d4b5",
+   "metadata": {},
+   "source": [
+    "## Instantiation\n",
+    "\n",
+    "Now we can instantiate our model object and generate chat completions:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 1,
+   "id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
+   "metadata": {},
    "outputs": [],
    "source": [
     "from langchain_cohere import ChatCohere\n",
-    "from langchain_core.messages import HumanMessage"
+    "\n",
+    "llm = ChatCohere(\n",
+    "    model=\"command-r-plus\",\n",
+    "    temperature=0,\n",
+    "    max_tokens=None,\n",
+    "    timeout=None,\n",
+    "    max_retries=2,\n",
+    "    # other params...\n",
+    ")"
    ]
   },
   {
-   "cell_type": "code",
-   "execution_count": 14,
-   "id": "70cf04e8-423a-4ff6-8b09-f11fb711c817",
-   "metadata": {
-    "tags": []
-   },
-   "outputs": [],
+   "cell_type": "markdown",
+   "id": "2b4f3e15",
+   "metadata": {},
    "source": [
-    "chat = ChatCohere(model=\"command\")"
+    "## Invocation"
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": 15,
-   "id": "8199ef8f-eb8b-4253-9ea0-6c24a013ca4c",
+   "execution_count": 2,
+   "id": "62e0dbc3",
    "metadata": {
     "tags": []
    },
@@ -116,134 +142,110 @@
    {
     "data": {
      "text/plain": [
-      "AIMessage(content='4 && 5 \\n6 || 7 \\n\\nWould you like to play a game of odds and evens?', additional_kwargs={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '2076b614-52b3-4082-a259-cc92cd3d9fea', 'token_count': {'prompt_tokens': 68, 'response_tokens': 23, 'total_tokens': 91, 'billed_tokens': 77}}, response_metadata={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '2076b614-52b3-4082-a259-cc92cd3d9fea', 'token_count': {'prompt_tokens': 68, 'response_tokens': 23, 'total_tokens': 91, 'billed_tokens': 77}}, id='run-3475e0c8-c89b-4937-9300-e07d652455e1-0')"
+      "AIMessage(content=\"J'adore programmer.\", additional_kwargs={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': 'd84f80f3-4611-46e6-aed0-9d8665a20a11', 'token_count': {'input_tokens': 89, 'output_tokens': 5}}, response_metadata={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': 'd84f80f3-4611-46e6-aed0-9d8665a20a11', 'token_count': {'input_tokens': 89, 'output_tokens': 5}}, id='run-514ab516-ed7e-48ac-b132-2598fb80ebef-0')"
      ]
     },
-    "execution_count": 15,
+    "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
-    "messages = [HumanMessage(content=\"1\"), HumanMessage(content=\"2 3\")]\n",
-    "chat.invoke(messages)"
+    "messages = [\n",
+    "    (\n",
+    "        \"system\",\n",
+    "        \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
+    "    ),\n",
+    "    (\"human\", \"I love programming.\"),\n",
+    "]\n",
+    "ai_msg = llm.invoke(messages)\n",
+    "ai_msg"
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": 16,
-   "id": "c5fac0e9-05a4-4fc1-a3b3-e5bbb24b971b",
-   "metadata": {
-    "tags": []
-   },
-   "outputs": [
-    {
-     "data": {
-      "text/plain": [
-       "AIMessage(content='4 && 5', additional_kwargs={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': 'f0708a92-f874-46ee-9b93-334d616ad92e', 'token_count': {'prompt_tokens': 68, 'response_tokens': 3, 'total_tokens': 71, 'billed_tokens': 57}}, response_metadata={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': 'f0708a92-f874-46ee-9b93-334d616ad92e', 'token_count': {'prompt_tokens': 68, 'response_tokens': 3, 'total_tokens': 71, 'billed_tokens': 57}}, id='run-1635e63e-2994-4e7f-986e-152ddfc95777-0')"
-      ]
-     },
-     "execution_count": 16,
-     "metadata": {},
-     "output_type": "execute_result"
-    }
-   ],
-   "source": [
-    "await chat.ainvoke(messages)"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": 17,
-   "id": "025be980-e50d-4a68-93dc-c9c7b500ce34",
-   "metadata": {
-    "tags": []
-   },
+   "execution_count": 3,
+   "id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
+   "metadata": {},
    "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
-     "4 && 5"
+     "J'adore programmer.\n"
     ]
    }
   ],
   "source": [
-    "for chunk in chat.stream(messages):\n",
-    "    print(chunk.content, end=\"\", flush=True)"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": 18,
-   "id": "064288e4-f184-4496-9427-bcf148fa055e",
-   "metadata": {},
-   "outputs": [
-    {
-     "data": {
-      "text/plain": [
-       "[AIMessage(content='4 && 5', additional_kwargs={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '6770ca86-f6c3-4ba3-a285-c4772160612f', 'token_count': {'prompt_tokens': 68, 'response_tokens': 3, 'total_tokens': 71, 'billed_tokens': 57}}, response_metadata={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '6770ca86-f6c3-4ba3-a285-c4772160612f', 'token_count': {'prompt_tokens': 68, 'response_tokens': 3, 'total_tokens': 71, 'billed_tokens': 57}}, id='run-8d6fade2-1b39-4e31-ab23-4be622dd0027-0')]"
-      ]
-     },
-     "execution_count": 18,
-     "metadata": {},
-     "output_type": "execute_result"
-    }
-   ],
-   "source": [
-    "chat.batch([messages])"
+    "print(ai_msg.content)"
    ]
   },
   {
    "cell_type": "markdown",
-   "id": "f1c56460",
+   "id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
    "metadata": {},
    "source": [
     "## Chaining\n",
     "\n",
-    "You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/concepts#langchain-expression-language-lcel)"
+    "We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": 19,
-   "id": "0851b103",
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "from langchain_core.prompts import ChatPromptTemplate\n",
-    "\n",
-    "prompt = ChatPromptTemplate.from_template(\"Tell me a joke about {topic}\")\n",
-    "chain = prompt | chat"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": 20,
-   "id": "ae950c0f-1691-47f1-b609-273033cae707",
+   "execution_count": 4,
+   "id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
    "metadata": {},
    "outputs": [
    {
     "data": {
      "text/plain": [
-      "AIMessage(content='What color socks do bears wear?\\n\\nThey don’t wear socks, they have bear feet. \\n\\nHope you laughed! If not, maybe this will help: laughter is the best medicine, and a good sense of humor is infectious!', additional_kwargs={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '6edccf44-9bc8-4139-b30e-13b368f3563c', 'token_count': {'prompt_tokens': 68, 'response_tokens': 51, 'total_tokens': 119, 'billed_tokens': 108}}, response_metadata={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '6edccf44-9bc8-4139-b30e-13b368f3563c', 'token_count': {'prompt_tokens': 68, 'response_tokens': 51, 'total_tokens': 119, 'billed_tokens': 108}}, id='run-ef7f9789-0d4d-43bf-a4f7-f2a0e27a5320-0')"
+      "AIMessage(content='Ich liebe Programmierung.', additional_kwargs={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '053bebde-4e1d-4d06-8ee6-3446e7afa25e', 'token_count': {'input_tokens': 84, 'output_tokens': 6}}, response_metadata={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '053bebde-4e1d-4d06-8ee6-3446e7afa25e', 'token_count': {'input_tokens': 84, 'output_tokens': 6}}, id='run-53700708-b7fb-417b-af36-1a6fcde38e7d-0')"
      ]
     },
-    "execution_count": 20,
+    "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
-    "chain.invoke({\"topic\": \"bears\"})"
+    "from langchain_core.prompts import ChatPromptTemplate\n",
+    "\n",
+    "prompt = ChatPromptTemplate.from_messages(\n",
+    "    [\n",
+    "        (\n",
+    "            \"system\",\n",
+    "            \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
+    "        ),\n",
+    "        (\"human\", \"{input}\"),\n",
+    "    ]\n",
+    ")\n",
+    "\n",
+    "chain = prompt | llm\n",
+    "chain.invoke(\n",
+    "    {\n",
+    "        \"input_language\": \"English\",\n",
+    "        \"output_language\": \"German\",\n",
+    "        \"input\": \"I love programming.\",\n",
+    "    }\n",
+    ")"
    ]
   },
   {
    "cell_type": "markdown",
    "id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {}, + "source": [ + "## API reference\n", + "\n", + "For detailed documentation of all ChatCohere features and configurations head to the API reference: https://api.python.langchain.com/en/latest/chat_models/langchain_cohere.chat_models.ChatCohere.html" ] } ], "metadata": { "kernelspec": { - "display_name": "Python 3 (ipykernel)", + "display_name": "poetry-venv-2", "language": "python", - "name": "python3" + "name": "poetry-venv-2" }, "language_info": { "codemirror_mode": { @@ -255,7 +257,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.11.7" + "version": "3.11.9" } }, "nbformat": 4,