diff --git a/docs/docs/integrations/chat/ollama_functions.ipynb b/docs/docs/integrations/chat/ollama_functions.ipynb
index ed12fd7e5f9..1e6aaef68f6 100644
--- a/docs/docs/integrations/chat/ollama_functions.ipynb
+++ b/docs/docs/integrations/chat/ollama_functions.ipynb
@@ -15,7 +15,7 @@
  "source": [
  "# OllamaFunctions\n",
  "\n",
- "This notebook shows how to use an experimental wrapper around Ollama that gives it the same API as OpenAI Functions.\n",
+ "This notebook shows how to use an experimental wrapper around Ollama that gives it [tool calling capabilities](https://python.langchain.com/v0.2/docs/concepts/#functiontool-calling).\n",
  "\n",
  "Note that more powerful and capable models will perform better with complex schema and/or multiple functions. The examples below use llama3 and phi3 models.\n",
  "For a complete list of supported models and model variants, see the [Ollama model library](https://ollama.ai/library).\n",
@@ -25,81 +25,75 @@
  "This is an experimental wrapper that attempts to bolt-on tool calling support to models that do not natively support it. Use with caution.\n",
  "\n",
  ":::\n",
+ "\n",
+ "## Overview\n",
+ "\n",
+ "### Integration details\n",
+ "\n",
+ "| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
+ "|:-----------------------------------------------------------------------------------------------------------------------------------:|:-------:|:-----:|:------------:|:----------:|:-----------------:|:--------------:|\n",
+ "| [OllamaFunctions](https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.ollama_functions.OllamaFunctions.html) | [langchain-experimental](https://api.python.langchain.com/en/latest/experimental_api_reference.html) | ✅ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-experimental?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-experimental?style=flat-square&label=%20) |\n",
+ "\n",
+ "### Model features\n",
+ "\n",
+ "| [Tool calling](/docs/how_to/tool_calling/) | [Structured output](/docs/how_to/structured_output/) | JSON mode | Image input | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
+ "| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
+ "| ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | ❌ | ❌ |\n",
  "\n",
  "## Setup\n",
  "\n",
- "Follow [these instructions](https://github.com/jmorganca/ollama) to set up and run a local Ollama instance.\n",
+ "To access `OllamaFunctions` you will need to install the `langchain-experimental` integration package.\n",
+ "Follow [these instructions](https://github.com/jmorganca/ollama) to set up and run a local Ollama instance as well as download and serve [supported models](https://ollama.com/library).\n",
  "\n",
- "## Usage\n",
+ "### Credentials\n",
  "\n",
- "You can initialize OllamaFunctions in a similar way to how you'd initialize a standard ChatOllama instance:"
+ "No credentials are required: `OllamaFunctions` talks to a locally running Ollama instance.\n",
+ "\n",
+ "### Installation\n",
+ "\n",
+ "The `OllamaFunctions` class lives in the `langchain-experimental` package:\n"
 ]
 },
 {
 "cell_type": "code",
- "execution_count": 1,
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "%pip install -qU langchain-experimental"
+ ]
+ },
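+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The examples below also assume the models they use have already been pulled. Assuming a standard Ollama install with the `ollama` CLI available, you can fetch them straight from the notebook (a sketch; substitute any model from the library):\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Download the models used in this notebook; assumes the `ollama` CLI is on your PATH\n",
+ "!ollama pull phi3\n",
+ "!ollama pull llama3"
+ ]
+ },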
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Instantiation\n",
+ "\n",
+ "`OllamaFunctions` takes the same init parameters as `ChatOllama`.\n",
+ "\n",
+ "In order to use tool calling, you must also specify `format=\"json\"`."
+ ]
+ },
 {
 "cell_type": "code",
 "execution_count": 6,
 "metadata": {
 "ExecuteTime": {
- "end_time": "2024-04-28T00:53:25.276543Z",
- "start_time": "2024-04-28T00:53:24.881202Z"
- },
- "scrolled": true
+ "end_time": "2024-06-23T15:20:21.818089Z",
+ "start_time": "2024-06-23T15:20:21.815759Z"
 }
 },
 "outputs": [],
 "source": [
 "from langchain_experimental.llms.ollama_functions import OllamaFunctions\n",
 "\n",
- "model = OllamaFunctions(model=\"llama3\", format=\"json\")"
+ "llm = OllamaFunctions(model=\"phi3\")"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
- "You can then bind functions defined with JSON Schema parameters and a `function_call` parameter to force the model to call the given function:"
 ]
 },
 {
- "cell_type": "code",
- "execution_count": 2,
- "metadata": {
- "ExecuteTime": {
- "end_time": "2024-04-26T04:59:17.270931Z",
- "start_time": "2024-04-26T04:59:17.263347Z"
- }
- },
- "outputs": [],
- "source": [
- "model = model.bind_tools(\n",
- "    tools=[\n",
- "        {\n",
- "            \"name\": \"get_current_weather\",\n",
- "            \"description\": \"Get the current weather in a given location\",\n",
- "            \"parameters\": {\n",
- "                \"type\": \"object\",\n",
- "                \"properties\": {\n",
- "                    \"location\": {\n",
- "                        \"type\": \"string\",\n",
- "                        \"description\": \"The city and state, \" \"e.g. San Francisco, CA\",\n",
- "                    },\n",
- "                    \"unit\": {\n",
- "                        \"type\": \"string\",\n",
- "                        \"enum\": [\"celsius\", \"fahrenheit\"],\n",
- "                    },\n",
- "                },\n",
- "                \"required\": [\"location\"],\n",
- "            },\n",
- "        }\n",
- "    ],\n",
- "    function_call={\"name\": \"get_current_weather\"},\n",
- ")"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
- "Calling a function with this model then results in JSON output matching the provided schema:"
+ "## Invocation"
 ]
 },
 {
 "cell_type": "code",
@@ -107,15 +101,15 @@
 "execution_count": 3,
 "metadata": {
 "ExecuteTime": {
- "end_time": "2024-04-26T04:59:26.092428Z",
- "start_time": "2024-04-26T04:59:17.272627Z"
+ "end_time": "2024-06-23T15:20:46.794689Z",
+ "start_time": "2024-06-23T15:20:44.982632Z"
 }
 },
 "outputs": [
 {
 "data": {
 "text/plain": [
- "AIMessage(content='', additional_kwargs={'function_call': {'name': 'get_current_weather', 'arguments': '{\"location\": \"Boston, MA\"}'}}, id='run-1791f9fe-95ad-4ca4-bdf7-9f73eab31e6f-0')"
+ "AIMessage(content=\"J'adore programmer.\", id='run-94815fcf-ae11-438a-ba3f-00819328b5cd-0')"
 ]
 },
 "execution_count": 3,
 "metadata": {},
 "output_type": "execute_result"
 }
 ],
 "source": [
- "from langchain_core.messages import HumanMessage\n",
- "\n",
- "model.invoke(\"what is the weather in Boston?\")"
+ "messages = [\n",
+ "    (\n",
+ "        \"system\",\n",
+ "        \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
+ "    ),\n",
+ "    (\"human\", \"I love programming.\"),\n",
+ "]\n",
+ "ai_msg = llm.invoke(messages)\n",
+ "ai_msg"
 ]
 },
 {
 "cell_type": "code",
 "execution_count": 4,
- "metadata": {
- "ExecuteTime": {
- "end_time": "2024-04-26T04:59:26.098828Z",
- "start_time": "2024-04-26T04:59:26.094021Z"
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "\"J'adore programmer.\""
+ ]
+ },
+ "execution_count": 4,
+ "metadata": {},
+ "output_type": "execute_result"
 }
- },
- "outputs": [],
+ ],
 "source": [
- "from langchain_core.prompts import PromptTemplate\n",
- "from langchain_core.pydantic_v1 import BaseModel, Field\n",
- "\n",
- "\n",
- "# Schema for structured response\n",
- "class Person(BaseModel):\n",
- "    name: str = Field(description=\"The person's name\", required=True)\n",
- "    height: float = Field(description=\"The person's height\", required=True)\n",
- "    hair_color: str = Field(description=\"The person's hair color\")\n",
- "\n",
- "\n",
- "# Prompt template\n",
- "prompt = PromptTemplate.from_template(\n",
- "    \"\"\"Alex is 5 feet tall. \n",
- "Claudia is 1 feet taller than Alex and jumps higher than him. \n",
- "Claudia is a brunette and Alex is blonde.\n",
- "\n",
- "Human: {question}\n",
- "AI: \"\"\"\n",
- ")\n",
- "\n",
- "# Chain\n",
- "llm = OllamaFunctions(model=\"phi3\", format=\"json\", temperature=0)\n",
- "structured_llm = llm.with_structured_output(Person)\n",
- "chain = prompt | structured_llm"
+ "ai_msg.content"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
- "### Extracting data about Alex"
+ "## Chaining\n",
+ "\n",
+ "We can [chain](https://python.langchain.com/v0.2/docs/how_to/sequence/) our model with a prompt template like so:"
 ]
 },
 {
 "cell_type": "code",
 "execution_count": 5,
- "metadata": {
- "ExecuteTime": {
- "end_time": "2024-04-26T04:59:30.164955Z",
- "start_time": "2024-04-26T04:59:26.099790Z"
- }
- },
+ "metadata": {},
 "outputs": [
 {
 "data": {
 "text/plain": [
- "Person(name='Alex', height=5.0, hair_color='blonde')"
+ "AIMessage(content='Programmieren ist sehr verrückt! Es freut mich, dass Sie auf Programmierung so positiv eingestellt sind.', id='run-ee99be5e-4d48-4ab6-b602-35415f0bdbde-0')"
 ]
 },
 "execution_count": 5,
 "metadata": {},
 "output_type": "execute_result"
 }
 ],
@@ -205,41 +175,123 @@
 "source": [
- "alex = chain.invoke(\"Describe Alex\")\n",
- "alex"
+ "from langchain_core.prompts import ChatPromptTemplate\n",
+ "\n",
+ "prompt = ChatPromptTemplate.from_messages(\n",
+ "    [\n",
+ "        (\n",
+ "            \"system\",\n",
+ "            \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
+ "        ),\n",
+ "        (\"human\", \"{input}\"),\n",
+ "    ]\n",
+ ")\n",
+ "\n",
+ "chain = prompt | llm\n",
+ "chain.invoke(\n",
+ "    {\n",
+ "        \"input_language\": \"English\",\n",
+ "        \"output_language\": \"German\",\n",
+ "        \"input\": \"I love programming.\",\n",
+ "    }\n",
+ ")"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
- "### Extracting data about Claudia"
+ "## Tool Calling\n",
+ "\n",
+ "### OllamaFunctions.bind_tools()\n",
+ "\n",
+ "With `OllamaFunctions.bind_tools`, we can easily pass in Pydantic classes, dict schemas, LangChain tools, or even functions as tools to the model. Under the hood these are converted to tool definition schemas, which look roughly like the sketch below (shown for the `GetWeather` tool defined in the next cell; the exact fields the wrapper emits may differ):\n",
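+ "\n",
+ "```json\n",
+ "{\n",
+ "    \"name\": \"GetWeather\",\n",
+ "    \"description\": \"Get the current weather in a given location\",\n",
+ "    \"parameters\": {\n",
+ "        \"type\": \"object\",\n",
+ "        \"properties\": {\n",
+ "            \"location\": {\n",
+ "                \"type\": \"string\",\n",
+ "                \"description\": \"The city and state, e.g. San Francisco, CA\"\n",
+ "            }\n",
+ "        },\n",
+ "        \"required\": [\"location\"]\n",
+ "    }\n",
+ "}\n",
+ "```"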
 ]
 },
 {
 "cell_type": "code",
 "execution_count": 7,
 "metadata": {},
 "outputs": [],
 "source": [
 "from langchain_core.pydantic_v1 import BaseModel, Field\n",
 "\n",
 "\n",
 "class GetWeather(BaseModel):\n",
 "    \"\"\"Get the current weather in a given location\"\"\"\n",
 "\n",
 "    location: str = Field(..., description=\"The city and state, e.g. San Francisco, CA\")\n",
 "\n",
 "\n",
 "llm_with_tools = llm.bind_tools([GetWeather])"
 ]
 },
 {
 "cell_type": "code",
 "execution_count": 8,
 "metadata": {},
 "outputs": [
 {
 "data": {
 "text/plain": [
 "AIMessage(content='', id='run-b9769435-ec6a-4cb8-8545-5a5035fc19bd-0', tool_calls=[{'name': 'GetWeather', 'args': {'location': 'San Francisco, CA'}, 'id': 'call_064c4e1cb27e4adb9e4e7ed60362ecc9'}])"
 ]
 },
 "execution_count": 8,
 "metadata": {},
 "output_type": "execute_result"
 }
 ],
 "source": [
 "ai_msg = llm_with_tools.invoke(\n",
 "    \"what is the weather like in San Francisco\",\n",
 ")\n",
 "ai_msg"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "### AIMessage.tool_calls\n",
 "\n",
 "Notice that the AIMessage has a `tool_calls` attribute. This contains the tool calls in a standardized `ToolCall` format that is model-provider agnostic."
 ]
 },
 {
 "cell_type": "code",
 "execution_count": 10,
 "metadata": {},
 "outputs": [
 {
 "data": {
 "text/plain": [
 "[{'name': 'GetWeather',\n",
 "  'args': {'location': 'San Francisco, CA'},\n",
 "  'id': 'call_064c4e1cb27e4adb9e4e7ed60362ecc9'}]"
 ]
 },
 "execution_count": 10,
 "metadata": {},
 "output_type": "execute_result"
 }
 ],
 "source": [
 "ai_msg.tool_calls"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": "For more on binding tools and tool call outputs, head to the [tool calling](/docs/how_to/function_calling) docs."
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "## API reference\n",
 "\n",
 "For detailed documentation of all `OllamaFunctions` features and configurations head to the API reference: https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.ollama_functions.OllamaFunctions.html\n"
 ]
 }
 ],
@@ -259,7 +311,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
- "version": "3.9.1"
+ "version": "3.10.12"
 }
 },
 "nbformat": 4,