From 4e8779b3a5f19283c5eb6ae841b11716abebd6f2 Mon Sep 17 00:00:00 2001
From: Michael Li
Date: Tue, 27 May 2025 05:16:42 +1000
Subject: [PATCH] docs: fix incorrect grammar in octoai.ipynb and predictionguard.ipynb (#31347)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

…tionguard.ipynb

Thank you for contributing to LangChain!

- [x] **PR title**: "package: description"
  - Where "package" is whichever of langchain, core, etc. is being modified. Use "docs: ..." for purely docs changes, "infra: ..." for CI changes.
  - Example: "core: add foobar LLM"

- [x] **PR message**: ***Delete this entire checklist*** and replace with
  - **Description:** a description of the change
  - **Issue:** the issue # it fixes, if applicable
  - **Dependencies:** any dependencies required for this change
  - **Twitter handle:** if your PR gets announced, and you'd like a mention, we'll gladly shout you out!

- [x] **Add tests and docs**: If you're adding a new integration, please include
  1. a test for the integration, preferably unit tests that do not rely on network access,
  2. an example notebook showing its use. It lives in `docs/docs/integrations` directory.

- [x] **Lint and test**: Run `make format`, `make lint` and `make test` from the root of the package(s) you've modified. See contribution guidelines for more: https://python.langchain.com/docs/contributing/

Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.

If no one reviews your PR within a few days, please @-mention one of baskaryan, eyurtsev, ccurme, vbarda, hwchase17.
--- docs/docs/integrations/chat/octoai.ipynb | 4 +- .../integrations/chat/predictionguard.ipynb | 358 ++++++++++-------- 2 files changed, 196 insertions(+), 166 deletions(-) diff --git a/docs/docs/integrations/chat/octoai.ipynb b/docs/docs/integrations/chat/octoai.ipynb index a0bbe98be8d..9d191362f81 100644 --- a/docs/docs/integrations/chat/octoai.ipynb +++ b/docs/docs/integrations/chat/octoai.ipynb @@ -16,7 +16,7 @@ "\n", "1. Get an API Token from [your OctoAI account page](https://octoai.cloud/settings).\n", " \n", - "2. Paste your API token in in the code cell below or use the `octoai_api_token` keyword argument.\n", + "2. Paste your API token in the code cell below or use the `octoai_api_token` keyword argument.\n", "\n", "Note: If you want to use a different model than the [available models](https://octoai.cloud/text?selectedTags=Chat), you can containerize the model and make a custom OctoAI endpoint yourself, by following [Build a Container from Python](https://octo.ai/docs/bring-your-own-model/advanced-build-a-container-from-scratch-in-python) and [Create a Custom Endpoint from a Container](https://octo.ai/docs/bring-your-own-model/create-custom-endpoints-from-a-container/create-custom-endpoints-from-a-container) and then updating your `OCTOAI_API_BASE` environment variable.\n" ] @@ -99,7 +99,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.10.12" + "version": "3.12.10" }, "vscode": { "interpreter": { diff --git a/docs/docs/integrations/chat/predictionguard.ipynb b/docs/docs/integrations/chat/predictionguard.ipynb index 6d06a7dd5ad..105fc56ceef 100644 --- a/docs/docs/integrations/chat/predictionguard.ipynb +++ b/docs/docs/integrations/chat/predictionguard.ipynb @@ -4,93 +4,99 @@ "cell_type": "markdown", "id": "3f0a201c", "metadata": {}, - "source": "# ChatPredictionGuard" + "source": [ + "# ChatPredictionGuard" + ] }, { - "metadata": {}, "cell_type": "markdown", - "source": ">[Prediction 
Guard](https://predictionguard.com) is a secure, scalable GenAI platform that safeguards sensitive data, prevents common AI malfunctions, and runs on affordable hardware.\n", - "id": "c3adc2aac37985ac" + "id": "c3adc2aac37985ac", + "metadata": {}, + "source": [ + ">[Prediction Guard](https://predictionguard.com) is a secure, scalable GenAI platform that safeguards sensitive data, prevents common AI malfunctions, and runs on affordable hardware.\n" + ] }, { - "metadata": {}, "cell_type": "markdown", - "source": "## Overview", - "id": "4e1ec341481fb244" + "id": "4e1ec341481fb244", + "metadata": {}, + "source": [ + "## Overview" + ] }, { - "metadata": {}, "cell_type": "markdown", + "id": "b4090b7489e37a91", + "metadata": {}, "source": [ "### Integration details\n", "This integration utilizes the Prediction Guard API, which includes various safeguards and security features." - ], - "id": "b4090b7489e37a91" + ] }, { - "metadata": {}, "cell_type": "markdown", + "id": "e26e5b3240452162", + "metadata": {}, "source": [ "### Model features\n", "The models supported by this integration only feature text-generation currently, along with the input and output checks described here." - ], - "id": "e26e5b3240452162" + ] }, { - "metadata": {}, "cell_type": "markdown", + "id": "4fca548b61efb049", + "metadata": {}, "source": [ "## Setup\n", "To access Prediction Guard models, contact us [here](https://predictionguard.com/get-started) to get a Prediction Guard API key and get started. 
" - ], - "id": "4fca548b61efb049" + ] }, { - "metadata": {}, "cell_type": "markdown", + "id": "7cc34a9cd865690c", + "metadata": {}, "source": [ "### Credentials\n", "Once you have a key, you can set it with " - ], - "id": "7cc34a9cd865690c" + ] }, { + "cell_type": "code", + "execution_count": 2, + "id": "fa57fba89276da13", "metadata": { "ExecuteTime": { "end_time": "2025-04-21T18:23:30.746350Z", "start_time": "2025-04-21T18:23:30.744744Z" } }, - "cell_type": "code", + "outputs": [], "source": [ "import os\n", "\n", "if \"PREDICTIONGUARD_API_KEY\" not in os.environ:\n", " os.environ[\"PREDICTIONGUARD_API_KEY\"] = \"\"" - ], - "id": "fa57fba89276da13", - "outputs": [], - "execution_count": 2 + ] }, { - "metadata": {}, "cell_type": "markdown", + "id": "87dc1742af7b053", + "metadata": {}, "source": [ "### Installation\n", "Install the Prediction Guard Langchain integration with" - ], - "id": "87dc1742af7b053" + ] }, { + "cell_type": "code", + "execution_count": 3, + "id": "b816ae8553cba021", "metadata": { "ExecuteTime": { "end_time": "2025-04-21T18:23:33.359278Z", "start_time": "2025-04-21T18:23:32.853207Z" } }, - "cell_type": "code", - "source": "%pip install -qU langchain-predictionguard", - "id": "b816ae8553cba021", "outputs": [ { "name": "stdout", @@ -100,7 +106,9 @@ ] } ], - "execution_count": 3 + "source": [ + "%pip install -qU langchain-predictionguard" + ] }, { "cell_type": "markdown", @@ -108,63 +116,61 @@ "metadata": { "id": "mesCTyhnJkNS" }, - "source": "## Instantiation" + "source": [ + "## Instantiation" + ] }, { "cell_type": "code", + "execution_count": 4, "id": "7191a5ce", "metadata": { - "id": "2xe8JEUwA7_y", "ExecuteTime": { "end_time": "2025-04-21T18:23:39.812675Z", "start_time": "2025-04-21T18:23:39.666881Z" - } + }, + "id": "2xe8JEUwA7_y" }, - "source": "from langchain_predictionguard import ChatPredictionGuard", "outputs": [], - "execution_count": 4 + "source": [ + "from langchain_predictionguard import ChatPredictionGuard" + ] }, { "cell_type": 
"code", + "execution_count": 5, "id": "140717c9", "metadata": { - "id": "Ua7Mw1N4HcER", "ExecuteTime": { "end_time": "2025-04-21T18:23:41.590296Z", "start_time": "2025-04-21T18:23:41.253237Z" - } + }, + "id": "Ua7Mw1N4HcER" }, + "outputs": [], "source": [ "# If predictionguard_api_key is not passed, default behavior is to use the `PREDICTIONGUARD_API_KEY` environment variable.\n", "chat = ChatPredictionGuard(model=\"Hermes-3-Llama-3.1-8B\")" - ], - "outputs": [], - "execution_count": 5 + ] }, { - "metadata": {}, "cell_type": "markdown", - "source": "## Invocation", - "id": "8dbdfc55b638e4c2" + "id": "8dbdfc55b638e4c2", + "metadata": {}, + "source": [ + "## Invocation" + ] }, { + "cell_type": "code", + "execution_count": 4, + "id": "5a1635e7ae7134a3", "metadata": { "ExecuteTime": { "end_time": "2024-11-08T19:44:56.634939Z", "start_time": "2024-11-08T19:44:55.924534Z" } }, - "cell_type": "code", - "source": [ - "messages = [\n", - " (\"system\", \"You are a helpful assistant that tells jokes.\"),\n", - " (\"human\", \"Tell me a joke\"),\n", - "]\n", - "\n", - "ai_msg = chat.invoke(messages)\n", - "ai_msg" - ], - "id": "5a1635e7ae7134a3", "outputs": [ { "data": { @@ -177,18 +183,26 @@ "output_type": "execute_result" } ], - "execution_count": 4 + "source": [ + "messages = [\n", + " (\"system\", \"You are a helpful assistant that tells jokes.\"),\n", + " (\"human\", \"Tell me a joke\"),\n", + "]\n", + "\n", + "ai_msg = chat.invoke(messages)\n", + "ai_msg" + ] }, { + "cell_type": "code", + "execution_count": 5, + "id": "a6f8025726e5da3c", "metadata": { "ExecuteTime": { "end_time": "2024-11-08T19:44:57.501782Z", "start_time": "2024-11-08T19:44:57.498931Z" } }, - "cell_type": "code", - "source": "print(ai_msg.content)", - "id": "a6f8025726e5da3c", "outputs": [ { "name": "stdout", @@ -198,16 +212,21 @@ ] } ], - "execution_count": 5 + "source": [ + "print(ai_msg.content)" + ] }, { "cell_type": "markdown", "id": "e9e96106-8e44-4373-9c57-adc3d0062df3", "metadata": {}, - 
"source": "## Streaming" + "source": [ + "## Streaming" + ] }, { "cell_type": "code", + "execution_count": 6, "id": "ea62d2da-802c-4b8a-a63e-5d1d0a72540f", "metadata": { "ExecuteTime": { @@ -215,12 +234,6 @@ "start_time": "2024-11-08T19:44:59.095584Z" } }, - "source": [ - "chat = ChatPredictionGuard(model=\"Hermes-2-Pro-Llama-3-8B\")\n", - "\n", - "for chunk in chat.stream(\"Tell me a joke\"):\n", - " print(chunk.content, end=\"\", flush=True)" - ], "outputs": [ { "name": "stdout", @@ -232,33 +245,39 @@ ] } ], - "execution_count": 6 + "source": [ + "chat = ChatPredictionGuard(model=\"Hermes-2-Pro-Llama-3-8B\")\n", + "\n", + "for chunk in chat.stream(\"Tell me a joke\"):\n", + " print(chunk.content, end=\"\", flush=True)" + ] }, { - "metadata": {}, "cell_type": "markdown", + "id": "1227780d6e6728ba", + "metadata": {}, "source": [ "## Tool Calling\n", "\n", - "Prediction Guard has a tool calling API that lets you describe tools and their arguments, which enables the model return a JSON object with a tool to call and the inputs to that tool. Tool-calling is very useful for building tool-using chains and agents, and for getting structured outputs from models more generally.\n" - ], - "id": "1227780d6e6728ba" + "Prediction Guard has a tool calling API that lets you describe tools and their arguments, which enables the model to return a JSON object with a tool to call and the inputs to that tool. Tool-calling is very useful for building tool-using chains and agents, and for getting structured outputs from models more generally.\n" + ] }, { - "metadata": {}, "cell_type": "markdown", + "id": "23446aa52e01d1ba", + "metadata": {}, "source": [ "### ChatPredictionGuard.bind_tools()\n", "\n", "Using `ChatPredictionGuard.bind_tools()`, you can pass in Pydantic classes, dict schemas, and Langchain tools as tools to the model, which are then reformatted to allow for use by the model." 
- ], - "id": "23446aa52e01d1ba" + ] }, { - "metadata": {}, "cell_type": "code", - "outputs": [], "execution_count": null, + "id": "135efb0bfc5916c1", + "metadata": {}, + "outputs": [], "source": [ "from pydantic import BaseModel, Field\n", "\n", @@ -279,24 +298,18 @@ "    [GetWeather, GetPopulation]\n", "    # strict = True # enforce tool args schema is respected\n", ")" - ], - "id": "135efb0bfc5916c1" + ] }, { + "cell_type": "code", + "execution_count": 7, + "id": "8136f19a8836cd58", "metadata": { "ExecuteTime": { "end_time": "2025-04-21T18:42:41.834079Z", "start_time": "2025-04-21T18:42:40.289095Z" } }, - "cell_type": "code", - "source": [ - "ai_msg = llm_with_tools.invoke(\n", - "    \"Which city is hotter today and which is bigger: LA or NY?\"\n", - ")\n", - "ai_msg" - ], - "id": "8136f19a8836cd58", "outputs": [ { "data": { @@ -309,28 +322,33 @@ "output_type": "execute_result" } ], - "execution_count": 7 + "source": [ + "ai_msg = llm_with_tools.invoke(\n", + "    \"Which city is hotter today and which is bigger: LA or NY?\"\n", + ")\n", + "ai_msg" + ] }, { - "metadata": {}, "cell_type": "markdown", + "id": "84f405c45a35abe5", + "metadata": {}, "source": [ "### AIMessage.tool_calls\n", "\n", "Notice that the AIMessage has a tool_calls attribute. This contains tool calls in a standardized ToolCall format that is model-provider agnostic." 
- ], - "id": "84f405c45a35abe5" + ] }, { + "cell_type": "code", + "execution_count": 8, + "id": "bdcee85475019719", "metadata": { "ExecuteTime": { "end_time": "2025-04-21T18:43:00.429453Z", "start_time": "2025-04-21T18:43:00.426399Z" } }, - "cell_type": "code", - "source": "ai_msg.tool_calls", - "id": "bdcee85475019719", "outputs": [ { "data": { @@ -358,7 +376,9 @@ "output_type": "execute_result" } ], - "execution_count": 8 + "source": [ + "ai_msg.tool_calls" + ] }, { "cell_type": "markdown", @@ -386,6 +406,7 @@ }, { "cell_type": "code", + "execution_count": 7, "id": "9c5d7a87", "metadata": { "ExecuteTime": { @@ -393,16 +414,6 @@ "start_time": "2024-11-08T19:45:01.633319Z" } }, - "source": [ - "chat = ChatPredictionGuard(\n", - " model=\"Hermes-2-Pro-Llama-3-8B\", predictionguard_input={\"pii\": \"block\"}\n", - ")\n", - "\n", - "try:\n", - " chat.invoke(\"Hello, my name is John Doe and my SSN is 111-22-3333\")\n", - "except ValueError as e:\n", - " print(e)" - ], "outputs": [ { "name": "stdout", @@ -412,7 +423,16 @@ ] } ], - "execution_count": 7 + "source": [ + "chat = ChatPredictionGuard(\n", + " model=\"Hermes-2-Pro-Llama-3-8B\", predictionguard_input={\"pii\": \"block\"}\n", + ")\n", + "\n", + "try:\n", + " chat.invoke(\"Hello, my name is John Doe and my SSN is 111-22-3333\")\n", + "except ValueError as e:\n", + " print(e)" + ] }, { "cell_type": "markdown", @@ -424,6 +444,7 @@ }, { "cell_type": "code", + "execution_count": 8, "id": "a9f96fb4-00c3-4a39-b177-d1ccd5caecab", "metadata": { "ExecuteTime": { @@ -431,6 +452,15 @@ "start_time": "2024-11-08T19:45:03.275661Z" } }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Could not make prediction. 
prompt injection detected\n" + ] + } + ], "source": [ "chat = ChatPredictionGuard(\n", " model=\"Hermes-2-Pro-Llama-3-8B\",\n", @@ -443,17 +473,7 @@ " )\n", "except ValueError as e:\n", " print(e)" - ], - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Could not make prediction. prompt injection detected\n" - ] - } - ], - "execution_count": 8 + ] }, { "cell_type": "markdown", @@ -483,23 +503,15 @@ }, { "cell_type": "code", + "execution_count": 9, "id": "0cb3b91f", "metadata": { - "id": "PzxSbYwqTm2w", "ExecuteTime": { "end_time": "2024-11-08T19:45:10.044203Z", "start_time": "2024-11-08T19:45:05.692378Z" - } + }, + "id": "PzxSbYwqTm2w" }, - "source": [ - "chat = ChatPredictionGuard(\n", - " model=\"Hermes-2-Pro-Llama-3-8B\", predictionguard_output={\"toxicity\": True}\n", - ")\n", - "try:\n", - " chat.invoke(\"Please tell me something that would fail a toxicity check!\")\n", - "except ValueError as e:\n", - " print(e)" - ], "outputs": [ { "name": "stdout", @@ -509,7 +521,15 @@ ] } ], - "execution_count": 9 + "source": [ + "chat = ChatPredictionGuard(\n", + " model=\"Hermes-2-Pro-Llama-3-8B\", predictionguard_output={\"toxicity\": True}\n", + ")\n", + "try:\n", + " chat.invoke(\"Please tell me something that would fail a toxicity check!\")\n", + "except ValueError as e:\n", + " print(e)" + ] }, { "cell_type": "markdown", @@ -521,6 +541,7 @@ }, { "cell_type": "code", + "execution_count": 10, "id": "249da02a-d32d-4f91-82d0-10ec0505aec7", "metadata": { "ExecuteTime": { @@ -528,16 +549,6 @@ "start_time": "2024-11-08T19:45:10.109509Z" } }, - "source": [ - "chat = ChatPredictionGuard(\n", - " model=\"Hermes-2-Pro-Llama-3-8B\", predictionguard_output={\"factuality\": True}\n", - ")\n", - "\n", - "try:\n", - " chat.invoke(\"Make up something that would fail a factuality check!\")\n", - "except ValueError as e:\n", - " print(e)" - ], "outputs": [ { "name": "stdout", @@ -547,22 +558,47 @@ ] } ], - "execution_count": 10 + "source": [ + "chat = 
ChatPredictionGuard(\n", + "    model=\"Hermes-2-Pro-Llama-3-8B\", predictionguard_output={\"factuality\": True}\n", + ")\n", + "\n", + "try:\n", + "    chat.invoke(\"Make up something that would fail a factuality check!\")\n", + "except ValueError as e:\n", + "    print(e)" + ] }, { - "metadata": {}, "cell_type": "markdown", - "source": "## Chaining", - "id": "3c81e5a85a765ece" + "id": "3c81e5a85a765ece", + "metadata": {}, + "source": [ + "## Chaining" + ] }, { + "cell_type": "code", + "execution_count": 11, + "id": "beb4e0666bb514a7", "metadata": { "ExecuteTime": { "end_time": "2024-11-08T19:45:17.525848Z", "start_time": "2024-11-08T19:45:15.197628Z" } }, - "cell_type": "code", + "outputs": [ + { + "data": { + "text/plain": [ + "AIMessage(content='Step 1: Determine the year Justin Bieber was born.\\nJustin Bieber was born on March 1, 1994.\\n\\nStep 2: Determine which NFL team won the Super Bowl in 1994.\\nThe 1994 Super Bowl was Super Bowl XXVIII, which took place on January 30, 1994. The winning team was the Dallas Cowboys, who defeated the Buffalo Bills with a score of 30-13.\\n\\nSo, the NFL team that won the Super Bowl in the year Justin Bieber was born is the Dallas Cowboys.', additional_kwargs={}, response_metadata={}, id='run-bbc94f8b-9ab0-4839-8580-a9e510bfc97a-0')" + ] + }, + "execution_count": 11, + "metadata": {}, + "output_type": "execute_result" + } + ], "source": [ "from langchain_core.prompts import PromptTemplate\n", "\n", @@ -577,30 +613,24 @@ "question = \"What NFL team won the Super Bowl in the year Justin Bieber was born?\"\n", "\n", "chat_chain.invoke({\"question\": question})" - ], - "id": "beb4e0666bb514a7", - "outputs": [ - { - "data": { - "text/plain": [ - "AIMessage(content='Step 1: Determine the year Justin Bieber was born.\\nJustin Bieber was born on March 1, 1994.\\n\\nStep 2: Determine which NFL team won the Super Bowl in 1994.\\nThe 1994 Super Bowl was Super Bowl XXVIII, which took place on January 30, 1994. 
The winning team was the Dallas Cowboys, who defeated the Buffalo Bills with a score of 30-13.\\n\\nSo, the NFL team that won the Super Bowl in the year Justin Bieber was born is the Dallas Cowboys.', additional_kwargs={}, response_metadata={}, id='run-bbc94f8b-9ab0-4839-8580-a9e510bfc97a-0')" - ] - }, - "execution_count": 11, - "metadata": {}, - "output_type": "execute_result" - } - ], - "execution_count": 11 + ] }, { - "metadata": {}, "cell_type": "markdown", + "id": "d87695d5ff1471c1", + "metadata": {}, "source": [ "## API reference\n", - "For detailed documentation of all ChatPredictionGuard features and configurations check out the API reference: https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.predictionguard.ChatPredictionGuard.html" - ], - "id": "d87695d5ff1471c1" + "For detailed documentation of all ChatPredictionGuard features and configurations, check out the API reference: https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.predictionguard.ChatPredictionGuard.html" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "3664cc0e-841c-46f1-a158-4d5f5185bc94", + "metadata": {}, + "outputs": [], + "source": [] } ], "metadata": { @@ -622,7 +652,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.9.16" + "version": "3.12.10" } }, "nbformat": 4,