From 1f0686db806af4fe5f357b494e12cb4786686840 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E5=B0=8F=E8=B1=86=E8=B1=86=E5=AD=A6=E9=95=BF?= <1342181530@qq.com>
Date: Fri, 28 Mar 2025 03:27:04 +0800
Subject: [PATCH] community: add netmind integration (#30149)

Co-authored-by: yanrujing
Co-authored-by: ccurme
---
 docs/docs/integrations/chat/netmind.ipynb     | 326 ++++++++++++++++++
 .../docs/integrations/providers/netmind.ipynb |  73 ++++
 .../integrations/text_embedding/netmind.ipynb | 323 +++++++++++++++++
 libs/packages.yml                             |   3 +
 4 files changed, 725 insertions(+)
 create mode 100644 docs/docs/integrations/chat/netmind.ipynb
 create mode 100644 docs/docs/integrations/providers/netmind.ipynb
 create mode 100644 docs/docs/integrations/text_embedding/netmind.ipynb

diff --git a/docs/docs/integrations/chat/netmind.ipynb b/docs/docs/integrations/chat/netmind.ipynb
new file mode 100644
index 00000000000..50402e976c6
--- /dev/null
+++ b/docs/docs/integrations/chat/netmind.ipynb
@@ -0,0 +1,326 @@
+{
+ "cells": [
+ {
+ "cell_type": "raw",
+ "id": "afaf8039",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "sidebar_label: Netmind\n",
+ "---"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "e49f1e0d",
+ "metadata": {},
+ "source": [
+ "# ChatNetmind\n",
+ "\n",
+ "This will help you get started with Netmind [chat models](https://www.netmind.ai/). For detailed documentation of all ChatNetmind features and configurations, head to the [API reference](https://github.com/protagolabs/langchain-netmind).\n",
+ "\n",
+ "- See https://www.netmind.ai/ for more information.\n",
+ "\n",
+ "## Overview\n",
+ "### Integration details\n",
+ "\n",
+ "| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/) | Package downloads | Package latest |\n",
+ "| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
+ "| [ChatNetmind](https://python.langchain.com/api_reference/) | [langchain-netmind](https://python.langchain.com/api_reference/) | ✅ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-netmind?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-netmind?style=flat-square&label=%20) |\n",
+ "\n",
+ "### Model features\n",
+ "| [Tool calling](../../how_to/tool_calling.ipynb) | [Structured output](../../how_to/structured_output.ipynb) | JSON mode | [Image input](../../how_to/multimodal_inputs.ipynb) | Audio input | Video input | [Token-level streaming](../../how_to/chat_streaming.ipynb) | Native async | [Token usage](../../how_to/chat_token_usage_tracking.ipynb) | [Logprobs](../../how_to/logprobs.ipynb) |\n",
+ "| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
+ "| ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ |\n",
+ "\n",
+ "## Setup\n",
+ "\n",
+ "To access Netmind models you'll need to create a Netmind account, get an API key, and install the `langchain-netmind` integration package.\n",
+ "\n",
+ "### Credentials\n",
+ "\n",
+ "Head to https://www.netmind.ai/ to sign up for Netmind and generate an API key. 
Once you've done this set the NETMIND_API_KEY environment variable:" + ] + }, + { + "cell_type": "code", + "id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94", + "metadata": { + "ExecuteTime": { + "end_time": "2025-03-20T02:00:30.732333Z", + "start_time": "2025-03-20T02:00:28.384208Z" + } + }, + "source": [ + "import getpass\n", + "import os\n", + "\n", + "if not os.getenv(\"NETMIND_API_KEY\"):\n", + " os.environ[\"NETMIND_API_KEY\"] = getpass.getpass(\"Enter your Netmind API key: \")" + ], + "outputs": [], + "execution_count": 1 + }, + { + "cell_type": "markdown", + "id": "72ee0c4b-9764-423a-9dbf-95129e185210", + "metadata": {}, + "source": [ + "If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:" + ] + }, + { + "cell_type": "code", + "id": "a15d341e-3e26-4ca3-830b-5aab30ed66de", + "metadata": { + "ExecuteTime": { + "end_time": "2025-03-20T02:00:33.421446Z", + "start_time": "2025-03-20T02:00:33.419081Z" + } + }, + "source": [ + "# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n", + "# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")" + ], + "outputs": [], + "execution_count": 2 + }, + { + "cell_type": "markdown", + "id": "0730d6a1-c893-4840-9817-5e5251676d5d", + "metadata": {}, + "source": [ + "### Installation\n", + "\n", + "The LangChain Netmind integration lives in the `langchain-netmind` package:" + ] + }, + { + "cell_type": "code", + "id": "652d6238-1f87-422a-b135-f5abbb8652fc", + "metadata": { + "ExecuteTime": { + "end_time": "2025-03-20T02:00:35.923300Z", + "start_time": "2025-03-20T02:00:34.505928Z" + } + }, + "source": [ + "%pip install -qU langchain-netmind" + ], + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\r\n", + "\u001B[1m[\u001B[0m\u001B[34;49mnotice\u001B[0m\u001B[1;39;49m]\u001B[0m\u001B[39;49m A new release of pip is available: \u001B[0m\u001B[31;49m24.0\u001B[0m\u001B[39;49m -> \u001B[0m\u001B[32;49m25.0.1\u001B[0m\r\n", + "\u001B[1m[\u001B[0m\u001B[34;49mnotice\u001B[0m\u001B[1;39;49m]\u001B[0m\u001B[39;49m To update, run: \u001B[0m\u001B[32;49mpip install --upgrade pip\u001B[0m\r\n", + "Note: you may need to restart the kernel to use updated packages.\n" + ] + } + ], + "execution_count": 3 + }, + { + "cell_type": "markdown", + "id": "a38cde65-254d-4219-a441-068766c0d4b5", + "metadata": {}, + "source": [ + "## Instantiation\n", + "\n", + "Now we can instantiate our model object and generate chat completions:\n" + ] + }, + { + "cell_type": "code", + "id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae", + "metadata": { + "ExecuteTime": { + "end_time": "2025-03-20T02:01:08.007764Z", + "start_time": "2025-03-20T02:01:07.391951Z" + } + }, + "source": [ + "from langchain_netmind import ChatNetmind\n", + "\n", + "llm = ChatNetmind(\n", + " model=\"deepseek-ai/DeepSeek-V3\",\n", + " temperature=0,\n", + " max_tokens=None,\n", + " timeout=None,\n", + " max_retries=2,\n", + " # other params...\n", + ")" + ], + "outputs": [], + "execution_count": 4 + }, + { + "cell_type": "markdown", + "id": "2b4f3e15", + "metadata": {}, + "source": "## Invocation\n" + }, + { + "cell_type": "code", + "id": "62e0dbc3", + "metadata": { + "tags": [], + "ExecuteTime": { + "end_time": "2025-03-20T02:01:19.011273Z", + "start_time": "2025-03-20T02:01:10.295510Z" + } + }, + "source": [ + "messages = [\n", + " (\n", + " \"system\",\n", + " \"You are a helpful assistant that translates English to French. 
Translate the user sentence.\",\n", + " ),\n", + " (\"human\", \"I love programming.\"),\n", + "]\n", + "ai_msg = llm.invoke(messages)\n", + "ai_msg" + ], + "outputs": [ + { + "data": { + "text/plain": [ + "AIMessage(content=\"J'adore programmer.\", additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 13, 'prompt_tokens': 31, 'total_tokens': 44, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'deepseek-ai/DeepSeek-V3', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-ca6c2010-844d-4bf6-baac-6e248491b000-0', usage_metadata={'input_tokens': 31, 'output_tokens': 13, 'total_tokens': 44, 'input_token_details': {}, 'output_token_details': {}})" + ] + }, + "execution_count": 5, + "metadata": {}, + "output_type": "execute_result" + } + ], + "execution_count": 5 + }, + { + "cell_type": "code", + "id": "d86145b3-bfef-46e8-b227-4dda5c9c2705", + "metadata": { + "ExecuteTime": { + "end_time": "2025-03-20T02:01:20.240190Z", + "start_time": "2025-03-20T02:01:20.238242Z" + } + }, + "source": [ + "print(ai_msg.content)" + ], + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "J'adore programmer.\n" + ] + } + ], + "execution_count": 6 + }, + { + "cell_type": "markdown", + "id": "18e2bfc0-7e78-4528-a73f-499ac150dca8", + "metadata": {}, + "source": [ + "## Chaining\n", + "\n", + "We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:\n" + ] + }, + { + "cell_type": "code", + "id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b", + "metadata": { + "ExecuteTime": { + "end_time": "2025-03-20T02:01:27.456393Z", + "start_time": "2025-03-20T02:01:23.993410Z" + } + }, + "source": [ + "from langchain_core.prompts import ChatPromptTemplate\n", + "\n", + "prompt = ChatPromptTemplate(\n", + " [\n", + " (\n", + " \"system\",\n", + " \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n", + " ),\n", + " (\"human\", \"{input}\"),\n", + " ]\n", + ")\n", + "\n", + "chain = prompt | llm\n", + "chain.invoke(\n", + " {\n", + " \"input_language\": \"English\",\n", + " \"output_language\": \"German\",\n", + " \"input\": \"I love programming.\",\n", + " }\n", + ")" + ], + "outputs": [ + { + "data": { + "text/plain": [ + "AIMessage(content='Ich liebe es zu programmieren.', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 14, 'prompt_tokens': 26, 'total_tokens': 40, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'deepseek-ai/DeepSeek-V3', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-d63adcc6-53ba-4caa-9a79-78d640b39274-0', usage_metadata={'input_tokens': 26, 'output_tokens': 14, 'total_tokens': 40, 'input_token_details': {}, 'output_token_details': {}})" + ] + }, + "execution_count": 7, + "metadata": {}, + "output_type": "execute_result" + } + ], + "execution_count": 7 + }, + { + "cell_type": "markdown", + "id": "d1ee55bc-ffc8-4cfa-801c-993953a08cfd", + "metadata": {}, + "source": "" + }, + { + "cell_type": "markdown", + "id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3", + "metadata": {}, + "source": [ + "## API reference\n", + "\n", + "For detailed documentation of all ChatNetmind features and configurations head to the API reference: \n", + "* [API reference](https://python.langchain.com/api_reference/) \n", + "* [langchain-netmind](https://github.com/protagolabs/langchain-netmind) \n", + "* [pypi](https://pypi.org/project/langchain-netmind/)" 
+ ]
+ },
+ {
+ "metadata": {},
+ "cell_type": "code",
+ "outputs": [],
+ "execution_count": null,
+ "source": "",
+ "id": "30f8be8c940bfbf3"
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.11.9"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/docs/docs/integrations/providers/netmind.ipynb b/docs/docs/integrations/providers/netmind.ipynb
new file mode 100644
index 00000000000..fa73d88a596
--- /dev/null
+++ b/docs/docs/integrations/providers/netmind.ipynb
@@ -0,0 +1,73 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Netmind\n",
+ "\n",
+ "[Netmind AI](https://www.netmind.ai/): Build AI faster, smarter, and more affordably.\n",
+ "Train, fine-tune, run inference, and scale with Netmind's global GPU network—your all-in-one AI engine.\n",
+ "\n",
+ "This example goes over how to use LangChain to interact with Netmind AI models.\n"
+ ]
+ },
+ {
+ "metadata": {},
+ "cell_type": "markdown",
+ "source": [
+ "## Installation and Setup\n",
+ "\n",
+ "```bash\n",
+ "pip install langchain-netmind\n",
+ "```\n",
+ "\n",
+ "Get a Netmind API key and set it as an environment variable (`NETMIND_API_KEY`). \n",
+ "Head to https://www.netmind.ai/ to sign up for Netmind and generate an API key. \n",
+ "\n"
+ ]
+ },
+ {
+ "metadata": {},
+ "cell_type": "markdown",
+ "source": [
+ "## Chat Models\n",
+ "\n",
+ "For more on Netmind chat models, visit the guide [here](/docs/integrations/chat/netmind)."
+ ]
+ },
+ {
+ "metadata": {},
+ "cell_type": "markdown",
+ "source": [
+ "## Embedding Models\n",
+ "\n",
+ "For more on Netmind embedding models, visit the [guide](/docs/integrations/text_embedding/netmind).\n"
+ ]
+ }
+ ],
+ "metadata": {
+ "colab": {
+ "provenance": []
+ },
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.11"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 1
+}
diff --git a/docs/docs/integrations/text_embedding/netmind.ipynb b/docs/docs/integrations/text_embedding/netmind.ipynb
new file mode 100644
index 00000000000..ad59fc28590
--- /dev/null
+++ b/docs/docs/integrations/text_embedding/netmind.ipynb
@@ -0,0 +1,323 @@
+{
+ "cells": [
+ {
+ "cell_type": "raw",
+ "id": "afaf8039",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "sidebar_label: Netmind\n",
+ "---"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "9a3d6f34",
+ "metadata": {},
+ "source": [
+ "# NetmindEmbeddings\n",
+ "\n",
+ "This will help you get started with Netmind embedding models using LangChain. 
For detailed documentation on `NetmindEmbeddings` features and configuration options, please refer to the [API reference](https://python.langchain.com/api_reference/).\n",
+ "\n",
+ "## Overview\n",
+ "### Integration details\n",
+ "\n",
+ "| Provider | Package |\n",
+ "|:--------:|:-------:|\n",
+ "| [Netmind](/docs/integrations/providers/netmind/) | [langchain-netmind](https://python.langchain.com/api_reference/) |\n",
+ "\n",
+ "## Setup\n",
+ "\n",
+ "To access Netmind embedding models you'll need to create a Netmind account, get an API key, and install the `langchain-netmind` integration package.\n",
+ "\n",
+ "### Credentials\n",
+ "\n",
+ "Head to https://www.netmind.ai/ to sign up for Netmind and generate an API key. Once you've done this, set the NETMIND_API_KEY environment variable:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "id": "36521c2a",
+ "metadata": {
+ "ExecuteTime": {
+ "end_time": "2025-03-20T01:53:29.982962Z",
+ "start_time": "2025-03-20T01:53:27.764291Z"
+ }
+ },
+ "source": [
+ "import getpass\n",
+ "import os\n",
+ "\n",
+ "if not os.getenv(\"NETMIND_API_KEY\"):\n",
+ " os.environ[\"NETMIND_API_KEY\"] = getpass.getpass(\"Enter your Netmind API key: \")"
+ ],
+ "outputs": [],
+ "execution_count": 1
+ },
+ {
+ "cell_type": "markdown",
+ "id": "c84fb993",
+ "metadata": {},
+ "source": [
+ "If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "id": "39a4953b",
+ "metadata": {
+ "ExecuteTime": {
+ "end_time": "2025-03-20T01:53:32.143687Z",
+ "start_time": "2025-03-20T01:53:32.141858Z"
+ }
+ },
+ "source": [
+ "# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
+ "# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
+ ],
+ "outputs": [],
+ "execution_count": 2
+ },
+ {
+ "cell_type": "markdown",
+ "id": "d9664366",
+ "metadata": {},
+ "source": [
+ "### Installation\n",
+ "\n",
+ "The LangChain Netmind integration lives in the `langchain-netmind` package:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "id": "64853226",
+ "metadata": {
+ "ExecuteTime": {
+ "end_time": "2025-03-20T01:53:38.639440Z",
+ "start_time": "2025-03-20T01:53:36.171640Z"
+ }
+ },
+ "source": [
+ "%pip install -qU langchain-netmind"
+ ],
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\r\n",
+ "\u001B[1m[\u001B[0m\u001B[34;49mnotice\u001B[0m\u001B[1;39;49m]\u001B[0m\u001B[39;49m A new release of pip is available: \u001B[0m\u001B[31;49m24.0\u001B[0m\u001B[39;49m -> \u001B[0m\u001B[32;49m25.0.1\u001B[0m\r\n",
+ "\u001B[1m[\u001B[0m\u001B[34;49mnotice\u001B[0m\u001B[1;39;49m]\u001B[0m\u001B[39;49m To update, run: \u001B[0m\u001B[32;49mpip install --upgrade pip\u001B[0m\r\n",
+ "Note: you may need to restart the kernel to use updated packages.\n"
+ ]
+ }
+ ],
+ "execution_count": 3
+ },
+ {
+ "cell_type": "markdown",
+ "id": "45dd1724",
+ "metadata": {},
+ "source": [
+ "## Instantiation\n",
+ "\n",
+ "Now we can instantiate our model object:\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "id": "9ea7a09b",
+ "metadata": {
+ "ExecuteTime": {
+ "end_time": "2025-03-20T01:54:31.005334Z",
+ "start_time": "2025-03-20T01:54:30.146876Z"
+ }
+ },
+ "source": [
+ "from langchain_netmind import NetmindEmbeddings\n",
+ "\n",
+ "embeddings = NetmindEmbeddings(\n",
+ " model=\"nvidia/NV-Embed-v2\",\n",
+ ")"
+ ],
+ "outputs": [],
+ "execution_count": 4
+ },
+ {
+ "cell_type": "markdown",
+ "id": "77d271b6",
+ "metadata": 
{}, + "source": [ + "## Indexing and Retrieval\n", + "\n", + "Embedding models are often used in retrieval-augmented generation (RAG) flows, both as part of indexing data as well as later retrieving it. For more detailed instructions, please see our [RAG tutorials](/docs/tutorials/).\n", + "\n", + "Below, see how to index and retrieve data using the `embeddings` object we initialized above. In this example, we will index and retrieve a sample document in the `InMemoryVectorStore`." + ] + }, + { + "cell_type": "code", + "id": "d817716b", + "metadata": { + "ExecuteTime": { + "end_time": "2025-03-20T01:54:40.963137Z", + "start_time": "2025-03-20T01:54:34.500805Z" + } + }, + "source": [ + "# Create a vector store with a sample text\n", + "from langchain_core.vectorstores import InMemoryVectorStore\n", + "\n", + "text = \"LangChain is the framework for building context-aware reasoning applications\"\n", + "\n", + "vectorstore = InMemoryVectorStore.from_texts(\n", + " [text],\n", + " embedding=embeddings,\n", + ")\n", + "\n", + "# Use the vectorstore as a retriever\n", + "retriever = vectorstore.as_retriever()\n", + "\n", + "# Retrieve the most similar text\n", + "retrieved_documents = retriever.invoke(\"What is LangChain?\")\n", + "\n", + "# show the retrieved document's content\n", + "retrieved_documents[0].page_content" + ], + "outputs": [ + { + "data": { + "text/plain": [ + "'LangChain is the framework for building context-aware reasoning applications'" + ] + }, + "execution_count": 5, + "metadata": {}, + "output_type": "execute_result" + } + ], + "execution_count": 5 + }, + { + "cell_type": "markdown", + "id": "e02b9855", + "metadata": {}, + "source": [ + "## Direct Usage\n", + "\n", + "Under the hood, the vectorstore and retriever implementations are calling `embeddings.embed_documents(...)` and `embeddings.embed_query(...)` to create embeddings for the text(s) used in `from_texts` and retrieval `invoke` operations, respectively.\n", + "\n", + "You can directly call these methods to get embeddings for your own use cases.\n", + "\n", + "### Embed single texts\n", + "\n", + "You can embed single texts or documents with `embed_query`:" + ] + }, + { + "cell_type": "code", + "id": "0d2befcd", + "metadata": { + "ExecuteTime": { + "end_time": "2025-03-20T01:54:49.540750Z", + "start_time": "2025-03-20T01:54:45.196528Z" + } + }, + "source": [ + "single_vector = embeddings.embed_query(text)\n", + "print(str(single_vector)[:100]) # Show the first 100 characters of the vector" + ], + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[-0.0051240199245512486, -0.01726294495165348, 0.011966848745942116, -0.0018107350915670395, 0.01146\n" + ] + } + ], + "execution_count": 6 + }, + { + "cell_type": "markdown", + "id": "1b5a7d03", + "metadata": {}, + "source": [ + "### Embed multiple texts\n", + "\n", + "You can embed multiple texts with `embed_documents`:" + ] + }, + { + "cell_type": "code", + "id": "2f4d6e97", + "metadata": { + "ExecuteTime": { + "end_time": "2025-03-20T01:54:57.089847Z", + "start_time": "2025-03-20T01:54:52.468719Z" + } + }, + "source": [ + "text2 = (\n", + " \"LangGraph is a library for building stateful, multi-actor applications with LLMs\"\n", + ")\n", + "two_vectors = embeddings.embed_documents([text, text2])\n", + "for vector in two_vectors:\n", + " print(str(vector)[:100]) # Show the first 100 characters of the vector" + ], + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[-0.0051240199245512486, -0.01726294495165348, 
0.011966848745942116, -0.0018107350915670395, 0.01146\n",
+ "[0.022523142397403717, -0.002223758026957512, -0.008578270673751831, -0.006029821466654539, 0.008752\n"
+ ]
+ }
+ ],
+ "execution_count": 7
+ },
+ {
+ "cell_type": "markdown",
+ "id": "98785c12",
+ "metadata": {},
+ "source": [
+ "## API Reference\n",
+ "\n",
+ "For detailed documentation on `NetmindEmbeddings` features and configuration options, please refer to: \n",
+ "* [API reference](https://python.langchain.com/api_reference/) \n",
+ "* [langchain-netmind](https://github.com/protagolabs/langchain-netmind) \n",
+ "* [pypi](https://pypi.org/project/langchain-netmind/)\n"
+ ]
+ },
+ {
+ "metadata": {},
+ "cell_type": "code",
+ "outputs": [],
+ "execution_count": null,
+ "source": "",
+ "id": "adb9e45c34733299"
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.5"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/libs/packages.yml b/libs/packages.yml
index d53e7a24ccc..ea4881568b2 100644
--- a/libs/packages.yml
+++ b/libs/packages.yml
@@ -517,6 +517,9 @@ packages:
   repo: OpenGradient/og-langchain
   downloads: 274
   downloads_updated_at: '2025-03-22T21:59:15.663971+00:00'
+- name: langchain-netmind
+  path: .
+  repo: protagolabs/langchain-netmind
 - name: langchain-agentql
   path: langchain
   repo: tinyfish-io/agentql-integrations