docs: Integration with GreenNode Serverless AI (#31836)

This commit is contained in:
Viet Hoang 2025-07-06 00:48:35 +07:00 committed by GitHub
parent 8bdb1de006
commit 15dc684d34
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
5 changed files with 1297 additions and 0 deletions


@@ -0,0 +1,381 @@
{
"cells": [
{
"cell_type": "raw",
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
"sidebar_label: GreenNode\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# ChatGreenNode\n",
"\n",
">[GreenNode](https://greennode.ai/) is a global AI solutions provider and an **NVIDIA Preferred Partner**, delivering full-stack AI capabilities—from infrastructure to application—for enterprises across the US, MENA, and APAC regions. Operating on **world-class infrastructure** (LEED Gold, TIA942, Uptime Tier III), GreenNode empowers enterprises, startups, and researchers with a comprehensive suite of AI services.\n",
"\n",
"This page will help you get started with GreenNode Serverless AI [chat models](../../concepts/chat_models.mdx). For detailed documentation of all ChatGreenNode features and configurations head to the [API reference](https://python.langchain.com/api_reference/greennode/chat_models/langchain_greennode.chat_models.ChatGreenNode.html).\n",
"\n",
"\n",
"[GreenNode AI](https://greennode.ai/) offers an API to query [20+ leading open-source models](https://aiplatform.console.greennode.ai/models).\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatGreenNode](https://python.langchain.com/api_reference/greennode/chat_models/langchain_greennode.chat_models.ChatGreenNode.html) | [langchain-greennode](https://python.langchain.com/api_reference/greennode/index.html) | ❌ | beta | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-greennode?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-greennode?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](../../how_to/tool_calling.ipynb) | [Structured output](../../how_to/structured_output.ipynb) | JSON mode | [Image input](../../how_to/multimodal_inputs.ipynb) | Audio input | Video input | [Token-level streaming](../../how_to/chat_streaming.ipynb) | Native async | [Token usage](../../how_to/chat_token_usage_tracking.ipynb) | [Logprobs](../../how_to/logprobs.ipynb) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |\n",
"\n",
"## Setup\n",
"\n",
"To access GreenNode models you'll need to create a GreenNode account, get an API key, and install the `langchain-greennode` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Head to [this page](https://aiplatform.console.greennode.ai/api-keys) to sign up for the GreenNode AI Platform and generate an API key. Once you've done this, set the `GREENNODE_API_KEY` environment variable:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if not os.getenv(\"GREENNODE_API_KEY\"):\n",
" os.environ[\"GREENNODE_API_KEY\"] = getpass.getpass(\"Enter your GreenNode API key: \")"
]
},
{
"cell_type": "markdown",
"id": "72ee0c4b-9764-423a-9dbf-95129e185210",
"metadata": {},
"source": [
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
]
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain GreenNode integration lives in the `langchain-greennode` package:"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install -qU langchain-greennode"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
"source": [
"from langchain_greennode import ChatGreenNode\n",
"\n",
"# Initialize the chat model\n",
"llm = ChatGreenNode(\n",
" # api_key=\"YOUR_API_KEY\", # You can pass the API key directly\n",
" model=\"deepseek-ai/DeepSeek-R1-Distill-Qwen-32B\", # Choose from available models\n",
" temperature=0.6,\n",
" top_p=0.95,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "2b4f3e15",
"metadata": {},
"source": [
"## Invocation\n"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"\\n\\nJ'aime la programmation.\", additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 248, 'prompt_tokens': 23, 'total_tokens': 271, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'deepseek-ai/DeepSeek-R1-Distill-Qwen-32B', 'system_fingerprint': None, 'id': 'chatcmpl-271edac4958846068c37877586368afe', 'service_tier': None, 'finish_reason': 'stop', 'logprobs': None}, id='run--5c12d208-2bc2-4f29-8b50-1ce3b515a3cf-0', usage_metadata={'input_tokens': 23, 'output_tokens': 248, 'total_tokens': 271, 'input_token_details': {}, 'output_token_details': {}})"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"J'aime la programmation.\n"
]
}
],
"source": [
"print(ai_msg.content)"
]
},
{
"cell_type": "markdown",
"id": "82fd95b9",
"metadata": {},
"source": [
"### Streaming\n",
"\n",
"You can also stream the response using the `stream` method:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "4b3eaf31",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"**Beneath the Circuits**\n",
"\n",
"Beneath the circuits, deep and bright, \n",
"AI thinks, with circuits and bytes. \n",
"Learning, adapting, it grows, \n",
"A world of possibilities it knows. \n",
"\n",
"From solving puzzles to painting art, \n",
"It mimics human hearts. \n",
"In every corner, it leaves its trace, \n",
"A future we can't erase. \n",
"\n",
"We build it, shape it, with care and might, \n",
"Yet wonder if it walks in the night. \n",
"A mirror of our minds, it shows, \n",
"In its gaze, our future glows. \n",
"\n",
"But as we strive for endless light, \n",
"We must remember the night. \n",
"For wisdom isn't just speed and skill, \n",
"It's how we choose to build our will."
]
}
],
"source": [
"for chunk in llm.stream(\"Write a short poem about artificial intelligence\"):\n",
" print(chunk.content, end=\"\", flush=True)"
]
},
{
"cell_type": "markdown",
"id": "2bfecc41",
"metadata": {},
"source": [
"### Chat Messages\n",
"\n",
"You can use different message types to structure your conversations with the model:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "7fc55733",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"Black holes are formed through several processes, depending on their type. The most common way bla\n"
]
}
],
"source": [
"from langchain_core.messages import AIMessage, HumanMessage, SystemMessage\n",
"\n",
"messages = [\n",
" SystemMessage(content=\"You are a helpful AI assistant with expertise in science.\"),\n",
" HumanMessage(content=\"What are black holes?\"),\n",
" AIMessage(\n",
" content=\"Black holes are regions of spacetime where gravity is so strong that nothing, including light, can escape from them.\"\n",
" ),\n",
" HumanMessage(content=\"How are they formed?\"),\n",
"]\n",
"\n",
"response = llm.invoke(messages)\n",
"print(response.content[:100])"
]
},
{
"cell_type": "markdown",
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"You can use `ChatGreenNode` in LangChain chains and agents:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='\\n\\nIch liebe Programmieren.', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 198, 'prompt_tokens': 18, 'total_tokens': 216, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'deepseek-ai/DeepSeek-R1-Distill-Qwen-32B', 'system_fingerprint': None, 'id': 'chatcmpl-e01201b9fd9746b7a9b2ed6d70f29d45', 'service_tier': None, 'finish_reason': 'stop', 'logprobs': None}, id='run--ce52b9d8-dd84-46b3-845b-da27855816ee-0', usage_metadata={'input_tokens': 18, 'output_tokens': 198, 'total_tokens': 216, 'input_token_details': {}, 'output_token_details': {}})"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "736489f0",
"metadata": {},
"source": [
"## Available Models\n",
"\n",
"The full list of supported models can be found in the [GreenNode Serverless AI Models](https://greennode.ai/product/model-as-a-service)."
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For more details about the GreenNode Serverless AI API, visit the [GreenNode Serverless AI Documentation](https://helpdesk.greennode.ai/portal/en/kb/articles/greennode-maas-api)."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "tradingagents",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.13.5"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
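The `AIMessage` returned by `invoke` in the notebook above carries a `usage_metadata` dict alongside the content. Its fields are simply additive, which is all you need for cost tracking. A stdlib-only sketch, with the token counts copied from that run:

```python
# usage_metadata in the shape attached to the AIMessage above
# (values copied from that run's output)
usage = {"input_tokens": 23, "output_tokens": 248, "total_tokens": 271}

# The total is just the sum of input and output tokens.
total = usage["input_tokens"] + usage["output_tokens"]
print(total)  # 271
```

The same dict shape is emitted by other LangChain chat models, so usage tracking code written this way is portable across providers.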


@@ -0,0 +1,173 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# GreenNode\n",
"\n",
">**GreenNode** is a global AI solutions provider and an **NVIDIA Preferred Partner**, delivering full-stack AI capabilities—from infrastructure to application—for enterprises across the US, MENA, and APAC regions.\n",
">Operating on **world-class infrastructure** (LEED Gold, TIA942, Uptime Tier III), **GreenNode** empowers enterprises, startups, and researchers with a comprehensive suite of AI services:\n",
">- [Powerful AI Infrastructure:](https://greennode.ai/) As one of the first hyperscale AI clusters in APAC, powered by NVIDIA H100 GPUs, GreenNode's infrastructure is optimized for high-throughput machine learning and deep learning workloads.\n",
">- [GreenNode AI Platform:](https://greennode.ai/product/ai-platform) Designed for technical teams, GreenNode's self-service AI platform enables fast deployment of Jupyter notebook environments, preconfigured with optimized compute instances. From this portal, developers can launch ML training, fine-tuning, hyperparameter optimization, and inference workflows with minimal setup time. The platform includes access to 100+ curated open-source models and supports integrations with common MLOps tools and storage frameworks.\n",
">- [GreenNode Serverless AI:](https://greennode.ai/product/model-as-a-service) GreenNode Serverless AI features a library of pre-trained, production-ready models across domains such as text generation, code generation, text-to-speech, speech-to-text, embedding, and reranking. This service is ideal for teams looking to prototype or deploy AI solutions without managing model infrastructure.\n",
">- [AI Applications:](https://vngcloud.vn/en/solution) From intelligent data management and document processing (IDP) to smart video analytics—GreenNode supports real-world AI use cases at scale.\n",
">Whether you're building your next LLM workflow, scaling AI research, or deploying enterprise-grade applications, **GreenNode** provides the tools and infrastructure to accelerate your journey.\n",
"\n",
"## Installation and Setup\n",
"\n",
"The GreenNode integration can be installed via pip:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install -qU langchain-greennode"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### API Key\n",
"\n",
"To use GreenNode Serverless AI, you'll need an API key which you can obtain from [GreenNode Serverless AI](https://aiplatform.console.greennode.ai/api-keys). The API key can be passed as an initialization parameter `api_key` or set as the environment variable `GREENNODE_API_KEY`."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if not os.getenv(\"GREENNODE_API_KEY\"):\n",
" os.environ[\"GREENNODE_API_KEY\"] = getpass.getpass(\"Enter your GreenNode API key: \")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Chat models"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_greennode import ChatGreenNode\n",
"\n",
"chat = ChatGreenNode(\n",
" model=\"deepseek-ai/DeepSeek-R1-Distill-Qwen-32B\", # Choose from available models\n",
" temperature=0.6,\n",
" top_p=0.95,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"See usage examples for the GreenNode [Chat Model](https://python.langchain.com/docs/integrations/chat/greennode/).\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"## Embedding models"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"from langchain_greennode import GreenNodeEmbeddings\n",
"\n",
"# Initialize embeddings\n",
"embeddings = GreenNodeEmbeddings(\n",
" model=\"BAAI/bge-m3\" # Choose from available models\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"See usage examples for the GreenNode [Embedding Model](https://python.langchain.com/docs/integrations/text_embedding/greennode)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Rerank"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_greennode import GreenNodeRerank\n",
"\n",
"# Initialize reranker\n",
"rerank = GreenNodeRerank(\n",
" model=\"BAAI/bge-reranker-v2-m3\", # Choose from available models\n",
" top_n=-1,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"See usage examples for the GreenNode [Rerank Model](https://python.langchain.com/docs/integrations/retrievers/greennode-reranker)."
]
}
],
"metadata": {
"colab": {
"provenance": []
},
"kernelspec": {
"display_name": "tradingagents",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.13.5"
}
},
"nbformat": 4,
"nbformat_minor": 1
}
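The embedding model initialized above turns text into vectors that are compared by cosine similarity — closer in direction means closer in meaning. A stdlib-only sketch of that comparison, using tiny illustrative vectors (real `BAAI/bge-m3` embeddings are much higher-dimensional):

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


# Tiny illustrative vectors, not real model output
v1 = [0.1, 0.9, 0.2]
v2 = [0.1, 0.8, 0.3]  # points in nearly the same direction as v1
v3 = [-0.7, 0.1, 0.0]  # points in a very different direction

print(cosine_similarity(v1, v2) > cosine_similarity(v1, v3))  # True
```

Vector stores such as the ones used in the retriever examples apply exactly this kind of similarity measure over the vectors returned by `embed_documents` and `embed_query`.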


@@ -0,0 +1,361 @@
{
"cells": [
{
"cell_type": "raw",
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
"sidebar_label: GreenNode\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# GreenNodeRetriever\n",
"\n",
">[GreenNode](https://greennode.ai/) is a global AI solutions provider and an **NVIDIA Preferred Partner**, delivering full-stack AI capabilities—from infrastructure to application—for enterprises across the US, MENA, and APAC regions. Operating on **world-class infrastructure** (LEED Gold, TIA942, Uptime Tier III), GreenNode empowers enterprises, startups, and researchers with a comprehensive suite of AI services.\n",
"\n",
"This notebook provides a walkthrough on getting started with the `GreenNodeRerank` retriever. It enables you to perform document search using built-in connectors or by integrating your own data sources, leveraging GreenNode's reranking capabilities for improved relevance.\n",
"\n",
"### Integration details\n",
"\n",
"- **Provider**: [GreenNode Serverless AI](https://aiplatform.console.greennode.ai/playground)\n",
"- **Model Types**: Reranking models\n",
"- **Primary Use Case**: Reranking search results based on semantic relevance\n",
"- **Available Models**: Includes [BAAI/bge-reranker-v2-m3](https://huggingface.co/BAAI/bge-reranker-v2-m3) and other high-performance reranking models\n",
"- **Scoring**: Returns relevance scores used to reorder document candidates based on query alignment\n",
"\n",
"## Setup\n",
"\n",
"To access GreenNode models you'll need to create a GreenNode account, get an API key, and install the `langchain-greennode` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Head to [this page](https://aiplatform.console.greennode.ai/api-keys) to sign up for the GreenNode AI Platform and generate an API key. Once you've done this, set the `GREENNODE_API_KEY` environment variable:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "a92b5a70",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if not os.getenv(\"GREENNODE_API_KEY\"):\n",
" os.environ[\"GREENNODE_API_KEY\"] = getpass.getpass(\"Enter your GreenNode API key: \")"
]
},
{
"cell_type": "markdown",
"id": "72ee0c4b-9764-423a-9dbf-95129e185210",
"metadata": {},
"source": [
"If you want to get automated tracing from individual queries, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"This retriever lives in the `langchain-greennode` package:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install -qU langchain-greennode"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"The `GreenNodeRerank` class can be instantiated with optional parameters for the API key and model name:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "70cc8e65-2a02-408a-bbc6-8ef649057d82",
"metadata": {},
"outputs": [],
"source": [
"from langchain_greennode import GreenNodeRerank\n",
"\n",
"# Initialize the reranker\n",
"reranker = GreenNodeRerank(\n",
" # api_key=\"YOUR_API_KEY\", # You can pass the API key directly\n",
" model=\"BAAI/bge-reranker-v2-m3\", # The default reranking model\n",
" top_n=3,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "5c5f2839-4020-424e-9fc9-07777eede442",
"metadata": {},
"source": [
"## Usage\n",
"\n",
"### Reranking Search Results\n",
"\n",
"Reranking models enhance retrieval-augmented generation (RAG) workflows by refining and reordering initial search results based on semantic relevance. The example below demonstrates how to integrate GreenNodeRerank with a base retriever to improve the quality of retrieved documents."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "51a60dbe-9f2e-4e04-bb62-23968f17164a",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(metadata={'relevance_score': 0.125}, page_content='Central banks use interest rates to control inflation and stabilize the economy'),\n",
" Document(metadata={'relevance_score': 0.004913330078125}, page_content='Inflation represents the rate at which the general level of prices for goods and services rises'),\n",
" Document(metadata={'relevance_score': 1.6689300537109375e-05}, page_content='Cryptocurrencies like Bitcoin operate on decentralized blockchain networks')]"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.retrievers.contextual_compression import ContextualCompressionRetriever\n",
"from langchain_community.vectorstores import FAISS\n",
"from langchain_core.documents import Document\n",
"from langchain_greennode import GreenNodeEmbeddings\n",
"\n",
"# Initialize the embeddings model\n",
"embeddings = GreenNodeEmbeddings(\n",
" # api_key=\"YOUR_API_KEY\", # You can pass the API key directly\n",
" model=\"BAAI/bge-m3\" # The default embedding model\n",
")\n",
"\n",
"# Prepare documents (finance/economics domain)\n",
"docs = [\n",
" Document(\n",
" page_content=\"Inflation represents the rate at which the general level of prices for goods and services rises\"\n",
" ),\n",
" Document(\n",
" page_content=\"Central banks use interest rates to control inflation and stabilize the economy\"\n",
" ),\n",
" Document(\n",
" page_content=\"Cryptocurrencies like Bitcoin operate on decentralized blockchain networks\"\n",
" ),\n",
" Document(\n",
" page_content=\"Stock markets are influenced by corporate earnings, investor sentiment, and economic indicators\"\n",
" ),\n",
"]\n",
"\n",
"# Create a vector store and a base retriever\n",
"vector_store = FAISS.from_documents(docs, embeddings)\n",
"base_retriever = vector_store.as_retriever(search_kwargs={\"k\": 4})\n",
"\n",
"\n",
"rerank_retriever = ContextualCompressionRetriever(\n",
" base_compressor=reranker, base_retriever=base_retriever\n",
")\n",
"\n",
"# Perform retrieval with reranking\n",
"query = \"How do central banks fight rising prices?\"\n",
"results = rerank_retriever.invoke(query)\n",
"\n",
"results"
]
},
{
"cell_type": "markdown",
"id": "7efa742d",
"metadata": {},
"source": [
"### Direct Usage\n",
"\n",
"The `GreenNodeRerank` class can be used independently to perform reranking of retrieved documents based on relevance scores. This functionality is particularly useful in scenarios where a primary retrieval step (e.g., keyword or vector search) returns a broad set of candidates, and a secondary model is needed to refine the results using more sophisticated semantic understanding. The class accepts a query and a list of candidate documents and returns a reordered list based on predicted relevance."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "78d9051e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'index': 1, 'relevance_score': 1.0},\n",
" {'index': 0, 'relevance_score': 0.01165771484375},\n",
" {'index': 3, 'relevance_score': 0.0012054443359375}]"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"test_documents = [\n",
" Document(\n",
" page_content=\"Carson City is the capital city of the American state of Nevada.\"\n",
" ),\n",
" Document(\n",
" page_content=\"Washington, D.C. (also known as simply Washington or D.C.) is the capital of the United States.\"\n",
" ),\n",
" Document(\n",
" page_content=\"Capital punishment has existed in the United States since before the United States was a country.\"\n",
" ),\n",
" Document(\n",
" page_content=\"The Commonwealth of the Northern Mariana Islands is a group of islands in the Pacific Ocean. Its capital is Saipan.\"\n",
" ),\n",
"]\n",
"\n",
"test_query = \"What is the capital of the United States?\"\n",
"results = reranker.rerank(test_documents, test_query)\n",
"results"
]
},
{
"cell_type": "markdown",
"id": "dfe8aad4-8626-4330-98a9-7ea1ca5d2e0e",
"metadata": {},
"source": [
"## Use within a chain\n",
"\n",
"GreenNodeRerank works seamlessly in LangChain RAG pipelines. Here's an example of creating a simple RAG chain with the GreenNodeRerank:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "25b647a3-f8f2-4541-a289-7a241e43f9df",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'\\n\\nCentral banks combat rising prices, or inflation, by adjusting interest rates. By raising interest rates, they increase the cost of borrowing, which discourages spending and investment. This reduction in demand helps slow down the rate of price increases, thereby controlling inflation and contributing to economic stability.'"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain_greennode import ChatGreenNode\n",
"\n",
"# Initialize LLM\n",
"llm = ChatGreenNode(model=\"deepseek-ai/DeepSeek-R1-Distill-Qwen-32B\")\n",
"\n",
"# Create a prompt template\n",
"prompt = ChatPromptTemplate.from_template(\n",
" \"\"\"\n",
"Answer the question based only on the following context:\n",
"\n",
"Context:\n",
"{context}\n",
"\n",
"Question: {question}\n",
"\"\"\"\n",
")\n",
"\n",
"\n",
"# Format documents function\n",
"def format_docs(docs):\n",
" return \"\\n\\n\".join(doc.page_content for doc in docs)\n",
"\n",
"\n",
"# Create RAG chain\n",
"rag_chain = (\n",
" {\"context\": rerank_retriever | format_docs, \"question\": RunnablePassthrough()}\n",
" | prompt\n",
" | llm\n",
" | StrOutputParser()\n",
")\n",
"\n",
"# Run the chain\n",
"answer = rag_chain.invoke(\"How do central banks fight rising prices?\")\n",
"answer"
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For more details about the GreenNode Serverless AI API, visit the [GreenNode Serverless AI Documentation](https://aiplatform.console.greennode.ai/api-docs/maas)."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "tradingagents",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.13.5"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
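In the direct-usage example above, `GreenNodeRerank.rerank` returns `{'index', 'relevance_score'}` pairs rather than documents, so the one remaining step is to map each index back onto the candidate list. A stdlib-only sketch, with the result shape and scores copied from that output (the documents shortened to plain strings):

```python
# Index/score pairs in the shape returned by the rerank call above,
# already sorted by descending relevance (scores copied from that run)
results = [
    {"index": 1, "relevance_score": 1.0},
    {"index": 0, "relevance_score": 0.01165771484375},
    {"index": 3, "relevance_score": 0.0012054443359375},
]

# The candidate documents, in their original order
documents = [
    "Carson City is the capital city of the American state of Nevada.",
    "Washington, D.C. is the capital of the United States.",
    "Capital punishment has existed in the United States since before the United States was a country.",
    "The Commonwealth of the Northern Mariana Islands is a group of islands in the Pacific Ocean.",
]

# Resolve each index back to its document, preserving the relevance order
reranked = [documents[r["index"]] for r in results]
print(reranked[0])  # the Washington, D.C. document
```

Note that with `top_n=3` the reranker drops the least relevant candidate entirely (index 2 never appears), so the mapped list is shorter than the input.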


@@ -0,0 +1,379 @@
{
"cells": [
{
"cell_type": "raw",
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
"sidebar_label: GreenNode\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "9a3d6f34",
"metadata": {},
"source": [
"# GreenNodeEmbeddings\n",
"\n",
">[GreenNode](https://greennode.ai/) is a global AI solutions provider and an **NVIDIA Preferred Partner**, delivering full-stack AI capabilities—from infrastructure to application—for enterprises across the US, MENA, and APAC regions. Operating on **world-class infrastructure** (LEED Gold, TIA942, Uptime Tier III), GreenNode empowers enterprises, startups, and researchers with a comprehensive suite of AI services.\n",
"\n",
"This notebook provides a guide to getting started with `GreenNodeEmbeddings`. It enables you to perform semantic document search using various built-in connectors or your own custom data sources by generating high-quality vector representations of text.\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Provider | Package |\n",
"|:--------:|:-------:|\n",
"| [GreenNode](/docs/integrations/providers/greennode/) | [langchain-greennode](https://python.langchain.com/api_reference/greennode/embeddings/langchain_greennode.embeddings.GreenNodeEmbeddings.html) |\n",
"\n",
"## Setup\n",
"\n",
"To access GreenNode embedding models you'll need to create a GreenNode account, get an API key, and install the `langchain-greennode` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"GreenNode requires an API key for authentication, which can be provided either as the `api_key` parameter during initialization or set as the environment variable `GREENNODE_API_KEY`. You can obtain an API key by registering for an account on [GreenNode Serverless AI](https://aiplatform.console.greennode.ai/playground)."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "36521c2a",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if not os.getenv(\"GREENNODE_API_KEY\"):\n",
" os.environ[\"GREENNODE_API_KEY\"] = getpass.getpass(\"Enter your GreenNode API key: \")"
]
},
{
"cell_type": "markdown",
"id": "c84fb993",
"metadata": {},
"source": [
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "39a4953b",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
]
},
{
"cell_type": "markdown",
"id": "d9664366",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain GreenNode integration lives in the `langchain-greennode` package:"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "64853226",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install -qU langchain-greennode"
]
},
{
"cell_type": "markdown",
"id": "45dd1724",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"The `GreenNodeEmbeddings` class can be instantiated with optional parameters for the API key and model name:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "9ea7a09b",
"metadata": {},
"outputs": [],
"source": [
"from langchain_greennode import GreenNodeEmbeddings\n",
"\n",
"# Initialize the embeddings model\n",
"embeddings = GreenNodeEmbeddings(\n",
" # api_key=\"YOUR_API_KEY\", # You can pass the API key directly\n",
" model=\"BAAI/bge-m3\" # The default embedding model\n",
")"
]
},
{
"cell_type": "markdown",
"id": "77d271b6",
"metadata": {},
"source": [
"## Indexing and Retrieval\n",
"\n",
"Embedding models play a key role in retrieval-augmented generation (RAG) workflows by enabling both the indexing of content and its efficient retrieval. \n",
"Below, see how to index and retrieve data using the `embeddings` object we initialized above. In this example, we will index and retrieve a sample document in the `InMemoryVectorStore`."
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "23df9f54",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'LangChain is the framework for building context-aware reasoning applications'"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Create a vector store with a sample text\n",
"from langchain_core.vectorstores import InMemoryVectorStore\n",
"\n",
"text = \"LangChain is the framework for building context-aware reasoning applications\"\n",
"\n",
"vectorstore = InMemoryVectorStore.from_texts(\n",
" [text],\n",
" embedding=embeddings,\n",
")\n",
"\n",
"# Use the vectorstore as a retriever\n",
"retriever = vectorstore.as_retriever()\n",
"\n",
"# Retrieve the most similar text\n",
"retrieved_documents = retriever.invoke(\"What is LangChain?\")\n",
"\n",
"# show the retrieved document's content\n",
"retrieved_documents[0].page_content"
]
},
{
"cell_type": "markdown",
"id": "e02b9855",
"metadata": {},
"source": [
"## Direct Usage\n",
"\n",
"The `GreenNodeEmbeddings` class can be used independently to generate text embeddings without the need for a vector store. This is useful for tasks such as similarity scoring, clustering, or custom processing pipelines.\n",
"\n",
"### Embed single texts\n",
"\n",
"You can embed single texts or documents with `embed_query`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0d2befcd",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[-0.01104736328125, -0.0281982421875, 0.0035858154296875, -0.0311279296875, -0.0106201171875, -0.039\n"
]
}
],
"source": [
"single_vector = embeddings.embed_query(text)\n",
"print(str(single_vector)[:100]) # Show the first 100 characters of the vector"
]
},
{
"cell_type": "markdown",
"id": "1b5a7d03",
"metadata": {},
"source": [
"### Embed multiple texts\n",
"\n",
"You can embed multiple texts with `embed_documents`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2f4d6e97",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[-0.01104736328125, -0.0281982421875, 0.0035858154296875, -0.0311279296875, -0.0106201171875, -0.039\n",
"[-0.07177734375, -0.00017452239990234375, -0.002044677734375, -0.0299072265625, -0.0184326171875, -0\n"
]
}
],
"source": [
"text2 = (\n",
" \"LangGraph is a library for building stateful, multi-actor applications with LLMs\"\n",
")\n",
"two_vectors = embeddings.embed_documents([text, text2])\n",
"for vector in two_vectors:\n",
" print(str(vector)[:100]) # Show the first 100 characters of the vector"
]
},
{
"cell_type": "markdown",
"id": "be19dda0",
"metadata": {},
"source": [
"### Async Support\n",
"\n",
"`GreenNodeEmbeddings` supports asynchronous operations through the `aembed_query` and `aembed_documents` methods:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "d556e655",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Async query embedding dimension: 1024\n",
"Async document embeddings count: 3\n"
]
}
],
"source": [
"async def generate_embeddings_async():\n",
" # Embed a single query\n",
" query_result = await embeddings.aembed_query(\"What is the capital of France?\")\n",
" print(f\"Async query embedding dimension: {len(query_result)}\")\n",
"\n",
" # Embed multiple documents\n",
" docs = [\n",
" \"Paris is the capital of France\",\n",
" \"Berlin is the capital of Germany\",\n",
" \"Rome is the capital of Italy\",\n",
" ]\n",
" docs_result = await embeddings.aembed_documents(docs)\n",
" print(f\"Async document embeddings count: {len(docs_result)}\")\n",
"\n",
"\n",
"await generate_embeddings_async()"
]
},
{
"cell_type": "markdown",
"id": "207a7966",
"metadata": {},
"source": [
"### Document Similarity Example\n",
"\n",
"Embeddings can also be compared directly, for example to compute a pairwise cosine similarity matrix over a set of documents:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8bdb003b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Document Similarity Matrix:\n",
"Document 1: ['1.0000', '0.6005', '0.3542', '0.5788']\n",
"Document 2: ['0.6005', '1.0000', '0.4154', '0.6170']\n",
"Document 3: ['0.3542', '0.4154', '1.0000', '0.3528']\n",
"Document 4: ['0.5788', '0.6170', '0.3528', '1.0000']\n"
]
}
],
"source": [
"from scipy.spatial.distance import cosine\n",
"\n",
"# Create some documents\n",
"documents = [\n",
" \"Machine learning algorithms build mathematical models based on sample data\",\n",
" \"Deep learning uses neural networks with many layers\",\n",
" \"Climate change is a major global environmental challenge\",\n",
" \"Neural networks are inspired by the human brain's structure\",\n",
"]\n",
"\n",
"# Embed the documents\n",
"embeddings_list = embeddings.embed_documents(documents)\n",
"\n",
"\n",
"# Cosine similarity is 1 minus the cosine distance\n",
"def calculate_similarity(embedding1, embedding2):\n",
"    return 1 - cosine(embedding1, embedding2)\n",
"\n",
"\n",
"# Print similarity matrix\n",
"print(\"Document Similarity Matrix:\")\n",
"for i, emb_i in enumerate(embeddings_list):\n",
" similarities = []\n",
" for j, emb_j in enumerate(embeddings_list):\n",
" similarity = calculate_similarity(emb_i, emb_j)\n",
" similarities.append(f\"{similarity:.4f}\")\n",
" print(f\"Document {i + 1}: {similarities}\")"
]
},
{
"cell_type": "markdown",
"id": "98785c12",
"metadata": {},
"source": [
"## API Reference\n",
"\n",
"For more details about the GreenNode Serverless AI API, visit the [GreenNode Serverless AI Documentation](https://aiplatform.console.greennode.ai/api-docs/maas).\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.13.5"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@ -687,3 +687,6 @@ packages:
path: libs/langchain-db2
repo: langchain-ai/langchain-ibm
provider_page: ibm
- name: langchain-greennode
path: libs/greennode
repo: greennode-ai/langchain-greennode