mirror of https://github.com/hwchase17/langchain.git
synced 2025-09-26 22:05:29 +00:00
merge
@@ -461,7 +461,7 @@
"id": "f8014c9d",
"metadata": {},
"source": [
"Now, we can initalize the agent with the LLM, the prompt, and the tools. The agent is responsible for taking in input and deciding what actions to take. Crucially, the Agent does not execute those actions - that is done by the AgentExecutor (next step). For more information about how to think about these components, see our [conceptual guide](/docs/concepts/#agents).\n",
"Now, we can initialize the agent with the LLM, the prompt, and the tools. The agent is responsible for taking in input and deciding what actions to take. Crucially, the Agent does not execute those actions - that is done by the AgentExecutor (next step). For more information about how to think about these components, see our [conceptual guide](/docs/concepts/#agents).\n",
"\n",
"Note that we are passing in the `model`, not `model_with_tools`. That is because `create_tool_calling_agent` will call `.bind_tools` for us under the hood."
]
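A minimal sketch of the step this cell describes, assuming the tutorial's `model`, `tools`, and `prompt` are already defined (`create_tool_calling_agent` and `AgentExecutor` come from `langchain.agents`):

```python
from langchain.agents import AgentExecutor, create_tool_calling_agent

# The agent decides which tool to call; .bind_tools is applied internally.
agent = create_tool_calling_agent(model, tools, prompt)

# The executor is what actually runs the chosen tools in a loop.
agent_executor = AgentExecutor(agent=agent, tools=tools)
agent_executor.invoke({"input": "hi!"})
```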
@@ -24,7 +24,7 @@
"\n",
"## Architecture\n",
"\n",
"At a high-level, the steps of constructing a knowledge are from text are:\n",
"At a high-level, the steps of constructing a knowledge graph from text are:\n",
"\n",
"1. **Extracting structured information from text**: Model is used to extract structured graph information from text.\n",
"2. **Storing into graph database**: Storing the extracted structured graph information into a graph database enables downstream RAG applications\n",
@@ -129,13 +129,13 @@
"\n",
"@tool\n",
"def count_emails(last_n_days: int) -> int:\n",
"    \"\"\"Multiply two integers together.\"\"\"\n",
"    \"\"\"Dummy function to count number of e-mails. Returns 2 * last_n_days.\"\"\"\n",
"    return last_n_days * 2\n",
"\n",
"\n",
"@tool\n",
"def send_email(message: str, recipient: str) -> str:\n",
"    \"Add two integers.\"\n",
"    \"\"\"Dummy function for sending an e-mail.\"\"\"\n",
"    return f\"Successfully sent email to {recipient}.\"\n",
"\n",
"\n",
@@ -50,18 +50,18 @@
},
{
"cell_type": "code",
"execution_count": null,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"import os\n",
"from getpass import getpass\n",
"\n",
"os.environ[\"AI21_API_KEY\"] = getpass()"
]
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
@@ -73,14 +73,14 @@
},
{
"cell_type": "code",
"execution_count": null,
"id": "7c2e19d3-7c58-4470-9e1a-718b27a32056",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
]
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
@@ -115,15 +115,15 @@
},
{
"cell_type": "code",
"execution_count": 2,
"id": "c40756fb-cbf8-4d44-a293-3989d707237e",
"metadata": {},
"outputs": [],
"source": [
"from langchain_ai21 import ChatAI21\n",
"\n",
"llm = ChatAI21(model=\"jamba-instruct\", temperature=0)"
]
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
@@ -135,21 +135,8 @@
},
{
"cell_type": "code",
"execution_count": 3,
"id": "46b982dc-5d8a-46da-a711-81c03ccd6adc",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"J'adore programmer.\", id='run-2e8d16d6-a06e-45cb-8d0c-1c8208645033-0')"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
"    (\n",
@@ -160,7 +147,9 @@
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
@@ -174,7 +163,6 @@
},
{
"cell_type": "code",
"execution_count": 4,
"id": "39353473fce5dd2e",
"metadata": {
"collapsed": false,
@@ -182,18 +170,6 @@
"outputs_hidden": false
}
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Ich liebe das Programmieren.', id='run-e1bd82dc-1a7e-4b2e-bde9-ac995929ac0f-0')"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
@@ -215,7 +191,95 @@
"        \"input\": \"I love programming.\",\n",
"    }\n",
")"
]
],
"outputs": [],
"execution_count": null
},
{
"metadata": {},
"cell_type": "markdown",
"source": "# Tool Calls / Function Calling",
"id": "39c0ccd229927eab"
},
{
"metadata": {},
"cell_type": "markdown",
"source": "This example shows how to use tool calling with AI21 models:",
"id": "2bf6b40be07fe2d4"
},
{
"metadata": {},
"cell_type": "code",
"source": [
"import os\n",
"from getpass import getpass\n",
"\n",
"from langchain_ai21.chat_models import ChatAI21\n",
"from langchain_core.messages import HumanMessage, SystemMessage, ToolMessage\n",
"from langchain_core.tools import tool\n",
"from langchain_core.utils.function_calling import convert_to_openai_tool\n",
"\n",
"os.environ[\"AI21_API_KEY\"] = getpass()\n",
"\n",
"\n",
"@tool\n",
"def get_weather(location: str, date: str) -> str:\n",
"    \"\"\"Provide the weather for the specified location on the given date.\"\"\"\n",
"    if location == \"New York\" and date == \"2024-12-05\":\n",
"        return \"25 celsius\"\n",
"    elif location == \"New York\" and date == \"2024-12-06\":\n",
"        return \"27 celsius\"\n",
"    elif location == \"London\" and date == \"2024-12-05\":\n",
"        return \"22 celsius\"\n",
"    return \"32 celsius\"\n",
"\n",
"\n",
"llm = ChatAI21(model=\"jamba-1.5-mini\")\n",
"\n",
"llm_with_tools = llm.bind_tools([convert_to_openai_tool(get_weather)])\n",
"\n",
"chat_messages = [\n",
"    SystemMessage(\n",
"        content=\"You are a helpful assistant. You can use the provided tools \"\n",
"        \"to assist with various tasks and provide accurate information\"\n",
"    )\n",
"]\n",
"\n",
"human_messages = [\n",
"    HumanMessage(\n",
"        content=\"What is the forecast for the weather in New York on December 5, 2024?\"\n",
"    ),\n",
"    HumanMessage(content=\"And what about 2024-12-06?\"),\n",
"    HumanMessage(content=\"OK, thank you.\"),\n",
"    HumanMessage(content=\"What is the expected weather in London on December 5, 2024?\"),\n",
"]\n",
"\n",
"\n",
"for human_message in human_messages:\n",
"    print(f\"User: {human_message.content}\")\n",
"    chat_messages.append(human_message)\n",
"    response = llm_with_tools.invoke(chat_messages)\n",
"    chat_messages.append(response)\n",
"    if response.tool_calls:\n",
"        tool_call = response.tool_calls[0]\n",
"        if tool_call[\"name\"] == \"get_weather\":\n",
"            weather = get_weather.invoke(\n",
"                {\n",
"                    \"location\": tool_call[\"args\"][\"location\"],\n",
"                    \"date\": tool_call[\"args\"][\"date\"],\n",
"                }\n",
"            )\n",
"            chat_messages.append(\n",
"                ToolMessage(content=weather, tool_call_id=tool_call[\"id\"])\n",
"            )\n",
"            llm_answer = llm_with_tools.invoke(chat_messages)\n",
"            print(f\"Assistant: {llm_answer.content}\")\n",
"    else:\n",
"        print(f\"Assistant: {response.content}\")"
],
"id": "a181a28df77120fb",
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
@@ -49,7 +49,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"The ScrapflyLoader also allows passigng ScrapeConfig object for customizing the scrape request. See the documentation for the full feature details and their API params: https://scrapfly.io/docs/scrape-api/getting-started"
"The ScrapflyLoader also allows passing ScrapeConfig object for customizing the scrape request. See the documentation for the full feature details and their API params: https://scrapfly.io/docs/scrape-api/getting-started"
]
},
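A hedged sketch of passing a scrape config to the loader; the config keys shown are examples from the Scrapfly scrape-API docs, and the URL and API key are placeholders:

```python
from langchain_community.document_loaders import ScrapflyLoader

scrapfly_scrape_config = {
    "asp": True,        # example key: anti-scraping protection bypass
    "render_js": True,  # example key: render JavaScript before extraction
}

loader = ScrapflyLoader(
    ["https://web-scraping.dev/products"],
    api_key="your-scrapfly-api-key",
    scrape_config=scrapfly_scrape_config,
)
docs = loader.load()
```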
{
@@ -12,7 +12,7 @@ pip install langchain-huggingface

## Chat models

### Models from Hugging Face
### ChatHuggingFace

We can use the `Hugging Face` LLM classes or directly use the `ChatHuggingFace` class.
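A minimal sketch of the `ChatHuggingFace` route, assuming a serverless endpoint is reachable (the repo id is an arbitrary example):

```python
from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint

llm = HuggingFaceEndpoint(repo_id="HuggingFaceH4/zephyr-7b-beta")
chat = ChatHuggingFace(llm=llm)
chat.invoke("What is the capital of France?")
```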
@@ -24,7 +24,16 @@ from langchain_huggingface import ChatHuggingFace

## LLMs

### Hugging Face Local Pipelines
### HuggingFaceEndpoint

See a [usage example](/docs/integrations/llms/huggingface_endpoint).

```python
from langchain_huggingface import HuggingFaceEndpoint
```

### HuggingFacePipeline

Hugging Face models can be run locally through the `HuggingFacePipeline` class.
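A short local-pipeline sketch (the model id and generation kwargs are illustrative):

```python
from langchain_huggingface import HuggingFacePipeline

hf = HuggingFacePipeline.from_model_id(
    model_id="gpt2",
    task="text-generation",
    pipeline_kwargs={"max_new_tokens": 32},
)
hf.invoke("Once upon a time")
```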
@@ -44,6 +53,22 @@ See a [usage example](/docs/integrations/text_embedding/huggingfacehub).
from langchain_huggingface import HuggingFaceEmbeddings
```
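A quick usage sketch (the sentence-transformers model name is an example):

```python
from langchain_huggingface import HuggingFaceEmbeddings

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")
vector = embeddings.embed_query("What is LangChain?")
```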

### HuggingFaceEndpointEmbeddings

See a [usage example](/docs/integrations/text_embedding/huggingfacehub).

```python
from langchain_huggingface import HuggingFaceEndpointEmbeddings
```

### HuggingFaceInferenceAPIEmbeddings

See a [usage example](/docs/integrations/text_embedding/huggingfacehub).

```python
from langchain_community.embeddings import HuggingFaceInferenceAPIEmbeddings
```

### HuggingFaceInstructEmbeddings

See a [usage example](/docs/integrations/text_embedding/instruct_embeddings).

@@ -63,25 +88,6 @@ See a [usage example](/docs/integrations/text_embedding/bge_huggingface).
from langchain_community.embeddings import HuggingFaceBgeEmbeddings
```

### Hugging Face Text Embeddings Inference (TEI)

>[Hugging Face Text Embeddings Inference (TEI)](https://huggingface.co/docs/text-embeddings-inference/index) is a toolkit for deploying and serving open-source
> text embeddings and sequence classification models. `TEI` enables high-performance extraction for the most popular models,
>including `FlagEmbedding`, `Ember`, `GTE` and `E5`.

We need to install the `huggingface-hub` python package.

```bash
pip install huggingface-hub
```

See a [usage example](/docs/integrations/text_embedding/text_embeddings_inference).

```python
from langchain_community.embeddings import HuggingFaceHubEmbeddings
```


## Document Loaders

### Hugging Face dataset

@@ -104,7 +110,34 @@ See a [usage example](/docs/integrations/document_loaders/hugging_face_dataset).
from langchain_community.document_loaders.hugging_face_dataset import HuggingFaceDatasetLoader
```

### Hugging Face model loader

>Load model information from `Hugging Face Hub`, including README content.
>
>This loader interfaces with the `Hugging Face Models API` to fetch
> and load model metadata and README files.
> The API allows you to search and filter models based on
> specific criteria such as model tags, authors, and more.

```python
from langchain_community.document_loaders import HuggingFaceModelLoader
```

### Image captions

It uses Hugging Face models to generate image captions.

We need to install several python packages.

```bash
pip install transformers pillow
```

See a [usage example](/docs/integrations/document_loaders/image_captions).

```python
from langchain_community.document_loaders import ImageCaptionLoader
```

## Tools

@@ -124,3 +157,12 @@ See a [usage example](/docs/integrations/tools/huggingface_tools).
```python
from langchain_community.agent_toolkits.load_tools import load_huggingface_tool
```

### Hugging Face Text-to-Speech Model Inference

> It is a wrapper around the `Hugging Face Text-to-Speech Inference API`.

```python
from langchain_community.tools.audio import HuggingFaceTextToSpeechModelInference
```

@@ -436,6 +436,8 @@ See a [usage example](/docs/integrations/tools/azure_ai_services).
from langchain_community.agent_toolkits import azure_ai_services
```

#### Azure AI Services individual tools

The `azure_ai_services` toolkit includes the following tools:

- Image Analysis: [AzureAiServicesImageAnalysisTool](https://python.langchain.com/v0.2/api_reference/community/tools/langchain_community.tools.azure_ai_services.image_analysis.AzureAiServicesImageAnalysisTool.html)
@@ -460,6 +462,23 @@ See a [usage example](/docs/integrations/tools/office365).
from langchain_community.agent_toolkits import O365Toolkit
```

#### Office 365 individual tools

You can use individual tools from the Office 365 Toolkit:
- `O365CreateDraftMessage`: tool for creating a draft email in Office 365
- `O365SearchEmails`: tool for searching email messages in Office 365
- `O365SearchEvents`: tool for searching calendar events in Office 365
- `O365SendEvent`: tool for sending calendar events in Office 365
- `O365SendMessage`: tool for sending an email in Office 365

```python
from langchain_community.tools.office365 import O365CreateDraftMessage
from langchain_community.tools.office365 import O365SearchEmails
from langchain_community.tools.office365 import O365SearchEvents
from langchain_community.tools.office365 import O365SendEvent
from langchain_community.tools.office365 import O365SendMessage
```

### Microsoft Azure PowerBI

We need to install the `azure-identity` python package.

@@ -475,6 +494,20 @@ from langchain_community.agent_toolkits import PowerBIToolkit
from langchain_community.utilities.powerbi import PowerBIDataset
```

#### PowerBI individual tools

You can use individual tools from the Azure PowerBI Toolkit:
- `InfoPowerBITool`: tool for getting metadata about a PowerBI Dataset
- `ListPowerBITool`: tool for getting table names
- `QueryPowerBITool`: tool for querying a PowerBI Dataset

```python
from langchain_community.tools.powerbi.tool import InfoPowerBITool
from langchain_community.tools.powerbi.tool import ListPowerBITool
from langchain_community.tools.powerbi.tool import QueryPowerBITool
```


### PlayWright Browser Toolkit

>[Playwright](https://github.com/microsoft/playwright) is an open-source automation tool
63 docs/docs/integrations/providers/apache.mdx Normal file
@@ -0,0 +1,63 @@
# Apache Software Foundation

>[The Apache Software Foundation (Wikipedia)](https://en.wikipedia.org/wiki/The_Apache_Software_Foundation)
> is a decentralized open source community of developers. The software they
> produce is distributed under the terms of the Apache License, a permissive
> open-source license for free and open-source software (FOSS). The Apache projects
> are characterized by a collaborative, consensus-based development process
> and an open and pragmatic software license, which is to say that it
> allows developers, who receive the software freely, to redistribute
> it under non-free terms. Each project is managed by a self-selected
> team of technical experts who are active contributors to the project.

## Apache AGE

>[Apache AGE](https://age.apache.org/) is a `PostgreSQL` extension that provides
> graph database functionality. `AGE` is an acronym for `A Graph Extension`, and
> is inspired by Bitnine’s fork of `PostgreSQL 10`, `AgensGraph`, which is
> a multimodal database. The goal of the project is to create a single
> storage that can handle both relational and graph model data so that users
> can use standard ANSI SQL along with `openCypher`, the Graph query language.
> The data elements `Apache AGE` stores are nodes, edges connecting them, and
> attributes of nodes and edges.

See more about [integrating with Apache AGE](/docs/integrations/graphs/apache_age).

## Apache Cassandra

>[Apache Cassandra](https://cassandra.apache.org/) is a NoSQL, row-oriented,
> highly scalable and highly available database. Starting with version 5.0,
> the database ships with vector search capabilities.

See more about [integrating with Apache Cassandra](/docs/integrations/providers/cassandra/).

## Apache Doris

>[Apache Doris](https://doris.apache.org/) is a modern data warehouse for
> real-time analytics. It delivers lightning-fast analytics on real-time data at scale.
>
>Usually `Apache Doris` is categorized into OLAP, and it has shown excellent
> performance in ClickBench — a Benchmark For Analytical DBMS. Since it has
> a super-fast vectorized execution engine, it could also be used as a fast vectordb.

See more about [integrating with Apache Doris](/docs/integrations/providers/apache_doris/).

## Apache Kafka

>[Apache Kafka](https://github.com/apache/kafka) is a distributed messaging system
> that is used to publish and subscribe to streams of records.

See more about [integrating with Apache Kafka](/docs/integrations/memory/kafka_chat_message_history).


## Apache Spark

>[Apache Spark](https://spark.apache.org/) is a unified analytics engine for
> large-scale data processing. It provides high-level APIs in Scala, Java,
> Python, and R, and an optimized engine that supports general computation
> graphs for data analysis. It also supports a rich set of higher-level
> tools including `Spark SQL` for SQL and DataFrames, `pandas API on Spark`
> for pandas workloads, `MLlib` for machine learning,
> `GraphX` for graph processing, and `Structured Streaming` for stream processing.

See more about [integrating with Apache Spark](/docs/integrations/providers/spark).
22 docs/docs/integrations/providers/apple.mdx Normal file
@@ -0,0 +1,22 @@
# Apple

>[Apple Inc. (Wikipedia)](https://en.wikipedia.org/wiki/Apple_Inc.) is an American
> multinational corporation and technology company.
>
> [iMessage (Wikipedia)](https://en.wikipedia.org/wiki/IMessage) is an instant
> messaging service developed by Apple Inc. and launched in 2011.
> `iMessage` functions exclusively on Apple platforms.

## Installation and Setup

See [setup instructions](/docs/integrations/chat_loaders/imessage).

## Chat loader

It loads chat sessions from the `iMessage` `chat.db` `SQLite` file.

See a [usage example](/docs/integrations/chat_loaders/imessage).

```python
from langchain_community.chat_loaders.imessage import IMessageChatLoader
```
@@ -1,69 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Nomic\n",
"\n",
"Nomic currently offers two products:\n",
"\n",
"- Atlas: their Visual Data Engine\n",
"- GPT4All: their Open Source Edge Language Model Ecosystem\n",
"\n",
"The Nomic integration exists in its own [partner package](https://pypi.org/project/langchain-nomic/). You can install it with:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-nomic"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Currently, you can import their hosted [embedding model](/docs/integrations/text_embedding/nomic) as follows:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"id": "y8ku6X96sebl"
},
"outputs": [],
"source": [
"from langchain_nomic import NomicEmbeddings"
]
}
],
"metadata": {
"colab": {
"provenance": []
},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.11"
}
},
"nbformat": 4,
"nbformat_minor": 1
}
58 docs/docs/integrations/providers/nomic.mdx Normal file
@@ -0,0 +1,58 @@
# Nomic

>[Nomic](https://www.nomic.ai/) builds tools that enable everyone to interact with AI scale datasets and run AI models on consumer computers.
>
>`Nomic` currently offers two products:
>
>- `Atlas`: the Visual Data Engine
>- `GPT4All`: the Open Source Edge Language Model Ecosystem

The Nomic integration exists in two partner packages: [langchain-nomic](https://pypi.org/project/langchain-nomic/)
and in [langchain-community](https://pypi.org/project/langchain-community/).

## Installation

You can install them with:

```bash
pip install -U langchain-nomic
pip install -U langchain-community
```

## LLMs

### GPT4All

See [a usage example](/docs/integrations/llms/gpt4all).

```python
from langchain_community.llms import GPT4All
```

## Embedding models

### NomicEmbeddings

See [a usage example](/docs/integrations/text_embedding/nomic).

```python
from langchain_nomic import NomicEmbeddings
```
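A short usage sketch (the model name is an example; a `NOMIC_API_KEY` environment variable is assumed):

```python
from langchain_nomic import NomicEmbeddings

embeddings = NomicEmbeddings(model="nomic-embed-text-v1.5")
vector = embeddings.embed_query("What is LangChain?")
```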

### GPT4All

See [a usage example](/docs/integrations/text_embedding/gpt4all).

```python
from langchain_community.embeddings import GPT4AllEmbeddings
```

## Vector store

### Atlas

See [a usage example and installation instructions](/docs/integrations/vectorstores/atlas).

```python
from langchain_community.vectorstores import AtlasDB
```
49 docs/docs/integrations/providers/spark.mdx Normal file
@@ -0,0 +1,49 @@
# Spark

>[Apache Spark](https://spark.apache.org/) is a unified analytics engine for
> large-scale data processing. It provides high-level APIs in Scala, Java,
> Python, and R, and an optimized engine that supports general computation
> graphs for data analysis. It also supports a rich set of higher-level
> tools including `Spark SQL` for SQL and DataFrames, `pandas API on Spark`
> for pandas workloads, `MLlib` for machine learning,
> `GraphX` for graph processing, and `Structured Streaming` for stream processing.

## Document loaders

### PySpark

It loads data from a `PySpark` DataFrame.

See a [usage example](/docs/integrations/document_loaders/pyspark_dataframe).

```python
from langchain_community.document_loaders import PySparkDataFrameLoader
```
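A short sketch of the loader in use; the CSV path and `page_content_column` value are illustrative:

```python
from pyspark.sql import SparkSession
from langchain_community.document_loaders import PySparkDataFrameLoader

spark = SparkSession.builder.getOrCreate()
df = spark.read.csv("example_data/teams.csv", header=True)

# Each row becomes a Document; the named column supplies the page content.
loader = PySparkDataFrameLoader(spark, df, page_content_column="Team")
docs = loader.load()
```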

## Tools/Toolkits

### Spark SQL toolkit

Toolkit for interacting with `Spark SQL`.

See a [usage example](/docs/integrations/tools/spark_sql).

```python
from langchain_community.agent_toolkits import SparkSQLToolkit, create_spark_sql_agent
from langchain_community.utilities.spark_sql import SparkSQL
```
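A hedged sketch of wiring the toolkit into an agent; an `llm` and a populated `langchain_example` schema are assumed:

```python
from langchain_community.agent_toolkits import SparkSQLToolkit, create_spark_sql_agent
from langchain_community.utilities.spark_sql import SparkSQL

spark_sql = SparkSQL(schema="langchain_example")
toolkit = SparkSQLToolkit(db=spark_sql, llm=llm)
agent_executor = create_spark_sql_agent(llm=llm, toolkit=toolkit, verbose=True)
agent_executor.run("Describe the titanic table")
```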

#### Spark SQL individual tools

You can use individual tools from the Spark SQL Toolkit:
- `InfoSparkSQLTool`: tool for getting metadata about a Spark SQL database
- `ListSparkSQLTool`: tool for getting table names
- `QueryCheckerTool`: tool that uses an LLM to check if a query is correct
- `QuerySparkSQLTool`: tool for querying a Spark SQL database

```python
from langchain_community.tools.spark_sql.tool import InfoSparkSQLTool
from langchain_community.tools.spark_sql.tool import ListSparkSQLTool
from langchain_community.tools.spark_sql.tool import QueryCheckerTool
from langchain_community.tools.spark_sql.tool import QuerySparkSQLTool
```
@@ -4,11 +4,26 @@
It has cross-domain knowledge and language understanding ability, gained by learning from large amounts of text, code, and images.
It can understand and perform tasks based on natural dialogue.

## SparkLLM LLM Model
An example is available at [example](/docs/integrations/llms/sparkllm).
## Chat models

## SparkLLM Chat Model
An example is available at [example](/docs/integrations/chat/sparkllm).
See a [usage example](/docs/integrations/chat/sparkllm).

## SparkLLM Text Embedding Model
An example is available at [example](/docs/integrations/text_embedding/sparkllm)
```python
from langchain_community.chat_models import ChatSparkLLM
```

## LLMs

See a [usage example](/docs/integrations/llms/sparkllm).

```python
from langchain_community.llms import SparkLLM
```

## Embedding models

See a [usage example](/docs/integrations/text_embedding/sparkllm).

```python
from langchain_community.embeddings import SparkLLMTextEmbeddings
```
34 docs/docs/integrations/providers/transwarp.mdx Normal file
@@ -0,0 +1,34 @@
# Transwarp

>[Transwarp](https://www.transwarp.cn/en/introduction) aims to build
> enterprise-level big data and AI infrastructure software,
> to shape the future of the data world. It provides enterprises with
> infrastructure software and services around the whole data lifecycle,
> including integration, storage, governance, modeling, analysis,
> mining and circulation.
>
> `Transwarp` focuses on technology research and
> development and has accumulated core technologies in these aspects:
> distributed computing, SQL compilations, database technology,
> unification for multi-model data management, container-based cloud computing,
> and big data analytics and intelligence.

## Installation

You have to install several python packages:

```bash
pip install -U tiktoken hippo-api
```

and get the connection configuration.

## Vector stores

### Hippo

See [a usage example and installation instructions](/docs/integrations/vectorstores/hippo).

```python
from langchain_community.vectorstores.hippo import Hippo
```
@@ -6,45 +6,18 @@
"source": [
"# Upstage\n",
"\n",
"[Upstage](https://upstage.ai) is a leading artificial intelligence (AI) company specializing in delivering above-human-grade performance LLM components. \n"
">[Upstage](https://upstage.ai) is a leading artificial intelligence (AI) company specializing in delivering above-human-grade performance LLM components.\n",
">\n",
">**Solar Mini Chat** is a fast yet powerful advanced large language model focusing on English and Korean. It has been specifically fine-tuned for multi-turn chat purposes, showing enhanced performance across a wide range of natural language processing tasks, like multi-turn conversation or tasks that require an understanding of long contexts, such as RAG (Retrieval-Augmented Generation), compared to other models of a similar size. This fine-tuning equips it with the ability to handle longer conversations more effectively, making it particularly adept for interactive applications.\n",
"\n",
">Other than Solar, Upstage also offers features for real-world RAG (retrieval-augmented generation), such as **Groundedness Check** and **Layout Analysis**. \n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Solar LLM\n",
"\n",
"**Solar Mini Chat** is a fast yet powerful advanced large language model focusing on English and Korean. It has been specifically fine-tuned for multi-turn chat purposes, showing enhanced performance across a wide range of natural language processing tasks, like multi-turn conversation or tasks that require an understanding of long contexts, such as RAG (Retrieval-Augmented Generation), compared to other models of a similar size. This fine-tuning equips it with the ability to handle longer conversations more effectively, making it particularly adept for interactive applications.\n",
"\n",
"Other than Solar, Upstage also offers features for real-world RAG (retrieval-augmented generation), such as **Groundedness Check** and **Layout Analysis**. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Installation and Setup\n",
"\n",
"Install `langchain-upstage` package:\n",
"\n",
"```bash\n",
"pip install -qU langchain-core langchain-upstage\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Get [API Keys](https://console.upstage.ai) and set environment variable `UPSTAGE_API_KEY`."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Upstage LangChain integrations\n",
"### Upstage LangChain integrations\n",
"\n",
"| API | Description | Import | Example usage |\n",
"| --- | --- | --- | --- |\n",
@@ -60,9 +33,20 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Quick Examples\n",
"## Installation and Setup\n",
"\n",
"### Environment Setup"
"Install `langchain-upstage` package:\n",
"\n",
"```bash\n",
"pip install -qU langchain-core langchain-upstage\n",
"```\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Get [API Keys](https://console.upstage.ai) and set environment variable `UPSTAGE_API_KEY`."
]
},
{
@@ -80,8 +64,11 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Chat models\n",
"\n",
"### Chat\n"
"### Solar LLM\n",
"\n",
"See [a usage example](/docs/integrations/chat/upstage)."
]
},
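A minimal sketch of the chat model that cell points to, assuming `UPSTAGE_API_KEY` is set in the environment:

```python
from langchain_upstage import ChatUpstage

chat = ChatUpstage()
chat.invoke("Hello, Solar!")
```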
{
@@ -101,10 +88,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Embedding models\n",
"\n",
"\n",
"### Text embedding\n",
"\n"
"See [a usage example](/docs/integrations/text_embedding/upstage)."
]
},
{
@@ -134,7 +120,45 @@
}
},
"source": [
"### Groundedness Check"
"## Document loader\n",
"\n",
"### Layout Analysis\n",
"\n",
"See [a usage example](/docs/integrations/document_loaders/upstage)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_upstage import UpstageLayoutAnalysisLoader\n",
"\n",
"file_path = \"/PATH/TO/YOUR/FILE.pdf\"\n",
"layzer = UpstageLayoutAnalysisLoader(file_path, split=\"page\")\n",
"\n",
"# For improved memory efficiency, consider using the lazy_load method to load documents page by page.\n",
"docs = layzer.load() # or layzer.lazy_load()\n",
"\n",
"for doc in docs[:3]:\n",
"    print(doc)"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"## Tools\n",
"\n",
"### Groundedness Check\n",
"\n",
"See [a usage example](/docs/integrations/tools/upstage_groundedness_check)."
]
},
{
@@ -159,36 +183,6 @@
"response = groundedness_check.invoke(request_input)\n",
"print(response)"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"### Layout Analysis"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_upstage import UpstageLayoutAnalysisLoader\n",
"\n",
"file_path = \"/PATH/TO/YOUR/FILE.pdf\"\n",
"layzer = UpstageLayoutAnalysisLoader(file_path, split=\"page\")\n",
"\n",
"# For improved memory efficiency, consider using the lazy_load method to load documents page by page.\n",
"docs = layzer.load() # or layzer.lazy_load()\n",
"\n",
"for doc in docs[:3]:\n",
"    print(doc)"
]
}
],
"metadata": {
@@ -210,7 +204,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.13"
"version": "3.10.12"
}
},
"nbformat": 4,
@@ -325,7 +325,13 @@
"id": "20cf6074081b"
},
"source": [
"### Search for documents with metadata filter"
"### Searching Documents with Metadata Filters\n",
"The vectorstore supports two methods for applying filters to metadata fields when performing document searches:\n",
"\n",
"- Dictionary-based Filters\n",
"  - You can pass a dictionary (dict) where the keys represent metadata fields and the values specify the filter condition. This method applies an equality filter between the key and the corresponding value. When multiple key-value pairs are provided, they are combined using a logical AND operation.\n",
"- SQL-based Filters\n",
"  - Alternatively, you can provide a string representing an SQL WHERE clause to define more complex filtering conditions. This allows for greater flexibility, supporting SQL expressions such as comparison operators and logical operators."
]
},
{
@@ -336,11 +342,24 @@
},
"outputs": [],
"source": [
"# Dictionary-based Filters\n",
"# This should only return \"Banana\" document.\n",
"docs = store.similarity_search_by_vector(query_vector, filter={\"len\": 6})\n",
"print(docs)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# SQL-based Filters\n",
"# This should return \"Banana\", \"Apples and oranges\" and \"Cars and airplanes\" documents.\n",
"docs = store.similarity_search_by_vector(query_vector, filter=\"len = 6 OR len > 17\")\n",
"print(docs)"
]
},
{
"cell_type": "markdown",
"metadata": {
@@ -105,7 +105,7 @@
"\n",
"## Quickstart\n",
"\n",
"First up, let's learn how to use a language model by itself. LangChain supports many different language models that you can use interchangably - select the one you want to use below!\n",
"First up, let's learn how to use a language model by itself. LangChain supports many different language models that you can use interchangeably - select the one you want to use below!\n",
"\n",
"```{=mdx}\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
@@ -254,7 +254,7 @@
"metadata": {},
"outputs": [],
"source": [
"# ! pip install langchain_community"
"%pip install langchain_community"
]
},
{
@@ -952,7 +952,7 @@
"source": [
"## Streaming\n",
"\n",
"Now we've got a function chatbot. However, one *really* important UX consideration for chatbot application is streaming. LLMs can sometimes take a while to respond, and so in order to improve the user experience one thing that most application do is stream back each token as it is generated. This allows the user to see progress.\n",
"Now we've got a functioning chatbot. However, one *really* important UX consideration for chatbot applications is streaming. LLMs can sometimes take a while to respond, and so in order to improve the user experience one thing that most applications do is stream back each token as it is generated. This allows the user to see progress.\n",
"\n",
"It's actually super easy to do this!\n",
"\n",
@@ -95,7 +95,7 @@
"source": [
"## Using Language Models\n",
"\n",
"First up, let's learn how to use a language model by itself. LangChain supports many different language models that you can use interchangably - select the one you want to use below!\n",
"First up, let's learn how to use a language model by itself. LangChain supports many different language models that you can use interchangeably - select the one you want to use below!\n",
"\n",
"```{=mdx}\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
@@ -159,9 +159,7 @@
"cell_type": "markdown",
"id": "f83373db",
"metadata": {},
"source": [
"If we've enable LangSmith, we can see that this run is logged to LangSmith, and can see the [LangSmith trace](https://smith.langchain.com/public/88baa0b2-7c1a-4d09-ba30-a47985dde2ea/r)"
]
"source": "If we've enabled LangSmith, we can see that this run is logged to LangSmith, and can see the [LangSmith trace](https://smith.langchain.com/public/88baa0b2-7c1a-4d09-ba30-a47985dde2ea/r)"
},
{
"cell_type": "markdown",
@@ -125,8 +125,11 @@ const config = {
/** @type {import('@docusaurus/preset-classic').ThemeConfig} */
({
announcementBar: {
content: 'LangChain 0.2 is out! Leave feedback on the v0.2 docs <a href="https://github.com/langchain-ai/langchain/discussions/21716">here</a>. You can view the v0.1 docs <a href="/v0.1/docs/get_started/introduction/">here</a>.',
content:
'Share your thoughts on AI agents. <a target="_blank" href="https://langchain.typeform.com/state-of-agents">Take the 3-min survey</a>.',
isCloseable: true,
backgroundColor: "rgba(53, 151, 147, 0.1)",
textColor: "rgb(53, 151, 147)",
},
docs: {
sidebar: {