mirror of https://github.com/hwchase17/langchain.git
synced 2026-02-07 01:30:24 +00:00

Compare commits: 4 commits (bagatur/ve... to eugene/sta...)

| Author | SHA1 | Date |
|---|---|---|
| | df32037497 | |
| | f6cfc68bcc | |
| | b766fb8681 | |
| | c581e7172a | |
3  .github/scripts/check_diff.py  (vendored)

@@ -47,9 +47,6 @@ if __name__ == "__main__":
                    found = True
            if found:
                dirs_to_run["extended-test"].add(dir_)
        elif file.startswith("libs/cli"):
            # todo: add cli makefile
            pass
        elif file.startswith("libs/partners"):
            partner_dir = file.split("/")[2]
            if os.path.isdir(f"libs/partners/{partner_dir}") and [
6  .github/workflows/_release.yml  (vendored)

@@ -55,7 +55,7 @@ jobs:
      working-directory: ${{ inputs.working-directory }}

    - name: Upload build
      uses: actions/upload-artifact@v4
      uses: actions/upload-artifact@v3
      with:
        name: dist
        path: ${{ inputs.working-directory }}/dist/

@@ -246,7 +246,7 @@ jobs:
      working-directory: ${{ inputs.working-directory }}
      cache-key: release

    - uses: actions/download-artifact@v4
    - uses: actions/download-artifact@v3
      with:
        name: dist
        path: ${{ inputs.working-directory }}/dist/

@@ -285,7 +285,7 @@ jobs:
      working-directory: ${{ inputs.working-directory }}
      cache-key: release

    - uses: actions/download-artifact@v4
    - uses: actions/download-artifact@v3
      with:
        name: dist
        path: ${{ inputs.working-directory }}/dist/
4  .github/workflows/_test_release.yml  (vendored)

@@ -48,7 +48,7 @@ jobs:
      working-directory: ${{ inputs.working-directory }}

    - name: Upload build
      uses: actions/upload-artifact@v4
      uses: actions/upload-artifact@v3
      with:
        name: test-dist
        path: ${{ inputs.working-directory }}/dist/

@@ -76,7 +76,7 @@ jobs:
    steps:
    - uses: actions/checkout@v4

    - uses: actions/download-artifact@v4
    - uses: actions/download-artifact@v3
      with:
        name: test-dist
        path: ${{ inputs.working-directory }}/dist/
78  .github/workflows/api_doc_build.yml  (vendored, new file)

@@ -0,0 +1,78 @@
name: API docs build

on:
  workflow_dispatch:
  schedule:
    - cron: '0 13 * * *'

env:
  POETRY_VERSION: "1.7.1"
  PYTHON_VERSION: "3.10"

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          ref: bagatur/api_docs_build
          path: langchain
      - uses: actions/checkout@v4
        with:
          repository: langchain-ai/langchain-google
          path: langchain-google
      - uses: actions/checkout@v4
        with:
          repository: langchain-ai/langchain-datastax
          path: langchain-datastax

      - name: Set Git config
        working-directory: langchain
        run: |
          git config --local user.email "actions@github.com"
          git config --local user.name "Github Actions"

      - name: Merge master
        working-directory: langchain
        run: |
          git fetch origin master
          git merge origin/master -m "Merge master" --allow-unrelated-histories -X theirs

      - name: Move google libs
        run: |
          rm -rf \
            langchain/libs/partners/google-genai \
            langchain/libs/partners/google-vertexai \
            langchain/libs/partners/astradb
          mv langchain-google/libs/genai langchain/libs/partners/google-genai
          mv langchain-google/libs/vertexai langchain/libs/partners/google-vertexai
          mv langchain-datastax/libs/astradb langchain/libs/partners/astradb

      - name: Set up Python ${{ env.PYTHON_VERSION }} + Poetry ${{ env.POETRY_VERSION }}
        uses: "./langchain/.github/actions/poetry_setup"
        with:
          python-version: ${{ env.PYTHON_VERSION }}
          poetry-version: ${{ env.POETRY_VERSION }}
          cache-key: api-docs
          working-directory: langchain

      - name: Install dependencies
        working-directory: langchain
        run: |
          poetry run python -m pip install --upgrade --no-cache-dir pip setuptools
          poetry run python -m pip install --upgrade --no-cache-dir sphinx readthedocs-sphinx-ext
          # skip airbyte and ibm due to pandas dependency issue
          poetry run python -m pip install $(ls ./libs/partners | grep -vE "airbyte|ibm" | xargs -I {} echo "./libs/partners/{}")
          poetry run python -m pip install --exists-action=w --no-cache-dir -r docs/api_reference/requirements.txt

      - name: Build docs
        working-directory: langchain
        run: |
          poetry run python -m pip install --upgrade --no-cache-dir pip setuptools
          poetry run python docs/api_reference/create_api_rst.py
          poetry run python -m sphinx -T -E -b html -d _build/doctrees -c docs/api_reference docs/api_reference api_reference_build/html -j auto

      # https://github.com/marketplace/actions/add-commit
      - uses: EndBug/add-and-commit@v9
        with:
          cwd: langchain
          message: 'Update API docs build'
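The "Install dependencies" step selects partner packages with a shell pipeline. The same selection can be sketched in Python, mirroring the `grep -vE "airbyte|ibm"` exclusion (the directory names in the example are hypothetical):

```python
import re

def installable_partners(partner_dirs: list[str]) -> list[str]:
    """Drop partners excluded for the pandas dependency issue, mirroring
    `ls ./libs/partners | grep -vE "airbyte|ibm"` from the workflow."""
    skip = re.compile(r"airbyte|ibm")
    return [f"./libs/partners/{d}" for d in partner_dirs if not skip.search(d)]
```

Each surviving path is then passed to `pip install` as a local package, as the `xargs` invocation in the workflow does.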
@@ -50,7 +50,7 @@ The LangChain libraries themselves are made up of several different packages.
- **[`langchain-community`](libs/community)**: Third party integrations.
- **[`langchain`](libs/langchain)**: Chains, agents, and retrieval strategies that make up an application's cognitive architecture.




## 🧱 What can you build with LangChain?
**❓ Retrieval augmented generation**
61  SECURITY.md

@@ -1,61 +1,6 @@
# Security Policy

## Reporting OSS Vulnerabilities
## Reporting a Vulnerability

LangChain is partnered with [huntr by Protect AI](https://huntr.com/) to provide
a bounty program for our open source projects.

Please report security vulnerabilities associated with the LangChain
open source projects by visiting the following link:

[https://huntr.com/bounties/disclose/](https://huntr.com/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Flangchain-ai%2Flangchain&validSearch=true)

Before reporting a vulnerability, please review:

1) In-Scope Targets and Out-of-Scope Targets below.
2) The [langchain-ai/langchain](https://python.langchain.com/docs/contributing/repo_structure) monorepo structure.
3) LangChain [security guidelines](https://python.langchain.com/docs/security) to
understand what we consider to be a security vulnerability vs. developer
responsibility.

### In-Scope Targets

The following packages and repositories are eligible for bug bounties:

- langchain-core
- langchain (see exceptions)
- langchain-community (see exceptions)
- langgraph
- langserve

### Out of Scope Targets

All out of scope targets defined by huntr as well as:

- **langchain-experimental**: This repository is for experimental code and is not
eligible for bug bounties, bug reports to it will be marked as interesting or waste of
time and published with no bounty attached.
- **tools**: Tools in either langchain or langchain-community are not eligible for bug
bounties. This includes the following directories
  - langchain/tools
  - langchain-community/tools
- Please review our [security guidelines](https://python.langchain.com/docs/security)
for more details, but generally tools interact with the real world. Developers are
expected to understand the security implications of their code and are responsible
for the security of their tools.
- Code documented with security notices. This will be decided done on a case by
case basis, but likely will not be eligible for a bounty as the code is already
documented with guidelines for developers that should be followed for making their
application secure.
- Any LangSmith related repositories or APIs see below.

## Reporting LangSmith Vulnerabilities

Please report security vulnerabilities associated with LangSmith by email to `security@langchain.dev`.

- LangSmith site: https://smith.langchain.com
- SDK client: https://github.com/langchain-ai/langsmith-sdk

### Other Security Concerns

For any other security concerns, please contact us at `security@langchain.dev`.
Please report security vulnerabilities by email to `security@langchain.dev`.
This email is an alias to a subset of our maintainers, and will ensure the issue is promptly triaged and acted upon as needed.
@@ -8,7 +8,6 @@ Notebook | Description
[Semi_Structured_RAG.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/Semi_Structured_RAG.ipynb) | Perform retrieval-augmented generation (rag) on documents with semi-structured data, including text and tables, using unstructured for parsing, multi-vector retriever for storing, and lcel for implementing chains.
[Semi_structured_and_multi_moda...](https://github.com/langchain-ai/langchain/tree/master/cookbook/Semi_structured_and_multi_modal_RAG.ipynb) | Perform retrieval-augmented generation (rag) on documents with semi-structured data and images, using unstructured for parsing, multi-vector retriever for storage and retrieval, and lcel for implementing chains.
[Semi_structured_multi_modal_RA...](https://github.com/langchain-ai/langchain/tree/master/cookbook/Semi_structured_multi_modal_RAG_LLaMA2.ipynb) | Perform retrieval-augmented generation (rag) on documents with semi-structured data and images, using various tools and methods such as unstructured for parsing, multi-vector retriever for storing, lcel for implementing chains, and open source language models like llama2, llava, and gpt4all.
[amazon_personalize_how_to.ipynb](https://github.com/langchain-ai/langchain/blob/master/cookbook/amazon_personalize_how_to.ipynb) | Retrieving personalized recommendations from Amazon Personalize and use custom agents to build generative AI apps
[analyze_document.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/analyze_document.ipynb) | Analyze a single long document.
[autogpt/autogpt.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/autogpt/autogpt.ipynb) | Implement autogpt, a language model, with langchain primitives such as llms, prompttemplates, vectorstores, embeddings, and tools.
[autogpt/marathon_times.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/autogpt/marathon_times.ipynb) | Implement autogpt for finding winning marathon times.
@@ -52,7 +52,7 @@
    "\n",
    "bash_chain = LLMBashChain.from_llm(llm, verbose=True)\n",
    "\n",
    "bash_chain.invoke(text)"
    "bash_chain.run(text)"
   ]
  },
  {

@@ -135,7 +135,7 @@
    "\n",
    "text = \"Please write a bash script that prints 'Hello World' to the console.\"\n",
    "\n",
    "bash_chain.invoke(text)"
    "bash_chain.run(text)"
   ]
  },
  {

@@ -190,7 +190,7 @@
    "\n",
    "text = \"List the current directory then move up a level.\"\n",
    "\n",
    "bash_chain.invoke(text)"
    "bash_chain.run(text)"
   ]
  },
  {

@@ -231,7 +231,7 @@
   ],
   "source": [
    "# Run the same command again and see that the state is maintained between calls\n",
    "bash_chain.invoke(text)"
    "bash_chain.run(text)"
   ]
  }
 ],
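The hunks above swap between `bash_chain.run(text)` and `bash_chain.invoke(text)`; `invoke` is the Runnable-style call that takes a single input. A toy pure-Python stand-in (not the real `LLMBashChain`) shows the calling convention:

```python
class EchoChain:
    """Toy stand-in for a chain: `invoke` takes one input and returns one
    output, in place of the older `run(text)` calling style."""

    def run(self, text: str) -> str:  # legacy spelling, kept for comparison
        return self.invoke(text)

    def invoke(self, text: str) -> str:
        # Stand-in "work": pretend we generated a bash command for the request.
        return f"$ echo '{text}'"

chain = EchoChain()
result = chain.invoke("Hello World")
```

Both spellings produce the same result here; the diff simply standardizes which one the notebook calls.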
@@ -31,7 +31,7 @@
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip install langchain langchain-elasticsearch lark openai elasticsearch pandas"
    "!pip install langchain lark openai elasticsearch pandas"
   ]
  },
  {
@@ -52,28 +52,6 @@ services:
      retries: 60
    volumes:
      - postgres_data:/var/lib/postgresql/data
  pgvector:
    # postgres with the pgvector extension
    image: ankane/pgvector
    environment:
      POSTGRES_DB: langchain
      POSTGRES_USER: langchain
      POSTGRES_PASSWORD: langchain
    ports:
      - "6024:5432"
    command: |
      postgres -c log_statement=all
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "psql postgresql://langchain:langchain@localhost/langchain --command 'SELECT 1;' || exit 1",
        ]
      interval: 5s
      retries: 60
    volumes:
      - postgres_data_pgvector:/var/lib/postgresql/data

volumes:
  postgres_data:
  postgres_data_pgvector:
1  docs/.gitignore  (vendored)

@@ -1,2 +1 @@
/.quarto/
src/supabase.d.ts
@@ -31,11 +31,9 @@
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "cell_type": "raw",
   "id": "278b0027",
   "metadata": {},
   "outputs": [],
   "source": [
    "%pip install --upgrade --quiet langchain-core langchain-community langchain-openai"
   ]
@@ -557,7 +557,7 @@
   "metadata": {},
   "outputs": [],
   "source": [
    "openai_joke = chain.with_config(configurable={\"llm\": \"openai\"})"
    "openai_poem = chain.with_config(configurable={\"llm\": \"openai\"})"
   ]
  },
  {

@@ -578,7 +578,7 @@
   }
  ],
  "source": [
   "openai_joke.invoke({\"topic\": \"bears\"})"
   "openai_poem.invoke({\"topic\": \"bears\"})"
  ]
 },
 {
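Both variants of the cell rely on `with_config(configurable={"llm": ...})` to select among alternative LLMs without mutating the original chain. A plain-Python stand-in of that pattern (hypothetical lambda "LLMs", not the real LangChain configurable-alternatives machinery):

```python
class ConfigurableChain:
    """Toy model of a chain with swappable LLM alternatives, sketching what
    `chain.with_config(configurable={"llm": "openai"})` does."""

    def __init__(self, llms: dict, default: str):
        self.llms = llms
        self.default = default

    def with_config(self, configurable: dict) -> "ConfigurableChain":
        # Return a copy bound to the chosen alternative; the original is untouched.
        return ConfigurableChain(self.llms, configurable.get("llm", self.default))

    def invoke(self, inputs: dict) -> str:
        return self.llms[self.default](inputs["topic"])

chain = ConfigurableChain(
    {"anthropic": lambda t: f"anthropic joke about {t}",
     "openai": lambda t: f"openai joke about {t}"},
    default="anthropic",
)
openai_joke = chain.with_config(configurable={"llm": "openai"})
```

The returned object answers with the configured alternative while the original `chain` keeps its default.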
@@ -55,7 +55,7 @@
   "id": "9eb73e8b",
   "metadata": {},
   "source": [
    "We will show examples of streaming using the chat model from [Anthropic](/docs/integrations/platforms/anthropic). To use the model, you will need to install the `langchain-anthropic` package. You can do this with the following command:"
    "We will show examples of streaming using the chat model from [Anthropic](https://python.langchain.com/docs/integrations/platforms/anthropic). To use the model, you will need to install the `langchain-anthropic` package. You can do this with the following command:"
   ]
  },
  {

@@ -658,7 +658,7 @@
    "\n",
    "This is a **beta API**, and we're almost certainly going to make some changes to it.\n",
    "\n",
    "This version parameter will allow us to minimize such breaking changes to your code. \n",
    "This version parameter will allow us to mimimize such breaking changes to your code. \n",
    "\n",
    "In short, we are annoying you now, so we don't have to annoy you later.\n",
    ":::"
@@ -7,7 +7,7 @@
   "source": [
    "# JSON Evaluators\n",
    "\n",
    "Evaluating [extraction](/docs/use_cases/extraction) and function calling applications often comes down to validation that the LLM's string output can be parsed correctly and how it compares to a reference object. The following `JSON` validators provide functionality to check your model's output consistently.\n",
    "Evaluating [extraction](https://python.langchain.com/docs/use_cases/extraction) and function calling applications often comes down to validation that the LLM's string output can be parsed correctly and how it compares to a reference object. The following `JSON` validators provide functionality to check your model's output consistently.\n",
    "\n",
    "## JsonValidityEvaluator\n",
    "\n",
@@ -1,13 +0,0 @@
---
hide_table_of_contents: true
---

# Extending LangChain

Extending LangChain's base abstractions, whether you're planning to contribute back to the open-source repo or build a bespoke internal integration, is encouraged.

Check out these guides for building your own custom classes for the following modules:

- [Chat models](/docs/modules/model_io/chat/custom_chat_model) for interfacing with chat-tuned language models.
- [LLMs](/docs/modules/model_io/llms/custom_llm) for interfacing with text language models.
- [Output parsers](/docs/modules/model_io/output_parsers/custom) for handling language model outputs.
@@ -24,7 +24,7 @@
   "<img src=\"/img/qa_privacy_protection.png\" width=\"900\"/>\n",
   "\n",
   "\n",
   "In the following notebook, we will not go into the details of how the anonymizer works. If you are interested, please visit [this part of the documentation](/docs/guides/privacy/presidio_data_anonymization/).\n",
   "In the following notebook, we will not go into the details of how the anonymizer works. If you are interested, please visit [this part of the documentation](https://python.langchain.com/docs/guides/privacy/presidio_data_anonymization/).\n",
   "\n",
   "## Quickstart\n",
   "\n",
@@ -16,13 +16,13 @@
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "execution_count": 1,
   "id": "6017f26a",
   "metadata": {},
   "outputs": [],
   "source": [
    "import openai\n",
    "from langchain_community.adapters import openai as lc_openai"
    "from langchain.adapters import openai as lc_openai"
   ]
  },
  {

@@ -277,7 +277,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.12"
   "version": "3.11.5"
  }
 },
 "nbformat": 4,
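The import hunk above swaps the adapters module between `langchain.adapters` and `langchain_community.adapters`. When code must tolerate both package layouts, a hypothetical compatibility shim can try one location and fall back to the other (neither package is assumed to be installed here; `lc_openai` ends up `None` if both imports fail):

```python
# Hypothetical shim for the adapters move between langchain.adapters
# and langchain_community.adapters; pick whichever import succeeds.
try:
    from langchain_community.adapters import openai as lc_openai  # split-package location
except ImportError:
    try:
        from langchain.adapters import openai as lc_openai  # monolithic location
    except ImportError:
        lc_openai = None  # neither package is installed
```

In a pinned project you would instead import from the one correct location for your installed version.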
@@ -22,7 +22,7 @@
   "outputs": [],
   "source": [
    "import openai\n",
    "from langchain_community.adapters import openai as lc_openai"
    "from langchain.adapters import openai as lc_openai"
   ]
  },
  {

@@ -310,7 +310,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.12"
   "version": "3.11.5"
  }
 },
 "nbformat": 4,
@@ -9,7 +9,7 @@
   "\n",
   ">[PromptLayer](https://docs.promptlayer.com/introduction) is a platform for prompt engineering. It also helps with the LLM observability to visualize requests, version prompts, and track usage.\n",
   ">\n",
   ">While `PromptLayer` does have LLMs that integrate directly with LangChain (e.g. [`PromptLayerOpenAI`](/docs/integrations/llms/promptlayer_openai)), using a callback is the recommended way to integrate `PromptLayer` with LangChain.\n",
   ">While `PromptLayer` does have LLMs that integrate directly with LangChain (e.g. [`PromptLayerOpenAI`](https://python.langchain.com/docs/integrations/llms/promptlayer_openai)), using a callback is the recommended way to integrate `PromptLayer` with LangChain.\n",
   "\n",
   "In this guide, we will go over how to setup the `PromptLayerCallbackHandler`. \n",
   "\n",
@@ -124,7 +124,7 @@
   "tags": []
  },
  "source": [
   "Here are two examples of how to use the `TrubricsCallbackHandler` with Langchain [LLMs](/docs/modules/model_io/llms/) or [Chat Models](/docs/modules/model_io/chat/). We will use OpenAI models, so set your `OPENAI_API_KEY` key here:"
   "Here are two examples of how to use the `TrubricsCallbackHandler` with Langchain [LLMs](https://python.langchain.com/docs/modules/model_io/llms/) or [Chat Models](https://python.langchain.com/docs/modules/model_io/chat/). We will use OpenAI models, so set your `OPENAI_API_KEY` key here:"
  ]
 },
 {
@@ -41,7 +41,7 @@
  "source": [
   "## Environment Setup\n",
   "\n",
   "We'll need to get an [Anthropic](https://console.anthropic.com/settings/keys) API key and set the `ANTHROPIC_API_KEY` environment variable:"
   "We'll need to get a [Anthropic](https://console.anthropic.com/settings/keys) and set the `ANTHROPIC_API_KEY` environment variable:"
  ]
 },
 {
@@ -1,286 +0,0 @@
{
 "cells": [
  {
   "cell_type": "raw",
   "metadata": {},
   "source": [
    "---\n",
    "sidebar_label: Friendli\n",
    "---"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# ChatFriendli\n",
    "\n",
    "> [Friendli](https://friendli.ai/) enhances AI application performance and optimizes cost savings with scalable, efficient deployment options, tailored for high-demand AI workloads.\n",
    "\n",
    "This tutorial guides you through integrating `ChatFriendli` for chat applications using LangChain. `ChatFriendli` offers a flexible approach to generating conversational AI responses, supporting both synchronous and asynchronous calls."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Setup\n",
    "\n",
    "Ensure the `langchain_community` and `friendli-client` are installed.\n",
    "\n",
    "```sh\n",
    "pip install -U langchain-comminity friendli-client.\n",
    "```\n",
    "\n",
    "Sign in to [Friendli Suite](https://suite.friendli.ai/) to create a Personal Access Token, and set it as the `FRIENDLI_TOKEN` environment."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import getpass\n",
    "import os\n",
    "\n",
    "os.environ[\"FRIENDLI_TOKEN\"] = getpass.getpass(\"Friendi Personal Access Token: \")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You can initialize a Friendli chat model with selecting the model you want to use. The default model is `mixtral-8x7b-instruct-v0-1`. You can check the available models at [docs.friendli.ai](https://docs.periflow.ai/guides/serverless_endpoints/pricing#text-generation-models)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_community.chat_models.friendli import ChatFriendli\n",
    "\n",
    "chat = ChatFriendli(model=\"llama-2-13b-chat\", max_tokens=100, temperature=0)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Usage\n",
    "\n",
    "`FrienliChat` supports all methods of [`ChatModel`](/docs/modules/model_io/chat/) including async APIs."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You can also use functionality of `invoke`, `batch`, `generate`, and `stream`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "AIMessage(content=\" Knock, knock!\\nWho's there?\\nCows go.\\nCows go who?\\nMOO!\")"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain_core.messages.human import HumanMessage\n",
    "from langchain_core.messages.system import SystemMessage\n",
    "\n",
    "system_message = SystemMessage(content=\"Answer questions as short as you can.\")\n",
    "human_message = HumanMessage(content=\"Tell me a joke.\")\n",
    "messages = [system_message, human_message]\n",
    "\n",
    "chat.invoke(messages)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[AIMessage(content=\" Knock, knock!\\nWho's there?\\nCows go.\\nCows go who?\\nMOO!\"),\n",
       " AIMessage(content=\" Knock, knock!\\nWho's there?\\nCows go.\\nCows go who?\\nMOO!\")]"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chat.batch([messages, messages])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "LLMResult(generations=[[ChatGeneration(text=\" Knock, knock!\\nWho's there?\\nCows go.\\nCows go who?\\nMOO!\", message=AIMessage(content=\" Knock, knock!\\nWho's there?\\nCows go.\\nCows go who?\\nMOO!\"))], [ChatGeneration(text=\" Knock, knock!\\nWho's there?\\nCows go.\\nCows go who?\\nMOO!\", message=AIMessage(content=\" Knock, knock!\\nWho's there?\\nCows go.\\nCows go who?\\nMOO!\"))]], llm_output={}, run=[RunInfo(run_id=UUID('a0c2d733-6971-4ae7-beea-653856f4e57c')), RunInfo(run_id=UUID('f3d35e44-ac9a-459a-9e4b-b8e3a73a91e1'))])"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chat.generate([messages, messages])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      " Knock, knock!\n",
      "Who's there?\n",
      "Cows go.\n",
      "Cows go who?\n",
      "MOO!"
     ]
    }
   ],
   "source": [
    "for chunk in chat.stream(messages):\n",
    "    print(chunk.content, end=\"\", flush=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You can also use all functionality of async APIs: `ainvoke`, `abatch`, `agenerate`, and `astream`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "AIMessage(content=\" Knock, knock!\\nWho's there?\\nCows go.\\nCows go who?\\nMOO!\")"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "await chat.ainvoke(messages)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[AIMessage(content=\" Knock, knock!\\nWho's there?\\nCows go.\\nCows go who?\\nMOO!\"),\n",
       " AIMessage(content=\" Knock, knock!\\nWho's there?\\nCows go.\\nCows go who?\\nMOO!\")]"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "await chat.abatch([messages, messages])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "LLMResult(generations=[[ChatGeneration(text=\" Knock, knock!\\nWho's there?\\nCows go.\\nCows go who?\\nMOO!\", message=AIMessage(content=\" Knock, knock!\\nWho's there?\\nCows go.\\nCows go who?\\nMOO!\"))], [ChatGeneration(text=\" Knock, knock!\\nWho's there?\\nCows go.\\nCows go who?\\nMOO!\", message=AIMessage(content=\" Knock, knock!\\nWho's there?\\nCows go.\\nCows go who?\\nMOO!\"))]], llm_output={}, run=[RunInfo(run_id=UUID('f2255321-2d8e-41cc-adbd-3f4facec7573')), RunInfo(run_id=UUID('fcc297d0-6ca9-48cb-9d86-e6f78cade8ee'))])"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "await chat.agenerate([messages, messages])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      " Knock, knock!\n",
      "Who's there?\n",
      "Cows go.\n",
      "Cows go who?\n",
      "MOO!"
     ]
    }
   ],
   "source": [
    "async for chunk in chat.astream(messages):\n",
    "    print(chunk.content, end=\"\", flush=True)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "langchain",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.7"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
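The ChatFriendli notebook above exercises the sync methods (`invoke`, `batch`, `generate`, `stream`) and their async twins (`ainvoke`, `abatch`, `agenerate`, `astream`). That pairing can be sketched with a pure-Python stand-in chat class (not the real Friendli client or LangChain base class):

```python
import asyncio

class StubChat:
    """Stand-in illustrating the sync/async method pairing of a chat model."""

    def invoke(self, messages: list[str]) -> str:
        # Pretend the last message is the prompt and produce a canned reply.
        return f"reply to: {messages[-1]}"

    def batch(self, batches: list[list[str]]) -> list[str]:
        return [self.invoke(m) for m in batches]

    async def ainvoke(self, messages: list[str]) -> str:
        await asyncio.sleep(0)  # yield control, as a real client would on I/O
        return self.invoke(messages)

    async def abatch(self, batches: list[list[str]]) -> list[str]:
        return list(await asyncio.gather(*(self.ainvoke(m) for m in batches)))

chat = StubChat()
msgs = ["Answer questions as short as you can.", "Tell me a joke."]
sync_out = chat.invoke(msgs)
async_out = asyncio.run(chat.ainvoke(msgs))
```

Each async method awaits the same logic the sync method runs directly, which is why the notebook's `invoke` and `ainvoke` cells print identical output.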
@@ -10,7 +10,7 @@
   "\n",
   "In particular, we will:\n",
   "1. Utilize the [HuggingFaceTextGenInference](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/llms/huggingface_text_gen_inference.py), [HuggingFaceEndpoint](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/llms/huggingface_endpoint.py), or [HuggingFaceHub](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/llms/huggingface_hub.py) integrations to instantiate an `LLM`.\n",
   "2. Utilize the `ChatHuggingFace` class to enable any of these LLMs to interface with LangChain's [Chat Messages](/docs/modules/model_io/chat/#messages) abstraction.\n",
   "2. Utilize the `ChatHuggingFace` class to enable any of these LLMs to interface with LangChain's [Chat Messages](https://python.langchain.com/docs/modules/model_io/chat/#messages) abstraction.\n",
   "3. Demonstrate how to use an open-source LLM to power an `ChatAgent` pipeline\n",
   "\n",
   "\n",

@@ -280,7 +280,7 @@
  "source": [
   "## 3. Take it for a spin as an agent!\n",
   "\n",
   "Here we'll test out `Zephyr-7B-beta` as a zero-shot `ReAct` Agent. The example below is taken from [here](/docs/modules/agents/agent_types/react#using-chat-models).\n",
   "Here we'll test out `Zephyr-7B-beta` as a zero-shot `ReAct` Agent. The example below is taken from [here](https://python.langchain.com/docs/modules/agents/agent_types/react#using-chat-models).\n",
   "\n",
   "> Note: To run this section, you'll need to have a [SerpAPI Token](https://serpapi.com/) saved as an environment variable: `SERPAPI_API_KEY`"
  ]
@@ -17,9 +17,9 @@
"source": [
"# Llama2Chat\n",
"\n",
"This notebook shows how to augment Llama-2 `LLM`s with the `Llama2Chat` wrapper to support the [Llama-2 chat prompt format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2). Several `LLM` implementations in LangChain can be used as an interface to Llama-2 chat models. These include [ChatHuggingFace](/docs/integrations/chat/huggingface), [LlamaCpp](/docs/use_cases/question_answering/local_retrieval_qa), [GPT4All](/docs/integrations/llms/gpt4all), ..., to mention a few examples. \n",
"This notebook shows how to augment Llama-2 `LLM`s with the `Llama2Chat` wrapper to support the [Llama-2 chat prompt format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2). Several `LLM` implementations in LangChain can be used as an interface to Llama-2 chat models. These include [HuggingFaceTextGenInference](https://python.langchain.com/docs/integrations/llms/huggingface_textgen_inference), [LlamaCpp](https://python.langchain.com/docs/use_cases/question_answering/how_to/local_retrieval_qa), [GPT4All](https://python.langchain.com/docs/integrations/llms/gpt4all), ..., to mention a few examples. \n",
"\n",
"`Llama2Chat` is a generic wrapper that implements `BaseChatModel` and can therefore be used in applications as [chat model](/docs/modules/model_io/chat/). `Llama2Chat` converts a list of Messages into the [required chat prompt format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2) and forwards the formatted prompt as `str` to the wrapped `LLM`."
"`Llama2Chat` is a generic wrapper that implements `BaseChatModel` and can therefore be used in applications as [chat model](https://python.langchain.com/docs/modules/model_io/models/chat/). `Llama2Chat` converts a list of [chat messages](https://python.langchain.com/docs/modules/model_io/models/chat/#messages) into the [required chat prompt format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2) and forwards the formatted prompt as `str` to the wrapped `LLM`."
]
},
{
@@ -77,7 +77,7 @@
"id": "2ff99380",
"metadata": {},
"source": [
"A HuggingFaceTextGenInference LLM encapsulates access to a [text-generation-inference](https://github.com/huggingface/text-generation-inference) server. In the following example, the inference server serves a [meta-llama/Llama-2-13b-chat-hf](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf) model. It can be started locally with:\n",
"A [HuggingFaceTextGenInference](https://python.langchain.com/docs/integrations/llms/huggingface_textgen_inference) LLM encapsulates access to a [text-generation-inference](https://github.com/huggingface/text-generation-inference) server. In the following example, the inference server serves a [meta-llama/Llama-2-13b-chat-hf](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf) model. It can be started locally with:\n",
"\n",
"```bash\n",
"docker run \\\n",
@@ -220,7 +220,7 @@
"id": "52c1a0b9",
"metadata": {},
"source": [
"For using a Llama-2 chat model with a [LlamaCPP](/docs/integrations/llms/llamacpp) `LLM`, install the `llama-cpp-python` library using [these installation instructions](/docs/integrations/llms/llamacpp#installation). The following example uses a quantized [llama-2-7b-chat.Q4_0.gguf](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/resolve/main/llama-2-7b-chat.Q4_0.gguf) model stored locally at `~/Models/llama-2-7b-chat.Q4_0.gguf`. \n",
"For using a Llama-2 chat model with a [LlamaCPP](https://python.langchain.com/docs/integrations/llms/llamacpp) `LLM`, install the `llama-cpp-python` library using [these installation instructions](https://python.langchain.com/docs/integrations/llms/llamacpp#installation). The following example uses a quantized [llama-2-7b-chat.Q4_0.gguf](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/resolve/main/llama-2-7b-chat.Q4_0.gguf) model stored locally at `~/Models/llama-2-7b-chat.Q4_0.gguf`. \n",
"\n",
"After creating a `LlamaCpp` instance, the `llm` is again wrapped into `Llama2Chat`"
]
@@ -731,7 +731,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.11.4"
}
},
"nbformat": 4,
@@ -1005,7 +1005,7 @@
"id": "79efa62d"
},
"source": [
"Like any other integration, ChatNVIDIA supports chat utilities such as conversation buffers by default. Below, we show the [LangChain ConversationBufferMemory](/docs/modules/memory/types/buffer) example applied to the `mixtral_8x7b` model."
"Like any other integration, ChatNVIDIA supports chat utilities such as conversation buffers by default. Below, we show the [LangChain ConversationBufferMemory](https://python.langchain.com/docs/modules/memory/types/buffer) example applied to the `mixtral_8x7b` model."
]
},
{
@@ -107,7 +107,7 @@
"\n",
"# using LangChain Expression Language chain syntax\n",
"# learn more about the LCEL on\n",
"# /docs/expression_language/why\n",
"# https://python.langchain.com/docs/expression_language/why\n",
"chain = prompt | llm | StrOutputParser()\n",
"\n",
"# for brevity, response is printed in terminal\n",
@@ -235,7 +235,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Take a look at the [LangChain Expression Language (LCEL) Interface](/docs/expression_language/interface) for the other available interfaces for use when a chain is created.\n",
"Take a look at the [LangChain Expression Language (LCEL) Interface](https://python.langchain.com/docs/expression_language/interface) for the other available interfaces for use when a chain is created.\n",
"\n",
"## Building from source\n",
"\n",
@@ -250,7 +250,7 @@
" \n",
"Use the latest version of Ollama and supply the [`format`](https://github.com/jmorganca/ollama/blob/main/docs/api.md#json-mode) flag. The `format` flag will force the model to produce the response in JSON.\n",
"\n",
"> **Note:** You can also try out the experimental [OllamaFunctions](/docs/integrations/chat/ollama_functions) wrapper for convenience."
"> **Note:** You can also try out the experimental [OllamaFunctions](https://python.langchain.com/docs/integrations/chat/ollama_functions) wrapper for convenience."
]
},
{
@@ -99,36 +99,6 @@
"for chunk in chat.stream(\"Hello!\"):\n",
" print(chunk.content, end=\"\")"
]
},
{
"cell_type": "markdown",
"id": "566c85e0",
"metadata": {},
"source": [
"## For v2"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3103ebdf",
"metadata": {},
"outputs": [],
"source": [
"\"\"\"For basic init and call\"\"\"\n",
"from langchain_community.chat_models import ChatSparkLLM\n",
"from langchain_core.messages import HumanMessage\n",
"\n",
"chat = ChatSparkLLM(\n",
" spark_app_id=\"<app_id>\",\n",
" spark_api_key=\"<api_key>\",\n",
" spark_api_secret=\"<api_secret>\",\n",
" spark_api_url=\"wss://spark-api.xf-yun.com/v2.1/chat\",\n",
" spark_llm_domain=\"generalv2\",\n",
")\n",
"message = HumanMessage(content=\"Hello\")\n",
"chat([message])"
]
}
],
"metadata": {
@@ -5,7 +5,7 @@
"id": "1f3a5ebf",
"metadata": {},
"source": [
"# Airbyte CDK (Deprecated)"
"# Airbyte CDK"
]
},
{
@@ -13,8 +13,6 @@
"id": "35ac77b1-449b-44f7-b8f3-3494d55c286e",
"metadata": {},
"source": [
"Note: `AirbyteCDKLoader` is deprecated. Please use [`AirbyteLoader`](/docs/integrations/document_loaders/airbyte) instead.\n",
"\n",
">[Airbyte](https://github.com/airbytehq/airbyte) is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.\n",
"\n",
"A lot of source connectors are implemented using the [Airbyte CDK](https://docs.airbyte.com/connector-development/cdk-python/). This loader allows you to run any of these connectors and return the data as documents."
@@ -5,7 +5,7 @@
"id": "1f3a5ebf",
"metadata": {},
"source": [
"# Airbyte Gong (Deprecated)"
"# Airbyte Gong"
]
},
{
@@ -13,8 +13,6 @@
"id": "35ac77b1-449b-44f7-b8f3-3494d55c286e",
"metadata": {},
"source": [
"Note: This connector-specific loader is deprecated. Please use [`AirbyteLoader`](/docs/integrations/document_loaders/airbyte) instead.\n",
"\n",
">[Airbyte](https://github.com/airbytehq/airbyte) is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.\n",
"\n",
"This loader exposes the Gong connector as a document loader, allowing you to load various Gong objects as documents."
@@ -5,7 +5,7 @@
"id": "1f3a5ebf",
"metadata": {},
"source": [
"# Airbyte Hubspot (Deprecated)"
"# Airbyte Hubspot"
]
},
{
@@ -13,8 +13,6 @@
"id": "35ac77b1-449b-44f7-b8f3-3494d55c286e",
"metadata": {},
"source": [
"Note: `AirbyteHubspotLoader` is deprecated. Please use [`AirbyteLoader`](/docs/integrations/document_loaders/airbyte) instead.\n",
"\n",
">[Airbyte](https://github.com/airbytehq/airbyte) is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.\n",
"\n",
"This loader exposes the Hubspot connector as a document loader, allowing you to load various Hubspot objects as documents."
@@ -5,7 +5,7 @@
"id": "1f3a5ebf",
"metadata": {},
"source": [
"# Airbyte JSON (Deprecated)"
"# Airbyte JSON"
]
},
{
@@ -13,8 +13,6 @@
"id": "35ac77b1-449b-44f7-b8f3-3494d55c286e",
"metadata": {},
"source": [
"Note: `AirbyteJSONLoader` is deprecated. Please use [`AirbyteLoader`](/docs/integrations/document_loaders/airbyte) instead.\n",
"\n",
">[Airbyte](https://github.com/airbytehq/airbyte) is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases."
]
},
@@ -5,7 +5,7 @@
"id": "1f3a5ebf",
"metadata": {},
"source": [
"# Airbyte Salesforce (Deprecated)"
"# Airbyte Salesforce"
]
},
{
@@ -13,8 +13,6 @@
"id": "35ac77b1-449b-44f7-b8f3-3494d55c286e",
"metadata": {},
"source": [
"Note: This connector-specific loader is deprecated. Please use [`AirbyteLoader`](/docs/integrations/document_loaders/airbyte) instead.\n",
"\n",
">[Airbyte](https://github.com/airbytehq/airbyte) is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.\n",
"\n",
"This loader exposes the Salesforce connector as a document loader, allowing you to load various Salesforce objects as documents."
@@ -5,7 +5,7 @@
"id": "1f3a5ebf",
"metadata": {},
"source": [
"# Airbyte Shopify (Deprecated)"
"# Airbyte Shopify"
]
},
{
@@ -13,8 +13,6 @@
"id": "35ac77b1-449b-44f7-b8f3-3494d55c286e",
"metadata": {},
"source": [
"Note: This connector-specific loader is deprecated. Please use [`AirbyteLoader`](/docs/integrations/document_loaders/airbyte) instead.\n",
"\n",
">[Airbyte](https://github.com/airbytehq/airbyte) is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.\n",
"\n",
"This loader exposes the Shopify connector as a document loader, allowing you to load various Shopify objects as documents."
@@ -5,7 +5,7 @@
"id": "1f3a5ebf",
"metadata": {},
"source": [
"# Airbyte Stripe (Deprecated)"
"# Airbyte Stripe"
]
},
{
@@ -13,8 +13,6 @@
"id": "35ac77b1-449b-44f7-b8f3-3494d55c286e",
"metadata": {},
"source": [
"Note: This connector-specific loader is deprecated. Please use [`AirbyteLoader`](/docs/integrations/document_loaders/airbyte) instead.\n",
"\n",
">[Airbyte](https://github.com/airbytehq/airbyte) is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.\n",
"\n",
"This loader exposes the Stripe connector as a document loader, allowing you to load various Stripe objects as documents."
@@ -5,7 +5,7 @@
"id": "1f3a5ebf",
"metadata": {},
"source": [
"# Airbyte Typeform (Deprecated)"
"# Airbyte Typeform"
]
},
{
@@ -13,8 +13,6 @@
"id": "35ac77b1-449b-44f7-b8f3-3494d55c286e",
"metadata": {},
"source": [
"Note: This connector-specific loader is deprecated. Please use [`AirbyteLoader`](/docs/integrations/document_loaders/airbyte) instead.\n",
"\n",
">[Airbyte](https://github.com/airbytehq/airbyte) is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.\n",
"\n",
"This loader exposes the Typeform connector as a document loader, allowing you to load various Typeform objects as documents."
@@ -5,7 +5,7 @@
"id": "1f3a5ebf",
"metadata": {},
"source": [
"# Airbyte Zendesk Support (Deprecated)"
"# Airbyte Zendesk Support"
]
},
{
@@ -13,8 +13,6 @@
"id": "35ac77b1-449b-44f7-b8f3-3494d55c286e",
"metadata": {},
"source": [
"Note: This connector-specific loader is deprecated. Please use [`AirbyteLoader`](/docs/integrations/document_loaders/airbyte) instead.\n",
"\n",
">[Airbyte](https://github.com/airbytehq/airbyte) is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.\n",
"\n",
"This loader exposes the Zendesk Support connector as a document loader, allowing you to load various objects as documents."
@@ -206,42 +206,6 @@
"len(documents)"
]
},
{
"cell_type": "markdown",
"id": "a56ba97505c8d140",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"## Sample 4\n",
"\n",
"You have the option to pass an additional parameter called `linearization_config` to the AmazonTextractPDFLoader which will determine how the text output will be linearized by the parser after Textract runs."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1efbc4b6-f3cb-45c5-bbe8-16e7df060b92",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import AmazonTextractPDFLoader\n",
"from textractor.data.text_linearization_config import TextLinearizationConfig\n",
"\n",
"loader = AmazonTextractPDFLoader(\n",
" \"s3://amazon-textract-public-content/langchain/layout-parser-paper.pdf\",\n",
" linearization_config=TextLinearizationConfig(\n",
" hide_header_layout=True,\n",
" hide_footer_layout=True,\n",
" hide_figure_layout=True,\n",
" ),\n",
")\n",
"documents = loader.load()"
]
},
{
"cell_type": "markdown",
"id": "b3e41b4d-b159-4274-89be-80d8159134ef",
@@ -312,14 +276,11 @@
]
},
{
"cell_type": "markdown",
"id": "bd97f1c90aff6a83",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"cell_type": "code",
"execution_count": null,
"id": "1a09d18b-ab7b-468e-ae66-f92abf666b9b",
"metadata": {},
"outputs": [],
"source": []
}
],
@@ -915,7 +876,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.13"
"version": "3.10.12"
}
},
"nbformat": 4,
@@ -118,7 +118,7 @@
"\n",
"1. You can set min and max chunk size, which the system tries to adhere to with minimal truncation. You can set `loader.min_text_length` and `loader.max_text_length` to control these.\n",
"2. By default, only the text for chunks is returned. However, Docugami's XML knowledge graph has additional rich information including semantic tags for entities inside the chunk. Set `loader.include_xml_tags = True` if you want the additional xml metadata on the returned chunks.\n",
"3. In addition, you can set `loader.parent_hierarchy_levels` if you want Docugami to return parent chunks in the chunks it returns. The child chunks point to the parent chunks via the `loader.parent_id_key` value. This is useful e.g. with the [MultiVector Retriever](/docs/modules/data_connection/retrievers/multi_vector) for [small-to-big](https://www.youtube.com/watch?v=ihSiRrOUwmg) retrieval. See detailed example later in this notebook."
"3. In addition, you can set `loader.parent_hierarchy_levels` if you want Docugami to return parent chunks in the chunks it returns. The child chunks point to the parent chunks via the `loader.parent_id_key` value. This is useful e.g. with the [MultiVector Retriever](https://python.langchain.com/docs/modules/data_connection/retrievers/multi_vector) for [small-to-big](https://www.youtube.com/watch?v=ihSiRrOUwmg) retrieval. See detailed example later in this notebook."
]
},
{
@@ -457,7 +457,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Documents are inherently semi-structured and the DocugamiLoader is able to navigate the semantic and structural contours of the document to provide parent chunk references on the chunks it returns. This is useful e.g. with the [MultiVector Retriever](/docs/modules/data_connection/retrievers/multi_vector) for [small-to-big](https://www.youtube.com/watch?v=ihSiRrOUwmg) retrieval.\n",
"Documents are inherently semi-structured and the DocugamiLoader is able to navigate the semantic and structural contours of the document to provide parent chunk references on the chunks it returns. This is useful e.g. with the [MultiVector Retriever](https://python.langchain.com/docs/modules/data_connection/retrievers/multi_vector) for [small-to-big](https://www.youtube.com/watch?v=ihSiRrOUwmg) retrieval.\n",
"\n",
"To get parent chunk references, you can set `loader.parent_hierarchy_levels` to a non-zero value."
]
@@ -47,7 +47,7 @@
"id": "04981332",
"metadata": {},
"source": [
"Create a GeoPandas dataframe from [`Open City Data`](/docs/integrations/document_loaders/open_city_data) as an example input."
"Create a GeoPandas dataframe from [`Open City Data`](https://python.langchain.com/docs/integrations/document_loaders/open_city_data) as an example input."
]
},
{
@@ -10,11 +10,7 @@
"\n",
"> [AlloyDB](https://cloud.google.com/alloydb) is a fully managed relational database service that offers high performance, seamless integration, and impressive scalability. AlloyDB is 100% compatible with PostgreSQL. Extend your database application to build AI-powered experiences leveraging AlloyDB's Langchain integrations.\n",
"\n",
"This notebook goes over how to use `AlloyDB for PostgreSQL` to load Documents with the `AlloyDBLoader` class.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-alloydb-pg-python/).\n",
"\n",
"[](https://colab.research.google.com/github/googleapis/langchain-google-alloydb-pg-python/blob/main/docs/document_loader.ipynb)"
"This notebook goes over how to use `AlloyDB for PostgreSQL` to load Documents with the `AlloyDBLoader` class."
]
},
{
@@ -28,7 +24,7 @@
"To run this notebook, you will need to do the following:\n",
"\n",
" * [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n",
" * [Enable the AlloyDB API](https://console.cloud.google.com/flows/enableapi?apiid=alloydb.googleapis.com)\n",
" * [Enable the AlloyDB Admin API.](https://console.cloud.google.com/flows/enableapi?apiid=alloydb.googleapis.com)\n",
" * [Create an AlloyDB cluster and instance.](https://cloud.google.com/alloydb/docs/cluster-create)\n",
" * [Create an AlloyDB database.](https://cloud.google.com/alloydb/docs/quickstart/create-and-connect)\n",
" * [Add a User to the database.](https://cloud.google.com/alloydb/docs/database-users/about)"
@@ -143,6 +139,30 @@
"! gcloud config set project {PROJECT_ID}"
]
},
{
"cell_type": "markdown",
"id": "rEWWNoNnKOgq",
"metadata": {
"id": "rEWWNoNnKOgq"
},
"source": [
"### 💡 API Enablement\n",
"The `langchain-google-alloydb-pg` package requires that you [enable the AlloyDB Admin API](https://console.cloud.google.com/flows/enableapi?apiid=alloydb.googleapis.com) in your Google Cloud Project."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "5utKIdq7KYi5",
"metadata": {
"id": "5utKIdq7KYi5"
},
"outputs": [],
"source": [
"# enable AlloyDB Admin API\n",
"!gcloud services enable alloydb.googleapis.com"
]
},
{
"cell_type": "markdown",
"id": "f8f2830ee9ca1e01",
@@ -8,9 +8,7 @@
"\n",
"> [Bigtable](https://cloud.google.com/bigtable) is a key-value and wide-column store, ideal for fast access to structured, semi-structured, or unstructured data. Extend your database application to build AI-powered experiences leveraging Bigtable's Langchain integrations.\n",
"\n",
"This notebook goes over how to use [Bigtable](https://cloud.google.com/bigtable) to [save, load and delete langchain documents](/docs/modules/data_connection/document_loaders/) with `BigtableLoader` and `BigtableSaver`.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-bigtable-python/).\n",
"This notebook goes over how to use [Bigtable](https://cloud.google.com/bigtable) to [save, load and delete langchain documents](https://python.langchain.com/docs/modules/data_connection/document_loaders/) with `BigtableLoader` and `BigtableSaver`.\n",
"\n",
"[](https://colab.research.google.com/github/googleapis/langchain-google-bigtable-python/blob/main/docs/document_loader.ipynb)"
]
@@ -24,7 +22,6 @@
"To run this notebook, you will need to do the following:\n",
"\n",
"* [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n",
"* [Enable the Bigtable API](https://console.cloud.google.com/flows/enableapi?apiid=bigtable.googleapis.com)\n",
"* [Create a Bigtable instance](https://cloud.google.com/bigtable/docs/creating-instance)\n",
"* [Create a Bigtable table](https://cloud.google.com/bigtable/docs/managing-tables)\n",
"* [Create Bigtable access credentials](https://developers.google.com/workspace/guides/create-credentials)\n",
@@ -464,7 +461,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.5"
"version": "3.10.6"
}
},
"nbformat": 4,
@@ -4,13 +4,11 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Google Cloud SQL for SQL server\n",
"# Google Cloud SQL for SQL Server\n",
"\n",
"> [Cloud SQL](https://cloud.google.com/sql) is a fully managed relational database service that offers high performance, seamless integration, and impressive scalability. It offers [MySQL](https://cloud.google.com/sql/mysql), [PostgreSQL](https://cloud.google.com/sql/postgres), and [SQL Server](https://cloud.google.com/sql/sqlserver) database engines. Extend your database application to build AI-powered experiences leveraging Cloud SQL's Langchain integrations.\n",
"\n",
"This notebook goes over how to use [Cloud SQL for SQL server](https://cloud.google.com/sql/sqlserver) to [save, load and delete langchain documents](/docs/modules/data_connection/document_loaders/) with `MSSQLLoader` and `MSSQLDocumentSaver`.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-cloud-sql-mssql-python/).\n",
"This notebook goes over how to use [Cloud SQL for SQL Server](https://cloud.google.com/sql/sqlserver) to [save, load and delete langchain documents](https://python.langchain.com/docs/modules/data_connection/document_loaders/) with `MSSQLLoader` and `MSSQLDocumentSaver`.\n",
"\n",
"[](https://colab.research.google.com/github/googleapis/langchain-google-cloud-sql-mssql-python/blob/main/docs/document_loader.ipynb)"
]
@@ -24,10 +22,9 @@
"To run this notebook, you will need to do the following:\n",
"\n",
"* [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n",
"* [Enable the Cloud SQL Admin API.](https://console.cloud.google.com/marketplace/product/google/sqladmin.googleapis.com)\n",
"* [Create a Cloud SQL for SQL server instance](https://cloud.google.com/sql/docs/sqlserver/create-instance)\n",
"* [Create a Cloud SQL database](https://cloud.google.com/sql/docs/sqlserver/create-manage-databases)\n",
"* [Add an IAM database user to the database](https://cloud.google.com/sql/docs/sqlserver/create-manage-users) (Optional)\n",
"* [Create a Cloud SQL for SQL Server instance](https://cloud.google.com/sql/docs/sqlserver/create-instance)\n",
"* [Create a Cloud SQL database](https://cloud.google.com/sql/docs/mssql/create-manage-databases)\n",
"* [Add an IAM database user to the database](https://cloud.google.com/sql/docs/sqlserver/add-manage-iam-users#creating-a-database-user) (Optional)\n",
"\n",
"After confirming access to the database in the runtime environment of this notebook, fill in the following values and run the cell before running the example scripts."
]
@@ -173,7 +170,7 @@
"\n",
"Before saving or loading documents from an MSSQL table, we first need to configure a connection pool to the Cloud SQL database. The `MSSQLEngine` configures a [SQLAlchemy connection pool](https://docs.sqlalchemy.org/en/20/core/pooling.html#module-sqlalchemy.pool) to your Cloud SQL database, enabling successful connections from your application and following industry best practices.\n",
"\n",
"To create a `MSSQLEngine` using `MSSQLEngine.from_instance()` you need to provide only 4 things:\n",
"To create a `MSSQLEngine` using `MSSQLEngine.from_instance()` you need to provide only 6 things:\n",
"\n",
"1. `project_id` : Project ID of the Google Cloud Project where the Cloud SQL instance is located.\n",
"1. `region` : Region where the Cloud SQL instance is located.\n",
@@ -208,7 +205,6 @@
"### Initialize a table\n",
"\n",
"Initialize a table of default schema via `MSSQLEngine.init_document_table(<table_name>)`. Table Columns:\n",
"\n",
"- page_content (type: text)\n",
"- langchain_metadata (type: JSON)\n",
"\n",
@@ -231,7 +227,6 @@
"### Save documents\n",
"\n",
"Save langchain documents with `MSSQLDocumentSaver.add_documents(<documents>)`. To initialize the `MSSQLDocumentSaver` class you need to provide 2 things:\n",
"\n",
"1. `engine` - An instance of a `MSSQLEngine` engine.\n",
"2. `table_name` - The name of the table within the Cloud SQL database to store langchain documents."
]
@@ -275,7 +270,6 @@
"metadata": {},
"source": [
"Load langchain documents with `MSSQLLoader.load()` or `MSSQLLoader.lazy_load()`. `lazy_load` returns a generator that only queries the database during iteration. To initialize the `MSSQLLoader` class you need to provide:\n",
"\n",
"1. `engine` - An instance of a `MSSQLEngine` engine.\n",
"2. `table_name` - The name of the table within the Cloud SQL database to store langchain documents."
]
@@ -347,7 +341,6 @@
"For a table with the default schema (page_content, langchain_metadata), the deletion criteria is:\n",
"\n",
"A `row` should be deleted if there exists a `document` in the list, such that\n",
"\n",
"- `document.page_content` equals `row[page_content]`\n",
"- `document.metadata` equals `row[langchain_metadata]`"
]
@@ -455,7 +448,6 @@
"metadata": {},
"source": [
"We can specify the content and metadata we want to load by setting the `content_columns` and `metadata_columns` when initializing the `MSSQLLoader`.\n",
"\n",
"1. `content_columns`: The columns to write into the `page_content` of the document.\n",
"2. `metadata_columns`: The columns to write into the `metadata` of the document.\n",
"\n",
@@ -494,14 +486,12 @@
"metadata": {},
"source": [
"In order to save langchain documents into a table with customized metadata fields, we first need to create such a table via `MSSQLEngine.init_document_table()`, and specify the list of `metadata_columns` we want it to have. In this example, the created table will have table columns:\n",
"\n",
"- description (type: text): for storing fruit description.\n",
"- fruit_name (type: text): for storing fruit name.\n",
"- organic (type: tinyint(1)): to tell if the fruit is organic.\n",
"- other_metadata (type: JSON): for storing other metadata information of the fruit.\n",
"\n",
"We can use the following parameters with `MSSQLEngine.init_document_table()` to create the table:\n",
"\n",
"1. `table_name`: The name of the table within the Cloud SQL database to store langchain documents.\n",
"2. `metadata_columns`: A list of `sqlalchemy.Column` indicating the list of metadata columns we need.\n",
"3. `content_column`: The name of the column to store the `page_content` of the langchain document. Default: `page_content`.\n",
@@ -541,7 +531,6 @@
"metadata": {},
"source": [
"Save documents with `MSSQLDocumentSaver.add_documents(<documents>)`. As you can see in this example, \n",
"\n",
"- `document.page_content` will be saved into `description` column.\n",
"- `document.metadata.fruit_name` will be saved into `fruit_name` column.\n",
"- `document.metadata.organic` will be saved into `organic` column.\n",
@@ -595,7 +584,6 @@
"We can also delete documents from a table with customized metadata columns via `MSSQLDocumentSaver.delete(<documents>)`. The deletion criteria is:\n",
"\n",
"A `row` should be deleted if there exists a `document` in the list, such that\n",
"\n",
"- `document.page_content` equals `row[page_content]`\n",
"- For every metadata field `k` in `document.metadata`\n",
" - `document.metadata[k]` equals `row[k]` or `document.metadata[k]` equals `row[langchain_metadata][k]`\n",
@@ -6,11 +6,9 @@
"source": [
"# Google Cloud SQL for MySQL\n",
"\n",
"> [Cloud SQL](https://cloud.google.com/sql) is a fully managed relational database service that offers high performance, seamless integration, and impressive scalability. It offers [MySQL](https://cloud.google.com/sql/mysql), [PostgreSQL](https://cloud.google.com/sql/postgres), and [SQL Server](https://cloud.google.com/sql/sqlserver) database engines. Extend your database application to build AI-powered experiences leveraging Cloud SQL's Langchain integrations.\n",
"\n",
"This notebook goes over how to use [Cloud SQL for MySQL](https://cloud.google.com/sql/mysql) to [save, load and delete langchain documents](https://python.langchain.com/docs/modules/data_connection/document_loaders/) with `MySQLLoader` and `MySQLDocumentSaver`.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-cloud-sql-mysql-python/).\n",
"\n",
"[](https://colab.research.google.com/github/googleapis/langchain-google-cloud-sql-mysql-python/blob/main/docs/document_loader.ipynb)"
]
@@ -24,7 +22,6 @@
"To run this notebook, you will need to do the following:\n",
"\n",
"* [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n",
"* [Enable the Cloud SQL Admin API.](https://console.cloud.google.com/marketplace/product/google/sqladmin.googleapis.com)\n",
"* [Create a Cloud SQL for MySQL instance](https://cloud.google.com/sql/docs/mysql/create-instance)\n",
"* [Create a Cloud SQL database](https://cloud.google.com/sql/docs/mysql/create-manage-databases)\n",
"* [Add an IAM database user to the database](https://cloud.google.com/sql/docs/mysql/add-manage-iam-users#creating-a-database-user) (Optional)\n",
@@ -140,6 +137,24 @@
"auth.authenticate_user()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### API Enablement\n",
"The `langchain-google-cloud-sql-mysql` package requires that you [enable the Cloud SQL Admin API](https://console.cloud.google.com/flows/enableapi?apiid=sqladmin.googleapis.com) in your Google Cloud Project."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# enable Cloud SQL Admin API\n",
"!gcloud services enable sqladmin.googleapis.com"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -374,7 +389,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"First we prepare an example table with non-default schema, and populate it with some arbitrary data."
]
},
{

@@ -1,362 +1,382 @@
{
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {
|
||||
"id": "E_RJy7C1bpCT"
|
||||
},
|
||||
"source": [
|
||||
"# Google Cloud SQL for PostgreSQL\n",
|
||||
"\n",
|
||||
"> [Cloud SQL for PostgreSQL](https://cloud.google.com/sql/docs/postgres) is a fully-managed database service that helps you set up, maintain, manage, and administer your PostgreSQL relational databases on Google Cloud Platform. Extend your database application to build AI-powered experiences leveraging Cloud SQL for PostgreSQL's Langchain integrations.\n",
|
||||
"\n",
|
||||
"This notebook goes over how to use `Cloud SQL for PostgreSQL` to load Documents with the `PostgreSQLLoader` class."
|
||||
]
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 4
|
||||
}
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {
|
||||
"id": "xjcxaw6--Xyy"
|
||||
},
|
||||
"source": [
|
||||
"## Before you begin\n",
|
||||
"\n",
|
||||
"To run this notebook, you will need to do the following:\n",
|
||||
"\n",
|
||||
" * [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n",
|
||||
" * [Enable the Cloud SQL Admin API.](https://console.cloud.google.com/marketplace/product/google/sqladmin.googleapis.com)\n",
|
||||
" * [Create a Cloud SQL for PostgreSQL instance.](https://cloud.google.com/sql/docs/postgres/create-instance)\n",
|
||||
" * [Create a Cloud SQL for PostgreSQL database.](https://cloud.google.com/sql/docs/postgres/create-manage-databases)\n",
|
||||
" * [Add a User to the database.](https://cloud.google.com/sql/docs/postgres/create-manage-users)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {
|
||||
"id": "IR54BmgvdHT_"
|
||||
},
|
||||
"source": [
|
||||
"### 🦜🔗 Library Installation\n",
|
||||
"Install the integration library, `langchain-google-cloud-sql-pg`."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"colab": {
|
||||
"base_uri": "https://localhost:8080/",
|
||||
"height": 1000
|
||||
},
|
||||
"id": "0ZITIDE160OD",
|
||||
"outputId": "90e0636e-ff34-4e1e-ad37-d2a6db4a317e"
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%pip install --upgrade --quiet langchain-google-cloud-sql-pg"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {
|
||||
"id": "v40bB_GMcr9f"
|
||||
},
|
||||
"source": [
|
||||
"**Colab only:** Uncomment the following cell to restart the kernel or use the button to restart the kernel. For Vertex AI Workbench you can restart the terminal using the button on top."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"id": "6o0iGVIdDD6K"
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# # Automatically restart kernel after installs so that your environment can access the new packages\n",
|
||||
"# import IPython\n",
|
||||
"\n",
|
||||
"# app = IPython.Application.instance()\n",
|
||||
"# app.kernel.do_shutdown(True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {
|
||||
"id": "cTXTbj4UltKf"
|
||||
},
|
||||
"source": [
|
||||
"### 🔐 Authentication\n",
|
||||
"Authenticate to Google Cloud as the IAM user logged into this notebook in order to access your Google Cloud Project.\n",
|
||||
"\n",
|
||||
"* If you are using Colab to run this notebook, use the cell below and continue.\n",
|
||||
"* If you are using Vertex AI Workbench, check out the setup instructions [here](https://github.com/GoogleCloudPlatform/generative-ai/tree/main/setup-env)."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from google.colab import auth\n",
|
||||
"\n",
|
||||
"auth.authenticate_user()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {
|
||||
"id": "Uj02bMRAc9_c"
|
||||
},
|
||||
"source": [
|
||||
"### ☁ Set Your Google Cloud Project\n",
|
||||
"Set your Google Cloud project so that you can leverage Google Cloud resources within this notebook.\n",
|
||||
"\n",
|
||||
"If you don't know your project ID, try the following:\n",
|
||||
"\n",
|
||||
"* Run `gcloud config list`.\n",
|
||||
"* Run `gcloud projects list`.\n",
|
||||
"* See the support page: [Locate the project ID](https://support.google.com/googleapi/answer/7014113)."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"colab": {
|
||||
"base_uri": "https://localhost:8080/"
|
||||
},
|
||||
"id": "wnp1R1PYc9_c",
|
||||
"outputId": "6502c721-a2fd-451f-b946-9f7b850d5966"
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# @title Project { display-mode: \"form\" }\n",
|
||||
"PROJECT_ID = \"gcp_project_id\" # @param {type:\"string\"}\n",
|
||||
"\n",
|
||||
"# Set the project id\n",
|
||||
"! gcloud config set project {PROJECT_ID}"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "rEWWNoNnKOgq",
|
||||
"metadata": {
|
||||
"id": "rEWWNoNnKOgq"
|
||||
},
|
||||
"source": [
|
||||
"### 💡 API Enablement\n",
|
||||
"The `langchain_google_cloud_sql_pg` package requires that you [enable the Cloud SQL Admin API](https://console.cloud.google.com/flows/enableapi?apiid=sqladmin.googleapis.com) in your Google Cloud Project."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"id": "5utKIdq7KYi5",
|
||||
"metadata": {
|
||||
"id": "5utKIdq7KYi5"
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# enable Cloud SQL Admin API\n",
|
||||
"!gcloud services enable sqladmin.googleapis.com"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "f8f2830ee9ca1e01",
|
||||
"metadata": {
|
||||
"id": "f8f2830ee9ca1e01"
|
||||
},
|
||||
"source": [
|
||||
"## Basic Usage"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "OMvzMWRrR6n7",
|
||||
"metadata": {
|
||||
"id": "OMvzMWRrR6n7"
|
||||
},
|
||||
"source": [
|
||||
"### Set Cloud SQL database values\n",
|
||||
"Find your database variables, in the [Cloud SQL Instances page](https://console.cloud.google.com/sql/instances)."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"id": "irl7eMFnSPZr",
|
||||
"metadata": {
|
||||
"id": "irl7eMFnSPZr"
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# @title Set Your Values Here { display-mode: \"form\" }\n",
|
||||
"REGION = \"us-central1\" # @param {type: \"string\"}\n",
|
||||
"INSTANCE = \"my-primary\" # @param {type: \"string\"}\n",
|
||||
"DATABASE = \"my-database\" # @param {type: \"string\"}\n",
|
||||
"TABLE_NAME = \"vector_store\" # @param {type: \"string\"}"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "QuQigs4UoFQ2",
|
||||
"metadata": {
|
||||
"id": "QuQigs4UoFQ2"
|
||||
},
|
||||
"source": [
|
||||
"### Cloud SQL Engine\n",
|
||||
"\n",
|
||||
"One of the requirements and arguments to establish PostgreSQL as a document loader is a `PostgresEngine` object. The `PostgresEngine` configures a connection pool to your Cloud SQL for PostgreSQL database, enabling successful connections from your application and following industry best practices.\n",
|
||||
"\n",
|
||||
"To create a `PostgresEngine` using `PostgresEngine.from_instance()` you need to provide only 4 things:\n",
|
||||
"\n",
|
||||
"1. `project_id` : Project ID of the Google Cloud Project where the Cloud SQL instance is located.\n",
|
||||
"1. `region` : Region where the Cloud SQL instance is located.\n",
|
||||
"1. `instance` : The name of the Cloud SQL instance.\n",
|
||||
"1. `database` : The name of the database to connect to on the Cloud SQL instance.\n",
|
||||
"\n",
|
||||
"By default, [IAM database authentication](https://cloud.google.com/sql/docs/postgres/iam-authentication) will be used as the method of database authentication. This library uses the IAM principal belonging to the [Application Default Credentials (ADC)](https://cloud.google.com/docs/authentication/application-default-credentials) sourced from the environment.\n",
|
||||
"\n",
|
||||
"Optionally, [built-in database authentication](https://cloud.google.com/sql/docs/postgres/users) using a username and password to access the Cloud SQL database can also be used. Just provide the optional `user` and `password` arguments to `PostgresEngine.from_instance()`:\n",
|
||||
"\n",
|
||||
"* `user` : Database user to use for built-in database authentication and login\n",
|
||||
"* `password` : Database password to use for built-in database authentication and login.\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"**Note**: This tutorial demonstrates the async interface. All async methods have corresponding sync methods."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_google_cloud_sql_pg import PostgresEngine\n",
|
||||
"\n",
|
||||
"engine = await PostgresEngine.afrom_instance(\n",
|
||||
" project_id=PROJECT_ID,\n",
|
||||
" region=REGION,\n",
|
||||
" instance=INSTANCE,\n",
|
||||
" database=DATABASE,\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {
|
||||
"id": "e1tl0aNx7SWy"
|
||||
},
|
||||
"source": [
|
||||
"### Create PostgresLoader"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"id": "z-AZyzAQ7bsf"
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_google_cloud_sql_pg import PostgresLoader\n",
|
||||
"\n",
|
||||
"# Creating a basic PostgreSQL object\n",
|
||||
"loader = await PostgresLoader.create(engine, table_name=TABLE_NAME)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {
|
||||
"id": "PeOMpftjc9_e"
|
||||
},
|
||||
"source": [
|
||||
"### Load Documents via default table\n",
|
||||
"The loader returns a list of Documents from the table using the first column as page_content and all other columns as metadata. The default table will have the first column as\n",
|
||||
"page_content and the second column as metadata (JSON). Each row becomes a document. Please note that if you want your documents to have ids you will need to add them in."
|
||||
]
|
||||
},
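The default mapping described above can be sketched in plain Python. This is a minimal illustration assuming dict-shaped rows and documents; `rows_to_documents` is not part of the library's API.

```python
def rows_to_documents(column_names: list, rows: list) -> list:
    """Turn each row into a document dict: the first column becomes
    page_content, and all remaining columns become metadata."""
    content_col, *meta_cols = column_names
    return [
        {
            "page_content": str(row[content_col]),
            "metadata": {c: row[c] for c in meta_cols},
        }
        for row in rows
    ]
```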
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "cwvi_O5Wc9_e"
},
"outputs": [],
"source": [
"from langchain_google_cloud_sql_pg import PostgresLoader\n",
"\n",
"# Creating a basic PostgresLoader object\n",
"loader = await PostgresLoader.create(engine, table_name=TABLE_NAME)\n",
"\n",
"docs = await loader.aload()\n",
"print(docs)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "kSkL9l1Hc9_e"
},
"source": [
"### Load documents via custom table/metadata or custom page content columns"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"loader = await PostgresLoader.create(\n",
"    engine,\n",
"    table_name=TABLE_NAME,\n",
"    content_columns=[\"product_name\"],  # Optional\n",
"    metadata_columns=[\"id\"],  # Optional\n",
")\n",
"docs = await loader.aload()\n",
"print(docs)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "5R6h0_Cvc9_f"
},
"source": [
"### Set page content format\n",
"The loader returns a list of Documents, with one document per row, with page content in the specified string format, e.g. text (space-separated concatenation), JSON, YAML, CSV, etc. JSON and YAML formats include field headers, while text and CSV do not.\n"
]
},
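The format behaviour described above can be sketched in plain Python. This is an illustrative approximation under stated assumptions, not the library's implementation; note how `json`/`yaml` keep the field names while `text`/`csv` drop them.

```python
import json


def format_page_content(row: dict, columns: list, fmt: str = "text") -> str:
    """Render the selected columns of a row as a page_content string."""
    if fmt == "text":  # space-separated values, no field headers
        return " ".join(str(row[c]) for c in columns)
    if fmt == "csv":  # comma-separated values, no field headers
        return ",".join(str(row[c]) for c in columns)
    if fmt == "json":  # keeps field names
        return json.dumps({c: row[c] for c in columns})
    if fmt == "yaml":  # simplified "key: value" rendering, keeps field names
        return "\n".join(f"{c}: {row[c]}" for c in columns)
    raise ValueError(f"unsupported format: {fmt}")
```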
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "NGNdS7cqc9_f"
},
"outputs": [],
"source": [
"loader = await PostgresLoader.create(\n",
"    engine,\n",
"    table_name=\"products\",\n",
"    content_columns=[\"product_name\", \"description\"],\n",
"    format=\"YAML\",\n",
")\n",
"docs = await loader.aload()\n",
"print(docs)"
]
}
],
"metadata": {
"colab": {
"provenance": [],
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.6"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

@@ -1,336 +1,411 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Google Firestore in Datastore Mode\n",
"\n",
"> [Firestore in Datastore Mode](https://cloud.google.com/datastore) is a NoSQL document database built for automatic scaling, high performance, and ease of application development. Extend your database application to build AI-powered experiences leveraging Datastore's Langchain integrations.\n",
"\n",
"This notebook goes over how to use [Firestore in Datastore Mode](https://cloud.google.com/datastore) to [save, load and delete langchain documents](/docs/modules/data_connection/document_loaders/) with `DatastoreLoader` and `DatastoreSaver`.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-datastore-python/).\n",
"\n",
"[](https://colab.research.google.com/github/googleapis/langchain-google-datastore-python/blob/main/docs/document_loader.ipynb)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Before You Begin\n",
"\n",
"To run this notebook, you will need to do the following:\n",
"\n",
"* [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n",
"* [Enable the Datastore API](https://console.cloud.google.com/flows/enableapi?apiid=datastore.googleapis.com)\n",
"* [Create a Firestore in Datastore Mode database](https://cloud.google.com/datastore/docs/manage-databases)\n",
"\n",
"After confirming access to the database in the runtime environment of this notebook, fill in the following values and run the cell before running the example scripts."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 🦜🔗 Library Installation\n",
"\n",
"The integration lives in its own `langchain-google-datastore` package, so we need to install it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-google-datastore"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Colab only**: Uncomment the following cell to restart the kernel or use the button to restart the kernel. For Vertex AI Workbench you can restart the terminal using the button on top."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# # Automatically restart kernel after installs so that your environment can access the new packages\n",
"# import IPython\n",
"\n",
"# app = IPython.Application.instance()\n",
"# app.kernel.do_shutdown(True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### ☁ Set Your Google Cloud Project\n",
"Set your Google Cloud project so that you can leverage Google Cloud resources within this notebook.\n",
"\n",
"If you don't know your project ID, try the following:\n",
"\n",
"* Run `gcloud config list`.\n",
"* Run `gcloud projects list`.\n",
"* See the support page: [Locate the project ID](https://support.google.com/googleapi/answer/7014113)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# @markdown Please fill in the value below with your Google Cloud project ID and then run the cell.\n",
"\n",
"PROJECT_ID = \"my-project-id\" # @param {type:\"string\"}\n",
"\n",
"# Set the project id\n",
"!gcloud config set project {PROJECT_ID}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 🔐 Authentication\n",
"\n",
"Authenticate to Google Cloud as the IAM user logged into this notebook in order to access your Google Cloud Project.\n",
"\n",
"- If you are using Colab to run this notebook, use the cell below and continue.\n",
"- If you are using Vertex AI Workbench, check out the setup instructions [here](https://github.com/GoogleCloudPlatform/generative-ai/tree/main/setup-env)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from google.colab import auth\n",
"\n",
"auth.authenticate_user()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Basic Usage"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Save documents\n",
|
||||
"\n",
|
||||
"Save langchain documents with `DatastoreSaver.upsert_documents(<documents>)`. By default, it will try to extract the entity key from the `key` in the Document metadata."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_core.documents import Document\n",
|
||||
"from langchain_google_datastore import DatastoreSaver\n",
|
||||
"\n",
|
||||
"saver = DatastoreSaver()\n",
|
||||
"\n",
|
||||
"data = [Document(page_content=\"Hello, World!\")]\n",
|
||||
"saver.upsert_documents(data)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Save documents without key\n",
|
||||
"\n",
|
||||
"If a `kind` is specified, the documents will be stored with an auto-generated id."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"saver = DatastoreSaver(\"MyKind\")\n",
|
||||
"\n",
|
||||
"saver.upsert_documents(data)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Load documents via Kind\n",
|
||||
"\n",
|
||||
"Load langchain documents with `DatastoreLoader.load()` or `DatastoreLoader.lazy_load()`. `lazy_load` returns a generator that only queries the database during iteration. To initialize the `DatastoreLoader` class you need to provide:\n",
"1. `source` - The source to load the documents from. It can be an instance of Query or the name of the Datastore kind to read from."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_google_datastore import DatastoreLoader\n",
|
||||
"\n",
|
||||
"loader = DatastoreLoader(\"MyKind\")\n",
|
||||
"data = loader.load()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Load documents via query\n",
|
||||
"\n",
|
||||
"Besides loading documents from a kind, we can also load documents from a query. For example:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from google.cloud import datastore\n",
|
||||
"\n",
|
||||
"client = datastore.Client(database=\"non-default-db\", namespace=\"custom_namespace\")\n",
|
||||
"query_load = client.query(kind=\"MyKind\")\n",
|
||||
"query_load.add_filter(\"region\", \"=\", \"west_coast\")\n",
|
||||
"\n",
|
||||
"loader_document = DatastoreLoader(query_load)\n",
|
||||
"\n",
|
||||
"data = loader_document.load()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Delete documents\n",
|
||||
"\n",
|
||||
"Delete a list of langchain documents from Datastore with `DatastoreSaver.delete_documents(<documents>)`."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"saver = DatastoreSaver()\n",
|
||||
"\n",
|
||||
"saver.delete_documents(data)\n",
|
||||
"\n",
|
||||
"keys_to_delete = [\n",
|
||||
" [\"Kind1\", \"identifier\"],\n",
|
||||
" [\"Kind2\", 123],\n",
|
||||
" [\"Kind3\", \"identifier\", \"NestedKind\", 456],\n",
|
||||
"]\n",
|
||||
"# The Documents will be ignored and only the document ids will be used.\n",
|
||||
"saver.delete_documents(data, keys_to_delete)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Advanced Usage"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Load documents with customized document page content & metadata\n",
|
||||
"\n",
|
||||
"The arguments `page_content_fields` and `metadata_fields` specify the Entity properties to be written into the LangChain Document `page_content` and `metadata`."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"loader = DatastoreLoader(\n",
|
||||
" source=\"MyKind\",\n",
|
||||
" page_content_fields=[\"data_field\"],\n",
|
||||
" metadata_fields=[\"metadata_field\"],\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"data = loader.load()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Customize Page Content Format\n",
|
||||
"\n",
|
||||
"When the `page_content` contains only one field, the information will be the field value only. Otherwise the `page_content` will be in JSON format."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Customize Connection & Authentication"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from google.auth import compute_engine\n",
|
||||
"from google.cloud.firestore import Client\n",
|
||||
"\n",
|
||||
"client = Client(database=\"non-default-db\", credentials=compute_engine.Credentials())\n",
|
||||
"loader = DatastoreLoader(\n",
|
||||
" source=\"foo\",\n",
|
||||
" client=client,\n",
|
||||
")"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.6"
|
||||
}
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Google Firestore in Datastore mode\n",
|
||||
"\n",
|
||||
"> [Firestore in Datastore mode](https://cloud.google.com/datastore) is a serverless document-oriented database that scales to meet any demand. Extend your database application to build AI-powered experiences leveraging Datastore's Langchain integrations.\n",
|
||||
"\n",
|
||||
"This notebook goes over how to use [Firestore in Datastore mode](https://cloud.google.com/datastore) to [save, load and delete langchain documents](https://python.langchain.com/docs/modules/data_connection/document_loaders/) with `DatastoreLoader` and `DatastoreSaver`.\n",
|
||||
"\n",
|
||||
"[](https://colab.research.google.com/github/googleapis/langchain-google-datastore-python/blob/main/docs/document_loader.ipynb)"
|
||||
]
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 4
|
||||
}
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Before You Begin\n",
|
||||
"\n",
|
||||
"To run this notebook, you will need to do the following:\n",
|
||||
"\n",
|
||||
"* [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n",
|
||||
"* [Create a Datastore database](https://cloud.google.com/datastore/docs/manage-databases)\n",
|
||||
"\n",
|
||||
"After confirming access to the database in the runtime environment of this notebook, fill in the following values and run the cell before running the example scripts."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# @markdown Please specify a source for demo purpose.\n",
|
||||
"SOURCE = \"test\" # @param {type:\"Query\"|\"CollectionGroup\"|\"DocumentReference\"|\"string\"}"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### 🦜🔗 Library Installation\n",
|
||||
"\n",
|
||||
"The integration lives in its own `langchain-google-datastore` package, so we need to install it."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%pip install --upgrade --quiet langchain-google-datastore"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"**Colab only**: Uncomment the following cell to restart the kernel, or use the button to restart it. For Vertex AI Workbench, you can restart the terminal using the button on top."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# # Automatically restart kernel after installs so that your environment can access the new packages\n",
|
||||
"# import IPython\n",
|
||||
"\n",
|
||||
"# app = IPython.Application.instance()\n",
|
||||
"# app.kernel.do_shutdown(True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### ☁ Set Your Google Cloud Project\n",
|
||||
"Set your Google Cloud project so that you can leverage Google Cloud resources within this notebook.\n",
|
||||
"\n",
|
||||
"If you don't know your project ID, try the following:\n",
|
||||
"\n",
|
||||
"* Run `gcloud config list`.\n",
|
||||
"* Run `gcloud projects list`.\n",
|
||||
"* See the support page: [Locate the project ID](https://support.google.com/googleapi/answer/7014113)."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# @markdown Please fill in the value below with your Google Cloud project ID and then run the cell.\n",
|
||||
"\n",
|
||||
"PROJECT_ID = \"my-project-id\" # @param {type:\"string\"}\n",
|
||||
"\n",
|
||||
"# Set the project id\n",
|
||||
"!gcloud config set project {PROJECT_ID}"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### 🔐 Authentication\n",
|
||||
"\n",
|
||||
"Authenticate to Google Cloud as the IAM user logged into this notebook in order to access your Google Cloud Project.\n",
|
||||
"\n",
|
||||
"- If you are using Colab to run this notebook, use the cell below and continue.\n",
|
||||
"- If you are using Vertex AI Workbench, check out the setup instructions [here](https://github.com/GoogleCloudPlatform/generative-ai/tree/main/setup-env)."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from google.colab import auth\n",
|
||||
"\n",
|
||||
"auth.authenticate_user()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### API Enablement\n",
|
||||
"The `langchain-google-datastore` package requires that you [enable the Datastore API](https://console.cloud.google.com/flows/enableapi?apiid=datastore.googleapis.com) in your Google Cloud Project."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# enable Datastore API\n",
|
||||
"!gcloud services enable datastore.googleapis.com"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Basic Usage"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Save documents\n",
|
||||
"\n",
|
||||
"`DatastoreSaver` can store Documents into Datastore. By default, it will try to extract the entity key from the Document metadata.\n",
"\n",
"Save langchain documents with `DatastoreSaver.upsert_documents(<documents>)`."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_core.documents import Document\n",
|
||||
"from langchain_google_datastore import DatastoreSaver\n",
|
||||
"\n",
|
||||
"data = [Document(page_content=\"Hello, World!\")]\n",
|
||||
"saver = DatastoreSaver()\n",
|
||||
"saver.upsert_documents(data)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Save documents without reference\n",
|
||||
"\n",
|
||||
"If a collection is specified, the documents will be stored with an auto-generated id."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"saver = DatastoreSaver(\"Collection\")\n",
|
||||
"\n",
|
||||
"saver.upsert_documents(data)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Save documents with other references"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"doc_ids = [\"AnotherCollection/doc_id\", \"foo/bar\"]\n",
|
||||
"saver = DatastoreSaver()\n",
|
||||
"\n",
|
||||
"saver.upsert_documents(documents=data, document_ids=doc_ids)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Load from Collection or SubCollection"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Load langchain documents with `DatastoreLoader.load()` or `DatastoreLoader.lazy_load()`. `lazy_load` returns a generator that only queries the database during iteration. To initialize the `DatastoreLoader` class you need to provide:\n",
"\n",
"1. `source` - An instance of a Query, CollectionGroup, DocumentReference or the single `/`-delimited path to a Datastore collection."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_google_datastore import DatastoreLoader\n",
|
||||
"\n",
|
||||
"loader_collection = DatastoreLoader(\"Collection\")\n",
|
||||
"loader_subcollection = DatastoreLoader(\"Collection/doc/SubCollection\")\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"data_collection = loader_collection.load()\n",
|
||||
"data_subcollection = loader_subcollection.load()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Load a single Document"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from google.cloud import datastore\n",
|
||||
"\n",
|
||||
"client = datastore.Client()\n",
|
||||
"doc_ref = client.collection(\"foo\").document(\"bar\")\n",
|
||||
"\n",
|
||||
"loader_document = DatastoreLoader(doc_ref)\n",
|
||||
"\n",
|
||||
"data = loader_document.load()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Load from CollectionGroup or Query"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from google.cloud.datastore import CollectionGroup, FieldFilter, Query\n",
|
||||
"\n",
|
||||
"col_ref = client.collection(\"col_group\")\n",
|
||||
"collection_group = CollectionGroup(col_ref)\n",
|
||||
"\n",
|
||||
"loader_group = DatastoreLoader(collection_group)\n",
|
||||
"\n",
|
||||
"col_ref = client.collection(\"collection\")\n",
|
||||
"query = col_ref.where(filter=FieldFilter(\"region\", \"==\", \"west_coast\"))\n",
|
||||
"\n",
|
||||
"loader_query = DatastoreLoader(query)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Delete documents\n",
|
||||
"\n",
|
||||
"Delete a list of langchain documents from a Datastore collection with `DatastoreSaver.delete_documents(<documents>)`.\n",
"\n",
"If document IDs are provided, the Documents will be ignored and only the IDs will be used."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"saver = DatastoreSaver()\n",
|
||||
"\n",
|
||||
"saver.delete_documents(data)\n",
|
||||
"\n",
|
||||
"# The Documents will be ignored and only the document ids will be used.\n",
|
||||
"saver.delete_documents(data, doc_ids)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Advanced Usage"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Load documents with customized document page content & metadata\n",
|
||||
"\n",
|
||||
"The arguments `page_content_fields` and `metadata_fields` specify the Datastore Document fields to be written into the LangChain Document `page_content` and `metadata`."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"loader = DatastoreLoader(\n",
|
||||
" source=\"foo/bar/subcol\",\n",
|
||||
" page_content_fields=[\"data_field\"],\n",
|
||||
" metadata_fields=[\"metadata_field\"],\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"data = loader.load()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Customize Page Content Format\n",
|
||||
"\n",
|
||||
"When the `page_content` contains only one field, the information will be the field value only. Otherwise the `page_content` will be in JSON format."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Customize Connection & Authentication"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from google.auth import compute_engine\n",
|
||||
"from google.cloud.datastore import Client\n",
|
||||
"\n",
|
||||
"client = Client(database=\"non-default-db\", credentials=compute_engine.Credentials())\n",
|
||||
"loader = DatastoreLoader(\n",
|
||||
" source=\"foo\",\n",
|
||||
" client=client,\n",
|
||||
")"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.6"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 4
|
||||
}
|
||||
|
||||
@@ -8,9 +8,7 @@
|
||||
"\n",
|
||||
"> [Firestore](https://cloud.google.com/firestore) is a serverless document-oriented database that scales to meet any demand. Extend your database application to build AI-powered experiences leveraging Firestore's Langchain integrations.\n",
|
||||
"\n",
|
||||
"This notebook goes over how to use [Firestore](https://cloud.google.com/firestore) to [save, load and delete langchain documents](/docs/modules/data_connection/document_loaders/) with `FirestoreLoader` and `FirestoreSaver`.\n",
|
||||
"\n",
|
||||
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-firestore-python/).\n",
|
||||
"This notebook goes over how to use [Firestore](https://cloud.google.com/firestore) to [save, load and delete langchain documents](https://python.langchain.com/docs/modules/data_connection/document_loaders/) with `FirestoreLoader` and `FirestoreSaver`.\n",
|
||||
"\n",
|
||||
"[](https://colab.research.google.com/github/googleapis/langchain-google-firestore-python/blob/main/docs/document_loader.ipynb)"
|
||||
]
|
||||
@@ -24,7 +22,6 @@
|
||||
"To run this notebook, you will need to do the following:\n",
|
||||
"\n",
|
||||
"* [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n",
|
||||
"* [Enable the Firestore API](https://console.cloud.google.com/flows/enableapi?apiid=firestore.googleapis.com)\n",
|
||||
"* [Create a Firestore database](https://cloud.google.com/firestore/docs/manage-databases)\n",
|
||||
"\n",
|
||||
"After confirming access to the database in the runtime environment of this notebook, fill in the following values and run the cell before running the example scripts."
|
||||
@@ -131,6 +128,24 @@
|
||||
"auth.authenticate_user()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### API Enablement\n",
|
||||
"The `langchain-google-firestore` package requires that you [enable the Firestore Admin API](https://console.cloud.google.com/flows/enableapi?apiid=firestore.googleapis.com) in your Google Cloud Project."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# enable Firestore Admin API\n",
|
||||
"!gcloud services enable firestore.googleapis.com"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
@@ -155,7 +170,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_core.documents import Document\n",
|
||||
"from langchain_core.documents.base import Document\n",
|
||||
"from langchain_google_firestore import FirestoreSaver\n",
|
||||
"\n",
|
||||
"saver = FirestoreSaver()\n",
|
||||
@@ -217,7 +232,7 @@
|
||||
"source": [
|
||||
"Load langchain documents with `FirestoreLoader.load()` or `Firestore.lazy_load()`. `lazy_load` returns a generator that only queries database during the iteration. To initialize `FirestoreLoader` class you need to provide:\n",
|
||||
"\n",
|
||||
"1. `source` - An instance of a Query, CollectionGroup, DocumentReference or the single `\\`-delimited path to a Firestore collection."
|
||||
"1. `source` - An instance of a Query, CollectionGroup, DocumentReference or the single `\\`-delimited path to a Firestore collection`."
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
@@ -10,9 +10,7 @@
|
||||
"\n",
|
||||
"> [Google Memorystore for Redis](https://cloud.google.com/memorystore/docs/redis/memorystore-for-redis-overview) is a fully-managed service that is powered by the Redis in-memory data store to build application caches that provide sub-millisecond data access. Extend your database application to build AI-powered experiences leveraging Memorystore for Redis's Langchain integrations.\n",
|
||||
"\n",
|
||||
"This notebook goes over how to use [Memorystore for Redis](https://cloud.google.com/memorystore/docs/redis/memorystore-for-redis-overview) to [save, load and delete langchain documents](/docs/modules/data_connection/document_loaders/) with `MemorystoreDocumentLoader` and `MemorystoreDocumentSaver`.\n",
|
||||
"\n",
|
||||
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-memorystore-redis-python/).\n",
|
||||
"This notebook goes over how to use [Memorystore for Redis](https://cloud.google.com/memorystore/docs/redis/memorystore-for-redis-overview) to [save, load and delete langchain documents](https://python.langchain.com/docs/modules/data_connection/document_loaders/) with `MemorystoreDocumentLoader` and `MemorystoreDocumentSaver`.\n",
|
||||
"\n",
|
||||
"[](https://colab.research.google.com/github/googleapis/langchain-google-memorystore-redis-python/blob/main/docs/document_loader.ipynb)"
|
||||
]
|
||||
@@ -26,7 +24,6 @@
|
||||
"To run this notebook, you will need to do the following:\n",
|
||||
"\n",
|
||||
"* [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n",
|
||||
"* [Enable the Memorystore for Redis API](https://console.cloud.google.com/flows/enableapi?apiid=redis.googleapis.com)\n",
|
||||
"* [Create a Memorystore for Redis instance](https://cloud.google.com/memorystore/docs/redis/create-instance-console). Ensure that the version is greater than or equal to 5.0.\n",
|
||||
"\n",
|
||||
"After confirming access to the database in the runtime environment of this notebook, fill in the following values and run the cell before running the example scripts."
|
||||
@@ -162,7 +159,7 @@
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import redis\n",
|
||||
"from langchain_core.documents import Document\n",
|
||||
"from langchain_core.documents.base import Document\n",
|
||||
"from langchain_google_memorystore_redis import MemorystoreDocumentSaver\n",
|
||||
"\n",
|
||||
"test_docs = [\n",
|
||||
|
||||
@@ -6,11 +6,9 @@
|
||||
"source": [
|
||||
"# Google Spanner\n",
|
||||
"\n",
|
||||
"> [Spanner](https://cloud.google.com/spanner) is a highly scalable database that combines unlimited scalability with relational semantics, such as secondary indexes, strong consistency, schemas, and SQL providing 99.999% availability in one easy solution.\n",
|
||||
"> [Spanner](https://cloud.google.com/spanner) is a highly scalable database that combines unlimited scalability with relational semantics, such as secondary indexes, strong consistency, schemas, and SQL providing 99.999% availability in one easy solution. Extend your database application to build AI-powered experiences leveraging Spanner's Langchain integrations.\n",
|
||||
"\n",
|
||||
"This notebook goes over how to use [Spanner](https://cloud.google.com/spanner) to [save, load and delete langchain documents](/docs/modules/data_connection/document_loaders/) with `SpannerLoader` and `SpannerDocumentSaver`.\n",
|
||||
"\n",
|
||||
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-spanner-python/).\n",
|
||||
"This notebook goes over how to use [Spanner](https://cloud.google.com/spanner) to [save, load and delete langchain documents](https://python.langchain.com/docs/modules/data_connection/document_loaders/) with `SpannerLoader` and `SpannerDocumentSaver`.\n",
|
||||
"\n",
|
||||
"[](https://colab.research.google.com/github/googleapis/langchain-google-spanner-python/blob/main/docs/document_loader.ipynb)"
|
||||
]
|
||||
@@ -24,7 +22,6 @@
|
||||
"To run this notebook, you will need to do the following:\n",
|
||||
"\n",
|
||||
"* [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n",
|
||||
"* [Enable the Cloud Spanner API](https://console.cloud.google.com/flows/enableapi?apiid=spanner.googleapis.com)\n",
|
||||
"* [Create a Spanner instance](https://cloud.google.com/spanner/docs/create-manage-instances)\n",
|
||||
"* [Create a Spanner database](https://cloud.google.com/spanner/docs/create-manage-databases)\n",
|
||||
"* [Create a Spanner table](https://cloud.google.com/spanner/docs/create-query-database-console#create-schema)\n",
|
||||
@@ -61,7 +58,7 @@
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%pip install --upgrade --quiet langchain-google-spanner langchain"
|
||||
"%pip install --upgrade --quiet langchain-google-spanner"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -259,34 +256,6 @@
|
||||
"## Advanced Usage"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Custom client\n",
|
||||
"\n",
|
||||
"The client created by default is the default client. To pass in `credentials` and `project` explicitly, a custom client can be passed to the constructor."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from google.cloud import spanner\n",
|
||||
"from google.oauth2 import service_account\n",
|
||||
"\n",
|
||||
"creds = service_account.Credentials.from_service_account_file(\"/path/to/key.json\")\n",
|
||||
"custom_client = spanner.Client(project=\"my-project\", credentials=creds)\n",
|
||||
"loader = SpannerLoader(\n",
|
||||
" INSTANCE_ID,\n",
|
||||
" DATABASE_ID,\n",
|
||||
" query,\n",
|
||||
" client=custom_client,\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
@@ -443,7 +412,9 @@
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from google.cloud import spanner\n",
|
||||
"from google.oauth2 import service_account\n",
|
||||
"\n",
|
||||
"creds = service_account.Credentials.from_service_account_file(\"/path/to/key.json\")\n",
|
||||
"custom_client = spanner.Client(project=\"my-project\", credentials=creds)\n",
|
||||
"saver = SpannerDocumentSaver(\n",
|
||||
" INSTANCE_ID,\n",
|
||||
|
||||
@@ -16,7 +16,7 @@
|
||||
"---\n",
|
||||
"The best approach is to install Grobid via docker, see https://grobid.readthedocs.io/en/latest/Grobid-docker/. \n",
|
||||
"\n",
|
||||
"(Note: additional instructions can be found [here](/docs/integrations/providers/grobid).)\n",
|
||||
"(Note: additional instructions can be found [here](https://python.langchain.com/docs/docs/integrations/providers/grobid.mdx).)\n",
|
||||
"\n",
|
||||
"Once grobid is up-and-running you can interact as described below. \n"
|
||||
]
|
||||
|
||||
@@ -42,7 +42,6 @@
"* MongoDB database name\n",
"* MongoDB collection name\n",
"* (Optional) Content Filter dictionary\n",
"* (Optional) List of field names to include in the output\n",
"\n",
"The output takes the following format:\n",
"\n",
@@ -60,7 +59,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 24,
"metadata": {},
"outputs": [],
"source": [
@@ -72,7 +71,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
@@ -81,7 +80,7 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 25,
"metadata": {},
"outputs": [],
"source": [
@@ -90,22 +89,21 @@
" db_name=\"sample_restaurants\",\n",
" collection_name=\"restaurants\",\n",
" filter_criteria={\"borough\": \"Bronx\", \"cuisine\": \"Bakery\"},\n",
" field_names=[\"name\", \"address\"],\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 26,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"71"
"25359"
]
},
"execution_count": 12,
"execution_count": 26,
"metadata": {},
"output_type": "execute_result"
}
@@ -118,16 +116,16 @@
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 27,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(page_content=\"Morris Park Bake Shop {'building': '1007', 'coord': [-73.856077, 40.848447], 'street': 'Morris Park Ave', 'zipcode': '10462'}\", metadata={'database': 'sample_restaurants', 'collection': 'restaurants'})"
"Document(page_content=\"{'_id': ObjectId('5eb3d668b31de5d588f4292a'), 'address': {'building': '2780', 'coord': [-73.98241999999999, 40.579505], 'street': 'Stillwell Avenue', 'zipcode': '11224'}, 'borough': 'Brooklyn', 'cuisine': 'American', 'grades': [{'date': datetime.datetime(2014, 6, 10, 0, 0), 'grade': 'A', 'score': 5}, {'date': datetime.datetime(2013, 6, 5, 0, 0), 'grade': 'A', 'score': 7}, {'date': datetime.datetime(2012, 4, 13, 0, 0), 'grade': 'A', 'score': 12}, {'date': datetime.datetime(2011, 10, 12, 0, 0), 'grade': 'A', 'score': 12}], 'name': 'Riviera Caterer', 'restaurant_id': '40356018'}\", metadata={'database': 'sample_restaurants', 'collection': 'restaurants'})"
]
},
"execution_count": 13,
"execution_count": 27,
"metadata": {},
"output_type": "execute_result"
}

@@ -6,7 +6,7 @@
"source": [
"# TiDB\n",
"\n",
"> [TiDB Cloud](https://tidbcloud.com/), is a comprehensive Database-as-a-Service (DBaaS) solution, that provides dedicated and serverless options. TiDB Serverless is now integrating a built-in vector search into the MySQL landscape. With this enhancement, you can seamlessly develop AI applications using TiDB Serverless without the need for a new database or additional technical stacks. Be among the first to experience it by joining the waitlist for the private beta at https://tidb.cloud/ai.\n",
"> [TiDB](https://github.com/pingcap/tidb) is an open-source, cloud-native, distributed, MySQL-Compatible database for elastic scale and real-time analytics.\n",
"\n",
"This notebook introduces how to use `TiDBLoader` to load data from TiDB in langchain."
]

@@ -39,7 +39,9 @@
"metadata": {},
"outputs": [],
"source": [
"loader = ToMarkdownLoader(url=\"/docs/get_started/introduction\", api_key=api_key)"
"loader = ToMarkdownLoader(\n",
" url=\"https://python.langchain.com/docs/get_started/introduction\", api_key=api_key\n",
")"
]
},
{
@@ -70,9 +72,9 @@
"This framework consists of several parts.\n",
"\n",
"- **LangChain Libraries**: The Python and JavaScript libraries. Contains interfaces and integrations for a myriad of components, a basic run time for combining these components into chains and agents, and off-the-shelf implementations of chains and agents.\n",
"- **[LangChain Templates](/docs/templates)**: A collection of easily deployable reference architectures for a wide variety of tasks.\n",
"- **[LangServe](/docs/langserve)**: A library for deploying LangChain chains as a REST API.\n",
"- **[LangSmith](/docs/langsmith)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.\n",
"- **[LangChain Templates](https://python.langchain.com/docs/templates)**: A collection of easily deployable reference architectures for a wide variety of tasks.\n",
"- **[LangServe](https://python.langchain.com/docs/langserve)**: A library for deploying LangChain chains as a REST API.\n",
"- **[LangSmith](https://python.langchain.com/docs/langsmith)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.\n",
"\n",
"\n",
"\n",
@@ -99,11 +101,11 @@
"\n",
"## Get started [](\\#get-started \"Direct link to Get started\")\n",
"\n",
"[Here’s](/docs/get_started/installation) how to install LangChain, set up your environment, and start building.\n",
"[Here’s](https://python.langchain.com/docs/get_started/installation) how to install LangChain, set up your environment, and start building.\n",
"\n",
"We recommend following our [Quickstart](/docs/get_started/quickstart) guide to familiarize yourself with the framework by building your first LangChain application.\n",
"We recommend following our [Quickstart](https://python.langchain.com/docs/get_started/quickstart) guide to familiarize yourself with the framework by building your first LangChain application.\n",
"\n",
"Read up on our [Security](/docs/security) best practices to make sure you're developing safely with LangChain.\n",
"Read up on our [Security](https://python.langchain.com/docs/security) best practices to make sure you're developing safely with LangChain.\n",
"\n",
"note\n",
"\n",
@@ -113,43 +115,43 @@
"\n",
"LCEL is a declarative way to compose chains. LCEL was designed from day 1 to support putting prototypes in production, with no code changes, from the simplest “prompt + LLM” chain to the most complex chains.\n",
"\n",
"- **[Overview](/docs/expression_language/)**: LCEL and its benefits\n",
"- **[Interface](/docs/expression_language/interface)**: The standard interface for LCEL objects\n",
"- **[How-to](/docs/expression_language/how_to)**: Key features of LCEL\n",
"- **[Cookbook](/docs/expression_language/cookbook)**: Example code for accomplishing common tasks\n",
"- **[Overview](https://python.langchain.com/docs/expression_language/)**: LCEL and its benefits\n",
"- **[Interface](https://python.langchain.com/docs/expression_language/interface)**: The standard interface for LCEL objects\n",
"- **[How-to](https://python.langchain.com/docs/expression_language/how_to)**: Key features of LCEL\n",
"- **[Cookbook](https://python.langchain.com/docs/expression_language/cookbook)**: Example code for accomplishing common tasks\n",
"\n",
"## Modules [](\\#modules \"Direct link to Modules\")\n",
"\n",
"LangChain provides standard, extendable interfaces and integrations for the following modules:\n",
"\n",
"#### [Model I/O](/docs/modules/model_io/) [](\\#model-io \"Direct link to model-io\")\n",
"#### [Model I/O](https://python.langchain.com/docs/modules/model_io/) [](\\#model-io \"Direct link to model-io\")\n",
"\n",
"Interface with language models\n",
"\n",
"#### [Retrieval](/docs/modules/data_connection/) [](\\#retrieval \"Direct link to retrieval\")\n",
"#### [Retrieval](https://python.langchain.com/docs/modules/data_connection/) [](\\#retrieval \"Direct link to retrieval\")\n",
"\n",
"Interface with application-specific data\n",
"\n",
"#### [Agents](/docs/modules/agents/) [](\\#agents \"Direct link to agents\")\n",
"#### [Agents](https://python.langchain.com/docs/modules/agents/) [](\\#agents \"Direct link to agents\")\n",
"\n",
"Let models choose which tools to use given high-level directives\n",
"\n",
"## Examples, ecosystem, and resources [](\\#examples-ecosystem-and-resources \"Direct link to Examples, ecosystem, and resources\")\n",
"\n",
"### [Use cases](/docs/use_cases/question_answering/) [](\\#use-cases \"Direct link to use-cases\")\n",
"### [Use cases](https://python.langchain.com/docs/use_cases/question_answering/) [](\\#use-cases \"Direct link to use-cases\")\n",
"\n",
"Walkthroughs and techniques for common end-to-end use cases, like:\n",
"\n",
"- [Document question answering](/docs/use_cases/question_answering/)\n",
"- [Chatbots](/docs/use_cases/chatbots/)\n",
"- [Analyzing structured data](/docs/use_cases/sql/)\n",
"- [Document question answering](https://python.langchain.com/docs/use_cases/question_answering/)\n",
"- [Chatbots](https://python.langchain.com/docs/use_cases/chatbots/)\n",
"- [Analyzing structured data](https://python.langchain.com/docs/use_cases/sql/)\n",
"- and much more...\n",
"\n",
"### [Integrations](/docs/integrations/providers/) [](\\#integrations \"Direct link to integrations\")\n",
"### [Integrations](https://python.langchain.com/docs/integrations/providers/) [](\\#integrations \"Direct link to integrations\")\n",
"\n",
"LangChain is part of a rich ecosystem of tools that integrate with our framework and build on top of it. Check out our growing list of [integrations](/docs/integrations/providers/).\n",
"LangChain is part of a rich ecosystem of tools that integrate with our framework and build on top of it. Check out our growing list of [integrations](https://python.langchain.com/docs/integrations/providers/).\n",
"\n",
"### [Guides](/docs/guides/debugging) [](\\#guides \"Direct link to guides\")\n",
"### [Guides](https://python.langchain.com/docs/guides/debugging) [](\\#guides \"Direct link to guides\")\n",
"\n",
"Best practices for developing with LangChain.\n",
"\n",
@@ -157,11 +159,11 @@
"\n",
"Head to the reference section for full documentation of all classes and methods in the LangChain and LangChain Experimental Python packages.\n",
"\n",
"### [Developer's guide](/docs/contributing) [](\\#developers-guide \"Direct link to developers-guide\")\n",
"### [Developer's guide](https://python.langchain.com/docs/contributing) [](\\#developers-guide \"Direct link to developers-guide\")\n",
"\n",
"Check out the developer's guide for guidelines on contributing and help getting your dev environment set up.\n",
"\n",
"Head to the [Community navigator](/docs/community) to find places to ask questions, share feedback, meet other developers, and dream about the future of LLM’s.\n"
"Head to the [Community navigator](https://python.langchain.com/docs/community) to find places to ask questions, share feedback, meet other developers, and dream about the future of LLM’s.\n"
]
}
],

@@ -43,7 +43,7 @@
"source": [
"## Environment Setup\n",
"\n",
"We'll need to get an [Anthropic](https://console.anthropic.com/settings/keys) API key and set the `ANTHROPIC_API_KEY` environment variable:"
"We'll need to get a [Anthropic](https://console.anthropic.com/settings/keys) and set the `ANTHROPIC_API_KEY` environment variable:"
]
},
{

@@ -7,7 +7,7 @@
"source": [
"# Baseten\n",
"\n",
"[Baseten](https://baseten.co) is a [Provider](/docs/integrations/providers/baseten) in the LangChain ecosystem that implements the LLMs component.\n",
"[Baseten](https://baseten.co) is a [Provider](https://python.langchain.com/docs/integrations/providers/baseten) in the LangChain ecosystem that implements the LLMs component.\n",
"\n",
"This example demonstrates using an LLM — Mistral 7B hosted on Baseten — with LangChain."
]
@@ -83,7 +83,7 @@
"\n",
"We can chain together multiple calls to one or multiple models, which is the whole point of Langchain!\n",
"\n",
"For example, we can replace GPT with Mistral in this demo of terminal emulation."
"For example, we can replace GPT with Mistral in this [demo of terminal emulation](https://python.langchain.com/docs/modules/agents/how_to/chatgpt_clone)."
]
},
{
@@ -167,7 +167,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": ".venv",
"language": "python",
"name": "python3"
},
@@ -181,9 +181,10 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
"version": "3.10.4"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 4
"nbformat_minor": 2
}

@@ -1,277 +0,0 @@
{
"cells": [
{
"cell_type": "raw",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Friendli\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Friendli\n",
"\n",
"> [Friendli](https://friendli.ai/) enhances AI application performance and optimizes cost savings with scalable, efficient deployment options, tailored for high-demand AI workloads.\n",
"\n",
"This tutorial guides you through integrating `Friendli` with LangChain."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"Ensure the `langchain_community` and `friendli-client` are installed.\n",
"\n",
"```sh\n",
"pip install -U langchain-comminity friendli-client.\n",
"```\n",
"\n",
"Sign in to [Friendli Suite](https://suite.friendli.ai/) to create a Personal Access Token, and set it as the `FRIENDLI_TOKEN` environment."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"os.environ[\"FRIENDLI_TOKEN\"] = getpass.getpass(\"Friendi Personal Access Token: \")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can initialize a Friendli chat model with selecting the model you want to use. The default model is `mixtral-8x7b-instruct-v0-1`. You can check the available models at [docs.friendli.ai](https://docs.periflow.ai/guides/serverless_endpoints/pricing#text-generation-models)."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.llms.friendli import Friendli\n",
"\n",
"llm = Friendli(model=\"mixtral-8x7b-instruct-v0-1\", max_tokens=100, temperature=0)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Usage\n",
"\n",
"`Frienli` supports all methods of [`LLM`](/docs/modules/model_io/llms/) including async APIs."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can use functionality of `invoke`, `batch`, `generate`, and `stream`."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Username checks out.\\nUser 1: I\\'m not sure if you\\'re being sarcastic or not, but I\\'ll take it as a compliment.\\nUser 0: I\\'m not being sarcastic. I\\'m just saying that your username is very fitting.\\nUser 1: Oh, I thought you were saying that I\\'m a \"dumbass\" because I\\'m a \"dumbass\" who \"checks out\"'"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm.invoke(\"Tell me a joke.\")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['Username checks out.\\nUser 1: I\\'m not sure if you\\'re being sarcastic or not, but I\\'ll take it as a compliment.\\nUser 0: I\\'m not being sarcastic. I\\'m just saying that your username is very fitting.\\nUser 1: Oh, I thought you were saying that I\\'m a \"dumbass\" because I\\'m a \"dumbass\" who \"checks out\"',\n",
" 'Username checks out.\\nUser 1: I\\'m not sure if you\\'re being sarcastic or not, but I\\'ll take it as a compliment.\\nUser 0: I\\'m not being sarcastic. I\\'m just saying that your username is very fitting.\\nUser 1: Oh, I thought you were saying that I\\'m a \"dumbass\" because I\\'m a \"dumbass\" who \"checks out\"']"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm.batch([\"Tell me a joke.\", \"Tell me a joke.\"])"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"LLMResult(generations=[[Generation(text='Username checks out.\\nUser 1: I\\'m not sure if you\\'re being sarcastic or not, but I\\'ll take it as a compliment.\\nUser 0: I\\'m not being sarcastic. I\\'m just saying that your username is very fitting.\\nUser 1: Oh, I thought you were saying that I\\'m a \"dumbass\" because I\\'m a \"dumbass\" who \"checks out\"')], [Generation(text='Username checks out.\\nUser 1: I\\'m not sure if you\\'re being sarcastic or not, but I\\'ll take it as a compliment.\\nUser 0: I\\'m not being sarcastic. I\\'m just saying that your username is very fitting.\\nUser 1: Oh, I thought you were saying that I\\'m a \"dumbass\" because I\\'m a \"dumbass\" who \"checks out\"')]], llm_output={'model': 'mixtral-8x7b-instruct-v0-1'}, run=[RunInfo(run_id=UUID('a2009600-baae-4f5a-9f69-23b2bc916e4c')), RunInfo(run_id=UUID('acaf0838-242c-4255-85aa-8a62b675d046'))])"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm.generate([\"Tell me a joke.\", \"Tell me a joke.\"])"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Username checks out.\n",
"User 1: I'm not sure if you're being sarcastic or not, but I'll take it as a compliment.\n",
"User 0: I'm not being sarcastic. I'm just saying that your username is very fitting.\n",
"User 1: Oh, I thought you were saying that I'm a \"dumbass\" because I'm a \"dumbass\" who \"checks out\""
]
}
],
"source": [
"for chunk in llm.stream(\"Tell me a joke.\"):\n",
" print(chunk, end=\"\", flush=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also use all functionality of async APIs: `ainvoke`, `abatch`, `agenerate`, and `astream`."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Username checks out.\\nUser 1: I\\'m not sure if you\\'re being sarcastic or not, but I\\'ll take it as a compliment.\\nUser 0: I\\'m not being sarcastic. I\\'m just saying that your username is very fitting.\\nUser 1: Oh, I thought you were saying that I\\'m a \"dumbass\" because I\\'m a \"dumbass\" who \"checks out\"'"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"await llm.ainvoke(\"Tell me a joke.\")"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['Username checks out.\\nUser 1: I\\'m not sure if you\\'re being sarcastic or not, but I\\'ll take it as a compliment.\\nUser 0: I\\'m not being sarcastic. I\\'m just saying that your username is very fitting.\\nUser 1: Oh, I thought you were saying that I\\'m a \"dumbass\" because I\\'m a \"dumbass\" who \"checks out\"',\n",
" 'Username checks out.\\nUser 1: I\\'m not sure if you\\'re being sarcastic or not, but I\\'ll take it as a compliment.\\nUser 0: I\\'m not being sarcastic. I\\'m just saying that your username is very fitting.\\nUser 1: Oh, I thought you were saying that I\\'m a \"dumbass\" because I\\'m a \"dumbass\" who \"checks out\"']"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"await llm.abatch([\"Tell me a joke.\", \"Tell me a joke.\"])"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"LLMResult(generations=[[Generation(text=\"Username checks out.\\nUser 1: I'm not sure if you're being serious or not, but I'll take it as a compliment.\\nUser 0: I'm being serious. I'm not sure if you're being serious or not.\\nUser 1: I'm being serious. I'm not sure if you're being serious or not.\\nUser 0: I'm being serious. I'm not sure\")], [Generation(text=\"Username checks out.\\nUser 1: I'm not sure if you're being serious or not, but I'll take it as a compliment.\\nUser 0: I'm being serious. I'm not sure if you're being serious or not.\\nUser 1: I'm being serious. I'm not sure if you're being serious or not.\\nUser 0: I'm being serious. I'm not sure\")]], llm_output={'model': 'mixtral-8x7b-instruct-v0-1'}, run=[RunInfo(run_id=UUID('46144905-7350-4531-a4db-22e6a827c6e3')), RunInfo(run_id=UUID('e2b06c30-ffff-48cf-b792-be91f2144aa6'))])"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"await llm.agenerate([\"Tell me a joke.\", \"Tell me a joke.\"])"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Username checks out.\n",
"User 1: I'm not sure if you're being sarcastic or not, but I'll take it as a compliment.\n",
"User 0: I'm not being sarcastic. I'm just saying that your username is very fitting.\n",
"User 1: Oh, I thought you were saying that I'm a \"dumbass\" because I'm a \"dumbass\" who \"checks out\""
]
}
],
"source": [
"async for chunk in llm.astream(\"Tell me a joke.\"):\n",
" print(chunk, end=\"\", flush=True)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "langchain",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.7"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
@@ -25,7 +25,7 @@
"id": "bead5ede-d9cc-44b9-b062-99c90a10cf40",
"metadata": {},
"source": [
"A guide on using [Google Generative AI](https://developers.generativeai.google/) models with Langchain. Note: It's separate from Google Cloud Vertex AI [integration](/docs/integrations/llms/google_vertex_ai_palm)."
"A guide on using [Google Generative AI](https://developers.generativeai.google/) models with Langchain. Note: It's separate from Google Cloud Vertex AI [integration](https://python.langchain.com/docs/integrations/llms/google_vertex_ai_palm)."
]
},
{

@@ -11,7 +11,7 @@
"\n",
"The [Hugging Face Model Hub](https://huggingface.co/models) hosts over 120k models, 20k datasets, and 50k demo apps (Spaces), all open source and publicly available, in an online platform where people can easily collaborate and build ML together.\n",
"\n",
"These can be called from LangChain either through this local pipeline wrapper or by calling their hosted inference endpoints through the HuggingFaceHub class."
"These can be called from LangChain either through this local pipeline wrapper or by calling their hosted inference endpoints through the HuggingFaceHub class. For more information on the hosted pipelines, see the [HuggingFaceHub](./huggingface_hub) notebook."
]
},
{
@@ -259,26 +259,6 @@
"!optimum-cli export openvino --model gpt2 ov_model"
]
},
{
"cell_type": "markdown",
"id": "0f7a6d21",
"metadata": {},
"source": [
"It is recommended to apply 8 or 4-bit weight quantization to reduce inference latency and model footprint using `--weight-format`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "97088ea0",
"metadata": {},
"outputs": [],
"source": [
"!optimum-cli export openvino --model gpt2 --weight-format int8 ov_model # for 8-bit quantization\n",
"\n",
"!optimum-cli export openvino --model gpt2 --weight-format int4 ov_model # for 4-bit quantization"
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -300,38 +280,6 @@
"\n",
"print(ov_chain.invoke({\"question\": question}))"
]
},
{
"cell_type": "markdown",
"id": "a2c5726c",
"metadata": {},
"source": [
"You can get additional inference speed improvement with Dynamic Quantization of activations and KV-cache quantization. These options can be enabled with `ov_config` as follows:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a1f9c2c5",
"metadata": {},
"outputs": [],
"source": [
"ov_config = {\n",
" \"KV_CACHE_PRECISION\": \"u8\",\n",
" \"DYNAMIC_QUANTIZATION_GROUP_SIZE\": \"32\",\n",
" \"PERFORMANCE_HINT\": \"LATENCY\",\n",
" \"NUM_STREAMS\": \"1\",\n",
" \"CACHE_DIR\": \"\",\n",
"}"
]
},
{
"cell_type": "markdown",
"id": "da9a9239",
"metadata": {},
"source": [
"For more information refer to [OpenVINO LLM guide](https://docs.openvino.ai/2024/openvino-workflow/generative-ai-models-guide.html)."
]
}
],
"metadata": {

@@ -105,7 +105,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"To learn more about the LangChain Expressive Language and the available methods on an LLM, see the [LCEL Interface](/docs/expression_language/interface)"
"To learn more about the LangChain Expressive Language and the available methods on an LLM, see the [LCEL Interface](https://python.langchain.com/docs/expression_language/interface)"
]
}
],

@@ -6,19 +6,22 @@
"source": [
"# OctoAI\n",
"\n",
"[OctoAI](https://docs.octoai.cloud/docs) offers easy access to efficient compute and enables users to integrate their choice of AI models into applications. The `OctoAI` compute service helps you run, tune, and scale AI applications easily.\n",
">[OctoML](https://docs.octoai.cloud/docs) is a service with efficient compute. It enables users to integrate their choice of AI models into applications. The `OctoAI` compute service helps you run, tune, and scale AI applications.\n",
"\n",
"This example goes over how to use LangChain to interact with `OctoAI` [LLM endpoints](https://octoai.cloud/templates)\n",
"\n",
"## Setup\n",
"\n",
"To run our example app, there are two simple steps to take:\n",
"To run our example app, there are four simple steps to take:\n",
"\n",
"1. Get an API Token from [your OctoAI account page](https://octoai.cloud/settings).\n",
"1. Clone the MPT-7B demo template to your OctoAI account by visiting <https://octoai.cloud/templates/mpt-7b-demo> then clicking \"Clone Template.\" \n",
" 1. If you want to use a different LLM model, you can also containerize the model and make a custom OctoAI endpoint yourself, by following [Build a Container from Python](doc:create-custom-endpoints-from-python-code) and [Create a Custom Endpoint from a Container](doc:create-custom-endpoints-from-a-container)\n",
" \n",
"2. Paste your API key in in the code cell below.\n",
"2. Paste your Endpoint URL in the code cell below\n",
"\n",
"Note: If you want to use a different LLM model, you can containerize the model and make a custom OctoAI endpoint yourself, by following [Build a Container from Python](https://octo.ai/docs/bring-your-own-model/advanced-build-a-container-from-scratch-in-python) and [Create a Custom Endpoint from a Container](https://octo.ai/docs/bring-your-own-model/create-custom-endpoints-from-a-container/create-custom-endpoints-from-a-container) and then update your Endpoint URL in the code cell below.\n"
"3. Get an API Token from [your OctoAI account page](https://octoai.cloud/settings).\n",
" \n",
"4. Paste your API key in in the code cell below"
]
},
{

@@ -175,7 +175,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"To learn more about the LangChain Expressive Language and the available methods on an LLM, see the [LCEL Interface](/docs/expression_language/interface)"
"To learn more about the LangChain Expressive Language and the available methods on an LLM, see the [LCEL Interface](https://python.langchain.com/docs/expression_language/interface)"
]
},
{

@@ -14,7 +14,7 @@
"\n",
"This example showcases how to connect to [PromptLayer](https://www.promptlayer.com) to start recording your OpenAI requests.\n",
"\n",
"Another example is [here](/docs/integrations/providers/promptlayer)."
"Another example is [here](https://python.langchain.com/docs/integrations/providers/promptlayer)."
]
},
{

@@ -290,7 +290,7 @@
"metadata": {},
"source": [
"## Streaming Response\n",
"You can optionally stream the response as it is produced, which is helpful to show interactivity to users for time-consuming generations. See detailed docs on [Streaming](/docs/modules/model_io/llms/streaming_llm) for more information."
"You can optionally stream the response as it is produced, which is helpful to show interactivity to users for time-consuming generations. See detailed docs on [Streaming](https://python.langchain.com/docs/modules/model_io/llms/how_to/streaming_llm) for more information."
]
},
{
@@ -540,9 +540,9 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": "poetry-venv",
"language": "python",
"name": "python3"
"name": "poetry-venv"
},
"language_info": {
"codemirror_mode": {

@@ -88,7 +88,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_community.chat_message_histories import AstraDBChatMessageHistory\n",
|
||||
"from langchain.memory import AstraDBChatMessageHistory\n",
|
||||
"\n",
|
||||
"message_history = AstraDBChatMessageHistory(\n",
|
||||
" session_id=\"test-session\",\n",
|
||||
|
||||
@@ -45,18 +45,6 @@
|
||||
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "0c68f434-0ccf-49a4-8313-e9f6064086a8",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_community.chat_message_histories import (\n",
|
||||
" DynamoDBChatMessageHistory,\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "030d784f",
|
||||
@@ -117,6 +105,8 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_community.chat_message_histories import DynamoDBChatMessageHistory\n",
|
||||
"\n",
|
||||
"history = DynamoDBChatMessageHistory(table_name=\"SessionTable\", session_id=\"0\")\n",
|
||||
"\n",
|
||||
"history.add_user_message(\"hi!\")\n",
|
||||
@@ -162,6 +152,8 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_community.chat_message_histories import DynamoDBChatMessageHistory\n",
|
||||
"\n",
|
||||
"history = DynamoDBChatMessageHistory(\n",
|
||||
" table_name=\"SessionTable\",\n",
|
||||
" session_id=\"0\",\n",
|
||||
@@ -192,7 +184,6 @@
|
||||
"execution_count": 14,
|
||||
"id": "088c037c",
|
||||
"metadata": {
|
||||
"collapsed": false,
|
||||
"jupyter": {
|
||||
"outputs_hidden": false
|
||||
}
|
||||
@@ -207,6 +198,8 @@
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from langchain_community.chat_message_histories import DynamoDBChatMessageHistory\n",
|
||||
"\n",
|
||||
"composite_table = dynamodb.create_table(\n",
|
||||
" TableName=\"CompositeTable\",\n",
|
||||
" KeySchema=[\n",
|
||||
@@ -391,7 +384,7 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.12"
|
||||
"version": "3.10.1"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
|
||||
@@ -123,9 +123,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_community.chat_message_histories import (\n",
|
||||
" CassandraChatMessageHistory,\n",
|
||||
")\n",
|
||||
"from langchain.memory import CassandraChatMessageHistory\n",
|
||||
"\n",
|
||||
"message_history = CassandraChatMessageHistory(\n",
|
||||
" session_id=\"test-session\",\n",
|
||||
@@ -183,7 +181,7 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.12"
|
||||
"version": "3.9.17"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
|
||||
@@ -116,9 +116,7 @@
|
||||
"source": [
|
||||
"import os\n",
|
||||
"\n",
|
||||
"from langchain_community.chat_message_histories import (\n",
|
||||
" ElasticsearchChatMessageHistory,\n",
|
||||
")\n",
|
||||
"from langchain.memory import ElasticsearchChatMessageHistory\n",
|
||||
"\n",
|
||||
"es_url = os.environ.get(\"ES_URL\", \"http://localhost:9200\")\n",
|
||||
"\n",
|
||||
|
||||
@@ -6,13 +6,9 @@
"source": [
"# Google AlloyDB for PostgreSQL\n",
"\n",
"> [Google Cloud AlloyDB for PostgreSQL](https://cloud.google.com/alloydb) is a fully managed `PostgreSQL` compatible database service for your most demanding enterprise workloads. `AlloyDB` combines the best of `Google Cloud` with `PostgreSQL`, for superior performance, scale, and availability. Extend your database application to build AI-powered experiences leveraging `AlloyDB` Langchain integrations.\n",
"> [AlloyDB](https://cloud.google.com/alloydb) is a fully managed PostgreSQL compatible database service for your most demanding enterprise workloads. AlloyDB combines the best of Google with PostgreSQL, for superior performance, scale, and availability. Extend your database application to build AI-powered experiences leveraging AlloyDB Langchain integrations.\n",
"\n",
"This notebook goes over how to use `Google Cloud AlloyDB for PostgreSQL` to store chat message history with the `AlloyDBChatMessageHistory` class.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-alloydb-pg-python/).\n",
"\n",
"[](https://colab.research.google.com/github/googleapis/langchain-google-alloydb-pg-python/blob/main/docs/chat_message_history.ipynb)"
"This notebook goes over how to use `AlloyDB for PostgreSQL` to store chat message history with the `AlloyDBChatMessageHistory` class."
]
},
{
@@ -24,7 +20,6 @@
"To run this notebook, you will need to do the following:\n",
"\n",
" * [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n",
" * [Enable the AlloyDB API](https://console.cloud.google.com/flows/enableapi?apiid=alloydb.googleapis.com)\n",
" * [Create a AlloyDB instance](https://cloud.google.com/alloydb/docs/instance-primary-create)\n",
" * [Create a AlloyDB database](https://cloud.google.com/alloydb/docs/database-create)\n",
" * [Add an IAM database user to the database](https://cloud.google.com/alloydb/docs/manage-iam-authn) (Optional)"
@@ -174,7 +169,7 @@
"\n",
"To create a `AlloyDBEngine` using `AlloyDBEngine.from_instance()` you need to provide only 5 things:\n",
"\n",
"1. `project_id` : Project ID of the Google Cloud Project where the AlloyDB instance is located.\n",
"1. `project_id` : Project ID of the Google Cloud Project where the AlloyDB instance is located.\n",
"1. `region` : Region where the AlloyDB instance is located.\n",
"1. `cluster`: The name of the AlloyDB cluster.\n",
"1. `instance` : The name of the AlloyDB instance.\n",
@@ -288,7 +283,7 @@
"\n",
"We can easily combine this message history class with [LCEL Runnables](/docs/expression_language/how_to/message_history)\n",
"\n",
"To do this we will use one of [Google's Vertex AI chat models](/docs/integrations/chat/google_vertex_ai_palm) which requires that you [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com) in your Google Cloud Project.\n"
"To do this we will use one of [Google's Vertex AI chat models](https://python.langchain.com/docs/integrations/chat/google_vertex_ai_palm) which requires that you [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com) in your Google Cloud Project.\n"
]
},
{
@@ -6,11 +6,9 @@
"source": [
"# Google Bigtable\n",
"\n",
"> [Google Cloud Bigtable](https://cloud.google.com/bigtable) is a key-value and wide-column store, ideal for fast access to structured, semi-structured, or unstructured data. Extend your database application to build AI-powered experiences leveraging Bigtable's Langchain integrations.\n",
"> [Bigtable](https://cloud.google.com/bigtable) is a key-value and wide-column store, ideal for fast access to structured, semi-structured, or unstructured data. Extend your database application to build AI-powered experiences leveraging Bigtable's Langchain integrations.\n",
"\n",
"This notebook goes over how to use [Google Cloud Bigtable](https://cloud.google.com/bigtable) to store chat message history with the `BigtableChatMessageHistory` class.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-bigtable-python/).\n",
"This notebook goes over how to use [Bigtable](https://cloud.google.com/bigtable) to store chat message history with the `BigtableChatMessageHistory` class.\n",
"\n",
"[](https://colab.research.google.com/github/googleapis/langchain-google-bigtable-python/blob/main/docs/chat_message_history.ipynb)\n"
]
@@ -24,7 +22,6 @@
"To run this notebook, you will need to do the following:\n",
"\n",
"* [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n",
"* [Enable the Bigtable API](https://console.cloud.google.com/flows/enableapi?apiid=bigtable.googleapis.com)\n",
"* [Create a Bigtable instance](https://cloud.google.com/bigtable/docs/creating-instance)\n",
"* [Create a Bigtable table](https://cloud.google.com/bigtable/docs/managing-tables)\n",
"* [Create Bigtable access credentials](https://developers.google.com/workspace/guides/create-credentials)"
@@ -7,15 +7,11 @@
"id": "f22eab3f84cbeb37"
},
"source": [
"# Google SQL for SQL Server\n",
"# Google Cloud SQL for SQL Server\n",
"\n",
"> [Google Cloud SQL](https://cloud.google.com/sql) is a fully managed relational database service that offers high performance, seamless integration, and impressive scalability. It offers `MySQL`, `PostgreSQL`, and `SQL Server` database engines. Extend your database application to build AI-powered experiences leveraging Cloud SQL's Langchain integrations.\n",
"> [Cloud SQL](https://cloud.google.com/sql) is a fully managed relational database service that offers high performance, seamless integration, and impressive scalability. It offers MySQL, PostgreSQL, and SQL Server database engines. Extend your database application to build AI-powered experiences leveraging Cloud SQL's Langchain integrations.\n",
"\n",
"This notebook goes over how to use `Google Cloud SQL for SQL Server` to store chat message history with the `MSSQLChatMessageHistory` class.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-cloud-sql-mssql-python/).\n",
"\n",
"[](https://colab.research.google.com/github/googleapis/langchain-google-cloud-sql-mssql-python/blob/main/docs/chat_message_history.ipynb)"
"This notebook goes over how to use `Cloud SQL for SQL Server` to store chat message history with the `MSSQLChatMessageHistory` class."
]
},
{
@@ -30,7 +26,6 @@
"To run this notebook, you will need to do the following:\n",
"\n",
" * [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n",
" * [Enable the Cloud SQL Admin API.](https://console.cloud.google.com/marketplace/product/google/sqladmin.googleapis.com)\n",
" * [Create a Cloud SQL for SQL Server instance](https://cloud.google.com/sql/docs/sqlserver/create-instance)\n",
" * [Create a Cloud SQL database](https://cloud.google.com/sql/docs/sqlserver/create-manage-databases)\n",
" * [Create a database user](https://cloud.google.com/sql/docs/sqlserver/create-manage-users) (Optional if you choose to use the `sqlserver` user)"
@@ -224,7 +219,7 @@
"\n",
"To create a `MSSQLEngine` using `MSSQLEngine.from_instance()` you need to provide only 6 things:\n",
"\n",
"1. `project_id` : Project ID of the Google Cloud Project where the Cloud SQL instance is located.\n",
"1. `project_id` : Project ID of the Google Cloud Project where the Cloud SQL instance is located.\n",
"1. `region` : Region where the Cloud SQL instance is located.\n",
"1. `instance` : The name of the Cloud SQL instance.\n",
"1. `database` : The name of the database to connect to on the Cloud SQL instance.\n",
@@ -243,7 +238,6 @@
"end_time": "2023-08-28T10:04:38.077748Z",
"start_time": "2023-08-28T10:04:36.105894Z"
},
"collapsed": false,
"id": "4576e914a866fb40",
"jupyter": {
"outputs_hidden": false
@@ -334,7 +328,6 @@
"colab": {
"base_uri": "https://localhost:8080/"
},
"collapsed": false,
"id": "b476688cbb32ba90",
"jupyter": {
"outputs_hidden": false
@@ -393,7 +386,7 @@
"\n",
"We can easily combine this message history class with [LCEL Runnables](/docs/expression_language/how_to/message_history)\n",
"\n",
"To do this we will use one of [Google's Vertex AI chat models](/docs/integrations/chat/google_vertex_ai_palm) which requires that you [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com) in your Google Cloud Project.\n"
"To do this we will use one of [Google's Vertex AI chat models](https://python.langchain.com/docs/integrations/chat/google_vertex_ai_palm) which requires that you [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com) in your Google Cloud Project.\n"
]
},
{
@@ -552,7 +545,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.9.1"
}
},
"nbformat": 4,
@@ -7,13 +7,11 @@
"id": "f22eab3f84cbeb37"
},
"source": [
"# Google SQL for MySQL\n",
"# Google Cloud SQL for MySQL\n",
"\n",
"> [Cloud Cloud SQL](https://cloud.google.com/sql) is a fully managed relational database service that offers high performance, seamless integration, and impressive scalability. It offers `MySQL`, `PostgreSQL`, and `SQL Server` database engines. Extend your database application to build AI-powered experiences leveraging Cloud SQL's Langchain integrations.\n",
"> [Cloud SQL](https://cloud.google.com/sql) is a fully managed relational database service that offers high performance, seamless integration, and impressive scalability. It offers MySQL, PostgreSQL, and SQL Server database engines. Extend your database application to build AI-powered experiences leveraging Cloud SQL's Langchain integrations.\n",
"\n",
"This notebook goes over how to use `Google Cloud SQL for MySQL` to store chat message history with the `MySQLChatMessageHistory` class.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-cloud-sql-mysql-python/).\n",
"This notebook goes over how to use `Cloud SQL for MySQL` to store chat message history with the `MySQLChatMessageHistory` class.\n",
"\n",
"[](https://colab.research.google.com/github/googleapis/langchain-google-cloud-sql-mysql-python/blob/main/docs/chat_message_history.ipynb)"
]
@@ -30,7 +28,6 @@
"To run this notebook, you will need to do the following:\n",
"\n",
" * [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n",
" * [Enable the Cloud SQL Admin API.](https://console.cloud.google.com/marketplace/product/google/sqladmin.googleapis.com)\n",
" * [Create a Cloud SQL for MySQL instance](https://cloud.google.com/sql/docs/mysql/create-instance)\n",
" * [Create a Cloud SQL database](https://cloud.google.com/sql/docs/mysql/create-manage-databases)\n",
" * [Add an IAM database user to the database](https://cloud.google.com/sql/docs/mysql/add-manage-iam-users#creating-a-database-user) (Optional)"
@@ -222,7 +219,7 @@
"\n",
"To create a `MySQLEngine` using `MySQLEngine.from_instance()` you need to provide only 4 things:\n",
"\n",
"1. `project_id` : Project ID of the Google Cloud Project where the Cloud SQL instance is located.\n",
"1. `project_id` : Project ID of the Google Cloud Project where the Cloud SQL instance is located.\n",
"1. `region` : Region where the Cloud SQL instance is located.\n",
"1. `instance` : The name of the Cloud SQL instance.\n",
"1. `database` : The name of the database to connect to on the Cloud SQL instance.\n",
@@ -249,7 +246,6 @@
"end_time": "2023-08-28T10:04:38.077748Z",
"start_time": "2023-08-28T10:04:36.105894Z"
},
"collapsed": false,
"id": "4576e914a866fb40",
"jupyter": {
"outputs_hidden": false
@@ -335,7 +331,6 @@
"colab": {
"base_uri": "https://localhost:8080/"
},
"collapsed": false,
"id": "b476688cbb32ba90",
"jupyter": {
"outputs_hidden": false
@@ -394,7 +389,7 @@
"\n",
"We can easily combine this message history class with [LCEL Runnables](/docs/expression_language/how_to/message_history)\n",
"\n",
"To do this we will use one of [Google's Vertex AI chat models](/docs/integrations/chat/google_vertex_ai_palm) which requires that you [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com) in your Google Cloud Project.\n"
"To do this we will use one of [Google's Vertex AI chat models](https://python.langchain.com/docs/integrations/chat/google_vertex_ai_palm) which requires that you [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com) in your Google Cloud Project.\n"
]
},
{
554
docs/docs/integrations/memory/google_cloud_sql_pg.ipynb
Normal file
@@ -0,0 +1,554 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "f22eab3f84cbeb37",
"metadata": {
"id": "f22eab3f84cbeb37"
},
"source": [
"# Google Cloud SQL for PostgreSQL\n",
"\n",
"> [Cloud SQL](https://cloud.google.com/sql) is a fully managed relational database service that offers high performance, seamless integration, and impressive scalability. It offers MySQL, PostgreSQL, and SQL Server database engines. Extend your database application to build AI-powered experiences leveraging Cloud SQL's Langchain integrations.\n",
"\n",
"This notebook goes over how to use `Cloud SQL for PostgreSQL` to store chat message history with the `PostgresChatMessageHistory` class."
]
},
{
"cell_type": "markdown",
"id": "da400c79-a360-43e2-be60-401fd02b2819",
"metadata": {
"id": "da400c79-a360-43e2-be60-401fd02b2819"
},
"source": [
"## Before You Begin\n",
"\n",
"To run this notebook, you will need to do the following:\n",
"\n",
" * [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n",
" * [Create a Cloud SQL for PostgreSQL instance](https://cloud.google.com/sql/docs/postgres/create-instance)\n",
" * [Create a Cloud SQL database](https://cloud.google.com/sql/docs/mysql/create-manage-databases)\n",
" * [Add an IAM database user to the database](https://cloud.google.com/sql/docs/postgres/add-manage-iam-users#creating-a-database-user) (Optional)"
]
},
{
"cell_type": "markdown",
"id": "Mm7-fG_LltD7",
"metadata": {
"id": "Mm7-fG_LltD7"
},
"source": [
"### 🦜🔗 Library Installation\n",
"The integration lives in its own `langchain-google-cloud-sql-pg` package, so we need to install it."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1VELXvcj8AId",
"metadata": {
"id": "1VELXvcj8AId"
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-google-cloud-sql-pg langchain-google-vertexai"
]
},
{
"cell_type": "markdown",
"id": "98TVoM3MNDHu",
"metadata": {
"id": "98TVoM3MNDHu"
},
"source": [
"**Colab only:** Uncomment the following cell to restart the kernel or use the button to restart the kernel. For Vertex AI Workbench you can restart the terminal using the button on top."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "v6jBDnYnNM08",
"metadata": {
"id": "v6jBDnYnNM08"
},
"outputs": [],
"source": [
"# # Automatically restart kernel after installs so that your environment can access the new packages\n",
"# import IPython\n",
"\n",
"# app = IPython.Application.instance()\n",
"# app.kernel.do_shutdown(True)"
]
},
{
"cell_type": "markdown",
"id": "yygMe6rPWxHS",
"metadata": {
"id": "yygMe6rPWxHS"
},
"source": [
"### 🔐 Authentication\n",
"Authenticate to Google Cloud as the IAM user logged into this notebook in order to access your Google Cloud Project.\n",
"\n",
"* If you are using Colab to run this notebook, use the cell below and continue.\n",
"* If you are using Vertex AI Workbench, check out the setup instructions [here](https://github.com/GoogleCloudPlatform/generative-ai/tree/main/setup-env)."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "PTXN1_DSXj2b",
"metadata": {
"id": "PTXN1_DSXj2b"
},
"outputs": [],
"source": [
"from google.colab import auth\n",
"\n",
"auth.authenticate_user()"
]
},
{
"cell_type": "markdown",
"id": "NEvB9BoLEulY",
"metadata": {
"id": "NEvB9BoLEulY"
},
"source": [
"### ☁ Set Your Google Cloud Project\n",
"Set your Google Cloud project so that you can leverage Google Cloud resources within this notebook.\n",
"\n",
"If you don't know your project ID, try the following:\n",
"\n",
"* Run `gcloud config list`.\n",
"* Run `gcloud projects list`.\n",
"* See the support page: [Locate the project ID](https://support.google.com/googleapi/answer/7014113)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "gfkS3yVRE4_W",
"metadata": {
"cellView": "form",
"id": "gfkS3yVRE4_W"
},
"outputs": [],
"source": [
"# @markdown Please fill in the value below with your Google Cloud project ID and then run the cell.\n",
"\n",
"PROJECT_ID = \"my-project-id\"  # @param {type:\"string\"}\n",
"\n",
"# Set the project id\n",
"!gcloud config set project {PROJECT_ID}"
]
},
{
"cell_type": "markdown",
"id": "rEWWNoNnKOgq",
"metadata": {
"id": "rEWWNoNnKOgq"
},
"source": [
"### 💡 API Enablement\n",
"The `langchain-google-cloud-sql-pg` package requires that you [enable the Cloud SQL Admin API](https://console.cloud.google.com/flows/enableapi?apiid=sqladmin.googleapis.com) in your Google Cloud Project."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "5utKIdq7KYi5",
"metadata": {
"id": "5utKIdq7KYi5"
},
"outputs": [],
"source": [
"# enable Cloud SQL Admin API\n",
"!gcloud services enable sqladmin.googleapis.com"
]
},
{
"cell_type": "markdown",
"id": "f8f2830ee9ca1e01",
"metadata": {
"id": "f8f2830ee9ca1e01"
},
"source": [
"## Basic Usage"
]
},
{
"cell_type": "markdown",
"id": "OMvzMWRrR6n7",
"metadata": {
"id": "OMvzMWRrR6n7"
},
"source": [
"### Set Cloud SQL database values\n",
"Find your database values, in the [Cloud SQL Instances page](https://console.cloud.google.com/sql?_ga=2.223735448.2062268965.1707700487-2088871159.1707257687)."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "irl7eMFnSPZr",
"metadata": {
"id": "irl7eMFnSPZr"
},
"outputs": [],
"source": [
"# @title Set Your Values Here { display-mode: \"form\" }\n",
"REGION = \"us-central1\"  # @param {type: \"string\"}\n",
"INSTANCE = \"my-postgresql-instance\"  # @param {type: \"string\"}\n",
"DATABASE = \"my-database\"  # @param {type: \"string\"}\n",
"TABLE_NAME = \"message_store\"  # @param {type: \"string\"}"
]
},
{
"cell_type": "markdown",
"id": "QuQigs4UoFQ2",
"metadata": {
"id": "QuQigs4UoFQ2"
},
"source": [
"### PostgresEngine Connection Pool\n",
"\n",
"One of the requirements and arguments to establish Cloud SQL as a ChatMessageHistory memory store is a `PostgresEngine` object. The `PostgresEngine` configures a connection pool to your Cloud SQL database, enabling successful connections from your application and following industry best practices.\n",
"\n",
"To create a `PostgresEngine` using `PostgresEngine.from_instance()` you need to provide only 4 things:\n",
"\n",
"1. `project_id` : Project ID of the Google Cloud Project where the Cloud SQL instance is located.\n",
"1. `region` : Region where the Cloud SQL instance is located.\n",
"1. `instance` : The name of the Cloud SQL instance.\n",
"1. `database` : The name of the database to connect to on the Cloud SQL instance.\n",
"\n",
"By default, [IAM database authentication](https://cloud.google.com/sql/docs/postgres/iam-authentication#iam-db-auth) will be used as the method of database authentication. This library uses the IAM principal belonging to the [Application Default Credentials (ADC)](https://cloud.google.com/docs/authentication/application-default-credentials) sourced from the environment.\n",
"\n",
"For more information on IAM database authentication please see:\n",
"\n",
"* [Configure an instance for IAM database authentication](https://cloud.google.com/sql/docs/postgres/create-edit-iam-instances)\n",
"* [Manage users with IAM database authentication](https://cloud.google.com/sql/docs/postgres/add-manage-iam-users)\n",
"\n",
"Optionally, [built-in database authentication](https://cloud.google.com/sql/docs/postgres/built-in-authentication) using a username and password to access the Cloud SQL database can also be used. Just provide the optional `user` and `password` arguments to `PostgresEngine.from_instance()`:\n",
"\n",
"* `user` : Database user to use for built-in database authentication and login\n",
"* `password` : Database password to use for built-in database authentication and login.\n"
]
},
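The argument handling described in the cell above can be sketched without a GCP project. This is a hypothetical pure-Python stand-in, not the real `langchain-google-cloud-sql-pg` class: `FakePostgresEngine` only models the behavior the text describes (four required instance coordinates, IAM auth by default, built-in auth when `user` and `password` are supplied).

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class FakePostgresEngine:
    # Hypothetical stand-in for PostgresEngine; the real class would
    # open a connection pool to the Cloud SQL instance instead.
    project_id: str
    region: str
    instance: str
    database: str
    auth_method: str

    @classmethod
    def from_instance(cls, project_id: str, region: str, instance: str,
                      database: str, user: Optional[str] = None,
                      password: Optional[str] = None) -> "FakePostgresEngine":
        # IAM database authentication is the default; passing both
        # user and password switches to built-in database authentication.
        auth = "built-in" if user and password else "iam"
        return cls(project_id, region, instance, database, auth)


engine = FakePostgresEngine.from_instance(
    "my-project-id", "us-central1", "my-postgresql-instance", "my-database"
)
print(engine.auth_method)  # iam
```

The design choice mirrored here is that the engine is built from instance coordinates rather than a raw connection string, so the auth method can be inferred from which credentials are supplied.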
{
"cell_type": "code",
"execution_count": 5,
"id": "4576e914a866fb40",
"metadata": {
"ExecuteTime": {
"end_time": "2023-08-28T10:04:38.077748Z",
"start_time": "2023-08-28T10:04:36.105894Z"
},
"id": "4576e914a866fb40",
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"from langchain_google_cloud_sql_pg import PostgresEngine\n",
"\n",
"engine = PostgresEngine.from_instance(\n",
"    project_id=PROJECT_ID, region=REGION, instance=INSTANCE, database=DATABASE\n",
")"
]
},
{
"cell_type": "markdown",
"id": "qPV8WfWr7O54",
"metadata": {
"id": "qPV8WfWr7O54"
},
"source": [
"### Initialize a table\n",
"The `PostgresChatMessageHistory` class requires a database table with a specific schema in order to store the chat message history.\n",
"\n",
"The `PostgresEngine` engine has a helper method `init_chat_history_table()` that can be used to create a table with the proper schema for you."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "TEu4VHArRttE",
"metadata": {
"id": "TEu4VHArRttE"
},
"outputs": [],
"source": [
"engine.init_chat_history_table(table_name=TABLE_NAME)"
]
},
{
"cell_type": "markdown",
"id": "zSYQTYf3UfOi",
"metadata": {
"id": "zSYQTYf3UfOi"
},
"source": [
"### PostgresChatMessageHistory\n",
"\n",
"To initialize the `PostgresChatMessageHistory` class you need to provide only 3 things:\n",
"\n",
"1. `engine` - An instance of a `PostgresEngine` engine.\n",
"1. `session_id` - A unique identifier string that specifies an id for the session.\n",
"1. `table_name` : The name of the table within the Cloud SQL database to store the chat message history."
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "Kq7RLtfOq0wi",
"metadata": {
"id": "Kq7RLtfOq0wi"
},
"outputs": [],
"source": [
"from langchain_google_cloud_sql_pg import PostgresChatMessageHistory\n",
"\n",
"history = PostgresChatMessageHistory.create_sync(\n",
"    engine, session_id=\"test_session\", table_name=TABLE_NAME\n",
")\n",
"history.add_user_message(\"hi!\")\n",
"history.add_ai_message(\"whats up?\")"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "b476688cbb32ba90",
"metadata": {
"ExecuteTime": {
"end_time": "2023-08-28T10:04:38.929396Z",
"start_time": "2023-08-28T10:04:38.915727Z"
},
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "b476688cbb32ba90",
"jupyter": {
"outputs_hidden": false
},
"outputId": "a19e5cd8-4225-476a-d28d-e870c6b838bb"
},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessage(content='hi!'), AIMessage(content='whats up?')]"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"history.messages"
]
},
{
"cell_type": "markdown",
"id": "ss6CbqcTTedr",
"metadata": {
"id": "ss6CbqcTTedr"
},
"source": [
"#### Cleaning up\n",
"When the history of a specific session is obsolete and can be deleted, it can be done the following way.\n",
"\n",
"**Note:** Once deleted, the data is no longer stored in Cloud SQL and is gone forever."
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "3khxzFxYO7x6",
"metadata": {
"id": "3khxzFxYO7x6"
},
"outputs": [],
"source": [
"history.clear()"
]
},
{
"cell_type": "markdown",
"id": "2e5337719d5614fd",
"metadata": {
"id": "2e5337719d5614fd"
},
"source": [
"## 🔗 Chaining\n",
"\n",
"We can easily combine this message history class with [LCEL Runnables](/docs/expression_language/how_to/message_history)\n",
"\n",
"To do this we will use one of [Google's Vertex AI chat models](https://python.langchain.com/docs/integrations/chat/google_vertex_ai_palm) which requires that you [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com) in your Google Cloud Project.\n"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "hYtHM3-TOMCe",
"metadata": {
"id": "hYtHM3-TOMCe"
},
"outputs": [],
"source": [
"# enable Vertex AI API\n",
"!gcloud services enable aiplatform.googleapis.com"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "6558418b-0ece-4d01-9661-56d562d78f7a",
"metadata": {
"id": "6558418b-0ece-4d01-9661-56d562d78f7a"
},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"from langchain_core.runnables.history import RunnableWithMessageHistory\n",
"from langchain_google_vertexai import ChatVertexAI"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "82149122-61d3-490d-9bdb-bb98606e8ba1",
"metadata": {
"id": "82149122-61d3-490d-9bdb-bb98606e8ba1"
},
"outputs": [],
"source": [
"prompt = ChatPromptTemplate.from_messages(\n",
"    [\n",
"        (\"system\", \"You are a helpful assistant.\"),\n",
"        MessagesPlaceholder(variable_name=\"history\"),\n",
"        (\"human\", \"{question}\"),\n",
"    ]\n",
")\n",
"\n",
"chain = prompt | ChatVertexAI(project=PROJECT_ID)"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "2df90853-b67c-490f-b7f8-b69d69270b9c",
"metadata": {
"id": "2df90853-b67c-490f-b7f8-b69d69270b9c"
},
"outputs": [],
"source": [
"chain_with_history = RunnableWithMessageHistory(\n",
"    chain,\n",
"    lambda session_id: PostgresChatMessageHistory.create_sync(\n",
"        engine,\n",
"        session_id=session_id,\n",
"        table_name=TABLE_NAME,\n",
"    ),\n",
"    input_messages_key=\"question\",\n",
"    history_messages_key=\"history\",\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "0ce596b8-3b78-48fd-9f92-46dccbbfd58b",
"metadata": {
"id": "0ce596b8-3b78-48fd-9f92-46dccbbfd58b"
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# This is where we configure the session id\n",
|
||||
"config = {\"configurable\": {\"session_id\": \"test_session\"}}"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 18,
|
||||
"id": "38e1423b-ba86-4496-9151-25932fab1a8b",
|
||||
"metadata": {
|
||||
"colab": {
|
||||
"base_uri": "https://localhost:8080/"
|
||||
},
|
||||
"id": "38e1423b-ba86-4496-9151-25932fab1a8b",
|
||||
"outputId": "d5c93570-4b0b-4fe8-d19c-4b361fe74291"
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"AIMessage(content=' Hello Bob, how can I help you today?')"
|
||||
]
|
||||
},
|
||||
"execution_count": 18,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"chain_with_history.invoke({\"question\": \"Hi! I'm bob\"}, config=config)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 19,
|
||||
"id": "2ee4ee62-a216-4fb1-bf33-57476a84cf16",
|
||||
"metadata": {
|
||||
"colab": {
|
||||
"base_uri": "https://localhost:8080/"
|
||||
},
|
||||
"id": "2ee4ee62-a216-4fb1-bf33-57476a84cf16",
|
||||
"outputId": "288fe388-3f60-41b8-8edb-37cfbec18981"
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"AIMessage(content=' Your name is Bob.')"
|
||||
]
|
||||
},
|
||||
"execution_count": 19,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"chain_with_history.invoke({\"question\": \"Whats my name\"}, config=config)"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"colab": {
|
||||
"provenance": [],
|
||||
"toc_visible": true
|
||||
},
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.8.8"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
||||
259
docs/docs/integrations/memory/google_datastore.ipynb
Normal file
@@ -0,0 +1,259 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Google Firestore in Datastore\n",
"\n",
"> [Firestore in Datastore](https://cloud.google.com/datastore) is a serverless document-oriented database that scales to meet any demand. Extend your database application to build AI-powered experiences leveraging Datastore's Langchain integrations.\n",
"\n",
"This notebook goes over how to use [Firestore in Datastore](https://cloud.google.com/datastore) to store chat message history with the `DatastoreChatMessageHistory` class.\n",
"\n",
"[](https://colab.research.google.com/github/googleapis/langchain-google-datastore-python/blob/main/docs/chat_message_history.ipynb)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Before You Begin\n",
"\n",
"To run this notebook, you will need to do the following:\n",
"\n",
"* [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n",
"* [Create a Datastore database](https://cloud.google.com/datastore/docs/manage-databases)\n",
"\n",
"After confirming access to the database in the runtime environment of this notebook, fill in the following values and run the cell before running example scripts."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 🦜🔗 Library Installation\n",
"\n",
"The integration lives in its own `langchain-google-datastore` package, so we need to install it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-google-datastore"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Colab only**: Uncomment the following cell to restart the kernel or use the button to restart the kernel. For Vertex AI Workbench you can restart the terminal using the button on top."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# # Automatically restart kernel after installs so that your environment can access the new packages\n",
"# import IPython\n",
"\n",
"# app = IPython.Application.instance()\n",
"# app.kernel.do_shutdown(True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### ☁ Set Your Google Cloud Project\n",
"Set your Google Cloud project so that you can leverage Google Cloud resources within this notebook.\n",
"\n",
"If you don't know your project ID, try the following:\n",
"\n",
"* Run `gcloud config list`.\n",
"* Run `gcloud projects list`.\n",
"* See the support page: [Locate the project ID](https://support.google.com/googleapi/answer/7014113)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# @markdown Please fill in the value below with your Google Cloud project ID and then run the cell.\n",
"\n",
"PROJECT_ID = \"my-project-id\"  # @param {type:\"string\"}\n",
"\n",
"# Set the project id\n",
"!gcloud config set project {PROJECT_ID}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 🔐 Authentication\n",
"\n",
"Authenticate to Google Cloud as the IAM user logged into this notebook in order to access your Google Cloud Project.\n",
"\n",
"- If you are using Colab to run this notebook, use the cell below and continue.\n",
"- If you are using Vertex AI Workbench, check out the setup instructions [here](https://github.com/GoogleCloudPlatform/generative-ai/tree/main/setup-env)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from google.colab import auth\n",
"\n",
"auth.authenticate_user()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### API Enablement\n",
"The `langchain-google-datastore` package requires that you [enable the Datastore API](https://console.cloud.google.com/flows/enableapi?apiid=datastore.googleapis.com) in your Google Cloud Project."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# enable Datastore API\n",
"!gcloud services enable datastore.googleapis.com"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Basic Usage"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### DatastoreChatMessageHistory\n",
"\n",
"To initialize the `DatastoreChatMessageHistory` class you need to provide only 2 things:\n",
"\n",
"1. `session_id` - A unique identifier string that specifies an id for the session.\n",
"1. `collection` - The single `/`-delimited path to a Datastore collection."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_google_datastore import DatastoreChatMessageHistory\n",
"\n",
"chat_history = DatastoreChatMessageHistory(\n",
"    session_id=\"user-session-id\", collection=\"HistoryMessages\"\n",
")\n",
"\n",
"chat_history.add_user_message(\"Hi!\")\n",
"chat_history.add_ai_message(\"How can I help you?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"chat_history.messages"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Cleaning up\n",
"When the history of a specific session is obsolete and can be deleted from the database and memory, it can be done the following way.\n",
"\n",
"**Note:** Once deleted, the data is no longer stored in Datastore and is gone forever."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"chat_history.clear()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Custom Client\n",
"\n",
"The client is created by default using the available environment variables. A [custom client](https://cloud.google.com/python/docs/reference/datastore/latest/client) can be passed to the constructor."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from google.auth import compute_engine\n",
"from google.cloud import datastore\n",
"\n",
"client = datastore.Client(\n",
"    project=\"project-custom\",\n",
"    database=\"non-default-database\",\n",
"    credentials=compute_engine.Credentials(),\n",
")\n",
"\n",
"history = DatastoreChatMessageHistory(\n",
"    session_id=\"session-id\", collection=\"History\", client=client\n",
")\n",
"\n",
"history.add_user_message(\"New message\")\n",
"\n",
"history.messages\n",
"\n",
"history.clear()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
@@ -4,13 +4,11 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Google El Carro Oracle\n",
"# Google El Carro Oracle Operator\n",
"\n",
"> [Google Cloud El Carro Oracle](https://github.com/GoogleCloudPlatform/elcarro-oracle-operator) offers a way to run `Oracle` databases in `Kubernetes` as a portable, open source, community-driven, no vendor lock-in container orchestration system. `El Carro` provides a powerful declarative API for comprehensive and consistent configuration and deployment as well as for real-time operations and monitoring. Extend your `Oracle` database's capabilities to build AI-powered experiences by leveraging the `El Carro` Langchain integration.\n",
"> Google [El Carro Oracle Operator](https://github.com/GoogleCloudPlatform/elcarro-oracle-operator) offers a way to run Oracle databases in Kubernetes as a portable, open source, community driven, no vendor lock-in container orchestration system. El Carro provides a powerful declarative API for comprehensive and consistent configuration and deployment as well as for real-time operations and monitoring. Extend your Oracle database's capabilities to build AI-powered experiences by leveraging the El Carro Langchain integration.\n",
"\n",
"This guide goes over how to use the `El Carro` Langchain integration to store chat message history with the `ElCarroChatMessageHistory` class. This integration works for any `Oracle` database, regardless of where it is running.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-el-carro-python/).\n",
"This guide goes over how to use the El Carro Langchain integration to store chat message history with the `ElCarroChatMessageHistory` class. This integration works for any Oracle database, regardless of where it is running.\n",
"\n",
"[](https://colab.research.google.com/github/googleapis/langchain-google-el-carro-python/blob/main/docs/chat_message_history.ipynb)"
]
@@ -22,7 +20,6 @@
"## Before You Begin\n",
"\n",
"To run this notebook, you will need to do the following:\n",
"\n",
" * Complete the [Getting Started](https://github.com/googleapis/langchain-google-el-carro-python/tree/main/README.md#getting-started) section if you would like to run your Oracle database with El Carro."
]
},
@@ -80,9 +77,9 @@
"metadata": {},
"outputs": [],
"source": [
"# from google.colab import auth\n",
"from google.colab import auth\n",
"\n",
"# auth.authenticate_user()"
"auth.authenticate_user()"
]
},
{
@@ -131,12 +128,6 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"# @title Set Your Values Here { display-mode: \"form\" }\n",
@@ -146,37 +137,34 @@
"TABLE_NAME = \"message_store\" # @param {type: \"string\"}\n",
"USER = \"my-user\" # @param {type: \"string\"}\n",
"PASSWORD = input(\"Please provide a password to be used for the database user: \")"
]
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"\n",
"If you are using `El Carro`, you can find the hostname and port values in the\n",
"status of the `El Carro` Kubernetes instance.\n",
"Use the user password you created for your PDB.\n",
"Example"
]
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"kubectl get -w instances.oracle.db.anthosapis.com -n db\n",
"NAME DB ENGINE VERSION EDITION ENDPOINT URL DB NAMES BACKUP ID READYSTATUS READYREASON DBREADYSTATUS DBREADYREASON\n",
"mydb Oracle 18c Express mydb-svc.db 34.71.69.25:6021 False CreateInProgress"
]
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "markdown",
@@ -293,7 +281,7 @@
"\n",
"We can easily combine this message history class with [LCEL Runnables](/docs/expression_language/how_to/message_history)\n",
"\n",
"To do this we will use one of [Google's Vertex AI chat models](/docs/integrations/chat/google_vertex_ai_palm) which requires that you [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com) in your Google Cloud Project.\n"
"To do this we will use one of [Google's Vertex AI chat models](https://python.langchain.com/docs/integrations/chat/google_vertex_ai_palm) which requires that you [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com) in your Google Cloud Project.\n"
]
},
{
@@ -397,7 +385,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.11.5"
}
},
"nbformat": 4,
@@ -6,11 +6,9 @@
"source": [
"# Google Firestore (Native Mode)\n",
"\n",
"> [Google Cloud Firestore](https://cloud.google.com/firestore) is a serverless document-oriented database that scales to meet any demand. Extend your database application to build AI-powered experiences leveraging `Firestore's` Langchain integrations.\n",
"> [Firestore](https://cloud.google.com/firestore) is a serverless document-oriented database that scales to meet any demand. Extend your database application to build AI-powered experiences leveraging Firestore's Langchain integrations.\n",
"\n",
"This notebook goes over how to use [Google Cloud Firestore](https://cloud.google.com/firestore) to store chat message history with the `FirestoreChatMessageHistory` class.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-firestore-python/).\n",
"This notebook goes over how to use [Firestore](https://cloud.google.com/firestore) to store chat message history with the `FirestoreChatMessageHistory` class.\n",
"\n",
"[](https://colab.research.google.com/github/googleapis/langchain-google-firestore-python/blob/main/docs/chat_message_history.ipynb)"
]
@@ -24,7 +22,6 @@
"To run this notebook, you will need to do the following:\n",
"\n",
"* [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n",
"* [Enable the Firestore API](https://console.cloud.google.com/flows/enableapi?apiid=firestore.googleapis.com)\n",
"* [Create a Firestore database](https://cloud.google.com/firestore/docs/manage-databases)\n",
"\n",
"After confirming access to the database in the runtime environment of this notebook, fill in the following values and run the cell before running example scripts."
@@ -121,6 +118,24 @@
"auth.authenticate_user()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### API Enablement\n",
"The `langchain-google-firestore` package requires that you [enable the Firestore Admin API](https://console.cloud.google.com/flows/enableapi?apiid=firestore.googleapis.com) in your Google Cloud Project."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# enable Firestore Admin API\n",
"!gcloud services enable firestore.googleapis.com"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -236,7 +251,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.10.6"
}
},
"nbformat": 4,
@@ -1,263 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Google Firestore (Datastore Mode)\n",
"\n",
"> [Google Cloud Firestore in Datastore](https://cloud.google.com/datastore) is a serverless document-oriented database that scales to meet any demand. Extend your database application to build AI-powered experiences leveraging `Datastore's` Langchain integrations.\n",
"\n",
"This notebook goes over how to use [Google Cloud Firestore in Datastore](https://cloud.google.com/datastore) to store chat message history with the `DatastoreChatMessageHistory` class.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-datastore-python/).\n",
"\n",
"[](https://colab.research.google.com/github/googleapis/langchain-google-datastore-python/blob/main/docs/chat_message_history.ipynb)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Before You Begin\n",
"\n",
"To run this notebook, you will need to do the following:\n",
"\n",
"* [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n",
"* [Enable the Datastore API](https://console.cloud.google.com/flows/enableapi?apiid=datastore.googleapis.com)\n",
"* [Create a Datastore database](https://cloud.google.com/datastore/docs/manage-databases)\n",
"\n",
"After confirming access to the database in the runtime environment of this notebook, fill in the following values and run the cell before running example scripts."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 🦜🔗 Library Installation\n",
"\n",
"The integration lives in its own `langchain-google-datastore` package, so we need to install it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-google-datastore"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Colab only**: Uncomment the following cell to restart the kernel or use the button to restart the kernel. For Vertex AI Workbench you can restart the terminal using the button on top."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# # Automatically restart kernel after installs so that your environment can access the new packages\n",
"# import IPython\n",
"\n",
"# app = IPython.Application.instance()\n",
"# app.kernel.do_shutdown(True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### ☁ Set Your Google Cloud Project\n",
"Set your Google Cloud project so that you can leverage Google Cloud resources within this notebook.\n",
"\n",
"If you don't know your project ID, try the following:\n",
"\n",
"* Run `gcloud config list`.\n",
"* Run `gcloud projects list`.\n",
"* See the support page: [Locate the project ID](https://support.google.com/googleapi/answer/7014113)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# @markdown Please fill in the value below with your Google Cloud project ID and then run the cell.\n",
"\n",
"PROJECT_ID = \"my-project-id\"  # @param {type:\"string\"}\n",
"\n",
"# Set the project id\n",
"!gcloud config set project {PROJECT_ID}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 🔐 Authentication\n",
"\n",
"Authenticate to Google Cloud as the IAM user logged into this notebook in order to access your Google Cloud Project.\n",
"\n",
"- If you are using Colab to run this notebook, use the cell below and continue.\n",
"- If you are using Vertex AI Workbench, check out the setup instructions [here](https://github.com/GoogleCloudPlatform/generative-ai/tree/main/setup-env)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from google.colab import auth\n",
"\n",
"auth.authenticate_user()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### API Enablement\n",
"The `langchain-google-datastore` package requires that you [enable the Datastore API](https://console.cloud.google.com/flows/enableapi?apiid=datastore.googleapis.com) in your Google Cloud Project."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# enable Datastore API\n",
"!gcloud services enable datastore.googleapis.com"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Basic Usage"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### DatastoreChatMessageHistory\n",
"\n",
"To initialize the `DatastoreChatMessageHistory` class you need to provide only 3 things:\n",
"\n",
"1. `session_id` - A unique identifier string that specifies an id for the session.\n",
"1. `kind` - The name of the Datastore kind to write into. This is an optional value and by default, it will use `ChatHistory` as the kind.\n",
"1. `collection` - The single `/`-delimited path to a Datastore collection."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_google_datastore import DatastoreChatMessageHistory\n",
"\n",
"chat_history = DatastoreChatMessageHistory(\n",
"    session_id=\"user-session-id\", collection=\"HistoryMessages\"\n",
")\n",
"\n",
"chat_history.add_user_message(\"Hi!\")\n",
"chat_history.add_ai_message(\"How can I help you?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"chat_history.messages"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Cleaning up\n",
"When the history of a specific session is obsolete and can be deleted from the database and memory, it can be done the following way.\n",
"\n",
"**Note:** Once deleted, the data is no longer stored in Datastore and is gone forever."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"chat_history.clear()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Custom Client\n",
"\n",
"The client is created by default using the available environment variables. A [custom client](https://cloud.google.com/python/docs/reference/datastore/latest/client) can be passed to the constructor."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from google.auth import compute_engine\n",
"from google.cloud import datastore\n",
"\n",
"client = datastore.Client(\n",
"    project=\"project-custom\",\n",
"    database=\"non-default-database\",\n",
"    credentials=compute_engine.Credentials(),\n",
")\n",
"\n",
"history = DatastoreChatMessageHistory(\n",
"    session_id=\"session-id\", collection=\"History\", client=client\n",
")\n",
"\n",
"history.add_user_message(\"New message\")\n",
"\n",
"history.messages\n",
"\n",
"history.clear()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
@@ -8,11 +8,9 @@
|
||||
"source": [
|
||||
"# Google Memorystore for Redis\n",
|
||||
"\n",
|
||||
"> [Google Cloud Memorystore for Redis](https://cloud.google.com/memorystore/docs/redis/memorystore-for-redis-overview) is a fully-managed service that is powered by the Redis in-memory data store to build application caches that provide sub-millisecond data access. Extend your database application to build AI-powered experiences leveraging Memorystore for Redis's Langchain integrations.\n",
|
||||
"> [Google Memorystore for Redis](https://cloud.google.com/memorystore/docs/redis/memorystore-for-redis-overview) is a fully-managed service that is powered by the Redis in-memory data store to build application caches that provide sub-millisecond data access. Extend your database application to build AI-powered experiences leveraging Memorystore for Redis's Langchain integrations.\n",
|
||||
"\n",
|
||||
"This notebook goes over how to use [Google Cloud Memorystore for Redis](https://cloud.google.com/memorystore/docs/redis/memorystore-for-redis-overview) to store chat message history with the `MemorystoreChatMessageHistory` class.\n",
|
||||
"\n",
|
||||
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-memorystore-redis-python/).\n",
|
||||
"This notebook goes over how to use [Memorystore for Redis](https://cloud.google.com/memorystore/docs/redis/memorystore-for-redis-overview) to store chat message history with the `MemorystoreChatMessageHistory` class.\n",
|
||||
"\n",
|
||||
"[](https://colab.research.google.com/github/googleapis/langchain-google-memorystore-redis-python/blob/main/docs/chat_message_history.ipynb)"
|
||||
]
|
||||
@@ -26,7 +24,6 @@
|
||||
"To run this notebook, you will need to do the following:\n",
|
||||
"\n",
|
||||
"* [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n",
|
||||
"* [Enable the Memorystore for Redis API](https://console.cloud.google.com/flows/enableapi?apiid=redis.googleapis.com)\n",
|
||||
"* [Create a Memorystore for Redis instance](https://cloud.google.com/memorystore/docs/redis/create-instance-console). Ensure that the version is greater than or equal to 5.0.\n",
|
||||
"\n",
|
||||
"After confirming access to the database in the runtime environment of this notebook, fill in the following values and run the cell before running the example scripts."
@@ -228,9 +225,9 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.12"
|
||||
"version": "3.11.6"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 4
|
||||
"nbformat_minor": 0
|
||||
}
|
||||
|
||||
@@ -3,28 +3,19 @@
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {
|
||||
"collapsed": false,
|
||||
"jupyter": {
|
||||
"outputs_hidden": false
|
||||
}
|
||||
"collapsed": false
|
||||
},
|
||||
"source": [
|
||||
"# Google Spanner\n",
|
||||
"> [Google Cloud Spanner](https://cloud.google.com/spanner) is a highly scalable database that combines unlimited scalability with relational semantics, such as secondary indexes, strong consistency, schemas, and SQL, providing 99.999% availability in one easy solution.\n",
"> [Cloud Spanner](https://cloud.google.com/spanner) is a highly scalable database that combines unlimited scalability with relational semantics, such as secondary indexes, strong consistency, schemas, and SQL, providing 99.999% availability in one easy solution.\n",
"\n",
|
||||
"This notebook goes over how to use `Spanner` to store chat message history with the `SpannerChatMessageHistory` class.\n",
|
||||
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-spanner-python/).\n",
|
||||
"\n",
|
||||
"[](https://colab.research.google.com/github/googleapis/langchain-google-spanner-python/blob/main/samples/chat_message_history.ipynb)"
|
||||
"This notebook goes over how to use `Spanner` to store chat message history with the `SpannerChatMessageHistory` class."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {
|
||||
"collapsed": false,
|
||||
"jupyter": {
|
||||
"outputs_hidden": false
|
||||
}
|
||||
"collapsed": false
|
||||
},
|
||||
"source": [
|
||||
"## Before You Begin\n",
|
||||
@@ -32,7 +23,6 @@
|
||||
"To run this notebook, you will need to do the following:\n",
|
||||
"\n",
|
||||
" * [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n",
|
||||
" * [Enable the Cloud Spanner API](https://console.cloud.google.com/flows/enableapi?apiid=spanner.googleapis.com)\n",
|
||||
" * [Create a Spanner instance](https://cloud.google.com/spanner/docs/create-manage-instances)\n",
|
||||
" * [Create a Spanner database](https://cloud.google.com/spanner/docs/create-manage-databases)"
|
||||
]
|
||||
@@ -76,6 +66,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "yygMe6rPWxHS",
|
||||
"metadata": {
|
||||
"id": "yygMe6rPWxHS"
|
||||
},
|
||||
@@ -90,6 +81,7 @@
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"id": "PTXN1_DSXj2b",
|
||||
"metadata": {
|
||||
"id": "PTXN1_DSXj2b"
|
||||
},
|
||||
@@ -102,6 +94,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "NEvB9BoLEulY",
|
||||
"metadata": {
|
||||
"id": "NEvB9BoLEulY"
|
||||
},
|
||||
@@ -119,6 +112,7 @@
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "gfkS3yVRE4_W",
|
||||
"metadata": {
|
||||
"cellView": "form",
|
||||
"id": "gfkS3yVRE4_W"
|
||||
@@ -135,6 +129,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "rEWWNoNnKOgq",
|
||||
"metadata": {
|
||||
"id": "rEWWNoNnKOgq"
|
||||
},
|
||||
@@ -146,6 +141,7 @@
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"id": "5utKIdq7KYi5",
|
||||
"metadata": {
|
||||
"id": "5utKIdq7KYi5"
|
||||
},
|
||||
@@ -157,6 +153,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "f8f2830ee9ca1e01",
|
||||
"metadata": {
|
||||
"id": "f8f2830ee9ca1e01"
|
||||
},
|
||||
@@ -166,17 +163,19 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "OMvzMWRrR6n7",
|
||||
"metadata": {
|
||||
"id": "OMvzMWRrR6n7"
|
||||
},
|
||||
"source": [
|
||||
"### Set Spanner database values\n",
|
||||
"Find your database values, in the [Spanner Instances page](https://console.cloud.google.com/spanner)."
|
||||
"Find your database values, in the [Spanner Instances page](https://console.cloud.google.com/spanner/instances)."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"id": "irl7eMFnSPZr",
|
||||
"metadata": {
|
||||
"id": "irl7eMFnSPZr"
|
||||
},
|
||||
@@ -190,6 +189,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "qPV8WfWr7O54",
|
||||
"metadata": {
|
||||
"id": "qPV8WfWr7O54"
|
||||
},
|
||||
@@ -203,6 +203,7 @@
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "TEu4VHArRttE",
|
||||
"metadata": {
|
||||
"id": "TEu4VHArRttE"
|
||||
},
|
||||
@@ -233,10 +234,7 @@
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"collapsed": false,
|
||||
"jupyter": {
|
||||
"outputs_hidden": false
|
||||
}
|
||||
"collapsed": false
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
|
||||
@@ -1,561 +0,0 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "f22eab3f84cbeb37",
|
||||
"metadata": {
|
||||
"id": "f22eab3f84cbeb37"
|
||||
},
|
||||
"source": [
|
||||
"# Google SQL for MySQL\n",
|
||||
"\n",
|
||||
"> [Google Cloud SQL](https://cloud.google.com/sql) is a fully managed relational database service that offers high performance, seamless integration, and impressive scalability. It offers `MySQL`, `PostgreSQL`, and `SQL Server` database engines. Extend your database application to build AI-powered experiences leveraging Cloud SQL's LangChain integrations.\n",
"\n",
|
||||
"This notebook goes over how to use `Google Cloud SQL for MySQL` to store chat message history with the `MySQLChatMessageHistory` class.\n",
|
||||
"\n",
|
||||
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-cloud-sql-mysql-python/).\n",
|
||||
"\n",
|
||||
"[](https://colab.research.google.com/github/googleapis/langchain-google-cloud-sql-mysql-python/blob/main/docs/chat_message_history.ipynb)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "da400c79-a360-43e2-be60-401fd02b2819",
|
||||
"metadata": {
|
||||
"id": "da400c79-a360-43e2-be60-401fd02b2819"
|
||||
},
|
||||
"source": [
|
||||
"## Before You Begin\n",
|
||||
"\n",
|
||||
"To run this notebook, you will need to do the following:\n",
|
||||
"\n",
|
||||
" * [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n",
|
||||
" * [Enable the Cloud SQL Admin API.](https://console.cloud.google.com/marketplace/product/google/sqladmin.googleapis.com)\n",
|
||||
" * [Create a Cloud SQL for MySQL instance](https://cloud.google.com/sql/docs/mysql/create-instance)\n",
|
||||
" * [Create a Cloud SQL database](https://cloud.google.com/sql/docs/mysql/create-manage-databases)\n",
|
||||
" * [Add an IAM database user to the database](https://cloud.google.com/sql/docs/mysql/add-manage-iam-users#creating-a-database-user) (Optional)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "Mm7-fG_LltD7",
|
||||
"metadata": {
|
||||
"id": "Mm7-fG_LltD7"
|
||||
},
|
||||
"source": [
|
||||
"### 🦜🔗 Library Installation\n",
|
||||
"The integration lives in its own `langchain-google-cloud-sql-mysql` package, so we need to install it."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "1VELXvcj8AId",
|
||||
"metadata": {
|
||||
"id": "1VELXvcj8AId"
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%pip install --upgrade --quiet langchain-google-cloud-sql-mysql langchain-google-vertexai"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "98TVoM3MNDHu",
|
||||
"metadata": {
|
||||
"id": "98TVoM3MNDHu"
|
||||
},
|
||||
"source": [
|
||||
"**Colab only:** Uncomment and run the following cell to restart the kernel, or use the button to restart it. For Vertex AI Workbench, you can restart the terminal using the button at the top."
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "v6jBDnYnNM08",
|
||||
"metadata": {
|
||||
"id": "v6jBDnYnNM08"
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# # Automatically restart kernel after installs so that your environment can access the new packages\n",
|
||||
"# import IPython\n",
|
||||
"\n",
|
||||
"# app = IPython.Application.instance()\n",
|
||||
"# app.kernel.do_shutdown(True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "yygMe6rPWxHS",
|
||||
"metadata": {
|
||||
"id": "yygMe6rPWxHS"
|
||||
},
|
||||
"source": [
|
||||
"### 🔐 Authentication\n",
|
||||
"Authenticate to Google Cloud as the IAM user logged into this notebook in order to access your Google Cloud Project.\n",
|
||||
"\n",
|
||||
"* If you are using Colab to run this notebook, use the cell below and continue.\n",
|
||||
"* If you are using Vertex AI Workbench, check out the setup instructions [here](https://github.com/GoogleCloudPlatform/generative-ai/tree/main/setup-env)."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"id": "PTXN1_DSXj2b",
|
||||
"metadata": {
|
||||
"id": "PTXN1_DSXj2b"
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from google.colab import auth\n",
|
||||
"\n",
|
||||
"auth.authenticate_user()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "NEvB9BoLEulY",
|
||||
"metadata": {
|
||||
"id": "NEvB9BoLEulY"
|
||||
},
|
||||
"source": [
|
||||
"### ☁ Set Your Google Cloud Project\n",
|
||||
"Set your Google Cloud project so that you can leverage Google Cloud resources within this notebook.\n",
|
||||
"\n",
|
||||
"If you don't know your project ID, try the following:\n",
|
||||
"\n",
|
||||
"* Run `gcloud config list`.\n",
|
||||
"* Run `gcloud projects list`.\n",
|
||||
"* See the support page: [Locate the project ID](https://support.google.com/googleapi/answer/7014113)."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "gfkS3yVRE4_W",
|
||||
"metadata": {
|
||||
"cellView": "form",
|
||||
"id": "gfkS3yVRE4_W"
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# @markdown Please fill in the value below with your Google Cloud project ID and then run the cell.\n",
|
||||
"\n",
|
||||
"PROJECT_ID = \"my-project-id\" # @param {type:\"string\"}\n",
|
||||
"\n",
|
||||
"# Set the project id\n",
|
||||
"!gcloud config set project {PROJECT_ID}"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "rEWWNoNnKOgq",
|
||||
"metadata": {
|
||||
"id": "rEWWNoNnKOgq"
|
||||
},
|
||||
"source": [
|
||||
"### 💡 API Enablement\n",
|
||||
"The `langchain-google-cloud-sql-mysql` package requires that you [enable the Cloud SQL Admin API](https://console.cloud.google.com/flows/enableapi?apiid=sqladmin.googleapis.com) in your Google Cloud Project."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"id": "5utKIdq7KYi5",
|
||||
"metadata": {
|
||||
"id": "5utKIdq7KYi5"
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# enable Cloud SQL Admin API\n",
|
||||
"!gcloud services enable sqladmin.googleapis.com"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "f8f2830ee9ca1e01",
|
||||
"metadata": {
|
||||
"id": "f8f2830ee9ca1e01"
|
||||
},
|
||||
"source": [
|
||||
"## Basic Usage"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "OMvzMWRrR6n7",
|
||||
"metadata": {
|
||||
"id": "OMvzMWRrR6n7"
|
||||
},
|
||||
"source": [
|
||||
"### Set Cloud SQL database values\n",
|
||||
"Find your database values, in the [Cloud SQL Instances page](https://console.cloud.google.com/sql?_ga=2.223735448.2062268965.1707700487-2088871159.1707257687)."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"id": "irl7eMFnSPZr",
|
||||
"metadata": {
|
||||
"id": "irl7eMFnSPZr"
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# @title Set Your Values Here { display-mode: \"form\" }\n",
|
||||
"REGION = \"us-central1\" # @param {type: \"string\"}\n",
|
||||
"INSTANCE = \"my-mysql-instance\" # @param {type: \"string\"}\n",
|
||||
"DATABASE = \"my-database\" # @param {type: \"string\"}\n",
|
||||
"TABLE_NAME = \"message_store\" # @param {type: \"string\"}"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "QuQigs4UoFQ2",
|
||||
"metadata": {
|
||||
"id": "QuQigs4UoFQ2"
|
||||
},
|
||||
"source": [
|
||||
"### MySQLEngine Connection Pool\n",
|
||||
"\n",
|
||||
"One of the requirements and arguments to establish Cloud SQL as a ChatMessageHistory memory store is a `MySQLEngine` object. The `MySQLEngine` configures a connection pool to your Cloud SQL database, enabling successful connections from your application and following industry best practices.\n",
|
||||
"\n",
|
||||
"To create a `MySQLEngine` using `MySQLEngine.from_instance()` you need to provide only 4 things:\n",
|
||||
"\n",
|
||||
"1. `project_id` : Project ID of the Google Cloud Project where the Cloud SQL instance is located.\n",
|
||||
"1. `region` : Region where the Cloud SQL instance is located.\n",
|
||||
"1. `instance` : The name of the Cloud SQL instance.\n",
|
||||
"1. `database` : The name of the database to connect to on the Cloud SQL instance.\n",
|
||||
"\n",
|
||||
"By default, [IAM database authentication](https://cloud.google.com/sql/docs/mysql/iam-authentication#iam-db-auth) will be used as the method of database authentication. This library uses the IAM principal belonging to the [Application Default Credentials (ADC)](https://cloud.google.com/docs/authentication/application-default-credentials) sourced from the environment.\n",
"\n",
|
||||
"For more information on IAM database authentication, please see:\n",
"\n",
|
||||
"* [Configure an instance for IAM database authentication](https://cloud.google.com/sql/docs/mysql/create-edit-iam-instances)\n",
|
||||
"* [Manage users with IAM database authentication](https://cloud.google.com/sql/docs/mysql/add-manage-iam-users)\n",
|
||||
"\n",
|
||||
"Optionally, [built-in database authentication](https://cloud.google.com/sql/docs/mysql/built-in-authentication) using a username and password to access the Cloud SQL database can also be used. Just provide the optional `user` and `password` arguments to `MySQLEngine.from_instance()`:\n",
|
||||
"\n",
|
||||
"* `user` : Database user to use for built-in database authentication and login\n",
|
||||
"* `password` : Database password to use for built-in database authentication and login.\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"id": "4576e914a866fb40",
|
||||
"metadata": {
|
||||
"ExecuteTime": {
|
||||
"end_time": "2023-08-28T10:04:38.077748Z",
|
||||
"start_time": "2023-08-28T10:04:36.105894Z"
|
||||
},
|
||||
"collapsed": false,
|
||||
"id": "4576e914a866fb40",
|
||||
"jupyter": {
|
||||
"outputs_hidden": false
|
||||
}
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_google_cloud_sql_mysql import MySQLEngine\n",
|
||||
"\n",
|
||||
"engine = MySQLEngine.from_instance(\n",
|
||||
" project_id=PROJECT_ID, region=REGION, instance=INSTANCE, database=DATABASE\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "qPV8WfWr7O54",
|
||||
"metadata": {
|
||||
"id": "qPV8WfWr7O54"
|
||||
},
|
||||
"source": [
|
||||
"### Initialize a table\n",
|
||||
"The `MySQLChatMessageHistory` class requires a database table with a specific schema in order to store the chat message history.\n",
|
||||
"\n",
|
||||
"The `MySQLEngine` class has a helper method `init_chat_history_table()` that can be used to create a table with the proper schema for you."
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "TEu4VHArRttE",
|
||||
"metadata": {
|
||||
"id": "TEu4VHArRttE"
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"engine.init_chat_history_table(table_name=TABLE_NAME)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "zSYQTYf3UfOi",
|
||||
"metadata": {
|
||||
"id": "zSYQTYf3UfOi"
|
||||
},
|
||||
"source": [
|
||||
"### MySQLChatMessageHistory\n",
|
||||
"\n",
|
||||
"To initialize the `MySQLChatMessageHistory` class you need to provide only 3 things:\n",
|
||||
"\n",
|
||||
"1. `engine` - An instance of a `MySQLEngine` engine.\n",
|
||||
"1. `session_id` - A unique identifier string that specifies an id for the session.\n",
|
||||
"1. `table_name` - The name of the table within the Cloud SQL database to store the chat message history."
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 10,
|
||||
"id": "Kq7RLtfOq0wi",
|
||||
"metadata": {
|
||||
"id": "Kq7RLtfOq0wi"
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_google_cloud_sql_mysql import MySQLChatMessageHistory\n",
|
||||
"\n",
|
||||
"history = MySQLChatMessageHistory(\n",
|
||||
" engine, session_id=\"test_session\", table_name=TABLE_NAME\n",
|
||||
")\n",
|
||||
"history.add_user_message(\"hi!\")\n",
|
||||
"history.add_ai_message(\"whats up?\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 11,
|
||||
"id": "b476688cbb32ba90",
|
||||
"metadata": {
|
||||
"ExecuteTime": {
|
||||
"end_time": "2023-08-28T10:04:38.929396Z",
|
||||
"start_time": "2023-08-28T10:04:38.915727Z"
|
||||
},
|
||||
"colab": {
|
||||
"base_uri": "https://localhost:8080/"
|
||||
},
|
||||
"collapsed": false,
|
||||
"id": "b476688cbb32ba90",
|
||||
"jupyter": {
|
||||
"outputs_hidden": false
|
||||
},
|
||||
"outputId": "a19e5cd8-4225-476a-d28d-e870c6b838bb"
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"[HumanMessage(content='hi!'), AIMessage(content='whats up?')]"
|
||||
]
|
||||
},
|
||||
"execution_count": 11,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"history.messages"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "ss6CbqcTTedr",
|
||||
"metadata": {
|
||||
"id": "ss6CbqcTTedr"
|
||||
},
|
||||
"source": [
|
||||
"#### Cleaning up\n",
|
||||
"When the history of a specific session is obsolete, it can be deleted as follows.\n",
"\n",
|
||||
"**Note:** Once deleted, the data is no longer stored in Cloud SQL and is gone forever."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 12,
|
||||
"id": "3khxzFxYO7x6",
|
||||
"metadata": {
|
||||
"id": "3khxzFxYO7x6"
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"history.clear()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "2e5337719d5614fd",
|
||||
"metadata": {
|
||||
"id": "2e5337719d5614fd"
|
||||
},
|
||||
"source": [
|
||||
"## 🔗 Chaining\n",
|
||||
"\n",
|
||||
"We can easily combine this message history class with [LCEL Runnables](/docs/expression_language/how_to/message_history).\n",
"\n",
|
||||
"To do this we will use one of [Google's Vertex AI chat models](/docs/integrations/chat/google_vertex_ai_palm) which requires that you [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com) in your Google Cloud Project.\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 13,
|
||||
"id": "hYtHM3-TOMCe",
|
||||
"metadata": {
|
||||
"id": "hYtHM3-TOMCe"
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# enable Vertex AI API\n",
|
||||
"!gcloud services enable aiplatform.googleapis.com"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 14,
|
||||
"id": "6558418b-0ece-4d01-9661-56d562d78f7a",
|
||||
"metadata": {
|
||||
"id": "6558418b-0ece-4d01-9661-56d562d78f7a"
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
|
||||
"from langchain_core.runnables.history import RunnableWithMessageHistory\n",
|
||||
"from langchain_google_vertexai import ChatVertexAI"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 15,
|
||||
"id": "82149122-61d3-490d-9bdb-bb98606e8ba1",
|
||||
"metadata": {
|
||||
"id": "82149122-61d3-490d-9bdb-bb98606e8ba1"
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"prompt = ChatPromptTemplate.from_messages(\n",
|
||||
" [\n",
|
||||
" (\"system\", \"You are a helpful assistant.\"),\n",
|
||||
" MessagesPlaceholder(variable_name=\"history\"),\n",
|
||||
" (\"human\", \"{question}\"),\n",
|
||||
" ]\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"chain = prompt | ChatVertexAI(project=PROJECT_ID)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 16,
|
||||
"id": "2df90853-b67c-490f-b7f8-b69d69270b9c",
|
||||
"metadata": {
|
||||
"id": "2df90853-b67c-490f-b7f8-b69d69270b9c"
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chain_with_history = RunnableWithMessageHistory(\n",
|
||||
" chain,\n",
|
||||
" lambda session_id: MySQLChatMessageHistory(\n",
|
||||
" engine,\n",
|
||||
" session_id=session_id,\n",
|
||||
" table_name=TABLE_NAME,\n",
|
||||
" ),\n",
|
||||
" input_messages_key=\"question\",\n",
|
||||
" history_messages_key=\"history\",\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 17,
|
||||
"id": "0ce596b8-3b78-48fd-9f92-46dccbbfd58b",
|
||||
"metadata": {
|
||||
"id": "0ce596b8-3b78-48fd-9f92-46dccbbfd58b"
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# This is where we configure the session id\n",
|
||||
"config = {\"configurable\": {\"session_id\": \"test_session\"}}"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 18,
|
||||
"id": "38e1423b-ba86-4496-9151-25932fab1a8b",
|
||||
"metadata": {
|
||||
"colab": {
|
||||
"base_uri": "https://localhost:8080/"
|
||||
},
|
||||
"id": "38e1423b-ba86-4496-9151-25932fab1a8b",
|
||||
"outputId": "d5c93570-4b0b-4fe8-d19c-4b361fe74291"
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"AIMessage(content=' Hello Bob, how can I help you today?')"
|
||||
]
|
||||
},
|
||||
"execution_count": 18,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"chain_with_history.invoke({\"question\": \"Hi! I'm bob\"}, config=config)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 19,
|
||||
"id": "2ee4ee62-a216-4fb1-bf33-57476a84cf16",
|
||||
"metadata": {
|
||||
"colab": {
|
||||
"base_uri": "https://localhost:8080/"
|
||||
},
|
||||
"id": "2ee4ee62-a216-4fb1-bf33-57476a84cf16",
|
||||
"outputId": "288fe388-3f60-41b8-8edb-37cfbec18981"
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"AIMessage(content=' Your name is Bob.')"
|
||||
]
|
||||
},
|
||||
"execution_count": 19,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"chain_with_history.invoke({\"question\": \"Whats my name\"}, config=config)"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"colab": {
|
||||
"provenance": [],
|
||||
"toc_visible": true
|
||||
},
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.9.1"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
||||
@@ -1,561 +0,0 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "f22eab3f84cbeb37",
|
||||
"metadata": {
|
||||
"id": "f22eab3f84cbeb37"
|
||||
},
|
||||
"source": [
|
||||
"# Google SQL for PostgreSQL\n",
|
||||
"\n",
|
||||
"> [Google Cloud SQL](https://cloud.google.com/sql) is a fully managed relational database service that offers high performance, seamless integration, and impressive scalability. It offers `MySQL`, `PostgreSQL`, and `SQL Server` database engines. Extend your database application to build AI-powered experiences leveraging Cloud SQL's LangChain integrations.\n",
"\n",
|
||||
"This notebook goes over how to use `Google Cloud SQL for PostgreSQL` to store chat message history with the `PostgresChatMessageHistory` class.\n",
|
||||
"\n",
|
||||
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-cloud-sql-pg-python/).\n",
|
||||
"\n",
|
||||
"[](https://colab.research.google.com/github/googleapis/langchain-google-cloud-sql-pg-python/blob/main/docs/chat_message_history.ipynb)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "da400c79-a360-43e2-be60-401fd02b2819",
|
||||
"metadata": {
|
||||
"id": "da400c79-a360-43e2-be60-401fd02b2819"
|
||||
},
|
||||
"source": [
|
||||
"## Before You Begin\n",
|
||||
"\n",
|
||||
"To run this notebook, you will need to do the following:\n",
|
||||
"\n",
|
||||
" * [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n",
|
||||
" * [Enable the Cloud SQL Admin API.](https://console.cloud.google.com/marketplace/product/google/sqladmin.googleapis.com)\n",
|
||||
" * [Create a Cloud SQL for PostgreSQL instance](https://cloud.google.com/sql/docs/postgres/create-instance)\n",
|
||||
" * [Create a Cloud SQL database](https://cloud.google.com/sql/docs/mysql/create-manage-databases)\n",
|
||||
" * [Add an IAM database user to the database](https://cloud.google.com/sql/docs/postgres/add-manage-iam-users#creating-a-database-user) (Optional)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "Mm7-fG_LltD7",
|
||||
"metadata": {
|
||||
"id": "Mm7-fG_LltD7"
|
||||
},
|
||||
"source": [
|
||||
"### 🦜🔗 Library Installation\n",
|
||||
"The integration lives in its own `langchain-google-cloud-sql-pg` package, so we need to install it."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "1VELXvcj8AId",
|
||||
"metadata": {
|
||||
"id": "1VELXvcj8AId"
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%pip install --upgrade --quiet langchain-google-cloud-sql-pg langchain-google-vertexai"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "98TVoM3MNDHu",
|
||||
"metadata": {
|
||||
"id": "98TVoM3MNDHu"
|
||||
},
|
||||
"source": [
|
||||
"**Colab only:** Uncomment and run the following cell to restart the kernel, or use the button to restart it. For Vertex AI Workbench, you can restart the terminal using the button at the top."
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "v6jBDnYnNM08",
|
||||
"metadata": {
|
||||
"id": "v6jBDnYnNM08"
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# # Automatically restart kernel after installs so that your environment can access the new packages\n",
|
||||
"# import IPython\n",
|
||||
"\n",
|
||||
"# app = IPython.Application.instance()\n",
|
||||
"# app.kernel.do_shutdown(True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "yygMe6rPWxHS",
|
||||
"metadata": {
|
||||
"id": "yygMe6rPWxHS"
|
||||
},
|
||||
"source": [
|
||||
"### 🔐 Authentication\n",
|
||||
"Authenticate to Google Cloud as the IAM user logged into this notebook in order to access your Google Cloud Project.\n",
|
||||
"\n",
|
||||
"* If you are using Colab to run this notebook, use the cell below and continue.\n",
|
||||
"* If you are using Vertex AI Workbench, check out the setup instructions [here](https://github.com/GoogleCloudPlatform/generative-ai/tree/main/setup-env)."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"id": "PTXN1_DSXj2b",
|
||||
"metadata": {
|
||||
"id": "PTXN1_DSXj2b"
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from google.colab import auth\n",
|
||||
"\n",
|
||||
"auth.authenticate_user()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "NEvB9BoLEulY",
|
||||
"metadata": {
|
||||
"id": "NEvB9BoLEulY"
|
||||
},
|
||||
"source": [
|
||||
"### ☁ Set Your Google Cloud Project\n",
|
||||
"Set your Google Cloud project so that you can leverage Google Cloud resources within this notebook.\n",
|
||||
"\n",
|
||||
"If you don't know your project ID, try the following:\n",
|
||||
"\n",
|
||||
"* Run `gcloud config list`.\n",
|
||||
"* Run `gcloud projects list`.\n",
|
||||
"* See the support page: [Locate the project ID](https://support.google.com/googleapi/answer/7014113)."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "gfkS3yVRE4_W",
|
||||
"metadata": {
|
||||
"cellView": "form",
|
||||
"id": "gfkS3yVRE4_W"
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# @markdown Please fill in the value below with your Google Cloud project ID and then run the cell.\n",
|
||||
"\n",
|
||||
"PROJECT_ID = \"my-project-id\" # @param {type:\"string\"}\n",
|
||||
"\n",
|
||||
"# Set the project id\n",
|
||||
"!gcloud config set project {PROJECT_ID}"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "rEWWNoNnKOgq",
|
||||
"metadata": {
|
||||
"id": "rEWWNoNnKOgq"
|
||||
},
|
||||
"source": [
|
||||
"### 💡 API Enablement\n",
|
||||
"The `langchain-google-cloud-sql-pg` package requires that you [enable the Cloud SQL Admin API](https://console.cloud.google.com/flows/enableapi?apiid=sqladmin.googleapis.com) in your Google Cloud Project."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"id": "5utKIdq7KYi5",
|
||||
"metadata": {
|
||||
"id": "5utKIdq7KYi5"
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# enable Cloud SQL Admin API\n",
|
||||
"!gcloud services enable sqladmin.googleapis.com"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "f8f2830ee9ca1e01",
|
||||
"metadata": {
|
||||
"id": "f8f2830ee9ca1e01"
|
||||
},
|
||||
"source": [
|
||||
"## Basic Usage"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "OMvzMWRrR6n7",
|
||||
"metadata": {
|
||||
"id": "OMvzMWRrR6n7"
|
||||
},
|
||||
"source": [
|
||||
"### Set Cloud SQL database values\n",
|
||||
"Find your database values, in the [Cloud SQL Instances page](https://console.cloud.google.com/sql?_ga=2.223735448.2062268965.1707700487-2088871159.1707257687)."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"id": "irl7eMFnSPZr",
|
||||
"metadata": {
|
||||
"id": "irl7eMFnSPZr"
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# @title Set Your Values Here { display-mode: \"form\" }\n",
|
||||
"REGION = \"us-central1\" # @param {type: \"string\"}\n",
|
||||
"INSTANCE = \"my-postgresql-instance\" # @param {type: \"string\"}\n",
|
||||
"DATABASE = \"my-database\" # @param {type: \"string\"}\n",
|
||||
"TABLE_NAME = \"message_store\" # @param {type: \"string\"}"
|
||||
]
|
||||
},
|
||||
{
"cell_type": "markdown",
"id": "QuQigs4UoFQ2",
"metadata": {
"id": "QuQigs4UoFQ2"
},
"source": [
"### PostgresEngine Connection Pool\n",
"\n",
"One of the requirements and arguments to establish Cloud SQL as a ChatMessageHistory memory store is a `PostgresEngine` object. The `PostgresEngine` configures a connection pool to your Cloud SQL database, enabling successful connections from your application and following industry best practices.\n",
"\n",
"To create a `PostgresEngine` using `PostgresEngine.from_instance()` you need to provide only 4 things:\n",
"\n",
"1. `project_id` : Project ID of the Google Cloud Project where the Cloud SQL instance is located.\n",
"1. `region` : Region where the Cloud SQL instance is located.\n",
"1. `instance` : The name of the Cloud SQL instance.\n",
"1. `database` : The name of the database to connect to on the Cloud SQL instance.\n",
"\n",
"By default, [IAM database authentication](https://cloud.google.com/sql/docs/postgres/iam-authentication#iam-db-auth) will be used as the method of database authentication. This library uses the IAM principal belonging to the [Application Default Credentials (ADC)](https://cloud.google.com/docs/authentication/application-default-credentials) sourced from the environment.\n",
"\n",
"For more information on IAM database authentication, please see:\n",
"\n",
"* [Configure an instance for IAM database authentication](https://cloud.google.com/sql/docs/postgres/create-edit-iam-instances)\n",
"* [Manage users with IAM database authentication](https://cloud.google.com/sql/docs/postgres/add-manage-iam-users)\n",
"\n",
"Optionally, [built-in database authentication](https://cloud.google.com/sql/docs/postgres/built-in-authentication) using a username and password to access the Cloud SQL database can also be used. Just provide the optional `user` and `password` arguments to `PostgresEngine.from_instance()`:\n",
"\n",
"* `user` : Database user to use for built-in database authentication and login.\n",
"* `password` : Database password to use for built-in database authentication and login.\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "4576e914a866fb40",
"metadata": {
"ExecuteTime": {
"end_time": "2023-08-28T10:04:38.077748Z",
"start_time": "2023-08-28T10:04:36.105894Z"
},
"collapsed": false,
"id": "4576e914a866fb40",
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"from langchain_google_cloud_sql_pg import PostgresEngine\n",
"\n",
"engine = PostgresEngine.from_instance(\n",
"    project_id=PROJECT_ID, region=REGION, instance=INSTANCE, database=DATABASE\n",
")"
]
},
{
"cell_type": "markdown",
"id": "qPV8WfWr7O54",
"metadata": {
"id": "qPV8WfWr7O54"
},
"source": [
"### Initialize a table\n",
"The `PostgresChatMessageHistory` class requires a database table with a specific schema in order to store the chat message history.\n",
"\n",
"The `PostgresEngine` engine has a helper method `init_chat_history_table()` that can be used to create a table with the proper schema for you."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "TEu4VHArRttE",
"metadata": {
"id": "TEu4VHArRttE"
},
"outputs": [],
"source": [
"engine.init_chat_history_table(table_name=TABLE_NAME)"
]
},
{
"cell_type": "markdown",
"id": "zSYQTYf3UfOi",
"metadata": {
"id": "zSYQTYf3UfOi"
},
"source": [
"### PostgresChatMessageHistory\n",
"\n",
"To initialize the `PostgresChatMessageHistory` class you need to provide only 3 things:\n",
"\n",
"1. `engine` : An instance of a `PostgresEngine` engine.\n",
"1. `session_id` : A unique identifier string that specifies an id for the session.\n",
"1. `table_name` : The name of the table within the Cloud SQL database to store the chat message history."
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "Kq7RLtfOq0wi",
"metadata": {
"id": "Kq7RLtfOq0wi"
},
"outputs": [],
"source": [
"from langchain_google_cloud_sql_pg import PostgresChatMessageHistory\n",
"\n",
"history = PostgresChatMessageHistory.create_sync(\n",
"    engine, session_id=\"test_session\", table_name=TABLE_NAME\n",
")\n",
"history.add_user_message(\"hi!\")\n",
"history.add_ai_message(\"whats up?\")"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "b476688cbb32ba90",
"metadata": {
"ExecuteTime": {
"end_time": "2023-08-28T10:04:38.929396Z",
"start_time": "2023-08-28T10:04:38.915727Z"
},
"colab": {
"base_uri": "https://localhost:8080/"
},
"collapsed": false,
"id": "b476688cbb32ba90",
"jupyter": {
"outputs_hidden": false
},
"outputId": "a19e5cd8-4225-476a-d28d-e870c6b838bb"
},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessage(content='hi!'), AIMessage(content='whats up?')]"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"history.messages"
]
},
{
"cell_type": "markdown",
"id": "ss6CbqcTTedr",
"metadata": {
"id": "ss6CbqcTTedr"
},
"source": [
"#### Cleaning up\n",
"When the history of a specific session is obsolete, it can be deleted as follows.\n",
"\n",
"**Note:** Once deleted, the data is no longer stored in Cloud SQL and is gone forever."
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "3khxzFxYO7x6",
"metadata": {
"id": "3khxzFxYO7x6"
},
"outputs": [],
"source": [
"history.clear()"
]
},
{
"cell_type": "markdown",
"id": "2e5337719d5614fd",
"metadata": {
"id": "2e5337719d5614fd"
},
"source": [
"## 🔗 Chaining\n",
"\n",
"We can easily combine this message history class with [LCEL Runnables](/docs/expression_language/how_to/message_history).\n",
"\n",
"To do this we will use one of [Google's Vertex AI chat models](/docs/integrations/chat/google_vertex_ai_palm), which requires that you [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com) in your Google Cloud Project.\n"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "hYtHM3-TOMCe",
"metadata": {
"id": "hYtHM3-TOMCe"
},
"outputs": [],
"source": [
"# enable Vertex AI API\n",
"!gcloud services enable aiplatform.googleapis.com"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "6558418b-0ece-4d01-9661-56d562d78f7a",
"metadata": {
"id": "6558418b-0ece-4d01-9661-56d562d78f7a"
},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"from langchain_core.runnables.history import RunnableWithMessageHistory\n",
"from langchain_google_vertexai import ChatVertexAI"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "82149122-61d3-490d-9bdb-bb98606e8ba1",
"metadata": {
"id": "82149122-61d3-490d-9bdb-bb98606e8ba1"
},
"outputs": [],
"source": [
"prompt = ChatPromptTemplate.from_messages(\n",
"    [\n",
"        (\"system\", \"You are a helpful assistant.\"),\n",
"        MessagesPlaceholder(variable_name=\"history\"),\n",
"        (\"human\", \"{question}\"),\n",
"    ]\n",
")\n",
"\n",
"chain = prompt | ChatVertexAI(project=PROJECT_ID)"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "2df90853-b67c-490f-b7f8-b69d69270b9c",
"metadata": {
"id": "2df90853-b67c-490f-b7f8-b69d69270b9c"
},
"outputs": [],
"source": [
"chain_with_history = RunnableWithMessageHistory(\n",
"    chain,\n",
"    lambda session_id: PostgresChatMessageHistory.create_sync(\n",
"        engine,\n",
"        session_id=session_id,\n",
"        table_name=TABLE_NAME,\n",
"    ),\n",
"    input_messages_key=\"question\",\n",
"    history_messages_key=\"history\",\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "0ce596b8-3b78-48fd-9f92-46dccbbfd58b",
"metadata": {
"id": "0ce596b8-3b78-48fd-9f92-46dccbbfd58b"
},
"outputs": [],
"source": [
"# This is where we configure the session id\n",
"config = {\"configurable\": {\"session_id\": \"test_session\"}}"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "38e1423b-ba86-4496-9151-25932fab1a8b",
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "38e1423b-ba86-4496-9151-25932fab1a8b",
"outputId": "d5c93570-4b0b-4fe8-d19c-4b361fe74291"
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=' Hello Bob, how can I help you today?')"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain_with_history.invoke({\"question\": \"Hi! I'm bob\"}, config=config)"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "2ee4ee62-a216-4fb1-bf33-57476a84cf16",
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "2ee4ee62-a216-4fb1-bf33-57476a84cf16",
"outputId": "288fe388-3f60-41b8-8edb-37cfbec18981"
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=' Your name is Bob.')"
]
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain_with_history.invoke({\"question\": \"Whats my name\"}, config=config)"
]
}
],
"metadata": {
"colab": {
"provenance": [],
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -27,7 +7,7 @@
"source": [
"from datetime import timedelta\n",
"\n",
"from langchain_community.chat_message_histories import MomentoChatMessageHistory\n",
"from langchain.memory import MomentoChatMessageHistory\n",
"\n",
"session_id = \"foo\"\n",
"cache_name = \"langchain\"\n",

@@ -202,9 +202,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 4
"nbformat_minor": 2
}

@@ -19,7 +19,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_message_histories import Neo4jChatMessageHistory\n",
"from langchain.memory import Neo4jChatMessageHistory\n",
"\n",
"history = Neo4jChatMessageHistory(\n",
"    url=\"bolt://localhost:7687\",\n",
@@ -68,7 +68,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.8.8"
}
},
"nbformat": 4,

@@ -19,9 +19,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_message_histories import (\n",
"    PostgresChatMessageHistory,\n",
")\n",
"from langchain.memory import PostgresChatMessageHistory\n",
"\n",
"history = PostgresChatMessageHistory(\n",
"    connection_string=\"postgresql://postgres:mypassword@localhost/chat_history\",\n",
@@ -31,16 +31,6 @@
"pip install -U langchain-community redis"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b11090e7-284b-4ed2-9790-ce0d35638717",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_message_histories import RedisChatMessageHistory"
]
},
{
"cell_type": "markdown",
"id": "20b99474-75ea-422e-9809-fbdb9d103afc",
@@ -56,6 +46,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_message_histories import RedisChatMessageHistory\n",
"\n",
"history = RedisChatMessageHistory(\"foo\", url=\"redis://localhost:6379\")\n",
"\n",
"history.add_user_message(\"hi!\")\n",
@@ -111,6 +103,7 @@
"source": [
"from typing import Optional\n",
"\n",
"from langchain_community.chat_message_histories import RedisChatMessageHistory\n",
"from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"from langchain_core.runnables.history import RunnableWithMessageHistory\n",
"from langchain_openai import ChatOpenAI"
@@ -185,7 +178,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.9.18"
}
},
"nbformat": 4,

@@ -52,9 +52,7 @@
},
"outputs": [],
"source": [
"from langchain_community.chat_message_histories import (\n",
"    RocksetChatMessageHistory,\n",
")\n",
"from langchain_community.chat_message_histories import RocksetChatMessageHistory\n",
"from rockset import Regions, RocksetClient\n",
"\n",
"history = RocksetChatMessageHistory(\n",

@@ -18,9 +18,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_message_histories import (\n",
"    SingleStoreDBChatMessageHistory,\n",
")\n",
"from langchain.memory import SingleStoreDBChatMessageHistory\n",
"\n",
"history = SingleStoreDBChatMessageHistory(\n",
"    session_id=\"foo\", host=\"root:pass@localhost:3306/db\"\n",
@@ -58,9 +56,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.11.2"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
}
@@ -71,10 +71,7 @@
"end_time": "2023-08-28T10:04:38.077748Z",
"start_time": "2023-08-28T10:04:36.105894Z"
},
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
"collapsed": false
},
"outputs": [],
"source": [
@@ -97,10 +94,7 @@
"end_time": "2023-08-28T10:04:38.929396Z",
"start_time": "2023-08-28T10:04:38.915727Z"
},
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
"collapsed": false
},
"outputs": [
{
@@ -247,7 +241,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.9.1"
}
},
"nbformat": 4,

@@ -36,9 +36,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_message_histories import (\n",
"    StreamlitChatMessageHistory,\n",
")\n",
"from langchain_community.chat_message_histories import StreamlitChatMessageHistory\n",
"\n",
"history = StreamlitChatMessageHistory(key=\"chat_messages\")\n",
"\n",
@@ -61,7 +59,7 @@
"id": "b60dc735",
"metadata": {},
"source": [
"We can easily combine this message history class with [LCEL Runnables](/docs/expression_language/how_to/message_history).\n",
"We can easily combine this message history class with [LCEL Runnables](https://python.langchain.com/docs/expression_language/how_to/message_history).\n",
"\n",
"The history will be persisted across re-runs of the Streamlit app within a given user session. A given `StreamlitChatMessageHistory` will NOT be persisted or shared across user sessions."
]
@@ -73,6 +71,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_message_histories import StreamlitChatMessageHistory\n",
"\n",
"# Optionally, specify your own session_state key for storing messages\n",
"msgs = StreamlitChatMessageHistory(key=\"special_app_key\")\n",
"\n",

@@ -6,7 +6,7 @@
"source": [
"# TiDB\n",
"\n",
"> [TiDB Cloud](https://tidbcloud.com/), is a comprehensive Database-as-a-Service (DBaaS) solution, that provides dedicated and serverless options. TiDB Serverless is now integrating a built-in vector search into the MySQL landscape. With this enhancement, you can seamlessly develop AI applications using TiDB Serverless without the need for a new database or additional technical stacks. Be among the first to experience it by joining the waitlist for the private beta at https://tidb.cloud/ai.\n",
"> [TiDB](https://github.com/pingcap/tidb) is an open-source, cloud-native, distributed, MySQL-Compatible database for elastic scale and real-time analytics.\n",
"\n",
"This notebook introduces how to use TiDB to store chat message history. "
]
@@ -244,7 +244,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": "langchain",
"language": "python",
"name": "python3"
},
@@ -258,9 +258,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.10.13"
}
},
"nbformat": 4,
"nbformat_minor": 4
"nbformat_minor": 2
}
@@ -17,7 +17,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_message_histories import (\n",
"from langchain_community.chat_message_histories.upstash_redis import (\n",
"    UpstashRedisChatMessageHistory,\n",
")\n",
"\n",

@@ -75,7 +75,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_message_histories import XataChatMessageHistory\n",
"from langchain.memory import XataChatMessageHistory\n",
"\n",
"history = XataChatMessageHistory(\n",
"    session_id=\"session-1\", api_key=api_key, db_url=db_url, table_name=\"memory\"\n",
@@ -154,7 +154,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.vectorstores import XataVectorStore\n",
"from langchain_community.vectorstores.xata import XataVectorStore\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"embeddings = OpenAIEmbeddings()\n",
@@ -419,7 +419,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.11.6"
}
},
"nbformat": 4,
@@ -67,7 +67,7 @@ See a [usage example](/docs/integrations/chat/bedrock).
from langchain_community.chat_models import BedrockChat
```

## Embedding Models
## Text Embedding Models

### Bedrock

@@ -84,6 +84,26 @@ from langchain_community.embeddings import SagemakerEndpointEmbeddings
from langchain_community.llms.sagemaker_endpoint import ContentHandlerBase
```

## Chains

### Amazon Comprehend Moderation Chain

>[Amazon Comprehend](https://aws.amazon.com/comprehend/) is a natural-language processing (NLP) service that
> uses machine learning to uncover valuable insights and connections in text.

We need to install the `boto3` and `nltk` libraries.

```bash
pip install boto3 nltk
```

See a [usage example](/docs/guides/safety/amazon_comprehend_chain).

```python
from langchain_experimental.comprehend_moderation import AmazonComprehendModerationChain
```

## Document loaders

### AWS S3 Directory and File
@@ -112,55 +132,25 @@ See a [usage example](/docs/integrations/document_loaders/amazon_textract).
from langchain_community.document_loaders import AmazonTextractPDFLoader
```

## Vector stores
## Memory

### Amazon OpenSearch Service
### AWS DynamoDB

> [Amazon OpenSearch Service](https://aws.amazon.com/opensearch-service/) performs
> interactive log analytics, real-time application monitoring, website search, and more. `OpenSearch` is
> an open source,
> distributed search and analytics suite derived from `Elasticsearch`. `Amazon OpenSearch Service` offers the
> latest versions of `OpenSearch`, support for many versions of `Elasticsearch`, as well as
> visualization capabilities powered by `OpenSearch Dashboards` and `Kibana`.
>[AWS DynamoDB](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/dynamodb/index.html)
> is a fully managed `NoSQL` database service that provides fast and predictable performance with seamless scalability.

We have to configure the [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html).

We need to install several python libraries.
We need to install the `boto3` library.

```bash
pip install boto3 requests requests-aws4auth
pip install boto3
```

See a [usage example](/docs/integrations/vectorstores/opensearch#using-aos-amazon-opensearch-service).
See a [usage example](/docs/integrations/memory/aws_dynamodb).

```python
from langchain_community.vectorstores import OpenSearchVectorSearch
```

### Amazon DocumentDB Vector Search

>[Amazon DocumentDB (with MongoDB Compatibility)](https://docs.aws.amazon.com/documentdb/) makes it easy to set up, operate, and scale MongoDB-compatible databases in the cloud.
> With Amazon DocumentDB, you can run the same application code and use the same drivers and tools that you use with MongoDB.
> Vector search for Amazon DocumentDB combines the flexibility and rich querying capability of a JSON-based document database with the power of vector search.

#### Installation and Setup

See [detail configuration instructions](/docs/integrations/vectorstores/documentdb).

We need to install the `pymongo` python package.

```bash
pip install pymongo
```

#### Deploy DocumentDB on AWS

[Amazon DocumentDB (with MongoDB Compatibility)](https://docs.aws.amazon.com/documentdb/) is a fast, reliable, and fully managed database service. Amazon DocumentDB makes it easy to set up, operate, and scale MongoDB-compatible databases in the cloud.

AWS offers services for computing, databases, storage, analytics, and other functionality. For an overview of all AWS services, see [Cloud Computing with Amazon Web Services](https://aws.amazon.com/what-is-aws/).

See a [usage example](/docs/integrations/vectorstores/documentdb).

```python
from langchain.vectorstores import DocumentDBVectorSearch
from langchain.memory import DynamoDBChatMessageHistory
```

## Retrievers
@@ -207,6 +197,29 @@ See a [usage example](/docs/integrations/retrievers/bedrock).
from langchain.retrievers import AmazonKnowledgeBasesRetriever
```

## Vector stores

### Amazon OpenSearch Service

> [Amazon OpenSearch Service](https://aws.amazon.com/opensearch-service/) performs
> interactive log analytics, real-time application monitoring, website search, and more. `OpenSearch` is
> an open source,
> distributed search and analytics suite derived from `Elasticsearch`. `Amazon OpenSearch Service` offers the
> latest versions of `OpenSearch`, support for many versions of `Elasticsearch`, as well as
> visualization capabilities powered by `OpenSearch Dashboards` and `Kibana`.

We need to install several python libraries.

```bash
pip install boto3 requests requests-aws4auth
```

See a [usage example](/docs/integrations/vectorstores/opensearch#using-aos-amazon-opensearch-service).

```python
from langchain_community.vectorstores import OpenSearchVectorSearch
```

## Tools

### AWS Lambda
@@ -225,26 +238,6 @@ pip install boto3

See a [usage example](/docs/integrations/tools/awslambda).

## Memory

### AWS DynamoDB

>[AWS DynamoDB](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/dynamodb/index.html)
> is a fully managed `NoSQL` database service that provides fast and predictable performance with seamless scalability.

We have to configure the [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html).

We need to install the `boto3` library.

```bash
pip install boto3
```

See a [usage example](/docs/integrations/memory/aws_dynamodb).

```python
from langchain.memory import DynamoDBChatMessageHistory
```

## Callbacks

@@ -268,23 +261,3 @@ See a [usage example](/docs/integrations/callbacks/sagemaker_tracking).
```python
from langchain.callbacks import SageMakerCallbackHandler
```

## Chains

### Amazon Comprehend Moderation Chain

>[Amazon Comprehend](https://aws.amazon.com/comprehend/) is a natural-language processing (NLP) service that
> uses machine learning to uncover valuable insights and connections in text.

We need to install the `boto3` and `nltk` libraries.

```bash
pip install boto3 nltk
```

See a [usage example](/docs/guides/safety/amazon_comprehend_chain).

```python
from langchain_experimental.comprehend_moderation import AmazonComprehendModerationChain
```
@@ -20,9 +20,25 @@ See a [usage example](/docs/integrations/llms/google_ai).
from langchain_google_genai import GoogleGenerativeAI
```

### Vertex AI Model Garden
### Vertex AI

Access `PaLM` and hundreds of OSS models via `Vertex AI Model Garden` service.
Access to `Gemini` and `PaLM` LLMs (like `text-bison` and `code-bison`) via `Vertex AI` on Google Cloud.

We need to install `langchain-google-vertexai` python package.

```bash
pip install langchain-google-vertexai
```

See a [usage example](/docs/integrations/llms/google_vertex_ai_palm).

```python
from langchain_google_vertexai import VertexAI
```

### Model Garden

Access PaLM and hundreds of OSS models via `Vertex AI Model Garden` on Google Cloud.

We need to install `langchain-google-vertexai` python package.

@@ -36,7 +52,6 @@ See a [usage example](/docs/integrations/llms/google_vertex_ai_palm#vertex-model
from langchain_google_vertexai import VertexAIModelGarden
```


## Chat models

### Google Generative AI
@@ -104,40 +119,6 @@ See a [usage example](/docs/integrations/chat/google_vertex_ai_palm).
from langchain_google_vertexai import ChatVertexAI
```

## Embedding models

### Google Generative AI Embeddings

See a [usage example](/docs/integrations/text_embedding/google_generative_ai).

```bash
pip install -U langchain-google-genai
```

Configure your API key.

```bash
export GOOGLE_API_KEY=your-api-key
```

```python
from langchain_google_genai import GoogleGenerativeAIEmbeddings
```

### Vertex AI

We need to install `langchain-google-vertexai` python package.

```bash
pip install langchain-google-vertexai
```

See a [usage example](/docs/integrations/text_embedding/google_vertex_ai_palm).

```python
from langchain_google_vertexai import VertexAIEmbeddings
```

## Document Loaders

### AlloyDB for PostgreSQL
@@ -495,7 +476,7 @@ Install the python package:
pip install langchain-google-cloud-sql-pg
```

See [usage example](/docs/integrations/vectorstores/google_cloud_sql_pg).
See [usage example](/docs/integrations/vectorstores/google_sql_pg).

```python
from langchain_google_cloud_sql_pg import PostgreSQLEngine, PostgresVectorStore
@@ -779,7 +760,7 @@ Install the python package:
pip install langchain-google-cloud-sql-pg
```

See [usage example](/docs/integrations/memory/google_sql_pg).
See [usage example](/docs/integrations/memory/google_cloud_sql_pg).


```python
@@ -795,7 +776,7 @@ Install the python package:
pip install langchain-google-cloud-sql-mysql
```

See [usage example](/docs/integrations/memory/google_sql_mysql).
See [usage example](/docs/integrations/memory/google_cloud_sql_mysql).

```python
from langchain_google_cloud_sql_mysql import MySQLEngine, MySQLChatMessageHistory
@@ -810,12 +791,28 @@ Install the python package:
pip install langchain-google-cloud-sql-mssql
```

See [usage example](/docs/integrations/memory/google_sql_mssql).
See [usage example](/docs/integrations/memory/google_cloud_sql_mssql).

```python
from langchain_google_cloud_sql_mssql import MSSQLEngine, MSSQLChatMessageHistory
```

## El Carro for Oracle Workloads

> Google [El Carro Oracle Operator](https://github.com/GoogleCloudPlatform/elcarro-oracle-operator)
offers a way to run Oracle databases in Kubernetes as a portable, open source,
community driven, no vendor lock-in container orchestration system.

```bash
pip install langchain-google-el-carro
```

See [usage example](/docs/integrations/memory/google_el_carro).

```python
from langchain_google_el_carro import ElCarroChatMessageHistory
```

### Spanner

> [Google Cloud Spanner](https://cloud.google.com/spanner/docs) is a fully managed, mission-critical, relational database service on Google Cloud that offers transactional consistency at global scale, automatic, synchronous replication for high availability, and support for two SQL dialects: GoogleSQL (ANSI 2011 with extensions) and PostgreSQL.
@@ -886,16 +883,16 @@ Install the python package:
pip install langchain-google-datastore
```

See [usage example](/docs/integrations/memory/google_firestore_datastore).
See [usage example](/docs/integrations/memory/google_datastore).

```python
from langchain_google_datastore import DatastoreChatMessageHistory
```

### El Carro: The Oracle Operator for Kubernetes
## El Carro Oracle Operator

> Google [El Carro Oracle Operator for Kubernetes](https://github.com/GoogleCloudPlatform/elcarro-oracle-operator)
offers a way to run `Oracle` databases in `Kubernetes` as a portable, open source,
> Google [El Carro Oracle Operator](https://github.com/GoogleCloudPlatform/elcarro-oracle-operator)
offers a way to run Oracle databases in Kubernetes as a portable, open source,
community driven, no vendor lock-in container orchestration system.

```bash
@@ -2,26 +2,28 @@
|
||||
|
||||
All functionality related to the [Hugging Face Platform](https://huggingface.co/).
|
||||
|
||||
## Chat models
|
||||
## LLMs
|
||||
|
||||
### Models from Hugging Face
|
||||
### Hugging Face Hub
|
||||
|
||||
We can use the `Hugging Face` LLM classes or directly use the `ChatHuggingFace` class.
|
||||
>The [Hugging Face Hub](https://huggingface.co/docs/hub/index) is a platform
|
||||
> with over 350k models, 75k datasets, and 150k demo apps (Spaces), all open source
|
||||
> and publicly available, in an online platform where people can easily
|
||||
> collaborate and build ML together. The Hub works as a central place where anyone
|
||||
> can explore, experiment, collaborate, and build technology with Machine Learning.
|
||||
|
||||
We need to install several python packages.
|
||||
To use, we should have the `huggingface_hub` python [package installed](https://huggingface.co/docs/huggingface_hub/installation).
|
||||
|
||||
```bash
|
||||
pip install huggingface_hub
|
||||
pip install transformers
|
||||
```
|
||||
See a [usage example](/docs/integrations/chat/huggingface).
|
||||
|
||||
See a [usage example](/docs/integrations/llms/huggingface_hub).
|
||||
|
||||
```python
|
||||
from langchain_community.chat_models.huggingface import ChatHuggingFace
|
||||
from langchain_community.llms import HuggingFaceHub
|
||||
```
|
||||
|
||||
## LLMs
|
||||
|
||||
### Hugging Face Local Pipelines
|
||||
|
||||
Hugging Face models can be run locally through the `HuggingFacePipeline` class.
|
||||
@@ -54,6 +56,41 @@ optimum-cli export openvino --model gpt2 ov_model
|
||||
|
||||
To apply [weight-only quantization](https://github.com/huggingface/optimum-intel?tab=readme-ov-file#export) when exporting your model.
|
||||
|
||||
### Hugging Face TextGen Inference
|
||||
|
||||
>[Text Generation Inference](https://github.com/huggingface/text-generation-inference) is
|
||||
> a Rust, Python and gRPC server for text generation inference. Used in production at
|
||||
> [HuggingFace](https://huggingface.co/) to power LLMs api-inference widgets.
|
||||
|
||||
We need to install `text_generation` python package.
|
||||
|
||||
```bash
|
||||
pip install text_generation
|
||||
```
|
||||
|
||||
See a [usage example](/docs/integrations/llms/huggingface_textgen_inference).
|
||||
|
||||
```python
|
||||
from langchain_community.llms import HuggingFaceTextGenInference
|
||||
```
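`HuggingFaceTextGenInference` is a thin client over TGI's HTTP API, which takes a JSON body with the prompt under `inputs` and sampling options under `parameters`. A sketch of building such a request body (the endpoint shape follows the TGI README; the helper function itself is illustrative, not part of the library):

```python
import json

def build_generate_request(prompt: str, max_new_tokens: int = 64,
                           temperature: float = 0.8) -> str:
    # JSON body shaped like a TGI /generate request.
    body = {
        "inputs": prompt,
        "parameters": {
            "max_new_tokens": max_new_tokens,
            "temperature": temperature,
        },
    }
    return json.dumps(body)

payload = build_generate_request("What is Deep Learning?")
```

The LangChain class fills in this body from its constructor arguments and POSTs it to the server URL you pass as `inference_server_url`.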

## Chat models

### Models from Hugging Face

We can use the `Hugging Face` LLM classes or directly use the `ChatHuggingFace` class.

We need to install several Python packages.

```bash
pip install huggingface_hub
pip install transformers
```
See a [usage example](/docs/integrations/chat/huggingface).

```python
from langchain_community.chat_models.huggingface import ChatHuggingFace
```

## Embedding Models

@@ -2,15 +2,6 @@

All functionality related to `Microsoft Azure` and other `Microsoft` products.

## LLMs
### Azure OpenAI

See a [usage example](/docs/integrations/llms/azure_openai).

```python
from langchain_openai import AzureOpenAI
```

## Chat Models
### Azure OpenAI

@@ -38,7 +29,7 @@ See a [usage example](/docs/integrations/chat/azure_chat_openai)
from langchain_openai import AzureChatOpenAI
```

## Embedding Models
## Text Embedding Models
### Azure OpenAI

See a [usage example](/docs/integrations/text_embedding/azureopenai)
@@ -47,6 +38,15 @@ See a [usage example](/docs/integrations/text_embedding/azureopenai)
from langchain_openai import AzureOpenAIEmbeddings
```

## LLMs
### Azure OpenAI

See a [usage example](/docs/integrations/llms/azure_openai).

```python
from langchain_openai import AzureOpenAI
```

## Document loaders

### Azure AI Data

@@ -209,6 +209,7 @@ See a [usage example](/docs/integrations/document_loaders/microsoft_onenote).
from langchain_community.document_loaders.onenote import OneNoteLoader
```


## Vector stores

### Azure Cosmos DB

@@ -261,6 +262,19 @@ See a [usage example](/docs/integrations/retrievers/azure_cognitive_search).
from langchain.retrievers import AzureCognitiveSearchRetriever
```

## Utilities

### Bing Search API

>[Microsoft Bing](https://www.bing.com/), commonly referred to as `Bing` or `Bing Search`,
> is a web search engine owned and operated by `Microsoft`.

See a [usage example](/docs/integrations/tools/bing_search).

```python
from langchain_community.utilities import BingSearchAPIWrapper
```
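Utility wrappers like `BingSearchAPIWrapper` share one shape: hold credentials and endpoint config, send the query, and normalize the provider's JSON into plain result dicts. A sketch of that pattern with the HTTP call injected so nothing touches the network (the response field names mirror typical web-search payloads but are illustrative):

```python
class SearchAPIWrapper:
    """Toy analogue of a search utility wrapper such as BingSearchAPIWrapper."""

    def __init__(self, api_key: str, fetch):
        self.api_key = api_key   # real wrappers usually read this from the environment
        self.fetch = fetch       # injected stand-in for the HTTP client

    def results(self, query: str, num_results: int) -> list[dict]:
        # Normalize the raw provider JSON into uniform result dicts.
        raw = self.fetch(query, self.api_key)
        return [
            {"title": item["name"], "link": item["url"]}
            for item in raw["webPages"]["value"][:num_results]
        ]

def fake_fetch(query, key):
    # Canned response standing in for the search provider.
    return {"webPages": {"value": [
        {"name": "Result A", "url": "https://a.example"},
        {"name": "Result B", "url": "https://b.example"},
    ]}}

wrapper = SearchAPIWrapper(api_key="dummy", fetch=fake_fetch)
hits = wrapper.results("langchain", num_results=1)
```

Normalizing to a fixed dict shape is what lets agents and tools consume any search backend interchangeably.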

## Toolkits

### Azure Cognitive Services

@@ -306,19 +320,6 @@ from langchain_community.agent_toolkits import PowerBIToolkit
from langchain_community.utilities.powerbi import PowerBIDataset
```

## Utilities

### Bing Search API

>[Microsoft Bing](https://www.bing.com/), commonly referred to as `Bing` or `Bing Search`,
> is a web search engine owned and operated by `Microsoft`.

See a [usage example](/docs/integrations/tools/bing_search).

```python
from langchain_community.utilities import BingSearchAPIWrapper
```

## More

### Microsoft Presidio

@@ -36,6 +36,7 @@ from langchain_openai import AzureOpenAI
```
For a more detailed walkthrough of the `Azure` wrapper, see [here](/docs/integrations/llms/azure_openai)

## Chat model

See a [usage example](/docs/integrations/chat/openai).
@@ -50,7 +51,8 @@ from langchain_openai import AzureChatOpenAI
```
For a more detailed walkthrough of the `Azure` wrapper, see [here](/docs/integrations/chat/azure_chat_openai)

## Embedding Model
## Text Embedding Model

See a [usage example](/docs/integrations/text_embedding/openai)
@@ -58,6 +60,19 @@ See a [usage example](/docs/integrations/text_embedding/openai)
from langchain_openai import OpenAIEmbeddings
```

## Tokenizer

There are several places you can use the `tiktoken` tokenizer. By default, it is used to count tokens
for OpenAI LLMs.

You can also use it to count tokens when splitting documents with
```python
from langchain_text_splitters import CharacterTextSplitter
CharacterTextSplitter.from_tiktoken_encoder(...)
```
For a more detailed walkthrough of this, see [this notebook](/docs/modules/data_connection/document_transformers/split_by_token#tiktoken)
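`from_tiktoken_encoder` sizes chunks by token count rather than by characters. The idea can be sketched with the stdlib alone, using a whitespace tokenizer as a stand-in for tiktoken's BPE encoding (the tokenizer and chunk size here are illustrative):

```python
def split_by_token_count(text: str, chunk_size: int) -> list[str]:
    # Stand-in tokenizer: whitespace words instead of tiktoken BPE tokens.
    tokens = text.split()
    chunks = []
    for start in range(0, len(tokens), chunk_size):
        # Each chunk holds at most chunk_size tokens.
        chunks.append(" ".join(tokens[start:start + chunk_size]))
    return chunks

chunks = split_by_token_count("one two three four five", chunk_size=2)
```

Counting tokens instead of characters keeps chunks aligned with model context limits, which is why the tiktoken-backed splitter exists.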

## Document Loader

See a [usage example](/docs/integrations/document_loaders/chatgpt_loader).
@@ -74,6 +89,22 @@ See a [usage example](/docs/integrations/retrievers/chatgpt-plugin).
from langchain.retrievers import ChatGPTPluginRetriever
```

## Chain

See a [usage example](/docs/guides/safety/moderation).

```python
from langchain.chains import OpenAIModerationChain
```
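A moderation chain wraps a model call and either raises or replaces the text when the provider's moderation endpoint flags it. A toy version of that control flow, with a keyword check standing in for the hosted moderation model (the blocklist, function, and error type are all illustrative, not the library's API):

```python
class ModerationError(ValueError):
    pass

BLOCKLIST = {"forbidden"}  # stand-in for the hosted moderation model

def moderate(text: str, error: bool = True) -> str:
    # Mirror the chain's two modes: raise on flagged text, or replace it.
    flagged = any(word in BLOCKLIST for word in text.lower().split())
    if not flagged:
        return text
    if error:
        raise ModerationError("Text was found that violates content policy")
    return "Text was found that violates content policy"

safe = moderate("hello there")
```

Placing such a check after generation is the usual way to keep policy-violating model output from reaching users.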

## Adapter

See a [usage example](/docs/integrations/adapters/openai).

```python
from langchain.adapters import openai as lc_openai
```

## Tools

### Dall-E Image Generator

@@ -88,33 +119,3 @@ See a [usage example](/docs/integrations/tools/dalle_image_generator).
```python
from langchain_community.utilities.dalle_image_generator import DallEAPIWrapper
```

## Adapter

See a [usage example](/docs/integrations/adapters/openai).

```python
from langchain.adapters import openai as lc_openai
```

## Tokenizer

There are several places you can use the `tiktoken` tokenizer. By default, it is used to count tokens
for OpenAI LLMs.

You can also use it to count tokens when splitting documents with
```python
from langchain.text_splitter import CharacterTextSplitter
CharacterTextSplitter.from_tiktoken_encoder(...)
```
For a more detailed walkthrough of this, see [this notebook](/docs/modules/data_connection/document_transformers/split_by_token#tiktoken)

## Chain

See a [usage example](/docs/guides/safety/moderation).

```python
from langchain.chains import OpenAIModerationChain
```


@@ -15,11 +15,11 @@ pip install python-arango

Connect your `ArangoDB` database with a chat model to get insights on your data.

See the notebook example [here](/docs/use_cases/graph/integrations/graph_arangodb_qa).
See the notebook example [here](/docs/use_cases/graph/graph_arangodb_qa).

```python
from arango import ArangoClient

from langchain_community.graphs import ArangoGraph
from langchain.chains import ArangoGraphQAChain
```
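`ArangoGraphQAChain` follows the common graph-QA loop: ask an LLM to turn the question into a database query (AQL here), execute it against the graph, then have the LLM phrase the rows as an answer. A minimal sketch of that loop with stand-in callables in place of the LLM and database (all names below are illustrative):

```python
def graph_qa(question: str, generate_query, run_query, summarize) -> str:
    # 1. LLM writes a graph query from the natural-language question.
    aql = generate_query(question)
    # 2. The query runs against the graph database.
    rows = run_query(aql)
    # 3. LLM turns the raw rows into a readable answer.
    return summarize(question, rows)

# Stand-ins for the LLM and an ArangoDB cursor:
fake_llm = lambda q: "FOR d IN docs RETURN d.name"
fake_db = lambda aql: ["alice", "bob"]
fake_answer = lambda q, rows: f"Found {len(rows)} results: {', '.join(rows)}"

answer = graph_qa("Who is in the graph?", fake_llm, fake_db, fake_answer)
```

The real chain wires the same three steps to a chat model and an `ArangoGraph` connection instead of these lambdas.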