Compare commits

..

3 Commits

Author SHA1 Message Date

Chester Curme ad6715b6bb update property docstrings 2024-12-03 10:43:38 -05:00

ccurme 027f49c661 docs[patch]: reorganize integration guides (#28457) 2024-12-02 17:55:57 -05:00

    Proposal is:

    - Each type of component (chat models, embeddings, etc) has a dedicated guide.
    - This guide contains detail on both implementation and testing via langchain-tests.
    - We delete the monolithic standard-tests guide.

Harrison Chase ce6e4bb645 improve integration docs 2024-11-29 12:40:35 -05:00
461 changed files with 10674 additions and 35517 deletions

View File

@@ -22,7 +22,7 @@ body:
 if there's another way to solve your problem:
 [LangChain documentation with the integrated search](https://python.langchain.com/docs/get_started/introduction),
-[API Reference](https://python.langchain.com/api_reference/),
+[API Reference](https://api.python.langchain.com/en/stable/),
 [GitHub search](https://github.com/langchain-ai/langchain),
 [LangChain Github Discussions](https://github.com/langchain-ai/langchain/discussions),
 [LangChain Github Issues](https://github.com/langchain-ai/langchain/issues?q=is%3Aissue),

View File

@@ -16,7 +16,7 @@ body:
 if there's another way to solve your problem:
 [LangChain documentation with the integrated search](https://python.langchain.com/docs/get_started/introduction),
-[API Reference](https://python.langchain.com/api_reference/),
+[API Reference](https://api.python.langchain.com/en/stable/),
 [GitHub search](https://github.com/langchain-ai/langchain),
 [LangChain Github Discussions](https://github.com/langchain-ai/langchain/discussions),
 [LangChain Github Issues](https://github.com/langchain-ai/langchain/issues?q=is%3Aissue),

View File

@@ -21,7 +21,7 @@ body:
 place to ask your question:
 [LangChain documentation with the integrated search](https://python.langchain.com/docs/get_started/introduction),
-[API Reference](https://python.langchain.com/api_reference/),
+[API Reference](https://api.python.langchain.com/en/stable/),
 [GitHub search](https://github.com/langchain-ai/langchain),
 [LangChain Github Discussions](https://github.com/langchain-ai/langchain/discussions),
 [LangChain Github Issues](https://github.com/langchain-ai/langchain/issues?q=is%3Aissue),

View File

@@ -272,9 +272,6 @@ if __name__ == "__main__":
 # TODO: update to include all packages that rely on standard-tests (all partner packages)
 # note: won't run on external repo partners
 dirs_to_run["lint"].add("libs/standard-tests")
-dirs_to_run["test"].add("libs/standard-tests")
-dirs_to_run["lint"].add("libs/cli")
-dirs_to_run["test"].add("libs/cli")
 dirs_to_run["test"].add("libs/partners/mistralai")
 dirs_to_run["test"].add("libs/partners/openai")
 dirs_to_run["test"].add("libs/partners/anthropic")
@@ -282,9 +279,8 @@ if __name__ == "__main__":
 dirs_to_run["test"].add("libs/partners/groq")
 elif file.startswith("libs/cli"):
 dirs_to_run["lint"].add("libs/cli")
-dirs_to_run["test"].add("libs/cli")
 # todo: add cli makefile
 pass
 elif file.startswith("libs/partners"):
 partner_dir = file.split("/")[2]
 if os.path.isdir(f"libs/partners/{partner_dir}") and [
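
The check_diff hunks above route changed paths to CI jobs through a mapping of job names to directory sets. A minimal sketch of that idea (a hypothetical simplification; the real .github/scripts/check_diff.py handles many more path patterns and outputs):

    import json
    import sys
    from collections import defaultdict

    # Map each changed file path to the directories each CI job should cover.
    dirs_to_run: dict = defaultdict(set)

    for file in sys.argv[1:]:  # changed file paths from the PR
        if file.startswith("libs/standard-tests"):
            dirs_to_run["lint"].add("libs/standard-tests")
            dirs_to_run["test"].add("libs/standard-tests")
        elif file.startswith("libs/cli"):
            dirs_to_run["lint"].add("libs/cli")
            dirs_to_run["test"].add("libs/cli")
        elif file.startswith("libs/partners"):
            partner_dir = file.split("/")[2]
            dirs_to_run["test"].add(f"libs/partners/{partner_dir}")

    print(json.dumps({job: sorted(d) for job, d in dirs_to_run.items()}))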

View File

@@ -13,7 +13,7 @@ on:
 description: "Python version to use"
 env:
-POETRY_VERSION: "1.8.4"
+POETRY_VERSION: "1.7.1"
 jobs:
 build:
@@ -21,7 +21,6 @@ jobs:
 run:
 working-directory: ${{ inputs.working-directory }}
 runs-on: ubuntu-latest
-timeout-minutes: 20
 name: "poetry run pytest -m compile tests/integration_tests #${{ inputs.python-version }}"
 steps:
 - uses: actions/checkout@v4

View File

@@ -12,7 +12,7 @@ on:
 description: "Python version to use"
 env:
-POETRY_VERSION: "1.8.4"
+POETRY_VERSION: "1.7.1"
 jobs:
 build:

View File

@@ -13,7 +13,7 @@ on:
 description: "Python version to use"
 env:
-POETRY_VERSION: "1.8.4"
+POETRY_VERSION: "1.7.1"
 WORKDIR: ${{ inputs.working-directory == '' && '.' || inputs.working-directory }}
 # This env var allows us to get inline annotations when ruff has complaints.
@@ -23,7 +23,6 @@ jobs:
 build:
 name: "make lint #${{ inputs.python-version }}"
 runs-on: ubuntu-latest
-timeout-minutes: 20
 steps:
 - uses: actions/checkout@v4

View File

@@ -21,7 +21,7 @@ on:
 env:
 PYTHON_VERSION: "3.11"
-POETRY_VERSION: "1.8.4"
+POETRY_VERSION: "1.7.1"
 jobs:
 build:
@@ -167,7 +167,6 @@ jobs:
 - release-notes
 - test-pypi-publish
 runs-on: ubuntu-latest
-timeout-minutes: 20
 steps:
 - uses: actions/checkout@v4
@@ -192,12 +191,7 @@ jobs:
 poetry-version: ${{ env.POETRY_VERSION }}
 working-directory: ${{ inputs.working-directory }}
-- uses: actions/download-artifact@v4
-with:
-name: dist
-path: ${{ inputs.working-directory }}/dist/
-- name: Import dist package
+- name: Import published package
 shell: bash
 working-directory: ${{ inputs.working-directory }}
 env:
@@ -213,7 +207,15 @@ jobs:
 # - attempt install again after 5 seconds if it fails because there is
 # sometimes a delay in availability on test pypi
 run: |
-poetry run pip install dist/*.whl
+poetry run pip install \
+--extra-index-url https://test.pypi.org/simple/ \
+"$PKG_NAME==$VERSION" || \
+( \
+sleep 15 && \
+poetry run pip install \
+--extra-index-url https://test.pypi.org/simple/ \
+"$PKG_NAME==$VERSION" \
+)
 # Replace all dashes in the package name with underscores,
 # since that's how Python imports packages with dashes in the name.
@@ -222,10 +224,10 @@ jobs:
 poetry run python -c "import $IMPORT_NAME; print(dir($IMPORT_NAME))"
 - name: Import test dependencies
-run: poetry install --with test --no-root
+run: poetry install --with test
 working-directory: ${{ inputs.working-directory }}
-# Overwrite the local version of the package with the built version
+# Overwrite the local version of the package with the test PyPI version.
 - name: Import published package (again)
 working-directory: ${{ inputs.working-directory }}
 shell: bash
@@ -233,7 +235,9 @@ jobs:
 PKG_NAME: ${{ needs.build.outputs.pkg-name }}
 VERSION: ${{ needs.build.outputs.version }}
 run: |
-poetry run pip install dist/*.whl
+poetry run pip install \
+--extra-index-url https://test.pypi.org/simple/ \
+"$PKG_NAME==$VERSION"
 - name: Run unit tests
 run: make tests
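
The retry logic in the release workflow above compensates for Test PyPI's publish-to-availability lag. A rough Python equivalent of that shell snippet (illustrative only; the workflow itself stays in shell):

    import subprocess
    import sys
    import time

    def install_from_test_pypi(pkg_name: str, version: str, delay: int = 15) -> None:
        """Install a just-published package, retrying once because Test PyPI
        can lag a few seconds before a new upload becomes installable."""
        cmd = [
            sys.executable, "-m", "pip", "install",
            "--extra-index-url", "https://test.pypi.org/simple/",
            f"{pkg_name}=={version}",
        ]
        if subprocess.run(cmd).returncode != 0:
            time.sleep(delay)
            subprocess.run(cmd, check=True)

    # Dashes become underscores on import, as the workflow comment notes:
    import_name = "langchain-openai".replace("-", "_")  # -> "langchain_openai"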

View File

@@ -13,7 +13,7 @@ on:
 description: "Python version to use"
 env:
-POETRY_VERSION: "1.8.4"
+POETRY_VERSION: "1.7.1"
 jobs:
 build:
@@ -21,7 +21,6 @@ jobs:
 run:
 working-directory: ${{ inputs.working-directory }}
 runs-on: ubuntu-latest
-timeout-minutes: 20
 name: "make test #${{ inputs.python-version }}"
 steps:
 - uses: actions/checkout@v4

View File

@@ -9,12 +9,11 @@ on:
 description: "Python version to use"
 env:
-POETRY_VERSION: "1.8.4"
+POETRY_VERSION: "1.7.1"
 jobs:
 build:
 runs-on: ubuntu-latest
-timeout-minutes: 20
 name: "check doc imports #${{ inputs.python-version }}"
 steps:
 - uses: actions/checkout@v4

View File

@@ -18,7 +18,7 @@ on:
 description: "Pydantic version to test."
 env:
-POETRY_VERSION: "1.8.4"
+POETRY_VERSION: "1.7.1"
 jobs:
 build:
@@ -26,7 +26,6 @@ jobs:
 run:
 working-directory: ${{ inputs.working-directory }}
 runs-on: ubuntu-latest
-timeout-minutes: 20
 name: "make test # pydantic: ~=${{ inputs.pydantic-version }}, python: ${{ inputs.python-version }}, "
 steps:
 - uses: actions/checkout@v4

View File

@@ -14,7 +14,7 @@ on:
 description: "Release from a non-master branch (danger!)"
 env:
-POETRY_VERSION: "1.8.4"
+POETRY_VERSION: "1.7.1"
 PYTHON_VERSION: "3.10"
 jobs:

View File

@@ -5,7 +5,7 @@ on:
 schedule:
 - cron: '0 13 * * *'
 env:
-POETRY_VERSION: "1.8.4"
+POETRY_VERSION: "1.8.1"
 PYTHON_VERSION: "3.11"
 jobs:

View File

@@ -5,7 +5,6 @@ on:
 push:
 branches: [master]
 pull_request:
-merge_group:
 # If another push to the same PR or branch happens while this workflow is still running,
 # cancel the earlier run in favor of the next run.
@@ -18,7 +17,7 @@ concurrency:
 cancel-in-progress: true
 env:
-POETRY_VERSION: "1.8.4"
+POETRY_VERSION: "1.7.1"
 jobs:
 build:
@@ -120,7 +119,6 @@ jobs:
 job-configs: ${{ fromJson(needs.build.outputs.extended-tests) }}
 fail-fast: false
 runs-on: ubuntu-latest
-timeout-minutes: 20
 defaults:
 run:
 working-directory: ${{ matrix.job-configs.working-directory }}

View File

@@ -15,7 +15,7 @@ on:
 - cron: '0 13 * * *'
 env:
-POETRY_VERSION: "1.8.4"
+POETRY_VERSION: "1.7.1"
 jobs:
 build:

View File

@@ -2,60 +2,32 @@ name: Scheduled tests
 on:
 workflow_dispatch: # Allows to trigger the workflow manually in GitHub UI
-inputs:
-working-directory-force:
-type: string
-description: "From which folder this pipeline executes - defaults to all in matrix - example value: libs/partners/anthropic"
-python-version-force:
-type: string
-description: "Python version to use - defaults to 3.9 and 3.11 in matrix - example value: 3.9"
 schedule:
 - cron: '0 13 * * *'
 env:
-POETRY_VERSION: "1.8.4"
-DEFAULT_LIBS: '["libs/partners/openai", "libs/partners/anthropic", "libs/partners/fireworks", "libs/partners/groq", "libs/partners/mistralai", "libs/partners/google-vertexai", "libs/partners/google-genai", "libs/partners/aws"]'
+POETRY_VERSION: "1.7.1"
 jobs:
-compute-matrix:
-if: github.repository_owner == 'langchain-ai' || github.event_name != 'schedule'
-runs-on: ubuntu-latest
-name: Compute matrix
-outputs:
-matrix: ${{ steps.set-matrix.outputs.matrix }}
-steps:
-- name: Set matrix
-id: set-matrix
-env:
-DEFAULT_LIBS: ${{ env.DEFAULT_LIBS }}
-WORKING_DIRECTORY_FORCE: ${{ github.event.inputs.working-directory-force || '' }}
-PYTHON_VERSION_FORCE: ${{ github.event.inputs.python-version-force || '' }}
-run: |
-# echo "matrix=..." where matrix is a json formatted str with keys python-version and working-directory
-# python-version should default to 3.9 and 3.11, but is overridden to [PYTHON_VERSION_FORCE] if set
-# working-directory should default to DEFAULT_LIBS, but is overridden to [WORKING_DIRECTORY_FORCE] if set
-python_version='["3.9", "3.11"]'
-working_directory="$DEFAULT_LIBS"
-if [ -n "$PYTHON_VERSION_FORCE" ]; then
-python_version="[\"$PYTHON_VERSION_FORCE\"]"
-fi
-if [ -n "$WORKING_DIRECTORY_FORCE" ]; then
-working_directory="[\"$WORKING_DIRECTORY_FORCE\"]"
-fi
-matrix="{\"python-version\": $python_version, \"working-directory\": $working_directory}"
-echo $matrix
-echo "matrix=$matrix" >> $GITHUB_OUTPUT
 build:
 if: github.repository_owner == 'langchain-ai' || github.event_name != 'schedule'
 name: Python ${{ matrix.python-version }} - ${{ matrix.working-directory }}
 runs-on: ubuntu-latest
-needs: [compute-matrix]
-timeout-minutes: 20
 strategy:
 fail-fast: false
 matrix:
-python-version: ${{ fromJSON(needs.compute-matrix.outputs.matrix).python-version }}
-working-directory: ${{ fromJSON(needs.compute-matrix.outputs.matrix).working-directory }}
+python-version:
+- "3.9"
+- "3.11"
+working-directory:
+- "libs/partners/openai"
+- "libs/partners/anthropic"
+- "libs/partners/fireworks"
+- "libs/partners/groq"
+- "libs/partners/mistralai"
+- "libs/partners/google-vertexai"
+- "libs/partners/google-genai"
+- "libs/partners/aws"
 steps:
 - uses: actions/checkout@v4
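
The compute-matrix job removed above builds the strategy matrix dynamically so a manual run can target one library or Python version. A sketch of the same decision logic in Python (environment variable names assumed to mirror the step's env block above):

    import json
    import os

    # Defaults mirror the DEFAULT_LIBS env var above (abridged here).
    default_libs = ["libs/partners/openai", "libs/partners/anthropic"]

    python_version = ["3.9", "3.11"]
    working_directory = default_libs
    if os.environ.get("PYTHON_VERSION_FORCE"):
        python_version = [os.environ["PYTHON_VERSION_FORCE"]]
    if os.environ.get("WORKING_DIRECTORY_FORCE"):
        working_directory = [os.environ["WORKING_DIRECTORY_FORCE"]]

    matrix = {"python-version": python_version, "working-directory": working_directory}
    print(f"matrix={json.dumps(matrix)}")  # in CI this line goes to $GITHUB_OUTPUT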

View File

@@ -69,11 +69,7 @@ lint lint_package lint_tests:
 poetry run ruff check docs cookbook
 poetry run ruff format docs cookbook cookbook --diff
 poetry run ruff check --select I docs cookbook
-git --no-pager grep 'from langchain import' docs cookbook | grep -vE 'from langchain import (hub)' && echo "Error: no importing langchain from root in docs, except for hub" && exit 1 || exit 0
-git --no-pager grep 'api.python.langchain.com' -- docs/docs ':!docs/docs/additional_resources/arxiv_references.mdx' ':!docs/docs/integrations/document_loaders/sitemap.ipynb' || exit 0 && \
-echo "Error: you should link python.langchain.com/api_reference, not api.python.langchain.com in the docs" && \
-exit 1
+git grep 'from langchain import' docs/docs cookbook | grep -vE 'from langchain import (hub)' && exit 1 || exit 0
 ## format: Format the project files.
 format format_diff:
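
The grep rule removed above enforces a link policy: docs must point at python.langchain.com/api_reference rather than the legacy api.python.langchain.com host. A rough Python rendering of that check (the Makefile uses git grep; the file exclusions are copied from the rule above):

    import pathlib
    import sys

    ALLOW = {
        "docs/docs/additional_resources/arxiv_references.mdx",
        "docs/docs/integrations/document_loaders/sitemap.ipynb",
    }

    bad = [
        str(p) for p in pathlib.Path("docs/docs").rglob("*")
        if p.is_file() and str(p) not in ALLOW
        and "api.python.langchain.com" in p.read_text(errors="ignore")
    ]
    if bad:
        print("Error: link python.langchain.com/api_reference, not api.python.langchain.com")
        sys.exit(1)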

View File

@@ -38,21 +38,18 @@ conda install langchain -c conda-forge
 For these applications, LangChain simplifies the entire application lifecycle:
-- **Open-source libraries**: Build your applications using LangChain's open-source
-[components](https://python.langchain.com/docs/concepts/) and
-[third-party integrations](https://python.langchain.com/docs/integrations/providers/).
+- **Open-source libraries**: Build your applications using LangChain's open-source [building blocks](https://python.langchain.com/docs/concepts/#langchain-expression-language-lcel), [components](https://python.langchain.com/docs/concepts/), and [third-party integrations](https://python.langchain.com/docs/integrations/providers/).
 Use [LangGraph](https://langchain-ai.github.io/langgraph/) to build stateful agents with first-class streaming and human-in-the-loop support.
 - **Productionization**: Inspect, monitor, and evaluate your apps with [LangSmith](https://docs.smith.langchain.com/) so that you can constantly optimize and deploy with confidence.
-- **Deployment**: Turn your LangGraph applications into production-ready APIs and Assistants with [LangGraph Platform](https://langchain-ai.github.io/langgraph/cloud/).
+- **Deployment**: Turn your LangGraph applications into production-ready APIs and Assistants with [LangGraph Cloud](https://langchain-ai.github.io/langgraph/cloud/).
 ### Open-source libraries
-- **`langchain-core`**: Base abstractions.
-- **Integration packages** (e.g. **`langchain-openai`**, **`langchain-anthropic`**, etc.): Important integrations have been split into lightweight packages that are co-maintained by the LangChain team and the integration developers.
+- **`langchain-core`**: Base abstractions and LangChain Expression Language.
+- **`langchain-community`**: Third party integrations.
+- Some integrations have been further split into **partner packages** that only rely on **`langchain-core`**. Examples include **`langchain_openai`** and **`langchain_anthropic`**.
 - **`langchain`**: Chains, agents, and retrieval strategies that make up an application's cognitive architecture.
-- **`langchain-community`**: Third-party integrations that are community maintained.
-- **[LangGraph](https://langchain-ai.github.io/langgraph)**: Build robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph. Integrates smoothly with LangChain, but can be used without it. To learn more about LangGraph, check out our first LangChain Academy course, *Introduction to LangGraph*, available [here](https://academy.langchain.com/courses/intro-to-langgraph).
+- **[`LangGraph`](https://langchain-ai.github.io/langgraph/)**: A library for building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph. Integrates smoothly with LangChain, but can be used without it. To learn more about LangGraph, check out our first LangChain Academy course, *Introduction to LangGraph*, available [here](https://academy.langchain.com/courses/intro-to-langgraph).
 ### Productionization:
@@ -60,7 +57,7 @@ For these applications, LangChain simplifies the entire application lifecycle:
 ### Deployment:
-- **[LangGraph Platform](https://langchain-ai.github.io/langgraph/cloud/)**: Turn your LangGraph applications into production-ready APIs and Assistants.
+- **[LangGraph Cloud](https://langchain-ai.github.io/langgraph/cloud/)**: Turn your LangGraph applications into production-ready APIs and Assistants.
 ![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](docs/static/svg/langchain_stack_112024.svg#gh-light-mode-only "LangChain Architecture Overview")
 ![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](docs/static/svg/langchain_stack_112024_dark.svg#gh-dark-mode-only "LangChain Architecture Overview")
@@ -75,7 +72,7 @@ For these applications, LangChain simplifies the entire application lifecycle:
 **🧱 Extracting structured output**
 - [Documentation](https://python.langchain.com/docs/tutorials/extraction/)
-- End-to-end Example: [LangChain Extract](https://github.com/langchain-ai/langchain-extract/)
+- End-to-end Example: [SQL Llama2 Template](https://github.com/langchain-ai/langchain-extract/)
 **🤖 Chatbots**
@@ -88,12 +85,19 @@ And much more! Head to the [Tutorials](https://python.langchain.com/docs/tutoria
 The main value props of the LangChain libraries are:
-1. **Components**: composable building blocks, tools and integrations for working with language models. Components are modular and easy-to-use, whether you are using the rest of the LangChain framework or not.
-2. **Easy orchestration with LangGraph**: [LangGraph](https://langchain-ai.github.io/langgraph/),
-built on top of `langchain-core`, has built-in support for [messages](https://python.langchain.com/docs/concepts/messages/), [tools](https://python.langchain.com/docs/concepts/tools/),
-and other LangChain abstractions. This makes it easy to combine components into
-production-ready applications with persistence, streaming, and other key features.
-Check out the LangChain [tutorials page](https://python.langchain.com/docs/tutorials/#orchestration) for examples.
+1. **Components**: composable building blocks, tools and integrations for working with language models. Components are modular and easy-to-use, whether you are using the rest of the LangChain framework or not
+2. **Off-the-shelf chains**: built-in assemblages of components for accomplishing higher-level tasks
+Off-the-shelf chains make it easy to get started. Components make it easy to customize existing chains and build new ones.
+## LangChain Expression Language (LCEL)
+LCEL is a key part of LangChain, allowing you to build and organize chains of processes in a straightforward, declarative manner. It was designed to support taking prototypes directly into production without needing to alter any code. This means you can use LCEL to set up everything from basic "prompt + LLM" setups to intricate, multi-step workflows.
+- **[Overview](https://python.langchain.com/docs/concepts/#langchain-expression-language-lcel)**: LCEL and its benefits
+- **[Interface](https://python.langchain.com/docs/concepts/#runnable-interface)**: The standard Runnable interface for LCEL objects
+- **[Primitives](https://python.langchain.com/docs/how_to/#langchain-expression-language-lcel)**: More on the primitives LCEL includes
+- **[Cheatsheet](https://python.langchain.com/docs/how_to/lcel_cheatsheet/)**: Quick overview of the most common usage patterns
 ## Components
@@ -101,19 +105,15 @@ Components fall into the following **modules**:
 **📃 Model I/O**
-This includes [prompt management](https://python.langchain.com/docs/concepts/prompt_templates/)
-and a generic interface for [chat models](https://python.langchain.com/docs/concepts/chat_models/), including a consistent interface for [tool-calling](https://python.langchain.com/docs/concepts/tool_calling/) and [structured output](https://python.langchain.com/docs/concepts/structured_outputs/) across model providers.
+This includes [prompt management](https://python.langchain.com/docs/concepts/#prompt-templates), [prompt optimization](https://python.langchain.com/docs/concepts/#example-selectors), a generic interface for [chat models](https://python.langchain.com/docs/concepts/#chat-models) and [LLMs](https://python.langchain.com/docs/concepts/#llms), and common utilities for working with [model outputs](https://python.langchain.com/docs/concepts/#output-parsers).
 **📚 Retrieval**
-Retrieval Augmented Generation involves [loading data](https://python.langchain.com/docs/concepts/document_loaders/) from a variety of sources, [preparing it](https://python.langchain.com/docs/concepts/text_splitters/), then [searching over (a.k.a. retrieving from)](https://python.langchain.com/docs/concepts/retrievers/) it for use in the generation step.
+Retrieval Augmented Generation involves [loading data](https://python.langchain.com/docs/concepts/#document-loaders) from a variety of sources, [preparing it](https://python.langchain.com/docs/concepts/#text-splitters), then [searching over (a.k.a. retrieving from)](https://python.langchain.com/docs/concepts/#retrievers) it for use in the generation step.
 **🤖 Agents**
-Agents allow an LLM autonomy over how a task is accomplished. Agents make decisions about which Actions to take, then take that Action, observe the result, and repeat until the task is complete. [LangGraph](https://langchain-ai.github.io/langgraph/) makes it easy to use
-LangChain components to build both [custom](https://langchain-ai.github.io/langgraph/tutorials/)
-and [built-in](https://langchain-ai.github.io/langgraph/how-tos/create-react-agent/)
-LLM agents.
+Agents allow an LLM autonomy over how a task is accomplished. Agents make decisions about which Actions to take, then take that Action, observe the result, and repeat until the task is complete. LangChain provides a [standard interface for agents](https://python.langchain.com/docs/concepts/#agents), along with [LangGraph](https://github.com/langchain-ai/langgraph) for building custom agents.
 ## 📖 Documentation
@@ -123,7 +123,7 @@ Please see [here](https://python.langchain.com) for full documentation, which in
 - [Tutorials](https://python.langchain.com/docs/tutorials/): If you're looking to build something specific or are more of a hands-on learner, check out our tutorials. This is the best place to get started.
 - [How-to guides](https://python.langchain.com/docs/how_to/): Answers to “How do I….?” type questions. These guides are goal-oriented and concrete; they're meant to help you complete a specific task.
 - [Conceptual guide](https://python.langchain.com/docs/concepts/): Conceptual explanations of the key parts of the framework.
-- [API Reference](https://python.langchain.com/api_reference/): Thorough documentation of every class and method.
+- [API Reference](https://api.python.langchain.com): Thorough documentation of every class and method.
 ## 🌐 Ecosystem
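
Both README versions above center on composing components; the LCEL section on the added side describes the "prompt + LLM" shape declaratively. A minimal LCEL chain of that shape (model name illustrative; assumes langchain-openai is installed and OPENAI_API_KEY is set):

    from langchain_core.output_parsers import StrOutputParser
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_openai import ChatOpenAI

    # prompt | model | parser: each piece is a Runnable, composed with "|".
    prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
    chain = prompt | ChatOpenAI(model="gpt-4o-mini") | StrOutputParser()

    print(chain.invoke({"topic": "bears"}))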

View File

@@ -1,30 +1,5 @@
 # Security Policy
-LangChain has a large ecosystem of integrations with various external resources like local and remote file systems, APIs and databases. These integrations allow developers to create versatile applications that combine the power of LLMs with the ability to access, interact with and manipulate external resources.
-## Best practices
-When building such applications developers should remember to follow good security practices:
-* [**Limit Permissions**](https://en.wikipedia.org/wiki/Principle_of_least_privilege): Scope permissions specifically to the application's need. Granting broad or excessive permissions can introduce significant security vulnerabilities. To avoid such vulnerabilities, consider using read-only credentials, disallowing access to sensitive resources, using sandboxing techniques (such as running inside a container), specifying proxy configurations to control external requests, etc. as appropriate for your application.
-* **Anticipate Potential Misuse**: Just as humans can err, so can Large Language Models (LLMs). Always assume that any system access or credentials may be used in any way allowed by the permissions they are assigned. For example, if a pair of database credentials allows deleting data, it's safest to assume that any LLM able to use those credentials may in fact delete data.
-* [**Defense in Depth**](https://en.wikipedia.org/wiki/Defense_in_depth_(computing)): No security technique is perfect. Fine-tuning and good chain design can reduce, but not eliminate, the odds that a Large Language Model (LLM) may make a mistake. It's best to combine multiple layered security approaches rather than relying on any single layer of defense to ensure security. For example: use both read-only permissions and sandboxing to ensure that LLMs are only able to access data that is explicitly meant for them to use.
-Risks of not doing so include, but are not limited to:
-* Data corruption or loss.
-* Unauthorized access to confidential information.
-* Compromised performance or availability of critical resources.
-Example scenarios with mitigation strategies:
-* A user may ask an agent with access to the file system to delete files that should not be deleted or read the content of files that contain sensitive information. To mitigate, limit the agent to only use a specific directory and only allow it to read or write files that are safe to read or write. Consider further sandboxing the agent by running it in a container.
-* A user may ask an agent with write access to an external API to write malicious data to the API, or delete data from that API. To mitigate, give the agent read-only API keys, or limit it to only use endpoints that are already resistant to such misuse.
-* A user may ask an agent with access to a database to drop a table or mutate the schema. To mitigate, scope the credentials to only the tables that the agent needs to access and consider issuing READ-ONLY credentials.
-If you're building applications that access external resources like file systems, APIs
-or databases, consider speaking with your company's security team to determine how to best
-design and secure your applications.
 ## Reporting OSS Vulnerabilities
 LangChain is partnered with [huntr by Protect AI](https://huntr.com/) to provide
@@ -39,7 +14,7 @@ Before reporting a vulnerability, please review:
 1) In-Scope Targets and Out-of-Scope Targets below.
 2) The [langchain-ai/langchain](https://python.langchain.com/docs/contributing/repo_structure) monorepo structure.
-3) The [Best practicies](#best-practices) above to
+3) LangChain [security guidelines](https://python.langchain.com/docs/security) to
 understand what we consider to be a security vulnerability vs. developer
 responsibility.
@@ -58,13 +33,13 @@ The following packages and repositories are eligible for bug bounties:
 All out of scope targets defined by huntr as well as:
 - **langchain-experimental**: This repository is for experimental code and is not
-eligible for bug bounties (see [package warning](https://pypi.org/project/langchain-experimental/)), bug reports to it will be marked as interesting or waste of
+eligible for bug bounties, bug reports to it will be marked as interesting or waste of
 time and published with no bounty attached.
 - **tools**: Tools in either langchain or langchain-community are not eligible for bug
 bounties. This includes the following directories
-- libs/langchain/langchain/tools
-- libs/community/langchain_community/tools
-- Please review the [best practices](#best-practices)
+- langchain/tools
+- langchain-community/tools
+- Please review our [security guidelines](https://python.langchain.com/docs/security)
 for more details, but generally tools interact with the real world. Developers are
 expected to understand the security implications of their code and are responsible
 for the security of their tools.
@@ -72,7 +47,7 @@ All out of scope targets defined by huntr as well as:
 case basis, but likely will not be eligible for a bounty as the code is already
 documented with guidelines for developers that should be followed for making their
 application secure.
-- Any LangSmith related repositories or APIs (see [Reporting LangSmith Vulnerabilities](#reporting-langsmith-vulnerabilities)).
+- Any LangSmith related repositories or APIs see below.
 ## Reporting LangSmith Vulnerabilities
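
The best-practices text removed above keeps returning to one idea: enforce limits outside the LLM. A minimal sketch of the read-only database scenario (SQLAlchemy; the connection string and role name are illustrative):

    from sqlalchemy import create_engine, text

    # "report_ro" is a hypothetical role granted SELECT only, so a
    # prompt-injected "DROP TABLE users" fails at the database layer,
    # not in application code.
    engine = create_engine("postgresql://report_ro:secret@db.internal/app")

    def run_readonly_query(sql: str) -> list:
        with engine.connect() as conn:
            return list(conn.execute(text(sql)))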

View File

@@ -13,21 +13,28 @@ OUTPUT_NEW_DOCS_DIR = $(OUTPUT_NEW_DIR)/docs
 PYTHON = .venv/bin/python
+PARTNER_DEPS_LIST := $(shell find ../libs/partners -mindepth 1 -maxdepth 1 -type d -exec sh -c ' \
+for dir; do \
+if find "$$dir" -maxdepth 1 -type f \( -name "pyproject.toml" -o -name "setup.py" \) | grep -q .; then \
+echo "$$dir"; \
+fi \
+done' sh {} + | grep -vE "airbyte|ibm|databricks" | tr '\n' ' ')
 PORT ?= 3001
 clean:
 rm -rf build
 install-vercel-deps:
-yum -y -q update
-yum -y -q install gcc bzip2-devel libffi-devel zlib-devel wget tar gzip rsync -y
+yum -y update
+yum install gcc bzip2-devel libffi-devel zlib-devel wget tar gzip rsync -y
 install-py-deps:
 python3 -m venv .venv
-$(PYTHON) -m pip install -q --upgrade pip
-$(PYTHON) -m pip install -q --upgrade uv
-$(PYTHON) -m uv pip install -q --pre -r vercel_requirements.txt
-$(PYTHON) -m uv pip install -q --pre $$($(PYTHON) scripts/partner_deps_list.py)
+$(PYTHON) -m pip install --upgrade pip
+$(PYTHON) -m pip install --upgrade uv
+$(PYTHON) -m uv pip install --pre -r vercel_requirements.txt
+$(PYTHON) -m uv pip install --pre --editable $(PARTNER_DEPS_LIST)
 generate-files:
 mkdir -p $(INTERMEDIATE_DIR)
@@ -40,7 +47,6 @@ generate-files:
 $(PYTHON) scripts/partner_pkg_table.py $(INTERMEDIATE_DIR)
 curl https://raw.githubusercontent.com/langchain-ai/langserve/main/README.md | sed 's/<=/\&lt;=/g' > $(INTERMEDIATE_DIR)/langserve.md
-cp ../SECURITY.md $(INTERMEDIATE_DIR)/security.md
 $(PYTHON) scripts/resolve_local_links.py $(INTERMEDIATE_DIR)/langserve.md https://github.com/langchain-ai/langserve/tree/main/
 copy-infra:
@@ -53,7 +59,6 @@ copy-infra:
 cp package.json $(OUTPUT_NEW_DIR)
 cp sidebars.js $(OUTPUT_NEW_DIR)
 cp -r static $(OUTPUT_NEW_DIR)
-cp -r ../libs/cli/langchain_cli/integration_template $(OUTPUT_NEW_DIR)/src/theme
 cp yarn.lock $(OUTPUT_NEW_DIR)
 render:
@@ -75,7 +80,6 @@ build: install-py-deps generate-files copy-infra render md-sync append-related
 vercel-build: install-vercel-deps build generate-references
 rm -rf docs
 mv $(OUTPUT_NEW_DOCS_DIR) docs
-cp -r ../libs/cli/langchain_cli/integration_template src/theme
 rm -rf build
 mkdir static/api_reference
 git clone --depth=1 https://github.com/langchain-ai/langchain-api-docs-html.git
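
The PARTNER_DEPS_LIST shell snippet restored above enumerates partner package directories that actually contain a Python project file, skipping externally hosted packages. The newer Makefile delegates the same job to scripts/partner_deps_list.py; a plausible Python version of that logic (an assumption, not the actual script):

    from pathlib import Path

    SKIP = ("airbyte", "ibm", "databricks")

    def partner_deps(root: str = "../libs/partners") -> list:
        dirs = []
        for d in sorted(Path(root).iterdir()):
            if not d.is_dir() or any(s in d.name for s in SKIP):
                continue
            if (d / "pyproject.toml").exists() or (d / "setup.py").exists():
                dirs.append(str(d))
        return dirs

    if __name__ == "__main__":
        print(" ".join(partner_deps()))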

View File

@@ -80,8 +80,6 @@
 html {
 --pst-font-family-base: 'Inter';
 --pst-font-family-heading: 'Inter Tight', sans-serif;
---pst-icon-versionmodified-deprecated: var(--pst-icon-exclamation-triangle);
 }
 /*******************************************************************************
@@ -94,7 +92,7 @@ html {
 * https://sass-lang.com/documentation/interpolation
 */
 /* Defaults to light mode if data-theme is not set */
-html:not([data-theme]), html[data-theme=light] {
+html:not([data-theme]) {
 --pst-color-primary: #287977;
 --pst-color-primary-bg: #80D6D3;
 --pst-color-secondary: #6F3AED;
@@ -124,8 +122,58 @@ html:not([data-theme]), html[data-theme=light] {
 --pst-color-on-background: #F4F9F8;
 --pst-color-surface: #F4F9F8;
 --pst-color-on-surface: #222832;
---pst-color-deprecated: #f47d2e;
---pst-color-deprecated-bg: #fff3e8;
 }
+html:not([data-theme]) {
+--pst-color-link: var(--pst-color-primary);
+--pst-color-link-hover: var(--pst-color-secondary);
+}
+html:not([data-theme]) .only-dark,
+html:not([data-theme]) .only-dark ~ figcaption {
+display: none !important;
+}
+/* NOTE: @each {...} is like a for-loop
+* https://sass-lang.com/documentation/at-rules/control/each
+*/
+html[data-theme=light] {
+--pst-color-primary: #287977;
+--pst-color-primary-bg: #80D6D3;
+--pst-color-secondary: #6F3AED;
+--pst-color-secondary-bg: #DAD6FE;
+--pst-color-accent: #c132af;
+--pst-color-accent-bg: #f8dff5;
+--pst-color-info: #276be9;
+--pst-color-info-bg: #dce7fc;
+--pst-color-warning: #f66a0a;
+--pst-color-warning-bg: #f8e3d0;
+--pst-color-success: #00843f;
+--pst-color-success-bg: #d6ece1;
+--pst-color-attention: var(--pst-color-warning);
+--pst-color-attention-bg: var(--pst-color-warning-bg);
+--pst-color-danger: #d72d47;
+--pst-color-danger-bg: #f9e1e4;
+--pst-color-text-base: #222832;
+--pst-color-text-muted: #48566b;
+--pst-color-heading-color: #ffffff;
+--pst-color-shadow: rgba(0, 0, 0, 0.1);
+--pst-color-border: #d1d5da;
+--pst-color-border-muted: rgba(23, 23, 26, 0.2);
+--pst-color-inline-code: #912583;
+--pst-color-inline-code-links: #246161;
+--pst-color-target: #f3cf95;
+--pst-color-background: #ffffff;
+--pst-color-on-background: #F4F9F8;
+--pst-color-surface: #F4F9F8;
+--pst-color-on-surface: #222832;
+color-scheme: light;
+}
+html[data-theme=light] {
+--pst-color-link: var(--pst-color-primary);
+--pst-color-link-hover: var(--pst-color-secondary);
+}
+html[data-theme=light] .only-dark,
+html[data-theme=light] .only-dark ~ figcaption {
+display: none !important;
+}
 html[data-theme=dark] {
@@ -158,8 +206,6 @@ html[data-theme=dark] {
 --pst-color-on-background: #222832;
 --pst-color-surface: #29313d;
 --pst-color-on-surface: #f3f4f5;
---pst-color-deprecated: #b46f3e;
---pst-color-deprecated-bg: #341906;
 /* Adjust images in dark mode (unless they have class .only-dark or
 * .dark-light, in which case assume they're already optimized for dark
 * mode).
@@ -170,30 +216,6 @@ html[data-theme=dark] {
 */
 color-scheme: dark;
 }
-html:not([data-theme]) {
---pst-color-link: var(--pst-color-primary);
---pst-color-link-hover: var(--pst-color-secondary);
-}
-html:not([data-theme]) .only-dark,
-html:not([data-theme]) .only-dark ~ figcaption {
-display: none !important;
-}
-/* NOTE: @each {...} is like a for-loop
-* https://sass-lang.com/documentation/at-rules/control/each
-*/
-html[data-theme=light] {
-color-scheme: light;
-}
-html[data-theme=light] {
---pst-color-link: var(--pst-color-primary);
---pst-color-link-hover: var(--pst-color-secondary);
-}
-html[data-theme=light] .only-dark,
-html[data-theme=light] .only-dark ~ figcaption {
-display: none !important;
-}
 html[data-theme=dark] {
 --pst-color-link: var(--pst-color-primary);
 --pst-color-link-hover: var(--pst-color-secondary);
@@ -367,13 +389,6 @@ html[data-theme=dark] .MathJax_SVG * {
 div.deprecated {
 margin-top: 0.5em;
 margin-bottom: 2em;
-background-color: var(--pst-color-deprecated-bg);
-border-color: var(--pst-color-deprecated);
 }
-span.versionmodified.deprecated:before {
-color: var(--pst-color-deprecated);
-}
 .admonition-beta.admonition, div.admonition-beta.admonition {
@@ -393,4 +408,4 @@ dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.glossary):not(.
 p {
 font-size: 0.9rem;
 margin-bottom: 0.5rem;
-}
+}

View File

@@ -87,18 +87,6 @@ class Beta(BaseAdmonition):
 def setup(app):
 app.add_directive("example_links", ExampleLinksDirective)
 app.add_directive("beta", Beta)
-app.connect("autodoc-skip-member", skip_private_members)
-def skip_private_members(app, what, name, obj, skip, options):
-if skip:
-return True
-if hasattr(obj, "__doc__") and obj.__doc__ and ":private:" in obj.__doc__:
-return True
-if name == "__init__" and obj.__objclass__ is object:
-# dont document default init
-return True
-return None
 # -- Project information -----------------------------------------------------
@@ -128,7 +116,6 @@ extensions = [
 "_extensions.gallery_directive",
 "sphinx_design",
 "sphinx_copybutton",
-"sphinxcontrib.googleanalytics",
 ]
 source_suffix = [".rst", ".md"]
@@ -268,8 +255,6 @@ html_show_sourcelink = False
 # Set canonical URL from the Read the Docs Domain
 html_baseurl = os.environ.get("READTHEDOCS_CANONICAL_URL", "")
-googleanalytics_id = "G-9B66JQQH2F"
 # Tell Jinja2 templates the build is running on Read the Docs
 if os.environ.get("READTHEDOCS", "") == "True":
 html_context["READTHEDOCS"] = True

View File

@@ -72,21 +72,14 @@ def _load_module_members(module_path: str, namespace: str) -> ModuleMembers:
 Returns:
 list: A list of loaded module objects.
 """
 classes_: List[ClassInfo] = []
 functions: List[FunctionInfo] = []
 module = importlib.import_module(module_path)
-if ":private:" in (module.__doc__ or ""):
-return ModuleMembers(classes_=[], functions=[])
 for name, type_ in inspect.getmembers(module):
-if not hasattr(type_, "__module__"):
-continue
 if type_.__module__ != module_path:
 continue
-if ":private:" in (type_.__doc__ or ""):
-continue
 if inspect.isclass(type_):
 # The type of the class is used to select a template
View File

@@ -9,4 +9,3 @@ pyyaml
 sphinx-design
 sphinx-copybutton
 beautifulsoup4
-sphinxcontrib-googleanalytics

View File

@@ -28,19 +28,19 @@ From the opposite direction, scientists use `LangChain` in research and referenc
 | `2307.09288v2` [Llama 2: Open Foundation and Fine-Tuned Chat Models](http://arxiv.org/abs/2307.09288v2) | Hugo Touvron, Louis Martin, Kevin Stone, et al. | 2023&#8209;07&#8209;18 | `Cookbook:` [Semi Structured Rag](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_Structured_RAG.ipynb)
 | `2307.03172v3` [Lost in the Middle: How Language Models Use Long Contexts](http://arxiv.org/abs/2307.03172v3) | Nelson F. Liu, Kevin Lin, John Hewitt, et al. | 2023&#8209;07&#8209;06 | `Docs:` [docs/how_to/long_context_reorder](https://python.langchain.com/docs/how_to/long_context_reorder)
 | `2305.14283v3` [Query Rewriting for Retrieval-Augmented Large Language Models](http://arxiv.org/abs/2305.14283v3) | Xinbei Ma, Yeyun Gong, Pengcheng He, et al. | 2023&#8209;05&#8209;23 | `Template:` [rewrite-retrieve-read](https://python.langchain.com/docs/templates/rewrite-retrieve-read), `Cookbook:` [Rewrite](https://github.com/langchain-ai/langchain/blob/master/cookbook/rewrite.ipynb)
-| `2305.08291v1` [Large Language Model Guided Tree-of-Thought](http://arxiv.org/abs/2305.08291v1) | Jieyi Long | 2023&#8209;05&#8209;15 | `API:` [langchain_experimental.tot](https://python.langchain.com/api_reference/experimental/tot.html), `Cookbook:` [Tree Of Thought](https://github.com/langchain-ai/langchain/blob/master/cookbook/tree_of_thought.ipynb)
+| `2305.08291v1` [Large Language Model Guided Tree-of-Thought](http://arxiv.org/abs/2305.08291v1) | Jieyi Long | 2023&#8209;05&#8209;15 | `API:` [langchain_experimental.tot](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.tot), `Cookbook:` [Tree Of Thought](https://github.com/langchain-ai/langchain/blob/master/cookbook/tree_of_thought.ipynb)
 | `2305.04091v3` [Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models](http://arxiv.org/abs/2305.04091v3) | Lei Wang, Wanyu Xu, Yihuai Lan, et al. | 2023&#8209;05&#8209;06 | `Cookbook:` [Plan And Execute Agent](https://github.com/langchain-ai/langchain/blob/master/cookbook/plan_and_execute_agent.ipynb)
-| `2305.02156v1` [Zero-Shot Listwise Document Reranking with a Large Language Model](http://arxiv.org/abs/2305.02156v1) | Xueguang Ma, Xinyu Zhang, Ronak Pradeep, et al. | 2023&#8209;05&#8209;03 | `Docs:` [docs/how_to/contextual_compression](https://python.langchain.com/docs/how_to/contextual_compression), `API:` [langchain...LLMListwiseRerank](https://python.langchain.com/api_reference/langchain/retrievers/langchain.retrievers.document_compressors.listwise_rerank.LLMListwiseRerank.html#)
+| `2305.02156v1` [Zero-Shot Listwise Document Reranking with a Large Language Model](http://arxiv.org/abs/2305.02156v1) | Xueguang Ma, Xinyu Zhang, Ronak Pradeep, et al. | 2023&#8209;05&#8209;03 | `Docs:` [docs/how_to/contextual_compression](https://python.langchain.com/docs/how_to/contextual_compression), `API:` [langchain...LLMListwiseRerank](https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.document_compressors.listwise_rerank.LLMListwiseRerank.html#langchain.retrievers.document_compressors.listwise_rerank.LLMListwiseRerank)
 | `2304.08485v2` [Visual Instruction Tuning](http://arxiv.org/abs/2304.08485v2) | Haotian Liu, Chunyuan Li, Qingyang Wu, et al. | 2023&#8209;04&#8209;17 | `Cookbook:` [Semi Structured Multi Modal Rag Llama2](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_structured_multi_modal_RAG_LLaMA2.ipynb), [Semi Structured And Multi Modal Rag](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_structured_and_multi_modal_RAG.ipynb)
 | `2304.03442v2` [Generative Agents: Interactive Simulacra of Human Behavior](http://arxiv.org/abs/2304.03442v2) | Joon Sung Park, Joseph C. O'Brien, Carrie J. Cai, et al. | 2023&#8209;04&#8209;07 | `Cookbook:` [Generative Agents Interactive Simulacra Of Human Behavior](https://github.com/langchain-ai/langchain/blob/master/cookbook/generative_agents_interactive_simulacra_of_human_behavior.ipynb), [Multiagent Bidding](https://github.com/langchain-ai/langchain/blob/master/cookbook/multiagent_bidding.ipynb)
 | `2303.17760v2` [CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society](http://arxiv.org/abs/2303.17760v2) | Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, et al. | 2023&#8209;03&#8209;31 | `Cookbook:` [Camel Role Playing](https://github.com/langchain-ai/langchain/blob/master/cookbook/camel_role_playing.ipynb)
-| `2303.17580v4` [HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face](http://arxiv.org/abs/2303.17580v4) | Yongliang Shen, Kaitao Song, Xu Tan, et al. | 2023&#8209;03&#8209;30 | `API:` [langchain_experimental.autonomous_agents](https://python.langchain.com/api_reference/experimental/autonomous_agents.html), `Cookbook:` [Hugginggpt](https://github.com/langchain-ai/langchain/blob/master/cookbook/hugginggpt.ipynb)
+| `2303.17580v4` [HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face](http://arxiv.org/abs/2303.17580v4) | Yongliang Shen, Kaitao Song, Xu Tan, et al. | 2023&#8209;03&#8209;30 | `API:` [langchain_experimental.autonomous_agents](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.autonomous_agents), `Cookbook:` [Hugginggpt](https://github.com/langchain-ai/langchain/blob/master/cookbook/hugginggpt.ipynb)
 | `2301.10226v4` [A Watermark for Large Language Models](http://arxiv.org/abs/2301.10226v4) | John Kirchenbauer, Jonas Geiping, Yuxin Wen, et al. | 2023&#8209;01&#8209;24 | `API:` [langchain_community...OCIModelDeploymentTGI](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI.html#langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI), [langchain_huggingface...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_community...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint)
 | `2212.10496v1` [Precise Zero-Shot Dense Retrieval without Relevance Labels](http://arxiv.org/abs/2212.10496v1) | Luyu Gao, Xueguang Ma, Jimmy Lin, et al. | 2022&#8209;12&#8209;20 | `Docs:` [docs/concepts](https://python.langchain.com/docs/concepts), `API:` [langchain...HypotheticalDocumentEmbedder](https://api.python.langchain.com/en/latest/chains/langchain.chains.hyde.base.HypotheticalDocumentEmbedder.html#langchain.chains.hyde.base.HypotheticalDocumentEmbedder), `Template:` [hyde](https://python.langchain.com/docs/templates/hyde), `Cookbook:` [Hypothetical Document Embeddings](https://github.com/langchain-ai/langchain/blob/master/cookbook/hypothetical_document_embeddings.ipynb)
 | `2212.08073v1` [Constitutional AI: Harmlessness from AI Feedback](http://arxiv.org/abs/2212.08073v1) | Yuntao Bai, Saurav Kadavath, Sandipan Kundu, et al. | 2022&#8209;12&#8209;15 | `Docs:` [docs/versions/migrating_chains/constitutional_chain](https://python.langchain.com/docs/versions/migrating_chains/constitutional_chain)
-| `2212.07425v3` [Robust and Explainable Identification of Logical Fallacies in Natural Language Arguments](http://arxiv.org/abs/2212.07425v3) | Zhivar Sourati, Vishnu Priya Prasanna Venkatesh, Darshan Deshpande, et al. | 2022&#8209;12&#8209;12 | `API:` [langchain_experimental.fallacy_removal](https://python.langchain.com/api_reference/experimental/fallacy_removal.html)
+| `2212.07425v3` [Robust and Explainable Identification of Logical Fallacies in Natural Language Arguments](http://arxiv.org/abs/2212.07425v3) | Zhivar Sourati, Vishnu Priya Prasanna Venkatesh, Darshan Deshpande, et al. | 2022&#8209;12&#8209;12 | `API:` [langchain_experimental.fallacy_removal](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.fallacy_removal)
 | `2211.13892v2` [Complementary Explanations for Effective In-Context Learning](http://arxiv.org/abs/2211.13892v2) | Xi Ye, Srinivasan Iyer, Asli Celikyilmaz, et al. | 2022&#8209;11&#8209;25 | `API:` [langchain_core...MaxMarginalRelevanceExampleSelector](https://api.python.langchain.com/en/latest/example_selectors/langchain_core.example_selectors.semantic_similarity.MaxMarginalRelevanceExampleSelector.html#langchain_core.example_selectors.semantic_similarity.MaxMarginalRelevanceExampleSelector)
-| `2211.10435v2` [PAL: Program-aided Language Models](http://arxiv.org/abs/2211.10435v2) | Luyu Gao, Aman Madaan, Shuyan Zhou, et al. | 2022&#8209;11&#8209;18 | `API:` [langchain_experimental.pal_chain](https://python.langchain.com/api_reference/experimental/pal_chain.html), [langchain_experimental...PALChain](https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html#langchain_experimental.pal_chain.base.PALChain), `Cookbook:` [Program Aided Language Model](https://github.com/langchain-ai/langchain/blob/master/cookbook/program_aided_language_model.ipynb)
+| `2211.10435v2` [PAL: Program-aided Language Models](http://arxiv.org/abs/2211.10435v2) | Luyu Gao, Aman Madaan, Shuyan Zhou, et al. | 2022&#8209;11&#8209;18 | `API:` [langchain_experimental.pal_chain](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.pal_chain), [langchain_experimental...PALChain](https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html#langchain_experimental.pal_chain.base.PALChain), `Cookbook:` [Program Aided Language Model](https://github.com/langchain-ai/langchain/blob/master/cookbook/program_aided_language_model.ipynb)
 | `2210.11934v2` [An Analysis of Fusion Functions for Hybrid Retrieval](http://arxiv.org/abs/2210.11934v2) | Sebastian Bruch, Siyu Gai, Amir Ingber | 2022&#8209;10&#8209;21 | `Docs:` [docs/concepts](https://python.langchain.com/docs/concepts)
 | `2210.03629v3` [ReAct: Synergizing Reasoning and Acting in Language Models](http://arxiv.org/abs/2210.03629v3) | Shunyu Yao, Jeffrey Zhao, Dian Yu, et al. | 2022&#8209;10&#8209;06 | `Docs:` [docs/integrations/tools/ionic_shopping](https://python.langchain.com/docs/integrations/tools/ionic_shopping), [docs/integrations/providers/cohere](https://python.langchain.com/docs/integrations/providers/cohere), [docs/concepts](https://python.langchain.com/docs/concepts), `API:` [langchain...create_react_agent](https://api.python.langchain.com/en/latest/agents/langchain.agents.react.agent.create_react_agent.html#langchain.agents.react.agent.create_react_agent), [langchain...TrajectoryEvalChain](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain.html#langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain)
 | `2209.10785v2` [Deep Lake: a Lakehouse for Deep Learning](http://arxiv.org/abs/2209.10785v2) | Sasun Hambardzumyan, Abhinav Tuli, Levon Ghukasyan, et al. | 2022&#8209;09&#8209;22 | `Docs:` [docs/integrations/providers/activeloop_deeplake](https://python.langchain.com/docs/integrations/providers/activeloop_deeplake)
@@ -49,7 +49,7 @@ From the opposite direction, scientists use `LangChain` in research and referenc
 | `2204.00498v1` [Evaluating the Text-to-SQL Capabilities of Large Language Models](http://arxiv.org/abs/2204.00498v1) | Nitarshan Rajkumar, Raymond Li, Dzmitry Bahdanau | 2022&#8209;03&#8209;15 | `Docs:` [docs/tutorials/sql_qa](https://python.langchain.com/docs/tutorials/sql_qa), `API:` [langchain_community...SQLDatabase](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.sql_database.SQLDatabase.html#langchain_community.utilities.sql_database.SQLDatabase), [langchain_community...SparkSQL](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.spark_sql.SparkSQL.html#langchain_community.utilities.spark_sql.SparkSQL)
 | `2202.00666v5` [Locally Typical Sampling](http://arxiv.org/abs/2202.00666v5) | Clara Meister, Tiago Pimentel, Gian Wiher, et al. | 2022&#8209;02&#8209;01 | `API:` [langchain_huggingface...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_community...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint)
 | `2112.01488v3` [ColBERTv2: Effective and Efficient Retrieval via Lightweight Late Interaction](http://arxiv.org/abs/2112.01488v3) | Keshav Santhanam, Omar Khattab, Jon Saad-Falcon, et al. | 2021&#8209;12&#8209;02 | `Docs:` [docs/integrations/retrievers/ragatouille](https://python.langchain.com/docs/integrations/retrievers/ragatouille), [docs/integrations/providers/ragatouille](https://python.langchain.com/docs/integrations/providers/ragatouille), [docs/concepts](https://python.langchain.com/docs/concepts), [docs/integrations/providers/dspy](https://python.langchain.com/docs/integrations/providers/dspy)
-| `2103.00020v1` [Learning Transferable Visual Models From Natural Language Supervision](http://arxiv.org/abs/2103.00020v1) | Alec Radford, Jong Wook Kim, Chris Hallacy, et al. | 2021&#8209;02&#8209;26 | `API:` [langchain_experimental.open_clip](https://python.langchain.com/api_reference/experimental/open_clip.html)
+| `2103.00020v1` [Learning Transferable Visual Models From Natural Language Supervision](http://arxiv.org/abs/2103.00020v1) | Alec Radford, Jong Wook Kim, Chris Hallacy, et al. | 2021&#8209;02&#8209;26 | `API:` [langchain_experimental.open_clip](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.open_clip)
 | `2005.14165v4` [Language Models are Few-Shot Learners](http://arxiv.org/abs/2005.14165v4) | Tom B. Brown, Benjamin Mann, Nick Ryder, et al. | 2020&#8209;05&#8209;28 | `Docs:` [docs/concepts](https://python.langchain.com/docs/concepts)
 | `2005.11401v4` [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](http://arxiv.org/abs/2005.11401v4) | Patrick Lewis, Ethan Perez, Aleksandra Piktus, et al. | 2020&#8209;05&#8209;22 | `Docs:` [docs/concepts](https://python.langchain.com/docs/concepts)
 | `1909.05858v2` [CTRL: A Conditional Transformer Language Model for Controllable Generation](http://arxiv.org/abs/1909.05858v2) | Nitish Shirish Keskar, Bryan McCann, Lav R. Varshney, et al. | 2019&#8209;09&#8209;11 | `API:` [langchain_huggingface...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_community...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint)
@@ -433,7 +433,7 @@ for retrieval-augmented LLM.
 - **arXiv id:** [2305.08291v1](http://arxiv.org/abs/2305.08291v1) **Published Date:** 2023-05-15
 - **LangChain:**
-- **API Reference:** [langchain_experimental.tot](https://python.langchain.com/api_reference/experimental/tot.html)
+- **API Reference:** [langchain_experimental.tot](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.tot)
 - **Cookbook:** [tree_of_thought](https://github.com/langchain-ai/langchain/blob/master/cookbook/tree_of_thought.ipynb)
 **Abstract:** In this paper, we introduce the Tree-of-Thought (ToT) framework, a novel
@@ -490,7 +490,7 @@ https://github.com/AGI-Edgerunners/Plan-and-Solve-Prompting.
 - **LangChain:**
 - **Documentation:** [docs/how_to/contextual_compression](https://python.langchain.com/docs/how_to/contextual_compression)
-- **API Reference:** [langchain...LLMListwiseRerank](https://python.langchain.com/api_reference/langchain/retrievers/langchain.retrievers.document_compressors.listwise_rerank.LLMListwiseRerank.html#)
+- **API Reference:** [langchain...LLMListwiseRerank](https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.document_compressors.listwise_rerank.LLMListwiseRerank.html#langchain.retrievers.document_compressors.listwise_rerank.LLMListwiseRerank)
 **Abstract:** Supervised ranking methods based on bi-encoder or cross-encoder architectures
 have shown success in multi-stage text ranking tasks, but they require large
@@ -597,7 +597,7 @@ agents and beyond: https://github.com/camel-ai/camel.
 - **arXiv id:** [2303.17580v4](http://arxiv.org/abs/2303.17580v4) **Published Date:** 2023-03-30
 - **LangChain:**
-- **API Reference:** [langchain_experimental.autonomous_agents](https://python.langchain.com/api_reference/experimental/autonomous_agents.html)
+- **API Reference:** [langchain_experimental.autonomous_agents](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.autonomous_agents)
 - **Cookbook:** [hugginggpt](https://github.com/langchain-ai/langchain/blob/master/cookbook/hugginggpt.ipynb)
 **Abstract:** Solving complicated AI tasks with different domains and modalities is a key
@@ -704,7 +704,7 @@ labels.
 - **arXiv id:** [2212.07425v3](http://arxiv.org/abs/2212.07425v3) **Published Date:** 2022-12-12
 - **LangChain:**
-- **API Reference:** [langchain_experimental.fallacy_removal](https://python.langchain.com/api_reference/experimental/fallacy_removal.html)
+- **API Reference:** [langchain_experimental.fallacy_removal](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.fallacy_removal)
 **Abstract:** The spread of misinformation, propaganda, and flawed argumentation has been
 amplified in the Internet era. Given the volume of data and the subtlety of
@@ -759,7 +759,7 @@ performance across three real-world tasks on multiple LLMs.
 - **arXiv id:** [2211.10435v2](http://arxiv.org/abs/2211.10435v2) **Published Date:** 2022-11-18
 - **LangChain:**
-- **API Reference:** [langchain_experimental.pal_chain](https://python.langchain.com/api_reference/experimental/pal_chain.html), [langchain_experimental...PALChain](https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html#langchain_experimental.pal_chain.base.PALChain)
+- **API Reference:** [langchain_experimental.pal_chain](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.pal_chain), [langchain_experimental...PALChain](https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html#langchain_experimental.pal_chain.base.PALChain)
 - **Cookbook:** [program_aided_language_model](https://github.com/langchain-ai/langchain/blob/master/cookbook/program_aided_language_model.ipynb)
 **Abstract:** Large language models (LLMs) have recently demonstrated an impressive ability
@@ -992,7 +992,7 @@ footprint of late interaction models by 6--10$\times$.
 - **arXiv id:** [2103.00020v1](http://arxiv.org/abs/2103.00020v1) **Published Date:** 2021-02-26
 - **LangChain:**
-- **API Reference:** [langchain_experimental.open_clip](https://python.langchain.com/api_reference/experimental/open_clip.html)
+- **API Reference:** [langchain_experimental.open_clip](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.open_clip)
 **Abstract:** State-of-the-art computer vision systems are trained to predict a fixed set
 of predetermined object categories. This restricted form of supervision limits

View File

@@ -12,7 +12,6 @@
 ### [by Mayo Oshin](https://www.youtube.com/@chatwithdata/search?query=langchain)
 ### [by 1 little Coder](https://www.youtube.com/playlist?list=PLpdmBGJ6ELUK-v0MK-t4wZmVEbxM5xk6L)
 ### [by BobLin (Chinese language)](https://www.youtube.com/playlist?list=PLbd7ntv6PxC3QMFQvtWfk55p-Op_syO1C)
-### [by Total Technology Zonne](https://youtube.com/playlist?list=PLI8raxzYtfGyE02fAxiM1CPhLUuqcTLWg&si=fkAye16rQKBJVHc9)
 ## Courses

View File

@@ -65,7 +65,7 @@ A package to deploy LangChain chains as REST APIs. Makes it easy to get a produc
 :::important
 LangServe is designed to primarily deploy simple Runnables and work with well-known primitives in langchain-core.
-If you need a deployment option for LangGraph, you should instead be looking at LangGraph Platform (beta) which will be better suited for deploying LangGraph applications.
+If you need a deployment option for LangGraph, you should instead be looking at LangGraph Cloud (beta) which will be better suited for deploying LangGraph applications.
 :::
 For more information, see the [LangServe documentation](/docs/langserve).

View File

@@ -3,7 +3,7 @@
:::info[Prerequisites]
* [Documents](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html)
* [Documents](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html)
:::

View File

@@ -48,7 +48,7 @@ The conceptual guide does not cover step-by-step instructions or specific implem
- **[AIMessage](/docs/concepts/messages#aimessage)**: Represents a complete response from an AI model.
- **[astream_events](/docs/concepts/chat_models#key-methods)**: Stream granular information from [LCEL](/docs/concepts/lcel) chains.
- **[BaseTool](/docs/concepts/tools/#tool-interface)**: The base class for all tools in LangChain.
- **[batch](/docs/concepts/runnables)**: Use to execute a runnable with batch inputs.
- **[batch](/docs/concepts/runnables)**: Use to execute a runnable with batch inputs a Runnable.
- **[bind_tools](/docs/concepts/tool_calling/#tool-binding)**: Allows models to interact with tools.
- **[Caching](/docs/concepts/chat_models#caching)**: Storing results to avoid redundant calls to a chat model.
- **[Chat models](/docs/concepts/multimodality/#multimodality-in-chat-models)**: Chat models that handle multiple data modalities.
@@ -70,7 +70,7 @@ The conceptual guide does not cover step-by-step instructions or specific implem
- **[langchain-core](/docs/concepts/architecture#langchain-core)**: Core langchain package. Includes base interfaces and in-memory implementations.
- **[langchain](/docs/concepts/architecture#langchain)**: A package for higher level components (e.g., some pre-built chains).
- **[langgraph](/docs/concepts/architecture#langgraph)**: Powerful orchestration layer for LangChain. Use to build complex pipelines and workflows.
- **[langserve](/docs/concepts/architecture#langserve)**: Used to deploy LangChain Runnables as REST endpoints. Uses FastAPI. Works primarily for LangChain Runnables, does not currently integrate with LangGraph.
- **[langserve](/docs/concepts/architecture#langserve)**: Use to deploy LangChain Runnables as REST endpoints. Uses FastAPI. Works primarily for LangChain Runnables, does not currently integrate with LangGraph.
- **[LLMs (legacy)](/docs/concepts/text_llms)**: Older language models that take a string as input and return a string as output.
- **[Managing chat history](/docs/concepts/chat_history#managing-chat-history)**: Techniques to maintain and manage the chat history.
- **[OpenAI format](/docs/concepts/messages#openai-format)**: OpenAI's message format for chat models.
@@ -79,7 +79,7 @@ The conceptual guide does not cover step-by-step instructions or specific implem
- **[RemoveMessage](/docs/concepts/messages/#removemessage)**: An abstraction used to remove a message from chat history, used primarily in LangGraph.
- **[role](/docs/concepts/messages#role)**: Represents the role (e.g., user, assistant) of a chat message.
- **[RunnableConfig](/docs/concepts/runnables/#runnableconfig)**: Use to pass run time information to Runnables (e.g., `run_name`, `run_id`, `tags`, `metadata`, `max_concurrency`, `recursion_limit`, `configurable`).
- **[Standard parameters for chat models](/docs/concepts/chat_models#standard-parameters)**: Parameters such as API key, `temperature`, and `max_tokens`.
- **[Standard parameters for chat models](/docs/concepts/chat_models#standard-parameters)**: Parameters such as API key, `temperature`, and `max_tokens`,
- **[Standard tests](/docs/concepts/testing#standard-tests)**: A defined set of unit and integration tests that all integrations must pass.
- **[stream](/docs/concepts/streaming)**: Use to stream output from a Runnable or a graph.
- **[Tokenization](/docs/concepts/tokens)**: The process of converting data into tokens and vice versa.

View File

@@ -221,7 +221,7 @@ They are particularly useful for storing and querying complex relationships betw
LangChain provides a unified interface for interacting with various retrieval systems through the [retriever](/docs/concepts/retrievers/) concept. The interface is straightforward:
1. Input: A query (string)
2. Output: A list of documents (standardized LangChain [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) objects)
2. Output: A list of documents (standardized LangChain [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) objects)
You can create a retriever using any of the retrieval systems mentioned earlier. The query analysis techniques we discussed are particularly useful here, as they enable natural language interfaces for databases that typically require structured query languages.
For example, you can build a retriever for a SQL database using text-to-SQL conversion. This allows a natural language query (string) to be transformed into a SQL query behind the scenes.
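As a sketch of that idea, a text-to-SQL retriever can be assembled from your own pieces; here `text_to_sql` and `run_sql` are hypothetical stand-ins for an LLM call and a database client:

```python
from langchain_core.documents import Document
from langchain_core.runnables import chain


def text_to_sql(query: str) -> str:
    # Stand-in for an LLM-backed text-to-SQL step.
    return "SELECT name FROM customers LIMIT 3"


def run_sql(sql: str) -> list:
    # Stand-in for a real database client call.
    return [("Ada",), ("Grace",), ("Edsger",)]


@chain
def sql_retriever(query: str) -> list:
    # Follows the retriever interface: string in, list of Documents out.
    rows = run_sql(text_to_sql(query))
    return [Document(page_content=str(row), metadata={"source": "sql"}) for row in rows]


docs = sql_retriever.invoke("Which customers do we have?")
```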

View File

@@ -18,7 +18,7 @@ Because of their importance and variability, LangChain provides a uniform interf
The LangChain [retriever](/docs/concepts/retrievers/) interface is straightforward:
1. Input: A query (string)
2. Output: A list of documents (standardized LangChain [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) objects)
2. Output: A list of documents (standardized LangChain [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) objects)
## Key concept
@@ -29,7 +29,7 @@ All retrievers implement a simple interface for retrieving documents using natur
## Interface
The only requirement for a retriever is the ability to accept a query and return documents.
In particular, [LangChain's retriever class](https://python.langchain.com/api_reference/core/retrievers/langchain_core.retrievers.BaseRetriever.html#) only requires that the `_get_relevant_documents` method is implemented, which takes a `query: str` and returns a list of [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) objects that are most relevant to the query.
In particular, [LangChain's retriever class](https://api.python.langchain.com/en/latest/retrievers/langchain_core.retrievers.BaseRetriever.html) only requires that the `_get_relevant_documents` method is implemented, which takes a `query: str` and returns a list of [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) objects that are most relevant to the query.
The underlying logic used to get relevant documents is specified by the retriever and can be whatever is most useful for the application.
A LangChain retriever is a [runnable](/docs/how_to/lcel_cheatsheet/), which is a standard interface for LangChain components.
@@ -39,7 +39,7 @@ This means that it has a few common methods, including `invoke`, that are used t
docs = retriever.invoke(query)
```
Retrievers return a list of [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) objects, which have two attributes:
Retrievers return a list of [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) objects, which have two attributes:
* `page_content`: The content of this document. Currently this is a string.
* `metadata`: Arbitrary metadata associated with this document (e.g., document id, file name, source, etc).
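For illustration, here is a minimal example of constructing and reading a `Document` directly (the field values are arbitrary):

```python
from langchain_core.documents import Document

doc = Document(
    page_content="Retrievers return documents like this one.",
    metadata={"source": "example.txt", "page": 1},
)
print(doc.page_content)
print(doc.metadata["source"])
```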

View File

@@ -114,7 +114,7 @@ Please see the [Configurable Runnables](#configurable-runnables) section for mor
| Method | Description |
|-------------------------|------------------------------------------------------------------|
| `get_input_schema` | Gives the Pydantic Schema of the input schema for the Runnable. |
| `get_output_schema` | Gives the Pydantic Schema of the output schema for the Runnable. |
| `get_output_chema` | Gives the Pydantic Schema of the output schema for the Runnable. |
| `config_schema` | Gives the Pydantic Schema of the config schema for the Runnable. |
| `get_input_jsonschema` | Gives the JSONSchema of the input schema for the Runnable. |
| `get_output_jsonschema` | Gives the JSONSchema of the output schema for the Runnable. |
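As a quick sketch of how these introspection methods are used (with a trivial runnable standing in for a real chain):

```python
from langchain_core.runnables import RunnableLambda

runnable = RunnableLambda(lambda x: x + 1)

# Inspect the schemas the Runnable reports for its input and output.
print(runnable.get_input_jsonschema())
print(runnable.get_output_jsonschema())
```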
@@ -125,7 +125,7 @@ Please see the [Configurable Runnables](#configurable-runnables) section for mor
LangChain will automatically try to infer the input and output types of a Runnable based on available information.
Currently, this inference does not work well for more complex Runnables that are built using [LCEL](/docs/concepts/lcel) composition, and the inferred input and / or output types may be incorrect. In these cases, we recommend that users override the inferred input and output types using the `with_types` method ([API Reference](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.with_types
Currently, this inference does not work well for more complex Runnables that are built using [LCEL](/docs/concepts/lcel) composition, and the inferred input and / or output types may be incorrect. In these cases, we recommend that users override the inferred input and output types using the `with_types` method ([API Reference](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.with_types
).
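For example, a minimal sketch of overriding the inferred types on a composed runnable might look like:

```python
from langchain_core.runnables import RunnableLambda

chain = RunnableLambda(lambda x: x["question"].strip().lower())

# Declare the intended input and output types explicitly.
typed_chain = chain.with_types(input_type=dict, output_type=str)
```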
## RunnableConfig
@@ -323,7 +323,7 @@ multiple Runnables and you need to add custom processing logic in one of the ste
There are two ways to create a custom Runnable from a function:
* `RunnableLambda`: Use this for simple transformations where streaming is not required.
* `RunnableLambda`: Use this simple transformations where streaming is not required.
* `RunnableGenerator`: use this for more complex transformations when streaming is needed.
See the [How to run custom functions](/docs/how_to/functions) guide for more information on how to use `RunnableLambda` and `RunnableGenerator`.
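A short sketch contrasting the two (the word-splitting generator is an arbitrary example):

```python
from typing import Iterator

from langchain_core.runnables import RunnableGenerator, RunnableLambda

# RunnableLambda: a simple, non-streaming transformation.
shout = RunnableLambda(lambda text: text.upper())

# RunnableGenerator: consumes a stream of chunks and yields transformed chunks.
def _split_into_words(chunks: Iterator[str]) -> Iterator[str]:
    for chunk in chunks:
        for word in chunk.split():
            yield word + " "

split_words = RunnableGenerator(_split_into_words)

print(shout.invoke("hello"))                    # HELLO
print(list(split_words.stream("hello world")))  # ['hello ', 'world ']
```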
@@ -347,6 +347,6 @@ Sometimes you may want to experiment with, or even expose to the end user, multi
To simplify this process, the Runnable interface provides two methods for creating configurable Runnables at runtime:
* `configurable_fields`: This method allows you to configure specific **attributes** in a Runnable. For example, the `temperature` attribute of a chat model.
* `configurable_alternatives`: This method enables you to specify **alternative** Runnables that can be run during runtime. For example, you could specify a list of different chat models that can be used.
* `configurable_alternatives`: This method enables you to specify **alternative** Runnables that can be run during run time. For example, you could specify a list of different chat models that can be used.
See the [How to configure runtime chain internals](/docs/how_to/configure) guide for more information on how to configure runtime chain internals.
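As a sketch, here is what `configurable_fields` looks like with a chat model (this assumes `langchain-openai` is installed; any chat model with a `temperature` attribute works the same way):

```python
from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI

model = ChatOpenAI(temperature=0).configurable_fields(
    temperature=ConfigurableField(
        id="llm_temperature",
        name="LLM Temperature",
        description="The sampling temperature of the model",
    )
)

# Supply a value for the configurable attribute at runtime.
model.with_config(configurable={"llm_temperature": 0.9}).invoke("Pick a number")
```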

View File

@@ -78,4 +78,4 @@ class TestParrotMultiplyToolUnit(ToolsUnitTests):
return {"a": 2, "b": 3}
```
To learn more, check out our guide on [how to add standard tests to an integration](../../contributing/how_to/integrations/standard_tests).
To learn more, check out our guide guides on [contributing an integration](/docs/contributing/how_to/integrations/#standard-tests).

View File

@@ -59,7 +59,7 @@ vector_store = InMemoryVectorStore(embedding=SomeEmbeddingModel())
To add documents, use the `add_documents` method.
This API works with a list of [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) objects.
This API works with a list of [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) objects.
`Document` objects all have `page_content` and `metadata` attributes, making them a universal way to store unstructured text and associated metadata.
```python
@@ -126,7 +126,7 @@ to the documentation of the specific vectorstore you are using to see what simil
Given a similarity metric to measure the distance between the embedded query and any embedded document, we need an algorithm to efficiently search over *all* the embedded documents to find the most similar ones.
There are various ways to do this. As an example, many vectorstores implement [HNSW (Hierarchical Navigable Small World)](https://www.pinecone.io/learn/series/faiss/hnsw/), a graph-based index structure that allows for efficient similarity search.
Regardless of the search algorithm used under the hood, the LangChain vectorstore interface has a `similarity_search` method for all integrations.
This will take the search query, create an embedding, find similar documents, and return them as a list of [Documents](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html).
This will take the search query, create an embedding, find similar documents, and return them as a list of [Documents](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html).
```python
query = "my query"

View File

@@ -7,4 +7,3 @@
- [**Start Here**](integrations/index.mdx): Help us integrate with your favorite vendors and tools.
- [**Package**](integrations/package): Publish an integration package to PyPi
- [**Standard Tests**](integrations/standard_tests): Ensure your integration passes an expected set of tests.

View File

@@ -0,0 +1,323 @@
---
pagination_next: contributing/how_to/integrations/publish
pagination_prev: contributing/how_to/integrations/index
---
# How to implement and test a chat model integration
This guide walks through how to implement and test a custom [chat model](/docs/concepts/chat_models) that you have developed.
For testing, we will rely on the `langchain-tests` dependency we added in the previous [bootstrapping guide](/docs/contributing/how_to/integrations/package).
## Implementation
Let's say you're building a simple integration package that provides a `ChatParrotLink`
chat model integration for LangChain. Here's a simple example of what your project
structure might look like:
```plaintext
langchain-parrot-link/
├── langchain_parrot_link/
│ ├── __init__.py
│ └── chat_models.py
├── tests/
│ ├── __init__.py
│ └── test_chat_models.py
├── pyproject.toml
└── README.md
```
Following the [bootstrapping guide](/docs/contributing/how_to/integrations/package),
all of these files should already exist, except for
`chat_models.py` and `test_chat_models.py`. We will implement these files in this guide.
To implement `chat_models.py`, we copy the [implementation](/docs/how_to/custom_chat_model/#implementation) from our
[Custom Chat Model Guide](/docs/how_to/custom_chat_model). Refer to that guide for more detail.
<details>
<summary>chat_models.py</summary>
```python title="langchain_parrot_link/chat_models.py"
from typing import Any, Dict, Iterator, List, Optional

from langchain_core.callbacks import (
    CallbackManagerForLLMRun,
)
from langchain_core.language_models import BaseChatModel
from langchain_core.messages import (
    AIMessage,
    AIMessageChunk,
    BaseMessage,
)
from langchain_core.messages.ai import UsageMetadata
from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult
from pydantic import Field


class ChatParrotLink(BaseChatModel):
    """A custom chat model that echoes the first `parrot_buffer_length` characters
    of the input.

    When contributing an implementation to LangChain, carefully document
    the model, including its initialization parameters. Include
    an example of how to initialize the model and any relevant
    links to the underlying model's documentation or API.

    Example:

        .. code-block:: python

            model = ChatParrotLink(parrot_buffer_length=2, model="bird-brain-001")
            result = model.invoke([HumanMessage(content="hello")])
            result = model.batch([[HumanMessage(content="hello")],
                                 [HumanMessage(content="world")]])
    """

    model_name: str = Field(alias="model")
    """The name of the model"""
    parrot_buffer_length: int
    """The number of characters from the last message of the prompt to be echoed."""
    temperature: Optional[float] = None
    max_tokens: Optional[int] = None
    timeout: Optional[int] = None
    stop: Optional[List[str]] = None
    max_retries: int = 2

    def _generate(
        self,
        messages: List[BaseMessage],
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> ChatResult:
        """Override the _generate method to implement the chat model logic.

        This can be a call to an API, a call to a local model, or any other
        implementation that generates a response to the input prompt.

        Args:
            messages: the prompt composed of a list of messages.
            stop: a list of strings on which the model should stop generating.
                If generation stops due to a stop token, the stop token itself
                SHOULD BE INCLUDED as part of the output. This is not enforced
                across models right now, but it's a good practice to follow since
                it makes it much easier to parse the output of the model
                downstream and understand why generation stopped.
            run_manager: A run manager with callbacks for the LLM.
        """
        # Replace this with actual logic to generate a response from a list
        # of messages.
        last_message = messages[-1]
        tokens = last_message.content[: self.parrot_buffer_length]
        ct_input_tokens = sum(len(message.content) for message in messages)
        ct_output_tokens = len(tokens)
        message = AIMessage(
            content=tokens,
            additional_kwargs={},  # Used to add additional payload to the message
            response_metadata={  # Use for response metadata
                "time_in_seconds": 3,
            },
            usage_metadata={
                "input_tokens": ct_input_tokens,
                "output_tokens": ct_output_tokens,
                "total_tokens": ct_input_tokens + ct_output_tokens,
            },
        )

        generation = ChatGeneration(message=message)
        return ChatResult(generations=[generation])

    def _stream(
        self,
        messages: List[BaseMessage],
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> Iterator[ChatGenerationChunk]:
        """Stream the output of the model.

        This method should be implemented if the model can generate output
        in a streaming fashion. If the model does not support streaming,
        do not implement it. In that case streaming requests will be automatically
        handled by the _generate method.

        Args:
            messages: the prompt composed of a list of messages.
            stop: a list of strings on which the model should stop generating.
                If generation stops due to a stop token, the stop token itself
                SHOULD BE INCLUDED as part of the output. This is not enforced
                across models right now, but it's a good practice to follow since
                it makes it much easier to parse the output of the model
                downstream and understand why generation stopped.
            run_manager: A run manager with callbacks for the LLM.
        """
        last_message = messages[-1]
        tokens = str(last_message.content[: self.parrot_buffer_length])
        ct_input_tokens = sum(len(message.content) for message in messages)

        for token in tokens:
            usage_metadata = UsageMetadata(
                {
                    "input_tokens": ct_input_tokens,
                    "output_tokens": 1,
                    "total_tokens": ct_input_tokens + 1,
                }
            )
            ct_input_tokens = 0
            chunk = ChatGenerationChunk(
                message=AIMessageChunk(content=token, usage_metadata=usage_metadata)
            )

            if run_manager:
                # This is optional in newer versions of LangChain
                # The on_llm_new_token will be called automatically
                run_manager.on_llm_new_token(token, chunk=chunk)

            yield chunk

        # Let's add some other information (e.g., response metadata)
        chunk = ChatGenerationChunk(
            message=AIMessageChunk(content="", response_metadata={"time_in_sec": 3})
        )
        if run_manager:
            # This is optional in newer versions of LangChain
            # The on_llm_new_token will be called automatically
            run_manager.on_llm_new_token(token, chunk=chunk)
        yield chunk

    @property
    def _llm_type(self) -> str:
        """Get the type of language model used by this chat model."""
        return "echoing-chat-model-advanced"

    @property
    def _identifying_params(self) -> Dict[str, Any]:
        """Return a dictionary of identifying parameters.

        This information is used by the LangChain callback system, which
        is used for tracing purposes, making it possible to monitor LLMs.
        """
        return {
            # The model name allows users to specify custom token counting
            # rules in LLM monitoring applications (e.g., in LangSmith users
            # can provide per token pricing for their model and monitor
            # costs for the given LLM.)
            "model_name": self.model_name,
        }
```
</details>
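Before moving on to tests, it can help to smoke-test the model by hand. For example (assuming the package layout above):

```python
from langchain_parrot_link.chat_models import ChatParrotLink

model = ChatParrotLink(model="bird-brain-001", parrot_buffer_length=3)

print(model.invoke("hello!").content)  # echoes the first 3 characters: "hel"
for chunk in model.stream("hello!"):
    print(chunk.content)  # one character per chunk, then a final metadata-only chunk
```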
## Testing
To implement our test files, we will subclass test classes from the `langchain_tests` package. These test classes contain the tests that will be run. We will just need to configure what model is tested, what parameters it is tested with, and specify any tests that should be skipped.
### Setup
First we need to install certain dependencies. These include:
- `pytest`: For running tests
- `pytest-socket`: For running unit tests
- `pytest-asyncio`: For testing async functionality
- `langchain-tests`: For importing standard tests
- `langchain-core`: This should already be installed, but is needed to define our integration.
If you followed the previous [bootstrapping guide](/docs/contributing/how_to/integrations/package/), these should already be installed.
### Add and configure standard tests
There are two namespaces in the langchain-tests package:
- [unit tests](../../../concepts/testing.mdx#unit-tests) (`langchain_tests.unit_tests`): designed to be used to test the component in isolation and without access to external services
- [integration tests](../../../concepts/testing.mdx#integration-tests) (`langchain_tests.integration_tests`): designed to be used to test the component with access to external services (in particular, the external service that the component is designed to interact with).
Both types of tests are implemented as [pytest class-based test suites](https://docs.pytest.org/en/7.1.x/getting-started.html#group-multiple-tests-in-a-class).
By subclassing the base classes for each type of standard test (see below), you get all of the standard tests for that type, and you can override the properties that the test suite uses to configure the tests.
Here's how you would configure the standard unit tests for the custom chat model:
```python title="tests/unit_tests/test_chat_models.py"
from typing import Type

from my_package.chat_models import MyChatModel
from langchain_tests.unit_tests import ChatModelUnitTests


class TestChatParrotLinkUnit(ChatModelUnitTests):
    @property
    def chat_model_class(self) -> Type[MyChatModel]:
        return MyChatModel

    @property
    def chat_model_params(self) -> dict:
        # These should be parameters used to initialize your integration for testing
        return {
            "model": "bird-brain-001",
            "temperature": 0,
            "parrot_buffer_length": 50,
        }
```
And here is the corresponding snippet for integration tests:
```python title="tests/integration_tests/test_chat_models.py"
from typing import Type

from my_package.chat_models import MyChatModel
from langchain_tests.integration_tests import ChatModelIntegrationTests


class TestChatParrotLinkIntegration(ChatModelIntegrationTests):
    @property
    def chat_model_class(self) -> Type[MyChatModel]:
        return MyChatModel

    @property
    def chat_model_params(self) -> dict:
        # These should be parameters used to initialize your integration for testing
        return {
            "model": "bird-brain-001",
            "temperature": 0,
            "parrot_buffer_length": 50,
        }
```
These two snippets should be written into `tests/unit_tests/test_chat_models.py` and `tests/integration_tests/test_chat_models.py`, respectively.
:::note
LangChain standard tests test a range of behaviors, from the most basic requirements to optional capabilities like multi-modal support. The above implementation will likely need to be updated to specify any tests that should be ignored. See [below](#skipping-tests) for detail.
:::
### Run standard tests
After setting tests up, you would run these with the following commands from your project root:
```shell
# run unit tests without network access
pytest --disable-socket --allow-unix-socket --asyncio-mode=auto tests/unit_tests
# run integration tests
pytest --asyncio-mode=auto tests/integration_tests
```
Our objective is for the pytest run to be successful. That is,
1. If a feature is intended to be supported by the model, it passes;
2. If a feature is not intended to be supported by the model, it is skipped.
### Skipping tests
LangChain standard tests test a range of behaviors, from the most basic requirements (generating a response to a query) to optional capabilities like multi-modal support, tool-calling, or support for messages generated from other providers. Tests for "optional" capabilities are controlled via a [set of properties](https://python.langchain.com/api_reference/standard_tests/unit_tests/langchain_tests.unit_tests.chat_models.ChatModelTests.html) that can be overridden on the test model subclass.
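For example, a model that does not support tool calling or image inputs could override the corresponding properties on its integration test class (the property names come from the reference linked above):

```python
class TestChatParrotLinkIntegration(ChatModelIntegrationTests):
    # ... chat_model_class and chat_model_params as above ...

    @property
    def has_tool_calling(self) -> bool:
        # ChatParrotLink does not implement bind_tools, so skip tool-calling tests.
        return False

    @property
    def supports_image_inputs(self) -> bool:
        return False
```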
### Test suite information and troubleshooting
What tests are run to test this integration?
If a test fails, what does that mean?
You can find information on the tests run for this integration in the [Standard Tests API Reference](https://python.langchain.com/api_reference/standard_tests/index.html).
// TODO: link to exact page for this integration test suite information

View File

@@ -1,5 +1,4 @@
---
pagination_prev: null
pagination_next: contributing/how_to/integrations/package
---
@@ -12,7 +11,7 @@ LangChain provides standard interfaces for several different components (languag
## Why contribute an integration to LangChain?
- **Discoverability:** LangChain is the most used framework for building LLM applications, with over 20 million monthly downloads. LangChain integrations are discoverable by a large community of GenAI builders.
- **Interoperability:** LangChain components expose a standard interface, allowing developers to easily swap them for each other. If you implement a LangChain integration, any developer using a different component will easily be able to swap yours in.
- **Interoptability:** LangChain components expose a standard interface, allowing developers to easily swap them for each other. If you implement a LangChain integration, any developer using a different component will easily be able to swap yours in.
- **Best Practices:** Through their standard interface, LangChain components encourage and facilitate best practices (streaming, async, etc)
@@ -38,6 +37,7 @@ While any component can be integrated into LangChain, there are specific types o
<li>Chat Models</li>
<li>Tools/Toolkits</li>
<li>Retrievers</li>
<li>Document Loaders</li>
<li>Vector Stores</li>
<li>Embedding Models</li>
</ul>
@@ -45,7 +45,6 @@ While any component can be integrated into LangChain, there are specific types o
<td>
<ul>
<li>LLMs (Text-Completion Models)</li>
<li>Document Loaders</li>
<li>Key-Value Stores</li>
<li>Document Transformers</li>
<li>Model Caches</li>
@@ -65,18 +64,42 @@ While any component can be integrated into LangChain, there are specific types o
In order to contribute an integration, you should follow these steps:
1. Confirm that your integration is in the [list of components](#components-to-integrate) we are currently encouraging.
2. [Implement your package](/docs/contributing/how_to/integrations/package/) and publish it to a public github repository.
3. [Implement the standard tests](./standard_tests) for your integration and successfully run them.
4. [Publish your integration](./publish.mdx) by publishing the package to PyPi and add docs in the `docs/docs/integrations` directory of the LangChain monorepo.
2. [Bootstrap your integration](/docs/contributing/how_to/integrations/package/).
3. Implement and test your integration following the [component-specific guides](#component-specific-guides).
4. [Publish your integration](/docs/contributing/how_to/integrations/publish/) in a Python package to PyPi.
5. [Optional] Open and merge a PR to add documentation for your integration to the official LangChain docs.
6. [Optional] Engage with the LangChain team for joint co-marketing ([see below](#co-marketing)).
## Component-specific guides
The guides below walk you through both implementing and testing specific components:
- [Chat Models](/docs/contributing/how_to/integrations/chat_models)
- [Tools]
- [Toolkits]
- [Retrievers]
- [Document Loaders]
- [Vector Stores]
- [Embedding Models]
## Standard Tests
Testing is a critical part of the development process that ensures your code works as expected and meets the desired quality standards.
In the LangChain ecosystem, we have 2 main types of tests: **unit tests** and **integration tests**.
These standard tests help maintain compatibility between different components and ensure reliability.
**Unit Tests**: Unit tests are designed to validate the smallest parts of your code—individual functions or methods—ensuring they work as expected in isolation. They do not rely on external systems or integrations.
**Integration Tests**: Integration tests validate that multiple components or systems work together as expected. For tools or integrations relying on external services, these tests often ensure end-to-end functionality.
Each type of integration has its own set of standard tests. Please see the relevant [component-specific guide](#component-specific-guides) for details on testing your integration.
## Co-Marketing
With over 20 million monthly downloads, LangChain has a large audience of developers building LLM applications.
Besides just adding integrations, we also like to show them examples of cool tools or APIs they can use.
While traditionally called "co-marketing", we like to think of this more as "co-education".
While traditionally called "co-marketing", we like to thing of this more as "co-education".
For that reason, while we are happy to highlight your integration through our social media channels, we prefer to highlight examples that also serve some educational purpose.
Our main social media channels are Twitter and LinkedIn.
@@ -87,5 +110,4 @@ Here are some heuristics for types of content we are excited to promote:
- **End-to-end applications:** End-to-end applications are great resources for developers looking to build. We prefer to highlight applications that are more complex/agentic in nature, and that use [LangGraph](https://github.com/langchain-ai/langgraph) as the orchestration framework. We get particularly excited about anything involving long-term memory, human-in-the-loop interaction patterns, or multi-agent architectures.
- **Research:** We love highlighting novel research! Whether it is research built on top of LangChain or that integrates with it.
## Further Reading
To get started, let's learn [how to implement an integration package](/docs/contributing/how_to/integrations/package/) for LangChain.
// TODO: set up some form to gather these requests.

View File

@@ -1,110 +1,24 @@
---
pagination_next: contributing/how_to/integrations/standard_tests
pagination_prev: contributing/how_to/integrations/index
pagination_next: contributing/how_to/integrations/index
pagination_prev: null
---
# How to implement an integration package
# How to bootstrap a new integration package
This guide walks through the process of implementing a LangChain integration
package.
This guide walks through the process of publishing a new LangChain integration
package to PyPi.
Integration packages are just Python packages that can be installed with `pip install <your-package>`,
which contain classes that are compatible with LangChain's core interfaces.
We will cover:
1. (Optional) How to bootstrap a new integration package
2. How to implement components, such as [chat models](/docs/concepts/chat_models/) and [vector stores](/docs/concepts/vectorstores/), that adhere
to the LangChain interface;
## (Optional) bootstrapping a new integration package
In this section, we will outline 2 options for bootstrapping a new integration package,
and you're welcome to use other tools if you prefer!
1. **langchain-cli**: This is a command-line tool that can be used to bootstrap a new integration package with a template for LangChain components and Poetry for dependency management.
2. **Poetry**: This is a Python dependency management tool that can be used to bootstrap a new Python package with dependencies. You can then add LangChain components to this package.
<details>
<summary>Option 1: langchain-cli (recommended)</summary>
In this guide, we will be using the `langchain-cli` to create a new integration package
from a template, which can be edited to implement your LangChain components.
### **Prerequisites**
- [GitHub](https://github.com) account
- [PyPi](https://pypi.org/) account
### Bootstrapping a new Python package with langchain-cli
First, install `langchain-cli` and `poetry`:
```bash
pip install langchain-cli poetry
```
Next, come up with a name for your package. For this guide, we'll use `langchain-parrot-link`.
You can confirm that the name is available on PyPi by searching for it on the [PyPi website](https://pypi.org/).
Next, create your new Python package with `langchain-cli`, and navigate into the new directory with `cd`:
```bash
langchain-cli integration new
> The name of the integration to create (e.g. `my-integration`): parrot-link
> Name of integration in PascalCase [ParrotLink]:
cd parrot-link
```
Next, let's add any dependencies we need:
```bash
poetry add my-integration-sdk
```
We can also add some `typing` or `test` dependencies in a separate poetry dependency group.
```bash
poetry add --group typing my-typing-dep
poetry add --group test my-test-dep
```
And finally, have poetry set up a virtual environment with your dependencies, as well
as your integration package:
```bash
poetry install --with lint,typing,test,test_integration
```
You now have a new Python package with a template for LangChain components! This
template comes with files for each integration type, and you're welcome to duplicate or
delete any of these files as needed (including the associated test files).
To create any individual files from the [template], you can run e.g.:
```bash
langchain-cli integration new \
--name parrot-link \
--name-class ParrotLink \
--src integration_template/chat_models.py \
--dst langchain_parrot_link/chat_models_2.py
```
</details>
<details>
<summary>Option 2: Poetry (manual)</summary>
In this guide, we will be using [Poetry](https://python-poetry.org/) for
dependency management and packaging, and you're welcome to use any other tools you prefer.
### **Prerequisites**
## **Prerequisites**
- [GitHub](https://github.com) account
- [PyPi](https://pypi.org/) account
### Bootstrapping a new Python package with Poetry
## Bootstrapping a new Python package with Poetry
First, install Poetry:
@@ -132,7 +46,7 @@ We will also add some `test` dependencies in a separate poetry dependency group.
you are not using Poetry, we recommend adding these in a way that won't package them
with your published package, or just installing them separately when you run tests.
`langchain-tests` will provide the [standard tests](../standard_tests) we will use later.
`langchain-tests` will provide the [standard tests](/docs/contributing/how_to/integrations/#standard-tests) we will use later.
We recommend pinning these to the latest version: <img src="https://img.shields.io/pypi/v/langchain-tests" style={{position:"relative",top:4,left:3}} />
Note: Replace `<latest_version>` with the latest version of `langchain-tests` below.
@@ -150,34 +64,9 @@ poetry install --with test
You're now ready to start writing your integration package!
### Writing your integration
See our [component-specific guides](/docs/contributing/how_to/integrations/#component-specific-guides) for detail on implementing and testing each component.
Let's say you're building a simple integration package that provides a `ChatParrotLink`
chat model integration for LangChain. Here's a simple example of what your project
structure might look like:
```plaintext
langchain-parrot-link/
├── langchain_parrot_link/
│ ├── __init__.py
│ └── chat_models.py
├── tests/
│ ├── __init__.py
│ └── test_chat_models.py
├── pyproject.toml
└── README.md
```
All of these files should already exist from step 1, except for
`chat_models.py` and `test_chat_models.py`! We will implement `test_chat_models.py`
later, following the [standard tests](../standard_tests) guide.
For `chat_models.py`, simply paste the contents of the chat model implementation
[above](#implementing-langchain-components).
</details>
### Push your package to a public Github repository
## Push your package to a public Github repository
This is only required if you want to publish your integration in the LangChain documentation.
@@ -185,290 +74,6 @@ This is only required if you want to publish your integration in the LangChain d
2. Push your code to the repository.
3. Confirm that your repository is viewable by the public (e.g. in a private browsing window, where you're not logged into Github).
## Implementing LangChain components
LangChain components are subclasses of base classes in [langchain-core](/docs/concepts/architecture/#langchain-core).
Examples include [chat models](/docs/concepts/chat_models/),
[vector stores](/docs/concepts/vectorstores/), [tools](/docs/concepts/tools/),
[embedding models](/docs/concepts/embedding_models/) and [retrievers](/docs/concepts/retrievers/).
Your integration package will typically implement a subclass of at least one of these
components. Expand the tabs below to see details on each.
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import CodeBlock from '@theme/CodeBlock';
<Tabs>
<TabItem value="chat_models" label="Chat models">
Refer to the [Custom Chat Model Guide](/docs/how_to/custom_chat_model) for
detail on a starter chat model [implementation](/docs/how_to/custom_chat_model/#implementation).
You can start from the following template or langchain-cli command:
```bash
langchain-cli integration new \
--name parrot-link \
--name-class ParrotLink \
--src integration_template/chat_models.py \
--dst langchain_parrot_link/chat_models.py
```
<details>
<summary>Example chat model code</summary>
import ChatModelSource from '../../../../src/theme/integration_template/integration_template/chat_models.py';
<CodeBlock language="python" title="langchain_parrot_link/chat_models.py">
{
ChatModelSource.replaceAll('__ModuleName__', 'ParrotLink')
.replaceAll('__package_name__', 'langchain-parrot-link')
.replaceAll('__MODULE_NAME__', 'PARROT_LINK')
.replaceAll('__module_name__', 'langchain_parrot_link')
}
</CodeBlock>
</details>
</TabItem>
<TabItem value="vector_stores" label="Vector stores">
Your vector store implementation will depend on your chosen database technology.
`langchain-core` includes a minimal
[in-memory vector store](https://python.langchain.com/api_reference/core/vectorstores/langchain_core.vectorstores.in_memory.InMemoryVectorStore.html)
that we can use as a guide. You can access the code [here](https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/vectorstores/in_memory.py).
All vector stores must inherit from the [VectorStore](https://python.langchain.com/api_reference/core/vectorstores/langchain_core.vectorstores.base.VectorStore.html)
base class. This interface consists of methods for writing, deleting and searching
for documents in the vector store.
`VectorStore` supports a variety of synchronous and asynchronous search types (e.g.,
nearest-neighbor or maximum marginal relevance), as well as interfaces for adding
documents to the store. See the [API Reference](https://python.langchain.com/api_reference/core/vectorstores/langchain_core.vectorstores.base.VectorStore.html)
for all supported methods. The required methods are tabulated below:
| Method/Property | Description |
|------------------------ |------------------------------------------------------|
| `add_documents` | Add documents to the vector store. |
| `delete` | Delete selected documents from vector store (by IDs) |
| `get_by_ids` | Get selected documents from vector store (by IDs) |
| `similarity_search` | Get documents most similar to a query. |
| `embeddings` (property) | Embeddings object for vector store. |
| `from_texts` | Instantiate vector store via adding texts. |
Note that `InMemoryVectorStore` implements some optional search types, as well as
convenience methods for loading and dumping the object to a file, but this is not
necessary for all implementations.
:::tip
The [in-memory vector store](https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/vectorstores/in_memory.py)
is tested against the standard tests in the LangChain Github repository.
:::
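To make the required surface concrete, here is a minimal skeleton for a hypothetical `ParrotVectorStore` (method bodies elided; signatures follow the base class):

```python
from typing import Any, List, Optional, Sequence

from langchain_core.documents import Document
from langchain_core.embeddings import Embeddings
from langchain_core.vectorstores import VectorStore


class ParrotVectorStore(VectorStore):
    def __init__(self, embedding: Embeddings) -> None:
        self._embedding = embedding

    @property
    def embeddings(self) -> Embeddings:
        return self._embedding

    def add_documents(self, documents: List[Document], **kwargs: Any) -> List[str]:
        raise NotImplementedError

    def delete(self, ids: Optional[List[str]] = None, **kwargs: Any) -> Optional[bool]:
        raise NotImplementedError

    def get_by_ids(self, ids: Sequence[str]) -> List[Document]:
        raise NotImplementedError

    def similarity_search(
        self, query: str, k: int = 4, **kwargs: Any
    ) -> List[Document]:
        raise NotImplementedError

    @classmethod
    def from_texts(
        cls,
        texts: List[str],
        embedding: Embeddings,
        metadatas: Optional[List[dict]] = None,
        **kwargs: Any,
    ) -> "ParrotVectorStore":
        # Convenience constructor: wrap raw texts as Documents and add them.
        store = cls(embedding=embedding)
        docs = [
            Document(page_content=text, metadata=(metadatas or [{}] * len(texts))[i])
            for i, text in enumerate(texts)
        ]
        store.add_documents(docs)
        return store
```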
<details>
<summary>Example vector store code</summary>
import VectorstoreSource from '../../../../src/theme/integration_template/integration_template/vectorstores.py';
<CodeBlock language="python" title="langchain_parrot_link/vectorstores.py">
{
VectorstoreSource.replaceAll('__ModuleName__', 'ParrotLink')
.replaceAll('__package_name__', 'langchain-parrot-link')
.replaceAll('__MODULE_NAME__', 'PARROT_LINK')
.replaceAll('__module_name__', 'langchain_parrot_link')
}
</CodeBlock>
</details>
</TabItem>
<TabItem value="embeddings" label="Embeddings">
Embeddings are used to convert `str` objects from `Document.page_content` fields
into a vector representation (represented as a list of floats).
Refer to the [Custom Embeddings Guide](/docs/how_to/custom_embeddings) for
detail on a starter embeddings [implementation](/docs/how_to/custom_embeddings/#implementation).
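At its core, an embeddings integration only needs the two methods below. This toy sketch returns length-based vectors purely for illustration:

```python
from typing import List

from langchain_core.embeddings import Embeddings


class ParrotLinkEmbeddings(Embeddings):
    """Toy embeddings model: vectors are derived from text length."""

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        return [[float(len(text)), 0.1] for text in texts]

    def embed_query(self, text: str) -> List[float]:
        return self.embed_documents([text])[0]
```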
You can start from the following template or langchain-cli command:
```bash
langchain-cli integration new \
--name parrot-link \
--name-class ParrotLink \
--src integration_template/embeddings.py \
--dst langchain_parrot_link/embeddings.py
```
<details>
<summary>Example embeddings code</summary>
import EmbeddingsSource from '/src/theme/integration_template/integration_template/embeddings.py';
<CodeBlock language="python" title="langchain_parrot_link/embeddings.py">
{
EmbeddingsSource.replaceAll('__ModuleName__', 'ParrotLink')
.replaceAll('__package_name__', 'langchain-parrot-link')
.replaceAll('__MODULE_NAME__', 'PARROT_LINK')
.replaceAll('__module_name__', 'langchain_parrot_link')
}
</CodeBlock>
</details>
</TabItem>
<TabItem value="tools" label="Tools">
Tools are used in 2 main ways:
1. To define an "input schema" or "args schema" to pass to a chat model's tool calling
feature along with a text request, such that the chat model can generate a "tool call",
or parameters to call the tool with.
2. To take a "tool call" as generated above, and take some action and return a response
that can be passed back to the chat model as a ToolMessage.
The `Tools` class must inherit from the [BaseTool](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.base.BaseTool.html#langchain_core.tools.base.BaseTool) base class. This interface has 3 properties and 2 methods that should be implemented in a
subclass.
| Method/Property | Description |
|------------------------ |------------------------------------------------------|
| `name` | Name of the tool (passed to the LLM too). |
| `description` | Description of the tool (passed to the LLM too). |
| `args_schema` | Define the schema for the tool's input arguments. |
| `_run` | Run the tool with the given arguments. |
| `_arun` | Asynchronously run the tool with the given arguments.|
### Properties
`name`, `description`, and `args_schema` are all properties that should be implemented
in the subclass. `name` and `description` are strings that are used to identify the tool
and provide a description of what the tool does. Both of these are passed to the LLM,
and users may override these values depending on the LLM they are using as a form of
"prompt engineering." Giving these a concise and LLM-usable name and description is
important for the initial user experience of the tool.
`args_schema` is a Pydantic `BaseModel` that defines the schema for the tool's input
arguments. This is used to validate the input arguments to the tool, and to provide
a schema for the LLM to fill out when calling the tool. Similar to the `name` and
`description` of the overall Tool class, the fields' names (the variable name) and
description (part of `Field(..., description="description")`) are passed to the LLM,
and the values in these fields should be concise and LLM-usable.
### Run Methods
`_run` is the main method that should be implemented in the subclass. This method
takes in the arguments from `args_schema` and runs the tool, returning a string
response. This method is usually called in a LangGraph [`ToolNode`](https://langchain-ai.github.io/langgraph/how-tos/tool-calling/), and can also be called in a legacy
`langchain.agents.AgentExecutor`.
`_arun` is optional because by default, `_run` will be run in an async executor.
However, if your tool is calling any APIs or doing any async work, you should implement
this method to run the tool asynchronously in addition to `_run`.
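Putting the pieces above together, a minimal tool might look like the following sketch (the names and the multiply logic are placeholders):

```python
from typing import Optional, Type

from langchain_core.callbacks import CallbackManagerForToolRun
from langchain_core.tools import BaseTool
from pydantic import BaseModel, Field


class MultiplyInput(BaseModel):
    a: int = Field(..., description="First factor")
    b: int = Field(..., description="Second factor")


class ParrotMultiplyTool(BaseTool):
    name: str = "parrot_multiply"
    description: str = "Multiply two integers."
    args_schema: Type[BaseModel] = MultiplyInput

    def _run(
        self,
        a: int,
        b: int,
        run_manager: Optional[CallbackManagerForToolRun] = None,
    ) -> str:
        # The returned string is what flows back to the model as a ToolMessage.
        return str(a * b)
```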
### Implementation
You can start from the following template or langchain-cli command:
```bash
langchain-cli integration new \
--name parrot-link \
--name-class ParrotLink \
--src integration_template/tools.py \
--dst langchain_parrot_link/tools.py
```
<details>
<summary>Example tool code</summary>
import ToolSource from '/src/theme/integration_template/integration_template/tools.py';
<CodeBlock language="python" title="langchain_parrot_link/tools.py">
{
ToolSource.replaceAll('__ModuleName__', 'ParrotLink')
.replaceAll('__package_name__', 'langchain-parrot-link')
.replaceAll('__MODULE_NAME__', 'PARROT_LINK')
.replaceAll('__module_name__', 'langchain_parrot_link')
}
</CodeBlock>
</details>
</TabItem>
<TabItem value="retrievers" label="Retrievers">
Retrievers are used to retrieve documents from APIs, databases, or other sources
based on a query. The `Retriever` class must inherit from the [BaseRetriever](https://python.langchain.com/api_reference/core/retrievers/langchain_core.retrievers.BaseRetriever.html) base class. This interface has 1 attribute and 2 methods that should be implemented in a subclass.
| Method/Property | Description |
|------------------------ |------------------------------------------------------|
| `k` | Default number of documents to retrieve (configurable). |
| `_get_relevant_documents`| Retrieve documents based on a query. |
| `_aget_relevant_documents`| Asynchronously retrieve documents based on a query. |
### Attributes
`k` is an attribute that should be implemented in the subclass. This attribute
can simply be defined at the top of the class with a default value like
`k: int = 5`. This attribute is the default number of documents to retrieve
from the retriever, and can be overridden by the user when constructing or calling
the retriever.
### Methods
`_get_relevant_documents` is the main method that should be implemented in the subclass.
This method takes in a query and returns a list of `Document` objects, which have 2
main properties:
- `page_content` - the text content of the document
- `metadata` - a dictionary of metadata about the document
Retrievers are typically directly invoked by a user, e.g. as
`MyRetriever(k=4).invoke("query")`, which will automatically call `_get_relevant_documents`
under the hood.
`_aget_relevant_documents` is optional because by default, `_get_relevant_documents` will
be run in an async executor. However, if your retriever is calling any APIs or doing
any async work, you should implement this method to run the retriever asynchronously
in addition to `_get_relevant_documents` for performance reasons.
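Concretely, a toy retriever implementing just the required pieces might look like this sketch:

```python
from typing import List

from langchain_core.callbacks import CallbackManagerForRetrieverRun
from langchain_core.documents import Document
from langchain_core.retrievers import BaseRetriever


class ParrotRetriever(BaseRetriever):
    k: int = 5

    def _get_relevant_documents(
        self, query: str, *, run_manager: CallbackManagerForRetrieverRun
    ) -> List[Document]:
        # Toy logic: "retrieve" k copies of the query itself.
        return [Document(page_content=query, metadata={"rank": i}) for i in range(self.k)]
```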
### Implementation
You can start from the following template or langchain-cli command:
```bash
langchain-cli integration new \
--name parrot-link \
--name-class ParrotLink \
--src integration_template/retrievers.py \
--dst langchain_parrot_link/retrievers.py
```
<details>
<summary>Example retriever code</summary>
import RetrieverSource from '/src/theme/integration_template/integration_template/retrievers.py';
<CodeBlock language="python" title="langchain_parrot_link/retrievers.py">
{
RetrieverSource.replaceAll('__ModuleName__', 'ParrotLink')
.replaceAll('__package_name__', 'langchain-parrot-link')
.replaceAll('__MODULE_NAME__', 'PARROT_LINK')
.replaceAll('__module_name__', 'langchain_parrot_link')
}
</CodeBlock>
</details>
</TabItem>
</Tabs>
---
## Next Steps
Now that you've implemented your package, you can move on to [testing your integration](../standard_tests) for your integration and successfully run them.
Now that you've bootstrapped your package, you can move on to [implementing and testing](/docs/contributing/how_to/integrations/#component-specific-guides) your integration.

View File

@@ -1,5 +1,5 @@
---
pagination_prev: contributing/how_to/integrations/standard_tests
pagination_prev: contributing/how_to/integrations/index
pagination_next: null
---
@@ -12,7 +12,7 @@ Now that your package is implemented and tested, you can:
## Publishing your package to PyPi
This guide assumes you have already implemented your package and written tests for it. If you haven't done that yet, please refer to the [implementation guide](../package) and the [testing guide](../standard_tests).
This guide assumes you have already implemented your package and written tests for it. If you haven't done that yet, please refer to the [component-specific guides](/docs/contributing/how_to/integrations/#component-specific-guides).
Note that Poetry is not required to publish a package to PyPi, and we're using it in this guide end-to-end for convenience.
You are welcome to publish your package using any other method you prefer.

View File

@@ -1,393 +0,0 @@
---
pagination_next: contributing/how_to/integrations/publish
pagination_prev: contributing/how_to/integrations/package
---
# How to add standard tests to an integration
When creating either a custom class for yourself or to publish in a LangChain integration, it is important to add standard tests to ensure it works as expected. This guide will show you how to add standard tests to each integration type.
## Setup
First, let's install 2 dependencies:
- `langchain-core` will define the interfaces we want to import to define our custom tool.
- `langchain-tests` will provide the standard tests we want to use, as well as pytest plugins necessary to run them. Recommended to pin to the latest version: <img src="https://img.shields.io/pypi/v/langchain-tests" style={{position:"relative",top:4,left:3}} />
:::note
Because added tests in new versions of `langchain-tests` can break your CI/CD pipelines, we recommend pinning the
version of `langchain-tests` to avoid unexpected changes.
:::
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
<Tabs>
<TabItem value="poetry" label="Poetry" default>
If you followed the [previous guide](../package), you should already have these dependencies installed!
```bash
poetry add langchain-core
poetry add --group test langchain-tests==<latest_version>
poetry install --with test
```
</TabItem>
<TabItem value="pip" label="Pip">
```bash
pip install -U langchain-core langchain-tests
# install current package in editable mode
pip install --editable .
```
</TabItem>
</Tabs>
## Add and configure standard tests
There are 2 namespaces in the `langchain-tests` package:
- [unit tests](../../../concepts/testing.mdx#unit-tests) (`langchain_tests.unit_tests`): designed to be used to test the component in isolation and without access to external services
- [integration tests](../../../concepts/testing.mdx#integration-tests) (`langchain_tests.integration_tests`): designed to be used to test the component with access to external services (in particular, the external service that the component is designed to interact with).
Both types of tests are implemented as [`pytest` class-based test suites](https://docs.pytest.org/en/7.1.x/getting-started.html#group-multiple-tests-in-a-class).
By subclassing the base classes for each type of standard test (see below), you get all of the standard tests for that type, and you
can override the properties that the test suite uses to configure the tests.
In order to run the tests in the same way as this guide, we recommend subclassing these
classes in test files under two test subdirectories:
- `tests/unit_tests` for unit tests
- `tests/integration_tests` for integration tests
### Implementing standard tests
import CodeBlock from '@theme/CodeBlock';
In the following tabs, we show how to implement the standard tests for
each component type:
<Tabs>
<TabItem value="chat_models" label="Chat models">
To configure standard tests for a chat model, we subclass `ChatModelUnitTests` and `ChatModelIntegrationTests`. On each subclass, we override the following `@property` methods to specify the chat model to be tested and the chat model's configuration:
| Property | Description |
| --- | --- |
| `chat_model_class` | The class for the chat model to be tested |
| `chat_model_params` | The parameters to pass to the chat model's constructor |
Additionally, chat model standard tests test a range of behaviors, from the most basic requirements (generating a response to a query) to optional capabilities like multi-modal support and tool-calling. For a test run to be successful:
1. If a feature is intended to be supported by the model, it should pass;
2. If a feature is not intended to be supported by the model, it should be skipped.
Tests for "optional" capabilities are controlled via a set of properties that can be overridden on the test model subclass.
You can see the **entire list of configurable capabilities** in the API references for
[unit tests](https://python.langchain.com/api_reference/standard_tests/unit_tests/langchain_tests.unit_tests.chat_models.ChatModelUnitTests.html)
and [integration tests](https://python.langchain.com/api_reference/standard_tests/integration_tests/langchain_tests.integration_tests.chat_models.ChatModelIntegrationTests.html).
For example, to enable integration tests for image inputs, we can implement
```python
@property
def supports_image_inputs(self) -> bool:
    return True
```
on the integration test class.
:::note
Details on what tests are run, how each test can be skipped, and troubleshooting tips for each test can be found in the API references. See details:
- [Unit tests API reference](https://python.langchain.com/api_reference/standard_tests/unit_tests/langchain_tests.unit_tests.chat_models.ChatModelUnitTests.html)
- [Integration tests API reference](https://python.langchain.com/api_reference/standard_tests/integration_tests/langchain_tests.integration_tests.chat_models.ChatModelIntegrationTests.html)
:::
Unit test example:
import ChatUnitSource from '../../../../src/theme/integration_template/tests/unit_tests/test_chat_models.py';
<CodeBlock language="python" title="tests/unit_tests/test_chat_models.py">
{
ChatUnitSource.replaceAll('__ModuleName__', 'ParrotLink')
.replaceAll('__package_name__', 'langchain-parrot-link')
.replaceAll('__MODULE_NAME__', 'PARROT_LINK')
.replaceAll('__module_name__', 'langchain_parrot_link')
}
</CodeBlock>
Integration test example:
import ChatIntegrationSource from '../../../../src/theme/integration_template/tests/integration_tests/test_chat_models.py';
<CodeBlock language="python" title="tests/integration_tests/test_chat_models.py">
{
ChatIntegrationSource.replaceAll('__ModuleName__', 'ParrotLink')
.replaceAll('__package_name__', 'langchain-parrot-link')
.replaceAll('__MODULE_NAME__', 'PARROT_LINK')
.replaceAll('__module_name__', 'langchain_parrot_link')
}
</CodeBlock>
</TabItem>
<TabItem value="vector_stores" label="Vector stores">
Here's how you would configure the standard tests for a typical vector store (using
`ParrotVectorStore` as a placeholder):
Vector store tests do not have optional capabilities to be configured at this time.
import VectorStoreIntegrationSource from '../../../../src/theme/integration_template/tests/integration_tests/test_vectorstores.py';
<CodeBlock language="python" title="tests/integration_tests/test_vectorstores.py">
{
VectorStoreIntegrationSource.replaceAll('__ModuleName__', 'Parrot')
.replaceAll('__package_name__', 'langchain-parrot-link')
.replaceAll('__MODULE_NAME__', 'PARROT')
.replaceAll('__module_name__', 'langchain_parrot_link')
}
</CodeBlock>
Configuring the tests consists of implementing pytest fixtures for setting up an
empty vector store and tearing down the vector store after the test run ends.
| Fixture | Description |
| --- | --- |
| `vectorstore` | A generator that yields an empty vector store for unit tests. The vector store is cleaned up after the test run ends. |
For example, below is the `VectorStoreIntegrationTests` class for the [Chroma](https://python.langchain.com/docs/integrations/vectorstores/chroma/)
integration:
```python
from typing import Generator
import pytest
from langchain_core.vectorstores import VectorStore
from langchain_tests.integration_tests.vectorstores import VectorStoreIntegrationTests
from langchain_chroma import Chroma
class TestChromaStandard(VectorStoreIntegrationTests):
@pytest.fixture()
def vectorstore(self) -> Generator[VectorStore, None, None]: # type: ignore
"""Get an empty vectorstore for unit tests."""
store = Chroma(embedding_function=self.get_embeddings())
try:
yield store
finally:
store.delete_collection()
```
Note that before the initial `yield`, we instantiate the vector store with an
[embeddings](/docs/concepts/embedding_models/) object. This is a pre-defined
["fake" embeddings model](https://python.langchain.com/api_reference/standard_tests/integration_tests/langchain_tests.integration_tests.vectorstores.VectorStoreIntegrationTests.html#langchain_tests.integration_tests.vectorstores.VectorStoreIntegrationTests.get_embeddings)
that will generate short, arbitrary vectors for documents. You can use a different
embeddings object if desired.
In the `finally` block, we call whatever integration-specific logic is needed to
bring the vector store back to a clean state. This logic runs between each test, and
it executes even when a test fails.
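The same pattern applies if your vector store exposes a different cleanup mechanism. As a sketch, assuming a hypothetical `ParrotVectorStore` whose `delete` method clears all documents when called without ids (and reusing the imports from the Chroma example above):
```python
@pytest.fixture()
def vectorstore(self) -> Generator[VectorStore, None, None]:
    store = ParrotVectorStore(embedding=self.get_embeddings())
    try:
        yield store
    finally:
        # Assumption: calling delete() with no ids clears the store.
        store.delete()
```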
:::note
Details on what tests are run and troubleshooting tips for each test can be found in the [API reference](https://python.langchain.com/api_reference/standard_tests/integration_tests/langchain_tests.integration_tests.vectorstores.VectorStoreIntegrationTests.html).
:::
</TabItem>
<TabItem value="embeddings" label="Embeddings">
To configure standard tests for an embeddings model, we subclass `EmbeddingsUnitTests` and `EmbeddingsIntegrationTests`. On each subclass, we override the following `@property` methods to specify the embeddings model to be tested and the embeddings model's configuration:
| Property | Description |
| --- | --- |
| `embeddings_class` | The class for the embeddings model to be tested |
| `embedding_model_params` | The parameters to pass to the embeddings model's constructor |
:::note
Details on what tests are run, how each test can be skipped, and troubleshooting tips for each test can be found in the API references. See details:
- [Unit tests API reference](https://python.langchain.com/api_reference/standard_tests/unit_tests/langchain_tests.unit_tests.embeddings.EmbeddingsUnitTests.html)
- [Integration tests API reference](https://python.langchain.com/api_reference/standard_tests/integration_tests/langchain_tests.integration_tests.embeddings.EmbeddingsIntegrationTests.html)
:::
Unit test example:
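In outline, the unit test class mirrors the integration test class further below, but subclasses `EmbeddingsUnitTests`:
```python title="tests/unit_tests/test_embeddings.py"
from typing import Type

from langchain_parrot_link.embeddings import ParrotLinkEmbeddings
from langchain_tests.unit_tests import EmbeddingsUnitTests


class TestParrotLinkEmbeddingsUnit(EmbeddingsUnitTests):
    @property
    def embeddings_class(self) -> Type[ParrotLinkEmbeddings]:
        return ParrotLinkEmbeddings

    @property
    def embedding_model_params(self) -> dict:
        # Parameters passed to the embeddings model constructor.
        return {"model": "nest-embed-001"}
```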
import EmbeddingsUnitSource from '../../../../src/theme/integration_template/tests/unit_tests/test_embeddings.py';
<CodeBlock language="python" title="tests/unit_tests/test_embeddings.py">
{
EmbeddingsUnitSource.replaceAll('__ModuleName__', 'ParrotLink')
.replaceAll('__package_name__', 'langchain-parrot-link')
.replaceAll('__MODULE_NAME__', 'PARROT_LINK')
.replaceAll('__module_name__', 'langchain_parrot_link')
}
</CodeBlock>
Integration test example:
```python title="tests/integration_tests/test_embeddings.py"
from typing import Type
from langchain_parrot_link.embeddings import ParrotLinkEmbeddings
from langchain_tests.integration_tests import EmbeddingsIntegrationTests
class TestParrotLinkEmbeddingsIntegration(EmbeddingsIntegrationTests):
@property
def embeddings_class(self) -> Type[ParrotLinkEmbeddings]:
return ParrotLinkEmbeddings
@property
def embedding_model_params(self) -> dict:
return {"model": "nest-embed-001"}
```
import EmbeddingsIntegrationSource from '../../../../src/theme/integration_template/tests/integration_tests/test_embeddings.py';
<CodeBlock language="python" title="tests/integration_tests/test_embeddings.py">
{
EmbeddingsIntegrationSource.replaceAll('__ModuleName__', 'ParrotLink')
.replaceAll('__package_name__', 'langchain-parrot-link')
.replaceAll('__MODULE_NAME__', 'PARROT_LINK')
.replaceAll('__module_name__', 'langchain_parrot_link')
}
</CodeBlock>
</TabItem>
<TabItem value="tools" label="Tools">
To configure standard tests for a tool, we subclass `ToolsUnitTests` and
`ToolsIntegrationTests`. On each subclass, we override the following `@property` methods
to specify the tool to be tested and the tool's configuration:
| Property | Description |
| --- | --- |
| `tool_constructor` | The constructor for the tool to be tested, or an instantiated tool. |
| `tool_constructor_params` | The parameters to pass to the tool (optional). |
| `tool_invoke_params_example` | An example of the parameters to pass to the tool's `invoke` method. |
If you are testing a tool class, pass the class (e.g., `MyTool`) to `tool_constructor` and supply any constructor parameters via `tool_constructor_params`.
If you are testing an already-instantiated tool, pass the instance to `tool_constructor` and do not
override `tool_constructor_params`.
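For illustration, here is a minimal sketch of a unit test class for a hypothetical `ParrotTool`; the tool class and the invoke parameters are placeholders, not part of any real package:
```python
from typing import Type

from langchain_parrot_link.tools import ParrotTool  # hypothetical tool class
from langchain_tests.unit_tests import ToolsUnitTests


class TestParrotToolUnit(ToolsUnitTests):
    @property
    def tool_constructor(self) -> Type[ParrotTool]:
        return ParrotTool

    @property
    def tool_constructor_params(self) -> dict:
        # Constructor parameters; leave empty if none are needed.
        return {}

    @property
    def tool_invoke_params_example(self) -> dict:
        # Example of valid parameters for the tool's `invoke` method.
        return {"a": 2, "b": 3}
```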
:::note
Details on what tests are run, how each test can be skipped, and troubleshooting tips for each test can be found in the API references. See details:
- [Unit tests API reference](https://python.langchain.com/api_reference/standard_tests/unit_tests/langchain_tests.unit_tests.tools.ToolsUnitTests.html)
- [Integration tests API reference](https://python.langchain.com/api_reference/standard_tests/integration_tests/langchain_tests.integration_tests.tools.ToolsIntegrationTests.html)
:::
import ToolsUnitSource from '../../../../src/theme/integration_template/tests/unit_tests/test_tools.py';
<CodeBlock language="python" title="tests/unit_tests/test_tools.py">
{
ToolsUnitSource.replaceAll('__ModuleName__', 'Parrot')
.replaceAll('__package_name__', 'langchain-parrot-link')
.replaceAll('__MODULE_NAME__', 'PARROT')
.replaceAll('__module_name__', 'langchain_parrot_link')
}
</CodeBlock>
import ToolsIntegrationSource from '../../../../src/theme/integration_template/tests/integration_tests/test_tools.py';
<CodeBlock language="python" title="tests/integration_tests/test_tools.py">
{
ToolsIntegrationSource.replaceAll('__ModuleName__', 'Parrot')
.replaceAll('__package_name__', 'langchain-parrot-link')
.replaceAll('__MODULE_NAME__', 'PARROT')
.replaceAll('__module_name__', 'langchain_parrot_link')
}
</CodeBlock>
</TabItem>
<TabItem value="retrievers" label="Retrievers">
To configure standard tests for a retriever, we subclass `RetrieversIntegrationTests`.
On the subclass, we override the following `@property` methods to specify the retriever
to be tested and its configuration:
| Property | Description |
| --- | --- |
| `retriever_constructor` | The class for the retriever to be tested |
| `retriever_constructor_params` | The parameters to pass to the retriever's constructor |
| `retriever_query_example` | An example of the query to pass to the retriever's `invoke` method |
:::note
Details on what tests are run and troubleshooting tips for each test can be found in the [API reference](https://python.langchain.com/api_reference/standard_tests/integration_tests/langchain_tests.integration_tests.retrievers.RetrieversIntegrationTests.html).
:::
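For illustration, a minimal integration test class for a hypothetical `ParrotRetriever` might look like this (the constructor parameters and query are placeholders):
```python
from typing import Type

from langchain_parrot_link.retrievers import ParrotRetriever  # hypothetical retriever class
from langchain_tests.integration_tests import RetrieversIntegrationTests


class TestParrotRetriever(RetrieversIntegrationTests):
    @property
    def retriever_constructor(self) -> Type[ParrotRetriever]:
        return ParrotRetriever

    @property
    def retriever_constructor_params(self) -> dict:
        # Parameters passed to the retriever constructor; `k` is illustrative.
        return {"k": 2}

    @property
    def retriever_query_example(self) -> str:
        # An example query to pass to the retriever's `invoke` method.
        return "example query"
```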
import RetrieverIntegrationSource from '../../../../src/theme/integration_template/tests/integration_tests/test_retrievers.py';
<CodeBlock language="python" title="tests/integration_tests/test_retrievers.py">
{
RetrieverIntegrationSource.replaceAll('__ModuleName__', 'Parrot')
.replaceAll('__package_name__', 'langchain-parrot-link')
.replaceAll('__MODULE_NAME__', 'PARROT')
.replaceAll('__module_name__', 'langchain_parrot_link')
}
</CodeBlock>
</TabItem>
</Tabs>
---
### Running the tests
You can run these with the following commands from your project root:
<Tabs>
<TabItem value="poetry" label="Poetry" default>
```bash
# run unit tests without network access
poetry run pytest --disable-socket --allow-unix-socket --asyncio-mode=auto tests/unit_tests
# run integration tests
poetry run pytest --asyncio-mode=auto tests/integration_tests
```
</TabItem>
<TabItem value="pip" label="Pip">
```bash
# run unit tests without network access
pytest --disable-socket --allow-unix-socket --asyncio-mode=auto tests/unit_tests
# run integration tests
pytest --asyncio-mode=auto tests/integration_tests
```
</TabItem>
</Tabs>
## Test suite information and troubleshooting
For a full list of the standard test suites that are available, as well as
information on which tests are included and how to troubleshoot common issues,
see the [Standard Tests API Reference](https://python.langchain.com/api_reference/standard_tests/index.html).
You can see troubleshooting guides under the individual test suites listed in that API Reference. For example,
[here is the guide for `ChatModelIntegrationTests.test_usage_metadata`](https://python.langchain.com/api_reference/standard_tests/integration_tests/langchain_tests.integration_tests.chat_models.ChatModelIntegrationTests.html#langchain_tests.integration_tests.chat_models.ChatModelIntegrationTests.test_usage_metadata).
View File
@@ -17,7 +17,6 @@ More coming soon! We are working on tutorials to help you make your first contri
- [**Documentation**](how_to/documentation/index.mdx): Help improve our docs, including this one!
- [**Code**](how_to/code/index.mdx): Help us write code, fix bugs, or improve our infrastructure.
- [**Integrations**](how_to/integrations/index.mdx): Help us integrate with your favorite vendors and tools.
- [**Standard Tests**](how_to/integrations/standard_tests): Ensure your integration passes an expected set of tests.
## Reference
View File
@@ -802,7 +802,7 @@
"That's a wrap! In this quick start we covered how to create a simple agent. Agents are a complex topic, and there's lot to learn! \n",
"\n",
":::important\n",
"This section covered building with LangChain Agents. They are fine for getting started, but past a certain point you will likely want flexibility and control which they do not offer. To develop more advanced agents, we recommend checking out [LangGraph](/docs/concepts/architecture/#langgraph)\n",
"This section covered building with LangChain Agents. LangChain Agents are fine for getting started, but past a certain point you will likely want flexibility and control that they do not offer. For working with more advanced agents, we'd reccommend checking out [LangGraph](/docs/concepts/architecture/#langgraph)\n",
":::\n",
"\n",
"If you want to continue using LangChain agents, some good advanced guides are:\n",
View File
@@ -23,7 +23,7 @@
"**Attention**:\n",
"\n",
"- Be sure to set the `namespace` parameter to avoid collisions of the same text embedded using different embeddings models.\n",
"- `CacheBackedEmbeddings` does not cache query embeddings by default. To enable query caching, one needs to specify a `query_embedding_cache`."
"- `CacheBackedEmbeddings` does not cache query embeddings by default. To enable query caching, one need to specify a `query_embedding_cache`."
]
},
{
View File
@@ -15,7 +15,7 @@
"\n",
":::\n",
"\n",
"In many cases, it is advantageous to pass in handlers instead when running the object. When we pass through [`CallbackHandlers`](https://python.langchain.com/api_reference/core/callbacks/langchain_core.callbacks.base.BaseCallbackHandler.html#langchain-core-callbacks-base-basecallbackhandler) using the `callbacks` keyword arg when executing a run, those callbacks will be issued by all nested objects involved in the execution. For example, when a handler is passed through to an Agent, it will be used for all callbacks related to the agent and all the objects involved in the agent's execution, in this case, the Tools and LLM.\n",
"In many cases, it is advantageous to pass in handlers instead when running the object. When we pass through [`CallbackHandlers`](https://python.langchain.com/api_reference/core/callbacks/langchain_core.callbacks.base.BaseCallbackHandler.html#langchain-core-callbacks-base-basecallbackhandler) using the `callbacks` keyword arg when executing an run, those callbacks will be issued by all nested objects involved in the execution. For example, when a handler is passed through to an Agent, it will be used for all callbacks related to the agent and all the objects involved in the agent's execution, in this case, the Tools and LLM.\n",
"\n",
"This prevents us from having to manually attach the handlers to each individual nested object. Here's an example:"
]
View File
@@ -37,7 +37,7 @@
"\n",
"Langchain comes with a built-in in memory rate limiter. This rate limiter is thread safe and can be shared by multiple threads in the same process.\n",
"\n",
"The provided rate limiter can only limit the number of requests per unit time. It will not help if you need to also limit based on the size\n",
"The provided rate limiter can only limit the number of requests per unit time. It will not help if you need to also limited based on the size\n",
"of the requests."
]
},
View File
@@ -1,222 +0,0 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "c160026f-aadb-4e9f-8642-b4a9e8479d77",
"metadata": {},
"source": [
"# Custom Embeddings\n",
"\n",
"LangChain is integrated with many [3rd party embedding models](/docs/integrations/text_embedding/). In this guide we'll show you how to create a custom Embedding class, in case a built-in one does not already exist. Embeddings are critical in natural language processing applications as they convert text into a numerical form that algorithms can understand, thereby enabling a wide range of applications such as similarity search, text classification, and clustering.\n",
"\n",
"Implementing embeddings using the standard [Embeddings](https://python.langchain.com/api_reference/core/embeddings/langchain_core.embeddings.embeddings.Embeddings.html) interface will allow your embeddings to be utilized in existing `LangChain` abstractions (e.g., as the embeddings powering a [VectorStore](https://python.langchain.com/api_reference/core/vectorstores/langchain_core.vectorstores.base.VectorStore.html) or cached using [CacheBackedEmbeddings](/docs/how_to/caching_embeddings/)).\n",
"\n",
"## Interface\n",
"\n",
"The current `Embeddings` abstraction in LangChain is designed to operate on text data. In this implementation, the inputs are either single strings or lists of strings, and the outputs are lists of numerical arrays (vectors), where each vector represents\n",
"an embedding of the input text into some n-dimensional space.\n",
"\n",
"Your custom embedding must implement the following methods:\n",
"\n",
"| Method/Property | Description | Required/Optional |\n",
"|---------------------------------|----------------------------------------------------------------------------|-------------------|\n",
"| `embed_documents(texts)` | Generates embeddings for a list of strings. | Required |\n",
"| `embed_query(text)` | Generates an embedding for a single text query. | Required |\n",
"| `aembed_documents(texts)` | Asynchronously generates embeddings for a list of strings. | Optional |\n",
"| `aembed_query(text)` | Asynchronously generates an embedding for a single text query. | Optional |\n",
"\n",
"These methods ensure that your embedding model can be integrated seamlessly into the LangChain framework, providing both synchronous and asynchronous capabilities for scalability and performance optimization.\n",
"\n",
"\n",
":::note\n",
"`Embeddings` do not currently implement the [Runnable](/docs/concepts/runnables/) interface and are also **not** instances of pydantic `BaseModel`.\n",
":::\n",
"\n",
"### Embedding queries vs documents\n",
"\n",
"The `embed_query` and `embed_documents` methods are required. These methods both operate\n",
"on string inputs. The accessing of `Document.page_content` attributes is handled\n",
"by the vector store using the embedding model for legacy reasons.\n",
"\n",
"`embed_query` takes in a single string and returns a single embedding as a list of floats.\n",
"If your model has different modes for embedding queries vs the underlying documents, you can\n",
"implement this method to handle that. \n",
"\n",
"`embed_documents` takes in a list of strings and returns a list of embeddings as a list of lists of floats.\n",
"\n",
":::note\n",
"`embed_documents` takes in a list of plain text, not a list of LangChain `Document` objects. The name of this method\n",
"may change in future versions of LangChain.\n",
":::"
]
},
{
"cell_type": "markdown",
"id": "2162547f-4577-47e8-b12f-e9aa3c243797",
"metadata": {},
"source": [
"## Implementation\n",
"\n",
"As an example, we'll implement a simple embeddings model that returns a constant vector. This model is for illustrative purposes only."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "6b838062-552c-43f8-94f8-d17e4ae4c221",
"metadata": {},
"outputs": [],
"source": [
"from typing import List\n",
"\n",
"from langchain_core.embeddings import Embeddings\n",
"\n",
"\n",
"class ParrotLinkEmbeddings(Embeddings):\n",
" \"\"\"ParrotLink embedding model integration.\n",
"\n",
" # TODO: Populate with relevant params.\n",
" Key init args — completion params:\n",
" model: str\n",
" Name of ParrotLink model to use.\n",
"\n",
" See full list of supported init args and their descriptions in the params section.\n",
"\n",
" # TODO: Replace with relevant init params.\n",
" Instantiate:\n",
" .. code-block:: python\n",
"\n",
" from langchain_parrot_link import ParrotLinkEmbeddings\n",
"\n",
" embed = ParrotLinkEmbeddings(\n",
" model=\"...\",\n",
" # api_key=\"...\",\n",
" # other params...\n",
" )\n",
"\n",
" Embed single text:\n",
" .. code-block:: python\n",
"\n",
" input_text = \"The meaning of life is 42\"\n",
" embed.embed_query(input_text)\n",
"\n",
" .. code-block:: python\n",
"\n",
" # TODO: Example output.\n",
"\n",
" # TODO: Delete if token-level streaming isn't supported.\n",
" Embed multiple text:\n",
" .. code-block:: python\n",
"\n",
" input_texts = [\"Document 1...\", \"Document 2...\"]\n",
" embed.embed_documents(input_texts)\n",
"\n",
" .. code-block:: python\n",
"\n",
" # TODO: Example output.\n",
"\n",
" # TODO: Delete if native async isn't supported.\n",
" Async:\n",
" .. code-block:: python\n",
"\n",
" await embed.aembed_query(input_text)\n",
"\n",
" # multiple:\n",
" # await embed.aembed_documents(input_texts)\n",
"\n",
" .. code-block:: python\n",
"\n",
" # TODO: Example output.\n",
"\n",
" \"\"\"\n",
"\n",
" def __init__(self, model: str):\n",
" self.model = model\n",
"\n",
" def embed_documents(self, texts: List[str]) -> List[List[float]]:\n",
" \"\"\"Embed search docs.\"\"\"\n",
" return [[0.5, 0.6, 0.7] for _ in texts]\n",
"\n",
" def embed_query(self, text: str) -> List[float]:\n",
" \"\"\"Embed query text.\"\"\"\n",
" return self.embed_documents([text])[0]\n",
"\n",
" # optional: add custom async implementations here\n",
" # you can also delete these, and the base class will\n",
" # use the default implementation, which calls the sync\n",
" # version in an async executor:\n",
"\n",
" # async def aembed_documents(self, texts: List[str]) -> List[List[float]]:\n",
" # \"\"\"Asynchronous Embed search docs.\"\"\"\n",
" # ...\n",
"\n",
" # async def aembed_query(self, text: str) -> List[float]:\n",
" # \"\"\"Asynchronous Embed query text.\"\"\"\n",
" # ..."
]
},
{
"cell_type": "markdown",
"id": "47a19044-5c3f-40da-889a-1a1cfffc137c",
"metadata": {},
"source": [
"### Let's test it 🧪"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "21c218fe-8f91-437f-b523-c2b6e5cf749e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[[0.5, 0.6, 0.7], [0.5, 0.6, 0.7]]\n",
"[0.5, 0.6, 0.7]\n"
]
}
],
"source": [
"embeddings = ParrotLinkEmbeddings(\"test-model\")\n",
"print(embeddings.embed_documents([\"Hello\", \"world\"]))\n",
"print(embeddings.embed_query(\"Hello\"))"
]
},
{
"cell_type": "markdown",
"id": "de50f690-178e-4561-af98-14967b3c8501",
"metadata": {},
"source": [
"## Contributing\n",
"\n",
"We welcome contributions of Embedding models to the LangChain code base.\n",
"\n",
"If you aim to contribute an embedding model for a new provider (e.g., with a new set of dependencies or SDK), we encourage you to publish your implementation in a separate `langchain-*` integration package. This will enable you to appropriately manage dependencies and version your package. Please refer to our [contributing guide](/docs/contributing/how_to/integrations/) for a walkthrough of this process."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
View File
@@ -9,16 +9,10 @@
"\n",
"This notebook goes over how to create a custom LLM wrapper, in case you want to use your own LLM or a different wrapper than one that is supported in LangChain.\n",
"\n",
"Wrapping your LLM with the standard `LLM` interface allow you to use your LLM in existing LangChain programs with minimal code modifications.\n",
"Wrapping your LLM with the standard `LLM` interface allow you to use your LLM in existing LangChain programs with minimal code modifications!\n",
"\n",
"As an bonus, your LLM will automatically become a LangChain `Runnable` and will benefit from some optimizations out of the box, async support, the `astream_events` API, etc.\n",
"\n",
":::caution\n",
"You are currently on a page documenting the use of [text completion models](/docs/concepts/text_llms). Many of the latest and most popular models are [chat completion models](/docs/concepts/chat_models).\n",
"\n",
"Unless you are specifically using more advanced prompting techniques, you are probably looking for [this page instead](/docs/how_to/custom_chat_model/).\n",
":::\n",
"\n",
"## Implementation\n",
"\n",
"There are only two required things that a custom LLM needs to implement:\n",
View File
@@ -162,7 +162,7 @@
"\n",
"@tool\n",
"def multiply_by_max(\n",
" a: Annotated[int, \"scale factor\"],\n",
" a: Annotated[str, \"scale factor\"],\n",
" b: Annotated[List[int], \"list of ints over which to take maximum\"],\n",
") -> int:\n",
" \"\"\"Multiply a by the maximum of b.\"\"\"\n",
@@ -294,7 +294,7 @@
"metadata": {},
"source": [
":::caution\n",
"By default, `@tool(parse_docstring=True)` will raise `ValueError` if the docstring does not parse correctly. See [API Reference](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.convert.tool.html) for detail and examples.\n",
"By default, `@tool(parse_docstring=True)` will raise `ValueError` if the docstring does not parse correctly. See [API Reference](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.tool.html) for detail and examples.\n",
":::"
]
},
View File
@@ -121,7 +121,7 @@
"source": [
"## The convenience `@chain` decorator\n",
"\n",
"You can also turn an arbitrary function into a chain by adding a `@chain` decorator. This is functionally equivalent to wrapping the function in a `RunnableLambda` constructor as shown above. Here's an example:"
"You can also turn an arbitrary function into a chain by adding a `@chain` decorator. This is functionaly equivalent to wrapping the function in a `RunnableLambda` constructor as shown above. Here's an example:"
]
},
{
View File
@@ -245,7 +245,7 @@
" allowed_nodes=[\"Person\", \"Country\", \"Organization\"],\n",
" allowed_relationships=allowed_relationships,\n",
")\n",
"graph_documents_filtered = llm_transformer_tuple.convert_to_graph_documents(documents)\n",
"llm_transformer_tuple = llm_transformer_filtered.convert_to_graph_documents(documents)\n",
"print(f\"Nodes:{graph_documents_filtered[0].nodes}\")\n",
"print(f\"Relationships:{graph_documents_filtered[0].relationships}\")"
]
View File
@@ -0,0 +1,459 @@
{
"cells": [
{
"cell_type": "raw",
"id": "5e61b0f2-15b9-4241-9ab5-ff0f3f732232",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 1\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "846ef4f4-ee38-4a42-a7d3-1a23826e4830",
"metadata": {},
"source": [
"# How to map values to a graph database\n",
"\n",
"In this guide we'll go over strategies to improve graph database query generation by mapping values from user inputs to database.\n",
"When using the built-in graph chains, the LLM is aware of the graph schema, but has no information about the values of properties stored in the database.\n",
"Therefore, we can introduce a new step in graph database QA system to accurately map values.\n",
"\n",
"## Setup\n",
"\n",
"First, get required packages and set environment variables:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "18294435-182d-48da-bcab-5b8945b6d9cf",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain langchain-neo4j langchain-openai neo4j"
]
},
{
"cell_type": "markdown",
"id": "d86dd771-4001-4a34-8680-22e9b50e1e88",
"metadata": {},
"source": [
"We default to OpenAI models in this guide, but you can swap them out for the model provider of your choice."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "9346f8e9-78bf-4667-b3d3-72807a73b718",
"metadata": {},
"outputs": [
{
"name": "stdin",
"output_type": "stream",
"text": [
" ········\n"
]
}
],
"source": [
"import getpass\n",
"import os\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()\n",
"\n",
"# Uncomment the below to use LangSmith. Not required.\n",
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()\n",
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\""
]
},
{
"cell_type": "markdown",
"id": "271c8a23-e51c-4ead-a76e-cf21107db47e",
"metadata": {},
"source": [
"Next, we need to define Neo4j credentials.\n",
"Follow [these installation steps](https://neo4j.com/docs/operations-manual/current/installation/) to set up a Neo4j database."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "a2a3bb65-05c7-4daf-bac2-b25ae7fe2751",
"metadata": {},
"outputs": [],
"source": [
"os.environ[\"NEO4J_URI\"] = \"bolt://localhost:7687\"\n",
"os.environ[\"NEO4J_USERNAME\"] = \"neo4j\"\n",
"os.environ[\"NEO4J_PASSWORD\"] = \"password\""
]
},
{
"cell_type": "markdown",
"id": "50fa4510-29b7-49b6-8496-5e86f694e81f",
"metadata": {},
"source": [
"The below example will create a connection with a Neo4j database and will populate it with example data about movies and their actors."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "4ee9ef7a-eef9-4289-b9fd-8fbc31041688",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[]"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_neo4j import Neo4jGraph\n",
"\n",
"graph = Neo4jGraph()\n",
"\n",
"# Import movie information\n",
"\n",
"movies_query = \"\"\"\n",
"LOAD CSV WITH HEADERS FROM \n",
"'https://raw.githubusercontent.com/tomasonjo/blog-datasets/main/movies/movies_small.csv'\n",
"AS row\n",
"MERGE (m:Movie {id:row.movieId})\n",
"SET m.released = date(row.released),\n",
" m.title = row.title,\n",
" m.imdbRating = toFloat(row.imdbRating)\n",
"FOREACH (director in split(row.director, '|') | \n",
" MERGE (p:Person {name:trim(director)})\n",
" MERGE (p)-[:DIRECTED]->(m))\n",
"FOREACH (actor in split(row.actors, '|') | \n",
" MERGE (p:Person {name:trim(actor)})\n",
" MERGE (p)-[:ACTED_IN]->(m))\n",
"FOREACH (genre in split(row.genres, '|') | \n",
" MERGE (g:Genre {name:trim(genre)})\n",
" MERGE (m)-[:IN_GENRE]->(g))\n",
"\"\"\"\n",
"\n",
"graph.query(movies_query)"
]
},
{
"cell_type": "markdown",
"id": "0cb0ea30-ca55-4f35-aad6-beb57453de66",
"metadata": {},
"source": [
"## Detecting entities in the user input\n",
"We have to extract the types of entities/values we want to map to a graph database. In this example, we are dealing with a movie graph, so we can map movies and people to the database."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "e1a19424-6046-40c2-81d1-f3b88193a293",
"metadata": {},
"outputs": [],
"source": [
"from typing import List, Optional\n",
"\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_openai import ChatOpenAI\n",
"from pydantic import BaseModel, Field\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo\", temperature=0)\n",
"\n",
"\n",
"class Entities(BaseModel):\n",
" \"\"\"Identifying information about entities.\"\"\"\n",
"\n",
" names: List[str] = Field(\n",
" ...,\n",
" description=\"All the person or movies appearing in the text\",\n",
" )\n",
"\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are extracting person and movies from the text.\",\n",
" ),\n",
" (\n",
" \"human\",\n",
" \"Use the given format to extract information from the following \"\n",
" \"input: {question}\",\n",
" ),\n",
" ]\n",
")\n",
"\n",
"\n",
"entity_chain = prompt | llm.with_structured_output(Entities)"
]
},
{
"cell_type": "markdown",
"id": "9c14084c-37a7-4a9c-a026-74e12961c781",
"metadata": {},
"source": [
"We can test the entity extraction chain."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "bbfe0d8f-982e-46e6-88fb-8a4f0d850b07",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Entities(names=['Casino'])"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"entities = entity_chain.invoke({\"question\": \"Who played in Casino movie?\"})\n",
"entities"
]
},
{
"cell_type": "markdown",
"id": "a8afbf13-05d0-4383-8050-f88b8c2f6fab",
"metadata": {},
"source": [
"We will utilize a simple `CONTAINS` clause to match entities to database. In practice, you might want to use a fuzzy search or a fulltext index to allow for minor misspellings."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "6f92929f-74fb-4db2-b7e1-eb1e9d386a67",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Casino maps to Casino Movie in database\\n'"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"match_query = \"\"\"MATCH (p:Person|Movie)\n",
"WHERE p.name CONTAINS $value OR p.title CONTAINS $value\n",
"RETURN coalesce(p.name, p.title) AS result, labels(p)[0] AS type\n",
"LIMIT 1\n",
"\"\"\"\n",
"\n",
"\n",
"def map_to_database(entities: Entities) -> Optional[str]:\n",
" result = \"\"\n",
" for entity in entities.names:\n",
" response = graph.query(match_query, {\"value\": entity})\n",
" try:\n",
" result += f\"{entity} maps to {response[0]['result']} {response[0]['type']} in database\\n\"\n",
" except IndexError:\n",
" pass\n",
" return result\n",
"\n",
"\n",
"map_to_database(entities)"
]
},
{
"cell_type": "markdown",
"id": "f66c6756-6efb-4b1e-9b5d-87ed914a5212",
"metadata": {},
"source": [
"## Custom Cypher generating chain\n",
"\n",
"We need to define a custom Cypher prompt that takes the entity mapping information along with the schema and the user question to construct a Cypher statement.\n",
"We will be using the LangChain expression language to accomplish that."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "8ef3e21d-f1c2-45e2-9511-4920d1cf6e7e",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"\n",
"# Generate Cypher statement based on natural language input\n",
"cypher_template = \"\"\"Based on the Neo4j graph schema below, write a Cypher query that would answer the user's question:\n",
"{schema}\n",
"Entities in the question map to the following database values:\n",
"{entities_list}\n",
"Question: {question}\n",
"Cypher query:\"\"\"\n",
"\n",
"cypher_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"Given an input question, convert it to a Cypher query. No pre-amble.\",\n",
" ),\n",
" (\"human\", cypher_template),\n",
" ]\n",
")\n",
"\n",
"cypher_response = (\n",
" RunnablePassthrough.assign(names=entity_chain)\n",
" | RunnablePassthrough.assign(\n",
" entities_list=lambda x: map_to_database(x[\"names\"]),\n",
" schema=lambda _: graph.get_schema,\n",
" )\n",
" | cypher_prompt\n",
" | llm.bind(stop=[\"\\nCypherResult:\"])\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "1f0011e3-9660-4975-af2a-486b1bc3b954",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'MATCH (:Movie {title: \"Casino\"})<-[:ACTED_IN]-(actor)\\nRETURN actor.name'"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"cypher = cypher_response.invoke({\"question\": \"Who played in Casino movie?\"})\n",
"cypher"
]
},
{
"cell_type": "markdown",
"id": "38095678-611f-4847-a4de-e51ef7ef727c",
"metadata": {},
"source": [
"## Generating answers based on database results\n",
"\n",
"Now that we have a chain that generates the Cypher statement, we need to execute the Cypher statement against the database and send the database results back to an LLM to generate the final answer.\n",
"Again, we will be using LCEL."
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "d1fa97c0-1c9c-41d3-9ee1-5f1905d17434",
"metadata": {},
"outputs": [],
"source": [
"from langchain_neo4j.chains.graph_qa.cypher_utils import (\n",
" CypherQueryCorrector,\n",
" Schema,\n",
")\n",
"\n",
"graph.refresh_schema()\n",
"# Cypher validation tool for relationship directions\n",
"corrector_schema = [\n",
" Schema(el[\"start\"], el[\"type\"], el[\"end\"])\n",
" for el in graph.structured_schema.get(\"relationships\")\n",
"]\n",
"cypher_validation = CypherQueryCorrector(corrector_schema)\n",
"\n",
"# Generate natural language response based on database results\n",
"response_template = \"\"\"Based on the the question, Cypher query, and Cypher response, write a natural language response:\n",
"Question: {question}\n",
"Cypher query: {query}\n",
"Cypher Response: {response}\"\"\"\n",
"\n",
"response_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"Given an input question and Cypher response, convert it to a natural\"\n",
" \" language answer. No pre-amble.\",\n",
" ),\n",
" (\"human\", response_template),\n",
" ]\n",
")\n",
"\n",
"chain = (\n",
" RunnablePassthrough.assign(query=cypher_response)\n",
" | RunnablePassthrough.assign(\n",
" response=lambda x: graph.query(cypher_validation(x[\"query\"])),\n",
" )\n",
" | response_prompt\n",
" | llm\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "918146e5-7918-46d2-a774-53f9547d8fcb",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Robert De Niro, James Woods, Joe Pesci, and Sharon Stone played in the movie \"Casino\".'"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke({\"question\": \"Who played in Casino movie?\"})"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c7ba75cd-8399-4e54-a6f8-8a411f159f56",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.18"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
View File
@@ -0,0 +1,548 @@
{
"cells": [
{
"cell_type": "raw",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 2\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to best prompt for Graph-RAG\n",
"\n",
"In this guide we'll go over prompting strategies to improve graph database query generation. We'll largely focus on methods for getting relevant database-specific information in your prompt.\n",
"\n",
"## Setup\n",
"\n",
"First, get required packages and set environment variables:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install --upgrade --quiet langchain langchain-neo4j langchain-openai neo4j"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We default to OpenAI models in this guide, but you can swap them out for the model provider of your choice."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdin",
"output_type": "stream",
"text": [
" ········\n"
]
}
],
"source": [
"import getpass\n",
"import os\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()\n",
"\n",
"# Uncomment the below to use LangSmith. Not required.\n",
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()\n",
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, we need to define Neo4j credentials.\n",
"Follow [these installation steps](https://neo4j.com/docs/operations-manual/current/installation/) to set up a Neo4j database."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"os.environ[\"NEO4J_URI\"] = \"bolt://localhost:7687\"\n",
"os.environ[\"NEO4J_USERNAME\"] = \"neo4j\"\n",
"os.environ[\"NEO4J_PASSWORD\"] = \"password\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The below example will create a connection with a Neo4j database and will populate it with example data about movies and their actors."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[]"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_neo4j import Neo4jGraph\n",
"\n",
"graph = Neo4jGraph()\n",
"\n",
"# Import movie information\n",
"\n",
"movies_query = \"\"\"\n",
"LOAD CSV WITH HEADERS FROM \n",
"'https://raw.githubusercontent.com/tomasonjo/blog-datasets/main/movies/movies_small.csv'\n",
"AS row\n",
"MERGE (m:Movie {id:row.movieId})\n",
"SET m.released = date(row.released),\n",
" m.title = row.title,\n",
" m.imdbRating = toFloat(row.imdbRating)\n",
"FOREACH (director in split(row.director, '|') | \n",
" MERGE (p:Person {name:trim(director)})\n",
" MERGE (p)-[:DIRECTED]->(m))\n",
"FOREACH (actor in split(row.actors, '|') | \n",
" MERGE (p:Person {name:trim(actor)})\n",
" MERGE (p)-[:ACTED_IN]->(m))\n",
"FOREACH (genre in split(row.genres, '|') | \n",
" MERGE (g:Genre {name:trim(genre)})\n",
" MERGE (m)-[:IN_GENRE]->(g))\n",
"\"\"\"\n",
"\n",
"graph.query(movies_query)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Filtering graph schema\n",
"\n",
"At times, you may need to focus on a specific subset of the graph schema while generating Cypher statements.\n",
"Let's say we are dealing with the following graph schema:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Node properties are the following:\n",
"Movie {imdbRating: FLOAT, id: STRING, released: DATE, title: STRING},Person {name: STRING},Genre {name: STRING}\n",
"Relationship properties are the following:\n",
"\n",
"The relationships are the following:\n",
"(:Movie)-[:IN_GENRE]->(:Genre),(:Person)-[:DIRECTED]->(:Movie),(:Person)-[:ACTED_IN]->(:Movie)\n"
]
}
],
"source": [
"graph.refresh_schema()\n",
"print(graph.schema)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's say we want to exclude the _Genre_ node from the schema representation we pass to an LLM.\n",
"We can achieve that using the `exclude` parameter of the GraphCypherQAChain chain."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"from langchain_neo4j import GraphCypherQAChain\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo\", temperature=0)\n",
"chain = GraphCypherQAChain.from_llm(\n",
" graph=graph,\n",
" llm=llm,\n",
" exclude_types=[\"Genre\"],\n",
" verbose=True,\n",
" allow_dangerous_requests=True,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Node properties are the following:\n",
"Movie {imdbRating: FLOAT, id: STRING, released: DATE, title: STRING},Person {name: STRING}\n",
"Relationship properties are the following:\n",
"\n",
"The relationships are the following:\n",
"(:Person)-[:DIRECTED]->(:Movie),(:Person)-[:ACTED_IN]->(:Movie)\n"
]
}
],
"source": [
"print(chain.graph_schema)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Few-shot examples\n",
"\n",
"Including examples of natural language questions being converted to valid Cypher queries against our database in the prompt will often improve model performance, especially for complex queries.\n",
"\n",
"Let's say we have the following examples:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"examples = [\n",
" {\n",
" \"question\": \"How many artists are there?\",\n",
" \"query\": \"MATCH (a:Person)-[:ACTED_IN]->(:Movie) RETURN count(DISTINCT a)\",\n",
" },\n",
" {\n",
" \"question\": \"Which actors played in the movie Casino?\",\n",
" \"query\": \"MATCH (m:Movie {{title: 'Casino'}})<-[:ACTED_IN]-(a) RETURN a.name\",\n",
" },\n",
" {\n",
" \"question\": \"How many movies has Tom Hanks acted in?\",\n",
" \"query\": \"MATCH (a:Person {{name: 'Tom Hanks'}})-[:ACTED_IN]->(m:Movie) RETURN count(m)\",\n",
" },\n",
" {\n",
" \"question\": \"List all the genres of the movie Schindler's List\",\n",
" \"query\": \"MATCH (m:Movie {{title: 'Schindler\\\\'s List'}})-[:IN_GENRE]->(g:Genre) RETURN g.name\",\n",
" },\n",
" {\n",
" \"question\": \"Which actors have worked in movies from both the comedy and action genres?\",\n",
" \"query\": \"MATCH (a:Person)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g1:Genre), (a)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g2:Genre) WHERE g1.name = 'Comedy' AND g2.name = 'Action' RETURN DISTINCT a.name\",\n",
" },\n",
" {\n",
" \"question\": \"Which directors have made movies with at least three different actors named 'John'?\",\n",
" \"query\": \"MATCH (d:Person)-[:DIRECTED]->(m:Movie)<-[:ACTED_IN]-(a:Person) WHERE a.name STARTS WITH 'John' WITH d, COUNT(DISTINCT a) AS JohnsCount WHERE JohnsCount >= 3 RETURN d.name\",\n",
" },\n",
" {\n",
" \"question\": \"Identify movies where directors also played a role in the film.\",\n",
" \"query\": \"MATCH (p:Person)-[:DIRECTED]->(m:Movie), (p)-[:ACTED_IN]->(m) RETURN m.title, p.name\",\n",
" },\n",
" {\n",
" \"question\": \"Find the actor with the highest number of movies in the database.\",\n",
" \"query\": \"MATCH (a:Actor)-[:ACTED_IN]->(m:Movie) RETURN a.name, COUNT(m) AS movieCount ORDER BY movieCount DESC LIMIT 1\",\n",
" },\n",
"]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can create a few-shot prompt with them like so:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import FewShotPromptTemplate, PromptTemplate\n",
"\n",
"example_prompt = PromptTemplate.from_template(\n",
" \"User input: {question}\\nCypher query: {query}\"\n",
")\n",
"prompt = FewShotPromptTemplate(\n",
" examples=examples[:5],\n",
" example_prompt=example_prompt,\n",
" prefix=\"You are a Neo4j expert. Given an input question, create a syntactically correct Cypher query to run.\\n\\nHere is the schema information\\n{schema}.\\n\\nBelow are a number of examples of questions and their corresponding Cypher queries.\",\n",
" suffix=\"User input: {question}\\nCypher query: \",\n",
" input_variables=[\"question\", \"schema\"],\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"You are a Neo4j expert. Given an input question, create a syntactically correct Cypher query to run.\n",
"\n",
"Here is the schema information\n",
"foo.\n",
"\n",
"Below are a number of examples of questions and their corresponding Cypher queries.\n",
"\n",
"User input: How many artists are there?\n",
"Cypher query: MATCH (a:Person)-[:ACTED_IN]->(:Movie) RETURN count(DISTINCT a)\n",
"\n",
"User input: Which actors played in the movie Casino?\n",
"Cypher query: MATCH (m:Movie {title: 'Casino'})<-[:ACTED_IN]-(a) RETURN a.name\n",
"\n",
"User input: How many movies has Tom Hanks acted in?\n",
"Cypher query: MATCH (a:Person {name: 'Tom Hanks'})-[:ACTED_IN]->(m:Movie) RETURN count(m)\n",
"\n",
"User input: List all the genres of the movie Schindler's List\n",
"Cypher query: MATCH (m:Movie {title: 'Schindler\\'s List'})-[:IN_GENRE]->(g:Genre) RETURN g.name\n",
"\n",
"User input: Which actors have worked in movies from both the comedy and action genres?\n",
"Cypher query: MATCH (a:Person)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g1:Genre), (a)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g2:Genre) WHERE g1.name = 'Comedy' AND g2.name = 'Action' RETURN DISTINCT a.name\n",
"\n",
"User input: How many artists are there?\n",
"Cypher query: \n"
]
}
],
"source": [
"print(prompt.format(question=\"How many artists are there?\", schema=\"foo\"))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Dynamic few-shot examples\n",
"\n",
"If we have enough examples, we may want to only include the most relevant ones in the prompt, either because they don't fit in the model's context window or because the long tail of examples distracts the model. And specifically, given any input we want to include the examples most relevant to that input.\n",
"\n",
"We can do just this using an ExampleSelector. In this case we'll use a [SemanticSimilarityExampleSelector](https://python.langchain.com/api_reference/core/example_selectors/langchain_core.example_selectors.semantic_similarity.SemanticSimilarityExampleSelector.html), which will store the examples in the vector database of our choosing. At runtime it will perform a similarity search between the input and our examples, and return the most semantically similar ones: "
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.example_selectors import SemanticSimilarityExampleSelector\n",
"from langchain_neo4j import Neo4jVector\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"example_selector = SemanticSimilarityExampleSelector.from_examples(\n",
" examples,\n",
" OpenAIEmbeddings(),\n",
" Neo4jVector,\n",
" k=5,\n",
" input_keys=[\"question\"],\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'query': 'MATCH (a:Person)-[:ACTED_IN]->(:Movie) RETURN count(DISTINCT a)',\n",
" 'question': 'How many artists are there?'},\n",
" {'query': \"MATCH (a:Person {{name: 'Tom Hanks'}})-[:ACTED_IN]->(m:Movie) RETURN count(m)\",\n",
" 'question': 'How many movies has Tom Hanks acted in?'},\n",
" {'query': \"MATCH (a:Person)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g1:Genre), (a)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g2:Genre) WHERE g1.name = 'Comedy' AND g2.name = 'Action' RETURN DISTINCT a.name\",\n",
" 'question': 'Which actors have worked in movies from both the comedy and action genres?'},\n",
" {'query': \"MATCH (d:Person)-[:DIRECTED]->(m:Movie)<-[:ACTED_IN]-(a:Person) WHERE a.name STARTS WITH 'John' WITH d, COUNT(DISTINCT a) AS JohnsCount WHERE JohnsCount >= 3 RETURN d.name\",\n",
" 'question': \"Which directors have made movies with at least three different actors named 'John'?\"},\n",
" {'query': 'MATCH (a:Actor)-[:ACTED_IN]->(m:Movie) RETURN a.name, COUNT(m) AS movieCount ORDER BY movieCount DESC LIMIT 1',\n",
" 'question': 'Find the actor with the highest number of movies in the database.'}]"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"example_selector.select_examples({\"question\": \"how many artists are there?\"})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To use it, we can pass the ExampleSelector directly in to our FewShotPromptTemplate:"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [],
"source": [
"prompt = FewShotPromptTemplate(\n",
" example_selector=example_selector,\n",
" example_prompt=example_prompt,\n",
" prefix=\"You are a Neo4j expert. Given an input question, create a syntactically correct Cypher query to run.\\n\\nHere is the schema information\\n{schema}.\\n\\nBelow are a number of examples of questions and their corresponding Cypher queries.\",\n",
" suffix=\"User input: {question}\\nCypher query: \",\n",
" input_variables=[\"question\", \"schema\"],\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"You are a Neo4j expert. Given an input question, create a syntactically correct Cypher query to run.\n",
"\n",
"Here is the schema information\n",
"foo.\n",
"\n",
"Below are a number of examples of questions and their corresponding Cypher queries.\n",
"\n",
"User input: How many artists are there?\n",
"Cypher query: MATCH (a:Person)-[:ACTED_IN]->(:Movie) RETURN count(DISTINCT a)\n",
"\n",
"User input: How many movies has Tom Hanks acted in?\n",
"Cypher query: MATCH (a:Person {name: 'Tom Hanks'})-[:ACTED_IN]->(m:Movie) RETURN count(m)\n",
"\n",
"User input: Which actors have worked in movies from both the comedy and action genres?\n",
"Cypher query: MATCH (a:Person)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g1:Genre), (a)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g2:Genre) WHERE g1.name = 'Comedy' AND g2.name = 'Action' RETURN DISTINCT a.name\n",
"\n",
"User input: Which directors have made movies with at least three different actors named 'John'?\n",
"Cypher query: MATCH (d:Person)-[:DIRECTED]->(m:Movie)<-[:ACTED_IN]-(a:Person) WHERE a.name STARTS WITH 'John' WITH d, COUNT(DISTINCT a) AS JohnsCount WHERE JohnsCount >= 3 RETURN d.name\n",
"\n",
"User input: Find the actor with the highest number of movies in the database.\n",
"Cypher query: MATCH (a:Actor)-[:ACTED_IN]->(m:Movie) RETURN a.name, COUNT(m) AS movieCount ORDER BY movieCount DESC LIMIT 1\n",
"\n",
"User input: how many artists are there?\n",
"Cypher query: \n"
]
}
],
"source": [
"print(prompt.format(question=\"how many artists are there?\", schema=\"foo\"))"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [],
"source": [
"llm = ChatOpenAI(model=\"gpt-3.5-turbo\", temperature=0)\n",
"chain = GraphCypherQAChain.from_llm(\n",
" graph=graph,\n",
" llm=llm,\n",
" cypher_prompt=prompt,\n",
" verbose=True,\n",
" allow_dangerous_requests=True,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new GraphCypherQAChain chain...\u001b[0m\n",
"Generated Cypher:\n",
"\u001b[32;1m\u001b[1;3mMATCH (a:Person)-[:ACTED_IN]->(:Movie) RETURN count(DISTINCT a)\u001b[0m\n",
"Full Context:\n",
"\u001b[32;1m\u001b[1;3m[{'count(DISTINCT a)': 967}]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"{'query': 'How many actors are in the graph?',\n",
" 'result': 'There are 967 actors in the graph.'}"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke(\"How many actors are in the graph?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
File diff suppressed because one or more lines are too long
View File
@@ -159,7 +159,6 @@ See [supported integrations](/docs/integrations/text_embedding/) for details on
- [How to: embed text data](/docs/how_to/embed_text)
- [How to: cache embedding results](/docs/how_to/caching_embeddings)
- [How to: create a custom embeddings class](/docs/how_to/custom_embeddings)
### Vector stores
@@ -245,7 +244,6 @@ All of LangChain components can easily be extended to support your own versions.
- [How to: create a custom chat model class](/docs/how_to/custom_chat_model)
- [How to: create a custom LLM class](/docs/how_to/custom_llm)
- [How to: create a custom embeddings class](/docs/how_to/custom_embeddings)
- [How to: write a custom retriever class](/docs/how_to/custom_retriever)
- [How to: write a custom document loader](/docs/how_to/document_loader_custom)
- [How to: write a custom output parser class](/docs/how_to/output_parser_custom)
@@ -318,7 +316,9 @@ For a high-level tutorial, check out [this guide](/docs/tutorials/sql_qa/).
You can use an LLM to do question answering over graph databases.
For a high-level tutorial, check out [this guide](/docs/tutorials/graph/).
- [How to: map values to a database](/docs/how_to/graph_mapping)
- [How to: add a semantic layer over the database](/docs/how_to/graph_semantic)
- [How to: improve results with prompting](/docs/how_to/graph_prompting)
- [How to: construct knowledge graphs](/docs/how_to/graph_constructing)
### Summarization
View File
@@ -39,20 +39,19 @@
"| None | ✅ | ✅ | ❌ | ❌ | - |\n",
"| Incremental | ✅ | ✅ | ❌ | ✅ | Continuously |\n",
"| Full | ✅ | ❌ | ✅ | ✅ | At end of indexing |\n",
"| Scoped_Full | ✅ | ✅ | ❌ | ✅ | At end of indexing |\n",
"\n",
"\n",
"`None` does not do any automatic clean up, allowing the user to manually do clean up of old content. \n",
"\n",
"`incremental`, `full` and `scoped_full` offer the following automated clean up:\n",
"`incremental` and `full` offer the following automated clean up:\n",
"\n",
"* If the content of the source document or derived documents has **changed**, all 3 modes will clean up (delete) previous versions of the content.\n",
"* If the source document has been **deleted** (meaning it is not included in the documents currently being indexed), the `full` cleanup mode will delete it from the vector store correctly, but the `incremental` and `scoped_full` mode will not.\n",
"* If the content of the source document or derived documents has **changed**, both `incremental` or `full` modes will clean up (delete) previous versions of the content.\n",
"* If the source document has been **deleted** (meaning it is not included in the documents currently being indexed), the `full` cleanup mode will delete it from the vector store correctly, but the `incremental` mode will not.\n",
"\n",
"When content is mutated (e.g., the source PDF file was revised) there will be a period of time during indexing when both the new and old versions may be returned to the user. This happens after the new content was written, but before the old version was deleted.\n",
"\n",
"* `incremental` indexing minimizes this period of time as it is able to do clean up continuously, as it writes.\n",
"* `full` and `scoped_full` mode does the clean up after all batches have been written.\n",
"* `full` mode does the clean up after all batches have been written.\n",
"\n",
"## Requirements\n",
"\n",
@@ -65,7 +64,7 @@
" \n",
"## Caution\n",
"\n",
"The record manager relies on a time-based mechanism to determine what content can be cleaned up (when using `full` or `incremental` or `scoped_full` cleanup modes).\n",
"The record manager relies on a time-based mechanism to determine what content can be cleaned up (when using `full` or `incremental` cleanup modes).\n",
"\n",
"If two tasks run back-to-back, and the first task finishes before the clock time changes, then the second task may not be able to clean up content.\n",
"\n",
View File
@@ -51,7 +51,7 @@ pip install langchain-core
Certain integrations like OpenAI and Anthropic have their own packages.
Any integrations that require their own package will be documented as such in the [Integration docs](/docs/integrations/providers/).
You can see a list of all integration packages in the [API reference](https://python.langchain.com/api_reference/) under the "Partner libs" dropdown.
You can see a list of all integration packages in the [API reference](https://api.python.langchain.com) under the "Partner libs" dropdown.
To install one of these run:
```bash
View File
@@ -162,18 +162,6 @@
"md_header_splits"
]
},
{
"cell_type": "markdown",
"id": "fb3f834a",
"metadata": {},
"source": [
":::note\n",
"\n",
"The default `MarkdownHeaderTextSplitter` strips white spaces and new lines. To preserve the original formatting of your Markdown documents, check out [ExperimentalMarkdownSyntaxTextSplitter](https://python.langchain.com/api_reference/text_splitters/markdown/langchain_text_splitters.markdown.ExperimentalMarkdownSyntaxTextSplitter.html).\n",
"\n",
":::"
]
},
{
"cell_type": "markdown",
"id": "aa67e0cc-d721-4536-9c7a-9fa3a7a69cbe",
View File
@@ -12,7 +12,7 @@
"There are two ways to implement a custom parser:\n",
"\n",
"1. Using `RunnableLambda` or `RunnableGenerator` in [LCEL](/docs/concepts/lcel/) -- we strongly recommend this for most use cases\n",
"2. By inheriting from one of the base classes for out parsing -- this is the hard way of doing things\n",
"2. By inherting from one of the base classes for out parsing -- this is the hard way of doing things\n",
"\n",
"The difference between the two approaches are mostly superficial and are mainly in terms of which callbacks are triggered (e.g., `on_chain_start` vs. `on_parser_start`), and how a runnable lambda vs. a parser might be visualized in a tracing platform like LangSmith."
]
@@ -200,7 +200,7 @@
"id": "24067447-8a5a-4d6b-86a3-4b9cc4b4369b",
"metadata": {},
"source": [
"## Inheriting from Parsing Base Classes"
"## Inherting from Parsing Base Classes"
]
},
{
@@ -208,7 +208,7 @@
"id": "9713f547-b2e4-48eb-807f-a0f6f6d0e7e0",
"metadata": {},
"source": [
"Another approach to implement a parser is by inheriting from `BaseOutputParser`, `BaseGenerationOutputParser` or another one of the base parsers depending on what you need to do.\n",
"Another approach to implement a parser is by inherting from `BaseOutputParser`, `BaseGenerationOutputParser` or another one of the base parsers depending on what you need to do.\n",
"\n",
"In general, we **do not** recommend this approach for most use cases as it results in more code to write without significant benefits.\n",
"\n",

View File

@@ -29,7 +29,7 @@
":::\n",
"\n",
"\n",
"When composing chains with several steps, sometimes you will want to pass data from previous steps unchanged for use as input to a later step. The [`RunnablePassthrough`](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html) class allows you to do just this, and is typically is used in conjunction with a [RunnableParallel](/docs/how_to/parallel/) to pass data through to a later step in your constructed chains.\n",
"When composing chains with several steps, sometimes you will want to pass data from previous steps unchanged for use as input to a later step. The [`RunnablePassthrough`](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html) class allows you to do just this, and is typically is used in conjuction with a [RunnableParallel](/docs/how_to/parallel/) to pass data through to a later step in your constructed chains.\n",
"\n",
"See the example below:"
]

View File

@@ -125,11 +125,9 @@
"\n",
"There are a few ways to determine what that threshold is, which are controlled by the `breakpoint_threshold_type` kwarg.\n",
"\n",
"Note: if the resulting chunk sizes are too small/big, the additional kwargs `breakpoint_threshold_amount` and `min_chunk_size` can be used for adjustments.\n",
"\n",
"### Percentile\n",
"\n",
"The default way to split is based on percentile. In this method, all differences between sentences are calculated, and then any difference greater than the X percentile is split. The default value for X is 95.0 and can be adjusted by the keyword argument `breakpoint_threshold_amount` which expects a number between 0.0 and 100.0."
"The default way to split is based on percentile. In this method, all differences between sentences are calculated, and then any difference greater than the X percentile is split."
]
},
{
@@ -188,7 +186,7 @@
"source": [
"### Standard Deviation\n",
"\n",
"In this method, any difference greater than X standard deviations is split. The default value for X is 3.0 and can be adjusted by the keyword argument `breakpoint_threshold_amount`."
"In this method, any difference greater than X standard deviations is split."
]
},
{
@@ -247,7 +245,7 @@
"source": [
"### Interquartile\n",
"\n",
"In this method, the interquartile distance is used to split chunks. The interquartile range can be scaled by the keyword argument `breakpoint_threshold_amount`, the default value is 1.5."
"In this method, the interquartile distance is used to split chunks."
]
},
{
@@ -308,8 +306,8 @@
"source": [
"### Gradient\n",
"\n",
"In this method, the gradient of distance is used to split chunks along with the percentile method. This method is useful when chunks are highly correlated with each other or specific to a domain e.g. legal or medical. The idea is to apply anomaly detection on gradient array so that the distribution become wider and easy to identify boundaries in highly semantic data.\n",
"Similar to the percentile method, the split can be adjusted by the keyword argument `breakpoint_threshold_amount` which expects a number between 0.0 and 100.0, the default value is 95.0."
"In this method, the gradient of distance is used to split chunks along with the percentile method.\n",
"This method is useful when chunks are highly correlated with each other or specific to a domain e.g. legal or medical. The idea is to apply anomaly detection on gradient array so that the distribution become wider and easy to identify boundaries in highly semantic data."
]
},
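{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal usage sketch (OpenAI embeddings and the `text` variable are placeholders; any embedding model and input string work):\n",
"\n",
"```python\n",
"from langchain_experimental.text_splitter import SemanticChunker\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"text_splitter = SemanticChunker(\n",
"    OpenAIEmbeddings(),\n",
"    breakpoint_threshold_type=\"gradient\",  # or \"percentile\", \"standard_deviation\", \"interquartile\"\n",
"    breakpoint_threshold_amount=95.0,\n",
")\n",
"docs = text_splitter.create_documents([text])\n",
"```\n"
]
},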
{

View File

@@ -55,7 +55,7 @@
"* Run `.read Chinook_Sqlite.sql`\n",
"* Test `SELECT * FROM Artist LIMIT 10;`\n",
"\n",
"Now, `Chinook.db` is in our directory and we can interface with it using the SQLAlchemy-driven [SQLDatabase](https://python.langchain.com/api_reference/community/utilities/langchain_community.utilities.sql_database.SQLDatabase.html) class:"
"Now, `Chinhook.db` is in our directory and we can interface with it using the SQLAlchemy-driven [SQLDatabase](https://python.langchain.com/api_reference/community/utilities/langchain_community.utilities.sql_database.SQLDatabase.html) class:"
]
},
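{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of that interface (assuming `Chinook.db` sits in the working directory):\n",
"\n",
"```python\n",
"from langchain_community.utilities import SQLDatabase\n",
"\n",
"db = SQLDatabase.from_uri(\"sqlite:///Chinook.db\")\n",
"print(db.dialect)\n",
"print(db.run(\"SELECT * FROM Artist LIMIT 10;\"))\n",
"```\n"
]
},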
{

View File

@@ -51,7 +51,7 @@
"* Run `.read Chinook_Sqlite.sql`\n",
"* Test `SELECT * FROM Artist LIMIT 10;`\n",
"\n",
"Now, `Chinook.db` is in our directory and we can interface with it using the SQLAlchemy-driven `SQLDatabase` class:"
"Now, `Chinhook.db` is in our directory and we can interface with it using the SQLAlchemy-driven `SQLDatabase` class:"
]
},
{

View File

@@ -54,7 +54,7 @@
"* Run `.read Chinook_Sqlite.sql`\n",
"* Test `SELECT * FROM Artist LIMIT 10;`\n",
"\n",
"Now, `Chinook.db` is in our directory and we can interface with it using the SQLAlchemy-driven `SQLDatabase` class:"
"Now, `Chinhook.db` is in our directory and we can interface with it using the SQLAlchemy-driven `SQLDatabase` class:"
]
},
{

View File

@@ -336,7 +336,7 @@
"\n",
"The **MultiQueryRetriever** is used to tackle the problem that the RAG pipeline might not return the best set of documents based on the query. It generates multiple queries that mean the same as the original query and then fetches documents for each.\n",
"\n",
"To evaluate this retriever, UpTrain will run the following evaluation:\n",
"To evluate this retriever, UpTrain will run the following evaluation:\n",
"- **[Multi Query Accuracy](https://docs.uptrain.ai/predefined-evaluations/query-quality/multi-query-accuracy)**: Checks if the multi-queries generated mean the same as the original query."
]
},
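{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of constructing such a retriever (`vectorstore` and `llm` are placeholders for an existing vector store and chat model):\n",
"\n",
"```python\n",
"from langchain.retrievers.multi_query import MultiQueryRetriever\n",
"\n",
"multi_query_retriever = MultiQueryRetriever.from_llm(\n",
"    retriever=vectorstore.as_retriever(),\n",
"    llm=llm,\n",
")\n",
"```\n"
]
},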

View File

@@ -17,7 +17,7 @@
"source": [
"# ChatCerebras\n",
"\n",
"This notebook provides a quick overview for getting started with Cerebras [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatCerebras features and configurations head to the [API reference](https://python.langchain.com/api_reference/cerebras/chat_models/langchain_cerebras.chat_models.ChatCerebras.html#).\n",
"This notebook provides a quick overview for getting started with Cerebras [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatCerebras features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_cerebras.chat_models.ChatCerebras.html).\n",
"\n",
"At Cerebras, we've developed the world's largest and fastest AI processor, the Wafer-Scale Engine-3 (WSE-3). The Cerebras CS-3 system, powered by the WSE-3, represents a new class of AI supercomputer that sets the standard for generative AI training and inference with unparalleled performance and scalability.\n",
"\n",
@@ -37,7 +37,7 @@
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/cerebras) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatCerebras](https://python.langchain.com/api_reference/cerebras/chat_models/langchain_cerebras.chat_models.ChatCerebras.html#) | [langchain-cerebras](https://python.langchain.com/api_reference/cerebras/index.html) | ❌ | beta | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-cerebras?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-cerebras?style=flat-square&label=%20) |\n",
"| [ChatCerebras](https://api.python.langchain.com/en/latest/chat_models/langchain_cerebras.chat_models.ChatCerebras.html) | [langchain-cerebras](https://api.python.langchain.com/en/latest/cerebras_api_reference.html) | ❌ | beta | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-cerebras?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-cerebras?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling/) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
@@ -139,7 +139,7 @@
"from langchain_cerebras import ChatCerebras\n",
"\n",
"llm = ChatCerebras(\n",
" model=\"llama-3.3-70b\",\n",
" model=\"llama3.1-70b\",\n",
" # other params...\n",
")"
]
@@ -215,7 +215,7 @@
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"llm = ChatCerebras(\n",
" model=\"llama-3.3-70b\",\n",
" model=\"llama3.1-70b\",\n",
" # other params...\n",
")\n",
"\n",
@@ -280,7 +280,7 @@
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"llm = ChatCerebras(\n",
" model=\"llama-3.3-70b\",\n",
" model=\"llama3.1-70b\",\n",
" # other params...\n",
")\n",
"\n",
@@ -324,7 +324,7 @@
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"llm = ChatCerebras(\n",
" model=\"llama-3.3-70b\",\n",
" model=\"llama3.1-70b\",\n",
" # other params...\n",
")\n",
"\n",
@@ -371,7 +371,7 @@
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"llm = ChatCerebras(\n",
" model=\"llama-3.3-70b\",\n",
" model=\"llama3.1-70b\",\n",
" # other params...\n",
")\n",
"\n",
@@ -396,7 +396,7 @@
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatCerebras features and configurations head to the API reference: https://python.langchain.com/api_reference/cerebras/chat_models/langchain_cerebras.chat_models.ChatCerebras.html#"
"For detailed documentation of all ChatCerebras features and configurations head to the API reference: https://api.python.langchain.com/en/latest/chat_models/langchain_cerebras.chat_models.ChatCerebras.html"
]
}
],

View File

@@ -36,7 +36,7 @@
"### Integration details\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/ibm/) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatWatsonx](https://python.langchain.com/api_reference/ibm/chat_models/langchain_ibm.chat_models.ChatWatsonx.html) | [langchain-ibm](https://python.langchain.com/api_reference/ibm/index.html) | ❌ | ❌ | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-ibm?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-ibm?style=flat-square&label=%20) |\n",
"| [ChatWatsonx](https://python.langchain.com/api_reference/ibm/chat_models/langchain_ibm.chat_models.ChatWatsonx.html#langchain_ibm.chat_models.ChatWatsonx) | [langchain-ibm](https://python.langchain.com/api_reference/ibm/index.html) | ❌ | ❌ | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-ibm?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-ibm?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling/) | [Structured output](/docs/how_to/structured_output/) | JSON mode | Image input | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",

View File

@@ -63,9 +63,9 @@
" },\n",
" },\n",
" {\n",
" \"model_name\": \"gpt-35-turbo\",\n",
" \"model_name\": \"gpt-4\",\n",
" \"litellm_params\": {\n",
" \"model\": \"azure/gpt-35-turbo\",\n",
" \"model\": \"azure/gpt-4-1106-preview\",\n",
" \"api_key\": \"<your-api-key>\",\n",
" \"api_version\": \"2023-05-15\",\n",
" \"api_base\": \"https://<your-endpoint>.openai.azure.com/\",\n",
@@ -73,7 +73,7 @@
" },\n",
"]\n",
"litellm_router = Router(model_list=model_list)\n",
"chat = ChatLiteLLMRouter(router=litellm_router, model_name=\"gpt-35-turbo\")"
"chat = ChatLiteLLMRouter(router=litellm_router)"
]
},
{
@@ -177,7 +177,6 @@
"source": [
"chat = ChatLiteLLMRouter(\n",
" router=litellm_router,\n",
" model_name=\"gpt-35-turbo\",\n",
" streaming=True,\n",
" verbose=True,\n",
" callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),\n",
@@ -210,7 +209,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.9.13"
}
},
"nbformat": 4,

View File

@@ -39,7 +39,7 @@
"### Credentials\n",
"\n",
"\n",
"A valid [API key](https://console.mistral.ai/api-keys/) is needed to communicate with the API. Once you've done this set the MISTRAL_API_KEY environment variable:"
"A valid [API key](https://console.mistral.ai/users/api-keys/) is needed to communicate with the API. Once you've done this set the MISTRAL_API_KEY environment variable:"
]
},
{

View File

@@ -19,7 +19,7 @@
"source": [
"# ChatOCIModelDeployment\n",
"\n",
"This will help you getting started with OCIModelDeployment [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatOCIModelDeployment features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.oci_data_science.ChatOCIModelDeployment.html).\n",
"This will help you getting started with OCIModelDeployment [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatOCIModelDeployment features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.ChatOCIModelDeployment.html).\n",
"\n",
"[OCI Data Science](https://docs.oracle.com/en-us/iaas/data-science/using/home.htm) is a fully managed and serverless platform for data science teams to build, train, and manage machine learning models in the Oracle Cloud Infrastructure. You can use [AI Quick Actions](https://blogs.oracle.com/ai-and-datascience/post/ai-quick-actions-in-oci-data-science) to easily deploy LLMs on [OCI Data Science Model Deployment Service](https://docs.oracle.com/en-us/iaas/data-science/using/model-dep-about.htm). You may choose to deploy the model with popular inference frameworks such as vLLM or TGI. By default, the model deployment endpoint mimics the OpenAI API protocol.\n",
"\n",
@@ -30,7 +30,7 @@
"\n",
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatOCIModelDeployment](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.oci_data_science.ChatOCIModelDeployment.html) | [langchain-community](https://python.langchain.com/api_reference/community/index.html) | ❌ | beta | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-community?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-community?style=flat-square&label=%20) |\n",
"| [ChatOCIModelDeployment](https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.ChatOCIModelDeployment.html) | [langchain-community](https://api.python.langchain.com/en/latest/community_api_reference.html) | ❌ | beta | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-community?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-community?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"\n",
@@ -430,9 +430,9 @@
"\n",
"For comprehensive details on all features and configurations, please refer to the API reference documentation for each class:\n",
"\n",
"* [ChatOCIModelDeployment](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.oci_data_science.ChatOCIModelDeployment.html)\n",
"* [ChatOCIModelDeploymentVLLM](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.oci_data_science.ChatOCIModelDeploymentVLLM.html)\n",
"* [ChatOCIModelDeploymentTGI](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.oci_data_science.ChatOCIModelDeploymentTGI.html)"
"* [ChatOCIModelDeployment](https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.oci_data_science.ChatOCIModelDeployment.html)\n",
"* [ChatOCIModelDeploymentVLLM](https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.oci_data_science.ChatOCIModelDeploymentVLLM.html)\n",
"* [ChatOCIModelDeploymentTGI](https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.oci_data_science.ChatOCIModelDeploymentTGI.html)"
]
}
],

View File

@@ -26,14 +26,14 @@
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/oci_generative_ai) |\n",
"| :--- | :--- | :---: | :---: | :---: |\n",
"| [ChatOCIGenAI](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.oci_generative_ai.ChatOCIGenAI.html) | [langchain-community](https://python.langchain.com/api_reference/community/index.html) | ❌ | ❌ | ❌ |\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/oci_generative_ai) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatOCIGenAI](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.oci_generative_ai.ChatOCIGenAI.html) | [langchain-community](https://python.langchain.com/api_reference/community/index.html) | ❌ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-oci-generative-ai?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-oci-generative-ai?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling/) | [Structured output](/docs/how_to/structured_output/) | [JSON mode](/docs/how_to/structured_output/#advanced-specifying-the-method-for-structuring-outputs) | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| [Tool calling](/docs/how_to/tool_calling/) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | | | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | \n",
"| ✅ | ✅ | | | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | \n",
"\n",
"## Setup\n",
"\n",

View File

@@ -34,8 +34,8 @@
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling/) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: |:----------------------------------------------------:| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | \n",
"\n",
"## Setup\n",
"\n",
@@ -96,20 +96,6 @@
"%pip install -qU langchain-ollama"
]
},
{
"metadata": {},
"cell_type": "markdown",
"source": "Make sure you're using the latest Ollama version for structured outputs. Update by running:",
"id": "b18bd692076f7cf7"
},
{
"metadata": {},
"cell_type": "code",
"outputs": [],
"execution_count": null,
"source": "%pip install -U ollama",
"id": "b7a05cba95644c2e"
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",

View File

@@ -17,7 +17,7 @@
"source": [
"# ChatOutlines\n",
"\n",
"This will help you getting started with Outlines [chat models](/docs/concepts/chat_models/). For detailed documentation of all ChatOutlines features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.outlines.ChatOutlines.html).\n",
"This will help you getting started with Outlines [chat models](/docs/concepts/chat_models/). For detailed documentation of all ChatOutlines features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/chat_models/outlines.chat_models.ChatOutlines.html).\n",
"\n",
"[Outlines](https://github.com/outlines-dev/outlines) is a library for constrained language generation. It allows you to use large language models (LLMs) with various backends while applying constraints to the generated output.\n",
"\n",
@@ -26,7 +26,7 @@
"\n",
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatOutlines](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.outlines.ChatOutlines.html) | [langchain-community](https://python.langchain.com/api_reference/community/index.html) | ✅ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-community?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-community?style=flat-square&label=%20) |\n",
"| [ChatOutlines](https://api.python.langchain.com/en/latest/chat_models/outlines.chat_models.ChatOutlines.html) | [langchain-community](https://api.python.langchain.com/en/latest/community_api_reference.html) | ✅ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-community?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-community?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
@@ -316,7 +316,7 @@
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatOutlines features and configurations head to the API reference: https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.outlines.ChatOutlines.html\n",
"For detailed documentation of all ChatOutlines features and configurations head to the API reference: https://api.python.langchain.com/en/latest/chat_models/outlines.chat_models.ChatOutlines.html\n",
"\n",
"## Full Outlines Documentation: \n",
"\n",

View File

@@ -19,7 +19,7 @@
"source": [
"# ChatSambaStudio\n",
"\n",
"This will help you getting started with SambaStudio [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatStudio features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.sambanova.ChatSambaStudio.html).\n",
"This will help you getting started with SambaStudio [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatStudio features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.sambanova.ChatSambaStudio.html).\n",
"\n",
"**[SambaNova](https://sambanova.ai/)'s** [SambaStudio](https://docs.sambanova.ai/sambastudio/latest/sambastudio-intro.html) SambaStudio is a rich, GUI-based platform that provides the functionality to train, deploy, and manage models in SambaNova [DataScale](https://sambanova.ai/products/datascale) systems.\n",
"\n",
@@ -28,13 +28,13 @@
"\n",
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatSambaStudio](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.sambanova.ChatSambaStudio.html) | [langchain-community](https://python.langchain.com/api_reference/community/index.html) | ❌ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_community?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_community?style=flat-square&label=%20) |\n",
"| [ChatSambaStudio](https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.sambanova.ChatSambaStudio.html) | [langchain-community](https://python.langchain.com/api_reference/community/index.html) | ❌ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_community?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_community?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| | | | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | \n",
"| | | | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | \n",
"\n",
"## Setup\n",
"\n",
@@ -119,20 +119,20 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_models.sambanova import ChatSambaStudio\n",
"\n",
"llm = ChatSambaStudio(\n",
" model=\"Meta-Llama-3-70B-Instruct-4096\", # set if using a Bundle endpoint\n",
" model=\"Meta-Llama-3-70B-Instruct-4096\", # set if using a CoE endpoint\n",
" max_tokens=1024,\n",
" temperature=0.7,\n",
" top_k=1,\n",
" top_p=0.01,\n",
" do_sample=True,\n",
" process_prompt=\"True\", # set if using a Bundle endpoint\n",
" process_prompt=\"True\", # set if using a CoE endpoint\n",
")"
]
},
@@ -349,141 +349,13 @@
" print(chunk.content, end=\"\", flush=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Tool calling"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from datetime import datetime\n",
"\n",
"from langchain_core.messages import HumanMessage, SystemMessage, ToolMessage\n",
"from langchain_core.tools import tool\n",
"\n",
"\n",
"@tool\n",
"def get_time(kind: str = \"both\") -> str:\n",
" \"\"\"Returns current date, current time or both.\n",
" Args:\n",
" kind: date, time or both\n",
" \"\"\"\n",
" if kind == \"date\":\n",
" date = datetime.now().strftime(\"%m/%d/%Y\")\n",
" return f\"Current date: {date}\"\n",
" elif kind == \"time\":\n",
" time = datetime.now().strftime(\"%H:%M:%S\")\n",
" return f\"Current time: {time}\"\n",
" else:\n",
" date = datetime.now().strftime(\"%m/%d/%Y\")\n",
" time = datetime.now().strftime(\"%H:%M:%S\")\n",
" return f\"Current date: {date}, Current time: {time}\"\n",
"\n",
"\n",
"tools = [get_time]\n",
"\n",
"\n",
"def invoke_tools(tool_calls, messages):\n",
" available_functions = {tool.name: tool for tool in tools}\n",
" for tool_call in tool_calls:\n",
" selected_tool = available_functions[tool_call[\"name\"]]\n",
" tool_output = selected_tool.invoke(tool_call[\"args\"])\n",
" print(f\"Tool output: {tool_output}\")\n",
" messages.append(ToolMessage(tool_output, tool_call_id=tool_call[\"id\"]))\n",
" return messages"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"llm_with_tools = llm.bind_tools(tools=tools)\n",
"messages = [\n",
" HumanMessage(\n",
" content=\"I need to schedule a meeting for two weeks from today. Can you tell me the exact date of the meeting?\"\n",
" )\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Intermediate model response: [{'name': 'get_time', 'args': {'kind': 'date'}, 'id': 'call_4092d5dd21cd4eb494', 'type': 'tool_call'}]\n",
"Tool output: Current date: 11/07/2024\n",
"final response: The meeting will be exactly two weeks from today, which would be 25/07/2024.\n"
]
}
],
"source": [
"response = llm_with_tools.invoke(messages)\n",
"while len(response.tool_calls) > 0:\n",
" print(f\"Intermediate model response: {response.tool_calls}\")\n",
" messages.append(response)\n",
" messages = invoke_tools(response.tool_calls, messages)\n",
"response = llm_with_tools.invoke(messages)\n",
"\n",
"print(f\"final response: {response.content}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Structured Outputs"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Joke(setup='Why did the cat join a band?', punchline='Because it wanted to be the purr-cussionist!')"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"from pydantic import BaseModel, Field\n",
"\n",
"\n",
"class Joke(BaseModel):\n",
" \"\"\"Joke to tell user.\"\"\"\n",
"\n",
" setup: str = Field(description=\"The setup of the joke\")\n",
" punchline: str = Field(description=\"The punchline to the joke\")\n",
"\n",
"\n",
"structured_llm = llm.with_structured_output(Joke)\n",
"\n",
"structured_llm.invoke(\"Tell me a joke about cats\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatSambaStudio features and configurations head to the API reference: https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.sambanova.ChatSambaStudio.html"
"For detailed documentation of all ChatSambaStudio features and configurations head to the API reference: https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.sambanova.ChatSambaStudio.html"
]
}
],

View File

@@ -22,16 +22,24 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install --upgrade --quiet snowflake-snowpark-python"
]
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
@@ -65,14 +73,14 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_models import ChatSnowflakeCortex\n",
"from langchain_core.messages import HumanMessage, SystemMessage\n",
"\n",
"# By default, we'll be using the cortex provided model: `mistral-large`, with function: `complete`\n",
"# By default, we'll be using the cortex provided model: `snowflake-arctic`, with function: `complete`\n",
"chat = ChatSnowflakeCortex()"
]
},
@@ -84,16 +92,16 @@
"\n",
"```python\n",
"chat = ChatSnowflakeCortex(\n",
" # Change the default cortex model and function\n",
" model=\"mistral-large\",\n",
" # change default cortex model and function\n",
" model=\"snowflake-arctic\",\n",
" cortex_function=\"complete\",\n",
"\n",
" # Change the default generation parameters\n",
" # change default generation parameters\n",
" temperature=0,\n",
" max_tokens=10,\n",
" top_p=0.95,\n",
"\n",
" # Specify your Snowflake Credentials\n",
" # specify snowflake credentials\n",
" account=\"YOUR_SNOWFLAKE_ACCOUNT\",\n",
" username=\"YOUR_SNOWFLAKE_USERNAME\",\n",
" password=\"YOUR_SNOWFLAKE_PASSWORD\",\n",
@@ -109,13 +117,28 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Calling the chat model\n",
"We can now call the chat model using the `invoke` or `stream` methods."
"### Calling the model\n",
"We can now call the model using the `invoke` or `generate` method.\n",
"\n",
"#### Generation"
]
},
{
"cell_type": "markdown",
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\" Large language models are artificial intelligence systems designed to understand, generate, and manipulate human language. These models are typically based on deep learning techniques and are trained on vast amounts of text data to learn patterns and structures in language. They can perform a wide range of language-related tasks, such as language translation, text generation, sentiment analysis, and answering questions. Some well-known large language models include Google's BERT, OpenAI's GPT series, and Facebook's RoBERTa. These models have shown remarkable performance in various natural language processing tasks, and their applications continue to expand as research in AI progresses.\", response_metadata={'completion_tokens': 131, 'prompt_tokens': 29, 'total_tokens': 160}, id='run-5435bd0a-83fd-4295-b237-66cbd1b5c0f3-0')"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" SystemMessage(content=\"You are a friendly assistant.\"),\n",
@@ -128,31 +151,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Stream"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sample input prompt\n",
"messages = [\n",
" SystemMessage(content=\"You are a friendly assistant.\"),\n",
" HumanMessage(content=\"What are large language models?\"),\n",
"]\n",
"\n",
"# Invoke the stream method and print each chunk as it arrives\n",
"print(\"Stream Method Response:\")\n",
"for chunk in chat._stream(messages):\n",
" print(chunk.message.content)"
"### Streaming\n",
"`ChatSnowflakeCortex` doesn't support streaming as of now. Support for streaming will be coming in the later versions!"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "langchain",
"display_name": ".venv",
"language": "python",
"name": "python3"
},
@@ -166,7 +172,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.20"
"version": "3.11.9"
}
},
"nbformat": 4,

View File

@@ -11,7 +11,7 @@
"A loader for `Confluence` pages.\n",
"\n",
"\n",
"This currently supports `username/api_key`, `Oauth2 login`, `cookies`. Additionally, on-prem installations also support `token` authentication. \n",
"This currently supports `username/api_key`, `Oauth2 login`. Additionally, on-prem installations also support `token` authentication. \n",
"\n",
"\n",
"Specify a list `page_id`-s and/or `space_key` to load in the corresponding pages into Document objects, if both are specified the union of both sets will be returned.\n",

View File

@@ -1,253 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Needle Document Loader\n",
"[Needle](https://needle-ai.com) makes it easy to create your RAG pipelines with minimal effort. \n",
"\n",
"For more details, refer to our [API documentation](https://docs.needle-ai.com/docs/api-reference/needle-api)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Overview\n",
"The Needle Document Loader is a utility for integrating Needle collections with LangChain. It enables seamless storage, retrieval, and utilization of documents for Retrieval-Augmented Generation (RAG) workflows.\n",
"\n",
"This example demonstrates:\n",
"\n",
"* Storing documents into a Needle collection.\n",
"* Setting up a retriever to fetch documents.\n",
"* Building a Retrieval-Augmented Generation (RAG) pipeline."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Setup\n",
"Before starting, ensure you have the following environment variables set:\n",
"\n",
"* NEEDLE_API_KEY: Your API key for authenticating with Needle.\n",
"* OPENAI_API_KEY: Your OpenAI API key for language model operations."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"import os"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"os.environ[\"NEEDLE_API_KEY\"] = \"\""
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"os.environ[\"OPENAI_API_KEY\"] = \"\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialization\n",
"To initialize the NeedleLoader, you need the following parameters:\n",
"\n",
"* needle_api_key: Your Needle API key (or set it as an environment variable).\n",
"* collection_id: The ID of the Needle collection to work with."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiation"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders.needle import NeedleLoader\n",
"\n",
"collection_id = \"clt_01J87M9T6B71DHZTHNXYZQRG5H\"\n",
"\n",
"# Initialize NeedleLoader to store documents to the collection\n",
"document_loader = NeedleLoader(\n",
" needle_api_key=os.getenv(\"NEEDLE_API_KEY\"),\n",
" collection_id=collection_id,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load\n",
"To add files to the Needle collection:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"files = {\n",
" \"tech-radar-30.pdf\": \"https://www.thoughtworks.com/content/dam/thoughtworks/documents/radar/2024/04/tr_technology_radar_vol_30_en.pdf\"\n",
"}\n",
"\n",
"document_loader.add_files(files=files)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"# Show the documents in the collection\n",
"# collections_documents = document_loader.load()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Lazy Load\n",
"The lazy_load method allows you to iteratively load documents from the Needle collection, yielding each document as it is fetched:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Show the documents in the collection\n",
"# collections_documents = document_loader.lazy_load()"
]
},
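{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of consuming that generator (each iteration yields one `Document`):\n",
"\n",
"```python\n",
"for doc in document_loader.lazy_load():\n",
"    print(doc.metadata)\n",
"```\n"
]
},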
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Usage\n",
"### Use within a chain\n",
"Below is a complete example of setting up a RAG pipeline with Needle within a chain:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'input': 'Did RAG move to accepted?',\n",
" 'context': [Document(metadata={}, page_content='New Moved in/out No change\\n\\n© Thoughtworks, Inc. All Rights Reserved. 12\\n\\nTechniques\\n\\n1. Retrieval-augmented generation (RAG)\\nAdopt\\n\\nRetrieval-augmented generation (RAG) is the preferred pattern for our teams to improve the quality of \\nresponses generated by a large language model (LLM). Weve successfully used it in several projects, \\nincluding the popular Jugalbandi AI Platform. With RAG, information about relevant and trustworthy \\ndocuments — in formats like HTML and PDF — are stored in databases that supports a vector data \\ntype or efficient document search, such as pgvector, Qdrant or Elasticsearch Relevance Engine. For \\na given prompt, the database is queried to retrieve relevant documents, which are then combined \\nwith the prompt to provide richer context to the LLM. This results in higher quality output and greatly \\nreduced hallucinations. The context window — which determines the maximum size of the LLM input \\n— is limited, which means that selecting the most relevant documents is crucial. We improve the \\nrelevancy of the content that is added to the prompt by reranking. Similarly, the documents are usually \\ntoo large to calculate an embedding, which means they must be split into smaller chunks. This is often \\na difficult problem, and one approach is to have the chunks overlap to a certain extent.'),\n",
" Document(metadata={}, page_content='New Moved in/out No change\\n\\n© Thoughtworks, Inc. All Rights Reserved. 12\\n\\nTechniques\\n\\n1. Retrieval-augmented generation (RAG)\\nAdopt\\n\\nRetrieval-augmented generation (RAG) is the preferred pattern for our teams to improve the quality of \\nresponses generated by a large language model (LLM). Weve successfully used it in several projects, \\nincluding the popular Jugalbandi AI Platform. With RAG, information about relevant and trustworthy \\ndocuments — in formats like HTML and PDF — are stored in databases that supports a vector data \\ntype or efficient document search, such as pgvector, Qdrant or Elasticsearch Relevance Engine. For \\na given prompt, the database is queried to retrieve relevant documents, which are then combined \\nwith the prompt to provide richer context to the LLM. This results in higher quality output and greatly \\nreduced hallucinations. The context window — which determines the maximum size of the LLM input \\n— is limited, which means that selecting the most relevant documents is crucial. We improve the \\nrelevancy of the content that is added to the prompt by reranking. Similarly, the documents are usually \\ntoo large to calculate an embedding, which means they must be split into smaller chunks. This is often \\na difficult problem, and one approach is to have the chunks overlap to a certain extent.'),\n",
" Document(metadata={}, page_content='New Moved in/out No change\\n\\n© Thoughtworks, Inc. All Rights Reserved. 12\\n\\nTechniques\\n\\n1. Retrieval-augmented generation (RAG)\\nAdopt\\n\\nRetrieval-augmented generation (RAG) is the preferred pattern for our teams to improve the quality of \\nresponses generated by a large language model (LLM). Weve successfully used it in several projects, \\nincluding the popular Jugalbandi AI Platform. With RAG, information about relevant and trustworthy \\ndocuments — in formats like HTML and PDF — are stored in databases that supports a vector data \\ntype or efficient document search, such as pgvector, Qdrant or Elasticsearch Relevance Engine. For \\na given prompt, the database is queried to retrieve relevant documents, which are then combined \\nwith the prompt to provide richer context to the LLM. This results in higher quality output and greatly \\nreduced hallucinations. The context window — which determines the maximum size of the LLM input \\n— is limited, which means that selecting the most relevant documents is crucial. We improve the \\nrelevancy of the content that is added to the prompt by reranking. Similarly, the documents are usually \\ntoo large to calculate an embedding, which means they must be split into smaller chunks. This is often \\na difficult problem, and one approach is to have the chunks overlap to a certain extent.'),\n",
" Document(metadata={}, page_content='New Moved in/out No change\\n\\n© Thoughtworks, Inc. All Rights Reserved. 12\\n\\nTechniques\\n\\n1. Retrieval-augmented generation (RAG)\\nAdopt\\n\\nRetrieval-augmented generation (RAG) is the preferred pattern for our teams to improve the quality of \\nresponses generated by a large language model (LLM). Weve successfully used it in several projects, \\nincluding the popular Jugalbandi AI Platform. With RAG, information about relevant and trustworthy \\ndocuments — in formats like HTML and PDF — are stored in databases that supports a vector data \\ntype or efficient document search, such as pgvector, Qdrant or Elasticsearch Relevance Engine. For \\na given prompt, the database is queried to retrieve relevant documents, which are then combined \\nwith the prompt to provide richer context to the LLM. This results in higher quality output and greatly \\nreduced hallucinations. The context window — which determines the maximum size of the LLM input \\n— is limited, which means that selecting the most relevant documents is crucial. We improve the \\nrelevancy of the content that is added to the prompt by reranking. Similarly, the documents are usually \\ntoo large to calculate an embedding, which means they must be split into smaller chunks. This is often \\na difficult problem, and one approach is to have the chunks overlap to a certain extent.'),\n",
" Document(metadata={}, page_content='New Moved in/out No change\\n\\n© Thoughtworks, Inc. All Rights Reserved. 12\\n\\nTechniques\\n\\n1. Retrieval-augmented generation (RAG)\\nAdopt\\n\\nRetrieval-augmented generation (RAG) is the preferred pattern for our teams to improve the quality of \\nresponses generated by a large language model (LLM). Weve successfully used it in several projects, \\nincluding the popular Jugalbandi AI Platform. With RAG, information about relevant and trustworthy \\ndocuments — in formats like HTML and PDF — are stored in databases that supports a vector data \\ntype or efficient document search, such as pgvector, Qdrant or Elasticsearch Relevance Engine. For \\na given prompt, the database is queried to retrieve relevant documents, which are then combined \\nwith the prompt to provide richer context to the LLM. This results in higher quality output and greatly \\nreduced hallucinations. The context window — which determines the maximum size of the LLM input \\n— is limited, which means that selecting the most relevant documents is crucial. We improve the \\nrelevancy of the content that is added to the prompt by reranking. Similarly, the documents are usually \\ntoo large to calculate an embedding, which means they must be split into smaller chunks. This is often \\na difficult problem, and one approach is to have the chunks overlap to a certain extent.')],\n",
" 'answer': 'Yes, RAG has been adopted as the preferred pattern for improving the quality of responses generated by a large language model.'}"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import os\n",
"\n",
"from langchain.chains import create_retrieval_chain\n",
"from langchain.chains.combine_documents import create_stuff_documents_chain\n",
"from langchain_community.retrievers.needle import NeedleRetriever\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(temperature=0)\n",
"\n",
"# Initialize the Needle retriever (make sure your Needle API key is set as an environment variable)\n",
"retriever = NeedleRetriever(\n",
" needle_api_key=os.getenv(\"NEEDLE_API_KEY\"),\n",
" collection_id=\"clt_01J87M9T6B71DHZTHNXYZQRG5H\",\n",
")\n",
"\n",
"# Define system prompt for the assistant\n",
"system_prompt = \"\"\"\n",
" You are an assistant for question-answering tasks. \n",
" Use the following pieces of retrieved context to answer the question.\n",
" If you don't know, say so concisely.\\n\\n{context}\n",
" \"\"\"\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [(\"system\", system_prompt), (\"human\", \"{input}\")]\n",
")\n",
"\n",
"# Define the question-answering chain using a document chain (stuff chain) and the retriever\n",
"question_answer_chain = create_stuff_documents_chain(llm, prompt)\n",
"\n",
"# Create the RAG (Retrieval-Augmented Generation) chain by combining the retriever and the question-answering chain\n",
"rag_chain = create_retrieval_chain(retriever, question_answer_chain)\n",
"\n",
"# Define the input query\n",
"query = {\"input\": \"Did RAG move to accepted?\"}\n",
"\n",
"response = rag_chain.invoke(query)\n",
"\n",
"response"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `Needle` features and configurations head to the API reference: https://docs.needle-ai.com"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -103,7 +103,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [

View File

@@ -153,7 +153,7 @@
"\n",
"Best practices for developing with LangChain.\n",
"\n",
"### [API reference](https://python.langchain.com/api_reference/) [](\\#api-reference \"Direct link to api-reference\")\n",
"### [API reference](https://api.python.langchain.com) [](\\#api-reference \"Direct link to api-reference\")\n",
"\n",
"Head to the reference section for full documentation of all classes and methods in the LangChain and LangChain Experimental Python packages.\n",
"\n",

View File

@@ -1,149 +0,0 @@
{
"cells": [
{
"cell_type": "raw",
"metadata": {},
"source": [
"---\n",
"sidebar_label: YoutubeLoaderDL\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# YoutubeLoaderDL\n",
"\n",
"Loader for Youtube leveraging the `yt-dlp` library.\n",
"\n",
"This package implements a [document loader](/docs/concepts/document_loaders/) for Youtube. In contrast to the [YoutubeLoader](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.youtube.YoutubeLoader.html) of `langchain-community`, which relies on `pytube`, `YoutubeLoaderDL` is able to fetch YouTube metadata. `langchain-yt-dlp` leverages the robust `yt-dlp` library, providing a more reliable and feature-rich YouTube document loader.\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | JS Support |\n",
"| :--- | :--- | :---: | :---: | :---: |\n",
"| YoutubeLoader | langchain-yt-dlp | ✅ | ✅ | ❌ |\n",
"\n",
"## Setup\n",
"\n",
"### Installation\n",
"\n",
"```bash\n",
"pip install langchain-yt-dlp\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Initialization"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"from langchain_yt_dlp.youtube_loader import YoutubeLoaderDL\n",
"\n",
"# Basic transcript loading\n",
"loader = YoutubeLoaderDL.from_youtube_url(\n",
" \"https://www.youtube.com/watch?v=dQw4w9WgXcQ\", add_video_info=True\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"documents = loader.load()"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'source': 'dQw4w9WgXcQ',\n",
" 'title': 'Rick Astley - Never Gonna Give You Up (Official Music Video)',\n",
" 'description': 'The official video for “Never Gonna Give You Up” by Rick Astley. \\n\\nNever: The Autobiography 📚 OUT NOW! \\nFollow this link to get your copy and listen to Ricks Never playlist ❤️ #RickAstleyNever\\nhttps://linktr.ee/rickastleynever\\n\\n“Never Gonna Give You Up” was a global smash on its release in July 1987, topping the charts in 25 countries including Ricks native UK and the US Billboard Hot 100. It also won the Brit Award for Best single in 1988. Stock Aitken and Waterman wrote and produced the track which was the lead-off single and lead track from Ricks debut LP “Whenever You Need Somebody”. The album was itself a UK number one and would go on to sell over 15 million copies worldwide.\\n\\nThe legendary video was directed by Simon West who later went on to make Hollywood blockbusters such as Con Air, Lara Croft Tomb Raider and The Expendables 2. The video passed the 1bn YouTube views milestone on 28 July 2021.\\n\\nSubscribe to the official Rick Astley YouTube channel: https://RickAstley.lnk.to/YTSubID\\n\\nFollow Rick Astley:\\nFacebook: https://RickAstley.lnk.to/FBFollowID \\nTwitter: https://RickAstley.lnk.to/TwitterID \\nInstagram: https://RickAstley.lnk.to/InstagramID \\nWebsite: https://RickAstley.lnk.to/storeID \\nTikTok: https://RickAstley.lnk.to/TikTokID\\n\\nListen to Rick Astley:\\nSpotify: https://RickAstley.lnk.to/SpotifyID \\nApple Music: https://RickAstley.lnk.to/AppleMusicID \\nAmazon Music: https://RickAstley.lnk.to/AmazonMusicID \\nDeezer: https://RickAstley.lnk.to/DeezerID \\n\\nLyrics:\\nWere no strangers to love\\nYou know the rules and so do I\\nA full commitments what Im thinking of\\nYou wouldnt get this from any other guy\\n\\nI just wanna tell you how Im feeling\\nGotta make you understand\\n\\nNever gonna give you up\\nNever gonna let you down\\nNever gonna run around and desert you\\nNever gonna make you cry\\nNever gonna say goodbye\\nNever gonna tell a lie and hurt you\\n\\nWeve known each other for so long\\nYour hearts been aching but youre too shy to say it\\nInside we both know whats been going on\\nWe know the game and were gonna play it\\n\\nAnd if you ask me how Im feeling\\nDont tell me youre too blind to see\\n\\nNever gonna give you up\\nNever gonna let you down\\nNever gonna run around and desert you\\nNever gonna make you cry\\nNever gonna say goodbye\\nNever gonna tell a lie and hurt you\\n\\n#RickAstley #NeverGonnaGiveYouUp #WheneverYouNeedSomebody #OfficialMusicVideo',\n",
" 'view_count': 1603360806,\n",
" 'publish_date': datetime.datetime(2009, 10, 25, 0, 0),\n",
" 'length': 212,\n",
" 'author': 'Rick Astley',\n",
" 'channel_id': 'UCuAXFkgsw1L7xaCfnd5JJOw',\n",
" 'webpage_url': 'https://www.youtube.com/watch?v=dQw4w9WgXcQ'}"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"documents[0].metadata"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Lazy Load"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"- No lazy loading is implemented"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference:\n",
"\n",
"- [Github](https://github.com/aqib0770/langchain-yt-dlp)\n",
"- [PyPi](https://pypi.org/project/langchain-yt-dlp/)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

File diff suppressed because it is too large

View File

@@ -12,16 +12,6 @@
"First, let's install some dependencies"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bedbf252-4ea5-4eea-a3dc-d18ccc84aca3",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-openai langchain-community"
]
},
{
"cell_type": "code",
"execution_count": 2,
@@ -29,6 +19,8 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-openai langchain-community\n",
"\n",
"import os\n",
"from getpass import getpass\n",
"\n",
@@ -63,7 +55,7 @@
"tags": []
},
"source": [
"## `In Memory` cache"
"## `In Memory` Cache"
]
},
{
@@ -147,7 +139,7 @@
"tags": []
},
"source": [
"## `SQLite` cache"
"## `SQLite` Cache"
]
},
{
@@ -242,7 +234,7 @@
"id": "e71273ab",
"metadata": {},
"source": [
"## `Upstash Redis` caches"
"## `Upstash Redis` Cache"
]
},
{
@@ -250,7 +242,7 @@
"id": "f10dabef",
"metadata": {},
"source": [
"### Standard cache\n",
"### Standard Cache\n",
"Use [Upstash Redis](https://upstash.com) to cache prompts and responses with a serverless HTTP API."
]
},
@@ -348,8 +340,7 @@
"id": "b29dd776",
"metadata": {},
"source": [
"### Semantic cache\n",
"\n",
"### Semantic Cache\n",
"Use [Upstash Vector](https://upstash.com/docs/vector/overall/whatisvector) to do a semantic similarity search and cache the most similar response in the database. The vectorization is automatically done by the selected embedding model while creating Upstash Vector database. "
]
},
@@ -463,10 +454,11 @@
"cell_type": "markdown",
"id": "278ad7ae",
"metadata": {
"jp-MarkdownHeadingCollapsed": true,
"tags": []
},
"source": [
"## `Redis` caches\n",
"## `Redis` Cache\n",
"\n",
"See the main [Redis cache docs](/docs/integrations/caches/redis_llm_caching/) for detail."
]
@@ -476,7 +468,7 @@
"id": "c5c9a4d5",
"metadata": {},
"source": [
"### Standard cache\n",
"### Standard Cache\n",
"Use [Redis](/docs/integrations/providers/redis) to cache prompts and responses."
]
},
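
The standard Redis pattern is a one-liner once a client exists; a minimal sketch, assuming a local Redis on the default port:

```python
from redis import Redis

from langchain_community.cache import RedisCache
from langchain_core.globals import set_llm_cache

# Exact-match caching backed by a local Redis instance.
set_llm_cache(RedisCache(redis_=Redis()))
```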
@@ -572,7 +564,7 @@
"id": "82be23f6",
"metadata": {},
"source": [
"### Semantic cache\n",
"### Semantic Cache\n",
"Use [Redis](/docs/integrations/providers/redis) to cache prompts and responses and evaluate hits based on semantic similarity."
]
},
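
The semantic variant, sketched with OpenAI embeddings; any `Embeddings` implementation works, and the Redis URL is a placeholder:

```python
from langchain_community.cache import RedisSemanticCache
from langchain_core.globals import set_llm_cache
from langchain_openai import OpenAIEmbeddings

# Cache hits are decided by embedding similarity, not exact string match.
set_llm_cache(
    RedisSemanticCache(redis_url="redis://localhost:6379", embedding=OpenAIEmbeddings())
)
```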
@@ -668,6 +660,7 @@
"cell_type": "markdown",
"id": "684eab55",
"metadata": {
"jp-MarkdownHeadingCollapsed": true,
"tags": []
},
"source": [
@@ -912,7 +905,7 @@
"id": "9b2b2777",
"metadata": {},
"source": [
"## `MongoDB Atlas` caches\n",
"## `MongoDB Atlas` Cache\n",
"\n",
"[MongoDB Atlas](https://www.mongodb.com/docs/atlas/) is a fully-managed cloud database available in AWS, Azure, and GCP. It has native support for \n",
"Vector Search on the MongoDB document data.\n",
@@ -924,9 +917,8 @@
"id": "ecdc2a0a",
"metadata": {},
"source": [
"### Standard cache\n",
"\n",
"Standard cache is a simple cache in MongoDB. It does not use Semantic Caching, nor does it require an index to be made on the collection before generation.\n",
"### `MongoDBCache`\n",
"An abstraction to store a simple cache in MongoDB. This does not use Semantic Caching, nor does it require an index to be made on the collection before generation.\n",
"\n",
"To import this cache, first install the required dependency:\n",
"\n",
@@ -958,9 +950,8 @@
"```\n",
"\n",
"\n",
"### Semantic cache\n",
"\n",
"Semantic caching allows retrieval of cached prompts based on semantic similarity between the user input and previously cached results. Under the hood, it blends MongoDBAtlas as both a cache and a vectorstore.\n",
"### `MongoDBAtlasSemanticCache`\n",
"Semantic caching allows users to retrieve cached prompts based on semantic similarity between the user input and previously cached results. Under the hood it blends MongoDBAtlas as both a cache and a vectorstore.\n",
"The MongoDBAtlasSemanticCache inherits from `MongoDBAtlasVectorSearch` and needs an Atlas Vector Search Index defined to work. Please look at the [usage example](/docs/integrations/vectorstores/mongodb_atlas) on how to set up the index.\n",
"\n",
"To import this cache:\n",
@@ -994,13 +985,14 @@
"cell_type": "markdown",
"id": "726fe754",
"metadata": {
"jp-MarkdownHeadingCollapsed": true,
"tags": []
},
"source": [
"## `Momento` cache\n",
"## `Momento` Cache\n",
"Use [Momento](/docs/integrations/providers/momento) to cache prompts and responses.\n",
"\n",
"Requires installing the `momento` package:"
"Requires momento to use, uncomment below to install:"
]
},
{
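
A sketch of the Momento wiring: `from_client_params` builds the client from your Momento API key in the environment and creates the named cache if it does not exist yet. The cache name and TTL are illustrative.

```python
from datetime import timedelta

from langchain_community.cache import MomentoCache
from langchain_core.globals import set_llm_cache

# Entries expire after one day; the cache "langchain" is created on first use.
set_llm_cache(MomentoCache.from_client_params("langchain", timedelta(days=1)))
```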
@@ -1104,14 +1096,13 @@
"cell_type": "markdown",
"id": "934943dc",
"metadata": {
"jp-MarkdownHeadingCollapsed": true,
"tags": []
},
"source": [
"## `SQLAlchemy` cache\n",
"## `SQLAlchemy` Cache\n",
"\n",
"You can use `SQLAlchemyCache` to cache with any SQL database supported by `SQLAlchemy`.\n",
"\n",
"### Standard cache"
"You can use `SQLAlchemyCache` to cache with any SQL database supported by `SQLAlchemy`."
]
},
{
@@ -1121,11 +1112,11 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.cache import SQLAlchemyCache\n",
"from sqlalchemy import create_engine\n",
"# from langchain.cache import SQLAlchemyCache\n",
"# from sqlalchemy import create_engine\n",
"\n",
"engine = create_engine(\"postgresql://postgres:postgres@localhost:5432/postgres\")\n",
"set_llm_cache(SQLAlchemyCache(engine))"
"# engine = create_engine(\"postgresql://postgres:postgres@localhost:5432/postgres\")\n",
"# set_llm_cache(SQLAlchemyCache(engine))"
]
},
{
@@ -1133,9 +1124,7 @@
"id": "0959d640",
"metadata": {},
"source": [
"### Custom SQLAlchemy schemas\n",
"\n",
"You can define your own declarative `SQLAlchemyCache` child class to customize the schema used for caching. For example, to support high-speed fulltext prompt indexing with `Postgres`, use:"
"### Custom SQLAlchemy Schemas"
]
},
{
@@ -1145,6 +1134,8 @@
"metadata": {},
"outputs": [],
"source": [
"# You can define your own declarative SQLAlchemyCache child class to customize the schema used for caching. For example, to support high-speed fulltext prompt indexing with Postgres, use:\n",
"\n",
"from langchain_community.cache import SQLAlchemyCache\n",
"from sqlalchemy import Column, Computed, Index, Integer, Sequence, String, create_engine\n",
"from sqlalchemy.ext.declarative import declarative_base\n",
@@ -1194,7 +1185,7 @@
"id": "6cf6acb4-1bc4-4c4b-9325-2420c17e5e2b",
"metadata": {},
"source": [
"Required dependency:"
"### Required dependency"
]
},
{
@@ -1212,7 +1203,7 @@
"id": "a4a6725d",
"metadata": {},
"source": [
"### Connecting to the DB\n",
"### Connect to the DB\n",
"\n",
"The Cassandra caches shown in this page can be used with Cassandra as well as other derived databases, such as Astra DB, which use the CQL (Cassandra Query Language) protocol.\n",
"\n",
@@ -1226,7 +1217,7 @@
"id": "15735abe-2567-43ce-aa91-f253b33b5a88",
"metadata": {},
"source": [
"#### to a Cassandra cluster\n",
"#### Connecting to a Cassandra cluster\n",
"\n",
"You first need to create a `cassandra.cluster.Session` object, as described in the [Cassandra driver documentation](https://docs.datastax.com/en/developer/python-driver/latest/api/cassandra/cluster/#module-cassandra.cluster). The details vary (e.g. with network settings and authentication), but this might be something like:"
]
@@ -1279,7 +1270,7 @@
"id": "2cc7ba29-8f84-4fbf-aaf7-3daa1be7e7b0",
"metadata": {},
"source": [
"#### to Astra DB through CQL\n",
"#### Connecting to Astra DB through CQL\n",
"\n",
"In this case you initialize CassIO with the following connection parameters:\n",
"\n",
@@ -1338,7 +1329,7 @@
"id": "8665664a",
"metadata": {},
"source": [
"### Standard cache\n",
"### Cassandra: Exact cache\n",
"\n",
"This will avoid invoking the LLM when the supplied prompt is _exactly_ the same as one encountered already:"
]
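
Sketched, the exact cache is a one-liner once CassIO has been initialized; `session` and `CASSANDRA_KEYSPACE` are placeholders carried over from the connection section above.

```python
import cassio

from langchain_community.cache import CassandraCache
from langchain_core.globals import set_llm_cache

# `session` and `CASSANDRA_KEYSPACE` come from the connection step above.
cassio.init(session=session, keyspace=CASSANDRA_KEYSPACE)
set_llm_cache(CassandraCache())  # resolves session/keyspace via CassIO
```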
@@ -1409,7 +1400,7 @@
"id": "8fc4d017",
"metadata": {},
"source": [
"### Semantic cache\n",
"### Cassandra: Semantic cache\n",
"\n",
"This cache will do a semantic similarity search and return a hit if it finds a cached entry that is similar enough, For this, you need to provide an `Embeddings` instance of your choice."
]
@@ -1497,9 +1488,9 @@
"id": "55dc84b3-37cb-4f19-b175-40e18e06f83f",
"metadata": {},
"source": [
"**Attribution statement:**\n",
"#### Attribution statement\n",
"\n",
">`Apache Cassandra`, `Cassandra` and `Apache` are either registered trademarks or trademarks of the [Apache Software Foundation](http://www.apache.org/) in the United States and/or other countries."
">Apache Cassandra, Cassandra and Apache are either registered trademarks or trademarks of the [Apache Software Foundation](http://www.apache.org/) in the United States and/or other countries."
]
},
{
@@ -1507,7 +1498,7 @@
"id": "8712f8fc-bb89-4164-beb9-c672778bbd91",
"metadata": {},
"source": [
"## `Astra DB` caches"
"## `Astra DB` Caches"
]
},
{
@@ -1552,7 +1543,7 @@
"id": "ee6d587f-4b7c-43f4-9e90-5129c842a143",
"metadata": {},
"source": [
"### Standard cache\n",
"### Astra DB exact LLM cache\n",
"\n",
"This will avoid invoking the LLM when the supplied prompt is _exactly_ the same as one encountered already:"
]
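
As a sketch, with the endpoint and token from the setup cell above (assuming the top-level `AstraDBCache` export of `langchain_astradb`):

```python
from langchain_astradb import AstraDBCache
from langchain_core.globals import set_llm_cache

# ASTRA_DB_API_ENDPOINT and ASTRA_DB_APPLICATION_TOKEN come from the setup above.
set_llm_cache(
    AstraDBCache(
        api_endpoint=ASTRA_DB_API_ENDPOINT,
        token=ASTRA_DB_APPLICATION_TOKEN,
    )
)
```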
@@ -1628,7 +1619,7 @@
"id": "524b94fa-6162-4880-884d-d008749d14e2",
"metadata": {},
"source": [
"### Semantic cache\n",
"### Astra DB Semantic cache\n",
"\n",
"This cache will do a semantic similarity search and return a hit if it finds a cached entry that is similar enough, For this, you need to provide an `Embeddings` instance of your choice."
]
@@ -1722,7 +1713,7 @@
}
},
"source": [
"## `Azure Cosmos DB` semantic cache\n",
"## Azure Cosmos DB Semantic Cache\n",
"\n",
"You can use this integrated [vector database](https://learn.microsoft.com/en-us/azure/cosmos-db/vector-database) for caching."
]
@@ -1857,162 +1848,18 @@
"output_type": "execute_result"
}
],
"source": [
"%%time\n",
"# The second time it is, so it goes faster\n",
"llm.invoke(\"Tell me a joke\")"
]
},
{
"cell_type": "markdown",
"id": "235ff73bf7143f13",
"metadata": {},
"source": [
"## `Azure Cosmos DB NoSql` semantic cache\n",
"\n",
"You can use this integrated [vector database](https://learn.microsoft.com/en-us/azure/cosmos-db/vector-database) for caching."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "41fea5aa7b2153ca",
"metadata": {
"ExecuteTime": {
"end_time": "2024-12-06T00:55:38.648972Z",
"start_time": "2024-12-06T00:55:38.290541Z"
}
},
"outputs": [],
"source": [
"from typing import Any, Dict\n",
"\n",
"from azure.cosmos import CosmosClient, PartitionKey\n",
"from langchain_community.cache import AzureCosmosDBNoSqlSemanticCache\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"HOST = \"COSMOS_DB_URI\"\n",
"KEY = \"COSMOS_DB_KEY\"\n",
"\n",
"cosmos_client = CosmosClient(HOST, KEY)\n",
"\n",
"\n",
"def get_vector_indexing_policy() -> dict:\n",
" return {\n",
" \"indexingMode\": \"consistent\",\n",
" \"includedPaths\": [{\"path\": \"/*\"}],\n",
" \"excludedPaths\": [{\"path\": '/\"_etag\"/?'}],\n",
" \"vectorIndexes\": [{\"path\": \"/embedding\", \"type\": \"diskANN\"}],\n",
" }\n",
"\n",
"\n",
"def get_vector_embedding_policy() -> dict:\n",
" return {\n",
" \"vectorEmbeddings\": [\n",
" {\n",
" \"path\": \"/embedding\",\n",
" \"dataType\": \"float32\",\n",
" \"dimensions\": 1536,\n",
" \"distanceFunction\": \"cosine\",\n",
" }\n",
" ]\n",
" }\n",
"\n",
"\n",
"cosmos_container_properties_test = {\"partition_key\": PartitionKey(path=\"/id\")}\n",
"cosmos_database_properties_test: Dict[str, Any] = {}\n",
"\n",
"set_llm_cache(\n",
" AzureCosmosDBNoSqlSemanticCache(\n",
" cosmos_client=cosmos_client,\n",
" embedding=OpenAIEmbeddings(),\n",
" vector_embedding_policy=get_vector_embedding_policy(),\n",
" indexing_policy=get_vector_indexing_policy(),\n",
" cosmos_container_properties=cosmos_container_properties_test,\n",
" cosmos_database_properties=cosmos_database_properties_test,\n",
" )\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "1e1cd93819921bf6",
"metadata": {
"ExecuteTime": {
"end_time": "2024-12-06T00:55:44.513080Z",
"start_time": "2024-12-06T00:55:41.353843Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"CPU times: user 374 ms, sys: 34.2 ms, total: 408 ms\n",
"Wall time: 3.15 s\n"
]
},
{
"data": {
"text/plain": [
"\"\\n\\nWhy couldn't the bicycle stand up by itself? Because it was two-tired!\""
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"%%time\n",
"# The first time, it is not yet in cache, so it should take longer\n",
"llm.invoke(\"Tell me a joke\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "576ce24c1244812a",
"metadata": {
"ExecuteTime": {
"end_time": "2024-12-06T00:55:50.925865Z",
"start_time": "2024-12-06T00:55:50.548520Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"CPU times: user 17.7 ms, sys: 2.88 ms, total: 20.6 ms\n",
"Wall time: 373 ms\n"
]
},
{
"data": {
"text/plain": [
"\"\\n\\nWhy couldn't the bicycle stand up by itself? Because it was two-tired!\""
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"%%time\n",
"# The second time it is, so it goes faster\n",
"llm.invoke(\"Tell me a joke\")"
]
},
{
"cell_type": "markdown",
"id": "306ff47b",
"metadata": {},
"source": [
"## `Elasticsearch` caches\n",
"\n",
"## `Elasticsearch` Cache\n",
"A caching layer for LLMs that uses Elasticsearch.\n",
"\n",
"First install the LangChain integration with Elasticsearch."
@@ -2033,8 +1880,6 @@
"id": "9e70b0a0",
"metadata": {},
"source": [
"### Standard cache\n",
"\n",
"Use the class `ElasticsearchCache`.\n",
"\n",
"Simple example:"
@@ -2142,26 +1987,6 @@
"please only make additive modifications, keeping the base mapping intact."
]
},
{
"cell_type": "markdown",
"id": "3dc15b3c-8793-432e-98c3-d2726d497a5a",
"metadata": {},
"source": [
"### Embedding cache\n",
"\n",
"An Elasticsearch store for caching embeddings."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b0ae5bd3-517d-470f-9b44-14d9359e6940",
"metadata": {},
"outputs": [],
"source": [
"from langchain_elasticsearch import ElasticsearchEmbeddingsCache"
]
},
{
"cell_type": "markdown",
"id": "0c69d84d",
@@ -2169,9 +1994,8 @@
"tags": []
},
"source": [
"## LLM-specific optional caching\n",
"\n",
"You can also turn off caching for specific LLMs. In the example below, even though global caching is enabled, we turn it off for a specific LLM."
"## Optional Caching\n",
"You can also turn off caching for specific LLMs should you choose. In the example below, even though global caching is enabled, we turn it off for a specific LLM"
]
},
{
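
The opt-out itself is a single constructor argument, sketched here with an OpenAI completion model; the model name is illustrative.

```python
from langchain_openai import OpenAI

# Global caching stays on, but this particular model bypasses it.
llm = OpenAI(model="gpt-3.5-turbo-instruct", cache=False)
```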
@@ -2251,8 +2075,7 @@
"tags": []
},
"source": [
"## Optional caching in Chains\n",
"\n",
"## Optional Caching in Chains\n",
"You can also turn off caching for particular nodes in chains. Note that because of certain interfaces, its often easier to construct the chain first, and then edit the LLM afterwards.\n",
"\n",
"As an example, we will load a summarizer map-reduce chain. We will cache results for the map-step, but then not freeze it for the combine step."
@@ -2419,7 +2242,7 @@
"id": "9ecfa565038eff71",
"metadata": {},
"source": [
"## `OpenSearch` semantic cache\n",
"## OpenSearch Semantic Cache\n",
"Use [OpenSearch](https://python.langchain.com/docs/integrations/vectorstores/opensearch/) as a semantic cache to cache prompts and responses and evaluate hits based on semantic similarity."
]
},
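
Sketched below; the OpenSearch URL is a placeholder, and any `Embeddings` implementation can stand in for OpenAI's.

```python
from langchain_community.cache import OpenSearchSemanticCache
from langchain_core.globals import set_llm_cache
from langchain_openai import OpenAIEmbeddings

# Hits are returned for prompts semantically similar to cached ones.
set_llm_cache(
    OpenSearchSemanticCache(
        opensearch_url="http://localhost:9200", embedding=OpenAIEmbeddings()
    )
)
```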
@@ -2523,8 +2346,7 @@
"id": "2ac1a8c7",
"metadata": {},
"source": [
"## `SingleStoreDB` semantic cache\n",
"\n",
"## SingleStoreDB Semantic Cache\n",
"You can use [SingleStoreDB](https://python.langchain.com/docs/integrations/vectorstores/singlestoredb/) as a semantic cache to cache prompts and responses."
]
},
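
A sketch with a placeholder connection string:

```python
from langchain_community.cache import SingleStoreDBSemanticCache
from langchain_core.globals import set_llm_cache
from langchain_openai import OpenAIEmbeddings

set_llm_cache(
    SingleStoreDBSemanticCache(
        embedding=OpenAIEmbeddings(),
        host="root:pass@localhost:3306/db",  # placeholder credentials
    )
)
```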
@@ -2551,7 +2373,7 @@
"id": "7e6b9b1a",
"metadata": {},
"source": [
"## `Memcached` cache\n",
"## `Memcached` Cache\n",
"You can use [Memcached](https://www.memcached.org/) as a cache to cache prompts and responses through [pymemcache](https://github.com/pinterest/pymemcache).\n",
"\n",
"This cache requires the pymemcache dependency to be installed:"
@@ -2647,7 +2469,7 @@
"id": "7019c991-0101-4f9c-b212-5729a5471293",
"metadata": {},
"source": [
"## `Couchbase` caches\n",
"## Couchbase Caches\n",
"\n",
"Use [Couchbase](https://couchbase.com/) as a cache for prompts and responses."
]
@@ -2657,7 +2479,7 @@
"id": "d6aac680-ba32-4c19-8864-6471cf0e7d5a",
"metadata": {},
"source": [
"### Standard cache\n",
"### Couchbase Cache\n",
"\n",
"The standard cache that looks for an exact match of the user prompt."
]
@@ -2791,7 +2613,7 @@
"id": "1dca39d8-233a-45ba-ad7d-0920dfbc4a50",
"metadata": {},
"source": [
"#### Time to Live (TTL) for the cached entries\n",
"### Specifying a Time to Live (TTL) for the Cached entries\n",
"The Cached documents can be deleted after a specified time automatically by specifying a `ttl` parameter along with the initialization of the Cache."
]
},
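
In sketch form, `ttl` is passed at construction; this assumes it accepts a `timedelta`, and the cluster, bucket, scope, and collection names come from the standard-cache setup above.

```python
from datetime import timedelta

from langchain_core.globals import set_llm_cache
from langchain_couchbase.cache import CouchbaseCache

set_llm_cache(
    CouchbaseCache(
        cluster=cluster,  # authenticated couchbase Cluster from the setup above
        bucket_name=BUCKET_NAME,
        scope_name=SCOPE_NAME,
        collection_name=COLLECTION_NAME,
        ttl=timedelta(hours=1),  # entries expire automatically after one hour
    )
)
```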
@@ -2820,7 +2642,7 @@
"id": "43626f33-d184-4260-b641-c9341cef5842",
"metadata": {},
"source": [
"### Semantic cache\n",
"### Couchbase Semantic Cache\n",
"Semantic caching allows users to retrieve cached prompts based on semantic similarity between the user input and previously cached inputs. Under the hood it uses Couchbase as both a cache and a vectorstore. This needs an appropriate Vector Search Index defined to work. Please look at the usage example on how to set up the index."
]
},
@@ -2863,9 +2685,7 @@
"- The search index for the semantic cache needs to be defined before using the semantic cache. \n",
"- The optional parameter, `score_threshold` in the Semantic Cache that you can use to tune the results of the semantic search.\n",
"\n",
"#### Index to the Full Text Search service\n",
"\n",
"How to Import an Index to the Full Text Search service?\n",
"### How to Import an Index to the Full Text Search service?\n",
" - [Couchbase Server](https://docs.couchbase.com/server/current/search/import-search-index.html)\n",
" - Click on Search -> Add Index -> Import\n",
" - Copy the following Index definition in the Import screen\n",
@@ -2875,8 +2695,7 @@
" - Import the file in Capella using the instructions in the documentation.\n",
" - Click on Create Index to create the index.\n",
"\n",
"**Example index for the vector search:**\n",
"\n",
"#### Example index for the vector search. \n",
" ```\n",
" {\n",
" \"type\": \"fulltext-index\",\n",
@@ -3036,8 +2855,7 @@
"id": "f6f674fa-70b5-4cf9-a208-992aad2c3c89",
"metadata": {},
"source": [
"#### Time to Live (TTL) for the cached entries\n",
"\n",
"### Specifying a Time to Live (TTL) for the Cached entries\n",
"The Cached documents can be deleted after a specified time automatically by specifying a `ttl` parameter along with the initialization of the Cache."
]
},
@@ -3095,10 +2913,10 @@
"source": [
"**Cache** classes are implemented by inheriting the [BaseCache](https://python.langchain.com/api_reference/core/caches/langchain_core.caches.BaseCache.html) class.\n",
"\n",
"This table lists all derived classes with links to the API Reference.\n",
"This table lists all 21 derived classes with links to the API Reference.\n",
"\n",
"\n",
"| Namespace | Class 🔻 |\n",
"| Namespace 🔻 | Class |\n",
"|------------|---------|\n",
"| langchain_astradb.cache | [AstraDBCache](https://python.langchain.com/api_reference/astradb/cache/langchain_astradb.cache.AstraDBCache.html) |\n",
"| langchain_astradb.cache | [AstraDBSemanticCache](https://python.langchain.com/api_reference/astradb/cache/langchain_astradb.cache.AstraDBSemanticCache.html) |\n",
@@ -3107,31 +2925,22 @@
"| langchain_community.cache | [AzureCosmosDBSemanticCache](https://python.langchain.com/api_reference/community/cache/langchain_community.cache.AzureCosmosDBSemanticCache.html) |\n",
"| langchain_community.cache | [CassandraCache](https://python.langchain.com/api_reference/community/cache/langchain_community.cache.CassandraCache.html) |\n",
"| langchain_community.cache | [CassandraSemanticCache](https://python.langchain.com/api_reference/community/cache/langchain_community.cache.CassandraSemanticCache.html) |\n",
"| langchain_couchbase.cache | [CouchbaseCache](https://python.langchain.com/api_reference/couchbase/cache/langchain_couchbase.cache.CouchbaseCache.html) |\n",
"| langchain_couchbase.cache | [CouchbaseSemanticCache](https://python.langchain.com/api_reference/couchbase/cache/langchain_couchbase.cache.CouchbaseSemanticCache.html) |\n",
"| langchain_elasticsearch.cache | [ElasticsearchCache](https://python.langchain.com/api_reference/elasticsearch/cache/langchain_elasticsearch.cache.AsyncElasticsearchCache.html) |\n",
"| langchain_elasticsearch.cache | [ElasticsearchEmbeddingsCache](https://python.langchain.com/api_reference/elasticsearch/cache/langchain_elasticsearch.cache.AsyncElasticsearchEmbeddingsCache.html) |\n",
"| langchain_community.cache | [GPTCache](https://python.langchain.com/api_reference/community/cache/langchain_community.cache.GPTCache.html) |\n",
"| langchain_core.caches | [InMemoryCache](https://python.langchain.com/api_reference/core/caches/langchain_core.caches.InMemoryCache.html) |\n",
"| langchain_community.cache | [InMemoryCache](https://python.langchain.com/api_reference/community/cache/langchain_community.cache.InMemoryCache.html) |\n",
"| langchain_community.cache | [MomentoCache](https://python.langchain.com/api_reference/community/cache/langchain_community.cache.MomentoCache.html) |\n",
"| langchain_mongodb.cache | [MongoDBAtlasSemanticCache](https://python.langchain.com/api_reference/mongodb/cache/langchain_mongodb.cache.MongoDBAtlasSemanticCache.html) |\n",
"| langchain_mongodb.cache | [MongoDBCache](https://python.langchain.com/api_reference/mongodb/cache/langchain_mongodb.cache.MongoDBCache.html) |\n",
"| langchain_community.cache | [OpenSearchSemanticCache](https://python.langchain.com/api_reference/community/cache/langchain_community.cache.OpenSearchSemanticCache.html) |\n",
"| langchain_community.cache | [RedisSemanticCache](https://python.langchain.com/api_reference/community/cache/langchain_community.cache.RedisSemanticCache.html) |\n",
"| langchain_community.cache | [SingleStoreDBSemanticCache](https://python.langchain.com/api_reference/community/cache/langchain_community.cache.SingleStoreDBSemanticCache.html) |\n",
"| langchain_community.cache | [SQLAlchemyCache](https://python.langchain.com/api_reference/community/cache/langchain_community.cache.SQLAlchemyCache.html) |\n",
"| langchain_community.cache | [SQLAlchemyMd5Cache](https://python.langchain.com/api_reference/community/cache/langchain_community.cache.SQLAlchemyMd5Cache.html) |\n",
"| langchain_community.cache | [UpstashRedisCache](https://python.langchain.com/api_reference/community/cache/langchain_community.cache.UpstashRedisCache.html) |\n"
"| langchain_community.cache | [UpstashRedisCache](https://python.langchain.com/api_reference/community/cache/langchain_community.cache.UpstashRedisCache.html) |\n",
"| langchain_core.caches | [InMemoryCache](https://python.langchain.com/api_reference/core/caches/langchain_core.caches.InMemoryCache.html) |\n",
"| langchain_elasticsearch.cache | [ElasticsearchCache](https://python.langchain.com/api_reference/elasticsearch/cache/langchain_elasticsearch.cache.ElasticsearchCache.html) |\n",
"| langchain_mongodb.cache | [MongoDBAtlasSemanticCache](https://python.langchain.com/api_reference/mongodb/cache/langchain_mongodb.cache.MongoDBAtlasSemanticCache.html) |\n",
"| langchain_mongodb.cache | [MongoDBCache](https://python.langchain.com/api_reference/mongodb/cache/langchain_mongodb.cache.MongoDBCache.html) |\n",
"| langchain_couchbase.cache | [CouchbaseCache](https://python.langchain.com/api_reference/couchbase/cache/langchain_couchbase.cache.CouchbaseCache.html) |\n",
"| langchain_couchbase.cache | [CouchbaseSemanticCache](https://python.langchain.com/api_reference/couchbase/cache/langchain_couchbase.cache.CouchbaseSemanticCache.html) |\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1ef1a2d4-da2e-4fb1-aae4-ffc4aef6c3ad",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@@ -3150,7 +2959,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.10.13"
}
},
"nbformat": 4,

View File

@@ -17,9 +17,6 @@
"source": [
"# AI21LLM\n",
"\n",
":::caution This service is deprecated.\n",
"See [this page](https://python.langchain.com/docs/integrations/chat/ai21/) for the updated ChatAI21 object. :::\n",
"\n",
"This example goes over how to use LangChain to interact with `AI21` Jurassic models. To use the Jamba model, use the [ChatAI21 object](https://python.langchain.com/docs/integrations/chat/ai21/) instead.\n",
"\n",
"[See a full list of AI21 models and tools on LangChain.](https://pypi.org/project/langchain-ai21/)\n",

View File

@@ -180,9 +180,9 @@
"\n",
"For comprehensive details on all features and configurations, please refer to the API reference documentation for each class:\n",
"\n",
"* [OCIModelDeploymentLLM](https://python.langchain.com/api_reference/community/llms/langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentLLM.html)\n",
"* [OCIModelDeploymentVLLM](https://python.langchain.com/api_reference/community/llms/langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentVLLM.html)\n",
"* [OCIModelDeploymentTGI](https://python.langchain.com/api_reference/community/llms/langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI.html)"
"* [OCIModelDeploymentLLM](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentLLM.html)\n",
"* [OCIModelDeploymentVLLM](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentVLLM.html)\n",
"* [OCIModelDeploymentTGI](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI.html)"
]
}
],

View File

@@ -7,14 +7,7 @@
"source": [
"# OpenLLM\n",
"\n",
"[🦾 OpenLLM](https://github.com/bentoml/OpenLLM) lets developers run any **open-source LLMs** as **OpenAI-compatible API** endpoints with **a single command**.\n",
"\n",
"- 🔬 Build for fast and production usages\n",
"- 🚂 Support llama3, qwen2, gemma, etc, and many **quantized** versions [full list](https://github.com/bentoml/openllm-models)\n",
"- ⛓️ OpenAI-compatible API\n",
"- 💬 Built-in ChatGPT like UI\n",
"- 🔥 Accelerated LLM decoding with state-of-the-art inference backends\n",
"- 🌥️ Ready for enterprise-grade cloud deployment (Kubernetes, Docker and BentoCloud)"
"[🦾 OpenLLM](https://github.com/bentoml/OpenLLM) is an open platform for operating large language models (LLMs) in production. It enables developers to easily run inference with any open-source LLMs, deploy to the cloud or on-premises, and build powerful AI apps."
]
},
{
@@ -44,10 +37,10 @@
"source": [
"## Launch OpenLLM server locally\n",
"\n",
"To start an LLM server, use `openllm hello` command:\n",
"To start an LLM server, use `openllm start` command. For example, to start a dolly-v2 server, run the following command from a terminal:\n",
"\n",
"```bash\n",
"openllm hello\n",
"openllm start dolly-v2\n",
"```\n",
"\n",
"\n",
@@ -64,7 +57,74 @@
"from langchain_community.llms import OpenLLM\n",
"\n",
"server_url = \"http://localhost:3000\" # Replace with remote host if you are running on a remote server\n",
"llm = OpenLLM(base_url=server_url, api_key=\"na\")"
"llm = OpenLLM(server_url=server_url)"
]
},
{
"cell_type": "markdown",
"id": "4f830f9d",
"metadata": {},
"source": [
"### Optional: Local LLM Inference\n",
"\n",
"You may also choose to initialize an LLM managed by OpenLLM locally from current process. This is useful for development purpose and allows developers to quickly try out different types of LLMs.\n",
"\n",
"When moving LLM applications to production, we recommend deploying the OpenLLM server separately and access via the `server_url` option demonstrated above.\n",
"\n",
"To load an LLM locally via the LangChain wrapper:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "82c392b6",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.llms import OpenLLM\n",
"\n",
"llm = OpenLLM(\n",
" model_name=\"dolly-v2\",\n",
" model_id=\"databricks/dolly-v2-3b\",\n",
" temperature=0.94,\n",
" repetition_penalty=1.2,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "f15ebe0d",
"metadata": {},
"source": [
"### Integrate with a LLMChain"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "8b02a97a",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"iLkb\n"
]
}
],
"source": [
"from langchain.chains import LLMChain\n",
"from langchain_core.prompts import PromptTemplate\n",
"\n",
"template = \"What is a good name for a company that makes {product}?\"\n",
"\n",
"prompt = PromptTemplate.from_template(template)\n",
"\n",
"llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
"\n",
"generated = llm_chain.run(product=\"mechanical keyboard\")\n",
"print(generated)"
]
},
{
@@ -73,9 +133,7 @@
"id": "56cb4bc0",
"metadata": {},
"outputs": [],
"source": [
"llm(\"To build a LLM from scratch, the following are the steps:\")"
]
"source": []
}
],
"metadata": {
@@ -94,7 +152,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.10.10"
}
},
"nbformat": 4,

View File

@@ -6,7 +6,7 @@
"source": [
"# Outlines\n",
"\n",
"This will help you getting started with Outlines LLM. For detailed documentation of all Outlines features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/llms/langchain_community.llms.outlines.Outlines.html).\n",
"This will help you getting started with Outlines LLM. For detailed documentation of all Outlines features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/llms/outlines.llms.Outlines.html).\n",
"\n",
"[Outlines](https://github.com/outlines-dev/outlines) is a library for constrained language generation. It allows you to use large language models (LLMs) with various backends while applying constraints to the generated output.\n",
"\n",
@@ -236,7 +236,7 @@
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatOutlines features and configurations head to the API reference: https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.outlines.ChatOutlines.html\n",
"For detailed documentation of all ChatOutlines features and configurations head to the API reference: https://api.python.langchain.com/en/latest/chat_models/outlines.chat_models.ChatOutlines.html\n",
"\n",
"## Outlines Documentation: \n",
"\n",

View File

@@ -1,24 +0,0 @@
# Aerospike
>[Aerospike](https://aerospike.com/docs/vector) is a high-performance, distributed database known for its speed and scalability, now with support for vector storage and search, enabling retrieval and search of embedding vectors for machine learning and AI applications.
> See the documentation for Aerospike Vector Search (AVS) [here](https://aerospike.com/docs/vector).
## Installation and Setup
Install the AVS Python SDK and AVS langchain vector store:
```bash
pip install aerospike-vector-search langchain-community
```
See the documentation for the Python SDK [here](https://aerospike-vector-search-python-client.readthedocs.io/en/latest/index.html).
The documentation for the AVS langchain vector store is [here](https://python.langchain.com/api_reference/community/vectorstores/langchain_community.vectorstores.aerospike.Aerospike.html).
## Vector Store
To import this vectorstore:
```python
from langchain_community.vectorstores import Aerospike
```
See a usage example [here](https://python.langchain.com/docs/integrations/vectorstores/aerospike/).

View File

@@ -15,6 +15,26 @@ This page covers how to use the `AI21` ecosystem within `LangChain`.
pip install langchain-ai21
```
## LLMs
See a [usage example](/docs/integrations/llms/ai21).
### AI21 LLM
```python
from langchain_ai21 import AI21LLM
```
### AI21 Contextual Answer
You can use AI21's contextual answers model, which takes a text or document as
context together with a question, and returns an answer based entirely on that context.
```python
from langchain_ai21 import AI21ContextualAnswers
```
## Chat models
### AI21 Chat
@@ -25,32 +45,23 @@ See a [usage example](/docs/integrations/chat/ai21).
from langchain_ai21 import ChatAI21
```
## Deprecated features
:::caution The following features are deprecated.
:::
### AI21 LLM
```python
from langchain_ai21 import AI21LLM
```
### AI21 Contextual Answer
```python
from langchain_ai21 import AI21ContextualAnswers
```
## Embedding models
### AI21 Embeddings
See a [usage example](/docs/integrations/text_embedding/ai21).
```python
from langchain_ai21 import AI21Embeddings
```
## Text splitters
### AI21 Semantic Text Splitter
See a [usage example](/docs/integrations/document_transformers/ai21_semantic_text_splitter).
```python
from langchain_ai21 import AI21SemanticTextSplitter
```

View File

@@ -89,11 +89,3 @@ See [installation instructions and a usage example](/docs/integrations/vectorsto
```python
from langchain_community.vectorstores import Hologres
```
### Tablestore
See [installation instructions and a usage example](/docs/integrations/vectorstores/tablestore).
```python
from langchain_community.vectorstores import TablestoreVectorStore
```

View File

@@ -33,7 +33,7 @@ from langchain_community.document_loaders.couchbase import CouchbaseLoader
### CouchbaseCache
Use Couchbase as a cache for prompts and responses.
See a [usage example](/docs/integrations/llm_caching/#couchbase-caches).
See a [usage example](/docs/integrations/llm_caching/#couchbase-cache).
To import this cache:
```python
@@ -61,7 +61,7 @@ set_llm_cache(
Semantic caching allows users to retrieve cached prompts based on the semantic similarity between the user input and previously cached inputs. Under the hood it uses Couchbase as both a cache and a vectorstore.
The CouchbaseSemanticCache needs a Search Index defined to work. Please look at the [usage example](/docs/integrations/vectorstores/couchbase) on how to set up the index.
See a [usage example](/docs/integrations/llm_caching/#couchbase-caches).
See a [usage example](/docs/integrations/llm_caching/#couchbase-semantic-cache).
To import this cache:
```python

View File

@@ -41,7 +41,7 @@ tool = DataheraldTextToSQL(api_wrapper=api_wrapper)
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
prompt = hub.pull("hwchase17/react")
agent = create_react_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True, handle_parsing_errors=True)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.invoke({"input":"Return the sql for this question: How many employees are in the company?"})
```

View File

@@ -84,7 +84,7 @@ from langchain_elasticsearch import ElasticsearchChatMessageHistory
## LLM cache
See a [usage example](/docs/integrations/llm_caching/#elasticsearch-caches).
See a [usage example](/docs/integrations/llm_caching/#elasticsearch-cache).
```python
from langchain_elasticsearch import ElasticsearchCache
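from langchain_core.globals import set_llm_cache

# A sketch of wiring the cache up; es_url and index_name are placeholders.
set_llm_cache(ElasticsearchCache(es_url="http://localhost:9200", index_name="llm-cache"))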

View File

@@ -2,10 +2,6 @@
>[Jina AI](https://jina.ai/about-us) is a search AI company. `Jina` helps businesses and developers unlock multimodal data with a better search.
:::caution
For proper compatibility, please ensure you are using the `openai` SDK at version **0.x**.
:::
## Installation and Setup
- Get a Jina AI API token from [here](https://jina.ai/embeddings/) and set it as an environment variable (`JINA_API_TOKEN`)

View File

@@ -1,39 +0,0 @@
# Linkup
> [Linkup](https://www.linkup.so/) provides an API to connect LLMs to the web and the Linkup Premium Partner sources.
## Installation and Setup
To use the Linkup provider, you first need a valid API key, which you can find by signing up [here](https://app.linkup.so/sign-up).
You will also need the `langchain-linkup` package, which you can install using pip:
```bash
pip install langchain-linkup
```
## Retriever
See a [usage example](/docs/integrations/retrievers/linkup_search).
```python
from langchain_linkup import LinkupSearchRetriever
retriever = LinkupSearchRetriever(
depth="deep", # "standard" or "deep"
linkup_api_key=None, # API key can be passed here or set as the LINKUP_API_KEY environment variable
)
```
## Tools
See a [usage example](/docs/integrations/tools/linkup_search).
```python
from langchain_linkup import LinkupSearchTool
tool = LinkupSearchTool(
depth="deep", # "standard" or "deep"
output_type="searchResults", # "searchResults", "sourcedAnswer" or "structured"
linkup_api_key=None, # API key can be passed here or set as the LINKUP_API_KEY environment variable
)
```

View File

@@ -6,10 +6,6 @@
> audio (and not only) locally or on-prem with consumer grade hardware,
> supporting multiple model families and architectures.
:::caution
For proper compatibility, please ensure you are using the `openai` SDK at version **0.x**.
:::
## Installation and Setup
We have to install several python packages:

View File

@@ -343,31 +343,6 @@ See a [usage example](/docs/integrations/memory/postgres_chat_message_history/).
Since Azure Database for PostgreSQL is open-source Postgres, you can use the [LangChain's Postgres support](/docs/integrations/vectorstores/pgvector/) to connect to Azure Database for PostgreSQL.
### Azure SQL Database
>[Azure SQL Database](https://learn.microsoft.com/azure/azure-sql/database/sql-database-paas-overview?view=azuresql) is a robust service that combines scalability, security, and high availability, providing all the benefits of a modern database solution. It also provides a dedicated Vector data type & built-in functions that simplifies the storage and querying of vector embeddings directly within a relational database. This eliminates the need for separate vector databases and related integrations, increasing the security of your solutions while reducing the overall complexity.
By leveraging your current SQL Server databases for vector search, you can enhance data capabilities while minimizing expenses and avoiding the challenges of transitioning to new systems.
##### Installation and Setup
See [detail configuration instructions](/docs/integrations/vectorstores/sqlserver).
We need to install the `langchain-sqlserver` python package.
```bash
!pip install langchain-sqlserver==0.1.1
```
##### Deploy Azure SQL DB on Microsoft Azure
[Sign Up](https://learn.microsoft.com/azure/azure-sql/database/free-offer?view=azuresql) for free to get started today.
See a [usage example](/docs/integrations/vectorstores/sqlserver).
```python
from langchain_sqlserver import SQLServer_VectorStore
```
### Azure AI Search

View File

@@ -1,31 +0,0 @@
# OceanBase
[OceanBase Database](https://github.com/oceanbase/oceanbase) is a distributed relational database.
It is developed entirely by Ant Group. The OceanBase Database is built on a common server cluster.
Based on the Paxos protocol and its distributed structure, the OceanBase Database provides high availability and linear scalability.
OceanBase currently has the ability to store vectors. Users can easily perform the following operations with SQL:
- Create a table containing vector type fields;
- Create a vector index table based on the HNSW algorithm;
- Perform vector approximate nearest neighbor queries;
- ...
## Installation
```bash
pip install -U langchain-oceanbase
```
We recommend using Docker to deploy OceanBase:
```shell
docker run --name=ob433 -e MODE=slim -p 2881:2881 -d oceanbase/oceanbase-ce:4.3.3.0-100000132024100711
```
[More methods to deploy OceanBase cluster](https://github.com/oceanbase/oceanbase-doc/blob/V4.3.1/en-US/400.deploy/500.deploy-oceanbase-database-community-edition/100.deployment-overview.md)
### Usage
For a more detailed walkthrough of the OceanBase Wrapper, see [this notebook](https://github.com/oceanbase/langchain-oceanbase/blob/main/docs/vectorstores.ipynb)

View File

@@ -1,17 +1,11 @@
---
keywords: [openllm]
---
# OpenLLM
OpenLLM lets developers run any **open-source LLMs** as **OpenAI-compatible API** endpoints with **a single command**.
This page demonstrates how to use [OpenLLM](https://github.com/bentoml/OpenLLM)
with LangChain.
- 🔬 Build for fast and production usages
- 🚂 Support llama3, qwen2, gemma, etc, and many **quantized** versions [full list](https://github.com/bentoml/openllm-models)
- ⛓️ OpenAI-compatible API
- 💬 Built-in ChatGPT like UI
- 🔥 Accelerated LLM decoding with state-of-the-art inference backends
- 🌥️ Ready for enterprise-grade cloud deployment (Kubernetes, Docker and BentoCloud)
`OpenLLM` is an open platform for operating large language models (LLMs) in
production. It enables developers to easily run inference with any open-source
LLMs, deploy to the cloud or on-premises, and build powerful AI apps.
## Installation and Setup
@@ -29,7 +23,8 @@ are pre-optimized for OpenLLM.
## Wrappers
There is an OpenLLM wrapper which supports interacting with a running OpenLLM server:
There is an OpenLLM wrapper which supports loading an LLM in-process or accessing a
remote OpenLLM server:
```python
from langchain_community.llms import OpenLLM
@@ -37,12 +32,13 @@ from langchain_community.llms import OpenLLM
### Wrapper for OpenLLM server
This wrapper supports interacting with OpenLLM's OpenAI-compatible endpoint.
This wrapper supports connecting to an OpenLLM server via HTTP or gRPC. The
OpenLLM server can run either locally or on the cloud.
To run a model, do:
To try it out locally, start an OpenLLM server:
```bash
openllm hello
openllm start flan-t5
```
Wrapper usage:
@@ -50,7 +46,20 @@ Wrapper usage:
```python
from langchain_community.llms import OpenLLM
llm = OpenLLM(base_url="http://localhost:3000/v1", api_key="na")
llm = OpenLLM(server_url='http://localhost:3000')
llm("What is the difference between a duck and a goose? And why there are so many Goose in Canada?")
```
### Wrapper for Local Inference
You can also use the OpenLLM wrapper to load an LLM in the current Python process
for running inference.
```python
from langchain_community.llms import OpenLLM
llm = OpenLLM(model_name="dolly-v2", model_id='databricks/dolly-v2-7b')
llm("What is the difference between a duck and a goose? And why there are so many Goose in Canada?")
```

View File

@@ -32,7 +32,7 @@ For a more detailed walkthrough of the Pinecone vectorstore, see [this notebook]
### Pinecone Hybrid Search
```bash
pip install pinecone pinecone-text
pip install pinecone-client pinecone-text
```
```python

View File

@@ -1,4 +1,4 @@
# Sema4 (fka Robocorp)
# Robocorp
>[Robocorp](https://robocorp.com/) helps build and operate Python workers that run seamlessly anywhere at any scale

View File

@@ -1,41 +0,0 @@
# ScrapeGraph AI
>[ScrapeGraph AI](https://scrapegraphai.com) is a service that provides AI-powered web scraping capabilities.
>It offers tools for extracting structured data, converting webpages to markdown, and processing local HTML content
>using natural language prompts.
## Installation and Setup
Install the required packages:
```bash
pip install langchain-scrapegraph
```
Set up your API key:
```bash
export SGAI_API_KEY="your-scrapegraph-api-key"
```
## Tools
See a [usage example](/docs/integrations/tools/scrapegraph).
There are four tools available:
```python
from langchain_scrapegraph.tools import (
SmartScraperTool, # Extract structured data from websites
MarkdownifyTool, # Convert webpages to markdown
LocalScraperTool, # Process local HTML content
GetCreditsTool, # Check remaining API credits
)
```
Each tool serves a specific purpose:
- `SmartScraperTool`: Extract structured data from websites given a URL, prompt and optional output schema
- `MarkdownifyTool`: Convert any webpage to clean markdown format
- `LocalScraperTool`: Extract structured data from a local HTML file given a prompt and optional output schema
- `GetCreditsTool`: Check your remaining ScrapeGraph AI credits

Some files were not shown because too many files have changed in this diff