Mirror of https://github.com/hwchase17/langchain.git
Synced 2026-02-10 19:20:24 +00:00

Compare commits: rlm/text-t (72 commits)
Commits (SHA1):

524458571e  869a49a0ab  ebf998acb6  43257a295c  1d568e1add  5a71b81609
988f6d9912  3f16acc538  b7e559c7e1  cce132d146  d9f1bcf366  3da1a65fa0
ab3c124ffb  aa212c3d0e  04d58018e1  3d74d5e24d  47070b8314  07c2649753
c26ec7789f  7108084947  b5b2d07681  69f4e402e4  c25b174db5  fd5f549a9e
53f35c5f5c  9fc28d50c3  276c6ba115  1f8094938f  d2cb95c39d  e7e670805c
286a29a49e  2008a6438c  583dc49477  81052ee18e  46e28b9613  9c2c9c5274
87af2360df  6e3f39963f  d5c2ce7c2e  079d1f3b8e  67c4fd0ad0  d3744175bf
a2840a2b42  386ea48432  d76f026d72  95ae40ff90  d5d7ba582a  f09f82541b
11f13aed53  ba20c14e28  deb8168329  8ba97cb408  44dae6936b  922193475a
55f0f8dae8  69d9eae5cd  69f5f82804  34ffb94770  8c150ad7f6  bb137fd6e7
ace2234391  ebf749c40c  283a3ecc9c  99afc1b4f8  0d44746430  ff79a99825
66f8cb015d  4d6243fa87  f82bdf4613  963ff93476  d0505c0d47  4f23aa677a
File: .github/CODE_OF_CONDUCT.md (vendored, new file, 132 lines)

@@ -0,0 +1,132 @@

# Contributor Covenant Code of Conduct

## Our Pledge

We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, caste, color, religion, or sexual
identity and orientation.

We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.

## Our Standards

Examples of behavior that contributes to a positive environment for our
community include:

* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
  and learning from the experience
* Focusing on what is best not just for us as individuals, but for the overall
  community

Examples of unacceptable behavior include:

* The use of sexualized language or imagery, and sexual attention or advances of
  any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email address,
  without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
  professional setting

## Enforcement Responsibilities

Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.

Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.

## Scope

This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
conduct@langchain.dev.
All complaints will be reviewed and investigated promptly and fairly.

All community leaders are obligated to respect the privacy and security of the
reporter of any incident.

## Enforcement Guidelines

Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:

### 1. Correction

**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.

**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.

### 2. Warning

**Community Impact**: A violation through a single incident or series of
actions.

**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or permanent
ban.

### 3. Temporary Ban

**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.

**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.

### 4. Permanent Ban

**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.

**Consequence**: A permanent ban from any sort of public interaction within the
community.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.1, available at
[https://www.contributor-covenant.org/version/2/1/code_of_conduct.html][v2.1].

Community Impact Guidelines were inspired by
[Mozilla's code of conduct enforcement ladder][Mozilla CoC].

For answers to common questions about this code of conduct, see the FAQ at
[https://www.contributor-covenant.org/faq][FAQ]. Translations are available at
[https://www.contributor-covenant.org/translations][translations].

[homepage]: https://www.contributor-covenant.org
[v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html
[Mozilla CoC]: https://github.com/mozilla/diversity
[FAQ]: https://www.contributor-covenant.org/faq
[translations]: https://www.contributor-covenant.org/translations

File: .github/CONTRIBUTING.md (vendored, 10 changed lines)

@@ -1,7 +1,7 @@
 # Contributing to LangChain

 Hi there! Thank you for even being interested in contributing to LangChain.
-As an open source project in a rapidly developing field, we are extremely open
+As an open-source project in a rapidly developing field, we are extremely open
 to contributions, whether they be in the form of new features, improved infra, better documentation, or bug fixes.

 ## 🗺️ Guidelines
@@ -14,7 +14,7 @@ Please do not try to push directly to this repo unless you are a maintainer.
 Please follow the checked-in pull request template when opening pull requests. Note related issues and tag relevant
 maintainers.

-Pull requests cannot land without passing the formatting, linting and testing checks first. See [Testing](#testing) and
+Pull requests cannot land without passing the formatting, linting, and testing checks first. See [Testing](#testing) and
 [Formatting and Linting](#formatting-and-linting) for how to run these checks locally.

 It's essential that we maintain great documentation and testing. If you:
@@ -77,9 +77,9 @@ tell Poetry to use the virtualenv python environment (`poetry config virtualenvs

 There are two separate projects in this repository:
 - `langchain`: core langchain code, abstractions, and use cases
-- `langchain.experimental`: see the [Experimental README](../libs/experimental/README.md) for more information.
+- `langchain.experimental`: see the [Experimental README](https://github.com/langchain-ai/langchain/tree/master/libs/experimental/README.md) for more information.

-Each of these has their own development environment. Docs are run from the top-level makefile, but development
+Each of these has its own development environment. Docs are run from the top-level makefile, but development
 is split across separate test & release flows.

 For this quickstart, start with langchain core:
@@ -129,7 +129,7 @@ To run unit tests in Docker:
 make docker_tests
 ```

-There are also [integration tests and code-coverage](../libs/langchain/tests/README.md) available.
+There are also [integration tests and code-coverage](https://github.com/langchain-ai/langchain/tree/master/libs/langchain/tests/README.md) available.

 ### Formatting and Linting

File: .github/workflows/_compile_integration_test.yml (vendored, new file, 57 lines)

@@ -0,0 +1,57 @@
```yaml
name: compile-integration-test

on:
  workflow_call:
    inputs:
      working-directory:
        required: true
        type: string
        description: "From which folder this pipeline executes"

env:
  POETRY_VERSION: "1.6.1"

jobs:
  build:
    defaults:
      run:
        working-directory: ${{ inputs.working-directory }}
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version:
          - "3.8"
          - "3.9"
          - "3.10"
          - "3.11"
    name: Python ${{ matrix.python-version }}
    steps:
      - uses: actions/checkout@v4

      - name: Set up Python ${{ matrix.python-version }} + Poetry ${{ env.POETRY_VERSION }}
        uses: "./.github/actions/poetry_setup"
        with:
          python-version: ${{ matrix.python-version }}
          poetry-version: ${{ env.POETRY_VERSION }}
          working-directory: ${{ inputs.working-directory }}
          cache-key: compile-integration

      - name: Install integration dependencies
        shell: bash
        run: poetry install --with=test_integration

      - name: Check integration tests compile
        shell: bash
        run: poetry run pytest -m compile tests/integration_tests

      - name: Ensure the tests did not create any additional files
        shell: bash
        run: |
          set -eu

          STATUS="$(git status)"
          echo "$STATUS"

          # grep will exit non-zero if the target message isn't found,
          # and `set -e` above will cause the step to fail.
          echo "$STATUS" | grep 'nothing to commit, working tree clean'
```

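The `pytest -m compile` step above only runs tests tagged with a `compile` marker. As a minimal sketch of what such a test can look like (the file name and marker registration below are assumptions, not taken from the repository):

```python
import pytest

# Assumes "compile" is registered as a marker in the project's pytest
# configuration (e.g. under [tool.pytest.ini_options].markers).
@pytest.mark.compile
def test_integration_tests_compile() -> None:
    """No-op smoke test: if collection and imports succeed, the suite 'compiles'."""
    assert True
```
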
File: .github/workflows/_test_release.yml (vendored, new file, 50 lines)

@@ -0,0 +1,50 @@
```yaml
name: test-release

on:
  workflow_call:
    inputs:
      working-directory:
        required: true
        type: string
        description: "From which folder this pipeline executes"

env:
  POETRY_VERSION: "1.6.1"

jobs:
  publish_to_test_pypi:
    runs-on: ubuntu-latest
    permissions:
      # This permission is used for trusted publishing:
      # https://blog.pypi.org/posts/2023-04-20-introducing-trusted-publishers/
      #
      # Trusted publishing has to also be configured on PyPI for each package:
      # https://docs.pypi.org/trusted-publishers/adding-a-publisher/
      id-token: write
    defaults:
      run:
        working-directory: ${{ inputs.working-directory }}
    steps:
      - uses: actions/checkout@v4

      - name: Set up Python + Poetry ${{ env.POETRY_VERSION }}
        uses: "./.github/actions/poetry_setup"
        with:
          python-version: "3.10"
          poetry-version: ${{ env.POETRY_VERSION }}
          working-directory: ${{ inputs.working-directory }}
          cache-key: release

      - name: Build project for distribution
        run: poetry build
      - name: Check Version
        id: check-version
        run: |
          echo version=$(poetry version --short) >> $GITHUB_OUTPUT
      - name: Publish package to TestPyPI
        uses: pypa/gh-action-pypi-publish@release/v1
        with:
          repository-url: https://test.pypi.org/legacy/
          packages-dir: ${{ inputs.working-directory }}/dist/
          verbose: true
          print-hash: true
```

File: .github/workflows/langchain_ci.yml (vendored, 7 changed lines)

@@ -44,6 +44,13 @@ jobs:
       working-directory: libs/langchain
     secrets: inherit

+  compile-integration-tests:
+    uses:
+      ./.github/workflows/_compile_integration_test.yml
+    with:
+      working-directory: libs/langchain
+    secrets: inherit
+
   pydantic-compatibility:
     uses:
       ./.github/workflows/_pydantic_compatibility.yml

@@ -44,6 +44,13 @@ jobs:
       working-directory: libs/experimental
     secrets: inherit

+  compile-integration-tests:
+    uses:
+      ./.github/workflows/_compile_integration_test.yml
+    with:
+      working-directory: libs/experimental
+    secrets: inherit
+
   # It's possible that langchain-experimental works fine with the latest *published* langchain,
   # but is broken with the langchain on `master`.
   #

File: .github/workflows/langchain_experimental_test_release.yml (vendored, new file, 13 lines)

@@ -0,0 +1,13 @@
```yaml
---
name: Experimental Test Release

on:
  workflow_dispatch:  # Allows triggering the workflow manually in the GitHub UI

jobs:
  release:
    uses:
      ./.github/workflows/_test_release.yml
    with:
      working-directory: libs/experimental
    secrets: inherit
```

File: .github/workflows/langchain_test_release.yml (vendored, new file, 13 lines)

@@ -0,0 +1,13 @@
```yaml
---
name: Test Release

on:
  workflow_dispatch:  # Allows triggering the workflow manually in the GitHub UI

jobs:
  release:
    uses:
      ./.github/workflows/_test_release.yml
    with:
      working-directory: libs/langchain
    secrets: inherit
```

@@ -4,7 +4,7 @@ Example code for building applications with LangChain, with an emphasis on more

 Notebook | Description
 :- | :-
-[LLaMA2_sql_chat.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/LLaMA2_sql_chat.ipynb) | Build a chat application that interacts with a sql database using an open source llm (llama2), specifically demonstrated on a sqlite database containing nba rosters.
+[LLaMA2_sql_chat.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/LLaMA2_sql_chat.ipynb) | Build a chat application that interacts with a SQL database using an open source llm (llama2), specifically demonstrated on an SQLite database containing rosters.
 [Semi_Structured_RAG.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/Semi_Structured_RAG.ipynb) | Perform retrieval-augmented generation (rag) on documents with semi-structured data, including text and tables, using unstructured for parsing, multi-vector retriever for storing, and lcel for implementing chains.
 [Semi_structured_and_multi_moda...](https://github.com/langchain-ai/langchain/tree/master/cookbook/Semi_structured_and_multi_modal_RAG.ipynb) | Perform retrieval-augmented generation (rag) on documents with semi-structured data and images, using unstructured for parsing, multi-vector retriever for storage and retrieval, and lcel for implementing chains.
 [Semi_structured_multi_modal_RA...](https://github.com/langchain-ai/langchain/tree/master/cookbook/Semi_structured_multi_modal_RAG_LLaMA2.ipynb) | Perform retrieval-augmented generation (rag) on documents with semi-structured data and images, using various tools and methods such as unstructured for parsing, multi-vector retriever for storing, lcel for implementing chains, and open source language models like llama2, llava, and gpt4all.
@@ -12,14 +12,14 @@ Notebook | Description
 [autogpt/marathon_times.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/autogpt/marathon_times.ipynb) | Implement autogpt for finding winning marathon times.
 [baby_agi.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/baby_agi.ipynb) | Implement babyagi, an ai agent that can generate and execute tasks based on a given objective, with the flexibility to swap out specific vectorstores/model providers.
 [baby_agi_with_agent.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/baby_agi_with_agent.ipynb) | Swap out the execution chain in the babyagi notebook with an agent that has access to tools, aiming to obtain more reliable information.
-[camel_role_playing.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/camel_role_playing.ipynb) | Implement the camel framework for creating autonomous cooperative agents in large scale language models, using role-playing and inception prompting to guide chat agents towards task completion.
+[camel_role_playing.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/camel_role_playing.ipynb) | Implement the camel framework for creating autonomous cooperative agents in large-scale language models, using role-playing and inception prompting to guide chat agents towards task completion.
 [causal_program_aided_language_...](https://github.com/langchain-ai/langchain/tree/master/cookbook/causal_program_aided_language_model.ipynb) | Implement the causal program-aided language (cpal) chain, which improves upon the program-aided language (pal) by incorporating causal structure to prevent hallucination in language models, particularly when dealing with complex narratives and math problems with nested dependencies.
 [code-analysis-deeplake.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/code-analysis-deeplake.ipynb) | Analyze its own code base with the help of gpt and activeloop's deep lake.
 [custom_agent_with_plugin_retri...](https://github.com/langchain-ai/langchain/tree/master/cookbook/custom_agent_with_plugin_retrieval.ipynb) | Build a custom agent that can interact with ai plugins by retrieving tools and creating natural language wrappers around openapi endpoints.
 [custom_agent_with_plugin_retri...](https://github.com/langchain-ai/langchain/tree/master/cookbook/custom_agent_with_plugin_retrieval_using_plugnplai.ipynb) | Build a custom agent with plugin retrieval functionality, utilizing ai plugins from the `plugnplai` directory.
 [databricks_sql_db.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/databricks_sql_db.ipynb) | Connect to databricks runtimes and databricks sql.
 [deeplake_semantic_search_over_...](https://github.com/langchain-ai/langchain/tree/master/cookbook/deeplake_semantic_search_over_chat.ipynb) | Perform semantic search and question-answering over a group chat using activeloop's deep lake with gpt4.
-[elasticsearch_db_qa.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/elasticsearch_db_qa.ipynb) | Interact with elasticsearch analytics databases in natural language and build search queries via the elasticsearch dsl api.
+[elasticsearch_db_qa.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/elasticsearch_db_qa.ipynb) | Interact with elasticsearch analytics databases in natural language and build search queries via the elasticsearch dsl API.
 [forward_looking_retrieval_augm...](https://github.com/langchain-ai/langchain/tree/master/cookbook/forward_looking_retrieval_augmented_generation.ipynb) | Implement the forward-looking active retrieval augmented generation (flare) method, which generates answers to questions, identifies uncertain tokens, generates hypothetical questions based on these tokens, and retrieves relevant documents to continue generating the answer.
 [generative_agents_interactive_...](https://github.com/langchain-ai/langchain/tree/master/cookbook/generative_agents_interactive_simulacra_of_human_behavior.ipynb) | Implement a generative agent that simulates human behavior, based on a research paper, using a time-weighted memory object backed by a langchain retriever.
 [gymnasium_agent_simulation.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/gymnasium_agent_simulation.ipynb) | Create a simple agent-environment interaction loop in simulated environments like text-based games with gymnasium.
@@ -37,7 +37,7 @@ Notebook | Description
 [multiagent_authoritarian.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/multiagent_authoritarian.ipynb) | Implement a multi-agent simulation where a privileged agent controls the conversation, including deciding who speaks and when the conversation ends, in the context of a simulated news network.
 [multiagent_bidding.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/multiagent_bidding.ipynb) | Implement a multi-agent simulation where agents bid to speak, with the highest bidder speaking next, demonstrated through a fictitious presidential debate example.
 [myscale_vector_sql.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/myscale_vector_sql.ipynb) | Access and interact with the myscale integrated vector database, which can enhance the performance of language model (llm) applications.
-[openai_functions_retrieval_qa....](https://github.com/langchain-ai/langchain/tree/master/cookbook/openai_functions_retrieval_qa.ipynb) | Structure response output in a question answering system by incorporating openai functions into a retrieval pipeline.
+[openai_functions_retrieval_qa....](https://github.com/langchain-ai/langchain/tree/master/cookbook/openai_functions_retrieval_qa.ipynb) | Structure response output in a question-answering system by incorporating openai functions into a retrieval pipeline.
 [petting_zoo.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/petting_zoo.ipynb) | Create multi-agent simulations with simulated environments using the petting zoo library.
 [plan_and_execute_agent.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/plan_and_execute_agent.ipynb) | Create plan-and-execute agents that accomplish objectives by planning tasks with a language model (llm) and executing them with a separate agent.
 [press_releases.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/press_releases.ipynb) | Retrieve and query company press release data powered by [Kay.ai](https://kay.ai).
@@ -46,7 +46,7 @@ Notebook | Description
 [self_query_hotel_search.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/self_query_hotel_search.ipynb) | Build a hotel room search feature with self-querying retrieval, using a specific hotel recommendation dataset.
 [smart_llm.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/smart_llm.ipynb) | Implement a smartllmchain, a self-critique chain that generates multiple output proposals, critiques them to find the best one, and then improves upon it to produce a final output.
 [tree_of_thought.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/tree_of_thought.ipynb) | Query a large language model using the tree of thought technique.
-[twitter-the-algorithm-analysis...](https://github.com/langchain-ai/langchain/tree/master/cookbook/twitter-the-algorithm-analysis-deeplake.ipynb) | Analyze the source code of the twitter algorithm with the help of gpt4 and activeloop's deep lake.
+[twitter-the-algorithm-analysis...](https://github.com/langchain-ai/langchain/tree/master/cookbook/twitter-the-algorithm-analysis-deeplake.ipynb) | Analyze the source code of the Twitter algorithm with the help of gpt4 and activeloop's deep lake.
 [two_agent_debate_tools.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/two_agent_debate_tools.ipynb) | Simulate multi-agent dialogues where the agents can utilize various tools.
-[two_player_dnd.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/two_player_dnd.ipynb) | Simulate a two-player dungeons & dragons game, where a dialoguesimulator class is used to coordinate the dialogue between the protagonist and the dungeon master.
+[two_player_dnd.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/two_player_dnd.ipynb) | Simulate a two-player dungeons & dragons game, where a dialogue simulator class is used to coordinate the dialogue between the protagonist and the dungeon master.
 [wikibase_agent.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/wikibase_agent.ipynb) | Create a simple wikibase agent that utilizes sparql generation, with testing done on http://wikidata.org.

@@ -1,631 +0,0 @@

# Incorporating semantic similarity in tabular databases

In this notebook we will cover how to run semantic search over a specific table column within a single SQL query, combining a tabular query with RAG.

### Overall workflow

1. Generating embeddings for a specific column
2. Storing the embeddings in a new column (if the column has low cardinality, it's better to use another table containing the unique values and their embeddings)
3. Querying using standard SQL queries with the [PGVector](https://github.com/pgvector/pgvector) extension (see the operator sketch after this introduction), which allows using:
   * L2 distance (`<->`)
   * Cosine distance (`<=>`, or cosine similarity using `1 - <=>`)
   * Inner product (`<#>`)
4. Running a standard SQL query

### Requirements

We will need a PostgreSQL database with the [pgvector](https://github.com/pgvector/pgvector) extension enabled.

For this example, we will use a `Chinook` database on a local PostgreSQL server.
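As a minimal sketch of the three pgvector operators mentioned above (the `items` table, its column names, and the connection string are illustrative assumptions, not part of the original notebook):

```python
import psycopg2  # assumes psycopg2 and the pgvector extension are installed

# Hypothetical connection and table; adjust to your own setup.
conn = psycopg2.connect("dbname=vectordb user=postgres password=test host=localhost")
cur = conn.cursor()

# pgvector accepts vectors serialized as bracketed literal strings.
query_vector = "[0.1, 0.2, 0.3]"

# L2 (Euclidean) distance: smaller is closer.
cur.execute("SELECT id FROM items ORDER BY embedding <-> %s::vector LIMIT 5", (query_vector,))

# Cosine distance: equals 1 - cosine similarity.
cur.execute("SELECT id FROM items ORDER BY embedding <=> %s::vector LIMIT 5", (query_vector,))

# <#> returns the *negative* inner product, so ascending order ranks by largest inner product.
cur.execute("SELECT id FROM items ORDER BY embedding <#> %s::vector LIMIT 5", (query_vector,))
```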
```python
import os
import getpass

os.environ["OPENAI_API_KEY"] = os.environ.get("OPENAI_API_KEY") or getpass.getpass("OpenAI API Key:")
```

```python
from langchain.sql_database import SQLDatabase
from langchain.chat_models import ChatOpenAI

CONNECTION_STRING = "postgresql+psycopg2://postgres:test@localhost:5432/vectordb"  # Replace with your own
db = SQLDatabase.from_uri(CONNECTION_STRING)
```

### Embedding the song titles

For this example, we will run queries based on the semantic meaning of song titles.

In order to do this, let's start by adding a new column in the table for storing the embeddings:

```python
# db.run('ALTER TABLE "Track" ADD COLUMN "embeddings" vector;')
```

Let's generate the embedding for each *track title* and store it as a new column in our "Track" table:

```python
from langchain.embeddings import OpenAIEmbeddings

embeddings_model = OpenAIEmbeddings()

tracks = db.run('SELECT "Name" FROM "Track"')
# db.run returns the rows serialized as a string; eval() parses them back into tuples.
song_titles = [s[0] for s in eval(tracks)]
title_embeddings = embeddings_model.embed_documents(song_titles)
len(title_embeddings)
```

Now let's insert the embeddings into the new column of our table:

```python
from tqdm import tqdm

for i in tqdm(range(len(title_embeddings))):
    title = song_titles[i].replace("'", "''")  # escape single quotes for SQL
    embedding = title_embeddings[i]
    sql_command = f'UPDATE "Track" SET "embeddings" = ARRAY{embedding} WHERE "Name" =' + f"'{title}'"
    db.run(sql_command)
```

We can test the semantic search by running the following query:

```python
embedded_title = embeddings_model.embed_query("hope about the future")
query = 'SELECT "Track"."Name" FROM "Track" WHERE "Track"."embeddings" IS NOT NULL ORDER BY "embeddings" <-> ' + f"'{embedded_title}' LIMIT 5"
db.run(query)
```

```
'[("Tomorrow\'s Dream",), (\'Remember Tomorrow\',), (\'Remember Tomorrow\',), (\'The Best Is Yet To Come\',), ("Thinking \'Bout Tomorrow",)]'
```

We can see that the song titles are conceptually similar to our search term `"hope about the future"`.

### Creating the SQL Chain

Let's start by defining useful functions for getting info from the database and for running the query:

```python
def get_schema(_):
    return db.get_table_info()


def run_query(query):
    return db.run(query)
```

Now let's build the **prompt** we will use:

```python
from langchain.prompts import ChatPromptTemplate

template = """You are a Postgres expert. Given an input question, first create a syntactically correct Postgres query to run, then look at the results of the query and return the answer to the input question.
Unless the user specifies in the question a specific number of examples to obtain, query for at most 5 results using the LIMIT clause as per Postgres. You can order the results to return the most informative data in the database.
Never query for all columns from a table. You must query only the columns that are needed to answer the question. Wrap each column name in double quotes (") to denote them as delimited identifiers.
Pay attention to use only the column names you can see in the tables below. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table.
Pay attention to use date('now') function to get the current date, if the question involves "today".

You can use an extra extension which allows you to run semantic similarity using <-> operator on tables containing columns named "embeddings".
<-> operator can ONLY be used on embeddings columns.
The embeddings value for a given row typically represents the semantic meaning of that row.
The vector represents an embedding representation of the question, given below.
Do NOT fill in the vector values directly, but rather specify a `[search_word]` placeholder, which should contain the word that would be embedded for filtering.
For example, if the user asks for songs about 'the feeling of loneliness' the query could be:
'SELECT "[whatever_table_name]"."SongName" FROM "[whatever_table_name]" ORDER BY "embeddings" <-> '[loneliness]' LIMIT 5'

Use the following format:

Question: <Question here>
SQLQuery: <SQL Query to run>
SQLResult: <Result of the SQLQuery>
Answer: <Final answer here>

Only use the following tables:

{schema}

QUESTION: {question}
SQLQuery:

"""
prompt = ChatPromptTemplate.from_messages([
    ("system", "Given an input question, convert it to a SQL query. No pre-amble."),
    ("human", template),
])
```

And we can create the chain using **[LangChain Expression Language](https://python.langchain.com/docs/expression_language/)**:

```python
from langchain.chat_models import ChatOpenAI
from langchain.schema.output_parser import StrOutputParser
from langchain.schema.runnable import RunnablePassthrough

db = SQLDatabase.from_uri(CONNECTION_STRING)  # We reconnect to the db so the new columns are loaded as well.
llm = ChatOpenAI(model_name="gpt-4", temperature=0)

sql_query_chain = (
    RunnablePassthrough.assign(schema=get_schema)
    | prompt
    | llm.bind(stop=["\nSQLResult:"])
    | StrOutputParser()
)
```

```
/Users/manuelsoria/miniconda3/envs/auto-gpt/lib/python3.8/site-packages/langchain/utilities/sql_database.py:112: SAWarning: Did not recognize type 'vector' of column 'title_embedding'
  self._metadata.reflect(
/Users/manuelsoria/miniconda3/envs/auto-gpt/lib/python3.8/site-packages/langchain/utilities/sql_database.py:112: SAWarning: Did not recognize type 'vector' of column 'embeddings'
  self._metadata.reflect(
```

This chain simply generates the query. Now we will create the full chain that also handles the execution and the final result for the user:

```python
import re

from langchain.schema.runnable import RunnableLambda


# Inject the embedding for any words within brackets
def replace_brackets(match):
    words_inside_brackets = match.group(1).split(", ")
    embedded_words = [str(embeddings_model.embed_query(word)) for word in words_inside_brackets]
    return "', '".join(embedded_words)


def get_query(query):
    sql_query = re.sub(r"\[([\w\s,]+)\]", replace_brackets, query)
    return sql_query


template = """Based on the table schema below, question, sql query, and sql response, write a natural language response:
{schema}

Question: {question}
SQL Query: {query}
SQL Response: {response}"""

prompt_response = ChatPromptTemplate.from_messages([
    ("system", "Given an input question and SQL response, convert it to a natural language answer. No pre-amble."),
    ("human", template),
])

full_chain = (
    RunnablePassthrough.assign(query=sql_query_chain)
    | RunnablePassthrough.assign(
        schema=get_schema,
        response=RunnableLambda(lambda x: db.run(get_query(x["query"]))),
    )
    | prompt_response
    | llm
)
```
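To make the placeholder mechanism concrete, here is a small illustration (the query string below is hypothetical, not actual model output from the original run) of what `get_query` does before execution:

```python
# The model emits a bracketed placeholder instead of a raw vector:
generated = 'SELECT "Track"."Name" FROM "Track" ORDER BY "embeddings" <-> \'[sad]\' LIMIT 5'

# get_query() embeds the word "sad" and splices the resulting vector literal
# in place of [sad], yielding a query Postgres can execute directly.
executable = get_query(generated)
print(executable[:80])  # ...ORDER BY "embeddings" <-> '[0.0123, -0.0456, ... (illustrative)
```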
## Using the Chain

### Example 1: Filtering a column based on semantic meaning

Let's say we want to retrieve songs that express a `deep feeling of despair`, but filter based on genre:

```python
full_chain.invoke({"question": "Which are the 5 rock songs with titles about deep feeling of despair?"})
```

```
AIMessage(content="The 5 rock songs with titles about deep feeling of despair are 'Sea Of Sorrow', 'Surrender', 'Indifference', 'Hard Luck Woman', and 'Desire'.")
```

What is substantially different in implementing this method is that we have combined:

- Semantic search (songs that have titles with some semantic meaning)
- Traditional tabular querying (running JOIN statements to filter tracks based on genre)

This is something we _could_ potentially achieve using metadata filtering, but it's more complex to do so (we would need to use a vector database containing the embeddings, and use metadata filtering based on genre).

However, for other use cases metadata filtering **wouldn't be enough**.

### Example 2: Combining filters

```python
full_chain.invoke({"question": "I want to know the 3 albums which have the most amount of songs in the top 150 saddest songs"})
```

```
AIMessage(content="The three albums which have the most amount of songs in the top 150 saddest songs are 'International Superhits' with 5 songs, 'Ten' with 4 songs, and 'Album Of The Year' with 3 songs.")
```

So we have the result for the 3 albums with the most songs among the top 150 saddest ones. This **wouldn't** be possible using only standard metadata filtering. Without this _hybrid query_, we would need some postprocessing to get the result.

Another similar example:

```python
full_chain.invoke({"question": "I need the 6 albums with shortest title, as long as they contain songs which are in the 20 saddest song list."})
```

```
AIMessage(content="The 6 albums with the shortest titles that contain songs which are in the 20 saddest song list are 'Ten', 'Core', 'Big Ones', 'One By One', 'Black Album', and 'Miles Ahead'.")
```

Let's see what the query looks like to double-check:

```python
print(sql_query_chain.invoke({"question": "I need the 6 albums with shortest title, as long as they contain songs which are in the 20 saddest song list."}))
```

```
WITH "SadSongs" AS (
    SELECT "TrackId" FROM "Track"
    ORDER BY "embeddings" <-> '[sad]' LIMIT 20
),
"SadAlbums" AS (
    SELECT DISTINCT "AlbumId" FROM "Track"
    WHERE "TrackId" IN (SELECT "TrackId" FROM "SadSongs")
)
SELECT "Album"."Title" FROM "Album"
WHERE "AlbumId" IN (SELECT "AlbumId" FROM "SadAlbums")
ORDER BY "title_len" ASC
LIMIT 6
```

### Example 3: Combining two separate semantic searches

One interesting aspect of this approach, which is **substantially different from using standard RAG**, is that we can even **combine** two semantic search filters:

- _Get the 5 saddest songs..._
- _**...obtained from albums with "lovely" titles**_

This could generalize to **any kind of combined RAG** (paragraphs discussing _X_ topic from books about _Y_, replies to a tweet about _ABC_ topic that express _XYZ_ feeling).

We will combine semantic search on songs and album titles, so we need to do the same for the `Album` table:

1. Generate the embeddings
2. Add them to the table as a new column (which we need to add to the table)

```python
# db.run('ALTER TABLE "Album" ADD COLUMN "embeddings" vector;')
```

```python
albums = db.run('SELECT "Title" FROM "Album"')
album_titles = [title[0] for title in eval(albums)]
album_title_embeddings = embeddings_model.embed_documents(album_titles)
for i in tqdm(range(len(album_title_embeddings))):
    album_title = album_titles[i].replace("'", "''")
    album_embedding = album_title_embeddings[i]
    sql_command = f'UPDATE "Album" SET "embeddings" = ARRAY{album_embedding} WHERE "Title" =' + f"'{album_title}'"
    db.run(sql_command)
```

```
100%|██████████| 347/347 [00:01<00:00, 179.64it/s]
```

```python
embedded_title = embeddings_model.embed_query("hope about the future")
query = 'SELECT "Album"."Title" FROM "Album" WHERE "Album"."embeddings" IS NOT NULL ORDER BY "embeddings" <-> ' + f"'{embedded_title}' LIMIT 5"
db.run(query)
```

```
"[('Realize',), ('Morning Dance',), ('Into The Light',), ('New Adventures In Hi-Fi',), ('Miles Ahead',)]"
```

Now we can combine both filters:

```python
db = SQLDatabase.from_uri(CONNECTION_STRING)  # We reconnect to the db so the new columns are loaded as well.
```

```python
full_chain.invoke({"question": "I want to know songs about breakouts obtained from top 5 albums about love"})
```

```
AIMessage(content='The songs about breakouts obtained from the top 5 albums about love are \'Royal Orleans\', "Nobody\'s Fault But Mine", \'Achilles Last Stand\', \'For Your Life\', and \'Hots On For Nowhere\'.')
```

This is something **different** that **couldn't be achieved** using standard metadata filtering over a vector DB.

@@ -11,7 +11,7 @@

 Read the paper [here](https://arxiv.org/abs/2310.06117)

-See an excelent blog post on this by Cobus Greyling [here](https://cobusgreyling.medium.com/a-new-prompt-engineering-technique-has-been-introduced-called-step-back-prompting-b00e8954cacb)
+See an excellent blog post on this by Cobus Greyling [here](https://cobusgreyling.medium.com/a-new-prompt-engineering-technique-has-been-introduced-called-step-back-prompting-b00e8954cacb)

 In this cookbook we will replicate this technique. We modify the prompts used slightly to work better with chat models.
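The cookbook's exact prompts are not shown in this hunk; as a rough, hypothetical sketch of the step-back idea (all prompts and the sample question below are assumptions, not the cookbook's own): first rewrite the question into a more generic one, answer that to collect background, then answer the original question against the background.

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser

llm = ChatOpenAI(temperature=0)

# 1. Rewrite the question into a more generic "step-back" question.
step_back_prompt = ChatPromptTemplate.from_template(
    "Rewrite this as a more generic, easier-to-answer question:\n{question}"
)
step_back_chain = step_back_prompt | llm | StrOutputParser()

# 2. Answer the generic question to gather background context.
# 3. Answer the original question grounded in that background.
answer_prompt = ChatPromptTemplate.from_template(
    "Background:\n{background}\n\nUsing the background above, answer: {question}"
)

question = "Was Steve Jobs still at Apple when the first iPod launched?"
generic_question = step_back_chain.invoke({"question": question})
background = (llm | StrOutputParser()).invoke(generic_question)
answer = (answer_prompt | llm | StrOutputParser()).invoke(
    {"question": question, "background": background}
)
print(answer)
```
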
@@ -14,6 +14,7 @@ cd ../_dist
 poetry run python scripts/model_feat_table.py
 poetry run nbdoc_build --srcdir docs
 cp ../cookbook/README.md src/pages/cookbook.mdx
 cp ../.github/CONTRIBUTING.md docs/contributing.md
 poetry run python scripts/generate_api_reference_links.py
 yarn install
 yarn start

File diff suppressed because one or more lines are too long
@@ -346,7 +346,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-  "version": "3.10.1"
+  "version": "3.9.1"
  }
 },
 "nbformat": 4,

@@ -18,7 +18,7 @@ import CodeBlock from "@theme/CodeBlock";
 </Tabs>

-For more details, see our [Installation guide](/docs/get_started/installation.html).
+For more details, see our [Installation guide](/docs/get_started/installation).

 ## Environment setup

@@ -20,11 +20,11 @@ This guide aims to provide a comprehensive overview of the requirements for depl

 Understanding these components is crucial when assessing serving systems. LangChain integrates with several open-source projects designed to tackle these issues, providing a robust framework for productionizing your LLM applications. Some notable frameworks include:

-- [Ray Serve](/docs/ecosystem/integrations/ray_serve.html)
+- [Ray Serve](/docs/ecosystem/integrations/ray_serve)
 - [BentoML](https://github.com/bentoml/BentoML)
-- [OpenLLM](/docs/ecosystem/integrations/openllm.html)
-- [Modal](/docs/ecosystem/integrations/modal.html)
-- [Jina](/docs/ecosystem/integrations/jina.html#deployment)
+- [OpenLLM](/docs/ecosystem/integrations/openllm)
+- [Modal](/docs/ecosystem/integrations/modal)
+- [Jina](/docs/ecosystem/integrations/jina#deployment)

 These links will provide further information on each ecosystem, assisting you in finding the best fit for your LLM deployment needs.

@@ -14,7 +14,7 @@
 "> using both human and machine feedback. We provide support for each step in the MLOps cycle, \n",
 "> from data labeling to model monitoring.\n",
 "\n",
-"<a target=\"_blank\" href=\"https://colab.research.google.com/github/hwchase17/langchain/blob/master/docs/integrations/callbacks/argilla.html\">\n",
+"<a target=\"_blank\" href=\"https://colab.research.google.com/github/hwchase17/langchain/blob/master/docs/integrations/callbacks/argilla\">\n",
 "  <img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/>\n",
 "</a>"
 ]

File: docs/docs/integrations/chat/gigachat.ipynb (new file, 114 lines)

@@ -0,0 +1,114 @@

# GigaChat

This notebook shows how to use LangChain with [GigaChat](https://developers.sber.ru/portal/products/gigachat).
To use it, you need to install the `gigachat` Python package.

```python
# !pip install gigachat
```

To get GigaChat credentials you need to [create an account](https://developers.sber.ru/studio/login) and [get access to the API](https://developers.sber.ru/docs/ru/gigachat/api/integration).

## Example

```python
import os
from getpass import getpass

os.environ["GIGACHAT_CREDENTIALS"] = getpass()
```

```python
from langchain.chat_models import GigaChat

chat = GigaChat(verify_ssl_certs=False)
```

```python
from langchain.schema import SystemMessage, HumanMessage

messages = [
    SystemMessage(
        content="You are a helpful AI that shares everything you know. Talk in English."
    ),
    HumanMessage(content="Tell me a joke"),
]

print(chat(messages).content)
```

```
What do you get when you cross a goat and a skunk? A smelly goat!
```

@@ -5,7 +5,7 @@
  "cell_type": "markdown",
  "metadata": {},
  "source": [
-  "# GCP Vertex AI \n",
+  "# Google Cloud Vertex AI \n",
   "\n",
   "Note: This is separate from the Google PaLM integration. Google has chosen to offer an enterprise version of PaLM through GCP, and this supports the models made available through there. \n",
   "\n",
@@ -31,7 +31,7 @@
  },
  "outputs": [],
  "source": [
-  "#!pip install langchain google-cloud-aiplatform"
+  "#!pip install langchain google-cloud-aiplatform\n"
  ]
 },
 {
@@ -41,7 +41,7 @@
  "outputs": [],
  "source": [
   "from langchain.chat_models import ChatVertexAI\n",
-  "from langchain.prompts import ChatPromptTemplate"
+  "from langchain.prompts import ChatPromptTemplate\n"
  ]
 },
 {
@@ -50,7 +50,7 @@
  "metadata": {},
  "outputs": [],
  "source": [
-  "chat = ChatVertexAI()"
+  "chat = ChatVertexAI()\n"
  ]
 },
 {
@@ -64,7 +64,7 @@
   "prompt = ChatPromptTemplate.from_messages(\n",
   "    [(\"system\", system), (\"human\", human)]\n",
   ")\n",
-  "messages = prompt.format_messages()"
+  "messages = prompt.format_messages()\n"
  ]
 },
 {
@@ -84,7 +84,7 @@
  }
  ],
  "source": [
-  "chat(messages)"
+  "chat(messages)\n"
  ]
 },
 {
@@ -104,7 +104,7 @@
   "human = \"{text}\"\n",
   "prompt = ChatPromptTemplate.from_messages(\n",
   "    [(\"system\", system), (\"human\", human)]\n",
-  ")"
+  ")\n"
  ]
 },
 {
@@ -127,7 +127,7 @@
   "chain = prompt | chat\n",
   "chain.invoke(\n",
   "    {\"input_language\": \"English\", \"output_language\": \"Japanese\", \"text\": \"I love programming\"}\n",
-  ")"
+  ")\n"
  ]
 },
 {
@@ -161,7 +161,7 @@
   "    model_name=\"codechat-bison\",\n",
   "    max_output_tokens=1000,\n",
   "    temperature=0.5\n",
-  ")"
+  ")\n"
  ]
 },
 {
@@ -189,7 +189,7 @@
  ],
  "source": [
   "# For simple string in string out usage, we can use the `predict` method:\n",
-  "print(chat.predict(\"Write a Python function to identify all prime numbers\"))"
+  "print(chat.predict(\"Write a Python function to identify all prime numbers\"))\n"
  ]
 },
 {
@@ -209,7 +209,7 @@
  "source": [
   "import asyncio\n",
   "# import nest_asyncio\n",
-  "# nest_asyncio.apply()"
+  "# nest_asyncio.apply()\n"
  ]
 },
 {
@@ -237,7 +237,7 @@
   "    top_k=40,\n",
   ")\n",
   "\n",
-  "asyncio.run(chat.agenerate([messages]))"
+  "asyncio.run(chat.agenerate([messages]))\n"
  ]
 },
 {
@@ -257,7 +257,7 @@
  }
  ],
  "source": [
-  "asyncio.run(chain.ainvoke({\"input_language\": \"English\", \"output_language\": \"Sanskrit\", \"text\": \"I love programming\"}))"
+  "asyncio.run(chain.ainvoke({\"input_language\": \"English\", \"output_language\": \"Sanskrit\", \"text\": \"I love programming\"}))\n"
  ]
 },
 {
@@ -275,7 +275,7 @@
  "metadata": {},
  "outputs": [],
  "source": [
-  "import sys"
+  "import sys\n"
  ]
 },
 {
@@ -310,7 +310,7 @@
   "messages = prompt.format_messages()\n",
   "for chunk in chat.stream(messages):\n",
   "    sys.stdout.write(chunk.content)\n",
-  "    sys.stdout.flush()"
+  "    sys.stdout.flush()\n"
  ]
 }
 ],

||||
@@ -5,7 +5,7 @@
|
||||
"id": "a9ab2a39-7c2d-4119-9dc7-8035fdfba3cb",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Fine-Tuning on LangSmith Chat Datasets\n",
|
||||
"# LangSmith Chat Datasets\n",
|
||||
"\n",
|
||||
"This notebook demonstrates an easy way to load a LangSmith chat dataset fine-tune a model on that data.\n",
|
||||
"The process is simple and comprises 3 steps.\n",
|
||||
@@ -271,7 +271,7 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.11.2"
|
||||
"version": "3.9.1"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
|
||||
@@ -5,7 +5,7 @@
|
||||
"id": "a9ab2a39-7c2d-4119-9dc7-8035fdfba3cb",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Fine-Tuning on LangSmith LLM Runs\n",
|
||||
"# LangSmith LLM Runs\n",
|
||||
"\n",
|
||||
"This notebook demonstrates how to directly load data from LangSmith's LLM runs and fine-tune a model on that data.\n",
|
||||
"The process is simple and comprises 3 steps.\n",
|
||||
@@ -421,7 +421,7 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.11.2"
|
||||
"version": "3.9.1"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
|
||||
@@ -13,7 +13,7 @@
|
||||
"\n",
|
||||
"## Prerequisites\n",
|
||||
"\n",
|
||||
"You need to have an existing dataset on the Apify platform. If you don't have one, please first check out [this notebook](/docs/integrations/tools/apify.html) on how to use Apify to extract content from documentation, knowledge bases, help centers, or blogs."
|
||||
"You need to have an existing dataset on the Apify platform. If you don't have one, please first check out [this notebook](/docs/integrations/tools/apify) on how to use Apify to extract content from documentation, knowledge bases, help centers, or blogs."
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
@@ -7,7 +7,7 @@
|
||||
"source": [
|
||||
"# Pandas DataFrame\n",
|
||||
"\n",
|
||||
"This notebook goes over how to load data from a [pandas](https://pandas.pydata.org/pandas-docs/stable/user_guide/index.html) DataFrame."
|
||||
"This notebook goes over how to load data from a [pandas](https://pandas.pydata.org/pandas-docs/stable/user_guide/index) DataFrame."
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
@@ -5,10 +5,10 @@
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Psychic\n",
|
||||
"This notebook covers how to load documents from `Psychic`. See [here](/docs/ecosystem/integrations/psychic.html) for more details.\n",
|
||||
"This notebook covers how to load documents from `Psychic`. See [here](/docs/ecosystem/integrations/psychic) for more details.\n",
|
||||
"\n",
|
||||
"## Prerequisites\n",
|
||||
"1. Follow the Quick Start section in [this document](/docs/ecosystem/integrations/psychic.html)\n",
|
||||
"1. Follow the Quick Start section in [this document](/docs/ecosystem/integrations/psychic)\n",
|
||||
"2. Log into the [Psychic dashboard](https://dashboard.psychic.dev/) and get your secret key\n",
|
||||
"3. Install the frontend react library into your web app and have a user authenticate a connection. The connection will be created using the connection id that you specify."
|
||||
]
|
||||
|
||||
@@ -6,30 +6,7 @@
|
||||
"source": [
|
||||
"# DeepInfra\n",
|
||||
"\n",
|
||||
"`DeepInfra` provides [several LLMs](https://deepinfra.com/models).\n",
|
||||
"\n",
|
||||
"This notebook goes over how to use Langchain with [DeepInfra](https://deepinfra.com)."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Imports"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import os\n",
|
||||
"from langchain.llms import DeepInfra\n",
|
||||
"from langchain.prompts import PromptTemplate\n",
|
||||
"from langchain.chains import LLMChain"
|
||||
"[DeepInfra](https://deepinfra.com/?utm_source=langchain) is a serverless inference as a service that provides access to a [variety of LLMs](https://deepinfra.com/models?utm_source=langchain) and [embeddings models](https://deepinfra.com/models?type=embeddings&utm_source=langchain). This notebook goes over how to use LangChain with DeepInfra for language models."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -45,7 +22,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"execution_count": 6,
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
@@ -68,12 +45,14 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"execution_count": 7,
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import os\n",
|
||||
"\n",
|
||||
"os.environ[\"DEEPINFRA_API_TOKEN\"] = DEEPINFRA_API_TOKEN"
|
||||
]
|
||||
},
|
||||
@@ -87,11 +66,13 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"execution_count": 18,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"llm = DeepInfra(model_id=\"databricks/dolly-v2-12b\")\n",
|
||||
"from langchain.llms import DeepInfra\n",
|
||||
"\n",
|
||||
"llm = DeepInfra(model_id=\"meta-llama/Llama-2-70b-chat-hf\")\n",
|
||||
"llm.model_kwargs = {\n",
|
||||
" \"temperature\": 0.7,\n",
|
||||
" \"repetition_penalty\": 1.2,\n",
|
||||
@@ -100,6 +81,51 @@
|
||||
"}"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 14,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'This is a question that has puzzled many people'"
|
||||
]
|
||||
},
|
||||
"execution_count": 14,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# run inferences directly via wrapper\n",
|
||||
"llm(\"Who let the dogs out?\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 15,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
" Will\n",
|
||||
" Smith\n",
|
||||
"."
|
||||
]
|
||||
},
|
||||
"execution_count": 15,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# run streaming inference\n",
|
||||
"for chunk in llm.stream(\"Who let the dogs out?\"):\n",
|
||||
" print(chunk)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
@@ -110,10 +136,12 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"execution_count": 16,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.prompts import PromptTemplate\n",
|
||||
"\n",
|
||||
"template = \"\"\"Question: {question}\n",
|
||||
"\n",
|
||||
"Answer: Let's think step by step.\"\"\"\n",
|
||||
@@ -130,10 +158,12 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"execution_count": 21,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.chains import LLMChain\n",
|
||||
"\n",
|
||||
"llm_chain = LLMChain(prompt=prompt, llm=llm)"
|
||||
]
|
||||
},
|
||||
@@ -147,16 +177,16 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"execution_count": 22,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"\"Penguins live in the Southern hemisphere.\\nThe North pole is located in the Northern hemisphere.\\nSo, first you need to turn the penguin South.\\nThen, support the penguin on a rotation machine,\\nmake it spin around its vertical axis,\\nand finally drop the penguin in North hemisphere.\\nNow, you have a penguin in the north pole!\\n\\nStill didn't understand?\\nWell, you're a failure as a teacher.\""
|
||||
"\"Penguins are found in Antarctica and the surrounding islands, which are located at the southernmost tip of the planet. The North Pole is located at the northernmost tip of the planet, and it would be a long journey for penguins to get there. In fact, penguins don't have the ability to fly or migrate over such long distances. So, no, penguins cannot reach the North Pole. \""
|
||||
]
|
||||
},
|
||||
"execution_count": 8,
|
||||
"execution_count": 22,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
@@ -166,6 +196,13 @@
|
||||
"\n",
|
||||
"llm_chain.run(question)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
@@ -184,7 +221,7 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.6"
|
||||
"version": "3.11.5"
|
||||
},
|
||||
"vscode": {
|
||||
"interpreter": {
|
||||
|
||||
113
docs/docs/integrations/llms/gigachat.ipynb
Normal file
@@ -0,0 +1,113 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"# GigaChat\n",
|
||||
"This notebook shows how to use LangChain with [GigaChat](https://developers.sber.ru/portal/products/gigachat).\n",
|
||||
"To use you need to install ```gigachat``` python package."
|
||||
],
|
||||
"metadata": {
|
||||
"collapsed": false
|
||||
}
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"collapsed": true
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# !pip install gigachat"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"To get GigaChat credentials you need to [create account](https://developers.sber.ru/studio/login) and [get access to API](https://developers.sber.ru/docs/ru/gigachat/api/integration)\n",
|
||||
"## Example"
|
||||
],
|
||||
"metadata": {
|
||||
"collapsed": false
|
||||
}
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import os\n",
|
||||
"from getpass import getpass\n",
|
||||
"\n",
|
||||
"os.environ['GIGACHAT_CREDENTIALS'] = getpass()"
|
||||
],
|
||||
"metadata": {
|
||||
"collapsed": false
|
||||
}
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.llms import GigaChat\n",
|
||||
"\n",
|
||||
"llm = GigaChat(verify_ssl_certs=False)"
|
||||
],
|
||||
"metadata": {
|
||||
"collapsed": false
|
||||
}
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"The capital of Russia is Moscow.\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from langchain.prompts import PromptTemplate\n",
|
||||
"from langchain.chains import LLMChain\n",
|
||||
"\n",
|
||||
"template = \"What is capital of {country}?\"\n",
|
||||
"\n",
|
||||
"prompt = PromptTemplate(template=template, input_variables=[\"country\"])\n",
|
||||
"\n",
|
||||
"llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
|
||||
"\n",
|
||||
"generated = llm_chain.run(country=\"Russia\")\n",
|
||||
"print(generated)"
|
||||
],
|
||||
"metadata": {
|
||||
"collapsed": false
|
||||
}
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 2
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython2",
|
||||
"version": "2.7.6"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 0
|
||||
}
|
||||
@@ -4,7 +4,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# GCP Vertex AI\n",
|
||||
"# Google Cloud Vertex AI\n",
|
||||
"\n",
|
||||
"**Note:** This is separate from the `Google PaLM` integration, it exposes [Vertex AI PaLM API](https://cloud.google.com/vertex-ai/docs/generative-ai/learn/overview) on `Google Cloud`. \n"
|
||||
]
|
||||
@@ -41,7 +41,7 @@
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#!pip install langchain google-cloud-aiplatform"
|
||||
"#!pip install langchain google-cloud-aiplatform\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -50,7 +50,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.llms import VertexAI"
|
||||
"from langchain.llms import VertexAI\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -74,7 +74,7 @@
|
||||
],
|
||||
"source": [
|
||||
"llm = VertexAI()\n",
|
||||
"print(llm(\"What are some of the pros and cons of Python as a programming language?\"))"
|
||||
"print(llm(\"What are some of the pros and cons of Python as a programming language?\"))\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -90,7 +90,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.prompts import PromptTemplate"
|
||||
"from langchain.prompts import PromptTemplate\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -102,7 +102,7 @@
|
||||
"template = \"\"\"Question: {question}\n",
|
||||
"\n",
|
||||
"Answer: Let's think step by step.\"\"\"\n",
|
||||
"prompt = PromptTemplate.from_template(template)"
|
||||
"prompt = PromptTemplate.from_template(template)\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -111,7 +111,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chain = prompt | llm"
|
||||
"chain = prompt | llm\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -130,7 +130,7 @@
|
||||
],
|
||||
"source": [
|
||||
"question = \"Who was the president in the year Justin Beiber was born?\"\n",
|
||||
"print(chain.invoke({\"question\": question}))"
|
||||
"print(chain.invoke({\"question\": question}))\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -159,7 +159,7 @@
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"llm = VertexAI(model_name=\"code-bison\", max_output_tokens=1000, temperature=0.3)"
|
||||
"llm = VertexAI(model_name=\"code-bison\", max_output_tokens=1000, temperature=0.3)\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -168,7 +168,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"question = \"Write a python function that checks if a string is a valid email address\""
|
||||
"question = \"Write a python function that checks if a string is a valid email address\"\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -193,7 +193,7 @@
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"print(llm(question))"
|
||||
"print(llm(question))\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -223,7 +223,7 @@
|
||||
],
|
||||
"source": [
|
||||
"result = llm.generate([question])\n",
|
||||
"result.generations"
|
||||
"result.generations\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -243,7 +243,7 @@
|
||||
"source": [
|
||||
"# If running in a Jupyter notebook you'll need to install nest_asyncio\n",
|
||||
"\n",
|
||||
"# !pip install nest_asyncio"
|
||||
"# !pip install nest_asyncio\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -254,7 +254,7 @@
|
||||
"source": [
|
||||
"import asyncio\n",
|
||||
"# import nest_asyncio\n",
|
||||
"# nest_asyncio.apply()"
|
||||
"# nest_asyncio.apply()\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -274,7 +274,7 @@
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"asyncio.run(llm.agenerate([question]))"
|
||||
"asyncio.run(llm.agenerate([question]))\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -292,7 +292,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import sys"
|
||||
"import sys\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -337,7 +337,7 @@
|
||||
"source": [
|
||||
"for chunk in llm.stream(question):\n",
|
||||
" sys.stdout.write(chunk)\n",
|
||||
" sys.stdout.flush()"
|
||||
" sys.stdout.flush()\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -360,7 +360,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.llms import VertexAIModelGarden"
|
||||
"from langchain.llms import VertexAIModelGarden\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -372,7 +372,7 @@
|
||||
"llm = VertexAIModelGarden(\n",
|
||||
" project=\"YOUR PROJECT\",\n",
|
||||
" endpoint_id=\"YOUR ENDPOINT_ID\"\n",
|
||||
")"
|
||||
")\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -381,7 +381,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"print(llm(\"What is the meaning of life?\"))"
|
||||
"print(llm(\"What is the meaning of life?\"))\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -397,7 +397,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"prompt = PromptTemplate.from_template(\"What is the meaning of {thing}?\")"
|
||||
"prompt = PromptTemplate.from_template(\"What is the meaning of {thing}?\")\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -407,7 +407,7 @@
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chian = prompt | llm\n",
|
||||
"print(chain.invoke({\"thing\": \"life\"}))"
|
||||
"print(chain.invoke({\"thing\": \"life\"}))\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
|
||||
@@ -7,7 +7,7 @@
|
||||
"source": [
|
||||
"# JSONFormer\n",
|
||||
"\n",
|
||||
"[JSONFormer](https://github.com/1rgs/jsonformer) is a library that wraps local HuggingFace pipeline models for structured decoding of a subset of the JSON Schema.\n",
|
||||
"[JSONFormer](https://github.com/1rgs/jsonformer) is a library that wraps local Hugging Face pipeline models for structured decoding of a subset of the JSON Schema.\n",
|
||||
"\n",
|
||||
"It works by filling in the structure tokens and then sampling the content tokens from the model.\n",
|
||||
"\n",
|
||||
@@ -31,7 +31,7 @@
|
||||
"id": "66bd89f1-8daa-433d-bb8f-5b0b3ae34b00",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### HuggingFace Baseline\n",
|
||||
"### Hugging Face Baseline\n",
|
||||
"\n",
|
||||
"First, let's establish a qualitative baseline by checking the output of the model without structured decoding."
|
||||
]
|
||||
|
||||
@@ -319,7 +319,7 @@
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Standard Cache\n",
|
||||
"Use [Redis](/docs/ecosystem/integrations/redis.html) to cache prompts and responses."
|
||||
"Use [Redis](/docs/ecosystem/integrations/redis) to cache prompts and responses."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -405,7 +405,7 @@
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Semantic Cache\n",
|
||||
"Use [Redis](/docs/ecosystem/integrations/redis.html) to cache prompts and responses and evaluate hits based on semantic similarity."
|
||||
"Use [Redis](/docs/ecosystem/integrations/redis) to cache prompts and responses and evaluate hits based on semantic similarity."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -730,7 +730,7 @@
|
||||
},
|
||||
"source": [
|
||||
"## `Momento` Cache\n",
|
||||
"Use [Momento](/docs/ecosystem/integrations/momento.html) to cache prompts and responses.\n",
|
||||
"Use [Momento](/docs/ecosystem/integrations/momento) to cache prompts and responses.\n",
|
||||
"\n",
|
||||
"Requires momento to use, uncomment below to install:"
|
||||
]
|
||||
|
||||
@@ -82,6 +82,15 @@
|
||||
"]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Example to initialize with external boto3 session\n",
|
||||
"\n",
|
||||
"### for cross account scenarios"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
@@ -92,7 +101,77 @@
|
||||
"source": [
|
||||
"from typing import Dict\n",
|
||||
"\n",
|
||||
"from langchain.prompts import PromptTemplate\nfrom langchain.llms import SagemakerEndpoint\n",
|
||||
"from langchain.prompts import PromptTemplate\n",
|
||||
"from langchain.llms import SagemakerEndpoint\n",
|
||||
"from langchain.llms.sagemaker_endpoint import LLMContentHandler\n",
|
||||
"from langchain.chains.question_answering import load_qa_chain\n",
|
||||
"import json\n",
|
||||
"import boto3\n",
|
||||
"\n",
|
||||
"query = \"\"\"How long was Elizabeth hospitalized?\n",
|
||||
"\"\"\"\n",
|
||||
"\n",
|
||||
"prompt_template = \"\"\"Use the following pieces of context to answer the question at the end.\n",
|
||||
"\n",
|
||||
"{context}\n",
|
||||
"\n",
|
||||
"Question: {question}\n",
|
||||
"Answer:\"\"\"\n",
|
||||
"PROMPT = PromptTemplate(\n",
|
||||
" template=prompt_template, input_variables=[\"context\", \"question\"]\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"roleARN = 'arn:aws:iam::123456789:role/cross-account-role'\n",
|
||||
"sts_client = boto3.client('sts')\n",
|
||||
"response = sts_client.assume_role(RoleArn=roleARN, \n",
|
||||
" RoleSessionName='CrossAccountSession')\n",
|
||||
"\n",
|
||||
"client = boto3.client(\n",
|
||||
" \"sagemaker-runtime\",\n",
|
||||
" region_name=\"us-west-2\", \n",
|
||||
" aws_access_key_id=response['Credentials']['AccessKeyId'],\n",
|
||||
" aws_secret_access_key=response['Credentials']['SecretAccessKey'],\n",
|
||||
" aws_session_token = response['Credentials']['SessionToken']\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"class ContentHandler(LLMContentHandler):\n",
|
||||
" content_type = \"application/json\"\n",
|
||||
" accepts = \"application/json\"\n",
|
||||
"\n",
|
||||
" def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes:\n",
|
||||
" input_str = json.dumps({prompt: prompt, **model_kwargs})\n",
|
||||
" return input_str.encode(\"utf-8\")\n",
|
||||
"\n",
|
||||
" def transform_output(self, output: bytes) -> str:\n",
|
||||
" response_json = json.loads(output.read().decode(\"utf-8\"))\n",
|
||||
" return response_json[0][\"generated_text\"]\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"content_handler = ContentHandler()\n",
|
||||
"\n",
|
||||
"chain = load_qa_chain(\n",
|
||||
" llm=SagemakerEndpoint(\n",
|
||||
" endpoint_name=\"endpoint-name\",\n",
|
||||
" client=client,\n",
|
||||
" model_kwargs={\"temperature\": 1e-10},\n",
|
||||
" content_handler=content_handler,\n",
|
||||
" ),\n",
|
||||
" prompt=PROMPT,\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"chain({\"input_documents\": docs, \"question\": query}, return_only_outputs=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from typing import Dict\n",
|
||||
"\n",
|
||||
"from langchain.prompts import PromptTemplate\n",
|
||||
"from langchain.llms import SagemakerEndpoint\n",
|
||||
"from langchain.llms.sagemaker_endpoint import LLMContentHandler\n",
|
||||
"from langchain.chains.question_answering import load_qa_chain\n",
|
||||
"import json\n",
|
||||
|
||||
@@ -75,9 +75,9 @@ from langchain.llms.sagemaker_endpoint import ContentHandlerBase
|
||||
>[AWS S3 Directory](https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-folders.html)
|
||||
>[AWS S3 Buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingBucket.html)
|
||||
|
||||
See a [usage example for S3DirectoryLoader](/docs/integrations/document_loaders/aws_s3_directory.html).
|
||||
See a [usage example for S3DirectoryLoader](/docs/integrations/document_loaders/aws_s3_directory).
|
||||
|
||||
See a [usage example for S3FileLoader](/docs/integrations/document_loaders/aws_s3_file.html).
|
||||
See a [usage example for S3FileLoader](/docs/integrations/document_loaders/aws_s3_file).
|
||||
|
||||
```python
|
||||
from langchain.document_loaders import S3DirectoryLoader, S3FileLoader
|
||||
|
||||
@@ -30,7 +30,6 @@ Access PaLM chat models like `chat-bison` and `codechat-bison` via Google Cloud.
|
||||
from langchain.chat_models import ChatVertexAI
|
||||
```
|
||||
|
||||
|
||||
## Document Loader
|
||||
### Google BigQuery
|
||||
|
||||
@@ -51,7 +50,7 @@ from langchain.document_loaders import BigQueryLoader
|
||||
|
||||
### Google Cloud Storage
|
||||
|
||||
>[Google Cloud Storage](https://en.wikipedia.org/wiki/Google_Cloud_Storage) is a managed service for storing unstructured data.
|
||||
> [Google Cloud Storage](https://en.wikipedia.org/wiki/Google_Cloud_Storage) is a managed service for storing unstructured data.
|
||||
|
||||
First, we need to install `google-cloud-storage` python package.
|
||||
|
||||
@@ -74,27 +73,28 @@ from langchain.document_loaders import GCSFileLoader
|
||||
|
||||
### Google Drive
|
||||
|
||||
>[Google Drive](https://en.wikipedia.org/wiki/Google_Drive) is a file storage and synchronization service developed by Google.
|
||||
> [Google Drive](https://en.wikipedia.org/wiki/Google_Drive) is a file storage and synchronization service developed by Google.
|
||||
|
||||
Currently, only `Google Docs` are supported.
|
||||
|
||||
First, we need to install several python package.
|
||||
First, we need to install several python packages.
|
||||
|
||||
```bash
|
||||
pip install google-api-python-client google-auth-httplib2 google-auth-oauthlib
|
||||
```
|
||||
|
||||
See a [usage example and authorizing instructions](/docs/integrations/document_loaders/google_drive.html).
|
||||
See a [usage example and authorizing instructions](/docs/integrations/document_loaders/google_drive).
|
||||
|
||||
```python
|
||||
from langchain.document_loaders import GoogleDriveLoader
|
||||
```
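For illustration, a minimal sketch of using the loader (the folder ID is a placeholder; authentication follows the instructions in the linked notebook):

```python
from langchain.document_loaders import GoogleDriveLoader

# "your-folder-id" is a placeholder -- copy the real ID from the Drive folder URL.
loader = GoogleDriveLoader(folder_id="your-folder-id")
docs = loader.load()
```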
|
||||
|
||||
## Vector Store
|
||||
### Google Vertex AI MatchingEngine
|
||||
### Google Vertex AI Vector Search
|
||||
|
||||
> [Google Vertex AI Matching Engine](https://cloud.google.com/vertex-ai/docs/matching-engine/overview) provides
|
||||
> the industry's leading high-scale low latency vector database. These vector databases are commonly
|
||||
> [Google Vertex AI Vector Search](https://cloud.google.com/vertex-ai/docs/matching-engine/overview),
|
||||
> formerly known as Vertex AI Matching Engine, provides the industry's leading high-scale
|
||||
> low latency vector database. These vector databases are commonly
|
||||
> referred to as vector similarity-matching or an approximate nearest neighbor (ANN) service.
|
||||
|
||||
We need to install several python packages.
|
||||
@@ -181,14 +181,28 @@ There exists a `GoogleSearchAPIWrapper` utility which wraps this API. To import
|
||||
```python
|
||||
from langchain.utilities import GoogleSearchAPIWrapper
|
||||
```
|
||||
For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/google_search.html).
|
||||
|
||||
For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/google_search).
|
||||
|
||||
We can easily load this wrapper as a Tool (to use with an Agent). We can do this with:
|
||||
|
||||
```python
|
||||
from langchain.agents import load_tools
|
||||
tools = load_tools(["google-search"])
|
||||
```
|
||||
|
||||
### Google Places
|
||||
|
||||
See a [usage example](/docs/integrations/tools/google_places).
|
||||
|
||||
```
|
||||
pip install googlemaps
|
||||
```
|
||||
|
||||
```python
|
||||
from langchain.tools import GooglePlacesTool
|
||||
```
|
||||
|
||||
## Document Transformer
|
||||
### Google Document AI
|
||||
|
||||
@@ -216,3 +230,40 @@ See a [usage example](/docs/integrations/document_transformers/docai).
|
||||
from langchain.document_loaders.blob_loaders import Blob
|
||||
from langchain.document_loaders.parsers import DocAIParser
|
||||
```
|
||||
|
||||
## Chat loaders
|
||||
### Gmail
|
||||
|
||||
> [Gmail](https://en.wikipedia.org/wiki/Gmail) is a free email service provided by Google.
|
||||
|
||||
First, we need to install several python packages.
|
||||
|
||||
```bash
|
||||
pip install --upgrade google-auth google-auth-oauthlib google-auth-httplib2 google-api-python-client
|
||||
```
|
||||
|
||||
See a [usage example and authorizing instructions](/docs/integrations/chat_loaders/gmail).
|
||||
|
||||
```python
|
||||
from langchain.chat_loaders.gmail import GMailLoader
|
||||
```
|
||||
|
||||
## Agents and Toolkits
|
||||
### Gmail
|
||||
|
||||
See a [usage example and authorizing instructions](/docs/integrations/toolkits/gmail).
|
||||
|
||||
```python
|
||||
from langchain.agents.agent_toolkits import GmailToolkit
|
||||
|
||||
toolkit = GmailToolkit()
|
||||
```
|
||||
|
||||
### Google Drive
|
||||
|
||||
See a [usage example and authorizing instructions](/docs/integrations/toolkits/google_drive).
|
||||
|
||||
```python
|
||||
from langchain_googledrive.utilities.google_drive import GoogleDriveAPIWrapper
|
||||
from langchain_googledrive.tools.google_drive.tool import GoogleDriveSearchTool
|
||||
```
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
# Microsoft
|
||||
|
||||
All functionality related to Microsoft Azure
|
||||
All functionality related to `Microsoft Azure` and other `Microsoft` products.
|
||||
|
||||
## LLM
|
||||
### Azure OpenAI
|
||||
@@ -70,13 +70,13 @@ from langchain.chat_models import AzureChatOpenAI
|
||||
pip install azure-storage-blob
|
||||
```
|
||||
|
||||
See a [usage example for the Azure Blob Storage](/docs/integrations/document_loaders/azure_blob_storage_container.html).
|
||||
See a [usage example for the Azure Blob Storage](/docs/integrations/document_loaders/azure_blob_storage_container).
|
||||
|
||||
```python
|
||||
from langchain.document_loaders import AzureBlobStorageContainerLoader
|
||||
```
|
||||
|
||||
See a [usage example for the Azure Files](/docs/integrations/document_loaders/azure_blob_storage_file.html).
|
||||
See a [usage example for the Azure Files](/docs/integrations/document_loaders/azure_blob_storage_file).
|
||||
|
||||
```python
|
||||
from langchain.document_loaders import AzureBlobStorageFileLoader
|
||||
@@ -161,3 +161,59 @@ See a [usage example](/docs/integrations/retrievers/azure_cognitive_search).
|
||||
from langchain.retrievers import AzureCognitiveSearchRetriever
|
||||
```
|
||||
|
||||
## Utilities
|
||||
|
||||
### Bing Search API
|
||||
|
||||
See a [usage example](/docs/integrations/tools/bing_search).
|
||||
|
||||
```python
|
||||
from langchain.utilities import BingSearchAPIWrapper
|
||||
```
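A minimal sketch of querying the wrapper (the key value is a placeholder; both environment variables are assumed to be required by the wrapper):

```python
import os

from langchain.utilities import BingSearchAPIWrapper

os.environ["BING_SUBSCRIPTION_KEY"] = "<your-key>"  # placeholder
os.environ["BING_SEARCH_URL"] = "https://api.bing.microsoft.com/v7.0/search"

# run() returns the top search results as a single string.
search = BingSearchAPIWrapper()
print(search.run("What is LangChain?"))
```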
|
||||
|
||||
## Toolkits
|
||||
|
||||
### Azure Cognitive Services
|
||||
|
||||
We need to install several python packages.
|
||||
|
||||
```bash
|
||||
pip install azure-ai-formrecognizer azure-cognitiveservices-speech azure-ai-vision
|
||||
```
|
||||
|
||||
See a [usage example](/docs/integrations/toolkits/azure_cognitive_services).
|
||||
|
||||
```python
|
||||
from langchain.agents.agent_toolkits import AzureCognitiveServicesToolkit
|
||||
```
|
||||
### Microsoft Office 365 email and calendar
|
||||
|
||||
We need to install the `O365` Python package.
|
||||
|
||||
```bash
|
||||
pip install O365
|
||||
```
|
||||
|
||||
|
||||
See a [usage example](/docs/integrations/toolkits/office365).
|
||||
|
||||
```python
|
||||
from langchain.agents.agent_toolkits import O365Toolkit
|
||||
```
|
||||
|
||||
### Microsoft Azure PowerBI
|
||||
|
||||
We need to install the `azure-identity` Python package.
|
||||
|
||||
```bash
|
||||
pip install azure-identity
|
||||
```
|
||||
|
||||
See a [usage example](/docs/integrations/toolkits/powerbi).
|
||||
|
||||
```python
|
||||
from langchain.agents.agent_toolkits import PowerBIToolkit
|
||||
from langchain.utilities.powerbi import PowerBIDataset
|
||||
```
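A minimal sketch of wiring the toolkit into an agent (the dataset ID and table names are placeholders for your own workspace):

```python
from azure.identity import DefaultAzureCredential
from langchain.agents.agent_toolkits import PowerBIToolkit, create_pbi_agent
from langchain.chat_models import ChatOpenAI
from langchain.utilities.powerbi import PowerBIDataset

# Placeholders: supply your own dataset ID and table names.
powerbi = PowerBIDataset(
    dataset_id="<dataset_id>",
    table_names=["Sales"],
    credential=DefaultAzureCredential(),
)
llm = ChatOpenAI(temperature=0)
toolkit = PowerBIToolkit(powerbi=powerbi, llm=llm)
agent = create_pbi_agent(llm=llm, toolkit=toolkit, verbose=True)
```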
|
||||
|
||||
|
||||
|
||||
@@ -25,4 +25,4 @@ pip install pyairtable
|
||||
from langchain.document_loaders import AirtableLoader
|
||||
```
|
||||
|
||||
See an [example](/docs/integrations/document_loaders/airtable.html).
|
||||
See an [example](/docs/integrations/document_loaders/airtable).
|
||||
|
||||
@@ -12,4 +12,4 @@ To import this vectorstore:
|
||||
from langchain.vectorstores import AnalyticDB
|
||||
```
|
||||
|
||||
For a more detailed walkthrough of the AnalyticDB wrapper, see [this notebook](/docs/integrations/vectorstores/analyticdb.html)
|
||||
For a more detailed walkthrough of the AnalyticDB wrapper, see [this notebook](/docs/integrations/vectorstores/analyticdb)
|
||||
|
||||
@@ -32,7 +32,7 @@ You can use the `ApifyWrapper` to run Actors on the Apify platform.
|
||||
from langchain.utilities import ApifyWrapper
|
||||
```
|
||||
|
||||
For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/apify.html).
|
||||
For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/apify).
|
||||
|
||||
|
||||
### Loader
|
||||
@@ -43,4 +43,4 @@ You can also use our `ApifyDatasetLoader` to get data from Apify dataset.
|
||||
from langchain.document_loaders import ApifyDatasetLoader
|
||||
```
|
||||
|
||||
For a more detailed walkthrough of this loader, see [this notebook](/docs/integrations/document_loaders/apify_dataset.html).
|
||||
For a more detailed walkthrough of this loader, see [this notebook](/docs/integrations/document_loaders/apify_dataset).
|
||||
|
||||
@@ -13,7 +13,7 @@ pip install python-arango
|
||||
|
||||
Connect your ArangoDB Database with a chat model to get insights on your data.
|
||||
|
||||
See the notebook example [here](/docs/use_cases/graph/graph_arangodb_qa.html).
|
||||
See the notebook example [here](/docs/use_cases/graph/graph_arangodb_qa).
|
||||
|
||||
```python
|
||||
from arango import ArangoClient
|
||||
|
||||
@@ -22,7 +22,7 @@ If you don't you can refer to [Argilla - 🚀 Quickstart](https://docs.argilla.i
|
||||
|
||||
## Tracking
|
||||
|
||||
See a [usage example of `ArgillaCallbackHandler`](/docs/integrations/callbacks/argilla.html).
|
||||
See a [usage example of `ArgillaCallbackHandler`](/docs/integrations/callbacks/argilla).
|
||||
|
||||
```python
|
||||
from langchain.callbacks import ArgillaCallbackHandler
|
||||
|
||||
@@ -18,7 +18,7 @@ whether for semantic search or example selection.
|
||||
from langchain.vectorstores import Chroma
|
||||
```
|
||||
|
||||
For a more detailed walkthrough of the Chroma wrapper, see [this notebook](/docs/integrations/vectorstores/chroma.html)
|
||||
For a more detailed walkthrough of the Chroma wrapper, see [this notebook](/docs/integrations/vectorstores/chroma)
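As a quick in-memory sketch (the text and embedding model are just illustrations):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma

# Build an ephemeral index from raw texts, then run a similarity search.
db = Chroma.from_texts(["harrison worked at kensho"], OpenAIEmbeddings())
docs = db.similarity_search("Where did harrison work?")
```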
|
||||
|
||||
## Retriever
|
||||
|
||||
|
||||
@@ -25,7 +25,7 @@ from langchain.llms import Clarifai
|
||||
llm = Clarifai(pat=CLARIFAI_PAT, user_id=USER_ID, app_id=APP_ID, model_id=MODEL_ID)
|
||||
```
|
||||
|
||||
For more details, the docs on the Clarifai LLM wrapper provide a [detailed walkthrough](/docs/integrations/llms/clarifai.html).
|
||||
For more details, the docs on the Clarifai LLM wrapper provide a [detailed walkthrough](/docs/integrations/llms/clarifai).
|
||||
|
||||
|
||||
### Text Embedding Models
|
||||
@@ -37,7 +37,7 @@ There is a Clarifai Embedding model in LangChain, which you can access with:
|
||||
from langchain.embeddings import ClarifaiEmbeddings
|
||||
embeddings = ClarifaiEmbeddings(pat=CLARIFAI_PAT, user_id=USER_ID, app_id=APP_ID, model_id=MODEL_ID)
|
||||
```
|
||||
For more details, the docs on the Clarifai Embeddings wrapper provide a [detailed walkthrough](/docs/integrations/text_embedding/clarifai.html).
|
||||
For more details, the docs on the Clarifai Embeddings wrapper provide a [detailed walkthrough](/docs/integrations/text_embedding/clarifai).
|
||||
|
||||
## Vectorstore
|
||||
|
||||
|
||||
@@ -27,7 +27,7 @@ There exists a Cohere Embedding model, which you can access with
|
||||
```python
|
||||
from langchain.embeddings import CohereEmbeddings
|
||||
```
|
||||
For a more detailed walkthrough of this, see [this notebook](/docs/integrations/text_embedding/cohere.html)
|
||||
For a more detailed walkthrough of this, see [this notebook](/docs/integrations/text_embedding/cohere)
|
||||
|
||||
## Retriever
|
||||
|
||||
|
||||
@@ -20,7 +20,7 @@
|
||||
"source": [
|
||||
"In this guide we will demonstrate how to track your Langchain Experiments, Evaluation Metrics, and LLM Sessions with [Comet](https://www.comet.com/site/?utm_source=langchain&utm_medium=referral&utm_campaign=comet_notebook). \n",
|
||||
"\n",
|
||||
"<a target=\"_blank\" href=\"https://colab.research.google.com/github/hwchase17/langchain/blob/master/docs/ecosystem/comet_tracking.html\">\n",
|
||||
"<a target=\"_blank\" href=\"https://colab.research.google.com/github/hwchase17/langchain/blob/master/docs/ecosystem/comet_tracking\">\n",
|
||||
" <img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/>\n",
|
||||
"</a>\n",
|
||||
"\n",
|
||||
|
||||
@@ -54,4 +54,4 @@ llm = CTransformers(model='marella/gpt-2-ggml', config=config)
|
||||
|
||||
See [Documentation](https://github.com/marella/ctransformers#config) for a list of available parameters.
|
||||
|
||||
For a more detailed walkthrough of this, see [this notebook](/docs/integrations/llms/ctransformers.html).
|
||||
For a more detailed walkthrough of this, see [this notebook](/docs/integrations/llms/ctransformers).
|
||||
|
||||
@@ -21,4 +21,4 @@ You may import the vectorstore by:
|
||||
from langchain.vectorstores import DashVector
|
||||
```
|
||||
|
||||
For a detailed walkthrough of the DashVector wrapper, please refer to [this notebook](/docs/integrations/vectorstores/dashvector.html)
|
||||
For a detailed walkthrough of the DashVector wrapper, please refer to [this notebook](/docs/integrations/vectorstores/dashvector)
|
||||
|
||||
@@ -33,11 +33,11 @@ See [MLflow AI Gateway](/docs/integrations/providers/mlflow_ai_gateway).
|
||||
Databricks as an LLM provider
|
||||
-----------------------------
|
||||
|
||||
The notebook [Wrap Databricks endpoints as LLMs](/docs/integrations/llms/databricks.html) illustrates the method to wrap Databricks endpoints as LLMs in LangChain. It supports two types of endpoints: the serving endpoint, which is recommended for both production and development, and the cluster driver proxy app, which is recommended for interactive development.
|
||||
The notebook [Wrap Databricks endpoints as LLMs](/docs/integrations/llms/databricks) illustrates the method to wrap Databricks endpoints as LLMs in LangChain. It supports two types of endpoints: the serving endpoint, which is recommended for both production and development, and the cluster driver proxy app, which is recommended for interactive development.
|
||||
|
||||
Databricks endpoints support Dolly, but are also great for hosting models like MPT-7B or any other models from the Hugging Face ecosystem. Databricks endpoints can also be used with proprietary models like OpenAI to provide a governance layer for enterprises.
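A minimal sketch of the serving-endpoint case (the endpoint name is a placeholder; host and token are picked up automatically when run inside a Databricks workspace):

```python
from langchain.llms import Databricks

# "dolly" is a placeholder name for a model serving endpoint in your workspace.
llm = Databricks(endpoint_name="dolly")
llm("How are you?")
```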
|
||||
|
||||
Databricks Dolly
|
||||
----------------
|
||||
|
||||
Databricks’ Dolly is an instruction-following large language model trained on the Databricks machine learning platform that is licensed for commercial use. The model is available on Hugging Face Hub as databricks/dolly-v2-12b. See the notebook [Hugging Face Hub](/docs/integrations/llms/huggingface_hub.html) for instructions to access it through the Hugging Face Hub integration with LangChain.
|
||||
Databricks’ Dolly is an instruction-following large language model trained on the Databricks machine learning platform that is licensed for commercial use. The model is available on Hugging Face Hub as databricks/dolly-v2-12b. See the notebook [Hugging Face Hub](/docs/integrations/llms/huggingface_hub) for instructions to access it through the Hugging Face Hub integration with LangChain.
|
||||
|
||||
@@ -10,16 +10,27 @@ It is broken into two parts: installation and setup, and then references to spec
|
||||
## Available Models
|
||||
|
||||
DeepInfra provides a range of Open Source LLMs ready for deployment.
|
||||
You can list supported models [here](https://deepinfra.com/models?type=text-generation).
|
||||
You can list supported models for
|
||||
[text-generation](https://deepinfra.com/models?type=text-generation) and
|
||||
[embeddings](https://deepinfra.com/models?type=embeddings).
|
||||
google/flan\* models can be viewed [here](https://deepinfra.com/models?type=text2text-generation).
|
||||
|
||||
You can view a list of request and response parameters [here](https://deepinfra.com/databricks/dolly-v2-12b#API)
|
||||
You can view a [list of request and response parameters](https://deepinfra.com/meta-llama/Llama-2-70b-chat-hf/api).
|
||||
|
||||
## Wrappers
|
||||
|
||||
### LLM
|
||||
|
||||
There exists a DeepInfra LLM wrapper, which you can access with
|
||||
|
||||
```python
|
||||
from langchain.llms import DeepInfra
|
||||
```
|
||||
|
||||
### Embeddings
|
||||
|
||||
There is also a DeepInfra Embeddings wrapper, which you can access with
|
||||
|
||||
```python
|
||||
from langchain.embeddings import DeepInfraEmbeddings
|
||||
```
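For example, a minimal sketch (the token is a placeholder and the model ID is only an example; pick any model from the embeddings list above):

```python
import os

from langchain.embeddings import DeepInfraEmbeddings

os.environ["DEEPINFRA_API_TOKEN"] = "<your-token>"  # placeholder

# Embed a single query string with an example embeddings model.
embeddings = DeepInfraEmbeddings(model_id="sentence-transformers/clip-ViT-B-32")
vector = embeddings.embed_query("What is a vector store?")
```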
|
||||
|
||||
@@ -1,14 +1,14 @@
|
||||
# Dingo
|
||||
# DingoDB
|
||||
|
||||
This page covers how to use the Dingo ecosystem within LangChain.
|
||||
It is broken into two parts: installation and setup, and then references to specific Dingo wrappers.
|
||||
This page covers how to use the DingoDB ecosystem within LangChain.
|
||||
It is broken into two parts: installation and setup, and then references to specific DingoDB wrappers.
|
||||
|
||||
## Installation and Setup
|
||||
- Install the Python SDK with `pip install dingodb`
|
||||
|
||||
## VectorStore
|
||||
|
||||
There exists a wrapper around Dingo indexes, allowing you to use it as a vectorstore,
|
||||
There exists a wrapper around DingoDB indexes, allowing you to use it as a vectorstore,
|
||||
whether for semantic search or example selection.
|
||||
|
||||
To import this vectorstore:
|
||||
@@ -16,4 +16,4 @@ To import this vectorstore:
|
||||
from langchain.vectorstores import Dingo
|
||||
```
|
||||
|
||||
For a more detailed walkthrough of the Dingo wrapper, see [this notebook](/docs/integrations/vectorstores/dingo.html)
|
||||
For a more detailed walkthrough of the DingoDB wrapper, see [this notebook](/docs/integrations/vectorstores/dingo)
|
||||
|
||||
@@ -20,4 +20,4 @@ To import this vectorstore:
|
||||
from langchain.vectorstores import Epsilla
|
||||
```
|
||||
|
||||
For a more detailed walkthrough of the Epsilla wrapper, see [this notebook](/docs/integrations/vectorstores/epsilla.html)
|
||||
For a more detailed walkthrough of the Epsilla wrapper, see [this notebook](/docs/integrations/vectorstores/epsilla)
|
||||
@@ -20,7 +20,7 @@ There exists a GoldenQueryAPIWrapper utility which wraps this API. To import thi
|
||||
from langchain.utilities.golden_query import GoldenQueryAPIWrapper
|
||||
```
|
||||
|
||||
For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/golden_query.html).
|
||||
For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/golden_query).
|
||||
|
||||
### Tool
|
||||
|
||||
|
||||
@@ -1,28 +0,0 @@
|
||||
# Google Document AI
|
||||
|
||||
>[Document AI](https://cloud.google.com/document-ai/docs/overview) is a `Google Cloud Platform`
|
||||
> service to transform unstructured data from documents into structured data, making it easier
|
||||
> to understand, analyze, and consume.
|
||||
|
||||
|
||||
## Installation and Setup
|
||||
|
||||
You need to set up a [`GCS` bucket and create your own OCR processor](https://cloud.google.com/document-ai/docs/create-processor)
|
||||
The `GCS_OUTPUT_PATH` should be a path to a folder on GCS (starting with `gs://`)
|
||||
and a processor name should look like `projects/PROJECT_NUMBER/locations/LOCATION/processors/PROCESSOR_ID`.
|
||||
You can get it either programmatically or copy from the `Prediction endpoint` section of the `Processor details`
|
||||
tab in the Google Cloud Console.
|
||||
|
||||
```bash
|
||||
pip install google-cloud-documentai
|
||||
pip install google-cloud-documentai-toolbox
|
||||
```
|
||||
|
||||
## Document Transformer
|
||||
|
||||
See a [usage example](/docs/integrations/document_transformers/docai).
|
||||
|
||||
```python
|
||||
from langchain.document_loaders.blob_loaders import Blob
|
||||
from langchain.document_loaders.parsers import DocAIParser
|
||||
```
|
||||
@@ -1,4 +1,4 @@
|
||||
# Google Serper
|
||||
# Serper - Google Search API
|
||||
|
||||
This page covers how to use the [Serper](https://serper.dev) Google Search API within LangChain. Serper is a low-cost Google Search API that can be used to add answer box, knowledge graph, and organic results data from Google Search.
|
||||
It is broken into two parts: setup, and then references to the specific Google Serper wrapper.
|
||||
@@ -59,7 +59,7 @@ So the final answer is: El Palmar, Spain
|
||||
'El Palmar, Spain'
|
||||
```
|
||||
|
||||
For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/google_serper.html).
|
||||
For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/google_serper).
|
||||
|
||||
### Tool
|
||||
|
||||
|
||||
@@ -45,4 +45,4 @@ model("Once upon a time, ", callbacks=callbacks)
|
||||
|
||||
You can find links to model file downloads in the [pyllamacpp](https://github.com/nomic-ai/pyllamacpp) repository.
|
||||
|
||||
For a more detailed walkthrough of this, see [this notebook](/docs/integrations/llms/gpt4all.html)
|
||||
For a more detailed walkthrough of this, see [this notebook](/docs/integrations/llms/gpt4all)
|
||||
|
||||
@@ -24,4 +24,4 @@ There exists an Gradient Embedding model, which you can access with
|
||||
```python
|
||||
from langchain.embeddings import GradientEmbeddings
|
||||
```
|
||||
For a more detailed walkthrough of this, see [this notebook](/docs/integrations/text_embedding/gradient.html)
|
||||
For a more detailed walkthrough of this, see [this notebook](/docs/integrations/text_embedding/gradient)
|
||||
|
||||
@@ -30,7 +30,7 @@ To use the wrapper for a model hosted on Hugging Face Hub:
|
||||
```python
|
||||
from langchain.llms import HuggingFaceHub
|
||||
```
|
||||
For a more detailed walkthrough of the Hugging Face Hub wrapper, see [this notebook](/docs/integrations/llms/huggingface_hub.html)
|
||||
For a more detailed walkthrough of the Hugging Face Hub wrapper, see [this notebook](/docs/integrations/llms/huggingface_hub)
|
||||
|
||||
|
||||
### Embeddings
|
||||
|
||||
@@ -28,7 +28,7 @@ you don't, follow the next steps to start it:
|
||||
|
||||
## Using Infino
|
||||
|
||||
See a [usage example of `InfinoCallbackHandler`](/docs/integrations/callbacks/infino.html).
|
||||
See a [usage example of `InfinoCallbackHandler`](/docs/integrations/callbacks/infino).
|
||||
|
||||
```python
|
||||
from langchain.callbacks import InfinoCallbackHandler
|
||||
|
||||
@@ -15,7 +15,7 @@ There exists a Jina Embeddings wrapper, which you can access with
|
||||
```python
|
||||
from langchain.embeddings import JinaEmbeddings
|
||||
```
|
||||
For a more detailed walkthrough of this, see [this notebook](/docs/integrations/text_embedding/jina.html)
|
||||
For a more detailed walkthrough of this, see [this notebook](/docs/integrations/text_embedding/jina)
|
||||
|
||||
## Deployment
|
||||
|
||||
|
||||
@@ -20,4 +20,4 @@ To import this vectorstore:
|
||||
from langchain.vectorstores import LanceDB
|
||||
```
|
||||
|
||||
For a more detailed walkthrough of the LanceDB wrapper, see [this notebook](/docs/integrations/vectorstores/lancedb.html)
|
||||
For a more detailed walkthrough of the LanceDB wrapper, see [this notebook](/docs/integrations/vectorstores/lancedb)
|
||||
|
||||
@@ -15,7 +15,7 @@ There exists a LlamaCpp LLM wrapper, which you can access with
|
||||
```python
|
||||
from langchain.llms import LlamaCpp
|
||||
```
|
||||
For a more detailed walkthrough of this, see [this notebook](/docs/integrations/llms/llamacpp.html)
|
||||
For a more detailed walkthrough of this, see [this notebook](/docs/integrations/llms/llamacpp)
|
||||
|
||||
### Embeddings
|
||||
|
||||
@@ -23,4 +23,4 @@ There exists a LlamaCpp Embeddings wrapper, which you can access with
|
||||
```python
|
||||
from langchain.embeddings import LlamaCppEmbeddings
|
||||
```
|
||||
For a more detailed walkthrough of this, see [this notebook](/docs/integrations/text_embedding/llamacpp.html)
|
||||
For a more detailed walkthrough of this, see [this notebook](/docs/integrations/text_embedding/llamacpp)
|
||||
|
||||
@@ -28,4 +28,4 @@ To import this vectorstore:
|
||||
from langchain.vectorstores import Marqo
|
||||
```
|
||||
|
||||
For a more detailed walkthrough of the Marqo wrapper and some of its unique features, see [this notebook](/docs/integrations/vectorstores/marqo.html)
|
||||
For a more detailed walkthrough of the Marqo wrapper and some of its unique features, see [this notebook](/docs/integrations/vectorstores/marqo)
|
||||
|
||||
@@ -22,4 +22,4 @@ To import this vectorstore:
|
||||
from langchain.vectorstores import Milvus
|
||||
```
|
||||
|
||||
For a more detailed walkthrough of the `Milvus` wrapper, see [this notebook](/docs/integrations/vectorstores/milvus.html)
|
||||
For a more detailed walkthrough of the `Milvus` wrapper, see [this notebook](/docs/integrations/vectorstores/milvus)
|
||||
|
||||
@@ -11,7 +11,7 @@ Get a [Minimax group id](https://api.minimax.chat/user-center/basic-information)
|
||||
## LLM
|
||||
|
||||
There exists a Minimax LLM wrapper, which you can access with
|
||||
See a [usage example](/docs/modules/model_io/models/llms/integrations/minimax.html).
|
||||
See a [usage example](/docs/modules/model_io/models/llms/integrations/minimax).
|
||||
|
||||
```python
|
||||
from langchain.llms import Minimax
|
||||
@@ -19,7 +19,7 @@ from langchain.llms import Minimax
|
||||
|
||||
## Chat Models
|
||||
|
||||
See a [usage example](/docs/modules/model_io/models/chat/integrations/minimax.html)
|
||||
See a [usage example](/docs/modules/model_io/models/chat/integrations/minimax)
|
||||
|
||||
```python
|
||||
from langchain.chat_models import MiniMaxChat
|
||||
|
||||
@@ -1,9 +1,9 @@
|
||||
# MLflow AI Gateway
|
||||
|
||||
>[The MLflow AI Gateway](https://www.mlflow.org/docs/latest/gateway/index.html) service is a powerful tool designed to streamline the usage and management of various large
|
||||
>[The MLflow AI Gateway](https://www.mlflow.org/docs/latest/gateway/index) service is a powerful tool designed to streamline the usage and management of various large
|
||||
> language model (LLM) providers, such as OpenAI and Anthropic, within an organization. It offers a high-level interface
|
||||
> that simplifies the interaction with these services by providing a unified endpoint to handle specific LLM related requests.
|
||||
> See [the MLflow AI Gateway documentation](https://mlflow.org/docs/latest/gateway/index.html) for more details.
|
||||
> See [the MLflow AI Gateway documentation](https://mlflow.org/docs/latest/gateway/index) for more details.
|
||||
|
||||
## Installation and Setup
|
||||
|
||||
@@ -52,7 +52,7 @@ mlflow gateway start --config-path /path/to/config.yaml
|
||||
> This module exports multivariate LangChain models in the langchain flavor and univariate LangChain
|
||||
> models in the pyfunc flavor.
|
||||
|
||||
See the [API documentation and examples](https://www.mlflow.org/docs/latest/python_api/mlflow.langchain.html).
|
||||
See the [API documentation and examples](https://www.mlflow.org/docs/latest/python_api/mlflow.langchain).
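A minimal sketch of logging a chain and reloading it (model names and the prompt are illustrative, not from the original page):

```python
import mlflow
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

chain = LLMChain(
    llm=OpenAI(temperature=0),
    prompt=PromptTemplate.from_template("Translate to French: {text}"),
)

# Log the chain with the langchain flavor, then reload it as a pyfunc model.
with mlflow.start_run():
    info = mlflow.langchain.log_model(chain, "langchain_model")

loaded = mlflow.pyfunc.load_model(info.model_uri)
print(loaded.predict([{"text": "good morning"}]))
```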
|
||||
|
||||
|
||||
|
||||
|
||||
@@ -7,7 +7,7 @@
|
||||
"source": [
|
||||
"# MLflow\n",
|
||||
"\n",
|
||||
">[MLflow](https://www.mlflow.org/docs/latest/what-is-mlflow.html) is a versatile, expandable, open-source platform for managing workflows and artifacts across the machine learning lifecycle. It has built-in integrations with many popular ML libraries, but can be used with any library, algorithm, or deployment tool. It is designed to be extensible, so you can write plugins to support new workflows, libraries, and tools.\n",
|
||||
">[MLflow](https://www.mlflow.org/docs/latest/what-is-mlflow) is a versatile, expandable, open-source platform for managing workflows and artifacts across the machine learning lifecycle. It has built-in integrations with many popular ML libraries, but can be used with any library, algorithm, or deployment tool. It is designed to be extensible, so you can write plugins to support new workflows, libraries, and tools.\n",
|
||||
"\n",
|
||||
"This notebook goes over how to track your LangChain experiments into your `MLflow Server`"
|
||||
]
|
||||
|
||||
@@ -50,10 +50,10 @@ Momento can be used as a distributed memory store for LLMs.
|
||||
|
||||
### Chat Message History Memory
|
||||
|
||||
See [this notebook](/docs/integrations/memory/momento_chat_message_history.html) for a walkthrough of how to use Momento as a memory store for chat message history.
|
||||
See [this notebook](/docs/integrations/memory/momento_chat_message_history) for a walkthrough of how to use Momento as a memory store for chat message history.
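As a quick sketch (assumes `MOMENTO_AUTH_TOKEN` is set; the cache and session names are placeholders):

```python
from datetime import timedelta

from langchain.memory import MomentoChatMessageHistory

# Create a TTL-backed message history in the "langchain" cache.
history = MomentoChatMessageHistory.from_client_params(
    "my-session", "langchain", timedelta(days=1)
)
history.add_user_message("hi!")
```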
|
||||
|
||||
## Vector Store
|
||||
|
||||
Momento Vector Index (MVI) can be used as a vector store.
|
||||
|
||||
See [this notebook](/docs/integrations/vectorstores/momento_vector_index.html) for a walkthrough of how to use MVI as a vector store.
|
||||
See [this notebook](/docs/integrations/vectorstores/momento_vector_index) for a walkthrough of how to use MVI as a vector store.
|
||||
|
||||
@@ -31,7 +31,7 @@ db = SQLDatabase.from_uri(conn_str)
|
||||
db_chain = SQLDatabaseChain.from_llm(OpenAI(temperature=0), db, verbose=True)
|
||||
```
|
||||
|
||||
From here, see the [SQL Chain](/docs/use_cases/tabular/sqlite.html) documentation on how to use.
|
||||
From here, see the [SQL Chain](/docs/use_cases/tabular/sqlite) documentation on how to use.
|
||||
|
||||
|
||||
## LLMCache
|
||||
|
||||
@@ -63,4 +63,4 @@ To import this vectorstore:
|
||||
from langchain.vectorstores import MyScale
|
||||
```
|
||||
|
||||
For a more detailed walkthrough of the MyScale wrapper, see [this notebook](/docs/integrations/vectorstores/myscale.html)
|
||||
For a more detailed walkthrough of the MyScale wrapper, see [this notebook](/docs/integrations/vectorstores/myscale)
|
||||
|
||||
@@ -29,7 +29,7 @@ To import this vectorstore:
|
||||
from langchain.vectorstores import Neo4jVector
|
||||
```
|
||||
|
||||
For a more detailed walkthrough of the Neo4j vector index wrapper, see [documentation](/docs/integrations/vectorstores/neo4jvector.html)
|
||||
For a more detailed walkthrough of the Neo4j vector index wrapper, see [documentation](/docs/integrations/vectorstores/neo4jvector)
|
||||
|
||||
### GraphCypherQAChain
|
||||
|
||||
@@ -41,7 +41,7 @@ from langchain.graphs import Neo4jGraph
|
||||
from langchain.chains import GraphCypherQAChain
|
||||
```
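A minimal sketch of putting the pieces together (the connection details are placeholders for a running Neo4j instance, and the question assumes a movie graph):

```python
from langchain.chains import GraphCypherQAChain
from langchain.chat_models import ChatOpenAI
from langchain.graphs import Neo4jGraph

# Connect to the graph, then let the chain generate and run Cypher queries.
graph = Neo4jGraph(url="bolt://localhost:7687", username="neo4j", password="password")
chain = GraphCypherQAChain.from_llm(ChatOpenAI(temperature=0), graph=graph, verbose=True)
chain.run("Who played in Top Gun?")
```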
|
||||
|
||||
For a more detailed walkthrough of Cypher generating chain, see [documentation](/docs/use_cases/graph/graph_cypher_qa.html)
|
||||
For a more detailed walkthrough of Cypher generating chain, see [documentation](/docs/use_cases/graph/graph_cypher_qa)
|
||||
|
||||
### Constructing a knowledge graph from text
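To make the `GraphCypherQAChain` hunk above concrete, a short sketch; the connection details and sample question are hypothetical.

```python
from langchain.chat_models import ChatOpenAI
from langchain.graphs import Neo4jGraph
from langchain.chains import GraphCypherQAChain

# Hypothetical credentials for a local Neo4j instance.
graph = Neo4jGraph(url="bolt://localhost:7687", username="neo4j", password="password")
chain = GraphCypherQAChain.from_llm(ChatOpenAI(temperature=0), graph=graph, verbose=True)
chain.run("Who acted in the movie Top Gun?")  # the LLM writes Cypher, the graph answers
```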
@@ -55,4 +55,4 @@ from langchain.graphs import Neo4jGraph
 from langchain_experimental.graph_transformers.diffbot import DiffbotGraphTransformer
 ```

-For a more detailed walkthrough of generating graphs from text, see [documentation](/docs/use_cases/graph/diffbot_graphtransformer.html)
+For a more detailed walkthrough of generating graphs from text, see [documentation](/docs/use_cases/graph/diffbot_graphtransformer)
@@ -12,14 +12,14 @@ All instructions are in examples below.

 We have two different loaders: `NotionDirectoryLoader` and `NotionDBLoader`.

-See a [usage example for the NotionDirectoryLoader](/docs/integrations/document_loaders/notion.html).
+See a [usage example for the NotionDirectoryLoader](/docs/integrations/document_loaders/notion).


 ```python
 from langchain.document_loaders import NotionDirectoryLoader
 ```

-See a [usage example for the NotionDBLoader](/docs/integrations/document_loaders/notiondb.html).
+See a [usage example for the NotionDBLoader](/docs/integrations/document_loaders/notiondb).


 ```python
@@ -67,4 +67,4 @@ llm("What is the difference between a duck and a goose? And why there are so man
 ### Usage

 For a more detailed walkthrough of the OpenLLM Wrapper, see the
-[example notebook](/docs/integrations/llms/openllm.html)
+[example notebook](/docs/integrations/llms/openllm)
@@ -18,4 +18,4 @@ To import this vectorstore:
 from langchain.vectorstores import OpenSearchVectorSearch
 ```

-For a more detailed walkthrough of the OpenSearch wrapper, see [this notebook](/docs/integrations/vectorstores/opensearch.html)
+For a more detailed walkthrough of the OpenSearch wrapper, see [this notebook](/docs/integrations/vectorstores/opensearch)
@@ -29,7 +29,7 @@ There exists a OpenWeatherMapAPIWrapper utility which wraps this API. To import
 from langchain.utilities.openweathermap import OpenWeatherMapAPIWrapper
 ```

-For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/openweathermap.html).
+For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/openweathermap).

 ### Tool
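The wrapper in the hunk above takes a single location string; a minimal sketch, assuming an OpenWeatherMap API key is available (the key value is a placeholder).

```python
import os
from langchain.utilities.openweathermap import OpenWeatherMapAPIWrapper

os.environ["OPENWEATHERMAP_API_KEY"] = "..."  # placeholder; use your own key

weather = OpenWeatherMapAPIWrapper()
print(weather.run("London,GB"))  # returns current conditions as text
```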
@@ -26,4 +26,4 @@ from langchain.vectorstores.pgvector import PGVector

 ### Usage

-For a more detailed walkthrough of the PGVector Wrapper, see [this notebook](/docs/integrations/vectorstores/pgvector.html)
+For a more detailed walkthrough of the PGVector Wrapper, see [this notebook](/docs/integrations/vectorstores/pgvector)
@@ -21,4 +21,4 @@ whether for semantic search or example selection.
 from langchain.vectorstores import Pinecone
 ```

-For a more detailed walkthrough of the Pinecone vectorstore, see [this notebook](/docs/integrations/vectorstores/pinecone.html)
+For a more detailed walkthrough of the Pinecone vectorstore, see [this notebook](/docs/integrations/vectorstores/pinecone)
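A minimal sketch of the Pinecone wrapper referenced above, assuming the v2-era `pinecone-client` and a pre-created index; the index name, key, and environment are placeholders.

```python
import pinecone
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Pinecone

# Hypothetical credentials; create the index in the Pinecone console first.
pinecone.init(api_key="...", environment="us-west1-gcp")
docsearch = Pinecone.from_texts(
    ["LangChain integrates with Pinecone."],
    OpenAIEmbeddings(),
    index_name="langchain-demo",
)
docs = docsearch.similarity_search("What does LangChain integrate with?")
```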
@@ -40,10 +40,10 @@ for res in llm_results.generations:
 ```
 You can use the PromptLayer request ID to add a prompt, score, or other metadata to your request. [Read more about it here](https://magniv.notion.site/Track-4deee1b1f7a34c1680d085f82567dab9).

-This LLM is identical to the [OpenAI](/docs/ecosystem/integrations/openai.html) LLM, except that
+This LLM is identical to the [OpenAI](/docs/ecosystem/integrations/openai) LLM, except that
 - all your requests will be logged to your PromptLayer account
 - you can add `pl_tags` when instantiating to tag your requests on PromptLayer
 - you can add `return_pl_id` when instantiating to return a PromptLayer request id to use [while tracking requests](https://magniv.notion.site/Track-4deee1b1f7a34c1680d085f82567dab9).

-PromptLayer also provides native wrappers for [`PromptLayerChatOpenAI`](/docs/integrations/chat/promptlayer_chatopenai.html) and `PromptLayerOpenAIChat`
+PromptLayer also provides native wrappers for [`PromptLayerChatOpenAI`](/docs/integrations/chat/promptlayer_chatopenai) and `PromptLayerOpenAIChat`
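A sketch of the `pl_tags` and `return_pl_id` options described in the hunk above; it assumes `PROMPTLAYER_API_KEY` and `OPENAI_API_KEY` are set in the environment.

```python
from langchain.llms import PromptLayerOpenAI

llm = PromptLayerOpenAI(pl_tags=["langchain-demo"], return_pl_id=True)
generation = llm.generate(["Tell me a joke."]).generations[0][0]
# With return_pl_id=True, the request id is attached to generation_info.
pl_request_id = generation.generation_info["pl_request_id"]
```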
@@ -16,4 +16,4 @@ There is a basic wrapper around `SemaDB` collections allowing you to use it as a
 from langchain.vectorstores import SemaDB
 ```

-You can follow a tutorial on how to use the wrapper in [this notebook](/docs/integrations/vectorstores/semadb.html).
+You can follow a tutorial on how to use the wrapper in [this notebook](/docs/integrations/vectorstores/semadb).
@@ -16,7 +16,7 @@ view these connections from the dashboard and retrieve data using the server-sid

 1. Create an account in the [dashboard](https://dashboard.psychic.dev/).
 2. Use the [react library](https://docs.psychic.dev/sidekick-link) to add the Psychic link modal to your frontend react app. You will use this to connect the SaaS apps.
-3. Once you have created a connection, you can use the `PsychicLoader` by following the [example notebook](/docs/integrations/document_loaders/psychic.html)
+3. Once you have created a connection, you can use the `PsychicLoader` by following the [example notebook](/docs/integrations/document_loaders/psychic)


 ## Advantages vs Other Document Loaders
@@ -24,4 +24,4 @@ To import this vectorstore:
 from langchain.vectorstores import Qdrant
 ```

-For a more detailed walkthrough of the Qdrant wrapper, see [this notebook](/docs/integrations/vectorstores/qdrant.html)
+For a more detailed walkthrough of the Qdrant wrapper, see [this notebook](/docs/integrations/vectorstores/qdrant)
@@ -103,7 +103,7 @@ To import this vectorstore:
 from langchain.vectorstores import Redis
 ```

-For a more detailed walkthrough of the Redis vectorstore wrapper, see [this notebook](/docs/integrations/vectorstores/redis.html).
+For a more detailed walkthrough of the Redis vectorstore wrapper, see [this notebook](/docs/integrations/vectorstores/redis).

 ### Retriever
@@ -114,7 +114,7 @@ Redis can be used to persist LLM conversations.

 #### Vector Store Retriever Memory

-For a more detailed walkthrough of the `VectorStoreRetrieverMemory` wrapper, see [this notebook](/docs/modules/memory/types/vectorstore_retriever_memory.html).
+For a more detailed walkthrough of the `VectorStoreRetrieverMemory` wrapper, see [this notebook](/docs/modules/memory/types/vectorstore_retriever_memory).

 #### Chat Message History Memory
-For a detailed example of using Redis to cache conversation message history, see [this notebook](/docs/integrations/memory/redis_chat_message_history.html).
+For a detailed example of using Redis to cache conversation message history, see [this notebook](/docs/integrations/memory/redis_chat_message_history).
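A minimal sketch of the Redis chat-message-history integration mentioned above, assuming a Redis server on localhost; the session id is an arbitrary conversation key.

```python
from langchain.memory import RedisChatMessageHistory

history = RedisChatMessageHistory(session_id="user-42", url="redis://localhost:6379/0")
history.add_user_message("Hi! Remember that my favorite color is teal.")
history.add_ai_message("Noted, teal it is.")
print(history.messages)  # messages persist in Redis across processes
```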
@@ -15,7 +15,7 @@ custom LLMs, you can use the `SelfHostedPipeline` parent class.
 from langchain.llms import SelfHostedPipeline, SelfHostedHuggingFaceLLM
 ```

-For a more detailed walkthrough of the Self-hosted LLMs, see [this notebook](/docs/integrations/llms/runhouse.html)
+For a more detailed walkthrough of the Self-hosted LLMs, see [this notebook](/docs/integrations/llms/runhouse)

 ## Self-hosted Embeddings
 There are several ways to use self-hosted embeddings with LangChain via Runhouse.
@@ -26,4 +26,4 @@ the `SelfHostedEmbedding` class.
 from langchain.llms import SelfHostedPipeline, SelfHostedHuggingFaceLLM
 ```

-For a more detailed walkthrough of the Self-hosted Embeddings, see [this notebook](/docs/integrations/text_embedding/self-hosted.html)
+For a more detailed walkthrough of the Self-hosted Embeddings, see [this notebook](/docs/integrations/text_embedding/self-hosted)
29 docs/docs/integrations/providers/salute_devices.mdx Normal file
@@ -0,0 +1,29 @@
+# Salute Devices
+
+Salute Devices provides the GigaChat family of LLMs.
+
+For more information on how to get access to GigaChat, [see here](https://developers.sber.ru/docs/ru/gigachat/api/integration).
+
+## Installation and Setup
+
+The GigaChat package can be installed via pip from PyPI:
+
+```bash
+pip install gigachat
+```
+
+## LLMs
+
+See a [usage example](/docs/integrations/llms/gigachat).
+
+```python
+from langchain.llms import GigaChat
+```
+
+## Chat models
+
+See a [usage example](/docs/integrations/chat/gigachat).
+
+```python
+from langchain.chat_models import GigaChat
+```
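Beyond the new provider page above, a hedged sketch of calling the GigaChat chat model; the credentials string is a placeholder obtained from the Sber developer portal, and `verify_ssl_certs=False` follows the integration's documented quickstart.

```python
from langchain.chat_models import GigaChat
from langchain.schema import HumanMessage

# Hypothetical authorization key; get a real one via the Sber developer portal.
chat = GigaChat(credentials="<authorization-key>", verify_ssl_certs=False)
reply = chat([HumanMessage(content="Translate 'hello' into Russian.")])
print(reply.content)
```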
@@ -17,7 +17,7 @@ There exists a SerpAPI utility which wraps this API. To import this utility:
 from langchain.utilities import SerpAPIWrapper
 ```

-For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/serpapi.html).
+For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/serpapi).

 ### Tool
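The SerpAPI wrapper in the hunk above is a one-call search utility; a minimal sketch, with the API key as a placeholder.

```python
import os
from langchain.utilities import SerpAPIWrapper

os.environ["SERPAPI_API_KEY"] = "..."  # placeholder; use your own key

search = SerpAPIWrapper()
print(search.run("What is the capital of Kenya?"))
```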
@@ -19,4 +19,4 @@ To import this vectorstore:
 from langchain.vectorstores import SKLearnVectorStore
 ```

-For a more detailed walkthrough of the SKLearnVectorStore wrapper, see [this notebook](/docs/integrations/vectorstores/sklearn.html).
+For a more detailed walkthrough of the SKLearnVectorStore wrapper, see [this notebook](/docs/integrations/vectorstores/sklearn).
@@ -13,7 +13,7 @@ pip install spacy

 ## Text Splitter

-See a [usage example](/docs/modules/data_connection/document_transformers/text_splitters/split_by_token.html#spacy).
+See a [usage example](/docs/modules/data_connection/document_transformers/text_splitters/split_by_token#spacy).

 ```python
 from langchain.text_splitter import SpacyTextSplitter
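Continuing the spaCy hunk above, a small usage sketch; it assumes the `en_core_web_sm` pipeline (the splitter's default) is installed.

```python
from langchain.text_splitter import SpacyTextSplitter

text = "LangChain splits long documents. spaCy finds the sentence boundaries. Chunks respect them."
splitter = SpacyTextSplitter(chunk_size=1000)  # assumes en_core_web_sm is available
for chunk in splitter.split_text(text):
    print(chunk)
```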
@@ -4,7 +4,7 @@

 ## Installation and Setup

-See [setup instructions](/docs/integrations/document_loaders/spreedly.html).
+See [setup instructions](/docs/integrations/document_loaders/spreedly).

 ## Document Loader
@@ -5,7 +5,7 @@

 ## Installation and Setup

-See [setup instructions](/docs/integrations/document_loaders/stripe.html).
+See [setup instructions](/docs/integrations/document_loaders/stripe).

 ## Document Loader
@@ -19,4 +19,4 @@ To import this vectorstore:
 from langchain.vectorstores import Tair
 ```

-For a more detailed walkthrough of the Tair wrapper, see [this notebook](/docs/integrations/vectorstores/tair.html)
+For a more detailed walkthrough of the Tair wrapper, see [this notebook](/docs/integrations/vectorstores/tair)
@@ -5,7 +5,7 @@

 ## Installation and Setup

-See [setup instructions](/docs/integrations/document_loaders/telegram.html).
+See [setup instructions](/docs/integrations/document_loaders/telegram).

 ## Document Loader
@@ -12,4 +12,4 @@ To import this vectorstore:
 from langchain.vectorstores import TencentVectorDB
 ```

-For a more detailed walkthrough of the TencentVectorDB wrapper, see [this notebook](/docs/integrations/vectorstores/tencentvectordb.html)
+For a more detailed walkthrough of the TencentVectorDB wrapper, see [this notebook](/docs/integrations/vectorstores/tencentvectordb)
@@ -10,7 +10,7 @@
 pip install py-trello beautifulsoup4
 ```

-See [setup instructions](/docs/integrations/document_loaders/trello.html).
+See [setup instructions](/docs/integrations/document_loaders/trello).


 ## Document Loader
@@ -1,7 +1,7 @@
 # Typesense

 > [Typesense](https://typesense.org) is an open-source, in-memory search engine, that you can either
-> [self-host](https://typesense.org/docs/guide/install-typesense.html#option-2-local-machine-self-hosting) or run
+> [self-host](https://typesense.org/docs/guide/install-typesense#option-2-local-machine-self-hosting) or run
 > on [Typesense Cloud](https://cloud.typesense.org/).
 > `Typesense` focuses on performance by storing the entire index in RAM (with a backup on disk) and also
 > focuses on providing an out-of-the-box developer experience by simplifying available options and setting good defaults.
@@ -39,4 +39,4 @@ langchain.llm_cache = UpstashRedisCache(redis_=Redis(url=URL, token=TOKEN))
 Upstash Redis can be used to persist LLM conversations.

 #### Chat Message History Memory
-An example of Upstash Redis for caching conversation message history can be seen in [this notebook](/docs/integrations/memory/upstash_redis_chat_message_history.html).
+An example of Upstash Redis for caching conversation message history can be seen in [this notebook](/docs/integrations/memory/upstash_redis_chat_message_history).
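A minimal sketch of the Upstash Redis chat-history class mentioned above; the REST URL and token are placeholders from your Upstash console.

```python
from langchain.memory import UpstashRedisChatMessageHistory

# Hypothetical Upstash REST credentials.
history = UpstashRedisChatMessageHistory(
    url="https://<your-db>.upstash.io",
    token="<token>",
    session_id="user-42",
)
history.add_user_message("hello")
```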
@@ -7,7 +7,7 @@
 "source": [
 "# Chat Over Documents with Vectara\n",
 "\n",
-"This notebook is based on the [chat_vector_db](https://github.com/langchain-ai/langchain/blob/master/docs/modules/chains/index_examples/chat_vector_db.html) notebook, but using Vectara as the vector database."
+"This notebook is based on the [chat_vector_db](https://github.com/langchain-ai/langchain/blob/master/docs/modules/chains/index_examples/chat_vector_db) notebook, but using Vectara as the vector database."
 ]
 },
 {
@@ -35,4 +35,4 @@ To import this vectorstore:
 from langchain.vectorstores import Weaviate
 ```

-For a more detailed walkthrough of the Weaviate wrapper, see [this notebook](/docs/integrations/vectorstores/weaviate.html)
+For a more detailed walkthrough of the Weaviate wrapper, see [this notebook](/docs/integrations/vectorstores/weaviate)
@@ -25,7 +25,7 @@ There exists a WolframAlphaAPIWrapper utility which wraps this API. To import th
 from langchain.utilities.wolfram_alpha import WolframAlphaAPIWrapper
 ```

-For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/wolfram_alpha.html).
+For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/wolfram_alpha).

 ### Tool
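A minimal sketch of the Wolfram Alpha wrapper above; the app ID is a placeholder you create in the Wolfram developer portal.

```python
import os
from langchain.utilities.wolfram_alpha import WolframAlphaAPIWrapper

os.environ["WOLFRAM_ALPHA_APPID"] = "..."  # placeholder; use your own app ID

wolfram = WolframAlphaAPIWrapper()
print(wolfram.run("What is 2x+5 = -3x + 7?"))  # returns the solved equation as text
```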
@@ -93,10 +93,10 @@ llm(
 ### Usage

 For more information and detailed examples, refer to the
-[example for xinference LLMs](/docs/integrations/llms/xinference.html)
+[example for xinference LLMs](/docs/integrations/llms/xinference)

 ### Embeddings

 Xinference also supports embedding queries and documents. See
-[example for xinference embeddings](/docs/integrations/text_embedding/xinference.html)
+[example for xinference embeddings](/docs/integrations/text_embedding/xinference)
 for a more detailed demo.
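A short sketch of the Xinference LLM usage referenced above; the server URL and model UID are placeholders produced by your own `xinference launch` session.

```python
from langchain.llms import Xinference

# Hypothetical values; copy the real UID printed by `xinference launch`.
llm = Xinference(server_url="http://0.0.0.0:9997", model_uid="<model-uid>")
print(llm("Q: Name the largest ocean on Earth. A:"))
```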
@@ -19,4 +19,4 @@ whether for semantic search or example selection.
 from langchain.vectorstores import Milvus
 ```

-For a more detailed walkthrough of the Milvus wrapper, see [this notebook](/docs/integrations/vectorstores/zilliz.html)
+For a more detailed walkthrough of the Milvus wrapper, see [this notebook](/docs/integrations/vectorstores/zilliz)
@@ -43,7 +43,7 @@
 "from langchain.embeddings import SagemakerEndpointEmbeddings\n",
 "from langchain.embeddings.sagemaker_endpoint import EmbeddingsContentHandler\n",
 "import json\n",
-"\n",
+"import boto3\n",
 "\n",
 "class ContentHandler(EmbeddingsContentHandler):\n",
 " content_type = \"application/json\"\n",
@@ -87,7 +87,18 @@
 " endpoint_name=\"huggingface-pytorch-inference-2023-03-21-16-14-03-834\",\n",
 " region_name=\"us-east-1\",\n",
 " content_handler=content_handler,\n",
-")"
+")\n",
+"\n",
+"\n",
+"# client = boto3.client(\n",
+"# \"sagemaker-runtime\",\n",
+"# region_name=\"us-west-2\"\n",
+"# )\n",
+"# embeddings = SagemakerEndpointEmbeddings(\n",
+"# endpoint_name=\"huggingface-pytorch-inference-2023-03-21-16-14-03-834\",\n",
+"# client=client,\n",
+"# content_handler=content_handler,\n",
+"# )"
 ]
 },
 {
@@ -11,7 +11,7 @@
 "\n",
 "This notebook demonstrates a sample composition of the `Speak`, `Klarna`, and `Spoonacular` APIs.\n",
 "\n",
-"For a detailed walkthrough of the OpenAPI chains wrapped within the NLAToolkit, see the [OpenAPI Operation Chain](/docs/use_cases/apis/openapi.html) notebook.\n",
+"For a detailed walkthrough of the OpenAPI chains wrapped within the NLAToolkit, see the [OpenAPI Operation Chain](/docs/use_cases/apis/openapi) notebook.\n",
 "\n",
 "### First, import dependencies and load the LLM"
 ]
@@ -379,7 +379,7 @@
 "agent.run(\n",
 " \"\"\"\n",
 "who bought the most expensive ticket?\n",
-"You can find all supported function types in https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/dataframe.html\n",
+"You can find all supported function types in https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/dataframe\n",
 "\"\"\"\n",
 ")"
 ]
Some files were not shown because too many files have changed in this diff.