Compare commits

...

242 Commits

Author SHA1 Message Date
Erick Friis
1cc21e60a9 openai: developer message just openai 2024-12-18 11:54:29 -05:00
binhnd102
f723a8456e Fixes: community: fix LanceDB return no metadata (#27024)
- [x] Fix: when LanceDB returns a table without a metadata column
- **Description:** Check the table schema; if it has no metadata column,
initialize the Document with a metadata argument equal to an empty dict
    - **Issue:** https://github.com/langchain-ai/langchain/issues/27005

- [x] **Add tests and docs**
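
A minimal sketch of the guard described above, with illustrative field
names rather than the exact patch:

```python
from langchain_core.documents import Document

def row_to_document(row: dict, schema_names: list[str]) -> Document:
    # If the LanceDB table has no metadata column, fall back to an empty
    # dict instead of failing on the missing column.
    metadata = row["metadata"] if "metadata" in schema_names else {}
    return Document(page_content=row["text"], metadata=metadata)
```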

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-12-18 15:21:28 +00:00
ANSARI MD AAQIB AHMED
91d28ef453 Add langchain-yt-dlp Document Loader Documentation (#28775)
## Overview
This PR adds documentation for the `langchain-yt-dlp` package, a YouTube
document loader that uses `yt-dlp` for YouTube video metadata
extraction.

## Changes
- Added documentation notebook for YoutubeLoader
- Updated packages.yml to include langchain-yt-dlp

## Motivation
The existing LangChain YoutubeLoader was unable to fetch YouTube
metadata due to changes in YouTube's structure. This package resolves
those issues by leveraging the `yt-dlp` library.

## Features
- Reliable YouTube metadata extraction

## Related
- Package Repository: https://github.com/aqib0770/langchain-yt-dlp
- PyPI Package: https://pypi.org/project/langchain-yt-dlp/

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-18 10:16:50 -05:00
GITHUBear
33b1fb95b8 partners: langchain-oceanbase Integration (#28782)
Hi, LangChain team! I'm a maintainer of
[OceanBase](https://github.com/oceanbase/oceanbase).

Following the integration guidance, I created a Python lib named
[langchain-oceanbase](https://github.com/oceanbase/langchain-oceanbase)
to integrate the `OceanBase Vector Store` with `LangChain`.

I'd like to add the required docs and would appreciate your feedback.
Thank you!

---------

Signed-off-by: shanhaikang.shk <shanhaikang.shk@oceanbase.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-18 14:51:49 +00:00
Rave Harpaz
986b752fc8 Add OCI Generative AI new model and structured output support (#28754)
- [X] **PR title**: 
 community: Add new model and structured output support


- [X] **PR message**: 
- **Description:** add support for Meta Llama 3.2 image handling and
JSON mode for structured output
    - **Issue:** NA
    - **Dependencies:** NA
    - **Twitter handle:** NA


- [x] **Add tests and docs**: 
  1. we have updated our unit tests,
  2. no changes required for documentation.


- [x] **Lint and test**: 
`make format`, `make lint`, and `make test` all run successfully

---------

Co-authored-by: Arthur Cheng <arthur.cheng@oracle.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
2024-12-18 09:50:25 -05:00
David Pryce-Compson
ef24220d3f community: adding haiku 3.5 and opus callbacks (#28783)
**Description:** 
Adding new AWS Bedrock models and their respective costs to match
https://aws.amazon.com/bedrock/pricing/ for the Bedrock callback

**Issue:** 
Missing models for those that wish to try them out

**Dependencies:**
Nothing added

**Twitter handle:**
@David_Pryce and / or @JamfSoftware

2024-12-18 09:45:10 -05:00
Yudai Kotani
05a44797ee langchain_community: Add default None values to DocumentAttributeValue class properties (#28785)
**Description**: 
This PR addresses an issue where the DocumentAttributeValue class
properties did not have default values of None. By explicitly setting
the Optional attributes (DateValue, LongValue, StringListValue, and
StringValue) to default to None, this change ensures the class functions
as expected when no value is provided for these attributes.

**Changes Made**:
Added default None values to the following properties of the
DocumentAttributeValue class:
- DateValue
- LongValue
- StringListValue
- StringValue
Removed the invalid argument extra="allow" from the BaseModel
inheritance.
Dependencies: None.

**Twitter handle (optional)**: @__korikori1021

**Checklist**
- [x] Verified that KendraRetriever works as expected after the changes.
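
A minimal sketch of the resulting class shape (simplified; the field
types here are assumptions):

```python
from typing import List, Optional
from pydantic import BaseModel

class DocumentAttributeValue(BaseModel):
    DateValue: Optional[str] = None
    LongValue: Optional[int] = None
    StringListValue: Optional[List[str]] = None
    StringValue: Optional[str] = None

# All attributes now default to None when no value is provided.
print(DocumentAttributeValue())
```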

Co-authored-by: y1u0d2a1i <y.kotani@raksul.com>
2024-12-18 09:43:04 -05:00
Satyam Kumar
90f7713399 refactor: improve docstring parsing logic for Google style (#28730)


Description:  
Improved the `_parse_google_docstring` function in `langchain_core` to
support parsing multi-paragraph descriptions before the `Args:` section
while maintaining compliance with Google-style docstring guidelines.
This change ensures better handling of docstrings with detailed function
descriptions.

Issue:  
Fixes #28628

Dependencies:  
None.

Twitter handle:  
@isatyamks
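
For illustration, a hypothetical docstring of the shape that now parses
correctly:

```python
def fetch_record(record_id: str) -> dict:
    """Fetch a record from the store.

    The description may span multiple paragraphs. A second paragraph
    like this one previously broke parsing before the Args section.

    Args:
        record_id: Identifier of the record to fetch.
    """
    return {"id": record_id}
```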

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-18 09:35:19 -05:00
Zapiron
85c3bc1bbd docs: Grammar and Typo update for Runnable Conceptual guide (#28777) 2024-12-17 21:18:58 -05:00
Dong Shin
0b1359801e community: add trust_env at web_base_loader (#28514)
- **Description:** I am working to address a similar issue to the one
mentioned in https://github.com/langchain-ai/langchain/pull/19499.
Specifically, there is a problem with the WebBaseLoader used in
open-webui, where it fails to load the proxy configuration. This PR
resolves that issue.
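
A usage sketch, assuming the new option is the constructor keyword
described in the PR title:

```python
from langchain_community.document_loaders import WebBaseLoader

# trust_env=True lets the underlying HTTP client pick up proxy settings
# from environment variables such as HTTP_PROXY / HTTPS_PROXY.
loader = WebBaseLoader("https://example.com", trust_env=True)
docs = loader.load()
```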





---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-17 21:18:16 -05:00
Erick Friis
be738aa7de packages: enable vertex api build (#28773) 2024-12-17 11:31:14 -08:00
Bagatur
ac278cbe8b core[patch]: export InjectedToolCallId (#28772) 2024-12-17 19:29:20 +00:00
ccurme
5656702b8d docs: fix readme link (#28770)
SQL Llama2 Template -> LangChain Extract
2024-12-17 18:51:01 +00:00
Bagatur
e4d3ccf62f json mode standard test (#25497)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-17 18:47:34 +00:00
ccurme
24bf24270d docs: reference ExperimentalMarkdownTextSplitter (#28768)
Continuing https://github.com/langchain-ai/langchain/pull/27832

---------

Co-authored-by: promptless[bot] <179508745+promptless[bot]@users.noreply.github.com>
Co-authored-by: Frances Liu <francestfls@gmail.com>
2024-12-17 12:56:38 -05:00
Frank Dai
e81433497b community: support Confluence cookies (#28760)
**Description**: Some Confluence instances don't support personal access
tokens, so cookies are a convenient way to authenticate. This PR adds
support for Confluence cookies.
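
A usage sketch, assuming the new argument is named `cookies` and takes a
dict (names illustrative, not confirmed by the PR text):

```python
from langchain_community.document_loaders import ConfluenceLoader

loader = ConfluenceLoader(
    url="https://confluence.example.com",
    cookies={"JSESSIONID": "..."},  # hypothetical session cookie
    space_key="DOCS",
)
docs = loader.load()
```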

**Twitter handle**: soulmachine
2024-12-17 12:16:36 -05:00
ccurme
b745281eec anthropic[patch]: increase timeouts for integration tests (#28767)
Some tests consistently ran into the 10s limit in CI.
2024-12-17 15:47:17 +00:00
Leonid Ganeline
6479fd8c1c docs: integrations cache table of content (#28755)
Issue: the current
[Cache](https://python.langchain.com/docs/integrations/llm_caching/)
page has an inconsistent heading.
Mixed terms, mixed casing, and mixed `selecting` phrasing are used, and
excessively long titles make the right-side ToC hard to read.

Changes: a consistent and more readable ToC
2024-12-17 10:12:49 -05:00
Vinit Kudva
a00258ec12 chroma: fix persistence if client_settings is passed in (#25199)
…ent path given.


---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-17 10:03:02 -05:00
Omri Eliyahu Levy
f8883a1321 partners/voyageai: enable setting output dimension (#28740)
Voyage has introduced voyage-3-large and voyage-code-3, which feature
different output dimensions by leveraging a technique called "Matryoshka
Embeddings" (see blog -
https://blog.voyageai.com/2024/12/04/voyage-code-3/).
These two models are available in various sizes: [256, 512, 1024, 2048]
(https://docs.voyageai.com/docs/embeddings#model-choices).

This PR adds the option to set the required output dimension.
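
A usage sketch, assuming the new parameter is named `output_dimension`:

```python
from langchain_voyageai import VoyageAIEmbeddings

# voyage-3-large supports output dimensions of 256, 512, 1024 and 2048.
embeddings = VoyageAIEmbeddings(model="voyage-3-large", output_dimension=512)
vector = embeddings.embed_query("hello world")
assert len(vector) == 512
```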
2024-12-17 10:02:00 -05:00
Swastik-Swarup-Dash
0afc284920 fix:Agent reactgpt4 or gpt 4o as input model #28747 (#28764)
Co-authored-by: ccurme <chester.curme@gmail.com>
2024-12-17 14:35:09 +00:00
Zapiron
0c11aee486 docs: small grammar changes for conceptual guide page (#28765)
Co-authored-by: ccurme <chester.curme@gmail.com>
2024-12-17 14:27:55 +00:00
German Martin
3a1d05394d community: Apache AGE wrapper. Ensure Node Uniqueness by ID. (#28759)
**Description:**

The Apache AGE graph integration incorrectly handled node merging,
allowing duplicate nodes with different IDs but the same type and other
properties. Unlike
[Neo4j](cdf6202156/libs/community/langchain_community/graphs/neo4j_graph.py (L47)),
[Memgraph](cdf6202156/libs/community/langchain_community/graphs/memgraph_graph.py (L50)),
[Kuzu](cdf6202156/libs/community/langchain_community/graphs/kuzu_graph.py (L253)),
and
[Gremlin](cdf6202156/libs/community/langchain_community/graphs/gremlin_graph.py (L165)),
it did not use the node ID as the primary identifier for merging.

This inconsistency caused data integrity issues and unexpected behavior
when users expected updates to specific nodes by ID.

**Solution:**
This PR modifies the `node_insert_query` to `MERGE` nodes based on label
and ID *only* and updates properties with `SET`, aligning the behavior
with other graph database integrations. The `_format_properties` method
was also modified to handle id overrides.

**Impact:**

This fix ensures data integrity by preventing duplicate nodes and
provides consistent behavior across graph database integrations.
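
A hedged sketch of the merge pattern described above (placeholders, not
the exact patch):

```python
# Match on label + id only, then apply the remaining properties with SET,
# so re-inserting a node with an existing id updates it instead of
# creating a duplicate. Placeholders are filled in by the wrapper.
node_insert_query = (
    "MERGE (n:%(label)s {id: %(id)s}) "
    "SET n += %(properties)s"
)
```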
2024-12-17 09:21:59 -05:00
gsa9989
cdf6202156 cosmosdbnosql: Added Cosmos DB NoSQL Semantic Cache Integration with tests and jupyter notebook (#24424)
* Added Cosmos DB NoSQL Semantic Cache Integration with tests and
jupyter notebook

---------

Co-authored-by: Aayush Kataria <aayushkataria3011@gmail.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-16 21:57:05 -05:00
Brian Burgin
27a9056725 community: Fix ChatLiteLLMRouter runtime issues (#28163)
**Description:** Fix ChatLiteLLMRouter ctor validation and model_name
parameter
**Issue:** #19356, #27455, #28077
**Twitter handle:** @bburgin_0
2024-12-16 18:17:39 -05:00
Eugene Yurtsev
234d49653a docs: Create custom embeddings (#20398)
Guidelines on how to create custom embeddings

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-16 17:57:51 -05:00
Mikhail Khludnev
00deacc67e docs, external: introduce langchain-localai (#28751)
Referring to https://github.com/mkhludnev/langchain-localai

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-16 22:22:37 +00:00
Erick Friis
d4b5e7ef22 community: recommend RedisVectorStore over Redis (#28749) 2024-12-16 21:08:30 +00:00
Hiros
8f5e72de05 community: Correctly handle multi-element rich text (#25762)
**Description:**

- Add _concatenate_rich_text method to combine all elements in rich text
arrays
- Update load_page method to use _concatenate_rich_text for rich text
properties
- Ensure all text content is captured, including inline code and
formatted text
- Add unit tests to verify correct handling of multi-element rich text
This fix prevents truncation of content after backticks or other
formatting elements.

 **Issue:**

Using the Notion DB Loader, the text for `richtext` and `title`
properties was truncated after the first element, because the loader
only read the first element of the rich text array.

**Dependencies:** None.

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-16 20:20:27 +00:00
Antonio Lanza
b2102b8cc4 text-splitters: Inconsistent results with NLTKTextSplitter's add_start_index=True (#27782)
This PR closes #27781

# Problem
The current implementation of `NLTKTextSplitter` uses `sent_tokenize`.
However, `sent_tokenize` doesn't preserve the characters between two
tokenized sentences, so this behavior throws errors when
`add_start_index=True` is used, as described in issue #27781. In
particular:
```python
from nltk.tokenize import sent_tokenize

output1 = sent_tokenize("Innovation drives our success. Collaboration fosters creative solutions. Efficiency enhances data management.", language="english")
print(output1)
output2 = sent_tokenize("Innovation drives our success.        Collaboration fosters creative solutions. Efficiency enhances data management.", language="english")
print(output2)
>>> ['Innovation drives our success.', 'Collaboration fosters creative solutions.', 'Efficiency enhances data management.']
>>> ['Innovation drives our success.', 'Collaboration fosters creative solutions.', 'Efficiency enhances data management.']
```

# Solution
With the new `use_span_tokenize` parameter, we use NLTK's
`span_tokenize` to create sentences while accounting for the extra
characters, so the chunks can still be mapped to the original text.
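
A small sketch of the offset-preserving approach (not the exact splitter
code):

```python
from nltk.tokenize import PunktSentenceTokenizer

text = ("Innovation drives our success.        "
        "Collaboration fosters creative solutions.")
# span_tokenize yields character offsets into the original text, so the
# inter-sentence whitespace is not lost and start indices stay correct.
print(list(PunktSentenceTokenizer().span_tokenize(text)))
# e.g. [(0, 30), (38, 79)]
```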

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Erick Friis <erickfriis@gmail.com>
2024-12-16 19:53:15 +00:00
Tari Yekorogha
d262d41cc0 community: added FalkorDB vector store support i.e implementation, test, docs an… (#26245)
**Description:** Added support for FalkorDB Vector Store, including its
implementation, unit tests, documentation, and an example notebook. The
FalkorDB integration allows users to efficiently manage and query
embeddings in a vector database, with relevance scoring and maximal
marginal relevance search. The following components were implemented:

- Core implementation for FalkorDBVector store.
- Unit tests ensuring proper functionality and edge case coverage.
- Example notebook demonstrating an end-to-end setup, search, and
retrieval using FalkorDB.

**Twitter handle:** @tariyekorogha

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-16 19:37:55 +00:00
Aaron Pham
12fced13f4 chore(community): update to OpenLLM 0.6 (#24609)
Update to OpenLLM 0.6, which makes use of OpenLLM's OpenAI-compatible
endpoint. OpenLLM now becomes a thin wrapper around the OpenAI
integration.

Signed-off-by: Aaron Pham <contact@aarnphm.xyz>

---------

Signed-off-by: Aaron Pham <contact@aarnphm.xyz>
Co-authored-by: ccurme <chester.curme@gmail.com>
2024-12-16 14:30:07 -05:00
Lvlvko
5c17a4ace9 community: support Hunyuan Embedding (#23160)
## Description

- I refactored `ChatHunyuan` to use the Tencent Cloud SDK, because the
original implementation didn't work in my application.
- I added `HunyuanEmbeddings` using the Tencent Cloud SDK.
- Both extend the corresponding LangChain base classes and have been
fully tested in my application.

## Dependencies
- tencentcloud-sdk-python

---------

Co-authored-by: centonhuang <centonhuang@tencent.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-16 19:27:19 +00:00
Harrison Chase
de7996c2ca core: add kwargs support to VectorStore (#25934)
has been missing the passthrough until now

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-16 18:57:57 +00:00
Henry Tu
87c50f99e5 docs: cerebras Update Llama 3.1 70B to Llama 3.3 70B (#28746)
This PR updates the docs for the Cerebras integration to use Llama 3.3
70b instead of Llama 3.1 70b.

cc: @efriis
2024-12-16 18:43:35 +00:00
Lorenzo
b79a1156ed community: correct return type of get_files_from_directory in github tool (#27885)
### About:
- **Description:** the _get_files_from_directory_ method returns a
string, but it's used in other methods that expect a List[str]
- **Issue:** None
- **Dependencies:** None

This pull request introduces a new method _list_files_ with the old
logic of _get_files_from_directory_, but returning a List[str]. The
behavior of _get_files_from_directory_ is unchanged.
2024-12-16 10:30:33 -08:00
Sheepsta300
580a8d53f9 community: Add configurable VisualFeatures to the AzureAiServicesImageAnalysisTool (#27444)
- [ ] **PR title**: community: Add configurable `VisualFeatures` to the
`AzureAiServicesImageAnalysisTool`


- [ ] **PR message**:  
- **Description:** The `AzureAiServicesImageAnalysisTool` is a good
service and utilises the Azure AI Vision package under the hood.
However, since the creation of this tool, new `VisualFeatures` have been
added to allow the user to request other image specific information to
be returned. Currently, the tool offers neither configuration of which
features should be return nor does it offer any newer feature types. The
aim of this PR is to address this and expose more of the Azure Service
in this integration.
- **Dependencies:** no new dependencies in the main class file,
azure.ai.vision.imageanalysis added to extra test dependencies file.


- [ ] **Add tests and docs**: Although no tests exist for
already-implemented Azure Service tools, I've created 3 unit tests for
this class covering initialisation and credentials, local file analysis,
and the new features option.


- [ ] **Lint and test**: All linting has passed.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-16 18:30:04 +00:00
Erick Friis
1c120e9615 core: xml output parser tags docstring (#28745) 2024-12-16 18:25:16 +00:00
Ana
ebab2ea81b Fix Azure National Cloud authentication using token (RBAC) (Generated by Ana - AI SDE) (#25843)
This pull request addresses the issue with authenticating Azure National
Cloud using token (RBAC) in the AzureSearch vectorstore implementation.

## Changes

- Modified the `_get_search_client` method in `azuresearch.py` to pass
`additional_search_client_options` to the `SearchIndexClient` instance.

## Implementation Details

The patch updates the `SearchIndexClient` initialization to include the
`additional_search_client_options` parameter:

```python
index_client: SearchIndexClient = SearchIndexClient(
    endpoint=endpoint,
    credential=credential,
    user_agent=user_agent,
    **additional_search_client_options
)
```

This change allows the `audience` parameter to be correctly passed when
using Azure National Cloud, fixing the authentication issues with
GovCloud & RBAC.

This patch was generated by [Ana - AI SDE](https://openana.ai/), an
AI-powered software development assistant.

This is a fix for [Issue
25823](https://github.com/langchain-ai/langchain/issues/25823)

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-12-16 18:22:24 +00:00
chenzimin
169d419581 community: Remove all other keys in ChatLiteLLM and add api_key (#28097)
- **PR title**: "community: Remove all other keys in ChatLiteLLM and add
api_key"


- **PR message**: Currently, no api_key is passed to LiteLLM, and
LiteLLM only takes one api_key parameter. Therefore I removed all the
current `*_api_key` attributes (they are not used), and added an
`api_key` that is passed through to LiteLLM.
  - Should fix issue #27826
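
A usage sketch under the new behavior (model name illustrative):

```python
from langchain_community.chat_models import ChatLiteLLM

# A single api_key is now forwarded to LiteLLM in place of the removed
# provider-specific *_api_key attributes.
llm = ChatLiteLLM(model="gpt-4o-mini", api_key="sk-...")
```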

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-16 17:54:29 +00:00
German Martin
d5d18c62b3 community: Apache AGE wrapper additional edge cases. (#28151)
Description: 
The current AGEGraph() implementation does some custom wrapping for
graph queries in _wrap_query(), which parses fields from the original
query to add some SQL context to it.
This PR improves the parsing logic to cover additional edge cases, which
are added to the test coverage: previously, any node property name or
value containing the literal "return" would break the graph / SQL query.
We discovered this while dealing with real-world datasets; it is not an
uncommon scenario and needs to be covered.
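
For illustration, a query of the kind that previously broke the
wrapping (connection settings hypothetical):

```python
from langchain_community.graphs import AGEGraph

graph = AGEGraph(
    graph_name="demo",
    conf={"host": "localhost", "port": 5432, "dbname": "demo",
          "user": "postgres", "password": "..."},  # hypothetical settings
)
# The property value contains the literal "return", which used to break
# the generated graph / SQL query.
graph.query('MERGE (m:Movie {title: "The Return of the King"}) RETURN m')
```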
2024-12-16 11:28:01 -05:00
Rock2z
768e4a7fd4 [community][fix] Compatibility support to bump up wikibase-rest-api-client version (#27316)
**Description:**

This PR addresses the `TypeError: sequence item 0: expected str
instance, FluentValue found` error when invoking `WikidataQueryRun`. The
root cause was an incompatible version of the
`wikibase-rest-api-client`, which caused the tool to fail when handling
`FluentValue` objects instead of strings.

The current implementation only supports `wikibase-rest-api-client<0.2`,
but the latest version is `0.2.1`, where the current implementation
breaks. Additionally, the error message advises users to install the
latest version: [code
reference](https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/wikidata.py#L125C25-L125C32).
Therefore, this PR updates the tool to support the latest version of
`wikibase-rest-api-client`.

Key changes:
- Updated the handling of `FluentValue` objects to ensure compatibility
with the latest `wikibase-rest-api-client`.
- Removed the restriction to `wikibase-rest-api-client<0.2` and updated
to support the latest version (`0.2.1`).

**Issue:**

Fixes [#24093](https://github.com/langchain-ai/langchain/issues/24093) –
`TypeError: sequence item 0: expected str instance, FluentValue found`.

**Dependencies:**

- Upgraded `wikibase-rest-api-client` to the latest version to resolve
the issue.

---------

Co-authored-by: peiwen_zhang <peiwen_zhang@email.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-16 16:22:18 +00:00
André Quintino
a26c786bc5 community: refactor opensearch query constructor to use wildcard instead of match in the contain comparator (#26653)
- **Description:** Changed the comparator to use a wildcard query
instead of match. This modification allows for partial text matching on
analyzed fields, which improves the flexibility of the search by
performing full-text searches that aren't limited to exact matches.
- **Issue:** The previous implementation used a match query, which
performs exact matches on analyzed fields. This approach limited the
search capabilities by requiring the query terms to align with the
indexed text. The modification to use a wildcard query instead addresses
this limitation. The wildcard query allows for partial text matching,
which means the search can return results even if only a portion of the
term matches the text. This makes the search more flexible and suitable
for use cases where exact matches aren't necessary or expected, enabling
broader full-text searches across analyzed fields.
In short, the problem was that match queries were too restrictive, and
the change to wildcard queries enhances the ability to perform partial
matches.
- **Dependencies:** none
- **Twitter handle:** @Andre_Q_Pereira

---------

Co-authored-by: André Quintino <andre.quintino@tui.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-16 11:16:34 -05:00
Davi Schumacher
0f9b4bf244 community[patch]: update dynamodb chat history to update instead of overwrite (#22397)
**Description:**
The current implementation of `DynamoDBChatMessageHistory` updates the
`History` attribute for a given chat history record by first extracting
the existing contents into memory, appending the new message, and then
using the `put_item` method to put the record back. This has the effect
of overwriting any additional attributes someone may want to include in
the record, like chat session metadata.

This PR suggests changing from using `put_item` to using `update_item`
instead which will keep any other attributes in the record untouched.
The change is backward compatible since
1. `update_item` is an "upsert" operation, creating the record if it
doesn't already exist, otherwise updating it
2. It only touches the db insert call and passes the exact same
information. The rest of the class is left untouched

**Dependencies:**
None

**Tests and docs:**
No unit tests currently exist for the `DynamoDBChatMessageHistory`
class. This PR adds the file
`libs/community/tests/unit_tests/chat_message_histories/test_dynamodb_chat_message_history.py`
to test the `add_message` and `clear` methods. I wanted to use the moto
library to mock DynamoDB calls but I could not get poetry to resolve it
so I mocked those calls myself in the test. Therefore, no test
dependencies were added.

The change was tested on a test DynamoDB table as well. The first three
images below show the current behavior. First a message is added to chat
history, then a value is inserted in the record in some other attribute,
and finally another message is added to the record, destroying the other
attribute.

![using_put_1_first_message](https://github.com/langchain-ai/langchain/assets/29493541/426acd62-fe29-42f4-b75f-863fb8b3fb21)

![using_put_2_add_attribute](https://github.com/langchain-ai/langchain/assets/29493541/f8a1c864-7114-4fe3-b487-d6f9252f8f92)

![using_put_3_second_message](https://github.com/langchain-ai/langchain/assets/29493541/8b691e08-755e-4877-8969-0e9769e5d28a)

The next three images show the new behavior. Once again a value is added
to an attribute other than the History attribute, but now when the
followup message is added it does not destroy that other attribute. The
History attribute itself is unaffected by this change.

![using_update_1_first_message](https://github.com/langchain-ai/langchain/assets/29493541/3e0d76ed-637e-41cd-82c7-01a86c468634)

![using_update_2_add_attribute](https://github.com/langchain-ai/langchain/assets/29493541/52585f9b-71a2-43f0-9dfc-9935aa59c729)

![using_update_3_second_message](https://github.com/langchain-ai/langchain/assets/29493541/f94c8147-2d6f-407a-9a0f-86b94341abff)

The doc located at `docs/docs/integrations/memory/aws_dynamodb.ipynb`
required no changes and was tested as well.
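
A hedged sketch of the new write path (table and key names
illustrative):

```python
import boto3

table = boto3.resource("dynamodb").Table("SessionTable")
messages = [{"type": "human", "content": "hi"}]

# update_item upserts only the History attribute, leaving any other
# attributes on the record (e.g. session metadata) untouched.
table.update_item(
    Key={"SessionId": "session-1"},
    UpdateExpression="SET #history = :messages",
    ExpressionAttributeNames={"#history": "History"},
    ExpressionAttributeValues={":messages": messages},
)
```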
2024-12-16 10:38:00 -05:00
Christophe Bornet
6ddd5dbb1e community: Add FewShotSQLTool (#28232)
The `FewShotSQLTool` gets some SQL query examples from a
`BaseExampleSelector` for a given question.
This is useful to provide [few-shot
examples](https://python.langchain.com/docs/how_to/sql_prompting/#few-shot-examples)
capability to an SQL agent.

Example usage:
```python
from langchain.agents.agent_toolkits.sql.prompt import SQL_PREFIX
from langchain_community.agent_toolkits import create_sql_agent
from langchain_community.vectorstores import AstraDB
from langchain_core.example_selectors import SemanticSimilarityExampleSelector
from langchain_openai import OpenAIEmbeddings

# FewShotSQLTool is the tool added by this PR; `examples`, `llm` and `db`
# are assumed to be defined elsewhere.
embeddings = OpenAIEmbeddings()

example_selector = SemanticSimilarityExampleSelector.from_examples(
    examples,
    embeddings,
    AstraDB,
    k=5,
    input_keys=["input"],
    collection_name="lc_few_shots",
    token=ASTRA_DB_APPLICATION_TOKEN,
    api_endpoint=ASTRA_DB_API_ENDPOINT,
)

few_shot_sql_tool = FewShotSQLTool(
    example_selector=example_selector,
    description="Input to this tool is the input question, output is a few SQL query examples related to the input question. Always use this tool before checking the query with sql_db_query_checker!"
)

agent = create_sql_agent(
    llm=llm, 
    db=db, 
    prefix=SQL_PREFIX + "\nYou MUST get some example queries before creating the query.", 
    extra_tools=[few_shot_sql_tool]
)

result = agent.invoke({"input": "How many artists are there?"})
```

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-16 15:37:21 +00:00
Mohammad Mohtashim
8d746086ab Added bind_tools support for ChatMLX along with small fix in _stream (#28743)
- **Description:** Added support for `bind_tools` as requested in the
issue. Two issues in `_stream` were also fixed:
    - Corrected the positional argument passing for `generate_step`.
    - Handled the case where the `token` returned by `generate_step` is
an integer.
- **Issue:** #28692
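
A usage sketch (model id hypothetical; requires the MLX extras on Apple
silicon):

```python
from langchain_community.chat_models.mlx import ChatMLX
from langchain_community.llms.mlx_pipeline import MLXPipeline

def get_weather(city: str) -> str:
    """Return the weather for a city."""
    return f"Sunny in {city}"

llm = MLXPipeline.from_model_id("mlx-community/quantized-gemma-2b-it")  # hypothetical id
chat_with_tools = ChatMLX(llm=llm).bind_tools([get_weather])
```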
2024-12-16 09:52:49 -05:00
Jorge Piedrahita Ortiz
558b65ea32 community: SambaStudio Tool Calling and Structured Output (#28025)
Description: Add tool calling and structured output support for
SambaStudio chat models, docs included

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-16 06:15:19 +00:00
Hristiyan Genchev
c87f24d85d docs: correct variable name from formatted_docs to docs_content (#28735)
- **Description:** Fixed incorrect variable name formatted_docs,
replacing it with docs_content to ensure correct functionality.
2024-12-15 22:09:58 -08:00
clairebehue
fb44e74ca4 community: fix AzureSearch Oauth with azure_ad_access_token (#26995)
**Description:** 
AzureSearch vector store: create a wrapper class on
`azure.core.credentials.TokenCredential` (which is not instantiable) to
fix OAuth usage with the `azure_ad_access_token` argument
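
A minimal sketch of such a wrapper, assuming a pre-fetched AAD token
(not the exact class from the PR):

```python
from azure.core.credentials import AccessToken, TokenCredential

class StaticTokenCredential(TokenCredential):
    """Serve a caller-supplied azure_ad_access_token as a TokenCredential."""

    def __init__(self, token: str, expires_on: int = 0):
        self._token = AccessToken(token, expires_on)

    def get_token(self, *scopes, **kwargs) -> AccessToken:
        return self._token
```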

**Issue:** [the issue it
fixes](https://github.com/langchain-ai/langchain/issues/26216)

 **Dependencies:** None

- [x] **Lint and test**

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-16 05:56:45 +00:00
SirSmokeAlot
29305cd948 community: O365Toolkit - send_event - fixed timezone error (#25876)
**Description**: Fixed the formatting of the start and end times
**Issue**: The old formatting always resulted in a timezone error
**Dependencies**: /
**Twitter handle**: /

---------

Co-authored-by: Yannick Opitz <yannick.opitz@gob.de>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-16 05:32:28 +00:00
Erick Friis
4f6ccb7080 text-splitters: extended-tests without socket (#28736) 2024-12-16 05:19:50 +00:00
Erick Friis
8ec1c72e03 text-splitters: test without socket (#28732) 2024-12-15 22:10:35 +00:00
Tibor Reiss
690aa02c31 docs[experimental]: Make docs clearer and add min_chunk_size (#26398)
Fixes #26171:

- added some clarification text for the keyword argument
`breakpoint_threshold_amount`
- added `min_chunk_size`: together with `breakpoint_threshold_amount`,
chunk sizes that are too small or too big can be avoided

Note: the langchain-experimental was moved to a separate repo, so only
the doc change stays here.
2024-12-15 13:43:48 -08:00
Aayush Kataria
d417e4b372 Community: Azure CosmosDB No Sql Vector Store: Full Text and Hybrid Search Support (#28716)
- Added [full
text](https://learn.microsoft.com/en-us/azure/cosmos-db/gen-ai/full-text-search)
and [hybrid
search](https://learn.microsoft.com/en-us/azure/cosmos-db/gen-ai/hybrid-search)
support for Azure CosmosDB NoSql Vector Store
- Added a new enum called CosmosDBQueryType which supports the following
values:
    - VECTOR = "vector"
    - FULL_TEXT_SEARCH = "full_text_search"
    - FULL_TEXT_RANK = "full_text_rank"
    - HYBRID = "hybrid"
- User now needs to provide this query_type to the similarity_search
method for the vectorStore to make the correct query api call.
- Added a couple of workarounds, since the FULL_TEXT_RANK and HYBRID
query functions don't support parameterized queries right now. TODOs are
in place, and these workarounds will be removed by the end of January.
- Added necessary test cases and updated the 


- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/


---------

Co-authored-by: Erick Friis <erickfriis@gmail.com>
2024-12-15 13:26:32 -08:00
Mohammad Mohtashim
4c1871d9a8 community: Passing the model_kwargs correctly while maintaining backward compatibility (#28439)
- **Description:** `model_kwargs` was not being passed correctly to
`sentence_transformers.SentenceTransformer`; this has been corrected
while maintaining backward compatibility
- **Issue:** #28436

---------

Co-authored-by: MoosaTae <sadhis.tae@gmail.com>
Co-authored-by: Sadit Wongprayon <101176694+MoosaTae@users.noreply.github.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-15 20:34:29 +00:00
nhols
a3851cb3bc community: FAISS vectorstore - consistent Document id field (#28728)
Make sure the id field of Documents in the `FAISS` docstore has the
same id as the values in `index_to_docstore_id`, and implement the
`get_by_ids` method.
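
A quick sketch of the now-consistent behavior (fake embeddings for
brevity):

```python
from langchain_community.vectorstores import FAISS
from langchain_core.documents import Document
from langchain_core.embeddings import FakeEmbeddings

vs = FAISS.from_documents(
    [Document(page_content="hello", id="doc-1")], FakeEmbeddings(size=8)
)
# The docstore entry now carries the same id as index_to_docstore_id,
# so lookups by id return the original document.
print(vs.get_by_ids(["doc-1"]))
```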
2024-12-15 12:23:49 -08:00
Bagatur
a0534ae62a community[patch]: Release 0.3.12 (#28725) 2024-12-14 22:13:20 +00:00
Bagatur
089e659e03 langchain[patch]: Release 0.3.12 (#28724) 2024-12-14 20:02:18 +00:00
Bagatur
679e3a9970 text-splitters[patch]: Release 0.3.3 (#28723) 2024-12-14 19:20:22 +00:00
ccurme
23b433f683 infra: fix notebook tests (#28722)
Bump unstructured to pick up resolution of
https://github.com/Unstructured-IO/unstructured/issues/3795
2024-12-14 15:13:19 +00:00
Erick Friis
387284c259 core: release 0.3.25 (#28718) 2024-12-14 02:22:28 +00:00
Nawaf Alharbi
decd77c515 community: fix an issue with deepinfra integration (#28715)
- [x] **PR title**: langchain: add URL parameter to ChatDeepInfra class

- [x] **PR message**: add URL parameter to ChatDeepInfra class
- **Description:** This PR introduces a url parameter to the
ChatDeepInfra class in LangChain, allowing users to specify a custom
URL. Previously, the URL for the DeepInfra API was hardcoded to
"https://stage.api.deepinfra.com/v1/openai/chat/completions", which
caused issues when the staging endpoint was not functional. The _url
method was updated to return the value from the url parameter, enabling
greater flexibility and addressing the problem.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-14 02:15:29 +00:00
Ben Chambers
008efada2c [community]: Render documents to graphviz (#24830)
- **Description:** Adds a helper that renders documents with the
GraphVectorStore metadata fields to Graphviz for visualization. This is
helpful for understanding and debugging.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-14 02:02:09 +00:00
Leonid Ganeline
fc8006121f docs: integrations W&B update (#28059)
Issue: There is an ambiguity about W&B integrations: two provider pages
already exist.
Fix: Added a "root" W&B provider page with references to the
documentation on the W&B site. Cleaned up formatting in the existing
pages and added one more integration reference.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-12-14 00:47:16 +00:00
Erick Friis
288f204758 docs, community: aerospike docs update (#28717)
Co-authored-by: Jesse Schumacher <jschumacher@aerospike.com>
Co-authored-by: Jesse S <jschmidt@aerospike.com>
Co-authored-by: dylan <dwelch@aerospike.com>
2024-12-14 00:27:37 +00:00
ronidas39
bd008baee0 docs: Update additional_resources/tutorials.mdx (#28005)
Added a complete LangChain tutorial playlist from the Total Technology
Zonne channel. Every video in the playlist focuses on one specific use
case with a hands-on demo, and the tutorials are equally good for all
levels.


---------

Co-authored-by: Erick Friis <erickfriis@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-14 00:21:30 +00:00
Vimpas
337fed80a5 community: 🐛 PDF Filter Type Error (#27154)
**PR title**: "community: fix PDF filter type error"

- **Description:** fix the PDF filter type error
- **Issue:** #27153
- **Dependencies:** none

- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-13 23:30:29 +00:00
Ryan Parker
12111cb922 community: fallback on core async atransform_documents method for MarkdownifyTransformer (#27866)
# Description
Implements the `atransform_documents` method for
`MarkdownifyTransformer` using the `asyncio` built-in library for
concurrency.

Note that this is mainly for API completeness when working with async
frameworks rather than for performance, since the `markdownify` function
is not I/O bound because it works with `Document` objects already in
memory.

# Issue
Fixes #27865

# Dependencies
No new dependencies added, but
[`markdownify`](https://github.com/matthewwithanm/python-markdownify) is
required since this PR updates the `markdownify` integration.

# Tests and docs
- Tests added
- I did not modify the docstrings since they already described the basic
functionality, and [the API docs also already included a
description](https://python.langchain.com/api_reference/community/document_transformers/langchain_community.document_transformers.markdownify.MarkdownifyTransformer.html#langchain_community.document_transformers.markdownify.MarkdownifyTransformer.atransform_documents).
If it would be helpful, I would be happy to update the docstrings and/or
the API docs.

# Lint and test
- [x] format
- [x] lint
- [x] test

I ran formatting with `make format`, linting with `make lint`, and
confirmed that tests pass using `make test`. Note that some unit tests
pass in CI but may fail when running `make test`. Those unit tests are:
- `test_extract_html` (and `test_extract_html_async`)
- `test_strip_tags` (and `test_strip_tags_async`)
- `test_convert_tags` (and `test_convert_tags_async`)

The reason for the difference is that there are trailing spaces when the
tests are run in the CI checks, and no trailing spaces when run with
`make test`. I ensured that the tests pass in CI, but they may fail with
`make test` due to the addition of trailing spaces.
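
A usage sketch of the new async path:

```python
import asyncio

from langchain_community.document_transformers import MarkdownifyTransformer
from langchain_core.documents import Document

docs = [Document(page_content="<h1>Title</h1><p>Body</p>")]
# atransform_documents mirrors the sync API for async frameworks.
result = asyncio.run(MarkdownifyTransformer().atransform_documents(docs))
print(result[0].page_content)
```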

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-13 22:32:22 +00:00
Manuel
af2e0a7ede partners: add 'model' alias for consistency in embedding classes (#28374)
**Description:** This PR introduces a `model` alias for the embedding
classes that contain the attribute `model_name`, to ensure consistency
across the codebase, as suggested by a moderator in a previous PR. The
change aligns the usage of attribute names across the project (see for
example
[here](65deeddd5d/libs/partners/groq/langchain_groq/chat_models.py (L304))).
**Issue:** This PR addresses the suggestion from the review of issue
#28269.
**Dependencies:**  None

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-13 22:30:00 +00:00
Erick Friis
3107d78517 huggingface: fix standard test lint (#28714) 2024-12-13 22:18:54 +00:00
Kaiwei Zhang
b909d54e70 chroma[patch]: Update logic for assigning ids 2024-12-13 21:58:34 +00:00
ccurme
9c55c75eb5 docs: dropdowns for embeddings and vector stores (#28713) 2024-12-13 16:48:02 -05:00
Karthik Bharadhwaj
498f0249e2 community[minor]: Opensearch hybridsearch implementation (#25375)
community: add hybrid search in opensearch

# Langchain OpenSearch Hybrid Search Implementation

## Implementation of Hybrid Search: 

I have taken LangChain's OpenSearch integration to the next level by
adding hybrid search capabilities. Building on the existing
OpenSearchVectorSearch class, I have implemented Hybrid Search
functionality (which combines the best of both keyword and semantic
search). This new functionality allows users to harness the power of
OpenSearch's advanced hybrid search features without leaving the
familiar LangChain ecosystem. By blending traditional text matching with
vector-based similarity, the enhanced class delivers more accurate and
contextually relevant results. It's designed to seamlessly fit into
existing LangChain workflows, making it easy for developers to upgrade
their search capabilities.

In implementing the hybrid search for OpenSearch within the LangChain
framework, I also incorporated filtering capabilities. It's important to
note that according to the OpenSearch hybrid search documentation, only
post-filtering is supported for hybrid queries. This means that the
filtering is applied after the hybrid search results are obtained,
rather than during the initial search process.

**Note:** For the implementation of hybrid search, I strictly followed
the official OpenSearch Hybrid search documentation and I took
inspiration from
https://github.com/AndreasThinks/langchain/tree/feature/opensearch_hybrid_search
Thanks Mate!  

### Experiments

I conducted few experiments to verify that the hybrid search
implementation is accurate and capable of reproducing the results of
both plain keyword search and vector search.

Experiment - 1
Hybrid Search
Keyword_weight: 1, vector_weight: 0

I conducted an experiment to verify the accuracy of my hybrid search
implementation by comparing it to a plain keyword search. For this test,
I set the keyword_weight to 1 and the vector_weight to 0 in the hybrid
search, effectively giving full weightage to the keyword component. The
results from this hybrid search configuration matched those of a plain
keyword search, confirming that my implementation can accurately
reproduce keyword-only search results when needed. It's important to
note that while the results were the same, the scores differed between
the two methods. This difference is expected because the plain keyword
search in OpenSearch uses the BM25 algorithm for scoring, whereas the
hybrid search still performs both keyword and vector searches before
normalizing the scores, even when the vector component is given zero
weight. This experiment validates that my hybrid search solution
correctly handles the keyword search component and properly applies the
weighting system, demonstrating its accuracy and flexibility in
emulating different search scenarios.


Experiment - 2
Hybrid Search
keyword_weight = 0.0, vector_weight = 1.0

For experiment-2, I took the inverse approach to further validate my
hybrid search implementation. I set the keyword_weight to 0 and the
vector_weight to 1, effectively giving full weightage to the vector
search component (KNN search). I then compared these results with a pure
vector search. The outcome was consistent with my expectations: the
results from the hybrid search with these settings exactly matched those
from a standalone vector search. This confirms that my implementation
accurately reproduces vector search results when configured to do so. As
with the first experiment, I observed that while the results were
identical, the scores differed between the two methods. This difference
in scoring is expected and can be attributed to the normalization
process in hybrid search, which still considers both components even
when one is given zero weight. This experiment further validates the
accuracy and flexibility of my hybrid search solution, demonstrating its
ability to effectively emulate pure vector search when needed while
maintaining the underlying hybrid search structure.



Experiment - 3
Hybrid Search - balanced

keyword_weight = 0.5, vector_weight = 0.5

For experiment-3, I adopted a balanced approach to further evaluate the
effectiveness of my hybrid search implementation. In this test, I set
both the keyword_weight and vector_weight to 0.5, giving equal
importance to keyword-based and vector-based search components. This
configuration aims to leverage the strengths of both search methods
simultaneously. By setting both weights to 0.5, I intended to create a
scenario where the hybrid search would consider lexical matches and
semantic similarity equally. This balanced approach is often ideal for
many real-world applications, as it can capture both exact keyword
matches and contextually relevant results that might not contain the
exact search terms.

Kindly verify the notebook for the experiments conducted!  

**Notebook:**
https://github.com/karthikbharadhwajKB/Langchain_OpenSearch_Hybrid_search/blob/main/Opensearch_Hybridsearch.ipynb

### Instructions to follow for Performing Hybrid Search:

**Step-1: Instantiating OpenSearchVectorSearch Class:**
```python
import os

from langchain_community.vectorstores import OpenSearchVectorSearch

# `embedding_model` is assumed to be defined elsewhere (see the parameter
# notes below).
opensearch_vectorstore = OpenSearchVectorSearch(
    index_name=os.getenv("INDEX_NAME"),
    embedding_function=embedding_model,
    opensearch_url=os.getenv("OPENSEARCH_URL"),
    http_auth=(os.getenv("OPENSEARCH_USERNAME"),os.getenv("OPENSEARCH_PASSWORD")),
    use_ssl=False,
    verify_certs=False,
    ssl_assert_hostname=False,
    ssl_show_warn=False
)
```

**Parameters:**
1. **index_name:** The name of the OpenSearch index to use.
2. **embedding_function:** The function or model used to generate
embeddings for the documents. It's assumed that embedding_model is
defined elsewhere in the code.
3. **opensearch_url:** The URL of the OpenSearch instance.
4. **http_auth:** A tuple containing the username and password for
authentication.
5. **use_ssl:** Set to False, indicating that the connection to
OpenSearch is not using SSL/TLS encryption.
6. **verify_certs:** Set to False, which means the SSL certificates are
not being verified. This is often used in development environments but
is not recommended for production.
7. **ssl_assert_hostname:** Set to False, disabling hostname
verification in SSL certificates.
8. **ssl_show_warn:** Set to False, suppressing SSL-related warnings.

**Step-2: Configure Search Pipeline:**

To initiate hybrid search functionality, you need to configure a search
pipeline first.

**Implementation Details:**

This method configures a search pipeline in OpenSearch that:
1. Normalizes the scores from both keyword and vector searches using the
min-max technique.
2. Applies the specified weights to the normalized scores.
3. Calculates the final score using an arithmetic mean of the weighted,
normalized scores.


**Parameters:**

* **pipeline_name (str):** A unique identifier for the search pipeline.
It's recommended to use a descriptive name that indicates the weights
used for keyword and vector searches.
* **keyword_weight (float):** The weight assigned to the keyword search
component. This should be a float value between 0 and 1. In this
example, 0.3 gives 30% importance to traditional text matching.
* **vector_weight (float):** The weight assigned to the vector search
component. This should be a float value between 0 and 1. In this
example, 0.7 gives 70% importance to semantic similarity.

```python
opensearch_vectorstore.configure_search_pipelines(
    pipeline_name="search_pipeline_keyword_0.3_vector_0.7",
    keyword_weight=0.3,
    vector_weight=0.7,
)
```

**Step-3: Performing Hybrid Search:**

After creating the search pipeline, you can perform a hybrid search
using the `similarity_search()` method (or) any methods that are
supported by `langchain`. This method combines both `keyword-based and
semantic similarity` searches on your OpenSearch index, leveraging the
strengths of both traditional information retrieval and vector embedding
techniques.

**Parameters:**
* **query:** The search query string.
* **k:** The number of top results to return (in this case, 3).
* **search_type:** Set to `hybrid_search` to use both keyword and vector
search capabilities.
* **search_pipeline:** The name of the previously created search
pipeline.

```python
query = "what are the country named in our database?"

top_k = 3

pipeline_name = "search_pipeline_keyword_0.3_vector_0.7"

matched_docs = opensearch_vectorstore.similarity_search_with_score(
    query=query,
    k=top_k,
    search_type="hybrid_search",
    search_pipeline=pipeline_name,
)

matched_docs
```

twitter handle: @iamkarthik98

---------

Co-authored-by: Karthik Kolluri <karthik.kolluri@eidosmedia.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-12-13 16:34:12 -05:00
Philippe PRADOS
f3fb5a9c68 community[minor]: Fix json._validate_metadata_func() (#22842)
The JSON loader, in _validate_metadata_func(), checks the consistency of
the metadata_func() function. To do this, it invokes the function and
makes sure it receives a dictionary in response. However, this
validation call does not match how the function is invoked later, as
shown on line 100. This generates errors if, for example, the function
is like this:
```python
from typing import Any, Dict

from langchain_community.document_loaders import JSONLoader

# `url` and `file_path` are assumed to be defined elsewhere.
def generate_metadata(json_node: Dict[str, Any], kwargs: Dict[str, Any]) -> Dict[str, Any]:
    return {
        "source": url,
        "row": kwargs["seq_num"],
        "question": json_node.get("question"),
    }

loader = JSONLoader(
    file_path=file_path,
    content_key="answer",
    jq_schema=".[]",
    metadata_func=generate_metadata,
    text_content=False,
)
```
To avoid this, the verification must comply with the specifications.
This patch does just that.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-12-13 21:24:20 +00:00
Keiichi Hirobe
67fd554512 core[patch]: throw exception indexing code if deletion fails in vectorstore (#28103)
The delete methods in the VectorStore and DocumentIndex interfaces
return a status indicating the result. Therefore, we can assume that
their implementations don't throw exceptions but instead return a result
indicating whether the delete operations have failed. The current
implementation doesn't check the returned value, so I modified it to
throw an exception when the operation fails.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-12-13 16:14:27 -05:00
Keiichi Hirobe
258b3be5ec core[minor]: add new clean up strategy "scoped_full" to indexing (#28505)
~Note that this PR is now Draft, so I didn't add change to `aindex`
function and didn't add test codes for my change.
After we have an agreement on the direction, I will add commits.~

`batch_size` is very difficult to decide because setting a large number
like >10000 will impact VectorDB and RecordManager, while setting a
small number will delete records unnecessarily, leading to redundant
work, as the `IMPORTANT` section says.
On the other hand, we can't use `full` because the loader returns just a
subset of the dataset in our use case.

I guess many people are in the same situation as us.

So, as one of the possible solutions for it, I would like to introduce a
new argument, `scoped_full_cleanup`.
This argument will be valid only when `cleanup` is "full". If True,
full cleanup deletes all documents that haven't been updated AND that
are associated with source ids that were seen during indexing. The
default is False.

This change keeps backward compatibility.
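
A hedged sketch of how the new strategy is used (record manager, stores,
and `docs_subset` illustrative; the released argument value is
`cleanup="scoped_full"` per the PR title):

```python
from langchain.indexes import SQLRecordManager, index

# `docs_subset` (a partial load of the corpus) and `vectorstore` are
# assumed to be defined elsewhere.
record_manager = SQLRecordManager("demo", db_url="sqlite:///records.sql")
record_manager.create_schema()

index(
    docs_subset,
    record_manager,
    vectorstore,
    cleanup="scoped_full",  # delete stale records only for sources seen in this run
    source_id_key="source",
)
```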

---------

Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-12-13 20:35:25 +00:00
ccurme
4802c31a53 docs: update intro page (#28639) 2024-12-13 15:24:14 -05:00
Eugene Yurtsev
ce90b25313 core[patch]: Update error message in indexing code for unreachable code assertion (#28712)
Minor update for error message that should never be triggered
2024-12-13 20:21:14 +00:00
Keiichi Hirobe
da28cf1f54 core[patch]: Reverts PR #25754 and add unit tests (#28702)
I reported the bug 2 weeks ago here:
https://github.com/langchain-ai/langchain/issues/28447

I believe this is a critical bug for the indexer, so I submitted a PR to
revert the change and added unit tests to prevent similar bugs from
being introduced in the future.

@eyurtsev Could you check this?
2024-12-13 15:13:06 -05:00
ScriptShi
b0a298894d community[minor]: Add TablestoreVectorStore (#25767)
- [x] **PR title**:  community: add TablestoreVectorStore



- [x] **PR message**: 
    - **Description:** add TablestoreVectorStore
    - **Dependencies:** none


- [x] **Add tests and docs**: If you're adding a new integration, please
include
  1. a test for the integration: yes
  2. an example notebook showing its use: yes


---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-12-13 11:17:28 -08:00
Erick Friis
86b3c6e81c community: make old stub for QuerySQLDataBaseTool private to skip api ref (#28711) 2024-12-13 10:43:23 -08:00
Martin Triska
05ebe1e66b Community: add modified_since argument to O365BaseLoader (#28708)
## What are we doing in this PR
We're adding an optional `modified_since` argument to `O365BaseLoader`.
When set, the O365 loader will only load documents newer than the
`modified_since` datetime.

## Why?
OneDrives / SharePoints can contain a large number of documents. The
current approach is to download and parse all files and let the indexer
deal with duplicates. This can be prohibitively time-consuming,
especially when using an OCR-based parser like
[zerox](fa06188834/libs/community/langchain_community/document_loaders/pdf.py (L948)).
This argument allows skipping documents that are older than the known
time of indexing.

_Q: What if a file was modified during the last indexing process?
A: Users can set `modified_since` conservatively and the indexer will
still take care of duplicates._
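
A minimal usage sketch, assuming the argument is inherited by concrete loaders such as `SharePointLoader` and accepts a `datetime` (the library id is a placeholder; O365 credentials come from the usual environment variables):

```python
from datetime import datetime, timezone

from langchain_community.document_loaders import SharePointLoader

loader = SharePointLoader(
    document_library_id="YOUR_LIBRARY_ID",
    modified_since=datetime(2024, 12, 1, tzinfo=timezone.utc),
)
docs = loader.load()  # only files modified after 2024-12-01 are loaded
```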





---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-13 17:30:17 +00:00
UV
c855d434c5 DOC: Fixed conflicting info on ChatOllama structured output support (#28701)
This PR resolves the conflicting information in the Chat models
documentation regarding structured output support for ChatOllama.

- The Featured Providers table has been updated to reflect the correct
status.
- Structured output support for ChatOllama was introduced on Dec 6,
2024.
- A note has been added to ensure users update to the latest Ollama
version for structured outputs.

**Issue:** Fixes #28691
2024-12-13 17:24:59 +00:00
Bagatur
fa06188834 community[patch]: fix QuerySQLDatabaseTool name (#28659)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-12 19:16:03 -08:00
Bagatur
94c22c3f48 rfc: dropdown for chat models (#28673) 2024-12-12 19:14:39 -08:00
Erick Friis
48ab91b520 docs: more useful vercel warnings (#28699) 2024-12-13 03:07:24 +00:00
Erick Friis
f60110107c docs: ganalytics in api ref (#28697) 2024-12-12 23:55:59 +00:00
Michael Chin
28cb2cefc6 docs: Fix stack diagram in community README (#28685)
- **Description:** The stack diagram illustration in the community
README fails to render due to an invalid branch reference. This PR
replaces the broken image link with a valid one referencing master
branch.
2024-12-12 13:33:50 -08:00
Botong Zhu
13c3c4a210 community: fixes json loader not getting texts with json standard (#27327)
This PR fixes `JSONLoader._get_text` not converting objects to JSON strings
correctly.
If an object is serializable and is not a dict, JSONLoader uses the Python
built-in `str()` to convert it to a string. This can produce strings that do
not follow the JSON standard. For example, a list is converted to a string
with single quotes, so if `json.loads` tries to load that string, it raises
an error, as illustrated below.
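
A small illustration of the failure mode:

```python
import json

value = ["a", "b"]
print(str(value))         # ['a', 'b']  <- single quotes, not valid JSON
print(json.dumps(value))  # ["a", "b"]  <- what the loader should emit
try:
    json.loads(str(value))
except json.JSONDecodeError as err:
    print(f"str() output is not parseable JSON: {err}")
```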

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-12 19:33:45 +00:00
Lorenzo
4149c0dd8d community: add method to create branch and list files for gitlab tool (#27883)
### About

- **Description:** In the GitLab utilities used for the GitLab tool
there are no methods to create branches or to list branches and files, as
is already done for GitHub
- **Issue:** None
- **Dependencies:** None

This pull request adds the following methods (a usage sketch follows the list):
- create_branch
- list_branches_in_repo
- set_active_branch
- list_files_in_main_branch
- list_files_in_bot_branch
- list_files_from_directory
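
A usage sketch of the new methods; argument names are assumptions, and credentials are read from environment variables such as `GITLAB_PERSONAL_ACCESS_TOKEN` and `GITLAB_REPOSITORY`:

```python
from langchain_community.utilities.gitlab import GitLabAPIWrapper

gitlab = GitLabAPIWrapper()

print(gitlab.create_branch("feature/docs-update"))  # branch off the base branch
print(gitlab.list_branches_in_repo())
gitlab.set_active_branch("feature/docs-update")
print(gitlab.list_files_in_main_branch())
```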

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-12 19:11:35 +00:00
Prathamesh Nimkar
ca054ed1b1 community: ChatSnowflakeCortex - Add streaming functionality (#27753)
Description:

`snowflake.py`
- Add `_stream` and `_stream_content` methods to enable streaming functionality
- Fix pydantic issues and add functionality in line with the overall langchain version upgrade
- Add a `bind_tools` method for agentic-workflow support through langgraph
- Update the `_generate` method to account for agentic-workflow support through langgraph
- Cosmetic changes to comments and if conditions

`snowflake.ipynb`
- Add a `_stream` example
- Cosmetic changes to comments
- Fix lint errors

`check_pydantic.sh`
- Decrease the counter from 126 to 125 as suggested when formatting
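
A minimal streaming sketch, assuming Snowflake credentials are configured via the usual environment variables (the model name is illustrative):

```python
from langchain_community.chat_models import ChatSnowflakeCortex

llm = ChatSnowflakeCortex(model="mistral-large")
for chunk in llm.stream("Summarize Snowflake Cortex in one sentence."):
    print(chunk.content, end="", flush=True)
```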

---------

Co-authored-by: Prathamesh Nimkar <prathamesh.nimkar@snowflake.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-11 18:35:40 -08:00
Wang, Yi
d834c6b618 huggingface: fix tool argument serialization in _convert_TGI_message_to_LC_message (#26075)
Currently `_convert_TGI_message_to_LC_message` replaces `'` in the tool
arguments, so an argument like "It's" will be converted to `It"s` and
could cause a json parser to fail.

---------

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Vadym Barda <vadym@langchain.dev>
2024-12-11 18:34:32 -08:00
Lakindu Boteju
5a31792bf1 community: Add support for cross-region inference profile IDs in Bedrock Anthropic Claude token cost calculation (#28167)
This change modifies the token cost calculation logic to support
cross-region inference profile IDs for Anthropic Claude models. Instead
of explicitly listing all regional variants of new inference profile IDs
in the cost dictionaries, the code now extracts a base model ID from the
input model ID (or inference profile ID), making it more maintainable
and automatically supporting new regional variants.

These inference profile IDs follow the format:
`<region>.<vendor>.<model-name>` (e.g.,
`us.anthropic.claude-3-haiku-xxx`, `eu.anthropic.claude-3-sonnet-xxx`).

Cross-region inference profiles are system-defined identifiers that
enable distributing model inference requests across multiple AWS
regions. They help manage unplanned traffic bursts and enhance
resilience during peak demands without additional routing costs.
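
A hypothetical sketch of the extraction idea (the actual helper in the callback may differ):

```python
_REGION_PREFIXES = {"us", "eu", "apac"}

def base_model_id(model_id: str) -> str:
    """Strip a cross-region prefix such as 'us.' or 'eu.' from a profile ID."""
    prefix, _, rest = model_id.partition(".")
    if prefix in _REGION_PREFIXES and rest:
        return rest  # 'us.anthropic.claude-3-haiku-xxx' -> 'anthropic.claude-3-haiku-xxx'
    return model_id

assert base_model_id("us.anthropic.claude-3-haiku-xxx") == "anthropic.claude-3-haiku-xxx"
assert base_model_id("anthropic.claude-3-haiku-xxx") == "anthropic.claude-3-haiku-xxx"
```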

References for Amazon Bedrock's cross-region inference profiles:-
-
https://docs.aws.amazon.com/bedrock/latest/userguide/cross-region-inference.html
-
https://docs.aws.amazon.com/bedrock/latest/userguide/inference-profiles-support.html

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-12 02:33:50 +00:00
fatmelon
d1e0ec7b55 community: VectorStores: Azure Cosmos DB Mongo vCore with DiskANN (#27329)
# Description
Add a new vector index type `diskann` to Azure Cosmos DB Mongo vCore
vector store. The DiskANN paper is [DiskANN: Fast Accurate
Billion-point Nearest Neighbor Search on a Single
Node](https://proceedings.neurips.cc/paper_files/paper/2019/file/09853c7fb1d3f8ee67a61b6bf4a7f8e6-Paper.pdf).

## Sample Usage
```python
from pymongo import MongoClient

# INDEX_NAME = "izzy-test-index-2"
# NAMESPACE = "izzy_test_db.izzy_test_collection"
# DB_NAME, COLLECTION_NAME = NAMESPACE.split(".")

client: MongoClient = MongoClient(CONNECTION_STRING)
collection = client[DB_NAME][COLLECTION_NAME]

model_deployment = os.getenv(
    "OPENAI_EMBEDDINGS_DEPLOYMENT", "smart-agent-embedding-ada"
)
model_name = os.getenv("OPENAI_EMBEDDINGS_MODEL_NAME", "text-embedding-ada-002")

vectorstore = AzureCosmosDBVectorSearch.from_documents(
    docs,
    openai_embeddings,
    collection=collection,
    index_name=INDEX_NAME,
)

# Read more about these variables in detail here. https://learn.microsoft.com/en-us/azure/cosmos-db/mongodb/vcore/vector-search
maxDegree = 40
dimensions = 1536
similarity_algorithm = CosmosDBSimilarityType.COS
kind = CosmosDBVectorSearchType.VECTOR_DISKANN
lBuild = 20

vectorstore.create_index(
    dimensions=dimensions,
    similarity=similarity_algorithm,
    kind=kind,
    max_degree=maxDegree,
    l_build=lBuild,
)
```

## Dependencies
No additional dependencies were added

---------

Co-authored-by: Yang Qiao (from Dev Box) <yangqiao@microsoft.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-12 01:54:04 +00:00
manukychen
ba9b95cd23 Community: Adding bulk_size as a setable param for OpenSearchVectorSearch (#28325)
Description:
When using `langchain.retrievers.parent_document_retriever` with an
`OpenSearchVectorSearch` vectorstore, the `bulk_size` param passed into the
`OpenSearchVectorSearch` class was not applied by
`ParentDocumentRetriever.add_documents()`: it was overwritten with the
default of 500 inside the class's own functions (e.g., `add_texts()`,
`add_embeddings()`, ...).

So I made this PR request to fix this, thanks!
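
A sketch of the intended behavior after the fix (connection details are placeholders, and the fake embedding stands in for a real one):

```python
from langchain_core.embeddings import DeterministicFakeEmbedding
from langchain_community.vectorstores import OpenSearchVectorSearch

store = OpenSearchVectorSearch(
    opensearch_url="http://localhost:9200",
    index_name="docs",
    embedding_function=DeterministicFakeEmbedding(size=32),
    bulk_size=2000,  # with this fix, no longer silently reset to the default 500
)
```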

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-12 01:45:22 +00:00
Erick Friis
0af5ad8262 docs: provider list from packages.yml (#28677) 2024-12-12 00:12:30 +00:00
Ayantunji Timilehin
a4713cab47 FIX: typos in docs (#28679)
- **Twitter handle:**@timi471
2024-12-11 16:06:04 -08:00
xintoteai
45f9c9ae88 langchain: fixed weaviate (v4) vectorstore import for self-query retriever (#28675)
Co-authored-by: Xin Heng <xin.heng@gmail.com>
2024-12-11 15:53:41 -08:00
Thomas van Dongen
ee640d6bd3 community: fixed bug in model2vec embedding code (#28670)
This PR fixes a bug with the current implementation for Model2Vec
embeddings where `embed_documents` does not work as expected.

- **Description**: the current implementation uses `encode_as_sequence`
for encoding documents. This is incorrect, as `encode_as_sequence`
creates token embeddings and not mean embeddings. The normal `encode`
function handles both single and batched inputs and should be used
instead. The return type was also incorrect, as encode returns a NumPy
array. This PR converts the embedding to a list so that the output is
consistent with the Embeddings ABC.
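
A small sketch of the corrected call against the underlying library (the model name is illustrative):

```python
from model2vec import StaticModel

model = StaticModel.from_pretrained("minishlab/potion-base-8M")
vectors = model.encode(["first document", "second document"])  # mean embeddings, one row per text
as_lists = vectors.tolist()  # list[list[float]], matching the Embeddings ABC
```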
2024-12-11 15:50:56 -08:00
Brian Sharon
b20230c800 community: use correct id_key when deleting by id in LanceDB wrapper (#28655)
- **Description:** The current version of the `delete` method assumes
that the id field will always be called `id`.
- **Issue:** n/a
- **Dependencies:** n/a
- **Twitter handle:** ugh, Twitter :D 


---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-11 23:49:35 +00:00
Huy Nguyen
8780f7a2ad Fix typo in doc for: Custom Functions & Pass Through Arguments pages (#28663)
- [x] Fix typo in Custom Output Parser doc
2024-12-11 15:47:14 -08:00
Mohammad Mohtashim
fa155a422f [Community]: requests_kwargs not being used in _fetch (#28646)
- **Description:** `requests_kwargs` is not being passed to `_fetch`
which is fetching pages asynchronously. In this PR, making sure that we
are passing `requests_kwargs` to `_fetch` just like `_scrape`.
- **Issue:** #28634

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-11 23:46:54 +00:00
bjoaquinc
8c37808d47 docs: added caution notes on Jina and LocalAI docs about openai sdk version compatibility (#28662)
- **Description:** I added notes on the Jina and LocalAI pages telling
users that they must use these integrations with openai sdk version
0.x, because if they don't they will get an error saying that "openai has
no attribute error". This PR was recommended by @efriis
    - **Issue:** warns people about the issue in #28529
    - **Dependencies:** None
    - **Twitter handle:** JoaqCore



2024-12-11 15:46:32 -08:00
Mohammad Mohtashim
a37afbe353 mistral[minor]: Added Retrying Mechanism in case of Request Rate Limit Error for MistralAIEmbeddings (#27818)
- **Description:** In the event of a rate-limit error from the
MistralAI server, parsing the response JSON raises a KeyError. To address this,
a simple retry mechanism has been implemented to handle cases where the
request limit is exceeded.
  - **Issue:** #27790
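
A hypothetical illustration of the retry pattern described; the actual implementation lives inside `MistralAIEmbeddings` and may differ:

```python
import time

def embed_with_retry(embed_fn, texts, max_retries=3, backoff=2.0):
    for attempt in range(max_retries):
        try:
            return embed_fn(texts)
        except KeyError:  # rate-limited response is missing the expected key
            if attempt == max_retries - 1:
                raise
            time.sleep(backoff * (attempt + 1))  # simple linear backoff
```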

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-12-11 17:53:42 -05:00
Vincent Zhang
df5008fe55 community[minor]: FAISS Filter Function Enhancement with Advanced Query Operators (#28207)
## Description
We are submitting as a team of four for a project. Other team members
are @RuofanChen03, @LikeWang10067, @TANYAL77.

This pull request expands the filtering capabilities of the FAISS
vectorstore by adding the MongoDB-style query operators listed
below, along with comprehensive tests for the added
functionality.
- $eq (equals)
- $neq (not equals)
- $gt (greater than)
- $lt (less than)
- $gte (greater than or equal)
- $lte (less than or equal)
- $in (membership in list)
- $nin (not in list)
- $and (all conditions must match)
- $or (any condition must match)
- $not (negation of condition)


## Issue
This closes https://github.com/langchain-ai/langchain/issues/26379.


## Sample Usage
```python
import faiss
import asyncio
from langchain_community.vectorstores import FAISS
from langchain.schema import Document
from langchain_huggingface import HuggingFaceEmbeddings

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")
documents = [
    Document(page_content="Process customer refund request", metadata={"schema_type": "financial", "handler_type": "refund",}),
    Document(page_content="Update customer shipping address", metadata={"schema_type": "customer", "handler_type": "update",}),
    Document(page_content="Process payment transaction", metadata={"schema_type": "financial", "handler_type": "payment",}),
    Document(page_content="Handle customer complaint", metadata={"schema_type": "customer","handler_type": "complaint",}),
    Document(page_content="Process invoice payment", metadata={"schema_type": "financial","handler_type": "payment",})
]

async def search(vectorstore, query, schema_type, handler_type, k=2):
    schema_filter = {"schema_type": {"$eq": schema_type}}
    handler_filter = {"handler_type": {"$eq": handler_type}}
    combined_filter = {
        "$and": [
            schema_filter,
            handler_filter,
        ]
    }
    base_retriever = vectorstore.as_retriever(
        search_kwargs={"k":k, "filter":combined_filter}
    )
    return await base_retriever.ainvoke(query)

async def main():
    vectorstore = FAISS.from_texts(
        texts=[doc.page_content for doc in documents],
        embedding=embeddings,
        metadatas=[doc.metadata for doc in documents]
    )
    
    def printt(title, documents):
        print(title)
        if not documents:
            print("\tNo documents found.")
            return
        for doc in documents:
            print(f"\t{doc.page_content}. {doc.metadata}")

    printt("Documents:", documents)
    printt('\nquery="process payment", schema_type="financial", handler_type="payment":', await search(vectorstore, query="process payment", schema_type="financial", handler_type="payment", k=2))
    printt('\nquery="customer update", schema_type="customer", handler_type="update":', await search(vectorstore, query="customer update", schema_type="customer", handler_type="update", k=2))
    printt('\nquery="refund process", schema_type="financial", handler_type="refund":', await search(vectorstore, query="refund process", schema_type="financial", handler_type="refund", k=2))
    printt('\nquery="refund process", schema_type="financial", handler_type="foobar":', await search(vectorstore, query="refund process", schema_type="financial", handler_type="foobar", k=2))
    print()

if __name__ == "__main__":
    asyncio.run(main())
```

## Output
```
Documents:
	Process customer refund request. {'schema_type': 'financial', 'handler_type': 'refund'}
	Update customer shipping address. {'schema_type': 'customer', 'handler_type': 'update'}
	Process payment transaction. {'schema_type': 'financial', 'handler_type': 'payment'}
	Handle customer complaint. {'schema_type': 'customer', 'handler_type': 'complaint'}
	Process invoice payment. {'schema_type': 'financial', 'handler_type': 'payment'}

query="process payment", schema_type="financial", handler_type="payment":
	Process payment transaction. {'schema_type': 'financial', 'handler_type': 'payment'}
	Process invoice payment. {'schema_type': 'financial', 'handler_type': 'payment'}

query="customer update", schema_type="customer", handler_type="update":
	Update customer shipping address. {'schema_type': 'customer', 'handler_type': 'update'}

query="refund process", schema_type="financial", handler_type="refund":
	Process customer refund request. {'schema_type': 'financial', 'handler_type': 'refund'}

query="refund process", schema_type="financial", handler_type="foobar":
	No documents found.

```

---------

Co-authored-by: ruofan chen <ruofan.is.awesome@gmail.com>
Co-authored-by: RickyCowboy <like.wang@mail.utoronto.ca>
Co-authored-by: Shanni Li <tanya.li@mail.utoronto.ca>
Co-authored-by: RuofanChen03 <114096642+ruofanchen03@users.noreply.github.com>
Co-authored-by: Like Wang <102838708+likewang10067@users.noreply.github.com>
2024-12-11 17:52:22 -05:00
Erick Friis
b9dd4f2985 docs: box to package table (#28676) 2024-12-11 13:01:00 -08:00
like
3048a9a26d community: tongyi multimodal response format fix to support langchain (#28645)
Description: The multimodal (Tongyi) response format `{"message": {"role":
"assistant", "content": [{"text": "图像"}]}}` is not compatible with
LangChain.
Dependencies: No

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-10 21:13:26 +00:00
Bagatur
d0e662e43b community[patch]: Release 0.3.11 (#28658) 2024-12-10 20:51:13 +00:00
Bagatur
91227ad7fd langchain[patch]: Release 0.3.11 (#28657) 2024-12-10 12:28:14 -08:00
Bagatur
1fbd86a155 core[patch]: Release 0.3.24 (#28656) 2024-12-10 20:19:21 +00:00
Bagatur
e6a62d8422 core,langchain,community[patch]: allow langsmith 0.2 (#28598) 2024-12-10 18:50:58 +00:00
ccurme
bc4dc7f4b1 ollama[patch]: permit streaming for tool calls (#28654)
Resolves https://github.com/langchain-ai/langchain/issues/28543

Ollama recently
[released](https://github.com/ollama/ollama/releases/tag/v0.4.6) support
for streaming tool calls. Previously we would override the `stream`
parameter if tools were passed in.

Covered in standard tests here:
c1d348e95d/libs/standard-tests/langchain_tests/integration_tests/chat_models.py (L893-L897)

Before, the test generates one message chunk:
```python
[
    AIMessageChunk(
        content='',
        additional_kwargs={},
        response_metadata={
            'model': 'llama3.1',
            'created_at': '2024-12-10T17:49:04.468487Z',
            'done': True,
            'done_reason': 'stop',
            'total_duration': 525471208,
            'load_duration': 19701000,
            'prompt_eval_count': 170,
            'prompt_eval_duration': 31000000,
            'eval_count': 17,
            'eval_duration': 473000000,
            'message': Message(
                role='assistant',
                content='',
                images=None,
                tool_calls=[
                    ToolCall(
                        function=Function(name='magic_function', arguments={'input': 3})
                    )
                ]
            )
        },
        id='run-552bbe0f-8fb2-4105-ada1-fa38c1db444d',
        tool_calls=[
            {
                'name': 'magic_function',
                'args': {'input': 3},
                'id': 'b0a4dc07-7d7a-487b-bd7b-ad062c2363a2',
                'type': 'tool_call',
            },
        ],
        usage_metadata={
            'input_tokens': 170, 'output_tokens': 17, 'total_tokens': 187
        },
        tool_call_chunks=[
            {
                'name': 'magic_function',
                'args': '{"input": 3}',
                'id': 'b0a4dc07-7d7a-487b-bd7b-ad062c2363a2',
                'index': None,
                'type': 'tool_call_chunk',
            }
        ]
    )
]
```

After, it generates two (tool call in one, response metadata in
another):
```python
[
    AIMessageChunk(
        content='',
        additional_kwargs={},
        response_metadata={},
        id='run-9a3f0860-baa1-4bae-9562-13a61702de70',
        tool_calls=[
            {
                'name': 'magic_function',
                'args': {'input': 3},
                'id': '5bbaee2d-c335-4709-8d67-0783c74bd2e0',
                'type': 'tool_call',
            },
        ],
        tool_call_chunks=[
            {
                'name': 'magic_function',
                'args': '{"input": 3}',
                'id': '5bbaee2d-c335-4709-8d67-0783c74bd2e0',
                'index': None,
                'type': 'tool_call_chunk',
            },
        ],
    ),
    AIMessageChunk(
        content='',
        additional_kwargs={},
        response_metadata={
            'model': 'llama3.1',
            'created_at': '2024-12-10T17:46:43.278436Z',
            'done': True,
            'done_reason': 'stop',
            'total_duration': 514282750,
            'load_duration': 16894458,
            'prompt_eval_count': 170,
            'prompt_eval_duration': 31000000,
            'eval_count': 17,
            'eval_duration': 464000000,
            'message': Message(
                role='assistant', content='', images=None, tool_calls=None
            ),
        },
        id='run-9a3f0860-baa1-4bae-9562-13a61702de70',
        usage_metadata={
            'input_tokens': 170, 'output_tokens': 17, 'total_tokens': 187
        }
    ),
]
```
2024-12-10 12:54:37 -05:00
Tomaz Bratanic
704059466a Fix graph example documentation (#28653) 2024-12-10 17:46:50 +00:00
Johannes Mohren
c1d348e95d doc-loader: retain Azure Doc Intelligence API metadata in Document parser (#28382)
**Description**:
This PR modifies the doc_intelligence.py parser in the community package
to include all metadata returned by the Azure Doc Intelligence API in
the Document object. Previously, only the parsed content (markdown) was
retained, while other important metadata such as bounding boxes (bboxes)
for images and tables was discarded. These image bboxes are crucial for
supporting use cases like multi-modal RAG workflows when using Azure Doc
Intelligence.

The change ensures that all information returned by the Azure Doc
Intelligence API is preserved by setting the metadata attribute of the
Document object to the entire result returned by the API, rather than an
empty dictionary. This extends the parser's utility for complex use
cases without breaking existing functionality.

**Issue**:
This change does not address a specific issue number, but it resolves a
critical limitation in supporting multimodal workflows when using the
LangChain wrapper for the Azure API.

**Dependencies**:
No additional dependencies are required for this change.

---------

Co-authored-by: jmohren <johannes.mohren@aol.de>
2024-12-10 11:22:58 -05:00
Alex Tonkonozhenko
0d20c314dd Confluence Loader: Fix CQL loading (#27620)
fix #12082

2024-12-10 11:05:23 -05:00
Katarina Supe
aba2711e7f community: update Memgraph integration (#27017)
**Description:**
- **Memgraph** no longer relies on `Neo4jGraphStore` but **implements
`GraphStore`**, just like other graph databases.
- **Memgraph** no longer relies on `GraphQAChain`, but implements
`MemgraphQAChain`, just like other graph databases.
- The refresh schema procedure has been updated to try using `SHOW
SCHEMA INFO`. The fallback uses Cypher queries (a combination of schema
and Cypher) → **LangChain integration no longer relies on MAGE
library**.
- The **schema structure** has been reformatted. Regardless of the
procedures used to get schema, schema structure is the same.
- The `add_graph_documents()` method has been implemented. It transforms
`GraphDocument` into Cypher queries and creates a graph in Memgraph. It
implements the ability to use `baseEntityLabel` to improve speed
(`baseEntityLabel` has an index on the `id` property). It also
implements the ability to include sources by creating a `MENTIONS`
relationship to the source document.
- Jupyter Notebook for Memgraph has been updated.
- **Issue:** /
- **Dependencies:** /
- **Twitter handle:** supe_katarina (DX Engineer @ Memgraph)

Closes #25606
2024-12-10 10:57:21 -05:00
ccurme
5c6e2cbcda ollama[patch]: support structured output (#28629)
- Bump minimum version of `ollama` to 0.4.4 (which also addresses
https://github.com/langchain-ai/langchain/issues/28607).
- Support recently-released [structured
output](https://ollama.com/blog/structured-outputs) feature. This can be
accessed by calling `.with_structured_output` with
`method="json_schema"` (choice of name
[mirrors](https://python.langchain.com/api_reference/openai/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html#langchain_openai.chat_models.base.ChatOpenAI.with_structured_output)
what we have for OpenAI's structured output feature).

`ChatOllama` previously implemented `.with_structured_output` via the
[base
implementation](ec9b41431e/libs/core/langchain_core/language_models/chat_models.py (L1117)).
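
A minimal sketch of the new path (requires `ollama>=0.4.4` and a recent Ollama server; the model name is illustrative):

```python
from pydantic import BaseModel
from langchain_ollama import ChatOllama

class Joke(BaseModel):
    setup: str
    punchline: str

llm = ChatOllama(model="llama3.1")
structured_llm = llm.with_structured_output(Joke, method="json_schema")
result = structured_llm.invoke("Tell me a joke about parrots.")  # returns a Joke
```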
2024-12-10 10:36:00 -05:00
Bagatur
24292c4a31 core[patch]: Release 0.3.23 (#28648) 2024-12-10 10:01:16 +00:00
Bagatur
e24f86e55f core[patch]: return ToolMessage from tool (#28605) 2024-12-10 09:59:38 +00:00
hsm207
d0e95971f5 langchain-weaviate: Remove outdated docs (#28058)

Docs on how to do hybrid search with Weaviate are covered
[here](https://python.langchain.com/docs/integrations/vectorstores/weaviate/)

@efriis

---------

Co-authored-by: pookam90 <pookam@microsoft.com>
Co-authored-by: Pooja Kamath <60406274+Pookam90@users.noreply.github.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-10 05:00:07 +00:00
Erick Friis
ef2f875dfb core: deprecate PipelinePromptTemplate (#28644) 2024-12-10 03:56:48 +00:00
TamagoTorisugi
0f0df2df60 fix: Set default search_type to 'similarity' in as_retriever method of AzureSearch (#28376)
**Description**
This PR updates the `as_retriever` method in the `AzureSearch` to ensure
that the `search_type` parameter defaults to 'similarity' when not
explicitly provided.

Previously, if the `search_type` was omitted, it did not default to any
specific value. So it was inherited from
`AzureSearchVectorStoreRetriever`, which defaults to 'hybrid'.

This change ensures that the intended default behavior aligns with the
expected usage.

**Issue**
No specific issue was found related to this change.

**Dependencies**
No new dependencies are introduced with this change.

---------

Co-authored-by: prrao87 <prrao87@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-10 03:40:04 +00:00
Prashanth Rao
8c6eec5f25 community: KuzuGraph needs allow_dangerous_requests, add graph documents via LLMGraphTransformer (#27949)
- [x] **PR title**: "community: Kuzu - Add graph documents via
LLMGraphTransformer"
- This PR adds a new method `add_graph_documents` to use the
`GraphDocument`s extracted by `LLMGraphTransformer` and store in a Kùzu
graph backend.
- This allows users to transform unstructured text into a graph that
uses Kùzu as the graph store.

- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

---------

Co-authored-by: pookam90 <pookam@microsoft.com>
Co-authored-by: Pooja Kamath <60406274+Pookam90@users.noreply.github.com>
Co-authored-by: hsm207 <hsm207@users.noreply.github.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-10 03:15:28 +00:00
Pooja Kamath
9b7d49f7da docs: Adding Docs for new SQLServer Vector store package (#28173)
**Description:** Adding Documentation for new SQL Server Vector Store
Package.

Changed files -
Added new Vector Store -
docs\docs\integrations\vectorstores\sqlserver.ipynb
 FeatureTable.Js - docs\src\theme\FeatureTables.js
Microsoft.mdx - docs\docs\integrations\providers\microsoft.mdx

Detailed documentation on API -
https://python.langchain.com/api_reference/sqlserver/index.html

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-10 03:00:10 +00:00
Erick Friis
5afeb8b46c infra: merge queue allowed (#28641) 2024-12-09 17:11:15 -08:00
Filip Ratajczak
4e743b5427 Core: google docstring parsing fix (#28404)
- [ ] **PR title**: "core: google docstring parsing fix"


- [x] **PR message**:
- **Description:** Added a solution for invalid parsing of Google-style
docstrings such as:

      Args:
          net_annual_income (float): The user's net annual income (in current year dollars).

- **Issue:** The previous code would return `arg = "net_annual_income (float)"`,
which caused an exception in `_validate_docstring_args_against_annotations`
    - **Dependencies:** None


Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-10 00:27:25 +00:00
Arnav Priyadarshi
b78b2f7a28 community[fix]: Update Perplexity to pass parameters into API calls (#28421)

- **Description:** I realized the invocation parameters were not being
passed into `_generate`, so I added them in, but then realized that the
parameters contained some old fields designed for an older openai client,
which I removed. Parameters work fine now.
- **Issue:** Fixes #28229
- **Dependencies:** No new dependencies.
- **Twitter handle:** @arch_plane

- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/


Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-10 00:23:31 +00:00
Erick Friis
34ca31e467 docs: integration contrib typo (#28642) 2024-12-09 23:46:31 +00:00
Clément Jumel
cf6d1c0ae7 docs: add Linkup integration documentation (#28366)
## Description

First of all, thanks for the great framework that is LangChain!

At [Linkup](https://www.linkup.so/) we're working on an API to connect
LLMs and agents to the internet and our partner sources. We'd be super
excited to see our API integrated in LangChain! This essentially
consists in adding a LangChain retriever and tool, which is done in our
own [package](https://pypi.org/project/langchain-linkup/). Here we're
simply following the [integration
documentation](https://python.langchain.com/docs/contributing/how_to/integrations/)
and update the documentation of LangChain to mention the Linkup
integration.

We do have tests (both unit & integration) in our [source
code](https://github.com/LinkupPlatform/langchain-linkup), and tried to
follow as closely as possible the [integration
documentation](https://python.langchain.com/docs/contributing/how_to/integrations/),
which specifically requests focusing on documentation changes for an
integration PR, so I'm not adding tests here, even though the PR
checklist seems to suggest so. Feel free to correct me if I got this
wrong!

By the way, we would be thrilled by being mentioned in the list of
providers which have standalone packages
[here](https://langchain-git-fork-linkupplatform-cj-doc-langchain.vercel.app/docs/integrations/providers/),
is there something in particular for us to do for that? 🙂

## Twitter handle

Linkup_platform
2024-12-09 14:36:25 -08:00
Amir Sadeghi
2c49f587aa community[fix]: could not locate runnable browser (#28289)
Set `open_browser` to false to resolve the "could not locate runnable browser"
error that occurs when the default browser is None.


Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-09 21:05:52 +00:00
Martin Triska
75bc6bb191 community: [bugfix] fix source path for office files in O365 (#28260)
# What problem are we fixing?

Currently documents loaded using `O365BaseLoader` fetch source from
`file.web_url` (where `file` is `<class 'O365.drive.File'>`). This works
well for `.pdf` documents. Unfortunately office documents (`.xlsx`,
`.docx` ...) pass their `web_url` in following format:

`https://sharepoint_address/sites/path/to/library/root/Doc.aspx?sourcedoc=%XXXXXXXX-1111-1111-XXXX-XXXXXXXXXX%7D&file=filename.xlsx&action=default&mobileredirect=true`

This obfuscates the path to the file. This PR utilizes the parent
folder's path and file name to reconstruct the actual location of the
file. Knowing the file's location can be crucial for some RAG
applications (the path to the file can carry information we don't want to
lose).

@vbarda Could you please look at this one? I'm @-mentioning you since
we've already closed some PRs together :-)

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-09 12:34:59 -08:00
Erick Friis
534b8f4364 standard-tests: release 0.3.7 (#28637) 2024-12-09 15:12:18 -05:00
Tomaz Bratanic
6815981578 Switch graphqa example in docs to langgraph (#28574)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-09 14:46:00 -05:00
Naka Masato
ce3b69aa05 community: add include_labels option to ConfluenceLoader (#28259)
## **Description:**

Enable `ConfluenceLoader` to include labels with the `include_labels` option
(`false` by default for backward compatibility). The labels are set in the
`Document`'s `metadata`, e.g. `{"labels": ["l1", "l2"]}`. A usage sketch follows.
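
A minimal sketch; the URL and space key are placeholders, and authentication is configured via the loader's usual arguments:

```python
from langchain_community.document_loaders import ConfluenceLoader

loader = ConfluenceLoader(
    url="https://yoursite.atlassian.net/wiki",
    space_key="SPACE",
    include_labels=True,  # default False for backward compatibility
)
docs = loader.load()
print(docs[0].metadata.get("labels"))  # e.g. ["l1", "l2"]
```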

## Notes

The Confluence API supports fetching labels by providing `metadata.labels` in the
`expand` query parameter.

All of the following functions support `expand` in the same way:
- confluence.get_page_by_id
- confluence.get_all_pages_by_label
- confluence.get_all_pages_from_space
- cql (internally using
[/api/content/search](https://developer.atlassian.com/cloud/confluence/rest/v1/api-group-content/#api-wiki-rest-api-content-search-get))

## **Issue:**

No issue related to this PR.

## **Dependencies:** 

No changes.

## **Twitter handle:** 

[@gymnstcs](https://x.com/gymnstcs)


- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.

- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-09 19:35:01 +00:00
Rajendra Kadam
242fee11be community[minor] Pebblo: Support for new Pinecone class PineconeVectorStore (#28253)
- **Description:** Support for new Pinecone class PineconeVectorStore in
PebbloRetrievalQA.
- **Issue:** NA
- **Dependencies:** NA
- **Tests:** -

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-09 19:33:54 +00:00
Pranav Ramesh Lohar
85114b4f3a docs: Update sql-query doc by fixing spelling mistake of chinhook.db to chinook.db (#28465)
Link (of doc with mistake):
https://python.langchain.com/v0.1/docs/use_cases/sql/quickstart/#:~:text=Now%2C-,Chinhook.db,-is%20in%20our

  - **Description:** spelling mistake in the how-to docs of sql-db
  - **Issue:** just a spelling mistake.
  - **Dependencies:** NA
2024-12-09 14:15:29 -05:00
nikitajoyn
9fcd203556 partners/mistralai: Fix KeyError in Vertex AI stream (#28624)
- **Description:** Streaming a response from a Mistral model using Vertex AI
raises a KeyError when trying to access the `choices` key, which the last chunk
doesn't have. The fix is to access the key safely using `get()`.
  - **Issue:** https://github.com/langchain-ai/langchain/issues/27886
  - **Dependencies:**
  - **Twitter handle:**
2024-12-09 14:14:58 -05:00
Huy Nguyen
bdb4cf7cc0 Fix typo in Custom Output Parser doc (#28617)
- [x] Fix typo in Custom Output Parser doc
2024-12-09 14:14:00 -05:00
ccurme
b476fdb54a docs: update readme (#28631) 2024-12-09 13:50:12 -05:00
maang-h
b64d846347 docs: Standardize MoonshotChat docstring (#28159)
- **Description:** Add docstring

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-09 18:46:25 +00:00
Erick Friis
4c70ffff01 standard-tests: sync/async vectorstore tests conditional (#28636)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-09 18:02:55 +00:00
ccurme
ffb5c1905a openai[patch]: release 0.2.12 (#28633) 2024-12-09 12:38:13 -05:00
ccurme
6e6061fe73 openai[patch]: bump minimum SDK version (#28632)
Resolves https://github.com/langchain-ai/langchain/issues/28625
2024-12-09 11:28:05 -05:00
Mohammad Mohtashim
ec9b41431e [Core]: Small Docstring Clarification for BaseTool (#28148)
- **Description:** `kwargs` were not being passed to `run` of the
`BaseTool`; this has been fixed
- **Issue:** #28114

---------

Co-authored-by: Stevan Kapicic <kapicic.ste1@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-09 06:10:19 +00:00
Erick Friis
cef21a0b49 cli: warning on app add (#28619)
instead of #28128
2024-12-09 06:07:14 +00:00
Ankit Dangi
90f162efb6 text-splitters: add pydocstyle linting (#28127)
As seen in #23188, turned on Google-style docstrings by enabling
`pydocstyle` linting in the `text-splitters` package. Each resulting
linting error was addressed differently: ignored, resolved, suppressed,
and missing docstrings were added.

Fixes one of the checklist items from #25154, similar to #25939 in
`core` package. Ran `make format`, `make lint` and `make test` from the
root of the package `text-splitters` to ensure no issues were found.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-09 06:01:03 +00:00
Erick Friis
b53f07bfb9 docs: more integration contrib (#28618) 2024-12-09 05:41:08 +00:00
WGNW_MG
eabe587787 community[patch]:Fix for get_openai_callback() return token_cost=0.0 when model is gpt-4o-11-20 (#28408)
- **Description:** update MODEL_COST_PER_1K_TOKENS for the new gpt-4o-11-20.
- **Issue:** with the latest gpt-4o-11-20, the openai callback returns
token_cost=0.0
- **Dependencies:** None (just a simple dict fix.)
- **Twitter handle:** I Don't Use Twitter. 
- (However..., I have a YouTube channel. Could you upload this there, by
any chance?
https://www.youtube.com/@%EA%B2%9C%EC%B0%BD%EB%B6%80%EA%B3%A0%EB%AC%B8AI%EC%9E%90%EB%AC%B8%EC%84%BC%EC%84%B8)
2024-12-08 20:46:50 -08:00
Fahim Zaman
481c4bfaba core[patch]: Fixed trim functions, and added corresponding unit test for the solved issue (#28429)
- **Description:** 
- Trim functions were incorrectly deleting nodes with more than 1
outgoing/incoming edge, so an extra condition was added to check for
this directly. A unit test "test_trim_multi_edge" was written to test
this test case specifically.
- **Issue:** 
  - Fixes #28411 
  - Fixes https://github.com/langchain-ai/langgraph/issues/1676
- **Dependencies:** 
  - No changes were made to the dependencies

- [x] Unit tests were added to verify the changes.
- [x] Updated documentation where necessary.
- [x] Ran make format, make lint, and make test to ensure compliance
with project standards.

---------

Co-authored-by: Tasif Hussain <tasif006@gmail.com>
2024-12-08 20:45:28 -08:00
Inah Jeon
54fba7e520 docs: change upstage solar model descriptions (#28419)
- **Description:** We have launched the new **Solar Pro** model, and
the documentation has been updated to include its details and features.

2024-12-08 20:43:19 -08:00
funkyrailroad
079c7ea0fc docs: Fix typo in weaviate integration docs (#28425)
- [ ] "docs: Fix typo in weaviate integration docs"
2024-12-08 20:42:00 -08:00
Zapiron
e8508fb4c6 docs: Fixed mini typo in recommend and improve the phrasing (#28438)
Fixed a typo on the word "recommend" and generally improved the phrasing
2024-12-08 20:30:43 -08:00
Zapiron
220b33df7f docs: Fixed broken link in the warning message to @tool API Reference… (#28437)
Fixed the broken hyperlink in the warning of docstring section to the
correct `@tool` API reference
2024-12-08 20:29:08 -08:00
Zapiron
1fc4ac32f0 docs: Resolve incorrect import of AttributeInfo for self-query retriever section (#28446)
Most of the imports in the self-query retriever section import
`AttributeInfo` from `query_constructor.base` instead of
`query_constructor.schema`, per the API reference
[here](https://python.langchain.com/api_reference/langchain/chains/langchain.chains.query_constructor.schema.AttributeInfo.html)

This PR resolves the wrong imports from most of the notebooks
2024-12-08 20:23:26 -08:00
Marco Perini
2354bb7bfa partners: 🕷️🦜 ScrapeGraph API Integration (#28559)
Hi Langchain team!

I'm the co-founder and maintainer at
[ScrapeGraphAI](https://scrapegraphai.com/).
By following the integration
[guide](https://python.langchain.com/docs/contributing/how_to/integrations/publish/)
on your site, I have created a new lib called
[langchain-scrapegraph](https://github.com/ScrapeGraphAI/langchain-scrapegraph).

With this PR I would like to integrate Scrapegraph as provider in
Langchain, adding the required documentation files.
Let me know if there are some changes to be made to be properly
integrated both in the lib and in the documentation.

Thank you 🕷️🦜


---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-09 02:38:21 +00:00
Abhinav
317a38b83e community[minor]: Add support for modle2vec embeddings (#28507)
This PR adds an embeddings integration for model2vec, the
`Model2vecEmbeddings` class.

- **Description**: [Model2Vec](https://github.com/MinishLab/model2vec)
lets you turn any sentence transformer into a really small static model
and makes running the model faster.
- **Issue**:
- **Dependencies**: model2vec
([pypi](https://pypi.org/project/model2vec/))
- **Twitter handle:**:

- [x] **Add tests and docs**: 
-
[Test](https://github.com/blacksmithop/langchain/blob/model2vec_embeddings/libs/community/langchain_community/embeddings/model2vec.py),
[docs](https://github.com/blacksmithop/langchain/blob/model2vec_embeddings/docs/docs/integrations/text_embedding/model2vec.ipynb)

- [x] **Lint and test**:

---------

Co-authored-by: Abhinav KM <abhinav.m@zerone-consulting.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-12-09 02:17:22 +00:00
Mateusz Szewczyk
fbf0704e48 docs: Update IBM documentation (#28503)
2024-12-08 12:40:29 -08:00
Mohammad Mohtashim
524ee6d9ac Invalid tool_choice being passed to ChatLiteLLM (#28198)
- **Description:** An invalid `tool_choice` is given to `ChatLiteLLM`'s
`bind_tools` due to its parent class's default value being passed through
`with_structured_output`.
- **Issue:** #28176
2024-12-07 14:33:40 -05:00
Erick Friis
dd0085a9ff docs: standard tests to markdown, load templates from files (#28603) 2024-12-07 01:37:21 +00:00
Erick Friis
9b848491c8 docs: tool, retriever contributing docs (#28602) 2024-12-07 00:36:55 +00:00
Erick Friis
5e8553c31a standard-tests: retriever docstrings (#28596) 2024-12-07 00:32:19 +00:00
ccurme
d801c6ffc7 tests[patch]: nits (#28601) 2024-12-07 00:13:04 +00:00
Ikko Eltociear Ashimine
a32035d17d docs: update uptrain.ipynb (#28561)
evluate -> evaluate
2024-12-06 19:09:48 -05:00
Erick Friis
07c2ac765a community: release 0.3.10 (#28600) 2024-12-07 00:07:13 +00:00
Erick Friis
4a7dc6ec4c standard-tests: release 0.3.6 (#28599) 2024-12-07 00:05:04 +00:00
ccurme
80a88f8f04 tests[patch]: update API ref for chat models (#28594) 2024-12-06 19:00:14 -05:00
Erick Friis
0eb7ab65f1 multiple: fix xfailed signatures (#28597) 2024-12-06 15:39:47 -08:00
Erick Friis
b7c2029e84 standard-tests: root docstrings (#28595) 2024-12-06 15:14:52 -08:00
Erick Friis
925ca75ca5 docs: format (#28593) 2024-12-06 15:08:25 -08:00
Erick Friis
f943205ebf docs: dont document root init (#28592) 2024-12-06 15:07:53 -08:00
Erick Friis
9e2abcd152 standard-tests: show right classes in api docs (#28591) 2024-12-06 14:48:13 -08:00
Erick Friis
246c10a1cc standard-tests: private members and tools unit troubleshoot (#28590) 2024-12-06 13:52:58 -08:00
Erick Friis
1cedf401a7 docs: enable private docstring submembers sphinx (#28589) 2024-12-06 13:36:34 -08:00
Erick Friis
791d7e965e docs: enable private docstring modules sphinx (#28588) 2024-12-06 13:23:06 -08:00
Erick Friis
4f99952129 docs: enable private docstring members sphinx (#28586) 2024-12-06 13:19:52 -08:00
Bagatur
221ab03fe4 docs: readme/intro nits (#28581) 2024-12-06 12:52:15 -08:00
Erick Friis
e6663b69f3 langchain: release 0.3.10 (#28585) 2024-12-06 20:20:24 +00:00
Erick Friis
c38b845d7e core: fix path test (#28584) 2024-12-06 20:05:18 +00:00
ccurme
2c6bc74cb1 multiple: combine sync/async vector store standard test suites (#28580)
Breaking change in `langchain-tests`.
2024-12-06 14:55:06 -05:00
Bagatur
dda9f90047 core[patch]: Release 0.3.22 (#28582) 2024-12-06 19:36:53 +00:00
ccurme
15cbc36a23 docs[patch]: update contributor docs for integrations (#28576)
- Reformat tabs
- Add code snippets inline
- Add embeddings content
2024-12-06 13:33:24 -05:00
ccurme
f3dc142d3c cli[patch]: implement minimal starter vector store (#28577)
Basically the same as core's in-memory vector store. Removed some
optional methods.
2024-12-06 13:10:22 -05:00
Erick Friis
5277a021c1 docs: raw loader codeblock (#28548)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-06 09:26:34 -08:00
Erick Friis
18386c16c7 core, tests: more tolerant _aget_relevant_documents function (#28462) 2024-12-06 00:49:30 +00:00
Erick Friis
bc636ccc60 cli: release 0.0.35 (#28557) 2024-12-05 16:40:52 -08:00
Erick Friis
7ecf38f4fa cli: create specific files from template (#28556) 2024-12-06 00:32:47 +00:00
Erick Friis
a197e0ba3d docs: custom deprecated coloring, organize css a bit (#28555) 2024-12-05 23:57:54 +00:00
Erick Friis
478def8dcc core: deprecation doc removal (#28553)
![ScreenShot 2024-12-05 at 02.33.43 PM@2x](https://github.com/user-attachments/assets/e1ce495b-90ca-41c7-9a65-b403a934675c)
2024-12-05 15:35:28 -08:00
cinqisap
482e8a7855 community: Add support for SAP HANA Vector hnsw index creation (#27884)
**Issue:** Added support for creating indexes in the SAP HANA Vector
engine.
 
**Changes**: 
1. Introduced a new function `create_hnsw_index` in `hanavector.py` that
enables the creation of indexes for SAP HANA Vector.
2. Added integration tests for the index creation function to ensure
functionality.
3. Updated the documentation to reflect the new index creation feature,
including examples and output from the notebook.
4. Fix the operator issue in the `_process_filter_object` function and
change the array argument to a placeholder in the similarity-search SQL
statement.
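
A minimal sketch of the new capability, assuming default index parameters (connection details are placeholders, and the fake embedding stands in for a real one):

```python
from hdbcli import dbapi
from langchain_community.vectorstores.hanavector import HanaDB
from langchain_core.embeddings import DeterministicFakeEmbedding

connection = dbapi.connect(address="HOST", port=443, user="USER", password="PASSWORD")
db = HanaDB(connection=connection, embedding=DeterministicFakeEmbedding(size=32),
            table_name="LANGCHAIN_DEMO")
db.create_hnsw_index()  # build an HNSW index over the stored vectors
```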

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-05 23:29:08 +00:00
blaufink
28f8d436f6 mistral: fix of issue #26029 (#28233)
- Description: Azure AI takes issue with the `safe_mode` parameter
being set to False instead of None. Therefore, this PR changes the
default value of `safe_mode` from False to None. This results in it being
filtered out before the request is sent, avoiding the extra-parameter
issue described in the linked issue.

- Issue: #26029

- Dependencies: /

---------

Co-authored-by: blaufink <sebastian.brueckner@outlook.de>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-05 23:28:12 +00:00
Erick Friis
7a96ce1320 docs: deprecated styling (#28550) 2024-12-05 14:05:25 -08:00
ccurme
5519a1c1d3 docs[patch]: improve integration docs (#28547)
Alternative to https://github.com/langchain-ai/langchain/pull/28426

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-05 16:32:09 -05:00
Erick Friis
66f819c59e infra: run cli tests on test changes (#28542) 2024-12-05 13:09:25 -08:00
Erick Friis
3d5493593b docs: deprecated styling (#28546)
from

![image](https://github.com/user-attachments/assets/52565861-618b-407c-8bb4-f16dce3b5718)

to

![image](https://github.com/user-attachments/assets/fafeef00-6101-42cd-a8c6-ca15808d1e7c)
2024-12-05 12:41:41 -08:00
ccurme
ecdfc98ef6 tests[patch]: run standard tests for embeddings and populate embeddings API ref (#28545)
plus minor updates to chat models and vector store API refs
2024-12-05 19:39:03 +00:00
dwelch-spike
1581857e3d docs: add Aerospike to providers list (#28066)
**Description:** Adds Aerospike to the list of langchain providers and
points users to documentation for the vector store and Python SDK.


---------

Co-authored-by: Erick Friis <erickfriis@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-05 17:29:00 +00:00
WEIQ-beepbeep
1e285cb5f3 docs: Updated incorrected type used for the multiply_by_max function (#28042)
In the `multiply_by_max()` tool, `a` is a scale factor but it is
annotated with a string type.
2024-12-05 09:15:59 -08:00
ccurme
b8e861a63b openai[patch]: add standard tests for embeddings (#28540) 2024-12-05 17:00:27 +00:00
ZhangShenao
d26555c682 [VectorStore] Improvement: Improve chroma vector store (#28524)
- Complete unit test
- Fix spelling error
2024-12-05 11:58:32 -05:00
John
7d44316d92 docs: Add link for how to install extras (#28537)
**Description:** Add link for how to install extras
2024-12-05 11:57:51 -05:00
ccurme
8f9b3b7498 chroma[patch]: fix bug (#28538)
Fix bug introduced in
https://github.com/langchain-ai/langchain/pull/27995

If all document IDs are `""`, the chroma SDK will raise
```
DuplicateIDError: Expected IDs to be unique
```

Caught by [docs
tests](https://github.com/langchain-ai/langchain/actions/runs/12180395579/job/33974633950),
but added a test to langchain-chroma as well.
2024-12-05 15:37:19 +00:00
Erick Friis
ecff9a01e4 cli: release 0.0.34 (#28525) 2024-12-05 15:35:49 +00:00
ccurme
d9e42a1517 langchain[patch]: fix deprecation warning (#28535) 2024-12-05 14:49:10 +00:00
Erick Friis
0f539f0246 standard-tests: release 0.3.5 (#28526) 2024-12-05 00:41:07 -08:00
Erick Friis
43c35d19d4 cli: standard tests in cli, test that they run, skip vectorstore tests (#28521) 2024-12-05 00:38:32 -08:00
Erick Friis
c5acedddc2 anthropic: timeout in tests (10s) (#28488) 2024-12-04 16:03:38 -08:00
ccurme
f459754470 tests[patch]: populate API reference for vector stores (#28520) 2024-12-05 00:02:31 +00:00
Erick Friis
2b360d6a2f infra: scheduled test fix (#28519) 2024-12-04 15:20:56 -08:00
ccurme
8bc2c912b8 chroma[patch]: (nit) simplify test (#28517)
Use `self.get_embeddings` on test class instead of importing embeddings
separately.
2024-12-04 20:22:55 +00:00
Tomaz Bratanic
a0130148bc switch graph semantic layer docs to langgraph (#28513)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-04 14:30:12 -05:00
ccurme
eec55c2550 chroma[patch]: add get_by_ids and fix bug (#28516)
- Run standard integration tests in Chroma
- Add `get_by_ids` method
- Fix bug in `add_texts`: if a list of `ids` is passed but any of them
are None, Chroma will raise an exception. Here we assign a uuid.
2024-12-04 14:00:36 -05:00
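The None-ID fix boils down to substituting a fresh UUID wherever an ID is missing before handing the batch to the Chroma SDK; a hedged sketch of the pattern (not the repo's actual code):

```python
import uuid
from typing import Optional, Sequence

def fill_missing_ids(ids: Optional[Sequence[Optional[str]]], n: int) -> list[str]:
    """Replace None entries (or a missing list) with fresh UUID strings."""
    if ids is None:
        return [str(uuid.uuid4()) for _ in range(n)]
    return [id_ if id_ is not None else str(uuid.uuid4()) for id_ in ids]
```

The new `get_by_ids` method then makes the assigned IDs useful for round-trips, e.g. `store.get_by_ids(["id-1"])`.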
Erick Friis
12d74d5bef docs: single security doc (#28515) 2024-12-04 18:15:34 +00:00
Erick Friis
e6a08355a3 docs: more api ref links, add linting step to prevent more (#28495) 2024-12-04 04:19:42 +00:00
wlleiiwang
6151ea78d5 community: implement _select_relevance_score_fn for tencent vectordb (#28036)
Implement `_select_relevance_score_fn` for Tencent VectorDB and fix the use
of external embeddings for Tencent VectorDB.

Co-authored-by: wlleiiwang <wlleiiwang@tencent.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-04 03:03:00 +00:00
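`_select_relevance_score_fn` is the `VectorStore` hook that maps a raw distance to a relevance score in [0, 1]; a sketch of what such an implementation typically looks like (the class, attribute name, and metric handling are assumptions, not the Tencent code):

```python
from typing import Callable

class TencentVectorDBSketch:
    metric: str = "COSINE"

    def _select_relevance_score_fn(self) -> Callable[[float], float]:
        # Cosine distance lies in [0, 2]; map it onto [0, 1] relevance.
        # Other metrics (L2, inner product) would need their own mapping.
        if self.metric == "COSINE":
            return lambda distance: 1.0 - distance / 2.0
        raise ValueError(f"No relevance score function for metric {self.metric!r}")
```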
Asi Greenholts
d34bf78f3b community: BM25Retriever preservation of document id (#27019)
Currently this retriever discards document ids

---------

Co-authored-by: asi-cider <88270351+asi-cider@users.noreply.github.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-04 00:36:00 +00:00
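With the fix, document IDs survive the round-trip through the retriever; a small usage sketch (assumes `langchain-community` plus the `rank_bm25` dependency):

```python
from langchain_community.retrievers import BM25Retriever
from langchain_core.documents import Document

docs = [
    Document(page_content="foo bar", id="doc-1"),
    Document(page_content="bar baz", id="doc-2"),
]
retriever = BM25Retriever.from_documents(docs)

# Previously the returned documents came back with id=None; the original
# ids are now preserved.
print([d.id for d in retriever.invoke("foo")])
```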
Erick Friis
a009249369 infra: release rely on local built in testing (#28492) 2024-12-03 16:35:38 -08:00
peterdhp
bc5ec63d67 community : allow using apikey for PubMedAPIWrapper (#27246)
**Description**: 

> Without an API key, any site (IP address) posting more than 3 requests
per second to the E-utilities will receive an error message. By
including an API key, a site can post up to 10 requests per second by
default.

quoted from *A General Introduction to the E-utilities*, NCBI:
https://www.ncbi.nlm.nih.gov/books/NBK25497/

I have simply added an `api_key` parameter to `PubMedAPIWrapper` that can
be used to increase the number of requests per second from 3 to 10.

**Twitter handle** : @KORmaori

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-03 16:21:22 -08:00
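A hedged usage sketch of the new parameter (the parameter name comes from the PR description; the key value is a placeholder):

```python
from langchain_community.utilities.pubmed import PubMedAPIWrapper

# With an NCBI API key, the E-utilities limit rises from 3 to 10
# requests per second, per the quote above.
pubmed = PubMedAPIWrapper(api_key="YOUR_NCBI_API_KEY")
print(pubmed.run("large language models"))
```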
Eric Pinzur
eff8a54756 langchain_chroma: added document.id support (#27995)
Description:
* Added internal `Document.id` support to Chroma VectorStore

Dependencies:
* https://github.com/langchain-ai/langchain/pull/27968 should be merged
first and this PR should be re-based on top of those changes.

Tests:
* Modified/Added tests for `Document.id` support. All tests are passing.


Note: I am not a member of the Chroma team.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-04 00:04:27 +00:00
William Smith
15e7353168 langchain_community: updated query constructor for Databricks Vector Search due to LangChainDeprecationWarning: filters was deprecated since langchain-community 0.2.11 and will be removed in 0.3. Please use filter instead. (#27974)
- **Description:** Updated the kwargs for the structured query from
`filters` to `filter` due to the deprecation of `filters` for Databricks
Vector Search. Also changed the error messages, as the allowed operators
and comparators are different, which can cause issues with functions such
as `get_query_constructor_prompt()`.

- **Issue:** Fixes the `KeyError` for `filters`, which was deprecated in
favor of `filter`:

LangChainDeprecationWarning: DatabricksVectorSearch received a key
`filters` in search_kwargs. `filters` was deprecated since
langchain-community 0.2.11 and will be removed in 0.3. Please use
`filter` instead.

- **Dependencies:** N/A
- **Twitter handle:** N/A

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-03 16:03:53 -08:00
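The rename is a one-key change in `search_kwargs`; a before/after sketch (assumes `vector_store` is an existing `DatabricksVectorSearch` instance):

```python
# Before: emits LangChainDeprecationWarning and, after removal, a KeyError.
retriever = vector_store.as_retriever(
    search_kwargs={"filters": {"source": "docs"}}
)

# After: the key the query constructor now emits.
retriever = vector_store.as_retriever(
    search_kwargs={"filter": {"source": "docs"}}
)
```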
miri-bar
6e607bb237 docs: langchain-ai21 update ai21 docs (#28076)
Update docs to match latest langchain-ai21 release.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-03 16:01:36 -08:00
Jan Heimes
ef365543cb community: add Needle retriever and document loader integration (#28157)
- [x] **PR title**: "community: add Needle retriever and document loader
integration"

- [x] **PR message**:
- **Description:** This PR adds a new integration for Needle, which
includes:
- **NeedleRetriever**: A retriever for fetching documents from Needle
collections.
- **NeedleLoader**: A document loader for managing and loading documents
into Needle collections.
      - Example notebooks demonstrating usage have been added in:
        - `docs/docs/integrations/retrievers/needle.ipynb`
        - `docs/docs/integrations/document_loaders/needle.ipynb`.
- **Dependencies:** The `needle-python` package is required as an
external dependency for accessing Needle's API. It has been added to the
extended testing dependencies list.
- **Twitter handle:** Feel free to mention me if this PR gets announced:
[needlexai](https://x.com/NeedlexAI).

- [x] **Add tests and docs**:
1. Unit tests have been added for both `NeedleRetriever` and
`NeedleLoader` in `libs/community/tests/unit_tests`. These tests mock
API calls to avoid relying on network access.
2. Example notebooks have been added to `docs/docs/integrations/`,
showcasing both retriever and loader functionality.

- [x] **Lint and test**:
  - `make format`: Passed
  - `make lint`: Passed
- `make test`: Passed (requires `needle-python` to be installed locally;
this package is not added to LangChain dependencies).

Additional guidelines:
- [x] Optional dependencies are imported only within functions.
- [x] No dependencies have been added to pyproject.toml files except for
those required for unit tests.
- [x] The PR does not touch more than one package.
- [x] Changes are fully backwards compatible.
- [x] Community additions are not re-imported into LangChain core.


---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-03 22:06:25 +00:00
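A usage sketch of the two new classes (module paths and constructor parameters below are assumptions inferred from the PR description, not verified against `needle-python` or the final import locations):

```python
from langchain_community.document_loaders.needle import NeedleLoader
from langchain_community.retrievers.needle import NeedleRetriever

# Hypothetical parameters: an API key and a target collection.
loader = NeedleLoader(needle_api_key="...", collection_id="...")
retriever = NeedleRetriever(needle_api_key="...", collection_id="...")

docs = retriever.invoke("What does Needle store?")
```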
prakashshan50
b0a83071df Update graph_constructing.ipynb (#28489)
Used the `llm_transformer_tuple` instance to create graph documents and
assigned the result to the `graph_documents_filtered` variable.

2024-12-03 16:16:55 -05:00
ccurme
ab831ce05c tests[patch]: populate API reference for chat models (#28487)
Populate API reference for test class properties and test methods for
chat models.

Also:
- Make `standard_chat_model_params` private.
- `pytest.skip` some tests that were previously passed if features are
not supported.
2024-12-03 15:24:54 -05:00
Erick Friis
50ddf13692 infra: configurable scheduled tests (#28486) 2024-12-03 12:06:29 -08:00
Erick Friis
a220ee56cd infra: add 20min timeout to ci steps (#28483) 2024-12-03 10:35:57 -08:00
Erick Friis
c74f34cb41 pinecone: release 0.2.1 (version sequence) (#28485) 2024-12-03 10:22:16 -08:00
Audrey Sage Lorberfeld
926e452f44 partners: update version header for Pinecone integration (#28481)
Just need to update the version header used with Pinecone in a
recently merged method (from [this
PR](https://github.com/langchain-ai/langchain/pull/28320/files#r1867820929)).

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-03 18:08:56 +00:00
Erick Friis
7315360907 openai: dont populate logit_bias if None (#28482) 2024-12-03 17:54:53 +00:00
Erick Friis
ff675c11f6 partners/pinecone: release 0.2.2 (#28466) 2024-12-03 06:49:35 +00:00
Audrey Sage Lorberfeld
6b7e93d4c7 pinecone: update pinecone client (#28320)
This PR updates the Pinecone client to `5.4.0`, as well as its
dependencies (`pinecone-plugin-inference` and
`pinecone-plugin-interface`).

Note: `pinecone-client` is now simply called `pinecone`.

**Question for reviewer(s):** should this PR also update the `pinecone`
dep in [the root dir's `poetry.lock`
file](https://github.com/langchain-ai/langchain/blob/master/poetry.lock#L6729)?
I was unsure. (I don't believe so, because it seems pinned to a lower
version, likely based on third-party deps such as Unstructured.)

--
TW: @audrey_sage_


---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
  - https://app.asana.com/0/0/1208693659122374
2024-12-02 22:47:09 -08:00
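The practical upshot of the rename for users (the index name and key are placeholders):

```python
# The PyPI distribution is now `pinecone`, not `pinecone-client`:
#   pip install "pinecone>=5.4.0"
# The import path and client class are unchanged in 5.x.
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("my-index")
```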
Erick Friis
000be1f32c tests: init retriever standard tests (#28459) 2024-12-02 23:36:09 +00:00
Erick Friis
42d40d694b partners/openai: release 0.2.11 (#28461) 2024-12-02 23:35:18 +00:00
Erick Friis
9f04416768 openai: set logit_bias to none instead of empty dict by default (#28460) 2024-12-02 15:30:32 -08:00
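The change amounts to omitting the key instead of sending an empty mapping; a minimal sketch of the pattern (not the repo's actual code):

```python
from typing import Optional

def build_request_params(logit_bias: Optional[dict] = None) -> dict:
    params = {"model": "gpt-4o-mini"}  # hypothetical model name
    # Only include logit_bias when it is actually set; with a None
    # default, an empty dict is never sent to the API.
    if logit_bias:
        params["logit_bias"] = logit_bias
    return params
```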
William FH
ecee41ab72 fix: Handle response metadata in merge_messages_runs (#28453) 2024-12-02 13:56:23 -08:00
lucasiscovici
60021e54b5 community: Add the additional kwarg 'context' for openai (#28351)
- **Description:**
Add the additional kwarg 'context' for openai into
`convert_dict_to_message` and `convert_message_to_dict` functions.
2024-12-02 16:43:30 -05:00
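The extra field rides along in `additional_kwargs`, which the two conversion functions now round-trip; a sketch (the payload shape is a hypothetical example):

```python
from langchain_core.messages import AIMessage

msg = AIMessage(
    content="answer",
    additional_kwargs={"context": {"citations": []}},  # hypothetical payload
)
# convert_message_to_dict(msg) would now carry "context" through,
# and convert_dict_to_message would restore it.
```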
ccurme
28487597b2 ollama[patch]: release 0.2.1 (#28458)
We inadvertently skipped 0.2.1, so the release pipeline
[failed](https://github.com/langchain-ai/langchain/actions/runs/12126964367/job/33810204551).
2024-12-02 21:17:51 +00:00
ccurme
88d6d02b59 ollama[patch]: release 0.2.2 (#28456) 2024-12-02 14:57:30 -05:00
Ülgen Sarıkavak
c953f93c54 infra: Update Poetry version, to current latest (1.8.4) (#28194)
Update all Poetry versions to the current latest, 1.8.4.

I was checking how lock files are managed and found that, even though the
files are generated and updated with the current latest version of Poetry,
the version used in CI and the Dockerfile was outdated.

*
https://github.com/langchain-ai/langchain/pull/28061/files#diff-e00422d37a73d07c174e7838ad7c30f642d06305aff8f9d71e1e84c6897efbffL1
*
https://github.com/langchain-ai/langchain/pull/28070/files#diff-55267c883e58892916d5316bc029725fdeeba5a77e2557cf7667793823d9d9c6L1
*
https://github.com/langchain-ai/langchain/pull/27991/files#diff-9f96b8fd39133c3f1d737e013c9042b065b42ae04b3da76902304f30cec136d8R1


Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-02 19:01:13 +00:00
Prithvi Kannan
e5b4f9ad75 docs: Add ChatDatabricks to llm models (#28398)
Add `ChatDatabricks` to the list of LLM model options.


---------

Signed-off-by: Prithvi Kannan <prithvi.kannan@databricks.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-02 13:19:30 -05:00
Riccardo Cocetta
58d2bfe310 docs: fixed a variable name in embeddings section (#28434)
This is a simple change for a variable name in the Embeddings section of
this document:
https://python.langchain.com/docs/tutorials/retrievers/#embeddings .

The variable name `embeddings_model` seems to be wrong and doesn't match
what follows in the template: `embeddings`.
2024-12-02 12:29:06 -05:00
chistokir
e294e7159a Update mistralai.ipynb (#28440)
Just fixing a broken link


---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-12-02 17:05:28 +00:00
Bagatur
47433485e7 mistral[patch]: Release 0.2.3 (#28452) 2024-12-02 08:26:28 -08:00
455 changed files with 34815 additions and 10583 deletions


@@ -22,7 +22,7 @@ body:
if there's another way to solve your problem:
[LangChain documentation with the integrated search](https://python.langchain.com/docs/get_started/introduction),
[API Reference](https://api.python.langchain.com/en/stable/),
[API Reference](https://python.langchain.com/api_reference/),
[GitHub search](https://github.com/langchain-ai/langchain),
[LangChain Github Discussions](https://github.com/langchain-ai/langchain/discussions),
[LangChain Github Issues](https://github.com/langchain-ai/langchain/issues?q=is%3Aissue),


@@ -16,7 +16,7 @@ body:
if there's another way to solve your problem:
[LangChain documentation with the integrated search](https://python.langchain.com/docs/get_started/introduction),
[API Reference](https://api.python.langchain.com/en/stable/),
[API Reference](https://python.langchain.com/api_reference/),
[GitHub search](https://github.com/langchain-ai/langchain),
[LangChain Github Discussions](https://github.com/langchain-ai/langchain/discussions),
[LangChain Github Issues](https://github.com/langchain-ai/langchain/issues?q=is%3Aissue),


@@ -21,7 +21,7 @@ body:
place to ask your question:
[LangChain documentation with the integrated search](https://python.langchain.com/docs/get_started/introduction),
[API Reference](https://api.python.langchain.com/en/stable/),
[API Reference](https://python.langchain.com/api_reference/),
[GitHub search](https://github.com/langchain-ai/langchain),
[LangChain Github Discussions](https://github.com/langchain-ai/langchain/discussions),
[LangChain Github Issues](https://github.com/langchain-ai/langchain/issues?q=is%3Aissue),


@@ -272,6 +272,9 @@ if __name__ == "__main__":
# TODO: update to include all packages that rely on standard-tests (all partner packages)
# note: won't run on external repo partners
dirs_to_run["lint"].add("libs/standard-tests")
dirs_to_run["test"].add("libs/standard-tests")
dirs_to_run["lint"].add("libs/cli")
dirs_to_run["test"].add("libs/cli")
dirs_to_run["test"].add("libs/partners/mistralai")
dirs_to_run["test"].add("libs/partners/openai")
dirs_to_run["test"].add("libs/partners/anthropic")
@@ -279,8 +282,9 @@ if __name__ == "__main__":
dirs_to_run["test"].add("libs/partners/groq")
elif file.startswith("libs/cli"):
# todo: add cli makefile
pass
dirs_to_run["lint"].add("libs/cli")
dirs_to_run["test"].add("libs/cli")
elif file.startswith("libs/partners"):
partner_dir = file.split("/")[2]
if os.path.isdir(f"libs/partners/{partner_dir}") and [


@@ -13,7 +13,7 @@ on:
description: "Python version to use"
env:
POETRY_VERSION: "1.7.1"
POETRY_VERSION: "1.8.4"
jobs:
build:
@@ -21,6 +21,7 @@ jobs:
run:
working-directory: ${{ inputs.working-directory }}
runs-on: ubuntu-latest
timeout-minutes: 20
name: "poetry run pytest -m compile tests/integration_tests #${{ inputs.python-version }}"
steps:
- uses: actions/checkout@v4


@@ -12,7 +12,7 @@ on:
description: "Python version to use"
env:
POETRY_VERSION: "1.7.1"
POETRY_VERSION: "1.8.4"
jobs:
build:


@@ -13,7 +13,7 @@ on:
description: "Python version to use"
env:
POETRY_VERSION: "1.7.1"
POETRY_VERSION: "1.8.4"
WORKDIR: ${{ inputs.working-directory == '' && '.' || inputs.working-directory }}
# This env var allows us to get inline annotations when ruff has complaints.
@@ -23,6 +23,7 @@ jobs:
build:
name: "make lint #${{ inputs.python-version }}"
runs-on: ubuntu-latest
timeout-minutes: 20
steps:
- uses: actions/checkout@v4


@@ -21,7 +21,7 @@ on:
env:
PYTHON_VERSION: "3.11"
POETRY_VERSION: "1.7.1"
POETRY_VERSION: "1.8.4"
jobs:
build:
@@ -167,6 +167,7 @@ jobs:
- release-notes
- test-pypi-publish
runs-on: ubuntu-latest
timeout-minutes: 20
steps:
- uses: actions/checkout@v4
@@ -191,7 +192,12 @@ jobs:
poetry-version: ${{ env.POETRY_VERSION }}
working-directory: ${{ inputs.working-directory }}
- name: Import published package
- uses: actions/download-artifact@v4
with:
name: dist
path: ${{ inputs.working-directory }}/dist/
- name: Import dist package
shell: bash
working-directory: ${{ inputs.working-directory }}
env:
@@ -207,15 +213,7 @@ jobs:
# - attempt install again after 5 seconds if it fails because there is
# sometimes a delay in availability on test pypi
run: |
poetry run pip install \
--extra-index-url https://test.pypi.org/simple/ \
"$PKG_NAME==$VERSION" || \
( \
sleep 15 && \
poetry run pip install \
--extra-index-url https://test.pypi.org/simple/ \
"$PKG_NAME==$VERSION" \
)
poetry run pip install dist/*.whl
# Replace all dashes in the package name with underscores,
# since that's how Python imports packages with dashes in the name.
@@ -224,10 +222,10 @@ jobs:
poetry run python -c "import $IMPORT_NAME; print(dir($IMPORT_NAME))"
- name: Import test dependencies
run: poetry install --with test
run: poetry install --with test --no-root
working-directory: ${{ inputs.working-directory }}
# Overwrite the local version of the package with the test PyPI version.
# Overwrite the local version of the package with the built version
- name: Import published package (again)
working-directory: ${{ inputs.working-directory }}
shell: bash
@@ -235,9 +233,7 @@ jobs:
PKG_NAME: ${{ needs.build.outputs.pkg-name }}
VERSION: ${{ needs.build.outputs.version }}
run: |
poetry run pip install \
--extra-index-url https://test.pypi.org/simple/ \
"$PKG_NAME==$VERSION"
poetry run pip install dist/*.whl
- name: Run unit tests
run: make tests


@@ -13,7 +13,7 @@ on:
description: "Python version to use"
env:
POETRY_VERSION: "1.7.1"
POETRY_VERSION: "1.8.4"
jobs:
build:
@@ -21,6 +21,7 @@ jobs:
run:
working-directory: ${{ inputs.working-directory }}
runs-on: ubuntu-latest
timeout-minutes: 20
name: "make test #${{ inputs.python-version }}"
steps:
- uses: actions/checkout@v4


@@ -9,11 +9,12 @@ on:
description: "Python version to use"
env:
POETRY_VERSION: "1.7.1"
POETRY_VERSION: "1.8.4"
jobs:
build:
runs-on: ubuntu-latest
timeout-minutes: 20
name: "check doc imports #${{ inputs.python-version }}"
steps:
- uses: actions/checkout@v4


@@ -18,7 +18,7 @@ on:
description: "Pydantic version to test."
env:
POETRY_VERSION: "1.7.1"
POETRY_VERSION: "1.8.4"
jobs:
build:
@@ -26,6 +26,7 @@ jobs:
run:
working-directory: ${{ inputs.working-directory }}
runs-on: ubuntu-latest
timeout-minutes: 20
name: "make test # pydantic: ~=${{ inputs.pydantic-version }}, python: ${{ inputs.python-version }}, "
steps:
- uses: actions/checkout@v4


@@ -14,7 +14,7 @@ on:
description: "Release from a non-master branch (danger!)"
env:
POETRY_VERSION: "1.7.1"
POETRY_VERSION: "1.8.4"
PYTHON_VERSION: "3.10"
jobs:


@@ -5,7 +5,7 @@ on:
schedule:
- cron: '0 13 * * *'
env:
POETRY_VERSION: "1.8.1"
POETRY_VERSION: "1.8.4"
PYTHON_VERSION: "3.11"
jobs:


@@ -5,6 +5,7 @@ on:
push:
branches: [master]
pull_request:
merge_group:
# If another push to the same PR or branch happens while this workflow is still running,
# cancel the earlier run in favor of the next run.
@@ -17,7 +18,7 @@ concurrency:
cancel-in-progress: true
env:
POETRY_VERSION: "1.7.1"
POETRY_VERSION: "1.8.4"
jobs:
build:
@@ -119,6 +120,7 @@ jobs:
job-configs: ${{ fromJson(needs.build.outputs.extended-tests) }}
fail-fast: false
runs-on: ubuntu-latest
timeout-minutes: 20
defaults:
run:
working-directory: ${{ matrix.job-configs.working-directory }}


@@ -15,7 +15,7 @@ on:
- cron: '0 13 * * *'
env:
POETRY_VERSION: "1.7.1"
POETRY_VERSION: "1.8.4"
jobs:
build:


@@ -2,32 +2,60 @@ name: Scheduled tests
on:
workflow_dispatch: # Allows to trigger the workflow manually in GitHub UI
inputs:
working-directory-force:
type: string
description: "From which folder this pipeline executes - defaults to all in matrix - example value: libs/partners/anthropic"
python-version-force:
type: string
description: "Python version to use - defaults to 3.9 and 3.11 in matrix - example value: 3.9"
schedule:
- cron: '0 13 * * *'
env:
POETRY_VERSION: "1.7.1"
POETRY_VERSION: "1.8.4"
DEFAULT_LIBS: '["libs/partners/openai", "libs/partners/anthropic", "libs/partners/fireworks", "libs/partners/groq", "libs/partners/mistralai", "libs/partners/google-vertexai", "libs/partners/google-genai", "libs/partners/aws"]'
jobs:
compute-matrix:
if: github.repository_owner == 'langchain-ai' || github.event_name != 'schedule'
runs-on: ubuntu-latest
name: Compute matrix
outputs:
matrix: ${{ steps.set-matrix.outputs.matrix }}
steps:
- name: Set matrix
id: set-matrix
env:
DEFAULT_LIBS: ${{ env.DEFAULT_LIBS }}
WORKING_DIRECTORY_FORCE: ${{ github.event.inputs.working-directory-force || '' }}
PYTHON_VERSION_FORCE: ${{ github.event.inputs.python-version-force || '' }}
run: |
# echo "matrix=..." where matrix is a json formatted str with keys python-version and working-directory
# python-version should default to 3.9 and 3.11, but is overridden to [PYTHON_VERSION_FORCE] if set
# working-directory should default to DEFAULT_LIBS, but is overridden to [WORKING_DIRECTORY_FORCE] if set
python_version='["3.9", "3.11"]'
working_directory="$DEFAULT_LIBS"
if [ -n "$PYTHON_VERSION_FORCE" ]; then
python_version="[\"$PYTHON_VERSION_FORCE\"]"
fi
if [ -n "$WORKING_DIRECTORY_FORCE" ]; then
working_directory="[\"$WORKING_DIRECTORY_FORCE\"]"
fi
matrix="{\"python-version\": $python_version, \"working-directory\": $working_directory}"
echo $matrix
echo "matrix=$matrix" >> $GITHUB_OUTPUT
build:
if: github.repository_owner == 'langchain-ai' || github.event_name != 'schedule'
name: Python ${{ matrix.python-version }} - ${{ matrix.working-directory }}
runs-on: ubuntu-latest
needs: [compute-matrix]
timeout-minutes: 20
strategy:
fail-fast: false
matrix:
python-version:
- "3.9"
- "3.11"
working-directory:
- "libs/partners/openai"
- "libs/partners/anthropic"
- "libs/partners/fireworks"
- "libs/partners/groq"
- "libs/partners/mistralai"
- "libs/partners/google-vertexai"
- "libs/partners/google-genai"
- "libs/partners/aws"
python-version: ${{ fromJSON(needs.compute-matrix.outputs.matrix).python-version }}
working-directory: ${{ fromJSON(needs.compute-matrix.outputs.matrix).working-directory }}
steps:
- uses: actions/checkout@v4


@@ -69,7 +69,11 @@ lint lint_package lint_tests:
poetry run ruff check docs cookbook
poetry run ruff format docs cookbook cookbook --diff
poetry run ruff check --select I docs cookbook
git grep 'from langchain import' docs/docs cookbook | grep -vE 'from langchain import (hub)' && exit 1 || exit 0
git --no-pager grep 'from langchain import' docs cookbook | grep -vE 'from langchain import (hub)' && echo "Error: no importing langchain from root in docs, except for hub" && exit 1 || exit 0
git --no-pager grep 'api.python.langchain.com' -- docs/docs ':!docs/docs/additional_resources/arxiv_references.mdx' ':!docs/docs/integrations/document_loaders/sitemap.ipynb' || exit 0 && \
echo "Error: you should link python.langchain.com/api_reference, not api.python.langchain.com in the docs" && \
exit 1
## format: Format the project files.
format format_diff:


@@ -38,18 +38,21 @@ conda install langchain -c conda-forge
For these applications, LangChain simplifies the entire application lifecycle:
- **Open-source libraries**: Build your applications using LangChain's open-source [building blocks](https://python.langchain.com/docs/concepts/#langchain-expression-language-lcel), [components](https://python.langchain.com/docs/concepts/), and [third-party integrations](https://python.langchain.com/docs/integrations/providers/).
- **Open-source libraries**: Build your applications using LangChain's open-source
[components](https://python.langchain.com/docs/concepts/) and
[third-party integrations](https://python.langchain.com/docs/integrations/providers/).
Use [LangGraph](https://langchain-ai.github.io/langgraph/) to build stateful agents with first-class streaming and human-in-the-loop support.
- **Productionization**: Inspect, monitor, and evaluate your apps with [LangSmith](https://docs.smith.langchain.com/) so that you can constantly optimize and deploy with confidence.
- **Deployment**: Turn your LangGraph applications into production-ready APIs and Assistants with [LangGraph Cloud](https://langchain-ai.github.io/langgraph/cloud/).
- **Deployment**: Turn your LangGraph applications into production-ready APIs and Assistants with [LangGraph Platform](https://langchain-ai.github.io/langgraph/cloud/).
### Open-source libraries
- **`langchain-core`**: Base abstractions and LangChain Expression Language.
- **`langchain-community`**: Third party integrations.
- Some integrations have been further split into **partner packages** that only rely on **`langchain-core`**. Examples include **`langchain_openai`** and **`langchain_anthropic`**.
- **`langchain-core`**: Base abstractions.
- **Integration packages** (e.g. **`langchain-openai`**, **`langchain-anthropic`**, etc.): Important integrations have been split into lightweight packages that are co-maintained by the LangChain team and the integration developers.
- **`langchain`**: Chains, agents, and retrieval strategies that make up an application's cognitive architecture.
- **[`LangGraph`](https://langchain-ai.github.io/langgraph/)**: A library for building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph. Integrates smoothly with LangChain, but can be used without it. To learn more about LangGraph, check out our first LangChain Academy course, *Introduction to LangGraph*, available [here](https://academy.langchain.com/courses/intro-to-langgraph).
- **`langchain-community`**: Third-party integrations that are community maintained.
- **[LangGraph](https://langchain-ai.github.io/langgraph)**: Build robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph. Integrates smoothly with LangChain, but can be used without it. To learn more about LangGraph, check out our first LangChain Academy course, *Introduction to LangGraph*, available [here](https://academy.langchain.com/courses/intro-to-langgraph).
### Productionization:
@@ -57,7 +60,7 @@ For these applications, LangChain simplifies the entire application lifecycle:
### Deployment:
- **[LangGraph Cloud](https://langchain-ai.github.io/langgraph/cloud/)**: Turn your LangGraph applications into production-ready APIs and Assistants.
- **[LangGraph Platform](https://langchain-ai.github.io/langgraph/cloud/)**: Turn your LangGraph applications into production-ready APIs and Assistants.
![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](docs/static/svg/langchain_stack_112024.svg#gh-light-mode-only "LangChain Architecture Overview")
![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](docs/static/svg/langchain_stack_112024_dark.svg#gh-dark-mode-only "LangChain Architecture Overview")
@@ -72,7 +75,7 @@ For these applications, LangChain simplifies the entire application lifecycle:
**🧱 Extracting structured output**
- [Documentation](https://python.langchain.com/docs/tutorials/extraction/)
- End-to-end Example: [SQL Llama2 Template](https://github.com/langchain-ai/langchain-extract/)
- End-to-end Example: [LangChain Extract](https://github.com/langchain-ai/langchain-extract/)
**🤖 Chatbots**
@@ -85,19 +88,12 @@ And much more! Head to the [Tutorials](https://python.langchain.com/docs/tutoria
The main value props of the LangChain libraries are:
1. **Components**: composable building blocks, tools and integrations for working with language models. Components are modular and easy-to-use, whether you are using the rest of the LangChain framework or not
2. **Off-the-shelf chains**: built-in assemblages of components for accomplishing higher-level tasks
Off-the-shelf chains make it easy to get started. Components make it easy to customize existing chains and build new ones.
## LangChain Expression Language (LCEL)
LCEL is a key part of LangChain, allowing you to build and organize chains of processes in a straightforward, declarative manner. It was designed to support taking prototypes directly into production without needing to alter any code. This means you can use LCEL to set up everything from basic "prompt + LLM" setups to intricate, multi-step workflows.
- **[Overview](https://python.langchain.com/docs/concepts/#langchain-expression-language-lcel)**: LCEL and its benefits
- **[Interface](https://python.langchain.com/docs/concepts/#runnable-interface)**: The standard Runnable interface for LCEL objects
- **[Primitives](https://python.langchain.com/docs/how_to/#langchain-expression-language-lcel)**: More on the primitives LCEL includes
- **[Cheatsheet](https://python.langchain.com/docs/how_to/lcel_cheatsheet/)**: Quick overview of the most common usage patterns
1. **Components**: composable building blocks, tools and integrations for working with language models. Components are modular and easy-to-use, whether you are using the rest of the LangChain framework or not.
2. **Easy orchestration with LangGraph**: [LangGraph](https://langchain-ai.github.io/langgraph/),
built on top of `langchain-core`, has built-in support for [messages](https://python.langchain.com/docs/concepts/messages/), [tools](https://python.langchain.com/docs/concepts/tools/),
and other LangChain abstractions. This makes it easy to combine components into
production-ready applications with persistence, streaming, and other key features.
Check out the LangChain [tutorials page](https://python.langchain.com/docs/tutorials/#orchestration) for examples.
## Components
@@ -105,15 +101,19 @@ Components fall into the following **modules**:
**📃 Model I/O**
This includes [prompt management](https://python.langchain.com/docs/concepts/#prompt-templates), [prompt optimization](https://python.langchain.com/docs/concepts/#example-selectors), a generic interface for [chat models](https://python.langchain.com/docs/concepts/#chat-models) and [LLMs](https://python.langchain.com/docs/concepts/#llms), and common utilities for working with [model outputs](https://python.langchain.com/docs/concepts/#output-parsers).
This includes [prompt management](https://python.langchain.com/docs/concepts/prompt_templates/)
and a generic interface for [chat models](https://python.langchain.com/docs/concepts/chat_models/), including a consistent interface for [tool-calling](https://python.langchain.com/docs/concepts/tool_calling/) and [structured output](https://python.langchain.com/docs/concepts/structured_outputs/) across model providers.
**📚 Retrieval**
Retrieval Augmented Generation involves [loading data](https://python.langchain.com/docs/concepts/#document-loaders) from a variety of sources, [preparing it](https://python.langchain.com/docs/concepts/#text-splitters), then [searching over (a.k.a. retrieving from)](https://python.langchain.com/docs/concepts/#retrievers) it for use in the generation step.
Retrieval Augmented Generation involves [loading data](https://python.langchain.com/docs/concepts/document_loaders/) from a variety of sources, [preparing it](https://python.langchain.com/docs/concepts/text_splitters/), then [searching over (a.k.a. retrieving from)](https://python.langchain.com/docs/concepts/retrievers/) it for use in the generation step.
**🤖 Agents**
Agents allow an LLM autonomy over how a task is accomplished. Agents make decisions about which Actions to take, then take that Action, observe the result, and repeat until the task is complete. LangChain provides a [standard interface for agents](https://python.langchain.com/docs/concepts/#agents), along with [LangGraph](https://github.com/langchain-ai/langgraph) for building custom agents.
Agents allow an LLM autonomy over how a task is accomplished. Agents make decisions about which Actions to take, then take that Action, observe the result, and repeat until the task is complete. [LangGraph](https://langchain-ai.github.io/langgraph/) makes it easy to use
LangChain components to build both [custom](https://langchain-ai.github.io/langgraph/tutorials/)
and [built-in](https://langchain-ai.github.io/langgraph/how-tos/create-react-agent/)
LLM agents.
## 📖 Documentation
@@ -123,7 +123,7 @@ Please see [here](https://python.langchain.com) for full documentation, which in
- [Tutorials](https://python.langchain.com/docs/tutorials/): If you're looking to build something specific or are more of a hands-on learner, check out our tutorials. This is the best place to get started.
- [How-to guides](https://python.langchain.com/docs/how_to/): Answers to “How do I….?” type questions. These guides are goal-oriented and concrete; they're meant to help you complete a specific task.
- [Conceptual guide](https://python.langchain.com/docs/concepts/): Conceptual explanations of the key parts of the framework.
- [API Reference](https://api.python.langchain.com): Thorough documentation of every class and method.
- [API Reference](https://python.langchain.com/api_reference/): Thorough documentation of every class and method.
## 🌐 Ecosystem


@@ -1,5 +1,30 @@
# Security Policy
LangChain has a large ecosystem of integrations with various external resources like local and remote file systems, APIs and databases. These integrations allow developers to create versatile applications that combine the power of LLMs with the ability to access, interact with and manipulate external resources.
## Best practices
When building such applications developers should remember to follow good security practices:
* [**Limit Permissions**](https://en.wikipedia.org/wiki/Principle_of_least_privilege): Scope permissions specifically to the application's need. Granting broad or excessive permissions can introduce significant security vulnerabilities. To avoid such vulnerabilities, consider using read-only credentials, disallowing access to sensitive resources, using sandboxing techniques (such as running inside a container), specifying proxy configurations to control external requests, etc. as appropriate for your application.
* **Anticipate Potential Misuse**: Just as humans can err, so can Large Language Models (LLMs). Always assume that any system access or credentials may be used in any way allowed by the permissions they are assigned. For example, if a pair of database credentials allows deleting data, it's safest to assume that any LLM able to use those credentials may in fact delete data.
* [**Defense in Depth**](https://en.wikipedia.org/wiki/Defense_in_depth_(computing)): No security technique is perfect. Fine-tuning and good chain design can reduce, but not eliminate, the odds that a Large Language Model (LLM) may make a mistake. It's best to combine multiple layered security approaches rather than relying on any single layer of defense to ensure security. For example: use both read-only permissions and sandboxing to ensure that LLMs are only able to access data that is explicitly meant for them to use.
Risks of not doing so include, but are not limited to:
* Data corruption or loss.
* Unauthorized access to confidential information.
* Compromised performance or availability of critical resources.
Example scenarios with mitigation strategies:
* A user may ask an agent with access to the file system to delete files that should not be deleted or read the content of files that contain sensitive information. To mitigate, limit the agent to only use a specific directory and only allow it to read or write files that are safe to read or write. Consider further sandboxing the agent by running it in a container.
* A user may ask an agent with write access to an external API to write malicious data to the API, or delete data from that API. To mitigate, give the agent read-only API keys, or limit it to only use endpoints that are already resistant to such misuse.
* A user may ask an agent with access to a database to drop a table or mutate the schema. To mitigate, scope the credentials to only the tables that the agent needs to access and consider issuing READ-ONLY credentials.
If you're building applications that access external resources like file systems, APIs
or databases, consider speaking with your company's security team to determine how to best
design and secure your applications.
## Reporting OSS Vulnerabilities
LangChain is partnered with [huntr by Protect AI](https://huntr.com/) to provide
@@ -14,7 +39,7 @@ Before reporting a vulnerability, please review:
1) In-Scope Targets and Out-of-Scope Targets below.
2) The [langchain-ai/langchain](https://python.langchain.com/docs/contributing/repo_structure) monorepo structure.
3) LangChain [security guidelines](https://python.langchain.com/docs/security) to
3) The [Best practices](#best-practices) above to
understand what we consider to be a security vulnerability vs. developer
responsibility.
@@ -33,13 +58,13 @@ The following packages and repositories are eligible for bug bounties:
All out of scope targets defined by huntr as well as:
- **langchain-experimental**: This repository is for experimental code and is not
eligible for bug bounties, bug reports to it will be marked as interesting or waste of
eligible for bug bounties (see [package warning](https://pypi.org/project/langchain-experimental/)), bug reports to it will be marked as interesting or waste of
time and published with no bounty attached.
- **tools**: Tools in either langchain or langchain-community are not eligible for bug
bounties. This includes the following directories
- langchain/tools
- langchain-community/tools
- Please review our [security guidelines](https://python.langchain.com/docs/security)
- libs/langchain/langchain/tools
- libs/community/langchain_community/tools
- Please review the [best practices](#best-practices)
for more details, but generally tools interact with the real world. Developers are
expected to understand the security implications of their code and are responsible
for the security of their tools.
@@ -47,7 +72,7 @@ All out of scope targets defined by huntr as well as:
case basis, but likely will not be eligible for a bounty as the code is already
documented with guidelines for developers that should be followed for making their
application secure.
- Any LangSmith related repositories or APIs see below.
- Any LangSmith related repositories or APIs (see [Reporting LangSmith Vulnerabilities](#reporting-langsmith-vulnerabilities)).
## Reporting LangSmith Vulnerabilities


@@ -13,28 +13,21 @@ OUTPUT_NEW_DOCS_DIR = $(OUTPUT_NEW_DIR)/docs
PYTHON = .venv/bin/python
PARTNER_DEPS_LIST := $(shell find ../libs/partners -mindepth 1 -maxdepth 1 -type d -exec sh -c ' \
for dir; do \
if find "$$dir" -maxdepth 1 -type f \( -name "pyproject.toml" -o -name "setup.py" \) | grep -q .; then \
echo "$$dir"; \
fi \
done' sh {} + | grep -vE "airbyte|ibm|databricks" | tr '\n' ' ')
PORT ?= 3001
clean:
rm -rf build
install-vercel-deps:
yum -y update
yum install gcc bzip2-devel libffi-devel zlib-devel wget tar gzip rsync -y
yum -y -q update
yum -y -q install gcc bzip2-devel libffi-devel zlib-devel wget tar gzip rsync -y
install-py-deps:
python3 -m venv .venv
$(PYTHON) -m pip install --upgrade pip
$(PYTHON) -m pip install --upgrade uv
$(PYTHON) -m uv pip install --pre -r vercel_requirements.txt
$(PYTHON) -m uv pip install --pre --editable $(PARTNER_DEPS_LIST)
$(PYTHON) -m pip install -q --upgrade pip
$(PYTHON) -m pip install -q --upgrade uv
$(PYTHON) -m uv pip install -q --pre -r vercel_requirements.txt
$(PYTHON) -m uv pip install -q --pre $$($(PYTHON) scripts/partner_deps_list.py)
generate-files:
mkdir -p $(INTERMEDIATE_DIR)
@@ -47,6 +40,7 @@ generate-files:
$(PYTHON) scripts/partner_pkg_table.py $(INTERMEDIATE_DIR)
curl https://raw.githubusercontent.com/langchain-ai/langserve/main/README.md | sed 's/<=/\&lt;=/g' > $(INTERMEDIATE_DIR)/langserve.md
cp ../SECURITY.md $(INTERMEDIATE_DIR)/security.md
$(PYTHON) scripts/resolve_local_links.py $(INTERMEDIATE_DIR)/langserve.md https://github.com/langchain-ai/langserve/tree/main/
copy-infra:
@@ -59,6 +53,7 @@ copy-infra:
cp package.json $(OUTPUT_NEW_DIR)
cp sidebars.js $(OUTPUT_NEW_DIR)
cp -r static $(OUTPUT_NEW_DIR)
cp -r ../libs/cli/langchain_cli/integration_template $(OUTPUT_NEW_DIR)/src/theme
cp yarn.lock $(OUTPUT_NEW_DIR)
render:
@@ -80,6 +75,7 @@ build: install-py-deps generate-files copy-infra render md-sync append-related
vercel-build: install-vercel-deps build generate-references
rm -rf docs
mv $(OUTPUT_NEW_DOCS_DIR) docs
cp -r ../libs/cli/langchain_cli/integration_template src/theme
rm -rf build
mkdir static/api_reference
git clone --depth=1 https://github.com/langchain-ai/langchain-api-docs-html.git


@@ -80,6 +80,8 @@
html {
--pst-font-family-base: 'Inter';
--pst-font-family-heading: 'Inter Tight', sans-serif;
--pst-icon-versionmodified-deprecated: var(--pst-icon-exclamation-triangle);
}
/*******************************************************************************
@@ -92,7 +94,7 @@ html {
* https://sass-lang.com/documentation/interpolation
*/
/* Defaults to light mode if data-theme is not set */
html:not([data-theme]) {
html:not([data-theme]), html[data-theme=light] {
--pst-color-primary: #287977;
--pst-color-primary-bg: #80D6D3;
--pst-color-secondary: #6F3AED;
@@ -122,58 +124,8 @@ html:not([data-theme]) {
--pst-color-on-background: #F4F9F8;
--pst-color-surface: #F4F9F8;
--pst-color-on-surface: #222832;
}
html:not([data-theme]) {
--pst-color-link: var(--pst-color-primary);
--pst-color-link-hover: var(--pst-color-secondary);
}
html:not([data-theme]) .only-dark,
html:not([data-theme]) .only-dark ~ figcaption {
display: none !important;
}
/* NOTE: @each {...} is like a for-loop
* https://sass-lang.com/documentation/at-rules/control/each
*/
html[data-theme=light] {
--pst-color-primary: #287977;
--pst-color-primary-bg: #80D6D3;
--pst-color-secondary: #6F3AED;
--pst-color-secondary-bg: #DAD6FE;
--pst-color-accent: #c132af;
--pst-color-accent-bg: #f8dff5;
--pst-color-info: #276be9;
--pst-color-info-bg: #dce7fc;
--pst-color-warning: #f66a0a;
--pst-color-warning-bg: #f8e3d0;
--pst-color-success: #00843f;
--pst-color-success-bg: #d6ece1;
--pst-color-attention: var(--pst-color-warning);
--pst-color-attention-bg: var(--pst-color-warning-bg);
--pst-color-danger: #d72d47;
--pst-color-danger-bg: #f9e1e4;
--pst-color-text-base: #222832;
--pst-color-text-muted: #48566b;
--pst-color-heading-color: #ffffff;
--pst-color-shadow: rgba(0, 0, 0, 0.1);
--pst-color-border: #d1d5da;
--pst-color-border-muted: rgba(23, 23, 26, 0.2);
--pst-color-inline-code: #912583;
--pst-color-inline-code-links: #246161;
--pst-color-target: #f3cf95;
--pst-color-background: #ffffff;
--pst-color-on-background: #F4F9F8;
--pst-color-surface: #F4F9F8;
--pst-color-on-surface: #222832;
color-scheme: light;
}
html[data-theme=light] {
--pst-color-link: var(--pst-color-primary);
--pst-color-link-hover: var(--pst-color-secondary);
}
html[data-theme=light] .only-dark,
html[data-theme=light] .only-dark ~ figcaption {
display: none !important;
--pst-color-deprecated: #f47d2e;
--pst-color-deprecated-bg: #fff3e8;
}
html[data-theme=dark] {
@@ -206,6 +158,8 @@ html[data-theme=dark] {
--pst-color-on-background: #222832;
--pst-color-surface: #29313d;
--pst-color-on-surface: #f3f4f5;
--pst-color-deprecated: #b46f3e;
--pst-color-deprecated-bg: #341906;
/* Adjust images in dark mode (unless they have class .only-dark or
* .dark-light, in which case assume they're already optimized for dark
* mode).
@@ -216,6 +170,30 @@ html[data-theme=dark] {
*/
color-scheme: dark;
}
html:not([data-theme]) {
--pst-color-link: var(--pst-color-primary);
--pst-color-link-hover: var(--pst-color-secondary);
}
html:not([data-theme]) .only-dark,
html:not([data-theme]) .only-dark ~ figcaption {
display: none !important;
}
/* NOTE: @each {...} is like a for-loop
* https://sass-lang.com/documentation/at-rules/control/each
*/
html[data-theme=light] {
color-scheme: light;
}
html[data-theme=light] {
--pst-color-link: var(--pst-color-primary);
--pst-color-link-hover: var(--pst-color-secondary);
}
html[data-theme=light] .only-dark,
html[data-theme=light] .only-dark ~ figcaption {
display: none !important;
}
html[data-theme=dark] {
--pst-color-link: var(--pst-color-primary);
--pst-color-link-hover: var(--pst-color-secondary);
@@ -389,6 +367,13 @@ html[data-theme=dark] .MathJax_SVG * {
div.deprecated {
margin-top: 0.5em;
margin-bottom: 2em;
background-color: var(--pst-color-deprecated-bg);
border-color: var(--pst-color-deprecated);
}
span.versionmodified.deprecated:before {
color: var(--pst-color-deprecated);
}
.admonition-beta.admonition, div.admonition-beta.admonition {
@@ -408,4 +393,4 @@ dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.glossary):not(.
p {
font-size: 0.9rem;
margin-bottom: 0.5rem;
}
}


@@ -87,6 +87,18 @@ class Beta(BaseAdmonition):
def setup(app):
app.add_directive("example_links", ExampleLinksDirective)
app.add_directive("beta", Beta)
app.connect("autodoc-skip-member", skip_private_members)
def skip_private_members(app, what, name, obj, skip, options):
if skip:
return True
if hasattr(obj, "__doc__") and obj.__doc__ and ":private:" in obj.__doc__:
return True
if name == "__init__" and obj.__objclass__ is object:
# dont document default init
return True
return None
# -- Project information -----------------------------------------------------
@@ -116,6 +128,7 @@ extensions = [
"_extensions.gallery_directive",
"sphinx_design",
"sphinx_copybutton",
"sphinxcontrib.googleanalytics",
]
source_suffix = [".rst", ".md"]
@@ -255,6 +268,8 @@ html_show_sourcelink = False
# Set canonical URL from the Read the Docs Domain
html_baseurl = os.environ.get("READTHEDOCS_CANONICAL_URL", "")
googleanalytics_id = "G-9B66JQQH2F"
# Tell Jinja2 templates the build is running on Read the Docs
if os.environ.get("READTHEDOCS", "") == "True":
html_context["READTHEDOCS"] = True


@@ -72,14 +72,21 @@ def _load_module_members(module_path: str, namespace: str) -> ModuleMembers:
Returns:
list: A list of loaded module objects.
"""
classes_: List[ClassInfo] = []
functions: List[FunctionInfo] = []
module = importlib.import_module(module_path)
if ":private:" in (module.__doc__ or ""):
return ModuleMembers(classes_=[], functions=[])
for name, type_ in inspect.getmembers(module):
if not hasattr(type_, "__module__"):
continue
if type_.__module__ != module_path:
continue
if ":private:" in (type_.__doc__ or ""):
continue
if inspect.isclass(type_):
# The type of the class is used to select a template


@@ -9,3 +9,4 @@ pyyaml
sphinx-design
sphinx-copybutton
beautifulsoup4
sphinxcontrib-googleanalytics


@@ -28,19 +28,19 @@ From the opposite direction, scientists use `LangChain` in research and referenc
| `2307.09288v2` [Llama 2: Open Foundation and Fine-Tuned Chat Models](http://arxiv.org/abs/2307.09288v2) | Hugo Touvron, Louis Martin, Kevin Stone, et al. | 2023&#8209;07&#8209;18 | `Cookbook:` [Semi Structured Rag](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_Structured_RAG.ipynb)
| `2307.03172v3` [Lost in the Middle: How Language Models Use Long Contexts](http://arxiv.org/abs/2307.03172v3) | Nelson F. Liu, Kevin Lin, John Hewitt, et al. | 2023&#8209;07&#8209;06 | `Docs:` [docs/how_to/long_context_reorder](https://python.langchain.com/docs/how_to/long_context_reorder)
| `2305.14283v3` [Query Rewriting for Retrieval-Augmented Large Language Models](http://arxiv.org/abs/2305.14283v3) | Xinbei Ma, Yeyun Gong, Pengcheng He, et al. | 2023&#8209;05&#8209;23 | `Template:` [rewrite-retrieve-read](https://python.langchain.com/docs/templates/rewrite-retrieve-read), `Cookbook:` [Rewrite](https://github.com/langchain-ai/langchain/blob/master/cookbook/rewrite.ipynb)
| `2305.08291v1` [Large Language Model Guided Tree-of-Thought](http://arxiv.org/abs/2305.08291v1) | Jieyi Long | 2023&#8209;05&#8209;15 | `API:` [langchain_experimental.tot](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.tot), `Cookbook:` [Tree Of Thought](https://github.com/langchain-ai/langchain/blob/master/cookbook/tree_of_thought.ipynb)
| `2305.08291v1` [Large Language Model Guided Tree-of-Thought](http://arxiv.org/abs/2305.08291v1) | Jieyi Long | 2023&#8209;05&#8209;15 | `API:` [langchain_experimental.tot](https://python.langchain.com/api_reference/experimental/tot.html), `Cookbook:` [Tree Of Thought](https://github.com/langchain-ai/langchain/blob/master/cookbook/tree_of_thought.ipynb)
| `2305.04091v3` [Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models](http://arxiv.org/abs/2305.04091v3) | Lei Wang, Wanyu Xu, Yihuai Lan, et al. | 2023&#8209;05&#8209;06 | `Cookbook:` [Plan And Execute Agent](https://github.com/langchain-ai/langchain/blob/master/cookbook/plan_and_execute_agent.ipynb)
| `2305.02156v1` [Zero-Shot Listwise Document Reranking with a Large Language Model](http://arxiv.org/abs/2305.02156v1) | Xueguang Ma, Xinyu Zhang, Ronak Pradeep, et al. | 2023&#8209;05&#8209;03 | `Docs:` [docs/how_to/contextual_compression](https://python.langchain.com/docs/how_to/contextual_compression), `API:` [langchain...LLMListwiseRerank](https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.document_compressors.listwise_rerank.LLMListwiseRerank.html#langchain.retrievers.document_compressors.listwise_rerank.LLMListwiseRerank)
| `2305.02156v1` [Zero-Shot Listwise Document Reranking with a Large Language Model](http://arxiv.org/abs/2305.02156v1) | Xueguang Ma, Xinyu Zhang, Ronak Pradeep, et al. | 2023&#8209;05&#8209;03 | `Docs:` [docs/how_to/contextual_compression](https://python.langchain.com/docs/how_to/contextual_compression), `API:` [langchain...LLMListwiseRerank](https://python.langchain.com/api_reference/langchain/retrievers/langchain.retrievers.document_compressors.listwise_rerank.LLMListwiseRerank.html#)
| `2304.08485v2` [Visual Instruction Tuning](http://arxiv.org/abs/2304.08485v2) | Haotian Liu, Chunyuan Li, Qingyang Wu, et al. | 2023&#8209;04&#8209;17 | `Cookbook:` [Semi Structured Multi Modal Rag Llama2](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_structured_multi_modal_RAG_LLaMA2.ipynb), [Semi Structured And Multi Modal Rag](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_structured_and_multi_modal_RAG.ipynb)
| `2304.03442v2` [Generative Agents: Interactive Simulacra of Human Behavior](http://arxiv.org/abs/2304.03442v2) | Joon Sung Park, Joseph C. O'Brien, Carrie J. Cai, et al. | 2023&#8209;04&#8209;07 | `Cookbook:` [Generative Agents Interactive Simulacra Of Human Behavior](https://github.com/langchain-ai/langchain/blob/master/cookbook/generative_agents_interactive_simulacra_of_human_behavior.ipynb), [Multiagent Bidding](https://github.com/langchain-ai/langchain/blob/master/cookbook/multiagent_bidding.ipynb)
| `2303.17760v2` [CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society](http://arxiv.org/abs/2303.17760v2) | Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, et al. | 2023&#8209;03&#8209;31 | `Cookbook:` [Camel Role Playing](https://github.com/langchain-ai/langchain/blob/master/cookbook/camel_role_playing.ipynb)
| `2303.17580v4` [HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face](http://arxiv.org/abs/2303.17580v4) | Yongliang Shen, Kaitao Song, Xu Tan, et al. | 2023&#8209;03&#8209;30 | `API:` [langchain_experimental.autonomous_agents](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.autonomous_agents), `Cookbook:` [Hugginggpt](https://github.com/langchain-ai/langchain/blob/master/cookbook/hugginggpt.ipynb)
| `2303.17580v4` [HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face](http://arxiv.org/abs/2303.17580v4) | Yongliang Shen, Kaitao Song, Xu Tan, et al. | 2023&#8209;03&#8209;30 | `API:` [langchain_experimental.autonomous_agents](https://python.langchain.com/api_reference/experimental/autonomous_agents.html), `Cookbook:` [Hugginggpt](https://github.com/langchain-ai/langchain/blob/master/cookbook/hugginggpt.ipynb)
| `2301.10226v4` [A Watermark for Large Language Models](http://arxiv.org/abs/2301.10226v4) | John Kirchenbauer, Jonas Geiping, Yuxin Wen, et al. | 2023&#8209;01&#8209;24 | `API:` [langchain_community...OCIModelDeploymentTGI](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI.html#langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI), [langchain_huggingface...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_community...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint)
| `2212.10496v1` [Precise Zero-Shot Dense Retrieval without Relevance Labels](http://arxiv.org/abs/2212.10496v1) | Luyu Gao, Xueguang Ma, Jimmy Lin, et al. | 2022&#8209;12&#8209;20 | `Docs:` [docs/concepts](https://python.langchain.com/docs/concepts), `API:` [langchain...HypotheticalDocumentEmbedder](https://api.python.langchain.com/en/latest/chains/langchain.chains.hyde.base.HypotheticalDocumentEmbedder.html#langchain.chains.hyde.base.HypotheticalDocumentEmbedder), `Template:` [hyde](https://python.langchain.com/docs/templates/hyde), `Cookbook:` [Hypothetical Document Embeddings](https://github.com/langchain-ai/langchain/blob/master/cookbook/hypothetical_document_embeddings.ipynb)
| `2212.08073v1` [Constitutional AI: Harmlessness from AI Feedback](http://arxiv.org/abs/2212.08073v1) | Yuntao Bai, Saurav Kadavath, Sandipan Kundu, et al. | 2022&#8209;12&#8209;15 | `Docs:` [docs/versions/migrating_chains/constitutional_chain](https://python.langchain.com/docs/versions/migrating_chains/constitutional_chain)
| `2212.07425v3` [Robust and Explainable Identification of Logical Fallacies in Natural Language Arguments](http://arxiv.org/abs/2212.07425v3) | Zhivar Sourati, Vishnu Priya Prasanna Venkatesh, Darshan Deshpande, et al. | 2022&#8209;12&#8209;12 | `API:` [langchain_experimental.fallacy_removal](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.fallacy_removal)
| `2212.07425v3` [Robust and Explainable Identification of Logical Fallacies in Natural Language Arguments](http://arxiv.org/abs/2212.07425v3) | Zhivar Sourati, Vishnu Priya Prasanna Venkatesh, Darshan Deshpande, et al. | 2022&#8209;12&#8209;12 | `API:` [langchain_experimental.fallacy_removal](https://python.langchain.com/api_reference/experimental/fallacy_removal.html)
| `2211.13892v2` [Complementary Explanations for Effective In-Context Learning](http://arxiv.org/abs/2211.13892v2) | Xi Ye, Srinivasan Iyer, Asli Celikyilmaz, et al. | 2022&#8209;11&#8209;25 | `API:` [langchain_core...MaxMarginalRelevanceExampleSelector](https://api.python.langchain.com/en/latest/example_selectors/langchain_core.example_selectors.semantic_similarity.MaxMarginalRelevanceExampleSelector.html#langchain_core.example_selectors.semantic_similarity.MaxMarginalRelevanceExampleSelector)
| `2211.10435v2` [PAL: Program-aided Language Models](http://arxiv.org/abs/2211.10435v2) | Luyu Gao, Aman Madaan, Shuyan Zhou, et al. | 2022&#8209;11&#8209;18 | `API:` [langchain_experimental.pal_chain](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.pal_chain), [langchain_experimental...PALChain](https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html#langchain_experimental.pal_chain.base.PALChain), `Cookbook:` [Program Aided Language Model](https://github.com/langchain-ai/langchain/blob/master/cookbook/program_aided_language_model.ipynb)
| `2211.10435v2` [PAL: Program-aided Language Models](http://arxiv.org/abs/2211.10435v2) | Luyu Gao, Aman Madaan, Shuyan Zhou, et al. | 2022&#8209;11&#8209;18 | `API:` [langchain_experimental.pal_chain](https://python.langchain.com/api_reference/experimental/pal_chain.html), [langchain_experimental...PALChain](https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html#langchain_experimental.pal_chain.base.PALChain), `Cookbook:` [Program Aided Language Model](https://github.com/langchain-ai/langchain/blob/master/cookbook/program_aided_language_model.ipynb)
| `2210.11934v2` [An Analysis of Fusion Functions for Hybrid Retrieval](http://arxiv.org/abs/2210.11934v2) | Sebastian Bruch, Siyu Gai, Amir Ingber | 2022&#8209;10&#8209;21 | `Docs:` [docs/concepts](https://python.langchain.com/docs/concepts)
| `2210.03629v3` [ReAct: Synergizing Reasoning and Acting in Language Models](http://arxiv.org/abs/2210.03629v3) | Shunyu Yao, Jeffrey Zhao, Dian Yu, et al. | 2022&#8209;10&#8209;06 | `Docs:` [docs/integrations/tools/ionic_shopping](https://python.langchain.com/docs/integrations/tools/ionic_shopping), [docs/integrations/providers/cohere](https://python.langchain.com/docs/integrations/providers/cohere), [docs/concepts](https://python.langchain.com/docs/concepts), `API:` [langchain...create_react_agent](https://api.python.langchain.com/en/latest/agents/langchain.agents.react.agent.create_react_agent.html#langchain.agents.react.agent.create_react_agent), [langchain...TrajectoryEvalChain](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain.html#langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain)
| `2209.10785v2` [Deep Lake: a Lakehouse for Deep Learning](http://arxiv.org/abs/2209.10785v2) | Sasun Hambardzumyan, Abhinav Tuli, Levon Ghukasyan, et al. | 2022&#8209;09&#8209;22 | `Docs:` [docs/integrations/providers/activeloop_deeplake](https://python.langchain.com/docs/integrations/providers/activeloop_deeplake)
@@ -49,7 +49,7 @@ From the opposite direction, scientists use `LangChain` in research and referenc
| `2204.00498v1` [Evaluating the Text-to-SQL Capabilities of Large Language Models](http://arxiv.org/abs/2204.00498v1) | Nitarshan Rajkumar, Raymond Li, Dzmitry Bahdanau | 2022&#8209;03&#8209;15 | `Docs:` [docs/tutorials/sql_qa](https://python.langchain.com/docs/tutorials/sql_qa), `API:` [langchain_community...SQLDatabase](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.sql_database.SQLDatabase.html#langchain_community.utilities.sql_database.SQLDatabase), [langchain_community...SparkSQL](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.spark_sql.SparkSQL.html#langchain_community.utilities.spark_sql.SparkSQL)
| `2202.00666v5` [Locally Typical Sampling](http://arxiv.org/abs/2202.00666v5) | Clara Meister, Tiago Pimentel, Gian Wiher, et al. | 2022&#8209;02&#8209;01 | `API:` [langchain_huggingface...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_community...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint)
| `2112.01488v3` [ColBERTv2: Effective and Efficient Retrieval via Lightweight Late Interaction](http://arxiv.org/abs/2112.01488v3) | Keshav Santhanam, Omar Khattab, Jon Saad-Falcon, et al. | 2021&#8209;12&#8209;02 | `Docs:` [docs/integrations/retrievers/ragatouille](https://python.langchain.com/docs/integrations/retrievers/ragatouille), [docs/integrations/providers/ragatouille](https://python.langchain.com/docs/integrations/providers/ragatouille), [docs/concepts](https://python.langchain.com/docs/concepts), [docs/integrations/providers/dspy](https://python.langchain.com/docs/integrations/providers/dspy)
| `2103.00020v1` [Learning Transferable Visual Models From Natural Language Supervision](http://arxiv.org/abs/2103.00020v1) | Alec Radford, Jong Wook Kim, Chris Hallacy, et al. | 2021&#8209;02&#8209;26 | `API:` [langchain_experimental.open_clip](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.open_clip)
| `2103.00020v1` [Learning Transferable Visual Models From Natural Language Supervision](http://arxiv.org/abs/2103.00020v1) | Alec Radford, Jong Wook Kim, Chris Hallacy, et al. | 2021&#8209;02&#8209;26 | `API:` [langchain_experimental.open_clip](https://python.langchain.com/api_reference/experimental/open_clip.html)
| `2005.14165v4` [Language Models are Few-Shot Learners](http://arxiv.org/abs/2005.14165v4) | Tom B. Brown, Benjamin Mann, Nick Ryder, et al. | 2020&#8209;05&#8209;28 | `Docs:` [docs/concepts](https://python.langchain.com/docs/concepts)
| `2005.11401v4` [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](http://arxiv.org/abs/2005.11401v4) | Patrick Lewis, Ethan Perez, Aleksandra Piktus, et al. | 2020&#8209;05&#8209;22 | `Docs:` [docs/concepts](https://python.langchain.com/docs/concepts)
| `1909.05858v2` [CTRL: A Conditional Transformer Language Model for Controllable Generation](http://arxiv.org/abs/1909.05858v2) | Nitish Shirish Keskar, Bryan McCann, Lav R. Varshney, et al. | 2019&#8209;09&#8209;11 | `API:` [langchain_huggingface...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_community...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint)
@@ -433,7 +433,7 @@ for retrieval-augmented LLM.
- **arXiv id:** [2305.08291v1](http://arxiv.org/abs/2305.08291v1) **Published Date:** 2023-05-15
- **LangChain:**
- **API Reference:** [langchain_experimental.tot](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.tot)
- **API Reference:** [langchain_experimental.tot](https://python.langchain.com/api_reference/experimental/tot.html)
- **Cookbook:** [tree_of_thought](https://github.com/langchain-ai/langchain/blob/master/cookbook/tree_of_thought.ipynb)
**Abstract:** In this paper, we introduce the Tree-of-Thought (ToT) framework, a novel
@@ -490,7 +490,7 @@ https://github.com/AGI-Edgerunners/Plan-and-Solve-Prompting.
- **LangChain:**
- **Documentation:** [docs/how_to/contextual_compression](https://python.langchain.com/docs/how_to/contextual_compression)
- **API Reference:** [langchain...LLMListwiseRerank](https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.document_compressors.listwise_rerank.LLMListwiseRerank.html#langchain.retrievers.document_compressors.listwise_rerank.LLMListwiseRerank)
- **API Reference:** [langchain...LLMListwiseRerank](https://python.langchain.com/api_reference/langchain/retrievers/langchain.retrievers.document_compressors.listwise_rerank.LLMListwiseRerank.html#)
**Abstract:** Supervised ranking methods based on bi-encoder or cross-encoder architectures
have shown success in multi-stage text ranking tasks, but they require large
@@ -597,7 +597,7 @@ agents and beyond: https://github.com/camel-ai/camel.
- **arXiv id:** [2303.17580v4](http://arxiv.org/abs/2303.17580v4) **Published Date:** 2023-03-30
- **LangChain:**
- **API Reference:** [langchain_experimental.autonomous_agents](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.autonomous_agents)
- **API Reference:** [langchain_experimental.autonomous_agents](https://python.langchain.com/api_reference/experimental/autonomous_agents.html)
- **Cookbook:** [hugginggpt](https://github.com/langchain-ai/langchain/blob/master/cookbook/hugginggpt.ipynb)
**Abstract:** Solving complicated AI tasks with different domains and modalities is a key
@@ -704,7 +704,7 @@ labels.
- **arXiv id:** [2212.07425v3](http://arxiv.org/abs/2212.07425v3) **Published Date:** 2022-12-12
- **LangChain:**
- **API Reference:** [langchain_experimental.fallacy_removal](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.fallacy_removal)
- **API Reference:** [langchain_experimental.fallacy_removal](https://python.langchain.com/api_reference/experimental/fallacy_removal.html)
**Abstract:** The spread of misinformation, propaganda, and flawed argumentation has been
amplified in the Internet era. Given the volume of data and the subtlety of
@@ -759,7 +759,7 @@ performance across three real-world tasks on multiple LLMs.
- **arXiv id:** [2211.10435v2](http://arxiv.org/abs/2211.10435v2) **Published Date:** 2022-11-18
- **LangChain:**
- **API Reference:** [langchain_experimental.pal_chain](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.pal_chain), [langchain_experimental...PALChain](https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html#langchain_experimental.pal_chain.base.PALChain)
- **API Reference:** [langchain_experimental.pal_chain](https://python.langchain.com/api_reference/experimental/pal_chain.html), [langchain_experimental...PALChain](https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html#langchain_experimental.pal_chain.base.PALChain)
- **Cookbook:** [program_aided_language_model](https://github.com/langchain-ai/langchain/blob/master/cookbook/program_aided_language_model.ipynb)
**Abstract:** Large language models (LLMs) have recently demonstrated an impressive ability
@@ -992,7 +992,7 @@ footprint of late interaction models by 6--10$\times$.
- **arXiv id:** [2103.00020v1](http://arxiv.org/abs/2103.00020v1) **Published Date:** 2021-02-26
- **LangChain:**
- **API Reference:** [langchain_experimental.open_clip](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.open_clip)
- **API Reference:** [langchain_experimental.open_clip](https://python.langchain.com/api_reference/experimental/open_clip.html)
**Abstract:** State-of-the-art computer vision systems are trained to predict a fixed set
of predetermined object categories. This restricted form of supervision limits

View File

@@ -12,6 +12,7 @@
### [by Mayo Oshin](https://www.youtube.com/@chatwithdata/search?query=langchain)
### [by 1 little Coder](https://www.youtube.com/playlist?list=PLpdmBGJ6ELUK-v0MK-t4wZmVEbxM5xk6L)
### [by BobLin (Chinese language)](https://www.youtube.com/playlist?list=PLbd7ntv6PxC3QMFQvtWfk55p-Op_syO1C)
### [by Total Technology Zonne](https://youtube.com/playlist?list=PLI8raxzYtfGyE02fAxiM1CPhLUuqcTLWg&si=fkAye16rQKBJVHc9)
## Courses

View File

@@ -65,7 +65,7 @@ A package to deploy LangChain chains as REST APIs. Makes it easy to get a produc
:::important
LangServe is designed to primarily deploy simple Runnables and work with well-known primitives in langchain-core.
If you need a deployment option for LangGraph, you should instead be looking at LangGraph Cloud (beta) which will be better suited for deploying LangGraph applications.
If you need a deployment option for LangGraph, you should instead be looking at LangGraph Platform (beta) which will be better suited for deploying LangGraph applications.
:::
For more information, see the [LangServe documentation](/docs/langserve).
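As a minimal sketch of what a LangServe deployment looks like (the trivial Runnable, route path, and port here are illustrative assumptions, not from the LangServe docs):

```python
from fastapi import FastAPI
from langchain_core.runnables import RunnableLambda
from langserve import add_routes

app = FastAPI(title="Example LangServe app")

# Any LangChain Runnable can be served; a trivial one is used for illustration.
add_routes(app, RunnableLambda(lambda text: text.upper()), path="/upper")

# Run with: uvicorn app:app --port 8000
# The Runnable is then invocable at POST /upper/invoke
```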

View File

@@ -3,7 +3,7 @@
:::info[Prerequisites]
* [Documents](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html)
* [Documents](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html)
:::

View File

@@ -48,7 +48,7 @@ The conceptual guide does not cover step-by-step instructions or specific implem
- **[AIMessage](/docs/concepts/messages#aimessage)**: Represents a complete response from an AI model.
- **[astream_events](/docs/concepts/chat_models#key-methods)**: Stream granular information from [LCEL](/docs/concepts/lcel) chains.
- **[BaseTool](/docs/concepts/tools/#tool-interface)**: The base class for all tools in LangChain.
- **[batch](/docs/concepts/runnables)**: Use to execute a runnable with batch inputs a Runnable.
- **[batch](/docs/concepts/runnables)**: Use to execute a runnable with batch inputs.
- **[bind_tools](/docs/concepts/tool_calling/#tool-binding)**: Allows models to interact with tools.
- **[Caching](/docs/concepts/chat_models#caching)**: Storing results to avoid redundant calls to a chat model.
- **[Chat models](/docs/concepts/multimodality/#multimodality-in-chat-models)**: Chat models that handle multiple data modalities.
@@ -70,7 +70,7 @@ The conceptual guide does not cover step-by-step instructions or specific implem
- **[langchain-core](/docs/concepts/architecture#langchain-core)**: Core langchain package. Includes base interfaces and in-memory implementations.
- **[langchain](/docs/concepts/architecture#langchain)**: A package for higher level components (e.g., some pre-built chains).
- **[langgraph](/docs/concepts/architecture#langgraph)**: Powerful orchestration layer for LangChain. Use to build complex pipelines and workflows.
- **[langserve](/docs/concepts/architecture#langserve)**: Use to deploy LangChain Runnables as REST endpoints. Uses FastAPI. Works primarily for LangChain Runnables, does not currently integrate with LangGraph.
- **[langserve](/docs/concepts/architecture#langserve)**: Used to deploy LangChain Runnables as REST endpoints. Uses FastAPI. Works primarily for LangChain Runnables, does not currently integrate with LangGraph.
- **[LLMs (legacy)](/docs/concepts/text_llms)**: Older language models that take a string as input and return a string as output.
- **[Managing chat history](/docs/concepts/chat_history#managing-chat-history)**: Techniques to maintain and manage the chat history.
- **[OpenAI format](/docs/concepts/messages#openai-format)**: OpenAI's message format for chat models.
@@ -79,7 +79,7 @@ The conceptual guide does not cover step-by-step instructions or specific implem
- **[RemoveMessage](/docs/concepts/messages/#removemessage)**: An abstraction used to remove a message from chat history, used primarily in LangGraph.
- **[role](/docs/concepts/messages#role)**: Represents the role (e.g., user, assistant) of a chat message.
- **[RunnableConfig](/docs/concepts/runnables/#runnableconfig)**: Use to pass run time information to Runnables (e.g., `run_name`, `run_id`, `tags`, `metadata`, `max_concurrency`, `recursion_limit`, `configurable`).
- **[Standard parameters for chat models](/docs/concepts/chat_models#standard-parameters)**: Parameters such as API key, `temperature`, and `max_tokens`,
- **[Standard parameters for chat models](/docs/concepts/chat_models#standard-parameters)**: Parameters such as API key, `temperature`, and `max_tokens`.
- **[Standard tests](/docs/concepts/testing#standard-tests)**: A defined set of unit and integration tests that all integrations must pass.
- **[stream](/docs/concepts/streaming)**: Use to stream output from a Runnable or a graph.
- **[Tokenization](/docs/concepts/tokens)**: The process of converting data into tokens and vice versa.

View File

@@ -221,7 +221,7 @@ They are particularly useful for storing and querying complex relationships betw
LangChain provides a unified interface for interacting with various retrieval systems through the [retriever](/docs/concepts/retrievers/) concept. The interface is straightforward:
1. Input: A query (string)
2. Output: A list of documents (standardized LangChain [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) objects)
2. Output: A list of documents (standardized LangChain [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) objects)
You can create a retriever using any of the retrieval systems mentioned earlier. The query analysis techniques we discussed are particularly useful here, as they enable natural language interfaces for databases that typically require structured query languages.
For example, you can build a retriever for a SQL database using text-to-SQL conversion. This allows a natural language query (string) to be transformed into a SQL query behind the scenes.

View File

@@ -18,7 +18,7 @@ Because of their importance and variability, LangChain provides a uniform interf
The LangChain [retriever](/docs/concepts/retrievers/) interface is straightforward:
1. Input: A query (string)
2. Output: A list of documents (standardized LangChain [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) objects)
2. Output: A list of documents (standardized LangChain [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) objects)
## Key concept
@@ -29,7 +29,7 @@ All retrievers implement a simple interface for retrieving documents using natur
## Interface
The only requirement for a retriever is the ability to accept a query and return documents.
In particular, [LangChain's retriever class](https://api.python.langchain.com/en/latest/retrievers/langchain_core.retrievers.BaseRetriever.html) only requires that the `_get_relevant_documents` method is implemented, which takes a `query: str` and returns a list of [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) objects that are most relevant to the query.
In particular, [LangChain's retriever class](https://python.langchain.com/api_reference/core/retrievers/langchain_core.retrievers.BaseRetriever.html#) only requires that the `_get_relevant_documents` method is implemented, which takes a `query: str` and returns a list of [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) objects that are most relevant to the query.
The underlying logic used to get relevant documents is specified by the retriever and can be whatever is most useful for the application.
A LangChain retriever is a [runnable](/docs/how_to/lcel_cheatsheet/), which is a standard interface for LangChain components.
@@ -39,7 +39,7 @@ This means that it has a few common methods, including `invoke`, that are used t
docs = retriever.invoke(query)
```
Retrievers return a list of [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) objects, which have two attributes:
Retrievers return a list of [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) objects, which have two attributes:
* `page_content`: The content of this document. Currently this is a string.
* `metadata`: Arbitrary metadata associated with this document (e.g., document id, file name, source, etc).
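To make the interface concrete, here is a minimal sketch of a custom retriever (the class name and keyword-matching logic are illustrative, not part of LangChain):

```python
from typing import List

from langchain_core.callbacks import CallbackManagerForRetrieverRun
from langchain_core.documents import Document
from langchain_core.retrievers import BaseRetriever


class KeywordRetriever(BaseRetriever):
    """Toy retriever that returns documents containing the query string."""

    documents: List[Document]
    k: int = 5

    def _get_relevant_documents(
        self, query: str, *, run_manager: CallbackManagerForRetrieverRun
    ) -> List[Document]:
        matches = [
            doc for doc in self.documents
            if query.lower() in doc.page_content.lower()
        ]
        return matches[: self.k]
```

Because every retriever is a runnable, `KeywordRetriever(documents=docs).invoke("some query")` works without any extra wiring.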

View File

@@ -114,7 +114,7 @@ Please see the [Configurable Runnables](#configurable-runnables) section for mor
| Method | Description |
|-------------------------|------------------------------------------------------------------|
| `get_input_schema` | Gives the Pydantic Schema of the input schema for the Runnable. |
| `get_output_chema` | Gives the Pydantic Schema of the output schema for the Runnable. |
| `get_output_schema` | Gives the Pydantic Schema of the output schema for the Runnable. |
| `config_schema` | Gives the Pydantic Schema of the config schema for the Runnable. |
| `get_input_jsonschema` | Gives the JSONSchema of the input schema for the Runnable. |
| `get_output_jsonschema` | Gives the JSONSchema of the output schema for the Runnable. |
@@ -125,7 +125,7 @@ Please see the [Configurable Runnables](#configurable-runnables) section for mor
LangChain will automatically try to infer the input and output types of a Runnable based on available information.
Currently, this inference does not work well for more complex Runnables that are built using [LCEL](/docs/concepts/lcel) composition, and the inferred input and / or output types may be incorrect. In these cases, we recommend that users override the inferred input and output types using the `with_types` method ([API Reference](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.with_types
Currently, this inference does not work well for more complex Runnables that are built using [LCEL](/docs/concepts/lcel) composition, and the inferred input and / or output types may be incorrect. In these cases, we recommend that users override the inferred input and output types using the `with_types` method ([API Reference](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.with_types
).
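For instance (a minimal sketch; the lambda is illustrative):

```python
from langchain_core.runnables import RunnableLambda

# Type inference cannot see through the lambda, so declare the types explicitly.
chain = RunnableLambda(lambda x: str(x)).with_types(input_type=int, output_type=str)
```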
## RunnableConfig
@@ -323,7 +323,7 @@ multiple Runnables and you need to add custom processing logic in one of the ste
There are two ways to create a custom Runnable from a function:
* `RunnableLambda`: Use this simple transformations where streaming is not required.
* `RunnableLambda`: Use this for simple transformations where streaming is not required.
* `RunnableGenerator`: use this for more complex transformations when streaming is needed.
See the [How to run custom functions](/docs/how_to/functions) guide for more information on how to use `RunnableLambda` and `RunnableGenerator`.
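As a quick, self-contained sketch (the functions are illustrative):

```python
from typing import Iterator

from langchain_core.runnables import RunnableGenerator, RunnableLambda

# RunnableLambda: a simple one-shot transformation (no streaming).
shout = RunnableLambda(lambda text: text.upper())
shout.invoke("hello")  # -> "HELLO"

# RunnableGenerator: transforms a stream of input chunks into a stream of output chunks.
def add_exclamation(chunks: Iterator[str]) -> Iterator[str]:
    for chunk in chunks:
        yield chunk + "!"

excited = RunnableGenerator(add_exclamation)
for chunk in excited.stream("hi"):
    print(chunk)  # -> "hi!"
```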
@@ -347,6 +347,6 @@ Sometimes you may want to experiment with, or even expose to the end user, multi
To simplify this process, the Runnable interface provides two methods for creating configurable Runnables at runtime:
* `configurable_fields`: This method allows you to configure specific **attributes** in a Runnable. For example, the `temperature` attribute of a chat model.
* `configurable_alternatives`: This method enables you to specify **alternative** Runnables that can be run during run time. For example, you could specify a list of different chat models that can be used.
* `configurable_alternatives`: This method enables you to specify **alternative** Runnables that can be run during runtime. For example, you could specify a list of different chat models that can be used.
See the [How to configure runtime chain internals](/docs/how_to/configure) guide for more information on how to configure runtime chain internals.
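For example, a minimal sketch of `configurable_fields` (assuming the `langchain-openai` package; any chat model exposing a `temperature` attribute would do):

```python
from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI  # assumption: any chat model with a temperature field works

model = ChatOpenAI(temperature=0).configurable_fields(
    temperature=ConfigurableField(
        id="llm_temperature",
        name="LLM Temperature",
        description="The sampling temperature of the chat model",
    )
)

# Default temperature (0):
model.invoke("pick a random number")
# Overridden at run time via RunnableConfig:
model.with_config(configurable={"llm_temperature": 0.9}).invoke("pick a random number")
```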

View File

@@ -59,7 +59,7 @@ vector_store = InMemoryVectorStore(embedding=SomeEmbeddingModel())
To add documents, use the `add_documents` method.
This API works with a list of [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) objects.
This API works with a list of [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) objects.
`Document` objects all have `page_content` and `metadata` attributes, making them a universal way to store unstructured text and associated metadata.
```python
@@ -126,7 +126,7 @@ to the documentation of the specific vectorstore you are using to see what simil
Given a similarity metric to measure the distance between the embedded query and any embedded document, we need an algorithm to efficiently search over *all* the embedded documents to find the most similar ones.
There are various ways to do this. As an example, many vectorstores implement [HNSW (Hierarchical Navigable Small World)](https://www.pinecone.io/learn/series/faiss/hnsw/), a graph-based index structure that allows for efficient similarity search.
Regardless of the search algorithm used under the hood, the LangChain vectorstore interface has a `similarity_search` method for all integrations.
This will take the search query, create an embedding, find similar documents, and return them as a list of [Documents](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html).
This will take the search query, create an embedding, find similar documents, and return them as a list of [Documents](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html).
```python
query = "my query"

View File

@@ -1,17 +1,20 @@
---
pagination_prev: null
pagination_next: contributing/how_to/integrations/package
---
# Contribute Integrations
LangChain integrations are packages that provide access to language models, vector stores, and other components that can be used in LangChain.
Integrations are a core component of LangChain.
LangChain provides standard interfaces for several different components (language models, vector stores, etc) that are crucial when building LLM applications.
This guide will walk you through how to contribute new integrations to LangChain, by
publishing an integration package to PyPi, and adding documentation for it
to the LangChain Monorepo.
These instructions will evolve over the next few months as we improve our integration
processes.
## Why contribute an integration to LangChain?
- **Discoverability:** LangChain is the most used framework for building LLM applications, with over 20 million monthly downloads. LangChain integrations are discoverable by a large community of GenAI builders.
- **Interoperability:** LangChain components expose a standard interface, allowing developers to easily swap them for each other. If you implement a LangChain integration, any developer using a different component will easily be able to swap yours in.
- **Best Practices:** Through their standard interface, LangChain components encourage and facilitate best practices (streaming, async, etc)
## Components to Integrate
@@ -22,8 +25,7 @@ supported in LangChain
:::
While any component can be integrated into LangChain, at this time we are only accepting
new integrations in the docs of the following kinds:
While any component can be integrated into LangChain, there are specific types of integrations we encourage more:
<table>
<tr>
@@ -36,7 +38,6 @@ new integrations in the docs of the following kinds:
<li>Chat Models</li>
<li>Tools/Toolkits</li>
<li>Retrievers</li>
<li>Document Loaders</li>
<li>Vector Stores</li>
<li>Embedding Models</li>
</ul>
@@ -44,6 +45,7 @@ new integrations in the docs of the following kinds:
<td>
<ul>
<li>LLMs (Text-Completion Models)</li>
<li>Document Loaders</li>
<li>Key-Value Stores</li>
<li>Document Transformers</li>
<li>Model Caches</li>
@@ -60,18 +62,30 @@ new integrations in the docs of the following kinds:
## How to contribute an integration
The only step necessary to "be" a LangChain integration is to add documentation
that will render on this site (https://python.langchain.com/).
In order to contribute an integration, you should follow these steps:
As a prerequisite to adding your integration to our documentation, you must:
1. Confirm that your integration is in the [list of components](#components-to-integrate) we are currently accepting.
2. [Implement your package](./package.mdx) and publish it to a public github repository.
1. Confirm that your integration is in the [list of components](#components-to-integrate) we are currently encouraging.
2. [Implement your package](/docs/contributing/how_to/integrations/package/) and publish it to a public GitHub repository.
3. [Implement the standard tests](./standard_tests) for your integration and successfully run them.
4. [Publish your integration](./publish.mdx) by publishing the package to PyPi and add docs in the `docs/docs/integrations` directory of the LangChain monorepo.
5. [Optional] Open and merge a PR to add documentation for your integration to the official LangChain docs.
6. [Optional] Engage with the LangChain team for joint co-marketing ([see below](#co-marketing)).
Once you have completed these steps, you can submit a PR to the LangChain monorepo to add your integration to the documentation.
## Co-Marketing
With over 20 million monthly downloads, LangChain has a large audience of developers building LLM applications.
Besides just adding integrations, we also like to show them examples of cool tools or APIs they can use.
While traditionally called "co-marketing", we like to think of this more as "co-education".
For that reason, while we are happy to highlight your integration through our social media channels, we prefer to highlight examples that also serve some educational purpose.
Our main social media channels are Twitter and LinkedIn.
Here are some heuristics for types of content we are excited to promote:
- **Integration announcement:** If you announce the integration with a link to the LangChain documentation page, we are happy to re-tweet/re-share on Twitter/LinkedIn.
- **Educational content:** We highlight good educational content on the weekends - if you write a good blog or make a good YouTube video, we are happy to share there! Note that we prefer content that is NOT framed as "here's how to use integration XYZ", but rather "here's how to do ABC", as we find that is more educational and helpful for developers.
- **End-to-end applications:** End-to-end applications are great resources for developers looking to build. We prefer to highlight applications that are more complex/agentic in nature, and that use [LangGraph](https://github.com/langchain-ai/langgraph) as the orchestration framework. We get particularly excited about anything involving long-term memory, human-in-the-loop interaction patterns, or multi-agent architectures.
- **Research:** We love highlighting novel research! Whether it is research built on top of LangChain or that integrates with it.
## Further Reading
To get started, let's learn [how to bootstrap a new integration package](./package.mdx) for LangChain.
To get started, let's learn [how to implement an integration package](/docs/contributing/how_to/integrations/package/) for LangChain.

View File

@@ -2,23 +2,109 @@
pagination_next: contributing/how_to/integrations/standard_tests
pagination_prev: contributing/how_to/integrations/index
---
# How to bootstrap a new integration package
# How to implement an integration package
This guide walks through the process of publishing a new LangChain integration
package to PyPi.
This guide walks through the process of implementing a LangChain integration
package.
Integration packages are just Python packages that can be installed with `pip install <your-package>`,
which contain classes that are compatible with LangChain's core interfaces.
In this guide, we will be using [Poetry](https://python-poetry.org/) for
dependency management and packaging, and you're welcome to use any other tools you prefer.
We will cover:
## **Prerequisites**
1. (Optional) How to bootstrap a new integration package
2. How to implement components, such as [chat models](/docs/concepts/chat_models/) and [vector stores](/docs/concepts/vectorstores/), that adhere
to the LangChain interface.
## (Optional) bootstrapping a new integration package
In this section, we will outline 2 options for bootstrapping a new integration package,
and you're welcome to use other tools if you prefer!
1. **langchain-cli**: This is a command-line tool that can be used to bootstrap a new integration package with a template for LangChain components and Poetry for dependency management.
2. **Poetry**: This is a Python dependency management tool that can be used to bootstrap a new Python package with dependencies. You can then add LangChain components to this package.
<details>
<summary>Option 1: langchain-cli (recommended)</summary>
In this guide, we will be using the `langchain-cli` to create a new integration package
from a template, which can be edited to implement your LangChain components.
### **Prerequisites**
- [GitHub](https://github.com) account
- [PyPi](https://pypi.org/) account
## Boostrapping a new Python package with Poetry
### Bootstrapping a new Python package with langchain-cli
First, install `langchain-cli` and `poetry`:
```bash
pip install langchain-cli poetry
```
Next, come up with a name for your package. For this guide, we'll use `langchain-parrot-link`.
You can confirm that the name is available on PyPi by searching for it on the [PyPi website](https://pypi.org/).
Next, create your new Python package with `langchain-cli`, and navigate into the new directory with `cd`:
```bash
langchain-cli integration new
> The name of the integration to create (e.g. `my-integration`): parrot-link
> Name of integration in PascalCase [ParrotLink]:
cd parrot-link
```
Next, let's add any dependencies we need:
```bash
poetry add my-integration-sdk
```
We can also add some `typing` or `test` dependencies in a separate poetry dependency group.
```bash
poetry add --group typing my-typing-dep
poetry add --group test my-test-dep
```
And finally, have poetry set up a virtual environment with your dependencies, as well
as your integration package:
```bash
poetry install --with lint,typing,test,test_integration
```
You now have a new Python package with a template for LangChain components! This
template comes with files for each integration type, and you're welcome to duplicate or
delete any of these files as needed (including the associated test files).
To create any individual file from the template, you can run, e.g.:
```bash
langchain-cli integration new \
--name parrot-link \
--name-class ParrotLink \
--src integration_template/chat_models.py \
--dst langchain_parrot_link/chat_models_2.py
```
</details>
<details>
<summary>Option 2: Poetry (manual)</summary>
In this guide, we will be using [Poetry](https://python-poetry.org/) for
dependency management and packaging, and you're welcome to use any other tools you prefer.
### **Prerequisites**
- [GitHub](https://github.com) account
- [PyPi](https://pypi.org/) account
### Bootstrapping a new Python package with Poetry
First, install Poetry:
@@ -64,7 +150,7 @@ poetry install --with test
You're now ready to start writing your integration package!
## Writing your integration
### Writing your integration
Let's say you're building a simple integration package that provides a `ChatParrotLink`
chat model integration for LangChain. Here's a simple example of what your project
@@ -86,183 +172,12 @@ All of these files should already exist from step 1, except for
`chat_models.py` and `test_chat_models.py`! We will implement `test_chat_models.py`
later, following the [standard tests](../standard_tests) guide.
To implement `chat_models.py`, let's copy the implementation from our
[Custom Chat Model Guide](../../../../how_to/custom_chat_model).
For `chat_models.py`, simply paste the contents of the chat model implementation
[above](#implementing-langchain-components).
<details>
<summary>chat_models.py</summary>
```python title="langchain_parrot_link/chat_models.py"
from typing import Any, Dict, Iterator, List, Optional
from langchain_core.callbacks import (
CallbackManagerForLLMRun,
)
from langchain_core.language_models import BaseChatModel
from langchain_core.messages import (
AIMessage,
AIMessageChunk,
BaseMessage,
)
from langchain_core.messages.ai import UsageMetadata
from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult
from pydantic import Field
class ChatParrotLink(BaseChatModel):
"""A custom chat model that echoes the first `parrot_buffer_length` characters
of the input.
When contributing an implementation to LangChain, carefully document
the model including the initialization parameters, include
an example of how to initialize the model and include any relevant
links to the underlying models documentation or API.
Example:
.. code-block:: python
model = ChatParrotLink(parrot_buffer_length=2, model="bird-brain-001")
result = model.invoke([HumanMessage(content="hello")])
result = model.batch([[HumanMessage(content="hello")],
[HumanMessage(content="world")]])
"""
model_name: str = Field(alias="model")
"""The name of the model"""
parrot_buffer_length: int
"""The number of characters from the last message of the prompt to be echoed."""
temperature: Optional[float] = None
max_tokens: Optional[int] = None
timeout: Optional[int] = None
stop: Optional[List[str]] = None
max_retries: int = 2
def _generate(
self,
messages: List[BaseMessage],
stop: Optional[List[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
**kwargs: Any,
) -> ChatResult:
"""Override the _generate method to implement the chat model logic.
This can be a call to an API, a call to a local model, or any other
implementation that generates a response to the input prompt.
Args:
messages: the prompt composed of a list of messages.
stop: a list of strings on which the model should stop generating.
If generation stops due to a stop token, the stop token itself
SHOULD BE INCLUDED as part of the output. This is not enforced
across models right now, but it's a good practice to follow since
it makes it much easier to parse the output of the model
downstream and understand why generation stopped.
run_manager: A run manager with callbacks for the LLM.
"""
# Replace this with actual logic to generate a response from a list
# of messages.
last_message = messages[-1]
tokens = last_message.content[: self.parrot_buffer_length]
ct_input_tokens = sum(len(message.content) for message in messages)
ct_output_tokens = len(tokens)
message = AIMessage(
content=tokens,
additional_kwargs={}, # Used to add additional payload to the message
response_metadata={ # Use for response metadata
"time_in_seconds": 3,
},
usage_metadata={
"input_tokens": ct_input_tokens,
"output_tokens": ct_output_tokens,
"total_tokens": ct_input_tokens + ct_output_tokens,
},
)
generation = ChatGeneration(message=message)
return ChatResult(generations=[generation])
def _stream(
self,
messages: List[BaseMessage],
stop: Optional[List[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
**kwargs: Any,
) -> Iterator[ChatGenerationChunk]:
"""Stream the output of the model.
This method should be implemented if the model can generate output
in a streaming fashion. If the model does not support streaming,
do not implement it. In that case streaming requests will be automatically
handled by the _generate method.
Args:
messages: the prompt composed of a list of messages.
stop: a list of strings on which the model should stop generating.
If generation stops due to a stop token, the stop token itself
SHOULD BE INCLUDED as part of the output. This is not enforced
across models right now, but it's a good practice to follow since
it makes it much easier to parse the output of the model
downstream and understand why generation stopped.
run_manager: A run manager with callbacks for the LLM.
"""
last_message = messages[-1]
tokens = str(last_message.content[: self.parrot_buffer_length])
ct_input_tokens = sum(len(message.content) for message in messages)
for token in tokens:
usage_metadata = UsageMetadata(
{
"input_tokens": ct_input_tokens,
"output_tokens": 1,
"total_tokens": ct_input_tokens + 1,
}
)
ct_input_tokens = 0
chunk = ChatGenerationChunk(
message=AIMessageChunk(content=token, usage_metadata=usage_metadata)
)
if run_manager:
# This is optional in newer versions of LangChain
# The on_llm_new_token will be called automatically
run_manager.on_llm_new_token(token, chunk=chunk)
yield chunk
# Let's add some other information (e.g., response metadata)
chunk = ChatGenerationChunk(
message=AIMessageChunk(content="", response_metadata={"time_in_sec": 3})
)
if run_manager:
# This is optional in newer versions of LangChain
# The on_llm_new_token will be called automatically
run_manager.on_llm_new_token("", chunk=chunk)  # final chunk carries metadata only, no new token
yield chunk
@property
def _llm_type(self) -> str:
"""Get the type of language model used by this chat model."""
return "echoing-chat-model-advanced"
@property
def _identifying_params(self) -> Dict[str, Any]:
"""Return a dictionary of identifying parameters.
This information is used by the LangChain callback system, which
is used for tracing purposes and makes it possible to monitor LLMs.
"""
return {
# The model name allows users to specify custom token counting
# rules in LLM monitoring applications (e.g., in LangSmith users
# can provide per token pricing for their model and monitor
# costs for the given LLM.)
"model_name": self.model_name,
}
```
</details>
## Push your package to a public Github repository
### Push your package to a public GitHub repository
This is only required if you want to publish your integration in the LangChain documentation.
@@ -270,6 +185,290 @@ This is only required if you want to publish your integration in the LangChain d
2. Push your code to the repository.
3. Confirm that your repository is viewable by the public (e.g. in a private browsing window, where you're not logged into GitHub).
## Implementing LangChain components
LangChain components are subclasses of base classes in [langchain-core](/docs/concepts/architecture/#langchain-core).
Examples include [chat models](/docs/concepts/chat_models/),
[vector stores](/docs/concepts/vectorstores/), [tools](/docs/concepts/tools/),
[embedding models](/docs/concepts/embedding_models/) and [retrievers](/docs/concepts/retrievers/).
Your integration package will typically implement a subclass of at least one of these
components. Expand the tabs below to see details on each.
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import CodeBlock from '@theme/CodeBlock';
<Tabs>
<TabItem value="chat_models" label="Chat models">
Refer to the [Custom Chat Model Guide](/docs/how_to/custom_chat_model) for
detail on a starter chat model [implementation](/docs/how_to/custom_chat_model/#implementation).
You can start from the following template or langchain-cli command:
```bash
langchain-cli integration new \
--name parrot-link \
--name-class ParrotLink \
--src integration_template/chat_models.py \
--dst langchain_parrot_link/chat_models.py
```
<details>
<summary>Example chat model code</summary>
import ChatModelSource from '../../../../src/theme/integration_template/integration_template/chat_models.py';
<CodeBlock language="python" title="langchain_parrot_link/chat_models.py">
{
ChatModelSource.replaceAll('__ModuleName__', 'ParrotLink')
.replaceAll('__package_name__', 'langchain-parrot-link')
.replaceAll('__MODULE_NAME__', 'PARROT_LINK')
.replaceAll('__module_name__', 'langchain_parrot_link')
}
</CodeBlock>
</details>
</TabItem>
<TabItem value="vector_stores" label="Vector stores">
Your vector store implementation will depend on your chosen database technology.
`langchain-core` includes a minimal
[in-memory vector store](https://python.langchain.com/api_reference/core/vectorstores/langchain_core.vectorstores.in_memory.InMemoryVectorStore.html)
that we can use as a guide. You can access the code [here](https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/vectorstores/in_memory.py).
All vector stores must inherit from the [VectorStore](https://python.langchain.com/api_reference/core/vectorstores/langchain_core.vectorstores.base.VectorStore.html)
base class. This interface consists of methods for writing, deleting and searching
for documents in the vector store.
`VectorStore` supports a variety of synchronous and asynchronous search types (e.g.,
nearest-neighbor or maximum marginal relevance), as well as interfaces for adding
documents to the store. See the [API Reference](https://python.langchain.com/api_reference/core/vectorstores/langchain_core.vectorstores.base.VectorStore.html)
for all supported methods. The required methods are tabulated below:
| Method/Property | Description |
|------------------------ |------------------------------------------------------|
| `add_documents` | Add documents to the vector store. |
| `delete` | Delete selected documents from vector store (by IDs) |
| `get_by_ids` | Get selected documents from vector store (by IDs) |
| `similarity_search` | Get documents most similar to a query. |
| `embeddings` (property) | Embeddings object for vector store. |
| `from_texts` | Instantiate vector store via adding texts. |
Note that `InMemoryVectorStore` implements some optional search types, as well as
convenience methods for loading and dumping the object to a file, but this is not
necessary for all implementations.
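For orientation, the required methods look like this in use with the in-memory implementation (a sketch; the embeddings class is an assumption and any `Embeddings` implementation would do):

```python
from langchain_core.documents import Document
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_openai import OpenAIEmbeddings  # assumption: any Embeddings implementation works

store = InMemoryVectorStore(embedding=OpenAIEmbeddings())

store.add_documents([Document(id="1", page_content="Parrots can mimic human speech")])
store.get_by_ids(["1"])                        # fetch documents by ID
store.similarity_search("talking birds", k=1)  # embed the query and search
store.delete(ids=["1"])                        # remove documents by ID
```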
:::tip
The [in-memory vector store](https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/vectorstores/in_memory.py)
is tested against the standard tests in the LangChain GitHub repository.
:::
<details>
<summary>Example vector store code</summary>
import VectorstoreSource from '../../../../src/theme/integration_template/integration_template/vectorstores.py';
<CodeBlock language="python" title="langchain_parrot_link/vectorstores.py">
{
VectorstoreSource.replaceAll('__ModuleName__', 'ParrotLink')
.replaceAll('__package_name__', 'langchain-parrot-link')
.replaceAll('__MODULE_NAME__', 'PARROT_LINK')
.replaceAll('__module_name__', 'langchain_parrot_link')
}
</CodeBlock>
</details>
</TabItem>
<TabItem value="embeddings" label="Embeddings">
Embeddings are used to convert `str` objects from `Document.page_content` fields
into a vector representation (represented as a list of floats).
Refer to the [Custom Embeddings Guide](/docs/how_to/custom_embeddings) for
detail on a starter embeddings [implementation](/docs/how_to/custom_embeddings/#implementation).
You can start from the following template or langchain-cli command:
```bash
langchain-cli integration new \
--name parrot-link \
--name-class ParrotLink \
--src integration_template/embeddings.py \
--dst langchain_parrot_link/embeddings.py
```
<details>
<summary>Example embeddings code</summary>
import EmbeddingsSource from '/src/theme/integration_template/integration_template/embeddings.py';
<CodeBlock language="python" title="langchain_parrot_link/embeddings.py">
{
EmbeddingsSource.replaceAll('__ModuleName__', 'ParrotLink')
.replaceAll('__package_name__', 'langchain-parrot-link')
.replaceAll('__MODULE_NAME__', 'PARROT_LINK')
.replaceAll('__module_name__', 'langchain_parrot_link')
}
</CodeBlock>
</details>
</TabItem>
<TabItem value="tools" label="Tools">
Tools are used in 2 main ways:
1. To define an "input schema" or "args schema" to pass to a chat model's tool calling
feature along with a text request, such that the chat model can generate a "tool call",
or parameters to call the tool with.
2. To accept a "tool call" generated as above, perform some action, and return a response
that can be passed back to the chat model as a ToolMessage.
Your tool class must inherit from the [BaseTool](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.base.BaseTool.html#langchain_core.tools.base.BaseTool) base class. This interface has 3 properties and 2 methods that should be implemented in a
subclass.
| Method/Property | Description |
|------------------------ |------------------------------------------------------|
| `name` | Name of the tool (passed to the LLM too). |
| `description` | Description of the tool (passed to the LLM too). |
| `args_schema` | Define the schema for the tool's input arguments. |
| `_run` | Run the tool with the given arguments. |
| `_arun` | Asynchronously run the tool with the given arguments. |
### Properties
`name`, `description`, and `args_schema` are all properties that should be implemented
in the subclass. `name` and `description` are strings that are used to identify the tool
and provide a description of what the tool does. Both of these are passed to the LLM,
and users may override these values depending on the LLM they are using as a form of
"prompt engineering." Giving these a concise and LLM-usable name and description is
important for the initial user experience of the tool.
`args_schema` is a Pydantic `BaseModel` that defines the schema for the tool's input
arguments. This is used to validate the input arguments to the tool, and to provide
a schema for the LLM to fill out when calling the tool. Similar to the `name` and
`description` of the overall Tool class, the fields' names (the variable name) and
description (part of `Field(..., description="description")`) are passed to the LLM,
and the values in these fields should be concise and LLM-usable.
### Run Methods
`_run` is the main method that should be implemented in the subclass. This method
takes in the arguments from `args_schema` and runs the tool, returning a string
response. This method is usually called in a LangGraph [`ToolNode`](https://langchain-ai.github.io/langgraph/how-tos/tool-calling/), and can also be called in a legacy
`langchain.agents.AgentExecutor`.
`_arun` is optional because by default, `_run` will be run in an async executor.
However, if your tool is calling any APIs or doing any async work, you should implement
this method to run the tool asynchronously in addition to `_run`.
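To make this concrete, here is a minimal sketch of a tool; the `repeat` tool, its schema, and its logic are purely illustrative (replace the body with calls to your service):

```python
from typing import Optional, Type

from langchain_core.callbacks import CallbackManagerForToolRun
from langchain_core.tools import BaseTool
from pydantic import BaseModel, Field


class RepeatInput(BaseModel):
    """Input schema for the hypothetical `repeat` tool."""

    text: str = Field(..., description="The text to repeat back")
    times: int = Field(1, description="How many times to repeat the text")


class RepeatTool(BaseTool):
    """Hypothetical tool that repeats its input."""

    name: str = "repeat"
    description: str = "Repeat the given text a number of times."
    args_schema: Type[BaseModel] = RepeatInput

    def _run(
        self,
        text: str,
        times: int = 1,
        run_manager: Optional[CallbackManagerForToolRun] = None,
    ) -> str:
        return " ".join([text] * times)
```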
### Implementation
You can start from the following template or langchain-cli command:
```bash
langchain-cli integration new \
--name parrot-link \
--name-class ParrotLink \
--src integration_template/tools.py \
--dst langchain_parrot_link/tools.py
```
<details>
<summary>Example tool code</summary>
import ToolSource from '/src/theme/integration_template/integration_template/tools.py';
<CodeBlock language="python" title="langchain_parrot_link/tools.py">
{
ToolSource.replaceAll('__ModuleName__', 'ParrotLink')
.replaceAll('__package_name__', 'langchain-parrot-link')
.replaceAll('__MODULE_NAME__', 'PARROT_LINK')
.replaceAll('__module_name__', 'langchain_parrot_link')
}
</CodeBlock>
</details>
</TabItem>
<TabItem value="retrievers" label="Retrievers">
Retrievers are used to retrieve documents from APIs, databases, or other sources
based on a query. The `Retriever` class must inherit from the [BaseRetriever](https://python.langchain.com/api_reference/core/retrievers/langchain_core.retrievers.BaseRetriever.html) base class. This interface has 1 attribute and 2 methods that should be implemented in a subclass.
| Method/Property | Description |
|------------------------ |------------------------------------------------------|
| `k` | Default number of documents to retrieve (configurable). |
| `_get_relevant_documents` | Retrieve documents based on a query. |
| `_aget_relevant_documents` | Asynchronously retrieve documents based on a query. |
### Attributes
`k` is an attribute that should be implemented in the subclass. This attribute
can simply be defined at the top of the class with a default value like
`k: int = 5`. This attribute is the default number of documents to retrieve
from the retriever, and can be overridden by the user when constructing or calling
the retriever.
### Methods
`_get_relevant_documents` is the main method that should be implemented in the subclass.
This method takes in a query and returns a list of `Document` objects, which have 2
main properties:
- `page_content` - the text content of the document
- `metadata` - a dictionary of metadata about the document
Retrievers are typically directly invoked by a user, e.g. as
`MyRetriever(k=4).invoke("query")`, which will automatically call `_get_relevant_documents`
under the hood.
`_aget_relevant_documents` is optional because by default, `_get_relevant_documents` will
be run in an async executor. However, if your retriever is calling any APIs or doing
any async work, you should implement this method to run the retriever asynchronously
in addition to `_get_relevant_documents` for performance reasons.
### Implementation
You can start from the following template or langchain-cli command:
```bash
langchain-cli integration new \
--name parrot-link \
--name-class ParrotLink \
--src integration_template/retrievers.py \
--dst langchain_parrot_link/retrievers.py
```
<details>
<summary>Example retriever code</summary>
import RetrieverSource from '/src/theme/integration_template/integration_template/retrievers.py';
<CodeBlock language="python" title="langchain_parrot_link/retrievers.py">
{
RetrieverSource.replaceAll('__ModuleName__', 'ParrotLink')
.replaceAll('__package_name__', 'langchain-parrot-link')
.replaceAll('__MODULE_NAME__', 'PARROT_LINK')
.replaceAll('__module_name__', 'langchain_parrot_link')
}
</CodeBlock>
</details>
</TabItem>
</Tabs>
---
## Next Steps
Now that you've implemented your package, you can move on to [adding the standard tests](../standard_tests) for your integration and running them successfully.

View File

@@ -1,497 +0,0 @@
{
"cells": [
{
"cell_type": "raw",
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"pagination_next: contributing/how_to/integrations/publish\n",
"pagination_prev: contributing/how_to/integrations/package\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to add standard tests to an integration\n",
"\n",
"When creating either a custom class for yourself or to publish in a LangChain integration, it is important to add standard tests to ensure it works as expected. This guide will show you how to add standard tests to a custom chat model, and you can **[Skip to the test templates](#standard-test-templates-per-component)** for implementing tests for each integration type.\n",
"\n",
"## Setup\n",
"\n",
"If you're coming from the [previous guide](../package), you have already installed these dependencies, and you can skip this section.\n",
"\n",
"First, let's install 2 dependencies:\n",
"\n",
"- `langchain-core` will define the interfaces we want to import to define our custom tool.\n",
"- `langchain-tests` will provide the standard tests we want to use. Recommended to pin to the latest version: <img src=\"https://img.shields.io/pypi/v/langchain-tests\" style={{position:\"relative\",top:4,left:3}} />\n",
"\n",
":::note\n",
"\n",
"Because added tests in new versions of `langchain-tests` can break your CI/CD pipelines, we recommend pinning the \n",
"version of `langchain-tests` to avoid unexpected changes.\n",
"\n",
":::\n",
"\n",
"import Tabs from '@theme/Tabs';\n",
"import TabItem from '@theme/TabItem';\n",
"\n",
"<Tabs>\n",
" <TabItem value=\"poetry\" label=\"Poetry\" default>\n",
"If you followed the [previous guide](../package), you should already have these dependencies installed!\n",
"\n",
"```bash\n",
"poetry add langchain-core\n",
"poetry add --group test pytest pytest-socket pytest-asyncio langchain-tests==<latest_version>\n",
"poetry install --with test\n",
"```\n",
" </TabItem>\n",
" <TabItem value=\"pip\" label=\"Pip\">\n",
"```bash\n",
"pip install -U langchain-core pytest pytest-socket pytest-asyncio langchain-tests\n",
"\n",
"# install current package in editable mode\n",
"pip install --editable .\n",
"```\n",
" </TabItem>\n",
"</Tabs>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's say we're publishing a package, `langchain_parrot_link`, that exposes the chat model from the [guide on implementing the package](../package). We can add the standard tests to the package by following the steps below."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And we'll assume you've structured your package the same way as the main LangChain\n",
"packages:\n",
"\n",
"```plaintext\n",
"langchain-parrot-link/\n",
"├── langchain_parrot_link/\n",
"│ ├── __init__.py\n",
"│ └── chat_models.py\n",
"├── tests/\n",
"│ ├── __init__.py\n",
"│ └── test_chat_models.py\n",
"├── pyproject.toml\n",
"└── README.md\n",
"```\n",
"\n",
"## Add and configure standard tests\n",
"\n",
"There are 2 namespaces in the `langchain-tests` package: \n",
"\n",
"- [unit tests](../../../concepts/testing.mdx#unit-tests) (`langchain_tests.unit_tests`): designed to be used to test the component in isolation and without access to external services\n",
"- [integration tests](../../../concepts/testing.mdx#unit-tests) (`langchain_tests.integration_tests`): designed to be used to test the component with access to external services (in particular, the external service that the component is designed to interact with).\n",
"\n",
"Both types of tests are implemented as [`pytest` class-based test suites](https://docs.pytest.org/en/7.1.x/getting-started.html#group-multiple-tests-in-a-class).\n",
"\n",
"By subclassing the base classes for each type of standard test (see below), you get all of the standard tests for that type, and you\n",
"can override the properties that the test suite uses to configure the tests.\n",
"\n",
"### Standard chat model tests\n",
"\n",
"Here's how you would configure the standard unit tests for the custom chat model:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# title=\"tests/unit_tests/test_chat_models.py\"\n",
"from typing import Tuple, Type\n",
"\n",
"from langchain_parrot_link.chat_models import ChatParrotLink\n",
"from langchain_tests.unit_tests import ChatModelUnitTests\n",
"\n",
"\n",
"class TestChatParrotLinkUnit(ChatModelUnitTests):\n",
" @property\n",
" def chat_model_class(self) -> Type[ChatParrotLink]:\n",
" return ChatParrotLink\n",
"\n",
" @property\n",
" def chat_model_params(self) -> dict:\n",
" return {\n",
" \"model\": \"bird-brain-001\",\n",
" \"temperature\": 0,\n",
" \"parrot_buffer_length\": 50,\n",
" }"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# title=\"tests/integration_tests/test_chat_models.py\"\n",
"from typing import Type\n",
"\n",
"from langchain_parrot_link.chat_models import ChatParrotLink\n",
"from langchain_tests.integration_tests import ChatModelIntegrationTests\n",
"\n",
"\n",
"class TestChatParrotLinkIntegration(ChatModelIntegrationTests):\n",
" @property\n",
" def chat_model_class(self) -> Type[ChatParrotLink]:\n",
" return ChatParrotLink\n",
"\n",
" @property\n",
" def chat_model_params(self) -> dict:\n",
" return {\n",
" \"model\": \"bird-brain-001\",\n",
" \"temperature\": 0,\n",
" \"parrot_buffer_length\": 50,\n",
" }"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"and you would run these with the following commands from your project root\n",
"\n",
"<Tabs>\n",
" <TabItem value=\"poetry\" label=\"Poetry\" default>\n",
"\n",
"```bash\n",
"# run unit tests without network access\n",
"poetry run pytest --disable-socket --allow-unix-socket --asyncio-mode=auto tests/unit_tests\n",
"\n",
"# run integration tests\n",
"poetry run pytest --asyncio-mode=auto tests/integration_tests\n",
"```\n",
"\n",
" </TabItem>\n",
" <TabItem value=\"pip\" label=\"Pip\">\n",
"\n",
"```bash\n",
"# run unit tests without network access\n",
"pytest --disable-socket --allow-unix-socket --asyncio-mode=auto tests/unit_tests\n",
"\n",
"# run integration tests\n",
"pytest --asyncio-mode=auto tests/integration_tests\n",
"```\n",
"\n",
" </TabItem>\n",
"</Tabs>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Test suite information and troubleshooting\n",
"\n",
"For a full list of the standard test suites that are available, as well as\n",
"information on which tests are included and how to troubleshoot common issues,\n",
"see the [Standard Tests API Reference](https://python.langchain.com/api_reference/standard_tests/index.html).\n",
"\n",
"An increasing number of troubleshooting guides are being added to this documentation,\n",
"and if you're interested in contributing, feel free to add docstrings to tests in \n",
"[Github](https://github.com/langchain-ai/langchain/tree/master/libs/standard-tests/langchain_tests)!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Standard test templates per component:\n",
"\n",
"Above, we implement the **unit** and **integration** standard tests for a tool. Below are the templates for implementing the standard tests for each component:\n",
"\n",
"<details>\n",
" <summary>Chat Models</summary>\n",
" <p>Note: The standard tests for chat models are implemented in the example in the main body of this guide too.</p>"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# title=\"tests/unit_tests/test_chat_models.py\"\n",
"from typing import Type\n",
"\n",
"from langchain_parrot_link.chat_models import ChatParrotLink\n",
"from langchain_tests.unit_tests import ChatModelUnitTests\n",
"\n",
"\n",
"class TestChatParrotLinkUnit(ChatModelUnitTests):\n",
" @property\n",
" def chat_model_class(self) -> Type[ChatParrotLink]:\n",
" return ChatParrotLink\n",
"\n",
" @property\n",
" def chat_model_params(self) -> dict:\n",
" return {\n",
" \"model\": \"bird-brain-001\",\n",
" \"temperature\": 0,\n",
" \"parrot_buffer_length\": 50,\n",
" }"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# title=\"tests/integration_tests/test_chat_models.py\"\n",
"from typing import Type\n",
"\n",
"from langchain_parrot_link.chat_models import ChatParrotLink\n",
"from langchain_tests.integration_tests import ChatModelIntegrationTests\n",
"\n",
"\n",
"class TestChatParrotLinkIntegration(ChatModelIntegrationTests):\n",
" @property\n",
" def chat_model_class(self) -> Type[ChatParrotLink]:\n",
" return ChatParrotLink\n",
"\n",
" @property\n",
" def chat_model_params(self) -> dict:\n",
" return {\n",
" \"model\": \"bird-brain-001\",\n",
" \"temperature\": 0,\n",
" \"parrot_buffer_length\": 50,\n",
" }"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"</details>\n",
"<details>\n",
" <summary>Embedding Models</summary>"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# title=\"tests/unit_tests/test_embeddings.py\"\n",
"from typing import Tuple, Type\n",
"\n",
"from langchain_parrot_link.embeddings import ParrotLinkEmbeddings\n",
"from langchain_tests.unit_tests import EmbeddingsUnitTests\n",
"\n",
"\n",
"class TestParrotLinkEmbeddingsUnit(EmbeddingsUnitTests):\n",
" @property\n",
" def embeddings_class(self) -> Type[ParrotLinkEmbeddings]:\n",
" return ParrotLinkEmbeddings\n",
"\n",
" @property\n",
" def embedding_model_params(self) -> dict:\n",
" return {\"model\": \"nest-embed-001\", \"temperature\": 0}"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# title=\"tests/integration_tests/test_embeddings.py\"\n",
"from typing import Type\n",
"\n",
"from langchain_parrot_link.embeddings import ParrotLinkEmbeddings\n",
"from langchain_tests.integration_tests import EmbeddingsIntegrationTests\n",
"\n",
"\n",
"class TestParrotLinkEmbeddingsIntegration(EmbeddingsIntegrationTests):\n",
" @property\n",
" def embeddings_class(self) -> Type[ParrotLinkEmbeddings]:\n",
" return ParrotLinkEmbeddings\n",
"\n",
" @property\n",
" def embedding_model_params(self) -> dict:\n",
" return {\"model\": \"nest-embed-001\"}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"</details>\n",
"<details>\n",
" <summary>Tools/Toolkits</summary>"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# title=\"tests/unit_tests/test_tools.py\"\n",
"from typing import Type\n",
"\n",
"from langchain_parrot_link.tools import ParrotMultiplyTool\n",
"from langchain_tests.unit_tests import ToolsUnitTests\n",
"\n",
"\n",
"class TestParrotMultiplyToolUnit(ToolsUnitTests):\n",
" @property\n",
" def tool_constructor(self) -> Type[ParrotMultiplyTool]:\n",
" return ParrotMultiplyTool\n",
"\n",
" @property\n",
" def tool_constructor_params(self) -> dict:\n",
" # if your tool constructor instead required initialization arguments like\n",
" # `def __init__(self, some_arg: int):`, you would return those here\n",
" # as a dictionary, e.g.: `return {'some_arg': 42}`\n",
" return {}\n",
"\n",
" @property\n",
" def tool_invoke_params_example(self) -> dict:\n",
" \"\"\"\n",
" Returns a dictionary representing the \"args\" of an example tool call.\n",
"\n",
" This should NOT be a ToolCall dict - i.e. it should not\n",
" have {\"name\", \"id\", \"args\"} keys.\n",
" \"\"\"\n",
" return {\"a\": 2, \"b\": 3}"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# title=\"tests/integration_tests/test_tools.py\"\n",
"from typing import Type\n",
"\n",
"from langchain_parrot_link.tools import ParrotMultiplyTool\n",
"from langchain_tests.integration_tests import ToolsIntegrationTests\n",
"\n",
"\n",
"class TestParrotMultiplyToolIntegration(ToolsIntegrationTests):\n",
" @property\n",
" def tool_constructor(self) -> Type[ParrotMultiplyTool]:\n",
" return ParrotMultiplyTool\n",
"\n",
" @property\n",
" def tool_constructor_params(self) -> dict:\n",
" # if your tool constructor instead required initialization arguments like\n",
" # `def __init__(self, some_arg: int):`, you would return those here\n",
" # as a dictionary, e.g.: `return {'some_arg': 42}`\n",
" return {}\n",
"\n",
" @property\n",
" def tool_invoke_params_example(self) -> dict:\n",
" \"\"\"\n",
" Returns a dictionary representing the \"args\" of an example tool call.\n",
"\n",
" This should NOT be a ToolCall dict - i.e. it should not\n",
" have {\"name\", \"id\", \"args\"} keys.\n",
" \"\"\"\n",
" return {\"a\": 2, \"b\": 3}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"</details>\n",
"<details>\n",
" <summary>Vector Stores</summary>"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# title=\"tests/integration_tests/test_vectorstores_sync.py\"\n",
"\n",
"from typing import AsyncGenerator, Generator\n",
"\n",
"import pytest\n",
"from langchain_core.vectorstores import VectorStore\n",
"from langchain_parrot_link.vectorstores import ParrotVectorStore\n",
"from langchain_standard_tests.integration_tests.vectorstores import (\n",
" AsyncReadWriteTestSuite,\n",
" ReadWriteTestSuite,\n",
")\n",
"\n",
"\n",
"class TestSync(ReadWriteTestSuite):\n",
" @pytest.fixture()\n",
" def vectorstore(self) -> Generator[VectorStore, None, None]: # type: ignore\n",
" \"\"\"Get an empty vectorstore for unit tests.\"\"\"\n",
" store = ParrotVectorStore()\n",
" # note: store should be EMPTY at this point\n",
" # if you need to delete data, you may do so here\n",
" try:\n",
" yield store\n",
" finally:\n",
" # cleanup operations, or deleting data\n",
" pass\n",
"\n",
"\n",
"class TestAsync(AsyncReadWriteTestSuite):\n",
" @pytest.fixture()\n",
" async def vectorstore(self) -> AsyncGenerator[VectorStore, None]: # type: ignore\n",
" \"\"\"Get an empty vectorstore for unit tests.\"\"\"\n",
" store = ParrotVectorStore()\n",
" # note: store should be EMPTY at this point\n",
" # if you need to delete data, you may do so here\n",
" try:\n",
" yield store\n",
" finally:\n",
" # cleanup operations, or deleting data\n",
" pass"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"</details>"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -0,0 +1,393 @@
---
pagination_next: contributing/how_to/integrations/publish
pagination_prev: contributing/how_to/integrations/package
---
# How to add standard tests to an integration
When creating a custom class, either for your own use or to publish in a LangChain integration, it is important to add standard tests to ensure it works as expected. This guide will show you how to add standard tests to each integration type.
## Setup
First, let's install 2 dependencies:
- `langchain-core` will define the interfaces we want to import to define our custom tool.
- `langchain-tests` will provide the standard tests we want to use, as well as the pytest plugins necessary to run them. We recommend pinning to the latest version: <img src="https://img.shields.io/pypi/v/langchain-tests" style={{position:"relative",top:4,left:3}} />
:::note
Because added tests in new versions of `langchain-tests` can break your CI/CD pipelines, we recommend pinning the
version of `langchain-tests` to avoid unexpected changes.
:::
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
<Tabs>
<TabItem value="poetry" label="Poetry" default>
If you followed the [previous guide](../package), you should already have these dependencies installed!
```bash
poetry add langchain-core
poetry add --group test langchain-tests==<latest_version>
poetry install --with test
```
</TabItem>
<TabItem value="pip" label="Pip">
```bash
pip install -U langchain-core langchain-tests
# install current package in editable mode
pip install --editable .
```
</TabItem>
</Tabs>
## Add and configure standard tests
There are 2 namespaces in the `langchain-tests` package:
- [unit tests](../../../concepts/testing.mdx#unit-tests) (`langchain_tests.unit_tests`): designed to be used to test the component in isolation and without access to external services
- [integration tests](../../../concepts/testing.mdx#integration-tests) (`langchain_tests.integration_tests`): designed to be used to test the component with access to external services (in particular, the external service that the component is designed to interact with).
Both types of tests are implemented as [`pytest` class-based test suites](https://docs.pytest.org/en/7.1.x/getting-started.html#group-multiple-tests-in-a-class).
By subclassing the base classes for each type of standard test (see below), you get all of the standard tests for that type, and you
can override the properties that the test suite uses to configure the tests.
In order to run the tests in the same way as this guide, we recommend subclassing these
classes in test files under two test subdirectories:
- `tests/unit_tests` for unit tests
- `tests/integration_tests` for integration tests
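For example, assuming the package layout from the [previous guide](../package), the test tree might look like the following (file names are illustrative):
```plaintext
langchain-parrot-link/
└── tests/
    ├── unit_tests/
    │   ├── __init__.py
    │   └── test_chat_models.py
    └── integration_tests/
        ├── __init__.py
        └── test_chat_models.py
```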
### Implementing standard tests
import CodeBlock from '@theme/CodeBlock';
In the following tabs, we show how to implement the standard tests for
each component type:
<Tabs>
<TabItem value="chat_models" label="Chat models">
To configure standard tests for a chat model, we subclass `ChatModelUnitTests` and `ChatModelIntegrationTests`. On each subclass, we override the following `@property` methods to specify the chat model to be tested and the chat model's configuration:
| Property | Description |
| --- | --- |
| `chat_model_class` | The class for the chat model to be tested |
| `chat_model_params` | The parameters to pass to the chat model's constructor |
Additionally, chat model standard tests cover a range of behaviors, from the most basic requirements (generating a response to a query) to optional capabilities like multi-modal support and tool-calling. For a test run to be successful:
1. If a feature is intended to be supported by the model, it should pass;
2. If a feature is not intended to be supported by the model, it should be skipped.
Tests for "optional" capabilities are controlled via a set of properties that can be overridden on the test model subclass.
You can see the **entire list of configurable capabilities** in the API references for
[unit tests](https://python.langchain.com/api_reference/standard_tests/unit_tests/langchain_tests.unit_tests.chat_models.ChatModelUnitTests.html)
and [integration tests](https://python.langchain.com/api_reference/standard_tests/integration_tests/langchain_tests.integration_tests.chat_models.ChatModelIntegrationTests.html).
For example, to enable integration tests for image inputs, we can implement
```python
@property
def supports_image_inputs(self) -> bool:
    return True
```
on the integration test class.
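Putting it together, a capability override sits alongside the required properties on the test class. Here's a brief sketch, reusing the `ChatParrotLink` model from the previous guide:
```python
from typing import Type

from langchain_parrot_link.chat_models import ChatParrotLink
from langchain_tests.integration_tests import ChatModelIntegrationTests


class TestChatParrotLinkIntegration(ChatModelIntegrationTests):
    @property
    def chat_model_class(self) -> Type[ChatParrotLink]:
        return ChatParrotLink

    @property
    def chat_model_params(self) -> dict:
        return {"model": "bird-brain-001", "temperature": 0}

    @property
    def supports_image_inputs(self) -> bool:
        # Opt in to the multimodal (image input) integration tests.
        return True
```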
:::note
Details on what tests are run, how each test can be skipped, and troubleshooting tips for each test can be found in the API references. See details:
- [Unit tests API reference](https://python.langchain.com/api_reference/standard_tests/unit_tests/langchain_tests.unit_tests.chat_models.ChatModelUnitTests.html)
- [Integration tests API reference](https://python.langchain.com/api_reference/standard_tests/integration_tests/langchain_tests.integration_tests.chat_models.ChatModelIntegrationTests.html)
:::
Unit test example:
import ChatUnitSource from '../../../../src/theme/integration_template/tests/unit_tests/test_chat_models.py';
<CodeBlock language="python" title="tests/unit_tests/test_chat_models.py">
{
ChatUnitSource.replaceAll('__ModuleName__', 'ParrotLink')
.replaceAll('__package_name__', 'langchain-parrot-link')
.replaceAll('__MODULE_NAME__', 'PARROT_LINK')
.replaceAll('__module_name__', 'langchain_parrot_link')
}
</CodeBlock>
Integration test example:
import ChatIntegrationSource from '../../../../src/theme/integration_template/tests/integration_tests/test_chat_models.py';
<CodeBlock language="python" title="tests/integration_tests/test_chat_models.py">
{
ChatIntegrationSource.replaceAll('__ModuleName__', 'ParrotLink')
.replaceAll('__package_name__', 'langchain-parrot-link')
.replaceAll('__MODULE_NAME__', 'PARROT_LINK')
.replaceAll('__module_name__', 'langchain_parrot_link')
}
</CodeBlock>
</TabItem>
<TabItem value="vector_stores" label="Vector stores">
Here's how you would configure the standard tests for a typical vector store (using
`ParrotVectorStore` as a placeholder). Vector store tests do not have optional
capabilities to configure at this time.
import VectorStoreIntegrationSource from '../../../../src/theme/integration_template/tests/integration_tests/test_vectorstores.py';
<CodeBlock language="python" title="tests/integration_tests/test_vectorstores.py">
{
VectorStoreIntegrationSource.replaceAll('__ModuleName__', 'Parrot')
.replaceAll('__package_name__', 'langchain-parrot-link')
.replaceAll('__MODULE_NAME__', 'PARROT')
.replaceAll('__module_name__', 'langchain_parrot_link')
}
</CodeBlock>
Configuring the tests consists of implementing pytest fixtures for setting up an
empty vector store and tearing down the vector store after the test run ends.
| Fixture | Description |
| --- | --- |
| `vectorstore` | A generator that yields an empty vector store for unit tests. The vector store is cleaned up after the test run ends. |
For example, below is the `VectorStoreIntegrationTests` class for the [Chroma](https://python.langchain.com/docs/integrations/vectorstores/chroma/)
integration:
```python
from typing import Generator

import pytest
from langchain_core.vectorstores import VectorStore
from langchain_tests.integration_tests.vectorstores import VectorStoreIntegrationTests

from langchain_chroma import Chroma


class TestChromaStandard(VectorStoreIntegrationTests):
    @pytest.fixture()
    def vectorstore(self) -> Generator[VectorStore, None, None]:  # type: ignore
        """Get an empty vectorstore for unit tests."""
        store = Chroma(embedding_function=self.get_embeddings())
        try:
            yield store
        finally:
            store.delete_collection()
```
Note that before the initial `yield`, we instantiate the vector store with an
[embeddings](/docs/concepts/embedding_models/) object. This is a pre-defined
["fake" embeddings model](https://python.langchain.com/api_reference/standard_tests/integration_tests/langchain_tests.integration_tests.vectorstores.VectorStoreIntegrationTests.html#langchain_tests.integration_tests.vectorstores.VectorStoreIntegrationTests.get_embeddings)
that will generate short, arbitrary vectors for documents. You can use a different
embeddings object if desired.
In the `finally` block, we call whatever integration-specific logic is needed to
bring the vector store back to a clean state. This logic runs between each test,
even when a test fails.
:::note
Details on what tests are run and troubleshooting tips for each test can be found in the [API reference](https://python.langchain.com/api_reference/standard_tests/integration_tests/langchain_tests.integration_tests.vectorstores.VectorStoreIntegrationTests.html).
:::
</TabItem>
<TabItem value="embeddings" label="Embeddings">
To configure standard tests for an embeddings model, we subclass `EmbeddingsUnitTests` and `EmbeddingsIntegrationTests`. On each subclass, we override the following `@property` methods to specify the embeddings model to be tested and the embeddings model's configuration:
| Property | Description |
| --- | --- |
| `embeddings_class` | The class for the embeddings model to be tested |
| `embedding_model_params` | The parameters to pass to the embeddings model's constructor |
:::note
Details on what tests are run, how each test can be skipped, and troubleshooting tips for each test can be found in the API references. See details:
- [Unit tests API reference](https://python.langchain.com/api_reference/standard_tests/unit_tests/langchain_tests.unit_tests.embeddings.EmbeddingsUnitTests.html)
- [Integration tests API reference](https://python.langchain.com/api_reference/standard_tests/integration_tests/langchain_tests.integration_tests.embeddings.EmbeddingsIntegrationTests.html)
:::
Unit test example:
import EmbeddingsUnitSource from '../../../../src/theme/integration_template/tests/unit_tests/test_embeddings.py';
<CodeBlock language="python" title="tests/unit_tests/test_embeddings.py">
{
EmbeddingsUnitSource.replaceAll('__ModuleName__', 'ParrotLink')
.replaceAll('__package_name__', 'langchain-parrot-link')
.replaceAll('__MODULE_NAME__', 'PARROT_LINK')
.replaceAll('__module_name__', 'langchain_parrot_link')
}
</CodeBlock>
Integration test example:
import EmbeddingsIntegrationSource from '../../../../src/theme/integration_template/tests/integration_tests/test_embeddings.py';
<CodeBlock language="python" title="tests/integration_tests/test_embeddings.py">
{
EmbeddingsIntegrationSource.replaceAll('__ModuleName__', 'ParrotLink')
.replaceAll('__package_name__', 'langchain-parrot-link')
.replaceAll('__MODULE_NAME__', 'PARROT_LINK')
.replaceAll('__module_name__', 'langchain_parrot_link')
}
</CodeBlock>
</TabItem>
<TabItem value="tools" label="Tools">
To configure standard tests for a tool, we subclass `ToolsUnitTests` and
`ToolsIntegrationTests`. On each subclass, we override the following `@property` methods
to specify the tool to be tested and the tool's configuration:
| Property | Description |
| --- | --- |
| `tool_constructor` | The constructor for the tool to be tested, or an instantiated tool. |
| `tool_constructor_params` | The parameters to pass to the tool (optional). |
| `tool_invoke_params_example` | An example of the parameters to pass to the tool's `invoke` method. |
If you are testing a tool class and pass a class like `MyTool` to `tool_constructor`, you can pass the parameters to the constructor in `tool_constructor_params`.
If you are testing an instantiated tool, pass the instance to `tool_constructor` and do not
override `tool_constructor_params`.
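As a sketch, the two setups differ only in what `tool_constructor` returns (`MyTool` and `some_arg` are hypothetical names):
```python
from typing import Type

from langchain_tests.unit_tests import ToolsUnitTests

from my_package.tools import MyTool  # hypothetical tool class


class TestMyToolUnit(ToolsUnitTests):
    """Option 1: pass the class; params go in tool_constructor_params."""

    @property
    def tool_constructor(self) -> Type[MyTool]:
        return MyTool

    @property
    def tool_constructor_params(self) -> dict:
        return {"some_arg": 42}

    @property
    def tool_invoke_params_example(self) -> dict:
        return {"a": 2, "b": 3}


class TestMyToolInstanceUnit(ToolsUnitTests):
    """Option 2: pass an instance; leave tool_constructor_params unset."""

    @property
    def tool_constructor(self) -> MyTool:
        return MyTool(some_arg=42)

    @property
    def tool_invoke_params_example(self) -> dict:
        return {"a": 2, "b": 3}
```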
:::note
Details on what tests are run, how each test can be skipped, and troubleshooting tips for each test can be found in the API references. See details:
- [Unit tests API reference](https://python.langchain.com/api_reference/standard_tests/unit_tests/langchain_tests.unit_tests.tools.ToolsUnitTests.html)
- [Integration tests API reference](https://python.langchain.com/api_reference/standard_tests/integration_tests/langchain_tests.integration_tests.tools.ToolsIntegrationTests.html)
:::
import ToolsUnitSource from '../../../../src/theme/integration_template/tests/unit_tests/test_tools.py';
<CodeBlock language="python" title="tests/unit_tests/test_tools.py">
{
ToolsUnitSource.replaceAll('__ModuleName__', 'Parrot')
.replaceAll('__package_name__', 'langchain-parrot-link')
.replaceAll('__MODULE_NAME__', 'PARROT')
.replaceAll('__module_name__', 'langchain_parrot_link')
}
</CodeBlock>
import ToolsIntegrationSource from '../../../../src/theme/integration_template/tests/integration_tests/test_tools.py';
<CodeBlock language="python" title="tests/integration_tests/test_tools.py">
{
ToolsIntegrationSource.replaceAll('__ModuleName__', 'Parrot')
.replaceAll('__package_name__', 'langchain-parrot-link')
.replaceAll('__MODULE_NAME__', 'PARROT')
.replaceAll('__module_name__', 'langchain_parrot_link')
}
</CodeBlock>
</TabItem>
<TabItem value="retrievers" label="Retrievers">
To configure standard tests for a retriever, we subclass
`RetrieversIntegrationTests`. On the subclass, we override the following `@property` methods:
| Property | Description |
| --- | --- |
| `retriever_constructor` | The class for the retriever to be tested |
| `retriever_constructor_params` | The parameters to pass to the retriever's constructor |
| `retriever_query_example` | An example of the query to pass to the retriever's `invoke` method |
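A minimal sketch of such a subclass, assuming a `ParrotLinkRetriever` with a `k` parameter:
```python
from typing import Type

from langchain_parrot_link.retrievers import ParrotLinkRetriever
from langchain_tests.integration_tests import RetrieversIntegrationTests


class TestParrotLinkRetriever(RetrieversIntegrationTests):
    @property
    def retriever_constructor(self) -> Type[ParrotLinkRetriever]:
        return ParrotLinkRetriever

    @property
    def retriever_constructor_params(self) -> dict:
        return {"k": 2}

    @property
    def retriever_query_example(self) -> str:
        return "example query"
```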
:::note
Details on what tests are run and troubleshooting tips for each test can be found in the [API reference](https://python.langchain.com/api_reference/standard_tests/integration_tests/langchain_tests.integration_tests.retrievers.RetrieversIntegrationTests.html).
:::
import RetrieverIntegrationSource from '../../../../src/theme/integration_template/tests/integration_tests/test_retrievers.py';
<CodeBlock language="python" title="tests/integration_tests/test_retrievers.py">
{
RetrieverIntegrationSource.replaceAll('__ModuleName__', 'Parrot')
.replaceAll('__package_name__', 'langchain-parrot-link')
.replaceAll('__MODULE_NAME__', 'PARROT')
.replaceAll('__module_name__', 'langchain_parrot_link')
}
</CodeBlock>
</TabItem>
</Tabs>
---
### Running the tests
You can run these with the following commands from your project root:
<Tabs>
<TabItem value="poetry" label="Poetry" default>
```bash
# run unit tests without network access
poetry run pytest --disable-socket --allow-unix-socket --asyncio-mode=auto tests/unit_tests
# run integration tests
poetry run pytest --asyncio-mode=auto tests/integration_tests
```
</TabItem>
<TabItem value="pip" label="Pip">
```bash
# run unit tests without network access
pytest --disable-socket --allow-unix-socket --asyncio-mode=auto tests/unit_tests
# run integration tests
pytest --asyncio-mode=auto tests/integration_tests
```
</TabItem>
</Tabs>
## Test suite information and troubleshooting
For a full list of the standard test suites that are available, as well as
information on which tests are included and how to troubleshoot common issues,
see the [Standard Tests API Reference](https://python.langchain.com/api_reference/standard_tests/index.html).
You can see troubleshooting guides under the individual test suites listed in that API Reference. For example,
[here is the guide for `ChatModelIntegrationTests.test_usage_metadata`](https://python.langchain.com/api_reference/standard_tests/integration_tests/langchain_tests.integration_tests.chat_models.ChatModelIntegrationTests.html#langchain_tests.integration_tests.chat_models.ChatModelIntegrationTests.test_usage_metadata).

View File

@@ -802,7 +802,7 @@
"That's a wrap! In this quick start we covered how to create a simple agent. Agents are a complex topic, and there's lot to learn! \n",
"\n",
":::important\n",
"This section covered building with LangChain Agents. LangChain Agents are fine for getting started, but past a certain point you will likely want flexibility and control that they do not offer. For working with more advanced agents, we'd reccommend checking out [LangGraph](/docs/concepts/architecture/#langgraph)\n",
"This section covered building with LangChain Agents. They are fine for getting started, but past a certain point you will likely want flexibility and control which they do not offer. To develop more advanced agents, we recommend checking out [LangGraph](/docs/concepts/architecture/#langgraph)\n",
":::\n",
"\n",
"If you want to continue using LangChain agents, some good advanced guides are:\n",

View File

@@ -23,7 +23,7 @@
"**Attention**:\n",
"\n",
"- Be sure to set the `namespace` parameter to avoid collisions of the same text embedded using different embeddings models.\n",
"- `CacheBackedEmbeddings` does not cache query embeddings by default. To enable query caching, one need to specify a `query_embedding_cache`."
"- `CacheBackedEmbeddings` does not cache query embeddings by default. To enable query caching, one needs to specify a `query_embedding_cache`."
]
},
{

View File

@@ -15,7 +15,7 @@
"\n",
":::\n",
"\n",
"In many cases, it is advantageous to pass in handlers instead when running the object. When we pass through [`CallbackHandlers`](https://python.langchain.com/api_reference/core/callbacks/langchain_core.callbacks.base.BaseCallbackHandler.html#langchain-core-callbacks-base-basecallbackhandler) using the `callbacks` keyword arg when executing an run, those callbacks will be issued by all nested objects involved in the execution. For example, when a handler is passed through to an Agent, it will be used for all callbacks related to the agent and all the objects involved in the agent's execution, in this case, the Tools and LLM.\n",
"In many cases, it is advantageous to pass in handlers instead when running the object. When we pass through [`CallbackHandlers`](https://python.langchain.com/api_reference/core/callbacks/langchain_core.callbacks.base.BaseCallbackHandler.html#langchain-core-callbacks-base-basecallbackhandler) using the `callbacks` keyword arg when executing a run, those callbacks will be issued by all nested objects involved in the execution. For example, when a handler is passed through to an Agent, it will be used for all callbacks related to the agent and all the objects involved in the agent's execution, in this case, the Tools and LLM.\n",
"\n",
"This prevents us from having to manually attach the handlers to each individual nested object. Here's an example:"
]

View File

@@ -37,7 +37,7 @@
"\n",
"Langchain comes with a built-in in memory rate limiter. This rate limiter is thread safe and can be shared by multiple threads in the same process.\n",
"\n",
"The provided rate limiter can only limit the number of requests per unit time. It will not help if you need to also limited based on the size\n",
"The provided rate limiter can only limit the number of requests per unit time. It will not help if you need to also limit based on the size\n",
"of the requests."
]
},

View File

@@ -0,0 +1,222 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "c160026f-aadb-4e9f-8642-b4a9e8479d77",
"metadata": {},
"source": [
"# Custom Embeddings\n",
"\n",
"LangChain is integrated with many [3rd party embedding models](/docs/integrations/text_embedding/). In this guide we'll show you how to create a custom Embedding class, in case a built-in one does not already exist. Embeddings are critical in natural language processing applications as they convert text into a numerical form that algorithms can understand, thereby enabling a wide range of applications such as similarity search, text classification, and clustering.\n",
"\n",
"Implementing embeddings using the standard [Embeddings](https://python.langchain.com/api_reference/core/embeddings/langchain_core.embeddings.embeddings.Embeddings.html) interface will allow your embeddings to be utilized in existing `LangChain` abstractions (e.g., as the embeddings powering a [VectorStore](https://python.langchain.com/api_reference/core/vectorstores/langchain_core.vectorstores.base.VectorStore.html) or cached using [CacheBackedEmbeddings](/docs/how_to/caching_embeddings/)).\n",
"\n",
"## Interface\n",
"\n",
"The current `Embeddings` abstraction in LangChain is designed to operate on text data. In this implementation, the inputs are either single strings or lists of strings, and the outputs are lists of numerical arrays (vectors), where each vector represents\n",
"an embedding of the input text into some n-dimensional space.\n",
"\n",
"Your custom embedding must implement the following methods:\n",
"\n",
"| Method/Property | Description | Required/Optional |\n",
"|---------------------------------|----------------------------------------------------------------------------|-------------------|\n",
"| `embed_documents(texts)` | Generates embeddings for a list of strings. | Required |\n",
"| `embed_query(text)` | Generates an embedding for a single text query. | Required |\n",
"| `aembed_documents(texts)` | Asynchronously generates embeddings for a list of strings. | Optional |\n",
"| `aembed_query(text)` | Asynchronously generates an embedding for a single text query. | Optional |\n",
"\n",
"These methods ensure that your embedding model can be integrated seamlessly into the LangChain framework, providing both synchronous and asynchronous capabilities for scalability and performance optimization.\n",
"\n",
"\n",
":::note\n",
"`Embeddings` do not currently implement the [Runnable](/docs/concepts/runnables/) interface and are also **not** instances of pydantic `BaseModel`.\n",
":::\n",
"\n",
"### Embedding queries vs documents\n",
"\n",
"The `embed_query` and `embed_documents` methods are required. These methods both operate\n",
"on string inputs. The accessing of `Document.page_content` attributes is handled\n",
"by the vector store using the embedding model for legacy reasons.\n",
"\n",
"`embed_query` takes in a single string and returns a single embedding as a list of floats.\n",
"If your model has different modes for embedding queries vs the underlying documents, you can\n",
"implement this method to handle that. \n",
"\n",
"`embed_documents` takes in a list of strings and returns a list of embeddings as a list of lists of floats.\n",
"\n",
":::note\n",
"`embed_documents` takes in a list of plain text, not a list of LangChain `Document` objects. The name of this method\n",
"may change in future versions of LangChain.\n",
":::"
]
},
{
"cell_type": "markdown",
"id": "2162547f-4577-47e8-b12f-e9aa3c243797",
"metadata": {},
"source": [
"## Implementation\n",
"\n",
"As an example, we'll implement a simple embeddings model that returns a constant vector. This model is for illustrative purposes only."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "6b838062-552c-43f8-94f8-d17e4ae4c221",
"metadata": {},
"outputs": [],
"source": [
"from typing import List\n",
"\n",
"from langchain_core.embeddings import Embeddings\n",
"\n",
"\n",
"class ParrotLinkEmbeddings(Embeddings):\n",
" \"\"\"ParrotLink embedding model integration.\n",
"\n",
" # TODO: Populate with relevant params.\n",
" Key init args — completion params:\n",
" model: str\n",
" Name of ParrotLink model to use.\n",
"\n",
" See full list of supported init args and their descriptions in the params section.\n",
"\n",
" # TODO: Replace with relevant init params.\n",
" Instantiate:\n",
" .. code-block:: python\n",
"\n",
" from langchain_parrot_link import ParrotLinkEmbeddings\n",
"\n",
" embed = ParrotLinkEmbeddings(\n",
" model=\"...\",\n",
" # api_key=\"...\",\n",
" # other params...\n",
" )\n",
"\n",
" Embed single text:\n",
" .. code-block:: python\n",
"\n",
" input_text = \"The meaning of life is 42\"\n",
" embed.embed_query(input_text)\n",
"\n",
" .. code-block:: python\n",
"\n",
" # TODO: Example output.\n",
"\n",
" # TODO: Delete if token-level streaming isn't supported.\n",
" Embed multiple text:\n",
" .. code-block:: python\n",
"\n",
" input_texts = [\"Document 1...\", \"Document 2...\"]\n",
" embed.embed_documents(input_texts)\n",
"\n",
" .. code-block:: python\n",
"\n",
" # TODO: Example output.\n",
"\n",
" # TODO: Delete if native async isn't supported.\n",
" Async:\n",
" .. code-block:: python\n",
"\n",
" await embed.aembed_query(input_text)\n",
"\n",
" # multiple:\n",
" # await embed.aembed_documents(input_texts)\n",
"\n",
" .. code-block:: python\n",
"\n",
" # TODO: Example output.\n",
"\n",
" \"\"\"\n",
"\n",
" def __init__(self, model: str):\n",
" self.model = model\n",
"\n",
" def embed_documents(self, texts: List[str]) -> List[List[float]]:\n",
" \"\"\"Embed search docs.\"\"\"\n",
" return [[0.5, 0.6, 0.7] for _ in texts]\n",
"\n",
" def embed_query(self, text: str) -> List[float]:\n",
" \"\"\"Embed query text.\"\"\"\n",
" return self.embed_documents([text])[0]\n",
"\n",
" # optional: add custom async implementations here\n",
" # you can also delete these, and the base class will\n",
" # use the default implementation, which calls the sync\n",
" # version in an async executor:\n",
"\n",
" # async def aembed_documents(self, texts: List[str]) -> List[List[float]]:\n",
" # \"\"\"Asynchronous Embed search docs.\"\"\"\n",
" # ...\n",
"\n",
" # async def aembed_query(self, text: str) -> List[float]:\n",
" # \"\"\"Asynchronous Embed query text.\"\"\"\n",
" # ..."
]
},
{
"cell_type": "markdown",
"id": "47a19044-5c3f-40da-889a-1a1cfffc137c",
"metadata": {},
"source": [
"### Let's test it 🧪"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "21c218fe-8f91-437f-b523-c2b6e5cf749e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[[0.5, 0.6, 0.7], [0.5, 0.6, 0.7]]\n",
"[0.5, 0.6, 0.7]\n"
]
}
],
"source": [
"embeddings = ParrotLinkEmbeddings(\"test-model\")\n",
"print(embeddings.embed_documents([\"Hello\", \"world\"]))\n",
"print(embeddings.embed_query(\"Hello\"))"
]
},
{
"cell_type": "markdown",
"id": "de50f690-178e-4561-af98-14967b3c8501",
"metadata": {},
"source": [
"## Contributing\n",
"\n",
"We welcome contributions of Embedding models to the LangChain code base.\n",
"\n",
"If you aim to contribute an embedding model for a new provider (e.g., with a new set of dependencies or SDK), we encourage you to publish your implementation in a separate `langchain-*` integration package. This will enable you to appropriately manage dependencies and version your package. Please refer to our [contributing guide](/docs/contributing/how_to/integrations/) for a walkthrough of this process."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -9,10 +9,16 @@
"\n",
"This notebook goes over how to create a custom LLM wrapper, in case you want to use your own LLM or a different wrapper than one that is supported in LangChain.\n",
"\n",
"Wrapping your LLM with the standard `LLM` interface allow you to use your LLM in existing LangChain programs with minimal code modifications!\n",
"Wrapping your LLM with the standard `LLM` interface allow you to use your LLM in existing LangChain programs with minimal code modifications.\n",
"\n",
"As an bonus, your LLM will automatically become a LangChain `Runnable` and will benefit from some optimizations out of the box, async support, the `astream_events` API, etc.\n",
"\n",
":::caution\n",
"You are currently on a page documenting the use of [text completion models](/docs/concepts/text_llms). Many of the latest and most popular models are [chat completion models](/docs/concepts/chat_models).\n",
"\n",
"Unless you are specifically using more advanced prompting techniques, you are probably looking for [this page instead](/docs/how_to/custom_chat_model/).\n",
":::\n",
"\n",
"## Implementation\n",
"\n",
"There are only two required things that a custom LLM needs to implement:\n",

View File

@@ -162,7 +162,7 @@
"\n",
"@tool\n",
"def multiply_by_max(\n",
" a: Annotated[str, \"scale factor\"],\n",
" a: Annotated[int, \"scale factor\"],\n",
" b: Annotated[List[int], \"list of ints over which to take maximum\"],\n",
") -> int:\n",
" \"\"\"Multiply a by the maximum of b.\"\"\"\n",
@@ -294,7 +294,7 @@
"metadata": {},
"source": [
":::caution\n",
"By default, `@tool(parse_docstring=True)` will raise `ValueError` if the docstring does not parse correctly. See [API Reference](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.tool.html) for detail and examples.\n",
"By default, `@tool(parse_docstring=True)` will raise `ValueError` if the docstring does not parse correctly. See [API Reference](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.convert.tool.html) for detail and examples.\n",
":::"
]
},

View File

@@ -121,7 +121,7 @@
"source": [
"## The convenience `@chain` decorator\n",
"\n",
"You can also turn an arbitrary function into a chain by adding a `@chain` decorator. This is functionaly equivalent to wrapping the function in a `RunnableLambda` constructor as shown above. Here's an example:"
"You can also turn an arbitrary function into a chain by adding a `@chain` decorator. This is functionally equivalent to wrapping the function in a `RunnableLambda` constructor as shown above. Here's an example:"
]
},
{

View File

@@ -245,7 +245,7 @@
" allowed_nodes=[\"Person\", \"Country\", \"Organization\"],\n",
" allowed_relationships=allowed_relationships,\n",
")\n",
"llm_transformer_tuple = llm_transformer_filtered.convert_to_graph_documents(documents)\n",
"graph_documents_filtered = llm_transformer_tuple.convert_to_graph_documents(documents)\n",
"print(f\"Nodes:{graph_documents_filtered[0].nodes}\")\n",
"print(f\"Relationships:{graph_documents_filtered[0].relationships}\")"
]

View File

@@ -1,459 +0,0 @@
{
"cells": [
{
"cell_type": "raw",
"id": "5e61b0f2-15b9-4241-9ab5-ff0f3f732232",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 1\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "846ef4f4-ee38-4a42-a7d3-1a23826e4830",
"metadata": {},
"source": [
"# How to map values to a graph database\n",
"\n",
"In this guide we'll go over strategies to improve graph database query generation by mapping values from user inputs to database.\n",
"When using the built-in graph chains, the LLM is aware of the graph schema, but has no information about the values of properties stored in the database.\n",
"Therefore, we can introduce a new step in graph database QA system to accurately map values.\n",
"\n",
"## Setup\n",
"\n",
"First, get required packages and set environment variables:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "18294435-182d-48da-bcab-5b8945b6d9cf",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain langchain-neo4j langchain-openai neo4j"
]
},
{
"cell_type": "markdown",
"id": "d86dd771-4001-4a34-8680-22e9b50e1e88",
"metadata": {},
"source": [
"We default to OpenAI models in this guide, but you can swap them out for the model provider of your choice."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "9346f8e9-78bf-4667-b3d3-72807a73b718",
"metadata": {},
"outputs": [
{
"name": "stdin",
"output_type": "stream",
"text": [
" ········\n"
]
}
],
"source": [
"import getpass\n",
"import os\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()\n",
"\n",
"# Uncomment the below to use LangSmith. Not required.\n",
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()\n",
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\""
]
},
{
"cell_type": "markdown",
"id": "271c8a23-e51c-4ead-a76e-cf21107db47e",
"metadata": {},
"source": [
"Next, we need to define Neo4j credentials.\n",
"Follow [these installation steps](https://neo4j.com/docs/operations-manual/current/installation/) to set up a Neo4j database."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "a2a3bb65-05c7-4daf-bac2-b25ae7fe2751",
"metadata": {},
"outputs": [],
"source": [
"os.environ[\"NEO4J_URI\"] = \"bolt://localhost:7687\"\n",
"os.environ[\"NEO4J_USERNAME\"] = \"neo4j\"\n",
"os.environ[\"NEO4J_PASSWORD\"] = \"password\""
]
},
{
"cell_type": "markdown",
"id": "50fa4510-29b7-49b6-8496-5e86f694e81f",
"metadata": {},
"source": [
"The below example will create a connection with a Neo4j database and will populate it with example data about movies and their actors."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "4ee9ef7a-eef9-4289-b9fd-8fbc31041688",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[]"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_neo4j import Neo4jGraph\n",
"\n",
"graph = Neo4jGraph()\n",
"\n",
"# Import movie information\n",
"\n",
"movies_query = \"\"\"\n",
"LOAD CSV WITH HEADERS FROM \n",
"'https://raw.githubusercontent.com/tomasonjo/blog-datasets/main/movies/movies_small.csv'\n",
"AS row\n",
"MERGE (m:Movie {id:row.movieId})\n",
"SET m.released = date(row.released),\n",
" m.title = row.title,\n",
" m.imdbRating = toFloat(row.imdbRating)\n",
"FOREACH (director in split(row.director, '|') | \n",
" MERGE (p:Person {name:trim(director)})\n",
" MERGE (p)-[:DIRECTED]->(m))\n",
"FOREACH (actor in split(row.actors, '|') | \n",
" MERGE (p:Person {name:trim(actor)})\n",
" MERGE (p)-[:ACTED_IN]->(m))\n",
"FOREACH (genre in split(row.genres, '|') | \n",
" MERGE (g:Genre {name:trim(genre)})\n",
" MERGE (m)-[:IN_GENRE]->(g))\n",
"\"\"\"\n",
"\n",
"graph.query(movies_query)"
]
},
{
"cell_type": "markdown",
"id": "0cb0ea30-ca55-4f35-aad6-beb57453de66",
"metadata": {},
"source": [
"## Detecting entities in the user input\n",
"We have to extract the types of entities/values we want to map to a graph database. In this example, we are dealing with a movie graph, so we can map movies and people to the database."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "e1a19424-6046-40c2-81d1-f3b88193a293",
"metadata": {},
"outputs": [],
"source": [
"from typing import List, Optional\n",
"\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_openai import ChatOpenAI\n",
"from pydantic import BaseModel, Field\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo\", temperature=0)\n",
"\n",
"\n",
"class Entities(BaseModel):\n",
" \"\"\"Identifying information about entities.\"\"\"\n",
"\n",
" names: List[str] = Field(\n",
" ...,\n",
" description=\"All the person or movies appearing in the text\",\n",
" )\n",
"\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are extracting person and movies from the text.\",\n",
" ),\n",
" (\n",
" \"human\",\n",
" \"Use the given format to extract information from the following \"\n",
" \"input: {question}\",\n",
" ),\n",
" ]\n",
")\n",
"\n",
"\n",
"entity_chain = prompt | llm.with_structured_output(Entities)"
]
},
{
"cell_type": "markdown",
"id": "9c14084c-37a7-4a9c-a026-74e12961c781",
"metadata": {},
"source": [
"We can test the entity extraction chain."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "bbfe0d8f-982e-46e6-88fb-8a4f0d850b07",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Entities(names=['Casino'])"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"entities = entity_chain.invoke({\"question\": \"Who played in Casino movie?\"})\n",
"entities"
]
},
{
"cell_type": "markdown",
"id": "a8afbf13-05d0-4383-8050-f88b8c2f6fab",
"metadata": {},
"source": [
"We will utilize a simple `CONTAINS` clause to match entities to database. In practice, you might want to use a fuzzy search or a fulltext index to allow for minor misspellings."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "6f92929f-74fb-4db2-b7e1-eb1e9d386a67",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Casino maps to Casino Movie in database\\n'"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"match_query = \"\"\"MATCH (p:Person|Movie)\n",
"WHERE p.name CONTAINS $value OR p.title CONTAINS $value\n",
"RETURN coalesce(p.name, p.title) AS result, labels(p)[0] AS type\n",
"LIMIT 1\n",
"\"\"\"\n",
"\n",
"\n",
"def map_to_database(entities: Entities) -> Optional[str]:\n",
" result = \"\"\n",
" for entity in entities.names:\n",
" response = graph.query(match_query, {\"value\": entity})\n",
" try:\n",
" result += f\"{entity} maps to {response[0]['result']} {response[0]['type']} in database\\n\"\n",
" except IndexError:\n",
" pass\n",
" return result\n",
"\n",
"\n",
"map_to_database(entities)"
]
},
{
"cell_type": "markdown",
"id": "f66c6756-6efb-4b1e-9b5d-87ed914a5212",
"metadata": {},
"source": [
"## Custom Cypher generating chain\n",
"\n",
"We need to define a custom Cypher prompt that takes the entity mapping information along with the schema and the user question to construct a Cypher statement.\n",
"We will be using the LangChain expression language to accomplish that."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "8ef3e21d-f1c2-45e2-9511-4920d1cf6e7e",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"\n",
"# Generate Cypher statement based on natural language input\n",
"cypher_template = \"\"\"Based on the Neo4j graph schema below, write a Cypher query that would answer the user's question:\n",
"{schema}\n",
"Entities in the question map to the following database values:\n",
"{entities_list}\n",
"Question: {question}\n",
"Cypher query:\"\"\"\n",
"\n",
"cypher_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"Given an input question, convert it to a Cypher query. No pre-amble.\",\n",
" ),\n",
" (\"human\", cypher_template),\n",
" ]\n",
")\n",
"\n",
"cypher_response = (\n",
" RunnablePassthrough.assign(names=entity_chain)\n",
" | RunnablePassthrough.assign(\n",
" entities_list=lambda x: map_to_database(x[\"names\"]),\n",
" schema=lambda _: graph.get_schema,\n",
" )\n",
" | cypher_prompt\n",
" | llm.bind(stop=[\"\\nCypherResult:\"])\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "1f0011e3-9660-4975-af2a-486b1bc3b954",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'MATCH (:Movie {title: \"Casino\"})<-[:ACTED_IN]-(actor)\\nRETURN actor.name'"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"cypher = cypher_response.invoke({\"question\": \"Who played in Casino movie?\"})\n",
"cypher"
]
},
{
"cell_type": "markdown",
"id": "38095678-611f-4847-a4de-e51ef7ef727c",
"metadata": {},
"source": [
"## Generating answers based on database results\n",
"\n",
"Now that we have a chain that generates the Cypher statement, we need to execute the Cypher statement against the database and send the database results back to an LLM to generate the final answer.\n",
"Again, we will be using LCEL."
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "d1fa97c0-1c9c-41d3-9ee1-5f1905d17434",
"metadata": {},
"outputs": [],
"source": [
"from langchain_neo4j.chains.graph_qa.cypher_utils import (\n",
" CypherQueryCorrector,\n",
" Schema,\n",
")\n",
"\n",
"graph.refresh_schema()\n",
"# Cypher validation tool for relationship directions\n",
"corrector_schema = [\n",
" Schema(el[\"start\"], el[\"type\"], el[\"end\"])\n",
" for el in graph.structured_schema.get(\"relationships\")\n",
"]\n",
"cypher_validation = CypherQueryCorrector(corrector_schema)\n",
"\n",
"# Generate natural language response based on database results\n",
"response_template = \"\"\"Based on the the question, Cypher query, and Cypher response, write a natural language response:\n",
"Question: {question}\n",
"Cypher query: {query}\n",
"Cypher Response: {response}\"\"\"\n",
"\n",
"response_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"Given an input question and Cypher response, convert it to a natural\"\n",
" \" language answer. No pre-amble.\",\n",
" ),\n",
" (\"human\", response_template),\n",
" ]\n",
")\n",
"\n",
"chain = (\n",
" RunnablePassthrough.assign(query=cypher_response)\n",
" | RunnablePassthrough.assign(\n",
" response=lambda x: graph.query(cypher_validation(x[\"query\"])),\n",
" )\n",
" | response_prompt\n",
" | llm\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "918146e5-7918-46d2-a774-53f9547d8fcb",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Robert De Niro, James Woods, Joe Pesci, and Sharon Stone played in the movie \"Casino\".'"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke({\"question\": \"Who played in Casino movie?\"})"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c7ba75cd-8399-4e54-a6f8-8a411f159f56",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.18"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -1,548 +0,0 @@
{
"cells": [
{
"cell_type": "raw",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 2\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to best prompt for Graph-RAG\n",
"\n",
"In this guide we'll go over prompting strategies to improve graph database query generation. We'll largely focus on methods for getting relevant database-specific information in your prompt.\n",
"\n",
"## Setup\n",
"\n",
"First, get required packages and set environment variables:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install --upgrade --quiet langchain langchain-neo4j langchain-openai neo4j"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We default to OpenAI models in this guide, but you can swap them out for the model provider of your choice."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdin",
"output_type": "stream",
"text": [
" ········\n"
]
}
],
"source": [
"import getpass\n",
"import os\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()\n",
"\n",
"# Uncomment the below to use LangSmith. Not required.\n",
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()\n",
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, we need to define Neo4j credentials.\n",
"Follow [these installation steps](https://neo4j.com/docs/operations-manual/current/installation/) to set up a Neo4j database."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"os.environ[\"NEO4J_URI\"] = \"bolt://localhost:7687\"\n",
"os.environ[\"NEO4J_USERNAME\"] = \"neo4j\"\n",
"os.environ[\"NEO4J_PASSWORD\"] = \"password\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The below example will create a connection with a Neo4j database and will populate it with example data about movies and their actors."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[]"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_neo4j import Neo4jGraph\n",
"\n",
"graph = Neo4jGraph()\n",
"\n",
"# Import movie information\n",
"\n",
"movies_query = \"\"\"\n",
"LOAD CSV WITH HEADERS FROM \n",
"'https://raw.githubusercontent.com/tomasonjo/blog-datasets/main/movies/movies_small.csv'\n",
"AS row\n",
"MERGE (m:Movie {id:row.movieId})\n",
"SET m.released = date(row.released),\n",
" m.title = row.title,\n",
" m.imdbRating = toFloat(row.imdbRating)\n",
"FOREACH (director in split(row.director, '|') | \n",
" MERGE (p:Person {name:trim(director)})\n",
" MERGE (p)-[:DIRECTED]->(m))\n",
"FOREACH (actor in split(row.actors, '|') | \n",
" MERGE (p:Person {name:trim(actor)})\n",
" MERGE (p)-[:ACTED_IN]->(m))\n",
"FOREACH (genre in split(row.genres, '|') | \n",
" MERGE (g:Genre {name:trim(genre)})\n",
" MERGE (m)-[:IN_GENRE]->(g))\n",
"\"\"\"\n",
"\n",
"graph.query(movies_query)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Filtering graph schema\n",
"\n",
"At times, you may need to focus on a specific subset of the graph schema while generating Cypher statements.\n",
"Let's say we are dealing with the following graph schema:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Node properties are the following:\n",
"Movie {imdbRating: FLOAT, id: STRING, released: DATE, title: STRING},Person {name: STRING},Genre {name: STRING}\n",
"Relationship properties are the following:\n",
"\n",
"The relationships are the following:\n",
"(:Movie)-[:IN_GENRE]->(:Genre),(:Person)-[:DIRECTED]->(:Movie),(:Person)-[:ACTED_IN]->(:Movie)\n"
]
}
],
"source": [
"graph.refresh_schema()\n",
"print(graph.schema)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's say we want to exclude the _Genre_ node from the schema representation we pass to an LLM.\n",
"We can achieve that using the `exclude` parameter of the GraphCypherQAChain chain."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"from langchain_neo4j import GraphCypherQAChain\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo\", temperature=0)\n",
"chain = GraphCypherQAChain.from_llm(\n",
" graph=graph,\n",
" llm=llm,\n",
" exclude_types=[\"Genre\"],\n",
" verbose=True,\n",
" allow_dangerous_requests=True,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Node properties are the following:\n",
"Movie {imdbRating: FLOAT, id: STRING, released: DATE, title: STRING},Person {name: STRING}\n",
"Relationship properties are the following:\n",
"\n",
"The relationships are the following:\n",
"(:Person)-[:DIRECTED]->(:Movie),(:Person)-[:ACTED_IN]->(:Movie)\n"
]
}
],
"source": [
"print(chain.graph_schema)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Few-shot examples\n",
"\n",
"Including examples of natural language questions being converted to valid Cypher queries against our database in the prompt will often improve model performance, especially for complex queries.\n",
"\n",
"Let's say we have the following examples:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"examples = [\n",
" {\n",
" \"question\": \"How many artists are there?\",\n",
" \"query\": \"MATCH (a:Person)-[:ACTED_IN]->(:Movie) RETURN count(DISTINCT a)\",\n",
" },\n",
" {\n",
" \"question\": \"Which actors played in the movie Casino?\",\n",
" \"query\": \"MATCH (m:Movie {{title: 'Casino'}})<-[:ACTED_IN]-(a) RETURN a.name\",\n",
" },\n",
" {\n",
" \"question\": \"How many movies has Tom Hanks acted in?\",\n",
" \"query\": \"MATCH (a:Person {{name: 'Tom Hanks'}})-[:ACTED_IN]->(m:Movie) RETURN count(m)\",\n",
" },\n",
" {\n",
" \"question\": \"List all the genres of the movie Schindler's List\",\n",
" \"query\": \"MATCH (m:Movie {{title: 'Schindler\\\\'s List'}})-[:IN_GENRE]->(g:Genre) RETURN g.name\",\n",
" },\n",
" {\n",
" \"question\": \"Which actors have worked in movies from both the comedy and action genres?\",\n",
" \"query\": \"MATCH (a:Person)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g1:Genre), (a)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g2:Genre) WHERE g1.name = 'Comedy' AND g2.name = 'Action' RETURN DISTINCT a.name\",\n",
" },\n",
" {\n",
" \"question\": \"Which directors have made movies with at least three different actors named 'John'?\",\n",
" \"query\": \"MATCH (d:Person)-[:DIRECTED]->(m:Movie)<-[:ACTED_IN]-(a:Person) WHERE a.name STARTS WITH 'John' WITH d, COUNT(DISTINCT a) AS JohnsCount WHERE JohnsCount >= 3 RETURN d.name\",\n",
" },\n",
" {\n",
" \"question\": \"Identify movies where directors also played a role in the film.\",\n",
" \"query\": \"MATCH (p:Person)-[:DIRECTED]->(m:Movie), (p)-[:ACTED_IN]->(m) RETURN m.title, p.name\",\n",
" },\n",
" {\n",
" \"question\": \"Find the actor with the highest number of movies in the database.\",\n",
" \"query\": \"MATCH (a:Actor)-[:ACTED_IN]->(m:Movie) RETURN a.name, COUNT(m) AS movieCount ORDER BY movieCount DESC LIMIT 1\",\n",
" },\n",
"]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can create a few-shot prompt with them like so:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import FewShotPromptTemplate, PromptTemplate\n",
"\n",
"example_prompt = PromptTemplate.from_template(\n",
" \"User input: {question}\\nCypher query: {query}\"\n",
")\n",
"prompt = FewShotPromptTemplate(\n",
" examples=examples[:5],\n",
" example_prompt=example_prompt,\n",
" prefix=\"You are a Neo4j expert. Given an input question, create a syntactically correct Cypher query to run.\\n\\nHere is the schema information\\n{schema}.\\n\\nBelow are a number of examples of questions and their corresponding Cypher queries.\",\n",
" suffix=\"User input: {question}\\nCypher query: \",\n",
" input_variables=[\"question\", \"schema\"],\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"You are a Neo4j expert. Given an input question, create a syntactically correct Cypher query to run.\n",
"\n",
"Here is the schema information\n",
"foo.\n",
"\n",
"Below are a number of examples of questions and their corresponding Cypher queries.\n",
"\n",
"User input: How many artists are there?\n",
"Cypher query: MATCH (a:Person)-[:ACTED_IN]->(:Movie) RETURN count(DISTINCT a)\n",
"\n",
"User input: Which actors played in the movie Casino?\n",
"Cypher query: MATCH (m:Movie {title: 'Casino'})<-[:ACTED_IN]-(a) RETURN a.name\n",
"\n",
"User input: How many movies has Tom Hanks acted in?\n",
"Cypher query: MATCH (a:Person {name: 'Tom Hanks'})-[:ACTED_IN]->(m:Movie) RETURN count(m)\n",
"\n",
"User input: List all the genres of the movie Schindler's List\n",
"Cypher query: MATCH (m:Movie {title: 'Schindler\\'s List'})-[:IN_GENRE]->(g:Genre) RETURN g.name\n",
"\n",
"User input: Which actors have worked in movies from both the comedy and action genres?\n",
"Cypher query: MATCH (a:Person)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g1:Genre), (a)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g2:Genre) WHERE g1.name = 'Comedy' AND g2.name = 'Action' RETURN DISTINCT a.name\n",
"\n",
"User input: How many artists are there?\n",
"Cypher query: \n"
]
}
],
"source": [
"print(prompt.format(question=\"How many artists are there?\", schema=\"foo\"))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Dynamic few-shot examples\n",
"\n",
"If we have enough examples, we may want to only include the most relevant ones in the prompt, either because they don't fit in the model's context window or because the long tail of examples distracts the model. And specifically, given any input we want to include the examples most relevant to that input.\n",
"\n",
"We can do just this using an ExampleSelector. In this case we'll use a [SemanticSimilarityExampleSelector](https://python.langchain.com/api_reference/core/example_selectors/langchain_core.example_selectors.semantic_similarity.SemanticSimilarityExampleSelector.html), which will store the examples in the vector database of our choosing. At runtime it will perform a similarity search between the input and our examples, and return the most semantically similar ones: "
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.example_selectors import SemanticSimilarityExampleSelector\n",
"from langchain_neo4j import Neo4jVector\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"example_selector = SemanticSimilarityExampleSelector.from_examples(\n",
" examples,\n",
" OpenAIEmbeddings(),\n",
" Neo4jVector,\n",
" k=5,\n",
" input_keys=[\"question\"],\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'query': 'MATCH (a:Person)-[:ACTED_IN]->(:Movie) RETURN count(DISTINCT a)',\n",
" 'question': 'How many artists are there?'},\n",
" {'query': \"MATCH (a:Person {{name: 'Tom Hanks'}})-[:ACTED_IN]->(m:Movie) RETURN count(m)\",\n",
" 'question': 'How many movies has Tom Hanks acted in?'},\n",
" {'query': \"MATCH (a:Person)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g1:Genre), (a)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g2:Genre) WHERE g1.name = 'Comedy' AND g2.name = 'Action' RETURN DISTINCT a.name\",\n",
" 'question': 'Which actors have worked in movies from both the comedy and action genres?'},\n",
" {'query': \"MATCH (d:Person)-[:DIRECTED]->(m:Movie)<-[:ACTED_IN]-(a:Person) WHERE a.name STARTS WITH 'John' WITH d, COUNT(DISTINCT a) AS JohnsCount WHERE JohnsCount >= 3 RETURN d.name\",\n",
" 'question': \"Which directors have made movies with at least three different actors named 'John'?\"},\n",
" {'query': 'MATCH (a:Actor)-[:ACTED_IN]->(m:Movie) RETURN a.name, COUNT(m) AS movieCount ORDER BY movieCount DESC LIMIT 1',\n",
" 'question': 'Find the actor with the highest number of movies in the database.'}]"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"example_selector.select_examples({\"question\": \"how many artists are there?\"})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To use it, we can pass the ExampleSelector directly in to our FewShotPromptTemplate:"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [],
"source": [
"prompt = FewShotPromptTemplate(\n",
" example_selector=example_selector,\n",
" example_prompt=example_prompt,\n",
" prefix=\"You are a Neo4j expert. Given an input question, create a syntactically correct Cypher query to run.\\n\\nHere is the schema information\\n{schema}.\\n\\nBelow are a number of examples of questions and their corresponding Cypher queries.\",\n",
" suffix=\"User input: {question}\\nCypher query: \",\n",
" input_variables=[\"question\", \"schema\"],\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"You are a Neo4j expert. Given an input question, create a syntactically correct Cypher query to run.\n",
"\n",
"Here is the schema information\n",
"foo.\n",
"\n",
"Below are a number of examples of questions and their corresponding Cypher queries.\n",
"\n",
"User input: How many artists are there?\n",
"Cypher query: MATCH (a:Person)-[:ACTED_IN]->(:Movie) RETURN count(DISTINCT a)\n",
"\n",
"User input: How many movies has Tom Hanks acted in?\n",
"Cypher query: MATCH (a:Person {name: 'Tom Hanks'})-[:ACTED_IN]->(m:Movie) RETURN count(m)\n",
"\n",
"User input: Which actors have worked in movies from both the comedy and action genres?\n",
"Cypher query: MATCH (a:Person)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g1:Genre), (a)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g2:Genre) WHERE g1.name = 'Comedy' AND g2.name = 'Action' RETURN DISTINCT a.name\n",
"\n",
"User input: Which directors have made movies with at least three different actors named 'John'?\n",
"Cypher query: MATCH (d:Person)-[:DIRECTED]->(m:Movie)<-[:ACTED_IN]-(a:Person) WHERE a.name STARTS WITH 'John' WITH d, COUNT(DISTINCT a) AS JohnsCount WHERE JohnsCount >= 3 RETURN d.name\n",
"\n",
"User input: Find the actor with the highest number of movies in the database.\n",
"Cypher query: MATCH (a:Actor)-[:ACTED_IN]->(m:Movie) RETURN a.name, COUNT(m) AS movieCount ORDER BY movieCount DESC LIMIT 1\n",
"\n",
"User input: how many artists are there?\n",
"Cypher query: \n"
]
}
],
"source": [
"print(prompt.format(question=\"how many artists are there?\", schema=\"foo\"))"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [],
"source": [
"llm = ChatOpenAI(model=\"gpt-3.5-turbo\", temperature=0)\n",
"chain = GraphCypherQAChain.from_llm(\n",
" graph=graph,\n",
" llm=llm,\n",
" cypher_prompt=prompt,\n",
" verbose=True,\n",
" allow_dangerous_requests=True,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new GraphCypherQAChain chain...\u001b[0m\n",
"Generated Cypher:\n",
"\u001b[32;1m\u001b[1;3mMATCH (a:Person)-[:ACTED_IN]->(:Movie) RETURN count(DISTINCT a)\u001b[0m\n",
"Full Context:\n",
"\u001b[32;1m\u001b[1;3m[{'count(DISTINCT a)': 967}]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"{'query': 'How many actors are in the graph?',\n",
" 'result': 'There are 967 actors in the graph.'}"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke(\"How many actors are in the graph?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
}
},
"nbformat": 4,
"nbformat_minor": 4
}



@@ -159,6 +159,7 @@ See [supported integrations](/docs/integrations/text_embedding/) for details on
- [How to: embed text data](/docs/how_to/embed_text)
- [How to: cache embedding results](/docs/how_to/caching_embeddings)
- [How to: create a custom embeddings class](/docs/how_to/custom_embeddings)
### Vector stores
@@ -244,6 +245,7 @@ All of LangChain components can easily be extended to support your own versions.
- [How to: create a custom chat model class](/docs/how_to/custom_chat_model)
- [How to: create a custom LLM class](/docs/how_to/custom_llm)
- [How to: create a custom embeddings class](/docs/how_to/custom_embeddings)
- [How to: write a custom retriever class](/docs/how_to/custom_retriever)
- [How to: write a custom document loader](/docs/how_to/document_loader_custom)
- [How to: write a custom output parser class](/docs/how_to/output_parser_custom)
@@ -316,9 +318,7 @@ For a high-level tutorial, check out [this guide](/docs/tutorials/sql_qa/).
You can use an LLM to do question answering over graph databases.
For a high-level tutorial, check out [this guide](/docs/tutorials/graph/).
- [How to: map values to a database](/docs/how_to/graph_mapping)
- [How to: add a semantic layer over the database](/docs/how_to/graph_semantic)
- [How to: improve results with prompting](/docs/how_to/graph_prompting)
- [How to: construct knowledge graphs](/docs/how_to/graph_constructing)
### Summarization


@@ -39,19 +39,20 @@
"| None | ✅ | ✅ | ❌ | ❌ | - |\n",
"| Incremental | ✅ | ✅ | ❌ | ✅ | Continuously |\n",
"| Full | ✅ | ❌ | ✅ | ✅ | At end of indexing |\n",
"| Scoped_Full | ✅ | ✅ | ❌ | ✅ | At end of indexing |\n",
"\n",
"\n",
"`None` does not do any automatic clean up, allowing the user to manually do clean up of old content. \n",
"\n",
"`incremental` and `full` offer the following automated clean up:\n",
"`incremental`, `full` and `scoped_full` offer the following automated clean up:\n",
"\n",
"* If the content of the source document or derived documents has **changed**, both `incremental` or `full` modes will clean up (delete) previous versions of the content.\n",
"* If the source document has been **deleted** (meaning it is not included in the documents currently being indexed), the `full` cleanup mode will delete it from the vector store correctly, but the `incremental` mode will not.\n",
"* If the content of the source document or derived documents has **changed**, all 3 modes will clean up (delete) previous versions of the content.\n",
"* If the source document has been **deleted** (meaning it is not included in the documents currently being indexed), the `full` cleanup mode will delete it from the vector store correctly, but the `incremental` and `scoped_full` mode will not.\n",
"\n",
"When content is mutated (e.g., the source PDF file was revised) there will be a period of time during indexing when both the new and old versions may be returned to the user. This happens after the new content was written, but before the old version was deleted.\n",
"\n",
"* `incremental` indexing minimizes this period of time as it is able to do clean up continuously, as it writes.\n",
"* `full` mode does the clean up after all batches have been written.\n",
"* `full` and `scoped_full` mode does the clean up after all batches have been written.\n",
"\n",
"## Requirements\n",
"\n",
@@ -64,7 +65,7 @@
" \n",
"## Caution\n",
"\n",
"The record manager relies on a time-based mechanism to determine what content can be cleaned up (when using `full` or `incremental` cleanup modes).\n",
"The record manager relies on a time-based mechanism to determine what content can be cleaned up (when using `full` or `incremental` or `scoped_full` cleanup modes).\n",
"\n",
"If two tasks run back-to-back, and the first task finishes before the clock time changes, then the second task may not be able to clean up content.\n",
"\n",


@@ -51,7 +51,7 @@ pip install langchain-core
Certain integrations like OpenAI and Anthropic have their own packages.
Any integrations that require their own package will be documented as such in the [Integration docs](/docs/integrations/providers/).
You can see a list of all integration packages in the [API reference](https://api.python.langchain.com) under the "Partner libs" dropdown.
You can see a list of all integration packages in the [API reference](https://python.langchain.com/api_reference/) under the "Partner libs" dropdown.
To install one of these run:
```bash


@@ -162,6 +162,18 @@
"md_header_splits"
]
},
{
"cell_type": "markdown",
"id": "fb3f834a",
"metadata": {},
"source": [
":::note\n",
"\n",
"The default `MarkdownHeaderTextSplitter` strips white spaces and new lines. To preserve the original formatting of your Markdown documents, check out [ExperimentalMarkdownSyntaxTextSplitter](https://python.langchain.com/api_reference/text_splitters/markdown/langchain_text_splitters.markdown.ExperimentalMarkdownSyntaxTextSplitter.html).\n",
"\n",
":::"
]
},
{
"cell_type": "markdown",
"id": "aa67e0cc-d721-4536-9c7a-9fa3a7a69cbe",


@@ -12,7 +12,7 @@
"There are two ways to implement a custom parser:\n",
"\n",
"1. Using `RunnableLambda` or `RunnableGenerator` in [LCEL](/docs/concepts/lcel/) -- we strongly recommend this for most use cases\n",
"2. By inherting from one of the base classes for out parsing -- this is the hard way of doing things\n",
"2. By inheriting from one of the base classes for out parsing -- this is the hard way of doing things\n",
"\n",
"The difference between the two approaches are mostly superficial and are mainly in terms of which callbacks are triggered (e.g., `on_chain_start` vs. `on_parser_start`), and how a runnable lambda vs. a parser might be visualized in a tracing platform like LangSmith."
]
@@ -200,7 +200,7 @@
"id": "24067447-8a5a-4d6b-86a3-4b9cc4b4369b",
"metadata": {},
"source": [
"## Inherting from Parsing Base Classes"
"## Inheriting from Parsing Base Classes"
]
},
{
@@ -208,7 +208,7 @@
"id": "9713f547-b2e4-48eb-807f-a0f6f6d0e7e0",
"metadata": {},
"source": [
"Another approach to implement a parser is by inherting from `BaseOutputParser`, `BaseGenerationOutputParser` or another one of the base parsers depending on what you need to do.\n",
"Another approach to implement a parser is by inheriting from `BaseOutputParser`, `BaseGenerationOutputParser` or another one of the base parsers depending on what you need to do.\n",
"\n",
"In general, we **do not** recommend this approach for most use cases as it results in more code to write without significant benefits.\n",
"\n",


@@ -29,7 +29,7 @@
":::\n",
"\n",
"\n",
"When composing chains with several steps, sometimes you will want to pass data from previous steps unchanged for use as input to a later step. The [`RunnablePassthrough`](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html) class allows you to do just this, and is typically is used in conjuction with a [RunnableParallel](/docs/how_to/parallel/) to pass data through to a later step in your constructed chains.\n",
"When composing chains with several steps, sometimes you will want to pass data from previous steps unchanged for use as input to a later step. The [`RunnablePassthrough`](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html) class allows you to do just this, and is typically is used in conjunction with a [RunnableParallel](/docs/how_to/parallel/) to pass data through to a later step in your constructed chains.\n",
"\n",
"See the example below:"
]


@@ -125,9 +125,11 @@
"\n",
"There are a few ways to determine what that threshold is, which are controlled by the `breakpoint_threshold_type` kwarg.\n",
"\n",
"Note: if the resulting chunk sizes are too small/big, the additional kwargs `breakpoint_threshold_amount` and `min_chunk_size` can be used for adjustments.\n",
"\n",
"### Percentile\n",
"\n",
"The default way to split is based on percentile. In this method, all differences between sentences are calculated, and then any difference greater than the X percentile is split."
"The default way to split is based on percentile. In this method, all differences between sentences are calculated, and then any difference greater than the X percentile is split. The default value for X is 95.0 and can be adjusted by the keyword argument `breakpoint_threshold_amount` which expects a number between 0.0 and 100.0."
]
},
{
@@ -186,7 +188,7 @@
"source": [
"### Standard Deviation\n",
"\n",
"In this method, any difference greater than X standard deviations is split."
"In this method, any difference greater than X standard deviations is split. The default value for X is 3.0 and can be adjusted by the keyword argument `breakpoint_threshold_amount`."
]
},
{
@@ -245,7 +247,7 @@
"source": [
"### Interquartile\n",
"\n",
"In this method, the interquartile distance is used to split chunks."
"In this method, the interquartile distance is used to split chunks. The interquartile range can be scaled by the keyword argument `breakpoint_threshold_amount`, the default value is 1.5."
]
},
{
@@ -306,8 +308,8 @@
"source": [
"### Gradient\n",
"\n",
"In this method, the gradient of distance is used to split chunks along with the percentile method.\n",
"This method is useful when chunks are highly correlated with each other or specific to a domain e.g. legal or medical. The idea is to apply anomaly detection on gradient array so that the distribution become wider and easy to identify boundaries in highly semantic data."
"In this method, the gradient of distance is used to split chunks along with the percentile method. This method is useful when chunks are highly correlated with each other or specific to a domain e.g. legal or medical. The idea is to apply anomaly detection on gradient array so that the distribution become wider and easy to identify boundaries in highly semantic data.\n",
"Similar to the percentile method, the split can be adjusted by the keyword argument `breakpoint_threshold_amount` which expects a number between 0.0 and 100.0, the default value is 95.0."
]
},
{
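A minimal sketch of how the threshold type and amount described above are selected; the embedding model and the values are illustrative, not recommendations:

```python
# SemanticChunker threshold selection.
from langchain_experimental.text_splitter import SemanticChunker
from langchain_openai import OpenAIEmbeddings

# Percentile (default): split where the sentence-to-sentence distance
# exceeds the given percentile of all distances (default 95.0).
text_splitter = SemanticChunker(
    OpenAIEmbeddings(),
    breakpoint_threshold_type="percentile",
    breakpoint_threshold_amount=95.0,
)

# The other strategies are chosen the same way, e.g.:
#   breakpoint_threshold_type="standard_deviation", breakpoint_threshold_amount=3.0
#   breakpoint_threshold_type="interquartile",      breakpoint_threshold_amount=1.5
#   breakpoint_threshold_type="gradient",           breakpoint_threshold_amount=95.0
docs = text_splitter.create_documents(["Some long document text to split ..."])
```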


@@ -55,7 +55,7 @@
"* Run `.read Chinook_Sqlite.sql`\n",
"* Test `SELECT * FROM Artist LIMIT 10;`\n",
"\n",
"Now, `Chinhook.db` is in our directory and we can interface with it using the SQLAlchemy-driven [SQLDatabase](https://python.langchain.com/api_reference/community/utilities/langchain_community.utilities.sql_database.SQLDatabase.html) class:"
"Now, `Chinook.db` is in our directory and we can interface with it using the SQLAlchemy-driven [SQLDatabase](https://python.langchain.com/api_reference/community/utilities/langchain_community.utilities.sql_database.SQLDatabase.html) class:"
]
},
{


@@ -51,7 +51,7 @@
"* Run `.read Chinook_Sqlite.sql`\n",
"* Test `SELECT * FROM Artist LIMIT 10;`\n",
"\n",
"Now, `Chinhook.db` is in our directory and we can interface with it using the SQLAlchemy-driven `SQLDatabase` class:"
"Now, `Chinook.db` is in our directory and we can interface with it using the SQLAlchemy-driven `SQLDatabase` class:"
]
},
{


@@ -54,7 +54,7 @@
"* Run `.read Chinook_Sqlite.sql`\n",
"* Test `SELECT * FROM Artist LIMIT 10;`\n",
"\n",
"Now, `Chinhook.db` is in our directory and we can interface with it using the SQLAlchemy-driven `SQLDatabase` class:"
"Now, `Chinook.db` is in our directory and we can interface with it using the SQLAlchemy-driven `SQLDatabase` class:"
]
},
{


@@ -336,7 +336,7 @@
"\n",
"The **MultiQueryRetriever** is used to tackle the problem that the RAG pipeline might not return the best set of documents based on the query. It generates multiple queries that mean the same as the original query and then fetches documents for each.\n",
"\n",
"To evluate this retriever, UpTrain will run the following evaluation:\n",
"To evaluate this retriever, UpTrain will run the following evaluation:\n",
"- **[Multi Query Accuracy](https://docs.uptrain.ai/predefined-evaluations/query-quality/multi-query-accuracy)**: Checks if the multi-queries generated mean the same as the original query."
]
},
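A minimal sketch of the retriever under evaluation here; the sample document, embedding model, and LLM choice are illustrative assumptions:

```python
# MultiQueryRetriever: the LLM rewrites the question into several paraphrases,
# retrieves documents for each, and returns the deduplicated union.
from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain_core.documents import Document
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

vectorstore = InMemoryVectorStore.from_documents(
    [Document(page_content="Task decomposition splits a goal into steps.")],
    OpenAIEmbeddings(),
)

retriever = MultiQueryRetriever.from_llm(
    retriever=vectorstore.as_retriever(),
    llm=ChatOpenAI(temperature=0),
)
docs = retriever.invoke("What are the approaches to task decomposition?")
```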


@@ -17,7 +17,7 @@
"source": [
"# ChatCerebras\n",
"\n",
"This notebook provides a quick overview for getting started with Cerebras [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatCerebras features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_cerebras.chat_models.ChatCerebras.html).\n",
"This notebook provides a quick overview for getting started with Cerebras [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatCerebras features and configurations head to the [API reference](https://python.langchain.com/api_reference/cerebras/chat_models/langchain_cerebras.chat_models.ChatCerebras.html#).\n",
"\n",
"At Cerebras, we've developed the world's largest and fastest AI processor, the Wafer-Scale Engine-3 (WSE-3). The Cerebras CS-3 system, powered by the WSE-3, represents a new class of AI supercomputer that sets the standard for generative AI training and inference with unparalleled performance and scalability.\n",
"\n",
@@ -37,7 +37,7 @@
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/cerebras) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatCerebras](https://api.python.langchain.com/en/latest/chat_models/langchain_cerebras.chat_models.ChatCerebras.html) | [langchain-cerebras](https://api.python.langchain.com/en/latest/cerebras_api_reference.html) | ❌ | beta | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-cerebras?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-cerebras?style=flat-square&label=%20) |\n",
"| [ChatCerebras](https://python.langchain.com/api_reference/cerebras/chat_models/langchain_cerebras.chat_models.ChatCerebras.html#) | [langchain-cerebras](https://python.langchain.com/api_reference/cerebras/index.html) | ❌ | beta | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-cerebras?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-cerebras?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling/) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
@@ -139,7 +139,7 @@
"from langchain_cerebras import ChatCerebras\n",
"\n",
"llm = ChatCerebras(\n",
" model=\"llama3.1-70b\",\n",
" model=\"llama-3.3-70b\",\n",
" # other params...\n",
")"
]
@@ -215,7 +215,7 @@
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"llm = ChatCerebras(\n",
" model=\"llama3.1-70b\",\n",
" model=\"llama-3.3-70b\",\n",
" # other params...\n",
")\n",
"\n",
@@ -280,7 +280,7 @@
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"llm = ChatCerebras(\n",
" model=\"llama3.1-70b\",\n",
" model=\"llama-3.3-70b\",\n",
" # other params...\n",
")\n",
"\n",
@@ -324,7 +324,7 @@
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"llm = ChatCerebras(\n",
" model=\"llama3.1-70b\",\n",
" model=\"llama-3.3-70b\",\n",
" # other params...\n",
")\n",
"\n",
@@ -371,7 +371,7 @@
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"llm = ChatCerebras(\n",
" model=\"llama3.1-70b\",\n",
" model=\"llama-3.3-70b\",\n",
" # other params...\n",
")\n",
"\n",
@@ -396,7 +396,7 @@
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatCerebras features and configurations head to the API reference: https://api.python.langchain.com/en/latest/chat_models/langchain_cerebras.chat_models.ChatCerebras.html"
"For detailed documentation of all ChatCerebras features and configurations head to the API reference: https://python.langchain.com/api_reference/cerebras/chat_models/langchain_cerebras.chat_models.ChatCerebras.html#"
]
}
],


@@ -36,7 +36,7 @@
"### Integration details\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/ibm/) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatWatsonx](https://python.langchain.com/api_reference/ibm/chat_models/langchain_ibm.chat_models.ChatWatsonx.html#langchain_ibm.chat_models.ChatWatsonx) | [langchain-ibm](https://python.langchain.com/api_reference/ibm/index.html) | ❌ | ❌ | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-ibm?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-ibm?style=flat-square&label=%20) |\n",
"| [ChatWatsonx](https://python.langchain.com/api_reference/ibm/chat_models/langchain_ibm.chat_models.ChatWatsonx.html) | [langchain-ibm](https://python.langchain.com/api_reference/ibm/index.html) | ❌ | ❌ | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-ibm?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-ibm?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling/) | [Structured output](/docs/how_to/structured_output/) | JSON mode | Image input | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",


@@ -63,9 +63,9 @@
" },\n",
" },\n",
" {\n",
" \"model_name\": \"gpt-4\",\n",
" \"model_name\": \"gpt-35-turbo\",\n",
" \"litellm_params\": {\n",
" \"model\": \"azure/gpt-4-1106-preview\",\n",
" \"model\": \"azure/gpt-35-turbo\",\n",
" \"api_key\": \"<your-api-key>\",\n",
" \"api_version\": \"2023-05-15\",\n",
" \"api_base\": \"https://<your-endpoint>.openai.azure.com/\",\n",
@@ -73,7 +73,7 @@
" },\n",
"]\n",
"litellm_router = Router(model_list=model_list)\n",
"chat = ChatLiteLLMRouter(router=litellm_router)"
"chat = ChatLiteLLMRouter(router=litellm_router, model_name=\"gpt-35-turbo\")"
]
},
{
@@ -177,6 +177,7 @@
"source": [
"chat = ChatLiteLLMRouter(\n",
" router=litellm_router,\n",
" model_name=\"gpt-35-turbo\",\n",
" streaming=True,\n",
" verbose=True,\n",
" callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),\n",
@@ -209,7 +210,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.13"
"version": "3.11.9"
}
},
"nbformat": 4,


@@ -39,7 +39,7 @@
"### Credentials\n",
"\n",
"\n",
"A valid [API key](https://console.mistral.ai/users/api-keys/) is needed to communicate with the API. Once you've done this set the MISTRAL_API_KEY environment variable:"
"A valid [API key](https://console.mistral.ai/api-keys/) is needed to communicate with the API. Once you've done this set the MISTRAL_API_KEY environment variable:"
]
},
{


@@ -19,7 +19,7 @@
"source": [
"# ChatOCIModelDeployment\n",
"\n",
"This will help you getting started with OCIModelDeployment [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatOCIModelDeployment features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.ChatOCIModelDeployment.html).\n",
"This will help you getting started with OCIModelDeployment [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatOCIModelDeployment features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.oci_data_science.ChatOCIModelDeployment.html).\n",
"\n",
"[OCI Data Science](https://docs.oracle.com/en-us/iaas/data-science/using/home.htm) is a fully managed and serverless platform for data science teams to build, train, and manage machine learning models in the Oracle Cloud Infrastructure. You can use [AI Quick Actions](https://blogs.oracle.com/ai-and-datascience/post/ai-quick-actions-in-oci-data-science) to easily deploy LLMs on [OCI Data Science Model Deployment Service](https://docs.oracle.com/en-us/iaas/data-science/using/model-dep-about.htm). You may choose to deploy the model with popular inference frameworks such as vLLM or TGI. By default, the model deployment endpoint mimics the OpenAI API protocol.\n",
"\n",
@@ -30,7 +30,7 @@
"\n",
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatOCIModelDeployment](https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.ChatOCIModelDeployment.html) | [langchain-community](https://api.python.langchain.com/en/latest/community_api_reference.html) | ❌ | beta | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-community?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-community?style=flat-square&label=%20) |\n",
"| [ChatOCIModelDeployment](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.oci_data_science.ChatOCIModelDeployment.html) | [langchain-community](https://python.langchain.com/api_reference/community/index.html) | ❌ | beta | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-community?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-community?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"\n",
@@ -430,9 +430,9 @@
"\n",
"For comprehensive details on all features and configurations, please refer to the API reference documentation for each class:\n",
"\n",
"* [ChatOCIModelDeployment](https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.oci_data_science.ChatOCIModelDeployment.html)\n",
"* [ChatOCIModelDeploymentVLLM](https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.oci_data_science.ChatOCIModelDeploymentVLLM.html)\n",
"* [ChatOCIModelDeploymentTGI](https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.oci_data_science.ChatOCIModelDeploymentTGI.html)"
"* [ChatOCIModelDeployment](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.oci_data_science.ChatOCIModelDeployment.html)\n",
"* [ChatOCIModelDeploymentVLLM](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.oci_data_science.ChatOCIModelDeploymentVLLM.html)\n",
"* [ChatOCIModelDeploymentTGI](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.oci_data_science.ChatOCIModelDeploymentTGI.html)"
]
}
],


@@ -26,14 +26,14 @@
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/oci_generative_ai) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatOCIGenAI](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.oci_generative_ai.ChatOCIGenAI.html) | [langchain-community](https://python.langchain.com/api_reference/community/index.html) | ❌ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-oci-generative-ai?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-oci-generative-ai?style=flat-square&label=%20) |\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/oci_generative_ai) |\n",
"| :--- | :--- | :---: | :---: | :---: |\n",
"| [ChatOCIGenAI](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.oci_generative_ai.ChatOCIGenAI.html) | [langchain-community](https://python.langchain.com/api_reference/community/index.html) | ❌ | ❌ | ❌ |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling/) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| [Tool calling](/docs/how_to/tool_calling/) | [Structured output](/docs/how_to/structured_output/) | [JSON mode](/docs/how_to/structured_output/#advanced-specifying-the-method-for-structuring-outputs) | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | | | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | \n",
"| ✅ | ✅ | | | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | \n",
"\n",
"## Setup\n",
"\n",


@@ -34,8 +34,8 @@
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling/) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | \n",
"| :---: |:----------------------------------------------------:| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ |\n",
"\n",
"## Setup\n",
"\n",
@@ -96,6 +96,20 @@
"%pip install -qU langchain-ollama"
]
},
{
"metadata": {},
"cell_type": "markdown",
"source": "Make sure you're using the latest Ollama version for structured outputs. Update by running:",
"id": "b18bd692076f7cf7"
},
{
"metadata": {},
"cell_type": "code",
"outputs": [],
"execution_count": null,
"source": "%pip install -U ollama",
"id": "b7a05cba95644c2e"
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",


@@ -17,7 +17,7 @@
"source": [
"# ChatOutlines\n",
"\n",
"This will help you getting started with Outlines [chat models](/docs/concepts/chat_models/). For detailed documentation of all ChatOutlines features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/chat_models/outlines.chat_models.ChatOutlines.html).\n",
"This will help you getting started with Outlines [chat models](/docs/concepts/chat_models/). For detailed documentation of all ChatOutlines features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.outlines.ChatOutlines.html).\n",
"\n",
"[Outlines](https://github.com/outlines-dev/outlines) is a library for constrained language generation. It allows you to use large language models (LLMs) with various backends while applying constraints to the generated output.\n",
"\n",
@@ -26,7 +26,7 @@
"\n",
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatOutlines](https://api.python.langchain.com/en/latest/chat_models/outlines.chat_models.ChatOutlines.html) | [langchain-community](https://api.python.langchain.com/en/latest/community_api_reference.html) | ✅ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-community?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-community?style=flat-square&label=%20) |\n",
"| [ChatOutlines](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.outlines.ChatOutlines.html) | [langchain-community](https://python.langchain.com/api_reference/community/index.html) | ✅ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-community?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-community?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
@@ -316,7 +316,7 @@
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatOutlines features and configurations head to the API reference: https://api.python.langchain.com/en/latest/chat_models/outlines.chat_models.ChatOutlines.html\n",
"For detailed documentation of all ChatOutlines features and configurations head to the API reference: https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.outlines.ChatOutlines.html\n",
"\n",
"## Full Outlines Documentation: \n",
"\n",


@@ -19,7 +19,7 @@
"source": [
"# ChatSambaStudio\n",
"\n",
"This will help you getting started with SambaStudio [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatStudio features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.sambanova.ChatSambaStudio.html).\n",
"This will help you getting started with SambaStudio [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatStudio features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.sambanova.ChatSambaStudio.html).\n",
"\n",
"**[SambaNova](https://sambanova.ai/)'s** [SambaStudio](https://docs.sambanova.ai/sambastudio/latest/sambastudio-intro.html) SambaStudio is a rich, GUI-based platform that provides the functionality to train, deploy, and manage models in SambaNova [DataScale](https://sambanova.ai/products/datascale) systems.\n",
"\n",
@@ -28,13 +28,13 @@
"\n",
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatSambaStudio](https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.sambanova.ChatSambaStudio.html) | [langchain-community](https://python.langchain.com/api_reference/community/index.html) | ❌ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_community?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_community?style=flat-square&label=%20) |\n",
"| [ChatSambaStudio](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.sambanova.ChatSambaStudio.html) | [langchain-community](https://python.langchain.com/api_reference/community/index.html) | ❌ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_community?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_community?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| | | | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | \n",
"| | | | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | \n",
"\n",
"## Setup\n",
"\n",
@@ -119,20 +119,20 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_models.sambanova import ChatSambaStudio\n",
"\n",
"llm = ChatSambaStudio(\n",
" model=\"Meta-Llama-3-70B-Instruct-4096\", # set if using a CoE endpoint\n",
" model=\"Meta-Llama-3-70B-Instruct-4096\", # set if using a Bundle endpoint\n",
" max_tokens=1024,\n",
" temperature=0.7,\n",
" top_k=1,\n",
" top_p=0.01,\n",
" do_sample=True,\n",
" process_prompt=\"True\", # set if using a CoE endpoint\n",
" process_prompt=\"True\", # set if using a Bundle endpoint\n",
")"
]
},
@@ -349,13 +349,141 @@
" print(chunk.content, end=\"\", flush=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Tool calling"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from datetime import datetime\n",
"\n",
"from langchain_core.messages import HumanMessage, SystemMessage, ToolMessage\n",
"from langchain_core.tools import tool\n",
"\n",
"\n",
"@tool\n",
"def get_time(kind: str = \"both\") -> str:\n",
" \"\"\"Returns current date, current time or both.\n",
" Args:\n",
" kind: date, time or both\n",
" \"\"\"\n",
" if kind == \"date\":\n",
" date = datetime.now().strftime(\"%m/%d/%Y\")\n",
" return f\"Current date: {date}\"\n",
" elif kind == \"time\":\n",
" time = datetime.now().strftime(\"%H:%M:%S\")\n",
" return f\"Current time: {time}\"\n",
" else:\n",
" date = datetime.now().strftime(\"%m/%d/%Y\")\n",
" time = datetime.now().strftime(\"%H:%M:%S\")\n",
" return f\"Current date: {date}, Current time: {time}\"\n",
"\n",
"\n",
"tools = [get_time]\n",
"\n",
"\n",
"def invoke_tools(tool_calls, messages):\n",
" available_functions = {tool.name: tool for tool in tools}\n",
" for tool_call in tool_calls:\n",
" selected_tool = available_functions[tool_call[\"name\"]]\n",
" tool_output = selected_tool.invoke(tool_call[\"args\"])\n",
" print(f\"Tool output: {tool_output}\")\n",
" messages.append(ToolMessage(tool_output, tool_call_id=tool_call[\"id\"]))\n",
" return messages"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"llm_with_tools = llm.bind_tools(tools=tools)\n",
"messages = [\n",
" HumanMessage(\n",
" content=\"I need to schedule a meeting for two weeks from today. Can you tell me the exact date of the meeting?\"\n",
" )\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Intermediate model response: [{'name': 'get_time', 'args': {'kind': 'date'}, 'id': 'call_4092d5dd21cd4eb494', 'type': 'tool_call'}]\n",
"Tool output: Current date: 11/07/2024\n",
"final response: The meeting will be exactly two weeks from today, which would be 25/07/2024.\n"
]
}
],
"source": [
"response = llm_with_tools.invoke(messages)\n",
"while len(response.tool_calls) > 0:\n",
" print(f\"Intermediate model response: {response.tool_calls}\")\n",
" messages.append(response)\n",
" messages = invoke_tools(response.tool_calls, messages)\n",
"response = llm_with_tools.invoke(messages)\n",
"\n",
"print(f\"final response: {response.content}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Structured Outputs"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Joke(setup='Why did the cat join a band?', punchline='Because it wanted to be the purr-cussionist!')"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"from pydantic import BaseModel, Field\n",
"\n",
"\n",
"class Joke(BaseModel):\n",
" \"\"\"Joke to tell user.\"\"\"\n",
"\n",
" setup: str = Field(description=\"The setup of the joke\")\n",
" punchline: str = Field(description=\"The punchline to the joke\")\n",
"\n",
"\n",
"structured_llm = llm.with_structured_output(Joke)\n",
"\n",
"structured_llm.invoke(\"Tell me a joke about cats\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatSambaStudio features and configurations head to the API reference: https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.sambanova.ChatSambaStudio.html"
"For detailed documentation of all ChatSambaStudio features and configurations head to the API reference: https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.sambanova.ChatSambaStudio.html"
]
}
],


@@ -22,24 +22,16 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"outputs": [],
"source": [
"%pip install --upgrade --quiet snowflake-snowpark-python"
]
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
@@ -73,14 +65,14 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_models import ChatSnowflakeCortex\n",
"from langchain_core.messages import HumanMessage, SystemMessage\n",
"\n",
"# By default, we'll be using the cortex provided model: `snowflake-arctic`, with function: `complete`\n",
"# By default, we'll be using the cortex provided model: `mistral-large`, with function: `complete`\n",
"chat = ChatSnowflakeCortex()"
]
},
@@ -92,16 +84,16 @@
"\n",
"```python\n",
"chat = ChatSnowflakeCortex(\n",
" # change default cortex model and function\n",
" model=\"snowflake-arctic\",\n",
" # Change the default cortex model and function\n",
" model=\"mistral-large\",\n",
" cortex_function=\"complete\",\n",
"\n",
" # change default generation parameters\n",
" # Change the default generation parameters\n",
" temperature=0,\n",
" max_tokens=10,\n",
" top_p=0.95,\n",
"\n",
" # specify snowflake credentials\n",
" # Specify your Snowflake Credentials\n",
" account=\"YOUR_SNOWFLAKE_ACCOUNT\",\n",
" username=\"YOUR_SNOWFLAKE_USERNAME\",\n",
" password=\"YOUR_SNOWFLAKE_PASSWORD\",\n",
@@ -117,28 +109,13 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Calling the model\n",
"We can now call the model using the `invoke` or `generate` method.\n",
"\n",
"#### Generation"
"### Calling the chat model\n",
"We can now call the chat model using the `invoke` or `stream` methods."
]
},
{
"cell_type": "code",
"execution_count": 9,
"cell_type": "markdown",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\" Large language models are artificial intelligence systems designed to understand, generate, and manipulate human language. These models are typically based on deep learning techniques and are trained on vast amounts of text data to learn patterns and structures in language. They can perform a wide range of language-related tasks, such as language translation, text generation, sentiment analysis, and answering questions. Some well-known large language models include Google's BERT, OpenAI's GPT series, and Facebook's RoBERTa. These models have shown remarkable performance in various natural language processing tasks, and their applications continue to expand as research in AI progresses.\", response_metadata={'completion_tokens': 131, 'prompt_tokens': 29, 'total_tokens': 160}, id='run-5435bd0a-83fd-4295-b237-66cbd1b5c0f3-0')"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" SystemMessage(content=\"You are a friendly assistant.\"),\n",
@@ -151,14 +128,31 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Streaming\n",
"`ChatSnowflakeCortex` doesn't support streaming as of now. Support for streaming will be coming in the later versions!"
"### Stream"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sample input prompt\n",
"messages = [\n",
" SystemMessage(content=\"You are a friendly assistant.\"),\n",
" HumanMessage(content=\"What are large language models?\"),\n",
"]\n",
"\n",
"# Invoke the stream method and print each chunk as it arrives\n",
"print(\"Stream Method Response:\")\n",
"for chunk in chat._stream(messages):\n",
" print(chunk.message.content)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"display_name": "langchain",
"language": "python",
"name": "python3"
},
@@ -172,7 +166,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.9.20"
}
},
"nbformat": 4,


@@ -11,7 +11,7 @@
"A loader for `Confluence` pages.\n",
"\n",
"\n",
"This currently supports `username/api_key`, `Oauth2 login`. Additionally, on-prem installations also support `token` authentication. \n",
"This currently supports `username/api_key`, `Oauth2 login`, `cookies`. Additionally, on-prem installations also support `token` authentication. \n",
"\n",
"\n",
"Specify a list `page_id`-s and/or `space_key` to load in the corresponding pages into Document objects, if both are specified the union of both sets will be returned.\n",


@@ -0,0 +1,253 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Needle Document Loader\n",
"[Needle](https://needle-ai.com) makes it easy to create your RAG pipelines with minimal effort. \n",
"\n",
"For more details, refer to our [API documentation](https://docs.needle-ai.com/docs/api-reference/needle-api)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Overview\n",
"The Needle Document Loader is a utility for integrating Needle collections with LangChain. It enables seamless storage, retrieval, and utilization of documents for Retrieval-Augmented Generation (RAG) workflows.\n",
"\n",
"This example demonstrates:\n",
"\n",
"* Storing documents into a Needle collection.\n",
"* Setting up a retriever to fetch documents.\n",
"* Building a Retrieval-Augmented Generation (RAG) pipeline."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Setup\n",
"Before starting, ensure you have the following environment variables set:\n",
"\n",
"* NEEDLE_API_KEY: Your API key for authenticating with Needle.\n",
"* OPENAI_API_KEY: Your OpenAI API key for language model operations."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"import os"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"os.environ[\"NEEDLE_API_KEY\"] = \"\""
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"os.environ[\"OPENAI_API_KEY\"] = \"\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialization\n",
"To initialize the NeedleLoader, you need the following parameters:\n",
"\n",
"* needle_api_key: Your Needle API key (or set it as an environment variable).\n",
"* collection_id: The ID of the Needle collection to work with."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiation"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders.needle import NeedleLoader\n",
"\n",
"collection_id = \"clt_01J87M9T6B71DHZTHNXYZQRG5H\"\n",
"\n",
"# Initialize NeedleLoader to store documents to the collection\n",
"document_loader = NeedleLoader(\n",
" needle_api_key=os.getenv(\"NEEDLE_API_KEY\"),\n",
" collection_id=collection_id,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load\n",
"To add files to the Needle collection:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"files = {\n",
" \"tech-radar-30.pdf\": \"https://www.thoughtworks.com/content/dam/thoughtworks/documents/radar/2024/04/tr_technology_radar_vol_30_en.pdf\"\n",
"}\n",
"\n",
"document_loader.add_files(files=files)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"# Show the documents in the collection\n",
"# collections_documents = document_loader.load()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Lazy Load\n",
"The lazy_load method allows you to iteratively load documents from the Needle collection, yielding each document as it is fetched:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Show the documents in the collection\n",
"# collections_documents = document_loader.lazy_load()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Usage\n",
"### Use within a chain\n",
"Below is a complete example of setting up a RAG pipeline with Needle within a chain:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'input': 'Did RAG move to accepted?',\n",
" 'context': [Document(metadata={}, page_content='New Moved in/out No change\\n\\n© Thoughtworks, Inc. All Rights Reserved. 12\\n\\nTechniques\\n\\n1. Retrieval-augmented generation (RAG)\\nAdopt\\n\\nRetrieval-augmented generation (RAG) is the preferred pattern for our teams to improve the quality of \\nresponses generated by a large language model (LLM). Weve successfully used it in several projects, \\nincluding the popular Jugalbandi AI Platform. With RAG, information about relevant and trustworthy \\ndocuments — in formats like HTML and PDF — are stored in databases that supports a vector data \\ntype or efficient document search, such as pgvector, Qdrant or Elasticsearch Relevance Engine. For \\na given prompt, the database is queried to retrieve relevant documents, which are then combined \\nwith the prompt to provide richer context to the LLM. This results in higher quality output and greatly \\nreduced hallucinations. The context window — which determines the maximum size of the LLM input \\n— is limited, which means that selecting the most relevant documents is crucial. We improve the \\nrelevancy of the content that is added to the prompt by reranking. Similarly, the documents are usually \\ntoo large to calculate an embedding, which means they must be split into smaller chunks. This is often \\na difficult problem, and one approach is to have the chunks overlap to a certain extent.'),\n",
" Document(metadata={}, page_content='New Moved in/out No change\\n\\n© Thoughtworks, Inc. All Rights Reserved. 12\\n\\nTechniques\\n\\n1. Retrieval-augmented generation (RAG)\\nAdopt\\n\\nRetrieval-augmented generation (RAG) is the preferred pattern for our teams to improve the quality of \\nresponses generated by a large language model (LLM). Weve successfully used it in several projects, \\nincluding the popular Jugalbandi AI Platform. With RAG, information about relevant and trustworthy \\ndocuments — in formats like HTML and PDF — are stored in databases that supports a vector data \\ntype or efficient document search, such as pgvector, Qdrant or Elasticsearch Relevance Engine. For \\na given prompt, the database is queried to retrieve relevant documents, which are then combined \\nwith the prompt to provide richer context to the LLM. This results in higher quality output and greatly \\nreduced hallucinations. The context window — which determines the maximum size of the LLM input \\n— is limited, which means that selecting the most relevant documents is crucial. We improve the \\nrelevancy of the content that is added to the prompt by reranking. Similarly, the documents are usually \\ntoo large to calculate an embedding, which means they must be split into smaller chunks. This is often \\na difficult problem, and one approach is to have the chunks overlap to a certain extent.'),\n",
" Document(metadata={}, page_content='New Moved in/out No change\\n\\n© Thoughtworks, Inc. All Rights Reserved. 12\\n\\nTechniques\\n\\n1. Retrieval-augmented generation (RAG)\\nAdopt\\n\\nRetrieval-augmented generation (RAG) is the preferred pattern for our teams to improve the quality of \\nresponses generated by a large language model (LLM). Weve successfully used it in several projects, \\nincluding the popular Jugalbandi AI Platform. With RAG, information about relevant and trustworthy \\ndocuments — in formats like HTML and PDF — are stored in databases that supports a vector data \\ntype or efficient document search, such as pgvector, Qdrant or Elasticsearch Relevance Engine. For \\na given prompt, the database is queried to retrieve relevant documents, which are then combined \\nwith the prompt to provide richer context to the LLM. This results in higher quality output and greatly \\nreduced hallucinations. The context window — which determines the maximum size of the LLM input \\n— is limited, which means that selecting the most relevant documents is crucial. We improve the \\nrelevancy of the content that is added to the prompt by reranking. Similarly, the documents are usually \\ntoo large to calculate an embedding, which means they must be split into smaller chunks. This is often \\na difficult problem, and one approach is to have the chunks overlap to a certain extent.'),\n",
" Document(metadata={}, page_content='New Moved in/out No change\\n\\n© Thoughtworks, Inc. All Rights Reserved. 12\\n\\nTechniques\\n\\n1. Retrieval-augmented generation (RAG)\\nAdopt\\n\\nRetrieval-augmented generation (RAG) is the preferred pattern for our teams to improve the quality of \\nresponses generated by a large language model (LLM). Weve successfully used it in several projects, \\nincluding the popular Jugalbandi AI Platform. With RAG, information about relevant and trustworthy \\ndocuments — in formats like HTML and PDF — are stored in databases that supports a vector data \\ntype or efficient document search, such as pgvector, Qdrant or Elasticsearch Relevance Engine. For \\na given prompt, the database is queried to retrieve relevant documents, which are then combined \\nwith the prompt to provide richer context to the LLM. This results in higher quality output and greatly \\nreduced hallucinations. The context window — which determines the maximum size of the LLM input \\n— is limited, which means that selecting the most relevant documents is crucial. We improve the \\nrelevancy of the content that is added to the prompt by reranking. Similarly, the documents are usually \\ntoo large to calculate an embedding, which means they must be split into smaller chunks. This is often \\na difficult problem, and one approach is to have the chunks overlap to a certain extent.'),\n",
" Document(metadata={}, page_content='New Moved in/out No change\\n\\n© Thoughtworks, Inc. All Rights Reserved. 12\\n\\nTechniques\\n\\n1. Retrieval-augmented generation (RAG)\\nAdopt\\n\\nRetrieval-augmented generation (RAG) is the preferred pattern for our teams to improve the quality of \\nresponses generated by a large language model (LLM). Weve successfully used it in several projects, \\nincluding the popular Jugalbandi AI Platform. With RAG, information about relevant and trustworthy \\ndocuments — in formats like HTML and PDF — are stored in databases that supports a vector data \\ntype or efficient document search, such as pgvector, Qdrant or Elasticsearch Relevance Engine. For \\na given prompt, the database is queried to retrieve relevant documents, which are then combined \\nwith the prompt to provide richer context to the LLM. This results in higher quality output and greatly \\nreduced hallucinations. The context window — which determines the maximum size of the LLM input \\n— is limited, which means that selecting the most relevant documents is crucial. We improve the \\nrelevancy of the content that is added to the prompt by reranking. Similarly, the documents are usually \\ntoo large to calculate an embedding, which means they must be split into smaller chunks. This is often \\na difficult problem, and one approach is to have the chunks overlap to a certain extent.')],\n",
" 'answer': 'Yes, RAG has been adopted as the preferred pattern for improving the quality of responses generated by a large language model.'}"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import os\n",
"\n",
"from langchain.chains import create_retrieval_chain\n",
"from langchain.chains.combine_documents import create_stuff_documents_chain\n",
"from langchain_community.retrievers.needle import NeedleRetriever\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(temperature=0)\n",
"\n",
"# Initialize the Needle retriever (make sure your Needle API key is set as an environment variable)\n",
"retriever = NeedleRetriever(\n",
" needle_api_key=os.getenv(\"NEEDLE_API_KEY\"),\n",
" collection_id=\"clt_01J87M9T6B71DHZTHNXYZQRG5H\",\n",
")\n",
"\n",
"# Define system prompt for the assistant\n",
"system_prompt = \"\"\"\n",
" You are an assistant for question-answering tasks. \n",
" Use the following pieces of retrieved context to answer the question.\n",
" If you don't know, say so concisely.\\n\\n{context}\n",
" \"\"\"\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [(\"system\", system_prompt), (\"human\", \"{input}\")]\n",
")\n",
"\n",
"# Define the question-answering chain using a document chain (stuff chain) and the retriever\n",
"question_answer_chain = create_stuff_documents_chain(llm, prompt)\n",
"\n",
"# Create the RAG (Retrieval-Augmented Generation) chain by combining the retriever and the question-answering chain\n",
"rag_chain = create_retrieval_chain(retriever, question_answer_chain)\n",
"\n",
"# Define the input query\n",
"query = {\"input\": \"Did RAG move to accepted?\"}\n",
"\n",
"response = rag_chain.invoke(query)\n",
"\n",
"response"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `Needle` features and configurations head to the API reference: https://docs.needle-ai.com"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -103,7 +103,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [

View File

@@ -153,7 +153,7 @@
"\n",
"Best practices for developing with LangChain.\n",
"\n",
"### [API reference](https://api.python.langchain.com) [](\\#api-reference \"Direct link to api-reference\")\n",
"### [API reference](https://python.langchain.com/api_reference/) [](\\#api-reference \"Direct link to api-reference\")\n",
"\n",
"Head to the reference section for full documentation of all classes and methods in the LangChain and LangChain Experimental Python packages.\n",
"\n",

View File

@@ -0,0 +1,149 @@
{
"cells": [
{
"cell_type": "raw",
"metadata": {},
"source": [
"---\n",
"sidebar_label: YoutubeLoaderDL\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# YoutubeLoaderDL\n",
"\n",
"Loader for Youtube leveraging the `yt-dlp` library.\n",
"\n",
"This package implements a [document loader](/docs/concepts/document_loaders/) for Youtube. In contrast to the [YoutubeLoader](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.youtube.YoutubeLoader.html) of `langchain-community`, which relies on `pytube`, `YoutubeLoaderDL` is able to fetch YouTube metadata. `langchain-yt-dlp` leverages the robust `yt-dlp` library, providing a more reliable and feature-rich YouTube document loader.\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | JS Support |\n",
"| :--- | :--- | :---: | :---: | :---: |\n",
"| YoutubeLoader | langchain-yt-dlp | ✅ | ✅ | ❌ |\n",
"\n",
"## Setup\n",
"\n",
"### Installation\n",
"\n",
"```bash\n",
"pip install langchain-yt-dlp\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Initialization"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"from langchain_yt_dlp.youtube_loader import YoutubeLoaderDL\n",
"\n",
"# Basic transcript loading\n",
"loader = YoutubeLoaderDL.from_youtube_url(\n",
" \"https://www.youtube.com/watch?v=dQw4w9WgXcQ\", add_video_info=True\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"documents = loader.load()"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'source': 'dQw4w9WgXcQ',\n",
" 'title': 'Rick Astley - Never Gonna Give You Up (Official Music Video)',\n",
" 'description': 'The official video for “Never Gonna Give You Up” by Rick Astley. \\n\\nNever: The Autobiography 📚 OUT NOW! \\nFollow this link to get your copy and listen to Ricks Never playlist ❤️ #RickAstleyNever\\nhttps://linktr.ee/rickastleynever\\n\\n“Never Gonna Give You Up” was a global smash on its release in July 1987, topping the charts in 25 countries including Ricks native UK and the US Billboard Hot 100. It also won the Brit Award for Best single in 1988. Stock Aitken and Waterman wrote and produced the track which was the lead-off single and lead track from Ricks debut LP “Whenever You Need Somebody”. The album was itself a UK number one and would go on to sell over 15 million copies worldwide.\\n\\nThe legendary video was directed by Simon West who later went on to make Hollywood blockbusters such as Con Air, Lara Croft Tomb Raider and The Expendables 2. The video passed the 1bn YouTube views milestone on 28 July 2021.\\n\\nSubscribe to the official Rick Astley YouTube channel: https://RickAstley.lnk.to/YTSubID\\n\\nFollow Rick Astley:\\nFacebook: https://RickAstley.lnk.to/FBFollowID \\nTwitter: https://RickAstley.lnk.to/TwitterID \\nInstagram: https://RickAstley.lnk.to/InstagramID \\nWebsite: https://RickAstley.lnk.to/storeID \\nTikTok: https://RickAstley.lnk.to/TikTokID\\n\\nListen to Rick Astley:\\nSpotify: https://RickAstley.lnk.to/SpotifyID \\nApple Music: https://RickAstley.lnk.to/AppleMusicID \\nAmazon Music: https://RickAstley.lnk.to/AmazonMusicID \\nDeezer: https://RickAstley.lnk.to/DeezerID \\n\\nLyrics:\\nWere no strangers to love\\nYou know the rules and so do I\\nA full commitments what Im thinking of\\nYou wouldnt get this from any other guy\\n\\nI just wanna tell you how Im feeling\\nGotta make you understand\\n\\nNever gonna give you up\\nNever gonna let you down\\nNever gonna run around and desert you\\nNever gonna make you cry\\nNever gonna say goodbye\\nNever gonna tell a lie and hurt you\\n\\nWeve known each other for so long\\nYour hearts been aching but youre too shy to say it\\nInside we both know whats been going on\\nWe know the game and were gonna play it\\n\\nAnd if you ask me how Im feeling\\nDont tell me youre too blind to see\\n\\nNever gonna give you up\\nNever gonna let you down\\nNever gonna run around and desert you\\nNever gonna make you cry\\nNever gonna say goodbye\\nNever gonna tell a lie and hurt you\\n\\n#RickAstley #NeverGonnaGiveYouUp #WheneverYouNeedSomebody #OfficialMusicVideo',\n",
" 'view_count': 1603360806,\n",
" 'publish_date': datetime.datetime(2009, 10, 25, 0, 0),\n",
" 'length': 212,\n",
" 'author': 'Rick Astley',\n",
" 'channel_id': 'UCuAXFkgsw1L7xaCfnd5JJOw',\n",
" 'webpage_url': 'https://www.youtube.com/watch?v=dQw4w9WgXcQ'}"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"documents[0].metadata"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Lazy Load"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"- No lazy loading is implemented"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference:\n",
"\n",
"- [Github](https://github.com/aqib0770/langchain-yt-dlp)\n",
"- [PyPi](https://pypi.org/project/langchain-yt-dlp/)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

File diff suppressed because it is too large

View File

@@ -12,6 +12,16 @@
"First, let's install some dependencies"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bedbf252-4ea5-4eea-a3dc-d18ccc84aca3",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-openai langchain-community"
]
},
{
"cell_type": "code",
"execution_count": 2,
@@ -19,8 +29,6 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-openai langchain-community\n",
"\n",
"import os\n",
"from getpass import getpass\n",
"\n",
@@ -55,7 +63,7 @@
"tags": []
},
"source": [
"## `In Memory` Cache"
"## `In Memory` cache"
]
},
{
@@ -139,7 +147,7 @@
"tags": []
},
"source": [
"## `SQLite` Cache"
"## `SQLite` cache"
]
},
{
@@ -234,7 +242,7 @@
"id": "e71273ab",
"metadata": {},
"source": [
"## `Upstash Redis` Cache"
"## `Upstash Redis` caches"
]
},
{
@@ -242,7 +250,7 @@
"id": "f10dabef",
"metadata": {},
"source": [
"### Standard Cache\n",
"### Standard cache\n",
"Use [Upstash Redis](https://upstash.com) to cache prompts and responses with a serverless HTTP API."
]
},
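A minimal sketch of wiring this in, assuming you have the Upstash REST URL and token for your database (the placeholder strings must be replaced with real credentials):

```python
from langchain_community.cache import UpstashRedisCache
from langchain_core.globals import set_llm_cache
from upstash_redis import Redis

# Serverless Redis client that talks to Upstash over HTTP
redis_client = Redis(url="<UPSTASH_REDIS_REST_URL>", token="<UPSTASH_REDIS_REST_TOKEN>")

# Cache exact prompt/response pairs for all LLM calls
set_llm_cache(UpstashRedisCache(redis_=redis_client))
```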
@@ -340,7 +348,8 @@
"id": "b29dd776",
"metadata": {},
"source": [
"### Semantic Cache\n",
"### Semantic cache\n",
"\n",
"Use [Upstash Vector](https://upstash.com/docs/vector/overall/whatisvector) to do a semantic similarity search and cache the most similar response in the database. The vectorization is automatically done by the selected embedding model while creating Upstash Vector database. "
]
},
@@ -454,11 +463,10 @@
"cell_type": "markdown",
"id": "278ad7ae",
"metadata": {
"jp-MarkdownHeadingCollapsed": true,
"tags": []
},
"source": [
"## `Redis` Cache\n",
"## `Redis` caches\n",
"\n",
"See the main [Redis cache docs](/docs/integrations/caches/redis_llm_caching/) for detail."
]
@@ -468,7 +476,7 @@
"id": "c5c9a4d5",
"metadata": {},
"source": [
"### Standard Cache\n",
"### Standard cache\n",
"Use [Redis](/docs/integrations/providers/redis) to cache prompts and responses."
]
},
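A minimal sketch of the standard cache, assuming a Redis server is listening on the default local port:

```python
from langchain_community.cache import RedisCache
from langchain_core.globals import set_llm_cache
from redis import Redis

# Exact-match caching: an identical prompt returns the cached response
set_llm_cache(RedisCache(redis_=Redis()))
```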
@@ -564,7 +572,7 @@
"id": "82be23f6",
"metadata": {},
"source": [
"### Semantic Cache\n",
"### Semantic cache\n",
"Use [Redis](/docs/integrations/providers/redis) to cache prompts and responses and evaluate hits based on semantic similarity."
]
},
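A minimal sketch, assuming a local Redis instance and OpenAI embeddings (any `Embeddings` implementation can be swapped in):

```python
from langchain_community.cache import RedisSemanticCache
from langchain_core.globals import set_llm_cache
from langchain_openai import OpenAIEmbeddings

# A hit is returned when a new prompt is semantically close enough to a cached one
set_llm_cache(
    RedisSemanticCache(redis_url="redis://localhost:6379", embedding=OpenAIEmbeddings())
)
```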
@@ -660,7 +668,6 @@
"cell_type": "markdown",
"id": "684eab55",
"metadata": {
"jp-MarkdownHeadingCollapsed": true,
"tags": []
},
"source": [
@@ -905,7 +912,7 @@
"id": "9b2b2777",
"metadata": {},
"source": [
"## `MongoDB Atlas` Cache\n",
"## `MongoDB Atlas` caches\n",
"\n",
"[MongoDB Atlas](https://www.mongodb.com/docs/atlas/) is a fully-managed cloud database available in AWS, Azure, and GCP. It has native support for \n",
"Vector Search on the MongoDB document data.\n",
@@ -917,8 +924,9 @@
"id": "ecdc2a0a",
"metadata": {},
"source": [
"### `MongoDBCache`\n",
"An abstraction to store a simple cache in MongoDB. This does not use Semantic Caching, nor does it require an index to be made on the collection before generation.\n",
"### Standard cache\n",
"\n",
"Standard cache is a simple cache in MongoDB. It does not use Semantic Caching, nor does it require an index to be made on the collection before generation.\n",
"\n",
"To import this cache, first install the required dependency:\n",
"\n",
@@ -950,8 +958,9 @@
"```\n",
"\n",
"\n",
"### `MongoDBAtlasSemanticCache`\n",
"Semantic caching allows users to retrieve cached prompts based on semantic similarity between the user input and previously cached results. Under the hood it blends MongoDBAtlas as both a cache and a vectorstore.\n",
"### Semantic cache\n",
"\n",
"Semantic caching allows retrieval of cached prompts based on semantic similarity between the user input and previously cached results. Under the hood, it blends MongoDBAtlas as both a cache and a vectorstore.\n",
"The MongoDBAtlasSemanticCache inherits from `MongoDBAtlasVectorSearch` and needs an Atlas Vector Search Index defined to work. Please look at the [usage example](/docs/integrations/vectorstores/mongodb_atlas) on how to set up the index.\n",
"\n",
"To import this cache:\n",
@@ -985,14 +994,13 @@
"cell_type": "markdown",
"id": "726fe754",
"metadata": {
"jp-MarkdownHeadingCollapsed": true,
"tags": []
},
"source": [
"## `Momento` Cache\n",
"## `Momento` cache\n",
"Use [Momento](/docs/integrations/providers/momento) to cache prompts and responses.\n",
"\n",
"Requires momento to use, uncomment below to install:"
"Requires installing the `momento` package:"
]
},
{
@@ -1096,13 +1104,14 @@
"cell_type": "markdown",
"id": "934943dc",
"metadata": {
"jp-MarkdownHeadingCollapsed": true,
"tags": []
},
"source": [
"## `SQLAlchemy` Cache\n",
"## `SQLAlchemy` cache\n",
"\n",
"You can use `SQLAlchemyCache` to cache with any SQL database supported by `SQLAlchemy`."
"You can use `SQLAlchemyCache` to cache with any SQL database supported by `SQLAlchemy`.\n",
"\n",
"### Standard cache"
]
},
{
@@ -1112,11 +1121,11 @@
"metadata": {},
"outputs": [],
"source": [
"# from langchain.cache import SQLAlchemyCache\n",
"# from sqlalchemy import create_engine\n",
"from langchain.cache import SQLAlchemyCache\n",
"from sqlalchemy import create_engine\n",
"\n",
"# engine = create_engine(\"postgresql://postgres:postgres@localhost:5432/postgres\")\n",
"# set_llm_cache(SQLAlchemyCache(engine))"
"engine = create_engine(\"postgresql://postgres:postgres@localhost:5432/postgres\")\n",
"set_llm_cache(SQLAlchemyCache(engine))"
]
},
{
@@ -1124,7 +1133,9 @@
"id": "0959d640",
"metadata": {},
"source": [
"### Custom SQLAlchemy Schemas"
"### Custom SQLAlchemy schemas\n",
"\n",
"You can define your own declarative `SQLAlchemyCache` child class to customize the schema used for caching. For example, to support high-speed fulltext prompt indexing with `Postgres`, use:"
]
},
{
@@ -1134,8 +1145,6 @@
"metadata": {},
"outputs": [],
"source": [
"# You can define your own declarative SQLAlchemyCache child class to customize the schema used for caching. For example, to support high-speed fulltext prompt indexing with Postgres, use:\n",
"\n",
"from langchain_community.cache import SQLAlchemyCache\n",
"from sqlalchemy import Column, Computed, Index, Integer, Sequence, String, create_engine\n",
"from sqlalchemy.ext.declarative import declarative_base\n",
@@ -1185,7 +1194,7 @@
"id": "6cf6acb4-1bc4-4c4b-9325-2420c17e5e2b",
"metadata": {},
"source": [
"### Required dependency"
"Required dependency:"
]
},
{
@@ -1203,7 +1212,7 @@
"id": "a4a6725d",
"metadata": {},
"source": [
"### Connect to the DB\n",
"### Connecting to the DB\n",
"\n",
"The Cassandra caches shown in this page can be used with Cassandra as well as other derived databases, such as Astra DB, which use the CQL (Cassandra Query Language) protocol.\n",
"\n",
@@ -1217,7 +1226,7 @@
"id": "15735abe-2567-43ce-aa91-f253b33b5a88",
"metadata": {},
"source": [
"#### Connecting to a Cassandra cluster\n",
"#### to a Cassandra cluster\n",
"\n",
"You first need to create a `cassandra.cluster.Session` object, as described in the [Cassandra driver documentation](https://docs.datastax.com/en/developer/python-driver/latest/api/cassandra/cluster/#module-cassandra.cluster). The details vary (e.g. with network settings and authentication), but this might be something like:"
]
@@ -1270,7 +1279,7 @@
"id": "2cc7ba29-8f84-4fbf-aaf7-3daa1be7e7b0",
"metadata": {},
"source": [
"#### Connecting to Astra DB through CQL\n",
"#### to Astra DB through CQL\n",
"\n",
"In this case you initialize CassIO with the following connection parameters:\n",
"\n",
@@ -1329,7 +1338,7 @@
"id": "8665664a",
"metadata": {},
"source": [
"### Cassandra: Exact cache\n",
"### Standard cache\n",
"\n",
"This will avoid invoking the LLM when the supplied prompt is _exactly_ the same as one encountered already:"
]
@@ -1400,7 +1409,7 @@
"id": "8fc4d017",
"metadata": {},
"source": [
"### Cassandra: Semantic cache\n",
"### Semantic cache\n",
"\n",
"This cache will do a semantic similarity search and return a hit if it finds a cached entry that is similar enough, For this, you need to provide an `Embeddings` instance of your choice."
]
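A minimal sketch, assuming CassIO has already been initialized against your database as described above; the table name is illustrative:

```python
from langchain_community.cache import CassandraSemanticCache
from langchain_core.globals import set_llm_cache
from langchain_openai import OpenAIEmbeddings

# Lookups run a vector similarity search over previously cached prompts
set_llm_cache(
    CassandraSemanticCache(
        embedding=OpenAIEmbeddings(),
        table_name="my_semantic_cache",
    )
)
```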
@@ -1488,9 +1497,9 @@
"id": "55dc84b3-37cb-4f19-b175-40e18e06f83f",
"metadata": {},
"source": [
"#### Attribution statement\n",
"**Attribution statement:**\n",
"\n",
">Apache Cassandra, Cassandra and Apache are either registered trademarks or trademarks of the [Apache Software Foundation](http://www.apache.org/) in the United States and/or other countries."
">`Apache Cassandra`, `Cassandra` and `Apache` are either registered trademarks or trademarks of the [Apache Software Foundation](http://www.apache.org/) in the United States and/or other countries."
]
},
{
@@ -1498,7 +1507,7 @@
"id": "8712f8fc-bb89-4164-beb9-c672778bbd91",
"metadata": {},
"source": [
"## `Astra DB` Caches"
"## `Astra DB` caches"
]
},
{
@@ -1543,7 +1552,7 @@
"id": "ee6d587f-4b7c-43f4-9e90-5129c842a143",
"metadata": {},
"source": [
"### Astra DB exact LLM cache\n",
"### Standard cache\n",
"\n",
"This will avoid invoking the LLM when the supplied prompt is _exactly_ the same as one encountered already:"
]
@@ -1619,7 +1628,7 @@
"id": "524b94fa-6162-4880-884d-d008749d14e2",
"metadata": {},
"source": [
"### Astra DB Semantic cache\n",
"### Semantic cache\n",
"\n",
"This cache will do a semantic similarity search and return a hit if it finds a cached entry that is similar enough, For this, you need to provide an `Embeddings` instance of your choice."
]
@@ -1713,7 +1722,7 @@
}
},
"source": [
"## Azure Cosmos DB Semantic Cache\n",
"## `Azure Cosmos DB` semantic cache\n",
"\n",
"You can use this integrated [vector database](https://learn.microsoft.com/en-us/azure/cosmos-db/vector-database) for caching."
]
@@ -1848,18 +1857,162 @@
"output_type": "execute_result"
}
],
"source": [
"%%time\n",
"# The second time it is, so it goes faster\n",
"llm.invoke(\"Tell me a joke\")"
]
},
{
"cell_type": "markdown",
"id": "235ff73bf7143f13",
"metadata": {},
"source": [
"## `Azure Cosmos DB NoSql` semantic cache\n",
"\n",
"You can use this integrated [vector database](https://learn.microsoft.com/en-us/azure/cosmos-db/vector-database) for caching."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "41fea5aa7b2153ca",
"metadata": {
"ExecuteTime": {
"end_time": "2024-12-06T00:55:38.648972Z",
"start_time": "2024-12-06T00:55:38.290541Z"
}
},
"outputs": [],
"source": [
"from typing import Any, Dict\n",
"\n",
"from azure.cosmos import CosmosClient, PartitionKey\n",
"from langchain_community.cache import AzureCosmosDBNoSqlSemanticCache\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"HOST = \"COSMOS_DB_URI\"\n",
"KEY = \"COSMOS_DB_KEY\"\n",
"\n",
"cosmos_client = CosmosClient(HOST, KEY)\n",
"\n",
"\n",
"def get_vector_indexing_policy() -> dict:\n",
" return {\n",
" \"indexingMode\": \"consistent\",\n",
" \"includedPaths\": [{\"path\": \"/*\"}],\n",
" \"excludedPaths\": [{\"path\": '/\"_etag\"/?'}],\n",
" \"vectorIndexes\": [{\"path\": \"/embedding\", \"type\": \"diskANN\"}],\n",
" }\n",
"\n",
"\n",
"def get_vector_embedding_policy() -> dict:\n",
" return {\n",
" \"vectorEmbeddings\": [\n",
" {\n",
" \"path\": \"/embedding\",\n",
" \"dataType\": \"float32\",\n",
" \"dimensions\": 1536,\n",
" \"distanceFunction\": \"cosine\",\n",
" }\n",
" ]\n",
" }\n",
"\n",
"\n",
"cosmos_container_properties_test = {\"partition_key\": PartitionKey(path=\"/id\")}\n",
"cosmos_database_properties_test: Dict[str, Any] = {}\n",
"\n",
"set_llm_cache(\n",
" AzureCosmosDBNoSqlSemanticCache(\n",
" cosmos_client=cosmos_client,\n",
" embedding=OpenAIEmbeddings(),\n",
" vector_embedding_policy=get_vector_embedding_policy(),\n",
" indexing_policy=get_vector_indexing_policy(),\n",
" cosmos_container_properties=cosmos_container_properties_test,\n",
" cosmos_database_properties=cosmos_database_properties_test,\n",
" )\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "1e1cd93819921bf6",
"metadata": {
"ExecuteTime": {
"end_time": "2024-12-06T00:55:44.513080Z",
"start_time": "2024-12-06T00:55:41.353843Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"CPU times: user 374 ms, sys: 34.2 ms, total: 408 ms\n",
"Wall time: 3.15 s\n"
]
},
{
"data": {
"text/plain": [
"\"\\n\\nWhy couldn't the bicycle stand up by itself? Because it was two-tired!\""
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"%%time\n",
"# The first time, it is not yet in cache, so it should take longer\n",
"llm.invoke(\"Tell me a joke\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "576ce24c1244812a",
"metadata": {
"ExecuteTime": {
"end_time": "2024-12-06T00:55:50.925865Z",
"start_time": "2024-12-06T00:55:50.548520Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"CPU times: user 17.7 ms, sys: 2.88 ms, total: 20.6 ms\n",
"Wall time: 373 ms\n"
]
},
{
"data": {
"text/plain": [
"\"\\n\\nWhy couldn't the bicycle stand up by itself? Because it was two-tired!\""
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"%%time\n",
"# The second time it is, so it goes faster\n",
"llm.invoke(\"Tell me a joke\")"
]
},
{
"cell_type": "markdown",
"id": "306ff47b",
"metadata": {},
"source": [
"## `Elasticsearch` Cache\n",
"## `Elasticsearch` caches\n",
"\n",
"A caching layer for LLMs that uses Elasticsearch.\n",
"\n",
"First install the LangChain integration with Elasticsearch."
@@ -1880,6 +2033,8 @@
"id": "9e70b0a0",
"metadata": {},
"source": [
"### Standard cache\n",
"\n",
"Use the class `ElasticsearchCache`.\n",
"\n",
"Simple example:"
@@ -1987,6 +2142,26 @@
"please only make additive modifications, keeping the base mapping intact."
]
},
{
"cell_type": "markdown",
"id": "3dc15b3c-8793-432e-98c3-d2726d497a5a",
"metadata": {},
"source": [
"### Embedding cache\n",
"\n",
"An Elasticsearch store for caching embeddings."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b0ae5bd3-517d-470f-9b44-14d9359e6940",
"metadata": {},
"outputs": [],
"source": [
"from langchain_elasticsearch import ElasticsearchEmbeddingsCache"
]
},
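{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a rough sketch, the store can back `CacheBackedEmbeddings` so repeated texts skip re-embedding (the `es_url`, index name, and namespace below are illustrative assumptions):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.embeddings import CacheBackedEmbeddings\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"# Byte store in Elasticsearch holding serialized embedding vectors\n",
"store = ElasticsearchEmbeddingsCache(\n",
"    es_url=\"http://localhost:9200\",\n",
"    index_name=\"llm-embeddings-cache\",\n",
")\n",
"\n",
"# Wrap an embeddings model; cached vectors are reused on repeat inputs\n",
"embeddings = CacheBackedEmbeddings.from_bytes_store(\n",
"    underlying_embeddings=OpenAIEmbeddings(),\n",
"    document_embedding_cache=store,\n",
"    namespace=\"openai-embeddings\",\n",
")"
]
},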
{
"cell_type": "markdown",
"id": "0c69d84d",
@@ -1994,8 +2169,9 @@
"tags": []
},
"source": [
"## Optional Caching\n",
"You can also turn off caching for specific LLMs should you choose. In the example below, even though global caching is enabled, we turn it off for a specific LLM"
"## LLM-specific optional caching\n",
"\n",
"You can also turn off caching for specific LLMs. In the example below, even though global caching is enabled, we turn it off for a specific LLM."
]
},
{
@@ -2075,7 +2251,8 @@
"tags": []
},
"source": [
"## Optional Caching in Chains\n",
"## Optional caching in Chains\n",
"\n",
"You can also turn off caching for particular nodes in chains. Note that because of certain interfaces, its often easier to construct the chain first, and then edit the LLM afterwards.\n",
"\n",
"As an example, we will load a summarizer map-reduce chain. We will cache results for the map-step, but then not freeze it for the combine step."
@@ -2242,7 +2419,7 @@
"id": "9ecfa565038eff71",
"metadata": {},
"source": [
"## OpenSearch Semantic Cache\n",
"## `OpenSearch` semantic cache\n",
"Use [OpenSearch](https://python.langchain.com/docs/integrations/vectorstores/opensearch/) as a semantic cache to cache prompts and responses and evaluate hits based on semantic similarity."
]
},
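A minimal sketch, assuming an OpenSearch node on the default local port and OpenAI embeddings:

```python
from langchain_community.cache import OpenSearchSemanticCache
from langchain_core.globals import set_llm_cache
from langchain_openai import OpenAIEmbeddings

# Semantically similar prompts can reuse a cached response
set_llm_cache(
    OpenSearchSemanticCache(
        opensearch_url="http://localhost:9200", embedding=OpenAIEmbeddings()
    )
)
```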
@@ -2346,7 +2523,8 @@
"id": "2ac1a8c7",
"metadata": {},
"source": [
"## SingleStoreDB Semantic Cache\n",
"## `SingleStoreDB` semantic cache\n",
"\n",
"You can use [SingleStoreDB](https://python.langchain.com/docs/integrations/vectorstores/singlestoredb/) as a semantic cache to cache prompts and responses."
]
},
@@ -2373,7 +2551,7 @@
"id": "7e6b9b1a",
"metadata": {},
"source": [
"## `Memcached` Cache\n",
"## `Memcached` cache\n",
"You can use [Memcached](https://www.memcached.org/) as a cache to cache prompts and responses through [pymemcache](https://github.com/pinterest/pymemcache).\n",
"\n",
"This cache requires the pymemcache dependency to be installed:"
@@ -2469,7 +2647,7 @@
"id": "7019c991-0101-4f9c-b212-5729a5471293",
"metadata": {},
"source": [
"## Couchbase Caches\n",
"## `Couchbase` caches\n",
"\n",
"Use [Couchbase](https://couchbase.com/) as a cache for prompts and responses."
]
@@ -2479,7 +2657,7 @@
"id": "d6aac680-ba32-4c19-8864-6471cf0e7d5a",
"metadata": {},
"source": [
"### Couchbase Cache\n",
"### Standard cache\n",
"\n",
"The standard cache that looks for an exact match of the user prompt."
]
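A minimal sketch, assuming a local Couchbase cluster; the connection string, credentials, and bucket/scope/collection names are placeholders, and the collection must already exist:

```python
from datetime import timedelta

from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions
from langchain_core.globals import set_llm_cache
from langchain_couchbase.cache import CouchbaseCache

# Connect to the cluster with the Couchbase Python SDK
auth = PasswordAuthenticator("Administrator", "password")
cluster = Cluster("couchbase://localhost", ClusterOptions(auth))
cluster.wait_until_ready(timedelta(seconds=5))

# Exact-match caching backed by the named collection
set_llm_cache(
    CouchbaseCache(
        cluster=cluster,
        bucket_name="langchain-bucket",
        scope_name="_default",
        collection_name="_default",
    )
)
```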
@@ -2613,7 +2791,7 @@
"id": "1dca39d8-233a-45ba-ad7d-0920dfbc4a50",
"metadata": {},
"source": [
"### Specifying a Time to Live (TTL) for the Cached entries\n",
"#### Time to Live (TTL) for the cached entries\n",
"The Cached documents can be deleted after a specified time automatically by specifying a `ttl` parameter along with the initialization of the Cache."
]
},
@@ -2642,7 +2820,7 @@
"id": "43626f33-d184-4260-b641-c9341cef5842",
"metadata": {},
"source": [
"### Couchbase Semantic Cache\n",
"### Semantic cache\n",
"Semantic caching allows users to retrieve cached prompts based on semantic similarity between the user input and previously cached inputs. Under the hood it uses Couchbase as both a cache and a vectorstore. This needs an appropriate Vector Search Index defined to work. Please look at the usage example on how to set up the index."
]
},
@@ -2685,7 +2863,9 @@
"- The search index for the semantic cache needs to be defined before using the semantic cache. \n",
"- The optional parameter, `score_threshold` in the Semantic Cache that you can use to tune the results of the semantic search.\n",
"\n",
"### How to Import an Index to the Full Text Search service?\n",
"#### Index to the Full Text Search service\n",
"\n",
"How to Import an Index to the Full Text Search service?\n",
" - [Couchbase Server](https://docs.couchbase.com/server/current/search/import-search-index.html)\n",
" - Click on Search -> Add Index -> Import\n",
" - Copy the following Index definition in the Import screen\n",
@@ -2695,7 +2875,8 @@
" - Import the file in Capella using the instructions in the documentation.\n",
" - Click on Create Index to create the index.\n",
"\n",
"#### Example index for the vector search. \n",
"**Example index for the vector search:**\n",
"\n",
" ```\n",
" {\n",
" \"type\": \"fulltext-index\",\n",
@@ -2855,7 +3036,8 @@
"id": "f6f674fa-70b5-4cf9-a208-992aad2c3c89",
"metadata": {},
"source": [
"### Specifying a Time to Live (TTL) for the Cached entries\n",
"#### Time to Live (TTL) for the cached entries\n",
"\n",
"The Cached documents can be deleted after a specified time automatically by specifying a `ttl` parameter along with the initialization of the Cache."
]
},
@@ -2913,10 +3095,10 @@
"source": [
"**Cache** classes are implemented by inheriting the [BaseCache](https://python.langchain.com/api_reference/core/caches/langchain_core.caches.BaseCache.html) class.\n",
"\n",
"This table lists all 21 derived classes with links to the API Reference.\n",
"This table lists all derived classes with links to the API Reference.\n",
"\n",
"\n",
"| Namespace 🔻 | Class |\n",
"| Namespace | Class 🔻 |\n",
"|------------|---------|\n",
"| langchain_astradb.cache | [AstraDBCache](https://python.langchain.com/api_reference/astradb/cache/langchain_astradb.cache.AstraDBCache.html) |\n",
"| langchain_astradb.cache | [AstraDBSemanticCache](https://python.langchain.com/api_reference/astradb/cache/langchain_astradb.cache.AstraDBSemanticCache.html) |\n",
@@ -2925,22 +3107,31 @@
"| langchain_community.cache | [AzureCosmosDBSemanticCache](https://python.langchain.com/api_reference/community/cache/langchain_community.cache.AzureCosmosDBSemanticCache.html) |\n",
"| langchain_community.cache | [CassandraCache](https://python.langchain.com/api_reference/community/cache/langchain_community.cache.CassandraCache.html) |\n",
"| langchain_community.cache | [CassandraSemanticCache](https://python.langchain.com/api_reference/community/cache/langchain_community.cache.CassandraSemanticCache.html) |\n",
"| langchain_couchbase.cache | [CouchbaseCache](https://python.langchain.com/api_reference/couchbase/cache/langchain_couchbase.cache.CouchbaseCache.html) |\n",
"| langchain_couchbase.cache | [CouchbaseSemanticCache](https://python.langchain.com/api_reference/couchbase/cache/langchain_couchbase.cache.CouchbaseSemanticCache.html) |\n",
"| langchain_elasticsearch.cache | [ElasticsearchCache](https://python.langchain.com/api_reference/elasticsearch/cache/langchain_elasticsearch.cache.AsyncElasticsearchCache.html) |\n",
"| langchain_elasticsearch.cache | [ElasticsearchEmbeddingsCache](https://python.langchain.com/api_reference/elasticsearch/cache/langchain_elasticsearch.cache.AsyncElasticsearchEmbeddingsCache.html) |\n",
"| langchain_community.cache | [GPTCache](https://python.langchain.com/api_reference/community/cache/langchain_community.cache.GPTCache.html) |\n",
"| langchain_core.caches | [InMemoryCache](https://python.langchain.com/api_reference/core/caches/langchain_core.caches.InMemoryCache.html) |\n",
"| langchain_community.cache | [InMemoryCache](https://python.langchain.com/api_reference/community/cache/langchain_community.cache.InMemoryCache.html) |\n",
"| langchain_community.cache | [MomentoCache](https://python.langchain.com/api_reference/community/cache/langchain_community.cache.MomentoCache.html) |\n",
"| langchain_mongodb.cache | [MongoDBAtlasSemanticCache](https://python.langchain.com/api_reference/mongodb/cache/langchain_mongodb.cache.MongoDBAtlasSemanticCache.html) |\n",
"| langchain_mongodb.cache | [MongoDBCache](https://python.langchain.com/api_reference/mongodb/cache/langchain_mongodb.cache.MongoDBCache.html) |\n",
"| langchain_community.cache | [OpenSearchSemanticCache](https://python.langchain.com/api_reference/community/cache/langchain_community.cache.OpenSearchSemanticCache.html) |\n",
"| langchain_community.cache | [RedisSemanticCache](https://python.langchain.com/api_reference/community/cache/langchain_community.cache.RedisSemanticCache.html) |\n",
"| langchain_community.cache | [SingleStoreDBSemanticCache](https://python.langchain.com/api_reference/community/cache/langchain_community.cache.SingleStoreDBSemanticCache.html) |\n",
"| langchain_community.cache | [SQLAlchemyCache](https://python.langchain.com/api_reference/community/cache/langchain_community.cache.SQLAlchemyCache.html) |\n",
"| langchain_community.cache | [SQLAlchemyMd5Cache](https://python.langchain.com/api_reference/community/cache/langchain_community.cache.SQLAlchemyMd5Cache.html) |\n",
"| langchain_community.cache | [UpstashRedisCache](https://python.langchain.com/api_reference/community/cache/langchain_community.cache.UpstashRedisCache.html) |\n",
"| langchain_core.caches | [InMemoryCache](https://python.langchain.com/api_reference/core/caches/langchain_core.caches.InMemoryCache.html) |\n",
"| langchain_elasticsearch.cache | [ElasticsearchCache](https://python.langchain.com/api_reference/elasticsearch/cache/langchain_elasticsearch.cache.ElasticsearchCache.html) |\n",
"| langchain_mongodb.cache | [MongoDBAtlasSemanticCache](https://python.langchain.com/api_reference/mongodb/cache/langchain_mongodb.cache.MongoDBAtlasSemanticCache.html) |\n",
"| langchain_mongodb.cache | [MongoDBCache](https://python.langchain.com/api_reference/mongodb/cache/langchain_mongodb.cache.MongoDBCache.html) |\n",
"| langchain_couchbase.cache | [CouchbaseCache](https://python.langchain.com/api_reference/couchbase/cache/langchain_couchbase.cache.CouchbaseCache.html) |\n",
"| langchain_couchbase.cache | [CouchbaseSemanticCache](https://python.langchain.com/api_reference/couchbase/cache/langchain_couchbase.cache.CouchbaseSemanticCache.html) |\n"
"| langchain_community.cache | [UpstashRedisCache](https://python.langchain.com/api_reference/community/cache/langchain_community.cache.UpstashRedisCache.html) |\n"
]
}
],
"metadata": {
@@ -2959,7 +3150,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.13"
"version": "3.10.12"
}
},
"nbformat": 4,

View File

@@ -17,6 +17,9 @@
"source": [
"# AI21LLM\n",
"\n",
":::caution This service is deprecated.\n",
"See [this page](https://python.langchain.com/docs/integrations/chat/ai21/) for the updated ChatAI21 object. :::\n",
"\n",
"This example goes over how to use LangChain to interact with `AI21` Jurassic models. To use the Jamba model, use the [ChatAI21 object](https://python.langchain.com/docs/integrations/chat/ai21/) instead.\n",
"\n",
"[See a full list of AI21 models and tools on LangChain.](https://pypi.org/project/langchain-ai21/)\n",

View File

@@ -180,9 +180,9 @@
"\n",
"For comprehensive details on all features and configurations, please refer to the API reference documentation for each class:\n",
"\n",
"* [OCIModelDeploymentLLM](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentLLM.html)\n",
"* [OCIModelDeploymentVLLM](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentVLLM.html)\n",
"* [OCIModelDeploymentTGI](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI.html)"
"* [OCIModelDeploymentLLM](https://python.langchain.com/api_reference/community/llms/langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentLLM.html)\n",
"* [OCIModelDeploymentVLLM](https://python.langchain.com/api_reference/community/llms/langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentVLLM.html)\n",
"* [OCIModelDeploymentTGI](https://python.langchain.com/api_reference/community/llms/langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI.html)"
]
}
],

View File

@@ -7,7 +7,14 @@
"source": [
"# OpenLLM\n",
"\n",
"[🦾 OpenLLM](https://github.com/bentoml/OpenLLM) is an open platform for operating large language models (LLMs) in production. It enables developers to easily run inference with any open-source LLMs, deploy to the cloud or on-premises, and build powerful AI apps."
"[🦾 OpenLLM](https://github.com/bentoml/OpenLLM) lets developers run any **open-source LLMs** as **OpenAI-compatible API** endpoints with **a single command**.\n",
"\n",
"- 🔬 Build for fast and production usages\n",
"- 🚂 Support llama3, qwen2, gemma, etc, and many **quantized** versions [full list](https://github.com/bentoml/openllm-models)\n",
"- ⛓️ OpenAI-compatible API\n",
"- 💬 Built-in ChatGPT like UI\n",
"- 🔥 Accelerated LLM decoding with state-of-the-art inference backends\n",
"- 🌥️ Ready for enterprise-grade cloud deployment (Kubernetes, Docker and BentoCloud)"
]
},
{
@@ -37,10 +44,10 @@
"source": [
"## Launch OpenLLM server locally\n",
"\n",
"To start an LLM server, use `openllm start` command. For example, to start a dolly-v2 server, run the following command from a terminal:\n",
"To start an LLM server, use `openllm hello` command:\n",
"\n",
"```bash\n",
"openllm start dolly-v2\n",
"openllm hello\n",
"```\n",
"\n",
"\n",
@@ -57,74 +64,7 @@
"from langchain_community.llms import OpenLLM\n",
"\n",
"server_url = \"http://localhost:3000\" # Replace with remote host if you are running on a remote server\n",
"llm = OpenLLM(server_url=server_url)"
]
},
{
"cell_type": "markdown",
"id": "4f830f9d",
"metadata": {},
"source": [
"### Optional: Local LLM Inference\n",
"\n",
"You may also choose to initialize an LLM managed by OpenLLM locally from current process. This is useful for development purpose and allows developers to quickly try out different types of LLMs.\n",
"\n",
"When moving LLM applications to production, we recommend deploying the OpenLLM server separately and access via the `server_url` option demonstrated above.\n",
"\n",
"To load an LLM locally via the LangChain wrapper:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "82c392b6",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.llms import OpenLLM\n",
"\n",
"llm = OpenLLM(\n",
" model_name=\"dolly-v2\",\n",
" model_id=\"databricks/dolly-v2-3b\",\n",
" temperature=0.94,\n",
" repetition_penalty=1.2,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "f15ebe0d",
"metadata": {},
"source": [
"### Integrate with a LLMChain"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "8b02a97a",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"iLkb\n"
]
}
],
"source": [
"from langchain.chains import LLMChain\n",
"from langchain_core.prompts import PromptTemplate\n",
"\n",
"template = \"What is a good name for a company that makes {product}?\"\n",
"\n",
"prompt = PromptTemplate.from_template(template)\n",
"\n",
"llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
"\n",
"generated = llm_chain.run(product=\"mechanical keyboard\")\n",
"print(generated)"
"llm = OpenLLM(base_url=server_url, api_key=\"na\")"
]
},
{
@@ -133,7 +73,9 @@
"id": "56cb4bc0",
"metadata": {},
"outputs": [],
"source": []
"source": [
"llm(\"To build a LLM from scratch, the following are the steps:\")"
]
}
],
"metadata": {
@@ -152,7 +94,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.10"
"version": "3.11.9"
}
},
"nbformat": 4,

View File

@@ -6,7 +6,7 @@
"source": [
"# Outlines\n",
"\n",
"This will help you getting started with Outlines LLM. For detailed documentation of all Outlines features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/llms/outlines.llms.Outlines.html).\n",
"This will help you getting started with Outlines LLM. For detailed documentation of all Outlines features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/llms/langchain_community.llms.outlines.Outlines.html).\n",
"\n",
"[Outlines](https://github.com/outlines-dev/outlines) is a library for constrained language generation. It allows you to use large language models (LLMs) with various backends while applying constraints to the generated output.\n",
"\n",
@@ -236,7 +236,7 @@
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatOutlines features and configurations head to the API reference: https://api.python.langchain.com/en/latest/chat_models/outlines.chat_models.ChatOutlines.html\n",
"For detailed documentation of all ChatOutlines features and configurations head to the API reference: https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.outlines.ChatOutlines.html\n",
"\n",
"## Outlines Documentation: \n",
"\n",

View File

@@ -0,0 +1,24 @@
# Aerospike
>[Aerospike](https://aerospike.com/docs/vector) is a high-performance, distributed database known for its speed and scalability, now with support for vector storage and search, enabling retrieval and search of embedding vectors for machine learning and AI applications.
> See the documentation for Aerospike Vector Search (AVS) [here](https://aerospike.com/docs/vector).
## Installation and Setup
Install the AVS Python SDK and AVS langchain vector store:
```bash
pip install aerospike-vector-search langchain-community
```
See the documentation for the Python SDK [here](https://aerospike-vector-search-python-client.readthedocs.io/en/latest/index.html).
The documentation for the AVS langchain vector store is [here](https://python.langchain.com/api_reference/community/vectorstores/langchain_community.vectorstores.aerospike.Aerospike.html).
## Vector Store
To import this vectorstore:
```python
from langchain_community.vectorstores import Aerospike
```
See a usage example [here](https://python.langchain.com/docs/integrations/vectorstores/aerospike/).

View File

@@ -15,26 +15,6 @@ This page covers how to use the `AI21` ecosystem within `LangChain`.
pip install langchain-ai21
```
## LLMs
See a [usage example](/docs/integrations/llms/ai21).
### AI21 LLM
```python
from langchain_ai21 import AI21LLM
```
### AI21 Contextual Answer
You can use AI21s contextual answers model to receive text or document,
serving as a context, and a question and return an answer based entirely on this context.
```python
from langchain_ai21 import AI21ContextualAnswers
```
## Chat models
### AI21 Chat
@@ -45,23 +25,32 @@ See a [usage example](/docs/integrations/chat/ai21).
from langchain_ai21 import ChatAI21
```
## Embedding models
## Deprecated features
:::caution
The following features are deprecated.
:::
### AI21 LLM
```python
from langchain_ai21 import AI21LLM
```
### AI21 Contextual Answer
```python
from langchain_ai21 import AI21ContextualAnswers
```
### AI21 Embeddings
See a [usage example](/docs/integrations/text_embedding/ai21).
```python
from langchain_ai21 import AI21Embeddings
```
## Text splitters
### AI21 Semantic Text Splitter
See a [usage example](/docs/integrations/document_transformers/ai21_semantic_text_splitter).
```python
from langchain_ai21 import AI21SemanticTextSplitter
```

View File

@@ -89,3 +89,11 @@ See [installation instructions and a usage example](/docs/integrations/vectorsto
```python
from langchain_community.vectorstores import Hologres
```
### Tablestore
See [installation instructions and a usage example](/docs/integrations/vectorstores/tablestore).
```python
from langchain_community.vectorstores import TablestoreVectorStore
```

View File

@@ -33,7 +33,7 @@ from langchain_community.document_loaders.couchbase import CouchbaseLoader
### CouchbaseCache
Use Couchbase as a cache for prompts and responses.
See a [usage example](/docs/integrations/llm_caching/#couchbase-cache).
See a [usage example](/docs/integrations/llm_caching/#couchbase-caches).
To import this cache:
```python
@@ -61,7 +61,7 @@ set_llm_cache(
Semantic caching allows users to retrieve cached prompts based on the semantic similarity between the user input and previously cached inputs. Under the hood it uses Couchbase as both a cache and a vectorstore.
The CouchbaseSemanticCache needs a Search Index defined to work. Please look at the [usage example](/docs/integrations/vectorstores/couchbase) on how to set up the index.
See a [usage example](/docs/integrations/llm_caching/#couchbase-semantic-cache).
See a [usage example](/docs/integrations/llm_caching/#couchbase-caches).
To import this cache:
```python

View File

@@ -41,7 +41,7 @@ tool = DataheraldTextToSQL(api_wrapper=api_wrapper)
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
prompt = hub.pull("hwchase17/react")
agent = create_react_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True, handle_parsing_errors=True)
agent_executor.invoke({"input":"Return the sql for this question: How many employees are in the company?"})
```

View File

@@ -84,7 +84,7 @@ from langchain_elasticsearch import ElasticsearchChatMessageHistory
## LLM cache
See a [usage example](/docs/integrations/llm_caching/#elasticsearch-cache).
See a [usage example](/docs/integrations/llm_caching/#elasticsearch-caches).
```python
from langchain_elasticsearch import ElasticsearchCache

View File

@@ -2,6 +2,10 @@
>[Jina AI](https://jina.ai/about-us) is a search AI company. `Jina` helps businesses and developers unlock multimodal data with a better search.
:::caution
For proper compatibility, please ensure you are using the `openai` SDK at version **0.x**.
:::
## Installation and Setup
- Get a Jina AI API token from [here](https://jina.ai/embeddings/) and set it as an environment variable (`JINA_API_TOKEN`)

View File

@@ -0,0 +1,39 @@
# Linkup
> [Linkup](https://www.linkup.so/) provides an API to connect LLMs to the web and the Linkup Premium Partner sources.
## Installation and Setup
To use the Linkup provider, you first need a valid API key, which you can get by signing up [here](https://app.linkup.so/sign-up).
You will also need the `langchain-linkup` package, which you can install using pip:
```bash
pip install langchain-linkup
```
## Retriever
See a [usage example](/docs/integrations/retrievers/linkup_search).
```python
from langchain_linkup import LinkupSearchRetriever
retriever = LinkupSearchRetriever(
depth="deep", # "standard" or "deep"
linkup_api_key=None, # API key can be passed here or set as the LINKUP_API_KEY environment variable
)
```
## Tools
See a [usage example](/docs/integrations/tools/linkup_search).
```python
from langchain_linkup import LinkupSearchTool
tool = LinkupSearchTool(
depth="deep", # "standard" or "deep"
output_type="searchResults", # "searchResults", "sourcedAnswer" or "structured"
linkup_api_key=None, # API key can be passed here or set as the LINKUP_API_KEY environment variable
)
```

View File

@@ -6,6 +6,10 @@
> audio (and not only) locally or on-prem with consumer grade hardware,
> supporting multiple model families and architectures.
:::caution
For proper compatibility, please ensure you are using the `openai` SDK at version **0.x**.
:::
## Installation and Setup
We have to install several python packages:

View File

@@ -343,6 +343,31 @@ See a [usage example](/docs/integrations/memory/postgres_chat_message_history/).
Since Azure Database for PostgreSQL is open-source Postgres, you can use the [LangChain's Postgres support](/docs/integrations/vectorstores/pgvector/) to connect to Azure Database for PostgreSQL.
### Azure SQL Database
>[Azure SQL Database](https://learn.microsoft.com/azure/azure-sql/database/sql-database-paas-overview?view=azuresql) is a robust service that combines scalability, security, and high availability, providing all the benefits of a modern database solution. It also provides a dedicated Vector data type & built-in functions that simplifies the storage and querying of vector embeddings directly within a relational database. This eliminates the need for separate vector databases and related integrations, increasing the security of your solutions while reducing the overall complexity.
By leveraging your current SQL Server databases for vector search, you can enhance data capabilities while minimizing expenses and avoiding the challenges of transitioning to new systems.
##### Installation and Setup
See [detail configuration instructions](/docs/integrations/vectorstores/sqlserver).
We need to install the `langchain-sqlserver` python package.
```bash
pip install langchain-sqlserver==0.1.1
```
##### Deploy Azure SQL DB on Microsoft Azure
[Sign Up](https://learn.microsoft.com/azure/azure-sql/database/free-offer?view=azuresql) for free to get started today.
See a [usage example](/docs/integrations/vectorstores/sqlserver).
```python
from langchain_sqlserver import SQLServer_VectorStore
```
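As a minimal sketch of wiring up the vector store (the connection string and embedding model are placeholders, and the parameter names are assumptions to verify against the langchain-sqlserver docs):
```python
from langchain_openai import OpenAIEmbeddings  # any embeddings model works here
from langchain_sqlserver import SQLServer_VectorStore

# Illustrative only: adapt the connection string, table name, and embedding
# length to your own Azure SQL deployment; parameter names are assumptions.
vector_store = SQLServer_VectorStore(
    connection_string="mssql+pyodbc://user:pass@server.database.windows.net/db?driver=ODBC+Driver+18+for+SQL+Server",
    embedding_function=OpenAIEmbeddings(model="text-embedding-3-small"),
    embedding_length=1536,
    table_name="langchain_vectors",
)
vector_store.add_texts(["Azure SQL can store embeddings natively."])
results = vector_store.similarity_search("native vector storage", k=1)
```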
### Azure AI Search


@@ -0,0 +1,31 @@
# OceanBase
[OceanBase Database](https://github.com/oceanbase/oceanbase) is a distributed relational database.
It is developed entirely by Ant Group. The OceanBase Database is built on a common server cluster.
Based on the Paxos protocol and its distributed structure, the OceanBase Database provides high availability and linear scalability.
OceanBase currently has the ability to store vectors. Users can easily perform the following operations with SQL:
- Create a table containing vector type fields;
- Create a vector index table based on the HNSW algorithm;
- Perform vector approximate nearest neighbor queries;
- ...
## Installation
```bash
pip install -U langchain-oceanbase
```
We recommend using Docker to deploy OceanBase:
```shell
docker run --name=ob433 -e MODE=slim -p 2881:2881 -d oceanbase/oceanbase-ce:4.3.3.0-100000132024100711
```
[More methods to deploy an OceanBase cluster](https://github.com/oceanbase/oceanbase-doc/blob/V4.3.1/en-US/400.deploy/500.deploy-oceanbase-database-community-edition/100.deployment-overview.md)
### Usage
For a more detailed walkthrough of the OceanBase wrapper, see [this notebook](https://github.com/oceanbase/langchain-oceanbase/blob/main/docs/vectorstores.ipynb).
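A minimal sketch against the Docker deployment above (the embedding model is a stand-in, and the connection defaults and `vidx_metric_type` parameter are assumptions to verify against the langchain-oceanbase README):
```python
from langchain_community.embeddings import FakeEmbeddings  # stand-in; use a real model
from langchain_oceanbase.vectorstores import OceanbaseVectorStore

# Connection details match the Docker command above; "root@test" with an
# empty password is assumed to be the default for a fresh oceanbase-ce container.
connection_args = {
    "host": "127.0.0.1",
    "port": "2881",
    "user": "root@test",
    "password": "",
    "db_name": "test",
}
vector_store = OceanbaseVectorStore(
    embedding_function=FakeEmbeddings(size=1024),
    table_name="langchain_vector",
    connection_args=connection_args,
    vidx_metric_type="l2",  # assumption: metric for the HNSW vector index
)
vector_store.add_texts(["OceanBase supports approximate nearest neighbor search."])
print(vector_store.similarity_search("vector search", k=1))
```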


@@ -1,11 +1,17 @@
---
keywords: [openllm]
---
# OpenLLM
This page demonstrates how to use [OpenLLM](https://github.com/bentoml/OpenLLM)
with LangChain.
OpenLLM lets developers run any **open-source LLM** as an **OpenAI-compatible API** endpoint with **a single command**.
`OpenLLM` is an open platform for operating large language models (LLMs) in
production. It enables developers to easily run inference with any open-source
LLMs, deploy to the cloud or on-premises, and build powerful AI apps.
- 🔬 Build for fast and production usages
- 🚂 Supports llama3, qwen2, gemma, etc., and many **quantized** versions ([full list](https://github.com/bentoml/openllm-models))
- ⛓️ OpenAI-compatible API
- 💬 Built-in ChatGPT like UI
- 🔥 Accelerated LLM decoding with state-of-the-art inference backends
- 🌥️ Ready for enterprise-grade cloud deployment (Kubernetes, Docker and BentoCloud)
## Installation and Setup
@@ -23,8 +29,7 @@ are pre-optimized for OpenLLM.
## Wrappers
There is an OpenLLM wrapper which supports loading an LLM in-process or accessing a
remote OpenLLM server:
There is an OpenLLM wrapper which supports interacting with a running OpenLLM server:
```python
from langchain_community.llms import OpenLLM
@@ -32,13 +37,12 @@ from langchain_community.llms import OpenLLM
### Wrapper for OpenLLM server
This wrapper supports connecting to an OpenLLM server via HTTP or gRPC. The
OpenLLM server can run either locally or on the cloud.
This wrapper supports interacting with OpenLLM's OpenAI-compatible endpoint.
To try it out locally, start an OpenLLM server:
To run a model, do:
```bash
openllm start flan-t5
openllm hello
```
Wrapper usage:
@@ -46,20 +50,7 @@ Wrapper usage:
```python
from langchain_community.llms import OpenLLM
llm = OpenLLM(server_url='http://localhost:3000')
llm("What is the difference between a duck and a goose? And why there are so many Goose in Canada?")
```
### Wrapper for Local Inference
You can also use the OpenLLM wrapper to load LLM in current Python process for
running inference.
```python
from langchain_community.llms import OpenLLM
llm = OpenLLM(model_name="dolly-v2", model_id='databricks/dolly-v2-7b')
llm = OpenLLM(base_url="http://localhost:3000/v1", api_key="na")
llm("What is the difference between a duck and a goose? And why there are so many Goose in Canada?")
```
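Note that the callable style (`llm(...)`) is deprecated in recent LangChain releases; a minimal sketch using `invoke` instead:
```python
from langchain_community.llms import OpenLLM

# Assumes an OpenAI-compatible OpenLLM server is running locally on port 3000,
# e.g. started with `openllm hello` as shown above.
llm = OpenLLM(base_url="http://localhost:3000/v1", api_key="na")
print(llm.invoke("What is the difference between a duck and a goose?"))
```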


@@ -32,7 +32,7 @@ For a more detailed walkthrough of the Pinecone vectorstore, see [this notebook]
### Pinecone Hybrid Search
```bash
pip install pinecone-client pinecone-text
pip install pinecone pinecone-text
```
```python


@@ -1,4 +1,4 @@
# Robocorp
# Sema4 (fka Robocorp)
>[Robocorp](https://robocorp.com/) helps build and operate Python workers that run seamlessly anywhere at any scale.


@@ -0,0 +1,41 @@
# ScrapeGraph AI
>[ScrapeGraph AI](https://scrapegraphai.com) is a service that provides AI-powered web scraping capabilities.
>It offers tools for extracting structured data, converting webpages to markdown, and processing local HTML content
>using natural language prompts.
## Installation and Setup
Install the required packages:
```bash
pip install langchain-scrapegraph
```
Set up your API key:
```bash
export SGAI_API_KEY="your-scrapegraph-api-key"
```
## Tools
See a [usage example](/docs/integrations/tools/scrapegraph).
There are four tools available:
```python
from langchain_scrapegraph.tools import (
SmartScraperTool, # Extract structured data from websites
MarkdownifyTool, # Convert webpages to markdown
LocalScraperTool, # Process local HTML content
GetCreditsTool, # Check remaining API credits
)
```
Each tool serves a specific purpose:
- `SmartScraperTool`: Extract structured data from websites given a URL, prompt and optional output schema
- `MarkdownifyTool`: Convert any webpage to clean markdown format
- `LocalScraperTool`: Extract structured data from a local HTML file given a prompt and optional output schema
- `GetCreditsTool`: Check your remaining ScrapeGraph AI credits
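A hedged usage sketch for `SmartScraperTool` (the `user_prompt`/`website_url` input keys are assumptions inferred from the tool's description, not confirmed here):
```python
from langchain_scrapegraph.tools import SmartScraperTool

# Requires SGAI_API_KEY in the environment, as set above.
tool = SmartScraperTool()

result = tool.invoke({
    "user_prompt": "Extract the company name and tagline",  # assumed key
    "website_url": "https://scrapegraphai.com",             # assumed key
})
print(result)
```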


@@ -22,7 +22,7 @@ dependencies running.
- To run everything locally, install the open-source python package with `pip install unstructured`
along with `pip install langchain-community` and use the same `UnstructuredLoader` as mentioned above.
- You can install document specific dependencies with extras, e.g. `pip install "unstructured[docx]"`.
- You can install document specific dependencies with extras, e.g. `pip install "unstructured[docx]"`. Learn more about extras [here](https://docs.unstructured.io/open-source/installation/full-installation).
- To install the dependencies for all document types, use `pip install "unstructured[all-docs]"`.
- Install the following system dependencies if they are not already available on your system, e.g. with `brew install` on macOS.
Depending on what document types you're parsing, you may not need all of these.


@@ -8,7 +8,7 @@
"\n",
">[Upstage](https://upstage.ai) is a leading artificial intelligence (AI) company specializing in delivering above-human-grade performance LLM components.\n",
">\n",
">**Solar Mini Chat** is a fast yet powerful advanced large language model focusing on English and Korean. It has been specifically fine-tuned for multi-turn chat purposes, showing enhanced performance across a wide range of natural language processing tasks, like multi-turn conversation or tasks that require an understanding of long contexts, such as RAG (Retrieval-Augmented Generation), compared to other models of a similar size. This fine-tuning equips it with the ability to handle longer conversations more effectively, making it particularly adept for interactive applications.\n",
">**Solar Pro** is an enterprise-grade LLM optimized for single-GPU deployment, excelling in instruction-following and processing structured formats like HTML and Markdown. It supports English, Korean, and Japanese with top multilingual performance and offers domain expertise in finance, healthcare, and legal.\n",
"\n",
">Other than Solar, Upstage also offers features for real-world RAG (retrieval-augmented generation), such as **Document Parse** and **Groundedness Check**. \n"
]
@@ -21,12 +21,12 @@
"\n",
"| API | Description | Import | Example usage |\n",
"| --- | --- | --- | --- |\n",
"| Chat | Build assistants using Solar Mini Chat | `from langchain_upstage import ChatUpstage` | [Go](../../chat/upstage) |\n",
"| Chat | Build assistants using Solar Chat | `from langchain_upstage import ChatUpstage` | [Go](../../chat/upstage) |\n",
"| Text Embedding | Embed strings to vectors | `from langchain_upstage import UpstageEmbeddings` | [Go](../../text_embedding/upstage) |\n",
"| Groundedness Check | Verify groundedness of assistant's response | `from langchain_upstage import UpstageGroundednessCheck` | [Go](../../tools/upstage_groundedness_check) |\n",
"| Document Parse | Serialize documents with tables and figures | `from langchain_upstage import UpstageDocumentParseLoader` | [Go](../../document_loaders/upstage) |\n",
"\n",
"See [documentations](https://developers.upstage.ai/) for more details about the features."
"See [documentations](https://console.upstage.ai/docs/getting-started/overview) for more details about the models and features."
]
},
{


@@ -0,0 +1,42 @@
# Weights & Biases
>[Weights & Biases](https://wandb.ai/) is provider of the AI developer platform to train and
> fine-tune AI models and develop AI applications.
`Weights & Biase` products can be used to log metrics and artifacts during training,
and to trace the execution of your code.
There are several main ways to use `Weights & Biases` products within LangChain:
- with `wandb_tracing_enabled`
- with `Weave` lightweight toolkit
- with `WandbCallbackHandler` (deprecated)
## wandb_tracing_enabled
See a [usage example](/docs/integrations/providers/wandb_tracing).
See in the [W&B documentation](https://docs.wandb.ai/guides/integrations/langchain).
```python
from langchain_community.callbacks import wandb_tracing_enabled
```
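A minimal sketch of the context-manager usage (the `LANGCHAIN_WANDB_TRACING` environment variable behavior is an assumption drawn from the W&B guide linked above; verify against it):
```python
from langchain_community.callbacks import wandb_tracing_enabled

# Assumption (per the W&B guide): setting LANGCHAIN_WANDB_TRACING=true traces
# every run; the context manager below instead scopes tracing to one block.
with wandb_tracing_enabled():
    pass  # run your chain or agent here; its execution is traced to W&B
```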
## Weave
See in the [W&B documentation](https://weave-docs.wandb.ai/guides/integrations/langchain).
## WandbCallbackHandler
**Note:** the `WandbCallbackHandler` is being deprecated in favour of the `wandb_tracing_enabled`.
See a [usage example](/docs/integrations/providers/wandb_tracking).
See in the [W&B documentation](https://docs.wandb.ai/guides/integrations/langchain).
```python
from langchain_community.callbacks import WandbCallbackHandler
```


@@ -5,7 +5,7 @@
"id": "5371a9bb",
"metadata": {},
"source": [
"# WandB Tracing\n",
"# Weights & Biases tracing\n",
"\n",
"There are two recommended ways to trace your LangChains:\n",
"\n",
@@ -165,7 +165,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
"version": "3.10.12"
}
},
"nbformat": 4,

Some files were not shown because too many files have changed in this diff.