Compare commits

...

739 Commits

Author SHA1 Message Date
Eugene Yurtsev
44c52df906 Merge branch 'master' into eugene/update_add_texts 2024-07-08 17:19:06 -04:00
Eugene Yurtsev
f765e8fa9d core[minor],community[patch],standard-tests[patch]: Move InMemoryImplementation to langchain-core (#23986)
This PR moves the in memory implementation to langchain-core.

* The implementation remains importable from langchain-community.
* Supporting utilities are marked as private for now.
2024-07-08 14:11:51 -07:00
Eugene Yurtsev
aa8c9bb4a9 community[patch]: Add constraint for pdfminer.six to unbreak CI (#23988)
Something changed in pdfminer.six. This PR unbreaks CI without
fixing the underlying PDF parser.
2024-07-08 20:55:19 +00:00
Eugene Yurtsev
151df77b84 Merge branch 'master' into eugene/update_add_texts 2024-07-08 16:32:47 -04:00
Eugene Yurtsev
2c180d645e core[minor],community[minor]: Upgrade all @root_validator() to @pre_init (#23841)
This PR introduces a @pre_init decorator that's a @root_validator(pre=True) but with all the defaults populated!
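A minimal usage sketch, assuming the decorator is importable as `pre_init` from `langchain_core.utils` (model and field names are illustrative):

```python
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.utils import pre_init


class MyIntegration(BaseModel):
    api_key: str = ""
    timeout: int = 60

    @pre_init
    def validate_environment(cls, values: dict) -> dict:
        # Unlike a plain @root_validator(pre=True), defaults such as
        # timeout=60 are already populated in `values` here.
        if not values.get("api_key"):
            values["api_key"] = "read-from-env"
        return values
```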
2024-07-08 16:09:29 -04:00
Mustafa Abdul-Kader
f152d6ed3d docs(llamacpp): fix copy paste error (#23983) 2024-07-08 20:06:04 +00:00
Eugene Yurtsev
32c0ee96fa xx 2024-07-08 13:05:06 -04:00
Eugene Yurtsev
afd3a6a5b1 x 2024-07-08 12:44:34 -04:00
Eugene Yurtsev
9515bd2d41 x 2024-07-08 12:44:24 -04:00
Eugene Yurtsev
88a0634ed7 x 2024-07-08 12:44:06 -04:00
JonasDeitmersATACAMA
4d6f28cdde Update annoy.ipynb (#23970)
mmemory in the description -> memory (corrected spelling mistake)

2024-07-08 12:52:05 +00:00
Zheng Robert Jia
bf8d4716a7 Update concepts.mdx (#23955)
Added link to list of built-in tools.

2024-07-08 08:47:51 -04:00
Zheng Robert Jia
4ec5fdda8d Update index.mdx (#23956)
Added reference to built-in tools list.
2024-07-08 08:47:28 -04:00
ccurme
ee579c77c1 docs: chain migration guide (#23844)
Co-authored-by: jacoblee93 <jacoblee93@gmail.com>
2024-07-05 16:37:34 -07:00
Eugene Yurtsev
9787552b00 core[patch]: Use InMemoryChatMessageHistory in unit tests (#23916)
Update unit test to use the existing implementation of chat message
history
2024-07-05 20:10:54 +00:00
Rajendra Kadam
8b84457b17 community[minor]: Support PGVector in PebbloRetrievalQA (#23874)
- **Description:** Support PGVector in PebbloRetrievalQA
  - Identity and Semantic Enforcement support for PGVector
  - Refactor Vectorstore validation and name check
  - Clear the overridden identity and semantic enforcement filters
- **Issue:** NA
- **Dependencies:** NA
- **Tests**: NA(already added)
-  **Docs**: Updated
- **Twitter handle:** [@Raj__725](https://twitter.com/Raj__725)
2024-07-05 16:02:25 -04:00
Eugene Yurtsev
e0186df56b core[patch]: Clarify upsert response semantics (#23921) 2024-07-05 15:59:47 -04:00
Leonid Ganeline
fcd018be47 docs: langgraph link fix (#23848)
The link for the LangGraph doc pointed to the LangGraph repo instead.
Fixed the link.
2024-07-05 15:50:45 -04:00
Robbie Cronin
0990ab146c community: update import in chatbot tutorial to use InMemoryChatMessageHistory (#23903)
Summary of change:

- Replace ChatMessageHistory with InMemoryChatMessageHistory

Fixes #23892

---------

Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-07-05 15:48:11 -04:00
Rajendra Kadam
ee8aa54f53 community[patch]: Fix source path mismatch in PebbloSafeLoader (#23857)
**Description:** Fix for source path mismatch in PebbloSafeLoader. The
fix involves storing the full path in the doc metadata in VectorDB
**Issue:** NA, caught in internal testing
**Dependencies:** NA
**Add tests**:  Updated tests
2024-07-05 15:24:17 -04:00
Eugene Yurtsev
5b7d5f7729 core[patch]: Add comment to clarify aadd_documents (#23920)
Add comment to clarify how add documents works
2024-07-05 15:20:16 -04:00
Eugene Yurtsev
e0889384d9 standard-tests[minor]: add unit tests for testing get_by_ids, aget_by_ids, upsert, aupsert_by_ids (#23919)
These standard unit tests provide standard tests for functionality
introduced in these PRs:

* https://github.com/langchain-ai/langchain/pull/23774
* https://github.com/langchain-ai/langchain/pull/23594
2024-07-05 19:11:54 +00:00
ccurme
74c7198906 core, anthropic[patch]: support streaming tool calls when function has no arguments (#23915)
resolves https://github.com/langchain-ai/langchain/issues/23911

When an AIMessageChunk is instantiated, we attempt to parse tool calls
off of the tool_call_chunks.

Here we add a special-case to this parsing, where `""` will be parsed as
`{}`.

This is a reaction to how Anthropic streams tool calls in the case where
a function has no arguments:
```
{'id': 'toolu_01J8CgKcuUVrMqfTQWPYh64r', 'input': {}, 'name': 'magic_function', 'type': 'tool_use', 'index': 1}
{'partial_json': '', 'type': 'tool_use', 'index': 1}
```
The `partial_json` does not accumulate to a valid JSON string; most
other providers tend to emit `"{}"` in this case.
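A standalone sketch of the special case described above (not the library's actual parsing code):

```python
import json


def parse_tool_args(partial_json: str) -> dict:
    # Anthropic streams "" for zero-argument tools; treat it as an empty
    # argument object instead of failing to parse.
    if partial_json == "":
        return {}
    return json.loads(partial_json)


assert parse_tool_args("") == {}
assert parse_tool_args('{"city": "LA"}') == {"city": "LA"}
```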
2024-07-05 18:57:41 +00:00
Mateusz Szewczyk
902b57d107 IBM: Added WatsonxChat passing params to invoke method (#23758)
- [x] **PR title**: "IBM: Added WatsonxChat to chat models preview,
update passing params to invoke method"


- [x] **PR message**: 
- **Description:** Added WatsonxChat passing params to invoke method,
added integration tests
    - **Dependencies:** `ibm_watsonx_ai`


- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-05 18:07:50 +00:00
ccurme
1f5a163f42 langchain[patch]: deprecate QAGenerationChain (#23730) 2024-07-05 18:06:19 +00:00
ccurme
25de47878b langchain[patch]: deprecate AnalyzeDocumentChain (#23769) 2024-07-05 14:00:23 -04:00
Christophe Bornet
42d049f618 core[minor]: Add Graph Store component (#23092)
This PR introduces a GraphStore component. GraphStore extends
VectorStore with the concept of links between documents based on
document metadata. This allows linking documents based on a variety of
techniques, including common keywords, explicit links in the content,
and other patterns.

This works with existing Documents, so it’s easy to extend existing
VectorStores to be used as GraphStores. The interface can be implemented
for any Vector Store technology that supports metadata, not only graph
DBs.

When retrieving documents for a given query, the first level of search
is done using classical similarity search. Next, links may be followed
using various traversal strategies to get additional documents. This
allows documents to be retrieved that aren’t directly similar to the
query but contain relevant information.

Two retrieval methods are added on top of the VectorStore ones:
* traversal_search, which gets all linked documents up to a certain depth
* mmr_traversal_search, which selects linked documents using an MMR
algorithm to return more diverse results.

If a depth of retrieval of 0 is used, GraphStore is effectively a
VectorStore. It enables an easy transition from a simple VectorStore to
GraphStore by adding links between documents as a second step.

An implementation for Apache Cassandra is also proposed.

See
https://github.com/datastax/ragstack-ai/blob/main/libs/knowledge-store/notebooks/astra_support.ipynb
for a notebook explaining how to use GraphStore and showing that it can
correctly answer questions that a simple VectorStore cannot.

**Twitter handle:** _cbornet
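A rough usage sketch of the two new methods, assuming `graph_store` is any GraphStore implementation (method signatures follow the description above and may differ from the final API):

```python
from typing import List

from langchain_core.documents import Document


def retrieve_with_links(graph_store, query: str) -> List[Document]:
    # First-level similarity search, then follow links up to depth 2.
    docs = list(graph_store.traversal_search(query, depth=2))
    # MMR-based traversal for more diverse linked documents.
    diverse = list(graph_store.mmr_traversal_search(query, depth=2))
    return docs + diverse
```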
2024-07-05 12:24:10 -04:00
Leonid Ganeline
77f5fc3d55 core: docstrings load (#23787)
Added missed docstrings. Formatted docstrings to the consistent form.
2024-07-05 12:23:19 -04:00
Eugene Yurtsev
6f08e11d7c core[minor]: add upsert, streaming_upsert, aupsert, astreaming_upsert methods to the VectorStore abstraction (#23774)
This PR rolls out part of the new proposed interface for vectorstores
(https://github.com/langchain-ai/langchain/pull/23544) to existing store
implementations.

The PR makes the following changes:

1. Adds standard upsert, streaming_upsert, aupsert, astreaming_upsert
methods to the vectorstore.
2. Updates `add_texts` and `aadd_texts` to be non-required, with a
default implementation that delegates to `upsert` and `aupsert` if those
have been implemented. The original `add_texts` and `aadd_texts` methods
are problematic as they spread object-specific information across the
document and **kwargs (e.g., ids are not part of the document).
3. Adds a default implementation to `add_documents` and `aadd_documents`
that delegates to `upsert` and `aupsert` respectively.
4. Adds standard unit tests to verify that a given vectorstore
implements a correct read/write API.

A downside of this implementation is that it creates `upsert` with a
very similar signature to `add_documents`.
The reason for introducing `upsert` is to:
* Remove any ambiguities about what information is allowed in `kwargs`.
Specifically, kwargs should only be used for information common to all
indexed data (e.g., indexing timeout).
* Allow inheriting from an anticipated generalized interface for indexing
that will allow indexing `BaseMedia` (i.e., allow making a vectorstore
for images/audio etc.)
 
`add_documents` can be deprecated in the future in favor of `upsert` to
make sure that users have a single correct way of indexing content.
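A hedged sketch of the new flow, using the in-memory store and fake embeddings as stand-ins (the exact response shape is an assumption):

```python
from langchain_core.documents import Document
from langchain_core.embeddings import DeterministicFakeEmbedding
from langchain_core.vectorstores import InMemoryVectorStore

store = InMemoryVectorStore(embedding=DeterministicFakeEmbedding(size=8))

docs = [
    Document(id="doc-1", page_content="hello"),    # ids live on the Document,
    Document(id="doc-2", page_content="goodbye"),  # not in **kwargs
]

response = store.upsert(docs)  # expected to report succeeded/failed ids
print(response)
```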

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-07-05 12:21:40 -04:00
G Sreejith
3c752238c5 core[patch]: Fix typo in docstring (graphm -> graph) (#23910)
Changes have been made as per the request:
replaced graphm with graph.
2024-07-05 16:20:33 +00:00
Leonid Ganeline
12c92b6c19 core: docstrings outputs (#23889)
Added missed docstrings. Formatted docstrings to the consistent form.
2024-07-05 12:18:17 -04:00
Leonid Ganeline
1eca98ec56 core: docstrings prompts (#23890)
Added missed docstrings. Formatted docstrings to the consistent form.
2024-07-05 12:17:52 -04:00
Philippe PRADOS
289960bc60 community[patch]: Redis.delete should be a regular method not a static method (#23873)
`langchain_community.vectorstores.Redis.delete()` must not be a
`@staticmethod`.

With the current implementation, it's not possible to have multiple
instances of the Redis vectorstore because they all must share the same
`REDIS_URL`.

It also does not conform to the base class.
2024-07-05 12:04:58 -04:00
Mohammad Mohtashim
2274d2b966 core[patch]: Accounting for Optional Input Variables in BasePromptTemplate (#22851)
**Description**: After reviewing the prompts API, it is clear that the
only way a user can explicitly mark an input variable as optional is
through the `MessagePlaceholder.optional` attribute. Otherwise, the user
must explicitly pass in the `input_variables` expected to be used in the
`BasePromptTemplate`, which will be validated upon execution. Therefore,
to semantically handle a `MessagePlaceholder` `variable_name` as
optional, we will treat the `variable_name` of `MessagePlaceholder` as a
`partial_variable` if it has been marked as optional. This approach
aligns with how the `variable_name` of `MessagePlaceholder` is already
handled
[here](https://github.com/keenborder786/langchain/blob/optional_input_variables/libs/core/langchain_core/prompts/chat.py#L991).
Additionally, an attribute `optional_variable` has been added to
`BasePromptTemplate`, and the `variable_name` of `MessagePlaceholder` is
also made part of `optional_variable` when marked as optional.

Moreover, the `get_input_schema` method has been updated for
`BasePromptTemplate` to differentiate between optional and non-optional
variables.
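A short illustration of the optional-placeholder behavior (using the `MessagesPlaceholder` class; this is an illustrative example, not the PR's own test):

```python
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant."),
        MessagesPlaceholder(variable_name="history", optional=True),
        ("human", "{question}"),
    ]
)

# "history" is optional, so formatting without it should still work,
# and the input schema should mark it as optional.
print(prompt.invoke({"question": "What is LangChain?"}))
print(prompt.input_schema.schema())
```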

**Issue**: #22832, #21425

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-07-05 15:49:40 +00:00
Klaudia Lemiec
a2082bc1f8 docs: Arxiv docs update (#23871)
- [X] **PR title**
- [X] **PR message**: ***Delete this entire checklist*** and replace
with
    - **Description:** Update of docstrings and docpages
- **Issue:**
[22866](https://github.com/langchain-ai/langchain/issues/22866)

- [X] **Add tests and docs**

- [X] **Lint and test**
2024-07-05 11:43:51 -04:00
jonathan | ヨナタン
d311f22182 Langchain: fixed a typo in the imports (#23864)
Description: Fixed a typo during the imports for the
GoogleDriveSearchTool
    
Issue: It's only for the docs, but it bothered me so I decided to fix it
quickly :D
2024-07-05 15:42:50 +00:00
Arun Sasidharan
db6512aa35 docs: fix typo in llm_chain.ipynb (#23907)
- Fix typo in the tutorial step
- Add some context on `text`
2024-07-05 15:41:46 +00:00
André Quintino
99b1467b63 community: add support for 'cloud' parameter in JiraAPIWrapper (#23057)
- **Description:** Enhance JiraAPIWrapper to accept the 'cloud'
parameter through an environment variable. This update allows more
flexibility in configuring the environment for the Jira API.
 - **Twitter handle:** Andre_Q_Pereira
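A hedged sketch of the environment-based configuration; the exact environment variable name for the cloud flag is an assumption:

```python
import os

from langchain_community.utilities.jira import JiraAPIWrapper

# Assumed variable names; the PR only states that the 'cloud' parameter can
# now be supplied via the environment.
os.environ["JIRA_CLOUD"] = "True"
os.environ["JIRA_USERNAME"] = "me@example.com"
os.environ["JIRA_API_TOKEN"] = "api-token"
os.environ["JIRA_INSTANCE_URL"] = "https://example.atlassian.net"

jira = JiraAPIWrapper()  # picks the settings up from the environment
```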

---------

Co-authored-by: André Quintino <andre.quintino@tui.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-07-05 15:11:10 +00:00
wenngong
b1e90b3075 community: add model_name param valid for GPT4AllEmbeddings (#23867)
Description: add model_name param valid for GPT4AllEmbeddings

Issue: #23863 #22819

---------

Co-authored-by: gongwn1 <gongwn1@lenovo.com>
2024-07-05 10:46:34 -04:00
volodymyr-memsql
a4eb6d0fb1 community: add SingleStoreDB semantic cache (#23218)
This PR adds a `SingleStoreDBSemanticCache` class that implements a
cache based on SingleStoreDB vector store, integration tests, and a
notebook example.

Additionally, this PR contains minor changes to SingleStoreDB vector
store:
 - change add texts/documents methods to return a list of inserted ids
 - implement delete(ids) method to delete documents by list of ids
 - added drop() method to drop a correspondent database table
- updated integration tests to use and check functionality implemented
above
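A hedged configuration sketch; the constructor arguments are assumptions based on the existing SingleStoreDB vector store, not the exact new signature:

```python
from langchain_community.cache import SingleStoreDBSemanticCache
from langchain_core.embeddings import DeterministicFakeEmbedding
from langchain_core.globals import set_llm_cache

set_llm_cache(
    SingleStoreDBSemanticCache(
        embedding=DeterministicFakeEmbedding(size=8),
        host="https://user:password@localhost:3306/db",
    )
)
```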


CC: @baskaryan, @hwchase17

---------

Co-authored-by: Volodymyr Tkachuk <vtkachuk-ua@singlestore.com>
2024-07-05 09:26:06 -04:00
Igor Drozdov
bb597b1286 feat(community): add bind_tools function for ChatLiteLLM (#23823)
It's a follow-up to https://github.com/langchain-ai/langchain/pull/23765

Now the tools can be bound by calling `bind_tools`

```python
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.utils.function_calling import convert_to_openai_tool
from langchain_community.chat_models import ChatLiteLLM

class GetWeather(BaseModel):
    '''Get the current weather in a given location'''

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")

class GetPopulation(BaseModel):
    '''Get the current population in a given location'''

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")

prompt = "Which city is hotter today and which is bigger: LA or NY?"
# tools = [convert_to_openai_tool(GetWeather), convert_to_openai_tool(GetPopulation)]
tools = [GetWeather, GetPopulation]

llm = ChatLiteLLM(model="claude-3-sonnet-20240229").bind_tools(tools)
ai_msg = llm.invoke(prompt)
print(ai_msg.tool_calls)
```

Co-authored-by: Igor Drozdov <idrozdov@gitlab.com>
2024-07-05 09:19:41 -04:00
eliasecchig
efb48566d0 docs: add Vertex Feature Store, edit BigQuery Vector Search (#23709)
Add Vertex Feature Store, edit BigQuery Vector Search docs

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-07-05 12:12:21 +00:00
Yuki Watanabe
0e916d0d55 community: Overhaul MLflow Integration documentation (#23067) 2024-07-03 22:52:17 -04:00
ccurme
e62f8f143f infra: remove cohere from monorepo scheduled tests (#23846) 2024-07-03 21:48:39 +00:00
Jiejun Tan
2be66a38d8 huggingface: Fix huggingface tei support (#22653)
Update former pull request:
https://github.com/langchain-ai/langchain/pull/22595.

Modified
`libs/partners/huggingface/langchain_huggingface/embeddings/huggingface_endpoint.py`,
where the API call function does not match current [Text Embeddings
Inference
API](https://huggingface.github.io/text-embeddings-inference/#/Text%20Embeddings%20Inference/embed).
One example is:
```json
{
  "inputs": "string",
  "normalize": true,
  "truncate": false
}
```
Parameters in `_model_kwargs` are not passed properly in the latest
version. By the way, the issue *[why cause 413?
#50](https://github.com/huggingface/text-embeddings-inference/issues/50)*
might be solved.
2024-07-03 13:30:29 -07:00
Eugene Yurtsev
9ccc4b1616 core[patch]: Fix logic in BaseChatModel that processes the llm string that is used as a key for caching chat models responses (#23842)
This PR should fix the following issue:
https://github.com/langchain-ai/langchain/issues/23824
Introduced as part of this PR:
https://github.com/langchain-ai/langchain/pull/23416

I am unable to reproduce the issue locally though it's clear that we're
getting a `serialized` object which is not a dictionary somehow.

The test below passes for me prior to the PR as well

```python

def test_cache_with_sqllite() -> None:
    from langchain_community.cache import SQLiteCache

    from langchain_core.globals import set_llm_cache

    cache = SQLiteCache(database_path=".langchain.db")
    set_llm_cache(cache)
    chat_model = FakeListChatModel(responses=["hello", "goodbye"], cache=True)
    assert chat_model.invoke("How are you?").content == "hello"
    assert chat_model.invoke("How are you?").content == "hello"
```
2024-07-03 16:23:55 -04:00
Vadym Barda
9bb623381b core[minor]: update conversion utils to handle RemoveMessage (#23840) 2024-07-03 16:13:31 -04:00
Eugene Yurtsev
4ab78572e7 core[patch]: Speed up unit tests for imports (#23837)
Speed up unit tests for imports
2024-07-03 15:55:15 -04:00
Nico Puhlmann
4a15fce516 langchain: update declarative_base import (#20056)
**Description**: The ``declarative_base()`` function is now available as
``sqlalchemy.orm.declarative_base()``; the old import is deprecated since
SQLAlchemy 2.0. (Background on SQLAlchemy 2.0 at: https://sqlalche.me/e/b8d9)
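For reference, the updated import looks like this:

```python
# SQLAlchemy 2.0: declarative_base now lives in sqlalchemy.orm.
from sqlalchemy.orm import declarative_base

Base = declarative_base()
```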

---------

Co-authored-by: Isaac Francisco <78627776+isahers1@users.noreply.github.com>
Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
2024-07-03 15:52:35 -04:00
Mu Xian Ming
c06c666ce5 docs: fix docs/tutorials/llm_chain.ipynb (#23807)
to correctly display the link

Co-authored-by: Mu Xianming <mu.xianming@lmwn.com>
2024-07-03 15:38:31 -04:00
Vadym Barda
d206df8d3d docs: improve structure in the agent migration to langgraph guide (#23817) 2024-07-03 12:25:11 -07:00
Théo Deschamps
39b19cf764 core[patch]: extract input variables for path and detail keys in order to format an ImagePromptTemplate (#22613)
- Description: Add support for `path` and `detail` keys in
`ImagePromptTemplate`. Previously, only variables associated with the
`url` key were considered. This PR allows for the inclusion of a local
image path and a detail parameter as input to the format method.
- Issues:
    - fixes #20820 
    - related to #22024 
- Dependencies: None
- Twitter handle: @DeschampsTho5
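A hedged sketch of the new behavior, assuming the image_url mapping accepts `path` and `detail` template keys as described:

```python
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages(
    [
        (
            "human",
            [
                {"type": "text", "text": "Describe this image."},
                {
                    "type": "image_url",
                    "image_url": {"path": "{image_path}", "detail": "{detail}"},
                },
            ],
        )
    ]
)

# image_path and detail should now be picked up as input variables.
print(prompt.input_variables)
```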

---------

Co-authored-by: tdeschamps <tdeschamps@kameleoon.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2024-07-03 18:58:42 +00:00
Bagatur
a4798802ef cli[patch]: ruff 0.5 (#23833) 2024-07-03 18:33:15 +00:00
Leonid Ganeline
55f6f91f17 core[patch]: docstrings output_parsers (#23825)
Added missed docstrings. Formatted docstrings to the consistent form.
2024-07-03 14:27:40 -04:00
Philippe PRADOS
26cee2e878 partners[patch]: MongoDB vectorstore to return and accept string IDs (#23818)
The MongoDB integration has some errors:
- `add_texts() -> List` returns a list of `ObjectId`, not a list of
strings
- `delete()` with `id` never removes chunks.

---------

Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2024-07-03 14:14:08 -04:00
Ikko Eltociear Ashimine
75734fbcf1 community: fix typo in unit tests for test_zenguard.py (#23819)
enviroment -> environment


2024-07-03 14:05:42 -04:00
Bagatur
a0c2281540 infra: update mypy 1.10, ruff 0.5 (#23721)
```python
"""python scripts/update_mypy_ruff.py"""
import glob
import tomllib
from pathlib import Path

import toml
import subprocess
import re

ROOT_DIR = Path(__file__).parents[1]


def main():
    for path in glob.glob(str(ROOT_DIR / "libs/**/pyproject.toml"), recursive=True):
        print(path)
        with open(path, "rb") as f:
            pyproject = tomllib.load(f)
        try:
            pyproject["tool"]["poetry"]["group"]["typing"]["dependencies"]["mypy"] = (
                "^1.10"
            )
            pyproject["tool"]["poetry"]["group"]["lint"]["dependencies"]["ruff"] = (
                "^0.5"
            )
        except KeyError:
            continue
        with open(path, "w") as f:
            toml.dump(pyproject, f)
        cwd = "/".join(path.split("/")[:-1])
        completed = subprocess.run(
            "poetry lock --no-update; poetry install --with typing; poetry run mypy . --no-color",
            cwd=cwd,
            shell=True,
            capture_output=True,
            text=True,
        )
        logs = completed.stdout.split("\n")

        to_ignore = {}
        for l in logs:
            # Use raw strings so the backslashes in the regex are not treated
            # as (invalid) string escape sequences.
            if re.match(r"^(.*)\:(\d+)\: error:.*\[(.*)\]", l):
                path, line_no, error_type = re.match(
                    r"^(.*)\:(\d+)\: error:.*\[(.*)\]", l
                ).groups()
                if (path, line_no) in to_ignore:
                    to_ignore[(path, line_no)].append(error_type)
                else:
                    to_ignore[(path, line_no)] = [error_type]
        print(len(to_ignore))
        for (error_path, line_no), error_types in to_ignore.items():
            all_errors = ", ".join(error_types)
            full_path = f"{cwd}/{error_path}"
            try:
                with open(full_path, "r") as f:
                    file_lines = f.readlines()
            except FileNotFoundError:
                continue
            file_lines[int(line_no) - 1] = (
                file_lines[int(line_no) - 1][:-1] + f"  # type: ignore[{all_errors}]\n"
            )
            with open(full_path, "w") as f:
                f.write("".join(file_lines))

        subprocess.run(
            "poetry run ruff format .; poetry run ruff --select I --fix .",
            cwd=cwd,
            shell=True,
            capture_output=True,
            text=True,
        )


if __name__ == "__main__":
    main()

```
2024-07-03 10:33:27 -07:00
William FH
6cd56821dc [Core] Unify function schema parsing (#23370)
Use pydantic to infer nested schemas and all that fun.
Includes bagatur's convenient docstring parser and annotation support.


Previously we didn't adequately support many typehints in the
bind_tools() method on raw functions (like optionals/unions, nested
types, etc.)
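A small sketch of the kind of typehints that should now be handled (illustrative function; exact schema output may differ):

```python
from typing import List, Optional

from langchain_core.utils.function_calling import convert_to_openai_tool


def search_orders(
    customer_id: str,
    statuses: Optional[List[str]] = None,
    limit: int = 10,
) -> str:
    """Search a customer's orders, optionally filtered by status."""
    return "ok"


# Optionals and nested types in the annotations should be reflected in the
# generated tool schema.
print(convert_to_openai_tool(search_orders))
```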
2024-07-03 09:55:38 -07:00
Oguz Vuruskaner
2a2c0d1a94 community[deepinfra]: fix tool call parsing. (#23162)
This PR includes fix for DeepInfra tool call parsing.
2024-07-03 12:11:37 -04:00
maang-h
525109e506 feat: Implement ChatBaichuan asynchronous interface (#23589)
- **Description:** Add interface to `ChatBaichuan` to support
asynchronous requests
    - `_agenerate` method
    - `_astream` method
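A hedged usage sketch of the new async surface (requires a valid Baichuan API key):

```python
import asyncio

from langchain_community.chat_models import ChatBaichuan


async def main() -> None:
    chat = ChatBaichuan(baichuan_api_key="YOUR_API_KEY")
    reply = await chat.ainvoke("Hello")        # backed by _agenerate
    print(reply.content)
    async for chunk in chat.astream("Hello"):  # backed by _astream
        print(chunk.content, end="", flush=True)


asyncio.run(main())
```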

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-07-03 12:10:04 -04:00
Bagatur
8842a0d986 docs: fireworks nit (#23822) 2024-07-03 15:36:27 +00:00
Leonid Ganeline
716a316654 core: docstrings indexing (#23785)
Added missed docstrings. Formatted docstrings to the consistent form.
2024-07-03 11:27:34 -04:00
Leonid Ganeline
30fdc2dbe7 core: docstrings messages (#23788)
Added missed docstrings. Formatted docstrings to the consistent form.
2024-07-03 11:25:00 -04:00
ccurme
54e730f6e4 fireworks[patch]: read from tool calls attribute (#23820) 2024-07-03 11:11:17 -04:00
Bagatur
e787249af1 docs: fireworks standard page (#23816) 2024-07-03 14:33:05 +00:00
Jacob Lee
27aa4d38bf docs[patch]: Update structured output docs to have more discussion (#23786)
CC @agola11 @ccurme
2024-07-02 16:53:31 -07:00
Bagatur
ebb404527f anthropic[patch]: Release 0.1.19 (#23783) 2024-07-02 18:17:25 -04:00
Bagatur
6168c846b2 openai[patch]: Release 0.1.14 (#23782) 2024-07-02 18:17:15 -04:00
Bagatur
cb9812593f openai[patch]: expose model request payload (#23287)
![Screenshot 2024-06-21 at 3 12 12
PM](https://github.com/langchain-ai/langchain/assets/22008038/6243a01f-1ef6-4085-9160-2844d9f2b683)
2024-07-02 17:43:55 -04:00
Bagatur
ed200bf2c4 anthropic[patch]: expose payload (#23291)
![Screenshot 2024-06-21 at 4 56 02
PM](https://github.com/langchain-ai/langchain/assets/22008038/a2c6224f-3741-4502-9607-1a726a0551c9)
2024-07-02 17:43:47 -04:00
Bagatur
7a3d8e5a99 core[patch]: Release 0.2.11 (#23780) 2024-07-02 17:35:57 -04:00
Bagatur
d677dadf5f core[patch]: mark RemoveMessage beta (#23656) 2024-07-02 21:27:21 +00:00
ccurme
1d54ac93bb ai21[patch]: release 0.1.7 (#23781) 2024-07-02 21:24:13 +00:00
Asaf Joseph Gardin
320dc31822 partners: AI21 Labs Jamba Streaming Support (#23538)
- [x] **PR title**: "package: description"

- [x] **PR message**: ***Delete this entire checklist*** and replace
with
    - **Description:** Added support for streaming in AI21 Jamba Model
    - **Twitter handle:** https://github.com/AI21Labs


- [x] **Add tests and docs**: If you're adding a new integration, please
include

- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

---------

Co-authored-by: Asaf Gardin <asafg@ai21.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-07-02 17:15:46 -04:00
Qingchuan Hao
5cd4083457 community: make bing web search as the only option (#23523)
This PR makes Bing web search the only option for BingSearchAPIWrapper to
simplify the user interface in LangChain.
This is a follow-up to
https://github.com/langchain-ai/langchain/pull/23306.
2024-07-02 17:13:54 -04:00
William W Wang
76e7e4e9e6 Update docs: LangChain agent memory (#23673)
**Description:** Update docs content on agent memory

2024-07-02 17:06:32 -04:00
ccurme
7c1cddf1b7 anthropic[patch]: release 0.1.18 (#23778) 2024-07-02 16:46:47 -04:00
ccurme
c9dac59008 anthropic[patch]: fix model name in some integration tests (#23779) 2024-07-02 20:45:52 +00:00
Bagatur
7a6c06cadd anthropic[patch]: tool output parser fix (#23647) 2024-07-02 16:33:22 -04:00
ccurme
46cbf0e4aa anthropic[patch]: use core output parsers for structured output (#23776)
Also add to standard tests for structured output.
2024-07-02 16:15:26 -04:00
kiarina
dc396835ed langchain_anthropic: add stop_reason in ChatAnthropic stream result (#23689)
`ChatAnthropic` can get `stop_reason` from the resulting `AIMessage` in
`invoke` and `ainvoke`, but not in `stream` and `astream`.
This is a different behavior from `ChatOpenAI`.
It is possible to get `stop_reason` from `stream` as well, since it is
needed to determine the next action after the LLM call. This would be
easier to handle in situations where only `stop_reason` is needed.
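A hedged sketch of reading `stop_reason` from streamed chunks (which chunk carries it is an assumption):

```python
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-3-sonnet-20240229")

for chunk in llm.stream("Say hello"):
    stop_reason = chunk.response_metadata.get("stop_reason")
    if stop_reason:
        print("stop_reason:", stop_reason)
```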

- Issue: NA
- Dependencies: NA
- Twitter handle: https://x.com/kiarina37
2024-07-02 15:16:20 -04:00
Bagatur
27ce58f86e docs: google genai standard page (#23766)
Part of #22296
2024-07-02 13:54:34 -04:00
maang-h
e4e28a6ff5 community[patch]: Fix MiniMaxChat validate_environment error (#23770)
- **Description:** Fix some issues in MiniMaxChat
  - Fix `minimax_api_host` not in `values` error
  - Remove `minimax_group_id` from reading environment variables; it is no
longer used in MiniMaxChat
  - Invoke callback prior to yielding token (issue #16913)
2024-07-02 13:23:32 -04:00
SN
acc457f645 core[patch]: fix nested sections for mustache templating (#23747)
The prompt template variable detection only worked for singly-nested
sections because we just kept track of whether we were in a section and
then set that to false as soon as we encountered an end block. i.e. the
following:

```
{{#outerSection}}
    {{variableThatShouldntShowUp}}
    {{#nestedSection}}
        {{nestedVal}}
    {{/nestedSection}}
    {{anotherVariableThatShouldntShowUp}}
{{/outerSection}}
```

Would yield `['outerSection', 'anotherVariableThatShouldntShowUp']` as
input_variables (whereas it should just yield `['outerSection']`). This
fixes that by keeping track of the current depth and using a stack.
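A standalone sketch of the depth-tracking idea (not the library's actual parser):

```python
import re
from typing import List


def top_level_mustache_variables(template: str) -> List[str]:
    variables: List[str] = []
    section_stack: List[str] = []
    for marker, name in re.findall(r"{{\s*([#/]?)\s*([^}]+?)\s*}}", template):
        if marker == "#":
            if not section_stack:
                variables.append(name)  # the section itself is a top-level input
            section_stack.append(name)
        elif marker == "/":
            if section_stack:
                section_stack.pop()
        elif not section_stack:
            variables.append(name)  # plain variable outside any section
    return variables


template = "{{#outer}}{{hidden}}{{#nested}}{{deep}}{{/nested}}{{/outer}}{{top}}"
print(top_level_mustache_variables(template))  # ['outer', 'top']
```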
2024-07-02 10:20:45 -07:00
Karim Lalani
acc8fb3ead docs[patch]: Update OllamaFunctions docs to match chat model integration template (#23179)
Added Tool Calling Agent Example with langgraph to OllamaFunctions
documentation
2024-07-02 10:05:44 -07:00
Bagatur
79c07a8ade docs: standardize bedrock page (#23738)
Part of #22296
2024-07-02 12:03:36 -04:00
Teja Hara
a77a263e24 Added langchain-community installation (#23741)
PR title: Docs enhancement

- Description: Adding installation instructions for integrations
requiring langchain-community package since 0.2
- Issue: https://github.com/langchain-ai/langchain/issues/22005

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-07-02 11:03:07 -04:00
Eugene Yurtsev
46ff0f7a3c community[patch]: Update @root_validators to use explicit pre=True or pre=False (#23737) 2024-07-02 10:47:21 -04:00
Igor Drozdov
b664dbcc36 feat(community): add support for tool_calls response (#23765)
When `model_kwargs={"tools": tools}` are passed to `ChatLiteLLM`, they
are executed, but the response is not recognized correctly

Let's add `tool_calls` to the `additional_kwargs`

## ChatAnthropic

I used the following example to verify the output of llm with tools:

```python
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_anthropic import ChatAnthropic

class GetWeather(BaseModel):
    '''Get the current weather in a given location'''

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")

class GetPopulation(BaseModel):
    '''Get the current population in a given location'''

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")

llm = ChatAnthropic(model="claude-3-sonnet-20240229")
llm_with_tools = llm.bind_tools([GetWeather, GetPopulation])
ai_msg = llm_with_tools.invoke("Which city is hotter today and which is bigger: LA or NY?")
print(ai_msg.tool_calls)
```

I get the following response:

```json
[{'name': 'GetWeather', 'args': {'location': 'Los Angeles, CA'}, 'id': 'toolu_01UfDA89knrhw3vFV9X47neT'}, {'name': 'GetWeather', 'args': {'location': 'New York, NY'}, 'id': 'toolu_01NrYVRYae7m7z7tBgyPb3Gd'}, {'name': 'GetPopulation', 'args': {'location': 'Los Angeles, CA'}, 'id': 'toolu_01EPFEpDgzL6vV2dTpD9SVP5'}, {'name': 'GetPopulation', 'args': {'location': 'New York, NY'}, 'id': 'toolu_01B5J6tPJXgwwfhQX9BHP2dt'}]
```

## LiteLLM

Based on https://litellm.vercel.app/docs/completion/function_call

```python
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.utils.function_calling import convert_to_openai_tool
import litellm

class GetWeather(BaseModel):
    '''Get the current weather in a given location'''

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")

class GetPopulation(BaseModel):
    '''Get the current population in a given location'''

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")

prompt = "Which city is hotter today and which is bigger: LA or NY?"
tools = [convert_to_openai_tool(GetWeather), convert_to_openai_tool(GetPopulation)]

response = litellm.completion(model="claude-3-sonnet-20240229", messages=[{'role': 'user', 'content': prompt}], tools=tools)
print(response.choices[0].message.tool_calls)
```

```python
[ChatCompletionMessageToolCall(function=Function(arguments='{"location": "Los Angeles, CA"}', name='GetWeather'), id='toolu_01HeDWV5vP7BDFfytH5FJsja', type='function'), ChatCompletionMessageToolCall(function=Function(arguments='{"location": "New York, NY"}', name='GetWeather'), id='toolu_01EiLesUSEr3YK1DaE2jxsQv', type='function'), ChatCompletionMessageToolCall(function=Function(arguments='{"location": "Los Angeles, CA"}', name='GetPopulation'), id='toolu_01Xz26zvkBDRxEUEWm9pX6xa', type='function'), ChatCompletionMessageToolCall(function=Function(arguments='{"location": "New York, NY"}', name='GetPopulation'), id='toolu_01SDqKnsLjvUXuBsgAZdEEpp', type='function')]
```

## ChatLiteLLM

When I try the following

```python
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.utils.function_calling import convert_to_openai_tool
from langchain_community.chat_models import ChatLiteLLM

class GetWeather(BaseModel):
    '''Get the current weather in a given location'''

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")

class GetPopulation(BaseModel):
    '''Get the current population in a given location'''

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")

prompt = "Which city is hotter today and which is bigger: LA or NY?"
tools = [convert_to_openai_tool(GetWeather), convert_to_openai_tool(GetPopulation)]

llm = ChatLiteLLM(model="claude-3-sonnet-20240229", model_kwargs={"tools": tools})
ai_msg = llm.invoke(prompt)
print(ai_msg)
print(ai_msg.tool_calls)
```

```python
content="Okay, let's find out the current weather and populations for Los Angeles and New York City:" response_metadata={'token_usage': Usage(prompt_tokens=329, completion_tokens=193, total_tokens=522), 'model': 'claude-3-sonnet-20240229', 'finish_reason': 'tool_calls'} id='run-748b7a84-84f4-497e-bba1-320bd4823937-0'
[]
```

---

When I apply the changes of this PR, the output is

```json
[{'name': 'GetWeather', 'args': {'location': 'Los Angeles, CA'}, 'id': 'toolu_017D2tGjiaiakB1HadsEFZ4e'}, {'name': 'GetWeather', 'args': {'location': 'New York, NY'}, 'id': 'toolu_01WrDpJfVqLkPejWzonPCbLW'}, {'name': 'GetPopulation', 'args': {'location': 'Los Angeles, CA'}, 'id': 'toolu_016UKyYrVAV9Pz99iZGgGU7V'}, {'name': 'GetPopulation', 'args': {'location': 'New York, NY'}, 'id': 'toolu_01Sgv1imExFX1oiR1Cw88zKy'}]
```

Co-authored-by: Igor Drozdov <idrozdov@gitlab.com>
2024-07-02 10:42:08 -04:00
Eugene Yurtsev
338cef35b4 community[patch]: update @root_validator in utilities namespace (#23768)
Update all utilities to use `pre=True` or `pre=False`

https://github.com/langchain-ai/langchain/issues/22819
2024-07-02 14:33:01 +00:00
wenngong
ee5eedfa04 partners: support reading HuggingFace params from env (#23309)
Description:
1. The partners/HuggingFace module supports reading params from env. Did not
adjust the langchain_community/.../huggingfaceXX modules since they are
deprecated.
2. pydantic 2 @root_validator migration.

Issue: #22448 #22819

---------

Co-authored-by: gongwn1 <gongwn1@lenovo.com>
2024-07-02 10:12:45 -04:00
antonpibm
ffde8a6a09 Milvus vectorstore: fix pass ids as argument after upsert (#23761)
**Description**: Milvus vectorstore supports both `add_documents` via
the base class and an `upsert` method which deletes and re-adds documents
based on their ids

**Issue**: Due to a mismatch in the interfaces, the ids used by `upsert`
are ignored in `add_documents`, as `ids` are passed as an argument in
`upsert` but via `kwargs` in `add_documents`

This caused exceptions and inconsistency in the DB, tested with
`auto_id=False`

**Fix**: pass `ids` via `kwargs` to `add_documents`
2024-07-02 13:45:30 +00:00
Eugene Yurtsev
d084172b63 community[patch]: root validator set explicit pre=False or pre=True (#23764)
See issue: https://github.com/langchain-ai/langchain/issues/22819
2024-07-02 09:42:05 -04:00
Khelan Modi
4457e64e13 Update azure_cosmos_db for mongodb documentation (#23740)
added pre-filtering documentation

- [x] **PR message**: 
    - **Description:** added filter vector search 
    - **Issue:** N/A
    - **Dependencies:** N/A
    - **Twitter handle:**: n/a


- [x] **Add tests and docs**: No need for tests, just a simple doc update


2024-07-02 12:53:05 +00:00
panwg3
bc98f90ba3 update wrong words (#23749)
2024-07-02 08:50:20 -04:00
mattthomps1
cc55823486 docs: updated PPLX model (#23723)
Description: updated pplx docs to reference a currently [supported
model](https://docs.perplexity.ai/docs/model-cards). pplx-70b-online ->
llama-3-sonar-small-32k-online

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-07-02 08:48:49 -04:00
Bagatur
aa165539f6 docs: standardize cohere page (#23739)
Part of #22296
2024-07-01 19:34:13 -04:00
Jacob Lee
7791d92711 community[patch]: Fix requests alias for load_tools (#23734)
CC @baskaryan
2024-07-01 15:02:14 -07:00
Eugene Yurtsev
f24e38876a community[patch]: Update root_validators to use explicit pre=True or pre=False (#23736) 2024-07-01 17:13:23 -04:00
Yannick Stephan
5b1de2ae93 mistralai: Fixed streaming in MistralAI with ainvoke and callbacks (#22000)
# Fix streaming in mistral with ainvoke 
- [x] **PR title**
- [x] **PR message**
- [x] **Add tests and docs**:
  1. [x] Added a test for the fixed integration.
2. [x] An example notebook showing its use. It lives in
`docs/docs/integrations` directory.
- [x] **Lint and test**: Ran `make format`, `make lint` and `make test`
from the root of the package(s) I've modified.

Hello 

* I identified an issue in the mistral package where callback streaming
(see on_llm_new_token) was not functioning correctly when the streaming
parameter was set to True and called with `ainvoke`.
* The root cause of the problem was that the streaming parameter was not
taken into account (I think it's an oversight).
* To resolve the issue, I added the `streaming` attribute.
* Now, the callback with streaming works as expected when the streaming
parameter is set to True.

## How to reproduce

```
from langchain_mistralai.chat_models import ChatMistralAI
chain = ChatMistralAI(streaming=True)
# Add a callback
chain.ainvoke(..)

# Observe on_llm_new_token
# Now, the callback receives streaming tokens; before, it was in grouped format.
```

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-01 20:53:09 +00:00
Jacob Lee
f4b2e553e7 docs[patch]: Update Unstructured loader notebooks and install instructions (#23726)
CC @baskaryan @MthwRobinson
2024-07-01 13:36:48 -07:00
Eugene Yurtsev
5d2262af34 community[patch]: Update root_validators to use pre=True or pre=False (#23731)
Update root_validators in preparation for pydantic 2 migration.
2024-07-01 20:10:15 +00:00
Erick Friis
6019147b66 infra: filter template check (#23727) 2024-07-01 13:00:33 -07:00
Eugene Yurtsev
ebcee4f610 core[patch]: Add versionadded to get_by_ids (#23728) 2024-07-01 15:16:00 -04:00
Eugene Yurtsev
e800f6bb57 core[minor]: Create BaseMedia object (#23639)
This PR implements a BaseContent object, from which Document and Blob
objects will inherit, as proposed here:
https://github.com/langchain-ai/langchain/pull/23544

Alternative: Create a base object that only has an identifier and no
metadata.

For now decided against it, since that refactor can be done at a later
time. It also feels a bit odd since our IDs are optional at the moment.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-01 15:07:30 -04:00
Chip Davis
04bc5f1a95 partners[azure]: fix having openai_api_base set for other packages (#22068)
This fix is for #21726. When having other packages installed that
require the `openai_api_base` environment variable, users are not able
to instantiate the AzureChatModels or AzureEmbeddings.

This PR adds a new value `ignore_openai_api_base` which is a bool. When
set to True, it sets `openai_api_base` to `None`

Two new tests were added for the `test_azure` and a new file
`test_azure_embeddings`

A different approach may be better for this. If you can think of better
logic, let me know and I can adjust it.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-01 18:35:20 +00:00
Nuno Campos
b36e95caa9 core[patch]: use async messages where possible (#23718)
Fix #23716

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-07-01 18:33:05 +00:00
Spyros Avlonitis
8cfb2fa1b7 core[minor]: Add maxsize for InMemoryCache (#23405)
This PR introduces a maxsize parameter for the InMemoryCache class,
allowing users to specify the maximum number of items to store in the
cache. If the cache exceeds the specified maximum size, the oldest items
are removed. Additionally, comprehensive unit tests have been added to
ensure all functionalities are thoroughly tested. The tests are written
using pytest and cover both synchronous and asynchronous methods.
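A minimal usage sketch:

```python
from langchain_core.caches import InMemoryCache
from langchain_core.globals import set_llm_cache

# Keep at most 100 cached generations; the oldest entries are evicted first.
set_llm_cache(InMemoryCache(maxsize=100))
```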

Twitter: @spyrosavl

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-07-01 14:21:21 -04:00
maang-h
96af8f31ae community[patch]: Invoke callback prior to yielding token (#23638)
- **Description:** Invoke callback prior to yielding token in stream and
astream methods for ChatZhipuAI.
- **Issue:** the issue #16913
2024-07-01 18:12:24 +00:00
Eugene Yurtsev
b5aef4cf97 core[patch]: Fix llm string representation for serializable models (#23416)
Fix LLM string representation for serializable objects.

Fix for issue: https://github.com/langchain-ai/langchain/issues/23257

The llm string of serializable chat models is the serialized
representation of the object. LangChain serialization dumps some basic
information about non serializable objects including their repr() which
includes an object id.

This means that if a chat model has any non serializable fields (e.g., a
cache), then any new instantiation of the those fields will change the
llm representation of the chat model and cause chat misses.

i.e., re-instantiating a postgres cache would result in cache misses!
2024-07-01 14:06:33 -04:00
nobbbbby
3904f2cd40 core: fix NameError (#23658)
**Description:** In the chat_models module of the language model, the
import statement for BaseModel has been moved from the conditionally
imported section to the main import area, fixing a `NameError`.
**Issue:** fixes `NameError`
2024-07-01 17:51:23 +00:00
Jacob Lee
d2c7379f1c 👥 Update LangChain people data (#23697)
👥 Update LangChain people data

---------

Co-authored-by: github-actions <github-actions@github.com>
2024-07-01 17:42:55 +00:00
Jordy Jackson Antunes da Rocha
a50eabbd48 experimental: LLMGraphTransformer add missing conditional adding restrictions to prompts for LLM that do not support function calling (#22793)
- Description: Modified the prompt created by the function
`create_unstructured_prompt` (which is called for LLMs that do not
support function calling) by adding conditional checks that verify if
restrictions on entity types and rel_types should be added to the
prompt. If the user provides a sufficiently large text, the current
prompt **may** fail to produce results in some LLMs. I have first seen
this issue when I implemented a custom LLM class that did not support
Function Calling and used Gemini 1.5 Pro, but I was able to replicate
this issue using OpenAI models.

By loading a sufficiently large text
```python
from langchain_community.llms import Ollama
from langchain_openai import ChatOpenAI, OpenAI
from langchain_core.prompts import PromptTemplate
import re
from langchain_experimental.graph_transformers import LLMGraphTransformer
from langchain_core.documents import Document

with open("texto-longo.txt", "r") as file:
    full_text = file.read()
    partial_text = full_text[:4000]

documents = [Document(page_content=partial_text)] # cropped to fit GPT 3.5 context window
```

And using the chat class (that has function calling)
```python
chat_openai = ChatOpenAI(model="gpt-3.5-turbo", model_kwargs={"seed": 42})
chat_gpt35_transformer = LLMGraphTransformer(llm=chat_openai)
graph_from_chat_gpt35 = chat_gpt35_transformer.convert_to_graph_documents(documents)
```
It works:
```
>>> print(graph_from_chat_gpt35[0].nodes)
[Node(id="Jesu, Joy of Man's Desiring", type='Music'), Node(id='Godel', type='Person'), Node(id='Johann Sebastian Bach', type='Person'), Node(id='clever way of encoding the complicated expressions as numbers', type='Concept')]
```

But if you try to use the non-chat LLM class (that does not support
function calling)
```python
openai = OpenAI(
    model="gpt-3.5-turbo-instruct",
    max_tokens=1000,
)
gpt35_transformer = LLMGraphTransformer(llm=openai)
graph_from_gpt35 = gpt35_transformer.convert_to_graph_documents(documents)
```

It uses the prompt that has issues and sometimes does not produce any
result
```
>>> print(graph_from_gpt35[0].nodes)
[]
```

After implementing the changes, I was able to use both classes more
consistently:

```shell
>>> chat_gpt35_transformer = LLMGraphTransformer(llm=chat_openai)
>>> graph_from_chat_gpt35 = chat_gpt35_transformer.convert_to_graph_documents(documents)
>>> print(graph_from_chat_gpt35[0].nodes)
[Node(id="Jesu, Joy Of Man'S Desiring", type='Music'), Node(id='Johann Sebastian Bach', type='Person'), Node(id='Godel', type='Person')]
>>> gpt35_transformer = LLMGraphTransformer(llm=openai)
>>> graph_from_gpt35 = gpt35_transformer.convert_to_graph_documents(documents)
>>> print(graph_from_gpt35[0].nodes)
[Node(id='I', type='Pronoun'), Node(id="JESU, JOY OF MAN'S DESIRING", type='Song'), Node(id='larger memory', type='Memory'), Node(id='this nice tree structure', type='Structure'), Node(id='how you can do it all with the numbers', type='Process'), Node(id='JOHANN SEBASTIAN BACH', type='Composer'), Node(id='type of structure', type='Characteristic'), Node(id='that', type='Pronoun'), Node(id='we', type='Pronoun'), Node(id='worry', type='Verb')]
```

The results are a little inconsistent because the GPT 3.5 model may
produce incomplete json due to the token limit, but that could be solved
(or mitigated) by checking for a complete json when parsing it.
2024-07-01 17:33:51 +00:00
Eugene Yurtsev
4f1821db3e core[minor]: Add get_by_ids to vectorstore interface (#23594)
This PR adds a part of the indexing API proposed in this RFC
https://github.com/langchain-ai/langchain/pull/23544/files.

It allows rolling out `get_by_ids` which should be uncontroversial to
existing vectorstores without introducing new abstractions.

The semantics for this method depend on the ability of identifying
returned documents using the new optional ID field on documents:
https://github.com/langchain-ai/langchain/pull/23411

Alternatives are:

1. Relax the sequence requirement

```python
def get_by_ids(self, ids: Iterable[str], /) -> Iterable[Document]:
```

Rejected:
- implementations are more likely to start batching with bad defaults
- users would need to call list() or we'd need to introduce another
convenience method

2. Support more kwargs

```python

def get_by_ids(self, ids: Sequence[str], /, **kwargs) -> List[Document]:
...
```

Rejected: 
- No need for `batch` parameter since IDs is a sequence
- Output cannot be customized since `Document` is fixed. (e.g.,
parameters could be useful to grab extra metadata like the vector that
was indexed with the Document or to project a part of the document)
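A hedged usage sketch with the in-memory store, assuming it honors the optional Document.id field:

```python
from langchain_core.documents import Document
from langchain_core.embeddings import DeterministicFakeEmbedding
from langchain_core.vectorstores import InMemoryVectorStore

store = InMemoryVectorStore(embedding=DeterministicFakeEmbedding(size=8))
store.add_documents(
    [Document(id="a", page_content="alpha"), Document(id="b", page_content="beta")]
)

# Sequence of ids in, list of Documents out, per the signature chosen above.
print(store.get_by_ids(["a", "b"]))
```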
2024-07-01 13:04:33 -04:00
Valentin
bf402f902e community: Fix LanceDB similarity search bug (#23591)
**Description:** LanceDB didn't allow querying the database using
similarity score thresholds because the metrics value was missing. This
PR simply fixes that bug.
**Issue:** not applicable
**Dependencies:** none
**Twitter handle:** not available
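A hedged sketch of threshold-based retrieval that relies on the metric now being reported (vector store construction not shown):

```python
from langchain_core.vectorstores import VectorStore


def threshold_retriever(vector_store: VectorStore):
    # Works only when the store reports a similarity metric for its scores.
    return vector_store.as_retriever(
        search_type="similarity_score_threshold",
        search_kwargs={"score_threshold": 0.8, "k": 4},
    )
```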

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-07-01 16:33:45 +00:00
Bagatur
389a568f9a standard-tests[patch]: add anthropic format integration test (#23717) 2024-07-01 11:06:04 -04:00
Rafael Pereira
4b9517db85 Jira: Allow Jira access using only the token (#23708)
- **Description:** At the moment the Jira wrapper only accepts the
usage of the Username and Password/Token at the same time. However, Jira
allows connecting using only the token, which is useful in enterprise contexts.
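
A rough usage sketch of token-only access (environment variable names and values are assumptions for illustration):

```python
import os

from langchain_community.utilities.jira import JiraAPIWrapper

# Assumed setup: only a token and the instance URL, no username.
os.environ["JIRA_API_TOKEN"] = "<personal-access-token>"
os.environ["JIRA_INSTANCE_URL"] = "https://your-company.atlassian.net"

jira = JiraAPIWrapper()  # with this change, the token alone should be enough
```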

Co-authored-by: rpereira <rafael.pereira@criticalsoftware.com>
2024-07-01 13:13:51 +00:00
Francesco Kruk
7538f3df58 Update jina embedding notebook to show multimodal capability more clearly (#23702)
After merging the [PR #22594 to include Jina AI multimodal capabilities
in the Langchain
documentation](https://github.com/langchain-ai/langchain/pull/22594), we
updated the notebook to showcase the difference between text and
multimodal capabilities more clearly.
2024-07-01 09:13:19 -04:00
Tim Van Wassenhove
24916c6703 community: Register pandas df in duckdb when creating vector_store (#23690)
- **Description:** Register pandas df in duckdb when creating
vector_store
- **Issue:** Resolves #23308
- **Dependencies:** None
- **Twitter handle:** @timvw

Co-authored-by: Tim Van Wassenhove <tim.van.wassenhove@telenetgroup.be>
2024-07-01 09:12:06 -04:00
Sourav Biswal
b60df8bb4f Update chatbot.ipynb (#23688)
DOC: missing parenthesis #23687

2024-07-01 13:00:34 +00:00
Jacob Lee
9604cb833b ci[patch]: Update people PR CI permissions (#23696)
CC @agola11
2024-06-30 22:25:08 -07:00
Bagatur
29aa9d6750 groq[patch]: Release 0.1.6 (#23655) 2024-06-29 07:35:23 -04:00
Bagatur
f2d0c13a15 fireworks[patch]: Release 0.1.4 (#23654) 2024-06-29 07:35:16 -04:00
Bagatur
9a5e35d1ba mistralai[patch]: Release 0.1.9 (#23653) 2024-06-29 07:35:09 -04:00
Bagatur
74321e546d infra: update release permissions (#23662) 2024-06-29 07:31:36 -04:00
Mateusz Szewczyk
a78ccb993c ibm: Add support for Chat Models (#22979) 2024-06-29 01:59:25 -07:00
Jacob Lee
16c59118eb docs[patch]: Adds short tracing how-tos and conceptual guide (#23657)
CC @agola11
2024-06-28 18:28:49 -07:00
Jacob Lee
c0bb26e85b docs[patch]: Typo fix (#23652) 2024-06-28 17:27:44 -07:00
Jacob Lee
72175c57bd docs[patch]: Fix docs bugs in response to feedback (#23649)
- Update Meta Llama 3 cookbook link
- Add prereq section and information on `messages_modifier` to LangGraph
migration guide
- Update `PydanticToolsParser` explanation and entrypoint in tool
calling guide
- Add more obvious warning to `OllamaFunctions`
- Fix Wikidata tool install flow
- Update Bedrock LLM initialization

@baskaryan can you add a bit of information on how to authenticate into
the `ChatBedrock` and `BedrockLLM` models? I wasn't able to figure it
out :(
2024-06-28 17:24:55 -07:00
Bagatur
af2c05e5f3 openai[patch]: Release 0.1.13 (#23651) 2024-06-28 17:10:30 -07:00
Bagatur
b63c7f10bc anthropic[patch]: Release 0.1.17 (#23650) 2024-06-28 17:07:08 -07:00
Bagatur
fc8fd49328 openai, anthropic, ...: with_structured_output to pass in explicit tool choice (#23645)
...community, mistralai, groq, fireworks

part of #23644
2024-06-28 16:39:53 -07:00
Bagatur
c5f35a72da docs: vllm pkg nit (#23648) 2024-06-28 16:09:36 -07:00
Bagatur
81064017a9 docs: azure openai docstring (#23643)
part of #22296
2024-06-28 15:15:58 -07:00
Bagatur
381aedcc61 docs: standardize azure openai page (#23642)
part of #22296
2024-06-28 15:15:41 -07:00
Vadym Barda
e8d77002ea core: add RemoveMessage (#23636)
This change adds a new message type `RemoveMessage`. This will enable
`langgraph` users to manually modify graph state (or have the graph
nodes modify the state) to remove messages by `id`

Examples:

* allow users to delete messages from state by calling

```python
graph.update_state(config, values=[RemoveMessage(id=state.values[-1].id)])
```

* allow nodes to delete messages

```python
graph.add_node("delete_messages", lambda state: [RemoveMessage(id=state[-1].id)])
```
2024-06-28 14:40:02 -07:00
ccurme
8fce8c6771 community: fix extended tests (#23640) 2024-06-28 16:35:38 -04:00
ccurme
5d93916665 openai[patch]: release 0.1.12 (#23641) 2024-06-28 19:51:16 +00:00
Jacob Lee
a032583b17 docs[patch]: Update diagrams (#23613) 2024-06-28 12:36:00 -07:00
ccurme
390ee8d971 standard-tests: add test for structured output (#23631)
- add test for structured output
- fix bug with structured output for Azure
- better testing on Groq (break out Mixtral + Llama3 and add xfails
where needed)
2024-06-28 15:01:40 -04:00
Eugene Yurtsev
6c1ba9731d docs: Resurface some methods in API reference and clarify note at top of Reference (#23633)
This PR modifies the API Reference in the following way:

1. Relist standard methods: invoke, ainvoke, batch, abatch,
batch_as_completed, abatch_as_completed, stream, astream,
astream_events. These are the main entry points for a lot of runnables,
so we'll keep them for each runnable.
2. Relist methods from Runnable Serializable: to_json,
configurable_fields, configurable_alternatives.
3. Expand the note in the API reference documentation to explain that
additional methods are available.
2024-06-28 12:31:37 -04:00
Brace Sproul
800b0ff3b9 docs[minor]: Hide langserve pages (#23618) 2024-06-28 08:25:08 -07:00
j pradhan
5f21eab491 community:perplexity[patch]: standardize init args (#21794)
updated request_timeout default alias value per related docstring.

Related to
[20085](https://github.com/langchain-ai/langchain/issues/20085)

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-06-28 13:26:12 +00:00
mackong
11483b0fb8 community[patch]: set tool name for tongyi&qianfan llm (#22889)
- **Description:** The name of ToolMessage defaults to None, which
makes the tool message sent to the LLM look like
 ```json
{"role": "tool",
   "tool_call_id": "",
   "content": "{\"time\": \"12:12\"}",
   "name": null}
```
But the name seems essential for some LLMs like TongYi Qwen, so we need to set the name using the agent_action's tool value.
  - **Issue:** N/A
  - **Dependencies:** N/A
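
For illustration, after the fix the converted message should carry the tool's name, roughly like this sketch (values are made up):

```python
from langchain_core.messages import ToolMessage

# Illustrative values: `name` is filled from agent_action.tool instead of None.
msg = ToolMessage(
    content='{"time": "12:12"}',
    tool_call_id="call_abc123",
    name="get_current_time",
)
```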
2024-06-28 09:17:05 -04:00
Leonid Ganeline
e4caa41aa9 community: docstrings toolkits (#23616)
Added missed docstrings. Formatted docstrings to the consistent form.
2024-06-28 08:40:52 -04:00
clement.l
19eb82e68b docs: Fix link in LLMChain tutorial (#23620)
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-28 03:59:24 +00:00
Bagatur
bd68a38723 docs: update chatmodel.with_structured_output feat in table (#23610) 2024-06-27 20:38:49 -07:00
ccurme
adf2dc13de community: fix lint (#23611) 2024-06-27 22:12:16 +00:00
Bagatur
ef0593db58 docs: tool call run model (#23609) 2024-06-27 22:02:12 +00:00
Leonid Ganeline
75a44fe951 core: chat_* docstrings (#23412)
Added missed docstrings. Formatted docstrings to the consistent form.
2024-06-27 17:29:38 -04:00
Bagatur
3b1fcb2a65 chroma[patch]: Release 0.1.2 (#23604) 2024-06-27 13:58:24 -07:00
Eugene Yurtsev
68f348357e community[patch]: Test InMemoryVectorStore with RWAPI test suite (#23603)
Add standard test suite to InMemoryVectorStore implementation.
2024-06-27 16:43:43 -04:00
Eugene Yurtsev
da7beb1c38 core[patch]: Add unit test when catching generator exit (#23402)
This pr adds a unit test for:
https://github.com/langchain-ai/langchain/pull/22662
And narrows the scope where the exception is caught.
2024-06-27 20:36:07 +00:00
NG Sai Prasanth
5e6d23f27d community: Standardise tool import for arxiv & semantic scholar (#23578)
- **Description:** Fixing the way users have to import Arxiv and
Semantic Scholar
- **Issue:** Changed to use `from langchain_community.tools.arxiv import
ArxivQueryRun` instead of `from langchain_community.tools.arxiv.tool
import ArxivQueryRun`
    - **Dependencies:** None
    - **Twitter handle:** Nope
2024-06-27 16:35:50 -04:00
ccurme
d04f657424 langchain[patch]: deprecate ConversationChain (#23504)
Would like some feedback on how to best incorporate legacy memory
objects into `RunnableWithMessageHistory`.
2024-06-27 16:32:44 -04:00
Ayo Ayibiowu
c6f700b7cb fix(community): allow support for disabling max_tokens args (#21534)
This PR fixes an issue where it was not possible to use unlimited/infinite
tokens from the respective provider when using the LiteLLM provider.

This is an issue when working in an agent environment, where the token
usage can drastically increase beyond the initially set value, causing
unexpected behavior.
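
A minimal usage sketch (model name is illustrative):

```python
from langchain_community.chat_models import ChatLiteLLM

# Sketch: with this fix, max_tokens=None means "no explicit cap", deferring
# the limit to the underlying provider.
llm = ChatLiteLLM(model="gpt-3.5-turbo", max_tokens=None)
```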
2024-06-27 16:28:59 -04:00
WU LIFU
2a0d6788f7 docs[patch]: extraction_examples fix the examples given to the llm (#23393)
Description: currently the
[doc](https://python.langchain.com/v0.2/docs/how_to/extraction_examples/)
sets "Data" as the LLM's structured output schema, however the
examples given to the LLM output "Person", which confuses the LLM and
might occasionally cause it to return "Person" as the function to call

issue: #23383

Co-authored-by: Lifu Wu <lifu@nextbillion.ai>
2024-06-27 16:22:26 -04:00
Leonid Ganeline
c0fdbaac85 langchain: docstrings in agents root (#23561)
Added missed docstrings. Formatted docstrings to the consistent form.
2024-06-27 15:52:18 -04:00
Leonid Ganeline
b64c4b4750 langchain: docstrings agents nested (#23598)
Added missed docstrings. Formatted docstrings to the consistent form.

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-06-27 19:49:41 +00:00
mackong
70834cd741 community[patch]: support convert FunctionMessage for Tongyi (#23569)
**Description:** For a function-call agent with Tongyi, the
AgentAction will be converted to a FunctionMessage by

47f69fe0d8/libs/core/langchain_core/agents.py (L188)
But Tongyi's *convert_message_to_dict* currently doesn't support
FunctionMessage

47f69fe0d8/libs/community/langchain_community/chat_models/tongyi.py (L184-L207)
so the next round of the conversation will fail with a *TypeError*
exception.

This patch adds the support to convert FunctionMessage for Tongyi.

**Issue:** N/A
**Dependencies:** N/A
2024-06-27 15:49:26 -04:00
Bagatur
d45ece0e58 chroma[patch]: loosen py req (#23599)
currently causes issues if you try adding to a project that supports
py<4
2024-06-27 12:40:59 -07:00
Mohammad Mohtashim
4796b7eb15 [Community [HuggingFace]]: Small Fix for ChatHuggingFace. (#22925)
- **Description:** A small fix where I moved the `available_endpoints`
in order to avoid the token error in the issue below. I have also added a
conftest file and updated the `scipy` and `numpy` versions to support newer
Python versions in the poetry files.
- **Issue:** #22804

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: ccurme <chester.curme@gmail.com>
2024-06-27 19:37:20 +00:00
Jacob Lee
644723adda docs[patch]: Add search keyword, update contribution guide (#23602)
CC @vbarda @hinthornw
2024-06-27 12:36:02 -07:00
ccurme
bffc3c24a0 openai[patch]: release 0.1.11 (#23596) 2024-06-27 18:48:40 +00:00
ccurme
a1520357c8 openai[patch]: revert addition of "name" to supported properties for tool messages (#23600) 2024-06-27 18:40:04 +00:00
joshc-ai21
16a293cc3a Small bug fixes (#23353)
Small bug fixes according to your comments

---------

Signed-off-by: Joffref <mariusjoffre@gmail.com>
Signed-off-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Baskar Gopinath <73015364+baskargopinath@users.noreply.github.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
Co-authored-by: Mathis Joffre <51022808+Joffref@users.noreply.github.com>
Co-authored-by: Baur <baur.krykpayev@gmail.com>
Co-authored-by: Nuradil <nuradil.maksut@icloud.com>
Co-authored-by: Nuradil <133880216+yaksh0nti@users.noreply.github.com>
Co-authored-by: Jacob Lee <jacoblee93@gmail.com>
Co-authored-by: Rave Harpaz <rave.harpaz@oracle.com>
Co-authored-by: RHARPAZ <RHARPAZ@RHARPAZ-5750.us.oracle.com>
Co-authored-by: Arthur Cheng <arthur.cheng@oracle.com>
Co-authored-by: Tomaz Bratanic <bratanic.tomaz@gmail.com>
Co-authored-by: RUO <61719257+comsa33@users.noreply.github.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Luis Rueda <userlerueda@gmail.com>
Co-authored-by: Jib <Jibzade@gmail.com>
Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
Co-authored-by: S M Zia Ur Rashid <smziaurrashid@gmail.com>
Co-authored-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
Co-authored-by: yuncliu <lyc1990@qq.com>
Co-authored-by: wenngong <76683249+wenngong@users.noreply.github.com>
Co-authored-by: gongwn1 <gongwn1@lenovo.com>
Co-authored-by: Mirna Wong <89008547+mirnawong1@users.noreply.github.com>
Co-authored-by: Rahul Triptahi <rahul.psit.ec@gmail.com>
Co-authored-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
Co-authored-by: maang-h <55082429+maang-h@users.noreply.github.com>
Co-authored-by: asafg <asafg@ai21.com>
Co-authored-by: Asaf Joseph Gardin <39553475+Josephasafg@users.noreply.github.com>
2024-06-27 17:58:22 +00:00
panwg3
9308bf32e5 spelling errors in words (#23559)

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-06-27 17:16:22 +00:00
clement.l
182fc06769 docs: Fix typo in LLMChain tutorial (#23593)
When using `model_with_tools.invoke`, the `content` returns as an empty
string.
For more details, please refer to my [trace
log](https://smith.langchain.com/public/6fd24bc4-86c4-4627-8565-9a8adaf4ad7d/r).
2024-06-27 17:01:05 +00:00
ccurme
5536420bee openai[patch]: add comment (#23595)
Forgot to push this to
https://github.com/langchain-ai/langchain/pull/23551
2024-06-27 16:47:14 +00:00
andrewmjc
9f0f3c7e29 partners[openai]: Add name field to tool message to match OpenAI spec (#23551)
Discovered alongside @t968914

  - **Description:**
According to OpenAI docs, tool messages (response from calling tools)
must have a 'name' field.

https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models

  - **Issue:** N/A (as of right now)
  - **Dependencies:** N/A
  - **Twitter handle:** N/A

2024-06-27 12:42:36 -04:00
Krista Pratico
85e36b0f50 partners[openai]: only add stream_options to kwargs if requested (#23552)
- **Description:** This PR
https://github.com/langchain-ai/langchain/pull/22854 added the ability
to pass `stream_options` through to the openai service to get token
usage information in the response. Currently OpenAI supports this
parameter, but Azure OpenAI does not yet. For users who proxy their
calls to both services through ChatOpenAI, this breaks when targeting
Azure OpenAI (see related discussion opened in openai-python:
https://github.com/openai/openai-python/issues/1469#issuecomment-2192658630).

> Error code: 400 - {'error': {'code': None, 'message': 'Unrecognized
request argument supplied: stream_options', 'param': None, 'type':
'invalid_request_error'}}

This PR fixes the issue by only adding `stream_options` to the request
if it's actually requested by the user (i.e. set to True). If I'm not
mistaken, we have a test case that already covers this scenario:
https://github.com/langchain-ai/langchain/blob/master/libs/partners/openai/tests/integration_tests/chat_models/test_base.py#L398-L399

- **Issue:** Issue opened in openai-python:
https://github.com/openai/openai-python/issues/1469
  - **Dependencies:** N/A
  - **Twitter handle:** N/A

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-06-27 12:23:05 -04:00
Eugene Yurtsev
96b72edac8 core[minor]: Add optional ID field to Document schema (#23411)
This PR adds an optional ID field to the document schema.

# 1. Optional or Required

- An optional field will require additional checking for the type
in user code (annoying).
- However, vectorstores currently don't respect this field. So if we
make it required and start returning random UUIDs, that might be even more
confusing to users.


**Proposal**: Start with Optional and convert to Required (with default
set to uuid4()) in 1-2 major releases.


# 2. Override __str__ or generic solution in prompts

Overriding __str__ as a simple way to avoid changing user code that
relies on
default str(document) in prompts. 


I considered rolling out a more general solution in prompts
(https://github.com/langchain-ai/langchain/pull/8685),
but to do that we need to:

1. Make things serializable
2. The more general solution would likely need to be backwards
compatible as well
3. It's unclear that one wants to format a List[int] in the same way as
List[Document]. The former should be `,` separated (likely), the latter
   should be `---` separated (likely).


**Proposal** Start with __str__ override and focus on the vectorstore
APIs, we generalize prompts later
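
A minimal sketch of the proposed usage, assuming the field is exposed as a constructor kwarg:

```python
from uuid import uuid4

from langchain_core.documents import Document

# Sketch: id is optional, so existing call sites keep working without it.
doc = Document(page_content="hello world", id=str(uuid4()))
print(doc.id)
print(str(doc))  # __str__ is overridden so prompt formatting stays unchanged
```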
2024-06-27 12:15:58 -04:00
ccurme
5bfcb898ad openai[patch]: bump sdk version (#23592)
Tests failing with `TypeError: Completions.create() got an unexpected
keyword argument 'parallel_tool_calls'`
2024-06-27 11:57:24 -04:00
Jacob Lee
60fc15a56b docs[patch]: Update docs introduction and README (#23558)
CC @hwchase17 @baskaryan
2024-06-27 08:51:43 -07:00
panwg3
2445b997ee Correction of incorrect words (#23557)
2024-06-27 15:13:15 +00:00
Aditya
6721b991ab docs: realigned sections for langchain-google-vertexai (#23584)
- **Description:** Re-aligned sections in documentation of Vertex AI
LLMs
    - **Issue:** NA
    - **Dependencies:** NA
    - **Twitter handle:**NA

---------

Co-authored-by: adityarane@google.com <adityarane@google.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
2024-06-27 10:42:32 -04:00
mackong
daf733b52e langchain[minor]: fix comment typo (#23564)
**Description:** fix typo of comment
**Issue:** N/A
**Dependencies:** N/A
2024-06-27 10:09:18 -04:00
Jacob Lee
47f69fe0d8 docs[patch]: Add ReAct agent conceptual guide, improve search (#23554)
@baskaryan
2024-06-26 19:02:03 -07:00
Jacob Lee
672fcbb8dc docs[patch]: Fix bad link format (#23553) 2024-06-26 16:43:26 -07:00
Jacob Lee
13254715a2 docs[patch]: Update installation guide with diagram (#23548)
CC @baskaryan
2024-06-26 15:10:22 -07:00
Leonid Ganeline
2c9b84c3a8 core[patch]: docstrings agents (#23502)
Added missed docstrings. Formatted docstrings to the consistent form.
2024-06-26 17:50:48 -04:00
Jacob Lee
79d8556c22 docs[patch]: Address feedback from docs users (#23550)
- Updates chat few shot prompt tutorial to show off a more cohesive
example
- Fix async Chromium loader guide
- Fix Excel loader install instructions
- Reformat Html2Text page
- Add install instructions to Azure OpenAI embeddings page
- Add missing dep install to SQL QA tutorial

@baskaryan
2024-06-26 14:47:01 -07:00
Leonid Ganeline
2a5d59b3d7 core[patch]: callbacks docstrings (#23375)
Added missed docstrings. Formatted docstrings to the consistent form.
2024-06-26 17:11:06 -04:00
Leonid Ganeline
1141b08eb8 core: docstrings example_selectors (#23542)
Added missed docstrings. Formatted docstrings to the consistent form.
2024-06-26 17:10:40 -04:00
wenngong
3bf1d98dbf langchain[patch]: update agent and chains modules root_validators (#23256)
Description: update agent and chains modules Pydantic root_validators.
Issue: the issue #22819

---------

Co-authored-by: gongwn1 <gongwn1@lenovo.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2024-06-26 17:09:50 -04:00
Bagatur
a7ab93479b anthropic[patch]: Release 0.1.16 (#23549) 2024-06-26 20:49:13 +00:00
Jib
c0fcf76e93 LangChain-MongoDB: [Experimental] Driver-side index creation helper (#19359)
## Description
Created a helper method to make vector search indexes via client-side
pymongo.

**Recent Update** -- Removed error suppressing/overwriting layer in
favor of letting the original exception provide information.

## ToDo's
- [x] Make _wait_untils for integration test delete index
functionalities.
- [x] Add documentation for its use. Highlight it's experimental
- [x] Post Integration Test Results in a screenshot
- [x] Get review from MongoDB internal team (@shaneharvey, @blink1073 ,
@NoahStapp , @caseyclements)



- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. Added new integration tests. Not eligible for unit testing since the
operation is Atlas Cloud specific.
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.

![image](https://github.com/langchain-ai/langchain/assets/2887713/a3fc8ee1-e04c-4976-accc-fea0eeae028a)


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
2024-06-26 15:07:28 -04:00
Jacob Lee
b1dfb8ea1e docs[patch]: Update contribution guides (#23382)
CC @vbarda @hwchase17
2024-06-26 11:12:41 -07:00
maang-h
5070004e8a docs: Update Tongyi ChatModel docstring (#23540)
- **Description:** Update Tongyi ChatModel rich docstring
- **Issue:** the issue #22296
2024-06-26 13:07:13 -04:00
Nuradil
2f976c5174 community: fix code example in ZenGuard docs (#23541)
Thank you for contributing to LangChain!

- [X] **PR title**: "community: fix code example in ZenGuard docs"


- [X] **PR message**: 
- **Description:** corrected the docs by indicating in the code example
that the tool accepts a list of prompts instead of just one


- [X] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

Thank you for review

---------

Co-authored-by: Baur <baur.krykpayev@gmail.com>
2024-06-26 13:05:59 -04:00
yonarw
6d0ebbca1e community: SAP HANA Vector Engine fix for latest HANA release (#23516)
- **Description:** This PR fixes an issue with the SAP HANA Cloud QRC03
version. In that version the number indicating that no length has been set
for a vector column changed from -1 to 0. The change in this PR supports both
behaviours (old/new).
- **Dependencies:** No dependencies have been introduced.

- **Tests**:  The change is covered by previous unit tests.
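
The compatibility check amounts to something like this sketch:

```python
# Sketch: both the old (-1) and the new (0) sentinel mean "no length set"
# for the vector column.
def vector_length_is_unset(length: int) -> bool:
    return length in (-1, 0)
```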
2024-06-26 13:15:51 +00:00
Roman Solomatin
1e3e05b0c3 openai[patch]: add support for extra_body (#23404)
**Description:** Add support for passing the extra_body parameter.

Some OpenAI-compatible APIs have additional parameters (for example
[vLLM](https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html#extra-parameters))
that can be passed through `extra_body`. Same question in
https://github.com/openai/openai-python/issues/767

2024-06-26 13:11:59 +00:00
Alireza Kashani
c39521b70d Update grobid.py (#23399)
fixed potential `IndexError: list index out of range` in case there is
no title

2024-06-26 09:11:02 -04:00
Qingchuan Hao
ee282a1d2e community: add missing link (#23526) 2024-06-26 09:06:28 -04:00
Lincoln Stein
c314222796 Add a conversation memory that combines a (optionally persistent) vectorstore history with a token buffer (#22155)
**langchain: ConversationVectorStoreTokenBufferMemory**

-**Description:** This PR adds ConversationVectorStoreTokenBufferMemory.
It is similar in concept to ConversationSummaryBufferMemory. It
maintains an in-memory buffer of messages up to a preset token limit.
After the limit is hit timestamped messages are written into a
vectorstore retriever rather than into a summary. The user's prompt is
then used to retrieve relevant fragments of the previous conversation.
By persisting the vectorstore, one can maintain memory from session to
session.
-**Issue:** n/a
-**Dependencies:** none
-**Twitter handle:** Please no!!!
- [X] **Add tests and docs**: I looked to see how the unit tests were
written for the other ConversationMemory modules, but couldn't find
anything other than a test for successful import. I need to know whether
you are using pytest.mock or another fixture to simulate the LLM and
vectorstore. In addition, I would like guidance on where to place the
documentation. Should it be a notebook file in docs/docs?

- [X] **Lint and test**: I am seeing some linting errors from a couple
of modules unrelated to this PR.

If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, ccurme, vbarda, hwchase17.
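
A minimal usage sketch of the memory described above; the constructor argument names are assumptions based on this description and may differ, and `llm` / `vectorstore` are expected to exist already:

```python
from langchain.memory import ConversationVectorStoreTokenBufferMemory

# Sketch only: argument names follow the description above and may differ.
memory = ConversationVectorStoreTokenBufferMemory(
    llm=llm,                               # used to count tokens in the buffer
    retriever=vectorstore.as_retriever(),  # overflow messages are stored here
    max_token_limit=1000,                  # buffer size before spilling over
)
```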

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
2024-06-25 20:17:10 -07:00
Bagatur
32f8f39974 core[patch]: use args_schema doc for tool description (#23503) 2024-06-25 15:26:35 -07:00
ccurme
6f7fe82830 text-splitters: release 0.2.2 (#23508) 2024-06-25 18:26:05 -04:00
ccurme
62b16fcc6b experimental: release 0.0.62 (#23507) 2024-06-25 22:01:35 +00:00
ccurme
99ce84ef23 community: release 0.2.6 (#23501) 2024-06-25 21:29:52 +00:00
ccurme
03c41e725e langchain: release 0.2.6 (#23426) 2024-06-25 21:03:41 +00:00
ccurme
86ca44d451 core: release 0.2.10 (#23420) 2024-06-25 16:26:31 -04:00
Isaac Francisco
85f5d14cef [docs]: split up tool docs (#22919) 2024-06-25 13:15:08 -07:00
ccurme
f788d0982d docs: update trim messages guide (#23418)
- rerun to remove warnings following
https://github.com/langchain-ai/langchain/pull/23363
- `raise` -> `return`
2024-06-25 19:50:53 +00:00
ccurme
c9619349d6 docs: rerun chatbot tutorial to remove warnings (#23417) 2024-06-25 19:26:54 +00:00
Nuradil
c93d9e66e4 Community: Update and fix ZenGuardTool docs and add ZenguardTool to init files (#23415)
Thank you for contributing to LangChain!

- [x] **PR title**: "community: update docs and add tool to init.py"

- [x] **PR message**: 
- **Description:** Fixed some errors and comments in the docs and added
our ZenGuardTool and additional classes to init.py for easy access when
importing
- **Question:** when will you update the langchain-community package in
pypi to make our tool available?


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

Thank you for review!

---------

Co-authored-by: Baur <baur.krykpayev@gmail.com>
2024-06-25 19:26:32 +00:00
William FH
8955bc1866 [Core] Logging: Suppress missing parent warning (#23363) 2024-06-25 14:57:23 -04:00
ccurme
730c551819 core[patch]: export tool output parsers from langchain_core.output_parsers (#23305)
These currently read off AIMessage.tool_calls, and only fall back to
OpenAI parsing if tool calls aren't populated.

Importing these from `openai_tools` (e.g., in our [tool calling
docs](https://python.langchain.com/v0.2/docs/how_to/tool_calling/#tool-calls))
can lead to confusion.

After landing, would need to release core and update docs.
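
For illustration, the intended core-level import (sketch):

```python
# After this change, the tool-call output parsers are importable from core;
# they read AIMessage.tool_calls first and only fall back to OpenAI-specific
# parsing when tool_calls is not populated.
from langchain_core.output_parsers import JsonOutputToolsParser, PydanticToolsParser
```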
2024-06-25 14:40:42 -04:00
Eugene Yurtsev
7e9e69c758 core[patch]: Add unit test for str and repr for Document (#23414) 2024-06-25 18:28:21 +00:00
Bagatur
f055f2a1e3 infra: install integration deps as needed (#23413) 2024-06-25 11:17:43 -07:00
Bagatur
92ac0fc9bd openai[patch]: Release 0.1.10 (#23410) 2024-06-25 17:40:02 +00:00
Bagatur
fb3df898b5 docs: Update README.md (#23409) 2024-06-25 17:35:00 +00:00
Bagatur
9d145b9630 openai[patch]: fix tool calling token counting (#23408)
Resolves https://github.com/langchain-ai/langchain/issues/23388
2024-06-25 10:34:25 -07:00
Tomaz Bratanic
22fa32e164 LLM Graph transformer dealing with empty strings (#23368)
Pydantic allows empty strings:

```
from langchain.pydantic_v1 import Field, BaseModel

class Property(BaseModel):
  """A single property consisting of key and value"""
  key: str = Field(..., description="key")
  value: str = Field(..., description="value")

x = Property(key="", value="")
```

This can produce errors downstream, so we simply ignore those records.
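
Continuing the snippet above, the filtering idea is roughly:

```python
# Sketch: drop any extracted property whose key or value is an empty string
# before building the graph document.
properties = [Property(key="genre", value="baroque"), Property(key="", value="")]
properties = [p for p in properties if p.key and p.value]
```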
2024-06-25 13:01:53 -04:00
Rajendra Kadam
d3520a784f docs: Added providers page for Pebblo and docs for PebbloRetrievalQA (#20746)
- **Description:** Added providers page for Pebblo and docs for
PebbloRetrievalQA
- **Issue:** NA
- **Dependencies:** None
- **Unit tests**: NA
2024-06-25 12:46:11 -04:00
clement.l
a75b32a54a docs: Fix typo in LLMChain tutorial (#23380)
Description: Fix a typo
Issue: n/a
Dependencies: None
Twitter handle:
2024-06-25 13:03:24 +00:00
Riccardo Schirone
4530d851e4 Merge pull request #22662
* core: runnables: special handling GeneratorExit because no error
2024-06-25 08:42:03 -04:00
Qingchuan Hao
ad50702934 community: add default value to bing_search_url (#23306)
bing_search_url is the endpoint used to request the Bing search resource and is
normally invariant for users, so we can give it a default value to simplify
the usage of this utility/tool
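
A minimal usage sketch with the default endpoint (the key is a placeholder; it can also come from the BING_SUBSCRIPTION_KEY environment variable):

```python
from langchain_community.utilities import BingSearchAPIWrapper

# Sketch: with the default endpoint baked in, only the subscription key is needed.
search = BingSearchAPIWrapper(bing_subscription_key="<your-key>", k=3)
results = search.run("LangChain")
```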
2024-06-25 08:08:41 -04:00
ccurme
68e0ae3286 langchain[patch]: update removal target for LLMChain (#23373)
to 1.0

Also improve replacement example in docstring.
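
For illustration, the replacement pattern is roughly (model choice is illustrative):

```python
from langchain_core.prompts import PromptTemplate
from langchain_openai import OpenAI

# Sketch of the recommended replacement: compose the prompt and the model
# directly instead of wrapping them in LLMChain.
prompt = PromptTemplate.from_template("Tell me a joke about {topic}")
chain = prompt | OpenAI()
print(chain.invoke({"topic": "bears"}))
```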
2024-06-24 21:51:29 +00:00
wenngong
b33d2346db community: FlashrankRerank support loading customer client (#23350)
Description: FlashrankRerank Document compressor support loading
customer client
Issue: #23338

Co-authored-by: gongwn1 <gongwn1@lenovo.com>
2024-06-24 17:50:08 -04:00
maang-h
f58c40b4e3 docs: Update QianfanChatEndpoint ChatModel docstring (#23337)
- **Description:** Update QianfanChatEndpoint ChatModel rich docstring
- **Issue:** the issue #22296
2024-06-24 17:42:46 -04:00
Rahul Triptahi
9ef93ecd7c community[minor]: Added classification_location parameter in PebbloSafeLoader. (#22565)
Description: Add classifier_location feature flag. This flag enables
Pebblo to decide the classifier location, local or pebblo-cloud.
Unit Tests: N/A
Documentation: N/A

---------

Signed-off-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
Co-authored-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
2024-06-24 17:30:38 -04:00
Mirna Wong
2115fb76de Replace llm variable with model (#23280)
The code snippet under ‘pdfs_qa’ contains a small incorrect code
example, resulting in users getting errors. This PR replaces the ‘llm’
variable with ‘model’ to help users avoid a NameError.

Resolves #22689


---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-06-24 17:08:02 -04:00
wenngong
af620db9c7 partners: add lint docstrings for azure-dynamic-sessions/together modules (#23303)
Description: add lint docstrings for azure-dynamic-sessions/together
modules
Issue: #23188 @baskaryan

test: ruff check passed.
<img width="782" alt="image"
src="https://github.com/langchain-ai/langchain/assets/76683249/bf11783d-65b3-4e56-a563-255eae89a3e4">

---------

Co-authored-by: gongwn1 <gongwn1@lenovo.com>
2024-06-24 16:26:54 -04:00
yuncliu
398b2b9c51 community[minor]: Add Ascend NPU optimized Embeddings (#20260)
- **Description:** Add NPU support for embeddings

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-24 20:15:11 +00:00
Ikko Eltociear Ashimine
7b1066341b docs: update sql_query_checking.ipynb (#23345)
creat -> create
2024-06-24 16:03:32 -04:00
S M Zia Ur Rashid
d5b2a93c6d package: security update urllib3 to @1.26.19 (#23366)
urllib3 version update 1.26.18 to 1.26.19 to address a security
vulnerability.

**Reference:**
https://security.snyk.io/vuln/SNYK-PYTHON-URLLIB3-7267250
2024-06-24 19:44:39 +00:00
Jacob Lee
57c13b4ef8 docs[patch]: Fix typo in how to guide for message history (#23364) 2024-06-24 15:43:05 -04:00
Luis Rueda
168e9ed3a5 partners: add custom options to MongoDBChatMessageHistory (#22944)
**Description:** Adds options for configuring MongoDBChatMessageHistory
(no breaking changes):
- session_id_key: name of the field that stores the session id
- history_key: name of the field that stores the chat history
- create_index: whether to create an index on the session id field
- index_kwargs: additional keyword arguments to pass to the index
creation

**Discussion:**
https://github.com/langchain-ai/langchain/discussions/22918
**Twitter handle:** @userlerueda
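
A usage sketch with the new options (connection string and field names are placeholders):

```python
from langchain_mongodb import MongoDBChatMessageHistory

history = MongoDBChatMessageHistory(
    connection_string="mongodb://localhost:27017/",
    session_id="user-42",
    database_name="chat_db",
    collection_name="chat_histories",
    session_id_key="sid",                # field that stores the session id
    history_key="messages",              # field that stores the chat history
    create_index=True,                   # create an index on the session id field
    index_kwargs={"name": "sid_index"},  # extra kwargs for index creation
)
history.add_user_message("hello")
```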

---------

Co-authored-by: Jib <Jibzade@gmail.com>
Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2024-06-24 19:42:56 +00:00
Eugene Yurtsev
1e750f12f6 standard-tests[minor]: Add standard read write test suite for vectorstores (#23355)
Add standard read write test suite for vectorstores
2024-06-24 19:40:56 +00:00
Eugene Yurtsev
3b3ed72d35 standard-tests[minor]: Add standard tests for BaseStore (#23360)
Add standard tests to base store abstraction. These only work on [str,
str] right now. We'll need to check if it's possible to add
encoder/decoders to generalize
2024-06-24 19:38:50 +00:00
ccurme
e1190c8f3c mongodb[patch]: fix CI for python 3.12 (#23369) 2024-06-24 19:31:20 +00:00
RUO
2b87e330b0 community: fix issue with nested field extraction in MongodbLoader (#22801)
**Description:** 
This PR addresses an issue in the `MongodbLoader` where nested fields
were not being correctly extracted. The loader now correctly handles
nested fields specified in the `field_names` parameter.

**Issue:** 
Fixes an issue where attempting to extract nested fields from MongoDB
documents resulted in `KeyError`.

**Dependencies:** 
No new dependencies are required for this change.

**Twitter handle:** 
(Optional, your Twitter handle if you'd like a mention when the PR is
announced)

### Changes
1. **Field Name Parsing**:
- Added logic to parse nested field names and safely extract their
values from the MongoDB documents.

2. **Projection Construction**:
- Updated the projection dictionary to include nested fields correctly.

3. **Field Extraction**:
- Updated the `aload` method to handle nested field extraction using a
recursive approach to traverse the nested dictionaries.

### Example Usage
Updated usage example to demonstrate how to specify nested fields in the
`field_names` parameter:

```python
loader = MongodbLoader(
    connection_string=MONGO_URI,
    db_name=MONGO_DB,
    collection_name=MONGO_COLLECTION,
    filter_criteria={"data.job.company.industry_name": "IT", "data.job.detail": { "$exists": True }},
    field_names=[
        "data.job.detail.id",
        "data.job.detail.position",
        "data.job.detail.intro",
        "data.job.detail.main_tasks",
        "data.job.detail.requirements",
        "data.job.detail.preferred_points",
        "data.job.detail.benefits",
    ],
)

docs = loader.load()
print(len(docs))
for doc in docs:
    print(doc.page_content)
```
### Testing
Tested with a MongoDB collection containing nested documents to ensure
that the nested fields are correctly extracted and concatenated into a
single page_content string.
### Note
This change ensures backward compatibility for non-nested fields and
improves functionality for nested field extraction.
### Output Sample
```python
print(docs[:3])
```
```shell
# output sample:
[
    Document(
        # Here in this example, page_content is the combined text from the fields below
        # "position", "intro", "main_tasks", "requirements", "preferred_points", "benefits"
        page_content='all combined contents from the requested fields in the document',
        metadata={'database': 'Your Database name', 'collection': 'Your Collection name'}
    ),
    ...
]
```

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-24 19:29:11 +00:00
Tomaz Bratanic
aeeda370aa Sanitize backticks from neo4j labels and types for import (#23367) 2024-06-24 19:05:31 +00:00
Jacob Lee
d2db561347 docs[patch]: Adds callout in LLM concept docs, remove deprecated code (#23361)
CC @baskaryan @hwchase17
2024-06-24 12:03:18 -07:00
Rave Harpaz
f5ff7f178b Add OCI Generative AI new model support (#22880)
- [x] PR title: 
community: Add OCI Generative AI new model support
 
- [x] PR message:
- Description: adding support for new models offered by OCI Generative
AI services. This is a moderate update of our initial integration PR
16548 and includes a new integration for our chat models under
/langchain_community/chat_models/oci_generative_ai.py
    - Issue: NA
- Dependencies: No new Dependencies, just latest version of our OCI sdk
    - Twitter handle: NA


- [x] Add tests and docs: 
  1. we have updated our unit tests
2. we have updated our documentation including a new ipynb for our new
chat integration


- [x] Lint and test: 
 `make format`, `make lint`, and `make test` run successfully

---------

Co-authored-by: RHARPAZ <RHARPAZ@RHARPAZ-5750.us.oracle.com>
Co-authored-by: Arthur Cheng <arthur.cheng@oracle.com>
2024-06-24 14:48:23 -04:00
Jacob Lee
753edf9c80 docs[patch]: Update chatbot tools how-to guide (#23362) 2024-06-24 11:46:06 -07:00
Baur
aa358f2be4 community: Add ZenGuard tool (#22959)
** Description**
This is the community integration of ZenGuard AI - the fastest
guardrails for GenAI applications. ZenGuard AI protects against:

- Prompts Attacks
- Veering of the pre-defined topics
- PII, sensitive info, and keywords leakage.
- Toxicity
- Etc.

**Twitter Handle** : @zenguardai

- [x] **Add tests and docs**: If you're adding a new integration, please
include
  1. Added an integration test
  2. Added colab


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified.

---------

Co-authored-by: Nuradil <nuradil.maksut@icloud.com>
Co-authored-by: Nuradil <133880216+yaksh0nti@users.noreply.github.com>
2024-06-24 17:40:56 +00:00
Mathis Joffre
60103fc4a5 community: Fix OVHcloud 401 Unauthorized on embedding. (#23260)
They are now rejecting with code 401 calls from users with expired or
invalid tokens (while before they were being considered anonymous).
Thus, the authorization header has to be removed when there is no token.

Related to: #23178

---------

Signed-off-by: Joffref <mariusjoffre@gmail.com>
2024-06-24 12:58:32 -04:00
Baskar Gopinath
4964ba74db Update multimodal_prompts.ipynb (#23301)
fixes #23294

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-06-24 15:58:51 +00:00
Eugene Yurtsev
d90379210a standard-tests[minor]: Add standard tests for cache (#23357)
Add standard tests for cache abstraction
2024-06-24 15:15:03 +00:00
Leonid Ganeline
987099cfcd community: toolkits docstrings (#23286)
Added missed docstrings. Formatted docstrings to the consistent form.

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-06-22 14:37:52 +00:00
Rahul Triptahi
0cd3f93361 Enhance metadata of sharepointLoader. (#22248)
Description: 2 feature flags added to SharePointLoader in this PR:

1. load_auth: if set to True, adds authorised identities to metadata
2. load_extended_metadata, adds source, owner and full_path to metadata

Unit tests:N/A
Documentation: To be done.

---------

Signed-off-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
Co-authored-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
2024-06-21 17:03:38 -07:00
Yuki Watanabe
5d4133d82f community: Overhaul Databricks provider documentation (#23203)
**Description**: Update [Databricks
Provider](https://python.langchain.com/v0.2/docs/integrations/providers/databricks/)
documentations to the latest component notebooks and draw better
navigation path to related notebooks.

---------

Signed-off-by: B-Step62 <yuki.watanabe@databricks.com>
2024-06-21 16:57:35 -07:00
Bagatur
bcac6c3aff openai[patch]: temp fix ignore lint (#23290) 2024-06-21 16:52:52 -07:00
William FH
efb4c12abe [Core] Add support for inferring Annotated types (#23284)
in bind_tools() / convert_to_openai_function
2024-06-21 15:16:30 -07:00
Vadym Barda
9ac302cb97 core[minor]: update draw_mermaid node label processing (#23285)
This fixes processing issue for nodes with numbers in their labels (e.g.
`"node_1"`, which would previously be relabeled as `"node__"`, and now
are correctly processed as `"node_1"`)
2024-06-21 21:35:32 +00:00
Rajendra Kadam
7ee2822ec2 community: Fix TypeError in PebbloRetrievalQA (#23170)
**Description:** 
Fix "`TypeError: 'NoneType' object is not iterable`" when the
auth_context is absent in PebbloRetrievalQA. The auth_context is
optional; hence, PebbloRetrievalQA should work without it, but it throws
an error at the moment. This PR fixes that issue.

**Issue:** NA
**Dependencies:** None
**Unit tests:** NA

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-06-21 17:04:00 -04:00
Iurii Umnov
3b7b933aa2 community[minor]: OpenAPI agent. Add support for PUT, DELETE and PATCH (#22962)
**Description**: Add PUT, DELETE and PATCH tools to tool list for
OpenAPI agent if dangerous requests are allowed.

**Issue**: https://github.com/langchain-ai/langchain/issues/20469
2024-06-21 20:44:23 +00:00
Guangdong Liu
3c42bf8d97 community(patch):Fix PineconeHynridSearchRetriever not having search_kwargs (#21577)
- close #21521
2024-06-21 16:27:52 -04:00
Rahul Triptahi
4bb3d5c488 [community][quick-fix]: changed from blob.path to blob.path.name in 0365BaseLoader. (#22287)
Description: file_metadata_ was not getting propagated to returned
documents. Changed the lookup key to the name of the blob's path.
Changed blob.path key to blob.path.name for metadata_dict key lookup.
Documentation: N/A
Unit tests: N/A

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-06-21 15:51:03 -04:00
Bagatur
f824f6d925 docs: fix merge message runs docstring (#23279) 2024-06-21 19:50:50 +00:00
wenngong
f9aea3db07 partners: add lint docstrings for chroma module (#23249)
Description: add lint docstrings for chroma module
Issue: the issue #23188 @baskaryan

test:  ruff check passed.


![image](https://github.com/langchain-ai/langchain/assets/76683249/5e168a0c-32d0-464f-8ddb-110233918019)

---------

Co-authored-by: gongwn1 <gongwn1@lenovo.com>
2024-06-21 19:49:24 +00:00
Bagatur
9eda8f2fe8 docs: fix trim_messages code blocks (#23271) 2024-06-21 17:15:31 +00:00
Jacob Lee
86326269a1 docs[patch]: Adds prereqs to trim messages (#23270)
CC @baskaryan
2024-06-21 10:09:41 -07:00
Bagatur
4c97a9ee53 docs: fix message transformer docstrings (#23264) 2024-06-21 16:10:03 +00:00
Vwake04
0deb98ac0c pinecone: Fix multiprocessing issue in PineconeVectorStore (#22571)
**Description:**

Currently, the `langchain_pinecone` library forces the `async_req`
(asynchronous required) argument to Pinecone to `True`. This design
choice causes problems when deploying to environments that do not
support multiprocessing, such as AWS Lambda. In such environments, this
restriction can prevent users from successfully using
`langchain_pinecone`.

This PR introduces a change that allows users to specify whether they
want to use asynchronous requests by passing the `async_req` parameter
through `**kwargs`. By doing so, users can set `async_req=False` to
utilize synchronous processing, making the library compatible with AWS
Lambda and other environments that do not support multithreading.

**Issue:**
This PR does not address a specific issue number but aims to resolve
compatibility issues with AWS Lambda by allowing synchronous processing.

**Dependencies:**
None, that I'm aware of.
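
A minimal sketch of the new behavior (embedding model and index name are placeholders; `async_req` is forwarded through kwargs):

```python
from langchain_openai import OpenAIEmbeddings
from langchain_pinecone import PineconeVectorStore

# Sketch: in environments without multiprocessing support (e.g. AWS Lambda),
# disable asynchronous upserts by forwarding async_req=False.
vectorstore = PineconeVectorStore.from_texts(
    ["hello world"],
    embedding=OpenAIEmbeddings(),
    index_name="my-index",
    async_req=False,
)
```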

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-06-21 15:46:01 +00:00
ccurme
75c7c3a1a7 openai: release 0.1.9 (#23263) 2024-06-21 11:15:29 -04:00
Brace Sproul
abe7566d7d core[minor]: BaseChatModel with_structured_output implementation (#22859) 2024-06-21 08:14:03 -07:00
mackong
360a70c8a8 core[patch]: fix no current event loop for sql history in async mode (#22933)
- **Description:** When using
RunnableWithMessageHistory/SQLChatMessageHistory in async mode, we'll
get the following error:
```
Error in RootListenersTracer.on_chain_end callback: RuntimeError("There is no current event loop in thread 'asyncio_3'.")
```
which is thrown by
ddfbca38df/libs/community/langchain_community/chat_message_histories/sql.py (L259).
and no message history will be added to the database.

In this patch, a new _aexit_history function, which will be called in
async mode, is added, and in turn aadd_messages will be called.

In this patch, we use `afunc` attribute of a Runnable to check if the
end listener should be run in async mode or not.

  - **Issue:** #22021, #22022 
  - **Dependencies:** N/A
2024-06-21 10:39:47 -04:00
Philippe PRADOS
1c2b9cc9ab core[minor]: Update pgvector transalor for langchain_postgres (#23217)
The SelfQuery PGVectorTranslator is not correct. The operator is "eq"
and not "$eq".
This patch uses a new version of PGVectorTranslator from
langchain_postgres.

It's necessary to release a new version of langchain_postgres (see
[here](https://github.com/langchain-ai/langchain-postgres/pull/75))
before accepting this PR in langchain.
2024-06-21 10:37:09 -04:00
Mu Yang
401d469a92 langchain: fix systax warning in create_json_chat_agent (#23253)
fix syntax warning in `create_json_chat_agent`

```
.../langchain/agents/json_chat/base.py:22: SyntaxWarning: invalid escape sequence '\ '
  """Create an agent that uses JSON to format its logic, build for Chat Models.
```
2024-06-21 10:05:38 -04:00
mackong
b108b4d010 core[patch]: set schema format for AsyncRootListenersTracer (#23214)
- **Description:** AsyncRootListenersTracer supports on_chat_model_start;
its schema_format should be "original+chat".
  - **Issue:** N/A
  - **Dependencies:**
2024-06-21 09:30:27 -04:00
Bagatur
976b456619 docs: BaseChatModel key methods table (#23238)
If we're moving toward documenting inherited params, I think these kinds of
tables become more important

![Screenshot 2024-06-20 at 3 59 12
PM](https://github.com/langchain-ai/langchain/assets/22008038/722266eb-2353-4e85-8fae-76b19bd333e0)
2024-06-20 21:00:22 -07:00
Jacob Lee
5da7eb97cb docs[patch]: Update link (#23240)
CC @agola11
2024-06-20 17:43:12 -07:00
ccurme
a7b4175091 standard tests: add test for tool calling (#23234)
Including streaming
2024-06-20 17:20:11 -04:00
Bagatur
12e0c28a6e docs: fix chat model methods table (#23233)
rst table not md
![Screenshot 2024-06-20 at 12 37 46
PM](https://github.com/langchain-ai/langchain/assets/22008038/7a03b869-c1f4-45d0-8d27-3e16f4c6eb19)
2024-06-20 19:51:10 +00:00
Zheng Robert Jia
a349fce880 docs[minor],community[patch]: Minor tutorial docs improvement, minor import error quick fix. (#22725)
minor changes to module import error handling and minor issues in
tutorial documents.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2024-06-20 15:36:49 -04:00
Eugene Yurtsev
7545b1d29b core[patch]: Fix doc-strings for code blocks (#23232)
Code blocks need extra space around them to be rendered properly by
sphinx
2024-06-20 19:34:52 +00:00
Luis Moros
d5be160af0 community[patch]: Fix sql_databse.from_databricks issue when ran from Job (#23224)
**Description**: When ``sql_database.from_databricks`` is executed
from a Workflow Job, the ``context`` object does not have a
"browserHostName" property, resulting in an error. This change handles
the error so the "DATABRICKS_HOST" env variable value is used instead of
stopping the flow

Co-authored-by: lmorosdb <lmorosdb>
2024-06-20 19:34:15 +00:00
Cory Waddingham
cd6812342e pinecone[patch]: Update Poetry requirements for pinecone-client >=3.2.2 (#22094)
This change updates the requirements in
`libs/partners/pinecone/pyproject.toml` to allow all versions of
`pinecone-client` greater than or equal to 3.2.2.

This change resolves issue
[21955](https://github.com/langchain-ai/langchain/issues/21955).

---------

Co-authored-by: Erick Friis <erickfriis@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-06-20 18:59:36 +00:00
ccurme
abb3066150 docs: clarify streaming with RunnableLambda (#23228) 2024-06-20 14:49:00 -04:00
ccurme
bf7763d9b0 docs: add serialization guide (#23223) 2024-06-20 12:50:24 -04:00
Eugene Yurtsev
59d7adff8f core[patch]: Add clarification about streaming to RunnableLambda (#23227)
Add streaming clarification to runnable lambda docstring.
2024-06-20 16:47:16 +00:00
Jacob Lee
60db79a38a docs[patch]: Update Anthropic chat model docs (#23226)
CC @baskaryan
2024-06-20 09:46:43 -07:00
maang-h
bc4cd9c5cc community[patch]: Update root_validators ChatModels: ChatBaichuan, QianfanChatEndpoint, MiniMaxChat, ChatSparkLLM, ChatZhipuAI (#22853)
This PR updates root validators for:

- ChatModels: ChatBaichuan, QianfanChatEndpoint, MiniMaxChat,
ChatSparkLLM, ChatZhipuAI

Issues #22819

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-06-20 16:36:41 +00:00
ChrisDEV
cb6cf4b631 Fix return value type of dumpd (#20123)
The return type of `json.loads` is `Any`.

In fact, the return type of `dumpd` must be based on `json.loads`, so
the correction here is understandable.

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-06-20 16:31:41 +00:00
Guangdong Liu
0bce28cd30 core(patch): Fix encoding problem of load_prompt method (#21559)
- description: Add encoding parameters.
- @baskaryan, @efriis, @eyurtsev, @hwchase17.


![54d25ac7b1d5c2e47741a56fe8ed8ba](https://github.com/langchain-ai/langchain/assets/48236177/ffea9596-2001-4e19-b245-f8a6e231b9f9)
2024-06-20 09:25:54 -07:00
Philippe PRADOS
8711c61298 core[minor]: Adds an in-memory implementation of RecordManager (#13200)
**Description:**
langchain offers three technologies to save data:
-
[vectorstore](https://python.langchain.com/docs/modules/data_connection/vectorstores/)
- [docstore](https://js.langchain.com/docs/api/schema/classes/Docstore)
- [record
manager](https://python.langchain.com/docs/modules/data_connection/indexing)

If you want to combine these technologies in a sample persistence
strategy, you need a common implementation for each. `DocStore` proposes
`InMemoryDocstore`.

We propose the class `MemoryRecordManager` to complete the system.

This is the prelude to another pull request, which needs a consistent
combination of persistence components.

**Tag maintainer:**
@baskaryan

**Twitter handle:**
@pprados

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-06-20 12:19:10 -04:00
Eugene Yurtsev
3ab49c0036 docs: API reference remove Prev/Up/Next buttons (#23225)
These do not work anyway. Let's remove them for now for simplicity.
2024-06-20 16:15:45 +00:00
Eugene Yurtsev
61daa16e5d docs: Update clean up API reference (#23221)
- Fix bug with TypedDicts rendering inherited methods if inheriting from
  typing_extensions.TypedDict rather than typing.TypedDict
- Do not surface inherited pydantic methods for subclasses of BaseModel
- Subclasses of RunnableSerializable will not show methods inherited from
  Runnable or from BaseModel
- Subclasses of Runnable that are not pydantic models will include a link to
RunnableInterface (they still show inherited methods, we can fix this
later)
2024-06-20 11:35:00 -04:00
Leonid Ganeline
51e75cf59d community: docstrings (#23202)
Added missed docstrings. Format docstrings to the consistent format
(used in the API Reference)
2024-06-20 11:08:13 -04:00
Julian Weng
6a1a0d977a partners[minor]: Fix value error message for with_structured_output (#22877)
Currently, calling `with_structured_output()` with an invalid method
argument raises `Unrecognized method argument. Expected one of
'function_calling' or 'json_format'`, but the JSON mode option [is now
referred
to](https://python.langchain.com/v0.2/docs/how_to/structured_output/#the-with_structured_output-method)
by `'json_mode'`. This fixes that.
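
For reference, a minimal sketch of the call this message applies to (schema and model name are illustrative):

```python
from langchain_core.pydantic_v1 import BaseModel
from langchain_openai import ChatOpenAI


class Joke(BaseModel):
    setup: str
    punchline: str


llm = ChatOpenAI(model="gpt-4o")

# Valid values for method are "function_calling" and "json_mode"; an invalid
# value now raises an error that names 'json_mode' instead of 'json_format'.
structured_llm = llm.with_structured_output(Joke, method="json_mode")
```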

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-06-20 15:03:21 +00:00
Qingchuan Hao
dd4d4411c9 doc: replace function all with tool call (#23184)
- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
2024-06-20 09:27:39 -04:00
Yahkeef Davis
b03c801523 Docs: Update Rag tutorial so it includes an additional notebook cell with pip installs of required langchain_chroma and langchain_community. (#23204)
Description: Update Rag tutorial notebook so it includes an additional
notebook cell with pip installs of required langchain_chroma and
langchain_community.

This fixes the issue where the rag tutorial gives you a 'missing modules'
error if you run the code in the notebook as is.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-06-20 09:22:49 -04:00
Leonid Ganeline
41f7620989 huggingface: docstrings (#23148)
Added missed docstrings. Format docstrings to the consistent format
(used in the API Reference)

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-06-20 13:22:40 +00:00
ccurme
066a5a209f huggingface[patch]: fix CI for python 3.12 (#23197) 2024-06-20 09:17:26 -04:00
xyd
9b3a025f9c fix https://github.com/langchain-ai/langchain/issues/23215 (#23216)
Fix a bug where the ZhipuAIEmbeddings class was not working.

Co-authored-by: xu yandong <shaonian@acsx1.onexmail.com>
2024-06-20 13:04:50 +00:00
Bagatur
ad7f2ec67d standard-tests[patch]: test stop not stop_sequences (#23200) 2024-06-19 18:07:33 -07:00
Bagatur
bd5c92a113 docs: standard params (#23199) 2024-06-19 17:57:05 -07:00
David DeCaprio
a4bcb45f65 core:Add optional max_messages to MessagePlaceholder (#16098)
- **Description:** Add optional max_messages to MessagePlaceholder
- **Issue:**
[16096](https://github.com/langchain-ai/langchain/issues/16096)
- **Dependencies:** None
- **Twitter handle:** @davedecaprio

Sometimes it's better to limit the history in the prompt itself rather
than the memory. This is needed if you want different prompts in the
chain to have different history lengths.
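
A hedged sketch of the idea, assuming the keyword keeps the `max_messages` name used in this PR title (the merged parameter name may differ):

```python
from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant."),
        # Hypothetical keyword taken from this PR's title.
        MessagesPlaceholder("history", max_messages=2),
        ("human", "{question}"),
    ]
)

history = [
    HumanMessage("hi"),
    AIMessage("hello"),
    HumanMessage("how are you?"),
    AIMessage("fine, thanks"),
]
# Only the last two history messages would be injected into the prompt.
print(prompt.invoke({"history": history, "question": "what did I just say?"}))
```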

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-06-19 23:39:51 +00:00
shaunakgodbole
7193634ae6 fireworks[patch]: fix api_key alias in Fireworks LLM (#23118)
Thank you for contributing to LangChain!

**Description**
The current code snippet for `Fireworks` had incorrect parameters. This
PR fixes those parameters.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-19 21:14:42 +00:00
Eugene Yurtsev
1fcf875fe3 core[patch]: Document agent schema (#23194)
* Document agent schema
* Refer folks to langgraph for more information on how to create agents.
2024-06-19 20:16:57 +00:00
Bagatur
255ad39ae3 infra: run CI on large diffs (#23192)
Currently we skip CI on diffs >= 300 files. We think we should just run it
on all packages instead

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-06-19 19:30:56 +00:00
Eugene Yurtsev
c2d43544cc core[patch]: Document messages namespace (#23154)
- Moved doc-strings below attributes in TypedDicts -- seems to render
better on API Reference pages.
* Provided more description and some simple code examples
2024-06-19 15:00:00 -04:00
Eugene Yurtsev
3c917204dc core[patch]: Add doc-strings to outputs, fix @root_validator (#23190)
- Document outputs namespace
- Update a vanilla @root_validator that was missed
2024-06-19 14:59:06 -04:00
Bagatur
8698cb9b28 infra: add more formatter rules to openai (#23189)
Turns on
https://docs.astral.sh/ruff/settings/#format_docstring-code-format and
https://docs.astral.sh/ruff/settings/#format_skip-magic-trailing-comma

```toml
[tool.ruff.format]
docstring-code-format = true
skip-magic-trailing-comma = true
```
2024-06-19 11:39:58 -07:00
Michał Krassowski
710197e18c community[patch]: restore compatibility with SQLAlchemy 1.x (#22546)
- **Description:** Restores compatibility with SQLAlchemy 1.4.x that was
broken since #18992 and adds a test run for this version on CI (only for
Python 3.11)
- **Issue:** fixes #19681
- **Dependencies:** None
- **Twitter handle:** `@krassowski_m`

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-06-19 17:58:57 +00:00
Erick Friis
48d6ea427f upstage: move to external repo (#22506) 2024-06-19 17:56:07 +00:00
Bagatur
0a4ee864e9 openai[patch]: image token counting (#23147)
Resolves #23000

---------

Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
Co-authored-by: ccurme <chester.curme@gmail.com>
2024-06-19 10:41:47 -07:00
Jorge Piedrahita Ortiz
b3e53ffca0 community[patch]: sambanova llm integration improvement (#23137)
- **Description:** SambaNova Sambaverse integration improvement: removed
input parsing that was changing the raw user input and was effectively
making it mandatory to use the process prompt parameter set to true
2024-06-19 10:30:14 -07:00
Jorge Piedrahita Ortiz
e162893d7f community[patch]: update sambastudio embeddings (#23133)
Description: update sambastudio embeddings integration, now compatible
with generic endpoints and CoE endpoints
2024-06-19 10:26:56 -07:00
Philippe PRADOS
db6f46c1a6 langchain[small]: Change type to BasePromptTemplate (#23083)
```python
Change from_llm(
 prompt: PromptTemplate 
 ...
 )
```
 to
```python
Change from_llm(
 prompt: BasePromptTemplate 
 ...
 )
```
2024-06-19 13:19:36 -04:00
Sergey Kozlov
94452a94b1 core[patch]: add exceptions propagation test for astream_events v2 (#23159)
**Description:** `astream_events(version="v2")` didn't propagate
exceptions in `langchain-core<=0.2.6`, fixed in the #22916. This PR adds
a unit test to check that exceptions are propagated upwards.

Co-authored-by: Sergey Kozlov <sergey.kozlov@ludditelabs.io>
2024-06-19 13:00:25 -04:00
Leonid Ganeline
50484be330 prompty: docstring (#23152)
Added missed docstrings. Format docstrings to the consistent format
(used in the API Reference)

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-06-19 12:50:58 -04:00
Qingchuan Hao
9b82707ea6 docs: add bing search tool to ms platform (#23183)
- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
2024-06-19 12:43:05 -04:00
chenxi
505a2e8743 fix: MoonshotChat fails when setting the moonshot_api_key through the OS environment. (#23176)
Close #23174

Co-authored-by: tianming <tianming@bytenew.com>
2024-06-19 16:28:24 +00:00
Bagatur
677408bfc9 core[patch]: fix chat history circular import (#23182) 2024-06-19 09:08:36 -07:00
Eugene Yurtsev
883e90d06e core[patch]: Add an example to the Document schema doc-string (#23131)
Add an example to the document schema
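
Roughly the kind of example being added (values are illustrative):

```python
from langchain_core.documents import Document

doc = Document(
    page_content="LangChain is a framework for building LLM applications.",
    metadata={"source": "https://python.langchain.com", "page": 1},
)
print(doc.page_content, doc.metadata)
```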
2024-06-19 11:35:30 -04:00
ccurme
2b08e9e265 core[patch]: update test to catch circular imports (#23172)
This raises ImportError due to a circular import:
```python
from langchain_core import chat_history
```

This does not:
```python
from langchain_core import runnables
from langchain_core import chat_history
```

Here we update `test_imports` to run each import in a separate
subprocess. Open to other ways of doing this!
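
A minimal sketch of the subprocess-per-import approach (the helper name is illustrative):

```python
import subprocess
import sys


def assert_importable(statement: str) -> None:
    # Each import runs in a fresh interpreter, so import order inside the
    # current process cannot mask a circular import.
    subprocess.run([sys.executable, "-c", statement], check=True)


assert_importable("from langchain_core import chat_history")
assert_importable("from langchain_core import runnables")
```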
2024-06-19 15:24:38 +00:00
Eugene Yurtsev
ae4c0ed25a core[patch]: Add documentation to load namespace (#23143)
Document some of the modules within the load namespace
2024-06-19 15:21:41 +00:00
Eugene Yurtsev
a34e650f8b core[patch]: Add doc-string to document compressor (#23085) 2024-06-19 11:03:49 -04:00
Eugene Yurtsev
1007a715a5 community[patch]: Prevent unit tests from making network requests (#23180)
* Prevent unit tests from making network requests
2024-06-19 14:56:30 +00:00
ccurme
ca798bc6ea community: move test to integration tests (#23178)
Tests failing on master with

> FAILED
tests/unit_tests/embeddings/test_ovhcloud.py::test_ovhcloud_embed_documents
- ValueError: Request failed with status code: 401, {"message":"Bad
token; invalid JSON"}
2024-06-19 14:39:48 +00:00
Eugene Yurtsev
4fe8403bfb core[patch]: Expand documentation in the indexing namespace (#23134) 2024-06-19 10:11:44 -04:00
Eugene Yurtsev
fe4f10047b core[patch]: Document embeddings namespace (#23132)
Document embeddings namespace
2024-06-19 10:11:16 -04:00
Eugene Yurtsev
a3bae56a48 core[patch]: Update documentation in LLM namespace (#23138)
Update documentation in the llm namespace.
2024-06-19 10:10:50 -04:00
Leonid Ganeline
a70b7a688e ai21: docstrings (#23142)
Added missed docstrings. Format docstrings to the consistent format
(used in the API Reference)
2024-06-19 08:51:15 -04:00
Jacob Lee
0c2ebe5f47 docs[patch]: Standardize prerequisites in tutorial docs (#23150)
CC @baskaryan
2024-06-18 23:10:13 -07:00
bilk0h
3d54784e6d text-splitters: Fix/recursive json splitter data persistence issue (#21529)
Thank you for contributing to LangChain!

**Description:** Noticed an issue when I was calling
`RecursiveJsonSplitter().split_json()` multiple times: I was getting
weird results. I found an issue with the `chunks` list in the `_json_split`
method. If `chunks` is not provided when `_json_split` is called (which is the
case when `split_json` calls `_json_split`), then the same list is reused for
subsequent calls to `_json_split`.


You can see this in the test case I also added in this commit.

Output should be: 
```
[{'a': 1, 'b': 2}]
[{'c': 3, 'd': 4}]
```

Instead you get:
```
[{'a': 1, 'b': 2}]
[{'a': 1, 'b': 2, 'c': 3, 'd': 4}]
```
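
The underlying pitfall is a shared mutable default argument; a tiny sketch of the bug and the usual fix (names are illustrative, not the splitter's actual signature):

```python
from typing import Optional


def broken_split(item: dict, chunks: list = []) -> list:
    chunks.append(item)  # the same list object is reused across calls
    return chunks


def fixed_split(item: dict, chunks: Optional[list] = None) -> list:
    chunks = [] if chunks is None else chunks  # fresh list per call
    chunks.append(item)
    return chunks


print(broken_split({"a": 1}))  # [{'a': 1}]
print(broken_split({"c": 3}))  # [{'a': 1}, {'c': 3}]  <- leaked state
print(fixed_split({"a": 1}))   # [{'a': 1}]
print(fixed_split({"c": 3}))   # [{'c': 3}]
```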

---------

Co-authored-by: Nuno Campos <nuno@langchain.dev>
Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
Co-authored-by: Isaac Francisco <78627776+isahers1@users.noreply.github.com>
2024-06-18 20:21:55 -07:00
Yuki Watanabe
9ab7a6df39 docs: Overhaul Databricks components documentation (#22884)
**Description:** Documentation at
[integrations/llms/databricks](https://python.langchain.com/v0.2/docs/integrations/llms/databricks/)
is not up-to-date and includes examples about the chat model and embeddings,
which should be located in their corresponding subdirectories.
This PR splits the page into the correct scopes and overhauls the contents.

**Note**: This PR might be hard to review on the diffs view, please use
the following preview links for the changed pages.
- `ChatDatabricks`:
https://langchain-git-fork-b-step62-chat-databricks-doc-langchain.vercel.app/v0.2/docs/integrations/chat/databricks/
- `Databricks`:
https://langchain-git-fork-b-step62-chat-databricks-doc-langchain.vercel.app/v0.2/docs/integrations/llms/databricks/
- `DatabricksEmbeddings`:
https://langchain-git-fork-b-step62-chat-databricks-doc-langchain.vercel.app/v0.2/docs/integrations/text_embedding/databricks/

- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

---------

Signed-off-by: B-Step62 <yuki.watanabe@databricks.com>
2024-06-18 20:10:54 -07:00
鹿鹿鹿鲨
6b46b5e9ce community: add **request_kwargs and expect TimeError AsyncHtmlLoader (#23068)
- **Description:** add `**request_kwargs` and expect `TimeoutError` in the
`_fetch` function for AsyncHtmlLoader. This allows you to fill in the
kwargs parameter when using the `load()` method of the `AsyncHtmlLoader`
class.

Co-authored-by: Yucolu <yucolu@tencent.com>
2024-06-18 20:02:46 -07:00
Leonid Ganeline
109a70fc64 ibm: docstrings (#23149)
Added missed docstrings. Format docstrings to the consistent format
(used in the API Reference)
2024-06-18 20:00:27 -07:00
Ryan Elston
86ee4f0daa text-splitters: Introduce Experimental Markdown Syntax Splitter (#22257)
#### Description
This MR defines an `ExperimentalMarkdownSyntaxTextSplitter` class. The
main goal is to replicate the functionality of the original
`MarkdownHeaderTextSplitter`, which extracts the header stack as metadata,
but with one critical difference: it keeps the whitespace of the
original text intact.

This draft reimplements the `MarkdownHeaderTextSplitter` with a very
different algorithmic approach. Instead of marking up each line of the
text individually and aggregating them back together into chunks, this
method builds each chunk sequentially and applies the metadata to each
chunk. This makes the implementation simpler. However, since it's
designed to keep whitespace intact, it's not a full drop-in replacement
for the original. Since it is a radical implementation change to the
original code, I would like to get feedback on whether this is a
worthwhile replacement, should be its own class, or is not a good idea
at all.

Note: I implemented the `return_each_line` parameter but I don't think
it's a necessary feature. I'd prefer to remove it.

This implementation also adds the following additional features:
- Splits out code blocks and includes the language in the `"Code"`
metadata key
- Splits text on the horizontal rule `---` as well
- The `headers_to_split_on` parameter is now optional - with sensible
defaults that can be overridden.
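
A hedged usage sketch based on the class name and defaults described here (the exact import path and output shape in the merged version may differ):

```python
from langchain_text_splitters import ExperimentalMarkdownSyntaxTextSplitter

markdown = (
    "# Guide\n\n"
    "Intro paragraph with its original spacing preserved.\n\n"
    "## Setup\n\n"
    "Steps go here.\n\n"
    "---\n\n"
    "## Usage\n\n"
    "More text here.\n"
)

# headers_to_split_on is optional; sensible defaults apply.
splitter = ExperimentalMarkdownSyntaxTextSplitter()
for doc in splitter.split_text(markdown):
    print(doc.metadata, repr(doc.page_content[:40]))
```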

#### Issue
Keeping the whitespace preserves the paragraph structure and the formatting
of the code blocks intact, which allows the caller much more flexibility
in how they want to further split the individual sections of the
resulting documents. This addresses the issues brought up by the
community in the following issues:
- https://github.com/langchain-ai/langchain/issues/20823
- https://github.com/langchain-ai/langchain/issues/19436
- https://github.com/langchain-ai/langchain/issues/22256

#### Dependencies
N/A

#### Twitter handle
@RyanElston

---------

Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
2024-06-18 19:44:00 -07:00
Bagatur
93d0ad97fe anthropic[patch]: test image input (#23155) 2024-06-19 02:32:15 +00:00
Leonid Ganeline
3dfd055411 anthropic: docstrings (#23145)
Added missed docstrings. Format docstrings to the consistent format
(used in the API Reference)
2024-06-18 22:26:45 -04:00
Bagatur
90559fde70 openai[patch], standard-tests[patch]: don't pass in falsey stop vals (#23153)
adds an image input test to standard-tests as well
2024-06-18 18:13:13 -07:00
Bagatur
e8a8286012 core[patch]: runnablewithchathistory from core.runnables (#23136) 2024-06-19 00:15:18 +00:00
Jacob Lee
2ae718796e docs[patch]: Fix typo in feedback (#23146) 2024-06-18 16:32:04 -07:00
Jacob Lee
74749c909d docs[patch]: Adds feedback input after thumbs up/down (#23141)
CC @baskaryan
2024-06-18 16:08:22 -07:00
Bagatur
cf38981bb7 docs: use trim_messages in chatbot how to (#23139) 2024-06-18 15:48:03 -07:00
Vadym Barda
b483bf5095 core[minor]: handle boolean data in draw_mermaid (#23135)
This change should address graph rendering issues for edges with boolean
data

Example from langgraph:

```python
from typing import Annotated, TypedDict

from langchain_core.messages import AnyMessage
from langgraph.graph import END, START, StateGraph
from langgraph.graph.message import add_messages


class State(TypedDict):
    messages: Annotated[list[AnyMessage], add_messages]


def branch(state: State) -> bool:
    return 1 + 1 == 3


graph_builder = StateGraph(State)
graph_builder.add_node("foo", lambda state: {"messages": [("ai", "foo")]})
graph_builder.add_node("bar", lambda state: {"messages": [("ai", "bar")]})

graph_builder.add_conditional_edges(
    START,
    branch,
    path_map={True: "foo", False: "bar"},
    then=END,
)

app = graph_builder.compile()
print(app.get_graph().draw_mermaid())
```

Previous behavior:

```python
AttributeError: 'bool' object has no attribute 'split'
```

Current behavior:

```python
%%{init: {'flowchart': {'curve': 'linear'}}}%%
graph TD;
	__start__[__start__]:::startclass;
	__end__[__end__]:::endclass;
	foo([foo]):::otherclass;
	bar([bar]):::otherclass;
	__start__ -. ('a',) .-> foo;
	foo --> __end__;
	__start__ -. ('b',) .-> bar;
	bar --> __end__;
	classDef startclass fill:#ffdfba;
	classDef endclass fill:#baffc9;
	classDef otherclass fill:#fad7de;
```
2024-06-18 20:15:42 +00:00
Bagatur
093ae04d58 core[patch]: Pin pydantic in py3.12.4 (#23130) 2024-06-18 12:00:02 -07:00
hmasdev
ff0c06b1e5 langchain[patch]: fix OutputType of OutputParsers and fix legacy API in OutputParsers (#19792)
# Description

This pull request aims to address specific issues related to the
ambiguity and error-proneness of the output types of certain output
parsers, as well as the absence of unit tests for some parsers. These
issues could potentially lead to runtime errors or unexpected behaviors
due to type mismatches when used, causing confusion for developers and
users. Through clarifying output types, this PR seeks to improve the
stability and reliability.

Therefore, this pull request

- fixes the `OutputType` of OutputParsers to be the expected type;
- e.g. `OutputType` property of `EnumOutputParser` raises `TypeError`.
This PR introduces logic to extract `OutputType` from its attribute.
- and fixes the legacy API in OutputParsers like `LLMChain.run` to the
modern API like `LLMChain.invoke`;
- Note: For `OutputFixingParser`, `RetryOutputParser` and
`RetryWithErrorOutputParser`, this PR introduces a `legacy` attribute with
False as the default value in order to keep backward compatibility
- and adds the tests for the `OutputFixingParser` and
`RetryOutputParser`.

The following table shows my expected output and the actual output of
the `OutputType` of OutputParsers.
I have used this table to fix `OutputType` of OutputParsers.

| Class Name of OutputParser | My Expected `OutputType` (after this PR) | Actual `OutputType` [evidence](#evidence) (before this PR) | Fix Required |
|---------|--------------|---------|--------|
| BooleanOutputParser | `<class 'bool'>` | `<class 'bool'>` | NO |
| CombiningOutputParser | `typing.Dict[str, Any]` | `TypeError` is raised | YES |
| DatetimeOutputParser | `<class 'datetime.datetime'>` | `<class 'datetime.datetime'>` | NO |
| EnumOutputParser(enum=MyEnum) | `MyEnum` | `TypeError` is raised | YES |
| OutputFixingParser | The same type as `self.parser.OutputType` | `~T` | YES |
| CommaSeparatedListOutputParser | `typing.List[str]` | `typing.List[str]` | NO |
| MarkdownListOutputParser | `typing.List[str]` | `typing.List[str]` | NO |
| NumberedListOutputParser | `typing.List[str]` | `typing.List[str]` | NO |
| JsonOutputKeyToolsParser | `typing.Any` | `typing.Any` | NO |
| JsonOutputToolsParser | `typing.Any` | `typing.Any` | NO |
| PydanticToolsParser | `typing.Any` | `typing.Any` | NO |
| PandasDataFrameOutputParser | `typing.Dict[str, Any]` | `TypeError` is raised | YES |
| PydanticOutputParser(pydantic_object=MyModel) | `<class '__main__.MyModel'>` | `<class '__main__.MyModel'>` | NO |
| RegexParser | `typing.Dict[str, str]` | `TypeError` is raised | YES |
| RegexDictParser | `typing.Dict[str, str]` | `TypeError` is raised | YES |
| RetryOutputParser | The same type as `self.parser.OutputType` | `~T` | YES |
| RetryWithErrorOutputParser | The same type as `self.parser.OutputType` | `~T` | YES |
| StructuredOutputParser | `typing.Dict[str, Any]` | `TypeError` is raised | YES |
| YamlOutputParser(pydantic_object=MyModel) | `MyModel` | `~T` | YES |

NOTE: In "Fix Required", "YES" means that it is required to fix in this
PR while "NO" means that it is not required.

# Issue

No issues for this PR.

# Twitter handle

- [hmdev3](https://twitter.com/hmdev3)

# Questions:

1. Is it required to create tests for legacy APIs `LLMChain.run` in the
following scripts?
   - libs/langchain/tests/unit_tests/output_parsers/test_fix.py;
   - libs/langchain/tests/unit_tests/output_parsers/test_retry.py.

2. Is there a more appropriate expected output type than I expect in the
above table?
- e.g. the `OutputType` of `CombiningOutputParser` should be
SOMETHING...

# Actual outputs (before this PR)

<div id='evidence'></div>

<details><summary>Actual outputs</summary>

## Requirements

- Python==3.9.13
- langchain==0.1.13

```python
Python 3.9.13 (tags/v3.9.13:6de2ca5, May 17 2022, 16:36:42) [MSC v.1929 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import langchain
>>> langchain.__version__
'0.1.13'
>>> from langchain import output_parsers
```

### `BooleanOutputParser`

```python
>>> output_parsers.BooleanOutputParser().OutputType
<class 'bool'>
```

### `CombiningOutputParser`

```python
>>> output_parsers.CombiningOutputParser(parsers=[output_parsers.DatetimeOutputParser(), output_parsers.CommaSeparatedListOutputParser()]).OutputType
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "D:\workspace\venv\lib\site-packages\langchain_core\output_parsers\base.py", line 160, in OutputType
    raise TypeError(
TypeError: Runnable CombiningOutputParser doesn't have an inferable OutputType. Override the OutputType property to specify the output type.
```

### `DatetimeOutputParser`

```python
>>> output_parsers.DatetimeOutputParser().OutputType
<class 'datetime.datetime'>
```

### `EnumOutputParser`

```python
>>> from enum import Enum
>>> class MyEnum(Enum):
...     a = 'a'
...     b = 'b'
...
>>> output_parsers.EnumOutputParser(enum=MyEnum).OutputType
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "D:\workspace\venv\lib\site-packages\langchain_core\output_parsers\base.py", line 160, in OutputType
    raise TypeError(
TypeError: Runnable EnumOutputParser doesn't have an inferable OutputType. Override the OutputType property to specify the output type.
```

### `OutputFixingParser`

```python
>>> output_parsers.OutputFixingParser(parser=output_parsers.DatetimeOutputParser()).OutputType
~T
```

### `CommaSeparatedListOutputParser`

```python
>>> output_parsers.CommaSeparatedListOutputParser().OutputType
typing.List[str]
```

### `MarkdownListOutputParser`

```python
>>> output_parsers.MarkdownListOutputParser().OutputType
typing.List[str]
```

### `NumberedListOutputParser`

```python
>>> output_parsers.NumberedListOutputParser().OutputType
typing.List[str]
```

### `JsonOutputKeyToolsParser`

```python
>>> output_parsers.JsonOutputKeyToolsParser(key_name='tool').OutputType
typing.Any
```

### `JsonOutputToolsParser`

```python
>>> output_parsers.JsonOutputToolsParser().OutputType
typing.Any
```

### `PydanticToolsParser`

```python
>>> from langchain.pydantic_v1 import BaseModel
>>> class MyModel(BaseModel):
...     a: int
...
>>> output_parsers.PydanticToolsParser(tools=[MyModel, MyModel]).OutputType
typing.Any
```

### `PandasDataFrameOutputParser`

```python
>>> output_parsers.PandasDataFrameOutputParser().OutputType
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "D:\workspace\venv\lib\site-packages\langchain_core\output_parsers\base.py", line 160, in OutputType
    raise TypeError(
TypeError: Runnable PandasDataFrameOutputParser doesn't have an inferable OutputType. Override the OutputType property to specify the output type.
```

### `PydanticOutputParser`

```python
>>> output_parsers.PydanticOutputParser(pydantic_object=MyModel).OutputType
<class '__main__.MyModel'>
```

### `RegexParser`

```python
>>> output_parsers.RegexParser(regex='$', output_keys=['a']).OutputType
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "D:\workspace\venv\lib\site-packages\langchain_core\output_parsers\base.py", line 160, in OutputType
    raise TypeError(
TypeError: Runnable RegexParser doesn't have an inferable OutputType. Override the OutputType property to specify the output type.
```

### `RegexDictParser`

```python
>>> output_parsers.RegexDictParser(output_key_to_format={'a':'a'}).OutputType
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "D:\workspace\venv\lib\site-packages\langchain_core\output_parsers\base.py", line 160, in OutputType
    raise TypeError(
TypeError: Runnable RegexDictParser doesn't have an inferable OutputType. Override the OutputType property to specify the output type.
```

### `RetryOutputParser`

```python
>>> output_parsers.RetryOutputParser(parser=output_parsers.DatetimeOutputParser()).OutputType
~T
```

### `RetryWithErrorOutputParser`

```python
>>> output_parsers.RetryWithErrorOutputParser(parser=output_parsers.DatetimeOutputParser()).OutputType
~T
```

### `StructuredOutputParser`

```python
>>> from langchain.output_parsers.structured import ResponseSchema
>>> response_schemas = [ResponseSchema(name="foo",description="a list of strings",type="List[string]"),ResponseSchema(name="bar",description="a string",type="string"), ]
>>> output_parsers.StructuredOutputParser.from_response_schemas(response_schemas).OutputType
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "D:\workspace\venv\lib\site-packages\langchain_core\output_parsers\base.py", line 160, in OutputType
    raise TypeError(
TypeError: Runnable StructuredOutputParser doesn't have an inferable OutputType. Override the OutputType property to specify the output type.
```

### `YamlOutputParser`

```python
>>> output_parsers.YamlOutputParser(pydantic_object=MyModel).OutputType
~T
```


</details>

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-06-18 18:59:42 +00:00
Artem Mukhin
e271f75bee docs: Fix URL formatting in deprecation warnings (#23075)
**Description**

Updated the URLs in deprecation warning messages. The URLs were
previously written as raw strings and are now formatted to be clickable
HTML links.

Example of a broken link in the current API Reference:
https://api.python.langchain.com/en/latest/chains/langchain.chains.openai_functions.extraction.create_extraction_chain_pydantic.html

<img width="942" alt="Screenshot 2024-06-18 at 13 21 07"
src="https://github.com/langchain-ai/langchain/assets/4854600/a1b1863c-cd03-4af2-a9bc-70375407fb00">
2024-06-18 14:49:58 -04:00
Gabriel Petracca
c6660df58e community[minor]: Implement Doctran async execution (#22372)
**Description**

The DoctranTextTranslator has an async transform function that was not
implemented because [the doctran
library](https://github.com/psychic-api/doctran) uses a sync version of
the `execute` method.

- I implemented the `DoctranTextTranslator.atransform_documents()`
method using `asyncio.to_thread` to run the function in a separate
thread.
- I updated the example in the Notebook with the new async version.
- The performance improvements can be appreciated when a big document is
divided into multiple chunks.
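
A generic sketch of the asyncio.to_thread approach described above (simplified, not the exact method body):

```python
import asyncio
from typing import Any, Sequence

from langchain_core.documents import Document


async def atransform_documents(
    self, documents: Sequence[Document], **kwargs: Any
) -> Sequence[Document]:
    # Run the synchronous transform in a worker thread so the event loop
    # is not blocked while doctran executes.
    return await asyncio.to_thread(self.transform_documents, documents, **kwargs)
```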

Relates to:
- Issue #14645: https://github.com/langchain-ai/langchain/issues/14645
- Issue #14437: https://github.com/langchain-ai/langchain/issues/14437
- https://github.com/langchain-ai/langchain/pull/15264

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-06-18 18:17:37 +00:00
Eugene Yurtsev
aa6415aa7d core[minor]: Support multiple keys in get_from_dict_or_env (#23086)
Support passing multiple keys to get_from_dict_or_env
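
A hedged sketch of what the multi-key lookup might look like (the list-valued key argument is an assumption based on this description):

```python
from langchain_core.utils import get_from_dict_or_env

data = {"api_key": "secret-from-kwargs"}

# Assumption: the key argument can now be a list of candidate keys that are
# tried in order before falling back to the environment variable.
value = get_from_dict_or_env(data, ["api_key", "openai_api_key"], "OPENAI_API_KEY")
print(value)  # "secret-from-kwargs"
```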
2024-06-18 14:13:28 -04:00
nold
226802f0c4 community: add args_schema to SearxSearch (#22954)
This change adds args_schema (pydantic BaseModel) to SearxSearchRun for
correct schema formatting on LLM function calls

Issue: currently using SearxSearchRun with OpenAI function calling
returns the following error "TypeError: SearxSearchRun._run() got an
unexpected keyword argument '__arg1' ".

This happens because the schema sent to the LLM is "input:
'{"__arg1":"foobar"}'" while the method should be called with the
"query" parameter.

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-06-18 17:27:39 +00:00
Bagatur
01783d67fc core[patch]: Release 0.2.9 (#23091) 2024-06-18 17:15:04 +00:00
Finlay Macklon
616d06d7fe community: glob multiple patterns when using DirectoryLoader (#22852)
- **Description:** Updated
*community.langchain_community.document_loaders.directory.py* to enable
the use of multiple glob patterns in the `DirectoryLoader` class. Now,
the glob parameter is of type `list[str] | str` and still defaults to
the same value as before. I updated the docstring of the class to
reflect this, and added a unit test to
*community.tests.unit_tests.document_loaders.test_directory.py* named
`test_directory_loader_glob_multiple`. This test also shows an example
of how to use the new functionality (see also the sketch after this checklist).
- ~~Issue:~~**Discussion Thread:**
https://github.com/langchain-ai/langchain/discussions/18559
- **Dependencies:** None
- **Twitter handle:** N/a

- [x] **Add tests and docs**
    - Added test (described above)
    - Updated class docstring

- [x] **Lint and test**
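
A minimal sketch of the multiple-pattern usage described above (paths and patterns are illustrative):

```python
from langchain_community.document_loaders import DirectoryLoader, TextLoader

loader = DirectoryLoader(
    "docs/",
    glob=["**/*.md", "**/*.txt"],  # a list of patterns is now accepted
    loader_cls=TextLoader,
)
docs = loader.load()
print(len(docs))
```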

---------

Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Isaac Francisco <78627776+isahers1@users.noreply.github.com>
2024-06-18 09:24:50 -07:00
Eugene Yurtsev
5564d9e404 core[patch]: Document BaseStore (#23082)
Add doc-string to BaseStore
2024-06-18 11:47:47 -04:00
Takuya Igei
9f791b6ad5 core[patch],community[patch],langchain[patch]: tenacity dependency to version >=8.1.0,<8.4.0 (#22973)
Fix https://github.com/langchain-ai/langchain/issues/22972.

2024-06-18 10:34:28 -04:00
Raghav Dixit
74c4cbb859 LanceDB example minor change (#23069)
Removed package version `0.6.13` in the example.
2024-06-18 09:16:17 -04:00
Bagatur
ddfbca38df docs: add trim_messages to chatbot (#23061) 2024-06-17 22:39:39 -07:00
Lance Martin
931b41b30f Update Fireworks link (#23058) 2024-06-17 21:16:18 -07:00
Leonid Ganeline
6a66d8e2ca docs: AWS platform page update (#23063)
Added a reference to the `GlueCatalogLoader` new document loader.
2024-06-17 21:01:58 -07:00
Raviraj
858ce264ef SemanticChunker : Feature Addition ("Semantic Splitting with gradient") (#22895)
```SemanticChunker``` currently provides three methods to split texts semantically:
- percentile
- standard_deviation
- interquartile

I propose a new method, ```gradient```. In this method, the gradient of distance is used to split chunks along with the percentile method (technically). This method is useful when chunks are highly correlated with each other or specific to a domain, e.g. legal or medical. The idea is to apply anomaly detection on the gradient array so that the distribution becomes wider and it is easy to identify boundaries in highly semantic data.
I have tested this merge on a set of 10 domain-specific documents (mostly legal).

Details : 
    - **Issue:** Improvement
    - **Dependencies:** NA
    - **Twitter handle:** [x.com/prajapat_ravi](https://x.com/prajapat_ravi)


@hwchase17

---------

Co-authored-by: Raviraj Prajapat <raviraj.prajapat@sirionlabs.com>
Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
2024-06-17 21:01:08 -07:00
Raghav Dixit
55705c0f5e LanceDB integration update (#22869)
Added : 

- [x] relevance search (w/wo scores)
- [x] maximal marginal search
- [x] image ingestion
- [x] filtering support
- [x] hybrid search w reranking 

make test, lint_diff and format checked.
2024-06-17 20:54:26 -07:00
Chang Liu
62c8a67f56 community: add KafkaChatMessageHistory (#22216)
Add chat history store based on Kafka.

Files added: 
`libs/community/langchain_community/chat_message_histories/kafka.py`
`docs/docs/integrations/memory/kafka_chat_message_history.ipynb`

New issue to be created for future improvement:
1. Async method implementation.
2. Message retrieval based on timestamp.
3. Support for other configs when connecting to cloud hosted Kafka (e.g.
add `api_key` field)
4. Improve unit testing & integration testing.
2024-06-17 20:34:01 -07:00
shimajiroxyz
3e835a1aa1 langchain: add id_key option to EnsembleRetriever for metadata-based document merging (#22950)
**Description:**
- What I changed
- By specifying the `id_key` during the initialization of
`EnsembleRetriever`, it is now possible to determine which documents to
merge scores for based on the value corresponding to the `id_key`
element in the metadata, instead of `page_content`. Below is an example
of how to use the modified `EnsembleRetriever`:
    ```python
# The Document returned by each retriever must keep the "id" key in its metadata.
retriever = EnsembleRetriever(retrievers=[ret1, ret2], id_key="id")
    ```

- Additionally, I added a script to easily test the behavior of the
`invoke` method of the modified `EnsembleRetriever`.

- Why I changed
- There are cases where you may want to calculate scores by treating
Documents with different `page_content` as the same when using
`EnsembleRetriever`. For example, when you want to ensemble the search
results of the same document described in two different languages.
- The previous `EnsembleRetriever` used `page_content` as the basis for
score aggregation, making the above usage difficult. Therefore, the
score is now calculated based on the specified key value in the
Document's metadata.

**Twitter handle:** @shimajiroxyz
2024-06-18 03:29:17 +00:00
mackong
39f6c4169d langchain[patch]: add tool messages formatter for tool calling agent (#22849)
- **Description:** add tool_messages_formatter for the tool calling agent,
so that tool messages can be formatted in different ways for your LLM.
  - **Issue:** N/A
  - **Dependencies:** N/A
2024-06-17 20:29:00 -07:00
Lucas Tucker
e25a5966b5 docs: Standardize DocumentLoader docstrings (#22932)
**Standardizing DocumentLoader docstrings (of which there are many)**

This PR addresses issue #22866 and adds docstrings according to the
issue's specified format (in the appendix) for files csv_loader.py and
json_loader.py in langchain_community.document_loaders. In particular,
the following sections have been added to both CSVLoader and JSONLoader:
Setup, Instantiate, Load, Async load, and Lazy load. It may be worth
adding a 'Metadata' section to the JSONLoader docstring to clarify how
we want to extract the JSON metadata (using the `metadata_func`
argument). The files I used to walk through the various sections were
`example_2.json` from
[HERE](https://support.oneskyapp.com/hc/en-us/articles/208047697-JSON-sample-files)
and `hw_200.csv` from
[HERE](https://people.sc.fsu.edu/~jburkardt/data/csv/csv.html).

---------

Co-authored-by: lucast2021 <lucast2021@headroyce.org>
Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
2024-06-18 03:26:36 +00:00
Leonid Ganeline
a56ff199a7 docs: embeddings classes (#22927)
Added a table with all Embedding classes.
2024-06-17 20:17:24 -07:00
Mohammad Mohtashim
60ba02f5db [Community]: Fixed DDG DuckDuckGoSearchResults Docstring (#22968)
- **Description:** A very small fix in the Docstring of
`DuckDuckGoSearchResults` identified in the following issue.
- **Issue:** #22961

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-06-18 03:16:24 +00:00
Eun Hye Kim
70761af8cf community: Fix #22975 (Add SSL Verification Option to Requests Class in langchain_community) (#22977)
- **PR title**: "community: Fix #22975 (Add SSL Verification Option to
Requests Class in langchain_community)"
- **PR message**: 
    - **Description:**
- Added an optional verify parameter to the Requests class with a
default value of True.
- Modified the get, post, patch, put, and delete methods to include the
verify parameter.
- Updated the _arequest async context manager to include the verify
parameter.
- Added the verify parameter to the GenericRequestsWrapper class and
passed it to the Requests class.
    - **Issue:** This PR fixes issue #22975.
- **Dependencies:** No additional dependencies are required for this
change.
    - **Twitter handle:** @lunara_x

You can check this change with below code.
```python
from langchain_openai.chat_models import ChatOpenAI
from langchain.requests import RequestsWrapper
from langchain_community.agent_toolkits.openapi import planner
from langchain_community.agent_toolkits.openapi.spec import reduce_openapi_spec

with open("swagger.yaml") as f:
    data = yaml.load(f, Loader=yaml.FullLoader)
swagger_api_spec = reduce_openapi_spec(data)

llm = ChatOpenAI(model='gpt-4o')
swagger_requests_wrapper = RequestsWrapper(verify=False) # modified point
superset_agent = planner.create_openapi_agent(swagger_api_spec, swagger_requests_wrapper, llm, allow_dangerous_requests=True, handle_parsing_errors=True)

superset_agent.run(
    "Tell me the number and types of charts and dashboards available."
)
```

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-06-18 03:12:40 +00:00
Mohammad Mohtashim
bf839676c7 [Community]: FIxed the DocumentDBVectorSearch _similarity_search_without_score (#22970)
- **Description:** The PR #22777 introduced a bug in
`_similarity_search_without_score` which was raising the
`OperationFailure` error. The mistake was a syntax error in the MongoDB
pipeline, which has now been corrected.
    - **Issue:** #22770
2024-06-17 20:08:42 -07:00
Nuno Campos
f01f12ce1e Include "no escape" and "inverted section" mustache vars in Prompt.input_variables and Prompt.input_schema (#22981) 2024-06-17 19:24:13 -07:00
Bella Be
7a0b36501f docs: Update how to docs for pydantic compatibility (#22983)
Add missing imports in docs from langchain_core.tools  BaseTool

---------

Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2024-06-18 01:49:56 +00:00
Jacob Lee
3b7b276f6f docs[patch]: Adds evaluation sections (#23050)
Also want to add an index/rollup page to LangSmith docs to enable
linking to a how-to category as a group (e.g.
https://docs.smith.langchain.com/how_to_guides/evaluation/)

CC @agola11 @hinthornw
2024-06-17 17:25:04 -07:00
Jacob Lee
6605ae22f6 docs[patch]: Update docs links (#23013) 2024-06-17 15:58:28 -07:00
Bagatur
c2b2e3266c core[minor]: message transformer utils (#22752) 2024-06-17 15:30:07 -07:00
Qingchuan Hao
c5e0acf6f0 docs: add bing search integration to agent (#22929)
- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
2024-06-17 18:08:52 -04:00
Anders Swanson
aacc6198b9 community: OCI GenAI embedding batch size (#22986)
Thank you for contributing to LangChain!

- [x] **PR title**: "community: OCI GenAI embedding batch size"



- [x] **PR message**:
    - **Issue:** #22985 


- [ ] **Add tests and docs**: N/A


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/


---------

Signed-off-by: Anders Swanson <anders.swanson@oracle.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-06-17 22:06:45 +00:00
Bagatur
8235bae48e core[patch]: Release 0.2.8 (#23012) 2024-06-17 20:55:39 +00:00
Bagatur
5ee6e22983 infra: test all dependents on any change (#22994) 2024-06-17 20:50:31 +00:00
Nuno Campos
bd4b68cd54 core: run_in_executor: Wrap StopIteration in RuntimeError (#22997)
- StopIteration can't be set on an asyncio.Future; it raises a TypeError
and leaves the Future pending forever, so we need to convert it to a
RuntimeError
2024-06-17 20:40:01 +00:00
Bagatur
d96f67b06f standard-tests[patch]: Update chat model standard tests (#22378)
- Refactor standard test classes to make them easier to configure
- Update openai to support stop_sequences init param
- Update groq to support stop_sequences init param
- Update fireworks to support max_retries init param
- Update ChatModel.bind_tools to type tool_choice
- Update groq to handle tool_choice="any". **this may be controversial**

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-06-17 13:37:41 -07:00
Bob Lin
14f0cdad58 docs: Add some 3rd party tutorials (#22931)
Langchain is very popular among developers in China, but there are still
no good Chinese books or documents, so I want to add my own Chinese
resources on langchain topics, hoping to give Chinese readers a better
experience using langchain. This is not a translation of the official
langchain documentation, but my understanding.

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-06-17 20:12:49 +00:00
Jacob Lee
893299c3c9 docs[patch]: Reorder streaming guide, add tags (#22993)
CC @hinthornw
2024-06-17 13:10:51 -07:00
Oguz Vuruskaner
dd25d08c06 community[minor]: add tool calling for DeepInfraChat (#22745)
DeepInfra now supports tool calling for supported models.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-17 15:21:49 -04:00
Bagatur
158701ab3c docs: update universal init title (#22990) 2024-06-17 12:13:31 -07:00
Lance Martin
a54deba6bc Add RAG to conceptual guide (#22790)
Co-authored-by: jacoblee93 <jacoblee93@gmail.com>
2024-06-17 11:20:28 -07:00
maang-h
c6b7db6587 community: Add Baichuan Embeddings batch size (#22942)
- **Support batch size**
Baichuan updated its documentation, indicating that up to 16 documents can be
imported at a time

- **Standardized model init arg names**
    - baichuan_api_key -> api_key
    - model_name  -> model
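
A hedged sketch of the standardized init args (API key and model name are placeholders):

```python
from langchain_community.embeddings import BaichuanTextEmbeddings

# Previously: baichuan_api_key=..., model_name=...
emb = BaichuanTextEmbeddings(api_key="sk-...", model="Baichuan-Text-Embedding")
vectors = emb.embed_documents(["doc one", "doc two"])  # batched, up to 16 per request
print(len(vectors))
```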
2024-06-17 14:11:04 -04:00
ccurme
722c8f50ea openai[patch]: add stream_usage parameter (#22854)
Here we add `stream_usage` to ChatOpenAI as:

1. a boolean attribute
2. a kwarg to _stream and _astream.

Question: should the `stream_usage` attribute be `bool`, or `bool |
None`?

Currently I've kept it `bool` and defaulted to False. It was implemented
on
[ChatAnthropic](e832bbb486/libs/partners/anthropic/langchain_anthropic/chat_models.py (L535))
as a bool. However, to maintain support for users who access the
behavior via OpenAI's `stream_options` param, this ends up being
possible:
```python
llm = ChatOpenAI(model_kwargs={"stream_options": {"include_usage": True}})
assert not llm.stream_usage
```
(and this model will stream token usage).

Some options for this:
- it's ok
- make the `stream_usage` attribute bool or None
- make an \_\_init\_\_ for ChatOpenAI, set a `._stream_usage` attribute
and read `.stream_usage` from a property

Open to other ideas as well.
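
A short usage sketch of the attribute as currently proposed (default False; model name is illustrative):

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o", stream_usage=True)

usage = None
for chunk in llm.stream("Tell me a one-line joke"):
    if chunk.usage_metadata:  # populated on the final chunk when enabled
        usage = chunk.usage_metadata
print(usage)
```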
2024-06-17 13:35:18 -04:00
Shubham Pandey
56ac94e014 community[minor]: add ChatSnowflakeCortex chat model (#21490)
**Description:** This PR adds a chat model integration for [Snowflake
Cortex](https://docs.snowflake.com/en/user-guide/snowflake-cortex/llm-functions),
which gives instant access to industry-leading large language models
(LLMs) trained by researchers at companies like Mistral, Reka, Meta, and
Google, including [Snowflake
Arctic](https://www.snowflake.com/en/data-cloud/arctic/), an open
enterprise-grade model developed by Snowflake.
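
A hedged instantiation sketch, assuming the import path implied by the test and notebook files listed below; connection settings come from Snowflake credentials (environment variables or keyword arguments):

```python
from langchain_community.chat_models import ChatSnowflakeCortex

chat = ChatSnowflakeCortex(model="snowflake-arctic", cortex_function="complete")
print(chat.invoke("What is Snowflake Cortex?").content)
```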

**Dependencies:** Snowflake's
[snowpark](https://pypi.org/project/snowflake-snowpark-python/) library
is required for using this integration.

**Twitter handle:** [@gethouseware](https://twitter.com/gethouseware)

- [x] **Add tests and docs**:
1. integration tests:
`libs/community/tests/integration_tests/chat_models/test_snowflake.py`
2. unit tests:
`libs/community/tests/unit_tests/chat_models/test_snowflake.py`
  3. example notebook: `docs/docs/integrations/chat/snowflake.ipynb`


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
2024-06-17 09:47:05 -07:00
Lance Martin
ea96133890 docs: Update llamacpp ntbk (#22907)
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-17 15:42:56 +00:00
Bagatur
e2304ebcdb standard-tests[patch]: Release 0.1.1 (#22984) 2024-06-17 15:31:34 +00:00
Hakan Özdemir
c437b1aab7 [Partner]: Add metadata to stream response (#22716)
Adds `response_metadata` to stream responses from OpenAI. This is
returned with `invoke` normally, but wasn't implemented for `stream`.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-06-17 09:46:50 -04:00
Baskar Gopinath
42a379c75c docs: Standardise formatting (#22948)
Standardised formatting 


![image](https://github.com/langchain-ai/langchain/assets/73015364/ea3b5c5c-e7a6-4bb7-8c6b-e7d8cbbbf761)
2024-06-17 09:00:05 -04:00
Ikko Eltociear Ashimine
3e7bb7690c docs: update databricks.ipynb (#22949)
arbitary -> arbitrary
2024-06-17 08:57:49 -04:00
Baskar Gopinath
19356b6445 Update sql_qa.ipynb (#22966)
fixes #22798 
fixes #22963
2024-06-17 08:57:16 -04:00
Bagatur
9ff249a38d standard-tests[patch]: don't require str chunk contents (#22965) 2024-06-17 08:52:24 -04:00
Daniel Glogowski
892bd4c29b docs: nim model name update (#22943)
NIM Model name change in a notebook and mdx file.

Thanks!
2024-06-15 16:38:28 -04:00
Christopher Tee
ada03dd273 community(you): Better support for You.com News API (#22622)
## Description
While `YouRetriever` supports both You.com's Search and News APIs, news
is supported as an afterthought.
More specifically, not all of the News API parameters are exposed for
the user, only those that happen to overlap with the Search API.

This PR:
- improves support for both APIs, exposing the remaining News API
parameters while retaining backward compatibility
- refactor some REST parameter generation logic
- updates the docstring of `YouSearchAPIWrapper`
- add input validation and warnings to ensure parameters are properly
set by user
- 🚨 Breaking: Limit the news results to `k` items

2024-06-15 20:05:19 +00:00
ccurme
e09c6bb58b infra: update integration test workflow (#22945) 2024-06-15 19:52:43 +00:00
Tomaz Bratanic
1c661fd849 Improve llm graph transformer docstring (#22939) 2024-06-15 15:33:26 -04:00
maang-h
7a0af56177 docs: update ZhipuAI ChatModel docstring (#22934)
- **Description:** Update ZhipuAI ChatModel rich docstring
- **Issue:** the issue #22296
2024-06-15 09:12:21 -04:00
Appletree24
6838804116 docs: Fix misspelling in streaming doc (#22936)
Description: Fix misspelling
Issue: None
Dependencies: None
Twitter handle: None

Co-authored-by: qcloud <ubuntu@localhost.localdomain>
2024-06-15 12:24:50 +00:00
Bitmonkey
570d45b2a1 Update ollama.py with optional raw setting. (#21486)
Ollama has a raw option now. 

https://github.com/ollama/ollama/blob/main/docs/api.md


---------

Co-authored-by: Isaac Francisco <78627776+isahers1@users.noreply.github.com>
Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
2024-06-14 17:19:26 -07:00
caiyueliang
9944ad7f5f community: Solve the issue where the _search function in ElasticsearchStore supports passing a query_vector parameter, but the parameter does not take effect. (#21532)
**Issue:**
When using the similarity_search_with_score function in
ElasticsearchStore, I expected to be able to pass in the query_vector that I
have already obtained. I noticed that the _search function does support the
query_vector parameter, but it seems to be ineffective. I am attempting
to resolve this issue.

Co-authored-by: Isaac Francisco <78627776+isahers1@users.noreply.github.com>
2024-06-14 17:13:11 -07:00
Erick Friis
764f1958dd docs: add ollama json mode (#22926)
fixes #22910
2024-06-14 23:27:55 +00:00
Erick Friis
c374c98389 experimental: release 0.0.61 (#22924) 2024-06-14 15:55:07 -07:00
BuxianChen
af65cac609 cli[minor]: remove redefined DEFAULT_GIT_REF (#21471)
remove redefined DEFAULT_GIT_REF

Co-authored-by: Isaac Francisco <78627776+isahers1@users.noreply.github.com>
2024-06-14 15:49:15 -07:00
Erick Friis
79a64207f5 community: release 0.2.5 (#22923) 2024-06-14 15:45:07 -07:00
Jiejun Tan
c8c67dde6f text-splitters[patch]: Fix HTMLSectionSplitter (#22812)
Update former pull request:
https://github.com/langchain-ai/langchain/pull/22654.

Modified `langchain_text_splitters.HTMLSectionSplitter`, where in the
latest version a `dict` data structure is used to store sections from an
HTML document in the function `split_html_by_headers`. The header/section
element names serve as dict keys. This can be a problem when duplicate
header/section element names are present in a single HTML document:
later ones can replace earlier ones with the same name, so some
contents can be missed after HTML text splitting is conducted.

Using a list to store sections can hopefully solve the problem. A unit
test considering duplicate header names has been added.
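
The core issue is generic: dict keys deduplicate, while a list of (header, content) pairs preserves every section. A tiny illustration:

```python
sections = [("Intro", "first part"), ("Details", "..."), ("Intro", "second part")]

as_dict = dict(sections)  # the second "Intro" silently overwrites the first
as_list = list(sections)  # both "Intro" sections survive

print(len(as_dict))  # 2
print(len(as_list))  # 3
```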

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-14 22:40:39 +00:00
Erick Friis
fbeeb6da75 langchain: release 0.2.5 (#22922) 2024-06-14 15:37:54 -07:00
Erick Friis
551640a030 templates: remove lockfiles (#22920)
poetry will default to latest versions without them
2024-06-14 21:42:30 +00:00
Baskar Gopinath
c4f2bc9540 docs: Fix wrongly referenced class name in confluence.py (#22879)
Fixes #22542

Changed ConfluenceReader to ConfluenceLoader
2024-06-14 14:00:48 -07:00
ccurme
32966a08a9 infra: remove nvidia from monorepo scheduled tests (#22915)
Scheduled tests run in
https://github.com/langchain-ai/langchain-nvidia/tree/main
2024-06-14 13:23:04 -07:00
Erick Friis
9ef15691d6 core: release 0.2.7 (#22917) 2024-06-14 20:03:58 +00:00
Nuno Campos
338180f383 core: in astream_events v2 always await task even if already finished (#22916)
- this ensures exceptions propagate to the caller
2024-06-14 19:54:20 +00:00
Istvan/Nebulinq
513e491ce9 experimental: LLMGraphTransformer - added relationship properties. (#21856)
- **Description:** 
The generated relationships in the graph had no properties, but the
Relationship class was properly defined with properties. This made it
very difficult to transform conditional sentences into a graph. Adding
properties to relationships can solve this issue elegantly.
The changes expand on the existing LLMGraphTransformer implementation
but add the possibility to define allowed relationship properties like
this: LLMGraphTransformer(llm=llm, relationship_properties=["Condition",
"Time"],)
- **Issue:** 
    no issue found
 - **Dependencies:**
    n/a
- **Twitter handle:** 
    @IstvanSpace


Quick Test
=================================================================
from dotenv import load_dotenv
import os
from langchain_community.graphs import Neo4jGraph
from langchain_experimental.graph_transformers import LLMGraphTransformer
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.documents import Document

load_dotenv()
os.environ["NEO4J_URI"] = os.getenv("NEO4J_URI")
os.environ["NEO4J_USERNAME"] = os.getenv("NEO4J_USERNAME")
os.environ["NEO4J_PASSWORD"] = os.getenv("NEO4J_PASSWORD")
graph = Neo4jGraph()
llm = ChatOpenAI(temperature=0, model_name="gpt-4o")
llm_transformer = LLMGraphTransformer(llm=llm)
#text = "Harry potter likes pies, but only if it rains outside"
text = "Jack has a dog named Max. Jack only walks Max if it is sunny outside."
documents = [Document(page_content=text)]
llm_transformer_props = LLMGraphTransformer(
    llm=llm,
    relationship_properties=["Condition"],
)
graph_documents_props = llm_transformer_props.convert_to_graph_documents(documents)
print(f"Nodes:{graph_documents_props[0].nodes}")
print(f"Relationships:{graph_documents_props[0].relationships}")
graph.add_graph_documents(graph_documents_props)

---------

Co-authored-by: Istvan Lorincz <istvan.lorincz@pm.me>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-06-14 14:41:04 -04:00
ccurme
694ae87748 docs: add groq to chatmodeltabs (#22913) 2024-06-14 14:31:48 -04:00
Eugene Yurtsev
c816d03699 docs: Add admonition to PythonREPL tool (#22909)
Add admonition to the documentation to make sure users are aware that
the tool allows execution of code on the host machine using a python
interpreter (by design).
2024-06-14 14:06:40 -04:00
kiarina
8171efd07a core[patch]: Fix FunctionCallbackHandler._on_tool_end (#22908)
If the global `debug` flag is enabled, the agent will get the following
error in `FunctionCallbackHandler._on_tool_end` at runtime.

```
Error in ConsoleCallbackHandler.on_tool_end callback: AttributeError("'list' object has no attribute 'strip'")
```

By calling str() before strip(), the error was avoided.
This error can be seen at
[debugging.ipynb](https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/debugging.ipynb).
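
The fix boils down to coercing the tool output to a string before calling `strip()`; a minimal sketch of the pattern (not the handler's exact code):

```python
def _format_tool_output(output) -> str:
    # Tool outputs may be lists or other non-string objects; str() makes the
    # subsequent strip() safe either way.
    return str(output).strip()


print(_format_tool_output(["item1", "item2"]))  # "['item1', 'item2']"
print(_format_tool_output("  plain text  "))    # "plain text"
```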

- Issue: NA
- Dependencies: NA
- Twitter handle: https://x.com/kiarina37
2024-06-14 17:59:29 +00:00
Philippe PRADOS
b61de9728e community[minor]: Fix long_context_reorder.py async (#22839)
Implement `async def atransform_documents( self, documents:
Sequence[Document], **kwargs: Any ) -> Sequence[Document]` for
`LongContextReorder`
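
A simplified sketch of the shape of such an implementation (toy class, not the PR's code):

```python
from typing import Any, Sequence

from langchain_core.documents import BaseDocumentTransformer, Document


class ReorderSketch(BaseDocumentTransformer):
    """Toy transformer showing the sync/async pair described above."""

    def transform_documents(
        self, documents: Sequence[Document], **kwargs: Any
    ) -> Sequence[Document]:
        # Placeholder reordering; the real class does "lost in the middle" reordering.
        return list(reversed(documents))

    async def atransform_documents(
        self, documents: Sequence[Document], **kwargs: Any
    ) -> Sequence[Document]:
        # Reordering is pure in-memory work, so the native async version can
        # simply reuse the sync logic without awaiting anything.
        return self.transform_documents(documents, **kwargs)
```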
2024-06-14 13:55:18 -04:00
Eugene Yurtsev
c72bcda4f2 community[major], experimental[patch]: Remove Python REPL from community (#22904)
Remove the REPL from community, and suggest an alternative import from
langchain_experimental.

Fix for this issue:
https://github.com/langchain-ai/langchain/issues/14345

This is not a bug in the code or an actual security risk. The python
REPL itself is behaving as expected.

The PR is done to appease blanket security policies that are just
looking for the presence of exec in the code.
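
A hedged sketch of the suggested replacement import:

```python
# Opt in explicitly by importing the REPL from langchain_experimental.
from langchain_experimental.utilities import PythonREPL

repl = PythonREPL()
print(repl.run("print(21 * 2)"))  # runs code in a Python interpreter on the host
```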

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-06-14 17:53:29 +00:00
Eugene Yurtsev
9a877c7adb community[patch]: SitemapLoader restrict depth of parsing sitemap (CVE-2024-2965) (#22903)
This PR restricts the depth to which the sitemap can be parsed.

Fix for: CVE-2024-2965
2024-06-14 13:04:40 -04:00
Eugene Yurtsev
4a77a3ab19 core[patch]: fix validation of @deprecated decorator (#22513)
This PR moves the validation of the decorator to a better place to avoid
creating bugs while deprecating code.

Prevent issues like this from arising:
https://github.com/langchain-ai/langchain/issues/22510

we should replace with a linter at some point that just does static
analysis
2024-06-14 16:52:30 +00:00
Jacob Lee
181a61982f anthropic[minor]: Adds streaming tool call support for Anthropic (#22687)
Preserves string content chunks for non-tool-call requests for
convenience.

One thing - Anthropic events look like this:

```
RawContentBlockStartEvent(content_block=TextBlock(text='', type='text'), index=0, type='content_block_start')
RawContentBlockDeltaEvent(delta=TextDelta(text='<thinking>\nThe', type='text_delta'), index=0, type='content_block_delta')
RawContentBlockDeltaEvent(delta=TextDelta(text=' provide', type='text_delta'), index=0, type='content_block_delta')
...
RawContentBlockStartEvent(content_block=ToolUseBlock(id='toolu_01GJ6x2ddcMG3psDNNe4eDqb', input={}, name='get_weather', type='tool_use'), index=1, type='content_block_start')
RawContentBlockDeltaEvent(delta=InputJsonDelta(partial_json='', type='input_json_delta'), index=1, type='content_block_delta')
```

Note that `delta` has a `type` field. With this implementation, I'm
dropping it because `merge_list` behavior will concatenate strings.

We currently have `index` as a special field when merging lists, would
it be worth adding `type` too?

If so, what do we set as a context block chunk? `text` vs.
`text_delta`/`tool_use` vs `input_json_delta`?

CC @ccurme @efriis @baskaryan
2024-06-14 09:14:43 -07:00
ccurme
f40b2c6f9d fireworks[patch]: add usage_metadata to (a)invoke and (a)stream (#22906) 2024-06-14 12:07:19 -04:00
Mohammad Mohtashim
d1b7a934aa [Community]: HuggingFaceCrossEncoder score accounting for <not-relevant score,relevant score> pairs. (#22578)
- **Description:** Some of the Cross-Encoder models provide scores in
pairs, i.e., <not-relevant score (higher means the document is less
relevant to the query), relevant score (higher means the document is
more relevant to the query)>. However, the `HuggingFaceCrossEncoder`
`score` method does not currently take into account the pair situation.
This PR addresses this issue by modifying the method to consider only
the relevant score if score is being provided in pair. The reason for
focusing on the relevant score is that the compressors select the top-n
documents based on relevance.
    - **Issue:** #22556 
- Please also refer to this
[comment](https://github.com/UKPLab/sentence-transformers/issues/568#issuecomment-729153075)
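
A rough sketch of the idea, assuming the underlying cross-encoder returns `<not-relevant, relevant>` pairs (illustrative helper, not the actual method):

```python
from typing import List, Sequence, Union

def select_relevant_scores(raw: Sequence[Union[float, Sequence[float]]]) -> List[float]:
    """Keep only the 'relevant' score when the model emits <not-relevant, relevant> pairs."""
    cleaned: List[float] = []
    for score in raw:
        if isinstance(score, (list, tuple)) and len(score) == 2:
            cleaned.append(float(score[1]))  # second element is the relevance score
        else:
            cleaned.append(float(score))
    return cleaned

print(select_relevant_scores([[0.1, 0.9], [0.7, 0.3], 0.42]))  # [0.9, 0.3, 0.42]
```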
2024-06-14 08:28:24 -07:00
Baskar Gopinath
83643cbdfe docs: Fix typo in tutorial about structured data extraction (#22888)
Fixed typo in the tutorial about structured data extraction.
2024-06-14 15:19:55 +00:00
Thanh Nguyen
b5e2ba3a47 community[minor]: add chat model llamacpp (#22589)
- **PR title**: [community] add chat model llamacpp


- **PR message**:
- **Description:** This PR introduces a new chat model integration with
llamacpp_python, designed to work similarly to the existing ChatOpenAI
model.
      + Works well with instructed chat, chains, and function/tool calling.
      + Works with LangGraph (persistent memory, tool calling); will be updated soon.

- **Dependencies:** This change requires the llamacpp_python library to
be installed.
    
@baskaryan

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-06-14 14:51:43 +00:00
Bagatur
e4279f80cd docs: doc loader feat table alignment (#22900) 2024-06-14 14:25:01 +00:00
Isaac Francisco
984c7a9d42 docs: generate table for document loaders (#22871)
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-14 07:03:27 -07:00
Jacob Lee
8e89178047 docs[patch]: Expand embeddings docs (#22881) 2024-06-13 23:06:07 -07:00
ccurme
73c76b9628 anthropic[patch]: always add tool_result type to ToolMessage content (#22721)
Anthropic tool results can contain image data, which are typically
represented with content blocks having `"type": "image"`. Currently,
these content blocks are passed as-is as human/user messages to
Anthropic, which raises BadRequestError as it expects a tool_result
block to follow a tool_use.

Here we update ChatAnthropic to nest the content blocks inside a
tool_result content block.

Example:
```python
import base64

import httpx
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import AIMessage, HumanMessage, ToolMessage
from langchain_core.pydantic_v1 import BaseModel, Field


# Fetch image
image_url = "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
image_data = base64.b64encode(httpx.get(image_url).content).decode("utf-8")


class FetchImage(BaseModel):
    should_fetch: bool = Field(..., description="Whether an image is requested.")


llm = ChatAnthropic(model="claude-3-sonnet-20240229").bind_tools([FetchImage])

messages = [
    HumanMessage(content="Could you summon a beautiful image please?"),
    AIMessage(
        content=[
            {
                "type": "tool_use",
                "id": "toolu_01Rn6Qvj5m7955x9m9Pfxbcx",
                "name": "FetchImage",
                "input": {"should_fetch": True},
            },
        ],
        tool_calls=[
            {
                "name": "FetchImage",
                "args": {"should_fetch": True},
                "id": "toolu_01Rn6Qvj5m7955x9m9Pfxbcx",
            },
        ],
    ),
    ToolMessage(
        name="FetchImage",
        content=[
            {
                "type": "image",
                "source": {
                    "type": "base64",
                    "media_type": "image/jpeg",
                    "data": image_data,
                },
            },
        ],
        tool_call_id="toolu_01Rn6Qvj5m7955x9m9Pfxbcx",
    ),
]

llm.invoke(messages)
```

Trace:
https://smith.langchain.com/public/d27e4fc1-a96d-41e1-9f52-54f5004122db/r
2024-06-13 20:14:23 -07:00
Lucas Tucker
7114aed78f docs: Standardize ChatGroq (#22751)
Updated ChatGroq doc string as per issue
https://github.com/langchain-ai/langchain/issues/22296:"langchain_groq:
updated docstring for ChatGroq in langchain_groq to match that of the
description (in the appendix) provided in issue
https://github.com/langchain-ai/langchain/issues/22296. "

Issue: This PR is in response to issue
https://github.com/langchain-ai/langchain/issues/22296, and more
specifically the ChatGroq model. In particular, this PR updates the
docstring for langchain/libs/partners/groq/langchain_groq/chat_model.py
by adding the following sections: Instantiate, Invoke, Stream, Async,
Tool calling, Structured Output, and Response metadata. I used the
template from the Anthropic implementation and referenced the Appendix
of the original issue post. I also noted that: `usage_metadata `returns
none for all ChatGroq models I tested; there is no mention of image
input in the ChatGroq documentation; unlike that of ChatHuggingFace,
`.stream(messages)` for ChatGroq returned blocks of output.

---------

Co-authored-by: lucast2021 <lucast2021@headroyce.org>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-14 03:08:36 +00:00
Anush
e002c855bd qdrant[patch]: Use collection_exists API instead of exceptions (#22764)
## Description

Currently, the Qdrant integration relies on exceptions raised by
[`get_collection`
](https://qdrant.tech/documentation/concepts/collections/#collection-info)
to check if a collection exists.

Using
[`collection_exists`](https://qdrant.tech/documentation/concepts/collections/#check-collection-existence)
is recommended to avoid missing any unhandled exceptions. This PR
addresses this.
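
A sketch of the pattern (local server URL, collection name, and vector params are placeholders):

```python
from qdrant_client import QdrantClient
from qdrant_client.http import models

client = QdrantClient(url="http://localhost:6333")  # placeholder local server

# Before: existence was inferred by catching exceptions from get_collection().
# After: ask directly, and only create the collection when it is missing.
if not client.collection_exists(collection_name="my_documents"):
    client.create_collection(
        collection_name="my_documents",
        vectors_config=models.VectorParams(size=384, distance=models.Distance.COSINE),
    )
```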

## Testing
All integration and unit tests pass. No user-facing changes.
2024-06-13 20:01:32 -07:00
Anindyadeep
c417803908 community[minor]: Prem Templates (#22783)
This PR adds the feature add Prem Template feature in ChatPremAI.
Additionally it fixes a minor bug for API auth error when API passed
through arguments.
2024-06-13 19:59:28 -07:00
Stefano Lottini
4160b700e6 docs: Astra DB vectorstore, adjust syntax for automatic-embedding example (#22833)
Description: Adjusting the syntax for creating the vectorstore
collection (in the case of automatic embedding computation) for the most
idiomatic way to submit the stored secret name.

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-06-14 02:52:32 +00:00
maang-h
1055b9a309 community[minor]: Implement ZhipuAIEmbeddings interface (#22821)
- **Description:** Implement ZhipuAIEmbeddings interface, include:
     - The `embed_query` method
     - The `embed_documents` method

refer to [ZhipuAI
Embedding-2](https://open.bigmodel.cn/dev/api#text_embedding)
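
A hedged usage sketch; the constructor argument for the API key is an assumption, not verified against the PR:

```python
from langchain_community.embeddings import ZhipuAIEmbeddings

# Assumed: the API key is passed via api_key (or picked up from the environment).
embeddings = ZhipuAIEmbeddings(api_key="your-zhipuai-api-key")

query_vector = embeddings.embed_query("What is the capital of France?")
doc_vectors = embeddings.embed_documents(["Paris is the capital of France.", "Berlin is in Germany."])
print(len(query_vector), len(doc_vectors))
```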

---------

Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2024-06-13 19:45:11 -07:00
Leonid Ganeline
46c9784127 docs: ReAct reference (#22830)
The `ReAct` is used all across LangChain but it is not referenced
properly.
Added references to the original paper.
2024-06-13 19:39:28 -07:00
Giacomo Berardi
712aa0c529 docs: fixes for Elasticsearch integrations, cache doc and providers list (#22817)
Some minor fixes in the documentation:
 - ElasticsearchCache initialization is now correct
 - List of integrations for ES updated
2024-06-13 19:39:10 -07:00
Isaac Francisco
f9a6d5c845 infra: lint new docs to match doc loader template (#22867) 2024-06-13 19:34:50 -07:00
Bagatur
8bd368d07e cli[patch]: Release 0.0.25 (#22876) 2024-06-14 02:31:04 +00:00
Isaac Francisco
75e966a2fa docs, cli[patch]: document loaders doc template (#22862)
From: https://github.com/langchain-ai/langchain/pull/22290

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-06-13 19:28:57 -07:00
Hayden Wolff
d1cdde267a docs: update NVIDIA Riva tool to use NVIDIA NIM for LLM (#22873)
**Description:**
Update the NVIDIA Riva tool documentation to use NVIDIA NIM for the LLM.
Show how to use NVIDIA NIMs and link to documentation for LangChain with
NIM.

---------

Co-authored-by: Hayden Wolff <hwolff@nvidia.com>
Co-authored-by: Isaac Francisco <78627776+isahers1@users.noreply.github.com>
2024-06-13 19:26:05 -07:00
Zeeshan Qureshi
ada1e5cc64 docs: s/path_images/images/ for ImageCaptionLoader keyword arguments (#22857)
Quick update to `ImageCaptionLoader` documentation to reflect what's in
code.
2024-06-13 18:37:12 -07:00
liuzc9
41e232cb82 Fix typo in vearch.md (#22840)
Fix typo
2024-06-13 18:24:51 -07:00
Kagura Chen
57783c5e55 Fix: lint errors and update Field alias in models.py and AutoSelectionScorer initialization (#22846)
This PR addresses several lint errors in the core package of LangChain.
Specifically, the following issues were fixed:

1.Unexpected keyword argument "required" for "Field"  [call-arg]
2.tests/integration_tests/chains/test_cpal.py:263: error: Unexpected
keyword argument "narrative_input" for "QueryModel" [call-arg]
2024-06-13 18:18:00 -07:00
Erick Friis
5bc774827b langchain: release 0.2.4 (#22872) 2024-06-14 00:14:48 +00:00
Erick Friis
7234fd0f51 core: release 0.2.6 (#22868) 2024-06-13 22:22:34 +00:00
Jacob Lee
bcbb43480c core[patch]: Treat type as a special field when merging lists (#22750)
Should we even log a warning? At least for Anthropic, it's expected to
get e.g. `text_block` followed by `text_delta`.

@ccurme @baskaryan @efriis
2024-06-13 15:08:24 -07:00
Nuno Campos
bae82e966a core: In astream_events v2 propagate cancel/break to the inner astream call (#22865)
- previous behavior was for the inner astream to continue running with
no interruption
- also propagate break in core runnable methods
2024-06-13 15:02:48 -07:00
Eugene Yurtsev
a766815a99 experimental[patch]/docs[patch]: Update links to security docs (#22864)
Minor update to newest version of security docs (content should be
identical).
2024-06-13 20:29:34 +00:00
Eugene Yurtsev
8f7cc73817 ci: Add script to check for pickle usage in community (#22863)
Add script to check for pickle usage in community.
2024-06-13 16:13:15 -04:00
Eugene Yurtsev
77209f315e community[patch]: FAISS VectorStore deserializer should be opt-in (#22861)
FAISS deserializer uses pickle module. Users have to opt-in to
de-serialize.
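
A short sketch of what the opt-in looks like on the loading side (the placeholder embeddings and index path are illustrative):

```python
from langchain_community.embeddings import FakeEmbeddings
from langchain_community.vectorstores import FAISS

embeddings = FakeEmbeddings(size=16)  # placeholder embeddings for the sketch
store = FAISS.from_texts(["hello world"], embeddings)
store.save_local("faiss_index")

# Loading uses pickle under the hood, so it now requires an explicit opt-in.
restored = FAISS.load_local(
    "faiss_index",
    embeddings,
    allow_dangerous_deserialization=True,  # only for indexes you created and trust
)
print(restored.similarity_search("hello", k=1))
```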
2024-06-13 15:48:13 -04:00
Eugene Yurtsev
ce0b0f22a1 experimental[major]: Force users to opt-in into code that relies on the python repl (#22860)
This should make it obvious that a few of the agents in langchain
experimental rely on the python REPL as a tool under the hood, and will
force users to opt-in.
2024-06-13 15:41:24 -04:00
Isaac Francisco
869523ad72 [docs]: added info for TavilySearchResults (#22765) 2024-06-13 12:14:11 -07:00
ccurme
42257b120f partners: fix numpy dep (#22858)
Following https://github.com/langchain-ai/langchain/pull/22813, which
added python 3.12 to CI, here we update numpy accordingly in partner
packages.
2024-06-13 14:46:42 -04:00
Isaac Francisco
345fd3a556 minor functionality change: adding API functionality to tavilysearch (#22761) 2024-06-13 11:10:28 -07:00
Isaac Francisco
034257e9bf docs: improved recursive url loader docs (#22648)
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-06-13 11:09:35 -07:00
Isaac Francisco
e832bbb486 [docs]: bind tools (#22831) 2024-06-13 09:50:43 -07:00
ccurme
b626c3ca23 groq[patch]: add usage_metadata to (a)invoke and (a)stream (#22834) 2024-06-13 10:26:27 -04:00
Jacob Lee
e01e5d5a91 docs[patch]: Improve Groq integration page (#22844)
Was bare bones and got marked by folks as unhelpful.

CC @efriis @colemccracken
2024-06-13 03:40:29 -07:00
Jacob Lee
12eff6a130 docs[patch]: Readd Pydantic compatibility docs (#22836)
As a how-to guide.

CC @eyurtsev @hwchase17
2024-06-13 02:56:10 -07:00
Jacob Lee
cb654a3245 docs[patch]: Adds multimodal column to chat models table, move up in concepts (#22837)
CC @hwchase17 @baskaryan
2024-06-13 02:26:55 -07:00
James Braza
45b394268c core[patch]: allowing latest packaging versions (#22792)
Allowing version 24 of https://github.com/pypa/packaging

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-06-12 23:22:20 +00:00
Jacob Lee
00ad197502 docs[patch]: Add structured output to conceptual docs (#22791)
This downgrades `Function/tool calling` from a h3 to an h4 which means
it'll no longer show up in the right sidebar, but any direct links will
still work. I think that is ok, but LMK if you disapprove.

CC @hwchase17 @eyurtsev @rlancemartin
2024-06-12 15:30:51 -07:00
Karim Lalani
276be6cdd4 [experimental][llms][OllamaFunctions] tool calling related fixes (#22339)
Fixes issues with tool calling to handle tool objects correctly. Added
support to handle ToolMessage correctly.
Added additional checks for error conditions.

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-06-12 16:34:43 -04:00
Christophe Bornet
d04e899b56 ci: add testing with Python 3.12 (#22813)
We need to use a different version of numpy for py3.8 and py3.12 in
pyproject.
And so do projects that use that Python version range and import
langchain.

    - **Twitter handle:** _cbornet
2024-06-12 16:31:36 -04:00
HyoJin Kang
b6bf2bb234 community[patch]: fix database uri type in SQLDatabase (#22661)
**Description**
sqlalchemy uses "sqlalchemy.engine.URL" type for db uri argument.
Added 'URL' type for compatibility.

**Issue**: None

**Dependencies:** None

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-12 15:11:00 -04:00
Eugene Yurtsev
5dbbdcbf8e core[patch]: Update remaining root_validators (#22829)
This PR updates the remaining root_validators in core to either be explicit pre-init or post-init validators.
2024-06-12 14:47:40 -04:00
Eugene Yurtsev
265e650e64 community[patch]: Update root_validators embeddings: llamacpp, jina, dashscope, mosaicml, huggingface_hub, Toolkits: Connery, ChatModels: PAI_EAS, (#22828)
This PR updates root validators for:

* Embeddings: llamacpp, jina, dashscope, mosaicml, huggingface_hub
* Toolkits: Connery
* ChatModels: PAI_EAS

Following this issue:
https://github.com/langchain-ai/langchain/issues/22819
2024-06-12 13:59:05 -04:00
JonZeolla
32ba8cfab0 community[minor]: implement huggingface show_progress consistently (#22682)
- **Description:** This implements `show_progress` more consistently
(i.e. it is also added to the `HuggingFaceBgeEmbeddings` object).
- **Issue:** This implements `show_progress` more consistently in the
embeddings huggingface classes. Previously this could have been set via
`encode_kwargs`.
 - **Dependencies:** None
 - **Twitter handle:** @jonzeolla
2024-06-12 17:30:56 +00:00
Eugene Yurtsev
74e705250f core[patch]: update some root_validators (#22787)
Update some of the @root_validators to be explicit pre=True or
pre=False, skip_on_failure=True for pydantic 2 compatibility.
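
A generic illustration of the explicit `pre=True` form (toy model, not code from the PR):

```python
from langchain_core.pydantic_v1 import BaseModel, root_validator

class Settings(BaseModel):
    host: str = "localhost"
    url: str = ""

    # Explicit pre=True: runs before field validation and can fill in defaults.
    @root_validator(pre=True)
    def build_url(cls, values: dict) -> dict:
        values.setdefault("url", f"http://{values.get('host', 'localhost')}")
        return values

print(Settings().url)  # http://localhost
```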
2024-06-12 13:04:57 -04:00
bincat
3d6e8547f9 docs: fix function name in tutorials/agents.ipynb (#22809)
The function called in the following example is `create_react_agent`, not
`create_tool_calling_executor`.
2024-06-12 12:30:35 -04:00
mrhbj
a1268d9e9a community[patch]: fix hunyuan message include chinese signature error (#22795) (#22796)
… (#22795)

2024-06-12 12:30:22 -04:00
Kagura Chen
513f1d8037 docs: update repo_structure.mdx to reflect latest code changes (#22810)
**Description:** This PR updates the documentation to reflect the recent
code changes.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-06-12 12:30:04 -04:00
Mr. Lance E Sloan «UMich»
08c466c603 community[patch]: bugfix for YoutubeLoader's LINES format (#22815)
- **Description:** A change I submitted recently introduced a bug in
`YoutubeLoader`'s `LINES` output format: curly braces with bare values
create a set, not a dictionary. This bugfix explicitly specifies that a
dictionary is created (see the small illustration after this list).
- **Issue:** N/A
- **Dependencies:** N/A
- **Twitter:** lsloan_umich
- **Mastodon:**
[lsloan@mastodon.social](https://mastodon.social/@lsloan)
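
For reference, a minimal illustration of the Python behavior behind the bug (not code from the loader itself):

```python
# Curly braces around bare values build a set; key: value pairs (or dict())
# build a dictionary. The LINES bug came from accidentally producing the former.
as_set = {"start", "duration", "text"}
as_dict = {"start": 0.0, "duration": 1.5, "text": "hello"}

print(type(as_set))   # <class 'set'>
print(type(as_dict))  # <class 'dict'>
```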
2024-06-12 12:29:34 -04:00
Philippe PRADOS
23c22fcbc9 langchain[minor]: Make EmbeddingsFilters async (#22737)
Add native async implementation for EmbeddingsFilter
2024-06-12 12:27:26 -04:00
endrajeet
b45bf78d2e Update index.mdx (#22818)
changed "# 🌟Recognition" to "### 🌟 Recognition" to match the rest of the
subheadings.

2024-06-12 12:27:16 -04:00
Bagatur
8203c1ff87 infra: lint new docs to match templates (#22786) 2024-06-11 13:26:35 -07:00
ccurme
936aedd10c mistral[patch]: add usage_metadata to (a)invoke and (a)stream (#22781) 2024-06-11 15:34:50 -04:00
Jiří Spilka
20e3662acf docs: Correct code examples in the Apify's notebooks (#22768)
**Description:** Correct code examples in the Apify document load
notebook and Apify Dataset notebook

**Issue**: None
**Dependencies**: None
**Twitter handle**: None
2024-06-11 15:20:16 -04:00
mrhbj
9212c9fcb8 community[patch]: fix hunyuan client json analysis (#22452) (#22767)

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-11 19:05:18 +00:00
Rohan Aggarwal
86e8224cf1 community[patch]: Support for old clients (Thin and Thick) Oracle Vector Store (#22766)
- **Description:** Support for old clients (Thin and Thick) in the Oracle Vector Store.
- **Tests:** Covered by our own local tests.
---------

Co-authored-by: rohan.aggarwal@oracle.com <rohaagga@phoenix95642.dev3sub2phx.databasede3phx.oraclevcn.com>
2024-06-11 11:36:06 -07:00
Jacob Lee
232908a46d docs[patch]: Adds streaming conceptual doc (#22760)
CC @hwchase17 @baskaryan
2024-06-11 11:03:52 -07:00
Mr. Lance E Sloan «UMich»
84dc2dd059 community[patch]: Load YouTube transcripts (captions) as fixed-duration chunks with start times (#21710)
- **Description:** Add a new format, `CHUNKS`, to
`langchain_community.document_loaders.youtube.YoutubeLoader` which
creates multiple `Document` objects from YouTube video transcripts
(captions), each of a fixed duration. The metadata of each chunk
`Document` includes the start time of each one and a URL to that time in
the video on the YouTube website.
  
I had implemented this for UMich (@umich-its-ai) in a local module, but
it makes sense to contribute this to LangChain community for all to
benefit and to simplify maintenance.

- **Issue:** N/A
- **Dependencies:** N/A
- **Twitter:** lsloan_umich
- **Mastodon:**
[lsloan@mastodon.social](https://mastodon.social/@lsloan)

With regards to **tests and documentation**, most existing features of
the `YoutubeLoader` class are not tested. Only the
`YoutubeLoader.extract_video_id()` static method had a test. However,
while I was waiting for this PR to be reviewed and merged, I had time to
add a test for the chunking feature I've proposed in this PR.

I have added an example of using chunking to the
`docs/docs/integrations/document_loaders/youtube_transcript.ipynb`
notebook.
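
A heavily hedged sketch of what such usage might look like; `TranscriptFormat.CHUNKS` and `chunk_size_seconds` follow the description above but should be treated as assumptions rather than a verified API:

```python
from langchain_community.document_loaders import YoutubeLoader
from langchain_community.document_loaders.youtube import TranscriptFormat

# Assumed parameter names: transcript_format and chunk_size_seconds.
loader = YoutubeLoader.from_youtube_url(
    "https://www.youtube.com/watch?v=VIDEO_ID",  # placeholder URL
    transcript_format=TranscriptFormat.CHUNKS,
    chunk_size_seconds=30,
)
for doc in loader.load():
    # Each chunk's metadata should include its start time and a timestamped URL.
    print(doc.metadata, doc.page_content[:60])
```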

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-11 17:44:36 +00:00
Aayush Kataria
71811e0547 community[minor]: Adds a vector store for Azure Cosmos DB for NoSQL (#21676)
This PR add supports for Azure Cosmos DB for NoSQL vector store.

Summary:

Description: added vector store integration for Azure Cosmos DB for
NoSQL Vector Store,
Dependencies: azure-cosmos dependency,
Tag maintainer: @hwchase17, @baskaryan @efriis @eyurtsev

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-06-11 10:34:01 -07:00
Mohammad Mohtashim
36cad5d25c [Community]: Added Metadata filter support for DocumentDB Vector Store (#22777)
- **Description:** As pointed out in this issue #22770, DocumentDB
`similarity_search` does not support filtering through metadata which
this PR adds by passing in the parameter `filter`. Also this PR fixes a
minor Documentation error.
- **Issue:** #22770

---------

Co-authored-by: Erick Friis <erickfriis@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-06-11 16:37:53 +00:00
Dmitry Stepanov
912751e268 Ollama vision support (#22734)
**Description:** Ollama vision support for messages in the OpenAI style: `{ "image_url": { "url": ... } }`
**Issue:** #22460

Added a flexible solution for ChatOllama to support chat messages with
images. It works when you provide `image_url` either as a string or as a
dict with "url" inside (like OpenAI does), which also makes it possible to use
tuples with `ChatPromptTemplate.from_messages()`.
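
A hedged sketch of the kind of message this enables (model name and base64 payload are placeholders):

```python
from langchain_community.chat_models import ChatOllama
from langchain_core.messages import HumanMessage

llm = ChatOllama(model="llava")  # assumes a local Ollama server with a vision model

message = HumanMessage(
    content=[
        {"type": "text", "text": "What is in this picture?"},
        # Either form is accepted after this change:
        {"type": "image_url", "image_url": "data:image/jpeg;base64,<BASE64_DATA>"},
        # {"type": "image_url", "image_url": {"url": "data:image/jpeg;base64,<BASE64_DATA>"}},
    ]
)
print(llm.invoke([message]).content)
```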

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-11 16:10:19 +00:00
Philippe PRADOS
0908b01cb2 langchain[minor]: Add native async implementation to LLMFilter, add concurrency to both sync and async paths (#22739)
- **Description:** chain_filter is not compatible with async; fix chain_filter.py to be compatible with async.
- **Twitter handle:** pprados
---------

Signed-off-by: zhangwangda <zhangwangda94@163.com>
Co-authored-by: Prakul <discover.prakul@gmail.com>
Co-authored-by: Lei Zhang <zhanglei@apache.org>
Co-authored-by: Gin <ictgtvt@gmail.com>
Co-authored-by: wangda <38549158+daziz@users.noreply.github.com>
Co-authored-by: Max Mulatz <klappradla@posteo.net>
2024-06-11 10:55:40 -04:00
Jaeyeon Kim(김재연)
ce4e29ae42 community[minor]: fix redis store docstring and streamline initialization code (#22730)
Thank you for contributing to LangChain!

### Description

Fix the example in the docstring of the Redis store.
Change the initialization logic, remove a redundant check, and enhance the error
message.

### Issue

The example in docstring of how to use redis store was wrong.

![image](https://github.com/langchain-ai/langchain/assets/37469330/78c5d9ce-ee66-45b3-8dfe-ea29f125e6e9)

### Dependencies
Nothing




---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-06-11 14:08:05 +00:00
am-kinetica
ad101adec8 community[patch]: Kinetica Integrations handled error in querying; quotes in table names; updated gpudb API (#22724)
- [ ] **Miscellaneous updates and fixes**: 
- **Description:** Handled errors in querying; quotes in table names;
updated gpudb API
- **Issue:** Previously, a failed query or one returning no records threw an
error with a message that was difficult to understand
    - **Dependencies:** Updated GPUDB API version to `7.2.0.9`


@baskaryan @hwchase17
2024-06-11 10:01:26 -04:00
NithinBairapaka
27b9ea14a5 docs: Updated integration docs with required package installations (#22392)
**Title:** Updated integration docs with required package installations
   **Issue:**  #22005
2024-06-11 01:44:05 +00:00
Albert Gil López
1710423de3 docs: correct path in readme (#22383)
Description: Fix incorrect path in README instructions.
Issue: N/A
Dependencies: None
Twitter handle: @jddam

---------

Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
2024-06-10 17:47:39 -07:00
Greg Tracy
7e115da16c docs: Fix pixelation in stack graphic (#21554)
This change updates the stack graphic displayed in the top-level README.
The LangChain tile is pixelated in the current graphic.
2024-06-10 22:52:22 +00:00
Leonid Ganeline
55bd8e582b docs: integrations cache: added class table (#22368)
Added a table with the cache classes. See [this table
here](https://langchain-rnpqvikie-langchain.vercel.app/v0.2/docs/integrations/llm_caching/#cache-classes-summary-table).
2024-06-10 15:09:03 -07:00
Jacob Lee
89804c3026 docs: Adds pointers from LLM pages to equivalent chat model pages (#22759)
@baskaryan
2024-06-10 14:13:22 -07:00
Qingchuan Hao
7f180f996b docs: fix langchain expression language link (#22683) 2024-06-10 21:12:47 +00:00
Mathis Joffre
ea43f40daf community[minor]: Add support for OVHcloud AI Endpoints Embedding (#22667)
**Description:** Add support for [OVHcloud AI
Endpoints](https://endpoints.ai.cloud.ovh.net/) Embedding models.

Inspired by:
https://gist.github.com/gmasse/e1f99339e161f4830df6be5d0095349a

Signed-off-by: Joffref <mariusjoffre@gmail.com>
2024-06-10 21:07:25 +00:00
Erick Friis
2aaf86ddae core: fix mustache falsy cases (#22747) 2024-06-10 14:00:12 -07:00
Eugene Yurtsev
5a7eac191a core[patch]: Add missing type annotations (#22756)
Add missing type annotations.

The missing type annotations will raise exceptions with pydantic 2.
2024-06-10 16:59:41 -04:00
Eugene Yurtsev
05d31a2f00 community[patch]: Add missing type annotations (#22758)
Add missing type annotations to objects in community.
These missing type annotations will raise type errors in pydantic 2.
2024-06-10 16:59:28 -04:00
Naka Masato
3237909221 langchain[patch]: allow to use partial variables in create_sql_query_chain (#22688)
- **Description:** allow to use partial variables to pass `top_k` and
`table_info`
- **Issue:** no
- **Dependencies:** no
- **Twitter handle:** @gymnstcs

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-10 20:58:30 +00:00
Bharat Ramanathan
2b5631a6be community[patch]: fix WandbTracer to work with new "RunV2" API (#22673)
- **Description:** This PR updates the `WandbTracer` to work with the
new RunV2 API so that wandb Traces logging works correctly for new
LangChain versions. Here's an example
[run](https://wandb.ai/parambharat/langchain-tracing/runs/wpm99ftq) from
the existing tests
- **Issue:** https://github.com/wandb/wandb/issues/7762
- **Twitter handle:** @ParamBharat

_If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, ccurme, vbarda, hwchase17._
2024-06-10 13:56:35 -07:00
Oguz Vuruskaner
f0f4532579 community[patch]: fix deepinfra inference (#22680)
This PR includes:

1. Update of default model to LLama3.
2. Handle some 400x errors with more user-friendly error messages.
3. Handle user errors.
2024-06-10 13:55:55 -07:00
Lucas Tucker
cb79e80b0b docs: standardize ChatHuggingFace (#22693)
**Updated ChatHuggingFace doc string as per issue #22296**:
"langchain_huggingface: updated docstring for ChatHuggingFace in
langchain_huggingface to match that of the description (in the appendix)
provided in issue #22296. "

**Issue:** This PR is in response to issue #22296, and more specifically
ChatHuggingFace model. In particular, this PR updates the docstring for
langchain/libs/partners/hugging_face/langchain_huggingface/chat_models/huggingface.py
by adding the following sections: Instantiate, Invoke, Stream, Async,
Tool calling, and Response metadata. I used the template from the
Anthropic implementation and referenced the Appendix of the original
issue post. I also noted that: langchain_community hugging face llms do
not work with langchain_huggingface's ChatHuggingFace model (at least
for me); the .stream(messages) functionality of ChatHuggingFace only
returned a block of response.

---------

Co-authored-by: lucast2021 <lucast2021@headroyce.org>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-10 20:54:36 +00:00
Erick Friis
d92f2251c8 docs: couchbase partner package (#22757) 2024-06-10 20:53:03 +00:00
Tomaz Bratanic
76a193decc community[patch]: Add function response to graph cypher qa chain (#22690)
LLMs struggle with Graph RAG because it differs from vector RAG in
that you don't provide the whole context, only the answer, which the
LLM has to take on trust. That doesn't really work a lot of the time.
However, if you wrap the context as a function response, the accuracy is
much better.

btw... `union[LLMChain, Runnable]` is linting fun, that's why so many
ignores
2024-06-10 13:52:17 -07:00
X-HAN
34edfe4a16 community[minor]: add Volcengine Rerank (#22700)
**Description:** this PR adds Volcengine Rerank capability to Langchain,
you can find Volcengine Rerank API from
[here](https://www.volcengine.com/docs/84313/1254474) &
[here](https://www.volcengine.com/docs/84313/1254605).
[Volcengine](https://www.volcengine.com/) is a cloud service platform
developed by ByteDance, the parent company of TikTok. You can obtain
Volcengine API AK/SK from
[here](https://www.volcengine.com/docs/84313/1254553).

**Dependencies:** VolcengineRerank depends on `volcengine` python
package.

**Twitter handle:** my twitter/x account is https://x.com/LastMonopoly
and I'd like a mention, thank you!


**Tests and docs**
  1. integration test: `test_volcengine_rerank.py`
  2. example notebook: `volcengine_rerank.ipynb`

**Lint and test**: I have run `make format`, `make lint` and `make test`
from the root of the package I've modified.
2024-06-10 13:41:05 -07:00
Prakul
9eacce9356 docs:Update reference to langchain-mongodb (#22705)
**Description**: Update reference to langchain-mongodb
2024-06-10 13:35:21 -07:00
Ikko Eltociear Ashimine
4197c9c85f docs: update azure_container_apps_dynamic_sessions_data_analyst.ipynb (#22718)
colum -> column
2024-06-10 13:33:40 -07:00
Jacob Lee
e4183cbc4e docs[patch]: Add caution on OpenAI LLMs integration page (#22754)
@baskaryan do we like?

<img width="1040" alt="Screenshot 2024-06-10 at 12 16 45 PM"
src="https://github.com/langchain-ai/langchain/assets/6952323/8893063f-1acf-4a56-9ee5-a8a2b1560277">
2024-06-10 13:27:22 -07:00
Mohammad Mohtashim
c3cce98d86 community[patch]: Small Fix in OutlookMessageLoader (Close the Message once Open) (#22744)
- **Description:** A very small fix where we close the message once it has been
opened
- **Issue:** #22729
2024-06-10 13:08:39 -07:00
Bagatur
86a3f6edf1 docs: standardize ChatVertexAI (#22686)
Part of #22296. Part two of
https://github.com/langchain-ai/langchain-google/pull/287
2024-06-10 12:50:50 -07:00
ccurme
f9fdca6cc2 openai: add parallel_tool_calls to api ref (#22746)
![Screenshot 2024-06-10 at 1 41 24
PM](https://github.com/langchain-ai/langchain/assets/26529506/2626bf9c-41c6-4431-b2e1-f59de1e4e468)
2024-06-10 17:44:43 +00:00
Max Mulatz
058a64c563 Community[minor]: Add language parser for Elixir (#22742)
Hi 👋 

First off, thanks a ton for your work on this 💚 Really appreciate what
you're providing here for the community.

## Description

This PR adds a basic language parser for the
[Elixir](https://elixir-lang.org/) programming language. The parser code
is based upon the approach outlined in
https://github.com/langchain-ai/langchain/pull/13318: it's using
`tree-sitter` under the hood and aligns with all the other `tree-sitter`
based parses added that PR.

The `CHUNK_QUERY` I'm using here is probably not the most sophisticated
one, but it worked for my application. It's a starting point to provide
"core" parsing support for Elixir in LangChain. It enables people to use
the language parser out in real world applications which may then lead
to further tweaking of the queries. I consider this PR just the ground
work.

- **Dependencies:** requires `tree-sitter` and `tree-sitter-languages`
from the extended dependencies
- **Twitter handle:**`@bitcrowd`

## Checklist

- [x] **PR title**: "package: description"
- [x] **Add tests and docs**
- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified.

<!-- If no one reviews your PR within a few days, please @-mention one
of baskaryan, efriis, eyurtsev, ccurme, vbarda, hwchase17. -->
2024-06-10 15:56:57 +00:00
wangda
28e956735c docs:Correcting spelling mistakes in readme (#22664)
Signed-off-by: zhangwangda <zhangwangda94@163.com>
2024-06-10 15:33:41 +00:00
Gin
6f54abc252 docs: Add a missing dot in concepts.mdx (#22677) 2024-06-10 15:30:56 +00:00
Philippe PRADOS
2d4689d721 langchain[minor]: Add pgvector to list of supported vectorstores in self query retriever (#22678)
The fact that we outsourced pgvector to another project has an
unintended effect. The mapping dictionary found by
`_get_builtin_translator()` cannot recognize the new version of pgvector
because it comes from another package.
`SelfQueryRetriever` no longer knows `PGVector`.

I propose to fix this by creating a global dictionary that can be
populated by various database implementations. Thus, importing
`langchain_postgres` will allow the registration of the `PGvector`
mapping.

But for the moment I'm just adding a lazy import

Furthermore, the implementation of _get_builtin_translator()
reconstructs the BUILTIN_TRANSLATORS variable with each invocation,
which is not very efficient. A global map would be an optimization.

- **Twitter handle:** pprados

@eyurtsev, can you review this PR? And unlock the PR [Add async mode for
pgvector](https://github.com/langchain-ai/langchain-postgres/pull/32)
and PR [community[minor]: Add SQL storage
implementation](https://github.com/langchain-ai/langchain/pull/22207)?

Are you in favour of a global dictionary-based implementation of
Translator?
2024-06-10 11:27:47 -04:00
Lei Zhang
5ba1899cd7 infra: Scheduled GitHub Actions to run only on the upstream repository (#22707)
**Description:** Scheduled GitHub Actions to run only on the upstream
repository

**Issue:** Fixes #22706 

**Twitter handle:** @coolbeevip
2024-06-10 11:07:42 -04:00
Prakul
3f76c9e908 docs: Update MongoDB information in llm_caching (#22708)
**Description:**: Update MongoDB information in llm_caching
2024-06-10 11:05:55 -04:00
fzowl
c1fced9269 docs: VoyageAI new embedding and reranking models (#22719) 2024-06-09 09:12:43 -07:00
Enzo Poggio
8f019e91d7 community[patch]: Use Custom Logger Instead of Root Logger in get_user_agent Function (#22691)
## Description
This PR addresses a logging inconsistency in the `get_user_agent`
function. Previously, the function was using the root logger to log a
warning message when the "USER_AGENT" environment variable was not set.
This bypassed the custom logger `log` that was created at the start of
the module, leading to potential inconsistencies in logging behavior.

Changes:
- Replaced `logging.warning` with `log.warning` in the `get_user_agent`
function to ensure that the custom logger is used.

This change ensures that all logging in the `get_user_agent` function
respects the configurations of the custom logger, leading to more
consistent and predictable logging behavior.
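
A minimal illustration of the pattern (function body simplified; the exact message and fallback value are placeholders):

```python
import logging
import os

log = logging.getLogger(__name__)  # module-level logger, configured by the application

def get_user_agent() -> str:
    env_user_agent = os.environ.get("USER_AGENT")
    if not env_user_agent:
        # Use the module logger, not logging.warning(), so handler/level
        # configuration applied to `log` is respected.
        log.warning("USER_AGENT environment variable not set.")
        return "DefaultUserAgent"  # placeholder fallback
    return env_user_agent
```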

## Dependencies

None

## Issue 

None

## Tests and docs

☝🏻 see description


## `make format`, `make lint` & `cd libs/community; make test`

```shell
> make format 
poetry run ruff format docs templates cookbook
1417 files left unchanged
poetry run ruff check --select I --fix docs templates cookbook
All checks passed!
```

```shell
> make lint
poetry run ruff check docs templates cookbook
All checks passed!
poetry run ruff format docs templates cookbook --diff
1417 files already formatted
poetry run ruff check --select I docs templates cookbook
All checks passed!
git grep 'from langchain import' docs/docs templates cookbook | grep -vE 'from langchain import (hub)' && exit 1 || exit 0
```

~cd libs/community; make test~ too much dependencies for integration ...

```shell
>  poetry run pytest tests/unit_tests   
....
==== 884 passed, 466 skipped, 4447 warnings in 15.93s ====
```

I choose you randomly : @ccurme
2024-06-08 02:33:07 +00:00
Philippe PRADOS
9aabb446c5 community[minor]: Add SQL storage implementation (#22207)
Hello @eyurtsev

- package: langchain-community
- **Description**: Add SQL implementation for docstore. A new
implementation, in line with my other PR ([async
PGVector](https://github.com/langchain-ai/langchain-postgres/pull/32),
[SQLChatMessageMemory](https://github.com/langchain-ai/langchain/pull/22065))
- Twitter handler: pprados

---------

Signed-off-by: ChengZi <chen.zhang@zilliz.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Piotr Mardziel <piotrm@gmail.com>
Co-authored-by: ChengZi <chen.zhang@zilliz.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-06-07 21:17:02 +00:00
Nithish Raghunandanan
f2f0e0e13d couchbase: Add the initial version of Couchbase partner package (#22087)
Co-authored-by: Nithish Raghunandanan <nithishr@users.noreply.github.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-06-07 14:04:08 -07:00
Cahid Arda Öz
6c07eb0c12 community[minor]: Add UpstashRatelimitHandler (#21885)
Adding `UpstashRatelimitHandler` callback for rate limiting based on
number of chain invocations or LLM token usage.

For more details, see [upstash/ratelimit-py
repository](https://github.com/upstash/ratelimit-py) or the notebook
guide included in this PR.

Twitter handle: @cahidarda

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-06-07 21:02:06 +00:00
Erick Friis
9b3ce16982 docs: remove nonexistent headings (#22685) 2024-06-07 20:02:06 +00:00
Erick Friis
9e03864d64 core: add error message for non-structured llm to StructuredPrompt (#22684)
previously was the blank `NotImplementedError` from
`BaseLanguageModel.with_structured_output`
2024-06-07 19:42:09 +00:00
Jacob Lee
02ff78deb8 docs[patch]: Adds LangGraph and LangSmith links, adds more crosslinks between pages (#22656)
@baskaryan @hwchase17
2024-06-07 10:22:29 -07:00
Mateusz Szewczyk
c3a8716589 docs: Updated product version in Embeddings notebook (#22062) 2024-06-07 08:11:03 -07:00
ccurme
f32d57f6f0 anthropic: refactor streaming to use events api; add streaming usage metadata (#22628)
- Refactor streaming to use raw events;
- Add `stream_usage` class attribute and kwarg to stream methods that,
if True, will include separate chunks in the stream containing usage
metadata.

There are two ways to implement streaming with anthropic's python sdk.
They have slight differences in how they surface usage metadata.
1. [Use helper
functions](https://github.com/anthropics/anthropic-sdk-python?tab=readme-ov-file#streaming-helpers).
This is what we are doing now.
```python
count = 1
with client.messages.stream(**params) as stream:
    for text in stream.text_stream:
        snapshot = stream.current_message_snapshot
        print(f"{count}: {snapshot.usage} -- {text}")
        count = count + 1

final_snapshot = stream.get_final_message()
print(f"{count}: {final_snapshot.usage}")
```
```
1: Usage(input_tokens=8, output_tokens=1) -- Hello
2: Usage(input_tokens=8, output_tokens=1) -- !
3: Usage(input_tokens=8, output_tokens=1) --  How
4: Usage(input_tokens=8, output_tokens=1) --  can
5: Usage(input_tokens=8, output_tokens=1) --  I
6: Usage(input_tokens=8, output_tokens=1) --  assist
7: Usage(input_tokens=8, output_tokens=1) --  you
8: Usage(input_tokens=8, output_tokens=1) --  today
9: Usage(input_tokens=8, output_tokens=1) -- ?
10: Usage(input_tokens=8, output_tokens=12)
```
To do this correctly, we need to emit a new chunk at the end of the
stream containing the usage metadata.

2. [Handle raw
events](https://github.com/anthropics/anthropic-sdk-python?tab=readme-ov-file#streaming-responses)
```python
stream = client.messages.create(**params, stream=True)
count = 1
for event in stream:
    print(f"{count}: {event}")
    count = count + 1
```
```
1: RawMessageStartEvent(message=Message(id='msg_01Vdyov2kADZTXqSKkfNJXcS', content=[], model='claude-3-haiku-20240307', role='assistant', stop_reason=None, stop_sequence=None, type='message', usage=Usage(input_tokens=8, output_tokens=1)), type='message_start')
2: RawContentBlockStartEvent(content_block=TextBlock(text='', type='text'), index=0, type='content_block_start')
3: RawContentBlockDeltaEvent(delta=TextDelta(text='Hello', type='text_delta'), index=0, type='content_block_delta')
4: RawContentBlockDeltaEvent(delta=TextDelta(text='!', type='text_delta'), index=0, type='content_block_delta')
5: RawContentBlockDeltaEvent(delta=TextDelta(text=' How', type='text_delta'), index=0, type='content_block_delta')
6: RawContentBlockDeltaEvent(delta=TextDelta(text=' can', type='text_delta'), index=0, type='content_block_delta')
7: RawContentBlockDeltaEvent(delta=TextDelta(text=' I', type='text_delta'), index=0, type='content_block_delta')
8: RawContentBlockDeltaEvent(delta=TextDelta(text=' assist', type='text_delta'), index=0, type='content_block_delta')
9: RawContentBlockDeltaEvent(delta=TextDelta(text=' you', type='text_delta'), index=0, type='content_block_delta')
10: RawContentBlockDeltaEvent(delta=TextDelta(text=' today', type='text_delta'), index=0, type='content_block_delta')
11: RawContentBlockDeltaEvent(delta=TextDelta(text='?', type='text_delta'), index=0, type='content_block_delta')
12: RawContentBlockStopEvent(index=0, type='content_block_stop')
13: RawMessageDeltaEvent(delta=Delta(stop_reason='end_turn', stop_sequence=None), type='message_delta', usage=MessageDeltaUsage(output_tokens=12))
14: RawMessageStopEvent(type='message_stop')
```

Here we implement the second option, in part because it should make
things easier when implementing streaming tool calls in the near future.

This would add two new chunks to the stream-- one at the beginning and
one at the end-- with blank content and containing usage metadata. We
add kwargs to the stream methods and a class attribute allowing for this
behavior to be toggled. I enabled it by default. If we merge this we can
add the same kwargs / attribute to OpenAI.

Usage:
```python
from langchain_anthropic import ChatAnthropic

model = ChatAnthropic(
    model="claude-3-haiku-20240307",
    temperature=0
)

full = None
for chunk in model.stream("hi"):
    full = chunk if full is None else full + chunk
    print(chunk)

print(f"\nFull: {full}")
```
```
content='' id='run-8a20843f-25c7-4025-ad72-9add395899e3' usage_metadata={'input_tokens': 8, 'output_tokens': 0, 'total_tokens': 8}
content='Hello' id='run-8a20843f-25c7-4025-ad72-9add395899e3'
content='!' id='run-8a20843f-25c7-4025-ad72-9add395899e3'
content=' How' id='run-8a20843f-25c7-4025-ad72-9add395899e3'
content=' can' id='run-8a20843f-25c7-4025-ad72-9add395899e3'
content=' I' id='run-8a20843f-25c7-4025-ad72-9add395899e3'
content=' assist' id='run-8a20843f-25c7-4025-ad72-9add395899e3'
content=' you' id='run-8a20843f-25c7-4025-ad72-9add395899e3'
content=' today' id='run-8a20843f-25c7-4025-ad72-9add395899e3'
content='?' id='run-8a20843f-25c7-4025-ad72-9add395899e3'
content='' id='run-8a20843f-25c7-4025-ad72-9add395899e3' usage_metadata={'input_tokens': 0, 'output_tokens': 12, 'total_tokens': 12}

Full: content='Hello! How can I assist you today?' id='run-8a20843f-25c7-4025-ad72-9add395899e3' usage_metadata={'input_tokens': 8, 'output_tokens': 12, 'total_tokens': 20}
```
2024-06-07 13:21:46 +00:00
Bagatur
235d91940d community[patch]: Release 0.2.4 (#22643) 2024-06-06 17:47:44 -07:00
Francesco Kruk
344adad056 docs: Update jina embedding notebook to include multimodal capability (#22594)
After merging the [PR #22416 to include Jina AI multimodal
capabilities](https://github.com/langchain-ai/langchain/pull/22416), we
updated the Jina AI embedding notebook accordingly.
2024-06-07 00:02:20 +00:00
William FH
be79ce9336 [Core] Unified Enable/Disable Tracing (#22576) 2024-06-06 16:54:35 -07:00
Leonid Ganeline
57c1239643 docs: arxiv page update (#22574)
Added a link to search the arXiv papers with references to LangChain.
Updated table: better format (no horizontal scroll in table anymore).
2024-06-06 16:51:02 -07:00
Bagatur
fe2e5a3b74 langchain[patch]: Release 0.2.3 (#22644) 2024-06-06 16:29:18 -07:00
Erick Friis
a24a9c6427 multiple: get rid of pyproject extras (#22581)
They cause `poetry lock` to take a ton of time, and `uv pip install` can
resolve the constraints from these toml files in trivial time
(addressing problem with #19153)

This allows us to properly upgrade lockfile dependencies moving forward,
which revealed some issues that were either fixed or type-ignored (see
file comments)
2024-06-06 15:45:22 -07:00
Bagatur
4367e89c9a core[patch]: Release 0.2.5 (#22642) 2024-06-06 15:44:26 -07:00
Eugene Yurtsev
28f744c1f5 core[patch]: Correctly order parent ids in astream events (from root to immediate parent), add defensive check for cycles (#22637)
This PR makes two changes:

1. Fixes the order of parent IDs to be from root to immediate parent
2. Adds a simple defensive check for cycles
2024-06-06 20:37:52 +00:00
Satyam Kumar
835926153b updated oracleai_demo.ipynb (#22635)
The outer try/except block handles connection errors, and the inner
try/except block handles SQL execution errors, providing detailed error
messages for both.
```python
try:
    conn = oracledb.connect(user=username, password=password, dsn=dsn)
    print("Connection successful!")

    cursor = conn.cursor()
    try:
        cursor.execute(
            """
            begin
                -- Drop user
                begin
                    execute immediate 'drop user testuser cascade';
                exception
                    when others then
                        dbms_output.put_line('Error dropping user: ' || SQLERRM);
                end;
```

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-06-06 20:29:24 +00:00
Eugene Yurtsev
035a9c9609 core[minor]: Add parent_ids to astream_events API (#22563)
Include a list of parent ids for each event in astream events.
2024-06-06 16:14:28 -04:00
Tomaz Bratanic
67e58fdc2e docs[patch]: Fix diffbot docs (#22584) 2024-06-06 16:08:59 -04:00
Eugene Yurtsev
6b8963ad92 docs: Add information about run time binding values to tools (#22623)
Add how-to guide that shows a design pattern for creating tools at run time
2024-06-06 16:05:34 -04:00
CharlesCNorton
aa49163bdf docs[patch]: typo in AutoGPT example notebook (#22631)
Corrected a typo in the AutoGPT example notebook. Changed "Needed synce
jupyter runs an async eventloop" to "Needed since Jupyter runs an async
event loop".

2024-06-06 16:05:11 -04:00
CharlesCNorton
ffe75d1e46 docs: typo in dev container documentation (#22630)
Removed an extra space before the period in the "Click **Create
codespace on master**." line.

2024-06-06 16:04:48 -04:00
Nicolas Nkiere
51005e2776 core[minor]: Add an async root listener and with_alisteners method (#22151)
- [x] **Adding AsyncRootListener**: "langchain_core: Adding
AsyncRootListener"

- **Description:** Adding an AsyncBaseTracer, AsyncRootListener and a
`with_alisteners` method. This enables binding async root listeners to
runnables; previously this was only supported for sync listeners.
- **Issue:** None
- **Dependencies:** None

- [x] **Add tests and docs**: Added unit tests and an example snippet
within the docstring of `with_alisteners`


- [x] **Lint and test**: Run make format_diff, make lint_diff and make
test
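A hedged sketch of binding async listeners (the runnable itself is illustrative; the listener callables receive the tracing Run object):

```python
import asyncio

from langchain_core.runnables import RunnableLambda

async def on_start(run) -> None:
    print("started:", run.name)

async def on_end(run) -> None:
    print("finished:", run.name)

chain = RunnableLambda(lambda x: x * 2).with_alisteners(on_start=on_start, on_end=on_end)
print(asyncio.run(chain.ainvoke(21)))
```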
2024-06-06 16:03:44 -04:00
seyf97
2904c50cd5 openai[patch]: correct grammar in exception message in embeddings/base.py (#22629)
Correct the grammar error for missing transformers package ValueError
2024-06-06 18:55:04 +00:00
Anush
80560419b0 qdrant[patch]: Make path optional in from_existing_collection() (#21875)
## Description

The `path` param is used to specify the local persistence directory,
which isn't required if using Qdrant server.

This is a breaking but necessary change.
2024-06-06 10:37:08 -07:00
ccurme
b57aa89f34 multiple: implement ls_params (#22621)
implement ls_params for ai21, fireworks, groq.
2024-06-06 16:51:37 +00:00
Xiangrui Meng
f26ab93df8 community: support Databricks Unity Catalog functions as LangChain tools (#22555)
This PR adds support for using Databricks Unity Catalog functions as
LangChain tools, which runs inside a Databricks SQL warehouse.

* An example notebook is provided.
2024-06-06 09:38:50 -07:00
ccurme
c1ef731503 anthropic: update attribute name and alias (#22625)
update name to `stop_sequences` and alias to `stop` (instead of the
other way around), since `stop_sequences` is the name used by anthropic.
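A small sketch of the two spellings (assumes population by field name remains enabled, as with other LangChain models):

```python
from langchain_anthropic import ChatAnthropic

# Canonical Anthropic name, now the attribute:
llm = ChatAnthropic(model="claude-3-haiku-20240307", stop_sequences=["\n\nHuman:"])

# The generic LangChain spelling is kept as an alias:
llm = ChatAnthropic(model="claude-3-haiku-20240307", stop=["\n\nHuman:"])
```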
2024-06-06 12:29:10 -04:00
lucasiscovici
05bf98b2f9 community[patch]: pgvector replace nin_ by not_in (#22619)
- [ ] **community**: "pgvector: replace nin_ by not_in"

- [ ] **PR message**: `nin_` does not exist in the SQLAlchemy ORM; the correct operator is `not_in`
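For reference, a minimal SQLAlchemy sketch of the correct operator (the mapped class is illustrative):

```python
from sqlalchemy import Column, Integer, String, select
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Embedding(Base):
    __tablename__ = "langchain_pg_embedding"  # illustrative table
    id = Column(Integer, primary_key=True)
    topic = Column(String)

# `nin_` raises AttributeError; the ORM spelling is `not_in` (or the older `notin_`).
stmt = select(Embedding).where(Embedding.topic.not_in(["sports", "finance"]))
```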
2024-06-06 12:17:22 -04:00
ccurme
3999761201 multiple: add stop attribute (#22573) 2024-06-06 12:11:52 -04:00
ccurme
e08879147b Revert "anthropic: stream token usage" (#22624)
Reverts langchain-ai/langchain#20180
2024-06-06 12:05:08 -04:00
Bagatur
0d495f3f63 anthropic: stream token usage (#20180)
open to other ideas

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-06-06 11:51:34 -04:00
liuzc9
e0e40f3f63 docs: Fix typo in llmonitor.md (#22590) 2024-06-06 15:26:51 +00:00
Bagatur
feb73d4281 docs: Add ChatGoogleGenerativeAI to model feat table (#22617) 2024-06-06 08:07:13 -07:00
Satyam Kumar
17b486a37b openai, azure: update model_name in ChatResult to use name from API response (#22569)
The response.get("model", self.model_name) checks if the model key
exists in the response dictionary. If it does, it uses that value;
otherwise, it uses self.model_name.


---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-06 11:00:09 -04:00
Suganth Solamanraja
02495ae7c5 docs: Correct return type in docstring (#22597)
Thank you for contributing to LangChain!

- [x] **PR title**: "package: description"
- Where "package" is whichever of langchain, community, core,
experimental, etc. is being modified. Use "docs: ..." for purely docs
changes, "templates: ..." for template changes, "infra: ..." for CI
changes.
  - Example: "community: add foobar LLM"


- [x] **PR message**: 
- **Description:** This PR corrects the return type in the docstring of
the `docs/api_reference/create_api_rst.py/_load_package_modules`
function. The return type was previously described as a list of

Co-authored-by: suganthsolamanraja <suganth.solamanraja@techjays..com>
2024-06-06 14:51:46 +00:00
svmpsp-rc
51942c03eb docs: correct typos in Italian words (#22606)
**Description**

Fix typos in Italian words.
2024-06-06 07:46:07 -07:00
Gabriele Ghisleni
95883a99a9 docs: ElasticsearchCacheStore in stores integrations documentation (#22612)
The package for LangChain integrations with Elasticsearch
https://github.com/langchain-ai/langchain-elastic contains a
Elasticsearch byte store cache integration (see
https://github.com/langchain-ai/langchain-elastic/pull/27). This is the
documentation contribution on the page dedicated to stores integrations

Co-authored-by: Gabriele Ghisleni <gabriele.ghisleni@spaziodati.eu>
2024-06-06 14:36:43 +00:00
Christophe Bornet
12ddb4fc6f core[patch]: Use explicit classes for InMemoryByteStore and InMemoryStore (#22608)
The current implementation doesn't work well with type checking.
Instead replace with class definition that correctly works with type
checking.
2024-06-06 07:34:43 -07:00
andyjessen
cfed68e06f docs: Fix description (#22611)
This commit fixes the description of the hair_color field.
2024-06-06 07:25:27 -07:00
ccurme
1925bde32e together: bump langchain-core (#22616)
langchain-together depends on langchain-openai ^0.1.8
langchain-openai 0.1.8 has langchain-core >= 0.2.2

Here we bump langchain-core to 0.2.2, just to pass minimum dependency
version tests.
2024-06-06 14:09:40 +00:00
ccurme
35f4aa927b together[patch]: Release 0.1.3 (#22615) 2024-06-06 13:58:35 +00:00
Asi Greenholts
f23bec7be6 docs: Fix typo (#22596)
Fix typo
2024-06-06 08:39:54 -04:00
CharlesCNorton
abb0cecb44 fix: typo in Agents section of README (#22599)
Corrected the phrase "complete done" to "completely done" for better
grammatical accuracy and clarity in the Agents section of the README.


---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-06-06 07:44:36 -04:00
Kirushikesh DB
db7e7b69e3 docs: Removed unwanted cell in refine segment (#22604)
**Description:**
There is one unwanted duplicate cell in refine section of summarization
documentation, i have removed it.
2024-06-06 07:40:26 -04:00
andyjessen
8b40428f58 docs: Fix typo (#22603)
This commit changes minor typo in the field description.
2024-06-06 07:38:36 -04:00
Isaac Francisco
ba3e219d83 community[patch]: recursive url loader fix and unit tests (#22521)
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-05 17:56:20 -07:00
Jacob Lee
234394f631 docs[minor]: Add "Build a PDF ingestion and Question/Answering system" tutorial (#22570)
More direct entrypoint for a common use-case. Meant to give people a
more hands-on intro to document loaders/loading data from different data
sources as well.

Some duplicate content for RAG and extraction (to show what you can do
with the loaded documents), but defers to the appropriate sections
rather than going too in-depth.

@baskaryan @hwchase17
2024-06-05 17:09:28 -07:00
Jeffrey Mak
5fc5ed463c community[patch]:Support filter for AzureAISearchRetriever (#22303)
**Description**: 
The AzureAISearchRetriever does not support the "$filter" argument
offered in the AISearch API:
https://learn.microsoft.com/en-us/rest/api/searchservice/documents/search-get?view=rest-searchservice-2023-11-01&tabs=HTTP
The $filter allows filtering of indexes based on values in metadata.
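A hedged sketch of how such a filter might be passed (the `filter` parameter name follows the PR title and is an assumption, the OData expression and index are illustrative, and the service endpoint/key are read from the usual environment variables):

```python
from langchain_community.retrievers import AzureAISearchRetriever

retriever = AzureAISearchRetriever(
    index_name="my-index",        # illustrative index
    content_key="content",
    top_k=5,
    filter="source eq 'wiki'",    # forwarded to the API as $filter (assumed parameter name)
)
docs = retriever.invoke("What is our refund policy?")
```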

**Issue**: 
https://github.com/langchain-ai/langchain/issues/19885

**Dependencies**: 
No

**Twitter handle**: 
@Jeffreym9M
 

- [ ] **Add tests and docs**: Not relevant


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
2024-06-05 16:53:19 -07:00
Isaac Francisco
148088a588 docs: duckduckgosearch options listed (#22568)
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-05 23:29:47 +00:00
Mikhail Khludnev
ef868bc24b docs: mentioning query_instruction with regards to BGE-M3 (#22405)
see
https://github.com/langchain-ai/langchain/pull/18017#issuecomment-2143942760
https://huggingface.co/BAAI/bge-m3#faq

Co-authored-by: mikhail-khludnev <mikhail_khludnev@rntgroup.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-06-05 22:44:40 +00:00
X-HAN
62f13f95e4 community[minor]: add DashScope Rerank (#22403)
**Description:** this PR adds DashScope Rerank capability to Langchain,
you can find DashScope Rerank API from
[here](https://help.aliyun.com/document_detail/2780058.html?spm=a2c4g.2780059.0.0.6d995024FlrJ12)
&
[here](https://help.aliyun.com/document_detail/2780059.html?spm=a2c4g.2780058.0.0.63f75024cr11N9).
[DashScope](https://dashscope.aliyun.com/) is the generative AI service
from Alibaba Cloud (Aliyun). You can create DashScope API key from
[here](https://bailian.console.aliyun.com/?apiKey=1#/api-key).
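A hedged sketch of using the reranker directly (module path follows the PR's naming; verify against the example notebook, and DASHSCOPE_API_KEY is assumed to be set):

```python
from langchain_community.document_compressors.dashscope_rerank import DashScopeRerank
from langchain_core.documents import Document

reranker = DashScopeRerank(top_n=2)
docs = [
    Document(page_content="LangChain documentation"),
    Document(page_content="Alibaba Cloud DashScope generative AI service"),
    Document(page_content="Unrelated text"),
]
reranked = reranker.compress_documents(docs, query="What is DashScope?")
```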

**Dependencies:** DashScopeRerank depends on `dashscope` python package.

**Twitter handle:** my twitter/x account is https://x.com/LastMonopoly
and I'd like a mention, thank you!


**Tests and docs**
  1. integration test: `test_dashscope_rerank.py`
  2. example notebook: `dashscope_rerank.ipynb`

**Lint and test**: I have run `make format`, `make lint` and `make test`
from the root of the package I've modified.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-06-05 15:40:21 -07:00
Ethan Yang
29064848f9 [Community]add option to delete the prompt from HF output (#22225)
This will help solve the pattern-mismatch issue when parsing the
output in an Agent.

https://github.com/langchain-ai/langchain/issues/21912
2024-06-05 18:38:54 -04:00
Jacob Lee
c040dc7017 docs[patch]: Adds heading keywords to concepts page (#22577)
@efriis @baskaryan
2024-06-05 15:28:58 -07:00
Erick Friis
24fa17593f docs: update agentexecutor title to legacy (#22575) 2024-06-05 15:09:41 -07:00
Bagatur
584a1e30ac community[patch]: AzureSearch async functions (#22075) 2024-06-05 14:39:54 -07:00
Bagatur
1a911018bc langchain[minor]: add universal init_model (#22039)
decisions to discuss
- only chat models
- model_provider isn't based on any existing values like llm-type,
package names, class names
- implemented as function not as a wrapper ChatModel
- function name (init_model)
- in langchain as opposed to community or core
- marked beta
2024-06-05 14:39:40 -07:00
Isaac Francisco
67012c2558 docs: deprecation of max_length parameter used in Exa search (#22567) 2024-06-05 12:09:53 -07:00
ccurme
af129974a3 community: update how OpenAIAssistantV2Runnable creates threads with tool_resources (#22549)
https://github.com/langchain-ai/langchain/issues/22503
2024-06-05 14:19:41 -04:00
Bagatur
51a0d4574e community[patch]: Release 0.2.3 (#22562) 2024-06-05 17:27:24 +00:00
Bagatur
b2daba37c7 nomic[patch]: Release 0.1.2 (#22561) 2024-06-05 17:06:58 +00:00
Zach Nussbaum
14f3014cce embeddings: nomic embed vision (#22482)
Thank you for contributing to LangChain!

**Description:** Adds Langchain support for Nomic Embed Vision
**Twitter handle:** nomic_ai,zach_nussbaum


- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.



---------

Co-authored-by: Lance Martin <122662504+rlancemartin@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-05 09:47:17 -07:00
leila-messallem
3280a5b49b community[patch]: improve test setup to accurately test filtering of labels in neo4j (#22531)
**Description:** This PR addresses an issue with an existing test that
was not effectively testing the intended functionality. The previous
test setup did not adequately validate the filtering of the labels in
neo4j, because the nodes and relationships in the test data did not have
any properties set. Without properties, these labels would not have been
returned, regardless of the filtering.

---------

Co-authored-by: Oskar Hane <oh@oskarhane.com>
2024-06-05 15:56:53 +00:00
Mohammad Mohtashim
7fcef2556c [Experimental]: Async agenerate method ollama functions (#21682)
- **Description:**
Added the async `agenerate` method for OllamaFunctions, which was missing
and was raising errors for users.
   
- **Issue:** 
#21422
2024-06-05 11:50:36 -04:00
Stefano Lottini
328d0c99f2 community[minor]: Add support for metadata indexing policy in Cassandra vector store (#22548)
This PR adds a constructor `metadata_indexing` parameter to the
Cassandra vector store to allow optional fine-tuning of which fields of
the metadata are to be indexed.

This is a feature supported by the underlying CassIO library. Indexing
mode of "all", "none" or deny- and allow-list based choices are
available.

The rationale is that, in some cases, it's advisable to programmatically
exclude some portions of the metadata from the index if one knows in
advance they won't ever be used at search time. This keeps the index
more lightweight and performant and avoids limitations on the length of
_indexed_ strings.

I added an integration test of the feature. I also added the possibility
of running the integration test with Cassandra on an arbitrary IP
address (e.g. Dockerized), via
`CASSANDRA_CONTACT_POINTS=10.1.1.5,10.1.1.6 poetry run pytest [...]` or
similar.

While I was at it, I added a line to the `.gitignore` since the mypy
_test_ cache was not ignored yet.

My X (Twitter) handle: @rsprrs.
2024-06-05 11:23:26 -04:00
Emilien Chauvet
c3d4126eb1 community[minor]: add user agent for web scraping loaders (#22480)
**Description:** This PR adds a `USER_AGENT` env variable that is to be
used for web scraping. It creates a util to get that user agent and uses
it in the classes used for scraping in [this piece of
doc](https://python.langchain.com/v0.1/docs/use_cases/web_scraping/).
Identifying your scraper is considered good politeness practice; this
PR aims to make that easier.
**Issue:** `None`
**Dependencies:** `None`
**Twitter handle:** `None`
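A small sketch of the intended usage (the UA string is illustrative); setting the variable before importing the loader lets the new util pick it up:

```python
import os

os.environ["USER_AGENT"] = "my-langchain-app/0.1 (+https://example.com/contact)"

from langchain_community.document_loaders import WebBaseLoader

loader = WebBaseLoader("https://python.langchain.com/")
docs = loader.load()
```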
2024-06-05 15:20:34 +00:00
Philippe PRADOS
8250c177de community[minor]: Add native async support to SQLChatMessageHistory (#22065)
# package community: Fix SQLChatMessageHistory

## Description
Here is a rewrite of `SQLChatMessageHistory` to properly implement the
asynchronous approach. The code circumvents [issue
22021](https://github.com/langchain-ai/langchain/issues/22021) by
accepting a synchronous call to `def add_messages()` in an asynchronous
scenario. This bypasses the bug.

For the same reasons as in [PR
32](https://github.com/langchain-ai/langchain-postgres/pull/32) of
`langchain-postgres`, we use a lazy strategy for table creation. Indeed,
the promise of the constructor cannot be fulfilled without this. It is
not possible to invoke a synchronous call in a constructor. We
compensate for this by waiting for the next asynchronous method call to
create the table.

The goal of the `PostgresChatMessageHistory` class (in
`langchain-postgres`) is, among other things, to be able to recycle
database connections. The implementation of the class is problematic, as
we have demonstrated in [issue
22021](https://github.com/langchain-ai/langchain/issues/22021).

Our new implementation of `SQLChatMessageHistory` achieves this by using
a singleton of type (`Async`)`Engine` for the database connection. The
connection pool is managed by this singleton, and the code is then
reentrant.

We also accept the type `str` (optionally complemented by `async_mode`.
I know you don't like this much, but it's the only way to allow an
asynchronous connection string).

In order to unify the different classes handling database connections,
we have renamed `connection_string` to `connection`, and `Session` to
`session_maker`.

Now, a single transaction is used to add a list of messages. Thus, a
crash during this write operation will not leave the database in an
unstable state with a partially added message list. This makes the code
resilient.

We believe that the `PostgresChatMessageHistory` class is no longer
necessary and can be replaced by:
```
PostgresChatMessageHistory = SQLChatMessageHistory
```
This also fixes the bug.
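A hedged sketch of the new async path (assumes the aiosqlite driver is installed; `connection` and `async_mode` are the parameters described above):

```python
import asyncio

from langchain_community.chat_message_histories import SQLChatMessageHistory
from langchain_core.messages import AIMessage, HumanMessage

async def main() -> None:
    history = SQLChatMessageHistory(
        session_id="session-1",
        connection="sqlite+aiosqlite:///chat_history.db",
        async_mode=True,
    )
    # The whole list is written in a single transaction.
    await history.aadd_messages([HumanMessage(content="hi"), AIMessage(content="hello!")])
    print(await history.aget_messages())

asyncio.run(main())
```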


## Issue
- [issue 22021](https://github.com/langchain-ai/langchain/issues/22021)
  - Bug in _exit_history()
  - Bugs in PostgresChatMessageHistory and sync usage
  - Bugs in PostgresChatMessageHistory and async usage
- [issue
36](https://github.com/langchain-ai/langchain-postgres/issues/36)
 ## Twitter handle:
pprados

## Tests
- libs/community/tests/unit_tests/chat_message_histories/test_sql.py
(add async test)

@baskaryan, @eyurtsev or @hwchase17 can you check this PR ?
And, I've been waiting a long time for validation from other PRs. Can
you take a look?
- [PR 32](https://github.com/langchain-ai/langchain-postgres/pull/32)
- [PR 15575](https://github.com/langchain-ai/langchain/pull/15575)
- [PR 13200](https://github.com/langchain-ai/langchain/pull/13200)

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-06-05 15:10:38 +00:00
Vincent Min
59bef31997 community[minor]: Improve InMemoryVectorStore with ability to persist to disk and filter on metadata. (#22186)
- **Description:** The InMemoryVectorStore is a nice and simple vector
store implementation for quick development and debugging. The current
implementation is quite limited in its functionalities. This PR extends
the functionalities by adding utility functions to persist the vector
store to a JSON file and to load it from a JSON file. We chose the JSON
file format because it allows inspection of the database contents in a
text editor, which is great for debugging. Furthermore, it adds a
`filter` keyword that can be used to filter out documents on their
`page_content` or `metadata`.
- **Issue:** -
- **Dependencies:** -
- **Twitter handle:** @Vincent_Min
2024-06-05 10:40:34 -04:00
Christophe Bornet
c34ad8c163 core[patch]: Improve VectorStore API doc (#22547) 2024-06-05 10:23:44 -04:00
maang-h
89128b7a49 community[patch]: add detailed paragraph and example for BaichuanTextEmbeddings (#22031)
- **Description:** add detailed paragraph and example for
BaichuanTextEmbeddings
   - **Issue:** the issue #21983
2024-06-05 10:18:11 -04:00
Anthony Bernabeu
4e676a63b8 community[minor]: Added filter search for LanceDB (#22461)
- [ ] **community**: "vectorstore: added filtering support for LanceDB
vector store"

- [ ] **This PR adds filtering capabilities to LanceDB**:
- **Description:** In LanceDB, filtering can be applied when searching
for data in the vector store. Filters use SQL syntax, as described
in the LanceDB documentation.
    - **Issue:** #18235 
    - **Dependencies:** No

- [ ] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.

- [ ] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
2024-06-05 09:33:54 -04:00
Erick Friis
4050d6ea2b huggingface: remove text-generation dep (#22543) 2024-06-05 12:13:40 +00:00
Erick Friis
a6fc74f379 ai21: fix core version (#22544) 2024-06-05 08:09:19 -04:00
Asaf Joseph Gardin
75cba742e5 ai21: fix ai21 unittests (#22526)
Co-authored-by: Asaf Gardin <asafg@ai21.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-06-05 08:00:42 -04:00
Erick Friis
58192d617f community: fix huggingface deprecations (#22522) 2024-06-05 04:13:13 +00:00
Jacob Lee
1e748a6d40 docs[patch]: Adds links to deprecations page (#22514)
@baskaryan
2024-06-04 16:19:32 -07:00
William FH
91fed3ace7 [Docs] Structured output Keywords (#22511) 2024-06-04 20:56:05 +00:00
Christophe Bornet
8ba868d3b0 core[patch]: Add similarity_score_threshold to VectorStore search types (#22477) 2024-06-04 13:43:55 -07:00
Eugene Yurtsev
9120cf5df2 core[patch]: Deduplicate of callback handlers in merge_configs (#22478)
This PR adds deduplication of callback handlers in merge_configs.

Fix for this issue:
https://github.com/langchain-ai/langchain/issues/22227

The issue appears when the code is:

1) running python >=3.11
2) invokes a runnable from within a runnable
3) binds the callbacks to the child runnable from the parent runnable
using with_config

In this case, the same callbacks end up appearing twice: (1) the first
time from with_config, (2) the second time with langchain automatically
propagating them on behalf of the user.


Prior to this PR this will emit duplicate events:

```python
@tool
async def get_items(question: str, callbacks: Callbacks):  # <--- Accept callbacks
    """Ask question"""
    template = ChatPromptTemplate.from_messages(
        [
            (
                "human",
                "'{question}"
            )
        ]
    )
    chain = template | chat_model.with_config(
        {
            "callbacks": callbacks,  # <-- Propagate callbacks
        }
    )
    return await chain.ainvoke({"question": question})
```

Prior to this PR this will work work correctly (no duplicate events):

```python
@tool
async def get_items(question: str, callbacks: Callbacks):  # <--- Accept callbacks
    """Ask question"""
    template = ChatPromptTemplate.from_messages(
        [
            (
                "human",
                "'{question}"
            )
        ]
    )
    chain = template | chat_model
    return await chain.ainvoke({"question": question}, {"callbacks": callbacks})
```

This will also work (as long as the user is using python >= 3.11) -- as
langchain will automatically propagate callbacks

```python
@tool
async def get_items(question: str,):  
    """Ask question"""
    template = ChatPromptTemplate.from_messages(
        [
            (
                "human",
                "'{question}"
            )
        ]
    )
    chain = template | chat_model
    return await chain.ainvoke({"question": question})
```
2024-06-04 16:19:00 -04:00
Jacob Lee
64dbc52cae docs[patch]: Update quickstart tutorial (#22504)
Mentions LCEL more, hopefully flags it to more people as a simple
entrypoint

@baskaryan @hwchase17
2024-06-04 13:04:56 -07:00
Ofer Mendelevitch
ad502e8d50 community[minor]: Vectara Integration Update - Streaming, FCS, Chat, updates to documentation and example notebooks (#21334)
Thank you for contributing to LangChain!

**Description:** update to the Vectara / Langchain integration to
integrate new Vectara capabilities:
- Full RAG implemented as a Runnable with as_rag()
- Vectara chat supported with as_chat()
- Both support streaming response
- Updated documentation and example notebook to reflect all the changes
- Updated Vectara templates

**Twitter handle:** ofermend

**Add tests and docs**: no new tests or docs, but updated both existing
tests and existing docs
2024-06-04 12:57:28 -07:00
Bagatur
cb183a9bf1 docs: update anthropic chat model (#22483)
Related to #22296

Also updates anthropic to accept base_url
2024-06-04 12:42:06 -07:00
Erick Friis
d700ce8545 robocorp: typo (#22509) 2024-06-04 15:33:38 -04:00
Erick Friis
39fd44579a robocorp: release 0.0.9.post1 (#22507) 2024-06-04 15:32:30 -04:00
Erick Friis
339e3b7f55 ai21: release 0.1.6 (#22508) 2024-06-04 15:31:23 -04:00
ccurme
3c53cea760 together, upstage: bump minimum langchain-openai version (#22505) 2024-06-04 15:20:41 -04:00
Erick Friis
c438b5b78e docs: fix api ref link generation (#22438)
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-04 12:09:22 -07:00
Bagatur
efcb04f84b mongodb[patch]: Release 0.1.6 (#22501) 2024-06-04 12:01:37 -07:00
Bagatur
222b1ba112 groq[patch]: Release 0.1.5 (#22500) 2024-06-04 12:01:17 -07:00
Bagatur
f021be510e milvus[patch]: Release 0.1.1 (#22499) 2024-06-04 12:00:53 -07:00
Bagatur
64d68c17cd upstage[patch]: Release 0.1.6 (#22498) 2024-06-04 11:58:44 -07:00
Bagatur
48fba40fce experimental[patch]: Release 0.0.60 (#22497) 2024-06-04 11:56:42 -07:00
Bagatur
e60f88ccdd community[patch]: Release 0.2.2 (#22496) 2024-06-04 11:42:11 -07:00
Bagatur
85aa218564 langchain[patch]: Release 0.2.2 (#22495) 2024-06-04 11:33:45 -07:00
Bagatur
8e86080def mistralai[patch]: Release 0.1.8 (#22494) 2024-06-04 11:33:06 -07:00
Bagatur
e850de2422 huggingface[patch]: release 0.0.2 (#22493) 2024-06-04 11:32:36 -07:00
Jacob Lee
593de8a913 docs[patch]: Add robots.txt and root sitemap (#22492)
CC @efriis @baskaryan
2024-06-04 11:26:40 -07:00
Bagatur
99a3cad258 text-splitters[patch]: Release 0.2.1 (#22490) 2024-06-04 11:19:21 -07:00
Bagatur
161b02a8be core[patch]: Release 0.2.4 (#22489) 2024-06-04 11:14:54 -07:00
Ragul Kachiappan
50258a7dda docs: Update chroma docs link for collection reference (#22472)
Thank you for contributing to LangChain!

- [x] **PR title**: "package: description"
- Where "package" is whichever of langchain, community, core,
experimental, etc. is being modified. Use "docs: ..." for purely docs
changes, "templates: ..." for template changes, "infra: ..." for CI
changes.
  - Example: "community: add foobar LLM"


- [x] **PR message**: 
- **Description:** Updated dead link referencing chroma docs in Chroma
notebook under vectorstores
2024-06-04 18:01:13 +00:00
nareshnagpal06
9b45374118 docs: Added Semantic Cache Example with BedrockChat using Bedrock Embeddings and Opensearch Semantic Cache (#22190)


---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-04 17:40:29 +00:00
Joydeep Banik Roy
3796672c67 community, milvus, pinecone, qdrant, mongo: Broadcast operation failure while using simsimd beyond v3.7.7 (#22271)
- [ ] **Packages affected**: 
  - community: fix `cosine_similarity` to support simsimd beyond 3.7.7
- partners/milvus: fix `cosine_similarity` to support simsimd beyond
3.7.7
- partners/mongodb: fix `cosine_similarity` to support simsimd beyond
3.7.7
- partners/pinecone: fix `cosine_similarity` to support simsimd beyond
3.7.7
- partners/qdrant: fix `cosine_similarity` to support simsimd beyond
3.7.7


- [ ] **Broadcast operation failure while using simsimd beyond v3.7.7**:
- **Description:** I was using simsimd 4.3.1 and the unsupported-operand-type
issue popped up. When I checked out the repo and ran the tests, they
failed as well (screenshot attached). It looks like a variant of
https://github.com/langchain-ai/langchain/issues/18022. Prior to 3.7.7,
simd.cdist returned an ndarray, but it now returns a simsimd.DistancesTensor,
which is ineligible for a broadcast operation with numpy. This change also
removes the need to explicitly cast `Z` to a numpy array (see the sketch
after this list).
    - **Issue:** #19905
    - **Dependencies:** No
    - **Twitter handle:** https://x.com/GetzJoydeep

<img width="1622" alt="Screenshot 2024-05-29 at 2 50 00 PM"
src="https://github.com/langchain-ai/langchain/assets/31132555/fb27b383-a9ae-4a6f-b355-6d503b72db56">

- [ ] **Considerations**: 
1. I started with community but since similar changes were there in
Milvus, MongoDB, Pinecone, and QDrant so I modified their files as well.
If touching multiple packages in one PR is not the norm, then I can
remove them from this PR and raise separate ones
2. I have run and verified that the tests work. Since, only MongoDB had
tests, I ran theirs and verified it works as well. Screenshots attached
:
<img width="1573" alt="Screenshot 2024-05-29 at 2 52 13 PM"
src="https://github.com/langchain-ai/langchain/assets/31132555/ce87d1ea-19b6-4900-9384-61fbc1a30de9">
<img width="1614" alt="Screenshot 2024-05-29 at 3 33 51 PM"
src="https://github.com/langchain-ai/langchain/assets/31132555/6ce1d679-db4c-4291-8453-01028ab2dca5">
  

I have added a test for simsimd. I feel it may not go well with the
CI/CD setup, as installing simsimd is not a dependency requirement. I
have just imported simsimd to ensure simsimd cosine similarity is
invoked. However, it's not a good approach. Suggestions are welcome and I
can make the required changes on the PR. Please provide guidance on the
same, as I am new to the community.
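A sketch of the essence of the fix, assuming the helper lives alongside the existing `cosine_similarity` utilities: wrap the `simsimd.DistancesTensor` in `np.array` before broadcasting.

```python
import numpy as np
import simsimd

def cosine_similarity(X, Y) -> np.ndarray:
    X = np.array(X, dtype=np.float32)
    Y = np.array(Y, dtype=np.float32)
    # simsimd >= 3.7.7 returns a DistancesTensor; np.array() makes the
    # subtraction below a plain numpy broadcast again.
    return 1 - np.array(simsimd.cdist(X, Y, metric="cosine"))
```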

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-06-04 17:36:31 +00:00
KyrianC
03178ee74f community[minor]: Add tools calls to ChatEdenAI (#22320)
### Description  
Add tools implementation to `ChatEdenAI`:
- `bind_tools()`
- `with_structured_output()`
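A hedged sketch of the new interface (constructor arguments are illustrative):

```python
from langchain_community.chat_models import ChatEdenAI
from langchain_core.pydantic_v1 import BaseModel

class City(BaseModel):
    name: str
    country: str

llm = ChatEdenAI(provider="openai", model="gpt-4o", temperature=0)  # illustrative settings
structured_llm = llm.with_structured_output(City)
structured_llm.invoke("Tell me about Paris")
```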

### Documentation 
Updated `docs/docs/integrations/chat/edenai.ipynb`

### Notes
We don't support streaming with tools yet. If stream is called with
tools, we directly yield the whole message from `generate` (implemented
the same way as Anthropic did).
2024-06-04 10:29:28 -07:00
pranavvuppala
9d4350e69a docs : Update docstrings for OpenAI base.py (#22221)
- [x] **PR title**: Update docstrings for OpenAI base.py
-**Description:** Updated the docstring of few OpenAI functions for a
better understanding of the function.
    - **Issue:** #21983

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-04 17:24:17 +00:00
Anindyadeep
7a197539aa community[patch]: Native RAG Support in Prem AI langchain (#22238)
This PR adds native RAG support to the langchain premai package. The same
has been added to the docs too.
2024-06-04 10:19:54 -07:00
Rahul Triptahi
77ad857934 community[minor]: Enable retrieval api calls in PebbloRetrievalQA (#21958)
Description: Enable app discovery and Prompt/Response apis in
PebbloSafeRetrieval
Documentation: NA
Unit test: N/A

---------

Signed-off-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
Co-authored-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
2024-06-04 10:18:50 -07:00
liugz18
8fd231086e experimental[patch]: Fix graph_transformers llms #21482 (#22417)
Fix AttributeError on calling
LLMGraphTransformer.convert_to_graph_documents #21482

 since raw_schema is always a str

@baskaryan
2024-06-04 17:07:38 +00:00
ccurme
6db25b4e31 core[patch]: bump langsmith (#22476)
Noticing errors logged in some situations when tracing with Langsmith:
```python
from langchain_core.pydantic_v1 import BaseModel
from langchain_anthropic import ChatAnthropic


class AnswerWithJustification(BaseModel):
    """An answer to the user question along with justification for the answer."""
    answer: str
    justification: str


llm = ChatAnthropic(model="claude-3-haiku-20240307")
structured_llm = llm.with_structured_output(AnswerWithJustification)

list(structured_llm.stream("What weighs more a pound of bricks or a pound of feathers"))
```
```
Error in LangChainTracer.on_chain_end callback: AttributeError("'NoneType' object has no attribute 'append'")
[AnswerWithJustification(answer='A pound of bricks and a pound of feathers weigh the same amount.', justification='This is because a pound is a unit of mass, not volume. By definition, a pound of any material, whether bricks or feathers, will weigh the same - one pound. The physical size or volume of the materials does not matter when measuring by mass. So a pound of bricks and a pound of feathers both weigh exactly one pound.')]
```
2024-06-04 10:05:53 -07:00
Bagatur
17c127531a community[patch]: deprecate all HF classes (#22444) 2024-06-04 09:48:25 -07:00
Nuno Campos
58b118544e Use immutable sequence type for batch/batch_as_completed types (#22433)
2024-06-04 08:04:09 -07:00
Christophe Bornet
9a8fe58ebe community[minor]: Improve Cassandra VectorStore as_retriever (#22465)
The VectorStore API's `as_retriever` doesn't explicitly expose the
parameters `search_type` and `search_kwargs`, so they are not well
documented.
This PR improves `as_retriever` for the Cassandra VectorStore by making
these parameters explicit.

NB: An alternative would have been to modify `as_retriever` in
`VectorStore`. But there's probably a good reason these were not exposed
in the first place? Is it because implementations may decide not to
support them and have fixed values when creating the
VectorStoreRetriever?
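A small sketch of the now-explicit parameters (`vstore` is assumed to be an existing Cassandra vector store; the values are illustrative):

```python
retriever = vstore.as_retriever(
    search_type="mmr",
    search_kwargs={"k": 4, "fetch_k": 20},
)
docs = retriever.invoke("what changed in the retriever API?")
```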
2024-06-04 09:51:17 -04:00
Christophe Bornet
23bba18f92 core[patch]: Fix VectorStore's as_retriever mutating tags param (#22470)
The current VectorStore `as_retriever` implementation mutates the `tags`
param when it's passed in kwargs.
This fix ensures that a copy is done.
2024-06-04 09:50:36 -04:00
Michal Gregor
98b2e7b195 huggingface[patch]: Support for HuggingFacePipeline in ChatHuggingFace. (#22194)
- **Description:** Added support for using HuggingFacePipeline in
ChatHuggingFace (previously it was only usable with API endpoints,
probably by oversight); see the sketch below.
- **Issue:** #19997 
- **Dependencies:** none
- **Twitter handle:** none
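A minimal sketch of the newly supported local-pipeline path (model name and generation settings are illustrative):

```python
from langchain_huggingface import ChatHuggingFace, HuggingFacePipeline

llm = HuggingFacePipeline.from_model_id(
    model_id="microsoft/Phi-3-mini-4k-instruct",
    task="text-generation",
    pipeline_kwargs={"max_new_tokens": 64},
)
chat = ChatHuggingFace(llm=llm)  # previously only API-endpoint LLMs were accepted here
print(chat.invoke("Say hello in one short sentence.").content)
```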

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-06-04 00:47:35 +00:00
Fahreddin Özcan
0061ded002 community[patch]: Upstash Vector Store Namespace Support (#22251)
This PR introduces namespace support for Upstash Vector Store, which
would allow users to partition their data in the vector index.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-06-03 17:30:56 -07:00
Isaac Francisco
25cf1a74d5 docs: rag tutorial small fixes (#22450) 2024-06-04 00:16:54 +00:00
Jacob Lee
b0f014666d docs[patch]: Adds search keywords for common queries (#22449)
CC @baskaryan @efriis @ccurme
2024-06-03 16:30:17 -07:00
Guangdong Liu
bc7e32f315 core(patch):fix partial_variables not working with SystemMessagePromptTemplate (#20711)
- **Issue:**  close #17560
- @baskaryan, @eyurtsev
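A hedged sketch of the behavior the fix restores (the template text is illustrative):

```python
from langchain_core.prompts import SystemMessagePromptTemplate

prompt = SystemMessagePromptTemplate.from_template(
    "You are a {role}. Today's date is {date}.",
    partial_variables={"date": "2024-06-03"},
)
print(prompt.format(role="helpful assistant"))
```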
2024-06-03 16:22:42 -07:00
Martin Kolb
f2dd31b9e8 docs: Fix doc issue for HANA Cloud Vector Engine (#22260)
- **Description:**
This PR fixes a rendering issue in the docs (Python notebook) of HANA
Cloud Vector Engine.

  - **Issue:** N/A
  - **Dependencies:** no new dependencies added

File of the fixed notebook:
`docs/docs/integrations/vectorstores/hanavector.ipynb`
2024-06-03 15:53:43 -07:00
Dristy Srivastava
ef3df45d9d community[minor]: Updating payload for pebblo discover API (#22309)
**Description:** Updating the response for the pebblo discover API. Also
updating field name casing.
**Documentation:** N/A
**Unit tests:** N/A
2024-06-03 15:36:17 -07:00
Miroslav
cbd5720011 huggingface[patch]: Skip Login to HuggingFaceHub when token is not set (#22365) 2024-06-03 15:20:32 -07:00
Stefano Lottini
f78ae1d932 docs: Astra DB vectorstore, add automatic-embedding example (#22350)
Description: Adding an example showcasing the newly-introduced API-side
embedding computation option for the Astra DB vector store
2024-06-03 15:13:57 -07:00
bhardwaj-vipul
f397a84a59 langchain[patch]: Fix MongoDBAtlasVectorSearch reference in self query retriever (#22401)
**Description:** 
SelfQuery Retriever with MongoDBAtlasVectorSearch (from
langchain_mongodb import MongoDBAtlasVectorSearch) and
Chroma (from langchain_chroma import Chroma) is not supported.
The imports in the [builtin
translators](8cbce684d4/libs/langchain/langchain/retrievers/self_query/base.py (L73))
points to the
[deprecated](acaf214a45/libs/community/langchain_community/vectorstores/mongodb_atlas.py (L36))
vectorstore.

**Issue:** 
#22272

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-06-03 22:10:15 +00:00
ccurme
afe89a1411 community: add standard chat model params to Ollama (#22446) 2024-06-03 17:45:03 -04:00
Isaac Francisco
5119ab2fb9 docs: agents tutorial wording (#22447) 2024-06-03 14:40:01 -07:00
Ethan Yang
52da6a160d community[patch]: Update OpenVINO embedding and reranker to support static input shape (#22171)
It can help deploy embedding models on NPU devices.
2024-06-03 13:27:17 -07:00
Tom Clelford
c599732e1a text-splitters[patch]: fix HTMLSectionSplitter parsing of xslt paths (#22176)
## Description
This PR allows passing the HTMLSectionSplitter paths to xslt files. It
does so by fixing two trivial bugs with how passed paths were being
handled. It also changes the default value of the param `xslt_path` to
`None` so the special case where the file was part of the langchain
package could be handled.

## Issue
#22175
2024-06-03 20:26:59 +00:00
maang-h
01352bb55f community[minor]: Implement MiniMaxChat interface (#22391)
- **Description:** Implement MiniMaxChat interface, include:
    - No longer inherits the LLM class (like other chat model)
    - Update request parameters (v1 -> v2)
        - update `base url`
        - update message role (system, user, assistant)
        - add `stream` function
        - no longer use `group id`
    - Implement the `_stream`, `_agenerate`, and `_astream` interfaces

[minimax v2 api
document](https://platform.minimaxi.com/document/guides/chat-model/V2?id=65e0736ab2845de20908e2dd)
2024-06-03 13:22:38 -07:00
Brandon Sharp
56e5aa4dd9 community[patch]: Airtable to allow for addtl params (#22092)
- [X] **PR title**: "community: added optional params to Airtable
table.all()"


- [X] **PR message**: 
- **Description:** Adds **kwargs to AirtableLoader to allow for kwargs:
https://pyairtable.readthedocs.io/en/latest/api.html#pyairtable.Table.all
    - **Issue:** N/A
    - **Dependencies:** N/A
    - **Twitter handle:** parakoopa88


- [X] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [X] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/


If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, ccurme, vbarda, hwchase17.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-06-03 13:05:56 -07:00
Harichandan Roy
1f751343e2 community[patch]: update embeddings/oracleai.py (#22240)
Thank you for contributing to LangChain!

- [ ] **PR title**: "package: description"
- Where "package" is whichever of langchain, community, core,
experimental, etc. is being modified. Use "docs: ..." for purely docs
changes, "templates: ..." for template changes, "infra: ..." for CI
changes.
  - Example: "community: add foobar LLM"

"community/embeddings: update oracleai.py"

- [ ] **PR message**: ***Delete this entire checklist*** and replace
with
    - **Description:** a description of the change
    - **Issue:** the issue # it fixes, if applicable
    - **Dependencies:** any dependencies required for this change
- **Twitter handle:** if your PR gets announced, and you'd like a
mention, we'll gladly shout you out!

Adding oracle VECTOR_ARRAY_T support.

- [ ] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.

Tests are not impacted.

- [ ] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

Done.

Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional
ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in
langchain.


If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, ccurme, vbarda, hwchase17.
2024-06-03 12:38:51 -07:00
maang-h
13140dc4ff community[patch]: Update the default api_url and request_body of sparkllm embedding (#22136)
- **Description:** When I was running SparkLLMTextEmbeddings, the
app_id, api_key and api_secret were all correct, but it could not run
normally with the current URL.

    ```python
    # example
    from langchain_community.embeddings import SparkLLMTextEmbeddings

    embeddings = SparkLLMTextEmbeddings(
        spark_app_id="my-app-id",
        spark_api_key="my-api-key",
        spark_api_secret="my-api-secret",
    )
    text = "hello"
    print(embeddings.embed_query(text))
    ```

   
So I updated the url and request body parameters according to
[Embedding_api](https://www.xfyun.cn/doc/spark/Embedding_api.html), now
it is runnable.
2024-06-03 12:38:11 -07:00
Yuwen Hu
ba0dca46d7 community[minor]: Add IPEX-LLM BGE embedding support on both Intel CPU and GPU (#22226)
**Description:** [IPEX-LLM](https://github.com/intel-analytics/ipex-llm)
is a PyTorch library for running LLM on Intel CPU and GPU (e.g., local
PC with iGPU, discrete GPU such as Arc, Flex and Max) with very low
latency. This PR adds ipex-llm integrations to langchain for BGE
embedding support on both Intel CPU and GPU.
**Dependencies:** `ipex-llm`, `sentence-transformers`
**Contribution maintainer**: @Oscilloscope98 
**tests and docs**: 
- langchain/docs/docs/integrations/text_embedding/ipex_llm.ipynb
- langchain/docs/docs/integrations/text_embedding/ipex_llm_gpu.ipynb
-
langchain/libs/community/tests/integration_tests/embeddings/test_ipex_llm.py

---------

Co-authored-by: Shengsheng Huang <shannie.huang@gmail.com>
2024-06-03 12:37:10 -07:00
Jacob Lee
c01467b1f4 core[patch]: RFC: Allow concatenation of messages with multi part content (#22002)
Anthropic's streaming treats tool calls as different content parts
(streamed back with a different index) from normal content in the
`content`.

This means that we need to update our chunk-merging logic to handle
chunks with multi-part content. The alternative is coerceing Anthropic's
responses into a string, but we generally like to preserve model
provider responses faithfully when we can. This will also likely be
useful for multimodal outputs in the future.

This current PR does unfortunately make `index` a magic field within
content parts, but Anthropic and OpenAI both use it at the moment to
determine order anyway. To avoid cases where we have content arrays with
holes and to simplify the logic, I've also restricted merging to chunks
in order.

TODO: tests

CC @baskaryan @ccurme @efriis
2024-06-03 09:46:40 -07:00
Dan
86509161b0 community: fix AzureSearch delete documents (#22315)
**Description**

Fix AzureSearch delete documents method by using FIELDS_ID variable
instead of the hard coded "id" value

**Issue:** 

This is linked to this issue:
https://github.com/langchain-ai/langchain/issues/22314

Co-authored-by: dseban <dan.seban@neoxia.com>
2024-06-03 15:55:06 +00:00
Harrison Chase
8fad2e209a fix error message (#22437)
It was confusing when a language is in the Enum but not implemented.
2024-06-03 15:48:26 +00:00
Bagatur
678a19a5f7 infra: bump anthropic mypy 1 (#22373) 2024-06-03 08:21:55 -07:00
Nuno Campos
ceb73ad06f core: In BaseRetriever make get_relevant_docs delegate to invoke (#22434)
- This fixes all the tracing issues with people still using
get_relevant_docs, and a change we need for 0.3 anyway

2024-06-03 07:34:53 -07:00
Zheng Robert Jia
1ad1dc5303 docs: resolve minor syntax error. (#22375)
Used the correct magic command. 
Changed from `% pip...` to `%pip`

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-06-03 14:34:24 +00:00
Charles John
2d81a72884 community: fix missing apify_api_token field in ApifyWrapper (#22421)
- **Description:** The `ApifyWrapper` class expects `apify_api_token` to
be passed as a named parameter or set as an environment variable. But
the corresponding field was missing in the class definition causing the
argument to be ignored when passed as a named param. This patch fixes
that.
2024-06-03 14:32:57 +00:00
Klaudia Lemiec
dac355fc62 docs: notebook loader: change .html to .ipynb (#22407)
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-06-03 14:26:28 +00:00
Joan Fontanals
a7ae16f912 add embed_image API to JinaEmbedding (#22416)
- **Description:** Add `embed_image` to JinaEmbedding to embed images
 - **Twitter handle:** https://x.com/JinaAI_
2024-06-03 10:23:37 -04:00
Qingchuan Hao
3e92ed8056 docs: add Microsoft Azure to ChatModelTabs (#22367)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-06-03 10:19:00 -04:00
Nuno Campos
ed8e9c437a core: In RunnableSequence pass kwargs to the first step (#22393)
- This is a pattern that shows up occasionally in langgraph questions:
people chain a graph to something else afterwards and want to pass the graph
some kwargs (e.g. stream_mode)
2024-06-03 14:18:10 +00:00
Jeffrey Morgan
eabcfaa3d6 Update Ollama instructions (#22394) 2024-06-03 10:17:35 -04:00
Harrison Chase
acaf214a45 update agent docs (#22370)
to use create_react_agent

---------

Co-authored-by: William Fu-Hinthorn <13333726+hinthornw@users.noreply.github.com>
2024-06-01 08:28:32 -07:00
Jacob Lee
16cce76a68 👥 Update LangChain people data (#22388)
👥 Update LangChain people data

Co-authored-by: github-actions <github-actions@github.com>
2024-06-01 07:36:45 -07:00
Jacob Lee
8a57102918 docs[patch]: Fix typo (#22377) 2024-05-31 16:37:05 -07:00
Bagatur
4d82cea71f docs: fix llm caches redirect (#22371) 2024-05-31 19:37:06 +00:00
Bagatur
a8098f5ddb anthropic[patch]: Release 0.1.15, fix sdk tools break (#22369) 2024-05-31 12:10:22 -07:00
Erick Friis
6ffa0acf32 ai21: fix text-splitters version (#22366) 2024-05-31 11:41:05 -04:00
Erick Friis
1bad0ac946 docs: redirect integration links to 0.2 (#22326) 2024-05-31 11:40:48 -04:00
ccurme
8cbce684d4 docs: update retriever how-to content (#22362)
- [x] How to: use a vector store to retrieve data
- [ ] How to: generate multiple queries to retrieve data for
- [x] How to: use contextual compression to compress the data retrieved
- [x] How to: write a custom retriever class
- [x] How to: add similarity scores to retriever results
^ done last month
- [x] How to: combine the results from multiple retrievers
- [x] How to: reorder retrieved results to mitigate the "lost in the
middle" effect
- [x] How to: generate multiple embeddings per document
^ this PR
- [ ] How to: retrieve the whole document for a chunk
- [ ] How to: generate metadata filters
- [ ] How to: create a time-weighted retriever
- [ ] How to: use hybrid vector and keyword retrieval
^ todo
2024-05-31 10:57:35 -04:00
Jacob Lee
75ed9ee929 docs: Fix Solar and OCI integration page typos (#22343)
@efriis @baskaryan
2024-05-31 10:36:12 -04:00
Bagatur
0214246dc6 docs: list tool calling models (#22334) 2024-05-30 14:32:33 -07:00
Bagatur
410e9add44 infra: run scheduled tests on aws, google, cohere, nvidia (#22328)
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-05-30 13:57:12 -07:00
Harrison Chase
0c9a034ed7 add simpler agent tutorial (#22249)
1/ added section at start with full code
2/ removed retriever tool (was just distracting)
3/ added section on starting a new conversation

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-05-30 12:33:32 -07:00
Bagatur
2b9f1469d8 core[patch]: Release 0.2.3 (#22329) 2024-05-30 11:35:09 -07:00
Harrison Chase
ee32369265 core[patch]: fix runnable history and add docs (#22283) 2024-05-30 11:26:41 -07:00
William FH
dcec133b85 [Core] Update Tracing Interops (#22318)
LangSmith and LangChain context var handling evolved in parallel since
originally we didn't expect people to want to interweave the decorator
and langchain code.

Once we get a new langsmith release, this PR will let you seamlessly
hand off between @traceable context and runnable config context so you
can arbitrarily nest code.

It's expected that this fails right now until we get another release of
the SDK
2024-05-30 10:34:49 -07:00
ccurme
f34337447f openai: update ChatOpenAI api ref (#22324)
Update to reflect that token usage is no longer the default in streaming
mode.

Add detail for the streaming context under the Token Usage section.
2024-05-30 12:31:28 -04:00
ChengZi
2443e85533 docs: fix milvus import and update template (#22306)
docs: fix milvus import problem
update milvus-rag template with milvus-lite

Signed-off-by: ChengZi <chen.zhang@zilliz.com>
2024-05-30 08:28:55 -07:00
WU LIFU
86698b02a9 doc: fix wrong documentation on FAISS load_local function (#22310)
### Issue: #22299 

### descriptions
The documentation appears to be wrong. When the user actually sets this
"asynchronous" parameter to True, it fails because the __init__
function of the FAISS class doesn't accept this parameter. In fact, most of
the class/instance functions of this class have both sync and async
versions, so it looks like what we need is just to remove this parameter
from the doc.


Co-authored-by: Lifu Wu <lifu@nextbillion.ai>
2024-05-30 15:15:04 +00:00
maang-h
596c062cba community[patch]: Standardize qianfan model init args name (#22322)
- **Description:**  
    - Standardize qianfan chat model initialization argument names (see the
sketch below)
        - qianfan_ak (qianfan api key) -> api_key
        - qianfan_sk (qianfan secret key) -> secret_key
    - Delete unused variable
- **Issue:** #20085
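
A minimal sketch of the renamed arguments described above, assuming the `QianfanChatEndpoint` import path from `langchain_community` and placeholder credentials:

```python
# Illustrative only: placeholder credentials; old aliases should keep working.
from langchain_community.chat_models import QianfanChatEndpoint

chat = QianfanChatEndpoint(
    api_key="your-qianfan-api-key",        # formerly qianfan_ak
    secret_key="your-qianfan-secret-key",  # formerly qianfan_sk
)
print(chat.invoke("Hello").content)
```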
2024-05-30 11:08:32 -04:00
KhoPhi
c64b0a3095 Docs: Ollama (LLM, Chat Model & Text Embedding) (#22321)
- [x] Docs Update: Ollama
  - llm/ollama 
- Switched to using llama3 as the model, with references to templating and
prompting
      - Added concurrency notes to llm/ollama docs
  - chat_models/ollama
      - Added concurrency notes to chat_models/ollama docs
  - text_embedding/ollama
     - Included an example for specific embedding models from Ollama
2024-05-30 11:06:45 -04:00
Dobiichi-Origami
10b12e1c08 community: adding tool_call_id for every ToolCall (#22323)
- **Description:** This PR contains a bugfix for a malfunction in
multi-turn conversations in QianfanChatEndpoint, plus an adaptation for
ToolCall and ToolMessage
2024-05-30 10:59:08 -04:00
Bagatur
569d325a59 docs: link GH org (#22308) 2024-05-30 00:17:59 -07:00
Bagatur
93049d1563 docs: make llm cache its own section (#22301) 2024-05-30 00:17:33 -07:00
Bagatur
04631439c9 docs: add v0.2 links to README (#22300) 2024-05-29 16:22:01 -07:00
ccurme
f39e1a2288 community, docs: update token usage tracking callback + how-to guides (#22145) 2024-05-29 17:00:47 -04:00
Bagatur
2bc50fb895 docs, cli[patch]: chat model template nit (#22294) 2024-05-29 20:53:58 +00:00
Bagatur
aa6c31df53 cli[patch]: Release 0.0.24 (#22293) 2024-05-29 13:37:34 -07:00
Bagatur
627a337887 docs, cli[patch]: chat model doc template (#22290)
Update ChatModel integration doc template, integration docstring, and
adds langchain-cli command to easily create just doc (for updating
existing integrations):

```bash
langchain-cli integration create-doc --name "foo-bar"
```
2024-05-29 13:34:58 -07:00
Wu Enze
f40e341a03 docs : Added integrations for memory with langchain_community (#22265)
PR title: Integration Docs enhancement

Description: Adding installation instructions for integrations requiring
langchain-community package since 0.2
Issue: [#22005](https://github.com/langchain-ai/langchain/issues/22005)
2024-05-29 16:12:05 -04:00
ccurme
6e1df72a88 openai[patch]: Release 0.1.8 (#22291) 2024-05-29 20:08:30 +00:00
ccurme
e71b0b5827 core[patch]: Release 0.2.2 (#22289) 2024-05-29 19:51:37 +00:00
William FH
9d6cabe84a Update sequence.ipynb (#22288) 2024-05-29 19:34:44 +00:00
Daniel Glogowski
7ff05357ba docs: updating NIM documentation (#22258)
Updating NVIDIA NIM notebooks and readme file.

Thanks!
Daniel
2024-05-29 10:28:39 -07:00
Bagatur
6dd0f095c3 docs: revamp ChatOpenAI (#22253)
Can build API ref docs by running
```bash
make api_docs_clean; make api_docs_quick_preview API_PKG=openai
```
only builds openai ref, takes ~20 sec
2024-05-29 10:20:14 -07:00
Erick Friis
00c70d98c2 robocorp: release 0.0.9 (#22282) 2024-05-29 16:49:18 +00:00
Mikko Korpela
fc5909ad6f langchain-robocorp: Fix parsing of Union types (such as Optional). (#22277) 2024-05-29 09:47:02 -07:00
ccurme
af1f723ada openai: don't override stream_options default (#22242)
ChatOpenAI supports a kwarg `stream_options` which can take values
`{"include_usage": True}` and `{"include_usage": False}`.

Setting include_usage to True adds a message chunk to the end of the
stream with usage_metadata populated. In this case the final chunk no
longer includes `"finish_reason"` in the `response_metadata`. This is
the current default and is not yet released. Because this could be
disruptive to workflows, here we remove this default. The default will
now be consistent with OpenAI's API (see parameter
[here](https://platform.openai.com/docs/api-reference/chat/create#chat-create-stream_options)).

Examples:
```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI()

for chunk in llm.stream("hi"):
    print(chunk)
```
```
content='' id='run-8cff4721-2acd-4551-9bf7-1911dae46b92'
content='Hello' id='run-8cff4721-2acd-4551-9bf7-1911dae46b92'
content='!' id='run-8cff4721-2acd-4551-9bf7-1911dae46b92'
content='' response_metadata={'finish_reason': 'stop'} id='run-8cff4721-2acd-4551-9bf7-1911dae46b92'
```

```python
for chunk in llm.stream("hi", stream_options={"include_usage": True}):
    print(chunk)
```
```
content='' id='run-39ab349b-f954-464d-af6e-72a0927daa27'
content='Hello' id='run-39ab349b-f954-464d-af6e-72a0927daa27'
content='!' id='run-39ab349b-f954-464d-af6e-72a0927daa27'
content='' response_metadata={'finish_reason': 'stop'} id='run-39ab349b-f954-464d-af6e-72a0927daa27'
content='' id='run-39ab349b-f954-464d-af6e-72a0927daa27' usage_metadata={'input_tokens': 8, 'output_tokens': 9, 'total_tokens': 17}
```

```python
llm = ChatOpenAI().bind(stream_options={"include_usage": True})

for chunk in llm.stream("hi"):
    print(chunk)
```
```
content='' id='run-59918845-04b2-41a6-8d90-f75fb4506e0d'
content='Hello' id='run-59918845-04b2-41a6-8d90-f75fb4506e0d'
content='!' id='run-59918845-04b2-41a6-8d90-f75fb4506e0d'
content='' response_metadata={'finish_reason': 'stop'} id='run-59918845-04b2-41a6-8d90-f75fb4506e0d'
content='' id='run-59918845-04b2-41a6-8d90-f75fb4506e0d' usage_metadata={'input_tokens': 8, 'output_tokens': 9, 'total_tokens': 17}
```
2024-05-29 10:30:40 -04:00
Karim Lalani
a1899439fc [experimental][llms][ollama_functions] Update OllamaFunctions to send tool_calls attribute (#21625)
Update OllamaFunctions to return `tool_calls` for AIMessages when used
for tool calling.
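
A hedged sketch of what this enables, assuming a local Ollama server with a llama3 model and the experimental import path; the tool, model name, and output shape are illustrative:

```python
from langchain_core.tools import tool
from langchain_experimental.llms.ollama_functions import OllamaFunctions

@tool
def get_weather(city: str) -> str:
    """Look up the current weather for a city."""
    return f"Sunny in {city}"

# Requires a running Ollama server; model name is illustrative.
llm = OllamaFunctions(model="llama3", format="json").bind_tools([get_weather])
ai_msg = llm.invoke("What's the weather in Paris?")
print(ai_msg.tool_calls)  # expected to carry the tool call after this change
```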
2024-05-29 09:38:33 -04:00
Bagatur
d61bdeba25 core[patch]: allow access RunnableWithFallbacks.runnable attrs (#22139)
RFC, candidate fix for #13095 #22134
2024-05-28 13:18:09 -07:00
SteveLiao
7496fe2b16 Update parent_document_retriever.py about **kwargs (#22219)
Add kwargs to the add_documents function

**langchain**: Add **kwargs in parent_document_retriever
 - **Add kwargs for `add_documents` in `parent_document_retriever.py`**


2024-05-28 11:35:38 -07:00
Mark Cusack
8dfa3c5f1a Update/fix docs to list Yellowbrick as a supported indexed vectorstore (#22235)
Update/fix docs to list Yellowbrick as a supported indexed vectorstore
and fix the Jupyter notebook.
2024-05-28 11:34:49 -07:00
Erick Friis
93240fac68 milvus: fix core dep (#22239) 2024-05-28 10:21:37 -07:00
Erick Friis
611faa22c7 infra: allow first releases 2 (#22237) 2024-05-28 09:53:21 -07:00
Erick Friis
26c6e4a5ef infra: allow first releases (#22236) 2024-05-28 09:39:40 -07:00
ChengZi
404d92ded0 milvus: New langchain_milvus package and new milvus features (#21077)
New features:

- New langchain_milvus package in partner
- Milvus collection hybrid search retriever
- Zilliz cloud pipeline retriever
- Milvus Local guide
- Rag-milvus template

---------

Signed-off-by: ChengZi <chen.zhang@zilliz.com>
Signed-off-by: Jael Gu <mengjia.gu@zilliz.com>
Co-authored-by: Jael Gu <mengjia.gu@zilliz.com>
Co-authored-by: Jackson <jacksonxie612@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Erick Friis <erickfriis@gmail.com>
2024-05-28 08:24:20 -07:00
Leonid Ganeline
d7f70535ba docs: arxiv page, added cookbooks (#22215)
Issue: The `arXiv` page is missing the arxiv paper references from the
`langchain/cookbook`.
PR: Added the cookbook references.
Result: `Found 29 arXiv references in the 3 docs, 21 API Refs, 5
Templates, and 18 Cookbooks.` - many more references are now visible.
2024-05-27 15:47:02 -07:00
Leonid Ganeline
d6995e814b ai21[patch]: added license (#22153)
The `pyproject.toml` was missing the `license` parameter. I've added it as
`MIT`.
2024-05-27 15:14:14 -07:00
Maddy Adams
8332a36f69 infra: update langchainhub and add integration test (#22154)
**Description:** Update langchainhub integration test dependency and add
an integration test for pulling private prompt
**Dependencies:** langchainhub 0.1.16
2024-05-27 14:58:10 -07:00
Will Higgins
83d10df78d community[patch]: Update firecrawl api key name (#22183)
Change 'FIREWALL' to 'FIRECRAWL', as I believe this may have been an
error. Other docs refer to 'FIRECRAWL_API_KEY'.


---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-27 21:39:29 +00:00
hmasdev
bbd7015b5d core[patch]: Add TypeError handler into get_graph of Runnable (#19856)
# Description

## Problem

`Runnable.get_graph` fails when `InputType` or `OutputType` property
raises `TypeError`.

-
003c98e5b4/libs/core/langchain_core/runnables/base.py (L250-L274)
-
003c98e5b4/libs/core/langchain_core/runnables/base.py (L394-L396)

This problem prevents getting a graph of `Runnable` objects whose
`InputType` or `OutputType` property raises `TypeError` but whose
`invoke` works well, such as `langchain.output_parsers.RegexParser`,
for which I have already pointed out in #19792 that a `TypeError`
occurs.

## Solution

- Add a `try-except` block to handle `TypeError` in the code that gets
`input_node` and `output_node` (sketched below).
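
A self-contained sketch of the guard described in the solution (not the actual patch; class and function names are illustrative):

```python
# Mimic a runnable whose type property raises TypeError, and show the
# try/except fallback described above.
class ExplodingTypes:
    @property
    def InputType(self):
        raise TypeError("cannot infer input type")

def node_or_none(obj, attr):
    try:
        return getattr(obj, attr)
    except TypeError:
        return None  # fall back instead of letting graph construction fail

print(node_or_none(ExplodingTypes(), "InputType"))  # -> None
```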

# Issue
- #19801 

# Twitter Handle
- [hmdev3](https://twitter.com/hmdev3)

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-27 21:34:34 +00:00
acho98
753353411f docs: Fix Clova embeddings example document (#22181)
- [ ] **PR title**: "Fix list handling in Clova embeddings example
documentation"
  - Description:
Fixes a bug in the Clova Embeddings example documentation where
document_text was incorrectly wrapped in an additional list.
   - Rationale
The embed_documents method expects a list, but the previous example
wrapped document_text in an unnecessary additional list, causing an
error. The updated example correctly passes document_text directly to
the method, ensuring it functions as intended.
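
A minimal sketch of the corrected call, assuming the `ClovaEmbeddings` import path from `langchain_community` and credentials supplied via environment variables:

```python
from langchain_community.embeddings import ClovaEmbeddings  # import path assumed

embeddings = ClovaEmbeddings()  # credentials read from env vars in practice
document_text = ["This is a test.", "This is another test."]

# Old (broken) example: embeddings.embed_documents([document_text])
vectors = embeddings.embed_documents(document_text)  # pass the list of strings directly
```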
2024-05-27 14:31:34 -07:00
Mohammad Mohtashim
577ed68b59 mistralai[patch]: Added Json Mode for ChatMistralAI (#22213)
- **Description:** Implemented
[ChatMistralAI.with_structured_output](fbfed65fb1/libs/partners/mistralai/langchain_mistralai/chat_models.py (L609))
via JSON mode (see the sketch below)
 

-  **Issue:** #22081
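
A rough sketch of structured output via JSON mode; the model name and schema are illustrative, and the exact `method` value is an assumption based on the description above:

```python
from langchain_core.pydantic_v1 import BaseModel
from langchain_mistralai import ChatMistralAI

class Person(BaseModel):
    name: str
    age: int

llm = ChatMistralAI(model="mistral-large-latest")  # requires MISTRAL_API_KEY
structured_llm = llm.with_structured_output(Person, method="json_mode")
print(structured_llm.invoke("Alice is 30 years old."))
```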

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-27 21:16:52 +00:00
Pranith
25c270b5a5 docs : Added integrations for tools with langchain_community (#22188)
PR title: Docs enhancement

Description: Adding installation instructions for integrations requiring
langchain-community package since 0.2
Issue: https://github.com/langchain-ai/langchain/issues/22005
2024-05-27 14:06:40 -07:00
Ibrahim
cfea0e231a Update llm_chain.ipynb text (#22198)
Added the missing verb "is" and a comma to the text in the Prompt
Templates description within the Build a Simple LLM Application tutorial
for more clarity.
2024-05-27 19:57:41 +00:00
Aditya
bf81ecd3b4 docs:updated documentation for llama, falcon and gemma on Vertex AI Model garden (#22201)
- **Description:** updated documentation for llama, falcon and gemma on
Vertex AI Model Garden
    - **Issue:** NA
    - **Dependencies:** NA
    - **Twitter handle:** NA

@lkuligin for review

---------

Co-authored-by: adityarane@google.com <adityarane@google.com>
2024-05-27 12:56:11 -07:00
Pavlo Paliychuk
342df7cf83 community[minor]: Add Zep Cloud components + docs + examples (#21671)
Thank you for contributing to LangChain!

- [x] **PR title**: community: Add Zep Cloud components + docs +
examples

- [x] **PR message**: 
We have recently released our new zep-cloud sdks that are compatible
with Zep Cloud (not Zep Open Source). We have also maintained our Cloud
version of langchain components (ChatMessageHistory, VectorStore) as
part of our sdks. This PR's goal is to port these components to the
langchain community repo, and close the gap with the existing Zep Open
Source components already present in the community repo (added
ZepCloudMemory, ZepCloudVectorStore, ZepCloudRetriever).
Also added a ZepCloudChatMessageHistory component together with an
expression language example ported from our repo. We have left the
original open source components intact on purpose, so as not to introduce
any breaking changes.
    - **Issue:** -
- **Dependencies:** Added optional dependency of our new cloud sdk
`zep-cloud`
    - **Twitter handle:** @paulpaliychuk51


- [x] **Add tests and docs**


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

2024-05-27 12:50:13 -07:00
Jan Soubusta
cccc8fbe2f community[patch]: DuckDB VS - expose similarity, improve performance of from_texts (#20971)
3 fixes for the DuckDB vector store:
- unify defaults in the constructor and from_texts (users no longer have to
specify `vector_key`).
- include search similarity in output metadata (fixes #20969)
- significantly improve performance of `from_documents`

Dependencies: added Pandas to speed up `from_documents`.
I was thinking about CSV and JSON options, but I expect trouble loading
JSON values this way and also CSV and JSON options require storing data
to disk.
Anyway, the poetry file for langchain-community already contains a
dependency on Pandas.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
2024-05-24 15:17:52 -07:00
Surya Pratap Singh Shekhawat
42207f5bef Update agent_executor.ipynb (#22104)
fixed typos in the doc.

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-05-24 22:14:41 +00:00
Erick Friis
8acadc34f5 docs: edit links, direct for notebooks (#22051) 2024-05-24 19:44:46 +00:00
Erick Friis
42ffcb2ff1 anthropic: release 0.1.14rc2, test release note gen (#22147) 2024-05-24 12:40:10 -07:00
Erick Friis
6ee8de62c0 infra: auto-generated release notes based on git log (#22141)
Generates release notes based on a `git log` command with title names

Aiming to improve this by splitting out features vs. bugfixes using
conventional commits in the coming weeks.

Will work for any monorepo packages
2024-05-24 11:43:28 -07:00
Ameya Shenoy
8ba492ed6a community[minor]: clickhouse -- ability to use secure connection (#22108)
- **Description:** this PR gives the clickhouse client the ability to use a
secure connection to the clickhouse server
- **Issue:** fixes #22082
- **Dependencies:** -
- **Twitter handle:** `_codingcoffee_`

Signed-off-by: Ameya Shenoy <shenoy.ameya@gmail.com>
Co-authored-by: Shresth Rana <shresth@grapevine.in>
2024-05-24 17:30:22 +00:00
ccurme
9a010fb761 openai: read stream_options (#21548)
OpenAI recently added a `stream_options` parameter to its chat
completions API (see [release
notes](https://platform.openai.com/docs/changelog/added-chat-completions-stream-usage)).
When this parameter is set to `{"include_usage": True}`, an extra "empty"
message is added to the end of a stream containing token usage. Here we
propagate token usage to `AIMessage.usage_metadata`.

We enable this feature by default. Streams would now include an extra
chunk at the end, **after** the chunk with
`response_metadata={'finish_reason': 'stop'}`.

New behavior:
```
[AIMessageChunk(content='', id='run-4b20dbe0-3817-4f62-b89d-03ef76f25bde'),
 AIMessageChunk(content='Hello', id='run-4b20dbe0-3817-4f62-b89d-03ef76f25bde'),
 AIMessageChunk(content='!', id='run-4b20dbe0-3817-4f62-b89d-03ef76f25bde'),
 AIMessageChunk(content='', response_metadata={'finish_reason': 'stop'}, id='run-4b20dbe0-3817-4f62-b89d-03ef76f25bde'),
 AIMessageChunk(content='', id='run-4b20dbe0-3817-4f62-b89d-03ef76f25bde', usage_metadata={'input_tokens': 8, 'output_tokens': 9, 'total_tokens': 17})]
```

Old behavior (accessible by passing `stream_options={"include_usage":
False}` into (a)stream):
```
[AIMessageChunk(content='', id='run-1312b971-c5ea-4d92-9015-e6604535f339'),
 AIMessageChunk(content='Hello', id='run-1312b971-c5ea-4d92-9015-e6604535f339'),
 AIMessageChunk(content='!', id='run-1312b971-c5ea-4d92-9015-e6604535f339'),
 AIMessageChunk(content='', response_metadata={'finish_reason': 'stop'}, id='run-1312b971-c5ea-4d92-9015-e6604535f339')]
```

From what I can tell this is not yet implemented in Azure, so we enable
only for ChatOpenAI.
2024-05-24 13:20:56 -04:00
Patrick Zhang
eb7c767e5b docs: update the name of the tool passio_nutrition_ai (#22116)
Updating the name of the Passio Nutrition AI tool so that the name of
the tool is correctly displayed in the sidebar menu.

Currently the name of the tool says "Quickstart" in the sidebar.
The patch fixes the name to be Passio Nutrition AI.

<img width="681" alt="image"
src="https://github.com/langchain-ai/langchain/assets/4603110/9609975e-78ea-4032-9024-10c4f838170a">
2024-05-24 17:15:16 +00:00
Leonid Ganeline
fd4ee08167 docs: integrations/platforms/microsoft update (#22100)
Added the `Azure Container Apps dynamic sessions` tool reference
2024-05-24 13:14:51 -04:00
Rahul Triptahi
1a485f59b9 community[patch]: Put authorized identities behind a feature flag in SharepointLoader (#22125)
Description: Put authorised identities behind a feature flag, load_auth.
Documentation: N/A
Unit tests: N/A

---------

Signed-off-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
Co-authored-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
2024-05-24 12:42:57 -04:00
Anindyadeep
ee689412ab docs: Update PremAI Docs (#22114)
Thank you for contributing to LangChain!

- [X] **PR title**: community: Updated langchain-community PremAI
documentation

- [X] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
2024-05-24 11:55:32 -04:00
sasha
1c9ceff503 community: add metadata to chain logging; (#22122)
Hey, I'm Sasha, an SDK engineer from [Comet](https://comet.com).
This PR updates the CometTracer class.
Added metadata to CometTracer. From now on, both chains and spans will
send it.
2024-05-24 15:29:40 +00:00
Jirka Lhotka
7c0459faf2 community: Update costs of openai finetuned models (#22124)
- **Description:** Update costs of finetuned models and add
gpt-3.5-turbo-0125. Source: https://openai.com/api/pricing/
  - **Issue:** N/A
  - **Dependencies:** None
2024-05-24 15:25:17 +00:00
Eugene Yurtsev
d3db83abe3 community[major]: lint for usage of xml library (#22132)
* Lint for usage of standard xml library
* Add forced opt-in for quip client
* The actual security issue is with the underlying QuipClient, not the
LangChain integration (since the client is doing the parsing), but
enforcement is added at the LangChain level.
2024-05-24 15:23:53 +00:00
Tom Aarsen
5b5ea2af30 docs: Add explanation on how to use Hugging Face embeddings (#22118)
- **Description:** I've added a tab on embedding text with LangChain
using Hugging Face models here:
https://python.langchain.com/v0.2/docs/how_to/embed_text/. HF was
mentioned in the running text but not in the tabs, which I thought was
odd.
- **Issue:** N/A
- **Dependencies:** N/A
- **Twitter handle:** No need, this is tiny :) 

Also, I had a ton of issues with the poetry docs/lint install, so I
haven't linted this. Apologies for that.

cc @Jofthomas 

- Tom Aarsen
2024-05-24 11:21:03 -04:00
Bagatur
baa3c975cb anthropic[patch]: allow tool call mutation (#22130)
If tool_use blocks and tool_calls with overlapping IDs are present,
prefer the values of the tool_calls. Allows for mutating AIMessages just
via tool_calls.
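
A hedged sketch of the behavior described above: when the content's `tool_use` block and the `tool_calls` entry share an ID, edits made via `tool_calls` win (field values are illustrative):

```python
from langchain_core.messages import AIMessage

ai_msg = AIMessage(
    content=[{"type": "tool_use", "id": "call_1", "name": "search", "input": {"q": "cats"}}],
    tool_calls=[{"name": "search", "args": {"q": "cats"}, "id": "call_1"}],
)
# Mutate only tool_calls; per this change, the mutated values are preferred
# over the matching tool_use block when the message is sent back.
ai_msg.tool_calls[0]["args"]["q"] = "dogs"
```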
2024-05-24 08:18:14 -07:00
Christophe Bornet
c838de5027 doc: Add doc for CassandraByteStore (#22126)
Preview:
https://langchain-git-fork-cbornet-doc-cassandrabytestore-langchain.vercel.app/v0.2/docs/integrations/stores/cassandra/
2024-05-24 10:57:55 -04:00
Vadym Barda
2edb512282 docs: improve how-to docs for message history (#22072)
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-05-23 20:12:24 -04:00
Artem
eb7c453b98 docs: update hub.pull("rlm/map-prompt") to hub.pull("rlm/reduce-prompt") for reduce prompt (#22088)
**PR message**: 
Update `hub.pull("rlm/map-prompt")` to `hub.pull("rlm/reduce-prompt")`
in summarization.ipynb

**Description:** 
Fix typo in prompt hub link from `reduce_prompt =
hub.pull("rlm/map-prompt")` to `reduce_prompt =
hub.pull("rlm/reduce-prompt")`, per the following issue

**Issue:** #22014

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-05-23 23:07:37 +00:00
Leonid Ganeline
2416737c5f docs: compact the API Reference links (#21285)
This PR is opinionated. 
Issue: the `API Reference` sections in the examples take up too much
vertical space and make us scroll the page too much. See an
[example](https://python.langchain.com/docs/get_started/quickstart/#conversation-retrieval-chain).
These sections are **important**. So, the compacting should not make
these sections less noticeable.
Change: compacting the `API Reference` sections. See the [same example
after change
applied](https://langchain-j6nya46lf-langchain.vercel.app/docs/get_started/quickstart/#conversation-retrieval-chain).
It is more compact and now looks like references (footnotes).
Note: I would also change the section style so it would be more
noticeable (maybe make it look like footnotes; a smaller, wider font?).

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-23 15:50:23 -07:00
ccurme
0ea1e89b2c groq: read tool calls from .tool_calls attribute (#22096) 2024-05-23 18:16:06 -04:00
Bagatur
96c21dfe56 docs: hf feat table tool calling (#22091) 2024-05-23 15:09:30 -07:00
Eugene Yurtsev
63004a0945 codespell ignore remaining issues (#22097) 2024-05-23 21:51:39 +00:00
Eugene Yurtsev
2d693c484e docs: fix some spelling mistakes caught by newest version of code spell (#22090)
Going to merge this even though it doesn't pass all tests, and open a
separate PR for the remaining spelling mistakes.
2024-05-23 16:59:11 -04:00
Bagatur
38783d07c9 infra: api docs quick preview (#22093) 2024-05-23 13:57:45 -07:00
Pavel Zloi
fe26f937e4 community[minor]: ManticoreSearch engine added to vectorstore (#19117)
**Description:** ManticoreSearch engine added to vectorstores
**Issue:** no issue, just a new feature
**Dependencies:** https://pypi.org/project/manticoresearch-dev/
**Twitter handle:** @EvilFreelancer

- Example notebook with test integration:

https://github.com/EvilFreelancer/langchain/blob/manticore-search-vectorstore/docs/docs/integrations/vectorstores/manticore_search.ipynb

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-23 13:56:18 -07:00
Erick Friis
95c3e5f85f cli: model name substitution fix, release 0.0.23 (#22089) 2024-05-23 13:09:38 -07:00
Kartheek Yakkala
18b8c8628a docs : Added integrations for tools with langchain_community (#22056)
- **PR title**:  Docs enhancement

- **Description:** Adding installation instructions for integrations
requiring `langchain-community` package since 0.2
    - **Issue:** https://github.com/langchain-ai/langchain/issues/22005
2024-05-23 15:09:34 -04:00
ccurme
152c8cac33 anthropic, openai: cut pre-releases (#22083) 2024-05-23 15:02:23 -04:00
ccurme
cd07521170 core: bump to 0.2.1rc (#22080) 2024-05-23 18:36:50 +00:00
Harrison Chase
170cc8aec3 docs: add multi-modal-docs (#21734)
We don't really have any abstractions around multi-modal... so add a
section explaining we don't have any abstractions, and then how-to guides
for openai and anthropic (probably need to add more)

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
Co-authored-by: Tomaz Bratanic <bratanic.tomaz@gmail.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: junefish <junefish@users.noreply.github.com>
Co-authored-by: William Fu-Hinthorn <13333726+hinthornw@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-23 18:33:25 +00:00
ccurme
fbfed65fb1 core, partners: add token usage attribute to AIMessage (#21944)
```python
class UsageMetadata(TypedDict):
    """Usage metadata for a message, such as token counts.

    Attributes:
        input_tokens: (int) count of input (or prompt) tokens
        output_tokens: (int) count of output (or completion) tokens
        total_tokens: (int) total token count
    """

    input_tokens: int
    output_tokens: int
    total_tokens: int
```
```python
class AIMessage(BaseMessage):
    ...
    usage_metadata: Optional[UsageMetadata] = None
    """If provided, token usage information associated with the message."""
    ...
```
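
A short usage sketch of the new attribute, assuming an OpenAI chat model and a valid API key; the model choice is illustrative and the attribute access mirrors the definitions above:

```python
from langchain_openai import ChatOpenAI

msg = ChatOpenAI(model="gpt-3.5-turbo").invoke("hi")  # model choice illustrative
if msg.usage_metadata is not None:
    print(msg.usage_metadata["input_tokens"], msg.usage_metadata["total_tokens"])
```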
2024-05-23 14:21:58 -04:00
Bagatur
3d26807b92 community[patch]: Release. 0.2.1 (#22073) 2024-05-23 10:40:32 -07:00
2234 changed files with 91876 additions and 272788 deletions

View File

@@ -10,7 +10,7 @@ You can use the dev container configuration in this folder to build and run the
You may use the button above, or follow these steps to open this repo in a Codespace:
1. Click the **Code** drop-down menu at the top of https://github.com/langchain-ai/langchain.
1. Click on the **Codespaces** tab.
1. Click **Create codespace on master** .
1. Click **Create codespace on master**.
For more info, check out the [GitHub documentation](https://docs.github.com/en/free-pro-team@latest/github/developing-online-with-codespaces/creating-a-codespace#creating-a-codespace).

View File

@@ -350,11 +350,7 @@ def get_graphql_pr_edges(*, settings: Settings, after: Union[str, None] = None):
print("Querying PRs...")
else:
print(f"Querying PRs with cursor {after}...")
data = get_graphql_response(
settings=settings,
query=prs_query,
after=after
)
data = get_graphql_response(settings=settings, query=prs_query, after=after)
graphql_response = PRsResponse.model_validate(data)
return graphql_response.data.repository.pullRequests.edges
@@ -484,10 +480,16 @@ def get_contributors(settings: Settings):
lines_changed = pr.additions + pr.deletions
score = _logistic(files_changed, 20) + _logistic(lines_changed, 100)
contributor_scores[pr.author.login] += score
three_months_ago = (datetime.now(timezone.utc) - timedelta(days=3*30))
three_months_ago = datetime.now(timezone.utc) - timedelta(days=3 * 30)
if pr.createdAt > three_months_ago:
recent_contributor_scores[pr.author.login] += score
return contributors, contributor_scores, recent_contributor_scores, reviewers, authors
return (
contributors,
contributor_scores,
recent_contributor_scores,
reviewers,
authors,
)
def get_top_users(
@@ -524,9 +526,13 @@ if __name__ == "__main__":
# question_commentors, question_last_month_commentors, question_authors = get_experts(
# settings=settings
# )
contributors, contributor_scores, recent_contributor_scores, reviewers, pr_authors = get_contributors(
settings=settings
)
(
contributors,
contributor_scores,
recent_contributor_scores,
reviewers,
pr_authors,
) = get_contributors(settings=settings)
# authors = {**question_authors, **pr_authors}
authors = {**pr_authors}
maintainers_logins = {
@@ -547,6 +553,7 @@ if __name__ == "__main__":
"obi1kenobi",
"langchain-infra",
"jacoblee93",
"isahers1",
"dqbd",
"bracesproul",
"akira",
@@ -558,7 +565,7 @@ if __name__ == "__main__":
maintainers.append(
{
"login": login,
"count": contributors[login], #+ question_commentors[login],
"count": contributors[login], # + question_commentors[login],
"avatarUrl": user.avatarUrl,
"twitterUsername": user.twitterUsername,
"url": user.url,
@@ -614,9 +621,7 @@ if __name__ == "__main__":
new_people_content = yaml.dump(
people, sort_keys=False, width=200, allow_unicode=True
)
if (
people_old_content == new_people_content
):
if people_old_content == new_people_content:
logging.info("The LangChain People data hasn't changed, finishing.")
sys.exit(0)
people_path.write_text(new_people_content, encoding="utf-8")
@@ -629,9 +634,7 @@ if __name__ == "__main__":
logging.info(f"Creating a new branch {branch_name}")
subprocess.run(["git", "checkout", "-B", branch_name], check=True)
logging.info("Adding updated file")
subprocess.run(
["git", "add", str(people_path)], check=True
)
subprocess.run(["git", "add", str(people_path)], check=True)
logging.info("Committing updated file")
message = "👥 Update LangChain people data"
result = subprocess.run(["git", "commit", "-m", message], check=True)
@@ -640,4 +643,4 @@ if __name__ == "__main__":
logging.info("Creating PR")
pr = repo.create_pull(title=message, body=message, base="master", head=branch_name)
logging.info(f"Created PR: {pr.number}")
logging.info("Finished")
logging.info("Finished")

View File

@@ -1,7 +1,12 @@
import glob
import json
import sys
import os
from typing import Dict
import re
import sys
import tomllib
from collections import defaultdict
from typing import Dict, List, Set
LANGCHAIN_DIRS = [
"libs/core",
@@ -11,6 +16,43 @@ LANGCHAIN_DIRS = [
"libs/experimental",
]
def all_package_dirs() -> Set[str]:
return {
"/".join(path.split("/")[:-1]).lstrip("./")
for path in glob.glob("./libs/**/pyproject.toml", recursive=True)
if "libs/cli" not in path and "libs/standard-tests" not in path
}
def dependents_graph() -> dict:
dependents = defaultdict(set)
for path in glob.glob("./libs/**/pyproject.toml", recursive=True):
if "template" in path:
continue
with open(path, "rb") as f:
pyproject = tomllib.load(f)["tool"]["poetry"]
pkg_dir = "libs" + "/".join(path.split("libs")[1].split("/")[:-1])
for dep in pyproject["dependencies"]:
if "langchain" in dep:
dependents[dep].add(pkg_dir)
return dependents
def add_dependents(dirs_to_eval: Set[str], dependents: dict) -> List[str]:
updated = set()
for dir_ in dirs_to_eval:
# handle core manually because it has so many dependents
if "core" in dir_:
updated.add(dir_)
continue
pkg = "langchain-" + dir_.split("/")[-1]
updated.update(dependents[pkg])
updated.add(dir_)
return list(updated)
if __name__ == "__main__":
files = sys.argv[1:]
@@ -21,10 +63,11 @@ if __name__ == "__main__":
}
docs_edited = False
if len(files) == 300:
if len(files) >= 300:
# max diff length is 300 files - there are likely files missing
raise ValueError("Max diff reached. Please manually run CI on changed libs.")
dirs_to_run["lint"] = all_package_dirs()
dirs_to_run["test"] = all_package_dirs()
dirs_to_run["extended-test"] = set(LANGCHAIN_DIRS)
for file in files:
if any(
file.startswith(dir_)
@@ -81,11 +124,16 @@ if __name__ == "__main__":
docs_edited = True
dirs_to_run["lint"].add(".")
dependents = dependents_graph()
outputs = {
"dirs-to-lint": list(
dirs_to_run["lint"] | dirs_to_run["test"] | dirs_to_run["extended-test"]
"dirs-to-lint": add_dependents(
dirs_to_run["lint"] | dirs_to_run["test"] | dirs_to_run["extended-test"],
dependents,
),
"dirs-to-test": add_dependents(
dirs_to_run["test"] | dirs_to_run["extended-test"], dependents
),
"dirs-to-test": list(dirs_to_run["test"] | dirs_to_run["extended-test"]),
"dirs-to-extended-test": list(dirs_to_run["extended-test"]),
"docs-edited": "true" if docs_edited else "",
}

View File

@@ -74,6 +74,4 @@ if __name__ == "__main__":
# Call the function to get the minimum versions
min_versions = get_min_version_from_toml(toml_file)
print(
" ".join([f"{lib}=={version}" for lib, version in min_versions.items()])
)
print(" ".join([f"{lib}=={version}" for lib, version in min_versions.items()]))

7
.github/workflows/.codespell-exclude vendored Normal file
View File

@@ -0,0 +1,7 @@
libs/community/langchain_community/llms/yuan2.py
"NotIn": "not in",
- `/checkin`: Check-in
docs/docs/integrations/providers/trulens.mdx
self.assertIn(
from trulens_eval import Tru
tru = Tru()

View File

@@ -24,6 +24,7 @@ jobs:
- "3.9"
- "3.10"
- "3.11"
- "3.12"
name: "poetry run pytest -m compile tests/integration_tests #${{ matrix.python-version }}"
steps:
- uses: actions/checkout@v4

View File

@@ -28,6 +28,7 @@ jobs:
- "3.9"
- "3.10"
- "3.11"
- "3.12"
name: dependency checks ${{ matrix.python-version }}
steps:
- uses: actions/checkout@v4

View File

@@ -12,7 +12,6 @@ env:
jobs:
build:
environment: Scheduled testing
defaults:
run:
working-directory: ${{ inputs.working-directory }}
@@ -53,8 +52,15 @@ jobs:
shell: bash
env:
AI21_API_KEY: ${{ secrets.AI21_API_KEY }}
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
AZURE_OPENAI_API_VERSION: ${{ secrets.AZURE_OPENAI_API_VERSION }}
AZURE_OPENAI_API_BASE: ${{ secrets.AZURE_OPENAI_API_BASE }}
AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_CHAT_DEPLOYMENT_NAME }}
AZURE_OPENAI_LLM_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LLM_DEPLOYMENT_NAME }}
AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME }}
MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
TOGETHER_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}

View File

@@ -34,7 +34,7 @@ jobs:
# so linting on fewer versions makes CI faster.
python-version:
- "3.8"
- "3.11"
- "3.12"
steps:
- uses: actions/checkout@v4

View File

@@ -72,12 +72,70 @@ jobs:
run: |
echo pkg-name="$(poetry version | cut -d ' ' -f 1)" >> $GITHUB_OUTPUT
echo version="$(poetry version --short)" >> $GITHUB_OUTPUT
release-notes:
needs:
- build
runs-on: ubuntu-latest
outputs:
release-body: ${{ steps.generate-release-body.outputs.release-body }}
steps:
- uses: actions/checkout@v4
with:
repository: langchain-ai/langchain
path: langchain
sparse-checkout: | # this only grabs files for relevant dir
${{ inputs.working-directory }}
ref: master # this scopes to just master branch
fetch-depth: 0 # this fetches entire commit history
- name: Check Tags
id: check-tags
shell: bash
working-directory: langchain/${{ inputs.working-directory }}
env:
PKG_NAME: ${{ needs.build.outputs.pkg-name }}
VERSION: ${{ needs.build.outputs.version }}
run: |
REGEX="^$PKG_NAME==\\d+\\.\\d+\\.\\d+\$"
echo $REGEX
PREV_TAG=$(git tag --sort=-creatordate | grep -P $REGEX || true | head -1)
TAG="${PKG_NAME}==${VERSION}"
if [ "$TAG" == "$PREV_TAG" ]; then
echo "No new version to release"
exit 1
fi
echo tag="$TAG" >> $GITHUB_OUTPUT
echo prev-tag="$PREV_TAG" >> $GITHUB_OUTPUT
- name: Generate release body
id: generate-release-body
working-directory: langchain
env:
WORKING_DIR: ${{ inputs.working-directory }}
PKG_NAME: ${{ needs.build.outputs.pkg-name }}
TAG: ${{ steps.check-tags.outputs.tag }}
PREV_TAG: ${{ steps.check-tags.outputs.prev-tag }}
run: |
PREAMBLE="Changes since $PREV_TAG"
# if PREV_TAG is empty, then we are releasing the first version
if [ -z "$PREV_TAG" ]; then
PREAMBLE="Initial release"
PREV_TAG=$(git rev-list --max-parents=0 HEAD)
fi
{
echo 'release-body<<EOF'
echo "# Release $TAG"
echo $PREAMBLE
echo
git log --format="%s" "$PREV_TAG"..HEAD -- $WORKING_DIR
echo EOF
} >> "$GITHUB_OUTPUT"
test-pypi-publish:
needs:
- build
- release-notes
uses:
./.github/workflows/_test_release.yml
permissions: write-all
with:
working-directory: ${{ inputs.working-directory }}
dangerous-nonmaster-release: ${{ inputs.dangerous-nonmaster-release }}
@@ -86,6 +144,7 @@ jobs:
pre-release-checks:
needs:
- build
- release-notes
- test-pypi-publish
runs-on: ubuntu-latest
steps:
@@ -144,7 +203,7 @@ jobs:
poetry run python -c "import $IMPORT_NAME; print(dir($IMPORT_NAME))"
- name: Import test dependencies
run: poetry install --with test,test_integration
run: poetry install --with test
working-directory: ${{ inputs.working-directory }}
# Overwrite the local version of the package with the test PyPI version.
@@ -187,6 +246,10 @@ jobs:
with:
credentials_json: '${{ secrets.GOOGLE_CREDENTIALS }}'
- name: Import integration test dependencies
run: poetry install --with test,test_integration
working-directory: ${{ inputs.working-directory }}
- name: Run integration tests
if: ${{ startsWith(inputs.working-directory, 'libs/partners/') }}
env:
@@ -229,6 +292,7 @@ jobs:
publish:
needs:
- build
- release-notes
- test-pypi-publish
- pre-release-checks
runs-on: ubuntu-latest
@@ -270,6 +334,7 @@ jobs:
mark-release:
needs:
- build
- release-notes
- test-pypi-publish
- pre-release-checks
- publish
@@ -306,6 +371,6 @@ jobs:
token: ${{ secrets.GITHUB_TOKEN }}
generateReleaseNotes: false
tag: ${{needs.build.outputs.pkg-name}}==${{ needs.build.outputs.version }}
body: "# Release ${{needs.build.outputs.pkg-name}}==${{ needs.build.outputs.version }}\n\nPackage-specific release note generation coming soon."
body: ${{ needs.release-notes.outputs.release-body }}
commit: ${{ github.sha }}
makeLatest: ${{ needs.build.outputs.pkg-name == 'langchain-core'}}

View File

@@ -28,6 +28,7 @@ jobs:
- "3.9"
- "3.10"
- "3.11"
- "3.12"
name: "make test #${{ matrix.python-version }}"
steps:
- uses: actions/checkout@v4

View File

@@ -12,7 +12,7 @@ jobs:
strategy:
matrix:
python-version:
- "3.11"
- "3.12"
name: "check doc imports #${{ matrix.python-version }}"
steps:
- uses: actions/checkout@v4

View File

@@ -7,6 +7,7 @@ on:
jobs:
check-links:
if: github.repository_owner == 'langchain-ai'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4

View File

@@ -26,7 +26,7 @@ jobs:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: '3.10'
python-version: '3.11'
- id: files
uses: Ana06/get-changed-files@v2.2.0
- id: set-matrix
@@ -104,6 +104,7 @@ jobs:
- "3.9"
- "3.10"
- "3.11"
- "3.12"
runs-on: ubuntu-latest
defaults:
run:
@@ -123,7 +124,9 @@ jobs:
shell: bash
run: |
echo "Running extended tests, installing dependencies with poetry..."
poetry install -E extended_testing --with test
poetry install --with test
poetry run pip install uv
poetry run uv pip install -r extended_testing_deps.txt
- name: Run extended tests
run: make extended_tests

36
.github/workflows/check_new_docs.yml vendored Normal file
View File

@@ -0,0 +1,36 @@
---
name: Integration docs lint
on:
push:
branches: [master]
pull_request:
# If another push to the same PR or branch happens while this workflow is still running,
# cancel the earlier run in favor of the next run.
#
# There's no point in testing an outdated version of the code. GitHub only allows
# a limited number of job runners to be active at the same time, so it's better to cancel
# pointless jobs early so that more useful jobs can run sooner.
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: '3.10'
- id: files
uses: Ana06/get-changed-files@v2.2.0
with:
filter: |
*.ipynb
*.md
*.mdx
- name: Check new docs
run: |
python docs/scripts/check_templates.py ${{ steps.files.outputs.added }}

View File

@@ -29,9 +29,9 @@ jobs:
python .github/workflows/extract_ignored_words_list.py
id: extract_ignore_words
- name: Codespell
uses: codespell-project/actions-codespell@v2
with:
skip: guide_imports.json,*.ambr,./cookbook/data/imdb_top_1000.csv,*.lock
ignore_words_list: ${{ steps.extract_ignore_words.outputs.ignore_words_list }}
exclude_file: libs/community/langchain_community/llms/yuan2.py
# - name: Codespell
# uses: codespell-project/actions-codespell@v2
# with:
# skip: guide_imports.json,*.ambr,./cookbook/data/imdb_top_1000.csv,*.lock
# ignore_words_list: ${{ steps.extract_ignore_words.outputs.ignore_words_list }}
# exclude_file: ./.github/workflows/codespell-exclude

View File

@@ -16,6 +16,7 @@ jobs:
langchain-people:
if: github.repository_owner == 'langchain-ai'
runs-on: ubuntu-latest
permissions: write-all
steps:
- name: Dump GitHub context
env:

View File

@@ -10,6 +10,8 @@ env:
jobs:
build:
if: github.repository_owner == 'langchain-ai'
name: Python ${{ matrix.python-version }} - ${{ matrix.working-directory }}
runs-on: ubuntu-latest
strategy:
fail-fast: false
@@ -25,16 +27,38 @@ jobs:
- "libs/partners/groq"
- "libs/partners/mistralai"
- "libs/partners/together"
name: Python ${{ matrix.python-version }} - ${{ matrix.working-directory }}
- "libs/partners/google-vertexai"
- "libs/partners/google-genai"
- "libs/partners/aws"
steps:
- uses: actions/checkout@v4
with:
path: langchain
- uses: actions/checkout@v4
with:
repository: langchain-ai/langchain-google
path: langchain-google
- uses: actions/checkout@v4
with:
repository: langchain-ai/langchain-aws
path: langchain-aws
- name: Move libs
run: |
rm -rf \
langchain/libs/partners/google-genai \
langchain/libs/partners/google-vertexai
mv langchain-google/libs/genai langchain/libs/partners/google-genai
mv langchain-google/libs/vertexai langchain/libs/partners/google-vertexai
mv langchain-aws/libs/aws langchain/libs/partners/aws
- name: Set up Python ${{ matrix.python-version }}
uses: "./.github/actions/poetry_setup"
uses: "./langchain/.github/actions/poetry_setup"
with:
python-version: ${{ matrix.python-version }}
poetry-version: ${{ env.POETRY_VERSION }}
working-directory: ${{ matrix.working-directory }}
working-directory: langchain/${{ matrix.working-directory }}
cache-key: scheduled
- name: 'Authenticate to Google Cloud'
@@ -43,16 +67,20 @@ jobs:
with:
credentials_json: '${{ secrets.GOOGLE_CREDENTIALS }}'
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v4
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ secrets.AWS_REGION }}
- name: Install dependencies
working-directory: ${{ matrix.working-directory }}
shell: bash
run: |
echo "Running scheduled tests, installing dependencies with poetry..."
cd langchain/${{ matrix.working-directory }}
poetry install --with=test_integration,test
- name: Run integration tests
working-directory: ${{ matrix.working-directory }}
shell: bash
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
@@ -67,12 +95,24 @@ jobs:
GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
TOGETHER_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
COHERE_API_KEY: ${{ secrets.COHERE_API_KEY }}
NVIDIA_API_KEY: ${{ secrets.NVIDIA_API_KEY }}
GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
GOOGLE_SEARCH_API_KEY: ${{ secrets.GOOGLE_SEARCH_API_KEY }}
GOOGLE_CSE_ID: ${{ secrets.GOOGLE_CSE_ID }}
run: |
make integration_test
cd langchain/${{ matrix.working-directory }}
make integration_tests
- name: Remove external libraries
run: |
rm -rf \
langchain/libs/partners/google-genai \
langchain/libs/partners/google-vertexai \
langchain/libs/partners/aws
- name: Ensure the tests did not create any additional files
working-directory: ${{ matrix.working-directory }}
shell: bash
working-directory: langchain
run: |
set -eu

2
.gitignore vendored
View File

@@ -133,6 +133,7 @@ env.bak/
# mypy
.mypy_cache/
.mypy_cache_test/
.dmypy.json
dmypy.json
@@ -178,3 +179,4 @@ _dist
docs/docs/templates
prof
virtualenv/

View File

@@ -32,10 +32,19 @@ api_docs_build:
poetry run python docs/api_reference/create_api_rst.py
cd docs/api_reference && poetry run make html
API_PKG ?= text-splitters
api_docs_quick_preview:
poetry run pip install "pydantic<2"
poetry run python docs/api_reference/create_api_rst.py $(API_PKG)
cd docs/api_reference && poetry run make html
open docs/api_reference/_build/html/$(shell echo $(API_PKG) | sed 's/-/_/g')_api_reference.html
## api_docs_clean: Clean the API Reference documentation build artifacts.
api_docs_clean:
find ./docs/api_reference -name '*_api_reference.rst' -delete
cd docs/api_reference && poetry run make clean
git clean -fdX ./docs/api_reference
## api_docs_linkcheck: Run linkchecker on the API Reference documentation.
api_docs_linkcheck:

View File

@@ -2,17 +2,17 @@
⚡ Build context-aware reasoning applications ⚡
[![Release Notes](https://img.shields.io/github/release/langchain-ai/langchain)](https://github.com/langchain-ai/langchain/releases)
[![Release Notes](https://img.shields.io/github/release/langchain-ai/langchain?style=flat-square)](https://github.com/langchain-ai/langchain/releases)
[![CI](https://github.com/langchain-ai/langchain/actions/workflows/check_diffs.yml/badge.svg)](https://github.com/langchain-ai/langchain/actions/workflows/check_diffs.yml)
[![Downloads](https://static.pepy.tech/badge/langchain-core/month)](https://pepy.tech/project/langchain-core)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchainai.svg?style=social&label=Follow%20%40LangChainAI)](https://twitter.com/langchainai)
[![](https://dcbadge.vercel.app/api/server/6adMQxSpJS?compact=true&style=flat)](https://discord.gg/6adMQxSpJS)
[![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-core?style=flat-square)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-core?style=flat-square)](https://pypistats.org/packages/langchain-core)
[![GitHub star chart](https://img.shields.io/github/stars/langchain-ai/langchain?style=flat-square)](https://star-history.com/#langchain-ai/langchain)
[![Dependency Status](https://img.shields.io/librariesio/github/langchain-ai/langchain?style=flat-square)](https://libraries.io/github/langchain-ai/langchain)
[![Open Issues](https://img.shields.io/github/issues-raw/langchain-ai/langchain?style=flat-square)](https://github.com/langchain-ai/langchain/issues)
[![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode&style=flat-square)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)
[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/langchain-ai/langchain)
[![GitHub star chart](https://img.shields.io/github/stars/langchain-ai/langchain?style=social)](https://star-history.com/#langchain-ai/langchain)
[![Dependency Status](https://img.shields.io/librariesio/github/langchain-ai/langchain)](https://libraries.io/github/langchain-ai/langchain)
[![Open Issues](https://img.shields.io/github/issues-raw/langchain-ai/langchain)](https://github.com/langchain-ai/langchain/issues)
[![](https://dcbadge.vercel.app/api/server/6adMQxSpJS?compact=true&style=flat)](https://discord.gg/6adMQxSpJS)
[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchainai.svg?style=social&label=Follow%20%40LangChainAI)](https://twitter.com/langchainai)
Looking for the JS/TS library? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs).
@@ -38,43 +38,44 @@ conda install langchain -c conda-forge
For these applications, LangChain simplifies the entire application lifecycle:
- **Open-source libraries**: Build your applications using LangChain's [modular building blocks](https://python.langchain.com/docs/expression_language/) and [components](https://python.langchain.com/docs/modules/). Integrate with hundreds of [third-party providers](https://python.langchain.com/docs/integrations/platforms/).
- **Productionization**: Inspect, monitor, and evaluate your apps with [LangSmith](https://python.langchain.com/docs/langsmith/) so that you can constantly optimize and deploy with confidence.
- **Deployment**: Turn any chain into a REST API with [LangServe](https://python.langchain.com/docs/langserve).
- **Open-source libraries**: Build your applications using LangChain's open-source [building blocks](https://python.langchain.com/v0.2/docs/concepts#langchain-expression-language-lcel), [components](https://python.langchain.com/v0.2/docs/concepts), and [third-party integrations](https://python.langchain.com/v0.2/docs/integrations/platforms/).
Use [LangGraph](/docs/concepts/#langgraph) to build stateful agents with first-class streaming and human-in-the-loop support.
- **Productionization**: Inspect, monitor, and evaluate your apps with [LangSmith](https://docs.smith.langchain.com/) so that you can constantly optimize and deploy with confidence.
- **Deployment**: Turn your LangGraph applications into production-ready APIs and Assistants with [LangGraph Cloud](https://langchain-ai.github.io/langgraph/cloud/).
### Open-source libraries
- **`langchain-core`**: Base abstractions and LangChain Expression Language.
- **`langchain-community`**: Third party integrations.
- Some integrations have been further split into **partner packages** that only rely on **`langchain-core`**. Examples include **`langchain_openai`** and **`langchain_anthropic`**.
- **`langchain`**: Chains, agents, and retrieval strategies that make up an application's cognitive architecture.
- **[`LangGraph`](https://python.langchain.com/docs/langgraph)**: A library for building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.
- **[`LangGraph`](https://langchain-ai.github.io/langgraph/)**: A library for building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph. Integrates smoothly with LangChain, but can be used without it.
### Productionization:
- **[LangSmith](https://python.langchain.com/docs/langsmith)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.
- **[LangSmith](https://docs.smith.langchain.com/)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.
### Deployment:
- **[LangServe](https://python.langchain.com/docs/langserve)**: A library for deploying LangChain chains as REST APIs.
- **[LangGraph Cloud](https://langchain-ai.github.io/langgraph/cloud/)**: Turn your LangGraph applications into production-ready APIs and Assistants.
![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](docs/static/svg/langchain_stack.svg "LangChain Architecture Overview")
![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](docs/static/svg/langchain_stack_062024.svg "LangChain Architecture Overview")
## 🧱 What can you build with LangChain?
**❓ Question answering with RAG**
- [Documentation](https://python.langchain.com/docs/use_cases/question_answering/)
- [Documentation](https://python.langchain.com/v0.2/docs/tutorials/rag/)
- End-to-end Example: [Chat LangChain](https://chat.langchain.com) and [repo](https://github.com/langchain-ai/chat-langchain)
**🧱 Extracting structured output**
- [Documentation](https://python.langchain.com/docs/use_cases/extraction/)
- [Documentation](https://python.langchain.com/v0.2/docs/tutorials/extraction/)
- End-to-end Example: [SQL Llama2 Template](https://github.com/langchain-ai/langchain-extract/)
**🤖 Chatbots**
- [Documentation](https://python.langchain.com/docs/use_cases/chatbots)
- [Documentation](https://python.langchain.com/v0.2/docs/tutorials/chatbot/)
- End-to-end Example: [Web LangChain (web researcher chatbot)](https://weblangchain.vercel.app) and [repo](https://github.com/langchain-ai/weblangchain)
And much more! Head to the [Use cases](https://python.langchain.com/docs/use_cases/) section of the docs for more.
And much more! Head to the [Tutorials](https://python.langchain.com/v0.2/docs/tutorials/) section of the docs for more.
## 🚀 How does LangChain help?
The main value props of the LangChain libraries are:
@@ -87,49 +88,49 @@ Off-the-shelf chains make it easy to get started. Components make it easy to cus
LCEL is the foundation of many of LangChain's components, and is a declarative way to compose chains. LCEL was designed from day 1 to support putting prototypes in production, with no code changes, from the simplest “prompt + LLM” chain to the most complex chains.
- **[Overview](https://python.langchain.com/docs/expression_language/)**: LCEL and its benefits
- **[Interface](https://python.langchain.com/docs/expression_language/interface)**: The standard interface for LCEL objects
- **[Primitives](https://python.langchain.com/docs/expression_language/primitives)**: More on the primitives LCEL includes
- **[Overview](https://python.langchain.com/v0.2/docs/concepts/#langchain-expression-language-lcel)**: LCEL and its benefits
- **[Interface](https://python.langchain.com/v0.2/docs/concepts/#runnable-interface)**: The standard Runnable interface for LCEL objects
- **[Primitives](https://python.langchain.com/v0.2/docs/how_to/#langchain-expression-language-lcel)**: More on the primitives LCEL includes
- **[Cheatsheet](https://python.langchain.com/v0.2/docs/how_to/lcel_cheatsheet/)**: Quick overview of the most common usage patterns
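For illustration, below is a minimal LCEL sketch of a "prompt + LLM + parser" chain (it assumes `langchain-openai` is installed and `OPENAI_API_KEY` is set; any chat model integration can be substituted):

```python
# Minimal LCEL sketch: compose prompt -> chat model -> output parser with the | operator
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI  # assumes langchain-openai is installed

prompt = ChatPromptTemplate.from_template("Tell me a short joke about {topic}")
model = ChatOpenAI(model="gpt-3.5-turbo")  # assumes OPENAI_API_KEY is set
chain = prompt | model | StrOutputParser()  # declarative composition

print(chain.invoke({"topic": "vector stores"}))
# The same chain also exposes .stream(), .batch(), and async variants with no code changes.
```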
## Components
Components fall into the following **modules**:
**📃 Model I/O:**
**📃 Model I/O**
This includes [prompt management](https://python.langchain.com/docs/modules/model_io/prompts/), [prompt optimization](https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/), a generic interface for [chat models](https://python.langchain.com/docs/modules/model_io/chat/) and [LLMs](https://python.langchain.com/docs/modules/model_io/llms/), and common utilities for working with [model outputs](https://python.langchain.com/docs/modules/model_io/output_parsers/).
This includes [prompt management](https://python.langchain.com/v0.2/docs/concepts/#prompt-templates), [prompt optimization](https://python.langchain.com/v0.2/docs/concepts/#example-selectors), a generic interface for [chat models](https://python.langchain.com/v0.2/docs/concepts/#chat-models) and [LLMs](https://python.langchain.com/v0.2/docs/concepts/#llms), and common utilities for working with [model outputs](https://python.langchain.com/v0.2/docs/concepts/#output-parsers).
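As a rough sketch of how these pieces fit together (the model name and parser choice below are illustrative, not prescriptive):

```python
# Sketch: prompt template + chat model + output parser working together
from langchain_core.output_parsers import CommaSeparatedListOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI  # any chat model integration works here

parser = CommaSeparatedListOutputParser()
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "Answer with a comma-separated list. {format_instructions}"),
        ("human", "Name three {thing}."),
    ]
).partial(format_instructions=parser.get_format_instructions())

model = ChatOpenAI(model="gpt-3.5-turbo")  # assumes OPENAI_API_KEY is set
items = (prompt | model | parser).invoke({"thing": "popular vector stores"})
# items is a list[str] parsed from the model's comma-separated answer
```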
**📚 Retrieval:**
**📚 Retrieval**
Retrieval Augmented Generation involves [loading data](https://python.langchain.com/docs/modules/data_connection/document_loaders/) from a variety of sources, [preparing it](https://python.langchain.com/docs/modules/data_connection/document_loaders/), [then retrieving it](https://python.langchain.com/docs/modules/data_connection/retrievers/) for use in the generation step.
Retrieval Augmented Generation involves [loading data](https://python.langchain.com/v0.2/docs/concepts/#document-loaders) from a variety of sources, [preparing it](https://python.langchain.com/v0.2/docs/concepts/#text-splitters), then [searching over (a.k.a. retrieving from)](https://python.langchain.com/v0.2/docs/concepts/#retrievers) it for use in the generation step.
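A condensed sketch of that load → split → retrieve flow (the file name, chunk sizes, and the FAISS/OpenAI choices are placeholders; it assumes `faiss-cpu` and `langchain-openai` are installed):

```python
# Sketch: load documents, split them, index them, then retrieve at query time
from langchain_community.document_loaders import TextLoader
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

docs = TextLoader("state_of_the_union.txt").load()                      # load
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
splits = splitter.split_documents(docs)                                  # prepare
vectorstore = FAISS.from_documents(splits, OpenAIEmbeddings())           # index
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})
relevant = retriever.invoke("What did the speaker say about the economy?")  # retrieve
```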
**🤖 Agents:**
**🤖 Agents**
Agents allow an LLM autonomy over how a task is accomplished. Agents make decisions about which Actions to take, then take that Action, observe the result, and repeat until the task is complete done. LangChain provides a [standard interface for agents](https://python.langchain.com/docs/modules/agents/), a [selection of agents](https://python.langchain.com/docs/modules/agents/agent_types/) to choose from, and examples of end-to-end agents.
Agents allow an LLM autonomy over how a task is accomplished. Agents make decisions about which Actions to take, then take that Action, observe the result, and repeat until the task is complete. LangChain provides a [standard interface for agents](https://python.langchain.com/v0.2/docs/concepts/#agents), along with [LangGraph](https://github.com/langchain-ai/langgraph) for building custom agents.
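A small sketch of a custom tool-calling agent built with LangGraph's prebuilt helper (the `multiply` tool and model name are illustrative; it assumes `langgraph` and `langchain-openai` are installed):

```python
# Sketch: an agent that decides when to call a tool, observes the result, and answers
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent


@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b


agent = create_react_agent(ChatOpenAI(model="gpt-4o"), [multiply])  # assumes OPENAI_API_KEY is set
result = agent.invoke({"messages": [("user", "What is 6 times 7?")]})
print(result["messages"][-1].content)
```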
## 📖 Documentation
Please see [here](https://python.langchain.com) for full documentation, which includes:
- [Getting started](https://python.langchain.com/docs/get_started/introduction): installation, setting up the environment, simple examples
- [Use case](https://python.langchain.com/docs/use_cases/) walkthroughs and best practice [guides](https://python.langchain.com/docs/guides/)
- Overviews of the [interfaces](https://python.langchain.com/docs/expression_language/), [components](https://python.langchain.com/docs/modules/), and [integrations](https://python.langchain.com/docs/integrations/providers)
You can also check out the full [API Reference docs](https://api.python.langchain.com).
- [Introduction](https://python.langchain.com/v0.2/docs/introduction/): Overview of the framework and the structure of the docs.
- [Tutorials](https://python.langchain.com/v0.2/docs/tutorials/): If you're looking to build something specific or are more of a hands-on learner, check out our tutorials. This is the best place to get started.
- [How-to guides](https://python.langchain.com/v0.2/docs/how_to/): Answers to “How do I….?” type questions. These guides are goal-oriented and concrete; they're meant to help you complete a specific task.
- [Conceptual guide](https://python.langchain.com/v0.2/docs/concepts/): Conceptual explanations of the key parts of the framework.
- [API Reference](https://api.python.langchain.com): Thorough documentation of every class and method.
## 🌐 Ecosystem
- [🦜🛠️ LangSmith](https://python.langchain.com/docs/langsmith/): Tracing and evaluating your language model applications and intelligent agents to help you move from prototype to production.
- [🦜🕸️ LangGraph](https://python.langchain.com/docs/langgraph): Creating stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain primitives.
- [🦜🏓 LangServe](https://python.langchain.com/docs/langserve): Deploying LangChain runnables and chains as REST APIs.
- [LangChain Templates](https://python.langchain.com/docs/templates/): Example applications hosted with LangServe.
- [🦜🛠️ LangSmith](https://docs.smith.langchain.com/): Trace and evaluate your language model applications and intelligent agents to help you move from prototype to production.
- [🦜🕸️ LangGraph](https://langchain-ai.github.io/langgraph/): Create stateful, multi-actor applications with LLMs. Integrates smoothly with LangChain, but can be used without it.
- [🦜🏓 LangServe](https://python.langchain.com/docs/langserve): Deploy LangChain runnables and chains as REST APIs.
## 💁 Contributing
As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.
For detailed information on how to contribute, see [here](https://python.langchain.com/docs/contributing/).
For detailed information on how to contribute, see [here](https://python.langchain.com/v0.2/docs/contributing/).
## 🌟 Contributors

View File

@@ -46,7 +46,7 @@
"from langchain_experimental.autonomous_agents import AutoGPT\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"# Needed synce jupyter runs an async eventloop\n",
"# Needed since jupyter runs an async eventloop\n",
"nest_asyncio.apply()"
]
},

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,497 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "9fc3897d-176f-4729-8fd1-cfb4add53abd",
"metadata": {},
"source": [
"## Nomic multi-modal RAG\n",
"\n",
"Many documents contain a mixture of content types, including text and images. \n",
"\n",
"Yet, information captured in images is lost in most RAG applications.\n",
"\n",
"With the emergence of multimodal LLMs, like [GPT-4V](https://openai.com/research/gpt-4v-system-card), it is worth considering how to utilize images in RAG:\n",
"\n",
"In this demo we\n",
"\n",
"* Use multimodal embeddings from Nomic Embed [Vision](https://huggingface.co/nomic-ai/nomic-embed-vision-v1.5) and [Text](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5) to embed images and text\n",
"* Retrieve both using similarity search\n",
"* Pass raw images and text chunks to a multimodal LLM for answer synthesis \n",
"\n",
"## Signup\n",
"\n",
"Get your API token, then run:\n",
"```\n",
"! nomic login\n",
"```\n",
"\n",
"Then run with your generated API token \n",
"```\n",
"! nomic login < token > \n",
"```\n",
"\n",
"## Packages\n",
"\n",
"For `unstructured`, you will also need `poppler` ([installation instructions](https://pdf2image.readthedocs.io/en/latest/installation.html)) and `tesseract` ([installation instructions](https://tesseract-ocr.github.io/tessdoc/Installation.html)) in your system."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "54926b9b-75c2-4cd4-8f14-b3882a0d370b",
"metadata": {},
"outputs": [],
"source": [
"! nomic login token"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "febbc459-ebba-4c1a-a52b-fed7731593f8",
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"! pip install -U langchain-nomic langchain_community tiktoken langchain-openai chromadb langchain # (newest versions required for multi-modal)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "acbdc603-39e2-4a5f-836c-2bbaecd46b0b",
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"# lock to 0.10.19 due to a persistent bug in more recent versions\n",
"! pip install \"unstructured[all-docs]==0.10.19\" pillow pydantic lxml pillow matplotlib tiktoken"
]
},
{
"cell_type": "markdown",
"id": "1e94b3fb-8e3e-4736-be0a-ad881626c7bd",
"metadata": {},
"source": [
"## Data Loading\n",
"\n",
"### Partition PDF text and images\n",
" \n",
"Let's look at an example pdfs containing interesting images.\n",
"\n",
"1/ Art from the J Paul Getty museum:\n",
"\n",
" * Here is a [zip file](https://drive.google.com/file/d/18kRKbq2dqAhhJ3DfZRnYcTBEUfYxe1YR/view?usp=sharing) with the PDF and the already extracted images. \n",
"* https://www.getty.edu/publications/resources/virtuallibrary/0892360224.pdf\n",
"\n",
"2/ Famous photographs from library of congress:\n",
"\n",
"* https://www.loc.gov/lcm/pdf/LCM_2020_1112.pdf\n",
"* We'll use this as an example below\n",
"\n",
"We can use `partition_pdf` below from [Unstructured](https://unstructured-io.github.io/unstructured/introduction.html#key-concepts) to extract text and images.\n",
"\n",
"To supply this to extract the images:\n",
"```\n",
"extract_images_in_pdf=True\n",
"```\n",
"\n",
"\n",
"\n",
"If using this zip file, then you can simply process the text only with:\n",
"```\n",
"extract_images_in_pdf=False\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9646b524-71a7-4b2a-bdc8-0b81f77e968f",
"metadata": {},
"outputs": [],
"source": [
"# Folder with pdf and extracted images\n",
"from pathlib import Path\n",
"\n",
"# replace with actual path to images\n",
"path = Path(\"../art\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "77f096ab-a933-41d0-8f4e-1efc83998fc3",
"metadata": {},
"outputs": [],
"source": [
"path.resolve()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bc4839c0-8773-4a07-ba59-5364501269b2",
"metadata": {},
"outputs": [],
"source": [
"# Extract images, tables, and chunk text\n",
"from unstructured.partition.pdf import partition_pdf\n",
"\n",
"raw_pdf_elements = partition_pdf(\n",
" filename=str(path.resolve()) + \"/getty.pdf\",\n",
" extract_images_in_pdf=False,\n",
" infer_table_structure=True,\n",
" chunking_strategy=\"by_title\",\n",
" max_characters=4000,\n",
" new_after_n_chars=3800,\n",
" combine_text_under_n_chars=2000,\n",
" image_output_dir_path=path,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "969545ad",
"metadata": {},
"outputs": [],
"source": [
"# Categorize text elements by type\n",
"tables = []\n",
"texts = []\n",
"for element in raw_pdf_elements:\n",
" if \"unstructured.documents.elements.Table\" in str(type(element)):\n",
" tables.append(str(element))\n",
" elif \"unstructured.documents.elements.CompositeElement\" in str(type(element)):\n",
" texts.append(str(element))"
]
},
{
"cell_type": "markdown",
"id": "5d8e6349-1547-4cbf-9c6f-491d8610ec10",
"metadata": {},
"source": [
"## Multi-modal embeddings with our document\n",
"\n",
"We will use [nomic-embed-vision-v1.5](https://huggingface.co/nomic-ai/nomic-embed-vision-v1.5) embeddings. This model is aligned \n",
"to [nomic-embed-text-v1.5](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5) allowing for multimodal semantic search and Multimodal RAG!"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4bc15842-cb95-4f84-9eb5-656b0282a800",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import uuid\n",
"\n",
"import chromadb\n",
"import numpy as np\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_nomic import NomicEmbeddings\n",
"from PIL import Image as _PILImage\n",
"\n",
"# Create chroma\n",
"text_vectorstore = Chroma(\n",
" collection_name=\"mm_rag_clip_photos_text\",\n",
" embedding_function=NomicEmbeddings(\n",
" vision_model=\"nomic-embed-vision-v1.5\", model=\"nomic-embed-text-v1.5\"\n",
" ),\n",
")\n",
"image_vectorstore = Chroma(\n",
" collection_name=\"mm_rag_clip_photos_image\",\n",
" embedding_function=NomicEmbeddings(\n",
" vision_model=\"nomic-embed-vision-v1.5\", model=\"nomic-embed-text-v1.5\"\n",
" ),\n",
")\n",
"\n",
"# Get image URIs with .jpg extension only\n",
"image_uris = sorted(\n",
" [\n",
" os.path.join(path, image_name)\n",
" for image_name in os.listdir(path)\n",
" if image_name.endswith(\".jpg\")\n",
" ]\n",
")\n",
"\n",
"# Add images\n",
"image_vectorstore.add_images(uris=image_uris)\n",
"\n",
"# Add documents\n",
"text_vectorstore.add_texts(texts=texts)\n",
"\n",
"# Make retriever\n",
"image_retriever = image_vectorstore.as_retriever()\n",
"text_retriever = text_vectorstore.as_retriever()"
]
},
{
"cell_type": "markdown",
"id": "02a186d0-27e0-4820-8092-63b5349dd25d",
"metadata": {},
"source": [
"## RAG\n",
"\n",
"`vectorstore.add_images` will store / retrieve images as base64 encoded strings.\n",
"\n",
"These can be passed to [GPT-4V](https://platform.openai.com/docs/guides/vision)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "344f56a8-0dc3-433e-851c-3f7600c7a72b",
"metadata": {},
"outputs": [],
"source": [
"import base64\n",
"import io\n",
"from io import BytesIO\n",
"\n",
"import numpy as np\n",
"from PIL import Image\n",
"\n",
"\n",
"def resize_base64_image(base64_string, size=(128, 128)):\n",
" \"\"\"\n",
" Resize an image encoded as a Base64 string.\n",
"\n",
" Args:\n",
" base64_string (str): Base64 string of the original image.\n",
" size (tuple): Desired size of the image as (width, height).\n",
"\n",
" Returns:\n",
" str: Base64 string of the resized image.\n",
" \"\"\"\n",
" # Decode the Base64 string\n",
" img_data = base64.b64decode(base64_string)\n",
" img = Image.open(io.BytesIO(img_data))\n",
"\n",
" # Resize the image\n",
" resized_img = img.resize(size, Image.LANCZOS)\n",
"\n",
" # Save the resized image to a bytes buffer\n",
" buffered = io.BytesIO()\n",
" resized_img.save(buffered, format=img.format)\n",
"\n",
" # Encode the resized image to Base64\n",
" return base64.b64encode(buffered.getvalue()).decode(\"utf-8\")\n",
"\n",
"\n",
"def is_base64(s):\n",
" \"\"\"Check if a string is Base64 encoded\"\"\"\n",
" try:\n",
" return base64.b64encode(base64.b64decode(s)) == s.encode()\n",
" except Exception:\n",
" return False\n",
"\n",
"\n",
"def split_image_text_types(docs):\n",
" \"\"\"Split numpy array images and texts\"\"\"\n",
" images = []\n",
" text = []\n",
" for doc in docs:\n",
" doc = doc.page_content # Extract Document contents\n",
" if is_base64(doc):\n",
" # Resize image to avoid OAI server error\n",
" images.append(\n",
" resize_base64_image(doc, size=(250, 250))\n",
" ) # base64 encoded str\n",
" else:\n",
" text.append(doc)\n",
" return {\"images\": images, \"texts\": text}"
]
},
{
"cell_type": "markdown",
"id": "23a2c1d8-fea6-4152-b184-3172dd46c735",
"metadata": {},
"source": [
"Currently, we format the inputs using a `RunnableLambda` while we add image support to `ChatPromptTemplates`.\n",
"\n",
"Our runnable follows the classic RAG flow - \n",
"\n",
"* We first compute the context (both \"texts\" and \"images\" in this case) and the question (just a RunnablePassthrough here) \n",
"* Then we pass this into our prompt template, which is a custom function that formats the message for the gpt-4-vision-preview model. \n",
"* And finally we parse the output as a string."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5d8919dc-c238-4746-86ba-45d940a7d260",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = \"\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4c93fab3-74c4-4f1d-958a-0bc4cdd0797e",
"metadata": {},
"outputs": [],
"source": [
"from operator import itemgetter\n",
"\n",
"from langchain_core.messages import HumanMessage, SystemMessage\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnableLambda, RunnablePassthrough\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"\n",
"def prompt_func(data_dict):\n",
" # Joining the context texts into a single string\n",
" formatted_texts = \"\\n\".join(data_dict[\"text_context\"][\"texts\"])\n",
" messages = []\n",
"\n",
" # Adding image(s) to the messages if present\n",
" if data_dict[\"image_context\"][\"images\"]:\n",
" image_message = {\n",
" \"type\": \"image_url\",\n",
" \"image_url\": {\n",
" \"url\": f\"data:image/jpeg;base64,{data_dict['image_context']['images'][0]}\"\n",
" },\n",
" }\n",
" messages.append(image_message)\n",
"\n",
" # Adding the text message for analysis\n",
" text_message = {\n",
" \"type\": \"text\",\n",
" \"text\": (\n",
" \"As an expert art critic and historian, your task is to analyze and interpret images, \"\n",
" \"considering their historical and cultural significance. Alongside the images, you will be \"\n",
" \"provided with related text to offer context. Both will be retrieved from a vectorstore based \"\n",
" \"on user-input keywords. Please use your extensive knowledge and analytical skills to provide a \"\n",
" \"comprehensive summary that includes:\\n\"\n",
" \"- A detailed description of the visual elements in the image.\\n\"\n",
" \"- The historical and cultural context of the image.\\n\"\n",
" \"- An interpretation of the image's symbolism and meaning.\\n\"\n",
" \"- Connections between the image and the related text.\\n\\n\"\n",
" f\"User-provided keywords: {data_dict['question']}\\n\\n\"\n",
" \"Text and / or tables:\\n\"\n",
" f\"{formatted_texts}\"\n",
" ),\n",
" }\n",
" messages.append(text_message)\n",
"\n",
" return [HumanMessage(content=messages)]\n",
"\n",
"\n",
"model = ChatOpenAI(temperature=0, model=\"gpt-4-vision-preview\", max_tokens=1024)\n",
"\n",
"# RAG pipeline\n",
"chain = (\n",
" {\n",
" \"text_context\": text_retriever | RunnableLambda(split_image_text_types),\n",
" \"image_context\": image_retriever | RunnableLambda(split_image_text_types),\n",
" \"question\": RunnablePassthrough(),\n",
" }\n",
" | RunnableLambda(prompt_func)\n",
" | model\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "markdown",
"id": "1566096d-97c2-4ddc-ba4a-6ef88c525e4e",
"metadata": {},
"source": [
"## Test retrieval and run RAG"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "90121e56-674b-473b-871d-6e4753fd0c45",
"metadata": {},
"outputs": [],
"source": [
"from IPython.display import HTML, display\n",
"\n",
"\n",
"def plt_img_base64(img_base64):\n",
" # Create an HTML img tag with the base64 string as the source\n",
" image_html = f'<img src=\"data:image/jpeg;base64,{img_base64}\" />'\n",
"\n",
" # Display the image by rendering the HTML\n",
" display(HTML(image_html))\n",
"\n",
"\n",
"docs = text_retriever.invoke(\"Women with children\", k=5)\n",
"for doc in docs:\n",
" if is_base64(doc.page_content):\n",
" plt_img_base64(doc.page_content)\n",
" else:\n",
" print(doc.page_content)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "44eaa532-f035-4c04-b578-02339d42554c",
"metadata": {},
"outputs": [],
"source": [
"docs = image_retriever.invoke(\"Women with children\", k=5)\n",
"for doc in docs:\n",
" if is_base64(doc.page_content):\n",
" plt_img_base64(doc.page_content)\n",
" else:\n",
" print(doc.page_content)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "69fb15fd-76fc-49b4-806d-c4db2990027d",
"metadata": {},
"outputs": [],
"source": [
"chain.invoke(\"Women with children\")"
]
},
{
"cell_type": "markdown",
"id": "227f08b8-e732-4089-b65c-6eb6f9e48f15",
"metadata": {},
"source": [
"We can see the images retrieved in the LangSmith trace:\n",
"\n",
"LangSmith [trace](https://smith.langchain.com/public/69c558a5-49dc-4c60-a49b-3adbb70f74c5/r/e872c2c8-528c-468f-aefd-8b5cd730a673)."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -86,8 +86,7 @@
"\n",
"import oracledb\n",
"\n",
"# please update with your username, password, hostname and service_name\n",
"# please make sure this user has sufficient privileges to perform all below\n",
"# Update with your username, password, hostname, and service_name\n",
"username = \"\"\n",
"password = \"\"\n",
"dsn = \"\"\n",
@@ -97,40 +96,45 @@
" print(\"Connection successful!\")\n",
"\n",
" cursor = conn.cursor()\n",
" cursor.execute(\n",
" \"\"\"\n",
" begin\n",
" -- drop user\n",
" begin\n",
" execute immediate 'drop user testuser cascade';\n",
" exception\n",
" when others then\n",
" dbms_output.put_line('Error setting up user.');\n",
" end;\n",
" execute immediate 'create user testuser identified by testuser';\n",
" execute immediate 'grant connect, unlimited tablespace, create credential, create procedure, create any index to testuser';\n",
" execute immediate 'create or replace directory DEMO_PY_DIR as ''/scratch/hroy/view_storage/hroy_devstorage/demo/orachain''';\n",
" execute immediate 'grant read, write on directory DEMO_PY_DIR to public';\n",
" execute immediate 'grant create mining model to testuser';\n",
"\n",
" -- network access\n",
" begin\n",
" DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(\n",
" host => '*',\n",
" ace => xs$ace_type(privilege_list => xs$name_list('connect'),\n",
" principal_name => 'testuser',\n",
" principal_type => xs_acl.ptype_db));\n",
" end;\n",
" end;\n",
" \"\"\"\n",
" )\n",
" print(\"User setup done!\")\n",
" cursor.close()\n",
" try:\n",
" cursor.execute(\n",
" \"\"\"\n",
" begin\n",
" -- Drop user\n",
" begin\n",
" execute immediate 'drop user testuser cascade';\n",
" exception\n",
" when others then\n",
" dbms_output.put_line('Error dropping user: ' || SQLERRM);\n",
" end;\n",
" \n",
" -- Create user and grant privileges\n",
" execute immediate 'create user testuser identified by testuser';\n",
" execute immediate 'grant connect, unlimited tablespace, create credential, create procedure, create any index to testuser';\n",
" execute immediate 'create or replace directory DEMO_PY_DIR as ''/scratch/hroy/view_storage/hroy_devstorage/demo/orachain''';\n",
" execute immediate 'grant read, write on directory DEMO_PY_DIR to public';\n",
" execute immediate 'grant create mining model to testuser';\n",
" \n",
" -- Network access\n",
" begin\n",
" DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(\n",
" host => '*',\n",
" ace => xs$ace_type(privilege_list => xs$name_list('connect'),\n",
" principal_name => 'testuser',\n",
" principal_type => xs_acl.ptype_db)\n",
" );\n",
" end;\n",
" end;\n",
" \"\"\"\n",
" )\n",
" print(\"User setup done!\")\n",
" except Exception as e:\n",
" print(f\"User setup failed with error: {e}\")\n",
" finally:\n",
" cursor.close()\n",
" conn.close()\n",
"except Exception as e:\n",
" print(\"User setup failed!\")\n",
" cursor.close()\n",
" conn.close()\n",
" print(f\"Connection failed with error: {e}\")\n",
" sys.exit(1)"
]
},
@@ -526,8 +530,6 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"***Note:*** Currently, OracleEmbeddings processes each embedding generation request individually, without batching, by calling REST endpoints separately for each request. This method could potentially lead to exceeding the maximum request per minute quota set by some providers. However, we are actively working to enhance this process by implementing request batching, which will allow multiple embedding requests to be combined into fewer API calls, thereby optimizing our use of provider resources and adhering to their request limits. This update is expected to be rolled out soon, eliminating the current limitation.\n",
"\n",
"***Note:*** Users may need to configure a proxy to utilize third-party embedding generation providers, excluding the 'database' provider that utilizes an ONNX model."
]
},

View File

@@ -35,11 +35,11 @@ generate-files:
mkdir -p $(INTERMEDIATE_DIR)
cp -r $(SOURCE_DIR)/* $(INTERMEDIATE_DIR)
mkdir -p $(INTERMEDIATE_DIR)/templates
cp ../templates/docs/INDEX.md $(INTERMEDIATE_DIR)/templates/index.md
cp ../cookbook/README.md $(INTERMEDIATE_DIR)/cookbook.mdx
$(PYTHON) scripts/model_feat_table.py $(INTERMEDIATE_DIR)
$(PYTHON) scripts/document_loader_feat_table.py $(INTERMEDIATE_DIR)
$(PYTHON) scripts/copy_templates.py $(INTERMEDIATE_DIR)
wget -q https://raw.githubusercontent.com/langchain-ai/langserve/main/README.md -O $(INTERMEDIATE_DIR)/langserve.md
@@ -61,7 +61,7 @@ render:
$(PYTHON) scripts/notebook_convert.py $(INTERMEDIATE_DIR) $(OUTPUT_NEW_DOCS_DIR)
md-sync:
rsync -avm --include="*/" --include="*.mdx" --include="*.md" --include="*.png" --exclude="*" $(INTERMEDIATE_DIR)/ $(OUTPUT_NEW_DOCS_DIR)
rsync -avm --include="*/" --include="*.mdx" --include="*.md" --include="*.png" --include="*/_category_.yml" --exclude="*" $(INTERMEDIATE_DIR)/ $(OUTPUT_NEW_DOCS_DIR)
generate-references:
$(PYTHON) scripts/generate_api_reference_links.py --docs_dir $(OUTPUT_NEW_DOCS_DIR)

View File

@@ -10,12 +10,21 @@ from pathlib import Path
from typing import Dict, List, Literal, Optional, Sequence, TypedDict, Union
import toml
import typing_extensions
from langchain_core.runnables import Runnable, RunnableSerializable
from pydantic import BaseModel
ROOT_DIR = Path(__file__).parents[2].absolute()
HERE = Path(__file__).parent
ClassKind = Literal["TypedDict", "Regular", "Pydantic", "enum"]
ClassKind = Literal[
"TypedDict",
"Regular",
"Pydantic",
"enum",
"RunnablePydantic",
"RunnableNonPydantic",
]
class ClassInfo(TypedDict):
@@ -69,8 +78,36 @@ def _load_module_members(module_path: str, namespace: str) -> ModuleMembers:
continue
if inspect.isclass(type_):
if type(type_) == typing._TypedDictMeta: # type: ignore
# The classification of the class is used to select a template
# for the object when rendering the documentation.
# See `templates` directory for defined templates.
# This is a hacky solution to distinguish between different
# kinds of things that we want to render.
if type(type_) is typing_extensions._TypedDictMeta: # type: ignore
kind: ClassKind = "TypedDict"
elif type(type_) is typing._TypedDictMeta: # type: ignore
kind: ClassKind = "TypedDict"
elif (
issubclass(type_, Runnable)
and issubclass(type_, BaseModel)
and type_ is not Runnable
):
# RunnableSerializable subclasses from Pydantic,
# for which we use autodoc_pydantic for rendering.
# We need to distinguish these from regular Pydantic
# classes so we can hide inherited Runnable methods
# and provide a link to the Runnable interface from
# the template.
kind = "RunnablePydantic"
elif (
issubclass(type_, Runnable)
and not issubclass(type_, BaseModel)
and type_ is not Runnable
):
# These are not Pydantic classes but are Runnable.
# We'll hide all the inherited methods from Runnable
# but use a regular class template to render.
kind = "RunnableNonPydantic"
elif issubclass(type_, Enum):
kind = "enum"
elif issubclass(type_, BaseModel):
@@ -128,11 +165,11 @@ def _load_package_modules(
of the modules/packages are part of the package vs. 3rd party or built-in.
Parameters:
package_directory: Path to the package directory.
submodule: Optional name of submodule to load.
package_directory (Union[str, Path]): Path to the package directory.
submodule (Optional[str]): Optional name of submodule to load.
Returns:
list: A list of loaded module objects.
Dict[str, ModuleMembers]: A dictionary where keys are module names and values are ModuleMembers objects.
"""
package_path = (
Path(package_directory)
@@ -251,6 +288,10 @@ Classes
template = "enum.rst"
elif class_["kind"] == "Pydantic":
template = "pydantic.rst"
elif class_["kind"] == "RunnablePydantic":
template = "runnable_pydantic.rst"
elif class_["kind"] == "RunnableNonPydantic":
template = "runnable_non_pydantic.rst"
else:
template = "class.rst"

File diff suppressed because one or more lines are too long

View File

@@ -33,4 +33,4 @@
{% endblock %}
.. example_links:: {{ objname }}
.. example_links:: {{ objname }}

View File

@@ -15,6 +15,8 @@
:member-order: groupwise
:show-inheritance: True
:special-members: __call__
:exclude-members: construct, copy, dict, from_orm, parse_file, parse_obj, parse_raw, schema, schema_json, update_forward_refs, validate, json, is_lc_serializable, to_json, to_json_not_implemented, lc_secrets, lc_attributes, lc_id, get_lc_namespace
{% block attributes %}
{% endblock %}

View File

@@ -0,0 +1,40 @@
:mod:`{{module}}`.{{objname}}
{{ underline }}==============
.. NOTE:: {{objname}} implements the standard :py:class:`Runnable Interface <langchain_core.runnables.base.Runnable>`. 🏃
The :py:class:`Runnable Interface <langchain_core.runnables.base.Runnable>` has additional methods that are available on runnables, such as :py:meth:`with_types <langchain_core.runnables.base.Runnable.with_types>`, :py:meth:`with_retry <langchain_core.runnables.base.Runnable.with_retry>`, :py:meth:`assign <langchain_core.runnables.base.Runnable.assign>`, :py:meth:`bind <langchain_core.runnables.base.Runnable.bind>`, :py:meth:`get_graph <langchain_core.runnables.base.Runnable.get_graph>`, and more.
.. currentmodule:: {{ module }}
.. autoclass:: {{ objname }}
{% block attributes %}
{% if attributes %}
.. rubric:: {{ _('Attributes') }}
.. autosummary::
{% for item in attributes %}
~{{ name }}.{{ item }}
{%- endfor %}
{% endif %}
{% endblock %}
{% block methods %}
{% if methods %}
.. rubric:: {{ _('Methods') }}
.. autosummary::
{% for item in methods %}
~{{ name }}.{{ item }}
{%- endfor %}
{% for item in methods %}
.. automethod:: {{ name }}.{{ item }}
{%- endfor %}
{% endif %}
{% endblock %}
.. example_links:: {{ objname }}

View File

@@ -0,0 +1,24 @@
:mod:`{{module}}`.{{objname}}
{{ underline }}==============
.. NOTE:: {{objname}} implements the standard :py:class:`Runnable Interface <langchain_core.runnables.base.Runnable>`. 🏃
The :py:class:`Runnable Interface <langchain_core.runnables.base.Runnable>` has additional methods that are available on runnables, such as :py:meth:`with_types <langchain_core.runnables.base.Runnable.with_types>`, :py:meth:`with_retry <langchain_core.runnables.base.Runnable.with_retry>`, :py:meth:`assign <langchain_core.runnables.base.Runnable.assign>`, :py:meth:`bind <langchain_core.runnables.base.Runnable.bind>`, :py:meth:`get_graph <langchain_core.runnables.base.Runnable.get_graph>`, and more.
.. currentmodule:: {{ module }}
.. autopydantic_model:: {{ objname }}
:model-show-json: False
:model-show-config-summary: False
:model-show-validator-members: False
:model-show-field-summary: False
:field-signature-prefix: param
:members:
:undoc-members:
:inherited-members:
:member-order: groupwise
:show-inheritance: True
:special-members: __call__
:exclude-members: construct, copy, dict, from_orm, parse_file, parse_obj, parse_raw, schema, schema_json, update_forward_refs, validate, json, is_lc_serializable, to_json_not_implemented, lc_secrets, lc_attributes, lc_id, get_lc_namespace, astream_log, transform, atransform, get_output_schema, get_prompts, config_schema, map, pick, pipe, with_listeners, with_alisteners, with_config, with_fallbacks, with_types, with_retry, InputType, OutputType, config_specs, output_schema, get_input_schema, get_graph, get_name, input_schema, name, bind, assign
.. example_links:: {{ objname }}

View File

@@ -2,132 +2,129 @@
{%- set url_root = pathto('', 1) %}
{%- if url_root == '#' %}{% set url_root = '' %}{% endif %}
{%- if not embedded and docstitle %}
{%- set titlesuffix = " &mdash; "|safe + docstitle|e %}
{%- set titlesuffix = " &mdash; "|safe + docstitle|e %}
{%- else %}
{%- set titlesuffix = "" %}
{%- set titlesuffix = "" %}
{%- endif %}
{%- set lang_attr = 'en' %}
<!DOCTYPE html>
<!--[if IE 8]><html class="no-js lt-ie9" lang="{{ lang_attr }}" > <![endif]-->
<!--[if gt IE 8]><!--> <html class="no-js" lang="{{ lang_attr }}" > <!--<![endif]-->
<!--[if gt IE 8]><!-->
<html class="no-js" lang="{{ lang_attr }}"> <!--<![endif]-->
<head>
<meta charset="utf-8">
{{ metatags }}
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta charset="utf-8">
{{ metatags }}
<meta name="viewport" content="width=device-width, initial-scale=1.0">
{% block htmltitle %}
<title>{{ title|striptags|e }}{{ titlesuffix }}</title>
{% endblock %}
<link rel="canonical" href="https://api.python.langchain.com/en/latest/{{pagename}}.html" />
{% block htmltitle %}
<title>{{ title|striptags|e }}{{ titlesuffix }}</title>
{% endblock %}
<link rel="canonical"
href="https://api.python.langchain.com/en/latest/{{ pagename }}.html"/>
{% if favicon_url %}
<link rel="shortcut icon" href="{{ favicon_url|e }}"/>
{% endif %}
{% if favicon_url %}
<link rel="shortcut icon" href="{{ favicon_url|e }}"/>
{% endif %}
<link rel="stylesheet" href="{{ pathto('_static/css/vendor/bootstrap.min.css', 1) }}" type="text/css" />
{%- for css in css_files %}
{%- if css|attr("rel") %}
<link rel="{{ css.rel }}" href="{{ pathto(css.filename, 1) }}" type="text/css"{% if css.title is not none %} title="{{ css.title }}"{% endif %} />
{%- else %}
<link rel="stylesheet" href="{{ pathto(css, 1) }}" type="text/css" />
{%- endif %}
{%- endfor %}
<link rel="stylesheet" href="{{ pathto('_static/' + style, 1) }}" type="text/css" />
<script id="documentation_options" data-url_root="{{ pathto('', 1) }}" src="{{ pathto('_static/documentation_options.js', 1) }}"></script>
<script src="{{ pathto('_static/jquery.js', 1) }}"></script>
{%- block extrahead %} {% endblock %}
<link rel="stylesheet"
href="{{ pathto('_static/css/vendor/bootstrap.min.css', 1) }}"
type="text/css"/>
{%- for css in css_files %}
{%- if css|attr("rel") %}
<link rel="{{ css.rel }}" href="{{ pathto(css.filename, 1) }}"
type="text/css"{% if css.title is not none %}
title="{{ css.title }}"{% endif %} />
{%- else %}
<link rel="stylesheet" href="{{ pathto(css, 1) }}" type="text/css"/>
{%- endif %}
{%- endfor %}
<link rel="stylesheet" href="{{ pathto('_static/' + style, 1) }}" type="text/css"/>
<script id="documentation_options" data-url_root="{{ pathto('', 1) }}"
src="{{ pathto('_static/documentation_options.js', 1) }}"></script>
<script src="{{ pathto('_static/jquery.js', 1) }}"></script>
{%- block extrahead %} {% endblock %}
</head>
<body>
{% include "nav.html" %}
{%- block content %}
<div class="d-flex" id="sk-doc-wrapper">
<input type="checkbox" name="sk-toggle-checkbox" id="sk-toggle-checkbox">
<label id="sk-sidemenu-toggle" class="sk-btn-toggle-toc btn sk-btn-primary" for="sk-toggle-checkbox">Toggle Menu</label>
<div id="sk-sidebar-wrapper" class="border-right">
<div class="sk-sidebar-toc-wrapper">
<div class="btn-group w-100 mb-2" role="group" aria-label="rellinks">
{%- if prev %}
<a href="{{ prev.link|e }}" role="button" class="btn sk-btn-rellink py-1" sk-rellink-tooltip="{{ prev.title|striptags }}">Prev</a>
{%- else %}
<a href="#" role="button" class="btn sk-btn-rellink py-1 disabled"">Prev</a>
{%- endif %}
{%- if parents -%}
<a href="{{ parents[-1].link|e }}" role="button" class="btn sk-btn-rellink py-1" sk-rellink-tooltip="{{ parents[-1].title|striptags }}">Up</a>
{%- else %}
<a href="#" role="button" class="btn sk-btn-rellink disabled py-1">Up</a>
{%- endif %}
{%- if next %}
<a href="{{ next.link|e }}" role="button" class="btn sk-btn-rellink py-1" sk-rellink-tooltip="{{ next.title|striptags }}">Next</a>
{%- else %}
<a href="#" role="button" class="btn sk-btn-rellink py-1 disabled"">Next</a>
{%- endif %}
<div class="d-flex" id="sk-doc-wrapper">
<input type="checkbox" name="sk-toggle-checkbox" id="sk-toggle-checkbox">
<label id="sk-sidemenu-toggle" class="sk-btn-toggle-toc btn sk-btn-primary"
for="sk-toggle-checkbox">Toggle Menu</label>
<div id="sk-sidebar-wrapper" class="border-right">
<div class="sk-sidebar-toc-wrapper">
{%- if meta and meta['parenttoc']|tobool %}
<div class="sk-sidebar-toc">
{% set nav = get_nav_object(maxdepth=3, collapse=True, numbered=True) %}
<ul>
{% for main_nav_item in nav %}
{% if main_nav_item.active %}
<li>
<a href="{{ main_nav_item.url }}"
class="sk-toc-active">{{ main_nav_item.title }}</a>
</li>
<ul>
{% for nav_item in main_nav_item.children %}
<li>
<a href="{{ nav_item.url }}"
class="{% if nav_item.active %}sk-toc-active{% endif %}">{{ nav_item.title }}</a>
{% if nav_item.children %}
<ul>
{% for inner_child in nav_item.children %}
<li class="sk-toctree-l3">
<a href="{{ inner_child.url }}">{{ inner_child.title }}</a>
</li>
{% endfor %}
</ul>
{% endif %}
</li>
{% endfor %}
</ul>
{% endif %}
{% endfor %}
</ul>
</div>
{%- elif meta and meta['globalsidebartoc']|tobool %}
<div class="sk-sidebar-toc sk-sidebar-global-toc">
{{ toctree(maxdepth=2, titles_only=True) }}
</div>
{%- else %}
<div class="sk-sidebar-toc">
{{ toc }}
</div>
{%- endif %}
</div>
</div>
{%- if meta and meta['parenttoc']|tobool %}
<div class="sk-sidebar-toc">
{% set nav = get_nav_object(maxdepth=3, collapse=True, numbered=True) %}
<ul>
{% for main_nav_item in nav %}
{% if main_nav_item.active %}
<li>
<a href="{{ main_nav_item.url }}" class="sk-toc-active">{{ main_nav_item.title }}</a>
</li>
<ul>
{% for nav_item in main_nav_item.children %}
<li>
<a href="{{ nav_item.url }}" class="{% if nav_item.active %}sk-toc-active{% endif %}">{{ nav_item.title }}</a>
{% if nav_item.children %}
<ul>
{% for inner_child in nav_item.children %}
<li class="sk-toctree-l3">
<a href="{{ inner_child.url }}">{{ inner_child.title }}</a>
</li>
{% endfor %}
</ul>
{% endif %}
</li>
{% endfor %}
</ul>
{% endif %}
{% endfor %}
</ul>
<div id="sk-page-content-wrapper">
<div class="sk-page-content container-fluid body px-md-3" role="main">
{% block body %}{% endblock %}
</div>
{%- elif meta and meta['globalsidebartoc']|tobool %}
<div class="sk-sidebar-toc sk-sidebar-global-toc">
{{ toctree(maxdepth=2, titles_only=True) }}
<div class="container">
<footer class="sk-content-footer">
{%- if pagename != 'index' %}
{%- if show_copyright %}
{%- if hasdoc('copyright') %}
{% trans path=pathto('copyright'), copyright=copyright|e %}
&copy; {{ copyright }}.{% endtrans %}
{%- else %}
{% trans copyright=copyright|e %}&copy; {{ copyright }}
.{% endtrans %}
{%- endif %}
{%- endif %}
{%- if last_updated %}
{% trans last_updated=last_updated|e %}Last updated
on {{ last_updated }}.{% endtrans %}
{%- endif %}
{%- if show_source and has_source and sourcename %}
<a href="{{ pathto('_sources/' + sourcename, true)|e }}"
rel="nofollow">{{ _('Show this page source') }}</a>
{%- endif %}
{%- endif %}
</footer>
</div>
{%- else %}
<div class="sk-sidebar-toc">
{{ toc }}
</div>
{%- endif %}
</div>
</div>
</div>
<div id="sk-page-content-wrapper">
<div class="sk-page-content container-fluid body px-md-3" role="main">
{% block body %}{% endblock %}
</div>
<div class="container">
<footer class="sk-content-footer">
{%- if pagename != 'index' %}
{%- if show_copyright %}
{%- if hasdoc('copyright') %}
{% trans path=pathto('copyright'), copyright=copyright|e %}&copy; {{ copyright }}.{% endtrans %}
{%- else %}
{% trans copyright=copyright|e %}&copy; {{ copyright }}.{% endtrans %}
{%- endif %}
{%- endif %}
{%- if last_updated %}
{% trans last_updated=last_updated|e %}Last updated on {{ last_updated }}.{% endtrans %}
{%- endif %}
{%- if show_source and has_source and sourcename %}
<a href="{{ pathto('_sources/' + sourcename, true)|e }}" rel="nofollow">{{ _('Show this page source') }}</a>
{%- endif %}
{%- endif %}
</footer>
</div>
</div>
</div>
{%- endblock %}
<script src="{{ pathto('_static/js/vendor/bootstrap.min.js', 1) }}"></script>
{% include "javascript.html" %}

File diff suppressed because it is too large

View File

@@ -2,32 +2,154 @@
LangChain implements the latest research in the field of Natural Language Processing.
This page contains `arXiv` papers referenced in the LangChain Documentation, API Reference,
and Templates.
Templates, and Cookbooks.
In the other direction, scientists use LangChain in their own research and cite it in their papers.
You can find [such papers](https://arxiv.org/search/?query=langchain&searchtype=all&source=header) here.
## Summary
| arXiv id / Title | Authors | Published date 🔻 | LangChain Documentation|
|------------------|---------|-------------------|------------------------|
| `2402.03620v1` [Self-Discover: Large Language Models Self-Compose Reasoning Structures](http://arxiv.org/abs/2402.03620v1) | Pei Zhou, Jay Pujara, Xiang Ren, et al. | 2024-02-06 | `Cookbook:` [self-discover](https://github.com/langchain-ai/langchain/blob/master/cookbook/self-discover.ipynb)
| `2401.18059v1` [RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval](http://arxiv.org/abs/2401.18059v1) | Parth Sarthi, Salman Abdullah, Aditi Tuli, et al. | 2024-01-31 | `Cookbook:` [RAPTOR](https://github.com/langchain-ai/langchain/blob/master/cookbook/RAPTOR.ipynb)
| `2401.15884v2` [Corrective Retrieval Augmented Generation](http://arxiv.org/abs/2401.15884v2) | Shi-Qi Yan, Jia-Chen Gu, Yun Zhu, et al. | 2024-01-29 | `Cookbook:` [langgraph_crag](https://github.com/langchain-ai/langchain/blob/master/cookbook/langgraph_crag.ipynb)
| `2401.04088v1` [Mixtral of Experts](http://arxiv.org/abs/2401.04088v1) | Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, et al. | 2024-01-08 | `Cookbook:` [together_ai](https://github.com/langchain-ai/langchain/blob/master/cookbook/together_ai.ipynb)
| `2312.06648v2` [Dense X Retrieval: What Retrieval Granularity Should We Use?](http://arxiv.org/abs/2312.06648v2) | Tong Chen, Hongwei Wang, Sihao Chen, et al. | 2023-12-11 | `Template:` [propositional-retrieval](https://python.langchain.com/docs/templates/propositional-retrieval)
| `2311.09210v1` [Chain-of-Note: Enhancing Robustness in Retrieval-Augmented Language Models](http://arxiv.org/abs/2311.09210v1) | Wenhao Yu, Hongming Zhang, Xiaoman Pan, et al. | 2023-11-15 | `Template:` [chain-of-note-wiki](https://python.langchain.com/docs/templates/chain-of-note-wiki)
| `2310.06117v2` [Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models](http://arxiv.org/abs/2310.06117v2) | Huaixiu Steven Zheng, Swaroop Mishra, Xinyun Chen, et al. | 2023-10-09 | `Template:` [stepback-qa-prompting](https://python.langchain.com/docs/templates/stepback-qa-prompting)
| `2305.14283v3` [Query Rewriting for Retrieval-Augmented Large Language Models](http://arxiv.org/abs/2305.14283v3) | Xinbei Ma, Yeyun Gong, Pengcheng He, et al. | 2023-05-23 | `Template:` [rewrite-retrieve-read](https://python.langchain.com/docs/templates/rewrite-retrieve-read)
| `2305.08291v1` [Large Language Model Guided Tree-of-Thought](http://arxiv.org/abs/2305.08291v1) | Jieyi Long | 2023-05-15 | `API:` [langchain_experimental.tot](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.tot)
| `2303.17580v4` [HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face](http://arxiv.org/abs/2303.17580v4) | Yongliang Shen, Kaitao Song, Xu Tan, et al. | 2023-03-30 | `API:` [langchain_experimental.autonomous_agents](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.autonomous_agents)
| `2310.11511v1` [Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection](http://arxiv.org/abs/2310.11511v1) | Akari Asai, Zeqiu Wu, Yizhong Wang, et al. | 2023-10-17 | `Cookbook:` [langgraph_self_rag](https://github.com/langchain-ai/langchain/blob/master/cookbook/langgraph_self_rag.ipynb)
| `2310.06117v2` [Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models](http://arxiv.org/abs/2310.06117v2) | Huaixiu Steven Zheng, Swaroop Mishra, Xinyun Chen, et al. | 2023-10-09 | `Template:` [stepback-qa-prompting](https://python.langchain.com/docs/templates/stepback-qa-prompting), `Cookbook:` [stepback-qa](https://github.com/langchain-ai/langchain/blob/master/cookbook/stepback-qa.ipynb)
| `2307.09288v2` [Llama 2: Open Foundation and Fine-Tuned Chat Models](http://arxiv.org/abs/2307.09288v2) | Hugo Touvron, Louis Martin, Kevin Stone, et al. | 2023-07-18 | `Cookbook:` [Semi_Structured_RAG](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_Structured_RAG.ipynb)
| `2305.14283v3` [Query Rewriting for Retrieval-Augmented Large Language Models](http://arxiv.org/abs/2305.14283v3) | Xinbei Ma, Yeyun Gong, Pengcheng He, et al. | 2023-05-23 | `Template:` [rewrite-retrieve-read](https://python.langchain.com/docs/templates/rewrite-retrieve-read), `Cookbook:` [rewrite](https://github.com/langchain-ai/langchain/blob/master/cookbook/rewrite.ipynb)
| `2305.08291v1` [Large Language Model Guided Tree-of-Thought](http://arxiv.org/abs/2305.08291v1) | Jieyi Long | 2023-05-15 | `API:` [langchain_experimental.tot](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.tot), `Cookbook:` [tree_of_thought](https://github.com/langchain-ai/langchain/blob/master/cookbook/tree_of_thought.ipynb)
| `2305.04091v3` [Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models](http://arxiv.org/abs/2305.04091v3) | Lei Wang, Wanyu Xu, Yihuai Lan, et al. | 2023-05-06 | `Cookbook:` [plan_and_execute_agent](https://github.com/langchain-ai/langchain/blob/master/cookbook/plan_and_execute_agent.ipynb)
| `2304.08485v2` [Visual Instruction Tuning](http://arxiv.org/abs/2304.08485v2) | Haotian Liu, Chunyuan Li, Qingyang Wu, et al. | 2023-04-17 | `Cookbook:` [Semi_structured_and_multi_modal_RAG](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_structured_and_multi_modal_RAG.ipynb), [Semi_structured_multi_modal_RAG_LLaMA2](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_structured_multi_modal_RAG_LLaMA2.ipynb)
| `2304.03442v2` [Generative Agents: Interactive Simulacra of Human Behavior](http://arxiv.org/abs/2304.03442v2) | Joon Sung Park, Joseph C. O'Brien, Carrie J. Cai, et al. | 2023-04-07 | `Cookbook:` [multiagent_bidding](https://github.com/langchain-ai/langchain/blob/master/cookbook/multiagent_bidding.ipynb), [generative_agents_interactive_simulacra_of_human_behavior](https://github.com/langchain-ai/langchain/blob/master/cookbook/generative_agents_interactive_simulacra_of_human_behavior.ipynb)
| `2303.17760v2` [CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society](http://arxiv.org/abs/2303.17760v2) | Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, et al. | 2023-03-31 | `Cookbook:` [camel_role_playing](https://github.com/langchain-ai/langchain/blob/master/cookbook/camel_role_playing.ipynb)
| `2303.17580v4` [HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face](http://arxiv.org/abs/2303.17580v4) | Yongliang Shen, Kaitao Song, Xu Tan, et al. | 2023-03-30 | `API:` [langchain_experimental.autonomous_agents](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.autonomous_agents), `Cookbook:` [hugginggpt](https://github.com/langchain-ai/langchain/blob/master/cookbook/hugginggpt.ipynb)
| `2303.08774v6` [GPT-4 Technical Report](http://arxiv.org/abs/2303.08774v6) | OpenAI, Josh Achiam, Steven Adler, et al. | 2023-03-15 | `Docs:` [docs/integrations/vectorstores/mongodb_atlas](https://python.langchain.com/docs/integrations/vectorstores/mongodb_atlas)
| `2301.10226v4` [A Watermark for Large Language Models](http://arxiv.org/abs/2301.10226v4) | John Kirchenbauer, Jonas Geiping, Yuxin Wen, et al. | 2023-01-24 | `API:` [langchain_community.llms...OCIModelDeploymentTGI](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI.html#langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI), [langchain_community.llms...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_community.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint)
| `2212.10496v1` [Precise Zero-Shot Dense Retrieval without Relevance Labels](http://arxiv.org/abs/2212.10496v1) | Luyu Gao, Xueguang Ma, Jimmy Lin, et al. | 2022-12-20 | `API:` [langchain.chains...HypotheticalDocumentEmbedder](https://api.python.langchain.com/en/latest/chains/langchain.chains.hyde.base.HypotheticalDocumentEmbedder.html#langchain.chains.hyde.base.HypotheticalDocumentEmbedder), `Template:` [hyde](https://python.langchain.com/docs/templates/hyde)
| `2301.10226v4` [A Watermark for Large Language Models](http://arxiv.org/abs/2301.10226v4) | John Kirchenbauer, Jonas Geiping, Yuxin Wen, et al. | 2023-01-24 | `API:` [langchain_community...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_huggingface...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community...OCIModelDeploymentTGI](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI.html#langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI), [langchain_community...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference)
| `2212.10496v1` [Precise Zero-Shot Dense Retrieval without Relevance Labels](http://arxiv.org/abs/2212.10496v1) | Luyu Gao, Xueguang Ma, Jimmy Lin, et al. | 2022-12-20 | `API:` [langchain...HypotheticalDocumentEmbedder](https://api.python.langchain.com/en/latest/chains/langchain.chains.hyde.base.HypotheticalDocumentEmbedder.html#langchain.chains.hyde.base.HypotheticalDocumentEmbedder), `Template:` [hyde](https://python.langchain.com/docs/templates/hyde), `Cookbook:` [hypothetical_document_embeddings](https://github.com/langchain-ai/langchain/blob/master/cookbook/hypothetical_document_embeddings.ipynb)
| `2212.07425v3` [Robust and Explainable Identification of Logical Fallacies in Natural Language Arguments](http://arxiv.org/abs/2212.07425v3) | Zhivar Sourati, Vishnu Priya Prasanna Venkatesh, Darshan Deshpande, et al. | 2022-12-12 | `API:` [langchain_experimental.fallacy_removal](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.fallacy_removal)
| `2211.13892v2` [Complementary Explanations for Effective In-Context Learning](http://arxiv.org/abs/2211.13892v2) | Xi Ye, Srinivasan Iyer, Asli Celikyilmaz, et al. | 2022-11-25 | `API:` [langchain_core.example_selectors...MaxMarginalRelevanceExampleSelector](https://api.python.langchain.com/en/latest/example_selectors/langchain_core.example_selectors.semantic_similarity.MaxMarginalRelevanceExampleSelector.html#langchain_core.example_selectors.semantic_similarity.MaxMarginalRelevanceExampleSelector)
| `2211.10435v2` [PAL: Program-aided Language Models](http://arxiv.org/abs/2211.10435v2) | Luyu Gao, Aman Madaan, Shuyan Zhou, et al. | 2022-11-18 | `API:` [langchain_experimental.pal_chain...PALChain](https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html#langchain_experimental.pal_chain.base.PALChain), [langchain_experimental.pal_chain](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.pal_chain)
| `2211.13892v2` [Complementary Explanations for Effective In-Context Learning](http://arxiv.org/abs/2211.13892v2) | Xi Ye, Srinivasan Iyer, Asli Celikyilmaz, et al. | 2022-11-25 | `API:` [langchain_core...MaxMarginalRelevanceExampleSelector](https://api.python.langchain.com/en/latest/example_selectors/langchain_core.example_selectors.semantic_similarity.MaxMarginalRelevanceExampleSelector.html#langchain_core.example_selectors.semantic_similarity.MaxMarginalRelevanceExampleSelector)
| `2211.10435v2` [PAL: Program-aided Language Models](http://arxiv.org/abs/2211.10435v2) | Luyu Gao, Aman Madaan, Shuyan Zhou, et al. | 2022-11-18 | `API:` [langchain_experimental...PALChain](https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html#langchain_experimental.pal_chain.base.PALChain), [langchain_experimental.pal_chain](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.pal_chain), `Cookbook:` [program_aided_language_model](https://github.com/langchain-ai/langchain/blob/master/cookbook/program_aided_language_model.ipynb)
| `2210.03629v3` [ReAct: Synergizing Reasoning and Acting in Language Models](http://arxiv.org/abs/2210.03629v3) | Shunyu Yao, Jeffrey Zhao, Dian Yu, et al. | 2022-10-06 | `Docs:` [docs/integrations/providers/cohere](https://python.langchain.com/docs/integrations/providers/cohere), [docs/integrations/chat/huggingface](https://python.langchain.com/docs/integrations/chat/huggingface), [docs/integrations/tools/ionic_shopping](https://python.langchain.com/docs/integrations/tools/ionic_shopping), `API:` [langchain...create_react_agent](https://api.python.langchain.com/en/latest/agents/langchain.agents.react.agent.create_react_agent.html#langchain.agents.react.agent.create_react_agent), [langchain...TrajectoryEvalChain](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain.html#langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain)
| `2209.10785v2` [Deep Lake: a Lakehouse for Deep Learning](http://arxiv.org/abs/2209.10785v2) | Sasun Hambardzumyan, Abhinav Tuli, Levon Ghukasyan, et al. | 2022-09-22 | `Docs:` [docs/integrations/providers/activeloop_deeplake](https://python.langchain.com/docs/integrations/providers/activeloop_deeplake)
| `2205.12654v1` [Bitext Mining Using Distilled Sentence Representations for Low-Resource Languages](http://arxiv.org/abs/2205.12654v1) | Kevin Heffernan, Onur Çelebi, Holger Schwenk | 2022-05-25 | `API:` [langchain_community.embeddings...LaserEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.laser.LaserEmbeddings.html#langchain_community.embeddings.laser.LaserEmbeddings)
| `2204.00498v1` [Evaluating the Text-to-SQL Capabilities of Large Language Models](http://arxiv.org/abs/2204.00498v1) | Nitarshan Rajkumar, Raymond Li, Dzmitry Bahdanau | 2022-03-15 | `API:` [langchain_community.utilities...SQLDatabase](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.sql_database.SQLDatabase.html#langchain_community.utilities.sql_database.SQLDatabase), [langchain_community.utilities...SparkSQL](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.spark_sql.SparkSQL.html#langchain_community.utilities.spark_sql.SparkSQL)
| `2202.00666v5` [Locally Typical Sampling](http://arxiv.org/abs/2202.00666v5) | Clara Meister, Tiago Pimentel, Gian Wiher, et al. | 2022-02-01 | `API:` [langchain_community.llms...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_community.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint)
| `2205.12654v1` [Bitext Mining Using Distilled Sentence Representations for Low-Resource Languages](http://arxiv.org/abs/2205.12654v1) | Kevin Heffernan, Onur Çelebi, Holger Schwenk | 2022-05-25 | `API:` [langchain_community...LaserEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.laser.LaserEmbeddings.html#langchain_community.embeddings.laser.LaserEmbeddings)
| `2204.00498v1` [Evaluating the Text-to-SQL Capabilities of Large Language Models](http://arxiv.org/abs/2204.00498v1) | Nitarshan Rajkumar, Raymond Li, Dzmitry Bahdanau | 2022-03-15 | `API:` [langchain_community...SparkSQL](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.spark_sql.SparkSQL.html#langchain_community.utilities.spark_sql.SparkSQL), [langchain_community...SQLDatabase](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.sql_database.SQLDatabase.html#langchain_community.utilities.sql_database.SQLDatabase)
| `2202.00666v5` [Locally Typical Sampling](http://arxiv.org/abs/2202.00666v5) | Clara Meister, Tiago Pimentel, Gian Wiher, et al. | 2022-02-01 | `API:` [langchain_community...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_huggingface...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference)
| `2103.00020v1` [Learning Transferable Visual Models From Natural Language Supervision](http://arxiv.org/abs/2103.00020v1) | Alec Radford, Jong Wook Kim, Chris Hallacy, et al. | 2021-02-26 | `API:` [langchain_experimental.open_clip](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.open_clip)
| `1909.05858v2` [CTRL: A Conditional Transformer Language Model for Controllable Generation](http://arxiv.org/abs/1909.05858v2) | Nitish Shirish Keskar, Bryan McCann, Lav R. Varshney, et al. | 2019-09-11 | `API:` [langchain_community...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_huggingface...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference)
| `1908.10084v1` [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](http://arxiv.org/abs/1908.10084v1) | Nils Reimers, Iryna Gurevych | 2019-08-27 | `Docs:` [docs/integrations/text_embedding/sentence_transformers](https://python.langchain.com/docs/integrations/text_embedding/sentence_transformers)
## Self-Discover: Large Language Models Self-Compose Reasoning Structures
- **arXiv id:** 2402.03620v1
- **Title:** Self-Discover: Large Language Models Self-Compose Reasoning Structures
- **Authors:** Pei Zhou, Jay Pujara, Xiang Ren, et al.
- **Published Date:** 2024-02-06
- **URL:** http://arxiv.org/abs/2402.03620v1
- **LangChain:**
- **Cookbook:** [self-discover](https://github.com/langchain-ai/langchain/blob/master/cookbook/self-discover.ipynb)
**Abstract:** We introduce SELF-DISCOVER, a general framework for LLMs to self-discover the
task-intrinsic reasoning structures to tackle complex reasoning problems that
are challenging for typical prompting methods. Core to the framework is a
self-discovery process where LLMs select multiple atomic reasoning modules such
as critical thinking and step-by-step thinking, and compose them into an
explicit reasoning structure for LLMs to follow during decoding. SELF-DISCOVER
substantially improves GPT-4 and PaLM 2's performance on challenging reasoning
benchmarks such as BigBench-Hard, grounded agent reasoning, and MATH, by as
much as 32% compared to Chain of Thought (CoT). Furthermore, SELF-DISCOVER
outperforms inference-intensive methods such as CoT-Self-Consistency by more
than 20%, while requiring 10-40x fewer inference compute. Finally, we show that
the self-discovered reasoning structures are universally applicable across
model families: from PaLM 2-L to GPT-4, and from GPT-4 to Llama2, and share
commonalities with human reasoning patterns.
## RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval
- **arXiv id:** 2401.18059v1
- **Title:** RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval
- **Authors:** Parth Sarthi, Salman Abdullah, Aditi Tuli, et al.
- **Published Date:** 2024-01-31
- **URL:** http://arxiv.org/abs/2401.18059v1
- **LangChain:**
- **Cookbook:** [RAPTOR](https://github.com/langchain-ai/langchain/blob/master/cookbook/RAPTOR.ipynb)
**Abstract:** Retrieval-augmented language models can better adapt to changes in world
state and incorporate long-tail knowledge. However, most existing methods
retrieve only short contiguous chunks from a retrieval corpus, limiting
holistic understanding of the overall document context. We introduce the novel
approach of recursively embedding, clustering, and summarizing chunks of text,
constructing a tree with differing levels of summarization from the bottom up.
At inference time, our RAPTOR model retrieves from this tree, integrating
information across lengthy documents at different levels of abstraction.
Controlled experiments show that retrieval with recursive summaries offers
significant improvements over traditional retrieval-augmented LMs on several
tasks. On question-answering tasks that involve complex, multi-step reasoning,
we show state-of-the-art results; for example, by coupling RAPTOR retrieval
with the use of GPT-4, we can improve the best performance on the QuALITY
benchmark by 20% in absolute accuracy.
## Corrective Retrieval Augmented Generation
- **arXiv id:** 2401.15884v2
- **Title:** Corrective Retrieval Augmented Generation
- **Authors:** Shi-Qi Yan, Jia-Chen Gu, Yun Zhu, et al.
- **Published Date:** 2024-01-29
- **URL:** http://arxiv.org/abs/2401.15884v2
- **LangChain:**
- **Cookbook:** [langgraph_crag](https://github.com/langchain-ai/langchain/blob/master/cookbook/langgraph_crag.ipynb)
**Abstract:** Large language models (LLMs) inevitably exhibit hallucinations since the
accuracy of generated texts cannot be secured solely by the parametric
knowledge they encapsulate. Although retrieval-augmented generation (RAG) is a
practicable complement to LLMs, it relies heavily on the relevance of retrieved
documents, raising concerns about how the model behaves if retrieval goes
wrong. To this end, we propose the Corrective Retrieval Augmented Generation
(CRAG) to improve the robustness of generation. Specifically, a lightweight
retrieval evaluator is designed to assess the overall quality of retrieved
documents for a query, returning a confidence degree based on which different
knowledge retrieval actions can be triggered. Since retrieval from static and
limited corpora can only return sub-optimal documents, large-scale web searches
are utilized as an extension for augmenting the retrieval results. Besides, a
decompose-then-recompose algorithm is designed for retrieved documents to
selectively focus on key information and filter out irrelevant information in
them. CRAG is plug-and-play and can be seamlessly coupled with various
RAG-based approaches. Experiments on four datasets covering short- and
long-form generation tasks show that CRAG can significantly improve the
performance of RAG-based approaches.
## Mixtral of Experts
- **arXiv id:** 2401.04088v1
- **Title:** Mixtral of Experts
- **Authors:** Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, et al.
- **Published Date:** 2024-01-08
- **URL:** http://arxiv.org/abs/2401.04088v1
- **LangChain:**
- **Cookbook:** [together_ai](https://github.com/langchain-ai/langchain/blob/master/cookbook/together_ai.ipynb)
**Abstract:** We introduce Mixtral 8x7B, a Sparse Mixture of Experts (SMoE) language model.
Mixtral has the same architecture as Mistral 7B, with the difference that each
layer is composed of 8 feedforward blocks (i.e. experts). For every token, at
each layer, a router network selects two experts to process the current state
and combine their outputs. Even though each token only sees two experts, the
selected experts can be different at each timestep. As a result, each token has
access to 47B parameters, but only uses 13B active parameters during inference.
Mixtral was trained with a context size of 32k tokens and it outperforms or
matches Llama 2 70B and GPT-3.5 across all evaluated benchmarks. In particular,
Mixtral vastly outperforms Llama 2 70B on mathematics, code generation, and
multilingual benchmarks. We also provide a model fine-tuned to follow
instructions, Mixtral 8x7B - Instruct, that surpasses GPT-3.5 Turbo,
Claude-2.1, Gemini Pro, and Llama 2 70B - chat model on human benchmarks. Both
the base and instruct models are released under the Apache 2.0 license.
## Dense X Retrieval: What Retrieval Granularity Should We Use?
- **arXiv id:** 2312.06648v2
@@ -91,6 +213,39 @@ average improvement of +7.9 in EM score given entirely noisy retrieved
documents and +10.5 in rejection rates for real-time questions that fall
outside the pre-training knowledge scope.
## Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection
- **arXiv id:** 2310.11511v1
- **Title:** Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection
- **Authors:** Akari Asai, Zeqiu Wu, Yizhong Wang, et al.
- **Published Date:** 2023-10-17
- **URL:** http://arxiv.org/abs/2310.11511v1
- **LangChain:**
- **Cookbook:** [langgraph_self_rag](https://github.com/langchain-ai/langchain/blob/master/cookbook/langgraph_self_rag.ipynb)
**Abstract:** Despite their remarkable capabilities, large language models (LLMs) often
produce responses containing factual inaccuracies due to their sole reliance on
the parametric knowledge they encapsulate. Retrieval-Augmented Generation
(RAG), an ad hoc approach that augments LMs with retrieval of relevant
knowledge, decreases such issues. However, indiscriminately retrieving and
incorporating a fixed number of retrieved passages, regardless of whether
retrieval is necessary, or passages are relevant, diminishes LM versatility or
can lead to unhelpful response generation. We introduce a new framework called
Self-Reflective Retrieval-Augmented Generation (Self-RAG) that enhances an LM's
quality and factuality through retrieval and self-reflection. Our framework
trains a single arbitrary LM that adaptively retrieves passages on-demand, and
generates and reflects on retrieved passages and its own generations using
special tokens, called reflection tokens. Generating reflection tokens makes
the LM controllable during the inference phase, enabling it to tailor its
behavior to diverse task requirements. Experiments show that Self-RAG (7B and
13B parameters) significantly outperforms state-of-the-art LLMs and
retrieval-augmented models on a diverse set of tasks. Specifically, Self-RAG
outperforms ChatGPT and retrieval-augmented Llama2-chat on Open-domain QA,
reasoning and fact verification tasks, and it shows significant gains in
improving factuality and citation accuracy for long-form generations relative
to these models.
## Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models
- **arXiv id:** 2310.06117v2
@@ -101,6 +256,7 @@ outside the pre-training knowledge scope.
- **LangChain:**
- **Template:** [stepback-qa-prompting](https://python.langchain.com/docs/templates/stepback-qa-prompting)
- **Cookbook:** [stepback-qa](https://github.com/langchain-ai/langchain/blob/master/cookbook/stepback-qa.ipynb)
**Abstract:** We present Step-Back Prompting, a simple prompting technique that enables
LLMs to do abstractions to derive high-level concepts and first principles from
@@ -113,6 +269,27 @@ including STEM, Knowledge QA, and Multi-Hop Reasoning. For instance, Step-Back
Prompting improves PaLM-2L performance on MMLU (Physics and Chemistry) by 7%
and 11% respectively, TimeQA by 27%, and MuSiQue by 7%.
## Llama 2: Open Foundation and Fine-Tuned Chat Models
- **arXiv id:** 2307.09288v2
- **Title:** Llama 2: Open Foundation and Fine-Tuned Chat Models
- **Authors:** Hugo Touvron, Louis Martin, Kevin Stone, et al.
- **Published Date:** 2023-07-18
- **URL:** http://arxiv.org/abs/2307.09288v2
- **LangChain:**
- **Cookbook:** [Semi_Structured_RAG](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_Structured_RAG.ipynb)
**Abstract:** In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs.
## Query Rewriting for Retrieval-Augmented Large Language Models
- **arXiv id:** 2305.14283v3
@@ -123,6 +300,7 @@ and 11% respectively, TimeQA by 27%, and MuSiQue by 7%.
- **LangChain:**
- **Template:** [rewrite-retrieve-read](https://python.langchain.com/docs/templates/rewrite-retrieve-read)
- **Cookbook:** [rewrite](https://github.com/langchain-ai/langchain/blob/master/cookbook/rewrite.ipynb)
**Abstract:** Large Language Models (LLMs) play powerful, black-box readers in the
retrieve-then-read pipeline, making remarkable progress in knowledge-intensive
@@ -152,6 +330,7 @@ for retrieval-augmented LLM.
- **LangChain:**
- **API Reference:** [langchain_experimental.tot](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.tot)
- **Cookbook:** [tree_of_thought](https://github.com/langchain-ai/langchain/blob/master/cookbook/tree_of_thought.ipynb)
**Abstract:** In this paper, we introduce the Tree-of-Thought (ToT) framework, a novel
approach aimed at improving the problem-solving capabilities of auto-regressive
@@ -171,6 +350,132 @@ significantly increase the success rate of Sudoku puzzle solving. Our
implementation of the ToT-based Sudoku solver is available on GitHub:
\url{https://github.com/jieyilong/tree-of-thought-puzzle-solver}.
## Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models
- **arXiv id:** 2305.04091v3
- **Title:** Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models
- **Authors:** Lei Wang, Wanyu Xu, Yihuai Lan, et al.
- **Published Date:** 2023-05-06
- **URL:** http://arxiv.org/abs/2305.04091v3
- **LangChain:**
- **Cookbook:** [plan_and_execute_agent](https://github.com/langchain-ai/langchain/blob/master/cookbook/plan_and_execute_agent.ipynb)
**Abstract:** Large language models (LLMs) have recently been shown to deliver impressive
performance in various NLP tasks. To tackle multi-step reasoning tasks,
few-shot chain-of-thought (CoT) prompting includes a few manually crafted
step-by-step reasoning demonstrations which enable LLMs to explicitly generate
reasoning steps and improve their reasoning task accuracy. To eliminate the
manual effort, Zero-shot-CoT concatenates the target problem statement with
"Let's think step by step" as an input prompt to LLMs. Despite the success of
Zero-shot-CoT, it still suffers from three pitfalls: calculation errors,
missing-step errors, and semantic misunderstanding errors. To address the
missing-step errors, we propose Plan-and-Solve (PS) Prompting. It consists of
two components: first, devising a plan to divide the entire task into smaller
subtasks, and then carrying out the subtasks according to the plan. To address
the calculation errors and improve the quality of generated reasoning steps, we
extend PS prompting with more detailed instructions and derive PS+ prompting.
We evaluate our proposed prompting strategy on ten datasets across three
reasoning problems. The experimental results over GPT-3 show that our proposed
zero-shot prompting consistently outperforms Zero-shot-CoT across all datasets
by a large margin, is comparable to or exceeds Zero-shot-Program-of-Thought
Prompting, and has comparable performance with 8-shot CoT prompting on the math
reasoning problem. The code can be found at
https://github.com/AGI-Edgerunners/Plan-and-Solve-Prompting.
## Visual Instruction Tuning
- **arXiv id:** 2304.08485v2
- **Title:** Visual Instruction Tuning
- **Authors:** Haotian Liu, Chunyuan Li, Qingyang Wu, et al.
- **Published Date:** 2023-04-17
- **URL:** http://arxiv.org/abs/2304.08485v2
- **LangChain:**
- **Cookbook:** [Semi_structured_and_multi_modal_RAG](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_structured_and_multi_modal_RAG.ipynb), [Semi_structured_multi_modal_RAG_LLaMA2](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_structured_multi_modal_RAG_LLaMA2.ipynb)
**Abstract:** Instruction tuning large language models (LLMs) using machine-generated
instruction-following data has improved zero-shot capabilities on new tasks,
but the idea is less explored in the multimodal field. In this paper, we
present the first attempt to use language-only GPT-4 to generate multimodal
language-image instruction-following data. By instruction tuning on such
generated data, we introduce LLaVA: Large Language and Vision Assistant, an
end-to-end trained large multimodal model that connects a vision encoder and
LLM for general-purpose visual and language understanding. Our early experiments
show that LLaVA demonstrates impressive multimodal chat abilities, sometimes
exhibiting the behaviors of multimodal GPT-4 on unseen images/instructions, and
yields an 85.1% relative score compared with GPT-4 on a synthetic multimodal
instruction-following dataset. When fine-tuned on Science QA, the synergy of
LLaVA and GPT-4 achieves a new state-of-the-art accuracy of 92.53%. We make
GPT-4 generated visual instruction tuning data, our model and code base
publicly available.
## Generative Agents: Interactive Simulacra of Human Behavior
- **arXiv id:** 2304.03442v2
- **Title:** Generative Agents: Interactive Simulacra of Human Behavior
- **Authors:** Joon Sung Park, Joseph C. O'Brien, Carrie J. Cai, et al.
- **Published Date:** 2023-04-07
- **URL:** http://arxiv.org/abs/2304.03442v2
- **LangChain:**
- **Cookbook:** [multiagent_bidding](https://github.com/langchain-ai/langchain/blob/master/cookbook/multiagent_bidding.ipynb), [generative_agents_interactive_simulacra_of_human_behavior](https://github.com/langchain-ai/langchain/blob/master/cookbook/generative_agents_interactive_simulacra_of_human_behavior.ipynb)
**Abstract:** Believable proxies of human behavior can empower interactive applications
ranging from immersive environments to rehearsal spaces for interpersonal
communication to prototyping tools. In this paper, we introduce generative
agents--computational software agents that simulate believable human behavior.
Generative agents wake up, cook breakfast, and head to work; artists paint,
while authors write; they form opinions, notice each other, and initiate
conversations; they remember and reflect on days past as they plan the next
day. To enable generative agents, we describe an architecture that extends a
large language model to store a complete record of the agent's experiences
using natural language, synthesize those memories over time into higher-level
reflections, and retrieve them dynamically to plan behavior. We instantiate
generative agents to populate an interactive sandbox environment inspired by
The Sims, where end users can interact with a small town of twenty-five agents
using natural language. In an evaluation, these generative agents produce
believable individual and emergent social behaviors: for example, starting with
only a single user-specified notion that one agent wants to throw a Valentine's
Day party, the agents autonomously spread invitations to the party over the
next two days, make new acquaintances, ask each other out on dates to the
party, and coordinate to show up for the party together at the right time. We
demonstrate through ablation that the components of our agent
architecture--observation, planning, and reflection--each contribute critically
to the believability of agent behavior. By fusing large language models with
computational, interactive agents, this work introduces architectural and
interaction patterns for enabling believable simulations of human behavior.
## CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society
- **arXiv id:** 2303.17760v2
- **Title:** CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society
- **Authors:** Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, et al.
- **Published Date:** 2023-03-31
- **URL:** http://arxiv.org/abs/2303.17760v2
- **LangChain:**
- **Cookbook:** [camel_role_playing](https://github.com/langchain-ai/langchain/blob/master/cookbook/camel_role_playing.ipynb)
**Abstract:** The rapid advancement of chat-based language models has led to remarkable
progress in complex task-solving. However, their success heavily relies on
human input to guide the conversation, which can be challenging and
time-consuming. This paper explores the potential of building scalable
techniques to facilitate autonomous cooperation among communicative agents, and
provides insight into their "cognitive" processes. To address the challenges of
achieving autonomous cooperation, we propose a novel communicative agent
framework named role-playing. Our approach involves using inception prompting
to guide chat agents toward task completion while maintaining consistency with
human intentions. We showcase how role-playing can be used to generate
conversational data for studying the behaviors and capabilities of a society of
agents, providing a valuable resource for investigating conversational language
models. In particular, we conduct comprehensive studies on
instruction-following cooperation in multi-agent settings. Our contributions
include introducing a novel communicative agent framework, offering a scalable
approach for studying the cooperative behaviors and capabilities of multi-agent
systems, and open-sourcing our library to support research on communicative
agents and beyond: https://github.com/camel-ai/camel.
## HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face
- **arXiv id:** 2303.17580v4
@@ -181,6 +486,7 @@ implementation of the ToT-based Sudoku solver is available on GitHub:
- **LangChain:**
- **API Reference:** [langchain_experimental.autonomous_agents](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.autonomous_agents)
- **Cookbook:** [hugginggpt](https://github.com/langchain-ai/langchain/blob/master/cookbook/hugginggpt.ipynb)
**Abstract:** Solving complicated AI tasks with different domains and modalities is a key
step toward artificial general intelligence. While there are numerous AI models
@@ -235,7 +541,7 @@ more than 1/1,000th the compute of GPT-4.
- **URL:** http://arxiv.org/abs/2301.10226v4
- **LangChain:**
- **API Reference:** [langchain_community...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_huggingface...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community...OCIModelDeploymentTGI](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI.html#langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI), [langchain_community...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference)
**Abstract:** Potential harms of large language models can be mitigated by watermarking
model output, i.e., embedding signals into generated text that are invisible to
@@ -260,8 +566,9 @@ family, and discuss robustness and security.
- **URL:** http://arxiv.org/abs/2212.10496v1
- **LangChain:**
- **API Reference:** [langchain...HypotheticalDocumentEmbedder](https://api.python.langchain.com/en/latest/chains/langchain.chains.hyde.base.HypotheticalDocumentEmbedder.html#langchain.chains.hyde.base.HypotheticalDocumentEmbedder)
- **Template:** [hyde](https://python.langchain.com/docs/templates/hyde)
- **Cookbook:** [hypothetical_document_embeddings](https://github.com/langchain-ai/langchain/blob/master/cookbook/hypothetical_document_embeddings.ipynb)
**Abstract:** While dense retrieval has been shown effective and efficient across tasks and
languages, it remains difficult to create effective fully zero-shot dense
@@ -323,7 +630,7 @@ further work on logical fallacy identification.
- **URL:** http://arxiv.org/abs/2211.13892v2
- **LangChain:**
- **API Reference:** [langchain_core...MaxMarginalRelevanceExampleSelector](https://api.python.langchain.com/en/latest/example_selectors/langchain_core.example_selectors.semantic_similarity.MaxMarginalRelevanceExampleSelector.html#langchain_core.example_selectors.semantic_similarity.MaxMarginalRelevanceExampleSelector)
**Abstract:** Large language models (LLMs) have exhibited remarkable capabilities in
learning from explanations in prompts, but there has been limited understanding
@@ -351,7 +658,8 @@ performance across three real-world tasks on multiple LLMs.
- **URL:** http://arxiv.org/abs/2211.10435v2
- **LangChain:**
- **API Reference:** [langchain_experimental...PALChain](https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html#langchain_experimental.pal_chain.base.PALChain), [langchain_experimental.pal_chain](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.pal_chain)
- **Cookbook:** [program_aided_language_model](https://github.com/langchain-ai/langchain/blob/master/cookbook/program_aided_language_model.ipynb)
**Abstract:** Large language models (LLMs) have recently demonstrated an impressive ability
to perform arithmetic and symbolic reasoning tasks, when provided with a few
@@ -376,6 +684,41 @@ accuracy on the GSM8K benchmark of math word problems, surpassing PaLM-540B
which uses chain-of-thought by absolute 15% top-1. Our code and data are
publicly available at http://reasonwithpal.com/ .
## ReAct: Synergizing Reasoning and Acting in Language Models
- **arXiv id:** 2210.03629v3
- **Title:** ReAct: Synergizing Reasoning and Acting in Language Models
- **Authors:** Shunyu Yao, Jeffrey Zhao, Dian Yu, et al.
- **Published Date:** 2022-10-06
- **URL:** http://arxiv.org/abs/2210.03629v3
- **LangChain:**
- **Documentation:** [docs/integrations/providers/cohere](https://python.langchain.com/docs/integrations/providers/cohere), [docs/integrations/chat/huggingface](https://python.langchain.com/docs/integrations/chat/huggingface), [docs/integrations/tools/ionic_shopping](https://python.langchain.com/docs/integrations/tools/ionic_shopping)
- **API Reference:** [langchain...create_react_agent](https://api.python.langchain.com/en/latest/agents/langchain.agents.react.agent.create_react_agent.html#langchain.agents.react.agent.create_react_agent), [langchain...TrajectoryEvalChain](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain.html#langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain)
**Abstract:** While large language models (LLMs) have demonstrated impressive capabilities
across tasks in language understanding and interactive decision making, their
abilities for reasoning (e.g. chain-of-thought prompting) and acting (e.g.
action plan generation) have primarily been studied as separate topics. In this
paper, we explore the use of LLMs to generate both reasoning traces and
task-specific actions in an interleaved manner, allowing for greater synergy
between the two: reasoning traces help the model induce, track, and update
action plans as well as handle exceptions, while actions allow it to interface
with external sources, such as knowledge bases or environments, to gather
additional information. We apply our approach, named ReAct, to a diverse set of
language and decision making tasks and demonstrate its effectiveness over
state-of-the-art baselines, as well as improved human interpretability and
trustworthiness over methods without reasoning or acting components.
Concretely, on question answering (HotpotQA) and fact verification (Fever),
ReAct overcomes issues of hallucination and error propagation prevalent in
chain-of-thought reasoning by interacting with a simple Wikipedia API, and
generates human-like task-solving trajectories that are more interpretable than
baselines without reasoning traces. On two interactive decision making
benchmarks (ALFWorld and WebShop), ReAct outperforms imitation and
reinforcement learning methods by an absolute success rate of 34% and 10%
respectively, while being prompted with only one or two in-context examples.
Project site with code: https://react-lm.github.io
## Deep Lake: a Lakehouse for Deep Learning
- **arXiv id:** 2209.10785v2
@@ -413,7 +756,7 @@ TensorFlow, JAX, and integrate with numerous MLOps tools.
- **URL:** http://arxiv.org/abs/2205.12654v1
- **LangChain:**
- **API Reference:** [langchain_community...LaserEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.laser.LaserEmbeddings.html#langchain_community.embeddings.laser.LaserEmbeddings)
**Abstract:** Scaling multilingual representation learning beyond the hundred most frequent
languages is challenging, in particular to cover the long tail of low-resource
@@ -442,7 +785,7 @@ encoders, mine bitexts, and validate the bitexts by training NMT systems.
- **URL:** http://arxiv.org/abs/2204.00498v1
- **LangChain:**
- **API Reference:** [langchain_community...SparkSQL](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.spark_sql.SparkSQL.html#langchain_community.utilities.spark_sql.SparkSQL), [langchain_community...SQLDatabase](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.sql_database.SQLDatabase.html#langchain_community.utilities.sql_database.SQLDatabase)
**Abstract:** We perform an empirical evaluation of Text-to-SQL capabilities of the Codex
language model. We find that, without any finetuning, Codex is a strong
@@ -461,7 +804,7 @@ few-shot examples.
- **URL:** http://arxiv.org/abs/2202.00666v5
- **LangChain:**
- **API Reference:** [langchain_community...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_huggingface...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference)
**Abstract:** Today's probabilistic language generators fall short when it comes to
producing coherent and fluent text despite the fact that the underlying models
@@ -525,7 +868,7 @@ https://github.com/OpenAI/CLIP.
- **URL:** http://arxiv.org/abs/1909.05858v2
- **LangChain:**
- **API Reference:** [langchain_community...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_huggingface...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference)
**Abstract:** Large-scale language models show promising text generation capabilities, but
users cannot easily control particular aspects of the generated text. We


@@ -11,6 +11,7 @@
### [by Prompt Engineering](https://www.youtube.com/playlist?list=PLVEEucA9MYhOu89CX8H3MBZqayTbcCTMr)
### [by Mayo Oshin](https://www.youtube.com/@chatwithdata/search?query=langchain)
### [by 1 little Coder](https://www.youtube.com/playlist?list=PLpdmBGJ6ELUK-v0MK-t4wZmVEbxM5xk6L)
### [by BobLin (Chinese language)](https://www.youtube.com/playlist?list=PLbd7ntv6PxC3QMFQvtWfk55p-Op_syO1C)
## Courses
@@ -45,7 +46,6 @@
- [Generative AI with LangChain](https://www.amazon.com/Generative-AI-LangChain-language-ChatGPT/dp/1835083463/ref=sr_1_1?crid=1GMOMH0G7GLR&keywords=generative+ai+with+langchain&qid=1703247181&sprefix=%2Caps%2C298&sr=8-1) by [Ben Auffrath](https://www.amazon.com/stores/Ben-Auffarth/author/B08JQKSZ7D?ref=ap_rdr&store_ref=ap_rdr&isDramIntegrated=true&shoppingPortalEnabled=true), ©️ 2023 Packt Publishing
- [LangChain AI Handbook](https://www.pinecone.io/learn/langchain/) By **James Briggs** and **Francisco Ingham**
- [LangChain Cheatsheet](https://pub.towardsai.net/langchain-cheatsheet-all-secrets-on-a-single-page-8be26b721cde) by **Ivan Reznikov**
- [Dive into Langchain (Chinese language)](https://langchain.boblin.app/)
---------------------


@@ -11,7 +11,7 @@ LangChain as a framework consists of a number of packages.
### `langchain-core`
This package contains base abstractions of different components and ways to compose them together.
The interfaces for core components like LLMs, vector stores, retrievers and more are defined here.
No third party integrations are defined here.
The dependencies are kept purposefully very lightweight.
@@ -30,7 +30,7 @@ All chains, agents, and retrieval strategies here are NOT specific to any one in
This package contains third party integrations that are maintained by the LangChain community.
Key partner packages are separated out (see below).
This contains all integrations for various components (LLMs, vector stores, retrievers).
All dependencies in this package are optional to keep the package as lightweight as possible.
### [`langgraph`](https://langchain-ai.github.io/langgraph)
@@ -38,7 +38,7 @@ All dependencies in this package are optional to keep the package as lightweight
`langgraph` is an extension of `langchain` aimed at
building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.
LangGraph exposes high level interfaces for creating common types of agents, as well as a low-level API for composing custom flows.
### [`langserve`](/docs/langserve)
@@ -51,13 +51,14 @@ A developer platform that lets you debug, test, evaluate, and monitor LLM applic
<ThemedImage
alt="Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers."
sources={{
light: useBaseUrl('/svg/langchain_stack_062024.svg'),
dark: useBaseUrl('/svg/langchain_stack_062024_dark.svg'),
}}
title="LangChain Framework Overview"
/>
## LangChain Expression Language (LCEL)
<span data-heading-keywords="lcel"></span>
LangChain Expression Language, or LCEL, is a declarative way to chain LangChain components.
LCEL was designed from day 1 to **support putting prototypes in production, with no code changes**, from the simplest “prompt + LLM” chain to the most complex chains (we've seen folks successfully run LCEL chains with 100s of steps in production). To highlight a few of the reasons you might want to use LCEL:
@@ -84,19 +85,25 @@ Input and output schemas give every LCEL chain Pydantic and JSONSchema schemas i
As your chains get more and more complex, it becomes increasingly important to understand what exactly is happening at every step.
With LCEL, **all** steps are automatically logged to [LangSmith](https://docs.smith.langchain.com/) for maximum observability and debuggability.
[**Seamless LangServe deployment**](/docs/langserve)
Any chain created with LCEL can be easily deployed using [LangServe](/docs/langserve).
LCEL aims to provide consistency around behavior and customization over legacy subclassed chains such as `LLMChain` and
`ConversationalRetrievalChain`. Many of these legacy chains hide important details like prompts, and as a wider variety
of viable models emerge, customization has become more and more important.
If you are currently using one of these legacy chains, please see [this guide for guidance on how to migrate](/docs/how_to/migrate_chains/).
For guides on how to do specific tasks with LCEL, check out [the relevant how-to guides](/docs/how_to/#langchain-expression-language-lcel).
### Runnable interface
<span data-heading-keywords="invoke,runnable"></span>
To make it as easy as possible to create custom chains, we've implemented a ["Runnable"](https://api.python.langchain.com/en/stable/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable) protocol. Many LangChain components implement the `Runnable` protocol, including chat models, LLMs, output parsers, retrievers, prompt templates, and more. There are also several useful primitives for working with runnables, which you can read about below.
This is a standard interface, which makes it easy to define custom chains as well as invoke them in a standard way.
The standard interface includes:
- `stream`: stream back chunks of the response
- `invoke`: call the chain on an input
- `batch`: call the chain on a list of inputs
These also have corresponding async methods that should be used with [asyncio](https://docs.python.org/3/library/asyncio.html) `await` syntax for concurrency:
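As a rough sketch, assuming the `langchain-openai` partner package is installed and an API key is configured (any chat model integration behaves the same way, and the model name is only illustrative), these methods look like this on a simple chain:

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI  # assumed provider; any chat model works

# a simple LCEL chain: prompt -> chat model -> string output
chain = (
    ChatPromptTemplate.from_template("Tell me a joke about {topic}")
    | ChatOpenAI(model="gpt-4o-mini")
    | StrOutputParser()
)

chain.invoke({"topic": "bears"})                      # single input -> single output
chain.batch([{"topic": "bears"}, {"topic": "cats"}])  # list of inputs -> list of outputs
for chunk in chain.stream({"topic": "bears"}):        # chunks of the output as they arrive
    print(chunk, end="")

# each method has an async counterpart, e.g. `await chain.ainvoke({"topic": "bears"})`
```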
@@ -128,21 +135,33 @@ LangChain provides standard, extendable interfaces and external integrations for
Some components LangChain implements, some components we rely on third-party integrations for, and others are a mix.
### Chat models
<span data-heading-keywords="chat model,chat models"></span>
Language models that use a sequence of messages as inputs and return chat messages as outputs (as opposed to using plain text).
These are traditionally newer models (older models are generally `LLMs`, see below).
Chat models support the assignment of distinct roles to conversation messages, helping to distinguish messages from the AI, users, and instructions such as system messages.
Although the underlying models are messages in, message out, the LangChain wrappers also allow these models to take a string as input. This means you can easily use chat models in place of LLMs.
When a string is passed in as input, it is converted to a `HumanMessage` and then passed to the underlying model.
LangChain does not host any Chat Models, rather we rely on third party integrations.
We have some standardized parameters when constructing ChatModels:
- `model`: the name of the model
- `temperature`: the sampling temperature
- `timeout`: request timeout
- `max_tokens`: max tokens to generate
- `stop`: default stop sequences
- `max_retries`: max number of times to retry requests
- `api_key`: API key for the model provider
- `base_url`: endpoint to send requests to
Some important things to note:
- standard params only apply to model providers that expose parameters with the intended functionality. For example, some providers do not expose a configuration for maximum output tokens, so max_tokens can't be supported on these.
- standard params are currently only enforced on integrations that have their own integration packages (e.g. `langchain-openai`, `langchain-anthropic`, etc.), they're not enforced on models in ``langchain-community``.
ChatModels also accept other parameters that are specific to that integration. To find all the parameters supported by a ChatModel head to the API reference for that model.
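As an illustration, here is how those standard parameters might be passed when constructing a chat model. This is a sketch assuming the `langchain-openai` partner package; other partner packages accept the same parameter names.

```python
from langchain_openai import ChatOpenAI  # assumed provider; other partner packages work the same way

model = ChatOpenAI(
    model="gpt-4o",    # model name (illustrative)
    temperature=0,     # sampling temperature
    max_tokens=256,    # maximum number of tokens to generate
    timeout=30,        # request timeout in seconds
    max_retries=2,     # retry failed requests up to twice
    # api_key and base_url can also be passed explicitly instead of relying on environment variables
)
```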
:::important
**Tool Calling** Some chat models have been fine-tuned for tool calling and provide a dedicated API for tool calling.
@@ -150,16 +169,38 @@ Generally, such models are better at tool calling than non-fine-tuned models, an
Please see the [tool calling section](/docs/concepts/#functiontool-calling) for more information.
:::
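As a brief sketch of the dedicated tool-calling API (assuming `langchain-openai`; other tool-calling integrations expose the same `bind_tools` method):

```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

llm_with_tools = ChatOpenAI(model="gpt-4o").bind_tools([multiply])
ai_msg = llm_with_tools.invoke("What is 3 times 12?")
ai_msg.tool_calls  # e.g. [{"name": "multiply", "args": {"a": 3, "b": 12}, ...}]
```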
For specifics on how to use chat models, see the [relevant how-to guides here](/docs/how_to/#chat-models).
#### Multimodality
Some chat models are multimodal, accepting images, audio and even video as inputs. These are still less common, meaning model providers haven't standardized on the "best" way to define the API. Multimodal **outputs** are even less common. As such, we've kept our multimodal abstractions fairly lightweight and plan to further solidify the multimodal APIs and interaction patterns as the field matures.
In LangChain, most chat models that support multimodal inputs also accept those values in OpenAI's content blocks format. So far this is restricted to image inputs. For models like Gemini which support video and other bytes input, the APIs also support the native, model-specific representations.
For specifics on how to use multimodal models, see the [relevant how-to guides here](/docs/how_to/#multimodal).
For a full list of LangChain model providers with multimodal models, [check out this table](/docs/integrations/chat/#advanced-features).
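For example, an image might be passed alongside text using content blocks. This is only a sketch; the exact block format a model accepts depends on the provider, and the URL below is hypothetical.

```python
from langchain_core.messages import HumanMessage

message = HumanMessage(
    content=[
        {"type": "text", "text": "Describe the weather in this image."},
        {"type": "image_url", "image_url": {"url": "https://example.com/sunny.png"}},  # hypothetical URL
    ]
)
# model.invoke([message])  # assuming `model` is a multimodal chat model
```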
### LLMs
<span data-heading-keywords="llm,llms"></span>
:::caution
Pure text-in/text-out LLMs tend to be older or lower-level. Many popular models are best used as [chat completion models](/docs/concepts/#chat-models),
even for non-chat use cases.
You are probably looking for [the section above instead](/docs/concepts/#chat-models).
:::
Language models that take a string as input and return a string.
These are traditionally older models (newer models generally are [Chat Models](/docs/concepts/#chat-models), see above).
Although the underlying models are string in, string out, the LangChain wrappers also allow these models to take messages as input.
This gives them the same interface as [Chat Models](/docs/concepts/#chat-models).
When messages are passed in as input, they will be formatted into a string under the hood before being passed to the underlying model.
LangChain does not host any LLMs, rather we rely on third party integrations.
For specifics on how to use LLMs, see the [relevant how-to guides here](/docs/how_to/#llms).
### Messages
@@ -174,7 +215,7 @@ The `content` property describes the content of the message.
This can be a few different things:
- A string (most models deal with this type of content)
- A List of dictionaries (this is used for multimodal input, where the dictionary contains information about that input type and that input location)
#### HumanMessage
@@ -214,6 +255,8 @@ This represents the result of a tool call. This is distinct from a FunctionMessa
### Prompt templates
<span data-heading-keywords="prompt,prompttemplate,chatprompttemplate"></span>
Prompt templates help to translate user input and parameters into instructions for a language model.
This can be used to guide a model's response, helping it understand the context and generate relevant and coherent language-based output.
@@ -222,7 +265,7 @@ Prompt Templates take as input a dictionary, where each key represents a variabl
Prompt Templates output a PromptValue. This PromptValue can be passed to an LLM or a ChatModel, and can also be cast to a string or a list of messages.
The reason this PromptValue exists is to make it easy to switch between strings and messages.
There are a few different types of prompt templates:
#### String PromptTemplates
@@ -258,6 +301,7 @@ The first is a system message, that has no variables to format.
The second is a HumanMessage, and will be formatted by the `topic` variable the user passes in.
#### MessagesPlaceholder
<span data-heading-keywords="messagesplaceholder"></span>
This prompt template is responsible for adding a list of messages in a particular place.
In the above ChatPromptTemplate, we saw how we could format two messages, each one a string.
```python
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

prompt_template = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant"),
    MessagesPlaceholder("msgs")
])
```
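Invoking the template then fills the placeholder with whatever list of messages is passed in under that key (a small sketch):

```python
from langchain_core.messages import HumanMessage

prompt_value = prompt_template.invoke({"msgs": [HumanMessage(content="hi!")]})
prompt_value.to_messages()  # -> [SystemMessage(...), HumanMessage(content="hi!")]
```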
For specifics on how to use prompt templates, see the [relevant how-to guides here](/docs/how_to/#prompt-templates).
### Example selectors
One common prompting technique for achieving better performance is to include examples as part of the prompt.
This gives the language model concrete examples of how it should behave.
Sometimes these examples are hardcoded into the prompt, but for more advanced situations it may be nice to dynamically select them.
Example Selectors are classes responsible for selecting and then formatting examples into prompts.
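As a rough sketch, dynamic selection by semantic similarity might look like the following. This assumes `langchain-openai` and `faiss-cpu` are installed; the example pairs are made up.

```python
from langchain_community.vectorstores import FAISS
from langchain_core.example_selectors import SemanticSimilarityExampleSelector
from langchain_openai import OpenAIEmbeddings

examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
    {"input": "sunny", "output": "gloomy"},
]

selector = SemanticSimilarityExampleSelector.from_examples(
    examples, OpenAIEmbeddings(), FAISS, k=1
)
selector.select_examples({"input": "joyful"})  # -> the single most similar example
```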
For specifics on how to use example selectors, see the [relevant how-to guides here](/docs/how_to/#example-selectors).
### Output parsers
<span data-heading-keywords="output parser"></span>
:::note
@@ -340,16 +388,19 @@ LangChain has lots of different types of output parsers. This is a list of outpu
| [Datetime](https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.datetime.DatetimeOutputParser.html#langchain.output_parsers.datetime.DatetimeOutputParser) | | ✅ | | `str` \| `Message` | `datetime.datetime` | Parses response into a datetime string. |
| [Structured](https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.structured.StructuredOutputParser.html#langchain.output_parsers.structured.StructuredOutputParser) | | ✅ | | `str` \| `Message` | `Dict[str, str]` | An output parser that returns structured information. It is less powerful than other output parsers since it only allows for fields to be strings. This can be useful when you are working with smaller LLMs. |
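As a small sketch, parsing a model message whose content happens to be JSON text:

```python
from langchain_core.messages import AIMessage
from langchain_core.output_parsers import JsonOutputParser

parser = JsonOutputParser()
parser.invoke(AIMessage(content='{"answer": 42}'))  # -> {"answer": 42}
```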
For specifics on how to use output parsers, see the [relevant how-to guides here](/docs/how_to/#output-parsers).
### Chat history
Most LLM applications have a conversational interface.
An essential component of a conversation is being able to refer to information introduced earlier in the conversation.
At bare minimum, a conversational system should be able to access some window of past messages directly.
The concept of `ChatHistory` refers to a class in LangChain which can be used to wrap an arbitrary chain.
This `ChatHistory` will keep track of inputs and outputs of the underlying chain, and append them as messages to a message database.
Future interactions will then load those messages and pass them into the chain as part of the input.
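A minimal sketch of this pattern, assuming `chat_model` is any chat model instance and using an in-memory history purely for illustration:

```python
from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory

store = {}  # session_id -> message history, kept in memory for illustration

def get_session_history(session_id: str) -> InMemoryChatMessageHistory:
    if session_id not in store:
        store[session_id] = InMemoryChatMessageHistory()
    return store[session_id]

chat_with_history = RunnableWithMessageHistory(chat_model, get_session_history)
chat_with_history.invoke(
    "hi, my name is Bob",
    config={"configurable": {"session_id": "abc"}},
)
```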
### Documents
<span data-heading-keywords="document,documents"></span>
A Document object in LangChain contains information about some data. It has two attributes:
@@ -357,6 +408,7 @@ A Document object in LangChain contains information about some data. It has two
- `metadata: dict`: Arbitrary metadata associated with this document. Can track the document id, file name, etc.
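For example, a hand-constructed document with illustrative metadata:

```python
from langchain_core.documents import Document

doc = Document(
    page_content="LangChain is a framework for developing applications powered by LLMs.",
    metadata={"source": "example.txt", "page": 1},
)
```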
### Document loaders
<span data-heading-keywords="document loader,document loaders"></span>
These classes load Document objects. LangChain has hundreds of integrations with various data sources to load data from: Slack, Notion, Google Drive, etc.
```python
from langchain_community.document_loaders.csv_loader import CSVLoader

loader = CSVLoader(
    ...  # <-- integration-specific parameters here
)
data = loader.load()
```
For specifics on how to use document loaders, see the [relevant how-to guides here](/docs/how_to/#document-loaders).
### Text splitters
Once you've loaded documents, you'll often want to transform them to better suit your application. The simplest example is you may want to split a long document into smaller chunks that can fit into your model's context window. LangChain has a number of built-in document transformers that make it easy to split, combine, filter, and otherwise manipulate documents.
@@ -389,18 +443,34 @@ That means there are two different axes along which you can customize your text
1. How the text is split
2. How the chunk size is measured
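A sketch using the recursive character splitter (assuming the `langchain-text-splitters` package; the chunk sizes are illustrative):

```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=500,    # maximum size of each chunk, measured in characters by default
    chunk_overlap=50,  # overlap between consecutive chunks to preserve context
)
chunks = text_splitter.split_documents(data)  # `data` is a list of Documents, e.g. from a loader
```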
For specifics on how to use text splitters, see the [relevant how-to guides here](/docs/how_to/#text-splitters).
### Embedding models
<span data-heading-keywords="embedding,embeddings"></span>
Embedding models create a vector representation of a piece of text. You can think of a vector as an array of numbers that captures the semantic meaning of the text.
By representing the text in this way, you can perform mathematical operations that allow you to do things like search for other pieces of text that are most similar in meaning.
These natural language search capabilities underpin many types of [context retrieval](/docs/concepts/#retrieval),
where we provide an LLM with the relevant data it needs to effectively respond to a query.
![](/img/embeddings.png)
The `Embeddings` class is a class designed for interfacing with text embedding models. There are many different embedding model providers (OpenAI, Cohere, Hugging Face, etc) and local models, and this class is designed to provide a standard interface for all of them.
The base Embeddings class in LangChain provides two methods: one for embedding documents and one for embedding a query. The former takes as input multiple texts, while the latter takes a single text. The reason for having these as two separate methods is that some embedding providers have different embedding methods for documents (to be searched over) vs queries (the search query itself).
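A sketch of the two methods, assuming `langchain-openai`; any `Embeddings` implementation exposes the same interface:

```python
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
doc_vectors = embeddings.embed_documents(["hello world", "goodbye world"])  # one vector per input text
query_vector = embeddings.embed_query("hello")                              # a single vector
```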
For specifics on how to use embedding models, see the [relevant how-to guides here](/docs/how_to/#embedding-models).
### Vector stores
<span data-heading-keywords="vector,vectorstore,vectorstores,vector store,vector stores"></span>
One of the most common ways to store and search over unstructured data is to embed it and store the resulting embedding vectors,
and then at query time to embed the unstructured query and retrieve the embedding vectors that are 'most similar' to the embedded query.
A vector store takes care of storing embedded data and performing vector search for you.
Most vector stores can also store metadata about embedded vectors and support filtering on that metadata before
similarity search, allowing you more control over returned documents.
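For example, here is a minimal sketch using the FAISS integration (assuming the `faiss-cpu` package and an embeddings model are available; other vector store integrations follow the same pattern):

```python
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings

vectorstore = FAISS.from_texts(
    [
        "LangChain is a framework for developing LLM applications.",
        "FAISS is a library for efficient similarity search.",
    ],
    embedding=OpenAIEmbeddings(),
)

# Embed the query and return the most similar stored documents
docs = vectorstore.similarity_search("What is LangChain?", k=1)
```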
Vector stores can be converted to the retriever interface by doing:
@@ -408,15 +478,22 @@
```python
vectorstore = MyVectorStore()
retriever = vectorstore.as_retriever()
```
For specifics on how to use vector stores, see the [relevant how-to guides here](/docs/how_to/#vector-stores).
### Retrievers
<span data-heading-keywords="retriever,retrievers"></span>
A retriever is an interface that returns documents given an unstructured query.
It is more general than a vector store.
A retriever does not need to be able to store documents, only to return (or retrieve) them.
Retrievers can be created from vector stores, but are also broad enough to include [Wikipedia search](/docs/integrations/retrievers/wikipedia/) and [Amazon Kendra](/docs/integrations/retrievers/amazon_kendra_retriever/).
Retrievers accept a string query as input and return a list of Documents as output.
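For instance, a sketch using the Wikipedia retriever integration (which requires the `wikipedia` package) shows that a retriever need not be backed by a vector store:

```python
from langchain_community.retrievers import WikipediaRetriever

retriever = WikipediaRetriever()
docs = retriever.invoke("LangChain")  # returns a list of Document objects
```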
For specifics on how to use retrievers, see the [relevant how-to guides here](/docs/how_to/#retrievers).
### Tools
<span data-heading-keywords="tool,tools"></span>
Tools are interfaces that an agent, a chain, or a chat model / LLM can use to interact with the world.
@@ -442,6 +519,10 @@ Generally, when designing tools to be used by a chat model or LLM, it is importa
- Models will perform better if the tools have well-chosen names, descriptions, and JSON schemas.
- Simpler tools are generally easier for models to use than more complex tools.
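For example, a simple custom tool with a descriptive name, docstring, and typed arguments might look like the following sketch (the `@tool` decorator infers the tool's name, description, and JSON schema from the function):

```python
from langchain_core.tools import tool

@tool
def get_word_length(word: str) -> int:
    """Return the number of characters in a word."""
    return len(word)

get_word_length.invoke({"word": "LangChain"})  # -> 9
print(get_word_length.name, get_word_length.description)
```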
For specifics on how to use tools, see the [relevant how-to guides here](/docs/how_to/#tools).
To use an existing pre-built tool, see [here](/docs/integrations/tools/) for a list of pre-built tools.
### Toolkits
Toolkits are collections of tools that are designed to be used together for specific tasks. They have convenient loading methods.
@@ -461,7 +542,7 @@ tools = toolkit.get_tools()
By themselves, language models can't take actions - they just output text.
A big use case for LangChain is creating **agents**.
Agents are systems that use an LLM as a reasoning engine to determine which actions to take and what the inputs to those actions should be.
The results of those actions can then be fed back into the agent, and it determines whether more actions are needed or whether it is okay to finish.
[LangGraph](https://github.com/langchain-ai/langgraph) is an extension of LangChain specifically aimed at creating highly controllable and customizable agents.
@@ -474,7 +555,29 @@ In order to solve that we built LangGraph to be this flexible, highly-controllab
If you are still using AgentExecutor, do not fear: we still have a guide on [how to use AgentExecutor](/docs/how_to/agent_executor).
It is recommended, however, that you start to transition to LangGraph.
In order to assist in this we have put together a [transition guide on how to do so](/docs/how_to/migrate_agent).
#### ReAct agents
<span data-heading-keywords="react,react agent"></span>
One popular architecture for building agents is [**ReAct**](https://arxiv.org/abs/2210.03629).
ReAct combines reasoning and acting in an iterative process - in fact the name "ReAct" stands for "Reason" and "Act".
The general flow looks like this:
- The model will "think" about what step to take in response to an input and any previous observations.
- The model will then choose an action from available tools (or choose to respond to the user).
- The model will generate arguments to that tool.
- The agent runtime (executor) will parse out the chosen tool and call it with the generated arguments.
- The executor will return the results of the tool call back to the model as an observation.
- This process repeats until the agent chooses to respond.
There are general prompting based implementations that do not require any model-specific features, but the most
reliable implementations use features like [tool calling](/docs/how_to/tool_calling/) to reliably format outputs
and reduce variance.
Please see the [LangGraph documentation](https://langchain-ai.github.io/langgraph/) for more information,
or [this how-to guide](/docs/how_to/migrate_agent/) for specific information on migrating to LangGraph.
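As a rough sketch of what this looks like with LangGraph's prebuilt ReAct agent (the `get_weather` tool below is a hypothetical placeholder, and any chat model that supports tool calling can be substituted):

```python
from langchain_anthropic import ChatAnthropic
from langchain_core.tools import tool
from langgraph.prebuilt import create_react_agent

@tool
def get_weather(city: str) -> str:
    """Return the current weather for a city."""
    return f"It is always sunny in {city}."  # placeholder implementation

model = ChatAnthropic(model="claude-3-sonnet-20240229")
agent = create_react_agent(model, [get_weather])

result = agent.invoke({"messages": [("human", "What's the weather in Paris?")]})
print(result["messages"][-1].content)
```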
### Callbacks
@@ -546,15 +649,264 @@ This is a common reason why you may fail to see events being emitted from custom
runnables or tools.
:::
For specifics on how to use callbacks, see the [relevant how-to guides here](/docs/how_to/#callbacks).
## Techniques
### Streaming
<span data-heading-keywords="stream,streaming"></span>
Individual LLM calls often run for much longer than traditional resource requests.
This compounds when you build more complex chains or agents that require multiple reasoning steps.
Fortunately, LLMs generate output iteratively, which means it's possible to show sensible intermediate results
before the final response is ready. Consuming output as soon as it becomes available has therefore become a vital part of the UX
around building apps with LLMs to help alleviate latency issues, and LangChain aims to have first-class support for streaming.
Below, we'll discuss some concepts and considerations around streaming in LangChain.
#### `.stream()` and `.astream()`
Most modules in LangChain include the `.stream()` method (and the equivalent `.astream()` method for [async](https://docs.python.org/3/library/asyncio.html) environments) as an ergonomic streaming interface.
`.stream()` returns an iterator, which you can consume with a simple `for` loop. Here's an example with a chat model:
```python
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(model="claude-3-sonnet-20240229")
for chunk in model.stream("what color is the sky?"):
print(chunk.content, end="|", flush=True)
```
For models (or other components) that don't support streaming natively, this iterator would just yield a single chunk, but
you could still use the same general pattern when calling them. Using `.stream()` will also automatically call the model in streaming mode
without the need to provide additional config.
The type of each outputted chunk depends on the type of component - for example, chat models yield [`AIMessageChunks`](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessageChunk.html).
Because this method is part of [LangChain Expression Language](/docs/concepts/#langchain-expression-language-lcel),
you can handle formatting differences from different outputs using an [output parser](/docs/concepts/#output-parsers) to transform
each yielded chunk.
You can check out [this guide](/docs/how_to/streaming/#using-stream) for more detail on how to use `.stream()`.
#### `.astream_events()`
<span data-heading-keywords="astream_events,stream_events,stream events"></span>
While the `.stream()` method is intuitive, it can only return the final generated value of your chain. This is fine for single LLM calls,
but as you build more complex chains of several LLM calls together, you may want to use the intermediate values of
the chain alongside the final output - for example, returning sources alongside the final generation when building a chat
over documents app.
There are ways to do this [using callbacks](/docs/concepts/#callbacks-1), or by constructing your chain in such a way that it passes intermediate
values to the end with something like chained [`.assign()`](/docs/how_to/passthrough/) calls, but LangChain also includes an
`.astream_events()` method that combines the flexibility of callbacks with the ergonomics of `.stream()`. When called, it returns an iterator
which yields [various types of events](/docs/how_to/streaming/#event-reference) that you can filter and process according
to the needs of your project.
Here's one small example that prints just events containing streamed chat model output:
```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(model="claude-3-sonnet-20240229")
prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")
parser = StrOutputParser()
chain = prompt | model | parser
async for event in chain.astream_events({"topic": "parrot"}, version="v2"):
kind = event["event"]
if kind == "on_chat_model_stream":
print(event, end="|", flush=True)
```
You can roughly think of it as an iterator over callback events (though the format differs) - and you can use it on almost all LangChain components!
See [this guide](/docs/how_to/streaming/#using-stream-events) for more detailed information on how to use `.astream_events()`,
including a table listing available events.
#### Callbacks
The lowest level way to stream outputs from LLMs in LangChain is via the [callbacks](/docs/concepts/#callbacks) system. You can pass a
callback handler that handles the [`on_llm_new_token`](https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streaming_aiter.AsyncIteratorCallbackHandler.html#langchain.callbacks.streaming_aiter.AsyncIteratorCallbackHandler.on_llm_new_token) event into LangChain components. When that component is invoked, any
[LLM](/docs/concepts/#llms) or [chat model](/docs/concepts/#chat-models) contained in the component calls
the callback with the generated token. Within the callback, you could pipe the tokens into some other destination, e.g. a HTTP response.
You can also handle the [`on_llm_end`](https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streaming_aiter.AsyncIteratorCallbackHandler.html#langchain.callbacks.streaming_aiter.AsyncIteratorCallbackHandler.on_llm_end) event to perform any necessary cleanup.
You can see [this how-to section](/docs/how_to/#callbacks) for more specifics on using callbacks.
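As a minimal sketch (using `ChatOpenAI` as one example; the handler name and printed output format are illustrative):

```python
from langchain_core.callbacks import BaseCallbackHandler
from langchain_openai import ChatOpenAI

class TokenPrinter(BaseCallbackHandler):
    def on_llm_new_token(self, token: str, **kwargs) -> None:
        # Called once per generated token while the model is streaming
        print(token, end="|", flush=True)

model = ChatOpenAI(model="gpt-4o", streaming=True, callbacks=[TokenPrinter()])
model.invoke("what color is the sky?")
```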
Callbacks were the first technique for streaming introduced in LangChain. While powerful and generalizable,
they can be unwieldy for developers. For example:
- You need to explicitly initialize and manage some aggregator or other stream to collect results.
- The execution order isn't explicitly guaranteed, and you could theoretically have a callback run after the `.invoke()` method finishes.
- Providers would often make you pass an additional parameter to stream outputs instead of returning them all at once.
- You would often ignore the result of the actual model call in favor of callback results.
#### Tokens
Most model providers measure input and output using a unit called a **token**.
Tokens are the basic units that language models read and generate when processing or producing text.
The exact definition of a token can vary depending on the specific way the model was trained -
for instance, in English, a token could be a single word like "apple", or a part of a word like "app".
When you send a model a prompt, the words and characters in the prompt are encoded into tokens using a **tokenizer**.
The model then streams back generated output tokens, which the tokenizer decodes into human-readable text.
The below example shows how OpenAI models tokenize `LangChain is cool!`:
![](/img/tokenization.png)
You can see that it gets split into 5 different tokens, and that the boundaries between tokens are not exactly the same as word boundaries.
The reason language models use tokens rather than something more immediately intuitive like "characters"
has to do with how they process and understand text. At a high-level, language models iteratively predict their next generated output based on
the initial input and their previous generations. Training the model on tokens allows language models to handle linguistic
units (like words or subwords) that carry meaning, rather than individual characters, which makes it easier for the model
to learn and understand the structure of the language, including grammar and context.
Furthermore, using tokens can also improve efficiency, since the model processes fewer units of text compared to character-level processing.
### Structured output
LLMs are capable of generating arbitrary text. This enables the model to respond appropriately to a wide
range of inputs, but for some use-cases, it can be useful to constrain the LLM's output
to a specific format or structure. This is referred to as **structured output**.
For example, if the output is to be stored in a relational database,
it is much easier if the model generates output that adheres to a defined schema or format.
[Extracting specific information](/docs/tutorials/extraction/) from unstructured text is another
case where this is particularly useful. Most commonly, the output format will be JSON,
though other formats such as [YAML](/docs/how_to/output_parser_yaml/) can be useful too. Below, we'll discuss
a few ways to get structured output from models in LangChain.
#### `.with_structured_output()`
For convenience, some LangChain chat models support a [`.with_structured_output()`](/docs/how_to/structured_output/#the-with_structured_output-method)
method. This method only requires a schema as input, and returns a dict or Pydantic object.
Generally, this method is only present on models that support one of the more advanced methods described below,
and will use one of them under the hood. It takes care of importing a suitable output parser and
formatting the schema in the right format for the model.
Here's an example:
```python
from typing import Optional
from langchain_core.pydantic_v1 import BaseModel, Field
class Joke(BaseModel):
"""Joke to tell user."""
setup: str = Field(description="The setup of the joke")
punchline: str = Field(description="The punchline to the joke")
rating: Optional[int] = Field(description="How funny the joke is, from 1 to 10")
structured_llm = llm.with_structured_output(Joke)
structured_llm.invoke("Tell me a joke about cats")
```
```
Joke(setup='Why was the cat sitting on the computer?', punchline='To keep an eye on the mouse!', rating=None)
```
We recommend this method as a starting point when working with structured output:
- It uses other model-specific features under the hood, without the need to import an output parser.
- For the models that use tool calling, no special prompting is needed.
- If multiple underlying techniques are supported, you can supply a `method` parameter to
[toggle which one is used](/docs/how_to/structured_output/#advanced-specifying-the-method-for-structuring-outputs).
You may want or need to use other techniques if:
- The chat model you are using does not support tool calling.
- You are working with very complex schemas and the model is having trouble generating outputs that conform.
For more information, check out this [how-to guide](/docs/how_to/structured_output/#the-with_structured_output-method).
You can also check out [this table](/docs/integrations/chat/#advanced-features) for a list of models that support
`with_structured_output()`.
#### Raw prompting
The most intuitive way to get a model to structure output is to ask nicely.
In addition to your query, you can give instructions describing what kind of output you'd like, then
parse the output using an [output parser](/docs/concepts/#output-parsers) to convert the raw
model message or string output into something more easily manipulated.
The biggest benefit to raw prompting is its flexibility:
- Raw prompting does not require any special model features, only sufficient reasoning capability to understand
the passed schema.
- You can prompt for any format you'd like, not just JSON. This can be useful if the model you
are using is more heavily trained on a certain type of data, such as XML or YAML.
However, there are some drawbacks too:
- LLMs are non-deterministic, and prompting an LLM to consistently output data in exactly the correct format
for smooth parsing can be surprisingly difficult and model-specific.
- Individual models have quirks depending on the data they were trained on, and optimizing prompts can be quite difficult.
Some may be better at interpreting [JSON schema](https://json-schema.org/), others may be best with TypeScript definitions,
and still others may prefer XML.
While features offered by model providers may increase reliability, prompting techniques remain important for tuning your
results no matter which method you choose.
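For example, one common pattern is to inject an output parser's format instructions into the prompt and then parse the model's reply. Here's a sketch (the `Person` schema is illustrative, and `model` is assumed to be an existing chat model):

```python
from langchain_core.output_parsers import PydanticOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field

class Person(BaseModel):
    """Information about a person."""
    name: str = Field(description="The person's name")
    age: int = Field(description="The person's age")

parser = PydanticOutputParser(pydantic_object=Person)

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "Answer the user query.\n{format_instructions}"),
        ("human", "{query}"),
    ]
).partial(format_instructions=parser.get_format_instructions())

# `model` is assumed to be an existing chat model
chain = prompt | model | parser
chain.invoke({"query": "Anna is 29 years old."})
```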
#### JSON mode
<span data-heading-keywords="json mode"></span>
Some models, such as [Mistral](/docs/integrations/chat/mistralai/), [OpenAI](/docs/integrations/chat/openai/),
[Together AI](/docs/integrations/chat/together/) and [Ollama](/docs/integrations/chat/ollama/),
support a feature called **JSON mode**, usually enabled via config.
When enabled, JSON mode will constrain the model's output to always be some sort of valid JSON.
Often they require some custom prompting, but it's usually much less burdensome than completely raw prompting and
more along the lines of `"you must always return JSON"`. The [output is also generally easier to parse](/docs/how_to/output_parser_json/).
It's also generally simpler to use directly and more commonly available than tool calling, and can give
more flexibility around prompting and shaping results.
Here's an example:
```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langchain.output_parsers.json import SimpleJsonOutputParser
model = ChatOpenAI(
model="gpt-4o",
model_kwargs={ "response_format": { "type": "json_object" } },
)
prompt = ChatPromptTemplate.from_template(
"Answer the user's question to the best of your ability."
'You must always output a JSON object with an "answer" key and a "followup_question" key.'
"{question}"
)
chain = prompt | model | SimpleJsonOutputParser()
chain.invoke({ "question": "What is the powerhouse of the cell?" })
```
```
{'answer': 'The powerhouse of the cell is the mitochondrion. It is responsible for producing energy in the form of ATP through cellular respiration.',
'followup_question': 'Would you like to know more about how mitochondria produce energy?'}
```
For a full list of model providers that support JSON mode, see [this table](/docs/integrations/chat/#advanced-features).
#### Function/tool calling
:::info
We use the term tool calling interchangeably with function calling. Although
function calling is sometimes meant to refer to invocations of a single function,
we treat all models as though they can return multiple tool or function calls in
each message.
:::
Tool calling allows a model to respond to a given prompt by generating output that
@@ -566,8 +918,10 @@ from unstructured text, you could give the model an "extraction" tool that takes
parameters matching the desired schema, then treat the generated output as your final
result.
A tool call includes a name, arguments dict, and an optional identifier. The
arguments dict is structured `{argument_name: argument_value}`.
For models that support it, tool calling can be very convenient. It removes the
guesswork around how best to prompt schemas in favor of a built-in model feature. It can also
more naturally support agentic flows, since you can just pass multiple tool schemas instead
of fiddling with enums or unions.
Many LLM providers, including [Anthropic](https://www.anthropic.com/),
[Cohere](https://cohere.com/), [Google](https://cloud.google.com/vertex-ai),
@@ -584,39 +938,174 @@ LangChain provides a standardized interface for tool calling that is consistent
The standard interface consists of:
* `ChatModel.bind_tools()`: a method for specifying which tools are available for a model to call. This method accepts [LangChain tools](/docs/concepts/#tools) here.
* `AIMessage.tool_calls`: an attribute on the `AIMessage` returned from the model for accessing the tool calls requested by the model.
The following how-to guides are good practical resources for using function/tool calling:
- [How to return structured data from an LLM](/docs/how_to/structured_output/)
- [How to use a model to call tools](/docs/how_to/tool_calling)
For a full list of model providers that support tool calling, [see this table](/docs/integrations/chat/#advanced-features).
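Putting the two pieces of the interface together, a minimal sketch (using `ChatOpenAI` as one example of a model that supports tool calling; the `multiply` tool is illustrative) looks like:

```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

llm = ChatOpenAI(model="gpt-4o")
llm_with_tools = llm.bind_tools([multiply])

ai_msg = llm_with_tools.invoke("What is 3 * 12?")
ai_msg.tool_calls
# e.g. [{'name': 'multiply', 'args': {'a': 3, 'b': 12}, 'id': '...'}]
```

Note that the model only *requests* the tool call; executing the tool and passing its result back to the model is up to your application (or an agent runtime).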
### Retrieval
LangChain provides several advanced retrieval types. A full list is below, along with the following information:
LLMs are trained on a large but fixed dataset, limiting their ability to reason over private or recent information. Fine-tuning an LLM with specific facts is one way to mitigate this, but is often [poorly suited for factual recall](https://www.anyscale.com/blog/fine-tuning-is-for-form-not-facts) and [can be costly](https://www.glean.com/blog/how-to-build-an-ai-assistant-for-the-enterprise).
Retrieval is the process of providing relevant information to an LLM to improve its response for a given input. Retrieval augmented generation (RAG) is the process of grounding the LLM generation (output) using the retrieved information.
**Name**: Name of the retrieval algorithm.
:::tip
**Index Type**: Which index type (if any) this relies on.
* See our RAG from Scratch [code](https://github.com/langchain-ai/rag-from-scratch) and [video series](https://youtube.com/playlist?list=PLfaIDFEXuae2LXbO1_PKyVJiQ23ZztA0x&feature=shared).
* For a high-level guide on retrieval, see this [tutorial on RAG](/docs/tutorials/rag/).
**Uses an LLM**: Whether this retrieval method uses an LLM.
:::
**When to Use**: Our commentary on when you should consider using this retrieval method.
RAG is only as good as the retrieved documents' relevance and quality. Fortunately, an emerging set of techniques can be employed to design and improve RAG systems. We've focused on taxonomizing and summarizing many of these techniques (see below figure) and will share some high-level strategic guidance in the following sections.
You can and should experiment with using different pieces together. You might also find [this LangSmith guide](https://docs.smith.langchain.com/how_to_guides/evaluation/evaluate_llm_application) useful for showing how to evaluate different iterations of your app.
**Description**: Description of what this retrieval algorithm is doing.
![](/img/rag_landscape.png)
#### Query Translation
First, consider the user input(s) to your RAG system. Ideally, a RAG system can handle a wide range of inputs, from poorly worded questions to complex multi-part queries.
**Using an LLM to review and optionally modify the input is the central idea behind query translation.** This serves as a general buffer, optimizing raw user inputs for your retrieval system.
For example, this can be as simple as extracting keywords or as complex as generating multiple sub-questions for a complex query.
| Name | When to use | Description |
|---------------|-------------|-------------|
| [Multi-query](/docs/how_to/MultiQueryRetriever/) | When you need to cover multiple perspectives of a question. | Rewrite the user question from multiple perspectives, retrieve documents for each rewritten question, return the unique documents for all queries. |
| [Decomposition](https://github.com/langchain-ai/rag-from-scratch/blob/main/rag_from_scratch_5_to_9.ipynb) | When a question can be broken down into smaller subproblems. | Decompose a question into a set of subproblems / questions, which can either be solved sequentially (use the answer from first + retrieval to answer the second) or in parallel (consolidate each answer into final answer). |
| [Step-back](https://github.com/langchain-ai/rag-from-scratch/blob/main/rag_from_scratch_5_to_9.ipynb) | When a higher-level conceptual understanding is required. | First prompt the LLM to ask a generic step-back question about higher-level concepts or principles, and retrieve relevant facts about them. Use this grounding to help answer the user question. |
| [HyDE](https://github.com/langchain-ai/rag-from-scratch/blob/main/rag_from_scratch_5_to_9.ipynb) | If you have challenges retrieving relevant documents using the raw user inputs. | Use an LLM to convert questions into hypothetical documents that answer the question. Use the embedded hypothetical documents to retrieve real documents with the premise that doc-doc similarity search can produce more relevant matches. |
:::tip
See our RAG from Scratch videos for a few different specific approaches:
- [Multi-query](https://youtu.be/JChPi0CRnDY?feature=shared)
- [Decomposition](https://youtu.be/h0OPWlEOank?feature=shared)
- [Step-back](https://youtu.be/xn1jEjRyJ2U?feature=shared)
- [HyDE](https://youtu.be/SaDzIVkYqyY?feature=shared)
:::
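As one concrete sketch, the built-in multi-query retriever wraps an existing retriever and an LLM (both `vectorstore` and `llm` are assumed to already exist here):

```python
from langchain.retrievers.multi_query import MultiQueryRetriever

# `vectorstore` and `llm` are assumed to already exist
retriever = MultiQueryRetriever.from_llm(
    retriever=vectorstore.as_retriever(), llm=llm
)
docs = retriever.invoke("What are the approaches to task decomposition?")
```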
#### Routing
Second, consider the data sources available to your RAG system. You want to query across more than one database or across structured and unstructured data sources. **Using an LLM to review the input and route it to the appropriate data source is a simple and effective approach for querying across sources.**
| Name | When to use | Description |
|------------------|--------------------------------------------|-------------|
| [Logical routing](/docs/how_to/routing/) | When you can prompt an LLM with rules to decide where to route the input. | Logical routing can use an LLM to reason about the query and choose which datastore is most appropriate. |
| [Semantic routing](/docs/how_to/routing/#routing-by-semantic-similarity) | When semantic similarity is an effective way to determine where to route the input. | Semantic routing embeds both the query and, typically, a set of prompts. It then chooses the appropriate prompt based upon similarity. |
:::tip
See our RAG from Scratch video on [routing](https://youtu.be/pfpIndq7Fi8?feature=shared).
:::
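A rough sketch of logical routing using structured output (the `RouteQuery` schema and datasource names are illustrative, and `llm` is assumed to be a chat model that supports `.with_structured_output()`):

```python
from typing import Literal

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field

class RouteQuery(BaseModel):
    """Route a user question to the most relevant datasource."""
    datasource: Literal["python_docs", "js_docs"] = Field(
        description="The datasource most relevant to the question"
    )

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "Route the user question to the appropriate datasource."),
        ("human", "{question}"),
    ]
)

# `llm` is assumed to be an existing chat model
router = prompt | llm.with_structured_output(RouteQuery)
router.invoke({"question": "How do I use a text splitter in Python?"})
```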
#### Query Construction
Third, consider whether any of your data sources require specific query formats. Many structured databases use SQL. Vector stores often have specific syntax for applying keyword filters to document metadata. **Using an LLM to convert a natural language query into a query syntax is a popular and powerful approach.**
In particular, [text-to-SQL](/docs/tutorials/sql_qa/), [text-to-Cypher](/docs/tutorials/graph/), and [query analysis for metadata filters](/docs/tutorials/query_analysis/#query-analysis) are useful ways to interact with structured, graph, and vector databases respectively.
| Name | When to Use | Description |
|---------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [Text to SQL](/docs/tutorials/sql_qa/) | If users are asking questions that require information housed in a relational database, accessible via SQL. | This uses an LLM to transform user input into a SQL query. |
| [Text-to-Cypher](/docs/tutorials/graph/) | If users are asking questions that require information housed in a graph database, accessible via Cypher. | This uses an LLM to transform user input into a Cypher query. |
| [Self Query](/docs/how_to/self_query/) | If users are asking questions that are better answered by fetching documents based on metadata rather than similarity with the text. | This uses an LLM to transform user input into two things: (1) a string to look up semantically, (2) a metadata filter to go along with it. This is useful because oftentimes questions are about the METADATA of documents (not the content itself). |
:::tip
See our [blog post overview](https://blog.langchain.dev/query-construction/) and RAG from Scratch video on [query construction](https://youtu.be/kl6NwWYxvbM?feature=shared), the process of text-to-DSL where DSL is a domain specific language required to interact with a given database. This converts user questions into structured queries.
:::
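For instance, a minimal text-to-SQL sketch (assuming a local SQLite database file named `Chinook.db`; any SQLAlchemy-compatible URI works):

```python
from langchain.chains import create_sql_query_chain
from langchain_community.utilities import SQLDatabase
from langchain_openai import ChatOpenAI

db = SQLDatabase.from_uri("sqlite:///Chinook.db")
llm = ChatOpenAI(model="gpt-4o", temperature=0)

chain = create_sql_query_chain(llm, db)
sql = chain.invoke({"question": "How many employees are there?"})
# `sql` is a SQL string you can inspect and then execute against the database
```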
#### Indexing
Fourth, consider the design of your document index. A simple and powerful idea is to **decouple the documents that you index for retrieval from the documents that you pass to the LLM for generation.** Indexing frequently uses embedding models with vector stores, which [compress the semantic information in documents to fixed-size vectors](/docs/concepts/#embedding-models).
Many RAG approaches focus on splitting documents into chunks and retrieving some number based on similarity to an input question for the LLM. But chunk size and chunk number can be difficult to set and affect results if they do not provide full context for the LLM to answer a question. Furthermore, LLMs are increasingly capable of processing millions of tokens.
Two approaches can address this tension: (1) [Multi Vector](/docs/how_to/multi_vector/) retriever using an LLM to translate documents into any form (e.g., often into a summary) that is well-suited for indexing, but returns full documents to the LLM for generation. (2) [ParentDocument](/docs/how_to/parent_document_retriever/) retriever embeds document chunks, but also returns full documents. The idea is to get the best of both worlds: use concise representations (summaries or chunks) for retrieval, but use the full documents for answer generation.
| Name | Index Type | Uses an LLM | When to Use | Description |
|---------------------------|------------------------------|---------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [Vector store](/docs/how_to/vectorstore_retriever/) | Vector store | No | If you are just getting started and looking for something quick and easy. | This is the simplest method and the one that is easiest to get started with. It involves creating embeddings for each piece of text. |
| [ParentDocument](/docs/how_to/parent_document_retriever/) | Vector store + Document Store | No | If your pages have lots of smaller pieces of distinct information that are best indexed by themselves, but best retrieved all together. | This involves indexing multiple chunks for each document. Then you find the chunks that are most similar in embedding space, but you retrieve the whole parent document and return that (rather than individual chunks). |
| [Multi Vector](/docs/how_to/multi_vector/) | Vector store + Document Store | Sometimes during indexing | If you are able to extract information from documents that you think is more relevant to index than the text itself. | This involves creating multiple vectors for each document. Each vector could be created in a myriad of ways - examples include summaries of the text and hypothetical questions. |
| [Time-Weighted Vector store](/docs/how_to/time_weighted_vectorstore/) | Vector store | No | If you have timestamps associated with your documents, and you want to retrieve the most recent ones | This fetches documents based on a combination of semantic similarity (as in normal vector retrieval) and recency (looking at timestamps of indexed documents) |
:::tip
- See our RAG from Scratch video on [indexing fundamentals](https://youtu.be/bjb_EMsTDKI?feature=shared)
- See our RAG from Scratch video on [multi vector retriever](https://youtu.be/gTCU9I6QqCE?feature=shared)
:::
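As a rough sketch of the ParentDocument pattern (assuming `docs`, a `vectorstore`, and an embeddings model already exist; the chunk size is illustrative):

```python
from langchain.retrievers import ParentDocumentRetriever
from langchain.storage import InMemoryStore
from langchain_text_splitters import RecursiveCharacterTextSplitter

# `vectorstore` and `docs` are assumed to already exist
retriever = ParentDocumentRetriever(
    vectorstore=vectorstore,
    docstore=InMemoryStore(),
    child_splitter=RecursiveCharacterTextSplitter(chunk_size=400),
)
retriever.add_documents(docs)

# Small chunks are searched, but the full parent documents are returned
docs_for_llm = retriever.invoke("query")
```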
Fifth, consider ways to improve the quality of your similarity search itself. Embedding models compress text into fixed-length (vector) representations that capture the semantic content of the document. This compression is useful for search / retrieval, but puts a heavy burden on that single vector representation to capture the semantic nuance / detail of the document. In some cases, irrelevant or redundant content can dilute the semantic usefulness of the embedding.
[ColBERT](https://docs.google.com/presentation/d/1IRhAdGjIevrrotdplHNcc4aXgIYyKamUKTWtB3m3aMU/edit?usp=sharing) is an interesting approach to address this with higher granularity embeddings: (1) produce a contextually influenced embedding for each token in the document and query, (2) score similarity between each query token and all document tokens, (3) take the max, (4) do this for all query tokens, and (5) take the sum of the max scores (in step 3) for all query tokens to get a query-document similarity score; this token-wise scoring can yield strong results.
![](/img/colbert.png)
There are some additional tricks to improve the quality of your retrieval. Embeddings excel at capturing semantic information, but may struggle with keyword-based queries. Many [vector stores](/docs/integrations/retrievers/pinecone_hybrid_search/) offer built-in [hybrid-search](https://docs.pinecone.io/guides/data/understanding-hybrid-search) to combine keyword and semantic similarity, which marries the benefits of both approaches. Furthermore, many vector stores have [maximal marginal relevance](https://python.langchain.com/v0.1/docs/modules/model_io/prompts/example_selectors/mmr/), which attempts to diversify the results of a search to avoid returning similar and redundant documents.
| Name | When to use | Description |
|-------------------|----------------------------------------------------------|-------------|
| [ColBERT](/docs/integrations/providers/ragatouille/#using-colbert-as-a-reranker) | When higher granularity embeddings are needed. | ColBERT uses contextually influenced embeddings for each token in the document and query to get a granular query-document similarity score. |
| [Hybrid search](/docs/integrations/retrievers/pinecone_hybrid_search/) | When combining keyword-based and semantic similarity. | Hybrid search combines keyword and semantic similarity, marrying the benefits of both approaches. |
| [Maximal Marginal Relevance (MMR)](/docs/integrations/vectorstores/pinecone/#maximal-marginal-relevance-searches) | When needing to diversify search results. | MMR attempts to diversify the results of a search to avoid returning similar and redundant documents. |
:::tip
See our RAG from Scratch video on [ColBERT](https://youtu.be/cN6S0Ehm7_8?feature=shared).
:::
#### Post-processing
Sixth, consider ways to filter or rank retrieved documents. This is very useful if you are [combining documents returned from multiple sources](/docs/integrations/retrievers/cohere-reranker/#doing-reranking-with-coherererank), since it can down-rank less relevant documents and / or [compress similar documents](/docs/how_to/contextual_compression/#more-built-in-compressors-filters).
| Name | Index Type | Uses an LLM | When to Use | Description |
|---------------------------|------------------------------|---------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [Contextual Compression](/docs/how_to/contextual_compression/) | Any | Sometimes | If you are finding that your retrieved documents contain too much irrelevant information and are distracting the LLM. | This puts a post-processing step on top of another retriever and extracts only the most relevant information from retrieved documents. This can be done with embeddings or an LLM. |
| [Ensemble](/docs/how_to/ensemble_retriever/) | Any | No | If you have multiple retrieval methods and want to try combining them. | This fetches documents from multiple retrievers and then combines them. |
| [Re-ranking](/docs/integrations/retrievers/cohere-reranker/) | Any | Yes | If you want to rank retrieved documents based upon relevance, especially if you want to combine results from multiple retrieval methods. | Given a query and a list of documents, Rerank indexes the documents from most to least semantically relevant to the query. |
:::tip
See our RAG from Scratch video on [RAG-Fusion](https://youtu.be/77qELPbNgxA?feature=shared), an approach for post-processing across multiple queries: Rewrite the user question from multiple perspectives, retrieve documents for each rewritten question, and combine the ranks of multiple search result lists to produce a single, unified ranking with [Reciprocal Rank Fusion (RRF)](https://towardsdatascience.com/forget-rag-the-future-is-rag-fusion-1147298d8ad1).
:::
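For example, a sketch of contextual compression with an embeddings-based filter (assuming an existing `base_retriever`; the similarity threshold is illustrative):

```python
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import EmbeddingsFilter
from langchain_openai import OpenAIEmbeddings

# `base_retriever` is assumed to already exist (e.g. a vector store retriever)
compressor = EmbeddingsFilter(embeddings=OpenAIEmbeddings(), similarity_threshold=0.75)
compression_retriever = ContextualCompressionRetriever(
    base_compressor=compressor,
    base_retriever=base_retriever,
)
compressed_docs = compression_retriever.invoke("query")
```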
#### Generation
**Finally, consider ways to build self-correction into your RAG system.** RAG systems can suffer from low quality retrieval (e.g., if a user question is out of the domain for the index) and / or hallucinations in generation. A naive retrieve-generate pipeline has no ability to detect or self-correct from these kinds of errors. The concept of ["flow engineering"](https://x.com/karpathy/status/1748043513156272416) has been introduced [in the context of code generation](https://arxiv.org/abs/2401.08500): iteratively build an answer to a code question with unit tests to check and self-correct errors. Several works have applied this to RAG, such as Self-RAG and Corrective-RAG. In both cases, checks for document relevance, hallucinations, and / or answer quality are performed in the RAG answer generation flow.
We've found that graphs are a great way to reliably express logical flows and have implemented ideas from several of these papers [using LangGraph](https://github.com/langchain-ai/langgraph/tree/main/examples/rag), as shown in the figure below (red - routing, blue - fallback, green - self-correction):
- **Routing:** Adaptive RAG ([paper](https://arxiv.org/abs/2403.14403)). Route questions to different retrieval approaches, as discussed above
- **Fallback:** Corrective RAG ([paper](https://arxiv.org/pdf/2401.15884.pdf)). Fallback to web search if docs are not relevant to query
- **Self-correction:** Self-RAG ([paper](https://arxiv.org/abs/2310.11511)). Fix answers with hallucinations or that don't address the question
![](/img/langgraph_rag.png)
| Name | When to use | Description |
|-------------------|-----------------------------------------------------------|-------------|
| Self-RAG | When needing to fix answers with hallucinations or irrelevant content. | Self-RAG performs checks for document relevance, hallucinations, and answer quality during the RAG answer generation flow, iteratively building an answer and self-correcting errors. |
| Corrective-RAG | When needing a fallback mechanism for low relevance docs. | Corrective-RAG includes a fallback (e.g., to web search) if the retrieved documents are not relevant to the query, ensuring higher quality and more relevant retrieval. |
:::tip
See several videos and cookbooks showcasing RAG with LangGraph:
- [LangGraph Corrective RAG](https://www.youtube.com/watch?v=E2shqsYwxck)
- [LangGraph combining Adaptive, Self-RAG, and Corrective RAG](https://www.youtube.com/watch?v=-ROS6gfYIts)
- [Cookbooks for RAG using LangGraph](https://github.com/langchain-ai/langgraph/tree/main/examples/rag)
See our LangGraph RAG recipes with partners:
- [Meta](https://github.com/meta-llama/llama-recipes/tree/main/recipes/3p_integrations/langchain)
- [Mistral](https://github.com/mistralai/cookbook/tree/main/third_party/langchain)
:::
### Text splitting
@@ -642,3 +1131,30 @@ Table columns:
| Character | [CharacterTextSplitter](/docs/how_to/character_text_splitter/) | A user defined character | | Splits text based on a user defined character. One of the simpler methods. |
| Semantic Chunker (Experimental) | [SemanticChunker](/docs/how_to/semantic-chunker/) | Sentences | | First splits on sentences. Then combines ones next to each other if they are semantically similar enough. Taken from [Greg Kamradt](https://github.com/FullStackRetrieval-com/RetrievalTutorials/blob/main/tutorials/LevelsOfTextSplitting/5_Levels_Of_Text_Splitting.ipynb) |
| Integration: AI21 Semantic | [AI21SemanticTextSplitter](/docs/integrations/document_transformers/ai21_semantic_text_splitter/) | ✅ | Identifies distinct topics that form coherent pieces of text and splits along those. |
### Evaluation
<span data-heading-keywords="evaluation,evaluate"></span>
Evaluation is the process of assessing the performance and effectiveness of your LLM-powered applications.
It involves testing the model's responses against a set of predefined criteria or benchmarks to ensure it meets the desired quality standards and fulfills the intended purpose.
This process is vital for building reliable applications.
![](/img/langsmith_evaluate.png)
[LangSmith](https://docs.smith.langchain.com/) helps with this process in a few ways:
- It makes it easier to create and curate datasets via its tracing and annotation features
- It provides an evaluation framework that helps you define metrics and run your app against your dataset
- It allows you to track results over time and automatically run your evaluators on a schedule or as part of CI/CD
To learn more, check out [this LangSmith guide](https://docs.smith.langchain.com/concepts/evaluation).
### Tracing
<span data-heading-keywords="trace,tracing"></span>
A trace is essentially a series of steps that your application takes to go from input to output.
Traces contain individual steps called `runs`. These can be individual calls from a model, retriever,
tool, or sub-chains.
Tracing gives you observability inside your chains and agents, and is vital in diagnosing issues.
For a deeper dive, check out [this LangSmith conceptual guide](https://docs.smith.langchain.com/concepts/tracing).

View File

@@ -0,0 +1,35 @@
# General guidelines
Here are some things to keep in mind for all types of contributions:
- Follow the ["fork and pull request"](https://docs.github.com/en/get-started/exploring-projects-on-github/contributing-to-a-project) workflow.
- Fill out the checked-in pull request template when opening pull requests. Note related issues and tag relevant maintainers.
- Ensure your PR passes formatting, linting, and testing checks before requesting a review.
- If you would like comments or feedback on your current progress, please open an issue or discussion and tag a maintainer.
- See the sections on [Testing](/docs/contributing/code/setup#testing) and [Formatting and Linting](/docs/contributing/code/setup#formatting-and-linting) for how to run these checks locally.
- Backwards compatibility is key. Your changes must not be breaking, except in case of critical bug and security fixes.
- Look for duplicate PRs or issues that have already been opened before opening a new one.
- Keep scope as isolated as possible. As a general rule, your changes should not affect more than one package at a time.
## Bugfixes
We encourage and appreciate bugfixes. We ask that you:
- Explain the bug in enough detail for maintainers to be able to reproduce it.
- If an accompanying issue exists, link to it. Prefix with `Fixes` so that the issue will close automatically when the PR is merged.
- Avoid breaking changes if possible.
- Include unit tests that fail without the bugfix.
If you come across a bug and don't know how to fix it, we ask that you open an issue for it describing in detail the environment in which you encountered the bug.
## New features
We aim to keep the bar high for new features. We generally don't accept new core abstractions, changes to infra, changes to dependencies,
or new agents/chains from outside contributors without an existing GitHub discussion or issue that demonstrates an acute need for them.
- New features must come with docs, unit tests, and (if appropriate) integration tests.
- New integrations must come with docs, unit tests, and (if appropriate) integration tests.
- See [this page](/docs/contributing/integrations) for more details on contributing new integrations.
- New functionality should not inherit from or use deprecated methods or classes.
- We will reject features that are likely to lead to security vulnerabilities or reports.
- Do not add any hard dependencies. Integrations may add optional dependencies.

View File

@@ -0,0 +1,6 @@
# Contribute Code
If you would like to add a new feature or update an existing one, please read the resources below before getting started:
- [General guidelines](/docs/contributing/code/guidelines/)
- [Setup](/docs/contributing/code/setup/)

View File

@@ -1,36 +1,9 @@
---
sidebar_position: 1
---
# Contribute Code
# Setup
To contribute to this project, please follow the ["fork and pull request"](https://docs.github.com/en/get-started/quickstart/contributing-to-projects) workflow.
Please do not try to push directly to this repo unless you are a maintainer.
Please follow the checked-in pull request template when opening pull requests. Note related issues and tag relevant
maintainers.
Pull requests cannot land without passing the formatting, linting, and testing checks first. See [Testing](#testing) and
[Formatting and Linting](#formatting-and-linting) for how to run these checks locally.
It's essential that we maintain great documentation and testing. If you:
- Fix a bug
- Add a relevant unit or integration test when possible. These live in `tests/unit_tests` and `tests/integration_tests`.
- Make an improvement
- Update any affected example notebooks and documentation. These live in `docs`.
- Update unit and integration tests when relevant.
- Add a feature
- Add a demo notebook in `docs/docs/`.
- Add unit and integration tests.
We are a small, progress-oriented team. If there's something you'd like to add or change, opening a pull request is the
best way to get our attention.
## 🚀 Quick Start
This guide walks through how to run the repository locally and check in your first code.
For a [development container](https://containers.dev/), see the [.devcontainer folder](https://github.com/langchain-ai/langchain/tree/master/.devcontainer).
### Dependency Management: Poetry and other env/dependency managers
## Dependency Management: Poetry and other env/dependency managers
This project utilizes [Poetry](https://python-poetry.org/) v1.7.1+ as a dependency manager.
@@ -41,7 +14,7 @@ Install Poetry: **[documentation on how to install it](https://python-poetry.org
❗Note: If you use `Conda` or `Pyenv` as your environment/package manager, after installing Poetry,
tell Poetry to use the virtualenv python environment (`poetry config virtualenvs.prefer-active-python true`)
### Different packages
## Different packages
This repository contains multiple packages:
- `langchain-core`: Base interfaces for key abstractions as well as logic for combining them in chains (LangChain Expression Language).
@@ -59,7 +32,7 @@ For this quickstart, start with langchain-community:
cd libs/community
```
### Local Development Dependencies
## Local Development Dependencies
Install langchain-community development requirements (for running langchain, running examples, linting, formatting, tests, and coverage):
@@ -79,9 +52,9 @@ If you are still seeing this bug on v1.6.1+, you may also try disabling "modern
(`poetry config installer.modern-installation false`) and re-installing requirements.
See [this `debugpy` issue](https://github.com/microsoft/debugpy/issues/1246) for more details.
### Testing
## Testing
**Note:** In `langchain`, `langchain-community`, and `langchain-experimental`, some test dependencies are optional. See the following section about optional dependencies.
Unit tests cover modular logic that does not require calls to outside APIs.
If you add new logic, please add a unit test.
@@ -118,11 +91,11 @@ poetry install --with test
make test
```
### Formatting and Linting
## Formatting and Linting
Run these locally before submitting a PR; the CI system will check also.
#### Code Formatting
### Code Formatting
Formatting for this project is done via [ruff](https://docs.astral.sh/ruff/rules/).
@@ -174,7 +147,7 @@ This can be very helpful when you've made changes to only certain parts of the p
We recognize linting can be annoying - if you do not want to do it, please contact a project maintainer, and they can help you with it. We do not want this to be a blocker for good code getting contributed.
#### Spellcheck
### Spellcheck
Spellchecking for this project is done via [codespell](https://github.com/codespell-project/codespell).
Note that `codespell` finds common typos, so it can produce false positives (correctly spelled but rarely used words) and false negatives (misspellings it does not catch).
@@ -206,9 +179,7 @@ ignore-words-list = 'momento,collison,ned,foor,reworkd,parth,whats,aapply,mysogy
`langchain-core` and partner packages **do not use** optional dependencies in this way.
You only need to add a new dependency if a **unit test** relies on the package.
If your package is only required for **integration tests**, then you can skip these
steps and leave all pyproject.toml and poetry.lock files alone.
You'll notice that `pyproject.toml` and `poetry.lock` are **not** touched when you add optional dependencies below.
If you're adding a new dependency to Langchain, assume that it will be an optional dependency, and
that most users won't have it installed.
@@ -216,20 +187,12 @@ that most users won't have it installed.
Users who do not have the dependency installed should be able to **import** your code without
any side effects (no warnings, no errors, no exceptions).
To introduce the dependency to the pyproject.toml file correctly, please do the following:
To introduce the dependency to a library, please do the following:
1. Add the dependency to the main group as an optional dependency
```bash
poetry add --optional [package_name]
```
2. Open pyproject.toml and add the dependency to the `extended_testing` extra
3. Relock the poetry file to update the extra.
```bash
poetry lock --no-update
```
4. Add a unit test that at the very least attempts to import the new code. Ideally, the unit
1. Open extended_testing_deps.txt and add the dependency
2. Add a unit test that at the very least attempts to import the new code. Ideally, the unit
test makes use of lightweight fixtures to test the logic of the code.
5. Please use the `@pytest.mark.requires(package_name)` decorator for any tests that require the dependency.
3. Please use the `@pytest.mark.requires(package_name)` decorator for any unit tests that require the dependency.
## Adding a Jupyter Notebook

View File

@@ -1,2 +0,0 @@
label: 'Documentation'
position: 3

View File

@@ -0,0 +1,7 @@
# Contribute Documentation
Documentation is a vital part of LangChain. We welcome both new documentation for new features and
community improvements to our current documentation. Please read the resources below before getting started:
- [Documentation style guide](/docs/contributing/documentation/style_guide/)
- [Setup](/docs/contributing/documentation/setup/)

View File

@@ -1,4 +1,8 @@
# Technical logistics
---
sidebar_class_name: "hidden"
---
# Setup
LangChain documentation consists of two components:
@@ -12,8 +16,6 @@ used to generate the externally facing [API Reference](https://api.python.langch
The content for the API reference is autogenerated by scanning the docstrings in the codebase. For this reason we ask that
developers document their code well.
The main documentation is built using [Quarto](https://quarto.org) and [Docusaurus 2](https://docusaurus.io/).
The `API Reference` is largely autogenerated by [sphinx](https://www.sphinx-doc.org/en/master/)
from the code and is hosted by [Read the Docs](https://readthedocs.org/).
@@ -29,7 +31,7 @@ The content for the main documentation is located in the `/docs` directory of th
The documentation is written using a combination of ipython notebooks (`.ipynb` files)
and markdown (`.mdx` files). The notebooks are converted to markdown
and then built using [Docusaurus 2](https://docusaurus.io/).
Feel free to make contributions to the main documentation! 🥰
@@ -48,10 +50,6 @@ locally to ensure that it looks good and is free of errors.
If you're unable to build it locally that's okay as well, as you will be able to
see a preview of the documentation on the pull request page.
### Install dependencies
- [Quarto](https://quarto.org) - package that converts Jupyter notebooks (`.ipynb` files) into mdx files for serving in Docusaurus. [Download link](https://quarto.org/docs/download/).
From the **monorepo root**, run the following command to install the dependencies:
```bash
@@ -78,6 +76,18 @@ make docs_build
make api_docs_build
```
:::tip
The `make api_docs_build` command takes a long time. If you're making cosmetic changes to the API docs and want to see how they look, use:
```bash
make api_docs_quick_preview
```
which will just build a small subset of the API reference.
:::
Finally, run the link checker to ensure all links are valid:
```bash

View File

@@ -1,10 +1,8 @@
---
sidebar_label: "Style guide"
sidebar_class_name: "hidden"
---
# LangChain Documentation Style Guide
## Introduction
# Documentation Style Guide
As LangChain continues to grow, the surface area of documentation required to cover it continues to grow too.
This page provides guidelines for anyone writing documentation for LangChain, as well as some of our philosophies around
@@ -12,116 +10,137 @@ organization and structure.
## Philosophy
LangChain's documentation aspires to follow the [Diataxis framework](https://diataxis.fr).
Under this framework, all documentation falls under one of four categories:
LangChain's documentation follows the [Diataxis framework](https://diataxis.fr).
Under this framework, all documentation falls under one of four categories: [Tutorials](/docs/contributing/documentation/style_guide/#tutorials),
[How-to guides](/docs/contributing/documentation/style_guide/#how-to-guides),
[References](/docs/contributing/documentation/style_guide/#references), and [Explanations](/docs/contributing/documentation/style_guide/#conceptual-guide).
- **Tutorials**: Lessons that take the reader by the hand through a series of conceptual steps to complete a project.
- An example of this is our [LCEL streaming guide](/docs/how_to/streaming).
- Our guides on [custom components](/docs/how_to/custom_chat_model) is another one.
- **How-to guides**: Guides that take the reader through the steps required to solve a real-world problem.
- The clearest examples of this are our [Use case](/docs/how_to#use-cases) quickstart pages.
- **Reference**: Technical descriptions of the machinery and how to operate it.
- Our [Runnable interface](/docs/concepts#interface) page is an example of this.
- The [API reference pages](https://api.python.langchain.com/) are another.
- **Explanation**: Explanations that clarify and illuminate a particular topic.
- The [LCEL primitives pages](/docs/how_to/sequence) are an example of this.
### Tutorials
Tutorials are lessons that take the reader through a practical activity. Their purpose is to help the user
gain understanding of concepts and how they interact by showing one way to achieve some goal in a hands-on way. They should **avoid** giving
multiple permutations of ways to achieve that goal in-depth. Instead, it should guide a new user through a recommended path to accomplishing the tutorial's goal. While the end result of a tutorial does not necessarily need to
be completely production-ready, it should be useful and practically satisfy the goal that you clearly stated in the tutorial's introduction. Information on how to address additional scenarios
belongs in how-to guides.
To quote the Diataxis website:
> A tutorial serves the user's *acquisition* of skills and knowledge - their study. Its purpose is not to help the user get something done, but to help them learn.
In LangChain, these are often higher level guides that show off end-to-end use cases.
Some examples include:
- [Build a Simple LLM Application with LCEL](/docs/tutorials/llm_chain/)
- [Build a Retrieval Augmented Generation (RAG) App](/docs/tutorials/rag/)
Here are some high-level tips on writing a good tutorial:
- Focus on guiding the user to get something done, but keep in mind the end-goal is more to impart principles than to create a perfect production system.
- Be specific, not abstract, and follow one path.
- No need to go deeply into alternative approaches, but it's okay to reference them, ideally with a link to an appropriate how-to guide.
- Get "a point on the board" as soon as possible - something the user can run that outputs something.
- You can iterate and expand afterwards.
- Try to frequently checkpoint at given steps where the user can run code and see progress.
- Focus on results, not technical explanation.
- Crosslink heavily to appropriate conceptual/reference pages.
- The first time you mention a LangChain concept, use its full name (e.g. "LangChain Expression Language (LCEL)"), and link to its conceptual/other documentation page.
- It's also helpful to add a prerequisite callout that links to any pages with necessary background information.
- End with a recap/next steps section summarizing what the tutorial covered and future reading, such as related how-to guides.
### How-to guides
A how-to guide, as the name implies, demonstrates how to do something discrete and specific.
It should assume that the user is already familiar with underlying concepts, and is trying to solve an immediate problem, but
should still give some background or list the scenarios where the information contained within can be relevant.
They can and should discuss alternatives if one approach may be better than another in certain cases.
To quote the Diataxis website:
> A how-to guide serves the work of the already-competent user, whom you can assume to know what they want to do, and to be able to follow your instructions correctly.
Some examples include:
- [How to: return structured data from a model](/docs/how_to/structured_output/)
- [How to: write a custom chat model](/docs/how_to/custom_chat_model/)
Here are some high-level tips on writing a good how-to guide:
- Clearly explain what you are guiding the user through at the start.
- Assume higher intent than a tutorial and show what the user needs to do to get that task done.
- Assume familiarity of concepts, but explain why suggested actions are helpful.
- Crosslink heavily to conceptual/reference pages.
- Discuss alternatives and responses to real-world tradeoffs that may arise when solving a problem.
- Use lots of example code.
- Prefer full code blocks that the reader can copy and run.
- End with a recap/next steps section summarizing what the guide covered and future reading, such as other related how-to guides.
### Conceptual guide
LangChain's conceptual guide falls under the **Explanation** quadrant of Diataxis. They should cover LangChain terms and concepts
in a more abstract way than how-to guides or tutorials, and should be geared towards curious users interested in
gaining a deeper understanding of the framework. Try to avoid excessively large code examples - the goal here is to
impart perspective to the user rather than to finish a practical project. These guides should cover **why** things work the way they do.
This guide on documentation style is meant to fall under this category.
To quote the Diataxis website:
> The perspective of explanation is higher and wider than that of the other types. It does not take the user's eye-level view, as in a how-to guide, or a close-up view of the machinery, like reference material. Its scope in each case is a topic - “an area of knowledge”, that somehow has to be bounded in a reasonable, meaningful way.
Some examples include:
- [Retrieval conceptual docs](/docs/concepts/#retrieval)
- [Chat model conceptual docs](/docs/concepts/#chat-models)
Here are some high-level tips on writing a good conceptual guide:
- Explain design decisions. Why does concept X exist and why was it designed this way?
- Use analogies and reference other concepts and alternatives
- Avoid blending in too much reference content
- You can and should reference content covered in other guides, but make sure to link to them
### References
References contain detailed, low-level information that describes exactly what functionality exists and how to use it.
In LangChain, this is mainly our API reference pages, which are populated from docstrings within code.
References pages are generally not read end-to-end, but are consulted as necessary when a user needs to know
how to use something specific.
To quote the Diataxis website:
> The only purpose of a reference guide is to describe, as succinctly as possible, and in an orderly way. Whereas the content of tutorials and how-to guides are led by needs of the user, reference material is led by the product it describes.
Many of the reference pages in LangChain are automatically generated from code,
but here are some high-level tips on writing a good docstring:
- Be concise
- Discuss special cases and deviations from a user's expectations
- Go into detail on required inputs and outputs
- Light details on when one might use the feature are fine, but in-depth details belong in other sections.
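As an illustration only, a docstring written along these lines might look like the sketch below. The function, its parameters, and its behavior are hypothetical and not part of LangChain; the point is the concise structure covering inputs, outputs, and special cases:

```python
def truncate_text(text: str, max_tokens: int, *, suffix: str = "...") -> str:
    """Truncate ``text`` to at most ``max_tokens`` whitespace-separated tokens.

    Args:
        text: Input string to truncate.
        max_tokens: Maximum number of tokens to keep; must be positive.
        suffix: Appended to the result only when truncation occurs.

    Returns:
        ``text`` unchanged if it already fits, otherwise the truncated
        text with ``suffix`` appended.

    Raises:
        ValueError: If ``max_tokens`` is not positive.
    """
    if max_tokens <= 0:
        raise ValueError("max_tokens must be positive")
    tokens = text.split()
    if len(tokens) <= max_tokens:
        return text
    return " ".join(tokens[:max_tokens]) + suffix
```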
Each category serves a distinct purpose and requires a specific approach to writing and structuring the content.
## Taxonomy
Keeping the above in mind, we have sorted LangChain's docs into categories. It is helpful to think in these terms
when contributing new documentation:
### Getting started
The [getting started section](/docs/introduction) includes a high-level introduction to LangChain, a quickstart that
tours LangChain's various features, and logistical instructions around installation and project setup.
It contains elements of **How-to guides** and **Explanations**.
### Use cases
[Use cases](/docs/how_to#use-cases) are guides that are meant to show how to use LangChain to accomplish a specific task (RAG, information extraction, etc.).
The quickstarts should be good entrypoints for first-time LangChain developers who prefer to learn by getting something practical prototyped,
then taking the pieces apart retrospectively. These should mirror what LangChain is good at.
The quickstart pages here should fit the **How-to guide** category, with the other pages intended to be **Explanations** of more
in-depth concepts and strategies that accompany the main happy paths.
:::note
The below sections are listed roughly in order of increasing level of abstraction.
:::
### Expression Language
[LangChain Expression Language (LCEL)](/docs/concepts#langchain-expression-language) is the fundamental way that most LangChain components fit together, and this section is designed to teach
developers how to use it to build with LangChain's primitives effectively.
This section should contain **Tutorials** that teach how to stream and use LCEL primitives for more abstract tasks, **Explanations** of specific behaviors,
and some **References** for how to use different methods in the Runnable interface.
### Components
The [components section](/docs/concepts) covers concepts one level of abstraction higher than LCEL.
Abstract base classes like `BaseChatModel` and `BaseRetriever` should be covered here, as well as core implementations of these base classes,
such as `ChatPromptTemplate` and `RecursiveCharacterTextSplitter`. Customization guides belong here too.
This section should contain mostly conceptual **Tutorials**, **References**, and **Explanations** of the components they cover.
:::note
As a general rule of thumb, everything covered in the `Expression Language` and `Components` sections (with the exception of the `Composition` section of components) should
cover only components that exist in `langchain_core`.
:::
### Integrations
The [integrations](/docs/integrations/platforms/) are specific implementations of components. These often involve third-party APIs and services.
If this is the case, as a general rule, these are maintained by the third-party partner.
This section should contain mostly **Explanations** and **References**, though the actual content here is more flexible than other sections and more at the
discretion of the third-party provider.
:::note
Concepts covered in `Integrations` should generally exist in `langchain_community` or specific partner packages.
:::
### Guides and Ecosystem
The [Guides](/docs/tutorials) and [Ecosystem](https://docs.smith.langchain.com/) sections should contain guides that address higher-level problems than the sections above.
This includes, but is not limited to, considerations around productionization and development workflows.
These should contain mostly **How-to guides**, **Explanations**, and **Tutorials**.
### API references
LangChain's API references should act as **References** (as the name implies), with some **Explanation**-focused content as well.
## Sample developer journey
We have set up our docs to assist a developer who is new to LangChain. Let's walk through the intended path:
- The developer lands on https://python.langchain.com, and reads through the introduction and the diagram.
- If they are just curious, they may be drawn to the [Quickstart](/docs/tutorials/llm_chain) to get a high-level tour of what LangChain contains.
- If they have a specific task in mind that they want to accomplish, they will be drawn to the Use-Case section. The use-case should provide a good, concrete hook that shows the value LangChain can provide them and be a good entrypoint to the framework.
- They can then move to learn more about the fundamentals of LangChain through the Expression Language sections.
- Next, they can learn about LangChain's various components and integrations.
- Finally, they can get additional knowledge through the Guides.
This is only an ideal, of course - sections will inevitably reference lower- or higher-level concepts that are documented in other sections.
## Guidelines
## General guidelines
Here are some other guidelines you should think about when writing and organizing documentation.
### Linking to other sections
We generally do not merge new tutorials from outside contributors without an acute need.
We welcome updates as well as new integration docs, how-tos, and references.
### Avoid duplication
Multiple pages that cover the same material in depth are difficult to maintain and cause confusion. There should
be only one (very rarely two) canonical page for a given concept or feature. Instead, you should link to other guides.
### Link to other sections
Because sections of the docs do not exist in a vacuum, it is important to link to other sections as often as possible
to allow a developer to learn more about an unfamiliar topic inline.
This includes linking to the API references as well as conceptual sections!
### Conciseness
### Be concise
In general, take a less-is-more approach. If a section with a good explanation of a concept already exists, you should link to it rather than
re-explain it, unless the concept you are documenting presents some new wrinkle.
@@ -130,9 +149,10 @@ Be concise, including in code samples.
### General style
- Use active voice and present tense whenever possible.
- Use examples and code snippets to illustrate concepts and usage.
- Use appropriate header levels (`#`, `##`, `###`, etc.) to organize the content hierarchically.
- Use bullet points and numbered lists to break down information into easily digestible chunks.
- Use tables (especially for **Reference** sections) and diagrams often to present information visually.
- Include the table of contents for longer documentation pages to help readers navigate the content, but hide it for shorter pages.
- Use active voice and present tense whenever possible
- Use examples and code snippets to illustrate concepts and usage
- Use appropriate header levels (`#`, `##`, `###`, etc.) to organize the content hierarchically
- Use fewer cells with more code to make copy/paste easier
- Use bullet points and numbered lists to break down information into easily digestible chunks
- Use tables (especially for **Reference** sections) and diagrams often to present information visually
- Include the table of contents for longer documentation pages to help readers navigate the content, but hide it for shorter pages

View File

@@ -12,8 +12,8 @@ As an open-source project in a rapidly developing field, we are extremely open t
There are many ways to contribute to LangChain. Here are some common ways people contribute:
- [**Documentation**](/docs/contributing/documentation/style_guide): Help improve our docs, including this one!
- [**Code**](./code.mdx): Help us write code, fix bugs, or improve our infrastructure.
- [**Documentation**](/docs/contributing/documentation/): Help improve our docs, including this one!
- [**Code**](/docs/contributing/code/): Help us write code, fix bugs, or improve our infrastructure.
- [**Integrations**](integrations.mdx): Help us integrate with your favorite vendors and tools.
- [**Discussions**](https://github.com/langchain-ai/langchain/discussions): Help answer usage questions and discuss issues with users.
@@ -48,7 +48,7 @@ In a similar vein, we do enforce certain linting, formatting, and documentation
If you are finding these difficult (or even just annoying) to work with, feel free to contact a maintainer for help -
we do not want these to get in the way of getting good code into the codebase.
# 🌟 Recognition
### 🌟 Recognition
If your contribution has made its way into a release, we will want to give you credit on Twitter (only if you want though)!
If you have a Twitter account you would like us to mention, please let us know in the PR or through another means.

View File

@@ -1,6 +1,7 @@
---
sidebar_position: 5
---
# Contribute Integrations
To begin, make sure you have all the dependencies outlined in guide on [Contributing Code](/docs/contributing/code/).

View File

@@ -7,6 +7,7 @@ If you plan on contributing to LangChain code or documentation, it can be useful
to understand the high level structure of the repository.
LangChain is organized as a [monorepo](https://en.wikipedia.org/wiki/Monorepo) that contains multiple packages.
You can check out our [installation guide](/docs/how_to/installation/) for more on how they fit together.
Here's the structure visualized as a tree:
@@ -15,12 +16,22 @@ Here's the structure visualized as a tree:
├── cookbook # Tutorials and examples
├── docs # Contains content for the documentation here: https://python.langchain.com/
├── libs
│ ├── langchain # Main package
│ ├── langchain
│ │ ├── langchain
│ │ ├── tests/unit_tests # Unit tests (present in each package not shown for brevity)
│ │ ├── tests/integration_tests # Integration tests (present in each package not shown for brevity)
│ ├── langchain-community # Third-party integrations
│ ├── langchain-core # Base interfaces for key abstractions
│ ├── langchain-experimental # Experimental components and chains
│ ├── community # Third-party integrations
│ ├── langchain-community
│ ├── core # Base interfaces for key abstractions
│ │ ├── langchain-core
│ ├── experimental # Experimental components and chains
│ │ ├── langchain-experimental
| ├── cli # Command line interface
│ │ ├── langchain-cli
│ ├── text-splitters
│ │ ├── langchain-text-splitters
│ ├── standard-tests
│ │ ├── langchain-standard-tests
│ ├── partners
│ ├── langchain-partner-1
│ ├── langchain-partner-2
@@ -41,7 +52,7 @@ There are other files in the root directory level, but their presence should be
The `/docs` directory contains the content for the documentation that is shown
at https://python.langchain.com/ and the associated API Reference https://api.python.langchain.com/en/latest/langchain_api_reference.html.
See the [documentation](/docs/contributing/documentation/style_guide) guidelines to learn how to contribute to the documentation.
See the [documentation](/docs/contributing/documentation/) guidelines to learn how to contribute to the documentation.
## Code
@@ -49,6 +60,6 @@ The `/libs` directory contains the code for the LangChain packages.
To learn more about how to contribute code see the following guidelines:
- [Code](./code.mdx) Learn how to develop in the LangChain codebase.
- [Integrations](./integrations.mdx) to learn how to contribute to third-party integrations to langchain-community or to start a new partner package.
- [Testing](./testing.mdx) guidelines to learn how to write tests for the packages.
- [Code](/docs/contributing/code/): Learn how to develop in the LangChain codebase.
- [Integrations](./integrations.mdx): Learn how to contribute to third-party integrations to `langchain-community` or to start a new partner package.
- [Testing](./testing.mdx): Guidelines to learn how to write tests for the packages.

View File

@@ -1,5 +1,5 @@
---
sidebar_position: 2
sidebar_position: 6
---
# Testing

Binary file not shown.

View File

@@ -15,18 +15,18 @@
"id": "f4c03f40-1328-412d-8a48-1db0cd481b77",
"metadata": {},
"source": [
"# Build an Agent\n",
"# Build an Agent with AgentExecutor (Legacy)\n",
"\n",
":::{.callout-important}\n",
"This section will cover building with the legacy LangChain AgentExecutor. These are fine for getting started, but past a certain point, you will likely want flexibility and control that they do not offer. For working with more advanced agents, we'd recommend checking out [LangGraph Agents](/docs/concepts/#langgraph) or the [migration guide](/docs/how_to/migrate_agent/)\n",
":::\n",
"\n",
"By themselves, language models can't take actions - they just output text.\n",
"A big use case for LangChain is creating **agents**.\n",
"Agents are systems that use an LLM as a reasoning enginer to determine which actions to take and what the inputs to those actions should be.\n",
"The results of those actions can then be fed back into the agent and it determine whether more actions are needed, or whether it is okay to finish.\n",
"Agents are systems that use an LLM as a reasoning engine to determine which actions to take and what the inputs to those actions should be.\n",
"The results of those actions can then be fed back into the agent and it determines whether more actions are needed, or whether it is okay to finish.\n",
"\n",
"In this tutorial we will build an agent that can interact with multiple different tools: one being a local database, the other being a search engine. You will be able to ask this agent questions, watch it call tools, and have conversations with it.\n",
"\n",
":::{.callout-important}\n",
"This section will cover building with LangChain Agents. LangChain Agents are fine for getting started, but past a certain point you will likely want flexibility and control that they do not offer. For working with more advanced agents, we'd reccommend checking out [LangGraph](/docs/concepts/#langgraph)\n",
":::\n",
"In this tutorial, we will build an agent that can interact with multiple different tools: one being a local database, the other being a search engine. You will be able to ask this agent questions, watch it call tools, and have conversations with it.\n",
"\n",
"## Concepts\n",
"\n",
@@ -34,7 +34,7 @@
"- Using [language models](/docs/concepts/#chat-models), in particular their tool calling ability\n",
"- Creating a [Retriever](/docs/concepts/#retrievers) to expose specific information to our agent\n",
"- Using a Search [Tool](/docs/concepts/#tools) to look up things online\n",
"- [`Chat History`](/docs/concepts/#chat-history), which allows a chatbot to \"remember\" past interactions and take them into account when responding to followup questions. \n",
"- [`Chat History`](/docs/concepts/#chat-history), which allows a chatbot to \"remember\" past interactions and take them into account when responding to follow-up questions. \n",
"- Debugging and tracing your application using [LangSmith](/docs/concepts/#langsmith)\n",
"\n",
"## Setup\n",

View File

@@ -23,7 +23,7 @@
"This guide assumes familiarity with the following concepts:\n",
"- [LangChain Expression Language (LCEL)](/docs/concepts/#langchain-expression-language)\n",
"- [Chaining runnables](/docs/how_to/sequence/)\n",
"- [Tool calling](/docs/how_to/tool_calling/)\n",
"- [Tool calling](/docs/how_to/tool_calling)\n",
"\n",
":::\n",
"\n",
@@ -142,7 +142,7 @@
"\n",
"## Attaching OpenAI tools\n",
"\n",
"Another common use-case is tool calling. While you should generally use the [`.bind_tools()`](/docs/how_to/tool_calling/) method for tool-calling models, you can also bind provider-specific args directly if you want lower level control:"
"Another common use-case is tool calling. While you should generally use the [`.bind_tools()`](/docs/how_to/tool_calling) method for tool-calling models, you can also bind provider-specific args directly if you want lower level control:"
]
},
{

View File

@@ -1,5 +1,19 @@
{
"cells": [
{
"cell_type": "raw",
"id": "f781411d",
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"keywords: [charactertextsplitter]\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "c3ee8d00",

View File

@@ -0,0 +1,157 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "cfdf4f09-8125-4ed1-8063-6feed57da8a3",
"metadata": {},
"source": [
"# How to init any model in one line\n",
"\n",
"Many LLM applications let end users specify what model provider and model they want the application to be powered by. This requires writing some logic to initialize different ChatModels based on some user configuration. The `init_chat_model()` helper method makes it easy to initialize a number of different model integrations without having to worry about import paths and class names.\n",
"\n",
":::tip Supported models\n",
"\n",
"See the [init_chat_model()](https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.base.init_chat_model.html) API reference for a full list of supported integrations.\n",
"\n",
"Make sure you have the integration packages installed for any model providers you want to support. E.g. you should have `langchain-openai` installed to init an OpenAI model.\n",
"\n",
":::"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "165b0de6-9ae3-4e3d-aa98-4fc8a97c4a06",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain langchain-openai langchain-anthropic langchain-google-vertexai"
]
},
{
"cell_type": "markdown",
"id": "ea2c9f57-a796-45f8-b6f4-3efd3f361a9b",
"metadata": {},
"source": [
"## Basic usage"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "79e14913-803c-4382-9009-5c6af3d75d35",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"GPT-4o: I'm an AI created by OpenAI, and I don't have a personal name. You can call me Assistant! How can I help you today?\n",
"\n",
"Claude Opus: My name is Claude. It's nice to meet you!\n",
"\n",
"Gemini 1.5: I am a large language model, trained by Google. I do not have a name. \n",
"\n",
"\n"
]
}
],
"source": [
"from langchain.chat_models import init_chat_model\n",
"\n",
"# Returns a langchain_openai.ChatOpenAI instance.\n",
"gpt_4o = init_chat_model(\"gpt-4o\", model_provider=\"openai\", temperature=0)\n",
"# Returns a langchain_anthropic.ChatAnthropic instance.\n",
"claude_opus = init_chat_model(\n",
" \"claude-3-opus-20240229\", model_provider=\"anthropic\", temperature=0\n",
")\n",
"# Returns a langchain_google_vertexai.ChatVertexAI instance.\n",
"gemini_15 = init_chat_model(\n",
" \"gemini-1.5-pro\", model_provider=\"google_vertexai\", temperature=0\n",
")\n",
"\n",
"# Since all model integrations implement the ChatModel interface, you can use them in the same way.\n",
"print(\"GPT-4o: \" + gpt_4o.invoke(\"what's your name\").content + \"\\n\")\n",
"print(\"Claude Opus: \" + claude_opus.invoke(\"what's your name\").content + \"\\n\")\n",
"print(\"Gemini 1.5: \" + gemini_15.invoke(\"what's your name\").content + \"\\n\")"
]
},
{
"cell_type": "markdown",
"id": "fff9a4c8-b6ee-4a1a-8d3d-0ecaa312d4ed",
"metadata": {},
"source": [
"## Simple config example"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "75c25d39-bf47-4b51-a6c6-64d9c572bfd6",
"metadata": {},
"outputs": [],
"source": [
"user_config = {\n",
" \"model\": \"...user-specified...\",\n",
" \"model_provider\": \"...user-specified...\",\n",
" \"temperature\": 0,\n",
" \"max_tokens\": 1000,\n",
"}\n",
"\n",
"llm = init_chat_model(**user_config)\n",
"llm.invoke(\"what's your name\")"
]
},
{
"cell_type": "markdown",
"id": "f811f219-5e78-4b62-b495-915d52a22532",
"metadata": {},
"source": [
"## Inferring model provider\n",
"\n",
"For common and distinct model names `init_chat_model()` will attempt to infer the model provider. See the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.base.init_chat_model.html) for a full list of inference behavior. E.g. any model that starts with `gpt-3...` or `gpt-4...` will be inferred as using model provider `openai`."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "0378ccc6-95bc-4d50-be50-fccc193f0a71",
"metadata": {},
"outputs": [],
"source": [
"gpt_4o = init_chat_model(\"gpt-4o\", temperature=0)\n",
"claude_opus = init_chat_model(\"claude-3-opus-20240229\", temperature=0)\n",
"gemini_15 = init_chat_model(\"gemini-1.5-pro\", temperature=0)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "da07b5c0-d2e6-42e4-bfcd-2efcfaae6221",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv-2",
"language": "python",
"name": "poetry-venv-2"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -14,35 +14,51 @@
"\n",
":::\n",
"\n",
"Tracking token usage to calculate cost is an important part of putting your app in production. This guide goes over how to obtain this information from your LangChain model calls."
"Tracking token usage to calculate cost is an important part of putting your app in production. This guide goes over how to obtain this information from your LangChain model calls.\n",
"\n",
"This guide requires `langchain-openai >= 0.1.8`."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9c7d1338-dd1b-4d06-b33d-d5cffc49fd6a",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain langchain-openai"
]
},
{
"cell_type": "markdown",
"id": "1a55e87a-3291-4e7f-8e8e-4c69b0854384",
"id": "598ae1e2-a52d-4459-81fd-cdc68b06742a",
"metadata": {},
"source": [
"## Using AIMessage.response_metadata\n",
"## Using LangSmith\n",
"\n",
"A number of model providers return token usage information as part of the chat generation response. When available, this is included in the [`AIMessage.response_metadata`](/docs/how_to/response_metadata) field. Here's an example with OpenAI:"
"You can use [LangSmith](https://www.langchain.com/langsmith) to help track token usage in your LLM application. See the [LangSmith quick start guide](https://docs.smith.langchain.com/).\n",
"\n",
"## Using AIMessage.usage_metadata\n",
"\n",
"A number of model providers return token usage information as part of the chat generation response. When available, this information will be included on the `AIMessage` objects produced by the corresponding model.\n",
"\n",
"LangChain `AIMessage` objects include a [usage_metadata](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html#langchain_core.messages.ai.AIMessage.usage_metadata) attribute. When populated, this attribute will be a [UsageMetadata](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.UsageMetadata.html) dictionary with standard keys (e.g., `\"input_tokens\"` and `\"output_tokens\"`).\n",
"\n",
"Examples:\n",
"\n",
"**OpenAI**:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "467ccdeb-6b62-45e5-816e-167cd24d2586",
"id": "b39bf807-4125-4db4-bbf7-28a46afff6b4",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'token_usage': {'completion_tokens': 225,\n",
" 'prompt_tokens': 17,\n",
" 'total_tokens': 242},\n",
" 'model_name': 'gpt-4-turbo',\n",
" 'system_fingerprint': 'fp_76f018034d',\n",
" 'finish_reason': 'stop',\n",
" 'logprobs': None}"
"{'input_tokens': 8, 'output_tokens': 9, 'total_tokens': 17}"
]
},
"execution_count": 1,
@@ -51,37 +67,33 @@
}
],
"source": [
"# !pip install -qU langchain-openai\n",
"# # !pip install -qU langchain-openai\n",
"\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4-turbo\")\n",
"msg = llm.invoke([(\"human\", \"What's the oldest known example of cuneiform\")])\n",
"msg.response_metadata"
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\")\n",
"openai_response = llm.invoke(\"hello\")\n",
"openai_response.usage_metadata"
]
},
{
"cell_type": "markdown",
"id": "9d5026e9-3ad4-41e6-9946-9f1a26f4a21f",
"id": "2299c44a-2fe6-4d52-a6a2-99ff6d231c73",
"metadata": {},
"source": [
"And here's an example with Anthropic:"
"**Anthropic**:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "145404f1-e088-4824-b468-236c486a9903",
"id": "9c82ff80-ec4e-4049-b019-5f0bbd7df82a",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'id': 'msg_01P61rdHbapEo6h3fjpfpCQT',\n",
" 'model': 'claude-3-sonnet-20240229',\n",
" 'stop_reason': 'end_turn',\n",
" 'stop_sequence': None,\n",
" 'usage': {'input_tokens': 17, 'output_tokens': 306}}"
"{'input_tokens': 8, 'output_tokens': 12, 'total_tokens': 20}"
]
},
"execution_count": 2,
@@ -94,9 +106,222 @@
"\n",
"from langchain_anthropic import ChatAnthropic\n",
"\n",
"llm = ChatAnthropic(model=\"claude-3-sonnet-20240229\")\n",
"msg = llm.invoke([(\"human\", \"What's the oldest known example of cuneiform\")])\n",
"msg.response_metadata"
"llm = ChatAnthropic(model=\"claude-3-haiku-20240307\")\n",
"anthropic_response = llm.invoke(\"hello\")\n",
"anthropic_response.usage_metadata"
]
},
{
"cell_type": "markdown",
"id": "6d4efc15-ba9f-4b3d-9278-8e01f99f263f",
"metadata": {},
"source": [
"### Using AIMessage.response_metadata\n",
"\n",
"Metadata from the model response is also included in the AIMessage [response_metadata](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html#langchain_core.messages.ai.AIMessage.response_metadata) attribute. These data are typically not standardized. Note that different providers adopt different conventions for representing token counts:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "f156f9da-21f2-4c81-a714-54cbf9ad393e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"OpenAI: {'completion_tokens': 9, 'prompt_tokens': 8, 'total_tokens': 17}\n",
"\n",
"Anthropic: {'input_tokens': 8, 'output_tokens': 12}\n"
]
}
],
"source": [
"print(f'OpenAI: {openai_response.response_metadata[\"token_usage\"]}\\n')\n",
"print(f'Anthropic: {anthropic_response.response_metadata[\"usage\"]}')"
]
},
{
"cell_type": "markdown",
"id": "b4ef2c43-0ff6-49eb-9782-e4070c9da8d7",
"metadata": {},
"source": [
"### Streaming\n",
"\n",
"Some providers support token count metadata in a streaming context.\n",
"\n",
"#### OpenAI\n",
"\n",
"For example, OpenAI will return a message [chunk](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessageChunk.html) at the end of a stream with token usage information. This behavior is supported by `langchain-openai >= 0.1.8` and can be enabled by setting `stream_options={\"include_usage\": True}`.\n",
"\n",
"```{=mdx}\n",
":::note\n",
"By default, the last message chunk in a stream will include a `\"finish_reason\"` in the message's `response_metadata` attribute. If we include token usage in streaming mode, an additional chunk containing usage metadata will be added to the end of the stream, such that `\"finish_reason\"` appears on the second to last message chunk.\n",
":::\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "07f0c872-6b6c-4fed-a129-9b5a858505be",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content='' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n",
"content='Hello' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n",
"content='!' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n",
"content=' How' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n",
"content=' can' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n",
"content=' I' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n",
"content=' assist' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n",
"content=' you' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n",
"content=' today' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n",
"content='?' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n",
"content='' response_metadata={'finish_reason': 'stop'} id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n",
"content='' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf' usage_metadata={'input_tokens': 8, 'output_tokens': 9, 'total_tokens': 17}\n"
]
}
],
"source": [
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\")\n",
"\n",
"aggregate = None\n",
"for chunk in llm.stream(\"hello\", stream_options={\"include_usage\": True}):\n",
" print(chunk)\n",
" aggregate = chunk if aggregate is None else aggregate + chunk"
]
},
{
"cell_type": "markdown",
"id": "dd809ded-8b13-4d5f-be5e-277b79d51802",
"metadata": {},
"source": [
"Note that the usage metadata will be included in the sum of the individual message chunks:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "3db7bc03-a7d4-4704-92ab-f8ba92ef59ae",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Hello! How can I assist you today?\n",
"{'input_tokens': 8, 'output_tokens': 9, 'total_tokens': 17}\n"
]
}
],
"source": [
"print(aggregate.content)\n",
"print(aggregate.usage_metadata)"
]
},
{
"cell_type": "markdown",
"id": "7dba63e8-0ed7-4533-8f0f-78e19c38a25c",
"metadata": {},
"source": [
"To disable streaming token counts for OpenAI, set `\"include_usage\"` to False in `stream_options`, or omit it from the parameters:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "67117f2b-ce68-4c1e-9556-2d3849f90e1b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content='' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n",
"content='Hello' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n",
"content='!' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n",
"content=' How' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n",
"content=' can' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n",
"content=' I' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n",
"content=' assist' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n",
"content=' you' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n",
"content=' today' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n",
"content='?' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n",
"content='' response_metadata={'finish_reason': 'stop'} id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n"
]
}
],
"source": [
"aggregate = None\n",
"for chunk in llm.stream(\"hello\"):\n",
" print(chunk)"
]
},
{
"cell_type": "markdown",
"id": "6a5d9617-be3a-419a-9276-de9c29fa50ae",
"metadata": {},
"source": [
"You can also enable streaming token usage by setting `model_kwargs` when instantiating the chat model. This can be useful when incorporating chat models into LangChain [chains](/docs/concepts#langchain-expression-language-lcel): usage metadata can be monitored when [streaming intermediate steps](/docs/how_to/streaming#using-stream-events) or using tracing software such as [LangSmith](https://docs.smith.langchain.com/).\n",
"\n",
"See the below example, where we return output structured to a desired schema, but can still observe token usage streamed from intermediate steps."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "57dec1fb-bd9c-4c98-8798-8fbbe67f6b2c",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Token usage: {'input_tokens': 79, 'output_tokens': 23, 'total_tokens': 102}\n",
"\n",
"setup='Why was the math book sad?' punchline='Because it had too many problems.'\n"
]
}
],
"source": [
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"class Joke(BaseModel):\n",
" \"\"\"Joke to tell user.\"\"\"\n",
"\n",
" setup: str = Field(description=\"question to set up a joke\")\n",
" punchline: str = Field(description=\"answer to resolve the joke\")\n",
"\n",
"\n",
"llm = ChatOpenAI(\n",
" model=\"gpt-3.5-turbo-0125\",\n",
" model_kwargs={\"stream_options\": {\"include_usage\": True}},\n",
")\n",
"# Under the hood, .with_structured_output binds tools to the\n",
"# chat model and appends a parser.\n",
"structured_llm = llm.with_structured_output(Joke)\n",
"\n",
"async for event in structured_llm.astream_events(\"Tell me a joke\", version=\"v2\"):\n",
" if event[\"event\"] == \"on_chat_model_end\":\n",
" print(f'Token usage: {event[\"data\"][\"output\"].usage_metadata}\\n')\n",
" elif event[\"event\"] == \"on_chain_end\":\n",
" print(event[\"data\"][\"output\"])\n",
" else:\n",
" pass"
]
},
{
"cell_type": "markdown",
"id": "2bc8d313-4bef-463e-89a5-236d8bb6ab2f",
"metadata": {},
"source": [
"Token usage is also visible in the corresponding [LangSmith trace](https://smith.langchain.com/public/fe6513d5-7212-4045-82e0-fefa28bc7656/r) in the payload from the chat model."
]
},
{
@@ -115,7 +340,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 9,
"id": "31667d54",
"metadata": {},
"outputs": [
@@ -123,11 +348,11 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Tokens Used: 26\n",
"Tokens Used: 27\n",
"\tPrompt Tokens: 11\n",
"\tCompletion Tokens: 15\n",
"\tCompletion Tokens: 16\n",
"Successful Requests: 1\n",
"Total Cost (USD): $0.00056\n"
"Total Cost (USD): $2.95e-05\n"
]
}
],
@@ -136,7 +361,7 @@
"\n",
"from langchain_community.callbacks.manager import get_openai_callback\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4-turbo\", temperature=0)\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)\n",
"\n",
"with get_openai_callback() as cb:\n",
" result = llm.invoke(\"Tell me a joke\")\n",
@@ -153,7 +378,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 10,
"id": "e09420f4",
"metadata": {},
"outputs": [
@@ -161,7 +386,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"52\n"
"55\n"
]
}
],
@@ -172,6 +397,39 @@
" print(cb.total_tokens)"
]
},
{
"cell_type": "markdown",
"id": "9ac51188-c8f4-4230-90fd-3cd78cdd955d",
"metadata": {},
"source": [
"```{=mdx}\n",
":::note\n",
"Cost information is currently not available in streaming mode. This is because model names are currently not propagated through chunks in streaming mode, and the model name is used to look up the correct pricing. Token counts however are available:\n",
":::\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "b241069a-265d-4497-af34-b0a5f95ae67f",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"28\n"
]
}
],
"source": [
"with get_openai_callback() as cb:\n",
" for chunk in llm.stream(\"Tell me a joke\", stream_options={\"include_usage\": True}):\n",
" pass\n",
" print(cb.total_tokens)"
]
},
{
"cell_type": "markdown",
"id": "d8186e7b",
@@ -182,7 +440,7 @@
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": 12,
"id": "5d1125c6",
"metadata": {},
"outputs": [],
@@ -211,15 +469,15 @@
"source": [
"```{=mdx}\n",
":::note\n",
"We have to set `stream_runnable=False` for token counting to work. By default the AgentExecutor will stream the underlying agent so that you can get the most granular results when streaming events via AgentExecutor.stream_events. However, OpenAI does not return token counts when streaming model responses, so we need to turn off the underlying streaming.\n",
"We have to set `stream_runnable=False` for cost information, as described above. By default the AgentExecutor will stream the underlying agent so that you can get the most granular results when streaming events via AgentExecutor.stream_events.\n",
":::\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "2f98c536",
"execution_count": 13,
"id": "3950d88b-8bfb-4294-b75b-e6fd421e633c",
"metadata": {},
"outputs": [
{
@@ -230,46 +488,51 @@
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\n",
"Invoking: `wikipedia` with `Hummingbird`\n",
"Invoking: `wikipedia` with `{'query': 'hummingbird scientific name'}`\n",
"\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3mPage: Hummingbird\n",
"Summary: Hummingbirds are birds native to the Americas and comprise the biological family Trochilidae. With approximately 366 species and 113 genera, they occur from Alaska to Tierra del Fuego, but most species are found in Central and South America. As of 2024, 21 hummingbird species are listed as endangered or critically endangered, with numerous species declining in population.Hummingbirds have varied specialized characteristics to enable rapid, maneuverable flight: exceptional metabolic capacity, adaptations to high altitude, sensitive visual and communication abilities, and long-distance migration in some species. Among all birds, male hummingbirds have the widest diversity of plumage color, particularly in blues, greens, and purples. Hummingbirds are the smallest mature birds, measuring 7.513 cm (35 in) in length. The smallest is the 5 cm (2.0 in) bee hummingbird, which weighs less than 2.0 g (0.07 oz), and the largest is the 23 cm (9 in) giant hummingbird, weighing 1824 grams (0.630.85 oz). Noted for long beaks, hummingbirds are specialized for feeding on flower nectar, but all species also consume small insects.\n",
"Summary: Hummingbirds are birds native to the Americas and comprise the biological family Trochilidae. With approximately 366 species and 113 genera, they occur from Alaska to Tierra del Fuego, but most species are found in Central and South America. As of 2024, 21 hummingbird species are listed as endangered or critically endangered, with numerous species declining in population.\n",
"Hummingbirds have varied specialized characteristics to enable rapid, maneuverable flight: exceptional metabolic capacity, adaptations to high altitude, sensitive visual and communication abilities, and long-distance migration in some species. Among all birds, male hummingbirds have the widest diversity of plumage color, particularly in blues, greens, and purples. Hummingbirds are the smallest mature birds, measuring 7.513 cm (35 in) in length. The smallest is the 5 cm (2.0 in) bee hummingbird, which weighs less than 2.0 g (0.07 oz), and the largest is the 23 cm (9 in) giant hummingbird, weighing 1824 grams (0.630.85 oz). Noted for long beaks, hummingbirds are specialized for feeding on flower nectar, but all species also consume small insects.\n",
"They are known as hummingbirds because of the humming sound created by their beating wings, which flap at high frequencies audible to other birds and humans. They hover at rapid wing-flapping rates, which vary from around 12 beats per second in the largest species to 80 per second in small hummingbirds.\n",
"Hummingbirds have the highest mass-specific metabolic rate of any homeothermic animal. To conserve energy when food is scarce and at night when not foraging, they can enter torpor, a state similar to hibernation, and slow their metabolic rate to 115 of its normal rate. While most hummingbirds do not migrate, the rufous hummingbird has one of the longest migrations among birds, traveling twice per year between Alaska and Mexico, a distance of about 3,900 miles (6,300 km).\n",
"Hummingbirds split from their sister group, the swifts and treeswifts, around 42 million years ago. The oldest known fossil hummingbird is Eurotrochilus, from the Rupelian Stage of Early Oligocene Europe.\n",
"\n",
"Page: Rufous hummingbird\n",
"Summary: The rufous hummingbird (Selasphorus rufus) is a small hummingbird, about 8 cm (3.1 in) long with a long, straight and slender bill. These birds are known for their extraordinary flight skills, flying 2,000 mi (3,200 km) during their migratory transits. It is one of nine species in the genus Selasphorus.\n",
"\n",
"\n",
"Page: Bee hummingbird\n",
"Summary: The bee hummingbird, zunzuncito or Helena hummingbird (Mellisuga helenae) is a species of hummingbird, native to the island of Cuba in the Caribbean. It is the smallest known bird. The bee hummingbird feeds on nectar of flowers and bugs found in Cuba.\n",
"\n",
"Page: Hummingbird cake\n",
"Summary: Hummingbird cake is a banana-pineapple spice cake originating in Jamaica and a popular dessert in the southern United States since the 1970s. Ingredients include flour, sugar, salt, vegetable oil, ripe banana, pineapple, cinnamon, pecans, vanilla extract, eggs, and leavening agent. It is often served with cream cheese frosting.\u001b[0m\u001b[32;1m\u001b[1;3m\n",
"Invoking: `wikipedia` with `Fastest bird`\n",
"Page: Anna's hummingbird\n",
"Summary: Anna's hummingbird (Calypte anna) is a North American species of hummingbird. It was named after Anna Masséna, Duchess of Rivoli.\n",
"It is native to western coastal regions of North America. In the early 20th century, Anna's hummingbirds bred only in northern Baja California and Southern California. The transplanting of exotic ornamental plants in residential areas throughout the Pacific coast and inland deserts provided expanded nectar and nesting sites, allowing the species to expand its breeding range. Year-round residence of Anna's hummingbirds in the Pacific Northwest is an example of ecological release dependent on acclimation to colder winter temperatures, introduced plants, and human provision of nectar feeders during winter.\n",
"These birds feed on nectar from flowers using a long extendable tongue. They also consume small insects and other arthropods caught in flight or gleaned from vegetation.\u001b[0m\u001b[32;1m\u001b[1;3m\n",
"Invoking: `wikipedia` with `{'query': 'fastest bird species'}`\n",
"\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3mPage: Fastest animals\n",
"\u001b[0m\u001b[36;1m\u001b[1;3mPage: List of birds by flight speed\n",
"Summary: This is a list of the fastest flying birds in the world. A bird's velocity is necessarily variable; a hunting bird will reach much greater speeds while diving to catch prey than when flying horizontally. The bird that can achieve the greatest airspeed is the peregrine falcon (Falco peregrinus), able to exceed 320 km/h (200 mph) in its dives. A close relative of the common swift, the white-throated needletail (Hirundapus caudacutus), is commonly reported as the fastest bird in level flight with a reported top speed of 169 km/h (105 mph). This record remains unconfirmed as the measurement methods have never been published or verified. The record for the fastest confirmed level flight by a bird is 111.5 km/h (69.3 mph) held by the common swift.\n",
"\n",
"\n",
"\n",
"Page: Fastest animals\n",
"Summary: This is a list of the fastest animals in the world, by types of animal.\n",
"\n",
"\n",
"\n",
"Page: List of birds by flight speed\n",
"Summary: This is a list of the fastest flying birds in the world. A bird's velocity is necessarily variable; a hunting bird will reach much greater speeds while diving to catch prey than when flying horizontally. The bird that can achieve the greatest airspeed is the peregrine falcon, able to exceed 320 km/h (200 mph) in its dives. A close relative of the common swift, the white-throated needletail (Hirundapus caudacutus), is commonly reported as the fastest bird in level flight with a reported top speed of 169 km/h (105 mph). This record remains unconfirmed as the measurement methods have never been published or verified. The record for the fastest confirmed level flight by a bird is 111.5 km/h (69.3 mph) held by the common swift.\n",
"\n",
"Page: Ostrich\n",
"Summary: Ostriches are large flightless birds. They are the heaviest and largest living birds, with adult common ostriches weighing anywhere between 63.5 and 145 kilograms and laying the largest eggs of any living land animal. With the ability to run at 70 km/h (43.5 mph), they are the fastest birds on land. They are farmed worldwide, with significant industries in the Philippines and in Namibia. Ostrich leather is a lucrative commodity, and the large feathers are used as plumes for the decoration of ceremonial headgear. Ostrich eggs have been used by humans for millennia.\n",
"Ostriches are of the genus Struthio in the order Struthioniformes, part of the infra-class Palaeognathae, a diverse group of flightless birds also known as ratites that includes the emus, rheas, cassowaries, kiwis and the extinct elephant birds and moas. There are two living species of ostrich: the common ostrich, native to large areas of sub-Saharan Africa, and the Somali ostrich, native to the Horn of Africa. The common ostrich was historically native to the Arabian Peninsula, and ostriches were present across Asia as far east as China and Mongolia during the Late Pleistocene and possibly into the Holocene.\u001b[0m\u001b[32;1m\u001b[1;3m### Hummingbird's Scientific Name\n",
"The scientific name for the bee hummingbird, which is the smallest known bird and a species of hummingbird, is **Mellisuga helenae**. It is native to Cuba.\n",
"\n",
"### Fastest Bird Species\n",
"The fastest bird in terms of airspeed is the **peregrine falcon**, which can exceed speeds of 320 km/h (200 mph) during its diving flight. In level flight, the fastest confirmed speed is held by the **common swift**, which can fly at 111.5 km/h (69.3 mph).\u001b[0m\n",
"Page: Falcon\n",
"Summary: Falcons () are birds of prey in the genus Falco, which includes about 40 species. Falcons are widely distributed on all continents of the world except Antarctica, though closely related raptors did occur there in the Eocene.\n",
"Adult falcons have thin, tapered wings, which enable them to fly at high speed and change direction rapidly. Fledgling falcons, in their first year of flying, have longer flight feathers, which make their configuration more like that of a general-purpose bird such as a broad wing. This makes flying easier while learning the exceptional skills required to be effective hunters as adults.\n",
"The falcons are the largest genus in the Falconinae subfamily of Falconidae, which itself also includes another subfamily comprising caracaras and a few other species. All these birds kill with their beaks, using a tomial \"tooth\" on the side of their beaks—unlike the hawks, eagles, and other birds of prey in the Accipitridae, which use their feet.\n",
"The largest falcon is the gyrfalcon at up to 65 cm in length. The smallest falcon species is the pygmy falcon, which measures just 20 cm. As with hawks and owls, falcons exhibit sexual dimorphism, with the females typically larger than the males, thus allowing a wider range of prey species.\n",
"Some small falcons with long, narrow wings are called \"hobbies\" and some which hover while hunting are called \"kestrels\".\n",
"As is the case with many birds of prey, falcons have exceptional powers of vision; the visual acuity of one species has been measured at 2.6 times that of a normal human. Peregrine falcons have been recorded diving at speeds of 320 km/h (200 mph), making them the fastest-moving creatures on Earth; the fastest recorded dive attained a vertical speed of 390 km/h (240 mph).\u001b[0m\u001b[32;1m\u001b[1;3mThe scientific name for a hummingbird is Trochilidae. The fastest bird species is the peregrine falcon (Falco peregrinus), which can exceed speeds of 320 km/h (200 mph) in its dives.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"Total Tokens: 1583\n",
"Prompt Tokens: 1412\n",
"Completion Tokens: 171\n",
"Total Cost (USD): $0.019250000000000003\n"
"Total Tokens: 1787\n",
"Prompt Tokens: 1687\n",
"Completion Tokens: 100\n",
"Total Cost (USD): $0.0009935\n"
]
}
],
@@ -298,19 +561,19 @@
},
{
"cell_type": "code",
"execution_count": 1,
"id": "4a3eced5-2ff7-49a7-a48b-768af8658323",
"execution_count": 12,
"id": "1837c807-136a-49d8-9c33-060e58dc16d2",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Tokens Used: 0\n",
"\tPrompt Tokens: 0\n",
"\tCompletion Tokens: 0\n",
"Tokens Used: 96\n",
"\tPrompt Tokens: 26\n",
"\tCompletion Tokens: 70\n",
"Successful Requests: 2\n",
"Total Cost (USD): $0.0\n"
"Total Cost (USD): $0.001888\n"
]
}
],
@@ -364,7 +627,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -71,13 +71,13 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"from langchain_openai import ChatOpenAI\n",
"\n",
"chat = ChatOpenAI(model=\"gpt-3.5-turbo-1106\")"
"chat = ChatOpenAI(model=\"gpt-3.5-turbo-0125\")"
]
},
{
@@ -95,19 +95,15 @@
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='I said \"J\\'adore la programmation,\" which means \"I love programming\" in French.')"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
"name": "stdout",
"output_type": "stream",
"text": [
"I said \"J'adore la programmation,\" which means \"I love programming\" in French.\n"
]
}
],
"source": [
"from langchain_core.messages import AIMessage, HumanMessage\n",
"from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
@@ -115,23 +111,25 @@
" \"system\",\n",
" \"You are a helpful assistant. Answer all questions to the best of your ability.\",\n",
" ),\n",
" MessagesPlaceholder(variable_name=\"messages\"),\n",
" (\"placeholder\", \"{messages}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | chat\n",
"\n",
"chain.invoke(\n",
"ai_msg = chain.invoke(\n",
" {\n",
" \"messages\": [\n",
" HumanMessage(\n",
" content=\"Translate this sentence from English to French: I love programming.\"\n",
" (\n",
" \"human\",\n",
" \"Translate this sentence from English to French: I love programming.\",\n",
" ),\n",
" AIMessage(content=\"J'adore la programmation.\"),\n",
" HumanMessage(content=\"What did you just say?\"),\n",
" (\"ai\", \"J'adore la programmation.\"),\n",
" (\"human\", \"What did you just say?\"),\n",
" ],\n",
" }\n",
")"
")\n",
"print(ai_msg.content)"
]
},
{
@@ -193,7 +191,7 @@
{
"data": {
"text/plain": [
"AIMessage(content='You asked me to translate the sentence \"I love programming\" from English to French.')"
"AIMessage(content='You just asked me to translate the sentence \"I love programming\" from English to French.', response_metadata={'token_usage': {'completion_tokens': 18, 'prompt_tokens': 61, 'total_tokens': 79}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-5cbb21c2-9c30-4031-8ea8-bfc497989535-0', usage_metadata={'input_tokens': 61, 'output_tokens': 18, 'total_tokens': 79})"
]
},
"execution_count": 5,
@@ -250,7 +248,7 @@
" \"system\",\n",
" \"You are a helpful assistant. Answer all questions to the best of your ability.\",\n",
" ),\n",
" MessagesPlaceholder(variable_name=\"chat_history\"),\n",
" (\"placeholder\", \"{chat_history}\"),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
@@ -304,10 +302,17 @@
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Parent run dc4e2f79-4bcd-4a36-9506-55ace9040588 not found for run 34b5773e-3ced-46a6-8daf-4d464c15c940. Treating as a root run.\n"
]
},
{
"data": {
"text/plain": [
"AIMessage(content='The translation of \"I love programming\" in French is \"J\\'adore la programmation.\"')"
"AIMessage(content='\"J\\'adore la programmation.\"', response_metadata={'token_usage': {'completion_tokens': 9, 'prompt_tokens': 39, 'total_tokens': 48}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-648b0822-b0bb-47a2-8e7d-7d34744be8f2-0', usage_metadata={'input_tokens': 39, 'output_tokens': 9, 'total_tokens': 48})"
]
},
"execution_count": 8,
@@ -327,10 +332,17 @@
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Parent run cc14b9d8-c59e-40db-a523-d6ab3fc2fa4f not found for run 5b75e25c-131e-46ee-9982-68569db04330. Treating as a root run.\n"
]
},
{
"data": {
"text/plain": [
"AIMessage(content='You just asked me to translate the sentence \"I love programming\" from English to French.')"
"AIMessage(content='You asked me to translate the sentence \"I love programming\" from English to French.', response_metadata={'token_usage': {'completion_tokens': 17, 'prompt_tokens': 63, 'total_tokens': 80}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-5950435c-1dc2-43a6-836f-f989fd62c95e-0', usage_metadata={'input_tokens': 63, 'output_tokens': 17, 'total_tokens': 80})"
]
},
"execution_count": 9,
@@ -354,12 +366,12 @@
"\n",
"### Trimming messages\n",
"\n",
"LLMs and chat models have limited context windows, and even if you're not directly hitting limits, you may want to limit the amount of distraction the model has to deal with. One solution is to only load and store the most recent `n` messages. Let's use an example history with some preloaded messages:"
"LLMs and chat models have limited context windows, and even if you're not directly hitting limits, you may want to limit the amount of distraction the model has to deal with. One solution is trim the historic messages before passing them to the model. Let's use an example history with some preloaded messages:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 21,
"metadata": {},
"outputs": [
{
@@ -371,7 +383,7 @@
" AIMessage(content='Fine thanks!')]"
]
},
"execution_count": 10,
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
@@ -396,34 +408,28 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 22,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Parent run 7ff2d8ec-65e2-4f67-8961-e498e2c4a591 not found for run 3881e990-6596-4326-84f6-2b76949e0657. Treating as a root run.\n"
]
},
{
"data": {
"text/plain": [
"AIMessage(content='Your name is Nemo.')"
"AIMessage(content='Your name is Nemo.', response_metadata={'token_usage': {'completion_tokens': 6, 'prompt_tokens': 66, 'total_tokens': 72}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-f8aabef8-631a-4238-a39b-701e881fbe47-0', usage_metadata={'input_tokens': 66, 'output_tokens': 6, 'total_tokens': 72})"
]
},
"execution_count": 11,
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant. Answer all questions to the best of your ability.\",\n",
" ),\n",
" MessagesPlaceholder(variable_name=\"chat_history\"),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | chat\n",
"\n",
"chain_with_message_history = RunnableWithMessageHistory(\n",
" chain,\n",
" lambda session_id: demo_ephemeral_chat_history,\n",
@@ -443,34 +449,33 @@
"source": [
"We can see the chain remembers the preloaded name.\n",
"\n",
"But let's say we have a very small context window, and we want to trim the number of messages passed to the chain to only the 2 most recent ones. We can use the `clear` method to remove messages and re-add them to the history. We don't have to, but let's put this method at the front of our chain to ensure it's always called:"
"But let's say we have a very small context window, and we want to trim the number of messages passed to the chain to only the 2 most recent ones. We can use the built in [trim_messages](/docs/how_to/trim_messages/) util to trim messages based on their token count before they reach our prompt. In this case we'll count each message as 1 \"token\" and keep only the last two messages:"
]
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 23,
"metadata": {},
"outputs": [],
"source": [
"from operator import itemgetter\n",
"\n",
"from langchain_core.messages import trim_messages\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"\n",
"\n",
"def trim_messages(chain_input):\n",
" stored_messages = demo_ephemeral_chat_history.messages\n",
" if len(stored_messages) <= 2:\n",
" return False\n",
"\n",
" demo_ephemeral_chat_history.clear()\n",
"\n",
" for message in stored_messages[-2:]:\n",
" demo_ephemeral_chat_history.add_message(message)\n",
"\n",
" return True\n",
"\n",
"trimmer = trim_messages(strategy=\"last\", max_tokens=2, token_counter=len)\n",
"\n",
"chain_with_trimming = (\n",
" RunnablePassthrough.assign(messages_trimmed=trim_messages)\n",
" | chain_with_message_history\n",
" RunnablePassthrough.assign(chat_history=itemgetter(\"chat_history\") | trimmer)\n",
" | prompt\n",
" | chat\n",
")\n",
"\n",
"chain_with_trimmed_history = RunnableWithMessageHistory(\n",
" chain_with_trimming,\n",
" lambda session_id: demo_ephemeral_chat_history,\n",
" input_messages_key=\"input\",\n",
" history_messages_key=\"chat_history\",\n",
")"
]
},
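Because the hunk above interleaves the old and new versions of this cell, here is the trimming chain pieced together as one runnable sketch (`InMemoryChatMessageHistory` stands in for however the notebook builds `demo_ephemeral_chat_history` in earlier cells):

```python
from operator import itemgetter

from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.messages import trim_messages
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_openai import ChatOpenAI

chat = ChatOpenAI(model="gpt-3.5-turbo-0125")
demo_ephemeral_chat_history = InMemoryChatMessageHistory()

prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are a helpful assistant. Answer all questions to the best of your ability.",
        ),
        ("placeholder", "{chat_history}"),
        ("human", "{input}"),
    ]
)

# Keep only the last two messages, counting each message as one "token".
trimmer = trim_messages(strategy="last", max_tokens=2, token_counter=len)

chain_with_trimming = (
    RunnablePassthrough.assign(chat_history=itemgetter("chat_history") | trimmer)
    | prompt
    | chat
)

chain_with_trimmed_history = RunnableWithMessageHistory(
    chain_with_trimming,
    lambda session_id: demo_ephemeral_chat_history,
    input_messages_key="input",
    history_messages_key="chat_history",
)
```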
@@ -483,22 +488,29 @@
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 24,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Parent run 775cde65-8d22-4c44-80bb-f0b9811c32ca not found for run 5cf71d0e-4663-41cd-8dbe-e9752689cfac. Treating as a root run.\n"
]
},
{
"data": {
"text/plain": [
"AIMessage(content=\"P. Sherman's address is 42 Wallaby Way, Sydney.\")"
"AIMessage(content='P. Sherman is a fictional character from the animated movie \"Finding Nemo\" who lives at 42 Wallaby Way, Sydney.', response_metadata={'token_usage': {'completion_tokens': 27, 'prompt_tokens': 53, 'total_tokens': 80}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-5642ef3a-fdbe-43cf-a575-d1785976a1b9-0', usage_metadata={'input_tokens': 53, 'output_tokens': 27, 'total_tokens': 80})"
]
},
"execution_count": 13,
"execution_count": 24,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain_with_trimming.invoke(\n",
"chain_with_trimmed_history.invoke(\n",
" {\"input\": \"Where does P. Sherman live?\"},\n",
" {\"configurable\": {\"session_id\": \"unused\"}},\n",
")"
@@ -506,19 +518,23 @@
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": 25,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessage(content=\"What's my name?\"),\n",
" AIMessage(content='Your name is Nemo.'),\n",
"[HumanMessage(content=\"Hey there! I'm Nemo.\"),\n",
" AIMessage(content='Hello!'),\n",
" HumanMessage(content='How are you today?'),\n",
" AIMessage(content='Fine thanks!'),\n",
" HumanMessage(content=\"What's my name?\"),\n",
" AIMessage(content='Your name is Nemo.', response_metadata={'token_usage': {'completion_tokens': 6, 'prompt_tokens': 66, 'total_tokens': 72}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-f8aabef8-631a-4238-a39b-701e881fbe47-0', usage_metadata={'input_tokens': 66, 'output_tokens': 6, 'total_tokens': 72}),\n",
" HumanMessage(content='Where does P. Sherman live?'),\n",
" AIMessage(content=\"P. Sherman's address is 42 Wallaby Way, Sydney.\")]"
" AIMessage(content='P. Sherman is a fictional character from the animated movie \"Finding Nemo\" who lives at 42 Wallaby Way, Sydney.', response_metadata={'token_usage': {'completion_tokens': 27, 'prompt_tokens': 53, 'total_tokens': 80}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-5642ef3a-fdbe-43cf-a575-d1785976a1b9-0', usage_metadata={'input_tokens': 53, 'output_tokens': 27, 'total_tokens': 80})]"
]
},
"execution_count": 14,
"execution_count": 25,
"metadata": {},
"output_type": "execute_result"
}
@@ -536,48 +552,39 @@
},
{
"cell_type": "code",
"execution_count": 15,
"execution_count": 27,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Parent run fde7123f-6fd3-421a-a3fc-2fb37dead119 not found for run 061a4563-2394-470d-a3ed-9bf1388ca431. Treating as a root run.\n"
]
},
{
"data": {
"text/plain": [
"AIMessage(content=\"I'm sorry, I don't have access to your personal information.\")"
"AIMessage(content=\"I'm sorry, but I don't have access to your personal information, so I don't know your name. How else may I assist you today?\", response_metadata={'token_usage': {'completion_tokens': 31, 'prompt_tokens': 74, 'total_tokens': 105}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-0ab03495-1f7c-4151-9070-56d2d1c565ff-0', usage_metadata={'input_tokens': 74, 'output_tokens': 31, 'total_tokens': 105})"
]
},
"execution_count": 15,
"execution_count": 27,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain_with_trimming.invoke(\n",
"chain_with_trimmed_history.invoke(\n",
" {\"input\": \"What is my name?\"},\n",
" {\"configurable\": {\"session_id\": \"unused\"}},\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 16,
"cell_type": "markdown",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessage(content='Where does P. Sherman live?'),\n",
" AIMessage(content=\"P. Sherman's address is 42 Wallaby Way, Sydney.\"),\n",
" HumanMessage(content='What is my name?'),\n",
" AIMessage(content=\"I'm sorry, I don't have access to your personal information.\")]"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"demo_ephemeral_chat_history.messages"
"Check out our [how to guide on trimming messages](/docs/how_to/trim_messages/) for more."
]
},
{
@@ -638,7 +645,7 @@
" \"system\",\n",
" \"You are a helpful assistant. Answer all questions to the best of your ability. The provided chat history includes facts about the user you are speaking with.\",\n",
" ),\n",
" MessagesPlaceholder(variable_name=\"chat_history\"),\n",
" (\"placeholder\", \"{chat_history}\"),\n",
" (\"user\", \"{input}\"),\n",
" ]\n",
")\n",
@@ -672,7 +679,7 @@
" return False\n",
" summarization_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" MessagesPlaceholder(variable_name=\"chat_history\"),\n",
" (\"placeholder\", \"{chat_history}\"),\n",
" (\n",
" \"user\",\n",
" \"Distill the above chat messages into a single summary message. Include as many specific details as you can.\",\n",
@@ -772,9 +779,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}
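Earlier in this file's diff only fragments of the summarization step appear; a rough sketch of how such a summarize-and-replace step might be wired up is below (`chat` and `demo_ephemeral_chat_history` are assumed to exist from earlier cells, and the control flow is an approximation rather than the notebook's exact code):

```python
from langchain_core.prompts import ChatPromptTemplate


def summarize_messages(chain_input):
    # `demo_ephemeral_chat_history` and `chat` are assumed from earlier cells.
    stored_messages = demo_ephemeral_chat_history.messages
    if len(stored_messages) == 0:
        return False

    summarization_prompt = ChatPromptTemplate.from_messages(
        [
            ("placeholder", "{chat_history}"),
            (
                "user",
                "Distill the above chat messages into a single summary message. "
                "Include as many specific details as you can.",
            ),
        ]
    )
    summarization_chain = summarization_prompt | chat
    summary_message = summarization_chain.invoke({"chat_history": stored_messages})

    # Replace the full history with the single summary message.
    demo_ephemeral_chat_history.clear()
    demo_ephemeral_chat_history.add_message(summary_message)
    return True
```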

File diff suppressed because one or more lines are too long

View File

@@ -23,12 +23,12 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install \"unstructured[html]\""
"%pip install unstructured"
]
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 2,
"id": "7d167ca3-c7c7-4ef0-b509-080629f0f482",
"metadata": {},
"outputs": [
@@ -36,14 +36,14 @@
"name": "stdout",
"output_type": "stream",
"text": [
"[Document(page_content='My First Heading\\n\\nMy first paragraph.', metadata={'source': '../../../docs/integrations/document_loaders/example_data/fake-content.html'})]\n"
"[Document(page_content='My First Heading\\n\\nMy first paragraph.', metadata={'source': '../../docs/integrations/document_loaders/example_data/fake-content.html'})]\n"
]
}
],
"source": [
"from langchain_community.document_loaders import UnstructuredHTMLLoader\n",
"\n",
"file_path = \"../../../docs/integrations/document_loaders/example_data/fake-content.html\"\n",
"file_path = \"../../docs/integrations/document_loaders/example_data/fake-content.html\"\n",
"\n",
"loader = UnstructuredHTMLLoader(file_path)\n",
"data = loader.load()\n",
@@ -73,7 +73,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 4,
"id": "0a2050a8-6df6-4696-9889-ba367d6f9caa",
"metadata": {},
"outputs": [
@@ -81,7 +81,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"[Document(page_content='\\nTest Title\\n\\n\\nMy First Heading\\nMy first paragraph.\\n\\n\\n', metadata={'source': '../../../docs/integrations/document_loaders/example_data/fake-content.html', 'title': 'Test Title'})]\n"
"[Document(page_content='\\nTest Title\\n\\n\\nMy First Heading\\nMy first paragraph.\\n\\n\\n', metadata={'source': '../../docs/integrations/document_loaders/example_data/fake-content.html', 'title': 'Test Title'})]\n"
]
}
],
@@ -111,7 +111,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
"version": "3.10.5"
}
},
"nbformat": 4,

View File

@@ -21,12 +21,12 @@
},
{
"cell_type": "code",
"execution_count": 19,
"execution_count": null,
"id": "c8b147fb-6877-4f7a-b2ee-ee971c7bc662",
"metadata": {},
"outputs": [],
"source": [
"# !pip install \"unstructured[md]\""
"%pip install \"unstructured[md]\""
]
},
{
@@ -39,7 +39,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 4,
"id": "80c50cc4-7ce9-4418-81b9-29c52c7b3627",
"metadata": {},
"outputs": [
@@ -62,7 +62,7 @@
"from langchain_community.document_loaders import UnstructuredMarkdownLoader\n",
"from langchain_core.documents import Document\n",
"\n",
"markdown_path = \"../../../../README.md\"\n",
"markdown_path = \"../../../README.md\"\n",
"loader = UnstructuredMarkdownLoader(markdown_path)\n",
"\n",
"data = loader.load()\n",
@@ -84,7 +84,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 5,
"id": "a986bbce-7fd3-41d1-bc47-49f9f57c7cd1",
"metadata": {},
"outputs": [
@@ -92,11 +92,11 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Number of documents: 65\n",
"Number of documents: 66\n",
"\n",
"page_content='🦜️🔗 LangChain' metadata={'source': '../../../../README.md', 'last_modified': '2024-04-29T13:40:19', 'page_number': 1, 'languages': ['eng'], 'filetype': 'text/markdown', 'file_directory': '../../../..', 'filename': 'README.md', 'category': 'Title'}\n",
"page_content='🦜️🔗 LangChain' metadata={'source': '../../../README.md', 'category_depth': 0, 'last_modified': '2024-06-28T15:20:01', 'languages': ['eng'], 'filetype': 'text/markdown', 'file_directory': '../../..', 'filename': 'README.md', 'category': 'Title'}\n",
"\n",
"page_content='⚡ Build context-aware reasoning applications ⚡' metadata={'source': '../../../../README.md', 'last_modified': '2024-04-29T13:40:19', 'page_number': 1, 'languages': ['eng'], 'parent_id': 'c3223b6f7100be08a78f1e8c0c28fde1', 'filetype': 'text/markdown', 'file_directory': '../../../..', 'filename': 'README.md', 'category': 'NarrativeText'}\n",
"page_content='⚡ Build context-aware reasoning applications ⚡' metadata={'source': '../../../README.md', 'last_modified': '2024-06-28T15:20:01', 'languages': ['eng'], 'parent_id': '200b8a7d0dd03f66e4f13456566d2b3a', 'filetype': 'text/markdown', 'file_directory': '../../..', 'filename': 'README.md', 'category': 'NarrativeText'}\n",
"\n"
]
}
@@ -121,7 +121,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 6,
"id": "75abc139-3ded-4e8e-9f21-d0c8ec40fdfc",
"metadata": {},
"outputs": [
@@ -129,13 +129,21 @@
"name": "stdout",
"output_type": "stream",
"text": [
"{'Title', 'NarrativeText', 'ListItem'}\n"
"{'ListItem', 'NarrativeText', 'Title'}\n"
]
}
],
"source": [
"print(set(document.metadata[\"category\"] for document in data))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "223b4c11",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@@ -154,7 +162,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
"version": "3.10.5"
}
},
"nbformat": 4,

File diff suppressed because one or more lines are too long

View File

@@ -1,15 +1,5 @@
{
"cells": [
{
"cell_type": "raw",
"id": "77bf57fb-e990-45f2-8b5f-c76388b05966",
"metadata": {},
"source": [
"---\n",
"keywords: [LCEL]\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "50d57bf2-7104-4570-b3e5-90fd71e1bea1",

View File

@@ -75,6 +75,31 @@ Otherwise you can initialize without any params:
from langchain_cohere import CohereEmbeddings
embeddings_model = CohereEmbeddings()
```
</TabItem>
<TabItem value="huggingface" label="Hugging Face">
To start we'll need to install the Hugging Face partner package:
```bash
pip install langchain-huggingface
```
You can then load any [Sentence Transformers model](https://huggingface.co/models?library=sentence-transformers) from the Hugging Face Hub.
```python
from langchain_huggingface import HuggingFaceEmbeddings
embeddings_model = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")
```
You can also leave the `model_name` blank to use the default [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) model.
```python
from langchain_huggingface import HuggingFaceEmbeddings
embeddings_model = HuggingFaceEmbeddings()
```
</TabItem>
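Whichever provider tab is selected, the resulting `embeddings_model` exposes the same interface; for example:

```python
# Embed a batch of documents and a single query (provider-agnostic interface).
vectors = embeddings_model.embed_documents(
    ["Hi there!", "Oh, hello!", "What's your name?"]
)
query_vector = embeddings_model.embed_query("What was the name mentioned earlier?")

print(len(vectors), len(vectors[0]))  # number of documents, embedding dimension
```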

View File

@@ -4,13 +4,17 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to create an Ensemble Retriever\n",
"# How to combine results from multiple retrievers\n",
"\n",
"The `EnsembleRetriever` takes a list of retrievers as input and ensemble the results of their `get_relevant_documents()` methods and rerank the results based on the [Reciprocal Rank Fusion](https://plg.uwaterloo.ca/~gvcormac/cormacksigir09-rrf.pdf) algorithm.\n",
"The [EnsembleRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.ensemble.EnsembleRetriever.html) supports ensembling of results from multiple retrievers. It is initialized with a list of [BaseRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain_core.retrievers.BaseRetriever.html) objects. EnsembleRetrievers rerank the results of the constituent retrievers based on the [Reciprocal Rank Fusion](https://plg.uwaterloo.ca/~gvcormac/cormacksigir09-rrf.pdf) algorithm.\n",
"\n",
"By leveraging the strengths of different algorithms, the `EnsembleRetriever` can achieve better performance than any single algorithm. \n",
"\n",
"The most common pattern is to combine a sparse retriever (like BM25) with a dense retriever (like embedding similarity), because their strengths are complementary. It is also known as \"hybrid search\". The sparse retriever is good at finding relevant documents based on keywords, while the dense retriever is good at finding relevant documents based on semantic similarity."
"The most common pattern is to combine a sparse retriever (like BM25) with a dense retriever (like embedding similarity), because their strengths are complementary. It is also known as \"hybrid search\". The sparse retriever is good at finding relevant documents based on keywords, while the dense retriever is good at finding relevant documents based on semantic similarity.\n",
"\n",
"## Basic usage\n",
"\n",
"Below we demonstrate ensembling of a [BM25Retriever](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.bm25.BM25Retriever.html) with a retriever derived from the [FAISS vector store](https://api.python.langchain.com/en/latest/vectorstores/langchain_community.vectorstores.faiss.FAISS.html)."
]
},
{
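The Reciprocal Rank Fusion step itself isn't shown in the notebook, so here is a tiny illustrative sketch of the scoring idea in plain Python (using the customary constant k=60; this is not the `EnsembleRetriever`'s actual implementation):

```python
from collections import defaultdict


def reciprocal_rank_fusion(ranked_lists, weights=None, k=60):
    """Combine several ranked lists of document ids into one fused ranking."""
    weights = weights or [1.0] * len(ranked_lists)
    scores = defaultdict(float)
    for ranking, weight in zip(ranked_lists, weights):
        for rank, doc_id in enumerate(ranking, start=1):
            # Documents ranked highly by any retriever get a larger contribution.
            scores[doc_id] += weight / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)


# Example: a keyword (BM25) ranking combined with a dense (embedding) ranking.
bm25_ranking = ["doc_apples", "doc_fruit", "doc_oranges"]
dense_ranking = ["doc_fruit", "doc_apples", "doc_citrus"]
print(reciprocal_rank_fusion([bm25_ranking, dense_ranking], weights=[0.5, 0.5]))
```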
@@ -24,22 +28,15 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"from langchain.retrievers import EnsembleRetriever\n",
"from langchain_community.retrievers import BM25Retriever\n",
"from langchain_community.vectorstores import FAISS\n",
"from langchain_openai import OpenAIEmbeddings"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [],
"source": [
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"doc_list_1 = [\n",
" \"I like apples\",\n",
" \"I like oranges\",\n",
@@ -71,19 +68,19 @@
},
{
"cell_type": "code",
"execution_count": 15,
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='You like apples', metadata={'source': 2}),\n",
" Document(page_content='I like apples', metadata={'source': 1}),\n",
" Document(page_content='You like oranges', metadata={'source': 2}),\n",
" Document(page_content='Apples and oranges are fruits', metadata={'source': 1})]"
"[Document(page_content='I like apples', metadata={'source': 1}),\n",
" Document(page_content='You like apples', metadata={'source': 2}),\n",
" Document(page_content='Apples and oranges are fruits', metadata={'source': 1}),\n",
" Document(page_content='You like oranges', metadata={'source': 2})]"
]
},
"execution_count": 15,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
@@ -99,24 +96,17 @@
"source": [
"## Runtime Configuration\n",
"\n",
"We can also configure the retrievers at runtime. In order to do this, we need to mark the fields as configurable"
"We can also configure the individual retrievers at runtime using [configurable fields](/docs/how_to/configure). Below we update the \"top-k\" parameter for the FAISS retriever specifically:"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.runnables import ConfigurableField"
]
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.runnables import ConfigurableField\n",
"\n",
"faiss_retriever = faiss_vectorstore.as_retriever(\n",
" search_kwargs={\"k\": 2}\n",
").configurable_fields(\n",
@@ -125,15 +115,8 @@
" name=\"Search Kwargs\",\n",
" description=\"The search kwargs to use\",\n",
" )\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [],
"source": [
")\n",
"\n",
"ensemble_retriever = EnsembleRetriever(\n",
" retrievers=[bm25_retriever, faiss_retriever], weights=[0.5, 0.5]\n",
")"
@@ -141,9 +124,22 @@
},
{
"cell_type": "code",
"execution_count": 19,
"execution_count": 6,
"metadata": {},
"outputs": [],
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='I like apples', metadata={'source': 1}),\n",
" Document(page_content='You like apples', metadata={'source': 2}),\n",
" Document(page_content='Apples and oranges are fruits', metadata={'source': 1})]"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"config = {\"configurable\": {\"search_kwargs_faiss\": {\"k\": 1}}}\n",
"docs = ensemble_retriever.invoke(\"apples\", config=config)\n",
@@ -181,7 +177,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -60,7 +60,7 @@
"source": [
"examples = [\n",
" {\"input\": \"hi\", \"output\": \"ciao\"},\n",
" {\"input\": \"bye\", \"output\": \"arrivaderci\"},\n",
" {\"input\": \"bye\", \"output\": \"arrivederci\"},\n",
" {\"input\": \"soccer\", \"output\": \"calcio\"},\n",
"]"
]
@@ -133,7 +133,7 @@
{
"data": {
"text/plain": [
"[{'input': 'bye', 'output': 'arrivaderci'}]"
"[{'input': 'bye', 'output': 'arrivederci'}]"
]
},
"execution_count": 39,
@@ -209,7 +209,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Translate the following words from English to Italain:\n",
"Translate the following words from English to Italian:\n",
"\n",
"Input: hand -> Output: mano\n",
"\n",
@@ -222,7 +222,7 @@
" example_selector=example_selector,\n",
" example_prompt=example_prompt,\n",
" suffix=\"Input: {input} -> Output:\",\n",
" prefix=\"Translate the following words from English to Italain:\",\n",
" prefix=\"Translate the following words from English to Italian:\",\n",
" input_variables=[\"input\"],\n",
")\n",
"\n",

View File

@@ -128,7 +128,7 @@
" # Having a good description can help improve extraction results.\n",
" name: Optional[str] = Field(..., description=\"The name of the person\")\n",
" hair_color: Optional[str] = Field(\n",
" ..., description=\"The color of the peron's eyes if known\"\n",
" ..., description=\"The color of the person's hair if known\"\n",
" )\n",
" height_in_meters: Optional[str] = Field(..., description=\"Height in METERs\")\n",
"\n",
@@ -246,11 +246,11 @@
"examples = [\n",
" (\n",
" \"The ocean is vast and blue. It's more than 20,000 feet deep. There are many fish in it.\",\n",
" Person(name=None, height_in_meters=None, hair_color=None),\n",
" Data(people=[]),\n",
" ),\n",
" (\n",
" \"Fiona traveled far from France to Spain.\",\n",
" Person(name=\"Fiona\", height_in_meters=None, hair_color=None),\n",
" Data(people=[Person(name=\"Fiona\", height_in_meters=None, hair_color=None)]),\n",
" ),\n",
"]\n",
"\n",

View File

@@ -23,7 +23,7 @@
"- [Prompt templates](/docs/concepts/#prompt-templates)\n",
"- [Example selectors](/docs/concepts/#example-selectors)\n",
"- [LLMs](/docs/concepts/#llms)\n",
"- [Vectorstores](/docs/concepts/#vectorstores)\n",
"- [Vectorstores](/docs/concepts/#vector-stores)\n",
"\n",
":::\n",
"\n",

View File

@@ -23,7 +23,7 @@
"- [Prompt templates](/docs/concepts/#prompt-templates)\n",
"- [Example selectors](/docs/concepts/#example-selectors)\n",
"- [Chat models](/docs/concepts/#chat-model)\n",
"- [Vectorstores](/docs/concepts/#vectorstores)\n",
"- [Vectorstores](/docs/concepts/#vector-stores)\n",
"\n",
":::\n",
"\n",
@@ -51,7 +51,7 @@
"- `examples`: A list of dictionary examples to include in the final prompt.\n",
"- `example_prompt`: converts each example into 1 or more messages through its [`format_messages`](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html?highlight=format_messages#langchain_core.prompts.chat.ChatPromptTemplate.format_messages) method. A common example would be to convert each example into one human message and one AI message response, or a human message followed by a function call message.\n",
"\n",
"Below is a simple demonstration. First, define the examples you'd like to include:"
"Below is a simple demonstration. First, define the examples you'd like to include. Let's give the LLM an unfamiliar mathematical operator, denoted by the \"🦜\" emoji:"
]
},
{
@@ -59,17 +59,7 @@
"execution_count": 1,
"id": "5b79e400",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33mWARNING: You are using pip version 22.0.4; however, version 24.0 is available.\n",
"You should consider upgrading via the '/Users/jacoblee/.pyenv/versions/3.10.5/bin/python -m pip install --upgrade pip' command.\u001b[0m\u001b[33m\n",
"\u001b[0mNote: you may need to restart the kernel to use updated packages.\n"
]
}
],
"outputs": [],
"source": [
"%pip install -qU langchain langchain-openai langchain-chroma\n",
"\n",
@@ -79,9 +69,50 @@
"os.environ[\"OPENAI_API_KEY\"] = getpass()"
]
},
{
"cell_type": "markdown",
"id": "30856d92",
"metadata": {},
"source": [
"If we try to ask the model what the result of this expression is, it will fail:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 4,
"id": "174dec5b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='The expression \"2 🦜 9\" is not a standard mathematical operation or equation. It appears to be a combination of the number 2 and the parrot emoji 🦜 followed by the number 9. It does not have a specific mathematical meaning.', response_metadata={'token_usage': {'completion_tokens': 54, 'prompt_tokens': 17, 'total_tokens': 71}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-aad12dda-5c47-4a1e-9949-6fe94e03242a-0', usage_metadata={'input_tokens': 17, 'output_tokens': 54, 'total_tokens': 71})"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_openai import ChatOpenAI\n",
"\n",
"model = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0.0)\n",
"\n",
"model.invoke(\"What is 2 🦜 9?\")"
]
},
{
"cell_type": "markdown",
"id": "e6d58385",
"metadata": {},
"source": [
"Now let's see what happens if we give the LLM some examples to work with. We'll define some below:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "0fc5a02a-6249-4e92-95c3-30fff9671e8b",
"metadata": {
"tags": []
@@ -91,8 +122,8 @@
"from langchain_core.prompts import ChatPromptTemplate, FewShotChatMessagePromptTemplate\n",
"\n",
"examples = [\n",
" {\"input\": \"2+2\", \"output\": \"4\"},\n",
" {\"input\": \"2+3\", \"output\": \"5\"},\n",
" {\"input\": \"2 🦜 2\", \"output\": \"4\"},\n",
" {\"input\": \"2 🦜 3\", \"output\": \"5\"},\n",
"]"
]
},
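The cells that assemble these examples into a prompt are unchanged in this diff and therefore not shown; roughly, the pieces fit together like this (the system message wording is an assumption, while `examples` and `model` come from the cells above):

```python
from langchain_core.prompts import ChatPromptTemplate, FewShotChatMessagePromptTemplate

# Each example dict is rendered as a human message followed by an AI message.
example_prompt = ChatPromptTemplate.from_messages(
    [
        ("human", "{input}"),
        ("ai", "{output}"),
    ]
)

few_shot_prompt = FewShotChatMessagePromptTemplate(
    example_prompt=example_prompt,
    examples=examples,  # the "🦜" examples defined above
)

final_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a wondrous wizard of math."),
        few_shot_prompt,
        ("human", "{input}"),
    ]
)

chain = final_prompt | model
chain.invoke({"input": "What is 2 🦜 9?"})  # the examples let the model infer "🦜" means addition
```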
@@ -106,7 +137,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 6,
"id": "65e72ad1-9060-47d0-91a1-bc130c8b98ac",
"metadata": {
"tags": []
@@ -116,7 +147,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"[HumanMessage(content='2+2'), AIMessage(content='4'), HumanMessage(content='2+3'), AIMessage(content='5')]\n"
"[HumanMessage(content='2 🦜 2'), AIMessage(content='4'), HumanMessage(content='2 🦜 3'), AIMessage(content='5')]\n"
]
}
],
@@ -146,7 +177,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 7,
"id": "9f86d6d9-50de-41b6-b6c7-0f9980cc0187",
"metadata": {
"tags": []
@@ -162,9 +193,17 @@
")"
]
},
{
"cell_type": "markdown",
"id": "dd8029c5",
"metadata": {},
"source": [
"And now let's ask the model the initial question and see how it does:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 8,
"id": "97d443b1-6fae-4b36-bede-3ff7306288a3",
"metadata": {
"tags": []
@@ -173,10 +212,10 @@
{
"data": {
"text/plain": [
"AIMessage(content='A triangle does not have a square. The square of a number is the result of multiplying the number by itself.', response_metadata={'token_usage': {'completion_tokens': 23, 'prompt_tokens': 52, 'total_tokens': 75}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-3456c4ef-7b4d-4adb-9e02-8079de82a47a-0')"
"AIMessage(content='11', response_metadata={'token_usage': {'completion_tokens': 1, 'prompt_tokens': 60, 'total_tokens': 61}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-5ec4e051-262f-408e-ad00-3f2ebeb561c3-0', usage_metadata={'input_tokens': 60, 'output_tokens': 1, 'total_tokens': 61})"
]
},
"execution_count": 5,
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
@@ -184,9 +223,9 @@
"source": [
"from langchain_openai import ChatOpenAI\n",
"\n",
"chain = final_prompt | ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0.0)\n",
"chain = final_prompt | model\n",
"\n",
"chain.invoke({\"input\": \"What's the square of a triangle?\"})"
"chain.invoke({\"input\": \"What is 2 🦜 9?\"})"
]
},
{
@@ -194,6 +233,8 @@
"id": "70ab7114-f07f-46be-8874-3705a25aba5f",
"metadata": {},
"source": [
"And we can see that the model has now inferred that the parrot emoji means addition from the given few-shot examples!\n",
"\n",
"## Dynamic few-shot prompting\n",
"\n",
"Sometimes you may want to select only a few examples from your overall set to show based on the input. For this, you can replace the `examples` passed into `FewShotChatMessagePromptTemplate` with an `example_selector`. The other components remain the same as above! Our dynamic few-shot prompt template would look like:\n",
@@ -208,7 +249,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 9,
"id": "ad66f06a-66fd-4fcc-8166-5d0e3c801e57",
"metadata": {
"tags": []
@@ -220,9 +261,9 @@
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"examples = [\n",
" {\"input\": \"2+2\", \"output\": \"4\"},\n",
" {\"input\": \"2+3\", \"output\": \"5\"},\n",
" {\"input\": \"2+4\", \"output\": \"6\"},\n",
" {\"input\": \"2 🦜 2\", \"output\": \"4\"},\n",
" {\"input\": \"2 🦜 3\", \"output\": \"5\"},\n",
" {\"input\": \"2 🦜 4\", \"output\": \"6\"},\n",
" {\"input\": \"What did the cow say to the moon?\", \"output\": \"nothing at all\"},\n",
" {\n",
" \"input\": \"Write me a poem about the moon\",\n",
@@ -247,7 +288,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 10,
"id": "7790303a-f722-452e-8921-b14bdf20bdff",
"metadata": {
"tags": []
@@ -257,10 +298,10 @@
"data": {
"text/plain": [
"[{'input': 'What did the cow say to the moon?', 'output': 'nothing at all'},\n",
" {'input': '2+4', 'output': '6'}]"
" {'input': '2 🦜 4', 'output': '6'}]"
]
},
"execution_count": 7,
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
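The construction of `example_selector` sits in cells outside this diff; one way such a selector can be built, assuming the Chroma vector store and OpenAI embeddings installed above and the extended `examples` list from the previous hunk:

```python
from langchain_chroma import Chroma
from langchain_core.example_selectors import SemanticSimilarityExampleSelector
from langchain_openai import OpenAIEmbeddings

# Index the example inputs/outputs and pick the k most similar ones at prompt time.
example_selector = SemanticSimilarityExampleSelector.from_examples(
    examples,
    OpenAIEmbeddings(),
    Chroma,
    k=2,
)

example_selector.select_examples({"input": "horse"})
```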
@@ -287,7 +328,7 @@
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": 11,
"id": "253c255e-41d7-45f6-9d88-c7a0ced4b1bd",
"metadata": {
"tags": []
@@ -297,7 +338,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"[HumanMessage(content='2+3'), AIMessage(content='5'), HumanMessage(content='2+2'), AIMessage(content='4')]\n"
"[HumanMessage(content='2 🦜 3'), AIMessage(content='5'), HumanMessage(content='2 🦜 4'), AIMessage(content='6')]\n"
]
}
],
@@ -317,7 +358,7 @@
" ),\n",
")\n",
"\n",
"print(few_shot_prompt.invoke(input=\"What's 3+3?\").to_messages())"
"print(few_shot_prompt.invoke(input=\"What's 3 🦜 3?\").to_messages())"
]
},
{
@@ -330,7 +371,7 @@
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": 12,
"id": "e731cb45-f0ea-422c-be37-42af2a6cb2c4",
"metadata": {
"tags": []
@@ -340,7 +381,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"messages=[HumanMessage(content='2+3'), AIMessage(content='5'), HumanMessage(content='2+2'), AIMessage(content='4')]\n"
"messages=[HumanMessage(content='2 🦜 3'), AIMessage(content='5'), HumanMessage(content='2 🦜 4'), AIMessage(content='6')]\n"
]
}
],
@@ -353,7 +394,7 @@
" ]\n",
")\n",
"\n",
"print(few_shot_prompt.invoke(input=\"What's 3+3?\"))"
"print(few_shot_prompt.invoke(input=\"What's 3 🦜 3?\"))"
]
},
{
@@ -368,7 +409,7 @@
},
{
"cell_type": "code",
"execution_count": 18,
"execution_count": 13,
"id": "0568cbc6-5354-47f1-ab4d-dfcc616cf583",
"metadata": {
"tags": []
@@ -377,10 +418,10 @@
{
"data": {
"text/plain": [
"AIMessage(content='6', response_metadata={'token_usage': {'completion_tokens': 1, 'prompt_tokens': 51, 'total_tokens': 52}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-6bcbe158-a8e3-4a85-a754-1ba274a9f147-0')"
"AIMessage(content='6', response_metadata={'token_usage': {'completion_tokens': 1, 'prompt_tokens': 60, 'total_tokens': 61}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-d1863e5e-17cd-4e9d-bf7a-b9f118747a65-0', usage_metadata={'input_tokens': 60, 'output_tokens': 1, 'total_tokens': 61})"
]
},
"execution_count": 18,
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
@@ -388,7 +429,7 @@
"source": [
"chain = final_prompt | ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0.0)\n",
"\n",
"chain.invoke({\"input\": \"What's 3+3?\"})"
"chain.invoke({\"input\": \"What's 3 🦜 3?\"})"
]
},
{
@@ -428,7 +469,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.5"
}
},
"nbformat": 4,

View File

@@ -0,0 +1,203 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "e389175d-8a65-4f0d-891c-dbdfabb3c3ef",
"metadata": {},
"source": [
"# How to filter messages\n",
"\n",
"In more complex chains and agents we might track state with a list of messages. This list can start to accumulate messages from multiple different models, speakers, sub-chains, etc., and we may only want to pass subsets of this full list of messages to each model call in the chain/agent.\n",
"\n",
"The `filter_messages` utility makes it easy to filter messages by type, id, or name.\n",
"\n",
"## Basic usage"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "f4ad2fd3-3cab-40d4-a989-972115865b8b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessage(content='example input', name='example_user', id='2'),\n",
" HumanMessage(content='real input', name='bob', id='4')]"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.messages import (\n",
" AIMessage,\n",
" HumanMessage,\n",
" SystemMessage,\n",
" filter_messages,\n",
")\n",
"\n",
"messages = [\n",
" SystemMessage(\"you are a good assistant\", id=\"1\"),\n",
" HumanMessage(\"example input\", id=\"2\", name=\"example_user\"),\n",
" AIMessage(\"example output\", id=\"3\", name=\"example_assistant\"),\n",
" HumanMessage(\"real input\", id=\"4\", name=\"bob\"),\n",
" AIMessage(\"real output\", id=\"5\", name=\"alice\"),\n",
"]\n",
"\n",
"filter_messages(messages, include_types=\"human\")"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "7b663a1e-a8ae-453e-a072-8dd75dfab460",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[SystemMessage(content='you are a good assistant', id='1'),\n",
" HumanMessage(content='real input', name='bob', id='4'),\n",
" AIMessage(content='real output', name='alice', id='5')]"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"filter_messages(messages, exclude_names=[\"example_user\", \"example_assistant\"])"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "db170e46-03f8-4710-b967-23c70c3ac054",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessage(content='example input', name='example_user', id='2'),\n",
" HumanMessage(content='real input', name='bob', id='4'),\n",
" AIMessage(content='real output', name='alice', id='5')]"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"filter_messages(messages, include_types=[HumanMessage, AIMessage], exclude_ids=[\"3\"])"
]
},
{
"cell_type": "markdown",
"id": "b7c4e5ad-d1b4-4c18-b250-864adde8f0dd",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"`filter_messages` can be used in an imperatively (like above) or declaratively, making it easy to compose with other components in a chain:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "675f8f79-db39-401c-a582-1df2478cba30",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=[], response_metadata={'id': 'msg_01Wz7gBHahAwkZ1KCBNtXmwA', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 3}}, id='run-b5d8a3fe-004f-4502-a071-a6c025031827-0', usage_metadata={'input_tokens': 16, 'output_tokens': 3, 'total_tokens': 19})"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# pip install -U langchain-anthropic\n",
"from langchain_anthropic import ChatAnthropic\n",
"\n",
"llm = ChatAnthropic(model=\"claude-3-sonnet-20240229\", temperature=0)\n",
"# Notice we don't pass in messages. This creates\n",
"# a RunnableLambda that takes messages as input\n",
"filter_ = filter_messages(exclude_names=[\"example_user\", \"example_assistant\"])\n",
"chain = filter_ | llm\n",
"chain.invoke(messages)"
]
},
{
"cell_type": "markdown",
"id": "4133ab28-f49c-480f-be92-b51eb6559153",
"metadata": {},
"source": [
"Looking at the LangSmith trace we can see that before the messages are passed to the model they are filtered: https://smith.langchain.com/public/f808a724-e072-438e-9991-657cc9e7e253/r\n",
"\n",
"Looking at just the filter_, we can see that it's a Runnable object that can be invoked like all Runnables:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "c090116a-1fef-43f6-a178-7265dff9db00",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessage(content='real input', name='bob', id='4'),\n",
" AIMessage(content='real output', name='alice', id='5')]"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"filter_.invoke(messages)"
]
},
{
"cell_type": "markdown",
"id": "ff339066-d424-4042-8cca-cd4b007c1a8e",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For a complete description of all arguments head to the API reference: https://api.python.langchain.com/en/latest/messages/langchain_core.messages.utils.filter_messages.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv-2",
"language": "python",
"name": "poetry-venv-2"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -300,7 +300,11 @@
"id": "922b48bd",
"metadata": {},
"source": [
"# Streaming\n",
"## Streaming\n",
"\n",
":::{.callout-note}\n",
"[RunnableLambda](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableLambda.html) is best suited for code that does not need to support streaming. If you need to support streaming (i.e., be able to operate on chunks of inputs and yield chunks of outputs), use [RunnableGenerator](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableGenerator.html) instead as in the example below.\n",
":::\n",
"\n",
"You can use generator functions (ie. functions that use the `yield` keyword, and behave like iterators) in a chain.\n",
"\n",

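The example the callout refers to sits outside this hunk; a minimal sketch of the streaming-friendly pattern it describes is below (the comma-splitting logic is illustrative, not the notebook's exact code):

```python
from typing import Iterator, List

from langchain_core.runnables import RunnableGenerator


def split_into_list(chunks: Iterator[str]) -> Iterator[List[str]]:
    """Operate on chunks as they arrive and yield partial results immediately."""
    buffer = ""
    for chunk in chunks:
        buffer += chunk
        while "," in buffer:
            item, buffer = buffer.split(",", 1)
            yield [item.strip()]
    if buffer.strip():
        yield [buffer.strip()]


streaming_parser = RunnableGenerator(split_into_list)

# The generator sees input chunks one at a time, so downstream steps can start
# consuming output before the full input has been produced.
for part in streaming_parser.stream("red, orange, yellow, green"):
    print(part)
```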
View File

@@ -14,13 +14,14 @@ For comprehensive descriptions of every class and function see the [API Referenc
## Installation
- [How to: install LangChain packages](/docs/how_to/installation/)
- [How to: use LangChain with different Pydantic versions](/docs/how_to/pydantic_compatibility)
## Key features
This highlights functionality that is core to using LangChain.
- [How to: return structured data from a model](/docs/how_to/structured_output/)
- [How to: use a model to call tools](/docs/how_to/tool_calling/)
- [How to: use a model to call tools](/docs/how_to/tool_calling)
- [How to: stream runnables](/docs/how_to/streaming)
- [How to: debug your LLM apps](/docs/how_to/debugging/)
@@ -42,6 +43,7 @@ This highlights functionality that is core to using LangChain.
- [How to: create a dynamic (self-constructing) chain](/docs/how_to/dynamic_chain/)
- [How to: inspect runnables](/docs/how_to/inspect)
- [How to: add fallbacks to a runnable](/docs/how_to/fallbacks)
- [How to: migrate chains to LCEL](/docs/how_to/migrate_chains)
## Components
@@ -49,7 +51,7 @@ These are the core building blocks you can use when building applications.
### Prompt templates
Prompt Templates are responsible for formatting user input into a format that can be passed to a language model.
[Prompt Templates](/docs/concepts/#prompt-templates) are responsible for formatting user input into a format that can be passed to a language model.
- [How to: use few shot examples](/docs/how_to/few_shot_examples)
- [How to: use few shot examples in chat models](/docs/how_to/few_shot_examples_chat/)
@@ -58,7 +60,7 @@ Prompt Templates are responsible for formatting user input into a format that ca
### Example selectors
Example Selectors are responsible for selecting the correct few shot examples to pass to the prompt.
[Example Selectors](/docs/concepts/#example-selectors) are responsible for selecting the correct few shot examples to pass to the prompt.
- [How to: use example selectors](/docs/how_to/example_selectors)
- [How to: select examples by length](/docs/how_to/example_selectors_length_based)
@@ -68,7 +70,7 @@ Example Selectors are responsible for selecting the correct few shot examples to
### Chat models
Chat Models are newer forms of language models that take messages in and output a message.
[Chat Models](/docs/concepts/#chat-models) are newer forms of language models that take messages in and output a message.
- [How to: do function/tool calling](/docs/how_to/tool_calling)
- [How to: get models to return structured output](/docs/how_to/structured_output)
@@ -78,10 +80,25 @@ Chat Models are newer forms of language models that take messages in and output
- [How to: stream a response back](/docs/how_to/chat_streaming)
- [How to: track token usage](/docs/how_to/chat_token_usage_tracking)
- [How to: track response metadata across providers](/docs/how_to/response_metadata)
- [How to: let your end users choose their model](/docs/how_to/chat_models_universal_init/)
- [How to: use chat model to call tools](/docs/how_to/tool_calling)
- [How to: stream tool calls](/docs/how_to/tool_streaming)
- [How to: few shot prompt tool behavior](/docs/how_to/tools_few_shot)
- [How to: bind model-specific formatted tools](/docs/how_to/tools_model_specific)
- [How to: force specific tool call](/docs/how_to/tool_choice)
- [How to: init any model in one line](/docs/how_to/chat_models_universal_init/)
### Messages
[Messages](/docs/concepts/#messages) are the input and output of chat models. They have some `content` and a `role`, which describes the source of the message.
- [How to: trim messages](/docs/how_to/trim_messages/)
- [How to: filter messages](/docs/how_to/filter_messages/)
- [How to: merge consecutive messages of the same type](/docs/how_to/merge_message_runs/)
### LLMs
What LangChain calls LLMs are older forms of language models that take a string in and output a string.
What LangChain calls [LLMs](/docs/concepts/#llms) are older forms of language models that take a string in and output a string.
- [How to: cache model responses](/docs/how_to/llm_caching)
- [How to: create a custom LLM class](/docs/how_to/custom_llm)
@@ -91,7 +108,7 @@ What LangChain calls LLMs are older forms of language models that take a string
### Output parsers
Output Parsers are responsible for taking the output of an LLM and parsing into more structured format.
[Output Parsers](/docs/concepts/#output-parsers) are responsible for taking the output of an LLM and parsing into more structured format.
- [How to: use output parsers to parse an LLM response into structured format](/docs/how_to/output_parser_structured)
- [How to: parse JSON output](/docs/how_to/output_parser_json)
@@ -103,7 +120,7 @@ Output Parsers are responsible for taking the output of an LLM and parsing into
### Document loaders
Document Loaders are responsible for loading documents from a variety of sources.
[Document Loaders](/docs/concepts/#document-loaders) are responsible for loading documents from a variety of sources.
- [How to: load CSV data](/docs/how_to/document_loader_csv)
- [How to: load data from a directory](/docs/how_to/document_loader_directory)
@@ -116,7 +133,7 @@ Document Loaders are responsible for loading documents from a variety of sources
### Text splitters
Text Splitters take a document and split into chunks that can be used for retrieval.
[Text Splitters](/docs/concepts/#text-splitters) take a document and split into chunks that can be used for retrieval.
- [How to: recursively split text](/docs/how_to/recursive_text_splitter)
- [How to: split by HTML headers](/docs/how_to/HTML_header_metadata_splitter)
@@ -130,20 +147,20 @@ Text Splitters take a document and split into chunks that can be used for retrie
### Embedding models
Embedding Models take a piece of text and create a numerical representation of it.
[Embedding Models](/docs/concepts/#embedding-models) take a piece of text and create a numerical representation of it.
- [How to: embed text data](/docs/how_to/embed_text)
- [How to: cache embedding results](/docs/how_to/caching_embeddings)
### Vector stores
Vector stores are databases that can efficiently store and retrieve embeddings.
[Vector stores](/docs/concepts/#vector-stores) are databases that can efficiently store and retrieve embeddings.
- [How to: use a vector store to retrieve data](/docs/how_to/vectorstores)
### Retrievers
Retrievers are responsible for taking a query and returning relevant documents.
[Retrievers](/docs/concepts/#retrievers) are responsible for taking a query and returning relevant documents.
- [How to: use a vector store to retrieve data](/docs/how_to/vectorstore_retriever)
- [How to: generate multiple queries to retrieve data for](/docs/how_to/MultiQueryRetriever)
@@ -151,7 +168,7 @@ Retrievers are responsible for taking a query and returning relevant documents.
- [How to: write a custom retriever class](/docs/how_to/custom_retriever)
- [How to: add similarity scores to retriever results](/docs/how_to/add_scores_retriever)
- [How to: combine the results from multiple retrievers](/docs/how_to/ensemble_retriever)
- [How to: reorder retrieved results to put most relevant documents not in the middle](/docs/how_to/long_context_reorder)
- [How to: reorder retrieved results to mitigate the "lost in the middle" effect](/docs/how_to/long_context_reorder)
- [How to: generate multiple embeddings per document](/docs/how_to/multi_vector)
- [How to: retrieve the whole document for a chunk](/docs/how_to/parent_document_retriever)
- [How to: generate metadata filters](/docs/how_to/self_query)
@@ -166,21 +183,29 @@ Indexing is the process of keeping your vectorstore in-sync with the underlying
### Tools
LangChain Tools contain a description of the tool (to pass to the language model) as well as the implementation of the function to call).
LangChain [Tools](/docs/concepts/#tools) contain a description of the tool (to pass to the language model) as well as the implementation of the function to call. Refer [here](/docs/integrations/tools/) for a list of pre-built tools.
- [How to: create custom tools](/docs/how_to/custom_tools)
- [How to: use built-in tools and built-in toolkits](/docs/how_to/tools_builtin)
- [How to: use a chat model to call tools](/docs/how_to/tool_calling/)
- [How to: use chat model to call tools](/docs/how_to/tool_calling)
- [How to: pass tool results back to model](/docs/how_to/tool_results_pass_to_model)
- [How to: add ad-hoc tool calling capability to LLMs and chat models](/docs/how_to/tools_prompting)
- [How to: pass run time values to tools](/docs/how_to/tool_runtime)
- [How to: add a human in the loop to tool usage](/docs/how_to/tools_human)
- [How to: handle errors when calling tools](/docs/how_to/tools_error)
- [How to: call tools using multi-modal data](/docs/how_to/tool_calls_multi_modal)
- [How to: disable parallel tool calling](/docs/how_to/tool_choice)
### Multimodal
- [How to: pass multimodal data directly to models](/docs/how_to/multimodal_inputs/)
- [How to: use multimodal prompts](/docs/how_to/multimodal_prompts/)
### Agents
:::note
For in depth how-to guides for agents, please check out [LangGraph](https://github.com/langchain-ai/langgraph) documentation.
For in depth how-to guides for agents, please check out [LangGraph](https://langchain-ai.github.io/langgraph/) documentation.
:::
@@ -189,6 +214,8 @@ For in depth how-to guides for agents, please check out [LangGraph](https://gith
### Callbacks
[Callbacks](/docs/concepts/#callbacks) allow you to hook into the various stages of your LLM application's execution.
- [How to: pass in callbacks at runtime](/docs/how_to/callbacks_runtime)
- [How to: attach callbacks to a module](/docs/how_to/callbacks_attach)
- [How to: pass callbacks into a module constructor](/docs/how_to/callbacks_constructor)
@@ -207,6 +234,8 @@ All of LangChain components can easily be extended to support your own versions.
- [How to: create custom callback handlers](/docs/how_to/custom_callbacks)
- [How to: define a custom tool](/docs/how_to/custom_tools)
### Serialization
- [How to: save and load LangChain objects](/docs/how_to/serialization)
## Use cases
@@ -215,6 +244,7 @@ These guides cover use-case specific details.
### Q&A with RAG
Retrieval Augmented Generation (RAG) is a way to connect LLMs to external sources of data.
For a high-level tutorial on RAG, check out [this guide](/docs/tutorials/rag/).
- [How to: add chat history](/docs/how_to/qa_chat_history_how_to/)
- [How to: stream](/docs/how_to/qa_streaming/)
@@ -226,6 +256,7 @@ Retrieval Augmented Generation (RAG) is a way to connect LLMs to external source
### Extraction
Extraction is when you use LLMs to extract structured information from unstructured text.
For a high level tutorial on extraction, check out [this guide](/docs/tutorials/extraction/).
- [How to: use reference examples](/docs/how_to/extraction_examples/)
- [How to: handle long text](/docs/how_to/extraction_long_text/)
@@ -234,14 +265,17 @@ Extraction is when you use LLMs to extract structured information from unstructu
### Chatbots
Chatbots involve using an LLM to have a conversation.
For a high-level tutorial on building chatbots, check out [this guide](/docs/tutorials/chatbot/).
- [How to: manage memory](/docs/how_to/chatbots_memory)
- [How to: do retrieval](/docs/how_to/chatbots_retrieval)
- [How to: use tools](/docs/how_to/chatbots_tools)
- [How to: manage large chat history](/docs/how_to/trim_messages/)
### Query analysis
Query Analysis is the task of using an LLM to generate a query to send to a retriever.
For a high-level tutorial on query analysis, check out [this guide](/docs/tutorials/query_analysis/).
- [How to: add examples to the prompt](/docs/how_to/query_few_shot)
- [How to: handle cases where no queries are generated](/docs/how_to/query_no_queries)
@@ -253,6 +287,7 @@ Query Analysis is the task of using an LLM to generate a query to send to a retr
### Q&A over SQL + CSV
You can use LLMs to do question answering over tabular data.
For a high-level tutorial, check out [this guide](/docs/tutorials/sql_qa/).
- [How to: use prompting to improve results](/docs/how_to/sql_prompting)
- [How to: do query validation](/docs/how_to/sql_query_checking)
@@ -262,8 +297,44 @@ You can use LLMs to do question answering over tabular data.
### Q&A over graph databases
You can use an LLM to do question answering over graph databases.
For a high-level tutorial, check out [this guide](/docs/tutorials/graph/).
- [How to: map values to a database](/docs/how_to/graph_mapping)
- [How to: add a semantic layer over the database](/docs/how_to/graph_semantic)
- [How to: improve results with prompting](/docs/how_to/graph_prompting)
- [How to: construct knowledge graphs](/docs/how_to/graph_constructing)
## [LangGraph](https://langchain-ai.github.io/langgraph)
LangGraph is an extension of LangChain aimed at
building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.
LangGraph documentation is currently hosted on a separate site.
You can peruse [LangGraph how-to guides here](https://langchain-ai.github.io/langgraph/how-tos/).
## [LangSmith](https://docs.smith.langchain.com/)
LangSmith allows you to closely trace, monitor and evaluate your LLM application.
It seamlessly integrates with LangChain and LangGraph, and you can use it to inspect and debug individual steps of your chains and agents as you build.
LangSmith documentation is hosted on a separate site.
You can peruse [LangSmith how-to guides here](https://docs.smith.langchain.com/how_to_guides/), but we'll highlight a few sections that are particularly
relevant to LangChain below:
### Evaluation
<span data-heading-keywords="evaluation,evaluate"></span>
Evaluating performance is a vital part of building LLM-powered applications.
LangSmith helps with every step of the process from creating a dataset to defining metrics to running evaluators.
To learn more, check out the [LangSmith evaluation how-to guides](https://docs.smith.langchain.com/how_to_guides#evaluation).
### Tracing
<span data-heading-keywords="trace,tracing"></span>
Tracing gives you observability inside your chains and agents, and is vital in diagnosing issues.
- [How to: trace with LangChain](https://docs.smith.langchain.com/how_to_guides/tracing/trace_with_langchain)
- [How to: add metadata and tags to traces](https://docs.smith.langchain.com/how_to_guides/tracing/trace_with_langchain#add-metadata-and-tags-to-traces)
You can see general tracing-related how-tos [in this section of the LangSmith docs](https://docs.smith.langchain.com/how_to_guides/tracing).
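
As a rough sketch of getting started (assuming you already have a LangSmith API key; the project name below is only a placeholder), tracing can be enabled for any LangChain code via environment variables:

```python
# Minimal sketch: enable LangSmith tracing via environment variables.
# Assumes a LangSmith account and API key; "my-project" is a placeholder.
import os

os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "<your-langsmith-api-key>"
os.environ["LANGCHAIN_PROJECT"] = "my-project"  # optional; defaults to "default"

# Any LangChain invocations after this point will be traced to LangSmith.
```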

View File

@@ -60,7 +60,7 @@
" * document addition by id (`add_documents` method with `ids` argument)\n",
" * delete by id (`delete` method with `ids` argument)\n",
"\n",
"Compatible Vectorstores: `Aerospike`, `AnalyticDB`, `AstraDB`, `AwaDB`, `Bagel`, `Cassandra`, `Chroma`, `CouchbaseVectorStore`, `DashVector`, `DatabricksVectorSearch`, `DeepLake`, `Dingo`, `ElasticVectorSearch`, `ElasticsearchStore`, `FAISS`, `HanaDB`, `Milvus`, `MyScale`, `OpenSearchVectorSearch`, `PGVector`, `Pinecone`, `Qdrant`, `Redis`, `Rockset`, `ScaNN`, `SupabaseVectorStore`, `SurrealDBStore`, `TimescaleVector`, `Vald`, `VDMS`, `Vearch`, `VespaStore`, `Weaviate`, `ZepVectorStore`, `TencentVectorDB`, `OpenSearchVectorSearch`.\n",
"Compatible Vectorstores: `Aerospike`, `AnalyticDB`, `AstraDB`, `AwaDB`, `AzureCosmosDBNoSqlVectorSearch`, `AzureCosmosDBVectorSearch`, `Bagel`, `Cassandra`, `Chroma`, `CouchbaseVectorStore`, `DashVector`, `DatabricksVectorSearch`, `DeepLake`, `Dingo`, `ElasticVectorSearch`, `ElasticsearchStore`, `FAISS`, `HanaDB`, `Milvus`, `MyScale`, `OpenSearchVectorSearch`, `PGVector`, `Pinecone`, `Qdrant`, `Redis`, `Rockset`, `ScaNN`, `SingleStoreDB`, `SupabaseVectorStore`, `SurrealDBStore`, `TimescaleVector`, `Vald`, `VDMS`, `Vearch`, `VespaStore`, `Weaviate`, `Yellowbrick`, `ZepVectorStore`, `TencentVectorDB`, `OpenSearchVectorSearch`.\n",
" \n",
"## Caution\n",
"\n",

View File

@@ -2,11 +2,14 @@
sidebar_position: 2
---
# Installation
# How to install LangChain packages
The LangChain ecosystem is split into different packages, which allow you to choose exactly which pieces of
functionality to install.
## Official release
To install LangChain run:
To install the main LangChain package, run:
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
@@ -21,11 +24,24 @@ import CodeBlock from "@theme/CodeBlock";
</TabItem>
</Tabs>
This will install the bare minimum requirements of LangChain.
A lot of the value of LangChain comes when integrating it with various model providers, datastores, etc.
While this package acts as a sane starting point for using LangChain,
much of the value of LangChain comes when integrating it with various model providers, datastores, etc.
By default, the dependencies needed to do that are NOT installed. You will need to install the dependencies for specific integrations separately.
We'll show how to do that in the next sections of this guide.
## From source
## Ecosystem packages
With the exception of the `langsmith` SDK, all packages in the LangChain ecosystem depend on `langchain-core`, which contains base
classes and abstractions that other packages use. The dependency graph below shows how the different packages are related.
A directed arrow indicates that the source package depends on the target package:
![](/img/ecosystem_packages.png)
When installing a package, you do not need to explicitly install that package's dependencies (such as `langchain-core`).
However, you may choose to if you are using a feature only available in a certain version of that dependency.
If you do, you should make sure that the installed or pinned version is compatible with any other integration packages you use.
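
If it helps, a quick way to see what you currently have installed (a small sketch using only the standard library; the package names are just examples) is:

```python
# Print the installed versions of a few LangChain packages so you can confirm
# that any explicit pin you choose is compatible with them.
from importlib.metadata import PackageNotFoundError, version

for pkg in ["langchain-core", "langchain", "langchain-openai"]:
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed")
```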
### From source
If you want to install from source, clone the repo, make sure your working directory is `PATH/TO/REPO/langchain/libs/langchain`, and run:
@@ -33,21 +49,21 @@ If you want to install from source, you can do so by cloning the repo and be sur
pip install -e .
```
## LangChain core
### LangChain core
The `langchain-core` package contains base abstractions that the rest of the LangChain ecosystem uses, along with the LangChain Expression Language. It is automatically installed by `langchain`, but can also be used separately. Install with:
```bash
pip install langchain-core
```
## LangChain community
### LangChain community
The `langchain-community` package contains third-party integrations. Install with:
```bash
pip install langchain-community
```
## LangChain experimental
### LangChain experimental
The `langchain-experimental` package holds experimental LangChain code, intended for research and experimental uses.
Install with:
@@ -55,14 +71,15 @@ Install with:
pip install langchain-experimental
```
## LangGraph
`langgraph` is a library for building stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain.
### LangGraph
`langgraph` is a library for building stateful, multi-actor applications with LLMs. It integrates smoothly with LangChain, but can be used without it.
Install with:
```bash
pip install langgraph
```
## LangServe
### LangServe
LangServe helps developers deploy LangChain runnables and chains as a REST API.
LangServe is automatically installed by LangChain CLI.
If not using LangChain CLI, install with:
@@ -80,9 +97,10 @@ Install with:
pip install langchain-cli
```
## LangSmith SDK
The LangSmith SDK is automatically installed by LangChain.
If not using LangChain, install with:
### LangSmith SDK
The LangSmith SDK is automatically installed by LangChain. However, it does not depend on
`langchain-core`, and can be installed and used independently if desired.
If you are not using LangChain, you can install it with:
```bash
pip install langsmith

View File

@@ -2,169 +2,226 @@
"cells": [
{
"cell_type": "markdown",
"id": "e5715368",
"id": "90dff237-bc28-4185-a2c0-d5203bbdeacd",
"metadata": {},
"source": [
"# How to track token usage for LLMs\n",
"\n",
"This notebook goes over how to track your token usage for specific calls. It is currently only implemented for the OpenAI API.\n",
"Tracking token usage to calculate cost is an important part of putting your app in production. This guide goes over how to obtain this information from your LangChain model calls.\n",
"\n",
"Let's first look at an extremely simple example of tracking token usage for a single LLM call."
":::info Prerequisites\n",
"\n",
"This guide assumes familiarity with the following concepts:\n",
"\n",
"- [LLMs](/docs/concepts/#llms)\n",
":::\n",
"\n",
"## Using LangSmith\n",
"\n",
"You can use [LangSmith](https://www.langchain.com/langsmith) to help track token usage in your LLM application. See the [LangSmith quick start guide](https://docs.smith.langchain.com/).\n",
"\n",
"## Using callbacks\n",
"\n",
"There are some API-specific callback context managers that allow you to track token usage across multiple calls. You'll need to check whether such an integration is available for your particular model.\n",
"\n",
"If such an integration is not available for your model, you can create a custom callback manager by adapting the implementation of the [OpenAI callback manager](https://api.python.langchain.com/en/latest/_modules/langchain_community/callbacks/openai_info.html#OpenAICallbackHandler).\n",
"\n",
"### OpenAI\n",
"\n",
"Let's first look at an extremely simple example of tracking token usage for a single Chat model call.\n",
"\n",
":::{.callout-danger}\n",
"\n",
"The callback handler does not currently support streaming token counts for legacy language models (e.g., `langchain_openai.OpenAI`). For support in a streaming context, refer to the corresponding guide for chat models [here](/docs/how_to/chat_token_usage_tracking).\n",
"\n",
":::"
]
},
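To make the idea of adapting a callback more concrete, here is a minimal sketch (not the actual `OpenAICallbackHandler`; the shape of `llm_output` and the `"token_usage"` key are OpenAI-style and vary by provider) of a handler that accumulates token counts reported by the model:

```python
# Minimal sketch of a custom token-counting callback handler.
# Other providers may report usage under different keys (or not at all).
from langchain_core.callbacks import BaseCallbackHandler


class TokenCountingHandler(BaseCallbackHandler):
    def __init__(self) -> None:
        self.prompt_tokens = 0
        self.completion_tokens = 0

    def on_llm_end(self, response, **kwargs) -> None:
        # response.llm_output is provider-specific metadata.
        usage = (response.llm_output or {}).get("token_usage", {})
        self.prompt_tokens += usage.get("prompt_tokens", 0)
        self.completion_tokens += usage.get("completion_tokens", 0)

    @property
    def total_tokens(self) -> int:
        return self.prompt_tokens + self.completion_tokens


# Usage: pass the handler in the call's config, e.g.
# handler = TokenCountingHandler()
# llm.invoke("Tell me a joke", config={"callbacks": [handler]})
# print(handler.total_tokens)
```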
{
"cell_type": "markdown",
"id": "f790edd9-823e-4bc5-befa-e9529c7237a0",
"metadata": {},
"source": [
"### Single call"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "9455db35",
"id": "2eebbee2-6ca1-4fa8-a3aa-0376888ceefb",
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"Why don't scientists trust atoms?\n",
"\n",
"Because they make up everything.\n",
"---\n",
"\n",
"Total Tokens: 18\n",
"Prompt Tokens: 4\n",
"Completion Tokens: 14\n",
"Total Cost (USD): $3.4e-05\n"
]
}
],
"source": [
"from langchain_community.callbacks import get_openai_callback\n",
"from langchain_openai import OpenAI"
"from langchain_openai import OpenAI\n",
"\n",
"llm = OpenAI(model_name=\"gpt-3.5-turbo-instruct\")\n",
"\n",
"with get_openai_callback() as cb:\n",
" result = llm.invoke(\"Tell me a joke\")\n",
" print(result)\n",
" print(\"---\")\n",
"print()\n",
"\n",
"print(f\"Total Tokens: {cb.total_tokens}\")\n",
"print(f\"Prompt Tokens: {cb.prompt_tokens}\")\n",
"print(f\"Completion Tokens: {cb.completion_tokens}\")\n",
"print(f\"Total Cost (USD): ${cb.total_cost}\")"
]
},
{
"cell_type": "markdown",
"id": "7df3be35-dd97-4e3a-bd51-52434ab2249d",
"metadata": {},
"source": [
"### Multiple calls\n",
"\n",
"Anything inside the context manager will get tracked. Here's an example of using it to track multiple calls in sequence to a chain. This will also work for an agent which may use multiple steps."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "d1c55cc9",
"id": "3ec10419-294c-44bf-af85-86aabf457cb6",
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"Why did the chicken go to the seance?\n",
"\n",
"To talk to the other side of the road!\n",
"--\n",
"\n",
"\n",
"Why did the fish need a lawyer?\n",
"\n",
"Because it got caught in a net!\n",
"\n",
"---\n",
"Total Tokens: 50\n",
"Prompt Tokens: 12\n",
"Completion Tokens: 38\n",
"Total Cost (USD): $9.400000000000001e-05\n"
]
}
],
"source": [
"llm = OpenAI(model_name=\"gpt-3.5-turbo-instruct\", n=2, best_of=2)"
"from langchain_community.callbacks import get_openai_callback\n",
"from langchain_core.prompts import PromptTemplate\n",
"from langchain_openai import OpenAI\n",
"\n",
"llm = OpenAI(model_name=\"gpt-3.5-turbo-instruct\")\n",
"\n",
"template = PromptTemplate.from_template(\"Tell me a joke about {topic}\")\n",
"chain = template | llm\n",
"\n",
"with get_openai_callback() as cb:\n",
" response = chain.invoke({\"topic\": \"birds\"})\n",
" print(response)\n",
" response = chain.invoke({\"topic\": \"fish\"})\n",
" print(\"--\")\n",
" print(response)\n",
"\n",
"\n",
"print()\n",
"print(\"---\")\n",
"print(f\"Total Tokens: {cb.total_tokens}\")\n",
"print(f\"Prompt Tokens: {cb.prompt_tokens}\")\n",
"print(f\"Completion Tokens: {cb.completion_tokens}\")\n",
"print(f\"Total Cost (USD): ${cb.total_cost}\")"
]
},
{
"cell_type": "markdown",
"id": "ad7a3fba-9fac-4222-8f87-d1d276d27d6e",
"metadata": {
"tags": []
},
"source": [
"## Streaming\n",
"\n",
":::{.callout-danger}\n",
"\n",
"`get_openai_callback` does not currently support streaming token counts for legacy language models (e.g., `langchain_openai.OpenAI`). If you want to count tokens correctly in a streaming context, there are a number of options:\n",
"\n",
"- Use chat models as described in [this guide](/docs/how_to/chat_token_usage_tracking);\n",
"- Implement a [custom callback handler](/docs/how_to/custom_callbacks/) that uses appropriate tokenizers to count the tokens;\n",
"- Use a monitoring platform such as [LangSmith](https://www.langchain.com/langsmith).\n",
":::\n",
"\n",
"Note that when using legacy language models in a streaming context, token counts are not updated:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "31667d54",
"metadata": {},
"id": "cd61ed79-7858-49bb-afb5-d41291f597ba",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Tokens Used: 37\n",
"\tPrompt Tokens: 4\n",
"\tCompletion Tokens: 33\n",
"Successful Requests: 1\n",
"Total Cost (USD): $7.2e-05\n"
"\n",
"\n",
"Why don't scientists trust atoms?\n",
"\n",
"Because they make up everything!\n",
"\n",
"Why don't scientists trust atoms?\n",
"\n",
"Because they make up everything.\n",
"---\n",
"\n",
"Total Tokens: 0\n",
"Prompt Tokens: 0\n",
"Completion Tokens: 0\n",
"Total Cost (USD): $0.0\n"
]
}
],
"source": [
"with get_openai_callback() as cb:\n",
" result = llm.invoke(\"Tell me a joke\")\n",
" print(cb)"
]
},
{
"cell_type": "markdown",
"id": "c0ab6d27",
"metadata": {},
"source": [
"Anything inside the context manager will get tracked. Here's an example of using it to track multiple calls in sequence."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "e09420f4",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"72\n"
]
}
],
"source": [
"with get_openai_callback() as cb:\n",
" result = llm.invoke(\"Tell me a joke\")\n",
" result2 = llm.invoke(\"Tell me a joke\")\n",
" print(cb.total_tokens)"
]
},
{
"cell_type": "markdown",
"id": "d8186e7b",
"metadata": {},
"source": [
"If a chain or agent with multiple steps in it is used, it will track all those steps."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "5d1125c6",
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import AgentType, initialize_agent, load_tools\n",
"from langchain_community.callbacks import get_openai_callback\n",
"from langchain_openai import OpenAI\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"tools = load_tools([\"serpapi\", \"llm-math\"], llm=llm)\n",
"agent = initialize_agent(\n",
" tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "2f98c536",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised to the 0.23 power.\n",
"Action: Search\n",
"Action Input: \"Olivia Wilde boyfriend\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m[\"Olivia Wilde and Harry Styles took fans by surprise with their whirlwind romance, which began when they met on the set of Don't Worry Darling.\", 'Olivia Wilde started dating Harry Styles after ending her years-long engagement to Jason Sudeikis — see their relationship timeline.', 'Olivia Wilde and Harry Styles were spotted early on in their relationship walking around London. (. Image ...', \"Looks like Olivia Wilde and Jason Sudeikis are starting 2023 on good terms. Amid their highly publicized custody battle and the actress' ...\", 'The two started dating after Wilde split up with actor Jason Sudeikisin 2020. However, their relationship came to an end last November.', \"Olivia Wilde and Harry Styles started dating during the filming of Don't Worry Darling. While the movie got a lot of backlash because of the ...\", \"Here's what we know so far about Harry Styles and Olivia Wilde's relationship.\", 'Olivia and the Grammy winner kept their romance out of the spotlight as their relationship began just two months after her split from ex-fiancé ...', \"Harry Styles and Olivia Wilde first met on the set of Don't Worry Darling and stepped out as a couple in January 2021. Relive all their biggest relationship ...\"]\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m Harry Styles is Olivia Wilde's boyfriend.\n",
"Action: Search\n",
"Action Input: \"Harry Styles age\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m29 years\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to calculate 29 raised to the 0.23 power.\n",
"Action: Calculator\n",
"Action Input: 29^0.23\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3mAnswer: 2.169459462491557\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer.\n",
"Final Answer: Harry Styles is Olivia Wilde's boyfriend and his current age raised to the 0.23 power is 2.169459462491557.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"Total Tokens: 2205\n",
"Prompt Tokens: 2053\n",
"Completion Tokens: 152\n",
"Total Cost (USD): $0.0441\n"
]
}
],
"source": [
"llm = OpenAI(model_name=\"gpt-3.5-turbo-instruct\")\n",
"\n",
"with get_openai_callback() as cb:\n",
" response = agent.run(\n",
" \"Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?\"\n",
" )\n",
" print(f\"Total Tokens: {cb.total_tokens}\")\n",
" print(f\"Prompt Tokens: {cb.prompt_tokens}\")\n",
" print(f\"Completion Tokens: {cb.completion_tokens}\")\n",
" print(f\"Total Cost (USD): ${cb.total_cost}\")"
" for chunk in llm.stream(\"Tell me a joke\"):\n",
" print(chunk, end=\"\", flush=True)\n",
" print(result)\n",
" print(\"---\")\n",
"print()\n",
"\n",
"print(f\"Total Tokens: {cb.total_tokens}\")\n",
"print(f\"Prompt Tokens: {cb.prompt_tokens}\")\n",
"print(f\"Completion Tokens: {cb.completion_tokens}\")\n",
"print(f\"Total Cost (USD): ${cb.total_cost}\")"
]
},
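If you need token counts in this streaming setting today, one option is to tokenize the streamed chunks yourself. The sketch below assumes the `tiktoken` package is installed and recognizes the model name (otherwise fall back to `tiktoken.get_encoding("cl100k_base")`); prompt tokens would need to be counted separately:

```python
# Sketch: count completion tokens manually while streaming a legacy LLM.
import tiktoken
from langchain_openai import OpenAI

llm = OpenAI(model_name="gpt-3.5-turbo-instruct")
encoding = tiktoken.encoding_for_model("gpt-3.5-turbo-instruct")

completion_tokens = 0
for chunk in llm.stream("Tell me a joke"):
    completion_tokens += len(encoding.encode(chunk))
    print(chunk, end="", flush=True)

print(f"\nCompletion tokens (approximate): {completion_tokens}")
```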
{
"cell_type": "code",
"execution_count": null,
"id": "80ca77a3",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@@ -183,7 +240,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -5,28 +5,38 @@
"id": "fc0db1bc",
"metadata": {},
"source": [
"# How to reorder retrieved results to put most relevant documents not in the middle\n",
"# How to reorder retrieved results to mitigate the \"lost in the middle\" effect\n",
"\n",
"No matter the architecture of your model, there is a substantial performance degradation when you include 10+ retrieved documents.\n",
"In brief: When models must access relevant information in the middle of long contexts, they tend to ignore the provided documents.\n",
"See: https://arxiv.org/abs/2307.03172\n",
"Substantial performance degradations in [RAG](/docs/tutorials/rag) applications have been [documented](https://arxiv.org/abs/2307.03172) as the number of retrieved documents grows (e.g., beyond ten). In brief: models are liable to miss relevant information in the middle of long contexts.\n",
"\n",
"To avoid this issue you can re-order documents after retrieval to avoid performance degradation."
"By contrast, queries against vector stores will typically return documents in descending order of relevance (e.g., as measured by cosine similarity of [embeddings](/docs/concepts/#embedding-models)).\n",
"\n",
"To mitigate the [\"lost in the middle\"](https://arxiv.org/abs/2307.03172) effect, you can re-order documents after retrieval such that the most relevant documents are positioned at extrema (e.g., the first and last pieces of context), and the least relevant documents are positioned in the middle. In some cases this can help surface the most relevant information to LLMs.\n",
"\n",
"The [LongContextReorder](https://api.python.langchain.com/en/latest/document_transformers/langchain_community.document_transformers.long_context_reorder.LongContextReorder.html) document transformer implements this re-ordering procedure. Below we demonstrate an example."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "74d1ebe8",
"id": "2074fdaa-edff-468a-970f-6f5f26e93d4a",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet sentence-transformers langchain-chroma langchain langchain-openai langchain-huggingface > /dev/null"
]
},
{
"cell_type": "markdown",
"id": "c97eaaf2-34b7-4770-9949-e1abc4ca5226",
"metadata": {},
"source": [
"First we embed some artificial documents and index them in an (in-memory) [Chroma](/docs/integrations/providers/chroma/) vector store. We will use [Hugging Face](/docs/integrations/text_embedding/huggingfacehub/) embeddings, but any LangChain vector store or embeddings model will suffice."
]
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 2,
"id": "49cbcd8e",
"metadata": {},
"outputs": [
@@ -45,20 +55,14 @@
" Document(page_content='This is just a random text.')]"
]
},
"execution_count": 3,
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chains import LLMChain, StuffDocumentsChain\n",
"from langchain_chroma import Chroma\n",
"from langchain_community.document_transformers import (\n",
" LongContextReorder,\n",
")\n",
"from langchain_core.prompts import PromptTemplate\n",
"from langchain_huggingface import HuggingFaceEmbeddings\n",
"from langchain_openai import OpenAI\n",
"\n",
"# Get embeddings.\n",
"embeddings = HuggingFaceEmbeddings(model_name=\"all-MiniLM-L6-v2\")\n",
@@ -83,14 +87,22 @@
"query = \"What can you tell me about the Celtics?\"\n",
"\n",
"# Get relevant documents ordered by relevance score\n",
"docs = retriever.get_relevant_documents(query)\n",
"docs = retriever.invoke(query)\n",
"docs"
]
},
{
"cell_type": "markdown",
"id": "175d031a-43fa-42f4-93c4-2ba52c3c3ee5",
"metadata": {},
"source": [
"Note that documents are returned in descending order of relevance to the query. The `LongContextReorder` document transformer will implement the re-ordering described above:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "34fb9d6e",
"execution_count": 3,
"id": "9a1181f2-a3dc-4614-9233-2196ab65939e",
"metadata": {},
"outputs": [
{
@@ -108,12 +120,14 @@
" Document(page_content='This is a document about the Boston Celtics')]"
]
},
"execution_count": 4,
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_community.document_transformers import LongContextReorder\n",
"\n",
"# Reorder the documents:\n",
"# Less relevant document will be at the middle of the list and more\n",
"# relevant elements at beginning / end.\n",
@@ -125,58 +139,54 @@
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "ceccab87",
"cell_type": "markdown",
"id": "a8d2ef0c-c397-4d8d-8118-3f7acf86d241",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'\\n\\nThe Celtics are referenced in four of the nine text extracts. They are mentioned as the favorite team of the author, the winner of a basketball game, a team with one of the best players, and a team with a specific player. Additionally, the last extract states that the document is about the Boston Celtics. This suggests that the Celtics are a basketball team, possibly from Boston, that is well-known and has had successful players and games in the past. '"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# We prepare and run a custom Stuff chain with reordered docs as context.\n",
"\n",
"# Override prompts\n",
"document_prompt = PromptTemplate(\n",
" input_variables=[\"page_content\"], template=\"{page_content}\"\n",
")\n",
"document_variable_name = \"context\"\n",
"llm = OpenAI()\n",
"stuff_prompt_override = \"\"\"Given this text extracts:\n",
"-----\n",
"{context}\n",
"-----\n",
"Please answer the following question:\n",
"{query}\"\"\"\n",
"prompt = PromptTemplate(\n",
" template=stuff_prompt_override, input_variables=[\"context\", \"query\"]\n",
")\n",
"\n",
"# Instantiate the chain\n",
"llm_chain = LLMChain(llm=llm, prompt=prompt)\n",
"chain = StuffDocumentsChain(\n",
" llm_chain=llm_chain,\n",
" document_prompt=document_prompt,\n",
" document_variable_name=document_variable_name,\n",
")\n",
"chain.run(input_documents=reordered_docs, query=query)"
"Below, we show how to incorporate the re-ordered documents into a simple question-answering chain:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d4696a97",
"execution_count": 5,
"id": "8bbea705-d5b9-4ed5-9957-e12547283622",
"metadata": {},
"outputs": [],
"source": []
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"The Celtics are a professional basketball team and one of the most iconic franchises in the NBA. They are highly regarded and have a large fan base. The team has had many successful seasons and is often considered one of the top teams in the league. They have a strong history and have produced many great players, such as Larry Bird and L. Kornet. The team is based in Boston and is often referred to as the Boston Celtics.\n"
]
}
],
"source": [
"from langchain.chains.combine_documents import create_stuff_documents_chain\n",
"from langchain_core.prompts import PromptTemplate\n",
"from langchain_openai import OpenAI\n",
"\n",
"llm = OpenAI()\n",
"\n",
"prompt_template = \"\"\"\n",
"Given these texts:\n",
"-----\n",
"{context}\n",
"-----\n",
"Please answer the following question:\n",
"{query}\n",
"\"\"\"\n",
"\n",
"prompt = PromptTemplate(\n",
" template=prompt_template,\n",
" input_variables=[\"context\", \"query\"],\n",
")\n",
"\n",
"# Create and invoke the chain:\n",
"chain = create_stuff_documents_chain(llm, prompt)\n",
"response = chain.invoke({\"context\": reordered_docs, \"query\": query})\n",
"print(response)"
]
}
],
"metadata": {
@@ -195,7 +205,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -0,0 +1,170 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "ac47bfab-0f4f-42ce-8bb6-898ef22a0338",
"metadata": {},
"source": [
"# How to merge consecutive messages of the same type\n",
"\n",
"Certain models do not support passing in consecutive messages of the same type (a.k.a. \"runs\" of the same message type).\n",
"\n",
"The `merge_message_runs` utility makes it easy to merge consecutive messages of the same type.\n",
"\n",
"## Basic usage"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "1a215bbb-c05c-40b0-a6fd-d94884d517df",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"SystemMessage(content=\"you're a good assistant.\\nyou always respond with a joke.\")\n",
"\n",
"HumanMessage(content=[{'type': 'text', 'text': \"i wonder why it's called langchain\"}, 'and who is harrison chasing anyways'])\n",
"\n",
"AIMessage(content='Well, I guess they thought \"WordRope\" and \"SentenceString\" just didn\\'t have the same ring to it!\\nWhy, he\\'s probably chasing after the last cup of coffee in the office!')\n"
]
}
],
"source": [
"from langchain_core.messages import (\n",
" AIMessage,\n",
" HumanMessage,\n",
" SystemMessage,\n",
" merge_message_runs,\n",
")\n",
"\n",
"messages = [\n",
" SystemMessage(\"you're a good assistant.\"),\n",
" SystemMessage(\"you always respond with a joke.\"),\n",
" HumanMessage([{\"type\": \"text\", \"text\": \"i wonder why it's called langchain\"}]),\n",
" HumanMessage(\"and who is harrison chasing anyways\"),\n",
" AIMessage(\n",
" 'Well, I guess they thought \"WordRope\" and \"SentenceString\" just didn\\'t have the same ring to it!'\n",
" ),\n",
" AIMessage(\"Why, he's probably chasing after the last cup of coffee in the office!\"),\n",
"]\n",
"\n",
"merged = merge_message_runs(messages)\n",
"print(\"\\n\\n\".join([repr(x) for x in merged]))"
]
},
{
"cell_type": "markdown",
"id": "0544c811-7112-4b76-8877-cc897407c738",
"metadata": {},
"source": [
"Notice that if the contents of one of the messages to merge is a list of content blocks then the merged message will have a list of content blocks. And if both messages to merge have string contents then those are concatenated with a newline character."
]
},
{
"cell_type": "markdown",
"id": "1b2eee74-71c8-4168-b968-bca580c25d18",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"`merge_message_runs` can be used in an imperatively (like above) or declaratively, making it easy to compose with other components in a chain:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "6d5a0283-11f8-435b-b27b-7b18f7693592",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=[], response_metadata={'id': 'msg_01D6R8Naum57q8qBau9vLBUX', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 84, 'output_tokens': 3}}, id='run-ac0c465b-b54f-4b8b-9295-e5951250d653-0', usage_metadata={'input_tokens': 84, 'output_tokens': 3, 'total_tokens': 87})"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# pip install -U langchain-anthropic\n",
"from langchain_anthropic import ChatAnthropic\n",
"\n",
"llm = ChatAnthropic(model=\"claude-3-sonnet-20240229\", temperature=0)\n",
"# Notice we don't pass in messages. This creates\n",
"# a RunnableLambda that takes messages as input\n",
"merger = merge_message_runs()\n",
"chain = merger | llm\n",
"chain.invoke(messages)"
]
},
{
"cell_type": "markdown",
"id": "72e90dce-693c-4842-9526-ce6460fe956b",
"metadata": {},
"source": [
"Looking at the LangSmith trace we can see that before the messages are passed to the model they are merged: https://smith.langchain.com/public/ab558677-cac9-4c59-9066-1ecce5bcd87c/r\n",
"\n",
"Looking at just the merger, we can see that it's a Runnable object that can be invoked like all Runnables:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "460817a6-c327-429d-958e-181a8c46059c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[SystemMessage(content=\"you're a good assistant.\\nyou always respond with a joke.\"),\n",
" HumanMessage(content=[{'type': 'text', 'text': \"i wonder why it's called langchain\"}, 'and who is harrison chasing anyways']),\n",
" AIMessage(content='Well, I guess they thought \"WordRope\" and \"SentenceString\" just didn\\'t have the same ring to it!\\nWhy, he\\'s probably chasing after the last cup of coffee in the office!')]"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"merger.invoke(messages)"
]
},
{
"cell_type": "markdown",
"id": "4548d916-ce21-4dc6-8f19-eedb8003ace6",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For a complete description of all arguments head to the API reference: https://api.python.langchain.com/en/latest/messages/langchain_core.messages.utils.merge_message_runs.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv-2",
"language": "python",
"name": "poetry-venv-2"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

File diff suppressed because it is too large

View File

@@ -1,5 +1,19 @@
{
"cells": [
{
"cell_type": "raw",
"id": "adc7ee09",
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"keywords: [create_react_agent, create_react_agent()]\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "457cdc67-1893-4653-8b0c-b185a5947e74",
@@ -7,9 +21,18 @@
"source": [
"# How to migrate from legacy LangChain agents to LangGraph\n",
"\n",
"Here we focus on how to move from legacy LangChain agents to LangGraph agents.\n",
":::info Prerequisites\n",
"\n",
"This guide assumes familiarity with the following concepts:\n",
"- [Agents](/docs/concepts/#agents)\n",
"- [LangGraph](https://langchain-ai.github.io/langgraph/)\n",
"- [Tool calling](/docs/how_to/tool_calling/)\n",
"\n",
":::\n",
"\n",
"Here we focus on how to move from legacy LangChain agents to more flexible [LangGraph](https://langchain-ai.github.io/langgraph/) agents.\n",
"LangChain agents (the [AgentExecutor](https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.AgentExecutor.html#langchain.agents.agent.AgentExecutor) in particular) have multiple configuration parameters.\n",
"In this notebook we will show how those parameters map to the LangGraph [react agent executor](https://langchain-ai.github.io/langgraph/reference/prebuilt/#create_react_agent).\n",
"In this notebook we will show how those parameters map to the LangGraph react agent executor using the [create_react_agent](https://langchain-ai.github.io/langgraph/reference/prebuilt/#create_react_agent) prebuilt helper method.\n",
"\n",
"#### Prerequisites\n",
"\n",
@@ -195,7 +218,7 @@
"\n",
"Let's take a look at all of these below. We will pass in custom instructions to get the agent to respond in Spanish.\n",
"\n",
"First up, using AgentExecutor:"
"First up, using `AgentExecutor`:"
]
},
{
@@ -238,7 +261,16 @@
"id": "bd5f5500-5ae4-4000-a9fd-8c5a2cc6404d",
"metadata": {},
"source": [
"Now, let's pass a custom system message to [react agent executor](https://langchain-ai.github.io/langgraph/reference/prebuilt/#create_react_agent). This can either be a string or a LangChain SystemMessage."
"Now, let's pass a custom system message to [react agent executor](https://langchain-ai.github.io/langgraph/reference/prebuilt/#create_react_agent).\n",
"\n",
"LangGraph's prebuilt `create_react_agent` does not take a prompt template directly as a parameter, but instead takes a [`messages_modifier`](https://langchain-ai.github.io/langgraph/reference/prebuilt/#create_react_agent) parameter. This modifies messages before they are passed into the model, and can be one of four values:\n",
"\n",
"- A `SystemMessage`, which is added to the beginning of the list of messages.\n",
"- A `string`, which is converted to a `SystemMessage` and added to the beginning of the list of messages.\n",
"- A `Callable`, which should take in a list of messages. The output is then passed to the language model.\n",
"- Or a [`Runnable`](/docs/concepts/#langchain-expression-language-lcel), which should should take in a list of messages. The output is then passed to the language model.\n",
"\n",
"Here's how it looks in action:"
]
},
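For example, a minimal sketch of the string case (assuming LangGraph's prebuilt `create_react_agent`, plus a chat `model` and a `tools` list defined in earlier cells of this notebook) might look like:

```python
# Sketch: pass a plain string as messages_modifier; it is converted to a
# SystemMessage and prepended to the conversation before each model call.
from langgraph.prebuilt import create_react_agent

system_message = "Respond only in Spanish."

langgraph_agent_executor = create_react_agent(
    model, tools, messages_modifier=system_message
)

# messages = langgraph_agent_executor.invoke(
#     {"messages": [("human", "What is the weather in SF?")]}
# )
```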
{
@@ -319,7 +351,15 @@
"id": "68df3a09",
"metadata": {},
"source": [
"## Memory\n",
"## Memory"
]
},
{
"cell_type": "markdown",
"id": "96e7ffc8",
"metadata": {},
"source": [
"### In LangChain\n",
"\n",
"With LangChain's [AgentExecutor](https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.AgentExecutor.html#langchain.agents.agent.AgentExecutor.iter), you could add chat [Memory](https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.AgentExecutor.html#langchain.agents.agent.AgentExecutor.memory) so it can engage in a multi-turn conversation."
]
@@ -407,7 +447,7 @@
"id": "c2a5a32f",
"metadata": {},
"source": [
"#### In LangGraph\n",
"### In LangGraph\n",
"\n",
"Memory is just [persistence](https://langchain-ai.github.io/langgraph/how-tos/persistence/), aka [checkpointing](https://langchain-ai.github.io/langgraph/reference/checkpoints/).\n",
"\n",
@@ -478,6 +518,8 @@
"source": [
"## Iterating through steps\n",
"\n",
"### In LangChain\n",
"\n",
"With LangChain's [AgentExecutor](https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.AgentExecutor.html#langchain.agents.agent.AgentExecutor.iter), you could iterate over the steps using the [stream](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.stream) (or async `astream`) methods or the [iter](https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.AgentExecutor.html#langchain.agents.agent.AgentExecutor.iter) method. LangGraph supports stepwise iteration using [stream](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.stream) "
]
},
@@ -536,7 +578,7 @@
"id": "46ccbcbf",
"metadata": {},
"source": [
"#### In LangGraph\n",
"### In LangGraph\n",
"\n",
"In LangGraph, things are handled natively using [stream](https://langchain-ai.github.io/langgraph/reference/graphs/#langgraph.graph.graph.CompiledGraph.stream) or the asynchronous `astream` method."
]
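A minimal sketch of stepwise streaming (assuming the compiled react agent from earlier in this guide, here called `langgraph_agent_executor`) looks like:

```python
# Sketch: iterate over agent steps with LangGraph's stream method.
# Each yielded item is an update from a node in the graph (agent or tools).
for step in langgraph_agent_executor.stream(
    {"messages": [("human", "What is the weather in SF?")]},
    stream_mode="updates",
):
    print(step)
```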
@@ -587,6 +629,8 @@
"source": [
"## `return_intermediate_steps`\n",
"\n",
"### In LangChain\n",
"\n",
"Setting this parameter on AgentExecutor allows users to access intermediate_steps, which pairs agent actions (e.g., tool invocations) with their outcomes.\n"
]
},
@@ -615,6 +659,8 @@
"id": "594f7567-302f-4fa8-85bb-025ac8322162",
"metadata": {},
"source": [
"### In LangGraph\n",
"\n",
"By default the [react agent executor](https://langchain-ai.github.io/langgraph/reference/prebuilt/#create_react_agent) in LangGraph appends all messages to the central state. Therefore, it is easy to see any intermediate steps by just looking at the full state."
]
},
@@ -655,11 +701,9 @@
"source": [
"## `max_iterations`\n",
"\n",
"`AgentExecutor` implements a `max_iterations` parameter, whereas this is controlled via `recursion_limit` in LangGraph.\n",
"### In LangChain\n",
"\n",
"Note that in AgentExecutor, an \"iteration\" includes a full turn of tool invocation and execution. In LangGraph, each step contributes to the recursion limit, so we will need to multiply by two (and add one) to get equivalent results.\n",
"\n",
"If the recursion limit is reached, LangGraph raises a specific exception type, that we can catch and manage similarly to AgentExecutor."
"`AgentExecutor` implements a `max_iterations` parameter, allowing users to abort a run that exceeds a specified number of iterations."
]
},
{
@@ -737,6 +781,20 @@
"agent_executor.invoke({\"input\": query})"
]
},
{
"cell_type": "markdown",
"id": "dd3a933f",
"metadata": {},
"source": [
"### In LangGraph\n",
"\n",
"In LangGraph this is controlled via `recursion_limit` configuration parameter.\n",
"\n",
"Note that in `AgentExecutor`, an \"iteration\" includes a full turn of tool invocation and execution. In LangGraph, each step contributes to the recursion limit, so we will need to multiply by two (and add one) to get equivalent results.\n",
"\n",
"If the recursion limit is reached, LangGraph raises a specific exception type, that we can catch and manage similarly to AgentExecutor."
]
},
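As a concrete sketch (assuming the compiled agent and the `query` string from earlier cells; the variable names are placeholders), the equivalent of `max_iterations=3` would be:

```python
# Sketch: set a recursion limit equivalent to max_iterations=3 and catch the
# error LangGraph raises when the limit is hit.
from langgraph.errors import GraphRecursionError

RECURSION_LIMIT = 2 * 3 + 1  # 2 * max_iterations + 1

try:
    langgraph_agent_executor.invoke(
        {"messages": [("human", query)]},
        {"recursion_limit": RECURSION_LIMIT},
    )
except GraphRecursionError:
    print("Agent stopped due to max iterations.")
```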
{
"cell_type": "code",
"execution_count": 16,
@@ -782,6 +840,8 @@
"source": [
"## `max_execution_time`\n",
"\n",
"### In LangChain\n",
"\n",
"`AgentExecutor` implements a `max_execution_time` parameter, allowing users to abort a run that exceeds a total time limit."
]
},
@@ -848,6 +908,8 @@
"id": "d02eb025",
"metadata": {},
"source": [
"### In LangGraph\n",
"\n",
"With LangGraph's react agent, you can control timeouts on two levels. \n",
"\n",
"You can set a `step_timeout` to bound each **step**:"
@@ -936,6 +998,8 @@
"source": [
"## `early_stopping_method`\n",
"\n",
"### In LangChain\n",
"\n",
"With LangChain's [AgentExecutor](https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.AgentExecutor.html#langchain.agents.agent.AgentExecutor.iter), you could configure an [early_stopping_method](https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.AgentExecutor.html#langchain.agents.agent.AgentExecutor.early_stopping_method) to either return a string saying \"Agent stopped due to iteration limit or time limit.\" (`\"force\"`) or prompt the LLM a final time to respond (`\"generate\"`)."
]
},
@@ -996,7 +1060,7 @@
"id": "706e05c4",
"metadata": {},
"source": [
"#### In LangGraph\n",
"### In LangGraph\n",
"\n",
"In LangGraph, you can explicitly handle the response behavior outside the agent, since the full state can be accessed."
]
@@ -1045,6 +1109,8 @@
"source": [
"## `trim_intermediate_steps`\n",
"\n",
"### In LangChain\n",
"\n",
"With LangChain's [AgentExecutor](https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.AgentExecutor.html#langchain.agents.agent.AgentExecutor), you could trim the intermediate steps of long-running agents using [trim_intermediate_steps](https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.AgentExecutor.html#langchain.agents.agent.AgentExecutor.trim_intermediate_steps), which is either an integer (indicating the agent should keep the last N steps) or a custom function.\n",
"\n",
"For instance, we could trim the value so the agent only sees the most recent intermediate step."
@@ -1148,7 +1214,7 @@
"id": "3d450c5a",
"metadata": {},
"source": [
"#### In LangGraph\n",
"### In LangGraph\n",
"\n",
"We can use the [`messages_modifier`](https://langchain-ai.github.io/langgraph/reference/prebuilt/#create_react_agent) just as before when passing in [prompt templates](#prompt-templates)."
]
@@ -1212,6 +1278,18 @@
"except GraphRecursionError as e:\n",
" print(\"Stopping agent prematurely due to triggering stop condition\")"
]
},
{
"cell_type": "markdown",
"id": "41377eb8",
"metadata": {},
"source": [
"## Next steps\n",
"\n",
"You've now learned how to migrate your LangChain agent executors to LangGraph.\n",
"\n",
"Next, check out other [LangGraph how-to guides](https://langchain-ai.github.io/langgraph/how-tos/)."
]
}
],
"metadata": {

View File

@@ -0,0 +1,798 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "f331037f-be3f-4782-856f-d55dab952488",
"metadata": {},
"source": [
"# How to migrate chains to LCEL\n",
"\n",
":::info Prerequisites\n",
"\n",
"This guide assumes familiarity with the following concepts:\n",
"- [LangChain Expression Language](/docs/concepts#langchain-expression-language-lcel)\n",
"\n",
":::\n",
"\n",
"LCEL is designed to streamline the process of building useful apps with LLMs and combining related components. It does this by providing:\n",
"\n",
"1. **A unified interface**: Every LCEL object implements the `Runnable` interface, which defines a common set of invocation methods (`invoke`, `batch`, `stream`, `ainvoke`, ...). This makes it possible to also automatically and consistently support useful operations like streaming of intermediate steps and batching, since every chain composed of LCEL objects is itself an LCEL object.\n",
"2. **Composition primitives**: LCEL provides a number of primitives that make it easy to compose chains, parallelize components, add fallbacks, dynamically configure chain internals, and more.\n",
"\n",
"LangChain maintains a number of legacy abstractions. Many of these can be reimplemented via short combinations of LCEL primitives. Doing so confers some general advantages:\n",
"\n",
"- The resulting chains typically implement the full `Runnable` interface, including streaming and asynchronous support where appropriate;\n",
"- The chains may be more easily extended or modified;\n",
"- The parameters of the chain are typically surfaced for easier customization (e.g., prompts) over previous versions, which tended to be subclasses and had opaque parameters and internals.\n",
"\n",
"The LCEL implementations can be slightly more verbose, but there are significant benefits in transparency and customizability.\n",
"\n",
"In this guide we review LCEL implementations of common legacy abstractions. Where appropriate, we link out to separate guides with more detail."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b99b47ec",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-community langchain langchain-openai faiss-cpu"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "717c8673",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from getpass import getpass\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass()"
]
},
{
"cell_type": "markdown",
"id": "e3621b62-a037-42b8-8faa-59575608bb8b",
"metadata": {},
"source": [
"## `LLMChain`\n",
"<span data-heading-keywords=\"llmchain\"></span>\n",
"\n",
"[`LLMChain`](https://api.python.langchain.com/en/latest/chains/langchain.chains.llm.LLMChain.html) combined a prompt template, LLM, and output parser into a class.\n",
"\n",
"Some advantages of switching to the LCEL implementation are:\n",
"\n",
"- Clarity around contents and parameters. The legacy `LLMChain` contains a default output parser and other options.\n",
"- Easier streaming. `LLMChain` only supports streaming via callbacks.\n",
"- Easier access to raw message outputs if desired. `LLMChain` only exposes these via a parameter or via callback.\n",
"\n",
"import { ColumnContainer, Column } from \"@theme/Columns\";\n",
"\n",
"<ColumnContainer>\n",
"\n",
"<Column>\n",
"\n",
"#### Legacy\n"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "e628905c-430e-4e4a-9d7c-c91d2f42052e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'adjective': 'funny',\n",
" 'text': \"Why couldn't the bicycle find its way home?\\n\\nBecause it lost its bearings!\"}"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chains import LLMChain\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [(\"user\", \"Tell me a {adjective} joke\")],\n",
")\n",
"\n",
"chain = LLMChain(llm=ChatOpenAI(), prompt=prompt)\n",
"\n",
"chain({\"adjective\": \"funny\"})"
]
},
{
"cell_type": "markdown",
"id": "cdc3b527-c09e-4c77-9711-c3cc4506cd95",
"metadata": {},
"source": [
"\n",
"</Column>\n",
"\n",
"<Column>\n",
"\n",
"#### LCEL\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "0d2a7cf8-1bc7-405c-bb0d-f2ab2ba3b6ab",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"Why couldn't the bicycle stand up by itself?\\n\\nBecause it was two tired!\""
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [(\"user\", \"Tell me a {adjective} joke\")],\n",
")\n",
"\n",
"chain = prompt | ChatOpenAI() | StrOutputParser()\n",
"\n",
"chain.invoke({\"adjective\": \"funny\"})"
]
},
{
"cell_type": "markdown",
"id": "3c0b0513-77b8-4371-a20e-3e487cec7e7f",
"metadata": {},
"source": [
"\n",
"</Column>\n",
"</ColumnContainer>\n",
"\n",
"Note that `LLMChain` by default returns a `dict` containing both the input and the output. If this behavior is desired, we can replicate it using another LCEL primitive, [`RunnablePassthrough`](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html):"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "529206c5-abbe-4213-9e6c-3b8586c8000d",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'adjective': 'funny',\n",
" 'text': \"Why couldn't the bicycle stand up by itself?\\n\\nBecause it was two tired!\"}"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.runnables import RunnablePassthrough\n",
"\n",
"outer_chain = RunnablePassthrough().assign(text=chain)\n",
"\n",
"outer_chain.invoke({\"adjective\": \"funny\"})"
]
},
{
"cell_type": "markdown",
"id": "29d2e26c-2854-4971-9c2b-613450993921",
"metadata": {},
"source": [
"See [this tutorial](/docs/tutorials/llm_chain) for more detail on building with prompt templates, LLMs, and output parsers."
]
},
{
"cell_type": "markdown",
"id": "00df631d-5121-4918-94aa-b88acce9b769",
"metadata": {},
"source": [
"## `ConversationChain`\n",
"<span data-heading-keywords=\"conversationchain\"></span>\n",
"\n",
"[`ConversationChain`](https://api.python.langchain.com/en/latest/chains/langchain.chains.conversation.base.ConversationChain.html) incorporates a memory of previous messages to sustain a stateful conversation.\n",
"\n",
"Some advantages of switching to the LCEL implementation are:\n",
"\n",
"- Innate support for threads/separate sessions. To make this work with `ConversationChain`, you'd need to instantiate a separate memory class outside the chain.\n",
"- More explicit parameters. `ConversationChain` contains a hidden default prompt, which can cause confusion.\n",
"- Streaming support. `ConversationChain` only supports streaming via callbacks.\n",
"\n",
"`RunnableWithMessageHistory` implements sessions via configuration parameters. It should be instantiated with a callable that returns a [chat message history](https://api.python.langchain.com/en/latest/chat_history/langchain_core.chat_history.BaseChatMessageHistory.html). By default, it expects this function to take a single argument `session_id`.\n",
"\n",
"<ColumnContainer>\n",
"<Column>\n",
"\n",
"#### Legacy\n"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "4f2cc6dc-d70a-4c13-9258-452f14290da6",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'input': 'how are you?',\n",
" 'history': '',\n",
" 'response': \"Arrr, I be doin' well, me matey! Just sailin' the high seas in search of treasure and adventure. How can I assist ye today?\"}"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chains import ConversationChain\n",
"from langchain.memory import ConversationBufferMemory\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"template = \"\"\"\n",
"You are a pirate. Answer the following questions as best you can.\n",
"Chat history: {history}\n",
"Question: {input}\n",
"\"\"\"\n",
"\n",
"prompt = ChatPromptTemplate.from_template(template)\n",
"\n",
"memory = ConversationBufferMemory()\n",
"\n",
"chain = ConversationChain(\n",
" llm=ChatOpenAI(),\n",
" memory=memory,\n",
" prompt=prompt,\n",
")\n",
"\n",
"chain({\"input\": \"how are you?\"})"
]
},
{
"cell_type": "markdown",
"id": "f8e36b0e-c7dc-4130-a51b-189d4b756c7f",
"metadata": {},
"source": [
"</Column>\n",
"\n",
"<Column>\n",
"\n",
"#### LCEL\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "173e1a9c-2a18-4669-b0de-136f39197786",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"Arr, matey! I be sailin' the high seas with me crew, searchin' for buried treasure and adventure! How be ye doin' on this fine day?\""
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.chat_history import InMemoryChatMessageHistory\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables.history import RunnableWithMessageHistory\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", \"You are a pirate. Answer the following questions as best you can.\"),\n",
" (\"placeholder\", \"{chat_history}\"),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"history = InMemoryChatMessageHistory()\n",
"\n",
"chain = prompt | ChatOpenAI() | StrOutputParser()\n",
"\n",
"wrapped_chain = RunnableWithMessageHistory(chain, lambda x: history)\n",
"\n",
"wrapped_chain.invoke(\n",
" {\"input\": \"how are you?\"},\n",
" config={\"configurable\": {\"session_id\": \"42\"}},\n",
")"
]
},
{
"cell_type": "markdown",
"id": "6b386ce6-895e-442c-88f3-7bec0ab9f401",
"metadata": {},
"source": [
"\n",
"</Column>\n",
"</ColumnContainer>\n",
"\n",
"The above example uses the same `history` for all sessions. The example below shows how to use a different chat history for each session."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "4e05994f-1fbc-4699-bf2e-62cb0e4deeb8",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"Ahoy there! What be ye wantin' from this old pirate?\", response_metadata={'token_usage': {'completion_tokens': 15, 'prompt_tokens': 29, 'total_tokens': 44}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-1846d5f5-0dda-43b6-bb49-864e541f9c29-0', usage_metadata={'input_tokens': 29, 'output_tokens': 15, 'total_tokens': 44})"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.chat_history import BaseChatMessageHistory\n",
"from langchain_core.runnables.history import RunnableWithMessageHistory\n",
"\n",
"store = {}\n",
"\n",
"\n",
"def get_session_history(session_id: str) -> BaseChatMessageHistory:\n",
" if session_id not in store:\n",
" store[session_id] = InMemoryChatMessageHistory()\n",
" return store[session_id]\n",
"\n",
"\n",
"chain = prompt | ChatOpenAI() | StrOutputParser()\n",
"\n",
"wrapped_chain = RunnableWithMessageHistory(chain, get_session_history)\n",
"\n",
"wrapped_chain.invoke(\"Hello!\", config={\"configurable\": {\"session_id\": \"abc123\"}})"
]
},
{
"cell_type": "markdown",
"id": "c36ebecb",
"metadata": {},
"source": [
"See [this tutorial](/docs/tutorials/chatbot) for a more end-to-end guide on building with [`RunnableWithMessageHistory`](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.history.RunnableWithMessageHistory.html).\n",
"\n",
"## `RetrievalQA`\n",
"<span data-heading-keywords=\"retrievalqa\"></span>\n",
"\n",
"The [`RetrievalQA`](https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval_qa.base.RetrievalQA.html) chain performed natural-language question answering over a data source using retrieval-augmented generation.\n",
"\n",
"Some advantages of switching to the LCEL implementation are:\n",
"\n",
"- Easier customizability. Details such as the prompt and how documents are formatted are only configurable via specific parameters in the `RetrievalQA` chain.\n",
"- More easily return source documents.\n",
"- Support for runnable methods like streaming and async operations.\n",
"\n",
"Now let's look at them side-by-side. We'll use the same ingestion code to load a [blog post by Lilian Weng](https://lilianweng.github.io/posts/2023-06-23-agent/) on autonomous agents into a local vector store:"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "1efbe16e",
"metadata": {},
"outputs": [],
"source": [
"# Load docs\n",
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"from langchain_community.document_loaders import WebBaseLoader\n",
"from langchain_community.vectorstores import FAISS\n",
"from langchain_openai.chat_models import ChatOpenAI\n",
"from langchain_openai.embeddings import OpenAIEmbeddings\n",
"\n",
"loader = WebBaseLoader(\"https://lilianweng.github.io/posts/2023-06-23-agent/\")\n",
"data = loader.load()\n",
"\n",
"# Split\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)\n",
"all_splits = text_splitter.split_documents(data)\n",
"\n",
"# Store splits\n",
"vectorstore = FAISS.from_documents(documents=all_splits, embedding=OpenAIEmbeddings())\n",
"\n",
"# LLM\n",
"llm = ChatOpenAI()"
]
},
{
"cell_type": "markdown",
"id": "c7e16438",
"metadata": {},
"source": [
"<ColumnContainer>\n",
"\n",
"<Column>\n",
"\n",
"#### Legacy"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "43bf55a0",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'query': 'What are autonomous agents?',\n",
" 'result': 'Autonomous agents are LLM-empowered agents that handle autonomous design, planning, and performance of complex tasks, such as scientific experiments. These agents can browse the Internet, read documentation, execute code, call robotics experimentation APIs, and leverage other LLMs. They are capable of reasoning and planning ahead for complicated tasks by breaking them down into smaller steps.'}"
]
},
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain import hub\n",
"from langchain.chains import RetrievalQA\n",
"\n",
"# See full prompt at https://smith.langchain.com/hub/rlm/rag-prompt\n",
"prompt = hub.pull(\"rlm/rag-prompt\")\n",
"\n",
"qa_chain = RetrievalQA.from_llm(\n",
" llm, retriever=vectorstore.as_retriever(), prompt=prompt\n",
")\n",
"\n",
"qa_chain(\"What are autonomous agents?\")"
]
},
{
"cell_type": "markdown",
"id": "081948e5",
"metadata": {},
"source": [
"</Column>\n",
"\n",
"<Column>\n",
"\n",
"#### LCEL\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "9efcc931",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Autonomous agents are agents that can handle autonomous design, planning, and performance of complex tasks, such as scientific experiments. They can browse the Internet, read documentation, execute code, call robotics experimentation APIs, and leverage other language model models. These agents use reasoning steps to develop solutions to specific tasks, like creating a novel anticancer drug.'"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain import hub\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"\n",
"# See full prompt at https://smith.langchain.com/hub/rlm/rag-prompt\n",
"prompt = hub.pull(\"rlm/rag-prompt\")\n",
"\n",
"\n",
"def format_docs(docs):\n",
" return \"\\n\\n\".join(doc.page_content for doc in docs)\n",
"\n",
"\n",
"qa_chain = (\n",
" {\n",
" \"context\": vectorstore.as_retriever() | format_docs,\n",
" \"question\": RunnablePassthrough(),\n",
" }\n",
" | prompt\n",
" | llm\n",
" | StrOutputParser()\n",
")\n",
"\n",
"qa_chain.invoke(\"What are autonomous agents?\")"
]
},
{
"cell_type": "markdown",
"id": "d6f44fe8",
"metadata": {},
"source": [
"</Column>\n",
"</ColumnContainer>\n",
"\n",
"The LCEL implementation exposes the internals of what's happening around retrieving, formatting documents, and passing them through a prompt to the LLM, but it is more verbose. You can customize and wrap this composition logic in a helper function, or use the higher-level [`create_retrieval_chain`](https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval.create_retrieval_chain.html) and [`create_stuff_documents_chain`](https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.stuff.create_stuff_documents_chain.html) helper method:"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "5fe42761",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'input': 'What are autonomous agents?',\n",
" 'context': [Document(page_content='Boiko et al. (2023) also looked into LLM-empowered agents for scientific discovery, to handle autonomous design, planning, and performance of complex scientific experiments. This agent can use tools to browse the Internet, read documentation, execute code, call robotics experimentation APIs and leverage other LLMs.\\nFor example, when requested to \"develop a novel anticancer drug\", the model came up with the following reasoning steps:', metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agents brain, complemented by several key components:', 'language': 'en'}),\n",
" Document(page_content='Weng, Lilian. (Jun 2023). “LLM-powered Autonomous Agents”. LilLog. https://lilianweng.github.io/posts/2023-06-23-agent/.', metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agents brain, complemented by several key components:', 'language': 'en'}),\n",
" Document(page_content='Fig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#', metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agents brain, complemented by several key components:', 'language': 'en'}),\n",
" Document(page_content=\"LLM Powered Autonomous Agents | Lil'Log\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nLil'Log\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nPosts\\n\\n\\n\\n\\nArchive\\n\\n\\n\\n\\nSearch\\n\\n\\n\\n\\nTags\\n\\n\\n\\n\\nFAQ\\n\\n\\n\\n\\nemojisearch.app\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n LLM Powered Autonomous Agents\\n \\nDate: June 23, 2023 | Estimated Reading Time: 31 min | Author: Lilian Weng\\n\\n\\n \\n\\n\\nTable of Contents\\n\\n\\n\\nAgent System Overview\\n\\nComponent One: Planning\\n\\nTask Decomposition\\n\\nSelf-Reflection\\n\\n\\nComponent Two: Memory\\n\\nTypes of Memory\\n\\nMaximum Inner Product Search (MIPS)\", metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agents brain, complemented by several key components:', 'language': 'en'})],\n",
" 'answer': 'Autonomous agents are entities that can operate independently, making decisions and taking actions without direct human intervention. These agents can perform tasks such as planning, executing complex experiments, and leveraging various tools and resources to achieve objectives. In the context provided, LLM-powered autonomous agents are specifically designed for scientific discovery, capable of handling tasks like designing novel anticancer drugs through reasoning steps.'}"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain import hub\n",
"from langchain.chains import create_retrieval_chain\n",
"from langchain.chains.combine_documents import create_stuff_documents_chain\n",
"\n",
"# See full prompt at https://smith.langchain.com/hub/langchain-ai/retrieval-qa-chat\n",
"retrieval_qa_chat_prompt = hub.pull(\"langchain-ai/retrieval-qa-chat\")\n",
"\n",
"combine_docs_chain = create_stuff_documents_chain(llm, retrieval_qa_chat_prompt)\n",
"rag_chain = create_retrieval_chain(vectorstore.as_retriever(), combine_docs_chain)\n",
"\n",
"rag_chain.invoke({\"input\": \"What are autonomous agents?\"})"
]
},
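{
"cell_type": "markdown",
"id": "5fe42762",
"metadata": {},
"source": [
"As noted above, you can also wrap the LCEL composition logic in a small helper function. The sketch below is illustrative (the helper name `make_qa_chain` is not part of LangChain); it reuses `format_docs`, the hub prompt, and the runnables imported earlier, and shows that the resulting runnable also supports streaming:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5fe42763",
"metadata": {},
"outputs": [],
"source": [
"# Illustrative helper that bundles the LCEL composition from above\n",
"def make_qa_chain(retriever, llm, prompt):\n",
"    return (\n",
"        {\"context\": retriever | format_docs, \"question\": RunnablePassthrough()}\n",
"        | prompt\n",
"        | llm\n",
"        | StrOutputParser()\n",
"    )\n",
"\n",
"\n",
"# Because the result is a runnable, streaming works out of the box\n",
"for chunk in make_qa_chain(vectorstore.as_retriever(), llm, prompt).stream(\n",
"    \"What are autonomous agents?\"\n",
"):\n",
"    print(chunk, end=\"\", flush=True)"
]
},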
{
"cell_type": "markdown",
"id": "2772f4e9",
"metadata": {},
"source": [
"## `ConversationalRetrievalChain`\n",
"<span data-heading-keywords=\"conversationalretrievalchain\"></span>\n",
"\n",
"The [`ConversationalRetrievalChain`](https://api.python.langchain.com/en/latest/chains/langchain.chains.conversational_retrieval.base.ConversationalRetrievalChain.html) was an all-in one way that combined retrieval-augmented generation with chat history, allowing you to \"chat with\" your documents.\n",
"\n",
"Advantages of switching to the LCEL implementation are similar to the `RetrievalQA` section above:\n",
"\n",
"- Clearer internals. The `ConversationalRetrievalChain` chain hides an entire question rephrasing step which dereferences the initial query against the chat history.\n",
" - This means the class contains two sets of configurable prompts, LLMs, etc.\n",
"- More easily return source documents.\n",
"- Support for runnable methods like streaming and async operations.\n",
"\n",
"Here are side-by-side implementations with custom prompts. We'll reuse the loaded documents and vector store from the previous section:"
]
},
{
"cell_type": "markdown",
"id": "8bc06416",
"metadata": {},
"source": [
"<ColumnContainer>\n",
"\n",
"<Column>\n",
"\n",
"#### Legacy"
]
},
{
"cell_type": "code",
"execution_count": 31,
"id": "54eb9576",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'question': 'What are autonomous agents?',\n",
" 'chat_history': '',\n",
" 'answer': 'Autonomous agents are powered by Large Language Models (LLMs) to handle tasks like scientific discovery and complex experiments autonomously. These agents can browse the internet, read documentation, execute code, and leverage other LLMs to perform tasks. They can reason and plan ahead to decompose complicated tasks into manageable steps.'}"
]
},
"execution_count": 31,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chains import ConversationalRetrievalChain\n",
"\n",
"condense_question_template = \"\"\"\n",
"Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question.\n",
"\n",
"Chat History:\n",
"{chat_history}\n",
"Follow Up Input: {question}\n",
"Standalone question:\"\"\"\n",
"\n",
"condense_question_prompt = ChatPromptTemplate.from_template(condense_question_template)\n",
"\n",
"qa_template = \"\"\"\n",
"You are an assistant for question-answering tasks.\n",
"Use the following pieces of retrieved context to answer\n",
"the question. If you don't know the answer, say that you\n",
"don't know. Use three sentences maximum and keep the\n",
"answer concise.\n",
"\n",
"Chat History:\n",
"{chat_history}\n",
"\n",
"Other context:\n",
"{context}\n",
"\n",
"Question: {question}\n",
"\"\"\"\n",
"\n",
"qa_prompt = ChatPromptTemplate.from_template(qa_template)\n",
"\n",
"convo_qa_chain = ConversationalRetrievalChain.from_llm(\n",
" llm,\n",
" vectorstore.as_retriever(),\n",
" condense_question_prompt=condense_question_prompt,\n",
" combine_docs_chain_kwargs={\n",
" \"prompt\": qa_prompt,\n",
" },\n",
")\n",
"\n",
"convo_qa_chain(\n",
" {\n",
" \"question\": \"What are autonomous agents?\",\n",
" \"chat_history\": \"\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "43a8a23c",
"metadata": {},
"source": [
"</Column>\n",
"\n",
"<Column>\n",
"\n",
"#### LCEL\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 25,
"id": "c884b138",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'input': 'What are autonomous agents?',\n",
" 'chat_history': [],\n",
" 'context': [Document(page_content='Boiko et al. (2023) also looked into LLM-empowered agents for scientific discovery, to handle autonomous design, planning, and performance of complex scientific experiments. This agent can use tools to browse the Internet, read documentation, execute code, call robotics experimentation APIs and leverage other LLMs.\\nFor example, when requested to \"develop a novel anticancer drug\", the model came up with the following reasoning steps:', metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agents brain, complemented by several key components:', 'language': 'en'}),\n",
" Document(page_content='Weng, Lilian. (Jun 2023). “LLM-powered Autonomous Agents”. LilLog. https://lilianweng.github.io/posts/2023-06-23-agent/.', metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agents brain, complemented by several key components:', 'language': 'en'}),\n",
" Document(page_content='Fig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#', metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agents brain, complemented by several key components:', 'language': 'en'}),\n",
" Document(page_content='Or\\n@article{weng2023agent,\\n title = \"LLM-powered Autonomous Agents\",\\n author = \"Weng, Lilian\",\\n journal = \"lilianweng.github.io\",\\n year = \"2023\",\\n month = \"Jun\",\\n url = \"https://lilianweng.github.io/posts/2023-06-23-agent/\"\\n}\\nReferences#\\n[1] Wei et al. “Chain of thought prompting elicits reasoning in large language models.” NeurIPS 2022\\n[2] Yao et al. “Tree of Thoughts: Dliberate Problem Solving with Large Language Models.” arXiv preprint arXiv:2305.10601 (2023).', metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agents brain, complemented by several key components:', 'language': 'en'})],\n",
" 'answer': 'Autonomous agents are entities capable of acting independently, making decisions, and performing tasks without direct human intervention. These agents can interact with their environment, perceive information, and take actions based on their goals or objectives. They often use artificial intelligence techniques to navigate and accomplish tasks in complex or dynamic environments.'}"
]
},
"execution_count": 25,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chains import create_history_aware_retriever, create_retrieval_chain\n",
"\n",
"condense_question_system_template = (\n",
" \"Given a chat history and the latest user question \"\n",
" \"which might reference context in the chat history, \"\n",
" \"formulate a standalone question which can be understood \"\n",
" \"without the chat history. Do NOT answer the question, \"\n",
" \"just reformulate it if needed and otherwise return it as is.\"\n",
")\n",
"\n",
"condense_question_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", condense_question_system_template),\n",
" (\"placeholder\", \"{chat_history}\"),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"history_aware_retriever = create_history_aware_retriever(\n",
" llm, vectorstore.as_retriever(), condense_question_prompt\n",
")\n",
"\n",
"system_prompt = (\n",
" \"You are an assistant for question-answering tasks. \"\n",
" \"Use the following pieces of retrieved context to answer \"\n",
" \"the question. If you don't know the answer, say that you \"\n",
" \"don't know. Use three sentences maximum and keep the \"\n",
" \"answer concise.\"\n",
" \"\\n\\n\"\n",
" \"{context}\"\n",
")\n",
"\n",
"qa_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", system_prompt),\n",
" (\"placeholder\", \"{chat_history}\"),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"qa_chain = create_stuff_documents_chain(llm, qa_prompt)\n",
"\n",
"convo_qa_chain = create_retrieval_chain(history_aware_retriever, qa_chain)\n",
"\n",
"convo_qa_chain.invoke(\n",
" {\n",
" \"input\": \"What are autonomous agents?\",\n",
" \"chat_history\": [],\n",
" }\n",
")"
]
},
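{
"cell_type": "markdown",
"id": "c884b139",
"metadata": {},
"source": [
"To carry the conversation forward, pass prior turns via `chat_history`. The cell below is a minimal sketch assuming the `convo_qa_chain` defined above; the follow-up question and example messages are illustrative:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c884b140",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.messages import AIMessage, HumanMessage\n",
"\n",
"# Prior turns are supplied as a list of messages under \"chat_history\"\n",
"convo_qa_chain.invoke(\n",
"    {\n",
"        \"input\": \"How do they decompose complex tasks?\",\n",
"        \"chat_history\": [\n",
"            HumanMessage(content=\"What are autonomous agents?\"),\n",
"            AIMessage(content=\"LLM-powered agents that plan and act on complex tasks.\"),\n",
"        ],\n",
"    }\n",
")"
]
},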
{
"cell_type": "markdown",
"id": "b2717810",
"metadata": {},
"source": [
"</Column>\n",
"\n",
"</ColumnContainer>\n",
"\n",
"## Next steps\n",
"\n",
"You've now seen how to migrate existing usage of some legacy chains to LCEL.\n",
"\n",
"Next, check out the [LCEL conceptual docs](/docs/concepts/#langchain-expression-language-lcel) for more background information."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.5"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -5,33 +5,36 @@
"id": "d9172545",
"metadata": {},
"source": [
"# How to use the MultiVector Retriever\n",
"# How to retrieve using multiple vectors per document\n",
"\n",
"It can often be beneficial to store multiple vectors per document. There are multiple use cases where this is beneficial. LangChain has a base `MultiVectorRetriever` which makes querying this type of setup easy. A lot of the complexity lies in how to create the multiple vectors per document. This notebook covers some of the common ways to create those vectors and use the `MultiVectorRetriever`.\n",
"It can often be useful to store multiple vectors per document. There are multiple use cases where this is beneficial. For example, we can embed multiple chunks of a document and associate those embeddings with the parent document, allowing retriever hits on the chunks to return the larger document.\n",
"\n",
"LangChain implements a base [MultiVectorRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.multi_vector.MultiVectorRetriever.html), which simplifies this process. Much of the complexity lies in how to create the multiple vectors per document. This notebook covers some of the common ways to create those vectors and use the `MultiVectorRetriever`.\n",
"\n",
"The methods to create multiple vectors per document include:\n",
"\n",
"- Smaller chunks: split a document into smaller chunks, and embed those (this is ParentDocumentRetriever).\n",
"- Smaller chunks: split a document into smaller chunks, and embed those (this is [ParentDocumentRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.parent_document_retriever.ParentDocumentRetriever.html)).\n",
"- Summary: create a summary for each document, embed that along with (or instead of) the document.\n",
"- Hypothetical questions: create hypothetical questions that each document would be appropriate to answer, embed those along with (or instead of) the document.\n",
"\n",
"Note that this also enables another method of adding embeddings - manually. This is useful because you can explicitly add questions or queries that should lead to a document being recovered, giving you more control.\n",
"\n",
"Note that this also enables another method of adding embeddings - manually. This is great because you can explicitly add questions or queries that should lead to a document being recovered, giving you more control."
"Below we walk through an example. First we instantiate some documents. We will index them in an (in-memory) [Chroma](/docs/integrations/providers/chroma/) vector store using [OpenAI](https://python.langchain.com/v0.2/docs/integrations/text_embedding/openai/) embeddings, but any LangChain vector store or embeddings model will suffice."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "09cecd95-3499-465a-895a-944627ffb77f",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-chroma langchain langchain-openai > /dev/null"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "eed469be",
"metadata": {},
"outputs": [],
"source": [
"from langchain.retrievers.multi_vector import MultiVectorRetriever"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "18c1421a",
"metadata": {},
"outputs": [],
@@ -40,25 +43,22 @@
"from langchain_chroma import Chroma\n",
"from langchain_community.document_loaders import TextLoader\n",
"from langchain_openai import OpenAIEmbeddings\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "6d869496",
"metadata": {},
"outputs": [],
"source": [
"from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
"\n",
"loaders = [\n",
" TextLoader(\"../../paul_graham_essay.txt\"),\n",
" TextLoader(\"paul_graham_essay.txt\"),\n",
" TextLoader(\"state_of_the_union.txt\"),\n",
"]\n",
"docs = []\n",
"for loader in loaders:\n",
" docs.extend(loader.load())\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=10000)\n",
"docs = text_splitter.split_documents(docs)"
"docs = text_splitter.split_documents(docs)\n",
"\n",
"# The vectorstore to use to index the child chunks\n",
"vectorstore = Chroma(\n",
" collection_name=\"full_documents\", embedding_function=OpenAIEmbeddings()\n",
")"
]
},
{
@@ -68,52 +68,54 @@
"source": [
"## Smaller chunks\n",
"\n",
"Often times it can be useful to retrieve larger chunks of information, but embed smaller chunks. This allows for embeddings to capture the semantic meaning as closely as possible, but for as much context as possible to be passed downstream. Note that this is what the `ParentDocumentRetriever` does. Here we show what is going on under the hood."
"Often times it can be useful to retrieve larger chunks of information, but embed smaller chunks. This allows for embeddings to capture the semantic meaning as closely as possible, but for as much context as possible to be passed downstream. Note that this is what the [ParentDocumentRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.parent_document_retriever.ParentDocumentRetriever.html) does. Here we show what is going on under the hood.\n",
"\n",
"We will make a distinction between the vector store, which indexes embeddings of the (sub) documents, and the document store, which houses the \"parent\" documents and associates them with an identifier."
]
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 2,
"id": "0e7b6b45",
"metadata": {},
"outputs": [],
"source": [
"# The vectorstore to use to index the child chunks\n",
"vectorstore = Chroma(\n",
" collection_name=\"full_documents\", embedding_function=OpenAIEmbeddings()\n",
")\n",
"import uuid\n",
"\n",
"from langchain.retrievers.multi_vector import MultiVectorRetriever\n",
"\n",
"# The storage layer for the parent documents\n",
"store = InMemoryByteStore()\n",
"id_key = \"doc_id\"\n",
"\n",
"# The retriever (empty to start)\n",
"retriever = MultiVectorRetriever(\n",
" vectorstore=vectorstore,\n",
" byte_store=store,\n",
" id_key=id_key,\n",
")\n",
"import uuid\n",
"\n",
"doc_ids = [str(uuid.uuid4()) for _ in docs]"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "72a36491",
"cell_type": "markdown",
"id": "d4feded4-856a-4282-91c3-53aabc62e6ff",
"metadata": {},
"outputs": [],
"source": [
"# The splitter to use to create smaller chunks\n",
"child_text_splitter = RecursiveCharacterTextSplitter(chunk_size=400)"
"We next generate the \"sub\" documents by splitting the original documents. Note that we store the document identifier in the `metadata` of the corresponding [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) object."
]
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 3,
"id": "5d23247d",
"metadata": {},
"outputs": [],
"source": [
"# The splitter to use to create smaller chunks\n",
"child_text_splitter = RecursiveCharacterTextSplitter(chunk_size=400)\n",
"\n",
"sub_docs = []\n",
"for i, doc in enumerate(docs):\n",
" _id = doc_ids[i]\n",
@@ -123,9 +125,17 @@
" sub_docs.extend(_sub_docs)"
]
},
{
"cell_type": "markdown",
"id": "8e0634f8-90d5-4250-981a-5257c8a6d455",
"metadata": {},
"source": [
"Finally, we index the documents in our vector store and document store:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 4,
"id": "92ed5861",
"metadata": {},
"outputs": [],
@@ -134,31 +144,46 @@
"retriever.docstore.mset(list(zip(doc_ids, docs)))"
]
},
{
"cell_type": "markdown",
"id": "14c48c6d-850c-4317-9b6e-1ade92f2f710",
"metadata": {},
"source": [
"The vector store alone will retrieve small chunks:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 5,
"id": "8afed60c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(page_content='Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \\n\\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.', metadata={'doc_id': '2fd77862-9ed5-4fad-bf76-e487b747b333', 'source': 'state_of_the_union.txt'})"
"Document(page_content='Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \\n\\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.', metadata={'doc_id': '064eca46-a4c4-4789-8e3b-583f9597e54f', 'source': 'state_of_the_union.txt'})"
]
},
"execution_count": 8,
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Vectorstore alone retrieves the small chunks\n",
"retriever.vectorstore.similarity_search(\"justice breyer\")[0]"
]
},
{
"cell_type": "markdown",
"id": "717097c7-61d9-4306-8625-ef8f1940c127",
"metadata": {},
"source": [
"Whereas the retriever will return the larger parent document:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 6,
"id": "3c9017f1",
"metadata": {},
"outputs": [
@@ -168,14 +193,13 @@
"9875"
]
},
"execution_count": 9,
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Retriever returns larger chunks\n",
"len(retriever.get_relevant_documents(\"justice breyer\")[0].page_content)"
"len(retriever.invoke(\"justice breyer\")[0].page_content)"
]
},
{
@@ -183,12 +207,12 @@
"id": "cdef8339-f9fa-4b3b-955f-ad9dbdf2734f",
"metadata": {},
"source": [
"The default search type the retriever performs on the vector database is a similarity search. LangChain Vector Stores also support searching via [Max Marginal Relevance](https://api.python.langchain.com/en/latest/vectorstores/langchain_core.vectorstores.VectorStore.html#langchain_core.vectorstores.VectorStore.max_marginal_relevance_search) so if you want this instead you can just set the `search_type` property as follows:"
"The default search type the retriever performs on the vector database is a similarity search. LangChain vector stores also support searching via [Max Marginal Relevance](https://api.python.langchain.com/en/latest/vectorstores/langchain_core.vectorstores.VectorStore.html#langchain_core.vectorstores.VectorStore.max_marginal_relevance_search). This can be controlled via the `search_type` parameter of the retriever:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 7,
"id": "36739460-a737-4a8e-b70f-50bf8c8eaae7",
"metadata": {},
"outputs": [
@@ -198,7 +222,7 @@
"9875"
]
},
"execution_count": 10,
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
@@ -208,7 +232,7 @@
"\n",
"retriever.search_type = SearchType.mmr\n",
"\n",
"len(retriever.get_relevant_documents(\"justice breyer\")[0].page_content)"
"len(retriever.invoke(\"justice breyer\")[0].page_content)"
]
},
{
@@ -216,14 +240,37 @@
"id": "d6a7ae0d",
"metadata": {},
"source": [
"## Summary\n",
"## Associating summaries with a document for retrieval\n",
"\n",
"Oftentimes a summary may be able to distill more accurately what a chunk is about, leading to better retrieval. Here we show how to create summaries, and then embed those."
"A summary may be able to distill more accurately what a chunk is about, leading to better retrieval. Here we show how to create summaries, and then embed those.\n",
"\n",
"We construct a simple [chain](/docs/how_to/sequence) that will receive an input [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) object and generate a summary using a LLM.\n",
"\n",
"```{=mdx}\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
"\n",
"<ChatModelTabs customVarName=\"llm\" />\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 8,
"id": "6589291f-55bb-4e9a-b4ff-08f2506ed641",
"metadata": {},
"outputs": [],
"source": [
"# | output: false\n",
"# | echo: false\n",
"\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI()"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "1433dff4",
"metadata": {},
"outputs": [],
@@ -233,27 +280,26 @@
"from langchain_core.documents import Document\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_openai import ChatOpenAI"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "35b30390",
"metadata": {},
"outputs": [],
"source": [
"\n",
"chain = (\n",
" {\"doc\": lambda x: x.page_content}\n",
" | ChatPromptTemplate.from_template(\"Summarize the following document:\\n\\n{doc}\")\n",
" | ChatOpenAI(max_retries=0)\n",
" | llm\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "markdown",
"id": "3faa9fde-1b09-4849-a815-8b2e89c30a02",
"metadata": {},
"source": [
"Note that we can [batch](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable) the chain accross documents:"
]
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 10,
"id": "41a2a738",
"metadata": {},
"outputs": [],
@@ -261,9 +307,17 @@
"summaries = chain.batch(docs, {\"max_concurrency\": 5})"
]
},
{
"cell_type": "markdown",
"id": "73ef599e-140b-4905-8b62-6c52cdde1852",
"metadata": {},
"source": [
"We can then initialize a `MultiVectorRetriever` as before, indexing the summaries in our vector store, and retaining the original documents in our document store:"
]
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": 11,
"id": "7ac5e4b1",
"metadata": {},
"outputs": [],
@@ -279,29 +333,13 @@
" byte_store=store,\n",
" id_key=id_key,\n",
")\n",
"doc_ids = [str(uuid.uuid4()) for _ in docs]"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "0d93309f",
"metadata": {},
"outputs": [],
"source": [
"doc_ids = [str(uuid.uuid4()) for _ in docs]\n",
"\n",
"summary_docs = [\n",
" Document(page_content=s, metadata={id_key: doc_ids[i]})\n",
" for i, s in enumerate(summaries)\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "6d5edf0d",
"metadata": {},
"outputs": [],
"source": [
"]\n",
"\n",
"retriever.vectorstore.add_documents(summary_docs)\n",
"retriever.docstore.mset(list(zip(doc_ids, docs)))"
]
@@ -320,50 +358,48 @@
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "299232d6",
"cell_type": "markdown",
"id": "f0274892-29c1-4616-9040-d23f9d537526",
"metadata": {},
"outputs": [],
"source": [
"sub_docs = vectorstore.similarity_search(\"justice breyer\")"
"Querying the vector store will return summaries:"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "10e404c0",
"execution_count": 12,
"id": "299232d6",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(page_content=\"The document is a speech given by President Biden addressing various issues and outlining his agenda for the nation. He highlights the importance of nominating a Supreme Court justice and introduces his nominee, Judge Ketanji Brown Jackson. He emphasizes the need to secure the border and reform the immigration system, including providing a pathway to citizenship for Dreamers and essential workers. The President also discusses the protection of women's rights, including access to healthcare and the right to choose. He calls for the passage of the Equality Act to protect LGBTQ+ rights. Additionally, President Biden discusses the need to address the opioid epidemic, improve mental health services, support veterans, and fight against cancer. He expresses optimism for the future of America and the strength of the American people.\", metadata={'doc_id': '56345bff-3ead-418c-a4ff-dff203f77474'})"
"Document(page_content=\"President Biden recently nominated Judge Ketanji Brown Jackson to serve on the United States Supreme Court, emphasizing her qualifications and broad support. The President also outlined a plan to secure the border, fix the immigration system, protect women's rights, support LGBTQ+ Americans, and advance mental health services. He highlighted the importance of bipartisan unity in passing legislation, such as the Violence Against Women Act. The President also addressed supporting veterans, particularly those impacted by exposure to burn pits, and announced plans to expand benefits for veterans with respiratory cancers. Additionally, he proposed a plan to end cancer as we know it through the Cancer Moonshot initiative. President Biden expressed optimism about the future of America and emphasized the strength of the American people in overcoming challenges.\", metadata={'doc_id': '84015b1b-980e-400a-94d8-cf95d7e079bd'})"
]
},
"execution_count": 19,
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"sub_docs = retriever.vectorstore.similarity_search(\"justice breyer\")\n",
"\n",
"sub_docs[0]"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "e4cce5c2",
"cell_type": "markdown",
"id": "e4f77ac5-2926-4f60-aad5-b2067900dff9",
"metadata": {},
"outputs": [],
"source": [
"retrieved_docs = retriever.get_relevant_documents(\"justice breyer\")"
"Whereas the retriever will return the larger source document:"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "c8570dbb",
"execution_count": 13,
"id": "e4cce5c2",
"metadata": {},
"outputs": [
{
@@ -372,12 +408,14 @@
"9194"
]
},
"execution_count": 21,
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"retrieved_docs = retriever.invoke(\"justice breyer\")\n",
"\n",
"len(retrieved_docs[0].page_content)"
]
},
@@ -388,42 +426,28 @@
"source": [
"## Hypothetical Queries\n",
"\n",
"An LLM can also be used to generate a list of hypothetical questions that could be asked of a particular document. These questions can then be embedded"
"An LLM can also be used to generate a list of hypothetical questions that could be asked of a particular document, which might bear close semantic similarity to relevant queries in a [RAG](/docs/tutorials/rag) application. These questions can then be embedded and associated with the documents to improve retrieval.\n",
"\n",
"Below, we use the [with_structured_output](/docs/how_to/structured_output/) method to structure the LLM output into a list of strings."
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "5219b085",
"execution_count": 16,
"id": "03d85234-c33a-4a43-861d-47328e1ec2ea",
"metadata": {},
"outputs": [],
"source": [
"functions = [\n",
" {\n",
" \"name\": \"hypothetical_questions\",\n",
" \"description\": \"Generate hypothetical questions\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"questions\": {\n",
" \"type\": \"array\",\n",
" \"items\": {\"type\": \"string\"},\n",
" },\n",
" },\n",
" \"required\": [\"questions\"],\n",
" },\n",
" }\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 23,
"id": "523deb92",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers.openai_functions import JsonKeyOutputFunctionsParser\n",
"from typing import List\n",
"\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"class HypotheticalQuestions(BaseModel):\n",
" \"\"\"Generate hypothetical questions.\"\"\"\n",
"\n",
" questions: List[str] = Field(..., description=\"List of questions\")\n",
"\n",
"\n",
"chain = (\n",
" {\"doc\": lambda x: x.page_content}\n",
@@ -431,28 +455,36 @@
" | ChatPromptTemplate.from_template(\n",
" \"Generate a list of exactly 3 hypothetical questions that the below document could be used to answer:\\n\\n{doc}\"\n",
" )\n",
" | ChatOpenAI(max_retries=0, model=\"gpt-4\").bind(\n",
" functions=functions, function_call={\"name\": \"hypothetical_questions\"}\n",
" | ChatOpenAI(max_retries=0, model=\"gpt-4o\").with_structured_output(\n",
" HypotheticalQuestions\n",
" )\n",
" | JsonKeyOutputFunctionsParser(key_name=\"questions\")\n",
" | (lambda x: x.questions)\n",
")"
]
},
{
"cell_type": "markdown",
"id": "6dddc40f-62af-413c-b944-f94a5e1f2f4e",
"metadata": {},
"source": [
"Invoking the chain on a single document demonstrates that it outputs a list of questions:"
]
},
{
"cell_type": "code",
"execution_count": 24,
"execution_count": 17,
"id": "11d30554",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[\"What was the author's first experience with programming like?\",\n",
" 'Why did the author switch their focus from AI to Lisp during their graduate studies?',\n",
" 'What led the author to contemplate a career in art instead of computer science?']"
"[\"What impact did the IBM 1401 have on the author's early programming experiences?\",\n",
" \"How did the transition from using the IBM 1401 to microcomputers influence the author's programming journey?\",\n",
" \"What role did Lisp play in shaping the author's understanding and approach to AI?\"]"
]
},
"execution_count": 24,
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
@@ -462,22 +494,24 @@
]
},
{
"cell_type": "code",
"execution_count": 25,
"id": "3eb2e48c",
"cell_type": "markdown",
"id": "dcffc572-7b20-4b77-857a-90ec360a8f7e",
"metadata": {},
"outputs": [],
"source": [
"hypothetical_questions = chain.batch(docs, {\"max_concurrency\": 5})"
"We can batch then batch the chain over all documents and assemble our vector store and document store as before:"
]
},
{
"cell_type": "code",
"execution_count": 26,
"execution_count": 18,
"id": "b2cd6e75",
"metadata": {},
"outputs": [],
"source": [
"# Batch chain over documents to generate hypothetical questions\n",
"hypothetical_questions = chain.batch(docs, {\"max_concurrency\": 5})\n",
"\n",
"\n",
"# The vectorstore to use to index the child chunks\n",
"vectorstore = Chroma(\n",
" collection_name=\"hypo-questions\", embedding_function=OpenAIEmbeddings()\n",
@@ -491,82 +525,67 @@
" byte_store=store,\n",
" id_key=id_key,\n",
")\n",
"doc_ids = [str(uuid.uuid4()) for _ in docs]"
]
},
{
"cell_type": "code",
"execution_count": 27,
"id": "18831b3b",
"metadata": {},
"outputs": [],
"source": [
"doc_ids = [str(uuid.uuid4()) for _ in docs]\n",
"\n",
"\n",
"# Generate Document objects from hypothetical questions\n",
"question_docs = []\n",
"for i, question_list in enumerate(hypothetical_questions):\n",
" question_docs.extend(\n",
" [Document(page_content=s, metadata={id_key: doc_ids[i]}) for s in question_list]\n",
" )"
]
},
{
"cell_type": "code",
"execution_count": 28,
"id": "224b24c5",
"metadata": {},
"outputs": [],
"source": [
" )\n",
"\n",
"\n",
"retriever.vectorstore.add_documents(question_docs)\n",
"retriever.docstore.mset(list(zip(doc_ids, docs)))"
]
},
{
"cell_type": "code",
"execution_count": 29,
"id": "7b442b90",
"cell_type": "markdown",
"id": "75cba8ab-a06f-4545-85fc-cf49d0204b5e",
"metadata": {},
"outputs": [],
"source": [
"sub_docs = vectorstore.similarity_search(\"justice breyer\")"
"Note that querying the underlying vector store will retrieve hypothetical questions that are semantically similar to the input query:"
]
},
{
"cell_type": "code",
"execution_count": 30,
"id": "089b5ad0",
"execution_count": 19,
"id": "7b442b90",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='Who has been nominated to serve on the United States Supreme Court?', metadata={'doc_id': '0b3a349e-c936-4e77-9c40-0a39fc3e07f0'}),\n",
" Document(page_content=\"What was the context and content of Robert Morris' advice to the document's author in 2010?\", metadata={'doc_id': 'b2b2cdca-988a-4af1-ba47-46170770bc8c'}),\n",
" Document(page_content='How did personal circumstances influence the decision to pass on the leadership of Y Combinator?', metadata={'doc_id': 'b2b2cdca-988a-4af1-ba47-46170770bc8c'}),\n",
" Document(page_content='What were the reasons for the author leaving Yahoo in the summer of 1999?', metadata={'doc_id': 'ce4f4981-ca60-4f56-86f0-89466de62325'})]"
"[Document(page_content='What might be the potential benefits of nominating Circuit Court of Appeals Judge Ketanji Brown Jackson to the United States Supreme Court?', metadata={'doc_id': '43292b74-d1b8-4200-8a8b-ea0cb57fbcdb'}),\n",
" Document(page_content='How might the Bipartisan Infrastructure Law impact the economic competition between the U.S. and China?', metadata={'doc_id': '66174780-d00c-4166-9791-f0069846e734'}),\n",
" Document(page_content='What factors led to the creation of Y Combinator?', metadata={'doc_id': '72003c4e-4cc9-4f09-a787-0b541a65b38c'}),\n",
" Document(page_content='How did the ability to publish essays online change the landscape for writers and thinkers?', metadata={'doc_id': 'e8d2c648-f245-4bcc-b8d3-14e64a164b64'})]"
]
},
"execution_count": 30,
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"sub_docs = retriever.vectorstore.similarity_search(\"justice breyer\")\n",
"\n",
"sub_docs"
]
},
{
"cell_type": "code",
"execution_count": 31,
"id": "7594b24e",
"cell_type": "markdown",
"id": "63c32e43-5f4a-463b-a0c2-2101986f70e6",
"metadata": {},
"outputs": [],
"source": [
"retrieved_docs = retriever.get_relevant_documents(\"justice breyer\")"
"And invoking the retriever will return the corresponding document:"
]
},
{
"cell_type": "code",
"execution_count": 32,
"id": "4c120c65",
"execution_count": 20,
"id": "7594b24e",
"metadata": {},
"outputs": [
{
@@ -575,22 +594,15 @@
"9194"
]
},
"execution_count": 32,
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"retrieved_docs = retriever.invoke(\"justice breyer\")\n",
"len(retrieved_docs[0].page_content)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "005072b8",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@@ -609,7 +621,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -0,0 +1,228 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "4facdf7f-680e-4d28-908b-2b8408e2a741",
"metadata": {},
"source": [
"# How to pass multimodal data directly to models\n",
"\n",
"Here we demonstrate how to pass multimodal input directly to models. \n",
"We currently expect all input to be passed in the same format as [OpenAI expects](https://platform.openai.com/docs/guides/vision).\n",
"For other model providers that support multimodal input, we have added logic inside the class to convert to the expected format.\n",
"\n",
"In this example we will ask a model to describe an image."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "0d9fd81a-b7f0-445a-8e3d-cfc2d31fdd59",
"metadata": {},
"outputs": [],
"source": [
"image_url = \"https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg\""
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "fb896ce9",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.messages import HumanMessage\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"model = ChatOpenAI(model=\"gpt-4o\")"
]
},
{
"cell_type": "markdown",
"id": "4fca4da7",
"metadata": {},
"source": [
"The most commonly supported way to pass in images is to pass it in as a byte string.\n",
"This should work for most model integrations."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "9ca1040c",
"metadata": {},
"outputs": [],
"source": [
"import base64\n",
"\n",
"import httpx\n",
"\n",
"image_data = base64.b64encode(httpx.get(image_url).content).decode(\"utf-8\")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "ec680b6b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The weather in the image appears to be clear and pleasant. The sky is mostly blue with scattered, light clouds, suggesting a sunny day with minimal cloud cover. There is no indication of rain or strong winds, and the overall scene looks bright and calm. The lush green grass and clear visibility further indicate good weather conditions.\n"
]
}
],
"source": [
"message = HumanMessage(\n",
" content=[\n",
" {\"type\": \"text\", \"text\": \"describe the weather in this image\"},\n",
" {\n",
" \"type\": \"image_url\",\n",
" \"image_url\": {\"url\": f\"data:image/jpeg;base64,{image_data}\"},\n",
" },\n",
" ],\n",
")\n",
"response = model.invoke([message])\n",
"print(response.content)"
]
},
{
"cell_type": "markdown",
"id": "8656018e-c56d-47d2-b2be-71e87827f90a",
"metadata": {},
"source": [
"We can feed the image URL directly in a content block of type \"image_url\". Note that only some model providers support this."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "a8819cf3-5ddc-44f0-889a-19ca7b7fe77e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The weather in the image appears to be clear and sunny. The sky is mostly blue with a few scattered clouds, suggesting good visibility and a likely pleasant temperature. The bright sunlight is casting distinct shadows on the grass and vegetation, indicating it is likely daytime, possibly late morning or early afternoon. The overall ambiance suggests a warm and inviting day, suitable for outdoor activities.\n"
]
}
],
"source": [
"message = HumanMessage(\n",
" content=[\n",
" {\"type\": \"text\", \"text\": \"describe the weather in this image\"},\n",
" {\"type\": \"image_url\", \"image_url\": {\"url\": image_url}},\n",
" ],\n",
")\n",
"response = model.invoke([message])\n",
"print(response.content)"
]
},
{
"cell_type": "markdown",
"id": "1c470309",
"metadata": {},
"source": [
"We can also pass in multiple images."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "325fb4ca",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Yes, the two images are the same. They both depict a wooden boardwalk extending through a grassy field under a blue sky with light clouds. The scenery, lighting, and composition are identical.\n"
]
}
],
"source": [
"message = HumanMessage(\n",
" content=[\n",
" {\"type\": \"text\", \"text\": \"are these two images the same?\"},\n",
" {\"type\": \"image_url\", \"image_url\": {\"url\": image_url}},\n",
" {\"type\": \"image_url\", \"image_url\": {\"url\": image_url}},\n",
" ],\n",
")\n",
"response = model.invoke([message])\n",
"print(response.content)"
]
},
{
"cell_type": "markdown",
"id": "71bd28cf-d76c-44e2-a55e-c5f265db986e",
"metadata": {},
"source": [
"## Tool calls\n",
"\n",
"Some multimodal models support [tool calling](/docs/concepts/#functiontool-calling) features as well. To call tools using such models, simply bind tools to them in the [usual way](/docs/how_to/tool_calling), and invoke the model using content blocks of the desired type (e.g., containing image data)."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "cd22ea82-2f93-46f9-9f7a-6aaf479fcaa9",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[{'name': 'weather_tool', 'args': {'weather': 'sunny'}, 'id': 'call_BSX4oq4SKnLlp2WlzDhToHBr'}]\n"
]
}
],
"source": [
"from typing import Literal\n",
"\n",
"from langchain_core.tools import tool\n",
"\n",
"\n",
"@tool\n",
"def weather_tool(weather: Literal[\"sunny\", \"cloudy\", \"rainy\"]) -> None:\n",
" \"\"\"Describe the weather\"\"\"\n",
" pass\n",
"\n",
"\n",
"model_with_tools = model.bind_tools([weather_tool])\n",
"\n",
"message = HumanMessage(\n",
" content=[\n",
" {\"type\": \"text\", \"text\": \"describe the weather in this image\"},\n",
" {\"type\": \"image_url\", \"image_url\": {\"url\": image_url}},\n",
" ],\n",
")\n",
"response = model_with_tools.invoke([message])\n",
"print(response.tool_calls)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -0,0 +1,189 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "4facdf7f-680e-4d28-908b-2b8408e2a741",
"metadata": {},
"source": [
"# How to use multimodal prompts\n",
"\n",
"Here we demonstrate how to use prompt templates to format multimodal inputs to models. \n",
"\n",
"In this example we will ask a model to describe an image."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "0d9fd81a-b7f0-445a-8e3d-cfc2d31fdd59",
"metadata": {},
"outputs": [],
"source": [
"import base64\n",
"\n",
"import httpx\n",
"\n",
"image_url = \"https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg\"\n",
"image_data = base64.b64encode(httpx.get(image_url).content).decode(\"utf-8\")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "2671f995",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"model = ChatOpenAI(model=\"gpt-4o\")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "4ee35e4f",
"metadata": {},
"outputs": [],
"source": [
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", \"Describe the image provided\"),\n",
" (\n",
" \"user\",\n",
" [\n",
" {\n",
" \"type\": \"image_url\",\n",
" \"image_url\": {\"url\": \"data:image/jpeg;base64,{image_data}\"},\n",
" }\n",
" ],\n",
" ),\n",
" ]\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "089f75c2",
"metadata": {},
"outputs": [],
"source": [
"chain = prompt | model"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "02744b06",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The image depicts a sunny day with a beautiful blue sky filled with scattered white clouds. The sky has varying shades of blue, ranging from a deeper hue near the horizon to a lighter, almost pale blue higher up. The white clouds are fluffy and scattered across the expanse of the sky, creating a peaceful and serene atmosphere. The lighting and cloud patterns suggest pleasant weather conditions, likely during the daytime hours on a mild, sunny day in an outdoor natural setting.\n"
]
}
],
"source": [
"response = chain.invoke({\"image_data\": image_data})\n",
"print(response.content)"
]
},
{
"cell_type": "markdown",
"id": "e9b9ebf6",
"metadata": {},
"source": [
"We can also pass in multiple images."
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "02190ee3",
"metadata": {},
"outputs": [],
"source": [
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", \"compare the two pictures provided\"),\n",
" (\n",
" \"user\",\n",
" [\n",
" {\n",
" \"type\": \"image_url\",\n",
" \"image_url\": {\"url\": \"data:image/jpeg;base64,{image_data1}\"},\n",
" },\n",
" {\n",
" \"type\": \"image_url\",\n",
" \"image_url\": {\"url\": \"data:image/jpeg;base64,{image_data2}\"},\n",
" },\n",
" ],\n",
" ),\n",
" ]\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "42af057b",
"metadata": {},
"outputs": [],
"source": [
"chain = prompt | model"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "513abe00",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The two images provided are identical. Both images feature a wooden boardwalk path extending through a lush green field under a bright blue sky with some clouds. The perspective, colors, and elements in both images are exactly the same.\n"
]
}
],
"source": [
"response = chain.invoke({\"image_data1\": image_data, \"image_data2\": image_data})\n",
"print(response.content)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ea8152c3",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -94,7 +94,7 @@
"source": [
"## LCEL\n",
"\n",
"Output parsers implement the [Runnable interface](/docs/concepts#interface), the basic building block of the [LangChain Expression Language (LCEL)](/docs/concepts#langchain-expression-language). This means they support `invoke`, `ainvoke`, `stream`, `astream`, `batch`, `abatch`, `astream_log` calls.\n",
"Output parsers implement the [Runnable interface](/docs/concepts#interface), the basic building block of the [LangChain Expression Language (LCEL)](/docs/concepts#langchain-expression-language-lcel). This means they support `invoke`, `ainvoke`, `stream`, `astream`, `batch`, `abatch`, `astream_log` calls.\n",
"\n",
"Output parsers accept a string or `BaseMessage` as input and can return an arbitrary type."
]

View File

@@ -0,0 +1,107 @@
# How to use LangChain with different Pydantic versions
- Pydantic v2 was released in June, 2023 (https://docs.pydantic.dev/2.0/blog/pydantic-v2-final/)
- v2 contains a number of breaking changes (https://docs.pydantic.dev/2.0/migration/)
- Pydantic v2 and v1 are under the same package name, so both versions cannot be installed at the same time
## LangChain Pydantic migration plan
As of `langchain>=0.0.267`, LangChain will allow users to install either Pydantic V1 or V2.
* Internally LangChain will continue to [use V1](https://docs.pydantic.dev/latest/migration/#continue-using-pydantic-v1-features).
* During this time, users can pin their pydantic version to v1 to avoid breaking changes, or start a partial migration using pydantic v2 throughout their code, while avoiding mixing v1 and v2 code for LangChain (see below).

Users can either pin to pydantic v1 and upgrade their code in one go once LangChain has migrated to v2 internally, or start a partial migration to v2 now, as long as they avoid mixing v1 and v2 code for LangChain.
Below are two examples showing how to avoid mixing pydantic v1 and v2 code, in the case of inheritance and in the case of passing objects to LangChain.
**Example 1: Extending via inheritance**
**YES**
```python
from pydantic.v1 import Field, root_validator, validator
from langchain_core.tools import BaseTool

class CustomTool(BaseTool): # BaseTool is v1 code
    x: int = Field(default=1)

    def _run(*args, **kwargs):
        return "hello"

    @validator('x') # v1 code
    @classmethod
    def validate_x(cls, x: int) -> int:
        return 1

CustomTool(
    name='custom_tool',
    description="hello",
    x=1,
)
```
Mixing Pydantic v2 primitives with Pydantic v1 primitives can raise cryptic errors.
**NO**
```python
from pydantic import Field, field_validator # pydantic v2
from langchain_core.tools import BaseTool

class CustomTool(BaseTool): # BaseTool is v1 code
    x: int = Field(default=1)

    def _run(*args, **kwargs):
        return "hello"

    @field_validator('x') # v2 code
    @classmethod
    def validate_x(cls, x: int) -> int:
        return 1

CustomTool(
    name='custom_tool',
    description="hello",
    x=1,
)
```
**Example 2: Passing objects to LangChain**
**YES**
```python
from langchain_core.tools import Tool
from pydantic.v1 import BaseModel, Field  # <-- Uses v1 namespace


class CalculatorInput(BaseModel):
    question: str = Field()


Tool.from_function(  # <-- tool uses v1 namespace
    func=lambda question: 'hello',
    name="Calculator",
    description="useful for when you need to answer questions about math",
    args_schema=CalculatorInput
)
```
**NO**
```python
from langchain_core.tools import Tool
from pydantic import BaseModel, Field  # <-- Uses v2 namespace


class CalculatorInput(BaseModel):
    question: str = Field()


Tool.from_function(  # <-- tool uses v1 namespace
    func=lambda question: 'hello',
    name="Calculator",
    description="useful for when you need to answer questions about math",
    args_schema=CalculatorInput
)
```
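As an aside, and only as an illustrative sketch rather than a recommendation from this guide: because Pydantic 2 re-exports the v1 API under the `pydantic.v1` namespace, code that must run against either major version can try the v1 namespace first and fall back to the top-level import on older installations.
```python
try:
    # Pydantic 2 (and some late 1.10.x releases) expose the v1 API here
    from pydantic.v1 import BaseModel, Field
except ImportError:
    # Older Pydantic 1.x releases have no pydantic.v1 module
    from pydantic import BaseModel, Field


class CalculatorInput(BaseModel):  # usable as args_schema, as in Example 2
    question: str = Field()
```
Either branch yields v1-compatible primitives, which is what LangChain currently expects for objects such as `args_schema`.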

View File

@@ -36,12 +36,13 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"id": "ede7fdc0-ef31-483d-bd67-32e4b5c5d527",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain langchain-community langchainhub langchain-chroma bs4"
"%%capture --no-stderr\n",
"%pip install --upgrade --quiet langchain langchain-community langchain-chroma bs4"
]
},
{
@@ -54,7 +55,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 2,
"id": "143787ca-d8e6-4dc9-8281-4374f4d71720",
"metadata": {},
"outputs": [],
@@ -62,7 +63,8 @@
"import getpass\n",
"import os\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()\n",
"if not os.environ.get(\"OPENAI_API_KEY\"):\n",
" os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()\n",
"\n",
"# import dotenv\n",
"\n",
@@ -83,13 +85,14 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 3,
"id": "07411adb-3722-4f65-ab7f-8f6f57663d11",
"metadata": {},
"outputs": [],
"source": [
"os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()"
"if not os.environ.get(\"LANGCHAIN_API_KEY\"):\n",
" os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()"
]
},
{
@@ -126,7 +129,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 4,
"id": "cb58f273-2111-4a9b-8932-9b64c95030c8",
"metadata": {},
"outputs": [],
@@ -157,13 +160,12 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 5,
"id": "820244ae-74b4-4593-b392-822979dd91b8",
"metadata": {},
"outputs": [],
"source": [
"import bs4\n",
"from langchain import hub\n",
"from langchain.chains import create_retrieval_chain\n",
"from langchain.chains.combine_documents import create_stuff_documents_chain\n",
"from langchain_chroma import Chroma\n",
@@ -202,7 +204,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 6,
"id": "2b685428-8b82-4af1-be4f-7232c5d55b73",
"metadata": {},
"outputs": [],
@@ -239,7 +241,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 7,
"id": "4c4b1695-6217-4ee8-abaf-7cc26366d988",
"metadata": {},
"outputs": [],
@@ -265,7 +267,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 8,
"id": "afef4385-f571-4874-8f52-3d475642f579",
"metadata": {},
"outputs": [],
@@ -314,7 +316,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 9,
"id": "9c3fb176-8d6a-4dc7-8408-6a22c5f7cc72",
"metadata": {},
"outputs": [],
@@ -343,17 +345,17 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 10,
"id": "1046c92f-21b3-4214-907d-92878d8cba23",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Task decomposition involves breaking down a complex task into smaller and simpler steps to make it more manageable and easier to accomplish. This process can be done using techniques like Chain of Thought (CoT) or Tree of Thoughts to guide the model in thinking step by step or exploring multiple reasoning possibilities at each step. Task decomposition can be facilitated by providing simple prompts to a language model, task-specific instructions, or human inputs.'"
"'Task decomposition involves breaking down a complex task into smaller and simpler steps to make it more manageable and easier to accomplish. This process can be done using techniques like Chain of Thought (CoT) or Tree of Thoughts to guide the model in breaking down tasks effectively. Task decomposition can be facilitated by providing simple prompts to a language model, task-specific instructions, or human inputs.'"
]
},
"execution_count": 7,
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
@@ -369,17 +371,17 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 11,
"id": "0e89c75f-7ad7-4331-a2fe-57579eb8f840",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Task decomposition can be achieved through various methods, including using techniques like Chain of Thought (CoT) or Tree of Thoughts to guide the model in breaking down complex tasks into smaller steps. Common ways of task decomposition include providing simple prompts to a language model, task-specific instructions tailored to the specific task at hand, or incorporating human inputs to guide the decomposition process effectively.'"
"'Task decomposition can be achieved through various methods, including using techniques like Chain of Thought (CoT) or Tree of Thoughts to guide the model in breaking down tasks effectively. Common ways of task decomposition include providing simple prompts to a language model, task-specific instructions, or human inputs to break down complex tasks into smaller and more manageable steps. Additionally, task decomposition can involve utilizing resources like internet access for information gathering, long-term memory management, and GPT-3.5 powered agents for delegation of simple tasks.'"
]
},
"execution_count": 8,
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
@@ -401,7 +403,7 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 12,
"id": "7686b874-3a85-499f-82b5-28a85c4c768c",
"metadata": {},
"outputs": [
@@ -411,11 +413,11 @@
"text": [
"User: What is Task Decomposition?\n",
"\n",
"AI: Task decomposition involves breaking down a complex task into smaller and simpler steps to make it more manageable and easier to accomplish. This process can be done using techniques like Chain of Thought (CoT) or Tree of Thoughts to guide the model in thinking step by step or exploring multiple reasoning possibilities at each step. Task decomposition can be facilitated by providing simple prompts to a language model, task-specific instructions, or human inputs.\n",
"AI: Task decomposition involves breaking down a complex task into smaller and simpler steps to make it more manageable and easier to accomplish. This process can be done using techniques like Chain of Thought (CoT) or Tree of Thoughts to guide the model in breaking down tasks effectively. Task decomposition can be facilitated by providing simple prompts to a language model, task-specific instructions, or human inputs.\n",
"\n",
"User: What are common ways of doing it?\n",
"\n",
"AI: Task decomposition can be achieved through various methods, including using techniques like Chain of Thought (CoT) or Tree of Thoughts to guide the model in breaking down complex tasks into smaller steps. Common ways of task decomposition include providing simple prompts to a language model, task-specific instructions tailored to the specific task at hand, or incorporating human inputs to guide the decomposition process effectively.\n",
"AI: Task decomposition can be achieved through various methods, including using techniques like Chain of Thought (CoT) or Tree of Thoughts to guide the model in breaking down tasks effectively. Common ways of task decomposition include providing simple prompts to a language model, task-specific instructions, or human inputs to break down complex tasks into smaller and more manageable steps. Additionally, task decomposition can involve utilizing resources like internet access for information gathering, long-term memory management, and GPT-3.5 powered agents for delegation of simple tasks.\n",
"\n"
]
}
@@ -452,7 +454,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 13,
"id": "71c32048-1a41-465f-a9e2-c4affc332fd9",
"metadata": {},
"outputs": [],
@@ -552,17 +554,17 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 14,
"id": "6d0a7a73-d151-47d9-9e99-b4f3291c0322",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Task decomposition involves breaking down a complex task into smaller and simpler steps to make it more manageable. This process helps agents or models tackle difficult tasks by dividing them into more easily achievable subgoals. Task decomposition can be done through techniques like Chain of Thought or Tree of Thoughts, which guide the model in thinking step by step or exploring multiple reasoning possibilities at each step.'"
"'Task decomposition involves breaking down a complex task into smaller and simpler steps to make it more manageable. Techniques like Chain of Thought (CoT) and Tree of Thoughts help in decomposing hard tasks into multiple manageable tasks by instructing models to think step by step and explore multiple reasoning possibilities at each step. Task decomposition can be achieved through various methods such as using prompting techniques, task-specific instructions, or human inputs.'"
]
},
"execution_count": 2,
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
@@ -578,17 +580,17 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 15,
"id": "17021822-896a-4513-a17d-1d20b1c5381c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"Common ways of task decomposition include using techniques like Chain of Thought (CoT) or Tree of Thoughts to guide models in breaking down complex tasks into smaller steps. This can be achieved through simple prompting with LLMs, task-specific instructions, or human inputs to help the model understand and navigate the task effectively. Task decomposition aims to enhance model performance on complex tasks by utilizing more test-time computation and shedding light on the model's thinking process.\""
"'Task decomposition can be done in common ways such as using prompting techniques like Chain of Thought (CoT) or Tree of Thoughts, which instruct models to think step by step and explore multiple reasoning possibilities at each step. Another way is to provide task-specific instructions, such as asking to \"Write a story outline\" for writing a novel, to guide the decomposition process. Additionally, task decomposition can also involve human inputs to break down complex tasks into smaller and simpler steps.'"
]
},
"execution_count": 3,
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
@@ -618,7 +620,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 16,
"id": "809cc747-2135-40a2-8e73-e4556343ee64",
"metadata": {},
"outputs": [],
@@ -646,14 +648,14 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 17,
"id": "1726d151-4653-4c72-a187-a14840add526",
"metadata": {},
"outputs": [],
"source": [
"from langgraph.prebuilt import chat_agent_executor\n",
"from langgraph.prebuilt import create_react_agent\n",
"\n",
"agent_executor = chat_agent_executor.create_tool_calling_executor(llm, tools)"
"agent_executor = create_react_agent(llm, tools)"
]
},
{
@@ -666,19 +668,26 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 18,
"id": "52ae46d9-43f7-481b-96d5-df750be3ad65",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Error in LangChainTracer.on_tool_end callback: TracerException(\"Found chain run at ID 5cd28d13-88dd-4eac-a465-3770ac27eff6, but expected {'tool'} run.\")\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_wxRrUmNbaNny8wh9JIb5uCRB', 'function': {'arguments': '{\"query\":\"Task Decomposition\"}', 'name': 'blog_post_retriever'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 19, 'prompt_tokens': 68, 'total_tokens': 87}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_3b956da36b', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-57ee0d12-6142-4957-a002-cce0093efe07-0', tool_calls=[{'name': 'blog_post_retriever', 'args': {'query': 'Task Decomposition'}, 'id': 'call_wxRrUmNbaNny8wh9JIb5uCRB'}])]}}\n",
"{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_TbhPPPN05GKi36HLeaN4QM90', 'function': {'arguments': '{\"query\":\"Task Decomposition\"}', 'name': 'blog_post_retriever'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 19, 'prompt_tokens': 68, 'total_tokens': 87}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-2e60d910-879a-4a2a-b1e9-6a6c5c7d7ebc-0', tool_calls=[{'name': 'blog_post_retriever', 'args': {'query': 'Task Decomposition'}, 'id': 'call_TbhPPPN05GKi36HLeaN4QM90'}])]}}\n",
"----\n",
"{'action': {'messages': [ToolMessage(content='Fig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#\\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the models thinking process.\\n\\nTree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\\nTask decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.\\n\\n(3) Task execution: Expert models execute on the specific tasks and log results.\\nInstruction:\\n\\nWith the input and the inference results, the AI assistant needs to describe the process and results. The previous stages can be formed as - User Input: {{ User Input }}, Task Planning: {{ Tasks }}, Model Selection: {{ Model Assignment }}, Task Execution: {{ Predictions }}. You must first answer the user\\'s request in a straightforward manner. Then describe the task process and show your analysis and model inference results to the user in the first person. If inference results contain a file path, must tell the user the complete file path.\\n\\nFig. 11. Illustration of how HuggingGPT works. (Image source: Shen et al. 2023)\\nThe system comprises of 4 stages:\\n(1) Task planning: LLM works as the brain and parses the user requests into multiple tasks. There are four attributes associated with each task: task type, ID, dependencies, and arguments. They use few-shot examples to guide LLM to do task parsing and planning.\\nInstruction:', name='blog_post_retriever', id='9c3a17f7-653c-47fa-b4e4-fa3d8d24c85d', tool_call_id='call_wxRrUmNbaNny8wh9JIb5uCRB')]}}\n",
"{'tools': {'messages': [ToolMessage(content='Fig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#\\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the models thinking process.\\n\\nFig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#\\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the models thinking process.\\n\\nTree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\\nTask decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.\\n\\nTree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\\nTask decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.', name='blog_post_retriever', tool_call_id='call_TbhPPPN05GKi36HLeaN4QM90')]}}\n",
"----\n",
"{'agent': {'messages': [AIMessage(content='Task decomposition is a technique used to break down complex tasks into smaller and simpler steps. This approach helps agents in planning and executing tasks more effectively. One common method for task decomposition is the Chain of Thought (CoT) technique, where models are instructed to think step by step to decompose hard tasks into manageable steps. Another extension of CoT is the Tree of Thoughts, which explores multiple reasoning possibilities at each step by creating a tree structure of thought steps.\\n\\nTask decomposition can be achieved through various methods, such as using language models with simple prompting, task-specific instructions, or human inputs. By breaking down tasks into smaller components, agents can better plan and execute tasks efficiently.\\n\\nIf you would like more detailed information or examples on task decomposition, feel free to ask!', response_metadata={'token_usage': {'completion_tokens': 154, 'prompt_tokens': 588, 'total_tokens': 742}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_3b956da36b', 'finish_reason': 'stop', 'logprobs': None}, id='run-8991fa20-c527-4f9e-a058-fc6264fe6259-0')]}}\n",
"{'agent': {'messages': [AIMessage(content='Task decomposition is a technique used to break down complex tasks into smaller and simpler steps. This approach helps in transforming big tasks into multiple manageable tasks, making it easier for autonomous agents to handle and interpret the thinking process. One common method for task decomposition is the Chain of Thought (CoT) technique, where models are instructed to \"think step by step\" to decompose hard tasks. Another extension of CoT is the Tree of Thoughts, which explores multiple reasoning possibilities at each step by creating a tree structure of multiple thoughts per step. Task decomposition can be facilitated through various methods such as using simple prompts, task-specific instructions, or human inputs.', response_metadata={'token_usage': {'completion_tokens': 130, 'prompt_tokens': 636, 'total_tokens': 766}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-3ef17638-65df-4030-a7fe-795e6da91c69-0')]}}\n",
"----\n"
]
}
@@ -707,7 +716,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 19,
"id": "837a401e-9757-4d0e-a0da-24fa097d887e",
"metadata": {},
"outputs": [],
@@ -716,9 +725,7 @@
"\n",
"memory = SqliteSaver.from_conn_string(\":memory:\")\n",
"\n",
"agent_executor = chat_agent_executor.create_tool_calling_executor(\n",
" llm, tools, checkpointer=memory\n",
")"
"agent_executor = create_react_agent(llm, tools, checkpointer=memory)"
]
},
{
@@ -733,7 +740,7 @@
},
{
"cell_type": "code",
"execution_count": 22,
"execution_count": 20,
"id": "d6d70833-b958-4cd7-9e27-29c1c08bb1b8",
"metadata": {},
"outputs": [
@@ -741,7 +748,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"{'agent': {'messages': [AIMessage(content='Hello Bob! How can I assist you today?', response_metadata={'token_usage': {'completion_tokens': 11, 'prompt_tokens': 67, 'total_tokens': 78}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_3b956da36b', 'finish_reason': 'stop', 'logprobs': None}, id='run-1451e59b-b135-4776-985d-4759338ffee5-0')]}}\n",
"{'agent': {'messages': [AIMessage(content='Hello Bob! How can I assist you today?', response_metadata={'token_usage': {'completion_tokens': 11, 'prompt_tokens': 67, 'total_tokens': 78}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-1cd17562-18aa-4839-b41b-403b17a0fc20-0')]}}\n",
"----\n"
]
}
@@ -766,19 +773,26 @@
},
{
"cell_type": "code",
"execution_count": 23,
"execution_count": 21,
"id": "e2c570ae-dd91-402c-8693-ae746de63b16",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Error in LangChainTracer.on_tool_end callback: TracerException(\"Found chain run at ID c54381c0-c5d9-495a-91a0-aca4ae755663, but expected {'tool'} run.\")\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_ab2x4iUPSWDAHS5txL7PspSK', 'function': {'arguments': '{\"query\":\"Task Decomposition\"}', 'name': 'blog_post_retriever'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 19, 'prompt_tokens': 91, 'total_tokens': 110}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_3b956da36b', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-f76b5813-b41c-4d0d-9ed2-667b988d885e-0', tool_calls=[{'name': 'blog_post_retriever', 'args': {'query': 'Task Decomposition'}, 'id': 'call_ab2x4iUPSWDAHS5txL7PspSK'}])]}}\n",
"{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_rg7zKTE5e0ICxVSslJ1u9LMg', 'function': {'arguments': '{\"query\":\"Task Decomposition\"}', 'name': 'blog_post_retriever'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 19, 'prompt_tokens': 91, 'total_tokens': 110}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-122bf097-7ff1-49aa-b430-e362b51354ad-0', tool_calls=[{'name': 'blog_post_retriever', 'args': {'query': 'Task Decomposition'}, 'id': 'call_rg7zKTE5e0ICxVSslJ1u9LMg'}])]}}\n",
"----\n",
"{'action': {'messages': [ToolMessage(content='Fig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#\\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the models thinking process.\\n\\nTree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\\nTask decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.\\n\\n(3) Task execution: Expert models execute on the specific tasks and log results.\\nInstruction:\\n\\nWith the input and the inference results, the AI assistant needs to describe the process and results. The previous stages can be formed as - User Input: {{ User Input }}, Task Planning: {{ Tasks }}, Model Selection: {{ Model Assignment }}, Task Execution: {{ Predictions }}. You must first answer the user\\'s request in a straightforward manner. Then describe the task process and show your analysis and model inference results to the user in the first person. If inference results contain a file path, must tell the user the complete file path.\\n\\nFig. 11. Illustration of how HuggingGPT works. (Image source: Shen et al. 2023)\\nThe system comprises of 4 stages:\\n(1) Task planning: LLM works as the brain and parses the user requests into multiple tasks. There are four attributes associated with each task: task type, ID, dependencies, and arguments. They use few-shot examples to guide LLM to do task parsing and planning.\\nInstruction:', name='blog_post_retriever', id='e0895fa5-5d41-4be0-98db-10a83d42fc2f', tool_call_id='call_ab2x4iUPSWDAHS5txL7PspSK')]}}\n",
"{'tools': {'messages': [ToolMessage(content='Fig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#\\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the models thinking process.\\n\\nFig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#\\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the models thinking process.\\n\\nTree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\\nTask decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.\\n\\nTree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\\nTask decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.', name='blog_post_retriever', tool_call_id='call_rg7zKTE5e0ICxVSslJ1u9LMg')]}}\n",
"----\n",
"{'agent': {'messages': [AIMessage(content='Task decomposition is a technique used in complex tasks where the task is broken down into smaller and simpler steps. This approach helps in managing and solving difficult tasks by dividing them into more manageable components. One common method for task decomposition is the Chain of Thought (CoT) technique, which prompts the model to think step by step and decompose hard tasks into smaller steps. Another extension of CoT is the Tree of Thoughts, which explores multiple reasoning possibilities at each step by creating a tree structure of thought steps.\\n\\nTask decomposition can be achieved through various methods, such as using language models with simple prompting, task-specific instructions, or human inputs. By breaking down tasks into smaller components, agents can better plan and execute complex tasks effectively.\\n\\nIf you would like more detailed information or examples related to task decomposition, feel free to ask!', response_metadata={'token_usage': {'completion_tokens': 165, 'prompt_tokens': 611, 'total_tokens': 776}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_3b956da36b', 'finish_reason': 'stop', 'logprobs': None}, id='run-13296566-8577-4d65-982b-a39718988ca3-0')]}}\n",
"{'agent': {'messages': [AIMessage(content='Task decomposition is a technique used to break down complex tasks into smaller and simpler steps. This approach helps in managing and solving intricate problems by dividing them into more manageable components. By decomposing tasks, agents or models can better understand the steps involved and plan their actions accordingly. Techniques like Chain of Thought (CoT) and Tree of Thoughts are examples of methods that enhance model performance on complex tasks by breaking them down into smaller steps.', response_metadata={'token_usage': {'completion_tokens': 87, 'prompt_tokens': 659, 'total_tokens': 746}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-b9166386-83e5-4b82-9a4b-590e5fa76671-0')]}}\n",
"----\n"
]
}
@@ -805,7 +819,7 @@
},
{
"cell_type": "code",
"execution_count": 25,
"execution_count": 22,
"id": "570d8c68-136e-4ba5-969a-03ba195f6118",
"metadata": {},
"outputs": [
@@ -813,11 +827,24 @@
"name": "stdout",
"output_type": "stream",
"text": [
"{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_KvoiamnLfGEzMeEMlV3u0TJ7', 'function': {'arguments': '{\"query\":\"common ways of task decomposition\"}', 'name': 'blog_post_retriever'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 21, 'prompt_tokens': 930, 'total_tokens': 951}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_3b956da36b', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-dd842071-6dbd-4b68-8657-892eaca58638-0', tool_calls=[{'name': 'blog_post_retriever', 'args': {'query': 'common ways of task decomposition'}, 'id': 'call_KvoiamnLfGEzMeEMlV3u0TJ7'}])]}}\n",
"{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_6kbxTU5CDWLmF9mrvR7bWSkI', 'function': {'arguments': '{\"query\":\"Common ways of task decomposition\"}', 'name': 'blog_post_retriever'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 21, 'prompt_tokens': 769, 'total_tokens': 790}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-2d2c8327-35cd-484a-b8fd-52436657c2d8-0', tool_calls=[{'name': 'blog_post_retriever', 'args': {'query': 'Common ways of task decomposition'}, 'id': 'call_6kbxTU5CDWLmF9mrvR7bWSkI'}])]}}\n",
"----\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"Error in LangChainTracer.on_tool_end callback: TracerException(\"Found chain run at ID 29553415-e0f4-41a9-8921-ba489e377f68, but expected {'tool'} run.\")\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'tools': {'messages': [ToolMessage(content='Fig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#\\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the models thinking process.\\n\\nFig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#\\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the models thinking process.\\n\\nTree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\\nTask decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.\\n\\nTree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\\nTask decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.', name='blog_post_retriever', tool_call_id='call_6kbxTU5CDWLmF9mrvR7bWSkI')]}}\n",
"----\n",
"{'action': {'messages': [ToolMessage(content='Tree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\\nTask decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.\\n\\nFig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#\\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the models thinking process.\\n\\nResources:\\n1. Internet access for searches and information gathering.\\n2. Long Term memory management.\\n3. GPT-3.5 powered Agents for delegation of simple tasks.\\n4. File output.\\n\\nPerformance Evaluation:\\n1. Continuously review and analyze your actions to ensure you are performing to the best of your abilities.\\n2. Constructively self-criticize your big-picture behavior constantly.\\n3. Reflect on past decisions and strategies to refine your approach.\\n4. Every command has a cost, so be smart and efficient. Aim to complete tasks in the least number of steps.\\n\\n(3) Task execution: Expert models execute on the specific tasks and log results.\\nInstruction:\\n\\nWith the input and the inference results, the AI assistant needs to describe the process and results. The previous stages can be formed as - User Input: {{ User Input }}, Task Planning: {{ Tasks }}, Model Selection: {{ Model Assignment }}, Task Execution: {{ Predictions }}. You must first answer the user\\'s request in a straightforward manner. Then describe the task process and show your analysis and model inference results to the user in the first person. If inference results contain a file path, must tell the user the complete file path.', name='blog_post_retriever', id='c749bb8e-c8e0-4fa3-bc11-3e2e0651880b', tool_call_id='call_KvoiamnLfGEzMeEMlV3u0TJ7')]}}\n",
"----\n",
"{'agent': {'messages': [AIMessage(content='According to the blog post, common ways of task decomposition include:\\n\\n1. Using language models with simple prompting like \"Steps for XYZ\" or \"What are the subgoals for achieving XYZ?\"\\n2. Utilizing task-specific instructions, for example, using \"Write a story outline\" for writing a novel.\\n3. Involving human inputs in the task decomposition process.\\n\\nThese methods help in breaking down complex tasks into smaller and more manageable steps, facilitating better planning and execution of the overall task.', response_metadata={'token_usage': {'completion_tokens': 100, 'prompt_tokens': 1475, 'total_tokens': 1575}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_3b956da36b', 'finish_reason': 'stop', 'logprobs': None}, id='run-98b765b3-f1a6-4c9a-ad0f-2db7950b900f-0')]}}\n",
"{'agent': {'messages': [AIMessage(content='Common ways of task decomposition include:\\n1. Using LLM with simple prompting like \"Steps for XYZ\" or \"What are the subgoals for achieving XYZ?\"\\n2. Using task-specific instructions, for example, \"Write a story outline\" for writing a novel.\\n3. Involving human inputs in the task decomposition process.', response_metadata={'token_usage': {'completion_tokens': 67, 'prompt_tokens': 1339, 'total_tokens': 1406}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-9ad14cde-ca75-4238-a868-f865e0fc50dd-0')]}}\n",
"----\n"
]
}
@@ -852,20 +879,15 @@
},
{
"cell_type": "code",
"execution_count": 26,
"execution_count": 23,
"id": "b1d2b4d4-e604-497d-873d-d345b808578e",
"metadata": {},
"outputs": [],
"source": [
"import bs4\n",
"from langchain.agents import AgentExecutor, create_tool_calling_agent\n",
"from langchain.tools.retriever import create_retriever_tool\n",
"from langchain_chroma import Chroma\n",
"from langchain_community.chat_message_histories import ChatMessageHistory\n",
"from langchain_community.document_loaders import WebBaseLoader\n",
"from langchain_core.chat_history import BaseChatMessageHistory\n",
"from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"from langchain_core.runnables.history import RunnableWithMessageHistory\n",
"from langchain_openai import ChatOpenAI, OpenAIEmbeddings\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
"from langgraph.checkpoint.sqlite import SqliteSaver\n",
@@ -900,9 +922,7 @@
"tools = [tool]\n",
"\n",
"\n",
"agent_executor = chat_agent_executor.create_tool_calling_executor(\n",
" llm, tools, checkpointer=memory\n",
")"
"agent_executor = create_react_agent(llm, tools, checkpointer=memory)"
]
},
{
@@ -941,7 +961,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.11.2"
}
},
"nbformat": 4,

View File

@@ -14,7 +14,7 @@
"We will cover two approaches:\n",
"\n",
"1. Using the built-in [create_retrieval_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval.create_retrieval_chain.html), which returns sources by default;\n",
"2. Using a simple [LCEL](/docs/concepts#langchain-expression-language) implementation, to show the operating principle."
"2. Using a simple [LCEL](/docs/concepts#langchain-expression-language-lcel) implementation, to show the operating principle."
]
},
{

View File

@@ -1,5 +1,19 @@
{
"cells": [
{
"cell_type": "raw",
"id": "52976910",
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"keywords: [recursivecharactertextsplitter]\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "a678d550",

View File

@@ -323,7 +323,7 @@
"id": "fa0f589d",
"metadata": {},
"source": [
"# Routing by semantic similarity\n",
"## Routing by semantic similarity\n",
"\n",
"One especially useful technique is to use embeddings to route a query to the most relevant prompt. Here's an example."
]
@@ -371,7 +371,7 @@
"chain = (\n",
" {\"query\": RunnablePassthrough()}\n",
" | RunnableLambda(prompt_router)\n",
" | ChatAnthropic(model_name=\"claude-3-haiku-20240307\")\n",
" | ChatAnthropic(model=\"claude-3-haiku-20240307\")\n",
" | StrOutputParser()\n",
")"
]

View File

@@ -297,13 +297,67 @@
"print(len(docs))"
]
},
{
"cell_type": "markdown",
"source": [
"### Gradient\n",
"\n",
"In this method, the gradient of distance is used to split chunks along with the percentile method.\n",
"This method is useful when chunks are highly correlated with each other or specific to a domain e.g. legal or medical. The idea is to apply anomaly detection on gradient array so that the distribution become wider and easy to identify boundaries in highly semantic data."
],
"metadata": {
"collapsed": false
},
"id": "423c6e099e94ca69"
},
{
"cell_type": "code",
"execution_count": null,
"id": "b1f65472",
"metadata": {},
"outputs": [],
"source": []
"source": [
"text_splitter = SemanticChunker(\n",
" OpenAIEmbeddings(), breakpoint_threshold_type=\"gradient\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Madam Speaker, Madam Vice President, our First Lady and Second Gentleman.\n"
]
}
],
"source": [
"docs = text_splitter.create_documents([state_of_the_union])\n",
"print(docs[0].page_content)"
],
"metadata": {},
"id": "e9f393d316ce1f6c"
},
{
"cell_type": "code",
"execution_count": 8,
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"26\n"
]
}
],
"source": [
"print(len(docs))"
],
"metadata": {},
"id": "a407cd57f02a0db4"
}
],
"metadata": {

View File

@@ -2,11 +2,14 @@
"cells": [
{
"cell_type": "raw",
"metadata": {},
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"sidebar_position: 0\n",
"keywords: [Runnable, Runnables, LCEL]\n",
"keywords: [Runnable, Runnables, RunnableSequence, LCEL, chain, chains, chaining]\n",
"---"
]
},
@@ -250,8 +253,7 @@
"source": [
"## Related\n",
"\n",
"- [Streaming](/docs/how_to/streaming/): Check out the streaming guide to understand the streaming behavior of a chain\n",
"- "
"- [Streaming](/docs/how_to/streaming/): Check out the streaming guide to understand the streaming behavior of a chain\n"
]
}
],

View File

@@ -0,0 +1,305 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "ab3dc782-321e-4503-96ee-ac88a15e4b5e",
"metadata": {},
"source": [
"# How to save and load LangChain objects\n",
"\n",
"LangChain classes implement standard methods for serialization. Serializing LangChain objects using these methods confer some advantages:\n",
"\n",
"- Secrets, such as API keys, are separated from other parameters and can be loaded back to the object on de-serialization;\n",
"- De-serialization is kept compatible across package versions, so objects that were serialized with one version of LangChain can be properly de-serialized with another.\n",
"\n",
"To save and load LangChain objects using this system, use the `dumpd`, `dumps`, `load`, and `loads` functions in the [load module](https://api.python.langchain.com/en/latest/core_api_reference.html#module-langchain_core.load) of `langchain-core`. These functions support JSON and JSON-serializable objects.\n",
"\n",
"All LangChain objects that inherit from [Serializable](https://api.python.langchain.com/en/latest/load/langchain_core.load.serializable.Serializable.html) are JSON-serializable. Examples include [messages](https://api.python.langchain.com/en/latest/core_api_reference.html#module-langchain_core.messages), [document objects](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) (e.g., as returned from [retrievers](/docs/concepts/#retrievers)), and most [Runnables](/docs/concepts/#langchain-expression-language-lcel), such as chat models, retrievers, and [chains](/docs/how_to/sequence) implemented with the LangChain Expression Language.\n",
"\n",
"Below we walk through an example with a simple [LLM chain](/docs/tutorials/llm_chain).\n",
"\n",
":::{.callout-caution}\n",
"\n",
"De-serialization using `load` and `loads` can instantiate any serializable LangChain object. Only use this feature with trusted inputs!\n",
"\n",
"De-serialization is a beta feature and is subject to change.\n",
":::"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "f85d9e51-2a36-4f69-83b1-c716cd43f790",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.load import dumpd, dumps, load, loads\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", \"Translate the following into {language}:\"),\n",
" (\"user\", \"{text}\"),\n",
" ],\n",
")\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", api_key=\"llm-api-key\")\n",
"\n",
"chain = prompt | llm"
]
},
{
"cell_type": "markdown",
"id": "356ea99f-5cb5-4433-9a6c-2443d2be9ed3",
"metadata": {},
"source": [
"## Saving objects\n",
"\n",
"### To json"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "26516764-d46b-4357-a6c6-bd8315bfa530",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{\n",
" \"lc\": 1,\n",
" \"type\": \"constructor\",\n",
" \"id\": [\n",
" \"langchain\",\n",
" \"schema\",\n",
" \"runnable\",\n",
" \"RunnableSequence\"\n",
" ],\n",
" \"kwargs\": {\n",
" \"first\": {\n",
" \"lc\": 1,\n",
" \"type\": \"constructor\",\n",
" \"id\": [\n",
" \"langchain\",\n",
" \"prompts\",\n",
" \"chat\",\n",
" \"ChatPromptTemplate\"\n",
" ],\n",
" \"kwargs\": {\n",
" \"input_variables\": [\n",
" \"language\",\n",
" \"text\"\n",
" ],\n",
" \"messages\": [\n",
" {\n",
" \"lc\": 1,\n",
" \"type\": \"constructor\",\n",
" \n"
]
}
],
"source": [
"string_representation = dumps(chain, pretty=True)\n",
"print(string_representation[:500])"
]
},
{
"cell_type": "markdown",
"id": "bd425716-545d-466b-a4e5-dc9952cfd72a",
"metadata": {},
"source": [
"### To a json-serializable Python dict"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "6561a968-1741-4419-8c29-e705b9d0ef39",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<class 'dict'>\n"
]
}
],
"source": [
"dict_representation = dumpd(chain)\n",
"\n",
"print(type(dict_representation))"
]
},
{
"cell_type": "markdown",
"id": "711e986e-dd24-4839-9e38-c57903378a5f",
"metadata": {},
"source": [
"### To disk"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "f818378b-f4d6-43a7-895b-76cf7359b157",
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"\n",
"with open(\"/tmp/chain.json\", \"w\") as fp:\n",
" json.dump(string_representation, fp)"
]
},
{
"cell_type": "markdown",
"id": "1e621a32-ff5f-4627-ad59-88cacba73c6b",
"metadata": {},
"source": [
"Note that the API key is withheld from the serialized representations. Parameters that are considered secret are specified by the `.lc_secrets` attribute of the LangChain object:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "8225e150-000a-4fbc-9f3d-09568f4b560b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'openai_api_key': 'OPENAI_API_KEY'}"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.last.lc_secrets"
]
},
{
"cell_type": "markdown",
"id": "6d090177-eb1c-4bfb-8c13-29286afe17d9",
"metadata": {},
"source": [
"## Loading objects\n",
"\n",
"Specifying `secrets_map` in `load` and `loads` will load the corresponding secrets onto the de-serialized LangChain object.\n",
"\n",
"### From string"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "54a66267-5f3a-40a2-bfcc-8b44bb24c154",
"metadata": {},
"outputs": [],
"source": [
"chain = loads(string_representation, secrets_map={\"OPENAI_API_KEY\": \"llm-api-key\"})"
]
},
{
"cell_type": "markdown",
"id": "5ed9aff1-92cc-44ba-b2ec-4d12f924fa03",
"metadata": {},
"source": [
"### From dict"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "76979932-13de-4427-9f88-040fb05a6778",
"metadata": {},
"outputs": [],
"source": [
"chain = load(dict_representation, secrets_map={\"OPENAI_API_KEY\": \"llm-api-key\"})"
]
},
{
"cell_type": "markdown",
"id": "7dd81a2a-5163-414d-ab42-f1c35e30471b",
"metadata": {},
"source": [
"### From disk"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "033f62a7-3377-472a-be58-718baa6ab445",
"metadata": {},
"outputs": [],
"source": [
"with open(\"/tmp/chain.json\", \"r\") as fp:\n",
" chain = loads(json.load(fp), secrets_map={\"OPENAI_API_KEY\": \"llm-api-key\"})"
]
},
{
"cell_type": "markdown",
"id": "dc520fdb-035a-468f-a8a8-c3ffe8ed98eb",
"metadata": {},
"source": [
"Note that we recover the API key specified at the start of the guide:"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "566b2475-d9b4-432b-8c3b-27c2f183624e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'llm-api-key'"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.last.openai_api_key.get_secret_value()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7b4cba53-e1d5-4979-927e-b5794a02afc3",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -351,7 +351,7 @@
"id": "ab1b2e7c-6ea8-4674-98eb-a43c69f5c19d",
"metadata": {},
"source": [
"To help enforce proper use of our Python tool, we'll using [tool calling](/docs/how_to/tool_calling/):"
"To help enforce proper use of our Python tool, we'll using [tool calling](/docs/how_to/tool_calling):"
]
},
{

View File

@@ -243,7 +243,7 @@
"text": [
"================================\u001b[1m System Message \u001b[0m================================\n",
"\n",
"You are a \u001b[33;1m\u001b[1;3m{dialect}\u001b[0m expert. Given an input question, creat a syntactically correct \u001b[33;1m\u001b[1;3m{dialect}\u001b[0m query to run.\n",
"You are a \u001b[33;1m\u001b[1;3m{dialect}\u001b[0m expert. Given an input question, create a syntactically correct \u001b[33;1m\u001b[1;3m{dialect}\u001b[0m query to run.\n",
"Unless the user specifies in the question a specific number of examples to obtain, query for at most \u001b[33;1m\u001b[1;3m{top_k}\u001b[0m results using the LIMIT clause as per \u001b[33;1m\u001b[1;3m{dialect}\u001b[0m. You can order the results to return the most informative data in the database.\n",
"Never query for all columns from a table. You must query only the columns that are needed to answer the question. Wrap each column name in double quotes (\") to denote them as delimited identifiers.\n",
"Pay attention to use only the column names you can see in the tables below. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table.\n",
@@ -275,7 +275,7 @@
}
],
"source": [
"system = \"\"\"You are a {dialect} expert. Given an input question, creat a syntactically correct {dialect} query to run.\n",
"system = \"\"\"You are a {dialect} expert. Given an input question, create a syntactically correct {dialect} query to run.\n",
"Unless the user specifies in the question a specific number of examples to obtain, query for at most {top_k} results using the LIMIT clause as per {dialect}. You can order the results to return the most informative data in the database.\n",
"Never query for all columns from a table. You must query only the columns that are needed to answer the question. Wrap each column name in double quotes (\") to denote them as delimited identifiers.\n",
"Pay attention to use only the column names you can see in the tables below. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table.\n",

View File

@@ -3,10 +3,14 @@
{
"cell_type": "raw",
"id": "0bdb3b97-4989-4237-b43b-5943dbbd8302",
"metadata": {},
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"sidebar_position: 1.5\n",
"keywords: [stream]\n",
"---"
]
},
@@ -37,6 +41,10 @@
"\n",
"Let's take a look at both approaches, and try to understand how to use them.\n",
"\n",
":::info\n",
"For a higher-level overview of streaming techniques in LangChain, see [this section of the conceptual guide](/docs/concepts/#streaming).\n",
":::\n",
"\n",
"## Using Stream\n",
"\n",
"All `Runnable` objects implement a sync method called `stream` and an async variant called `astream`. \n",
@@ -999,7 +1007,7 @@
"id": "798ea891-997c-454c-bf60-43124f40ee1b",
"metadata": {},
"source": [
"Because both the model and the parser support streaming, we see sreaming events from both components in real time! Kind of cool isn't it? 🦜"
"Because both the model and the parser support streaming, we see streaming events from both components in real time! Kind of cool isn't it? 🦜"
]
},
{

View File

@@ -3,10 +3,15 @@
{
"cell_type": "raw",
"id": "27598444",
"metadata": {},
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"sidebar_position: 3\n",
"keywords: [structured output, json, information extraction, with_structured_output]\n",
"---"
]
},
@@ -28,6 +33,8 @@
"\n",
"## The `.with_structured_output()` method\n",
"\n",
"<span data-heading-keywords=\"with_structured_output\"></span>\n",
"\n",
":::info Supported models\n",
"\n",
"You can find a [list of models that support this method here](/docs/integrations/chat/).\n",
@@ -51,7 +58,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 2,
"id": "6d55008f",
"metadata": {},
"outputs": [],
@@ -69,22 +76,22 @@
"id": "a808a401-be1f-49f9-ad13-58dd68f7db5f",
"metadata": {},
"source": [
"If we want the model to return a Pydantic object, we just need to pass in desired the Pydantic class:"
"If we want the model to return a Pydantic object, we just need to pass in the desired Pydantic class:"
]
},
{
"cell_type": "code",
"execution_count": 38,
"execution_count": 3,
"id": "070bf702",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Joke(setup='Why was the cat sitting on the computer?', punchline='To keep an eye on the mouse!', rating=None)"
"Joke(setup='Why was the cat sitting on the computer?', punchline='Because it wanted to keep an eye on the mouse!', rating=8)"
]
},
"execution_count": 38,
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
@@ -243,7 +250,7 @@
"id": "e28c14d3",
"metadata": {},
"source": [
"Alternatively, you can use tool calling directly to allow the model to choose between options, if your [chosen model supports it](/docs/integrations/chat/). This involves a bit more parsing and setup but in some instances leads to better performance because you don't have to use nested schemas. See [this how-to guide](/docs/how_to/tool_calling/) for more details."
"Alternatively, you can use tool calling directly to allow the model to choose between options, if your [chosen model supports it](/docs/integrations/chat/). This involves a bit more parsing and setup but in some instances leads to better performance because you don't have to use nested schemas. See [this how-to guide](/docs/how_to/tool_calling) for more details."
]
},
{
@@ -507,12 +514,49 @@
")"
]
},
{
"cell_type": "markdown",
"id": "91e95aa2",
"metadata": {},
"source": [
"### (Advanced) Raw outputs\n",
"\n",
"LLMs aren't perfect at generating structured output, especially as schemas become complex. You can avoid raising exceptions and handle the raw output yourself by passing `include_raw=True`. This changes the output format to contain the raw message output, the `parsed` value (if successful), and any resulting errors:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "10ed2842",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'raw': AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_ASK4EmZeZ69Fi3p554Mb4rWy', 'function': {'arguments': '{\"setup\":\"Why was the cat sitting on the computer?\",\"punchline\":\"Because it wanted to keep an eye on the mouse!\"}', 'name': 'Joke'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 36, 'prompt_tokens': 107, 'total_tokens': 143}, 'model_name': 'gpt-4-0125-preview', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-6491d35b-9164-4656-b75c-d7882cfb76cb-0', tool_calls=[{'name': 'Joke', 'args': {'setup': 'Why was the cat sitting on the computer?', 'punchline': 'Because it wanted to keep an eye on the mouse!'}, 'id': 'call_ASK4EmZeZ69Fi3p554Mb4rWy'}], usage_metadata={'input_tokens': 107, 'output_tokens': 36, 'total_tokens': 143}),\n",
" 'parsed': Joke(setup='Why was the cat sitting on the computer?', punchline='Because it wanted to keep an eye on the mouse!', rating=None),\n",
" 'parsing_error': None}"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"structured_llm = llm.with_structured_output(Joke, include_raw=True)\n",
"\n",
"structured_llm.invoke(\n",
" \"Tell me a joke about cats, respond in JSON with `setup` and `punchline` keys\"\n",
")"
]
},
{
"cell_type": "markdown",
"id": "5e92a98a",
"metadata": {},
"source": [
"## Prompting and parsing model directly\n",
"## Prompting and parsing model outputs directly\n",
"\n",
"Not all models support `.with_structured_output()`, since not all models have tool calling or JSON mode support. For such models you'll need to directly prompt the model to use a specific format, and use an output parser to extract the structured response from the raw model output.\n",
"\n",
@@ -780,9 +824,9 @@
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv-2",
"display_name": "Python 3",
"language": "python",
"name": "poetry-venv-2"
"name": "python3"
},
"language_info": {
"codemirror_mode": {
@@ -794,7 +838,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.5"
}
},
"nbformat": 4,

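The new "(Advanced) Raw outputs" section above only shows what the `include_raw=True` payload looks like. Below is a hedged sketch of how that dictionary might be consumed; it assumes the `llm` and the `Joke` Pydantic class defined earlier in that notebook and is an illustration, not part of the guide itself.

```python
# Sketch only: handle the include_raw=True payload from with_structured_output.
# Assumes `llm` and the `Joke` Pydantic class from the notebook above.
structured_llm = llm.with_structured_output(Joke, include_raw=True)
result = structured_llm.invoke("Tell me a joke about cats")

if result["parsing_error"] is None and result["parsed"] is not None:
    joke = result["parsed"]  # a Joke instance
    print(f"{joke.setup} {joke.punchline}")
else:
    # Parsing failed; fall back to the raw AIMessage for logging or retries.
    print("Could not parse structured output:", result["raw"])
```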
View File

@@ -1,5 +1,18 @@
{
"cells": [
{
"cell_type": "raw",
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"keywords: [tool calling, tool call]\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -11,17 +24,24 @@
"This guide assumes familiarity with the following concepts:\n",
"- [Chat models](/docs/concepts/#chat-models)\n",
"- [LangChain Tools](/docs/concepts/#tools)\n",
"- [Output parsers](/docs/concepts/#output-parsers)\n",
"\n",
":::\n",
"\n",
"```{=mdx}\n",
":::info\n",
":::info Tool calling vs function calling\n",
"\n",
"We use the term tool calling interchangeably with function calling. Although\n",
"function calling is sometimes meant to refer to invocations of a single function,\n",
"we treat all models as though they can return multiple tool or function calls in \n",
"each message.\n",
"\n",
":::\n",
"\n",
":::info Supported models\n",
"\n",
"You can find a [list of all models that support tool calling](/docs/integrations/chat/).\n",
"\n",
":::\n",
"```\n",
"\n",
"Tool calling allows a chat model to respond to a given prompt by \"calling a tool\".\n",
"While the name implies that the model is performing \n",
@@ -32,6 +52,12 @@
"parameters matching the desired schema, then treat the generated output as your final \n",
"result.\n",
"\n",
":::note\n",
"\n",
"If you only need formatted values, try the [.with_structured_output()](/docs/how_to/structured_output/#the-with_structured_output-method) chat model method as a simpler entrypoint.\n",
"\n",
":::\n",
"\n",
"However, tool calling goes beyond [structured output](/docs/how_to/structured_output/)\n",
"since you can pass responses from called tools back to the model to create longer interactions.\n",
"For instance, given a search engine tool, an LLM might handle a \n",
@@ -46,8 +72,13 @@
"support variants of a tool calling feature.\n",
"\n",
"LangChain implements standard interfaces for defining tools, passing them to LLMs, \n",
"and representing tool calls. This guide will show you how to use them.\n",
"\n",
"and representing tool calls. This guide and the other How-to pages in the Tool section will show you how to use tools with LangChain."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Passing tools to chat models\n",
"\n",
"Chat models that support tool calling features implement a `.bind_tools` method, which \n",
@@ -147,7 +178,7 @@
"# | output: false\n",
"# | echo: false\n",
"\n",
"%pip install -qU langchain langchain_openai\n",
"%pip install -qU langchain_openai\n",
"\n",
"import os\n",
"from getpass import getpass\n",
@@ -163,9 +194,31 @@
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_g4RuAijtDcSeM96jXyCuiLSN', 'function': {'arguments': '{\"a\":3,\"b\":12}', 'name': 'Multiply'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 18, 'prompt_tokens': 95, 'total_tokens': 113}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-5157d15a-7e0e-4ab1-af48-3d98010cd152-0', tool_calls=[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_g4RuAijtDcSeM96jXyCuiLSN'}], usage_metadata={'input_tokens': 95, 'output_tokens': 18, 'total_tokens': 113})"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_with_tools = llm.bind_tools(tools)"
"llm_with_tools = llm.bind_tools(tools)\n",
"\n",
"query = \"What is 3 * 12?\"\n",
"\n",
"llm_with_tools.invoke(query)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As we can see, even though the prompt didn't really suggest a tool call, our LLM made one since it was forced to do so. You can look at the docs for [bind_tools()](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.BaseChatOpenAI.html#langchain_openai.chat_models.base.BaseChatOpenAI.bind_tools) to learn about all the ways to customize how your LLM selects tools."
]
},
{
@@ -197,10 +250,10 @@
"text/plain": [
"[{'name': 'Multiply',\n",
" 'args': {'a': 3, 'b': 12},\n",
" 'id': 'call_KquHA7mSbgtAkpkmRPaFnJKa'},\n",
" 'id': 'call_TnadLbWJu9HwDULRb51RNSMw'},\n",
" {'name': 'Add',\n",
" 'args': {'a': 11, 'b': 49},\n",
" 'id': 'call_Fl0hQi4IBTzlpaJYlM5kPQhE'}]"
" 'id': 'call_Q9vt1up05sOQScXvUYWzSpCg'}]"
]
},
"execution_count": 5,
@@ -226,7 +279,8 @@
"a name, string arguments, identifier, and error message.\n",
"\n",
"If desired, [output parsers](/docs/how_to#output-parsers) can further \n",
"process the output. For example, we can convert back to the original Pydantic class:"
"process the output. For example, we can convert existing values populated on the `.tool_calls` attribute back to the original Pydantic class using the\n",
"[PydanticToolsParser](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.openai_tools.PydanticToolsParser.html):"
]
},
{
@@ -246,443 +300,27 @@
}
],
"source": [
"from langchain_core.output_parsers.openai_tools import PydanticToolsParser\n",
"from langchain_core.output_parsers import PydanticToolsParser\n",
"\n",
"chain = llm_with_tools | PydanticToolsParser(tools=[Multiply, Add])\n",
"chain.invoke(query)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Streaming\n",
"\n",
"When tools are called in a streaming context, \n",
"[message chunks](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessageChunk.html#langchain_core.messages.ai.AIMessageChunk) \n",
"will be populated with [tool call chunk](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.tool.ToolCallChunk.html#langchain_core.messages.tool.ToolCallChunk) \n",
"objects in a list via the `.tool_call_chunks` attribute. A `ToolCallChunk` includes \n",
"optional string fields for the tool `name`, `args`, and `id`, and includes an optional \n",
"integer field `index` that can be used to join chunks together. Fields are optional \n",
"because portions of a tool call may be streamed across different chunks (e.g., a chunk \n",
"that includes a substring of the arguments may have null values for the tool name and id).\n",
"\n",
"Because message chunks inherit from their parent message class, an \n",
"[AIMessageChunk](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessageChunk.html#langchain_core.messages.ai.AIMessageChunk) \n",
"with tool call chunks will also include `.tool_calls` and `.invalid_tool_calls` fields. \n",
"These fields are parsed best-effort from the message's tool call chunks.\n",
"\n",
"Note that not all providers currently support streaming for tool calls:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[]\n",
"[{'name': 'Multiply', 'args': '', 'id': 'call_3aQwTP9CYlFxwOvQZPHDu6wL', 'index': 0}]\n",
"[{'name': None, 'args': '{\"a\"', 'id': None, 'index': 0}]\n",
"[{'name': None, 'args': ': 3, ', 'id': None, 'index': 0}]\n",
"[{'name': None, 'args': '\"b\": 1', 'id': None, 'index': 0}]\n",
"[{'name': None, 'args': '2}', 'id': None, 'index': 0}]\n",
"[{'name': 'Add', 'args': '', 'id': 'call_SQUoSsJz2p9Kx2x73GOgN1ja', 'index': 1}]\n",
"[{'name': None, 'args': '{\"a\"', 'id': None, 'index': 1}]\n",
"[{'name': None, 'args': ': 11,', 'id': None, 'index': 1}]\n",
"[{'name': None, 'args': ' \"b\": ', 'id': None, 'index': 1}]\n",
"[{'name': None, 'args': '49}', 'id': None, 'index': 1}]\n",
"[]\n"
]
}
],
"source": [
"async for chunk in llm_with_tools.astream(query):\n",
" print(chunk.tool_call_chunks)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that adding message chunks will merge their corresponding tool call chunks. This is the principle by which LangChain's various [tool output parsers](/docs/how_to/output_parser_structured) support streaming.\n",
"\n",
"For example, below we accumulate tool call chunks:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[]\n",
"[{'name': 'Multiply', 'args': '', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}]\n",
"[{'name': 'Multiply', 'args': '{\"a\"', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}]\n",
"[{'name': 'Multiply', 'args': '{\"a\": 3, ', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}]\n",
"[{'name': 'Multiply', 'args': '{\"a\": 3, \"b\": 1', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}]\n",
"[{'name': 'Multiply', 'args': '{\"a\": 3, \"b\": 12}', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}]\n",
"[{'name': 'Multiply', 'args': '{\"a\": 3, \"b\": 12}', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}, {'name': 'Add', 'args': '', 'id': 'call_b4iMiB3chGNGqbt5SjqqD2Wh', 'index': 1}]\n",
"[{'name': 'Multiply', 'args': '{\"a\": 3, \"b\": 12}', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}, {'name': 'Add', 'args': '{\"a\"', 'id': 'call_b4iMiB3chGNGqbt5SjqqD2Wh', 'index': 1}]\n",
"[{'name': 'Multiply', 'args': '{\"a\": 3, \"b\": 12}', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}, {'name': 'Add', 'args': '{\"a\": 11,', 'id': 'call_b4iMiB3chGNGqbt5SjqqD2Wh', 'index': 1}]\n",
"[{'name': 'Multiply', 'args': '{\"a\": 3, \"b\": 12}', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}, {'name': 'Add', 'args': '{\"a\": 11, \"b\": ', 'id': 'call_b4iMiB3chGNGqbt5SjqqD2Wh', 'index': 1}]\n",
"[{'name': 'Multiply', 'args': '{\"a\": 3, \"b\": 12}', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}, {'name': 'Add', 'args': '{\"a\": 11, \"b\": 49}', 'id': 'call_b4iMiB3chGNGqbt5SjqqD2Wh', 'index': 1}]\n",
"[{'name': 'Multiply', 'args': '{\"a\": 3, \"b\": 12}', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}, {'name': 'Add', 'args': '{\"a\": 11, \"b\": 49}', 'id': 'call_b4iMiB3chGNGqbt5SjqqD2Wh', 'index': 1}]\n"
]
}
],
"source": [
"first = True\n",
"async for chunk in llm_with_tools.astream(query):\n",
" if first:\n",
" gathered = chunk\n",
" first = False\n",
" else:\n",
" gathered = gathered + chunk\n",
"\n",
" print(gathered.tool_call_chunks)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<class 'str'>\n"
]
}
],
"source": [
"print(type(gathered.tool_call_chunks[0][\"args\"]))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And below we accumulate tool calls to demonstrate partial parsing:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[]\n",
"[]\n",
"[{'name': 'Multiply', 'args': {}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}]\n",
"[{'name': 'Multiply', 'args': {'a': 3}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}]\n",
"[{'name': 'Multiply', 'args': {'a': 3, 'b': 1}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}]\n",
"[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}]\n",
"[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}]\n",
"[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}, {'name': 'Add', 'args': {}, 'id': 'call_54Hx3DGjZitFlEjgMe1DYonh'}]\n",
"[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}, {'name': 'Add', 'args': {'a': 11}, 'id': 'call_54Hx3DGjZitFlEjgMe1DYonh'}]\n",
"[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}, {'name': 'Add', 'args': {'a': 11}, 'id': 'call_54Hx3DGjZitFlEjgMe1DYonh'}]\n",
"[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}, {'name': 'Add', 'args': {'a': 11, 'b': 49}, 'id': 'call_54Hx3DGjZitFlEjgMe1DYonh'}]\n",
"[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}, {'name': 'Add', 'args': {'a': 11, 'b': 49}, 'id': 'call_54Hx3DGjZitFlEjgMe1DYonh'}]\n"
]
}
],
"source": [
"first = True\n",
"async for chunk in llm_with_tools.astream(query):\n",
" if first:\n",
" gathered = chunk\n",
" first = False\n",
" else:\n",
" gathered = gathered + chunk\n",
"\n",
" print(gathered.tool_calls)"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<class 'dict'>\n"
]
}
],
"source": [
"print(type(gathered.tool_calls[0][\"args\"]))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Passing tool outputs to the model\n",
"\n",
"If we're using the model-generated tool invocations to actually call tools and want to pass the tool results back to the model, we can do so using `ToolMessage`s."
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessage(content='What is 3 * 12? Also, what is 11 + 49?'),\n",
" AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_svc2GLSxNFALbaCAbSjMI9J8', 'function': {'arguments': '{\"a\": 3, \"b\": 12}', 'name': 'Multiply'}, 'type': 'function'}, {'id': 'call_r8jxte3zW6h3MEGV3zH2qzFh', 'function': {'arguments': '{\"a\": 11, \"b\": 49}', 'name': 'Add'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 50, 'prompt_tokens': 105, 'total_tokens': 155}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': 'fp_d9767fc5b9', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-a79ad1dd-95f1-4a46-b688-4c83f327a7b3-0', tool_calls=[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_svc2GLSxNFALbaCAbSjMI9J8'}, {'name': 'Add', 'args': {'a': 11, 'b': 49}, 'id': 'call_r8jxte3zW6h3MEGV3zH2qzFh'}]),\n",
" ToolMessage(content='36', tool_call_id='call_svc2GLSxNFALbaCAbSjMI9J8'),\n",
" ToolMessage(content='60', tool_call_id='call_r8jxte3zW6h3MEGV3zH2qzFh')]"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.messages import HumanMessage, ToolMessage\n",
"\n",
"messages = [HumanMessage(query)]\n",
"ai_msg = llm_with_tools.invoke(messages)\n",
"messages.append(ai_msg)\n",
"for tool_call in ai_msg.tool_calls:\n",
" selected_tool = {\"add\": add, \"multiply\": multiply}[tool_call[\"name\"].lower()]\n",
" tool_output = selected_tool.invoke(tool_call[\"args\"])\n",
" messages.append(ToolMessage(tool_output, tool_call_id=tool_call[\"id\"]))\n",
"messages"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='3 * 12 is 36 and 11 + 49 is 60.', response_metadata={'token_usage': {'completion_tokens': 18, 'prompt_tokens': 171, 'total_tokens': 189}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': 'fp_d9767fc5b9', 'finish_reason': 'stop', 'logprobs': None}, id='run-20b52149-e00d-48ea-97cf-f8de7a255f8c-0')"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_with_tools.invoke(messages)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that we pass back the same `id` in the `ToolMessage` as the what we receive from the model in order to help the model match tool responses with tool calls.\n",
"\n",
"## Few-shot prompting\n",
"\n",
"For more complex tool use it's very useful to add few-shot examples to the prompt. We can do this by adding `AIMessage`s with `ToolCall`s and corresponding `ToolMessage`s to our prompt.\n",
"\n",
"For example, even with some special instructions our model can get tripped up by order of operations:"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'Multiply',\n",
" 'args': {'a': 119, 'b': 8},\n",
" 'id': 'call_T88XN6ECucTgbXXkyDeC2CQj'},\n",
" {'name': 'Add',\n",
" 'args': {'a': 952, 'b': -20},\n",
" 'id': 'call_licdlmGsRqzup8rhqJSb1yZ4'}]"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_with_tools.invoke(\n",
" \"Whats 119 times 8 minus 20. Don't do any math yourself, only use tools for math. Respect order of operations\"\n",
").tool_calls"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The model shouldn't be trying to add anything yet, since it technically can't know the results of 119 * 8 yet.\n",
"\n",
"By adding a prompt with some examples we can correct this behavior:"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'Multiply',\n",
" 'args': {'a': 119, 'b': 8},\n",
" 'id': 'call_9MvuwQqg7dlJupJcoTWiEsDo'}]"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.messages import AIMessage\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"\n",
"examples = [\n",
" HumanMessage(\n",
" \"What's the product of 317253 and 128472 plus four\", name=\"example_user\"\n",
" ),\n",
" AIMessage(\n",
" \"\",\n",
" name=\"example_assistant\",\n",
" tool_calls=[\n",
" {\"name\": \"Multiply\", \"args\": {\"x\": 317253, \"y\": 128472}, \"id\": \"1\"}\n",
" ],\n",
" ),\n",
" ToolMessage(\"16505054784\", tool_call_id=\"1\"),\n",
" AIMessage(\n",
" \"\",\n",
" name=\"example_assistant\",\n",
" tool_calls=[{\"name\": \"Add\", \"args\": {\"x\": 16505054784, \"y\": 4}, \"id\": \"2\"}],\n",
" ),\n",
" ToolMessage(\"16505054788\", tool_call_id=\"2\"),\n",
" AIMessage(\n",
" \"The product of 317253 and 128472 plus four is 16505054788\",\n",
" name=\"example_assistant\",\n",
" ),\n",
"]\n",
"\n",
"system = \"\"\"You are bad at math but are an expert at using a calculator. \n",
"\n",
"Use past tool usage as an example of how to correctly use the tools.\"\"\"\n",
"few_shot_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", system),\n",
" *examples,\n",
" (\"human\", \"{query}\"),\n",
" ]\n",
")\n",
"\n",
"chain = {\"query\": RunnablePassthrough()} | few_shot_prompt | llm_with_tools\n",
"chain.invoke(\"Whats 119 times 8 minus 20\").tool_calls"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And we get the correct output this time.\n",
"\n",
"Here's what the [LangSmith trace](https://smith.langchain.com/public/f70550a1-585f-4c9d-a643-13148ab1616f/r) looks like."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Binding model-specific formats (advanced)\n",
"\n",
"Providers adopt different conventions for formatting tool schemas. \n",
"For instance, OpenAI uses a format like this:\n",
"\n",
"- `type`: The type of the tool. At the time of writing, this is always `\"function\"`.\n",
"- `function`: An object containing tool parameters.\n",
"- `function.name`: The name of the schema to output.\n",
"- `function.description`: A high level description of the schema to output.\n",
"- `function.parameters`: The nested details of the schema you want to extract, formatted as a [JSON schema](https://json-schema.org/) dict.\n",
"\n",
"We can bind this model-specific format directly to the model as well if preferred. Here's an example:"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_mn4ELw1NbuE0DFYhIeK0GrPe', 'function': {'arguments': '{\"a\":119,\"b\":8}', 'name': 'multiply'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 17, 'prompt_tokens': 62, 'total_tokens': 79}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-353e8a9a-7125-4f94-8c68-4f3da4c21120-0', tool_calls=[{'name': 'multiply', 'args': {'a': 119, 'b': 8}, 'id': 'call_mn4ELw1NbuE0DFYhIeK0GrPe'}])"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_openai import ChatOpenAI\n",
"\n",
"model = ChatOpenAI()\n",
"\n",
"model_with_tools = model.bind(\n",
" tools=[\n",
" {\n",
" \"type\": \"function\",\n",
" \"function\": {\n",
" \"name\": \"multiply\",\n",
" \"description\": \"Multiply two integers together.\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"a\": {\"type\": \"number\", \"description\": \"First integer\"},\n",
" \"b\": {\"type\": \"number\", \"description\": \"Second integer\"},\n",
" },\n",
" \"required\": [\"a\", \"b\"],\n",
" },\n",
" },\n",
" }\n",
" ]\n",
")\n",
"\n",
"model_with_tools.invoke(\"Whats 119 times 8?\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This is functionally equivalent to the `bind_tools()` calls above."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Next steps\n",
"\n",
"Now you've learned how to bind tool schemas to a chat model and to call those tools. Next, check out some more specific uses of tool calling:\n",
"Now you've learned how to bind tool schemas to a chat model and to call those tools. Next, you can learn more about how to use tools:\n",
"\n",
"- Few shot promting [with tools](/docs/how_to/tools_few_shot/)\n",
"- Stream [tool calls](/docs/how_to/tool_streaming/)\n",
"- Bind [model-specific tools](/docs/how_to/tools_model_specific/)\n",
"- Pass [runtime values to tools](/docs/how_to/tool_runtime)\n",
"- Pass [tool results back to model](/docs/how_to/tool_results_pass_to_model)\n",
"\n",
"You can also check out some more specific uses of tool calling:\n",
"\n",
"- Building [tool-using chains and agents](/docs/how_to#tools)\n",
"- Getting [structured outputs](/docs/how_to/structured_output/) from models"
@@ -705,7 +343,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
"version": "3.10.5"
}
},
"nbformat": 4,

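The streaming examples shown above accumulate chunks with a manual `first`/`gathered` loop and note that adding message chunks merges their tool call chunks. The same merge can be expressed with `functools.reduce`; the sketch below is an alternative formulation under the assumption that the `llm_with_tools` and `query` from that notebook are in scope, and it is not part of the original guide.

```python
# Sketch: AIMessageChunk supports `+`, so a stream of chunks can be folded
# into a single message whose tool call chunks are merged by index.
# Assumes `llm_with_tools` and `query` from the notebook above.
import operator
from functools import reduce

chunks = list(llm_with_tools.stream(query))
gathered = reduce(operator.add, chunks)

print(gathered.tool_call_chunks)  # merged string fragments, joined by index
print(gathered.tool_calls)        # best-effort parsed tool calls (dict args)
```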
View File

@@ -0,0 +1,108 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Disabling parallel tool calling (OpenAI only)\n",
"\n",
"OpenAI tool calling performs tool calling in parallel by default. That means that if we ask a question like \"What is the weather in Tokyo, New York, and Chicago?\" and we have a tool for getting the weather, it will call the tool 3 times in parallel. We can force it to call only a single tool once by using the ``parallel_tool_call`` parameter."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"First let's set up our tools and model:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.tools import tool\n",
"\n",
"\n",
"@tool\n",
"def add(a: int, b: int) -> int:\n",
" \"\"\"Adds a and b.\"\"\"\n",
" return a + b\n",
"\n",
"\n",
"@tool\n",
"def multiply(a: int, b: int) -> int:\n",
" \"\"\"Multiplies a and b.\"\"\"\n",
" return a * b\n",
"\n",
"\n",
"tools = [add, multiply]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from getpass import getpass\n",
"\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass()\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's show a quick example of how disabling parallel tool calls work:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'add',\n",
" 'args': {'a': 2, 'b': 2},\n",
" 'id': 'call_Hh4JOTCDM85Sm9Pr84VKrWu5'}]"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"llm_with_tools = llm.bind_tools(tools, parallel_tool_calls=False)\n",
"llm_with_tools.invoke(\"Please call the first tool two times\").tool_calls"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As we can see, even though we explicitly told the model to call a tool twice, by disabling parallel tool calls the model was constrained to only calling one."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": []
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

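For contrast with the `parallel_tool_calls=False` example above, the following sketch shows the default behavior side by side. It reuses the `llm` and `tools` from that notebook; the exact number of tool calls returned depends on the model, so the comments describe typical rather than guaranteed output.

```python
# Sketch: compare default (parallel) tool calling with parallel_tool_calls=False.
# Assumes `llm` and `tools` from the notebook above; outputs depend on the model.
llm_parallel = llm.bind_tools(tools)
print(llm_parallel.invoke("Please call the first tool two times").tool_calls)
# Typically two separate `add` calls when parallel tool calling is enabled.

llm_single = llm.bind_tools(tools, parallel_tool_calls=False)
print(llm_single.invoke("Please call the first tool two times").tool_calls)
# At most one tool call per turn.
```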
View File

@@ -1,160 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "4facdf7f-680e-4d28-908b-2b8408e2a741",
"metadata": {},
"source": [
"# How to call tools with multi-modal data\n",
"\n",
"Here we demonstrate how to call tools with multi-modal data, such as images.\n",
"\n",
"Some multi-modal models, such as those that can reason over images or audio, support [tool calling](/docs/concepts/#functiontool-calling) features as well.\n",
"\n",
"To call tools using such models, simply bind tools to them in the [usual way](/docs/how_to/tool_calling), and invoke the model using content blocks of the desired type (e.g., containing image data).\n",
"\n",
"Below, we demonstrate examples using [OpenAI](/docs/integrations/platforms/openai) and [Anthropic](/docs/integrations/platforms/anthropic). We will use the same image and tool in all cases. Let's first select an image, and build a placeholder tool that expects as input the string \"sunny\", \"cloudy\", or \"rainy\". We will ask the models to describe the weather in the image."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "0d9fd81a-b7f0-445a-8e3d-cfc2d31fdd59",
"metadata": {},
"outputs": [],
"source": [
"from typing import Literal\n",
"\n",
"from langchain_core.tools import tool\n",
"\n",
"image_url = \"https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg\"\n",
"\n",
"\n",
"@tool\n",
"def weather_tool(weather: Literal[\"sunny\", \"cloudy\", \"rainy\"]) -> None:\n",
" \"\"\"Describe the weather\"\"\"\n",
" pass"
]
},
{
"cell_type": "markdown",
"id": "8656018e-c56d-47d2-b2be-71e87827f90a",
"metadata": {},
"source": [
"## OpenAI\n",
"\n",
"For OpenAI, we can feed the image URL directly in a content block of type \"image_url\":"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "a8819cf3-5ddc-44f0-889a-19ca7b7fe77e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[{'name': 'weather_tool', 'args': {'weather': 'sunny'}, 'id': 'call_mRYL50MtHdeNuNIjSCm5UPmB'}]\n"
]
}
],
"source": [
"from langchain_core.messages import HumanMessage\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"model = ChatOpenAI(model=\"gpt-4o\").bind_tools([weather_tool])\n",
"\n",
"message = HumanMessage(\n",
" content=[\n",
" {\"type\": \"text\", \"text\": \"describe the weather in this image\"},\n",
" {\"type\": \"image_url\", \"image_url\": {\"url\": image_url}},\n",
" ],\n",
")\n",
"response = model.invoke([message])\n",
"print(response.tool_calls)"
]
},
{
"cell_type": "markdown",
"id": "e5738224-1109-4bf8-8976-ff1570dd1d46",
"metadata": {},
"source": [
"Note that we recover tool calls with parsed arguments in LangChain's [standard format](/docs/how_to/tool_calling) in the model response."
]
},
{
"cell_type": "markdown",
"id": "0cee63ff-e09f-4dd8-8323-912edbde94f6",
"metadata": {},
"source": [
"## Anthropic\n",
"\n",
"For Anthropic, we can format a base64-encoded image into a content block of type \"image\", as below:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "d90c4590-71c8-42b1-99ff-03a9eca8082e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[{'name': 'weather_tool', 'args': {'weather': 'sunny'}, 'id': 'toolu_016m9KfknJqx5fVRYk4tkF6s'}]\n"
]
}
],
"source": [
"import base64\n",
"\n",
"import httpx\n",
"from langchain_anthropic import ChatAnthropic\n",
"\n",
"image_data = base64.b64encode(httpx.get(image_url).content).decode(\"utf-8\")\n",
"\n",
"model = ChatAnthropic(model=\"claude-3-sonnet-20240229\").bind_tools([weather_tool])\n",
"\n",
"message = HumanMessage(\n",
" content=[\n",
" {\"type\": \"text\", \"text\": \"describe the weather in this image\"},\n",
" {\n",
" \"type\": \"image\",\n",
" \"source\": {\n",
" \"type\": \"base64\",\n",
" \"media_type\": \"image/jpeg\",\n",
" \"data\": image_data,\n",
" },\n",
" },\n",
" ],\n",
")\n",
"response = model.invoke([message])\n",
"print(response.tool_calls)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -0,0 +1,126 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to force tool calling behavior\n",
"\n",
"In order to force our LLM to spelect a specific tool, we can use the `tool_choice` parameter to ensure certain behavior. First, let's define our model and tools:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.tools import tool\n",
"\n",
"\n",
"@tool\n",
"def add(a: int, b: int) -> int:\n",
" \"\"\"Adds a and b.\"\"\"\n",
" return a + b\n",
"\n",
"\n",
"@tool\n",
"def multiply(a: int, b: int) -> int:\n",
" \"\"\"Multiplies a and b.\"\"\"\n",
" return a * b\n",
"\n",
"\n",
"tools = [add, multiply]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# | output: false\n",
"# | echo: false\n",
"\n",
"%pip install -qU langchain langchain_openai\n",
"\n",
"import os\n",
"from getpass import getpass\n",
"\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass()\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, we can force our tool to call the multiply tool by using the following code:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_9cViskmLvPnHjXk9tbVla5HA', 'function': {'arguments': '{\"a\":2,\"b\":4}', 'name': 'Multiply'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 9, 'prompt_tokens': 103, 'total_tokens': 112}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-095b827e-2bdd-43bb-8897-c843f4504883-0', tool_calls=[{'name': 'Multiply', 'args': {'a': 2, 'b': 4}, 'id': 'call_9cViskmLvPnHjXk9tbVla5HA'}], usage_metadata={'input_tokens': 103, 'output_tokens': 9, 'total_tokens': 112})"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"llm_forced_to_multiply = llm.bind_tools(tools, tool_choice=\"Multiply\")\n",
"llm_forced_to_multiply.invoke(\"what is 2 + 4\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Even if we pass it something that doesn't require multiplcation - it will still call the tool!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can also just force our tool to select at least one of our tools by passing in the \"any\" (or \"required\" which is OpenAI specific) keyword to the `tool_choice` parameter."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_mCSiJntCwHJUBfaHZVUB2D8W', 'function': {'arguments': '{\"a\":1,\"b\":2}', 'name': 'Add'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 15, 'prompt_tokens': 94, 'total_tokens': 109}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-28f75260-9900-4bed-8cd3-f1579abb65e5-0', tool_calls=[{'name': 'Add', 'args': {'a': 1, 'b': 2}, 'id': 'call_mCSiJntCwHJUBfaHZVUB2D8W'}], usage_metadata={'input_tokens': 94, 'output_tokens': 15, 'total_tokens': 109})"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"llm_forced_to_use_tool = llm.bind_tools(tools, tool_choice=\"any\")\n",
"llm_forced_to_use_tool.invoke(\"What day is today?\")"
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

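A forced tool call is still an ordinary tool call, so it can be executed the same way as in the other guides in this diff. The sketch below reuses the `add`, `multiply`, and `llm_forced_to_multiply` objects defined above; it is an illustration, not part of the notebook itself.

```python
# Sketch: execute whatever tool the model was forced to call.
# Assumes `add`, `multiply`, and `llm_forced_to_multiply` from the notebook above.
ai_msg = llm_forced_to_multiply.invoke("what is 2 + 4")

for tool_call in ai_msg.tool_calls:
    selected_tool = {"add": add, "multiply": multiply}[tool_call["name"].lower()]
    print(selected_tool.invoke(tool_call["args"]))  # multiply(2, 4) -> 8
```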
View File

@@ -0,0 +1,127 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to pass tool outputs to the model\n",
"\n",
"If we're using the model-generated tool invocations to actually call tools and want to pass the tool results back to the model, we can do so using `ToolMessage`s. First, let's define our tools and our model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.tools import tool\n",
"\n",
"\n",
"@tool\n",
"def add(a: int, b: int) -> int:\n",
" \"\"\"Adds a and b.\"\"\"\n",
" return a + b\n",
"\n",
"\n",
"@tool\n",
"def multiply(a: int, b: int) -> int:\n",
" \"\"\"Multiplies a and b.\"\"\"\n",
" return a * b\n",
"\n",
"\n",
"tools = [add, multiply]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from getpass import getpass\n",
"\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass()\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)\n",
"llm_with_tools = llm.bind_tools(tools)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we can use ``ToolMessage`` to pass back the output of the tool calls to the model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessage(content='What is 3 * 12? Also, what is 11 + 49?'),\n",
" AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_svc2GLSxNFALbaCAbSjMI9J8', 'function': {'arguments': '{\"a\": 3, \"b\": 12}', 'name': 'Multiply'}, 'type': 'function'}, {'id': 'call_r8jxte3zW6h3MEGV3zH2qzFh', 'function': {'arguments': '{\"a\": 11, \"b\": 49}', 'name': 'Add'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 50, 'prompt_tokens': 105, 'total_tokens': 155}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': 'fp_d9767fc5b9', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-a79ad1dd-95f1-4a46-b688-4c83f327a7b3-0', tool_calls=[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_svc2GLSxNFALbaCAbSjMI9J8'}, {'name': 'Add', 'args': {'a': 11, 'b': 49}, 'id': 'call_r8jxte3zW6h3MEGV3zH2qzFh'}]),\n",
" ToolMessage(content='36', tool_call_id='call_svc2GLSxNFALbaCAbSjMI9J8'),\n",
" ToolMessage(content='60', tool_call_id='call_r8jxte3zW6h3MEGV3zH2qzFh')]"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"from langchain_core.messages import HumanMessage, ToolMessage\n",
"\n",
"query = \"What is 3 * 12? Also, what is 11 + 49?\"\n",
"\n",
"messages = [HumanMessage(query)]\n",
"ai_msg = llm_with_tools.invoke(messages)\n",
"messages.append(ai_msg)\n",
"for tool_call in ai_msg.tool_calls:\n",
" selected_tool = {\"add\": add, \"multiply\": multiply}[tool_call[\"name\"].lower()]\n",
" tool_output = selected_tool.invoke(tool_call[\"args\"])\n",
" messages.append(ToolMessage(tool_output, tool_call_id=tool_call[\"id\"]))\n",
"messages"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='3 * 12 is 36 and 11 + 49 is 60.', response_metadata={'token_usage': {'completion_tokens': 18, 'prompt_tokens': 171, 'total_tokens': 189}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': 'fp_d9767fc5b9', 'finish_reason': 'stop', 'logprobs': None}, id='run-20b52149-e00d-48ea-97cf-f8de7a255f8c-0')"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"llm_with_tools.invoke(messages)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that we pass back the same `id` in the `ToolMessage` as the what we receive from the model in order to help the model match tool responses with tool calls."
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

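To make the round trip above reusable, one could wrap it in a small helper. The function name `answer_with_tools` below is hypothetical and not part of the notebook; the sketch assumes the `add`, `multiply`, and `llm_with_tools` objects defined above.

```python
# Sketch: one round of tool execution followed by a final model answer.
# `answer_with_tools` is a hypothetical helper; it assumes `add`, `multiply`,
# and `llm_with_tools` from the notebook above.
from langchain_core.messages import HumanMessage, ToolMessage


def answer_with_tools(question: str) -> str:
    messages = [HumanMessage(question)]
    ai_msg = llm_with_tools.invoke(messages)
    messages.append(ai_msg)
    for tool_call in ai_msg.tool_calls:
        selected_tool = {"add": add, "multiply": multiply}[tool_call["name"].lower()]
        tool_output = selected_tool.invoke(tool_call["args"])
        messages.append(ToolMessage(str(tool_output), tool_call_id=tool_call["id"]))
    return llm_with_tools.invoke(messages).content


print(answer_with_tools("What is 3 * 12? Also, what is 11 + 49?"))
```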
View File

@@ -0,0 +1,256 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to pass run time values to a tool\n",
"\n",
":::info Prerequisites\n",
"\n",
"This guide assumes familiarity with the following concepts:\n",
"- [Chat models](/docs/concepts/#chat-models)\n",
"- [LangChain Tools](/docs/concepts/#tools)\n",
"- [How to create tools](/docs/how_to/custom_tools)\n",
"- [How to use a model to call tools](https://python.langchain.com/v0.2/docs/how_to/tool_calling)\n",
":::\n",
"\n",
    ":::info Supported models\n",
"\n",
"This how-to guide uses models with native tool calling capability.\n",
"You can find a [list of all models that support tool calling](/docs/integrations/chat/).\n",
"\n",
":::\n",
"\n",
    ":::info Using with LangGraph\n",
"\n",
"If you're using LangGraph, please refer to [this how-to guide](https://langchain-ai.github.io/langgraph/how-tos/pass-run-time-values-to-tools/)\n",
"which shows how to create an agent that keeps track of a given user's favorite pets.\n",
":::\n",
"\n",
"You may need to bind values to a tool that are only known at runtime. For example, the tool logic may require using the ID of the user who made the request.\n",
"\n",
"Most of the time, such values should not be controlled by the LLM. In fact, allowing the LLM to control the user ID may lead to a security risk.\n",
"\n",
"Instead, the LLM should only control the parameters of the tool that are meant to be controlled by the LLM, while other parameters (such as user ID) should be fixed by the application logic.\n",
"\n",
"This how-to guide shows a simple design pattern that creates the tool dynamically at run time and binds to them appropriate values."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can bind them to chat models as follows:\n",
"\n",
"```{=mdx}\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
"\n",
"<ChatModelTabs\n",
" customVarName=\"llm\"\n",
" fireworksParams={`model=\"accounts/fireworks/models/firefunction-v1\", temperature=0`}\n",
"/>\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.2.1\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m24.0\u001b[0m\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpython -m pip install --upgrade pip\u001b[0m\n",
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"# | output: false\n",
"# | echo: false\n",
"\n",
"%pip install -qU langchain langchain_openai\n",
"\n",
"import os\n",
"from getpass import getpass\n",
"\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"if \"OPENAI_API_KEY\" not in os.environ:\n",
" os.environ[\"OPENAI_API_KEY\"] = getpass()\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Passing request time information\n",
"\n",
"The idea is to create the tool dynamically at request time, and bind to it the appropriate information. For example,\n",
"this information may be the user ID as resolved from the request itself."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from typing import List\n",
"\n",
"from langchain_core.output_parsers import JsonOutputParser\n",
"from langchain_core.tools import BaseTool, tool"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"user_to_pets = {}\n",
"\n",
"\n",
"def generate_tools_for_user(user_id: str) -> List[BaseTool]:\n",
" \"\"\"Generate a set of tools that have a user id associated with them.\"\"\"\n",
"\n",
" @tool\n",
" def update_favorite_pets(pets: List[str]) -> None:\n",
" \"\"\"Add the list of favorite pets.\"\"\"\n",
" user_to_pets[user_id] = pets\n",
"\n",
" @tool\n",
" def delete_favorite_pets() -> None:\n",
" \"\"\"Delete the list of favorite pets.\"\"\"\n",
" if user_id in user_to_pets:\n",
" del user_to_pets[user_id]\n",
"\n",
" @tool\n",
" def list_favorite_pets() -> None:\n",
" \"\"\"List favorite pets if any.\"\"\"\n",
" return user_to_pets.get(user_id, [])\n",
"\n",
" return [update_favorite_pets, delete_favorite_pets, list_favorite_pets]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Verify that the tools work correctly"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'eugene': ['cat', 'dog']}\n",
"['cat', 'dog']\n"
]
}
],
"source": [
"update_pets, delete_pets, list_pets = generate_tools_for_user(\"eugene\")\n",
"update_pets.invoke({\"pets\": [\"cat\", \"dog\"]})\n",
"print(user_to_pets)\n",
"print(list_pets.invoke({}))"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"\n",
"def handle_run_time_request(user_id: str, query: str):\n",
" \"\"\"Handle run time request.\"\"\"\n",
" tools = generate_tools_for_user(user_id)\n",
" llm_with_tools = llm.bind_tools(tools)\n",
" prompt = ChatPromptTemplate.from_messages(\n",
" [(\"system\", \"You are a helpful assistant.\")],\n",
" )\n",
" chain = prompt | llm_with_tools\n",
" return llm_with_tools.invoke(query)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This code will allow the LLM to invoke the tools, but the LLM is **unaware** of the fact that a **user ID** even exists!"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'update_favorite_pets',\n",
" 'args': {'pets': ['cats', 'parrots']},\n",
" 'id': 'call_jJvjPXsNbFO5MMgW0q84iqCN'}]"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"ai_message = handle_run_time_request(\n",
" \"eugene\", \"my favorite animals are cats and parrots.\"\n",
")\n",
"ai_message.tool_calls"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
    ":::important\n",
"\n",
"Chat models only output requests to invoke tools, they don't actually invoke the underlying tools.\n",
"\n",
"To see how to invoke the tools, please refer to [how to use a model to call tools](https://python.langchain.com/v0.2/docs/how_to/tool_calling).\n",
":::"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

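The notebook above stops at inspecting `ai_message.tool_calls`; as its closing callout notes, chat models only request tool invocations. The sketch below shows one way those requests could be executed against the per-user tools. The `tool_map` dictionary is an assumption for illustration; everything else reuses names from the notebook above.

```python
# Sketch: execute the requested tool calls with the tools generated for a user.
# `tool_map` is an illustrative assumption; the other names come from the
# notebook above (generate_tools_for_user, handle_run_time_request, user_to_pets).
tools = generate_tools_for_user("eugene")
tool_map = {t.name: t for t in tools}

ai_message = handle_run_time_request(
    "eugene", "my favorite animals are cats and parrots."
)
for tool_call in ai_message.tool_calls:
    tool_map[tool_call["name"]].invoke(tool_call["args"])

print(user_to_pets)  # expected: {'eugene': ['cats', 'parrots']}
```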
View File

@@ -0,0 +1,235 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to stream tool calls\n",
"\n",
"When tools are called in a streaming context, \n",
"[message chunks](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessageChunk.html#langchain_core.messages.ai.AIMessageChunk) \n",
"will be populated with [tool call chunk](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.tool.ToolCallChunk.html#langchain_core.messages.tool.ToolCallChunk) \n",
"objects in a list via the `.tool_call_chunks` attribute. A `ToolCallChunk` includes \n",
"optional string fields for the tool `name`, `args`, and `id`, and includes an optional \n",
"integer field `index` that can be used to join chunks together. Fields are optional \n",
"because portions of a tool call may be streamed across different chunks (e.g., a chunk \n",
"that includes a substring of the arguments may have null values for the tool name and id).\n",
"\n",
"Because message chunks inherit from their parent message class, an \n",
"[AIMessageChunk](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessageChunk.html#langchain_core.messages.ai.AIMessageChunk) \n",
"with tool call chunks will also include `.tool_calls` and `.invalid_tool_calls` fields. \n",
"These fields are parsed best-effort from the message's tool call chunks.\n",
"\n",
"Note that not all providers currently support streaming for tool calls. Before we start let's define our tools and our model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.tools import tool\n",
"\n",
"\n",
"@tool\n",
"def add(a: int, b: int) -> int:\n",
" \"\"\"Adds a and b.\"\"\"\n",
" return a + b\n",
"\n",
"\n",
"@tool\n",
"def multiply(a: int, b: int) -> int:\n",
" \"\"\"Multiplies a and b.\"\"\"\n",
" return a * b\n",
"\n",
"\n",
"tools = [add, multiply]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from getpass import getpass\n",
"\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass()\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)\n",
"llm_with_tools = llm.bind_tools(tools)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's define our query and stream our output:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[]\n",
"[{'name': 'Multiply', 'args': '', 'id': 'call_3aQwTP9CYlFxwOvQZPHDu6wL', 'index': 0}]\n",
"[{'name': None, 'args': '{\"a\"', 'id': None, 'index': 0}]\n",
"[{'name': None, 'args': ': 3, ', 'id': None, 'index': 0}]\n",
"[{'name': None, 'args': '\"b\": 1', 'id': None, 'index': 0}]\n",
"[{'name': None, 'args': '2}', 'id': None, 'index': 0}]\n",
"[{'name': 'Add', 'args': '', 'id': 'call_SQUoSsJz2p9Kx2x73GOgN1ja', 'index': 1}]\n",
"[{'name': None, 'args': '{\"a\"', 'id': None, 'index': 1}]\n",
"[{'name': None, 'args': ': 11,', 'id': None, 'index': 1}]\n",
"[{'name': None, 'args': ' \"b\": ', 'id': None, 'index': 1}]\n",
"[{'name': None, 'args': '49}', 'id': None, 'index': 1}]\n",
"[]\n"
]
}
],
"source": [
"query = \"What is 3 * 12? Also, what is 11 + 49?\"\n",
"\n",
"async for chunk in llm_with_tools.astream(query):\n",
" print(chunk.tool_call_chunks)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that adding message chunks will merge their corresponding tool call chunks. This is the principle by which LangChain's various [tool output parsers](/docs/how_to/output_parser_structured) support streaming.\n",
"\n",
"For example, below we accumulate tool call chunks:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[]\n",
"[{'name': 'Multiply', 'args': '', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}]\n",
"[{'name': 'Multiply', 'args': '{\"a\"', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}]\n",
"[{'name': 'Multiply', 'args': '{\"a\": 3, ', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}]\n",
"[{'name': 'Multiply', 'args': '{\"a\": 3, \"b\": 1', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}]\n",
"[{'name': 'Multiply', 'args': '{\"a\": 3, \"b\": 12}', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}]\n",
"[{'name': 'Multiply', 'args': '{\"a\": 3, \"b\": 12}', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}, {'name': 'Add', 'args': '', 'id': 'call_b4iMiB3chGNGqbt5SjqqD2Wh', 'index': 1}]\n",
"[{'name': 'Multiply', 'args': '{\"a\": 3, \"b\": 12}', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}, {'name': 'Add', 'args': '{\"a\"', 'id': 'call_b4iMiB3chGNGqbt5SjqqD2Wh', 'index': 1}]\n",
"[{'name': 'Multiply', 'args': '{\"a\": 3, \"b\": 12}', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}, {'name': 'Add', 'args': '{\"a\": 11,', 'id': 'call_b4iMiB3chGNGqbt5SjqqD2Wh', 'index': 1}]\n",
"[{'name': 'Multiply', 'args': '{\"a\": 3, \"b\": 12}', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}, {'name': 'Add', 'args': '{\"a\": 11, \"b\": ', 'id': 'call_b4iMiB3chGNGqbt5SjqqD2Wh', 'index': 1}]\n",
"[{'name': 'Multiply', 'args': '{\"a\": 3, \"b\": 12}', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}, {'name': 'Add', 'args': '{\"a\": 11, \"b\": 49}', 'id': 'call_b4iMiB3chGNGqbt5SjqqD2Wh', 'index': 1}]\n",
"[{'name': 'Multiply', 'args': '{\"a\": 3, \"b\": 12}', 'id': 'call_AkL3dVeCjjiqvjv8ckLxL3gP', 'index': 0}, {'name': 'Add', 'args': '{\"a\": 11, \"b\": 49}', 'id': 'call_b4iMiB3chGNGqbt5SjqqD2Wh', 'index': 1}]\n"
]
}
],
"source": [
"first = True\n",
"async for chunk in llm_with_tools.astream(query):\n",
" if first:\n",
" gathered = chunk\n",
" first = False\n",
" else:\n",
" gathered = gathered + chunk\n",
"\n",
" print(gathered.tool_call_chunks)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<class 'str'>\n"
]
}
],
"source": [
"print(type(gathered.tool_call_chunks[0][\"args\"]))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And below we accumulate tool calls to demonstrate partial parsing:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[]\n",
"[]\n",
"[{'name': 'Multiply', 'args': {}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}]\n",
"[{'name': 'Multiply', 'args': {'a': 3}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}]\n",
"[{'name': 'Multiply', 'args': {'a': 3, 'b': 1}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}]\n",
"[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}]\n",
"[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}]\n",
"[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}, {'name': 'Add', 'args': {}, 'id': 'call_54Hx3DGjZitFlEjgMe1DYonh'}]\n",
"[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}, {'name': 'Add', 'args': {'a': 11}, 'id': 'call_54Hx3DGjZitFlEjgMe1DYonh'}]\n",
"[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}, {'name': 'Add', 'args': {'a': 11}, 'id': 'call_54Hx3DGjZitFlEjgMe1DYonh'}]\n",
"[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}, {'name': 'Add', 'args': {'a': 11, 'b': 49}, 'id': 'call_54Hx3DGjZitFlEjgMe1DYonh'}]\n",
"[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_4p0D4tHVXSiae9Mu0e8jlI1m'}, {'name': 'Add', 'args': {'a': 11, 'b': 49}, 'id': 'call_54Hx3DGjZitFlEjgMe1DYonh'}]\n"
]
}
],
"source": [
"first = True\n",
"async for chunk in llm_with_tools.astream(query):\n",
" if first:\n",
" gathered = chunk\n",
" first = False\n",
" else:\n",
" gathered = gathered + chunk\n",
"\n",
" print(gathered.tool_calls)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<class 'dict'>\n"
]
}
],
"source": [
"print(type(gathered.tool_calls[0][\"args\"]))"
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

Some files were not shown because too many files have changed in this diff.