Compare commits

..

83 Commits

Author SHA1 Message Date
William FH
29f6769c21 Merge branch 'master' into wfh/async_chromium 2024-07-16 14:50:11 -07:00
Erick Friis
6c3e65a878 infra: prerelease dep checking on release (#23269) 2024-07-16 21:48:15 +00:00
Eugene Yurtsev
616196c620 Docs: Add how to dispatch custom callback events (#24278)
* Add a how-to guide for dispatching custom callback events.
* Add links from the index to the how-to guide
* Add a link from the streaming-from-within-a-tool guide
* Update versionadded to the correct release
https://github.com/langchain-ai/langchain/releases/tag/langchain-core%3D%3D0.2.15
2024-07-16 17:38:32 -04:00
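A minimal sketch of the pattern the guide above documents, assuming langchain-core >= 0.2.15; the event name and payload below are illustrative:

```python
import asyncio

from langchain_core.callbacks.manager import adispatch_custom_event
from langchain_core.runnables import RunnableLambda


async def slow_step(x: int) -> int:
    # Dispatch a custom event from inside a runnable. On Python < 3.11 the
    # runnable's config must be forwarded explicitly for this to propagate.
    await adispatch_custom_event("progress", {"fraction_done": 0.5})
    return x * 2


async def main() -> None:
    # Custom events surface through astream_events as "on_custom_event".
    async for event in RunnableLambda(slow_step).astream_events(1, version="v2"):
        if event["event"] == "on_custom_event":
            print(event["name"], event["data"])


asyncio.run(main())
```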
Erick Friis
dd7938ace8 docs: readthedocs deprecation fix (#24321)
https://about.readthedocs.com/blog/2024/07/addons-by-default/#how-does-it-affect-my-projects

we use build.command so we're already using addons, so I think this is
it
2024-07-16 20:32:51 +00:00
Srijan Dubey
ef07308c30 Upgraded shaleprotocol to use langchain v0.2, removed deprecated classes (#24320)
Description: Added support for langchain v0.2 for Shale Protocol.
Replaced LLMChain with the Runnable interface, which allows any two
Runnables to be 'chained' together into sequences. Also added
StreamingStdOutCallbackHandler, a callback handler for streaming.
Issue: None
Dependencies: None.
2024-07-16 20:07:36 +00:00
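As a reference for the migration described above, a minimal sketch of the Runnable sequence that replaces LLMChain; the fake model is a stand-in for the Shale Protocol LLM:

```python
from langchain_core.language_models import FakeListLLM
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate

# Stand-in model so the sketch runs anywhere; swap in the real endpoint LLM.
llm = FakeListLLM(responses=["A short summary."])
prompt = PromptTemplate.from_template("Summarize: {text}")

# LLMChain(prompt=prompt, llm=llm) becomes a Runnable sequence:
chain = prompt | llm | StrOutputParser()
print(chain.invoke({"text": "LangChain v0.2 migration notes"}))
```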
pbharti0831
049bc37111 Cookbook for applying RAG locally using open source models and tools on CPU (#24284)
This cookbook guides users through implementing RAG locally on a CPU using
langchain tools and open-source models. It enables the Llama2 model to
answer queries about Intel's Q1 2024 earnings release using a RAG pipeline.

Main libraries are langchain, llama-cpp-python and gpt4all.

If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, ccurme, vbarda, hwchase17.

---------

Co-authored-by: Sriragavi <sriragavi.r@intel.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-07-16 15:17:10 -04:00
Leonid Ganeline
5ccf8ebfac core: docstrings vectorstores update (#24281)
Added missing docstrings. Formatted docstrings into a consistent form.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-16 16:58:11 +00:00
Erick Friis
1e9cc02ed8 openai: raw response headers (#24150) 2024-07-16 09:54:54 -07:00
Bagatur
dc42279eb5 core[patch]: fix Typing.cast import (#24313)
Fixes #24287
2024-07-16 16:53:48 +00:00
Anush
e38bf08139 qdrant: Fixed typos in Qdrant vectorstore docs (#24312)
## Description 

As the title says.
2024-07-16 09:44:07 -07:00
bovlb
5caa381177 community[minor]: Add ApertureDB as a vectorstore (#24088)
Thank you for contributing to LangChain!

- [X] **ApertureDB as vectorstore**: "community: Add ApertureDB as a
vectorstore"

- **Description:** this change provides a new community integration that
uses ApertureData's ApertureDB as a vector store.
    - **Issue:** none
    - **Dependencies:** depends on ApertureDB Python SDK
    - **Twitter handle:** ApertureData

- [X] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.

Integration tests rely on a local run of a public docker image.
Example notebook additionally relies on a local Ollama server.

- [X] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

All lint tests pass.

Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional
ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in
langchain.

If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, ccurme, vbarda, hwchase17.

---------

Co-authored-by: Gautam <gautam@aperturedata.io>
2024-07-16 09:32:59 -07:00
frob
c59e663365 community[patch]: Fix docstring for ollama parameter "keep_alive" (#23973)
Fix doc-string for ollama integration
2024-07-16 14:48:38 +00:00
Mazen Ramadan
0c1889c713 docs: fix parameter typo in scrapfly loader docs (#24307)
Fixed a parameter typo in the
[ScrapflyLoader](https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/document_loaders/scrapfly.py)
docs, where `ignore_scrape_failures` was used instead of
`continue_on_failure`.

- Description: Fix wrong param typo in ScrapflyLoader docs.
2024-07-16 14:48:13 +00:00
Leonid Ganeline
5fcf2ef7ca core: docstrings documents (#23506)
Added missing docstrings. Formatted docstrings into a consistent form.
2024-07-16 10:43:54 -04:00
Rafael Pereira
77dd327282 Docs: Fix Concepts Integration Tools Link (#24301)
- **Description:** This PR fixes the concepts page's integration tools link.

- **Issue:** Fixes issue #24112

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-07-16 10:29:30 -04:00
Rahul Raghavendra Choudhury
f5a38772a8 community[patch]: Update TavilySearch to use TavilyClient instead of the deprecated Client (#24270)
When using TavilySearchAPIRetriever with any conversation chain, the
following error occurs:

`TypeError: Client.__init__() got an unexpected keyword argument
'api_key'`

This is because the retriever class uses the deprecated `Client`
class; `TavilyClient` needs to be used instead.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2024-07-16 13:35:28 +00:00
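A minimal sketch of the corrected client usage, assuming the tavily-python SDK; the API key is a placeholder:

```python
from tavily import TavilyClient  # replaces the deprecated `Client`

# Unlike the old `Client`, `TavilyClient` accepts `api_key` directly.
client = TavilyClient(api_key="tvly-...")
results = client.search("What is LangChain?")
```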
Shenhai Ran
5f2dea2b20 core[patch]: Add encoding options when create prompt template from a file (#24054)
- Uses default utf-8 encoding for loading prompt templates from file

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-07-16 09:35:09 -04:00
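A minimal sketch of the new option added above; the file name is a placeholder:

```python
from langchain_core.prompts import PromptTemplate

# Force utf-8 instead of relying on the platform default encoding.
template = PromptTemplate.from_file("prompt.txt", encoding="utf-8")
```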
Chen Xiabin
69b1603173 baidu qianfan AiMessage with usage_metadata (#24288)
add usage_metadata to qianfan AIMessage. Thanks
2024-07-16 09:30:50 -04:00
amcastror
d83164f837 Update retrievers.ipynb (#24289)
Thank you for contributing to LangChain!

- [ ] **PR title**: "package: description"
- Where "package" is whichever of langchain, community, core,
experimental, etc. is being modified. Use "docs: ..." for purely docs
changes, "templates: ..." for template changes, "infra: ..." for CI
changes.
  - Example: "community: add foobar LLM"


- [ ] **PR message**: ***Delete this entire checklist*** and replace
with
    - **Description:** a description of the change
    - **Issue:** the issue # it fixes, if applicable
    - **Dependencies:** any dependencies required for this change
- **Twitter handle:** if your PR gets announced, and you'd like a
mention, we'll gladly shout you out!


- [ ] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [ ] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional
ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in
langchain.

If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, ccurme, vbarda, hwchase17.
2024-07-16 13:30:41 +00:00
Leonid Ganeline
198b85334f core[patch]: docstrings langchain_core/ files update (#24285)
Added missing docstrings. Formatted docstrings into a consistent form.
2024-07-16 09:21:51 -04:00
Dobiichi-Origami
7aeaa1974d community[patch]: change the class of qianfan_ak and qianfan_sk parameters (#24293)
- **Description:** we changed the class of two parameters to fix a bug
that caused a validation failure when using QianfanEmbeddingEndpoint
2024-07-16 09:17:48 -04:00
Tibor Reiss
1c753d1e81 core[patch]: Update typing for template format to include jinja2 as a Literal (#24144)
Fixes #23929 via adjusting the typing
2024-07-16 09:09:42 -04:00
Jacob Lee
6716379f0c docs[patch]: Fix rendering issue in code splitter page (#24291) 2024-07-15 23:08:21 -07:00
Jacob Lee
58fdb070fa docs[patch]: Update intro diagram (#24290)
CC @agola11
2024-07-15 22:04:42 -07:00
Erick Friis
1d7a3ae7ce infra: add test deps to add_dependents (#24283) 2024-07-15 15:48:53 -07:00
Erick Friis
d2f671271e langchain: fix extended test (#24282) 2024-07-15 15:29:48 -07:00
Lage Ragnarsson
a3c10fc6ce community: Add support for specifying hybrid search for Databricks vector search (#23528)
**Description:**

Databricks Vector Search recently added support for hybrid
keyword-similarity search.
See [usage
examples](https://docs.databricks.com/en/generative-ai/create-query-vector-search.html#query-a-vector-search-endpoint)
from their documentation.

This PR updates the Langchain vectorstore interface for Databricks to
enable the user to pass the *query_type* parameter to
*similarity_search* to make use of this functionality.
By default, there will not be any changes for existing users of this
interface. To use the new hybrid search feature, it is now possible to
do

```python
# ...
dvs = DatabricksVectorSearch(index)
dvs.similarity_search("my search query", query_type="HYBRID")
```

Or using the retriever:

```python
retriever = dvs.as_retriever(
    search_kwargs={
        "query_type": "HYBRID",
    }
)
retriever.invoke("my search query")
```

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-15 22:14:08 +00:00
Christopher Tee
5171ffc026 community(you): Integrate You.com conversational APIs (#23046)
You.com is releasing two new conversational APIs — Smart and Research. 

This PR:
- integrates those APIs with Langchain as an LLM
- supports streaming

If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, ccurme, vbarda, hwchase17.
2024-07-15 17:46:58 -04:00
maang-h
6c7d9f93b9 feat: Add ChatTongyi structured output (#24187)
- **Description:** Add `with_structured_output` method to ChatTongyi to
support structured output.
2024-07-15 15:57:21 -04:00
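A minimal sketch of the new method described above; the model name is an assumption and a DashScope API key is required:

```python
from langchain_community.chat_models.tongyi import ChatTongyi
from langchain_core.pydantic_v1 import BaseModel, Field


class Joke(BaseModel):
    """A joke to tell the user."""

    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline of the joke")


llm = ChatTongyi(model="qwen-max")  # reads DASHSCOPE_API_KEY from the env
structured_llm = llm.with_structured_output(Joke)
# structured_llm.invoke("Tell me a joke") -> Joke(setup=..., punchline=...)
```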
Chen Xiabin
8f4620f4b8 baidu qianfan streaming token_usage (#24117)
Thank you for contributing to LangChain!

- [ ] **PR title**: "package: description"
- Where "package" is whichever of langchain, community, core,
experimental, etc. is being modified. Use "docs: ..." for purely docs
changes, "templates: ..." for template changes, "infra: ..." for CI
changes.
  - Example: "community: add foobar LLM"


- [ ] **PR message**: ***Delete this entire checklist*** and replace
with
    - **Description:** a description of the change
    - **Issue:** the issue # it fixes, if applicable
    - **Dependencies:** any dependencies required for this change
- **Twitter handle:** if your PR gets announced, and you'd like a
mention, we'll gladly shout you out!


- [ ] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [ ] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional
ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in
langchain.

If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, ccurme, vbarda, hwchase17.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-07-15 19:52:31 +00:00
maang-h
9d97de34ae community[patch]: Improve ChatBaichuan init args and role (#23878)
- **Description:** Improve ChatBaichuan init args and role
   -  ChatBaichuan adds `system` role
   - alias: `baichuan_api_base` -> `base_url`
   - `with_search_enhance` is deprecated
   - Add `max_tokens` argument
2024-07-15 15:17:00 -04:00
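A minimal sketch of the updated init args listed above; the key and values are placeholders:

```python
from langchain_community.chat_models import ChatBaichuan
from langchain_core.messages import HumanMessage, SystemMessage

chat = ChatBaichuan(
    baichuan_api_key="sk-...",  # placeholder
    base_url="https://api.baichuan-ai.com/v1/chat/completions",  # alias for baichuan_api_base
    max_tokens=256,  # newly supported argument
)
# The `system` role is now supported:
# chat.invoke([SystemMessage(content="You are a poet."), HumanMessage(content="Hi")])
```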
Erick Friis
56cca23745 openai: remove some params from default serialization (#24280) 2024-07-15 18:53:36 +00:00
mrugank-wadekar
66bebeb76a partners: add similarity search by image functionality to langchain_chroma partner package (#22982)
- **Description:** This pull request introduces two new methods to the
Langchain Chroma partner package that enable similarity search based on
image embeddings. These methods enhance the package's functionality by
allowing users to search for images similar to a given image URI. It also
introduces a notebook to demonstrate their use.
  - **Issue:** N/A
  - **Dependencies:** None
  - **Twitter handle:** @mrugank9009

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-07-15 18:48:22 +00:00
pm390
b0aa915dea community[patch]: use asyncio.sleep instead of sleep in OpenAI Assistant async (#24275)
**Description:** Implemented async sleep using asyncio instead of
synchronous sleep in OpenAI Assistants
**Issue:** 24194
**Dependencies:** asyncio
**Twitter handle:** pietromald60939
2024-07-15 18:14:39 +00:00
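The general pattern behind this fix, sketched with a hypothetical helper:

```python
import asyncio


async def wait_before_polling(poll_interval_s: float = 1.0) -> None:
    # Before: time.sleep(poll_interval_s) blocked the entire event loop.
    # After: asyncio.sleep yields control so other tasks can run while waiting.
    await asyncio.sleep(poll_interval_s)


asyncio.run(wait_before_polling())
```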
Anush
d93ae756e6 qdrant: Documentation for the new QdrantVectorStore class (#24166)
## Description

Follow up on #24165. Adds a page to document the latest usage of the new
`QdrantVectorStore` class.
2024-07-15 10:39:23 -07:00
Erick Friis
1244e66bd4 docs: remove couchbase from docs linking (#24277)
`pip install couchbase` adds 12 minutes to the docs build...
2024-07-15 17:34:41 +00:00
wenngong
a001037319 retrievers: MultiVectorRetriever similarity_score_threshold search type (#23539)
Description: support MultiVectorRetriever similarity_score_threshold
search type.

Issue: #23387 #19404

---------

Co-authored-by: gongwn1 <gongwn1@lenovo.com>
2024-07-15 13:31:34 -04:00
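A minimal sketch of the new search type; `vectorstore` stands in for any already-populated vector store:

```python
from langchain.retrievers.multi_vector import MultiVectorRetriever, SearchType
from langchain.storage import InMemoryByteStore

# `vectorstore` is assumed to be a populated VectorStore instance.
retriever = MultiVectorRetriever(
    vectorstore=vectorstore,
    byte_store=InMemoryByteStore(),
    search_type=SearchType.similarity_score_threshold,
    search_kwargs={"score_threshold": 0.5},
)
```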
Carlos André Antunes
20151384d7 fix azure_openai.py: some keys do not exist (#24158)
In some lines, the code tries to read a key that does not exist yet. In
these cases, I changed direct dictionary access to the dict.get() method.

Thank you for contributing to LangChain!

- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
2024-07-15 17:17:05 +00:00
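The general pattern of the fix above, sketched on a toy dict:

```python
response: dict = {"model": "gpt-4o"}

# Before: direct access raises KeyError while the key is not yet populated.
# usage = response["usage"]

# After: .get() returns a default instead of raising.
usage = response.get("usage", {})
print(usage)  # -> {}
```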
blueoom
d895614d19 text_splitters: add request parameters for function HTMLHeaderTextSplitter.split_text… (#24178)
**Description:**

The `split_text_from_url` method of `HTMLHeaderTextSplitter` does not
include parameters like `timeout` when using `requests` to send a
request. Therefore, I suggest adding a `kwargs` parameter to the
function, which can be passed as arguments to `requests.get()`
internally, allowing control over the `get` request.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-07-15 16:43:56 +00:00
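A minimal sketch of the pass-through kwargs described above; the URL is illustrative:

```python
from langchain_text_splitters import HTMLHeaderTextSplitter

splitter = HTMLHeaderTextSplitter(headers_to_split_on=[("h1", "Header 1")])
# After this change, extra kwargs are forwarded to requests.get() internally.
docs = splitter.split_text_from_url("https://example.com", timeout=10)
```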
Bagatur
9d0c1d2dc9 docs: specify init_chat_model version (#24274) 2024-07-15 16:29:06 +00:00
MoraxMa
a7296bddc2 docs: updated Tongyi package (#24259)
* updated pip install package
2024-07-15 16:25:35 +00:00
Bagatur
c9473367b1 langchain[patch]: Release 0.2.8 (#24273) 2024-07-15 16:05:51 +00:00
JP-Ellis
f77659463a core[patch]: allow message utils to work with lcel (#23743)
The function `convert_to_messages` has had an expansion of the
arguments it can take:

1. Previously, it could only take a `Sequence` in order to iterate over
it. This has been broadened slightly to an `Iterable` (which should have
no other impact).
2. Support for `PromptValue` and `BaseChatPromptTemplate` has been
added. These are generated when combining messages using the overloaded
`+` operator.

Functions which rely on `convert_to_messages` (namely `filter_messages`,
`merge_message_runs` and `trim_messages`) have had the type of their
arguments similarly expanded.

Resolves #23706.

<!--
If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, ccurme, vbarda, hwchase17.
-->

---------

Signed-off-by: JP-Ellis <josh@jpellis.me>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-07-15 08:58:05 -07:00
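A minimal sketch of the broadened inputs; message contents are placeholders:

```python
from langchain_core.messages import AIMessage, HumanMessage, merge_message_runs
from langchain_core.prompt_values import ChatPromptValue

prompt_value = ChatPromptValue(
    messages=[HumanMessage("hi"), HumanMessage("there"), AIMessage("hello")]
)
# Utilities built on convert_to_messages now accept PromptValue inputs too;
# consecutive messages of the same type are merged into one.
print(merge_message_runs(prompt_value))
```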
Harold Martin
ccdaf14eff docs: Spell check fixes (#24217)
**Description:** Spell check fixes for docs, comments, and a couple of
strings. No code change e.g. variable names.
**Issue:** none
**Dependencies:** none
**Twitter handle:** hmartin
2024-07-15 15:51:43 +00:00
Leonid Ganeline
cacdf96f9c core docstrings tracers update (#24211)
Added missing docstrings. Formatted docstrings into a consistent form.
2024-07-15 11:37:09 -04:00
Leonid Ganeline
36ee083753 core: docstrings utils update (#24213)
Added missing docstrings. Formatted docstrings into a consistent form.
2024-07-15 11:36:00 -04:00
thehunmonkgroup
e8a21146d3 community[patch]: upgrade default model for ChatAnyscale (#24232)
Old default `meta-llama/Llama-2-7b-chat-hf` no longer supported.
2024-07-15 11:34:59 -04:00
Bagatur
a0958c0607 docs: more tool call -> tool message docs (#24271) 2024-07-15 07:55:07 -07:00
Bagatur
620b118c70 core[patch]: Release 0.2.19 (#24272) 2024-07-15 07:51:30 -07:00
ccurme
888fbc07b5 core[patch]: support passing args_schema through as_tool (#24269)
Note: this allows the schema to be passed in positionally.

```python
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.runnables import RunnableLambda


class Add(BaseModel):
    """Add two integers together."""

    a: int = Field(..., description="First integer")
    b: int = Field(..., description="Second integer")


def add(input: dict) -> int:
    return input["a"] + input["b"]


runnable = RunnableLambda(add)
as_tool = runnable.as_tool(Add)
as_tool.args_schema.schema()
```
```
{'title': 'Add',
 'description': 'Add two integers together.',
 'type': 'object',
 'properties': {'a': {'title': 'A',
   'description': 'First integer',
   'type': 'integer'},
  'b': {'title': 'B', 'description': 'Second integer', 'type': 'integer'}},
 'required': ['a', 'b']}
```
2024-07-15 07:51:05 -07:00
ccurme
ab2d7821a7 fireworks[patch]: use firefunction-v2 in standard tests (#24264) 2024-07-15 13:15:08 +00:00
ccurme
6fc7610b1c standard-tests[patch]: update test_bind_runnables_as_tools (#24241)
Reduce number of tool arguments from two to one.
2024-07-15 08:35:07 -04:00
Bagatur
0da5078cad langchain[minor]: Generic configurable model (#23419)
alternative to
[23244](https://github.com/langchain-ai/langchain/pull/23244). allows
you to use chat model declarative methods

![Screenshot 2024-06-25 at 1 07 10 PM](https://github.com/langchain-ai/langchain/assets/22008038/910d1694-9b7b-46bc-bc2e-3792df9321d6)
2024-07-15 01:11:01 +00:00
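A minimal sketch of the configurable-model feature, assuming langchain >= 0.2.8; the model name is an example:

```python
from langchain.chat_models import init_chat_model

# With no fixed model name, the returned model is configurable at runtime.
configurable_model = init_chat_model(temperature=0)

configurable_model.invoke(
    "what's your name",
    config={"configurable": {"model": "gpt-4o-mini"}},
)
```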
Bagatur
d0728b0ba0 core[patch]: add tool name to tool message (#24243)
Copying current ToolNode behavior
2024-07-15 00:42:40 +00:00
Bagatur
9224027e45 docs: tool artifacts how to (#24198) 2024-07-14 17:04:47 -07:00
Bagatur
5c3e2612da core[patch]: Release 0.2.18 (#24230) 2024-07-13 09:14:43 -07:00
Bagatur
65321bf975 core[patch]: fix ToolCall "type" when streaming (#24218) 2024-07-13 08:59:03 -07:00
Jacob Lee
2b7d1cdd2f docs[patch]: Update tool child run docs (#24160)
Documents #24143
2024-07-13 07:52:37 -07:00
Anush
a653b209ba qdrant: test new QdrantVectorStore (#24165)
## Description

This PR adds integration tests to follow up on #24164.

By default, the tests use an in-memory instance.

To run the full suite of tests, with both in-memory and Qdrant server:

```
$ docker run -p 6333:6333 qdrant/qdrant

$ make test

$ make integration_test
```

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-12 23:59:30 +00:00
Roman Solomatin
f071581aea openai[patch]: update openai params (#23691)
**Description:** Explicitly add parameters from openai API



- [X] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-12 16:53:33 -07:00
Leonid Ganeline
f0a7581b50 milvus: docstring (#23151)
Added missing docstrings. Formatted docstrings into the consistent
format used in the API Reference.

---------

Co-authored-by: Isaac Francisco <78627776+isahers1@users.noreply.github.com>
Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-12 23:25:31 +00:00
Christian D. Glissov
474b88326f langchain_qdrant: Added method "_asimilarity_search_with_relevance_scores" to Qdrant class (#23954)
I stumbled upon a bug that led to different similarity scores between
the async and sync similarity searches with relevance scores in Qdrant.
The reason is that `_asimilarity_search_with_relevance_scores` is
missing, which makes langchain_qdrant fall back to the vectorstore
base class's method, leading to drastically different results.

To illustrate the magnitude here are the results running an identical
search in a test vectorstore.

Output of asimilarity_search_with_relevance_scores:
[0.9902903374601824, 0.9472135924938804, 0.8535534011299859]

Output of similarity_search_with_relevance_scores:
[0.9805806749203648, 0.8944271849877607, 0.7071068022599718]

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-12 23:25:20 +00:00
Bagatur
bdc03997c9 standard-tests[patch]: check for ToolCall["type"] (#24209) 2024-07-12 16:17:34 -07:00
Nada Amin
3f1cf00d97 docs: Improve neo4j semantic templates (#23939)
I made some changes based on the issues I stumbled on while following
the README of neo4j-semantic-ollama.
I made the changes to the ollama variant, and can also port the relevant
ones to the layer variant once this is approved.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-12 23:08:25 +00:00
Nada Amin
6b47c7361e docs: fix code usage to use the ollama variant (#23937)
**Description:** the template neo4j-semantic-ollama uses an import from
the neo4j-semantic-layer template instead of its own.

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-12 23:07:42 +00:00
Anirudh31415926535
7677ceea60 docs: model parameter mandatory for cohere embedding and rerank (#23349)
The latest langchain-cohere SDK mandates passing the model parameter into
the Embeddings and Reranker inits.

This PR is to update the docs to reflect these changes.
2024-07-12 23:07:28 +00:00
Miroslav
aee55eda39 community: Skip login to HuggingFaceHub when token is not set (#21561)
Thank you for contributing to LangChain!

- [ ] **HuggingFaceEndpoint**: "Skip Login to HuggingFaceHub"
  - Where:  langchain, community, llm, huggingface_endpoint
 


- [ ] **PR message**: ***Delete this entire checklist*** and replace
with
- **Description:** Skip login to HuggingFace Hub when
`huggingfacehub_api_token` is not set. This is needed when using a custom
`endpoint_url` outside of HuggingFaceHub.
- **Issue:** the issue # it fixes
https://github.com/langchain-ai/langchain/issues/20342 and
https://github.com/langchain-ai/langchain/issues/19685
    - **Dependencies:** None


- [ ] **Add tests and docs**: 
  1. Tested with locally available TGI endpoint
  2.  Example Usage
```python
from langchain_community.llms import HuggingFaceEndpoint

llm = HuggingFaceEndpoint(
    endpoint_url='http://localhost:8080',
    server_kwargs={
        "headers": {"Content-Type": "application/json"}
    }
)
resp = llm.invoke("Tell me a joke")
print(resp)
```
 Also tested against HF Endpoints
 ```python
 from langchain_community.llms import HuggingFaceEndpoint
huggingfacehub_api_token = "hf_xyz"
repo_id = "mistralai/Mistral-7B-Instruct-v0.2"
llm = HuggingFaceEndpoint(
    huggingfacehub_api_token=huggingfacehub_api_token,
    repo_id=repo_id,
)
resp = llm.invoke("Tell me a joke")
print(resp)
 ```
Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional
ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in
langchain.

If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, hwchase17.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-12 22:10:32 +00:00
Anush
d09dda5a08 qdrant: Bump patch version (#24168)
# Description

To release a new version of `langchain-qdrant` after #24165 and #24166.
2024-07-12 14:48:50 -07:00
Bagatur
12950cc602 standard-tests[patch]: improve runnable tool description (#24210) 2024-07-12 21:33:56 +00:00
Erick Friis
e8ee781a42 ibm: move to external repo (#24208) 2024-07-12 21:14:24 +00:00
Bagatur
02e71cebed together[patch]: Release 0.1.4 (#24205) 2024-07-12 13:59:58 -07:00
Bagatur
259d4d2029 anthropic[patch]: Release 0.1.20 (#24204) 2024-07-12 13:59:15 -07:00
Bagatur
3aed74a6fc fireworks[patch]: Release 0.1.5 (#24203) 2024-07-12 13:58:58 -07:00
Bagatur
13b0d7ec8f openai[patch]: Release 0.1.16 (#24202) 2024-07-12 13:58:39 -07:00
Bagatur
71cd6e6feb groq[patch]: Release 0.1.7 (#24201) 2024-07-12 13:58:19 -07:00
Bagatur
99054e19eb mistralai[patch]: Release 0.1.10 (#24200) 2024-07-12 13:57:58 -07:00
Bagatur
7a1321e2f9 ibm[patch]: Release 0.1.10 (#24199) 2024-07-12 13:57:38 -07:00
Bagatur
cb5031f22f integrations[patch]: require core >=0.2.17 (#24207) 2024-07-12 20:54:01 +00:00
Nithish Raghunandanan
f1618ec540 couchbase: Add standard and semantic caches (#23607)
Thank you for contributing to LangChain!

**Description:** Add support for caching (standard + semantic) LLM
responses using Couchbase


- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional
ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in
langchain.

If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, ccurme, vbarda, hwchase17.

---------

Co-authored-by: Nithish Raghunandanan <nithishr@users.noreply.github.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-12 20:30:03 +00:00
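A minimal sketch of the standard cache added above; connection details and argument names are assumed from the PR description, so see the langchain-couchbase docs for specifics:

```python
from datetime import timedelta

from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions
from langchain_core.globals import set_llm_cache
from langchain_couchbase.cache import CouchbaseCache

# Placeholder credentials; point these at a real Couchbase cluster.
auth = PasswordAuthenticator("Administrator", "password")
cluster = Cluster("couchbase://localhost", ClusterOptions(auth))
cluster.wait_until_ready(timedelta(seconds=5))

set_llm_cache(
    CouchbaseCache(
        cluster=cluster,
        bucket_name="langchain",
        scope_name="_default",
        collection_name="llm_cache",
    )
)
```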
Eugene Yurtsev
7e7247537b Update libs/community/langchain_community/document_loaders/chromium.py
Co-authored-by: Christophe Bornet <cbornet@hotmail.com>
2024-04-03 11:10:45 -04:00
William Fu-Hinthorn
94cc313886 fixup 2024-04-02 18:40:11 -07:00
William Fu-Hinthorn
8b34a49b40 beh 2024-04-02 18:36:45 -07:00
William Fu-Hinthorn
608a8705c3 Fix async chromium alazy_load 2024-04-02 18:35:32 -07:00
220 changed files with 9965 additions and 5211 deletions

View File

@@ -5,10 +5,10 @@ services:
dockerfile: libs/langchain/dev.Dockerfile
context: ..
volumes:
# Update this to wherever you want VS Code to mount the folder of your project
- ..:/workspaces/langchain:cached
networks:
- langchain-network
# environment:
# MONGO_ROOT_USERNAME: root
# MONGO_ROOT_PASSWORD: example123
@@ -28,5 +28,3 @@ services:
networks:
langchain-network:
driver: bridge

View File

@@ -6,6 +6,7 @@ import sys
import tomllib
from collections import defaultdict
from typing import Dict, List, Set
from pathlib import Path
LANGCHAIN_DIRS = [
@@ -26,17 +27,48 @@ def all_package_dirs() -> Set[str]:
def dependents_graph() -> dict:
"""
Construct a mapping of package -> dependents, such that we can
run tests on all dependents of a package when a change is made.
"""
dependents = defaultdict(set)
for path in glob.glob("./libs/**/pyproject.toml", recursive=True):
if "template" in path:
continue
# load regular and test deps from pyproject.toml
with open(path, "rb") as f:
pyproject = tomllib.load(f)["tool"]["poetry"]
pkg_dir = "libs" + "/".join(path.split("libs")[1].split("/")[:-1])
for dep in pyproject["dependencies"]:
for dep in [
*pyproject["dependencies"].keys(),
*pyproject["group"]["test"]["dependencies"].keys(),
]:
if "langchain" in dep:
dependents[dep].add(pkg_dir)
continue
# load extended deps from extended_testing_deps.txt
package_path = Path(path).parent
extended_requirement_path = package_path / "extended_testing_deps.txt"
if extended_requirement_path.exists():
with open(extended_requirement_path, "r") as f:
extended_deps = f.read().splitlines()
for depline in extended_deps:
if depline.startswith("-e "):
# editable dependency
assert depline.startswith(
"-e ../partners/"
), "Extended test deps should only editable install partner packages"
partner = depline.split("partners/")[1]
dep = f"langchain-{partner}"
else:
dep = depline.split("==")[0]
if "langchain" in dep:
dependents[dep].add(pkg_dir)
return dependents

View File

@@ -0,0 +1,35 @@
import sys
import tomllib
if __name__ == "__main__":
# Get the TOML file path from the command line argument
toml_file = sys.argv[1]
# read toml file
with open(toml_file, "rb") as file:
toml_data = tomllib.load(file)
# see if we're releasing an rc
version = toml_data["tool"]["poetry"]["version"]
releasing_rc = "rc" in version
# if not, iterate through dependencies and make sure none allow prereleases
if not releasing_rc:
dependencies = toml_data["tool"]["poetry"]["dependencies"]
for lib in dependencies:
dep_version = dependencies[lib]
dep_version_string = (
dep_version["version"] if isinstance(dep_version, dict) else dep_version
)
if "rc" in dep_version_string:
raise ValueError(
f"Dependency {lib} has a prerelease version. Please remove this."
)
if isinstance(dep_version, dict) and dep_version.get(
"allow-prereleases", False
):
raise ValueError(
f"Dependency {lib} has allow-prereleases set to true. Please remove this."
)

View File

@@ -221,6 +221,11 @@ jobs:
run: make tests
working-directory: ${{ inputs.working-directory }}
- name: Check for prerelease versions
working-directory: ${{ inputs.working-directory }}
run: |
poetry run python $GITHUB_WORKSPACE/.github/scripts/check_prerelease_dependencies.py pyproject.toml
- name: Get minimum versions
working-directory: ${{ inputs.working-directory }}
id: min-version

View File

@@ -57,4 +57,5 @@ Notebook | Description
[two_agent_debate_tools.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/two_agent_debate_tools.ipynb) | Simulate multi-agent dialogues where the agents can utilize various tools.
[two_player_dnd.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/two_player_dnd.ipynb) | Simulate a two-player dungeons & dragons game, where a dialogue simulator class is used to coordinate the dialogue between the protagonist and the dungeon master.
[wikibase_agent.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/wikibase_agent.ipynb) | Create a simple wikibase agent that utilizes sparql generation, with testing done on http://wikidata.org.
[oracleai_demo.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/oracleai_demo.ipynb) | This guide outlines how to utilize Oracle AI Vector Search alongside Langchain for an end-to-end RAG pipeline, providing step-by-step examples. The process includes loading documents from various sources using OracleDocLoader, summarizing them either within or outside the database with OracleSummary, and generating embeddings similarly through OracleEmbeddings. It also covers chunking documents according to specific requirements using Advanced Oracle Capabilities from OracleTextSplitter, and finally, storing and indexing these documents in a Vector Store for querying with OracleVS.
[rag-locally-on-intel-cpu.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/rag-locally-on-intel-cpu.ipynb) | Perform Retrieval-Augmented Generation (RAG) on locally downloaded open-source models using langchain and open-source tools, and execute it on an Intel Xeon CPU. We show an example of how to apply RAG to the Llama 2 model and enable it to answer queries related to Intel's Q1 2024 earnings release.

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,756 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "10f50955-be55-422f-8c62-3a32f8cf02ed",
"metadata": {},
"source": [
"# RAG application running locally on Intel Xeon CPU using langchain and open-source models"
]
},
{
"cell_type": "markdown",
"id": "48113be6-44bb-4aac-aed3-76a1365b9561",
"metadata": {},
"source": [
"Author - Pratool Bharti (pratool.bharti@intel.com)"
]
},
{
"cell_type": "markdown",
"id": "8b10b54b-1572-4ea1-9c1e-1d29fcc3dcd9",
"metadata": {},
"source": [
"In this cookbook, we use langchain tools and open source models to execute locally on CPU. This notebook has been validated to run on Intel Xeon 8480+ CPU. Here we implement a RAG pipeline for Llama2 model to answer questions about Intel Q1 2024 earnings release."
]
},
{
"cell_type": "markdown",
"id": "acadbcec-3468-4926-8ce5-03b678041c0a",
"metadata": {},
"source": [
"**Create a conda or virtualenv environment with python >=3.10 and install following libraries**\n",
"<br>\n",
"\n",
"`pip install --upgrade langchain langchain-community langchainhub langchain-chroma bs4 gpt4all pypdf pysqlite3-binary` <br>\n",
"`pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu`"
]
},
{
"cell_type": "markdown",
"id": "84c392c8-700a-42ec-8e94-806597f22e43",
"metadata": {},
"source": [
"**Load pysqlite3 in sys modules since ChromaDB requires sqlite3.**"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "145cd491-b388-4ea7-bdc8-2f4995cac6fd",
"metadata": {},
"outputs": [],
"source": [
"__import__(\"pysqlite3\")\n",
"import sys\n",
"\n",
"sys.modules[\"sqlite3\"] = sys.modules.pop(\"pysqlite3\")"
]
},
{
"cell_type": "markdown",
"id": "14dde7e2-b236-49b9-b3a0-08c06410418c",
"metadata": {},
"source": [
"**Import essential components from langchain to load and split data**"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "887643ba-249e-48d6-9aa7-d25087e8dfbf",
"metadata": {},
"outputs": [],
"source": [
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"from langchain_community.document_loaders import PyPDFLoader"
]
},
{
"cell_type": "markdown",
"id": "922c0eba-8736-4de5-bd2f-3d0f00b16e43",
"metadata": {},
"source": [
"**Download Intel Q1 2024 earnings release**"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "2d6a2419-5338-4188-8615-a40a65ff8019",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"--2024-07-15 15:04:43-- https://d1io3yog0oux5.cloudfront.net/_11d435a500963f99155ee058df09f574/intel/db/887/9014/earnings_release/Q1+24_EarningsRelease_FINAL.pdf\n",
"Resolving proxy-dmz.intel.com (proxy-dmz.intel.com)... 10.7.211.16\n",
"Connecting to proxy-dmz.intel.com (proxy-dmz.intel.com)|10.7.211.16|:912... connected.\n",
"Proxy request sent, awaiting response... 200 OK\n",
"Length: 133510 (130K) [application/pdf]\n",
"Saving to: intel_q1_2024_earnings.pdf\n",
"\n",
"intel_q1_2024_earni 100%[===================>] 130.38K --.-KB/s in 0.005s \n",
"\n",
"2024-07-15 15:04:44 (24.6 MB/s) - intel_q1_2024_earnings.pdf saved [133510/133510]\n",
"\n"
]
}
],
"source": [
"!wget 'https://d1io3yog0oux5.cloudfront.net/_11d435a500963f99155ee058df09f574/intel/db/887/9014/earnings_release/Q1+24_EarningsRelease_FINAL.pdf' -O intel_q1_2024_earnings.pdf"
]
},
{
"cell_type": "markdown",
"id": "e3612627-e105-453d-8a50-bbd6e39dedb5",
"metadata": {},
"source": [
"**Loading earning release pdf document through PyPDFLoader**"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "cac6278e-ebad-4224-a062-bf6daca24cb0",
"metadata": {},
"outputs": [],
"source": [
"loader = PyPDFLoader(\"intel_q1_2024_earnings.pdf\")\n",
"data = loader.load()"
]
},
{
"cell_type": "markdown",
"id": "a7dca43b-1c62-41df-90c7-6ed2904f823d",
"metadata": {},
"source": [
"**Splitting entire document in several chunks with each chunk size is 500 tokens**"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "4486adbe-0d0e-4685-8c08-c1774ed6e993",
"metadata": {},
"outputs": [],
"source": [
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)\n",
"all_splits = text_splitter.split_documents(data)"
]
},
{
"cell_type": "markdown",
"id": "af142346-e793-4a52-9a56-63e3be416b3d",
"metadata": {},
"source": [
"**Looking at the first split of the document**"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "e4240fd1-898e-4bfc-a377-02c9bc25b56e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(metadata={'source': 'intel_q1_2024_earnings.pdf', 'page': 0}, page_content='Intel Corporation\\n2200 Mission College Blvd.\\nSanta Clara, CA 95054-1549\\n \\nNews Release\\n Intel Reports First -Quarter 2024 Financial Results\\nNEWS SUMMARY\\n▪First-quarter revenue of $12.7 billion , up 9% year over year (YoY).\\n▪First-quarter GAAP earnings (loss) per share (EPS) attributable to Intel was $(0.09) ; non-GAAP EPS \\nattributable to Intel was $0.18 .')"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"all_splits[0]"
]
},
{
"cell_type": "markdown",
"id": "b88d2632-7c1b-49ef-a691-c0eb67d23e6a",
"metadata": {},
"source": [
"**One of the major step in RAG is to convert each split of document into embeddings and store in a vector database such that searching relevant documents are efficient.** <br>\n",
"**For that, importing Chroma vector database from langchain. Also, importing open source GPT4All for embedding models**"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "9ff99dd7-9d47-4239-ba0a-d775792334ba",
"metadata": {},
"outputs": [],
"source": [
"from langchain_chroma import Chroma\n",
"from langchain_community.embeddings import GPT4AllEmbeddings"
]
},
{
"cell_type": "markdown",
"id": "b5d1f4dd-dd8d-4a20-95d1-2dbdd204375a",
"metadata": {},
"source": [
"**In next step, we will download one of the most popular embedding model \"all-MiniLM-L6-v2\". Find more details of the model at this link https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2**"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "05db3494-5d8e-4a13-9941-26330a86f5e5",
"metadata": {},
"outputs": [],
"source": [
"model_name = \"all-MiniLM-L6-v2.gguf2.f16.gguf\"\n",
"gpt4all_kwargs = {\"allow_download\": \"True\"}\n",
"embeddings = GPT4AllEmbeddings(model_name=model_name, gpt4all_kwargs=gpt4all_kwargs)"
]
},
{
"cell_type": "markdown",
"id": "4e53999e-1983-46ac-8039-2783e194c3ae",
"metadata": {},
"source": [
"**Store all the embeddings in the Chroma database**"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "0922951a-9ddf-4761-973d-8e9a86f61284",
"metadata": {},
"outputs": [],
"source": [
"vectorstore = Chroma.from_documents(documents=all_splits, embedding=embeddings)"
]
},
{
"cell_type": "markdown",
"id": "29f94fa0-6c75-4a65-a1a3-debc75422479",
"metadata": {},
"source": [
"**Now, let's find relevant splits from the documents related to the question**"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "88c8152d-ec7a-4f0b-9d86-877789407537",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"4\n"
]
}
],
"source": [
"question = \"What is Intel CCG revenue in Q1 2024\"\n",
"docs = vectorstore.similarity_search(question)\n",
"print(len(docs))"
]
},
{
"cell_type": "markdown",
"id": "53330c6b-cb0f-43f9-b379-2e57ac1e5335",
"metadata": {},
"source": [
"**Look at the first retrieved document from the vector database**"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "43a6d94f-b5c4-47b0-a353-2db4c3d24d9c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(metadata={'page': 1, 'source': 'intel_q1_2024_earnings.pdf'}, page_content='Client Computing Group (CCG) $7.5 billion up31%\\nData Center and AI (DCAI) $3.0 billion up5%\\nNetwork and Edge (NEX) $1.4 billion down 8%\\nTotal Intel Products revenue $11.9 billion up17%\\nIntel Foundry $4.4 billion down 10%\\nAll other:\\nAltera $342 million down 58%\\nMobileye $239 million down 48%\\nOther $194 million up17%\\nTotal all other revenue $775 million down 46%\\nIntersegment eliminations $(4.4) billion\\nTotal net revenue $12.7 billion up9%\\nIntel Products Highlights')"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"docs[0]"
]
},
{
"cell_type": "markdown",
"id": "64ba074f-4b36-442e-b7e2-b26d6e2815c3",
"metadata": {},
"source": [
"**Download Lllama-2 model from Huggingface and store locally** <br>\n",
"**You can download different quantization variant of Lllama-2 model from the link below. We are using Q8 version here (7.16GB).** <br>\n",
"https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGUF"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c8dd0811-6f43-4bc6-b854-2ab377639c9a",
"metadata": {},
"outputs": [],
"source": [
"!huggingface-cli download TheBloke/Llama-2-7b-Chat-GGUF llama-2-7b-chat.Q8_0.gguf --local-dir . --local-dir-use-symlinks False"
]
},
{
"cell_type": "markdown",
"id": "3895b1f5-f51d-4539-abf0-af33d7ca48ea",
"metadata": {},
"source": [
"**Import langchain components required to load downloaded LLMs model**"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "fb087088-aa62-44c0-8356-061e9b9f1186",
"metadata": {},
"outputs": [],
"source": [
"from langchain.callbacks.manager import CallbackManager\n",
"from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler\n",
"from langchain_community.llms import LlamaCpp"
]
},
{
"cell_type": "markdown",
"id": "5a8a111e-2614-4b70-b034-85cd3e7304cb",
"metadata": {},
"source": [
"**Loading the local Lllama-2 model using Llama-cpp library**"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "fb917da2-c0d7-4995-b56d-26254276e0da",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"llama_model_loader: loaded meta data with 19 key-value pairs and 291 tensors from llama-2-7b-chat.Q8_0.gguf (version GGUF V2)\n",
"llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.\n",
"llama_model_loader: - kv 0: general.architecture str = llama\n",
"llama_model_loader: - kv 1: general.name str = LLaMA v2\n",
"llama_model_loader: - kv 2: llama.context_length u32 = 4096\n",
"llama_model_loader: - kv 3: llama.embedding_length u32 = 4096\n",
"llama_model_loader: - kv 4: llama.block_count u32 = 32\n",
"llama_model_loader: - kv 5: llama.feed_forward_length u32 = 11008\n",
"llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128\n",
"llama_model_loader: - kv 7: llama.attention.head_count u32 = 32\n",
"llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 32\n",
"llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000001\n",
"llama_model_loader: - kv 10: general.file_type u32 = 7\n",
"llama_model_loader: - kv 11: tokenizer.ggml.model str = llama\n",
"llama_model_loader: - kv 12: tokenizer.ggml.tokens arr[str,32000] = [\"<unk>\", \"<s>\", \"</s>\", \"<0x00>\", \"<...\n",
"llama_model_loader: - kv 13: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...\n",
"llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...\n",
"llama_model_loader: - kv 15: tokenizer.ggml.bos_token_id u32 = 1\n",
"llama_model_loader: - kv 16: tokenizer.ggml.eos_token_id u32 = 2\n",
"llama_model_loader: - kv 17: tokenizer.ggml.unknown_token_id u32 = 0\n",
"llama_model_loader: - kv 18: general.quantization_version u32 = 2\n",
"llama_model_loader: - type f32: 65 tensors\n",
"llama_model_loader: - type q8_0: 226 tensors\n",
"llm_load_vocab: special tokens cache size = 259\n",
"llm_load_vocab: token to piece cache size = 0.1684 MB\n",
"llm_load_print_meta: format = GGUF V2\n",
"llm_load_print_meta: arch = llama\n",
"llm_load_print_meta: vocab type = SPM\n",
"llm_load_print_meta: n_vocab = 32000\n",
"llm_load_print_meta: n_merges = 0\n",
"llm_load_print_meta: vocab_only = 0\n",
"llm_load_print_meta: n_ctx_train = 4096\n",
"llm_load_print_meta: n_embd = 4096\n",
"llm_load_print_meta: n_layer = 32\n",
"llm_load_print_meta: n_head = 32\n",
"llm_load_print_meta: n_head_kv = 32\n",
"llm_load_print_meta: n_rot = 128\n",
"llm_load_print_meta: n_swa = 0\n",
"llm_load_print_meta: n_embd_head_k = 128\n",
"llm_load_print_meta: n_embd_head_v = 128\n",
"llm_load_print_meta: n_gqa = 1\n",
"llm_load_print_meta: n_embd_k_gqa = 4096\n",
"llm_load_print_meta: n_embd_v_gqa = 4096\n",
"llm_load_print_meta: f_norm_eps = 0.0e+00\n",
"llm_load_print_meta: f_norm_rms_eps = 1.0e-06\n",
"llm_load_print_meta: f_clamp_kqv = 0.0e+00\n",
"llm_load_print_meta: f_max_alibi_bias = 0.0e+00\n",
"llm_load_print_meta: f_logit_scale = 0.0e+00\n",
"llm_load_print_meta: n_ff = 11008\n",
"llm_load_print_meta: n_expert = 0\n",
"llm_load_print_meta: n_expert_used = 0\n",
"llm_load_print_meta: causal attn = 1\n",
"llm_load_print_meta: pooling type = 0\n",
"llm_load_print_meta: rope type = 0\n",
"llm_load_print_meta: rope scaling = linear\n",
"llm_load_print_meta: freq_base_train = 10000.0\n",
"llm_load_print_meta: freq_scale_train = 1\n",
"llm_load_print_meta: n_ctx_orig_yarn = 4096\n",
"llm_load_print_meta: rope_finetuned = unknown\n",
"llm_load_print_meta: ssm_d_conv = 0\n",
"llm_load_print_meta: ssm_d_inner = 0\n",
"llm_load_print_meta: ssm_d_state = 0\n",
"llm_load_print_meta: ssm_dt_rank = 0\n",
"llm_load_print_meta: model type = 7B\n",
"llm_load_print_meta: model ftype = Q8_0\n",
"llm_load_print_meta: model params = 6.74 B\n",
"llm_load_print_meta: model size = 6.67 GiB (8.50 BPW) \n",
"llm_load_print_meta: general.name = LLaMA v2\n",
"llm_load_print_meta: BOS token = 1 '<s>'\n",
"llm_load_print_meta: EOS token = 2 '</s>'\n",
"llm_load_print_meta: UNK token = 0 '<unk>'\n",
"llm_load_print_meta: LF token = 13 '<0x0A>'\n",
"llm_load_print_meta: max token length = 48\n",
"llm_load_tensors: ggml ctx size = 0.14 MiB\n",
"llm_load_tensors: CPU buffer size = 6828.64 MiB\n",
"...................................................................................................\n",
"llama_new_context_with_model: n_ctx = 2048\n",
"llama_new_context_with_model: n_batch = 512\n",
"llama_new_context_with_model: n_ubatch = 512\n",
"llama_new_context_with_model: flash_attn = 0\n",
"llama_new_context_with_model: freq_base = 10000.0\n",
"llama_new_context_with_model: freq_scale = 1\n",
"llama_kv_cache_init: CPU KV buffer size = 1024.00 MiB\n",
"llama_new_context_with_model: KV self size = 1024.00 MiB, K (f16): 512.00 MiB, V (f16): 512.00 MiB\n",
"llama_new_context_with_model: CPU output buffer size = 0.12 MiB\n",
"llama_new_context_with_model: CPU compute buffer size = 164.01 MiB\n",
"llama_new_context_with_model: graph nodes = 1030\n",
"llama_new_context_with_model: graph splits = 1\n",
"AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 0 | \n",
"Model metadata: {'tokenizer.ggml.unknown_token_id': '0', 'tokenizer.ggml.eos_token_id': '2', 'general.architecture': 'llama', 'llama.context_length': '4096', 'general.name': 'LLaMA v2', 'llama.embedding_length': '4096', 'llama.feed_forward_length': '11008', 'llama.attention.layer_norm_rms_epsilon': '0.000001', 'llama.rope.dimension_count': '128', 'llama.attention.head_count': '32', 'tokenizer.ggml.bos_token_id': '1', 'llama.block_count': '32', 'llama.attention.head_count_kv': '32', 'general.quantization_version': '2', 'tokenizer.ggml.model': 'llama', 'general.file_type': '7'}\n",
"Using fallback chat format: llama-2\n"
]
}
],
"source": [
"llm = LlamaCpp(\n",
" model_path=\"llama-2-7b-chat.Q8_0.gguf\",\n",
" n_gpu_layers=-1,\n",
" n_batch=512,\n",
" n_ctx=2048,\n",
" f16_kv=True, # MUST set to True, otherwise you will run into problem after a couple of calls\n",
" callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),\n",
" verbose=True,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "43e06f56-ef97-451b-87d9-8465ea442aed",
"metadata": {},
"source": [
"**Now let's ask the same question to Llama model without showing them the earnings release.**"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "1033dd82-5532-437d-a548-27695e109589",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"?\n",
"(NASDAQ:INTC)\n",
"Intel's CCG (Client Computing Group) revenue for Q1 2024 was $9.6 billion, a decrease of 35% from the previous quarter and a decrease of 42% from the same period last year."
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n",
"llama_print_timings: load time = 131.20 ms\n",
"llama_print_timings: sample time = 16.05 ms / 68 runs ( 0.24 ms per token, 4236.76 tokens per second)\n",
"llama_print_timings: prompt eval time = 131.14 ms / 16 tokens ( 8.20 ms per token, 122.01 tokens per second)\n",
"llama_print_timings: eval time = 3225.00 ms / 67 runs ( 48.13 ms per token, 20.78 tokens per second)\n",
"llama_print_timings: total time = 3466.40 ms / 83 tokens\n"
]
},
{
"data": {
"text/plain": [
"\"?\\n(NASDAQ:INTC)\\nIntel's CCG (Client Computing Group) revenue for Q1 2024 was $9.6 billion, a decrease of 35% from the previous quarter and a decrease of 42% from the same period last year.\""
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm.invoke(question)"
]
},
{
"cell_type": "markdown",
"id": "75f5cb10-746f-4e37-9386-b85a4d2b84ef",
"metadata": {},
"source": [
"**As you can see, model is giving wrong information. Correct asnwer is CCG revenue in Q1 2024 is $7.5B. Now let's apply RAG using the earning release document**"
]
},
{
"cell_type": "markdown",
"id": "0f4150ec-5692-4756-b11a-22feb7ab88ff",
"metadata": {},
"source": [
"**in RAG, we modify the input prompt by adding relevent documents with the question. Here, we use one of the popular RAG prompt**"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "226c14b0-f43e-4a1f-a1e4-04731d467ec4",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['context', 'question'], template=\"You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.\\nQuestion: {question} \\nContext: {context} \\nAnswer:\"))]"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain import hub\n",
"\n",
"rag_prompt = hub.pull(\"rlm/rag-prompt\")\n",
"rag_prompt.messages"
]
},
{
"cell_type": "markdown",
"id": "77deb6a0-0950-450a-916a-f2a029676c20",
"metadata": {},
"source": [
"**Appending all retreived documents in a single document**"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "2dbc3327-6ef3-4c1f-8797-0c71964b0921",
"metadata": {},
"outputs": [],
"source": [
"def format_docs(docs):\n",
" return \"\\n\\n\".join(doc.page_content for doc in docs)"
]
},
{
"cell_type": "markdown",
"id": "2e2d9f18-49d0-43a3-bea8-78746ffa86b7",
"metadata": {},
"source": [
"**The last step is to create a chain using langchain tool that will create an e2e pipeline. It will take question and context as an input.**"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "427379c2-51ff-4e0f-8278-a45221363299",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnablePassthrough, RunnablePick\n",
"\n",
"# Chain\n",
"chain = (\n",
" RunnablePassthrough.assign(context=RunnablePick(\"context\") | format_docs)\n",
" | rag_prompt\n",
" | llm\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "095d6280-c949-4d00-8e32-8895a82d245f",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Llama.generate: prefix-match hit\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
" Based on the provided context, Intel CCG revenue in Q1 2024 was $7.5 billion up 31%."
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n",
"llama_print_timings: load time = 131.20 ms\n",
"llama_print_timings: sample time = 7.74 ms / 31 runs ( 0.25 ms per token, 4004.13 tokens per second)\n",
"llama_print_timings: prompt eval time = 2529.41 ms / 674 tokens ( 3.75 ms per token, 266.46 tokens per second)\n",
"llama_print_timings: eval time = 1542.94 ms / 30 runs ( 51.43 ms per token, 19.44 tokens per second)\n",
"llama_print_timings: total time = 4123.68 ms / 704 tokens\n"
]
},
{
"data": {
"text/plain": [
"' Based on the provided context, Intel CCG revenue in Q1 2024 was $7.5 billion up 31%.'"
]
},
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke({\"context\": docs, \"question\": question})"
]
},
{
"cell_type": "markdown",
"id": "638364b2-6bd2-4471-9961-d3a1d1b9d4ee",
"metadata": {},
"source": [
"**Now we see the results are correct as it is mentioned in earnings release.** <br>\n",
"**To further automate, we will create a chain that will take input as question and retriever so that we don't need to retrieve documents seperately**"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "4654e5b7-635f-4767-8b31-4c430164cdd5",
"metadata": {},
"outputs": [],
"source": [
"retriever = vectorstore.as_retriever()\n",
"qa_chain = (\n",
" {\"context\": retriever | format_docs, \"question\": RunnablePassthrough()}\n",
" | rag_prompt\n",
" | llm\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "markdown",
"id": "0979f393-fd0a-4e82-b844-68371c6ad68f",
"metadata": {},
"source": [
"**Now we only need to pass the question to the chain and it will fetch the contexts directly from the vector database to generate the answer**\n",
"<br>\n",
"**Let's try with another question**"
]
},
{
"cell_type": "code",
"execution_count": 26,
"id": "3ea07b82-e6ec-4084-85f4-191373530172",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Llama.generate: prefix-match hit\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
" According to the provided context, Intel DCAI revenue in Q1 2024 was $3.0 billion up 5%."
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n",
"llama_print_timings: load time = 131.20 ms\n",
"llama_print_timings: sample time = 6.28 ms / 31 runs ( 0.20 ms per token, 4937.88 tokens per second)\n",
"llama_print_timings: prompt eval time = 2681.93 ms / 730 tokens ( 3.67 ms per token, 272.19 tokens per second)\n",
"llama_print_timings: eval time = 1471.07 ms / 30 runs ( 49.04 ms per token, 20.39 tokens per second)\n",
"llama_print_timings: total time = 4206.77 ms / 760 tokens\n"
]
},
{
"data": {
"text/plain": [
"' According to the provided context, Intel DCAI revenue in Q1 2024 was $3.0 billion up 5%.'"
]
},
"execution_count": 26,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"qa_chain.invoke(\"what is Intel DCAI revenue in Q1 2024?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9407f2a0-4a35-4315-8e96-02fcb80f210c",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "rag-on-intel",
"language": "python",
"name": "rag-on-intel"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -13,7 +13,7 @@ OUTPUT_NEW_DOCS_DIR = $(OUTPUT_NEW_DIR)/docs
PYTHON = .venv/bin/python
PARTNER_DEPS_LIST := $(shell find ../libs/partners -mindepth 1 -maxdepth 1 -type d -exec test -e "{}/pyproject.toml" \; -print | grep -vE "airbyte|ibm" | tr '\n' ' ')
PARTNER_DEPS_LIST := $(shell find ../libs/partners -mindepth 1 -maxdepth 1 -type d -exec test -e "{}/pyproject.toml" \; -print | grep -vE "airbyte|ibm|couchbase" | tr '\n' ' ')
PORT ?= 3001

View File

@@ -178,3 +178,10 @@ autosummary_generate = True
html_copy_source = False
html_show_sourcelink = False
# Set canonical URL from the Read the Docs Domain
html_baseurl = os.environ.get("READTHEDOCS_CANONICAL_URL", "")
# Tell Jinja2 templates the build is running on Read the Docs
if os.environ.get("READTHEDOCS", "") == "True":
html_context["READTHEDOCS"] = True

View File

@@ -78,7 +78,7 @@ def _load_module_members(module_path: str, namespace: str) -> ModuleMembers:
continue
if inspect.isclass(type_):
# The clasification of the class is used to select a template
# The type of the class is used to select a template
# for the object when rendering the documentation.
# See `templates` directory for defined templates.
# This is a hacky solution to distinguish between different

View File

@@ -55,6 +55,7 @@ A developer platform that lets you debug, test, evaluate, and monitor LLM applic
dark: useBaseUrl('/svg/langchain_stack_062024_dark.svg'),
}}
title="LangChain Framework Overview"
style={{ width: "100%" }}
/>
## LangChain Expression Language (LCEL)
@@ -521,7 +522,7 @@ Generally, when designing tools to be used by a chat model or LLM, it is importa
For specifics on how to use tools, see the [relevant how-to guides here](/docs/how_to/#tools).
To use an existing pre-built tool, see [here](docs/integrations/tools/) for a list of pre-built tools.
To use an existing pre-built tool, see [here](/docs/integrations/tools/) for a list of pre-built tools.
### Toolkits
@@ -821,7 +822,7 @@ We recommend this method as a starting point when working with structured output
- If multiple underlying techniques are supported, you can supply a `method` parameter to
[toggle which one is used](/docs/how_to/structured_output/#advanced-specifying-the-method-for-structuring-outputs).
You may want or need to use other techiniques if:
You may want or need to use other techniques if:
- The chat model you are using does not support tool calling.
- You are working with very complex schemas and the model is having trouble generating outputs that conform.

View File

@@ -0,0 +1,342 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to dispatch custom callback events\n",
"\n",
":::info Prerequisites\n",
"\n",
"This guide assumes familiarity with the following concepts:\n",
"\n",
"- [Callbacks](/docs/concepts/#callbacks)\n",
"- [Custom callback handlers](/docs/how_to/custom_callbacks)\n",
"- [Astream Events API](/docs/concepts/#astream_events) the `astream_events` method will surface custom callback events.\n",
":::\n",
"\n",
"In some situations, you may want to dipsatch a custom callback event from within a [Runnable](/docs/concepts/#runnable-interface) so it can be surfaced\n",
"in a custom callback handler or via the [Astream Events API](/docs/concepts/#astream_events).\n",
"\n",
"For example, if you have a long running tool with multiple steps, you can dispatch custom events between the steps and use these custom events to monitor progress.\n",
"You could also surface these custom events to an end user of your application to show them how the current task is progressing.\n",
"\n",
"To dispatch a custom event you need to decide on two attributes for the event: the `name` and the `data`.\n",
"\n",
"| Attribute | Type | Description |\n",
"|-----------|------|----------------------------------------------------------------------------------------------------------|\n",
"| name | str | A user defined name for the event. |\n",
"| data | Any | The data associated with the event. This can be anything, though we suggest making it JSON serializable. |\n",
"\n",
"\n",
":::{.callout-important}\n",
"* Dispatching custom callback events requires `langchain-core>=0.2.15`.\n",
"* Custom callback events can only be dispatched from within an existing `Runnable`.\n",
"* If using `astream_events`, you must use `version='v2'` to see custom events.\n",
"* Sending or rendering custom callbacks events in LangSmith is not yet supported.\n",
":::\n",
"\n",
"\n",
":::caution COMPATIBILITY\n",
"LangChain cannot automatically propagate configuration, including callbacks necessary for astream_events(), to child runnables if you are running async code in python<=3.10. This is a common reason why you may fail to see events being emitted from custom runnables or tools.\n",
"\n",
"If you are running python<=3.10, you will need to manually propagate the `RunnableConfig` object to the child runnable in async environments. For an example of how to manually propagate the config, see the implementation of the `bar` RunnableLambda below.\n",
"\n",
"If you are running python>=3.11, the `RunnableConfig` will automatically propagate to child runnables in async environment. However, it is still a good idea to propagate the `RunnableConfig` manually if your code may run in other Python versions.\n",
":::"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# | output: false\n",
"# | echo: false\n",
"\n",
"%pip install -qU langchain-core"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Astream Events API\n",
"\n",
"The most useful way to consume custom events is via the [Astream Events API](/docs/concepts/#astream_events).\n",
"\n",
"We can use the `async` `adispatch_custom_event` API to emit custom events in an async setting. \n",
"\n",
"\n",
":::{.callout-important}\n",
"\n",
"To see custom events via the astream events API, you need to use the newer `v2` API of `astream_events`.\n",
":::"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'event': 'on_chain_start', 'data': {'input': 'hello world'}, 'name': 'foo', 'tags': [], 'run_id': 'f354ffe8-4c22-4881-890a-c1cad038a9a6', 'metadata': {}, 'parent_ids': []}\n",
"{'event': 'on_custom_event', 'run_id': 'f354ffe8-4c22-4881-890a-c1cad038a9a6', 'name': 'event1', 'tags': [], 'metadata': {}, 'data': {'x': 'hello world'}, 'parent_ids': []}\n",
"{'event': 'on_custom_event', 'run_id': 'f354ffe8-4c22-4881-890a-c1cad038a9a6', 'name': 'event2', 'tags': [], 'metadata': {}, 'data': 5, 'parent_ids': []}\n",
"{'event': 'on_chain_stream', 'run_id': 'f354ffe8-4c22-4881-890a-c1cad038a9a6', 'name': 'foo', 'tags': [], 'metadata': {}, 'data': {'chunk': 'hello world'}, 'parent_ids': []}\n",
"{'event': 'on_chain_end', 'data': {'output': 'hello world'}, 'run_id': 'f354ffe8-4c22-4881-890a-c1cad038a9a6', 'name': 'foo', 'tags': [], 'metadata': {}, 'parent_ids': []}\n"
]
}
],
"source": [
"from langchain_core.callbacks.manager import (\n",
" adispatch_custom_event,\n",
")\n",
"from langchain_core.runnables import RunnableLambda\n",
"from langchain_core.runnables.config import RunnableConfig\n",
"\n",
"\n",
"@RunnableLambda\n",
"async def foo(x: str) -> str:\n",
" await adispatch_custom_event(\"event1\", {\"x\": x})\n",
" await adispatch_custom_event(\"event2\", 5)\n",
" return x\n",
"\n",
"\n",
"async for event in foo.astream_events(\"hello world\", version=\"v2\"):\n",
" print(event)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In python <= 3.10, you must propagate the config manually!"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'event': 'on_chain_start', 'data': {'input': 'hello world'}, 'name': 'bar', 'tags': [], 'run_id': 'c787b09d-698a-41b9-8290-92aaa656f3e7', 'metadata': {}, 'parent_ids': []}\n",
"{'event': 'on_custom_event', 'run_id': 'c787b09d-698a-41b9-8290-92aaa656f3e7', 'name': 'event1', 'tags': [], 'metadata': {}, 'data': {'x': 'hello world'}, 'parent_ids': []}\n",
"{'event': 'on_custom_event', 'run_id': 'c787b09d-698a-41b9-8290-92aaa656f3e7', 'name': 'event2', 'tags': [], 'metadata': {}, 'data': 5, 'parent_ids': []}\n",
"{'event': 'on_chain_stream', 'run_id': 'c787b09d-698a-41b9-8290-92aaa656f3e7', 'name': 'bar', 'tags': [], 'metadata': {}, 'data': {'chunk': 'hello world'}, 'parent_ids': []}\n",
"{'event': 'on_chain_end', 'data': {'output': 'hello world'}, 'run_id': 'c787b09d-698a-41b9-8290-92aaa656f3e7', 'name': 'bar', 'tags': [], 'metadata': {}, 'parent_ids': []}\n"
]
}
],
"source": [
"from langchain_core.callbacks.manager import (\n",
" adispatch_custom_event,\n",
")\n",
"from langchain_core.runnables import RunnableLambda\n",
"from langchain_core.runnables.config import RunnableConfig\n",
"\n",
"\n",
"@RunnableLambda\n",
"async def bar(x: str, config: RunnableConfig) -> str:\n",
" \"\"\"An example that shows how to manually propagate config.\n",
"\n",
" You must do this if you're running python<=3.10.\n",
" \"\"\"\n",
" await adispatch_custom_event(\"event1\", {\"x\": x}, config=config)\n",
" await adispatch_custom_event(\"event2\", 5, config=config)\n",
" return x\n",
"\n",
"\n",
"async for event in bar.astream_events(\"hello world\", version=\"v2\"):\n",
" print(event)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Async Callback Handler\n",
"\n",
"You can also consume the dispatched event via an async callback handler."
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Received event event1 with data: {'x': 1}, with tags: ['foo', 'bar'], with metadata: {} and run_id: a62b84be-7afd-4829-9947-7165df1f37d9\n",
"Received event event2 with data: 5, with tags: ['foo', 'bar'], with metadata: {} and run_id: a62b84be-7afd-4829-9947-7165df1f37d9\n"
]
},
{
"data": {
"text/plain": [
"1"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from typing import Any, Dict, List, Optional\n",
"from uuid import UUID\n",
"\n",
"from langchain_core.callbacks import AsyncCallbackHandler\n",
"from langchain_core.callbacks.manager import (\n",
" adispatch_custom_event,\n",
")\n",
"from langchain_core.runnables import RunnableLambda\n",
"from langchain_core.runnables.config import RunnableConfig\n",
"\n",
"\n",
"class AsyncCustomCallbackHandler(AsyncCallbackHandler):\n",
" async def on_custom_event(\n",
" self,\n",
" name: str,\n",
" data: Any,\n",
" *,\n",
" run_id: UUID,\n",
" tags: Optional[List[str]] = None,\n",
" metadata: Optional[Dict[str, Any]] = None,\n",
" **kwargs: Any,\n",
" ) -> None:\n",
" print(\n",
" f\"Received event {name} with data: {data}, with tags: {tags}, with metadata: {metadata} and run_id: {run_id}\"\n",
" )\n",
"\n",
"\n",
"@RunnableLambda\n",
"async def bar(x: str, config: RunnableConfig) -> str:\n",
" \"\"\"An example that shows how to manually propagate config.\n",
"\n",
" You must do this if you're running python<=3.10.\n",
" \"\"\"\n",
" await adispatch_custom_event(\"event1\", {\"x\": x}, config=config)\n",
" await adispatch_custom_event(\"event2\", 5, config=config)\n",
" return x\n",
"\n",
"\n",
"async_handler = AsyncCustomCallbackHandler()\n",
"await foo.ainvoke(1, {\"callbacks\": [async_handler], \"tags\": [\"foo\", \"bar\"]})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Sync Callback Handler\n",
"\n",
"Let's see how to emit custom events in a sync environment using `dispatch_custom_event`.\n",
"\n",
"You **must** call `dispatch_custom_event` from within an existing `Runnable`."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Received event event1 with data: {'x': 1}, with tags: ['foo', 'bar'], with metadata: {} and run_id: 27b5ce33-dc26-4b34-92dd-08a89cb22268\n",
"Received event event2 with data: {'x': 1}, with tags: ['foo', 'bar'], with metadata: {} and run_id: 27b5ce33-dc26-4b34-92dd-08a89cb22268\n"
]
},
{
"data": {
"text/plain": [
"1"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from typing import Any, Dict, List, Optional\n",
"from uuid import UUID\n",
"\n",
"from langchain_core.callbacks import BaseCallbackHandler\n",
"from langchain_core.callbacks.manager import (\n",
" dispatch_custom_event,\n",
")\n",
"from langchain_core.runnables import RunnableLambda\n",
"from langchain_core.runnables.config import RunnableConfig\n",
"\n",
"\n",
"class CustomHandler(BaseCallbackHandler):\n",
" def on_custom_event(\n",
" self,\n",
" name: str,\n",
" data: Any,\n",
" *,\n",
" run_id: UUID,\n",
" tags: Optional[List[str]] = None,\n",
" metadata: Optional[Dict[str, Any]] = None,\n",
" **kwargs: Any,\n",
" ) -> None:\n",
" print(\n",
" f\"Received event {name} with data: {data}, with tags: {tags}, with metadata: {metadata} and run_id: {run_id}\"\n",
" )\n",
"\n",
"\n",
"@RunnableLambda\n",
"def foo(x: int, config: RunnableConfig) -> int:\n",
" dispatch_custom_event(\"event1\", {\"x\": x})\n",
" dispatch_custom_event(\"event2\", {\"x\": x})\n",
" return x\n",
"\n",
"\n",
"handler = CustomHandler()\n",
"foo.invoke(1, {\"callbacks\": [handler], \"tags\": [\"foo\", \"bar\"]})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Next steps\n",
"\n",
"You've seen how to emit custom events, you can check out the more in depth guide for [astream events](/docs/how_to/streaming/#using-stream-events) which is the easiest way to leverage custom events."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

View File

@@ -15,6 +15,12 @@
"\n",
"Make sure you have the integration packages installed for any model providers you want to support. E.g. you should have `langchain-openai` installed to init an OpenAI model.\n",
"\n",
":::\n",
"\n",
":::info Requires ``langchain >= 0.2.8``\n",
"\n",
"This functionality was added in ``langchain-core == 0.2.8``. Please make sure your package is up to date.\n",
"\n",
":::"
]
},
@@ -25,7 +31,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain langchain-openai langchain-anthropic langchain-google-vertexai"
"%pip install -qU langchain>=0.2.8 langchain-openai langchain-anthropic langchain-google-vertexai"
]
},
{
@@ -76,32 +82,6 @@
"print(\"Gemini 1.5: \" + gemini_15.invoke(\"what's your name\").content + \"\\n\")"
]
},
{
"cell_type": "markdown",
"id": "fff9a4c8-b6ee-4a1a-8d3d-0ecaa312d4ed",
"metadata": {},
"source": [
"## Simple config example"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "75c25d39-bf47-4b51-a6c6-64d9c572bfd6",
"metadata": {},
"outputs": [],
"source": [
"user_config = {\n",
" \"model\": \"...user-specified...\",\n",
" \"model_provider\": \"...user-specified...\",\n",
" \"temperature\": 0,\n",
" \"max_tokens\": 1000,\n",
"}\n",
"\n",
"llm = init_chat_model(**user_config)\n",
"llm.invoke(\"what's your name\")"
]
},
{
"cell_type": "markdown",
"id": "f811f219-5e78-4b62-b495-915d52a22532",
@@ -125,12 +105,215 @@
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "da07b5c0-d2e6-42e4-bfcd-2efcfaae6221",
"cell_type": "markdown",
"id": "476a44db-c50d-4846-951d-0f1c9ba8bbaa",
"metadata": {},
"outputs": [],
"source": []
"source": [
"## Creating a configurable model\n",
"\n",
"You can also create a runtime-configurable model by specifying `configurable_fields`. If you don't specify a `model` value, then \"model\" and \"model_provider\" be configurable by default."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "6c037f27-12d7-4e83-811e-4245c0e3ba58",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"I'm an AI language model created by OpenAI, and I don't have a personal name. You can call me Assistant or any other name you prefer! How can I assist you today?\", response_metadata={'token_usage': {'completion_tokens': 37, 'prompt_tokens': 11, 'total_tokens': 48}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_d576307f90', 'finish_reason': 'stop', 'logprobs': None}, id='run-5428ab5c-b5c0-46de-9946-5d4ca40dbdc8-0', usage_metadata={'input_tokens': 11, 'output_tokens': 37, 'total_tokens': 48})"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"configurable_model = init_chat_model(temperature=0)\n",
"\n",
"configurable_model.invoke(\n",
" \"what's your name\", config={\"configurable\": {\"model\": \"gpt-4o\"}}\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "321e3036-abd2-4e1f-bcc6-606efd036954",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"My name is Claude. It's nice to meet you!\", response_metadata={'id': 'msg_012XvotUJ3kGLXJUWKBVxJUi', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 11, 'output_tokens': 15}}, id='run-1ad1eefe-f1c6-4244-8bc6-90e2cb7ee554-0', usage_metadata={'input_tokens': 11, 'output_tokens': 15, 'total_tokens': 26})"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"configurable_model.invoke(\n",
" \"what's your name\", config={\"configurable\": {\"model\": \"claude-3-5-sonnet-20240620\"}}\n",
")"
]
},
{
"cell_type": "markdown",
"id": "7f3b3d4a-4066-45e4-8297-ea81ac8e70b7",
"metadata": {},
"source": [
"### Configurable model with default values\n",
"\n",
"We can create a configurable model with default model values, specify which parameters are configurable, and add prefixes to configurable params:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "814a2289-d0db-401e-b555-d5116112b413",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"I'm an AI language model created by OpenAI, and I don't have a personal name. You can call me Assistant or any other name you prefer! How can I assist you today?\", response_metadata={'token_usage': {'completion_tokens': 37, 'prompt_tokens': 11, 'total_tokens': 48}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_ce0793330f', 'finish_reason': 'stop', 'logprobs': None}, id='run-3923e328-7715-4cd6-b215-98e4b6bf7c9d-0', usage_metadata={'input_tokens': 11, 'output_tokens': 37, 'total_tokens': 48})"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"first_llm = init_chat_model(\n",
" model=\"gpt-4o\",\n",
" temperature=0,\n",
" configurable_fields=(\"model\", \"model_provider\", \"temperature\", \"max_tokens\"),\n",
" config_prefix=\"first\", # useful when you have a chain with multiple models\n",
")\n",
"\n",
"first_llm.invoke(\"what's your name\")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "6c8755ba-c001-4f5a-a497-be3f1db83244",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"My name is Claude. It's nice to meet you!\", response_metadata={'id': 'msg_01RyYR64DoMPNCfHeNnroMXm', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 11, 'output_tokens': 15}}, id='run-22446159-3723-43e6-88df-b84797e7751d-0', usage_metadata={'input_tokens': 11, 'output_tokens': 15, 'total_tokens': 26})"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"first_llm.invoke(\n",
" \"what's your name\",\n",
" config={\n",
" \"configurable\": {\n",
" \"first_model\": \"claude-3-5-sonnet-20240620\",\n",
" \"first_temperature\": 0.5,\n",
" \"first_max_tokens\": 100,\n",
" }\n",
" },\n",
")"
]
},
{
"cell_type": "markdown",
"id": "0072b1a3-7e44-4b4e-8b07-efe1ba91a689",
"metadata": {},
"source": [
"### Using a configurable model declaratively\n",
"\n",
"We can call declarative operations like `bind_tools`, `with_structured_output`, `with_configurable`, etc. on a configurable model and chain a configurable model in the same way that we would a regularly instantiated chat model object."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "067dabee-1050-4110-ae24-c48eba01e13b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'GetPopulation',\n",
" 'args': {'location': 'Los Angeles, CA'},\n",
" 'id': 'call_sYT3PFMufHGWJD32Hi2CTNUP'},\n",
" {'name': 'GetPopulation',\n",
" 'args': {'location': 'New York, NY'},\n",
" 'id': 'call_j1qjhxRnD3ffQmRyqjlI1Lnk'}]"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"class GetWeather(BaseModel):\n",
" \"\"\"Get the current weather in a given location\"\"\"\n",
"\n",
" location: str = Field(..., description=\"The city and state, e.g. San Francisco, CA\")\n",
"\n",
"\n",
"class GetPopulation(BaseModel):\n",
" \"\"\"Get the current population in a given location\"\"\"\n",
"\n",
" location: str = Field(..., description=\"The city and state, e.g. San Francisco, CA\")\n",
"\n",
"\n",
"llm = init_chat_model(temperature=0)\n",
"llm_with_tools = llm.bind_tools([GetWeather, GetPopulation])\n",
"\n",
"llm_with_tools.invoke(\n",
" \"what's bigger in 2024 LA or NYC\", config={\"configurable\": {\"model\": \"gpt-4o\"}}\n",
").tool_calls"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "e57dfe9f-cd24-4e37-9ce9-ccf8daf78f89",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'GetPopulation',\n",
" 'args': {'location': 'Los Angeles, CA'},\n",
" 'id': 'toolu_01CxEHxKtVbLBrvzFS7GQ5xR'},\n",
" {'name': 'GetPopulation',\n",
" 'args': {'location': 'New York City, NY'},\n",
" 'id': 'toolu_013A79qt5toWSsKunFBDZd5S'}]"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_with_tools.invoke(\n",
" \"what's bigger in 2024 LA or NYC\",\n",
" config={\"configurable\": {\"model\": \"claude-3-5-sonnet-20240620\"}},\n",
").tool_calls"
]
}
],
"metadata": {
@@ -149,7 +332,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.11.9"
}
},
"nbformat": 4,

View File

@@ -300,7 +300,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 2,
"id": "ac9295d3",
"metadata": {},
"outputs": [],
@@ -312,10 +312,8 @@
"\n",
"## Quick Install\n",
"\n",
"```bash\n",
"# Hopefully this code block isn't split\n",
"pip install langchain\n",
"```\n",
"\n",
"As an open-source project in a rapidly developing field, we are extremely open to contributions.\n",
"\"\"\""
@@ -323,7 +321,7 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 3,
"id": "3a0cb17a",
"metadata": {},
"outputs": [
@@ -332,15 +330,14 @@
"text/plain": [
"[Document(page_content='# 🦜️🔗 LangChain'),\n",
" Document(page_content='⚡ Building applications with LLMs through composability ⚡'),\n",
" Document(page_content='## Quick Install\\n\\n```bash'),\n",
" Document(page_content='## Quick Install'),\n",
" Document(page_content=\"# Hopefully this code block isn't split\"),\n",
" Document(page_content='pip install langchain'),\n",
" Document(page_content='```'),\n",
" Document(page_content='As an open-source project in a rapidly developing field, we'),\n",
" Document(page_content='are extremely open to contributions.')]"
]
},
"execution_count": 9,
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
@@ -742,7 +739,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
"version": "3.10.5"
}
},
"nbformat": 4,

View File

@@ -48,20 +48,10 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"id": "40ed76a2",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33mWARNING: You are using pip version 22.0.4; however, version 24.0 is available.\n",
"You should consider upgrading via the '/Users/jacoblee/.pyenv/versions/3.10.5/bin/python -m pip install --upgrade pip' command.\u001b[0m\u001b[33m\n",
"\u001b[0mNote: you may need to restart the kernel to use updated packages.\n"
]
}
],
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain langchain-openai\n",
"\n",

View File

@@ -180,7 +180,7 @@
"id": "32b1a992-8997-4c98-8eb2-c9fe9431b799",
"metadata": {},
"source": [
"Alternatively, we can add typing information via [Runnable.with_types](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.with_types):"
"Alternatively, the schema can be fully specified by directly passing the desired [args_schema](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.BaseTool.html#langchain_core.tools.BaseTool.args_schema) for the tool:"
]
},
{
@@ -190,10 +190,18 @@
"metadata": {},
"outputs": [],
"source": [
"as_tool = runnable.with_types(input_type=Args).as_tool(\n",
" name=\"My tool\",\n",
" description=\"Explanation of when to use tool.\",\n",
")"
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"class GSchema(BaseModel):\n",
" \"\"\"Apply a function to an integer and list of integers.\"\"\"\n",
"\n",
" a: int = Field(..., description=\"Integer\")\n",
" b: List[int] = Field(..., description=\"List of ints\")\n",
"\n",
"\n",
"runnable = RunnableLambda(g)\n",
"as_tool = runnable.as_tool(GSchema)"
]
},
{

View File

@@ -768,13 +768,189 @@
"\n",
"get_weather_tool.invoke({\"city\": \"foobar\"})"
]
},
{
"cell_type": "markdown",
"id": "1a8d8383-11b3-445e-956f-df4e96995e00",
"metadata": {},
"source": [
"## Returning artifacts of Tool execution\n",
"\n",
"Sometimes there are artifacts of a tool's execution that we want to make accessible to downstream components in our chain or agent, but that we don't want to expose to the model itself. For example if a tool returns custom objects like Documents, we may want to pass some view or metadata about this output to the model without passing the raw output to the model. At the same time, we may want to be able to access this full output elsewhere, for example in downstream tools.\n",
"\n",
"The Tool and [ToolMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.tool.ToolMessage.html) interfaces make it possible to distinguish between the parts of the tool output meant for the model (this is the ToolMessage.content) and those parts which are meant for use outside the model (ToolMessage.artifact).\n",
"\n",
":::info Requires ``langchain-core >= 0.2.19``\n",
"\n",
"This functionality was added in ``langchain-core == 0.2.19``. Please make sure your package is up to date.\n",
"\n",
":::\n",
"\n",
"If we want our tool to distinguish between message content and other artifacts, we need to specify `response_format=\"content_and_artifact\"` when defining our tool and make sure that we return a tuple of (content, artifact):"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "14905425-0334-43a0-9de9-5bcf622ede0e",
"metadata": {},
"outputs": [],
"source": [
"import random\n",
"from typing import List, Tuple\n",
"\n",
"from langchain_core.tools import tool\n",
"\n",
"\n",
"@tool(response_format=\"content_and_artifact\")\n",
"def generate_random_ints(min: int, max: int, size: int) -> Tuple[str, List[int]]:\n",
" \"\"\"Generate size random ints in the range [min, max].\"\"\"\n",
" array = [random.randint(min, max) for _ in range(size)]\n",
" content = f\"Successfully generated array of {size} random ints in [{min}, {max}].\"\n",
" return content, array"
]
},
{
"cell_type": "markdown",
"id": "49f057a6-8938-43ea-8faf-ae41e797ceb8",
"metadata": {},
"source": [
"If we invoke our tool directly with the tool arguments, we'll get back just the content part of the output:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "0f2e1528-404b-46e6-b87c-f0957c4b9217",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Successfully generated array of 10 random ints in [0, 9].'"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"generate_random_ints.invoke({\"min\": 0, \"max\": 9, \"size\": 10})"
]
},
{
"cell_type": "markdown",
"id": "1e62ebba-1737-4b97-b61a-7313ade4e8c2",
"metadata": {},
"source": [
"If we invoke our tool with a ToolCall (like the ones generated by tool-calling models), we'll get back a ToolMessage that contains both the content and artifact generated by the Tool:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "cc197777-26eb-46b3-a83b-c2ce116c6311",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ToolMessage(content='Successfully generated array of 10 random ints in [0, 9].', name='generate_random_ints', tool_call_id='123', artifact=[1, 4, 2, 5, 3, 9, 0, 4, 7, 7])"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"generate_random_ints.invoke(\n",
" {\n",
" \"name\": \"generate_random_ints\",\n",
" \"args\": {\"min\": 0, \"max\": 9, \"size\": 10},\n",
" \"id\": \"123\", # required\n",
" \"type\": \"tool_call\", # required\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "dfdc1040-bf25-4790-b4c3-59452db84e11",
"metadata": {},
"source": [
"We can do the same when subclassing BaseTool:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "fe1a09d1-378b-4b91-bb5e-0697c3d7eb92",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.tools import BaseTool\n",
"\n",
"\n",
"class GenerateRandomFloats(BaseTool):\n",
" name: str = \"generate_random_floats\"\n",
" description: str = \"Generate size random floats in the range [min, max].\"\n",
" response_format: str = \"content_and_artifact\"\n",
"\n",
" ndigits: int = 2\n",
"\n",
" def _run(self, min: float, max: float, size: int) -> Tuple[str, List[float]]:\n",
" range_ = max - min\n",
" array = [\n",
" round(min + (range_ * random.random()), ndigits=self.ndigits)\n",
" for _ in range(size)\n",
" ]\n",
" content = f\"Generated {size} floats in [{min}, {max}], rounded to {self.ndigits} decimals.\"\n",
" return content, array\n",
"\n",
" # Optionally define an equivalent async method\n",
"\n",
" # async def _arun(self, min: float, max: float, size: int) -> Tuple[str, List[float]]:\n",
" # ..."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "8c3d16f6-1c4a-48ab-b05a-38547c592e79",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ToolMessage(content='Generated 3 floats in [0.1, 3.3333], rounded to 4 decimals.', name='generate_random_floats', tool_call_id='123', artifact=[1.4277, 0.7578, 2.4871])"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"rand_gen = GenerateRandomFloats(ndigits=4)\n",
"\n",
"rand_gen.invoke(\n",
" {\n",
" \"name\": \"generate_random_floats\",\n",
" \"args\": {\"min\": 0.1, \"max\": 3.3333, \"size\": 3},\n",
" \"id\": \"123\",\n",
" \"type\": \"tool_call\",\n",
" }\n",
")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": "poetry-venv-311",
"language": "python",
"name": "python3"
"name": "poetry-venv-311"
},
"language_info": {
"codemirror_mode": {
@@ -786,7 +962,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
"version": "3.11.9"
},
"vscode": {
"interpreter": {

View File

@@ -67,15 +67,16 @@ If you'd prefer not to set an environment variable you can pass the key in direc
```python
from langchain_cohere import CohereEmbeddings
embeddings_model = CohereEmbeddings(cohere_api_key="...")
embeddings_model = CohereEmbeddings(cohere_api_key="...", model='embed-english-v3.0')
```
Otherwise you can initialize without any params:
Otherwise you can initialize it as shown below:
```python
from langchain_cohere import CohereEmbeddings
embeddings_model = CohereEmbeddings()
embeddings_model = CohereEmbeddings(model='embed-english-v3.0')
```
Do note that it is mandatory to pass the model parameter when initializing the CohereEmbeddings class.
</TabItem>
<TabItem value="huggingface" label="Hugging Face">

View File

@@ -9,11 +9,13 @@
"source": [
"# Hybrid Search\n",
"\n",
"The standard search in LangChain is done by vector similarity. However, a number of vectorstores implementations (Astra DB, ElasticSearch, Neo4J, AzureSearch, ...) also support more advanced search combining vector similarity search and other search techniques (full-text, BM25, and so on). This is generally referred to as \"Hybrid\" search.\n",
"The standard search in LangChain is done by vector similarity. However, a number of vectorstores implementations (Astra DB, ElasticSearch, Neo4J, AzureSearch, Qdrant...) also support more advanced search combining vector similarity search and other search techniques (full-text, BM25, and so on). This is generally referred to as \"Hybrid\" search.\n",
"\n",
"**Step 1: Make sure the vectorstore you are using supports hybrid search**\n",
"\n",
"At the moment, there is no unified way to perform hybrid search in LangChain. Each vectorstore may have their own way to do it. This is generally exposed as a keyword argument that is passed in during `similarity_search`. By reading the documentation or source code, figure out whether the vectorstore you are using supports hybrid search, and, if so, how to use it.\n",
"At the moment, there is no unified way to perform hybrid search in LangChain. Each vectorstore may have their own way to do it. This is generally exposed as a keyword argument that is passed in during `similarity_search`.\n",
"\n",
"By reading the documentation or source code, figure out whether the vectorstore you are using supports hybrid search, and, if so, how to use it.\n",
"\n",
"**Step 2: Add that parameter as a configurable field for the chain**\n",
"\n",

View File

@@ -84,7 +84,7 @@ These are the core building blocks you can use when building applications.
- [How to: use chat model to call tools](/docs/how_to/tool_calling)
- [How to: stream tool calls](/docs/how_to/tool_streaming)
- [How to: few shot prompt tool behavior](/docs/how_to/tools_few_shot)
- [How to: bind model-specific formated tools](/docs/how_to/tools_model_specific)
- [How to: bind model-specific formatted tools](/docs/how_to/tools_model_specific)
- [How to: force a specific tool call](/docs/how_to/tool_choice)
- [How to: init any model in one line](/docs/how_to/chat_models_universal_init/)
@@ -195,7 +195,9 @@ LangChain [Tools](/docs/concepts/#tools) contain a description of the tool (to p
- [How to: add a human in the loop to tool usage](/docs/how_to/tools_human)
- [How to: handle errors when calling tools](/docs/how_to/tools_error)
- [How to: disable parallel tool calling](/docs/how_to/tool_choice)
- [How to: stream events from within a tool](/docs/how_to/tool_stream_events)
- [How to: access the `RunnableConfig` object within a custom tool](/docs/how_to/tool_configure)
- [How to: stream events from child runs within a custom tool](/docs/how_to/tool_stream_events)
- [How to: return extra artifacts from a tool](/docs/how_to/tool_artifacts/)
### Multimodal
@@ -223,6 +225,7 @@ For in depth how-to guides for agents, please check out [LangGraph](https://lang
- [How to: pass callbacks into a module constructor](/docs/how_to/callbacks_constructor)
- [How to: create custom callback handlers](/docs/how_to/custom_callbacks)
- [How to: use callbacks in async environments](/docs/how_to/callbacks_async)
- [How to: dispatch custom callback events](/docs/how_to/callbacks_custom_events)
### Custom
@@ -235,6 +238,7 @@ All of LangChain components can easily be extended to support your own versions.
- [How to: write a custom output parser class](/docs/how_to/output_parser_custom)
- [How to: create custom callback handlers](/docs/how_to/custom_callbacks)
- [How to: define a custom tool](/docs/how_to/custom_tools)
- [How to: dispatch custom callback events](/docs/how_to/callbacks_custom_events)
### Serialization
- [How to: save and load LangChain objects](/docs/how_to/serialization)

View File

@@ -63,6 +63,38 @@
"Notice that if the contents of one of the messages to merge is a list of content blocks then the merged message will have a list of content blocks. And if both messages to merge have string contents then those are concatenated with a newline character."
]
},
{
"cell_type": "markdown",
"id": "11f7e8d3",
"metadata": {},
"source": [
"The `merge_message_runs` utility also works with messages composed together using the overloaded `+` operation:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b51855c5",
"metadata": {},
"outputs": [],
"source": [
"messages = (\n",
" SystemMessage(\"you're a good assistant.\")\n",
" + SystemMessage(\"you always respond with a joke.\")\n",
" + HumanMessage([{\"type\": \"text\", \"text\": \"i wonder why it's called langchain\"}])\n",
" + HumanMessage(\"and who is harrison chasing anyways\")\n",
" + AIMessage(\n",
" 'Well, I guess they thought \"WordRope\" and \"SentenceString\" just didn\\'t have the same ring to it!'\n",
" )\n",
" + AIMessage(\n",
" \"Why, he's probably chasing after the last cup of coffee in the office!\"\n",
" )\n",
")\n",
"\n",
"merged = merge_message_runs(messages)\n",
"print(\"\\n\\n\".join([repr(x) for x in merged]))"
]
},
{
"cell_type": "markdown",
"id": "1b2eee74-71c8-4168-b968-bca580c25d18",

View File

@@ -0,0 +1,395 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "503e36ae-ca62-4f8a-880c-4fe78ff5df93",
"metadata": {},
"source": [
"# How to return extra artifacts from a tool\n",
"\n",
":::info Prerequisites\n",
"This guide assumes familiarity with the following concepts:\n",
"\n",
"- [Tools](/docs/concepts/#tools)\n",
"- [Function/tool calling](/docs/concepts/#functiontool-calling)\n",
"\n",
":::\n",
"\n",
"Tools are utilities that can be called by a model, and whose outputs are designed to be fed back to a model. Sometimes, however, there are artifacts of a tool's execution that we want to make accessible to downstream components in our chain or agent, but that we don't want to expose to the model itself. For example if a tool returns a custom object, a dataframe or an image, we may want to pass some metadata about this output to the model without passing the actual output to the model. At the same time, we may want to be able to access this full output elsewhere, for example in downstream tools.\n",
"\n",
"The Tool and [ToolMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.tool.ToolMessage.html) interfaces make it possible to distinguish between the parts of the tool output meant for the model (this is the ToolMessage.content) and those parts which are meant for use outside the model (ToolMessage.artifact).\n",
"\n",
":::info Requires ``langchain-core >= 0.2.19``\n",
"\n",
"This functionality was added in ``langchain-core == 0.2.19``. Please make sure your package is up to date.\n",
"\n",
":::\n",
"\n",
"## Defining the tool\n",
"\n",
"If we want our tool to distinguish between message content and other artifacts, we need to specify `response_format=\"content_and_artifact\"` when defining our tool and make sure that we return a tuple of (content, artifact):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "762b9199-885f-4946-9c98-cc54d72b0d76",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU \"langchain-core>=0.2.19\""
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "b9eb179d-1f41-4748-9866-b3d3e8c73cd0",
"metadata": {},
"outputs": [],
"source": [
"import random\n",
"from typing import List, Tuple\n",
"\n",
"from langchain_core.tools import tool\n",
"\n",
"\n",
"@tool(response_format=\"content_and_artifact\")\n",
"def generate_random_ints(min: int, max: int, size: int) -> Tuple[str, List[int]]:\n",
" \"\"\"Generate size random ints in the range [min, max].\"\"\"\n",
" array = [random.randint(min, max) for _ in range(size)]\n",
" content = f\"Successfully generated array of {size} random ints in [{min}, {max}].\"\n",
" return content, array"
]
},
{
"cell_type": "markdown",
"id": "0ab05d25-af4a-4e5a-afe2-f090416d7ee7",
"metadata": {},
"source": [
"## Invoking the tool with ToolCall\n",
"\n",
"If we directly invoke our tool with just the tool arguments, you'll notice that we only get back the content part of the Tool output:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "5e7d5e77-3102-4a59-8ade-e4e699dd1817",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Successfully generated array of 10 random ints in [0, 9].'"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"generate_random_ints.invoke({\"min\": 0, \"max\": 9, \"size\": 10})"
]
},
{
"cell_type": "markdown",
"id": "30db7228-f04c-489e-afda-9a572eaa90a1",
"metadata": {},
"source": [
"In order to get back both the content and the artifact, we need to invoke our model with a ToolCall (which is just a dictionary with \"name\", \"args\", \"id\" and \"type\" keys), which has additional info needed to generate a ToolMessage like the tool call ID:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "da1d939d-a900-4b01-92aa-d19011a6b034",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ToolMessage(content='Successfully generated array of 10 random ints in [0, 9].', name='generate_random_ints', tool_call_id='123', artifact=[2, 8, 0, 6, 0, 0, 1, 5, 0, 0])"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"generate_random_ints.invoke(\n",
" {\n",
" \"name\": \"generate_random_ints\",\n",
" \"args\": {\"min\": 0, \"max\": 9, \"size\": 10},\n",
" \"id\": \"123\", # required\n",
" \"type\": \"tool_call\", # required\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "a3cfc03d-020b-42c7-b0f8-c824af19e45e",
"metadata": {},
"source": [
"## Using with a model\n",
"\n",
"With a [tool-calling model](/docs/how_to/tool_calling/), we can easily use a model to call our Tool and generate ToolMessages:\n",
"\n",
"```{=mdx}\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
"\n",
"<ChatModelTabs\n",
" customVarName=\"llm\"\n",
"/>\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "74de0286-b003-4b48-9cdd-ecab435515ca",
"metadata": {},
"outputs": [],
"source": [
"# | echo: false\n",
"# | output: false\n",
"\n",
"from langchain_anthropic import ChatAnthropic\n",
"\n",
"llm = ChatAnthropic(model=\"claude-3-5-sonnet-20240620\", temperature=0)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "8a67424b-d19c-43df-ac7b-690bca42146c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'generate_random_ints',\n",
" 'args': {'min': 1, 'max': 24, 'size': 6},\n",
" 'id': 'toolu_01EtALY3Wz1DVYhv1TLvZGvE',\n",
" 'type': 'tool_call'}]"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_with_tools = llm.bind_tools([generate_random_ints])\n",
"\n",
"ai_msg = llm_with_tools.invoke(\"generate 6 positive ints less than 25\")\n",
"ai_msg.tool_calls"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "00c4e906-3ca8-41e8-a0be-65cb0db7d574",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ToolMessage(content='Successfully generated array of 6 random ints in [1, 24].', name='generate_random_ints', tool_call_id='toolu_01EtALY3Wz1DVYhv1TLvZGvE', artifact=[2, 20, 23, 8, 1, 15])"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"generate_random_ints.invoke(ai_msg.tool_calls[0])"
]
},
{
"cell_type": "markdown",
"id": "ddef2690-70de-4542-ab20-2337f77f3e46",
"metadata": {},
"source": [
"If we just pass in the tool call args, we'll only get back the content:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "f4a6c9a6-0ffc-4b0e-a59f-f3c3d69d824d",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Successfully generated array of 6 random ints in [1, 24].'"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"generate_random_ints.invoke(ai_msg.tool_calls[0][\"args\"])"
]
},
{
"cell_type": "markdown",
"id": "98d6443b-ff41-4d91-8523-b6274fc74ee5",
"metadata": {},
"source": [
"If we wanted to declaratively create a chain, we could do this:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "eb55ec23-95a4-464e-b886-d9679bf3aaa2",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[ToolMessage(content='Successfully generated array of 1 random ints in [1, 5].', name='generate_random_ints', tool_call_id='toolu_01FwYhnkwDPJPbKdGq4ng6uD', artifact=[5])]"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from operator import attrgetter\n",
"\n",
"chain = llm_with_tools | attrgetter(\"tool_calls\") | generate_random_ints.map()\n",
"\n",
"chain.invoke(\"give me a random number between 1 and 5\")"
]
},
{
"cell_type": "markdown",
"id": "4df46be2-babb-4bfe-a641-91cd3d03ffaf",
"metadata": {},
"source": [
"## Creating from BaseTool class\n",
"\n",
"If you want to create a BaseTool object directly, instead of decorating a function with `@tool`, you can do so like this:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "9a9129e1-6aee-4a10-ad57-62ef3bf0276c",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.tools import BaseTool\n",
"\n",
"\n",
"class GenerateRandomFloats(BaseTool):\n",
" name: str = \"generate_random_floats\"\n",
" description: str = \"Generate size random floats in the range [min, max].\"\n",
" response_format: str = \"content_and_artifact\"\n",
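" # Declares that _run returns a (content, artifact) tuple\n",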
"\n",
" ndigits: int = 2\n",
"\n",
" def _run(self, min: float, max: float, size: int) -> Tuple[str, List[float]]:\n",
" range_ = max - min\n",
" array = [\n",
" round(min + (range_ * random.random()), ndigits=self.ndigits)\n",
" for _ in range(size)\n",
" ]\n",
" content = f\"Generated {size} floats in [{min}, {max}], rounded to {self.ndigits} decimals.\"\n",
" return content, array\n",
"\n",
" # Optionally define an equivalent async method\n",
"\n",
" # async def _arun(self, min: float, max: float, size: int) -> Tuple[str, List[float]]:\n",
" # ..."
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "d7322619-f420-4b29-8ee5-023e693d0179",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Generated 3 floats in [0.1, 3.3333], rounded to 4 decimals.'"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"rand_gen = GenerateRandomFloats(ndigits=4)\n",
"rand_gen.invoke({\"min\": 0.1, \"max\": 3.3333, \"size\": 3})"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "0892f277-23a6-4bb8-a0e9-59f533ac9750",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ToolMessage(content='Generated 3 floats in [0.1, 3.3333], rounded to 4 decimals.', name='generate_random_floats', tool_call_id='123', artifact=[1.5789, 2.464, 2.2719])"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"rand_gen.invoke(\n",
" {\n",
" \"name\": \"generate_random_floats\",\n",
" \"args\": {\"min\": 0.1, \"max\": 3.3333, \"size\": 3},\n",
" \"id\": \"123\",\n",
" \"type\": \"tool_call\",\n",
" }\n",
")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv-311",
"language": "python",
"name": "poetry-venv-311"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -0,0 +1,132 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to access the RunnableConfig object within a custom tool\n",
"\n",
":::info Prerequisites\n",
"\n",
"This guide assumes familiarity with the following concepts:\n",
"\n",
"- [LangChain Tools](/docs/concepts/#tools)\n",
"- [Custom tools](/docs/how_to/custom_tools)\n",
"- [LangChain Expression Language (LCEL)](/docs/concepts/#langchain-expression-language-lcel)\n",
"- [Configuring runnable behavior](/docs/how_to/configure/)\n",
"\n",
":::\n",
"\n",
"If you have a tool that call chat models, retrievers, or other runnables, you may want to access internal events from those runnables or configure them with additional properties. This guide shows you how to manually pass parameters properly so that you can do this using the `astream_events()` method.\n",
"\n",
"Tools are runnables, and you can treat them the same way as any other runnable at the interface level - you can call `invoke()`, `batch()`, and `stream()` on them as normal. However, when writing custom tools, you may want to invoke other runnables like chat models or retrievers. In order to properly trace and configure those sub-invocations, you'll need to manually access and pass in the tool's current [`RunnableConfig`](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.config.RunnableConfig.html) object. This guide show you some examples of how to do that.\n",
"\n",
":::caution Compatibility\n",
"\n",
"This guide requires `langchain-core>=0.2.16`.\n",
"\n",
":::\n",
"\n",
"## Inferring by parameter type\n",
"\n",
"To access reference the active config object from your custom tool, you'll need to add a parameter to your tool's signature typed as `RunnableConfig`. When you invoke your tool, LangChain will inspect your tool's signature, look for a parameter typed as `RunnableConfig`, and if it exists, populate that parameter with the correct value.\n",
"\n",
"**Note:** The actual name of the parameter doesn't matter, only the typing.\n",
"\n",
"To illustrate this, define a custom tool that takes a two parameters - one typed as a string, the other typed as `RunnableConfig`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain_core"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.runnables import RunnableConfig\n",
"from langchain_core.tools import tool\n",
"\n",
"\n",
"@tool\n",
"async def reverse_tool(text: str, special_config_param: RunnableConfig) -> str:\n",
" \"\"\"A test tool that combines input text with a configurable parameter.\"\"\"\n",
" return (text + special_config_param[\"configurable\"][\"additional_field\"])[::-1]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then, if we invoke the tool with a `config` containing a `configurable` field, we can see that `additional_field` is passed through correctly:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'321cba'"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"await reverse_tool.ainvoke(\n",
" {\"text\": \"abc\"}, config={\"configurable\": {\"additional_field\": \"123\"}}\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Next steps\n",
"\n",
"You've now seen how to configure and stream events from within a tool. Next, check out the following guides for more on using tools:\n",
"\n",
"- [Stream events from child runs within a custom tool](/docs/how_to/tool_stream_events/)\n",
"- Pass [tool results back to a model](/docs/how_to/tool_results_pass_to_model)\n",
"\n",
"You can also check out some more specific uses of tool calling:\n",
"\n",
"- Building [tool-using chains and agents](/docs/how_to#tools)\n",
"- Getting [structured outputs](/docs/how_to/structured_output/) from models"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -6,12 +6,20 @@
"source": [
"# How to pass tool outputs to the model\n",
"\n",
"If we're using the model-generated tool invocations to actually call tools and want to pass the tool results back to the model, we can do so using `ToolMessage`s. First, let's define our tools and our model."
":::info Prerequisites\n",
"This guide assumes familiarity with the following concepts:\n",
"\n",
"- [Tools](/docs/concepts/#tools)\n",
"- [Function/tool calling](/docs/concepts/#functiontool-calling)\n",
"\n",
":::\n",
"\n",
"If we're using the model-generated tool invocations to actually call tools and want to pass the tool results back to the model, we can do so using `ToolMessage`s and `ToolCall`s. First, let's define our tools and our model."
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
@@ -35,7 +43,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
@@ -54,25 +62,32 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we can use ``ToolMessage`` to pass back the output of the tool calls to the model."
"The nice thing about Tools is that if we invoke them with a ToolCall, we'll automatically get back a ToolMessage that can be fed back to the model: \n",
"\n",
":::info Requires ``langchain-core >= 0.2.19``\n",
"\n",
"This functionality was added in ``langchain-core == 0.2.19``. Please make sure your package is up to date.\n",
"\n",
":::"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessage(content='What is 3 * 12? Also, what is 11 + 49?'),\n",
" AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_svc2GLSxNFALbaCAbSjMI9J8', 'function': {'arguments': '{\"a\": 3, \"b\": 12}', 'name': 'Multiply'}, 'type': 'function'}, {'id': 'call_r8jxte3zW6h3MEGV3zH2qzFh', 'function': {'arguments': '{\"a\": 11, \"b\": 49}', 'name': 'Add'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 50, 'prompt_tokens': 105, 'total_tokens': 155}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': 'fp_d9767fc5b9', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-a79ad1dd-95f1-4a46-b688-4c83f327a7b3-0', tool_calls=[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_svc2GLSxNFALbaCAbSjMI9J8'}, {'name': 'Add', 'args': {'a': 11, 'b': 49}, 'id': 'call_r8jxte3zW6h3MEGV3zH2qzFh'}]),\n",
" ToolMessage(content='36', tool_call_id='call_svc2GLSxNFALbaCAbSjMI9J8'),\n",
" ToolMessage(content='60', tool_call_id='call_r8jxte3zW6h3MEGV3zH2qzFh')]"
" AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_Smg3NHJNxrKfAmd4f9GkaYn3', 'function': {'arguments': '{\"a\": 3, \"b\": 12}', 'name': 'multiply'}, 'type': 'function'}, {'id': 'call_55K1C0DmH6U5qh810gW34xZ0', 'function': {'arguments': '{\"a\": 11, \"b\": 49}', 'name': 'add'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 49, 'prompt_tokens': 88, 'total_tokens': 137}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-56657feb-96dd-456c-ab8e-1857eab2ade0-0', tool_calls=[{'name': 'multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_Smg3NHJNxrKfAmd4f9GkaYn3', 'type': 'tool_call'}, {'name': 'add', 'args': {'a': 11, 'b': 49}, 'id': 'call_55K1C0DmH6U5qh810gW34xZ0', 'type': 'tool_call'}], usage_metadata={'input_tokens': 88, 'output_tokens': 49, 'total_tokens': 137}),\n",
" ToolMessage(content='36', name='multiply', tool_call_id='call_Smg3NHJNxrKfAmd4f9GkaYn3'),\n",
" ToolMessage(content='60', name='add', tool_call_id='call_55K1C0DmH6U5qh810gW34xZ0')]"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "display_data"
"output_type": "execute_result"
}
],
"source": [
@@ -85,24 +100,25 @@
"messages.append(ai_msg)\n",
"for tool_call in ai_msg.tool_calls:\n",
" selected_tool = {\"add\": add, \"multiply\": multiply}[tool_call[\"name\"].lower()]\n",
" tool_output = selected_tool.invoke(tool_call[\"args\"])\n",
" messages.append(ToolMessage(tool_output, tool_call_id=tool_call[\"id\"]))\n",
" tool_msg = selected_tool.invoke(tool_call)\n",
" messages.append(tool_msg)\n",
"messages"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 6,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='3 * 12 is 36 and 11 + 49 is 60.', response_metadata={'token_usage': {'completion_tokens': 18, 'prompt_tokens': 171, 'total_tokens': 189}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': 'fp_d9767fc5b9', 'finish_reason': 'stop', 'logprobs': None}, id='run-20b52149-e00d-48ea-97cf-f8de7a255f8c-0')"
"AIMessage(content='3 * 12 is 36 and 11 + 49 is 60.', response_metadata={'token_usage': {'completion_tokens': 18, 'prompt_tokens': 153, 'total_tokens': 171}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-ba5032f0-f773-406d-a408-8314e66511d0-0', usage_metadata={'input_tokens': 153, 'output_tokens': 18, 'total_tokens': 171})"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "display_data"
"output_type": "execute_result"
}
],
"source": [
@@ -118,10 +134,24 @@
}
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv-311",
"language": "python",
"name": "poetry-venv-311"
},
"language_info": {
"name": "python"
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}
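The cells that define the tools and bind them to the model are collapsed behind the hunk markers above. As a minimal sketch of what they look like, assuming `langchain-openai` and tool names matching the `tool_calls` in the output above:

```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI


@tool
def add(a: int, b: int) -> int:
    """Add a and b."""
    return a + b


@tool
def multiply(a: int, b: int) -> int:
    """Multiply a and b."""
    return a * b


# Model name taken from the response_metadata shown above.
llm = ChatOpenAI(model="gpt-3.5-turbo-0125")
llm_with_tools = llm.bind_tools([add, multiply])
ai_msg = llm_with_tools.invoke("What is 3 * 12? Also, what is 11 + 49?")
```

Each entry in `ai_msg.tool_calls` can then be passed straight to the matching tool's `invoke()` to get back a ready-made `ToolMessage`, as the loop in the cell above does.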

View File

@@ -4,25 +4,32 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to stream events from within a tool\n",
"# How to stream events from child runs within a custom tool\n",
"\n",
":::info Prerequisites\n",
"\n",
"This guide assumes familiarity with the following concepts:\n",
"- [LangChain Tools](/docs/concepts/#tools)\n",
"- [Custom tools](/docs/how_to/custom_tools)\n",
"- [Using stream events](/docs/how_to/streaming/#using-stream-events)\n",
"- [Accessing RunnableConfig within a custom tool](/docs/how_to/tool_configure/)\n",
"\n",
":::\n",
"\n",
"If you have tools that call LLMs, retrievers, or other runnables, you may want to access internal events from those runnables. This guide shows you a few ways you can do this using the `astream_events()` method.\n",
"If you have tools that call chat models, retrievers, or other runnables, you may want to access internal events from those runnables or configure them with additional properties. This guide shows you how to manually pass parameters properly so that you can do this using the `astream_events()` method.\n",
"\n",
":::caution\n",
"LangChain cannot automatically propagate callbacks to child runnables if you are running async code in python<=3.10.\n",
" \n",
"This is a common reason why you may fail to see events being emitted from custom runnables or tools.\n",
":::caution Compatibility\n",
"\n",
"LangChain cannot automatically propagate configuration, including callbacks necessary for `astream_events()`, to child runnables if you are running `async` code in `python<=3.10`. This is a common reason why you may fail to see events being emitted from custom runnables or tools.\n",
"\n",
"If you are running python<=3.10, you will need to manually propagate the `RunnableConfig` object to the child runnable in async environments. For an example of how to manually propagate the config, see the implementation of the `bar` RunnableLambda below.\n",
"\n",
"If you are running python>=3.11, the `RunnableConfig` will automatically propagate to child runnables in async environment. However, it is still a good idea to propagate the `RunnableConfig` manually if your code may run in older Python versions.\n",
"\n",
"This guide also requires `langchain-core>=0.2.16`.\n",
":::\n",
"\n",
"We'll define a custom tool below that calls a chain that summarizes its input in a special way by prompting an LLM to return only 10 words, then reversing the output:\n",
"Say you have a custom tool that calls a chain that condenses its input by prompting a chat model to return only 10 words, then reversing the output. First, define it in a naive way:\n",
"\n",
"```{=mdx}\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
@@ -40,7 +47,7 @@
"# | output: false\n",
"# | echo: false\n",
"\n",
"%pip install -qU langchain langchain_anthropic\n",
"%pip install -qU langchain langchain_anthropic langchain_core\n",
"\n",
"import os\n",
"from getpass import getpass\n",
@@ -65,7 +72,7 @@
"\n",
"\n",
"@tool\n",
"def special_summarization_tool(long_text: str) -> str:\n",
"async def special_summarization_tool(long_text: str) -> str:\n",
" \"\"\"A tool that summarizes input text using advanced techniques.\"\"\"\n",
" prompt = ChatPromptTemplate.from_template(\n",
" \"You are an expert writer. Summarize the following text in 10 words or less:\\n\\n{long_text}\"\n",
@@ -75,7 +82,7 @@
" return x[::-1]\n",
"\n",
" chain = prompt | model | StrOutputParser() | reverse\n",
" summary = chain.invoke({\"long_text\": long_text})\n",
" summary = await chain.ainvoke({\"long_text\": long_text})\n",
" return summary"
]
},
@@ -83,7 +90,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"If you just invoke the tool directly, you can see that you only get the final response:"
"Invoking the tool directly works just fine:"
]
},
{
@@ -116,31 +123,90 @@
"Coming! Hang on a second.\n",
"\"\"\"\n",
"\n",
"special_summarization_tool.invoke({\"long_text\": LONG_TEXT})"
"await special_summarization_tool.ainvoke({\"long_text\": LONG_TEXT})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you wanted to access the raw output from the chat model, you could use the [`astream_events()`](/docs/how_to/streaming/#using-stream-events) method and look for `on_chat_model_end` events:"
"But if you wanted to access the raw output from the chat model rather than the full tool, you might try to use the [`astream_events()`](/docs/how_to/streaming/#using-stream-events) method and look for an `on_chat_model_end` event. Here's what happens:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"stream = special_summarization_tool.astream_events(\n",
" {\"long_text\": LONG_TEXT}, version=\"v2\"\n",
")\n",
"\n",
"async for event in stream:\n",
" if event[\"event\"] == \"on_chat_model_end\":\n",
" # Never triggers in python<=3.10!\n",
" print(event)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You'll notice (unless you're running through this guide in `python>=3.11`) that there are no chat model events emitted from the child run!\n",
"\n",
"This is because the example above does not pass the tool's config object into the internal chain. To fix this, redefine your tool to take a special parameter typed as `RunnableConfig` (see [this guide](/docs/how_to/tool_configure) for more details). You'll also need to pass that parameter through into the internal chain when executing it:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.runnables import RunnableConfig\n",
"\n",
"\n",
"@tool\n",
"async def special_summarization_tool_with_config(\n",
" long_text: str, config: RunnableConfig\n",
") -> str:\n",
" \"\"\"A tool that summarizes input text using advanced techniques.\"\"\"\n",
" prompt = ChatPromptTemplate.from_template(\n",
" \"You are an expert writer. Summarize the following text in 10 words or less:\\n\\n{long_text}\"\n",
" )\n",
"\n",
" def reverse(x: str):\n",
" return x[::-1]\n",
"\n",
" chain = prompt | model | StrOutputParser() | reverse\n",
" # Pass the \"config\" object as an argument to any executed runnables\n",
" summary = await chain.ainvoke({\"long_text\": long_text}, config=config)\n",
" return summary"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And now try the same `astream_events()` call as before with your new tool:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'event': 'on_chat_model_end', 'data': {'output': AIMessage(content='Bee defies physics; Barry chooses outfit for graduation day.', response_metadata={'stop_reason': 'end_turn', 'stop_sequence': None}, id='run-195c0986-2ffa-43a3-9366-f2f96c42fe57', usage_metadata={'input_tokens': 182, 'output_tokens': 16, 'total_tokens': 198}), 'input': {'messages': [[HumanMessage(content=\"You are an expert writer. Summarize the following text in 10 words or less:\\n\\n\\nNARRATOR:\\n(Black screen with text; The sound of buzzing bees can be heard)\\nAccording to all known laws of aviation, there is no way a bee should be able to fly. Its wings are too small to get its fat little body off the ground. The bee, of course, flies anyway because bees don't care what humans think is impossible.\\nBARRY BENSON:\\n(Barry is picking out a shirt)\\nYellow, black. Yellow, black. Yellow, black. Yellow, black. Ooh, black and yellow! Let's shake it up a little.\\nJANET BENSON:\\nBarry! Breakfast is ready!\\nBARRY:\\nComing! Hang on a second.\\n\")]]}}, 'run_id': '195c0986-2ffa-43a3-9366-f2f96c42fe57', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['370919df-1bc3-43ae-aab2-8e112a4ddf47', 'de535624-278b-4927-9393-6d0cac3248df']}\n"
"{'event': 'on_chat_model_end', 'data': {'output': AIMessage(content='Bee defies physics; Barry chooses outfit for graduation day.', response_metadata={'stop_reason': 'end_turn', 'stop_sequence': None}, id='run-d23abc80-0dce-4f74-9d7b-fb98ca4f2a9e', usage_metadata={'input_tokens': 182, 'output_tokens': 16, 'total_tokens': 198}), 'input': {'messages': [[HumanMessage(content=\"You are an expert writer. Summarize the following text in 10 words or less:\\n\\n\\nNARRATOR:\\n(Black screen with text; The sound of buzzing bees can be heard)\\nAccording to all known laws of aviation, there is no way a bee should be able to fly. Its wings are too small to get its fat little body off the ground. The bee, of course, flies anyway because bees don't care what humans think is impossible.\\nBARRY BENSON:\\n(Barry is picking out a shirt)\\nYellow, black. Yellow, black. Yellow, black. Yellow, black. Ooh, black and yellow! Let's shake it up a little.\\nJANET BENSON:\\nBarry! Breakfast is ready!\\nBARRY:\\nComing! Hang on a second.\\n\")]]}}, 'run_id': 'd23abc80-0dce-4f74-9d7b-fb98ca4f2a9e', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['f25c41fe-8972-4893-bc40-cecf3922c1fa']}\n"
]
}
],
"source": [
"stream = special_summarization_tool.astream_events(\n",
"stream = special_summarization_tool_with_config.astream_events(\n",
" {\"long_text\": LONG_TEXT}, version=\"v2\"\n",
")\n",
"\n",
@@ -153,38 +219,38 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"And you can see that you get the raw response from the chat model.\n",
"Awesome! This time there's an event emitted.\n",
"\n",
"`astream_events()` will automatically call internal runnables in a chain with streaming enabled if possible, so if you wanted to a stream of tokens as they are generated from the chat model, you could simply filter our calls to look for `on_chat_model_stream` events with no other changes:"
"For streaming, `astream_events()` automatically calls internal runnables in a chain with streaming enabled if possible, so if you wanted to a stream of tokens as they are generated from the chat model, you could simply filter to look for `on_chat_model_stream` events with no other changes:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='', id='run-cd8c1bd9-64d8-463c-a4d7-4bceed7911b3', usage_metadata={'input_tokens': 182, 'output_tokens': 0, 'total_tokens': 182})}, 'run_id': 'cd8c1bd9-64d8-463c-a4d7-4bceed7911b3', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['8ddd1325-07c4-4213-8a2f-4462db8c6c70', '9f8654b4-b3f6-414e-b41d-dd201342a2fa']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='Bee', id='run-cd8c1bd9-64d8-463c-a4d7-4bceed7911b3')}, 'run_id': 'cd8c1bd9-64d8-463c-a4d7-4bceed7911b3', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['8ddd1325-07c4-4213-8a2f-4462db8c6c70', '9f8654b4-b3f6-414e-b41d-dd201342a2fa']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' def', id='run-cd8c1bd9-64d8-463c-a4d7-4bceed7911b3')}, 'run_id': 'cd8c1bd9-64d8-463c-a4d7-4bceed7911b3', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['8ddd1325-07c4-4213-8a2f-4462db8c6c70', '9f8654b4-b3f6-414e-b41d-dd201342a2fa']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='ies physics', id='run-cd8c1bd9-64d8-463c-a4d7-4bceed7911b3')}, 'run_id': 'cd8c1bd9-64d8-463c-a4d7-4bceed7911b3', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['8ddd1325-07c4-4213-8a2f-4462db8c6c70', '9f8654b4-b3f6-414e-b41d-dd201342a2fa']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=';', id='run-cd8c1bd9-64d8-463c-a4d7-4bceed7911b3')}, 'run_id': 'cd8c1bd9-64d8-463c-a4d7-4bceed7911b3', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['8ddd1325-07c4-4213-8a2f-4462db8c6c70', '9f8654b4-b3f6-414e-b41d-dd201342a2fa']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' Barry', id='run-cd8c1bd9-64d8-463c-a4d7-4bceed7911b3')}, 'run_id': 'cd8c1bd9-64d8-463c-a4d7-4bceed7911b3', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['8ddd1325-07c4-4213-8a2f-4462db8c6c70', '9f8654b4-b3f6-414e-b41d-dd201342a2fa']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' cho', id='run-cd8c1bd9-64d8-463c-a4d7-4bceed7911b3')}, 'run_id': 'cd8c1bd9-64d8-463c-a4d7-4bceed7911b3', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['8ddd1325-07c4-4213-8a2f-4462db8c6c70', '9f8654b4-b3f6-414e-b41d-dd201342a2fa']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='oses outfit', id='run-cd8c1bd9-64d8-463c-a4d7-4bceed7911b3')}, 'run_id': 'cd8c1bd9-64d8-463c-a4d7-4bceed7911b3', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['8ddd1325-07c4-4213-8a2f-4462db8c6c70', '9f8654b4-b3f6-414e-b41d-dd201342a2fa']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' for', id='run-cd8c1bd9-64d8-463c-a4d7-4bceed7911b3')}, 'run_id': 'cd8c1bd9-64d8-463c-a4d7-4bceed7911b3', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['8ddd1325-07c4-4213-8a2f-4462db8c6c70', '9f8654b4-b3f6-414e-b41d-dd201342a2fa']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' graduation', id='run-cd8c1bd9-64d8-463c-a4d7-4bceed7911b3')}, 'run_id': 'cd8c1bd9-64d8-463c-a4d7-4bceed7911b3', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['8ddd1325-07c4-4213-8a2f-4462db8c6c70', '9f8654b4-b3f6-414e-b41d-dd201342a2fa']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' day', id='run-cd8c1bd9-64d8-463c-a4d7-4bceed7911b3')}, 'run_id': 'cd8c1bd9-64d8-463c-a4d7-4bceed7911b3', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['8ddd1325-07c4-4213-8a2f-4462db8c6c70', '9f8654b4-b3f6-414e-b41d-dd201342a2fa']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='.', id='run-cd8c1bd9-64d8-463c-a4d7-4bceed7911b3')}, 'run_id': 'cd8c1bd9-64d8-463c-a4d7-4bceed7911b3', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['8ddd1325-07c4-4213-8a2f-4462db8c6c70', '9f8654b4-b3f6-414e-b41d-dd201342a2fa']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='', response_metadata={'stop_reason': 'end_turn', 'stop_sequence': None}, id='run-cd8c1bd9-64d8-463c-a4d7-4bceed7911b3', usage_metadata={'input_tokens': 0, 'output_tokens': 16, 'total_tokens': 16})}, 'run_id': 'cd8c1bd9-64d8-463c-a4d7-4bceed7911b3', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['8ddd1325-07c4-4213-8a2f-4462db8c6c70', '9f8654b4-b3f6-414e-b41d-dd201342a2fa']}\n"
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='', id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42', usage_metadata={'input_tokens': 182, 'output_tokens': 0, 'total_tokens': 182})}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='Bee', id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42')}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' def', id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42')}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='ies physics', id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42')}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=';', id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42')}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' Barry', id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42')}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' cho', id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42')}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='oses outfit', id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42')}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' for', id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42')}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' graduation', id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42')}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' day', id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42')}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='.', id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42')}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='', response_metadata={'stop_reason': 'end_turn', 'stop_sequence': None}, id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42', usage_metadata={'input_tokens': 0, 'output_tokens': 16, 'total_tokens': 16})}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n"
]
}
],
"source": [
"stream = special_summarization_tool.astream_events(\n",
"stream = special_summarization_tool_with_config.astream_events(\n",
" {\"long_text\": LONG_TEXT}, version=\"v2\"\n",
")\n",
"\n",
@@ -193,67 +259,17 @@
" print(event)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that you still have access to the final tool response as well. You can access it by looking for an `on_tool_end` event.\n",
"\n",
"To make events your tool emits easier to identify, you can also add identifiers to runnables using the `with_config()` method. `run_name` will apply to only to the runnable you attach it to, while `tags` will be inherited by runnables called within your initial runnable.\n",
"\n",
"Let's redeclare the tool with a tag, then run it with `astream_events()` with some filters. You should only see streamed events from the chat model and the final tool output:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='', id='run-696f4fc8-6c6f-46a0-8c82-e2e3f7625630', usage_metadata={'input_tokens': 182, 'output_tokens': 0, 'total_tokens': 182})}, 'run_id': '696f4fc8-6c6f-46a0-8c82-e2e3f7625630', 'name': 'ChatAnthropic', 'tags': ['seq:step:2', 'bee_movie'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['49d9d7d3-2b02-4964-a6c5-12f57a063146', '8922d0e3-4199-4ba5-9a7a-fc4f2fca3e72']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='Bee', id='run-696f4fc8-6c6f-46a0-8c82-e2e3f7625630')}, 'run_id': '696f4fc8-6c6f-46a0-8c82-e2e3f7625630', 'name': 'ChatAnthropic', 'tags': ['seq:step:2', 'bee_movie'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['49d9d7d3-2b02-4964-a6c5-12f57a063146', '8922d0e3-4199-4ba5-9a7a-fc4f2fca3e72']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' def', id='run-696f4fc8-6c6f-46a0-8c82-e2e3f7625630')}, 'run_id': '696f4fc8-6c6f-46a0-8c82-e2e3f7625630', 'name': 'ChatAnthropic', 'tags': ['seq:step:2', 'bee_movie'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['49d9d7d3-2b02-4964-a6c5-12f57a063146', '8922d0e3-4199-4ba5-9a7a-fc4f2fca3e72']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='ies physics', id='run-696f4fc8-6c6f-46a0-8c82-e2e3f7625630')}, 'run_id': '696f4fc8-6c6f-46a0-8c82-e2e3f7625630', 'name': 'ChatAnthropic', 'tags': ['seq:step:2', 'bee_movie'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['49d9d7d3-2b02-4964-a6c5-12f57a063146', '8922d0e3-4199-4ba5-9a7a-fc4f2fca3e72']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=';', id='run-696f4fc8-6c6f-46a0-8c82-e2e3f7625630')}, 'run_id': '696f4fc8-6c6f-46a0-8c82-e2e3f7625630', 'name': 'ChatAnthropic', 'tags': ['seq:step:2', 'bee_movie'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['49d9d7d3-2b02-4964-a6c5-12f57a063146', '8922d0e3-4199-4ba5-9a7a-fc4f2fca3e72']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' Barry', id='run-696f4fc8-6c6f-46a0-8c82-e2e3f7625630')}, 'run_id': '696f4fc8-6c6f-46a0-8c82-e2e3f7625630', 'name': 'ChatAnthropic', 'tags': ['seq:step:2', 'bee_movie'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['49d9d7d3-2b02-4964-a6c5-12f57a063146', '8922d0e3-4199-4ba5-9a7a-fc4f2fca3e72']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' cho', id='run-696f4fc8-6c6f-46a0-8c82-e2e3f7625630')}, 'run_id': '696f4fc8-6c6f-46a0-8c82-e2e3f7625630', 'name': 'ChatAnthropic', 'tags': ['seq:step:2', 'bee_movie'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['49d9d7d3-2b02-4964-a6c5-12f57a063146', '8922d0e3-4199-4ba5-9a7a-fc4f2fca3e72']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='oses outfit', id='run-696f4fc8-6c6f-46a0-8c82-e2e3f7625630')}, 'run_id': '696f4fc8-6c6f-46a0-8c82-e2e3f7625630', 'name': 'ChatAnthropic', 'tags': ['seq:step:2', 'bee_movie'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['49d9d7d3-2b02-4964-a6c5-12f57a063146', '8922d0e3-4199-4ba5-9a7a-fc4f2fca3e72']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' for', id='run-696f4fc8-6c6f-46a0-8c82-e2e3f7625630')}, 'run_id': '696f4fc8-6c6f-46a0-8c82-e2e3f7625630', 'name': 'ChatAnthropic', 'tags': ['seq:step:2', 'bee_movie'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['49d9d7d3-2b02-4964-a6c5-12f57a063146', '8922d0e3-4199-4ba5-9a7a-fc4f2fca3e72']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' graduation', id='run-696f4fc8-6c6f-46a0-8c82-e2e3f7625630')}, 'run_id': '696f4fc8-6c6f-46a0-8c82-e2e3f7625630', 'name': 'ChatAnthropic', 'tags': ['seq:step:2', 'bee_movie'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['49d9d7d3-2b02-4964-a6c5-12f57a063146', '8922d0e3-4199-4ba5-9a7a-fc4f2fca3e72']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' day', id='run-696f4fc8-6c6f-46a0-8c82-e2e3f7625630')}, 'run_id': '696f4fc8-6c6f-46a0-8c82-e2e3f7625630', 'name': 'ChatAnthropic', 'tags': ['seq:step:2', 'bee_movie'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['49d9d7d3-2b02-4964-a6c5-12f57a063146', '8922d0e3-4199-4ba5-9a7a-fc4f2fca3e72']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='.', id='run-696f4fc8-6c6f-46a0-8c82-e2e3f7625630')}, 'run_id': '696f4fc8-6c6f-46a0-8c82-e2e3f7625630', 'name': 'ChatAnthropic', 'tags': ['seq:step:2', 'bee_movie'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['49d9d7d3-2b02-4964-a6c5-12f57a063146', '8922d0e3-4199-4ba5-9a7a-fc4f2fca3e72']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='', response_metadata={'stop_reason': 'end_turn', 'stop_sequence': None}, id='run-696f4fc8-6c6f-46a0-8c82-e2e3f7625630', usage_metadata={'input_tokens': 0, 'output_tokens': 16, 'total_tokens': 16})}, 'run_id': '696f4fc8-6c6f-46a0-8c82-e2e3f7625630', 'name': 'ChatAnthropic', 'tags': ['seq:step:2', 'bee_movie'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['49d9d7d3-2b02-4964-a6c5-12f57a063146', '8922d0e3-4199-4ba5-9a7a-fc4f2fca3e72']}\n",
"{'event': 'on_tool_end', 'data': {'output': '.yad noitaudarg rof tiftuo sesoohc yrraB ;scisyhp seifed eeB'}, 'run_id': '49d9d7d3-2b02-4964-a6c5-12f57a063146', 'name': 'special_summarization_tool', 'tags': ['bee_movie'], 'metadata': {}, 'parent_ids': []}\n"
]
}
],
"source": [
"tagged_tool = special_summarization_tool.with_config({\"tags\": [\"bee_movie\"]})\n",
"\n",
"stream = tagged_tool.astream_events(\n",
" {\"long_text\": LONG_TEXT}, version=\"v2\", include_tags=[\"bee_movie\"]\n",
")\n",
"\n",
"async for event in stream:\n",
" event_type = event[\"event\"]\n",
" if event_type == \"on_chat_model_stream\" or event_type == \"on_tool_end\":\n",
" print(event)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Next steps\n",
"\n",
"Now you've learned how to stream events from within a tool. Next, you can learn more about how to use tools:\n",
"You've now seen how to stream events from within a tool. Next, check out the following guides for more on using tools:\n",
"\n",
"- Bind [model-specific tools](/docs/how_to/tools_model_specific/)\n",
"- Pass [runtime values to tools](/docs/how_to/tool_runtime)\n",
"- Pass [tool results back to a model](/docs/how_to/tool_results_pass_to_model)\n",
"- [Dispatch custom callback events](/docs/how_to/callbacks_custom_events)\n",
"\n",
"You can also check out some more specific uses of tool calling:\n",
"\n",
@@ -264,7 +280,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -278,9 +294,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.5"
"version": "3.11.4"
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}
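A standalone sketch of the manual propagation pattern this file documents, for running under `python<=3.10` (all names below are hypothetical, not from the notebook):

```python
from langchain_core.runnables import RunnableConfig, RunnableLambda

reverse = RunnableLambda(lambda text: text[::-1])


async def shout(text: str, config: RunnableConfig) -> str:
    # Forward the config explicitly so that events from the child runnable
    # surface in astream_events(), even on python<=3.10 where propagation
    # is not automatic.
    reversed_text = await reverse.ainvoke(text, config=config)
    return reversed_text.upper()


shout_runnable = RunnableLambda(shout)

# async for event in shout_runnable.astream_events("hello", version="v2"):
#     print(event["event"], event.get("name"))
```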

View File

@@ -37,7 +37,7 @@
"scrapfly_loader = ScrapflyLoader(\n",
" [\"https://web-scraping.dev/products\"],\n",
" api_key=\"Your ScrapFly API key\", # Get your API key from https://www.scrapfly.io/\n",
" ignore_scrape_failures=True, # Ignore unprocessable web pages and log their exceptions\n",
" continue_on_failure=True, # Ignore unprocessable web pages and log their exceptions\n",
")\n",
"\n",
"# Load documents from URLs as markdown\n",
@@ -72,7 +72,7 @@
"scrapfly_loader = ScrapflyLoader(\n",
" [\"https://web-scraping.dev/products\"],\n",
" api_key=\"Your ScrapFly API key\", # Get your API key from https://www.scrapfly.io/\n",
" ignore_scrape_failures=True, # Ignore unprocessable web pages and log their exceptions\n",
" continue_on_failure=True, # Ignore unprocessable web pages and log their exceptions\n",
" scrape_config=scrapfly_scrape_config, # Pass the scrape_config object\n",
" scrape_format=\"markdown\", # The scrape result format, either `markdown`(default) or `text`\n",
")\n",

View File

@@ -12,7 +12,7 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 1,
"id": "10ad9224",
"metadata": {
"ExecuteTime": {
@@ -1809,7 +1809,6 @@
"cell_type": "markdown",
"id": "0c69d84d",
"metadata": {
"jp-MarkdownHeadingCollapsed": true,
"tags": []
},
"source": [
@@ -1891,7 +1890,6 @@
"cell_type": "markdown",
"id": "5da41b77",
"metadata": {
"jp-MarkdownHeadingCollapsed": true,
"tags": []
},
"source": [
@@ -2149,6 +2147,7 @@
},
{
"cell_type": "markdown",
"id": "2ac1a8c7",
"metadata": {},
"source": [
"## SingleStoreDB Semantic Cache\n",
@@ -2173,6 +2172,353 @@
")"
]
},
{
"cell_type": "markdown",
"id": "7019c991-0101-4f9c-b212-5729a5471293",
"metadata": {},
"source": [
"## Couchbase Caches\n",
"\n",
"Use [Couchbase](https://couchbase.com/) as a cache for prompts and responses."
]
},
{
"cell_type": "markdown",
"id": "d6aac680-ba32-4c19-8864-6471cf0e7d5a",
"metadata": {},
"source": [
"### Couchbase Cache\n",
"\n",
"The standard cache that looks for an exact match of the user prompt."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "9b4764e4-c75f-4185-b326-524287a826be",
"metadata": {},
"outputs": [],
"source": [
"# Create couchbase connection object\n",
"from datetime import timedelta\n",
"\n",
"from couchbase.auth import PasswordAuthenticator\n",
"from couchbase.cluster import Cluster\n",
"from couchbase.options import ClusterOptions\n",
"from langchain_couchbase.cache import CouchbaseCache\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"COUCHBASE_CONNECTION_STRING = (\n",
" \"couchbase://localhost\" # or \"couchbases://localhost\" if using TLS\n",
")\n",
"DB_USERNAME = \"Administrator\"\n",
"DB_PASSWORD = \"Password\"\n",
"\n",
"auth = PasswordAuthenticator(DB_USERNAME, DB_PASSWORD)\n",
"options = ClusterOptions(auth)\n",
"cluster = Cluster(COUCHBASE_CONNECTION_STRING, options)\n",
"\n",
"# Wait until the cluster is ready for use.\n",
"cluster.wait_until_ready(timedelta(seconds=5))"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "4b5e73c5-92c1-4eab-84e2-77924ea9c123",
"metadata": {},
"outputs": [],
"source": [
"# Specify the bucket, scope and collection to store the cached documents\n",
"BUCKET_NAME = \"langchain-testing\"\n",
"SCOPE_NAME = \"_default\"\n",
"COLLECTION_NAME = \"_default\"\n",
"\n",
"set_llm_cache(\n",
" CouchbaseCache(\n",
" cluster=cluster,\n",
" bucket_name=BUCKET_NAME,\n",
" scope_name=SCOPE_NAME,\n",
" collection_name=COLLECTION_NAME,\n",
" )\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "db8d28cc-8d93-47b4-8326-57a29a06fb3c",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"CPU times: user 22.2 ms, sys: 14 ms, total: 36.2 ms\n",
"Wall time: 938 ms\n"
]
},
{
"data": {
"text/plain": [
"\"\\n\\nWhy couldn't the bicycle stand up by itself? Because it was two-tired!\""
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"%%time\n",
"# The first time, it is not yet in the cache, so it should take longer\n",
"llm.invoke(\"Tell me a joke\")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "b470dc81-2e7f-4743-9435-ce9071394eea",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"CPU times: user 53 ms, sys: 29 ms, total: 82 ms\n",
"Wall time: 84.2 ms\n"
]
},
{
"data": {
"text/plain": [
"\"\\n\\nWhy couldn't the bicycle stand up by itself? Because it was two-tired!\""
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"%%time\n",
"# The second time, it is in the cache, so it should be much faster\n",
"llm.invoke(\"Tell me a joke\")"
]
},
{
"cell_type": "markdown",
"id": "43626f33-d184-4260-b641-c9341cef5842",
"metadata": {},
"source": [
"### Couchbase Semantic Cache\n",
"Semantic caching allows users to retrieve cached prompts based on semantic similarity between the user input and previously cached inputs. Under the hood it uses Couchbase as both a cache and a vectorstore. This needs an appropriate Vector Search Index defined to work. Please look at the usage example on how to set up the index."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "6b470c03-d7fe-4270-89e1-638251619a53",
"metadata": {},
"outputs": [],
"source": [
"# Create Couchbase connection object\n",
"from datetime import timedelta\n",
"\n",
"from couchbase.auth import PasswordAuthenticator\n",
"from couchbase.cluster import Cluster\n",
"from couchbase.options import ClusterOptions\n",
"from langchain_couchbase.cache import CouchbaseSemanticCache\n",
"from langchain_openai import ChatOpenAI, OpenAIEmbeddings\n",
"\n",
"COUCHBASE_CONNECTION_STRING = (\n",
" \"couchbase://localhost\" # or \"couchbases://localhost\" if using TLS\n",
")\n",
"DB_USERNAME = \"Administrator\"\n",
"DB_PASSWORD = \"Password\"\n",
"\n",
"auth = PasswordAuthenticator(DB_USERNAME, DB_PASSWORD)\n",
"options = ClusterOptions(auth)\n",
"cluster = Cluster(COUCHBASE_CONNECTION_STRING, options)\n",
"\n",
"# Wait until the cluster is ready for use.\n",
"cluster.wait_until_ready(timedelta(seconds=5))"
]
},
{
"cell_type": "markdown",
"id": "f831bc4c-f330-4bd7-9b80-76771d91827e",
"metadata": {},
"source": [
"Notes:\n",
"- The search index for the semantic cache needs to be defined before using the semantic cache. \n",
"- The optional parameter, `score_threshold` in the Semantic Cache that you can use to tune the results of the semantic search.\n",
"\n",
"### How to Import an Index to the Full Text Search service?\n",
" - [Couchbase Server](https://docs.couchbase.com/server/current/search/import-search-index.html)\n",
" - Click on Search -> Add Index -> Import\n",
" - Copy the following Index definition in the Import screen\n",
" - Click on Create Index to create the index.\n",
" - [Couchbase Capella](https://docs.couchbase.com/cloud/search/import-search-index.html)\n",
" - Copy the index definition to a new file `index.json`\n",
" - Import the file in Capella using the instructions in the documentation.\n",
" - Click on Create Index to create the index.\n",
"\n",
"#### Example index for the vector search. \n",
" ```\n",
" {\n",
" \"type\": \"fulltext-index\",\n",
" \"name\": \"langchain-testing._default.semantic-cache-index\",\n",
" \"sourceType\": \"gocbcore\",\n",
" \"sourceName\": \"langchain-testing\",\n",
" \"planParams\": {\n",
" \"maxPartitionsPerPIndex\": 1024,\n",
" \"indexPartitions\": 16\n",
" },\n",
" \"params\": {\n",
" \"doc_config\": {\n",
" \"docid_prefix_delim\": \"\",\n",
" \"docid_regexp\": \"\",\n",
" \"mode\": \"scope.collection.type_field\",\n",
" \"type_field\": \"type\"\n",
" },\n",
" \"mapping\": {\n",
" \"analysis\": {},\n",
" \"default_analyzer\": \"standard\",\n",
" \"default_datetime_parser\": \"dateTimeOptional\",\n",
" \"default_field\": \"_all\",\n",
" \"default_mapping\": {\n",
" \"dynamic\": true,\n",
" \"enabled\": false\n",
" },\n",
" \"default_type\": \"_default\",\n",
" \"docvalues_dynamic\": false,\n",
" \"index_dynamic\": true,\n",
" \"store_dynamic\": true,\n",
" \"type_field\": \"_type\",\n",
" \"types\": {\n",
" \"_default.semantic-cache\": {\n",
" \"dynamic\": false,\n",
" \"enabled\": true,\n",
" \"properties\": {\n",
" \"embedding\": {\n",
" \"dynamic\": false,\n",
" \"enabled\": true,\n",
" \"fields\": [\n",
" {\n",
" \"dims\": 1536,\n",
" \"index\": true,\n",
" \"name\": \"embedding\",\n",
" \"similarity\": \"dot_product\",\n",
" \"type\": \"vector\",\n",
" \"vector_index_optimized_for\": \"recall\"\n",
" }\n",
" ]\n",
" },\n",
" \"metadata\": {\n",
" \"dynamic\": true,\n",
" \"enabled\": true\n",
" },\n",
" \"text\": {\n",
" \"dynamic\": false,\n",
" \"enabled\": true,\n",
" \"fields\": [\n",
" {\n",
" \"index\": true,\n",
" \"name\": \"text\",\n",
" \"store\": true,\n",
" \"type\": \"text\"\n",
" }\n",
" ]\n",
" }\n",
" }\n",
" }\n",
" }\n",
" },\n",
" \"store\": {\n",
" \"indexType\": \"scorch\",\n",
" \"segmentVersion\": 16\n",
" }\n",
" },\n",
" \"sourceParams\": {}\n",
" }\n",
" ```"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "ae0766c8-ea34-4604-b0dc-cf2bbe8077f4",
"metadata": {},
"outputs": [],
"source": [
"BUCKET_NAME = \"langchain-testing\"\n",
"SCOPE_NAME = \"_default\"\n",
"COLLECTION_NAME = \"semantic-cache\"\n",
"INDEX_NAME = \"semantic-cache-index\"\n",
"embeddings = OpenAIEmbeddings()\n",
"\n",
"cache = CouchbaseSemanticCache(\n",
" cluster=cluster,\n",
" embedding=embeddings,\n",
" bucket_name=BUCKET_NAME,\n",
" scope_name=SCOPE_NAME,\n",
" collection_name=COLLECTION_NAME,\n",
" index_name=INDEX_NAME,\n",
" score_threshold=0.8,\n",
")\n",
"\n",
"set_llm_cache(cache)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "a2e82743-10ea-4319-b43e-193475ae5449",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"The average lifespan of a dog is around 12 years, but this can vary depending on the breed, size, and overall health of the individual dog. Some smaller breeds may live longer, while larger breeds may have shorter lifespans. Proper care, diet, and exercise can also play a role in extending a dog's lifespan.\n",
"CPU times: user 826 ms, sys: 2.46 s, total: 3.28 s\n",
"Wall time: 2.87 s\n"
]
}
],
"source": [
"%%time\n",
"# The first time, it is not yet in the cache, so it should take longer\n",
"print(llm.invoke(\"How long do dogs live?\"))"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "c36f4e29-d872-4334-a1f1-0e6d10c5d9f2",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"The average lifespan of a dog is around 12 years, but this can vary depending on the breed, size, and overall health of the individual dog. Some smaller breeds may live longer, while larger breeds may have shorter lifespans. Proper care, diet, and exercise can also play a role in extending a dog's lifespan.\n",
"CPU times: user 9.82 ms, sys: 2.61 ms, total: 12.4 ms\n",
"Wall time: 311 ms\n"
]
}
],
"source": [
"%%time\n",
"# The second time, it is in the cache, so it should be much faster\n",
"print(llm.invoke(\"What is the expected lifespan of a dog?\"))"
]
},
{
"cell_type": "markdown",
"id": "ae1f5e1c-085e-4998-9f2d-b5867d2c3d5b",
@@ -2228,7 +2574,9 @@
"| langchain_core.caches | [InMemoryCache](https://api.python.langchain.com/en/latest/caches/langchain_core.caches.InMemoryCache.html) |\n",
"| langchain_elasticsearch.cache | [ElasticsearchCache](https://api.python.langchain.com/en/latest/cache/langchain_elasticsearch.cache.ElasticsearchCache.html) |\n",
"| langchain_mongodb.cache | [MongoDBAtlasSemanticCache](https://api.python.langchain.com/en/latest/cache/langchain_mongodb.cache.MongoDBAtlasSemanticCache.html) |\n",
"| langchain_mongodb.cache | [MongoDBCache](https://api.python.langchain.com/en/latest/cache/langchain_mongodb.cache.MongoDBCache.html) |\n"
"| langchain_mongodb.cache | [MongoDBCache](https://api.python.langchain.com/en/latest/cache/langchain_mongodb.cache.MongoDBCache.html) |\n",
"| langchain_couchbase.cache | [CouchbaseCache](https://api.python.langchain.com/en/latest/cache/langchain_couchbase.cache.CouchbaseCache.html) |\n",
"| langchain_couchbase.cache | [CouchbaseSemanticCache](https://api.python.langchain.com/en/latest/cache/langchain_couchbase.cache.CouchbaseSemanticCache.html) |\n"
]
},
{
@@ -2256,7 +2604,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.10.13"
}
},
"nbformat": 4,

View File

@@ -27,7 +27,7 @@
"outputs": [],
"source": [
"# Install the package\n",
"%pip install --upgrade --quiet dashscope"
"%pip install --upgrade --quiet langchain-community dashscope"
]
},
{

View File

@@ -27,3 +27,65 @@ See a [usage example](/docs/integrations/document_loaders/couchbase).
```python
from langchain_community.document_loaders.couchbase import CouchbaseLoader
```
## LLM Caches
### CouchbaseCache
Use Couchbase as a cache for prompts and responses.
See a [usage example](/docs/integrations/llm_caching/#couchbase-cache).
To import this cache:
```python
from langchain_couchbase.cache import CouchbaseCache
```
To use this cache with your LLMs:
```python
from langchain_core.globals import set_llm_cache
cluster = couchbase_cluster_connection_object
set_llm_cache(
CouchbaseCache(
cluster=cluster,
bucket_name=BUCKET_NAME,
scope_name=SCOPE_NAME,
collection_name=COLLECTION_NAME,
)
)
```
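Once the cache is set, a repeated identical prompt is served from Couchbase instead of the model. A quick check, assuming an `llm` is already configured:
```python
llm.invoke("Tell me a joke")  # first call: hits the model and populates the cache
llm.invoke("Tell me a joke")  # second call: served from CouchbaseCache
```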
### CouchbaseSemanticCache
Semantic caching allows users to retrieve cached prompts based on the semantic similarity between the user input and previously cached inputs. Under the hood it uses Couchbase as both a cache and a vectorstore.
The CouchbaseSemanticCache needs a Search Index to be defined before it can be used. See the [usage example](/docs/integrations/vectorstores/couchbase) for how to set up the index.
See a [usage example](/docs/integrations/llm_caching/#couchbase-semantic-cache).
To import this cache:
```python
from langchain_couchbase.cache import CouchbaseSemanticCache
```
To use this cache with your LLMs:
```python
from langchain_core.globals import set_llm_cache
# use any embedding provider...
from langchain_openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
cluster = couchbase_cluster_connection_object
set_llm_cache(
CouchbaseSemanticCache(
cluster=cluster,
embedding=embeddings,
bucket_name=BUCKET_NAME,
scope_name=SCOPE_NAME,
collection_name=COLLECTION_NAME,
index_name=INDEX_NAME,
)
)
```
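Unlike the exact-match cache, the semantic cache can also serve a rephrased prompt, provided it clears the similarity threshold. A quick check, again assuming an `llm` is configured:
```python
llm.invoke("How long do dogs live?")  # populates the semantic cache
llm.invoke("What is the expected lifespan of a dog?")  # semantically similar: cache hit
```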

View File

@@ -61,7 +61,7 @@ When ready to deploy, you can self-host models with NVIDIA NIM—which is includ
```python
from langchain_nvidia_ai_endpoints import ChatNVIDIA, NVIDIAEmbeddings, NVIDIARerank
# connect to an chat NIM running at localhost:8000, specifyig a specific model
# connect to a chat NIM running at localhost:8000, specifying a model
llm = ChatNVIDIA(base_url="http://localhost:8000/v1", model="meta/llama3-8b-instruct")
# connect to an embedding NIM running at localhost:8080
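# a sketch of the corresponding embeddings client (parameters are assumptions):
embedder = NVIDIAEmbeddings(base_url="http://localhost:8080/v1")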

View File

@@ -202,7 +202,7 @@ Prem Templates are also available for Streaming too.
## Prem Embeddings
In this section we are going to dicuss how we can get access to different embedding model using `PremEmbeddings` with LangChain. Lets start by importing our modules and setting our API Key.
In this section we cover how we can get access to different embedding models using `PremEmbeddings` with LangChain. Let's start by importing our modules and setting our API Key.
```python
import os
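# A minimal sketch of initializing PremEmbeddings (the project_id and model
# below are hypothetical assumptions, not values from this page):
# os.environ["PREMAI_API_KEY"] = "<your-premai-api-key>"
# from langchain_community.embeddings import PremEmbeddings
# embedder = PremEmbeddings(project_id=8, model="text-embedding-3-large")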

View File

@@ -21,7 +21,7 @@ whether for semantic search or example selection.
To import this vectorstore:
```python
from langchain_qdrant import Qdrant
from langchain_qdrant import QdrantVectorStore
```
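A minimal instantiation of the renamed class, assuming a locally running Qdrant server and an existing `embeddings` object (the collection name is hypothetical):
```python
from langchain_qdrant import QdrantVectorStore
from qdrant_client import QdrantClient

client = QdrantClient(url="http://localhost:6333")  # assumes a local Qdrant server
vector_store = QdrantVectorStore(
    client=client,
    collection_name="demo_collection",  # hypothetical collection
    embedding=embeddings,  # any LangChain embeddings instance
)
```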
For a more detailed walkthrough of the Qdrant wrapper, see [this notebook](/docs/integrations/vectorstores/qdrant)

View File

@@ -21,7 +21,7 @@ For example
```python
from langchain_openai import OpenAI
from langchain_core.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain_core.output_parsers import StrOutputParser
import os
os.environ['OPENAI_API_BASE'] = "https://shale.live/v1"
@@ -35,10 +35,11 @@ template = """Question: {question}
prompt = PromptTemplate.from_template(template)
llm_chain = LLMChain(prompt=prompt, llm=llm)
llm_chain = prompt | llm | StrOutputParser()
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
llm_chain.invoke(question)
```

View File

@@ -309,9 +309,9 @@
"documents = TextLoader(\"../../how_to/state_of_the_union.txt\").load()\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=100)\n",
"texts = text_splitter.split_documents(documents)\n",
"retriever = FAISS.from_documents(texts, CohereEmbeddings()).as_retriever(\n",
" search_kwargs={\"k\": 20}\n",
")\n",
"retriever = FAISS.from_documents(\n",
" texts, CohereEmbeddings(model=\"embed-english-v3.0\")\n",
").as_retriever(search_kwargs={\"k\": 20})\n",
"\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"docs = retriever.invoke(query)\n",
@@ -324,7 +324,8 @@
"metadata": {},
"source": [
"## Doing reranking with CohereRerank\n",
"Now let's wrap our base retriever with a `ContextualCompressionRetriever`. We'll add an `CohereRerank`, uses the Cohere rerank endpoint to rerank the returned results."
"Now let's wrap our base retriever with a `ContextualCompressionRetriever`. We'll add an `CohereRerank`, uses the Cohere rerank endpoint to rerank the returned results.\n",
"Do note that it is mandatory to specify the model name in CohereRerank!"
]
},
{
@@ -339,7 +340,7 @@
"from langchain_community.llms import Cohere\n",
"\n",
"llm = Cohere(temperature=0)\n",
"compressor = CohereRerank()\n",
"compressor = CohereRerank(model=\"rerank-english-v3.0\")\n",
"compression_retriever = ContextualCompressionRetriever(\n",
" base_compressor=compressor, base_retriever=retriever\n",
")\n",

View File

@@ -40,7 +40,9 @@
"metadata": {},
"outputs": [],
"source": [
"embeddings = CohereEmbeddings(model=\"embed-english-light-v3.0\")"
"embeddings = CohereEmbeddings(\n",
" model=\"embed-english-light-v3.0\"\n",
") # It is mandatory to pass a model parameter to initialize the CohereEmbeddings object"
]
},
{

View File

@@ -0,0 +1,310 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "683953b3",
"metadata": {},
"source": [
"# ApertureDB\n",
"\n",
"[ApertureDB](https://docs.aperturedata.io) is a database that stores, indexes, and manages multi-modal data like text, images, videos, bounding boxes, and embeddings, together with their associated metadata.\n",
"\n",
"This notebook explains how to use the embeddings functionality of ApertureDB."
]
},
{
"cell_type": "markdown",
"id": "e7393beb",
"metadata": {},
"source": [
"## Install ApertureDB Python SDK\n",
"\n",
"This installs the [Python SDK](https://docs.aperturedata.io/category/aperturedb-python-sdk) used to write client code for ApertureDB."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "a62cff8a-bcf7-4e33-bbbc-76999c2e3e20",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install --upgrade --quiet aperturedb"
]
},
{
"cell_type": "markdown",
"id": "4fe12f77",
"metadata": {},
"source": [
"## Run an ApertureDB instance\n",
"\n",
"To continue, you should have an [ApertureDB instance up and running](https://docs.aperturedata.io/HowToGuides/start/Setup) and configure your environment to use it. \n",
"There are various ways to do that, for example:\n",
"\n",
"```bash\n",
"docker run --publish 55555:55555 aperturedata/aperturedb-standalone\n",
"adb config create local --active --no-interactive\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "667eabca",
"metadata": {},
"source": [
"## Download some web documents\n",
"We're going to do a mini-crawl here of one web page."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "0798dfdb",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"USER_AGENT environment variable not set, consider setting it to identify your requests.\n"
]
}
],
"source": [
"# For loading documents from web\n",
"from langchain_community.document_loaders import WebBaseLoader\n",
"\n",
"loader = WebBaseLoader(\"https://docs.aperturedata.io\")\n",
"docs = loader.load()"
]
},
{
"cell_type": "markdown",
"id": "5f077d11",
"metadata": {},
"source": [
"## Select embeddings model\n",
"\n",
"We want to use OllamaEmbeddings so we have to import the necessary modules.\n",
"\n",
"Ollama can be set up as a docker container as described in the [documentation](https://hub.docker.com/r/ollama/ollama), for example:\n",
"```bash\n",
"# Run server\n",
"docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama\n",
"# Tell server to load a specific model\n",
"docker exec ollama ollama run llama2\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "8b6ed9cd-81b9-46e5-9c20-5aafca2844d0",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain_community.embeddings import OllamaEmbeddings\n",
"\n",
"embeddings = OllamaEmbeddings()"
]
},
{
"cell_type": "markdown",
"id": "b7b313e6",
"metadata": {},
"source": [
"## Split documents into segments\n",
"\n",
"We want to turn our single document into multiple segments."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "3c4b7b31",
"metadata": {},
"outputs": [],
"source": [
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"\n",
"text_splitter = RecursiveCharacterTextSplitter()\n",
"documents = text_splitter.split_documents(docs)"
]
},
{
"cell_type": "markdown",
"id": "46339d32",
"metadata": {},
"source": [
"## Create vectorstore from documents and embeddings\n",
"\n",
"This code creates a vectorstore in the ApertureDB instance.\n",
"Within the instance, this vectorstore is represented as a \"[descriptor set](https://docs.aperturedata.io/category/descriptorset-commands)\".\n",
"By default, the descriptor set is named `langchain`. The following code will generate embeddings for each document and store them in ApertureDB as descriptors. This will take a few seconds as the embeddings are bring generated."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "dcf88bdf",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain_community.vectorstores import ApertureDB\n",
"\n",
"vector_db = ApertureDB.from_documents(documents, embeddings)"
]
},
{
"cell_type": "markdown",
"id": "7672877b",
"metadata": {},
"source": [
"## Select a large language model\n",
"\n",
"Again, we use the Ollama server we set up for local processing."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "9a005e4b",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.llms import Ollama\n",
"\n",
"llm = Ollama(model=\"llama2\")"
]
},
{
"cell_type": "markdown",
"id": "cd54f2ad",
"metadata": {},
"source": [
"## Build a RAG chain\n",
"\n",
"Now we have all the components we need to create a RAG (Retrieval-Augmented Generation) chain. This chain does the following:\n",
"1. Generate embedding descriptor for user query\n",
"2. Find text segments that are similar to the user query using the vector store\n",
"3. Pass user query and context documents to the LLM using a prompt template\n",
"4. Return the LLM's answer"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "a8c513ab",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Based on the provided context, ApertureDB can store images. In fact, it is specifically designed to manage multimodal data such as images, videos, documents, embeddings, and associated metadata including annotations. So, ApertureDB has the capability to store and manage images.\n"
]
}
],
"source": [
"# Create prompt\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_template(\"\"\"Answer the following question based only on the provided context:\n",
"\n",
"<context>\n",
"{context}\n",
"</context>\n",
"\n",
"Question: {input}\"\"\")\n",
"\n",
"\n",
"# Create a chain that passes documents to an LLM\n",
"from langchain.chains.combine_documents import create_stuff_documents_chain\n",
"\n",
"document_chain = create_stuff_documents_chain(llm, prompt)\n",
"\n",
"\n",
"# Treat the vectorstore as a document retriever\n",
"retriever = vector_db.as_retriever()\n",
"\n",
"\n",
"# Create a RAG chain that connects the retriever to the LLM\n",
"from langchain.chains import create_retrieval_chain\n",
"\n",
"retrieval_chain = create_retrieval_chain(retriever, document_chain)"
]
},
{
"cell_type": "markdown",
"id": "3bc6a882",
"metadata": {},
"source": [
"## Run the RAG chain\n",
"\n",
"Finally we pass a question to the chain and get our answer. This will take a few seconds to run as the LLM generates an answer from the query and context documents."
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "020f29f1",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Based on the provided context, ApertureDB can store images in several ways:\n",
"\n",
"1. Multimodal data management: ApertureDB offers a unified interface to manage multimodal data such as images, videos, documents, embeddings, and associated metadata including annotations. This means that images can be stored along with other types of data in a single database instance.\n",
"2. Image storage: ApertureDB provides image storage capabilities through its integration with the public cloud providers or on-premise installations. This allows customers to host their own ApertureDB instances and store images on their preferred cloud provider or on-premise infrastructure.\n",
"3. Vector database: ApertureDB also offers a vector database that enables efficient similarity search and classification of images based on their semantic meaning. This can be useful for applications where image search and classification are important, such as in computer vision or machine learning workflows.\n",
"\n",
"Overall, ApertureDB provides flexible and scalable storage options for images, allowing customers to choose the deployment model that best suits their needs.\n"
]
}
],
"source": [
"user_query = \"How can ApertureDB store images?\"\n",
"response = retrieval_chain.invoke({\"input\": user_query})\n",
"print(response[\"answer\"])"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
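
As a quick follow-up to the notebook above, a minimal sketch (reusing its `vector_db`) of inspecting the raw retrieval scores that feed the RAG chain:

```python
# Minimal sketch, reusing `vector_db` from the notebook above.
# similarity_search_with_score returns (Document, distance) pairs,
# so you can see how close each segment is to the query.
query = "How can ApertureDB store images?"
for doc, distance in vector_db.similarity_search_with_score(query, k=4):
    print(f"{distance:.3f}", doc.page_content[:80].replace("\n", " "))
```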

View File

@@ -174,7 +174,10 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Similarity search"
"## Similarity search\n",
"Optional keyword arguments to similarity_search include specifying k number of documents to retrive, \n",
"a filters dictionary for metadata filtering based on [this syntax](https://docs.databricks.com/en/generative-ai/create-query-vector-search.html#use-filters-on-queries),\n",
"as well as the [query_type](https://api-docs.databricks.com/python/vector-search/databricks.vector_search.html#databricks.vector_search.index.VectorSearchIndex.similarity_search) which can be ANN or HYBRID "
]
},
{
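
For concreteness, a hedged sketch of the keyword arguments described above (`dvs` stands in for an already-configured `DatabricksVectorSearch` instance; the filter field is illustrative):

```python
# Hypothetical sketch of the optional similarity_search kwargs described above.
# `dvs` is assumed to be an existing DatabricksVectorSearch instance.
results = dvs.similarity_search(
    query="What is Databricks Vector Search?",
    k=5,                          # number of documents to retrieve
    filters={"source": "docs"},   # metadata filter, Databricks filter syntax
    query_type="HYBRID",          # "ANN" (default) or "HYBRID"
)
```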

View File

@@ -78,7 +78,7 @@
"# See docker command above to launch a postgres instance with pgvector enabled.\n",
"connection = \"postgresql+psycopg://langchain:langchain@localhost:6024/langchain\" # Uses psycopg3!\n",
"collection_name = \"my_docs\"\n",
"embeddings = CohereEmbeddings()\n",
"embeddings = CohereEmbeddings(model=\"embed-english-v3.0\")\n",
"\n",
"vectorstore = PGVector(\n",
" embeddings=embeddings,\n",

View File

@@ -8,14 +8,15 @@
"source": [
"# Qdrant\n",
"\n",
">[Qdrant](https://qdrant.tech/documentation/) (read: quadrant ) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. `Qdrant` is tailored to extended filtering support. It makes it useful for all sorts of neural network or semantic-based matching, faceted search, and other applications.\n",
">[Qdrant](https://qdrant.tech/documentation/) (read: quadrant ) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage vectors with additional payload and extended filtering support. It makes it useful for all sorts of neural network or semantic-based matching, faceted search, and other applications.\n",
"\n",
"This documentation demonstrates how to use Qdrant with Langchain for dense/sparse and hybrid retrieval.\n",
"\n",
"This notebook shows how to use functionality related to the `Qdrant` vector database. \n",
"> This page documents the `QdrantVectorStore` class that supports multiple retrieval modes via Qdrant's new [Query API](https://qdrant.tech/blog/qdrant-1.10.x/). It requires you to run Qdrant v1.10.0 or above.\n",
"\n",
"There are various modes of how to run `Qdrant`, and depending on the chosen one, there will be some subtle differences. The options include:\n",
"- Local mode, no server required\n",
"- On-premise server deployment\n",
"- Docker deployments\n",
"- Qdrant Cloud\n",
"\n",
"See the [installation instructions](https://qdrant.tech/documentation/install/)."
@@ -30,7 +31,7 @@
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-qdrant langchain-openai langchain langchain-community"
"%pip install langchain-qdrant langchain-openai langchain"
]
},
{
@@ -39,30 +40,7 @@
"id": "7b2f111b-357a-4f42-9730-ef0603bdc1b5",
"metadata": {},
"source": [
"We want to use `OpenAIEmbeddings` so we have to get the OpenAI API Key."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "082e7e8b-ac52-430c-98d6-8f0924457642",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"OpenAI API Key: ········\n"
]
}
],
"source": [
"import getpass\n",
"import os\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"OpenAI API Key:\")"
"We will use `OpenAIEmbeddings` for demonstration."
]
},
{
@@ -80,7 +58,7 @@
"source": [
"from langchain_community.document_loaders import TextLoader\n",
"from langchain_openai import OpenAIEmbeddings\n",
"from langchain_qdrant import Qdrant\n",
"from langchain_qdrant import QdrantVectorStore\n",
"from langchain_text_splitters import CharacterTextSplitter"
]
},
@@ -97,7 +75,7 @@
},
"outputs": [],
"source": [
"loader = TextLoader(\"../../how_to/state_of_the_union.txt\")\n",
"loader = TextLoader(\"some-file.txt\")\n",
"documents = loader.load()\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"docs = text_splitter.split_documents(documents)\n",
@@ -115,7 +93,7 @@
"\n",
"### Local mode\n",
"\n",
"Python client allows you to run the same code in local mode without running the Qdrant server. That's great for testing things out and debugging or if you plan to store just a small amount of vectors. The embeddings might be fully kepy in memory or persisted on disk.\n",
"Python client allows you to run the same code in local mode without running the Qdrant server. That's great for testing things out and debugging or storing just a small amount of vectors. The embeddings might be fully kept in memory or persisted on disk.\n",
"\n",
"#### In-memory\n",
"\n",
@@ -135,7 +113,7 @@
},
"outputs": [],
"source": [
"qdrant = Qdrant.from_documents(\n",
"qdrant = QdrantVectorStore.from_documents(\n",
" docs,\n",
" embeddings,\n",
" location=\":memory:\", # Local mode with in-memory storage only\n",
@@ -151,7 +129,7 @@
"source": [
"#### On-disk storage\n",
"\n",
"Local mode, without using the Qdrant server, may also store your vectors on disk so they're persisted between runs."
"Local mode, without using the Qdrant server, may also store your vectors on disk so they persist between runs."
]
},
{
@@ -167,7 +145,7 @@
},
"outputs": [],
"source": [
"qdrant = Qdrant.from_documents(\n",
"qdrant = QdrantVectorStore.from_documents(\n",
" docs,\n",
" embeddings,\n",
" path=\"/tmp/local_qdrant\",\n",
@@ -199,7 +177,7 @@
"outputs": [],
"source": [
"url = \"<---qdrant url here --->\"\n",
"qdrant = Qdrant.from_documents(\n",
"qdrant = QdrantVectorStore.from_documents(\n",
" docs,\n",
" embeddings,\n",
" url=url,\n",
@@ -233,7 +211,7 @@
"source": [
"url = \"<---qdrant cloud cluster url here --->\"\n",
"api_key = \"<---api key here--->\"\n",
"qdrant = Qdrant.from_documents(\n",
"qdrant = QdrantVectorStore.from_documents(\n",
" docs,\n",
" embeddings,\n",
" url=url,\n",
@@ -266,7 +244,7 @@
"metadata": {},
"outputs": [],
"source": [
"qdrant = Qdrant.from_existing_collection(\n",
"qdrant = QdrantVectorStore.from_existing_collection(\n",
" embeddings=embeddings,\n",
" collection_name=\"my_documents\",\n",
" url=\"http://localhost:6333\",\n",
@@ -297,7 +275,7 @@
"outputs": [],
"source": [
"url = \"<---qdrant url here --->\"\n",
"qdrant = Qdrant.from_documents(\n",
"qdrant = QdrantVectorStore.from_documents(\n",
" docs,\n",
" embeddings,\n",
" url=url,\n",
@@ -320,12 +298,31 @@
"source": [
"## Similarity search\n",
"\n",
"The simplest scenario for using Qdrant vector store is to perform a similarity search. Under the hood, our query will be encoded with the `embedding_function` and used to find similar documents in Qdrant collection."
"The simplest scenario for using Qdrant vector store is to perform a similarity search. Under the hood, our query will be encoded into vector embeddings and used to find similar documents in Qdrant collection.\n",
"\n",
"`QdrantVectorStore` supports 3 modes for similarity searches. They can be configured using the `retrieval_mode` parameter when setting up the class.\n",
"\n",
"- Dense Vector Search(Default)\n",
"- Sparse Vector Search\n",
"- Hybrid Search"
]
},
{
"cell_type": "markdown",
"id": "b3a78d46",
"metadata": {},
"source": [
"### Dense Vector Search\n",
"\n",
"To search with only dense vectors,\n",
"\n",
"- The `retrieval_mode` parameter should be set to `RetrievalMode.DENSE`(default).\n",
"- A [dense embeddings](https://python.langchain.com/v0.2/docs/integrations/text_embedding/) value should be provided for the `embedding` parameter."
]
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": null,
"id": "a8c513ab",
"metadata": {
"ExecuteTime": {
@@ -336,38 +333,108 @@
},
"outputs": [],
"source": [
"from langchain_qdrant import RetrievalMode\n",
"\n",
"qdrant = QdrantVectorStore.from_documents(\n",
" docs,\n",
" embedding=embeddings,\n",
" location=\":memory:\",\n",
" collection_name=\"my_documents\",\n",
" retrieval_mode=RetrievalMode.DENSE,\n",
")\n",
"\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"found_docs = qdrant.similarity_search(query)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "fc516993",
"metadata": {
"ExecuteTime": {
"end_time": "2023-04-04T10:51:25.220984Z",
"start_time": "2023-04-04T10:51:25.213943Z"
},
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while youre at it, pass the Disclose Act so Americans can know who is funding our elections. \n",
"\n",
"Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n",
"\n",
"One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n",
"\n",
"And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence.\n"
]
}
],
"cell_type": "markdown",
"id": "dbd93d85",
"metadata": {},
"source": [
"print(found_docs[0].page_content)"
"### Sparse Vector Search\n",
"\n",
"To search with only sparse vectors,\n",
"\n",
"- The `retrieval_mode` parameter should be set to `RetrievalMode.SPARSE`.\n",
"- An implementation of the [`SparseEmbeddings`](https://github.com/langchain-ai/langchain/blob/master/libs/partners/qdrant/langchain_qdrant/sparse_embeddings.py) interface using any sparse embeddings provider has to be provided as value to the `sparse_embedding` parameter.\n",
"\n",
"The `langchain-qdrant` package provides a [FastEmbed](https://github.com/qdrant/fastembed) based implementation out of the box.\n",
"\n",
"To use it, install the FastEmbed package."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ceb493a3",
"metadata": {},
"outputs": [],
"source": [
"%pip install fastembed"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "052e3412",
"metadata": {},
"outputs": [],
"source": [
"from langchain_qdrant import FastEmbedSparse, RetrievalMode\n",
"\n",
"sparse_embeddings = FastEmbedSparse(model_name=\"Qdrant/BM25\")\n",
"\n",
"qdrant = QdrantVectorStore.from_documents(\n",
" docs,\n",
" sparse_embedding=sparse_embeddings,\n",
" location=\":memory:\",\n",
" collection_name=\"my_documents\",\n",
" retrieval_mode=RetrievalMode.SPARSE,\n",
")\n",
"\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"found_docs = qdrant.similarity_search(query)"
]
},
{
"cell_type": "markdown",
"id": "f4b6c456",
"metadata": {},
"source": [
"### Hybrid Vector Search\n",
"\n",
"To perform a hybrid search using dense and sparse vectors with score fusion,\n",
"\n",
"- The `retrieval_mode` parameter should be set to `RetrievalMode.HYBRID`.\n",
"- A [dense embeddings](https://python.langchain.com/v0.2/docs/integrations/text_embedding/) value should be provided for the `embedding` parameter.\n",
"- An implementation of the [`SparseEmbeddings`](https://github.com/langchain-ai/langchain/blob/master/libs/partners/qdrant/langchain_qdrant/sparse_embeddings.py) interface using any sparse embeddings provider has to be provided as value to the `sparse_embedding` parameter.\n",
"\n",
"Note that if you've added documents with the `HYBRID` mode, you can switch to any retrieval mode when searching. Since both the dense and sparse vectors are available in the collection."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ce56f6e9",
"metadata": {},
"outputs": [],
"source": [
"from langchain_qdrant import FastEmbedSparse, RetrievalMode\n",
"\n",
"sparse_embeddings = FastEmbedSparse(model_name=\"Qdrant/BM25\")\n",
"\n",
"qdrant = QdrantVectorStore.from_documents(\n",
" docs,\n",
" embedding=embeddings,\n",
" sparse_embedding=sparse_embeddings,\n",
" location=\":memory:\",\n",
" collection_name=\"my_documents\",\n",
" retrieval_mode=RetrievalMode.HYBRID,\n",
")\n",
"\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"found_docs = qdrant.similarity_search(query)"
]
},
{
@@ -400,7 +467,7 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": null,
"id": "756a6887",
"metadata": {
"ExecuteTime": {
@@ -408,23 +475,7 @@
"start_time": "2023-04-04T10:51:25.635947Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while youre at it, pass the Disclose Act so Americans can know who is funding our elections. \n",
"\n",
"Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n",
"\n",
"One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n",
"\n",
"And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence.\n",
"\n",
"Score: 0.8153784913324512\n"
]
}
],
"outputs": [],
"source": [
"document, score = found_docs[0]\n",
"print(document.page_content)\n",
@@ -449,10 +500,10 @@
"metadata": {},
"source": [
"```python\n",
"from qdrant_client.http import models as rest\n",
"from qdrant_client.http import models\n",
"\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"found_docs = qdrant.similarity_search_with_score(query, filter=rest.Filter(...))\n",
"found_docs = qdrant.similarity_search_with_score(query, filter=models.Filter(...))\n",
"```"
]
},
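
Since `models.Filter(...)` is elided above, here is a hedged sketch of a concrete filter (the payload key is illustrative; adjust it to your metadata schema):

```python
# Illustrative metadata filter using qdrant_client's models.
from qdrant_client.http import models

metadata_filter = models.Filter(
    must=[
        models.FieldCondition(
            key="metadata.source",  # illustrative payload key
            match=models.MatchValue(value="some-file.txt"),
        )
    ]
)
found_docs = qdrant.similarity_search_with_score(query, filter=metadata_filter)
```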
@@ -469,7 +520,9 @@
"source": [
"## Maximum marginal relevance search (MMR)\n",
"\n",
"If you'd like to look up for some similar documents, but you'd also like to receive diverse results, MMR is method you should consider. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents."
"If you'd like to look up some similar documents, but you'd also like to receive diverse results, MMR is the method you should consider. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.\n",
"\n",
"Note that MMR search is only available if you've added documents with `DENSE` or `HYBRID` modes. Since it requires dense vectors."
]
},
{
@@ -490,7 +543,7 @@
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": null,
"id": "80c6db11",
"metadata": {
"ExecuteTime": {
@@ -498,40 +551,7 @@
"start_time": "2023-04-04T10:51:26.013329Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"1. Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while youre at it, pass the Disclose Act so Americans can know who is funding our elections. \n",
"\n",
"Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n",
"\n",
"One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n",
"\n",
"And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence. \n",
"\n",
"2. We cant change how divided weve been. But we can change how we move forward—on COVID-19 and other issues we must face together. \n",
"\n",
"I recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. \n",
"\n",
"They were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. \n",
"\n",
"Officer Mora was 27 years old. \n",
"\n",
"Officer Rivera was 22. \n",
"\n",
"Both Dominican Americans whod grown up on the same streets they later chose to patrol as police officers. \n",
"\n",
"I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves. \n",
"\n",
"Ive worked on these issues a long time. \n",
"\n",
"I know what works: Investing in crime prevention and community police officers wholl walk the beat, wholl know the neighborhood, and who can restore trust and safety. \n",
"\n"
]
}
],
"outputs": [],
"source": [
"for i, doc in enumerate(found_docs):\n",
" print(f\"{i + 1}.\", doc.page_content, \"\\n\")"
@@ -545,7 +565,7 @@
"source": [
"## Qdrant as a Retriever\n",
"\n",
"Qdrant, as all the other vector stores, is a LangChain Retriever, by using cosine similarity. "
"Qdrant, as all the other vector stores, is a LangChain Retriever. "
]
},
{
@@ -589,7 +609,7 @@
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": null,
"id": "f3c70c31",
"metadata": {
"ExecuteTime": {
@@ -597,18 +617,7 @@
"start_time": "2023-04-04T10:51:26.046407Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while youre at it, pass the Disclose Act so Americans can know who is funding our elections. \\n\\nTonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \\n\\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \\n\\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'})"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"retriever.invoke(query)[0]"
@@ -622,11 +631,11 @@
"source": [
"## Customizing Qdrant\n",
"\n",
"There are some options to use an existing Qdrant collection within your Langchain application. In such cases you may need to define how to map Qdrant point into the Langchain `Document`.\n",
"There are options to use an existing Qdrant collection within your Langchain application. In such cases, you may need to define how to map Qdrant point into the Langchain `Document`.\n",
"\n",
"### Named vectors\n",
"\n",
"Qdrant supports [multiple vectors per point](https://qdrant.tech/documentation/concepts/collections/#collection-with-multiple-vectors) by named vectors. Langchain requires just a single embedding per document and, by default, uses a single vector. However, if you work with a collection created externally or want to have the named vector used, you can configure it by providing its name.\n"
"Qdrant supports [multiple vectors per point](https://qdrant.tech/documentation/concepts/collections/#collection-with-multiple-vectors) by named vectors. If you work with a collection created externally or want to have the differently named vector used, you can configure it by providing its name.\n"
]
},
{
@@ -638,25 +647,18 @@
},
"outputs": [],
"source": [
"Qdrant.from_documents(\n",
"QdrantVectorStore.from_documents(\n",
" docs,\n",
" embeddings,\n",
" embedding=embeddings,\n",
" sparse_embedding=sparse_embeddings,\n",
" location=\":memory:\",\n",
" collection_name=\"my_documents_2\",\n",
" retrieval_mode=RetrievalMode.HYBRID,\n",
" vector_name=\"custom_vector\",\n",
" sparse_vector_name=\"custom_sparse_vector\",\n",
")"
]
},
{
"cell_type": "markdown",
"id": "b34f5230",
"metadata": {
"collapsed": false
},
"source": [
"As a Langchain user, you won't see any difference whether you use named vectors or not. Qdrant integration will handle the conversion under the hood."
]
},
{
"cell_type": "markdown",
"id": "b2350093",
@@ -694,7 +696,7 @@
},
"outputs": [],
"source": [
"Qdrant.from_documents(\n",
"QdrantVectorStore.from_documents(\n",
" docs,\n",
" embeddings,\n",
" location=\":memory:\",\n",
@@ -729,7 +731,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
"version": "3.11.8"
}
},
"nbformat": 4,

View File

@@ -22,6 +22,7 @@ import useBaseUrl from '@docusaurus/useBaseUrl';
light: useBaseUrl('/svg/langchain_stack_062024.svg'),
dark: useBaseUrl('/svg/langchain_stack_062024_dark.svg'),
}}
style={{ width: "100%" }}
title="LangChain Framework Overview"
/>

View File

@@ -269,7 +269,7 @@
"id": "b4991642-7275-40a9-b11a-e3beccbf2614",
"metadata": {},
"source": [
"Return documents based on similarity to a embedded query:"
"Return documents based on similarity to an embedded query:"
]
},
{

File diff suppressed because one or more lines are too long

Before

Width:  |  Height:  |  Size: 165 KiB

After

Width:  |  Height:  |  Size: 158 KiB

File diff suppressed because one or more lines are too long

Before

Width:  |  Height:  |  Size: 165 KiB

After

Width:  |  Height:  |  Size: 158 KiB

View File

@@ -72,6 +72,7 @@ rspace_client>=2.5.0,<3
scikit-learn>=1.2.2,<2
simsimd>=4.3.1,<5
sqlite-vss>=0.1.2,<0.2
sseclient-py>=1.8.0,<2
streamlit>=1.18.0,<2
sympy>=1.12,<2
telethon>=1.28.5,<2

View File

@@ -25,7 +25,7 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
DEFAULT_API_BASE = "https://api.endpoints.anyscale.com/v1"
DEFAULT_MODEL = "meta-llama/Llama-2-7b-chat-hf"
DEFAULT_MODEL = "meta-llama/Meta-Llama-3-8B-Instruct"
class ChatAnyscale(ChatOpenAI):

View File

@@ -22,6 +22,8 @@ from langchain_core.messages import (
ChatMessageChunk,
HumanMessage,
HumanMessageChunk,
SystemMessage,
SystemMessageChunk,
)
from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult
from langchain_core.pydantic_v1 import Field, SecretStr, root_validator
@@ -44,6 +46,8 @@ def _convert_message_to_dict(message: BaseMessage) -> dict:
message_dict = {"role": "user", "content": message.content}
elif isinstance(message, AIMessage):
message_dict = {"role": "assistant", "content": message.content}
elif isinstance(message, SystemMessage):
message_dict = {"role": "system", "content": message.content}
else:
raise TypeError(f"Got unknown type {message}")
@@ -56,6 +60,8 @@ def _convert_dict_to_message(_dict: Mapping[str, Any]) -> BaseMessage:
return HumanMessage(content=_dict["content"])
elif role == "assistant":
return AIMessage(content=_dict.get("content", "") or "")
elif role == "system":
return SystemMessage(content=_dict.get("content", ""))
else:
return ChatMessage(content=_dict["content"], role=role)
@@ -70,6 +76,8 @@ def _convert_delta_to_message_chunk(
return HumanMessageChunk(content=content)
elif role == "assistant" or default_class == AIMessageChunk:
return AIMessageChunk(content=content)
elif role == "system" or default_class == SystemMessageChunk:
return SystemMessageChunk(content=content)
elif role or default_class == ChatMessageChunk:
return ChatMessageChunk(content=content, role=role) # type: ignore[arg-type]
else:
@@ -113,7 +121,7 @@ class ChatBaichuan(BaseChatModel):
def lc_serializable(self) -> bool:
return True
baichuan_api_base: str = Field(default=DEFAULT_API_BASE)
baichuan_api_base: str = Field(default=DEFAULT_API_BASE, alias="base_url")
"""Baichuan custom endpoints"""
baichuan_api_key: SecretStr = Field(alias="api_key")
"""Baichuan API Key"""
@@ -121,6 +129,8 @@ class ChatBaichuan(BaseChatModel):
"""[DEPRECATED, keeping it for for backward compatibility] Baichuan Secret Key"""
streaming: bool = False
"""Whether to stream the results or not."""
max_tokens: Optional[int] = None
"""Maximum number of tokens to generate."""
request_timeout: int = Field(default=60, alias="timeout")
"""request timeout for chat http requests"""
model: str = "Baichuan2-Turbo-192K"
@@ -133,7 +143,8 @@ class ChatBaichuan(BaseChatModel):
top_p: float = 0.85
"""What probability mass to use."""
with_search_enhance: bool = False
"""Whether to use search enhance, default is False."""
"""[DEPRECATED, keeping it for for backward compatibility],
Whether to use search enhance, default is False."""
model_kwargs: Dict[str, Any] = Field(default_factory=dict)
"""Holds any model parameters valid for API call not explicitly specified."""
@@ -193,8 +204,8 @@ class ChatBaichuan(BaseChatModel):
"temperature": self.temperature,
"top_p": self.top_p,
"top_k": self.top_k,
"with_search_enhance": self.with_search_enhance,
"stream": self.streaming,
"max_tokens": self.max_tokens,
}
return {**normal_params, **self.model_kwargs}
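
A hedged sketch of how the fields changed above surface on the public class (the key value is a placeholder; `base_url` can likewise be passed via its new alias for custom endpoints):

```python
# Sketch of the new max_tokens field and the timeout/api_key aliases.
from langchain_community.chat_models import ChatBaichuan

chat = ChatBaichuan(
    api_key="your-baichuan-api-key",  # alias for baichuan_api_key
    max_tokens=256,                   # newly added cap on generated tokens
    timeout=60,                       # alias for request_timeout
)
```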

View File

@@ -32,6 +32,7 @@ from langchain_core.messages import (
SystemMessage,
ToolMessage,
)
from langchain_core.messages.ai import UsageMetadata
from langchain_core.output_parsers.base import OutputParserLike
from langchain_core.output_parsers.openai_tools import (
JsonOutputKeyToolsParser,
@@ -88,6 +89,7 @@ def _convert_dict_to_message(_dict: Mapping[str, Any]) -> AIMessage:
request_id=additional_kwargs["id"],
object=additional_kwargs.get("object", ""),
search_info=additional_kwargs.get("search_info", []),
usage=additional_kwargs.get("usage", None),
)
if additional_kwargs.get("function_call", {}):
@@ -102,6 +104,17 @@ def _convert_dict_to_message(_dict: Mapping[str, Any]) -> AIMessage:
}
]
if usage := additional_kwargs.get("usage", None):
return AIMessage(
content=content,
additional_kwargs=msg_additional_kwargs,
usage_metadata=UsageMetadata(
input_tokens=usage.prompt_tokens,
output_tokens=usage.completion_tokens,
total_tokens=usage.total_tokens,
),
)
return AIMessage(
content=content,
additional_kwargs=msg_additional_kwargs,
@@ -578,6 +591,7 @@ class QianfanChatEndpoint(BaseChatModel):
content=msg.content,
role="assistant",
additional_kwargs=additional_kwargs,
usage_metadata=msg.usage_metadata,
),
generation_info=msg.additional_kwargs,
)
@@ -605,6 +619,7 @@ class QianfanChatEndpoint(BaseChatModel):
content=msg.content,
role="assistant",
additional_kwargs=additional_kwargs,
usage_metadata=msg.usage_metadata,
),
generation_info=msg.additional_kwargs,
)
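
A hedged sketch of the effect: token usage now arrives on the returned message (assumes `QIANFAN_AK`/`QIANFAN_SK` are configured in the environment):

```python
# Sketch: usage metadata threaded through to the AIMessage by this change.
from langchain_community.chat_models import QianfanChatEndpoint

chat = QianfanChatEndpoint()
msg = chat.invoke("Hello!")
print(msg.usage_metadata)
# e.g. {'input_tokens': 2, 'output_tokens': 8, 'total_tokens': 10}
```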

View File

@@ -48,6 +48,7 @@ from langchain_core.messages import (
ToolMessage,
)
from langchain_core.messages.tool import ToolCall
from langchain_core.messages.tool import tool_call as create_tool_call
from langchain_core.outputs import (
ChatGeneration,
ChatGenerationChunk,
@@ -96,7 +97,7 @@ def _parse_tool_calling(tool_call: dict) -> ToolCall:
name = tool_call["function"].get("name", "")
args = json.loads(tool_call["function"]["arguments"])
id = tool_call.get("id")
return ToolCall(name=name, args=args, id=id)
return create_tool_call(name=name, args=args, id=id)
def _convert_to_tool_calling(tool_call: ToolCall) -> Dict[str, Any]:

View File

@@ -36,9 +36,11 @@ from langchain_core.messages import (
InvalidToolCall,
SystemMessage,
ToolCall,
ToolCallChunk,
ToolMessage,
)
from langchain_core.messages.tool import invalid_tool_call as create_invalid_tool_call
from langchain_core.messages.tool import tool_call as create_tool_call
from langchain_core.messages.tool import tool_call_chunk as create_tool_call_chunk
from langchain_core.output_parsers.base import OutputParserLike
from langchain_core.output_parsers.openai_tools import (
JsonOutputKeyToolsParser,
@@ -63,7 +65,7 @@ def _result_to_chunked_message(generated_result: ChatResult) -> ChatGenerationCh
message = generated_result.generations[0].message
if isinstance(message, AIMessage) and message.tool_calls is not None:
tool_call_chunks = [
ToolCallChunk(
create_tool_call_chunk(
name=tool_call["name"],
args=json.dumps(tool_call["args"]),
id=tool_call["id"],
@@ -189,7 +191,7 @@ def _extract_tool_calls_from_edenai_response(
for raw_tool_call in raw_tool_calls:
try:
tool_calls.append(
ToolCall(
create_tool_call(
name=raw_tool_call["name"],
args=json.loads(raw_tool_call["arguments"]),
id=raw_tool_call["id"],
@@ -197,7 +199,7 @@ def _extract_tool_calls_from_edenai_response(
)
except json.JSONDecodeError as exc:
invalid_tool_calls.append(
InvalidToolCall(
create_invalid_tool_call(
name=raw_tool_call.get("name"),
args=raw_tool_call.get("arguments"),
id=raw_tool_call.get("id"),

View File

@@ -4,6 +4,7 @@ import asyncio
import functools
import json
import logging
from operator import itemgetter
from typing import (
Any,
AsyncIterator,
@@ -40,7 +41,10 @@ from langchain_core.messages import (
ToolMessage,
ToolMessageChunk,
)
from langchain_core.output_parsers.base import OutputParserLike
from langchain_core.output_parsers.openai_tools import (
JsonOutputKeyToolsParser,
PydanticToolsParser,
make_invalid_tool_call,
parse_tool_call,
)
@@ -50,7 +54,7 @@ from langchain_core.outputs import (
ChatResult,
)
from langchain_core.pydantic_v1 import BaseModel, Field, SecretStr
from langchain_core.runnables import Runnable
from langchain_core.runnables import Runnable, RunnableMap, RunnablePassthrough
from langchain_core.tools import BaseTool
from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env, pre_init
from langchain_core.utils.function_calling import convert_to_openai_tool
@@ -372,6 +376,33 @@ class ChatTongyi(BaseChatModel):
}
]
Structured output:
.. code-block:: python
from typing import Optional
from langchain_core.pydantic_v1 import BaseModel, Field
class Joke(BaseModel):
'''Joke to tell user.'''
setup: str = Field(description="The setup of the joke")
punchline: str = Field(description="The punchline to the joke")
rating: Optional[int] = Field(description="How funny the joke is, from 1 to 10")
structured_chat = tongyi_chat.with_structured_output(Joke)
structured_chat.invoke("Tell me a joke about cats")
.. code-block:: python
Joke(
setup='Why did the cat join the band?',
punchline='Because it wanted to be a solo purr-sonality!',
rating=None
)
Response metadata
.. code-block:: python
@@ -791,3 +822,70 @@ class ChatTongyi(BaseChatModel):
formatted_tools = [convert_to_openai_tool(tool) for tool in tools]
return super().bind(tools=formatted_tools, **kwargs)
def with_structured_output(
self,
schema: Union[Dict, Type[BaseModel]],
*,
include_raw: bool = False,
**kwargs: Any,
) -> Runnable[LanguageModelInput, Union[Dict, BaseModel]]:
"""Model wrapper that returns outputs formatted to match the given schema.
Args:
schema: The output schema as a dict or a Pydantic class. If a Pydantic class
then the model output will be an object of that class. If a dict then
the model output will be a dict. With a Pydantic class the returned
attributes will be validated, whereas with a dict they will not be. If
`method` is "function_calling" and `schema` is a dict, then the dict
must match the OpenAI function-calling spec.
include_raw: If False then only the parsed structured output is returned. If
an error occurs during model output parsing it will be raised. If True
then both the raw model response (a BaseMessage) and the parsed model
response will be returned. If an error occurs during output parsing it
will be caught and returned as well. The final output is always a dict
with keys "raw", "parsed", and "parsing_error".
Returns:
A Runnable that takes any ChatModel input and returns as output:
If include_raw is True then a dict with keys:
raw: BaseMessage
parsed: Optional[_DictOrPydantic]
parsing_error: Optional[BaseException]
If include_raw is False then just _DictOrPydantic is returned,
where _DictOrPydantic depends on the schema:
If schema is a Pydantic class then _DictOrPydantic is the Pydantic
class.
If schema is a dict then _DictOrPydantic is a dict.
"""
if kwargs:
raise ValueError(f"Received unsupported arguments {kwargs}")
is_pydantic_schema = isinstance(schema, type) and issubclass(schema, BaseModel)
llm = self.bind_tools([schema])
if is_pydantic_schema:
output_parser: OutputParserLike = PydanticToolsParser(
tools=[schema], # type: ignore[list-item]
first_tool_only=True, # type: ignore[list-item]
)
else:
key_name = convert_to_openai_tool(schema)["function"]["name"]
output_parser = JsonOutputKeyToolsParser(
key_name=key_name, first_tool_only=True
)
if include_raw:
parser_assign = RunnablePassthrough.assign(
parsed=itemgetter("raw") | output_parser, parsing_error=lambda _: None
)
parser_none = RunnablePassthrough.assign(parsed=lambda _: None)
parser_with_fallback = parser_assign.with_fallbacks(
[parser_none], exception_key="parsing_error"
)
return RunnableMap(raw=llm) | parser_with_fallback
else:
return llm | output_parser
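
A hedged usage sketch of the method above, mirroring its docstring (assumes `DASHSCOPE_API_KEY` is set; the model name is illustrative):

```python
# Usage sketch for ChatTongyi.with_structured_output, per the docstring above.
from typing import Optional

from langchain_community.chat_models.tongyi import ChatTongyi
from langchain_core.pydantic_v1 import BaseModel, Field


class Joke(BaseModel):
    """Joke to tell user."""

    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline to the joke")
    rating: Optional[int] = Field(description="How funny the joke is, from 1 to 10")


chat = ChatTongyi(model="qwen-max")  # model name is illustrative
structured = chat.with_structured_output(Joke, include_raw=True)
result = structured.invoke("Tell me a joke about cats")
# result is a dict with keys "raw", "parsed", and "parsing_error"
```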

View File

@@ -60,7 +60,7 @@ class HuggingFaceCrossEncoder(BaseModel, BaseCrossEncoder):
List of scores, one for each pair.
"""
scores = self.client.predict(text_pairs)
# Somes models e.g bert-multilingual-passage-reranking-msmarco
# Some models e.g bert-multilingual-passage-reranking-msmarco
# gives two scores, not_relevant and relevant, as compared with the query.
if len(scores.shape) > 1: # we are going to get the relevant scores
scores = map(lambda x: x[1], scores)

View File

@@ -1,4 +1,3 @@
import asyncio
import logging
from typing import AsyncIterator, Iterator, List, Optional
@@ -70,6 +69,33 @@ class AsyncChromiumLoader(BaseLoader):
await browser.close()
return results
def scrape_playwright(self, url: str) -> str:
"""
Synchronously scrape the content of a given URL using Playwright's sync API.
Args:
url (str): The URL to scrape.
Returns:
str: The scraped HTML content or an error message if an exception occurs.
"""
from playwright.sync_api import sync_playwright
logger.info("Starting scraping...")
results = ""
with sync_playwright() as p:
browser = p.chromium.launch(headless=True)
try:
page = browser.new_page()
page.goto(url)
results = page.content()
logger.info("Content scraped")
except Exception as e:
results = f"Error: {e}"
browser.close()
return results
def lazy_load(self) -> Iterator[Document]:
"""
Lazily load text content from the provided URLs.
@@ -82,7 +108,13 @@ class AsyncChromiumLoader(BaseLoader):
"""
for url in self.urls:
html_content = asyncio.run(self.ascrape_playwright(url))
html_content = self.scrape_playwright(url)
metadata = {"source": url}
yield Document(page_content=html_content, metadata=metadata)
async def alazy_load(self) -> AsyncIterator[Document]:
for url in self.urls:
html_content = await self.ascrape_playwright(url)
metadata = {"source": url}
yield Document(page_content=html_content, metadata=metadata)
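
A hedged usage sketch covering both load paths after this change (requires `pip install playwright` and `playwright install chromium`):

```python
# Sketch: lazy_load now uses Playwright's sync API; alazy_load is async.
from langchain_community.document_loaders import AsyncChromiumLoader

loader = AsyncChromiumLoader(["https://example.com"])

docs = list(loader.lazy_load())  # synchronous path via scrape_playwright

# Async path, from inside a coroutine:
# async for doc in loader.alazy_load():
#     print(doc.metadata["source"])
```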

View File

@@ -60,7 +60,7 @@ class AscendEmbeddings(Embeddings, BaseModel):
raise ValueError("model_path is required")
if not os.access(values["model_path"], os.F_OK):
raise FileNotFoundError(
f"Unabled to find valid model path in [{values['model_path']}]"
f"Unable to find valid model path in [{values['model_path']}]"
)
try:
import torch_npu

View File

@@ -61,7 +61,7 @@ class AzureOpenAIEmbeddings(OpenAIEmbeddings):
# TODO: Remove OPENAI_API_KEY support to avoid possible conflict when using
# other forms of azure credentials.
values["openai_api_key"] = (
values["openai_api_key"]
values.get("openai_api_key")
or os.getenv("AZURE_OPENAI_API_KEY")
or os.getenv("OPENAI_API_KEY")
)
@@ -75,7 +75,7 @@ class AzureOpenAIEmbeddings(OpenAIEmbeddings):
values, "openai_api_type", "OPENAI_API_TYPE", default="azure"
)
values["openai_organization"] = (
values["openai_organization"]
values.get("openai_organization")
or os.getenv("OPENAI_ORG_ID")
or os.getenv("OPENAI_ORGANIZATION")
)
@@ -85,10 +85,10 @@ class AzureOpenAIEmbeddings(OpenAIEmbeddings):
"OPENAI_PROXY",
default="",
)
values["azure_endpoint"] = values["azure_endpoint"] or os.getenv(
values["azure_endpoint"] = values.get("azure_endpoint") or os.getenv(
"AZURE_OPENAI_ENDPOINT"
)
values["azure_ad_token"] = values["azure_ad_token"] or os.getenv(
values["azure_ad_token"] = values.get("azure_ad_token") or os.getenv(
"AZURE_OPENAI_AD_TOKEN"
)
# Azure OpenAI embedding models allow a maximum of 16 texts
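
A hedged sketch of why the `.get()` changes matter: the optional fields can now be omitted entirely and resolved from the environment instead of having to be passed explicitly:

```python
# Sketch: construction without explicitly passing the optional fields.
# Assumes AZURE_OPENAI_ENDPOINT and AZURE_OPENAI_API_KEY (or OPENAI_API_KEY)
# are set in the environment.
from langchain_community.embeddings import AzureOpenAIEmbeddings

emb = AzureOpenAIEmbeddings()
vector = emb.embed_query("hello")
```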

View File

@@ -4,7 +4,7 @@ import logging
from typing import Any, Dict, List, Optional
from langchain_core.embeddings import Embeddings
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.pydantic_v1 import BaseModel, Field, SecretStr
from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env, pre_init
logger = logging.getLogger(__name__)
@@ -13,10 +13,10 @@ logger = logging.getLogger(__name__)
class QianfanEmbeddingsEndpoint(BaseModel, Embeddings):
"""`Baidu Qianfan Embeddings` embedding models."""
qianfan_ak: Optional[str] = None
qianfan_ak: Optional[SecretStr] = None
"""Qianfan application apikey"""
qianfan_sk: Optional[str] = None
qianfan_sk: Optional[SecretStr] = None
"""Qianfan application secretkey"""
chunk_size: int = 16
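
A hedged sketch of the `SecretStr` change: key material is now masked when the object is printed or logged:

```python
# Sketch: SecretStr masks the keys in reprs and logs.
from langchain_community.embeddings import QianfanEmbeddingsEndpoint

emb = QianfanEmbeddingsEndpoint(qianfan_ak="my-ak", qianfan_sk="my-sk")
print(emb.qianfan_ak)  # SecretStr('**********')
```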

View File

@@ -640,6 +640,12 @@ def _import_yuan2() -> Type[BaseLLM]:
return Yuan2
def _import_you() -> Type[BaseLLM]:
from langchain_community.llms.you import You
return You
def _import_volcengine_maas() -> Type[BaseLLM]:
from langchain_community.llms.volcengine_maas import VolcEngineMaasLLM
@@ -847,6 +853,8 @@ def __getattr__(name: str) -> Any:
return _import_yandex_gpt()
elif name == "Yuan2":
return _import_yuan2()
elif name == "You":
return _import_you()
elif name == "VolcEngineMaasLLM":
return _import_volcengine_maas()
elif name == "type_to_cls_dict":
@@ -959,6 +967,7 @@ __all__ = [
"Writer",
"Xinference",
"YandexGPT",
"You",
"Yuan2",
]
@@ -1056,6 +1065,7 @@ def get_type_to_cls_dict() -> Dict[str, Callable[[], Type[BaseLLM]]]:
"qianfan_endpoint": _import_baidu_qianfan_endpoint,
"yandex_gpt": _import_yandex_gpt,
"yuan2": _import_yuan2,
"you": _import_you,
"VolcEngineMaasLLM": _import_volcengine_maas,
"SparkLLM": _import_sparkllm,
}

View File

@@ -1,5 +1,6 @@
import json
import logging
import os
from typing import Any, AsyncIterator, Dict, Iterator, List, Mapping, Optional
from langchain_core._api.deprecation import deprecated
@@ -11,7 +12,6 @@ from langchain_core.language_models.llms import LLM
from langchain_core.outputs import GenerationChunk
from langchain_core.pydantic_v1 import Extra, Field, root_validator
from langchain_core.utils import (
get_from_dict_or_env,
get_pydantic_field_names,
pre_init,
)
@@ -177,16 +177,17 @@ class HuggingFaceEndpoint(LLM):
"Could not import huggingface_hub python package. "
"Please install it with `pip install huggingface_hub`."
)
try:
huggingfacehub_api_token = get_from_dict_or_env(
values, "huggingfacehub_api_token", "HUGGINGFACEHUB_API_TOKEN"
)
login(token=huggingfacehub_api_token)
except Exception as e:
raise ValueError(
"Could not authenticate with huggingface_hub. "
"Please check your API token."
) from e
huggingfacehub_api_token = values["huggingfacehub_api_token"] or os.getenv(
"HUGGINGFACEHUB_API_TOKEN"
)
if huggingfacehub_api_token is not None:
try:
login(token=huggingfacehub_api_token)
except Exception as e:
raise ValueError(
"Could not authenticate with huggingface_hub. "
"Please check your API token."
) from e
from huggingface_hub import AsyncInferenceClient, InferenceClient
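
A hedged sketch of the behavior change: login is attempted only when a token is actually available, so public endpoints work without one (the repo id is illustrative):

```python
# Sketch: the token is optional; read from HUGGINGFACEHUB_API_TOKEN if set.
from langchain_community.llms import HuggingFaceEndpoint

llm = HuggingFaceEndpoint(
    repo_id="HuggingFaceH4/zephyr-7b-beta",  # illustrative public model
    max_new_tokens=64,
)
```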

View File

@@ -112,18 +112,20 @@ class _OllamaCommon(BaseLanguageModel):
"""Timeout for the request stream"""
keep_alive: Optional[Union[int, str]] = None
"""How long the model will stay loaded into memory."""
"""How long the model will stay loaded into memory.
raw: Optional[bool] = None
"""raw or not.""
The parameter (Default: 5 minutes) can be set to:
1. a duration string in Golang (such as "10m" or "24h");
2. a number in seconds (such as 3600);
3. any negative number which will keep the model loaded \
in memory (e.g. -1 or "-1m");
4. 0 which will unload the model immediately after generating a response;
See the [Ollama documents](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-keep-a-model-loaded-in-memory-or-make-it-unload-immediately)"""
raw: Optional[bool] = None
"""raw or not."""
headers: Optional[dict] = None
"""Additional headers to pass to endpoint (e.g. Authorization, Referer).
This is useful when Ollama is hosted on cloud services that require

View File

@@ -0,0 +1,140 @@
import os
from typing import Any, Dict, Generator, Iterator, List, Literal, Optional
import requests
from langchain_core.callbacks.manager import CallbackManagerForLLMRun
from langchain_core.language_models.llms import LLM
from langchain_core.outputs import GenerationChunk
from langchain_core.pydantic_v1 import Field
SMART_ENDPOINT = "https://chat-api.you.com/smart"
RESEARCH_ENDPOINT = "https://chat-api.you.com/research"
def _request(base_url: str, api_key: str, **kwargs: Any) -> Dict[str, Any]:
"""
NOTE: This function can be replaced by an OpenAPI-generated Python SDK in the future,
for better input/output typing support.
"""
headers = {"x-api-key": api_key}
response = requests.post(base_url, headers=headers, json=kwargs)
response.raise_for_status()
return response.json()
def _request_stream(
base_url: str, api_key: str, **kwargs: Any
) -> Generator[str, None, None]:
headers = {"x-api-key": api_key}
params = dict(**kwargs, stream=True)
response = requests.post(base_url, headers=headers, stream=True, json=params)
response.raise_for_status()
# Explicitly coercing the response to a generator to satisfy mypy
event_source = (bytestring for bytestring in response)
try:
import sseclient
client = sseclient.SSEClient(event_source)
except ImportError:
raise ImportError(
(
"Could not import `sseclient`. "
"Please install it with `pip install sseclient-py`."
)
)
for event in client.events():
if event.event in ("search_results", "done"):
pass
elif event.event == "token":
yield event.data
elif event.event == "error":
raise ValueError(f"Error in response: {event.data}")
else:
raise NotImplementedError(f"Unknown event type {event.event}")
class You(LLM):
"""Wrapper around You.com's conversational Smart and Research APIs.
Each API endpoint is designed to generate conversational
responses to a variety of query types, including inline citations
and web results when relevant.
Smart Endpoint:
- Quick, reliable answers for a variety of questions
- Cites the entire web page URL
Research Endpoint:
- In-depth answers with extensive citations for a variety of questions
- Cites the specific web page snippet relevant to the claim
Connecting to the You.com API requires an API key, which
you can get at https://api.you.com.
For more information, check out the documentation at
https://documentation.you.com/api-reference/.
Args:
endpoint: You.com conversational endpoints. Choose from "smart" or "research"
ydc_api_key: You.com API key, if `YDC_API_KEY` is not set in the environment
"""
endpoint: Literal["smart", "research"] = Field(
"smart",
description=(
'You.com conversational endpoints. Choose from "smart" or "research"'
),
)
ydc_api_key: Optional[str] = Field(
None,
description="You.com API key, if `YDC_API_KEY` is not set in the envrioment",
)
def _call(
self,
prompt: str,
stop: Optional[List[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
**kwargs: Any,
) -> str:
if stop:
raise NotImplementedError(
"Stop words are not implemented for You.com endpoints."
)
params = {"query": prompt}
response = _request(self._request_endpoint, api_key=self._api_key, **params)
return response["answer"]
def _stream(
self,
prompt: str,
stop: Optional[List[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
**kwargs: Any,
) -> Iterator[GenerationChunk]:
if stop:
raise NotImplementedError(
"Stop words are not implemented for You.com endpoints."
)
params = {"query": prompt}
for token in _request_stream(
self._request_endpoint, api_key=self._api_key, **params
):
yield GenerationChunk(text=token)
@property
def _request_endpoint(self) -> str:
if self.endpoint == "smart":
return SMART_ENDPOINT
return RESEARCH_ENDPOINT
@property
def _api_key(self) -> str:
return self.ydc_api_key or os.environ["YDC_API_KEY"]
@property
def _llm_type(self) -> str:
return "you.com"

View File

@@ -31,14 +31,18 @@ class TavilySearchAPIRetriever(BaseRetriever):
self, query: str, *, run_manager: CallbackManagerForRetrieverRun
) -> List[Document]:
try:
from tavily import Client
try:
from tavily import TavilyClient
except ImportError:
# Older versions of tavily used Client
from tavily import Client as TavilyClient
except ImportError:
raise ImportError(
"Tavily python package not found. "
"Please install it with `pip install tavily-python`."
)
tavily = Client(api_key=self.api_key or os.environ["TAVILY_API_KEY"])
tavily = TavilyClient(api_key=self.api_key or os.environ["TAVILY_API_KEY"])
max_results = self.k if not self.include_generated_answer else self.k - 1
response = tavily.search(
query=query,

View File

@@ -72,7 +72,7 @@ class SQLStore(BaseStore[str, bytes]):
from langchain_rag.storage import SQLStore
# Instantiate the SQLStore with the root path
sql_store = SQLStore(namespace="test", db_url="sqllite://:memory:")
sql_store = SQLStore(namespace="test", db_url="sqlite://:memory:")
# Set values for keys
sql_store.mset([("key1", b"value1"), ("key2", b"value2")])

View File

@@ -43,6 +43,9 @@ if TYPE_CHECKING:
from langchain_community.vectorstores.apache_doris import (
ApacheDoris,
)
from langchain_community.vectorstores.aperturedb import (
ApertureDB,
)
from langchain_community.vectorstores.astradb import (
AstraDB,
)
@@ -311,6 +314,7 @@ __all__ = [
"AnalyticDB",
"Annoy",
"ApacheDoris",
"ApertureDB",
"AstraDB",
"AtlasDB",
"AwaDB",
@@ -413,6 +417,7 @@ _module_lookup = {
"AnalyticDB": "langchain_community.vectorstores.analyticdb",
"Annoy": "langchain_community.vectorstores.annoy",
"ApacheDoris": "langchain_community.vectorstores.apache_doris",
"ApertureDB": "langchain_community.vectorstores.aperturedb",
"AstraDB": "langchain_community.vectorstores.astradb",
"AtlasDB": "langchain_community.vectorstores.atlas",
"AwaDB": "langchain_community.vectorstores.awadb",

View File

@@ -0,0 +1,516 @@
# System imports
from __future__ import annotations
import logging
import time
import uuid
from typing import Any, Dict, List, Optional, Sequence, Tuple, Type
# Third-party imports
import numpy as np
# Local imports
from langchain_core.documents import Document
from langchain_core.embeddings import Embeddings
from langchain_core.indexing.base import UpsertResponse
from langchain_core.vectorstores import VectorStore
from typing_extensions import override
# Configure some defaults
ENGINE = "HNSW"
METRIC = "CS"
DESCRIPTOR_SET = "langchain"
BATCHSIZE = 1000
PROPERTY_PREFIX = "lc_" # Prefix for properties that are in the client metadata
TEXT_PROPERTY = "text" # Property name for the text
UNIQUEID_PROPERTY = "uniqueid" # Property name for the unique id
class ApertureDB(VectorStore):
@override
def __init__(
self,
embeddings: Embeddings,
descriptor_set: str = DESCRIPTOR_SET,
dimensions: Optional[int] = None,
engine: Optional[str] = None,
metric: Optional[str] = None,
log_level: int = logging.WARN,
properties: Optional[Dict] = None,
**kwargs: Any,
) -> None:
"""Create a vectorstore backed by ApertureDB
A single ApertureDB instance can support many vectorstores,
distinguished by 'descriptor_set' name. The descriptor set is created
if it does not exist. Different descriptor sets can use different
engines and metrics, be supplied by different embedding models, and have
different dimensions.
See ApertureDB documentation on `AddDescriptorSet`
https://docs.aperturedata.io/query_language/Reference/descriptor_commands/desc_set_commands/AddDescriptorSet
for more information on the engine and metric options.
Args:
embeddings (Embeddings): Embeddings object
descriptor_set (str, optional): Descriptor set name. Defaults to
"langchain".
dimensions (Optional[int], optional): Number of dimensions of the
embeddings. Defaults to None.
engine (str, optional): Engine to use. Defaults to "HNSW" for new
descriptorsets.
metric (str, optional): Metric to use. Defaults to "CS" for new
descriptorsets.
log_level (int, optional): Logging level. Defaults to logging.WARN.
"""
# ApertureDB imports
try:
from aperturedb.Utils import Utils, create_connector
except ImportError:
raise ImportError(
"ApertureDB is not installed. Please install it using "
"'pip install aperturedb'"
)
super().__init__(**kwargs)
self.logger = logging.getLogger(__name__)
self.logger.setLevel(log_level)
self.descriptor_set = descriptor_set
self.embedding_function = embeddings
self.dimensions = dimensions
self.engine = engine
self.metric = metric
self.properties = properties
if embeddings is None:
self.logger.fatal("No embedding function provided.")
raise ValueError("No embedding function provided.")
try:
from aperturedb.Utils import Utils, create_connector
except ImportError:
self.logger.exception(
"ApertureDB is not installed. Please install it using "
"'pip install aperturedb'"
)
raise
self.connection = create_connector()
self.utils = Utils(self.connection)
try:
self.utils.status()
except Exception:
self.logger.exception("Failed to connect to ApertureDB")
raise
self._find_or_add_descriptor_set()
def _find_or_add_descriptor_set(self) -> None:
"""Checks if the descriptor set exists; if not, creates it"""
descriptor_set = self.descriptor_set
find_ds_query = [
{
"FindDescriptorSet": {
"with_name": descriptor_set,
"engines": True,
"metrics": True,
"dimensions": True,
"results": {"all_properties": True},
}
}
]
r, b = self.connection.query(find_ds_query)
assert self.connection.last_query_ok(), r
n_entities = (
len(r[0]["FindDescriptorSet"]["entities"])
if "entities" in r[0]["FindDescriptorSet"]
else 0
)
assert n_entities <= 1, "Multiple descriptor sets with the same name"
if n_entities == 1: # Descriptor set exists already
e = r[0]["FindDescriptorSet"]["entities"][0]
self.logger.info(f"Descriptor set {descriptor_set} already exists")
engines = e["_engines"]
assert len(engines) == 1, "Only one engine is supported"
if self.engine is None:
self.engine = engines[0]
elif self.engine != engines[0]:
self.logger.error(f"Engine mismatch: {self.engine} != {engines[0]}")
metrics = e["_metrics"]
assert len(metrics) == 1, "Only one metric is supported"
if self.metric is None:
self.metric = metrics[0]
elif self.metric != metrics[0]:
self.logger.error(f"Metric mismatch: {self.metric} != {metrics[0]}")
dimensions = e["_dimensions"]
if self.dimensions is None:
self.dimensions = dimensions
elif self.dimensions != dimensions:
self.logger.error(
f"Dimensions mismatch: {self.dimensions} != {dimensions}"
)
self.properties = {
k[len(PROPERTY_PREFIX) :]: v
for k, v in e.items()
if k.startswith(PROPERTY_PREFIX)
}
else:
self.logger.info(
f"Descriptor set {descriptor_set} does not exist. Creating it"
)
if self.engine is None:
self.engine = ENGINE
if self.metric is None:
self.metric = METRIC
if self.dimensions is None:
self.dimensions = len(self.embedding_function.embed_query("test"))
properties = (
{PROPERTY_PREFIX + k: v for k, v in self.properties.items()}
if self.properties is not None
else None
)
self.utils.add_descriptorset(
name=descriptor_set,
dim=self.dimensions,
engine=self.engine,
metric=self.metric,
properties=properties,
)
# Create indexes
self.utils.create_entity_index("_Descriptor", "_create_txn")
self.utils.create_entity_index("_DescriptorSet", "_name")
self.utils.create_entity_index("_Descriptor", UNIQUEID_PROPERTY)
@override
def delete(self, ids: Optional[List[str]] = None, **kwargs: Any) -> Optional[bool]:
"""Delete documents from the vectorstore by id.
Args:
ids: List of ids to delete from the vectorstore.
Returns:
True if the deletion was successful, False otherwise
"""
assert ids is not None, "ids must be provided"
query = [
{
"DeleteDescriptor": {
"set": self.descriptor_set,
"constraints": {UNIQUEID_PROPERTY: ["in", ids]},
}
}
]
result, _ = self.utils.execute(query)
return result
@override
def get_by_ids(self, ids: Sequence[str], /) -> List[Document]:
"""Find documents in the vectorstore by id.
Args:
ids: List of ids to find in the vectorstore.
Returns:
documents: List of Document objects found in the vectorstore.
"""
query = [
{
"FindDescriptor": {
"set": self.descriptor_set,
"constraints": {UNIQUEID_PROPERTY: ["in", ids]},
"results": {"all_properties": True},
}
}
]
results, _ = self.utils.execute(query)
docs = [
self._descriptor_to_document(d)
for d in results[0]["FindDescriptor"].get("entities", [])
]
return docs
@override
def similarity_search(
self, query: str, k: int = 4, *args: Any, **kwargs: Any
) -> List[Document]:
"""Search for documents similar to the query using the vectorstore
Args:
query: Query string to search for.
k: Number of results to return.
Returns:
List of Document objects ordered by decreasing similarity to the query.
"""
assert self.embedding_function is not None, "Embedding function is not set"
embedding = self.embedding_function.embed_query(query)
return self.similarity_search_by_vector(embedding, k, *args, **kwargs)
@override
def similarity_search_with_score(
self, query: str, *args: Any, **kwargs: Any
) -> List[Tuple[Document, float]]:
embedding = self.embedding_function.embed_query(query)
return self._similarity_search_with_score_by_vector(embedding, *args, **kwargs)
def _descriptor_to_document(self, d: dict) -> Document:
metadata = {}
for k, v in d.items():
if k.startswith(PROPERTY_PREFIX):
metadata[k[len(PROPERTY_PREFIX) :]] = v
text = d[TEXT_PROPERTY]
uniqueid = d[UNIQUEID_PROPERTY]
doc = Document(page_content=text, metadata=metadata, id=uniqueid)
return doc
def _similarity_search_with_score_by_vector(
self, embedding: List[float], k: int = 4, vectors: bool = False
) -> List[Tuple[Document, float]]:
from aperturedb.Descriptors import Descriptors
descriptors = Descriptors(self.connection)
start_time = time.time()
descriptors.find_similar(
set=self.descriptor_set, vector=embedding, k_neighbors=k, distances=True
)
self.logger.info(
f"ApertureDB similarity search took {time.time() - start_time} seconds"
)
return [(self._descriptor_to_document(d), d["_distance"]) for d in descriptors]
@override
def similarity_search_by_vector(
self, embedding: List[float], k: int = 4, **kwargs: Any
) -> List[Document]:
"""Returns the k most similar documents to the given embedding vector
Args:
embedding: The embedding vector to search for
k: The number of similar documents to return
Returns:
List of Document objects ordered by decreasing similarity to the query.
"""
from aperturedb.Descriptors import Descriptors
descriptors = Descriptors(self.connection)
start_time = time.time()
descriptors.find_similar(
set=self.descriptor_set, vector=embedding, k_neighbors=k
)
self.logger.info(
f"ApertureDB similarity search took {time.time() - start_time} seconds"
)
return [self._descriptor_to_document(d) for d in descriptors]
@override
def max_marginal_relevance_search(
self,
query: str,
k: int = 4,
fetch_k: int = 20,
lambda_mult: float = 0.5,
**kwargs: Any,
) -> List[Document]:
"""Returns similar documents to the query that also have diversity
This algorithm balances relevance and diversity in the search results.
Args:
query: Query string to search for.
k: Number of results to return.
fetch_k: Number of results to fetch.
lambda_mult: Lambda multiplier for MMR.
Returns:
List of Document objects ordered by decreasing similarity/diversity.
"""
self.logger.info(f"Max Marginal Relevance search for query: {query}")
embedding = self.embedding_function.embed_query(query)
return self.max_marginal_relevance_search_by_vector(
embedding, k, fetch_k, lambda_mult, **kwargs
)
@override
def max_marginal_relevance_search_by_vector(
self,
embedding: List[float],
k: int = 4,
fetch_k: int = 20,
lambda_mult: float = 0.5,
**kwargs: Any,
) -> List[Document]:
"""Returns similar documents to the vector that also have diversity
This algorithm balances relevance and diversity in the search results.
Args:
embedding: Embedding vector to search for.
k: Number of results to return.
fetch_k: Number of results to fetch.
lambda_mult: Lambda multiplier for MMR.
Returns:
List of Document objects ordered by decreasing similarity/diversity.
"""
from aperturedb.Descriptors import Descriptors
descriptors = Descriptors(self.connection)
start_time = time.time()
descriptors.find_similar_mmr(
set=self.descriptor_set,
vector=embedding,
k_neighbors=k,
fetch_k=fetch_k,
lambda_mult=lambda_mult,
)
self.logger.info(
f"ApertureDB similarity search mmr took {time.time() - start_time} seconds"
)
return [self._descriptor_to_document(d) for d in descriptors]
@classmethod
@override
def from_texts(
cls: Type[ApertureDB],
texts: List[str],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
**kwargs: Any,
) -> ApertureDB:
"""Creates a new vectorstore from a list of texts
Args:
texts: List of text strings
embedding: Embeddings object to use when constructing the vectorstore.
metadatas: Optional list of metadatas associated with the texts.
**kwargs: Additional arguments to pass to the constructor
"""
store = cls(embeddings=embedding, **kwargs)
store.add_texts(texts, metadatas)
return store
@classmethod
@override
def from_documents(
cls: Type[ApertureDB],
documents: List[Document],
embedding: Embeddings,
**kwargs: Any,
) -> ApertureDB:
"""Creates a new vectorstore from a list of documents
Args:
documents: List of Document objects
embedding: Embeddings object to use when constructing the vectorstore.
**kwargs: Additional arguments to pass to the constructor
"""
store = cls(embeddings=embedding, **kwargs)
store.add_documents(documents)
return store
@classmethod
def delete_vectorstore(class_, descriptor_set: str) -> None:
"""Deletes a vectorstore and all its data from the database
Args:
descriptor_set: The name of the descriptor set to delete
"""
from aperturedb.Utils import Utils, create_connector
db = create_connector()
utils = Utils(db)
utils.remove_descriptorset(descriptor_set)
@classmethod
def list_vectorstores(class_) -> List[dict]:
"""Returns a list of all vectorstores in the database
Returns:
List of descriptor sets with properties
"""
from aperturedb.Utils import create_connector
db = create_connector()
query = [
{
"FindDescriptorSet": {
# Return all properties
"results": {"all_properties": True},
"engines": True,
"metrics": True,
"dimensions": True,
}
}
]
response, _ = db.query(query)
assert db.last_query_ok(), response
return response[0]["FindDescriptorSet"]["entities"]
@override
def upsert(self, items: Sequence[Document], /, **kwargs: Any) -> UpsertResponse:
"""Insert or update items
Updating documents is dependent on the documents' `id` attribute.
Args:
items: List of Document objects to upsert
Returns:
UpsertResponse object listing the ids that succeeded and those that failed.
"""
# For now, simply delete and add
# We could do something more efficient to update metadata,
# but we don't support changing the embedding of a descriptor.
from aperturedb.ParallelLoader import ParallelLoader
ids_to_delete: List[str] = [
item.id for item in items if hasattr(item, "id") and item.id is not None
]
if ids_to_delete:
self.delete(ids_to_delete)
texts = [doc.page_content for doc in items]
metadatas = [
doc.metadata if getattr(doc, "metadata", None) is not None else {}
for doc in items
]
embeddings = self.embedding_function.embed_documents(texts)
ids: List[str] = [
doc.id if hasattr(doc, "id") and doc.id is not None else str(uuid.uuid4())
for doc in items
]
data = []
for text, embedding, metadata, unique_id in zip(
texts, embeddings, metadatas, ids
):
properties = {PROPERTY_PREFIX + k: v for k, v in metadata.items()}
properties[TEXT_PROPERTY] = text
properties[UNIQUEID_PROPERTY] = unique_id
command = {
"AddDescriptor": {
"set": self.descriptor_set,
"properties": properties,
}
}
query = [command]
blobs = [np.array(embedding, dtype=np.float32).tobytes()]
data.append((query, blobs))
loader = ParallelLoader(self.connection)
loader.ingest(data, batchsize=BATCHSIZE)
return UpsertResponse(succeeded=ids, failed=[])
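
For orientation, a minimal usage sketch of the class above, assuming a reachable ApertureDB instance (e.g., the standalone Docker image defined later in this diff) and any LangChain Embeddings implementation; FakeEmbeddings and the descriptor set name are illustrative, not part of the change:

# A minimal sketch, not part of the diff: FakeEmbeddings stands in for a
# real embedding model, and "my_docs" is an arbitrary descriptor set name.
from langchain_community.embeddings import FakeEmbeddings
from langchain_community.vectorstores import ApertureDB

store = ApertureDB(embeddings=FakeEmbeddings(size=768), descriptor_set="my_docs")

# Texts are embedded and upserted; ids are generated when not supplied.
store.add_texts(
    ["ApertureDB stores descriptors", "LangChain wraps vector stores"],
    metadatas=[{"source": "a"}, {"source": "b"}],
)

# k-NN search over the descriptor set.
for doc in store.similarity_search("what stores descriptors?", k=2):
    print(doc.id, doc.page_content, doc.metadata)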

View File

@@ -478,6 +478,93 @@ class Chroma(VectorStore):
"Consider providing relevance_score_fn to Chroma constructor."
)
def similarity_search_by_image(
self,
uri: str,
k: int = DEFAULT_K,
filter: Optional[Dict[str, str]] = None,
**kwargs: Any,
) -> List[Document]:
"""Search for similar images based on the given image URI.
Args:
uri (str): URI of the image to search for.
k (int, optional): Number of results to return. Defaults to DEFAULT_K.
filter (Optional[Dict[str, str]], optional): Filter by metadata.
**kwargs (Any): Additional arguments to pass to function.
Returns:
List of images most similar to the provided image.
Each element in the list is a LangChain Document object.
The page content is the b64-encoded image; metadata is default or
as defined by the user.
Raises:
ValueError: If the embedding function does not support image embeddings.
"""
if self._embedding_function is None or not hasattr(
self._embedding_function, "embed_image"
):
raise ValueError("The embedding function must support image embedding.")
# Obtain image embedding
# Assuming embed_image returns a single embedding
image_embedding = self._embedding_function.embed_image(uris=[uri])
# Perform similarity search based on the obtained embedding
results = self.similarity_search_by_vector(
embedding=image_embedding,
k=k,
filter=filter,
**kwargs,
)
return results
def similarity_search_by_image_with_relevance_score(
self,
uri: str,
k: int = DEFAULT_K,
filter: Optional[Dict[str, str]] = None,
**kwargs: Any,
) -> List[Tuple[Document, float]]:
"""Search for similar images based on the given image URI.
Args:
uri (str): URI of the image to search for.
k (int, optional): Number of results to return.
Defaults to DEFAULT_K.
filter (Optional[Dict[str, str]], optional): Filter by metadata.
**kwargs (Any): Additional arguments to pass to function.
Returns:
List[Tuple[Document, float]]: List of tuples containing documents similar
to the query image and their similarity scores.
The first element in each tuple is a LangChain Document object.
The page content is the b64-encoded image; metadata is default or as defined by the user.
Raises:
ValueError: If the embedding function does not support image embeddings.
"""
if self._embedding_function is None or not hasattr(
self._embedding_function, "embed_image"
):
raise ValueError("The embedding function must support image embedding.")
# Obtain image embedding
# Assuming embed_image returns a single embedding
image_embedding = self._embedding_function.embed_image(uris=[uri])
# Perform similarity search based on the obtained embedding
results = self.similarity_search_by_vector_with_relevance_scores(
embedding=image_embedding,
k=k,
filter=filter,
**kwargs,
)
return results
def max_marginal_relevance_search_by_vector(
self,
embedding: List[float],

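A hedged sketch of calling the two methods added above; it assumes an embedding function that implements embed_image (OpenCLIPEmbeddings is one such implementation) and a local image file, both of which are illustrative rather than part of this diff:

# Sketch only: any embedding function exposing embed_image(uris=[...])
# satisfies the hasattr check in the methods above.
from langchain_community.vectorstores import Chroma
from langchain_experimental.open_clip import OpenCLIPEmbeddings

db = Chroma(collection_name="images", embedding_function=OpenCLIPEmbeddings())
# ... after populating the collection, e.g. via db.add_images(uris=[...]) ...
docs = db.similarity_search_by_image(uri="query.jpg", k=4)
scored = db.similarity_search_by_image_with_relevance_score(uri="query.jpg", k=4)
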
View File

@@ -277,7 +277,13 @@ class DatabricksVectorSearch(VectorStore):
return True
def similarity_search(
self, query: str, k: int = 4, filters: Optional[Any] = None, **kwargs: Any
self,
query: str,
k: int = 4,
filters: Optional[Any] = None,
*,
query_type: Optional[str] = None,
**kwargs: Any,
) -> List[Document]:
"""Return docs most similar to query.
@@ -285,17 +291,24 @@ class DatabricksVectorSearch(VectorStore):
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
filters: Filters to apply to the query. Defaults to None.
query_type: The type of this query. Supported values are "ANN" and "HYBRID".
Returns:
List of Documents most similar to the embedding.
"""
docs_with_score = self.similarity_search_with_score(
query=query, k=k, filters=filters, **kwargs
query=query, k=k, filters=filters, query_type=query_type, **kwargs
)
return [doc for doc, _ in docs_with_score]
def similarity_search_with_score(
self, query: str, k: int = 4, filters: Optional[Any] = None, **kwargs: Any
self,
query: str,
k: int = 4,
filters: Optional[Any] = None,
*,
query_type: Optional[str] = None,
**kwargs: Any,
) -> List[Tuple[Document, float]]:
"""Return docs most similar to query, along with scores.
@@ -303,6 +316,7 @@ class DatabricksVectorSearch(VectorStore):
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
filters: Filters to apply to the query. Defaults to None.
query_type: The type of this query. Supported values are "ANN" and "HYBRID".
Returns:
List of Documents most similar to the embedding and score for each.
@@ -321,6 +335,7 @@ class DatabricksVectorSearch(VectorStore):
query_vector=query_vector,
filters=filters,
num_results=k,
query_type=query_type,
)
return self._parse_search_response(search_resp)
@@ -343,6 +358,8 @@ class DatabricksVectorSearch(VectorStore):
fetch_k: int = 20,
lambda_mult: float = 0.5,
filters: Optional[Any] = None,
*,
query_type: Optional[str] = None,
**kwargs: Any,
) -> List[Document]:
"""Return docs selected using the maximal marginal relevance.
@@ -359,6 +376,7 @@ class DatabricksVectorSearch(VectorStore):
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
filters: Filters to apply to the query. Defaults to None.
query_type: The type of this query. Supported values are "ANN" and "HYBRID".
Returns:
List of Documents selected by maximal marginal relevance.
"""
@@ -377,6 +395,7 @@ class DatabricksVectorSearch(VectorStore):
fetch_k,
lambda_mult=lambda_mult,
filters=filters,
query_type=query_type,
)
return docs
@@ -387,6 +406,8 @@ class DatabricksVectorSearch(VectorStore):
fetch_k: int = 20,
lambda_mult: float = 0.5,
filters: Optional[Any] = None,
*,
query_type: Optional[str] = None,
**kwargs: Any,
) -> List[Document]:
"""Return docs selected using the maximal marginal relevance.
@@ -403,6 +424,7 @@ class DatabricksVectorSearch(VectorStore):
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
filters: Filters to apply to the query. Defaults to None.
query_type: The type of this query. Supported values are "ANN" and "HYBRID".
Returns:
List of Documents selected by maximal marginal relevance.
"""
@@ -420,6 +442,7 @@ class DatabricksVectorSearch(VectorStore):
query_vector=embedding,
filters=filters,
num_results=fetch_k,
query_type=query_type,
)
embeddings_result_index = (
@@ -449,6 +472,8 @@ class DatabricksVectorSearch(VectorStore):
embedding: List[float],
k: int = 4,
filters: Optional[Any] = None,
*,
query_type: Optional[str] = None,
**kwargs: Any,
) -> List[Document]:
"""Return docs most similar to embedding vector.
@@ -457,12 +482,13 @@ class DatabricksVectorSearch(VectorStore):
embedding: Embedding to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
filters: Filters to apply to the query. Defaults to None.
query_type: The type of this query. Supported values are "ANN" and "HYBRID".
Returns:
List of Documents most similar to the embedding.
"""
docs_with_score = self.similarity_search_by_vector_with_score(
embedding=embedding, k=k, filters=filters, **kwargs
embedding=embedding, k=k, filters=filters, query_type=query_type, **kwargs
)
return [doc for doc, _ in docs_with_score]
@@ -471,6 +497,8 @@ class DatabricksVectorSearch(VectorStore):
embedding: List[float],
k: int = 4,
filters: Optional[Any] = None,
*,
query_type: Optional[str] = None,
**kwargs: Any,
) -> List[Tuple[Document, float]]:
"""Return docs most similar to embedding vector, along with scores.
@@ -479,6 +507,7 @@ class DatabricksVectorSearch(VectorStore):
embedding: Embedding to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
filters: Filters to apply to the query. Defaults to None.
query_type: The type of this query. Supported values are "ANN" and "HYBRID".
Returns:
List of Documents most similar to the embedding and score for each.
@@ -493,6 +522,7 @@ class DatabricksVectorSearch(VectorStore):
query_vector=embedding,
filters=filters,
num_results=k,
query_type=query_type,
)
return self._parse_search_response(search_resp)
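
To illustrate the new keyword-only parameter, a sketch of a hybrid query against an existing index; the endpoint and index names are placeholders, and a delta-sync index with Databricks-managed embeddings is assumed so no embedding or text_column is needed:

# Sketch: query_type is forwarded to index.similarity_search; per the
# docstrings above, supported values are "ANN" (the default) and "HYBRID".
from databricks.vector_search.client import VectorSearchClient
from langchain_community.vectorstores import DatabricksVectorSearch

index = VectorSearchClient().get_index(
    endpoint_name="my-endpoint",           # placeholder
    index_name="catalog.schema.my_index",  # placeholder
)
vs = DatabricksVectorSearch(index)
docs = vs.similarity_search("governance policy", k=4, query_type="HYBRID")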

View File

@@ -76,3 +76,14 @@ async def test_chat_baichuan_astream() -> None:
async for chunk in chat.astream("今天天气如何?"):  # "How's the weather today?"
assert isinstance(chunk, AIMessage)
assert isinstance(chunk.content, str)
def test_chat_baichuan_with_system_role() -> None:
chat = ChatBaichuan() # type: ignore[call-arg]
messages = [
("system", "你是一名专业的翻译家,可以将用户的中文翻译为英文。"),  # "You are a professional translator; translate the user's Chinese into English."
("human", "我喜欢编程。"),  # "I like programming."
]
response = chat.invoke(messages)
assert isinstance(response, AIMessage)
assert isinstance(response.content, str)

View File

@@ -235,3 +235,34 @@ def test_manual_tool_call_msg() -> None:
assert output.content
# Should not have called the tool again.
assert not output.tool_calls and not output.invalid_tool_calls
class AnswerWithJustification(BaseModel):
"""An answer to the user question along with justification for the answer."""
answer: str
justification: str
def test_chat_tongyi_with_structured_output() -> None:
"""Test ChatTongyi with structured output."""
llm = ChatTongyi() # type: ignore
structured_llm = llm.with_structured_output(AnswerWithJustification)
response = structured_llm.invoke(
"What weighs more a pound of bricks or a pound of feathers"
)
assert isinstance(response, AnswerWithJustification)
def test_chat_tongyi_with_structured_output_include_raw() -> None:
"""Test ChatTongyi with structured output."""
llm = ChatTongyi() # type: ignore
structured_llm = llm.with_structured_output(
AnswerWithJustification, include_raw=True
)
response = structured_llm.invoke(
"What weighs more a pound of bricks or a pound of feathers"
)
assert isinstance(response, dict)
assert isinstance(response.get("raw"), AIMessage)
assert isinstance(response.get("parsed"), AnswerWithJustification)

View File

@@ -9,7 +9,7 @@ from langchain_community.tools.zenguard.tool import Detector, ZenGuardTool
@pytest.fixture()
def zenguard_tool() -> ZenGuardTool:
if os.getenv("ZENGUARD_API_KEY") is None:
raise ValueError("ZENGUARD_API_KEY is not set in environment varibale")
raise ValueError("ZENGUARD_API_KEY is not set in environment variable")
return ZenGuardTool()

View File

@@ -0,0 +1,7 @@
services:
aperturedb:
image: aperturedata/aperturedb-standalone:latest
restart: on-failure:0
container_name: aperturedb
ports:
- 55555:55555
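
This compose file backs the new ApertureDB integration tests; after `docker compose up -d`, connectivity can be sanity-checked with the same SDK helpers the vectorstore uses (the summary call is an assumption about the SDK, used here only as a smoke test):

# Sketch: create_connector reads connection settings from the environment;
# Utils is the same admin helper class used by the ApertureDB vectorstore.
from aperturedb.Utils import Utils, create_connector

db = create_connector()
utils = Utils(db)
print(utils.summary())  # assumed SDK helper; prints database statistics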

View File

@@ -0,0 +1,29 @@
"""Test ApertureDB functionality."""
import uuid
import pytest
from langchain_standard_tests.integration_tests.vectorstores import (
AsyncReadWriteTestSuite,
ReadWriteTestSuite,
)
from langchain_community.vectorstores import ApertureDB
class TestApertureDBReadWriteTestSuite(ReadWriteTestSuite):
@pytest.fixture
def vectorstore(self) -> ApertureDB:
descriptor_set = uuid.uuid4().hex # Fresh descriptor set for each test
return ApertureDB(
embeddings=self.get_embeddings(), descriptor_set=descriptor_set
)
class TestAsyncApertureDBReadWriteTestSuite(AsyncReadWriteTestSuite):
@pytest.fixture
async def vectorstore(self) -> ApertureDB:
descriptor_set = uuid.uuid4().hex # Fresh descriptor set for each test
return ApertureDB(
embeddings=self.get_embeddings(), descriptor_set=descriptor_set
)

View File

@@ -4,7 +4,6 @@ import pytest
from langchain_core.messages import (
AIMessage,
AIMessageChunk,
ChatMessage,
FunctionMessage,
HumanMessage,
HumanMessageChunk,
@@ -54,9 +53,9 @@ def test__convert_message_to_dict_ai() -> None:
def test__convert_message_to_dict_system() -> None:
message = SystemMessage(content="foo")
with pytest.raises(TypeError) as e:
_convert_message_to_dict(message)
assert "Got unknown type" in str(e)
result = _convert_message_to_dict(message)
expected_output = {"role": "system", "content": "foo"}
assert result == expected_output
def test__convert_message_to_dict_function() -> None:
@@ -83,7 +82,7 @@ def test__convert_dict_to_message_ai() -> None:
def test__convert_dict_to_message_other_role() -> None:
message_dict = {"role": "system", "content": "foo"}
result = _convert_dict_to_message(message_dict)
expected_output = ChatMessage(role="system", content="foo")
expected_output = SystemMessage(content="foo")
assert result == expected_output
@@ -134,3 +133,11 @@ def test_uses_actual_secret_value_from_secret_str() -> None:
cast(SecretStr, chat.baichuan_secret_key).get_secret_value()
== "test-secret-key"
)
def test_chat_baichuan_with_base_url() -> None:
chat = ChatBaichuan( # type: ignore[call-arg]
api_key="your-api-key", # type: ignore[arg-type]
base_url="https://example.com",  # type: ignore[arg-type]
)
assert chat.baichuan_api_base == "https://example.com"

View File

@@ -12,7 +12,7 @@ PAGE_1 = """
Hello.
<a href="relative">Relative</a>
<a href="/relative-base">Relative base.</a>
<a href="http://cnn.com">Aboslute</a>
<a href="http://cnn.com">Absolute</a>
<a href="//same.foo">Test</a>
</body>
</html>

View File

@@ -98,6 +98,7 @@ EXPECT_ALL = [
"QianfanLLMEndpoint",
"YandexGPT",
"Yuan2",
"You",
"VolcEngineMaasLLM",
"WatsonxLLM",
"SparkLLM",

View File

@@ -0,0 +1,37 @@
import pytest
import requests_mock
@pytest.mark.parametrize("endpoint", ("smart", "research"))
@pytest.mark.requires("sseclient")
def test_invoke(
endpoint: str, requests_mock: requests_mock.Mocker, monkeypatch: pytest.MonkeyPatch
) -> None:
from langchain_community.llms import You
from langchain_community.llms.you import RESEARCH_ENDPOINT, SMART_ENDPOINT
json = {
"answer": (
"A solar eclipse occurs when the Moon passes between the Sun and Earth, "
"casting a shadow on Earth and ..."
),
"search_results": [
{
"url": "https://en.wikipedia.org/wiki/Solar_eclipse",
"name": "Solar eclipse - Wikipedia",
"snippet": (
"A solar eclipse occurs when the Moon passes "
"between Earth and the Sun, thereby obscuring the view of the Sun "
"from a small part of Earth, totally or partially. "
),
}
],
}
request_endpoint = SMART_ENDPOINT if endpoint == "smart" else RESEARCH_ENDPOINT
requests_mock.post(request_endpoint, json=json)
monkeypatch.setenv("YDC_API_KEY", "...")
llm = You(endpoint=endpoint)
output = llm.invoke("What is a solar eclipse?")
assert output == json["answer"]

View File

@@ -167,6 +167,12 @@ EXAMPLE_SEARCH_RESPONSE_WITH_EMBEDDING = {
"next_page_token": "",
}
ALL_QUERY_TYPES = [
None,
"ANN",
"HYBRID",
]
def mock_index(index_details: dict) -> MagicMock:
from databricks.vector_search.client import VectorSearchIndex
@@ -475,8 +481,10 @@ def test_delete_fail_no_ids() -> None:
@pytest.mark.requires("databricks", "databricks.vector_search")
@pytest.mark.parametrize("index_details", ALL_INDEXES)
def test_similarity_search(index_details: dict) -> None:
@pytest.mark.parametrize(
"index_details, query_type", itertools.product(ALL_INDEXES, ALL_QUERY_TYPES)
)
def test_similarity_search(index_details: dict, query_type: Optional[str]) -> None:
index = mock_index(index_details)
index.similarity_search.return_value = EXAMPLE_SEARCH_RESPONSE
vectorsearch = default_databricks_vector_search(index)
@@ -484,7 +492,9 @@ def test_similarity_search(index_details: dict) -> None:
filters = {"some filter": True}
limit = 7
search_result = vectorsearch.similarity_search(query, k=limit, filters=filters)
search_result = vectorsearch.similarity_search(
query, k=limit, filters=filters, query_type=query_type
)
if index_details == DELTA_SYNC_INDEX_MANAGED_EMBEDDINGS:
index.similarity_search.assert_called_once_with(
columns=[DEFAULT_PRIMARY_KEY, DEFAULT_TEXT_COLUMN],
@@ -492,6 +502,7 @@ def test_similarity_search(index_details: dict) -> None:
query_vector=None,
filters=filters,
num_results=limit,
query_type=query_type,
)
else:
index.similarity_search.assert_called_once_with(
@@ -500,6 +511,7 @@ def test_similarity_search(index_details: dict) -> None:
query_vector=DEFAULT_EMBEDDING_MODEL.embed_query(query),
filters=filters,
num_results=limit,
query_type=query_type,
)
assert len(search_result) == len(fake_texts)
assert sorted([d.page_content for d in search_result]) == sorted(fake_texts)
@@ -620,6 +632,7 @@ def test_similarity_search_by_vector(index_details: dict) -> None:
query_vector=query_embedding,
filters=filters,
num_results=limit,
query_type=None,
)
assert len(search_result) == len(fake_texts)
assert sorted([d.page_content for d in search_result]) == sorted(fake_texts)

View File

@@ -10,6 +10,7 @@ EXPECTED_ALL = [
"AnalyticDB",
"Annoy",
"ApacheDoris",
"ApertureDB",
"AstraDB",
"AtlasDB",
"AwaDB",

View File

@@ -48,6 +48,7 @@ def test_compatible_vectorstore_documentation() -> None:
documented = {
"Aerospike",
"AnalyticDB",
"ApertureDB",
"AstraDB",
"AzureCosmosDBVectorSearch",
"AzureCosmosDBNoSqlVectorSearch",

View File

@@ -65,12 +65,15 @@ class AgentAction(Serializable):
@classmethod
def is_lc_serializable(cls) -> bool:
"""Return whether or not the class is serializable."""
"""Return whether or not the class is serializable.
Default is True.
"""
return True
@classmethod
def get_lc_namespace(cls) -> List[str]:
"""Get the namespace of the langchain object."""
"""Get the namespace of the langchain object.
Default is ["langchain", "schema", "agent"]."""
return ["langchain", "schema", "agent"]
@property
@@ -80,7 +83,7 @@ class AgentAction(Serializable):
class AgentActionMessageLog(AgentAction):
"""A representation of an action to be executed by an agent.
"""Representation of an action to be executed by an agent.
This is similar to AgentAction, but includes a message log consisting of
chat messages. This is useful when working with ChatModels, and is used
@@ -102,7 +105,7 @@ class AgentActionMessageLog(AgentAction):
class AgentStep(Serializable):
"""The result of running an AgentAction."""
"""Result of running an AgentAction."""
action: AgentAction
"""The AgentAction that was executed."""
@@ -111,12 +114,12 @@ class AgentStep(Serializable):
@property
def messages(self) -> Sequence[BaseMessage]:
"""Return the messages that correspond to this observation."""
"""Messages that correspond to this observation."""
return _convert_agent_observation_to_messages(self.action, self.observation)
class AgentFinish(Serializable):
"""The final return value of an ActionAgent.
"""Final return value of an ActionAgent.
Agents return an AgentFinish when they have reached a stopping condition.
"""
@@ -148,7 +151,7 @@ class AgentFinish(Serializable):
@property
def messages(self) -> Sequence[BaseMessage]:
"""Return the messages that correspond to this observation."""
"""Messages that correspond to this observation."""
return [AIMessage(content=self.log)]
@@ -180,6 +183,7 @@ def _convert_agent_observation_to_messages(
Args:
agent_action: Agent action to convert.
observation: Observation to convert to a message.
Returns:
AIMessage that corresponds to the original tool invocation.
@@ -196,11 +200,11 @@ def _create_function_message(
"""Convert agent action and observation into a function message.
Args:
agent_action: the tool invocation request from the agent
observation: the result of the tool invocation
agent_action: the tool invocation request from the agent.
observation: the result of the tool invocation.
Returns:
FunctionMessage that corresponds to the original tool invocation
FunctionMessage that corresponds to the original tool invocation.
"""
if not isinstance(observation, str):
try:

View File

@@ -44,7 +44,7 @@ class BaseCache(ABC):
The default implementation of the async methods is to run the synchronous
method in an executor. It's recommended to override the async methods
and provide an async implementations to avoid unnecessary overhead.
and provide async implementations to avoid unnecessary overhead.
"""
@abstractmethod
@@ -56,8 +56,8 @@ class BaseCache(ABC):
Args:
prompt: a string representation of the prompt.
In the case of a Chat model, the prompt is a non-trivial
serialization of the prompt into the language model.
In the case of a Chat model, the prompt is a non-trivial
serialization of the prompt into the language model.
llm_string: A string representation of the LLM configuration.
This is used to capture the invocation parameters of the LLM
(e.g., model name, temperature, stop tokens, max tokens, etc.).
@@ -78,8 +78,8 @@ class BaseCache(ABC):
Args:
prompt: a string representation of the prompt.
In the case of a Chat model, the prompt is a non-trivial
serialization of the prompt into the language model.
In the case of a Chat model, the prompt is a non-trivial
serialization of the prompt into the language model.
llm_string: A string representation of the LLM configuration.
This is used to capture the invocation parameters of the LLM
(e.g., model name, temperature, stop tokens, max tokens, etc.).
@@ -101,8 +101,8 @@ class BaseCache(ABC):
Args:
prompt: a string representation of the prompt.
In the case of a Chat model, the prompt is a non-trivial
serialization of the prompt into the language model.
In the case of a Chat model, the prompt is a non-trivial
serialization of the prompt into the language model.
llm_string: A string representation of the LLM configuration.
This is used to capture the invocation parameters of the LLM
(e.g., model name, temperature, stop tokens, max tokens, etc.).
@@ -125,8 +125,8 @@ class BaseCache(ABC):
Args:
prompt: a string representation of the prompt.
In the case of a Chat model, the prompt is a non-trivial
serialization of the prompt into the language model.
In the case of a Chat model, the prompt is a non-trivial
serialization of the prompt into the language model.
llm_string: A string representation of the LLM configuration.
This is used to capture the invocation parameters of the LLM
(e.g., model name, temperature, stop tokens, max tokens, etc.).
@@ -152,6 +152,10 @@ class InMemoryCache(BaseCache):
maxsize: The maximum number of items to store in the cache.
If None, the cache has no maximum size.
If the cache exceeds the maximum size, the oldest items are removed.
Default is None.
Raises:
ValueError: If maxsize is less than or equal to 0.
"""
self._cache: Dict[Tuple[str, str], RETURN_VAL_TYPE] = {}
if maxsize is not None and maxsize <= 0:

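The amended docstring documents the maxsize behavior; a small sketch of the eviction semantics it describes, using the update/lookup signatures from the abstract methods above (the prompt and llm strings are illustrative):

# Sketch: oldest entries are evicted once maxsize is exceeded.
from langchain_core.caches import InMemoryCache
from langchain_core.outputs import Generation

cache = InMemoryCache(maxsize=2)
cache.update("p1", "llm", [Generation(text="a")])
cache.update("p2", "llm", [Generation(text="b")])
cache.update("p3", "llm", [Generation(text="c")])  # evicts the ("p1", "llm") entry
assert cache.lookup("p1", "llm") is None
assert cache.lookup("p3", "llm") == [Generation(text="c")]
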
View File

@@ -392,7 +392,7 @@ class RunManagerMixin:
metadata: The metadata associated with the custom event
(includes inherited metadata).
.. versionadded:: 0.2.14
.. versionadded:: 0.2.15
"""
@@ -851,7 +851,7 @@ class AsyncCallbackHandler(BaseCallbackHandler):
metadata: The metadata associated with the custom event
(includes inherited metadata).
.. versionadded:: 0.2.14
.. versionadded:: 0.2.15
"""

View File

@@ -2341,7 +2341,7 @@ async def adispatch_custom_event(
LangChain from automatically propagating the config object on the user's
behalf.
.. versionadded:: 0.2.14
.. versionadded:: 0.2.15
"""
from langchain_core.runnables.config import (
ensure_config,
@@ -2410,7 +2410,7 @@ def dispatch_custom_event(
foo_ = RunnableLambda(foo)
foo_.invoke({"a": "1"}, {"callbacks": [CustomCallbackManager()]})
.. versionadded:: 0.2.14
.. versionadded:: 0.2.15
"""
from langchain_core.runnables.config import (
ensure_config,

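Since the corrected versionadded now points at langchain-core 0.2.15, a minimal sketch of the feature these docstrings describe; the handler class, event name, and payload are illustrative:

# Sketch of dispatching a custom callback event from inside a runnable.
from langchain_core.callbacks import AsyncCallbackHandler
from langchain_core.callbacks.manager import adispatch_custom_event
from langchain_core.runnables import RunnableLambda

class ProgressHandler(AsyncCallbackHandler):
    async def on_custom_event(
        self, name, data, *, run_id, tags=None, metadata=None, **kwargs
    ):
        print(f"custom event {name!r}: {data}")

async def step(x: str) -> str:
    # On Python >= 3.11 the config is propagated automatically;
    # on older versions it must be passed explicitly.
    await adispatch_custom_event("progress", {"input": x})
    return x.upper()

# await RunnableLambda(step).ainvoke("hi", {"callbacks": [ProgressHandler()]})
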
View File

@@ -116,7 +116,7 @@ class BaseChatMessageHistory(ABC):
This method may be deprecated in a future release.
Args:
message: The human message to add
message: The human message to add to the store.
"""
if isinstance(message, HumanMessage):
self.add_message(message)
@@ -200,22 +200,38 @@ class BaseChatMessageHistory(ABC):
class InMemoryChatMessageHistory(BaseChatMessageHistory, BaseModel):
"""In memory implementation of chat message history.
Stores messages in an in memory list.
Stores messages in a memory list.
"""
messages: List[BaseMessage] = Field(default_factory=list)
"""A list of messages stored in memory."""
async def aget_messages(self) -> List[BaseMessage]:
"""Async version of getting messages."""
"""Async version of getting messages.
Can over-ride this method to provide an efficient async implementation.
In general, fetching messages may involve IO to the underlying
persistence layer.
Returns:
List of messages.
"""
return self.messages
def add_message(self, message: BaseMessage) -> None:
"""Add a self-created message to the store."""
"""Add a self-created message to the store.
Args:
message: The message to add.
"""
self.messages.append(message)
async def aadd_messages(self, messages: Sequence[BaseMessage]) -> None:
"""Async add messages to the store"""
"""Async add messages to the store.
Args:
messages: The messages to add.
"""
self.add_messages(messages)
def clear(self) -> None:

View File

@@ -15,7 +15,7 @@ PathLike = Union[str, PurePath]
class BaseMedia(Serializable):
"""Use to represent media content.
Media objets can be used to represent raw data, such as text or binary data.
Media objects can be used to represent raw data, such as text or binary data.
LangChain Media objects allow associating metadata and an optional identifier
with the content.

View File

@@ -32,7 +32,16 @@ class BaseDocumentCompressor(BaseModel, ABC):
query: str,
callbacks: Optional[Callbacks] = None,
) -> Sequence[Document]:
"""Compress retrieved documents given the query context."""
"""Compress retrieved documents given the query context.
Args:
documents: The retrieved documents.
query: The query context.
callbacks: Optional callbacks to run during compression.
Returns:
The compressed documents.
"""
async def acompress_documents(
self,
@@ -40,7 +49,16 @@ class BaseDocumentCompressor(BaseModel, ABC):
query: str,
callbacks: Optional[Callbacks] = None,
) -> Sequence[Document]:
"""Compress retrieved documents given the query context."""
"""Async compress retrieved documents given the query context.
Args:
documents: The retrieved documents.
query: The query context.
callbacks: Optional callbacks to run during compression.
Returns:
The compressed documents.
"""
return await run_in_executor(
None, self.compress_documents, documents, query, callbacks
)

View File

@@ -10,9 +10,9 @@ if TYPE_CHECKING:
class BaseDocumentTransformer(ABC):
"""Abstract base class for document transformation systems.
"""Abstract base class for document transformation.
A document transformation system takes a sequence of Documents and returns a
A document transformation takes a sequence of Documents and returns a
sequence of transformed Documents.
Example:
@@ -55,7 +55,7 @@ class BaseDocumentTransformer(ABC):
documents: A sequence of Documents to be transformed.
Returns:
A list of transformed Documents.
A sequence of transformed Documents.
"""
async def atransform_documents(
@@ -67,7 +67,7 @@ class BaseDocumentTransformer(ABC):
documents: A sequence of Documents to be transformed.
Returns:
A list of transformed Documents.
A sequence of transformed Documents.
"""
return await run_in_executor(
None, self.transform_documents, documents, **kwargs

View File

@@ -7,7 +7,7 @@ from langchain_core.runnables.config import run_in_executor
class Embeddings(ABC):
"""An interface for embedding models.
"""Interface for embedding models.
This is an interface meant for implementing text embedding models.
@@ -36,16 +36,44 @@ class Embeddings(ABC):
@abstractmethod
def embed_documents(self, texts: List[str]) -> List[List[float]]:
"""Embed search docs."""
"""Embed search docs.
Args:
texts: List of text to embed.
Returns:
List of embeddings.
"""
@abstractmethod
def embed_query(self, text: str) -> List[float]:
"""Embed query text."""
"""Embed query text.
Args:
text: Text to embed.
Returns:
Embedding.
"""
async def aembed_documents(self, texts: List[str]) -> List[List[float]]:
"""Asynchronous Embed search docs."""
"""Asynchronous Embed search docs.
Args:
texts: List of text to embed.
Returns:
List of embeddings.
"""
return await run_in_executor(None, self.embed_documents, texts)
async def aembed_query(self, text: str) -> List[float]:
"""Asynchronous Embed query text."""
"""Asynchronous Embed query text.
Args:
text: Text to embed.
Returns:
Embedding.
"""
return await run_in_executor(None, self.embed_query, text)

View File

@@ -4,7 +4,11 @@ from functools import lru_cache
@lru_cache(maxsize=1)
def get_runtime_environment() -> dict:
"""Get information about the LangChain runtime environment."""
"""Get information about the LangChain runtime environment.
Returns:
A dictionary with information about the runtime environment.
"""
# Lazy import to avoid circular imports
from langchain_core import __version__

View File

@@ -22,13 +22,14 @@ class OutputParserException(ValueError, LangChainException):
Parameters:
error: The error that's being re-raised or an error message.
observation: String explanation of error which can be passed to a
model to try and remediate the issue.
model to try and remediate the issue. Defaults to None.
llm_output: String model output which is error-ing.
Defaults to None.
send_to_llm: Whether to send the observation and llm_output back to an Agent
after an OutputParserException has been raised. This gives the underlying
model driving the agent the context that the previous output was improperly
structured, in the hopes that it will update the output to the correct
format.
format. Defaults to False.
"""
def __init__(

View File

@@ -18,7 +18,11 @@ _llm_cache: Optional["BaseCache"] = None
def set_verbose(value: bool) -> None:
"""Set a new value for the `verbose` global setting."""
"""Set a new value for the `verbose` global setting.
Args:
value: The new value for the `verbose` global setting.
"""
try:
import langchain # type: ignore[import]
@@ -46,7 +50,11 @@ def set_verbose(value: bool) -> None:
def get_verbose() -> bool:
"""Get the value of the `verbose` global setting."""
"""Get the value of the `verbose` global setting.
Returns:
The value of the `verbose` global setting.
"""
try:
import langchain # type: ignore[import]
@@ -79,7 +87,11 @@ def get_verbose() -> bool:
def set_debug(value: bool) -> None:
"""Set a new value for the `debug` global setting."""
"""Set a new value for the `debug` global setting.
Args:
value: The new value for the `debug` global setting.
"""
try:
import langchain # type: ignore[import]
@@ -105,7 +117,11 @@ def set_debug(value: bool) -> None:
def get_debug() -> bool:
"""Get the value of the `debug` global setting."""
"""Get the value of the `debug` global setting.
Returns:
The value of the `debug` global setting.
"""
try:
import langchain # type: ignore[import]
@@ -168,7 +184,11 @@ def set_llm_cache(value: Optional["BaseCache"]) -> None:
def get_llm_cache() -> "BaseCache":
"""Get the value of the `llm_cache` global setting."""
"""Get the value of the `llm_cache` global setting.
Returns:
The value of the `llm_cache` global setting.
"""
try:
import langchain # type: ignore[import]

View File

@@ -62,13 +62,20 @@ class BaseMemory(Serializable, ABC):
"""Return key-value pairs given the text input to the chain.
Args:
inputs: The inputs to the chain."""
inputs: The inputs to the chain.
Returns:
A dictionary of key-value pairs.
"""
async def aload_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
"""Async return key-value pairs given the text input to the chain.
Args:
inputs: The inputs to the chain.
Returns:
A dictionary of key-value pairs.
"""
return await run_in_executor(None, self.load_memory_variables, inputs)

View File

@@ -15,11 +15,18 @@ from langchain_core.messages.tool import (
default_tool_chunk_parser,
default_tool_parser,
)
from langchain_core.messages.tool import (
invalid_tool_call as create_invalid_tool_call,
)
from langchain_core.messages.tool import (
tool_call as create_tool_call,
)
from langchain_core.messages.tool import (
tool_call_chunk as create_tool_call_chunk,
)
from langchain_core.pydantic_v1 import root_validator
from langchain_core.utils._merge import merge_dicts, merge_lists
from langchain_core.utils.json import (
parse_partial_json,
)
from langchain_core.utils.json import parse_partial_json
class UsageMetadata(TypedDict):
@@ -106,24 +113,55 @@ class AIMessage(BaseMessage):
@root_validator(pre=True)
def _backwards_compat_tool_calls(cls, values: dict) -> dict:
raw_tool_calls = values.get("additional_kwargs", {}).get("tool_calls")
tool_calls = (
values.get("tool_calls")
or values.get("invalid_tool_calls")
or values.get("tool_call_chunks")
check_additional_kwargs = not any(
values.get(k)
for k in ("tool_calls", "invalid_tool_calls", "tool_call_chunks")
)
if raw_tool_calls and not tool_calls:
if check_additional_kwargs and (
raw_tool_calls := values.get("additional_kwargs", {}).get("tool_calls")
):
try:
if issubclass(cls, AIMessageChunk): # type: ignore
values["tool_call_chunks"] = default_tool_chunk_parser(
raw_tool_calls
)
else:
tool_calls, invalid_tool_calls = default_tool_parser(raw_tool_calls)
values["tool_calls"] = tool_calls
values["invalid_tool_calls"] = invalid_tool_calls
parsed_tool_calls, parsed_invalid_tool_calls = default_tool_parser(
raw_tool_calls
)
values["tool_calls"] = parsed_tool_calls
values["invalid_tool_calls"] = parsed_invalid_tool_calls
except Exception:
pass
# Ensure "type" is properly set on all tool call-like dicts.
if tool_calls := values.get("tool_calls"):
updated: List = []
for tc in tool_calls:
updated.append(
create_tool_call(**{k: v for k, v in tc.items() if k != "type"})
)
values["tool_calls"] = updated
if invalid_tool_calls := values.get("invalid_tool_calls"):
updated = []
for tc in invalid_tool_calls:
updated.append(
create_invalid_tool_call(
**{k: v for k, v in tc.items() if k != "type"}
)
)
values["invalid_tool_calls"] = updated
if tool_call_chunks := values.get("tool_call_chunks"):
updated = []
for tc in tool_call_chunks:
updated.append(
create_tool_call_chunk(
**{k: v for k, v in tc.items() if k != "type"}
)
)
values["tool_call_chunks"] = updated
return values
def pretty_repr(self, html: bool = False) -> str:
@@ -216,7 +254,7 @@ class AIMessageChunk(AIMessage, BaseMessageChunk):
if not values["tool_call_chunks"]:
if values["tool_calls"]:
values["tool_call_chunks"] = [
ToolCallChunk(
create_tool_call_chunk(
name=tc["name"],
args=json.dumps(tc["args"]),
id=tc["id"],
@@ -228,7 +266,7 @@ class AIMessageChunk(AIMessage, BaseMessageChunk):
tool_call_chunks = values.get("tool_call_chunks", [])
tool_call_chunks.extend(
[
ToolCallChunk(
create_tool_call_chunk(
name=tc["name"], args=tc["args"], id=tc["id"], index=None
)
for tc in values["invalid_tool_calls"]
@@ -244,7 +282,7 @@ class AIMessageChunk(AIMessage, BaseMessageChunk):
args_ = parse_partial_json(chunk["args"]) if chunk["args"] != "" else {}
if isinstance(args_, dict):
tool_calls.append(
ToolCall(
create_tool_call(
name=chunk["name"] or "",
args=args_,
id=chunk["id"],
@@ -254,7 +292,7 @@ class AIMessageChunk(AIMessage, BaseMessageChunk):
raise ValueError("Malformed args.")
except Exception:
invalid_tool_calls.append(
InvalidToolCall(
create_invalid_tool_call(
name=chunk["name"],
args=chunk["args"],
id=chunk["id"],
@@ -297,7 +335,7 @@ def add_ai_message_chunks(
left.tool_call_chunks, *(o.tool_call_chunks for o in others)
):
tool_call_chunks = [
ToolCallChunk(
create_tool_call_chunk(
name=rtc.get("name"),
args=rtc.get("args"),
index=rtc.get("index"),

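The practical effect of routing through the factory functions, sketched as a small check (illustrative, not from the diff): tool-call dicts supplied without a "type" key now come back normalized.

# Sketch: "type" is filled in even when the caller omits it.
from langchain_core.messages import AIMessage

msg = AIMessage(
    content="",
    tool_calls=[{"name": "add", "args": {"x": 1, "y": 2}, "id": "call_1"}],
)
assert msg.tool_calls[0]["type"] == "tool_call"
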
View File

@@ -237,25 +237,25 @@ def default_tool_parser(
"""Best-effort parsing of tools."""
tool_calls = []
invalid_tool_calls = []
for tool_call in raw_tool_calls:
if "function" not in tool_call:
for raw_tool_call in raw_tool_calls:
if "function" not in raw_tool_call:
continue
else:
function_name = tool_call["function"]["name"]
function_name = raw_tool_call["function"]["name"]
try:
function_args = json.loads(tool_call["function"]["arguments"])
parsed = ToolCall(
function_args = json.loads(raw_tool_call["function"]["arguments"])
parsed = tool_call(
name=function_name or "",
args=function_args or {},
id=tool_call.get("id"),
id=raw_tool_call.get("id"),
)
tool_calls.append(parsed)
except json.JSONDecodeError:
invalid_tool_calls.append(
InvalidToolCall(
invalid_tool_call(
name=function_name,
args=tool_call["function"]["arguments"],
id=tool_call.get("id"),
args=raw_tool_call["function"]["arguments"],
id=raw_tool_call.get("id"),
error=None,
)
)
@@ -272,7 +272,7 @@ def default_tool_chunk_parser(raw_tool_calls: List[dict]) -> List[ToolCallChunk]
else:
function_args = tool_call["function"]["arguments"]
function_name = tool_call["function"]["name"]
parsed = ToolCallChunk(
parsed = tool_call_chunk(
name=function_name,
args=function_args,
id=tool_call.get("id"),

View File

@@ -16,6 +16,7 @@ from typing import (
Any,
Callable,
Dict,
Iterable,
List,
Literal,
Optional,
@@ -40,6 +41,7 @@ if TYPE_CHECKING:
from langchain_text_splitters import TextSplitter
from langchain_core.language_models import BaseLanguageModel
from langchain_core.prompt_values import PromptValue
from langchain_core.runnables.base import Runnable
AnyMessage = Union[
@@ -284,7 +286,7 @@ def _convert_to_message(message: MessageLikeRepresentation) -> BaseMessage:
def convert_to_messages(
messages: Sequence[MessageLikeRepresentation],
messages: Union[Iterable[MessageLikeRepresentation], PromptValue],
) -> List[BaseMessage]:
"""Convert a sequence of messages to a list of messages.
@@ -294,6 +296,11 @@ def convert_to_messages(
Returns:
List of messages (BaseMessages).
"""
# Import here to avoid circular imports
from langchain_core.prompt_values import PromptValue
if isinstance(messages, PromptValue):
return messages.to_messages()
return [_convert_to_message(m) for m in messages]
@@ -329,7 +336,7 @@ def _runnable_support(func: Callable) -> Callable:
@_runnable_support
def filter_messages(
messages: Sequence[MessageLikeRepresentation],
messages: Union[Iterable[MessageLikeRepresentation], PromptValue],
*,
include_names: Optional[Sequence[str]] = None,
exclude_names: Optional[Sequence[str]] = None,
@@ -417,7 +424,7 @@ def filter_messages(
@_runnable_support
def merge_message_runs(
messages: Sequence[MessageLikeRepresentation],
messages: Union[Iterable[MessageLikeRepresentation], PromptValue],
) -> List[BaseMessage]:
"""Merge consecutive Messages of the same type.
@@ -451,12 +458,12 @@ def merge_message_runs(
HumanMessage("wait your favorite food", id="bar",),
AIMessage(
"my favorite colo",
tool_calls=[ToolCall(name="blah_tool", args={"x": 2}, id="123")],
tool_calls=[ToolCall(name="blah_tool", args={"x": 2}, id="123", type="tool_call")],
id="baz",
),
AIMessage(
[{"type": "text", "text": "my favorite dish is lasagna"}],
tool_calls=[ToolCall(name="blah_tool", args={"x": -10}, id="456")],
tool_calls=[ToolCall(name="blah_tool", args={"x": -10}, id="456", type="tool_call")],
id="blur",
),
]
@@ -474,8 +481,8 @@ def merge_message_runs(
{"type": "text", "text": "my favorite dish is lasagna"}
],
tool_calls=[
ToolCall({"name": "blah_tool", "args": {"x": 2}, "id": "123"),
ToolCall({"name": "blah_tool", "args": {"x": -10}, "id": "456")
ToolCall({"name": "blah_tool", "args": {"x": 2}, "id": "123", "type": "tool_call"}),
ToolCall({"name": "blah_tool", "args": {"x": -10}, "id": "456", "type": "tool_call"})
]
id="baz"
),
@@ -506,7 +513,7 @@ def merge_message_runs(
@_runnable_support
def trim_messages(
messages: Sequence[MessageLikeRepresentation],
messages: Union[Iterable[MessageLikeRepresentation], PromptValue],
*,
max_tokens: int,
token_counter: Union[

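To show what widening the signatures to Union[Iterable[...], PromptValue] buys, a sketch passing a PromptValue straight into convert_to_messages; the template text is illustrative:

# Sketch: a PromptValue (e.g. from ChatPromptTemplate.invoke) can now be
# passed directly to convert_to_messages, filter_messages,
# merge_message_runs and trim_messages.
from langchain_core.messages import convert_to_messages
from langchain_core.prompts import ChatPromptTemplate

prompt_value = ChatPromptTemplate.from_messages(
    [("system", "You are terse."), ("human", "{question}")]
).invoke({"question": "Why is the sky blue?"})

messages = convert_to_messages(prompt_value)  # -> List[BaseMessage]
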
View File

@@ -24,17 +24,20 @@ class PromptValue(Serializable, ABC):
"""Base abstract class for inputs to any language model.
PromptValues can be converted to both LLM (pure text-generation) inputs and
ChatModel inputs.
ChatModel inputs.
"""
@classmethod
def is_lc_serializable(cls) -> bool:
"""Return whether this class is serializable."""
"""Return whether this class is serializable. Defaults to True."""
return True
@classmethod
def get_lc_namespace(cls) -> List[str]:
"""Get the namespace of the langchain object."""
"""Get the namespace of the langchain object.
This is used to determine the namespace of the object when serializing.
Defaults to ["langchain", "schema", "prompt"].
"""
return ["langchain", "schema", "prompt"]
@abstractmethod
@@ -55,7 +58,10 @@ class StringPromptValue(PromptValue):
@classmethod
def get_lc_namespace(cls) -> List[str]:
"""Get the namespace of the langchain object."""
"""Get the namespace of the langchain object.
This is used to determine the namespace of the object when serializing.
Defaults to ["langchain", "prompts", "base"].
"""
return ["langchain", "prompts", "base"]
def to_string(self) -> str:
@@ -86,7 +92,10 @@ class ChatPromptValue(PromptValue):
@classmethod
def get_lc_namespace(cls) -> List[str]:
"""Get the namespace of the langchain object."""
"""Get the namespace of the langchain object.
This is used to determine the namespace of the object when serializing.
Defaults to ["langchain", "prompts", "chat"].
"""
return ["langchain", "prompts", "chat"]
@@ -94,7 +103,8 @@ class ImageURL(TypedDict, total=False):
"""Image URL."""
detail: Literal["auto", "low", "high"]
"""Specifies the detail level of the image."""
"""Specifies the detail level of the image. Defaults to "auto".
Can be "auto", "low", or "high"."""
url: str
"""Either a URL of the image or the base64 encoded image data."""
@@ -127,5 +137,8 @@ class ChatPromptValueConcrete(ChatPromptValue):
@classmethod
def get_lc_namespace(cls) -> List[str]:
"""Get the namespace of the langchain object."""
"""Get the namespace of the langchain object.
This is used to determine the namespace of the object when serializing.
Defaults to ["langchain", "prompts", "chat"].
"""
return ["langchain", "prompts", "chat"]

View File

@@ -1060,7 +1060,7 @@ class ChatPromptTemplate(BaseChatPromptTemplate):
def from_messages(
cls,
messages: Sequence[MessageLikeRepresentation],
template_format: Literal["f-string", "mustache"] = "f-string",
template_format: Literal["f-string", "mustache", "jinja2"] = "f-string",
) -> ChatPromptTemplate:
"""Create a chat prompt template from a variety of message formats.
@@ -1274,7 +1274,7 @@ class ChatPromptTemplate(BaseChatPromptTemplate):
def _create_template_from_message_type(
message_type: str,
template: Union[str, list],
template_format: Literal["f-string", "mustache"] = "f-string",
template_format: Literal["f-string", "mustache", "jinja2"] = "f-string",
) -> BaseMessagePromptTemplate:
"""Create a message prompt template from a message type and template string.
@@ -1341,7 +1341,7 @@ def _create_template_from_message_type(
def _convert_to_message(
message: MessageLikeRepresentation,
template_format: Literal["f-string", "mustache"] = "f-string",
template_format: Literal["f-string", "mustache", "jinja2"] = "f-string",
) -> Union[BaseMessage, BaseMessagePromptTemplate, BaseChatPromptTemplate]:
"""Instantiate a message from a variety of message formats.

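With "jinja2" added to the accepted template_format literals, a sketch of the new option (assumes the jinja2 package is installed; the template text is illustrative):

# Sketch: jinja2-style variables instead of f-string braces.
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages(
    [("human", "Summarize {{ text }} in {{ n }} bullet points.")],
    template_format="jinja2",
)
print(prompt.invoke({"text": "the diff above", "n": 3}))
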
View File

@@ -214,6 +214,7 @@ class PromptTemplate(StringPromptTemplate):
cls,
template_file: Union[str, Path],
input_variables: Optional[List[str]] = None,
encoding: Optional[str] = None,
**kwargs: Any,
) -> PromptTemplate:
"""Load a prompt from a file.
@@ -222,13 +223,15 @@ class PromptTemplate(StringPromptTemplate):
template_file: The path to the file containing the prompt template.
input_variables: [DEPRECATED] A list of variable names the final prompt
template will expect. Defaults to None.
encoding: The encoding system for opening the template file.
If not provided, will use the OS default.
input_variables is ignored as from_file now delegates to from_template().
Returns:
The prompt loaded from the file.
"""
with open(str(template_file), "r") as f:
with open(str(template_file), "r", encoding=encoding) as f:
template = f.read()
if input_variables:
warnings.warn(

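A sketch of the new encoding parameter; the file name is a placeholder and is assumed to contain an f-string template:

# Sketch: open the template file as UTF-8 instead of the OS default encoding.
from langchain_core.prompts import PromptTemplate

prompt = PromptTemplate.from_file("prompt_template.txt", encoding="utf-8")
# assuming the file contains e.g. "Write a haiku about {subject}."
print(prompt.format(subject="unicode handling"))
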
View File

@@ -53,14 +53,13 @@ RetrieverOutputLike = Runnable[Any, RetrieverOutput]
class BaseRetriever(RunnableSerializable[RetrieverInput, RetrieverOutput], ABC):
"""Abstract base class for a Document retrieval system.
A retrieval system is defined as something that can take string queries and return
the most 'relevant' Documents from some source.
Usage:
A retriever follows the standard Runnable interface, and should be used
via the standard runnable methods of `invoke`, `ainvoke`, `batch`, `abatch`.
via the standard Runnable methods of `invoke`, `ainvoke`, `batch`, `abatch`.
Implementation:
@@ -89,7 +88,7 @@ class BaseRetriever(RunnableSerializable[RetrieverInput, RetrieverOutput], ABC):
\"\"\"(Optional) async native implementation.\"\"\"
return self.docs[:self.k]
Example: A simple retriever based on a scitkit learn vectorizer
Example: A simple retriever based on a scikit-learn vectorizer
.. code-block:: python
@@ -178,12 +177,12 @@ class BaseRetriever(RunnableSerializable[RetrieverInput, RetrieverOutput], ABC):
Main entry point for synchronous retriever invocations.
Args:
input: The query string
config: Configuration for the retriever
**kwargs: Additional arguments to pass to the retriever
input: The query string.
config: Configuration for the retriever. Defaults to None.
**kwargs: Additional arguments to pass to the retriever.
Returns:
List of relevant documents
List of relevant documents.
Examples:
@@ -237,12 +236,12 @@ class BaseRetriever(RunnableSerializable[RetrieverInput, RetrieverOutput], ABC):
Main entry point for asynchronous retriever invocations.
Args:
input: The query string
config: Configuration for the retriever
**kwargs: Additional arguments to pass to the retriever
input: The query string.
config: Configuration for the retriever. Defaults to None.
**kwargs: Additional arguments to pass to the retriever.
Returns:
List of relevant documents
List of relevant documents.
Examples:
@@ -292,10 +291,10 @@ class BaseRetriever(RunnableSerializable[RetrieverInput, RetrieverOutput], ABC):
"""Get documents relevant to a query.
Args:
query: String to find relevant documents for
run_manager: The callback handler to use
query: String to find relevant documents for.
run_manager: The callback handler to use.
Returns:
List of relevant documents
List of relevant documents.
"""
async def _aget_relevant_documents(
@@ -333,18 +332,21 @@ class BaseRetriever(RunnableSerializable[RetrieverInput, RetrieverOutput], ABC):
`get_relevant_documents directly`.
Args:
query: string to find relevant documents for
callbacks: Callback manager or list of callbacks
tags: Optional list of tags associated with the retriever. Defaults to None
query: string to find relevant documents for.
callbacks: Callback manager or list of callbacks. Defaults to None.
tags: Optional list of tags associated with the retriever.
These tags will be associated with each call to this retriever,
and passed as arguments to the handlers defined in `callbacks`.
metadata: Optional metadata associated with the retriever. Defaults to None
Defaults to None.
metadata: Optional metadata associated with the retriever.
This metadata will be associated with each call to this retriever,
and passed as arguments to the handlers defined in `callbacks`.
run_name: Optional name for the run.
Defaults to None.
run_name: Optional name for the run. Defaults to None.
**kwargs: Additional arguments to pass to the retriever.
Returns:
List of relevant documents
List of relevant documents.
"""
config: RunnableConfig = {}
if callbacks:
@@ -374,18 +376,21 @@ class BaseRetriever(RunnableSerializable[RetrieverInput, RetrieverOutput], ABC):
`aget_relevant_documents directly`.
Args:
query: string to find relevant documents for
callbacks: Callback manager or list of callbacks
tags: Optional list of tags associated with the retriever. Defaults to None
query: string to find relevant documents for.
callbacks: Callback manager or list of callbacks.
tags: Optional list of tags associated with the retriever.
These tags will be associated with each call to this retriever,
and passed as arguments to the handlers defined in `callbacks`.
metadata: Optional metadata associated with the retriever. Defaults to None
Defaults to None.
metadata: Optional metadata associated with the retriever.
This metadata will be associated with each call to this retriever,
and passed as arguments to the handlers defined in `callbacks`.
run_name: Optional name for the run.
Defaults to None.
run_name: Optional name for the run. Defaults to None.
**kwargs: Additional arguments to pass to the retriever.
Returns:
List of relevant documents
List of relevant documents.
"""
config: RunnableConfig = {}
if callbacks:

View File

@@ -1327,7 +1327,7 @@ class Runnable(Generic[Input, Output], ABC):
def with_config(
self,
config: Optional[RunnableConfig] = None,
# Sadly Unpack is not well supported by mypy so this will have to be untyped
# Sadly Unpack is not well-supported by mypy so this will have to be untyped
**kwargs: Any,
) -> Runnable[Input, Output]:
"""
@@ -2150,6 +2150,7 @@ class Runnable(Generic[Input, Output], ABC):
@beta_decorator.beta(message="This API is in beta and may change in the future.")
def as_tool(
self,
args_schema: Optional[Type[BaseModel]] = None,
*,
name: Optional[str] = None,
description: Optional[str] = None,
@@ -2161,9 +2162,11 @@ class Runnable(Generic[Input, Output], ABC):
``args_schema`` from a Runnable. Where possible, schemas are inferred
from ``runnable.get_input_schema``. Alternatively (e.g., if the
Runnable takes a dict as input and the specific dict keys are not typed),
pass ``arg_types`` to specify the required arguments.
the schema can be specified directly with ``args_schema``. You can also
pass ``arg_types`` to just specify the required arguments and their types.
Args:
args_schema: The schema for the tool. Defaults to None.
name: The name of the tool. Defaults to None.
description: The description of the tool. Defaults to None.
arg_types: A dictionary of argument names to types. Defaults to None.
@@ -2190,7 +2193,28 @@ class Runnable(Generic[Input, Output], ABC):
as_tool = runnable.as_tool()
as_tool.invoke({"a": 3, "b": [1, 2]})
``dict`` input, specifying schema:
``dict`` input, specifying schema via ``args_schema``:
.. code-block:: python
from typing import Any, Dict, List
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.runnables import RunnableLambda
def f(x: Dict[str, Any]) -> str:
return str(x["a"] * max(x["b"]))
class FSchema(BaseModel):
\"\"\"Apply a function to an integer and list of integers.\"\"\"
a: int = Field(..., description="Integer")
b: List[int] = Field(..., description="List of ints")
runnable = RunnableLambda(f)
as_tool = runnable.as_tool(FSchema)
as_tool.invoke({"a": 3, "b": [1, 2]})
``dict`` input, specifying schema via ``arg_types``:
.. code-block:: python
@@ -2226,7 +2250,11 @@ class Runnable(Generic[Input, Output], ABC):
from langchain_core.tools import convert_runnable_to_tool
return convert_runnable_to_tool(
self, name=name, description=description, arg_types=arg_types
self,
args_schema=args_schema,
name=name,
description=description,
arg_types=arg_types,
)

View File

@@ -159,7 +159,7 @@ class StandardStreamEvent(BaseStreamEvent):
class CustomStreamEvent(BaseStreamEvent):
"""Custom stream event created by the user.
.. versionadded:: 0.2.14
.. versionadded:: 0.2.15
"""
# Overwrite the event field to be more specific.

View File

@@ -150,7 +150,6 @@ class BaseStore(Generic[K, V], ABC):
Yields:
Iterator[K | str]: An iterator over keys that match the given prefix.
This method is allowed to return an iterator over either K or str
depending on what makes more sense for the given store.
"""
@@ -165,7 +164,6 @@ class BaseStore(Generic[K, V], ABC):
Yields:
Iterator[K | str]: An iterator over keys that match the given prefix.
This method is allowed to return an iterator over either K or str
depending on what makes more sense for the given store.
"""

Some files were not shown because too many files have changed in this diff.