Compare commits


179 Commits

Author SHA1 Message Date
Bagatur
842065a9cc community[patch]: Release 0.2.9 (#24453) 2024-07-19 12:50:22 -07:00
Bagatur
27ad6a4bb3 langchain[patch]: Release 0.2.10 (#24452) 2024-07-19 12:50:13 -07:00
Bagatur
dda9438e87 community[patch]: gpt-4o-mini costs (#24421) 2024-07-19 19:02:44 +00:00
Eugene Yurtsev
604dfe2d99 community[patch]: Force opt-in for WebResearchRetriever (CVE-2024-3095) (#24451)
This PR addresses the issue raised in CVE-2024-3095:

https://huntr.com/bounties/e62d4895-2901-405b-9559-38276b6a5273

Unfortunately, we didn't do a good job writing the initial report. It's
pointing at both the wrong package and the wrong code.

The affected code is the WebResearchRetriever, not the AsyncHTMLLoader, and the
WebResearchRetriever lives in langchain-community.

The vulnerable code lives here: 

0bd3f4e129/libs/community/langchain_community/retrievers/web_research.py (L233-L233)


This PR adds a forced opt-in for users to make sure they are aware of
the risk and can mitigate by configuring a proxy:


0bd3f4e129/libs/community/langchain_community/retrievers/web_research.py (L84-L84)
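
For illustration, a rough sketch of what the opt-in looks like from the caller's side. The flag name `allow_dangerous_requests` and its placement on `from_llm` are assumptions based on the opt-in pattern used elsewhere in langchain-community and may differ from the actual change:

```python
from langchain_community.retrievers.web_research import WebResearchRetriever

# Without an explicit opt-in, constructing the retriever should now raise,
# forcing users to acknowledge the request-forgery risk from CVE-2024-3095.
retriever = WebResearchRetriever.from_llm(
    vectorstore=vectorstore,        # placeholder: any LangChain vector store
    llm=llm,                        # placeholder: any chat model
    search=search,                  # placeholder: a GoogleSearchAPIWrapper
    allow_dangerous_requests=True,  # assumed flag name; explicit opt-in
)
```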
2024-07-19 18:51:35 +00:00
Bagatur
f101c759ed docs: how to pass runtime secrets (#24450) 2024-07-19 18:36:28 +00:00
Asi Greenholts
372c27f2e5 community[minor]: [GoogleApiYoutubeLoader] Replace API used in _get_document_for_channel from search to playlistItem (#24034)
- **Description:** Search has a limit of 500 results, while playlistItems
doesn't. Also added a class to the except clause to catch another common error.
- **Issue:** None
- **Dependencies:** None
- **Twitter handle:** @TupleType

---------

Co-authored-by: asi-cider <88270351+asi-cider@users.noreply.github.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-07-19 14:04:34 -04:00
Rafael Pereira
6a45bf9554 community[minor]: GraphCypherQAChain to accept additional inputs as provided by the user for cypher generation (#24300)
**Description:** This PR introduces a change to the
`cypher_generation_chain` to dynamically concatenate inputs. This
improvement aims to streamline the input handling process and make the
method more flexible. The change involves updating the arguments
dictionary with all elements from the `inputs` dictionary, ensuring that
all necessary inputs are dynamically appended. This will ensure that any
cypher generation template will not require a new `_call` method patch.

**Issue:** This PR fixes issue #24260.
2024-07-19 14:03:14 -04:00
Philippe PRADOS
f5856680fe community[minor]: add mongodb byte store (#23876)
The `MongoDBStore` can only manage documents.
It's not possible to use MongoDB for a `CacheBackedEmbeddings`.

With this new implementation, it's possible to use:
```python
CacheBackedEmbeddings.from_bytes_store(
    underlying_embeddings=embeddings,
    document_embedding_cache=MongoDBByteStore(
      connection_string=db_uri,
      db_name=db_name,
      collection_name=collection_name,
  ),
)
```
and use MongoDB to cache the embeddings!
2024-07-19 13:54:12 -04:00
yabooung
07715f815b community[minor]: Add ability to specify file encoding and json encoding for FileChatMessageHistory (#24258)
Description:
Add UTF-8 encoding support

Issue:
Inability to properly handle characters from certain languages (e.g.,
Korean)

Fix:
Implement UTF-8 encoding in FileChatMessageHistory
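
A minimal usage sketch; the keyword names `encoding` and `ensure_ascii` are assumptions based on this PR's description and may differ from the final signature:

```python
from langchain_community.chat_message_histories import FileChatMessageHistory

# Persist history with an explicit UTF-8 file encoding and without escaping
# non-ASCII characters in the JSON payload (assumed parameter names).
history = FileChatMessageHistory(
    file_path="chat_history.json",
    encoding="utf-8",
    ensure_ascii=False,
)
history.add_user_message("안녕하세요")  # Korean text now round-trips correctly
```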

---------

Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-07-19 13:53:21 -04:00
Dristy Srivastava
020cc1cf3e Community[minor]: Added checksum in while send data to pebblo-cloud (#23968)
- **Description:**
    - Updated checksum in doc metadata.
    - Sending checksum and removing actual content when sending data to `pebblo-cloud`, if `classifier-location` is `pebblo-cloud`, in the `/loader/doc` API.
    - Adding `pb_id` (i.e. Pebblo ID) to doc metadata.
    - Refactoring as needed.
    - Sending `content-checksum` and removing actual content when sending data to `pebblo-cloud`, if `classifier-location` is `pebblo-cloud`, in the `prompt` API.
- **Issue:** NA
- **Dependencies:** NA
- **Tests:** Updated
- **Docs:** NA

---------

Co-authored-by: dristy.cd <dristy@clouddefense.io>
2024-07-19 13:52:54 -04:00
Eun Hye Kim
9aae8ef416 core[patch]: Fix utils.json_schema.dereference_refs (#24335 KeyError: 400 in JSON schema processing) (#24337)
Description:
This PR fixes a KeyError: 400 that occurs in the JSON schema processing
within the reduce_openapi_spec function. The _retrieve_ref function in
json_schema.py was modified to handle missing components gracefully by
continuing to the next component if the current one is not found. This
ensures that the OpenAPI specification is fully interpreted and the
agent executes without errors.

Issue:
Fixes issue #24335

Dependencies:
No additional dependencies are required for this change.

Twitter handle:
@lunara_x
2024-07-19 13:31:00 -04:00
keval dekivadiya
06f47678ae community[minor]: Add TextEmbed Embedding Integration (#22946)
**Description:**

**TextEmbed** is a high-performance embedding inference server designed
to provide a high-throughput, low-latency solution for serving
embeddings. It supports various sentence-transformer models and includes
the ability to deploy image and text embedding models. TextEmbed offers
flexibility and scalability for diverse applications.

- **PyPI Package:** [TextEmbed on
PyPI](https://pypi.org/project/textembed/)
- **Docker Image:** [TextEmbed on Docker
Hub](https://hub.docker.com/r/kevaldekivadiya/textembed)
- **GitHub Repository:** [TextEmbed on
GitHub](https://github.com/kevaldekivadiya2415/textembed)

**PR Description**
This PR adds functionality for embedding documents and queries using the
`TextEmbedEmbeddings` class. The implementation allows for both
synchronous and asynchronous embedding requests to a TextEmbed API
endpoint. The class handles batching and permuting of input texts to
optimize the embedding process.

**Example Usage:**

```python
from langchain_community.embeddings import TextEmbedEmbeddings

# Initialise the embeddings class
embeddings = TextEmbedEmbeddings(model="your-model-id", api_key="your-api-key", api_url="your_api_url")

# Define a list of documents
documents = [
    "Data science involves extracting insights from data.",
    "Artificial intelligence is transforming various industries.",
    "Cloud computing provides scalable computing resources over the internet.",
    "Big data analytics helps in understanding large datasets.",
    "India has a diverse cultural heritage."
]

# Define a query
query = "What is the cultural heritage of India?"

# Embed all documents
document_embeddings = embeddings.embed_documents(documents)

# Embed the query
query_embedding = embeddings.embed_query(query)

# Print embeddings for each document
for i, embedding in enumerate(document_embeddings):
    print(f"Document {i+1} Embedding:", embedding)

# Print the query embedding
print("Query Embedding:", query_embedding)

---------

Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2024-07-19 17:30:25 +00:00
Shikanime Deva
9c3da11910 Fix MultiQueryRetriever breaking Embeddings with empty lines (#21093)
Fix MultiQueryRetriever breaking Embeddings with empty lines

```
[chain/end] [1:chain:ConversationalRetrievalChain > 2:retriever:Retriever > 3:retriever:Retriever > 4:chain:LLMChain] [2.03s] Exiting Chain run with output:
[outputs]
> /workspaces/Sfeir/sncf/metabot-backend/.venv/lib/python3.11/site-packages/langchain/retrievers/multi_query.py(116)_aget_relevant_documents()
-> if self.include_original:
(Pdb) queries
['## Alternative questions for "Hello, tell me about phones?":', '', '1. **What are the latest trends in smartphone technology?** (Focuses on recent advancements)', '2. **How has the mobile phone industry evolved over the years?** (Historical perspective)', '3. **What are the different types of phones available in the market, and which one is best for me?** (Categorization and recommendation)']
```

Example of failure on VertexAIEmbeddings

```
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
	status = StatusCode.INVALID_ARGUMENT
	details = "The text content is empty."
	debug_error_string = "UNKNOWN:Error received from peer ipv4:142.250.184.234:443 {created_time:"2024-04-30T09:57:45.625698408+00:00", grpc_status:3, grpc_message:"The text content is empty."}"
```

Fixes: #15959
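
The essence of the fix, sketched in isolation (not the actual patch): drop blank lines before the generated queries are embedded.

```python
def parse_generated_queries(text: str) -> list[str]:
    """Split LLM output into queries, skipping blank lines that would
    otherwise reach the embedding model and trigger errors like
    'The text content is empty.'
    """
    return [line.strip() for line in text.split("\n") if line.strip()]

queries = parse_generated_queries(
    "## Alternative questions:\n\n1. Latest smartphone trends?\n\n2. How has the industry evolved?"
)
# -> ['## Alternative questions:', '1. Latest smartphone trends?', '2. How has the industry evolved?']
```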

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-07-19 17:13:12 +00:00
John Kelly
5affbada61 langchain: Add aadd_documents to ParentDocumentRetriever (#23969)
- **Description:** Add an async version of `add_documents` to
`ParentDocumentRetriever`
-  **Twitter handle:** @johnkdev
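
A minimal usage sketch of the new async method, assuming an already configured retriever:

```python
import asyncio
from langchain_core.documents import Document

async def index_docs(retriever, docs: list[Document]) -> None:
    # Async counterpart of retriever.add_documents(docs)
    await retriever.aadd_documents(docs)

# asyncio.run(index_docs(parent_document_retriever, [Document(page_content="...")]))
```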

---------

Co-authored-by: John Kelly <j.kelly@mwam.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-07-19 13:12:39 -04:00
Andrew Benton
f9d64d22e5 community[minor]: Add Riza Python/JS code execution tool (#23995)
- **Description:** Add Riza Python/JS code execution tool
- **Issue:** N/A
- **Dependencies:** an optional dependency on the `rizaio` pypi package
- **Twitter handle:** [@rizaio](https://x.com/rizaio)

[Riza](https://riza.io) is a safe code execution environment for
agent-generated Python and JavaScript that's easy to integrate into
langchain apps. This PR adds two new tool classes to the community
package.
2024-07-19 17:03:22 +00:00
Ben Chambers
3691701d58 community[minor]: Add keybert-based link extractor (#24311)
- **Description:** Add a `KeybertLinkExtractor` for graph vectorstores.
This allows extracting links from keywords in a Document and linking
nodes that have common keywords.
- **Issue:** None
- **Dependencies:** None.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
2024-07-19 12:25:07 -04:00
Erick Friis
ef049769f0 core[patch]: Release 0.2.22 (#24423)
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-07-19 09:09:24 -07:00
Bagatur
cd19ba9a07 core[patch]: core lint fix (#24447) 2024-07-19 09:01:22 -07:00
Ben Chambers
83f3d95ffa community[minor]: GLiNER link extraction (#24314)
- **Description:** This allows extracting links between documents with
common named entities using [GLiNER](https://github.com/urchade/GLiNER).
- **Issue:** None
- **Dependencies:** None

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-07-19 15:34:54 +00:00
Anas Khan
b5acb91080 Mask API keys for various LLM/ChatModel Modules (#13885)
**Description:** 
- Added masking of the API Keys for the modules:
  - `langchain/chat_models/openai.py`
  - `langchain/llms/openai.py`
  - `langchain/llms/google_palm.py`
  - `langchain/chat_models/google_palm.py`
  - `langchain/llms/edenai.py`

- Updated the modules to utilize `SecretStr` from pydantic to securely
manage API key.
- Added unit/integration tests
- `langchain/chat_models/azure_openai.py` used the `openai_api_key` derived
from the `ChatOpenAI` class and assumed `openai_api_key` is a str, so we
changed it to expect `SecretStr` instead.

**Issue:** https://github.com/langchain-ai/langchain/issues/12165 ,
**Dependencies:** none,
**Tag maintainer:** @eyurtsev
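
For context, a small sketch of the `SecretStr` pattern being applied (generic pydantic usage, not the exact diff):

```python
from langchain_core.pydantic_v1 import BaseModel, SecretStr

class ExampleConfig(BaseModel):
    # Stored as SecretStr so reprs and logs show '**********' instead of the key
    openai_api_key: SecretStr

config = ExampleConfig(openai_api_key="sk-real-key")
print(config.openai_api_key)                     # -> **********
print(config.openai_api_key.get_secret_value())  # -> sk-real-key (explicit unmasking)
```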

---------

Co-authored-by: HassanA01 <anikeboss@gmail.com>
Co-authored-by: Aneeq Hassan <aneeq.hassan@utoronto.ca>
Co-authored-by: kristinspenc <kristinspenc2003@gmail.com>
Co-authored-by: faisalt14 <faisalt14@gmail.com>
Co-authored-by: Harshil-Patel28 <76663814+Harshil-Patel28@users.noreply.github.com>
Co-authored-by: kristinspenc <146893228+kristinspenc@users.noreply.github.com>
Co-authored-by: faisalt14 <90787271+faisalt14@users.noreply.github.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-07-19 15:23:34 +00:00
ccurme
f99369a54c community[patch]: fix formatting (#24443)
Somehow this got through CI:
https://github.com/langchain-ai/langchain/pull/24363
2024-07-19 14:38:53 +00:00
Ben Chambers
242b085be7 Merge pull request #24315
* community: Add Hierarchy link extractor

* add example

* lint
2024-07-19 09:42:26 -04:00
Rhuan Barros
c3308f31bc Merge pull request #24363
* important email fields
2024-07-19 09:41:20 -04:00
Piotr Romanowski
c50dd79512 docs: Update langchain-openai package version in chat_token_usage_tracking (#24436)
This PR updates docs to mention correct version of the
`langchain-openai` package required to use the `stream_usage` parameter.

As it can be noticed in the details of this [merge
commit](722c8f50ea),
that functionality is available only in `langchain-openai >= 0.1.9`
while docs state it's available in `langchain-openai >= 0.1.8`.
2024-07-19 13:07:37 +00:00
Han Sol Park
aade9bfde5 Mask API key for ChatOpenAI based chat_models (#14293)
- **Description**: Mask API key for ChatOpenAI-based chat_models
(openai, azureopenai, anyscale, everlyai).
Made changes to all chat_models that are based on ChatOpenAI, since all
of them assume that openai_api_key is a str rather than SecretStr.
  - **Issue:**: #12165 
  - **Dependencies:**  N/A
  - **Tag maintainer:** @eyurtsev
  - **Twitter handle:** N/A

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-07-19 02:25:38 +00:00
William FH
0ee6ed76ca [Evaluation] Pass in seed directly (#24403)
adding test rn
2024-07-18 19:12:28 -07:00
Nuno Campos
62b6965d2a core: In ensure_config don't copy dunder configurable keys to metadata (#24420) 2024-07-18 22:28:52 +00:00
Eugene Yurtsev
ef22ebe431 standard-tests[patch]: Add pytest assert rewrites (#24408)
This will surface nice error messages in subclasses that fail assertions.
2024-07-18 21:41:11 +00:00
Eugene Yurtsev
f62b323108 core[minor]: Support all versions of pydantic base model in argsschema (#24418)
This adds support to any pydantic base model for tools.

The only potential issue is that `get_input_schema()` will not always
return a v1 base model.
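
A sketch of what this enables, assuming pydantic v2 is installed as `pydantic` (the tool and field names are illustrative):

```python
from pydantic import BaseModel, Field  # a v2 model is now accepted as args_schema
from langchain_core.tools import tool

class MultiplyArgs(BaseModel):
    a: int = Field(..., description="First factor")
    b: int = Field(..., description="Second factor")

@tool(args_schema=MultiplyArgs)
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b
```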
2024-07-18 17:14:23 -04:00
Prakul
b2bc15e640 docs: Update mongodb README.md (#24412)
2024-07-18 14:02:34 -07:00
Evan Harris
61ea7bf60b Add a ListRerank document compressor (#13311)
- **Description:** This PR adds a new document compressor called
`ListRerank`, derived from `BaseDocumentCompressor`. It's a near-exact
implementation of the approach introduced by this paper: [Zero-Shot Listwise
Document Reranking with a Large Language
Model](https://arxiv.org/pdf/2305.02156.pdf), which finds it to
outperform pointwise reranking, the latter being somewhat implemented in
LangChain as
[LLMChainFilter](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/retrievers/document_compressors/chain_filter.py).
- **Issue:** None
- **Dependencies:** None
- **Tag maintainer:** @hwchase17 @izzymsft
- **Twitter handle:** @HarrisEMitchell

Notes:
1. I didn't add anything to `docs`. I wasn't exactly sure which patterns
to follow as [cohere reranker is under
Retrievers](https://python.langchain.com/docs/integrations/retrievers/cohere-reranker)
with other external document retrieval integrations, but other
contextual compression is
[here](https://python.langchain.com/docs/modules/data_connection/retrievers/contextual_compression/).
Happy to contribute to either with some direction.
2. I followed syntax, docstrings, implementation patterns, etc. as well
as I could looking at nearby modules. One thing I didn't do was put the
default prompt in a separate `.py` file like [Chain
Filter](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/retrievers/document_compressors/chain_filter_prompt.py)
and [Chain
Extract](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/retrievers/document_compressors/chain_extract_prompt.py).
Happy to follow that pattern if it would be preferred.

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-07-18 20:34:38 +00:00
Srijan Dubey
4c651ba13a Adding LangChain v0.2 support for nvidia ai endpoint, langchain-nvidia-ai-endpoints. Removed deprecated classes from nvidia_ai_endpoints.ipynb (#24411)
Description: Added support for LangChain v0.2 for the NVIDIA AI endpoint.
Implemented in-memory storage for chains using RunnableWithMessageHistory,
which is analogous to using `ConversationChain` with the default
`ConversationBufferMemory` in v0.1; that class is deprecated in favor of
`RunnableWithMessageHistory` in LangChain v0.2.

Issue: None

Dependencies: None.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-07-18 15:59:26 -04:00
Erick Friis
334fc1ed1c mongodb: release 0.1.7 (#24409) 2024-07-18 18:13:27 +00:00
ccurme
ba74341eee docs: update tool calling how-to to pass functions to bind_tools (#24402) 2024-07-18 08:53:48 -07:00
Harrison Chase
3adf710f1d docs: improve docs on tools (#24404)
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-07-18 08:52:12 -07:00
Eun Hye Kim
07c5c60f63 community: fix tool appending logic and update planner prompt in OpenAPI agent toolkit (#24384)
**Description:**
- Updated the format for the 'Action' section in the planner prompt to
ensure it must be one of the tools without additional words. Adjusted
the phrasing from "should be" to "must be" for clarity and
enforceability.
- Corrected the tool appending logic in the
`_create_api_controller_agent` function to ensure that
`RequestsDeleteToolWithParsing` and `RequestsPatchToolWithParsing` are
properly added to the tools list for "DELETE" and "PATCH" operations.

**Issue:** #24382

**Dependencies:** None

**Twitter handle:** @lunara_x

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-07-18 13:37:46 +00:00
Casey Clements
aade1550c6 docs: Adds MongoDBAtlasVectorSearch to VectorStore list compatible with Indexing API (#24374)
Adds MongoDBAtlasVectorSearch to list of VectorStores compatible with
the Indexing API.

(One line change.)

As of `langchain-mongodb = "0.1.7"`, the requirements that the
VectorStore have both add_documents and delete methods with an ids kwarg
is satisfied. #23535 contains the implementation of that, and has been
merged.
2024-07-18 09:37:29 -04:00
Chen Xiabin
63c60a31f0 [fix] baidu qianfan AiMessage with usage_metadata (#24389)
The AIMessage `usage_metadata` had an error; this fixes it.
2024-07-18 09:28:16 -04:00
João Dinis Ferreira
242de9aa5e docs: remove redundant --quiet option in pip install command (#24397)
- **Description:** Removes a redundant option in a `pip install` command
in the documentation.
- **Issue:** N/A
- **Dependencies:** N/A
2024-07-18 13:24:42 +00:00
ZhangShenao
916b813107 community[patch]: Fix spelling error in ConversationVectorStoreTokenBufferMemory doc-string (#24385)
Fix word spelling error in `ConversationVectorStoreTokenBufferMemory`
2024-07-18 12:27:36 +00:00
Rajendra Kadam
1c65529fd7 community[minor]: [PebbloSafeLoader] Rename loader type and add SharePointLoader to supported loaders (#24393)
Thank you for contributing to LangChain!

- [x] **PR title**: [PebbloSafeLoader] Rename loader type and add
SharePointLoader to supported loaders
    - **Description:** Minor fixes in the PebbloSafeLoader:
        - Renamed the loader type from `remote_db` to `cloud_folder`.
- Added `SharePointLoader` to the list of loaders supported by
PebbloSafeLoader.
    - **Issue:** NA
    - **Dependencies:** NA
- [x] **Add tests and docs**: NA
2024-07-18 08:23:12 -04:00
Eugene Yurtsev
6182a402f1 experimental[patch]: block a few more things from PALValidator (#24379)
* Please see security warning already in existing class.
* The approach here is fundamentally insecure, as it relies on a blocklist
  approach rather than one based on only running allowed nodes.
So users should only use this code if it's running in a properly
sandboxed environment.
2024-07-18 08:22:45 -04:00
Paolo Ráez
0dec72cab0 Community[patch]: Missing "stream" parameter in cloudflare_workersai (#23987)
### Description
Missing "stream" parameter. Without it, you'd never receive a stream of
tokens when using stream() or astream()

### Issue
No existing issue available
2024-07-18 02:09:39 +00:00
Eugene Yurtsev
570566b858 core[patch]: Update API reference for astream events (#24359)
Update the API reference for astream events to include information about
custom events.
2024-07-17 21:48:53 -04:00
Bagatur
f9baaae3ec docs: clean up tool how to titles (#24373) 2024-07-17 17:08:31 -07:00
Bagatur
4da1df568a docs: tools concepts (#24368) 2024-07-17 17:08:16 -07:00
Erick Friis
96ccba9c27 infra: 15s retry wait on test pypi (#24375) 2024-07-17 23:41:22 +00:00
Bagatur
a4c101ae97 core[patch]: Release 0.2.21 (#24372) 2024-07-17 22:44:35 +00:00
William FH
c5a07e2dd8 core[patch]: add InjectedToolArg annotation (#24279)
```python
from typing_extensions import Annotated
from langchain_core.tools import tool, InjectedToolArg
from langchain_anthropic import ChatAnthropic

@tool
def multiply(x: int, y: int, not_for_model: Annotated[dict, InjectedToolArg]) -> str:
    """multiply."""
    return x * y 

ChatAnthropic(model='claude-3-sonnet-20240229',).bind_tools([multiply]).invoke('5 times 3').tool_calls
'''
-> [{'name': 'multiply',
  'args': {'x': 5, 'y': 3},
  'id': 'toolu_01Y1QazYWhu4R8vF4hF4z9no',
  'type': 'tool_call'}]
'''
```

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-07-17 15:28:40 -07:00
Erick Friis
80f3d48195 openai: release 0.1.18 (#24369) 2024-07-17 22:26:33 +00:00
Bagatur
7d83189b19 openai[patch]: use model_name in AzureOpenAI.ls_model_name (#24366) 2024-07-17 15:24:05 -07:00
Nithish Raghunandanan
eb26b5535a couchbase: Add chat message history (#24356)
**Description:** Add support for chat message history using Couchbase

- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

---------

Co-authored-by: Nithish Raghunandanan <nithishr@users.noreply.github.com>
2024-07-17 15:22:42 -07:00
Eugene Yurtsev
96bac8e20d core[patch]: Fix regression requiring input_variables in few chat prompt templates (#24360)
* Fix regression that requires users to pass input_variables=[].

* Regression introduced by my own changes to this PR:
https://github.com/langchain-ai/langchain/pull/22851
2024-07-17 18:14:57 -04:00
Brice Fotzo
034a8c7c1b community: support advanced text extraction options for pdf documents (#20265)
**Description:** 
- Updated constructors in PyPDFParser and PyPDFLoader to handle
`extraction_mode` and additional kwargs, aligning with the capabilities
of `PageObject.extract_text()` from pypdf.

- Added `test_pypdf_loader_with_layout` along with a corresponding
example text file to validate layout extraction from PDFs.

**Issue:** fixes #19735 

**Dependencies:** This change requires updating the pypdf dependency
from version 3.4.0 to at least 4.0.0.

Additional changes include the addition of a new test
test_pypdf_loader_with_layout and an example text file to ensure the
functionality of layout extraction from PDFs aligns with the new
capabilities.
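
A usage sketch, assuming the new keyword is exposed on the loader as `extraction_mode` (mirroring pypdf's `extract_text`):

```python
from langchain_community.document_loaders import PyPDFLoader

# "layout" mode asks pypdf to approximate the visual layout of the page,
# which helps with multi-column documents and tables.
loader = PyPDFLoader("example.pdf", extraction_mode="layout")
docs = loader.load()
print(docs[0].page_content[:200])
```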

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-17 20:47:09 +00:00
hmasdev
a402de3dae langchain[patch]: fix wrong dict key in OutputFixingParser, RetryOutputParser and RetryWithErrorOutputParser (#23967)
# Description
This PR aims to solve a bug in `OutputFixingParser`, `RetryOutputParser`
and `RetryWithErrorOutputParser`
The bug is that the wrong keyword argument was given to `retry_chain`.
The correct keyword argument is 'completion', but 'input' is used.

This pull request makes the following changes:
1. correct a `dict` key given to `retry_chain`;
2. add a test when using the default prompt.
   - `NAIVE_FIX_PROMPT` for `OutputFixingParser`;
   - `NAIVE_RETRY_PROMPT` for `RetryOutputParser`;
   - `NAIVE_RETRY_WITH_ERROR_PROMPT` for `RetryWithErrorOutputParser`;
3. ~~add comments on `retry_chain` input and output types~~ clarify
`InputType` and `OutputType` of `retry_chain`

# Issue
The bug is pointed out in
https://github.com/langchain-ai/langchain/pull/19792#issuecomment-2196512928
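
For context, a typical way these parsers are wired up (illustrative, not part of the diff; `llm` is any chat model instance):

```python
from langchain.output_parsers import OutputFixingParser, PydanticOutputParser
from langchain_core.pydantic_v1 import BaseModel

class Joke(BaseModel):
    setup: str
    punchline: str

base_parser = PydanticOutputParser(pydantic_object=Joke)
# The retry chain is now invoked with the 'completion' key expected by NAIVE_FIX_PROMPT
fixing_parser = OutputFixingParser.from_llm(parser=base_parser, llm=llm)
fixing_parser.parse('{"setup": "Why did the chicken...", "punchline": "To get to')  # malformed output gets repaired
```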

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-17 20:34:46 +00:00
Casey Clements
a47f69a120 partners/mongodb : Significant MongoDBVectorSearch ID enhancements (#23535)
## Description

This pull-request improves the treatment of document IDs in
`MongoDBAtlasVectorSearch`.

Class method signatures of add_documents, add_texts, delete, and
from_texts
now include an `ids:Optional[List[str]]` keyword argument permitting the
user
greater control. 
Note that, as before, IDs may also be inferred from
`Document.metadata['_id']`
if present, but this is no longer required.
IDs can also optionally be returned from searches.

This PR closes the following JIRA issues.

* [PYTHON-4446](https://jira.mongodb.org/browse/PYTHON-4446)
MongoDBVectorSearch delete / add_texts function rework
* [PYTHON-4435](https://jira.mongodb.org/browse/PYTHON-4435) Add support
for "Indexing"
* [PYTHON-4534](https://jira.mongodb.org/browse/PYTHON-4534) Ensure
datetimes are json-serializable
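
A short sketch of the new `ids` handling described above (connection string, namespace, and index name are placeholders):

```python
from langchain_mongodb import MongoDBAtlasVectorSearch

vectorstore = MongoDBAtlasVectorSearch.from_connection_string(
    "mongodb+srv://...", "db.collection", embedding, index_name="vector_index"
)

# Explicit IDs can now be supplied instead of relying on Document.metadata['_id']
ids = vectorstore.add_texts(["doc one", "doc two"], ids=["id-1", "id-2"])

# ...and the same IDs can be used for deletion
vectorstore.delete(ids=["id-1"])
```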

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-17 13:26:20 -07:00
Erick Friis
cc2cbfabfc milvus: release 0.1.2 (#24365) 2024-07-17 19:42:44 +00:00
Eugene Yurtsev
9e4a0e76f6 core[patch]: Fix one unit test for chat prompt template (#24362)
Minor change that fixes a unit test that had missing assertions.
2024-07-17 18:56:48 +00:00
Erick Friis
81639243e2 openai: release 0.1.17 (#24361) 2024-07-17 18:50:42 +00:00
Erick Friis
61976a4147 pinecone: release 0.1.2 (#24355) 2024-07-17 17:09:07 +00:00
Bagatur
b5360e2e5f community[patch]: Release 0.2.8 (#24354) 2024-07-17 17:07:27 +00:00
ccurme
4cf67084d3 openai[patch]: fix key collision and _astream (#24345)
Fixes small issues introduced in
https://github.com/langchain-ai/langchain/pull/24150 (unreleased).
2024-07-17 12:59:26 -04:00
Luis Moros
bcb5f354ad community: Fix SQLDatabse.from_databricks issue when ran from Job (#24346)
- Description: When SQLDatabase.from_databricks is run from a Databricks
Workflow job, line 205 (default_host = context.browserHostName) throws
an ``AttributeError`` because the ``context`` object has no
``browserHostName`` attribute. The fix handles the exception and sets
the ``default_host`` variable to None.

---------

Co-authored-by: lmorosdb <lmorosdb>
Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2024-07-17 12:40:12 -04:00
Bagatur
24e9b48d15 langchain[patch]: Release 0.2.9 (#24327) 2024-07-17 09:39:57 -07:00
Rafael Pereira
cf28708e7b Neo4j: Update with non-deprecated cypher methods, and new method to associate relationship embeddings (#23725)
**Description:** At the moment the Neo4j wrapper uses setVectorProperty,
which is deprecated
([link](https://neo4j.com/docs/operations-manual/5/reference/procedures/#procedure_db_create_setVectorProperty)).
I replaced it with the non-deprecated version.

Neo4j recently introduced a new cypher method to associate embeddings
into relations using "setRelationshipVectorProperty" method. In this PR
I also implemented a new method to perform this association maintaining
the same format used in the "add_embeddings" method which is used to
associate embeddings into Nodes.
I also included a test case for this new method.
2024-07-17 12:37:47 -04:00
maang-h
2a3288b15d docs: Add ChatBaichuan docstrings (#24348)
- **Description:** Add ChatBaichuan rich docstrings.
- **Issue:** the issue #22296
2024-07-17 12:00:16 -04:00
Srijan Dubey
1792684e8f removed deprecated classes from pipelineai.ipynb, added support for LangChain v0.2 for PipelineAI integration (#24333)
Description: Added support for LangChain v0.2 for the PipelineAI
integration. Removed deprecated classes, replaced LLMChain with the
Runnable interface, and added StrOutputParser, which parses the LLM
result into the most likely string.

Issue: None

Dependencies: None.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-07-17 13:48:32 +00:00
Tobias Sette
e60ad12521 docs(infobip.ipynb): fix typo (#24328) 2024-07-17 13:33:34 +00:00
Rafael Pereira
fc41730e28 neo4j: Fix test for order-insensitive comparison and floating-point precision issues (#24338)
**Description:** 
This PR addresses two main issues in the `test_neo4jvector.py`:
1. **Order-insensitive Comparison:** Modified the
`test_retrieval_dictionary` to ensure that it passes regardless of the
order of returned values by parsing `page_content` into a structured
format (dictionary) before comparison.
2. **Floating-point Precision:** Updated
`test_neo4jvector_relevance_score` to handle minor floating-point
precision differences by using the `isclose` function for comparing
relevance scores with a relative tolerance.

Errors addressed:

- **test_neo4jvector_relevance_score:**
  ```
AssertionError: assert [(Document(page_content='foo', metadata={'page':
'0'}), 1.0000014305114746), (Document(page_content='bar',
metadata={'page': '1'}), 0.9998371005058289),
(Document(page_content='baz', metadata={'page': '2'}),
0.9993508458137512)] == [(Document(page_content='foo', metadata={'page':
'0'}), 1.0), (Document(page_content='bar', metadata={'page': '1'}),
0.9998376369476318), (Document(page_content='baz', metadata={'page':
'2'}), 0.9993523359298706)]
At index 0 diff: (Document(page_content='foo', metadata={'page': '0'}),
1.0000014305114746) != (Document(page_content='foo', metadata={'page':
'0'}), 1.0)
  Full diff:
  - [(Document(page_content='foo', metadata={'page': '0'}), 1.0),
+ [(Document(page_content='foo', metadata={'page': '0'}),
1.0000014305114746),
? +++++++++++++++
- (Document(page_content='bar', metadata={'page': '1'}),
0.9998376369476318),
? ^^^ ------
+ (Document(page_content='bar', metadata={'page': '1'}),
0.9998371005058289),
? ^^^^^^^^^
- (Document(page_content='baz', metadata={'page': '2'}),
0.9993523359298706),
? ----------
+ (Document(page_content='baz', metadata={'page': '2'}),
0.9993508458137512),
? ++++++++++
  ]
  ```

- **test_retrieval_dictionary:**
  ```
AssertionError: assert [Document(page_content='skills:\n- Python\n- Data
Analysis\n- Machine Learning\nname: John\nage: 30\n')] ==
[Document(page_content='skills:\n- Python\n- Data Analysis\n- Machine
Learning\nage: 30\nname: John\n')]
At index 0 diff: Document(page_content='skills:\n- Python\n- Data
Analysis\n- Machine Learning\nname: John\nage: 30\n') !=
Document(page_content='skills:\n- Python\n- Data Analysis\n- Machine
Learning\nage: 30\nname: John\n')
  Full diff:
- [Document(page_content='skills:\n- Python\n- Data Analysis\n- Machine
Learning\nage: 30\nname: John\n')]
? ---------
+ [Document(page_content='skills:\n- Python\n- Data Analysis\n- Machine
Learning\nage: John\nage: 30\n')]
? +++++++++
  ```
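
For reference, the tolerance-based comparison pattern used in the fix looks roughly like this:

```python
from math import isclose

expected = [1.0, 0.9998376369476318, 0.9993523359298706]
actual = [1.0000014305114746, 0.9998371005058289, 0.9993508458137512]

# Compare relevance scores with a relative tolerance instead of exact equality
assert all(isclose(a, e, rel_tol=1e-4) for a, e in zip(actual, expected))
```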
2024-07-17 09:28:25 -04:00
Erick Friis
47ed7f766a infra: fix release prerelease deps bug (#24323) 2024-07-16 15:13:41 -07:00
Bagatur
80e7cd6cff core[patch]: Release 0.2.20 (#24322) 2024-07-16 15:04:36 -07:00
Erick Friis
6c3e65a878 infra: prerelease dep checking on release (#23269) 2024-07-16 21:48:15 +00:00
Eugene Yurtsev
616196c620 Docs: Add how to dispatch custom callback events (#24278)
* Add how-to guide for dispatching custom callback events.
* Add links from index to the how to guide
* Add link from streaming from within a tool
* Update versionadded to correct release
https://github.com/langchain-ai/langchain/releases/tag/langchain-core%3D%3D0.2.15
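
A minimal sketch of dispatching a custom event from inside a runnable (requires langchain-core >= 0.2.15; see the how-to guide for the full pattern):

```python
from langchain_core.callbacks.manager import adispatch_custom_event
from langchain_core.runnables import RunnableLambda

async def slow_step(x: str) -> str:
    # Surfaces progress to callback handlers and astream_events consumers
    await adispatch_custom_event("progress", {"message": f"processing {x}"})
    return x.upper()

chain = RunnableLambda(slow_step)
# The custom event shows up when consuming chain.astream_events(..., version="v2")
```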
2024-07-16 17:38:32 -04:00
Erick Friis
dd7938ace8 docs: readthedocs deprecation fix (#24321)
https://about.readthedocs.com/blog/2024/07/addons-by-default/#how-does-it-affect-my-projects

we use build.command so we're already using addons, so I think this is
it
2024-07-16 20:32:51 +00:00
Srijan Dubey
ef07308c30 Upgraded shaleprotocol to use langchain v0.2 removed deprecated classes (#24320)
Description: Added support for LangChain v0.2 for Shale Protocol.
Replaced LLMChain with the Runnable interface, which allows any two Runnables
to be 'chained' together into sequences. Also added
StreamingStdOutCallbackHandler, a callback handler for streaming.
Issue: None
Dependencies: None.
2024-07-16 20:07:36 +00:00
pbharti0831
049bc37111 Cookbook for applying RAG locally using open source models and tools on CPU (#24284)
This cookbook guides users through implementing RAG locally on a CPU using
LangChain tools and open-source models. It enables a Llama 2 model to
answer queries about Intel's Q1 2024 earnings release using a RAG pipeline.

Main libraries are langchain, llama-cpp-python and gpt4all.
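
A compressed sketch of the pipeline the cookbook walks through (model path, chunk sizes, and source file are placeholders; requires the optional faiss-cpu, llama-cpp-python and gpt4all dependencies):

```python
from langchain_community.llms import LlamaCpp
from langchain_community.embeddings import GPT4AllEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain.chains import RetrievalQA

raw_text = open("intel_q1_2024_earnings.txt").read()  # placeholder source document

# Split the earnings release, embed it locally, and index it
chunks = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50).split_text(raw_text)
vectorstore = FAISS.from_texts(chunks, GPT4AllEmbeddings())

# Llama 2 running on CPU via llama-cpp-python
llm = LlamaCpp(model_path="llama-2-7b-chat.Q4_K_M.gguf", n_ctx=2048)

qa = RetrievalQA.from_chain_type(llm=llm, retriever=vectorstore.as_retriever())
print(qa.invoke("What was Intel's Q1 2024 revenue?"))
```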


---------

Co-authored-by: Sriragavi <sriragavi.r@intel.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-07-16 15:17:10 -04:00
Leonid Ganeline
5ccf8ebfac core: docstrings vectorstores update (#24281)
Added missed docstrings. Formatted docstrings to the consistent form.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-16 16:58:11 +00:00
Erick Friis
1e9cc02ed8 openai: raw response headers (#24150) 2024-07-16 09:54:54 -07:00
Bagatur
dc42279eb5 core[patch]: fix Typing.cast import (#24313)
Fixes #24287
2024-07-16 16:53:48 +00:00
Anush
e38bf08139 qdrant: Fixed typos in Qdrant vectorstore docs (#24312)
## Description 

As that title goes.
2024-07-16 09:44:07 -07:00
bovlb
5caa381177 community[minor]: Add ApertureDB as a vectorstore (#24088)
Thank you for contributing to LangChain!

- [X] **ApertureDB as vectorstore**: "community: Add ApertureDB as a
vectorstore"

- **Description:** this change provides a new community integration that
uses ApertureData's ApertureDB as a vector store.
    - **Issue:** none
    - **Dependencies:** depends on ApertureDB Python SDK
    - **Twitter handle:** ApertureData

- [X] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.

Integration tests rely on a local run of a public docker image.
Example notebook additionally relies on a local Ollama server.

- [X] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

All lint tests pass.


---------

Co-authored-by: Gautam <gautam@aperturedata.io>
2024-07-16 09:32:59 -07:00
frob
c59e663365 community[patch]: Fix docstring for ollama parameter "keep_alive" (#23973)
Fix doc-string for ollama integration
2024-07-16 14:48:38 +00:00
Mazen Ramadan
0c1889c713 docs: fix parameter typo in scrapfly loader docs (#24307)
Fixed wrong parameter typo in
[ScrapflyLoader](https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/document_loaders/scrapfly.py)
docs, where `ignore_scrape_failures` is used instead of
`continue_on_failure`.

- Description: Fix wrong param typo in ScrapflyLoader docs.
2024-07-16 14:48:13 +00:00
Leonid Ganeline
5fcf2ef7ca core: docstrings documents (#23506)
Added missed docstrings. Formatted docstrings to the consistent form.
2024-07-16 10:43:54 -04:00
Rafael Pereira
77dd327282 Docs: Fix Concepts Integration Tools Link (#24301)
- **Description:** This PR fix concepts integrations tools link.

- **Issue:** Fixes issue #24112

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-07-16 10:29:30 -04:00
Rahul Raghavendra Choudhury
f5a38772a8 community[patch]: Update TavilySearch to use TavilyClient instead of the deprecated Client (#24270)
When using TavilySearchAPIRetriever with any conversation chain, you get the
error:

`TypeError: Client.__init__() got an unexpected keyword argument
'api_key'`

This is because the retriever class uses the deprecated `Client`
class; `TavilyClient` needs to be used instead.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2024-07-16 13:35:28 +00:00
Shenhai Ran
5f2dea2b20 core[patch]: Add encoding options when create prompt template from a file (#24054)
- Uses default utf-8 encoding for loading prompt templates from file
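
A usage sketch; the `encoding` keyword on `from_file` is an assumption based on this PR's description:

```python
from langchain_core.prompts import PromptTemplate

# Explicit encoding when loading a template from disk (assumed keyword);
# the file is assumed to contain a "{topic}" placeholder.
prompt = PromptTemplate.from_file("prompt_template.txt", encoding="utf-8")
print(prompt.format(topic="LangChain"))
```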

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-07-16 09:35:09 -04:00
Chen Xiabin
69b1603173 baidu qianfan AiMessage with usage_metadata (#24288)
add usage_metadata to qianfan AIMessage. Thanks
2024-07-16 09:30:50 -04:00
amcastror
d83164f837 Update retrievers.ipynb (#24289)
2024-07-16 13:30:41 +00:00
Leonid Ganeline
198b85334f core[patch]: docstrings langchain_core/ files update (#24285)
Added missed docstrings. Formatted docstrings to the consistent form.
2024-07-16 09:21:51 -04:00
Dobiichi-Origami
7aeaa1974d community[patch]: change the class of qianfan_ak and qianfan_sk parameters (#24293)
- **Description:** We changed the class of two parameters to fix a bug
that caused validation failures when using QianfanEmbeddingEndpoint.
2024-07-16 09:17:48 -04:00
Tibor Reiss
1c753d1e81 core[patch]: Update typing for template format to include jinja2 as a Literal (#24144)
Fixes #23929 via adjusting the typing
2024-07-16 09:09:42 -04:00
Jacob Lee
6716379f0c docs[patch]: Fix rendering issue in code splitter page (#24291) 2024-07-15 23:08:21 -07:00
Jacob Lee
58fdb070fa docs[patch]: Update intro diagram (#24290)
CC @agola11
2024-07-15 22:04:42 -07:00
Erick Friis
1d7a3ae7ce infra: add test deps to add_dependents (#24283) 2024-07-15 15:48:53 -07:00
Erick Friis
d2f671271e langchain: fix extended test (#24282) 2024-07-15 15:29:48 -07:00
Lage Ragnarsson
a3c10fc6ce community: Add support for specifying hybrid search for Databricks vector search (#23528)
**Description:**

Databricks Vector Search recently added support for hybrid
keyword-similarity search.
See [usage
examples](https://docs.databricks.com/en/generative-ai/create-query-vector-search.html#query-a-vector-search-endpoint)
from their documentation.

This PR updates the Langchain vectorstore interface for Databricks to
enable the user to pass the *query_type* parameter to
*similarity_search* to make use of this functionality.
By default, there will not be any changes for existing users of this
interface. To use the new hybrid search feature, it is now possible to
do

```python
# ...
dvs = DatabricksVectorSearch(index)
dvs.similarity_search("my search query", query_type="HYBRID")
```

Or using the retriever:

```python
retriever = dvs.as_retriever(
    search_kwargs={
        "query_type": "HYBRID",
    }
)
retriever.invoke("my search query")
```

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-15 22:14:08 +00:00
Christopher Tee
5171ffc026 community(you): Integrate You.com conversational APIs (#23046)
You.com is releasing two new conversational APIs — Smart and Research. 

This PR:
- integrates those APIs with Langchain, as an LLM
- streaming is supported

2024-07-15 17:46:58 -04:00
maang-h
6c7d9f93b9 feat: Add ChatTongyi structured output (#24187)
- **Description:** Add `with_structured_output` method to ChatTongyi to
support structured output.
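
A minimal sketch of the standard `with_structured_output` pattern applied to ChatTongyi (model name and schema are illustrative):

```python
from langchain_community.chat_models import ChatTongyi
from langchain_core.pydantic_v1 import BaseModel, Field

class Person(BaseModel):
    """Information about a person."""
    name: str = Field(description="The person's name")
    age: int = Field(description="The person's age")

llm = ChatTongyi(model="qwen-max")
structured_llm = llm.with_structured_output(Person)
structured_llm.invoke("Xiao Ming is 21 years old")  # -> Person(name='Xiao Ming', age=21)
```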
2024-07-15 15:57:21 -04:00
Chen Xiabin
8f4620f4b8 baidu qianfan streaming token_usage (#24117)

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-07-15 19:52:31 +00:00
maang-h
9d97de34ae community[patch]: Improve ChatBaichuan init args and role (#23878)
- **Description:** Improve ChatBaichuan init args and role
   -  ChatBaichuan adds `system` role
   - alias: `baichuan_api_base` -> `base_url`
   - `with_search_enhance` is deprecated
   - Add `max_tokens` argument
2024-07-15 15:17:00 -04:00
Erick Friis
56cca23745 openai: remove some params from default serialization (#24280) 2024-07-15 18:53:36 +00:00
mrugank-wadekar
66bebeb76a partners: add similarity search by image functionality to langchain_chroma partner package (#22982)
- **Description:** This pull request introduces two new methods to the
Langchain Chroma partner package that enable similarity search based on
image embeddings. These methods enhance the package's functionality by
allowing users to search for images similar to a given image URI. Also
introduces a notebook to demonstrate its use.
  - **Issue:** N/A
  - **Dependencies:** None
  - **Twitter handle:** @mrugank9009

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-07-15 18:48:22 +00:00
pm390
b0aa915dea community[patch]: use asyncio.sleep instead of sleep in OpenAI Assistant async (#24275)
**Description:** Implemented async sleep using asyncio instead of
synchronous sleep in OpenAI Assistants.
**Issue:** 24194
**Dependencies:** asyncio
**Twitter handle:** pietromald60939
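
The gist of the change, sketched: a blocking `time.sleep` stalls the event loop between polls, while `await asyncio.sleep` yields control while waiting:

```python
import asyncio

async def poll_run_status(check_status) -> str:
    # Before: time.sleep(1) blocked the whole event loop between polls.
    # After: asyncio.sleep lets other coroutines run while we wait.
    while (status := check_status()) in ("queued", "in_progress"):
        await asyncio.sleep(1)
    return status
```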
2024-07-15 18:14:39 +00:00
Anush
d93ae756e6 qdrant: Documentation for the new QdrantVectorStore class (#24166)
## Description

Follow up on #24165. Adds a page to document the latest usage of the new
`QdrantVectorStore` class.
2024-07-15 10:39:23 -07:00
Erick Friis
1244e66bd4 docs: remove couchbase from docs linking (#24277)
`pip install couchbase` adds 12 minutes to the docs build...
2024-07-15 17:34:41 +00:00
wenngong
a001037319 retrievers: MultiVectorRetriever similarity_score_threshold search type (#23539)
Description: support MultiVectorRetriever similarity_score_threshold
search type.

Issue: #23387 #19404
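
A usage sketch of the new search type (vectorstore/docstore setup is omitted; the threshold value is illustrative):

```python
from langchain.retrievers.multi_vector import MultiVectorRetriever

retriever = MultiVectorRetriever(
    vectorstore=vectorstore,   # placeholder: a vector store with relevance scores
    docstore=docstore,         # placeholder: a configured document store
    id_key="doc_id",
    search_type="similarity_score_threshold",
    search_kwargs={"score_threshold": 0.5},
)
docs = retriever.invoke("my search query")
```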

---------

Co-authored-by: gongwn1 <gongwn1@lenovo.com>
2024-07-15 13:31:34 -04:00
Carlos André Antunes
20151384d7 fix azure_openai.py: some keys do not exists (#24158)
In some lines it tries to read a key that does not exist yet. In these
cases I changed the direct access to the dict.get() method.

Thank you for contributing to LangChain!

- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
2024-07-15 17:17:05 +00:00
blueoom
d895614d19 text_splitters: add request parameters for function HTMLHeaderTextSplitter.split_text… (#24178)
**Description:**

The `split_text_from_url` method of `HTMLHeaderTextSplitter` does not
include parameters like `timeout` when using `requests` to send a
request. Therefore, I suggest adding a `kwargs` parameter to the
function, which can be passed as arguments to `requests.get()`
internally, allowing control over the `get` request.
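
A sketch of how the pass-through might be used; any extra keyword arguments are assumed to be forwarded to `requests.get()`:

```python
from langchain_text_splitters import HTMLHeaderTextSplitter

splitter = HTMLHeaderTextSplitter(
    headers_to_split_on=[("h1", "Header 1"), ("h2", "Header 2")]
)

# e.g. pass a timeout so the underlying GET request cannot hang indefinitely
docs = splitter.split_text_from_url("https://example.com", timeout=10)
```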

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-07-15 16:43:56 +00:00
Bagatur
9d0c1d2dc9 docs: specify init_chat_model version (#24274) 2024-07-15 16:29:06 +00:00
MoraxMa
a7296bddc2 docs: updated Tongyi package (#24259)
* updated pip install package
2024-07-15 16:25:35 +00:00
Bagatur
c9473367b1 langchain[patch]: Release 0.2.8 (#24273) 2024-07-15 16:05:51 +00:00
JP-Ellis
f77659463a core[patch]: allow message utils to work with lcel (#23743)
The functions `convert_to_messages` has had an expansion of the
arguments it can take:

1. Previously, it only could take a `Sequence` in order to iterate over
it. This has been broadened slightly to an `Iterable` (which should have
no other impact).
2. Support for `PromptValue` and `BaseChatPromptTemplate` has been
added. These are generated when combining messages using the overloaded
`+` operator.

Functions which rely on `convert_to_messages` (namely `filter_messages`,
`merge_message_runs` and `trim_messages`) have had the type of their
arguments similarly expanded.

Resolves #23706.


---------

Signed-off-by: JP-Ellis <josh@jpellis.me>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-07-15 08:58:05 -07:00
Harold Martin
ccdaf14eff docs: Spell check fixes (#24217)
**Description:** Spell check fixes for docs, comments, and a couple of
strings. No code change e.g. variable names.
**Issue:** none
**Dependencies:** none
**Twitter handle:** hmartin
2024-07-15 15:51:43 +00:00
Leonid Ganeline
cacdf96f9c core docstrings tracers update (#24211)
Added missed docstrings. Formatted docstrings to the consistent form.
2024-07-15 11:37:09 -04:00
Leonid Ganeline
36ee083753 core: docstrings utils update (#24213)
Added missed docstrings. Formatted docstrings to the consistent form.
2024-07-15 11:36:00 -04:00
thehunmonkgroup
e8a21146d3 community[patch]: upgrade default model for ChatAnyscale (#24232)
The old default `meta-llama/Llama-2-7b-chat-hf` is no longer supported.
2024-07-15 11:34:59 -04:00
Bagatur
a0958c0607 docs: more tool call -> tool message docs (#24271) 2024-07-15 07:55:07 -07:00
Bagatur
620b118c70 core[patch]: Release 0.2.19 (#24272) 2024-07-15 07:51:30 -07:00
ccurme
888fbc07b5 core[patch]: support passing args_schema through as_tool (#24269)
Note: this allows the schema to be passed in positionally.

```python
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.runnables import RunnableLambda


class Add(BaseModel):
    """Add two integers together."""

    a: int = Field(..., description="First integer")
    b: int = Field(..., description="Second integer")


def add(input: dict) -> int:
    return input["a"] + input["b"]


runnable = RunnableLambda(add)
as_tool = runnable.as_tool(Add)
as_tool.args_schema.schema()
```
```
{'title': 'Add',
 'description': 'Add two integers together.',
 'type': 'object',
 'properties': {'a': {'title': 'A',
   'description': 'First integer',
   'type': 'integer'},
  'b': {'title': 'B', 'description': 'Second integer', 'type': 'integer'}},
 'required': ['a', 'b']}
```
2024-07-15 07:51:05 -07:00
ccurme
ab2d7821a7 fireworks[patch]: use firefunction-v2 in standard tests (#24264) 2024-07-15 13:15:08 +00:00
ccurme
6fc7610b1c standard-tests[patch]: update test_bind_runnables_as_tools (#24241)
Reduce number of tool arguments from two to one.
2024-07-15 08:35:07 -04:00
Bagatur
0da5078cad langchain[minor]: Generic configurable model (#23419)
Alternative to
[23244](https://github.com/langchain-ai/langchain/pull/23244). Allows
you to use chat model declarative methods.

![Screenshot 2024-06-25 at 1 07 10
PM](https://github.com/langchain-ai/langchain/assets/22008038/910d1694-9b7b-46bc-bc2e-3792df9321d6)
2024-07-15 01:11:01 +00:00
Bagatur
d0728b0ba0 core[patch]: add tool name to tool message (#24243)
Copying current ToolNode behavior
2024-07-15 00:42:40 +00:00
Bagatur
9224027e45 docs: tool artifacts how to (#24198) 2024-07-14 17:04:47 -07:00
Bagatur
5c3e2612da core[patch]: Release 0.2.18 (#24230) 2024-07-13 09:14:43 -07:00
Bagatur
65321bf975 core[patch]: fix ToolCall "type" when streaming (#24218) 2024-07-13 08:59:03 -07:00
Jacob Lee
2b7d1cdd2f docs[patch]: Update tool child run docs (#24160)
Documents #24143
2024-07-13 07:52:37 -07:00
Anush
a653b209ba qdrant: test new QdrantVectorStore (#24165)
## Description

This PR adds integration tests to follow up on #24164.

By default, the tests use an in-memory instance.

To run the full suite of tests, with both in-memory and Qdrant server:

```
$ docker run -p 6333:6333 qdrant/qdrant

$ make test

$ make integration_test
```

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-12 23:59:30 +00:00
Roman Solomatin
f071581aea openai[patch]: update openai params (#23691)
**Description:** Explicitly add parameters from openai API



- [X] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-12 16:53:33 -07:00
Leonid Ganeline
f0a7581b50 milvus: docstring (#23151)
Added missed docstrings. Format docstrings to the consistent format
(used in the API Reference)

---------

Co-authored-by: Isaac Francisco <78627776+isahers1@users.noreply.github.com>
Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-12 23:25:31 +00:00
Christian D. Glissov
474b88326f langchain_qdrant: Added method "_asimilarity_search_with_relevance_scores" to Qdrant class (#23954)
I stumbled upon a bug that led to different similarity scores between
the async and sync similarity searches with relevance scores in Qdrant.
The reason is that _asimilarity_search_with_relevance_scores is
missing; this makes langchain_qdrant fall back to the method of the
vectorstore base class, leading to drastically different results.

To illustrate the magnitude here are the results running an identical
search in a test vectorstore.

Output of asimilarity_search_with_relevance_scores:
[0.9902903374601824, 0.9472135924938804, 0.8535534011299859]

Output of similarity_search_with_relevance_scores:
[0.9805806749203648, 0.8944271849877607, 0.7071068022599718]

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-12 23:25:20 +00:00
Bagatur
bdc03997c9 standard-tests[patch]: check for ToolCall["type"] (#24209) 2024-07-12 16:17:34 -07:00
Nada Amin
3f1cf00d97 docs: Improve neo4j semantic templates (#23939)
I made some changes based on the issues I stumbled on while following
the README of neo4j-semantic-ollama.
I made the changes to the ollama variant, and can also port the relevant
ones to the layer variant once this is approved.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-12 23:08:25 +00:00
Nada Amin
6b47c7361e docs: fix code usage to use the ollama variant (#23937)
**Description:** the template neo4j-semantic-ollama uses an import from
the neo4j-semantic-layer template instead of its own.

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-12 23:07:42 +00:00
Anirudh31415926535
7677ceea60 docs: model parameter mandatory for cohere embedding and rerank (#23349)
The latest langchain-cohere SDK mandates passing the model parameter into
the Embeddings and Reranker inits.

This PR is to update the docs to reflect these changes.
2024-07-12 23:07:28 +00:00
Miroslav
aee55eda39 community: Skip Login to HuggingFaceHub when token is not set (#21561)
Thank you for contributing to LangChain!

- [ ] **HuggingFaceEndpoint**: "Skip Login to HuggingFaceHub"
  - Where:  langchain, community, llm, huggingface_endpoint
 


- **Description:** Skip login to Hugging Face Hub when
`huggingfacehub_api_token` is not set. This is needed when using a custom
`endpoint_url` outside of HuggingFaceHub.
- **Issue:** Fixes
https://github.com/langchain-ai/langchain/issues/20342 and
https://github.com/langchain-ai/langchain/issues/19685
- **Dependencies:** None


- [ ] **Add tests and docs**: 
  1. Tested with locally available TGI endpoint
  2.  Example Usage
```python
from langchain_community.llms import HuggingFaceEndpoint

llm = HuggingFaceEndpoint(
    endpoint_url='http://localhost:8080',
    server_kwargs={
        "headers": {"Content-Type": "application/json"}
    }
)
resp = llm.invoke("Tell me a joke")
print(resp)
```
 Also tested against HF Endpoints
 ```python
 from langchain_community.llms import HuggingFaceEndpoint
huggingfacehub_api_token = "hf_xyz"
repo_id = "mistralai/Mistral-7B-Instruct-v0.2"
llm = HuggingFaceEndpoint(
    huggingfacehub_api_token=huggingfacehub_api_token,
    repo_id=repo_id,
)
resp = llm.invoke("Tell me a joke")
print(resp)
 ```
Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional
ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in
langchain.

If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, hwchase17.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-12 22:10:32 +00:00
Anush
d09dda5a08 qdrant: Bump patch version (#24168)
# Description

To release a new version of `langchain-qdrant` after #24165 and #24166.
2024-07-12 14:48:50 -07:00
Bagatur
12950cc602 standard-tests[patch]: improve runnable tool description (#24210) 2024-07-12 21:33:56 +00:00
Erick Friis
e8ee781a42 ibm: move to external repo (#24208) 2024-07-12 21:14:24 +00:00
Bagatur
02e71cebed together[patch]: Release 0.1.4 (#24205) 2024-07-12 13:59:58 -07:00
Bagatur
259d4d2029 anthropic[patch]: Release 0.1.20 (#24204) 2024-07-12 13:59:15 -07:00
Bagatur
3aed74a6fc fireworks[patch]: Release 0.1.5 (#24203) 2024-07-12 13:58:58 -07:00
Bagatur
13b0d7ec8f openai[patch]: Release 0.1.16 (#24202) 2024-07-12 13:58:39 -07:00
Bagatur
71cd6e6feb groq[patch]: Release 0.1.7 (#24201) 2024-07-12 13:58:19 -07:00
Bagatur
99054e19eb mistralai[patch]: Release 0.1.10 (#24200) 2024-07-12 13:57:58 -07:00
Bagatur
7a1321e2f9 ibm[patch]: Release 0.1.10 (#24199) 2024-07-12 13:57:38 -07:00
Bagatur
cb5031f22f integrations[patch]: require core >=0.2.17 (#24207) 2024-07-12 20:54:01 +00:00
Nithish Raghunandanan
f1618ec540 couchbase: Add standard and semantic caches (#23607)
Thank you for contributing to LangChain!

**Description:** Add support for caching (standard + semantic) LLM
responses using Couchbase
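
A rough sketch of wiring up the standard cache, assuming the new class is exposed as `CouchbaseCache` with the constructor arguments shown (class name, parameters, and connection details are assumptions, not confirmed API):

```python
from datetime import timedelta

from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions
from langchain_core.globals import set_llm_cache
from langchain_couchbase.cache import CouchbaseCache  # assumed import path

# Connect to a local Couchbase cluster (illustrative credentials).
auth = PasswordAuthenticator("Administrator", "password")
cluster = Cluster("couchbase://localhost", ClusterOptions(auth))
cluster.wait_until_ready(timedelta(seconds=5))

# Register the cache globally so repeated identical prompts hit Couchbase.
set_llm_cache(
    CouchbaseCache(
        cluster=cluster,
        bucket_name="langchain-cache",
        scope_name="_default",
        collection_name="_default",
    )
)
```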


- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional
ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in
langchain.

If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, ccurme, vbarda, hwchase17.

---------

Co-authored-by: Nithish Raghunandanan <nithishr@users.noreply.github.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-12 20:30:03 +00:00
Eugene Yurtsev
8d82a0d483 core[patch]: Mark GraphVectorStore as beta (#24195)
* This PR marks graph vectorstore as beta
2024-07-12 14:28:06 -04:00
Bagatur
0a1e475a30 core[patch]: Release 0.2.17 (#24189) 2024-07-12 17:08:29 +00:00
Bagatur
6166ea67a8 core[minor]: rename ToolMessage.raw_output -> artifact (#24185) 2024-07-12 09:52:44 -07:00
Jean Nshuti
d77d9bfc00 community[patch]: update typo document content returned from semanticscholar (#24175)
Update "astract" -> abstract
2024-07-12 15:40:47 +00:00
Leonid Ganeline
aa3e3cfa40 core[patch]: docstrings runnables update (#24161)
Added missing docstrings. Formatted docstrings to a consistent form.
2024-07-12 11:27:06 -04:00
mumu
14ba1d4b45 docs: fix numeric errors in tools_chain.ipynb (#24169)
Description: Corrected several numeric errors in the
docs/docs/how_to/tools_chain.ipynb file to ensure the accuracy of the
documentation.
2024-07-12 11:26:26 -04:00
Ikko Eltociear Ashimine
18da9f5e59 docs: update custom_chat_model.ipynb (#24170)
characetrs -> characters
2024-07-12 06:48:22 -04:00
Tomaz Bratanic
d3a2b9fae0 Fix neo4j type error on missing constraint information (#24177)
If you use `refresh_schema=False`, then the metadata constraint doesn't
exist. At the moment, we use a default of `None` in the constraint check, but then
`any` fails because it can't iterate over a None value
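
For illustration, a minimal standalone sketch of the defensive pattern; the schema keys and the "UNIQUENESS" constraint type below are assumptions, not the exact shipped code:

```python
from typing import Any, Dict, List


def has_uniqueness_constraint(structured_schema: Dict[str, Any]) -> bool:
    # Fall back to an empty list when refresh_schema=False left the constraint
    # metadata unset, so any() iterates over [] instead of raising on None.
    constraints: List[dict] = (
        structured_schema.get("metadata", {}).get("constraint") or []
    )
    return any(c.get("type") == "UNIQUENESS" for c in constraints)
```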
2024-07-12 06:39:29 -04:00
Anush
7014d07cab qdrant: new Qdrant implementation (#24164) 2024-07-12 04:52:02 +02:00
Xander Dumaine
35784d1c33 langchain[minor]: add document_variable_name to create_stuff_documents_chain (#24083)
- **Description:** `StuffDocumentsChain` uses `LLMChain`, which is
deprecated in favor of langchain runnables. `create_stuff_documents_chain` is the
replacement, but needs support for `document_variable_name` to allow
multiple uses of the chain within a longer chain (see the sketch after this list).
- **Issue:** none
- **Dependencies:** none
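
A minimal sketch of the new parameter, using a fake chat model so it runs without credentials (the prompt text and variable name are illustrative):

```python
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.documents import Document
from langchain_core.language_models import FakeListChatModel
from langchain_core.prompts import ChatPromptTemplate

llm = FakeListChatModel(responses=["A short summary."])
prompt = ChatPromptTemplate.from_template(
    "Summarize the following source material:\n\n{source_docs}"
)
# document_variable_name routes the formatted documents into {source_docs}
# instead of the default "context" key.
chain = create_stuff_documents_chain(llm, prompt, document_variable_name="source_docs")
print(chain.invoke({"source_docs": [Document(page_content="LangChain release notes...")]}))
```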
2024-07-12 02:31:46 +00:00
Eugene Yurtsev
8858846607 milvus[patch]: Fix Milvus vectorstore for newer versions of langchain-core (#24152)
Fix for: https://github.com/langchain-ai/langchain/issues/24116

This keeps the old behavior of add_documents and add_texts
2024-07-11 18:51:18 -07:00
thedavgar
ffe6ca986e community: Fix Bug in Azure Search Vectorstore search asynchronously (#24081)
Thank you for contributing to LangChain!

**Description**:
This PR fixes a bug described in issue #24064 that occurs when using the
AzureSearch Vectorstore with the asynchronous search methods, which are
also the methods used by the retriever. The proposed change simply makes
access to the embedding optional, because it is not used anywhere to
retrieve documents. The synchronous retrieval methods do not use the
embedding either.

With this PR the code given by the user in the issue works.

```python
vectorstore = AzureSearch(
    azure_search_endpoint=os.getenv("AI_SEARCH_ENDPOINT_SECRET"),
    azure_search_key=os.getenv("AI_SEARCH_API_KEY"),
    index_name=os.getenv("AI_SEARCH_INDEX_NAME_SECRET"),
    fields=fields,
    embedding_function=encoder,
)

retriever = vectorstore.as_retriever(search_type="hybrid", k=2)

await vectorstore.avector_search("what is the capital of France")
await retriever.ainvoke("what is the capital of France")
```

**Issue**:
The Azure Search Vectorstore is not working when searching for documents
with asynchronous methods, as described in issue #24064

**Dependencies**:
There are no extra dependencies required for this change.

---------

Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
2024-07-11 18:32:19 -07:00
Anush
7790d67f94 qdrant: New sparse embeddings provider interface - PART 1 (#24015)
## Description

This PR introduces a new sparse embedding provider interface to work
with the new Qdrant implementation that will follow this PR.

Additionally, an implementation of this interface is provided with
https://github.com/qdrant/fastembed.

This PR will be followed by
https://github.com/Anush008/langchain/pull/3.
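
A minimal sketch of what using the fastembed-backed implementation might look like; the `FastEmbedSparse` class, model name, and return shape below are assumptions based on the PR description, not confirmed API:

```python
from langchain_qdrant import FastEmbedSparse  # assumed export

# Assumed sparse-embedding interface: embed a query into a sparse vector of
# (indices, values) pairs suitable for Qdrant's sparse vector search.
sparse_embeddings = FastEmbedSparse(model_name="Qdrant/bm25")
sparse_vector = sparse_embeddings.embed_query("What is sparse retrieval?")
print(sparse_vector.indices[:5], sparse_vector.values[:5])
```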
2024-07-11 17:07:25 -07:00
Erick Friis
1132fb801b core: release 0.2.16 (#24159) 2024-07-11 23:59:41 +00:00
Nuno Campos
1d37aa8403 core: Remove extra newline (#24157) 2024-07-11 23:55:36 +00:00
ccurme
cb95198398 standard-tests[patch]: add tests for runnables as tools and streaming usage metadata (#24153) 2024-07-11 18:30:05 -04:00
Erick Friis
d002fa902f infra: fix redundant matrix config (#24151) 2024-07-11 15:15:41 -07:00
Bagatur
8d100c58de core[patch]: Tool accept RunnableConfig (#24143)
Relies on #24038
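
A minimal sketch of a tool whose function accepts the per-invocation `RunnableConfig`; the tool itself is illustrative, and the config argument is injected at runtime rather than exposed to the model:

```python
from langchain_core.runnables import RunnableConfig
from langchain_core.tools import tool


@tool
def echo_with_tags(text: str, config: RunnableConfig) -> str:
    """Echo the input along with any tags from the run config."""
    # config is supplied by the runtime, not by the model's tool call.
    return f"{text} (tags={config.get('tags') or []})"


print(echo_with_tags.invoke({"text": "hello"}, config={"tags": ["demo"]}))
```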

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-11 22:13:17 +00:00
Bagatur
5fd1e67808 core[minor], integrations...[patch]: Support ToolCall as Tool input and ToolMessage as Tool output (#24038)
Changes:
- ToolCall, InvalidToolCall and ToolCallChunk can all accept a "type"
parameter now
- LLM integration packages add "type" to all the above
- Tool supports ToolCall inputs that have "type" specified
- Tool outputs ToolMessage when a ToolCall is passed as input
- Tools can separately specify ToolMessage.content and
ToolMessage.raw_output
- Tools emit events for validation errors (using on_tool_error and
on_tool_end)

Example:
```python
@tool("structured_api", response_format="content_and_raw_output")
def _mock_structured_tool_with_raw_output(
    arg1: int, arg2: bool, arg3: Optional[dict] = None
) -> Tuple[str, dict]:
    """A Structured Tool"""
    return f"{arg1} {arg2}", {"arg1": arg1, "arg2": arg2, "arg3": arg3}


def test_tool_call_input_tool_message_with_raw_output() -> None:
    tool_call: Dict = {
        "name": "structured_api",
        "args": {"arg1": 1, "arg2": True, "arg3": {"img": "base64string..."}},
        "id": "123",
        "type": "tool_call",
    }
    expected = ToolMessage("1 True", raw_output=tool_call["args"], tool_call_id="123")
    tool = _mock_structured_tool_with_raw_output
    actual = tool.invoke(tool_call)
    assert actual == expected

    tool_call.pop("type")
    with pytest.raises(ValidationError):
        tool.invoke(tool_call)

    actual_content = tool.invoke(tool_call["args"])
    assert actual_content == expected.content
```

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-11 14:54:02 -07:00
Bagatur
eeb996034b core[patch]: Release 0.2.15 (#24149) 2024-07-11 21:34:25 +00:00
Nuno Campos
03fba07d15 core[patch]: Update styles for mermaid graphs (#24147) 2024-07-11 14:19:36 -07:00
Jacob Lee
c481a2715d docs[patch]: Add structural example to style guide (#24133)
CC @nfcampos
2024-07-11 13:20:14 -07:00
ccurme
8ee8ca7c83 core[patch]: propagate parse_docstring to tool decorator (#24123)
Disabled by default.

```python
from langchain_core.tools import tool

@tool(parse_docstring=True)
def foo(bar: str, baz: int) -> str:
    """The foo.

    Args:
        bar: this is the bar
        baz: this is the baz
    """
    return bar


foo.args_schema.schema()
```
```json
{
  "title": "fooSchema",
  "description": "The foo.",
  "type": "object",
  "properties": {
    "bar": {
      "title": "Bar",
      "description": "this is the bar",
      "type": "string"
    },
    "baz": {
      "title": "Baz",
      "description": "this is the baz",
      "type": "integer"
    }
  },
  "required": [
    "bar",
    "baz"
  ]
}
```
2024-07-11 20:11:45 +00:00
Jacob Lee
4121d4151f docs[patch]: Fix typo (#24132)
CC @efriis
2024-07-11 20:10:48 +00:00
Erick Friis
bd18faa2a0 infra: add SQLAlchemy to min version testing (#23186)
preventing issues like #22546 

Notes:
- this will only affect release CI. We may want to consider adding
running unit tests with min versions to PR CI in some form
- because this only affects release CI, it could create annoying issues
releasing while I'm on vacation. Unless anyone feels strongly, I'll wait
to merge this til when I'm back
2024-07-11 20:09:57 +00:00
Jacob Lee
f1f1f75782 community[patch]: Make AzureML endpoint return AI messages for type assistant (#24085) 2024-07-11 21:45:30 +02:00
Eugene Yurtsev
4ba14adec6 core[patch]: Clean up indexing test code (#24139)
Refactor the code to use the existing InMemoryVectorStore.

This change is needed for another PR that moves some of the imports
around (and messes up the mock.patch in this file)
2024-07-11 18:54:46 +00:00
Atul R
457677c1b7 community: Fixes use of ImagePromptTemplate with Ollama (#24140)
Description: ImagePromptTemplate for Multimodal llms like llava when
using Ollama
Twitter handle: https://x.com/a7ulr

Details:

When using llava models or any Ollama multimodal LLMs and passing images
in the prompt as URLs, langchain breaks with this error.

```python
image_url_components = image_url.split(",")
                           ^^^^^^^^^^^^^^^^^^^^
AttributeError: 'dict' object has no attribute 'split'
```

From the looks of it, there was a bug where the condition did check for a
`url` field in the variable but missed actually assigning it.

This PR fixes ImagePromptTemplate for Multimodal llms like llava when
using Ollama specifically.

@hwchase17
2024-07-11 11:31:48 -07:00
Matt
8327925ab7 community:support additional Azure Search Options (#24134)
- **Description:** Support additional kwargs options for the Azure
Search client (Described here
https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/core/azure-core/README.md#configurations)
    - **Issue:** N/A
    - **Dependencies:** No additional Dependencies

---------
2024-07-11 18:22:36 +00:00
ccurme
122e80e04d core[patch]: add versionadded to as_tool (#24138) 2024-07-11 18:08:08 +00:00
378 changed files with 21513 additions and 7797 deletions

View File

@@ -5,10 +5,10 @@ services:
dockerfile: libs/langchain/dev.Dockerfile
context: ..
volumes:
# Update this to wherever you want VS Code to mount the folder of your project
# Update this to wherever you want VS Code to mount the folder of your project
- ..:/workspaces/langchain:cached
networks:
- langchain-network
- langchain-network
# environment:
# MONGO_ROOT_USERNAME: root
# MONGO_ROOT_PASSWORD: example123
@@ -28,5 +28,3 @@ services:
networks:
langchain-network:
driver: bridge

View File

@@ -6,6 +6,7 @@ import sys
import tomllib
from collections import defaultdict
from typing import Dict, List, Set
from pathlib import Path
LANGCHAIN_DIRS = [
@@ -26,17 +27,48 @@ def all_package_dirs() -> Set[str]:
def dependents_graph() -> dict:
"""
Construct a mapping of package -> dependents, such that we can
run tests on all dependents of a package when a change is made.
"""
dependents = defaultdict(set)
for path in glob.glob("./libs/**/pyproject.toml", recursive=True):
if "template" in path:
continue
# load regular and test deps from pyproject.toml
with open(path, "rb") as f:
pyproject = tomllib.load(f)["tool"]["poetry"]
pkg_dir = "libs" + "/".join(path.split("libs")[1].split("/")[:-1])
for dep in pyproject["dependencies"]:
for dep in [
*pyproject["dependencies"].keys(),
*pyproject["group"]["test"]["dependencies"].keys(),
]:
if "langchain" in dep:
dependents[dep].add(pkg_dir)
continue
# load extended deps from extended_testing_deps.txt
package_path = Path(path).parent
extended_requirement_path = package_path / "extended_testing_deps.txt"
if extended_requirement_path.exists():
with open(extended_requirement_path, "r") as f:
extended_deps = f.read().splitlines()
for depline in extended_deps:
if depline.startswith("-e "):
# editable dependency
assert depline.startswith(
"-e ../partners/"
), "Extended test deps should only editable install partner packages"
partner = depline.split("partners/")[1]
dep = f"langchain-{partner}"
else:
dep = depline.split("==")[0]
if "langchain" in dep:
dependents[dep].add(pkg_dir)
return dependents

View File

@@ -0,0 +1,35 @@
import sys
import tomllib

if __name__ == "__main__":
    # Get the TOML file path from the command line argument
    toml_file = sys.argv[1]

    # read toml file
    with open(toml_file, "rb") as file:
        toml_data = tomllib.load(file)

    # see if we're releasing an rc
    version = toml_data["tool"]["poetry"]["version"]
    releasing_rc = "rc" in version

    # if not, iterate through dependencies and make sure none allow prereleases
    if not releasing_rc:
        dependencies = toml_data["tool"]["poetry"]["dependencies"]
        for lib in dependencies:
            dep_version = dependencies[lib]
            dep_version_string = (
                dep_version["version"] if isinstance(dep_version, dict) else dep_version
            )
            if "rc" in dep_version_string:
                raise ValueError(
                    f"Dependency {lib} has a prerelease version. Please remove this."
                )
            if isinstance(dep_version, dict) and dep_version.get(
                "allow-prereleases", False
            ):
                raise ValueError(
                    f"Dependency {lib} has allow-prereleases set to true. Please remove this."
                )

View File

@@ -9,6 +9,7 @@ MIN_VERSION_LIBS = [
"langchain-community",
"langchain",
"langchain-text-splitters",
"SQLAlchemy",
]

View File

@@ -21,14 +21,6 @@ jobs:
run:
working-directory: ${{ inputs.working-directory }}
runs-on: ubuntu-latest
strategy:
matrix:
python-version:
- "3.8"
- "3.9"
- "3.10"
- "3.11"
- "3.12"
name: "poetry run pytest -m compile tests/integration_tests #${{ inputs.python-version }}"
steps:
- uses: actions/checkout@v4

View File

@@ -189,7 +189,7 @@ jobs:
--extra-index-url https://test.pypi.org/simple/ \
"$PKG_NAME==$VERSION" || \
( \
sleep 5 && \
sleep 15 && \
poetry run pip install \
--extra-index-url https://test.pypi.org/simple/ \
"$PKG_NAME==$VERSION" \
@@ -221,6 +221,11 @@ jobs:
run: make tests
working-directory: ${{ inputs.working-directory }}
- name: Check for prerelease versions
working-directory: ${{ inputs.working-directory }}
run: |
poetry run python $GITHUB_WORKSPACE/.github/scripts/check_prerelease_dependencies.py pyproject.toml
- name: Get minimum versions
working-directory: ${{ inputs.working-directory }}
id: min-version

View File

@@ -14,10 +14,6 @@ env:
jobs:
build:
runs-on: ubuntu-latest
strategy:
matrix:
python-version:
- "3.12"
name: "check doc imports #${{ inputs.python-version }}"
steps:
- uses: actions/checkout@v4

View File

@@ -57,4 +57,5 @@ Notebook | Description
[two_agent_debate_tools.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/two_agent_debate_tools.ipynb) | Simulate multi-agent dialogues where the agents can utilize various tools.
[two_player_dnd.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/two_player_dnd.ipynb) | Simulate a two-player dungeons & dragons game, where a dialogue simulator class is used to coordinate the dialogue between the protagonist and the dungeon master.
[wikibase_agent.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/wikibase_agent.ipynb) | Create a simple wikibase agent that utilizes sparql generation, with testing done on http://wikidata.org.
[oracleai_demo.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/oracleai_demo.ipynb) | This guide outlines how to utilize Oracle AI Vector Search alongside Langchain for an end-to-end RAG pipeline, providing step-by-step examples. The process includes loading documents from various sources using OracleDocLoader, summarizing them either within or outside the database with OracleSummary, and generating embeddings similarly through OracleEmbeddings. It also covers chunking documents according to specific requirements using Advanced Oracle Capabilities from OracleTextSplitter, and finally, storing and indexing these documents in a Vector Store for querying with OracleVS.
[oracleai_demo.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/oracleai_demo.ipynb) | This guide outlines how to utilize Oracle AI Vector Search alongside Langchain for an end-to-end RAG pipeline, providing step-by-step examples. The process includes loading documents from various sources using OracleDocLoader, summarizing them either within or outside the database with OracleSummary, and generating embeddings similarly through OracleEmbeddings. It also covers chunking documents according to specific requirements using Advanced Oracle Capabilities from OracleTextSplitter, and finally, storing and indexing these documents in a Vector Store for querying with OracleVS.
[rag-locally-on-intel-cpu.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/rag-locally-on-intel-cpu.ipynb) | Perform Retrieval-Augmented-Generation (RAG) on locally downloaded open-source models using langchain and open source tools and execute it on Intel Xeon CPU. We showed an example of how to apply RAG on Llama 2 model and enable it to answer the queries related to Intel Q1 2024 earnings release.

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,756 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "10f50955-be55-422f-8c62-3a32f8cf02ed",
"metadata": {},
"source": [
"# RAG application running locally on Intel Xeon CPU using langchain and open-source models"
]
},
{
"cell_type": "markdown",
"id": "48113be6-44bb-4aac-aed3-76a1365b9561",
"metadata": {},
"source": [
"Author - Pratool Bharti (pratool.bharti@intel.com)"
]
},
{
"cell_type": "markdown",
"id": "8b10b54b-1572-4ea1-9c1e-1d29fcc3dcd9",
"metadata": {},
"source": [
"In this cookbook, we use langchain tools and open source models to execute locally on CPU. This notebook has been validated to run on Intel Xeon 8480+ CPU. Here we implement a RAG pipeline for Llama2 model to answer questions about Intel Q1 2024 earnings release."
]
},
{
"cell_type": "markdown",
"id": "acadbcec-3468-4926-8ce5-03b678041c0a",
"metadata": {},
"source": [
"**Create a conda or virtualenv environment with python >=3.10 and install following libraries**\n",
"<br>\n",
"\n",
"`pip install --upgrade langchain langchain-community langchainhub langchain-chroma bs4 gpt4all pypdf pysqlite3-binary` <br>\n",
"`pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu`"
]
},
{
"cell_type": "markdown",
"id": "84c392c8-700a-42ec-8e94-806597f22e43",
"metadata": {},
"source": [
"**Load pysqlite3 in sys modules since ChromaDB requires sqlite3.**"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "145cd491-b388-4ea7-bdc8-2f4995cac6fd",
"metadata": {},
"outputs": [],
"source": [
"__import__(\"pysqlite3\")\n",
"import sys\n",
"\n",
"sys.modules[\"sqlite3\"] = sys.modules.pop(\"pysqlite3\")"
]
},
{
"cell_type": "markdown",
"id": "14dde7e2-b236-49b9-b3a0-08c06410418c",
"metadata": {},
"source": [
"**Import essential components from langchain to load and split data**"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "887643ba-249e-48d6-9aa7-d25087e8dfbf",
"metadata": {},
"outputs": [],
"source": [
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"from langchain_community.document_loaders import PyPDFLoader"
]
},
{
"cell_type": "markdown",
"id": "922c0eba-8736-4de5-bd2f-3d0f00b16e43",
"metadata": {},
"source": [
"**Download Intel Q1 2024 earnings release**"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "2d6a2419-5338-4188-8615-a40a65ff8019",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"--2024-07-15 15:04:43-- https://d1io3yog0oux5.cloudfront.net/_11d435a500963f99155ee058df09f574/intel/db/887/9014/earnings_release/Q1+24_EarningsRelease_FINAL.pdf\n",
"Resolving proxy-dmz.intel.com (proxy-dmz.intel.com)... 10.7.211.16\n",
"Connecting to proxy-dmz.intel.com (proxy-dmz.intel.com)|10.7.211.16|:912... connected.\n",
"Proxy request sent, awaiting response... 200 OK\n",
"Length: 133510 (130K) [application/pdf]\n",
"Saving to: intel_q1_2024_earnings.pdf\n",
"\n",
"intel_q1_2024_earni 100%[===================>] 130.38K --.-KB/s in 0.005s \n",
"\n",
"2024-07-15 15:04:44 (24.6 MB/s) - intel_q1_2024_earnings.pdf saved [133510/133510]\n",
"\n"
]
}
],
"source": [
"!wget 'https://d1io3yog0oux5.cloudfront.net/_11d435a500963f99155ee058df09f574/intel/db/887/9014/earnings_release/Q1+24_EarningsRelease_FINAL.pdf' -O intel_q1_2024_earnings.pdf"
]
},
{
"cell_type": "markdown",
"id": "e3612627-e105-453d-8a50-bbd6e39dedb5",
"metadata": {},
"source": [
"**Loading earning release pdf document through PyPDFLoader**"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "cac6278e-ebad-4224-a062-bf6daca24cb0",
"metadata": {},
"outputs": [],
"source": [
"loader = PyPDFLoader(\"intel_q1_2024_earnings.pdf\")\n",
"data = loader.load()"
]
},
{
"cell_type": "markdown",
"id": "a7dca43b-1c62-41df-90c7-6ed2904f823d",
"metadata": {},
"source": [
"**Splitting entire document in several chunks with each chunk size is 500 tokens**"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "4486adbe-0d0e-4685-8c08-c1774ed6e993",
"metadata": {},
"outputs": [],
"source": [
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)\n",
"all_splits = text_splitter.split_documents(data)"
]
},
{
"cell_type": "markdown",
"id": "af142346-e793-4a52-9a56-63e3be416b3d",
"metadata": {},
"source": [
"**Looking at the first split of the document**"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "e4240fd1-898e-4bfc-a377-02c9bc25b56e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(metadata={'source': 'intel_q1_2024_earnings.pdf', 'page': 0}, page_content='Intel Corporation\\n2200 Mission College Blvd.\\nSanta Clara, CA 95054-1549\\n \\nNews Release\\n Intel Reports First -Quarter 2024 Financial Results\\nNEWS SUMMARY\\n▪First-quarter revenue of $12.7 billion , up 9% year over year (YoY).\\n▪First-quarter GAAP earnings (loss) per share (EPS) attributable to Intel was $(0.09) ; non-GAAP EPS \\nattributable to Intel was $0.18 .')"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"all_splits[0]"
]
},
{
"cell_type": "markdown",
"id": "b88d2632-7c1b-49ef-a691-c0eb67d23e6a",
"metadata": {},
"source": [
"**One of the major step in RAG is to convert each split of document into embeddings and store in a vector database such that searching relevant documents are efficient.** <br>\n",
"**For that, importing Chroma vector database from langchain. Also, importing open source GPT4All for embedding models**"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "9ff99dd7-9d47-4239-ba0a-d775792334ba",
"metadata": {},
"outputs": [],
"source": [
"from langchain_chroma import Chroma\n",
"from langchain_community.embeddings import GPT4AllEmbeddings"
]
},
{
"cell_type": "markdown",
"id": "b5d1f4dd-dd8d-4a20-95d1-2dbdd204375a",
"metadata": {},
"source": [
"**In next step, we will download one of the most popular embedding model \"all-MiniLM-L6-v2\". Find more details of the model at this link https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2**"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "05db3494-5d8e-4a13-9941-26330a86f5e5",
"metadata": {},
"outputs": [],
"source": [
"model_name = \"all-MiniLM-L6-v2.gguf2.f16.gguf\"\n",
"gpt4all_kwargs = {\"allow_download\": \"True\"}\n",
"embeddings = GPT4AllEmbeddings(model_name=model_name, gpt4all_kwargs=gpt4all_kwargs)"
]
},
{
"cell_type": "markdown",
"id": "4e53999e-1983-46ac-8039-2783e194c3ae",
"metadata": {},
"source": [
"**Store all the embeddings in the Chroma database**"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "0922951a-9ddf-4761-973d-8e9a86f61284",
"metadata": {},
"outputs": [],
"source": [
"vectorstore = Chroma.from_documents(documents=all_splits, embedding=embeddings)"
]
},
{
"cell_type": "markdown",
"id": "29f94fa0-6c75-4a65-a1a3-debc75422479",
"metadata": {},
"source": [
"**Now, let's find relevant splits from the documents related to the question**"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "88c8152d-ec7a-4f0b-9d86-877789407537",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"4\n"
]
}
],
"source": [
"question = \"What is Intel CCG revenue in Q1 2024\"\n",
"docs = vectorstore.similarity_search(question)\n",
"print(len(docs))"
]
},
{
"cell_type": "markdown",
"id": "53330c6b-cb0f-43f9-b379-2e57ac1e5335",
"metadata": {},
"source": [
"**Look at the first retrieved document from the vector database**"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "43a6d94f-b5c4-47b0-a353-2db4c3d24d9c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(metadata={'page': 1, 'source': 'intel_q1_2024_earnings.pdf'}, page_content='Client Computing Group (CCG) $7.5 billion up31%\\nData Center and AI (DCAI) $3.0 billion up5%\\nNetwork and Edge (NEX) $1.4 billion down 8%\\nTotal Intel Products revenue $11.9 billion up17%\\nIntel Foundry $4.4 billion down 10%\\nAll other:\\nAltera $342 million down 58%\\nMobileye $239 million down 48%\\nOther $194 million up17%\\nTotal all other revenue $775 million down 46%\\nIntersegment eliminations $(4.4) billion\\nTotal net revenue $12.7 billion up9%\\nIntel Products Highlights')"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"docs[0]"
]
},
{
"cell_type": "markdown",
"id": "64ba074f-4b36-442e-b7e2-b26d6e2815c3",
"metadata": {},
"source": [
"**Download Lllama-2 model from Huggingface and store locally** <br>\n",
"**You can download different quantization variant of Lllama-2 model from the link below. We are using Q8 version here (7.16GB).** <br>\n",
"https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGUF"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c8dd0811-6f43-4bc6-b854-2ab377639c9a",
"metadata": {},
"outputs": [],
"source": [
"!huggingface-cli download TheBloke/Llama-2-7b-Chat-GGUF llama-2-7b-chat.Q8_0.gguf --local-dir . --local-dir-use-symlinks False"
]
},
{
"cell_type": "markdown",
"id": "3895b1f5-f51d-4539-abf0-af33d7ca48ea",
"metadata": {},
"source": [
"**Import langchain components required to load downloaded LLMs model**"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "fb087088-aa62-44c0-8356-061e9b9f1186",
"metadata": {},
"outputs": [],
"source": [
"from langchain.callbacks.manager import CallbackManager\n",
"from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler\n",
"from langchain_community.llms import LlamaCpp"
]
},
{
"cell_type": "markdown",
"id": "5a8a111e-2614-4b70-b034-85cd3e7304cb",
"metadata": {},
"source": [
"**Loading the local Lllama-2 model using Llama-cpp library**"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "fb917da2-c0d7-4995-b56d-26254276e0da",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"llama_model_loader: loaded meta data with 19 key-value pairs and 291 tensors from llama-2-7b-chat.Q8_0.gguf (version GGUF V2)\n",
"llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.\n",
"llama_model_loader: - kv 0: general.architecture str = llama\n",
"llama_model_loader: - kv 1: general.name str = LLaMA v2\n",
"llama_model_loader: - kv 2: llama.context_length u32 = 4096\n",
"llama_model_loader: - kv 3: llama.embedding_length u32 = 4096\n",
"llama_model_loader: - kv 4: llama.block_count u32 = 32\n",
"llama_model_loader: - kv 5: llama.feed_forward_length u32 = 11008\n",
"llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128\n",
"llama_model_loader: - kv 7: llama.attention.head_count u32 = 32\n",
"llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 32\n",
"llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000001\n",
"llama_model_loader: - kv 10: general.file_type u32 = 7\n",
"llama_model_loader: - kv 11: tokenizer.ggml.model str = llama\n",
"llama_model_loader: - kv 12: tokenizer.ggml.tokens arr[str,32000] = [\"<unk>\", \"<s>\", \"</s>\", \"<0x00>\", \"<...\n",
"llama_model_loader: - kv 13: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...\n",
"llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...\n",
"llama_model_loader: - kv 15: tokenizer.ggml.bos_token_id u32 = 1\n",
"llama_model_loader: - kv 16: tokenizer.ggml.eos_token_id u32 = 2\n",
"llama_model_loader: - kv 17: tokenizer.ggml.unknown_token_id u32 = 0\n",
"llama_model_loader: - kv 18: general.quantization_version u32 = 2\n",
"llama_model_loader: - type f32: 65 tensors\n",
"llama_model_loader: - type q8_0: 226 tensors\n",
"llm_load_vocab: special tokens cache size = 259\n",
"llm_load_vocab: token to piece cache size = 0.1684 MB\n",
"llm_load_print_meta: format = GGUF V2\n",
"llm_load_print_meta: arch = llama\n",
"llm_load_print_meta: vocab type = SPM\n",
"llm_load_print_meta: n_vocab = 32000\n",
"llm_load_print_meta: n_merges = 0\n",
"llm_load_print_meta: vocab_only = 0\n",
"llm_load_print_meta: n_ctx_train = 4096\n",
"llm_load_print_meta: n_embd = 4096\n",
"llm_load_print_meta: n_layer = 32\n",
"llm_load_print_meta: n_head = 32\n",
"llm_load_print_meta: n_head_kv = 32\n",
"llm_load_print_meta: n_rot = 128\n",
"llm_load_print_meta: n_swa = 0\n",
"llm_load_print_meta: n_embd_head_k = 128\n",
"llm_load_print_meta: n_embd_head_v = 128\n",
"llm_load_print_meta: n_gqa = 1\n",
"llm_load_print_meta: n_embd_k_gqa = 4096\n",
"llm_load_print_meta: n_embd_v_gqa = 4096\n",
"llm_load_print_meta: f_norm_eps = 0.0e+00\n",
"llm_load_print_meta: f_norm_rms_eps = 1.0e-06\n",
"llm_load_print_meta: f_clamp_kqv = 0.0e+00\n",
"llm_load_print_meta: f_max_alibi_bias = 0.0e+00\n",
"llm_load_print_meta: f_logit_scale = 0.0e+00\n",
"llm_load_print_meta: n_ff = 11008\n",
"llm_load_print_meta: n_expert = 0\n",
"llm_load_print_meta: n_expert_used = 0\n",
"llm_load_print_meta: causal attn = 1\n",
"llm_load_print_meta: pooling type = 0\n",
"llm_load_print_meta: rope type = 0\n",
"llm_load_print_meta: rope scaling = linear\n",
"llm_load_print_meta: freq_base_train = 10000.0\n",
"llm_load_print_meta: freq_scale_train = 1\n",
"llm_load_print_meta: n_ctx_orig_yarn = 4096\n",
"llm_load_print_meta: rope_finetuned = unknown\n",
"llm_load_print_meta: ssm_d_conv = 0\n",
"llm_load_print_meta: ssm_d_inner = 0\n",
"llm_load_print_meta: ssm_d_state = 0\n",
"llm_load_print_meta: ssm_dt_rank = 0\n",
"llm_load_print_meta: model type = 7B\n",
"llm_load_print_meta: model ftype = Q8_0\n",
"llm_load_print_meta: model params = 6.74 B\n",
"llm_load_print_meta: model size = 6.67 GiB (8.50 BPW) \n",
"llm_load_print_meta: general.name = LLaMA v2\n",
"llm_load_print_meta: BOS token = 1 '<s>'\n",
"llm_load_print_meta: EOS token = 2 '</s>'\n",
"llm_load_print_meta: UNK token = 0 '<unk>'\n",
"llm_load_print_meta: LF token = 13 '<0x0A>'\n",
"llm_load_print_meta: max token length = 48\n",
"llm_load_tensors: ggml ctx size = 0.14 MiB\n",
"llm_load_tensors: CPU buffer size = 6828.64 MiB\n",
"...................................................................................................\n",
"llama_new_context_with_model: n_ctx = 2048\n",
"llama_new_context_with_model: n_batch = 512\n",
"llama_new_context_with_model: n_ubatch = 512\n",
"llama_new_context_with_model: flash_attn = 0\n",
"llama_new_context_with_model: freq_base = 10000.0\n",
"llama_new_context_with_model: freq_scale = 1\n",
"llama_kv_cache_init: CPU KV buffer size = 1024.00 MiB\n",
"llama_new_context_with_model: KV self size = 1024.00 MiB, K (f16): 512.00 MiB, V (f16): 512.00 MiB\n",
"llama_new_context_with_model: CPU output buffer size = 0.12 MiB\n",
"llama_new_context_with_model: CPU compute buffer size = 164.01 MiB\n",
"llama_new_context_with_model: graph nodes = 1030\n",
"llama_new_context_with_model: graph splits = 1\n",
"AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 0 | \n",
"Model metadata: {'tokenizer.ggml.unknown_token_id': '0', 'tokenizer.ggml.eos_token_id': '2', 'general.architecture': 'llama', 'llama.context_length': '4096', 'general.name': 'LLaMA v2', 'llama.embedding_length': '4096', 'llama.feed_forward_length': '11008', 'llama.attention.layer_norm_rms_epsilon': '0.000001', 'llama.rope.dimension_count': '128', 'llama.attention.head_count': '32', 'tokenizer.ggml.bos_token_id': '1', 'llama.block_count': '32', 'llama.attention.head_count_kv': '32', 'general.quantization_version': '2', 'tokenizer.ggml.model': 'llama', 'general.file_type': '7'}\n",
"Using fallback chat format: llama-2\n"
]
}
],
"source": [
"llm = LlamaCpp(\n",
" model_path=\"llama-2-7b-chat.Q8_0.gguf\",\n",
" n_gpu_layers=-1,\n",
" n_batch=512,\n",
" n_ctx=2048,\n",
" f16_kv=True, # MUST set to True, otherwise you will run into problem after a couple of calls\n",
" callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),\n",
" verbose=True,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "43e06f56-ef97-451b-87d9-8465ea442aed",
"metadata": {},
"source": [
"**Now let's ask the same question to Llama model without showing them the earnings release.**"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "1033dd82-5532-437d-a548-27695e109589",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"?\n",
"(NASDAQ:INTC)\n",
"Intel's CCG (Client Computing Group) revenue for Q1 2024 was $9.6 billion, a decrease of 35% from the previous quarter and a decrease of 42% from the same period last year."
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n",
"llama_print_timings: load time = 131.20 ms\n",
"llama_print_timings: sample time = 16.05 ms / 68 runs ( 0.24 ms per token, 4236.76 tokens per second)\n",
"llama_print_timings: prompt eval time = 131.14 ms / 16 tokens ( 8.20 ms per token, 122.01 tokens per second)\n",
"llama_print_timings: eval time = 3225.00 ms / 67 runs ( 48.13 ms per token, 20.78 tokens per second)\n",
"llama_print_timings: total time = 3466.40 ms / 83 tokens\n"
]
},
{
"data": {
"text/plain": [
"\"?\\n(NASDAQ:INTC)\\nIntel's CCG (Client Computing Group) revenue for Q1 2024 was $9.6 billion, a decrease of 35% from the previous quarter and a decrease of 42% from the same period last year.\""
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm.invoke(question)"
]
},
{
"cell_type": "markdown",
"id": "75f5cb10-746f-4e37-9386-b85a4d2b84ef",
"metadata": {},
"source": [
"**As you can see, model is giving wrong information. Correct asnwer is CCG revenue in Q1 2024 is $7.5B. Now let's apply RAG using the earning release document**"
]
},
{
"cell_type": "markdown",
"id": "0f4150ec-5692-4756-b11a-22feb7ab88ff",
"metadata": {},
"source": [
"**in RAG, we modify the input prompt by adding relevent documents with the question. Here, we use one of the popular RAG prompt**"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "226c14b0-f43e-4a1f-a1e4-04731d467ec4",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['context', 'question'], template=\"You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.\\nQuestion: {question} \\nContext: {context} \\nAnswer:\"))]"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain import hub\n",
"\n",
"rag_prompt = hub.pull(\"rlm/rag-prompt\")\n",
"rag_prompt.messages"
]
},
{
"cell_type": "markdown",
"id": "77deb6a0-0950-450a-916a-f2a029676c20",
"metadata": {},
"source": [
"**Appending all retreived documents in a single document**"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "2dbc3327-6ef3-4c1f-8797-0c71964b0921",
"metadata": {},
"outputs": [],
"source": [
"def format_docs(docs):\n",
" return \"\\n\\n\".join(doc.page_content for doc in docs)"
]
},
{
"cell_type": "markdown",
"id": "2e2d9f18-49d0-43a3-bea8-78746ffa86b7",
"metadata": {},
"source": [
"**The last step is to create a chain using langchain tool that will create an e2e pipeline. It will take question and context as an input.**"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "427379c2-51ff-4e0f-8278-a45221363299",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnablePassthrough, RunnablePick\n",
"\n",
"# Chain\n",
"chain = (\n",
" RunnablePassthrough.assign(context=RunnablePick(\"context\") | format_docs)\n",
" | rag_prompt\n",
" | llm\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "095d6280-c949-4d00-8e32-8895a82d245f",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Llama.generate: prefix-match hit\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
" Based on the provided context, Intel CCG revenue in Q1 2024 was $7.5 billion up 31%."
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n",
"llama_print_timings: load time = 131.20 ms\n",
"llama_print_timings: sample time = 7.74 ms / 31 runs ( 0.25 ms per token, 4004.13 tokens per second)\n",
"llama_print_timings: prompt eval time = 2529.41 ms / 674 tokens ( 3.75 ms per token, 266.46 tokens per second)\n",
"llama_print_timings: eval time = 1542.94 ms / 30 runs ( 51.43 ms per token, 19.44 tokens per second)\n",
"llama_print_timings: total time = 4123.68 ms / 704 tokens\n"
]
},
{
"data": {
"text/plain": [
"' Based on the provided context, Intel CCG revenue in Q1 2024 was $7.5 billion up 31%.'"
]
},
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke({\"context\": docs, \"question\": question})"
]
},
{
"cell_type": "markdown",
"id": "638364b2-6bd2-4471-9961-d3a1d1b9d4ee",
"metadata": {},
"source": [
"**Now we see the results are correct as it is mentioned in earnings release.** <br>\n",
"**To further automate, we will create a chain that will take input as question and retriever so that we don't need to retrieve documents seperately**"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "4654e5b7-635f-4767-8b31-4c430164cdd5",
"metadata": {},
"outputs": [],
"source": [
"retriever = vectorstore.as_retriever()\n",
"qa_chain = (\n",
" {\"context\": retriever | format_docs, \"question\": RunnablePassthrough()}\n",
" | rag_prompt\n",
" | llm\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "markdown",
"id": "0979f393-fd0a-4e82-b844-68371c6ad68f",
"metadata": {},
"source": [
"**Now we only need to pass the question to the chain and it will fetch the contexts directly from the vector database to generate the answer**\n",
"<br>\n",
"**Let's try with another question**"
]
},
{
"cell_type": "code",
"execution_count": 26,
"id": "3ea07b82-e6ec-4084-85f4-191373530172",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Llama.generate: prefix-match hit\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
" According to the provided context, Intel DCAI revenue in Q1 2024 was $3.0 billion up 5%."
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n",
"llama_print_timings: load time = 131.20 ms\n",
"llama_print_timings: sample time = 6.28 ms / 31 runs ( 0.20 ms per token, 4937.88 tokens per second)\n",
"llama_print_timings: prompt eval time = 2681.93 ms / 730 tokens ( 3.67 ms per token, 272.19 tokens per second)\n",
"llama_print_timings: eval time = 1471.07 ms / 30 runs ( 49.04 ms per token, 20.39 tokens per second)\n",
"llama_print_timings: total time = 4206.77 ms / 760 tokens\n"
]
},
{
"data": {
"text/plain": [
"' According to the provided context, Intel DCAI revenue in Q1 2024 was $3.0 billion up 5%.'"
]
},
"execution_count": 26,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"qa_chain.invoke(\"what is Intel DCAI revenue in Q1 2024?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9407f2a0-4a35-4315-8e96-02fcb80f210c",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "rag-on-intel",
"language": "python",
"name": "rag-on-intel"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -13,7 +13,7 @@ OUTPUT_NEW_DOCS_DIR = $(OUTPUT_NEW_DIR)/docs
PYTHON = .venv/bin/python
PARTNER_DEPS_LIST := $(shell find ../libs/partners -mindepth 1 -maxdepth 1 -type d -exec test -e "{}/pyproject.toml" \; -print | grep -vE "airbyte|ibm" | tr '\n' ' ')
PARTNER_DEPS_LIST := $(shell find ../libs/partners -mindepth 1 -maxdepth 1 -type d -exec test -e "{}/pyproject.toml" \; -print | grep -vE "airbyte|ibm|couchbase" | tr '\n' ' ')
PORT ?= 3001

View File

@@ -178,3 +178,10 @@ autosummary_generate = True
html_copy_source = False
html_show_sourcelink = False
# Set canonical URL from the Read the Docs Domain
html_baseurl = os.environ.get("READTHEDOCS_CANONICAL_URL", "")
# Tell Jinja2 templates the build is running on Read the Docs
if os.environ.get("READTHEDOCS", "") == "True":
html_context["READTHEDOCS"] = True

View File

@@ -78,7 +78,7 @@ def _load_module_members(module_path: str, namespace: str) -> ModuleMembers:
continue
if inspect.isclass(type_):
# The clasification of the class is used to select a template
# The type of the class is used to select a template
# for the object when rendering the documentation.
# See `templates` directory for defined templates.
# This is a hacky solution to distinguish between different

View File

@@ -55,6 +55,7 @@ A developer platform that lets you debug, test, evaluate, and monitor LLM applic
dark: useBaseUrl('/svg/langchain_stack_062024_dark.svg'),
}}
title="LangChain Framework Overview"
style={{ width: "100%" }}
/>
## LangChain Expression Language (LCEL)
@@ -235,7 +236,7 @@ This is where information like log-probs and token usage may be stored.
These represent a decision from an language model to call a tool. They are included as part of an `AIMessage` output.
They can be accessed from there with the `.tool_calls` property.
This property returns a list of dictionaries. Each dictionary has the following keys:
This property returns a list of `ToolCall`s. A `ToolCall` is a dictionary with the following arguments:
- `name`: The name of the tool that should be called.
- `args`: The arguments to that tool.
@@ -245,13 +246,18 @@ This property returns a list of dictionaries. Each dictionary has the following
This represents a system message, which tells the model how to behave. Not every model provider supports this.
#### FunctionMessage
This represents the result of a function call. In addition to `role` and `content`, this message has a `name` parameter which conveys the name of the function that was called to produce this result.
#### ToolMessage
This represents the result of a tool call. This is distinct from a FunctionMessage in order to match OpenAI's `function` and `tool` message types. In addition to `role` and `content`, this message has a `tool_call_id` parameter which conveys the id of the call to the tool that was called to produce this result.
This represents the result of a tool call. In addition to `role` and `content`, this message has:
- a `tool_call_id` field which conveys the id of the call to the tool that was called to produce this result.
- an `artifact` field which can be used to pass along arbitrary artifacts of the tool execution which are useful to track but which should not be sent to the model.
#### (Legacy) FunctionMessage
This is a legacy message type, corresponding to OpenAI's legacy function-calling API. ToolMessage should be used instead to correspond to the updated tool-calling API.
This represents the result of a function call. In addition to `role` and `content`, this message has a `name` parameter which conveys the name of the function that was called to produce this result.
### Prompt templates
@@ -495,35 +501,87 @@ For specifics on how to use retrievers, see the [relevant how-to guides here](/d
### Tools
<span data-heading-keywords="tool,tools"></span>
Tools are interfaces that an agent, a chain, or a chat model / LLM can use to interact with the world.
Tools are utilities designed to be called by a model: their inputs are designed to be generated by models, and their outputs are designed to be passed back to models.
Tools are needed whenever you want a model to control parts of your code or call out to external APIs.
A tool consists of the following components:
A tool consists of:
1. The name of the tool
2. A description of what the tool does
3. JSON schema of what the inputs to the tool are
4. The function to call
5. Whether the result of a tool should be returned directly to the user (only relevant for agents)
1. The name of the tool.
2. A description of what the tool does.
3. A JSON schema defining the inputs to the tool.
4. A function (and, optionally, an async variant of the function).
The name, description and JSON schema are provided as context
to the LLM, allowing the LLM to determine how to use the tool
appropriately.
When a tool is bound to a model, the name, description and JSON schema are provided as context to the model.
Given a list of tools and a set of instructions, a model can request to call one or more tools with specific inputs.
Typical usage may look like the following:
Given a list of available tools and a prompt, an LLM can request
that one or more tools be invoked with appropriate arguments.
```python
tools = [...] # Define a list of tools
llm_with_tools = llm.bind_tools(tools)
ai_msg = llm_with_tools.invoke("do xyz...") # AIMessage(tool_calls=[ToolCall(...), ...], ...)
```
Generally, when designing tools to be used by a chat model or LLM, it is important to keep in mind the following:
The `AIMessage` returned from the model MAY have `tool_calls` associated with it.
Read [this guide](/docs/concepts/#aimessage) for more information on what the response type may look like.
- Chat models that have been fine-tuned for tool calling will be better at tool calling than non-fine-tuned models.
- Non fine-tuned models may not be able to use tools at all, especially if the tools are complex or require multiple tool calls.
- Models will perform better if the tools have well-chosen names, descriptions, and JSON schemas.
- Simpler tools are generally easier for models to use than more complex tools.
Once the chosen tools are invoked, the results can be passed back to the model so that it can complete whatever task
it's performing.
There are generally two different ways to invoke the tool and pass back the response:
For specifics on how to use tools, see the [relevant how-to guides here](/docs/how_to/#tools).
#### Invoke with just the arguments
To use an existing pre-built tool, see [here](docs/integrations/tools/) for a list of pre-built tools.
When you invoke a tool with just the arguments, you will get back the raw tool output (usually a string).
This generally looks like:
```python
# You will want to previously check that the LLM returned tool calls
tool_call = ai_msg.tool_calls[0] # ToolCall(args={...}, id=..., ...)
tool_output = tool.invoke(tool_call["args"])
tool_message = ToolMessage(content=tool_output, tool_call_id=tool_call["id"], name=tool_call["name"])
```
Note that the `content` field will generally be passed back to the model.
If you do not want the raw tool response to be passed to the model, but you still want to keep it around,
you can transform the tool output but also pass it as an artifact (read more about [`ToolMessage.artifact` here](/docs/concepts/#toolmessage))
```python
... # Same code as above
response_for_llm = transform(response)
tool_message = ToolMessage(content=response_for_llm, tool_call_id=tool_call["id"], name=tool_call["name"], artifact=tool_output)
```
#### Invoke with `ToolCall`
The other way to invoke a tool is to call it with the full `ToolCall` that was generated by the model.
When you do this, the tool will return a ToolMessage.
The benefits of this are that you don't have to write the logic yourself to transform the tool output into a ToolMessage.
This generally looks like:
```python
tool_call = ai_msg.tool_calls[0] # ToolCall(args={...}, id=..., ...)
tool_message = tool.invoke(tool_call)
# -> ToolMessage(content="tool result foobar...", tool_call_id=..., name="tool_name")
```
If you are invoking the tool this way and want to include an [artifact](/docs/concepts/#toolmessage) for the ToolMessage, you will need to have the tool return two things.
Read more about [defining tools that return artifacts here](/docs/how_to/tool_artifacts/).
#### Best practices
When designing tools to be used by a model, it is important to keep in mind that:
- Chat models that have explicit [tool-calling APIs](/docs/concepts/#functiontool-calling) will be better at tool calling than non-fine-tuned models.
- Models will perform better if the tools have well-chosen names, descriptions, and JSON schemas. This is another form of prompt engineering.
- Simple, narrowly scoped tools are easier for models to use than complex tools.
#### Related
For specifics on how to use tools, see the [tools how-to guides](/docs/how_to/#tools).
To use a pre-built tool, see the [tool integration docs](/docs/integrations/tools/).
### Toolkits
<span data-heading-keywords="toolkit,toolkits"></span>
Toolkits are collections of tools that are designed to be used together for specific tasks. They have convenient loading methods.
@@ -821,7 +879,7 @@ We recommend this method as a starting point when working with structured output
- If multiple underlying techniques are supported, you can supply a `method` parameter to
[toggle which one is used](/docs/how_to/structured_output/#advanced-specifying-the-method-for-structuring-outputs).
You may want or need to use other techiniques if:
You may want or need to use other techniques if:
- The chat model you are using does not support tool calling.
- You are working with very complex schemas and the model is having trouble generating outputs that conform.

View File

@@ -33,6 +33,8 @@ Some examples include:
- [Build a Simple LLM Application with LCEL](/docs/tutorials/llm_chain/)
- [Build a Retrieval Augmented Generation (RAG) App](/docs/tutorials/rag/)
A good structural rule of thumb is to follow the structure of this [example from Numpy](https://numpy.org/numpy-tutorials/content/tutorial-svd.html).
Here are some high-level tips on writing a good tutorial:

View File

@@ -153,7 +153,7 @@
"\n",
" def parse(self, text: str) -> List[str]:\n",
" lines = text.strip().split(\"\\n\")\n",
" return lines\n",
" return list(filter(None, lines)) # Remove empty lines\n",
"\n",
"\n",
"output_parser = LineListOutputParser()\n",

View File

@@ -0,0 +1,342 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to dispatch custom callback events\n",
"\n",
":::info Prerequisites\n",
"\n",
"This guide assumes familiarity with the following concepts:\n",
"\n",
"- [Callbacks](/docs/concepts/#callbacks)\n",
"- [Custom callback handlers](/docs/how_to/custom_callbacks)\n",
"- [Astream Events API](/docs/concepts/#astream_events) the `astream_events` method will surface custom callback events.\n",
":::\n",
"\n",
"In some situations, you may want to dipsatch a custom callback event from within a [Runnable](/docs/concepts/#runnable-interface) so it can be surfaced\n",
"in a custom callback handler or via the [Astream Events API](/docs/concepts/#astream_events).\n",
"\n",
"For example, if you have a long running tool with multiple steps, you can dispatch custom events between the steps and use these custom events to monitor progress.\n",
"You could also surface these custom events to an end user of your application to show them how the current task is progressing.\n",
"\n",
"To dispatch a custom event you need to decide on two attributes for the event: the `name` and the `data`.\n",
"\n",
"| Attribute | Type | Description |\n",
"|-----------|------|----------------------------------------------------------------------------------------------------------|\n",
"| name | str | A user defined name for the event. |\n",
"| data | Any | The data associated with the event. This can be anything, though we suggest making it JSON serializable. |\n",
"\n",
"\n",
":::{.callout-important}\n",
"* Dispatching custom callback events requires `langchain-core>=0.2.15`.\n",
"* Custom callback events can only be dispatched from within an existing `Runnable`.\n",
"* If using `astream_events`, you must use `version='v2'` to see custom events.\n",
"* Sending or rendering custom callbacks events in LangSmith is not yet supported.\n",
":::\n",
"\n",
"\n",
":::caution COMPATIBILITY\n",
"LangChain cannot automatically propagate configuration, including callbacks necessary for astream_events(), to child runnables if you are running async code in python<=3.10. This is a common reason why you may fail to see events being emitted from custom runnables or tools.\n",
"\n",
"If you are running python<=3.10, you will need to manually propagate the `RunnableConfig` object to the child runnable in async environments. For an example of how to manually propagate the config, see the implementation of the `bar` RunnableLambda below.\n",
"\n",
"If you are running python>=3.11, the `RunnableConfig` will automatically propagate to child runnables in async environment. However, it is still a good idea to propagate the `RunnableConfig` manually if your code may run in other Python versions.\n",
":::"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# | output: false\n",
"# | echo: false\n",
"\n",
"%pip install -qU langchain-core"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Astream Events API\n",
"\n",
"The most useful way to consume custom events is via the [Astream Events API](/docs/concepts/#astream_events).\n",
"\n",
"We can use the `async` `adispatch_custom_event` API to emit custom events in an async setting. \n",
"\n",
"\n",
":::{.callout-important}\n",
"\n",
"To see custom events via the astream events API, you need to use the newer `v2` API of `astream_events`.\n",
":::"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'event': 'on_chain_start', 'data': {'input': 'hello world'}, 'name': 'foo', 'tags': [], 'run_id': 'f354ffe8-4c22-4881-890a-c1cad038a9a6', 'metadata': {}, 'parent_ids': []}\n",
"{'event': 'on_custom_event', 'run_id': 'f354ffe8-4c22-4881-890a-c1cad038a9a6', 'name': 'event1', 'tags': [], 'metadata': {}, 'data': {'x': 'hello world'}, 'parent_ids': []}\n",
"{'event': 'on_custom_event', 'run_id': 'f354ffe8-4c22-4881-890a-c1cad038a9a6', 'name': 'event2', 'tags': [], 'metadata': {}, 'data': 5, 'parent_ids': []}\n",
"{'event': 'on_chain_stream', 'run_id': 'f354ffe8-4c22-4881-890a-c1cad038a9a6', 'name': 'foo', 'tags': [], 'metadata': {}, 'data': {'chunk': 'hello world'}, 'parent_ids': []}\n",
"{'event': 'on_chain_end', 'data': {'output': 'hello world'}, 'run_id': 'f354ffe8-4c22-4881-890a-c1cad038a9a6', 'name': 'foo', 'tags': [], 'metadata': {}, 'parent_ids': []}\n"
]
}
],
"source": [
"from langchain_core.callbacks.manager import (\n",
" adispatch_custom_event,\n",
")\n",
"from langchain_core.runnables import RunnableLambda\n",
"from langchain_core.runnables.config import RunnableConfig\n",
"\n",
"\n",
"@RunnableLambda\n",
"async def foo(x: str) -> str:\n",
" await adispatch_custom_event(\"event1\", {\"x\": x})\n",
" await adispatch_custom_event(\"event2\", 5)\n",
" return x\n",
"\n",
"\n",
"async for event in foo.astream_events(\"hello world\", version=\"v2\"):\n",
" print(event)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In python <= 3.10, you must propagate the config manually!"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'event': 'on_chain_start', 'data': {'input': 'hello world'}, 'name': 'bar', 'tags': [], 'run_id': 'c787b09d-698a-41b9-8290-92aaa656f3e7', 'metadata': {}, 'parent_ids': []}\n",
"{'event': 'on_custom_event', 'run_id': 'c787b09d-698a-41b9-8290-92aaa656f3e7', 'name': 'event1', 'tags': [], 'metadata': {}, 'data': {'x': 'hello world'}, 'parent_ids': []}\n",
"{'event': 'on_custom_event', 'run_id': 'c787b09d-698a-41b9-8290-92aaa656f3e7', 'name': 'event2', 'tags': [], 'metadata': {}, 'data': 5, 'parent_ids': []}\n",
"{'event': 'on_chain_stream', 'run_id': 'c787b09d-698a-41b9-8290-92aaa656f3e7', 'name': 'bar', 'tags': [], 'metadata': {}, 'data': {'chunk': 'hello world'}, 'parent_ids': []}\n",
"{'event': 'on_chain_end', 'data': {'output': 'hello world'}, 'run_id': 'c787b09d-698a-41b9-8290-92aaa656f3e7', 'name': 'bar', 'tags': [], 'metadata': {}, 'parent_ids': []}\n"
]
}
],
"source": [
"from langchain_core.callbacks.manager import (\n",
" adispatch_custom_event,\n",
")\n",
"from langchain_core.runnables import RunnableLambda\n",
"from langchain_core.runnables.config import RunnableConfig\n",
"\n",
"\n",
"@RunnableLambda\n",
"async def bar(x: str, config: RunnableConfig) -> str:\n",
" \"\"\"An example that shows how to manually propagate config.\n",
"\n",
" You must do this if you're running python<=3.10.\n",
" \"\"\"\n",
" await adispatch_custom_event(\"event1\", {\"x\": x}, config=config)\n",
" await adispatch_custom_event(\"event2\", 5, config=config)\n",
" return x\n",
"\n",
"\n",
"async for event in bar.astream_events(\"hello world\", version=\"v2\"):\n",
" print(event)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Async Callback Handler\n",
"\n",
"You can also consume the dispatched event via an async callback handler."
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Received event event1 with data: {'x': 1}, with tags: ['foo', 'bar'], with metadata: {} and run_id: a62b84be-7afd-4829-9947-7165df1f37d9\n",
"Received event event2 with data: 5, with tags: ['foo', 'bar'], with metadata: {} and run_id: a62b84be-7afd-4829-9947-7165df1f37d9\n"
]
},
{
"data": {
"text/plain": [
"1"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from typing import Any, Dict, List, Optional\n",
"from uuid import UUID\n",
"\n",
"from langchain_core.callbacks import AsyncCallbackHandler\n",
"from langchain_core.callbacks.manager import (\n",
" adispatch_custom_event,\n",
")\n",
"from langchain_core.runnables import RunnableLambda\n",
"from langchain_core.runnables.config import RunnableConfig\n",
"\n",
"\n",
"class AsyncCustomCallbackHandler(AsyncCallbackHandler):\n",
" async def on_custom_event(\n",
" self,\n",
" name: str,\n",
" data: Any,\n",
" *,\n",
" run_id: UUID,\n",
" tags: Optional[List[str]] = None,\n",
" metadata: Optional[Dict[str, Any]] = None,\n",
" **kwargs: Any,\n",
" ) -> None:\n",
" print(\n",
" f\"Received event {name} with data: {data}, with tags: {tags}, with metadata: {metadata} and run_id: {run_id}\"\n",
" )\n",
"\n",
"\n",
"@RunnableLambda\n",
"async def bar(x: str, config: RunnableConfig) -> str:\n",
" \"\"\"An example that shows how to manually propagate config.\n",
"\n",
" You must do this if you're running python<=3.10.\n",
" \"\"\"\n",
" await adispatch_custom_event(\"event1\", {\"x\": x}, config=config)\n",
" await adispatch_custom_event(\"event2\", 5, config=config)\n",
" return x\n",
"\n",
"\n",
"async_handler = AsyncCustomCallbackHandler()\n",
"await foo.ainvoke(1, {\"callbacks\": [async_handler], \"tags\": [\"foo\", \"bar\"]})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Sync Callback Handler\n",
"\n",
"Let's see how to emit custom events in a sync environment using `dispatch_custom_event`.\n",
"\n",
"You **must** call `dispatch_custom_event` from within an existing `Runnable`."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Received event event1 with data: {'x': 1}, with tags: ['foo', 'bar'], with metadata: {} and run_id: 27b5ce33-dc26-4b34-92dd-08a89cb22268\n",
"Received event event2 with data: {'x': 1}, with tags: ['foo', 'bar'], with metadata: {} and run_id: 27b5ce33-dc26-4b34-92dd-08a89cb22268\n"
]
},
{
"data": {
"text/plain": [
"1"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from typing import Any, Dict, List, Optional\n",
"from uuid import UUID\n",
"\n",
"from langchain_core.callbacks import BaseCallbackHandler\n",
"from langchain_core.callbacks.manager import (\n",
" dispatch_custom_event,\n",
")\n",
"from langchain_core.runnables import RunnableLambda\n",
"from langchain_core.runnables.config import RunnableConfig\n",
"\n",
"\n",
"class CustomHandler(BaseCallbackHandler):\n",
" def on_custom_event(\n",
" self,\n",
" name: str,\n",
" data: Any,\n",
" *,\n",
" run_id: UUID,\n",
" tags: Optional[List[str]] = None,\n",
" metadata: Optional[Dict[str, Any]] = None,\n",
" **kwargs: Any,\n",
" ) -> None:\n",
" print(\n",
" f\"Received event {name} with data: {data}, with tags: {tags}, with metadata: {metadata} and run_id: {run_id}\"\n",
" )\n",
"\n",
"\n",
"@RunnableLambda\n",
"def foo(x: int, config: RunnableConfig) -> int:\n",
" dispatch_custom_event(\"event1\", {\"x\": x})\n",
" dispatch_custom_event(\"event2\", {\"x\": x})\n",
" return x\n",
"\n",
"\n",
"handler = CustomHandler()\n",
"foo.invoke(1, {\"callbacks\": [handler], \"tags\": [\"foo\", \"bar\"]})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Next steps\n",
"\n",
"You've seen how to emit custom events, you can check out the more in depth guide for [astream events](/docs/how_to/streaming/#using-stream-events) which is the easiest way to leverage custom events."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
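As a concrete illustration of the progress-monitoring use case mentioned at the top of this guide, here is a minimal sketch (not part of the diff above) of a multi-step tool that dispatches custom events between its steps. It assumes `langchain-core >= 0.2.15` and Python >= 3.11 so that the config propagates to the tool automatically; the tool name and step names are illustrative only.

```python
import asyncio

from langchain_core.callbacks.manager import adispatch_custom_event
from langchain_core.tools import tool


@tool
async def slow_summarize(text: str) -> str:
    """Pretend to summarize text in several steps, reporting progress as it goes."""
    steps = ["loading", "chunking", "summarizing"]
    for i, step in enumerate(steps, start=1):
        await asyncio.sleep(0.1)  # stand-in for real work
        await adispatch_custom_event(
            "progress", {"step": step, "done": i, "total": len(steps)}
        )
    return text[:50]


async def main() -> None:
    # Filter the event stream down to the custom progress events.
    async for event in slow_summarize.astream_events({"text": "hello world"}, version="v2"):
        if event["event"] == "on_custom_event":
            print(event["name"], event["data"])


asyncio.run(main())
```

In a notebook you would await the loop directly instead of wrapping it in `asyncio.run`.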

View File

@@ -15,6 +15,12 @@
"\n",
"Make sure you have the integration packages installed for any model providers you want to support. E.g. you should have `langchain-openai` installed to init an OpenAI model.\n",
"\n",
":::\n",
"\n",
":::info Requires ``langchain >= 0.2.8``\n",
"\n",
"This functionality was added in ``langchain-core == 0.2.8``. Please make sure your package is up to date.\n",
"\n",
":::"
]
},
@@ -25,7 +31,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain langchain-openai langchain-anthropic langchain-google-vertexai"
"%pip install -qU langchain>=0.2.8 langchain-openai langchain-anthropic langchain-google-vertexai"
]
},
{
@@ -76,32 +82,6 @@
"print(\"Gemini 1.5: \" + gemini_15.invoke(\"what's your name\").content + \"\\n\")"
]
},
{
"cell_type": "markdown",
"id": "fff9a4c8-b6ee-4a1a-8d3d-0ecaa312d4ed",
"metadata": {},
"source": [
"## Simple config example"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "75c25d39-bf47-4b51-a6c6-64d9c572bfd6",
"metadata": {},
"outputs": [],
"source": [
"user_config = {\n",
" \"model\": \"...user-specified...\",\n",
" \"model_provider\": \"...user-specified...\",\n",
" \"temperature\": 0,\n",
" \"max_tokens\": 1000,\n",
"}\n",
"\n",
"llm = init_chat_model(**user_config)\n",
"llm.invoke(\"what's your name\")"
]
},
{
"cell_type": "markdown",
"id": "f811f219-5e78-4b62-b495-915d52a22532",
@@ -125,12 +105,215 @@
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "da07b5c0-d2e6-42e4-bfcd-2efcfaae6221",
"cell_type": "markdown",
"id": "476a44db-c50d-4846-951d-0f1c9ba8bbaa",
"metadata": {},
"outputs": [],
"source": []
"source": [
"## Creating a configurable model\n",
"\n",
"You can also create a runtime-configurable model by specifying `configurable_fields`. If you don't specify a `model` value, then \"model\" and \"model_provider\" be configurable by default."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "6c037f27-12d7-4e83-811e-4245c0e3ba58",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"I'm an AI language model created by OpenAI, and I don't have a personal name. You can call me Assistant or any other name you prefer! How can I assist you today?\", response_metadata={'token_usage': {'completion_tokens': 37, 'prompt_tokens': 11, 'total_tokens': 48}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_d576307f90', 'finish_reason': 'stop', 'logprobs': None}, id='run-5428ab5c-b5c0-46de-9946-5d4ca40dbdc8-0', usage_metadata={'input_tokens': 11, 'output_tokens': 37, 'total_tokens': 48})"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"configurable_model = init_chat_model(temperature=0)\n",
"\n",
"configurable_model.invoke(\n",
" \"what's your name\", config={\"configurable\": {\"model\": \"gpt-4o\"}}\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "321e3036-abd2-4e1f-bcc6-606efd036954",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"My name is Claude. It's nice to meet you!\", response_metadata={'id': 'msg_012XvotUJ3kGLXJUWKBVxJUi', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 11, 'output_tokens': 15}}, id='run-1ad1eefe-f1c6-4244-8bc6-90e2cb7ee554-0', usage_metadata={'input_tokens': 11, 'output_tokens': 15, 'total_tokens': 26})"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"configurable_model.invoke(\n",
" \"what's your name\", config={\"configurable\": {\"model\": \"claude-3-5-sonnet-20240620\"}}\n",
")"
]
},
{
"cell_type": "markdown",
"id": "7f3b3d4a-4066-45e4-8297-ea81ac8e70b7",
"metadata": {},
"source": [
"### Configurable model with default values\n",
"\n",
"We can create a configurable model with default model values, specify which parameters are configurable, and add prefixes to configurable params:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "814a2289-d0db-401e-b555-d5116112b413",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"I'm an AI language model created by OpenAI, and I don't have a personal name. You can call me Assistant or any other name you prefer! How can I assist you today?\", response_metadata={'token_usage': {'completion_tokens': 37, 'prompt_tokens': 11, 'total_tokens': 48}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_ce0793330f', 'finish_reason': 'stop', 'logprobs': None}, id='run-3923e328-7715-4cd6-b215-98e4b6bf7c9d-0', usage_metadata={'input_tokens': 11, 'output_tokens': 37, 'total_tokens': 48})"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"first_llm = init_chat_model(\n",
" model=\"gpt-4o\",\n",
" temperature=0,\n",
" configurable_fields=(\"model\", \"model_provider\", \"temperature\", \"max_tokens\"),\n",
" config_prefix=\"first\", # useful when you have a chain with multiple models\n",
")\n",
"\n",
"first_llm.invoke(\"what's your name\")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "6c8755ba-c001-4f5a-a497-be3f1db83244",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"My name is Claude. It's nice to meet you!\", response_metadata={'id': 'msg_01RyYR64DoMPNCfHeNnroMXm', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 11, 'output_tokens': 15}}, id='run-22446159-3723-43e6-88df-b84797e7751d-0', usage_metadata={'input_tokens': 11, 'output_tokens': 15, 'total_tokens': 26})"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"first_llm.invoke(\n",
" \"what's your name\",\n",
" config={\n",
" \"configurable\": {\n",
" \"first_model\": \"claude-3-5-sonnet-20240620\",\n",
" \"first_temperature\": 0.5,\n",
" \"first_max_tokens\": 100,\n",
" }\n",
" },\n",
")"
]
},
{
"cell_type": "markdown",
"id": "0072b1a3-7e44-4b4e-8b07-efe1ba91a689",
"metadata": {},
"source": [
"### Using a configurable model declaratively\n",
"\n",
"We can call declarative operations like `bind_tools`, `with_structured_output`, `with_configurable`, etc. on a configurable model and chain a configurable model in the same way that we would a regularly instantiated chat model object."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "067dabee-1050-4110-ae24-c48eba01e13b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'GetPopulation',\n",
" 'args': {'location': 'Los Angeles, CA'},\n",
" 'id': 'call_sYT3PFMufHGWJD32Hi2CTNUP'},\n",
" {'name': 'GetPopulation',\n",
" 'args': {'location': 'New York, NY'},\n",
" 'id': 'call_j1qjhxRnD3ffQmRyqjlI1Lnk'}]"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"class GetWeather(BaseModel):\n",
" \"\"\"Get the current weather in a given location\"\"\"\n",
"\n",
" location: str = Field(..., description=\"The city and state, e.g. San Francisco, CA\")\n",
"\n",
"\n",
"class GetPopulation(BaseModel):\n",
" \"\"\"Get the current population in a given location\"\"\"\n",
"\n",
" location: str = Field(..., description=\"The city and state, e.g. San Francisco, CA\")\n",
"\n",
"\n",
"llm = init_chat_model(temperature=0)\n",
"llm_with_tools = llm.bind_tools([GetWeather, GetPopulation])\n",
"\n",
"llm_with_tools.invoke(\n",
" \"what's bigger in 2024 LA or NYC\", config={\"configurable\": {\"model\": \"gpt-4o\"}}\n",
").tool_calls"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "e57dfe9f-cd24-4e37-9ce9-ccf8daf78f89",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'GetPopulation',\n",
" 'args': {'location': 'Los Angeles, CA'},\n",
" 'id': 'toolu_01CxEHxKtVbLBrvzFS7GQ5xR'},\n",
" {'name': 'GetPopulation',\n",
" 'args': {'location': 'New York City, NY'},\n",
" 'id': 'toolu_013A79qt5toWSsKunFBDZd5S'}]"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_with_tools.invoke(\n",
" \"what's bigger in 2024 LA or NYC\",\n",
" config={\"configurable\": {\"model\": \"claude-3-5-sonnet-20240620\"}},\n",
").tool_calls"
]
}
],
"metadata": {
@@ -149,7 +332,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.11.9"
}
},
"nbformat": 4,

View File

@@ -16,7 +16,7 @@
"\n",
"Tracking token usage to calculate cost is an important part of putting your app in production. This guide goes over how to obtain this information from your LangChain model calls.\n",
"\n",
"This guide requires `langchain-openai >= 0.1.8`."
"This guide requires `langchain-openai >= 0.1.9`."
]
},
{
@@ -153,7 +153,7 @@
"\n",
"#### OpenAI\n",
"\n",
"For example, OpenAI will return a message [chunk](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessageChunk.html) at the end of a stream with token usage information. This behavior is supported by `langchain-openai >= 0.1.8` and can be enabled by setting `stream_usage=True`. This attribute can also be set when `ChatOpenAI` is instantiated.\n",
"For example, OpenAI will return a message [chunk](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessageChunk.html) at the end of a stream with token usage information. This behavior is supported by `langchain-openai >= 0.1.9` and can be enabled by setting `stream_usage=True`. This attribute can also be set when `ChatOpenAI` is instantiated.\n",
"\n",
"```{=mdx}\n",
":::note\n",

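For reference, a minimal sketch (not part of this diff) of the `stream_usage=True` behavior described above; it assumes `langchain-openai >= 0.1.9`, an `OPENAI_API_KEY` in the environment, and an illustrative model name.

```python
from langchain_openai import ChatOpenAI

# stream_usage=True asks OpenAI to append a final chunk carrying token counts.
llm = ChatOpenAI(model="gpt-4o-mini", stream_usage=True)

aggregate = None
for chunk in llm.stream("Tell me a one-line joke."):
    # Summing chunks merges content and, at the end, the usage metadata.
    aggregate = chunk if aggregate is None else aggregate + chunk

print(aggregate.usage_metadata)  # e.g. {'input_tokens': ..., 'output_tokens': ..., 'total_tokens': ...}
```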
View File

@@ -300,7 +300,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 2,
"id": "ac9295d3",
"metadata": {},
"outputs": [],
@@ -312,10 +312,8 @@
"\n",
"## Quick Install\n",
"\n",
"```bash\n",
"# Hopefully this code block isn't split\n",
"pip install langchain\n",
"```\n",
"\n",
"As an open-source project in a rapidly developing field, we are extremely open to contributions.\n",
"\"\"\""
@@ -323,7 +321,7 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 3,
"id": "3a0cb17a",
"metadata": {},
"outputs": [
@@ -332,15 +330,14 @@
"text/plain": [
"[Document(page_content='# 🦜️🔗 LangChain'),\n",
" Document(page_content='⚡ Building applications with LLMs through composability ⚡'),\n",
" Document(page_content='## Quick Install\\n\\n```bash'),\n",
" Document(page_content='## Quick Install'),\n",
" Document(page_content=\"# Hopefully this code block isn't split\"),\n",
" Document(page_content='pip install langchain'),\n",
" Document(page_content='```'),\n",
" Document(page_content='As an open-source project in a rapidly developing field, we'),\n",
" Document(page_content='are extremely open to contributions.')]"
]
},
"execution_count": 9,
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
@@ -742,7 +739,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
"version": "3.10.5"
}
},
"nbformat": 4,

View File

@@ -48,20 +48,10 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"id": "40ed76a2",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33mWARNING: You are using pip version 22.0.4; however, version 24.0 is available.\n",
"You should consider upgrading via the '/Users/jacoblee/.pyenv/versions/3.10.5/bin/python -m pip install --upgrade pip' command.\u001b[0m\u001b[33m\n",
"\u001b[0mNote: you may need to restart the kernel to use updated packages.\n"
]
}
],
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain langchain-openai\n",
"\n",

View File

@@ -220,6 +220,57 @@
"pretty_print_docs(compressed_docs)"
]
},
{
"cell_type": "markdown",
"id": "14002ec8-7ee5-4f91-9315-dd21c3808776",
"metadata": {},
"source": [
"### `LLMListwiseRerank`\n",
"\n",
"[LLMListwiseRerank](https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.document_compressors.listwise_rerank.LLMListwiseRerank.html) uses [zero-shot listwise document reranking](https://arxiv.org/pdf/2305.02156) and functions similarly to `LLMChainFilter` as a robust but more expensive option. It is recommended to use a more powerful LLM.\n",
"\n",
"Note that `LLMListwiseRerank` requires a model with the [with_structured_output](/docs/integrations/chat/) method implemented."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "4ab9ee9f-917e-4d6f-9344-eb7f01533228",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Document 1:\n",
"\n",
"Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while youre at it, pass the Disclose Act so Americans can know who is funding our elections. \n",
"\n",
"Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n",
"\n",
"One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n",
"\n",
"And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence.\n"
]
}
],
"source": [
"from langchain.retrievers.document_compressors import LLMListwiseRerank\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)\n",
"\n",
"_filter = LLMListwiseRerank.from_llm(llm, top_n=1)\n",
"compression_retriever = ContextualCompressionRetriever(\n",
" base_compressor=_filter, base_retriever=retriever\n",
")\n",
"\n",
"compressed_docs = compression_retriever.invoke(\n",
" \"What did the president say about Ketanji Jackson Brown\"\n",
")\n",
"pretty_print_docs(compressed_docs)"
]
},
{
"cell_type": "markdown",
"id": "7194da42",
@@ -295,7 +346,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 8,
"id": "617a1756",
"metadata": {},
"outputs": [],

View File

@@ -5,7 +5,7 @@
"id": "9a8bceb3-95bd-4496-bb9e-57655136e070",
"metadata": {},
"source": [
"# How to use Runnables as Tools\n",
"# How to convert Runnables as Tools\n",
"\n",
":::info Prerequisites\n",
"\n",
@@ -180,7 +180,7 @@
"id": "32b1a992-8997-4c98-8eb2-c9fe9431b799",
"metadata": {},
"source": [
"Alternatively, we can add typing information via [Runnable.with_types](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.with_types):"
"Alternatively, the schema can be fully specified by directly passing the desired [args_schema](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.BaseTool.html#langchain_core.tools.BaseTool.args_schema) for the tool:"
]
},
{
@@ -190,10 +190,18 @@
"metadata": {},
"outputs": [],
"source": [
"as_tool = runnable.with_types(input_type=Args).as_tool(\n",
" name=\"My tool\",\n",
" description=\"Explanation of when to use tool.\",\n",
")"
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"class GSchema(BaseModel):\n",
" \"\"\"Apply a function to an integer and list of integers.\"\"\"\n",
"\n",
" a: int = Field(..., description=\"Integer\")\n",
" b: List[int] = Field(..., description=\"List of ints\")\n",
"\n",
"\n",
"runnable = RunnableLambda(g)\n",
"as_tool = runnable.as_tool(GSchema)"
]
},
{
@@ -533,7 +541,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
"version": "3.11.9"
}
},
"nbformat": 4,

View File

@@ -131,7 +131,7 @@
"source": [
"## Base Chat Model\n",
"\n",
"Let's implement a chat model that echoes back the first `n` characetrs of the last message in the prompt!\n",
"Let's implement a chat model that echoes back the first `n` characters of the last message in the prompt!\n",
"\n",
"To do so, we will inherit from `BaseChatModel` and we'll need to implement the following:\n",
"\n",

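A minimal sketch of what such a model can look like (illustrative only; the notebook's full implementation also covers streaming and identifying params):

```python
from typing import Any, List, Optional

from langchain_core.callbacks import CallbackManagerForLLMRun
from langchain_core.language_models import BaseChatModel
from langchain_core.messages import AIMessage, BaseMessage
from langchain_core.outputs import ChatGeneration, ChatResult


class EchoChatModel(BaseChatModel):
    """Chat model that echoes the first `n` characters of the last message."""

    n: int = 5

    def _generate(
        self,
        messages: List[BaseMessage],
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> ChatResult:
        # Take the first n characters of the last message's string content.
        text = str(messages[-1].content)[: self.n]
        message = AIMessage(content=text)
        return ChatResult(generations=[ChatGeneration(message=message)])

    @property
    def _llm_type(self) -> str:
        return "echo-chat-model"


print(EchoChatModel(n=3).invoke("hello"))  # -> AIMessage(content='hel', ...)
```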
View File

@@ -5,7 +5,7 @@
"id": "5436020b",
"metadata": {},
"source": [
"# How to create custom tools\n",
"# How to create tools\n",
"\n",
"When constructing an agent, you will need to provide it with a list of `Tool`s that it can use. Besides the actual function that is called, the Tool consists of several components:\n",
"\n",
@@ -16,13 +16,15 @@
"| args_schema | Pydantic BaseModel | Optional but recommended, can be used to provide more information (e.g., few-shot examples) or validation for expected parameters |\n",
"| return_direct | boolean | Only relevant for agents. When True, after invoking the given tool, the agent will stop and return the result direcly to the user. |\n",
"\n",
"LangChain provides 3 ways to create tools:\n",
"LangChain supports the creation of tools from:\n",
"\n",
"1. Using [@tool decorator](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.tool.html#langchain_core.tools.tool) -- the simplest way to define a custom tool.\n",
"2. Using [StructuredTool.from_function](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.StructuredTool.html#langchain_core.tools.StructuredTool.from_function) class method -- this is similar to the `@tool` decorator, but allows more configuration and specification of both sync and async implementations.\n",
"1. Functions;\n",
"2. LangChain [Runnables](/docs/concepts#runnable-interface);\n",
"3. By sub-classing from [BaseTool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.BaseTool.html) -- This is the most flexible method, it provides the largest degree of control, at the expense of more effort and code.\n",
"\n",
"The `@tool` or the `StructuredTool.from_function` class method should be sufficient for most use cases.\n",
"Creating tools from functions may be sufficient for most use cases, and can be done via a simple [@tool decorator](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.tool.html#langchain_core.tools.tool). If more configuration is needed-- e.g., specification of both sync and async implementations-- one can also use the [StructuredTool.from_function](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.StructuredTool.html#langchain_core.tools.StructuredTool.from_function) class method.\n",
"\n",
"In this guide we provide an overview of these methods.\n",
"\n",
":::{.callout-tip}\n",
"\n",
@@ -35,7 +37,9 @@
"id": "c7326b23",
"metadata": {},
"source": [
"## @tool decorator\n",
"## Creating tools from functions\n",
"\n",
"### @tool decorator\n",
"\n",
"This `@tool` decorator is the simplest way to define a custom tool. The decorator uses the function name as the tool name by default, but this can be overridden by passing a string as the first argument. Additionally, the decorator will use the function's docstring as the tool's description - so a docstring MUST be provided. "
]
@@ -51,7 +55,7 @@
"output_type": "stream",
"text": [
"multiply\n",
"multiply(a: int, b: int) -> int - Multiply two numbers.\n",
"Multiply two numbers.\n",
"{'a': {'title': 'A', 'type': 'integer'}, 'b': {'title': 'B', 'type': 'integer'}}\n"
]
}
@@ -96,6 +100,57 @@
" return a * b"
]
},
{
"cell_type": "markdown",
"id": "8f0edc51-c586-414c-8941-c8abe779943f",
"metadata": {},
"source": [
"Note that `@tool` supports parsing of annotations, nested schemas, and other features:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "5626423f-053e-4a66-adca-1d794d835397",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'title': 'multiply_by_maxSchema',\n",
" 'description': 'Multiply a by the maximum of b.',\n",
" 'type': 'object',\n",
" 'properties': {'a': {'title': 'A',\n",
" 'description': 'scale factor',\n",
" 'type': 'string'},\n",
" 'b': {'title': 'B',\n",
" 'description': 'list of ints over which to take maximum',\n",
" 'type': 'array',\n",
" 'items': {'type': 'integer'}}},\n",
" 'required': ['a', 'b']}"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from typing import Annotated, List\n",
"\n",
"\n",
"@tool\n",
"def multiply_by_max(\n",
" a: Annotated[str, \"scale factor\"],\n",
" b: Annotated[List[int], \"list of ints over which to take maximum\"],\n",
") -> int:\n",
" \"\"\"Multiply a by the maximum of b.\"\"\"\n",
" return a * max(b)\n",
"\n",
"\n",
"multiply_by_max.args_schema.schema()"
]
},
{
"cell_type": "markdown",
"id": "98d6eee9",
@@ -106,7 +161,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 4,
"id": "9216d03a-f6ea-4216-b7e1-0661823a4c0b",
"metadata": {},
"outputs": [
@@ -115,7 +170,7 @@
"output_type": "stream",
"text": [
"multiplication-tool\n",
"multiplication-tool(a: int, b: int) -> int - Multiply two numbers.\n",
"Multiply two numbers.\n",
"{'a': {'title': 'A', 'description': 'first number', 'type': 'integer'}, 'b': {'title': 'B', 'description': 'second number', 'type': 'integer'}}\n",
"True\n"
]
@@ -143,19 +198,84 @@
"print(multiply.return_direct)"
]
},
{
"cell_type": "markdown",
"id": "33a9e94d-0b60-48f3-a4c2-247dce096e66",
"metadata": {},
"source": [
"#### Docstring parsing"
]
},
{
"cell_type": "markdown",
"id": "6d0cb586-93d4-4ff1-9779-71df7853cb68",
"metadata": {},
"source": [
"`@tool` can optionally parse [Google Style docstrings](https://google.github.io/styleguide/pyguide.html#383-functions-and-methods) and associate the docstring components (such as arg descriptions) to the relevant parts of the tool schema. To toggle this behavior, specify `parse_docstring`:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "336f5538-956e-47d5-9bde-b732559f9e61",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'title': 'fooSchema',\n",
" 'description': 'The foo.',\n",
" 'type': 'object',\n",
" 'properties': {'bar': {'title': 'Bar',\n",
" 'description': 'The bar.',\n",
" 'type': 'string'},\n",
" 'baz': {'title': 'Baz', 'description': 'The baz.', 'type': 'integer'}},\n",
" 'required': ['bar', 'baz']}"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"@tool(parse_docstring=True)\n",
"def foo(bar: str, baz: int) -> str:\n",
" \"\"\"The foo.\n",
"\n",
" Args:\n",
" bar: The bar.\n",
" baz: The baz.\n",
" \"\"\"\n",
" return bar\n",
"\n",
"\n",
"foo.args_schema.schema()"
]
},
{
"cell_type": "markdown",
"id": "f18a2503-5393-421b-99fa-4a01dd824d0e",
"metadata": {},
"source": [
":::{.callout-caution}\n",
"By default, `@tool(parse_docstring=True)` will raise `ValueError` if the docstring does not parse correctly. See [API Reference](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.tool.html) for detail and examples.\n",
":::"
]
},
{
"cell_type": "markdown",
"id": "b63fcc3b",
"metadata": {},
"source": [
"## StructuredTool\n",
"### StructuredTool\n",
"\n",
"The `StrurcturedTool.from_function` class method provides a bit more configurability than the `@tool` decorator, without requiring much additional code."
]
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 6,
"id": "564fbe6f-11df-402d-b135-ef6ff25e1e63",
"metadata": {},
"outputs": [
@@ -198,7 +318,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 7,
"id": "6bc055d4-1fbe-4db5-8881-9c382eba6b1b",
"metadata": {},
"outputs": [
@@ -208,7 +328,7 @@
"text": [
"6\n",
"Calculator\n",
"Calculator(a: int, b: int) -> int - multiply numbers\n",
"multiply numbers\n",
"{'a': {'title': 'A', 'description': 'first number', 'type': 'integer'}, 'b': {'title': 'B', 'description': 'second number', 'type': 'integer'}}\n"
]
}
@@ -239,6 +359,63 @@
"print(calculator.args)"
]
},
{
"cell_type": "markdown",
"id": "5517995d-54e3-449b-8fdb-03561f5e4647",
"metadata": {},
"source": [
"## Creating tools from Runnables\n",
"\n",
"LangChain [Runnables](/docs/concepts#runnable-interface) that accept string or `dict` input can be converted to tools using the [as_tool](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.as_tool) method, which allows for the specification of names, descriptions, and additional schema information for arguments.\n",
"\n",
"Example usage:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "8ef593c5-cf72-4c10-bfc9-7d21874a0c24",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'answer_style': {'title': 'Answer Style', 'type': 'string'}}"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.language_models import GenericFakeChatModel\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [(\"human\", \"Hello. Please respond in the style of {answer_style}.\")]\n",
")\n",
"\n",
"# Placeholder LLM\n",
"llm = GenericFakeChatModel(messages=iter([\"hello matey\"]))\n",
"\n",
"chain = prompt | llm | StrOutputParser()\n",
"\n",
"as_tool = chain.as_tool(\n",
" name=\"Style responder\", description=\"Description of when to use tool.\"\n",
")\n",
"as_tool.args"
]
},
{
"cell_type": "markdown",
"id": "0521b787-a146-45a6-8ace-ae1ac4669dd7",
"metadata": {},
"source": [
"See [this guide](/docs/how_to/convert_runnable_to_tool) for more detail."
]
},
{
"cell_type": "markdown",
"id": "b840074b-9c10-4ca0-aed8-626c52b2398f",
@@ -251,7 +428,7 @@
},
{
"cell_type": "code",
"execution_count": 16,
"execution_count": 10,
"id": "1dad8f8e",
"metadata": {},
"outputs": [],
@@ -300,7 +477,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 11,
"id": "bb551c33",
"metadata": {},
"outputs": [
@@ -351,7 +528,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 12,
"id": "6615cb77-fd4c-4676-8965-f92cc71d4944",
"metadata": {},
"outputs": [
@@ -383,7 +560,7 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 13,
"id": "bb2af583-eadd-41f4-a645-bf8748bd3dcd",
"metadata": {},
"outputs": [
@@ -428,7 +605,7 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 14,
"id": "4ad0932c-8610-4278-8c57-f9218f654c8a",
"metadata": {},
"outputs": [
@@ -473,7 +650,7 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 15,
"id": "7094c0e8-6192-4870-a942-aad5b5ae48fd",
"metadata": {},
"outputs": [],
@@ -496,7 +673,7 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 16,
"id": "b4d22022-b105-4ccc-a15b-412cb9ea3097",
"metadata": {},
"outputs": [
@@ -506,7 +683,7 @@
"'Error: There is no city by the name of foobar.'"
]
},
"execution_count": 12,
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
@@ -530,7 +707,7 @@
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 17,
"id": "3fad1728-d367-4e1b-9b54-3172981271cf",
"metadata": {},
"outputs": [
@@ -540,7 +717,7 @@
"\"There is no such city, but it's probably above 0K there!\""
]
},
"execution_count": 13,
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
@@ -564,7 +741,7 @@
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": 18,
"id": "ebfe7c1f-318d-4e58-99e1-f31e69473c46",
"metadata": {},
"outputs": [
@@ -574,7 +751,7 @@
"'The following errors occurred during tool execution: `Error: There is no city by the name of foobar.`'"
]
},
"execution_count": 14,
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
@@ -591,13 +768,189 @@
"\n",
"get_weather_tool.invoke({\"city\": \"foobar\"})"
]
},
{
"cell_type": "markdown",
"id": "1a8d8383-11b3-445e-956f-df4e96995e00",
"metadata": {},
"source": [
"## Returning artifacts of Tool execution\n",
"\n",
"Sometimes there are artifacts of a tool's execution that we want to make accessible to downstream components in our chain or agent, but that we don't want to expose to the model itself. For example if a tool returns custom objects like Documents, we may want to pass some view or metadata about this output to the model without passing the raw output to the model. At the same time, we may want to be able to access this full output elsewhere, for example in downstream tools.\n",
"\n",
"The Tool and [ToolMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.tool.ToolMessage.html) interfaces make it possible to distinguish between the parts of the tool output meant for the model (this is the ToolMessage.content) and those parts which are meant for use outside the model (ToolMessage.artifact).\n",
"\n",
":::info Requires ``langchain-core >= 0.2.19``\n",
"\n",
"This functionality was added in ``langchain-core == 0.2.19``. Please make sure your package is up to date.\n",
"\n",
":::\n",
"\n",
"If we want our tool to distinguish between message content and other artifacts, we need to specify `response_format=\"content_and_artifact\"` when defining our tool and make sure that we return a tuple of (content, artifact):"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "14905425-0334-43a0-9de9-5bcf622ede0e",
"metadata": {},
"outputs": [],
"source": [
"import random\n",
"from typing import List, Tuple\n",
"\n",
"from langchain_core.tools import tool\n",
"\n",
"\n",
"@tool(response_format=\"content_and_artifact\")\n",
"def generate_random_ints(min: int, max: int, size: int) -> Tuple[str, List[int]]:\n",
" \"\"\"Generate size random ints in the range [min, max].\"\"\"\n",
" array = [random.randint(min, max) for _ in range(size)]\n",
" content = f\"Successfully generated array of {size} random ints in [{min}, {max}].\"\n",
" return content, array"
]
},
{
"cell_type": "markdown",
"id": "49f057a6-8938-43ea-8faf-ae41e797ceb8",
"metadata": {},
"source": [
"If we invoke our tool directly with the tool arguments, we'll get back just the content part of the output:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "0f2e1528-404b-46e6-b87c-f0957c4b9217",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Successfully generated array of 10 random ints in [0, 9].'"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"generate_random_ints.invoke({\"min\": 0, \"max\": 9, \"size\": 10})"
]
},
{
"cell_type": "markdown",
"id": "1e62ebba-1737-4b97-b61a-7313ade4e8c2",
"metadata": {},
"source": [
"If we invoke our tool with a ToolCall (like the ones generated by tool-calling models), we'll get back a ToolMessage that contains both the content and artifact generated by the Tool:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "cc197777-26eb-46b3-a83b-c2ce116c6311",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ToolMessage(content='Successfully generated array of 10 random ints in [0, 9].', name='generate_random_ints', tool_call_id='123', artifact=[1, 4, 2, 5, 3, 9, 0, 4, 7, 7])"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"generate_random_ints.invoke(\n",
" {\n",
" \"name\": \"generate_random_ints\",\n",
" \"args\": {\"min\": 0, \"max\": 9, \"size\": 10},\n",
" \"id\": \"123\", # required\n",
" \"type\": \"tool_call\", # required\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "dfdc1040-bf25-4790-b4c3-59452db84e11",
"metadata": {},
"source": [
"We can do the same when subclassing BaseTool:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "fe1a09d1-378b-4b91-bb5e-0697c3d7eb92",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.tools import BaseTool\n",
"\n",
"\n",
"class GenerateRandomFloats(BaseTool):\n",
" name: str = \"generate_random_floats\"\n",
" description: str = \"Generate size random floats in the range [min, max].\"\n",
" response_format: str = \"content_and_artifact\"\n",
"\n",
" ndigits: int = 2\n",
"\n",
" def _run(self, min: float, max: float, size: int) -> Tuple[str, List[float]]:\n",
" range_ = max - min\n",
" array = [\n",
" round(min + (range_ * random.random()), ndigits=self.ndigits)\n",
" for _ in range(size)\n",
" ]\n",
" content = f\"Generated {size} floats in [{min}, {max}], rounded to {self.ndigits} decimals.\"\n",
" return content, array\n",
"\n",
" # Optionally define an equivalent async method\n",
"\n",
" # async def _arun(self, min: float, max: float, size: int) -> Tuple[str, List[float]]:\n",
" # ..."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "8c3d16f6-1c4a-48ab-b05a-38547c592e79",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ToolMessage(content='Generated 3 floats in [0.1, 3.3333], rounded to 4 decimals.', name='generate_random_floats', tool_call_id='123', artifact=[1.4277, 0.7578, 2.4871])"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"rand_gen = GenerateRandomFloats(ndigits=4)\n",
"\n",
"rand_gen.invoke(\n",
" {\n",
" \"name\": \"generate_random_floats\",\n",
" \"args\": {\"min\": 0.1, \"max\": 3.3333, \"size\": 3},\n",
" \"id\": \"123\",\n",
" \"type\": \"tool_call\",\n",
" }\n",
")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": "poetry-venv-311",
"language": "python",
"name": "python3"
"name": "poetry-venv-311"
},
"language_info": {
"codemirror_mode": {
@@ -609,7 +962,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
"version": "3.11.9"
},
"vscode": {
"interpreter": {

View File

@@ -67,15 +67,16 @@ If you'd prefer not to set an environment variable you can pass the key in direc
```python
from langchain_cohere import CohereEmbeddings
embeddings_model = CohereEmbeddings(cohere_api_key="...")
embeddings_model = CohereEmbeddings(cohere_api_key="...", model='embed-english-v3.0')
```
Otherwise you can initialize without any params:
Otherwise you can initialize simply as shown below:
```python
from langchain_cohere import CohereEmbeddings
embeddings_model = CohereEmbeddings()
embeddings_model = CohereEmbeddings(model='embed-english-v3.0')
```
Do note that it is mandatory to pass the model parameter while initializing the CohereEmbeddings class.
</TabItem>
<TabItem value="huggingface" label="Hugging Face">

View File

@@ -9,11 +9,13 @@
"source": [
"# Hybrid Search\n",
"\n",
"The standard search in LangChain is done by vector similarity. However, a number of vectorstores implementations (Astra DB, ElasticSearch, Neo4J, AzureSearch, ...) also support more advanced search combining vector similarity search and other search techniques (full-text, BM25, and so on). This is generally referred to as \"Hybrid\" search.\n",
"The standard search in LangChain is done by vector similarity. However, a number of vectorstores implementations (Astra DB, ElasticSearch, Neo4J, AzureSearch, Qdrant...) also support more advanced search combining vector similarity search and other search techniques (full-text, BM25, and so on). This is generally referred to as \"Hybrid\" search.\n",
"\n",
"**Step 1: Make sure the vectorstore you are using supports hybrid search**\n",
"\n",
"At the moment, there is no unified way to perform hybrid search in LangChain. Each vectorstore may have their own way to do it. This is generally exposed as a keyword argument that is passed in during `similarity_search`. By reading the documentation or source code, figure out whether the vectorstore you are using supports hybrid search, and, if so, how to use it.\n",
"At the moment, there is no unified way to perform hybrid search in LangChain. Each vectorstore may have their own way to do it. This is generally exposed as a keyword argument that is passed in during `similarity_search`.\n",
"\n",
"By reading the documentation or source code, figure out whether the vectorstore you are using supports hybrid search, and, if so, how to use it.\n",
"\n",
"**Step 2: Add that parameter as a configurable field for the chain**\n",
"\n",

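A minimal, vectorstore-agnostic sketch of the plumbing described in the two steps above. The stand-in search function and the `hybrid` flag are hypothetical; with a real vectorstore you would typically expose `search_kwargs` via `.as_retriever().configurable_fields(...)` with a `ConfigurableField` instead.

```python
from langchain_core.runnables import RunnableConfig, RunnableLambda


def hybrid_search(query: str, config: RunnableConfig) -> str:
    # Stand-in for `vectorstore.similarity_search(query, **search_kwargs)`;
    # the "hybrid" keyword name varies by vectorstore and is illustrative only.
    search_kwargs = config.get("configurable", {}).get("search_kwargs", {})
    return f"searched {query!r} with {search_kwargs}"


retriever = RunnableLambda(hybrid_search)

print(
    retriever.invoke(
        "my query",
        config={"configurable": {"search_kwargs": {"hybrid": True}}},
    )
)
```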
View File

@@ -44,6 +44,7 @@ This highlights functionality that is core to using LangChain.
- [How to: inspect runnables](/docs/how_to/inspect)
- [How to: add fallbacks to a runnable](/docs/how_to/fallbacks)
- [How to: migrate chains to LCEL](/docs/how_to/migrate_chains)
- [How to: pass runtime secrets to a runnable](/docs/how_to/runnable_runtime_secrets)
## Components
@@ -84,7 +85,7 @@ These are the core building blocks you can use when building applications.
- [How to: use chat model to call tools](/docs/how_to/tool_calling)
- [How to: stream tool calls](/docs/how_to/tool_streaming)
- [How to: few shot prompt tool behavior](/docs/how_to/tools_few_shot)
- [How to: bind model-specific formated tools](/docs/how_to/tools_model_specific)
- [How to: bind model-specific formatted tools](/docs/how_to/tools_model_specific)
- [How to: force a specific tool call](/docs/how_to/tool_choice)
- [How to: init any model in one line](/docs/how_to/chat_models_universal_init/)
@@ -185,17 +186,21 @@ Indexing is the process of keeping your vectorstore in-sync with the underlying
LangChain [Tools](/docs/concepts/#tools) contain a description of the tool (to pass to the language model) as well as the implementation of the function to call. Refer [here](/docs/integrations/tools/) for a list of pre-built tools.
- [How to: create custom tools](/docs/how_to/custom_tools)
- [How to: use built-in tools and built-in toolkits](/docs/how_to/tools_builtin)
- [How to: convert Runnables to tools](/docs/how_to/convert_runnable_to_tool)
- [How to: use chat model to call tools](/docs/how_to/tool_calling)
- [How to: pass tool results back to model](/docs/how_to/tool_results_pass_to_model)
- [How to: add ad-hoc tool calling capability to LLMs and chat models](/docs/how_to/tools_prompting)
- [How to: create tools](/docs/how_to/custom_tools)
- [How to: use built-in tools and toolkits](/docs/how_to/tools_builtin)
- [How to: use chat models to call tools](/docs/how_to/tool_calling)
- [How to: pass tool outputs to chat models](/docs/how_to/tool_results_pass_to_model)
- [How to: pass run time values to tools](/docs/how_to/tool_runtime)
- [How to: add a human in the loop to tool usage](/docs/how_to/tools_human)
- [How to: handle errors when calling tools](/docs/how_to/tools_error)
- [How to: disable parallel tool calling](/docs/how_to/tool_choice)
- [How to: stream events from within a tool](/docs/how_to/tool_stream_events)
- [How to: add a human-in-the-loop for tools](/docs/how_to/tools_human)
- [How to: handle tool errors](/docs/how_to/tools_error)
- [How to: force models to call a tool](/docs/how_to/tool_choice)
- [How to: disable parallel tool calling](/docs/how_to/tool_calling_parallel)
- [How to: access the `RunnableConfig` from a tool](/docs/how_to/tool_configure)
- [How to: stream events from a tool](/docs/how_to/tool_stream_events)
- [How to: return artifacts from a tool](/docs/how_to/tool_artifacts/)
- [How to: convert Runnables to tools](/docs/how_to/convert_runnable_to_tool)
- [How to: add ad-hoc tool calling capability to models](/docs/how_to/tools_prompting)
- [How to: pass in runtime secrets](/docs/how_to/runnable_runtime_secrets)
### Multimodal
@@ -223,6 +228,7 @@ For in depth how-to guides for agents, please check out [LangGraph](https://lang
- [How to: pass callbacks into a module constructor](/docs/how_to/callbacks_constructor)
- [How to: create custom callback handlers](/docs/how_to/custom_callbacks)
- [How to: use callbacks in async environments](/docs/how_to/callbacks_async)
- [How to: dispatch custom callback events](/docs/how_to/callbacks_custom_events)
### Custom
@@ -235,6 +241,7 @@ All of LangChain components can easily be extended to support your own versions.
- [How to: write a custom output parser class](/docs/how_to/output_parser_custom)
- [How to: create custom callback handlers](/docs/how_to/custom_callbacks)
- [How to: define a custom tool](/docs/how_to/custom_tools)
- [How to: dispatch custom callback events](/docs/how_to/callbacks_custom_events)
### Serialization
- [How to: save and load LangChain objects](/docs/how_to/serialization)

View File

@@ -60,7 +60,7 @@
" * document addition by id (`add_documents` method with `ids` argument)\n",
" * delete by id (`delete` method with `ids` argument)\n",
"\n",
"Compatible Vectorstores: `Aerospike`, `AnalyticDB`, `AstraDB`, `AwaDB`, `AzureCosmosDBNoSqlVectorSearch`, `AzureCosmosDBVectorSearch`, `Bagel`, `Cassandra`, `Chroma`, `CouchbaseVectorStore`, `DashVector`, `DatabricksVectorSearch`, `DeepLake`, `Dingo`, `ElasticVectorSearch`, `ElasticsearchStore`, `FAISS`, `HanaDB`, `Milvus`, `MyScale`, `OpenSearchVectorSearch`, `PGVector`, `Pinecone`, `Qdrant`, `Redis`, `Rockset`, `ScaNN`, `SingleStoreDB`, `SupabaseVectorStore`, `SurrealDBStore`, `TimescaleVector`, `Vald`, `VDMS`, `Vearch`, `VespaStore`, `Weaviate`, `Yellowbrick`, `ZepVectorStore`, `TencentVectorDB`, `OpenSearchVectorSearch`.\n",
"Compatible Vectorstores: `Aerospike`, `AnalyticDB`, `AstraDB`, `AwaDB`, `AzureCosmosDBNoSqlVectorSearch`, `AzureCosmosDBVectorSearch`, `Bagel`, `Cassandra`, `Chroma`, `CouchbaseVectorStore`, `DashVector`, `DatabricksVectorSearch`, `DeepLake`, `Dingo`, `ElasticVectorSearch`, `ElasticsearchStore`, `FAISS`, `HanaDB`, `Milvus`, `MongoDBAtlasVectorSearch`, `MyScale`, `OpenSearchVectorSearch`, `PGVector`, `Pinecone`, `Qdrant`, `Redis`, `Rockset`, `ScaNN`, `SingleStoreDB`, `SupabaseVectorStore`, `SurrealDBStore`, `TimescaleVector`, `Vald`, `VDMS`, `Vearch`, `VespaStore`, `Weaviate`, `Yellowbrick`, `ZepVectorStore`, `TencentVectorDB`, `OpenSearchVectorSearch`.\n",
" \n",
"## Caution\n",
"\n",

View File

@@ -63,6 +63,38 @@
"Notice that if the contents of one of the messages to merge is a list of content blocks then the merged message will have a list of content blocks. And if both messages to merge have string contents then those are concatenated with a newline character."
]
},
{
"cell_type": "markdown",
"id": "11f7e8d3",
"metadata": {},
"source": [
"The `merge_message_runs` utility also works with messages composed together using the overloaded `+` operation:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b51855c5",
"metadata": {},
"outputs": [],
"source": [
"messages = (\n",
" SystemMessage(\"you're a good assistant.\")\n",
" + SystemMessage(\"you always respond with a joke.\")\n",
" + HumanMessage([{\"type\": \"text\", \"text\": \"i wonder why it's called langchain\"}])\n",
" + HumanMessage(\"and who is harrison chasing anyways\")\n",
" + AIMessage(\n",
" 'Well, I guess they thought \"WordRope\" and \"SentenceString\" just didn\\'t have the same ring to it!'\n",
" )\n",
" + AIMessage(\n",
" \"Why, he's probably chasing after the last cup of coffee in the office!\"\n",
" )\n",
")\n",
"\n",
"merged = merge_message_runs(messages)\n",
"print(\"\\n\\n\".join([repr(x) for x in merged]))"
]
},
{
"cell_type": "markdown",
"id": "1b2eee74-71c8-4168-b968-bca580c25d18",

View File

@@ -0,0 +1,78 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "6fcd2994-0092-4fa3-9bb1-c9c84babadc5",
"metadata": {},
"source": [
"# How to pass runtime secrets to runnables\n",
"\n",
":::info Requires `langchain-core >= 0.2.22`\n",
"\n",
":::\n",
"\n",
"We can pass in secrets to our runnables at runtime using the `RunnableConfig`. Specifically we can pass in secrets with a `__` prefix to the `configurable` field. This will ensure that these secrets aren't traced as part of the invocation:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "92e42e91-c277-49de-aa7a-dfb5c993c817",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"7"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.runnables import RunnableConfig\n",
"from langchain_core.tools import tool\n",
"\n",
"\n",
"@tool\n",
"def foo(x: int, config: RunnableConfig) -> int:\n",
" \"\"\"Sum x and a secret int\"\"\"\n",
" return x + config[\"configurable\"][\"__top_secret_int\"]\n",
"\n",
"\n",
"foo.invoke({\"x\": 5}, {\"configurable\": {\"__top_secret_int\": 2, \"traced_key\": \"bar\"}})"
]
},
{
"cell_type": "markdown",
"id": "ae3a4fb9-2ce7-46b2-b654-35dff0ae7197",
"metadata": {},
"source": [
"Looking at the LangSmith trace for this run, we can see that \"traced_key\" was recorded (as part of Metadata) while our secret int was not: https://smith.langchain.com/public/aa7e3289-49ca-422d-a408-f6b927210170/r"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv-311",
"language": "python",
"name": "poetry-venv-311"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -0,0 +1,396 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "503e36ae-ca62-4f8a-880c-4fe78ff5df93",
"metadata": {},
"source": [
"# How to return artifacts from a tool\n",
"\n",
":::info Prerequisites\n",
"This guide assumes familiarity with the following concepts:\n",
"\n",
"- [ToolMessage](/docs/concepts/#toolmessage)\n",
"- [Tools](/docs/concepts/#tools)\n",
"- [Function/tool calling](/docs/concepts/#functiontool-calling)\n",
"\n",
":::\n",
"\n",
"Tools are utilities that can be called by a model, and whose outputs are designed to be fed back to a model. Sometimes, however, there are artifacts of a tool's execution that we want to make accessible to downstream components in our chain or agent, but that we don't want to expose to the model itself. For example if a tool returns a custom object, a dataframe or an image, we may want to pass some metadata about this output to the model without passing the actual output to the model. At the same time, we may want to be able to access this full output elsewhere, for example in downstream tools.\n",
"\n",
"The Tool and [ToolMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.tool.ToolMessage.html) interfaces make it possible to distinguish between the parts of the tool output meant for the model (this is the ToolMessage.content) and those parts which are meant for use outside the model (ToolMessage.artifact).\n",
"\n",
":::info Requires ``langchain-core >= 0.2.19``\n",
"\n",
"This functionality was added in ``langchain-core == 0.2.19``. Please make sure your package is up to date.\n",
"\n",
":::\n",
"\n",
"## Defining the tool\n",
"\n",
"If we want our tool to distinguish between message content and other artifacts, we need to specify `response_format=\"content_and_artifact\"` when defining our tool and make sure that we return a tuple of (content, artifact):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "762b9199-885f-4946-9c98-cc54d72b0d76",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU \"langchain-core>=0.2.19\""
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "b9eb179d-1f41-4748-9866-b3d3e8c73cd0",
"metadata": {},
"outputs": [],
"source": [
"import random\n",
"from typing import List, Tuple\n",
"\n",
"from langchain_core.tools import tool\n",
"\n",
"\n",
"@tool(response_format=\"content_and_artifact\")\n",
"def generate_random_ints(min: int, max: int, size: int) -> Tuple[str, List[int]]:\n",
" \"\"\"Generate size random ints in the range [min, max].\"\"\"\n",
" array = [random.randint(min, max) for _ in range(size)]\n",
" content = f\"Successfully generated array of {size} random ints in [{min}, {max}].\"\n",
" return content, array"
]
},
{
"cell_type": "markdown",
"id": "0ab05d25-af4a-4e5a-afe2-f090416d7ee7",
"metadata": {},
"source": [
"## Invoking the tool with ToolCall\n",
"\n",
"If we directly invoke our tool with just the tool arguments, you'll notice that we only get back the content part of the Tool output:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "5e7d5e77-3102-4a59-8ade-e4e699dd1817",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Successfully generated array of 10 random ints in [0, 9].'"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"Failed to batch ingest runs: LangSmithRateLimitError('Rate limit exceeded for https://api.smith.langchain.com/runs/batch. HTTPError(\\'429 Client Error: Too Many Requests for url: https://api.smith.langchain.com/runs/batch\\', \\'{\"detail\":\"Monthly unique traces usage limit exceeded\"}\\')')\n"
]
}
],
"source": [
"generate_random_ints.invoke({\"min\": 0, \"max\": 9, \"size\": 10})"
]
},
{
"cell_type": "markdown",
"id": "30db7228-f04c-489e-afda-9a572eaa90a1",
"metadata": {},
"source": [
"In order to get back both the content and the artifact, we need to invoke our model with a ToolCall (which is just a dictionary with \"name\", \"args\", \"id\" and \"type\" keys), which has additional info needed to generate a ToolMessage like the tool call ID:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "da1d939d-a900-4b01-92aa-d19011a6b034",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ToolMessage(content='Successfully generated array of 10 random ints in [0, 9].', name='generate_random_ints', tool_call_id='123', artifact=[2, 8, 0, 6, 0, 0, 1, 5, 0, 0])"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"generate_random_ints.invoke(\n",
" {\n",
" \"name\": \"generate_random_ints\",\n",
" \"args\": {\"min\": 0, \"max\": 9, \"size\": 10},\n",
" \"id\": \"123\", # required\n",
" \"type\": \"tool_call\", # required\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "a3cfc03d-020b-42c7-b0f8-c824af19e45e",
"metadata": {},
"source": [
"## Using with a model\n",
"\n",
"With a [tool-calling model](/docs/how_to/tool_calling/), we can easily use a model to call our Tool and generate ToolMessages:\n",
"\n",
"```{=mdx}\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
"\n",
"<ChatModelTabs\n",
" customVarName=\"llm\"\n",
"/>\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "74de0286-b003-4b48-9cdd-ecab435515ca",
"metadata": {},
"outputs": [],
"source": [
"# | echo: false\n",
"# | output: false\n",
"\n",
"from langchain_anthropic import ChatAnthropic\n",
"\n",
"llm = ChatAnthropic(model=\"claude-3-5-sonnet-20240620\", temperature=0)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "8a67424b-d19c-43df-ac7b-690bca42146c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'generate_random_ints',\n",
" 'args': {'min': 1, 'max': 24, 'size': 6},\n",
" 'id': 'toolu_01EtALY3Wz1DVYhv1TLvZGvE',\n",
" 'type': 'tool_call'}]"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_with_tools = llm.bind_tools([generate_random_ints])\n",
"\n",
"ai_msg = llm_with_tools.invoke(\"generate 6 positive ints less than 25\")\n",
"ai_msg.tool_calls"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "00c4e906-3ca8-41e8-a0be-65cb0db7d574",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ToolMessage(content='Successfully generated array of 6 random ints in [1, 24].', name='generate_random_ints', tool_call_id='toolu_01EtALY3Wz1DVYhv1TLvZGvE', artifact=[2, 20, 23, 8, 1, 15])"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"generate_random_ints.invoke(ai_msg.tool_calls[0])"
]
},
{
"cell_type": "markdown",
"id": "ddef2690-70de-4542-ab20-2337f77f3e46",
"metadata": {},
"source": [
"If we just pass in the tool call args, we'll only get back the content:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "f4a6c9a6-0ffc-4b0e-a59f-f3c3d69d824d",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Successfully generated array of 6 random ints in [1, 24].'"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"generate_random_ints.invoke(ai_msg.tool_calls[0][\"args\"])"
]
},
{
"cell_type": "markdown",
"id": "98d6443b-ff41-4d91-8523-b6274fc74ee5",
"metadata": {},
"source": [
"If we wanted to declaratively create a chain, we could do this:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "eb55ec23-95a4-464e-b886-d9679bf3aaa2",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[ToolMessage(content='Successfully generated array of 1 random ints in [1, 5].', name='generate_random_ints', tool_call_id='toolu_01FwYhnkwDPJPbKdGq4ng6uD', artifact=[5])]"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from operator import attrgetter\n",
"\n",
"chain = llm_with_tools | attrgetter(\"tool_calls\") | generate_random_ints.map()\n",
"\n",
"chain.invoke(\"give me a random number between 1 and 5\")"
]
},
{
"cell_type": "markdown",
"id": "4df46be2-babb-4bfe-a641-91cd3d03ffaf",
"metadata": {},
"source": [
"## Creating from BaseTool class\n",
"\n",
"If you want to create a BaseTool object directly, instead of decorating a function with `@tool`, you can do so like this:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "9a9129e1-6aee-4a10-ad57-62ef3bf0276c",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.tools import BaseTool\n",
"\n",
"\n",
"class GenerateRandomFloats(BaseTool):\n",
" name: str = \"generate_random_floats\"\n",
" description: str = \"Generate size random floats in the range [min, max].\"\n",
" response_format: str = \"content_and_artifact\"\n",
"\n",
" ndigits: int = 2\n",
"\n",
" def _run(self, min: float, max: float, size: int) -> Tuple[str, List[float]]:\n",
" range_ = max - min\n",
" array = [\n",
" round(min + (range_ * random.random()), ndigits=self.ndigits)\n",
" for _ in range(size)\n",
" ]\n",
" content = f\"Generated {size} floats in [{min}, {max}], rounded to {self.ndigits} decimals.\"\n",
" return content, array\n",
"\n",
" # Optionally define an equivalent async method\n",
"\n",
" # async def _arun(self, min: float, max: float, size: int) -> Tuple[str, List[float]]:\n",
" # ..."
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "d7322619-f420-4b29-8ee5-023e693d0179",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Generated 3 floats in [0.1, 3.3333], rounded to 4 decimals.'"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"rand_gen = GenerateRandomFloats(ndigits=4)\n",
"rand_gen.invoke({\"min\": 0.1, \"max\": 3.3333, \"size\": 3})"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "0892f277-23a6-4bb8-a0e9-59f533ac9750",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ToolMessage(content='Generated 3 floats in [0.1, 3.3333], rounded to 4 decimals.', name='generate_random_floats', tool_call_id='123', artifact=[1.5789, 2.464, 2.2719])"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"rand_gen.invoke(\n",
" {\n",
" \"name\": \"generate_random_floats\",\n",
" \"args\": {\"min\": 0.1, \"max\": 3.3333, \"size\": 3},\n",
" \"id\": \"123\",\n",
" \"type\": \"tool_call\",\n",
" }\n",
")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv-311",
"language": "python",
"name": "poetry-venv-311"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
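
One thing the notebook above never shows is reading the artifact back out of the ToolMessage for downstream use. A minimal sketch, assuming the `generate_random_ints` tool and `ai_msg` from the cells above:

```python
# Minimal sketch; assumes generate_random_ints and ai_msg from the notebook above.
tool_msg = generate_random_ints.invoke(ai_msg.tool_calls[0])

# The model only ever sees tool_msg.content; the full list of ints stays in
# tool_msg.artifact for downstream components (e.g. other tools or chain steps).
numbers = tool_msg.artifact
print(f"content sent to model: {tool_msg.content!r}")
print(f"artifact kept downstream: {len(numbers)} ints, sum={sum(numbers)}")
```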

View File

@@ -17,7 +17,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to use a model to call tools\n",
"# How to use chat models to call tools\n",
"\n",
":::info Prerequisites\n",
"\n",
@@ -82,30 +82,24 @@
"## Passing tools to chat models\n",
"\n",
"Chat models that support tool calling features implement a `.bind_tools` method, which \n",
"receives a list of LangChain [tool objects](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.BaseTool.html#langchain_core.tools.BaseTool) \n",
"receives a list of functions, Pydantic models, or LangChain [tool objects](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.BaseTool.html#langchain_core.tools.BaseTool) \n",
"and binds them to the chat model in its expected format. Subsequent invocations of the \n",
"chat model will include tool schemas in its calls to the LLM.\n",
"\n",
"For example, we can define the schema for custom tools using the `@tool` decorator \n",
"on Python functions:"
"For example, below we implement simple tools for arithmetic:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.tools import tool\n",
"\n",
"\n",
"@tool\n",
"def add(a: int, b: int) -> int:\n",
" \"\"\"Adds a and b.\"\"\"\n",
" return a + b\n",
"\n",
"\n",
"@tool\n",
"def multiply(a: int, b: int) -> int:\n",
" \"\"\"Multiplies a and b.\"\"\"\n",
" return a * b\n",
@@ -118,12 +112,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Or below, we define the schema using [Pydantic](https://docs.pydantic.dev):"
"LangChain also implements a `@tool` decorator that allows for further control of the tool schema, such as tool names and argument descriptions. See the how-to guide [here](/docs/how_to/custom_tools/#creating-tools-from-functions) for detail.\n",
"\n",
"We can also define the schema using [Pydantic](https://docs.pydantic.dev):"
]
},
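
The Pydantic and binding cells themselves are elided from this hunk. A rough sketch of what they plausibly look like; the `ChatOpenAI` model choice and field descriptions below are assumptions, not part of the diff:

```python
# Hedged sketch: defines one tool as a plain function and one as a Pydantic model,
# then binds both. Model choice (ChatOpenAI) is an assumption.
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import ChatOpenAI


def add(a: int, b: int) -> int:
    """Adds a and b."""
    return a + b


class Multiply(BaseModel):
    """Multiplies a and b."""

    a: int = Field(..., description="First integer")
    b: int = Field(..., description="Second integer")


llm = ChatOpenAI(model="gpt-3.5-turbo-0125")
# .bind_tools accepts plain functions, Pydantic models, or BaseTool objects.
llm_with_tools = llm.bind_tools([add, Multiply])
llm_with_tools.invoke("What is 3 * 12?").tool_calls
```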
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
@@ -343,7 +339,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.5"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -4,7 +4,13 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Disabling parallel tool calling (OpenAI only)\n",
"# How to disable parallel tool calling\n",
"\n",
":::info OpenAI-specific\n",
"\n",
"This API is currently only supported by OpenAI.\n",
"\n",
":::\n",
"\n",
"OpenAI tool calling performs tool calling in parallel by default. That means that if we ask a question like \"What is the weather in Tokyo, New York, and Chicago?\" and we have a tool for getting the weather, it will call the tool 3 times in parallel. We can force it to call only a single tool once by using the ``parallel_tool_call`` parameter."
]
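
A minimal sketch of disabling parallel calls (the demonstrating cells are elided from this hunk); the tool definitions and OpenAI model choice below are assumptions mirroring the related tool-calling guide:

```python
# Sketch only: the tool definitions and gpt-3.5-turbo-0125 are assumptions.
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI


@tool
def add(a: int, b: int) -> int:
    """Adds a and b."""
    return a + b


@tool
def multiply(a: int, b: int) -> int:
    """Multiplies a and b."""
    return a * b


llm = ChatOpenAI(model="gpt-3.5-turbo-0125")
llm_with_tools = llm.bind_tools([add, multiply], parallel_tool_calls=False)

# Even for a prompt that invites two calls, at most one tool call is generated.
llm_with_tools.invoke("What is 3 * 12? Also, what is 11 + 49?").tool_calls
```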
@@ -99,10 +105,24 @@
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"name": "python"
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}

View File

@@ -4,7 +4,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to force tool calling behavior\n",
"# How to force models to call a tool\n",
"\n",
":::info Prerequisites\n",
"\n",
@@ -125,10 +125,24 @@
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"name": "python"
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}

View File

@@ -0,0 +1,132 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to access the RunnableConfig from a tool\n",
"\n",
":::info Prerequisites\n",
"\n",
"This guide assumes familiarity with the following concepts:\n",
"\n",
"- [LangChain Tools](/docs/concepts/#tools)\n",
"- [Custom tools](/docs/how_to/custom_tools)\n",
"- [LangChain Expression Language (LCEL)](/docs/concepts/#langchain-expression-language-lcel)\n",
"- [Configuring runnable behavior](/docs/how_to/configure/)\n",
"\n",
":::\n",
"\n",
"If you have a tool that call chat models, retrievers, or other runnables, you may want to access internal events from those runnables or configure them with additional properties. This guide shows you how to manually pass parameters properly so that you can do this using the `astream_events()` method.\n",
"\n",
"Tools are runnables, and you can treat them the same way as any other runnable at the interface level - you can call `invoke()`, `batch()`, and `stream()` on them as normal. However, when writing custom tools, you may want to invoke other runnables like chat models or retrievers. In order to properly trace and configure those sub-invocations, you'll need to manually access and pass in the tool's current [`RunnableConfig`](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.config.RunnableConfig.html) object. This guide show you some examples of how to do that.\n",
"\n",
":::caution Compatibility\n",
"\n",
"This guide requires `langchain-core>=0.2.16`.\n",
"\n",
":::\n",
"\n",
"## Inferring by parameter type\n",
"\n",
"To access reference the active config object from your custom tool, you'll need to add a parameter to your tool's signature typed as `RunnableConfig`. When you invoke your tool, LangChain will inspect your tool's signature, look for a parameter typed as `RunnableConfig`, and if it exists, populate that parameter with the correct value.\n",
"\n",
"**Note:** The actual name of the parameter doesn't matter, only the typing.\n",
"\n",
"To illustrate this, define a custom tool that takes a two parameters - one typed as a string, the other typed as `RunnableConfig`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain_core"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.runnables import RunnableConfig\n",
"from langchain_core.tools import tool\n",
"\n",
"\n",
"@tool\n",
"async def reverse_tool(text: str, special_config_param: RunnableConfig) -> str:\n",
" \"\"\"A test tool that combines input text with a configurable parameter.\"\"\"\n",
" return (text + special_config_param[\"configurable\"][\"additional_field\"])[::-1]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then, if we invoke the tool with a `config` containing a `configurable` field, we can see that `additional_field` is passed through correctly:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'321cba'"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"await reverse_tool.ainvoke(\n",
" {\"text\": \"abc\"}, config={\"configurable\": {\"additional_field\": \"123\"}}\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Next steps\n",
"\n",
"You've now seen how to configure and stream events from within a tool. Next, check out the following guides for more on using tools:\n",
"\n",
"- [Stream events from child runs within a custom tool](/docs/how_to/tool_stream_events/)\n",
"- Pass [tool results back to a model](/docs/how_to/tool_results_pass_to_model)\n",
"\n",
"You can also check out some more specific uses of tool calling:\n",
"\n",
"- Building [tool-using chains and agents](/docs/how_to#tools)\n",
"- Getting [structured outputs](/docs/how_to/structured_output/) from models"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

View File

@@ -4,14 +4,22 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to pass tool outputs to the model\n",
"# How to pass tool outputs to chat models\n",
"\n",
"If we're using the model-generated tool invocations to actually call tools and want to pass the tool results back to the model, we can do so using `ToolMessage`s. First, let's define our tools and our model."
":::info Prerequisites\n",
"This guide assumes familiarity with the following concepts:\n",
"\n",
"- [Tools](/docs/concepts/#tools)\n",
"- [Function/tool calling](/docs/concepts/#functiontool-calling)\n",
"\n",
":::\n",
"\n",
"If we're using the model-generated tool invocations to actually call tools and want to pass the tool results back to the model, we can do so using `ToolMessage`s and `ToolCall`s. First, let's define our tools and our model."
]
},
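
The setup cells referenced here are elided from the hunks below. A rough end-to-end sketch of the flow this notebook demonstrates, assuming the `gpt-3.5-turbo-0125` model and the `add`/`multiply` tools implied by the recorded outputs:

```python
# Hedged reconstruction of the elided setup; tool and model names follow the
# recorded outputs in this diff, which is an assumption about the actual cells.
from langchain_core.messages import HumanMessage
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI


@tool
def add(a: int, b: int) -> int:
    """Adds a and b."""
    return a + b


@tool
def multiply(a: int, b: int) -> int:
    """Multiplies a and b."""
    return a * b


llm = ChatOpenAI(model="gpt-3.5-turbo-0125")
llm_with_tools = llm.bind_tools([add, multiply])

query = "What is 3 * 12? Also, what is 11 + 49?"
messages = [HumanMessage(query)]
ai_msg = llm_with_tools.invoke(messages)
messages.append(ai_msg)

# Invoking a tool with a whole ToolCall (not just its args) returns a ToolMessage
# that can be appended directly and fed back to the model.
for tool_call in ai_msg.tool_calls:
    selected_tool = {"add": add, "multiply": multiply}[tool_call["name"].lower()]
    messages.append(selected_tool.invoke(tool_call))

llm_with_tools.invoke(messages)
```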
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
@@ -35,7 +43,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
@@ -54,25 +62,32 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we can use ``ToolMessage`` to pass back the output of the tool calls to the model."
"The nice thing about Tools is that if we invoke them with a ToolCall, we'll automatically get back a ToolMessage that can be fed back to the model: \n",
"\n",
":::info Requires ``langchain-core >= 0.2.19``\n",
"\n",
"This functionality was added in ``langchain-core == 0.2.19``. Please make sure your package is up to date.\n",
"\n",
":::"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessage(content='What is 3 * 12? Also, what is 11 + 49?'),\n",
" AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_svc2GLSxNFALbaCAbSjMI9J8', 'function': {'arguments': '{\"a\": 3, \"b\": 12}', 'name': 'Multiply'}, 'type': 'function'}, {'id': 'call_r8jxte3zW6h3MEGV3zH2qzFh', 'function': {'arguments': '{\"a\": 11, \"b\": 49}', 'name': 'Add'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 50, 'prompt_tokens': 105, 'total_tokens': 155}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': 'fp_d9767fc5b9', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-a79ad1dd-95f1-4a46-b688-4c83f327a7b3-0', tool_calls=[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_svc2GLSxNFALbaCAbSjMI9J8'}, {'name': 'Add', 'args': {'a': 11, 'b': 49}, 'id': 'call_r8jxte3zW6h3MEGV3zH2qzFh'}]),\n",
" ToolMessage(content='36', tool_call_id='call_svc2GLSxNFALbaCAbSjMI9J8'),\n",
" ToolMessage(content='60', tool_call_id='call_r8jxte3zW6h3MEGV3zH2qzFh')]"
" AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_Smg3NHJNxrKfAmd4f9GkaYn3', 'function': {'arguments': '{\"a\": 3, \"b\": 12}', 'name': 'multiply'}, 'type': 'function'}, {'id': 'call_55K1C0DmH6U5qh810gW34xZ0', 'function': {'arguments': '{\"a\": 11, \"b\": 49}', 'name': 'add'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 49, 'prompt_tokens': 88, 'total_tokens': 137}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-56657feb-96dd-456c-ab8e-1857eab2ade0-0', tool_calls=[{'name': 'multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_Smg3NHJNxrKfAmd4f9GkaYn3', 'type': 'tool_call'}, {'name': 'add', 'args': {'a': 11, 'b': 49}, 'id': 'call_55K1C0DmH6U5qh810gW34xZ0', 'type': 'tool_call'}], usage_metadata={'input_tokens': 88, 'output_tokens': 49, 'total_tokens': 137}),\n",
" ToolMessage(content='36', name='multiply', tool_call_id='call_Smg3NHJNxrKfAmd4f9GkaYn3'),\n",
" ToolMessage(content='60', name='add', tool_call_id='call_55K1C0DmH6U5qh810gW34xZ0')]"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "display_data"
"output_type": "execute_result"
}
],
"source": [
@@ -85,24 +100,25 @@
"messages.append(ai_msg)\n",
"for tool_call in ai_msg.tool_calls:\n",
" selected_tool = {\"add\": add, \"multiply\": multiply}[tool_call[\"name\"].lower()]\n",
" tool_output = selected_tool.invoke(tool_call[\"args\"])\n",
" messages.append(ToolMessage(tool_output, tool_call_id=tool_call[\"id\"]))\n",
" tool_msg = selected_tool.invoke(tool_call)\n",
" messages.append(tool_msg)\n",
"messages"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 6,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='3 * 12 is 36 and 11 + 49 is 60.', response_metadata={'token_usage': {'completion_tokens': 18, 'prompt_tokens': 171, 'total_tokens': 189}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': 'fp_d9767fc5b9', 'finish_reason': 'stop', 'logprobs': None}, id='run-20b52149-e00d-48ea-97cf-f8de7a255f8c-0')"
"AIMessage(content='3 * 12 is 36 and 11 + 49 is 60.', response_metadata={'token_usage': {'completion_tokens': 18, 'prompt_tokens': 153, 'total_tokens': 171}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-ba5032f0-f773-406d-a408-8314e66511d0-0', usage_metadata={'input_tokens': 153, 'output_tokens': 18, 'total_tokens': 171})"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "display_data"
"output_type": "execute_result"
}
],
"source": [
@@ -118,10 +134,24 @@
}
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv-311",
"language": "python",
"name": "poetry-venv-311"
},
"language_info": {
"name": "python"
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}

View File

@@ -4,7 +4,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to pass run time values to a tool\n",
"# How to pass run time values to tools\n",
"\n",
":::info Prerequisites\n",
"\n",
@@ -15,26 +15,25 @@
"- [How to use a model to call tools](/docs/how_to/tool_calling)\n",
":::\n",
"\n",
":::{.callout-info} Supported models\n",
"\n",
"This how-to guide uses models with native tool calling capability.\n",
"You can find a [list of all models that support tool calling](/docs/integrations/chat/).\n",
"\n",
":::\n",
"\n",
":::{.callout-info} Using with LangGraph\n",
":::info Using with LangGraph\n",
"\n",
"If you're using LangGraph, please refer to [this how-to guide](https://langchain-ai.github.io/langgraph/how-tos/pass-run-time-values-to-tools/)\n",
"which shows how to create an agent that keeps track of a given user's favorite pets.\n",
":::\n",
"\n",
":::caution Added in `langchain-core==0.2.21`\n",
"\n",
"Must have `langchain-core>=0.2.21` to use this functionality.\n",
"\n",
":::\n",
"\n",
"You may need to bind values to a tool that are only known at runtime. For example, the tool logic may require using the ID of the user who made the request.\n",
"\n",
"Most of the time, such values should not be controlled by the LLM. In fact, allowing the LLM to control the user ID may lead to a security risk.\n",
"\n",
"Instead, the LLM should only control the parameters of the tool that are meant to be controlled by the LLM, while other parameters (such as user ID) should be fixed by the application logic.\n",
"\n",
"This how-to guide shows a simple design pattern that creates the tool dynamically at run time and binds to them appropriate values."
"This how-to guide shows you how to prevent the model from generating certain tool arguments and injecting them in directly at runtime."
]
},
{
@@ -57,23 +56,12 @@
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.2.1\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m24.0\u001b[0m\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpython -m pip install --upgrade pip\u001b[0m\n",
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"outputs": [],
"source": [
"# | output: false\n",
"# | echo: false\n",
"\n",
"%pip install -qU langchain langchain_openai\n",
"# %pip install -qU langchain langchain_openai\n",
"\n",
"import os\n",
"from getpass import getpass\n",
@@ -90,10 +78,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Passing request time information\n",
"## Hiding arguments from the model\n",
"\n",
"The idea is to create the tool dynamically at request time, and bind to it the appropriate information. For example,\n",
"this information may be the user ID as resolved from the request itself."
"We can use the InjectedToolArg annotation to mark certain parameters of our Tool, like `user_id` as being injected at runtime, meaning they shouldn't be generated by the model"
]
},
{
@@ -104,46 +91,88 @@
"source": [
"from typing import List\n",
"\n",
"from langchain_core.output_parsers import JsonOutputParser\n",
"from langchain_core.tools import BaseTool, tool"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.tools import InjectedToolArg, tool\n",
"from typing_extensions import Annotated\n",
"\n",
"user_to_pets = {}\n",
"\n",
"\n",
"def generate_tools_for_user(user_id: str) -> List[BaseTool]:\n",
" \"\"\"Generate a set of tools that have a user id associated with them.\"\"\"\n",
"@tool(parse_docstring=True)\n",
"def update_favorite_pets(\n",
" pets: List[str], user_id: Annotated[str, InjectedToolArg]\n",
") -> None:\n",
" \"\"\"Add the list of favorite pets.\n",
"\n",
" @tool\n",
" def update_favorite_pets(pets: List[str]) -> None:\n",
" \"\"\"Add the list of favorite pets.\"\"\"\n",
" user_to_pets[user_id] = pets\n",
" Args:\n",
" pets: List of favorite pets to set.\n",
" user_id: User's ID.\n",
" \"\"\"\n",
" user_to_pets[user_id] = pets\n",
"\n",
" @tool\n",
" def delete_favorite_pets() -> None:\n",
" \"\"\"Delete the list of favorite pets.\"\"\"\n",
" if user_id in user_to_pets:\n",
" del user_to_pets[user_id]\n",
"\n",
" @tool\n",
" def list_favorite_pets() -> None:\n",
" \"\"\"List favorite pets if any.\"\"\"\n",
" return user_to_pets.get(user_id, [])\n",
"@tool(parse_docstring=True)\n",
"def delete_favorite_pets(user_id: Annotated[str, InjectedToolArg]) -> None:\n",
" \"\"\"Delete the list of favorite pets.\n",
"\n",
" return [update_favorite_pets, delete_favorite_pets, list_favorite_pets]"
" Args:\n",
" user_id: User's ID.\n",
" \"\"\"\n",
" if user_id in user_to_pets:\n",
" del user_to_pets[user_id]\n",
"\n",
"\n",
"@tool(parse_docstring=True)\n",
"def list_favorite_pets(user_id: Annotated[str, InjectedToolArg]) -> None:\n",
" \"\"\"List favorite pets if any.\n",
"\n",
" Args:\n",
" user_id: User's ID.\n",
" \"\"\"\n",
" return user_to_pets.get(user_id, [])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Verify that the tools work correctly"
"If we look at the input schemas for these tools, we'll see that user_id is still listed:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'title': 'update_favorite_petsSchema',\n",
" 'description': 'Add the list of favorite pets.',\n",
" 'type': 'object',\n",
" 'properties': {'pets': {'title': 'Pets',\n",
" 'description': 'List of favorite pets to set.',\n",
" 'type': 'array',\n",
" 'items': {'type': 'string'}},\n",
" 'user_id': {'title': 'User Id',\n",
" 'description': \"User's ID.\",\n",
" 'type': 'string'}},\n",
" 'required': ['pets', 'user_id']}"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"update_favorite_pets.get_input_schema().schema()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"But if we look at the tool call schema, which is what is passed to the model for tool-calling, user_id has been removed:"
]
},
{
@@ -152,46 +181,60 @@
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'eugene': ['cat', 'dog']}\n",
"['cat', 'dog']\n"
]
"data": {
"text/plain": [
"{'title': 'update_favorite_pets',\n",
" 'description': 'Add the list of favorite pets.',\n",
" 'type': 'object',\n",
" 'properties': {'pets': {'title': 'Pets',\n",
" 'description': 'List of favorite pets to set.',\n",
" 'type': 'array',\n",
" 'items': {'type': 'string'}}},\n",
" 'required': ['pets']}"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"update_pets, delete_pets, list_pets = generate_tools_for_user(\"eugene\")\n",
"update_pets.invoke({\"pets\": [\"cat\", \"dog\"]})\n",
"print(user_to_pets)\n",
"print(list_pets.invoke({}))"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"\n",
"def handle_run_time_request(user_id: str, query: str):\n",
" \"\"\"Handle run time request.\"\"\"\n",
" tools = generate_tools_for_user(user_id)\n",
" llm_with_tools = llm.bind_tools(tools)\n",
" prompt = ChatPromptTemplate.from_messages(\n",
" [(\"system\", \"You are a helpful assistant.\")],\n",
" )\n",
" chain = prompt | llm_with_tools\n",
" return llm_with_tools.invoke(query)"
"update_favorite_pets.tool_call_schema.schema()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This code will allow the LLM to invoke the tools, but the LLM is **unaware** of the fact that a **user ID** even exists!"
"So when we invoke our tool, we need to pass in user_id:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'123': ['lizard', 'dog']}\n",
"['lizard', 'dog']\n"
]
}
],
"source": [
"user_id = \"123\"\n",
"update_favorite_pets.invoke({\"pets\": [\"lizard\", \"dog\"], \"user_id\": user_id})\n",
"print(user_to_pets)\n",
"print(list_favorite_pets.invoke({\"user_id\": user_id}))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"But when the model calls the tool, no user_id argument will be generated:"
]
},
{
@@ -204,7 +247,8 @@
"text/plain": [
"[{'name': 'update_favorite_pets',\n",
" 'args': {'pets': ['cats', 'parrots']},\n",
" 'id': 'call_jJvjPXsNbFO5MMgW0q84iqCN'}]"
" 'id': 'call_W3cn4lZmJlyk8PCrKN4PRwqB',\n",
" 'type': 'tool_call'}]"
]
},
"execution_count": 6,
@@ -213,30 +257,349 @@
}
],
"source": [
"ai_message = handle_run_time_request(\n",
" \"eugene\", \"my favorite animals are cats and parrots.\"\n",
")\n",
"ai_message.tool_calls"
"tools = [\n",
" update_favorite_pets,\n",
" delete_favorite_pets,\n",
" list_favorite_pets,\n",
"]\n",
"llm_with_tools = llm.bind_tools(tools)\n",
"ai_msg = llm_with_tools.invoke(\"my favorite animals are cats and parrots\")\n",
"ai_msg.tool_calls"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
":::{.callout-important}\n",
"## Injecting arguments at runtime"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If we want to actually execute our tools using the model-generated tool call, we'll need to inject the user_id ourselves:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'update_favorite_pets',\n",
" 'args': {'pets': ['cats', 'parrots'], 'user_id': '123'},\n",
" 'id': 'call_W3cn4lZmJlyk8PCrKN4PRwqB',\n",
" 'type': 'tool_call'}]"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from copy import deepcopy\n",
"\n",
"Chat models only output requests to invoke tools, they don't actually invoke the underlying tools.\n",
"from langchain_core.runnables import chain\n",
"\n",
"To see how to invoke the tools, please refer to [how to use a model to call tools](https://python.langchain.com/v0.2/docs/how_to/tool_calling).\n",
":::"
"\n",
"@chain\n",
"def inject_user_id(ai_msg):\n",
" tool_calls = []\n",
" for tool_call in ai_msg.tool_calls:\n",
" tool_call_copy = deepcopy(tool_call)\n",
" tool_call_copy[\"args\"][\"user_id\"] = user_id\n",
" tool_calls.append(tool_call_copy)\n",
" return tool_calls\n",
"\n",
"\n",
"inject_user_id.invoke(ai_msg)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And now we can chain together our model, injection code, and the actual tools to create a tool-executing chain:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[ToolMessage(content='null', name='update_favorite_pets', tool_call_id='call_HUyF6AihqANzEYxQnTUKxkXj')]"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tool_map = {tool.name: tool for tool in tools}\n",
"\n",
"\n",
"@chain\n",
"def tool_router(tool_call):\n",
" return tool_map[tool_call[\"name\"]]\n",
"\n",
"\n",
"chain = llm_with_tools | inject_user_id | tool_router.map()\n",
"chain.invoke(\"my favorite animals are cats and parrots\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Looking at the user_to_pets dict, we can see that it's been updated to include cats and parrots:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'123': ['cats', 'parrots']}"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"user_to_pets"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Other ways of annotating args\n",
"\n",
"Here are a few other ways of annotating our tool args:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'title': 'UpdateFavoritePetsSchema',\n",
" 'description': 'Update list of favorite pets',\n",
" 'type': 'object',\n",
" 'properties': {'pets': {'title': 'Pets',\n",
" 'description': 'List of favorite pets to set.',\n",
" 'type': 'array',\n",
" 'items': {'type': 'string'}},\n",
" 'user_id': {'title': 'User Id',\n",
" 'description': \"User's ID.\",\n",
" 'type': 'string'}},\n",
" 'required': ['pets', 'user_id']}"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"from langchain_core.tools import BaseTool\n",
"\n",
"\n",
"class UpdateFavoritePetsSchema(BaseModel):\n",
" \"\"\"Update list of favorite pets\"\"\"\n",
"\n",
" pets: List[str] = Field(..., description=\"List of favorite pets to set.\")\n",
" user_id: Annotated[str, InjectedToolArg] = Field(..., description=\"User's ID.\")\n",
"\n",
"\n",
"@tool(args_schema=UpdateFavoritePetsSchema)\n",
"def update_favorite_pets(pets, user_id):\n",
" user_to_pets[user_id] = pets\n",
"\n",
"\n",
"update_favorite_pets.get_input_schema().schema()"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'title': 'update_favorite_pets',\n",
" 'description': 'Update list of favorite pets',\n",
" 'type': 'object',\n",
" 'properties': {'pets': {'title': 'Pets',\n",
" 'description': 'List of favorite pets to set.',\n",
" 'type': 'array',\n",
" 'items': {'type': 'string'}}},\n",
" 'required': ['pets']}"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"update_favorite_pets.tool_call_schema.schema()"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'title': 'UpdateFavoritePetsSchema',\n",
" 'description': 'Update list of favorite pets',\n",
" 'type': 'object',\n",
" 'properties': {'pets': {'title': 'Pets',\n",
" 'description': 'List of favorite pets to set.',\n",
" 'type': 'array',\n",
" 'items': {'type': 'string'}},\n",
" 'user_id': {'title': 'User Id',\n",
" 'description': \"User's ID.\",\n",
" 'type': 'string'}},\n",
" 'required': ['pets', 'user_id']}"
]
},
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from typing import Optional, Type\n",
"\n",
"\n",
"class UpdateFavoritePets(BaseTool):\n",
" name: str = \"update_favorite_pets\"\n",
" description: str = \"Update list of favorite pets\"\n",
" args_schema: Optional[Type[BaseModel]] = UpdateFavoritePetsSchema\n",
"\n",
" def _run(self, pets, user_id):\n",
" user_to_pets[user_id] = pets\n",
"\n",
"\n",
"UpdateFavoritePets().get_input_schema().schema()"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'title': 'update_favorite_pets',\n",
" 'description': 'Update list of favorite pets',\n",
" 'type': 'object',\n",
" 'properties': {'pets': {'title': 'Pets',\n",
" 'description': 'List of favorite pets to set.',\n",
" 'type': 'array',\n",
" 'items': {'type': 'string'}}},\n",
" 'required': ['pets']}"
]
},
"execution_count": 23,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"UpdateFavoritePets().tool_call_schema.schema()"
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'title': 'update_favorite_petsSchema',\n",
" 'description': 'Use the tool.\\n\\nAdd run_manager: Optional[CallbackManagerForToolRun] = None\\nto child implementations to enable tracing.',\n",
" 'type': 'object',\n",
" 'properties': {'pets': {'title': 'Pets',\n",
" 'type': 'array',\n",
" 'items': {'type': 'string'}},\n",
" 'user_id': {'title': 'User Id', 'type': 'string'}},\n",
" 'required': ['pets', 'user_id']}"
]
},
"execution_count": 24,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"class UpdateFavoritePets2(BaseTool):\n",
" name: str = \"update_favorite_pets\"\n",
" description: str = \"Update list of favorite pets\"\n",
"\n",
" def _run(self, pets: List[str], user_id: Annotated[str, InjectedToolArg]) -> None:\n",
" user_to_pets[user_id] = pets\n",
"\n",
"\n",
"UpdateFavoritePets2().get_input_schema().schema()"
]
},
{
"cell_type": "code",
"execution_count": 26,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'title': 'update_favorite_pets',\n",
" 'description': 'Update list of favorite pets',\n",
" 'type': 'object',\n",
" 'properties': {'pets': {'title': 'Pets',\n",
" 'type': 'array',\n",
" 'items': {'type': 'string'}}},\n",
" 'required': ['pets']}"
]
},
"execution_count": 26,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"UpdateFavoritePets2().tool_call_schema.schema()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": "poetry-venv-311",
"language": "python",
"name": "python3"
"name": "poetry-venv-311"
},
"language_info": {
"codemirror_mode": {
@@ -248,7 +611,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
"version": "3.11.9"
}
},
"nbformat": 4,

View File

@@ -4,25 +4,32 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to stream events from within a tool\n",
"# How to stream events from a tool\n",
"\n",
":::info Prerequisites\n",
"\n",
"This guide assumes familiarity with the following concepts:\n",
"- [LangChain Tools](/docs/concepts/#tools)\n",
"- [Custom tools](/docs/how_to/custom_tools)\n",
"- [Using stream events](/docs/how_to/streaming/#using-stream-events)\n",
"- [Accessing RunnableConfig within a custom tool](/docs/how_to/tool_configure/)\n",
"\n",
":::\n",
"\n",
"If you have tools that call LLMs, retrievers, or other runnables, you may want to access internal events from those runnables. This guide shows you a few ways you can do this using the `astream_events()` method.\n",
"If you have tools that call chat models, retrievers, or other runnables, you may want to access internal events from those runnables or configure them with additional properties. This guide shows you how to manually pass parameters properly so that you can do this using the `astream_events()` method.\n",
"\n",
":::caution\n",
"LangChain cannot automatically propagate callbacks to child runnables if you are running async code in python<=3.10.\n",
" \n",
"This is a common reason why you may fail to see events being emitted from custom runnables or tools.\n",
":::caution Compatibility\n",
"\n",
"LangChain cannot automatically propagate configuration, including callbacks necessary for `astream_events()`, to child runnables if you are running `async` code in `python<=3.10`. This is a common reason why you may fail to see events being emitted from custom runnables or tools.\n",
"\n",
"If you are running python<=3.10, you will need to manually propagate the `RunnableConfig` object to the child runnable in async environments. For an example of how to manually propagate the config, see the implementation of the `bar` RunnableLambda below.\n",
"\n",
"If you are running python>=3.11, the `RunnableConfig` will automatically propagate to child runnables in async environment. However, it is still a good idea to propagate the `RunnableConfig` manually if your code may run in older Python versions.\n",
"\n",
"This guide also requires `langchain-core>=0.2.16`.\n",
":::\n",
"\n",
"We'll define a custom tool below that calls a chain that summarizes its input in a special way by prompting an LLM to return only 10 words, then reversing the output:\n",
"Say you have a custom tool that calls a chain that condenses its input by prompting a chat model to return only 10 words, then reversing the output. First, define it in a naive way:\n",
"\n",
"```{=mdx}\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
@@ -40,7 +47,7 @@
"# | output: false\n",
"# | echo: false\n",
"\n",
"%pip install -qU langchain langchain_anthropic\n",
"%pip install -qU langchain langchain_anthropic langchain_core\n",
"\n",
"import os\n",
"from getpass import getpass\n",
@@ -65,7 +72,7 @@
"\n",
"\n",
"@tool\n",
"def special_summarization_tool(long_text: str) -> str:\n",
"async def special_summarization_tool(long_text: str) -> str:\n",
" \"\"\"A tool that summarizes input text using advanced techniques.\"\"\"\n",
" prompt = ChatPromptTemplate.from_template(\n",
" \"You are an expert writer. Summarize the following text in 10 words or less:\\n\\n{long_text}\"\n",
@@ -75,7 +82,7 @@
" return x[::-1]\n",
"\n",
" chain = prompt | model | StrOutputParser() | reverse\n",
" summary = chain.invoke({\"long_text\": long_text})\n",
" summary = await chain.ainvoke({\"long_text\": long_text})\n",
" return summary"
]
},
@@ -83,7 +90,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"If you just invoke the tool directly, you can see that you only get the final response:"
"Invoking the tool directly works just fine:"
]
},
{
@@ -116,31 +123,90 @@
"Coming! Hang on a second.\n",
"\"\"\"\n",
"\n",
"special_summarization_tool.invoke({\"long_text\": LONG_TEXT})"
"await special_summarization_tool.ainvoke({\"long_text\": LONG_TEXT})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you wanted to access the raw output from the chat model, you could use the [`astream_events()`](/docs/how_to/streaming/#using-stream-events) method and look for `on_chat_model_end` events:"
"But if you wanted to access the raw output from the chat model rather than the full tool, you might try to use the [`astream_events()`](/docs/how_to/streaming/#using-stream-events) method and look for an `on_chat_model_end` event. Here's what happens:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"stream = special_summarization_tool.astream_events(\n",
" {\"long_text\": LONG_TEXT}, version=\"v2\"\n",
")\n",
"\n",
"async for event in stream:\n",
" if event[\"event\"] == \"on_chat_model_end\":\n",
" # Never triggers in python<=3.10!\n",
" print(event)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You'll notice (unless you're running through this guide in `python>=3.11`) that there are no chat model events emitted from the child run!\n",
"\n",
"This is because the example above does not pass the tool's config object into the internal chain. To fix this, redefine your tool to take a special parameter typed as `RunnableConfig` (see [this guide](/docs/how_to/tool_configure) for more details). You'll also need to pass that parameter through into the internal chain when executing it:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.runnables import RunnableConfig\n",
"\n",
"\n",
"@tool\n",
"async def special_summarization_tool_with_config(\n",
" long_text: str, config: RunnableConfig\n",
") -> str:\n",
" \"\"\"A tool that summarizes input text using advanced techniques.\"\"\"\n",
" prompt = ChatPromptTemplate.from_template(\n",
" \"You are an expert writer. Summarize the following text in 10 words or less:\\n\\n{long_text}\"\n",
" )\n",
"\n",
" def reverse(x: str):\n",
" return x[::-1]\n",
"\n",
" chain = prompt | model | StrOutputParser() | reverse\n",
" # Pass the \"config\" object as an argument to any executed runnables\n",
" summary = await chain.ainvoke({\"long_text\": long_text}, config=config)\n",
" return summary"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And now try the same `astream_events()` call as before with your new tool:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'event': 'on_chat_model_end', 'data': {'output': AIMessage(content='Bee defies physics; Barry chooses outfit for graduation day.', response_metadata={'stop_reason': 'end_turn', 'stop_sequence': None}, id='run-195c0986-2ffa-43a3-9366-f2f96c42fe57', usage_metadata={'input_tokens': 182, 'output_tokens': 16, 'total_tokens': 198}), 'input': {'messages': [[HumanMessage(content=\"You are an expert writer. Summarize the following text in 10 words or less:\\n\\n\\nNARRATOR:\\n(Black screen with text; The sound of buzzing bees can be heard)\\nAccording to all known laws of aviation, there is no way a bee should be able to fly. Its wings are too small to get its fat little body off the ground. The bee, of course, flies anyway because bees don't care what humans think is impossible.\\nBARRY BENSON:\\n(Barry is picking out a shirt)\\nYellow, black. Yellow, black. Yellow, black. Yellow, black. Ooh, black and yellow! Let's shake it up a little.\\nJANET BENSON:\\nBarry! Breakfast is ready!\\nBARRY:\\nComing! Hang on a second.\\n\")]]}}, 'run_id': '195c0986-2ffa-43a3-9366-f2f96c42fe57', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['370919df-1bc3-43ae-aab2-8e112a4ddf47', 'de535624-278b-4927-9393-6d0cac3248df']}\n"
"{'event': 'on_chat_model_end', 'data': {'output': AIMessage(content='Bee defies physics; Barry chooses outfit for graduation day.', response_metadata={'stop_reason': 'end_turn', 'stop_sequence': None}, id='run-d23abc80-0dce-4f74-9d7b-fb98ca4f2a9e', usage_metadata={'input_tokens': 182, 'output_tokens': 16, 'total_tokens': 198}), 'input': {'messages': [[HumanMessage(content=\"You are an expert writer. Summarize the following text in 10 words or less:\\n\\n\\nNARRATOR:\\n(Black screen with text; The sound of buzzing bees can be heard)\\nAccording to all known laws of aviation, there is no way a bee should be able to fly. Its wings are too small to get its fat little body off the ground. The bee, of course, flies anyway because bees don't care what humans think is impossible.\\nBARRY BENSON:\\n(Barry is picking out a shirt)\\nYellow, black. Yellow, black. Yellow, black. Yellow, black. Ooh, black and yellow! Let's shake it up a little.\\nJANET BENSON:\\nBarry! Breakfast is ready!\\nBARRY:\\nComing! Hang on a second.\\n\")]]}}, 'run_id': 'd23abc80-0dce-4f74-9d7b-fb98ca4f2a9e', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['f25c41fe-8972-4893-bc40-cecf3922c1fa']}\n"
]
}
],
"source": [
"stream = special_summarization_tool.astream_events(\n",
"stream = special_summarization_tool_with_config.astream_events(\n",
" {\"long_text\": LONG_TEXT}, version=\"v2\"\n",
")\n",
"\n",
@@ -153,38 +219,38 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"And you can see that you get the raw response from the chat model.\n",
"Awesome! This time there's an event emitted.\n",
"\n",
"`astream_events()` will automatically call internal runnables in a chain with streaming enabled if possible, so if you wanted to a stream of tokens as they are generated from the chat model, you could simply filter our calls to look for `on_chat_model_stream` events with no other changes:"
"For streaming, `astream_events()` automatically calls internal runnables in a chain with streaming enabled if possible, so if you wanted to a stream of tokens as they are generated from the chat model, you could simply filter to look for `on_chat_model_stream` events with no other changes:"
]
},
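
A sketch of the token-level filter described above, assuming the `special_summarization_tool_with_config` tool and `LONG_TEXT` defined earlier in this notebook:

```python
# Same astream_events() call as before, but filtering for streamed chunks.
stream = special_summarization_tool_with_config.astream_events(
    {"long_text": LONG_TEXT}, version="v2"
)

async for event in stream:
    if event["event"] == "on_chat_model_stream":
        print(event)
```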
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='', id='run-cd8c1bd9-64d8-463c-a4d7-4bceed7911b3', usage_metadata={'input_tokens': 182, 'output_tokens': 0, 'total_tokens': 182})}, 'run_id': 'cd8c1bd9-64d8-463c-a4d7-4bceed7911b3', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['8ddd1325-07c4-4213-8a2f-4462db8c6c70', '9f8654b4-b3f6-414e-b41d-dd201342a2fa']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='Bee', id='run-cd8c1bd9-64d8-463c-a4d7-4bceed7911b3')}, 'run_id': 'cd8c1bd9-64d8-463c-a4d7-4bceed7911b3', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['8ddd1325-07c4-4213-8a2f-4462db8c6c70', '9f8654b4-b3f6-414e-b41d-dd201342a2fa']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' def', id='run-cd8c1bd9-64d8-463c-a4d7-4bceed7911b3')}, 'run_id': 'cd8c1bd9-64d8-463c-a4d7-4bceed7911b3', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['8ddd1325-07c4-4213-8a2f-4462db8c6c70', '9f8654b4-b3f6-414e-b41d-dd201342a2fa']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='ies physics', id='run-cd8c1bd9-64d8-463c-a4d7-4bceed7911b3')}, 'run_id': 'cd8c1bd9-64d8-463c-a4d7-4bceed7911b3', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['8ddd1325-07c4-4213-8a2f-4462db8c6c70', '9f8654b4-b3f6-414e-b41d-dd201342a2fa']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=';', id='run-cd8c1bd9-64d8-463c-a4d7-4bceed7911b3')}, 'run_id': 'cd8c1bd9-64d8-463c-a4d7-4bceed7911b3', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['8ddd1325-07c4-4213-8a2f-4462db8c6c70', '9f8654b4-b3f6-414e-b41d-dd201342a2fa']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' Barry', id='run-cd8c1bd9-64d8-463c-a4d7-4bceed7911b3')}, 'run_id': 'cd8c1bd9-64d8-463c-a4d7-4bceed7911b3', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['8ddd1325-07c4-4213-8a2f-4462db8c6c70', '9f8654b4-b3f6-414e-b41d-dd201342a2fa']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' cho', id='run-cd8c1bd9-64d8-463c-a4d7-4bceed7911b3')}, 'run_id': 'cd8c1bd9-64d8-463c-a4d7-4bceed7911b3', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['8ddd1325-07c4-4213-8a2f-4462db8c6c70', '9f8654b4-b3f6-414e-b41d-dd201342a2fa']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='oses outfit', id='run-cd8c1bd9-64d8-463c-a4d7-4bceed7911b3')}, 'run_id': 'cd8c1bd9-64d8-463c-a4d7-4bceed7911b3', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['8ddd1325-07c4-4213-8a2f-4462db8c6c70', '9f8654b4-b3f6-414e-b41d-dd201342a2fa']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' for', id='run-cd8c1bd9-64d8-463c-a4d7-4bceed7911b3')}, 'run_id': 'cd8c1bd9-64d8-463c-a4d7-4bceed7911b3', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['8ddd1325-07c4-4213-8a2f-4462db8c6c70', '9f8654b4-b3f6-414e-b41d-dd201342a2fa']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' graduation', id='run-cd8c1bd9-64d8-463c-a4d7-4bceed7911b3')}, 'run_id': 'cd8c1bd9-64d8-463c-a4d7-4bceed7911b3', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['8ddd1325-07c4-4213-8a2f-4462db8c6c70', '9f8654b4-b3f6-414e-b41d-dd201342a2fa']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' day', id='run-cd8c1bd9-64d8-463c-a4d7-4bceed7911b3')}, 'run_id': 'cd8c1bd9-64d8-463c-a4d7-4bceed7911b3', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['8ddd1325-07c4-4213-8a2f-4462db8c6c70', '9f8654b4-b3f6-414e-b41d-dd201342a2fa']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='.', id='run-cd8c1bd9-64d8-463c-a4d7-4bceed7911b3')}, 'run_id': 'cd8c1bd9-64d8-463c-a4d7-4bceed7911b3', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['8ddd1325-07c4-4213-8a2f-4462db8c6c70', '9f8654b4-b3f6-414e-b41d-dd201342a2fa']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='', response_metadata={'stop_reason': 'end_turn', 'stop_sequence': None}, id='run-cd8c1bd9-64d8-463c-a4d7-4bceed7911b3', usage_metadata={'input_tokens': 0, 'output_tokens': 16, 'total_tokens': 16})}, 'run_id': 'cd8c1bd9-64d8-463c-a4d7-4bceed7911b3', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['8ddd1325-07c4-4213-8a2f-4462db8c6c70', '9f8654b4-b3f6-414e-b41d-dd201342a2fa']}\n"
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='', id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42', usage_metadata={'input_tokens': 182, 'output_tokens': 0, 'total_tokens': 182})}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='Bee', id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42')}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' def', id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42')}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='ies physics', id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42')}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=';', id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42')}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' Barry', id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42')}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' cho', id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42')}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='oses outfit', id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42')}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' for', id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42')}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' graduation', id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42')}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' day', id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42')}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='.', id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42')}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='', response_metadata={'stop_reason': 'end_turn', 'stop_sequence': None}, id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42', usage_metadata={'input_tokens': 0, 'output_tokens': 16, 'total_tokens': 16})}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n"
]
}
],
"source": [
"stream = special_summarization_tool.astream_events(\n",
"stream = special_summarization_tool_with_config.astream_events(\n",
" {\"long_text\": LONG_TEXT}, version=\"v2\"\n",
")\n",
"\n",
@@ -193,67 +259,17 @@
" print(event)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that you still have access to the final tool response as well. You can access it by looking for an `on_tool_end` event.\n",
"\n",
"To make events your tool emits easier to identify, you can also add identifiers to runnables using the `with_config()` method. `run_name` will apply to only to the runnable you attach it to, while `tags` will be inherited by runnables called within your initial runnable.\n",
"\n",
"Let's redeclare the tool with a tag, then run it with `astream_events()` with some filters. You should only see streamed events from the chat model and the final tool output:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='', id='run-696f4fc8-6c6f-46a0-8c82-e2e3f7625630', usage_metadata={'input_tokens': 182, 'output_tokens': 0, 'total_tokens': 182})}, 'run_id': '696f4fc8-6c6f-46a0-8c82-e2e3f7625630', 'name': 'ChatAnthropic', 'tags': ['seq:step:2', 'bee_movie'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['49d9d7d3-2b02-4964-a6c5-12f57a063146', '8922d0e3-4199-4ba5-9a7a-fc4f2fca3e72']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='Bee', id='run-696f4fc8-6c6f-46a0-8c82-e2e3f7625630')}, 'run_id': '696f4fc8-6c6f-46a0-8c82-e2e3f7625630', 'name': 'ChatAnthropic', 'tags': ['seq:step:2', 'bee_movie'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['49d9d7d3-2b02-4964-a6c5-12f57a063146', '8922d0e3-4199-4ba5-9a7a-fc4f2fca3e72']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' def', id='run-696f4fc8-6c6f-46a0-8c82-e2e3f7625630')}, 'run_id': '696f4fc8-6c6f-46a0-8c82-e2e3f7625630', 'name': 'ChatAnthropic', 'tags': ['seq:step:2', 'bee_movie'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['49d9d7d3-2b02-4964-a6c5-12f57a063146', '8922d0e3-4199-4ba5-9a7a-fc4f2fca3e72']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='ies physics', id='run-696f4fc8-6c6f-46a0-8c82-e2e3f7625630')}, 'run_id': '696f4fc8-6c6f-46a0-8c82-e2e3f7625630', 'name': 'ChatAnthropic', 'tags': ['seq:step:2', 'bee_movie'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['49d9d7d3-2b02-4964-a6c5-12f57a063146', '8922d0e3-4199-4ba5-9a7a-fc4f2fca3e72']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=';', id='run-696f4fc8-6c6f-46a0-8c82-e2e3f7625630')}, 'run_id': '696f4fc8-6c6f-46a0-8c82-e2e3f7625630', 'name': 'ChatAnthropic', 'tags': ['seq:step:2', 'bee_movie'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['49d9d7d3-2b02-4964-a6c5-12f57a063146', '8922d0e3-4199-4ba5-9a7a-fc4f2fca3e72']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' Barry', id='run-696f4fc8-6c6f-46a0-8c82-e2e3f7625630')}, 'run_id': '696f4fc8-6c6f-46a0-8c82-e2e3f7625630', 'name': 'ChatAnthropic', 'tags': ['seq:step:2', 'bee_movie'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['49d9d7d3-2b02-4964-a6c5-12f57a063146', '8922d0e3-4199-4ba5-9a7a-fc4f2fca3e72']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' cho', id='run-696f4fc8-6c6f-46a0-8c82-e2e3f7625630')}, 'run_id': '696f4fc8-6c6f-46a0-8c82-e2e3f7625630', 'name': 'ChatAnthropic', 'tags': ['seq:step:2', 'bee_movie'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['49d9d7d3-2b02-4964-a6c5-12f57a063146', '8922d0e3-4199-4ba5-9a7a-fc4f2fca3e72']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='oses outfit', id='run-696f4fc8-6c6f-46a0-8c82-e2e3f7625630')}, 'run_id': '696f4fc8-6c6f-46a0-8c82-e2e3f7625630', 'name': 'ChatAnthropic', 'tags': ['seq:step:2', 'bee_movie'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['49d9d7d3-2b02-4964-a6c5-12f57a063146', '8922d0e3-4199-4ba5-9a7a-fc4f2fca3e72']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' for', id='run-696f4fc8-6c6f-46a0-8c82-e2e3f7625630')}, 'run_id': '696f4fc8-6c6f-46a0-8c82-e2e3f7625630', 'name': 'ChatAnthropic', 'tags': ['seq:step:2', 'bee_movie'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['49d9d7d3-2b02-4964-a6c5-12f57a063146', '8922d0e3-4199-4ba5-9a7a-fc4f2fca3e72']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' graduation', id='run-696f4fc8-6c6f-46a0-8c82-e2e3f7625630')}, 'run_id': '696f4fc8-6c6f-46a0-8c82-e2e3f7625630', 'name': 'ChatAnthropic', 'tags': ['seq:step:2', 'bee_movie'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['49d9d7d3-2b02-4964-a6c5-12f57a063146', '8922d0e3-4199-4ba5-9a7a-fc4f2fca3e72']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' day', id='run-696f4fc8-6c6f-46a0-8c82-e2e3f7625630')}, 'run_id': '696f4fc8-6c6f-46a0-8c82-e2e3f7625630', 'name': 'ChatAnthropic', 'tags': ['seq:step:2', 'bee_movie'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['49d9d7d3-2b02-4964-a6c5-12f57a063146', '8922d0e3-4199-4ba5-9a7a-fc4f2fca3e72']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='.', id='run-696f4fc8-6c6f-46a0-8c82-e2e3f7625630')}, 'run_id': '696f4fc8-6c6f-46a0-8c82-e2e3f7625630', 'name': 'ChatAnthropic', 'tags': ['seq:step:2', 'bee_movie'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['49d9d7d3-2b02-4964-a6c5-12f57a063146', '8922d0e3-4199-4ba5-9a7a-fc4f2fca3e72']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='', response_metadata={'stop_reason': 'end_turn', 'stop_sequence': None}, id='run-696f4fc8-6c6f-46a0-8c82-e2e3f7625630', usage_metadata={'input_tokens': 0, 'output_tokens': 16, 'total_tokens': 16})}, 'run_id': '696f4fc8-6c6f-46a0-8c82-e2e3f7625630', 'name': 'ChatAnthropic', 'tags': ['seq:step:2', 'bee_movie'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['49d9d7d3-2b02-4964-a6c5-12f57a063146', '8922d0e3-4199-4ba5-9a7a-fc4f2fca3e72']}\n",
"{'event': 'on_tool_end', 'data': {'output': '.yad noitaudarg rof tiftuo sesoohc yrraB ;scisyhp seifed eeB'}, 'run_id': '49d9d7d3-2b02-4964-a6c5-12f57a063146', 'name': 'special_summarization_tool', 'tags': ['bee_movie'], 'metadata': {}, 'parent_ids': []}\n"
]
}
],
"source": [
"tagged_tool = special_summarization_tool.with_config({\"tags\": [\"bee_movie\"]})\n",
"\n",
"stream = tagged_tool.astream_events(\n",
" {\"long_text\": LONG_TEXT}, version=\"v2\", include_tags=[\"bee_movie\"]\n",
")\n",
"\n",
"async for event in stream:\n",
" event_type = event[\"event\"]\n",
" if event_type == \"on_chat_model_stream\" or event_type == \"on_tool_end\":\n",
" print(event)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Next steps\n",
"\n",
"Now you've learned how to stream events from within a tool. Next, you can learn more about how to use tools:\n",
"You've now seen how to stream events from within a tool. Next, check out the following guides for more on using tools:\n",
"\n",
"- Bind [model-specific tools](/docs/how_to/tools_model_specific/)\n",
"- Pass [runtime values to tools](/docs/how_to/tool_runtime)\n",
"- Pass [tool results back to a model](/docs/how_to/tool_results_pass_to_model)\n",
"- [Dispatch custom callback events](/docs/how_to/callbacks_custom_events)\n",
"\n",
"You can also check out some more specific uses of tool calling:\n",
"\n",
@@ -264,7 +280,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -278,9 +294,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.5"
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}

View File

@@ -228,7 +228,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
"version": "3.11.9"
}
},
"nbformat": 4,

View File

@@ -419,13 +419,13 @@
"Invoking: `exponentiate` with `{'base': 405, 'exponent': 2}`\n",
"\n",
"\n",
"\u001b[0m\u001b[38;5;200m\u001b[1;3m164025\u001b[0m\u001b[32;1m\u001b[1;3mThe result of taking 3 to the fifth power is 243. \n",
"\u001b[0m\u001b[38;5;200m\u001b[1;3m13286025\u001b[0m\u001b[32;1m\u001b[1;3mThe result of taking 3 to the fifth power is 243. \n",
"\n",
"The sum of twelve and three is 15. \n",
"\n",
"Multiplying 243 by 15 gives 3645. \n",
"\n",
"Finally, squaring 3645 gives 164025.\u001b[0m\n",
"Finally, squaring 3645 gives 13286025.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
@@ -434,7 +434,7 @@
"data": {
"text/plain": [
"{'input': 'Take 3 to the fifth power and multiply that by the sum of twelve and three, then square the whole result',\n",
" 'output': 'The result of taking 3 to the fifth power is 243. \\n\\nThe sum of twelve and three is 15. \\n\\nMultiplying 243 by 15 gives 3645. \\n\\nFinally, squaring 3645 gives 164025.'}"
" 'output': 'The result of taking 3 to the fifth power is 243. \\n\\nThe sum of twelve and three is 15. \\n\\nMultiplying 243 by 15 gives 3645. \\n\\nFinally, squaring 3645 gives 13286025.'}"
]
},
"execution_count": 18,

View File

@@ -540,7 +540,7 @@
"id": "137662a6"
},
"source": [
"## Example usage within a Conversation Chains"
"## Example usage within RunnableWithMessageHistory "
]
},
{
@@ -550,7 +550,7 @@
"id": "79efa62d"
},
"source": [
"Like any other integration, ChatNVIDIA is fine to support chat utilities like conversation buffers by default. Below, we show the [LangChain ConversationBufferMemory](https://python.langchain.com/docs/modules/memory/types/buffer) example applied to the `mistralai/mixtral-8x22b-instruct-v0.1` model."
"Like any other integration, ChatNVIDIA is fine to support chat utilities like RunnableWithMessageHistory which is analogous to using `ConversationChain`. Below, we show the [LangChain RunnableWithMessageHistory](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.history.RunnableWithMessageHistory.html) example applied to the `mistralai/mixtral-8x22b-instruct-v0.1` model."
]
},
{
@@ -572,8 +572,19 @@
},
"outputs": [],
"source": [
"from langchain.chains import ConversationChain\n",
"from langchain.memory import ConversationBufferMemory\n",
"from langchain_core.chat_history import InMemoryChatMessageHistory\n",
"from langchain_core.runnables.history import RunnableWithMessageHistory\n",
"\n",
"# store is a dictionary that maps session IDs to their corresponding chat histories.\n",
"store = {} # memory is maintained outside the chain\n",
"\n",
"\n",
"# A function that returns the chat history for a given session ID.\n",
"def get_session_history(session_id: str) -> InMemoryChatMessageHistory:\n",
" if session_id not in store:\n",
" store[session_id] = InMemoryChatMessageHistory()\n",
" return store[session_id]\n",
"\n",
"\n",
"chat = ChatNVIDIA(\n",
" model=\"mistralai/mixtral-8x22b-instruct-v0.1\",\n",
@@ -582,24 +593,18 @@
" top_p=1.0,\n",
")\n",
"\n",
"conversation = ConversationChain(llm=chat, memory=ConversationBufferMemory())"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f644ff28",
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 268
},
"id": "f644ff28",
"outputId": "bae354cc-2118-4e01-ce20-a717ac94d27d"
},
"outputs": [],
"source": [
"conversation.invoke(\"Hi there!\")[\"response\"]"
"# Define a RunnableConfig object, with a `configurable` key. session_id determines thread\n",
"config = {\"configurable\": {\"session_id\": \"1\"}}\n",
"\n",
"conversation = RunnableWithMessageHistory(\n",
" chat,\n",
" get_session_history,\n",
")\n",
"\n",
"conversation.invoke(\n",
" \"Hi I'm Srijan Dubey.\", # input or query\n",
" config=config,\n",
")"
]
},
{
@@ -616,26 +621,30 @@
},
"outputs": [],
"source": [
"conversation.invoke(\"I'm doing well! Just having a conversation with an AI.\")[\n",
" \"response\"\n",
"]"
"conversation.invoke(\n",
" \"I'm doing well! Just having a conversation with an AI.\",\n",
" config=config,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "LyD1xVKmVSs4",
"id": "uHIMZxVSVNBC",
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 350
"height": 284
},
"id": "LyD1xVKmVSs4",
"outputId": "a1714513-a8fd-4d14-f974-233e39d5c4f5"
"id": "uHIMZxVSVNBC",
"outputId": "79acc89d-a820-4f2c-bac2-afe99da95580"
},
"outputs": [],
"source": [
"conversation.invoke(\"Tell me about yourself.\")[\"response\"]"
"conversation.invoke(\n",
" \"Tell me about yourself.\",\n",
" config=config,\n",
")"
]
}
],

View File

@@ -37,7 +37,7 @@
"scrapfly_loader = ScrapflyLoader(\n",
" [\"https://web-scraping.dev/products\"],\n",
" api_key=\"Your ScrapFly API key\", # Get your API key from https://www.scrapfly.io/\n",
" ignore_scrape_failures=True, # Ignore unprocessable web pages and log their exceptions\n",
" continue_on_failure=True, # Ignore unprocessable web pages and log their exceptions\n",
")\n",
"\n",
"# Load documents from URLs as markdown\n",
@@ -72,7 +72,7 @@
"scrapfly_loader = ScrapflyLoader(\n",
" [\"https://web-scraping.dev/products\"],\n",
" api_key=\"Your ScrapFly API key\", # Get your API key from https://www.scrapfly.io/\n",
" ignore_scrape_failures=True, # Ignore unprocessable web pages and log their exceptions\n",
" continue_on_failure=True, # Ignore unprocessable web pages and log their exceptions\n",
" scrape_config=scrapfly_scrape_config, # Pass the scrape_config object\n",
" scrape_format=\"markdown\", # The scrape result format, either `markdown`(default) or `text`\n",
")\n",

View File

@@ -12,7 +12,7 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 1,
"id": "10ad9224",
"metadata": {
"ExecuteTime": {
@@ -1809,7 +1809,6 @@
"cell_type": "markdown",
"id": "0c69d84d",
"metadata": {
"jp-MarkdownHeadingCollapsed": true,
"tags": []
},
"source": [
@@ -1891,7 +1890,6 @@
"cell_type": "markdown",
"id": "5da41b77",
"metadata": {
"jp-MarkdownHeadingCollapsed": true,
"tags": []
},
"source": [
@@ -2149,6 +2147,7 @@
},
{
"cell_type": "markdown",
"id": "2ac1a8c7",
"metadata": {},
"source": [
"## SingleStoreDB Semantic Cache\n",
@@ -2173,6 +2172,353 @@
")"
]
},
{
"cell_type": "markdown",
"id": "7019c991-0101-4f9c-b212-5729a5471293",
"metadata": {},
"source": [
"## Couchbase Caches\n",
"\n",
"Use [Couchbase](https://couchbase.com/) as a cache for prompts and responses."
]
},
{
"cell_type": "markdown",
"id": "d6aac680-ba32-4c19-8864-6471cf0e7d5a",
"metadata": {},
"source": [
"### Couchbase Cache\n",
"\n",
"The standard cache that looks for an exact match of the user prompt."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "9b4764e4-c75f-4185-b326-524287a826be",
"metadata": {},
"outputs": [],
"source": [
"# Create couchbase connection object\n",
"from datetime import timedelta\n",
"\n",
"from couchbase.auth import PasswordAuthenticator\n",
"from couchbase.cluster import Cluster\n",
"from couchbase.options import ClusterOptions\n",
"from langchain_couchbase.cache import CouchbaseCache\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"COUCHBASE_CONNECTION_STRING = (\n",
" \"couchbase://localhost\" # or \"couchbases://localhost\" if using TLS\n",
")\n",
"DB_USERNAME = \"Administrator\"\n",
"DB_PASSWORD = \"Password\"\n",
"\n",
"auth = PasswordAuthenticator(DB_USERNAME, DB_PASSWORD)\n",
"options = ClusterOptions(auth)\n",
"cluster = Cluster(COUCHBASE_CONNECTION_STRING, options)\n",
"\n",
"# Wait until the cluster is ready for use.\n",
"cluster.wait_until_ready(timedelta(seconds=5))"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "4b5e73c5-92c1-4eab-84e2-77924ea9c123",
"metadata": {},
"outputs": [],
"source": [
"# Specify the bucket, scope and collection to store the cached documents\n",
"BUCKET_NAME = \"langchain-testing\"\n",
"SCOPE_NAME = \"_default\"\n",
"COLLECTION_NAME = \"_default\"\n",
"\n",
"set_llm_cache(\n",
" CouchbaseCache(\n",
" cluster=cluster,\n",
" bucket_name=BUCKET_NAME,\n",
" scope_name=SCOPE_NAME,\n",
" collection_name=COLLECTION_NAME,\n",
" )\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "db8d28cc-8d93-47b4-8326-57a29a06fb3c",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"CPU times: user 22.2 ms, sys: 14 ms, total: 36.2 ms\n",
"Wall time: 938 ms\n"
]
},
{
"data": {
"text/plain": [
"\"\\n\\nWhy couldn't the bicycle stand up by itself? Because it was two-tired!\""
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"%%time\n",
"# The first time, it is not yet in the cache, so it should take longer\n",
"llm.invoke(\"Tell me a joke\")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "b470dc81-2e7f-4743-9435-ce9071394eea",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"CPU times: user 53 ms, sys: 29 ms, total: 82 ms\n",
"Wall time: 84.2 ms\n"
]
},
{
"data": {
"text/plain": [
"\"\\n\\nWhy couldn't the bicycle stand up by itself? Because it was two-tired!\""
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"%%time\n",
"# The second time, it is in the cache, so it should be much faster\n",
"llm.invoke(\"Tell me a joke\")"
]
},
{
"cell_type": "markdown",
"id": "43626f33-d184-4260-b641-c9341cef5842",
"metadata": {},
"source": [
"### Couchbase Semantic Cache\n",
"Semantic caching allows users to retrieve cached prompts based on semantic similarity between the user input and previously cached inputs. Under the hood it uses Couchbase as both a cache and a vectorstore. This needs an appropriate Vector Search Index defined to work. Please look at the usage example on how to set up the index."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "6b470c03-d7fe-4270-89e1-638251619a53",
"metadata": {},
"outputs": [],
"source": [
"# Create Couchbase connection object\n",
"from datetime import timedelta\n",
"\n",
"from couchbase.auth import PasswordAuthenticator\n",
"from couchbase.cluster import Cluster\n",
"from couchbase.options import ClusterOptions\n",
"from langchain_couchbase.cache import CouchbaseSemanticCache\n",
"from langchain_openai import ChatOpenAI, OpenAIEmbeddings\n",
"\n",
"COUCHBASE_CONNECTION_STRING = (\n",
" \"couchbase://localhost\" # or \"couchbases://localhost\" if using TLS\n",
")\n",
"DB_USERNAME = \"Administrator\"\n",
"DB_PASSWORD = \"Password\"\n",
"\n",
"auth = PasswordAuthenticator(DB_USERNAME, DB_PASSWORD)\n",
"options = ClusterOptions(auth)\n",
"cluster = Cluster(COUCHBASE_CONNECTION_STRING, options)\n",
"\n",
"# Wait until the cluster is ready for use.\n",
"cluster.wait_until_ready(timedelta(seconds=5))"
]
},
{
"cell_type": "markdown",
"id": "f831bc4c-f330-4bd7-9b80-76771d91827e",
"metadata": {},
"source": [
"Notes:\n",
"- The search index for the semantic cache needs to be defined before using the semantic cache. \n",
"- The optional parameter, `score_threshold` in the Semantic Cache that you can use to tune the results of the semantic search.\n",
"\n",
"### How to Import an Index to the Full Text Search service?\n",
" - [Couchbase Server](https://docs.couchbase.com/server/current/search/import-search-index.html)\n",
" - Click on Search -> Add Index -> Import\n",
" - Copy the following Index definition in the Import screen\n",
" - Click on Create Index to create the index.\n",
" - [Couchbase Capella](https://docs.couchbase.com/cloud/search/import-search-index.html)\n",
" - Copy the index definition to a new file `index.json`\n",
" - Import the file in Capella using the instructions in the documentation.\n",
" - Click on Create Index to create the index.\n",
"\n",
"#### Example index for the vector search. \n",
" ```\n",
" {\n",
" \"type\": \"fulltext-index\",\n",
" \"name\": \"langchain-testing._default.semantic-cache-index\",\n",
" \"sourceType\": \"gocbcore\",\n",
" \"sourceName\": \"langchain-testing\",\n",
" \"planParams\": {\n",
" \"maxPartitionsPerPIndex\": 1024,\n",
" \"indexPartitions\": 16\n",
" },\n",
" \"params\": {\n",
" \"doc_config\": {\n",
" \"docid_prefix_delim\": \"\",\n",
" \"docid_regexp\": \"\",\n",
" \"mode\": \"scope.collection.type_field\",\n",
" \"type_field\": \"type\"\n",
" },\n",
" \"mapping\": {\n",
" \"analysis\": {},\n",
" \"default_analyzer\": \"standard\",\n",
" \"default_datetime_parser\": \"dateTimeOptional\",\n",
" \"default_field\": \"_all\",\n",
" \"default_mapping\": {\n",
" \"dynamic\": true,\n",
" \"enabled\": false\n",
" },\n",
" \"default_type\": \"_default\",\n",
" \"docvalues_dynamic\": false,\n",
" \"index_dynamic\": true,\n",
" \"store_dynamic\": true,\n",
" \"type_field\": \"_type\",\n",
" \"types\": {\n",
" \"_default.semantic-cache\": {\n",
" \"dynamic\": false,\n",
" \"enabled\": true,\n",
" \"properties\": {\n",
" \"embedding\": {\n",
" \"dynamic\": false,\n",
" \"enabled\": true,\n",
" \"fields\": [\n",
" {\n",
" \"dims\": 1536,\n",
" \"index\": true,\n",
" \"name\": \"embedding\",\n",
" \"similarity\": \"dot_product\",\n",
" \"type\": \"vector\",\n",
" \"vector_index_optimized_for\": \"recall\"\n",
" }\n",
" ]\n",
" },\n",
" \"metadata\": {\n",
" \"dynamic\": true,\n",
" \"enabled\": true\n",
" },\n",
" \"text\": {\n",
" \"dynamic\": false,\n",
" \"enabled\": true,\n",
" \"fields\": [\n",
" {\n",
" \"index\": true,\n",
" \"name\": \"text\",\n",
" \"store\": true,\n",
" \"type\": \"text\"\n",
" }\n",
" ]\n",
" }\n",
" }\n",
" }\n",
" }\n",
" },\n",
" \"store\": {\n",
" \"indexType\": \"scorch\",\n",
" \"segmentVersion\": 16\n",
" }\n",
" },\n",
" \"sourceParams\": {}\n",
" }\n",
" ```"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "ae0766c8-ea34-4604-b0dc-cf2bbe8077f4",
"metadata": {},
"outputs": [],
"source": [
"BUCKET_NAME = \"langchain-testing\"\n",
"SCOPE_NAME = \"_default\"\n",
"COLLECTION_NAME = \"semantic-cache\"\n",
"INDEX_NAME = \"semantic-cache-index\"\n",
"embeddings = OpenAIEmbeddings()\n",
"\n",
"cache = CouchbaseSemanticCache(\n",
" cluster=cluster,\n",
" embedding=embeddings,\n",
" bucket_name=BUCKET_NAME,\n",
" scope_name=SCOPE_NAME,\n",
" collection_name=COLLECTION_NAME,\n",
" index_name=INDEX_NAME,\n",
" score_threshold=0.8,\n",
")\n",
"\n",
"set_llm_cache(cache)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "a2e82743-10ea-4319-b43e-193475ae5449",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"The average lifespan of a dog is around 12 years, but this can vary depending on the breed, size, and overall health of the individual dog. Some smaller breeds may live longer, while larger breeds may have shorter lifespans. Proper care, diet, and exercise can also play a role in extending a dog's lifespan.\n",
"CPU times: user 826 ms, sys: 2.46 s, total: 3.28 s\n",
"Wall time: 2.87 s\n"
]
}
],
"source": [
"%%time\n",
"# The first time, it is not yet in the cache, so it should take longer\n",
"print(llm.invoke(\"How long do dogs live?\"))"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "c36f4e29-d872-4334-a1f1-0e6d10c5d9f2",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"The average lifespan of a dog is around 12 years, but this can vary depending on the breed, size, and overall health of the individual dog. Some smaller breeds may live longer, while larger breeds may have shorter lifespans. Proper care, diet, and exercise can also play a role in extending a dog's lifespan.\n",
"CPU times: user 9.82 ms, sys: 2.61 ms, total: 12.4 ms\n",
"Wall time: 311 ms\n"
]
}
],
"source": [
"%%time\n",
"# The second time, it is in the cache, so it should be much faster\n",
"print(llm.invoke(\"What is the expected lifespan of a dog?\"))"
]
},
{
"cell_type": "markdown",
"id": "ae1f5e1c-085e-4998-9f2d-b5867d2c3d5b",
@@ -2228,7 +2574,9 @@
"| langchain_core.caches | [InMemoryCache](https://api.python.langchain.com/en/latest/caches/langchain_core.caches.InMemoryCache.html) |\n",
"| langchain_elasticsearch.cache | [ElasticsearchCache](https://api.python.langchain.com/en/latest/cache/langchain_elasticsearch.cache.ElasticsearchCache.html) |\n",
"| langchain_mongodb.cache | [MongoDBAtlasSemanticCache](https://api.python.langchain.com/en/latest/cache/langchain_mongodb.cache.MongoDBAtlasSemanticCache.html) |\n",
"| langchain_mongodb.cache | [MongoDBCache](https://api.python.langchain.com/en/latest/cache/langchain_mongodb.cache.MongoDBCache.html) |\n"
"| langchain_mongodb.cache | [MongoDBCache](https://api.python.langchain.com/en/latest/cache/langchain_mongodb.cache.MongoDBCache.html) |\n",
"| langchain_couchbase.cache | [CouchbaseCache](https://api.python.langchain.com/en/latest/cache/langchain_couchbase.cache.CouchbaseCache.html) |\n",
"| langchain_couchbase.cache | [CouchbaseSemanticCache](https://api.python.langchain.com/en/latest/cache/langchain_couchbase.cache.CouchbaseSemanticCache.html) |\n"
]
},
{
@@ -2256,7 +2604,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.10.13"
}
},
"nbformat": 4,

View File

@@ -33,7 +33,7 @@
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet transformers --quiet"
"%pip install --upgrade --quiet transformers"
]
},
{

View File

@@ -50,8 +50,8 @@
"source": [
"import os\n",
"\n",
"from langchain.chains import LLMChain\n",
"from langchain_community.llms import PipelineAI\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import PromptTemplate"
]
},
@@ -123,7 +123,7 @@
"metadata": {},
"outputs": [],
"source": [
"llm_chain = LLMChain(prompt=prompt, llm=llm)"
"llm_chain = prompt | llm | StrOutputParser()"
]
},
{
@@ -142,7 +142,7 @@
"source": [
"question = \"What NFL team won the Super Bowl in the year Justin Beiber was born?\"\n",
"\n",
"llm_chain.run(question)"
"llm_chain.invoke(question)"
]
}
],

View File

@@ -27,7 +27,7 @@
"outputs": [],
"source": [
"# Install the package\n",
"%pip install --upgrade --quiet dashscope"
"%pip install --upgrade --quiet langchain-community dashscope"
]
},
{

View File

@@ -0,0 +1,325 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "a283d2fd-e26e-4811-a486-d3cf0ecf6749",
"metadata": {},
"source": [
"# Couchbase\n",
"> Couchbase is an award-winning distributed NoSQL cloud database that delivers unmatched versatility, performance, scalability, and financial value for all of your cloud, mobile, AI, and edge computing applications. Couchbase embraces AI with coding assistance for developers and vector search for their applications.\n",
"\n",
"This notebook goes over how to use the `CouchbaseChatMessageHistory` class to store the chat message history in a Couchbase cluster\n"
]
},
{
"cell_type": "markdown",
"id": "ff868a6c-3e17-4c3d-8d32-67b01f4d7bcc",
"metadata": {},
"source": [
"## Set Up Couchbase Cluster\n",
"To run this demo, you need a Couchbase Cluster. \n",
"\n",
"You can work with both [Couchbase Capella](https://www.couchbase.com/products/capella/) and your self-managed Couchbase Server."
]
},
{
"cell_type": "markdown",
"id": "41fa85e7-6968-45e4-a445-de305d80f332",
"metadata": {},
"source": [
"## Install Dependencies\n",
"`CouchbaseChatMessageHistory` lives inside the `langchain-couchbase` package. "
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "b744ca05-b8c6-458c-91df-f50ca2c20b3c",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install --upgrade --quiet langchain-couchbase"
]
},
{
"cell_type": "markdown",
"id": "41f29205-6452-493b-ba18-8a3b006bcca4",
"metadata": {},
"source": [
"## Create Couchbase Connection Object\n",
"We create a connection to the Couchbase cluster initially and then pass the cluster object to the Vector Store. \n",
"\n",
"Here, we are connecting using the username and password. You can also connect using any other supported way to your cluster. \n",
"\n",
"For more information on connecting to the Couchbase cluster, please check the [Python SDK documentation](https://docs.couchbase.com/python-sdk/current/hello-world/start-using-sdk.html#connect)."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "f394908e-f5fe-408a-84d7-b97fdebcfa26",
"metadata": {},
"outputs": [],
"source": [
"COUCHBASE_CONNECTION_STRING = (\n",
" \"couchbase://localhost\" # or \"couchbases://localhost\" if using TLS\n",
")\n",
"DB_USERNAME = \"Administrator\"\n",
"DB_PASSWORD = \"Password\""
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "ad4dce21-d80c-465a-b709-fd366ba5ce35",
"metadata": {},
"outputs": [],
"source": [
"from datetime import timedelta\n",
"\n",
"from couchbase.auth import PasswordAuthenticator\n",
"from couchbase.cluster import Cluster\n",
"from couchbase.options import ClusterOptions\n",
"\n",
"auth = PasswordAuthenticator(DB_USERNAME, DB_PASSWORD)\n",
"options = ClusterOptions(auth)\n",
"cluster = Cluster(COUCHBASE_CONNECTION_STRING, options)\n",
"\n",
"# Wait until the cluster is ready for use.\n",
"cluster.wait_until_ready(timedelta(seconds=5))"
]
},
{
"cell_type": "markdown",
"id": "e3d0210c-e2e6-437a-86f3-7397a1899fef",
"metadata": {},
"source": [
"We will now set the bucket, scope, and collection names in the Couchbase cluster that we want to use for storing the message history.\n",
"\n",
"Note that the bucket, scope, and collection need to exist before using them to store the message history."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "e8c7f846-a5c4-4465-a40e-4a9a23ac71bd",
"metadata": {},
"outputs": [],
"source": [
"BUCKET_NAME = \"langchain-testing\"\n",
"SCOPE_NAME = \"_default\"\n",
"COLLECTION_NAME = \"conversational_cache\""
]
},
{
"cell_type": "markdown",
"id": "283959e1-6af7-4768-9211-5b0facc6ef65",
"metadata": {},
"source": [
"## Usage\n",
"In order to store the messages, you need the following:\n",
"- Couchbase Cluster object: Valid connection to the Couchbase cluster\n",
"- bucket_name: Bucket in cluster to store the chat message history\n",
"- scope_name: Scope in bucket to store the message history\n",
"- collection_name: Collection in scope to store the message history\n",
"- session_id: Unique identifier for the session\n",
"\n",
"Optionally you can configure the following:\n",
"- session_id_key: Field in the chat message documents to store the `session_id`\n",
"- message_key: Field in the chat message documents to store the message content\n",
"- create_index: Used to specify if the index needs to be created on the collection. By default, an index is created on the `message_key` and the `session_id_key` of the documents"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "43c3b2d5-aae2-44a9-9e9f-f10adf054cfa",
"metadata": {},
"outputs": [],
"source": [
"from langchain_couchbase.chat_message_histories import CouchbaseChatMessageHistory\n",
"\n",
"message_history = CouchbaseChatMessageHistory(\n",
" cluster=cluster,\n",
" bucket_name=BUCKET_NAME,\n",
" scope_name=SCOPE_NAME,\n",
" collection_name=COLLECTION_NAME,\n",
" session_id=\"test-session\",\n",
")\n",
"\n",
"message_history.add_user_message(\"hi!\")\n",
"\n",
"message_history.add_ai_message(\"how are you doing?\")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "e7e348ef-79e9-481c-aeef-969ae03dea6a",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessage(content='hi!'), AIMessage(content='how are you doing?')]"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"message_history.messages"
]
},
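{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal sketch (not part of the original example), the optional parameters listed above can be passed to the same constructor. The field values below are illustrative assumptions, not documented defaults:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Hypothetical illustration: customize the fields used to store the history documents.\n",
"# The `session_id_key` and `message_key` values here are assumptions chosen for the example.\n",
"custom_history = CouchbaseChatMessageHistory(\n",
" cluster=cluster,\n",
" bucket_name=BUCKET_NAME,\n",
" scope_name=SCOPE_NAME,\n",
" collection_name=COLLECTION_NAME,\n",
" session_id=\"test-session\",\n",
" session_id_key=\"session_id\",\n",
" message_key=\"message\",\n",
" create_index=True, # create an index on the session_id_key and message_key fields\n",
")"
]
},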
{
"cell_type": "markdown",
"id": "c8b942a7-93fa-4cd9-8414-d047135c2733",
"metadata": {},
"source": [
"## Chaining\n",
"The chat message history class can be used with [LCEL Runnables](https://python.langchain.com/v0.2/docs/how_to/message_history/)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8a9f0d91-d1d6-481d-8137-ea11229f485a",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"from langchain_core.runnables.history import RunnableWithMessageHistory\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "946d45aa-5a61-49ae-816b-1c3949c56d9a",
"metadata": {},
"outputs": [],
"source": [
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", \"You are a helpful assistant.\"),\n",
" MessagesPlaceholder(variable_name=\"history\"),\n",
" (\"human\", \"{question}\"),\n",
" ]\n",
")\n",
"\n",
"# Create the LCEL runnable\n",
"chain = prompt | ChatOpenAI()"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "20dfd838-b549-42ed-b3ba-ac005f7e024c",
"metadata": {},
"outputs": [],
"source": [
"chain_with_history = RunnableWithMessageHistory(\n",
" chain,\n",
" lambda session_id: CouchbaseChatMessageHistory(\n",
" cluster=cluster,\n",
" bucket_name=BUCKET_NAME,\n",
" scope_name=SCOPE_NAME,\n",
" collection_name=COLLECTION_NAME,\n",
" session_id=session_id,\n",
" ),\n",
" input_messages_key=\"question\",\n",
" history_messages_key=\"history\",\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "17bd09f4-896d-433d-bb9a-369a06e7aa8a",
"metadata": {},
"outputs": [],
"source": [
"# This is where we configure the session id\n",
"config = {\"configurable\": {\"session_id\": \"testing\"}}"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "4bda1096-2fc2-40d7-a046-0d5d8e3a8f75",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Hello Bob! How can I assist you today?', response_metadata={'token_usage': {'completion_tokens': 10, 'prompt_tokens': 22, 'total_tokens': 32}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-a0f8a29e-ddf4-4e06-a1fe-cf8c325a2b72-0', usage_metadata={'input_tokens': 22, 'output_tokens': 10, 'total_tokens': 32})"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain_with_history.invoke({\"question\": \"Hi! I'm bob\"}, config=config)"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "1cfb31da-51bb-4c5f-909a-b7118b0ae08d",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Your name is Bob.', response_metadata={'token_usage': {'completion_tokens': 5, 'prompt_tokens': 43, 'total_tokens': 48}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-f764a9eb-999e-4042-96b6-fe47b7ae4779-0', usage_metadata={'input_tokens': 43, 'output_tokens': 5, 'total_tokens': 48})"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain_with_history.invoke({\"question\": \"Whats my name\"}, config=config)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.13"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -27,3 +27,85 @@ See a [usage example](/docs/integrations/document_loaders/couchbase).
```python
from langchain_community.document_loaders.couchbase import CouchbaseLoader
```
## LLM Caches
### CouchbaseCache
Use Couchbase as a cache for prompts and responses.
See a [usage example](/docs/integrations/llm_caching/#couchbase-cache).
To import this cache:
```python
from langchain_couchbase.cache import CouchbaseCache
```
To use this cache with your LLMs:
```python
from langchain_core.globals import set_llm_cache
cluster = couchbase_cluster_connection_object
set_llm_cache(
CouchbaseCache(
cluster=cluster,
bucket_name=BUCKET_NAME,
scope_name=SCOPE_NAME,
collection_name=COLLECTION_NAME,
)
)
```
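Once the cache is set, repeated identical prompts are served from Couchbase instead of calling the model again. A minimal sketch, assuming an `llm` object (for example an OpenAI model) has already been created:
```python
# The first call is not cached yet, so the model is invoked and the result stored in Couchbase.
llm.invoke("Tell me a joke")
# A second call with the same prompt is answered from the Couchbase cache and returns much faster.
llm.invoke("Tell me a joke")
```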
### CouchbaseSemanticCache
Semantic caching allows users to retrieve cached prompts based on the semantic similarity between the user input and previously cached inputs. Under the hood it uses Couchbase as both a cache and a vectorstore.
The CouchbaseSemanticCache needs a Search Index defined to work. Please look at the [usage example](/docs/integrations/vectorstores/couchbase) on how to set up the index.
See a [usage example](/docs/integrations/llm_caching/#couchbase-semantic-cache).
To import this cache:
```python
from langchain_couchbase.cache import CouchbaseSemanticCache
```
To use this cache with your LLMs:
```python
from langchain_core.globals import set_llm_cache
# use any embedding provider...
from langchain_openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
cluster = couchbase_cluster_connection_object
set_llm_cache(
CouchbaseSemanticCache(
cluster=cluster,
embedding=embeddings,
bucket_name=BUCKET_NAME,
scope_name=SCOPE_NAME,
collection_name=COLLECTION_NAME,
index_name=INDEX_NAME,
)
)
```
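With the semantic cache in place, prompts that are semantically similar to a previously cached prompt can also be answered from the cache. A minimal sketch, again assuming an existing `llm` object:
```python
# The first call populates the semantic cache.
llm.invoke("How long do dogs live?")
# A semantically similar prompt can now be served from the cache without another model call.
llm.invoke("What is the expected lifespan of a dog?")
```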
## Chat Message History
Use Couchbase as the storage for your chat messages.
See a [usage example](/docs/integrations/memory/couchbase_chat_message_history).
To use the chat message history in your applications:
```python
from langchain_couchbase.chat_message_histories import CouchbaseChatMessageHistory
message_history = CouchbaseChatMessageHistory(
cluster=cluster,
bucket_name=BUCKET_NAME,
scope_name=SCOPE_NAME,
collection_name=COLLECTION_NAME,
session_id="test-session",
)
message_history.add_user_message("hi!")
```

View File

@@ -61,7 +61,7 @@ When ready to deploy, you can self-host models with NVIDIA NIM—which is includ
```python
from langchain_nvidia_ai_endpoints import ChatNVIDIA, NVIDIAEmbeddings, NVIDIARerank
# connect to an chat NIM running at localhost:8000, specifyig a specific model
# connect to a chat NIM running at localhost:8000, specifying a model
llm = ChatNVIDIA(base_url="http://localhost:8000/v1", model="meta/llama3-8b-instruct")
# connect to an embedding NIM running at localhost:8080

View File

@@ -202,7 +202,7 @@ Prem Templates are also available for Streaming too.
## Prem Embeddings
In this section we are going to dicuss how we can get access to different embedding model using `PremEmbeddings` with LangChain. Lets start by importing our modules and setting our API Key.
In this section we cover how we can get access to different embedding models using `PremEmbeddings` with LangChain. Let's start by importing our modules and setting our API Key.
```python
import os

View File

@@ -21,7 +21,7 @@ whether for semantic search or example selection.
To import this vectorstore:
```python
from langchain_qdrant import Qdrant
from langchain_qdrant import QdrantVectorStore
```
For a more detailed walkthrough of the Qdrant wrapper, see [this notebook](/docs/integrations/vectorstores/qdrant)

View File

@@ -21,7 +21,7 @@ For example
```python
from langchain_openai import OpenAI
from langchain_core.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain_core.output_parsers import StrOutputParser
import os
os.environ['OPENAI_API_BASE'] = "https://shale.live/v1"
@@ -35,10 +35,11 @@ template = """Question: {question}
prompt = PromptTemplate.from_template(template)
llm_chain = LLMChain(prompt=prompt, llm=llm)
llm_chain = prompt | llm | StrOutputParser()
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
llm_chain.invoke(question)
```

View File

@@ -309,9 +309,9 @@
"documents = TextLoader(\"../../how_to/state_of_the_union.txt\").load()\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=100)\n",
"texts = text_splitter.split_documents(documents)\n",
"retriever = FAISS.from_documents(texts, CohereEmbeddings()).as_retriever(\n",
" search_kwargs={\"k\": 20}\n",
")\n",
"retriever = FAISS.from_documents(\n",
" texts, CohereEmbeddings(model=\"embed-english-v3.0\")\n",
").as_retriever(search_kwargs={\"k\": 20})\n",
"\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"docs = retriever.invoke(query)\n",
@@ -324,7 +324,8 @@
"metadata": {},
"source": [
"## Doing reranking with CohereRerank\n",
"Now let's wrap our base retriever with a `ContextualCompressionRetriever`. We'll add an `CohereRerank`, uses the Cohere rerank endpoint to rerank the returned results."
"Now let's wrap our base retriever with a `ContextualCompressionRetriever`. We'll add an `CohereRerank`, uses the Cohere rerank endpoint to rerank the returned results.\n",
"Do note that it is mandatory to specify the model name in CohereRerank!"
]
},
{
@@ -339,7 +340,7 @@
"from langchain_community.llms import Cohere\n",
"\n",
"llm = Cohere(temperature=0)\n",
"compressor = CohereRerank()\n",
"compressor = CohereRerank(model=\"rerank-english-v3.0\")\n",
"compression_retriever = ContextualCompressionRetriever(\n",
" base_compressor=compressor, base_retriever=retriever\n",
")\n",

View File

@@ -40,7 +40,9 @@
"metadata": {},
"outputs": [],
"source": [
"embeddings = CohereEmbeddings(model=\"embed-english-light-v3.0\")"
"embeddings = CohereEmbeddings(\n",
" model=\"embed-english-light-v3.0\"\n",
") # It is mandatory to pass a model parameter to initialize the CohereEmbeddings object"
]
},
{

View File

@@ -0,0 +1,174 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# TextEmbed - Embedding Inference Server\n",
"\n",
"TextEmbed is a high-throughput, low-latency REST API designed for serving vector embeddings. It supports a wide range of sentence-transformer models and frameworks, making it suitable for various applications in natural language processing.\n",
"\n",
"## Features\n",
"\n",
"- **High Throughput & Low Latency:** Designed to handle a large number of requests efficiently.\n",
"- **Flexible Model Support:** Works with various sentence-transformer models.\n",
"- **Scalable:** Easily integrates into larger systems and scales with demand.\n",
"- **Batch Processing:** Supports batch processing for better and faster inference.\n",
"- **OpenAI Compatible REST API Endpoint:** Provides an OpenAI compatible REST API endpoint.\n",
"- **Single Line Command Deployment:** Deploy multiple models via a single command for efficient deployment.\n",
"- **Support for Embedding Formats:** Supports binary, float16, and float32 embeddings formats for faster retrieval.\n",
"\n",
"## Getting Started\n",
"\n",
"### Prerequisites\n",
"\n",
"Ensure you have Python 3.10 or higher installed. You will also need to install the required dependencies."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Installation via PyPI\n",
"\n",
"1. **Install the required dependencies:**\n",
"\n",
" ```bash\n",
" pip install -U textembed\n",
" ```\n",
"\n",
"2. **Start the TextEmbed server with your desired models:**\n",
"\n",
" ```bash\n",
" python -m textembed.server --models sentence-transformers/all-MiniLM-L12-v2 --workers 4 --api-key TextEmbed \n",
" ```\n",
"\n",
"For more information, please read the [documentation](https://github.com/kevaldekivadiya2415/textembed/blob/main/docs/setup.md)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Import"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.embeddings import TextEmbedEmbeddings"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"embeddings = TextEmbedEmbeddings(\n",
" model=\"sentence-transformers/all-MiniLM-L12-v2\",\n",
" api_url=\"http://0.0.0.0:8000/v1\",\n",
" api_key=\"TextEmbed\",\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Embed your documents"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [],
"source": [
"# Define a list of documents\n",
"documents = [\n",
" \"Data science involves extracting insights from data.\",\n",
" \"Artificial intelligence is transforming various industries.\",\n",
" \"Cloud computing provides scalable computing resources over the internet.\",\n",
" \"Big data analytics helps in understanding large datasets.\",\n",
" \"India has a diverse cultural heritage.\",\n",
"]\n",
"\n",
"# Define a query\n",
"query = \"What is the cultural heritage of India?\""
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {},
"outputs": [],
"source": [
"# Embed all documents\n",
"document_embeddings = embeddings.embed_documents(documents)\n",
"\n",
"# Embed the query\n",
"query_embedding = embeddings.embed_query(query)"
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'Data science involves extracting insights from data.': 0.05121298956322118,\n",
" 'Artificial intelligence is transforming various industries.': -0.0060612142358469345,\n",
" 'Cloud computing provides scalable computing resources over the internet.': -0.04877402795301714,\n",
" 'Big data analytics helps in understanding large datasets.': 0.016582168576929422,\n",
" 'India has a diverse cultural heritage.': 0.7408992963028144}"
]
},
"execution_count": 25,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Compute Similarity\n",
"import numpy as np\n",
"\n",
"scores = np.array(document_embeddings) @ np.array(query_embedding).T\n",
"dict(zip(documents, scores))"
]
},
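{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an optional follow-up (not part of the original notebook), the highest-scoring document for the query can be picked out of the scores with `numpy.argmax`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Select the document with the highest similarity score for the query\n",
"best_match = documents[int(np.argmax(scores))]\n",
"print(best_match)"
]
},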
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "check10",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.14"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -22,7 +22,7 @@
"`InfobipAPIWrapper` uses name parameters where you can provide credentials:\n",
"\n",
"- `infobip_api_key` - [API Key](https://www.infobip.com/docs/essentials/api-authentication#api-key-header) that you can find in your [developer tools](https://portal.infobip.com/dev/api-keys)\n",
"- `infobip_base_url` - [Base url](https://www.infobip.com/docs/essentials/base-url) for Infobip API. You can use default value `https://api.infobip.com/`.\n",
"- `infobip_base_url` - [Base url](https://www.infobip.com/docs/essentials/base-url) for Infobip API. You can use the default value `https://api.infobip.com/`.\n",
"\n",
"You can also provide `infobip_api_key` and `infobip_base_url` as environment variables `INFOBIP_API_KEY` and `INFOBIP_BASE_URL`."
]
@@ -60,7 +60,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Sending a Email"
"## Sending an Email"
]
},
{

View File

@@ -0,0 +1,183 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "7d143c73",
"metadata": {},
"source": [
"# Riza Code Interpreter\n",
"\n",
"> The Riza Code Interpreter is a WASM-based isolated environment for running Python or JavaScript generated by AI agents.\n",
"\n",
"In this notebook we'll create an example of an agent that uses Python to solve a problem that an LLM can't solve on its own:\n",
"counting the number of 'r's in the word \"strawberry.\"\n",
"\n",
"Before you get started grab an API key from the [Riza dashboard](https://dashboard.riza.io). For more guides and a full API reference\n",
"head over to the [Riza Code Interpreter API documentation](https://docs.riza.io)."
]
},
{
"cell_type": "markdown",
"id": "894aa87a",
"metadata": {},
"source": [
"Make sure you have the necessary dependencies installed."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8265cf7f",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-community rizaio"
]
},
{
"cell_type": "markdown",
"id": "e085eb51",
"metadata": {},
"source": [
"Set up your API keys as an environment variable."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "45ba8936",
"metadata": {},
"outputs": [],
"source": [
"%env ANTHROPIC_API_KEY=<your_anthropic_api_key_here>\n",
"%env RIZA_API_KEY=<your_riza_api_key_here>"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "efe26fd9-6e33-4f5f-b49b-ea74fa6c4915",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.tools.riza.command import ExecPython"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "cd5b952e",
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import AgentExecutor, create_tool_calling_agent\n",
"from langchain_anthropic import ChatAnthropic\n",
"from langchain_core.prompts import ChatPromptTemplate"
]
},
{
"cell_type": "markdown",
"id": "7bd0b610",
"metadata": {},
"source": [
"Initialize the `ExecPython` tool."
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "32f1543f",
"metadata": {},
"outputs": [],
"source": [
"tools = [ExecPython()]"
]
},
{
"cell_type": "markdown",
"id": "24f952d5",
"metadata": {},
"source": [
"Initialize an agent using Anthropic's Claude Haiku model."
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "71831ea8",
"metadata": {},
"outputs": [],
"source": [
"llm = ChatAnthropic(model=\"claude-3-haiku-20240307\", temperature=0)\n",
"\n",
"prompt_template = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant. Make sure to use a tool if you need to solve a problem.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" (\"placeholder\", \"{agent_scratchpad}\"),\n",
" ]\n",
")\n",
"\n",
"agent = create_tool_calling_agent(llm, tools, prompt_template)\n",
"agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 34,
"id": "36b24036",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\n",
"Invoking: `riza_exec_python` with `{'code': 'word = \"strawberry\"\\nprint(word.count(\"r\"))'}`\n",
"responded: [{'id': 'toolu_01JwPLAAqqCNCjVuEnK8Fgut', 'input': {}, 'name': 'riza_exec_python', 'type': 'tool_use', 'index': 0, 'partial_json': '{\"code\": \"word = \\\\\"strawberry\\\\\"\\\\nprint(word.count(\\\\\"r\\\\\"))\"}'}]\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3m3\n",
"\u001b[0m\u001b[32;1m\u001b[1;3m[{'text': '\\n\\nThe word \"strawberry\" contains 3 \"r\" characters.', 'type': 'text', 'index': 0}]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"\n",
"The word \"strawberry\" contains 3 \"r\" characters.\n"
]
}
],
"source": [
"# Ask a tough question\n",
"result = agent_executor.invoke({\"input\": \"how many rs are in strawberry?\"})\n",
"print(result[\"output\"][0][\"text\"])"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.3"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -0,0 +1,310 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "683953b3",
"metadata": {},
"source": [
"# ApertureDB\n",
"\n",
"[ApertureDB](https://docs.aperturedata.io) is a database that stores, indexes, and manages multi-modal data like text, images, videos, bounding boxes, and embeddings, together with their associated metadata.\n",
"\n",
"This notebook explains how to use the embeddings functionality of ApertureDB."
]
},
{
"cell_type": "markdown",
"id": "e7393beb",
"metadata": {},
"source": [
"## Install ApertureDB Python SDK\n",
"\n",
"This installs the [Python SDK](https://docs.aperturedata.io/category/aperturedb-python-sdk) used to write client code for ApertureDB."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "a62cff8a-bcf7-4e33-bbbc-76999c2e3e20",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install --upgrade --quiet aperturedb"
]
},
{
"cell_type": "markdown",
"id": "4fe12f77",
"metadata": {},
"source": [
"## Run an ApertureDB instance\n",
"\n",
"To continue, you should have an [ApertureDB instance up and running](https://docs.aperturedata.io/HowToGuides/start/Setup) and configure your environment to use it. \n",
"There are various ways to do that, for example:\n",
"\n",
"```bash\n",
"docker run --publish 55555:55555 aperturedata/aperturedb-standalone\n",
"adb config create local --active --no-interactive\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "667eabca",
"metadata": {},
"source": [
"## Download some web documents\n",
"We're going to do a mini-crawl here of one web page."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "0798dfdb",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"USER_AGENT environment variable not set, consider setting it to identify your requests.\n"
]
}
],
"source": [
"# For loading documents from web\n",
"from langchain_community.document_loaders import WebBaseLoader\n",
"\n",
"loader = WebBaseLoader(\"https://docs.aperturedata.io\")\n",
"docs = loader.load()"
]
},
{
"cell_type": "markdown",
"id": "5f077d11",
"metadata": {},
"source": [
"## Select embeddings model\n",
"\n",
"We want to use OllamaEmbeddings so we have to import the necessary modules.\n",
"\n",
"Ollama can be set up as a docker container as described in the [documentation](https://hub.docker.com/r/ollama/ollama), for example:\n",
"```bash\n",
"# Run server\n",
"docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama\n",
"# Tell server to load a specific model\n",
"docker exec ollama ollama run llama2\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "8b6ed9cd-81b9-46e5-9c20-5aafca2844d0",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain_community.embeddings import OllamaEmbeddings\n",
"\n",
"embeddings = OllamaEmbeddings()"
]
},
{
"cell_type": "markdown",
"id": "b7b313e6",
"metadata": {},
"source": [
"## Split documents into segments\n",
"\n",
"We want to turn our single document into multiple segments."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "3c4b7b31",
"metadata": {},
"outputs": [],
"source": [
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"\n",
"text_splitter = RecursiveCharacterTextSplitter()\n",
"documents = text_splitter.split_documents(docs)"
]
},
{
"cell_type": "markdown",
"id": "46339d32",
"metadata": {},
"source": [
"## Create vectorstore from documents and embeddings\n",
"\n",
"This code creates a vectorstore in the ApertureDB instance.\n",
"Within the instance, this vectorstore is represented as a \"[descriptor set](https://docs.aperturedata.io/category/descriptorset-commands)\".\n",
"By default, the descriptor set is named `langchain`. The following code will generate embeddings for each document and store them in ApertureDB as descriptors. This will take a few seconds as the embeddings are bring generated."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "dcf88bdf",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain_community.vectorstores import ApertureDB\n",
"\n",
"vector_db = ApertureDB.from_documents(documents, embeddings)"
]
},
{
"cell_type": "markdown",
"id": "7672877b",
"metadata": {},
"source": [
"## Select a large language model\n",
"\n",
"Again, we use the Ollama server we set up for local processing."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "9a005e4b",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.llms import Ollama\n",
"\n",
"llm = Ollama(model=\"llama2\")"
]
},
{
"cell_type": "markdown",
"id": "cd54f2ad",
"metadata": {},
"source": [
"## Build a RAG chain\n",
"\n",
"Now we have all the components we need to create a RAG (Retrieval-Augmented Generation) chain. This chain does the following:\n",
"1. Generate embedding descriptor for user query\n",
"2. Find text segments that are similar to the user query using the vector store\n",
"3. Pass user query and context documents to the LLM using a prompt template\n",
"4. Return the LLM's answer"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "a8c513ab",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Based on the provided context, ApertureDB can store images. In fact, it is specifically designed to manage multimodal data such as images, videos, documents, embeddings, and associated metadata including annotations. So, ApertureDB has the capability to store and manage images.\n"
]
}
],
"source": [
"# Create prompt\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_template(\"\"\"Answer the following question based only on the provided context:\n",
"\n",
"<context>\n",
"{context}\n",
"</context>\n",
"\n",
"Question: {input}\"\"\")\n",
"\n",
"\n",
"# Create a chain that passes documents to an LLM\n",
"from langchain.chains.combine_documents import create_stuff_documents_chain\n",
"\n",
"document_chain = create_stuff_documents_chain(llm, prompt)\n",
"\n",
"\n",
"# Treat the vectorstore as a document retriever\n",
"retriever = vector_db.as_retriever()\n",
"\n",
"\n",
"# Create a RAG chain that connects the retriever to the LLM\n",
"from langchain.chains import create_retrieval_chain\n",
"\n",
"retrieval_chain = create_retrieval_chain(retriever, document_chain)"
]
},
{
"cell_type": "markdown",
"id": "3bc6a882",
"metadata": {},
"source": [
"## Run the RAG chain\n",
"\n",
"Finally we pass a question to the chain and get our answer. This will take a few seconds to run as the LLM generates an answer from the query and context documents."
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "020f29f1",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Based on the provided context, ApertureDB can store images in several ways:\n",
"\n",
"1. Multimodal data management: ApertureDB offers a unified interface to manage multimodal data such as images, videos, documents, embeddings, and associated metadata including annotations. This means that images can be stored along with other types of data in a single database instance.\n",
"2. Image storage: ApertureDB provides image storage capabilities through its integration with the public cloud providers or on-premise installations. This allows customers to host their own ApertureDB instances and store images on their preferred cloud provider or on-premise infrastructure.\n",
"3. Vector database: ApertureDB also offers a vector database that enables efficient similarity search and classification of images based on their semantic meaning. This can be useful for applications where image search and classification are important, such as in computer vision or machine learning workflows.\n",
"\n",
"Overall, ApertureDB provides flexible and scalable storage options for images, allowing customers to choose the deployment model that best suits their needs.\n"
]
}
],
"source": [
"user_query = \"How can ApertureDB store images?\"\n",
"response = retrieval_chain.invoke({\"input\": user_query})\n",
"print(response[\"answer\"])"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -169,6 +169,23 @@
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Specify additional properties for the Azure client such as the following https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/core/azure-core/README.md#configurations\n",
"vector_store: AzureSearch = AzureSearch(\n",
" azure_search_endpoint=vector_store_address,\n",
" azure_search_key=vector_store_password,\n",
" index_name=index_name,\n",
" embedding_function=embeddings.embed_query,\n",
" # Configure max retries for the Azure client\n",
" additional_search_client_options={\"retry_total\": 4},\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},

View File

@@ -174,7 +174,10 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Similarity search"
"## Similarity search\n",
"Optional keyword arguments to similarity_search include specifying k number of documents to retrive, \n",
"a filters dictionary for metadata filtering based on [this syntax](https://docs.databricks.com/en/generative-ai/create-query-vector-search.html#use-filters-on-queries),\n",
"as well as the [query_type](https://api-docs.databricks.com/python/vector-search/databricks.vector_search.html#databricks.vector_search.index.VectorSearchIndex.similarity_search) which can be ANN or HYBRID "
]
},
{
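For illustration, a minimal sketch of how those optional keyword arguments might be passed. The `vector_store` object and the "source" metadata field are assumptions (not shown in this diff), and the keyword names follow the notebook text above, so check them against your installed `DatabricksVectorSearch` version.

```python
# Sketch only: `vector_store` is assumed to be an existing DatabricksVectorSearch
# wrapper, and "source" an illustrative metadata column.
results = vector_store.similarity_search(
    "What is a vector search index?",
    k=5,                          # number of documents to retrieve
    filters={"source": "docs"},   # metadata filter (Databricks filter syntax)
    query_type="HYBRID",          # "ANN" (default) or "HYBRID"
)
for doc in results:
    print(doc.page_content[:80])
```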

View File

@@ -78,7 +78,7 @@
"# See docker command above to launch a postgres instance with pgvector enabled.\n",
"connection = \"postgresql+psycopg://langchain:langchain@localhost:6024/langchain\" # Uses psycopg3!\n",
"collection_name = \"my_docs\"\n",
"embeddings = CohereEmbeddings()\n",
"embeddings = CohereEmbeddings(model=\"embed-english-v3.0\")\n",
"\n",
"vectorstore = PGVector(\n",
" embeddings=embeddings,\n",

View File

@@ -8,14 +8,15 @@
"source": [
"# Qdrant\n",
"\n",
">[Qdrant](https://qdrant.tech/documentation/) (read: quadrant ) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. `Qdrant` is tailored to extended filtering support. It makes it useful for all sorts of neural network or semantic-based matching, faceted search, and other applications.\n",
[Qdrant](https://qdrant.tech">
">[Qdrant](https://qdrant.tech/documentation/) (read: quadrant) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage vectors with additional payload and extended filtering support. This makes it useful for all sorts of neural network or semantic-based matching, faceted search, and other applications.\n",
"\n",
"This documentation demonstrates how to use Qdrant with Langchain for dense/sparse and hybrid retrieval.\n",
"\n",
"This notebook shows how to use functionality related to the `Qdrant` vector database. \n",
"> This page documents the `QdrantVectorStore` class that supports multiple retrieval modes via Qdrant's new [Query API](https://qdrant.tech/blog/qdrant-1.10.x/). It requires you to run Qdrant v1.10.0 or above.\n",
"\n",
"There are various modes of how to run `Qdrant`, and depending on the chosen one, there will be some subtle differences. The options include:\n",
"- Local mode, no server required\n",
"- On-premise server deployment\n",
"- Docker deployments\n",
"- Qdrant Cloud\n",
"\n",
"See the [installation instructions](https://qdrant.tech/documentation/install/)."
@@ -30,7 +31,7 @@
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-qdrant langchain-openai langchain langchain-community"
"%pip install langchain-qdrant langchain-openai langchain"
]
},
{
@@ -39,30 +40,7 @@
"id": "7b2f111b-357a-4f42-9730-ef0603bdc1b5",
"metadata": {},
"source": [
"We want to use `OpenAIEmbeddings` so we have to get the OpenAI API Key."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "082e7e8b-ac52-430c-98d6-8f0924457642",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"OpenAI API Key: ········\n"
]
}
],
"source": [
"import getpass\n",
"import os\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"OpenAI API Key:\")"
"We will use `OpenAIEmbeddings` for demonstration."
]
},
{
@@ -80,7 +58,7 @@
"source": [
"from langchain_community.document_loaders import TextLoader\n",
"from langchain_openai import OpenAIEmbeddings\n",
"from langchain_qdrant import Qdrant\n",
"from langchain_qdrant import QdrantVectorStore\n",
"from langchain_text_splitters import CharacterTextSplitter"
]
},
@@ -97,7 +75,7 @@
},
"outputs": [],
"source": [
"loader = TextLoader(\"../../how_to/state_of_the_union.txt\")\n",
"loader = TextLoader(\"some-file.txt\")\n",
"documents = loader.load()\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"docs = text_splitter.split_documents(documents)\n",
@@ -115,7 +93,7 @@
"\n",
"### Local mode\n",
"\n",
"Python client allows you to run the same code in local mode without running the Qdrant server. That's great for testing things out and debugging or if you plan to store just a small amount of vectors. The embeddings might be fully kepy in memory or persisted on disk.\n",
"Python client allows you to run the same code in local mode without running the Qdrant server. That's great for testing things out and debugging or storing just a small amount of vectors. The embeddings might be fully kept in memory or persisted on disk.\n",
"\n",
"#### In-memory\n",
"\n",
@@ -135,7 +113,7 @@
},
"outputs": [],
"source": [
"qdrant = Qdrant.from_documents(\n",
"qdrant = QdrantVectorStore.from_documents(\n",
" docs,\n",
" embeddings,\n",
" location=\":memory:\", # Local mode with in-memory storage only\n",
@@ -151,7 +129,7 @@
"source": [
"#### On-disk storage\n",
"\n",
"Local mode, without using the Qdrant server, may also store your vectors on disk so they're persisted between runs."
"Local mode, without using the Qdrant server, may also store your vectors on disk so they persist between runs."
]
},
{
@@ -167,7 +145,7 @@
},
"outputs": [],
"source": [
"qdrant = Qdrant.from_documents(\n",
"qdrant = QdrantVectorStore.from_documents(\n",
" docs,\n",
" embeddings,\n",
" path=\"/tmp/local_qdrant\",\n",
@@ -199,7 +177,7 @@
"outputs": [],
"source": [
"url = \"<---qdrant url here --->\"\n",
"qdrant = Qdrant.from_documents(\n",
"qdrant = QdrantVectorStore.from_documents(\n",
" docs,\n",
" embeddings,\n",
" url=url,\n",
@@ -233,7 +211,7 @@
"source": [
"url = \"<---qdrant cloud cluster url here --->\"\n",
"api_key = \"<---api key here--->\"\n",
"qdrant = Qdrant.from_documents(\n",
"qdrant = QdrantVectorStore.from_documents(\n",
" docs,\n",
" embeddings,\n",
" url=url,\n",
@@ -266,7 +244,7 @@
"metadata": {},
"outputs": [],
"source": [
"qdrant = Qdrant.from_existing_collection(\n",
"qdrant = QdrantVectorStore.from_existing_collection(\n",
" embeddings=embeddings,\n",
" collection_name=\"my_documents\",\n",
" url=\"http://localhost:6333\",\n",
@@ -297,7 +275,7 @@
"outputs": [],
"source": [
"url = \"<---qdrant url here --->\"\n",
"qdrant = Qdrant.from_documents(\n",
"qdrant = QdrantVectorStore.from_documents(\n",
" docs,\n",
" embeddings,\n",
" url=url,\n",
@@ -320,12 +298,31 @@
"source": [
"## Similarity search\n",
"\n",
"The simplest scenario for using Qdrant vector store is to perform a similarity search. Under the hood, our query will be encoded with the `embedding_function` and used to find similar documents in Qdrant collection."
"The simplest scenario for using Qdrant vector store is to perform a similarity search. Under the hood, our query will be encoded into vector embeddings and used to find similar documents in Qdrant collection.\n",
"\n",
"`QdrantVectorStore` supports 3 modes for similarity searches. They can be configured using the `retrieval_mode` parameter when setting up the class.\n",
"\n",
"- Dense Vector Search(Default)\n",
"- Sparse Vector Search\n",
"- Hybrid Search"
]
},
{
"cell_type": "markdown",
"id": "b3a78d46",
"metadata": {},
"source": [
"### Dense Vector Search\n",
"\n",
"To search with only dense vectors,\n",
"\n",
"- The `retrieval_mode` parameter should be set to `RetrievalMode.DENSE`(default).\n",
"- A [dense embeddings](https://python.langchain.com/v0.2/docs/integrations/text_embedding/) value should be provided for the `embedding` parameter."
]
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": null,
"id": "a8c513ab",
"metadata": {
"ExecuteTime": {
@@ -336,38 +333,108 @@
},
"outputs": [],
"source": [
"from langchain_qdrant import RetrievalMode\n",
"\n",
"qdrant = QdrantVectorStore.from_documents(\n",
" docs,\n",
" embedding=embeddings,\n",
" location=\":memory:\",\n",
" collection_name=\"my_documents\",\n",
" retrieval_mode=RetrievalMode.DENSE,\n",
")\n",
"\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"found_docs = qdrant.similarity_search(query)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "fc516993",
"metadata": {
"ExecuteTime": {
"end_time": "2023-04-04T10:51:25.220984Z",
"start_time": "2023-04-04T10:51:25.213943Z"
},
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while youre at it, pass the Disclose Act so Americans can know who is funding our elections. \n",
"\n",
"Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n",
"\n",
"One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n",
"\n",
"And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence.\n"
]
}
],
"cell_type": "markdown",
"id": "dbd93d85",
"metadata": {},
"source": [
"print(found_docs[0].page_content)"
"### Sparse Vector Search\n",
"\n",
"To search with only sparse vectors,\n",
"\n",
"- The `retrieval_mode` parameter should be set to `RetrievalMode.SPARSE`.\n",
"- An implementation of the [`SparseEmbeddings`](https://github.com/langchain-ai/langchain/blob/master/libs/partners/qdrant/langchain_qdrant/sparse_embeddings.py) interface using any sparse embeddings provider has to be provided as value to the `sparse_embedding` parameter.\n",
"\n",
"The `langchain-qdrant` package provides a [FastEmbed](https://github.com/qdrant/fastembed) based implementation out of the box.\n",
"\n",
"To use it, install the FastEmbed package."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ceb493a3",
"metadata": {},
"outputs": [],
"source": [
"%pip install fastembed"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "052e3412",
"metadata": {},
"outputs": [],
"source": [
"from langchain_qdrant import FastEmbedSparse, RetrievalMode\n",
"\n",
"sparse_embeddings = FastEmbedSparse(model_name=\"Qdrant/BM25\")\n",
"\n",
"qdrant = QdrantVectorStore.from_documents(\n",
" docs,\n",
" sparse_embedding=sparse_embeddings,\n",
" location=\":memory:\",\n",
" collection_name=\"my_documents\",\n",
" retrieval_mode=RetrievalMode.SPARSE,\n",
")\n",
"\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"found_docs = qdrant.similarity_search(query)"
]
},
{
"cell_type": "markdown",
"id": "f4b6c456",
"metadata": {},
"source": [
"### Hybrid Vector Search\n",
"\n",
"To perform a hybrid search using dense and sparse vectors with score fusion,\n",
"\n",
"- The `retrieval_mode` parameter should be set to `RetrievalMode.HYBRID`.\n",
"- A [dense embeddings](https://python.langchain.com/v0.2/docs/integrations/text_embedding/) value should be provided for the `embedding` parameter.\n",
"- An implementation of the [`SparseEmbeddings`](https://github.com/langchain-ai/langchain/blob/master/libs/partners/qdrant/langchain_qdrant/sparse_embeddings.py) interface using any sparse embeddings provider has to be provided as value to the `sparse_embedding` parameter.\n",
"\n",
"Note that if you've added documents with the `HYBRID` mode, you can switch to any retrieval mode when searching. Since both the dense and sparse vectors are available in the collection."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ce56f6e9",
"metadata": {},
"outputs": [],
"source": [
"from langchain_qdrant import FastEmbedSparse, RetrievalMode\n",
"\n",
"sparse_embeddings = FastEmbedSparse(model_name=\"Qdrant/BM25\")\n",
"\n",
"qdrant = QdrantVectorStore.from_documents(\n",
" docs,\n",
" embedding=embeddings,\n",
" sparse_embedding=sparse_embeddings,\n",
" location=\":memory:\",\n",
" collection_name=\"my_documents\",\n",
" retrieval_mode=RetrievalMode.HYBRID,\n",
")\n",
"\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"found_docs = qdrant.similarity_search(query)"
]
},
{
@@ -400,7 +467,7 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": null,
"id": "756a6887",
"metadata": {
"ExecuteTime": {
@@ -408,23 +475,7 @@
"start_time": "2023-04-04T10:51:25.635947Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while youre at it, pass the Disclose Act so Americans can know who is funding our elections. \n",
"\n",
"Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n",
"\n",
"One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n",
"\n",
"And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence.\n",
"\n",
"Score: 0.8153784913324512\n"
]
}
],
"outputs": [],
"source": [
"document, score = found_docs[0]\n",
"print(document.page_content)\n",
@@ -449,10 +500,10 @@
"metadata": {},
"source": [
"```python\n",
"from qdrant_client.http import models as rest\n",
"from qdrant_client.http import models\n",
"\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"found_docs = qdrant.similarity_search_with_score(query, filter=rest.Filter(...))\n",
"found_docs = qdrant.similarity_search_with_score(query, filter=models.Filter(...))\n",
"```"
]
},
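As a hedged sketch of what the elided `models.Filter(...)` might contain, the snippet below filters on a document's `source` metadata field. It assumes the `qdrant` store created above and the default `metadata` payload key; the field name and value are illustrative.

```python
from qdrant_client.http import models

# Illustrative filter: only return chunks whose metadata "source" matches a file.
metadata_filter = models.Filter(
    must=[
        models.FieldCondition(
            key="metadata.source",
            match=models.MatchValue(value="some-file.txt"),
        )
    ]
)

query = "What did the president say about Ketanji Brown Jackson"
found_docs = qdrant.similarity_search_with_score(query, filter=metadata_filter)
```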
@@ -469,7 +520,9 @@
"source": [
"## Maximum marginal relevance search (MMR)\n",
"\n",
"If you'd like to look up for some similar documents, but you'd also like to receive diverse results, MMR is method you should consider. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents."
"If you'd like to look up some similar documents, but you'd also like to receive diverse results, MMR is the method you should consider. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.\n",
"\n",
"Note that MMR search is only available if you've added documents with `DENSE` or `HYBRID` modes. Since it requires dense vectors."
]
},
{
@@ -490,7 +543,7 @@
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": null,
"id": "80c6db11",
"metadata": {
"ExecuteTime": {
@@ -498,40 +551,7 @@
"start_time": "2023-04-04T10:51:26.013329Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"1. Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while youre at it, pass the Disclose Act so Americans can know who is funding our elections. \n",
"\n",
"Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n",
"\n",
"One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n",
"\n",
"And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence. \n",
"\n",
"2. We cant change how divided weve been. But we can change how we move forward—on COVID-19 and other issues we must face together. \n",
"\n",
"I recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. \n",
"\n",
"They were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. \n",
"\n",
"Officer Mora was 27 years old. \n",
"\n",
"Officer Rivera was 22. \n",
"\n",
"Both Dominican Americans whod grown up on the same streets they later chose to patrol as police officers. \n",
"\n",
"I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves. \n",
"\n",
"Ive worked on these issues a long time. \n",
"\n",
"I know what works: Investing in crime prevention and community police officers wholl walk the beat, wholl know the neighborhood, and who can restore trust and safety. \n",
"\n"
]
}
],
"outputs": [],
"source": [
"for i, doc in enumerate(found_docs):\n",
" print(f\"{i + 1}.\", doc.page_content, \"\\n\")"
@@ -545,7 +565,7 @@
"source": [
"## Qdrant as a Retriever\n",
"\n",
"Qdrant, as all the other vector stores, is a LangChain Retriever, by using cosine similarity. "
"Qdrant, as all the other vector stores, is a LangChain Retriever. "
]
},
{
@@ -589,7 +609,7 @@
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": null,
"id": "f3c70c31",
"metadata": {
"ExecuteTime": {
@@ -597,18 +617,7 @@
"start_time": "2023-04-04T10:51:26.046407Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while youre at it, pass the Disclose Act so Americans can know who is funding our elections. \\n\\nTonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \\n\\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \\n\\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'})"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"retriever.invoke(query)[0]"
@@ -622,11 +631,11 @@
"source": [
"## Customizing Qdrant\n",
"\n",
"There are some options to use an existing Qdrant collection within your Langchain application. In such cases you may need to define how to map Qdrant point into the Langchain `Document`.\n",
"There are options to use an existing Qdrant collection within your Langchain application. In such cases, you may need to define how to map Qdrant point into the Langchain `Document`.\n",
"\n",
"### Named vectors\n",
"\n",
"Qdrant supports [multiple vectors per point](https://qdrant.tech/documentation/concepts/collections/#collection-with-multiple-vectors) by named vectors. Langchain requires just a single embedding per document and, by default, uses a single vector. However, if you work with a collection created externally or want to have the named vector used, you can configure it by providing its name.\n"
"Qdrant supports [multiple vectors per point](https://qdrant.tech/documentation/concepts/collections/#collection-with-multiple-vectors) by named vectors. If you work with a collection created externally or want to have the differently named vector used, you can configure it by providing its name.\n"
]
},
{
@@ -638,25 +647,18 @@
},
"outputs": [],
"source": [
"Qdrant.from_documents(\n",
"QdrantVectorStore.from_documents(\n",
" docs,\n",
" embeddings,\n",
" embedding=embeddings,\n",
" sparse_embedding=sparse_embeddings,\n",
" location=\":memory:\",\n",
" collection_name=\"my_documents_2\",\n",
" retrieval_mode=RetrievalMode.HYBRID,\n",
" vector_name=\"custom_vector\",\n",
" sparse_vector_name=\"custom_sparse_vector\",\n",
")"
]
},
{
"cell_type": "markdown",
"id": "b34f5230",
"metadata": {
"collapsed": false
},
"source": [
"As a Langchain user, you won't see any difference whether you use named vectors or not. Qdrant integration will handle the conversion under the hood."
]
},
{
"cell_type": "markdown",
"id": "b2350093",
@@ -694,7 +696,7 @@
},
"outputs": [],
"source": [
"Qdrant.from_documents(\n",
"QdrantVectorStore.from_documents(\n",
" docs,\n",
" embeddings,\n",
" location=\":memory:\",\n",
@@ -729,7 +731,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
"version": "3.11.8"
}
},
"nbformat": 4,

View File

@@ -22,6 +22,7 @@ import useBaseUrl from '@docusaurus/useBaseUrl';
light: useBaseUrl('/svg/langchain_stack_062024.svg'),
dark: useBaseUrl('/svg/langchain_stack_062024_dark.svg'),
}}
style={{ width: "100%" }}
title="LangChain Framework Overview"
/>

View File

@@ -107,7 +107,7 @@
"```\n",
"## Preview\n",
"\n",
"In this guide well build a QA app over as website. The specific website we will use is the [LLM Powered Autonomous\n",
"In this guide well build an app that answers questions about the content of a website. The specific website we will use is the [LLM Powered Autonomous\n",
"Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) blog post\n",
"by Lilian Weng, which allows us to ask questions about the contents of\n",
"the post.\n",

View File

@@ -269,7 +269,7 @@
"id": "b4991642-7275-40a9-b11a-e3beccbf2614",
"metadata": {},
"source": [
"Return documents based on similarity to a embedded query:"
"Return documents based on similarity to an embedded query:"
]
},
{

File diff suppressed because one or more lines are too long

Before

Width:  |  Height:  |  Size: 165 KiB

After

Width:  |  Height:  |  Size: 158 KiB

File diff suppressed because one or more lines are too long

Before

Width:  |  Height:  |  Size: 165 KiB

After

Width:  |  Height:  |  Size: 158 KiB

View File

@@ -25,6 +25,7 @@ fireworks-ai>=0.9.0,<0.10
friendli-client>=1.2.4,<2
geopandas>=0.13.1
gitpython>=3.1.32,<4
gliner>=0.2.7
google-cloud-documentai>=2.20.1,<3
gql>=3.4.1,<4
gradientai>=1.4.0,<2
@@ -37,6 +38,7 @@ javelin-sdk>=0.1.8,<0.2
jinja2>=3,<4
jq>=1.4.1,<2
jsonschema>1
keybert>=0.8.5
lxml>=4.9.3,<6.0
markdownify>=0.11.6,<0.12
motor>=3.3.1,<4
@@ -60,7 +62,7 @@ psychicapi>=0.8.0,<0.9
py-trello>=0.19.0,<0.20
pyjwt>=2.8.0,<3
pymupdf>=1.22.3,<2
pypdf>=3.4.0,<4
pypdf>=3.4.0,<5
pypdfium2>=4.10.0,<5
pyspark>=3.4.0,<4
rank-bm25>=0.2.2,<0.3
@@ -72,6 +74,7 @@ rspace_client>=2.5.0,<3
scikit-learn>=1.2.2,<2
simsimd>=4.3.1,<5
sqlite-vss>=0.1.2,<0.2
sseclient-py>=1.8.0,<2
streamlit>=1.18.0,<2
sympy>=1.12,<2
telethon>=1.28.5,<2

View File

@@ -292,17 +292,21 @@ def _create_api_controller_agent(
)
if "DELETE" in allowed_operations:
delete_llm_chain = LLMChain(llm=llm, prompt=PARSING_DELETE_PROMPT)
RequestsDeleteToolWithParsing( # type: ignore[call-arg]
requests_wrapper=requests_wrapper,
llm_chain=delete_llm_chain,
allow_dangerous_requests=allow_dangerous_requests,
tools.append(
RequestsDeleteToolWithParsing( # type: ignore[call-arg]
requests_wrapper=requests_wrapper,
llm_chain=delete_llm_chain,
allow_dangerous_requests=allow_dangerous_requests,
)
)
if "PATCH" in allowed_operations:
patch_llm_chain = LLMChain(llm=llm, prompt=PARSING_PATCH_PROMPT)
RequestsPatchToolWithParsing( # type: ignore[call-arg]
requests_wrapper=requests_wrapper,
llm_chain=patch_llm_chain,
allow_dangerous_requests=allow_dangerous_requests,
tools.append(
RequestsPatchToolWithParsing( # type: ignore[call-arg]
requests_wrapper=requests_wrapper,
llm_chain=patch_llm_chain,
allow_dangerous_requests=allow_dangerous_requests,
)
)
if not tools:
raise ValueError("Tools not found")

View File

@@ -8,6 +8,12 @@ from langchain_core.messages import AIMessage
from langchain_core.outputs import ChatGeneration, LLMResult
MODEL_COST_PER_1K_TOKENS = {
# GPT-4o-mini input
"gpt-4o-mini": 0.00015,
"gpt-4o-mini-2024-07-18": 0.00015,
# GPT-4o-mini output
"gpt-4o-mini-completion": 0.0006,
"gpt-4o-mini-2024-07-18-completion": 0.0006,
# GPT-4o input
"gpt-4o": 0.005,
"gpt-4o-2024-05-13": 0.005,

View File

@@ -311,12 +311,15 @@ class GraphCypherQAChain(Chain):
_run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()
callbacks = _run_manager.get_child()
question = inputs[self.input_key]
args = {
"question": question,
"schema": self.graph_schema,
}
args.update(inputs)
intermediate_steps: List = []
generated_cypher = self.cypher_generation_chain.run(
{"question": question, "schema": self.graph_schema}, callbacks=callbacks
)
generated_cypher = self.cypher_generation_chain.run(args, callbacks=callbacks)
# Extract Cypher code if it is wrapped in backticks
generated_cypher = extract_cypher(generated_cypher)
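A hedged sketch of what this change enables: with a custom cypher prompt that declares an extra variable, that value can now be supplied at invocation time alongside the question. The prompt wording, the `node_label` variable, and the commented-out invocation are illustrative assumptions, not the library's shipped defaults.

```python
from langchain_core.prompts import PromptTemplate

# Hypothetical prompt with an extra "node_label" input variable.
CYPHER_PROMPT = PromptTemplate(
    input_variables=["schema", "question", "node_label"],
    template=(
        "Schema:\n{schema}\n"
        "Only use nodes labelled {node_label}.\n"
        "Write a Cypher query that answers: {question}"
    ),
)

# chain = GraphCypherQAChain.from_llm(llm, graph=graph, cypher_prompt=CYPHER_PROMPT)
# chain.invoke({"query": "Who directed Inception?", "node_label": "Movie"})
```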

View File

@@ -124,6 +124,11 @@ class PebbloRetrievalQA(Chain):
),
"doc": doc.page_content,
"vector_db": self.retriever.vectorstore.__class__.__name__,
**(
{"pb_checksum": doc.metadata.get("pb_checksum")}
if doc.metadata.get("pb_checksum")
else {}
),
}
for doc in docs
if isinstance(doc, Document)
@@ -457,25 +462,24 @@ class PebbloRetrievalQA(Chain):
if self.api_key:
if self.classifier_location == "local":
if pebblo_resp:
payload["response"] = (
json.loads(pebblo_resp.text)
.get("retrieval_data", {})
.get("response", {})
)
payload["context"] = (
json.loads(pebblo_resp.text)
.get("retrieval_data", {})
.get("context", [])
)
payload["prompt"] = (
json.loads(pebblo_resp.text)
.get("retrieval_data", {})
.get("prompt", {})
)
resp = json.loads(pebblo_resp.text)
if resp:
payload["response"].update(
resp.get("retrieval_data", {}).get("response", {})
)
payload["response"].pop("data")
payload["prompt"].update(
resp.get("retrieval_data", {}).get("prompt", {})
)
payload["prompt"].pop("data")
context = payload["context"]
for context_data in context:
context_data.pop("doc")
payload["context"] = context
else:
payload["response"] = None
payload["context"] = None
payload["prompt"] = None
payload["response"] = {}
payload["prompt"] = {}
payload["context"] = []
headers.update({"x-api-key": self.api_key})
pebblo_cloud_url = f"{PEBBLO_CLOUD_URL}{PROMPT_URL}"
try:

View File

@@ -129,6 +129,7 @@ class Context(BaseModel):
retrieved_from: Optional[str]
doc: Optional[str]
vector_db: str
pb_checksum: Optional[str]
class Prompt(BaseModel):

View File

@@ -1,6 +1,6 @@
import json
from pathlib import Path
from typing import List
from typing import List, Optional
from langchain_core.chat_history import (
BaseChatMessageHistory,
@@ -11,21 +11,33 @@ from langchain_core.messages import BaseMessage, messages_from_dict, messages_to
class FileChatMessageHistory(BaseChatMessageHistory):
"""Chat message history that stores history in a local file."""
def __init__(self, file_path: str) -> None:
def __init__(
self,
file_path: str,
*,
encoding: Optional[str] = None,
ensure_ascii: bool = True,
) -> None:
"""Initialize the file path for the chat history.
Args:
file_path: The path to the local file to store the chat history.
encoding: The encoding to use for file operations. Defaults to None.
ensure_ascii: If True, escape non-ASCII in JSON. Defaults to True.
"""
self.file_path = Path(file_path)
self.encoding = encoding
self.ensure_ascii = ensure_ascii
if not self.file_path.exists():
self.file_path.touch()
self.file_path.write_text(json.dumps([]))
self.file_path.write_text(
json.dumps([], ensure_ascii=self.ensure_ascii), encoding=self.encoding
)
@property
def messages(self) -> List[BaseMessage]: # type: ignore
"""Retrieve the messages from the local file"""
items = json.loads(self.file_path.read_text())
items = json.loads(self.file_path.read_text(encoding=self.encoding))
messages = messages_from_dict(items)
return messages
@@ -33,8 +45,12 @@ class FileChatMessageHistory(BaseChatMessageHistory):
"""Append the message to the record in the local file"""
messages = messages_to_dict(self.messages)
messages.append(messages_to_dict([message])[0])
self.file_path.write_text(json.dumps(messages))
self.file_path.write_text(
json.dumps(messages, ensure_ascii=self.ensure_ascii), encoding=self.encoding
)
def clear(self) -> None:
"""Clear session memory from the local file"""
self.file_path.write_text(json.dumps([]))
self.file_path.write_text(
json.dumps([], ensure_ascii=self.ensure_ascii), encoding=self.encoding
)
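A usage sketch for the new keyword-only options, assuming the class is imported from `langchain_community.chat_message_histories`; the file name and the Korean sample message are illustrative.

```python
from langchain_community.chat_message_histories import FileChatMessageHistory
from langchain_core.messages import HumanMessage

history = FileChatMessageHistory(
    "chat_history.json",   # hypothetical path
    encoding="utf-8",      # read and write the file as UTF-8
    ensure_ascii=False,    # keep non-ASCII text (e.g. 한국어) unescaped in the JSON
)
history.add_message(HumanMessage(content="안녕하세요"))
print(history.messages)
```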

View File

@@ -25,7 +25,7 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
DEFAULT_API_BASE = "https://api.endpoints.anyscale.com/v1"
DEFAULT_MODEL = "meta-llama/Llama-2-7b-chat-hf"
DEFAULT_MODEL = "meta-llama/Meta-Llama-3-8B-Instruct"
class ChatAnyscale(ChatOpenAI):

View File

@@ -141,9 +141,8 @@ class CustomOpenAIChatContentFormatter(ContentFormatterBase):
except (KeyError, IndexError, TypeError) as e:
raise ValueError(self.format_error_msg.format(api_type=api_type)) from e
return ChatGeneration(
message=BaseMessage(
message=AIMessage(
content=choice.strip(),
type="assistant",
),
generation_info=None,
)
@@ -158,7 +157,9 @@ class CustomOpenAIChatContentFormatter(ContentFormatterBase):
except (KeyError, IndexError, TypeError) as e:
raise ValueError(self.format_error_msg.format(api_type=api_type)) from e
return ChatGeneration(
message=BaseMessage(
message=AIMessage(content=choice["message"]["content"].strip())
if choice["message"]["role"] == "assistant"
else BaseMessage(
content=choice["message"]["content"].strip(),
type=choice["message"]["role"],
),

View File

@@ -22,6 +22,8 @@ from langchain_core.messages import (
ChatMessageChunk,
HumanMessage,
HumanMessageChunk,
SystemMessage,
SystemMessageChunk,
)
from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult
from langchain_core.pydantic_v1 import Field, SecretStr, root_validator
@@ -44,6 +46,8 @@ def _convert_message_to_dict(message: BaseMessage) -> dict:
message_dict = {"role": "user", "content": message.content}
elif isinstance(message, AIMessage):
message_dict = {"role": "assistant", "content": message.content}
elif isinstance(message, SystemMessage):
message_dict = {"role": "system", "content": message.content}
else:
raise TypeError(f"Got unknown type {message}")
@@ -56,6 +60,8 @@ def _convert_dict_to_message(_dict: Mapping[str, Any]) -> BaseMessage:
return HumanMessage(content=_dict["content"])
elif role == "assistant":
return AIMessage(content=_dict.get("content", "") or "")
elif role == "system":
return SystemMessage(content=_dict.get("content", ""))
else:
return ChatMessage(content=_dict["content"], role=role)
@@ -70,6 +76,8 @@ def _convert_delta_to_message_chunk(
return HumanMessageChunk(content=content)
elif role == "assistant" or default_class == AIMessageChunk:
return AIMessageChunk(content=content)
elif role == "system" or default_class == SystemMessageChunk:
return SystemMessageChunk(content=content)
elif role or default_class == ChatMessageChunk:
return ChatMessageChunk(content=content, role=role) # type: ignore[arg-type]
else:
@@ -98,10 +106,145 @@ async def aconnect_httpx_sse(
class ChatBaichuan(BaseChatModel):
"""Baichuan chat models API by Baichuan Intelligent Technology.
"""Baichuan chat model integration.
For more information, see https://platform.baichuan-ai.com/docs/api
"""
Setup:
To use, you should have the environment variable ``BAICHUAN_API_KEY`` set with
your API key.
.. code-block:: bash
export BAICHUAN_API_KEY="your-api-key"
Key init args — completion params:
model: Optional[str]
Name of Baichuan model to use.
max_tokens: Optional[int]
Max number of tokens to generate.
streaming: Optional[bool]
Whether to stream the results or not.
temperature: Optional[float]
Sampling temperature.
top_p: Optional[float]
What probability mass to use.
top_k: Optional[int]
What search sampling control to use.
Key init args — client params:
api_key: Optional[str]
Baichuan API key. If not passed in, will be read from env var BAICHUAN_API_KEY.
base_url: Optional[str]
Base URL for API requests.
See full list of supported init args and their descriptions in the params section.
Instantiate:
.. code-block:: python
from langchain_community.chat_models import ChatBaichuan
chat = ChatBaichuan(
api_key=api_key,
model='Baichuan4',
# temperature=...,
# other params...
)
Invoke:
.. code-block:: python
messages = [
("system", "你是一名专业的翻译家,可以将用户的中文翻译为英文。"),
("human", "我喜欢编程。"),
]
chat.invoke(messages)
.. code-block:: python
AIMessage(
content='I enjoy programming.',
response_metadata={
'token_usage': {
'prompt_tokens': 93,
'completion_tokens': 5,
'total_tokens': 98
},
'model': 'Baichuan4'
},
id='run-944ff552-6a93-44cf-a861-4e4d849746f9-0'
)
Stream:
.. code-block:: python
for chunk in chat.stream(messages):
print(chunk)
.. code-block:: python
content='I' id='run-f99fcd6f-dd31-46d5-be8f-0b6a22bf77d8'
content=' enjoy programming.' id='run-f99fcd6f-dd31-46d5-be8f-0b6a22bf77d8'
.. code-block:: python
stream = chat.stream(messages)
full = next(stream)
for chunk in stream:
full += chunk
full
.. code-block:: python
AIMessageChunk(
content='I like programming.',
id='run-74689970-dc31-461d-b729-3b6aa93508d2'
)
Async:
.. code-block:: python
await chat.ainvoke(messages)
# stream
# async for chunk in chat.astream(messages):
# print(chunk)
# batch
# await chat.abatch([messages])
.. code-block:: python
AIMessage(
content='I enjoy programming.',
response_metadata={
'token_usage': {
'prompt_tokens': 93,
'completion_tokens': 5,
'total_tokens': 98
},
'model': 'Baichuan4'
},
id='run-952509ed-9154-4ff9-b187-e616d7ddfbba-0'
)
Response metadata
.. code-block:: python
ai_msg = chat.invoke(messages)
ai_msg.response_metadata
.. code-block:: python
{
'token_usage': {
'prompt_tokens': 93,
'completion_tokens': 5,
'total_tokens': 98
},
'model': 'Baichuan4'
}
""" # noqa: E501
@property
def lc_secrets(self) -> Dict[str, str]:
@@ -113,7 +256,7 @@ class ChatBaichuan(BaseChatModel):
def lc_serializable(self) -> bool:
return True
baichuan_api_base: str = Field(default=DEFAULT_API_BASE)
baichuan_api_base: str = Field(default=DEFAULT_API_BASE, alias="base_url")
"""Baichuan custom endpoints"""
baichuan_api_key: SecretStr = Field(alias="api_key")
"""Baichuan API Key"""
@@ -121,6 +264,8 @@ class ChatBaichuan(BaseChatModel):
"""[DEPRECATED, keeping it for for backward compatibility] Baichuan Secret Key"""
streaming: bool = False
"""Whether to stream the results or not."""
max_tokens: Optional[int] = None
"""Maximum number of tokens to generate."""
request_timeout: int = Field(default=60, alias="timeout")
"""request timeout for chat http requests"""
model: str = "Baichuan2-Turbo-192K"
@@ -133,7 +278,8 @@ class ChatBaichuan(BaseChatModel):
top_p: float = 0.85
"""What probability mass to use."""
with_search_enhance: bool = False
"""Whether to use search enhance, default is False."""
"""[DEPRECATED, keeping it for for backward compatibility],
Whether to use search enhance, default is False."""
model_kwargs: Dict[str, Any] = Field(default_factory=dict)
"""Holds any model parameters valid for API call not explicitly specified."""
@@ -193,8 +339,8 @@ class ChatBaichuan(BaseChatModel):
"temperature": self.temperature,
"top_p": self.top_p,
"top_k": self.top_k,
"with_search_enhance": self.with_search_enhance,
"stream": self.streaming,
"max_tokens": self.max_tokens,
}
return {**normal_params, **self.model_kwargs}

View File

@@ -32,6 +32,7 @@ from langchain_core.messages import (
SystemMessage,
ToolMessage,
)
from langchain_core.messages.ai import UsageMetadata
from langchain_core.output_parsers.base import OutputParserLike
from langchain_core.output_parsers.openai_tools import (
JsonOutputKeyToolsParser,
@@ -88,6 +89,7 @@ def _convert_dict_to_message(_dict: Mapping[str, Any]) -> AIMessage:
request_id=additional_kwargs["id"],
object=additional_kwargs.get("object", ""),
search_info=additional_kwargs.get("search_info", []),
usage=additional_kwargs.get("usage", None),
)
if additional_kwargs.get("function_call", {}):
@@ -102,6 +104,17 @@ def _convert_dict_to_message(_dict: Mapping[str, Any]) -> AIMessage:
}
]
if usage := additional_kwargs.get("usage", None):
return AIMessage(
content=content,
additional_kwargs=msg_additional_kwargs,
usage_metadata=UsageMetadata(
input_tokens=usage.get("prompt_tokens", 0),
output_tokens=usage.get("completion_tokens", 0),
total_tokens=usage.get("total_tokens", 0),
),
)
return AIMessage(
content=content,
additional_kwargs=msg_additional_kwargs,
@@ -578,6 +591,7 @@ class QianfanChatEndpoint(BaseChatModel):
content=msg.content,
role="assistant",
additional_kwargs=additional_kwargs,
usage_metadata=msg.usage_metadata,
),
generation_info=msg.additional_kwargs,
)
@@ -605,6 +619,7 @@ class QianfanChatEndpoint(BaseChatModel):
content=msg.content,
role="assistant",
additional_kwargs=additional_kwargs,
usage_metadata=msg.usage_metadata,
),
generation_info=msg.additional_kwargs,
)

View File

@@ -48,6 +48,7 @@ from langchain_core.messages import (
ToolMessage,
)
from langchain_core.messages.tool import ToolCall
from langchain_core.messages.tool import tool_call as create_tool_call
from langchain_core.outputs import (
ChatGeneration,
ChatGenerationChunk,
@@ -96,7 +97,7 @@ def _parse_tool_calling(tool_call: dict) -> ToolCall:
name = tool_call["function"].get("name", "")
args = json.loads(tool_call["function"]["arguments"])
id = tool_call.get("id")
return ToolCall(name=name, args=args, id=id)
return create_tool_call(name=name, args=args, id=id)
def _convert_to_tool_calling(tool_call: ToolCall) -> Dict[str, Any]:

View File

@@ -36,9 +36,11 @@ from langchain_core.messages import (
InvalidToolCall,
SystemMessage,
ToolCall,
ToolCallChunk,
ToolMessage,
)
from langchain_core.messages.tool import invalid_tool_call as create_invalid_tool_call
from langchain_core.messages.tool import tool_call as create_tool_call
from langchain_core.messages.tool import tool_call_chunk as create_tool_call_chunk
from langchain_core.output_parsers.base import OutputParserLike
from langchain_core.output_parsers.openai_tools import (
JsonOutputKeyToolsParser,
@@ -63,7 +65,7 @@ def _result_to_chunked_message(generated_result: ChatResult) -> ChatGenerationCh
message = generated_result.generations[0].message
if isinstance(message, AIMessage) and message.tool_calls is not None:
tool_call_chunks = [
ToolCallChunk(
create_tool_call_chunk(
name=tool_call["name"],
args=json.dumps(tool_call["args"]),
id=tool_call["id"],
@@ -189,7 +191,7 @@ def _extract_tool_calls_from_edenai_response(
for raw_tool_call in raw_tool_calls:
try:
tool_calls.append(
ToolCall(
create_tool_call(
name=raw_tool_call["name"],
args=json.loads(raw_tool_call["arguments"]),
id=raw_tool_call["id"],
@@ -197,7 +199,7 @@ def _extract_tool_calls_from_edenai_response(
)
except json.JSONDecodeError as exc:
invalid_tool_calls.append(
InvalidToolCall(
create_invalid_tool_call(
name=raw_tool_call.get("name"),
args=raw_tool_call.get("arguments"),
id=raw_tool_call.get("id"),

View File

@@ -8,7 +8,7 @@ from typing import TYPE_CHECKING, Dict, Optional, Set
from langchain_core.messages import BaseMessage
from langchain_core.pydantic_v1 import Field, root_validator
from langchain_core.utils import get_from_dict_or_env
from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env
from langchain_community.adapters.openai import convert_message_to_dict
from langchain_community.chat_models.openai import (
@@ -79,10 +79,12 @@ class ChatEverlyAI(ChatOpenAI):
@root_validator(pre=True)
def validate_environment_override(cls, values: dict) -> dict:
"""Validate that api key and python package exists in environment."""
values["openai_api_key"] = get_from_dict_or_env(
values,
"everlyai_api_key",
"EVERLYAI_API_KEY",
values["openai_api_key"] = convert_to_secret_str(
get_from_dict_or_env(
values,
"everlyai_api_key",
"EVERLYAI_API_KEY",
)
)
values["openai_api_base"] = DEFAULT_API_BASE

View File

@@ -144,7 +144,7 @@ class ChatOllama(BaseChatModel, _OllamaCommon):
elif (
isinstance(temp_image_url, dict) and "url" in temp_image_url
):
image_url = temp_image_url
image_url = temp_image_url["url"]
else:
raise ValueError(
"Only string image_url or dict with string 'url' "

View File

@@ -4,6 +4,7 @@ import asyncio
import functools
import json
import logging
from operator import itemgetter
from typing import (
Any,
AsyncIterator,
@@ -40,7 +41,10 @@ from langchain_core.messages import (
ToolMessage,
ToolMessageChunk,
)
from langchain_core.output_parsers.base import OutputParserLike
from langchain_core.output_parsers.openai_tools import (
JsonOutputKeyToolsParser,
PydanticToolsParser,
make_invalid_tool_call,
parse_tool_call,
)
@@ -50,7 +54,7 @@ from langchain_core.outputs import (
ChatResult,
)
from langchain_core.pydantic_v1 import BaseModel, Field, SecretStr
from langchain_core.runnables import Runnable
from langchain_core.runnables import Runnable, RunnableMap, RunnablePassthrough
from langchain_core.tools import BaseTool
from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env, pre_init
from langchain_core.utils.function_calling import convert_to_openai_tool
@@ -372,6 +376,33 @@ class ChatTongyi(BaseChatModel):
}
]
Structured output:
.. code-block:: python
from typing import Optional
from langchain_core.pydantic_v1 import BaseModel, Field
class Joke(BaseModel):
'''Joke to tell user.'''
setup: str = Field(description="The setup of the joke")
punchline: str = Field(description="The punchline to the joke")
rating: Optional[int] = Field(description="How funny the joke is, from 1 to 10")
structured_chat = tongyi_chat.with_structured_output(Joke)
structured_chat.invoke("Tell me a joke about cats")
.. code-block:: python
Joke(
setup='Why did the cat join the band?',
punchline='Because it wanted to be a solo purr-sonality!',
rating=None
)
Response metadata
.. code-block:: python
@@ -791,3 +822,70 @@ class ChatTongyi(BaseChatModel):
formatted_tools = [convert_to_openai_tool(tool) for tool in tools]
return super().bind(tools=formatted_tools, **kwargs)
def with_structured_output(
self,
schema: Union[Dict, Type[BaseModel]],
*,
include_raw: bool = False,
**kwargs: Any,
) -> Runnable[LanguageModelInput, Union[Dict, BaseModel]]:
"""Model wrapper that returns outputs formatted to match the given schema.
Args:
schema: The output schema as a dict or a Pydantic class. If a Pydantic class
then the model output will be an object of that class. If a dict then
the model output will be a dict. With a Pydantic class the returned
attributes will be validated, whereas with a dict they will not be. If
`method` is "function_calling" and `schema` is a dict, then the dict
must match the OpenAI function-calling spec.
include_raw: If False then only the parsed structured output is returned. If
an error occurs during model output parsing it will be raised. If True
then both the raw model response (a BaseMessage) and the parsed model
response will be returned. If an error occurs during output parsing it
will be caught and returned as well. The final output is always a dict
with keys "raw", "parsed", and "parsing_error".
Returns:
A Runnable that takes any ChatModel input and returns as output:
If include_raw is True then a dict with keys:
raw: BaseMessage
parsed: Optional[_DictOrPydantic]
parsing_error: Optional[BaseException]
If include_raw is False then just _DictOrPydantic is returned,
where _DictOrPydantic depends on the schema:
If schema is a Pydantic class then _DictOrPydantic is the Pydantic
class.
If schema is a dict then _DictOrPydantic is a dict.
"""
if kwargs:
raise ValueError(f"Received unsupported arguments {kwargs}")
is_pydantic_schema = isinstance(schema, type) and issubclass(schema, BaseModel)
llm = self.bind_tools([schema])
if is_pydantic_schema:
output_parser: OutputParserLike = PydanticToolsParser(
tools=[schema], # type: ignore[list-item]
first_tool_only=True, # type: ignore[list-item]
)
else:
key_name = convert_to_openai_tool(schema)["function"]["name"]
output_parser = JsonOutputKeyToolsParser(
key_name=key_name, first_tool_only=True
)
if include_raw:
parser_assign = RunnablePassthrough.assign(
parsed=itemgetter("raw") | output_parser, parsing_error=lambda _: None
)
parser_none = RunnablePassthrough.assign(parsed=lambda _: None)
parser_with_fallback = parser_assign.with_fallbacks(
[parser_none], exception_key="parsing_error"
)
return RunnableMap(raw=llm) | parser_with_fallback
else:
return llm | output_parser

View File

@@ -60,7 +60,7 @@ class HuggingFaceCrossEncoder(BaseModel, BaseCrossEncoder):
List of scores, one for each pair.
"""
scores = self.client.predict(text_pairs)
# Somes models e.g bert-multilingual-passage-reranking-msmarco
# Some models, e.g. bert-multilingual-passage-reranking-msmarco,
# give two scores, not_relevant and relevant, compared with the query.
if len(scores.shape) > 1: # we are going to get the relevant scores
scores = map(lambda x: x[1], scores)

View File

@@ -6,6 +6,7 @@ import warnings
from typing import (
TYPE_CHECKING,
Any,
Dict,
Iterable,
Iterator,
Mapping,
@@ -27,6 +28,7 @@ if TYPE_CHECKING:
import pdfplumber.page
import pypdf._page
import pypdfium2._helpers.page
from pypdf import PageObject
from textractor.data.text_linearization_config import TextLinearizationConfig
@@ -83,10 +85,17 @@ class PyPDFParser(BaseBlobParser):
"""Load `PDF` using `pypdf`"""
def __init__(
self, password: Optional[Union[str, bytes]] = None, extract_images: bool = False
self,
password: Optional[Union[str, bytes]] = None,
extract_images: bool = False,
*,
extraction_mode: str = "plain",
extraction_kwargs: Optional[Dict[str, Any]] = None,
):
self.password = password
self.extract_images = extract_images
self.extraction_mode = extraction_mode
self.extraction_kwargs = extraction_kwargs or {}
def lazy_parse(self, blob: Blob) -> Iterator[Document]: # type: ignore[valid-type]
"""Lazily parse the blob."""
@@ -98,11 +107,23 @@ class PyPDFParser(BaseBlobParser):
"`pip install pypdf`"
)
def _extract_text_from_page(page: "PageObject") -> str:
"""
Extract text from a page, handling differences between pypdf versions.
"""
if pypdf.__version__.startswith("3"):
return page.extract_text()
else:
return page.extract_text(
extraction_mode=self.extraction_mode, **self.extraction_kwargs
)
with blob.as_bytes_io() as pdf_file_obj: # type: ignore[attr-defined]
pdf_reader = pypdf.PdfReader(pdf_file_obj, password=self.password)
yield from [
Document(
page_content=page.extract_text()
page_content=_extract_text_from_page(page=page)
+ self._extract_images_from_page(page),
metadata={"source": blob.source, "page": page_number}, # type: ignore[attr-defined]
)

View File

@@ -171,6 +171,9 @@ class PyPDFLoader(BasePDFLoader):
password: Optional[Union[str, bytes]] = None,
headers: Optional[Dict] = None,
extract_images: bool = False,
*,
extraction_mode: str = "plain",
extraction_kwargs: Optional[Dict] = None,
) -> None:
"""Initialize with a file path."""
try:
@@ -180,7 +183,12 @@ class PyPDFLoader(BasePDFLoader):
"pypdf package not found, please install it with " "`pip install pypdf`"
)
super().__init__(file_path, headers=headers)
self.parser = PyPDFParser(password=password, extract_images=extract_images)
self.parser = PyPDFParser(
password=password,
extract_images=extract_images,
extraction_mode=extraction_mode,
extraction_kwargs=extraction_kwargs,
)
def lazy_load(
self,
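A usage sketch for the new parameters, assuming a hypothetical `example.pdf` and a pypdf release new enough to support layout-mode extraction (with pypdf 3.x the parser above falls back to the plain `extract_text()` call); `extraction_kwargs` would be forwarded verbatim to pypdf.

```python
from langchain_community.document_loaders import PyPDFLoader

loader = PyPDFLoader(
    "example.pdf",             # hypothetical file
    extraction_mode="layout",  # forwarded to pypdf's extract_text()
)
docs = loader.load()
print(docs[0].page_content[:200])
```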

View File

@@ -5,7 +5,7 @@ import logging
import os
import uuid
from http import HTTPStatus
from typing import Any, Dict, Iterator, List, Optional, Union
from typing import Any, Dict, Iterator, List, Optional
import requests # type: ignore
from langchain_core.documents import Document
@@ -61,7 +61,7 @@ class PebbloSafeLoader(BaseLoader):
self.source_path = get_loader_full_path(self.loader)
self.source_owner = PebbloSafeLoader.get_file_owner_from_path(self.source_path)
self.docs: List[Document] = []
self.docs_with_id: Union[List[IndexedDocument], List[Document], List] = []
self.docs_with_id: List[IndexedDocument] = []
loader_name = str(type(self.loader)).split(".")[-1].split("'")[0]
self.source_type = get_loader_type(loader_name)
self.source_path_size = self.get_source_size(self.source_path)
@@ -89,17 +89,13 @@ class PebbloSafeLoader(BaseLoader):
list: Documents fetched from load method of the wrapped `loader`.
"""
self.docs = self.loader.load()
# Add pebblo-specific metadata to docs
self._add_pebblo_specific_metadata()
if not self.load_semantic:
self._classify_doc(self.docs, loading_end=True)
return self.docs
self.docs_with_id = self._index_docs()
classified_docs = self._classify_doc(self.docs_with_id, loading_end=True)
self.docs_with_id = self._add_semantic_to_docs(
self.docs_with_id, classified_docs
)
self.docs = self._unindex_docs(self.docs_with_id) # type: ignore
classified_docs = self._classify_doc(loading_end=True)
self._add_pebblo_specific_metadata(classified_docs)
if self.load_semantic:
self.docs = self._add_semantic_to_docs(classified_docs)
else:
self.docs = self._unindex_docs() # type: ignore
return self.docs
def lazy_load(self) -> Iterator[Document]:
@@ -125,19 +121,14 @@ class PebbloSafeLoader(BaseLoader):
self.docs = []
break
self.docs = list((doc,))
# Add pebblo-specific metadata to docs
self._add_pebblo_specific_metadata()
if not self.load_semantic:
self._classify_doc(self.docs, loading_end=True)
yield self.docs[0]
self.docs_with_id = self._index_docs()
classified_doc = self._classify_doc()
self._add_pebblo_specific_metadata(classified_doc)
if self.load_semantic:
self.docs = self._add_semantic_to_docs(classified_doc)
else:
self.docs_with_id = self._index_docs()
classified_doc = self._classify_doc(self.docs)
self.docs_with_id = self._add_semantic_to_docs(
self.docs_with_id, classified_doc
)
self.docs = self._unindex_docs(self.docs_with_id) # type: ignore
yield self.docs[0]
self.docs = self._unindex_docs()
yield self.docs[0]
@classmethod
def set_discover_sent(cls) -> None:
@@ -147,13 +138,12 @@ class PebbloSafeLoader(BaseLoader):
def set_loader_sent(cls) -> None:
cls._loader_sent = True
def _classify_doc(self, loaded_docs: list, loading_end: bool = False) -> list:
def _classify_doc(self, loading_end: bool = False) -> dict:
"""Send documents fetched from loader to pebblo-server. Then send
classified documents to Daxa cloud (if api_key is present). Internal method.
Args:
loaded_docs (list): List of documents fetched from loader's load operation.
loading_end (bool, optional): Flag indicating that the loader has finished
loading data. Defaults to False.
"""
@@ -163,9 +153,8 @@ class PebbloSafeLoader(BaseLoader):
}
if loading_end is True:
PebbloSafeLoader.set_loader_sent()
doc_content = [doc.dict() for doc in loaded_docs]
doc_content = [doc.dict() for doc in self.docs_with_id]
docs = []
classified_docs = []
for doc in doc_content:
doc_metadata = doc.get("metadata", {})
doc_authorized_identities = doc_metadata.get("authorized_identities", [])
@@ -183,12 +172,12 @@ class PebbloSafeLoader(BaseLoader):
page_content = str(doc.get("page_content"))
page_content_size = self.calculate_content_size(page_content)
self.source_aggregate_size += page_content_size
doc_id = doc.get("id", None) or 0
doc_id = doc.get("pb_id", None) or 0
docs.append(
{
"doc": page_content,
"source_path": doc_source_path,
"id": doc_id,
"pb_id": doc_id,
"last_modified": doc.get("metadata", {}).get("last_modified"),
"file_owner": doc_source_owner,
**(
@@ -221,6 +210,7 @@ class PebbloSafeLoader(BaseLoader):
self.source_aggregate_size
)
payload = Doc(**payload).dict(exclude_unset=True)
classified_docs = {}
# Raw payload to be sent to classifier
if self.classifier_location == "local":
load_doc_url = f"{self.classifier_url}{LOADER_DOC_URL}"
@@ -228,7 +218,10 @@ class PebbloSafeLoader(BaseLoader):
pebblo_resp = requests.post(
load_doc_url, headers=headers, json=payload, timeout=300
)
classified_docs = json.loads(pebblo_resp.text).get("docs", None)
# Updating the structure of pebblo response docs for efficient searching
for classified_doc in json.loads(pebblo_resp.text).get("docs", []):
classified_docs.update({classified_doc["pb_id"]: classified_doc})
if pebblo_resp.status_code not in [
HTTPStatus.OK,
HTTPStatus.BAD_GATEWAY,
@@ -257,7 +250,21 @@ class PebbloSafeLoader(BaseLoader):
if self.api_key:
if self.classifier_location == "local":
payload["docs"] = classified_docs
docs = payload["docs"]
for doc_data in docs:
classified_data = classified_docs.get(doc_data["pb_id"], {})
doc_data.update(
{
"pb_checksum": classified_data.get("pb_checksum", None),
"loader_source_path": classified_data.get(
"loader_source_path", None
),
"entities": classified_data.get("entities", {}),
"topics": classified_data.get("topics", {}),
}
)
doc_data.pop("doc")
headers.update({"x-api-key": self.api_key})
pebblo_cloud_url = f"{PEBBLO_CLOUD_URL}{LOADER_DOC_URL}"
try:
@@ -453,33 +460,29 @@ class PebbloSafeLoader(BaseLoader):
List[IndexedDocument]: A list of IndexedDocument objects with unique IDs.
"""
docs_with_id = [
IndexedDocument(id=hex(i)[2:], **doc.dict())
IndexedDocument(pb_id=str(i), **doc.dict())
for i, doc in enumerate(self.docs)
]
return docs_with_id
def _add_semantic_to_docs(
self, docs_with_id: List[IndexedDocument], classified_docs: List[dict]
) -> List[Document]:
def _add_semantic_to_docs(self, classified_docs: Dict) -> List[Document]:
"""
Adds semantic metadata to the given list of documents.
Args:
docs_with_id (List[IndexedDocument]): A list of IndexedDocument objects
containing the documents with their IDs.
classified_docs (List[dict]): A list of dictionaries containing the
classified documents.
classified_docs (Dict): A dictionary of dictionaries containing the
classified documents with pb_id as key.
Returns:
List[Document]: A list of Document objects with added semantic metadata.
"""
indexed_docs = {
doc.id: Document(page_content=doc.page_content, metadata=doc.metadata)
for doc in docs_with_id
doc.pb_id: Document(page_content=doc.page_content, metadata=doc.metadata)
for doc in self.docs_with_id
}
for classified_doc in classified_docs:
doc_id = classified_doc.get("id")
for classified_doc in classified_docs.values():
doc_id = classified_doc.get("pb_id")
if doc_id in indexed_docs:
self._add_semantic_to_doc(indexed_docs[doc_id], classified_doc)
@@ -487,19 +490,16 @@ class PebbloSafeLoader(BaseLoader):
return semantic_metadata_docs
def _unindex_docs(self, docs_with_id: List[IndexedDocument]) -> List[Document]:
def _unindex_docs(self) -> List[Document]:
"""
Converts a list of IndexedDocument objects to a list of Document objects.
Args:
docs_with_id (List[IndexedDocument]): A list of IndexedDocument objects.
Returns:
List[Document]: A list of Document objects.
"""
docs = [
Document(page_content=doc.page_content, metadata=doc.metadata)
for i, doc in enumerate(docs_with_id)
for i, doc in enumerate(self.docs_with_id)
]
return docs
@@ -522,12 +522,16 @@ class PebbloSafeLoader(BaseLoader):
)
return doc
def _add_pebblo_specific_metadata(self) -> None:
def _add_pebblo_specific_metadata(self, classified_docs: dict) -> None:
"""Add Pebblo specific metadata to documents."""
for doc in self.docs:
for doc in self.docs_with_id:
doc_metadata = doc.metadata
doc_metadata["full_path"] = get_full_path(
doc_metadata.get(
"full_path", doc_metadata.get("source", self.source_path)
)
)
doc_metadata["pb_id"] = doc.pb_id
doc_metadata["pb_checksum"] = classified_docs.get(doc.pb_id, {}).get(
"pb_checksum", None
)
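
A minimal sketch of the reshaped classifier response this refactor relies on; the pb_id keys and field values below are illustrative, not actual Pebblo output:

```python
# Illustrative shape only: classified docs are now keyed by pb_id for constant-time lookup.
classified_docs = {
    "0": {"pb_id": "0", "pb_checksum": "abc123", "entities": {}, "topics": {}},
    "1": {"pb_id": "1", "pb_checksum": "def456", "entities": {}, "topics": {}},
}

pb_id = "1"
pb_checksum = classified_docs.get(pb_id, {}).get("pb_checksum")  # "def456", or None if unclassified
```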

View File

@@ -7,6 +7,7 @@ from enum import Enum
from pathlib import Path
from typing import Any, Dict, Generator, List, Optional, Sequence, Union
from urllib.parse import parse_qs, urlparse
from xml.etree.ElementTree import ParseError # OK: trusted-source
from langchain_core.documents import Document
from langchain_core.pydantic_v1 import root_validator
@@ -28,6 +29,8 @@ class GoogleApiClient:
As the google api expects credentials you need to set up a google account and
register your Service. "https://developers.google.com/docs/api/quickstart/python"
*Security Note*: Parsing of the transcripts relies on the standard
xml library, but the input is treated as trusted in this case.
Example:
@@ -437,6 +440,14 @@ class GoogleApiYoutubeLoader(BaseLoader):
channel_id = response["items"][0]["id"]["channelId"]
return channel_id
def _get_uploads_playlist_id(self, channel_id: str) -> str:
request = self.youtube_client.channels().list(
part="contentDetails",
id=channel_id,
)
response = request.execute()
return response["items"][0]["contentDetails"]["relatedPlaylists"]["uploads"]
def _get_document_for_channel(self, channel: str, **kwargs: Any) -> List[Document]:
try:
from youtube_transcript_api import (
@@ -452,10 +463,11 @@ class GoogleApiYoutubeLoader(BaseLoader):
)
channel_id = self._get_channel_id(channel)
request = self.youtube_client.search().list(
uploads_playlist_id = self._get_uploads_playlist_id(channel_id)
request = self.youtube_client.playlistItems().list(
part="id,snippet",
channelId=channel_id,
maxResults=50, # adjust this value to retrieve more or fewer videos
playlistId=uploads_playlist_id,
maxResults=50,
)
video_ids = []
while request is not None:
@@ -463,23 +475,20 @@ class GoogleApiYoutubeLoader(BaseLoader):
# Add each video ID to the list
for item in response["items"]:
if not item["id"].get("videoId"):
continue
meta_data = {"videoId": item["id"]["videoId"]}
video_id = item["snippet"]["resourceId"]["videoId"]
meta_data = {"videoId": video_id}
if self.add_video_info:
item["snippet"].pop("thumbnails")
meta_data.update(item["snippet"])
try:
page_content = self._get_transcripe_for_video_id(
item["id"]["videoId"]
)
page_content = self._get_transcripe_for_video_id(video_id)
video_ids.append(
Document(
page_content=page_content,
metadata=meta_data,
)
)
except (TranscriptsDisabled, NoTranscriptFound) as e:
except (TranscriptsDisabled, NoTranscriptFound, ParseError) as e:
if self.continue_on_failure:
logger.error(
"Error fetching transcript "

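A hedged usage sketch of the loader after the switch from search to the uploads playlist; the service-account path and channel name are placeholders, and it is assumed both classes are exported from langchain_community.document_loaders as in earlier releases:

```python
# Placeholder credentials and channel; the loader resolves the channel's uploads playlist internally.
from pathlib import Path

from langchain_community.document_loaders import GoogleApiClient, GoogleApiYoutubeLoader

client = GoogleApiClient(service_account_path=Path("service_account.json"))
loader = GoogleApiYoutubeLoader(
    google_api_client=client,
    channel_name="ExampleChannel",
    add_video_info=True,
    continue_on_failure=True,  # skips videos whose transcripts are disabled, missing, or unparsable
)
docs = loader.load()
```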
View File

@@ -213,6 +213,9 @@ if TYPE_CHECKING:
from langchain_community.embeddings.tensorflow_hub import (
TensorflowHubEmbeddings,
)
from langchain_community.embeddings.textembed import (
TextEmbedEmbeddings,
)
from langchain_community.embeddings.titan_takeoff import (
TitanTakeoffEmbed,
)
@@ -308,6 +311,7 @@ __all__ = [
"SpacyEmbeddings",
"SparkLLMTextEmbeddings",
"TensorflowHubEmbeddings",
"TextEmbedEmbeddings",
"TitanTakeoffEmbed",
"VertexAIEmbeddings",
"VolcanoEmbeddings",
@@ -392,6 +396,7 @@ _module_lookup = {
"VolcanoEmbeddings": "langchain_community.embeddings.volcengine",
"VoyageEmbeddings": "langchain_community.embeddings.voyageai",
"XinferenceEmbeddings": "langchain_community.embeddings.xinference",
"TextEmbedEmbeddings": "langchain_community.embeddings.textembed",
"TitanTakeoffEmbed": "langchain_community.embeddings.titan_takeoff",
"PremAIEmbeddings": "langchain_community.embeddings.premai",
"YandexGPTEmbeddings": "langchain_community.embeddings.yandex",

View File

@@ -60,7 +60,7 @@ class AscendEmbeddings(Embeddings, BaseModel):
raise ValueError("model_path is required")
if not os.access(values["model_path"], os.F_OK):
raise FileNotFoundError(
f"Unabled to find valid model path in [{values['model_path']}]"
f"Unable to find valid model path in [{values['model_path']}]"
)
try:
import torch_npu

View File

@@ -61,7 +61,7 @@ class AzureOpenAIEmbeddings(OpenAIEmbeddings):
# TODO: Remove OPENAI_API_KEY support to avoid possible conflict when using
# other forms of azure credentials.
values["openai_api_key"] = (
values["openai_api_key"]
values.get("openai_api_key")
or os.getenv("AZURE_OPENAI_API_KEY")
or os.getenv("OPENAI_API_KEY")
)
@@ -75,7 +75,7 @@ class AzureOpenAIEmbeddings(OpenAIEmbeddings):
values, "openai_api_type", "OPENAI_API_TYPE", default="azure"
)
values["openai_organization"] = (
values["openai_organization"]
values.get("openai_organization")
or os.getenv("OPENAI_ORG_ID")
or os.getenv("OPENAI_ORGANIZATION")
)
@@ -85,10 +85,10 @@ class AzureOpenAIEmbeddings(OpenAIEmbeddings):
"OPENAI_PROXY",
default="",
)
values["azure_endpoint"] = values["azure_endpoint"] or os.getenv(
values["azure_endpoint"] = values.get("azure_endpoint") or os.getenv(
"AZURE_OPENAI_ENDPOINT"
)
values["azure_ad_token"] = values["azure_ad_token"] or os.getenv(
values["azure_ad_token"] = values.get("azure_ad_token") or os.getenv(
"AZURE_OPENAI_AD_TOKEN"
)
# Azure OpenAI embedding models allow a maximum of 16 texts
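
A minimal sketch of why the switch to `.get()` matters here; the `values` dict and environment variables are illustrative:

```python
# Illustrative only: .get() returns None for a missing key instead of raising KeyError.
import os

values: dict = {}  # constructor kwargs; "openai_api_key" may not be present at all

api_key = (
    values.get("openai_api_key")
    or os.getenv("AZURE_OPENAI_API_KEY")
    or os.getenv("OPENAI_API_KEY")
)
```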

View File

@@ -4,7 +4,7 @@ import logging
from typing import Any, Dict, List, Optional
from langchain_core.embeddings import Embeddings
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.pydantic_v1 import BaseModel, Field, SecretStr
from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env, pre_init
logger = logging.getLogger(__name__)
@@ -13,10 +13,10 @@ logger = logging.getLogger(__name__)
class QianfanEmbeddingsEndpoint(BaseModel, Embeddings):
"""`Baidu Qianfan Embeddings` embedding models."""
qianfan_ak: Optional[str] = None
qianfan_ak: Optional[SecretStr] = None
"""Qianfan application apikey"""
qianfan_sk: Optional[str] = None
qianfan_sk: Optional[SecretStr] = None
"""Qianfan application secretkey"""
chunk_size: int = 16
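
A minimal sketch of the SecretStr handling this change enables, assuming the credentials are wrapped with `convert_to_secret_str` as elsewhere in langchain_core; the key value is a placeholder:

```python
# Placeholder key; SecretStr masks the value in reprs and logs.
from langchain_core.utils import convert_to_secret_str

qianfan_ak = convert_to_secret_str("my-qianfan-api-key")
print(qianfan_ak)                     # **********  (masked)
print(qianfan_ak.get_secret_value())  # my-qianfan-api-key  (explicit access only)
```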

View File

@@ -0,0 +1,356 @@
"""
TextEmbed: Embedding Inference Server
TextEmbed provides a high-throughput, low-latency solution for serving embeddings.
It supports various sentence-transformer models.
Now, it includes the ability to deploy image embedding models.
TextEmbed offers flexibility and scalability for diverse applications.
TextEmbed is maintained by Keval Dekivadiya and is licensed under the Apache-2.0 license.
""" # noqa: E501
import asyncio
from concurrent.futures import ThreadPoolExecutor
from typing import Any, Callable, Dict, List, Optional, Tuple, Union
import aiohttp
import numpy as np
import requests
from langchain_core.embeddings import Embeddings
from langchain_core.pydantic_v1 import BaseModel, Extra, root_validator
from langchain_core.utils import get_from_dict_or_env
__all__ = ["TextEmbedEmbeddings"]
class TextEmbedEmbeddings(BaseModel, Embeddings):
"""
A class to handle embedding requests to the TextEmbed API.
Attributes:
model : The TextEmbed model ID to use for embeddings.
api_url : The base URL for the TextEmbed API.
api_key : The API key for authenticating with the TextEmbed API.
client : The TextEmbed client instance.
Example:
.. code-block:: python
from langchain_community.embeddings import TextEmbedEmbeddings
embeddings = TextEmbedEmbeddings(
model="sentence-transformers/clip-ViT-B-32",
api_url="http://localhost:8000/v1",
api_key="<API_KEY>"
)
For more information: https://github.com/kevaldekivadiya2415/textembed/blob/main/docs/setup.md
""" # noqa: E501
model: str
"""Underlying TextEmbed model id."""
api_url: str = "http://localhost:8000/v1"
"""Endpoint URL to use."""
api_key: str = "None"
"""API Key for authentication"""
client: Any = None
"""TextEmbed client."""
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
@root_validator(pre=False, skip_on_failure=True)
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that api key and URL exist in the environment.
Args:
values (Dict): Dictionary of values to validate.
Returns:
Dict: Validated values.
"""
values["api_url"] = get_from_dict_or_env(values, "api_url", "API_URL")
values["api_key"] = get_from_dict_or_env(values, "api_key", "API_KEY")
values["client"] = AsyncOpenAITextEmbedEmbeddingClient(
host=values["api_url"], api_key=values["api_key"]
)
return values
def embed_documents(self, texts: List[str]) -> List[List[float]]:
"""Call out to TextEmbed's embedding endpoint.
Args:
texts (List[str]): The list of texts to embed.
Returns:
List[List[float]]: List of embeddings, one for each text.
"""
embeddings = self.client.embed(
model=self.model,
texts=texts,
)
return embeddings
async def aembed_documents(self, texts: List[str]) -> List[List[float]]:
"""Async call out to TextEmbed's embedding endpoint.
Args:
texts (List[str]): The list of texts to embed.
Returns:
List[List[float]]: List of embeddings, one for each text.
"""
embeddings = await self.client.aembed(
model=self.model,
texts=texts,
)
return embeddings
def embed_query(self, text: str) -> List[float]:
"""Call out to TextEmbed's embedding endpoint for a single query.
Args:
text (str): The text to embed.
Returns:
List[float]: Embeddings for the text.
"""
return self.embed_documents([text])[0]
async def aembed_query(self, text: str) -> List[float]:
"""Async call out to TextEmbed's embedding endpoint for a single query.
Args:
text (str): The text to embed.
Returns:
List[float]: Embeddings for the text.
"""
embeddings = await self.aembed_documents([text])
return embeddings[0]
class AsyncOpenAITextEmbedEmbeddingClient:
"""
A client to handle synchronous and asynchronous requests to the TextEmbed API.
Attributes:
host (str): The base URL for the TextEmbed API.
api_key (str): The API key for authenticating with the TextEmbed API.
aiosession (Optional[aiohttp.ClientSession]): The aiohttp session for async requests.
_batch_size (int): Maximum batch size for a single request.
""" # noqa: E501
def __init__(
self,
host: str = "http://localhost:8000/v1",
api_key: Union[str, None] = None,
aiosession: Optional[aiohttp.ClientSession] = None,
) -> None:
self.host = host
self.api_key = api_key
self.aiosession = aiosession
if self.host is None or len(self.host) < 3:
raise ValueError("Parameter `host` must be set to a valid URL")
self._batch_size = 256
@staticmethod
def _permute(
texts: List[str], sorter: Callable = len
) -> Tuple[List[str], Callable]:
"""
Sorts texts by length (longest first) and provides a function to restore the original order.
Args:
texts (List[str]): List of texts to sort.
sorter (Callable, optional): Sorting function, defaults to length.
Returns:
Tuple[List[str], Callable]: Sorted texts and a function to restore original order.
""" # noqa: E501
if len(texts) == 1:
return texts, lambda t: t
length_sorted_idx = np.argsort([-sorter(sen) for sen in texts])
texts_sorted = [texts[idx] for idx in length_sorted_idx]
return texts_sorted, lambda unsorted_embeddings: [
unsorted_embeddings[idx] for idx in np.argsort(length_sorted_idx)
]
def _batch(self, texts: List[str]) -> List[List[str]]:
"""
Splits a list of texts into batches of size max `self._batch_size`.
Args:
texts (List[str]): List of texts to split.
Returns:
List[List[str]]: List of batches of texts.
"""
if len(texts) == 1:
return [texts]
batches = []
for start_index in range(0, len(texts), self._batch_size):
batches.append(texts[start_index : start_index + self._batch_size])
return batches
@staticmethod
def _unbatch(batch_of_texts: List[List[Any]]) -> List[Any]:
"""
Merges batches of texts into a single list.
Args:
batch_of_texts (List[List[Any]]): List of batches of texts.
Returns:
List[Any]: Merged list of texts.
"""
if len(batch_of_texts) == 1 and len(batch_of_texts[0]) == 1:
return batch_of_texts[0]
texts = []
for sublist in batch_of_texts:
texts.extend(sublist)
return texts
def _kwargs_post_request(self, model: str, texts: List[str]) -> Dict[str, Any]:
"""
Builds the kwargs for the POST request, used by sync method.
Args:
model (str): The model to use for embedding.
texts (List[str]): List of texts to embed.
Returns:
Dict[str, Any]: Dictionary of POST request parameters.
"""
return dict(
url=f"{self.host}/embedding",
headers={
"accept": "application/json",
"content-type": "application/json",
"Authorization": f"Bearer {self.api_key}",
},
json=dict(
input=texts,
model=model,
),
)
def _sync_request_embed(
self, model: str, batch_texts: List[str]
) -> List[List[float]]:
"""
Sends a synchronous request to the embedding endpoint.
Args:
model (str): The model to use for embedding.
batch_texts (List[str]): Batch of texts to embed.
Returns:
List[List[float]]: List of embeddings for the batch.
Raises:
Exception: If the response status is not 200.
"""
response = requests.post(
**self._kwargs_post_request(model=model, texts=batch_texts)
)
if response.status_code != 200:
raise Exception(
f"TextEmbed responded with an unexpected status message "
f"{response.status_code}: {response.text}"
)
return [e["embedding"] for e in response.json()["data"]]
def embed(self, model: str, texts: List[str]) -> List[List[float]]:
"""
Embeds a list of texts synchronously.
Args:
model (str): The model to use for embedding.
texts (List[str]): List of texts to embed.
Returns:
List[List[float]]: List of embeddings for the texts.
"""
perm_texts, unpermute_func = self._permute(texts)
perm_texts_batched = self._batch(perm_texts)
# Request
map_args = (
self._sync_request_embed,
[model] * len(perm_texts_batched),
perm_texts_batched,
)
if len(perm_texts_batched) == 1:
embeddings_batch_perm = list(map(*map_args))
else:
with ThreadPoolExecutor(32) as p:
embeddings_batch_perm = list(p.map(*map_args))
embeddings_perm = self._unbatch(embeddings_batch_perm)
embeddings = unpermute_func(embeddings_perm)
return embeddings
async def _async_request(
self, session: aiohttp.ClientSession, **kwargs: Dict[str, Any]
) -> List[List[float]]:
"""
Sends an asynchronous request to the embedding endpoint.
Args:
session (aiohttp.ClientSession): The aiohttp session for the request.
kwargs (Dict[str, Any]): Dictionary of POST request parameters.
Returns:
List[List[float]]: List of embeddings for the request.
Raises:
Exception: If the response status is not 200.
"""
async with session.post(**kwargs) as response: # type: ignore
if response.status != 200:
raise Exception(
f"TextEmbed responded with an unexpected status message "
f"{response.status}: {await response.text()}"
)
embedding = (await response.json())["data"]
return [e["embedding"] for e in embedding]
async def aembed(self, model: str, texts: List[str]) -> List[List[float]]:
"""
Embeds a list of texts asynchronously.
Args:
model (str): The model to use for embedding.
texts (List[str]): List of texts to embed.
Returns:
List[List[float]]: List of embeddings for the texts.
"""
perm_texts, unpermute_func = self._permute(texts)
perm_texts_batched = self._batch(perm_texts)
async with aiohttp.ClientSession(
connector=aiohttp.TCPConnector(limit=32)
) as session:
embeddings_batch_perm = await asyncio.gather(
*[
self._async_request(
session=session,
**self._kwargs_post_request(model=model, texts=t),
)
for t in perm_texts_batched
]
)
embeddings_perm = self._unbatch(embeddings_batch_perm)
embeddings = unpermute_func(embeddings_perm)
return embeddings
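
A hedged end-to-end usage sketch of the new class; the endpoint, key, model, and texts are placeholders and assume a TextEmbed server is already running at `api_url`:

```python
# Placeholders throughout; requires a running TextEmbed server at api_url.
from langchain_community.embeddings import TextEmbedEmbeddings

embeddings = TextEmbedEmbeddings(
    model="sentence-transformers/all-MiniLM-L6-v2",
    api_url="http://localhost:8000/v1",
    api_key="<API_KEY>",
)

doc_vectors = embeddings.embed_documents(["first document", "second document"])
query_vector = embeddings.embed_query("a search query")
```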

View File

@@ -1,7 +1,19 @@
from langchain_community.graph_vectorstores.extractors.gliner_link_extractor import (
GLiNERInput,
GLiNERLinkExtractor,
)
from langchain_community.graph_vectorstores.extractors.hierarchy_link_extractor import (
HierarchyInput,
HierarchyLinkExtractor,
)
from langchain_community.graph_vectorstores.extractors.html_link_extractor import (
HtmlInput,
HtmlLinkExtractor,
)
from langchain_community.graph_vectorstores.extractors.keybert_link_extractor import (
KeybertInput,
KeybertLinkExtractor,
)
from langchain_community.graph_vectorstores.extractors.link_extractor import (
LinkExtractor,
)
@@ -10,8 +22,16 @@ from langchain_community.graph_vectorstores.extractors.link_extractor_adapter im
)
__all__ = [
"LinkExtractor",
"LinkExtractorAdapter",
"GLiNERInput",
"GLiNERLinkExtractor",
"HierarchyInput",
"HierarchyLinkExtractor",
"HtmlInput",
"HtmlLinkExtractor",
"KeybertInput",
"KeybertLinkExtractor",
"LinkExtractor",
"LinkExtractor",
"LinkExtractorAdapter",
"LinkExtractorAdapter",
]

View File

@@ -0,0 +1,71 @@
from typing import Any, Dict, Iterable, List, Optional, Set, Union
from langchain_core.documents import Document
from langchain_core.graph_vectorstores.links import Link
from langchain_community.graph_vectorstores.extractors.link_extractor import (
LinkExtractor,
)
# TypeAlias is not available in Python 3.9, so we can't use that or the newer `type`.
GLiNERInput = Union[str, Document]
class GLiNERLinkExtractor(LinkExtractor[GLiNERInput]):
"""Link documents with common named entities using GLiNER <https://github.com/urchade/GLiNER>."""
def __init__(
self,
labels: List[str],
*,
kind: str = "entity",
model: str = "urchade/gliner_mediumv2.1",
extract_kwargs: Optional[Dict[str, Any]] = None,
):
"""Extract named entities using GLiNER.
Example:
.. code-block:: python
extractor = GLiNERLinkExtractor(
labels=["Person", "Award", "Date", "Competitions", "Teams"]
)
results = extractor.extract_one("some long text...")
Args:
labels: List of kinds of entities to extract.
kind: Kind of links to produce with this extractor.
model: GLiNER model to use.
extract_kwargs: Keyword arguments to pass to GLiNER.
"""
try:
from gliner import GLiNER
self._model = GLiNER.from_pretrained(model)
except ImportError:
raise ImportError(
"gliner is required for GLiNERLinkExtractor. "
"Please install it with `pip install gliner`."
) from None
self._labels = labels
self._kind = kind
self._extract_kwargs = extract_kwargs or {}
def extract_one(self, input: GLiNERInput) -> Set[Link]: # noqa: A002
return next(iter(self.extract_many([input])))
def extract_many(
self,
inputs: Iterable[GLiNERInput],
) -> Iterable[Set[Link]]:
strs = [i if isinstance(i, str) else i.page_content for i in inputs]
for entities in self._model.batch_predict_entities(
strs, self._labels, **self._extract_kwargs
):
yield {
Link.bidir(kind=f"{self._kind}:{e['label']}", tag=e["text"])
for e in entities
}

View File

@@ -0,0 +1,108 @@
from typing import Callable, List, Set
from langchain_core.documents import Document
from langchain_core.graph_vectorstores.links import Link
from langchain_community.graph_vectorstores.extractors.link_extractor import (
LinkExtractor,
)
from langchain_community.graph_vectorstores.extractors.link_extractor_adapter import (
LinkExtractorAdapter,
)
# TypeAlias is not available in Python 3.9, so we can't use that or the newer `type`.
HierarchyInput = List[str]
_PARENT: str = "p:"
_CHILD: str = "c:"
_SIBLING: str = "s:"
class HierarchyLinkExtractor(LinkExtractor[HierarchyInput]):
def __init__(
self,
*,
kind: str = "hierarchy",
parent_links: bool = True,
child_links: bool = False,
sibling_links: bool = False,
):
"""Extract links from a document hierarchy.
Example:
.. code-block:: python
# Given three paths (in this case, within the "Root" document):
h1 = ["Root", "H1"]
h1a = ["Root", "H1", "a"]
h1b = ["Root", "H1", "b"]
# Parent links `h1a` and `h1b` to `h1`.
# Child links `h1` to `h1a` and `h1b`.
# Sibling links `h1a` and `h1b` together (both directions).
Example use with documents:
.. code-block:: python
transformer = LinkExtractorTransformer([
HierarchyLinkExtractor().as_document_extractor(
# Assumes the "path" to each document is in the metadata.
# Could split strings, etc.
lambda doc: doc.metadata.get("path", [])
)
])
linked = transformer.transform_documents(docs)
Args:
kind: Kind of links to produce with this extractor.
parent_links: Link from a section to its parent.
child_links: Link from a section to its children.
sibling_links: Link from a section to other sections with the same parent.
"""
self._kind = kind
self._parent_links = parent_links
self._child_links = child_links
self._sibling_links = sibling_links
def as_document_extractor(
self, hierarchy: Callable[[Document], HierarchyInput]
) -> LinkExtractor[Document]:
"""Create a LinkExtractor from `Document`.
Args:
hierarchy: Function that returns the path for the given document.
Returns:
A `LinkExtractor[Document]` suitable for application to `Documents` directly
or with `LinkExtractorTransformer`.
"""
return LinkExtractorAdapter(underlying=self, transform=hierarchy)
def extract_one(
self,
input: HierarchyInput,
) -> Set[Link]:
this_path = "/".join(input)
parent_path = None
links = set()
if self._parent_links:
# This is linked from everything with this parent path.
links.add(Link.incoming(kind=self._kind, tag=_PARENT + this_path))
if self._child_links:
# This is linked to every child with this as its "parent" path.
links.add(Link.outgoing(kind=self._kind, tag=_CHILD + this_path))
if len(input) >= 1:
parent_path = "/".join(input[0:-1])
if self._parent_links and len(input) > 1:
# This is linked to the nodes with the given parent path.
links.add(Link.outgoing(kind=self._kind, tag=_PARENT + parent_path))
if self._child_links and len(input) > 1:
# This is linked from every node with the given parent path.
links.add(Link.incoming(kind=self._kind, tag=_CHILD + parent_path))
if self._sibling_links:
# This is a sibling of everything with the same parent.
links.add(Link.bidir(kind=self._kind, tag=_SIBLING + parent_path))
return links
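
A hedged sketch of the tags produced for one path, derived from the _PARENT/_CHILD/_SIBLING prefixes above; the path is taken from the docstring example:

```python
# Path from the docstring example; expected tags are inferred from the prefixes above.
from langchain_community.graph_vectorstores.extractors import HierarchyLinkExtractor

extractor = HierarchyLinkExtractor(parent_links=True, sibling_links=True)
links = extractor.extract_one(["Root", "H1", "a"])
# Expected: incoming "p:Root/H1/a" (its children point at it),
#           outgoing "p:Root/H1"   (it points at its parent),
#           bidir    "s:Root/H1"   (siblings share the same parent path).
```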

View File

@@ -0,0 +1,73 @@
from typing import Any, Dict, Iterable, Optional, Set, Union
from langchain_core.documents import Document
from langchain_core.graph_vectorstores.links import Link
from langchain_community.graph_vectorstores.extractors.link_extractor import (
LinkExtractor,
)
KeybertInput = Union[str, Document]
class KeybertLinkExtractor(LinkExtractor[KeybertInput]):
def __init__(
self,
*,
kind: str = "kw",
embedding_model: str = "all-MiniLM-L6-v2",
extract_keywords_kwargs: Optional[Dict[str, Any]] = None,
):
"""Extract keywords using KeyBERT <https://maartengr.github.io/KeyBERT/>.
Example:
.. code-block:: python
extractor = KeybertLinkExtractor()
results = extractor.extract_one(PAGE_1)
Args:
kind: Kind of links to produce with this extractor.
embedding_model: Name of the embedding model to use with KeyBERT.
extract_keywords_kwargs: Keyword arguments to pass to KeyBERT's
`extract_keywords` method.
"""
try:
import keybert
self._kw_model = keybert.KeyBERT(model=embedding_model)
except ImportError:
raise ImportError(
"keybert is required for KeybertLinkExtractor. "
"Please install it with `pip install keybert`."
) from None
self._kind = kind
self._extract_keywords_kwargs = extract_keywords_kwargs or {}
def extract_one(self, input: KeybertInput) -> Set[Link]: # noqa: A002
keywords = self._kw_model.extract_keywords(
input if isinstance(input, str) else input.page_content,
**self._extract_keywords_kwargs,
)
return {Link.bidir(kind=self._kind, tag=kw[0]) for kw in keywords}
def extract_many(
self,
inputs: Iterable[KeybertInput],
) -> Iterable[Set[Link]]:
inputs = list(inputs)
if len(inputs) == 1:
# Even though we pass a list, if it contains one item, keybert will
# flatten it. This means it's easier to just call the special case
# for one item.
yield self.extract_one(inputs[0])
elif len(inputs) > 1:
strs = [i if isinstance(i, str) else i.page_content for i in inputs]
extracted = self._kw_model.extract_keywords(
strs, **self._extract_keywords_kwargs
)
for keywords in extracted:
yield {Link.bidir(kind=self._kind, tag=kw[0]) for kw in keywords}
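
A hedged usage sketch; the sample sentence is illustrative, the keybert package must be installed, and the exact keyword tags depend on the underlying embedding model:

```python
# Illustrative text; requires `pip install keybert`.
from langchain_community.graph_vectorstores.extractors import KeybertLinkExtractor

extractor = KeybertLinkExtractor()
links = extractor.extract_one(
    "Supervised learning is the machine learning task of learning a function "
    "that maps an input to an output based on example input-output pairs."
)
# Yields bidirectional links with kind="kw" and model-dependent keyword strings as tags.
```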

View File

@@ -555,10 +555,11 @@ class Neo4jGraph(GraphStore):
el["labelsOrTypes"] == [BASE_ENTITY_LABEL]
and el["properties"] == ["id"]
for el in self.structured_schema.get("metadata", {}).get(
"constraint"
"constraint", []
)
]
)
if not constraint_exists:
# Create constraint
self.query(

View File

@@ -640,6 +640,12 @@ def _import_yuan2() -> Type[BaseLLM]:
return Yuan2
def _import_you() -> Type[BaseLLM]:
from langchain_community.llms.you import You
return You
def _import_volcengine_maas() -> Type[BaseLLM]:
from langchain_community.llms.volcengine_maas import VolcEngineMaasLLM
@@ -847,6 +853,8 @@ def __getattr__(name: str) -> Any:
return _import_yandex_gpt()
elif name == "Yuan2":
return _import_yuan2()
elif name == "You":
return _import_you()
elif name == "VolcEngineMaasLLM":
return _import_volcengine_maas()
elif name == "type_to_cls_dict":
@@ -959,6 +967,7 @@ __all__ = [
"Writer",
"Xinference",
"YandexGPT",
"You",
"Yuan2",
]
@@ -1056,6 +1065,7 @@ def get_type_to_cls_dict() -> Dict[str, Callable[[], Type[BaseLLM]]]:
"qianfan_endpoint": _import_baidu_qianfan_endpoint,
"yandex_gpt": _import_yandex_gpt,
"yuan2": _import_yuan2,
"you": _import_you,
"VolcEngineMaasLLM": _import_volcengine_maas,
"SparkLLM": _import_sparkllm,
}

Some files were not shown because too many files have changed in this diff.