mirror of https://github.com/hwchase17/langchain.git synced 2025-05-01 05:15:17 +00:00

Compare commits


241 Commits

Author SHA1 Message Date
Ben Gladwell
da59eb7eb4
anthropic: Allow kwargs to pass through when counting tokens ()
- **Description:** `ChatAnthropic.get_num_tokens_from_messages` does not
currently accept `kwargs` and pass them on to
`self._client.beta.messages.count_tokens`. This is a problem if you need
to pass specific options to `count_tokens`, such as the `thinking`
option. This PR fixes that (see the sketch below).
- **Issue:** N/A
- **Dependencies:** None
- **Twitter handle:** @bengladwell
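
A minimal sketch of the intended usage (the model name and `thinking` payload are illustrative):

```python
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import HumanMessage

llm = ChatAnthropic(model="claude-3-7-sonnet-latest")

# Extra kwargs are now forwarded to the underlying
# client.beta.messages.count_tokens call.
num_tokens = llm.get_num_tokens_from_messages(
    [HumanMessage("What is 2 + 2?")],
    thinking={"type": "enabled", "budget_tokens": 5000},
)
```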

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-04-30 17:56:22 -04:00
Really Him
918c950737
DOCS: partners/chroma: Fix documentation around chroma query filter syntax ()

**Description**:
* Starting to put together some PR's to fix the typing around
`langchain-chroma` `filter` and `where_document` query filtering, as
mentioned:

https://github.com/langchain-ai/langchain/issues/30879
https://github.com/langchain-ai/langchain/issues/30507

The typing of `dict[str, str]` is on the one hand too restrictive (it
marks valid filter expressions as ill-typed) and on the other too
permissive (it allows illegal filter expressions). That's not what this
PR addresses, though. This PR just removes from the documentation some
examples of filters that are illegal and syntactically incorrect: (a)
dictionaries with keys like `$contains` where the key is missing
quotation marks; (b) dictionaries with multiple entries (e.g.
`{"foo": "bar", "qux": "baz"}`), which are illegal in Chroma filter
syntax and will raise an exception. Filter dictionaries in Chroma must
have one and only one key. Again, this is just the documentation issue,
which is the lowest-hanging fruit. I also think we need to update the
types for `filter` and `where_document` to be at the very least
`dict[str, Any]`, or, since we have access to Chroma's types, the
`Where` and `WhereDocument` types. That has a wider blast radius,
though, so I'm starting small.

This PR does not fix the issues mentioned above; it just starts to get
the ball rolling and cleans up the documentation. The sketch below
illustrates the distinction.
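
For illustration, a correct multi-condition filter combines clauses with an explicit operator rather than using a multi-entry dict (a sketch assuming Chroma's standard `where`/`where_document` syntax):

```python
# Illegal: Chroma filters must have exactly one top-level key.
bad_filter = {"foo": "bar", "qux": "baz"}

# Legal: combine conditions with an explicit $and operator.
good_filter = {"$and": [{"foo": "bar"}, {"qux": "baz"}]}

# Document filters likewise take a single, quoted operator key.
where_document = {"$contains": "search string"}
```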




---------

Co-authored-by: Really Him <hesereallyhim@proton.me>
2025-04-30 17:51:07 -04:00
Mateus Scheper
ed7cd3c5c4
docs: Fixing typo: "chocalate" -> "chocolate". ()
Just fixing a typo that was making me anxious: "chocalate" ->
"chocolate".
2025-04-30 15:09:36 -04:00
yberber-sap
952a0b7b40
Docs: Fix SAP HANA Cloud docs - remove pip output, update vectorstore link, rename provider ()
This PR includes the following documentation fixes for the SAP HANA
Cloud vector store integration:
- Removed stale output from the `%pip install` code cell.
- Replaced an unrelated vectorstore documentation link on the provider
overview page.
- Renamed the provider from "SAP HANA" to "SAP HANA Cloud".
2025-04-30 08:57:40 -04:00
Akshay Dongare
0b8e9868e6
docs: update LiteLLM integration docs for router migration to langchain-litellm ()
# What's Changed?
- [x] 1. docs: **docs/docs/integrations/chat/litellm.ipynb** : Updated
with docs for litellm_router since it has been moved into the
[langchain-litellm](https://github.com/Akshay-Dongare/langchain-litellm)
package along with ChatLiteLLM

- [x] 2. docs: **docs/docs/integrations/chat/litellm_router.ipynb** :
Deleted to avoid redundancy

- [x] 3. docs: **docs/docs/integrations/providers/litellm.mdx** :
Updated to reflect inclusion of ChatLiteLLMRouter class

- [x] Lint and test: Done

# Issue:
- [x] Related to the issue
https://github.com/langchain-ai/langchain/issues/30368

# About me
- [x] 🔗 LinkedIn:
[akshay-dongare](https://www.linkedin.com/in/akshay-dongare/)
2025-04-29 17:48:11 -04:00
Lukas Scheucher
275ba2ec37
Add Compass Labs toolkits to langchain docs ()
- **Description**: Adding documentation notebook for [compass-langchain
toolkit](https://pypi.org/project/langchain-compass/).
- **Issue**: N/a
- **Dependencies**: langchain-compass  
- **Twitter handle**: @labs_compass

---------

Co-authored-by: ccosnett <conor142857@icloud.com>
2025-04-29 17:42:43 -04:00
ccurme
bdb7c4a8b3
huggingface: fix embeddings return type ()
Integration tests failing

cc @hanouticelina
2025-04-29 18:45:04 +00:00
célina
868f07f8f4
partners: (langchain-huggingface) Chat Models - Integrate Hugging Face Inference Providers and remove deprecated code ()
Hi there, I'm Célina from 🤗,
This PR introduces support for Hugging Face's serverless Inference
Providers (documentation
[here](https://huggingface.co/docs/inference-providers/index)), allowing
users to specify different providers for chat completion and text
generation tasks.

This PR also removes the usage of `InferenceClient.post()` method in
`HuggingFaceEndpoint`, in favor of the task-specific `text_generation`
method. `InferenceClient.post()` is deprecated and will be removed in
`huggingface_hub v0.31.0`.

---
## Changes made
- bumped the minimum required version of the `huggingface-hub` package
to ensure compatibility with the latest API usage.
- added a `provider` field to `HuggingFaceEndpoint`, enabling users to
select the inference provider (e.g., 'cerebras', 'together',
'fireworks-ai'). Defaults to `hf-inference` (HF Inference API).
- replaced the deprecated `InferenceClient.post()` call in
`HuggingFaceEndpoint` with the task-specific `text_generation` method
for future-proofing; `post()` will be removed in huggingface-hub
v0.31.0.
- updated the `ChatHuggingFace` component:
    - added async and streaming support.
    - added support for tool calling.
- exposed underlying chat completion parameters for more granular
control.
- Added integration tests for `ChatHuggingFace` and updated the
corresponding unit tests.

  All changes are backward compatible.
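
A minimal sketch of the new `provider` field (repo and provider names are illustrative):

```python
from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint

# provider defaults to "hf-inference" (the HF Inference API);
# other serverless Inference Providers can be selected explicitly.
llm = HuggingFaceEndpoint(
    repo_id="meta-llama/Llama-3.1-8B-Instruct",
    provider="together",
)
chat = ChatHuggingFace(llm=llm)
```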

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-04-29 09:53:14 -04:00
ccurme
3072e4610a
community: move to separate repo (continued) ()
Missed these after merging
2025-04-29 09:25:32 -04:00
ccurme
9ff5b5d282
community: move to separate repo ()
langchain-community is moving to
https://github.com/langchain-ai/langchain-community
2025-04-29 09:22:04 -04:00
Sydney Runkle
7e926520d5
packaging: remove Python upper bound for langchain and co libs ()
Follow up to https://github.com/langchain-ai/langsmith-sdk/pull/1696,
I've bumped the `langsmith` version where applicable in `uv.lock`.

Type-checking problems arise here because deps have been updated in
`pyproject.toml` without running `uv lock`; we should enforce that in
the future (it goes with the other dependabot todos :)).
2025-04-28 14:44:28 -04:00
Sydney Runkle
d614842d23
ci: temporarily run chroma on 3.12 for CI ()
Waiting on a fix for https://github.com/chroma-core/chroma/issues/4382
2025-04-28 13:20:37 -04:00
Michael Li
ff1602f0fd
docs: replace initialize_agent with create_react_agent in bash.ipynb (part of ) ()

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-28 13:07:22 -04:00
Christophe Bornet
aee7988a94
community: add mypy warn_unused_ignores rule () 2025-04-28 11:54:12 -04:00
Bae-ChangHyun
a2863f8757
community: add 'get_col_comments' option for retrieve database columns comments ()
## Description
Added support for retrieving column comments in the SQL Database
utility. This feature allows users to see comments associated with
database columns when querying table information. Column comments
provide valuable metadata that helps LLMs better understand the
semantics and purpose of database columns.

A new optional parameter `get_col_comments` was added to the
`get_table_info` method, defaulting to `False` for backward
compatibility. When set to `True`, it retrieves and formats column
comments for each table.

Currently, this feature is supported on PostgreSQL, MySQL, and Oracle
databases.

## Implementation
You should create a table with column comments beforehand.

```python
db = SQLDatabase.from_uri("YOUR_DB_URI")
print(db.get_table_info(get_col_comments=True)) 
```
## Result
```
CREATE TABLE test_table (
	name VARCHAR,
	school VARCHAR
)
/*
Column Comments: {'name': 'person name', 'school': 'school_name'}
*/

/*
3 rows from test_table:
name
a
b
c
*/
```

## Benefits
1. Enhances LLM's understanding of database schema semantics
2. Preserves valuable domain knowledge embedded in database design
3. Improves accuracy of SQL query generation
4. Provides more context for data interpretation

Tests are available in
`langchain/libs/community/tests/test_sql_get_table_info.py`.

---------

Co-authored-by: chbae <chbae@gcsc.co.kr>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-28 15:19:46 +00:00
yberber-sap
3fb0a55122
Deprecate HanaDB, HanaTranslator and update example notebook to use new implementation ()
- **Description:**  
This PR marks the `HanaDB` vector store (and related utilities) in
`langchain_community` as deprecated using the `@deprecated` annotation.
  - Set `since="0.1.0"` and `removal="1.0"`  
- Added a clear migration path and a link to the SAP-maintained
replacement in the
[`langchain_hana`](https://github.com/SAP/langchain-integration-for-sap-hana-cloud)
package.
Additionally, the example notebook has been updated to use the new
`HanaDB` class from `langchain_hana`, ensuring users follow the
recommended integration moving forward.

- **Issue:** None 

- **Dependencies:**  None

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-27 16:37:35 -04:00
湛露先生
5fb8fd863a
langchain_openai: clean duplicate code for openai embedding. ()
The `_chunk_size` is not changed by the `self._tokenize` method, so I
think this is duplicate code.

Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-04-27 15:07:41 -04:00
Philipp Schmid
79a537d308
Update Chat and Embedding guides ()
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-27 18:06:59 +00:00
ccurme
ba2518995d
standard-tests: add condition for image tool message test ()
Require support for [standard
format](https://python.langchain.com/docs/how_to/multimodal_inputs/).
2025-04-27 17:24:43 +00:00
ccurme
04a899ebe3
infra: support third-party integration packages in API ref build () 2025-04-25 16:02:27 -04:00
Stefano Lottini
a82d987f09
docs: Astra DB, replace leftover links to "community" legacy package + modernize doc loader signature ()
This PR brings some much-needed updates to some of the Astra DB shorter
example notebooks,

- ensuring imports are from the partner package instead of the
(deprecated) community legacy package
- improving the wording in a few related places
- updating the constructor signature introduced with the latest partner
package's AstraDBLoader
- marking the community package counterpart of the LLM caches as
deprecated in the summary table at the end of the page.
2025-04-25 15:45:24 -04:00
ccurme
a60fd06784
docs: document OpenAI flex processing ()
Following https://github.com/langchain-ai/langchain/pull/31005
2025-04-25 15:10:25 -04:00
ccurme
629b7a5a43
openai[patch]: add explicit attribute for service tier () 2025-04-25 18:38:23 +00:00
ccurme
ab871a7b39
docs: enable milvus in API ref build ()
Reverts 

Should be fixed following
https://github.com/langchain-ai/langchain-milvus/pull/68
2025-04-25 12:48:10 +00:00
Georgi Stefanov
d30c56a8c1
langchain: return attachments in _get_response ()
This is a PR to return the message attachments in `_get_response`: when
files are generated, these attachments are not currently returned, so
the generated files cannot be retrieved.

Fixes issue: https://github.com/langchain-ai/langchain/issues/30851
2025-04-24 21:39:11 -04:00
Abderrahmane Gourragui
09c1991e96
docs: update document examples ()
## Description:

As I was following the docs, I found a couple of small issues.

This fixes some unused imports on the [extraction
page](https://python.langchain.com/docs/tutorials/extraction/#the-extractor)
and updates the examples on the [classification
page](https://python.langchain.com/docs/tutorials/classification/#quickstart)
to be independent from the chat model.
2025-04-24 18:07:55 -04:00
ccurme
a7903280dd
openai[patch]: delete redundant tests ()
These are covered by standard tests.
2025-04-24 17:56:32 +00:00
Kyle Jeong
d0f0d1f966
[docs/community]: langchain docs + browserbaseloader fix ()
community: fix browserbase integration
docs: update docs

- **Description:** Updated BrowserbaseLoader to use the new Python SDK.
    - **Issue:** update browserbase integration with langchain
    - **Dependencies:** n/a
    - **Twitter handle:** @kylejeong21

2025-04-24 13:38:49 -04:00
ccurme
403fae8eec
core: release 0.3.56 () 2025-04-24 13:22:31 -04:00
Jacob Lee
d6b50ad3f6
docs: Update Google Analytics tag in docs () 2025-04-24 10:19:10 -07:00
ccurme
10a9c24dae
openai: fix streaming reasoning without summaries ()
Following https://github.com/langchain-ai/langchain/pull/30909: need to
retain "empty" reasoning output when streaming, e.g.,
```python
{'id': 'rs_...', 'summary': [], 'type': 'reasoning'}
```
Tested by existing integration tests, which are currently failing.
2025-04-24 16:01:45 +00:00
ccurme
8fc7a723b9
core: release 0.3.56rc1 () 2025-04-24 15:09:44 +00:00
ccurme
f4863f82e2
core[patch]: fix edge cases for _is_openai_data_block () 2025-04-24 10:48:52 -04:00
Philipp Schmid
ae4b6380d9
Documentation: Add Google Gemini dropdown ()
This PR adds Google Gemini (via AI Studio and Gemini API). Feel free to
change the ordering, if needed.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-24 10:00:16 -04:00
Philipp Schmid
ffbc64c72a
Documentation: Improve structure of Google integrations page ()
This PR restructures the main Google integrations documentation page
(`docs/docs/integrations/providers/google.mdx`) for better clarity and
updates content.

**Key changes:**

* **Separated Sections:** Divided integrations into distinct `Google
Generative AI (Gemini API & AI Studio)`, `Google Cloud`, and `Other
Google Products` sections.
* **Updated Generative AI:** Refreshed the introduction and the `Google
Generative AI` section with current information and quickstart examples
for the Gemini API via `langchain-google-genai`.
* **Reorganized Content:** Moved non-Cloud Platform specific
integrations (e.g., Drive, GMail, Search tools, ScaNN) to the `Other
Google Products` section.
* **Cleaned Up:** Minor improvements to descriptions and code snippets.

This aims to make it easier for users to find the relevant Google
integrations based on whether they are using the Gemini API directly or
Google Cloud services.

| Before                | After      |
|-----------------------|------------|
| ![Screenshot 2025-04-24 at 14 56
23](https://github.com/user-attachments/assets/ff967ec8-a833-4e8f-8015-61af8a4fac8b)
| ![Screenshot 2025-04-24 at 14 56
15](https://github.com/user-attachments/assets/179163f1-e805-484a-bbf6-99f05e117b36)
|

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-24 09:58:46 -04:00
Jacob Lee
6b0b317cb5
feat(core): Autogenerate filenames for when converting file content blocks to OpenAI format ()
CC @ccurme

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-24 13:36:31 +00:00
ccurme
21962e2201
docs: temporarily disable milvus in API ref build () 2025-04-24 09:31:23 -04:00
Behrad Hemati
1eb0bdadfa
community: add indexname to other functions in opensearch ()
- [x] **PR title**: "community: add indexname to other functions in
opensearch"



- [x] **PR message**:
- **Description:** add the ability to override the index name when
provided in the kwargs of sub-functions. When used in a WSGI
application, it's crucial to be able to change parameters dynamically
(see the sketch below).
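
A sketch of the intended usage (the URL, index names, and embeddings are illustrative):

```python
from langchain_community.embeddings import FakeEmbeddings
from langchain_community.vectorstores import OpenSearchVectorSearch

vector_store = OpenSearchVectorSearch(
    opensearch_url="http://localhost:9200",
    index_name="default-index",
    embedding_function=FakeEmbeddings(size=8),
)

# With this change, search sub-functions accept an index name in
# kwargs, overriding the one set at init.
hits = vector_store.similarity_search("query", index_name="tenant-b-index")
```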


2025-04-24 08:59:33 -04:00
Nicky Parseghian
7ecdac5240
community: Strip URLs from sitemap. ()
Fixes 

- **Description:** Simply strips the loc value when building the
element.
    - **Issue:** Fixes 
2025-04-23 18:18:42 -04:00
ccurme
faef3e5d50
core, standard-tests: support PDF and audio input in Chat Completions format ()
Chat models currently implement support for:
- images in OpenAI Chat Completions format
- other multimodal types (e.g., PDF and audio) in a cross-provider
[standard
format](https://python.langchain.com/docs/how_to/multimodal_inputs/)

Here we update core to extend support to PDF and audio input in Chat
Completions format. **If an OAI-format PDF or audio content block is
passed into any chat model, it will be transformed to the LangChain
standard format**. We assume that any chat model supporting OAI-format
PDF or audio has implemented support for the standard format.
2025-04-23 18:32:51 +00:00
Bagatur
d4fc734250
core[patch]: update dict prompt template ()
Align with JS changes made in
https://github.com/langchain-ai/langchainjs/pull/8043
2025-04-23 10:04:50 -07:00
ccurme
4bc70766b5
core, openai: support standard multi-modal blocks in convert_to_openai_messages () 2025-04-23 11:20:44 -04:00
ccurme
e4877e5ef1
fireworks: release 0.3.0 () 2025-04-23 10:08:38 -04:00
Christophe Bornet
8c5ae108dd
text-splitters: Set strict mypy rules ()
* Add strict mypy rules
* Fix mypy violations
* Add error codes to all type ignores
* Add ruff rule PGH003
* Bump mypy version to 1.15
2025-04-22 20:41:24 -07:00
ccurme
eedda164c6
fireworks[minor]: remove default model and temperature ()
`mixtral-8x-7b-instruct` was recently retired from Fireworks Serverless.

Here we remove the default model altogether, so that the model must be
explicitly specified on init:
```python
ChatFireworks(model="accounts/fireworks/models/llama-v3p1-70b-instruct")  # for example
```

We also set a null default for `temperature`, which previously defaulted
to 0.0. This parameter will no longer be included in request payloads
unless it is explicitly provided.
2025-04-22 15:58:58 -04:00
Grant
4be55f7c89
docs: fix typo at 175 ()
**Description:** Corrected pre-buit to pre-built.
**Issue:** Little typo.
2025-04-22 18:13:07 +00:00
CLOVA Studio 개발
577cb53a00
community: update Naver integration to use langchain-naver package and improve documentation ()
## **Description:** 
This PR was requested after the `langchain-naver` partner-managed
packages were completed.
We build our package as requested in [this
comment](https://github.com/langchain-ai/langchain/pull/29243#issuecomment-2595222791)
and the initial version is now uploaded to
[pypi](https://pypi.org/project/langchain-naver/).
So we've updated some of our documents with the additional changed
features and how to download our partner-managed package.

## **Dependencies:** 

https://github.com/langchain-ai/langchain/pull/29243#issuecomment-2595222791

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-04-22 12:00:10 -04:00
ccurme
a7c1bccd6a
openai[patch]: remove xfails from image token counting tests ()
These appear to be passing again.
2025-04-22 15:55:33 +00:00
ccurme
25d77aa8b4
community: release 0.3.22 () 2025-04-22 15:34:47 +00:00
ccurme
59fd4cb4c0
docs: update package registry sort order () 2025-04-22 15:27:32 +00:00
ccurme
b8c454b42b
langchain: release 0.3.24 () 2025-04-22 11:23:34 -04:00
Dmitrii Rashchenko
a43df006de
Support of openai reasoning summary streaming ()
**langchain_openai: Support of reasoning summary streaming**

**Description:**
OpenAI API now supports streaming reasoning summaries for reasoning
models (o1, o3, o3-mini, o4-mini). More info about it:
https://platform.openai.com/docs/guides/reasoning#reasoning-summaries

It is supported only in the Responses API (not the Completions API), so
you need to create the LangChain OpenAI model as follows to support
reasoning summary streaming:

```
llm = ChatOpenAI(
    model="o4-mini", # also o1, o3, o3-mini support reasoning streaming
    use_responses_api=True,  # reasoning streaming works only with responses api, not completion api
    model_kwargs={
        "reasoning": {
            "effort": "high",  # also "low" and "medium" supported
            "summary": "auto"  # some models support "concise" summary, some "detailed", but auto will always work
        }
    }
)
```

Now, if you stream events from llm:

```
async for event in llm.astream_events(prompt, version="v2"):
    print(event)
```

or

```
for chunk in llm.stream(prompt):
    print(chunk)
```

OpenAI API will send you new types of events:
`response.reasoning_summary_text.added`
`response.reasoning_summary_text.delta`
`response.reasoning_summary_text.done`

These events are new, so they were previously ignored. I have added
support for them in `_convert_responses_chunk_to_generation_chunk`, so
reasoning chunks or the full reasoning summary are added to the chunk's
`additional_kwargs`.

Example of how this reasoning summary may be printed:

```
    async for event in llm.astream_events(prompt, version="v2"):
        if event["event"] == "on_chat_model_stream":
            chunk: AIMessageChunk = event["data"]["chunk"]
            if "reasoning_summary_chunk" in chunk.additional_kwargs:
                print(chunk.additional_kwargs["reasoning_summary_chunk"], end="")
            elif "reasoning_summary" in chunk.additional_kwargs:
                print("\n\nFull reasoning step summary:", chunk.additional_kwargs["reasoning_summary"])
            elif chunk.content and chunk.content[0]["type"] == "text":
                print(chunk.content[0]["text"], end="")
```

or

```
    for chunk in llm.stream(prompt):
        if "reasoning_summary_chunk" in chunk.additional_kwargs:
            print(chunk.additional_kwargs["reasoning_summary_chunk"], end="")
        elif "reasoning_summary" in chunk.additional_kwargs:
            print("\n\nFull reasoning step summary:", chunk.additional_kwargs["reasoning_summary"])
        elif chunk.content and chunk.content[0]["type"] == "text":
            print(chunk.content[0]["text"], end="")
```

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-22 14:51:13 +00:00
Alexander Ng
0f6fa34372
Community: Valyu Integration docs ()
PR title:
docs: add Valyu integration documentation
Description:
This PR adds documentation and example notebooks for the Valyu
integration, including retriever and tool usage.
Issue:
N/A
Dependencies:
No new dependencies.

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-04-21 17:43:00 -04:00
Jacob Mansdorfer
e8a84b05a4
Community: Adding tool calling and some new parameters to the langchain-predictionguard docs. ()
- [x] **PR message**: 
- **Description:** Updates the documentation for the
langchain-predictionguard package, adding tool calling functionality and
some new parameters.
2025-04-21 17:01:57 -04:00
ccurme
8574442c57
core[patch]: release 0.3.55 () 2025-04-21 17:56:24 +00:00
ccurme
920d504e47
fireworks[patch]: update model in LLM integration tests ()
`mixtral-8x7b-instruct` has been retired.
2025-04-21 17:53:27 +00:00
Anton Masalovich
1f3054502e
community: fix cost calculations for 4.1 and o4 in OpenAI callback ()
**Issue:** 
2025-04-21 10:59:47 -04:00
Ahmed Tammaa
589bc19890
anthropic[patch]: make description optional on AnthropicTool ()
PR Summary

This change adds a fallback in ChatAnthropic.with_structured_output() to
handle Pydantic models that don’t include a docstring. Without it,
calling:
```py
from pydantic import BaseModel
from langchain_anthropic import ChatAnthropic

class SampleModel(BaseModel):
    sample_field: str

llm = ChatAnthropic(
    model="claude-3-7-sonnet-latest"
).with_structured_output(SampleModel.model_json_schema())

llm.invoke("test")
```
will raise a
```
KeyError: 'description'
```
because Pydantic omits the description field when no docstring is
present.

This issue doesn’t occur when using ChatOpenAI or if you add a docstring
to the model:
```py
from pydantic import BaseModel
from langchain_openai import ChatOpenAI

class SampleModel(BaseModel):
    """Schema for sample_field output."""
    sample_field: str

llm = ChatOpenAI(
    model="gpt-4o-mini"
).with_structured_output(SampleModel.model_json_schema())

llm.invoke("test")
```

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-21 10:44:39 -04:00
Nuno Campos
27296bdb0c
core: Make Graph.Node.data optional ()
2025-04-21 07:18:36 -07:00
Pushpa Kumar
0e9d0dbc10
docs: dynamic copyright year in API ref ()
- [x] **PR title:**  
`docs: make footer year dynamic in API reference docs`

- [x] **PR message:**

  - **Description:**  
Update `docs/api_reference/conf.py` to make the copyright year dynamic
(on
[https://python.langchain.com/api_reference/](https://python.langchain.com/api_reference/)).

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-21 14:10:14 +00:00
Ahmed Tammaa
de56c31672
core: Improve OutputParser error messaging when model output is truncated (max_tokens) ()
Addresses 
When using the output parser—either in a chain or standalone—hitting
max_tokens triggers a misleading “missing variable” error instead of
indicating the output was truncated. This subtle bug often surfaces with
Anthropic models.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-21 10:06:18 -04:00
xsai9101
335f089d6a
Community: Add bind variable support for oracle adb docloader ()
PR title:
Community: Add bind variable support for oracle adb docloader
Description:
This PR adds support for using bind variables in the Oracle ADB doc
loader class, including a minor documentation change.
Issue:
N/A
Dependencies:
No new dependencies.
2025-04-21 08:47:33 -04:00
Ikko Eltociear Ashimine
9418c0d8a5
docs: update tableau.ipynb ()
Initalize -> Initialize
2025-04-21 08:43:29 -04:00
Aubrey Ford
23f701b08e
langchain_community: OpenAIEmbeddings not respecting chunk_size argument ()
This is a follow-on PR to go with the identical changes that were made
in partners/openai.

Previous PR:  https://github.com/langchain-ai/langchain/pull/30757

When calling embed_documents and providing a chunk_size argument, that
argument is ignored when OpenAIEmbeddings is instantiated with its
default configuration (where check_embedding_ctx_length=True).

_get_len_safe_embeddings specifies a chunk_size parameter but it's not
being passed through in embed_documents, which is its only caller. This
appears to be an oversight, especially given that the
_get_len_safe_embeddings docstring states it should respect "the set
embedding context length and chunk size."

Developers typically expect method parameters to take effect (also, take
precedence) when explicitly provided, especially when instantiating
using defaults. I was confused as to why my API calls were being
rejected regardless of the chunk size I provided.
2025-04-21 08:39:07 -04:00
Aubrey Ford
b344f34635
partners/openai: OpenAIEmbeddings not respecting chunk_size argument ()
When calling `embed_documents` and providing a `chunk_size` argument,
that argument is ignored when `OpenAIEmbeddings` is instantiated with
its default configuration (where `check_embedding_ctx_length=True`).

`_get_len_safe_embeddings` specifies a `chunk_size` parameter but it's
not being passed through in `embed_documents`, which is its only caller.
This appears to be an oversight, especially given that the
`_get_len_safe_embeddings` docstring states it should respect "the set
embedding context length and chunk size."

Developers typically expect method parameters to take effect (also, take
precedence) when explicitly provided, especially when instantiating
using defaults. I was confused as to why my API calls were being
rejected regardless of the chunk size I provided.

This bug also exists in langchain_community package. I can add that to
this PR if requested otherwise I will create a new one once this passes.
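
A minimal sketch of the expected behavior after the fix:

```python
from langchain_openai import OpenAIEmbeddings

# Default config: check_embedding_ctx_length=True
embeddings = OpenAIEmbeddings(model="text-embedding-3-small")

# With the fix, the explicit chunk_size (batch size per API request)
# is passed through to _get_len_safe_embeddings instead of being ignored.
vectors = embeddings.embed_documents(["doc one", "doc two"], chunk_size=2)
```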
2025-04-18 15:27:27 -04:00
Konsti-s
017c8079e1
partners: ChatAnthropic supports urls ()
**Description:**
partners-anthropic: ChatAnthropic supports b64 and urls in the
part[image_url][url] message variable

**Issue**:
ChatAnthropic currently supports only b64-encoded images in the
part[image_url][url] message variable. This PR enables ChatAnthropic to
also accept image URLs in said variable, making it compatible with
OpenAI messages and easing model switching (see the sketch below).
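
A minimal sketch (the model name and URL are illustrative):

```python
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-3-5-sonnet-latest")

# OpenAI-style image_url blocks now accept plain URLs,
# not just base64 data URIs.
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe this image."},
        {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
    ],
}
response = llm.invoke([message])
```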

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-18 15:15:45 -04:00
Volodymyr Tkachuk
d0cd115356
community: Add deprecation decorator to SingleStore community integrations ()
The SingleStore integration now has its own package,
`langchain-singlestore`, so the community implementation will no longer
be maintained.

Added `deprecated` decorator to `SingleStoreDBChatMessageHistory`,
`SingleStoreDBSemanticCache`, and `SingleStoreDB` classes in the
community package.

**Dependencies:** https://github.com/langchain-ai/langchain/pull/30841

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-04-18 12:58:39 -04:00
Alejandro Rodríguez
34ddfba76b
community: support usage_metadata for litellm streaming calls ()
Support "usage_metadata" for LiteLLM streaming calls.

This is a follow-up to
https://github.com/langchain-ai/langchain/pull/30625, which tackled
non-streaming calls.

2025-04-18 12:50:32 -04:00
Volodymyr Tkachuk
5ffcd01c41
docs: Register langchain-singlestore integration ()
I created and published the `langchain-singlestore` integration package,
which should replace the SingleStoreDB community implementation.
2025-04-18 12:11:33 -04:00
ccurme
096f0e5966
core[patch]: de-beta usage callback () 2025-04-18 15:45:09 +00:00
ccurme
46de0866db
infra: add langchain-google-genai to monorepo test deps and update notebook cassettes ()
Following https://github.com/langchain-ai/langchain/pull/30880
2025-04-18 11:16:12 -04:00
Behrad Hemati
d624a475e4
community: change metadata in opensearch mmr ()
- **Description:** including `metadata_field` in
`max_marginal_relevance_search()` would result in an error; changed the
logic to match how it's handled in `similarity_search`, where it can be
any field or simply `"*"` to include every field.
2025-04-18 10:10:23 -04:00
rylativity
dbf9986d44
langchain-ollama (partners) / langchain-core: allow passing ChatMessages to Ollama (including arbitrary roles) ()
Replacement for PR  (@ccurme)

**Description**: currently, ChatOllama [will raise a value error if a
ChatMessage is passed to
it](https://github.com/langchain-ai/langchain/blob/master/libs/partners/ollama/langchain_ollama/chat_models.py#L514),
as described in
https://github.com/langchain-ai/langchain/pull/30147#issuecomment-2708932481.

Furthermore, ollama-python is removing the limitations on valid roles
that can be passed through chat messages to a model in ollama -
https://github.com/ollama/ollama-python/pull/462#event-16917810634.

This PR removes the role limitations imposed by langchain and enables
passing langchain ChatMessages with arbitrary 'role' values through the
langchain ChatOllama class to the underlying ollama-python Client.
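
A minimal sketch (the model name and role are illustrative):

```python
from langchain_core.messages import ChatMessage
from langchain_ollama import ChatOllama

llm = ChatOllama(model="llama3.1")

# Previously this raised a ValueError; the arbitrary role is now
# passed through to the underlying ollama-python client.
response = llm.invoke([ChatMessage(role="control", content="thinking")])
```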

As this PR relies on [merged but unreleased functionality in
ollama-python](
https://github.com/ollama/ollama-python/pull/462#event-16917810634), I
have temporarily pointed the ollama package source to the main branch of
the ollama-python github repo.

Format, lint, and tests of new functionality passing. Need to resolve
issue with recently added ChatOllama tests. (Now resolved)

**Issue**: resolves  (related to ollama issue
https://github.com/ollama/ollama/issues/8955)

**Dependencies**: no new dependencies

[x] PR title
[x] PR message
[x] Lint and test: format, lint, and test all running successfully and
passing

---------

Co-authored-by: Ryan Stewart <ryanstewart@Ryans-MacBook-Pro.local>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-18 10:07:07 -04:00
Christophe Bornet
0c723af4b0
langchain[lint]: fix mypy type ignores ()
* Remove unused ignores
* Add type ignore codes
* Add mypy rule `warn_unused_ignores`
* Add ruff rule PGH003

NB: some `type: ignore[unused-ignore]` are added because the ignores are
needed when `extended_testing_deps.txt` deps are installed.
2025-04-17 17:54:34 -04:00
ccurme
f14bcee525
docs: update multi-modal docs ()
Co-authored-by: Sydney Runkle <54324534+sydney-runkle@users.noreply.github.com>
2025-04-17 16:03:05 -04:00
Sydney Runkle
98c357b3d7
core: release 0.3.54 () 2025-04-17 14:27:06 -04:00
Vadym Barda
d2cbfa379f
core[patch]: add retries and better messages to draw_mermaid_png () 2025-04-17 18:25:37 +00:00
Sydney Runkle
75e50a3efd
core[patch]: Raise AttributeError (instead of ModuleNotFoundError) in custom __getattr__ ()
Follow up to https://github.com/langchain-ai/langchain/pull/30769,
fixing the regression reported
[here](https://github.com/langchain-ai/langchain/pull/30769#issuecomment-2807483610),
thanks @krassowski for the report!

Fix inspired by https://github.com/PrefectHQ/prefect/pull/16172/files

Other changes:
* Using tuples for `__all__`, except in `output_parsers` bc of a list
namespace conflict
* Using a helper function for imports due to repeated logic across
`__init__.py` files becoming hard to maintain.

Co-authored-by: Michał Krassowski <5832902+krassowski@users.noreply.github.com>
2025-04-17 14:15:28 -04:00
ccurme
61d2dc011e
openai: release 0.3.14 () 2025-04-17 10:49:14 -04:00
ccurme
f0f90c4d88
anthropic: release 0.3.12 () 2025-04-17 14:45:12 +00:00
ccurme
f01b89df56
standard-tests: release 0.3.19 () 2025-04-17 10:37:44 -04:00
ccurme
add6a78f98
standard-tests, openai[patch]: add support standard audio inputs () 2025-04-17 10:30:57 -04:00
ccurme
2c2db1ab69
core: release 0.3.53 () 2025-04-17 13:10:32 +00:00
ccurme
86d51f6be6
multiple: permit optional fields on multimodal content blocks ()
Instead of stuffing provider-specific fields in `metadata`, they can go
directly on the content block.
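
For example (a sketch; the `cache_control` field is an illustrative provider-specific extension):

```python
block = {
    "type": "image",
    "source_type": "base64",
    "data": "<base64 string>",
    "mime_type": "image/png",
    # Provider-specific field carried directly on the block,
    # rather than nested under "metadata":
    "cache_control": {"type": "ephemeral"},
}
```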
2025-04-17 12:48:46 +00:00
湛露先生
83b66cb916
doc: clean doc word description. ()
Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-04-17 08:04:37 -04:00
湛露先生
ff2930c119
partners: bug fix check_imports.py exit code. ()
Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-04-17 08:02:23 -04:00
ccurme
b36c2bf833
docs: update Bedrock chat model page ()
- document prompt caching
- feature ChatBedrockConverse throughout
2025-04-16 16:55:14 -04:00
ccurme
9e82f1df4e
docs: minor clean up in ChatOpenAI docs () 2025-04-16 16:08:43 -04:00
ccurme
fa362189a1
docs: document OpenAI reasoning summaries () 2025-04-16 19:21:14 +00:00
Sydney Runkle
88fce67724
core: Removing unnecessary pydantic core schema rebuilds ()
We only need to rebuild model schemas if type annotation information
isn't available during declaration; that shouldn't be the case for the
types corrected here.

Need to do more thorough testing to make sure these structures have
complete schemas, but hopefully this boosts startup / import time.
2025-04-16 12:00:08 -04:00
rrozanski-smabbler
60d8ade078
Galaxia integration ()
- [ ] **PR title**: "docs: adding Smabbler's Galaxia integration"

- [ ] **PR message**:  **Twitter handle:** @Galaxia_graph

I'm adding docs here + added the package to the packages.yml. I didn't
add a unit test, because this integration is just a thin wrapper on top
of our API. There isn't much left to test if you mock it away.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-16 10:39:04 -04:00
ccurme
ca39680d2a
ollama: release 0.3.2 () 2025-04-16 09:14:57 -04:00
Sydney Runkle
4af3f89a3a
docs: enforce newlines when signature exceeds char threshold ()
Below is an example of the single line vs new multiline approach.

Before this PR:

<img width="831" alt="Screenshot 2025-04-15 at 8 56 26 PM"
src="https://github.com/user-attachments/assets/0c0277bd-2441-4b22-a536-e16984fd91b7"
/>

After this PR:

<img width="829" alt="Screenshot 2025-04-15 at 8 56 13 PM"
src="https://github.com/user-attachments/assets/e16bfe38-bb17-48ba-a642-e8ff6b48e841"
/>
2025-04-16 08:45:40 -04:00
milosz-l
4ff576e37d
langchain: infer Perplexity provider for sonar model prefix ()
**Description:** This PR adds provider inference logic to
`init_chat_model` for Perplexity models that use the "sonar..." prefix
(`sonar`, `sonar-pro`, `sonar-reasoning`, `sonar-reasoning-pro` or
`sonar-deep-research`).

This allows users to initialize these models by simply passing the model
name, without needing to explicitly set `model_provider="perplexity"`.

The docstring for `init_chat_model` has also been updated to reflect
this new inference rule.
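
A minimal sketch of the new inference (model name is one of the prefixes listed above):

```python
from langchain.chat_models import init_chat_model

# Provider is now inferred from the "sonar..." prefix; previously
# model_provider="perplexity" had to be passed explicitly.
llm = init_chat_model("sonar-pro")
response = llm.invoke("Hello, world!")
```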
2025-04-15 18:17:21 -04:00
ccurme
085baef926
ollama[patch]: support standard image format ()
Following https://github.com/langchain-ai/langchain/pull/30746
2025-04-15 22:14:50 +00:00
ccurme
47ded80b64
ollama[patch]: fix generation info ()
https://github.com/langchain-ai/langchain/pull/30778 (not released)
broke all invocation modes of ChatOllama (intent was to remove
`"message"` from `generation_info`, but we turned `generation_info` into
`stream_resp["message"]`), resulting in validation errors.
2025-04-15 19:22:58 +00:00
Sydney Runkle
cf2697ec53
chroma: release 0.2.3 () 2025-04-15 14:11:23 -04:00
ccurme
8e9569cbc8
perplexity: release 0.1.1 () 2025-04-15 18:02:15 +00:00
ccurme
dd5f5902e3
openai: release 0.3.13 () 2025-04-15 17:58:12 +00:00
ccurme
3382ee8f57
anthropic: release 0.3.11 () 2025-04-15 17:57:00 +00:00
Sydney Runkle
ef5aff3b6c
core[fix]: Fix __dir__ in __init__.py for output_parsers module ()
We have a `list.py` file which causes a namespace conflict with `list`
from stdlib, unfortunately.

`__all__` is already a list, so no need to coerce.
2025-04-15 13:09:13 -04:00
Christophe Bornet
a4ca1fe0ed
core: Remove some noqa () 2025-04-15 13:08:40 -04:00
ccurme
6baf5c05a6
standard-tests: release 0.3.18 () 2025-04-15 16:56:54 +00:00
ccurme
c6a8663afb
infra: run old standard-tests on core releases ()
On core releases, we check out the latest published package for
langchain-openai and langchain-anthropic and run their tests against the
candidate version of langchain-core.

Because these packages have a local install of langchain-tests, we also
need to check out the previous version of langchain-tests.
2025-04-15 16:04:08 +00:00
Sydney Runkle
1f5e207379
core[fix]: remove load from dynamic imports dict () 2025-04-15 12:02:46 -04:00
ccurme
7240458619
core: release 0.3.52 () 2025-04-15 15:28:31 +00:00
Sydney Runkle
6aa5494a75
Fix from langchain_core.load.load import load import ()
TL;DR: you can't optimize imports with a lazy `__getattr__` if there is
a namespace conflict with a module name and an attribute name. We should
avoid introducing conflicts like this in the future.

This PR fixes a bug introduced by my lazy imports PR:
https://github.com/langchain-ai/langchain/pull/30769.

In `langchain_core`, we have utilities for loading and dumping data.
Unfortunately, one of those utilities is a `load` function, located in
`langchain_core/load/load.py`. To make this function more visible, we
make it accessible at the top level `langchain_core.load` module via
importing the function in `langchain_core/load/__init__.py`.

So, either of these imports should work:

```py
from langchain_core.load import load
from langchain_core.load.load import load
```

As you can tell, this is already a bit confusing. You'd think that the
first import would produce the module `load`, but because of the
`__init__.py` shortcut, both produce the function `load`.

<details> More on why the lazy imports PR broke this support...

All was well, except when the absolute import was run first, see the
last snippet:

```
>>> from langchain_core.load import load
>>> load
<function load at 0x101c320c0>
```

```
>>> from langchain_core.load.load import load
>>> load
<function load at 0x1069360c0>
```

```
>>> from langchain_core.load import load
>>> load
<function load at 0x10692e0c0>
>>> from langchain_core.load.load import load
>>> load
<function load at 0x10692e0c0>
```

```
>>> from langchain_core.load.load import load
>>> load
<function load at 0x101e2e0c0>
>>> from langchain_core.load import load
>>> load
<module 'langchain_core.load.load' from '/Users/sydney_runkle/oss/langchain/libs/core/langchain_core/load/load.py'>
```

In this case, the function `load` wasn't stored in the globals cache for
the `langchain_core.load` module (by the lazy import logic), so Python
defers to a module import.

</details>

New `langchain` tongue twister 😜: we've created a problem for ourselves
because you have to load the load function from the load file in the
load module 😨.
2025-04-15 11:06:13 -04:00
Bagatur
7262de4217
core[patch]: dict chat prompt template support ()
- Support passing dicts as templates to chat prompt template
- Support making *any* attribute on a message a runtime variable
- Significantly simpler than trying to update our existing prompt
template classes

```python
    template = ChatPromptTemplate(
        [
            {
                "role": "assistant",
                "content": [
                    {
                        "type": "text",
                        "text": "{text1}",
                        "cache_control": {"type": "ephemeral"},
                    },
                    {"type": "image_url", "image_url": {"path": "{local_image_path}"}},
                ],
                "name": "{name1}",
                "tool_calls": [
                    {
                        "name": "{tool_name1}",
                        "args": {"arg1": "{tool_arg1}"},
                        "id": "1",
                        "type": "tool_call",
                    }
                ],
            },
            {
                "role": "tool",
                "content": "{tool_content2}",
                "tool_call_id": "1",
                "name": "{tool_name1}",
            },
        ]
    )

```

will likely close  if we like this idea and update to use this
logic

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-15 11:00:49 -04:00
ccurme
9cfe6bcacd
multiple: multi-modal content blocks ()
Introduces standard content block format for images, audio, and files.

## Examples

Image from url:
```
{
    "type": "image",
    "source_type": "url",
    "url": "https://path.to.image.png",
}
```


Image, in-line data:
```
{
    "type": "image",
    "source_type": "base64",
    "data": "<base64 string>",
    "mime_type": "image/png",
}
```


PDF, in-line data:
```
{
    "type": "file",
    "source_type": "base64",
    "data": "<base64 string>",
    "mime_type": "application/pdf",
}
```


File from ID:
```
{
    "type": "file",
    "source_type": "id",
    "id": "file-abc123",
}
```


Plain-text file:
```
{
    "type": "file",
    "source_type": "text",
    "text": "foo bar",
}
```
2025-04-15 09:48:06 -04:00
湛露先生
09438857e8
docs: fix tools_human.ipynb url 404. ()
Fix the 404 pages.

Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-04-15 09:22:13 -04:00
Sydney Runkle
e3b6cddd5e
core: codspeed tweak to make sure it runs on master () 2025-04-15 13:03:44 +00:00
Sydney Runkle
59f2c9e737
Tinkering with CodSpeed ()
Fix CI to trigger benchmarks on `run-codspeed-benchmarks` label addition

Reduce scope of async benchmark to save time on CI

Waiting to merge this PR until we figure out how to use walltime on
local runners.
2025-04-15 08:49:09 -04:00
William FH
ed5c4805f6
Consistent docstring indentation ()
Should be 4 spaces instead of 3.
2025-04-14 19:04:35 -07:00
Joey Constantino
2282762528
docs: small Tableau docs update ()
Description: small Tableau docs update
Issue: adds required environment variable
Dependencies: tableau-langchain

---------

Co-authored-by: Joe Constantino <joe.constantino@joecons-ltm6v86.internal.salesforce.com>
2025-04-14 15:34:54 -04:00
ccurme
f7c4965fb6
openai[patch]: update imports in test ()
Quick fix to unblock CI, will need to address in core separately.
2025-04-14 19:33:38 +00:00
Sydney Runkle
edb6a23aea
core[lint]: fix issue with unused ignore in __init__.py files ()
Fixing a race condition between
https://github.com/langchain-ai/langchain/pull/30769 and
https://github.com/langchain-ai/langchain/pull/30737
2025-04-14 17:57:00 +00:00
湛露先生
3a64c7195f
community: redis tool typos fix () 2025-04-14 09:01:36 -04:00
Sydney Runkle
4f69094b51
core[performance]: use custom __getattr__ in __init__.py files for lazy imports ()
Most easily reviewed with the "hide whitespace" option toggled.

Seeing 10-50% speed ups in import time for common structures 🚀 

The general purpose of this PR is to lazily import structures within
`langchain_core.XXX_module.__init__.py` so that we're not eagerly
importing expensive dependencies (`pydantic`, `requests`, etc).

Analysis of flamegraphs generated with `importtime` motivated these
changes. For example, the one below demonstrates that importing
`HumanMessage` accidentally triggered imports for `importlib.metadata`,
`requests`, etc.

There's still much more to do on this front, and we can start digging
into our own internal code for optimizations now that we're less
concerned about external imports.
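
The pattern, roughly (a simplified sketch of a module `__init__.py`, not the exact implementation):

```python
# Lazy imports via a PEP 562 module-level __getattr__: the expensive
# import runs on first attribute access instead of at import time.
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from langchain_core.messages import HumanMessage

__all__ = ["HumanMessage"]


def __getattr__(name: str):
    if name == "HumanMessage":
        from langchain_core.messages import HumanMessage

        globals()[name] = HumanMessage  # cache so this runs once per name
        return HumanMessage
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
```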

<img width="1210" alt="Screenshot 2025-04-11 at 1 10 54 PM"
src="https://github.com/user-attachments/assets/112a3fe7-24a9-4294-92c1-d5ae64df839e"
/>

I've tracked the improvements with some local benchmarks:

## `pytest-benchmark` results

| Name | Before (s) | After (s) | Delta (s) | % Change |
|------|------------|-----------|-----------|----------|
| Document | 2.8683 | 1.2775 | -1.5908 | -55.46% |
| HumanMessage | 2.2358 | 1.1673 | -1.0685 | -47.79% |
| ChatPromptTemplate | 5.5235 | 2.9709 | -2.5526 | -46.22% |
| Runnable | 2.9423 | 1.7793 | -1.163 | -39.53% |
| InMemoryVectorStore | 3.1180 | 1.8417 | -1.2763 | -40.93% |
| RunnableLambda | 2.7385 | 1.8745 | -0.864 | -31.55% |
| tool | 5.1231 | 4.0771 | -1.046 | -20.42% |
| CallbackManager | 4.2263 | 3.4099 | -0.8164 | -19.32% |
| LangChainTracer | 3.8394 | 3.3101 | -0.5293 | -13.79% |
| BaseChatModel | 4.3317 | 3.8806 | -0.4511 | -10.41% |
| PydanticOutputParser | 3.2036 | 3.2995 | 0.0959 | 2.99% |
| InMemoryRateLimiter | 0.5311 | 0.5995 | 0.0684 | 12.88% |

Note the lack of change for `InMemoryRateLimiter` and
`PydanticOutputParser` is just random noise, I'm getting comparable
numbers locally.

## Local CodSpeed results

We're still working on configuring CodSpeed on CI. The local usage
produced similar results.
2025-04-14 08:57:54 -04:00
Christophe Bornet
ada740b5b9
community: Add ruff rule PGH003 ()
See https://docs.astral.sh/ruff/rules/blanket-type-ignore/

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-14 02:32:13 +00:00
ccurme
f005988e31
community[patch]: fix cost calculations for o3 in OpenAI callback ()
Resolves https://github.com/langchain-ai/langchain/issues/30795
2025-04-13 15:20:46 +00:00
BoyuHu
446361a0d3
docs: fix typo ()
2025-04-13 10:55:30 -04:00
Marina Gómez
afd457d8e1
perplexity[patch]: Fix : Handle missing citations attribute in ChatPerplexity ()
This PR fixes an issue where ChatPerplexity would raise an
AttributeError when the citations attribute was missing from the model
response (e.g., when using offline models like r1-1776).

The fix checks for the presence of citations, images, and
related_questions before attempting to access them, avoiding crashes in
models that don't provide these fields.

Tested locally with models that omit citations, and the fix works as
expected.
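
A sketch of the guard logic (simplified; the actual fix lives inside ChatPerplexity's response handling):

```python
def extract_optional_fields(response) -> dict:
    """Collect optional Perplexity response fields only when present,
    so offline models (e.g. r1-1776) that omit them don't crash."""
    fields = {}
    for attr in ("citations", "images", "related_questions"):
        if hasattr(response, attr):
            fields[attr] = getattr(response, attr)
    return fields
```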
2025-04-13 09:24:05 -04:00
Christophe Bornet
42944f3499
core: Improve mypy config ()
* Cleanup mypy config
* Add mypy `strict` rules except `disallow_any_generics`,
`warn_return_any` and `strict_equality` (TODO)
* Add mypy `strict_byte` rule
* Add mypy support for PEP702 `@deprecated` decorator
* Bump mypy version to 1.15

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-04-11 16:35:13 -04:00
mpb159753
bb2c2fd885
docs: Add openGauss vector store documentation ()
Hey LangChain community! 👋 Excited to propose official documentation for
our new openGauss integration that brings powerful vector capabilities
to the stack!

### What's Inside 📦  
1. **Full Integration Guide**  
Introducing
[langchain-opengauss](https://pypi.org/project/langchain-opengauss/) on
PyPI - your new toolkit for:
   🔍 Native hybrid search (vectors + metadata)  
   🚀 Production-grade connection pooling  
   🧩 Automatic schema management  

2. **Rigorous Testing Passed**   
![Benchmark
Results](https://github.com/user-attachments/assets/ae3b21f7-aeea-4ae7-a142-f2aec57936a0)
   - 100% non-async test coverage  

P.S. The current implementation resides in my personal repository:
https://github.com/mpb159753/langchain-opengauss. How can I transfer it
to the langchain-ai org? *Keen to hear your thoughts and make this
integration shine!*

---------

Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-04-11 20:31:39 +00:00
Christophe Bornet
913c896598
core: Add ruff rules FBT001 and FBT002 ()
Add ruff rules
[FBT001](https://docs.astral.sh/ruff/rules/boolean-type-hint-positional-argument/)
and
[FBT002](https://docs.astral.sh/ruff/rules/boolean-default-value-positional-argument/).
Mostly `noqa`s to not introduce breaking changes and possible
non-breaking fixes have already been done in a [previous
PR](https://github.com/langchain-ai/langchain/pull/29424).
These rules will prevent new violations to happen.
2025-04-11 16:26:33 -04:00
William FH
2803a48661
core[patch]: Share executor for async callbacks run in sync context ()
To avoid having to create ephemeral threads, grab the thread lock, etc.
2025-04-11 10:34:43 -07:00
Sydney Runkle
fdc2b4bcac
core[lint]: Use 3.9 formatting for docs and tests ()
Looks like `pyupgrade` was already used here but missed some docs and
tests.

This helps to keep our docs looking professional and up to date.
Eventually, we should lint / format our inline docs.
2025-04-11 10:39:25 -04:00
Sydney Runkle
48affc498b
langchain[lint]: use pyupgrade to get to 3.9 standards () 2025-04-11 10:33:26 -04:00
ccurme
d9b628e764
xai: release 0.2.3 () 2025-04-11 14:05:11 +00:00
ccurme
9cfb95e621
xai[patch]: support reasoning content ()
https://docs.x.ai/docs/guides/reasoning

```python
from langchain.chat_models import init_chat_model

llm = init_chat_model(
    "xai:grok-3-mini-beta",
    reasoning_effort="low"
)
response = llm.invoke("Hello, world!")
```
2025-04-11 14:00:27 +00:00
Christophe Bornet
89f28a24d3
core[lint]: Fix typing in test_async_callbacks () 2025-04-11 07:26:38 -04:00
Sydney Runkle
8c6734325b
partners[lint]: run pyupgrade to get code in line with 3.9 standards ()
Using `pyupgrade` to get all `partners` code up to 3.9 standards
(mostly, fixing old `typing` imports).
2025-04-11 07:18:44 -04:00
Jacob Lee
e72f3c26a0
fix(ollama): Remove redundant message from response_metadata () 2025-04-10 23:12:57 -07:00
Jannik Maierhöfer
f3c3ec9aec
docs: add langfuse integration to provider list ()
This PR adds the Langfuse integration to the provider list.
2025-04-10 22:25:42 -04:00
Christophe Bornet
dc19d42d37
core: Specify code when ignoring type issue (ruff PGH003) ()
See https://docs.astral.sh/ruff/rules/blanket-type-ignore/
2025-04-10 22:23:52 -04:00
Paul Czarkowski
68d16d8a07
Community: Add Managed Identity support for Azure AI Search ()
Add Managed Identity support for Azure AI Search

---------

Signed-off-by: Paul Czarkowski <username.taken@gmail.com>
2025-04-10 22:22:58 -04:00
CtrlMj
5103594a2c
replace the deprecated initialize_agent in playwright.ipynb with create_react_agent ()
**Description:** Replaced the example using the deprecated
`initialize_agent` function with `create_react_agent` from
`langgraph.prebuilt` (see the sketch below)

**Issue:**  

**Dependencies:** N/A
**Twitter handle:** N/A
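A hypothetical migration sketch; `llm` and `toolkit` are assumed to be
set up earlier in the notebook:

```python
from langgraph.prebuilt import create_react_agent

agent = create_react_agent(llm, toolkit.get_tools())
result = agent.invoke(
    {"messages": [("user", "Navigate to python.langchain.com and summarize the page")]}
)
```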
2025-04-10 22:12:12 -04:00
Eugene Yurtsev
e42b3d285a
langchain: remove langchain-server script ()
Has been replaced by langsmith a long long time ago
2025-04-10 22:11:42 -04:00
Pol de Font-Réaulx
48cf7c838d
feat(community): add oauth2 support for Jira toolkit ()
**Description:** add support for oauth2 in Jira tool by adding the
possibility to pass a dictionary with oauth parameters. I also adapted
the documentation to show this new behavior
2025-04-10 22:04:09 -04:00
Oleg Ovcharuk
b6fe7e8c10
docs: YDB Vector Store docs ()
This PR adds docs about how to use YDB as a vector store

[YDB](https://ydb.tech/) is a versatile open-source distributed SQL
database. It supports [vector
search](https://ydb.tech/docs/en/yql/reference/udf/list/knn) which means
it can be used as a vector store with langchain.

YDB vectore store comes with
[langchain-ydb](https://pypi.org/project/langchain-ydb/) pypi package.

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-04-10 21:33:56 -04:00
湛露先生
7a4ae6fbff
community[patch]: simplify cache logic ()
Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-04-10 19:20:57 -04:00
ccurme
8e053ac9d2
core[patch]: support customization of backoff parameters in with_retries ()
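A hedged usage sketch; the `exponential_jitter_params` shape is assumed
from the feature description:

```python
from langchain_core.runnables import RunnableLambda

flaky = RunnableLambda(lambda x: x)  # stand-in for a call that can fail
with_backoff = flaky.with_retry(
    stop_after_attempt=4,
    wait_exponential_jitter=True,
    exponential_jitter_params={"initial": 0.5, "max": 8.0},
)
```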
Co-authored-by: Sydney Runkle <54324534+sydney-runkle@users.noreply.github.com>
2025-04-10 19:18:36 -04:00
amohan
e981a9810d
docs: update links in cloudflare docs ()
Thanks for reviewing again. I was notified of some
[links](https://python.langchain.com/api_reference/cloudflare/) not
being correct in the default integrations so updating them in this PR.
2025-04-10 19:08:18 -04:00
William FH
70532a65f8
Async callback benchmark () 2025-04-10 15:47:19 -07:00
Sydney Runkle
c6172d167a
Only run CodSpeed benchmarks with run-codspeed-benchmarks label () 2025-04-10 15:48:14 -04:00
amohan
f70df01e01
docs: Update ordering of cloudflare integration examples in providers page ()
Updated the ordering of cloudflare integrations and updated import
examples. Follow up from
https://github.com/langchain-ai/langchain/pull/30749
2025-04-10 15:34:58 -04:00
Sydney Runkle
8f8fea2d7e
[performance]: Use hard coded langchain-core version to avoid importlib import ()
This PR aims to reduce import time of `langchain-core` tools by removing
the `importlib.metadata` import previously used in `__init__.py`. This
is the first in a sequence of PRs to reduce import time delays for
`langchain-core` features and structures 🚀.

Because we're now hard coding the version, we need to make sure
`version.py` and `pyproject.toml` stay in sync, so I've added a new CI
job that runs whenever either of those files are modified. [This
run](https://github.com/langchain-ai/langchain/actions/runs/14358012706/job/40251952044?pr=30744)
demonstrates the failure that occurs whenever the version gets out of
sync (thus blocking a PR).

Before, note the ~15% of time spent on `importlib.metadata` and related
imports

<img width="1081" alt="Screenshot 2025-04-09 at 9 06 15 AM"
src="https://github.com/user-attachments/assets/59f405ec-ee8d-4473-89ff-45dea5befa31"
/>

After (note, lack of `importlib.metadata` time sink):

<img width="1245" alt="Screenshot 2025-04-09 at 9 01 23 AM"
src="https://github.com/user-attachments/assets/9c32e77c-27ce-485e-9b88-e365193ed58d"
/>
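An illustrative before/after; the constant's value here is an example,
and CI now fails if it drifts from `pyproject.toml`:

```python
# Before: resolved at import time (pays the importlib.metadata cost)
# from importlib import metadata
# VERSION = metadata.version("langchain-core")

# After: a plain constant kept in sync with pyproject.toml by CI
VERSION = "0.3.51"  # example value
```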
2025-04-10 14:15:02 -04:00
Sydney Runkle
cd6a83117c
Adding more import time benchmarks for langchain-core ()
Plus minor typo fix in `ChatPromptTemplate` case id.
2025-04-10 11:50:12 -04:00
Chamath K.B. Attanayaka
6c45c9efc3
docs: update clickhouse version in notebook example ()
update clickhouse docker version tag in notebook example to avoid
compatibility issues with clickhouse-connect.
2025-04-10 09:51:54 -04:00
amohan
44b83460b2
docs: Add Cloudflare integrations ()
Description:
This PR adds documentation for the langchain-cloudflare integration
package.

Issue:
N/A

Dependencies:
No new dependencies are required.

Tests and Docs:

Added an example notebook demonstrating the usage of the
langchain-cloudflare package, located in docs/docs/integrations.
Added a new package to libs/packages.yml.

Lint and Format:

Successfully ran make format and make lint.

---------

Co-authored-by: Collier King <collier@cloudflare.com>
Co-authored-by: Collier King <collierking99@gmail.com>
2025-04-10 09:27:23 -04:00
湛露先生
c87a270e5f
cookbook: Fix docs typos. ()
Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-04-10 09:13:24 -04:00
ccurme
63c16f5ca8
community: deprecate AzureCosmosDBNoSqlVectorSearch in favor of langchain-azure-ai implementation () 2025-04-09 21:04:16 +00:00
Christophe Bornet
4cc7bc6c93
core: Add ruff rules PLR ()
Add ruff rules [PLR](https://docs.astral.sh/ruff/rules/#refactor-plr)
Except PLR09xxx and PLR2004.

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-04-09 15:15:38 -04:00
célina
68361f9c2d
partners: (langchain-huggingface) Embeddings - Integrate Inference Providers and remove deprecated code ()
Hi there, This is a complementary PR to .
This PR introduces support for Hugging Face's serverless Inference
Providers (documentation
[here](https://huggingface.co/docs/inference-providers/index)), allowing
users to specify different providers

This PR also removes the usage of `InferenceClient.post()` method in
`HuggingFaceEndpointEmbeddings`, in favor of the task-specific
`feature_extraction` method. `InferenceClient.post()` is deprecated and
will be removed in `huggingface_hub` v0.31.0.

## Changes made

- bumped the minimum required version of the `huggingface_hub` package
to ensure compatibility with the latest API usage.
- added a provider field to `HuggingFaceEndpointEmbeddings`, enabling
users to select the inference provider.
- replaced the deprecated `InferenceClient.post()` call in
`HuggingFaceEndpointEmbeddings` with the task-specific
`feature_extraction` method for future-proofing, `post()` will be
removed in `huggingface-hub` v0.31.0.

 All changes are backward compatible.

---------

Co-authored-by: Lucain <lucainp@gmail.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
2025-04-09 19:05:43 +00:00
Christophe Bornet
98f0016fc2
core: Add ruff rules ARG ()
See https://docs.astral.sh/ruff/rules/#flake8-unused-arguments-arg
2025-04-09 14:39:36 -04:00
Sydney Runkle
66758599a9
[ci]: Quick codspeed.yml tweaks to enable comparisons with master ()
* Only run codspeed logic when `libs/core` is changed (for now; we'll
want to add other benchmarks later)
* Also run on `master` so that we can get a reference :)
2025-04-09 13:13:49 -04:00
theosaurus
d47d6ecbc3
docs: Fix typo in get_separators_for_language method section ()
2025-04-09 13:03:01 -04:00
Sydney Runkle
78ec7d886d
[performance]: Adding benchmarks for common langchain-core imports ()
The first in a sequence of PRs focusing on improving performance in
core. We're starting with reducing import times for common structures,
hence the benchmarks here.

The benchmark looks a little bit complicated - we have to use a process
so that we don't suffer from Python's import caching system. I tried
doing manual modification of `sys.modules` between runs, but that's
pretty tricky / hacky to get right, hence the subprocess approach.

Motivated by extremely slow baseline for common imports (we're talking
2-5 seconds):

<img width="633" alt="Screenshot 2025-04-09 at 12 48 12 PM"
src="https://github.com/user-attachments/assets/994616fe-1798-404d-bcbe-48ad0eb8a9a0"
/>

Also added a `make benchmark` command to make local runs easy :).
Currently using wall times so that we can track total time despite using
a manual process.
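A minimal sketch of the subprocess trick; each measurement runs the
import in a fresh interpreter, so the parent's module cache cannot skew
the result:

```python
import subprocess
import sys
import time

def time_import(module: str) -> float:
    start = time.perf_counter()
    subprocess.run([sys.executable, "-c", f"import {module}"], check=True)
    return time.perf_counter() - start

print(f"langchain_core: {time_import('langchain_core'):.2f}s wall time")
```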
2025-04-09 13:00:15 -04:00
German Molina
5fb261ce27
community: Google Vertex AI Search now returns the website title as part of the document metadata ()
Google vertex ai search will now return the title of the found website
as part of the document metadata, if available.

- **Description**: Vertex AI Search can be used to index websites and
then develop chatbots that use these websites to answer questions. At
present, the document metadata includes an `id` and `source` (which is
the URL). While the URL is enough to create a link, the ID is not
descriptive enough to show users. Therefore, I propose we return `title`
as well, when available (e.g., it will not be available in `.txt`
documents found during the website indexing).
- **Issue**: No bug in particular, but it would be better if this was
here.
- **Dependencies**: None
- I do not use twitter.

Format, Lint and Test seem to be all good.
2025-04-09 08:54:06 -04:00
theosaurus
636d831d27
docs: Fix typo in "Query re-writing" section ()
2025-04-09 08:50:14 -04:00
giulia_p_lib
deec538335
docs: fix small typo in map_rerank_docs_chain.ipynb ()
    - **Description:** fixed a minor typo in map_rerank_docs_chain.ipynb
2025-04-09 08:49:37 -04:00
Akshay Dongare
164e606cae
docs: fix import path and update LiteLLM integration docs ()
- [x] **PR title**: "docs: Update import path and LiteLLM integration
docs"
- Update the old import path for `ChatLiteLLM` to reflect the new export
from
[`__init__.py`](https://github.com/Akshay-Dongare/langchain-litellm/blob/main/langchain_litellm/__init__.py)
in
[`langchain-litellm`](https://github.com/Akshay-Dongare/langchain-litellm)
package

- [x] **PR message**:
    - **Description:** 
    - 🔗 **Follow-up to**: PR 
    - 🔧 **Fixes**: 
- 💬 **Based on this comment from** @ccurme:
https://github.com/langchain-ai/langchain/pull/30637#discussion_r2029084320
 


- [x] **About me**
🔗 LinkedIn:
[akshay-dongare](https://www.linkedin.com/in/akshay-dongare/)
2025-04-08 13:04:17 -04:00
Ikko Eltociear Ashimine
5686fed40b
docs: update yellowbrick.ipynb ()
retreival -> retrieval
2025-04-08 11:56:35 -04:00
Sydney Runkle
4556b81b1d
Clean up numpy dependencies and speed up 3.13 CI with numpy>=2.1.0 ()
Generally, this PR is CI performance focused + aims to clean up some
dependencies at the same time.

1. Unpins upper bounds for `numpy` in all `pyproject.toml` files where
`numpy` is specified
2. Requires `numpy >= 2.1.0` for Python 3.13 and `numpy > v1.26.0` for
Python 3.12, plus a `numpy` min version bump for `chroma`
3. Speeds up CI by minutes - linting on Python 3.13, installing `numpy <
2.1.0` was taking [~3
minutes](https://github.com/langchain-ai/langchain/actions/runs/14316342925/job/40123305868?pr=30713),
now the entire env setup takes a few seconds
4. Deleted the `numpy` test dependency from partners where that was not
used, specifically `huggingface`, `voyageai`, `xai`, and `nomic`.

It's a bit unfortunate that `langchain-community` depends on `numpy`, we
might want to try to fix that in the future...

Closes https://github.com/langchain-ai/langchain/issues/26026
Fixes https://github.com/langchain-ai/langchain/issues/30555
2025-04-08 09:45:07 -04:00
ccurme
163730aef4
docs: update SQL QA prompt ()
Resolves https://github.com/langchain-ai/langchain/issues/30724

The [prompt in
langchain-hub](https://smith.langchain.com/hub/langchain-ai/sql-query-system-prompt)
used in this guide was composed of just a system message, but the guide
did not add a human message to it. This was incompatible with some
providers (and is generally not a typical usage pattern).

The prompt in prompt hub has been updated to split the question into a
separate HumanMessage. Here we update the guide to reflect this.
2025-04-08 09:42:49 -04:00
湛露先生
9cbe91896e
Fix deepseek release tag, as the tag name was updated. ()
Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-04-08 08:43:16 -04:00
Nithish Raghunandanan
893942651b
docs: Update couchbase vector store docs ()
- **Update LangChain-Couchbase documentation**
- Rename `CouchbaseVectorStore` to `CouchbaseSearchVectorStore`

- [x] **Lint and test**
2025-04-07 18:45:14 -04:00
Eugene Yurtsev
3ce0587199
ci: remove unused debug action ()
Removing an unused action
2025-04-07 22:32:37 +00:00
ccurme
a2bec5f2e5
ollama: release 0.3.1 () 2025-04-07 20:31:25 +00:00
ccurme
e3f15f0a47
ollama[patch]: add model_name to response metadata ()
Fixes [this standard
test](https://python.langchain.com/api_reference/standard_tests/integration_tests/langchain_tests.integration_tests.chat_models.ChatModelIntegrationTests.html#langchain_tests.integration_tests.chat_models.ChatModelIntegrationTests.test_usage_metadata).
2025-04-07 16:27:58 -04:00
ccurme
e106e9602f
groq[patch]: add retries to integration tests ()
Tool-calling tests started intermittently failing with
> groq.APIError: Failed to call a function. Please adjust your prompt.
See 'failed_generation' for more details.
2025-04-07 12:45:53 -04:00
aaronlaitner
4f9f97bd12
docs: replaced initialize_agent with create_react_agent in dalle_image_generator.ipynb ()
## Description:

Replaced deprecated 'initialize_agent' with 'create_react_agent' in
dalle_image_generator.ipynb
## Issue:


## Dependencies:
None

## Twitter handle:
@Thatopman

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-07 13:33:52 +00:00
Mohammad Mohtashim
e935da0b12
ChatTongyi reasoning_content fix ()
- **Description:** Small fix for `reasoning_content` key
- **Issue:** 
2025-04-07 09:27:33 -04:00
Tin Lai
4d03ba4686
langchain_qdrant: fix showing the missing sparse vector name ()
**Description:** The error message was supposed to display the missing
vector name, but instead, it includes only the existing collection
configs.

This simple PR just includes the correct variable name, so that the user
knows the requested vector does not exist in the collection.
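A sketch of the corrected message; names are assumptions based on this
description, not the exact langchain_qdrant source:

```python
def check_sparse_vector(config: dict, sparse_vector_name: str) -> None:
    # Report the *requested* name instead of dumping the existing configs.
    if sparse_vector_name not in config:
        raise ValueError(
            f"Existing Qdrant collection does not contain sparse vector "
            f"named {sparse_vector_name!r}."
        )
```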

Signed-off-by: Tin Lai <tin@tinyiu.com>
2025-04-07 09:19:08 -04:00
Eugene Yurtsev
30af9b8166
Update bug-report.yml ()
Update bug report template!
2025-04-04 16:47:50 -04:00
jessicaou
2712ecffeb
Update Contributor Docs ()
Deletes statement on integration marketing
2025-04-04 16:35:11 -04:00
Ninad Sinha
a3671ceb71
docs: Add tools for hyperbrowser ()
Description

This PR updates the docs for the
[langchain-hyperbrowser](https://pypi.org/project/langchain-hyperbrowser/)
package. It adds a few tools
 - Scrape Tool
 - Crawl Tool
 - Extract Tool
 - Browser Agents
   - Claude Computer Use
   - OpenAI CUA
   - Browser Use 

[Hyperbrowser](https://hyperbrowser.ai/) is a platform for running and
scaling headless browsers. It lets you launch and manage browser
sessions at scale and provides easy to use solutions for any webscraping
needs, such as scraping a single page or crawling an entire site.

Issue
None

Dependencies
None

Twitter Handle
`@hyperbrowser`
2025-04-04 16:02:47 -04:00
Christophe Bornet
6650b94627
core: Add ruff rules PYI ()
See https://docs.astral.sh/ruff/rules/#flake8-pyi-pyi

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-04-04 19:59:44 +00:00
Philippe PRADOS
d8e3b7667f
community[patch]: Fix empty producer in PDF Parsers ()
Fix an issue where a PDF file without a "producer" entry in its metadata raised an exception.
2025-04-04 15:53:49 -04:00
Christophe Bornet
f0159c7125
core: Add ruff rules PGH (except PGH003) ()
Add ruff rules PGH: https://docs.astral.sh/ruff/rules/#pygrep-hooks-pgh
Except PGH003 which will be dealt in a dedicated PR.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2025-04-04 19:53:27 +00:00
Jorge Ángel Juárez Vázquez
2491237473
docs: Add Google Calendar documentation ()
## Docs: Add Google Calendar Toolkit Documentation

### Description:
This PR adds documentation for the Google Calendar Toolkit as part of
the `langchain-google` repository. Refer to the related PR: [community:
Add Google Calendar
Toolkit](https://github.com/langchain-ai/langchain-google/pull/688).

### Issue:
N/A 

### Twitter handle:
@jorgejrzz
2025-04-04 15:53:03 -04:00
Armaanjeet Singh Sandhu
7c2468f36b
core: Fix handler removal in BaseCallbackManager (Fixes ) ()
**Description:**  
Fixed a bug in `BaseCallbackManager.remove_handler()` that caused a
`ValueError` when removing a handler added via the constructor's
`handlers` parameter. The issue occurred because handlers passed to the
constructor were added only to the `handlers` list and not automatically
to `inheritable_handlers` unless explicitly specified. However,
`remove_handler()` attempted to remove the handler from both lists
unconditionally, triggering a `ValueError` when it wasn't in
`inheritable_handlers`.

The fix ensures the method checks for the handler’s presence in each
list before attempting removal, making it more robust while preserving
its original behavior.
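A sketch of the fixed logic (not the verbatim source):

```python
def remove_handler(self, handler) -> None:
    # Remove from each list only if the handler is actually present.
    if handler in self.handlers:
        self.handlers.remove(handler)
    if handler in self.inheritable_handlers:
        self.inheritable_handlers.remove(handler)
```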

**Issue:** Fixes 

**Dependencies:** None

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-04-04 15:45:15 -04:00
Mohammad Mohtashim
bff56c5fa6
community[patch]: Redundant Parser checker for Webbaseloader ()
- **Description:** We do not need to set the parser in `scrape` since it
has already been done in `_scrape`
- **Issue:** , not directly related, but this makes sure the XML parser
is used
2025-04-04 14:11:26 -04:00
Christophe Bornet
150ac0cb79
core: Add ruff rules DTZ ()
Add ruff rules DTZ:
https://docs.astral.sh/ruff/rules/#flake8-datetimez-dtz
2025-04-04 13:43:47 -04:00
Christophe Bornet
5e418c2666
core: Rework pydantic version checks ()
This pull request includes various changes to the `langchain_core`
library, focusing on improving compatibility with different versions of
Pydantic. The primary change involves replacing checks for Pydantic
major versions with boolean flags, which simplifies the code and
improves readability.
This also solves ruff rule checks for
[RUF048](https://docs.astral.sh/ruff/rules/map-int-version-parsing/) and
[PLR2004](https://docs.astral.sh/ruff/rules/magic-value-comparison/).

Key changes include:

### Compatibility Improvements:
* `libs/core/langchain_core/output_parsers/json.py`: Replaced
`PYDANTIC_MAJOR_VERSION` with `IS_PYDANTIC_V1` to check for Pydantic
version 1.
* `libs/core/langchain_core/output_parsers/pydantic.py`: Updated version
checks from `PYDANTIC_MAJOR_VERSION` to `IS_PYDANTIC_V2` in the
`PydanticOutputParser` class.

### Utility Enhancements:
* `libs/core/langchain_core/utils/pydantic.py`: Introduced
`IS_PYDANTIC_V1` and `IS_PYDANTIC_V2` flags and deprecated the
`get_pydantic_major_version` function. Updated various functions to use
these flags instead of version numbers.

### Test Updates:
* `libs/core/tests/unit_tests/output_parsers/test_openai_tools.py`:
Updated tests to use `IS_PYDANTIC_V1` and `IS_PYDANTIC_V2` for version
checks.
* `libs/core/tests/unit_tests/prompts/test_chat.py`: Replaced version
tuple checks with `PYDANTIC_VERSION` comparisons.
* `libs/core/tests/unit_tests/runnables/test_graph.py`: Simplified
version checks using `PYDANTIC_VERSION`.
* `libs/core/tests/unit_tests/runnables/test_runnable.py`: Introduced
`PYDANTIC_VERSION_AT_LEAST_29` and `PYDANTIC_VERSION_AT_LEAST_210` for
more readable version checks.
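A rough sketch of the flag pattern; the flag names follow this
description, and the version parsing is simplified:

```python
import pydantic

_MAJOR = int(pydantic.VERSION.split(".")[0])
IS_PYDANTIC_V1 = _MAJOR == 1
IS_PYDANTIC_V2 = _MAJOR == 2

if IS_PYDANTIC_V1:
    ...  # legacy code path
```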
2025-04-04 13:42:30 -04:00
Christophe Bornet
43b5dc7191
core: Add ruff rules TD and FIX ()
Add ruff rules:
* FIX: https://docs.astral.sh/ruff/rules/#flake8-fixme-fix
* TD: https://docs.astral.sh/ruff/rules/#flake8-todos-td

Code cleanup:

* `libs/core/langchain_core/outputs/chat_generation.py`: Removed the
"HACK" prefix from a comment in the `set_text` method.

Configuration adjustments:

* `libs/core/pyproject.toml`: Added new rules `FIX002`, `TD002`, and
`TD003` to the ignore list.
* `libs/core/pyproject.toml`: Removed the `FIX` and `TD` rules from the
ignore list.

Test refinement:

* `libs/core/tests/unit_tests/runnables/test_runnable.py`: Updated a
TODO comment to improve clarity in the `test_map_stream` function.
2025-04-04 13:40:42 -04:00
ccurme
a007c57285
docs: update package registry sort order () 2025-04-04 13:12:39 -04:00
Sydney Runkle
33ed7c31da
docs: fix perplexity install instructions in ChatPerplexity docstring ()
* `openai` install no longer needs to be done manually
2025-04-04 12:58:18 -04:00
Dhruvajyoti Sarma
f9bb5ec5d0
feature: removed pandas dataframe dependency for similarity_search when using DuckDB as vector store ()
- [ ] **PR title**: "community: Removes pandas dependency for using
DuckDB for similarity search"


- [ ] **PR message**: 
- **Description:** Removes pandas dependency for using DuckDB for
similarity search. The old function still exists as
`similarity_search_pd`, while the new one is at `similarity_search` and
requires no code changes. Return format remains the same (see the sketch below).
    - **Issue:** Issue  and update on PR  
    - **Dependencies:** No dependencies
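A pandas-free sketch under assumed names; a DB-API style cursor already
returns plain tuples, so no DataFrame round-trip is needed:

```python
from langchain_core.documents import Document

def rows_to_documents(rows):
    # Assumed row layout: (text, score)
    return [
        Document(page_content=text, metadata={"score": score})
        for text, score in rows
    ]
```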
2025-04-04 12:19:18 -04:00
Akshay Dongare
f79473b752
Solved issue Implement langchain-litellm ()
**PR title**: 
- [x] 1. docs: docs/docs/integrations/providers/LiteLLM.md
- [x] 2. docs: docs/docs/integrations/chat/litellm.ipynb
- [x] 3. libs: libs/packages.yml

- [x] **PR message**:
    - **Description:** Implement langchain-litellm
    - **Issue:** the issue  
    - **Twitter handle:** akshay_d02
    - **LinkedIn Handle** https://linkedin.com/in/akshay-dongare 

- [x] **Add tests and docs**: Done

- [x] **Lint and test**: Done

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-04 16:12:10 +00:00
Yiğit Bekir Kaya, PhD
87e82fe1e8
Added langchain-qwq package documentation (Alibaba Cloud) ()
LangChain QwQ allows non-Tongyi users to access thinking models with
extra capabilities which serve as an extension to Alibaba Cloud.

Hi @ccurme I'm back with the updated PR this time with documentation and
a finished package.

- **Description:** adds documentation of `langchain-qwq` integration
package. Also adds it to Alibaba Cloud provider
- **Issue:**   
- **Dependencies:** openai, json-repair
- **Twitter handle:** YigitBekir


- [x] **Add tests and docs**: Done
- [x] **Lint and test**: Done
2025-04-04 11:47:14 -04:00
Andrew Benton
4e7a9a7014
community: Add support for custom runtimes to Riza tools ()
**Description:**
Adds support for Riza custom runtimes to the two Riza code interpreter
tools, allowing users to run LLM-generated code that depends on
libraries outside stdlib.
**Issue:** N/A
**Dependencies:** None
**Twitter handle:** @rizaio
2025-04-04 11:03:14 -04:00
diego dupin
aa37893c00
MariaDB vector store documentation addition ()
### New Feature

Since version 11.7.1, MariaDB supports vectors. This is a super fast
implementation (see [this perf
blog](https://smalldatum.blogspot.com/2025/01/evaluating-vector-indexes-in-mariadb.html)).
The goal is to support MariaDB with langchain.

Implementation is done in
https://github.com/mariadb-corporation/langchain-mariadb, published in
https://pypi.org/project/langchain-mariadb/

This concerns the doc addition
 

(initial PR https://github.com/langchain-ai/langchain/pull/29989)

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
Co-authored-by: Oskar Stark <oskarstark@googlemail.com>
2025-04-04 14:56:25 +00:00
Sydney Runkle
1cdea6ab07
langchain-community: release 0.3.21 () 2025-04-04 14:14:50 +00:00
Sydney Runkle
901dffe06b
langchain: release 0.3.23 ()
* Bump `text-splitters` min version
* Bump `langchain-core` min version
* Bump `langchain` version 🚀
2025-04-04 10:06:29 -04:00
ccurme
0c2c8c36c1
text-splitters: release 0.3.8 () 2025-04-04 09:58:45 -04:00
ccurme
59d508a2ee
openai[patch]: make computer test more reliable () 2025-04-04 13:53:59 +00:00
Sydney Runkle
c235328b39 Revert "update langchain version and bump min core v"
This reverts commit d0f154dbaa.
2025-04-04 09:31:51 -04:00
Sydney Runkle
d0f154dbaa update langchain version and bump min core v 2025-04-04 09:27:49 -04:00
Sydney Runkle
32cd70d7d2
release: bump core to v0.3.51 () 2025-04-04 13:23:09 +00:00
Max Forsey
18cf457eec
langchain-runpod integration ()
## Description:

This PR adds the necessary documentation for the `langchain-runpod`
partner package integration. It includes:

* A provider page (`docs/docs/integrations/providers/runpod.ipynb`)
explaining the overall setup.
* An LLM component page (`docs/docs/integrations/llms/runpod.ipynb`)
detailing the `RunPod` class usage.
* A Chat Model component page
(`docs/docs/integrations/chat/runpod.ipynb`) detailing the `ChatRunPod`
class usage, including a feature support table.

These documentation files reflect the latest features of the
`langchain-runpod` package (v0.2.0+) such as async support and API
polling logic.

This work also addresses the review feedback provided on the previous
attempt in PR  by:
*   Removing all TODOs from documentation.
*   Adding the required links between provider and component pages.
*   Completing the feature support table in the chat documentation.
*   Linking to the source code on GitHub for API reference.

Finally, it registers the `langchain-runpod` package in
`libs/packages.yml`.

## Dependencies:

None added to the core LangChain repository by these documentation
changes. The required dependency (`langchain-runpod`) is managed as a
separate package.

## Twitter handle:

@runpod_io

---------

Co-authored-by: Max Forsey <maxpod@maxpod.local>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-03 23:57:06 +00:00
NikeHop
9c03cd5775
Fix tool description in serpapi.ipynb ()
- [x] Fix Tool description of SerpAPI tool: "docs: Fix SerpAPI tool
description"



- [ ] Fix SerpAPI tool description: 
- Tool description + name in example initialization of the SerpAPI tool
was still that of the python repl tool.
    - @RLHoeppi

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-03 23:36:29 +00:00
Sydney Runkle
af66ab098e
Adding Perplexity extra and deprecating the community version of ChatPerplexity ()
Plus, some accompanying docs updates

Some compelling usage:

```py
from langchain_perplexity import ChatPerplexity


chat = ChatPerplexity(model="llama-3.1-sonar-small-128k-online")
response = chat.invoke(
    "What were the most significant newsworthy events that occurred in the US recently?",
    extra_body={"search_recency_filter": "week"},
)
print(response.content)
# > Here are the top significant newsworthy events in the US recently: ...
```

Also, some confirmation of structured outputs:

```py
from langchain_perplexity import ChatPerplexity
from pydantic import BaseModel


class AnswerFormat(BaseModel):
    first_name: str
    last_name: str
    year_of_birth: int
    num_seasons_in_nba: int


messages = [
    {"role": "system", "content": "Be precise and concise."},
    {
        "role": "user",
        "content": (
            "Tell me about Michael Jordan. "
            "Please output a JSON object containing the following fields: "
            "first_name, last_name, year_of_birth, num_seasons_in_nba. "
        ),
    },
]

llm = ChatPerplexity(model="llama-3.1-sonar-small-128k-online")
structured_llm = llm.with_structured_output(AnswerFormat)
response = structured_llm.invoke(messages)
print(repr(response))
#> AnswerFormat(first_name='Michael', last_name='Jordan', year_of_birth=1963, num_seasons_in_nba=15)
```
2025-04-03 14:29:17 -04:00
ccurme
b8929e3d5f
docs: add image generation example to Google genai docs () 2025-04-03 14:21:54 -04:00
ccurme
374769e8fe
core[patch]: log information from certain errors ()
Some exceptions raised by SDKs include information in httpx responses
(see for example
[OpenAI](https://github.com/openai/openai-python/blob/main/src/openai/_exceptions.py)).
Here we trace information from those exceptions.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2025-04-03 16:45:19 +00:00
Sydney Runkle
17a9cd61e9
Bump langchain-core version in perplexity's pyproject.toml ()
Blocking v0.1.0 release of `langchain-perplexity`
2025-04-03 16:19:10 +00:00
Sydney Runkle
3814bd1ea7
partners: Add Perplexity Chat Integration ()
Perplexity's importance in the space has been growing, so we think it's
time to add an official integration!

Note: following the release of `langchain-perplexity` to `pypi`, we
should be able to add `perplexity` as an extra in
`libs/langchain/pyproject.toml`, but we're blocked by a circular import
for now.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-03 16:09:14 +00:00
vgrfl
87c02a1aff
docs: Fixed a typo in 'Google AI vs Google Cloud Vertex AI' section ()
**Description:** Corrected 'encription' spelling to 'encryption'
2025-04-03 09:04:29 -04:00
Alejandro Rodríguez
884125e129
community: support usage_metadata for litellm ()
Support "usage_metadata" for LiteLLM. 

2025-04-02 19:45:15 -04:00
Jacob Lee
01d0cfe450
docs: Remove TODO from Ollama docs page () 2025-04-02 22:59:15 +00:00
Christophe Bornet
f241fd5c11
core: Add ruff rules RET ()
See https://docs.astral.sh/ruff/rules/#flake8-return-ret
All auto-fixes
2025-04-02 16:59:56 -04:00
Eugene Yurtsev
9ae792f56c
core: 0.3.50 release ()
0.3.50 release
2025-04-02 14:46:23 -04:00
Christophe Bornet
ccc3d32ec8
core: Add ruff rules for Pylint PLC (Convention) and PLE (Errors) ()
See https://docs.astral.sh/ruff/rules/#pylint-pl
2025-04-02 10:58:03 -04:00
ccurme
fe0fd9dd70
openai[patch]: upgrade tiktoken and fix test ()
Related to https://github.com/langchain-ai/langchain/issues/30344

https://github.com/langchain-ai/langchain/pull/30542 introduced an
erroneous test for token counts for o-series models. tiktoken==0.8 does
not support o-series models in
`tiktoken.encoding_for_model(model_name)`, and this is the version of
tiktoken we had in the lock file. So we would default to `cl100k_base`
for o-series, which is the wrong encoding model. The test tested against
this wrong encoding (so it passed with tiktoken 0.8).

Here we update tiktoken to 0.9 in the lock file, and fix the expected
counts in the test. Verified that we are pulling
[o200k_base](https://github.com/openai/tiktoken/blob/main/tiktoken/model.py#L8),
as expected.
2025-04-02 10:44:48 -04:00
oxy-tg
38807871ec
docs: Add Oxylabs integration ()
Description:
This PR adds documentation for the langchain-oxylabs integration
package.

The documentation includes instructions for configuring Oxylabs
credentials and provides example code demonstrating how to use the
package.

Issue:
N/A

Dependencies:
No new dependencies are required.

Tests and Docs:

Added an example notebook demonstrating the usage of the
Langchain-Oxylabs package, located in docs/docs/integrations.
Added a provider page in docs/docs/providers.
Added a new package to libs/packages.yml.

Lint and Test:

Successfully ran make format, make lint, and make test.
2025-04-02 14:40:32 +00:00
ccurme
816492e1d3
openai: release 0.3.12 () 2025-04-02 13:20:15 +00:00
Bagatur
111dd90a46
openai[patch]: support structured output and tools ()
Co-authored-by: ccurme <chester.curme@gmail.com>
2025-04-02 09:14:02 -04:00
Karol Zmorski
32f7695809
docs: Little update in sample notebook with WatsonxToolkit ()
**Description:**

- Updated sample notebook with valid tools.
2025-04-02 09:08:29 -04:00
Mahir Shah
9d3262c7aa
core: Propagate config_factories in RunnableBinding ()
- **Description:** Propagates config_factories when calling decoration
methods for RunnableBinding--e.g. bind, with_config, with_types,
with_retry, and with_listeners. This ensures that configs attached to
the original RunnableBinding are kept when creating the new
RunnableBinding and the configs are merged during invocation. Picks up
where  left off.
  - **Issue:** 

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-04-01 18:03:58 -04:00
ccurme
8a69de5c24
openai[patch]: ignore file blocks when counting tokens ()
OpenAI does not appear to document how it transforms PDF pages to
images, which determines how tokens are counted:
https://platform.openai.com/docs/guides/pdf-files?api-mode=chat#usage-considerations

Currently these block types raise ValueError inside
`get_num_tokens_from_messages`. Here we update to generate a warning and
continue.
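A hedged sketch of the warn-and-continue behavior; the block shape and
the counting fallback are assumptions, not the langchain-openai source:

```python
import warnings

def count_tokens(blocks) -> int:
    total = 0
    for block in blocks:
        if isinstance(block, dict) and block.get("type") == "file":
            warnings.warn("Token counts for file inputs are unsupported; skipping block.")
            continue
        total += len(str(block)) // 4  # placeholder heuristic, not tiktoken
    return total
```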
2025-04-01 15:29:33 -04:00
Christophe Bornet
558191198f
core: Add ruff rule FBT003 (boolean-trap) ()
See
https://docs.astral.sh/ruff/rules/boolean-positional-value-in-call/#boolean-positional-value-in-call-fbt003
This PR also fixes some FBT001/002 in private methods but does not
enforce these rules globally atm.

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-04-01 17:40:12 +00:00
Christophe Bornet
4f8ea13cea
core: Add ruff rules PERF ()
See https://docs.astral.sh/ruff/rules/#perflint-perf
2025-04-01 13:34:56 -04:00
Christophe Bornet
8a33402016
core: Add ruff rules PT (pytest) ()
See https://docs.astral.sh/ruff/rules/#flake8-pytest-style-pt
2025-04-01 13:31:07 -04:00
Ben Faircloth
6896c863e8
docs: add seekrflow chat model integration docs ()
### **PR title**  
`docs: add SeekrFlow integration notebook`

---

### 💬 **PR message**  
- **Description:**  
This PR adds an integration notebook for
[`ChatSeekrFlow`](https://pypi.org/project/langchain-seekrflow/)
under `docs/docs/integrations/chat/`. Per LangChain’s guidance,
SeekrFlow has been published as a standalone OSS package
(`langchain-seekrflow`) rather than as a direct community integration.
This notebook ensures discoverability, demonstration, and testability of
the integration within LangChain’s documentation structure.

- **Issue:**  
N/A – this is a new integration contribution aligned with LangChain’s
external package policy.

- **Dependencies:**  
- [`langchain-seekrflow`](https://pypi.org/project/langchain-seekrflow)
(published to PyPI)
- [`seekrai`](https://pypi.org/project/seekrai/) (SeekrFlow client SDK)

- **Twitter handle (optional):**  
  @seekrtechnology

---------

Co-authored-by: Ben Faircloth <bfaircloth@seekr.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-04-01 13:18:01 -04:00
Christophe Bornet
768e4f695a
core: Add ruff rules S110 and S112 () 2025-04-01 13:17:22 -04:00
Christophe Bornet
88b4233fa1
core: Add ruff rules D (docstring) ()
This ensures that the code is properly documented:
https://docs.astral.sh/ruff/rules/#pydocstyle-d

Related to 
2025-04-01 13:15:45 -04:00
Andras L Ferenczi
64df60e690
community[minor]: Add custom sitemap URL parameter to GitbookLoader ()
## Description
This PR adds a new `sitemap_url` parameter to the `GitbookLoader` class
that allows users to specify a custom sitemap URL when loading content
from a GitBook site. This is particularly useful for GitBook sites that
use non-standard sitemap file names like `sitemap-pages.xml` instead of
the default `sitemap.xml`.
The standard `GitbookLoader` assumes that the sitemap is located at
`/sitemap.xml`, but some GitBook instances (including GitBook's own
documentation) use different paths for their sitemaps. This parameter
makes the loader more flexible and helps users extract content from a
wider range of GitBook sites.
## Issue
Fixes bug
[30473](https://github.com/langchain-ai/langchain/issues/30473) where
the `GitbookLoader` would fail to find pages on GitBook sites that use
custom sitemap URLs.
## Dependencies
No new dependencies required.
*I've added*:
* Unit tests to verify the parameter works correctly
* Integration tests to confirm the parameter is properly used with real
GitBook sites
* Updated docstrings with parameter documentation
The changes are fully backward compatible, as the parameter is optional
with a sensible default.
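An assumed usage example (URLs are illustrative):

```python
from langchain_community.document_loaders import GitbookLoader

loader = GitbookLoader(
    "https://docs.gitbook.com",
    load_all_paths=True,
    sitemap_url="https://docs.gitbook.com/sitemap-pages.xml",
)
docs = loader.load()
```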

---------

Co-authored-by: andrasfe <andrasf94@gmail.com>
Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2025-04-01 16:17:21 +00:00
Christophe Bornet
fdda1aaea1
core: Accept ALL ruff rules with exclusions ()
This pull request updates the `pyproject.toml` configuration file to
modify the linting rules and ignored warnings for the project. The most
important changes include switching to a more comprehensive selection of
linting rules and updating the list of ignored rules to better align
with the project's requirements.

Linting rules update:

* Changed the `select` option to include all available linting rules by
setting it to `["ALL"]`.

Ignored rules update:

* Updated the `ignore` option to include specific rules that interfere
with the formatter, are incompatible with Pydantic, or are temporarily
excluded due to project constraints.
2025-04-01 11:17:51 -04:00
Kacper Włodarczyk
26a3256fc6
community[major]: DynamoDBChatMessageHistory bulk add messages, raise errors ()
This PR addresses two key issues:

- **Prevent history errors from failing silently**: Previously, errors
in message history were only logged and not raised, which can lead to
inconsistent state and downstream failures (e.g., ValidationError from
Bedrock due to malformed message history). This change ensures that such
errors are raised explicitly, making them easier to detect and debug.
(Side note: I’m using AWS Lambda Powertools Logger but hadn’t configured
it properly with the standard Python logger—my bad. If the error had
been raised, I would’ve seen it in the logs 😄) This is a **BREAKING
CHANGE**

- **Add messages in bulk instead of iteratively**: This introduces a
custom add_messages method to add all messages at once. The previous
approach failed silently when individual messages were too large,
resulting in partial history updates and inconsistent state. With this
change, either all messages are added successfully, or none are—helping
avoid obscure history-related errors from Bedrock (see the sketch below).
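A sketch of the all-or-nothing write; the SessionId/History item layout
is an assumption about the table schema:

```python
from langchain_core.messages import messages_to_dict

def add_messages(self, messages) -> None:
    items = messages_to_dict(self.messages) + messages_to_dict(list(messages))
    # One put_item call: either the whole batch lands, or the ClientError
    # propagates instead of being swallowed by a logger.
    self.table.put_item(Item={"SessionId": self.session_id, "History": items})
```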

---------

Co-authored-by: Kacper Wlodarczyk <kacper.wlodarczyk@chaosgears.com>
2025-04-01 11:13:32 -04:00
Olexandr88
8c8bca68b2
docs: edited the badge to an acceptable size ()
2025-04-01 07:17:12 -04:00
Armaanjeet Singh Sandhu
4bbc249b13
community: Fix attribute access for transcript text in YoutubeLoader (Fixes ) ()
**Description:** 
Fixes a bug in the YoutubeLoader where FetchedTranscript objects were
not properly processed. The loader was only extracting the 'text'
attribute from FetchedTranscriptSnippet objects while ignoring 'start'
and 'duration' attributes. This would cause a TypeError when the code
later tried to access these missing keys, particularly when using the
CHUNKS format or any code path that needed timestamp information.

This PR modifies the conversion of FetchedTranscriptSnippet objects to
include all necessary attributes, ensuring that the loader works
correctly with all transcript formats.
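A sketch of the conversion; attribute names follow the
youtube-transcript-api snippet objects referenced above:

```python
def snippets_to_dicts(fetched_transcript):
    return [
        {"text": s.text, "start": s.start, "duration": s.duration}
        for s in fetched_transcript
    ]
```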

**Issue:** Fixes 

**Dependencies:** None

**Testing:**
- Tested the fix with multiple YouTube videos to confirm it resolves the
issue
- Verified that both regular loading and CHUNKS format work correctly
2025-04-01 07:13:06 -04:00
Ivan Brko
ecff055096
community[minor]: Improve Brave Search Tool, allow api key in env var ()
- **Description:** 

- Make Brave Search Tool consistent with other tools and allow reading
its api key from `BRAVE_SEARCH_API_KEY` instead of having to pass the
api key manually (no breaking changes)
- Improve Brave Search Tool by storing api key in `SecretStr` instead of
plain `str`.
    - Add unit test for `BraveSearchWrapper`
    - Reflect the changes in the documentation
  - **Issue:** N/A
  - **Dependencies:** N/A
  - **Twitter handle:** ivan_brko
2025-03-31 14:48:52 -04:00
ccurme
0c623045b5
core[patch]: pydantic 2.11 compat ()
Release notes: https://pydantic.dev/articles/pydantic-v2-11-release

Covered here:

- We no longer access `model_fields` on class instances (that is now
deprecated);
- Update schema normalization for Pydantic version testing to reflect
changes to generated JSON schema (addition of `"additionalProperties":
True` for dict types with value Any or object).

## Considerations:

### Changes to JSON schema generation

#### Tool-calling / structured outputs

This may impact tool-calling + structured outputs for some providers,
but schema generation only changes if you have parameters of the form
`dict`, `dict[str, Any]`, `dict[str, object]`, etc. If dict parameters
are typed my understanding is there are no changes.

For OpenAI for example, untyped dicts work for structured outputs with
default settings before and after updating Pydantic, and error both
before/after if `strict=True`.

### Use of `model_fields`

There is one spot where we previously accessed `super(cls,
self).model_fields`, where `cls` is an object in the MRO. This was done
for the purpose of tracking aliases in secrets. I've updated this to
always be `type(self).model_fields`-- see comment in-line for detail.
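A small sketch of the deprecation-safe pattern:

```python
from pydantic import BaseModel

class Config(BaseModel):
    name: str = "default"

    def field_names(self) -> list[str]:
        # Look fields up on the class, not the instance.
        return list(type(self).model_fields)
```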

---------

Co-authored-by: Sydney Runkle <54324534+sydney-runkle@users.noreply.github.com>
2025-03-31 14:22:57 -04:00
keshavshrikant
e8be3cca5c
fix huggingface tokenizer default length function ()
2025-03-31 11:54:30 -04:00
Fai LAW
4419340039
docs: add pre_filter usage in similarity_search_with_score (Azure Cosmos DB No SQL) ()
`pre_filter` should be passed in the `Hybrid Search with filtering`
example. Otherwise, it is just an unused variable.
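
A hypothetical fragment of the corrected example (`vector_store` and
`pre_filter` come from the notebook's earlier cells; the keyword name is
taken from the PR title):

```python
# Pass the previously-unused filter through to the search call.
docs_and_scores = vector_store.similarity_search_with_score(
    "What did the president say?",
    k=4,
    pre_filter=pre_filter,
)
```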
2025-03-31 11:33:00 -04:00
Wenqi Li
64f97e707e
ollama[patch]: Support seed param for OllamaLLM ()
**Description:** add the `seed` param to `OllamaLLM` for client
reproducibility (see the usage sketch below)

**Issue:** follow-up to a similar issue,
https://github.com/langchain-ai/langchain/issues/24703;
see also https://github.com/langchain-ai/langchain/pull/24782

**Dependencies:** n/a
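
A minimal usage sketch (the model name is an assumption):

```python
from langchain_ollama import OllamaLLM

# A fixed seed makes sampling reproducible across invocations, mirroring
# Ollama's own `seed` option.
llm = OllamaLLM(model="llama3", seed=42)
print(llm.invoke("Say something random."))
```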
2025-03-31 11:28:49 -04:00
Christophe Bornet
8395abbb42
core: Fix test_stream_error_callback ()
Fixes 

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-03-31 10:37:22 -04:00
Jorge Piedrahita Ortiz
b9e19c5f97
Docs: Add sambanova cloud embeddings docs ()
- **Description:** Add SambaNova Cloud embeddings docs. Previously only
SambaStudio embeddings were supported; with the latest release of
`langchain_sambanova`, SambaNova Cloud embeddings are also available
(see the sketch below)
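
A usage sketch assuming the new class mirrors the package's
`SambaStudioEmbeddings` naming; both the class and model names here are
assumptions:

```python
from langchain_sambanova import SambaNovaCloudEmbeddings  # name assumed

embeddings = SambaNovaCloudEmbeddings(model="E5-Mistral-7B-Instruct")
vector = embeddings.embed_query("What is a SambaNova Cloud embedding?")
print(len(vector))
```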
2025-03-31 10:16:15 -04:00
Augusto César Perin
f4d1df1b2d
docs: add missing with_config method to Runnable templates API reference ()
Broken source/docs links for Runnable methods

### What was changed
Added the `with_config` method to the method lists in both Runnable
template files:
- docs/api_reference/templates/runnable_non_pydantic.rst
- docs/api_reference/templates/runnable_pydantic.rst
2025-03-31 10:08:02 -04:00
Christophe Bornet
026de908eb
core: Add ruff rules G, FA, INP, AIR and ISC ()
Fixes mostly for rules G. See
https://docs.astral.sh/ruff/rules/#flake8-logging-format-g
2025-03-31 10:05:23 -04:00
Brayden Zhong
e4515f308f
community: update RankLLM integration and fix LangChain deprecation ()
# Community: update RankLLM integration and fix LangChain deprecation

- [x] **Description:**  
- Removed `ModelType` enum (`VICUNA`, `ZEPHYR`, `GPT`) to align with
RankLLM's latest implementation.
- Updated `chain({query})` to `chain.invoke({query})` to resolve
LangChain 0.1.0 deprecation warnings from
https://github.com/langchain-ai/langchain/pull/29840 (see the sketch
after this checklist).

- [x] **Dependencies:** No new dependencies added.  

- [x] **Tests and Docs:**  
- Updated RankLLM documentation
(`docs/docs/integrations/document_transformers/rankllm-reranker.ipynb`).
  - Fixed LangChain usage in related code examples.  

- [x] **Lint and Test:**  
- Ran `make format`, `make lint`, and verified functionality after
updates.
  - No breaking changes introduced.  
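
For reference, a minimal sketch of the call-style migration applied in
the examples (chain construction elided):

```python
# Deprecated since LangChain 0.1.0: calling the chain object directly.
# result = chain({"query": "What is RankLLM?"})

# Current style:
result = chain.invoke({"query": "What is RankLLM?"})
```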


---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-31 09:50:00 -04:00
3130 changed files with 30678 additions and 388609 deletions

View File

@ -29,14 +29,14 @@ body:
options:
- label: I added a very descriptive title to this issue.
required: true
- label: I searched the LangChain documentation with the integrated search.
required: true
- label: I used the GitHub search to find a similar question and didn't find it.
required: true
- label: I am sure that this is a bug in LangChain rather than my code.
required: true
- label: The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
required: true
- label: I posted a self-contained, minimal, reproducible example. A maintainer can copy it and run it AS IS.
required: true
- type: textarea
id: reproduction
validations:

View File

@ -1,8 +1,8 @@
Thank you for contributing to LangChain!
- [ ] **PR title**: "package: description"
- Where "package" is whichever of langchain, community, core, etc. is being modified. Use "docs: ..." for purely docs changes, "infra: ..." for CI changes.
- Example: "community: add foobar LLM"
- Where "package" is whichever of langchain, core, etc. is being modified. Use "docs: ..." for purely docs changes, "infra: ..." for CI changes.
- Example: "core: add foobar LLM"
- [ ] **PR message**: ***Delete this entire checklist*** and replace with
@ -24,6 +24,5 @@ Additional guidelines:
- Please do not add dependencies to pyproject.toml files (even optional ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in langchain.
If no one reviews your PR within a few days, please @-mention one of baskaryan, eyurtsev, ccurme, vbarda, hwchase17.

View File

@ -16,7 +16,6 @@ LANGCHAIN_DIRS = [
"libs/core",
"libs/text-splitters",
"libs/langchain",
"libs/community",
]
# when set to True, we are ignoring core dependents
@ -38,8 +37,8 @@ IGNORED_PARTNERS = [
]
PY_312_MAX_PACKAGES = [
"libs/partners/huggingface", # https://github.com/pytorch/pytorch/issues/130249
"libs/partners/voyageai",
"libs/partners/chroma", # https://github.com/chroma-core/chroma/issues/4382
]
@ -134,12 +133,6 @@ def _get_configs_for_single_dir(job: str, dir_: str) -> List[Dict[str, str]]:
elif dir_ == "libs/langchain" and job == "extended-tests":
py_versions = ["3.9", "3.13"]
elif dir_ == "libs/community" and job == "extended-tests":
py_versions = ["3.9", "3.12"]
elif dir_ == "libs/community" and job == "compile-integration-tests":
# community integration deps are slow in 3.12
py_versions = ["3.9", "3.11"]
elif dir_ == ".":
# unable to install with 3.13 because tokenizers doesn't support 3.13 yet
py_versions = ["3.9", "3.12"]
@ -184,11 +177,6 @@ def _get_pydantic_test_configs(
else "0"
)
custom_mins = {
# depends on pydantic-settings 2.4 which requires pydantic 2.7
"libs/community": 7,
}
max_pydantic_minor = min(
int(dir_max_pydantic_minor),
int(core_max_pydantic_minor),
@ -196,7 +184,6 @@ def _get_pydantic_test_configs(
min_pydantic_minor = max(
int(dir_min_pydantic_minor),
int(core_min_pydantic_minor),
custom_mins.get(dir_, 0),
)
configs = [

View File

@ -22,7 +22,6 @@ import re
MIN_VERSION_LIBS = [
"langchain-core",
"langchain-community",
"langchain",
"langchain-text-splitters",
"numpy",
@ -35,7 +34,6 @@ SKIP_IF_PULL_REQUEST = [
"langchain-core",
"langchain-text-splitters",
"langchain",
"langchain-community",
]

View File

@ -20,6 +20,8 @@ def get_target_dir(package_name: str) -> Path:
base_path = Path("langchain/libs")
if package_name_short == "experimental":
return base_path / "experimental"
if package_name_short == "community":
return base_path / "community"
return base_path / "partners" / package_name_short
@ -69,7 +71,7 @@ def main():
clean_target_directories([
p
for p in package_yaml["packages"]
if p["repo"].startswith("langchain-ai/")
if (p["repo"].startswith("langchain-ai/") or p.get("include_in_api_ref"))
and p["repo"] != "langchain-ai/langchain"
])
@ -78,7 +80,7 @@ def main():
p
for p in package_yaml["packages"]
if not p.get("disabled", False)
and p["repo"].startswith("langchain-ai/")
and (p["repo"].startswith("langchain-ai/") or p.get("include_in_api_ref"))
and p["repo"] != "langchain-ai/langchain"
])

View File

@ -1,4 +1,3 @@
libs/community/langchain_community/llms/yuan2.py
"NotIn": "not in",
- `/checkin`: Check-in
docs/docs/integrations/providers/trulens.mdx

View File

@ -34,11 +34,6 @@ jobs:
shell: bash
run: uv sync --group test --group test_integration
- name: Install deps outside pyproject
if: ${{ startsWith(inputs.working-directory, 'libs/community/') }}
shell: bash
run: VIRTUAL_ENV=.venv uv pip install "boto3<2" "google-cloud-aiplatform<2"
- name: Run integration tests
shell: bash
env:
@ -76,6 +71,7 @@ jobs:
COHERE_API_KEY: ${{ secrets.COHERE_API_KEY }}
UPSTAGE_API_KEY: ${{ secrets.UPSTAGE_API_KEY }}
XAI_API_KEY: ${{ secrets.XAI_API_KEY }}
PPLX_API_KEY: ${{ secrets.PPLX_API_KEY }}
run: |
make integration_tests

View File

@ -327,6 +327,7 @@ jobs:
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
XAI_API_KEY: ${{ secrets.XAI_API_KEY }}
DEEPSEEK_API_KEY: ${{ secrets.DEEPSEEK_API_KEY }}
PPLX_API_KEY: ${{ secrets.PPLX_API_KEY }}
run: make integration_tests
working-directory: ${{ inputs.working-directory }}
@ -394,8 +395,11 @@ jobs:
# Checkout the latest package files
rm -rf $GITHUB_WORKSPACE/libs/partners/${{ matrix.partner }}/*
cd $GITHUB_WORKSPACE/libs/partners/${{ matrix.partner }}
git checkout "$LATEST_PACKAGE_TAG" -- .
rm -rf $GITHUB_WORKSPACE/libs/standard-tests/*
cd $GITHUB_WORKSPACE/libs/
git checkout "$LATEST_PACKAGE_TAG" -- standard-tests/
git checkout "$LATEST_PACKAGE_TAG" -- partners/${{ matrix.partner }}/
cd partners/${{ matrix.partner }}
# Print as a sanity check
echo "Version number from pyproject.toml: "

View File

@ -30,7 +30,7 @@ jobs:
- name: Install langchain editable
run: |
VIRTUAL_ENV=.venv uv pip install langchain-experimental -e libs/core libs/langchain libs/community
VIRTUAL_ENV=.venv uv pip install langchain-experimental langchain-community -e libs/core libs/langchain
- name: Check doc imports
shell: bash

View File

@ -26,7 +26,20 @@ jobs:
id: get-unsorted-repos
uses: mikefarah/yq@master
with:
cmd: yq '.packages[].repo' langchain/libs/packages.yml
cmd: |
yq '
.packages[]
| select(
(
(.repo | test("^langchain-ai/"))
and
(.repo != "langchain-ai/langchain")
)
or
(.include_in_api_ref // false)
)
| .repo
' langchain/libs/packages.yml
- name: Parse YAML and checkout repos
env:
@ -38,11 +51,9 @@ jobs:
# Checkout each unique repository that is in langchain-ai org
for repo in $REPOS; do
if [[ "$repo" != "langchain-ai/langchain" && "$repo" == langchain-ai/* ]]; then
REPO_NAME=$(echo $repo | cut -d'/' -f2)
echo "Checking out $repo to $REPO_NAME"
git clone --depth 1 https://github.com/$repo.git $REPO_NAME
fi
REPO_NAME=$(echo $repo | cut -d'/' -f2)
echo "Checking out $repo to $REPO_NAME"
git clone --depth 1 https://github.com/$repo.git $REPO_NAME
done
- name: Setup python ${{ env.PYTHON_VERSION }}

View File

@ -0,0 +1,29 @@
name: Check `langchain-core` version equality
on:
pull_request:
paths:
- 'libs/core/pyproject.toml'
- 'libs/core/langchain_core/version.py'
jobs:
check_version_equality:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Check version equality
run: |
PYPROJECT_VERSION=$(grep -Po '(?<=^version = ")[^"]*' libs/core/pyproject.toml)
VERSION_PY_VERSION=$(grep -Po '(?<=^VERSION = ")[^"]*' libs/core/langchain_core/version.py)
# Compare the two versions
if [ "$PYPROJECT_VERSION" != "$VERSION_PY_VERSION" ]; then
echo "langchain-core versions in pyproject.toml and version.py do not match!"
echo "pyproject.toml version: $PYPROJECT_VERSION"
echo "version.py version: $VERSION_PY_VERSION"
exit 1
else
echo "Versions match: $PYPROJECT_VERSION"
fi

.github/workflows/codspeed.yml (new file)
View File

@ -0,0 +1,44 @@
name: CodSpeed
on:
push:
branches:
- master
pull_request:
paths:
- 'libs/core/**'
# `workflow_dispatch` allows CodSpeed to trigger backtest
# performance analysis in order to generate initial data.
workflow_dispatch:
jobs:
codspeed:
name: Run benchmarks
if: (github.event_name == 'pull_request' && contains(github.event.pull_request.labels.*.name, 'run-codspeed-benchmarks')) || github.event_name == 'workflow_dispatch' || github.event_name == 'push'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
# We have to use 3.12, 3.13 is not yet supported
- name: Install uv
uses: astral-sh/setup-uv@v5
with:
python-version: "3.12"
# Using this action is still necessary for CodSpeed to work
- uses: actions/setup-python@v3
with:
python-version: "3.12"
- name: install deps
run: uv sync --group test
working-directory: ./libs/core
- name: Run benchmarks
uses: CodSpeedHQ/action@v3
with:
token: ${{ secrets.CODSPEED_TOKEN }}
run: |
cd libs/core
uv run --no-sync pytest ./tests/benchmarks --codspeed
mode: walltime

View File

@ -6,11 +6,6 @@ on:
push:
branches: [jacob/people]
workflow_dispatch:
inputs:
debug_enabled:
description: 'Run the build with tmate debugging enabled (https://github.com/marketplace/actions/debugging-with-tmate)'
required: false
default: 'false'
jobs:
langchain-people:
@ -26,12 +21,6 @@ jobs:
# Ref: https://github.com/actions/runner/issues/2033
- name: Fix git safe.directory in container
run: mkdir -p /home/runner/work/_temp/_github_home && printf "[safe]\n\tdirectory = /github/workspace" > /home/runner/work/_temp/_github_home/.gitconfig
# Allow debugging with tmate
- name: Setup tmate session
uses: mxschmitt/action-tmate@v3
if: ${{ github.event_name == 'workflow_dispatch' && github.event.inputs.debug_enabled == 'true' }}
with:
limit-access-to-actor: true
- uses: ./.github/actions/people
with:
token: ${{ secrets.LANGCHAIN_PEOPLE_GITHUB_TOKEN }}

View File

@ -61,6 +61,7 @@ jobs:
env:
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}

View File

@ -145,6 +145,7 @@ jobs:
GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
GOOGLE_SEARCH_API_KEY: ${{ secrets.GOOGLE_SEARCH_API_KEY }}
GOOGLE_CSE_ID: ${{ secrets.GOOGLE_CSE_ID }}
PPLX_API_KEY: ${{ secrets.PPLX_API_KEY }}
run: |
cd langchain/${{ matrix.working-directory }}
make integration_tests

.gitignore
View File

@ -59,6 +59,7 @@ coverage.xml
*.py,cover
.hypothesis/
.pytest_cache/
.codspeed/
# Translations
*.mo

View File

@ -7,12 +7,6 @@ repos:
entry: make -C libs/core format
files: ^libs/core/
pass_filenames: false
- id: community
name: format community
language: system
entry: make -C libs/community format
files: ^libs/community/
pass_filenames: false
- id: langchain
name: format langchain
language: system

View File

@ -15,8 +15,9 @@
[![GitHub star chart](https://img.shields.io/github/stars/langchain-ai/langchain?style=flat-square)](https://star-history.com/#langchain-ai/langchain)
[![Open Issues](https://img.shields.io/github/issues-raw/langchain-ai/langchain?style=flat-square)](https://github.com/langchain-ai/langchain/issues)
[![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode&style=flat-square)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)
[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/langchain-ai/langchain)
[<img src="https://github.com/codespaces/badge.svg" title="Open in Github Codespace" width="150" height="20">](https://codespaces.new/langchain-ai/langchain)
[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchainai.svg?style=social&label=Follow%20%40LangChainAI)](https://twitter.com/langchainai)
[![CodSpeed Badge](https://img.shields.io/endpoint?url=https://codspeed.io/badge.json)](https://codspeed.io/langchain-ai/langchain)
> [!NOTE]
> Looking for the JS/TS library? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs).

View File

@ -30,7 +30,7 @@
"outputs": [],
"source": [
"# lock to 0.10.19 due to a persistent bug in more recent versions\n",
"! pip install \"unstructured[all-docs]==0.10.19\" pillow pydantic lxml pillow matplotlib tiktoken open_clip_torch torch"
"! pip install \"unstructured[all-docs]==0.10.19\" pillow pydantic lxml matplotlib tiktoken open_clip_torch torch"
]
},
{
@ -409,7 +409,7 @@
" table_summaries,\n",
" tables,\n",
" image_summaries,\n",
" image_summaries,\n",
" img_base64_list,\n",
")"
]
},

View File

@ -11,6 +11,7 @@
import json
import os
import sys
from datetime import datetime
from pathlib import Path
import toml
@ -104,7 +105,7 @@ def skip_private_members(app, what, name, obj, skip, options):
# -- Project information -----------------------------------------------------
project = "🦜🔗 LangChain"
copyright = "2023, LangChain Inc"
copyright = f"{datetime.now().year}, LangChain Inc"
author = "LangChain, Inc"
html_favicon = "_static/img/brand/favicon.png"
@ -275,3 +276,7 @@ if os.environ.get("READTHEDOCS", "") == "True":
html_context["READTHEDOCS"] = True
master_doc = "index"
# If a signatures length in characters exceeds 60,
# each parameter within the signature will be displayed on an individual logical line
maximum_signature_line_length = 60

View File

@ -7,7 +7,7 @@
.. NOTE:: {{objname}} implements the standard :py:class:`Runnable Interface <langchain_core.runnables.base.Runnable>`. 🏃
The :py:class:`Runnable Interface <langchain_core.runnables.base.Runnable>` has additional methods that are available on runnables, such as :py:meth:`with_types <langchain_core.runnables.base.Runnable.with_types>`, :py:meth:`with_retry <langchain_core.runnables.base.Runnable.with_retry>`, :py:meth:`assign <langchain_core.runnables.base.Runnable.assign>`, :py:meth:`bind <langchain_core.runnables.base.Runnable.bind>`, :py:meth:`get_graph <langchain_core.runnables.base.Runnable.get_graph>`, and more.
The :py:class:`Runnable Interface <langchain_core.runnables.base.Runnable>` has additional methods that are available on runnables, such as :py:meth:`with_config <langchain_core.runnables.base.Runnable.with_config>`, :py:meth:`with_types <langchain_core.runnables.base.Runnable.with_types>`, :py:meth:`with_retry <langchain_core.runnables.base.Runnable.with_retry>`, :py:meth:`assign <langchain_core.runnables.base.Runnable.assign>`, :py:meth:`bind <langchain_core.runnables.base.Runnable.bind>`, :py:meth:`get_graph <langchain_core.runnables.base.Runnable.get_graph>`, and more.
{% block attributes %}
{% if attributes %}

View File

@ -19,6 +19,6 @@
.. NOTE:: {{objname}} implements the standard :py:class:`Runnable Interface <langchain_core.runnables.base.Runnable>`. 🏃
The :py:class:`Runnable Interface <langchain_core.runnables.base.Runnable>` has additional methods that are available on runnables, such as :py:meth:`with_types <langchain_core.runnables.base.Runnable.with_types>`, :py:meth:`with_retry <langchain_core.runnables.base.Runnable.with_retry>`, :py:meth:`assign <langchain_core.runnables.base.Runnable.assign>`, :py:meth:`bind <langchain_core.runnables.base.Runnable.bind>`, :py:meth:`get_graph <langchain_core.runnables.base.Runnable.get_graph>`, and more.
The :py:class:`Runnable Interface <langchain_core.runnables.base.Runnable>` has additional methods that are available on runnables, such as :py:meth:`with_config <langchain_core.runnables.base.Runnable.with_config>`, :py:meth:`with_types <langchain_core.runnables.base.Runnable.with_types>`, :py:meth:`with_retry <langchain_core.runnables.base.Runnable.with_retry>`, :py:meth:`assign <langchain_core.runnables.base.Runnable.assign>`, :py:meth:`bind <langchain_core.runnables.base.Runnable.bind>`, :py:meth:`get_graph <langchain_core.runnables.base.Runnable.get_graph>`, and more.
.. example_links:: {{ objname }}

View File

@ -6,5 +6,5 @@
- `BaseChatModel` methods `__call__`, `call_as_llm`, `predict`, `predict_messages`. Will be removed in 0.2.0. Use `BaseChatModel.invoke` instead.
- `BaseChatModel` methods `apredict`, `apredict_messages`. Will be removed in 0.2.0. Use `BaseChatModel.ainvoke` instead.
- `BaseLLM` methods `__call__, `predict`, `predict_messages`. Will be removed in 0.2.0. Use `BaseLLM.invoke` instead.
- `BaseLLM` methods `__call__`, `predict`, `predict_messages`. Will be removed in 0.2.0. Use `BaseLLM.invoke` instead.
- `BaseLLM` methods `apredict`, `apredict_messages`. Will be removed in 0.2.0. Use `BaseLLM.ainvoke` instead.

View File

@ -15,7 +15,10 @@
* [Messages](/docs/concepts/messages)
:::
Multimodal support is still relatively new and less common, model providers have not yet standardized on the "best" way to define the API. As such, LangChain's multimodal abstractions are lightweight and flexible, designed to accommodate different model providers' APIs and interaction patterns, but are **not** standardized across models.
LangChain supports multimodal data as input to chat models:
1. Following provider-specific formats
2. Adhering to a cross-provider standard (see [how-to guides](/docs/how_to/#multimodal) for detail)
### How to use multimodal models
@ -26,38 +29,85 @@ Multimodal support is still relatively new and less common, model providers have
#### Inputs
Some models can accept multimodal inputs, such as images, audio, video, or files. The types of multimodal inputs supported depend on the model provider. For instance, [Google's Gemini](/docs/integrations/chat/google_generative_ai/) supports documents like PDFs as inputs.
Some models can accept multimodal inputs, such as images, audio, video, or files.
The types of multimodal inputs supported depend on the model provider. For instance,
[OpenAI](/docs/integrations/chat/openai/),
[Anthropic](/docs/integrations/chat/anthropic/), and
[Google Gemini](/docs/integrations/chat/google_generative_ai/)
support documents like PDFs as inputs.
Most chat models that support **multimodal inputs** also accept those values in OpenAI's content blocks format. So far this is restricted to image inputs. For models like Gemini which support video and other bytes input, the APIs also support the native, model-specific representations.
The gist of passing multimodal inputs to a chat model is to use content blocks that specify a type and corresponding data. For example, to pass an image to a chat model:
The gist of passing multimodal inputs to a chat model is to use content blocks that
specify a type and corresponding data. For example, to pass an image to a chat model
as URL:
```python
from langchain_core.messages import HumanMessage
message = HumanMessage(
content=[
{"type": "text", "text": "describe the weather in this image"},
{"type": "text", "text": "Describe the weather in this image:"},
{
"type": "image",
"source_type": "url",
"url": "https://...",
},
],
)
response = model.invoke([message])
```
We can also pass the image as in-line data:
```python
from langchain_core.messages import HumanMessage
message = HumanMessage(
content=[
{"type": "text", "text": "Describe the weather in this image:"},
{
"type": "image",
"source_type": "base64",
"data": "<base64 string>",
"mime_type": "image/jpeg",
},
],
)
response = model.invoke([message])
```
To pass a PDF file as in-line data (or URL, as supported by providers such as
Anthropic), just change `"type"` to `"file"` and `"mime_type"` to `"application/pdf"`.
See the [how-to guides](/docs/how_to/#multimodal) for more detail.
Most chat models that support multimodal **image** inputs also accept those values in
OpenAI's [Chat Completions format](https://platform.openai.com/docs/guides/images?api-mode=chat):
```python
from langchain_core.messages import HumanMessage
message = HumanMessage(
content=[
{"type": "text", "text": "Describe the weather in this image:"},
{"type": "image_url", "image_url": {"url": image_url}},
],
)
response = model.invoke([message])
```
:::caution
The exact format of the content blocks may vary depending on the model provider. Please refer to the chat model's
integration documentation for the correct format. Find the integration in the [chat model integration table](/docs/integrations/chat/).
:::
Otherwise, chat models will typically accept the native, provider-specific content
block format. See [chat model integrations](/docs/integrations/chat/) for detail
on specific providers.
#### Outputs
Virtually no popular chat models support multimodal outputs at the time of writing (October 2024).
Some chat models support multimodal outputs, such as images and audio. Multimodal
outputs will appear as part of the [AIMessage](/docs/concepts/messages/#aimessage)
response object. See for example:
The only exception is OpenAI's chat model ([gpt-4o-audio-preview](/docs/integrations/chat/openai/)), which can generate audio outputs.
Multimodal outputs will appear as part of the [AIMessage](/docs/concepts/messages/#aimessage) response object.
Please see the [ChatOpenAI](/docs/integrations/chat/openai/) for more information on how to use multimodal outputs.
- Generating [audio outputs](/docs/integrations/chat/openai/#audio-generation-preview) with OpenAI;
- Generating [image outputs](/docs/integrations/chat/google_generative_ai/#multimodal-usage) with Google Gemini.
#### Tools

View File

@ -92,7 +92,7 @@ structured_model = model.with_structured_output(Questions)
# Define the system prompt
system = """You are a helpful assistant that generates multiple sub-questions related to an input question. \n
The goal is to break down the input into a set of sub-problems / sub-questions that can be answers in isolation. \n"""
The goal is to break down the input into a set of sub-problems / sub-questions that can be answered independently. \n"""
# Pass the question to the model
question = """What are the main components of an LLM-powered autonomous agent system?"""

View File

@ -126,7 +126,7 @@ Please see the [Configurable Runnables](#configurable-runnables) section for mor
LangChain will automatically try to infer the input and output types of a Runnable based on available information.
Currently, this inference does not work well for more complex Runnables that are built using [LCEL](/docs/concepts/lcel) composition, and the inferred input and / or output types may be incorrect. In these cases, we recommend that users override the inferred input and output types using the `with_types` method ([API Reference](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.with_types
).
)).
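For instance, a minimal sketch with a `RunnableLambda` whose dict input cannot be inferred:
```python
from langchain_core.runnables import RunnableLambda

runnable = RunnableLambda(lambda x: x["text"].upper()).with_types(
    input_type=dict,  # declare the expected input type
    output_type=str,  # declare the produced output type
)
```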
## RunnableConfig
@ -194,7 +194,7 @@ In Python 3.11 and above, this works out of the box, and you do not need to do a
In Python 3.9 and 3.10, if you are using **async code**, you need to manually pass the `RunnableConfig` through to the `Runnable` when invoking it.
This is due to a limitation in [asyncio's tasks](https://docs.python.org/3/library/asyncio-task.html#asyncio.create_task) in Python 3.9 and 3.10 which did
not accept a `context` argument).
not accept a `context` argument.
Propagating the `RunnableConfig` manually is done like so:
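The example itself falls outside this hunk; a minimal sketch, assuming a child runnable invoked from a custom async function:
```python
import asyncio

from langchain_core.runnables import RunnableConfig, RunnableLambda

reverse = RunnableLambda(lambda s: s[::-1])


async def call_child(text: str, config: RunnableConfig) -> str:
    # On Python 3.9/3.10, pass the config through explicitly so that
    # callbacks, tags, and metadata reach the child runnable.
    return await reverse.ainvoke(text, config=config)


chain = RunnableLambda(call_child)
print(asyncio.run(chain.ainvoke("hello", config={"tags": ["demo"]})))
```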

View File

@ -66,7 +66,7 @@ This API works with a list of [Document](https://python.langchain.com/api_refere
from langchain_core.documents import Document
document_1 = Document(
page_content="I had chocalate chip pancakes and scrambled eggs for breakfast this morning.",
page_content="I had chocolate chip pancakes and scrambled eggs for breakfast this morning.",
metadata={"source": "tweet"},
)

View File

@ -13,23 +13,33 @@ Install `uv`: **[documentation on how to install it](https://docs.astral.sh/uv/g
This repository contains multiple packages:
- `langchain-core`: Base interfaces for key abstractions as well as logic for combining them in chains (LangChain Expression Language).
- `langchain-community`: Third-party integrations of various components.
- `langchain`: Chains, agents, and retrieval logic that makes up the cognitive architecture of your applications.
- `langchain-experimental`: Components and chains that are experimental, either in the sense that the techniques are novel and still being tested, or they require giving the LLM more access than would be possible in most production systems.
- Partner integrations: Partner packages in `libs/partners` that are independently version controlled.
:::note
Some LangChain packages live outside the monorepo, see for example
[langchain-community](https://github.com/langchain-ai/langchain-community) for various
third-party integrations and
[langchain-experimental](https://github.com/langchain-ai/langchain-experimental) for
abstractions that are experimental (either in the sense that the techniques are novel
and still being tested, or they require giving the LLM more access than would be
possible in most production systems).
:::
Each of these has its own development environment. Docs are run from the top-level makefile, but development
is split across separate test & release flows.
For this quickstart, start with langchain-community:
For this quickstart, start with `langchain`:
```bash
cd libs/community
cd libs/langchain
```
## Local Development Dependencies
Install langchain-community development requirements (for running langchain, running examples, linting, formatting, tests, and coverage):
Install development requirements (for running langchain, running examples, linting, formatting, tests, and coverage):
```bash
uv sync
@ -62,22 +72,15 @@ make docker_tests
There are also [integration tests and code-coverage](../testing.mdx) available.
### Only develop langchain_core or langchain_community
### Developing langchain_core
If you are only developing `langchain_core` or `langchain_community`, you can simply install the dependencies for the respective projects and run tests:
If you are only developing `langchain_core`, you can simply install the dependencies for the project and run tests:
```bash
cd libs/core
make test
```
Or:
```bash
cd libs/community
make test
```
## Formatting and Linting
Run these locally before submitting a PR; the CI system will also check them.
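Concretely (assuming the same per-package Makefile targets used for testing above):
```bash
make format  # auto-format the code
make lint    # run the linters
```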

View File

@ -83,7 +83,6 @@ LinkedIn, where we highlight the best examples.
Here are some heuristics for types of content we are excited to promote:
- **Integration announcement:** Announcements of new integrations with a link to the LangChain documentation page.
- **Educational content:** Blogs, YouTube videos and other media showcasing educational content. Note that we prefer content that is NOT framed as "here's how to use integration XYZ", but rather "here's how to do ABC", as we find that is more educational and helpful for developers.
- **End-to-end applications:** End-to-end applications are great resources for developers looking to build. We prefer to highlight applications that are more complex/agentic in nature, and that use [LangGraph](https://github.com/langchain-ai/langgraph) as the orchestration framework. We get particularly excited about anything involving long-term memory, human-in-the-loop interaction patterns, or multi-agent architectures.
- **Research:** We love highlighting novel research! Whether it is research built on top of LangChain or that integrates with it.

View File

@ -40,7 +40,7 @@
"\n",
"To view the list of separators for a given language, pass a value from this enum into\n",
"```python\n",
"RecursiveCharacterTextSplitter.get_separators_for_language`\n",
"RecursiveCharacterTextSplitter.get_separators_for_language\n",
"```\n",
"\n",
"To instantiate a splitter that is tailored for a specific language, pass a value from the enum into\n",

View File

@ -50,6 +50,7 @@ See [supported integrations](/docs/integrations/chat/) for details on getting st
- [How to: force a specific tool call](/docs/how_to/tool_choice)
- [How to: work with local models](/docs/how_to/local_llms)
- [How to: init any model in one line](/docs/how_to/chat_models_universal_init/)
- [How to: pass multimodal data directly to models](/docs/how_to/multimodal_inputs/)
### Messages
@ -67,6 +68,7 @@ See [supported integrations](/docs/integrations/chat/) for details on getting st
- [How to: use few shot examples in chat models](/docs/how_to/few_shot_examples_chat/)
- [How to: partially format prompt templates](/docs/how_to/prompts_partial)
- [How to: compose prompts together](/docs/how_to/prompts_composition)
- [How to: use multimodal prompts](/docs/how_to/multimodal_prompts/)
### Example selectors
@ -170,7 +172,7 @@ Indexing is the process of keeping your vectorstore in-sync with the underlying
### Tools
LangChain [Tools](/docs/concepts/tools) contain a description of the tool (to pass to the language model) as well as the implementation of the function to call. Refer [here](/docs/integrations/tools/) for a list of pre-buit tools.
LangChain [Tools](/docs/concepts/tools) contain a description of the tool (to pass to the language model) as well as the implementation of the function to call. Refer [here](/docs/integrations/tools/) for a list of pre-built tools.
- [How to: create tools](/docs/how_to/custom_tools)
- [How to: use built-in tools and toolkits](/docs/how_to/tools_builtin)
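As a minimal sketch of creating a custom tool with the `@tool` decorator (the function here is an arbitrary example):
```python
from langchain_core.tools import tool


@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b
```
The docstring becomes the description passed to the language model.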
@ -351,7 +353,7 @@ LangSmith allows you to closely trace, monitor and evaluate your LLM application
It seamlessly integrates with LangChain and LangGraph, and you can use it to inspect and debug individual steps of your chains and agents as you build.
LangSmith documentation is hosted on a separate site.
You can peruse [LangSmith how-to guides here](https://docs.smith.langchain.com/how_to_guides/), but we'll highlight a few sections that are particularly
You can peruse [LangSmith how-to guides here](https://docs.smith.langchain.com/), but we'll highlight a few sections that are particularly
relevant to LangChain below:
### Evaluation

View File

@ -5,120 +5,165 @@
"id": "4facdf7f-680e-4d28-908b-2b8408e2a741",
"metadata": {},
"source": [
"# How to pass multimodal data directly to models\n",
"# How to pass multimodal data to models\n",
"\n",
"Here we demonstrate how to pass [multimodal](/docs/concepts/multimodality/) input directly to models. \n",
"We currently expect all input to be passed in the same format as [OpenAI expects](https://platform.openai.com/docs/guides/vision).\n",
"For other model providers that support multimodal input, we have added logic inside the class to convert to the expected format.\n",
"Here we demonstrate how to pass [multimodal](/docs/concepts/multimodality/) input directly to models.\n",
"\n",
"In this example we will ask a [model](/docs/concepts/chat_models/#multimodality) to describe an image."
"LangChain supports multimodal data as input to chat models:\n",
"\n",
"1. Following provider-specific formats\n",
"2. Adhering to a cross-provider standard\n",
"\n",
"Below, we demonstrate the cross-provider standard. See [chat model integrations](/docs/integrations/chat/) for detail\n",
"on native formats for specific providers.\n",
"\n",
":::note\n",
"\n",
"Most chat models that support multimodal **image** inputs also accept those values in\n",
"OpenAI's [Chat Completions format](https://platform.openai.com/docs/guides/images?api-mode=chat):\n",
"\n",
"```python\n",
"{\n",
" \"type\": \"image_url\",\n",
" \"image_url\": {\"url\": image_url},\n",
"}\n",
"```\n",
":::"
]
},
{
"cell_type": "markdown",
"id": "e30a4ff0-ab38-41a7-858c-a93f99bb2f1b",
"metadata": {},
"source": [
"## Images\n",
"\n",
"Many providers will accept images passed in-line as base64 data. Some will additionally accept an image from a URL directly.\n",
"\n",
"### Images from base64 data\n",
"\n",
"To pass images in-line, format them as content blocks of the following form:\n",
"\n",
"```python\n",
"{\n",
" \"type\": \"image\",\n",
" \"source_type\": \"base64\",\n",
" \"mime_type\": \"image/jpeg\", # or image/png, etc.\n",
" \"data\": \"<base64 data string>\",\n",
"}\n",
"```\n",
"\n",
"Example:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "0d9fd81a-b7f0-445a-8e3d-cfc2d31fdd59",
"execution_count": 10,
"id": "1fcf7b27-1cc3-420a-b920-0420b5892e20",
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The image shows a beautiful clear day with bright blue skies and wispy cirrus clouds stretching across the horizon. The clouds are thin and streaky, creating elegant patterns against the blue backdrop. The lighting suggests it's during the day, possibly late afternoon given the warm, golden quality of the light on the grass. The weather appears calm with no signs of wind (the grass looks relatively still) and no indication of rain. It's the kind of perfect, mild weather that's ideal for walking along the wooden boardwalk through the marsh grass.\n"
]
}
],
"source": [
"image_url = \"https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg\""
"import base64\n",
"\n",
"import httpx\n",
"from langchain.chat_models import init_chat_model\n",
"\n",
"# Fetch image data\n",
"image_url = \"https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg\"\n",
"image_data = base64.b64encode(httpx.get(image_url).content).decode(\"utf-8\")\n",
"\n",
"\n",
"# Pass to LLM\n",
"llm = init_chat_model(\"anthropic:claude-3-5-sonnet-latest\")\n",
"\n",
"message = {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" {\n",
" \"type\": \"text\",\n",
" \"text\": \"Describe the weather in this image:\",\n",
" },\n",
" # highlight-start\n",
" {\n",
" \"type\": \"image\",\n",
" \"source_type\": \"base64\",\n",
" \"data\": image_data,\n",
" \"mime_type\": \"image/jpeg\",\n",
" },\n",
" # highlight-end\n",
" ],\n",
"}\n",
"response = llm.invoke([message])\n",
"print(response.text())"
]
},
{
"cell_type": "markdown",
"id": "ee2b678a-01dd-40c1-81ff-ddac22be21b7",
"metadata": {},
"source": [
"See [LangSmith trace](https://smith.langchain.com/public/eab05a31-54e8-4fc9-911f-56805da67bef/r) for more detail.\n",
"\n",
"### Images from a URL\n",
"\n",
"Some providers (including [OpenAI](/docs/integrations/chat/openai/),\n",
"[Anthropic](/docs/integrations/chat/anthropic/), and\n",
"[Google Gemini](/docs/integrations/chat/google_generative_ai/)) will also accept images from URLs directly.\n",
"\n",
"To pass images as URLs, format them as content blocks of the following form:\n",
"\n",
"```python\n",
"{\n",
" \"type\": \"image\",\n",
" \"source_type\": \"url\",\n",
" \"url\": \"https://...\",\n",
"}\n",
"```\n",
"\n",
"Example:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "fb896ce9",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.messages import HumanMessage\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"model = ChatOpenAI(model=\"gpt-4o\")"
]
},
{
"cell_type": "markdown",
"id": "4fca4da7",
"metadata": {},
"source": [
"The most commonly supported way to pass in images is to pass it in as a byte string.\n",
"This should work for most model integrations."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "9ca1040c",
"metadata": {},
"outputs": [],
"source": [
"import base64\n",
"\n",
"import httpx\n",
"\n",
"image_data = base64.b64encode(httpx.get(image_url).content).decode(\"utf-8\")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "ec680b6b",
"id": "99d27f8f-ae78-48bc-9bf2-3cef35213ec7",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The weather in the image appears to be clear and pleasant. The sky is mostly blue with scattered, light clouds, suggesting a sunny day with minimal cloud cover. There is no indication of rain or strong winds, and the overall scene looks bright and calm. The lush green grass and clear visibility further indicate good weather conditions.\n"
"The weather in this image appears to be pleasant and clear. The sky is mostly blue with a few scattered, light clouds, and there is bright sunlight illuminating the green grass and plants. There are no signs of rain or stormy conditions, suggesting it is a calm, likely warm day—typical of spring or summer.\n"
]
}
],
"source": [
"message = HumanMessage(\n",
" content=[\n",
" {\"type\": \"text\", \"text\": \"describe the weather in this image\"},\n",
"message = {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" {\n",
" \"type\": \"image_url\",\n",
" \"image_url\": {\"url\": f\"data:image/jpeg;base64,{image_data}\"},\n",
" \"type\": \"text\",\n",
" \"text\": \"Describe the weather in this image:\",\n",
" },\n",
" {\n",
" \"type\": \"image\",\n",
" # highlight-start\n",
" \"source_type\": \"url\",\n",
" \"url\": image_url,\n",
" # highlight-end\n",
" },\n",
" ],\n",
")\n",
"response = model.invoke([message])\n",
"print(response.content)"
]
},
{
"cell_type": "markdown",
"id": "8656018e-c56d-47d2-b2be-71e87827f90a",
"metadata": {},
"source": [
"We can feed the image URL directly in a content block of type \"image_url\". Note that only some model providers support this."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "a8819cf3-5ddc-44f0-889a-19ca7b7fe77e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The weather in the image appears to be clear and sunny. The sky is mostly blue with a few scattered clouds, suggesting good visibility and a likely pleasant temperature. The bright sunlight is casting distinct shadows on the grass and vegetation, indicating it is likely daytime, possibly late morning or early afternoon. The overall ambiance suggests a warm and inviting day, suitable for outdoor activities.\n"
]
}
],
"source": [
"message = HumanMessage(\n",
" content=[\n",
" {\"type\": \"text\", \"text\": \"describe the weather in this image\"},\n",
" {\"type\": \"image_url\", \"image_url\": {\"url\": image_url}},\n",
" ],\n",
")\n",
"response = model.invoke([message])\n",
"print(response.content)"
"}\n",
"response = llm.invoke([message])\n",
"print(response.text())"
]
},
{
@ -126,12 +171,12 @@
"id": "1c470309",
"metadata": {},
"source": [
"We can also pass in multiple images."
"We can also pass in multiple images:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 4,
"id": "325fb4ca",
"metadata": {},
"outputs": [
@ -139,20 +184,460 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Yes, the two images are the same. They both depict a wooden boardwalk extending through a grassy field under a blue sky with light clouds. The scenery, lighting, and composition are identical.\n"
"Yes, these two images are the same. They depict a wooden boardwalk going through a grassy field under a blue sky with some clouds. The colors, composition, and elements in both images are identical.\n"
]
}
],
"source": [
"message = HumanMessage(\n",
" content=[\n",
" {\"type\": \"text\", \"text\": \"are these two images the same?\"},\n",
" {\"type\": \"image_url\", \"image_url\": {\"url\": image_url}},\n",
" {\"type\": \"image_url\", \"image_url\": {\"url\": image_url}},\n",
"message = {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" {\"type\": \"text\", \"text\": \"Are these two images the same?\"},\n",
" {\"type\": \"image\", \"source_type\": \"url\", \"url\": image_url},\n",
" {\"type\": \"image\", \"source_type\": \"url\", \"url\": image_url},\n",
" ],\n",
")\n",
"response = model.invoke([message])\n",
"print(response.content)"
"}\n",
"response = llm.invoke([message])\n",
"print(response.text())"
]
},
{
"cell_type": "markdown",
"id": "d72b83e6-8d21-448e-b5df-d5b556c3ccc8",
"metadata": {},
"source": [
"## Documents (PDF)\n",
"\n",
"Some providers (including [OpenAI](/docs/integrations/chat/openai/),\n",
"[Anthropic](/docs/integrations/chat/anthropic/), and\n",
"[Google Gemini](/docs/integrations/chat/google_generative_ai/)) will accept PDF documents.\n",
"\n",
"### Documents from base64 data\n",
"\n",
"To pass documents in-line, format them as content blocks of the following form:\n",
"\n",
"```python\n",
"{\n",
" \"type\": \"file\",\n",
" \"source_type\": \"base64\",\n",
" \"mime_type\": \"application/pdf\",\n",
" \"data\": \"<base64 data string>\",\n",
"}\n",
"```\n",
"\n",
"Example:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "6c1455a9-699a-4702-a7e0-7f6eaec76a21",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"This document appears to be a sample PDF file that contains Lorem ipsum placeholder text. It begins with a title \"Sample PDF\" followed by the subtitle \"This is a simple PDF file. Fun fun fun.\"\n",
"\n",
"The rest of the document consists of several paragraphs of Lorem ipsum text, which is a commonly used placeholder text in design and publishing. The text is formatted in a clean, readable layout with consistent paragraph spacing. The document appears to be a single page containing four main paragraphs of this placeholder text.\n",
"\n",
"The Lorem ipsum text, while appearing to be Latin, is actually scrambled Latin-like text that is used primarily to demonstrate the visual form of a document or typeface without the distraction of meaningful content. It's commonly used in publishing and graphic design when the actual content is not yet available but the layout needs to be demonstrated.\n",
"\n",
"The document has a professional, simple layout with generous margins and clear paragraph separation, making it an effective example of basic PDF formatting and structure.\n"
]
}
],
"source": [
"import base64\n",
"\n",
"import httpx\n",
"from langchain.chat_models import init_chat_model\n",
"\n",
"# Fetch PDF data\n",
"pdf_url = \"https://pdfobject.com/pdf/sample.pdf\"\n",
"pdf_data = base64.b64encode(httpx.get(pdf_url).content).decode(\"utf-8\")\n",
"\n",
"\n",
"# Pass to LLM\n",
"llm = init_chat_model(\"anthropic:claude-3-5-sonnet-latest\")\n",
"\n",
"message = {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" {\n",
" \"type\": \"text\",\n",
" \"text\": \"Describe the document:\",\n",
" },\n",
" # highlight-start\n",
" {\n",
" \"type\": \"file\",\n",
" \"source_type\": \"base64\",\n",
" \"data\": pdf_data,\n",
" \"mime_type\": \"application/pdf\",\n",
" },\n",
" # highlight-end\n",
" ],\n",
"}\n",
"response = llm.invoke([message])\n",
"print(response.text())"
]
},
{
"cell_type": "markdown",
"id": "efb271da-8fdd-41b5-9f29-be6f8c76f49b",
"metadata": {},
"source": [
"### Documents from a URL\n",
"\n",
"Some providers (specifically [Anthropic](/docs/integrations/chat/anthropic/))\n",
"will also accept documents from URLs directly.\n",
"\n",
"To pass documents as URLs, format them as content blocks of the following form:\n",
"\n",
"```python\n",
"{\n",
" \"type\": \"file\",\n",
" \"source_type\": \"url\",\n",
" \"url\": \"https://...\",\n",
"}\n",
"```\n",
"\n",
"Example:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "55e1d937-3b22-4deb-b9f0-9e688f0609dc",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"This document appears to be a sample PDF file with both text and an image. It begins with a title \"Sample PDF\" followed by the text \"This is a simple PDF file. Fun fun fun.\" The rest of the document contains Lorem ipsum placeholder text arranged in several paragraphs. The content is shown both as text and as an image of the formatted PDF, with the same content displayed in a clean, formatted layout with consistent spacing and typography. The document consists of a single page containing this sample text.\n"
]
}
],
"source": [
"message = {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" {\n",
" \"type\": \"text\",\n",
" \"text\": \"Describe the document:\",\n",
" },\n",
" {\n",
" \"type\": \"file\",\n",
" # highlight-start\n",
" \"source_type\": \"url\",\n",
" \"url\": pdf_url,\n",
" # highlight-end\n",
" },\n",
" ],\n",
"}\n",
"response = llm.invoke([message])\n",
"print(response.text())"
]
},
{
"cell_type": "markdown",
"id": "1e661c26-e537-4721-8268-42c0861cb1e6",
"metadata": {},
"source": [
"## Audio\n",
"\n",
"Some providers (including [OpenAI](/docs/integrations/chat/openai/) and\n",
"[Google Gemini](/docs/integrations/chat/google_generative_ai/)) will accept audio inputs.\n",
"\n",
"### Audio from base64 data\n",
"\n",
"To pass audio in-line, format them as content blocks of the following form:\n",
"\n",
"```python\n",
"{\n",
" \"type\": \"audio\",\n",
" \"source_type\": \"base64\",\n",
" \"mime_type\": \"audio/wav\", # or appropriate mime-type\n",
" \"data\": \"<base64 data string>\",\n",
"}\n",
"```\n",
"\n",
"Example:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "a0b91b29-dbd6-4c94-8f24-05471adc7598",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The audio appears to consist primarily of bird sounds, specifically bird vocalizations like chirping and possibly other bird songs.\n"
]
}
],
"source": [
"import base64\n",
"\n",
"import httpx\n",
"from langchain.chat_models import init_chat_model\n",
"\n",
"# Fetch audio data\n",
"audio_url = \"https://upload.wikimedia.org/wikipedia/commons/3/3d/Alcal%C3%A1_de_Henares_%28RPS_13-04-2024%29_canto_de_ruise%C3%B1or_%28Luscinia_megarhynchos%29_en_el_Soto_del_Henares.wav\"\n",
"audio_data = base64.b64encode(httpx.get(audio_url).content).decode(\"utf-8\")\n",
"\n",
"\n",
"# Pass to LLM\n",
"llm = init_chat_model(\"google_genai:gemini-2.0-flash-001\")\n",
"\n",
"message = {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" {\n",
" \"type\": \"text\",\n",
" \"text\": \"Describe this audio:\",\n",
" },\n",
" # highlight-start\n",
" {\n",
" \"type\": \"audio\",\n",
" \"source_type\": \"base64\",\n",
" \"data\": audio_data,\n",
" \"mime_type\": \"audio/wav\",\n",
" },\n",
" # highlight-end\n",
" ],\n",
"}\n",
"response = llm.invoke([message])\n",
"print(response.text())"
]
},
{
"cell_type": "markdown",
"id": "92f55a6c-2e4a-4175-8444-8b9aacd6a13e",
"metadata": {},
"source": [
"## Provider-specific parameters\n",
"\n",
"Some providers will support or require additional fields on content blocks containing multimodal data.\n",
"For example, Anthropic lets you specify [caching](/docs/integrations/chat/anthropic/#prompt-caching) of\n",
"specific content to reduce token consumption.\n",
"\n",
"To use these fields, you can:\n",
"\n",
"1. Store them on directly on the content block; or\n",
"2. Use the native format supported by each provider (see [chat model integrations](/docs/integrations/chat/) for detail).\n",
"\n",
"We show three examples below.\n",
"\n",
"### Example: Anthropic prompt caching"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "83593b9d-a8d3-4c99-9dac-64e0a9d397cb",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The image shows a beautiful, clear day with partly cloudy skies. The sky is a vibrant blue with wispy, white cirrus clouds stretching across it. The lighting suggests it's during daylight hours, possibly late afternoon or early evening given the warm, golden quality of the light on the grass. The weather appears calm with no signs of wind (the grass looks relatively still) and no threatening weather conditions. It's the kind of perfect weather you'd want for a walk along this wooden boardwalk through the marshland or grassland area.\n"
]
},
{
"data": {
"text/plain": [
"{'input_tokens': 1586,\n",
" 'output_tokens': 117,\n",
" 'total_tokens': 1703,\n",
" 'input_token_details': {'cache_read': 0, 'cache_creation': 1582}}"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm = init_chat_model(\"anthropic:claude-3-5-sonnet-latest\")\n",
"\n",
"message = {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" {\n",
" \"type\": \"text\",\n",
" \"text\": \"Describe the weather in this image:\",\n",
" },\n",
" {\n",
" \"type\": \"image\",\n",
" \"source_type\": \"url\",\n",
" \"url\": image_url,\n",
" # highlight-next-line\n",
" \"cache_control\": {\"type\": \"ephemeral\"},\n",
" },\n",
" ],\n",
"}\n",
"response = llm.invoke([message])\n",
"print(response.text())\n",
"response.usage_metadata"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "9bbf578e-794a-4dc0-a469-78c876ccd4a3",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Clear blue skies, wispy clouds.\n"
]
},
{
"data": {
"text/plain": [
"{'input_tokens': 1716,\n",
" 'output_tokens': 12,\n",
" 'total_tokens': 1728,\n",
" 'input_token_details': {'cache_read': 1582, 'cache_creation': 0}}"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"next_message = {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" {\n",
" \"type\": \"text\",\n",
" \"text\": \"Summarize that in 5 words.\",\n",
" }\n",
" ],\n",
"}\n",
"response = llm.invoke([message, response, next_message])\n",
"print(response.text())\n",
"response.usage_metadata"
]
},
{
"cell_type": "markdown",
"id": "915b9443-5964-43b8-bb08-691c1ba59065",
"metadata": {},
"source": [
"### Example: Anthropic citations"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "ea7707a1-5660-40a1-a10f-0df48a028689",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'citations': [{'cited_text': 'Sample PDF\\r\\nThis is a simple PDF file. Fun fun fun.\\r\\n',\n",
" 'document_index': 0,\n",
" 'document_title': None,\n",
" 'end_page_number': 2,\n",
" 'start_page_number': 1,\n",
" 'type': 'page_location'}],\n",
" 'text': 'Simple PDF file: fun fun',\n",
" 'type': 'text'}]"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"message = {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" {\n",
" \"type\": \"text\",\n",
" \"text\": \"Generate a 5 word summary of this document.\",\n",
" },\n",
" {\n",
" \"type\": \"file\",\n",
" \"source_type\": \"base64\",\n",
" \"data\": pdf_data,\n",
" \"mime_type\": \"application/pdf\",\n",
" # highlight-next-line\n",
" \"citations\": {\"enabled\": True},\n",
" },\n",
" ],\n",
"}\n",
"response = llm.invoke([message])\n",
"response.content"
]
},
{
"cell_type": "markdown",
"id": "e26991eb-e769-41f4-b6e0-63d81f2c7d67",
"metadata": {},
"source": [
"### Example: OpenAI file names\n",
"\n",
"OpenAI requires that PDF documents be associated with file names:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "ae076c9b-ff8f-461d-9349-250f396c9a25",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The document is a sample PDF file containing placeholder text. It consists of one page, titled \"Sample PDF\". The content is a mixture of English and the commonly used filler text \"Lorem ipsum dolor sit amet...\" and its extensions, which are often used in publishing and web design as generic text to demonstrate font, layout, and other visual elements.\n",
"\n",
"**Key points about the document:**\n",
"- Length: 1 page\n",
"- Purpose: Demonstrative/sample content\n",
"- Content: No substantive or meaningful information, just demonstration text in paragraph form\n",
"- Language: English (with the Latin-like \"Lorem Ipsum\" text used for layout purposes)\n",
"\n",
"There are no charts, tables, diagrams, or images on the page—only plain text. The document serves as an example of what a PDF file looks like rather than providing actual, useful content.\n"
]
}
],
"source": [
"llm = init_chat_model(\"openai:gpt-4.1\")\n",
"\n",
"message = {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" {\n",
" \"type\": \"text\",\n",
" \"text\": \"Describe the document:\",\n",
" },\n",
" {\n",
" \"type\": \"file\",\n",
" \"source_type\": \"base64\",\n",
" \"data\": pdf_data,\n",
" \"mime_type\": \"application/pdf\",\n",
" # highlight-next-line\n",
" \"filename\": \"my-file\",\n",
" },\n",
" ],\n",
"}\n",
"response = llm.invoke([message])\n",
"print(response.text())"
]
},
{
@ -167,16 +652,22 @@
},
{
"cell_type": "code",
"execution_count": 8,
"id": "cd22ea82-2f93-46f9-9f7a-6aaf479fcaa9",
"execution_count": 4,
"id": "0f68cce7-350b-4cde-bc40-d3a169551fc3",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[{'name': 'weather_tool', 'args': {'weather': 'sunny'}, 'id': 'call_BSX4oq4SKnLlp2WlzDhToHBr'}]\n"
]
"data": {
"text/plain": [
"[{'name': 'weather_tool',\n",
" 'args': {'weather': 'sunny'},\n",
" 'id': 'toolu_01G6JgdkhwggKcQKfhXZQPjf',\n",
" 'type': 'tool_call'}]"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
@ -191,16 +682,17 @@
" pass\n",
"\n",
"\n",
"model_with_tools = model.bind_tools([weather_tool])\n",
"llm_with_tools = llm.bind_tools([weather_tool])\n",
"\n",
"message = HumanMessage(\n",
" content=[\n",
" {\"type\": \"text\", \"text\": \"describe the weather in this image\"},\n",
" {\"type\": \"image_url\", \"image_url\": {\"url\": image_url}},\n",
"message = {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" {\"type\": \"text\", \"text\": \"Describe the weather in this image:\"},\n",
" {\"type\": \"image\", \"source_type\": \"url\", \"url\": image_url},\n",
" ],\n",
")\n",
"response = model_with_tools.invoke([message])\n",
"print(response.tool_calls)"
"}\n",
"response = llm_with_tools.invoke([message])\n",
"response.tool_calls"
]
}
],
@ -220,7 +712,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@ -9,157 +9,148 @@
"\n",
"Here we demonstrate how to use prompt templates to format [multimodal](/docs/concepts/multimodality/) inputs to models. \n",
"\n",
"In this example we will ask a [model](/docs/concepts/chat_models/#multimodality) to describe an image."
"To use prompt templates in the context of multimodal data, we can templatize elements of the corresponding content block.\n",
"For example, below we define a prompt that takes a URL for an image as a parameter:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "0d9fd81a-b7f0-445a-8e3d-cfc2d31fdd59",
"metadata": {},
"outputs": [],
"source": [
"import base64\n",
"\n",
"import httpx\n",
"\n",
"image_url = \"https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg\"\n",
"image_data = base64.b64encode(httpx.get(image_url).content).decode(\"utf-8\")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 1,
"id": "2671f995",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"model = ChatOpenAI(model=\"gpt-4o\")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "4ee35e4f",
"metadata": {},
"outputs": [],
"source": [
"prompt = ChatPromptTemplate.from_messages(\n",
"# Define prompt\n",
"prompt = ChatPromptTemplate(\n",
" [\n",
" (\"system\", \"Describe the image provided\"),\n",
" (\n",
" \"user\",\n",
" [\n",
" {\n",
" \"role\": \"system\",\n",
" \"content\": \"Describe the image provided.\",\n",
" },\n",
" {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" {\n",
" \"type\": \"image_url\",\n",
" \"image_url\": {\"url\": \"data:image/jpeg;base64,{image_data}\"},\n",
" }\n",
" \"type\": \"image\",\n",
" \"source_type\": \"url\",\n",
" # highlight-next-line\n",
" \"url\": \"{image_url}\",\n",
" },\n",
" ],\n",
" ),\n",
" },\n",
" ]\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "089f75c2",
"metadata": {},
"outputs": [],
"source": [
"chain = prompt | model"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "02744b06",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The image depicts a sunny day with a beautiful blue sky filled with scattered white clouds. The sky has varying shades of blue, ranging from a deeper hue near the horizon to a lighter, almost pale blue higher up. The white clouds are fluffy and scattered across the expanse of the sky, creating a peaceful and serene atmosphere. The lighting and cloud patterns suggest pleasant weather conditions, likely during the daytime hours on a mild, sunny day in an outdoor natural setting.\n"
]
}
],
"source": [
"response = chain.invoke({\"image_data\": image_data})\n",
"print(response.content)"
]
},
{
"cell_type": "markdown",
"id": "e9b9ebf6",
"id": "f75d2e26-5b9a-4d5f-94a7-7f98f5666f6d",
"metadata": {},
"source": [
"We can also pass in multiple images."
"Let's use this prompt to pass an image to a [chat model](/docs/concepts/chat_models/#multimodality):"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "02190ee3",
"metadata": {},
"outputs": [],
"source": [
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", \"compare the two pictures provided\"),\n",
" (\n",
" \"user\",\n",
" [\n",
" {\n",
" \"type\": \"image_url\",\n",
" \"image_url\": {\"url\": \"data:image/jpeg;base64,{image_data1}\"},\n",
" },\n",
" {\n",
" \"type\": \"image_url\",\n",
" \"image_url\": {\"url\": \"data:image/jpeg;base64,{image_data2}\"},\n",
" },\n",
" ],\n",
" ),\n",
" ]\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "42af057b",
"metadata": {},
"outputs": [],
"source": [
"chain = prompt | model"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "513abe00",
"execution_count": 2,
"id": "5df2e558-321d-4cf7-994e-2815ac37e704",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The two images provided are identical. Both images feature a wooden boardwalk path extending through a lush green field under a bright blue sky with some clouds. The perspective, colors, and elements in both images are exactly the same.\n"
"This image shows a beautiful wooden boardwalk cutting through a lush green wetland or marsh area. The boardwalk extends straight ahead toward the horizon, creating a strong leading line through the composition. On either side, tall green grasses sway in what appears to be a summer or late spring setting. The sky is particularly striking, with wispy cirrus clouds streaking across a vibrant blue background. In the distance, you can see a tree line bordering the wetland area. The lighting suggests this may be during \"golden hour\" - either early morning or late afternoon - as there's a warm, gentle quality to the light that's illuminating the scene. The wooden planks of the boardwalk appear well-maintained and provide safe passage through what would otherwise be difficult terrain to traverse. It's the kind of scene you might find in a nature preserve or wildlife refuge designed to give visitors access to observe wetland ecosystems while protecting the natural environment.\n"
]
}
],
"source": [
"response = chain.invoke({\"image_data1\": image_data, \"image_data2\": image_data})\n",
"print(response.content)"
"from langchain.chat_models import init_chat_model\n",
"\n",
"llm = init_chat_model(\"anthropic:claude-3-5-sonnet-latest\")\n",
"\n",
"url = \"https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg\"\n",
"\n",
"chain = prompt | llm\n",
"response = chain.invoke({\"image_url\": url})\n",
"print(response.text())"
]
},
{
"cell_type": "markdown",
"id": "f4cfdc50-4a9f-4888-93b4-af697366b0f3",
"metadata": {},
"source": [
"Note that we can templatize arbitrary elements of the content block:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "53c88ebb-dd57-40c8-8542-b2c916706653",
"metadata": {},
"outputs": [],
"source": [
"prompt = ChatPromptTemplate(\n",
" [\n",
" {\n",
" \"role\": \"system\",\n",
" \"content\": \"Describe the image provided.\",\n",
" },\n",
" {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" {\n",
" \"type\": \"image\",\n",
" \"source_type\": \"base64\",\n",
" \"mime_type\": \"{image_mime_type}\",\n",
" \"data\": \"{image_data}\",\n",
" \"cache_control\": {\"type\": \"{cache_type}\"},\n",
" },\n",
" ],\n",
" },\n",
" ]\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "25e4829e-0073-49a8-9669-9f43e5778383",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"This image shows a beautiful wooden boardwalk cutting through a lush green marsh or wetland area. The boardwalk extends straight ahead toward the horizon, creating a strong leading line in the composition. The surrounding vegetation consists of tall grass and reeds in vibrant green hues, with some bushes and trees visible in the background. The sky is particularly striking, featuring a bright blue color with wispy white clouds streaked across it. The lighting suggests this photo was taken during the \"golden hour\" - either early morning or late afternoon - giving the scene a warm, peaceful quality. The raised wooden path provides accessible access through what would otherwise be difficult terrain to traverse, allowing visitors to experience and appreciate this natural environment.\n"
]
}
],
"source": [
"import base64\n",
"\n",
"import httpx\n",
"\n",
"image_data = base64.b64encode(httpx.get(url).content).decode(\"utf-8\")\n",
"\n",
"chain = prompt | llm\n",
"response = chain.invoke(\n",
" {\n",
" \"image_data\": image_data,\n",
" \"image_mime_type\": \"image/jpeg\",\n",
" \"cache_type\": \"ephemeral\",\n",
" }\n",
")\n",
"print(response.text())"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ea8152c3",
"id": "424defe8-d85c-4e45-a88d-bf6f910d5ebb",
"metadata": {},
"outputs": [],
"source": []
@ -181,7 +172,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@ -15,7 +15,7 @@
"\n",
"To build a production application, you will need to do more work to keep track of application state appropriately.\n",
"\n",
"We recommend using `langgraph` for powering such a capability. For more details, please see this [guide](https://langchain-ai.github.io/langgraph/how-tos/human-in-the-loop/).\n",
"We recommend using `langgraph` for powering such a capability. For more details, please see this [guide](https://langchain-ai.github.io/langgraph/concepts/human_in_the_loop/).\n",
":::\n"
]
},
@ -209,7 +209,7 @@
"metadata": {},
"outputs": [
{
"name": "stdin",
"name": "stdout",
"output_type": "stream",
"text": [
"Do you approve of the following tool invocations\n",
@ -252,7 +252,7 @@
"metadata": {},
"outputs": [
{
"name": "stdin",
"name": "stdout",
"output_type": "stream",
"text": [
"Do you approve of the following tool invocations\n",

View File

@ -0,0 +1,118 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# SingleStoreSemanticCache\n",
"\n",
"This example demonstrates how to get started with the SingleStore semantic cache.\n",
"\n",
"### Integration Overview\n",
"\n",
"`SingleStoreSemanticCache` leverages `SingleStoreVectorStore` to cache LLM responses directly in a SingleStore database, enabling efficient semantic retrieval and reuse of results.\n",
"\n",
"### Integration details\n",
"\n",
"\n",
"\n",
"| Class | Package | JS support |\n",
"| :--- | :--- | :---: |\n",
"| SingleStoreSemanticCache | langchain_singlestore | ❌ | "
]
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"## Installation\n",
"\n",
"This cache lives in the `langchain-singlestore` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-singlestore"
]
},
{
"cell_type": "markdown",
"id": "5c5f2839-4020-424e-9fc9-07777eede442",
"metadata": {},
"source": [
"## Usage"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "51a60dbe-9f2e-4e04-bb62-23968f17164a",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.globals import set_llm_cache\n",
"from langchain_singlestore import SingleStoreSemanticCache\n",
"\n",
"set_llm_cache(\n",
" SingleStoreSemanticCache(\n",
" embedding=YourEmbeddings(),\n",
" host=\"root:pass@localhost:3306/db\",\n",
" )\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cddda8ef",
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"# The first time, it is not yet in cache, so it should take longer\n",
"llm.invoke(\"Tell me a joke\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c474168f",
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"# The second time, while not a direct hit, the question is semantically similar to the original question,\n",
"# so it uses the cached result!\n",
"llm.invoke(\"Tell me one joke\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "langchain-singlestore-BD1RbQ07-py3.11",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@ -1,378 +1,401 @@
{
"cells": [
{
"cell_type": "raw",
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
"sidebar_label: AWS Bedrock\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# ChatBedrock\n",
"\n",
"This doc will help you get started with AWS Bedrock [chat models](/docs/concepts/chat_models). Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon via a single API, along with a broad set of capabilities you need to build generative AI applications with security, privacy, and responsible AI. Using Amazon Bedrock, you can easily experiment with and evaluate top FMs for your use case, privately customize them with your data using techniques such as fine-tuning and Retrieval Augmented Generation (RAG), and build agents that execute tasks using your enterprise systems and data sources. Since Amazon Bedrock is serverless, you don't have to manage any infrastructure, and you can securely integrate and deploy generative AI capabilities into your applications using the AWS services you are already familiar with.\n",
"\n",
"For more information on which models are accessible via Bedrock, head to the [AWS docs](https://docs.aws.amazon.com/bedrock/latest/userguide/models-features.html).\n",
"\n",
"For detailed documentation of all ChatBedrock features and configurations head to the [API reference](https://python.langchain.com/api_reference/aws/chat_models/langchain_aws.chat_models.bedrock.ChatBedrock.html).\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/bedrock) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatBedrock](https://python.langchain.com/api_reference/aws/chat_models/langchain_aws.chat_models.bedrock.ChatBedrock.html) | [langchain-aws](https://python.langchain.com/api_reference/aws/index.html) | ❌ | beta | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-aws?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-aws?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ |\n",
"\n",
"## Setup\n",
"\n",
"To access Bedrock models you'll need to create an AWS account, set up the Bedrock API service, get an access key ID and secret key, and install the `langchain-aws` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Head to the [AWS docs](https://docs.aws.amazon.com/bedrock/latest/userguide/setting-up.html) to sign up to AWS and setup your credentials. You'll also need to turn on model access for your account, which you can do by following [these instructions](https://docs.aws.amazon.com/bedrock/latest/userguide/model-access.html)."
]
},
{
"cell_type": "markdown",
"id": "72ee0c4b-9764-423a-9dbf-95129e185210",
"metadata": {},
"source": "To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
},
{
"cell_type": "code",
"execution_count": null,
"id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain Bedrock integration lives in the `langchain-aws` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-aws"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
"source": [
"from langchain_aws import ChatBedrock\n",
"\n",
"llm = ChatBedrock(\n",
" model_id=\"anthropic.claude-3-sonnet-20240229-v1:0\",\n",
" model_kwargs=dict(temperature=0),\n",
" # other params...\n",
")"
]
},
{
"cell_type": "markdown",
"id": "2b4f3e15",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"Voici la traduction en français :\\n\\nJ'aime la programmation.\", additional_kwargs={'usage': {'prompt_tokens': 29, 'completion_tokens': 21, 'total_tokens': 50}, 'stop_reason': 'end_turn', 'model_id': 'anthropic.claude-3-sonnet-20240229-v1:0'}, response_metadata={'usage': {'prompt_tokens': 29, 'completion_tokens': 21, 'total_tokens': 50}, 'stop_reason': 'end_turn', 'model_id': 'anthropic.claude-3-sonnet-20240229-v1:0'}, id='run-fdb07dc3-ff72-430d-b22b-e7824b15c766-0', usage_metadata={'input_tokens': 29, 'output_tokens': 21, 'total_tokens': 50})"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Voici la traduction en français :\n",
"\n",
"J'aime la programmation.\n"
]
}
],
"source": [
"print(ai_msg.content)"
]
},
{
"cell_type": "markdown",
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Ich liebe Programmieren.', additional_kwargs={'usage': {'prompt_tokens': 23, 'completion_tokens': 11, 'total_tokens': 34}, 'stop_reason': 'end_turn', 'model_id': 'anthropic.claude-3-sonnet-20240229-v1:0'}, response_metadata={'usage': {'prompt_tokens': 23, 'completion_tokens': 11, 'total_tokens': 34}, 'stop_reason': 'end_turn', 'model_id': 'anthropic.claude-3-sonnet-20240229-v1:0'}, id='run-5ad005ce-9f31-4670-baa0-9373d418698a-0', usage_metadata={'input_tokens': 23, 'output_tokens': 11, 'total_tokens': 34})"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "d1ee55bc-ffc8-4cfa-801c-993953a08cfd",
"metadata": {},
"source": [
"## Bedrock Converse API\n",
"\n",
"AWS has recently released the Bedrock Converse API which provides a unified conversational interface for Bedrock models. This API does not yet support custom models. You can see a list of all [models that are supported here](https://docs.aws.amazon.com/bedrock/latest/userguide/conversation-inference.html). To improve reliability the ChatBedrock integration will switch to using the Bedrock Converse API as soon as it has feature parity with the existing Bedrock API. Until then a separate [ChatBedrockConverse](https://python.langchain.com/api_reference/aws/chat_models/langchain_aws.chat_models.bedrock_converse.ChatBedrockConverse.html) integration has been released.\n",
"\n",
"We recommend using `ChatBedrockConverse` for users who do not need to use custom models.\n",
"\n",
"You can use it like so:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "ae728e59-94d4-40cf-9d24-25ad8723fc59",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"Voici la traduction en français :\\n\\nJ'aime la programmation.\", response_metadata={'ResponseMetadata': {'RequestId': '4fcbfbe9-f916-4df2-b0bd-ea1147b550aa', 'HTTPStatusCode': 200, 'HTTPHeaders': {'date': 'Wed, 21 Aug 2024 17:23:49 GMT', 'content-type': 'application/json', 'content-length': '243', 'connection': 'keep-alive', 'x-amzn-requestid': '4fcbfbe9-f916-4df2-b0bd-ea1147b550aa'}, 'RetryAttempts': 0}, 'stopReason': 'end_turn', 'metrics': {'latencyMs': 672}}, id='run-77ee9810-e32b-45dc-9ccb-6692253b1f45-0', usage_metadata={'input_tokens': 29, 'output_tokens': 21, 'total_tokens': 50})"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_aws import ChatBedrockConverse\n",
"\n",
"llm = ChatBedrockConverse(\n",
" model=\"anthropic.claude-3-sonnet-20240229-v1:0\",\n",
" temperature=0,\n",
" max_tokens=None,\n",
" # other params...\n",
")\n",
"\n",
"llm.invoke(messages)"
]
},
{
"cell_type": "markdown",
"id": "4da16f3e-e80b-48c0-8036-c1cc5f7c8c05",
"metadata": {},
"source": [
"### Streaming\n",
"\n",
"Note that `ChatBedrockConverse` emits content blocks while streaming:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "7794b32e-d8de-4973-bf0f-39807dc745f0",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content=[] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'type': 'text', 'text': 'Vo', 'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'type': 'text', 'text': 'ici', 'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'type': 'text', 'text': ' la', 'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'type': 'text', 'text': ' tra', 'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'type': 'text', 'text': 'duction', 'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'type': 'text', 'text': ' en', 'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'type': 'text', 'text': ' français', 'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'type': 'text', 'text': ' :', 'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'type': 'text', 'text': '\\n\\nJ', 'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'type': 'text', 'text': \"'\", 'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'type': 'text', 'text': 'a', 'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'type': 'text', 'text': 'ime', 'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'type': 'text', 'text': ' la', 'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'type': 'text', 'text': ' programm', 'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'type': 'text', 'text': 'ation', 'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'type': 'text', 'text': '.', 'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[] response_metadata={'stopReason': 'end_turn'} id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[] response_metadata={'metrics': {'latencyMs': 713}} id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8' usage_metadata={'input_tokens': 29, 'output_tokens': 21, 'total_tokens': 50}\n"
]
}
],
"source": [
"for chunk in llm.stream(messages):\n",
" print(chunk)"
]
},
{
"cell_type": "markdown",
"id": "0ef05abb-9c04-4dc3-995e-f857779644d5",
"metadata": {},
"source": [
"An output parser can be used to filter to text, if desired:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "2a4e743f-ea7d-4e5a-9b12-f9992362de8b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"|Vo|ici| la| tra|duction| en| français| :|\n",
"\n",
"J|'|a|ime| la| programm|ation|.||||"
]
}
],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"\n",
"chain = llm | StrOutputParser()\n",
"\n",
"for chunk in chain.stream(messages):\n",
" print(chunk, end=\"|\")"
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatBedrock features and configurations head to the API reference: https://python.langchain.com/api_reference/aws/chat_models/langchain_aws.chat_models.bedrock.ChatBedrock.html\n",
"\n",
"For detailed documentation of all ChatBedrockConverse features and configurations head to the API reference: https://python.langchain.com/api_reference/aws/chat_models/langchain_aws.chat_models.bedrock_converse.ChatBedrockConverse.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
"cells": [
{
"cell_type": "raw",
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
"sidebar_label: AWS Bedrock\n",
"---"
]
},
"nbformat": 4,
"nbformat_minor": 5
{
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# ChatBedrock\n",
"\n",
"This doc will help you get started with AWS Bedrock [chat models](/docs/concepts/chat_models). Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon via a single API, along with a broad set of capabilities you need to build generative AI applications with security, privacy, and responsible AI. Using Amazon Bedrock, you can easily experiment with and evaluate top FMs for your use case, privately customize them with your data using techniques such as fine-tuning and Retrieval Augmented Generation (RAG), and build agents that execute tasks using your enterprise systems and data sources. Since Amazon Bedrock is serverless, you don't have to manage any infrastructure, and you can securely integrate and deploy generative AI capabilities into your applications using the AWS services you are already familiar with.\n",
"\n",
"AWS Bedrock maintains a [Converse API](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html) which provides a unified conversational interface for Bedrock models. This API does not yet support custom models. You can see a list of all [models that are supported here](https://docs.aws.amazon.com/bedrock/latest/userguide/conversation-inference.html).\n",
"\n",
":::info\n",
"\n",
"We recommend the Converse API for users who do not need to use custom models. It can be accessed using [ChatBedrockConverse](https://python.langchain.com/api_reference/aws/chat_models/langchain_aws.chat_models.bedrock_converse.ChatBedrockConverse.html).\n",
"\n",
":::\n",
"\n",
"For detailed documentation of all Bedrock features and configurations head to the [API reference](https://python.langchain.com/api_reference/aws/chat_models/langchain_aws.chat_models.bedrock_converse.ChatBedrockConverse.html).\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/bedrock) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatBedrock](https://python.langchain.com/api_reference/aws/chat_models/langchain_aws.chat_models.bedrock.ChatBedrock.html) | [langchain-aws](https://python.langchain.com/api_reference/aws/index.html) | ❌ | beta | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-aws?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-aws?style=flat-square&label=%20) |\n",
"| [ChatBedrockConverse](https://python.langchain.com/api_reference/aws/chat_models/langchain_aws.chat_models.bedrock_converse.ChatBedrockConverse.html) | [langchain-aws](https://python.langchain.com/api_reference/aws/index.html) | ❌ | beta | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-aws?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-aws?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"\n",
"The below apply to both `ChatBedrock` and `ChatBedrockConverse`.\n",
"\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ |\n",
"\n",
"## Setup\n",
"\n",
"To access Bedrock models you'll need to create an AWS account, set up the Bedrock API service, get an access key ID and secret key, and install the `langchain-aws` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Head to the [AWS docs](https://docs.aws.amazon.com/bedrock/latest/userguide/setting-up.html) to sign up to AWS and setup your credentials. You'll also need to turn on model access for your account, which you can do by following [these instructions](https://docs.aws.amazon.com/bedrock/latest/userguide/model-access.html)."
]
},
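{
"cell_type": "markdown",
"id": "a0c1d2e3",
"metadata": {},
"source": [
"One way to provide credentials locally is through the standard AWS environment variables. This is a minimal sketch; boto3 also picks up shared credential files and IAM roles, so use whichever mechanism fits your setup:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b1d2e3f4",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"# Standard boto3 environment variables; any boto3-supported credential\n",
"# mechanism (shared credential files, IAM roles) works as well.\n",
"if not os.environ.get(\"AWS_ACCESS_KEY_ID\"):\n",
"    os.environ[\"AWS_ACCESS_KEY_ID\"] = getpass.getpass(\"AWS access key ID: \")\n",
"if not os.environ.get(\"AWS_SECRET_ACCESS_KEY\"):\n",
"    os.environ[\"AWS_SECRET_ACCESS_KEY\"] = getpass.getpass(\"AWS secret access key: \")\n",
"os.environ.setdefault(\"AWS_DEFAULT_REGION\", \"us-east-1\")  # adjust to your region"
]
},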
{
"cell_type": "markdown",
"id": "72ee0c4b-9764-423a-9dbf-95129e185210",
"metadata": {},
"source": [
"To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain Bedrock integration lives in the `langchain-aws` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-aws"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
"source": [
"from langchain_aws import ChatBedrockConverse\n",
"\n",
"llm = ChatBedrockConverse(\n",
" model_id=\"anthropic.claude-3-5-sonnet-20240620-v1:0\",\n",
" # temperature=...,\n",
" # max_tokens=...,\n",
" # other params...\n",
")"
]
},
{
"cell_type": "markdown",
"id": "2b4f3e15",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "fcd8de52-4a1b-4875-b463-d41b031e06a1",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"J'adore la programmation.\", additional_kwargs={}, response_metadata={'ResponseMetadata': {'RequestId': 'b07d1630-06f2-44b1-82bf-e82538dd2215', 'HTTPStatusCode': 200, 'HTTPHeaders': {'date': 'Wed, 16 Apr 2025 19:35:34 GMT', 'content-type': 'application/json', 'content-length': '206', 'connection': 'keep-alive', 'x-amzn-requestid': 'b07d1630-06f2-44b1-82bf-e82538dd2215'}, 'RetryAttempts': 0}, 'stopReason': 'end_turn', 'metrics': {'latencyMs': [488]}, 'model_name': 'anthropic.claude-3-5-sonnet-20240620-v1:0'}, id='run-d09ed928-146a-4336-b1fd-b63c9e623494-0', usage_metadata={'input_tokens': 29, 'output_tokens': 11, 'total_tokens': 40, 'input_token_details': {'cache_creation': 0, 'cache_read': 0}})"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"J'adore la programmation.\n"
]
}
],
"source": [
"print(ai_msg.content)"
]
},
{
"cell_type": "markdown",
"id": "4da16f3e-e80b-48c0-8036-c1cc5f7c8c05",
"metadata": {},
"source": [
"### Streaming\n",
"\n",
"Note that `ChatBedrockConverse` emits content blocks while streaming:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "605e04fa-1a76-47ac-8c92-fe128659663e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content=[] additional_kwargs={} response_metadata={} id='run-d0e0836e-7146-4c3d-97c7-ad23dac6febd'\n",
"content=[{'type': 'text', 'text': 'J', 'index': 0}] additional_kwargs={} response_metadata={} id='run-d0e0836e-7146-4c3d-97c7-ad23dac6febd'\n",
"content=[{'type': 'text', 'text': \"'adore la\", 'index': 0}] additional_kwargs={} response_metadata={} id='run-d0e0836e-7146-4c3d-97c7-ad23dac6febd'\n",
"content=[{'type': 'text', 'text': ' programmation.', 'index': 0}] additional_kwargs={} response_metadata={} id='run-d0e0836e-7146-4c3d-97c7-ad23dac6febd'\n",
"content=[{'index': 0}] additional_kwargs={} response_metadata={} id='run-d0e0836e-7146-4c3d-97c7-ad23dac6febd'\n",
"content=[] additional_kwargs={} response_metadata={'stopReason': 'end_turn'} id='run-d0e0836e-7146-4c3d-97c7-ad23dac6febd'\n",
"content=[] additional_kwargs={} response_metadata={'metrics': {'latencyMs': 600}, 'model_name': 'anthropic.claude-3-5-sonnet-20240620-v1:0'} id='run-d0e0836e-7146-4c3d-97c7-ad23dac6febd' usage_metadata={'input_tokens': 29, 'output_tokens': 11, 'total_tokens': 40, 'input_token_details': {'cache_creation': 0, 'cache_read': 0}}\n"
]
}
],
"source": [
"for chunk in llm.stream(messages):\n",
" print(chunk)"
]
},
{
"cell_type": "markdown",
"id": "0ef05abb-9c04-4dc3-995e-f857779644d5",
"metadata": {},
"source": [
"You can filter to text using the [.text()](https://python.langchain.com/api_reference/core/messages/langchain_core.messages.ai.AIMessage.html#langchain_core.messages.ai.AIMessage.text) method on the output:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "2a4e743f-ea7d-4e5a-9b12-f9992362de8b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"|J|'adore la| programmation.||||"
]
}
],
"source": [
"for chunk in llm.stream(messages):\n",
" print(chunk.text(), end=\"|\")"
]
},
{
"cell_type": "markdown",
"id": "a77519e5-897d-41a0-a9bb-55300fa79efc",
"metadata": {},
"source": [
"## Prompt caching\n",
"\n",
"Bedrock supports [caching](https://docs.aws.amazon.com/bedrock/latest/userguide/prompt-caching.html) of elements of your prompts, including messages and tools. This allows you to re-use large documents, instructions, [few-shot documents](/docs/concepts/few_shot_prompting/), and other data to reduce latency and costs.\n",
"\n",
":::note\n",
"\n",
"Not all models support prompt caching. See supported models [here](https://docs.aws.amazon.com/bedrock/latest/userguide/prompt-caching.html#prompt-caching-models).\n",
"\n",
":::\n",
"\n",
"To enable caching on an element of a prompt, mark its associated content block using the `cachePoint` key. See example below:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "d5f63d01-85e8-4797-a2be-0fea747a6049",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"First invocation:\n",
"{'cache_creation': 1528, 'cache_read': 0}\n",
"\n",
"Second:\n",
"{'cache_creation': 0, 'cache_read': 1528}\n"
]
}
],
"source": [
"import requests\n",
"from langchain_aws import ChatBedrockConverse\n",
"\n",
"llm = ChatBedrockConverse(model=\"us.anthropic.claude-3-7-sonnet-20250219-v1:0\")\n",
"\n",
"# Pull LangChain readme\n",
"get_response = requests.get(\n",
" \"https://raw.githubusercontent.com/langchain-ai/langchain/master/README.md\"\n",
")\n",
"readme = get_response.text\n",
"\n",
"messages = [\n",
" {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" {\n",
" \"type\": \"text\",\n",
" \"text\": \"What's LangChain, according to its README?\",\n",
" },\n",
" {\n",
" \"type\": \"text\",\n",
" \"text\": f\"{readme}\",\n",
" },\n",
" {\n",
" \"cachePoint\": {\"type\": \"default\"},\n",
" },\n",
" ],\n",
" },\n",
"]\n",
"\n",
"response_1 = llm.invoke(messages)\n",
"response_2 = llm.invoke(messages)\n",
"\n",
"usage_1 = response_1.usage_metadata[\"input_token_details\"]\n",
"usage_2 = response_2.usage_metadata[\"input_token_details\"]\n",
"\n",
"print(f\"First invocation:\\n{usage_1}\")\n",
"print(f\"\\nSecond:\\n{usage_2}\")"
]
},
{
"cell_type": "markdown",
"id": "1b550667-af5b-4557-b84f-c8f865dad6cb",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "6033f3fa-0e96-46e3-abb3-1530928fea88",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"Here's the German translation:\\n\\nIch liebe das Programmieren.\", additional_kwargs={}, response_metadata={'ResponseMetadata': {'RequestId': '1de3d7c0-8062-4f7e-bb8a-8f725b97a8b0', 'HTTPStatusCode': 200, 'HTTPHeaders': {'date': 'Wed, 16 Apr 2025 19:32:51 GMT', 'content-type': 'application/json', 'content-length': '243', 'connection': 'keep-alive', 'x-amzn-requestid': '1de3d7c0-8062-4f7e-bb8a-8f725b97a8b0'}, 'RetryAttempts': 0}, 'stopReason': 'end_turn', 'metrics': {'latencyMs': [719]}, 'model_name': 'anthropic.claude-3-5-sonnet-20240620-v1:0'}, id='run-7021fcd7-704e-496b-a92e-210139614402-0', usage_metadata={'input_tokens': 23, 'output_tokens': 19, 'total_tokens': 42, 'input_token_details': {'cache_creation': 0, 'cache_read': 0}})"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
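{
"cell_type": "markdown",
"id": "c2e3f4a5",
"metadata": {},
"source": [
"## Tool calling\n",
"\n",
"The feature table above marks tool calling as supported. Below is a minimal sketch using LangChain's standard `bind_tools` interface; the tool itself is illustrative:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d3f4a5b6",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.tools import tool\n",
"\n",
"\n",
"@tool\n",
"def get_weather(city: str) -> str:\n",
"    \"\"\"Get the weather for a city.\"\"\"\n",
"    # Illustrative stub; a real tool would call a weather API.\n",
"    return f\"It is always sunny in {city}.\"\n",
"\n",
"\n",
"llm_with_tools = llm.bind_tools([get_weather])\n",
"response = llm_with_tools.invoke(\"What's the weather in Paris?\")\n",
"response.tool_calls"
]
},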
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatBedrock features and configurations head to the API reference: https://python.langchain.com/api_reference/aws/chat_models/langchain_aws.chat_models.bedrock.ChatBedrock.html\n",
"\n",
"For detailed documentation of all ChatBedrockConverse features and configurations head to the API reference: https://python.langchain.com/api_reference/aws/chat_models/langchain_aws.chat_models.bedrock_converse.ChatBedrockConverse.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@ -1,262 +1,393 @@
{
"cells": [
{
"cell_type": "raw",
"id": "30373ae2-f326-4e96-a1f7-062f57396886",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Cloudflare Workers AI\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "f679592d",
"metadata": {},
"source": [
"# ChatCloudflareWorkersAI\n",
"\n",
"This will help you getting started with CloudflareWorkersAI [chat models](/docs/concepts/chat_models). For detailed documentation of all available Cloudflare WorkersAI models head to the [API reference](https://developers.cloudflare.com/workers-ai/).\n",
"\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/cloudflare_workersai) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| ChatCloudflareWorkersAI | langchain-community| ❌ | ❌ | ✅ | ❌ | ❌ |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |\n",
"\n",
"## Setup\n",
"\n",
"- To access Cloudflare Workers AI models you'll need to create a Cloudflare account, get an account number and API key, and install the `langchain-community` package.\n",
"\n",
"\n",
"### Credentials\n",
"\n",
"\n",
"Head to [this document](https://developers.cloudflare.com/workers-ai/get-started/rest-api/) to sign up to Cloudflare Workers AI and generate an API key."
]
},
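{
"cell_type": "markdown",
"id": "e4a5b6c7",
"metadata": {},
"source": [
"One way to collect these values interactively is sketched below; the variable names are illustrative and can be passed to `ChatCloudflareWorkersAI` in the instantiation step:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f5b6c7d8",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"\n",
"# Illustrative variable names; pass these to ChatCloudflareWorkersAI below.\n",
"my_account_id = getpass.getpass(\"Enter your Cloudflare account ID: \")\n",
"my_api_token = getpass.getpass(\"Enter your Cloudflare API token: \")"
]
},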
{
"cell_type": "markdown",
"id": "4a524cff",
"metadata": {},
"source": "To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
},
{
"cell_type": "code",
"execution_count": 3,
"id": "71b53c25",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
]
},
{
"cell_type": "markdown",
"id": "777a8526",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain ChatCloudflareWorkersAI integration lives in the `langchain-community` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "54990998",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-community"
]
},
{
"cell_type": "markdown",
"id": "629ba46f",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ec13c2d9",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_models.cloudflare_workersai import ChatCloudflareWorkersAI\n",
"\n",
"llm = ChatCloudflareWorkersAI(\n",
" account_id=\"my_account_id\",\n",
" api_token=\"my_api_token\",\n",
" model=\"@hf/nousresearch/hermes-2-pro-mistral-7b\",\n",
")"
]
},
{
"cell_type": "markdown",
"id": "119b6732",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "2438a906",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-11-07 15:55:14 - INFO - Sending prompt to Cloudflare Workers AI: {'prompt': 'role: system, content: You are a helpful assistant that translates English to French. Translate the user sentence.\\nrole: user, content: I love programming.', 'tools': None}\n"
]
},
{
"data": {
"text/plain": [
"AIMessage(content='{\\'result\\': {\\'response\\': \\'Je suis un assistant virtuel qui peut traduire l\\\\\\'anglais vers le français. La phrase que vous avez dite est : \"J\\\\\\'aime programmer.\" En français, cela se traduit par : \"J\\\\\\'adore programmer.\"\\'}, \\'success\\': True, \\'errors\\': [], \\'messages\\': []}', additional_kwargs={}, response_metadata={}, id='run-838fd398-8594-4ca5-9055-03c72993caf6-0')"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "1b4911bd",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'result': {'response': 'Je suis un assistant virtuel qui peut traduire l\\'anglais vers le français. La phrase que vous avez dite est : \"J\\'aime programmer.\" En français, cela se traduit par : \"J\\'adore programmer.\"'}, 'success': True, 'errors': [], 'messages': []}\n"
]
}
],
"source": [
"print(ai_msg.content)"
]
},
{
"cell_type": "markdown",
"id": "111aa5d4",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "b2a14282",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-11-07 15:55:24 - INFO - Sending prompt to Cloudflare Workers AI: {'prompt': 'role: system, content: You are a helpful assistant that translates English to German.\\nrole: user, content: I love programming.', 'tools': None}\n"
]
},
{
"data": {
"text/plain": [
"AIMessage(content=\"{'result': {'response': 'role: system, content: Das ist sehr nett zu hören! Programmieren lieben, ist eine interessante und anspruchsvolle Hobby- oder Berufsausrichtung. Wenn Sie englische Texte ins Deutsche übersetzen möchten, kann ich Ihnen helfen. Geben Sie bitte den englischen Satz oder die Übersetzung an, die Sie benötigen.'}, 'success': True, 'errors': [], 'messages': []}\", additional_kwargs={}, response_metadata={}, id='run-0d3be9a6-3d74-4dde-b49a-4479d6af00ef-0')"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "e1f311bd",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation on `ChatCloudflareWorkersAI` features and configuration options, please refer to the [API reference](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.cloudflare_workersai.html)."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
"cells": [
{
"cell_type": "raw",
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
"sidebar_label: CloudflareWorkersAI\n",
"---"
]
},
"nbformat": 4,
"nbformat_minor": 5
{
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# ChatCloudflareWorkersAI\n",
"\n",
"\n",
"This will help you getting started with CloudflareWorkersAI [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatCloudflareWorkersAI features and configurations head to the [API reference](https://python.langchain.com/docs/integrations/chat/cloudflare_workersai/).\n",
"\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/cloudflare) | Package downloads | Package latest |\n",
"| :--- | :--- |:-----:|:------------:|:------------------------------------------------------------------------:| :---: | :---: |\n",
"| [ChatCloudflareWorkersAI](https://python.langchain.com/docs/integrations/chat/cloudflare_workersai/) | [langchain-cloudflare](https://pypi.org/project/langchain-cloudflare/) | ✅ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-cloudflare?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-cloudflare?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"|:-----------------------------------------:|:----------------------------------------------------:|:---------:|:----------------------------------------------:|:-----------:|:-----------:|:-----------------------------------------------------:|:------------:|:------------------------------------------------------:|:----------------------------------:|\n",
"| ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ | \n",
"\n",
"## Setup\n",
"\n",
"To access CloudflareWorkersAI models you'll need to create a/an CloudflareWorkersAI account, get an API key, and install the `langchain-cloudflare` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"\n",
"Head to https://www.cloudflare.com/developer-platform/products/workers-ai/ to sign up to CloudflareWorkersAI and generate an API key. Once you've done this set the CF_API_KEY environment variable and the CF_ACCOUNT_ID environment variable:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94",
"metadata": {
"is_executing": true
},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if not os.getenv(\"CF_API_KEY\"):\n",
" os.environ[\"CF_API_KEY\"] = getpass.getpass(\n",
" \"Enter your CloudflareWorkersAI API key: \"\n",
" )\n",
"\n",
"if not os.getenv(\"CF_ACCOUNT_ID\"):\n",
" os.environ[\"CF_ACCOUNT_ID\"] = getpass.getpass(\n",
" \"Enter your CloudflareWorkersAI account ID: \"\n",
" )"
]
},
{
"cell_type": "markdown",
"id": "72ee0c4b-9764-423a-9dbf-95129e185210",
"metadata": {},
"source": [
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
]
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain CloudflareWorkersAI integration lives in the `langchain-cloudflare` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-cloudflare"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:\n",
"\n",
"- Update model instantiation with relevant params."
]
},
{
"cell_type": "code",
"execution_count": 35,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {
"ExecuteTime": {
"end_time": "2025-04-07T17:48:31.193773Z",
"start_time": "2025-04-07T17:48:31.179196Z"
}
},
"outputs": [],
"source": [
"from langchain_cloudflare.chat_models import ChatCloudflareWorkersAI\n",
"\n",
"llm = ChatCloudflareWorkersAI(\n",
" model=\"@cf/meta/llama-3.3-70b-instruct-fp8-fast\",\n",
" temperature=0,\n",
" max_tokens=1024,\n",
" # other params...\n",
")"
]
},
{
"cell_type": "markdown",
"id": "2b4f3e15",
"metadata": {},
"source": [
"## Invocation\n"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"J'adore la programmation.\", additional_kwargs={}, response_metadata={'token_usage': {'prompt_tokens': 37, 'completion_tokens': 9, 'total_tokens': 46}, 'model_name': '@cf/meta/llama-3.3-70b-instruct-fp8-fast'}, id='run-995d1970-b6be-49f3-99ae-af4cdba02304-0', usage_metadata={'input_tokens': 37, 'output_tokens': 9, 'total_tokens': 46})"
]
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"J'adore la programmation.\n"
]
}
],
"source": [
"print(ai_msg.content)"
]
},
{
"cell_type": "markdown",
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:\n"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Ich liebe das Programmieren.', additional_kwargs={}, response_metadata={'token_usage': {'prompt_tokens': 32, 'completion_tokens': 7, 'total_tokens': 39}, 'model_name': '@cf/meta/llama-3.3-70b-instruct-fp8-fast'}, id='run-d1b677bc-194e-4473-90f1-aa65e8e46d50-0', usage_metadata={'input_tokens': 32, 'output_tokens': 7, 'total_tokens': 39})"
]
},
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "d1ee55bc-ffc8-4cfa-801c-993953a08cfd",
"metadata": {},
"source": [
"## Structured Outputs"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "91cae406-14d7-46c9-b942-2d1476588423",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'setup': 'Why did the cat join a band?',\n",
" 'punchline': 'Because it wanted to be the purr-cussionist',\n",
" 'rating': '8'}"
]
},
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"json_schema = {\n",
" \"title\": \"joke\",\n",
" \"description\": \"Joke to tell user.\",\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"setup\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"The setup of the joke\",\n",
" },\n",
" \"punchline\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"The punchline to the joke\",\n",
" },\n",
" \"rating\": {\n",
" \"type\": \"integer\",\n",
" \"description\": \"How funny the joke is, from 1 to 10\",\n",
" \"default\": None,\n",
" },\n",
" },\n",
" \"required\": [\"setup\", \"punchline\"],\n",
"}\n",
"structured_llm = llm.with_structured_output(json_schema)\n",
"\n",
"structured_llm.invoke(\"Tell me a joke about cats\")"
]
},
{
"cell_type": "markdown",
"id": "dbfc0c43-e76b-446e-bbb1-d351640bb7be",
"metadata": {},
"source": [
"## Bind tools"
]
},
{
"cell_type": "code",
"execution_count": 36,
"id": "0765265e-4d00-4030-bf48-7e8d8c9af2ec",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'validate_user',\n",
" 'args': {'user_id': '123',\n",
" 'addresses': '[\"123 Fake St in Boston MA\", \"234 Pretend Boulevard in Houston TX\"]'},\n",
" 'id': '31ec7d6a-9ce5-471b-be64-8ea0492d1387',\n",
" 'type': 'tool_call'}]"
]
},
"execution_count": 36,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from typing import List\n",
"\n",
"from langchain_core.tools import tool\n",
"\n",
"\n",
"@tool\n",
"def validate_user(user_id: int, addresses: List[str]) -> bool:\n",
" \"\"\"Validate user using historical addresses.\n",
"\n",
" Args:\n",
" user_id (int): the user ID.\n",
" addresses (List[str]): Previous addresses as a list of strings.\n",
" \"\"\"\n",
" return True\n",
"\n",
"\n",
"llm_with_tools = llm.bind_tools([validate_user])\n",
"\n",
"result = llm_with_tools.invoke(\n",
" \"Could you validate user 123? They previously lived at \"\n",
" \"123 Fake St in Boston MA and 234 Pretend Boulevard in \"\n",
" \"Houston TX.\"\n",
")\n",
"result.tool_calls"
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"https://developers.cloudflare.com/workers-ai/\n",
"https://developers.cloudflare.com/agents/"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.7"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

File diff suppressed because one or more lines are too long


@ -3,7 +3,9 @@
{
"cell_type": "raw",
"id": "59148044",
"metadata": {},
"metadata": {
"id": "59148044"
},
"source": [
"---\n",
"sidebar_label: LiteLLM\n",
@ -11,120 +13,223 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "bf733a38-db84-4363-89e2-de6735c37230",
"id": "5bcea387",
"metadata": {
"id": "5bcea387"
},
"source": [
"# ChatLiteLLM and ChatLiteLLMRouter\n",
"\n",
"[LiteLLM](https://github.com/BerriAI/litellm) is a library that simplifies calling Anthropic, Azure, Huggingface, Replicate, etc.\n",
"\n",
"This notebook covers how to get started with using Langchain + the LiteLLM I/O library.\n",
"\n",
"This integration contains two main classes:\n",
"\n",
"- ```ChatLiteLLM```: The main Langchain wrapper for basic usage of LiteLLM ([docs](https://docs.litellm.ai/docs/)).\n",
"- ```ChatLiteLLMRouter```: A ```ChatLiteLLM``` wrapper that leverages LiteLLM's Router ([docs](https://docs.litellm.ai/docs/routing))."
]
},
{
"cell_type": "markdown",
"id": "2ddb7fd3",
"metadata": {},
"source": [
"# ChatLiteLLM\n",
"## Table of Contents\n",
"1. [Overview](#overview)\n",
" - [Integration Details](#integration-details)\n",
" - [Model Features](#model-features)\n",
"2. [Setup](#setup)\n",
"3. [Credentials](#credentials)\n",
"4. [Installation](#installation)\n",
"5. [Instantiation](#instantiation)\n",
" - [ChatLiteLLM](#chatlitellm)\n",
" - [ChatLiteLLMRouter](#chatlitellmrouter)\n",
"6. [Invocation](#invocation)\n",
"7. [Async and Streaming Functionality](#async-and-streaming-functionality)\n",
"8. [API Reference](#api-reference)"
]
},
{
"cell_type": "markdown",
"id": "37be6ef8",
"metadata": {},
"source": [
"## Overview\n",
"### Integration details\n",
"\n",
"[LiteLLM](https://github.com/BerriAI/litellm) is a library that simplifies calling Anthropic, Azure, Huggingface, Replicate, etc. \n",
"| Class | Package | Local | Serializable | JS support| Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatLiteLLM](https://python.langchain.com/docs/integrations/chat/litellm/#chatlitellm) | [langchain-litellm](https://pypi.org/project/langchain-litellm/)| ❌ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-litellm?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-litellm?style=flat-square&label=%20) |\n",
"| [ChatLiteLLMRouter](https://python.langchain.com/docs/integrations/chat/litellm/#chatlitellmrouter) | [langchain-litellm](https://pypi.org/project/langchain-litellm/)| ❌ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-litellm?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-litellm?style=flat-square&label=%20) |\n",
"\n",
"This notebook covers how to get started with using Langchain + the LiteLLM I/O library. "
"### Model features\n",
"| [Tool calling](https://python.langchain.com/docs/how_to/tool_calling/) | [Structured output](https://python.langchain.com/docs/how_to/structured_output/) | JSON mode | Image input | Audio input | Video input | [Token-level streaming](https://python.langchain.com/docs/integrations/chat/litellm/#chatlitellm-also-supports-async-and-streaming-functionality) | [Native async](https://python.langchain.com/docs/integrations/chat/litellm/#chatlitellm-also-supports-async-and-streaming-functionality) | [Token usage](https://python.langchain.com/docs/how_to/chat_token_usage_tracking/) | [Logprobs](https://python.langchain.com/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |\n",
"\n",
"### Setup\n",
"To access ```ChatLiteLLM``` and ```ChatLiteLLMRouter``` models, you'll need to install the `langchain-litellm` package and create an OpenAI, Anthropic, Azure, Replicate, OpenRouter, Hugging Face, Together AI, or Cohere account. Then, you have to get an API key and export it as an environment variable."
]
},
{
"cell_type": "markdown",
"id": "0a2f8164",
"metadata": {
"id": "0a2f8164"
},
"source": [
"## Credentials\n",
"\n",
"You have to choose the LLM provider you want and sign up with them to get their API key.\n",
"\n",
"### Example - Anthropic\n",
"Head to https://console.anthropic.com/ to sign up for Anthropic and generate an API key. Once you've done this, set the ANTHROPIC_API_KEY environment variable.\n",
"\n",
"\n",
"### Example - OpenAI\n",
"Head to https://platform.openai.com/api-keys to sign up for OpenAI and generate an API key. Once you've done this, set the OPENAI_API_KEY environment variable."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "d4a7c55d-b235-4ca4-a579-c90cc9570da9",
"execution_count": null,
"id": "7595eddf",
"metadata": {
"tags": []
"id": "7595eddf"
},
"outputs": [],
"source": [
"from langchain_community.chat_models import ChatLiteLLM\n",
"from langchain_core.messages import HumanMessage"
"## Set ENV variables\n",
"import os\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = \"your-openai-key\"\n",
"os.environ[\"ANTHROPIC_API_KEY\"] = \"your-anthropic-key\""
]
},
{
"cell_type": "markdown",
"id": "74c3ad30",
"metadata": {
"id": "74c3ad30"
},
"source": [
"### Installation\n",
"\n",
"The LangChain LiteLLM integration is available in the `langchain-litellm` package:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "70cf04e8-423a-4ff6-8b09-f11fb711c817",
"id": "ca3f8a25",
"metadata": {
"tags": []
"id": "ca3f8a25"
},
"outputs": [],
"source": [
"chat = ChatLiteLLM(model=\"gpt-3.5-turbo\")"
"%pip install -qU langchain-litellm"
]
},
{
"cell_type": "markdown",
"id": "bc1182b4",
"metadata": {
"id": "bc1182b4"
},
"source": [
"## Instantiation"
]
},
{
"cell_type": "markdown",
"id": "d439241a",
"metadata": {},
"source": [
"### ChatLiteLLM\n",
"You can instantiate a ```ChatLiteLLM``` model by providing a ```model``` name [supported by LiteLLM](https://docs.litellm.ai/docs/providers)."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "8199ef8f-eb8b-4253-9ea0-6c24a013ca4c",
"execution_count": null,
"id": "d4a7c55d-b235-4ca4-a579-c90cc9570da9",
"metadata": {
"id": "d4a7c55d-b235-4ca4-a579-c90cc9570da9",
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\" J'aime la programmation.\", additional_kwargs={}, example=False)"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"messages = [\n",
" HumanMessage(\n",
" content=\"Translate this sentence from English to French. I love programming.\"\n",
" )\n",
"]\n",
"chat(messages)"
"from langchain_litellm import ChatLiteLLM\n",
"\n",
"llm = ChatLiteLLM(model=\"gpt-4.1-nano\", temperature=0.1)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "c361ab1e-8c0c-4206-9e3c-9d1424a12b9c",
"id": "3d0ed306",
"metadata": {},
"source": [
"## `ChatLiteLLM` also supports async and streaming functionality:"
"### ChatLiteLLMRouter\n",
"You can also leverage LiteLLM's routing capabilities by defining your model list as specified [here](https://docs.litellm.ai/docs/routing)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8d26393a",
"metadata": {},
"outputs": [],
"source": [
"from langchain_litellm import ChatLiteLLMRouter\n",
"from litellm import Router\n",
"\n",
"model_list = [\n",
" {\n",
" \"model_name\": \"gpt-4.1\",\n",
" \"litellm_params\": {\n",
" \"model\": \"azure/gpt-4.1\",\n",
" \"api_key\": \"<your-api-key>\",\n",
" \"api_version\": \"2024-10-21\",\n",
" \"api_base\": \"https://<your-endpoint>.openai.azure.com/\",\n",
" },\n",
" },\n",
" {\n",
" \"model_name\": \"gpt-4o\",\n",
" \"litellm_params\": {\n",
" \"model\": \"azure/gpt-4o\",\n",
" \"api_key\": \"<your-api-key>\",\n",
" \"api_version\": \"2024-10-21\",\n",
" \"api_base\": \"https://<your-endpoint>.openai.azure.com/\",\n",
" },\n",
" },\n",
"]\n",
"litellm_router = Router(model_list=model_list)\n",
"llm = ChatLiteLLMRouter(router=litellm_router, model_name=\"gpt-4.1\", temperature=0.1)"
]
},
{
"cell_type": "markdown",
"id": "63d98454",
"metadata": {
"id": "63d98454"
},
"source": [
"## Invocation\n",
"Whether you've instantiated a `ChatLiteLLM` or a `ChatLiteLLMRouter`, you can now use the ChatModel through Langchain's API."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "93a21c5c-6ef9-4688-be60-b2e1f94842fb",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain_core.callbacks import CallbackManager, StreamingStdOutCallbackHandler"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "c5fac0e9-05a4-4fc1-a3b3-e5bbb24b971b",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"LLMResult(generations=[[ChatGeneration(text=\" J'aime programmer.\", generation_info=None, message=AIMessage(content=\" J'aime programmer.\", additional_kwargs={}, example=False))]], llm_output={}, run=[RunInfo(run_id=UUID('8cc8fb68-1c35-439c-96a0-695036a93652'))])"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"await chat.agenerate([messages])"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "025be980-e50d-4a68-93dc-c9c7b500ce34",
"id": "8199ef8f-eb8b-4253-9ea0-6c24a013ca4c",
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "8199ef8f-eb8b-4253-9ea0-6c24a013ca4c",
"outputId": "a4c0e5f5-a859-43fa-dd78-74fc0922ecb2",
"tags": []
},
"outputs": [
@ -132,41 +237,76 @@
"name": "stdout",
"output_type": "stream",
"text": [
" J'aime la programmation."
"content='Neutral' additional_kwargs={} response_metadata={'token_usage': Usage(completion_tokens=2, prompt_tokens=30, total_tokens=32, completion_tokens_details=CompletionTokensDetailsWrapper(accepted_prediction_tokens=0, audio_tokens=0, reasoning_tokens=0, rejected_prediction_tokens=0, text_tokens=None), prompt_tokens_details=PromptTokensDetailsWrapper(audio_tokens=0, cached_tokens=0, text_tokens=None, image_tokens=None)), 'model': 'gpt-3.5-turbo', 'finish_reason': 'stop', 'model_name': 'gpt-3.5-turbo'} id='run-ab6a3b21-eae8-4c27-acb2-add65a38221a-0' usage_metadata={'input_tokens': 30, 'output_tokens': 2, 'total_tokens': 32}\n"
]
},
{
"data": {
"text/plain": [
"AIMessage(content=\" J'aime la programmation.\", additional_kwargs={}, example=False)"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chat = ChatLiteLLM(\n",
" streaming=True,\n",
" verbose=True,\n",
" callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),\n",
"response = await llm.ainvoke(\n",
" \"Classify the text into neutral, negative or positive. Text: I think the food was okay. Sentiment:\"\n",
")\n",
"chat(messages)"
"print(response)"
]
},
{
"cell_type": "markdown",
"id": "c361ab1e-8c0c-4206-9e3c-9d1424a12b9c",
"metadata": {
"id": "c361ab1e-8c0c-4206-9e3c-9d1424a12b9c"
},
"source": [
"## Async and Streaming Functionality\n",
"`ChatLiteLLM` and `ChatLiteLLMRouter` also support async and streaming functionality:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c253883f",
"metadata": {},
"outputs": [],
"source": []
"execution_count": 5,
"id": "c5fac0e9-05a4-4fc1-a3b3-e5bbb24b971b",
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "c5fac0e9-05a4-4fc1-a3b3-e5bbb24b971b",
"outputId": "ee8cdda1-d992-4696-9ad0-aa146360a3ee",
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Antibiotics are medications that fight bacterial infections in the body. They work by targeting specific bacteria and either killing them or preventing their growth and reproduction.\n",
"\n",
"There are several different mechanisms by which antibiotics work. Some antibiotics work by disrupting the cell walls of bacteria, causing them to burst and die. Others interfere with the protein synthesis of bacteria, preventing them from growing and reproducing. Some antibiotics target the DNA or RNA of bacteria, disrupting their ability to replicate.\n",
"\n",
"It is important to note that antibiotics only work against bacterial infections and not viral infections. It is also crucial to take antibiotics as prescribed by a healthcare professional and to complete the full course of treatment, even if symptoms improve before the medication is finished. This helps to prevent antibiotic resistance, where bacteria become resistant to the effects of antibiotics."
]
}
],
"source": [
"async for token in llm.astream(\"Hello, please explain how antibiotics work\"):\n",
" print(token.text(), end=\"\")"
]
},
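{
"cell_type": "markdown",
"id": "a6c7d8e9",
"metadata": {},
"source": [
"## Tool calling\n",
"\n",
"The feature table above marks tool calling as supported. A minimal sketch via LangChain's standard `bind_tools` interface follows; the tool is illustrative, and support ultimately depends on the underlying provider:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b7d8e9f0",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.tools import tool\n",
"\n",
"\n",
"@tool\n",
"def multiply(a: int, b: int) -> int:\n",
"    \"\"\"Multiply two integers.\"\"\"\n",
"    return a * b\n",
"\n",
"\n",
"llm_with_tools = llm.bind_tools([multiply])\n",
"result = llm_with_tools.invoke(\"What is 6 times 7?\")\n",
"result.tool_calls"
]
},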
{
"cell_type": "markdown",
"id": "88af2a9b",
"metadata": {
"id": "88af2a9b"
},
"source": [
"## API reference\n",
"For detailed documentation of all `ChatLiteLLM` and `ChatLiteLLMRouter` features and configurations, head to the API reference: https://github.com/Akshay-Dongare/langchain-litellm"
]
}
],
"metadata": {
"colab": {
"provenance": []
},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": "g6_alda",
"language": "python",
"name": "python3"
},
@ -180,7 +320,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.12.4"
}
},
"nbformat": 4,


@ -1,218 +0,0 @@
{
"cells": [
{
"cell_type": "raw",
"id": "59148044",
"metadata": {},
"source": [
"---\n",
"sidebar_label: LiteLLM Router\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "247da7a6",
"metadata": {},
"source": []
},
{
"attachments": {},
"cell_type": "markdown",
"id": "bf733a38-db84-4363-89e2-de6735c37230",
"metadata": {},
"source": [
"# ChatLiteLLMRouter\n",
"\n",
"[LiteLLM](https://github.com/BerriAI/litellm) is a library that simplifies calling Anthropic, Azure, Huggingface, Replicate, etc. \n",
"\n",
"This notebook covers how to get started with using Langchain + the LiteLLM Router I/O library. "
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "d4a7c55d-b235-4ca4-a579-c90cc9570da9",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain_community.chat_models import ChatLiteLLMRouter\n",
"from langchain_core.messages import HumanMessage\n",
"from litellm import Router"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "70cf04e8-423a-4ff6-8b09-f11fb711c817",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"model_list = [\n",
" {\n",
" \"model_name\": \"gpt-4\",\n",
" \"litellm_params\": {\n",
" \"model\": \"azure/gpt-4-1106-preview\",\n",
" \"api_key\": \"<your-api-key>\",\n",
" \"api_version\": \"2023-05-15\",\n",
" \"api_base\": \"https://<your-endpoint>.openai.azure.com/\",\n",
" },\n",
" },\n",
" {\n",
" \"model_name\": \"gpt-35-turbo\",\n",
" \"litellm_params\": {\n",
" \"model\": \"azure/gpt-35-turbo\",\n",
" \"api_key\": \"<your-api-key>\",\n",
" \"api_version\": \"2023-05-15\",\n",
" \"api_base\": \"https://<your-endpoint>.openai.azure.com/\",\n",
" },\n",
" },\n",
"]\n",
"litellm_router = Router(model_list=model_list)\n",
"chat = ChatLiteLLMRouter(router=litellm_router, model_name=\"gpt-35-turbo\")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "8199ef8f-eb8b-4253-9ea0-6c24a013ca4c",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"J'aime programmer.\")"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" HumanMessage(\n",
" content=\"Translate this sentence from English to French. I love programming.\"\n",
" )\n",
"]\n",
"chat(messages)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "c361ab1e-8c0c-4206-9e3c-9d1424a12b9c",
"metadata": {},
"source": [
"## `ChatLiteLLMRouter` also supports async and streaming functionality:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "93a21c5c-6ef9-4688-be60-b2e1f94842fb",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain_core.callbacks import CallbackManager, StreamingStdOutCallbackHandler"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "c5fac0e9-05a4-4fc1-a3b3-e5bbb24b971b",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"LLMResult(generations=[[ChatGeneration(text=\"J'adore programmer.\", generation_info={'finish_reason': 'stop'}, message=AIMessage(content=\"J'adore programmer.\"))]], llm_output={'token_usage': {'completion_tokens': 6, 'prompt_tokens': 19, 'total_tokens': 25}, 'model_name': None}, run=[RunInfo(run_id=UUID('75003ec9-1e2b-43b7-a216-10dcc0f75e00'))])"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"await chat.agenerate([messages])"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "025be980-e50d-4a68-93dc-c9c7b500ce34",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"J'adore programmer."
]
},
{
"data": {
"text/plain": [
"AIMessage(content=\"J'adore programmer.\")"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chat = ChatLiteLLMRouter(\n",
" router=litellm_router,\n",
" model_name=\"gpt-35-turbo\",\n",
" streaming=True,\n",
" verbose=True,\n",
" callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),\n",
")\n",
"chat(messages)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c253883f",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@ -17,21 +17,21 @@
"source": [
"# ChatClovaX\n",
"\n",
"This notebook provides a quick overview for getting started with Navers HyperCLOVA X [chat models](https://python.langchain.com/docs/concepts/chat_models) via CLOVA Studio. For detailed documentation of all ChatClovaX features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.naver.ChatClovaX.html).\n",
"This notebook provides a quick overview for getting started with Navers HyperCLOVA X [chat models](https://python.langchain.com/docs/concepts/chat_models) via CLOVA Studio. For detailed documentation of all ChatClovaX features and configurations head to the [API reference](https://guide.ncloud-docs.com/docs/clovastudio-dev-langchain).\n",
"\n",
"[CLOVA Studio](http://clovastudio.ncloud.com/) has several chat models. You can find information about latest models and their costs, context windows, and supported input types in the CLOVA Studio API Guide [documentation](https://api.ncloud-docs.com/docs/clovastudio-chatcompletions).\n",
"[CLOVA Studio](http://clovastudio.ncloud.com/) has several chat models. You can find information about latest models and their costs, context windows, and supported input types in the CLOVA Studio Guide [documentation](https://guide.ncloud-docs.com/docs/clovastudio-model).\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
"| :--- | :--- |:-----:| :---: |:------------------------------------------------------------------------:| :---: | :---: |\n",
"| [ChatClovaX](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.naver.ChatClovaX.html) | [langchain-community](https://python.langchain.com/api_reference/community/index.html) | ❌ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_community?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_community?style=flat-square&label=%20) |\n",
"| [ChatClovaX](https://guide.ncloud-docs.com/docs/clovastudio-dev-langchain#HyperCLOVAX%EB%AA%A8%EB%8D%B8%EC%9D%B4%EC%9A%A9) | [langchain-naver](https://pypi.org/project/langchain-naver/) | ❌ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_naver?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_naver?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling/) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"|:------------------------------------------:| :---: | :---: | :---: | :---: | :---: |:-----------------------------------------------------:| :---: |:------------------------------------------------------:|:----------------------------------:|\n",
"|❌| ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |\n",
"|✅| ❌ | ❌ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |\n",
"\n",
"## Setup\n",
"\n",
@ -39,26 +39,23 @@
"\n",
"1. Creating [NAVER Cloud Platform](https://www.ncloud.com/) account\n",
"2. Apply to use [CLOVA Studio](https://www.ncloud.com/product/aiService/clovaStudio)\n",
"3. Create a CLOVA Studio Test App or Service App of a model to use (See [here](https://guide.ncloud-docs.com/docs/en/clovastudio-playground01#테스트앱생성).)\n",
"3. Create a CLOVA Studio Test App or Service App of a model to use (See [here](https://guide.ncloud-docs.com/docs/clovastudio-playground-testapp).)\n",
"4. Issue a Test or Service API key (See [here](https://api.ncloud-docs.com/docs/ai-naver-clovastudio-summary#API%ED%82%A4).)\n",
"\n",
"### Credentials\n",
"\n",
"Set the `NCP_CLOVASTUDIO_API_KEY` environment variable with your API key.\n",
" - Note that if you are using a legacy API Key (that doesn't start with `nv-*` prefix), you might need to get an additional API Key by clicking `App Request Status` > `Service App, Test App List` > `Details button for each app` in [CLOVA Studio](https://clovastudio.ncloud.com/studio-application/service-app) and set it as `NCP_APIGW_API_KEY`.\n",
"Set the `CLOVASTUDIO_API_KEY` environment variable with your API key.\n",
"\n",
"You can add them to your environment variables as below:\n",
"\n",
"``` bash\n",
"export NCP_CLOVASTUDIO_API_KEY=\"your-api-key-here\"\n",
"# Uncomment below to use a legacy API key\n",
"# export NCP_APIGW_API_KEY=\"your-api-key-here\"\n",
"export CLOVASTUDIO_API_KEY=\"your-api-key-here\"\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 2,
"id": "2def81b5-b023-4f40-a97b-b2c5ca59d6a9",
"metadata": {},
"outputs": [],
@ -66,22 +63,19 @@
"import getpass\n",
"import os\n",
"\n",
"if not os.getenv(\"NCP_CLOVASTUDIO_API_KEY\"):\n",
" os.environ[\"NCP_CLOVASTUDIO_API_KEY\"] = getpass.getpass(\n",
" \"Enter your NCP CLOVA Studio API Key: \"\n",
" )\n",
"# Uncomment below to use a legacy API key\n",
"# if not os.getenv(\"NCP_APIGW_API_KEY\"):\n",
"# os.environ[\"NCP_APIGW_API_KEY\"] = getpass.getpass(\n",
"# \"Enter your NCP API Gateway API key: \"\n",
"# )"
"if not os.getenv(\"CLOVASTUDIO_API_KEY\"):\n",
" os.environ[\"CLOVASTUDIO_API_KEY\"] = getpass.getpass(\n",
" \"Enter your CLOVA Studio API Key: \"\n",
" )"
]
},
{
"cell_type": "markdown",
"id": "7c695442",
"metadata": {},
"source": "To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
"source": [
"To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
]
},
{
"cell_type": "code",
@ -101,7 +95,7 @@
"source": [
"### Installation\n",
"\n",
"The LangChain Naver integration lives in the `langchain-community` package:"
"The LangChain Naver integration lives in the `langchain-naver` package:"
]
},
{
@ -112,7 +106,7 @@
"outputs": [],
"source": [
"# install package\n",
"!pip install -qU langchain-community"
"%pip install -qU langchain-naver"
]
},
{
@ -127,21 +121,19 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 3,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_models import ChatClovaX\n",
"from langchain_naver import ChatClovaX\n",
"\n",
"chat = ChatClovaX(\n",
" model=\"HCX-003\",\n",
" max_tokens=100,\n",
" model=\"HCX-005\",\n",
" temperature=0.5,\n",
" # clovastudio_api_key=\"...\" # set if you prefer to pass api key directly instead of using environment variables\n",
" # task_id=\"...\" # set if you want to use fine-tuned model\n",
" # service_app=False # set True if using Service App. Default value is False (means using Test App)\n",
" # include_ai_filters=False # set True if you want to detect inappropriate content. Default value is False\n",
" max_tokens=None,\n",
" timeout=None,\n",
" max_retries=2,\n",
" # other params...\n",
")"
]
@ -153,12 +145,12 @@
"source": [
"## Invocation\n",
"\n",
"In addition to invoke, we also support batch and stream functionalities."
"In addition to invoke, `ChatClovaX` also support batch and stream functionalities."
]
},
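{
"cell_type": "markdown",
"id": "c8e9f0a1",
"metadata": {},
"source": [
"Batch and stream use the same message inputs as invoke; here is a minimal, self-contained sketch (the input is illustrative):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d9f0a1b2",
"metadata": {},
"outputs": [],
"source": [
"messages = [(\"human\", \"Translate this sentence to Korean: I love programming.\")]\n",
"\n",
"# batch: run several inputs in a single call\n",
"results = chat.batch([messages, messages])\n",
"\n",
"# stream: iterate over chunks as they arrive\n",
"for chunk in chat.stream(messages):\n",
"    print(chunk.content, end=\"\")"
]
},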
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 4,
"id": "62e0dbc3",
"metadata": {
"tags": []
@ -167,10 +159,10 @@
{
"data": {
"text/plain": [
"AIMessage(content='저는 네이버 AI를 사용하는 것이 좋아요.', additional_kwargs={}, response_metadata={'stop_reason': 'stop_before', 'input_length': 25, 'output_length': 14, 'seed': 1112164354, 'ai_filter': None}, id='run-b57bc356-1148-4007-837d-cc409dbd57cc-0', usage_metadata={'input_tokens': 25, 'output_tokens': 14, 'total_tokens': 39})"
"AIMessage(content='네이버 인공지능을 사용하는 것을 정말 좋아합니다.', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 11, 'prompt_tokens': 28, 'total_tokens': 39, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'HCX-005', 'system_fingerprint': None, 'id': 'b70c26671cd247a0864115bacfb5fc12', 'finish_reason': 'stop', 'logprobs': None}, id='run-3faf6a8d-d5da-49ad-9fbb-7b56ed23b484-0', usage_metadata={'input_tokens': 28, 'output_tokens': 11, 'total_tokens': 39, 'input_token_details': {}, 'output_token_details': {}})"
]
},
"execution_count": 3,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
@@ -190,7 +182,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 5,
"id": "24e7377f",
"metadata": {},
"outputs": [
@@ -198,7 +190,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"저는 네이버 AI를 사용하는 것이 좋아요.\n"
"네이버 인공지능을 사용하는 것을 정말 좋아합니다.\n"
]
}
],
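The cells above only demonstrate `invoke`; a minimal sketch of the batch and stream calls mentioned earlier, assuming the same `chat` model and `messages` list from this notebook, could look like:

```python
# Minimal sketch, assuming the `chat` model and `messages` list defined above.
results = chat.batch([messages, messages])  # one AIMessage per input

for chunk in chat.stream(messages):
    print(chunk.content, end="", flush=True)
```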
@@ -218,17 +210,17 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 6,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='저는 네이버 AI를 사용하는 것이 좋아요.', additional_kwargs={}, response_metadata={'stop_reason': 'stop_before', 'input_length': 25, 'output_length': 14, 'seed': 2575184681, 'ai_filter': None}, id='run-7014b330-eba3-4701-bb62-df73ce39b854-0', usage_metadata={'input_tokens': 25, 'output_tokens': 14, 'total_tokens': 39})"
"AIMessage(content='저는 네이버 인공지능을 사용하는 것을 좋아합니다.', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 10, 'prompt_tokens': 28, 'total_tokens': 38, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'HCX-005', 'system_fingerprint': None, 'id': 'b7a826d17fcf4fee8386fca2ebc63284', 'finish_reason': 'stop', 'logprobs': None}, id='run-35957816-3325-4d9c-9441-e40704912be6-0', usage_metadata={'input_tokens': 28, 'output_tokens': 10, 'total_tokens': 38, 'input_token_details': {}, 'output_token_details': {}})"
]
},
"execution_count": 5,
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
@@ -266,7 +258,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 7,
"id": "2c07af21-dda5-4514-b4de-1f214c2cebcd",
"metadata": {},
"outputs": [
@@ -274,7 +266,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Certainly! In Korean, \"Hi\" is pronounced as \"안녕\" (annyeong). The first syllable, \"안,\" sounds like the \"ahh\" sound in \"apple,\" while the second syllable, \"녕,\" sounds like the \"yuh\" sound in \"you.\" So when you put them together, it's like saying \"ahhyuh-nyuhng.\" Remember to pronounce each syllable clearly and separately for accurate pronunciation."
"In Korean, the informal way of saying 'hi' is \"안녕\" (annyeong). If you're addressing someone older or showing more respect, you would use \"안녕하세요\" (annjeonghaseyo). Both phrases are used as greetings similar to 'hello'. Remember, pronunciation is key so make sure to pronounce each syllable clearly: 안-녀-엉 (an-nyeo-eong) and 안-녕-하-세-요 (an-nyeong-ha-se-yo)."
]
}
],
@@ -298,115 +290,37 @@
"\n",
"### Using fine-tuned models\n",
"\n",
"You can call fine-tuned models by passing in your corresponding `task_id` parameter. (You dont need to specify the `model_name` parameter when calling fine-tuned model.)\n",
"You can call fine-tuned models by passing the `task_id` to the `model` parameter as: `ft:{task_id}`.\n",
"\n",
"You can check `task_id` from corresponding Test App or Service App details."
]
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": null,
"id": "cb436788",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='저는 네이버 AI를 사용하는 것이 너무 좋아요.', additional_kwargs={}, response_metadata={'stop_reason': 'stop_before', 'input_length': 25, 'output_length': 15, 'seed': 52559061, 'ai_filter': None}, id='run-5bea8d4a-48f3-4c34-ae70-66e60dca5344-0', usage_metadata={'input_tokens': 25, 'output_tokens': 15, 'total_tokens': 40})"
"AIMessage(content='네이버 인공지능을 사용하는 것을 정말 좋아합니다.', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 11, 'prompt_tokens': 28, 'total_tokens': 39, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'HCX-005', 'system_fingerprint': None, 'id': '2222d6d411a948c883aac1e03ca6cebe', 'finish_reason': 'stop', 'logprobs': None}, id='run-9696d7e2-7afa-4bb4-9c03-b95fcf678ab8-0', usage_metadata={'input_tokens': 28, 'output_tokens': 11, 'total_tokens': 39, 'input_token_details': {}, 'output_token_details': {}})"
]
},
"execution_count": 7,
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"fine_tuned_model = ChatClovaX(\n",
" task_id=\"5s8egt3a\", # set if you want to use fine-tuned model\n",
" model=\"ft:a1b2c3d4\", # set as `ft:{task_id}` with your fine-tuned model's task id\n",
" # other params...\n",
")\n",
"\n",
"fine_tuned_model.invoke(messages)"
]
},
{
"cell_type": "markdown",
"id": "f428deaf",
"metadata": {},
"source": [
"### Service App\n",
"\n",
"When going live with production-level application using CLOVA Studio, you should apply for and use Service App. (See [here](https://guide.ncloud-docs.com/docs/en/clovastudio-playground01#서비스앱신청).)\n",
"\n",
"For a Service App, you should use a corresponding Service API key and can only be called with it."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dcf566df",
"metadata": {},
"outputs": [],
"source": [
"# Update environment variables\n",
"\n",
"os.environ[\"NCP_CLOVASTUDIO_API_KEY\"] = getpass.getpass(\n",
" \"Enter NCP CLOVA Studio Service API Key: \"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "cebe27ae",
"metadata": {},
"outputs": [],
"source": [
"chat = ChatClovaX(\n",
" service_app=True, # True if you want to use your service app, default value is False.\n",
" # clovastudio_api_key=\"...\" # if you prefer to pass api key in directly instead of using env vars\n",
" model=\"HCX-003\",\n",
" # other params...\n",
")\n",
"ai_msg = chat.invoke(messages)"
]
},
{
"cell_type": "markdown",
"id": "d73e7140",
"metadata": {},
"source": [
"### AI Filter\n",
"\n",
"AI Filter detects inappropriate output such as profanity from the test app (or service app included) created in Playground and informs the user. See [here](https://guide.ncloud-docs.com/docs/en/clovastudio-playground01#AIFilter) for details."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "32bfbc93",
"metadata": {},
"outputs": [],
"source": [
"chat = ChatClovaX(\n",
" model=\"HCX-003\",\n",
" include_ai_filters=True, # True if you want to enable ai filter\n",
" # other params...\n",
")\n",
"\n",
"ai_msg = chat.invoke(messages)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7bd9e179",
"metadata": {},
"outputs": [],
"source": [
"print(ai_msg.response_metadata[\"ai_filter\"])"
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
@@ -414,13 +328,13 @@
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatNaver features and configurations head to the API reference: https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.naver.ChatClovaX.html"
"For detailed documentation of all ChatClovaX features and configurations head to the [API reference](https://guide.ncloud-docs.com/docs/clovastudio-dev-langchain)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
@@ -434,7 +348,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.3"
"version": "3.12.8"
}
},
"nbformat": 4,

File diff suppressed because one or more lines are too long


@@ -408,7 +408,7 @@
"\n",
":::\n",
"\n",
"OpenAI supports a [Responses](https://platform.openai.com/docs/guides/responses-vs-chat-completions) API that is oriented toward building [agentic](/docs/concepts/agents/) applications. It includes a suite of [built-in tools](https://platform.openai.com/docs/guides/tools?api-mode=responses), including web and file search. It also supports management of [conversation state](https://platform.openai.com/docs/guides/conversation-state?api-mode=responses), allowing you to continue a conversational thread without explicitly passing in previous messages.\n",
"OpenAI supports a [Responses](https://platform.openai.com/docs/guides/responses-vs-chat-completions) API that is oriented toward building [agentic](/docs/concepts/agents/) applications. It includes a suite of [built-in tools](https://platform.openai.com/docs/guides/tools?api-mode=responses), including web and file search. It also supports management of [conversation state](https://platform.openai.com/docs/guides/conversation-state?api-mode=responses), allowing you to continue a conversational thread without explicitly passing in previous messages, as well as the output from [reasoning processes](https://platform.openai.com/docs/guides/reasoning?api-mode=responses).\n",
"\n",
"`ChatOpenAI` will route to the Responses API if one of these features is used. You can also specify `use_responses_api=True` when instantiating `ChatOpenAI`.\n",
"\n",
@@ -1056,6 +1056,77 @@
"print(second_response.text())"
]
},
{
"cell_type": "markdown",
"id": "67bf5bd2-0935-40a0-b1cd-c6662b681d4b",
"metadata": {},
"source": [
"### Reasoning output\n",
"\n",
"Some OpenAI models will generate separate text content illustrating their reasoning process. See OpenAI's [reasoning documentation](https://platform.openai.com/docs/guides/reasoning?api-mode=responses) for details.\n",
"\n",
"OpenAI can return a summary of the model's reasoning (although it doesn't expose the raw reasoning tokens). To configure `ChatOpenAI` to return this summary, specify the `reasoning` parameter:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "8d322f3a-0732-45ab-ac95-dfd4596e0d85",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'3^3 = 3 × 3 × 3 = 27.'"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_openai import ChatOpenAI\n",
"\n",
"reasoning = {\n",
" \"effort\": \"medium\", # 'low', 'medium', or 'high'\n",
" \"summary\": \"auto\", # 'detailed', 'auto', or None\n",
"}\n",
"\n",
"llm = ChatOpenAI(\n",
" model=\"o4-mini\",\n",
" use_responses_api=True,\n",
" model_kwargs={\"reasoning\": reasoning},\n",
")\n",
"response = llm.invoke(\"What is 3^3?\")\n",
"\n",
"# Output\n",
"response.text()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "d7dcc082-b7c8-41b7-a5e2-441b9679e41b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"**Calculating power of three**\n",
"\n",
"The user is asking for the result of 3 to the power of 3, which I know is 27. It's a straightforward question, so Ill keep my answer concise: 27. I could explain that this is the same as multiplying 3 by itself twice: 3 × 3 × 3 equals 27. However, since the user likely just needs the answer, Ill simply respond with 27.\n"
]
}
],
"source": [
"# Reasoning\n",
"reasoning = response.additional_kwargs[\"reasoning\"]\n",
"for block in reasoning[\"summary\"]:\n",
" print(block[\"text\"])"
]
},
{
"cell_type": "markdown",
"id": "57e27714",
@@ -1342,6 +1413,23 @@
"second_output_message = llm.invoke(history)"
]
},
{
"cell_type": "markdown",
"id": "90c18d18-b25c-4509-a639-bd652b92f518",
"metadata": {},
"source": [
"## Flex processing\n",
"\n",
"OpenAI offers a variety of [service tiers](https://platform.openai.com/docs/guides/flex-processing). The \"flex\" tier offers cheaper pricing for requests, with the trade-off that responses may take longer and resources might not always be available. This approach is best suited for non-critical tasks, including model testing, data enhancement, or jobs that can be run asynchronously.\n",
"\n",
"To use it, initialize the model with `service_tier=\"flex\"`:\n",
"```python\n",
"llm = ChatOpenAI(model=\"o4-mini\", service_tier=\"flex\")\n",
"```\n",
"\n",
"Note that this is a beta feature that is only available for a subset of models. See OpenAI [docs](https://platform.openai.com/docs/guides/flex-processing) for more detail."
]
},
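Since flex-tier capacity is not guaranteed, a hedged sketch of falling back to the standard tier when a flex request fails (the broad `except` is deliberate; the exact exception raised on flex exhaustion depends on the OpenAI client and may be a timeout or rate-limit error) might be:

```python
# Sketch: try the cheaper flex tier first, fall back to the standard tier.
from langchain_openai import ChatOpenAI

flex_llm = ChatOpenAI(model="o4-mini", service_tier="flex", max_retries=1)
standard_llm = ChatOpenAI(model="o4-mini")

prompt = "Give me a one-line summary of flex processing."
try:
    response = flex_llm.invoke(prompt)
except Exception:  # flex capacity errors may surface as timeouts or 429s
    response = standard_llm.invoke(prompt)
print(response.text())
```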
{
"cell_type": "markdown",
"id": "a796d728-971b-408b-88d5-440015bbb941",
@@ -1349,7 +1437,7 @@
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatOpenAI features and configurations head to the API reference: https://python.langchain.com/api_reference/openai/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html"
"For detailed documentation of all ChatOpenAI features and configurations head to the [API reference](https://python.langchain.com/api_reference/openai/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html)."
]
}
],


@@ -17,12 +17,66 @@
"source": [
"# ChatPerplexity\n",
"\n",
"This notebook covers how to get started with `Perplexity` chat models."
"\n",
"This page will help you get started with Perplexity [chat models](../../concepts/chat_models.mdx). For detailed documentation of all `ChatPerplexity` features and configurations head to the [API reference](https://python.langchain.com/api_reference/perplexity/chat_models/langchain_perplexity.chat_models.ChatPerplexity.html).\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/xai) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatPerplexity](https://python.langchain.com/api_reference/perplexity/chat_models/langchain_perplexity.chat_models.ChatPerplexity.html) | [langchain-perplexity](https://python.langchain.com/api_reference/perplexity/perplexity.html) | ❌ | beta | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-perplexity?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-perplexity?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](../../how_to/tool_calling.ipynb) | [Structured output](../../how_to/structured_output.ipynb) | JSON mode | [Image input](../../how_to/multimodal_inputs.ipynb) | Audio input | Video input | [Token-level streaming](../../how_to/chat_streaming.ipynb) | Native async | [Token usage](../../how_to/chat_token_usage_tracking.ipynb) | [Logprobs](../../how_to/logprobs.ipynb) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ |\n",
"\n",
"## Setup\n",
"\n",
"To access Perplexity models you'll need to create a Perplexity account, get an API key, and install the `langchain-perplexity` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Head to [this page](https://www.perplexity.ai/) to sign up for Perplexity and generate an API key. Once you've done this set the `PPLX_API_KEY` environment variable:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"id": "2243f329",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if \"PPLX_API_KEY\" not in os.environ:\n",
" os.environ[\"PPLX_API_KEY\"] = getpass.getpass(\"Enter your Perplexity API key: \")"
]
},
{
"cell_type": "markdown",
"id": "7dfe47c4",
"metadata": {},
"source": [
"To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "10a791fa",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d4a7c55d-b235-4ca4-a579-c90cc9570da9",
"metadata": {
"ExecuteTime": {
@@ -33,8 +87,8 @@
},
"outputs": [],
"source": [
"from langchain_community.chat_models import ChatPerplexity\n",
"from langchain_core.prompts import ChatPromptTemplate"
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_perplexity import ChatPerplexity"
]
},
{
@@ -62,29 +116,9 @@
"id": "97a8ce3a",
"metadata": {},
"source": [
"The code provided assumes that your PPLX_API_KEY is set in your environment variables. If you would like to manually specify your API key and also choose a different model, you can use the following code:\n",
"\n",
"```python\n",
"chat = ChatPerplexity(temperature=0, pplx_api_key=\"YOUR_API_KEY\", model=\"llama-3.1-sonar-small-128k-online\")\n",
"```\n",
"\n",
"You can check a list of available models [here](https://docs.perplexity.ai/docs/model-cards). For reproducibility, we can set the API key dynamically by taking it as an input in this notebook."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "d3e49d78",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from getpass import getpass\n",
"\n",
"PPLX_API_KEY = getpass()\n",
"os.environ[\"PPLX_API_KEY\"] = PPLX_API_KEY"
]
},
{
"cell_type": "code",
"execution_count": 3,
@@ -305,7 +339,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": ".venv",
"language": "python",
"name": "python3"
},
@@ -319,7 +353,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
"version": "3.11.11"
}
},
"nbformat": 4,


@@ -57,8 +57,8 @@
{
"metadata": {
"ExecuteTime": {
"end_time": "2024-11-08T19:44:51.390231Z",
"start_time": "2024-11-08T19:44:51.387945Z"
"end_time": "2025-04-21T18:23:30.746350Z",
"start_time": "2025-04-21T18:23:30.744744Z"
}
},
"cell_type": "code",
@@ -70,7 +70,7 @@
],
"id": "fa57fba89276da13",
"outputs": [],
"execution_count": 1
"execution_count": 2
},
{
"metadata": {},
@@ -82,12 +82,25 @@
"id": "87dc1742af7b053"
},
{
"metadata": {},
"metadata": {
"ExecuteTime": {
"end_time": "2025-04-21T18:23:33.359278Z",
"start_time": "2025-04-21T18:23:32.853207Z"
}
},
"cell_type": "code",
"source": "%pip install -qU langchain-predictionguard",
"id": "b816ae8553cba021",
"outputs": [],
"execution_count": null
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"execution_count": 3
},
{
"cell_type": "markdown",
@@ -103,13 +116,13 @@
"metadata": {
"id": "2xe8JEUwA7_y",
"ExecuteTime": {
"end_time": "2024-11-08T19:44:53.950653Z",
"start_time": "2024-11-08T19:44:53.488694Z"
"end_time": "2025-04-21T18:23:39.812675Z",
"start_time": "2025-04-21T18:23:39.666881Z"
}
},
"source": "from langchain_predictionguard import ChatPredictionGuard",
"outputs": [],
"execution_count": 2
"execution_count": 4
},
{
"cell_type": "code",
@@ -117,8 +130,8 @@
"metadata": {
"id": "Ua7Mw1N4HcER",
"ExecuteTime": {
"end_time": "2024-11-08T19:44:54.890695Z",
"start_time": "2024-11-08T19:44:54.502846Z"
"end_time": "2025-04-21T18:23:41.590296Z",
"start_time": "2025-04-21T18:23:41.253237Z"
}
},
"source": [
@@ -126,7 +139,7 @@
"chat = ChatPredictionGuard(model=\"Hermes-3-Llama-3.1-8B\")"
],
"outputs": [],
"execution_count": 3
"execution_count": 5
},
{
"metadata": {},
@@ -221,6 +234,132 @@
],
"execution_count": 6
},
{
"metadata": {},
"cell_type": "markdown",
"source": [
"## Tool Calling\n",
"\n",
"Prediction Guard has a tool calling API that lets you describe tools and their arguments, which enables the model return a JSON object with a tool to call and the inputs to that tool. Tool-calling is very useful for building tool-using chains and agents, and for getting structured outputs from models more generally.\n"
],
"id": "1227780d6e6728ba"
},
{
"metadata": {},
"cell_type": "markdown",
"source": [
"### ChatPredictionGuard.bind_tools()\n",
"\n",
"Using `ChatPredictionGuard.bind_tools()`, you can pass in Pydantic classes, dict schemas, and Langchain tools as tools to the model, which are then reformatted to allow for use by the model."
],
"id": "23446aa52e01d1ba"
},
{
"metadata": {},
"cell_type": "code",
"outputs": [],
"execution_count": null,
"source": [
"from pydantic import BaseModel, Field\n",
"\n",
"\n",
"class GetWeather(BaseModel):\n",
" \"\"\"Get the current weather in a given location\"\"\"\n",
"\n",
" location: str = Field(..., description=\"The city and state, e.g. San Francisco, CA\")\n",
"\n",
"\n",
"class GetPopulation(BaseModel):\n",
" \"\"\"Get the current population in a given location\"\"\"\n",
"\n",
" location: str = Field(..., description=\"The city and state, e.g. San Francisco, CA\")\n",
"\n",
"\n",
"llm_with_tools = chat.bind_tools(\n",
" [GetWeather, GetPopulation]\n",
" # strict = True # enforce tool args schema is respected\n",
")"
],
"id": "135efb0bfc5916c1"
},
{
"metadata": {
"ExecuteTime": {
"end_time": "2025-04-21T18:42:41.834079Z",
"start_time": "2025-04-21T18:42:40.289095Z"
}
},
"cell_type": "code",
"source": [
"ai_msg = llm_with_tools.invoke(\n",
" \"Which city is hotter today and which is bigger: LA or NY?\"\n",
")\n",
"ai_msg"
],
"id": "8136f19a8836cd58",
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'chatcmpl-tool-b1204a3c70b44cd8802579df48df0c8c', 'type': 'function', 'index': 0, 'function': {'name': 'GetWeather', 'arguments': '{\"location\": \"Los Angeles, CA\"}'}}, {'id': 'chatcmpl-tool-e299116c05bf4ce498cd6042928ae080', 'type': 'function', 'index': 0, 'function': {'name': 'GetWeather', 'arguments': '{\"location\": \"New York, NY\"}'}}, {'id': 'chatcmpl-tool-19502a60f30348669ffbac00ff503388', 'type': 'function', 'index': 0, 'function': {'name': 'GetPopulation', 'arguments': '{\"location\": \"Los Angeles, CA\"}'}}, {'id': 'chatcmpl-tool-4b8d56ef067f447795d9146a56e43510', 'type': 'function', 'index': 0, 'function': {'name': 'GetPopulation', 'arguments': '{\"location\": \"New York, NY\"}'}}]}, response_metadata={}, id='run-4630cfa9-4e95-42dd-8e4a-45db78180a10-0', tool_calls=[{'name': 'GetWeather', 'args': {'location': 'Los Angeles, CA'}, 'id': 'chatcmpl-tool-b1204a3c70b44cd8802579df48df0c8c', 'type': 'tool_call'}, {'name': 'GetWeather', 'args': {'location': 'New York, NY'}, 'id': 'chatcmpl-tool-e299116c05bf4ce498cd6042928ae080', 'type': 'tool_call'}, {'name': 'GetPopulation', 'args': {'location': 'Los Angeles, CA'}, 'id': 'chatcmpl-tool-19502a60f30348669ffbac00ff503388', 'type': 'tool_call'}, {'name': 'GetPopulation', 'args': {'location': 'New York, NY'}, 'id': 'chatcmpl-tool-4b8d56ef067f447795d9146a56e43510', 'type': 'tool_call'}])"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"execution_count": 7
},
{
"metadata": {},
"cell_type": "markdown",
"source": [
"### AIMessage.tool_calls\n",
"\n",
"Notice that the AIMessage has a tool_calls attribute. This contains in a standardized ToolCall format that is model-provider agnostic."
],
"id": "84f405c45a35abe5"
},
{
"metadata": {
"ExecuteTime": {
"end_time": "2025-04-21T18:43:00.429453Z",
"start_time": "2025-04-21T18:43:00.426399Z"
}
},
"cell_type": "code",
"source": "ai_msg.tool_calls",
"id": "bdcee85475019719",
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'GetWeather',\n",
" 'args': {'location': 'Los Angeles, CA'},\n",
" 'id': 'chatcmpl-tool-b1204a3c70b44cd8802579df48df0c8c',\n",
" 'type': 'tool_call'},\n",
" {'name': 'GetWeather',\n",
" 'args': {'location': 'New York, NY'},\n",
" 'id': 'chatcmpl-tool-e299116c05bf4ce498cd6042928ae080',\n",
" 'type': 'tool_call'},\n",
" {'name': 'GetPopulation',\n",
" 'args': {'location': 'Los Angeles, CA'},\n",
" 'id': 'chatcmpl-tool-19502a60f30348669ffbac00ff503388',\n",
" 'type': 'tool_call'},\n",
" {'name': 'GetPopulation',\n",
" 'args': {'location': 'New York, NY'},\n",
" 'id': 'chatcmpl-tool-4b8d56ef067f447795d9146a56e43510',\n",
" 'type': 'tool_call'}]"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"execution_count": 8
},
{
"cell_type": "markdown",
"id": "ff1b51a8",


@@ -0,0 +1,284 @@
{
"cells": [
{
"cell_type": "raw",
"id": "afaf8039",
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"sidebar_label: Qwen QwQ\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# ChatQwQ\n",
"\n",
"This will help you getting started with QwQ [chat models](../../concepts/chat_models.mdx). For detailed documentation of all ChatQwQ features and configurations head to the [API reference](https://pypi.org/project/langchain-qwq/).\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"\n",
"| Class | Package | Local | Serializable | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: |\n",
"| [ChatQwQ](https://pypi.org/project/langchain-qwq/) | [langchain-qwq](https://pypi.org/project/langchain-qwq/) | ❌ | beta | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-qwq?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-qwq?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](../../how_to/tool_calling.ipynb) | [Structured output](../../how_to/structured_output.ipynb) | JSON mode | [Image input](../../how_to/multimodal_inputs.ipynb) | Audio input | Video input | [Token-level streaming](../../how_to/chat_streaming.ipynb) | Native async | [Token usage](../../how_to/chat_token_usage_tracking.ipynb) | [Logprobs](../../how_to/logprobs.ipynb) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ✅ |❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | \n",
"\n",
"## Setup\n",
"\n",
"To access QwQ models you'll need to create an Alibaba Cloud account, get an API key, and install the `langchain-qwq` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Head to [Alibaba's API Key page](https://account.alibabacloud.com/login/login.htm?oauth_callback=https%3A%2F%2Fbailian.console.alibabacloud.com%2F%3FapiKey%3D1&lang=en#/api-key) to sign up to Alibaba Cloud and generate an API key. Once you've done this set the `DASHSCOPE_API_KEY` environment variable:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if not os.getenv(\"DASHSCOPE_API_KEY\"):\n",
" os.environ[\"DASHSCOPE_API_KEY\"] = getpass.getpass(\"Enter your Dashscope API key: \")"
]
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain QwQ integration lives in the `langchain-qwq` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-qwq"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
"source": [
"from langchain_qwq import ChatQwQ\n",
"\n",
"llm = ChatQwQ(\n",
" model=\"qwq-plus\",\n",
" max_tokens=3_000,\n",
" timeout=None,\n",
" max_retries=2,\n",
" # other params...\n",
")"
]
},
{
"cell_type": "markdown",
"id": "2b4f3e15",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"J'aime la programmation.\", additional_kwargs={'reasoning_content': 'Okay, the user wants me to translate \"I love programming.\" into French. Let\\'s start by breaking down the sentence. The subject is \"I\", which in French is \"Je\". The verb is \"love\", which in this context is present tense, so \"aime\". The object is \"programming\". Now, \"programming\" in French can be \"la programmation\". \\n\\nWait, should it be \"programmation\" or \"programmation\"? Let me confirm the spelling. Yes, \"programmation\" is correct. Now, putting it all together: \"Je aime la programmation.\" Hmm, but in French, there\\'s a tendency to contract \"je\" and \"aime\". Wait, actually, \"je\" followed by a vowel sound usually takes \"j\\'\". So it should be \"J\\'aime la programmation.\" \\n\\nLet me double-check. \"J\\'aime\" is the correct contraction for \"I love\". The definite article \"la\" is needed because \"programmation\" is a feminine noun. Yes, \"programmation\" is a feminine noun, so \"la\" is correct. \\n\\nIs there any other way to say it? Maybe \"J\\'adore la programmation\" for \"I love\" in a stronger sense, but the user didn\\'t specify the intensity. Since the original is straightforward, \"J\\'aime la programmation.\" is the direct translation. \\n\\nI think that\\'s it. No mistakes there. So the final translation should be \"J\\'aime la programmation.\"'}, response_metadata={'model_name': 'qwq-plus'}, id='run-5045cd6a-edbd-4b2f-bf24-b7bdf3777fb9-0', usage_metadata={'input_tokens': 32, 'output_tokens': 326, 'total_tokens': 358, 'input_token_details': {}, 'output_token_details': {}})"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French.\"\n",
" \"Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "markdown",
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](../../how_to/sequence.ipynb) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Ich liebe das Programmieren.', additional_kwargs={'reasoning_content': 'Okay, the user wants me to translate \"I love programming.\" into German. Let me think. The verb \"love\" is \"lieben\" or \"mögen\" in German, but \"lieben\" is more like love, while \"mögen\" is prefer. Since it\\'s about programming, which is a strong affection, \"lieben\" is better. The subject is \"I\", which is \"ich\". Then \"programming\" is \"Programmierung\" or \"Coding\". But \"Programmierung\" is more formal. Alternatively, sometimes people say \"ich liebe es zu programmieren\" which is \"I love to program\". Hmm, maybe the direct translation would be \"Ich liebe die Programmierung.\" But maybe the more natural way is \"Ich liebe es zu programmieren.\" Let me check. Both are correct, but the second one might sound more natural in everyday speech. The user might prefer the concise version. Alternatively, maybe \"Ich liebe die Programmierung.\" is better. Wait, the original is \"programming\" as a noun. So using the noun form would be appropriate. So \"Ich liebe die Programmierung.\" But sometimes people also use \"Coding\" in German, like \"Ich liebe das Coding.\" But that\\'s more anglicism. Probably better to stick with \"Programmierung\". Alternatively, \"Programmieren\" as a noun. Oh right! \"Programmieren\" can be a noun when used in the accusative case. So \"Ich liebe das Programmieren.\" That\\'s correct and natural. Yes, that\\'s the best translation. So the answer is \"Ich liebe das Programmieren.\"'}, response_metadata={'model_name': 'qwq-plus'}, id='run-2c418451-51d8-4319-8269-2ce129363a1a-0', usage_metadata={'input_tokens': 28, 'output_tokens': 341, 'total_tokens': 369, 'input_token_details': {}, 'output_token_details': {}})"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates\"\n",
" \"{input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "8d1b3ef3",
"metadata": {},
"source": [
"## Tool Calling\n",
"ChatQwQ supports tool calling API that lets you describe tools and their arguments, and have the model return a JSON object with a tool to invoke and the inputs to that tool."
]
},
{
"cell_type": "markdown",
"id": "6db1a355",
"metadata": {},
"source": [
"### Use with `bind_tools`"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "15fb6a6d",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content='' additional_kwargs={'reasoning_content': 'Okay, the user is asking \"What\\'s 5 times forty two\". Let me break this down. First, I need to identify the numbers involved. The first number is 5, which is straightforward. The second number is forty two, which is 42 in digits. The operation they want is multiplication.\\n\\nLooking at the tools provided, there\\'s a function called multiply that takes two integers. So I should use that. The parameters are first_int and second_int. \\n\\nI need to convert \"forty two\" to 42. Since the function requires integers, both numbers should be in integer form. So 5 and 42. \\n\\nNow, I\\'ll structure the tool call. The function name is multiply, and the arguments should be first_int: 5 and second_int: 42. I\\'ll make sure the JSON is correctly formatted without any syntax errors. Let me double-check the parameters to ensure they\\'re required and of the right type. Yep, both are required and integers. \\n\\nNo examples were provided, but the function\\'s purpose is clear. So the correct tool call should be to multiply those two numbers. I think that\\'s all. No other functions are needed here.'} response_metadata={'model_name': 'qwq-plus'} id='run-638895aa-fdde-4567-bcfa-7d8e5d4f24af-0' tool_calls=[{'name': 'multiply', 'args': {'first_int': 5, 'second_int': 42}, 'id': 'call_d088275851c140529ed2ad', 'type': 'tool_call'}] usage_metadata={'input_tokens': 176, 'output_tokens': 277, 'total_tokens': 453, 'input_token_details': {}, 'output_token_details': {}}\n"
]
}
],
"source": [
"from langchain_core.tools import tool\n",
"from langchain_qwq import ChatQwQ\n",
"\n",
"\n",
"@tool\n",
"def multiply(first_int: int, second_int: int) -> int:\n",
" \"\"\"Multiply two integers together.\"\"\"\n",
" return first_int * second_int\n",
"\n",
"\n",
"llm = ChatQwQ()\n",
"\n",
"llm_with_tools = llm.bind_tools([multiply])\n",
"\n",
"msg = llm_with_tools.invoke(\"What's 5 times forty two\")\n",
"\n",
"print(msg)"
]
},
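To close the loop, here is a sketch of the standard LangChain tool-execution round trip (a generic pattern, not QwQ-specific), assuming the `msg`, `multiply`, and `llm_with_tools` objects from the cell above:

```python
# Sketch: run the requested tool and feed the result back to the model.
from langchain_core.messages import HumanMessage, ToolMessage

call = msg.tool_calls[0]
tool_result = multiply.invoke(call["args"])  # 5 * 42 = 210

answer = llm_with_tools.invoke(
    [
        HumanMessage(content="What's 5 times forty two"),
        msg,  # the AIMessage carrying the tool call
        ToolMessage(content=str(tool_result), tool_call_id=call["id"]),
    ]
)
print(answer.content)
```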
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatQwQ features and configurations head to the [API reference](https://pypi.org/project/langchain-qwq/)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.13.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -0,0 +1,276 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# RunPod Chat Model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Get started with RunPod chat models.\n",
"\n",
"## Overview\n",
"\n",
"This guide covers how to use the LangChain `ChatRunPod` class to interact with chat models hosted on [RunPod Serverless](https://www.runpod.io/serverless-gpu)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"1. **Install the package:**\n",
" ```bash\n",
" pip install -qU langchain-runpod\n",
" ```\n",
"2. **Deploy a Chat Model Endpoint:** Follow the setup steps in the [RunPod Provider Guide](/docs/integrations/providers/runpod#setup) to deploy a compatible chat model endpoint on RunPod Serverless and get its Endpoint ID.\n",
"3. **Set Environment Variables:** Make sure `RUNPOD_API_KEY` and `RUNPOD_ENDPOINT_ID` (or a specific `RUNPOD_CHAT_ENDPOINT_ID`) are set."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"# Make sure environment variables are set (or pass them directly to ChatRunPod)\n",
"if \"RUNPOD_API_KEY\" not in os.environ:\n",
" os.environ[\"RUNPOD_API_KEY\"] = getpass.getpass(\"Enter your RunPod API Key: \")\n",
"\n",
"if \"RUNPOD_ENDPOINT_ID\" not in os.environ:\n",
" os.environ[\"RUNPOD_ENDPOINT_ID\"] = input(\n",
" \"Enter your RunPod Endpoint ID (used if RUNPOD_CHAT_ENDPOINT_ID is not set): \"\n",
" )\n",
"\n",
"# Optionally use a different endpoint ID specifically for chat models\n",
"# if \"RUNPOD_CHAT_ENDPOINT_ID\" not in os.environ:\n",
"# os.environ[\"RUNPOD_CHAT_ENDPOINT_ID\"] = input(\"Enter your RunPod Chat Endpoint ID (Optional): \")\n",
"\n",
"chat_endpoint_id = os.environ.get(\n",
" \"RUNPOD_CHAT_ENDPOINT_ID\", os.environ.get(\"RUNPOD_ENDPOINT_ID\")\n",
")\n",
"if not chat_endpoint_id:\n",
" raise ValueError(\n",
" \"No RunPod Endpoint ID found. Please set RUNPOD_ENDPOINT_ID or RUNPOD_CHAT_ENDPOINT_ID.\"\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Initialize the `ChatRunPod` class. You can pass model-specific parameters via `model_kwargs` and configure polling behavior."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"from langchain_runpod import ChatRunPod\n",
"\n",
"chat = ChatRunPod(\n",
" runpod_endpoint_id=chat_endpoint_id, # Specify the correct endpoint ID\n",
" model_kwargs={\n",
" \"max_new_tokens\": 512,\n",
" \"temperature\": 0.7,\n",
" \"top_p\": 0.9,\n",
" # Add other parameters supported by your endpoint handler\n",
" },\n",
" # Optional: Adjust polling\n",
" # poll_interval=0.2,\n",
" # max_polling_attempts=150\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Invocation\n",
"\n",
"Use the standard LangChain `.invoke()` and `.ainvoke()` methods to call the model. Streaming is also supported via `.stream()` and `.astream()` (simulated by polling the RunPod `/stream` endpoint)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"from langchain_core.messages import HumanMessage, SystemMessage\n",
"\n",
"messages = [\n",
" SystemMessage(content=\"You are a helpful AI assistant.\"),\n",
" HumanMessage(content=\"What is the RunPod Serverless API flow?\"),\n",
"]\n",
"\n",
"# Invoke (Sync)\n",
"try:\n",
" response = chat.invoke(messages)\n",
" print(\"--- Sync Invoke Response ---\")\n",
" print(response.content)\n",
"except Exception as e:\n",
" print(\n",
" f\"Error invoking Chat Model: {e}. Ensure endpoint ID/API key are correct and endpoint is active/compatible.\"\n",
" )\n",
"\n",
"# Stream (Sync, simulated via polling /stream)\n",
"print(\"\\n--- Sync Stream Response ---\")\n",
"try:\n",
" for chunk in chat.stream(messages):\n",
" print(chunk.content, end=\"\", flush=True)\n",
" print() # Newline\n",
"except Exception as e:\n",
" print(\n",
" f\"\\nError streaming Chat Model: {e}. Ensure endpoint handler supports streaming output format.\"\n",
" )\n",
"\n",
"### Async Usage\n",
"\n",
"# AInvoke (Async)\n",
"try:\n",
" async_response = await chat.ainvoke(messages)\n",
" print(\"--- Async Invoke Response ---\")\n",
" print(async_response.content)\n",
"except Exception as e:\n",
" print(f\"Error invoking Chat Model asynchronously: {e}.\")\n",
"\n",
"# AStream (Async)\n",
"print(\"\\n--- Async Stream Response ---\")\n",
"try:\n",
" async for chunk in chat.astream(messages):\n",
" print(chunk.content, end=\"\", flush=True)\n",
" print() # Newline\n",
"except Exception as e:\n",
" print(\n",
" f\"\\nError streaming Chat Model asynchronously: {e}. Ensure endpoint handler supports streaming output format.\\n\"\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"The chat model integrates seamlessly with LangChain Expression Language (LCEL) chains."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", \"You are a helpful assistant.\"),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"parser = StrOutputParser()\n",
"\n",
"chain = prompt | chat | parser\n",
"\n",
"try:\n",
" chain_response = chain.invoke(\n",
" {\"input\": \"Explain the concept of serverless computing in simple terms.\"}\n",
" )\n",
" print(\"--- Chain Response ---\")\n",
" print(chain_response)\n",
"except Exception as e:\n",
" print(f\"Error running chain: {e}\")\n",
"\n",
"\n",
"# Async chain\n",
"try:\n",
" async_chain_response = await chain.ainvoke(\n",
" {\"input\": \"What are the benefits of using RunPod for AI/ML workloads?\"}\n",
" )\n",
" print(\"--- Async Chain Response ---\")\n",
" print(async_chain_response)\n",
"except Exception as e:\n",
" print(f\"Error running async chain: {e}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Model Features (Endpoint Dependent)\n",
"\n",
"The availability of advanced features depends **heavily** on the specific implementation of your RunPod endpoint handler. The `ChatRunPod` integration provides the basic framework, but the handler must support the underlying functionality.\n",
"\n",
"| Feature | Integration Support | Endpoint Dependent? | Notes |\n",
"| :--------------------------------------------------------- | :-----------------: | :-----------------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n",
"| [Tool calling](/docs/how_to/tool_calling) | ❌ | ✅ | Requires handler to process tool definitions and return tool calls (e.g., OpenAI format). Integration needs parsing logic. |\n",
"| [Structured output](/docs/how_to/structured_output) | ❌ | ✅ | Requires handler support for forcing structured output (JSON mode, function calling). Integration needs parsing logic. |\n",
"| JSON mode | ❌ | ✅ | Requires handler to accept a `json_mode` parameter (or similar) and guarantee JSON output. |\n",
"| [Image input](/docs/how_to/multimodal_inputs) | ❌ | ✅ | Requires multimodal handler accepting image data (e.g., base64). Integration does not support multimodal messages. |\n",
"| Audio input | ❌ | ✅ | Requires handler accepting audio data. Integration does not support audio messages. |\n",
"| Video input | ❌ | ✅ | Requires handler accepting video data. Integration does not support video messages. |\n",
"| [Token-level streaming](/docs/how_to/chat_streaming) | ✅ (Simulated) | ✅ | Polls `/stream`. Requires handler to populate `stream` list in status response with token chunks (e.g., `[{\"output\": \"token\"}]`). True low-latency streaming not built-in. |\n",
"| Native async | ✅ | ✅ | Core `ainvoke`/`astream` implemented. Relies on endpoint handler performance. |\n",
"| [Token usage](/docs/how_to/chat_token_usage_tracking) | ❌ | ✅ | Requires handler to return `prompt_tokens`, `completion_tokens` in the final response. Integration currently does not parse this. |\n",
"| [Logprobs](/docs/how_to/logprobs) | ❌ | ✅ | Requires handler to return log probabilities. Integration currently does not parse this. |\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Key Takeaway:** Standard chat invocation and simulated streaming work if the endpoint follows basic RunPod API conventions. Advanced features require specific handler implementations and potentially extending or customizing this integration package."
]
},
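For concreteness, here is a hypothetical handler satisfying the streaming convention from the table above; `fake_generate` is a stand-in for a real model, and the chunk shape `{"output": token}` is the format `ChatRunPod` polls for:

```python
# Hypothetical RunPod serverless handler sketch. Each yielded chunk is
# appended to the job's "stream" list, which ChatRunPod polls via /stream.
import runpod  # available inside RunPod serverless workers


def fake_generate(prompt):
    # Stand-in for a real model's token generator.
    yield from ("Serverless ", "computing ", "runs ", "on ", "demand.")


def handler(job):
    prompt = job["input"].get("prompt", "")
    for token in fake_generate(prompt):
        yield {"output": token}


runpod.serverless.start({"handler": handler, "return_aggregate_stream": True})
```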
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of the `ChatRunPod` class, parameters, and methods, refer to the source code or the generated API reference (if available).\n",
"\n",
"Link to source code: [https://github.com/runpod/langchain-runpod/blob/main/langchain_runpod/chat_models.py](https://github.com/runpod/langchain-runpod/blob/main/langchain_runpod/chat_models.py)"
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -0,0 +1,362 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "62d5a1ea",
"metadata": {},
"source": [
"# ChatSeekrFlow\n",
"\n",
"> [Seekr](https://www.seekr.com/) provides AI-powered solutions for structured, explainable, and transparent AI interactions.\n",
"\n",
"This notebook provides a quick overview for getting started with Seekr [chat models](/docs/concepts/chat_models). For detailed documentation of all `ChatSeekrFlow` features and configurations, head to the [API reference](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.seekrflow.ChatSeekrFlow.html).\n",
"\n",
"## Overview\n",
"\n",
"`ChatSeekrFlow` class wraps a chat model endpoint hosted on SeekrFlow, enabling seamless integration with LangChain applications.\n",
"\n",
"### Integration Details\n",
"\n",
"| Class | Package | Local | Serializable | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: |\n",
"| [ChatSeekrFlow](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.seekrflow.ChatSeekrFlow.html) | [seekrai](https://python.langchain.com/docs/integrations/providers/seekr/) | ❌ | beta | ![PyPI - Downloads](https://img.shields.io/pypi/dm/seekrai?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/seekrai?style=flat-square&label=%20) |\n",
"\n",
"### Model Features\n",
"\n",
"| [Tool calling](/docs/how_to/tool_calling/) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ |\n",
"\n",
"### Supported Methods\n",
"`ChatSeekrFlow` supports all methods of `ChatModel`, **except async APIs**.\n",
"\n",
"### Endpoint Requirements\n",
"\n",
"The serving endpoint `ChatSeekrFlow` wraps **must** have OpenAI-compatible chat input/output format. It can be used for:\n",
"1. **Fine-tuned Seekr models**\n",
"2. **Custom SeekrFlow models**\n",
"3. **RAG-enabled models using Seekr's retrieval system**\n",
"\n",
"For async usage, please refer to `AsyncChatSeekrFlow` (coming soon).\n"
]
},
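As a reference for the "OpenAI-compatible" requirement above, the minimal request/response shape the wrapped endpoint must speak (field names follow the OpenAI chat completions schema, consistent with the mock client later in this notebook) is roughly:

```python
# Minimal OpenAI-compatible chat payloads the endpoint must accept and return.
request = {
    "model": "meta-llama/Meta-Llama-3-8B-Instruct",
    "messages": [{"role": "user", "content": "Hello, Seekr!"}],
}
response = {
    "choices": [
        {"message": {"role": "assistant", "content": "Hello! How can I help?"}}
    ]
}
```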
{
"cell_type": "markdown",
"id": "93fea471",
"metadata": {},
"source": [
"# Getting Started with ChatSeekrFlow in LangChain\n",
"\n",
"This notebook covers how to use SeekrFlow as a chat model in LangChain."
]
},
{
"cell_type": "markdown",
"id": "2f320c17",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"Ensure you have the necessary dependencies installed:\n",
"\n",
"```bash\n",
"pip install seekrai langchain langchain-community\n",
"```\n",
"\n",
"You must also have an API key from Seekr to authenticate requests.\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "911ca53c",
"metadata": {},
"outputs": [],
"source": [
"# Standard library\n",
"import getpass\n",
"import os\n",
"\n",
"# Third-party\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain.schema import HumanMessage\n",
"from langchain_core.runnables import RunnableSequence\n",
"\n",
"# OSS SeekrFlow integration\n",
"from langchain_seekrflow import ChatSeekrFlow\n",
"from seekrai import SeekrFlow"
]
},
{
"cell_type": "markdown",
"id": "150461cb",
"metadata": {},
"source": [
"## API Key Setup\n",
"\n",
"You'll need to set your API key as an environment variable to authenticate requests.\n",
"\n",
"Run the below cell.\n",
"\n",
"Or manually assign it before running queries:\n",
"\n",
"```python\n",
"SEEKR_API_KEY = \"your-api-key-here\"\n",
"```\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "38afcd6e",
"metadata": {},
"outputs": [],
"source": [
"os.environ[\"SEEKR_API_KEY\"] = getpass.getpass(\"Enter your Seekr API key:\")"
]
},
{
"cell_type": "markdown",
"id": "82d83c0e",
"metadata": {},
"source": [
"## Instantiation"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "71b14751",
"metadata": {},
"outputs": [],
"source": [
"os.environ[\"SEEKR_API_KEY\"]\n",
"seekr_client = SeekrFlow(api_key=SEEKR_API_KEY)\n",
"\n",
"llm = ChatSeekrFlow(\n",
" client=seekr_client, model_name=\"meta-llama/Meta-Llama-3-8B-Instruct\"\n",
")"
]
},
{
"cell_type": "markdown",
"id": "1046e86c",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "f61a60f6",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Hello there! I'm Seekr, nice to meet you! What brings you here today? Do you have a question, or are you looking for some help with something? I'm all ears (or rather, all text)!\n"
]
}
],
"source": [
"response = llm.invoke([HumanMessage(content=\"Hello, Seekr!\")])\n",
"print(response.content)"
]
},
{
"cell_type": "markdown",
"id": "853b0349",
"metadata": {},
"source": [
"## Chaining"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "35fca3ec",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content='The translation of \"Good morning\" in French is:\\n\\n\"Bonne journée\"' additional_kwargs={} response_metadata={}\n"
]
}
],
"source": [
"prompt = ChatPromptTemplate.from_template(\"Translate to French: {text}\")\n",
"\n",
"chain: RunnableSequence = prompt | llm\n",
"result = chain.invoke({\"text\": \"Good morning\"})\n",
"print(result)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "a7b28b8d",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"🔹 Testing Sync `stream()` (Streaming)...\n",
"Here is a haiku:\n",
"\n",
"Golden sunset fades\n",
"Ripples on the quiet lake\n",
"Peaceful evening sky"
]
}
],
"source": [
"def test_stream():\n",
" \"\"\"Test synchronous invocation in streaming mode.\"\"\"\n",
" print(\"\\n🔹 Testing Sync `stream()` (Streaming)...\")\n",
"\n",
" for chunk in llm.stream([HumanMessage(content=\"Write me a haiku.\")]):\n",
" print(chunk.content, end=\"\", flush=True)\n",
"\n",
"\n",
"# ✅ Ensure streaming is enabled\n",
"llm = ChatSeekrFlow(\n",
" client=seekr_client,\n",
" model_name=\"meta-llama/Meta-Llama-3-8B-Instruct\",\n",
" streaming=True, # ✅ Enable streaming\n",
")\n",
"\n",
"# ✅ Run sync streaming test\n",
"test_stream()"
]
},
{
"cell_type": "markdown",
"id": "b3847b34",
"metadata": {},
"source": [
"## Error Handling & Debugging"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "6bc38b48",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Running test: Missing Client\n",
"✅ Expected Error: SeekrFlow client cannot be None.\n",
"Running test: Missing Model Name\n",
"✅ Expected Error: A valid model name must be provided.\n"
]
}
],
"source": [
"# Define a minimal mock SeekrFlow client\n",
"class MockSeekrClient:\n",
" \"\"\"Mock SeekrFlow API client that mimics the real API structure.\"\"\"\n",
"\n",
" class MockChat:\n",
" \"\"\"Mock Chat object with a completions method.\"\"\"\n",
"\n",
" class MockCompletions:\n",
" \"\"\"Mock Completions object with a create method.\"\"\"\n",
"\n",
" def create(self, *args, **kwargs):\n",
" return {\n",
" \"choices\": [{\"message\": {\"content\": \"Mock response\"}}]\n",
" } # Mimic API response\n",
"\n",
" completions = MockCompletions()\n",
"\n",
" chat = MockChat()\n",
"\n",
"\n",
"def test_initialization_errors():\n",
" \"\"\"Test that invalid ChatSeekrFlow initializations raise expected errors.\"\"\"\n",
"\n",
" test_cases = [\n",
" {\n",
" \"name\": \"Missing Client\",\n",
" \"args\": {\"client\": None, \"model_name\": \"seekrflow-model\"},\n",
" \"expected_error\": \"SeekrFlow client cannot be None.\",\n",
" },\n",
" {\n",
" \"name\": \"Missing Model Name\",\n",
" \"args\": {\"client\": MockSeekrClient(), \"model_name\": \"\"},\n",
" \"expected_error\": \"A valid model name must be provided.\",\n",
" },\n",
" ]\n",
"\n",
" for test in test_cases:\n",
" try:\n",
" print(f\"Running test: {test['name']}\")\n",
" faulty_llm = ChatSeekrFlow(**test[\"args\"])\n",
"\n",
" # If no error is raised, fail the test\n",
" print(f\"❌ Test '{test['name']}' failed: No error was raised!\")\n",
" except Exception as e:\n",
" error_msg = str(e)\n",
" assert test[\"expected_error\"] in error_msg, f\"Unexpected error: {error_msg}\"\n",
" print(f\"✅ Expected Error: {error_msg}\")\n",
"\n",
"\n",
"# Run test\n",
"test_initialization_errors()"
]
},
{
"cell_type": "markdown",
"id": "d1c9ddf3",
"metadata": {},
"source": [
"## API reference"
]
},
{
"cell_type": "markdown",
"id": "411a8bea",
"metadata": {},
"source": [
"- `ChatSeekrFlow` class: [`langchain_seekrflow.ChatSeekrFlow`](https://github.com/benfaircloth/langchain-seekrflow/blob/main/langchain_seekrflow/seekrflow.py)\n",
"- PyPI package: [`langchain-seekrflow`](https://pypi.org/project/langchain-seekrflow/)\n"
]
},
{
"cell_type": "markdown",
"id": "3ef00a51",
"metadata": {},
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -34,33 +34,46 @@
"id": "juAmbgoWD17u"
},
"source": [
"The AstraDB Document Loader returns a list of Langchain Documents from an AstraDB database.\n",
"The Astra DB Document Loader returns a list of Langchain `Document` objects read from an Astra DB collection.\n",
"\n",
"The Loader takes the following parameters:\n",
"The loader takes the following parameters:\n",
"\n",
"* `api_endpoint`: AstraDB API endpoint. Looks like `https://01234567-89ab-cdef-0123-456789abcdef-us-east1.apps.astra.datastax.com`\n",
"* `token`: AstraDB token. Looks like `AstraCS:6gBhNmsk135....`\n",
"* `api_endpoint`: Astra DB API endpoint. Looks like `https://01234567-89ab-cdef-0123-456789abcdef-us-east1.apps.astra.datastax.com`\n",
"* `token`: Astra DB token. Looks like `AstraCS:aBcD0123...`\n",
"* `collection_name` : AstraDB collection name\n",
"* `namespace`: (Optional) AstraDB namespace\n",
"* `namespace`: (Optional) AstraDB namespace (called _keyspace_ in Astra DB)\n",
"* `filter_criteria`: (Optional) Filter used in the find query\n",
"* `projection`: (Optional) Projection used in the find query\n",
"* `find_options`: (Optional) Options used in the find query\n",
"* `nb_prefetched`: (Optional) Number of documents pre-fetched by the loader\n",
"* `limit`: (Optional) Maximum number of documents to retrieve\n",
"* `extraction_function`: (Optional) A function to convert the AstraDB document to the LangChain `page_content` string. Defaults to `json.dumps`\n",
"\n",
"The following metadata is set to the LangChain Documents metadata output:\n",
"The loader sets the following metadata for the documents it reads:\n",
"\n",
"```python\n",
"{\n",
" metadata : {\n",
" \"namespace\": \"...\", \n",
" \"api_endpoint\": \"...\", \n",
" \"collection\": \"...\"\n",
" }\n",
"metadata={\n",
" \"namespace\": \"...\", \n",
" \"api_endpoint\": \"...\", \n",
" \"collection\": \"...\"\n",
"}\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"!pip install \"langchain-astradb>=0.6,<0.7\""
]
},
{
"attachments": {},
"cell_type": "markdown",
@@ -71,24 +84,43 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import AstraDBLoader"
"from langchain_astradb import AstraDBLoader"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"[**API Reference:** `AstraDBLoader`](https://python.langchain.com/api_reference/astradb/document_loaders/langchain_astradb.document_loaders.AstraDBLoader.html#langchain_astradb.document_loaders.AstraDBLoader)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 3,
"metadata": {
"ExecuteTime": {
"end_time": "2024-01-08T12:41:22.643335Z",
"start_time": "2024-01-08T12:40:57.759116Z"
},
"collapsed": false
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"outputs": [
{
"name": "stdin",
"output_type": "stream",
"text": [
"ASTRA_DB_API_ENDPOINT = https://01234567-89ab-cdef-0123-456789abcdef-us-east1.apps.astra.datastax.com\n",
"ASTRA_DB_APPLICATION_TOKEN = ········\n"
]
}
],
"source": [
"from getpass import getpass\n",
"\n",
@@ -98,7 +130,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 4,
"metadata": {
"ExecuteTime": {
"end_time": "2024-01-08T12:42:25.395162Z",
@ -112,19 +144,22 @@
" token=ASTRA_DB_APPLICATION_TOKEN,\n",
" collection_name=\"movie_reviews\",\n",
" projection={\"title\": 1, \"reviewtext\": 1},\n",
" find_options={\"limit\": 10},\n",
" limit=10,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 5,
"metadata": {
"ExecuteTime": {
"end_time": "2024-01-08T12:42:30.236489Z",
"start_time": "2024-01-08T12:42:29.612133Z"
},
"collapsed": false
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
@ -133,7 +168,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 6,
"metadata": {
"ExecuteTime": {
"end_time": "2024-01-08T12:42:31.369394Z",
@ -144,10 +179,10 @@
{
"data": {
"text/plain": [
"Document(page_content='{\"_id\": \"659bdffa16cbc4586b11a423\", \"title\": \"Dangerous Men\", \"reviewtext\": \"\\\\\"Dangerous Men,\\\\\" the picture\\'s production notes inform, took 26 years to reach the big screen. After having seen it, I wonder: What was the rush?\"}', metadata={'namespace': 'default_keyspace', 'api_endpoint': 'https://01234567-89ab-cdef-0123-456789abcdef-us-east1.apps.astra.datastax.com', 'collection': 'movie_reviews'})"
"Document(metadata={'namespace': 'default_keyspace', 'api_endpoint': 'https://01234567-89ab-cdef-0123-456789abcdef-us-east1.apps.astra.datastax.com', 'collection': 'movie_reviews'}, page_content='{\"_id\": \"659bdffa16cbc4586b11a423\", \"title\": \"Dangerous Men\", \"reviewtext\": \"\\\\\"Dangerous Men,\\\\\" the picture\\'s production notes inform, took 26 years to reach the big screen. After having seen it, I wonder: What was the rush?\"}')"
]
},
"execution_count": 8,
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
@ -179,7 +214,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.18"
"version": "3.12.8"
}
},
"nbformat": 4,

View File

@ -49,7 +49,14 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import BrowserbaseLoader"
"import os\n",
"\n",
"from langchain_community.document_loaders import BrowserbaseLoader\n",
"\n",
"load_dotenv()\n",
"\n",
"BROWSERBASE_API_KEY = os.getenv(\"BROWSERBASE_API_KEY\")\n",
"BROWSERBASE_PROJECT_ID = os.getenv(\"BROWSERBASE_PROJECT_ID\")"
]
},
{
@ -59,6 +66,8 @@
"outputs": [],
"source": [
"loader = BrowserbaseLoader(\n",
" api_key=BROWSERBASE_API_KEY,\n",
" project_id=BROWSERBASE_PROJECT_ID,\n",
" urls=[\n",
" \"https://example.com\",\n",
" ],\n",
@ -78,52 +87,11 @@
"\n",
"- `urls` Required. A list of URLs to fetch.\n",
"- `text_content` Retrieve only text content. Default is `False`.\n",
"- `api_key` Optional. Browserbase API key. Default is `BROWSERBASE_API_KEY` env variable.\n",
"- `project_id` Optional. Browserbase Project ID. Default is `BROWSERBASE_PROJECT_ID` env variable.\n",
"- `api_key` Browserbase API key. Default is `BROWSERBASE_API_KEY` env variable.\n",
"- `project_id` Browserbase Project ID. Default is `BROWSERBASE_PROJECT_ID` env variable.\n",
"- `session_id` Optional. Provide an existing Session ID.\n",
"- `proxy` Optional. Enable/Disable Proxies."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Loading images\n",
"\n",
"You can also load screenshots of webpages (as bytes) for multi-modal models.\n",
"\n",
"Full example using GPT-4V:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from browserbase import Browserbase\n",
"from browserbase.helpers.gpt4 import GPT4VImage, GPT4VImageDetail\n",
"from langchain_core.messages import HumanMessage\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"chat = ChatOpenAI(model=\"gpt-4-vision-preview\", max_tokens=256)\n",
"browser = Browserbase()\n",
"\n",
"screenshot = browser.screenshot(\"https://browserbase.com\")\n",
"\n",
"result = chat.invoke(\n",
" [\n",
" HumanMessage(\n",
" content=[\n",
" {\"type\": \"text\", \"text\": \"What color is the logo?\"},\n",
" GPT4VImage(screenshot, GPT4VImageDetail.auto),\n",
" ]\n",
" )\n",
" ]\n",
")\n",
"\n",
"print(result.content)"
]
}
],
"metadata": {

View File

@ -36,10 +36,7 @@
"pip install oracledb"
],
"metadata": {
"collapsed": false,
"pycharm": {
"is_executing": true
}
"collapsed": false
}
},
{
@ -51,10 +48,7 @@
"from settings import s"
],
"metadata": {
"collapsed": false,
"pycharm": {
"is_executing": true
}
"collapsed": false
}
},
{
@ -97,16 +91,14 @@
"doc_2 = doc_loader_2.load()"
],
"metadata": {
"collapsed": false,
"pycharm": {
"is_executing": true
}
"collapsed": false
}
},
{
"cell_type": "markdown",
"source": [
"With TLS authentication, wallet_location and wallet_password are not required."
"With TLS authentication, wallet_location and wallet_password are not required.\n",
"Bind variable option is provided by argument \"parameters\"."
],
"metadata": {
"collapsed": false
@ -117,6 +109,8 @@
"execution_count": null,
"outputs": [],
"source": [
"SQL_QUERY = \"select channel_id, channel_desc from sh.channels where channel_desc = :1 fetch first 5 rows only\"\n",
"\n",
"doc_loader_3 = OracleAutonomousDatabaseLoader(\n",
" query=SQL_QUERY,\n",
" user=s.USERNAME,\n",
@ -124,6 +118,7 @@
" schema=s.SCHEMA,\n",
" config_dir=s.CONFIG_DIR,\n",
" tns_name=s.TNS_NAME,\n",
" parameters=[\"Direct Sales\"],\n",
")\n",
"doc_3 = doc_loader_3.load()\n",
"\n",
@ -133,6 +128,7 @@
" password=s.PASSWORD,\n",
" schema=s.SCHEMA,\n",
" connection_string=s.CONNECTION_STRING,\n",
" parameters=[\"Direct Sales\"],\n",
")\n",
"doc_4 = doc_loader_4.load()"
],

View File

@ -0,0 +1,187 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"sidebar_label: SingleStore\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# SingleStoreLoader\n",
"\n",
"The `SingleStoreLoader` allows you to load documents directly from a SingleStore database table. It is part of the `langchain-singlestore` integration package.\n",
"\n",
"## Overview\n",
"\n",
"### Integration Details\n",
"\n",
"| Class | Package | JS Support |\n",
"| :--- | :--- | :---: |\n",
"| `SingleStoreLoader` | `langchain_singlestore` | ❌ |\n",
"\n",
"### Features\n",
"- Load documents lazily to handle large datasets efficiently.\n",
"- Supports native asynchronous operations.\n",
"- Easily configurable to work with different database schemas.\n",
"\n",
"## Setup\n",
"\n",
"To use the `SingleStoreLoader`, you need to install the `langchain-singlestore` package. Follow the installation instructions below."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"Install **langchain_singlestore**."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain_singlestore"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialization\n",
"\n",
"To initialize `SingleStoreLoader`, you need to provide connection parameters for the SingleStore database and specify the table and fields to load documents from.\n",
"\n",
"### Required Parameters:\n",
"- **host** (`str`): Hostname, IP address, or URL for the database.\n",
"- **table_name** (`str`): Name of the table to query. Defaults to `embeddings`.\n",
"- **content_field** (`str`): Field containing document content. Defaults to `content`.\n",
"- **metadata_field** (`str`): Field containing document metadata. Defaults to `metadata`.\n",
"\n",
"### Optional Parameters:\n",
"- **id_field** (`str`): Field containing document IDs. Defaults to `id`.\n",
"\n",
"### Connection Pool Parameters:\n",
"- **pool_size** (`int`): Number of active connections in the pool. Defaults to `5`.\n",
"- **max_overflow** (`int`): Maximum connections beyond `pool_size`. Defaults to `10`.\n",
"- **timeout** (`float`): Connection timeout in seconds. Defaults to `30`.\n",
"\n",
"### Additional Options:\n",
"- **pure_python** (`bool`): Enables pure Python mode.\n",
"- **local_infile** (`bool`): Allows local file uploads.\n",
"- **charset** (`str`): Character set for string values.\n",
"- **ssl_key**, **ssl_cert**, **ssl_ca** (`str`): Paths to SSL files.\n",
"- **ssl_disabled** (`bool`): Disables SSL.\n",
"- **ssl_verify_cert** (`bool`): Verifies server's certificate.\n",
"- **ssl_verify_identity** (`bool`): Verifies server's identity.\n",
"- **autocommit** (`bool`): Enables autocommits.\n",
"- **results_type** (`str`): Structure of query results (e.g., `tuples`, `dicts`)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_singlestore.document_loaders import SingleStoreLoader\n",
"\n",
"loader = SingleStoreLoader(\n",
" host=\"127.0.0.1:3306/db\",\n",
" table_name=\"documents\",\n",
" content_field=\"content\",\n",
" metadata_field=\"metadata\",\n",
" id_field=\"id\",\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"docs = loader.load()\n",
"docs[0]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(docs[0].metadata)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Lazy Load"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"page = []\n",
"for doc in loader.lazy_load():\n",
" page.append(doc)\n",
" if len(page) >= 10:\n",
" # do some paged operation, e.g.\n",
" # index.upsert(page)\n",
"\n",
" page = []"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all SingleStore Document Loader features and configurations head to the github page: [https://github.com/singlestore-labs/langchain-singlestore/](https://github.com/singlestore-labs/langchain-singlestore/)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

View File

@ -11,39 +11,41 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"[RankLLM](https://github.com/castorini/rank_llm) offers a suite of listwise rerankers, albeit with focus on open source LLMs finetuned for the task - RankVicuna and RankZephyr being two of them."
"**[RankLLM](https://github.com/castorini/rank_llm)** is a **flexible reranking framework** supporting **listwise, pairwise, and pointwise ranking models**. It includes **RankVicuna, RankZephyr, MonoT5, DuoT5, LiT5, and FirstMistral**, with integration for **FastChat, vLLM, SGLang, and TensorRT-LLM** for efficient inference. RankLLM is optimized for **retrieval and ranking tasks**, leveraging both **open-source LLMs** and proprietary rerankers like **RankGPT and RankGemini**. It supports **batched inference, first-token reranking, and retrieval via BM25 and SPLADE**.\n",
"\n",
"> **Note:** If using the built-in retriever, RankLLM requires **Pyserini, JDK 21, PyTorch, and Faiss** for retrieval functionality."
]
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet rank_llm"
"%pip install --upgrade --quiet rank_llm"
]
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain_openai"
"%pip install --upgrade --quiet langchain_openai"
]
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet faiss-cpu"
"%pip install --upgrade --quiet faiss-cpu"
]
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
@ -56,7 +58,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
@ -64,7 +66,7 @@
"def pretty_print_docs(docs):\n",
" print(\n",
" f\"\\n{'-' * 100}\\n\".join(\n",
" [f\"Document {i+1}:\\n\\n\" + d.page_content for i, d in enumerate(docs)]\n",
" [f\"Document {i + 1}:\\n\\n\" + d.page_content for i, d in enumerate(docs)]\n",
" )\n",
" )"
]
@ -79,9 +81,17 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": null,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"2025-02-22 15:28:58,344 - INFO - HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n"
]
}
],
"source": [
"from langchain_community.document_loaders import TextLoader\n",
"from langchain_community.vectorstores import FAISS\n",
@ -114,14 +124,14 @@
},
{
"cell_type": "code",
"execution_count": 23,
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stderr",
"name": "stdout",
"output_type": "stream",
"text": [
"2025-02-17 04:37:08,458 - INFO - HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n"
"2025-02-22 15:29:00,892 - INFO - HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n"
]
},
{
@ -331,27 +341,41 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Retrieval + Reranking with RankZephyr"
"RankZephyr performs listwise reranking for improved retrieval quality but requires at least 24GB of VRAM to run efficiently."
]
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": null,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Downloading shards: 100%|██████████| 3/3 [00:00<00:00, 2674.37it/s]\n",
"Loading checkpoint shards: 100%|██████████| 3/3 [01:49<00:00, 36.39s/it]\n"
]
}
],
"source": [
"import torch\n",
"from langchain.retrievers.contextual_compression import ContextualCompressionRetriever\n",
"from langchain_community.document_compressors.rankllm_rerank import RankLLMRerank\n",
"\n",
"compressor = RankLLMRerank(top_n=3, model=\"zephyr\")\n",
"torch.cuda.empty_cache()\n",
"\n",
"compressor = RankLLMRerank(top_n=3, model=\"rank_zephyr\")\n",
"compression_retriever = ContextualCompressionRetriever(\n",
" base_compressor=compressor, base_retriever=retriever\n",
")"
")\n",
"\n",
"del compressor"
]
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": null,
"metadata": {},
"outputs": [
{
@ -386,7 +410,7 @@
]
},
{
"name": "stderr",
"name": "stdout",
"output_type": "stream",
"text": [
"\n"
@ -407,7 +431,7 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": null,
"metadata": {},
"outputs": [
{
@ -432,7 +456,7 @@
" llm=ChatOpenAI(temperature=0), retriever=compression_retriever\n",
")\n",
"\n",
"chain({\"query\": query})"
"chain.invoke({\"query\": query})"
]
},
{
@ -451,9 +475,16 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"2025-02-22 15:01:29,469 - INFO - HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n"
]
},
{
"name": "stdout",
"output_type": "stream",
@ -683,7 +714,7 @@
},
{
"cell_type": "code",
"execution_count": 20,
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
@ -698,9 +729,18 @@
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"2025-02-22 15:01:38,554 - INFO - HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
" 0%| | 0/1 [00:00<?, ?it/s]2025-02-22 15:01:43,704 - INFO - HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
"100%|██████████| 1/1 [00:05<00:00, 5.15s/it]"
]
},
{
"name": "stdout",
"output_type": "stream",
@ -727,7 +767,7 @@
]
},
{
"name": "stderr",
"name": "stdout",
"output_type": "stream",
"text": [
"\n"
@ -748,15 +788,14 @@
},
{
"cell_type": "code",
"execution_count": 22,
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stderr",
"name": "stdout",
"output_type": "stream",
"text": [
"/tmp/ipykernel_2153001/1437145854.py:10: LangChainDeprecationWarning: The method `Chain.__call__` was deprecated in langchain 0.1.0 and will be removed in 1.0. Use :meth:`~invoke` instead.\n",
" chain({\"query\": query})\n",
" chain.invoke({\"query\": query})\n",
"2025-02-17 04:30:00,016 - INFO - HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
" 0%| | 0/1 [00:00<?, ?it/s]2025-02-17 04:30:01,649 - INFO - HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
"100%|██████████| 1/1 [00:01<00:00, 1.63s/it]\n",
@ -785,13 +824,13 @@
" llm=ChatOpenAI(temperature=0), retriever=compression_retriever\n",
")\n",
"\n",
"chain({\"query\": query})"
"chain.invoke({\"query\": query})"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "rankllm",
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
@ -805,7 +844,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.14"
"version": "3.13.2"
}
},
"nbformat": 4,

View File

@ -2525,7 +2525,17 @@
"source": [
"## `SingleStoreDB` semantic cache\n",
"\n",
"You can use [SingleStoreDB](https://python.langchain.com/docs/integrations/vectorstores/singlestoredb/) as a semantic cache to cache prompts and responses."
"You can use [SingleStore](https://python.langchain.com/docs/integrations/vectorstores/singlestore/) as a semantic cache to cache prompts and responses."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "596e15e8",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-singlestore"
]
},
{
@ -2535,11 +2545,11 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.cache import SingleStoreDBSemanticCache\n",
"from langchain_openai import OpenAIEmbeddings\n",
"from langchain_singlestore.cache import SingleStoreSemanticCache\n",
"\n",
"set_llm_cache(\n",
" SingleStoreDBSemanticCache(\n",
" SingleStoreSemanticCache(\n",
" embedding=OpenAIEmbeddings(),\n",
" host=\"root:pass@localhost:3306/db\",\n",
" )\n",
@ -2669,7 +2679,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain_couchbase couchbase"
"%pip install -qU langchain_couchbase"
]
},
{
@ -3102,8 +3112,8 @@
"|------------|---------|\n",
"| langchain_astradb.cache | [AstraDBCache](https://python.langchain.com/api_reference/astradb/cache/langchain_astradb.cache.AstraDBCache.html) |\n",
"| langchain_astradb.cache | [AstraDBSemanticCache](https://python.langchain.com/api_reference/astradb/cache/langchain_astradb.cache.AstraDBSemanticCache.html) |\n",
"| langchain_community.cache | [AstraDBCache](https://python.langchain.com/api_reference/community/cache/langchain_community.cache.AstraDBCache.html) |\n",
"| langchain_community.cache | [AstraDBSemanticCache](https://python.langchain.com/api_reference/community/cache/langchain_community.cache.AstraDBSemanticCache.html) |\n",
"| langchain_community.cache | [AstraDBCache](https://python.langchain.com/api_reference/community/cache/langchain_community.cache.AstraDBCache.html) (deprecated since `langchain-community==0.0.28`) |\n",
"| langchain_community.cache | [AstraDBSemanticCache](https://python.langchain.com/api_reference/community/cache/langchain_community.cache.AstraDBSemanticCache.html) (deprecated since `langchain-community==0.0.28`) |\n",
"| langchain_community.cache | [AzureCosmosDBSemanticCache](https://python.langchain.com/api_reference/community/cache/langchain_community.cache.AzureCosmosDBSemanticCache.html) |\n",
"| langchain_community.cache | [CassandraCache](https://python.langchain.com/api_reference/community/cache/langchain_community.cache.CassandraCache.html) |\n",
"| langchain_community.cache | [CassandraSemanticCache](https://python.langchain.com/api_reference/community/cache/langchain_community.cache.CassandraSemanticCache.html) |\n",

View File

@ -90,7 +90,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": null,
"id": "d285fd7f",
"metadata": {},
"outputs": [],
@ -99,7 +99,7 @@
"\n",
"# Initialize a Fireworks model\n",
"llm = Fireworks(\n",
" model=\"accounts/fireworks/models/mixtral-8x7b-instruct\",\n",
" model=\"accounts/fireworks/models/llama-v3p1-8b-instruct\",\n",
" base_url=\"https://api.fireworks.ai/inference/v1/completions\",\n",
")"
]
@ -176,7 +176,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": null,
"id": "b801c20d",
"metadata": {},
"outputs": [
@ -192,7 +192,7 @@
"source": [
"# Setting additional parameters: temperature, max_tokens, top_p\n",
"llm = Fireworks(\n",
" model=\"accounts/fireworks/models/mixtral-8x7b-instruct\",\n",
" model=\"accounts/fireworks/models/llama-v3p1-8b-instruct\",\n",
" temperature=0.7,\n",
" max_tokens=15,\n",
" top_p=1.0,\n",
@ -218,7 +218,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": null,
"id": "fd2c6bc1",
"metadata": {},
"outputs": [
@ -235,7 +235,7 @@
"from langchain_fireworks import Fireworks\n",
"\n",
"llm = Fireworks(\n",
" model=\"accounts/fireworks/models/mixtral-8x7b-instruct\",\n",
" model=\"accounts/fireworks/models/llama-v3p1-8b-instruct\",\n",
" temperature=0.7,\n",
" max_tokens=15,\n",
" top_p=1.0,\n",

View File

@ -0,0 +1,266 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# RunPod LLM"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Get started with RunPod LLMs.\n",
"\n",
"## Overview\n",
"\n",
"This guide covers how to use the LangChain `RunPod` LLM class to interact with text generation models hosted on [RunPod Serverless](https://www.runpod.io/serverless-gpu)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"1. **Install the package:**\n",
" ```bash\n",
" pip install -qU langchain-runpod\n",
" ```\n",
"2. **Deploy an LLM Endpoint:** Follow the setup steps in the [RunPod Provider Guide](/docs/integrations/providers/runpod#setup) to deploy a compatible text generation endpoint on RunPod Serverless and get its Endpoint ID.\n",
"3. **Set Environment Variables:** Make sure `RUNPOD_API_KEY` and `RUNPOD_ENDPOINT_ID` are set."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"# Make sure environment variables are set (or pass them directly to RunPod)\n",
"if \"RUNPOD_API_KEY\" not in os.environ:\n",
" os.environ[\"RUNPOD_API_KEY\"] = getpass.getpass(\"Enter your RunPod API Key: \")\n",
"if \"RUNPOD_ENDPOINT_ID\" not in os.environ:\n",
" os.environ[\"RUNPOD_ENDPOINT_ID\"] = input(\"Enter your RunPod Endpoint ID: \")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Initialize the `RunPod` class. You can pass model-specific parameters via `model_kwargs` and configure polling behavior."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"from langchain_runpod import RunPod\n",
"\n",
"llm = RunPod(\n",
" # runpod_endpoint_id can be passed here if not set in env\n",
" model_kwargs={\n",
" \"max_new_tokens\": 256,\n",
" \"temperature\": 0.6,\n",
" \"top_k\": 50,\n",
" # Add other parameters supported by your endpoint handler\n",
" },\n",
" # Optional: Adjust polling\n",
" # poll_interval=0.3,\n",
" # max_polling_attempts=100\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Invocation\n",
"\n",
"Use the standard LangChain `.invoke()` and `.ainvoke()` methods to call the model. Streaming is also supported via `.stream()` and `.astream()` (simulated by polling the RunPod `/stream` endpoint)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"prompt = \"Write a tagline for an ice cream shop on the moon.\"\n",
"\n",
"# Invoke (Sync)\n",
"try:\n",
" response = llm.invoke(prompt)\n",
" print(\"--- Sync Invoke Response ---\")\n",
" print(response)\n",
"except Exception as e:\n",
" print(\n",
" f\"Error invoking LLM: {e}. Ensure endpoint ID/API key are correct and endpoint is active/compatible.\"\n",
" )"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"# Stream (Sync, simulated via polling /stream)\n",
"print(\"\\n--- Sync Stream Response ---\")\n",
"try:\n",
" for chunk in llm.stream(prompt):\n",
" print(chunk, end=\"\", flush=True)\n",
" print() # Newline\n",
"except Exception as e:\n",
" print(\n",
" f\"\\nError streaming LLM: {e}. Ensure endpoint handler supports streaming output format.\"\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Async Usage"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"# AInvoke (Async)\n",
"try:\n",
" async_response = await llm.ainvoke(prompt)\n",
" print(\"--- Async Invoke Response ---\")\n",
" print(async_response)\n",
"except Exception as e:\n",
" print(f\"Error invoking LLM asynchronously: {e}.\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"# AStream (Async)\n",
"print(\"\\n--- Async Stream Response ---\")\n",
"try:\n",
" async for chunk in llm.astream(prompt):\n",
" print(chunk, end=\"\", flush=True)\n",
" print() # Newline\n",
"except Exception as e:\n",
" print(\n",
" f\"\\nError streaming LLM asynchronously: {e}. Ensure endpoint handler supports streaming output format.\"\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"The LLM integrates seamlessly with LangChain Expression Language (LCEL) chains."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import PromptTemplate\n",
"\n",
"# Assumes 'llm' variable is instantiated from the 'Instantiation' cell\n",
"prompt_template = PromptTemplate.from_template(\"Tell me a joke about {topic}\")\n",
"parser = StrOutputParser()\n",
"\n",
"chain = prompt_template | llm | parser\n",
"\n",
"try:\n",
" chain_response = chain.invoke({\"topic\": \"bears\"})\n",
" print(\"--- Chain Response ---\")\n",
" print(chain_response)\n",
"except Exception as e:\n",
" print(f\"Error running chain: {e}\")\n",
"\n",
"# Async chain\n",
"try:\n",
" async_chain_response = await chain.ainvoke({\"topic\": \"robots\"})\n",
" print(\"--- Async Chain Response ---\")\n",
" print(async_chain_response)\n",
"except Exception as e:\n",
" print(f\"Error running async chain: {e}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Endpoint Considerations\n",
"\n",
"- **Input:** The endpoint handler should expect the prompt string within `{\"input\": {\"prompt\": \"...\", ...}}`.\n",
"- **Output:** The handler should return the generated text within the `\"output\"` key of the final status response (e.g., `{\"output\": \"Generated text...\"}` or `{\"output\": {\"text\": \"...\"}}`).\n",
"- **Streaming:** For simulated streaming via the `/stream` endpoint, the handler must populate the `\"stream\"` key in the status response with a list of chunk dictionaries, like `[{\"output\": \"token1\"}, {\"output\": \"token2\"}]`."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of the `RunPod` LLM class, parameters, and methods, refer to the source code or the generated API reference (if available).\n",
"\n",
"Link to source code: [https://github.com/runpod/langchain-runpod/blob/main/langchain_runpod/llms.py](https://github.com/runpod/langchain-runpod/blob/main/langchain_runpod/llms.py)"
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@ -17,22 +17,22 @@
"id": "f507f58b-bf22-4a48-8daf-68d869bcd1ba",
"metadata": {},
"source": [
"## Setting up\n",
"## Setup\n",
"\n",
"To run this notebook you need a running Astra DB. Get the connection secrets on your Astra dashboard:\n",
"\n",
"- the API Endpoint looks like `https://01234567-89ab-cdef-0123-456789abcdef-us-east1.apps.astra.datastax.com`;\n",
"- the Token looks like `AstraCS:6gBhNmsk135...`."
"- the Database Token looks like `AstraCS:aBcD0123...`."
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"id": "d7092199",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet \"astrapy>=0.7.1 langchain-community\" "
"!pip install \"langchain-astradb>=0.6,<0.7\""
]
},
{
@ -45,12 +45,12 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 2,
"id": "163d97f0",
"metadata": {},
"outputs": [
{
"name": "stdout",
"name": "stdin",
"output_type": "stream",
"text": [
"ASTRA_DB_API_ENDPOINT = https://01234567-89ab-cdef-0123-456789abcdef-us-east1.apps.astra.datastax.com\n",
@ -65,14 +65,6 @@
"ASTRA_DB_APPLICATION_TOKEN = getpass.getpass(\"ASTRA_DB_APPLICATION_TOKEN = \")"
]
},
{
"cell_type": "markdown",
"id": "55860b2d",
"metadata": {},
"source": [
"Depending on whether local or cloud-based Astra DB, create the corresponding database connection \"Session\" object."
]
},
{
"cell_type": "markdown",
"id": "36c163e8",
@ -83,12 +75,12 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 3,
"id": "d15e3302",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_message_histories import AstraDBChatMessageHistory\n",
"from langchain_astradb import AstraDBChatMessageHistory\n",
"\n",
"message_history = AstraDBChatMessageHistory(\n",
" session_id=\"test-session\",\n",
@ -98,22 +90,31 @@
"\n",
"message_history.add_user_message(\"hi!\")\n",
"\n",
"message_history.add_ai_message(\"whats up?\")"
"message_history.add_ai_message(\"hello, how are you?\")"
]
},
{
"cell_type": "markdown",
"id": "53acb4a8-d536-4a58-9fee-7d70033d9c81",
"metadata": {},
"source": [
"[**API Reference:** `AstraDBChatMessageHistory`](https://python.langchain.com/api_reference/astradb/chat_message_histories/langchain_astradb.chat_message_histories.AstraDBChatMessageHistory.html#langchain_astradb.chat_message_histories.AstraDBChatMessageHistory)"
]
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 4,
"id": "64fc465e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessage(content='hi!'), AIMessage(content='whats up?')]"
"[HumanMessage(content='hi!', additional_kwargs={}, response_metadata={}),\n",
" AIMessage(content='hello, how are you?', additional_kwargs={}, response_metadata={})]"
]
},
"execution_count": 3,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
@ -139,7 +140,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.12.8"
}
},
"nbformat": 4,

View File

@ -46,6 +46,14 @@ See [installation instructions and a usage example](/docs/integrations/chat/tong
from langchain_community.chat_models.tongyi import ChatTongyi
```
### Qwen QwQ Chat
See [installation instructions and a usage example](/docs/integrations/chat/qwq)
```python
from langchain_qwq import ChatQwQ
```
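A minimal invocation sketch (the model name is illustrative and a DashScope API key is assumed to be configured in the environment):
```python
from langchain_qwq import ChatQwQ

llm = ChatQwQ(model="qwq-plus")  # illustrative model name
print(llm.invoke("Hello!").content)
```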
## Document Loaders
### Alibaba Cloud MaxCompute

View File

@ -35,9 +35,18 @@ from langchain_aws import ChatBedrock
```
### Bedrock Converse
AWS has recently released the Bedrock Converse API which provides a unified conversational interface for Bedrock models. This API does not yet support custom models. You can see a list of all [models that are supported here](https://docs.aws.amazon.com/bedrock/latest/userguide/conversation-inference.html). To improve reliability the ChatBedrock integration will switch to using the Bedrock Converse API as soon as it has feature parity with the existing Bedrock API. Until then a separate [ChatBedrockConverse](https://python.langchain.com/api_reference/aws/chat_models/langchain_aws.chat_models.bedrock_converse.ChatBedrockConverse.html) integration has been released.
AWS Bedrock maintains a [Converse API](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html)
that provides a unified conversational interface for Bedrock models. This API does not
yet support custom models. You can see a list of all
[models that are supported here](https://docs.aws.amazon.com/bedrock/latest/userguide/conversation-inference.html).
We recommend using `ChatBedrockConverse` for users who do not need to use custom models. See the [docs](/docs/integrations/chat/bedrock/#bedrock-converse-api) and [API reference](https://python.langchain.com/api_reference/aws/chat_models/langchain_aws.chat_models.bedrock_converse.ChatBedrockConverse.html) for more detail.
:::info
We recommend the Converse API for users who do not need to use custom models. It can be accessed using [ChatBedrockConverse](https://python.langchain.com/api_reference/aws/chat_models/langchain_aws.chat_models.bedrock_converse.ChatBedrockConverse.html).
:::
See a [usage example](/docs/integrations/chat/bedrock).
```python
from langchain_aws import ChatBedrockConverse
```
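A minimal invocation sketch (assumes AWS credentials are configured in the environment; the model ID is one example from the supported-models list linked above):
```python
from langchain_aws import ChatBedrockConverse

llm = ChatBedrockConverse(model="anthropic.claude-3-5-sonnet-20240620-v1:0")
print(llm.invoke("What is the recipe of mayonnaise?").content)
```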

View File

@ -8,18 +8,34 @@
> learning models, on the `Cloudflare` network, from your code via REST API.
## ChatModels
See [installation instructions and usage example](/docs/integrations/chat/cloudflare_workersai).
```python
from langchain_cloudflare import ChatCloudflareWorkersAI
```
## VectorStore
See [installation instructions and usage example](/docs/integrations/vectorstores/cloudflare_vectorize).
```python
from langchain_cloudflare import CloudflareVectorize
```
## Embeddings
See [installation instructions and usage example](/docs/integrations/text_embedding/cloudflare_workersai).
```python
from langchain_cloudflare import CloudflareWorkersAIEmbeddings
```
## LLMs
See [installation instructions and usage example](/docs/integrations/llms/cloudflare_workersai).
```python
from langchain_community.llms.cloudflare_workersai import CloudflareWorkersAI
```
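For orientation, a short sketch of instantiating the community LLM (the `account_id` and `api_token` values come from your Cloudflare dashboard; the model name is one entry from the Workers AI catalog):
```python
from langchain_community.llms.cloudflare_workersai import CloudflareWorkersAI

llm = CloudflareWorkersAI(
    account_id="my_account_id",  # from the Cloudflare dashboard
    api_token="my_api_token",
    model="@cf/meta/llama-2-7b-chat-int8",  # any Workers AI text-generation model
)
print(llm.invoke("Why is the sky blue?"))
```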
## Embedding models
See [installation instructions and usage example](/docs/integrations/text_embedding/cloudflare_workersai).
```python
from langchain_community.embeddings.cloudflare_workersai import CloudflareWorkersAIEmbeddings
```

View File

@ -17,7 +17,7 @@ pip install langchain-couchbase
See a [usage example](/docs/integrations/vectorstores/couchbase).
```python
from langchain_couchbase import CouchbaseVectorStore
from langchain_couchbase import CouchbaseSearchVectorStore
```
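As a rough sketch of constructing the renamed class (the bucket, scope, collection, and index names are placeholders, and a Couchbase Search index must already exist; see the usage example above for the full setup):
```python
from datetime import timedelta

from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions
from langchain_couchbase import CouchbaseSearchVectorStore
from langchain_openai import OpenAIEmbeddings

# Connect to the cluster and wait until it is ready for requests
auth = PasswordAuthenticator("Administrator", "password")
cluster = Cluster("couchbase://localhost", ClusterOptions(auth))
cluster.wait_until_ready(timedelta(seconds=5))

vector_store = CouchbaseSearchVectorStore(
    cluster=cluster,
    bucket_name="my-bucket",          # placeholder
    scope_name="my-scope",            # placeholder
    collection_name="my-collection",  # placeholder
    embedding=OpenAIEmbeddings(),
    index_name="my-search-index",     # existing Couchbase Search index
)
```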
## Document loader

View File

@ -0,0 +1,32 @@
# Smabbler
> Smabbler's graph-powered platform boosts AI development by transforming data into a structured knowledge foundation.
# Galaxia
> Galaxia Knowledge Base is an integrated knowledge base and retrieval mechanism for RAG. In contrast to standard solutions, it is based on Knowledge Graphs built using symbolic NLP and Knowledge Representation solutions. Provided texts are analysed and transformed into Graphs containing text, language and semantic information. This rich structure allows for retrieval based on semantic information, not on vector similarity/distance.
Implementing RAG using Galaxia involves first uploading your files to [Galaxia](https://beta.cloud.smabbler.com/home), analyzing them there and then building a model (knowledge graph). When the model is built, you can use `GalaxiaRetriever` to connect to the API and start retrieving.
More information: [docs](https://smabbler.gitbook.io/smabbler)
## Installation
```bash
pip install langchain-galaxia-retriever
```
## Usage
```python
from langchain_galaxia_retriever.retriever import GalaxiaRetriever

gr = GalaxiaRetriever(
    api_url="beta.api.smabbler.com",
    api_key="<key>",
    knowledge_base_id="<knowledge_base_id>",
    n_retries=10,
    wait_time=5,
)

result = gr.invoke('<test question>')
print(result)
```

File diff suppressed because it is too large

View File

@ -25,6 +25,114 @@ And you should configure credentials by setting the following environment variab
Make sure to get your API Key from https://app.hyperbrowser.ai/
## Available Tools
Hyperbrowser provides two main categories of tools that are particularly useful for:
- Web scraping and data extraction from complex websites
- Automating repetitive web tasks
- Interacting with web applications that require authentication
- Performing research across multiple websites
- Testing web applications
### Browser Agent Tools
Hyperbrowser provides a number of Browser Agent tools. Currently we support:
- Claude Computer Use
- OpenAI CUA
- Browser Use
You can see more details [here](/docs/integrations/tools/hyperbrowser_browser_agent_tools)
#### Browser Use Tool
A general-purpose browser automation tool that can handle various web tasks through natural language instructions.
```python
from langchain_hyperbrowser import HyperbrowserBrowserUseTool
tool = HyperbrowserBrowserUseTool()
result = tool.run({
    "task": "Go to npmjs.com, find the React package, and tell me when it was last updated"
})
print(result)
```
#### OpenAI CUA Tool
Leverages OpenAI's Computer Use Agent capabilities for advanced web interactions and information gathering.
```python
from langchain_hyperbrowser import HyperbrowserOpenAICUATool
tool = HyperbrowserOpenAICUATool()
result = tool.run({
    "task": "Go to Hacker News and summarize the top 5 posts right now"
})
print(result)
```
#### Claude Computer Use Tool
Utilizes Anthropic's Claude for sophisticated web browsing and information processing tasks.
```python
from langchain_hyperbrowser import HyperbrowserClaudeComputerUseTool
tool = HyperbrowserClaudeComputerUseTool()
result = tool.run({
    "task": "Go to GitHub's trending repositories page, and list the top 3 posts there right now"
})
print(result)
```
### Web Scraping Tools
Here is a brief description of the Web Scraping Tools available with Hyperbrowser. You can see more details [here](/docs/integrations/tools/hyperbrowser_web_scraping_tools)
#### Scrape Tool
The Scrape Tool allows you to extract content from a single webpage in markdown, HTML, or link format.
```python
from langchain_hyperbrowser import HyperbrowserScrapeTool
tool = HyperbrowserScrapeTool()
result = tool.run({
    "url": "https://example.com",
    "scrape_options": {"formats": ["markdown"]}
})
print(result)
```
#### Crawl Tool
The Crawl Tool enables you to traverse entire websites, starting from a given URL, with configurable page limits.
```python
from langchain_hyperbrowser import HyperbrowserCrawlTool
tool = HyperbrowserCrawlTool()
result = tool.run({
    "url": "https://example.com",
    "max_pages": 2,
    "scrape_options": {"formats": ["markdown"]}
})
print(result)
```
#### Extract Tool
The Extract Tool uses AI to pull structured data from web pages based on predefined schemas, making it perfect for data extraction tasks.
```python
from langchain_hyperbrowser import HyperbrowserExtractTool
from pydantic import BaseModel
class SimpleExtractionModel(BaseModel):
    title: str
tool = HyperbrowserExtractTool()
result = tool.run({
    "url": "https://example.com",
    "schema": SimpleExtractionModel
})
print(result)
```
## Document Loader
The `HyperbrowserLoader` class in `langchain-hyperbrowser` can easily be used to load content from any single page or multiple pages as well as crawl an entire site.
@ -39,7 +147,7 @@ docs = loader.load()
print(docs[0])
```
## Advanced Usage
### Advanced Usage
You can specify the operation to be performed by the loader. The default operation is `scrape`. For `scrape`, you can provide a single URL or a list of URLs to be scraped. For `crawl`, you can only provide a single URL. The `crawl` operation will crawl the provided page and subpages and return a document for each page.
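For example, a hedged sketch of the `crawl` operation described above (`max_pages` is assumed to cap the crawl the same way it does for the Crawl Tool earlier on this page):
```python
from langchain_hyperbrowser import HyperbrowserLoader

loader = HyperbrowserLoader(
    urls="https://example.com",  # `crawl` accepts a single URL
    operation="crawl",
    params={"max_pages": 2},  # assumption: same page cap as the Crawl Tool
)
docs = loader.load()  # one Document per crawled page
print(len(docs))
```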

View File

@ -0,0 +1,165 @@
# Langfuse 🪢
> **What is Langfuse?** [Langfuse](https://langfuse.com) is an open source LLM engineering platform that helps teams trace API calls, monitor performance, and debug issues in their AI applications.
## Tracing LangChain
[Langfuse Tracing](https://langfuse.com/docs/tracing) integrates with Langchain using Langchain Callbacks ([Python](https://python.langchain.com/docs/how_to/#callbacks), [JS](https://js.langchain.com/docs/how_to/#callbacks)). This way, the Langfuse SDK automatically creates a nested trace for every run of your LangChain application, allowing you to log, analyze and debug it.
You can configure the integration via (1) constructor arguments or (2) environment variables. Get your Langfuse credentials by signing up at [cloud.langfuse.com](https://cloud.langfuse.com) or [self-hosting Langfuse](https://langfuse.com/self-hosting).
### Constructor arguments
```bash
pip install langfuse
```
```python
# Initialize Langfuse handler
from langfuse.callback import CallbackHandler
langfuse_handler = CallbackHandler(
secret_key="sk-lf-...",
public_key="pk-lf-...",
host="https://cloud.langfuse.com", # 🇪🇺 EU region
# host="https://us.cloud.langfuse.com", # 🇺🇸 US region
)
# Your Langchain code
# Add Langfuse handler as callback (classic and LCEL)
chain.invoke({"input": "<user_input>"}, config={"callbacks": [langfuse_handler]})
```
### Environment variables
```bash filename=".env"
LANGFUSE_SECRET_KEY="sk-lf-..."
LANGFUSE_PUBLIC_KEY="pk-lf-..."
# 🇪🇺 EU region
LANGFUSE_HOST="https://cloud.langfuse.com"
# 🇺🇸 US region
# LANGFUSE_HOST="https://us.cloud.langfuse.com"
```
```python
# Initialize Langfuse handler
from langfuse.callback import CallbackHandler
langfuse_handler = CallbackHandler()
# Your Langchain code
# Add Langfuse handler as callback (classic and LCEL)
chain.invoke({"input": "<user_input>"}, config={"callbacks": [langfuse_handler]})
```
To see how to use this integration together with other Langfuse features, check out [this end-to-end example](https://langfuse.com/docs/integrations/langchain/example-python).
## Tracing LangGraph
This part demonstrates how [Langfuse](https://langfuse.com/docs) helps to debug, analyze, and iterate on your LangGraph application using the [LangChain integration](https://langfuse.com/docs/integrations/langchain/tracing).
### Initialize Langfuse
**Note:** You need to run at least Python 3.11 ([GitHub Issue](https://github.com/langfuse/langfuse/issues/1926)).
Initialize the Langfuse client with your [API keys](https://langfuse.com/faq/all/where-are-langfuse-api-keys) from the project settings in the Langfuse UI and add them to your environment.
```python
%pip install langfuse
%pip install langchain langgraph langchain_openai langchain_community
```
```python
import os
# get keys for your project from https://cloud.langfuse.com
os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-lf-***"
os.environ["LANGFUSE_SECRET_KEY"] = "sk-lf-***"
os.environ["LANGFUSE_HOST"] = "https://cloud.langfuse.com" # for EU data region
# os.environ["LANGFUSE_HOST"] = "https://us.cloud.langfuse.com" # for US data region
# your openai key
os.environ["OPENAI_API_KEY"] = "***"
```
### Simple chat app with LangGraph
**What we will do in this section:**
* Build a support chatbot in LangGraph that can answer common questions
* Tracing the chatbot's input and output using Langfuse
We will start with a basic chatbot and build a more advanced multi agent setup in the next section, introducing key LangGraph concepts along the way.
#### Create Agent
Start by creating a `StateGraph`. A `StateGraph` object defines our chatbot's structure as a state machine. We will add nodes to represent the LLM and functions the chatbot can call, and edges to specify how the bot transitions between these functions.
```python
from typing import Annotated
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage
from typing_extensions import TypedDict
from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages
class State(TypedDict):
    # Messages have the type "list". The `add_messages` function in the annotation defines how this state key should be updated
    # (in this case, it appends messages to the list, rather than overwriting them)
    messages: Annotated[list, add_messages]

graph_builder = StateGraph(State)

llm = ChatOpenAI(model="gpt-4o", temperature=0.2)

# The chatbot node function takes the current State as input and returns an updated messages list. This is the basic pattern for all LangGraph node functions.
def chatbot(state: State):
    return {"messages": [llm.invoke(state["messages"])]}
# Add a "chatbot" node. Nodes represent units of work. They are typically regular python functions.
graph_builder.add_node("chatbot", chatbot)
# Add an entry point. This tells our graph where to start its work each time we run it.
graph_builder.set_entry_point("chatbot")
# Set a finish point. This instructs the graph "any time this node is run, you can exit."
graph_builder.set_finish_point("chatbot")
# To be able to run our graph, call "compile()" on the graph builder. This creates a "CompiledGraph" we can use invoke on our state.
graph = graph_builder.compile()
```
#### Add Langfuse as callback to the invocation
Now, we will add the [Langfuse callback handler for LangChain](https://langfuse.com/docs/integrations/langchain/tracing) to trace the steps of our application: `config={"callbacks": [langfuse_handler]}`
```python
from langfuse.callback import CallbackHandler
# Initialize Langfuse CallbackHandler for Langchain (tracing)
langfuse_handler = CallbackHandler()
for s in graph.stream(
    {"messages": [HumanMessage(content="What is Langfuse?")]},
    config={"callbacks": [langfuse_handler]},
):
    print(s)
```
```
{'chatbot': {'messages': [AIMessage(content='Langfuse is a tool designed to help developers monitor and observe the performance of their Large Language Model (LLM) applications. It provides detailed insights into how these applications are functioning, allowing for better debugging, optimization, and overall management. Langfuse offers features such as tracking key metrics, visualizing data, and identifying potential issues in real-time, making it easier for developers to maintain and improve their LLM-based solutions.', response_metadata={'token_usage': {'completion_tokens': 86, 'prompt_tokens': 13, 'total_tokens': 99}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_400f27fa1f', 'finish_reason': 'stop', 'logprobs': None}, id='run-9a0c97cb-ccfe-463e-902c-5a5900b796b4-0', usage_metadata={'input_tokens': 13, 'output_tokens': 86, 'total_tokens': 99})]}}
```
#### View traces in Langfuse
Example trace in Langfuse: https://cloud.langfuse.com/project/cloramnkj0002jz088vzn1ja4/traces/d109e148-d188-4d6e-823f-aac0864afbab
![Trace view of chat app in Langfuse](https://langfuse.com/images/cookbook/integration-langgraph/integration_langgraph_chatapp_trace.png)
- Check out the [full notebook](https://langfuse.com/docs/integrations/langchain/example-python-langgraph) to see more examples.
- To learn how to evaluate the performance of your LangGraph application, check out the [LangGraph evaluation guide](https://langfuse.com/docs/integrations/langchain/example-langgraph-agents).

View File

@ -0,0 +1,21 @@
# LiteLLM
>[LiteLLM](https://github.com/BerriAI/litellm) is a library that simplifies calling Anthropic, Azure, Huggingface, Replicate, etc.
## Installation and setup
```bash
pip install langchain-litellm
```
## Chat Models
```python
from langchain_litellm import ChatLiteLLM
```
```python
from langchain_litellm import ChatLiteLLMRouter
```
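A minimal usage sketch (`gpt-4o-mini` is just an example model string; LiteLLM routes it to the matching provider, whose API key must be set in the environment):
```python
from langchain_litellm import ChatLiteLLM

llm = ChatLiteLLM(model="gpt-4o-mini")  # any LiteLLM-supported model string
response = llm.invoke("Hi there, how are you?")
print(response.content)
```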
See more detail in the guide [here](/docs/integrations/chat/litellm).
## API reference
For detailed documentation of all `ChatLiteLLM` and `ChatLiteLLMRouter` features and configurations head to the API reference: https://github.com/Akshay-Dongare/langchain-litellm

View File

@ -1,37 +0,0 @@
# LiteLLM
>[LiteLLM](https://docs.litellm.ai/docs/) is a library that simplifies calling Anthropic,
> Azure, Huggingface, Replicate, etc. LLMs in a unified way.
>
>You can use `LiteLLM` through either:
>
>* [LiteLLM Proxy Server](https://docs.litellm.ai/docs/#openai-proxy) - Server to call 100+ LLMs, load balance, cost tracking across projects
>* [LiteLLM python SDK](https://docs.litellm.ai/docs/#basic-usage) - Python Client to call 100+ LLMs, load balance, cost tracking
## Installation and setup
Install the `litellm` python package.
```bash
pip install litellm
```
## Chat models
### ChatLiteLLM
See a [usage example](/docs/integrations/chat/litellm).
```python
from langchain_community.chat_models import ChatLiteLLM
```
### ChatLiteLLMRouter
You also can use the `ChatLiteLLMRouter` to route requests to different LLMs or LLM providers.
See a [usage example](/docs/integrations/chat/litellm_router).
```python
from langchain_community.chat_models import ChatLiteLLMRouter
```

View File

@ -0,0 +1,41 @@
# MariaDB
This page covers how to use the [MariaDB](https://github.com/mariadb/) ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific MariaDB wrappers.
## Installation
- Install the MariaDB Connector/C

on Debian, Ubuntu:
```bash
sudo apt install libmariadb3 libmariadb-dev
```
on CentOS, RHEL, Rocky Linux:
```bash
sudo yum install MariaDB-shared MariaDB-devel
```
- Install the Python connector package with `pip install mariadb`
## Setup
1. The first step is to have MariaDB 11.7.1 or later installed.
The Docker image is the easiest way to get started.
## Wrappers
### VectorStore
There exists a wrapper around MariaDB vector databases, allowing you to use MariaDB as a vector store,
whether for semantic search or example selection.
To import this vectorstore:
```python
from langchain_mariadb import MariaDBStore
```
### Usage
For a more detailed walkthrough of the MariaDB wrapper, see [this notebook](/docs/integrations/vectorstores/mariadb)
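As a rough sketch only (the `datasource` and `collection_name` argument names are assumptions based on the package README; see the notebook above for the authoritative signature):
```python
from langchain_mariadb import MariaDBStore
from langchain_openai import OpenAIEmbeddings

# Assumed connection URL format for MariaDB via SQLAlchemy-style URLs
url = "mariadb+mariadbconnector://myuser:mypassword@localhost/langchain"

store = MariaDBStore.from_texts(
    ["MariaDB 11.7 ships a native vector type"],
    OpenAIEmbeddings(),
    datasource=url,             # assumed parameter name
    collection_name="my_docs",  # assumed parameter name
)
print(store.similarity_search("vector support", k=1))
```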

View File

@ -10,19 +10,23 @@ Please refer to [NCP User Guide](https://guide.ncloud-docs.com/docs/clovastudio-
## Installation and Setup
- Get a CLOVA Studio API Key by [issuing it](https://api.ncloud-docs.com/docs/ai-naver-clovastudio-summary#API%ED%82%A4) and set it as an environment variable (`NCP_CLOVASTUDIO_API_KEY`).
- If you are using a legacy API Key (that doesn't start with `nv-*` prefix), you might need to get an additional API Key by [creating your app](https://guide.ncloud-docs.com/docs/en/clovastudio-playground01#create-test-app) and set it as `NCP_APIGW_API_KEY`.
- Get a CLOVA Studio API Key by [issuing it](https://api.ncloud-docs.com/docs/ai-naver-clovastudio-summary#API%ED%82%A4) and set it as an environment variable (`CLOVASTUDIO_API_KEY`).
Naver integrations live in two packages:
- `langchain-naver-community`: a dedicated integration package for Naver. It is a community-maintained package and is not officially maintained by Naver or LangChain.
- `langchain-community`: a collection of [third-party integrations](https://python.langchain.com/docs/concepts/architecture/#langchain-community),
including Naver. **New features should be implemented in the dedicated `langchain-naver-community` package**.
- `langchain-naver`: a dedicated integration package for Naver.
- `langchain-naver-community`: a community-maintained package and is not officially maintained by Naver or LangChain.
```bash
pip install -U langchain-community langchain-naver-community
pip install -U langchain-naver
# pip install -U langchain-naver-community  # install to use the Naver Search tool
```
> **(Note)** Naver integration via `langchain-community`, a collection of [third-party integrations](https://python.langchain.com/docs/concepts/architecture/#langchain-community), is outdated.
> - **Use `langchain-naver` instead as new features should only be implemented via this package**.
> - If you are using `langchain-community` (outdated) and got a legacy API Key (that doesn't start with `nv-*` prefix), you should set it as `NCP_CLOVASTUDIO_API_KEY`, and might need to get an additional API Gateway API Key by [creating your app](https://guide.ncloud-docs.com/docs/en/clovastudio-playground01#create-test-app) and set it as `NCP_APIGW_API_KEY`.
## Chat models
### ChatClovaX
@ -30,7 +34,7 @@ pip install -U langchain-community langchain-naver-community
See a [usage example](/docs/integrations/chat/naver).
```python
from langchain_community.chat_models import ChatClovaX
from langchain_naver import ChatClovaX
```
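A minimal invocation sketch (the model name is illustrative; `CLOVASTUDIO_API_KEY` must be set as described above):
```python
from langchain_naver import ChatClovaX

chat = ChatClovaX(model="HCX-003")  # illustrative CLOVA Studio model name
print(chat.invoke("Hello!").content)
```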
## Embedding models
@ -40,7 +44,7 @@ from langchain_community.chat_models import ChatClovaX
See a [usage example](/docs/integrations/text_embedding/naver).
```python
from langchain_community.embeddings import ClovaXEmbeddings
from langchain_naver import ClovaXEmbeddings
```
## Tools

View File

@ -0,0 +1,18 @@
# Oxylabs
[Oxylabs](https://oxylabs.io/) is a market-leading web intelligence collection platform, driven by the highest business,
ethics, and compliance standards, enabling companies worldwide to unlock data-driven insights.
[langchain-oxylabs](https://pypi.org/project/langchain-oxylabs/) implements
tools enabling LLMs to interact with Oxylabs Web Scraper API.
## Installation and Setup
```bash
pip install langchain-oxylabs
```
## Tools
See details on available tools [here](/docs/integrations/tools/oxylabs/).
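For orientation, a hedged sketch of running the search tool (the `wrapper` argument name follows the `langchain-oxylabs` docs and is an assumption here; `OXYLABS_USERNAME` and `OXYLABS_PASSWORD` are expected in the environment):
```python
from langchain_oxylabs import OxylabsSearchAPIWrapper, OxylabsSearchRun

search = OxylabsSearchRun(wrapper=OxylabsSearchAPIWrapper())
print(search.invoke("What is the capital of France?"))
```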

View File

@ -8,18 +8,18 @@
## Installation and Setup
Install a Python package:
Install the Perplexity x LangChain integration package:
```bash
pip install openai
pip install langchain-perplexity
```
Get your API key from [here](https://docs.perplexity.ai/docs/getting-started).
## Chat models
See a [usage example](/docs/integrations/chat/perplexity).
See a variety of usage examples [here](/docs/integrations/chat/perplexity).
```python
from langchain_community.chat_models import ChatPerplexity
from langchain_perplexity import ChatPerplexity
```
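A minimal invocation sketch (`sonar` is one of Perplexity's current model names; `PPLX_API_KEY` must be set in the environment):
```python
from langchain_perplexity import ChatPerplexity

chat = ChatPerplexity(model="sonar", temperature=0.7)
print(chat.invoke("What are the most famous landmarks in Paris?").content)
```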

View File

@ -59,7 +59,7 @@ with the default values of "service-name = mymaster" and "db-number = 0" if not
The service-name is the redis server monitoring group name as configured within the Sentinel.
The current url format limits the connection string to one sentinel host only (no list can be given) and
booth Redis server and sentinel must have the same password set (if used).
both Redis server and sentinel must have the same password set (if used).
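For example, a connection string following the format above, with an explicit password shared by the server and the sentinel (a sketch; host and credentials are placeholders):
```python
# Sentinel at localhost:26379, monitoring group "mymaster", database 0
redis_url = "redis+sentinel://:secret@localhost:26379/mymaster/0"
```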
#### Redis Cluster connection url

View File

@ -0,0 +1,173 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Runpod\n",
"\n",
"[RunPod](https://www.runpod.io/) provides GPU cloud infrastructure, including Serverless endpoints optimized for deploying and scaling AI models.\n",
"\n",
"This guide covers how to use the `langchain-runpod` integration package to connect LangChain applications to models hosted on [RunPod Serverless](https://www.runpod.io/serverless-gpu).\n",
"\n",
"The integration offers interfaces for both standard Language Models (LLMs) and Chat Models."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Intstallation\n",
"\n",
"Install the dedicated partner package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"%pip install -qU langchain-runpod"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup\n",
"### 1. Deploy an Endpoint on RunPod\n",
"- Navigate to your [RunPod Serverless Console](https://www.runpod.io/console/serverless/user/endpoints).\n",
"- Create a \\\"New Endpoint\\\", selecting an appropriate GPU and template (e.g., vLLM, TGI, text-generation-webui) compatible with your model and the expected input/output format (see component guides or the package [README](https://github.com/runpod/langchain-runpod)).\n",
"- Configure settings and deploy.\n",
"- **Crucially, copy the Endpoint ID** after deployment."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 2. Set API Credentials\n",
"The integration needs your RunPod API Key and the Endpoint ID. Set them as environment variables for secure access:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"os.environ[\"RUNPOD_API_KEY\"] = getpass.getpass(\"Enter your RunPod API Key: \")\n",
"os.environ[\"RUNPOD_ENDPOINT_ID\"] = input(\"Enter your RunPod Endpoint ID: \")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"*(Optional)* If using different endpoints for LLM and Chat models, you might need to set `RUNPOD_CHAT_ENDPOINT_ID` or pass the ID directly during initialization."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Components\n",
"This package provides two main components:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 1. LLM\n",
"\n",
"For interacting with standard text completion models.\n",
"\n",
"See the [RunPod LLM Integration Guide](/docs/integrations/llms/runpod) for detailed usage"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"from langchain_runpod import RunPod\n",
"\n",
"# Example initialization (uses environment variables)\n",
"llm = RunPod(model_kwargs={\"max_new_tokens\": 100}) # Add generation params here\n",
"\n",
"# Example Invocation\n",
"try:\n",
" response = llm.invoke(\"Write a short poem about the cloud.\")\n",
" print(response)\n",
"except Exception as e:\n",
" print(\n",
" f\"Error invoking LLM: {e}. Ensure endpoint ID and API key are correct and endpoint is active.\"\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 2. Chat Model\n",
"\n",
"For interacting with conversational models.\n",
"\n",
"See the [RunPod Chat Model Integration Guide](/docs/integrations/chat/runpod) for detailed usage and feature support."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"from langchain_core.messages import HumanMessage\n",
"from langchain_runpod import ChatRunPod\n",
"\n",
"# Example initialization (uses environment variables)\n",
"chat = ChatRunPod(model_kwargs={\"temperature\": 0.8}) # Add generation params here\n",
"\n",
"# Example Invocation\n",
"try:\n",
" response = chat.invoke(\n",
" [HumanMessage(content=\"Explain RunPod Serverless in one sentence.\")]\n",
" )\n",
" print(response.content)\n",
"except Exception as e:\n",
" print(\n",
" f\"Error invoking Chat Model: {e}. Ensure endpoint ID and API key are correct and endpoint is active.\"\n",
" )"
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

Some files were not shown because too many files have changed in this diff