Compare commits

...

1560 Commits

Author SHA1 Message Date
Chester Curme
ebcee67e1c x 2025-05-08 13:07:12 -04:00
Chester Curme
395008456e x 2025-05-08 13:05:28 -04:00
Chester Curme
d9715ad797 x 2025-05-08 13:03:19 -04:00
Chester Curme
3516c2c394 x 2025-05-08 13:01:28 -04:00
Chester Curme
e01c52d2d0 x 2025-05-08 12:59:31 -04:00
Chester Curme
2f39398736 x 2025-05-08 12:56:12 -04:00
Chester Curme
8868c6b1ff x 2025-05-08 11:06:22 -04:00
Chester Curme
18c1d8a50c x 2025-05-08 11:05:09 -04:00
Chester Curme
fc797fadf4 x 2025-05-08 11:03:31 -04:00
Chester Curme
c678528334 x 2025-05-08 11:00:10 -04:00
Chester Curme
a074bd08f7 x 2025-05-08 10:58:36 -04:00
Chester Curme
e7fef3bd8e Revert "xx"
This reverts commit 08574848cc.
2025-05-08 10:56:38 -04:00
Chester Curme
08574848cc xx 2025-05-08 10:55:08 -04:00
Chester Curme
c65571136f x 2025-05-08 10:52:16 -04:00
Chester Curme
6acd55b216 x 2025-05-08 10:50:14 -04:00
Chester Curme
c225f1a4cb x 2025-05-08 10:47:57 -04:00
Chester Curme
50ce3bf5d9 x 2025-05-08 10:45:55 -04:00
Chester Curme
f23b8008ce x 2025-05-08 10:42:52 -04:00
Chester Curme
31077f5df2 x 2025-05-08 10:41:10 -04:00
Chester Curme
1a43bfb55b x 2025-05-08 10:39:42 -04:00
Chester Curme
3c2e05e43f x 2025-05-08 10:38:05 -04:00
Chester Curme
ad68ddbf40 test stream twice 2025-05-08 10:35:31 -04:00
ccurme
2d202f9762 anthropic[patch]: split test into two (#31167) 2025-05-08 09:23:36 -04:00
ccurme
d4555ac924 anthropic: release 0.3.13 (#31162) 2025-05-08 03:13:15 +00:00
ccurme
e34f9fd6f7 anthropic: update streaming usage metadata (#31158)
Anthropic updated how they report token counts during streaming today.
See changes to `MessageDeltaUsage` in [this
commit](2da00f26c5 (diff-1a396eba0cd9cd8952dcdb58049d3b13f6b7768ead1411888d66e28211f7bfc5)).

It's clean and simple to grab these fields from the final
`message_delta` event. However, some of them are typed as Optional, and
language
[here](e42451ab3f/src/anthropic/lib/streaming/_messages.py (L462))
suggests they may not always be present. So here we take the required
field from the `message_delta` event as we were doing previously, and
ignore the rest.
2025-05-07 23:09:56 -04:00
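A minimal sketch of the approach described above, assuming the raw `message_delta` event payload is available as a plain dict (the helper name is illustrative, not the package's internals):

```python
def usage_from_message_delta(event: dict) -> dict:
    """Pull token usage out of a raw `message_delta` event payload."""
    usage = event.get("usage") or {}
    # `output_tokens` is the required field on `MessageDeltaUsage`; the
    # Optional fields may be absent, so they are deliberately ignored here.
    return {"output_tokens": usage["output_tokens"]}

print(usage_from_message_delta({"type": "message_delta", "usage": {"output_tokens": 42}}))
# {'output_tokens': 42}
```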
Tanushree
6c3901f9f9 Change banner (#31159)
Changing the banner to lead to careers page instead of interrupt

---------

Co-authored-by: Brace Sproul <braceasproul@gmail.com>
2025-05-07 18:07:10 -07:00
ccurme
682f338c17 anthropic[patch]: support web search (#31157) 2025-05-07 18:04:06 -04:00
ccurme
d7e016c5fc huggingface: release 0.2 (#31153) 2025-05-07 15:33:07 -04:00
ccurme
4b11cbeb47 huggingface[patch]: update lockfile (#31152) 2025-05-07 15:17:33 -04:00
ccurme
b5b90b5929 anthropic[patch]: be robust to null fields when translating usage metadata (#31151) 2025-05-07 18:30:21 +00:00
ccurme
f70b263ff3 core: release 0.3.59 (#31150) 2025-05-07 17:36:59 +00:00
ccurme
bb69d4c42e docs: specify js support for tavily (#31149) 2025-05-07 11:30:04 -04:00
zhurou603
1df3ee91e7 partners: (langchain-openai) total_tokens should not add 'Nonetype' t… (#31146)

# PR Description

## Description
Fixed an issue in `langchain-openai` where `total_tokens` was
incorrectly adding `None` to an integer, causing a TypeError. The fix
ensures proper type checking before adding token counts.

## Issue
Fixes the TypeError traceback shown in the image where `'NoneType'`
cannot be added to an integer.

## Dependencies
None

## Twitter handle
None

![image](https://github.com/user-attachments/assets/9683a795-a003-455a-ada9-fe277245e2b2)

Co-authored-by: qiulijie <qiulijie@yuaiweiwu.com>
2025-05-07 11:09:50 -04:00
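The fix pattern, sketched: coerce possibly-`None` token counts to `0` before summing, so `total_tokens` never adds `NoneType` to an `int`:

```python
def add_token_counts(left: int | None, right: int | None) -> int:
    # Treat a missing count as zero instead of raising a TypeError.
    return (left or 0) + (right or 0)

assert add_token_counts(10, None) == 10
assert add_token_counts(None, None) == 0
```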
Collier King
19041dcc95 docs: update langchain-cloudflare repo/path on packages.yaml (#31138)
**Library Repo Path Update**: `langchain-cloudflare`

We recently changed our `langchain-cloudflare` repo to allow for future
libraries, creating a `libs` folder to hold the `langchain-cloudflare`
Python package:

https://github.com/cloudflare/langchain-cloudflare/tree/main/libs/langchain-cloudflare

In `langchain`, we update `packages.yaml` to point to the new
`libs/langchain-cloudflare` library folder.
2025-05-07 11:01:25 -04:00
Simonas Jakubonis
3cba22d8d7 docs: Pinecone Rerank example notebook (#31147)
Created an example notebook showing how to use the Pinecone reranking service
cc @jamescalam
2025-05-07 11:00:42 -04:00
Jacob Lee
66d1ed6099 fix(core): Permit OpenAI style blocks to be passed into convert_to_openai_messages (#31140)
Should effectively be a noop, just shouldn't throw

CC @madams0013

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-05-07 10:57:37 -04:00
Tushar Nitave
a15034d8d1 docs: Fixed grammar for chat prompt composition (#31148)
This PR fixes a grammar issue in the sentence:

"A chat prompt is made up a of a list of messages..." → "A chat prompt
is made up of a list of messages. "
2025-05-07 10:51:34 -04:00
Michael Li
57c81dc3e3 docs: replace initialize_agent with create_react_agent in graphql.ipynb (#31133)

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-05-06 16:09:52 -04:00
pulvedu
52732a4d13 Update docs (#31135)
Updating tavily docs to indicate JS support

---------

Co-authored-by: pulvedu <dustin@tavily.com>
2025-05-06 16:09:39 -04:00
Simonas Jakubonis
5dde64583e docs: Updated pinecone.mdx in the integration providers (#31123)
Updated pinecone.mdx in the integration providers

Added short description and examples for SparseVector store and
SparseEmbeddings
2025-05-06 12:58:32 -04:00
Tomaz Bratanic
6b6750967a Docs: Change to async llm graph transformer (#31126) 2025-05-06 12:53:57 -04:00
ccurme
703fce7972 docs: document that Anthropic supports boolean parallel_tool_calls param in guide (#31122) 2025-05-05 20:25:27 -04:00
唐小鸭
50fa524a6d partners: (langchain-deepseek) fix deepseek-r1 always returns an empty reasoning_content when reasoning (#31065)
## Description
While thinking, deepseek-r1 always returns an empty string for
`reasoning_content` in the first chunk, and sets `reasoning_content` to None
once thinking is over, signaling the switch to normal output.

Therefore, the check should be whether `reasoning_content` is None, not
whether the field exists.

## Demo
deepseek-r1 reasoning output: 

```
{'delta': {'content': None, 'function_call': None, 'refusal': None, 'role': 'assistant', 'tool_calls': None, 'reasoning_content': ''}, 'finish_reason': None, 'index': 0, 'logprobs': None}
{'delta': {'content': None, 'function_call': None, 'refusal': None, 'role': None, 'tool_calls': None, 'reasoning_content': '好的'}, 'finish_reason': None, 'index': 0, 'logprobs': None}
{'delta': {'content': None, 'function_call': None, 'refusal': None, 'role': None, 'tool_calls': None, 'reasoning_content': ','}, 'finish_reason': None, 'index': 0, 'logprobs': None}
{'delta': {'content': None, 'function_call': None, 'refusal': None, 'role': None, 'tool_calls': None, 'reasoning_content': '用户'}, 'finish_reason': None, 'index': 0, 'logprobs': None}
...
```

deepseek-r1 first normal output
```
...
{'delta': {'content': ' main', 'function_call': None, 'refusal': None, 'role': None, 'tool_calls': None, 'reasoning_content': None}, 'finish_reason': None, 'index': 0, 'logprobs': None}
{'delta': {'content': '\n\nimport', 'function_call': None, 'refusal': None, 'role': None, 'tool_calls': None, 'reasoning_content': None}, 'finish_reason': None, 'index': 0, 'logprobs': None}
...
```

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-05-05 22:31:58 +00:00
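A sketch of the corrected check: a chunk is still a reasoning chunk as long as `reasoning_content` is not `None`; an empty string (as in the first chunk above) still counts:

```python
def is_reasoning_chunk(delta: dict) -> bool:
    # deepseek-r1 emits reasoning_content="" first, then None once done.
    return delta.get("reasoning_content") is not None

assert is_reasoning_chunk({"reasoning_content": ""})        # first thinking chunk
assert is_reasoning_chunk({"reasoning_content": "好的"})     # mid-thinking
assert not is_reasoning_chunk({"reasoning_content": None})  # normal output resumed
```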
Michael Li
c0b69808a8 docs: replace initialize_agent with create_react_agent in openweathermap.ipynb (#31115)

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-05-05 18:08:09 -04:00
ccurme
fce8caca16 docs: minor fix in Pinecone (#31110) 2025-05-03 20:16:27 +00:00
Simonas Jakubonis
b8d0403671 docs: updated pinecone example notebook (#30993)
- **Description:** Update Pinecone notebook example
  - **Issue:** N/A
  - **Dependencies:** N/A
  - **Twitter handle:** N/A


- [x] **Add tests and docs**: Just notebook updates
2025-05-03 16:02:21 -04:00
Michael Li
1204fb8010 docs: replace initialize_agent with create_react_agent in yahoo_finance_news.ipynb (#31108)

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-05-03 15:45:52 -04:00
Adeel Ehsan
1e00116ae7 docs: add docs for vectara tools (#30958)
**Docs for Vectara Tools**: "langchain-vectara"
2025-05-03 15:39:16 -04:00
Haris Colic
3e25a93136 Docs: replace initialize agent with create react agent for google tools (#31043)
- **Description:** The deprecated initialize_agent functionality is
replaced with create_react_agent for the google tools. Also noticed a
potential issue with the non-existent "google-drive-search" tool, which was
used in the old `google-drive.ipynb`. If this should be a tool available by
default, an issue should be opened to modify
langchain-community's `load_tools` accordingly.
- **Issue:**  #29277
- **Dependencies:** No added dependencies
- **Twitter handle:** No Twitter account
2025-05-03 15:20:07 -04:00
Stefano Lottini
325f729a92 docs: improvements to Astra DB pages, especially modernize Vector DB example notebook (#30961)
This PR brings several improvements and modernizations to the
documentation around the Astra DB partner package.

- language alignment for better matching with the terms used in the
Astra DB docs
- updated several links to pages on said documentation
- for the `AstraDBVectorStore`, added mentions of the new features in
the overall `astra.mdx`
- for the vector store, rewritten/upgraded most of the usage example
notebook for a more straightforward experience able to highlight the
main usage patterns (including new ones such as the newly-introduced
"autodetect feature")

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-05-03 14:26:52 -04:00
Asif Mehmood
00ac49dd3e Replace deprecated .dict() with .model_dump() for Pydantic v2 compatibility (#31107)
**What does this PR do?**
This PR replaces deprecated usages of ```.dict()``` with
```.model_dump()``` to ensure compatibility with Pydantic v2 and prepare
for v3, addressing the deprecation warning
```PydanticDeprecatedSince20``` as required in [Issue#
31103](https://github.com/langchain-ai/langchain/issues/31103).

**Changes made:**
* Replaced ```.dict()``` with ```.model_dump()``` in multiple locations
* Ensured consistency with Pydantic v2 migration guidelines
* Verified compatibility across affected modules

**Notes**
* This is a code maintenance and compatibility update
* Tested locally with Pydantic v2.11
* No functional logic changes; only internal method replacements to
prevent deprecation issues
2025-05-03 13:40:54 -04:00
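A before/after sketch of the replacement applied throughout this PR:

```python
from pydantic import BaseModel

class Item(BaseModel):
    name: str
    price: float

item = Item(name="widget", price=9.99)

# Before -- deprecated in Pydantic v2, emits PydanticDeprecatedSince20:
# data = item.dict()

# After -- the v2-native equivalent:
data = item.model_dump()
assert data == {"name": "widget", "price": 9.99}
```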
ccurme
6268ae8db0 langchain: release 0.3.25 (#31101) 2025-05-02 17:42:32 +00:00
ccurme
77ecf47f6d openai: release 0.3.16 (#31100) 2025-05-02 13:14:46 -04:00
ccurme
ff41f47e91 core: release 0.3.58 (#31099) 2025-05-02 12:46:32 -04:00
Eugene Yurtsev
4da525bc63 langchain[patch]: Remove beta decorator from init_embeddings (#31098)
Remove beta decorator from init_embeddings.
2025-05-02 11:52:50 -04:00
ccurme
94139ffcd3 openai[patch]: format system content blocks for Responses API (#31096)
```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI


llm = ChatOpenAI(model="gpt-4.1", use_responses_api=True)

messages = [
    SystemMessage("test"),                                   # Works
    HumanMessage("test"),                                    # Works
    SystemMessage([{"type": "text", "text": "test"}]),       # Bug in this case
    HumanMessage([{"type": "text", "text": "test"}]),        # Works
    SystemMessage([{"type": "input_text", "text": "test"}])  # Works
]

llm._get_request_payload(messages)
```
2025-05-02 15:22:30 +00:00
ccurme
26ad239669 core, openai[patch]: prefer provider-assigned IDs when aggregating message chunks (#31080)
When aggregating AIMessageChunks in a stream, core prefers the leftmost
non-null ID. This is problematic because:
- Core assigns IDs when they are null to `f"run-{run_manager.run_id}"`
- The desired meaningful ID might not be available until midway through
the stream, as is the case for the OpenAI Responses API.

For the OpenAI Responses API, we assign message IDs to the top-level
`AIMessage.id`. This works in `.(a)invoke`, but during `.(a)stream` the
IDs get overwritten by the defaults assigned in langchain-core. These
IDs
[must](https://community.openai.com/t/how-to-solve-badrequesterror-400-item-rs-of-type-reasoning-was-provided-without-its-required-following-item-error-in-responses-api/1151686/9)
be available on the AIMessage object to support passing reasoning items
back to the API (e.g., if not using OpenAI's `previous_response_id`
feature). We could add them elsewhere, but seeing as we've already made
the decision to store them in `.id` during `.(a)invoke`, addressing the
issue in core lets us fix the problem with no interface changes.
2025-05-02 11:18:18 -04:00
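An illustrative sketch (not the actual core implementation) of the preference described above: when merging two chunk IDs, a provider-assigned ID beats a core-assigned `run-...` default:

```python
def merge_ids(left: str | None, right: str | None) -> str | None:
    candidates = [i for i in (left, right) if i is not None]
    for candidate in candidates:
        if not candidate.startswith("run-"):
            return candidate  # meaningful, provider-assigned ID wins
    return candidates[0] if candidates else None

assert merge_ids("run-abc123", "msg_42") == "msg_42"
assert merge_ids("run-abc123", None) == "run-abc123"
```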
ccurme
72f905a436 infra: fix notebook tests (#31097) 2025-05-02 14:33:11 +00:00
William FH
b5bf2d6218 0.3.57 (#31095) 2025-05-01 23:42:26 -07:00
William FH
167afa5102 Enable run mutation (#31090)
This lets you more easily modify a run in-flight
2025-05-01 17:00:51 -07:00
Simonas Jakubonis
0b79fc1733 docs: Pinecone Sparse vectorstore example (#31066)
Description: Pinecone SparseVectorStore example
cc @jamescalam

---------

Co-authored-by: James Briggs <35938317+jamescalam@users.noreply.github.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
2025-05-01 18:08:10 -04:00
ccurme
c51eadd54f openai[patch]: propagate service_tier to response metadata (#31089) 2025-05-01 13:50:48 -04:00
ccurme
6110c3ffc5 openai[patch]: release 0.3.15 (#31087) 2025-05-01 09:22:30 -04:00
Ben Gladwell
da59eb7eb4 anthropic: Allow kwargs to pass through when counting tokens (#31082)
- **Description:** `ChatAnthropic.get_num_tokens_from_messages` does not
currently receive `kwargs` and pass those on to
`self._client.beta.messages.count_tokens`. This is a problem if you need
to pass specific options to `count_tokens`, such as the `thinking`
option. This PR fixes that.
- **Issue:** N/A
- **Dependencies:** None
- **Twitter handle:** @bengladwell

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-04-30 17:56:22 -04:00
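A usage sketch, assuming an Anthropic API key is configured; the `thinking` payload follows Anthropic's documented shape:

```python
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import HumanMessage

llm = ChatAnthropic(model="claude-3-7-sonnet-latest")
messages = [HumanMessage("What is the weather in SF?")]

# With this change, extra kwargs are forwarded to
# `client.beta.messages.count_tokens`, e.g. the `thinking` option:
n = llm.get_num_tokens_from_messages(
    messages,
    thinking={"type": "enabled", "budget_tokens": 5000},
)
print(n)
```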
Really Him
918c950737 DOCS: partners/chroma: Fix documentation around chroma query filter syntax (#31058)
**Description**:
* Starting to put together some PR's to fix the typing around
`langchain-chroma` `filter` and `where_document` query filtering, as
mentioned:

https://github.com/langchain-ai/langchain/issues/30879
https://github.com/langchain-ai/langchain/issues/30507

The typing of `dict[str, str]` is on the one hand too restrictive (marks
valid filter expressions as ill-typed) and also too permissive (allows
illegal filter expressions). That's not what this PR addresses though.
This PR just removes from the documentation some examples of filters
that are illegal, and also syntactically incorrect: (a) dictionaries
with keys like `$contains` but the key is missing quotation marks; (b)
dictionaries with multiple entries - this is illegal in Chroma filter
syntax and will raise an exception. (`{"foo": "bar", "qux": "baz"}`).
Filter dictionaries in Chroma must have one and only one key. Again this
is just the documentation issue, which is the lowest hanging fruit. I
also think we need to update the types for `filter` and `where_document`
to be (at the very least `dict[str, Any]`), or, since we have access to
Chroma's types, they should be `Where` and `WhereDocument` types. This
has a wider blast radius though, so I'm starting small.

This PR does not fix the issues mentioned above, it's just starting to
get the ball rolling, and cleaning up the documentation.

---------

Co-authored-by: Really Him <hesereallyhim@proton.me>
2025-04-30 17:51:07 -04:00
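For contrast with the removed examples, a sketch of filter shapes that are legal in Chroma's syntax (one top-level key per dict; multiple conditions combined with an operator; operator keys quoted):

```python
legal_single = {"foo": "bar"}

# Multiple conditions must be wrapped in an operator such as "$and":
legal_combined = {"$and": [{"foo": "bar"}, {"qux": "baz"}]}

# Illegal -- two top-level keys, raises in Chroma:
# {"foo": "bar", "qux": "baz"}

# Document-content filters use quoted operator keys:
where_document = {"$contains": "search string"}
```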
Mateus Scheper
ed7cd3c5c4 docs: Fixing typo: "chocalate" -> "chocolate". (#31078)
Just fixing a typo that was making me anxious: "chocalate" ->
"chocolate".
2025-04-30 15:09:36 -04:00
yberber-sap
952a0b7b40 Docs: Fix SAP HANA Cloud docs - remove pip output, update vectorstore link, rename provider (#31077)
This PR includes the following documentation fixes for the SAP HANA
Cloud vector store integration:
- Removed stale output from the `%pip install` code cell.
- Replaced an unrelated vectorstore documentation link on the provider
overview page.
- Renamed the provider from "SAP HANA" to "SAP HANA Cloud"
2025-04-30 08:57:40 -04:00
Akshay Dongare
0b8e9868e6 docs: update LiteLLM integration docs for router migration to langchain-litellm (#31063)
# What's Changed?
- [x] 1. docs: **docs/docs/integrations/chat/litellm.ipynb** : Updated
with docs for litellm_router since it has been moved into the
[langchain-litellm](https://github.com/Akshay-Dongare/langchain-litellm)
package along with ChatLiteLLM

- [x] 2. docs: **docs/docs/integrations/chat/litellm_router.ipynb** :
Deleted to avoid redundancy

- [x] 3. docs: **docs/docs/integrations/providers/litellm.mdx** :
Updated to reflect inclusion of ChatLiteLLMRouter class

- [x] Lint and test: Done

# Issue:
- [x] Related to the issue
https://github.com/langchain-ai/langchain/issues/30368

# About me
- [x] 🔗 LinkedIn:
[akshay-dongare](https://www.linkedin.com/in/akshay-dongare/)
2025-04-29 17:48:11 -04:00
Lukas Scheucher
275ba2ec37 Add Compass Labs toolkits to langchain docs (#30794)
- **Description**: Adding documentation notebook for [compass-langchain
toolkit](https://pypi.org/project/langchain-compass/).
- **Issue**: N/a
- **Dependencies**: langchain-compass  
- **Twitter handle**: @labs_compass

---------

Co-authored-by: ccosnett <conor142857@icloud.com>
2025-04-29 17:42:43 -04:00
ccurme
bdb7c4a8b3 huggingface: fix embeddings return type (#31072)
Integration tests failing

cc @hanouticelina
2025-04-29 18:45:04 +00:00
célina
868f07f8f4 partners: (langchain-huggingface) Chat Models - Integrate Hugging Face Inference Providers and remove deprecated code (#30733)
Hi there, I'm Célina from 🤗,
This PR introduces support for Hugging Face's serverless Inference
Providers (documentation
[here](https://huggingface.co/docs/inference-providers/index)), allowing
users to specify different providers for chat completion and text
generation tasks.

This PR also removes the usage of `InferenceClient.post()` method in
`HuggingFaceEndpoint`, in favor of the task-specific `text_generation`
method. `InferenceClient.post()` is deprecated and will be removed in
`huggingface_hub v0.31.0`.

---
## Changes made
- bumped the minimum required version of the `huggingface-hub` package
to ensure compatibility with the latest API usage.
- added a `provider` field to `HuggingFaceEndpoint`, enabling users to
select the inference provider (e.g., 'cerebras', 'together',
'fireworks-ai'). Defaults to `hf-inference` (HF Inference API).
- replaced the deprecated `InferenceClient.post()` call in
`HuggingFaceEndpoint` with the task-specific `text_generation` method
for future-proofing, `post()` will be removed in huggingface-hub
v0.31.0.
- updated the `ChatHuggingFace` component:
    - added async and streaming support.
    - added support for tool calling.
- exposed underlying chat completion parameters for more granular
control.
- Added integration tests for `ChatHuggingFace` and updated the
corresponding unit tests.

  All changes are backward compatible.

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-04-29 09:53:14 -04:00
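A usage sketch of the new `provider` field, with an illustrative model id:

```python
from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint

llm = HuggingFaceEndpoint(
    repo_id="meta-llama/Llama-3.1-8B-Instruct",  # illustrative model id
    provider="together",  # e.g. "cerebras", "fireworks-ai"; defaults to "hf-inference"
)
chat = ChatHuggingFace(llm=llm)
response = chat.invoke("Say hello.")
```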
ccurme
3072e4610a community: move to separate repo (continued) (#31069)
Missed these after merging
2025-04-29 09:25:32 -04:00
ccurme
9ff5b5d282 community: move to separate repo (#31060)
langchain-community is moving to
https://github.com/langchain-ai/langchain-community
2025-04-29 09:22:04 -04:00
Sydney Runkle
7e926520d5 packaging: remove Python upper bound for langchain and co libs (#31025)
Follow up to https://github.com/langchain-ai/langsmith-sdk/pull/1696,
I've bumped the `langsmith` version where applicable in `uv.lock`.

Type checking problems here because deps have been updated in
`pyproject.toml` and `uv lock` hasn't been run - we should enforce that
in the future - goes with the other dependabot todos :).
2025-04-28 14:44:28 -04:00
Sydney Runkle
d614842d23 ci: temporarily run chroma on 3.12 for CI (#31056)
Waiting on a fix for https://github.com/chroma-core/chroma/issues/4382
2025-04-28 13:20:37 -04:00
Michael Li
ff1602f0fd docs: replace initialize_agent with create_react_agent in bash.ipynb (part of #29277) (#31042)

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-28 13:07:22 -04:00
Christophe Bornet
aee7988a94 community: add mypy warn_unused_ignores rule (#30816) 2025-04-28 11:54:12 -04:00
Bae-ChangHyun
a2863f8757 community: add 'get_col_comments' option for retrieve database columns comments (#30646)
## Description
Added support for retrieving column comments in the SQL Database
utility. This feature allows users to see comments associated with
database columns when querying table information. Column comments
provide valuable metadata that helps LLMs better understand the
semantics and purpose of database columns.

A new optional parameter `get_col_comments` was added to the
`get_table_info` method, defaulting to `False` for backward
compatibility. When set to `True`, it retrieves and formats column
comments for each table.

Currently, this feature is supported on PostgreSQL, MySQL, and Oracle
databases.

## Implementation
You should create the table with column comments beforehand.

```python
from langchain_community.utilities import SQLDatabase

db = SQLDatabase.from_uri("YOUR_DB_URI")
print(db.get_table_info(get_col_comments=True))
```
## Result
```
CREATE TABLE test_table (
	name VARCHAR,
	school VARCHAR
)
/*
Column Comments: {'name': 'person name', 'school': 'school_name'}
*/

/*
3 rows from test_table:
name
a
b
c
*/
```

## Benefits
1. Enhances LLM's understanding of database schema semantics
2. Preserves valuable domain knowledge embedded in database design
3. Improves accuracy of SQL query generation
4. Provides more context for data interpretation

Tests are available in
`langchain/libs/community/tests/test_sql_get_table_info.py`.

---------

Co-authored-by: chbae <chbae@gcsc.co.kr>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-28 15:19:46 +00:00
yberber-sap
3fb0a55122 Deprecate HanaDB, HanaTranslator and update example notebook to use new implementation (#30896)
- **Description:**  
This PR marks the `HanaDB` vector store (and related utilities) in
`langchain_community` as deprecated using the `@deprecated` annotation.
  - Set `since="0.1.0"` and `removal="1.0"`  
- Added a clear migration path and a link to the SAP-maintained
replacement in the
[`langchain_hana`](https://github.com/SAP/langchain-integration-for-sap-hana-cloud)
package.
Additionally, the example notebook has been updated to use the new
`HanaDB` class from `langchain_hana`, ensuring users follow the
recommended integration moving forward.

- **Issue:** None 

- **Dependencies:**  None

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-27 16:37:35 -04:00
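The migration path in miniature, per the deprecation notice above:

```python
# Before (deprecated, community package):
# from langchain_community.vectorstores import HanaDB

# After (SAP-maintained package):
from langchain_hana import HanaDB
```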
湛露先生
5fb8fd863a langchain_openai: clean duplicate code for openai embedding. (#30872)
The `_chunk_size` is not changed by the method `self._tokenize`, so I think
this is duplicate code.

Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-04-27 15:07:41 -04:00
Philipp Schmid
79a537d308 Update Chat and Embedding guides (#31017)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-27 18:06:59 +00:00
ccurme
ba2518995d standard-tests: add condition for image tool message test (#31041)
Require support for [standard
format](https://python.langchain.com/docs/how_to/multimodal_inputs/).
2025-04-27 17:24:43 +00:00
ccurme
04a899ebe3 infra: support third-party integration packages in API ref build (#31021) 2025-04-25 16:02:27 -04:00
Stefano Lottini
a82d987f09 docs: Astra DB, replace leftover links to "community" legacy package + modernize doc loader signature (#30969)
This PR brings some much-needed updates to some of the Astra DB shorter
example notebooks,

- ensuring imports are from the partner package instead of the
(deprecated) community legacy package
- improving the wording in a few related places
- updating the constructor signature introduced with the latest partner
package's AstraDBLoader
- marking the community package counterpart of the LLM caches as
deprecated in the summary table at the end of the page.
2025-04-25 15:45:24 -04:00
ccurme
a60fd06784 docs: document OpenAI flex processing (#31023)
Following https://github.com/langchain-ai/langchain/pull/31005
2025-04-25 15:10:25 -04:00
ccurme
629b7a5a43 openai[patch]: add explicit attribute for service tier (#31005) 2025-04-25 18:38:23 +00:00
ccurme
ab871a7b39 docs: enable milvus in API ref build (#31016)
Reverts langchain-ai/langchain#30996

Should be fixed following
https://github.com/langchain-ai/langchain-milvus/pull/68
2025-04-25 12:48:10 +00:00
Georgi Stefanov
d30c56a8c1 langchain: return attachments in _get_response (#30853)
This is a PR to return the message attachments in _get_response, since when
files are generated these attachments are not returned and the generated
files cannot be retrieved.

Fixes issue: https://github.com/langchain-ai/langchain/issues/30851
2025-04-24 21:39:11 -04:00
Abderrahmane Gourragui
09c1991e96 docs: update document examples (#31006)
## Description:

As I was following the docs, I found a couple of small issues.

This fixes some unused imports on the [extraction
page](https://python.langchain.com/docs/tutorials/extraction/#the-extractor)
and updates the examples on [classification
page](https://python.langchain.com/docs/tutorials/classification/#quickstart)
to be independent from the chat model.
2025-04-24 18:07:55 -04:00
ccurme
a7903280dd openai[patch]: delete redundant tests (#31004)
These are covered by standard tests.
2025-04-24 17:56:32 +00:00
Kyle Jeong
d0f0d1f966 [docs/community]: langchain docs + browserbaseloader fix (#30973)
community: fix browserbase integration
docs: update docs

- **Description:** Updated BrowserbaseLoader to use the new python sdk.
- **Issue:** update browserbase integration with langchain
- **Dependencies:** n/a
- **Twitter handle:** @kylejeong21
2025-04-24 13:38:49 -04:00
ccurme
403fae8eec core: release 0.3.56 (#31000) 2025-04-24 13:22:31 -04:00
Jacob Lee
d6b50ad3f6 docs: Update Google Analytics tag in docs (#31001) 2025-04-24 10:19:10 -07:00
ccurme
10a9c24dae openai: fix streaming reasoning without summaries (#30999)
Following https://github.com/langchain-ai/langchain/pull/30909: need to
retain "empty" reasoning output when streaming, e.g.,
```python
{'id': 'rs_...', 'summary': [], 'type': 'reasoning'}
```
Tested by existing integration tests, which are currently failing.
2025-04-24 16:01:45 +00:00
ccurme
8fc7a723b9 core: release 0.3.56rc1 (#30998) 2025-04-24 15:09:44 +00:00
ccurme
f4863f82e2 core[patch]: fix edge cases for _is_openai_data_block (#30997) 2025-04-24 10:48:52 -04:00
Philipp Schmid
ae4b6380d9 Documentation: Add Google Gemini dropdown (#30995)
This PR adds Google Gemini (via AI Studio and Gemini API). Feel free to
change the ordering, if needed.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-24 10:00:16 -04:00
Philipp Schmid
ffbc64c72a Documentation: Improve structure of Google integrations page (#30992)
This PR restructures the main Google integrations documentation page
(`docs/docs/integrations/providers/google.mdx`) for better clarity and
updates content.

**Key changes:**

* **Separated Sections:** Divided integrations into distinct `Google
Generative AI (Gemini API & AI Studio)`, `Google Cloud`, and `Other
Google Products` sections.
* **Updated Generative AI:** Refreshed the introduction and the `Google
Generative AI` section with current information and quickstart examples
for the Gemini API via `langchain-google-genai`.
* **Reorganized Content:** Moved non-Cloud Platform specific
integrations (e.g., Drive, GMail, Search tools, ScaNN) to the `Other
Google Products` section.
* **Cleaned Up:** Minor improvements to descriptions and code snippets.

This aims to make it easier for users to find the relevant Google
integrations based on whether they are using the Gemini API directly or
Google Cloud services.

| Before                | After      |
|-----------------------|------------|
| ![Screenshot 2025-04-24 at 14 56
23](https://github.com/user-attachments/assets/ff967ec8-a833-4e8f-8015-61af8a4fac8b)
| ![Screenshot 2025-04-24 at 14 56
15](https://github.com/user-attachments/assets/179163f1-e805-484a-bbf6-99f05e117b36)
|

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-24 09:58:46 -04:00
Jacob Lee
6b0b317cb5 feat(core): Autogenerate filenames for when converting file content blocks to OpenAI format (#30984)
CC @ccurme

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-24 13:36:31 +00:00
ccurme
21962e2201 docs: temporarily disable milvus in API ref build (#30996) 2025-04-24 09:31:23 -04:00
Behrad Hemati
1eb0bdadfa community: add indexname to other functions in opensearch (#30987)
- [x] **PR title**: "community: add indexname to other functions in
opensearch"



- [x] **PR message**:
- **Description:** add the ability to override the index name when provided in
the kwargs of sub-functions. When used in a WSGI application, it's crucial to
be able to change parameters dynamically (see the usage sketch below).


2025-04-24 08:59:33 -04:00
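A usage sketch of the override described above, assuming a running OpenSearch instance and configured embeddings; the per-call kwarg name follows this PR's description:

```python
from langchain_community.vectorstores import OpenSearchVectorSearch
from langchain_openai import OpenAIEmbeddings

vector_store = OpenSearchVectorSearch(
    opensearch_url="http://localhost:9200",
    index_name="default-index",
    embedding_function=OpenAIEmbeddings(),
)

# Per-call override, useful when a WSGI app serves many tenants:
docs = vector_store.similarity_search("my query", index_name="tenant-123-index")
```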
Nicky Parseghian
7ecdac5240 community: Strip URLs from sitemap. (#30830)
Fixes #30829

- **Description:** Simply strips the loc value when building the
element.
    - **Issue:** Fixes #30829
2025-04-23 18:18:42 -04:00
ccurme
faef3e5d50 core, standard-tests: support PDF and audio input in Chat Completions format (#30979)
Chat models currently implement support for:
- images in OpenAI Chat Completions format
- other multimodal types (e.g., PDF and audio) in a cross-provider
[standard
format](https://python.langchain.com/docs/how_to/multimodal_inputs/)

Here we update core to extend support to PDF and audio input in Chat
Completions format. **If an OAI-format PDF or audio content block is
passed into any chat model, it will be transformed to the LangChain
standard format**. We assume that any chat model supporting OAI-format
PDF or audio has implemented support for the standard format.
2025-04-23 18:32:51 +00:00
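For reference, a sketch of the two OpenAI Chat Completions content-block shapes this change now converts to the standard format:

```python
# PDF in OpenAI Chat Completions format:
oai_pdf_block = {
    "type": "file",
    "file": {
        "filename": "report.pdf",
        "file_data": "data:application/pdf;base64,<base64-data>",
    },
}

# Audio in OpenAI Chat Completions format:
oai_audio_block = {
    "type": "input_audio",
    "input_audio": {"data": "<base64-data>", "format": "wav"},
}
```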
Bagatur
d4fc734250 core[patch]: update dict prompt template (#30967)
Align with JS changes made in
https://github.com/langchain-ai/langchainjs/pull/8043
2025-04-23 10:04:50 -07:00
ccurme
4bc70766b5 core, openai: support standard multi-modal blocks in convert_to_openai_messages (#30968) 2025-04-23 11:20:44 -04:00
ccurme
e4877e5ef1 fireworks: release 0.3.0 (#30977) 2025-04-23 10:08:38 -04:00
Christophe Bornet
8c5ae108dd text-splitters: Set strict mypy rules (#30900)
* Add strict mypy rules
* Fix mypy violations
* Add error codes to all type ignores
* Add ruff rule PGH003
* Bump mypy version to 1.15
2025-04-22 20:41:24 -07:00
ccurme
eedda164c6 fireworks[minor]: remove default model and temperature (#30965)
`mixtral-8x7b-instruct` was recently retired from Fireworks Serverless.

Here we remove the default model altogether, so that the model must be
explicitly specified on init:
```python
ChatFireworks(model="accounts/fireworks/models/llama-v3p1-70b-instruct")  # for example
```

We also set a null default for `temperature`, which previously defaulted
to 0.0. This parameter will no longer be included in request payloads
unless it is explicitly provided.
2025-04-22 15:58:58 -04:00
Grant
4be55f7c89 docs: fix typo at 175 (#30966)
**Description:** Corrected pre-buit to pre-built.
**Issue:** Little typo.
2025-04-22 18:13:07 +00:00
CLOVA Studio 개발
577cb53a00 community: update Naver integration to use langchain-naver package and improve documentation (#30956)
## **Description:** 
This PR was requested after the `langchain-naver` partner-managed
packages were completed.
We built our package as requested in [this
comment](https://github.com/langchain-ai/langchain/pull/29243#issuecomment-2595222791)
and the initial version is now uploaded to
[PyPI](https://pypi.org/project/langchain-naver/).
So we've updated some of our documents with the changed features
and how to download our partner-managed package.

## **Dependencies:** 

https://github.com/langchain-ai/langchain/pull/29243#issuecomment-2595222791

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-04-22 12:00:10 -04:00
ccurme
a7c1bccd6a openai[patch]: remove xfails from image token counting tests (#30963)
These appear to be passing again.
2025-04-22 15:55:33 +00:00
ccurme
25d77aa8b4 community: release 0.3.22 (#30962) 2025-04-22 15:34:47 +00:00
ccurme
59fd4cb4c0 docs: update package registry sort order (#30960) 2025-04-22 15:27:32 +00:00
ccurme
b8c454b42b langchain: release 0.3.24 (#30959) 2025-04-22 11:23:34 -04:00
Dmitrii Rashchenko
a43df006de Support of openai reasoning summary streaming (#30909)
**langchain_openai: Support for reasoning summary streaming**

**Description:**
OpenAI API now supports streaming reasoning summaries for reasoning
models (o1, o3, o3-mini, o4-mini). More info about it:
https://platform.openai.com/docs/guides/reasoning#reasoning-summaries

It is supported only in the Responses API (not the Completions API), so you
need to create the LangChain OpenAI model as follows to support reasoning
summary streaming:

```python
llm = ChatOpenAI(
    model="o4-mini", # also o1, o3, o3-mini support reasoning streaming
    use_responses_api=True,  # reasoning streaming works only with responses api, not completion api
    model_kwargs={
        "reasoning": {
            "effort": "high",  # also "low" and "medium" supported
            "summary": "auto"  # some models support "concise" summary, some "detailed", but auto will always work
        }
    }
)
```

Now, if you stream events from llm:

```python
async for event in llm.astream_events(prompt, version="v2"):
    print(event)
```

or

```python
for chunk in llm.stream(prompt):
    print (chunk)
```

OpenAI API will send you new types of events:
`response.reasoning_summary_text.added`
`response.reasoning_summary_text.delta`
`response.reasoning_summary_text.done`

These events are new, so they were previously ignored. I have added support
for them in the function `_convert_responses_chunk_to_generation_chunk`, so
reasoning chunks or the full reasoning summary are added to the chunk's
`additional_kwargs`.

Example of how this reasoning summary may be printed:

```python
    async for event in llm.astream_events(prompt, version="v2"):
        if event["event"] == "on_chat_model_stream":
            chunk: AIMessageChunk = event["data"]["chunk"]
            if "reasoning_summary_chunk" in chunk.additional_kwargs:
                print(chunk.additional_kwargs["reasoning_summary_chunk"], end="")
            elif "reasoning_summary" in chunk.additional_kwargs:
                print("\n\nFull reasoning step summary:", chunk.additional_kwargs["reasoning_summary"])
            elif chunk.content and chunk.content[0]["type"] == "text":
                print(chunk.content[0]["text"], end="")
```

or

```python
    for chunk in llm.stream(prompt):
        if "reasoning_summary_chunk" in chunk.additional_kwargs:
            print(chunk.additional_kwargs["reasoning_summary_chunk"], end="")
        elif "reasoning_summary" in chunk.additional_kwargs:
            print("\n\nFull reasoning step summary:", chunk.additional_kwargs["reasoning_summary"])
        elif chunk.content and chunk.content[0]["type"] == "text":
            print(chunk.content[0]["text"], end="")
```

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-22 14:51:13 +00:00
Alexander Ng
0f6fa34372 Community: Valyu Integration docs (#30926)
**PR title:** docs: add Valyu integration documentation
**Description:** This PR adds documentation and example notebooks for the
Valyu integration, including retriever and tool usage.
**Issue:** N/A
**Dependencies:** No new dependencies.

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-04-21 17:43:00 -04:00
Jacob Mansdorfer
e8a84b05a4 Community: Adding tool calling and some new parameters to the langchain-predictionguard docs. (#30953)
- [x] **PR message**: 
- **Description:** Updates the documentation for the
langchain-predictionguard package, adding tool calling functionality and
some new parameters.
2025-04-21 17:01:57 -04:00
ccurme
8574442c57 core[patch]: release 0.3.55 (#30952) 2025-04-21 17:56:24 +00:00
ccurme
920d504e47 fireworks[patch]: update model in LLM integration tests (#30951)
`mixtral-8x7b-instruct` has been retired.
2025-04-21 17:53:27 +00:00
Anton Masalovich
1f3054502e community: fix cost calculations for 4.1 and o4 in OpenAI callback (#30899)
**Issue:** #30898
2025-04-21 10:59:47 -04:00
Ahmed Tammaa
589bc19890 anthropic[patch]: make description optional on AnthropicTool (#30935)
PR Summary

This change adds a fallback in ChatAnthropic.with_structured_output() to
handle Pydantic models that don’t include a docstring. Without it,
calling:
```py
from pydantic import BaseModel
from langchain_anthropic import ChatAnthropic

class SampleModel(BaseModel):
    sample_field: str

llm = ChatAnthropic(
    model="claude-3-7-sonnet-latest"
).with_structured_output(SampleModel.model_json_schema())

llm.invoke("test")
```
will raise a
```
KeyError: 'description'
```
because Pydantic omits the description field when no docstring is
present.

This issue doesn’t occur when using ChatOpenAI or if you add a docstring
to the model:
```py
from pydantic import BaseModel
from langchain_openai import ChatOpenAI

class SampleModel(BaseModel):
    """Schema for sample_field output."""
    sample_field: str

llm = ChatOpenAI(
    model="gpt-4o-mini"
).with_structured_output(SampleModel.model_json_schema())

llm.invoke("test")
```

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-21 10:44:39 -04:00
Nuno Campos
27296bdb0c core: Make Graph.Node.data optional (#30943)
2025-04-21 07:18:36 -07:00
Pushpa Kumar
0e9d0dbc10 docs: dynamic copyright year in API ref (#30944)
- [x] **PR title:**  
`docs: make footer year dynamic in API reference docs`

- [x] **PR message:**

  - **Description:**  
Update `docs/api_reference/conf.py` to make the copyright year dynamic
(on
[https://python.langchain.com/api_reference/](https://python.langchain.com/api_reference/)).

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-21 14:10:14 +00:00
Ahmed Tammaa
de56c31672 core: Improve OutputParser error messaging when model output is truncated (max_tokens) (#30936)
Addresses #30158
When using the output parser—either in a chain or standalone—hitting
max_tokens triggers a misleading “missing variable” error instead of
indicating the output was truncated. This subtle bug often surfaces with
Anthropic models.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-21 10:06:18 -04:00
xsai9101
335f089d6a Community: Add bind variable support for oracle adb docloader (#30937)
**PR title:** Community: Add bind variable support for Oracle ADB docloader
**Description:** This PR adds support for using bind variables in the Oracle
ADB doc loader class, including a minor documentation change.
**Issue:** N/A
**Dependencies:** No new dependencies.
2025-04-21 08:47:33 -04:00
Ikko Eltociear Ashimine
9418c0d8a5 docs: update tableau.ipynb (#30938)
Initalize -> Initialize
2025-04-21 08:43:29 -04:00
Aubrey Ford
23f701b08e langchain_community: OpenAIEmbeddings not respecting chunk_size argument (#30946)
This is a follow-on PR to go with the identical changes that were made
in partners/openai.

Previous PR:  https://github.com/langchain-ai/langchain/pull/30757

When calling embed_documents and providing a chunk_size argument, that
argument is ignored when OpenAIEmbeddings is instantiated with its
default configuration (where check_embedding_ctx_length=True).

_get_len_safe_embeddings specifies a chunk_size parameter but it's not
being passed through in embed_documents, which is its only caller. This
appears to be an oversight, especially given that the
_get_len_safe_embeddings docstring states it should respect "the set
embedding context length and chunk size."

Developers typically expect method parameters to take effect (also, take
precedence) when explicitly provided, especially when instantiating
using defaults. I was confused as to why my API calls were being
rejected regardless of the chunk size I provided.
2025-04-21 08:39:07 -04:00
Aubrey Ford
b344f34635 partners/openai: OpenAIEmbeddings not respecting chunk_size argument (#30757)
When calling `embed_documents` and providing a `chunk_size` argument,
that argument is ignored when `OpenAIEmbeddings` is instantiated with
its default configuration (where `check_embedding_ctx_length=True`).

`_get_len_safe_embeddings` specifies a `chunk_size` parameter but it's
not being passed through in `embed_documents`, which is its only caller.
This appears to be an oversight, especially given that the
`_get_len_safe_embeddings` docstring states it should respect "the set
embedding context length and chunk size."

Developers typically expect method parameters to take effect (also, take
precedence) when explicitly provided, especially when instantiating
using defaults. I was confused as to why my API calls were being
rejected regardless of the chunk size I provided.

This bug also exists in langchain_community package. I can add that to
this PR if requested otherwise I will create a new one once this passes.
2025-04-18 15:27:27 -04:00
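A usage sketch of the parameter these two PRs make effective:

```python
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()  # default: check_embedding_ctx_length=True

# With this fix, the explicit chunk_size (texts per API request) takes
# effect instead of being silently ignored in favor of the instance default.
vectors = embeddings.embed_documents(["doc one", "doc two"], chunk_size=16)
```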
Konsti-s
017c8079e1 partners: ChatAnthropic supports urls (#30809)
**Description:**
partners-anthropic: ChatAnthropic supports b64 and urls in the
part[image_url][url] message variable

**Issue**:
ChatAnthropic right now only supports b64 encoded images in the
part[image_url][url] message variable. This PR enables ChatAnthropic to
also accept image urls in said variable and makes it compatible with
OpenAI messages to make model switching easier.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-18 15:15:45 -04:00
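A usage sketch of the OpenAI-style message that now works, with an illustrative model name and image URL:

```python
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import HumanMessage

llm = ChatAnthropic(model="claude-3-5-sonnet-latest")

message = HumanMessage(content=[
    {"type": "text", "text": "Describe this image."},
    # Previously only base64 data worked here; a plain URL now does too:
    {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
])
response = llm.invoke([message])
```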
Volodymyr Tkachuk
d0cd115356 community: Add deprecation decorator to SingleStore community integrations (#30846)
SingleStore integration now has its own package, `langchain-singlestore`, so
the community implementation will no longer be maintained.

Added `deprecated` decorator to `SingleStoreDBChatMessageHistory`,
`SingleStoreDBSemanticCache`, and `SingleStoreDB` classes in the
community package.

**Dependencies:** https://github.com/langchain-ai/langchain/pull/30841

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-04-18 12:58:39 -04:00
Alejandro Rodríguez
34ddfba76b community: support usage_metadata for litellm streaming calls (#30683)
Support "usage_metadata" for LiteLLM streaming calls.

This is a follow-up to
https://github.com/langchain-ai/langchain/pull/30625, which tackled
non-streaming calls.

2025-04-18 12:50:32 -04:00
Volodymyr Tkachuk
5ffcd01c41 docs: Register langchain-singlestore integration (#30841)
I created and published the `langchain-singlestore` integration package, which
should replace the SingleStoreDB community implementation.
2025-04-18 12:11:33 -04:00
ccurme
096f0e5966 core[patch]: de-beta usage callback (#30928) 2025-04-18 15:45:09 +00:00
ccurme
46de0866db infra: add langchain-google-genai to monorepo test deps and update notebook cassettes (#30925)
Following https://github.com/langchain-ai/langchain/pull/30880
2025-04-18 11:16:12 -04:00
Behrad Hemati
d624a475e4 community: change metadata in opensearch mmr (#30921)
- [ ] **PR message**:
- **Description:** including `metadata_field` in
`max_marginal_relevance_search()` would result in an error; changed the logic
to be similar to how it's handled in `similarity_search`, where it can be
any field or simply a `"*"` to include every field
2025-04-18 10:10:23 -04:00
rylativity
dbf9986d44 langchain-ollama (partners) / langchain-core: allow passing ChatMessages to Ollama (including arbitrary roles) (#30411)
Replacement for PR #30191 (@ccurme)

**Description**: currently, ChatOllama [will raise a value error if a
ChatMessage is passed to
it](https://github.com/langchain-ai/langchain/blob/master/libs/partners/ollama/langchain_ollama/chat_models.py#L514),
as described
https://github.com/langchain-ai/langchain/pull/30147#issuecomment-2708932481.

Furthermore, ollama-python is removing the limitations on valid roles
that can be passed through chat messages to a model in ollama -
https://github.com/ollama/ollama-python/pull/462#event-16917810634.

This PR removes the role limitations imposed by langchain and enables
passing langchain ChatMessages with arbitrary 'role' values through the
langchain ChatOllama class to the underlying ollama-python Client.

As this PR relies on [merged but unreleased functionality in
ollama-python](
https://github.com/ollama/ollama-python/pull/462#event-16917810634), I
have temporarily pointed the ollama package source to the main branch of
the ollama-python github repo.

Format, lint, and tests of new functionality passing. Need to resolve
issue with recently added ChatOllama tests. (Now resolved)

**Issue**: resolves #30122 (related to ollama issue
https://github.com/ollama/ollama/issues/8955)

**Dependencies**: no new dependencies

[x] PR title
[x] PR message
[x] Lint and test: format, lint, and test all running successfully and
passing

---------

Co-authored-by: Ryan Stewart <ryanstewart@Ryans-MacBook-Pro.local>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-18 10:07:07 -04:00
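A usage sketch of the newly permitted pass-through, with an illustrative model and role:

```python
from langchain_core.messages import ChatMessage
from langchain_ollama import ChatOllama

llm = ChatOllama(model="llama3.1")  # illustrative model name

# Previously this raised a ValueError; arbitrary roles now pass straight
# through to the underlying ollama-python client.
response = llm.invoke([ChatMessage(role="control", content="thinking")])
```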
Christophe Bornet
0c723af4b0 langchain[lint]: fix mypy type ignores (#30894)
* Remove unused ignores
* Add type ignore codes
* Add mypy rule `warn_unused_ignores`
* Add ruff rule PGH003

NB: some `type: ignore[unused-ignore]` are added because the ignores are
needed when `extended_testing_deps.txt` deps are installed.
2025-04-17 17:54:34 -04:00
ccurme
f14bcee525 docs: update multi-modal docs (#30880)
Co-authored-by: Sydney Runkle <54324534+sydney-runkle@users.noreply.github.com>
2025-04-17 16:03:05 -04:00
Sydney Runkle
98c357b3d7 core: release 0.3.54 (#30911) 2025-04-17 14:27:06 -04:00
Vadym Barda
d2cbfa379f core[patch]: add retries and better messages to draw_mermaid_png (#30881) 2025-04-17 18:25:37 +00:00
Sydney Runkle
75e50a3efd core[patch]: Raise AttributeError (instead of ModuleNotFoundError) in custom __getattr__ (#30905)
Follow up to https://github.com/langchain-ai/langchain/pull/30769,
fixing the regression reported
[here](https://github.com/langchain-ai/langchain/pull/30769#issuecomment-2807483610),
thanks @krassowski for the report!

Fix inspired by https://github.com/PrefectHQ/prefect/pull/16172/files

Other changes:
* Using tuples for `__all__`, except in `output_parsers` bc of a list
namespace conflict
* Using a helper function for imports due to repeated logic across
`__init__.py` files becoming hard to maintain.

Co-authored-by: Michał Krassowski <5832902+krassowski@users.noreply.github.com>
2025-04-17 14:15:28 -04:00
ccurme
61d2dc011e openai: release 0.3.14 (#30908) 2025-04-17 10:49:14 -04:00
ccurme
f0f90c4d88 anthropic: release 0.3.12 (#30907) 2025-04-17 14:45:12 +00:00
ccurme
f01b89df56 standard-tests: release 0.3.19 (#30906) 2025-04-17 10:37:44 -04:00
ccurme
add6a78f98 standard-tests, openai[patch]: add support standard audio inputs (#30904) 2025-04-17 10:30:57 -04:00
ccurme
2c2db1ab69 core: release 0.3.53 (#30901) 2025-04-17 13:10:32 +00:00
ccurme
86d51f6be6 multiple: permit optional fields on multimodal content blocks (#30887)
Instead of stuffing provider-specific fields in `metadata`, they can go
directly on the content block.
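
For example (a sketch; `cache_control` stands in for any provider-specific
field):

```
{
    "type": "image",
    "source_type": "url",
    "url": "https://path.to.image.png",
    "cache_control": {"type": "ephemeral"},
}
```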
2025-04-17 12:48:46 +00:00
湛露先生
83b66cb916 doc: clean doc word description. (#30895)
Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-04-17 08:04:37 -04:00
湛露先生
ff2930c119 partners: bug fix check_imports.py exit code. (#30897)
Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-04-17 08:02:23 -04:00
ccurme
b36c2bf833 docs: update Bedrock chat model page (#30883)
- document prompt caching
- feature ChatBedrockConverse throughout
2025-04-16 16:55:14 -04:00
ccurme
9e82f1df4e docs: minor clean up in ChatOpenAI docs (#30884) 2025-04-16 16:08:43 -04:00
ccurme
fa362189a1 docs: document OpenAI reasoning summaries (#30882) 2025-04-16 19:21:14 +00:00
Sydney Runkle
88fce67724 core: Removing unnecessary pydantic core schema rebuilds (#30848)
We only need to rebuild model schemas if type annotation information
isn't available during declaration - that shouldn't be the case for
these types corrected here.

Need to do more thorough testing to make sure these structures have
complete schemas, but hopefully this boosts startup / import time.
2025-04-16 12:00:08 -04:00
rrozanski-smabbler
60d8ade078 Galaxia integration (#30792)
- [ ] **PR title**: "docs: adding Smabbler's Galaxia integration"

- [ ] **PR message**:  **Twitter handle:** @Galaxia_graph

I'm adding docs here + added the package to the packages.yml. I didn't
add a unit test, because this integration is just a thin wrapper on top
of our API. There isn't much left to test if you mock it away.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-16 10:39:04 -04:00
ccurme
ca39680d2a ollama: release 0.3.2 (#30865) 2025-04-16 09:14:57 -04:00
Sydney Runkle
4af3f89a3a docs: enforce newlines when signature exceeds char threshold (#30866)
Below is an example of the single line vs new multiline approach.

Before this PR:

<img width="831" alt="Screenshot 2025-04-15 at 8 56 26 PM"
src="https://github.com/user-attachments/assets/0c0277bd-2441-4b22-a536-e16984fd91b7"
/>

After this PR:

<img width="829" alt="Screenshot 2025-04-15 at 8 56 13 PM"
src="https://github.com/user-attachments/assets/e16bfe38-bb17-48ba-a642-e8ff6b48e841"
/>
2025-04-16 08:45:40 -04:00
milosz-l
4ff576e37d langchain: infer Perplexity provider for sonar model prefix (#30861)
**Description:** This PR adds provider inference logic to
`init_chat_model` for Perplexity models that use the "sonar..." prefix
(`sonar`, `sonar-pro`, `sonar-reasoning`, `sonar-reasoning-pro` or
`sonar-deep-research`).

This allows users to initialize these models by simply passing the model
name, without needing to explicitly set `model_provider="perplexity"`.

The docstring for `init_chat_model` has also been updated to reflect
this new inference rule.
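
A minimal sketch of the new behavior:

```python
from langchain.chat_models import init_chat_model

# The provider is now inferred from the "sonar" prefix, equivalent to
# init_chat_model("sonar-pro", model_provider="perplexity").
llm = init_chat_model("sonar-pro")
```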
2025-04-15 18:17:21 -04:00
ccurme
085baef926 ollama[patch]: support standard image format (#30864)
Following https://github.com/langchain-ai/langchain/pull/30746
2025-04-15 22:14:50 +00:00
ccurme
47ded80b64 ollama[patch]: fix generation info (#30863)
https://github.com/langchain-ai/langchain/pull/30778 (not released)
broke all invocation modes of ChatOllama (intent was to remove
`"message"` from `generation_info`, but we turned `generation_info` into
`stream_resp["message"]`), resulting in validation errors.
2025-04-15 19:22:58 +00:00
Sydney Runkle
cf2697ec53 chroma: release 0.2.3 (#30860) 2025-04-15 14:11:23 -04:00
ccurme
8e9569cbc8 perplexity: release 0.1.1 (#30859) 2025-04-15 18:02:15 +00:00
ccurme
dd5f5902e3 openai: release 0.3.13 (#30858) 2025-04-15 17:58:12 +00:00
ccurme
3382ee8f57 anthropic: release 0.3.11 (#30857) 2025-04-15 17:57:00 +00:00
Sydney Runkle
ef5aff3b6c core[fix]: Fix __dir__ in __init__.py for output_parsers module (#30856)
We have a `list.py` file which causes a namespace conflict with `list`
from stdlib, unfortunately.

`__all__` is already a list, so no need to coerce.
2025-04-15 13:09:13 -04:00
Christophe Bornet
a4ca1fe0ed core: Remove some noqa (#30855) 2025-04-15 13:08:40 -04:00
ccurme
6baf5c05a6 standard-tests: release 0.3.18 (#30854) 2025-04-15 16:56:54 +00:00
ccurme
c6a8663afb infra: run old standard-tests on core releases (#30852)
On core releases, we check out the latest published package for
langchain-openai and langchain-anthropic and run their tests against the
candidate version of langchain-core.

Because these packages have a local install of langchain-tests, we also
need to check out the previous version of langchain-tests.
2025-04-15 16:04:08 +00:00
Sydney Runkle
1f5e207379 core[fix]: remove load from dynamic imports dict (#30849) 2025-04-15 12:02:46 -04:00
ccurme
7240458619 core: release 0.3.52 (#30850) 2025-04-15 15:28:31 +00:00
Sydney Runkle
6aa5494a75 Fix from langchain_core.load.load import load import (#30843)
TL;DR: you can't optimize imports with a lazy `__getattr__` if there is
a namespace conflict with a module name and an attribute name. We should
avoid introducing conflicts like this in the future.

This PR fixes a bug introduced by my lazy imports PR:
https://github.com/langchain-ai/langchain/pull/30769.

In `langchain_core`, we have utilities for loading and dumping data.
Unfortunately, one of those utilities is a `load` function, located in
`langchain_core/load/load.py`. To make this function more visible, we
make it accessible at the top level `langchain_core.load` module via
importing the function in `langchain_core/load/__init__.py`.

So, either of these imports should work:

```py
from langchain_core.load import load
from langchain_core.load.load import load
```

As you can tell, this is already a bit confusing. You'd think that the
first import would produce the module `load`, but because of the
`__init__.py` shortcut, both produce the function `load`.

<details><summary>More on why the lazy imports PR broke this support...</summary>

All was well, except when the absolute import was run first, see the
last snippet:

```
>>> from langchain_core.load import load
>>> load
<function load at 0x101c320c0>
```

```
>>> from langchain_core.load.load import load
>>> load
<function load at 0x1069360c0>
```

```
>>> from langchain_core.load import load
>>> load
<function load at 0x10692e0c0>
>>> from langchain_core.load.load import load
>>> load
<function load at 0x10692e0c0>
```

```
>>> from langchain_core.load.load import load
>>> load
<function load at 0x101e2e0c0>
>>> from langchain_core.load import load
>>> load
<module 'langchain_core.load.load' from '/Users/sydney_runkle/oss/langchain/libs/core/langchain_core/load/load.py'>
```

In this case, the function `load` wasn't stored in the globals cache for
the `langchain_core.load` module (by the lazy import logic), so Python
defers to a module import.

</details>

New `langchain` tongue twister 😜: we've created a problem for ourselves
because you have to load the load function from the load file in the
load module 😨.
2025-04-15 11:06:13 -04:00
Bagatur
7262de4217 core[patch]: dict chat prompt template support (#25674)
- Support passing dicts as templates to chat prompt template
- Support making *any* attribute on a message a runtime variable
- Significantly simpler than trying to update our existing prompt
template classes

```python
    template = ChatPromptTemplate(
        [
            {
                "role": "assistant",
                "content": [
                    {
                        "type": "text",
                        "text": "{text1}",
                        "cache_control": {"type": "ephemeral"},
                    },
                    {"type": "image_url", "image_url": {"path": "{local_image_path}"}},
                ],
                "name": "{name1}",
                "tool_calls": [
                    {
                        "name": "{tool_name1}",
                        "args": {"arg1": "{tool_arg1}"},
                        "id": "1",
                        "type": "tool_call",
                    }
                ],
            },
            {
                "role": "tool",
                "content": "{tool_content2}",
                "tool_call_id": "1",
                "name": "{tool_name1}",
            },
        ]
    )

```
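
For illustration, such a template could then be formatted by supplying the
runtime variables (the values below are hypothetical):

```python
prompt_value = template.invoke(
    {
        "text1": "You are a helpful assistant.",
        "local_image_path": "/tmp/example.png",
        "name1": "assistant-1",
        "tool_name1": "get_weather",
        "tool_arg1": "Tokyo",
        "tool_content2": "72F and sunny",
    }
)
messages = prompt_value.to_messages()  # fully-formatted messages
```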

will likely close #25514 if we like this idea and update to use this
logic

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-15 11:00:49 -04:00
ccurme
9cfe6bcacd multiple: multi-modal content blocks (#30746)
Introduces standard content block format for images, audio, and files.

## Examples

Image from url:
```
{
    "type": "image",
    "source_type": "url",
    "url": "https://path.to.image.png",
}
```


Image, in-line data:
```
{
    "type": "image",
    "source_type": "base64",
    "data": "<base64 string>",
    "mime_type": "image/png",
}
```


PDF, in-line data:
```
{
    "type": "file",
    "source_type": "base64",
    "data": "<base64 string>",
    "mime_type": "application/pdf",
}
```


File from ID:
```
{
    "type": "file",
    "source_type": "id",
    "id": "file-abc123",
}
```


Plain-text file:
```
{
    "type": "file",
    "source_type": "text",
    "text": "foo bar",
}
```
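
These blocks are passed as message content in the usual way; a minimal
sketch:

```python
from langchain_core.messages import HumanMessage

message = HumanMessage(
    content=[
        {"type": "text", "text": "Describe this image."},
        {"type": "image", "source_type": "url", "url": "https://path.to.image.png"},
    ]
)
```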
2025-04-15 09:48:06 -04:00
湛露先生
09438857e8 docs: fix tools_human.ipynb url 404. (#30831)
Fix the 404 pages.

Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-04-15 09:22:13 -04:00
Sydney Runkle
e3b6cddd5e core: codspeed tweak to make sure it runs on master (#30845) 2025-04-15 13:03:44 +00:00
Sydney Runkle
59f2c9e737 Tinkering with CodSpeed (#30824)
Fix CI to trigger benchmarks on `run-codspeed-benchmarks` label addition

Reduce scope of async benchmark to save time on CI

Waiting to merge this PR until we figure out how to use walltime on
local runners.
2025-04-15 08:49:09 -04:00
William FH
ed5c4805f6 Consistent docstring indentation (#30834)
Should be 4 spaces instead of 3.
2025-04-14 19:04:35 -07:00
Joey Constantino
2282762528 docs: small Tableau docs update (#30827)
Description: small Tableau docs update
Issue: adds required environment variable
Dependencies: tableau-langchain

---------

Co-authored-by: Joe Constantino <joe.constantino@joecons-ltm6v86.internal.salesforce.com>
2025-04-14 15:34:54 -04:00
ccurme
f7c4965fb6 openai[patch]: update imports in test (#30828)
Quick fix to unblock CI, will need to address in core separately.
2025-04-14 19:33:38 +00:00
Sydney Runkle
edb6a23aea core[lint]: fix issue with unused ignore in __init__.py files (#30825)
Fixing a race condition between
https://github.com/langchain-ai/langchain/pull/30769 and
https://github.com/langchain-ai/langchain/pull/30737
2025-04-14 17:57:00 +00:00
湛露先生
3a64c7195f community: redis tool typos fix (#30811) 2025-04-14 09:01:36 -04:00
Sydney Runkle
4f69094b51 core[performance]: use custom __getattr__ in __init__.py files for lazy imports (#30769)
Most easily reviewed with the "hide whitespace" option toggled.

Seeing 10-50% speed ups in import time for common structures 🚀 

The general purpose of this PR is to lazily import structures within
`langchain_core.XXX_module.__init__.py` so that we're not eagerly
importing expensive dependencies (`pydantic`, `requests`, etc).

Analysis of flamegraphs generated with `importtime` motivated these
changes. For example, the one below demonstrates that importing
`HumanMessage` accidentally triggered imports for `importlib.metadata`,
`requests`, etc.

There's still much more to do on this front, and we can start digging
into our own internal code for optimizations now that we're less
concerned about external imports.
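
A minimal sketch of the pattern, assuming a hypothetical `_dynamic_imports`
mapping from attribute names to their defining submodules:

```python
# __init__.py (sketch)
from importlib import import_module

_dynamic_imports = {"HumanMessage": "human"}  # illustrative mapping

def __getattr__(name: str):
    submodule = _dynamic_imports.get(name)
    if submodule is None:
        raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
    attr = getattr(import_module(f".{submodule}", package=__name__), name)
    globals()[name] = attr  # cache so later lookups never hit __getattr__
    return attr
```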

<img width="1210" alt="Screenshot 2025-04-11 at 1 10 54 PM"
src="https://github.com/user-attachments/assets/112a3fe7-24a9-4294-92c1-d5ae64df839e"
/>

I've tracked the improvements with some local benchmarks:

## `pytest-benchmark` results

| Name | Before (s) | After (s) | Delta (s) | % Change |
|-----------------------------|------------|-----------|-----------|----------|
| Document | 2.8683 | 1.2775 | -1.5908 | -55.46% |
| HumanMessage | 2.2358 | 1.1673 | -1.0685 | -47.79% |
| ChatPromptTemplate | 5.5235 | 2.9709 | -2.5526 | -46.22% |
| Runnable | 2.9423 | 1.7793 | -1.163 | -39.53% |
| InMemoryVectorStore | 3.1180 | 1.8417 | -1.2763 | -40.93% |
| RunnableLambda | 2.7385 | 1.8745 | -0.864 | -31.55% |
| tool | 5.1231 | 4.0771 | -1.046 | -20.42% |
| CallbackManager | 4.2263 | 3.4099 | -0.8164 | -19.32% |
| LangChainTracer | 3.8394 | 3.3101 | -0.5293 | -13.79% |
| BaseChatModel | 4.3317 | 3.8806 | -0.4511 | -10.41% |
| PydanticOutputParser | 3.2036 | 3.2995 | 0.0959 | 2.99% |
| InMemoryRateLimiter | 0.5311 | 0.5995 | 0.0684 | 12.88% |

Note that the apparent regressions for `InMemoryRateLimiter` and
`PydanticOutputParser` are just random noise; I'm getting comparable
numbers locally.

## Local CodSpeed results

We're still working on configuring CodSpeed on CI. The local usage
produced similar results.
2025-04-14 08:57:54 -04:00
Christophe Bornet
ada740b5b9 community: Add ruff rule PGH003 (#30812)
See https://docs.astral.sh/ruff/rules/blanket-type-ignore/

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-14 02:32:13 +00:00
ccurme
f005988e31 community[patch]: fix cost calculations for o3 in OpenAI callback (#30807)
Resolves https://github.com/langchain-ai/langchain/issues/30795
2025-04-13 15:20:46 +00:00
BoyuHu
446361a0d3 docs: fix typo (#30800)
2025-04-13 10:55:30 -04:00
Marina Gómez
afd457d8e1 perplexity[patch]: Fix #30767: Handle missing citations attribute in ChatPerplexity (#30805)
This PR fixes an issue where ChatPerplexity would raise an
AttributeError when the citations attribute was missing from the model
response (e.g., when using offline models like r1-1776).

The fix checks for the presence of citations, images, and
related_questions before attempting to access them, avoiding crashes in
models that don't provide these fields.

Tested locally with models that omit citations, and the fix works as
expected.
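
A sketch of the defensive access described above (attribute names taken from
the PR description):

```python
# None-safe access instead of response.citations etc.
citations = getattr(response, "citations", None) or []
images = getattr(response, "images", None) or []
related_questions = getattr(response, "related_questions", None) or []
```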
2025-04-13 09:24:05 -04:00
Christophe Bornet
42944f3499 core: Improve mypy config (#30737)
* Cleanup mypy config
* Add mypy `strict` rules except `disallow_any_generics`,
`warn_return_any` and `strict_equality` (TODO)
* Add mypy `strict_byte` rule
* Add mypy support for PEP702 `@deprecated` decorator
* Bump mypy version to 1.15

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-04-11 16:35:13 -04:00
mpb159753
bb2c2fd885 docs: Add openGauss vector store documentation (#30742)
Hey LangChain community! 👋 Excited to propose official documentation for
our new openGauss integration that brings powerful vector capabilities
to the stack!

### What's Inside 📦  
1. **Full Integration Guide**  
Introducing
[langchain-opengauss](https://pypi.org/project/langchain-opengauss/) on
PyPI - your new toolkit for:
   🔍 Native hybrid search (vectors + metadata)  
   🚀 Production-grade connection pooling  
   🧩 Automatic schema management  

2. **Rigorous Testing Passed**   
![Benchmark
Results](https://github.com/user-attachments/assets/ae3b21f7-aeea-4ae7-a142-f2aec57936a0)
   - 100% non-async test coverage  

P.S. The current implementation resides in my personal repository:
https://github.com/mpb159753/langchain-opengauss. How can I transfer it
to the langchain-ai org? *Keen to hear your thoughts and make this
integration shine!*

---------

Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-04-11 20:31:39 +00:00
Christophe Bornet
913c896598 core: Add ruff rules FBT001 and FBT002 (#30695)
Add ruff rules
[FBT001](https://docs.astral.sh/ruff/rules/boolean-type-hint-positional-argument/)
and
[FBT002](https://docs.astral.sh/ruff/rules/boolean-default-value-positional-argument/).
Mostly `noqa`s to not introduce breaking changes and possible
non-breaking fixes have already been done in a [previous
PR](https://github.com/langchain-ai/langchain/pull/29424).
These rules will prevent new violations to happen.
2025-04-11 16:26:33 -04:00
William FH
2803a48661 core[patch]: Share executor for async callbacks run in sync context (#30779)
To avoid having to create ephemeral threads, grab the thread lock, etc.
2025-04-11 10:34:43 -07:00
Sydney Runkle
fdc2b4bcac core[lint]: Use 3.9 formatting for docs and tests (#30780)
Looks like `pyupgrade` was already used here but missed some docs and
tests.

This helps to keep our docs looking professional and up to date.
Eventually, we should lint / format our inline docs.
2025-04-11 10:39:25 -04:00
Sydney Runkle
48affc498b langchain[lint]: use pyupgrade to get to 3.9 standards (#30782) 2025-04-11 10:33:26 -04:00
ccurme
d9b628e764 xai: release 0.2.3 (#30790) 2025-04-11 14:05:11 +00:00
ccurme
9cfb95e621 xai[patch]: support reasoning content (#30758)
https://docs.x.ai/docs/guides/reasoning

```python
from langchain.chat_models import init_chat_model

llm = init_chat_model(
    "xai:grok-3-mini-beta",
    reasoning_effort="low"
)
response = llm.invoke("Hello, world!")
```
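
The reasoning tokens should then be available on the response; a hedged
sketch (the field location is an assumption based on similar integrations):

```python
print(response.additional_kwargs.get("reasoning_content"))
```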
2025-04-11 14:00:27 +00:00
Christophe Bornet
89f28a24d3 core[lint]: Fix typing in test_async_callbacks (#30788) 2025-04-11 07:26:38 -04:00
Sydney Runkle
8c6734325b partners[lint]: run pyupgrade to get code in line with 3.9 standards (#30781)
Using `pyupgrade` to get all `partners` code up to 3.9 standards
(mostly, fixing old `typing` imports).
2025-04-11 07:18:44 -04:00
Jacob Lee
e72f3c26a0 fix(ollama): Remove redundant message from response_metadata (#30778) 2025-04-10 23:12:57 -07:00
Jannik Maierhöfer
f3c3ec9aec docs: add langfuse integration to provider list (#30573)
This PR adds the Langfuse integration to the provider list.
2025-04-10 22:25:42 -04:00
Christophe Bornet
dc19d42d37 core: Specify code when ignoring type issue (ruff PGH003) (#30675)
See https://docs.astral.sh/ruff/rules/blanket-type-ignore/
2025-04-10 22:23:52 -04:00
Paul Czarkowski
68d16d8a07 Community: Add Managed Identity support for Azure AI Search (#30730)
Add Managed Identity support for Azure AI Search

---------

Signed-off-by: Paul Czarkowski <username.taken@gmail.com>
2025-04-10 22:22:58 -04:00
CtrlMj
5103594a2c replace the deprecated initialize_agent in playwright.ipynb with create_react_agent (#30734)
**Description:** Replaced the example with the deprecated
`intialize_agent` function with `create_react_agent` from
`langgraph.prebuild`

**Issue:** #29277 

**Dependencies:** N/A
**Twitter handle:** N/A
2025-04-10 22:12:12 -04:00
Eugene Yurtsev
e42b3d285a langchain: remove langchain-server script (#30755)
Has been replaced by langsmith a long long time ago
2025-04-10 22:11:42 -04:00
Pol de Font-Réaulx
48cf7c838d feat(community): add oauth2 support for Jira toolkit (#30684)
**Description:** add support for OAuth2 in the Jira tool by adding the
possibility to pass a dictionary with OAuth parameters. I also adapted
the documentation to show this new behavior.
2025-04-10 22:04:09 -04:00
Oleg Ovcharuk
b6fe7e8c10 docs: YDB Vector Store docs (#30636)
This PR adds docs about how to use YDB as a vector store

[YDB](https://ydb.tech/) is a versatile open-source distributed SQL
database. It supports [vector
search](https://ydb.tech/docs/en/yql/reference/udf/list/knn) which means
it can be used as a vector store with langchain.

The YDB vector store comes with the
[langchain-ydb](https://pypi.org/project/langchain-ydb/) PyPI package.

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-04-10 21:33:56 -04:00
湛露先生
7a4ae6fbff community[patch]: simplify cache logic (#30760)

Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-04-10 19:20:57 -04:00
ccurme
8e053ac9d2 core[patch]: support customization of backoff parameters in with_retries (#30773)
Co-authored-by: Sydney Runkle <54324534+sydney-runkle@users.noreply.github.com>
2025-04-10 19:18:36 -04:00
amohan
e981a9810d docs: update links in cloudflare docs (#30776)
Thanks for reviewing again. I was notified of some
[links](https://python.langchain.com/api_reference/cloudflare/) not
being correct in the default integrations so updating them in this PR.
2025-04-10 19:08:18 -04:00
William FH
70532a65f8 Async callback benchmark (#30777) 2025-04-10 15:47:19 -07:00
Sydney Runkle
c6172d167a Only run CodSpeed benchmarks with run-codspeed-benchmarks label (#30774) 2025-04-10 15:48:14 -04:00
amohan
f70df01e01 docs: Update ordering of cloudflare integration examples in providers page (#30768)
Updated the ordering of cloudflare integrations and updated import
examples. Follow up from
https://github.com/langchain-ai/langchain/pull/30749
2025-04-10 15:34:58 -04:00
Sydney Runkle
8f8fea2d7e [performance]: Use hard coded langchain-core version to avoid importlib import (#30744)
This PR aims to reduce import time of `langchain-core` tools by removing
the `importlib.metadata` import previously used in `__init__.py`. This
is the first in a sequence of PRs to reduce import time delays for
`langchain-core` features and structures 🚀.

Because we're now hard coding the version, we need to make sure
`version.py` and `pyproject.toml` stay in sync, so I've added a new CI
job that runs whenever either of those files are modified. [This
run](https://github.com/langchain-ai/langchain/actions/runs/14358012706/job/40251952044?pr=30744)
demonstrates the failure that occurs whenever the version gets out of
sync (thus blocking a PR).

Before (note the ~15% of time spent on `importlib.metadata` and related
imports):

<img width="1081" alt="Screenshot 2025-04-09 at 9 06 15 AM"
src="https://github.com/user-attachments/assets/59f405ec-ee8d-4473-89ff-45dea5befa31"
/>

After (note, lack of `importlib.metadata` time sink):

<img width="1245" alt="Screenshot 2025-04-09 at 9 01 23 AM"
src="https://github.com/user-attachments/assets/9c32e77c-27ce-485e-9b88-e365193ed58d"
/>
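
A sketch of the change (the version string is illustrative):

```python
# version.py (sketch)
# Before: resolving the version at import time pulls in importlib.metadata.
# from importlib import metadata
# __version__ = metadata.version("langchain-core")

# After: a hard-coded constant, kept in sync with pyproject.toml by the new CI job.
__version__ = "0.3.51"
```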
2025-04-10 14:15:02 -04:00
Sydney Runkle
cd6a83117c Adding more import time benchmarks for langchain-core (#30770)
Plus minor typo fix in `ChatPromptTemplate` case id.
2025-04-10 11:50:12 -04:00
Chamath K.B. Attanayaka
6c45c9efc3 docs: update clickhouse version in notebook example (#30754)
Update the clickhouse Docker version tag in the notebook example to avoid
compatibility issues with clickhouse-connect.
2025-04-10 09:51:54 -04:00
amohan
44b83460b2 docs: Add Cloudflare integrations (#30749)
Description:
This PR adds documentation for the langchain-cloudflare integration
package.

Issue:
N/A

Dependencies:
No new dependencies are required.

Tests and Docs:

Added an example notebook demonstrating the usage of the
langchain-cloudflare package, located in docs/docs/integrations.
Added a new package to libs/packages.yml.

Lint and Format:

Successfully ran make format and make lint.

---------

Co-authored-by: Collier King <collier@cloudflare.com>
Co-authored-by: Collier King <collierking99@gmail.com>
2025-04-10 09:27:23 -04:00
湛露先生
c87a270e5f cookbook: Fix docs typos. (#30763)

Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-04-10 09:13:24 -04:00
ccurme
63c16f5ca8 community: deprecate AzureCosmosDBNoSqlVectorSearch in favor of langchain-azure-ai implementation (#30756) 2025-04-09 21:04:16 +00:00
Christophe Bornet
4cc7bc6c93 core: Add ruff rules PLR (#30696)
Add ruff rules [PLR](https://docs.astral.sh/ruff/rules/#refactor-plr)
Except PLR09xxx and PLR2004.

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-04-09 15:15:38 -04:00
célina
68361f9c2d partners: (langchain-huggingface) Embeddings - Integrate Inference Providers and remove deprecated code (#30735)
Hi there, This is a complementary PR to #30733.
This PR introduces support for Hugging Face's serverless Inference
Providers (documentation
[here](https://huggingface.co/docs/inference-providers/index)), allowing
users to specify different providers

This PR also removes the usage of `InferenceClient.post()` method in
`HuggingFaceEndpointEmbeddings`, in favor of the task-specific
`feature_extraction` method. `InferenceClient.post()` is deprecated and
will be removed in `huggingface_hub` v0.31.0.

## Changes made

- bumped the minimum required version of the `huggingface_hub` package
to ensure compatibility with the latest API usage.
- added a provider field to `HuggingFaceEndpointEmbeddings`, enabling
users to select the inference provider.
- replaced the deprecated `InferenceClient.post()` call in
`HuggingFaceEndpointEmbeddings` with the task-specific
`feature_extraction` method for future-proofing, `post()` will be
removed in `huggingface-hub` v0.31.0.

 All changes are backward compatible.

---------

Co-authored-by: Lucain <lucainp@gmail.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
2025-04-09 19:05:43 +00:00
Christophe Bornet
98f0016fc2 core: Add ruff rules ARG (#30732)
See https://docs.astral.sh/ruff/rules/#flake8-unused-arguments-arg
2025-04-09 14:39:36 -04:00
Sydney Runkle
66758599a9 [ci]: Quick codspeed.yml tweaks to enable comparisons with master (#30752)
* Only run codspeed logic when `libs/core` is changed (for now, we'll
want to add other benchmarks later
* Also run on `master` so that we can get a reference :)
2025-04-09 13:13:49 -04:00
theosaurus
d47d6ecbc3 docs: Fix typo in get_separators_for_language method section (#30748)
2025-04-09 13:03:01 -04:00
Sydney Runkle
78ec7d886d [performance]: Adding benchmarks for common langchain-core imports (#30747)
The first in a sequence of PRs focusing on improving performance in
core. We're starting with reducing import times for common structures,
hence the benchmarks here.

The benchmark looks a little bit complicated - we have to use a process
so that we don't suffer from Python's import caching system. I tried
doing manual modification of `sys.modules` between runs, but that's
pretty tricky / hacky to get right, hence the subprocess approach.

Motivated by extremely slow baseline for common imports (we're talking
2-5 seconds):

<img width="633" alt="Screenshot 2025-04-09 at 12 48 12 PM"
src="https://github.com/user-attachments/assets/994616fe-1798-404d-bcbe-48ad0eb8a9a0"
/>

Also added a `make benchmark` command to make local runs easy :).
Currently using wall times so that we can track total time despite using
a manual process.
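
A minimal sketch of the approach (the module under test is illustrative):

```python
import subprocess
import sys

# Run the import in a fresh interpreter so sys.modules caching from
# earlier runs cannot hide the true cold-import cost.
snippet = (
    "import time; t0 = time.perf_counter(); "
    "import langchain_core.messages; "
    "print(time.perf_counter() - t0)"
)
out = subprocess.run(
    [sys.executable, "-c", snippet], capture_output=True, text=True, check=True
)
print(f"cold import: {float(out.stdout):.3f}s")
```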
2025-04-09 13:00:15 -04:00
German Molina
5fb261ce27 community: Google Vertex AI Search now returns the website title as part of the document metadata (#30688)
Google Vertex AI Search will now return the title of the found website
as part of the document metadata, if available.

- **Description**: Vertex AI Search can be used to index websites and
then develop chatbots that use these websites to answer questions. At
present, the document metadata includes an `id` and `source` (which is
the URL). While the URL is enough to create a link, the ID is not
descriptive enough to show users. Therefore, I propose we return `title`
as well, when available (e.g., it will not be available in `.txt`
documents found during the website indexing).
- **Issue**: No bug in particular, but it would be better if this was
here.
- **Dependencies**: None
- I do not use twitter.

Format, Lint and Test seem to be all good.
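
A sketch of what the title metadata described above looks like for a
retrieved document (retriever setup omitted; values hypothetical):

```python
docs = retriever.invoke("what does this site say about pricing?")
for doc in docs:
    # "title" is present when available (e.g. HTML pages) and absent for
    # .txt documents found during website indexing.
    print(doc.metadata.get("title"), doc.metadata["source"])
```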
2025-04-09 08:54:06 -04:00
theosaurus
636d831d27 docs: Fix typo in "Query re-writing" section (#30736)
2025-04-09 08:50:14 -04:00
giulia_p_lib
deec538335 docs: fix small typo in map_rerank_docs_chain.ipynb (#30738)
**Description:** fixed a minor typo in map_rerank_docs_chain.ipynb
2025-04-09 08:49:37 -04:00
Akshay Dongare
164e606cae docs: fix import path and update LiteLLM integration docs (#30685)
- [x] **PR title**: "docs: Update import path and LiteLLM integration
docs"
- Update the old import path for `ChatLiteLLM` to reflect the new export
from
[`__init__.py`](https://github.com/Akshay-Dongare/langchain-litellm/blob/main/langchain_litellm/__init__.py)
in
[`langchain-litellm`](https://github.com/Akshay-Dongare/langchain-litellm)
package

- [x] **PR message**:
    - **Description:** 
    - 🔗 **Follow-up to**: PR #30637
    - 🔧 **Fixes**: #30368
- 💬 **Based on this comment from** @ccurme:
https://github.com/langchain-ai/langchain/pull/30637#discussion_r2029084320
 


- [x] **About me**
🔗 LinkedIn:
[akshay-dongare](https://www.linkedin.com/in/akshay-dongare/)
2025-04-08 13:04:17 -04:00
Ikko Eltociear Ashimine
5686fed40b docs: update yellowbrick.ipynb (#30729)
retreival -> retrieval
2025-04-08 11:56:35 -04:00
Sydney Runkle
4556b81b1d Clean up numpy dependencies and speed up 3.13 CI with numpy>=2.1.0 (#30714)
Generally, this PR is CI performance focused + aims to clean up some
dependencies at the same time.

1. Unpins upper bounds for `numpy` in all `pyproject.toml` files where
`numpy` is specified
2. Requires `numpy >= 2.1.0` for Python 3.13 and `numpy > v1.26.0` for
Python 3.12, plus a `numpy` min version bump for `chroma`
3. Speeds up CI by minutes - linting on Python 3.13, installing `numpy <
2.1.0` was taking [~3
minutes](https://github.com/langchain-ai/langchain/actions/runs/14316342925/job/40123305868?pr=30713),
now the entire env setup takes a few seconds
4. Deleted the `numpy` test dependency from partners where that was not
used, specifically `huggingface`, `voyageai`, `xai`, and `nomic`.

It's a bit unfortunate that `langchain-community` depends on `numpy`; we
might want to try to fix that in the future...

Closes https://github.com/langchain-ai/langchain/issues/26026
Fixes https://github.com/langchain-ai/langchain/issues/30555
2025-04-08 09:45:07 -04:00
ccurme
163730aef4 docs: update SQL QA prompt (#30728)
Resolves https://github.com/langchain-ai/langchain/issues/30724

The [prompt in
langchain-hub](https://smith.langchain.com/hub/langchain-ai/sql-query-system-prompt)
used in this guide was composed of just a system message, but the guide
did not add a human message to it. This was incompatible with some
providers (and is generally not a typical usage pattern).

The prompt in prompt hub has been updated to split the question into a
separate HumanMessage. Here we update the guide to reflect this.
2025-04-08 09:42:49 -04:00
湛露先生
9cbe91896e Fix deepseek release tag, as the name was updated. (#30717)

Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-04-08 08:43:16 -04:00
Nithish Raghunandanan
893942651b docs: Update couchbase vector store docs (#30710)
- **Update LangChain-Couchbase documentation**
- Rename `CouchbaseVectorStore` to `CouchbaseSearchVectorStore`

- [x] **Lint and test**
2025-04-07 18:45:14 -04:00
Eugene Yurtsev
3ce0587199 ci: remove unused debug action (#30713)
Removing an unused action
2025-04-07 22:32:37 +00:00
ccurme
a2bec5f2e5 ollama: release 0.3.1 (#30716) 2025-04-07 20:31:25 +00:00
ccurme
e3f15f0a47 ollama[patch]: add model_name to response metadata (#30706)
Fixes [this standard
test](https://python.langchain.com/api_reference/standard_tests/integration_tests/langchain_tests.integration_tests.chat_models.ChatModelIntegrationTests.html#langchain_tests.integration_tests.chat_models.ChatModelIntegrationTests.test_usage_metadata).
2025-04-07 16:27:58 -04:00
ccurme
e106e9602f groq[patch]: add retries to integration tests (#30707)
Tool-calling tests started intermittently failing with
> groq.APIError: Failed to call a function. Please adjust your prompt.
See 'failed_generation' for more details.
2025-04-07 12:45:53 -04:00
aaronlaitner
4f9f97bd12 docs: replaced initialize_agent with create_react_agent in dalle_image_generator.ipynb (#30697)
## Description:

Replaced deprecated 'initialize_agent' with 'create_react_agent' in
dalle_image_generator.ipynb
## Issue:
#29277

## Dependencies:
None

## Twitter handle:
@Thatopman

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-07 13:33:52 +00:00
Mohammad Mohtashim
e935da0b12 ChatTongyi reasoning_content fix (#30694)
- **Description:** Small fix for `reasoning_content` key
- **Issue:** #30689
2025-04-07 09:27:33 -04:00
Tin Lai
4d03ba4686 langchain_qdrant: fix showing the missing sparse vector name (#30701)
**Description:** The error message was supposed to display the missing
vector name, but instead, it includes only the existing collection
configs.

This simple PR just includes the correct variable name, so that the user
knows the requested vector does not exist in the collection.


Signed-off-by: Tin Lai <tin@tinyiu.com>
2025-04-07 09:19:08 -04:00
Eugene Yurtsev
30af9b8166 Update bug-report.yml (#30680)
Update bug report template!
2025-04-04 16:47:50 -04:00
jessicaou
2712ecffeb Update Contributor Docs (#30679)
Deletes statement on integration marketing
2025-04-04 16:35:11 -04:00
Ninad Sinha
a3671ceb71 docs: Add tools for hyperbrowser (#30606)
Description

This PR updates the docs for the
[langchain-hyperbrowser](https://pypi.org/project/langchain-hyperbrowser/)
package. It adds a few tools
 - Scrape Tool
 - Crawl Tool
 - Extract Tool
 - Browser Agents
   - Claude Computer Use
   - OpenAI CUA
   - Browser Use 

[Hyperbrowser](https://hyperbrowser.ai/) is a platform for running and
scaling headless browsers. It lets you launch and manage browser
sessions at scale and provides easy to use solutions for any webscraping
needs, such as scraping a single page or crawling an entire site.

Issue
None

Dependencies
None

Twitter Handle
`@hyperbrowser`
2025-04-04 16:02:47 -04:00
Christophe Bornet
6650b94627 core: Add ruff rules PYI (#29335)
See https://docs.astral.sh/ruff/rules/#flake8-pyi-pyi

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-04-04 19:59:44 +00:00
Philippe PRADOS
d8e3b7667f community[patch]: Fix empty producer in PDF Parsers (#30620)
Fix an issue where a PDF file without a "producer" entry in its metadata would raise an exception.
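
A sketch of the defensive lookup (the metadata dict is illustrative):

```python
metadata = {"creator": "LaTeX"}  # a PDF with no "producer" entry

# .get() returns a default instead of raising, unlike metadata["producer"].
producer = metadata.get("producer", "")
```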
2025-04-04 15:53:49 -04:00
Christophe Bornet
f0159c7125 core: Add ruff rules PGH (except PGH003) (#30656)
Add ruff rules PGH: https://docs.astral.sh/ruff/rules/#pygrep-hooks-pgh
Except PGH003 which will be dealt in a dedicated PR.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2025-04-04 19:53:27 +00:00
Jorge Ángel Juárez Vázquez
2491237473 docs: Add Google Calendar documentation (#30633)
## Docs: Add Google Calendar Toolkit Documentation

### Description:
This PR adds documentation for the Google Calendar Toolkit as part of
the `langchain-google` repository. Refer to the related PR: [community:
Add Google Calendar
Toolkit](https://github.com/langchain-ai/langchain-google/pull/688).

### Issue:
N/A 

### Twitter handle:
@jorgejrzz
2025-04-04 15:53:03 -04:00
Armaanjeet Singh Sandhu
7c2468f36b core: Fix handler removal in BaseCallbackManager (Fixes #30640) (#30659)
**Description:**  
Fixed a bug in `BaseCallbackManager.remove_handler()` that caused a
`ValueError` when removing a handler added via the constructor's
`handlers` parameter. The issue occurred because handlers passed to the
constructor were added only to the `handlers` list and not automatically
to `inheritable_handlers` unless explicitly specified. However,
`remove_handler()` attempted to remove the handler from both lists
unconditionally, triggering a `ValueError` when it wasn't in
`inheritable_handlers`.

The fix ensures the method checks for the handler’s presence in each
list before attempting removal, making it more robust while preserving
its original behavior.
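
A minimal sketch of the fix as described:

```python
def remove_handler(self, handler: BaseCallbackHandler) -> None:
    # Check each list before removing; a handler added only via the
    # constructor's `handlers` parameter may not be in both lists.
    if handler in self.handlers:
        self.handlers.remove(handler)
    if handler in self.inheritable_handlers:
        self.inheritable_handlers.remove(handler)
```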

**Issue:** Fixes #30640

**Dependencies:** None

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-04-04 15:45:15 -04:00
Mohammad Mohtashim
bff56c5fa6 community[patch]: Redundant Parser checker for Webbaseloader (#30632)
- **Description:** We do not need to set the parser in `scrape` since it has
already been done in `_scrape`
- **Issue:** #30629; not directly related, but this makes sure the XML parser
is used
2025-04-04 14:11:26 -04:00
Christophe Bornet
150ac0cb79 core: Add ruff rules DTZ (#30657)
Add ruff rules DTZ:
https://docs.astral.sh/ruff/rules/#flake8-datetimez-dtz
2025-04-04 13:43:47 -04:00
Christophe Bornet
5e418c2666 core: Rework pydantic version checks (#30653)
This pull request includes various changes to the `langchain_core`
library, focusing on improving compatibility with different versions of
Pydantic. The primary change involves replacing checks for Pydantic
major versions with boolean flags, which simplifies the code and
improves readability.
This also solves ruff rule checks for
[RUF048](https://docs.astral.sh/ruff/rules/map-int-version-parsing/) and
[PLR2004](https://docs.astral.sh/ruff/rules/magic-value-comparison/).

Key changes include:

### Compatibility Improvements:
*
[`libs/core/langchain_core/output_parsers/json.py`](diffhunk://#diff-5add0cf7134636ae4198a1e0df49ee332ae0c9123c3a2395101e02687c717646L22-R24):
Replaced `PYDANTIC_MAJOR_VERSION` with `IS_PYDANTIC_V1` to check for
Pydantic version 1.
*
[`libs/core/langchain_core/output_parsers/pydantic.py`](diffhunk://#diff-2364b5b4aee01c462aa5dbda5dc3a877dcd20f29df173ad540dc8adf8b192361L14-R14):
Updated version checks from `PYDANTIC_MAJOR_VERSION` to `IS_PYDANTIC_V2`
in the `PydanticOutputParser` class.
[[1]](diffhunk://#diff-2364b5b4aee01c462aa5dbda5dc3a877dcd20f29df173ad540dc8adf8b192361L14-R14)
[[2]](diffhunk://#diff-2364b5b4aee01c462aa5dbda5dc3a877dcd20f29df173ad540dc8adf8b192361L27-R27)

### Utility Enhancements:
*
[`libs/core/langchain_core/utils/pydantic.py`](diffhunk://#diff-ff28020c5f1073a8b63bcd9d8b756a187fd682cb81935295120c63b207071896R23):
Introduced `IS_PYDANTIC_V1` and `IS_PYDANTIC_V2` flags and deprecated
the `get_pydantic_major_version` function. Updated various functions to
use these flags instead of version numbers.
[[1]](diffhunk://#diff-ff28020c5f1073a8b63bcd9d8b756a187fd682cb81935295120c63b207071896R23)
[[2]](diffhunk://#diff-ff28020c5f1073a8b63bcd9d8b756a187fd682cb81935295120c63b207071896R42-R78)
[[3]](diffhunk://#diff-ff28020c5f1073a8b63bcd9d8b756a187fd682cb81935295120c63b207071896L90-R89)
[[4]](diffhunk://#diff-ff28020c5f1073a8b63bcd9d8b756a187fd682cb81935295120c63b207071896L104-R101)
[[5]](diffhunk://#diff-ff28020c5f1073a8b63bcd9d8b756a187fd682cb81935295120c63b207071896L120-R122)
[[6]](diffhunk://#diff-ff28020c5f1073a8b63bcd9d8b756a187fd682cb81935295120c63b207071896L135-R132)
[[7]](diffhunk://#diff-ff28020c5f1073a8b63bcd9d8b756a187fd682cb81935295120c63b207071896L149-R151)
[[8]](diffhunk://#diff-ff28020c5f1073a8b63bcd9d8b756a187fd682cb81935295120c63b207071896L164-R161)
[[9]](diffhunk://#diff-ff28020c5f1073a8b63bcd9d8b756a187fd682cb81935295120c63b207071896L248-R250)
[[10]](diffhunk://#diff-ff28020c5f1073a8b63bcd9d8b756a187fd682cb81935295120c63b207071896L330-R335)
[[11]](diffhunk://#diff-ff28020c5f1073a8b63bcd9d8b756a187fd682cb81935295120c63b207071896L356-R357)
[[12]](diffhunk://#diff-ff28020c5f1073a8b63bcd9d8b756a187fd682cb81935295120c63b207071896L393-R390)
[[13]](diffhunk://#diff-ff28020c5f1073a8b63bcd9d8b756a187fd682cb81935295120c63b207071896L403-R400)

### Test Updates:
*
[`libs/core/tests/unit_tests/output_parsers/test_openai_tools.py`](diffhunk://#diff-694cc0318edbd6bbca34f53304934062ad59ba9f5a788252ce6c5f5452489d67L19-R22):
Updated tests to use `IS_PYDANTIC_V1` and `IS_PYDANTIC_V2` for version
checks.
[[1]](diffhunk://#diff-694cc0318edbd6bbca34f53304934062ad59ba9f5a788252ce6c5f5452489d67L19-R22)
[[2]](diffhunk://#diff-694cc0318edbd6bbca34f53304934062ad59ba9f5a788252ce6c5f5452489d67L532-R535)
[[3]](diffhunk://#diff-694cc0318edbd6bbca34f53304934062ad59ba9f5a788252ce6c5f5452489d67L567-R570)
[[4]](diffhunk://#diff-694cc0318edbd6bbca34f53304934062ad59ba9f5a788252ce6c5f5452489d67L602-R605)
*
[`libs/core/tests/unit_tests/prompts/test_chat.py`](diffhunk://#diff-3e60e744842086a4f3c4b21bc83e819c3435720eab210078e77e2430fb8c7e84R7):
Replaced version tuple checks with `PYDANTIC_VERSION` comparisons.
[[1]](diffhunk://#diff-3e60e744842086a4f3c4b21bc83e819c3435720eab210078e77e2430fb8c7e84R7)
[[2]](diffhunk://#diff-3e60e744842086a4f3c4b21bc83e819c3435720eab210078e77e2430fb8c7e84L35-R38)
[[3]](diffhunk://#diff-3e60e744842086a4f3c4b21bc83e819c3435720eab210078e77e2430fb8c7e84L924-R927)
[[4]](diffhunk://#diff-3e60e744842086a4f3c4b21bc83e819c3435720eab210078e77e2430fb8c7e84L935-R938)
*
[`libs/core/tests/unit_tests/runnables/test_graph.py`](diffhunk://#diff-99a290330ef40103d0ce02e52e21310d6fadea142bfdea13c94d23fc81c0bb5dR3):
Simplified version checks using `PYDANTIC_VERSION`.
[[1]](diffhunk://#diff-99a290330ef40103d0ce02e52e21310d6fadea142bfdea13c94d23fc81c0bb5dR3)
[[2]](diffhunk://#diff-99a290330ef40103d0ce02e52e21310d6fadea142bfdea13c94d23fc81c0bb5dL15-R18)
[[3]](diffhunk://#diff-99a290330ef40103d0ce02e52e21310d6fadea142bfdea13c94d23fc81c0bb5dL234-L239)
*
[`libs/core/tests/unit_tests/runnables/test_runnable.py`](diffhunk://#diff-06bed920c0dad0cfd41d57a8d9e47a7b56832409649c10151061a791860d5bb5L18-R20):
Introduced `PYDANTIC_VERSION_AT_LEAST_29` and
`PYDANTIC_VERSION_AT_LEAST_210` for more readable version checks.
[[1]](diffhunk://#diff-06bed920c0dad0cfd41d57a8d9e47a7b56832409649c10151061a791860d5bb5L18-R20)
[[2]](diffhunk://#diff-06bed920c0dad0cfd41d57a8d9e47a7b56832409649c10151061a791860d5bb5L92-R99)
[[3]](diffhunk://#diff-06bed920c0dad0cfd41d57a8d9e47a7b56832409649c10151061a791860d5bb5L230-R233)
[[4]](diffhunk://#diff-06bed920c0dad0cfd41d57a8d9e47a7b56832409649c10151061a791860d5bb5L652-R655)
2025-04-04 13:42:30 -04:00
Christophe Bornet
43b5dc7191 core: Add ruff rules TD and FIX (#30654)
Add ruff rules:
* FIX: https://docs.astral.sh/ruff/rules/#flake8-fixme-fix
* TD: https://docs.astral.sh/ruff/rules/#flake8-todos-td

Code cleanup:

*
[`libs/core/langchain_core/outputs/chat_generation.py`](diffhunk://#diff-a1017ee46f58fa4005b110ffd4f8e1fb08f6a2a11d6ca4c78ff8be641cbb89e5L56-R56):
Removed the "HACK" prefix from a comment in the `set_text` method.

Configuration adjustments:

*
[`libs/core/pyproject.toml`](diffhunk://#diff-06baaee12b22a370fef9f170c9ed13e2727e377d3b32f5018430f4f0a39d3537R85-R93):
Added new rules `FIX002`, `TD002`, and `TD003` to the ignore list.
*
[`libs/core/pyproject.toml`](diffhunk://#diff-06baaee12b22a370fef9f170c9ed13e2727e377d3b32f5018430f4f0a39d3537L102-L108):
Removed the `FIX` and `TD` rules from the ignore list.

Test refinement:

*
[`libs/core/tests/unit_tests/runnables/test_runnable.py`](diffhunk://#diff-06bed920c0dad0cfd41d57a8d9e47a7b56832409649c10151061a791860d5bb5L3231-R3232):
Updated a TODO comment to improve clarity in the `test_map_stream`
function.
2025-04-04 13:40:42 -04:00
ccurme
a007c57285 docs: update package registry sort order (#30677) 2025-04-04 13:12:39 -04:00
Sydney Runkle
33ed7c31da docs: fix perplexity install instructions in ChatPerplexity docstring (#30676)
* `openai` install no longer needs to be done manually
2025-04-04 12:58:18 -04:00
Dhruvajyoti Sarma
f9bb5ec5d0 feature: removed pandas dataframe dependency for similarity_search when using DuckDB as vector store (#30445)
- [ ] **PR title**: "community: Removes pandas dependency for using
DuckDB for similarity search"


- [ ] **PR message**: 
- **Description:** Removes pandas dependency for using DuckDB for
similarity search. The old function still exists as
`similarity_search_pd`, while the new one is at `similarity_search` and
requires no code changes. Return format remains the same.
    - **Issue:** Issue #29933 and update on PR #30435 
    - **Dependencies:** No dependencies
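
A hedged usage sketch (constructor parameters and embedding model are
assumptions):

```python
import duckdb
from langchain_community.vectorstores import DuckDB
from langchain_openai import OpenAIEmbeddings

store = DuckDB(connection=duckdb.connect(":memory:"), embedding=OpenAIEmbeddings())
store.add_texts(["hello world", "goodbye world"])

# Returns a list of Documents directly; pandas is no longer required.
docs = store.similarity_search("hello", k=1)
```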
2025-04-04 12:19:18 -04:00
Akshay Dongare
f79473b752 Solved issue Implement langchain-litellm #30368 (#30637)
**PR title**: 
- [x] 1. docs: docs/docs/integrations/providers/LiteLLM.md
- [x] 2. docs: docs/docs/integrations/chat/litellm.ipynb
- [x] 3. libs: libs/packages.yml

- [x] **PR message**:
    - **Description:** Implement langchain-litellm
    - **Issue:** the issue #30368 
    - **Twitter handle:** akshay_d02
    - **LinkedIn Handle** https://linkedin.com/in/akshay-dongare 

- [x] **Add tests and docs**: Done

- [x] **Lint and test**: Done

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-04 16:12:10 +00:00
Yiğit Bekir Kaya, PhD
87e82fe1e8 Added langchain-qwq package documentation (Alibaba Cloud) (#30628)
LangChain QwQ allows non-Tongyi users to access thinking models with
extra capabilities, serving as an extension to Alibaba Cloud.

Hi @ccurme I'm back with the updated PR this time with documentation and
a finished package.

- [x] **PR title**: "package: description"
- Where "package" is whichever of langchain, community, core, etc. is
being modified. Use "docs: ..." for purely docs changes, "infra: ..."
for CI changes.
  - Example: "community: add foobar LLM"



- **Description:** adds documentation of `langchain-qwq` integration
package. Also adds it to Alibaba Cloud provider
- **Issue:** #30580 #30317 #30579
- **Dependencies:** openai, json-repair
- **Twitter handle:** YigitBekir


- [x] **Add tests and docs**
- [x] **Lint and test**

2025-04-04 11:47:14 -04:00
Andrew Benton
4e7a9a7014 community: Add support for custom runtimes to Riza tools (#30664)
**Description:**
Adds support for Riza custom runtimes to the two Riza code interpreter
tools, allowing users to run LLM-generated code that depends on
libraries outside stdlib.
**Issue:** N/A
**Dependencies:** None
**Twitter handle:** @rizaio
2025-04-04 11:03:14 -04:00
diego dupin
aa37893c00 MariaDB vector store documentation addition (#30229)
### New Feature

Since version 11.7.1, MariaDB supports vectors. This is a super fast
implementation (see [this perf
blog](https://smalldatum.blogspot.com/2025/01/evaluating-vector-indexes-in-mariadb.html)).
The goal is to support MariaDB with langchain.

Implementation is done in
https://github.com/mariadb-corporation/langchain-mariadb, published in
https://pypi.org/project/langchain-mariadb/

This PR concerns the documentation addition.
 

(initial PR https://github.com/langchain-ai/langchain/pull/29989)

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
Co-authored-by: Oskar Stark <oskarstark@googlemail.com>
2025-04-04 14:56:25 +00:00
Sydney Runkle
1cdea6ab07 langchain-community: release 0.3.21 (#30673) 2025-04-04 14:14:50 +00:00
Sydney Runkle
901dffe06b langchain: release 0.3.23 (#30670)
* Bump `text-splitters` min version
* Bump `langchain-core` min version
* Bump `langchain` version 🚀
2025-04-04 10:06:29 -04:00
ccurme
0c2c8c36c1 text-splitters: release 0.3.8 (#30671) 2025-04-04 09:58:45 -04:00
ccurme
59d508a2ee openai[patch]: make computer test more reliable (#30672) 2025-04-04 13:53:59 +00:00
Sydney Runkle
c235328b39 Revert "update langchain version and bump min core v"
This reverts commit d0f154dbaa.
2025-04-04 09:31:51 -04:00
Sydney Runkle
d0f154dbaa update langchain version and bump min core v 2025-04-04 09:27:49 -04:00
Sydney Runkle
32cd70d7d2 release: bump core to v0.3.51 (#30668) 2025-04-04 13:23:09 +00:00
Max Forsey
18cf457eec langchain-runpod integration (#30648)
## Description:

This PR adds the necessary documentation for the `langchain-runpod`
partner package integration. It includes:

* A provider page (`docs/docs/integrations/providers/runpod.ipynb`)
explaining the overall setup.
* An LLM component page (`docs/docs/integrations/llms/runpod.ipynb`)
detailing the `RunPod` class usage.
* A Chat Model component page
(`docs/docs/integrations/chat/runpod.ipynb`) detailing the `ChatRunPod`
class usage, including a feature support table.

These documentation files reflect the latest features of the
`langchain-runpod` package (v0.2.0+) such as async support and API
polling logic.

This work also addresses the review feedback provided on the previous
attempt in PR #30246 by:
*   Removing all TODOs from documentation.
*   Adding the required links between provider and component pages.
*   Completing the feature support table in the chat documentation.
*   Linking to the source code on GitHub for API reference.

Finally, it registers the `langchain-runpod` package in
`libs/packages.yml`.

## Dependencies:

None added to the core LangChain repository by these documentation
changes. The required dependency (`langchain-runpod`) is managed as a
separate package.

## Twitter handle:

@runpod_io

---------

Co-authored-by: Max Forsey <maxpod@maxpod.local>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-03 23:57:06 +00:00
NikeHop
9c03cd5775 Fix tool description in serpapi.ipynb (#30660)
Fix the tool description and name in the example initialization of the
SerpAPI tool, which were still those of the Python REPL tool.

- @RLHoeppi

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-03 23:36:29 +00:00
Sydney Runkle
af66ab098e Adding Perplexity extra and deprecating the community version of ChatPerplexity (#30649)
Plus, some accompanying docs updates

Some compelling usage:

```py
from langchain_perplexity import ChatPerplexity


chat = ChatPerplexity(model="llama-3.1-sonar-small-128k-online")
response = chat.invoke(
    "What were the most significant newsworthy events that occurred in the US recently?",
    extra_body={"search_recency_filter": "week"},
)
print(response.content)
# > Here are the top significant newsworthy events in the US recently: ...
```

Also, some confirmation of structured outputs:

```py
from langchain_perplexity import ChatPerplexity
from pydantic import BaseModel


class AnswerFormat(BaseModel):
    first_name: str
    last_name: str
    year_of_birth: int
    num_seasons_in_nba: int


messages = [
    {"role": "system", "content": "Be precise and concise."},
    {
        "role": "user",
        "content": (
            "Tell me about Michael Jordan. "
            "Please output a JSON object containing the following fields: "
            "first_name, last_name, year_of_birth, num_seasons_in_nba. "
        ),
    },
]

llm = ChatPerplexity(model="llama-3.1-sonar-small-128k-online")
structured_llm = llm.with_structured_output(AnswerFormat)
response = structured_llm.invoke(messages)
print(repr(response))
#> AnswerFormat(first_name='Michael', last_name='Jordan', year_of_birth=1963, num_seasons_in_nba=15)
```
2025-04-03 14:29:17 -04:00
ccurme
b8929e3d5f docs: add image generation example to Google genai docs (#30650) 2025-04-03 14:21:54 -04:00
ccurme
374769e8fe core[patch]: log information from certain errors (#30626)
Some exceptions raised by SDKs include information in httpx responses
(see for example
[OpenAI](https://github.com/openai/openai-python/blob/main/src/openai/_exceptions.py)).
Here we trace information from those exceptions.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2025-04-03 16:45:19 +00:00
Sydney Runkle
17a9cd61e9 Bump langchain-core version in perplexity's pyproject.toml (#30647)
Blocking v0.1.0 release of `langchain-perplexity`
2025-04-03 16:19:10 +00:00
Sydney Runkle
3814bd1ea7 partners: Add Perplexity Chat Integration (#30618)
Perplexity's importance in the space has been growing, so we think it's
time to add an official integration!

Note: following the release of `langchain-perplexity` to `pypi`, we
should be able to add `perplexity` as an extra in
`libs/langchain/pyproject.toml`, but we're blocked by a circular import
for now.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-03 16:09:14 +00:00
vgrfl
87c02a1aff docs: Fixed a typo in 'Google AI vs Google Cloud Vertex AI' section (#30642)
**Description:** Corrected 'encription' spelling to 'encryption'
2025-04-03 09:04:29 -04:00
Alejandro Rodríguez
884125e129 community: support usage_metadata for litellm (#30625)
Support "usage_metadata" for LiteLLM. 

2025-04-02 19:45:15 -04:00
Jacob Lee
01d0cfe450 docs: Remove TODO from Ollama docs page (#30627) 2025-04-02 22:59:15 +00:00
Christophe Bornet
f241fd5c11 core: Add ruff rules RET (#29384)
See https://docs.astral.sh/ruff/rules/#flake8-return-ret
All auto-fixes
2025-04-02 16:59:56 -04:00
Eugene Yurtsev
9ae792f56c core: 0.3.50 release (#30623)
0.3.50 release
2025-04-02 14:46:23 -04:00
Christophe Bornet
ccc3d32ec8 core: Add ruff rules for Pylint PLC (Convention) and PLE (Errors) (#29286)
See https://docs.astral.sh/ruff/rules/#pylint-pl
2025-04-02 10:58:03 -04:00
ccurme
fe0fd9dd70 openai[patch]: upgrade tiktoken and fix test (#30621)
Related to https://github.com/langchain-ai/langchain/issues/30344

https://github.com/langchain-ai/langchain/pull/30542 introduced an
erroneous test for token counts for o-series models. tiktoken==0.8 does
not support o-series models in
`tiktoken.encoding_for_model(model_name)`, and this is the version of
tiktoken we had in the lock file. So we would default to `cl100k_base`
for o-series, which is the wrong encoding model. The test tested against
this wrong encoding (so it passed with tiktoken 0.8).

Here we update tiktoken to 0.9 in the lock file, and fix the expected
counts in the test. Verified that we are pulling
[o200k_base](https://github.com/openai/tiktoken/blob/main/tiktoken/model.py#L8),
as expected.
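
For illustration, a quick way to check which encoding tiktoken resolves
for an o-series model (a sketch, assuming tiktoken>=0.9 is installed):

```python
import tiktoken

# With tiktoken>=0.9, o-series model names should resolve to the
# o200k_base encoding; on 0.8 the lookup failed and callers fell back
# to cl100k_base, which produced wrong token counts.
enc = tiktoken.encoding_for_model("o1")
print(enc.name)                        # expected: "o200k_base"
print(len(enc.encode("hello world")))  # token count under o200k_base
```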
2025-04-02 10:44:48 -04:00
oxy-tg
38807871ec docs: Add Oxylabs integration (#30591)
Description:
This PR adds documentation for the langchain-oxylabs integration
package.

The documentation includes instructions for configuring Oxylabs
credentials and provides example code demonstrating how to use the
package.

Issue:
N/A

Dependencies:
No new dependencies are required.

Tests and Docs:

Added an example notebook demonstrating the usage of the
`langchain-oxylabs` package, located in docs/docs/integrations.
Added a provider page in docs/docs/providers.
Added a new package to libs/packages.yml.

Lint and Test:

Successfully ran make format, make lint, and make test.
2025-04-02 14:40:32 +00:00
ccurme
816492e1d3 openai: release 0.3.12 (#30616) 2025-04-02 13:20:15 +00:00
Bagatur
111dd90a46 openai[patch]: support structured output and tools (#30581)
Co-authored-by: ccurme <chester.curme@gmail.com>
2025-04-02 09:14:02 -04:00
Karol Zmorski
32f7695809 docs: Little update in sample notebook with WatsonxToolkit (#30614)
**Description:**

- Updated sample notebook with valid tools.
2025-04-02 09:08:29 -04:00
Mahir Shah
9d3262c7aa core: Propagate config_factories in RunnableBinding (#30603)
- **Description:** Propagates config_factories when calling decoration
methods for RunnableBinding--e.g. bind, with_config, with_types,
with_retry, and with_listeners. This ensures that configs attached to
the original RunnableBinding are kept when creating the new
RunnableBinding and the configs are merged during invocation. Picks up
where #30551 left off.
  - **Issue:** #30531

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-04-01 18:03:58 -04:00
ccurme
8a69de5c24 openai[patch]: ignore file blocks when counting tokens (#30601)
OpenAI does not appear to document how it transforms PDF pages to
images, which determines how tokens are counted:
https://platform.openai.com/docs/guides/pdf-files?api-mode=chat#usage-considerations

Currently these block types raise ValueError inside
`get_num_tokens_from_messages`. Here we update to generate a warning and
continue.
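
A minimal sketch of the new behavior (the file ID below is a
placeholder):

```python
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")
message = HumanMessage(
    content=[
        {"type": "text", "text": "Summarize this document."},
        # No documented token formula exists for file blocks, so this
        # block is now skipped with a warning instead of raising.
        {"type": "file", "file": {"file_id": "file-abc123"}},
    ]
)
num_tokens = llm.get_num_tokens_from_messages([message])
```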
2025-04-01 15:29:33 -04:00
Christophe Bornet
558191198f core: Add ruff rule FBT003 (boolean-trap) (#29424)
See
https://docs.astral.sh/ruff/rules/boolean-positional-value-in-call/#boolean-positional-value-in-call-fbt003
This PR also fixes some FBT001/002 in private methods but does not
enforce these rules globally atm.

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-04-01 17:40:12 +00:00
Christophe Bornet
4f8ea13cea core: Add ruff rules PERF (#29375)
See https://docs.astral.sh/ruff/rules/#perflint-perf
2025-04-01 13:34:56 -04:00
Christophe Bornet
8a33402016 core: Add ruff rules PT (pytest) (#29381)
See https://docs.astral.sh/ruff/rules/#flake8-pytest-style-pt
2025-04-01 13:31:07 -04:00
Ben Faircloth
6896c863e8 docs: add seekrflow chat model integration docs (#30596)
### **PR title**  
`docs: add SeekrFlow integration notebook`

---

### 💬 **PR message**  
- **Description:**  
This PR adds an integration notebook for
[`ChatSeekrFlow`](https://pypi.org/project/langchain-seekrflow/)
under `docs/docs/integrations/chat/`. Per LangChain’s guidance,
SeekrFlow has been published as a standalone OSS package
(`langchain-seekrflow`) rather than as a direct community integration.
This notebook ensures discoverability, demonstration, and testability of
the integration within LangChain’s documentation structure.

- **Issue:**  
N/A – this is a new integration contribution aligned with LangChain’s
external package policy.

- **Dependencies:**
  - [`langchain-seekrflow`](https://pypi.org/project/langchain-seekrflow) (published to PyPI)
  - [`seekrai`](https://pypi.org/project/seekrai/) (SeekrFlow client SDK)

- **Twitter handle (optional):**  
  @seekrtechnology

---------

Co-authored-by: Ben Faircloth <bfaircloth@seekr.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-04-01 13:18:01 -04:00
Christophe Bornet
768e4f695a core: Add ruff rules S110 and S112 (#30599) 2025-04-01 13:17:22 -04:00
Christophe Bornet
88b4233fa1 core: Add ruff rules D (docstring) (#29406)
This ensures that the code is properly documented:
https://docs.astral.sh/ruff/rules/#pydocstyle-d

Related to #21983
2025-04-01 13:15:45 -04:00
Andras L Ferenczi
64df60e690 community[minor]: Add custom sitemap URL parameter to GitbookLoader (#30549)
## Description
This PR adds a new `sitemap_url` parameter to the `GitbookLoader` class
that allows users to specify a custom sitemap URL when loading content
from a GitBook site. This is particularly useful for GitBook sites that
use non-standard sitemap file names like `sitemap-pages.xml` instead of
the default `sitemap.xml`.
The standard `GitbookLoader` assumes that the sitemap is located at
`/sitemap.xml`, but some GitBook instances (including GitBook's own
documentation) use different paths for their sitemaps. This parameter
makes the loader more flexible and helps users extract content from a
wider range of GitBook sites.
## Issue
Fixes bug
[#30473](https://github.com/langchain-ai/langchain/issues/30473) where
the `GitbookLoader` would fail to find pages on GitBook sites that use
custom sitemap URLs.
## Dependencies
No new dependencies required.
*I've added*:
* Unit tests to verify the parameter works correctly
* Integration tests to confirm the parameter is properly used with real
GitBook sites
* Updated docstrings with parameter documentation
The changes are fully backward compatible, as the parameter is optional
with a sensible default.
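
A minimal sketch of the new parameter (the sitemap URL below is
illustrative):

```python
from langchain_community.document_loaders import GitbookLoader

# Point the loader at a sitemap that is not at the default /sitemap.xml.
loader = GitbookLoader(
    "https://docs.gitbook.com",
    load_all_paths=True,
    sitemap_url="https://docs.gitbook.com/sitemap-pages.xml",
)
docs = loader.load()
```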

---------

Co-authored-by: andrasfe <andrasf94@gmail.com>
Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2025-04-01 16:17:21 +00:00
Christophe Bornet
fdda1aaea1 core: Accept ALL ruff rules with exclusions (#30595)
This pull request updates the `pyproject.toml` configuration file to
modify the linting rules and ignored warnings for the project. The most
important changes include switching to a more comprehensive selection of
linting rules and updating the list of ignored rules to better align
with the project's requirements.

Linting rules update:

* Changed the `select` option to include all available linting rules by
setting it to `["ALL"]`.

Ignored rules update:

* Updated the `ignore` option to include specific rules that interfere
with the formatter, are incompatible with Pydantic, or are temporarily
excluded due to project constraints.
2025-04-01 11:17:51 -04:00
Kacper Włodarczyk
26a3256fc6 community[major]: DynamoDBChatMessageHistory bulk add messages, raise errors (#30572)
This PR addresses two key issues:

- **Prevent history errors from failing silently**: Previously, errors
in message history were only logged and not raised, which can lead to
inconsistent state and downstream failures (e.g., ValidationError from
Bedrock due to malformed message history). This change ensures that such
errors are raised explicitly, making them easier to detect and debug.
(Side note: I’m using AWS Lambda Powertools Logger but hadn’t configured
it properly with the standard Python logger—my bad. If the error had
been raised, I would’ve seen it in the logs 😄) This is a **BREAKING
CHANGE**

- **Add messages in bulk instead of iteratively**: This introduces a
custom add_messages method to add all messages at once. The previous
approach failed silently when individual messages were too large,
resulting in partial history updates and inconsistent state. With this
change, either all messages are added successfully, or none are—helping
avoid obscure history-related errors from Bedrock.

---------

Co-authored-by: Kacper Wlodarczyk <kacper.wlodarczyk@chaosgears.com>
2025-04-01 11:13:32 -04:00
Olexandr88
8c8bca68b2 docs: edited the badge to an acceptable size (#30586)
2025-04-01 07:17:12 -04:00
Armaanjeet Singh Sandhu
4bbc249b13 community: Fix attribute access for transcript text in YoutubeLoader (Fixes #30309) (#30582)
**Description:** 
Fixes a bug in the YoutubeLoader where FetchedTranscript objects were
not properly processed. The loader was only extracting the 'text'
attribute from FetchedTranscriptSnippet objects while ignoring 'start'
and 'duration' attributes. This would cause a TypeError when the code
later tried to access these missing keys, particularly when using the
CHUNKS format or any code path that needed timestamp information.

This PR modifies the conversion of FetchedTranscriptSnippet objects to
include all necessary attributes, ensuring that the loader works
correctly with all transcript formats.

**Issue:** Fixes #30309

**Dependencies:** None

**Testing:**
- Tested the fix with multiple YouTube videos to confirm it resolves the
issue
- Verified that both regular loading and CHUNKS format work correctly
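
A minimal sketch of the code path this fixes, assuming
`youtube-transcript-api` is installed (the video URL is illustrative):

```python
from langchain_community.document_loaders import YoutubeLoader
from langchain_community.document_loaders.youtube import TranscriptFormat

# CHUNKS needs 'start' and 'duration' from each transcript snippet;
# both are now preserved when converting FetchedTranscriptSnippet objects.
loader = YoutubeLoader.from_youtube_url(
    "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
    transcript_format=TranscriptFormat.CHUNKS,
    chunk_size_seconds=30,
)
docs = loader.load()
```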
2025-04-01 07:13:06 -04:00
Ivan Brko
ecff055096 community[minor]: Improve Brave Search Tool, allow api key in env var (#30364)
- **Description:** 

- Make Brave Search Tool consistent with other tools and allow reading
its api key from `BRAVE_SEARCH_API_KEY` instead of having to pass the
api key manually (no breaking changes)
- Improve Brave Search Tool by storing api key in `SecretStr` instead of
plain `str`.
    - Add unit test for `BraveSearchWrapper`
    - Reflect the changes in the documentation
  - **Issue:** N/A
  - **Dependencies:** N/A
  - **Twitter handle:** ivan_brko
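
A minimal sketch of the new behavior (assuming the key is exported in
the environment):

```python
import os

from langchain_community.tools import BraveSearch

os.environ["BRAVE_SEARCH_API_KEY"] = "..."  # placeholder

# The API key is now read from BRAVE_SEARCH_API_KEY when not passed in.
tool = BraveSearch()
print(tool.run("what is a vector index"))
```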
2025-03-31 14:48:52 -04:00
ccurme
0c623045b5 core[patch]: pydantic 2.11 compat (#30554)
Release notes: https://pydantic.dev/articles/pydantic-v2-11-release

Covered here:

- We no longer access `model_fields` on class instances (that is now
deprecated);
- Update schema normalization for Pydantic version testing to reflect
changes to generated JSON schema (addition of `"additionalProperties":
True` for dict types with value Any or object).

## Considerations:

### Changes to JSON schema generation

#### Tool-calling / structured outputs

This may impact tool-calling + structured outputs for some providers,
but schema generation only changes if you have parameters of the form
`dict`, `dict[str, Any]`, `dict[str, object]`, etc. If dict parameters
are typed, my understanding is that there are no changes.

For OpenAI for example, untyped dicts work for structured outputs with
default settings before and after updating Pydantic, and error both
before/after if `strict=True`.

### Use of `model_fields`

There is one spot where we previously accessed `super(cls,
self).model_fields`, where `cls` is an object in the MRO. This was done
for the purpose of tracking aliases in secrets. I've updated this to
always be `type(self).model_fields`-- see comment in-line for detail.

---------

Co-authored-by: Sydney Runkle <54324534+sydney-runkle@users.noreply.github.com>
2025-03-31 14:22:57 -04:00
keshavshrikant
e8be3cca5c fix huggingface tokenizer default length function (#30185)
#30184
2025-03-31 11:54:30 -04:00
Fai LAW
4419340039 docs: add pre_filter usage in similarity_search_with_score (Azure Cosmos DB No SQL) (#30508)
`pre_filter` should be passed in the `Hybrid Search with filtering`
example. Otherwise, it is just an unused variable.
2025-03-31 11:33:00 -04:00
Wenqi Li
64f97e707e ollama[patch]: Support seed param for OllamaLLM (#30553)
**Description:** add the `seed` param for `OllamaLLM` client
reproducibility.

**Issue:** follow-up of a similar issue,
https://github.com/langchain-ai/langchain/issues/24703; see also
https://github.com/langchain-ai/langchain/pull/24782

**Dependencies:** n/a
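
A minimal sketch (model name illustrative, assuming a local Ollama
server):

```python
from langchain_ollama import OllamaLLM

# A fixed seed makes sampling reproducible across invocations.
llm = OllamaLLM(model="llama3.1", seed=42)
print(llm.invoke("Tell me a joke"))
```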
2025-03-31 11:28:49 -04:00
Christophe Bornet
8395abbb42 core: Fix test_stream_error_callback (#30228)
Fixes #29436

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-03-31 10:37:22 -04:00
Jorge Piedrahita Ortiz
b9e19c5f97 Docs: Add sambanova cloud embeddings docs (#30525)
- **Description:** Add SambaNova Cloud embeddings docs. Previously only
SambaStudio embeddings were supported; the latest release of
`langchain_sambanova` also supports SambaNova Cloud embeddings.
2025-03-31 10:16:15 -04:00
Augusto César Perin
f4d1df1b2d docs: add missing with_config method to Runnable templates API reference (#30560)
Broken source/docs links for Runnable methods

### What was changed
Added the `with_config` method to the method lists in both Runnable
template files:
- docs/api_reference/templates/runnable_non_pydantic.rst
- docs/api_reference/templates/runnable_pydantic.rst
2025-03-31 10:08:02 -04:00
Christophe Bornet
026de908eb core: Add ruff rules G, FA, INP, AIR and ISC (#29334)
Fixes mostly for rules G. See
https://docs.astral.sh/ruff/rules/#flake8-logging-format-g
2025-03-31 10:05:23 -04:00
Brayden Zhong
e4515f308f community: update RankLLM integration and fix LangChain deprecation (#29931)
- [x] **Description:**  
- Removed `ModelType` enum (`VICUNA`, `ZEPHYR`, `GPT`) to align with
RankLLM's latest implementation.
- Updated `chain({query})` to `chain.invoke({query})` to resolve
LangChain 0.1.0 deprecation warnings from
https://github.com/langchain-ai/langchain/pull/29840.

- [x] **Dependencies:** No new dependencies added.  

- [x] **Tests and Docs:**  
- Updated RankLLM documentation
(`docs/docs/integrations/document_transformers/rankllm-reranker.ipynb`).
  - Fixed LangChain usage in related code examples.  

- [x] **Lint and Test:**  
- Ran `make format`, `make lint`, and verified functionality after
updates.
  - No breaking changes introduced.  


---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-31 09:50:00 -04:00
ccurme
b4fe1f1ec0 groq: release 0.3.2 (#30570) 2025-03-31 13:29:45 +00:00
Karol Zmorski
c1acf6f756 docs: Add docs for WatsonxToolkit from langchain-ibm (#30340)
**Description:**

Added docs for `WatsonxToolkit` from `langchain-ibm`:
- Sample notebook

Updated provider file: `ibm.mdx`.
2025-03-31 09:18:37 -04:00
ccurme
9213d94057 docs: update cassettes for chat token usage tracking guide (#30558) 2025-03-30 14:57:15 -04:00
ccurme
9c682af8f3 langchain: release 0.3.22 (#30557)
Closes https://github.com/langchain-ai/langchain/issues/30536
2025-03-30 14:48:22 -04:00
ccurme
08796802ca docs: keep tutorial runnable in CI (#30556) 2025-03-30 18:34:05 +00:00
William FH
b075eab3e0 Include delayed inputs in langchain tracer (#30546) 2025-03-28 16:07:22 -07:00
Thommy257
372dc7f991 core[patch]: fix loss of partially initialized variables during prompt composition (#30096)
**Description:**
This PR addresses the loss of partially initialised variables when
composing different prompts. I.e. it allows the following snippet to
run:

```python
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([('system', 'Prompt {x} {y}')]).partial(x='1')
appendix = ChatPromptTemplate.from_messages([('system', 'Appendix {z}')])

(prompt + appendix).invoke({'y': '2', 'z': '3'})
```

Previously, this would have raised a `KeyError`, stating that variable
`x` remains undefined.

**Issue**
References issue #30049

**Todo**
- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-03-28 20:41:57 +00:00
Koshik Debanath
e7883d5b9f langchain-openai: Support token counting for o-series models in ChatOpenAI (#30542)
Related to #30344

Add support for token counting for o-series models in
`test_token_counts.py`.

* **Update `_MODELS` and `_CHAT_MODELS` dictionaries**
- Add "o1", "o3", and "gpt-4o" to `_MODELS` and `_CHAT_MODELS`
dictionaries.

* **Update token counts**
  - Add token counts for "o1", "o3", and "gpt-4o" models.

---

For more details, open the [Copilot Workspace
session](https://copilot-workspace.githubnext.com/langchain-ai/langchain/pull/30542?shareId=ab208bf7-80a3-4b8d-80c4-2287486fedae).
2025-03-28 16:02:09 -04:00
Eugene Yurtsev
d075ad21a0 core[patch]: specify default event loop scope in pyproject.toml (#30543)
Specify default event loop scope
2025-03-28 19:51:19 +00:00
Ahmed Tammaa
f23c3e2444 text-splitters[patch]: Refactor HTMLHeaderTextSplitter for Enhanced Maintainability and Readability (#29397)
Please see PR #27678 for context

## Overview

This pull request presents a refactor of the `HTMLHeaderTextSplitter`
class aimed at improving its maintainability and readability. The
primary enhancements include simplifying the internal structure by
consolidating multiple private helper functions into a single private
method, thereby reducing complexity and making the codebase easier to
understand and extend. Importantly, all existing functionalities and
public interfaces remain unchanged.

## PR Goals

1. **Simplify Internal Logic**:
- **Consolidation of Private Methods**: The original implementation
utilized multiple private helper functions (`_header_level`,
`_dom_depth`, `_get_elements`) to manage different aspects of HTML
parsing and document generation. This fragmentation increased cognitive
load and potential maintenance overhead.
- **Streamlined Processing**: By merging these functionalities into a
single private method (`_generate_documents`), the class now offers a
more straightforward flow, making it easier for developers to trace and
understand the processing steps. (Thanks to @eyurtsev)

2. **Enhance Readability**:
- **Clearer Method Responsibilities**: With fewer private methods, each
method now has a more focused responsibility. The primary logic resides
within `_generate_documents`, which handles both HTML traversal and
document creation in a cohesive manner.
- **Reduced Redundancy**: Eliminating redundant checks and consolidating
logic reduces the code's verbosity, making it more concise without
sacrificing clarity.

3. **Improve Maintainability**:
- **Easier Debugging and Extension**: A simplified internal structure
allows for quicker identification of issues and easier implementation of
future enhancements or feature additions.
- **Consistent Header Management**: The new implementation ensures that
headers are managed consistently within a single context, reducing the
likelihood of bugs related to header scope and hierarchy.

4. **Maintain Backward Compatibility**:
- **Unchanged Public Interface**: All public methods (`split_text`,
`split_text_from_url`, `split_text_from_file`) and their signatures
remain unchanged, ensuring that existing integrations and usage patterns
are unaffected.
- **Preserved Docstrings**: Comprehensive docstrings are retained,
providing clear documentation for users and developers alike.

## Detailed Changes

1. **Removed Redundant Private Methods**:
- **Eliminated `_header_level`, `_dom_depth`, and `_get_elements`**:
These methods were merged into the `_generate_documents` method,
centralizing the logic for HTML parsing and document generation.

2. **Consolidated Document Generation Logic**:
- **Single Private Method `_generate_documents`**: This method now
handles the entire process of parsing HTML, tracking active headers,
managing document chunks, and yielding `Document` instances. This
consolidation reduces the number of moving parts and simplifies the
overall processing flow.

3. **Simplified Header Management**:
- **Immediate Header Scope Handling**: Headers are now managed within
the traversal loop of `_generate_documents`, ensuring that headers are
added or removed from the active headers dictionary in real-time based
on their DOM depth and hierarchy.
- **Removed `chunk_dom_depth` Attribute**: The need to track chunk DOM
depth separately has been eliminated, as header scopes are now directly
managed within the traversal logic.

4. **Streamlined Chunk Finalization**:
- **Enhanced `finalize_chunk` Function**: The chunk finalization process
has been simplified to directly yield a single `Document` when needed,
without maintaining an intermediate list. This change reduces
unnecessary list operations and makes the logic more straightforward.

5. **Improved Variable Naming and Flow**:
- **Descriptive Variable Names**: Variables such as `current_chunk` and
`node_text` provide clear insights into their roles within the
processing logic.
- **Direct Header Removal Logic**: Headers that are out of scope are
removed immediately during traversal, ensuring that the active headers
dictionary remains accurate and up-to-date.

6. **Preserved Comprehensive Docstrings**:
- **Unchanged Documentation**: All existing docstrings, including
class-level and method-level documentation, remain intact. This ensures
that users and developers continue to have access to detailed usage
instructions and method explanations.

## Testing

All existing test cases from `test_html_header_text_splitter.py` have
been executed against the refactored code. The results confirm that:

- **Functionality Remains Intact**: The splitter continues to accurately
parse HTML content, respect header hierarchies, and produce the expected
`Document` objects with correct metadata.
- **Backward Compatibility is Maintained**: No changes were required in
the test cases, and all tests pass without modifications, demonstrating
that the refactor does not introduce any regressions or alter existing
behaviors.


Existing usage examples remain fully operational and behave as before,
returning a list of `Document` objects with the expected metadata and
content splits.

## Conclusion

This refactor achieves a more maintainable and readable codebase by
simplifying the internal structure of the `HTMLHeaderTextSplitter`
class. By consolidating multiple private methods into a single, cohesive
private method, the class becomes easier to understand, debug, and
extend. All existing functionalities are preserved, and comprehensive
tests confirm that the refactor maintains the expected behavior. These
changes align with LangChain’s standards for clean, maintainable, and
efficient code.

---

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-03-28 15:36:00 -04:00
Christophe Bornet
86beb64b50 docs: Add doc for Vectorize provider (#30436)
This pull request adds documentation and a tutorial for integrating the
[Vectorize](https://vectorize.io/) service with LangChain. The most
important changes include adding a new documentation page for Vectorize
and creating a Jupyter notebook that demonstrates how to use the
Vectorize retriever.

The source code for the langchain-vectorize package can be found
[here](https://github.com/vectorize-io/integrations-python/tree/main/langchain).

Previews:
*
https://langchain-git-fork-cbornet-vectorize-langchain.vercel.app/docs/integrations/providers/vectorize/
*
https://langchain-git-fork-cbornet-vectorize-langchain.vercel.app/docs/integrations/retrievers/vectorize/

Documentation updates:

*
[`docs/docs/integrations/providers/vectorize.mdx`](diffhunk://#diff-7e00d4ce4768f73b4d381a7c7b1f94d138f1b27ebd08e3666b942630a0285606R1-R40):
Added a new documentation page for Vectorize, including an overview of
its features, installation instructions, and a basic usage example.

Tutorial updates:

*
[`docs/docs/integrations/retrievers/vectorize.ipynb`](diffhunk://#diff-ba5bb9a1b4586db7740944b001bcfeadc88be357640ded0c82a329b11d8d6e29R1-R294):
Created a Jupyter notebook tutorial that shows how to set up the
Vectorize environment, create a RAG pipeline, and use the LangChain
Vectorize retriever. The notebook includes steps for account creation,
token generation, environment setup, and pipeline deployment.
2025-03-28 15:25:21 -04:00
omahs
6f8735592b docs,langchain-community: Fix typos in docs and code (#30541)
Fix typos
2025-03-28 19:21:16 +00:00
Agus
47d50f49d9 docs: Add GOAT integration to docs (#30478)
This PR adds:
1. Docs for the GOAT integration 
2. An "Agentic Finance" table to the Tools page that includes GOAT

**Twitter handle**: @0xaguspunk
2025-03-28 15:19:37 -04:00
Shixian Sheng
94a7fd2497 docs: fix broken hyperlinks in fireworks integration package README (#30538)
Fix two broken hyperlinks
2025-03-28 15:18:44 -04:00
Oskar Stark
0d2cea747c docs: streamline LangSmith teasing (#30302)
This can only be reviewed by [hiding
whitespaces](https://github.com/langchain-ai/langchain/pull/30302/files?diff=unified&w=1).

The motivation behind this PR is to get my hands on the docs and make
the LangSmith teasing short and clear.

Right now I don't know how to do it, but this could be an include in the
future.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-03-28 15:13:22 -04:00
Eugene Yurtsev
dd0faab07e fix types 2025-03-28 14:23:50 -04:00
Eugene Yurtsev
21ab1dc675 Merge branch 'master' of github.com:xzq-xu/langchain into xzq-xu/master 2025-03-28 13:56:49 -04:00
Eugene Yurtsev
22cee5d983 x 2025-03-28 13:56:10 -04:00
Eugene Yurtsev
a14d8b103b Merge branch 'master' into master 2025-03-28 13:53:58 -04:00
Eugene Yurtsev
6d22f40a0b x 2025-03-28 13:51:06 -04:00
Philippe PRADOS
92189c8b31 community[patch]: Handle gray scale images in ImageBlobParser (Fixes 30261 and 29586) (#30493)
Fix [29586](https://github.com/langchain-ai/langchain/issues/29586) and
[30261](https://github.com/langchain-ai/langchain/pull/30261)
2025-03-28 10:15:40 -04:00
小豆豆学长
1f0686db80 community: add netmind integration (#30149)
Co-authored-by: yanrujing <rujing.yan@protagonist-ai.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
2025-03-27 15:27:04 -04:00
Kyungho Byoun
e6b6c07395 community: add HANA dialect to SQLDatabase (#30475)
This PR includes support for HANA dialect in SQLDatabase, which is a
wrapper class for SQLAlchemy.

Currently, it is not possible to set a schema name when using HANA DB
with LangChain, and no message is shown to the user, which makes it hard
to figure out why the SQL does not work as expected.

Here is the reference document for HANA DB to set schema for the
session.

- [SET SCHEMA Statement (Session
Management)](https://help.sap.com/docs/SAP_HANA_PLATFORM/4fe29514fd584807ac9f2a04f6754767/20fd550375191014b886a338afb4cd5f.html)
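
A minimal sketch of the new behavior, assuming a HANA SQLAlchemy
dialect is installed (URI and schema are illustrative):

```python
from langchain_community.utilities import SQLDatabase

# With HANA, the schema is now applied via SET SCHEMA for the session
# instead of being silently ignored.
db = SQLDatabase.from_uri(
    "hana://user:password@hana-host:39015",
    schema="MY_SCHEMA",
)
```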
2025-03-27 15:19:50 -04:00
Eugene Yurtsev
1cf91a2386 docs: fix llms-txt (#30528)
* Fix trailing slashes
* Fix chat model integration links
2025-03-27 19:02:44 +00:00
Christophe Bornet
e181d43214 core: Bump ruff version to 0.11 (#30519)
Changes are from the new TC006 rule:
https://docs.astral.sh/ruff/rules/runtime-cast-value/
TC006 is auto-fixed.
2025-03-27 13:01:49 -04:00
ccurme
59908f04d4 fireworks: release 0.2.9 (#30527) 2025-03-27 16:04:20 +00:00
ccurme
05482877be mistralai: release 0.2.10 (#30526) 2025-03-27 16:01:40 +00:00
Andras L Ferenczi
63673b765b Fix: Enable max_retries Parameter in ChatMistralAI Class (#30448)
**partners: Enable max_retries in ChatMistralAI**

**Description**

- This pull request reactivates the retry logic in the
completion_with_retry method of the ChatMistralAI class, restoring the
intended functionality of the previously ineffective max_retries
parameter. Includes a new unit test that mocks failed/successful retry
calls and an integration test to confirm end-to-end functionality.

**Issue**
- Closes #30362

**Dependencies**
- No additional dependencies required
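
A minimal sketch, assuming `MISTRAL_API_KEY` is set (model name
illustrative):

```python
from langchain_mistralai import ChatMistralAI

# Transient API failures are now retried up to max_retries times.
llm = ChatMistralAI(model="mistral-small-latest", max_retries=3)
print(llm.invoke("Hello!").content)
```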

Co-authored-by: andrasfe <andrasf94@gmail.com>
2025-03-27 11:53:44 -04:00
Lakindu Boteju
3aa080c2a8 Fix typos in pdfminer and pymupdf documentations (#30513)
This pull request includes fixes in documentation for PDF loaders to
correct the names of the loaders and the required installations. The
most important changes include updating the loader names and
installation instructions in the Jupyter notebooks.

Documentation fixes:

*
[`docs/docs/integrations/document_loaders/pdfminer.ipynb`](diffhunk://#diff-a4a0561cd4a6e876ea34b7182de64a452060b921bb32d37b02e6a7980a41729bL34-R34):
Changed references from `PyMuPDFLoader` to `PDFMinerLoader` and updated
the installation instructions to replace `pymupdf` with `pdfminer`.
[[1]](diffhunk://#diff-a4a0561cd4a6e876ea34b7182de64a452060b921bb32d37b02e6a7980a41729bL34-R34)
[[2]](diffhunk://#diff-a4a0561cd4a6e876ea34b7182de64a452060b921bb32d37b02e6a7980a41729bL63-R63)
[[3]](diffhunk://#diff-a4a0561cd4a6e876ea34b7182de64a452060b921bb32d37b02e6a7980a41729bL330-R330)

*
[`docs/docs/integrations/document_loaders/pymupdf.ipynb`](diffhunk://#diff-8487995f457e33daa2a08fdcff3b42e144eca069eeadfad5651c7c08cce7a5cdL292-R292):
Corrected the loader name from `PDFPlumberLoader` to `PyMuPDFLoader`.
2025-03-27 11:29:11 -04:00
Miguel Grinberg
14b7d790c1 docs: Restore accidentally deleted docs on Elasticsearch strategies (#30521)
- **Description:** Adding back a section of the Elasticsearch
vectorstore documentation that was deleted in [this
commit](a72fddbf8d (diff-4988344c6ccc08191f89ac1ebf1caab5185e13698d7567fde5352038cd950d77)).
The only change I've made is to update the example RRF request, which
was out of date.


2025-03-27 11:27:20 -04:00
ccurme
0b2244ea88 Revert "docs: restore some content to Elasticsearch integration page" (#30523)
Reverts langchain-ai/langchain#30522 in favor of
https://github.com/langchain-ai/langchain/pull/30521.
2025-03-27 15:12:36 +00:00
ccurme
80064893c1 docs: restore some content to Elasticsearch integration page (#30522)
https://github.com/langchain-ai/langchain/pull/24858 standardized vector
store integration pages, but deleted some content.

Here we merge some of the old content back in. We use this version as a
reference:
2c798622cd/docs/docs/integrations/vectorstores/elasticsearch.ipynb
2025-03-27 11:07:19 -04:00
Keiichi Hirobe
956b09f468 core[patch]: stop deleting records with "scoped_full" when doc is empty (#30520)
Fix a bug that causes `scoped_full` in index to delete records when there are no input docs.
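A minimal sketch of the fixed call path, using in-memory components
(names and sizes illustrative):

```python
from langchain_core.embeddings import DeterministicFakeEmbedding
from langchain_core.indexing import InMemoryRecordManager, index
from langchain_core.vectorstores import InMemoryVectorStore

record_manager = InMemoryRecordManager(namespace="demo")
record_manager.create_schema()
vector_store = InMemoryVectorStore(DeterministicFakeEmbedding(size=8))

# With cleanup="scoped_full", an empty batch of input docs is now a
# no-op instead of deleting previously indexed records.
result = index(
    [],
    record_manager,
    vector_store,
    cleanup="scoped_full",
    source_id_key="source",
)
print(result)
```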
2025-03-27 11:04:34 -04:00
Christophe Bornet
b28a474e79 core[patch]: Add ruff rules for PLW (Pylint Warnings) (#29288)
See https://docs.astral.sh/ruff/rules/#warning-w_1

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-03-27 10:26:12 +00:00
xzq.xu
92dc3f7341 format test lint passed 2025-03-27 13:44:59 +08:00
xzq.xu
d0a9808148 modify test name 2025-03-27 13:34:51 +08:00
xzq.xu
ed2428f902 add a unit test 2025-03-27 12:43:16 +08:00
David Sánchez Sánchez
75823d580b community: fix perplexity response parameters not being included in model response (#30440)
This pull request includes enhancements to the `perplexity.py` file in
the `chat_models` module, focusing on improving the handling of
additional keyword arguments (`additional_kwargs`) in message processing
methods. Additionally, new unit tests have been added to ensure the
correct inclusion of citations, images, and related questions in the
`additional_kwargs`.

Issue: resolves https://github.com/langchain-ai/langchain/issues/30439

Enhancements to `perplexity.py`:

*
[`libs/community/langchain_community/chat_models/perplexity.py`](diffhunk://#diff-d3e4d7b277608683913b53dcfdbd006f0f4a94d110d8b9ac7acf855f1f22207fL208-L212):
Modified the `_convert_delta_to_message_chunk`, `_stream`, and
`_generate` methods to handle `additional_kwargs`, which include
citations, images, and related questions.
[[1]](diffhunk://#diff-d3e4d7b277608683913b53dcfdbd006f0f4a94d110d8b9ac7acf855f1f22207fL208-L212)
[[2]](diffhunk://#diff-d3e4d7b277608683913b53dcfdbd006f0f4a94d110d8b9ac7acf855f1f22207fL277-L286)
[[3]](diffhunk://#diff-d3e4d7b277608683913b53dcfdbd006f0f4a94d110d8b9ac7acf855f1f22207fR324-R331)

New unit tests:

*
[`libs/community/tests/unit_tests/chat_models/test_perplexity.py`](diffhunk://#diff-dab956d79bd7d17a0f5dea3f38ceab0d583b43b63eb1b29138ee9b6b271ba1d9R119-R275):
Added new tests `test_perplexity_stream_includes_citations_and_images`
and `test_perplexity_stream_includes_citations_and_related_questions` to
verify that the `stream` method correctly includes citations, images,
and related questions in the `additional_kwargs`.
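
A minimal sketch of where the new fields surface (model name
illustrative; `PPLX_API_KEY` assumed in the environment):

```python
from langchain_community.chat_models import ChatPerplexity

chat = ChatPerplexity(model="sonar")
message = chat.invoke("What is LangChain?")

# Citations, images, and related questions now ride along in
# additional_kwargs instead of being dropped.
print(message.additional_kwargs.get("citations"))
print(message.additional_kwargs.get("images"))
print(message.additional_kwargs.get("related_questions"))
```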
2025-03-26 22:28:08 -04:00
Eugene Yurtsev
7664874a0d docs: llms-txt (#30506)
First just verifying it's included in the manifest
2025-03-26 22:21:59 -04:00
Adeel Ehsan
d7d0bca2bc docs: add vectara to libs package yml (#30504) 2025-03-26 16:47:53 -04:00
ccurme
3781144710 docs: update doc on token usage tracking (#30505) 2025-03-26 16:13:45 -04:00
ccurme
a9b1e1b177 openai: release 0.3.11 (#30503) 2025-03-26 19:24:37 +00:00
ccurme
8119a7bc5c openai[patch]: support streaming token counts in AzureChatOpenAI (#30494)
When OpenAI originally released `stream_options` to enable token usage
during streaming, it was not supported in AzureOpenAI. It is now
supported.

Like the [OpenAI
SDK](f66d2e6fdc/src/openai/resources/completions.py (L68)),
ChatOpenAI does not return usage metadata during streaming by default
(which adds an extra chunk to the stream). The OpenAI SDK requires users
to pass `stream_options={"include_usage": True}`. ChatOpenAI implements
a convenience argument `stream_usage: Optional[bool]`, and an attribute
`stream_usage: bool = False`.

Here we extend this to AzureChatOpenAI by moving the `stream_usage`
attribute and `stream_usage` kwarg (on `_(a)stream`) from ChatOpenAI to
BaseChatOpenAI.
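
A minimal sketch of the extended behavior (deployment and API version
illustrative; Azure credentials assumed in the environment):

```python
from langchain_openai import AzureChatOpenAI

llm = AzureChatOpenAI(
    azure_deployment="gpt-4o",
    api_version="2024-10-21",
    stream_usage=True,  # now honored by AzureChatOpenAI as well
)

full = None
for chunk in llm.stream("hello"):
    full = chunk if full is None else full + chunk
print(full.usage_metadata)  # populated from the final usage chunk
```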

---

Additional consideration: we must be sensitive to the number of users
using BaseChatOpenAI to interact with other APIs that do not support the
`stream_options` parameter.

Suppose OpenAI in the future updates the default behavior to stream
token usage. Currently, BaseChatOpenAI only passes `stream_options` if
`stream_usage` is True, so there would be no way to disable this new
default behavior.

To address this, we could update the `stream_usage` attribute to
`Optional[bool] = None`, but this is technically a breaking change (as
currently values of False are not passed to the client). IMO: if / when
this change happens, we could accompany it with this update in a minor
bump.

--- 

Related previous PRs:
- https://github.com/langchain-ai/langchain/pull/22628
- https://github.com/langchain-ai/langchain/pull/22854
- https://github.com/langchain-ai/langchain/pull/23552

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-03-26 15:16:37 -04:00
Adeel Ehsan
56629ed87b docs: updated the docs for vectara (#30398)
**Description:** Vectara moved to a LangChain partner package; this
updates the docs accordingly.
2025-03-26 15:02:21 -04:00
ccurme
f68eaab44f tests: release 0.3.17 (#30502) 2025-03-26 18:56:54 +00:00
Louis Auneau
0b532a4ed0 community: Azure Document Intelligence parser features not available fixed (#30370)
- **Description:** The Azure Document Intelligence OCR solution has a
*features* parameter that enables capabilities such as high-resolution
document analysis and key-value pair extraction. In the LangChain
parser, it could be provided as an `analysis_feature` constructor
parameter, which was passed on to the `DocumentIntelligenceClient`.
However, according to the `DocumentIntelligenceClient` [API
Reference](https://learn.microsoft.com/en-us/python/api/azure-ai-documentintelligence/azure.ai.documentintelligence.documentintelligenceclient?view=azure-python),
this is not a valid constructor parameter. It was therefore removed and
is instead stored as a parser property that is used in
`begin_analyze_document`'s `features` parameter (see [API
Reference](https://learn.microsoft.com/en-us/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.documentanalysisclient?view=azure-python#azure-ai-formrecognizer-documentanalysisclient-begin-analyze-document)).
I also removed the check for "Supported features", since all features
are supported out of the box, and I did not check whether the provided
`str` corresponds to the Azure package's feature enumeration, since the
`ValueError` raised when creating the enumeration object is explicit.
One last caveat: some features are not supported for some kinds of
documents. This is documented in the Microsoft documentation, and the
exceptions are also explicit.
- **Issue:** N/A
- **Dependencies:** No
- **Twitter handle:** @Louis___A

---------

Co-authored-by: Louis Auneau <louis@handshakehealth.co>
2025-03-26 14:40:14 -04:00
Really Him
fbd2e10703 docs: hide jsx in llm chain tutorial (#30187)
## **Description:** 
The Jupyter notebooks in the docs section are extremely useful and
critical for widespread adoption of LangChain amongst new developers.
However, because they are also converted to MDX and used to build the
HTML for the Docusaurus site, they contain JSX code that degrades
readability when opened in a "notebook" setting (local notebook server,
google colab, etc.). For instance, here we see the website, with a nice
React tab component for installation instructions (`pip` vs `conda`):

![Screenshot 2025-03-07 at 2 07
15 PM](https://github.com/user-attachments/assets/a528d618-f5a0-4d2e-9aed-16d4b8148b5a)

Now, here is the same notebook viewed in colab:

![Screenshot 2025-03-07 at 2 08
41 PM](https://github.com/user-attachments/assets/87acf5b7-a3e0-46ac-8126-6cac6eb93586)

Note that the text following "To install LangChain run:" contains
snippets of JSX code that is (i) confusing, (ii) bad for readability,
(iii) potentially misleading for a novice developer, who might take it
literally to mean that "to install LangChain I should run `import Tabs
from...`" and then an ill-formed command which mixes the `pip` and
`conda` installation instructions.

Ideally, we would like to have a system that presents a
similar/equivalent UI when viewing the notebooks on the documentation
site, or when interacting with them in a notebook setting - or, at a
minimum, we should not present ill-formed JSX snippets to someone trying
to execute the notebooks. As the documentation itself states, running
the notebooks yourself is a great way to learn the tools. Therefore,
these distracting and ill-formed snippets are contrary to that goal.

## **Fixes:**
* Comment out the JSX code inside the notebook
`docs/tutorials/llm_chain` with a special directive `<!-- HIDE_IN_NB`
(closed with `HIDE_IN_NB -->`). This makes the JSX code "invisible" when
viewed in a notebook setting.
* Add a custom preprocessor that runs process_cell and just erases these
comment strings. This makes sure they are rendered when converted to
MDX.
* Minor tweak: Refactor some of the Markdown instructions into an
executable codeblock for better experience when running as a notebook.
* Minor tweak: Optionally try to get the environment variables from a
`.env` file in the repo so the user doesn't have to enter it every time.
Depends on the user installing `python-dotenv` and adding their own
`.env` file.
* Add an environment variable for "LANGSMITH_PROJECT"
(default="default"), per the LangSmith docs, so a local user can target
a specific project in their LangSmith account.

**NOTE:** If this PR is approved, and the maintainers agree with the
general goal of aligning the notebook execution experience and the doc
site UI, I would plan to implement this on the rest of the JSX snippets
that are littered in the notebooks.

**NOTE:** I wasn't able to/don't know how to run the linkcheck Makefile
commands.

- [X] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

---------

Co-authored-by: Really Him <hesereallyhim@proton.me>
2025-03-26 14:22:33 -04:00
Philippe PRADOS
8e5d2a44ce community[patch]: update PyPDFParser to take into account filters returned as arrays (#30489)
The image parsing is generating a bug as the the extracted objects for
the /Filter returns sometimes an array, sometimes a string.

Fix [Issue
30098](https://github.com/langchain-ai/langchain/issues/30098)
2025-03-26 14:16:54 -04:00
ccurme
422ba4cde5 infra: handle flaky tests (#30501) 2025-03-26 13:28:56 -04:00
ccurme
9a80be7bb7 core[patch]: release 0.3.49 (#30500) 2025-03-26 13:26:32 -04:00
ccurme
299b222c53 mistral[patch]: check types in adding model_name to response_metadata (#30499) 2025-03-26 16:30:09 +00:00
ccurme
22d1a7d7b6 standard-tests[patch]: require model_name in response_metadata if returns_usage_metadata (#30497)
We are implementing a token-counting callback handler in
`langchain-core` that is intended to work with all chat models
supporting usage metadata. The callback will aggregate usage metadata by
model. This requires responses to include the model name in its
metadata.

To support this, if a model `returns_usage_metadata`, we check that it
includes a string model name in its `response_metadata` in the
`"model_name"` key.

More context: https://github.com/langchain-ai/langchain/pull/30487
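
Concretely, for a model that returns usage metadata, the requirement
looks like this (a sketch using ChatOpenAI, API key assumed in the
environment; not the exact test code):

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")
result = llm.invoke("Hello")

# Usage aggregation is keyed by model, so a string model name must be
# present in response_metadata for models that return usage metadata.
assert isinstance(result.response_metadata.get("model_name"), str)
```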
2025-03-26 12:20:53 -04:00
Ante Javor
20f82502e5 Community: Add Memgraph integration docs (#30457)
**Description:**
Since we just implemented
[langchain-memgraph](https://pypi.org/project/langchain-memgraph/), we
are adding basic docs, [per this
comment](https://github.com/langchain-ai/langchain/pull/30197#pullrequestreview-2671616410)
from @ccurme.
   
 **Twitter handle:**
 [@memgraphdb](https://x.com/memgraphdb)


- [x] **Add tests and docs**: Done
- [x] **Lint and test**: Done

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-26 11:58:09 -04:00
xzq.xu
913c8b71d9 format import 2025-03-26 23:34:38 +08:00
xzq.xu
7e3dea5db8 add a new-line 2025-03-26 23:32:07 +08:00
xzq.xu
d602141ab1 remove unused e 2025-03-26 23:10:41 +08:00
xzq.xu
dd9031fc82 _prep_run_args,tool_input copy, Exception 2025-03-26 23:06:43 +08:00
xzq.xu
3382b0d8ea _prep_run_args,tool_input copy 2025-03-26 22:56:32 +08:00
xzq.xu
e90abce577 Merge remote-tracking branch 'origin/master' 2025-03-26 22:42:15 +08:00
xzq.xu
c127ae9d26 fix the format 2025-03-26 22:41:58 +08:00
xzq.xu
65ecc22606 # Fix: Prevent run_manager from being added to state object 2025-03-26 22:36:31 +08:00
ccurme
7e62e3a137 core[patch]: store model names on usage callback handler (#30487)
So we avoid mingling tokens from different models.
2025-03-25 21:26:09 -04:00
ccurme
32827765bf core[patch]: mark usage callback handler as beta (#30486) 2025-03-25 23:25:57 +00:00
Eugene Yurtsev
9f345d64fd core[patch]: Remove old accidental commit (#30483)
Remove commented out file that was accidentally added

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-03-25 15:37:20 -07:00
ccurme
4b9e2e51f3 core[patch]: add token counting callback handler (#30481)
Stripped-down version of
[OpenAICallbackHandler](https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/callbacks/openai_info.py)
that just tracks `AIMessage.usage_metadata`.

```python
from langchain_core.callbacks import get_usage_metadata_callback
from langgraph.prebuilt import create_react_agent

def get_weather(location: str) -> str:
    """Get the weather at a location."""
    return "It's sunny."

tools = [get_weather]
agent = create_react_agent("openai:gpt-4o-mini", tools)

with get_usage_metadata_callback() as cb:
    result = await agent.ainvoke({"messages": "What's the weather in Boston?"})
    print(cb.usage_metadata)
```
2025-03-25 18:16:39 -04:00
pulvedu
1d2b1d8e5e docs: fix typos in Tavily Docs (#30484)
Small changes to docs.

---------

Co-authored-by: pulvedu <dustin@tavily.com>
2025-03-25 18:16:09 -04:00
Christian Jung
19104db7c5 Docs: Fix typo in cookbook (#30485)
- **Description:** fix typo
2025-03-25 18:15:29 -04:00
Eugene Yurtsev
0acca6b9c8 core[patch]: Fix handling of title when tool schema is specified manually via JSONSchema (#30479)
Fix issue: https://github.com/langchain-ai/langchain/issues/30456
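For context, a minimal sketch of the case this fixes; the schema and model below are illustrative, not taken from the PR. A tool specified manually as JSON Schema uses its `title` key as the tool name:

```python
from langchain_openai import ChatOpenAI

# Tool defined manually as JSON Schema; "title" supplies the tool name
weather_schema = {
    "title": "get_weather",
    "description": "Get the weather at a location.",
    "type": "object",
    "properties": {"location": {"type": "string"}},
    "required": ["location"],
}

llm = ChatOpenAI(model="gpt-4o-mini").bind_tools([weather_schema])
```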
2025-03-25 15:15:24 -04:00
Ben Chambers
c5e42a4027 community: deprecate graph vector store (#30328)
- **Description:** mark GraphVectorStore `@deprecated`

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-25 13:52:54 +00:00
Ian Muge
a8ce63903d community: Add edge properties to the gremlin graph schema (#30449)
Description: Extend the gremlin graph schema to include the edge properties, grouped by their triples, i.e. `inVLabel` and `outVLabel`. This should give more context when crafting queries to run against a gremlin graph db.
2025-03-24 19:03:01 -04:00
ccurme
b60e6f6efa community[patch]: update API ref for AmazonTextractPDFParser (#30468) 2025-03-24 23:02:52 +00:00
David Sánchez Sánchez
3ba0d28d8e community: update perplexity docstring (#30451)
This pull request includes extensive documentation updates for the
`ChatPerplexity` class in the
`libs/community/langchain_community/chat_models/perplexity.py` file. The
changes provide detailed setup instructions, key initialization
arguments, and usage examples for various functionalities of the
`ChatPerplexity` class.

Documentation improvements:

* Added setup instructions for installing the `openai` package and
setting the `PPLX_API_KEY` environment variable.
* Documented key initialization arguments for completion parameters and
client parameters, including `model`, `temperature`, `max_tokens`,
`streaming`, `pplx_api_key`, `request_timeout`, and `max_retries`.
* Provided examples for instantiating the `ChatPerplexity` class,
invoking it with messages, using structured output, invoking with
perplexity-specific parameters, streaming responses, and accessing token
usage and response metadata.
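For orientation, a minimal instantiation sketch in the spirit of those examples (the model name is illustrative):

```python
from langchain_community.chat_models import ChatPerplexity

llm = ChatPerplexity(
    model="llama-3.1-sonar-small-128k-online",  # illustrative model name
    temperature=0.7,
    # pplx_api_key defaults to the PPLX_API_KEY environment variable
)
response = llm.invoke("Tell me a fact about cats.")
```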
2025-03-24 15:01:02 -04:00
Vadym Barda
97dec30eea docs[patch]: update trim_messages doc (#30462) 2025-03-24 18:50:48 +00:00
ccurme
c2dd8d84ff infra[patch]: remove pyspark from langchain-community extended testing requirements (#30466) 2025-03-24 14:41:54 -04:00
ccurme
aa30d2d57f standard-tests: release 0.3.16 (#30464) 2025-03-24 18:35:12 +00:00
ccurme
b09e7c125c cli: use pytest-watcher (#30465)
pytest-watch is no longer maintained.
2025-03-24 18:06:31 +00:00
David Sánchez Sánchez
d7b13e12ee community: update perplexity documentation (#30450)
This pull request includes updates to the
`docs/docs/integrations/chat/perplexity.ipynb` file to enhance the
documentation for `ChatPerplexity`. The changes focus on demonstrating
the use of Perplexity-specific parameters and supporting structured
outputs for Tier 3+ users.

Enhancements to documentation:

* Added a new markdown cell explaining the use of Perplexity-specific
parameters through the `ChatPerplexity` class, including parameters like
`search_domain_filter`, `return_images`, `return_related_questions`, and
`search_recency_filter` using the `extra_body` parameter.
* Added a new code cell demonstrating how to invoke `ChatPerplexity`
with the `extra_body` parameter to filter search recency.

Support for structured outputs:

* Added a new markdown cell explaining that `ChatPerplexity` supports
structured outputs for Tier 3+ users.
* Added a new code cell demonstrating how to use `ChatPerplexity` with structured outputs by defining a `BaseModel` class and invoking the chat with structured output.
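A self-contained sketch of the two additions described above (model name and payload values are illustrative):

```python
from pydantic import BaseModel
from langchain_community.chat_models import ChatPerplexity

llm = ChatPerplexity(model="llama-3.1-sonar-small-128k-online")  # illustrative model name

# Perplexity-specific parameters pass through `extra_body`
response = llm.invoke(
    "What were the biggest AI announcements this week?",
    extra_body={"search_recency_filter": "week"},
)

# Structured outputs (Tier 3+ users)
class AnswerFormat(BaseModel):
    answer: str
    sources: list[str]

structured_llm = llm.with_structured_output(AnswerFormat)
```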

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-24 13:49:59 -04:00
ccurme
50ec4a1a4f openai[patch]: attempt to make test less flaky (#30463) 2025-03-24 17:36:36 +00:00
ccurme
8486e0ae80 openai[patch]: bump openai sdk (#30461)
[New required
field](https://github.com/openai/openai-python/pull/2223/files#diff-530fd17eb1cc43440c82630df0ddd9b0893cf14b04065a95e6eef6cd2f766a44R26)
for `ResponseUsage` released in 1.66.5.
2025-03-24 12:10:00 -04:00
ccurme
cbbc968903 openai: release 0.3.10 (#30460) 2025-03-24 15:37:53 +00:00
ccurme
ed5e589191 openai[patch]: support multi-turn computer use (#30410)
Here we accept ToolMessages of the form
```python
ToolMessage(
    content=<representation of screenshot> (see below),
    tool_call_id="abc123",
    additional_kwargs={"type": "computer_call_output"},
)
```
and translate them to `computer_call_output` items for the Responses
API.

We also propagate `reasoning_content` items from AIMessages.

## Example

### Load screenshots
```python
import base64

def load_png_as_base64(file_path):
    with open(file_path, "rb") as image_file:
        encoded_string = base64.b64encode(image_file.read())
        return encoded_string.decode('utf-8')

screenshot_1_base64 = load_png_as_base64("/path/to/screenshot/of/application.png")
screenshot_2_base64 = load_png_as_base64("/path/to/screenshot/of/desktop.png")
```

### Initial message and response
```python
from langchain_core.messages import HumanMessage, ToolMessage
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="computer-use-preview",
    model_kwargs={"truncation": "auto"},
)

tool = {
    "type": "computer_use_preview",
    "display_width": 1024,
    "display_height": 768,
    "environment": "browser"
}
llm_with_tools = llm.bind_tools([tool])

input_message = HumanMessage(
    content=[
        {
            "type": "text",
            "text": (
                "Click the red X to close and reveal my Desktop. "
                "Proceed, no confirmation needed."
            )
        },
        {
            "type": "input_image",
            "image_url": f"data:image/png;base64,{screenshot_1_base64}",
        }
    ]
)

response = llm_with_tools.invoke(
    [input_message],
    reasoning={
        "generate_summary": "concise",
    },
)
response.additional_kwargs["tool_outputs"]
```

### Construct ToolMessage
```python
tool_call_id = response.additional_kwargs["tool_outputs"][0]["call_id"]

tool_message = ToolMessage(
    content=[
        {
            "type": "input_image",
            "image_url": f"data:image/png;base64,{screenshot_2_base64}"
        }
    ],
    #  content=f"data:image/png;base64,{screenshot_2_base64}",  # <-- also acceptable
    tool_call_id=tool_call_id,
    additional_kwargs={"type": "computer_call_output"},
)
```

### Invoke again
```python
messages = [
    input_message,
    response,
    tool_message,
]

response_2 = llm_with_tools.invoke(
    messages,
    reasoning={
        "generate_summary": "concise",
    },
)
```
2025-03-24 15:25:36 +00:00
Vadym Barda
7bc50730aa core[patch]: release 0.3.48 (#30458) 2025-03-24 09:48:03 -04:00
Mohammad Mohtashim
33f1ab1528 Youtube Loader load method Fixed (#30314)
- **Description:** Fixed the `YoutubeLoader` loading method not
returning the correct object
- **Issue:** #30309

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-03-23 14:48:03 -04:00
Simon Paredes
df4448dfac langchain-groq: Add response metadata when streaming (#30379)
- **Description:** Add missing `model_name` and `system_fingerprint`
metadata when streaming.
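A quick sketch of where the added metadata shows up when streaming (the model name is illustrative):

```python
from langchain_groq import ChatGroq

llm = ChatGroq(model="llama-3.1-8b-instant")  # illustrative model name

full = None
for chunk in llm.stream("Hello!"):
    full = chunk if full is None else full + chunk  # aggregate chunks

# The aggregated chunk's response_metadata now includes these fields
print(full.response_metadata.get("model_name"))
print(full.response_metadata.get("system_fingerprint"))
```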

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-23 14:34:41 -04:00
Changyong Um
e2d9fe766f community[tool]: Integrate a tool for the naver_search (#30392)
Hello!
I have reopened a pull request for tool integration.
Please refer to the previous
[PR](https://github.com/langchain-ai/langchain/pull/30248).

I understand that for the tool integration, a separate package should be
created, and only the documentation should be added under docs/docs/. If
there are any other procedures, please let me know.


[langchain-naver-community](https://github.com/e7217/langchain-naver-community)

cc: @ccurme

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-23 14:05:24 -04:00
Jonathan Feng
3848a1371d langchain-contextual: update provider documentation and add reranker documentation (#30415)
Hi @ccurme!

Thanks so much for helping with getting the Contextual documentation
merged last time. We added the reranker to our provider's documentation!
Please let me know if there's any issues with it! Would love to also
work with your team on an announcement for this! 🙏

- **Description:** updates contextual provider documentation to include information about our reranker; also includes documentation for Contextual's reranker in the retrievers section
- **Twitter handle:** https://x.com/ContextualAI/highlights

docs have been added

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-03-22 18:09:09 -04:00
ccurme
d867afff1c docs: update package table ordering (#30437)
Update download counts (only impacts ordering, counts in rendered page
are updated automatically).
2025-03-22 18:07:08 -04:00
Brandon Luu
bbbd4e1db8 docs: Update VectorStoreTab vector store initializations (#30413)
Description: Update vector store tab inits to match either the docs or
api_ref (whichever was more comprehensive)

List of changes per vector stores:

- In-memory
  - no change
- AstraDB
  - match to docs - docs/api_refs match (excluding embeddings)
- Chroma
  - match to docs - api_refs is less descriptive
- FAISS
  - match to docs - docs/api_refs match (excluding embeddings)
- Milvus
- match to docs to use Milvus Lite with Flat index - api_refs does not
have index_param for generalization
- MongoDB
  - match to docs - api_refs are sparser
- PGVector
  - match to api_ref
  - changed to include docker cmd directly in code
- docs/api_ref has comment to view docker command in separate code block
- Pinecone
  - match to api_refs - docs have code dispersed
- Qdrant
  - match to api_ref - docs has size=3072, api_ref has size=1536

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-22 17:29:45 -04:00
Matthew Farrellee
e7032901c3 langchain-tests: allow test_serdes for packages outside the default valid namespaces (#30343)
**Description:**

A third-party package not listed in the default valid namespaces cannot pass test_serdes because load() does not allow extending valid_namespaces.

test_serdes will fail with:
ValueError: Invalid namespace: {'lc': 1, 'type': 'constructor', 'id': ['langchain_other', 'chat_models', 'ChatOther'], 'kwargs': {'model_name': '...', 'api_key': '...'}, 'name': 'ChatOther'}

This change has test_serdes automatically extend valid_namespaces based on the namespace of the ChatModel under test.
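For reference, a sketch of the underlying mechanism: `load()` accepts a `valid_namespaces` argument, which the test now extends automatically (the package and class names below are the illustrative ones from the error above):

```python
from langchain_core.load import dumpd, load

serialized = dumpd(model)  # e.g. an instance of langchain_other.chat_models.ChatOther
deserialized = load(
    serialized,
    valid_namespaces=["langchain_other"],  # extend beyond the default namespaces
)
```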
2025-03-22 17:27:39 -04:00
Jiwon Kang
699475a01d community: uuidv1 is unsafe (#30432)
this_row_id previously used UUID v1. However, since UUID v1 can be
predicted if the MAC address and timestamp are known, it poses a
potential security risk. Therefore, it has been changed to UUID v4.
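In essence:

```python
import uuid

# Before: uuid.uuid1() — derived from MAC address + timestamp, so predictable
# After:  uuid.uuid4() — random, not derivable from host information
this_row_id = uuid.uuid4()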
2025-03-22 15:27:49 -04:00
Dhruvajyoti Sarma
31551dab40 feature: added warning when duckdb is used as a vectorstore without pandas (#30435)
added warning when duckdb is used as a vectorstore without pandas being
installed (currently used for similarity search result processing)

- **Description:** displays a warning when using duckdb as a vector
store without pandas being installed, as it is used by the
`similarity_search` function
    - **Issue:** #29933 
    - **Dependencies:** None
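A minimal sketch of such a guard (not necessarily the exact code merged):

```python
import warnings

try:
    import pandas  # noqa: F401
except ImportError:
    warnings.warn(
        "pandas is not installed; similarity_search in the DuckDB "
        "vector store requires it to process results."
    )
```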

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-22 19:27:21 +00:00
ccurme
e81b82ee0b docs: update cassettes (#30434)
Following updates to `draw_mermaid_png`
2025-03-22 12:57:36 -04:00
ccurme
6484635ac3 docs: update cassettes for response metadata guide (#30431)
As of langchain-groq 0.3 ChatGroq requires a model name.

Also update other models.
2025-03-22 07:52:08 -04:00
Cesar Sanz
5383abfeee Fix incorrect import path for AzureAIChatCompletionsModel (#30417)
Fixes #30416

Correct the import path for `AzureAIChatCompletionsModel` in the
`_init_chat_model_helper` function.

* Update the import statement in
`libs/langchain/langchain/chat_models/base.py` to `from
langchain_azure_ai.chat_models import AzureAIChatCompletionsModel`.

---

For more details, open the [Copilot Workspace
session](https://copilot-workspace.githubnext.com/langchain-ai/langchain/pull/30417?shareId=6ff6d5de-e3d1-4972-8d24-5e74838e9945).
2025-03-22 07:44:51 -04:00
Misakar
7750ad588b community:ChatLiteLLM support output reasoning content (#30430) 2025-03-22 07:43:33 -04:00
Adrián Panella
b75573e858 core: add tool_call exclusion in filter_message (#30289)
Extend functionality to allow filtering pairs of tool calls (AI + tool).
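A sketch of the intended usage; the exact parameter name is an assumption here:

```python
from langchain_core.messages import AIMessage, HumanMessage, ToolMessage, filter_messages

messages = [
    HumanMessage("What's the weather?"),
    AIMessage("", tool_calls=[{"name": "get_weather", "args": {}, "id": "1"}]),
    ToolMessage("Sunny.", tool_call_id="1"),
    AIMessage("It's sunny."),
]

# assumed flag: drops the AI tool-call + ToolMessage pair together
filtered = filter_messages(messages, exclude_tool_calls=True)
```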

---------

Co-authored-by: vbarda <vadym@langchain.dev>
2025-03-21 23:05:29 +00:00
Vadym Barda
673ec00030 docs[patch]: add warning to token counter docstring (#30426) 2025-03-21 18:59:40 -04:00
Adrián Panella
3933a4abc3 core(mermaid): allow greater customization (#29939)
Adds greater style customization by allowing a custom frontmatter
config. This allows to set a `theme` and `look` or to adjust theme by
setting `themeVariables`

Example:

```python

node_colors = NodeStyles(
    default="fill:#e2e2e2,line-height:1.2,stroke:#616161",
    first="fill:#cfeab8,fill-opacity:0",
    last="fill:#eac3b8",
)

frontmatter_config = {
    "config": {
        "theme": "neutral",
        "look": "handDrawn"
    }
}

graph.get_graph().draw_mermaid_png(node_colors=node_colors, frontmatter_config=frontmatter_config)
```


![image](https://github.com/user-attachments/assets/11b56d30-3be2-482f-8432-3ce704a09552)

---------

Co-authored-by: vbarda <vadym@langchain.dev>
2025-03-21 18:25:26 -04:00
Vadym Barda
07823cd41c core[patch]: optimize trim_messages (#30327)
Refactored w/ Claude

Up to 20x speedup! (with theoretical max improvement of `O(n / log n)`)
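Typical usage of the code path being sped up, e.g. together with the approximate token counter added in #30373 (a sketch, with illustrative message counts):

```python
from langchain_core.messages import AIMessage, HumanMessage, trim_messages
from langchain_core.messages.utils import count_tokens_approximately

messages = [HumanMessage("hi"), AIMessage("hello"), HumanMessage("how are you?")] * 50

trimmed = trim_messages(
    messages,
    max_tokens=500,
    token_counter=count_tokens_approximately,
    strategy="last",  # keep the most recent messages that fit
)
```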
2025-03-21 17:08:26 -04:00
ccurme
b78ae7817e openai[patch]: trace strict in structured_output_kwargs (#30425) 2025-03-21 14:37:28 -04:00
axiangcoding
428de88398 docs: Update a note about how to track azure openai's token usage when streaming (#30409)
- **Description:** Update a note about how to track azure openai's token
usage when streaming
  - **Issue:** #30390 
  - **Dependencies:** None
  - **Twitter handle:** None

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-21 14:18:50 -04:00
ccurme
1de7fa8f3a Revert "deepseek: temporarily bypass tests" (#30424)
Reverts langchain-ai/langchain#30423
2025-03-21 17:14:31 +00:00
ccurme
c74dfff836 deepseek: temporarily bypass tests (#30423)
Deepseek infra is not stable enough to get through integration tests.

The previous two attempts had two tests time out; both pass locally.
2025-03-21 17:08:35 +00:00
ccurme
7147903724 deepseek: release 0.1.3 (#30422) 2025-03-21 16:39:50 +00:00
Andras L Ferenczi
b5f49df86a partner: ChatDeepSeek on openrouter not returning reasoning (#30240)
Deepseek model does not return reasoning when hosted on openrouter
(Issue [30067](https://github.com/langchain-ai/langchain/issues/30067))

the following code did not return reasoning:

```python
llm = ChatDeepSeek( model = 'deepseek/deepseek-r1:nitro', api_base="https://openrouter.ai/api/v1", api_key=os.getenv("OPENROUTER_API_KEY")) 
messages = [
    {"role": "system", "content": "You are an assistant."},
    {"role": "user", "content": "9.11 and 9.8, which is greater? Explain the reasoning behind this decision."}
]
response = llm.invoke(messages, extra_body={"include_reasoning": True})
print(response.content)
print(f"REASONING: {response.additional_kwargs.get('reasoning_content', '')}")
print(response)
```

The fix is to extract reasoning from
response.choices[0].message["model_extra"] and from
choices[0].delta["reasoning"], and place it in the response's
additional_kwargs. The change is really just the addition of a couple of
one-line if statements.

---------

Co-authored-by: andrasfe <andrasf94@gmail.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-21 16:35:37 +00:00
Vadym Barda
4852ab8d0a core[patch]: more tests for trim_messages (#30421) 2025-03-21 16:19:52 +00:00
ccurme
e8e3b2bfae ollama: release 0.3.0 (#30420) 2025-03-21 15:50:08 +00:00
Jojo
8f300740ed docs: fix several typos in docs/docs/how_to/split_html.ipynb (#30407)
Fix several typos in docs/docs/how_to/split_html.ipynb
* `structered` should be `structured`
* `signifcant` should be `significant`
* `seperator` should be `separator`
2025-03-21 11:46:26 -04:00
Jojo
c77ee99980 docs: fix typo in chat_history.ipynb (#30406)
`peristence` should be `persistence`
2025-03-21 11:45:52 -04:00
Jojo
f657b19a24 docs: Fix typo in chat_history.ipynb (#30405)
`repsonse` should be `response`
2025-03-21 11:45:31 -04:00
Bob Merkus
5700646cc5 ollama: add reasoning model support (e.g. deepseek) (#29689)
# Description
This PR adds reasoning model support for `langchain-ollama` by
extracting reasoning token blocks, like those used in deepseek. It was
inspired by
[ollama-deep-researcher](https://github.com/langchain-ai/ollama-deep-researcher),
specifically the parsing of [thinking
blocks](6d1aaf2139/src/assistant/graph.py (L91)):
```python
  # TODO: This is a hack to remove the <think> tags w/ Deepseek models 
  # It appears very challenging to prompt them out of the responses 
  while "<think>" in running_summary and "</think>" in running_summary:
      start = running_summary.find("<think>")
      end = running_summary.find("</think>") + len("</think>")
      running_summary = running_summary[:start] + running_summary[end:]
```

This notes that it is very hard to remove the reasoning block from
prompting, but we actually want the model to reason in order to increase
model performance. This implementation extracts the thinking block, so
the client can still expect a proper message to be returned by
`ChatOllama` (and use the reasoning content separately when desired).

This implementation takes the same approach as
[ChatDeepseek](5d581ba22c/libs/partners/deepseek/langchain_deepseek/chat_models.py (L215)),
which adds the reasoning content to
chunk.additional_kwargs.reasoning_content;
```python
  if hasattr(response.choices[0].message, "reasoning_content"):  # type: ignore
      rtn.generations[0].message.additional_kwargs["reasoning_content"] = (
          response.choices[0].message.reasoning_content  # type: ignore
      )
```

This should probably be handled upstream in ollama + ollama-python, but
this seems like a reasonably effective solution. This is a standalone
example of what is happening;

```python
async def deepseek_message_astream(
    llm: BaseChatModel,
    messages: list[BaseMessage],
    config: RunnableConfig | None = None,
    *,
    model_target: str = "deepseek-r1",
    **kwargs: Any,
) -> AsyncIterator[BaseMessageChunk]:
    """Stream responses from Deepseek models, filtering out <think> tags.

    Args:
        llm: The language model to stream from
        messages: The messages to send to the model

    Yields:
        Filtered chunks from the model response
    """
    # check if the model is deepseek based
    if (llm.name and model_target not in llm.name) or (hasattr(llm, "model") and model_target not in llm.model):
        async for chunk in llm.astream(messages, config=config, **kwargs):
            yield chunk
        return

    # Yield with a buffer, upon completing the <think></think> tags, move them to the reasoning content and start over
    buffer = ""
    async for chunk in llm.astream(messages, config=config, **kwargs):
        # start or append
        if not buffer:
            buffer = chunk.content
        else:
            buffer += chunk.content if hasattr(chunk, "content") else chunk

        # Process buffer to remove <think> tags
        if "<think>" in buffer or "</think>" in buffer:
            if hasattr(chunk, "tool_calls") and chunk.tool_calls:
                raise NotImplementedError("tool calls during reasoning should be removed?")
            if "<think>" in chunk.content or "</think>" in chunk.content:
                continue
            chunk.additional_kwargs["reasoning_content"] = chunk.content
            chunk.content = ""
        # upon block completion, reset the buffer
        if "<think>" in buffer and "</think>" in buffer:
            buffer = ""
        yield chunk

```

# Issue
Integrating reasoning models (e.g. deepseek-r1) into existing LangChain
based workflows is hard due to the thinking blocks that are included in
the message contents. To avoid this, we could match the `ChatOllama`
integration with `ChatDeepseek` to return the reasoning content inside
`message.additional_arguments.reasoning_content` instead.

# Dependencies
None

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-21 15:44:54 +00:00
ccurme
d8145dda95 xai: release 0.2.2 (#30403) 2025-03-20 20:25:16 +00:00
ccurme
e194902994 mistral: release 0.2.9 (#30402) 2025-03-20 20:22:24 +00:00
ccurme
49466ec9ca groq: release 0.3.1 (#30401) 2025-03-20 20:19:49 +00:00
ccurme
db1e340387 fireworks: release 0.2.8 (#30400) 2025-03-20 16:15:51 -04:00
ccurme
238f7fb345 docs: add links in Writer provider page (#30399) 2025-03-20 16:13:48 -04:00
ccurme
785a8e7d45 tests: release 0.3.15 (#30397) 2025-03-20 15:38:40 -04:00
ccurme
5588ca4cfb core: release 0.3.47 (#30396) 2025-03-20 18:52:53 +00:00
ccurme
de3960d285 multiple: enforce standards on tool_choice (#30372)
- Test if models support forcing tool calls via `tool_choice`. If they
do, they should support
  - `"any"` to specify any tool
  - the tool name as a string to force calling a particular tool
- Add `tool_choice` to signature of `BaseChatModel.bind_tools` in core
- Deprecate `tool_choice_value` in standard tests in favor of a boolean
`has_tool_choice`

Will follow up with PRs in external repos (tested in AWS and Google
already).
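In sketch form, for a model that supports forcing tool calls (the model name is illustrative):

```python
from langchain_openai import ChatOpenAI

def get_weather(location: str) -> str:
    """Get the weather at a location."""
    return "It's sunny."

llm = ChatOpenAI(model="gpt-4o-mini")

llm_any = llm.bind_tools([get_weather], tool_choice="any")            # force calling some tool
llm_named = llm.bind_tools([get_weather], tool_choice="get_weather")  # force this specific tool
```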
2025-03-20 17:48:59 +00:00
ccurme
b86cd8270c multiple: support strict and method in with_structured_output (#30385) 2025-03-20 13:17:07 -04:00
Mohammad Mohtashim
1103bdfaf1 (Ollama) Fix String Value parsing in _parse_arguments_from_tool_call (#30154)
- **Description:** Fix String Value parsing in
_parse_arguments_from_tool_call
- **Issue:** #30145

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-19 21:47:18 -04:00
Daniel Liden
c0ffc9aa29 Update MLflow integration docs with concise examples and external links (#30082)
- **Description:** This PR updates the [MLflow
integration](https://python.langchain.com/docs/integrations/providers/mlflow_tracking/)
docs. This PR is based on feedback and suggestions from @efriis on
#29612 . This proposed revision is much shorter, does not contain
images, and links out to the MLflow docs rather than providing lengthy
descriptions directly within these docs. Thank you for taking another
look!

- **Issue:** NA
- **Dependencies:** NA

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-20 00:25:10 +00:00
Tim König
b5992695ae community: add ZoteroRetriever (#30270)
**Description** 
This contribution adds a retriever for the Zotero API.
[Zotero](https://www.zotero.org/) is an open source reference manager
for bibliographic data and related research materials. A retriever will
allow langchain applications to retrieve relevant documents from
personal or shared group libraries, which I believe will be helpful for
numerous applications, such as RAG systems, personal research
assistants, etc. Tests and docs were added.

The documentation provided assumes the retriever will be part of the
langchain-community package, as this seemed customary. Please let me
know if this is not the preferred way to do it. I also uploaded the
implementation to PyPI.

**Dependencies**
The retriever requires the `pyzotero` package for API access. This
dependency is stated in the docs, and the retriever will return an error
if the package is not found. However, this dependency is not added to
the langchain package itself.

**Twitter handle**
I'm no longer using Twitter, but I'd appreciate a shoutout on
[Bluesky](https://bsky.app/profile/koenigt.bsky.social) or
[LinkedIn](https://www.linkedin.com/in/dr-tim-k%C3%B6nig-534aa2324/)!


Let me know if there are any issues, I'll gladly try and sort them out!
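A hypothetical usage sketch — the import path and parameter names are assumptions based on the description above and typical pyzotero usage, not confirmed API:

```python
# hypothetical import path; the retriever was proposed for langchain-community
from langchain_community.retrievers import ZoteroRetriever

retriever = ZoteroRetriever(
    k=5,
    library_id="123456",   # Zotero library ID (assumed parameter)
    library_type="user",   # or "group" (assumed parameter)
    api_key="...",         # Zotero API key (assumed parameter)
)
docs = retriever.invoke("transformer language models")
```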

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-19 20:19:32 -04:00
ccurme
aa5ac9279a docs: update tavily guides (#30387)
AgentExecutor -> langgraph
2025-03-19 19:29:57 -04:00
pulvedu
4346aca5cf Integration update (#30381)
This pull request includes changes to the following:
- docs/docs/integrations/tools/tavily_search.ipynb 
- docs/docs/integrations/tools/tavily_extract.ipynb
- added docs/docs/integrations/providers/tavily.mdx

---------

Co-authored-by: pulvedu <dustin@tavily.com>
2025-03-19 17:58:25 -04:00
Daniel Rauber
9b687d7fbd community[minor]: PlaywrightURLLoader can take stored session file (#30152)
**Description:**
Implements an additional `browser_session` parameter on
PlaywrightURLLoader which can be used to initialize the browser context
by providing a stored playwright context.
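A sketch of the intended usage; the exact value type for `browser_session` is an assumption:

```python
from langchain_community.document_loaders import PlaywrightURLLoader

loader = PlaywrightURLLoader(
    urls=["https://example.com/dashboard"],
    # assumed: a stored Playwright storage state (e.g. saved login cookies)
    # used to initialize the browser context
    browser_session="auth_state.json",
)
docs = loader.load()
```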
2025-03-19 16:29:07 -04:00
Ikko Eltociear Ashimine
bffa530816 docs: update contextual.ipynb (#30384)
intialize -> initialize
2025-03-19 15:48:58 -04:00
Yeonseolee
65b16d3200 Docs: Fix deprecated initialize agent in ainetwork (#30355)
## Description
- Replaced `initialize_agent`, `AgentType` usage in ainetwork
integration
- Updated usage example to `create_react_agent` in langgraph

## Issue
- #29277

## Dependencies
- N/A

## Twitter handle
- I don't use Twitter
2025-03-19 15:20:21 -04:00
Vadym Barda
73c04f4707 core[patch]: release 0.3.46 (#30383) 2025-03-19 15:09:08 -04:00
William FH
ce84f8ba7e Dereference run tree (#30377) 2025-03-19 19:05:06 +00:00
William FH
8265be4d3e Unset context to None in var (#30380) 2025-03-19 18:53:17 +00:00
William FH
4130e6476b Unset context after step (#30378)
While we are already careful to copy before setting the config, if other
objects hold a reference to the config or context, it wouldn't be
cleared.
2025-03-19 11:46:23 -07:00
Vadym Barda
37190881d3 core[patch]: add util for approximate token counting (#30373) 2025-03-19 17:48:38 +00:00
Brandon Luu
5ede4248ef docs: Update Vector Store docs formatting (#30359)
Description: Fix formatting in Vector Stores docs.

- astradb: fix API ref spacing
- milvus, pgvector, pinecone, qdrant: removed % in cmds for docs
consistency
- pgvector: removed redundant code and reorganized imports

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-19 15:54:18 +00:00
Matthew Farrellee
5f812f5968 langchain-tests: skip instead of passing image message tests (#30375)
**Description:** use skip for image message tests
2025-03-19 15:35:32 +00:00
ccurme
aae8306d6c groq: release 0.3.0 (#30374) 2025-03-19 15:23:30 +00:00
Ashwin
83cfb9691f Fix typo: change 'ben' to 'be' in comment (#30358)
**Description:**  
This PR fixes a minor typo in the comments within
`libs/partners/openai/langchain_openai/chat_models/base.py`. The word
"ben" has been corrected to "be" for clarity and professionalism.

**Issue:**  
N/A

**Dependencies:**  
None
2025-03-19 10:35:35 -04:00
pudongair
4d1d726e61 docs: fix some typos (#30367)
---------

Signed-off-by: pudongair <744355276@qq.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-19 13:26:07 +00:00
Florian Chappaz
07cb41ea9e community: aligning ChatLiteLLM default parameters with litellm (#30360)
**Description:**
Since `ChatLiteLLM` is forwarding most parameters to
`litellm.completion(...)`, there is no reason to set other default
values than the ones defined by `litellm`.

In the case of the parameter 'n', it also causes an issue when trying to
call a serverless endpoint on Azure, as it is considered an extra
parameter. So we need to keep it optional.

We can debate about backward compatibility of this change: in my
opinion, there should not be big issues since from my experience,
calling `litellm.completion()` without these parameters works fine.

**Issue:** 
- #29679 

**Dependencies:** None
2025-03-19 09:07:28 -04:00
Hodory
57ffacadd0 community: add keep_newlines parameter to process_pages method (#30365)
- **Description:** Adding keep_newlines parameter to process_pages
method with page_ids on Confluence document loader
- **Issue:** N/A (This is an enhancement rather than a bug fix)
- **Dependencies:** N/A
- **Twitter handle:** N/A
2025-03-19 08:57:59 -04:00
ccurme
0ba03d8f3a Revert "docs: Refactored AWS Lambda Tool to Use AgentExecutor instead of initialize agent " (#30357)
Reverts langchain-ai/langchain#30267

Code is broken.
2025-03-19 03:17:47 +00:00
William FH
f5a0092551 Rm test for parent_run presence (#30356) 2025-03-18 19:44:19 -07:00
Mark Perfect
38b48d257d docs: Fix Qdrant sparse and hybrid vector search (#30208)
- **Description:** Updated the sparse and hybrid vector search due to changes in the Qdrant API, and cleaned up the notebook

Co-authored-by: Mark Perfect <mark.anthony.perfect1@gmail.com>
2025-03-18 22:44:12 -04:00
Adam Brenner
f949d9a3d3 docs: Add Dell PowerScale Document Loader (#30209)
# Description
Adds documentation on the LangChain website for a Dell-specific document
loader for on-prem storage devices. Additional details on what the
document loader does are described in the PR as well as in our GitHub repo:
[https://github.com/dell/powerscale-rag-connector](https://github.com/dell/powerscale-rag-connector)

This PR also creates a category on the document loader webpage as no
existing category exists for on-prem. This follows the existing pattern
already established as the website has a category for cloud providers.

# Issue:
New release, no issue.

# Dependencies:

None

# Twitter handle:

DellTech

---------

Signed-off-by: Adam Brenner <adam@aeb.io>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-18 22:39:21 -04:00
ccurme
9fb0db6937 community: release 0.3.20 (#30354) 2025-03-18 21:57:12 +00:00
ccurme
168f1dfd93 langchain[patch]: update text-splitters min bound (#30352) 2025-03-18 20:53:43 +00:00
ccurme
f6cf2ce2ad langchain[patch]: lock with latest text-splitters (#30350) 2025-03-18 19:29:11 +00:00
ccurme
2909b49045 langchain: release 0.3.21 (#30348) 2025-03-18 19:13:20 +00:00
ccurme
958f85d541 text-splitters: release 0.3.7 (#30347) 2025-03-18 19:11:37 +00:00
Aniket kadukar
36412c02b6 docs: Fix typo "tall" → "tool" in tools_human.ipynb (#30345)
This PR fixes a minor typo.

The word "tall" was mistakenly used instead of "tool." 

I have corrected it to "tool" for better clarity and accuracy.
2025-03-18 13:12:56 -04:00
Lance Martin
46d6bf0330 ollama[minor]: update default method for structured output (#30273)
From function calling to Ollama's [dedicated structured output
feature](https://ollama.com/blog/structured-outputs).
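In sketch form (the model name is illustrative):

```python
from pydantic import BaseModel
from langchain_ollama import ChatOllama

class Joke(BaseModel):
    setup: str
    punchline: str

llm = ChatOllama(model="llama3.1")  # illustrative model name
# with_structured_output now defaults to Ollama's native structured
# output feature (format=<JSON schema>) rather than function calling
structured_llm = llm.with_structured_output(Joke)
joke = structured_llm.invoke("Tell me a joke about cats")
```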

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-18 12:44:22 -04:00
Marlene
ff8ce60dcc Core: Adding Azure AI to Supported Chat Models (#30342)
- **Description:** I was testing out `init_chat` and saw that chat
models can now be inferred. Currently only Azure OpenAI is supported,
but we would like to add support for Azure AI, which is a different
package. This PR edits the `base.py` file to add the chat implementation.
- I don't think this adds any additional dependencies 
- Will add a test and lint, but starting an initial draft PR. 

cc @santiagxf

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-18 11:53:20 -04:00
TheSongg
251551ccf1 doc: Implement langchain-xinference (#30296)
Implement a standalone package for Xinference chat models and LLM models.

https://github.com/langchain-ai/langchain/issues/30045#issue-2887214214
2025-03-18 11:50:16 -04:00
Oskar Stark
492b4c1604 docs(readthedocs): streamline config (#30307) 2025-03-18 11:47:45 -04:00
wenmeng zhou
5a6e1254a7 support return reasoning content for models like qwq in dashscope (#30317)

Here is an example:
```python
from langchain_community.chat_models.tongyi import ChatTongyi
from langchain_core.messages import HumanMessage

chatLLM = ChatTongyi(
    model="qwq-32b",   # refer to  https://help.aliyun.com/zh/model-studio/getting-started/models for more models
)
res = chatLLM.stream([HumanMessage(content="how much is 1 plus 1")])
for r in res:
    print(r)
```

```shell
content='' additional_kwargs={'reasoning_content': 'Okay, so the'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' user is asking "'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': 'how much is 1 plus'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
... (the model's reasoning continues to stream chunk by chunk in additional_kwargs['reasoning_content'] while content stays empty) ...
content='The result' additional_kwargs={'reasoning_content': ''} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content=' of 1 plus' additional_kwargs={'reasoning_content': ''} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content=' 1 is **2**.' additional_kwargs={'reasoning_content': ''} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
... (the final answer then streams in content while reasoning_content stays empty) ...
content='' additional_kwargs={'reasoning_content': ''} response_metadata={'finish_reason': 'stop', 'request_id': '4738c641-6bd8-9efc-a4fe-d929d4e62bef', 'token_usage': {'input_tokens': 16, 'output_tokens': 560, 'total_tokens': 576}} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'

```

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-03-18 11:43:10 -04:00
ccurme
b91daf06eb groq[minor]: remove default model (#30341)
The default model for `ChatGroq`, `"mixtral-8x7b-32768"`, is being
retired on March 20, 2025. Here we remove the default, such that model
names must be explicitly specified (being explicit is a good practice
here, and avoids the need for breaking changes down the line). This
change will be released in a minor version bump to 0.3.

This follows https://github.com/langchain-ai/langchain/pull/30161
(released in version 0.2.5), where we began generating warnings to this
effect.
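After this change, the model must be named explicitly (the model name below is illustrative):

```python
from langchain_groq import ChatGroq

llm = ChatGroq(model="llama-3.1-8b-instant")  # an explicit model is now required
```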

![Screenshot 2025-03-18 at 10 33
27 AM](https://github.com/user-attachments/assets/f1e4b302-c62a-43b0-aa86-eaf9271e86cb)
2025-03-18 10:50:34 -04:00
amuwall
f6a17fbc56 community: fix import exception too constrictive (#30218)
Fix this issue #30097
2025-03-17 22:09:02 -04:00
Aryan Agarwal
7ff7c4f81b docs: Refactored AWS Lambda Tool to Use AgentExecutor instead of initialize agent (#30267)
## Description:
- Removed deprecated `initialize_agent()` usage in AWS Lambda
integration.
- Replaced it with `AgentExecutor` for compatibility with LangChain
v0.3.
- Fixed documentation linting errors.

## Issue:
- No specific issue linked, but this resolves the use of deprecated
agent initialization.

## Dependencies:
- No new dependencies added.

## Request for Review:
- Please verify if the implementation is correct.
- If approved and merged, I will proceed with updating other related
files.

## Twitter Handle (Optional):
I don't have a Twitter but here is my LinkedIn instead
(https://www.linkedin.com/in/aryan1227/)
2025-03-17 22:04:13 -04:00
qonnop
036f00dc92 community: support in-memory data (Blob.from_data) in all audio parsers (#30262)
OpenAIWhisperParser, OpenAIWhisperParserLocal, YandexSTTParser do not
handle in-memory audio data (loaded via Blob.from_data) correctly. They
require Blob.path to be set and AudioSegment is always read from the
file system. In-memory data is handled correctly only for
FasterWhisperParser so far. I changed OpenAIWhisperParser,
OpenAIWhisperParserLocal, YandexSTTParser accordingly to match
FasterWhisperParser.
Thanks for reviewing the PR!
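A sketch of the in-memory path that now works for all of these parsers (file name is illustrative; OpenAIWhisperParser also expects an OpenAI API key):

```python
from langchain_community.document_loaders.blob_loaders import Blob
from langchain_community.document_loaders.parsers.audio import OpenAIWhisperParser

with open("speech.mp3", "rb") as f:
    blob = Blob.from_data(f.read(), mime_type="audio/mpeg")  # no Blob.path set

parser = OpenAIWhisperParser()
documents = parser.parse(blob)  # previously this required blob.path on disk
```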

Co-authored-by: qonnop <qonnop@users.noreply.github.com>
2025-03-17 19:52:33 -04:00
Ke Liu
98a9ef19ec fix typo (#30298)
fix typo in "environment"
2025-03-17 23:37:43 +00:00
Matthew Farrellee
1985aaf095 langchain-tests: allow subclasses to add addition, non-standard tests (#30204)
**description:** the ChatModel[Integration]Tests classes are powerful
and helpful, this change allows sub-classes to add additional tests.

for instance,

```python
class TestChatMyServiceIntegration(ChatModelIntegrationTests):
    ...
    def test_myservice(self, model: BaseChatModel) -> None:
        ...
```

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-03-17 23:37:16 +00:00
Ben
789db7398b text-splitters: Add JSFrameworkTextSplitter for Handling JavaScript Framework Code (#28972)
## Description
This pull request introduces a new text splitter,
`JSFrameworkTextSplitter`, to the Langchain library. The
`JSFrameworkTextSplitter` extends the `RecursiveCharacterTextSplitter`
to handle JavaScript framework code effectively, including React (JSX),
Vue, and Svelte. It identifies and utilizes framework-specific component
tags and syntax elements as splitting points, alongside standard
JavaScript syntax. This ensures that code is divided at natural
boundaries, enhancing the parsing and processing of JavaScript and
framework-specific code.

### Key Features
- Supports React (JSX), Vue, and Svelte frameworks.
- Identifies and uses framework-specific tags and syntax elements as
natural splitting points.
- Extends the existing `RecursiveCharacterTextSplitter` for seamless
integration.

## Issue
No specific issue addressed.

## Dependencies
No additional dependencies required.
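A hypothetical usage sketch — the import path and parameters are assumptions based on the description above:

```python
# hypothetical import path within langchain-text-splitters
from langchain_text_splitters import JSFrameworkTextSplitter

react_source = """
import React from 'react';
export default function App() {
  return <div><Header /><Content /></div>;
}
"""

splitter = JSFrameworkTextSplitter(chunk_size=100, chunk_overlap=0)
chunks = splitter.split_text(react_source)
```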

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-03-17 23:32:33 +00:00
Bagatur
9b48e4c2b0 docs: update langsmith env vars (#30331) 2025-03-17 14:35:22 -07:00
ccurme
54eab796ab docs: update chat model tabs (#30330) 2025-03-17 15:39:10 -04:00
Oskar Stark
4c68749e38 docs(openai): use bold over ticks (#30303)
We are not talking about code here
2025-03-17 18:53:16 +00:00
Oskar Stark
531319f65b infra(GHA): description is required based on schema definition (#30305) 2025-03-17 18:42:42 +00:00
Oskar Stark
192035f8c0 docs: fix typo (#30310) 2025-03-17 16:54:32 +00:00
César García
a5eca20e1b docs: Fix for broken links for Docling project sites (#30313)
**Description:** Updated Docling project URLs from ds4sd.github.io to
docling-project.github.io/docling/
 **Issue:** #30312
 **Dependencies:** None
 **Twitter handle**: @lahoramaker
2025-03-17 16:47:09 +00:00
Oskar Stark
620d723fbf infra(GHA): remove unused --- at the beginning of some workflows (#30306) 2025-03-17 16:46:32 +00:00
César García
2c8b8114fa docs: Updated link to current Unstructured docs (#30316)
The former link led to a site that explains that the docs have moved,
but did not redirect the user to the actual site automatically. I just
copied the provided URL, checked that it works, and updated the link to
the current version.


**Description:** Updated the link to Unstructured Docs at
https://docs.unstructured.io
**Issue:** #30315 
**Dependencies:** None
**Twitter handle:** @lahoramaker
2025-03-17 16:46:28 +00:00
ccurme
e2ab4ccab3 docs: update min langchain-openai version in integrations page (#30326)
No longer RC
2025-03-17 16:45:47 +00:00
ccurme
5684653775 openai[patch]: release 0.3.9 (#30325) 2025-03-17 16:08:41 +00:00
ccurme
eb9b992aa6 openai[patch]: support additional Responses API features (#30322)
- Include response headers
- Max tokens
- Reasoning effort
- Fix bug with structured output / strict
- Fix bug with simultaneous tool calling + structured output
2025-03-17 12:02:21 -04:00
Bae-ChangHyun
d8510270ee community: add 'extract' mode to FireCrawlLoader for structured data extraction (#30242)
**Description:** 
Added an 'extract' mode to FireCrawlLoader that enables structured data
extraction from web pages. This feature allows users to extract
structured data from a single URL or from entire websites using Large
Language Models (LLMs).
More parameters and usage details are available in the [FireCrawl
docs](https://docs.firecrawl.dev/features/extract-beta).
Currently only one URL can be extracted at a time (this depends on
FireCrawl's extract method).

**Dependencies:** 
No new dependencies required. Uses existing FireCrawl API capabilities.
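For illustration, a minimal sketch (assuming a FIRECRAWL_API_KEY
environment variable; the params payload shown is a hypothetical
example) of the new mode:

```python
from langchain_community.document_loaders import FireCrawlLoader

loader = FireCrawlLoader(
    url="https://firecrawl.dev",
    mode="extract",  # the mode added by this PR
    params={"prompt": "Extract the page title and a one-sentence summary."},
)
docs = loader.load()
```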

---------

Co-authored-by: chbae <chbae@gcsc.co.kr>
Co-authored-by: ccurme <chester.curme@gmail.com>
2025-03-17 15:15:57 +00:00
qonnop
747efa16ec community: fix CPU support for FasterWhisperParser (implicit compute type for WhisperModel) (#30263)
FasterWhisperParser fails on a machine without an NVIDIA GPU: "Requested
float16 compute type, but the target device or backend do not support
efficient float16 computation." This problem arises because the
WhisperModel is called with compute_type="float16", which works only on
NVIDIA GPUs.

According to the [CTranslate2
docs](https://opennmt.net/CTranslate2/quantization.html#bit-floating-points-float16)
float16 is supported only on NVIDIA GPUs. Removing the compute_type
parameter solves the problem for CPUs. According to the [CTranslate2
docs](https://opennmt.net/CTranslate2/quantization.html#quantize-on-model-loading)
setting compute_type to "default" (the standard when the parameter is
omitted) uses the original compute type of the model or performs
implicit conversion for the specific computation device (GPU or CPU). I
suggest removing compute_type="float16".
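For illustration, a minimal sketch (assuming faster-whisper is installed
and a local "speech.mp3" exists) of the proposed fix:

```python
from faster_whisper import WhisperModel

# Omitting compute_type leaves it at "default", so CTranslate2 picks an
# appropriate compute type for the target device (GPU or CPU).
model = WhisperModel("base")
segments, _info = model.transcribe("speech.mp3")
for segment in segments:
    print(segment.text)
```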

@hulitaitai you are the original author of the FasterWhisperParser - is
there a reason for setting the parameter to float16?

Thanks for reviewing the PR!

Co-authored-by: qonnop <qonnop@users.noreply.github.com>
2025-03-14 22:22:29 -04:00
ccurme
c74e7b997d openai[patch]: support structured output via Responses API (#30265)
Also runs all standard tests using Responses API.
2025-03-14 15:14:23 -04:00
Priyansh Agrawal
f54f14b747 community: cube document loader - do not load non-public dimensions and measures (#30286)

- **Description:** Do not load non-public dimensions and measures
(public: false) with Cube semantic loader

- **Issue:** Currently, non-public dimensions and measures are loaded by
the Cube document loader, which leads to downstream applications using
them, which Cube does not allow.


2025-03-14 15:07:56 -04:00
Stavros Kontopoulos
ac22cde130 langchain_ollama: Support keep_alive in embeddings (#30251)
- Description: Adds support for keep_alive in Ollama embeddings; see
https://github.com/ollama/ollama/issues/6401.
Builds on top of https://github.com/langchain-ai/langchain/pull/29296.
I have a use case where I want to keep the embeddings model in CPU
memory forever.
- Dependencies: no deps are being introduced.
- Issue: haven't created an issue yet.
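For illustration, a minimal sketch (assuming a local Ollama server with
the "nomic-embed-text" model pulled) of the new option:

```python
from langchain_ollama import OllamaEmbeddings

embeddings = OllamaEmbeddings(
    model="nomic-embed-text",
    keep_alive=-1,  # per Ollama's keep_alive semantics, keep the model loaded indefinitely
)
vector = embeddings.embed_query("hello world")
```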
2025-03-14 14:56:50 -04:00
Matthew Farrellee
65a8f30729 docs: update json-mode docs to use with_structured_output(method="json_mode") (#30291)
**Description:** update the json-mode concepts doc to use
method="json_mode" instead of model_kwargs w/ response_format
**Issue:** https://github.com/langchain-ai/langchain/issues/30290
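For illustration, a minimal sketch of the documented pattern (model name
is just an example):

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")
# Request JSON mode directly instead of passing response_format via model_kwargs.
structured_llm = llm.with_structured_output(method="json_mode")
response = structured_llm.invoke(
    "Respond in JSON with keys 'answer' and 'justification'. What is 2 + 2?"
)
```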
2025-03-14 14:54:57 -04:00
ccurme
18f9b5d8ab docs: update contributing doc (#30292) 2025-03-14 18:54:11 +00:00
homeffjy
2c99f12062 community[patch]: fix bilibili loader handling of multi-page content (#30283)
Previously the loader would only extract subtitles from the first page
of multi-page videos.
2025-03-14 14:53:03 -04:00
ccurme
0b80bec015 docs: fix typo (#30288) 2025-03-14 17:09:38 +00:00
ccurme
d5d0134e7b anthropic: release 0.3.10 (#30287) 2025-03-14 16:23:21 +00:00
ccurme
226f29bc96 anthropic: support built-in tools, improve docs (#30274)
- Support features from recent update:
https://www.anthropic.com/news/token-saving-updates (mostly adding
support for built-in tools in `bind_tools`)
- Add documentation around prompt caching, token-efficient tool use, and
built-in tools.
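For illustration, a minimal sketch of binding a built-in tool (the tool
type and name follow Anthropic's announcement; treat the exact
identifiers as assumptions):

```python
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-3-7-sonnet-latest")
# Built-in (server-side) tools are passed as plain dicts to bind_tools.
llm_with_tools = llm.bind_tools(
    [{"type": "text_editor_20250124", "name": "str_replace_editor"}]
)
response = llm_with_tools.invoke("There's a syntax error in my primes.py file.")
```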
2025-03-14 16:18:50 +00:00
Priyansh Agrawal
f27e2d7ce7 community: cube document loader - fix logging (#30285)

- **Description:** Fix the bad log message on line 56 and replace
f-string logs with format specifiers

- **Issue:** Log messages such as this one:
`INFO:langchain_community.document_loaders.cube_semantic:Loading
dimension values for: {dimension_name}...`

2025-03-14 11:36:18 -04:00
ccurme
bbd4b36d76 mistralai[patch]: bump core (#30278) 2025-03-13 23:04:36 +00:00
ccurme
315bb17ef5 core: release 0.3.45 (#30277) 2025-03-13 22:44:23 +00:00
pulvedu
d0bfc7f820 community[fix] : Pass API_KEY as argument (#30272)
PR Title:
community: fix passing API_KEY as an argument

PR Message:
Description:
This PR fixes the validation error "Value error, Did not find
tavily_api_key, please add an environment variable `TAVILY_API_KEY`
which contains it, or pass `tavily_api_key` as a named parameter."

Dependencies:
No new dependencies introduced.
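For illustration, a minimal sketch (with a placeholder key) of passing
the key as a named parameter instead of relying on the environment
variable:

```python
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_community.utilities.tavily_search import TavilySearchAPIWrapper

wrapper = TavilySearchAPIWrapper(tavily_api_key="tvly-...")  # placeholder key
tool = TavilySearchResults(api_wrapper=wrapper)
results = tool.invoke({"query": "LangChain"})
```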

---------

Co-authored-by: pulvedu <dustin@tavily.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-13 22:19:31 +00:00
ccurme
5e0fa2cce5 infra: update release pipeline (#30276)
Instead of attempting to conditionally `needs` a job, always run the job
and exit successfully if it is not needed.
2025-03-13 18:10:59 -04:00
ccurme
733abcc884 mistral: release 0.2.8 (#30275) 2025-03-13 21:54:34 +00:00
Jacob Lee
e9c1765967 fix(core): Ignore missing secrets on deserialization (#30252) 2025-03-13 12:27:03 -07:00
ccurme
ebea5e014d standard tests: test simple agent loop (#30268) 2025-03-13 16:34:12 +00:00
ccurme
5237987643 docs: update readme (#30239)
Co-authored-by: Vadym Barda <vadym@langchain.dev>
2025-03-12 13:45:13 -04:00
ccurme
cd1ea8e94d openai[patch]: support Responses API (#30231)
Co-authored-by: Bagatur <baskaryan@gmail.com>
2025-03-12 12:25:46 -04:00
Jason Zhang
49bdd3b6fe docs: Add AgentQL provider doc, tool/toolkit doc and documentloader doc (#30144)
- **Description:** Added AgentQL docs for the provider page, tools page
and documentloader page
- **Twitter handle:** @AgentQL

Repo:
https://github.com/tinyfish-io/agentql-integrations/tree/main/langchain
PyPI: https://pypi.org/project/langchain-agentql/


---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-11 21:57:40 -04:00
Vadym Barda
23fa70f328 core[patch]: release 0.3.44 (#30236) 2025-03-11 18:59:02 -04:00
Vadym Barda
c7842730ef core[patch]: support single-node subgraphs and put subgraph nodes under the respective subgraphs (#30234) 2025-03-11 18:55:45 -04:00
Dharshan A
81d1653a30 docs: Fix typo in Generating Examples section of few-shot prompting doc (#30219)
2025-03-11 09:44:20 -04:00
ccurme
27d86d7bc8 infra: update release workflow (#30207)
Fix condition
2025-03-10 17:53:03 -04:00
ccurme
70fc0b8363 infra: update release workflow (#30203) 2025-03-10 20:18:33 +00:00
ccurme
62c570dd77 standard-tests, openai: bump core (#30202) 2025-03-10 19:22:24 +00:00
ccurme
38420ee76e docs: add note on Deepseek R1 (#30201) 2025-03-10 15:17:20 -04:00
ccurme
f896e701eb deepseek: install local langchain-tests in test deps (#30198) 2025-03-10 16:58:17 +00:00
ccurme
7b8f266039 infra: additional testing on core release (#30180)
Here we add a job to the release workflow that, when releasing
`langchain-core`, tests prior published versions of select packages
against the new version of core. We limit the testing to the most recent
published versions of langchain-anthropic and langchain-openai.

This is designed to catch backward-incompatible updates to core. We
sometimes update core and downstream packages simultaneously, so there
may not be any commit in the history at which tests would fail. So
although core and latest downstream packages could be consistent, we can
benefit from testing prior versions of downstream packages against core.

I tested the workflow by simulating a [breaking
change](d7287248cf)
in core and running it with publishing steps disabled:
https://github.com/langchain-ai/langchain/actions/runs/13741876345. The
workflow correctly caught the issue.
2025-03-10 08:59:59 -04:00
Hugh Gao
aa6dae4a5b community: Remove the system message count limit for ChatTongyi. (#30192)
## Description
The models in DashScope support multiple SystemMessages. Here is the
[doc](https://bailian.console.aliyun.com/model_experience_center/text#/model-market/detail/qwen-long?tabKey=sdk),
and the example code from the documentation page:
```python
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),  # if the environment variable is not set, replace this with your API key
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",  # DashScope service base_url
)
# Initialize the messages list
completion = client.chat.completions.create(
    model="qwen-long",
    messages=[
        {'role': 'system', 'content': 'You are a helpful assistant.'},
        # Replace 'file-fe-xxx' with the file-id used in your actual conversation scenario.
        {'role': 'system', 'content': 'fileid://file-fe-xxx'},
        {'role': 'user', 'content': '这篇文章讲了什么?'}  # "What does this article talk about?"
    ],
    stream=True,
    stream_options={"include_usage": True}
)

full_content = ""
for chunk in completion:
    if chunk.choices and chunk.choices[0].delta.content:
        # Append the streamed output
        full_content += chunk.choices[0].delta.content
        print(chunk.model_dump())

print(full_content)
```
Tip: The example code uses the OpenAI SDK, but the document says it also
supports the DashScope API; I tested it, and it works.
```
Is the Dashscope SDK invocation method compatible?

Yes, the Dashscope SDK remains compatible for model invocation. However, file uploads and file-ID retrieval are currently only supported via the OpenAI SDK. The file-ID obtained through this method is also compatible with Dashscope for model invocation.
```
2025-03-10 08:58:40 -04:00
Dharshan A
34e94755af Fix typo in astream_events in streaming docs (#30195)
2025-03-10 08:56:07 -04:00
ccurme
67aff1648b community: Add OpenGradient integration (Toolkit) (#30190)
Commandeering https://github.com/langchain-ai/langchain/pull/30135

---------

Co-authored-by: kylexqian <kylexqian@gmail.com>
2025-03-09 18:08:07 -04:00
ccurme
b209d46eb3 mistral[patch]: set global ssl context (#30189) 2025-03-09 21:27:41 +00:00
Vijay Selvaraj
df459d0d5e community: add Valthera integration (#30105)
**Description:**
This PR integrates Valthera into LangChain, introducing a framework designed to send highly personalized nudges via an LLM agent. It is modeled after Dr. BJ Fogg's Behavior Model. This integration includes:

- Custom data connectors for HubSpot, PostHog, and Snowflake.
- A unified data aggregator that consolidates user data.
- Scoring configurations to compute motivation and ability scores.
- A reasoning engine that determines the appropriate user action.
- A trigger generator to create personalized messages for user engagement.

**Issue:**
N/A

**Dependencies:**
N/A

**Twitter handle:**
- `@vselvarajijay`

**Tests and Docs:**
- `docs/docs/integrations/tools/valthera`
- `https://github.com/valthera/langchain-valthera/tree/main/tests`

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-09 21:19:08 +00:00
ccurme
3823daa0b9 cli: update integration doc template for tools (#30188)
Chain example -> langgraph agent
2025-03-09 21:14:43 +00:00
David Skarbrevik
0d7cdf290b langchain: clean pyproject ruff section (#30070)
## Changes
- `/Makefile` - added extra step to `make format` and `make lint` to
ensure the lint dep-group is installed before running ruff (documented
in issue #30069)

- `/pyproject.toml` - removed ruff exceptions for files that no longer
exist or no longer create formatting/linting errors in ruff

## Testing

**running `make format` on this branch/PR**
<img width="435" alt="image"
src="https://github.com/user-attachments/assets/82751788-f44e-4591-98ed-95ce893ce623"
/>

## Issue

fixes #30069

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-09 15:06:02 -04:00
Jonathan Feng
911accf733 docs: add contextualai documentation (#30050)
Thank you for contributing to LangChain!
 
**Description:** adds ContextualAI's `langchain-contextual` package's
documentation

If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, ccurme, vbarda, hwchase17.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-09 02:43:13 +00:00
Bharat
b9746a6910 fixes #30182: update tool names to match OpenAI function name pattern (#30183)
The OpenAI API requires function names to match the pattern
'^[a-zA-Z0-9_-]+$'. This updates the JIRA toolkit's tool names to use
underscores instead of spaces to comply with this requirement and
prevent BadRequestError when using the tools with OpenAI functions.

Error fixed:
```
File "langgraph-bug-fix/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1023, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "Invalid 'tools[0].function.name': string does not match pattern. Expected a string that matches the pattern '^[a-zA-Z0-9_-]+$'.", 'type': 'invalid_request_error', 'param': 'tools[0].function.name', 'code': 'invalid_value'}}
During task with name 'agent' and id 'aedd7537-e8d5-6678-d0c5-98129586d3ac'
```
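For illustration, a minimal sketch (tool names are hypothetical) of the
constraint being enforced:

```python
import re

# OpenAI requires function names to match this pattern.
OPENAI_FUNCTION_NAME = re.compile(r"^[a-zA-Z0-9_-]+$")

print(bool(OPENAI_FUNCTION_NAME.match("Create Issue")))  # False -> rejected
print(bool(OPENAI_FUNCTION_NAME.match("create_issue")))  # True -> accepted
```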

Issue: #30182
2025-03-08 20:48:25 -05:00
ccurme
cee0fecb08 docs: update package registry counts (#30181) 2025-03-08 20:37:59 -05:00
William FH
bac3a28e70 Flush (#30157) 2025-03-07 16:32:15 -08:00
ccurme
a7ab5e8372 community[patch]: ChatPerplexity: track usage metadata (#30175) 2025-03-07 23:25:05 +00:00
Vadym Barda
6c05d4b153 docs[patch]: update trim messages wording (#30174) 2025-03-07 17:05:51 -05:00
ccurme
1c993b921c core[patch]: release 0.3.43 (#30173) 2025-03-07 21:56:00 +00:00
ccurme
9893e5cb80 core[patch]: catch structured_output_format (#30172)
Change to `ls_structured_output_format` was not backward-compatible with
older versions of integration packages.
2025-03-07 16:50:06 -05:00
ccurme
88dc479c4a docs: update model used in ChatGroq (#30170)
`mixtral-8x7b-32768` is being retired on March 20.
2025-03-07 16:29:05 -05:00
ccurme
33a3510243 core[patch]: export ArgsSchema (#30169)
This is needed for type hints

see: https://github.com/langchain-ai/langchain/pull/30167
2025-03-07 20:43:05 +00:00
OysterMax
01317fde21 DOC: type checker complain on args_schema type hint when inheriting from BaseTool (#30167)
- **Description:** update docs to suppress type checker complaints about
the args_schema type hint when inheriting from BaseTool
- **Issue:** #30142 
- **Dependencies:** N/A
- **Twitter handle:** N/A

2025-03-07 15:41:53 -05:00
ccurme
17507c9ba6 groq[patch]: release 0.2.5 (#30168) 2025-03-07 20:25:51 +00:00
andyzhou1982
9e863c89d2 add JiebaLinkExtractor for chinese doc extracting (#30150)
- **Description:** add jieba_link_extractor.py for Chinese doc
extracting
- **Dependencies:** jieba
- **Docs:**
  /doc/doc/integrations/providers/jieba.md
  /doc/doc/integrations/vectorstores/jieba_link_extractor.ipynb
  /libs/packages.yml

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-07 20:21:46 +00:00
ccurme
74e7772a5f groq[patch]: warn if model is not specified (#30161)
Groq is retiring `mixtral-8x7b-32768`, which is currently the default
model for ChatGroq, on March 20. Here we emit a warning if the model is
not specified explicitly.

A version 0.3.0 will be released ahead of March 20 that removes the
default altogether.
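For illustration, a minimal sketch of specifying the model explicitly to
avoid the warning (model name is just an example):

```python
from langchain_groq import ChatGroq

llm = ChatGroq(model="llama-3.1-8b-instant")  # explicit model, no warning
```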
2025-03-07 15:21:13 -05:00
Ioannis Bakagiannis
3444e587ee docs: Integration Update - ADS4GPTs (#30153)
docs: New integration for LangChain - ads4gpts-langchain

Description: Tools and a Toolkit for native agentic integration of
ADS4GPTs within LangChain, in order to help applications monetize with
advertising.

Twitter handle: @ads4gpts

Co-authored-by: knitlydevaccount <loom+github@knitly.app>
2025-03-07 14:35:44 -05:00
ccurme
3c258194ae tests[patch]: release 0.3.14 (#30165) 2025-03-07 18:34:05 +00:00
ccurme
34638ccfae openai[patch]: release 0.3.8 (#30164) 2025-03-07 18:26:40 +00:00
ccurme
4e5058f29c core[patch]: release 0.3.42 (#30163) 2025-03-07 18:14:45 +00:00
Eugene Yurtsev
894fd63a61 cli: release 0.0.36 (#30159)
Bump for 0.0.36
2025-03-07 13:05:40 -05:00
ccurme
806211475a core[patch]: update structured output tracing (#30123)
- Trace JSON schema in `options`
- Rename to `ls_structured_output_format`
2025-03-07 13:05:25 -05:00
Jakub Kopecký
d0f5bcda29 docs: fix apify actors notebook main heading text (#30040)
- **Description:** Fix Apify Actors tool notebook main heading text so
there is an actual description instead of "Overview" in the tool
integration description on [LangChain tools integration
page](https://python.langchain.com/docs/integrations/tools/#all-tools).
2025-03-07 12:58:10 -05:00
ccurme
230876a7c5 anthropic[patch]: add PDF input example to API reference (#30156) 2025-03-07 14:19:08 +00:00
ccurme
5c7440c201 docs: update configuration how-to guide (#30139) 2025-03-06 11:51:48 -05:00
joeconstantino
022ff9eead Tableau docs for new datasource qa tool (#30125)
- **Description:** a notebook showing LangChain and LangGraph agents
using the new langchain_tableau tool
- **Twitter handle:** @joe_constantin0

---------

Co-authored-by: Joe Constantino <joe@constantino.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-06 14:58:56 +00:00
ccurme
52b0570bec core, openai, standard-tests: improve OpenAI compatibility with Anthropic content blocks (#30128)
- Support thinking blocks in core's `convert_to_openai_messages` (pass
through instead of error)
- Ignore thinking blocks in ChatOpenAI (instead of error)
- Support Anthropic-style image blocks in ChatOpenAI

---

Standard integration tests include a `supports_anthropic_inputs`
property which is currently enabled only for tests on `ChatAnthropic`.
This test enforces compatibility with message histories of the form:
```
- system message
- human message
- AI message with tool calls specified only through `tool_use` content blocks
- human message containing `tool_result` and an additional `text` block
```
It additionally checks support for Anthropic-style image inputs if
`supports_image_inputs` is enabled.

Here we change this test, such that if you enable
`supports_anthropic_inputs`:
- You support AI messages with text and `tool_use` content blocks
- You support Anthropic-style image inputs (if `supports_image_inputs`
is enabled)
- You support thinking content blocks.

That is, we add a test case for thinking content blocks, but we also
remove the requirement of handling tool results within HumanMessages
(motivated by existing agent abstractions, which should all return
ToolMessage). We move that requirement to a ChatAnthropic-specific test.
2025-03-06 09:53:14 -05:00
Pat Patterson
b3dc66f7a3 community: fix AttributeError when creating LanceDB vectorstore (#30127)
**Description:**

This PR adds a call to `guard_import()` to fix an AttributeError raised
when creating LanceDB vectorstore instance with an existing LanceDB
table.

**Issue:**

This PR fixes issue #30124.

**Dependencies:**

No additional dependencies.

**Twitter handle:**

[@metadaddy](https://x.com/metadaddy), but I spend more time at
[@metadaddy.net](https://bsky.app/profile/metadaddy.net) these days.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-05 23:04:38 +00:00
Hugh Gao
9b7b8e4a1a community: make DashScope models support Partial Mode for text continuation. (#30108)
## Description
Make DashScope models support Partial Mode for text continuation.

For text continuation in ChatTongyi, a prefix can be continued by adding
a "partial" argument to the AIMessage. The documentation is [Partial
Mode](https://help.aliyun.com/zh/model-studio/user-guide/partial-mode?spm=a2c4g.11186623.help-menu-2400256.d_1_0_0_8.211e5b77KMH5Pn&scm=20140722.H_2862210._.OR_help-T_cn~zh-V_1).
The API example is:
```py
import os
import dashscope

messages = [{
    "role": "user",
    # "Please continue the sentence 'Spring has come, and the earth' to
    # express the beauty of spring and the author's joy"
    "content": "请对“春天来了,大地”这句话进行续写,来表达春天的美好和作者的喜悦之情"
},
{
    "role": "assistant",
    "content": "春天来了,大地",  # the prefix to continue: "Spring has come, and the earth"
    "partial": True
}]
response = dashscope.Generation.call(
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    model='qwen-plus',
    messages=messages,
    result_format='message',  
)

print(response.output.choices[0].message.content)
```

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-05 16:22:14 +00:00
黑牛
f0153414d5 Add request_id field to improve request tracking and debugging (for Tongyi model) (#30110)
- **Description**: Added the request_id field to the check_response
function to improve request tracking and debugging, applicable for the
Tongyi model.
- **Issue**: None
- **Dependencies**: None
- **Twitter handle**: None

- **Add tests and docs**: None

- **Lint and test**: Ran `make format`, `make lint`, and `make test` to
ensure the code meets formatting and testing requirements.
2025-03-05 11:03:47 -05:00
Manthan Surkar
1ee8aceaee community: fix Jira API wrapper failing initialization with cloud param (#30117)
### **Description**  
Converts the boolean `jira_cloud` parameter in the Jira API Wrapper to a
string before initializing the Jira Client. Also adds tests for the
same.

### **Issue**  
[Jira API Wrapper
Bug](8abb65e138/libs/community/langchain_community/utilities/jira.py (L47))

```python
jira_cloud_str = get_from_dict_or_env(values, "jira_cloud", "JIRA_CLOUD")
jira_cloud = jira_cloud_str.lower() == "true"
```

The above code has a bug where the value of `"jira_cloud"` is a boolean.
If it is passed, calling `.lower()` on a boolean raises an error.
Additionally, `False` cannot be passed explicitly since
`get_from_dict_or_env` falls back to environment variables.

Relevant code in `langchain_core`:  

[Source](https://github.com/thesmallstar/langchain/blob/master/.venv/lib/python3.13/site-packages/langchain_core/utils/env.py#L46)

```python
if isinstance(key, str) and key in data and data[key]:  # Here, data[key] is False
```

This PR fixes both issues.

### **Twitter Handle**  
[Manthan Surkar](https://x.com/manthan_surkar)
2025-03-05 10:49:25 -05:00
Adrián Panella
c599ba47d5 core(mermaid): fix error when 3+ subgraph levels (#29970) 2025-03-04 13:27:49 -05:00
Alexander Henlein
417efa30a6 docs: add Taiga Tool integration docs (#30042)
This PR adds documentation for the langchain-taiga Tool integration,
including an example notebook at
'docs/docs/integrations/tools/taiga.ipynb' and updates to
'libs/packages.yml' to track the new package.

Issue:
N/A

Dependencies:
None

Twitter handle:
N/A

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-03-04 17:51:20 +00:00
Mathias Marciano
5f0102242a Fixed an issue with the OpenAI Assistant's 'retrieval' tool and adding support for the 'attachments' parameter (#30006)
PR Title:
langchain: add attachments support in OpenAIAssistantRunnable

PR Description:
This PR fixes an issue with the "retrieval" tool (internally named
"file_search") in the OpenAI Assistant by adding support for the
"attachments" parameter in the invoke method. This change allows files
to be linked to messages when they are inserted into threads, which is
essential for utilizing OpenAI's Retrieval Augmented Generation (RAG)
feature.

Issue:
N/A

Dependencies:
None

Twitter handle:
N/A

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-04 17:34:11 +00:00
Philippe PRADOS
4710c1fa8c community[minor]: Fix regular expression in visualize and outlines modules. (#30002)
Fix invalid escape characters
2025-03-04 12:23:48 -05:00
ccurme
577c0d0715 community[patch]: release 0.3.19 (#30104) 2025-03-04 16:12:03 +00:00
ccurme
ba5ddb218f anthropic[patch]: release 0.3.9 (#30103) 2025-03-04 10:53:55 -05:00
ccurme
9383a0536a tests[patch]: release 0.3.13 (#30102) 2025-03-04 10:53:43 -05:00
ccurme
fb16c25920 langchain[patch]: release 0.3.20 (#30101) 2025-03-04 15:47:27 +00:00
ccurme
692a68bf1c core[patch]: release 0.3.41 (#30100) 2025-03-04 15:08:57 +00:00
ccurme
484d945500 community[patch]: remove numpy cap for python < 3.12 (#30084) 2025-03-04 09:46:41 -05:00
Cheney Zhang
7eb6dde720 docs: refine milvus server description (#30071)
Document refinement: optimize the Milvus server description. The
descriptions of "Milvus standalone" and "Milvus server" are confusing,
so I clarified them with a detailed description.

Signed-off-by: ChengZi <chen.zhang@zilliz.com>
2025-03-04 09:39:54 -05:00
ZhangShenao
8575d7491f [Doc] Improve api doc (#30073)
- Update API doc for `BaseMessage`
- Add static method decorator for `retry_runnable`
2025-03-04 09:39:07 -05:00
Antonio Pisani
9a11e0edcd docs:Add SWI-Prolog for langchain-prolog (#30081)
Some users have complained that it is not clear that SWI-Prolog must be
installed before installing langchain-prolog.
2025-03-04 09:12:47 -05:00
Samuel Dion-Girardeau
ccb64e9f4f docs: Fix typo in code samples for max_tokens_for_prompt (#30088)
- **Description:** Fix typo in code samples for max_tokens_for_prompt.
Code blocks had singular "token" but the method has plural "tokens".
- **Issue:** N/A
- **Dependencies:** N/A
- **Twitter handle:** N/A
2025-03-04 09:11:21 -05:00
ccurme
33354f984f docs: update contributing docs (#30064) 2025-03-01 17:36:35 -05:00
ccurme
c7cd666a17 docs: add to vercel overrides (#30063) 2025-03-01 17:21:15 -05:00
ArrayPD
c671d54c6f core: make with_alisteners() example workable. (#30059)
**Description:**

Five fixes to the example for with_alisteners() in
libs/core/langchain_core/runnables/base.py, replacing incoherent example
output with a workable example's output.

1. SyntaxError: unterminated string literal
    print(f"on start callback starts at {format_t(time.time())}
    corrected to
    print(f"on start callback starts at {format_t(time.time())}")

2. SyntaxError: unterminated string literal
    print(f"on end callback starts at {format_t(time.time())}
    corrected to
    print(f"on end callback starts at {format_t(time.time())}")

3. NameError: name 'Runnable' is not defined
    Fixed with
    from langchain_core.runnables import Runnable

4. NameError: name 'asyncio' is not defined
    Fixed with
    import asyncio

5. NameError: name 'format_t' is not defined
    Implemented format_t() as
    from datetime import datetime, timezone

    def format_t(timestamp: float) -> str:
        return datetime.fromtimestamp(timestamp, tz=timezone.utc).isoformat()
2025-03-01 15:39:02 -05:00
Chandra Nandan
eca8c5515d docs: sidebar-content-render (#30061) (#30062)
- [x] **PR title**: "docs: added proper width to sidebar content"

- [x] **PR message**: added proper width to sidebar content
- **Description:** While accessing the [LangChain Python API
Reference](https://python.langchain.com/api_reference/index.html) the
sidebar content does not display correctly.
    - **Issue:** Follow-up to #30061
    - **Dependencies:** None
    - **Twitter handle:** https://x.com/implicitdefcnc


2025-03-01 15:30:41 -05:00
cold-eye
7c175e3fda Update ascend.py (#30060)
Add batch_size to fix OOM when embedding large amounts of text.

2025-03-01 14:10:41 -05:00
ccurme
3b066dc005 anthropic[patch]: allow structured output when thinking is enabled (#30047)
Structured output will currently always raise a BadRequestError when
Claude 3.7 Sonnet's `thinking` is enabled, because we rely on forced
tool use for structured output and this feature is not supported when
`thinking` is enabled.

Here we:
- Emit a warning if `with_structured_output` is called when `thinking`
is enabled.
- Raise `OutputParserException` if no tool calls are generated.

This is arguably preferable to raising an error in all cases.

```python
from langchain_anthropic import ChatAnthropic
from pydantic import BaseModel


class Person(BaseModel):
    name: str
    age: int


llm = ChatAnthropic(
    model="claude-3-7-sonnet-latest",
    max_tokens=5_000,
    thinking={"type": "enabled", "budget_tokens": 2_000},
)
structured_llm = llm.with_structured_output(Person)  # <-- this generates a warning
```

```python
structured_llm.invoke("Alice is 30.")  # <-- works
```

```python
structured_llm.invoke("Hello!")  # <-- raises OutputParserException
```
2025-02-28 14:44:11 -05:00
ccurme
f8ed5007ea anthropic, mistral: return model_name in response metadata (#30048)
Took a "census" of models supported by init_chat_model-- of those that
return model names in response metadata, these were the only two that
had it keyed under `"model"` instead of `"model_name"`.
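For illustration, a minimal sketch (assuming ANTHROPIC_API_KEY is set;
model name is just an example) of the standardized key:

```python
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-3-5-haiku-latest")
response = llm.invoke("Hello")
print(response.response_metadata["model_name"])  # now keyed under "model_name"
```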
2025-02-28 18:56:05 +00:00
Christophe Bornet
9e6ffd1264 core: Add ruff rules PTH (pathlib) (#29338)
See https://docs.astral.sh/ruff/rules/#flake8-use-pathlib-pth

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-02-28 13:22:20 -05:00
TheSongg
86b364de3b Add asynchronous generate interface (#30001)
- **PR title**: [langchain_community.llms.xinference]: Add
asynchronous generate interface

- **PR message**: The asynchronous generate interface supports both
streamed and non-streamed data.

        chain = prompt | llm
        async for chunk in chain.astream(input=user_input):
            yield chunk

- **Add tests and docs**:

        from langchain_community.llms import Xinference
        from langchain.prompts import PromptTemplate

        llm = Xinference(
            server_url="http://0.0.0.0:9997",  # replace with your Xinference server URL
            model_uid=model_uid,  # replace with the model UID returned from launching the model
            stream=True,
        )
        prompt = PromptTemplate(
            input_variables=["country"],
            template="Q: where can we visit in the capital of {country}? A:",
        )
        chain = prompt | llm
        async for chunk in chain.astream(input=user_input):
            yield chunk
2025-02-28 12:32:44 -05:00
Cheney Zhang
a1897ca621 docs: refine milvus doc with hybrid-search (#30037)
Milvus document refinement: add a more detailed hybrid search
description with a full-text search introduction.

Signed-off-by: ChengZi <chen.zhang@zilliz.com>
2025-02-28 10:22:53 -05:00
Tiest van Gool
476cd26f57 Add xAI to ChatModelTabs drop down (#30028)
- **Description:** Added `ChatXAI` to `ChatModelTabs` dropdown to
improve visibility of xAI chat models (e.g., "grok-2", "grok-3").
    - **Issue:** Follow-up to #30010 
    - **Dependencies:** none
    - **Twitter handle:** @tiestvangool 


---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-28 09:08:12 -05:00
Fakai Zhao
f07338d2bf Implementing the MMR algorithm for OLAP vector storage (#30033)
**Implementing the MMR algorithm for OLAP vector storage:**
- Supports the Apache Doris and StarRocks OLAP databases.
- Example: `vectorstore.as_retriever(search_type="mmr",
search_kwargs={"k": 10})`
- **Dependencies:** none

---------

Co-authored-by: fakzhao <fakzhao@cisco.com>
2025-02-28 08:50:22 -05:00
Daniel Rauber
186cd7f1a1 community: PlaywrightURLLoader should wait for page load event before attempting to extract data (#30043)
## Description

The PlaywrightURLLoader should wait for a page to be loaded before
attempting to extract data.
2025-02-28 08:45:51 -05:00
Ikko Eltociear Ashimine
46908ee3da docs: update google_cloud_vertexai_rerank.ipynb (#30039)
recieve -> receive
2025-02-28 08:45:06 -05:00
ccurme
0dbcc1d099 docs: document anthropic features (#30030)
Update integrations page with extended thinking feature.

Update API reference with extended thinking and citations.
2025-02-27 19:37:04 -05:00
ccurme
6c7c8a164f openai[patch]: add unit test (#30022)
Test `max_completion_tokens` is propagated to payload for
AzureChatOpenAI.
2025-02-27 11:09:17 -05:00
DamonXue
156a60013a docs: fix tavily_search code-block format. (#30012)
This pull request includes a change to the `TavilySearchResults` class
in the `tool.py` file, which updates the code block format in the
documentation.

Documentation update:

*
[`libs/community/langchain_community/tools/tavily_search/tool.py`](diffhunk://#diff-e3b6a980979268b639c6a86e9b182756b0f7c7e9e5605e613bc0a72ea6aa5301L54-R59):
Changed the code block format from Python to JSON in the example
provided in the docstring.
2025-02-27 10:55:15 -05:00
kawamou
8977ac5ab0 community[fix]: Handle None value in raw_content from Tavily API response (#30021)
## **Description:**

When using the Tavily retriever with include_raw_content=True, the
retriever occasionally fails with a Pydantic ValidationError because
raw_content can be None.

The Document model in langchain_core/documents/base.py requires
page_content to be a non-None value, but the Tavily API sometimes
returns None for raw_content.

This PR fixes the issue by ensuring that even when raw_content is None,
an empty string is used instead:

```python
page_content=result.get("content", "")
            if not self.include_raw_content
            else (result.get("raw_content") or ""),
```
2025-02-27 10:53:53 -05:00
Yan
d0c9b98171 docs: writer integration docs cosmetic fixes (#29984)
Fixed links in the Writer partner integration docs
2025-02-27 10:52:49 -05:00
Lakindu Boteju
f69deee1bd community: Add cost data for aws bedrock anthropic.claude-3-7 model (#30016)
This pull request includes updates to the
`libs/community/langchain_community/callbacks/bedrock_anthropic_callback.py`
file to add a new model version to the list of supported models.

Updates to supported models:

* Added support for the `anthropic.claude-3-7-sonnet-20250219-v1:0`
model with a rate of `0.003` for 1000 input tokens.
* Added support for the `anthropic.claude-3-7-sonnet-20250219-v1:0`
model with a rate of `0.015` for 1000 output tokens.

AWS Bedrock pricing reference : https://aws.amazon.com/bedrock/pricing
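For illustration, a minimal sketch (assuming langchain-aws is installed
and AWS credentials are configured) of the callback these rates feed
into:

```python
from langchain_aws import ChatBedrock
from langchain_community.callbacks.manager import get_bedrock_anthropic_callback

llm = ChatBedrock(model_id="anthropic.claude-3-7-sonnet-20250219-v1:0")
with get_bedrock_anthropic_callback() as cb:
    llm.invoke("Hello")
    print(cb.total_cost)  # computed from the per-1000-token rates added here
```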
2025-02-27 09:51:52 -05:00
Mark Perfect
289b3422dc docs: Add Milvus Standalone to documentation (#29650)
- Added a new section for how to set up and use Milvus with Docker, and
added an example of how to instantiate Milvus for hybrid retrieval
- Fixed the documentation setup to run `make lint` and `make format`



---------

Co-authored-by: Mark Perfect <mark.anthony.perfect1@gmail.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-26 21:31:40 +00:00
Lakindu Boteju
e0e9e560b3 PyMuPDF4LLM integration to LangChain (#29953)
## PyMuPDF4LLM integration to LangChain for PDF content extraction in
Markdown format

### Description

[PyMuPDF4LLM](https://github.com/pymupdf/RAG) makes it easier to extract
PDF content in Markdown format, needed for LLM & RAG applications.
(License: GNU Affero General Public License v3.0)


[langchain-pymupdf4llm](https://github.com/lakinduboteju/langchain-pymupdf4llm)
integrates PyMuPDF4LLM to LangChain as a Document Loader.
(License: MIT License)

This pull request introduces the integration of
[PyMuPDF4LLM](https://pymupdf.readthedocs.io/en/latest/pymupdf4llm) into
the LangChain project as an integration package:
[`langchain-pymupdf4llm`](https://github.com/lakinduboteju/langchain-pymupdf4llm).
The most important changes include adding new Jupyter notebooks to
document the integration and updating the package configuration file to
include the new package.

### Documentation:

* `docs/docs/integrations/providers/pymupdf4llm.ipynb`: Added a new
Jupyter notebook to document the integration of `PyMuPDF4LLM` with
LangChain, including installation instructions and class imports.
* `docs/docs/integrations/document_loaders/pymupdf4llm.ipynb`: Added a
new Jupyter notebook to document the usage of `langchain-pymupdf4llm` as
a LangChain integration package in detail.
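For illustration, a minimal sketch (assuming a local "example.pdf";
treat the loader class name as an assumption based on the package docs):

```python
from langchain_pymupdf4llm import PyMuPDF4LLMLoader

loader = PyMuPDF4LLMLoader("example.pdf")
docs = loader.load()
print(docs[0].page_content[:200])  # PDF content extracted as Markdown
```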

### Package registration:

* `libs/packages.yml`: Updated the package configuration file to include
the `langchain-pymupdf4llm` package.

### Additional information

* Related to: https://github.com/langchain-ai/langchain/pull/29848

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-26 15:59:12 -05:00
Dan Mirsky
d98c3f76c2 core[patch]: Fix FileCallbackHandler name resolution, Fixes #29941 (#29942)
- **Description:** Same changes as #26593 but for FileCallbackHandler
- **Issue:**  Fixes #29941
- **Dependencies:** None
- **Twitter handle:** None

- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
2025-02-26 14:54:24 -05:00
Christophe Bornet
b3885c124f core: Add ruff rules TC (#29268)
See https://docs.astral.sh/ruff/rules/#flake8-type-checking-tc
Some fixes were made for TC001, TC002, and TC003, but these rules are
excluded since they don't play well with Pydantic.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-26 19:39:05 +00:00
talos
9cd20080fc community: Update SQLiteVec table trigger (#29914)
**Issue**: This trigger could only be used by the first table created;
additional triggers could not be created for other tables.

**Fix**: Update the trigger name so that it can be used for new
tables.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-26 15:10:13 +00:00
ccurme
7562677f3f langchain[patch]: delete erroneous lock file (#30007)
Picked up during merge.
2025-02-26 15:01:05 +00:00
Erick Friis
3c96012f5e langchain: make numpy optional (#29182)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-26 14:35:24 +00:00
James Yang
8c28742980 docs: fix kinetica vectorstore typo (#29999)
Description:
- fix kinetica vectorstore typo
- add links

Co-authored-by: jamesongithub@users.noreply.github.com <jamesongithub@users.noreply.github.com>
2025-02-26 08:31:56 -05:00
Artem Yankov
6177b9f9ab community: add title, score and raw_content to tavily search results (#29995)
**Description:**

Tavily search results returned from the API include useful information
like title, score, and (optionally) raw_content that is missing from the
wrapper, although it is documented there properly. Add this data to the
result structure.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-25 23:27:21 +00:00
Eugene Yurtsev
b525226531 core[patch]: version 0.3.40 (#29997)
Version 0.3.40 release
2025-02-25 23:09:40 +00:00
Vadym Barda
0fc50b82a0 core[patch]: allow passing description to @tool decorator (#29976) 2025-02-25 17:45:36 -05:00
Naveen SK
21bfc95e14 docs: Correct grammatical typos in various documentation files (#29983)
**Description:**
Fixed grammatical typos in various documentation files

**Issue:**
N/A

**Dependencies:**
N/A

**Twitter handle:**
@MrNaveenSK

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-02-25 19:13:31 +00:00
ccurme
1158d3134d langchain[patch]: remove aiohttp (#29991)
My guess is this was left over from when `community` was in langchain.
2025-02-25 11:43:00 -05:00
ccurme
afd7888392 langchain[patch]: remove explicit dependency on tenacity (#29990)
Not used anywhere in `langchain`, already a dependency of
langchain-core.
2025-02-25 11:31:55 -05:00
naveencloud
143c39a4a4 Update gitlab.mdx (#29987)
The documentation mentioned GitHub instead of GitLab, which caused
confusion when referring to it.


---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-25 15:06:37 +00:00
ccurme
32704f0ad8 langchain: update extended test (#29988) 2025-02-25 14:58:20 +00:00
Yan
47e1a384f7 Writer partners integration docs (#29961)
**Documentation of Writer provider and additional features**
* [PyPi langchain-writer
web-page](https://pypi.org/project/langchain-writer/)
* [GitHub langchain-writer
repo](https://github.com/writer/langchain-writer)

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-24 19:30:09 -05:00
Antonio Pisani
820a4c068c Transition prolog_tool doc to langgraph (#29972)
@ccurme As suggested, I transitioned the prolog_tool documentation to
LangGraph.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-24 23:34:53 +00:00
ccurme
79f5bbfb26 anthropic[patch]: release 0.3.8 (#29973) 2025-02-24 15:24:35 -05:00
ccurme
ded886f622 anthropic[patch]: support claude 3.7 sonnet (#29971) 2025-02-24 15:17:47 -05:00
Bagatur
d00d645829 docs[patch]: update disable_streaming docstring (#29968) 2025-02-24 18:40:31 +00:00
ccurme
b7a1705052 openai[patch]: release 0.3.7 (#29967) 2025-02-24 11:59:28 -05:00
ccurme
5437ee385b core[patch]: release 0.3.39 (#29966) 2025-02-24 11:47:01 -05:00
ccurme
291a232fb8 openai[patch]: set global ssl context (#29932)
We set 
```python
global_ssl_context = ssl.create_default_context(cafile=certifi.where())
```
at the module-level and share it among httpx clients.
2025-02-24 11:25:16 -05:00
ccurme
9ce07980b7 core[patch]: pydantic 2.11 compat (#29963)
Resolves https://github.com/langchain-ai/langchain/issues/29951

Was able to reproduce the issue with Anthropic installing from pydantic
`main` and correct it with the fix recommended in the issue.

Thanks very much @Viicos for finding the bug and the detailed writeup!
2025-02-24 11:11:25 -05:00
ccurme
0d3a3b99fc core[patch]: release 0.3.38 (#29962) 2025-02-24 15:04:53 +00:00
ccurme
b1a7f4e106 core, openai[patch]: support serialization of pydantic models in messages (#29940)
Resolves https://github.com/langchain-ai/langchain/issues/29003,
https://github.com/langchain-ai/langchain/issues/27264
Related: https://github.com/langchain-ai/langchain-redis/issues/52

```python
from langchain.chat_models import init_chat_model
from langchain.globals import set_llm_cache
from langchain_community.cache import SQLiteCache
from pydantic import BaseModel

cache = SQLiteCache()

set_llm_cache(cache)

class Temperature(BaseModel):
    value: int
    city: str

llm = init_chat_model("openai:gpt-4o-mini")
structured_llm = llm.with_structured_output(Temperature)
```
```python
# 681 ms
response = structured_llm.invoke("What is the average temperature of Rome in May?")
```
```python
# 6.98 ms
response = structured_llm.invoke("What is the average temperature of Rome in May?")
```
2025-02-24 09:34:27 -05:00
HackHuang
1645ec1890 docs(tool_artifacts.ipynb) : Remove the unnecessary information (#29960)
Update tool_artifacts.ipynb: remove the unnecessary information shown
below.


8b511a3a78/docs/docs/how_to/tool_artifacts.ipynb (L95)
2025-02-24 09:22:18 -05:00
HackHuang
78c54fccf3 docs(custom_tools.ipynb) : Fix the invalid URL link (#29958)
Update custom_tools.ipynb: fix the invalid URL link for the `@tool`
decorator.
2025-02-24 14:03:26 +00:00
ccurme
927ec20b69 openai[patch]: update system role to developer for o-series models (#29785)
Some o-series models will raise a 400 error for `"role": "system"`
(`o1-mini` and `o1-preview` will raise, `o1` and `o3-mini` will not).

Here we update `ChatOpenAI` to update the role to `"developer"` for all
model names matching `^o\d`.

We only make this change on the ChatOpenAI class (not BaseChatOpenAI).
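For illustration, a minimal sketch of the behavior (model name is just
an example):

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="o3-mini")
# The "system" message below is sent with the "developer" role, since the
# model name matches ^o\d.
response = llm.invoke([("system", "You are terse."), ("human", "Hello!")])
```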
2025-02-24 08:59:46 -05:00
Ahmed Tammaa
8b511a3a78 [Exception Handling] DeepSeek JSONDecodeError (#29758)
For context, please check #29626.

DeepSeek is used via langchain_openai. The error manifests as a
`json decode error`.

I added a handler for this to give a more sensible error message:
"DeepSeek API returned empty/invalid json".

Reproducing the issue is a bit challenging because it is inconsistent:
sometimes DeepSeek returns valid data, and at other times it returns
invalid data that triggers the JSONDecodeError.

This PR adds exception handling but is not an ultimate fix for the issue.
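For illustration, a minimal sketch of this kind of handling (a hypothetical helper, not the actual patch):
```python
import json

def parse_deepseek_json(raw: str) -> dict:
    try:
        return json.loads(raw)
    except json.JSONDecodeError as e:
        # Surface a clearer error than the bare JSONDecodeError.
        raise ValueError("DeepSeek API returned empty/invalid json.") from e
```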

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-23 15:00:32 -05:00
Julien Elkaim
e586bffe51 community: Repair embeddings/llamacpp's embed_query method (#29935)
**Description:** As commented on commit
[41b6a86](41b6a86bbe),
it introduced a bug when an embedding request returns a non-nested list,
which is typically the case for the **_nomic-embed-text_** model.
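For illustration, a hedged sketch of the kind of normalization involved (the actual patch may differ):
```python
def normalize_embedding(result: list) -> list[float]:
    # Some models (e.g. nomic-embed-text) return a flat list of floats
    # instead of a nested list containing a single embedding.
    if result and isinstance(result[0], (int, float)):
        return list(result)
    return list(result[0])

assert normalize_embedding([0.1, 0.2]) == [0.1, 0.2]
assert normalize_embedding([[0.1, 0.2]]) == [0.1, 0.2]
```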

- I added the unit test, and ran `make format`, `make lint` and `make
test` from the `community` package.
- No new dependency.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-23 19:32:17 +00:00
Saraswathy Kalaiselvan
5ca4933b9d docs: updated ChatLiteLLM model_kwargs description (#29937)
- [x] **PR title**: docs: (community) update ChatLiteLLM

- [x] **PR message**:
- **Description:** updated the description of the `model_kwargs` parameter,
which wrongly described `temperature` instead.
    - **Issue:** #29862 
    - **Dependencies:** N/A
    
- [x] **Add tests and docs**: N/A

- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-23 19:27:13 +00:00
ccurme
512eb1b764 anthropic[patch]: update models for integration tests (#29938) 2025-02-23 14:23:48 -05:00
Christophe Bornet
f6d4fec4d5 core: Add ruff rules ANN (type annotations) (#29271)
See https://docs.astral.sh/ruff/rules/#flake8-annotations-ann
The advantage over mypy alone is that ruff detects missing annotations
very quickly.

ANN101 and ANN102 are deprecated, so we ignore them.
ANN401 (no Any type) is ignored to stay in sync with the mypy config.

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-02-22 17:46:28 -05:00
Bagatur
979a991dc2 core[patch]: dont deep copy merge_message_runs (#28454)
afaict no need to deep copy here, if we merge messages then we convert
them to chunks first anyways
2025-02-22 21:56:45 +00:00
Mohammad Mohtashim
afa94e5bf7 _wait_for_run calling fix for OpenAIAssistantRunnable (#29927)
- **Description:** Fixed the `OpenAIAssistantRunnable` call to
`_wait_for_run`
- **Issue:**  #29923
2025-02-22 00:27:24 +00:00
Vadym Barda
437fe6d216 core[patch]: return ToolMessage from tools when tool call ID is empty string (#29921) 2025-02-21 11:53:15 -05:00
Taofiq Aiyelabegan
5ee8a8f063 [Integration]: Langchain-Permit (#29867)
## Which area of LangChain is being modified?
- This PR adds a new "Permit" integration to the `docs/integrations/`
folder.
- Introduces two new Tools (`LangchainJWTValidationTool` and
`LangchainPermissionsCheckTool`)
- Introduces two new Retrievers (`PermitSelfQueryRetriever` and
`PermitEnsembleRetriever`)
- Adds demo scripts in `examples/` showcasing usage.

## Description of Changes
- Created `langchain_permit/tools.py` for JWT validation and permission
checks with Permit.
- Created `langchain_permit/retrievers.py` for custom Permit-based
retrievers.
- Added documentation in `docs/integrations/providers/permit.ipynb` (or
`.mdx`) to explain setup, usage, and examples.
- Provided sample scripts in `examples/demo_scripts/` to illustrate
usage of these tools and retrievers.
- Ensured all code is linted and tested locally.

Thank you again for reviewing!

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-21 10:59:00 -05:00
Jean-Philippe Dournel
ebe38baaf9 community/mlx_pipeline: fix crash at mlx call (#29915)
- **Description:**
Since mlx_lm 0.20, all calls to mlx crash due to deprecation of the way
parameters are passed to the generate and generate_step methods. The
top_p, temp, repetition_penalty and repetition_context_size parameters
are no longer passed directly to those methods but are wrapped into
"sampler" and "logit_processor" arguments.


- **Dependencies:** mlx_lm (optional)

- **Tests:**
I've added a new test to the existing test file:
tests/integration_tests/llms/test_mlx_pipeline.py

---------

Co-authored-by: Jean-Philippe Dournel <jp@insightkeeper.io>
2025-02-21 09:14:53 -05:00
Sinan CAN
bd773cffc3 docs: remove redundant cell in sql_large_db guide (#29917)
2025-02-21 08:38:49 -05:00
ccurme
1fa9f6bc20 docs: build mongo in api ref (#29908) 2025-02-20 19:58:35 -05:00
Chaunte W. Lacewell
d972c6d6ea partners: add langchain-vdms (#29857)
**Description:** Deprecate vdms in community, add integration
langchain-vdms, and update any related files
**Issue:** n/a
**Dependencies:** langchain-vdms
**Twitter handle:** n/a

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-20 19:48:46 -05:00
Mohammad Mohtashim
8293142fa0 mistral[patch]: support model_kwargs (#29838)
- **Description:** Frequency_penalty added as a client parameter
- **Issue:** #29803

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-20 18:47:39 -05:00
ccurme
924d9b1b33 cli[patch]: fix retriever template (#29907)
Chat model tabs don't render correctly in .ipynb template.
2025-02-20 17:51:19 +00:00
Brayden Zhong
a70f31de5f Community: RankLLMRerank AttributeError (Handle list-based rerank results) (#29840)
# community: Fix AttributeError in RankLLMRerank (`list` object has no
attribute `candidates`)

## **Description**
This PR fixes an issue in `RankLLMRerank` where reranking fails with the
following error:

```
AttributeError: 'list' object has no attribute 'candidates'
```

The issue arises because `rerank_batch()` returns a `List[Result]`
instead of an object containing `.candidates`.

### **Changes Introduced**
- Adjusted `compress_documents()` to support both (see the sketch below):
  - Old API format: `rerank_results.candidates`
  - New API format: `rerank_results` as a list
- Also fixed wrong .txt location parsing while I was at it.
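An illustrative sketch of the compatibility shim (names are illustrative, not the exact patch):
```python
def extract_candidates(rerank_results):
    """Support both the old object API and the new list-based API."""
    if hasattr(rerank_results, "candidates"):
        return rerank_results.candidates  # old API: object with .candidates
    return rerank_results                 # new API: plain List[Result]
```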

---

## **Issue**
Fixes **AttributeError** in `RankLLMRerank` when using
`compression_retriever.invoke()`. The issue is observed when
`rerank_batch()` returns a list instead of an object with `.candidates`.

**Relevant log:**
```
AttributeError: 'list' object has no attribute 'candidates'
```

## **Dependencies**
- No additional dependencies introduced.

---

## **Checklist**
- [x] **Backward compatible** with previous API versions
- [x] **Tested** locally with different RankLLM models
- [x] **No new dependencies introduced**
- [x] **Linted** with `make format && make lint`
- [x] **Ready for review**

---

## **Testing**
- Ran `compression_retriever.invoke(query)`

2025-02-20 12:38:31 -05:00
Levon Ghukasyan
ec403c442a Separate deeplake vector store (#29902)

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-20 17:37:19 +00:00
Jorge Piedrahita Ortiz
3acf842e35 core: add sambanova chat models to load module mapping (#29855)
- **Description:** add sambanova integration package chat models to load
module mapping, to allow serialization and deserialization
2025-02-20 12:30:50 -05:00
ccurme
d227e4a08e mistralai[patch]: release 0.2.7 (#29906) 2025-02-20 17:27:12 +00:00
Hande
d8bab89e6e community: add cognee retriever (#29878)
This PR adds a new cognee integration for knowledge-graph-based retrieval,
enabling developers to ingest documents into cognee's knowledge graph,
process them, and then retrieve context via CogneeRetriever.
It includes:
- the langchain_cognee package with a CogneeRetriever class
- a test for the integration, demonstrating how to create, process, and
retrieve with cognee
- an example notebook showing its use, in the `docs/docs/integrations`
directory.



Thank you for the review!

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-20 17:15:23 +00:00
Sinan CAN
97dd5f45ae Update retrieval.mdx (#29905)

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-20 17:12:29 +00:00
dokato
92b415a9f6 community: Made some Jira fields optional for agent to work correctly (#29876)
**Description:** Two small changes have been proposed here:
(1)
Previous code assumes that every issue has a priority field. If an issue
lacks this field, the code will raise a KeyError.
Now, the code checks if priority exists before accessing it. If priority
is missing, it assigns None instead of crashing. This prevents runtime
errors when processing issues without a priority.

(2)

Also, if the "style" field is missing, the code throws a KeyError;
`.get("style", None)` safely retrieves the value if present.

**Issue:** #29875 

**Dependencies:** N/A
2025-02-20 12:10:11 -05:00
am-kinetica
ca7eccba1f Handled a bug around empty query results differently (#29877)
Bugfix for empty query results handling (community: vectorstores/kinetica).

- **Description:** check the number of records returned by a query before
processing further.
- **Issue:** previously resulted in an `AttributeError`, which has now been
fixed.

@efriis
2025-02-20 12:07:49 -05:00
Antonio Pisani
2c403a3ea9 docs: Add langchain-prolog documentation (#29788)
I want to add documentation for a new integration with SWI-Prolog.

@hwchase17 check this out:

https://github.com/apisani1/langchain-prolog/tree/main/examples/travel_agent

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-20 11:50:28 -05:00
Marlene
be7fa920fa Partner: Azure AI Langchain Docs and Package Registry (#29879)
This PR adds documentation for the Azure AI package in Langchain to the
main mono-repo

No issue connected or updated dependencies.

Utilises existing tests and makes updates to the docs

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-20 14:35:26 +00:00
Hankyeol Kyung
2dd0ce3077 openai: Update reasoning_effort arg documentation (#29897)
**Description:** Update docstring for `reasoning_effort` argument to
specify that it applies to reasoning models only (e.g., OpenAI o1 and
o3-mini), clarifying its supported models.
**Issue:** None
**Dependencies:** None
2025-02-20 09:03:42 -05:00
Joe Ferrucci
c28ee329c9 Fix typo in local_llms.ipynb docs (#29903)
Change `tailed` to `tailored`

`Docs > How-To > Local LLMs:`

https://python.langchain.com/docs/how_to/local_llms/#:~:text=use%20a%20prompt-,tailed,-for%20your%20specific
2025-02-20 09:03:10 -05:00
ccurme
ed3c2bd557 core[patch]: set version="v2" as default in astream_events (#29894) 2025-02-19 23:21:37 +00:00
Fabian Blatz
a2d05a376c community: ConfluenceLoader: add a filter method for attachments (#29882)
Adds an `attachment_filter_func` parameter to the ConfluenceLoader class,
which can be used to determine which files are indexed. This is useful
if you want to exclude files based on their media type or other metadata.
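A hedged usage sketch (the parameter name is from this PR; the filter's argument shape and the other constructor arguments are assumptions):
```python
from langchain_community.document_loaders import ConfluenceLoader

loader = ConfluenceLoader(
    url="https://example.atlassian.net/wiki",
    space_key="DOCS",
    include_attachments=True,
    # Index only PDF attachments; skip everything else.
    attachment_filter_func=lambda attachment: attachment["title"].endswith(".pdf"),
)
docs = loader.load()
```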
2025-02-19 18:20:45 -05:00
ccurme
9ed47a4d63 community[patch]: release 0.3.18 (#29896) 2025-02-19 20:13:00 +00:00
ccurme
92889edafd core[patch]: release 0.3.37 (#29895) 2025-02-19 20:04:35 +00:00
ccurme
ffd6194060 core[patch]: de-beta rate limiters (#29891) 2025-02-19 19:19:59 +00:00
Erick Friis
5637210a20 infra: run docs build on packages.yml updates (#29796)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-19 18:45:30 +00:00
ccurme
fb4c8423f0 docs: fix builds (#29890)
Missed in https://github.com/langchain-ai/langchain/pull/29889
2025-02-19 13:35:59 -05:00
ccurme
68b13e5172 pinecone: delete from monorepo (#29889)
This now lives in https://github.com/langchain-ai/langchain-pinecone
2025-02-19 12:55:15 -05:00
Erick Friis
6c1e21d128 core: basemessage.text() (#29078) 2025-02-18 17:45:44 -08:00
Ben Burns
e2ba336e72 docs: fix partner package table build for packages with no download stats (#29871)
The build in #29867 is currently broken because `langchain-cli` didn't
add download stats to the provider file.

This change gracefully handles sorting packages with missing download
counts. I initially updated the build to fetch download counts on every
run, but pypistats [requests](https://pypistats.org/api/) that users not
fetch stats like this via CI.
2025-02-19 11:05:57 +13:00
Eugene Yurtsev
8e5074d82d core: release 0.3.36 (#29869)
Release 0.3.36
2025-02-18 19:51:43 +00:00
Vadym Barda
d04fa1ae50 core[patch]: allow passing JSON schema as args_schema to tools (#29812) 2025-02-18 14:44:31 -05:00
ccurme
5034a8dc5c xai[patch]: release 0.2.1 (#29854) 2025-02-17 14:30:41 -05:00
ccurme
83dcef234d xai[patch]: support dedicated structured output feature (#29853)
https://docs.x.ai/docs/guides/structured-outputs

Interface appears identical to OpenAI's.
```python
from langchain.chat_models import init_chat_model
from pydantic import BaseModel

class Joke(BaseModel):
    setup: str
    punchline: str

llm = init_chat_model("xai:grok-2").with_structured_output(
    Joke, method="json_schema"
)
llm.invoke("Tell me a joke about cats.")
```
2025-02-17 14:19:51 -05:00
ccurme
9d6fcd0bfb infra: add xai to scheduled testing (#29852) 2025-02-17 18:59:45 +00:00
ccurme
8a3b05ae69 langchain[patch]: release 0.3.19 (#29851) 2025-02-17 13:36:23 -05:00
ccurme
c9061162a1 langchain[patch]: add xai to extras (#29850) 2025-02-17 17:49:34 +00:00
Bagatur
1acf57e9bd langchain[patch]: init_chat_model xai support (#29849) 2025-02-17 09:45:39 -08:00
Paul Nikonowicz
1a55da9ff4 docs: Update gemini vector docs (#29841)
# Description

2 changes: 
1. removes get pass from the code example as it reads from stdio causing
a freeze to occur
2. updates to the latest gemini model in the example

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-17 07:54:23 -05:00
hsm207
037b129b86 weaviate: Add-deprecation-warning (#29757)
- **Description:** add deprecation warning when using weaviate from
langchain_community
  - **Issue:** NA
  - **Dependencies:** NA
  - **Twitter handle:** NA

---------

Signed-off-by: hsm207 <hsm207@users.noreply.github.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-16 21:42:18 -05:00
Đỗ Quang Minh
cd198ac9ed community: add custom model for OpenAIWhisperParser (#29831)
Add `model` properties for OpenAIWhisperParser. Defaulted to `whisper-1`
(previous value).
Please help me update the docs and other related components of this
repo.
2025-02-16 21:26:07 -05:00
Cole McIntosh
6874c9c1d0 docs: add notebook for langchain-salesforce package (#29800)
**Description:**  
This PR adds a Jupyter notebook that explains the features,
installation, and usage of the
[`langchain-salesforce`](https://github.com/colesmcintosh/langchain-salesforce)
package. The notebook includes:
- Setup instructions for configuring Salesforce credentials  
- Example code demonstrating common operations such as querying,
describing objects, creating, updating, and deleting records

**Issue:**  
N/A

**Dependencies:**  
No new dependencies are required.

**Tests and Docs:**  
- Added an example notebook demonstrating the usage of the
`langchain-salesforce` package, located in `docs/docs/integrations`.

**Lint and Test:**  
- Ran `make format`, `make lint`, and `make test` successfully.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-16 08:34:57 -05:00
Jan Heimes
60f58df5b3 community: add top_k as param to Needle Retriever (#29821)
Thank you for contributing to LangChain!

- [X] **PR title**: "package: description"
- Where "package" is whichever of langchain, community, core, etc. is
being modified. Use "docs: ..." for purely docs changes, "infra: ..."
for CI changes.
  - Example: "community: add foobar LLM"


- [x] **PR message**: This PR adds `top_k` as a param to the Needle
Retriever; by default we use top 10 (see the usage sketch below).



- [X] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [X] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
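A hedged usage sketch (the `top_k` parameter is from this PR; the import path and the other constructor arguments are assumptions):
```python
from langchain_community.retrievers.needle import NeedleRetriever

retriever = NeedleRetriever(
    needle_api_key="...",  # assumed auth argument
    collection_id="...",   # assumed collection argument
    top_k=5,               # new parameter; defaults to 10
)
docs = retriever.invoke("What is our refund policy?")
```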

2025-02-16 08:30:52 -05:00
Mateusz Szewczyk
8147679169 docs: Rename IBM product name to IBM watsonx (#29802)
Thank you for contributing to LangChain!

Rename IBM product name to `IBM watsonx`

- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
2025-02-15 21:48:02 -05:00
Jesus Fernandez Bes
1dfac909d8 community: Adding IN Operator to AzureCosmosDBNoSQLVectorStore (#29805)
- **Description**: I have added a new operator to the operator map with
key `$in` and value `IN`, so that you can define filters using lists as
values (see the sketch below). This was already contemplated, but since
the IN operator was not in the map, such filters could not be used.
- **Issue**: Fixes #29804.
- **Dependencies**: No extra.
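A minimal sketch of the operator-map change (the surrounding map contents are illustrative):
```python
OPERATOR_MAP = {
    "$eq": "=",
    "$lt": "<",
    "$in": "IN",  # newly added: enables list-valued filters
}

def translate_operator(op: str) -> str:
    return OPERATOR_MAP[op]

assert translate_operator("$in") == "IN"
```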
2025-02-15 21:44:54 -05:00
Wahed Hemati
8901b113c3 docs: add Discord integration docs (#29822)
This PR adds documentation for the `langchain-discord-shikenso`
integration, including an example notebook at
`docs/docs/integrations/tools/discord.ipynb` and updates to
`libs/packages.yml` to track the new package.

  **Issue:**  
  N/A

  **Dependencies:**  
  None

  **Twitter handle:**  
  N/A

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-15 21:43:45 -05:00
Akmal Ali Jasmin
f1792e486e fix: Correct getpass usage in Google Generative AI Embedding docs (#29809) (#29810)
**fix: Correct getpass usage in Google Generative AI Embedding docs
(#29809)**

- **Description:** Corrected the `getpass` usage in the Google
Generative AI Embedding documentation by replacing `getpass()` with
`getpass.getpass()` to fix the `TypeError`.
- **Issue:** #29809  
- **Dependencies:** None  

**Additional Notes:**  
The change ensures compatibility with Google Colab and follows Python's
`getpass` module usage standards.
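For illustration, the difference between the two calls:
```python
import getpass

# getpass is a module, so calling it directly raises
# TypeError: 'module' object is not callable.
# api_key = getpass("Enter API key: ")        # wrong
api_key = getpass.getpass("Enter API key: ")  # correct
```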
2025-02-15 21:41:00 -05:00
HackHuang
80ca310c15 langchain : Add the full code snippet in rag.ipynb (#29820)
docs(rag.ipynb): add the `full code` snippet; it's necessary and useful
for beginners.

Preview the change :
https://langchain-git-fork-googtech-patch-3-langchain.vercel.app/docs/tutorials/rag/

Two `full code` snippets are added as below :
<details>
<summary>Full Code:</summary>

```python
import bs4
from langchain_community.document_loaders import WebBaseLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain.chat_models import init_chat_model
from langchain_openai import OpenAIEmbeddings
from langchain_core.vectorstores import InMemoryVectorStore
from google.colab import userdata
from langchain_core.prompts import PromptTemplate
from langchain_core.documents import Document
from typing_extensions import List, TypedDict
from langgraph.graph import START, StateGraph

#################################################
# 1.Initialize the ChatModel and EmbeddingModel #
#################################################
llm = init_chat_model(
    model="gpt-4o-mini",
    model_provider="openai",
    openai_api_key=userdata.get('OPENAI_API_KEY'),
    base_url=userdata.get('BASE_URL'),
)
embeddings = OpenAIEmbeddings(
    model="text-embedding-3-large",
    openai_api_key=userdata.get('OPENAI_API_KEY'),
    base_url=userdata.get('BASE_URL'),
)

#######################
# 2.Loading documents #
#######################
loader = WebBaseLoader(
    web_paths=("https://lilianweng.github.io/posts/2023-06-23-agent/",),
    bs_kwargs=dict(
        # Only keep post title, headers, and content from the full HTML.
        parse_only=bs4.SoupStrainer(
            class_=("post-content", "post-title", "post-header")
        )
    ),
)
docs = loader.load()

#########################
# 3.Splitting documents #
#########################
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,  # chunk size (characters)
    chunk_overlap=200,  # chunk overlap (characters)
    add_start_index=True,  # track index in original document
)
all_splits = text_splitter.split_documents(docs)

###########################################################
# 4.Embedding documents and storing them in a vectorstore #
###########################################################
vector_store = InMemoryVectorStore(embeddings)
_ = vector_store.add_documents(documents=all_splits)

##########################################################
# 5.Customizing the prompt or loading it from Prompt Hub #
##########################################################
# prompt = hub.pull("rlm/rag-prompt") # load the prompt from the prompt-hub
template = """Use the following pieces of context to answer the question at the end.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Use three sentences maximum and keep the answer as concise as possible.
Always say "thanks for asking!" at the end of the answer.

{context}

Question: {question}

Helpful Answer:"""
prompt = PromptTemplate.from_template(template)

##################################################################################################
# 5.Using LangGraph to tie together the retrieval and generation steps into a single application #
##################################################################################################
# 5.1.Define the state of the application, which controls the application data
class State(TypedDict):
    question: str
    context: List[Document]
    answer: str

# 5.2.1.Define the node of application, which signifies the application steps
def retrieve(state: State):
    retrieved_docs = vector_store.similarity_search(state["question"])
    return {"context": retrieved_docs}

# 5.2.2.Define the node of application, which signifies the application steps
def generate(state: State):
    docs_content = "\n\n".join(doc.page_content for doc in state["context"])
    messages = prompt.invoke({"question": state["question"], "context": docs_content})
    response = llm.invoke(messages)
    return {"answer": response.content}

# 6.Define the "control flow" of application, which signifies the ordering of the application steps
graph_builder = StateGraph(State).add_sequence([retrieve, generate])
graph_builder.add_edge(START, "retrieve")
graph = graph_builder.compile()
```

</details>

<details>
<summary>Full Code:</summary>

```python
import bs4
from langchain_community.document_loaders import WebBaseLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain.chat_models import init_chat_model
from langchain_openai import OpenAIEmbeddings
from langchain_core.vectorstores import InMemoryVectorStore
from google.colab import userdata
from langchain_core.prompts import PromptTemplate
from langchain_core.documents import Document
from typing_extensions import List, TypedDict
from langgraph.graph import START, StateGraph
from typing import Literal
from typing_extensions import Annotated

#################################################
# 1.Initialize the ChatModel and EmbeddingModel #
#################################################
llm = init_chat_model(
    model="gpt-4o-mini",
    model_provider="openai",
    openai_api_key=userdata.get('OPENAI_API_KEY'),
    base_url=userdata.get('BASE_URL'),
)
embeddings = OpenAIEmbeddings(
    model="text-embedding-3-large",
    openai_api_key=userdata.get('OPENAI_API_KEY'),
    base_url=userdata.get('BASE_URL'),
)

#######################
# 2.Loading documents #
#######################
loader = WebBaseLoader(
    web_paths=("https://lilianweng.github.io/posts/2023-06-23-agent/",),
    bs_kwargs=dict(
        # Only keep post title, headers, and content from the full HTML.
        parse_only=bs4.SoupStrainer(
            class_=("post-content", "post-title", "post-header")
        )
    ),
)
docs = loader.load()

#########################
# 3.Splitting documents #
#########################
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,  # chunk size (characters)
    chunk_overlap=200,  # chunk overlap (characters)
    add_start_index=True,  # track index in original document
)
all_splits = text_splitter.split_documents(docs)

# Search analysis: Add some metadata to the documents in our vector store,
# so that we can filter on section later. 
total_documents = len(all_splits)
third = total_documents // 3
for i, document in enumerate(all_splits):
    if i < third:
        document.metadata["section"] = "beginning"
    elif i < 2 * third:
        document.metadata["section"] = "middle"
    else:
        document.metadata["section"] = "end"

# Search analysis: Define the schema for our search query
class Search(TypedDict):
    query: Annotated[str, ..., "Search query to run."]
    section: Annotated[
        Literal["beginning", "middle", "end"], ..., "Section to query."]

###########################################################
# 4.Embedding documents and storing them in a vectorstore #
###########################################################
vector_store = InMemoryVectorStore(embeddings)
_ = vector_store.add_documents(documents=all_splits)

##########################################################
# 5.Customizing the prompt or loading it from Prompt Hub #
##########################################################
# prompt = hub.pull("rlm/rag-prompt") # load the prompt from the prompt-hub
template = """Use the following pieces of context to answer the question at the end.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Use three sentences maximum and keep the answer as concise as possible.
Always say "thanks for asking!" at the end of the answer.

{context}

Question: {question}

Helpful Answer:"""
prompt = PromptTemplate.from_template(template)

###################################################################
# 5.Using LangGraph to tie together the analyze_query, retrieval  #
# and generation steps into a single application                  #
###################################################################
# 5.1.Define the state of the application, which controls the application data
class State(TypedDict):
    question: str
    query: Search
    context: List[Document]
    answer: str

# Search analysis: Define the node of the application,
# which is used to generate a query from the user's raw input
def analyze_query(state: State):
    structured_llm = llm.with_structured_output(Search)
    query = structured_llm.invoke(state["question"])
    return {"query": query}

# 5.2.1.Define the node of application, which signifies the application steps
def retrieve(state: State):
    query = state["query"]
    retrieved_docs = vector_store.similarity_search(
        query["query"],
        filter=lambda doc: doc.metadata.get("section") == query["section"],
    )
    return {"context": retrieved_docs}

# 5.2.2.Define the node of application, which signifies the application steps
def generate(state: State):
    docs_content = "\n\n".join(doc.page_content for doc in state["context"])
    messages = prompt.invoke({"question": state["question"], "context": docs_content})
    response = llm.invoke(messages)
    return {"answer": response.content}

# 6.Define the "control flow" of application, which signifies the ordering of the application steps
graph_builder = StateGraph(State).add_sequence([analyze_query, retrieve, generate]) 
graph_builder.add_edge(START, "analyze_query")
graph = graph_builder.compile()
```

</details>

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-15 21:37:58 -05:00
Michael Chin
b2c21f3e57 docs: Update SagemakerEndpoint examples (#29814)
Related issue: https://github.com/langchain-ai/langchain-aws/issues/361

Updated the AWS `SagemakerEndpoint` LLM documentation to import from
`langchain-aws`.
2025-02-15 21:34:56 -05:00
Krishna Kulkarni
a98c5f1c4b langchain_community: add image support to DuckDuckGoSearchAPIWrapper (#29816)
- [ ] **PR title**: langchain_community: add image support to
DuckDuckGoSearchAPIWrapper

- **Description:** This PR enhances the DuckDuckGoSearchAPIWrapper
within the langchain_community package by introducing support for image
searches. The enhancement includes:
  - Adding a new method _ddgs_images to handle image search queries.
  - Updating the run and results methods to process and return image
search results appropriately.
  - Modifying the source parameter to accept "images" as a valid option,
alongside "text" and "news".
- **Dependencies:** No additional dependencies are required for this
change.
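A hedged usage sketch (`source="images"` is from this PR; the rest follows the existing wrapper API):
```python
from langchain_community.utilities import DuckDuckGoSearchAPIWrapper

wrapper = DuckDuckGoSearchAPIWrapper(source="images")
results = wrapper.results("golden retriever puppies", max_results=5)
for r in results:
    print(r)  # each result is a dict of image metadata
```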
2025-02-15 21:32:14 -05:00
Iris Liu
0d9f0b4215 docs: updates Chroma integration API ref docs (#29826)
- Description: updates Chroma integration API ref docs
- Issue: #29817
- Dependencies: N/A
- Twitter handle: @irieliu

Co-authored-by: Iris <liuirisny@gmail.com>
2025-02-15 21:05:21 -05:00
ccurme
3fe7c07394 openai[patch]: release 0.3.6 (#29824) 2025-02-15 13:53:35 -05:00
ccurme
65a6dce428 openai[patch]: enable streaming for o1 (#29823)
Verified streaming works for the `o1-2024-12-17` snapshot as well.
2025-02-15 12:42:05 -05:00
Christophe Bornet
3dffee3d0b all: Bump blockbuster version to 1.5.18 (#29806)
Has fixes for running on Windows and non-CPython runtimes.
2025-02-14 07:55:38 -08:00
ccurme
d9a069c414 tests[patch]: release 0.3.12 (#29797) 2025-02-13 23:57:44 +00:00
ccurme
e4f106ea62 groq[patch]: remove xfails (#29794)
These appear to pass.
2025-02-13 15:49:50 -08:00
Erick Friis
f34e62ef42 packages: add langchain-xai (#29795)
wasn't registered per the contribution guide:
https://python.langchain.com/docs/contributing/how_to/integrations/
2025-02-13 15:36:41 -08:00
ccurme
49cc6106f7 tests[patch]: fix query for test_tool_calling_with_no_arguments (#29793) 2025-02-13 23:15:52 +00:00
Erick Friis
1a225fad03 multiple: fix uv path deps (#29790)
The file:// format wasn't working with updates; it doesn't install as an
editable dep.

Moved to `tool.uv.sources` with `path=` instead.
2025-02-13 21:32:34 +00:00
Erick Friis
ff13384eb6 packages: update counts, add command (#29789) 2025-02-13 20:45:25 +00:00
Mateusz Szewczyk
8d0e31cbc5 docs: Fix model_id on EmbeddingTabs page (#29784)
Thank you for contributing to LangChain!

Fix `model_id` in IBM provider on EmbeddingTabs page

- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
2025-02-13 09:41:51 -08:00
Mateusz Szewczyk
61f1be2152 docs: Added IBM to ChatModelTabs and EmbeddingTabs (#29774)
Thank you for contributing to LangChain!

Added IBM to ChatModelTabs and EmbeddingTabs

- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
2025-02-13 08:43:42 -08:00
HackHuang
76d32754ff core : update the class docs of InMemoryVectorStore in in_memory.py (#29781)
- **Description:** Add a new introduction about checking `store` in
in_memory.py; it's necessary and useful for beginners.
```python
Check Documents:
    .. code-block:: python
    
        for doc in vector_store.store.values():
            print(doc)
```

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-13 16:41:47 +00:00
Mateusz Szewczyk
b82cef36a5 docs: Update IBM WatsonxLLM and ChatWatsonx documentation (#29752)
Thank you for contributing to LangChain!

Update presented model in `WatsonxLLM` and `ChatWatsonx` documentation.

- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
2025-02-13 08:41:07 -08:00
Mohammad Mohtashim
96ad09fa2d (Community): Added API Key for Jina Search API Wrapper (#29622)
- **Description:** Simple change for adding the API Key for Jina Search
API Wrapper
- **Issue:** #29596
2025-02-12 20:12:07 -08:00
ccurme
f1c66a3040 docs: minor fix to provider table (#29771)
Langfair renders as LangfAIr
2025-02-13 04:06:58 +00:00
Jakub Kopecký
c8cb7c25bf docs: update apify integration (#29553)
**Description:** Fixed and updated Apify integration documentation to
use the new [langchain-apify](https://github.com/apify/langchain-apify)
package.
**Twitter handle:** @apify
2025-02-12 20:02:55 -08:00
ccurme
16fb1f5371 chroma[patch]: release 0.2.2 (#29769)
Resolves https://github.com/langchain-ai/langchain/issues/29765
2025-02-13 02:39:16 +00:00
Mohammad Mohtashim
2310847c0f (Chroma): Small Fix in add_texts when checking for embeddings (#29766)
- **Description:** Small fix in `add_texts` so that embedding
nullability is checked properly.
- **Issue:** #29765

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-13 02:26:13 +00:00
Eric Pinzur
716fd89d8e docs: contributed Graph RAG Retriever integration (#29744)
**Description:** 

This adds the `Graph RAG` Retriever integration documentation, per
https://python.langchain.com/docs/contributing/how_to/integrations/.

* The integration exists in this public repository:
https://github.com/datastax/graph-rag
* We've implemented the standard langchain tests for retrievers:
https://github.com/datastax/graph-rag/blob/main/packages/langchain-graph-retriever/tests/test_langchain.py
* Our integration is published to PyPi:
https://pypi.org/project/langchain-graph-retriever/
2025-02-12 18:25:48 -08:00
Sunish Sheth
f42dafa809 Deprecating sql_database access for creating UC functions for agent tools (#29745)

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-02-13 02:24:44 +00:00
Thor 雷神 Schaeff
a0970d8d7e [WIP] chore: update ElevenLabs tool. (#29722)

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-13 01:54:34 +00:00
Chaymae El Aattabi
4b08a7e8e8 Fix #29759: Use local chunk_size_ for looping in embed_documents (#29761)
This fix ensures that the chunk size is correctly determined when
processing text embeddings. Previously, the code did not properly handle
cases where chunk_size was None, potentially leading to incorrect
chunking behavior.

Now, chunk_size_ is explicitly set to either the provided chunk_size or
the default self.chunk_size, ensuring consistent chunking. This update
improves reliability when processing large text inputs in batches and
prevents unintended behavior when chunk_size is not specified.
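The essence of the fix, per the description above, as a sketch inside a hypothetical embed method:
```python
def embed_documents(self, texts, chunk_size=None):
    # Fall back to the instance default when the caller passes None.
    chunk_size_ = chunk_size if chunk_size is not None else self.chunk_size
    for i in range(0, len(texts), chunk_size_):
        batch = texts[i : i + chunk_size_]
        ...  # embed the batch
```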

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-13 01:28:26 +00:00
Jorge Piedrahita Ortiz
1fbc01c350 docs: update sambanova integration api reference links (#29762)
- **Description:** update sambanova external package integration api
reference links in docs
2025-02-12 15:58:00 -08:00
Sunish Sheth
043d78d85d Deprecate langchain community ucfunctiontoolkit in favor of databricks_langchain (#29746)
2025-02-12 15:50:35 -08:00
Hugues Chocart
e4eec9e9aa community: add langchain-abso documentation (#29739)
Add the documentation for the community package `langchain-abso`. It
provides a new chat model class that uses https://abso.ai.

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2025-02-12 19:57:33 +00:00
ccurme
e61f463745 core[patch]: release 0.3.35 (#29764) 2025-02-12 18:13:10 +00:00
Nuno Campos
fe59f2cc88 core: Fix output of convert_messages when called with BaseMessage.model_dump() (#29763)
- additional_kwargs was being nested twice
- for example, response_metadata was placed inside additional_kwargs
2025-02-12 10:05:33 -08:00
Jacob Lee
f4e3e86fbb feat(langchain): Infer o3 model strings passed to init_chat_model as OpenAI (#29743) 2025-02-11 16:51:41 -08:00
Mohammad Mohtashim
9f3bcee30a (Community): Adding Structured Support for ChatPerplexity (#29361)
- **Description:** Adding Structured Support for ChatPerplexity
- **Issue:** #29357
- This is implemented as per the Perplexity official docs:
https://docs.perplexity.ai/guides/structured-outputs

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-02-11 15:51:18 -08:00
Jawahar S
994c5465e0 feat: add support for IBM WatsonX AI chat models (#29688)
**Description:** Updated init_chat_model to support Granite models
deployed on IBM WatsonX
**Dependencies:**
[langchain-ibm](https://github.com/langchain-ai/langchain-ibm)

Tagging @baskaryan @efriis for review when you get a chance.
2025-02-11 15:34:29 -08:00
Shailendra Mishra
c7d74eb7a3 Oraclevs integration (#29723)
community: langchain_community/vectorstore/oraclevs.py

- **Description:** Refactored code to allow a connection or a connection
pool.
- **Issue:** Normally an idle connection is terminated by the server-side
listener at timeout, so a user has to re-instantiate the vector store. The
timeout for a plain connection is not configurable. The solution is to use
a connection pool, where a user can specify a user-defined timeout and the
connections are managed by the pool.
- **Dependencies:** None
- **Add tests and docs:** This is not a new integration. A user can pass
either a connection or a connection pool; the determination of what is
passed is made at run time. Everything should work as before.
- **Lint and test:** Already done.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2025-02-11 14:56:55 -08:00
ccurme
42ebf6ae0c deepseek[patch]: release 0.1.2 (#29742) 2025-02-11 11:53:43 -08:00
ccurme
ec55553807 pinecone[patch]: release 0.2.3 (#29741) 2025-02-11 19:27:39 +00:00
ccurme
001cf99253 pinecone[patch]: add support for python 3.13 (#29737) 2025-02-11 11:20:21 -08:00
ccurme
ba8f752bf5 openai[patch]: release 0.3.5 (#29740) 2025-02-11 19:20:11 +00:00
ccurme
9477f49409 openai, deepseek: make _convert_chunk_to_generation_chunk an instance method (#29731)
1. Make `_convert_chunk_to_generation_chunk` an instance method on
BaseChatOpenAI
2. Override on ChatDeepSeek to add `"reasoning_content"` to message
additional_kwargs.

Resolves https://github.com/langchain-ai/langchain/issues/29513
2025-02-11 11:13:23 -08:00
Christopher Menon
1edd27d860 docs: fix SQL-based metadata filter syntax, add link to BigQuery docs (#29736)
Fix the syntax for SQL-based metadata filtering in the [Google BigQuery
Vector Search
docs](https://python.langchain.com/docs/integrations/vectorstores/google_bigquery_vector_search/#searching-documents-with-metadata-filters).
Also add a link to learn more about BigQuery operators that can be used
here.

I have been using this library, and have found that this is the correct
syntax to use for the SQL-based filters.

**Issue**: no open issue.
**Dependencies**: none.
**Twitter handle**: none.

No tests as this is only a change to the documentation.

2025-02-11 11:10:12 -08:00
ccurme
d0c2dc06d5 mongodb[patch]: fix link in readme (#29738) 2025-02-11 18:19:59 +00:00
zzaebok
3b3d52206f community: change wikidata rest api version from v0 to v1 (#29708)
**Description:**

According to the [wikidata
documentation](https://www.wikidata.org/wiki/Wikidata_talk:REST_API),
Wikibase REST API version 1 (stable) was released on November 11, 2024.
Their guidance is to use the new v1 API, which in almost all cases just
requires replacing v0 in the routes with v1.
So I updated WIKIDATA_REST_API_URL from v0 to v1 for stable usage.
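For illustration, the shape of the one-line change (the exact constant value in the module is an assumption based on the public Wikibase REST base path):
```python
# Before (v0):
# WIKIDATA_REST_API_URL = "https://www.wikidata.org/w/rest.php/wikibase/v0"
# After (v1):
WIKIDATA_REST_API_URL = "https://www.wikidata.org/w/rest.php/wikibase/v1"
```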

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-02-10 17:12:38 -08:00
ccurme
4a389ef4c6 community: fix extended testing (#29715)
v0.3.100 of premai sdk appears to break on import:
89d9276cbf/premai/api/__init__.py (L230)
2025-02-10 16:57:34 -08:00
Yoav Levy
af3f759073 docs: fixed nimble's provider page and retriever (#29695)
## **Description:**
- Added information about the retriever that Nimble's provider exposes.
- Fixed the authentication explanation on the retriever page.
2025-02-10 15:30:40 -08:00
Bhav Sardana
624216aa64 community:Fix for Pydantic model validator of GoogleApiYoutubeLoader (#29694)
- **Description:** Community: bugfix for the pydantic model validator of
GoogleApiYoutubeLoader
- **Issue:** #29165, #27432 
Fix is similar to #29346
2025-02-10 08:57:58 -05:00
Changyong Um
60740c44c5 community: Add configurable text key for indexing and the retriever in Pinecone Hybrid Search (#29697)
**Issue**

In LangChain, the original content is generally stored under the `text`
key. However, the `PineconeHybridSearchRetriever` searches the `context`
field in the metadata and cannot change this key. To address this, I
have modified the code to allow changing the key to something other than
`context`.

In my opinion, following Langchain's conventions, the `text` key seems
more appropriate than `context`. However, since I wasn't sure about the
author's intent, I have left the default value as `context`.
2025-02-10 08:56:37 -05:00
Jun He
894b0cac3c docs: Remove redundant line (#29698)
If I understand it correctly, chain1 is never used.
2025-02-10 08:53:21 -05:00
Tiest van Gool
6655246504 Classification Tutorial: Replaced .dict() with .model_dump() method (#29701)
The `.dict()` method is deprecated in Pydantic v2.0; use the
`model_dump()` method instead.
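A minimal example of the replacement:
```python
from pydantic import BaseModel

class Item(BaseModel):
    name: str

item = Item(name="widget")
# item.dict()             # deprecated in Pydantic v2
data = item.model_dump()  # preferred replacement
```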

2025-02-10 08:38:15 -05:00
Edmond Wang
c36e6d4371 docs: Add Comments and Supplementary Example Code to Vearch Vector Dat… (#29706)
- **Description:** Added some comments to the example code in the Vearch
vector database documentation and included commonly used sample code.
- **Issue:** None
- **Dependencies:** None

---------

Co-authored-by: wangchuxiong <wangchuxiong@jd.com>
2025-02-10 08:35:38 -05:00
Akmal Ali Jasmin
bc5fafa20e [DOC] Fix #29685: HuggingFaceEndpoint missing task argument in documentation (#29686)
## **Description**
This PR updates the LangChain documentation to address an issue where
the `HuggingFaceEndpoint` example **does not specify the required `task`
argument**. Without this argument, users on `huggingface_hub == 0.28.1`
encounter the following error:

```
ValueError: Task unknown has no recommended model. Please specify a model explicitly.
```

---

## **Issue**
Fixes #29685

---

## **Changes Made**
**Updated `HuggingFaceEndpoint` documentation** to explicitly define
`task="text-generation"`:
```python
llm = HuggingFaceEndpoint(
    repo_id=GEN_MODEL_ID,
    huggingfacehub_api_token=HF_TOKEN,
    task="text-generation"  # Explicitly specify task
)
```

**Added a deprecation warning note** and recommended using
`InferenceClient`:
```python
from huggingface_hub import InferenceClient
from langchain.llms.huggingface_hub import HuggingFaceHub

client = InferenceClient(model=GEN_MODEL_ID, token=HF_TOKEN)

llm = HuggingFaceHub(
    repo_id=GEN_MODEL_ID,
    huggingfacehub_api_token=HF_TOKEN,
    client=client,
)
```

---

## **Dependencies**
- No new dependencies introduced.
- Change only affects **documentation**.

---

## **Testing**
- Verified that adding `task="text-generation"` resolves the issue.
- Tested the alternative approach with `InferenceClient` in Google
Colab.

---

## **Twitter Handle (Optional)**
If this PR gets announced, a shout-out to **@AkmalJasmin** would be
great! 🚀

---

## **Reviewers**
📌 **@langchain-maintainers** Please review this PR. Let me know if
further changes are needed.

🚀 This fix improves **developer onboarding** and ensures the **LangChain
documentation remains up to date**! 🚀
2025-02-08 14:41:02 -05:00
manukychen
3de445d521 using getattr and default value to prevent 'OpenSearchVectorSearch' has no attribute 'bulk_size' (#29682)
- Description: Use getattr with a default value of 500 when reading
cls.bulk_size; this prevents the error below:
Error: type object 'OpenSearchVectorSearch' has no attribute 'bulk_size'

- Issue: https://github.com/langchain-ai/langchain/issues/29071
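
A minimal sketch of the pattern (class and attribute names mirror the
description above; this is not the actual diff):

```python
class OpenSearchVectorSearch:
    """Stand-in for the real class; `bulk_size` may not be set on it."""

# getattr with a default avoids the AttributeError when the class-level
# attribute is missing
bulk_size = getattr(OpenSearchVectorSearch, "bulk_size", 500)
print(bulk_size)  # -> 500
```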
2025-02-08 14:39:57 -05:00
Yao Tianjia
5d581ba22c langchain: support the situation when action_input is null in json output_parser (#29680)
Description:
This PR fixes handling of a null action_input in
[langchain.agents.output_parser]. Previously, passing null as
action_input could raise an OutputParserException with an unclear error
message, leaving the LLM unable to tell how to modify the action. The
changes include:

Added null-check validation before processing action_input
Implemented proper fallback behavior with default values
Maintained backward compatibility with existing implementations

Error Examples:
```
{
  "action":"some action",
  "action_input":null
}
```
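
A minimal sketch of the null-check-with-fallback idea (hypothetical
helper, not the actual patch):

```python
def parse_action(payload: dict) -> tuple[str, dict]:
    """Return (action, action_input), defaulting a null action_input."""
    action = payload["action"]
    action_input = payload.get("action_input")
    if action_input is None:
        # Fallback: treat null/missing action_input as an empty input
        action_input = {}
    return action, action_input

print(parse_action({"action": "some action", "action_input": None}))
# -> ('some action', {})
```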

Issue:
None

Dependencies:
None
2025-02-07 22:01:01 -05:00
Philippe PRADOS
beb75b2150 community[minor]: 05 - Refactoring PyPDFium2 parser (#29625)
This is one part of a larger Pull Request (PR) that is too large to be
submitted all at once. This specific part focuses on updating the
PyPDFium2 parser.

For more details, see
https://github.com/langchain-ai/langchain/pull/28970.
2025-02-07 21:31:12 -05:00
Christophe Bornet
723031d548 community: Bump ruff version to 0.9 (#29206)
Co-authored-by: Erick Friis <erick@langchain.dev>
2025-02-08 01:21:10 +00:00
Christophe Bornet
30f6c9f5c8 community: Use Blockbuster to detect blocking calls in asyncio during tests (#29609)
Same as https://github.com/langchain-ai/langchain/pull/29043 for
langchain-community.

**Dependencies:**
- blockbuster (test)

**Twitter handle:** cbornet_

Co-authored-by: Erick Friis <erick@langchain.dev>
2025-02-08 01:10:39 +00:00
Christophe Bornet
3a57a28daa langchain: Use Blockbuster to detect blocking calls in asyncio during tests (#29616)
Same as https://github.com/langchain-ai/langchain/pull/29043 for the
langchain package.

**Dependencies:**
- blockbuster (test)

**Twitter handle:** cbornet_

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2025-02-08 01:08:15 +00:00
Keenan Pepper
c67d473397 core: Make abatch_as_completed respect max_concurrency (#29426)
- **Description:** Add tests for respecting max_concurrency and
implement it for abatch_as_completed so that test passes
- **Issue:** #29425
- **Dependencies:** none
- **Twitter handle:** keenanpepper
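
Usage should now look roughly like this (sketch; the doubling lambda is
a placeholder):

```python
import asyncio

from langchain_core.runnables import RunnableLambda

async def main() -> None:
    runnable = RunnableLambda(lambda x: x * 2)
    # max_concurrency is read from the config: at most 2 inputs run at once
    async for idx, result in runnable.abatch_as_completed(
        [1, 2, 3, 4], config={"max_concurrency": 2}
    ):
        print(idx, result)

asyncio.run(main())
```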
2025-02-07 16:51:22 -08:00
Aaron V
dcfaae85d2 Core: Fix __add__ for concatting two BaseMessageChunk's (#29531)
Description:

The change allows you to use the overloaded `+` operator correctly when
`+`ing two BaseMessageChunk subclasses. Without this you *must*
instantiate a subclass for it to work.

Which feels... wrong. Base classes should be decoupled from subclasses
and should in no way have a dependency on them.

Issue:

You can't `+` a BaseMessageChunk with a BaseMessageChunk

e.g. this will explode

```py
from langchain_core.outputs import (
    ChatGenerationChunk,
)
from langchain_core.messages import BaseMessageChunk


chunk1 = ChatGenerationChunk(
    message=BaseMessageChunk(
        type="customChunk",
        content="HI",
    ),
)

chunk2 = ChatGenerationChunk(
    message=BaseMessageChunk(
        type="customChunk",
        content="HI",
    ),
)

# this will throw
new_chunk = chunk1 + chunk2
```

In case anyone ran into this issue themselves, it's probably best to use
the AIMessageChunk:

a la 

```py
from langchain_core.outputs import (
    ChatGenerationChunk,
)
from langchain_core.messages import AIMessageChunk


chunk1 = ChatGenerationChunk(
    message=AIMessageChunk(
        content="HI",
    ),
)

chunk2 = ChatGenerationChunk(
    message=AIMessageChunk(
        content="HI",
    ),
)

# No explosion!
new_chunk = chunk1 + chunk2
```

Dependencies:

None!

Twitter handle: 
`aaron_vogler`

Keeping these for later if need be:
```
baskaryan
efriis 
eyurtsev
ccurme 
vbarda
hwchase17
baskaryan
efriis
```

Co-authored-by: Erick Friis <erick@langchain.dev>
2025-02-08 00:43:36 +00:00
Marlene
4fa3ef0d55 Community/Partner: Adding Azure community and partner user agent to better track usage in Python (#29561)
- This pull request includes various changes to add a `user_agent`
parameter to Azure OpenAI, Azure Search and Whisper in the Community and
Partner packages. This helps in identifying the source of API requests
so we can better track usage and help support the community better. I
will also be adding the user_agent to the new `langchain-azure` repo as
well.

- No issue connected or  updated dependencies. 
- Utilises existing tests and docs

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2025-02-07 23:28:30 +00:00
Ella Charlaix
c401254770 huggingface: Add ipex support to HuggingFaceEmbeddings (#29386)
ONNX and OpenVINO models are available by specifying the `backend`
argument (the model is loaded using `optimum`
https://github.com/huggingface/optimum)

```python
from langchain_huggingface import HuggingFaceEmbeddings

embedding = HuggingFaceEmbeddings(
    model_name=model_id,
    model_kwargs={"backend": "onnx"},
)
```

With this PR we also enable the IPEX backend 



```python
from langchain_huggingface import HuggingFaceEmbeddings

embedding = HuggingFaceEmbeddings(
    model_name=model_id,
    model_kwargs={"backend": "ipex"},
)
```
2025-02-07 15:21:09 -08:00
Bruno Alvisio
3eaf561561 core: Handle unterminated escape character when parsing partial JSON (#29065)
**Description**
Currently, when parsing a partial JSON, if a string ends with the escape
character, the whole key/value is removed. For example:

```
>>> from langchain_core.utils.json import parse_partial_json
>>> my_str = '{"foo": "bar", "baz": "qux\\'
>>> 
>>> parse_partial_json(my_str)
{'foo': 'bar'}
```

My expectation (and with this fix) would be for `parse_partial_json()`
to return:
```
>>> from langchain_core.utils.json import parse_partial_json
>>> 
>>> my_str = '{"foo": "bar", "baz": "qux\\'
>>> parse_partial_json(my_str)
{'foo': 'bar', 'baz': 'qux'}
```

Notes:
1. It could be argued that current behavior is still desired.
2. I have experienced this issue when streaming output from an LLM
and the chunk happens to end with `\\`
3. I haven't included tests. Will do if change is accepted.
4. This is specially troublesome when this function is used by

187131c55c/libs/core/langchain_core/output_parsers/transform.py (L111)

since what happens is that, for example, if the received sequence of
chunks is `{"foo": "b`, `ar\\`:

Then, the result of calling `self.parse_result()` is:
```
{"foo": "b"}
```
and the second time:
```
{}
```

Co-authored-by: Erick Friis <erick@langchain.dev>
2025-02-07 23:18:21 +00:00
ccurme
0040d93b09 docs: showcase extras in chat model tabs (#29677)
Co-authored-by: Erick Friis <erick@langchain.dev>
2025-02-07 18:16:44 -05:00
Viren
252cf0af10 docs: add LangFair as a provider (#29390)
**Description:**
- Add `docs/docs/providers/langfair.mdx`
- Register langfair in `libs/packages.yml`

**Twitter handle:** @LangFair

**Tests and docs**
1. Integration tests not needed as this PR only adds a .mdx file to
docs.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
Co-authored-by: Dylan Bouchard <dylan.bouchard@cvshealth.com>
Co-authored-by: Dylan Bouchard <109233938+dylanbouchard@users.noreply.github.com>
Co-authored-by: Erick Friis <erickfriis@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2025-02-07 21:27:37 +00:00
Erick Friis
eb9eddae0c docs: use init_chat_model (#29623) 2025-02-07 12:39:27 -08:00
ccurme
bff25b552c community: release 0.3.17 (#29676) 2025-02-07 19:41:44 +00:00
ccurme
01314c51fa langchain: release 0.3.18 (#29654) 2025-02-07 13:40:26 -05:00
ccurme
92e2239414 openai[patch]: make parallel_tool_calls explicit kwarg of bind_tools (#29669)
Improves discoverability and documentation.

cc @vbarda
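
Usage presumably looks like this (sketch; the tool and model name are
placeholders):

```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

@tool
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

llm = ChatOpenAI(model="gpt-4o-mini")
# parallel_tool_calls is now an explicit keyword argument of bind_tools
llm_with_tools = llm.bind_tools([add], parallel_tool_calls=False)
```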
2025-02-07 13:34:32 -05:00
ccurme
2a243df7bb infra: add UV_NO_SYNC to monorepo makefile (#29670)
Helpful for running `api_docs_quick_preview` locally.
2025-02-07 17:17:05 +00:00
Marc Ammann
5690575f13 openai: Removed tool_calls from completion chunk after other chunks have already been sent. (#29649)
- **Description:** Before sending a completion chunk at the end of an
OpenAI stream, removing the tool_calls as those have already been sent
as chunks.
- **Issue:** -
- **Dependencies:** -
- **Twitter handle:** -

@ccurme as mentioned in another PR

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-07 10:15:52 -05:00
Ikko Eltociear Ashimine
0d45ad57c1 community: update base_o365.py (#29657)
extention -> extension
2025-02-07 08:43:29 -05:00
weeix
1b064e198f docs: Fix llama.cpp GPU Installation in llamacpp.ipynb (Deprecated Env Variable) (#29659)
- **Description:** The llamacpp.ipynb notebook used a deprecated
environment variable, LLAMA_CUBLAS, for llama.cpp installation with GPU
support. This commit updates the notebook to use the correct GGML_CUDA
variable, fixing the installation error.
- **Issue:** none
-  **Dependencies:** none
2025-02-07 08:43:09 -05:00
Vincent Emonet
3645181d0e qdrant: Add similarity_search_with_score_by_vector() function to the QdrantVectorStore (#29641)
Added `similarity_search_with_score_by_vector()` function to the
`QdrantVectorStore` class.

It is required when we want to query multiple times with the same
embeddings. It was present in the now-deprecated original `Qdrant`
vectorstore implementation, but was absent from the new one. It is also
implemented in a number of other `VectorStore` implementations.

I have added tests for this new function.

Note that I also argued in this discussion that it should be part of the
general `VectorStore`:
https://github.com/langchain-ai/langchain/discussions/29638
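
A minimal usage sketch (fake embeddings and an in-memory collection,
assumed for illustration):

```python
from langchain_core.embeddings import FakeEmbeddings
from langchain_qdrant import QdrantVectorStore

embeddings = FakeEmbeddings(size=32)
vector_store = QdrantVectorStore.from_texts(
    ["alpha", "beta"], embeddings, location=":memory:", collection_name="demo"
)

# Embed once, then run several queries with the same vector
vec = embeddings.embed_query("alpha")
for doc, score in vector_store.similarity_search_with_score_by_vector(vec, k=2):
    print(score, doc.page_content)
```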

Co-authored-by: Erick Friis <erick@langchain.dev>
2025-02-07 00:55:58 +00:00
ccurme
488cb4a739 anthropic: release 0.3.7 (#29653) 2025-02-06 17:05:57 -05:00
ccurme
ab09490c20 openai: release 0.3.4 (#29652) 2025-02-06 17:02:21 -05:00
ccurme
29a0c38cc3 openai[patch]: add test for message.name (#29651) 2025-02-06 16:49:28 -05:00
ccurme
91cca827c0 tests: release 0.3.11 (#29648) 2025-02-06 21:48:09 +00:00
Sunish Sheth
25ce1e211a docs: Updating the imports for langchain-databricks to databricks-langchain (#29646)
2025-02-06 13:28:07 -08:00
ccurme
e1b593ae77 text-splitters[patch]: release 0.3.6 (#29647) 2025-02-06 16:16:05 -05:00
ccurme
a91e58bc10 core: release 0.3.34 (#29644) 2025-02-06 15:53:56 -05:00
Vincent Emonet
08b9eaaa6f community: improve FastEmbedEmbeddings support for ONNX execution provider (e.g. GPU) (#29645)
I changed how GPU support is implemented in `FastEmbedEmbeddings` to be
more consistent with the existing `langchain-qdrant` sparse embeddings
implementation.

It directly enables providing the list of ONNX execution providers:
https://github.com/langchain-ai/langchain/blob/master/libs/partners/qdrant/langchain_qdrant/fastembed_sparse.py#L15

It is a bit less obvious to a user who just wants to enable GPU, but it
gives more flexibility to work with execution providers other than the
`CUDAExecutionProvider`, and is more future-proof.
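
Illustratively (the `providers` parameter name is assumed from the
linked sparse-embeddings implementation, and GPU use requires
`fastembed-gpu`):

```python
from langchain_community.embeddings.fastembed import FastEmbedEmbeddings

# Pass ONNX execution providers explicitly instead of a gpu flag
# (parameter name assumed from the linked fastembed_sparse implementation)
embeddings = FastEmbedEmbeddings(
    model_name="BAAI/bge-small-en-v1.5",
    providers=["CUDAExecutionProvider"],
)
```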

Sorry for the disturbance @ccurme

> Nice to see you just moved to `uv`! It is so much nicer to run
format/lint/test! No need to manually rerun the `poetry install` with
all required extras now
2025-02-06 15:31:23 -05:00
Erick Friis
1bf620222b infra: remove deepseek from scheduled tests (#29643) 2025-02-06 19:43:03 +00:00
ccurme
3450bfc806 infra: add UV_FROZEN to makefiles (#29642)
These are set in Github workflows, but forgot to add them to most
makefiles for convenience when developing locally.

`uv run` will automatically sync the lock file. Because many of our
development dependencies are local installs, it will pick up version
changes and update the lock file. Passing `--frozen` or setting this
environment variable disables the behavior.
2025-02-06 14:36:54 -05:00
ccurme
d172984c91 infra: migrate to uv (#29566) 2025-02-06 13:36:26 -05:00
ccurme
9da06e6e94 standard-tests[patch]: use has_structured_output property to engage structured output tests (#29635)
Motivation: dedicated structured output features are becoming more
common, such that integrations can support structured output without
supporting tool calling.

Here we make two changes:

1. Update the `has_structured_output` method to default to True if a
model supports tool calling (in addition to defaulting to True if
`with_structured_output` is overridden).
2. Update structured output tests to engage if `has_structured_output`
is True.
2025-02-06 10:09:06 -08:00
Vincent Emonet
db8201d4da community: fix typo in the module imported when using GPU with FastEmbedEmbeddings (#29631)
Made a mistake in the module to import (the module stays the same; only
the installed package changes). Fixed and tested it.

https://github.com/langchain-ai/langchain/pull/29627

@ccurme
2025-02-06 10:26:08 -05:00
Mohammed Abbadi
f8fd65dea2 community: Update deeplake.py (#29633)
Deep Lake recently released version 4, which introduces significant
architectural changes, including a new on-disk storage format, enhanced
indexing mechanisms, and improved concurrency. However, LangChain's
vector store integration currently does not support Deep Lake v4 due to
breaking API changes.

Previously, the installation command was:
`pip install deeplake[enterprise]`
This installs the latest available version, which now defaults to Deep
Lake v4. Since LangChain's vector store integration is still dependent
on v3, this can lead to compatibility issues when using Deep Lake as a
vector database within LangChain.

To ensure compatibility, the installation command has been updated to:
`pip install deeplake[enterprise]<4.0.0`
This constraint ensures that pip installs the latest available version
of Deep Lake within the v3 series while avoiding the incompatible v4
update.
2025-02-06 10:25:13 -05:00
Vincent Emonet
0ac5536f04 community: add support for using GPUs with FastEmbedEmbeddings (#29627)
- **Description:** add a `gpu: bool = False` field to the
`FastEmbedEmbeddings` class, which enables using the GPU (through the
ONNX CUDA provider) when generating embeddings with any fastembed model.
It just requires the user to install a different dependency, and we use
a different provider when instantiating `fastembed.TextEmbedding`
- **Issue:** when generating embeddings for a really large number of
documents this drastically increases performance (honestly it is a
must-have in some situations; you can't just use the CPU, it is way too
slow)
- **Dependencies:** no direct change to dependencies, but internally
users will need to install `fastembed-gpu` instead of `fastembed`; I
made all the changes to the init function to properly let the user know
which dependency they should install depending on whether they enabled
`gpu`

cf. the fastembed docs about GPU for more details:
https://qdrant.github.io/fastembed/examples/FastEmbed_GPU/

I did not add tests because it would require access to a GPU in the
testing environment
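
A minimal usage sketch of the new field:

```python
from langchain_community.embeddings.fastembed import FastEmbedEmbeddings

# gpu=True selects the ONNX CUDA provider; requires installing
# fastembed-gpu instead of fastembed
embeddings = FastEmbedEmbeddings(
    model_name="BAAI/bge-small-en-v1.5",
    gpu=True,
)
vectors = embeddings.embed_documents(["a large batch of documents..."])
```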
2025-02-06 08:04:19 -05:00
Dmitrii Rashchenko
0ceda557aa add o1 and o3-mini to pricing (#29628)
### PR Title:  
**community: add latest OpenAI models pricing**  

### Description:  
This PR updates the OpenAI model cost calculation mapping by adding the
latest OpenAI models, **o1 (non-preview)** and **o3-mini**, based on the
pricing listed on the [OpenAI pricing
page](https://platform.openai.com/docs/pricing).

### Changes:  
- Added pricing for `o1`, `o1-2024-12-17`, `o1-cached`, and
`o1-2024-12-17-cached` for input tokens.
- Added pricing for `o1-completion` and `o1-2024-12-17-completion` for
output tokens.
- Added pricing for `o3-mini`, `o3-mini-2025-01-31`, `o3-mini-cached`,
and `o3-mini-2025-01-31-cached` for input tokens.
- Added pricing for `o3-mini-completion` and
`o3-mini-2025-01-31-completion` for output tokens.

### Issue:  
N/A  

### Dependencies:  
None  

### Testing & Validation:  
- No functional changes outside of updating the cost mapping.  
- No tests were added or modified.
2025-02-06 08:02:20 -05:00
ZhangShenao
ac53977dbc [MistralAI] Improve MistralAIEmbeddings (#29242)
- Add a static method decorator for the method.
- Add an expected exception to the retry decorator.

#29125
2025-02-05 21:31:54 -05:00
Andrew Wason
22aa5e07ed standard-tests: Fix ToolsIntegrationTests to correctly handle "content_and_artifact" tools (#29391)
**Description:**

The response from `tool.invoke()` is always a ToolMessage, with content
and artifact fields, not a tuple.
The tuple is converted to a ToolMessage here

b6ae7ca91d/libs/core/langchain_core/tools/base.py (L726)

**Issue:**

Currently `ToolsIntegrationTests` requires `invoke()` to return a tuple
and so standard tests fail for "content_and_artifact" tools. This fixes
that to check the returned ToolMessage.

This PR also adds a test that now passes.
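
A minimal sketch of such a tool and the ToolMessage it yields (the tool
body is a placeholder):

```python
from langchain_core.tools import tool

@tool(response_format="content_and_artifact")
def fetch(query: str) -> tuple[str, dict]:
    """Return a summary (content) plus the raw payload (artifact)."""
    return f"results for {query}", {"raw": [1, 2, 3]}

# Invoked with a ToolCall-shaped dict, invoke() returns a ToolMessage,
# not the (content, artifact) tuple the tool function returned
msg = fetch.invoke(
    {"name": "fetch", "args": {"query": "x"}, "id": "call_1", "type": "tool_call"}
)
print(type(msg).__name__, msg.content, msg.artifact)
```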
2025-02-05 21:27:09 -05:00
Mohammad Anash
f849305a56 fixed Bug in PreFilter of AzureCosmosDBNoSqlVectorSearch (#29613)
Description: Fixes PreFilter value handling in Azure Cosmos DB NoSQL
vectorstore. The current implementation fails to handle numeric values
in filter conditions, causing an undefined value variable error. This PR
adds support for numeric, boolean, and NULL values while maintaining the
existing string and list handling.

Changes:
Added handling for numeric types (int/float)
Added boolean value support
Added NULL value handling
Added type validation for unsupported values
Fixed scope of value variable initialization

Issue: 
Fixes #29610

Implementation Notes:
No changes to public API
Backwards compatible
Maintains consistent behavior with existing MongoDB-style filtering
Preserves SQL injection prevention through proper value handling
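
Illustratively, pre-filters along these lines (shape assumed for
illustration, not taken from the PR) should now work with non-string
values:

```python
# Hypothetical pre-filter values the fix is about: numeric, boolean and
# NULL values alongside the previously supported strings and lists
pre_filter = {
    "conditions": [
        {"property": "metadata.year", "operator": "$eq", "value": 2024},      # numeric
        {"property": "metadata.published", "operator": "$eq", "value": True}, # boolean
        {"property": "metadata.notes", "operator": "$eq", "value": None},     # NULL
    ]
}
```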

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-06 02:20:26 +00:00
Philippe PRADOS
6ff0d5c807 community[minor]: 04 - Refactoring PDFMiner parser (#29526)
This is one part of a larger Pull Request (PR) that is too large to be
submitted all at once. This specific part focuses on updating the PDFMiner
parser.

For more details, see [PR
28970](https://github.com/langchain-ai/langchain/pull/28970).

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-02-05 21:08:27 -05:00
Yoav Levy
4460d20ba9 docs: Nimble provider doc fixes (#29597)
## Description

- Removed broken link for the API Reference
- Added `OPENAI_API_KEY` setter for the chains to properly run
- Renamed one of our examples so it won't override the original
retriever and cause confusion, since it uses a different retrieval mode
- Moved one of our simple examples to be the first example of our
retriever :)
2025-02-05 11:24:37 -08:00
Isaac Francisco
91ffd7caad core: allow passing message dicts into ChatPromptTemplate (#29363)
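Usage presumably looks like this (sketch):

```python
from langchain_core.prompts import ChatPromptTemplate

# Message dicts are accepted alongside tuples and Message objects
prompt = ChatPromptTemplate.from_messages(
    [
        {"role": "system", "content": "You are terse."},
        {"role": "user", "content": "{question}"},
    ]
)
print(prompt.invoke({"question": "hi"}))
```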
Co-authored-by: Erick Friis <erick@langchain.dev>
2025-02-05 09:45:52 -08:00
ccurme
69595b0914 docs: fix builds (#29607)
Failing with:
> ValueError: Provider page not found for databricks-langchain. Please
add one at docs/integrations/providers/databricks-langchain.{mdx,ipynb}
2025-02-05 14:24:53 +00:00
ccurme
91a33a9211 anthropic[patch]: release 0.3.6 (#29606) 2025-02-05 14:18:02 +00:00
ccurme
5cbe6aba8f anthropic[patch]: support citations in streaming (#29591) 2025-02-05 09:12:07 -05:00
William FH
5ae4ed791d Drop duplicate inputs (#29589) 2025-02-04 18:06:10 -08:00
Erick Friis
65f0deb81a packages: databricks-langchain (#29593) 2025-02-05 01:53:34 +00:00
Yoav Levy
621bba7e26 docs: add nimble as a provider (#29579)
## Description:

- Add docs/docs/providers/nimbleway.ipynb
- Add docs/docs/integrations/retrievers/nimbleway.ipynb
- Register nimbleway in libs/packages.yml

- X (twitter) handle: @urielkn / @LevyNorbit8
2025-02-04 16:47:03 -08:00
Erick Friis
50d61eafa2 partners/deepseek: release 0.1.1 (#29592) 2025-02-04 23:46:38 +00:00
Erick Friis
7edfcbb090 docs: rename to langchain-deepseek in docs (#29587) 2025-02-04 14:22:17 -08:00
Erick Friis
04e8f3b6d7 infra: add deepseek api key to release (#29585) 2025-02-04 10:35:07 -08:00
Erick Friis
df8fa882b2 deepseek: bump core (#29584) 2025-02-04 10:25:46 -08:00
Erick Friis
455f65947a deepseek: rename to langchain-deepseek from langchain-deepseek-official (#29583) 2025-02-04 17:57:25 +00:00
Philippe PRADOS
5771e561fb [Bugfix langchain_community] Fix PyMuPDFLoader (#29550)
- **Description:**  add legacy properties
    - **Issue:** #29470
    - **Twitter handle:** pprados
2025-02-04 09:24:40 -05:00
Ashutosh Kumar
65b404a2d1 [oci_generative_ai] Option to pass auth_file_location (#29481)
**PR title**: "community: Option to pass auth_file_location for
oci_generative_ai"

**Description:** Option to pass auth_file_location to override the config
file's default location "~/.oci/config", where profile-name configs are
present. This is not fixing any issue; it just adds an optional parameter
called "auth_file_location", which is internally supported by any OCI
client, including GenerativeAiInferenceClient.
2025-02-03 21:44:13 -05:00
Teruaki Ishizaki
aeb42dc900 partners: Fixed the procedure of initializing pad_token_id (#29500)
- **Description:** Add a check for pad_token_id and eos_token_id in the
model config. It seems this is the same bug as the HuggingFace TGI bug,
and the same bug as #29434
- **Issue:** #29431
- **Dependencies:** none
- **Twitter handle:** tell14

Example code is followings:
```python
from langchain_huggingface.llms import HuggingFacePipeline

hf = HuggingFacePipeline.from_model_id(
    model_id="meta-llama/Llama-3.2-3B-Instruct",
    task="text-generation",
    pipeline_kwargs={"max_new_tokens": 10},
)

from langchain_core.prompts import PromptTemplate

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate.from_template(template)

chain = prompt | hf

question = "What is electroencephalography?"

print(chain.invoke({"question": question}))
```
2025-02-03 21:40:33 -05:00
Tanushree
e8b91283ef Banner for interrupt (#29567)
Adding banner for interrupt
2025-02-03 17:40:24 -08:00
Erick Friis
ab67137fa3 docs: chat model order experiment (#29480) 2025-02-03 18:55:18 +00:00
AmirPoursaberi
a6efd22ba1 Fix a tiny typo in create_retrieval_chain docstring (#29552)
Hi there!

To fix a tiny typo in `create_retrieval_chain` docstring.
2025-02-03 10:54:49 -05:00
JHIH-SIOU LI
48fa3894c2 docs: update readthedocs document loader options (#29556)
Hi there!

This PR updates the documentation according to the code.
If we run the example as is, then it would result in the following
error:

![image](https://github.com/user-attachments/assets/9c0a336c-775c-489c-a275-f1153d447ecb)

It seems that this part of the code already supplied the required
argument to the BeautifulSoup4:

0c782ee547/libs/community/langchain_community/document_loaders/readthedocs.py (L87-L90)

Since the example can only work by removing this argument, it also seems
legit to remove it from the documentation.
2025-02-03 10:54:24 -05:00
Tyllen
0c782ee547 docs: update payman docs (#29479)
- **Description:** fix the import docs variables

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-02-02 02:41:54 +00:00
Hemant Rawat
db1693aa70 community: fix issue #29429 in age_graph.py (#29506)
## Description:

This PR addresses issue #29429 by fixing the _wrap_query method in
langchain_community/graphs/age_graph.py. The method now correctly
handles Cypher queries with UNION and EXCEPT operators, ensuring that
the fields in the SQL query are ordered as they appear in the Cypher
query. Additionally, the method now properly handles cases where RETURN
* is not supported.

### Issue: #29429

### Dependencies: None


### Add tests and docs:

Added unit tests in tests/unit_tests/graphs/test_age_graph.py to
validate the changes.
No new integrations were added, so no example notebook is necessary.
### Lint and test:

Ran make format, make lint, and make test to ensure code quality and
functionality.
2025-02-01 21:24:45 -05:00
Keenan Pepper
2f97916dea docs: Add goodfire notebook and add to packages.yml (#29512)
- **Description:** Add Goodfire ipynb notebook and add
langchain-goodfire package to packages.yml
- **Issue:** n/a
- **Dependencies:** docs only
- **Twitter handle:** keenanpepper

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-01 19:43:20 -05:00
ccurme
a3c5e4d070 deepseek[patch]: bump langchain-openai and add to scheduled testing (#29535) 2025-02-01 18:40:59 -05:00
ccurme
16a422f3fa community: add standard tests for Perplexity (#29534) 2025-02-01 17:02:57 -05:00
A Venkata Sai Krishna Varun
21d8d41595 docs: update delete method in vectorstores.mdx (#29497)
---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-01-31 18:15:28 +00:00
Mark Perfect
b8e218b09f docs: Fix Milvus vector store initialization (#29511)
- A change in the Milvus API has caused an issue with the local vector
store initialization. Having used an Ollama embedding model, the vector
store initialization results in the following error:

<img width="978" alt="image"
src="https://github.com/user-attachments/assets/d57e495c-1764-4fbe-ab8c-21ee44f1e686"
/>

- This is fixed by setting the index type explicitly:

`vector_store = Milvus(embedding_function=embeddings, connection_args={"uri": URI}, index_params={"index_type": "FLAT", "metric_type": "L2"})`

Other small documentation edits were also made.


- [x] **Add tests and docs**:
  N/A



---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-01-31 12:57:36 -05:00
Amit Ghadge
0c405245c4 [Integrations][Tool] Added Jenkins tools support (#29516)

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-01-31 12:50:10 -05:00
Subrat Lima
5b826175c9 docs: Update local_llms.ipynb - fixed a typo (#29520)
Description: fixed a typo in the How-to > Local LLMs > llamafile section
description.
2025-01-31 11:18:24 -05:00
Christophe Bornet
aab2e42169 core[patch]: Use Blockbuster to detect blocking calls in asyncio during tests (#29043)
This PR uses the [blockbuster](https://github.com/cbornet/blockbuster)
library in langchain-core to detect blocking calls made in the asyncio
event loop during unit tests.
Avoiding blocking calls is hard as these can be deeply buried in the
code or made in 3rd party libraries.
Blockbuster makes it easier to detect them by raising an exception when
a call is made to a known blocking function (e.g. `time.sleep`).

Adding blockbuster allowed us to find a blocking call in
`aconfig_with_context` (it ends up calling `get_function_nonlocals`,
which loads function code).

**Dependencies:**
- blockbuster (test)

**Twitter handle:** cbornet_
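
A minimal sketch of such a test fixture (assuming blockbuster's
`blockbuster_ctx` context manager and pytest-asyncio):

```python
import time

import pytest
from blockbuster import BlockingError, blockbuster_ctx

@pytest.fixture(autouse=True)
def blockbuster():
    # Patch known blocking functions for the duration of each test
    with blockbuster_ctx() as bb:
        yield bb

@pytest.mark.asyncio
async def test_detects_blocking_call() -> None:
    # time.sleep inside the event loop is a blocking call: blockbuster raises
    with pytest.raises(BlockingError):
        time.sleep(0.01)
```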
2025-01-31 10:06:34 -05:00
Philippe PRADOS
ceda8bc050 community[minor]: 03 - Refactoring PyPDF parser (#29330)
This is one part of a larger Pull Request (PR) that is too large to be
submitted all at once.
This specific part focuses on updating the PyPDF parser.

For more details, see [PR
28970](https://github.com/langchain-ai/langchain/pull/28970).
2025-01-31 10:05:07 -05:00
Julian Castro Pulgarin
b7e3e337b1 community: Fix YahooFinanceNewsTool to handle updated yfinance data structure (#29498)
**Description:**
Updates the YahooFinanceNewsTool to handle the current yfinance news
data structure. The tool was failing with a KeyError due to changes in
the yfinance API's response format. This PR updates the code to
correctly extract news URLs from the new structure.

**Issue:** #29495

**Dependencies:** 
No new dependencies required. Works with existing yfinance package.

The changes maintain backwards compatibility while fixing the KeyError
that users were experiencing.

The modified code properly handles the new data structure where:
- News type is now at `content.contentType`
- News URL is now at `content.canonicalUrl.url`
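
A minimal sketch of reading the new structure (the "STORY" content type
is an assumption, not from the PR):

```python
import yfinance as yf

# New structure: type at content.contentType, URL at content.canonicalUrl.url
for item in yf.Ticker("AAPL").news:
    content = item.get("content", {})
    if content.get("contentType") == "STORY":
        print(content.get("canonicalUrl", {}).get("url"))
```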

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-01-31 02:31:44 +00:00
Vadym Barda
22219eefaf docs: update README/intro (#29492) 2025-01-29 22:50:00 +00:00
Erick Friis
332e303858 partners/mistralai: release 0.2.6 (#29491) 2025-01-29 22:23:14 +00:00
Erick Friis
2c795f5628 partners/openai: release 0.3.3 (#29490) 2025-01-29 22:23:03 +00:00
Erick Friis
f307b3cc5f langchain: release 0.3.17 (#29485) 2025-01-29 22:22:49 +00:00
Erick Friis
5cad3683b4 partners/groq: release 0.2.4 (#29488) 2025-01-29 22:22:30 +00:00
Erick Friis
e074c26a6b partners/fireworks: release 0.2.7 (#29487) 2025-01-29 22:22:18 +00:00
Erick Friis
685609e1ef partners/anthropic: release 0.3.5 (#29486) 2025-01-29 22:22:11 +00:00
Erick Friis
ed3a5e664c standard-tests: release 0.3.10 (#29484) 2025-01-29 22:21:05 +00:00
Erick Friis
29461b36d9 partners/ollama: release 0.2.3 (#29489) 2025-01-29 22:19:44 +00:00
Erick Friis
07e2e80fe7 core: release 0.3.33 (#29483) 2025-01-29 14:11:53 -08:00
Erick Friis
8f95da4eb1 multiple: structured output tracing standard metadata (#29421)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-01-29 14:00:26 -08:00
ccurme
284c935b08 tests[patch]: improve coverage of structured output tests (#29478) 2025-01-29 14:52:09 -05:00
Erick Friis
c79274cb7c docs: typo in contrib integrations (#29477) 2025-01-29 19:39:36 +00:00
ccurme
a3878a3c62 infra: update deps for notebook tests (#29476) 2025-01-29 10:23:50 -05:00
Matheus Torquato
7aae738296 docs:Fix Imports for Document and BaseRetriever (#29473)
This pull request addresses an issue with import statements in the
langchain_core/retrievers.py file. The following changes have been made:

Corrected the import for Document from langchain_core.documents.base.
Corrected the import for BaseRetriever from langchain_core.retrievers.
These changes ensure that the SimpleRetriever class can correctly
reference the Document and BaseRetriever classes, improving code
reliability and maintainability.

---------

Co-authored-by: Matheus Torquato <mtorquat@jaguarlandrover.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-01-29 14:32:05 +00:00
Mohammad Anash
12bcc85927 added operator filter for supabase (#29475)
Description
This PR adds support for MongoDB-style $in operator filtering in the
Supabase vectorstore implementation. Currently, filtering with $in
operators returns no results, even when matching documents exist. This
change properly translates MongoDB-style filters to PostgreSQL syntax,
enabling efficient multi-document filtering.
Changes

Modified similarity_search_by_vector_with_relevance_scores to handle
MongoDB-style $in operators
Added automatic conversion of $in filters to PostgreSQL IN clauses
Preserved original vector type handling and numpy array conversion
Maintained compatibility with existing postgrest filters
Added support for the same filtering in
similarity_search_by_vector_returning_embeddings

Issue
Closes #27932

Implementation Notes
No changes to public API or function signatures
Backwards compatible - behavior unchanged for non-$in filters
More efficient than multiple individual queries for multi-ID searches
Preserves all existing functionality including numpy array conversion
for vector types

Dependencies
None

Additional Notes
The implementation handles proper SQL escaping for filter values
Maintains consistent behavior with other vectorstore implementations
that support MongoDB-style operators
Future extensions could support additional MongoDB-style operators ($gt,
$lt, etc.)
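
Illustratively (argument shape assumed; `vector_store` stands for an
already initialized SupabaseVectorStore):

```python
# vector_store: an initialized SupabaseVectorStore (setup omitted here)
# MongoDB-style $in filtering, now translated to a PostgreSQL IN clause
docs = vector_store.similarity_search(
    "my query",
    k=4,
    filter={"source": {"$in": ["doc_a.pdf", "doc_b.pdf"]}},
)
```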

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-01-29 14:24:18 +00:00
ccurme
585f467d4a mistral[patch]: release 0.2.5 (#29463) 2025-01-28 18:29:54 -05:00
ccurme
ca9d4e4595 mistralai: support method="json_schema" in structured output (#29461)
https://docs.mistral.ai/capabilities/structured-output/custom_structured_output/
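
Usage presumably looks like this (sketch; schema and model name are
placeholders):

```python
from pydantic import BaseModel

from langchain_mistralai import ChatMistralAI

class Movie(BaseModel):
    title: str
    year: int

llm = ChatMistralAI(model="mistral-large-latest")
structured_llm = llm.with_structured_output(Movie, method="json_schema")
print(structured_llm.invoke("Name a classic sci-fi movie and its year."))
```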
2025-01-28 18:17:39 -05:00
Michael Chin
e120378695 community: Additional AWS deprecations (#29447)
Added deprecation warnings for a few more classes that were moved to the
`langchain-aws` package:
- [SageMaker Endpoint
LLM](https://python.langchain.com/api_reference/aws/retrievers/langchain_aws.retrievers.bedrock.AmazonKnowledgeBasesRetriever.html)
- [Amazon Kendra
retriever](https://python.langchain.com/api_reference/aws/retrievers/langchain_aws.retrievers.kendra.AmazonKendraRetriever.html)
- [Amazon Bedrock Knowledge Bases
retriever](https://python.langchain.com/api_reference/aws/retrievers/langchain_aws.retrievers.bedrock.AmazonKnowledgeBasesRetriever.html)
2025-01-28 09:50:14 -05:00
Tommy Cox
6f711794a7 docs: tiny grammar fix to why_langchain.mdx (#29455)
Description: Tiny grammar fix to doc - why_langchain.mdx
2025-01-28 09:49:33 -05:00
Erick Friis
2d776351af community: release 0.3.16 (#29452) 2025-01-28 07:44:54 +00:00
Erick Friis
737a68fcdc langchain: release 0.3.16 (#29451) 2025-01-28 07:31:09 +00:00
Erick Friis
fa3857c9d0 docs: tests/standard tests api ref redirect (#29444) 2025-01-27 23:21:50 -08:00
Erick Friis
8bf9c71673 core: release 0.3.32 (#29450) 2025-01-28 07:20:04 +00:00
Erick Friis
ecdc881328 langchain: add deepseek provider to init chat model (#29449) 2025-01-27 23:13:59 -08:00
Erick Friis
dced0ed3fd deepseek, docs: chatdeepseek integration added (#29445) 2025-01-28 06:32:58 +00:00
Vadym Barda
7cbf885c18 docs: replace 'state_modifier' with 'prompt' (#29415) 2025-01-27 21:29:18 -05:00
Isaac Francisco
2bb2c9bfe8 change behavior for converting a string to openai messages (#29446) 2025-01-27 18:18:54 -08:00
ccurme
b1fdac726b groq[patch]: update model used in test (#29441)
`llama-3.1-70b-versatile` was [shut
down](https://console.groq.com/docs/deprecations).
2025-01-27 21:11:44 +00:00
Adrián Panella
1551d9750c community(doc_loaders): allow any credential type in AzureAIDocumentI… (#29289)
allow any credential type in AzureAIDocumentIntelligence, not only
`api_key`.
This allows using any of the credential types integrated with AD.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-01-27 20:56:30 +00:00
ccurme
f00c66cc1f chroma[patch]: release 0.2.1 (#29440) 2025-01-27 20:41:35 +00:00
Jorge Piedrahita Ortiz
3b886cdbb2 libs: add sambanova-lagchain integration package (#29417)
- **Description:** Add the sambanova-langchain integration package, as
suggested in previous PRs

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-01-27 20:34:55 +00:00
Mohammad Anash
aba1fd0bd4 fixed similarity search with score error #29407 (#29413)
Description: Fix TypeError in AzureSearch similarity_search_with_score
by removing search_type from kwargs before passing to underlying
requests.

This resolves issue #29407 where search_type was being incorrectly
passed through to Session.request().
Issue: #29407

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-01-27 20:34:42 +00:00
itaismith
7b404fcd37 partners[chroma]: Upgrade Chroma to 0.6.x (#29404)
2025-01-27 15:32:21 -05:00
Teruaki Ishizaki
3fce78994e community: Fixed the procedure of initializing pad_token_id (#29434)
- **Description:** Add a check for pad_token_id and eos_token_id in the
model config. It seems this is the same bug as the HuggingFace TGI bug.
In addition, the source code of
libs/partners/huggingface/langchain_huggingface/llms/huggingface_pipeline.py
also requires similar changes.
- **Issue:** #29431
- **Dependencies:** none
- **Twitter handle:** tell14
2025-01-27 14:54:54 -05:00
Christophe Bornet
dbb6b7b103 core: Add ruff rules TRY (tryceratops) (#29388)
Existing TRY004 ("use TypeError rather than ValueError") errors are
marked as ignored to preserve backward compatibility.
Let me know if you prefer to fix some of them.

Co-authored-by: Erick Friis <erick@langchain.dev>
2025-01-24 05:01:40 +00:00
Erick Friis
723b603f52 docs: groq api key links (#29402) 2025-01-24 04:33:18 +00:00
ccurme
bd1909fe05 docs: document citations in ChatAnthropic (#29401) 2025-01-23 17:18:51 -08:00
ccurme
bbc50f65e7 anthropic[patch]: release 0.3.4 (#29399) 2025-01-23 23:55:58 +00:00
ccurme
ed797e17fb anthropic[patch]: always return content blocks if citations are generated (#29398)
We currently return string (and therefore no content blocks / citations)
if the response is of the form
```
[
    {"text": "a claim", "citations": [...]},
]
```

There are other cases where we do return citations as-is:
```
[
    {"text": "a claim", "citations": [...]},
    {"text": "some other text"},
    {"text": "another claim", "citations": [...]},
]
```
Here we update to return content blocks including citations in the first
case as well.
2025-01-23 18:47:23 -05:00
ccurme
933b35b9c5 docs: update how-to index page (#29393) 2025-01-23 16:55:36 -05:00
Bagatur
317fb86fd9 openai[patch]: fix int test (#29395) 2025-01-23 21:23:01 +00:00
Bagatur
8d566a8fe7 openai[patch]: detect old models in with_structured_output (#29392)
Co-authored-by: ccurme <chester.curme@gmail.com>
2025-01-23 20:47:32 +00:00
Christophe Bornet
b6ae7ca91d core: Cache RunnableLambda __repr__ (#29199)
`RunnableLambda`'s `__repr__` may perform a costly OS operation by
calling `get_lambda_source`, so it's better to cache it.
See #29043
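
The caching pattern is roughly this (illustrative sketch, not the actual
diff):

```python
from functools import cached_property

class Demo:
    """Sketch: compute a costly representation once, then reuse it."""

    @cached_property
    def _repr_value(self) -> str:
        # Imagine a costly get_lambda_source(...) lookup here
        return "RunnableLambda(...)"

    def __repr__(self) -> str:
        return self._repr_value
```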

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-01-23 18:34:47 +00:00
Christophe Bornet
618e550f06 core: Cache RunnableLambda deps (#29200)
`RunnableLambda`'s `deps` may perform a costly OS operation by calling
`get_function_nonlocals`, so it's better to cache it.
See #29043
2025-01-23 13:09:07 -05:00
ccurme
f795ab99ec docs: fix title rendered for integration package (#29387)
"Tilores LangchAIn" -> "Tilores"
2025-01-23 12:21:19 -05:00
Stefan Berkner
8977451c76 docs: add Tilores provider and tools (#29244)
Description: This PR adds documentation for the Tilores provider and
tools.
Issue: closes #26320
2025-01-23 12:17:59 -05:00
Ahmed Tammaa
d5b8aabb32 text-splitters[patch]: delete unused html_chunks_with_headers.xslt (#29340)
This pull request removes the now-unused html_chunks_with_headers.xslt
file from the codebase. In a previous update ([PR
#27678](https://github.com/langchain-ai/langchain/pull/27678)), the
HTMLHeaderTextSplitter class was refactored to utilize BeautifulSoup
instead of lxml and XSLT for HTML processing. As a result, the
html_chunks_with_headers.xslt file is no longer necessary and can be
safely deleted to maintain code cleanliness and reduce potential
confusion.

Issue: N/A

Dependencies: N/A
2025-01-23 11:29:08 -05:00
Wang Ran (汪然)
8f2c11e17b core[patch]: fix API reference for draw_ascii (#29370)
typo: there is no `draw`, only `draw_ascii` (among other fixes)

now it works:
<img width="688" alt="image"
src="https://github.com/user-attachments/assets/5b5a8cc2-cf81-4a5c-b443-da0e4426556c"
/>
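
For reference, a minimal sketch (rendering requires the `grandalf`
package):

```python
from langchain_core.runnables import RunnableLambda

chain = RunnableLambda(lambda x: x + 1) | RunnableLambda(lambda x: x * 2)
# draw_ascii (not draw) renders the runnable's graph as ASCII art
print(chain.get_graph().draw_ascii())
```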

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-01-23 16:04:58 +00:00
Michael Chin
2df9daa7f2 docs: Update BedrockEmbeddings import example in aws.mdx (#29364)
The `BedrockEmbeddings` class in `langchain-community` has been
deprecated since v0.2.11:


https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/bedrock.py#L14-L19

Updated the AWS docs for `BedrockEmbeddings` to use the new class in
`langchain-aws`.
2025-01-23 10:05:57 -05:00
Loris Alexandre
e4921239a6 community: missing mandatory parameter partition_key for AzureCosmosDBNoSqlVectorSearch (#29382)
- **Description:** the `delete` function of
AzureCosmosDBNoSqlVectorSearch uses
`self._container.delete_item(document_id)`, which misses the mandatory
parameter `partition_key`.
We use the class function `delete_document_by_id` to provide a default
`partition_key`.
- **Issue:** #29372 
- **Dependencies:** None
- **Twitter handle:** None

Co-authored-by: Loris Alexandre <loris.alexandre@boursorama.fr>
2025-01-23 10:05:10 -05:00
Terry Tan
ec0ebb76f2 community: fix Google Scholar tool errors (#29371)
Resolve https://github.com/langchain-ai/langchain/issues/27557
2025-01-23 10:03:01 -05:00
江同学呀
a1e62070d0 community: Fix the problem of error reporting when OCR extracts text from PDF. (#29378)
- **Description:** Fixed the issue where images could not be
recognized from ```xObject[obj]["/Filter"]``` (whose value can be either
a string or a list of strings) in the ```_extract_images_from_page()```
method. It also resolves the bug where vectorization by Faiss fails
(```IndexError: list index out of range```) due to failed image
extraction from a PDF containing only images.

![69a60f3f6bd474641b9126d74bb18f7e](https://github.com/user-attachments/assets/dc9e098d-2862-49f7-93b0-00f1056727dc)

- **Issue:** 
    Fix the following issues:
[#15227 ](https://github.com/langchain-ai/langchain/issues/15227)
[#22892 ](https://github.com/langchain-ai/langchain/issues/22892)
[#26652 ](https://github.com/langchain-ai/langchain/issues/26652)
[#27153 ](https://github.com/langchain-ai/langchain/issues/27153)
    Related issues:
[#7067 ](https://github.com/langchain-ai/langchain/issues/7067)

- **Dependencies:** None
- **Twitter handle:** None

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-01-23 15:01:52 +00:00
Tim Mallezie
a13faab6b7 community: allow to set gitlab url in gitlab tool in constructor (#29380)
This PR expands the GitLab tool so the URL can also be set in the
constructor instead of only through env variables.

This allows doing something like this:
```
       # Create the GitLab API wrapper
        gitlab_api = GitLabAPIWrapper(
            gitlab_url=self.gitlab_url,
            gitlab_personal_access_token=self.gitlab_personal_access_token,
            gitlab_repository=self.gitlab_repository,
            gitlab_branch=self.gitlab_branch,
            gitlab_base_branch=self.gitlab_base_branch,
        )
```
Previously, the URL could not be set in the constructor.

Co-authored-by: Tim Mallezie <tim.mallezie@dropsolid.com>
2025-01-23 09:36:27 -05:00
Tyllen
f2ea62f632 docs: add payman docs (#29362)
- **Description:** Adding the docs to use the payman-langchain
integration :)
2025-01-22 18:37:47 -08:00
Erick Friis
861024f388 docs: openai audio input (#29360) 2025-01-22 23:45:35 +00:00
Erick Friis
3f1d20964a standard-tests: release 0.3.9 (#29356) 2025-01-22 09:46:19 -08:00
Macs Dickinson
7378c955db community: adds support for getting github releases for the configured repository (#29318)
**Description:** adds support for github tool to query github releases
on the configure respository
**Issue:** N/A
**Dependencies:** N/A
**Twitter handle:** @macsdickinson

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-01-22 15:45:52 +00:00
Tayaa Med Amine
ef1610e24a langchain[patch]: support ollama in init_embeddings (#29349)
Why not Ollama?
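
Usage presumably looks like this (sketch; the model name is a
placeholder):

```python
from langchain.embeddings import init_embeddings

# "ollama" can now be used as the provider prefix
embeddings = init_embeddings("ollama:nomic-embed-text")
print(embeddings.embed_query("hello")[:5])
```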


---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-01-22 14:47:12 +00:00
Siddhant
9eb10a9240 langchain: added vectorstore docstring linting (#29241)
…ore.py

Added docstring linting in the vectorstore.py file relating to issue
#25154



---------

Co-authored-by: Siddhant Jain <sjain35@buffalo.edu>
Co-authored-by: Erick Friis <erick@langchain.dev>
2025-01-22 03:47:43 +00:00
Erick Friis
a2ed796aa6 infra: run doc lint on root pyproject change (#29350) 2025-01-22 03:22:13 +00:00
Sohan
de1fc4811d packages, docs: Pipeshift - Langchain integration of pipeshift (#29114)
Description: Added pipeshift integration. This integrates pipeshift LLM
and ChatModels APIs with langchain
Dependencies: none

Unit Tests & Integration tests are added

Documentation is added as well

This PR is w.r.t.
[#27390](https://github.com/langchain-ai/langchain/pull/27390); as
requested, a freshly minted `langchain-pipeshift` package is uploaded to
PyPI. Only changes to the docs & packages.yml are made in the langchain
master branch.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2025-01-22 03:03:06 +00:00
Erick Friis
e723882a49 docs: mongodb api ref redirect (#29348) 2025-01-21 16:48:03 -08:00
Christophe Bornet
836c791829 text-splitters: Bump ruff version to 0.9 (#29231)
Co-authored-by: Erick Friis <erick@langchain.dev>
2025-01-22 00:27:58 +00:00
Christophe Bornet
a004dec119 langchain: Bump ruff version to 0.9 (#29211)
Co-authored-by: Erick Friis <erick@langchain.dev>
2025-01-22 00:26:39 +00:00
Christophe Bornet
2340b3154d standard-tests: Bump ruff version to 0.9 (#29230)
Co-authored-by: Erick Friis <erick@langchain.dev>
2025-01-22 00:23:01 +00:00
Christophe Bornet
e4a78dfc2a core: Bump ruff version to 0.9 (#29201)
Also run some preview autofix and formatting

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2025-01-22 00:20:09 +00:00
Ella Charlaix
6f95db81b7 huggingface: Add IPEX models support (#29179)
Co-authored-by: Erick Friis <erick@langchain.dev>
2025-01-22 00:16:44 +00:00
Bhav Sardana
d6a7aaa97d community: Fix for Pydantic model validator of GoogleApiClient (#29346)
- **Description:** Fix for the Pydantic model validator of GoogleApiHandler.
- **Issue:** #29165

Ran `make format`, `make lint` and `make test` from the root of the
modified package(s).

---------

Signed-off-by: Bhav Sardana <sardana.bhav@gmail.com>
2025-01-21 15:17:43 -05:00
Christophe Bornet
1c4ce7b42b core: Auto-fix some docstrings (#29337) 2025-01-21 13:29:53 -05:00
ccurme
86a0720310 fireworks[patch]: update model used in integration tests (#29342)
No access to firefunction-v1 and -v2.
2025-01-21 11:05:30 -05:00
Hugo Berg
32c9c58adf Community: fix missing f-string modifier in oai structured output parsing error (#29326)
- **Description:** The ValueError raised on certain structured-output
parsing errors in the langchain openai community integration was missing
an f-string modifier and so didn't produce useful output. This is a
2-line, 2-character change.
- **Issue:** None open that this fixes
- **Dependencies:** Nothing changed
- **Twitter handle:** None

- [X] **Add tests and docs**: There's nothing to add.
- [-] **Lint and test**: Happy to run this if you deem it necessary.
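
For illustration (message text is hypothetical, not the actual string):

```python
e = "missing fields"
# Before: the literal "{e}" ended up in the error message
print("Could not parse structured output: {e}")
# After: the f-string modifier interpolates the actual error
print(f"Could not parse structured output: {e}")
```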

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-01-21 14:26:38 +00:00
Nuno Campos
566915d7cf core: fix call to get closure vars for partial-wrapped funcs (#29316)
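A minimal sketch of the underlying Python behavior (illustrative, not LangChain's internal helper): a `functools.partial` object has no code object of its own, so closure variables must be read from the wrapped `.func`.

```python
import inspect
from functools import partial

def make_handler(flag):
    def handler(x, y):
        return (x + y) if flag else (x - y)
    return handler

wrapped = partial(make_handler(True), 1)

# Unwrap the partial before inspecting closure variables.
fn = wrapped.func if isinstance(wrapped, partial) else wrapped
print(inspect.getclosurevars(fn).nonlocals)  # {'flag': True}
```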
2025-01-21 09:26:15 -05:00
ZhangShenao
33e22ccb19 [Doc] Improve api doc (#29324)
- Fix doc description
- Add static method decorator
2025-01-21 09:16:08 -05:00
Tugay Talha İçen
7b44c3384e Docs: update huggingfacehub.ipynb (#29329)
langchain -> langchain-huggingface

Updated the installation command from `%pip install --upgrade --quiet
langchain sentence_transformers` to `%pip install --upgrade --quiet
langchain-huggingface sentence_transformers`.

This resolves an import error in the notebook when using from
langchain_huggingface.embeddings import HuggingFaceEmbeddings.
2025-01-21 09:12:22 -05:00
Bagatur
536b44a47f community[patch]: Release 0.3.15 (#29325) 2025-01-21 03:10:07 +00:00
Bagatur
ec5fae76d4 langchain[patch]: Release 0.3.15 (#29322) 2025-01-21 02:24:11 +00:00
Bagatur
923e6fb321 core[patch]: 0.3.31 (#29320) 2025-01-21 01:17:31 +00:00
Ikko Eltociear Ashimine
06456c1dcf docs: update google_cloud_sql_mssql.ipynb (#29315)
arbitary -> arbitrary
2025-01-20 16:11:08 -05:00
Ahmed Tammaa
d3ed9b86be text-splitters[minor]: Replace lxml and XSLT with BeautifulSoup in HTMLHeaderTextSplitter for Improved Large HTML File Processing (#27678)
This pull request updates the `HTMLHeaderTextSplitter` by replacing the
`split_text_from_file` method's implementation. The original method used
`lxml` and XSLT for processing HTML files, which caused
`lxml.etree.xsltapplyerror maxhead` when handling large HTML documents
due to limitations in the XSLT processor. Fixes #13149

By switching to BeautifulSoup (`bs4`), we achieve:

- **Improved Performance and Reliability:** BeautifulSoup efficiently
processes large HTML files without the errors associated with `lxml` and
XSLT.
- **Simplified Dependencies:** Removes the dependency on `lxml` and
external XSLT files, relying instead on the widely used `beautifulsoup4`
library.
- **Maintained Functionality:** The new method replicates the original
behavior, ensuring compatibility with existing code and preserving the
extraction of content and metadata.

**Issue:**

This change addresses issues related to processing large HTML files with
the existing `HTMLHeaderTextSplitter` implementation. It resolves
problems where users encounter `lxml.etree.xsltapplyerror maxhead` due to
large HTML documents.

**Dependencies:**

- **BeautifulSoup (`beautifulsoup4`):** The `beautifulsoup4` library is
now used for parsing HTML content.
  - Installation: `pip install beautifulsoup4`

**Code Changes:**

Updated the `split_text_from_file` method in `HTMLHeaderTextSplitter` as
follows:

```python
def split_text_from_file(self, file: Any) -> List[Document]:
    """Split HTML file using BeautifulSoup.

    Args:
        file: HTML file path or file-like object.

    Returns:
        List of Document objects with page_content and metadata.
    """
    from bs4 import BeautifulSoup
    from langchain.docstore.document import Document
    import bs4

    # Read the HTML content from the file or file-like object
    if isinstance(file, str):
        with open(file, 'r', encoding='utf-8') as f:
            html_content = f.read()
    else:
        # Assuming file is a file-like object
        html_content = file.read()

    # Parse the HTML content using BeautifulSoup
    soup = BeautifulSoup(html_content, 'html.parser')

    # Extract the header tags and their corresponding metadata keys
    headers_to_split_on = [tag[0] for tag in self.headers_to_split_on]
    header_mapping = dict(self.headers_to_split_on)

    documents = []

    # Find the body of the document
    body = soup.body if soup.body else soup

    # Find all header tags in the order they appear
    all_headers = body.find_all(headers_to_split_on)

    # If there's content before the first header, collect it
    first_header = all_headers[0] if all_headers else None
    if first_header:
        pre_header_content = ''
        for elem in first_header.find_all_previous():
            if isinstance(elem, bs4.Tag):
                text = elem.get_text(separator=' ', strip=True)
                if text:
                    pre_header_content = text + ' ' + pre_header_content
        if pre_header_content.strip():
            documents.append(Document(
                page_content=pre_header_content.strip(),
                metadata={}  # No metadata since there's no header
            ))
    else:
        # If no headers are found, return the whole content
        full_text = body.get_text(separator=' ', strip=True)
        if full_text.strip():
            documents.append(Document(
                page_content=full_text.strip(),
                metadata={}
            ))
        return documents

    # Process each header and its associated content
    for header in all_headers:
        current_metadata = {}
        header_name = header.name
        header_text = header.get_text(separator=' ', strip=True)
        current_metadata[header_mapping[header_name]] = header_text

        # Collect all sibling elements until the next header of the same or higher level
        content_elements = []
        for sibling in header.find_next_siblings():
            if sibling.name in headers_to_split_on:
                # Stop at the next header
                break
            if isinstance(sibling, bs4.Tag):
                content_elements.append(sibling)

        # Get the text content of the collected elements
        current_content = ''
        for elem in content_elements:
            text = elem.get_text(separator=' ', strip=True)
            if text:
                current_content += text + ' '

        # Create a Document if there is content
        if current_content.strip():
            documents.append(Document(
                page_content=current_content.strip(),
                metadata=current_metadata.copy()
            ))
        else:
            # If there's no content, but we have metadata, still create a Document
            documents.append(Document(
                page_content='',
                metadata=current_metadata.copy()
            ))

    return documents
```

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-01-20 16:10:37 -05:00
Christophe Bornet
989eec4b7b core: Add ruff rule S101 (no assert) (#29267)
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-01-20 20:24:31 +00:00
Christophe Bornet
e5d62c6ce7 core: Add ruff rule W293 (whitespaces) (#29272) 2025-01-20 15:16:12 -05:00
Philippe PRADOS
4efc5093c1 community[minor]: Refactoring PyMuPDF parser, loader and add image blob parsers (#29063)
* Adds BlobParsers for images. These implementations can take an image
and produce one or more documents per image. This interface can be used
for exposing OCR capabilities.
* Update PyMuPDFParser and Loader to standardize metadata, handle
images, improve table extraction etc.

- **Twitter handle:** pprados

This is one part of a larger Pull Request (PR) that is too large to be
submitted all at once.
This specific part focuses on preparing the update of all parsers.

For more details, see [PR
28970](https://github.com/langchain-ai/langchain/pull/28970).
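A brief usage sketch of the loader touched here; the file path is illustrative and `extract_images` is an assumption based on the description above, so check the parser for exact parameters:

```python
from langchain_community.document_loaders import PyMuPDFLoader

loader = PyMuPDFLoader("example.pdf", extract_images=True)
docs = loader.load()
print(docs[0].metadata)  # standardized metadata per this refactor
```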

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-01-20 15:15:43 -05:00
Syed Baqar Abbas
f175319303 [feat] Added backwards compatibility for OllamaEmbeddings initialization (migration from langchain_community.embeddings to langchain_ollama.embeddings (#29296)
- [feat] **Added backwards compatibility for OllamaEmbeddings
initialization (migration from `langchain_community.embeddings` to
`langchain_ollama.embeddings`**: "langchain_ollama"
- **Description:** Given that `OllamaEmbeddings` from
`langchain_community.embeddings` is deprecated, code is being shifted to
`langchain_ollama.embeddings`. However, this does not offer backward
compatibility for initializing the parameters of the `OllamaEmbeddings`
object.
    - **Issue:** #29294 
    - **Dependencies:** None
    - **Twitter handle:** @BaqarAbbas2001


## Additional Information
Previously, `OllamaEmbeddings` from `langchain_community.embeddings`
used to support the following options:

e9abe583b2/libs/community/langchain_community/embeddings/ollama.py (L125-L139)

However, in the new package `from langchain_ollama import
OllamaEmbeddings`, there is no method to set these options. I have added
these parameters to resolve this issue.

This issue was also discussed in
https://github.com/langchain-ai/langchain/discussions/29113
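A hedged usage sketch of the restored initialization; the model name is illustrative, and parameter names beyond `model` and `base_url` should be checked against `langchain_ollama`:

```python
from langchain_ollama import OllamaEmbeddings

embeddings = OllamaEmbeddings(
    model="llama3",                     # illustrative model name
    base_url="http://localhost:11434",  # default Ollama endpoint
)
vector = embeddings.embed_query("hello world")
```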
2025-01-20 11:16:29 -05:00
CLOVA Studio 개발
7a95ffc775 community: fix some features on Naver ChatModel & embedding model 2 (#29243)
## Description
- Responding to `NCP API Key` changes.
- To fix `ChatClovaX` `astream` function to raise `SSEError` when an
error event occurs.
- To add `token length` and `ai_filter` to ChatClovaX's
`response_metadata`.
- To update the documentation for the NCP API Key changes.

cc. @efriis @vbarda
2025-01-20 11:01:03 -05:00
Sangyun_LEE
5d64597490 docs: fix broken Appearance of langchain_community/document_loaders/recursive_url_loader API Reference (#29305)
# PR message
## Description
Fixed the broken appearance of the RecursiveUrlLoader API reference.

### Before
![before 1](https://github.com/user-attachments/assets/f39df65d-b788-411d-88af-8bfa2607c00b)
![before 2](https://github.com/user-attachments/assets/b8a92b70-4548-4b4a-965f-026faeebd0ec)

### After
![after 1](https://github.com/user-attachments/assets/8ea28146-de45-42e2-b346-3004ec4dfc55)
![after 2](https://github.com/user-attachments/assets/914c6966-4055-45d3-baeb-2d97eab06fe7)

## Issue:
N/A
## Dependencies
None
## Twitter handle
N/A

# Add tests and docs
Not applicable; this change only affects documentation.

# Lint and test
Ran make format, make lint, and make test to ensure no issues.
2025-01-20 10:56:59 -05:00
Hemant Rawat
6c52378992 Add Google-style docstring linting and update pyproject.toml (#29303)
### Description:

This PR introduces Google-style docstring linting for the
ModelLaboratory class in libs/langchain/langchain/model_laboratory.py.
It also updates the pyproject.toml file to comply with the latest Ruff
configuration standards, which deprecate top-level lint settings in favor
of the `[tool.ruff.lint]` section.

### Changes include:
- [x] Added detailed Google-style docstrings to all methods in
ModelLaboratory.
- [x] Updated pyproject.toml to move select and pydocstyle settings
under the [tool.ruff.lint] section.
- [x] Ensured all files pass Ruff linting.

Issue:
Closes #25154

### Dependencies:
No additional dependencies are required for this change.

### Checklist
- [x] Files passes ruff linting.
- [x] Docstrings conform to the Google-style convention.
- [x] pyproject.toml updated to avoid deprecation warnings.
- [x] My PR is ready to review, please review.
2025-01-19 14:37:21 -05:00
Mohammad Mohtashim
b5fbebb3c8 (Community): Changing the BaseURL and Model for MiniMax (#29299)
- **Description:** Changed the Base Default Model and Base URL to
correct versions. Plus added a more explicit exception if user provides
an invalid API Key
- **Issue:** #29278
2025-01-19 14:15:02 -05:00
ccurme
c20f7418c7 openai[patch]: fix Azure LLM test (#29302)
The tokens I get are:
```
['', '\n\n', 'The', ' sun', ' was', ' setting', ' over', ' the', ' horizon', ',', ' casting', '']
```
so possibly an extra empty token is included in the output.

lmk @efriis if we should look into this further.
2025-01-19 17:25:42 +00:00
ccurme
6b249a0dc2 openai[patch]: release 0.3.1 (#29301) 2025-01-19 17:04:00 +00:00
ThomasSaulou
e9abe583b2 chatperplexity stream-citations in additional kwargs (#29273)

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-01-18 22:31:10 +00:00
Farzad Sharif
8fad9214c7 docs: fix qa_per_user.ipynb (#29290)
# Description
The `config` option was not passed to `configurable_retriever.invoke()`.
Screenshot below. Fixed.
 
<img width="731" alt="Screenshot 2025-01-18 at 11 59 28 AM"
src="https://github.com/user-attachments/assets/21f30739-2abd-4150-b3ad-626ea9e3f96c"
/>
2025-01-18 16:24:31 -05:00
Vadim Rusin
2fb6fd7950 docs: fix broken link in JSONOutputParser reference (#29292)
### PR message:
- **Description:** Fixed a broken link in the documentation for the
`JSONOutputParser`. Updated the link to point to the correct reference:
  From:  

`https://python.langchain.com/api_reference/core/output_parsers/langchain_core.output_parsers.json.JSONOutputParser.html`
  To:  

`https://python.langchain.com/api_reference/core/output_parsers/langchain_core.output_parsers.json.JsonOutputParser.html`.
This ensures accurate navigation for users referring to the
`JsonOutputParser` documentation.
- **Issue:** N/A
- **Dependencies:** None
- **Twitter handle:** N/A

### Add tests and docs:
Not applicable; this change only affects documentation.

### Lint and test:
Ran `make format`, `make lint`, and `make test` to ensure no issues.
2025-01-18 16:17:34 -05:00
TheSongg
1cd4d8d101 [langchain_community.llms.xinference]: Rewrite _stream() method and support stream() method in xinference.py (#29259)
- **PR title**: [langchain_community.llms.xinference]: Rewrite
_stream() method and support stream() method in xinference.py

- **PR message**: Rewrite the `_stream` method so that `chain.stream()`
can be used to return data streams:

```python
chain = prompt | llm
chain.stream(input=user_input)
```

- **tests**:

```python
from langchain_community.llms import Xinference
from langchain.prompts import PromptTemplate

llm = Xinference(
    server_url="http://0.0.0.0:9997",  # replace with your xinference server url
    model_uid="{model_uid}",  # replace with the model UID returned from launching the model
    stream=True,
)
prompt = PromptTemplate(
    input_variables=["country"],
    template="Q: where can we visit in the capital of {country}? A:",
)
chain = prompt | llm
chain.stream(input={"country": "France"})
```
2025-01-17 20:31:59 -05:00
Amaan
d4b9404fd6 docs: add langchain dappier tool integration notebook (#29265)
Add tools to interact with Dappier APIs with an example notebook.

For `DappierRealTimeSearchTool`, the tool can be invoked with:

```python
from langchain_dappier import DappierRealTimeSearchTool

tool = DappierRealTimeSearchTool()

tool.invoke({"query": "What happened at the last wimbledon"})
```

```
At the last Wimbledon in 2024, Carlos Alcaraz won the title by defeating Novak Djokovic. This victory marked Alcaraz's fourth Grand Slam title at just 21 years old! 🎉🏆🎾
```

For `DappierAIRecommendationTool`, the tool can be invoked with:

```python
from langchain_dappier import DappierAIRecommendationTool

tool = DappierAIRecommendationTool(
    data_model_id="dm_01j0pb465keqmatq9k83dthx34",
    similarity_top_k=3,
    ref="sportsnaut.com",
    num_articles_ref=2,
    search_algorithm="most_recent",
)
```

```
[{"author": "Matt Weaver", "image_url": "https://images.dappier.com/dm_01j0pb465keqmatq9k83dthx34...", "pubdate": "Fri, 17 Jan 2025 08:04:03 +0000", "source_url": "https://sportsnaut.com/chili-bowl-thursday-bell-column/", "summary": "The article highlights the thrilling unpredictability... ", "title": "Thursday proves why every lap of Chili Bowl..."},
{"author": "Matt Higgins", "image_url": "https://images.dappier.com/dm_01j0pb465keqmatq9k83dthx34...", "pubdate": "Fri, 17 Jan 2025 02:48:42 +0000", "source_url": "https://sportsnaut.com/new-york-mets-news-pete-alonso...", "summary": "The New York Mets are likely parting ways with star...", "title": "MLB insiders reveal New York Mets’ last-ditch..."},
{"author": "Jim Cerny", "image_url": "https://images.dappier.com/dm_01j0pb465keqmatq9k83dthx34...", "pubdate": "Fri, 17 Jan 2025 05:10:39 +0000", "source_url": "https://www.foreverblueshirts.com/new-york-rangers-news...", "summary": "The New York Rangers achieved a thrilling 5-3 comeback... ", "title": "Rangers score 3 times in 3rd period for stirring 5-3..."}]
```

The integration package can be found over here -
https://github.com/DappierAI/langchain-dappier
2025-01-17 19:02:28 -05:00
ccurme
184ea8aeb2 anthropic[patch]: update tool choice type (#29276) 2025-01-17 15:26:33 -05:00
ccurme
ac52021097 anthropic[patch]: release 0.3.2 (#29275) 2025-01-17 19:48:31 +00:00
ccurme
c616b445f2 anthropic[patch]: support parallel_tool_calls (#29257)
Need to:
- Update docs
- Decide if this is an explicit kwarg of bind_tools
- Decide if this should be in standard test with flag for supporting
2025-01-17 19:41:41 +00:00
Erick Friis
628145b172 infra: fix api build (#29274) 2025-01-17 10:41:59 -08:00
Zapiron
97a5bc7fc7 docs: Fixed typos and improve metadata explanation (#29266)
Fix mini typos and made the explanation of metadata filtering clearer
2025-01-17 11:17:40 -05:00
Jun He
f0226135e5 docs: Remove redundant "%" (#29205)
Before this commit, the copied command can't be used directly.
2025-01-17 14:30:58 +00:00
Michael Chin
36ff83a0b5 docs: Message history for Neptune chains (#29260)
Expanded the Amazon Neptune documentation with new sections detailing
usage of chat message history with the
`create_neptune_opencypher_qa_chain` and
`create_neptune_sparql_qa_chain` functions.
2025-01-17 09:06:17 -05:00
ccurme
d5360b9bd6 core[patch]: release 0.3.30 (#29256) 2025-01-16 17:52:37 -05:00
Nuno Campos
595297e2e5 core: Add support for calls in get_function_nonlocals (#29255)
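An illustrative sketch in plain Python (not the `langchain_core` implementation) of the case being added: a nonlocal that is only ever used as the target of a call.

```python
import inspect

class Config:
    def get(self):
        return "value"

def make():
    cfg = Config()
    def runnable_fn():
        return cfg.get()  # a call on the nonlocal, not a plain reference
    return runnable_fn

fn = make()
print(inspect.getclosurevars(fn).nonlocals)  # {'cfg': <Config object ...>}
```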
2025-01-16 14:43:42 -08:00
Luis Lopez
75663f2cae community: Add cost per 1K tokens for fine-tuned model cached input (#29248)
### Description

- The cost per 1K input tokens for a fine-tuned cached version of
`gpt-4o-mini-2024-07-18` is not available in the `OpenAICallbackHandler`,
so it raises an error when trying to make calls with such a model.
- Adds the price to the `MODEL_COST_PER_1K_TOKENS` dictionary.

cc. @efriis
2025-01-16 15:19:26 -05:00
Junon
667d2a57fd add mode arg to OBSFileLoader.load() method (#29246)
- **Description:** add mode arg to OBSFileLoader.load() method
  - **Issue:** #29245
  - **Dependencies:** no dependencies required for this change

---------

Co-authored-by: Junon_Gz <junon_gz@qq.com>
2025-01-16 11:09:04 -05:00
c6388d736b docs: fix typo in tool_results_pass_to_model.ipynb (how-to) (#29252)
Description: fix typo. change word from `cals` to `calls`
Issue: closes #29251 
Dependencies: None
Twitter handle: None
2025-01-16 11:05:28 -05:00
Erick Friis
4bc6cb759f docs: update recommended code interpreters (#29236)
unstable :(
2025-01-15 16:03:26 -08:00
Erick Friis
5eb4dc5e06 standard-tests: double messages test (#29237) 2025-01-15 15:14:29 -08:00
Nithish Raghunandanan
1051fa5729 couchbase: Migrate couchbase partner package to different repo (#29239)
**Description:** Migrate the couchbase partner package to
[Couchbase-Ecosystem](https://github.com/Couchbase-Ecosystem/langchain-couchbase)
org
2025-01-15 12:37:27 -08:00
Nadeem Sajjad
eaf2fb287f community(pypdfloader): added page_label in metadata for pypdf loader (#29225)
# Description

## Summary
This PR adds support for handling multi-labeled page numbers in the
**PyPDFLoader**. Some PDFs use complex page numbering systems where the
actual content may begin after multiple introductory pages. The
page_label field helps accurately reflect the document’s page structure,
making it easier to handle such cases during document parsing.

## Motivation
This feature improves document parsing accuracy by allowing users to
access the actual page labels instead of relying only on the physical
page numbers. This is particularly useful for documents where the first
few pages have roman numerals or other non-standard page labels.

## Use Case
This feature is especially useful for **Retrieval-Augmented Generation**
(RAG) systems where users may reference page numbers when asking
questions. Some PDFs have both labeled page numbers (like roman numerals
for introductory sections) and index-based page numbers.

For example, a user might ask:

	"What is mentioned on page 5?"

The system can now check both:
- **Index-based page number** (`page`)
- **Labeled page number** (`page_label`)

This dual-check helps improve retrieval accuracy. Additionally, the
results can be validated with an **agent or tool** to ensure the
retrieved pages match the user’s query contextually.

## Code Changes

- Added a page_label field to the metadata of the Document class in
**PyPDFLoader**.
- Implemented support for retrieving page_label from the
pdf_reader.page_labels.
- Created a test case (test_pypdf_loader_with_multi_label_page_numbers)
with a sample PDF containing multi-labeled pages
(geotopo-komprimiert.pdf) [[Source of
pdf](https://github.com/py-pdf/sample-files/blob/main/009-pdflatex-geotopo/GeoTopo-komprimiert.pdf)].
- Updated existing tests to ensure compatibility and verify page_label
extraction.
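A hedged usage sketch of reading the new field; the file name is the sample PDF mentioned above, and `pypdf` must be installed:

```python
from langchain_community.document_loaders import PyPDFLoader

docs = PyPDFLoader("geotopo-komprimiert.pdf").load()
for doc in docs[:3]:
    # page is the physical index; page_label is the label added by this PR
    print(doc.metadata.get("page"), doc.metadata.get("page_label"))
```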

## Tests Added

- Added a new test case for a PDF with multi-labeled pages.
- Verified both page and page_label metadata fields are correctly
extracted.

## Screenshots

<img width="549" alt="image"
src="https://github.com/user-attachments/assets/65db9f5c-032e-4592-926f-824777c28f33"
/>
2025-01-15 14:18:07 -05:00
Mehdi
1a38948ee3 Mehdi zare/fmp data doc (#29219)
Title: community: add Financial Modeling Prep (FMP) API integration

Description: Adding LangChain integration for Financial Modeling Prep
(FMP) API to enable semantic search and structured tool creation for
financial data endpoints. This integration provides semantic endpoint
search using vector stores and automatic tool creation with proper
typing and error handling. Users can discover relevant financial
endpoints using natural language queries and get properly typed
LangChain tools for discovered endpoints.

Issue: N/A

Dependencies:

fmp-data>=0.3.1
langchain-core>=0.1.0
faiss-cpu
tiktoken
Twitter handle: @mehdizarem

Unit tests and example notebook have been added:

Tests are in tests/integration_tests/test_tools.py and
tests/unit_tests/test_tools.py
Example notebook is in docs/tools.ipynb
All format, lint and test checks pass:

pytest
mypy .
Dependencies are imported within functions and not added to
pyproject.toml. The changes are backwards compatible and only affect the
community package.

---------

Co-authored-by: mehdizare <mehdizare@users.noreply.github.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-01-15 15:31:01 +00:00
Mohammad Mohtashim
288613d361 (text-splitters): Small Fix in _process_html for HTMLSemanticPreservingSplitter to properly extract the metadata. (#29215)
- **Description:** Include `main` in the list of elements whose child
elements needs to be processed for splitting the HTML.
- **Issue:** #29184
2025-01-15 10:18:06 -05:00
TheSongg
4867fe7ac8 [langchain_community.llms.xinference]: fix error in xinference.py (#29216)
- [ ] **PR title**: [langchain_community.llms.xinference]: fix error in
xinference.py

- **PR message**: The old code raised a `ValidationError`
(`pydantic_core._pydantic_core.ValidationError: 1 validation error for
Xinference`) when importing `Xinference` from xinference.py. This issue
has been resolved by adjusting the field's type and default value.

```
File "/media/vdc/python/lib/python3.10/site-packages/pydantic/main.py", line 212, in __init__
    validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
pydantic_core._pydantic_core.ValidationError: 1 validation error for Xinference
client
  Field required [type=missing, input_value={'server_url': 'http://10...t4', 'model_kwargs': {}}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.9/v/missing
```

- **tests**:

```python
from langchain_community.llms import Xinference

llm = Xinference(
    server_url="http://0.0.0.0:9997",  # replace with your xinference server url
    model_uid="{model_uid}",  # replace with the model UID returned from launching the model
)
```
2025-01-15 10:11:26 -05:00
Kostadin Devedzhiev
bea5798b04 docs: Fix typo in retrievers documentation: 'An vectorstore' -> 'A vectorstore' (#29221)
- [x] **PR title**: "docs: Fix typo in documentation"

- [x] **PR message**:
- **Description:** Fixed a typo in the documentation, changing "An
vectorstore" to "A vector store" for grammatical accuracy.
    - **Issue:** N/A (no issue filed for this typo fix)
    - **Dependencies:** None
    - **Twitter handle:** N/A


- [x] **Add tests and docs**: This is a minor documentation fix that
doesn't require additional tests or example notebooks.


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
2025-01-15 10:10:14 -05:00
Sohaib Athar
d1cf10373b Update elasticsearch_retriever.ipynb (#29223)
docs: fix typo (connection)
- **Twitter handle:** @ReallyVirtual
2025-01-15 10:09:51 -05:00
Syed Baqar Abbas
4278046329 [fix] Convert table names to list for compatibility in SQLDatabase (#29229)
- [langchain_community.utilities.SQLDatabase] **[fix] Convert table
names to list for compatibility in SQLDatabase**:
  - The "package" modified is community
  - The issue lay in this block of code:

44b41b699c/libs/community/langchain_community/utilities/sql_database.py (L72-L77)

- **Description:** When the SQLDatabase is initialized, it runs
`self._inspector.get_table_names(schema=schema)`, which is expected to
return a list. However, with some connectors (such as Snowflake) the
returned data type can be another iterable. This results in a type error
when concatenating the table_names to view_names. I have added explicit
type casting to prevent this.
- **Issue:** #29227
- **Dependencies:** None
- **Twitter handle:** @BaqarAbbas2001

## Additional Information
When the following method is called for a Snowflake database:

44b41b699c/libs/community/langchain_community/utilities/sql_database.py (L75)

Snowflake under the hood calls:
```python
from snowflake.sqlalchemy.snowdialect import SnowflakeDialect
SnowflakeDialect.get_table_names
```

This method returns a `dict_keys()` object, which cannot be concatenated
with a list and results in a `TypeError`.
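A minimal illustration of the failure and the fix (names are illustrative, not the actual sql_database.py code):

```python
table_names = {"orders": None, "users": None}.keys()  # dict_keys, as the Snowflake dialect returns
view_names = ["recent_orders"]

try:
    table_names + view_names
except TypeError as exc:
    print(exc)  # unsupported operand type(s) for +: 'dict_keys' and 'list'

all_names = list(table_names) + view_names  # explicit cast, as in this fix
```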

### Relevant Library Versions
- **snowflake-sqlalchemy**: 1.7.2  
- **snowflake-connector-python**: 3.12.4  
- **sqlalchemy**: 2.0.20  
- **langchain_community**: 0.3.14
2025-01-15 10:00:03 -05:00
Jin Hyung Ahn
05554265b4 community: Fix ConfluenceLoader load() failure caused by deleted pages (#29232)
## Description
This PR modifies the is_public_page function in ConfluenceLoader to
prevent exceptions caused by deleted pages during the execution of
ConfluenceLoader.process_pages().


**Example scenario:**
Consider the following usage of ConfluenceLoader:
```python
import os
from langchain_community.document_loaders import ConfluenceLoader

loader = ConfluenceLoader(
        url=os.getenv("BASE_URL"),
        token=os.getenv("TOKEN"),
        max_pages=1000,
        cql=f'type=page and lastmodified >= "2020-01-01 00:00"',
        include_restricted_content=False,
)

# Raised Exception : HTTPError: Outdated version/old_draft/trashed? Cannot find content Please provide valid ContentId.
documents = loader.load()
```

If a deleted page exists within the query result, the is_public_page
function would previously raise an exception when calling
get_all_restrictions_for_content, causing the loader.load() process to
fail for all pages.

By adding a pre-check for the page's "current" status, unnecessary API
calls to get_all_restrictions_for_content for non-current pages are
avoided.


This fix ensures that such pages are skipped without affecting the rest
of the loading process.

## Issue
N/A (No specific issue number)

## Dependencies
No new dependencies are introduced with this change.

## Twitter handle
[@zenoengine](https://x.com/zenoengine)
2025-01-15 09:56:23 -05:00
Mohammad Mohtashim
21eb39dff0 [Community]: AzureOpenAIWhisperParser Authentication Fix (#29135)
- **Description:** `AzureOpenAIWhisperParser` authentication fix as
stated in the issue.
- **Issue:** #29133
2025-01-15 09:44:53 -05:00
Erick Friis
44b41b699c docs: api docs build folder prep update (#29220) 2025-01-15 03:52:00 +00:00
Erick Friis
b05543c69b packages: disable mongodb for api docs (#29218) 2025-01-15 02:23:01 +00:00
Erick Friis
30badd7a32 packages: update mongodb folder (#29217) 2025-01-15 02:01:06 +00:00
pm390
76172511fd community: Additional parameters for OpenAIAssistantV2Runnable (#29207)
**Description:** Added Additional parameters that could be useful for
usage of OpenAIAssistantV2Runnable.

This change allows LangChain users to set parameters that cannot be set
using the Assistants UI
(max_completion_tokens, max_prompt_tokens, parallel_tool_calls) and
parameters that are useful for experimenting, like top_p and
temperature.

This PR originated from the need of using parallel_tool_calls in
langchain, this parameter is very important in openAI assistants because
without this parameter set to False strict mode is not respected by
OpenAI Assistants
(https://platform.openai.com/docs/guides/function-calling#parallel-function-calling).

> Note: Currently, if the model calls multiple functions in one turn
then strict mode will be disabled for those calls.

**Issue:** None
**Dependencies:** openai
2025-01-14 15:53:37 -05:00
Guy Korland
efadad6067 Add Link to FalkorDB Memory example (#29204)
- **Description:** Add Link to FalkorDB Memory example
2025-01-14 13:27:52 -05:00
Bagatur
4ab04ad6be docs: oai api ref nit (#29210) 2025-01-14 17:55:16 +00:00
Michael Chin
d9b856abad community: Deprecate Amazon Neptune resources in langchain-community (#29191)
Related: https://github.com/langchain-ai/langchain-aws/pull/322

The legacy `NeptuneOpenCypherQAChain` and `NeptuneSparqlQAChain` classes
are being replaced by the new LCEL format chains
`create_neptune_opencypher_qa_chain` and
`create_neptune_sparql_qa_chain`, respectively, in the `langchain_aws`
package.

This PR adds deprecation warnings to all Neptune classes and functions
that have been migrated to `langchain_aws`. All relevant documentation
has also been updated to replace `langchain_community` usage with the
new `langchain_aws` implementations.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-01-14 10:23:34 -05:00
Erick Friis
c55af44711 anthropic: pydantic mypy plugin (#29144) 2025-01-13 15:32:40 -08:00
Erick Friis
cdf3a17e55 docs: fix httpx conflicts with overrides in docs build (#29180) 2025-01-13 21:25:00 +00:00
ccurme
1bf6576709 cli[patch]: fix anchor links in templates (#29178)
These are outdated and can break docs builds.
2025-01-13 18:28:18 +00:00
Christopher Varjas
e156b372fb langchain: support api key argument with OpenAI moderation chain (#29140)
**Description:** Makes it possible to instantiate
`OpenAIModerationChain` with an `openai_api_key` argument only and no
`OPENAI_API_KEY` environment variable defined.

**Issue:** https://github.com/langchain-ai/langchain/issues/25176

**Dependencies:** `openai`
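A brief usage sketch of the new behavior (the key value is a placeholder):

```python
from langchain.chains import OpenAIModerationChain

# Works now even with no OPENAI_API_KEY set in the environment.
moderation = OpenAIModerationChain(openai_api_key="sk-...")
```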

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-01-13 11:00:02 -05:00
Nikhil Shahi
335ca3a606 docs: add HyperbrowserLoader docs (#29143)
### Description
This PR adds docs for the
[langchain-hyperbrowser](https://pypi.org/project/langchain-hyperbrowser/)
package. It includes a document loader that uses Hyperbrowser to scrape
or crawl any urls and return formatted markdown or html content as well
as relevant metadata.
[Hyperbrowser](https://hyperbrowser.ai) is a platform for running and
scaling headless browsers. It lets you launch and manage browser
sessions at scale and provides easy to use solutions for any webscraping
needs, such as scraping a single page or crawling an entire site.

### Issue
None

### Dependencies
None

### Twitter Handle
`@hyperbrowser`
2025-01-13 10:45:39 -05:00
Zhengren Wang
4c0217681a cookbook: fix typo in cookbook/mongodb-langchain-cache-memory.ipynb (#29149)
Description: fix "enviornment" into "environment". 
Issue: Typo
Dependencies: None
Twitter handle: zrwang01
2025-01-13 10:35:34 -05:00
Gabe Cornejo
e64bfb537f docs: Fix old link to Unstructured package in document_loader_markdown.ipynb (#29175)
Fixed a broken link in `document_loader_markdown.ipynb` to point to the
updated documentation page for the Unstructured package.
Issue: N/A
Dependencies: None
2025-01-13 15:26:01 +00:00
Tymon Żarski
689592f9bb community: Fix rank-llm import paths for new 0.20.3 version (#29154)
# **PR title**: "community: Fix rank-llm import paths for new 0.20.3
version"
- The "community" package is being modified to handle updated import
paths for the new `rank-llm` version.

---

## Description
This PR updates the import paths for the `rank-llm` package to account
for changes introduced in version `0.20.3`. The changes ensure
compatibility with both pre- and post-revamp versions of `rank-llm`,
specifically version `0.12.8`. Conditional imports are introduced based
on the detected version of `rank-llm` to handle different path
structures for `VicunaReranker`, `ZephyrReranker`, and `SafeOpenai`.
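A sketch of the version-gated import approach described above; the module paths shown are assumptions, not `rank_llm`'s exact layout:

```python
import pkg_resources
from packaging import version

rank_llm_version = version.parse(pkg_resources.get_distribution("rank-llm").version)

if rank_llm_version >= version.parse("0.20.0"):
    from rank_llm.rerank.listwise.vicuna_reranker import VicunaReranker  # assumed post-revamp path
else:
    from rank_llm.rerank.vicuna_reranker import VicunaReranker  # assumed pre-revamp path
```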

## Issue
RankLLMRerank usage throws an error (with GPT, though not only) when the
rank-llm version is > 0.12.8 - #29156

## Dependencies
This change relies on the `packaging` and `pkg_resources` libraries to
handle version checks.

## Twitter handle
@tymzar
2025-01-13 10:22:14 -05:00
Andrew
0e3115330d Add additional_instructions on openai assistan runs create. (#29164)
- **Description**: In the functions `_create_run` and `_acreate_run`,
the parameters passed to the creation of
`openai.resources.beta.threads.runs` were limited.

  Source: 
  ```python
  def _create_run(self, input: dict) -> Any:
      params = {
          k: v
          for k, v in input.items()
          if k in ("instructions", "model", "tools", "run_metadata")
      }
      return self.client.beta.threads.runs.create(
          input["thread_id"],
          assistant_id=self.assistant_id,
          **params,
      )
  ```
- OpenAI Documentation
([createRun](https://platform.openai.com/docs/api-reference/runs/createRun))

- Full list of parameters `openai.resources.beta.threads.runs` ([source
code](https://github.com/openai/openai-python/blob/main/src/openai/resources/beta/threads/runs/runs.py#L91))

 
- **Issue:** Fix #17574 



- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-01-13 10:11:47 -05:00
ccurme
e4ceafa1c8 langchain[patch]: update extended tests for compatibility with langchain-openai==0.3 (#29174) 2025-01-13 15:04:22 +00:00
Syed Muneeb Abbas
8ef7f3eacc Fixed the import error in OpenAIWhisperParserLocal and resolved the L… (#29168)
…angChain parser issue.

2025-01-13 09:47:31 -05:00
Priyansh Agrawal
c115c09b6d community: add missing format specifier in error log in CubeSemanticLoader (#29172)
- **Description:** Add a missing format specifier in an error log in
`langchain_community.document_loaders.CubeSemanticLoader`
- **Issue:** raises `TypeError: not all arguments converted during
string formatting`
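A minimal illustration of this bug class (illustrative strings, not the loader's actual log line):

```python
dimension = "orders.count"
try:
    "Failed to load dimension" % (dimension,)  # specifier missing
except TypeError as exc:
    print(exc)  # not all arguments converted during string formatting

message = "Failed to load dimension %s" % (dimension,)  # fixed: specifier present
```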
2025-01-13 09:32:57 -05:00
ThomasSaulou
349b5c91c2 fix chatperplexity: remove 'stream' from params in _stream method (#29173)
quick fix chatperplexity: remove 'stream' from params in _stream method
2025-01-13 09:31:37 -05:00
LIU Yuwei
f980144e9c community: add init for unstructured file loader (#29101)
## Description
Add `__init__` for unstructured loader of
epub/image/markdown/pdf/ppt/word to restrict the input type to `str` or
`Path`.
In the
[signature](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.markdown.UnstructuredMarkdownLoader.html)
these unstructured loaders receive `file_path: str | List[str] | Path |
List[Path]`, but actually they only receive `str` or `Path`.

## Issue
None

## Dependencies
No changes.
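A brief sketch of the constrained input; the file name is illustrative, and both forms below are accepted after this change:

```python
from pathlib import Path
from langchain_community.document_loaders import UnstructuredMarkdownLoader

UnstructuredMarkdownLoader("README.md")        # str is accepted
UnstructuredMarkdownLoader(Path("README.md"))  # Path is accepted
```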
2025-01-13 09:26:00 -05:00
Erick Friis
bbc3e3b2cf openai: disable streaming for o1 by default (#29147)
Currently 400s
https://community.openai.com/t/streaming-support-for-o1-o1-2024-12-17-resulting-in-400-unsupported-value/1085043

o1-mini and o1-preview stream fine
2025-01-11 02:24:11 +00:00
Isaac Francisco
62074bac60 replace all LANGCHAIN_ flags with LANGSMITH_ flags (#29120) 2025-01-11 01:24:40 +00:00
Bagatur
5c2fbb5b86 docs: Update openai README.md (#29146) 2025-01-10 17:24:16 -08:00
Erick Friis
0a54aedb85 anthropic: pdf integration test (#29142) 2025-01-10 21:56:31 +00:00
ccurme
8de8519daf tests[patch]: release 0.3.8 (#29141) 2025-01-10 21:53:41 +00:00
Jiang
7d3fb21807 Add lindorm as new integration (#29123)
A misoperation caused the original PR to be closed: [original PR
link](https://github.com/langchain-ai/langchain/pull/29085)

---------

Co-authored-by: jiangzhijie <jiangzhijie.jzj@alibaba-inc.com>
2025-01-10 16:30:37 -05:00
Zapiron
7594ad694f docs: update the correct learning objective YAML instead of XML (#29131)
Update the learning objective on the how-to page from XML to YAML, which
is what the page teaches.

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-01-10 16:13:13 -05:00
Mateusz Szewczyk
b1d3e25eb6 docs: Update IBM WatsonxRerank documentation (#29138)
Update the presented model in the `WatsonxRerank` documentation.
2025-01-10 15:07:29 -05:00
ccurme
4819b500e8 pinecone[patch]: release 0.2.2 (#29139) 2025-01-10 14:59:57 -05:00
Ashvin
46fd09ffeb partner: Update aiohttp in langchain pinecone. (#28863)
- **partner**: "Update Aiohttp for resolving vulnerability issue"
    
- **Description:** I have updated the upper limit of aiohttp from `3.10`
to `3.10.5` in the pyproject.toml file of langchain-pinecone. Hopefully
this will resolve #28771. Please review this as I'm quite unsure.

---------

Co-authored-by: = <=>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-01-10 14:54:52 -05:00
ccurme
df5ec45b32 docs[patch]: update docs for langchain-openai==0.3 (#29119)
Update model for one notebook that specified `gpt-4`.

Otherwise just updating cassettes.

---------

Co-authored-by: Jacob Lee <jacoblee93@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2025-01-10 13:29:31 -05:00
ccurme
f3d370753f xai[minor]: release 0.2 (#29132)
Update `langchain-openai` to 0.3. See [release
notes](https://github.com/langchain-ai/langchain/releases/tag/langchain-openai%3D%3D0.3.0)
for details. Should only impact default values of `temperature`, `n`,
and `max_retries`.
2025-01-10 11:47:27 -05:00
ccurme
6e63ccba84 openai[minor]: release 0.3 (#29100)
## Goal

Solve the following problems with `langchain-openai`:

- Structured output with `o1` [breaks out of the
box](https://langchain.slack.com/archives/C050X0VTN56/p1735232400232099).
- `with_structured_output` by default does not use OpenAI’s [structured
output
feature](https://platform.openai.com/docs/guides/structured-outputs).
- We override API defaults for temperature and other parameters.

## Breaking changes:

- Default method for structured output is changing to OpenAI’s dedicated
[structured output
feature](https://platform.openai.com/docs/guides/structured-outputs).
For schemas specified via TypedDict or JSON schema, strict schema
validation is disabled by default but can be enabled by specifying
`strict=True`.
- To recover previous default, pass `method="function_calling"` into
`with_structured_output`.
- Models that don’t support `method="json_schema"` (e.g., `gpt-4` and
`gpt-3.5-turbo`, currently the default model for ChatOpenAI) will raise
an error unless `method` is explicitly specified.
- To recover previous default, pass `method="function_calling"` into
`with_structured_output`.
- Schemas specified via Pydantic `BaseModel` that have fields with
non-null defaults or metadata (like min/max constraints) will raise an
error.
- To recover previous default, pass `method="function_calling"` into
`with_structured_output`.
- `strict` now defaults to False for `method="json_schema"` when schemas
are specified via TypedDict or JSON schema.
- To recover previous behavior, use `with_structured_output(schema,
strict=True)`
- Schemas specified via Pydantic V1 will raise a warning (and use
`method="function_calling"`) unless `method` is explicitly specified.
- To remove the warning, pass `method="function_calling"` into
`with_structured_output`.
- Streaming with default structured output method / Pydantic schema no
longer generates intermediate streamed chunks.
- To recover previous behavior, pass `method="function_calling"` into
`with_structured_output`.
- We no longer override default temperature (was 0.7 in LangChain, now
will follow OpenAI, currently 1.0).
- To recover previous behavior, initialize `ChatOpenAI` or
`AzureChatOpenAI` with `temperature=0.7`.
- Note: conceptually there is a difference between forcing a tool call
and forcing a response format. Tool calls may have more concise
arguments vs. generating content adhering to a schema. Prompts may need
to be adjusted to recover desired behavior.
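A brief usage sketch of the opt-out mentioned throughout the list above:

```python
from pydantic import BaseModel
from langchain_openai import ChatOpenAI

class Person(BaseModel):
    name: str
    age: int

# Recover the previous defaults: function-calling output and temperature 0.7.
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.7)
structured_llm = llm.with_structured_output(Person, method="function_calling")
```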

---------

Co-authored-by: Jacob Lee <jacoblee93@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2025-01-10 10:50:32 -05:00
ccurme
facfd42768 docs[patch]: fix links in partner package table (#29112)
Integrations in external repos are not built into [API
ref](https://python.langchain.com/api_reference/), so currently [the
table](https://python.langchain.com/docs/integrations/providers/#integration-packages)
includes broken links. Here we update the links for this type of package
to point to PyPi.
2025-01-09 10:37:15 -05:00
ccurme
815bfa1913 openai[patch]: support streaming with json_schema response format (#29044)
- Stream JSON string content. Final chunk includes parsed representation
(following OpenAI
[docs](https://platform.openai.com/docs/guides/structured-outputs#streaming)).
- Mildly (?) breaking change: if you were using streaming with
`response_format` before, usage metadata will disappear unless you set
`stream_usage=True`.

## Response format

Before:

![Screenshot 2025-01-06 at 11 59
01 AM](https://github.com/user-attachments/assets/e54753f7-47d5-421d-b8f3-172f32b3364d)


After:

![Screenshot 2025-01-06 at 11 58
13 AM](https://github.com/user-attachments/assets/34882c6c-2284-45b4-92f7-5b5b69896903)


## with_structured_output

For pydantic output, behavior of `with_structured_output` is unchanged
(except for warning disappearing), because we pluck the parsed
representation straight from OpenAI, and OpenAI doesn't return it until
the stream is completed. Open to alternatives (e.g., parsing from
content or intermediate dict chunks generated by OpenAI).

Before:

![Screenshot 2025-01-06 at 12 38
11 PM](https://github.com/user-attachments/assets/913d320d-f49e-4cbb-a800-b394ae817fd1)

After:

![Screenshot 2025-01-06 at 12 38
58 PM](https://github.com/user-attachments/assets/f7a45dd6-d886-48a6-8d76-d0e21ca767c6)
2025-01-09 10:32:30 -05:00
Panos Vagenas
858f655a25 docs: add Docling loader docs (#29104)
### Description
This adds the docs for the Docling document loader.
[Docling](https://github.com/DS4SD/docling) parses PDF, DOCX, PPTX,
HTML, and other formats into a rich unified representation including
document layout, tables etc., making them ready for generative AI
workflows like RAG.

Some references:
- https://research.ibm.com/blog/docling-generative-AI
-
https://www.redhat.com/en/blog/docling-missing-document-processing-companion-generative-ai
- [Docling Technical Report](https://arxiv.org/abs/2408.09869)

The introduced `DoclingLoader` enables users to:
- use various document types in their LLM applications with ease and
speed, and
- leverage Docling's rich representation for advanced, document-native
grounding.

### Issue
Replacing PR #27987 as discussed with @efriis
[here](https://github.com/langchain-ai/langchain/pull/27987#issuecomment-2489354930).

### Dependencies
None

---------

Signed-off-by: Panos Vagenas <35837085+vagenas@users.noreply.github.com>
2025-01-09 10:15:35 -05:00
fzowl
cc55e32924 docs: Adding voyage-3-large to the .ipynb file (#29098)
**Description:**
Adding the voyage-3-large model to the .ipynb file (it's just extending a
list, so not even a code change).


2025-01-09 10:01:55 -05:00
Tacobaco
287abd9e0d Update word in databricks_vector_search.ipynb from "cna" to "can" (#29109)
fix to word "can"

2025-01-09 10:01:00 -05:00
Mohammad Mohtashim
a46c2bce51 [Community]: Small Fix in google_firestore memory notebook (#29107)
- **Description:** Just a small fix in google_firestore memory notebook
- **Issue:** #29095
2025-01-09 10:00:41 -05:00
Joshua Campbell
00dcc44739 Langchain_community: Fix issue with missing backticks in arango client (#29110)
- **Description:** Adds backticks to generate_schema function in the
arango graph client
- **Issue:** We experienced an issue with the generate schema function
when talking to our arango database where these backticks were missing
    - **Dependencies:** none
    - **Twitter handle:** @anangelofgrace
2025-01-09 10:00:10 -05:00
Inah Jeon
fa6f08faa1 docs: Add upstage document parse loader to pdf loaders (#29099)
Add upstage document parse loader to pdf loaders

2025-01-08 15:32:39 -05:00
LIU Yuwei
2b09f798e1 community: add init for UnstructuredHTMLLoader to solve pathlib paths (#29091)
## Description
Add `__init__` for `UnstructuredHTMLLoader` to restrict the input type
to `str` or `Path`, and transfer the `self.file_path` to `str` just like
`UnstructuredXMLLoader` does.

## Issue
Fix #29090 

## Dependencies
No changes.
2025-01-08 10:19:27 -05:00
Jin Hyung Ahn
c8ca1cd42f community: fix "confluence-loader" enable include_labels for documents loaded via CQL (#29089)
## Description
This PR enables label inclusion for documents loaded via CQL in the
confluence-loader.

- Updated _lazy_load to pass the include_labels parameter instead of
False in process_pages calls for documents loaded via CQL.
- Ensured that labels can now be fetched and added to the metadata for
documents queried with cql.

## Related Modification History
This PR builds on the previous functionality introduced in
[#28259](https://github.com/langchain-ai/langchain/pull/28259), which
added support for including labels with the include_labels option.
However, this functionality did not work as expected for CQL queries,
and this PR fixes that issue.

If the False handling was intentional due to another issue, please let
me know. I have verified with our Confluence instance that this change
allows labels to be correctly fetched for documents loaded via CQL.

## Issue
Fixes #29088


## Dependencies
No changes.

## Twitter Handle
[@zenoengine](https://x.com/zenoengine)
2025-01-08 10:16:39 -05:00
Inah Jeon
9d290abccd partner: Update Upstage Model Names and Remove Deprecated Model (#29093)
This PR updates model names in the upstage library to reflect the latest
naming conventions and removes deprecated models.

Changes:

Renamed Models:
- `solar-1-mini-chat` -> `solar-mini`
- `solar-1-mini-embedding-query` -> `embedding-query`

Removed Deprecated Models:
- `layout-analysis` (replaced to `document-parse`)

Reference:
- https://console.upstage.ai/docs/getting-started/overview
-
https://github.com/langchain-ai/langchain-upstage/releases/tag/libs%2Fupstage%2Fv0.5.0

2025-01-08 10:13:22 -05:00
Zapiron
9f5fa50bbf docs: Remove additional ` in heading (#29096)
Remove the additional ` in the pipe operator heading
2025-01-08 10:11:30 -05:00
Prashanth Rao
b1dafaef9b Kùzu package integration docs (#29076)
## Langchain Kùzu

### Description
 
This PR adds docs for the `langchain-kuzu` package [on
PyPI](https://pypi.org/project/langchain-kuzu/) that was recently
published, allowing Kùzu users to more easily use and work with
LangChain QA chains. The package will also make it easier for the Kùzu
team to continue supporting and updating the integration over future
releases.

### Twitter Handle

Please tag [@kuzudb](https://x.com/kuzudb) on Twitter once this PR is
merged, so LangChain users can be notified!

---------

Co-authored-by: Erick Friis <erickfriis@gmail.com>
2025-01-08 01:14:00 +00:00
Erick Friis
cc0f81f40f partners/groq: release 0.2.3 (#29081) 2025-01-07 23:36:51 +00:00
Erick Friis
fcc9cdd100 multiple: disable socket for unit tests (#29080) 2025-01-07 15:31:50 -08:00
Erick Friis
539ebd5431 groq: user agent (#29079) 2025-01-07 23:21:57 +00:00
Erick Friis
c5bee0a544 pinecone: bump core version (#29077) 2025-01-07 20:23:33 +00:00
Cory Waddingham
ce9e9f9314 pinecone: Review pinecone tests (#29073)
Title: langchain-pinecone: improve test structure and async handling

Description: This PR improves the test infrastructure for the
langchain-pinecone package by:
1. Implementing LangChain's standard test patterns for embeddings
2. Adding comprehensive configuration testing
3. Improving async test coverage
4. Fixing integration test issues with namespaces and async markers

The changes make the tests more robust, maintainable, and aligned with
LangChain's testing standards while ensuring proper async behavior in
the embeddings implementation.

Key improvements:
- Added standard EmbeddingsTests implementation
- Split custom configuration tests into a separate test class
- Added proper async test coverage with pytest-asyncio
- Fixed namespace handling in vector store integration tests
- Improved test organization and documentation

Dependencies: None (uses existing test dependencies)

Tests and Documentation:
- Added standard test implementation following LangChain's patterns
- Added comprehensive unit tests for configuration and async behavior
- All tests passing locally
- No documentation changes needed (internal test improvements only)

Twitter handle: N/A

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2025-01-07 11:46:30 -08:00
ccurme
d9c51b71c4 infra[patch]: drop prompty from core dependents (#29068) 2025-01-07 11:01:29 -05:00
Philippe PRADOS
2921597c71 community[patch]: Refactoring PDF loaders: 01 prepare (#29062)
- **Refactoring PDF loaders step 1**: "community: Refactoring PDF
loaders to standardize approaches"

- **Description:** Declare `CloudBlobLoader` in `__init__.py`; `file_path`
is `Union[str, PurePath]` everywhere
- **Twitter handle:** pprados

This is one part of a larger Pull Request (PR) that is too large to be
submitted all at once.
This specific part focuses on preparing the update of all parsers.

For more details, see [PR
28970](https://github.com/langchain-ai/langchain/pull/28970).

@eyurtsev it's the start of a PR series.
2025-01-07 11:00:04 -05:00
lspataroG
a49448a7c9 Add Google Vertex AI Vector Search Hybrid Search Documentation (#29064)
Add examples in the documentation to use hybrid search in Vertex AI
[Vector
Search](https://github.com/langchain-ai/langchain-google/pull/628)
2025-01-07 10:29:03 -05:00
Keiichi Hirobe
0d226de25c [docs] Update indexing.ipynb (#29055)
According to https://github.com/langchain-ai/langchain/pull/21127, now
`AzureSearch` should be compatible with LangChain indexer.
2025-01-07 10:03:32 -05:00
ccurme
55677e31f7 text-splitters[patch]: release 0.3.5 (#29054)
Resolves https://github.com/langchain-ai/langchain/issues/29053
2025-01-07 09:48:26 -05:00
Erick Friis
187131c55c Revert "integrations[patch]: remove non-required chat param defaults" (#29048)
Reverts langchain-ai/langchain#26730

discuss best way to release default changes (esp openai temperature)
2025-01-06 14:45:34 -08:00
Bagatur
3d7ae8b5d2 integrations[patch]: remove non-required chat param defaults (#26730)
anthropic:
  - max_retries

openai:
  - n
  - temperature
  - max_retries

fireworks
  - temperature

groq
  - n
  - max_retries
  - temperature

mistral
  - max_retries
  - timeout
  - max_concurrent_requests
  - temperature
  - top_p
  - safe_mode

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2025-01-06 22:26:22 +00:00
UV
b9db8e9921 DOC: Improve human input prompt in FewShotChatMessagePromptTemplate example (#29023)
Fixes #29010 

This PR updates the example for FewShotChatMessagePromptTemplate by
modifying the human input prompt to include a more descriptive and
user-friendly question format ('What is {input}?') instead of just
'{input}'. This change enhances clarity and usability in the
documentation example.

Co-authored-by: Erick Friis <erick@langchain.dev>
2025-01-06 12:29:15 -08:00
ccurme
1f78d4faf4 voyageai[patch]: release 0.1.4 (#29046) 2025-01-06 20:20:19 +00:00
Eugene Evstafiev
6a152ce245 docs: add langchain-pull-md Markdown loader (#29024)
- [x] **PR title**: "docs: add langchain-pull-md Markdown loader"

- [x] **PR message**: 
- **Description:** This PR introduces the `langchain-pull-md` package to
the LangChain community. It includes a new document loader that utilizes
the pull.md service to convert URLs into Markdown format, particularly
useful for handling web pages rendered with JavaScript frameworks like
React, Angular, or Vue.js. This loader helps in efficient and reliable
Markdown conversion directly from URLs without local rendering, reducing
server load.
    - **Issue:** NA
    - **Dependencies:** requests >=2.25.1
    - **Twitter handle:** https://x.com/eugeneevstafev?s=21

- [x] **Add tests and docs**: 
1. Added unit tests to verify URL checking and conversion
functionalities.
2. Created a comprehensive example notebook detailing the usage of the
new loader.

- [x] **Lint and test**: 
- Completed local testing using `make format`, `make lint`, and `make
test` commands as per the LangChain contribution guidelines.


**Related Links:**
- [Package Repository](https://github.com/chigwell/langchain-pull-md)
- [PyPI Package](https://pypi.org/project/langchain-pull-md/)

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2025-01-06 19:32:43 +00:00
Ashvin
20a715a103 community: Fix redundancy in code. (#29022)
In my previous PR (#28953), I added an unwanted condition for validating
the Azure ML Endpoint. In this PR, I have rectified the issue.
2025-01-06 12:58:16 -05:00
Jason Rodrigues
c8d6f9d52b Update index.mdx (#29029)
spell check

2025-01-04 22:04:00 -05:00
Adrián Panella
acddfc772e core: allow artifact in create_retriever_tool (#28903)
Add an option to return content and artifacts, so that the full
information of the retrieved documents can also be accessed.

They are returned as a list of dicts in the `artifacts` property if
parameter `response_format` is set to `"content_and_artifact"`.

Defaults to `"content"` to keep current behavior.
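
A hedged usage sketch (the retriever here is a stand-in; `name` and `description` are illustrative):

```python
from langchain_core.documents import Document
from langchain_core.retrievers import BaseRetriever
from langchain_core.tools import create_retriever_tool


class TinyRetriever(BaseRetriever):
    def _get_relevant_documents(self, query, *, run_manager):
        return [Document(page_content=f"stub result for {query}")]


tool = create_retriever_tool(
    TinyRetriever(),
    name="search_docs",
    description="Search the internal documentation.",
    response_format="content_and_artifact",  # default remains "content"
)
```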

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2025-01-03 22:10:31 +00:00
ccurme
3e618b16cd community[patch]: release 0.3.14 (#29019) 2025-01-03 15:34:24 -05:00
ccurme
18eb9c249d langchain[patch]: release 0.3.14 (#29018) 2025-01-03 15:15:44 -05:00
ccurme
8e50e4288c core[patch]: release 0.3.29 (#29017) 2025-01-03 14:58:39 -05:00
ccurme
85403bfa99 core[patch]: substantially speed up @deprecated (#29016)
Resolves https://github.com/langchain-ai/langchain/issues/26918

Unit tests don't raise any additional `LangChainDeprecationWarning`.
Would like guidance on how to test this more thoroughly if needed.

Note: speed up for `bind_tools` path is shown below. This is
**redundant** with the speedup in
https://github.com/langchain-ai/langchain/pull/29015. I include it for
demonstration purposes.

Before:

![Screenshot 2025-01-03 at 12 54
50 PM](https://github.com/user-attachments/assets/87f289eb-4cad-4304-85f7-5c58c59080f1)

After:

![Screenshot 2025-01-03 at 12 55
35 PM](https://github.com/user-attachments/assets/95ad0506-e1d1-4c5c-bb27-6a634d8810c9)
2025-01-03 14:38:53 -05:00
ccurme
4bb391fd4e core[patch]: remove deprecated functions from tool binding hotpath (#29015)
(Inspired by https://github.com/langchain-ai/langchain/issues/26918)

We rely on some deprecated public functions in the hot path for tool
binding (`convert_pydantic_to_openai_function`,
`convert_python_function_to_openai_function`, and
`format_tool_to_openai_function`). My understanding is that what is
deprecated is not the functionality they implement, but use of them in
the public API -- we expect to continue to rely on them.

Here we update these functions to be private and not deprecated. We keep
the public, deprecated functions as simple wrappers that can be safely
deleted.

The `@deprecated` wrapper adds considerable latency due to its use of
the `inspect` module. This update speeds up `bind_tools` by a factor of
~100x:

Before:

![Screenshot 2025-01-03 at 11 22
55 AM](https://github.com/user-attachments/assets/94b1c433-ce12-406f-b64c-ca7103badfe0)

After:

![Screenshot 2025-01-03 at 11 23
41 AM](https://github.com/user-attachments/assets/02d0deab-82e4-45ca-8cc7-a20b91a5b5db)
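
The pattern, as a minimal sketch (names and version strings illustrative):

```python
from langchain_core._api import deprecated


def _convert_pydantic_to_openai_function(model):
    """Private, non-deprecated implementation used on the hot path."""
    ...  # real conversion logic lives here


@deprecated(since="0.1.16", removal="1.0")
def convert_pydantic_to_openai_function(model):
    """Public wrapper kept only for backwards compatibility."""
    # The @deprecated machinery (and its use of `inspect`) is paid only
    # when callers still use the public name, not on the internal path.
    return _convert_pydantic_to_openai_function(model)
```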

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2025-01-03 19:29:01 +00:00
Eugene Evstafiev
a86904e735 docs: fix typo (#29012)
- **Description:** a minor typo fix
- **Issue:** NA
- **Dependencies:** NA
- **Twitter handle:** NA
2025-01-03 09:52:24 -08:00
Erick Friis
919d1c7da6 box: remove box readme for api docs build (#29014) 2025-01-03 09:50:04 -08:00
Erick Friis
d8bc556c94 packages: update box location (#29013) 2025-01-03 09:45:13 -08:00
Amaan
8d7daa59fb docs: add langchain dappier retriever integration notebooks (#28931)
Add a retriever to interact with Dappier APIs with an example notebook.

The retriever can be invoked with:

```python
from langchain_dappier import DappierRetriever

retriever = DappierRetriever(
    data_model_id="dm_01jagy9nqaeer9hxx8z1sk1jx6",
    k=5
)

retriever.invoke("latest tech news")
```

To retrieve 5 documents related to latest news in the tech sector. The
included notebook also includes deeper details about controlling filters
such as selecting a data model, number of documents to return, site
domain reference, minimum articles from the reference domain, and search
algorithm, as well as including the retriever in a chain.

The integration package can be found over here -
https://github.com/DappierAI/langchain-dappier
2025-01-03 10:21:41 -05:00
ccurme
0185010b88 community[patch]: additional check for prompt caching support (#29008)
Prompt caching explicitly excludes `gpt-4o-2024-05-13`:
https://platform.openai.com/docs/guides/prompt-caching

Resolves https://github.com/langchain-ai/langchain/issues/28997
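
Illustratively, the kind of guard involved (hypothetical helper name; the real check lives in the community callback code):

```python
def _supports_prompt_caching(model_name: str) -> bool:
    # gpt-4o-2024-05-13 is explicitly excluded from prompt caching
    return model_name.startswith(("gpt-4o", "o1")) and model_name != "gpt-4o-2024-05-13"
```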
2025-01-03 10:14:07 -05:00
zzaebok
4de52e7891 docs: fix typo in callbacks_custom_events (#29005)
This PR corrects a simple typo (dipsatch -> dispatch) in a how-to
guide.
2025-01-03 09:36:21 -05:00
Andreas Motl
e493e227c9 docs: CrateDB: Educate readers about full and semantic cache components (#29000)
Dear @ccurme and @efriis,

following up on our initial patch adding documentation about CrateDB
[^1], with version 0.1.0, just released, the [CrateDB
provider](https://python.langchain.com/docs/integrations/providers/cratedb/)
starts providing `CrateDBCache` and `CrateDBSemanticCache` classes. This
little patch updates the documentation accordingly.

Happy New Year!

With kind regards,
Andreas.

[^1]: Thanks for merging
https://github.com/langchain-ai/langchain/pull/28877 so quickly.

/cc @kneth, @simonprickett


#### Preview
- [Full
Cache](https://langchain-git-fork-crate-workbench-docs-cratedb-cache-langchain.vercel.app/docs/integrations/providers/cratedb/#full-cache)
- [Semantic
Cache](https://langchain-git-fork-crate-workbench-docs-cratedb-cache-langchain.vercel.app/docs/integrations/providers/cratedb/#semantic-cache)
2025-01-03 09:31:05 -05:00
Ahmad Elmalah
b258ff1930 Docs: Add 'Optional' to installation section to fix an issue (#28902)
Problem:
The "Optional" object is used in one example without being imported, which
raises the following error when copying the example into an IDE or Jupyter Lab:

![image](https://github.com/user-attachments/assets/3a6c48cc-937f-4774-979b-b3da64ced247)

Solution:
Importing `Optional` from the `typing_extensions` module solves the
problem.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2025-01-03 00:05:27 +00:00
Erick Friis
97dc906a18 docs: add stripe toolkit (#28122) 2025-01-02 16:03:37 -08:00
RuofanChen03
5c32307a7a docs: Add FAISS Filter with Advanced Query Operators Documentation & Demonstration (#28938)
## Description
This pull request updates the documentation for FAISS regarding filter
construction, following the changes made in commit `df5008f`.

## Issue
None. This is a follow-up PR for documentation of
[#28207](https://github.com/langchain-ai/langchain/pull/28207)

## Dependencies:
None.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-01-02 16:08:25 -05:00
Tari Yekorogha
ba9dfd9252 docs: Add FalkorDB Chat Message History and Update Package Registry (#28914)
This commit updates the documentation and package registry for the
FalkorDB Chat Message History integration.

**Changes:**

- Added a comprehensive example notebook
falkordb_chat_message_history.ipynb demonstrating how to use FalkorDB
for session-based chat message storage.

- Added a provider notebook for FalkorDB

- Updated libs/packages.yml to register FalkorDB as an integration
package, following LangChain's new guidelines for community
integrations.

**Notes:**

- This update aligns with LangChain's process for registering new
integrations via documentation updates and package registry
modifications.

- No functional or core package changes were made in this commit.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-01-02 15:46:47 -05:00
ccurme
39b35b3606 docs[patch]: fix link (#28994) 2025-01-02 15:38:31 -05:00
Ashvin
d26c102a5a community: Update azureml endpoint (#28953)
- In this PR, I have updated the AzureML endpoint to the latest path.
- **Description:** I have changed the existing `/chat/completions` to
`/models/chat/completions` in
libs/community/langchain_community/llms/azureml_endpoint.py
    - **Issue:** #25702

---------

Co-authored-by: = <=>
2025-01-02 14:47:02 -05:00
ccurme
7c28321f04 core[patch]: fix deprecation admonition in API ref (#28992)
Before:

![Screenshot 2025-01-02 at 1 49
30 PM](https://github.com/user-attachments/assets/cb30526a-fc0b-439f-96d1-962c226d9dc7)

After:

![Screenshot 2025-01-02 at 1 49
38 PM](https://github.com/user-attachments/assets/32c747ea-6391-4dec-b778-df457695d197)
2025-01-02 14:37:55 -05:00
Yanzhong Su
d57f0c46da docs: fix typo in how-to guides (#28951)
This PR is to correct a simple typo in how-to guides section.
2025-01-02 14:11:25 -05:00
Mohammad Mohtashim
0e74757b0a (Community): DuckDuckGoSearchAPIWrapper backend changed from api to auto (#28961)
- **Description:** The default value of `DuckDuckGoSearchAPIWrapper`'s
backend has been changed to avoid a UserWarning
- **Issue:** #28957
2025-01-02 14:08:22 -05:00
Saeed Hassanvand
273b2fe81e docs: Remove deprecated schema() usage in examples (#28956)
This pull request updates the documentation in
`docs/docs/how_to/custom_tools.ipynb` to reflect the recommended
approach for generating JSON schemas in Pydantic. Specifically, it
replaces instances of the deprecated `schema()` method with the newer
and more versatile `model_json_schema()`.
2025-01-02 12:22:29 -05:00
Ikko Eltociear Ashimine
a092f5a607 docs: update multi_vector.ipynb (#28954)
accross -> across
2025-01-02 12:16:52 -05:00
Mohammad Mohtashim
aa551cbcee (Core) Small Change in Docstring for method partial for BasePromptTemplate (#28969)
- **Description:** Very small change in Docstring for
`BasePromptTemplate`
- **Issue:** #28966
2025-01-02 12:16:30 -05:00
minpeter
a873e0fbfb community: update documentation and model IDs for FriendliAI provider (#28984)
### Description  

- In the example, remove `llama-2-13b-chat`,
`mixtral-8x7b-instruct-v0-1`.
- Fix the Friendli LLM streaming implementation.
- Update examples in documentation and remove duplicates.

### Issue  
N/A  

### Dependencies  
None  

### Twitter handle  
`@friendliai`
2025-01-02 12:15:59 -05:00
Hrishikesh Kalola
437ec53e29 langchain.agents: corrected documentation (#28986)
**Description:**
This PR updates the codebase to reflect the deprecation of the AgentType
feature. It includes the following changes:

Documentation Update:

Added a deprecation notice to the AgentType class comment.
Provided a reference to the official LangChain migration guide for
transitioning to LangGraph agents.
Reference Link: https://python.langchain.com/docs/how_to/migrate_agent/

**Twitter handle:** @hrrrriiiishhhhh

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-01-02 12:13:42 -05:00
Muhammad Magdy Abomouta
308825a6d5 docs: Update streaming.mdx (#28985)
Description:
Add a missing 'has' verb in the Streaming Conceptual Guide.
2025-01-02 11:54:32 -05:00
Mohammad Mohtashim
49a26c1fca (Community): Fix Keyword argument for AzureAIDocumentIntelligenceParser (#28959)
- **Description:** Fix the `body` keyword argument for
`AzureAIDocumentIntelligenceParser`
- **Issue:** #28948
2025-01-02 11:27:12 -05:00
ccurme
efc687a13b community[patch]: fix instantiation for Slack tools (#28990)
I believe the current implementation raises `PydanticUserError` following
[this](https://github.com/pydantic/pydantic/releases/tag/v2.10.1)
Pydantic release.

Resolves https://github.com/langchain-ai/langchain/issues/28989
2025-01-02 16:14:17 +00:00
Yunlin Mao
c59093d67f docs: add modelscope endpoint (#28941)
## Description

To integrate ModelScope inference API endpoints for both Embeddings,
LLMs and ChatModels, install the package
`langchain-modelscope-integration` (as discussed in issue #28928 ). This
is necessary because the package name `langchain-modelscope` was already
registered by another party.

ModelScope is a premier platform designed to connect model checkpoints
with model applications. It provides the necessary infrastructure to
share open models and promote model-centric development. For more
information, visit GitHub page:
[ModelScope](https://github.com/modelscope).
2025-01-02 10:08:41 -05:00
Sathesh Sivashanmugam
a37be6dc65 docs: Minor typo fixed, install necessary pip (#28976)
Description: Document update. A minor typo is fixed. Install lxml as
required.
    Issue: -
    Dependencies: -
    Twitter handle: @sathesh

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2025-01-02 04:21:29 +00:00
Yanzhong Su
b8aa8c86ba docs: Remove redundant word for improved sentence fluency (#28975)
Remove redundant word for improved sentence fluency

Co-authored-by: Erick Friis <erick@langchain.dev>
2025-01-02 04:14:08 +00:00
Bagatur
1c797ac68f infra: speed up unit tests (#28974)
Co-authored-by: Erick Friis <erick@langchain.dev>
2025-01-02 04:13:08 +00:00
Morgante Pell
79fc9b6b04 cli: bump gritql version (#28981)
**Description:**

bump gritql dependency, to use new binary names from
[here](https://github.com/getgrit/gritql/pull/565)

**Issue:**

fixes https://github.com/langchain-ai/langchain/issues/27822
2025-01-01 20:02:46 -08:00
Bagatur
edbe7d5f5e core,anthropic[patch]: fix with_structured_output typing (#28950) 2024-12-28 15:46:51 -05:00
Scott Hurrey
ccf69368b4 docs: Update documentation for BoxBlobLoader, extra_fields (#28942)


- **Description:** Update docs to add BoxBlobLoader and extra_fields to
all Box connectors.
  - **Issue:** N/A
  - **Dependencies:** N/A
  - **Twitter handle:** @BoxPlatform


2024-12-27 12:06:58 -08:00
dabzr
ffbe5b2106 partners: fix default value for stop_sequences in ChatGroq (#28924)
- **Description:**  
This PR addresses an issue with the `stop_sequences` field in the
`ChatGroq` class. Currently, the field is defined as:
```python
stop: Optional[Union[List[str], str]] = Field(None, alias="stop_sequences")
```  
This causes the language server (LSP) to raise an error indicating that
the `stop_sequences` parameter must be implemented. The issue occurs
because `Field(None, alias="stop_sequences")` is treated differently from
`Field(default=None, alias="stop_sequences")`.


![image](https://github.com/user-attachments/assets/bfc34cb1-c664-4c31-b856-8f18419c7350)
To resolve the issue, the field is updated to:  
```python
stop: Optional[Union[List[str], str]] = Field(default=None, alias="stop_sequences")
```  
While this issue does not affect runtime behavior, it ensures
compatibility with LSPs and improves the development experience.
- **Issue:** N/A  
- **Dependencies:** None
2024-12-26 16:43:34 -05:00
Andy Wermke
5940ed3952 community: Fix error handling bug in ChatDeepInfra (#28918)
In the async ClientResponse, `response.text` is not a string property,
but an asynchronous function returning a string.
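
In `aiohttp` terms, the fix amounts to awaiting the coroutine instead of reading an attribute; a self-contained sketch:

```python
import aiohttp


async def fetch_text(url: str) -> str:
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            # ClientResponse.text is a coroutine function, not a str
            # property, so it must be called and awaited.
            return await response.text()
```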
2024-12-26 14:45:12 -05:00
Ahmad Elmalah
d46fddface Docs: Updating 'JSON Schema' code block output (#28909)
The output seems outdated; I ran the example several times and this is the
updated output. One key-value pair was missing!

![image](https://github.com/user-attachments/assets/95231ce7-714e-43ac-b07e-57debded4735)
2024-12-26 14:37:11 -05:00
Steve Kim
0fd4a68d34 docs: Update VectorStoreTabs.js (#28916)
- Title: Fix typo to correct "embedding" to "embeddings" in PGVector
initialization example

- Problem: There is a typo in the example code for initializing the
PGVector class. The current parameter "embedding" is incorrect as the
class expects "embeddings".

- Correction: The corrected code snippet is:

```python
vector_store = PGVector(
    embeddings=embeddings,
    collection_name="my_docs",
    connection="postgresql+psycopg://...",
)
```
2024-12-26 14:31:58 -05:00
Benjamin
8db2338e96 docs: Fix typo in Build a Retrieval Augmented Generation Part 1 section (#28921)
This PR fixes a typo in [Build a Retrieval Augmented Generation (RAG)
App: Part 1](https://python.langchain.com/docs/tutorials/rag/)
2024-12-26 14:29:35 -05:00
zep.hyr
7b4d2d5d44 Community : Add cost information for missing OpenAI model (#28882)
In the previous commit, the cached model key for this model was omitted.
When using the "gpt-4o-2024-11-20" model, the token count in the
callback appeared as 0, and the cost was recorded as 0.

We add model and cost information so that the token count and cost can
be displayed for the respective model.

- The message before modification is as follows.
```
Tokens Used: 0
Prompt Tokens: 0
Prompt Tokens Cached: 0 
Completion Tokens: 0  
Reasoning Tokens: 0
Successful Requests: 0
Total Cost (USD): $0.0
```

- The message after modification is as follows.
```
Tokens Used: 3783 
Prompt Tokens: 3625
Prompt Tokens Cached: 2560
Completion Tokens: 158
Reasoning Tokens: 0
Successful Requests: 1
Total Cost (USD): $0.010642500000000001
```
2024-12-26 14:28:31 -05:00
Erick Friis
5991b45a88 docs: change margin (#28908) 2024-12-24 21:04:08 +00:00
Erick Friis
17f1ec8610 docs: remove console log (#28894) 2024-12-23 21:22:21 +00:00
Erick Friis
3726a944c0 docs: sorted by downloads [wip] (#28869) 2024-12-23 13:13:35 -08:00
Andreas Motl
6352edf77f docs: CrateDB: Register package langchain-cratedb, and add minimal "provider" documentation (#28877)
Hi Erick. Coming back from a previous attempt, we now made a separate
package for the CrateDB adapter, called `langchain-cratedb`, as advised.
Other than registering the package within `libs/packages.yml`, this
patch includes a minimal amount of documentation to accompany the advent
of this new package. Let us know about any mistakes we made, or changes
you would like to see. Thanks, Andreas.

## About
- **Description:** Register a new database adapter package,
`langchain-cratedb`, providing traditional vector store, document
loader, and chat message history features for a start.
- **Addressed to:** @efriis, @eyurtsev
- **References:** GH-27710
- **Preview:** [Providers » More »
CrateDB](https://langchain-git-fork-crate-workbench-register-la-4bf945-langchain.vercel.app/docs/integrations/providers/cratedb/)

## Status
- **PyPI:** https://pypi.org/project/langchain-cratedb/
- **GitHub:** https://github.com/crate/langchain-cratedb
- **Documentation (CrateDB):**
https://cratedb.com/docs/guide/integrate/langchain/
- **Documentation (LangChain):** _This PR._

## Backlog?
Is this applicable for this kind of patch?
> - [ ] **Add tests and docs**: If you're adding a new integration,
please include
> 1. a test for the integration, preferably unit tests that do not rely
on network access,
> 2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.

## Q&A
1. Notebooks that use the LangChain CrateDB adapter are currently at
[CrateDB LangChain
Examples](https://github.com/crate/cratedb-examples/tree/main/topic/machine-learning/llm-langchain),
and the documentation refers to them. Because they are derived from very
old blueprints coming from LangChain 0.0.x times, we guess they need a
refresh before adding them to `docs/docs/integrations`. Is it applicable
to merge this minimal package registration + documentation patch, which
already includes valid code snippets in `cratedb.mdx`, and add
corresponding notebooks on behalf of a subsequent patch later?

2. How would it work getting into the tabular list of _Integration
Packages_ enumerated on the [documentation entrypoint page about
Providers](https://python.langchain.com/docs/integrations/providers/)?

/cc Please also review, @ckurze, @wierdvanderhaar, @kneth,
@simonprickett, if you can find the time. Thanks!
2024-12-23 10:55:44 -05:00
Wang Ran (汪然)
e5c9da3eb6 core[patch]: remove redundant imports (#28861)
`Graph` has already been imported at line 62
2024-12-23 10:31:23 -05:00
Adrián Panella
8d9907088b community(azuresearch): allow to use any valid credential (#28873)
Add option to use any valid credential type.
Differentiates async cases needed by Azure Search.

This could replace the use of a static token
2024-12-23 10:05:48 -05:00
ZhangShenao
4b4d09f82b [Doc] Improvement: Fix docs of ChatMLX (#28884)
- `ChatMLX` doesn't support the system role.
- Fix https://github.com/langchain-ai/langchain/issues/28532
2024-12-23 09:51:44 -05:00
Mohammad Mohtashim
41b6a86bbe Community: LlamaCppEmbeddings embed_documents and embed_query (#28827)
- **Description:** `embed_documents` and `embed_query` were throwing
the error stated in the issue. The issue was that the `Llama` client
returns the embeddings in a nested list, which was not being accounted
for in the current implementation, and therefore the stated error was
raised.
- **Issue:** #28813
- **Issue:** #28813
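
A sketch of the kind of fix described, assuming the client returns one (possibly nested) list per input:

```python
from typing import List, Union


def _normalize_embedding(embedding: Union[List[float], List[List[float]]]) -> List[float]:
    # llama-cpp-python may return a nested list (one inner list per
    # context window); flatten it to a single vector if so.
    if embedding and isinstance(embedding[0], list):
        return [float(v) for row in embedding for v in row]
    return [float(v) for v in embedding]
```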

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-23 09:50:22 -05:00
Darien Schettler
32917a0b98 Update dataframe.py (#28871)
community: optimize DataFrame document loader

**Description:**
Simplify the `lazy_load` method in the DataFrame document loader by
combining text extraction and metadata cleanup into a single operation.
This makes the code more concise while maintaining the same
functionality.

**Issue:** N/A

**Dependencies:** None

**Twitter handle:** N/A
2024-12-22 19:16:16 -05:00
Erick Friis
cb4e6ac941 docs: frontmatter gen, colab/github links (#28852) 2024-12-21 17:38:31 +00:00
Mikhail Khludnev
2a7469e619 add langchain-localai link to Providers page localai.mdx (#28855)
follow up #28751

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-20 22:02:32 +00:00
yeounhak
f38fc89f35 community: Corrected aload func to be asynchronous from webBaseLoader (#28337)
- **Description:** The aload function, contrary to its name, is not an
asynchronous function, so it cannot work concurrently with other
asynchronous functions.

- **Issue:** #28336 

- **Test:** Done

- **Docs:**
[here](e0a95e5646/docs/docs/integrations/document_loaders/web_base.ipynb (L201))

- **Lint:** All checks passed
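
Conceptually, the corrected method is a true coroutine built on the loader's async iterator; a sketch (the merged implementation may differ):

```python
from langchain_community.document_loaders import WebBaseLoader
from langchain_core.documents import Document


class PatchedWebBaseLoader(WebBaseLoader):
    async def aload(self) -> list[Document]:
        # A real coroutine: it yields control while pages are fetched,
        # so it can run concurrently with other async work.
        return [doc async for doc in self.alazy_load()]
```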


---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-20 14:42:52 -05:00
Ahmad Elmalah
a08c76a6b2 Docs: Add langgraph to installation section for Rag tutorial (#28849)
**Issue**:
This tutorial depends on langgraph; however, langgraph is not mentioned
in the installation section for the tutorial, which raises an error when
copying and pasting the code snippets, as follows:

![image](https://github.com/user-attachments/assets/829c9118-fcf8-4f17-9abb-32e005ebae07)

**Solution**:
Add the langgraph package to the installation section, for both the pip
and Conda tabs, as this tutorial requires it.
2024-12-20 12:08:19 -05:00
Mohammad Mohtashim
8cf5f20bb5 required tool_choice added for ChatHuggingFace (#28851)
- **Description:** HuggingFace Inference Client V3 now supports
`required` as a `tool_choice` option, and support for it has been added.
- **Issue:** #28842
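
Usage sketch (model id and tool are illustrative):

```python
from langchain_core.tools import tool
from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint


@tool
def get_time(city: str) -> str:
    """Return the current time in a city."""
    return "12:00"


llm = HuggingFaceEndpoint(repo_id="meta-llama/Meta-Llama-3-8B-Instruct")
chat = ChatHuggingFace(llm=llm)

# "required" forces the model to call at least one of the bound tools
chat_with_tools = chat.bind_tools([get_time], tool_choice="required")
```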
2024-12-20 12:06:04 -05:00
Sylvain DEPARTE
fcba567a77 partners: allow to set Prefix in AIMessage (for MistralAI) (#28846)
**Description:**

Added the ability to set the `prefix` attribute to prevent this error:
```
httpx.HTTPStatusError: Error response 400 while fetching https://api.mistral.ai/v1/chat/completions: {"object":"error","message":"Expected last role User or Tool (or Assistant with prefix True) for serving but got assistant","type":"invalid_request_error","param":null,"code":null}
```

Co-authored-by: Sylvain DEPARTE <sylvain.departe@wizbii.com>
2024-12-20 11:09:45 -05:00
Jacob Mansdorfer
6d81137325 community: adding langchain-predictionguard partner package documentation (#28832)
- [x] **PR title**: "community: adding langchain-predictionguard
partner package documentation"

- [x] **PR message**:
- **Description:** This PR adds documentation for the
langchain-predictionguard package to the main langchain repo, along with
deprecating the current Prediction Guard LLMs package. The LLMs package was
previously broken, so I also updated it one final time to allow it to
continue working from this point onward. This enables users to chat
with LLMs through the Prediction Guard ecosystem.
    - **Package Links**: 
        -  [PyPI](https://pypi.org/project/langchain-predictionguard/)
- [Github
Repo](https://www.github.com/predictionguard/langchain-predictionguard)
    - **Issue:** None
    - **Dependencies:** None
- **Twitter handle:** [@predictionguard](https://x.com/predictionguard)

- [x] **Add tests and docs**: All docs have been added for the partner
package, and the current LLMs package test was updated to reflect
changes.


- [x] **Lint and test**: Linting tests are all passing.

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-12-20 10:51:44 -05:00
Leonid Ganeline
5135bf1002 docs: integrations google packages (#28840)
Issue: several Google integrations are implemented in repos under the
[github.com/googleapis](https://github.com/googleapis) organization, and
these integrations are easily overlooked even though they are essential.
Change: added a list of all packages that have Google integrations, along
with a description of this situation.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
2024-12-20 09:46:06 -05:00
Barry McCardel
5a351a133c fix tiny 'lil typo in tutorial page (#28839)
ez pz
2024-12-19 16:33:59 -08:00
ccurme
f0e858b4e3 core[patch]: release 0.3.28 (#28837) 2024-12-19 17:52:32 -05:00
ccurme
137d1e9564 langchain[patch]: fix test following update to langchain-openai (#28838) 2024-12-19 22:39:48 +00:00
Emmanuel Leroy
c8db5a19ce langchain_community.chat_models.oci_generative_ai: Fix a bug when using optional parameters in tools (#28829)
When using tools with optional parameters, the parameter `type` is no
longer available since the langchain 0.3 update (because of the pydantic
upgrade?), and there is now an `anyOf` field instead. This results in the
`type` being `None` in the chat request for the tool parameter, and the
LLM call fails with the error:

```
oci.exceptions.ServiceError: {'target_service': 'generative_ai_inference', 
'status': 400, 'code': '400', 
'opc-request-id': '...', 
'message': 'Parameter definition must have a type.', 
'operation_name': 'chat'
...
}
```

Example code that fails:

```
from langchain_community.chat_models.oci_generative_ai import ChatOCIGenAI
from langchain_core.tools import tool
from typing import Optional

llm = ChatOCIGenAI(
        model_id="cohere.command-r-plus",
        service_endpoint="https://inference.generativeai.us-chicago-1.oci.oraclecloud.com",
        compartment_id="ocid1.compartment.oc1...",
        auth_profile="your_profile",
        auth_type="API_KEY",
        model_kwargs={"temperature": 0, "max_tokens": 3000},
)

@tool
def test(example: Optional[str] = None):
    """This is the tool to use to test things

    Args:
        example: example variable, defaults to None
    """
    return "this is a test"

llm_with_tools = llm.bind_tools([test])

result = llm_with_tools.invoke("can you make a test for g")
```

This PR sets the param type to `any` in that case, and fixes the
problem.

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-19 22:17:34 +00:00
Bagatur
c3ccd93c12 patch openai json mode test (#28831) 2024-12-19 21:43:32 +00:00
Bagatur
ce6748dbfe xfail openai image token count test (#28828) 2024-12-19 21:23:30 +00:00
Anusha Karkhanis
26bdf40072 Langchain_Community: SQL LanguageParser (#28430)
## Description
(This PR has contributions from @khushiDesai, @ashvini8, and
@ssumaiyaahmed).

This PR addresses **Issue #11229**, which calls for SQL
support in document parsing. The support is integrated into the generic
tree-sitter parsing library, allowing LangChain users to easily load
SQL codebases as smaller, manageable "documents."

This pull request adds a new ```SQLSegmenter``` class, which provides
the SQL integration.

## Issue
**Issue #11229**: Add support for a variety of languages to
LanguageParser

## Testing
We created a file ```test_sql.py``` with several tests to ensure the
```SQLSegmenter``` is functional. Below are the tests we added:

- ```def test_is_valid```: Checks SQL validity.
- ```def test_extract_functions_classes```: Extracts individual SQL
statements.
- ```def test_simplify_code```: Simplifies SQL code with comments.
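
A hedged usage sketch via the generic `LanguageParser` (the language key is assumed to be `"sql"`):

```python
from langchain_community.document_loaders.generic import GenericLoader
from langchain_community.document_loaders.parsers import LanguageParser

loader = GenericLoader.from_filesystem(
    "./db",  # illustrative directory of SQL files
    glob="**/*.sql",
    parser=LanguageParser(language="sql"),
)
docs = loader.load()  # roughly one Document per top-level SQL statement
```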

---------

Co-authored-by: Syeda Sumaiya Ahmed <114104419+ssumaiyaahmed@users.noreply.github.com>
Co-authored-by: ashvini hunagund <97271381+ashvini8@users.noreply.github.com>
Co-authored-by: Khushi Desai <khushi.desai@advantawitty.com>
Co-authored-by: Khushi Desai <59741309+khushiDesai@users.noreply.github.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
2024-12-19 20:30:57 +00:00
Bagatur
a7f2148061 openai[patch]: Release 0.2.14 (#28826) 2024-12-19 11:56:44 -08:00
Bagatur
1378ddfa5f openai[patch]: type reasoning_effort (#28825) 2024-12-19 19:36:49 +00:00
Erick Friis
6a37899b39 core: dont mutate tool_kwargs during tool run (#28824)
fixes https://github.com/langchain-ai/langchain/issues/24621
2024-12-19 18:11:56 +00:00
Qun
033ac41760 fix crash when using create_xml_agent with parameterless function as … (#26002)
When using `create_xml_agent` or `create_json_chat_agent` to create an
agent, if the function corresponding to the tool is a parameterless
function, the `XMLAgentOutputParser` or `JSONAgentOutputParser` will
parse the tool input into an empty string, and `BaseTool` will parse it into
a positional argument.
The program then crashes because we invoke a parameterless
function with a positional argument. Specifically, the code below will raise
StopIteration in
[_parse_input](https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/tools/base.py#L419)
```python
from langchain import hub
from langchain.agents import AgentExecutor, create_json_chat_agent, create_xml_agent
from langchain_openai import ChatOpenAI

prompt = hub.pull("hwchase17/react-chat-json")

llm = ChatOpenAI()

# agent = create_xml_agent(llm, tools, prompt)
agent = create_json_chat_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

agent_executor.invoke(......)
```

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-19 13:00:46 -05:00
Luke
f69695069d text_splitters: Add HTMLSemanticPreservingSplitter (#25911)
**Description:** 

Current HTML splitters rely on secondary use of the
`RecursiveCharacterTextSplitter` to further chunk the document into
manageable chunks. The issue with this is that it fails to maintain
important structures such as tables and lists within HTML.

This implementation of an HTML splitter allows the user to define a
maximum chunk size, HTML elements to preserve in full, an option to
preserve `<a>` href links in the output, and custom handlers.

The core splitting begins with headers, similar to `HTMLHeaderTextSplitter`.
If these sections exceed `max_chunk_size`, further
recursive splitting is triggered. During this splitting, elements listed
to be preserved are excluded from the splitting process. This can cause
chunks to be slightly larger than the max size, depending on the preserved
length. However, all contextual relevance of the preserved item remains
intact.

**Custom Handlers**: Sometimes, companies such as Atlassian have custom
HTML elements that are not parsed by default with `BeautifulSoup`.
Custom handlers allow a user to provide a function to be run whenever a
specific HTML tag is encountered. This lets the user preserve and
gather information within custom HTML tags that `bs4` would potentially
miss during extraction.

**Dependencies:** User will need to install `bs4` in their project to
utilise this class

I have also added a `how_to` guide and unit tests, which require `bs4` to
run; otherwise they are skipped.

Flowchart of process:


![HTMLSemanticPreservingSplitter](https://github.com/user-attachments/assets/20873c36-22ed-4c80-884b-d3c6f433f5a7)
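
A usage sketch under the parameters described above (argument names are assumed from the description and may differ slightly in the merged API):

```python
from langchain_text_splitters import HTMLSemanticPreservingSplitter

html_string = "<h1>Pricing</h1><p>Intro text.</p><table><tr><td>A</td></tr></table>"

splitter = HTMLSemanticPreservingSplitter(
    headers_to_split_on=[("h1", "Header 1"), ("h2", "Header 2")],
    max_chunk_size=500,
    elements_to_preserve=["table", "ul", "ol"],  # kept intact, never split
    preserve_links=True,  # keep <a href=...> in the output
)
chunks = splitter.split_text(html_string)
```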

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-19 12:09:22 -05:00
Tommaso De Lorenzo
24bfa062bf langchain: add support for Google Anthropic Vertex AI model garden provider in init_chat_model (#28177)
A simple modification to add support for Anthropic models deployed in the
Google Vertex AI Model Garden to `init_chat_model`, by importing
`ChatAnthropicVertex`.

- [x] **Lint and test**
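
A hedged sketch (the provider key is assumed to be `google_anthropic_vertex`, `langchain-google-vertexai` must be installed, and project/location/model are illustrative):

```python
from langchain.chat_models import init_chat_model

llm = init_chat_model(
    "claude-3-5-sonnet-v2@20241022",
    model_provider="google_anthropic_vertex",
    project="my-gcp-project",
    location="us-east5",
)
```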
2024-12-19 12:06:21 -05:00
Erick Friis
ff7b01af88 anthropic: less pydantic for client (#28823) 2024-12-19 08:00:02 -08:00
Erick Friis
f1d783748a anthropic: sdk bump (#28820) 2024-12-19 15:39:21 +00:00
Erick Friis
907f36a6e9 fireworks: fix lint (#28821) 2024-12-19 15:36:36 +00:00
Erick Friis
6526db4871 community: bump core (#28819) 2024-12-19 06:41:53 -08:00
Vignesh A
4c9acdfbf1 Community : Add OpenAI prompt caching and reasoning tokens tracking (#27135)
Added token tracking for OpenAI's prompt caching and reasoning tokens.
Costs updated from https://openai.com/api/pricing/

usage example
```python
from langchain_community.callbacks import get_openai_callback
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model_name="o1-mini",temperature=1)

with get_openai_callback() as cb:
    response = llm.invoke("hi "*1500)
    print(cb)
```
Output
```
Tokens Used: 1720
	Prompt Tokens: 1508
		Prompt Tokens Cached: 1408
	Completion Tokens: 212
		Reasoning Tokens: 192
Successful Requests: 1
Total Cost (USD): $0.0049559999999999995
```

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-19 09:31:13 -05:00
ScriptShi
97f1e1d39f community: tablestore vector store check the dimension of the embedding when writing it to store. (#28812)
Added some restrictions to a vector store I previously released in
community.
2024-12-19 09:30:43 -05:00
fzowl
024f020f04 docs: Adding VoyageAI to 'integrations/text_embedding/' dropdown (#28817)
**Description:** 
Adding VoyageAI's text_embedding to 'integrations/text_embedding/'


2024-12-19 09:29:30 -05:00
Wang Ran (汪然)
f48755d35b core: typo Utilities for tests. -> Utilities for pydantic. (#28814)
**Description:** typo
2024-12-19 09:26:17 -05:00
Wang Ran (汪然)
51b8ddaf10 core: typo in runnable (#28815)
**Description:** Typo
2024-12-19 09:25:57 -05:00
Leonid Ganeline
c823cc532d docs: integration providers index update (#28808)
Issue: integrations related to a provider can be spread across several
packages and classes, which makes it very hard to find a provider using
only the ToCs.
Fix: we have a very useful and helpful tool for searching by provider name:
the `Search` field. So, I've added recommendations for using this
field. It seems obvious, but it is not.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-19 02:28:37 +00:00
wangda
24c4af62b0 docs: Correcting spelling mistakes (#28780) 2024-12-18 21:11:50 -05:00
Erick Friis
3b036a1cf2 partners/fireworks: release 0.2.6 (#28805) 2024-12-18 22:48:35 +00:00
Erick Friis
4eb8bf7793 partners/anthropic: release 0.3.1 (#28801) 2024-12-18 22:45:38 +00:00
Lu Peng
50afa7c4e7 community: add new parameter default_headers (#28700)
- [x] **PR title**: "package: description"
- "community: 1. add new parameter `default_headers` for oci model
deployments and oci chat model deployments. 2. updated k parameter in
OCIModelDeploymentLLM class."


- [x] **PR message**:
- **Description:** 1. add new parameters `default_headers` for oci model
deployments and oci chat model deployments. 2. updated k parameter in
OCIModelDeploymentLLM class.


- [x] **Add tests and docs**:
  1. unit tests
  2. notebook

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-18 22:33:23 +00:00
Christophe Bornet
1e88adaca7 all: Add pre-commit hook (#26993)
This calls `make format` on projects that have modified files.
So `poetry install --with lint` must have been done for those projects.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-18 22:22:58 +00:00
Erick Friis
cc616de509 partners/xai: release 0.1.1 (#28806) 2024-12-18 22:15:24 +00:00
Erick Friis
ba8c1b0d8c partners/groq: release 0.2.2 (#28804) 2024-12-18 22:12:02 +00:00
Erick Friis
a119cae5bd partners/mistralai: release 0.2.4 (#28803) 2024-12-18 22:11:48 +00:00
Erick Friis
514d78516b partners/ollama: release 0.2.2 (#28802) 2024-12-18 22:11:08 +00:00
Bagatur
68940dd0d6 openai[patch]: Release 0.2.13 (#28800) 2024-12-18 22:08:47 +00:00
Erick Friis
4dc28b43ac community: release 0.3.13 (#28798) 2024-12-18 21:58:46 +00:00
Bagatur
557f63c2e6 core[patch]: Release 0.3.27 (#28799) 2024-12-18 21:58:03 +00:00
Bagatur
4a531437bb core[patch], openai[patch]: Handle OpenAI developer msg (#28794)
- Convert developer openai messages to SystemMessage
- store additional_kwargs={"__openai_role__": "developer"} so that the
correct role can be reconstructed if needed
- update ChatOpenAI to read in openai_role

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-18 21:54:07 +00:00
bjoaquinc
43b0736a51 docs: added a link to the taxonomy of labels in the contributing guide for easy access (#28719)
- **Description:** Added a link to make it easier to organize github
issues for langchain.
- **Issue:** After reading that there was a taxonomy of labels I had to
figure out how to find it.
    - **Dependencies:** None
    - **Twitter handle:** None



---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-18 16:08:18 -05:00
Erick Friis
079f1d93ab langchain: release 0.3.13 (#28797) 2024-12-18 12:32:00 -08:00
Yuxin Chen
3256b5d6ae text-splitters: fix state persistence issue in ExperimentalMarkdownSyntaxTextSplitter (#28373)
- **Description:** 
This PR resolves an issue with the
`ExperimentalMarkdownSyntaxTextSplitter` class, which retains the
internal state across multiple calls to the `split_text` method. This
behaviour caused an unintended accumulation of chunks in `self`
variables, leading to incorrect outputs when processing multiple
Markdown files sequentially.

- Modified `libs/text-splitters/langchain_text_splitters/markdown.py` to
reset the relevant internal attributes at the start of each `split_text`
invocation. This ensures each call processes the input independently.
- Added unit tests in
`libs/text-splitters/tests/unit_tests/test_text_splitters.py` to verify
the fix and ensure the state does not persist across calls.

- **Issue:**  
Fixes [#26440](https://github.com/langchain-ai/langchain/issues/26440).

- **Dependencies:**
No additional dependencies are introduced with this change.


- [x] Unit tests were added to verify the changes.
- [x] Updated documentation where necessary.  
- [x] Ran `make format`, `make lint`, and `make test` to ensure
compliance with project standards.
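
A quick user-level illustration of the fixed behaviour (the metadata key follows the splitter's default header mapping):

```python
from langchain_text_splitters import ExperimentalMarkdownSyntaxTextSplitter

splitter = ExperimentalMarkdownSyntaxTextSplitter()
first = splitter.split_text("# Doc 1\ntext one")
second = splitter.split_text("# Doc 2\ntext two")

# Before the fix, `second` also contained Doc 1's chunks because state
# accumulated on the instance across calls; now each call is independent.
assert all(c.metadata.get("Header 1") != "Doc 1" for c in second)
```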

---------

Co-authored-by: Angel Chen <angelchen396@gmail.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-18 20:27:59 +00:00
Mohammad Mohtashim
7c8f977695 Community: Fix with_structured_output for ChatSambaNovaCloud (#28796)
- **Description:** `kwargs` was being checked against `None`, which
prevented the rest of the code in `with_structured_output` from
executing. The check has been fixed in this PR.
- **Issue:** #28776
2024-12-18 14:35:06 -05:00
V.Prasanna kumar
684b146b18 Fixed adding float values into DynamoDB (#26562)
- **Description:** Pushing float values into DynamoDB creates an error;
solved by converting them to `str` type.
- **Issue:** Float values are not getting pushed.
- **Twitter handle:** VpkPrasanna

Have added a utility function for the `str` conversion; let me know where
to place it, happy to do a commit.

This PR is from a discussion of #26543

@hwchase17 @baskaryan @efriis
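
A sketch of the kind of conversion utility described (converting floats to `str`, per the PR; note that boto3 itself expects `Decimal` for DynamoDB numbers):

```python
from typing import Any


def stringify_floats(value: Any) -> Any:
    """Recursively convert float values to str so DynamoDB accepts them."""
    if isinstance(value, float):
        return str(value)
    if isinstance(value, dict):
        return {k: stringify_floats(v) for k, v in value.items()}
    if isinstance(value, list):
        return [stringify_floats(v) for v in value]
    return value
```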

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-18 13:45:00 -05:00
William FH
50ea1c3ea3 [Core] respect tracing project name cvar (#28792) 2024-12-18 10:02:02 -08:00
Martin Triska
e6b41d081d community: DocumentLoaderAsParser wrapper (#27749)
## Description

This pull request introduces the `DocumentLoaderAsParser` class, which
acts as an adapter to transform document loaders into parsers within the
LangChain framework. The class enables document loaders that accept a
`file_path` parameter to be utilized as blob parsers. This is
particularly useful for integrating various document loading
capabilities seamlessly into the LangChain ecosystem.

When merged together with PR
https://github.com/langchain-ai/langchain/pull/27716, it opens options
for `SharePointLoader` / `OneDriveLoader` to process any file type that
has a document loader.

### Features

- **Flexible Parsing**: The `DocumentLoaderAsParser` class can adapt any
document loader that meets the criteria of accepting a `file_path`
argument, allowing for lazy parsing of documents.
- **Compatibility**: The class has been designed to work with various
document loaders, making it versatile for different use cases.

### Usage Example

To use the `DocumentLoaderAsParser`, you would initialize it with a
suitable document loader class and any required parameters. Here’s an
example of how to do this with the `UnstructuredExcelLoader`:

```python
from langchain_community.document_loaders.blob_loaders import Blob
from langchain_community.document_loaders.parsers.documentloader_adapter import DocumentLoaderAsParser
from langchain_community.document_loaders.excel import UnstructuredExcelLoader

# Initialize the parser adapter with UnstructuredExcelLoader
xlsx_parser = DocumentLoaderAsParser(UnstructuredExcelLoader, mode="paged")

# Use parser, for ex. pass it to MimeTypeBasedParser
MimeTypeBasedParser(
    handlers={
        "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet": xlsx_parser
    }
)
```


- **Dependencies:** None
- **Twitter handle:** @martintriska1


---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-18 12:47:08 -05:00
Erick Friis
9b024d00c9 text-splitters: release 0.3.4 (#28795) 2024-12-18 09:44:36 -08:00
Erick Friis
5cf965004c core: release 0.3.26 (#28793) 2024-12-18 17:28:42 +00:00
Mohammad Mohtashim
d49df4871d [Community]: Image Extraction Fixed for PDFPlumberParser (#28491)
- **Description:** One-bit images were raising an error, which has been
fixed in this PR for `PDFPlumberParser`
 - **Issue:** #28480

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-18 11:45:48 -05:00
binhnd102
f723a8456e Fixes: community: fix LanceDB return no metadata (#27024)
- [x] Fix when LanceDB returns a table without a metadata column
- **Description:** Check the table schema; if it has no metadata column,
initialize the `Document` with the metadata argument set to an empty dict
    - **Issue:** https://github.com/langchain-ai/langchain/issues/27005

- [x] **Add tests and docs**
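
A sketch of the guard (helper name is hypothetical; schema access is assumed to go through the table's Arrow schema):

```python
from langchain_core.documents import Document


def row_to_document(row: dict, schema_names: list) -> Document:
    # If the table has no metadata column, fall back to an empty dict
    metadata = row.get("metadata") if "metadata" in schema_names else None
    return Document(page_content=row["text"], metadata=metadata or {})
```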

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-12-18 15:21:28 +00:00
ANSARI MD AAQIB AHMED
91d28ef453 Add langchain-yt-dlp Document Loader Documentation (#28775)
## Overview
This PR adds documentation for the `langchain-yt-dlp` package, a YouTube
document loader that uses `yt-dlp` for YouTube video metadata
extraction.

## Changes
- Added documentation notebook for YoutubeLoader
- Updated packages.yml to include langchain-yt-dlp

## Motivation
The existing LangChain YoutubeLoader was unable to fetch YouTube
metadata due to changes in YouTube's structure. This package resolves
those issues by leveraging the `yt-dlp` library.

## Features
- Reliable YouTube metadata extraction

## Related
- Package Repository: https://github.com/aqib0770/langchain-yt-dlp
- PyPI Package: https://pypi.org/project/langchain-yt-dlp/

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-18 10:16:50 -05:00
GITHUBear
33b1fb95b8 partners: langchain-oceanbase Integration (#28782)
Hi, langchain team! I'm a maintainer of
[OceanBase](https://github.com/oceanbase/oceanbase).

Following the integration guidance, I created a Python lib named
[langchain-oceanbase](https://github.com/oceanbase/langchain-oceanbase)
to integrate `OceanBase Vector Store` with LangChain.

So I'd like to add the required docs. I would appreciate your feedback.
Thank you!

---------

Signed-off-by: shanhaikang.shk <shanhaikang.shk@oceanbase.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-18 14:51:49 +00:00
Rave Harpaz
986b752fc8 Add OCI Generative AI new model and structured output support (#28754)
- [X] **PR title**: 
 community: Add new model and structured output support


- [X] **PR message**: 
- **Description:** add support for meta llama 3.2 image handling, and
JSON mode for structured output
    - **Issue:** NA
    - **Dependencies:** NA
    - **Twitter handle:** NA


- [x] **Add tests and docs**: 
  1. we have updated our unit tests,
  2. no changes required for documentation.


- [x] **Lint and test**:
`make format`, `make lint` and `make test` all run successfully

---------

Co-authored-by: Arthur Cheng <arthur.cheng@oracle.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
2024-12-18 09:50:25 -05:00
David Pryce-Compson
ef24220d3f community: adding haiku 3.5 and opus callbacks (#28783)
**Description:**
Adding new AWS Bedrock models and their respective costs to match
https://aws.amazon.com/bedrock/pricing/ for the Bedrock callback

**Issue:** 
Missing models for those that wish to try them out

**Dependencies:**
Nothing added

**Twitter handle:**
@David_Pryce and / or @JamfSoftware
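
As a hedged sketch of how the new costs surface (the `ChatBedrock` setup is illustrative; `get_bedrock_anthropic_callback` is the community callback this PR's cost table feeds):

```python
from langchain_aws import ChatBedrock
from langchain_community.callbacks.manager import get_bedrock_anthropic_callback

llm = ChatBedrock(model_id="anthropic.claude-3-5-haiku-20241022-v1:0")

with get_bedrock_anthropic_callback() as cb:
    llm.invoke("Hello")
    # The callback aggregates token counts and, with this PR, costs for
    # the newly added Haiku 3.5 / Opus models.
    print(cb.total_tokens, cb.total_cost)
```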

2024-12-18 09:45:10 -05:00
Yudai Kotani
05a44797ee langchain_community: Add default None values to DocumentAttributeValue class properties (#28785)
**Description**: 
This PR addresses an issue where the DocumentAttributeValue class
properties did not have default values of None. By explicitly setting
the Optional attributes (DateValue, LongValue, StringListValue, and
StringValue) to default to None, this change ensures the class functions
as expected when no value is provided for these attributes.

**Changes Made**:
- Added default `None` values to the following properties of the
`DocumentAttributeValue` class: `DateValue`, `LongValue`,
`StringListValue`, `StringValue`.
- Removed the invalid argument `extra="allow"` from the `BaseModel`
inheritance.

**Dependencies**: None.

**Twitter handle (optional)**: @__korikori1021

**Checklist**
- [x] Verified that KendraRetriever works as expected after the changes.

Co-authored-by: y1u0d2a1i <y.kotani@raksul.com>
2024-12-18 09:43:04 -05:00
Satyam Kumar
90f7713399 refactor: improve docstring parsing logic for Google style (#28730)
Description:  
Improved the `_parse_google_docstring` function in `langchain/core` to
support parsing multi-paragraph descriptions before the `Args:` section
while maintaining compliance with Google-style docstring guidelines.
This change ensures better handling of docstrings with detailed function
descriptions.
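
For instance, a docstring like the following (an illustrative example, not taken from the PR) now parses cleanly:

```python
def search_reports(query: str, limit: int = 10) -> list:
    """Search the report archive.

    This second descriptive paragraph used to trip up the parser; with
    this change it is retained as part of the parsed description.

    Args:
        query: Full-text search string.
        limit: Maximum number of reports to return.
    """
    ...
```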

Issue:  
Fixes #28628

Dependencies:  
None.

Twitter handle:  
@isatyamks

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-18 09:35:19 -05:00
Zapiron
85c3bc1bbd docs: Grammar and Typo update for Runnable Conceptual guide (#28777) 2024-12-17 21:18:58 -05:00
Dong Shin
0b1359801e community: add trust_env at web_base_loader (#28514)
- **Description:** I am working to address a similar issue to the one
mentioned in https://github.com/langchain-ai/langchain/pull/19499.
Specifically, there is a problem with the `WebBaseLoader` used in
open-webui, where it fails to load the proxy configuration. This PR aims
to resolve that issue.
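
A minimal sketch of the intended usage (assuming the new argument is exposed as `trust_env` on the loader):

```python
from langchain_community.document_loaders import WebBaseLoader

# With trust_env=True, the underlying HTTP session honors proxy settings
# from environment variables such as HTTP_PROXY / HTTPS_PROXY.
loader = WebBaseLoader("https://example.com", trust_env=True)
docs = loader.load()
```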





---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-17 21:18:16 -05:00
Erick Friis
be738aa7de packages: enable vertex api build (#28773) 2024-12-17 11:31:14 -08:00
Bagatur
ac278cbe8b core[patch]: export InjectedToolCallId (#28772) 2024-12-17 19:29:20 +00:00
ccurme
5656702b8d docs: fix readme link (#28770)
SQL Llama2 Template -> LangChain Extract
2024-12-17 18:51:01 +00:00
Bagatur
e4d3ccf62f json mode standard test (#25497)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-17 18:47:34 +00:00
ccurme
24bf24270d docs: reference ExperimentalMarkdownTextSplitter (#28768)
Continuing https://github.com/langchain-ai/langchain/pull/27832

---------

Co-authored-by: promptless[bot] <179508745+promptless[bot]@users.noreply.github.com>
Co-authored-by: Frances Liu <francestfls@gmail.com>
2024-12-17 12:56:38 -05:00
Frank Dai
e81433497b community: support Confluence cookies (#28760)
**Description**: Some Confluence instances don't support personal access
tokens, in which case cookies are a convenient way to authenticate. This PR
adds support for Confluence cookies.
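
A hedged sketch of the new option (cookie name and value are placeholders; `cookies` is the argument added by this PR):

```python
from langchain_community.document_loaders import ConfluenceLoader

loader = ConfluenceLoader(
    url="https://confluence.example.com",
    cookies={"JSESSIONID": "placeholder-session-id"},
    space_key="ENG",
)
docs = loader.load()
```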

**Twitter handle**: soulmachine
2024-12-17 12:16:36 -05:00
ccurme
b745281eec anthropic[patch]: increase timeouts for integration tests (#28767)
Some tests consistently ran into the 10s limit in CI.
2024-12-17 15:47:17 +00:00
Leonid Ganeline
6479fd8c1c docs: integrations cache table of content (#28755)
Issue: the current
[Cache](https://python.langchain.com/docs/integrations/llm_caching/)
page has inconsistent headings.
Mixed terms are used, with mixed casing and mixed use of `selecting`. Excessively
long titles make the right-side ToC hard to read and unnecessarily long.

Changes: a consistent and more readable ToC
2024-12-17 10:12:49 -05:00
Vinit Kudva
a00258ec12 chroma: fix persistence if client_settings is passed in (#25199)
---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-17 10:03:02 -05:00
Omri Eliyahu Levy
f8883a1321 partners/voyageai: enable setting output dimension (#28740)
Voyage has introduced voyage-3-large and voyage-code-3, which feature
different output dimensions by leveraging a technique called "Matryoshka
Embeddings" (see blog -
https://blog.voyageai.com/2024/12/04/voyage-code-3/).
These two models are available in various sizes: [256, 512, 1024, 2048]
(https://docs.voyageai.com/docs/embeddings#model-choices).

This PR adds the option to set the required output dimension.
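
A minimal sketch (assuming the new knob is exposed as `output_dimension` on `VoyageAIEmbeddings`):

```python
from langchain_voyageai import VoyageAIEmbeddings

embeddings = VoyageAIEmbeddings(model="voyage-3-large", output_dimension=512)
vector = embeddings.embed_query("Matryoshka embeddings")
print(len(vector))  # 512
```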
2024-12-17 10:02:00 -05:00
Swastik-Swarup-Dash
0afc284920 fix:Agent reactgpt4 or gpt 4o as input model #28747 (#28764)
Co-authored-by: ccurme <chester.curme@gmail.com>
2024-12-17 14:35:09 +00:00
Zapiron
0c11aee486 docs: small grammar changes for conceptual guide page (#28765)
Co-authored-by: ccurme <chester.curme@gmail.com>
2024-12-17 14:27:55 +00:00
German Martin
3a1d05394d community: Apache AGE wrapper. Ensure Node Uniqueness by ID. (#28759)
**Description:**

The Apache AGE graph integration incorrectly handled node merging,
allowing duplicate nodes with different IDs but the same type and other
properties. Unlike
[Neo4j](cdf6202156/libs/community/langchain_community/graphs/neo4j_graph.py (L47)),
[Memgraph](cdf6202156/libs/community/langchain_community/graphs/memgraph_graph.py (L50)),
[Kuzu](cdf6202156/libs/community/langchain_community/graphs/kuzu_graph.py (L253)),
and
[Gremlin](cdf6202156/libs/community/langchain_community/graphs/gremlin_graph.py (L165)),
it did not use the node ID as the primary identifier for merging.

This inconsistency caused data integrity issues and unexpected behavior
when users expected updates to specific nodes by ID.

**Solution:**
This PR modifies the `node_insert_query` to `MERGE` nodes based on label
and ID *only* and updates properties with `SET`, aligning the behavior
with other graph database integrations. The `_format_properties` method
was also modified to handle id overrides.

**Impact:**

This fix ensures data integrity by preventing duplicate nodes, and
provides a consistent behavior across graph database integrations.
2024-12-17 09:21:59 -05:00
gsa9989
cdf6202156 cosmosdbnosql: Added Cosmos DB NoSQL Semantic Cache Integration with tests and jupyter notebook (#24424)
* Added Cosmos DB NoSQL Semantic Cache Integration with tests and
jupyter notebook

---------

Co-authored-by: Aayush Kataria <aayushkataria3011@gmail.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-16 21:57:05 -05:00
Brian Burgin
27a9056725 community: Fix ChatLiteLLMRouter runtime issues (#28163)
**Description:** Fix ChatLiteLLMRouter ctor validation and model_name
parameter
**Issue:** #19356, #27455, #28077
**Twitter handle:** @bburgin_0
2024-12-16 18:17:39 -05:00
Eugene Yurtsev
234d49653a docs: Create custom embeddings (#20398)
Guidelines on how to create custom embeddings
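
At its core, a custom embedding model only needs the two methods of the `langchain_core` `Embeddings` interface; a minimal sketch (the hash-based vectors are a stand-in for real model calls):

```python
from langchain_core.embeddings import Embeddings


class ToyEmbeddings(Embeddings):
    """Deterministic toy embeddings; swap the body for real model calls."""

    def embed_documents(self, texts: list[str]) -> list[list[float]]:
        return [self.embed_query(text) for text in texts]

    def embed_query(self, text: str) -> list[float]:
        # Derive a fake 4-dimensional vector from the text hash.
        h = hash(text)
        return [((h >> (8 * i)) & 0xFF) / 255.0 for i in range(4)]
```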

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-16 17:57:51 -05:00
Mikhail Khludnev
00deacc67e docs, external: introduce langchain-localai (#28751)
Referring to https://github.com/mkhludnev/langchain-localai

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-16 22:22:37 +00:00
Erick Friis
d4b5e7ef22 community: recommend RedisVectorStore over Redis (#28749) 2024-12-16 21:08:30 +00:00
Hiros
8f5e72de05 community: Correctly handle multi-element rich text (#25762)
**Description:**

- Add _concatenate_rich_text method to combine all elements in rich text
arrays
- Update load_page method to use _concatenate_rich_text for rich text
properties
- Ensure all text content is captured, including inline code and
formatted text
- Add unit tests to verify correct handling of multi-element rich text
This fix prevents truncation of content after backticks or other
formatting elements.

 **Issue:**

Using the Notion DB loader, the text for `richtext` and `title` was
truncated after the first element, as the loader only read the first
element.

**Dependencies:** None.

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-16 20:20:27 +00:00
Antonio Lanza
b2102b8cc4 text-splitters: Inconsistent results with NLTKTextSplitter's add_start_index=True (#27782)
This PR closes #27781

# Problem
The current implementation of `NLTKTextSplitter` uses
`sent_tokenize`. However, `sent_tokenize` discards the characters between
two tokenized sentences, which throws errors when
we are using `add_start_index=True`, as described in issue #27781. In
particular:
```python
from nltk.tokenize import sent_tokenize

output1 = sent_tokenize("Innovation drives our success. Collaboration fosters creative solutions. Efficiency enhances data management.", language="english")
print(output1)
output2 = sent_tokenize("Innovation drives our success.        Collaboration fosters creative solutions. Efficiency enhances data management.", language="english")
print(output2)
>>> ['Innovation drives our success.', 'Collaboration fosters creative solutions.', 'Efficiency enhances data management.']
>>> ['Innovation drives our success.', 'Collaboration fosters creative solutions.', 'Efficiency enhances data management.']
```

# Solution
With this new `use_span_tokenize` parameter, we can use NLTK to create
sentences (with `span_tokenize`) while also keeping the extra characters,
so that we can still map the chunks back to the original text.
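
A minimal usage sketch (assuming the parameter lands on `NLTKTextSplitter` as `use_span_tokenize`; chunk sizes are set small just to force a split):

```python
from langchain_text_splitters import NLTKTextSplitter

splitter = NLTKTextSplitter(
    chunk_size=50, chunk_overlap=0, use_span_tokenize=True, add_start_index=True
)
docs = splitter.create_documents(
    ["Innovation drives our success.        Collaboration fosters creative solutions."]
)
for doc in docs:
    # start_index now reflects true offsets, extra whitespace included
    print(doc.metadata["start_index"], repr(doc.page_content))
```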

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Erick Friis <erickfriis@gmail.com>
2024-12-16 19:53:15 +00:00
Tari Yekorogha
d262d41cc0 community: added FalkorDB vector store support i.e implementation, test, docs an… (#26245)
**Description:** Added support for FalkorDB Vector Store, including its
implementation, unit tests, documentation, and an example notebook. The
FalkorDB integration allows users to efficiently manage and query
embeddings in a vector database, with relevance scoring and maximal
marginal relevance search. The following components were implemented:

- Core implementation for FalkorDBVector store.
- Unit tests ensuring proper functionality and edge case coverage.
- Example notebook demonstrating an end-to-end setup, search, and
retrieval using FalkorDB.

**Twitter handle:** @tariyekorogha

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-16 19:37:55 +00:00
Aaron Pham
12fced13f4 chore(community): update to OpenLLM 0.6 (#24609)
Update to OpenLLM 0.6, where we decided to make use of OpenLLM's
OpenAI-compatible endpoint. Thus, the OpenLLM integration now becomes a thin
wrapper around the OpenAI wrapper.

Signed-off-by: Aaron Pham <contact@aarnphm.xyz>

---------

Signed-off-by: Aaron Pham <contact@aarnphm.xyz>
Co-authored-by: ccurme <chester.curme@gmail.com>
2024-12-16 14:30:07 -05:00
Lvlvko
5c17a4ace9 community: support Hunyuan Embedding (#23160)
## Description

- I refactored `ChatHunyuan` using the tencentcloud sdk because I found the
original one didn't work in my application
- I added `HunyuanEmbeddings` using the tencentcloud sdk
- Both of them extend the base classes of langchain. I have fully
tested them in my application

## Dependencies
- tencentcloud-sdk-python

---------

Co-authored-by: centonhuang <centonhuang@tencent.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-16 19:27:19 +00:00
Harrison Chase
de7996c2ca core: add kwargs support to VectorStore (#25934)
has been missing the passthrough until now

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-16 18:57:57 +00:00
Henry Tu
87c50f99e5 docs: cerebras Update Llama 3.1 70B to Llama 3.3 70B (#28746)
This PR updates the docs for the Cerebras integration to use Llama 3.3
70b instead of Llama 3.1 70b.

cc: @efriis
2024-12-16 18:43:35 +00:00
Lorenzo
b79a1156ed community: correct return type of get_files_from_directory in github tool (#27885)
### About:
- **Description:** the `_get_files_from_directory` method returns a
string, but it's used in other methods that expect a `List[str]`
- **Issue:** None
- **Dependencies:** None

This pull request introduces a new method `_list_files` with the old logic of
`_get_files_from_directory`, but it returns a `List[str]`.
The behavior of `_get_files_from_directory` is not changed.
2024-12-16 10:30:33 -08:00
Sheepsta300
580a8d53f9 community: Add configurable VisualFeatures to the AzureAiServicesImageAnalysisTool (#27444)
- [ ] **PR title**: community: Add configurable `VisualFeatures` to the
`AzureAiServicesImageAnalysisTool`


- [ ] **PR message**:  
- **Description:** The `AzureAiServicesImageAnalysisTool` is a good
service and utilises the Azure AI Vision package under the hood.
However, since the creation of this tool, new `VisualFeatures` have been
added to allow the user to request other image-specific information to
be returned. Currently, the tool offers neither configuration of which
features should be returned nor any of the newer feature types. The
aim of this PR is to address this and expose more of the Azure Service
in this integration.
- **Dependencies:** no new dependencies in the main class file;
azure.ai.vision.imageanalysis added to the extra test dependencies file.


- [x] **Add tests and docs**: Although no tests exist for already
implemented Azure Service tools, I've created 3 unit tests for this
class covering initialisation and credentials, local file analysis,
and the new configurable features option.

- [x] **Lint and test**: All linting has passed.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-16 18:30:04 +00:00
Erick Friis
1c120e9615 core: xml output parser tags docstring (#28745) 2024-12-16 18:25:16 +00:00
Ana
ebab2ea81b Fix Azure National Cloud authentication using token (RBAC) (Generated by Ana - AI SDE) (#25843)
This pull request addresses the issue with authenticating Azure National
Cloud using token (RBAC) in the AzureSearch vectorstore implementation.

## Changes

- Modified the `_get_search_client` method in `azuresearch.py` to pass
`additional_search_client_options` to the `SearchIndexClient` instance.

## Implementation Details

The patch updates the `SearchIndexClient` initialization to include the
`additional_search_client_options` parameter:

```python
index_client: SearchIndexClient = SearchIndexClient(
    endpoint=endpoint,
    credential=credential,
    user_agent=user_agent,
    **additional_search_client_options
)
```

This change allows the `audience` parameter to be correctly passed when
using Azure National Cloud, fixing the authentication issues with
GovCloud & RBAC.

This patch was generated by [Ana - AI SDE](https://openana.ai/), an
AI-powered software development assistant.

This is a fix for [Issue
25823](https://github.com/langchain-ai/langchain/issues/25823)

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-12-16 18:22:24 +00:00
chenzimin
169d419581 community: Remove all other keys in ChatLiteLLM and add api_key (#28097)
- **PR title**: "community: Remove all other keys in ChatLiteLLM and add
api_key"


- **PR message**: Currently, no api_key is passed to LiteLLM, and
LiteLLM only takes one api_key parameter. Therefore I removed all current
`*_api_key` attributes (they are not used), and added `api_key`, which is
passed to ChatLiteLLM.
  - Should fix issue #27826

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-16 17:54:29 +00:00
German Martin
d5d18c62b3 community: Apache AGE wrapper additional edge cases. (#28151)
Description:
The current AGEGraph() implementation does some custom wrapping for graph
queries. The method here is _wrap_query(), as it parses the fields from the
original query to add some SQL context to it.
This improves the current parsing logic to cover additional edge cases
that are added to the test coverage; basically, if any node property name
or value contains the "return" literal, it will break the graph / SQL
query.
We discovered this while dealing with real-world datasets; it is not an
uncommon scenario and I think it needs to be covered.
2024-12-16 11:28:01 -05:00
Rock2z
768e4a7fd4 [community][fix] Compatibility support to bump up wikibase-rest-api-client version (#27316)
**Description:**

This PR addresses the `TypeError: sequence item 0: expected str
instance, FluentValue found` error when invoking `WikidataQueryRun`. The
root cause was an incompatible version of the
`wikibase-rest-api-client`, which caused the tool to fail when handling
`FluentValue` objects instead of strings.

The current implementation only supports `wikibase-rest-api-client<0.2`,
but the latest version is `0.2.1`, where the current implementation
breaks. Additionally, the error message advises users to install the
latest version: [code
reference](https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/wikidata.py#L125C25-L125C32).
Therefore, this PR updates the tool to support the latest version of
`wikibase-rest-api-client`.

Key changes:
- Updated the handling of `FluentValue` objects to ensure compatibility
with the latest `wikibase-rest-api-client`.
- Removed the restriction to `wikibase-rest-api-client<0.2` and updated
to support the latest version (`0.2.1`).

**Issue:**

Fixes [#24093](https://github.com/langchain-ai/langchain/issues/24093) –
`TypeError: sequence item 0: expected str instance, FluentValue found`.

**Dependencies:**

- Upgraded `wikibase-rest-api-client` to the latest version to resolve
the issue.

---------

Co-authored-by: peiwen_zhang <peiwen_zhang@email.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-16 16:22:18 +00:00
André Quintino
a26c786bc5 community: refactor opensearch query constructor to use wildcard instead of match in the contain comparator (#26653)
- **Description:** Changed the comparator to use a wildcard query
instead of match. This modification allows for partial text matching on
analyzed fields, which improves the flexibility of the search by
performing full-text searches that aren't limited to exact matches.
- **Issue:** The previous implementation used a match query, which
performs exact matches on analyzed fields. This approach limited the
search capabilities by requiring the query terms to align with the
indexed text. The modification to use a wildcard query instead addresses
this limitation. The wildcard query allows for partial text matching,
which means the search can return results even if only a portion of the
term matches the text. This makes the search more flexible and suitable
for use cases where exact matches aren't necessary or expected, enabling
broader full-text searches across analyzed fields.
In short, the problem was that match queries were too restrictive, and
the change to wildcard queries enhances the ability to perform partial
matches.
- **Dependencies:** none
- **Twitter handle:** @Andre_Q_Pereira

---------

Co-authored-by: André Quintino <andre.quintino@tui.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-16 11:16:34 -05:00
Davi Schumacher
0f9b4bf244 community[patch]: update dynamodb chat history to update instead of overwrite (#22397)
**Description:**
The current implementation of `DynamoDBChatMessageHistory` updates the
`History` attribute for a given chat history record by first extracting
the existing contents into memory, appending the new message, and then
using the `put_item` method to put the record back. This has the effect
of overwriting any additional attributes someone may want to include in
the record, like chat session metadata.

This PR suggests changing from using `put_item` to using `update_item`
instead which will keep any other attributes in the record untouched.
The change is backward compatible since
1. `update_item` is an "upsert" operation, creating the record if it
doesn't already exist, otherwise updating it
2. It only touches the db insert call and passes the exact same
information. The rest of the class is left untouched
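
A hedged sketch of the idea in plain boto3 (not the PR's exact code): `update_item` upserts the `History` attribute while leaving sibling attributes untouched:

```python
import boto3

table = boto3.resource("dynamodb").Table("SessionTable")

# Append a message to History, creating it if absent; other attributes
# on the item (e.g. chat session metadata) are not overwritten.
table.update_item(
    Key={"SessionId": "session-1"},
    UpdateExpression="SET History = list_append(if_not_exists(History, :empty), :msg)",
    ExpressionAttributeValues={
        ":msg": [{"type": "human", "content": "hi"}],
        ":empty": [],
    },
)
```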

**Dependencies:**
None

**Tests and docs:**
No unit tests currently exist for the `DynamoDBChatMessageHistory`
class. This PR adds the file
`libs/community/tests/unit_tests/chat_message_histories/test_dynamodb_chat_message_history.py`
to test the `add_message` and `clear` methods. I wanted to use the moto
library to mock DynamoDB calls but I could not get poetry to resolve it
so I mocked those calls myself in the test. Therefore, no test
dependencies were added.

The change was tested on a test DynamoDB table as well. The first three
images below show the current behavior. First a message is added to chat
history, then a value is inserted in the record in some other attribute,
and finally another message is added to the record, destroying the other
attribute.

![using_put_1_first_message](https://github.com/langchain-ai/langchain/assets/29493541/426acd62-fe29-42f4-b75f-863fb8b3fb21)

![using_put_2_add_attribute](https://github.com/langchain-ai/langchain/assets/29493541/f8a1c864-7114-4fe3-b487-d6f9252f8f92)

![using_put_3_second_message](https://github.com/langchain-ai/langchain/assets/29493541/8b691e08-755e-4877-8969-0e9769e5d28a)

The next three images show the new behavior. Once again a value is added
to an attribute other than the History attribute, but now when the
followup message is added it does not destroy that other attribute. The
History attribute itself is unaffected by this change.

![using_update_1_first_message](https://github.com/langchain-ai/langchain/assets/29493541/3e0d76ed-637e-41cd-82c7-01a86c468634)

![using_update_2_add_attribute](https://github.com/langchain-ai/langchain/assets/29493541/52585f9b-71a2-43f0-9dfc-9935aa59c729)

![using_update_3_second_message](https://github.com/langchain-ai/langchain/assets/29493541/f94c8147-2d6f-407a-9a0f-86b94341abff)

The doc located at `docs/docs/integrations/memory/aws_dynamodb.ipynb`
required no changes and was tested as well.
2024-12-16 10:38:00 -05:00
Christophe Bornet
6ddd5dbb1e community: Add FewShotSQLTool (#28232)
The `FewShotSQLTool` gets some SQL query examples from a
`BaseExampleSelector` for a given question.
This is useful to provide [few-shot
examples](https://python.langchain.com/docs/how_to/sql_prompting/#few-shot-examples)
capability to an SQL agent.

Example usage:
```python
from langchain.agents.agent_toolkits.sql.prompt import SQL_PREFIX
from langchain_community.agent_toolkits import create_sql_agent
from langchain_community.vectorstores import AstraDB
from langchain_core.example_selectors import SemanticSimilarityExampleSelector
from langchain_openai import OpenAIEmbeddings

# Import path for the new tool may vary; it is added to community in this PR.
from langchain_community.tools.sql_database.tool import FewShotSQLTool

embeddings = OpenAIEmbeddings()

example_selector = SemanticSimilarityExampleSelector.from_examples(
    examples,
    embeddings,
    AstraDB,
    k=5,
    input_keys=["input"],
    collection_name="lc_few_shots",
    token=ASTRA_DB_APPLICATION_TOKEN,
    api_endpoint=ASTRA_DB_API_ENDPOINT,
)

few_shot_sql_tool = FewShotSQLTool(
    example_selector=example_selector,
    description="Input to this tool is the input question, output is a few SQL query examples related to the input question. Always use this tool before checking the query with sql_db_query_checker!"
)

agent = create_sql_agent(
    llm=llm, 
    db=db, 
    prefix=SQL_PREFIX + "\nYou MUST get some example queries before creating the query.", 
    extra_tools=[few_shot_sql_tool]
)

result = agent.invoke({"input": "How many artists are there?"})
```

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-16 15:37:21 +00:00
Mohammad Mohtashim
8d746086ab Added bind_tools support for ChatMLX along with small fix in _stream (#28743)
- **Description:** Added support for `bind_tools` as requested in the
issue. Plus, two issues in `_stream` were fixed:
    - Corrected the positional argument passing for `generate_step`
    - Handled the case where the `token` returned by `generate_step` is an integer
- **Issue:** #28692
2024-12-16 09:52:49 -05:00
Jorge Piedrahita Ortiz
558b65ea32 community: SambaStudio Tool Calling and Structured Output (#28025)
Description: Add tool calling and structured output support for
SambaStudio chat models, docs included

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-16 06:15:19 +00:00
Hristiyan Genchev
c87f24d85d docs: correct variable name from formatted_docs to docs_content (#28735)
- **Description:** Fixed incorrect variable name formatted_docs,
replacing it with docs_content to ensure correct functionality.
2024-12-15 22:09:58 -08:00
clairebehue
fb44e74ca4 community: fix AzureSearch Oauth with azure_ad_access_token (#26995)
**Description:**
AzureSearch vector store: create a wrapper class around
`azure.core.credentials.TokenCredential` (which is not instantiable) to
fix OAuth usage with the `azure_ad_access_token` argument

**Issue:** [the issue it
fixes](https://github.com/langchain-ai/langchain/issues/26216)

 **Dependencies:** None

- [x] **Lint and test**

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-16 05:56:45 +00:00
SirSmokeAlot
29305cd948 community: O365Toolkit - send_event - fixed timezone error (#25876)
**Description**: Fixed the formatting of start and end times
**Issue**: The old formatting always resulted in a timezone error
**Dependencies**: /
**Twitter handle**: /

---------

Co-authored-by: Yannick Opitz <yannick.opitz@gob.de>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-16 05:32:28 +00:00
Erick Friis
4f6ccb7080 text-splitters: extended-tests without socket (#28736) 2024-12-16 05:19:50 +00:00
Erick Friis
8ec1c72e03 text-splitters: test without socket (#28732) 2024-12-15 22:10:35 +00:00
Tibor Reiss
690aa02c31 docs[experimental]: Make docs clearer and add min_chunk_size (#26398)
Fixes #26171:

- added some clarification text for the keyword argument
`breakpoint_threshold_amount`
- added `min_chunk_size`: together with `breakpoint_threshold_amount`,
chunk sizes that are too small or too large can be avoided

Note: langchain-experimental was moved to a separate repo, so only
the doc change stays here.
2024-12-15 13:43:48 -08:00
Aayush Kataria
d417e4b372 Community: Azure CosmosDB No Sql Vector Store: Full Text and Hybrid Search Support (#28716)
- Added [full
text](https://learn.microsoft.com/en-us/azure/cosmos-db/gen-ai/full-text-search)
and [hybrid
search](https://learn.microsoft.com/en-us/azure/cosmos-db/gen-ai/hybrid-search)
support for Azure CosmosDB NoSql Vector Store
- Added a new enum called CosmosDBQueryType which supports the following
values:
    - VECTOR = "vector"
    - FULL_TEXT_SEARCH = "full_text_search"
    - FULL_TEXT_RANK = "full_text_rank"
    - HYBRID = "hybrid"
- Users now need to provide this query_type to the similarity_search
method for the vector store to make the correct query API call (see the
sketch after this list).
- Added a couple of workarounds, as the FULL_TEXT_RANK and HYBRID
query functions don't support parameterized queries right now. I have
added TODOs in place, and will remove these workarounds by the end of
January.
- Added necessary test cases and updated the 
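
A hedged sketch of the new parameter (import path assumed; `vector_store` stands for an already-initialized Azure Cosmos DB NoSql vector store):

```python
from langchain_community.vectorstores.azure_cosmos_db_no_sql import (
    CosmosDBQueryType,
)

results = vector_store.similarity_search(
    "What did the report conclude?",
    k=3,
    query_type=CosmosDBQueryType.HYBRID,
)
```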


- [x] **Add tests and docs**
- [x] **Lint and test**

---------

Co-authored-by: Erick Friis <erickfriis@gmail.com>
2024-12-15 13:26:32 -08:00
Mohammad Mohtashim
4c1871d9a8 community: Passing the model_kwargs correctly while maintaining backward compatibility (#28439)
- **Description:** `model_kwargs` was not being passed correctly to
`sentence_transformers.SentenceTransformer`, which has been corrected
while maintaining backward compatibility
- **Issue:** #28436

---------

Co-authored-by: MoosaTae <sadhis.tae@gmail.com>
Co-authored-by: Sadit Wongprayon <101176694+MoosaTae@users.noreply.github.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-15 20:34:29 +00:00
nhols
a3851cb3bc community: FAISS vectorstore - consistent Document id field (#28728)
Make sure the id field of Documents in the `FAISS` docstore matches the
values in `index_to_docstore_id`, and implement the `get_by_ids` method
2024-12-15 12:23:49 -08:00
Bagatur
a0534ae62a community[patch]: Release 0.3.12 (#28725) 2024-12-14 22:13:20 +00:00
Bagatur
089e659e03 langchain[patch]: Release 0.3.12 (#28724) 2024-12-14 20:02:18 +00:00
Bagatur
679e3a9970 text-splitters[patch]: Release 0.3.3 (#28723) 2024-12-14 19:20:22 +00:00
ccurme
23b433f683 infra: fix notebook tests (#28722)
Bump unstructured to pick up resolution of
https://github.com/Unstructured-IO/unstructured/issues/3795
2024-12-14 15:13:19 +00:00
Erick Friis
387284c259 core: release 0.3.25 (#28718) 2024-12-14 02:22:28 +00:00
Nawaf Alharbi
decd77c515 community: fix an issue with deepinfra integration (#28715)
- [x] **PR title**: langchain: add URL parameter to ChatDeepInfra class

- [x] **PR message**: add URL parameter to ChatDeepInfra class
- **Description:** This PR introduces a url parameter to the
ChatDeepInfra class in LangChain, allowing users to specify a custom
URL. Previously, the URL for the DeepInfra API was hardcoded to
"https://stage.api.deepinfra.com/v1/openai/chat/completions", which
caused issues when the staging endpoint was not functional. The _url
method was updated to return the value from the url parameter, enabling
greater flexibility and addressing the problem.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-14 02:15:29 +00:00
Ben Chambers
008efada2c [community]: Render documents to graphviz (#24830)
- **Description:** Adds a helper that renders documents with the
GraphVectorStore metadata fields to Graphviz for visualization. This is
helpful for understanding and debugging.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-14 02:02:09 +00:00
Leonid Ganeline
fc8006121f docs: integrations W&B update (#28059)
Issue: There is an ambiguity about W&B integrations: there are two
existing provider pages.
Fix: Added the "root" W&B provider page with references to
the documentation on the W&B site. Cleaned up formatting in the existing
pages. Added one more integration reference.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-12-14 00:47:16 +00:00
Erick Friis
288f204758 docs, community: aerospike docs update (#28717)
Co-authored-by: Jesse Schumacher <jschumacher@aerospike.com>
Co-authored-by: Jesse S <jschmidt@aerospike.com>
Co-authored-by: dylan <dwelch@aerospike.com>
2024-12-14 00:27:37 +00:00
ronidas39
bd008baee0 docs: Update additional_resources/tutorials.mdx (#28005)
Added a complete LangChain tutorial playlist from the Total Technology Zonne
channel. In this playlist, every video focuses on one specific use case
with a hands-on demo. All tutorials are equally good for every level.


---------

Co-authored-by: Erick Friis <erickfriis@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-14 00:21:30 +00:00
Vimpas
337fed80a5 community: 🐛 PDF Filter Type Error (#27154)
**PR title**: "community: fix PDF Filter Type Error"

- **Description:** fix PDF Filter Type Error
- **Issue:** fixes #27153
- **Dependencies:** no
- [x] **Lint and test**

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-13 23:30:29 +00:00
Ryan Parker
12111cb922 community: fallback on core async atransform_documents method for MarkdownifyTransformer (#27866)
# Description
Implements the `atransform_documents` method for
`MarkdownifyTransformer` using the `asyncio` built-in library for
concurrency.

Note that this is mainly for API completeness when working with async
frameworks rather than for performance, since the `markdownify` function
is not I/O bound because it works with `Document` objects already in
memory.
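
A minimal async usage sketch:

```python
import asyncio

from langchain_community.document_transformers import MarkdownifyTransformer
from langchain_core.documents import Document

docs = [Document(page_content="<h1>Title</h1><p>Hello <b>world</b>!</p>")]
transformer = MarkdownifyTransformer()

converted = asyncio.run(transformer.atransform_documents(docs))
print(converted[0].page_content)  # the markdown equivalent of the HTML above
```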

# Issue
Fixes #27865

# Dependencies
No new dependencies added, but
[`markdownify`](https://github.com/matthewwithanm/python-markdownify) is
required since this PR updates the `markdownify` integration.

# Tests and docs
- Tests added
- I did not modify the docstrings since they already described the basic
functionality, and [the API docs also already included a
description](https://python.langchain.com/api_reference/community/document_transformers/langchain_community.document_transformers.markdownify.MarkdownifyTransformer.html#langchain_community.document_transformers.markdownify.MarkdownifyTransformer.atransform_documents).
If it would be helpful, I would be happy to update the docstrings and/or
the API docs.

# Lint and test
- [x] format
- [x] lint
- [x] test

I ran formatting with `make format`, linting with `make lint`, and
confirmed that tests pass using `make test`. Note that some unit tests
pass in CI but may fail when running `make test`. Those unit tests are:
- `test_extract_html` (and `test_extract_html_async`)
- `test_strip_tags` (and `test_strip_tags_async`)
- `test_convert_tags` (and `test_convert_tags_async`)

The reason for the difference is that there are trailing spaces when the
tests are run in the CI checks, and no trailing spaces when run with
`make test`. I ensured that the tests pass in CI, but they may fail with
`make test` due to the addition of trailing spaces.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-13 22:32:22 +00:00
Manuel
af2e0a7ede partners: add 'model' alias for consistency in embedding classes (#28374)
**Description:** This PR introduces a `model` alias for the embedding
classes that contain the attribute `model_name`, to ensure consistency
across the codebase, as suggested by a moderator in a previous PR. The
change aligns the usage of attribute names across the project (see for
example
[here](65deeddd5d/libs/partners/groq/langchain_groq/chat_models.py (L304))).
**Issue:** This PR addresses the suggestion from the review of issue
#28269.
**Dependencies:**  None

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-13 22:30:00 +00:00
Erick Friis
3107d78517 huggingface: fix standard test lint (#28714) 2024-12-13 22:18:54 +00:00
Kaiwei Zhang
b909d54e70 chroma[patch]: Update logic for assigning ids 2024-12-13 21:58:34 +00:00
ccurme
9c55c75eb5 docs: dropdowns for embeddings and vector stores (#28713) 2024-12-13 16:48:02 -05:00
Karthik Bharadhwaj
498f0249e2 community[minor]: Opensearch hybridsearch implementation (#25375)
community: add hybrid search in opensearch

# Langchain OpenSearch Hybrid Search Implementation

## Implementation of Hybrid Search: 

I have taken LangChain's OpenSearch integration to the next level by
adding hybrid search capabilities. Building on the existing
OpenSearchVectorSearch class, I have implemented Hybrid Search
functionality (which combines the best of both keyword and semantic
search). This new functionality allows users to harness the power of
OpenSearch's advanced hybrid search features without leaving the
familiar LangChain ecosystem. By blending traditional text matching with
vector-based similarity, the enhanced class delivers more accurate and
contextually relevant results. It's designed to seamlessly fit into
existing LangChain workflows, making it easy for developers to upgrade
their search capabilities.

In implementing the hybrid search for OpenSearch within the LangChain
framework, I also incorporated filtering capabilities. It's important to
note that according to the OpenSearch hybrid search documentation, only
post-filtering is supported for hybrid queries. This means that the
filtering is applied after the hybrid search results are obtained,
rather than during the initial search process.

**Note:** For the implementation of hybrid search, I strictly followed
the official OpenSearch Hybrid search documentation and I took
inspiration from
https://github.com/AndreasThinks/langchain/tree/feature/opensearch_hybrid_search
Thanks Mate!  

### Experiments

I conducted a few experiments to verify that the hybrid search
implementation is accurate and capable of reproducing the results of
both plain keyword search and vector search.

Experiment - 1
Hybrid Search
Keyword_weight: 1, vector_weight: 0

I conducted an experiment to verify the accuracy of my hybrid search
implementation by comparing it to a plain keyword search. For this test,
I set the keyword_weight to 1 and the vector_weight to 0 in the hybrid
search, effectively giving full weightage to the keyword component. The
results from this hybrid search configuration matched those of a plain
keyword search, confirming that my implementation can accurately
reproduce keyword-only search results when needed. It's important to
note that while the results were the same, the scores differed between
the two methods. This difference is expected because the plain keyword
search in OpenSearch uses the BM25 algorithm for scoring, whereas the
hybrid search still performs both keyword and vector searches before
normalizing the scores, even when the vector component is given zero
weight. This experiment validates that my hybrid search solution
correctly handles the keyword search component and properly applies the
weighting system, demonstrating its accuracy and flexibility in
emulating different search scenarios.


Experiment - 2
Hybrid Search
keyword_weight = 0.0, vector_weight = 1.0

For experiment-2, I took the inverse approach to further validate my
hybrid search implementation. I set the keyword_weight to 0 and the
vector_weight to 1, effectively giving full weightage to the vector
search component (KNN search). I then compared these results with a pure
vector search. The outcome was consistent with my expectations: the
results from the hybrid search with these settings exactly matched those
from a standalone vector search. This confirms that my implementation
accurately reproduces vector search results when configured to do so. As
with the first experiment, I observed that while the results were
identical, the scores differed between the two methods. This difference
in scoring is expected and can be attributed to the normalization
process in hybrid search, which still considers both components even
when one is given zero weight. This experiment further validates the
accuracy and flexibility of my hybrid search solution, demonstrating its
ability to effectively emulate pure vector search when needed while
maintaining the underlying hybrid search structure.



Experiment - 3
Hybrid Search - balanced

keyword_weight = 0.5, vector_weight = 0.5

For experiment-3, I adopted a balanced approach to further evaluate the
effectiveness of my hybrid search implementation. In this test, I set
both the keyword_weight and vector_weight to 0.5, giving equal
importance to keyword-based and vector-based search components. This
configuration aims to leverage the strengths of both search methods
simultaneously. By setting both weights to 0.5, I intended to create a
scenario where the hybrid search would consider lexical matches and
semantic similarity equally. This balanced approach is often ideal for
many real-world applications, as it can capture both exact keyword
matches and contextually relevant results that might not contain the
exact search terms.

Kindly verify the notebook for the experiments conducted!  

**Notebook:**
https://github.com/karthikbharadhwajKB/Langchain_OpenSearch_Hybrid_search/blob/main/Opensearch_Hybridsearch.ipynb

### Instructions to follow for Performing Hybrid Search:

**Step-1: Instantiating OpenSearchVectorSearch Class:**
```python
opensearch_vectorstore = OpenSearchVectorSearch(
    index_name=os.getenv("INDEX_NAME"),
    embedding_function=embedding_model,
    opensearch_url=os.getenv("OPENSEARCH_URL"),
    http_auth=(os.getenv("OPENSEARCH_USERNAME"),os.getenv("OPENSEARCH_PASSWORD")),
    use_ssl=False,
    verify_certs=False,
    ssl_assert_hostname=False,
    ssl_show_warn=False
)
```

**Parameters:**
1. **index_name:** The name of the OpenSearch index to use.
2. **embedding_function:** The function or model used to generate
embeddings for the documents. It's assumed that embedding_model is
defined elsewhere in the code.
3. **opensearch_url:** The URL of the OpenSearch instance.
4. **http_auth:** A tuple containing the username and password for
authentication.
5. **use_ssl:** Set to False, indicating that the connection to
OpenSearch is not using SSL/TLS encryption.
6. **verify_certs:** Set to False, which means the SSL certificates are
not being verified. This is often used in development environments but
is not recommended for production.
7. **ssl_assert_hostname:** Set to False, disabling hostname
verification in SSL certificates.
8. **ssl_show_warn:** Set to False, suppressing SSL-related warnings.

**Step-2: Configure Search Pipeline:**

To initiate hybrid search functionality, you need to configure a search
pipeline first.

**Implementation Details:**

This method configures a search pipeline in OpenSearch that:
1. Normalizes the scores from both keyword and vector searches using the
min-max technique.
2. Applies the specified weights to the normalized scores.
3. Calculates the final score using an arithmetic mean of the weighted,
normalized scores.


**Parameters:**

* **pipeline_name (str):** A unique identifier for the search pipeline.
It's recommended to use a descriptive name that indicates the weights
used for keyword and vector searches.
* **keyword_weight (float):** The weight assigned to the keyword search
component. This should be a float value between 0 and 1. In this
example, 0.3 gives 30% importance to traditional text matching.
* **vector_weight (float):** The weight assigned to the vector search
component. This should be a float value between 0 and 1. In this
example, 0.7 gives 70% importance to semantic similarity.

```python
opensearch_vectorstore.configure_search_pipelines(
    pipeline_name="search_pipeline_keyword_0.3_vector_0.7",
    keyword_weight=0.3,
    vector_weight=0.7,
)
```

**Step-3: Performing Hybrid Search:**

After creating the search pipeline, you can perform a hybrid search
using the `similarity_search()` method (or) any methods that are
supported by `langchain`. This method combines both `keyword-based and
semantic similarity` searches on your OpenSearch index, leveraging the
strengths of both traditional information retrieval and vector embedding
techniques.

**parameters:**
* **query:** The search query string.
* **k:** The number of top results to return (in this case, 3).
* **search_type:** Set to `hybrid_search` to use both keyword and vector
search capabilities.
* **search_pipeline:** The name of the previously created search
pipeline.

```python
query = "what are the country named in our database?"

top_k = 3

pipeline_name = "search_pipeline_keyword_0.3_vector_0.7"

matched_docs = opensearch_vectorstore.similarity_search_with_score(
                query=query,
                k=top_k,
                search_type="hybrid_search",
                search_pipeline = pipeline_name
            )

matched_docs
```

twitter handle: @iamkarthik98

---------

Co-authored-by: Karthik Kolluri <karthik.kolluri@eidosmedia.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-12-13 16:34:12 -05:00
Philippe PRADOS
f3fb5a9c68 community[minor]: Fix json._validate_metadata_func() (#22842)
JSONLoader, in _validate_metadata_func(), checks the consistency of the
_metadata_func() function. To do this, it invokes it and makes sure it
receives a dictionary in response. However, this validation call does not
match the way later real calls are made (as shown on line 100). This
generates errors if, for example, the function is like this:
```python
from typing import Any, Dict

from langchain_community.document_loaders import JSONLoader

# url and file_path are assumed to be defined elsewhere in the caller's code.
def generate_metadata(json_node: Dict[str, Any], kwargs: Dict[str, Any]) -> Dict[str, Any]:
    return {
        "source": url,
        "row": kwargs["seq_num"],
        "question": json_node.get("question"),
    }

loader = JSONLoader(
    file_path=file_path,
    content_key="answer",
    jq_schema=".[]",
    metadata_func=generate_metadata,
    text_content=False,
)
```
To avoid this, the verification must comply with the specifications.
This patch does just that.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-12-13 21:24:20 +00:00
Keiichi Hirobe
67fd554512 core[patch]: throw exception indexing code if deletion fails in vectorstore (#28103)
The delete methods in the VectorStore and DocumentIndex interfaces
return a status indicating the result. Therefore, we can assume that
their implementations don't throw exceptions but instead return a result
indicating whether the delete operations have failed. The current
implementation doesn't check the returned value, so I modified it to
throw an exception when the operation fails.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-12-13 16:14:27 -05:00
Keiichi Hirobe
258b3be5ec core[minor]: add new clean up strategy "scoped_full" to indexing (#28505)
~Note that this PR is now Draft, so I didn't add change to `aindex`
function and didn't add test codes for my change.
After we have an agreement on the direction, I will add commits.~

`batch_size` is very difficult to decide because setting a large number
like >10000 will impact VectorDB and RecordManager, while setting a
small number will delete records unnecessarily, leading to redundant
work, as the `IMPORTANT` section says.
On the other hand, we can't use `full` because the loader returns just a
subset of the dataset in our use case.

I guess many people are in the same situation as us.

So, as one of the possible solutions, I would like to introduce a
new argument, `scoped_full_cleanup`.
This argument will be valid only when `cleanup` is `full`. If True, full
cleanup deletes all documents that haven't been updated AND that are
associated with source ids that were seen during indexing. Default is
False.

This change keeps backward compatibility.

---------

Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-12-13 20:35:25 +00:00
ccurme
4802c31a53 docs: update intro page (#28639) 2024-12-13 15:24:14 -05:00
Eugene Yurtsev
ce90b25313 core[patch]: Update error message in indexing code for unreachable code assertion (#28712)
Minor update for error message that should never be triggered
2024-12-13 20:21:14 +00:00
Keiichi Hirobe
da28cf1f54 core[patch]: Reverts PR #25754 and add unit tests (#28702)
I reported the bug 2 weeks ago here:
https://github.com/langchain-ai/langchain/issues/28447

I believe this is a critical bug for the indexer, so I submitted a PR to
revert the change and added unit tests to prevent similar bugs from
being introduced in the future.

@eyurtsev Could you check this?
2024-12-13 15:13:06 -05:00
ScriptShi
b0a298894d community[minor]: Add TablestoreVectorStore (#25767)
- [x] **PR title**:  community: add TablestoreVectorStore



- [x] **PR message**: 
    - **Description:** add TablestoreVectorStore
    - **Dependencies:** none


- [x] **Add tests and docs**:
  1. a test for the integration: yes
  2. an example notebook showing its use: yes


---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-12-13 11:17:28 -08:00
Erick Friis
86b3c6e81c community: make old stub for QuerySQLDataBaseTool private to skip api ref (#28711) 2024-12-13 10:43:23 -08:00
Martin Triska
05ebe1e66b Community: add modified_since argument to O365BaseLoader (#28708)
## What are we doing in this PR
We're adding `modified_since` optional argument to `O365BaseLoader`.
When set, O365 loader will only load documents newer than
`modified_since` datetime.

## Why?
OneDrive / SharePoint sites can contain a large number of documents. The
current approach is to download and parse all files and let the indexer deal
with duplicates. This can be prohibitively time-consuming, especially when
using an OCR-based parser like
[zerox](fa06188834/libs/community/langchain_community/document_loaders/pdf.py (L948)).
This argument allows skipping documents that are older than the known time of
indexing.

_Q: What if a file was modified during the last indexing process?
A: Users can set `modified_since` conservatively and the indexer will
still take care of duplicates._
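
A minimal sketch (arguments illustrative; `SharePointLoader` is one of the `O365BaseLoader` subclasses):

```python
from datetime import datetime, timezone

from langchain_community.document_loaders import SharePointLoader

loader = SharePointLoader(
    document_library_id="placeholder-library-id",
    modified_since=datetime(2024, 12, 1, tzinfo=timezone.utc),
)
docs = loader.load()  # only files modified after 2024-12-01 are fetched
```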





---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-13 17:30:17 +00:00
UV
c855d434c5 DOC: Fixed conflicting info on ChatOllama structured output support (#28701)
This PR resolves the conflicting information in the Chat models
documentation regarding structured output support for ChatOllama.

- The Featured Providers table has been updated to reflect the correct
status.
- Structured output support for ChatOllama was introduced on Dec 6,
2024.
- A note has been added to ensure users update to the latest Ollama
version for structured outputs.

**Issue:** Fixes #28691
2024-12-13 17:24:59 +00:00
Bagatur
fa06188834 community[patch]: fix QuerySQLDatabaseTool name (#28659)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-12 19:16:03 -08:00
Bagatur
94c22c3f48 rfc: dropdown for chat models (#28673) 2024-12-12 19:14:39 -08:00
Erick Friis
48ab91b520 docs: more useful vercel warnings (#28699) 2024-12-13 03:07:24 +00:00
Erick Friis
f60110107c docs: ganalytics in api ref (#28697) 2024-12-12 23:55:59 +00:00
Michael Chin
28cb2cefc6 docs: Fix stack diagram in community README (#28685)
- **Description:** The stack diagram illustration in the community
README fails to render due to an invalid branch reference. This PR
replaces the broken image link with a valid one referencing master
branch.
2024-12-12 13:33:50 -08:00
Botong Zhu
13c3c4a210 community: fixes json loader not getting texts with json standard (#27327)
This PR fixes JSONLoader._get_text not converting objects to JSON strings
correctly.
If an object is serializable and is not a dict, JSONLoader will use the
Python built-in str() method to convert it to a string. This may produce
strings that do not follow the JSON standard. For example, a
list will be converted to a string with single quotes, and if json.loads
tries to load this string, it will raise an error.
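
The difference in one line (illustrative):

```python
import json

value = ["a", "b"]
str(value)         # "['a', 'b']" – single quotes; json.loads() fails on this
json.dumps(value)  # '["a", "b"]' – valid JSON; json.loads() round-trips it
```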

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-12 19:33:45 +00:00
Lorenzo
4149c0dd8d community: add method to create branch and list files for gitlab tool (#27883)
### About

- **Description:** In the Gitlab utilities used for the Gitlab tool,
there are no methods to create branches or list branches and files,
unlike what is already done for Github
- **Issue:** None
- **Dependencies:** None

This Pull request add the methods:
- create_branch
- list_branches_in_repo
- set_active_branch
- list_files_in_main_branch
- list_files_in_bot_branch
- list_files_from_directory
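
A rough usage sketch of the new methods (not from the PR; it assumes `GITLAB_REPOSITORY` and `GITLAB_PERSONAL_ACCESS_TOKEN` are set in the environment, and that the new methods take arguments mirroring their GitHub counterparts):

```python
from langchain_community.utilities.gitlab import GitLabAPIWrapper

gitlab = GitLabAPIWrapper()

print(gitlab.list_branches_in_repo())             # enumerate branches
print(gitlab.create_branch("feature/my-branch"))  # create a new branch
gitlab.set_active_branch("feature/my-branch")     # operate on that branch
print(gitlab.list_files_in_main_branch())         # list files on main
```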

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-12 19:11:35 +00:00
Prathamesh Nimkar
ca054ed1b1 community: ChatSnowflakeCortex - Add streaming functionality (#27753)
Description:

snowflake.py
- Added _stream and _stream_content methods to enable streaming
functionality
- Fixed pydantic issues and added functionality with the overall langchain
version upgrade
- Added bind_tools method for agentic workflows support through langgraph
- Updated the _generate method to account for agentic workflows support
through langgraph
- Cosmetic changes to comments and if conditions

snowflake.ipynb
- Added _stream example
- Cosmetic changes to comments
- Fixed lint errors

check_pydantic.sh
- Decreased counter from 126 to 125 as suggested when formatting
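
A minimal streaming sketch (not from the PR; it assumes Snowflake credentials such as account, username, and password are supplied via environment variables or constructor kwargs, and uses example model names):

```python
from langchain_community.chat_models.snowflake import ChatSnowflakeCortex

chat = ChatSnowflakeCortex(model="mistral-large", cortex_function="complete")

for chunk in chat.stream("Summarize what Snowflake Cortex does."):
    print(chunk.content, end="", flush=True)
```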

---------

Co-authored-by: Prathamesh Nimkar <prathamesh.nimkar@snowflake.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-11 18:35:40 -08:00
Wang, Yi
d834c6b618 huggingface: fix tool argument serialization in _convert_TGI_message_to_LC_message (#26075)
Currently `_convert_TGI_message_to_LC_message` replaces `'` in the tool
arguments, so an argument like "It's" will be converted to `It"s` and
could cause a json parser to fail.
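
A quick illustration of the failure mode (standard library only):

```python
import json

arguments = json.dumps({"location": "It's sunny here"})
json.loads(arguments)  # OK: an apostrophe is legal inside a JSON string

broken = arguments.replace("'", '"')  # naive quote replacement
# json.loads(broken) -> json.JSONDecodeError: the stray '"' corrupts the string
```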

---------

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Vadym Barda <vadym@langchain.dev>
2024-12-11 18:34:32 -08:00
Lakindu Boteju
5a31792bf1 community: Add support for cross-region inference profile IDs in Bedrock Anthropic Claude token cost calculation (#28167)
This change modifies the token cost calculation logic to support
cross-region inference profile IDs for Anthropic Claude models. Instead
of explicitly listing all regional variants of new inference profile IDs
in the cost dictionaries, the code now extracts a base model ID from the
input model ID (or inference profile ID), making it more maintainable
and automatically supporting new regional variants.

These inference profile IDs follow the format:
`<region>.<vendor>.<model-name>` (e.g.,
`us.anthropic.claude-3-haiku-xxx`, `eu.anthropic.claude-3-sonnet-xxx`).

Cross-region inference profiles are system-defined identifiers that
enable distributing model inference requests across multiple AWS
regions. They help manage unplanned traffic bursts and enhance
resilience during peak demands without additional routing costs.
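
An illustrative sketch of the idea (not the exact library code; the function name and heuristic are assumptions for illustration):

```python
def base_model_id(model_id: str) -> str:
    """Strip a leading region prefix (e.g. "us.", "eu.") from an
    inference profile ID to recover the base model ID for cost lookup."""
    parts = model_id.split(".")
    # A profile ID has a short region segment before the vendor name.
    if len(parts) > 2 and parts[0] != "anthropic":
        return ".".join(parts[1:])
    return model_id

assert base_model_id("us.anthropic.claude-3-haiku-xxx") == "anthropic.claude-3-haiku-xxx"
assert base_model_id("anthropic.claude-3-sonnet-xxx") == "anthropic.claude-3-sonnet-xxx"
```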

References for Amazon Bedrock's cross-region inference profiles:
- https://docs.aws.amazon.com/bedrock/latest/userguide/cross-region-inference.html
- https://docs.aws.amazon.com/bedrock/latest/userguide/inference-profiles-support.html

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-12 02:33:50 +00:00
fatmelon
d1e0ec7b55 community: VectorStores: Azure Cosmos DB Mongo vCore with DiskANN (#27329)
# Description
Add a new vector index type `diskann` to the Azure Cosmos DB Mongo vCore
vector store. The DiskANN paper can be found here: [DiskANN: Fast Accurate
Billion-point Nearest Neighbor Search on a Single
Node](https://proceedings.neurips.cc/paper_files/paper/2019/file/09853c7fb1d3f8ee67a61b6bf4a7f8e6-Paper.pdf).

## Sample Usage
```python
import os

from pymongo import MongoClient
from langchain_community.vectorstores.azure_cosmos_db import (
    AzureCosmosDBVectorSearch,
    CosmosDBSimilarityType,
    CosmosDBVectorSearchType,
)

CONNECTION_STRING = os.environ["COSMOS_CONNECTION_STRING"]  # placeholder env var
INDEX_NAME = "izzy-test-index-2"
NAMESPACE = "izzy_test_db.izzy_test_collection"
DB_NAME, COLLECTION_NAME = NAMESPACE.split(".")

client: MongoClient = MongoClient(CONNECTION_STRING)
collection = client[DB_NAME][COLLECTION_NAME]

model_deployment = os.getenv(
    "OPENAI_EMBEDDINGS_DEPLOYMENT", "smart-agent-embedding-ada"
)
model_name = os.getenv("OPENAI_EMBEDDINGS_MODEL_NAME", "text-embedding-ada-002")

# `docs` (a list of Documents) and `openai_embeddings` (an Embeddings
# instance) are assumed to be defined elsewhere.
vectorstore = AzureCosmosDBVectorSearch.from_documents(
    docs,
    openai_embeddings,
    collection=collection,
    index_name=INDEX_NAME,
)

# Read more about these variables in detail here. https://learn.microsoft.com/en-us/azure/cosmos-db/mongodb/vcore/vector-search
maxDegree = 40
dimensions = 1536
similarity_algorithm = CosmosDBSimilarityType.COS
kind = CosmosDBVectorSearchType.VECTOR_DISKANN
lBuild = 20

vectorstore.create_index(
    dimensions=dimensions,
    similarity=similarity_algorithm,
    kind=kind,
    max_degree=maxDegree,
    l_build=lBuild,
)
```

## Dependencies
No additional dependencies were added

---------

Co-authored-by: Yang Qiao (from Dev Box) <yangqiao@microsoft.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-12 01:54:04 +00:00
manukychen
ba9b95cd23 Community: Adding bulk_size as a setable param for OpenSearchVectorSearch (#28325)
Description:
When using langchain.retrievers.parent_document_retriever.py with
OpenSearchVectorSearch as the vectorstore, I found that the bulk_size param
I passed into the OpenSearchVectorSearch class did not apply to my
ParentDocumentRetriever.add_documents() call correctly; it was
overwritten with the default of 500 by the OpenSearchVectorSearch class's
methods (e.g., add_texts(), add_embeddings()...).

So I made this PR to fix this, thanks!
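
A minimal sketch of the now-respected parameter (not from the PR; URL and index name are placeholders, and a fake embedding model stands in for a real one):

```python
from langchain_community.vectorstores import OpenSearchVectorSearch
from langchain_core.embeddings import FakeEmbeddings

# After this change, the bulk_size set here is respected by add_texts()
# and related methods instead of being reset to the default of 500.
vectorstore = OpenSearchVectorSearch(
    opensearch_url="http://localhost:9200",
    index_name="my-index",
    embedding_function=FakeEmbeddings(size=1536),
    bulk_size=2000,
)
```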

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-12 01:45:22 +00:00
Erick Friis
0af5ad8262 docs: provider list from packages.yml (#28677) 2024-12-12 00:12:30 +00:00
Ayantunji Timilehin
a4713cab47 FIX: typos in docs (#28679)
- **Twitter handle:**@timi471
2024-12-11 16:06:04 -08:00
xintoteai
45f9c9ae88 langchain: fixed weaviate (v4) vectorstore import for self-query retriever (#28675)
Co-authored-by: Xin Heng <xin.heng@gmail.com>
2024-12-11 15:53:41 -08:00
Thomas van Dongen
ee640d6bd3 community: fixed bug in model2vec embedding code (#28670)
This PR fixes a bug with the current implementation for Model2Vec
embeddings where `embed_documents` does not work as expected.

- **Description**: the current implementation uses `encode_as_sequence`
for encoding documents. This is incorrect, as `encode_as_sequence`
creates token embeddings and not mean embeddings. The normal `encode`
function handles both single and batched inputs and should be used
instead. The return type was also incorrect, as encode returns a NumPy
array. This PR converts the embedding to a list so that the output is
consistent with the Embeddings ABC.
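
A sketch of the corrected behavior using the model2vec library directly (the model name is an example, not taken from the PR):

```python
from model2vec import StaticModel

# encode() returns one mean embedding per input text as a NumPy array.
model = StaticModel.from_pretrained("minishlab/M2V_base_output")
vectors = model.encode(["first document", "second document"])
embeddings = [v.tolist() for v in vectors]  # lists, per the Embeddings ABC
```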
2024-12-11 15:50:56 -08:00
Brian Sharon
b20230c800 community: use correct id_key when deleting by id in LanceDB wrapper (#28655)
- **Description:** The current version of the `delete` method assumes
that the id field will always be called `id`.
- **Issue:** n/a
- **Dependencies:** n/a
- **Twitter handle:** ugh, Twitter :D 


---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-11 23:49:35 +00:00
Huy Nguyen
8780f7a2ad Fix typo in doc for: Custom Functions & Pass Through Arguments pages (#28663)
- [x] Fix typo in Custom Output Parser doc
2024-12-11 15:47:14 -08:00
Mohammad Mohtashim
fa155a422f [Community]: requests_kwargs not being used in _fetch (#28646)
- **Description:** `requests_kwargs` is not being passed to `_fetch`
which is fetching pages asynchronously. In this PR, making sure that we
are passing `requests_kwargs` to `_fetch` just like `_scrape`.
- **Issue:** #28634

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-11 23:46:54 +00:00
bjoaquinc
8c37808d47 docs: added caution notes on Jina and LocalAI docs about openai sdk version compatibility (#28662)
- **Description:** I added notes on the Jina and LocalAI pages telling
users that they must use these integrations with openai SDK version
0.x, because if they don't they will get an error saying that "openai has
no attribute error". This PR was recommended by @efriis
    - **Issue:** warns people about the issue in #28529 
    - **Dependencies:** None
    - **Twitter handle:** JoaqCore



2024-12-11 15:46:32 -08:00
Mohammad Mohtashim
a37afbe353 mistral[minor]: Added Retrying Mechanism in case of Request Rate Limit Error for MistralAIEmbeddings (#27818)
- **Description:** In the event of a rate limit error from the
MistralAI server, parsing the response JSON raises a KeyError. To address this,
a simple retry mechanism has been implemented to handle cases where the
request limit is exceeded.
  - **Issue:** #27790
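
An illustrative sketch of the retry idea (not the library's exact code; names and parameters are assumptions):

```python
import time

def call_with_retry(request_fn, max_retries=3, wait_seconds=2.0):
    """Re-issue the request a few times with a short pause when the
    rate-limited response is missing the expected JSON key."""
    for attempt in range(max_retries):
        try:
            return request_fn()
        except KeyError:
            if attempt == max_retries - 1:
                raise
            time.sleep(wait_seconds)
```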

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-12-11 17:53:42 -05:00
Vincent Zhang
df5008fe55 community[minor]: FAISS Filter Function Enhancement with Advanced Query Operators (#28207)
## Description
We are submitting as a team of four for a project. Other team members
are @RuofanChen03, @LikeWang10067, @TANYAL77.

This pull request expands the filtering capabilities of the FAISS
vectorstore by adding MongoDB-style query operators, indicated as
follows, while including comprehensive testing for the added
functionality.
- $eq (equals)
- $neq (not equals)
- $gt (greater than)
- $lt (less than)
- $gte (greater than or equal)
- $lte (less than or equal)
- $in (membership in list)
- $nin (not in list)
- $and (all conditions must match)
- $or (any condition must match)
- $not (negation of condition)


## Issue
This closes https://github.com/langchain-ai/langchain/issues/26379.


## Sample Usage
```python
import faiss
import asyncio
from langchain_community.vectorstores import FAISS
from langchain.schema import Document
from langchain_huggingface import HuggingFaceEmbeddings

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")
documents = [
    Document(page_content="Process customer refund request", metadata={"schema_type": "financial", "handler_type": "refund",}),
    Document(page_content="Update customer shipping address", metadata={"schema_type": "customer", "handler_type": "update",}),
    Document(page_content="Process payment transaction", metadata={"schema_type": "financial", "handler_type": "payment",}),
    Document(page_content="Handle customer complaint", metadata={"schema_type": "customer","handler_type": "complaint",}),
    Document(page_content="Process invoice payment", metadata={"schema_type": "financial","handler_type": "payment",})
]

async def search(vectorstore, query, schema_type, handler_type, k=2):
    schema_filter = {"schema_type": {"$eq": schema_type}}
    handler_filter = {"handler_type": {"$eq": handler_type}}
    combined_filter = {
        "$and": [
            schema_filter,
            handler_filter,
        ]
    }
    base_retriever = vectorstore.as_retriever(
        search_kwargs={"k":k, "filter":combined_filter}
    )
    return await base_retriever.ainvoke(query)

async def main():
    vectorstore = FAISS.from_texts(
        texts=[doc.page_content for doc in documents],
        embedding=embeddings,
        metadatas=[doc.metadata for doc in documents]
    )
    
    def printt(title, documents):
        print(title)
        if not documents:
            print("\tNo documents found.")
            return
        for doc in documents:
            print(f"\t{doc.page_content}. {doc.metadata}")

    printt("Documents:", documents)
    printt('\nquery="process payment", schema_type="financial", handler_type="payment":', await search(vectorstore, query="process payment", schema_type="financial", handler_type="payment", k=2))
    printt('\nquery="customer update", schema_type="customer", handler_type="update":', await search(vectorstore, query="customer update", schema_type="customer", handler_type="update", k=2))
    printt('\nquery="refund process", schema_type="financial", handler_type="refund":', await search(vectorstore, query="refund process", schema_type="financial", handler_type="refund", k=2))
    printt('\nquery="refund process", schema_type="financial", handler_type="foobar":', await search(vectorstore, query="refund process", schema_type="financial", handler_type="foobar", k=2))
    print()

if __name__ == "__main__":
    asyncio.run(main())
```

## Output
```
Documents:
	Process customer refund request. {'schema_type': 'financial', 'handler_type': 'refund'}
	Update customer shipping address. {'schema_type': 'customer', 'handler_type': 'update'}
	Process payment transaction. {'schema_type': 'financial', 'handler_type': 'payment'}
	Handle customer complaint. {'schema_type': 'customer', 'handler_type': 'complaint'}
	Process invoice payment. {'schema_type': 'financial', 'handler_type': 'payment'}

query="process payment", schema_type="financial", handler_type="payment":
	Process payment transaction. {'schema_type': 'financial', 'handler_type': 'payment'}
	Process invoice payment. {'schema_type': 'financial', 'handler_type': 'payment'}

query="customer update", schema_type="customer", handler_type="update":
	Update customer shipping address. {'schema_type': 'customer', 'handler_type': 'update'}

query="refund process", schema_type="financial", handler_type="refund":
	Process customer refund request. {'schema_type': 'financial', 'handler_type': 'refund'}

query="refund process", schema_type="financial", handler_type="foobar":
	No documents found.

```

---------

Co-authored-by: ruofan chen <ruofan.is.awesome@gmail.com>
Co-authored-by: RickyCowboy <like.wang@mail.utoronto.ca>
Co-authored-by: Shanni Li <tanya.li@mail.utoronto.ca>
Co-authored-by: RuofanChen03 <114096642+ruofanchen03@users.noreply.github.com>
Co-authored-by: Like Wang <102838708+likewang10067@users.noreply.github.com>
2024-12-11 17:52:22 -05:00
Erick Friis
b9dd4f2985 docs: box to package table (#28676) 2024-12-11 13:01:00 -08:00
like
3048a9a26d community: tongyi multimodal response format fix to support langchain (#28645)
Description: The multimodal (Tongyi) response format `{"message": {"role":
"assistant", "content": [{"text": "图像"}]}}` is not compatible with
LangChain.
Dependencies: No

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-10 21:13:26 +00:00
Bagatur
d0e662e43b community[patch]: Release 0.3.11 (#28658) 2024-12-10 20:51:13 +00:00
Bagatur
91227ad7fd langchain[patch]: Release 0.3.11 (#28657) 2024-12-10 12:28:14 -08:00
Bagatur
1fbd86a155 core[patch]: Release 0.3.24 (#28656) 2024-12-10 20:19:21 +00:00
Bagatur
e6a62d8422 core,langchain,community[patch]: allow langsmith 0.2 (#28598) 2024-12-10 18:50:58 +00:00
ccurme
bc4dc7f4b1 ollama[patch]: permit streaming for tool calls (#28654)
Resolves https://github.com/langchain-ai/langchain/issues/28543

Ollama recently
[released](https://github.com/ollama/ollama/releases/tag/v0.4.6) support
for streaming tool calls. Previously we would override the `stream`
parameter if tools were passed in.

Covered in standard tests here:
c1d348e95d/libs/standard-tests/langchain_tests/integration_tests/chat_models.py (L893-L897)

Before, the test generates one message chunk:
```python
[
    AIMessageChunk(
        content='',
        additional_kwargs={},
        response_metadata={
            'model': 'llama3.1',
            'created_at': '2024-12-10T17:49:04.468487Z',
            'done': True,
            'done_reason': 'stop',
            'total_duration': 525471208,
            'load_duration': 19701000,
            'prompt_eval_count': 170,
            'prompt_eval_duration': 31000000,
            'eval_count': 17,
            'eval_duration': 473000000,
            'message': Message(
                role='assistant',
                content='',
                images=None,
                tool_calls=[
                    ToolCall(
                        function=Function(name='magic_function', arguments={'input': 3})
                    )
                ]
            )
        },
        id='run-552bbe0f-8fb2-4105-ada1-fa38c1db444d',
        tool_calls=[
            {
                'name': 'magic_function',
                'args': {'input': 3},
                'id': 'b0a4dc07-7d7a-487b-bd7b-ad062c2363a2',
                'type': 'tool_call',
            },
        ],
        usage_metadata={
            'input_tokens': 170, 'output_tokens': 17, 'total_tokens': 187
        },
        tool_call_chunks=[
            {
                'name': 'magic_function',
                'args': '{"input": 3}',
                'id': 'b0a4dc07-7d7a-487b-bd7b-ad062c2363a2',
                'index': None,
                'type': 'tool_call_chunk',
            }
        ]
    )
]
```

After, it generates two (tool call in one, response metadata in
another):
```python
[
    AIMessageChunk(
        content='',
        additional_kwargs={},
        response_metadata={},
        id='run-9a3f0860-baa1-4bae-9562-13a61702de70',
        tool_calls=[
            {
                'name': 'magic_function',
                'args': {'input': 3},
                'id': '5bbaee2d-c335-4709-8d67-0783c74bd2e0',
                'type': 'tool_call',
            },
        ],
        tool_call_chunks=[
            {
                'name': 'magic_function',
                'args': '{"input": 3}',
                'id': '5bbaee2d-c335-4709-8d67-0783c74bd2e0',
                'index': None,
                'type': 'tool_call_chunk',
            },
        ],
    ),
    AIMessageChunk(
        content='',
        additional_kwargs={},
        response_metadata={
            'model': 'llama3.1',
            'created_at': '2024-12-10T17:46:43.278436Z',
            'done': True,
            'done_reason': 'stop',
            'total_duration': 514282750,
            'load_duration': 16894458,
            'prompt_eval_count': 170,
            'prompt_eval_duration': 31000000,
            'eval_count': 17,
            'eval_duration': 464000000,
            'message': Message(
                role='assistant', content='', images=None, tool_calls=None
            ),
        },
        id='run-9a3f0860-baa1-4bae-9562-13a61702de70',
        usage_metadata={
            'input_tokens': 170, 'output_tokens': 17, 'total_tokens': 187
        }
    ),
]
```
2024-12-10 12:54:37 -05:00
Tomaz Bratanic
704059466a Fix graph example documentation (#28653) 2024-12-10 17:46:50 +00:00
Johannes Mohren
c1d348e95d doc-loader: retain Azure Doc Intelligence API metadata in Document parser (#28382)
**Description**:
This PR modifies the doc_intelligence.py parser in the community package
to include all metadata returned by the Azure Doc Intelligence API in
the Document object. Previously, only the parsed content (markdown) was
retained, while other important metadata such as bounding boxes (bboxes)
for images and tables was discarded. These image bboxes are crucial for
supporting use cases like multi-modal RAG workflows when using Azure Doc
Intelligence.

The change ensures that all information returned by the Azure Doc
Intelligence API is preserved by setting the metadata attribute of the
Document object to the entire result returned by the API, rather than an
empty dictionary. This extends the parser's utility for complex use
cases without breaking existing functionality.
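
A minimal sketch of reading the richer metadata (not from the PR; endpoint, key, and file path are placeholders):

```python
from langchain_community.document_loaders import AzureAIDocumentIntelligenceLoader

loader = AzureAIDocumentIntelligenceLoader(
    api_endpoint="https://<resource>.cognitiveservices.azure.com/",
    api_key="<key>",
    file_path="report.pdf",
    api_model="prebuilt-layout",
)
docs = loader.load()
# After this change, metadata carries the full API result
# (e.g. bounding boxes for images and tables), not an empty dict.
print(docs[0].metadata.keys())
```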

**Issue**:
This change does not address a specific issue number, but it resolves a
critical limitation in supporting multimodal workflows when using the
LangChain wrapper for the Azure API.

**Dependencies**:
No additional dependencies are required for this change.

---------

Co-authored-by: jmohren <johannes.mohren@aol.de>
2024-12-10 11:22:58 -05:00
Alex Tonkonozhenko
0d20c314dd Confluence Loader: Fix CQL loading (#27620)
fix #12082

<!---
If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, ccurme, vbarda, hwchase17.
-->
2024-12-10 11:05:23 -05:00
Katarina Supe
aba2711e7f community: update Memgraph integration (#27017)
**Description:**
- **Memgraph** no longer relies on `Neo4jGraphStore` but **implements
`GraphStore`**, just like other graph databases.
- **Memgraph** no longer relies on `GraphQAChain`, but implements
`MemgraphQAChain`, just like other graph databases.
- The refresh schema procedure has been updated to try using `SHOW
SCHEMA INFO`. The fallback uses Cypher queries (a combination of schema
and Cypher) → **LangChain integration no longer relies on MAGE
library**.
- The **schema structure** has been reformatted. Regardless of the
procedures used to get schema, schema structure is the same.
- The `add_graph_documents()` method has been implemented. It transforms
`GraphDocument` into Cypher queries and creates a graph in Memgraph. It
implements the ability to use `baseEntityLabel` to improve speed
(`baseEntityLabel` has an index on the `id` property). It also
implements the ability to include sources by creating a `MENTIONS`
relationship to the source document.
- Jupyter Notebook for Memgraph has been updated.
- **Issue:** /
- **Dependencies:** /
- **Twitter handle:** supe_katarina (DX Engineer @ Memgraph)

Closes #25606
2024-12-10 10:57:21 -05:00
ccurme
5c6e2cbcda ollama[patch]: support structured output (#28629)
- Bump minimum version of `ollama` to 0.4.4 (which also addresses
https://github.com/langchain-ai/langchain/issues/28607).
- Support recently-released [structured
output](https://ollama.com/blog/structured-outputs) feature. This can be
accessed by calling `.with_structured_output` with
`method="json_schema"` (choice of name
[mirrors](https://python.langchain.com/api_reference/openai/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html#langchain_openai.chat_models.base.ChatOpenAI.with_structured_output)
what we have for OpenAI's structured output feature).

`ChatOllama` previously implemented `.with_structured_output` via the
[base
implementation](ec9b41431e/libs/core/langchain_core/language_models/chat_models.py (L1117)).
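
A minimal usage sketch (the model name is an example; the schema class is hypothetical):

```python
from pydantic import BaseModel
from langchain_ollama import ChatOllama

class Joke(BaseModel):
    setup: str
    punchline: str

llm = ChatOllama(model="llama3.1")
structured_llm = llm.with_structured_output(Joke, method="json_schema")
joke = structured_llm.invoke("Tell me a joke about cats")
```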
2024-12-10 10:36:00 -05:00
Bagatur
24292c4a31 core[patch]: Release 0.3.23 (#28648) 2024-12-10 10:01:16 +00:00
Bagatur
e24f86e55f core[patch]: return ToolMessage from tool (#28605) 2024-12-10 09:59:38 +00:00
hsm207
d0e95971f5 langchain-weaviate: Remove outdated docs (#28058)
Docs on how to do hybrid search with weaviate is covered
[here](https://python.langchain.com/docs/integrations/vectorstores/weaviate/)

@efriis

---------

Co-authored-by: pookam90 <pookam@microsoft.com>
Co-authored-by: Pooja Kamath <60406274+Pookam90@users.noreply.github.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-10 05:00:07 +00:00
Erick Friis
ef2f875dfb core: deprecate PipelinePromptTemplate (#28644) 2024-12-10 03:56:48 +00:00
TamagoTorisugi
0f0df2df60 fix: Set default search_type to 'similarity' in as_retriever method of AzureSearch (#28376)
**Description**
This PR updates the `as_retriever` method in the `AzureSearch` to ensure
that the `search_type` parameter defaults to 'similarity' when not
explicitly provided.

Previously, if the `search_type` was omitted, it did not default to any
specific value. So it was inherited from
`AzureSearchVectorStoreRetriever`, which defaults to 'hybrid'.

This change ensures that the intended default behavior aligns with the
expected usage.
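
A short sketch of the resulting behavior (assuming `vector_store` is an existing AzureSearch instance):

```python
retriever = vector_store.as_retriever()  # now defaults to search_type="similarity"
hybrid_retriever = vector_store.as_retriever(search_type="hybrid")  # explicit opt-in
```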

**Issue**
No specific issue was found related to this change.

**Dependencies**
No new dependencies are introduced with this change.

---------

Co-authored-by: prrao87 <prrao87@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-10 03:40:04 +00:00
Prashanth Rao
8c6eec5f25 community: KuzuGraph needs allow_dangerous_requests, add graph documents via LLMGraphTransformer (#27949)
- [x] **PR title**: "community: Kuzu - Add graph documents via
LLMGraphTransformer"
- This PR adds a new method `add_graph_documents` to use the
`GraphDocument`s extracted by `LLMGraphTransformer` and store in a Kùzu
graph backend.
- This allows users to transform unstructured text into a graph that
uses Kùzu as the graph store.

- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

---------

Co-authored-by: pookam90 <pookam@microsoft.com>
Co-authored-by: Pooja Kamath <60406274+Pookam90@users.noreply.github.com>
Co-authored-by: hsm207 <hsm207@users.noreply.github.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-10 03:15:28 +00:00
Pooja Kamath
9b7d49f7da docs: Adding Docs for new SQLServer Vector store package (#28173)
**Description:** Adding Documentation for new SQL Server Vector Store
Package.

Changed files:
- Added new vector store: docs\docs\integrations\vectorstores\sqlserver.ipynb
- FeatureTables.js: docs\src\theme\FeatureTables.js
- Microsoft.mdx: docs\docs\integrations\providers\microsoft.mdx

Detailed documentation on API -
https://python.langchain.com/api_reference/sqlserver/index.html

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-10 03:00:10 +00:00
Erick Friis
5afeb8b46c infra: merge queue allowed (#28641) 2024-12-09 17:11:15 -08:00
Filip Ratajczak
4e743b5427 Core: google docstring parsing fix (#28404)
- [x] **PR message**:
- **Description:** Added a solution for invalid parsing of Google-style
docstrings such as:
    Args:
        net_annual_income (float): The user's net annual income (in current
        year dollars).
- **Issue:** Previous code would return arg = "net_annual_income (float)",
which would cause an exception in
_validate_docstring_args_against_annotations
    - **Dependencies:** None


Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-10 00:27:25 +00:00
Arnav Priyadarshi
b78b2f7a28 community[fix]: Update Perplexity to pass parameters into API calls (#28421)

- **Description:** I realized the invocation parameters were not being
passed into `_generate`, so I added them in. I then realized that the
parameters contained some old fields designed for an older openai client,
which I removed. Parameters work fine now.
- **Issue:** Fixes #28229 
- **Dependencies:** No new dependencies.  
- **Twitter handle:** @arch_plane


Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-10 00:23:31 +00:00
Erick Friis
34ca31e467 docs: integration contrib typo (#28642) 2024-12-09 23:46:31 +00:00
Clément Jumel
cf6d1c0ae7 docs: add Linkup integration documentation (#28366)
## Description

First of all, thanks for the great framework that is LangChain!

At [Linkup](https://www.linkup.so/) we're working on an API to connect
LLMs and agents to the internet and our partner sources. We'd be super
excited to see our API integrated into LangChain! This essentially
consists of adding a LangChain retriever and tool, which is done in our
own [package](https://pypi.org/project/langchain-linkup/). Here we're
simply following the [integration
documentation](https://python.langchain.com/docs/contributing/how_to/integrations/)
and update the documentation of LangChain to mention the Linkup
integration.

We do have tests (both units & integration) in our [source
code](https://github.com/LinkupPlatform/langchain-linkup), and tried to
follow as close as possible the [integration
documentation](https://python.langchain.com/docs/contributing/how_to/integrations/)
which specifically requests to focus on documentation changes for an
integration PR, so I'm not adding tests here, even though the PR
checklist seems to suggest so. Feel free to correct me if I got this
wrong!

By the way, we would be thrilled by being mentioned in the list of
providers which have standalone packages
[here](https://langchain-git-fork-linkupplatform-cj-doc-langchain.vercel.app/docs/integrations/providers/),
is there something in particular for us to do for that? 🙂

## Twitter handle

Linkup_platform
2024-12-09 14:36:25 -08:00
Amir Sadeghi
2c49f587aa community[fix]: could not locate runnable browser (#28289)
Set open_browser to False to resolve the "could not locate runnable browser"
error that occurs when the default browser is None.


Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-09 21:05:52 +00:00
Martin Triska
75bc6bb191 community: [bugfix] fix source path for office files in O365 (#28260)
# What problem are we fixing?

Currently documents loaded using `O365BaseLoader` fetch source from
`file.web_url` (where `file` is `<class 'O365.drive.File'>`). This works
well for `.pdf` documents. Unfortunately office documents (`.xlsx`,
`.docx` ...) pass their `web_url` in following format:

`https://sharepoint_address/sites/path/to/library/root/Doc.aspx?sourcedoc=%XXXXXXXX-1111-1111-XXXX-XXXXXXXXXX%7D&file=filename.xlsx&action=default&mobileredirect=true`

This obfuscates the path to the file. This PR utilizes the parent
folder's path and the file name to reconstruct the actual location of the
file. Knowing the file's location can be crucial for some RAG
applications (the path to the file can carry information we don't want to
lose).

@vbarda Could you please look at this one? I'm @-mentioning you since
we've already closed some PRs together :-)

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-09 12:34:59 -08:00
Erick Friis
534b8f4364 standard-tests: release 0.3.7 (#28637) 2024-12-09 15:12:18 -05:00
Tomaz Bratanic
6815981578 Switch graphqa example in docs to langgraph (#28574)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-09 14:46:00 -05:00
Naka Masato
ce3b69aa05 community: add include_labels option to ConfluenceLoader (#28259)
## **Description:**

Enable `ConfluenceLoader` to include labels via the `include_labels` option
(`False` by default for backward compatibility); the labels are set in the
`Document` `metadata`, e.g. `{"labels": ["l1", "l2"]}`.
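
A minimal usage sketch (not from the PR; URL, credentials, and space key are placeholders):

```python
from langchain_community.document_loaders import ConfluenceLoader

loader = ConfluenceLoader(
    url="https://yoursite.atlassian.net/wiki",
    username="me@example.com",
    api_key="<api-key>",
    space_key="SPACE",
    include_labels=True,  # False by default
)
docs = loader.load()
print(docs[0].metadata.get("labels"))  # e.g. ["l1", "l2"]
```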

## Notes

The Confluence API supports fetching labels by providing `metadata.labels` in
the `expand` query parameter

All of the following functions support `expand` in the same way:
- confluence.get_page_by_id
- confluence.get_all_pages_by_label
- confluence.get_all_pages_from_space
- cql (internally using
[/api/content/search](https://developer.atlassian.com/cloud/confluence/rest/v1/api-group-content/#api-wiki-rest-api-content-search-get))

## **Issue:**

No issue related to this PR.

## **Dependencies:** 

No changes.

## **Twitter handle:** 

[@gymnstcs](https://x.com/gymnstcs)



---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-09 19:35:01 +00:00
Rajendra Kadam
242fee11be community[minor] Pebblo: Support for new Pinecone class PineconeVectorStore (#28253)
- **Description:** Support for new Pinecone class PineconeVectorStore in
PebbloRetrievalQA.
- **Issue:** NA
- **Dependencies:** NA
- **Tests:** -

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-09 19:33:54 +00:00
Pranav Ramesh Lohar
85114b4f3a docs: Update sql-query doc by fixing spelling mistake of chinhook.db to chinook.db (#28465)
Link (of doc with mistake):
https://python.langchain.com/v0.1/docs/use_cases/sql/quickstart/#:~:text=Now%2C-,Chinhook.db,-is%20in%20our

  - **Description:** spelling mistake in the how-to docs for sql-db
  - **Issue:** just a spelling mistake.
  - **Dependencies:** NA
2024-12-09 14:15:29 -05:00
nikitajoyn
9fcd203556 partners/mistralai: Fix KeyError in Vertex AI stream (#28624)
- **Description:** Streaming responses from a Mistral model using Vertex AI
raise a KeyError when trying to access the `choices` key, which the last chunk
doesn't have. The fix is to access the key safely using `get()`.
  - **Issue:** https://github.com/langchain-ai/langchain/issues/27886
  - **Dependencies:**
  - **Twitter handle:**
2024-12-09 14:14:58 -05:00
Huy Nguyen
bdb4cf7cc0 Fix typo in Custom Output Parser doc (#28617)
- [x] Fix typo in Custom Output Parser doc
2024-12-09 14:14:00 -05:00
ccurme
b476fdb54a docs: update readme (#28631) 2024-12-09 13:50:12 -05:00
maang-h
b64d846347 docs: Standardize MoonshotChat docstring (#28159)
- **Description:** Add docstring

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-09 18:46:25 +00:00
Erick Friis
4c70ffff01 standard-tests: sync/async vectorstore tests conditional (#28636)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-09 18:02:55 +00:00
ccurme
ffb5c1905a openai[patch]: release 0.2.12 (#28633) 2024-12-09 12:38:13 -05:00
ccurme
6e6061fe73 openai[patch]: bump minimum SDK version (#28632)
Resolves https://github.com/langchain-ai/langchain/issues/28625
2024-12-09 11:28:05 -05:00
Mohammad Mohtashim
ec9b41431e [Core]: Small Docstring Clarification for BaseTool (#28148)
- **Description:** `kwargs` were not being passed to `run` of the
`BaseTool`; this has been fixed
- **Issue:** #28114

---------

Co-authored-by: Stevan Kapicic <kapicic.ste1@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-09 06:10:19 +00:00
Erick Friis
cef21a0b49 cli: warning on app add (#28619)
instead of #28128
2024-12-09 06:07:14 +00:00
Ankit Dangi
90f162efb6 text-splitters: add pydocstyle linting (#28127)
As seen in #23188, turned on Google-style docstrings by enabling
`pydocstyle` linting in the `text-splitters` package. Each resulting
linting error was addressed differently: ignored, resolved, suppressed,
and missing docstrings were added.

Fixes one of the checklist items from #25154, similar to #25939 in
`core` package. Ran `make format`, `make lint` and `make test` from the
root of the package `text-splitters` to ensure no issues were found.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-09 06:01:03 +00:00
Erick Friis
b53f07bfb9 docs: more integration contrib (#28618) 2024-12-09 05:41:08 +00:00
WGNW_MG
eabe587787 community[patch]:Fix for get_openai_callback() return token_cost=0.0 when model is gpt-4o-11-20 (#28408)
- **Description:** Update MODEL_COST_PER_1K_TOKENS for the new gpt-4o-11-20.
- **Issue:** With the latest gpt-4o-11-20, the openai callback returns
token_cost=0.0
- **Dependencies:** None (just a simple dict fix.)
- **Twitter handle:** I Don't Use Twitter. 
- (However..., I have a YouTube channel. Could you upload this there, by
any chance?
https://www.youtube.com/@%EA%B2%9C%EC%B0%BD%EB%B6%80%EA%B3%A0%EB%AC%B8AI%EC%9E%90%EB%AC%B8%EC%84%BC%EC%84%B8)
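
A minimal sketch of checking the fix (the dated model name is an assumption for the gpt-4o-11-20 release):

```python
from langchain_community.callbacks import get_openai_callback
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-2024-11-20")
with get_openai_callback() as cb:
    llm.invoke("hello")
print(cb.total_cost)  # non-zero after the MODEL_COST_PER_1K_TOKENS fix
```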
2024-12-08 20:46:50 -08:00
Fahim Zaman
481c4bfaba core[patch]: Fixed trim functions, and added corresponding unit test for the solved issue (#28429)
- **Description:** 
- Trim functions were incorrectly deleting nodes with more than 1
outgoing/incoming edge, so an extra condition was added to check for
this directly. A unit test "test_trim_multi_edge" was written to test
this test case specifically.
- **Issue:** 
  - Fixes #28411 
  - Fixes https://github.com/langchain-ai/langgraph/issues/1676
- **Dependencies:** 
  - No changes were made to the dependencies

- [x] Unit tests were added to verify the changes.
- [x] Updated documentation where necessary.
- [x] Ran make format, make lint, and make test to ensure compliance
with project standards.

---------

Co-authored-by: Tasif Hussain <tasif006@gmail.com>
2024-12-08 20:45:28 -08:00
Inah Jeon
54fba7e520 docs: change upstage solar model descriptions (#28419)
- **Description:** We have launched the new **Solar Pro** model, and
the documentation has been updated to include its details and features.

2024-12-08 20:43:19 -08:00
funkyrailroad
079c7ea0fc docs: Fix typo in weaviate integration docs (#28425)
- [ ] "docs: Fix typo in weaviate integration docs"
2024-12-08 20:42:00 -08:00
Zapiron
e8508fb4c6 docs: Fixed mini typo in recommend and improve the phrasing (#28438)
Fixed a typo on the word "recommend" and generally improved the phrasing
2024-12-08 20:30:43 -08:00
Zapiron
220b33df7f docs: Fixed broken link in the warning message to @tool API Reference… (#28437)
Fixed the broken hyperlink in the warning of docstring section to the
correct `@tool` API reference
2024-12-08 20:29:08 -08:00
Zapiron
1fc4ac32f0 docs: Resolve incorrect import of AttributeInfo for self-query retriever section (#28446)
Most of the notebooks in the self-query retriever section imported
`AttributeInfo` from `query_constructor.base` instead of
`query_constructor.schema`, as found in the API reference
[here](https://python.langchain.com/api_reference/langchain/chains/langchain.chains.query_constructor.schema.AttributeInfo.html)

This PR resolves the wrong imports in most of the notebooks
2024-12-08 20:23:26 -08:00
Marco Perini
2354bb7bfa partners: 🕷️🦜 ScrapeGraph API Integration (#28559)
Hi Langchain team!

I'm the co-founder and maintainer at
[ScrapeGraphAI](https://scrapegraphai.com/).
By following the integration
[guide](https://python.langchain.com/docs/contributing/how_to/integrations/publish/)
on your site, I have created a new lib called
[langchain-scrapegraph](https://github.com/ScrapeGraphAI/langchain-scrapegraph).

With this PR I would like to integrate ScrapeGraph as a provider in
LangChain, adding the required documentation files.
Let me know if any changes are needed to properly integrate it,
both in the lib and in the documentation.

Thank you 🕷️🦜


---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-09 02:38:21 +00:00
Abhinav
317a38b83e community[minor]: Add support for model2vec embeddings (#28507)
This PR adds an embeddings integration for model2vec, the
`Model2vecEmbeddings` class.

- **Description**: [Model2Vec](https://github.com/MinishLab/model2vec)
lets you turn any sentence transformer into a really small static model
and makes running the model faster.
- **Issue**:
- **Dependencies**: model2vec
([pypi](https://pypi.org/project/model2vec/))
- **Twitter handle:**:

- [x] **Add tests and docs**: 
-
[Test](https://github.com/blacksmithop/langchain/blob/model2vec_embeddings/libs/community/langchain_community/embeddings/model2vec.py),
[docs](https://github.com/blacksmithop/langchain/blob/model2vec_embeddings/docs/docs/integrations/text_embedding/model2vec.ipynb)

- [x] **Lint and test**:

---------

Co-authored-by: Abhinav KM <abhinav.m@zerone-consulting.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-12-09 02:17:22 +00:00
Mateusz Szewczyk
fbf0704e48 docs: Update IBM documentation (#28503)
PR: Update IBM documentation
2024-12-08 12:40:29 -08:00
Mohammad Mohtashim
524ee6d9ac Invalid tool_choice being passed to ChatLiteLLM (#28198)
- **Description:** An invalid `tool_choice` was being passed to
`ChatLiteLLM.bind_tools` because its parent class's default value was
passed through by `with_structured_output`.
- **Issue:** #28176
2024-12-07 14:33:40 -05:00
Erick Friis
dd0085a9ff docs: standard tests to markdown, load templates from files (#28603) 2024-12-07 01:37:21 +00:00
Erick Friis
9b848491c8 docs: tool, retriever contributing docs (#28602) 2024-12-07 00:36:55 +00:00
Erick Friis
5e8553c31a standard-tests: retriever docstrings (#28596) 2024-12-07 00:32:19 +00:00
ccurme
d801c6ffc7 tests[patch]: nits (#28601) 2024-12-07 00:13:04 +00:00
Ikko Eltociear Ashimine
a32035d17d docs: update uptrain.ipynb (#28561)
evluate -> evaluate
2024-12-06 19:09:48 -05:00
Erick Friis
07c2ac765a community: release 0.3.10 (#28600) 2024-12-07 00:07:13 +00:00
Erick Friis
4a7dc6ec4c standard-tests: release 0.3.6 (#28599) 2024-12-07 00:05:04 +00:00
ccurme
80a88f8f04 tests[patch]: update API ref for chat models (#28594) 2024-12-06 19:00:14 -05:00
Erick Friis
0eb7ab65f1 multiple: fix xfailed signatures (#28597) 2024-12-06 15:39:47 -08:00
Erick Friis
b7c2029e84 standard-tests: root docstrings (#28595) 2024-12-06 15:14:52 -08:00
Erick Friis
925ca75ca5 docs: format (#28593) 2024-12-06 15:08:25 -08:00
Erick Friis
f943205ebf docs: dont document root init (#28592) 2024-12-06 15:07:53 -08:00
Erick Friis
9e2abcd152 standard-tests: show right classes in api docs (#28591) 2024-12-06 14:48:13 -08:00
Erick Friis
246c10a1cc standard-tests: private members and tools unit troubleshoot (#28590) 2024-12-06 13:52:58 -08:00
Erick Friis
1cedf401a7 docs: enable private docstring submembers sphinx (#28589) 2024-12-06 13:36:34 -08:00
Erick Friis
791d7e965e docs: enable private docstring modules sphinx (#28588) 2024-12-06 13:23:06 -08:00
Erick Friis
4f99952129 docs: enable private docstring members sphinx (#28586) 2024-12-06 13:19:52 -08:00
Bagatur
221ab03fe4 docs: readme/intro nits (#28581) 2024-12-06 12:52:15 -08:00
Erick Friis
e6663b69f3 langchain: release 0.3.10 (#28585) 2024-12-06 20:20:24 +00:00
Erick Friis
c38b845d7e core: fix path test (#28584) 2024-12-06 20:05:18 +00:00
ccurme
2c6bc74cb1 multiple: combine sync/async vector store standard test suites (#28580)
Breaking change in `langchain-tests`.
2024-12-06 14:55:06 -05:00
Bagatur
dda9f90047 core[patch]: Release 0.3.22 (#28582) 2024-12-06 19:36:53 +00:00
ccurme
15cbc36a23 docs[patch]: update contributor docs for integrations (#28576)
- Reformat tabs
- Add code snippets inline
- Add embeddings content
2024-12-06 13:33:24 -05:00
ccurme
f3dc142d3c cli[patch]: implement minimal starter vector store (#28577)
Basically the same as core's in-memory vector store. Removed some
optional methods.
2024-12-06 13:10:22 -05:00
Erick Friis
5277a021c1 docs: raw loader codeblock (#28548)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-06 09:26:34 -08:00
Erick Friis
18386c16c7 core, tests: more tolerant _aget_relevant_documents function (#28462) 2024-12-06 00:49:30 +00:00
Erick Friis
bc636ccc60 cli: release 0.0.35 (#28557) 2024-12-05 16:40:52 -08:00
Erick Friis
7ecf38f4fa cli: create specific files from template (#28556) 2024-12-06 00:32:47 +00:00
Erick Friis
a197e0ba3d docs: custom deprecated coloring, organize css a bit (#28555) 2024-12-05 23:57:54 +00:00
Erick Friis
478def8dcc core: deprecation doc removal (#28553)
![ScreenShot 2024-12-05 at 02 33
43PM@2x](https://github.com/user-attachments/assets/e1ce495b-90ca-41c7-9a65-b403a934675c)
2024-12-05 15:35:28 -08:00
cinqisap
482e8a7855 community: Add support for SAP HANA Vector hnsw index creation (#27884)
**Issue:** Added support for creating indexes in the SAP HANA Vector
engine.
 
**Changes**: 
1. Introduced a new function `create_hnsw_index` in `hanavector.py` that
enables the creation of indexes for SAP HANA Vector.
2. Added integration tests for the index creation function to ensure
functionality.
3. Updated the documentation to reflect the new index creation feature,
including examples and output from the notebook.
4. Fixed the operator issue in the `_process_filter_object` function and
changed the array argument to a placeholder in the similarity search SQL
statement.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-05 23:29:08 +00:00
blaufink
28f8d436f6 mistral: fix of issue #26029 (#28233)
- Description: Azure AI takes issue with the safe_mode parameter being
set to False instead of None. Therefore, this PR changes the default
value of safe_mode from False to None. This results in it being filtered
out before the request is sent, avoiding the extra-parameter issue
described in the issue below.

- Issue: #26029

- Dependencies: /

---------

Co-authored-by: blaufink <sebastian.brueckner@outlook.de>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-05 23:28:12 +00:00
Erick Friis
7a96ce1320 docs: deprecated styling (#28550) 2024-12-05 14:05:25 -08:00
ccurme
5519a1c1d3 docs[patch]: improve integration docs (#28547)
Alternative to https://github.com/langchain-ai/langchain/pull/28426

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-05 16:32:09 -05:00
Erick Friis
66f819c59e infra: run cli tests on test changes (#28542) 2024-12-05 13:09:25 -08:00
Erick Friis
3d5493593b docs: deprecated styling (#28546)
from

![image](https://github.com/user-attachments/assets/52565861-618b-407c-8bb4-f16dce3b5718)

to

![image](https://github.com/user-attachments/assets/fafeef00-6101-42cd-a8c6-ca15808d1e7c)
2024-12-05 12:41:41 -08:00
ccurme
ecdfc98ef6 tests[patch]: run standard tests for embeddings and populate embeddings API ref (#28545)
plus minor updates to chat models and vector store API refs
2024-12-05 19:39:03 +00:00
dwelch-spike
1581857e3d docs: add Aerospike to providers list (#28066)
**Description:** Adds Aerospike to the list of langchain providers and
points users to documentation for the vector store and Python SDK.


---------

Co-authored-by: Erick Friis <erickfriis@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-05 17:29:00 +00:00
WEIQ-beepbeep
1e285cb5f3 docs: Updated incorrect type used for the multiply_by_max function (#28042)
In the `multiply_by_max()` tool, `a` is a scale factor but it is
annotated with a string type.
2024-12-05 09:15:59 -08:00
ccurme
b8e861a63b openai[patch]: add standard tests for embeddings (#28540) 2024-12-05 17:00:27 +00:00
ZhangShenao
d26555c682 [VectorStore] Improvement: Improve chroma vector store (#28524)
- Complete unit test
- Fix spelling error
2024-12-05 11:58:32 -05:00
John
7d44316d92 docs: Add link for how to install extras (#28537)
**Description:** Add link for how to install extras
2024-12-05 11:57:51 -05:00
ccurme
8f9b3b7498 chroma[patch]: fix bug (#28538)
Fix bug introduced in
https://github.com/langchain-ai/langchain/pull/27995

If all document IDs are `""`, the chroma SDK will raise
```
DuplicateIDError: Expected IDs to be unique
```

Caught by [docs
tests](https://github.com/langchain-ai/langchain/actions/runs/12180395579/job/33974633950),
but added a test to langchain-chroma as well.
2024-12-05 15:37:19 +00:00
Erick Friis
ecff9a01e4 cli: release 0.0.34 (#28525) 2024-12-05 15:35:49 +00:00
ccurme
d9e42a1517 langchain[patch]: fix deprecation warning (#28535) 2024-12-05 14:49:10 +00:00
Erick Friis
0f539f0246 standard-tests: release 0.3.5 (#28526) 2024-12-05 00:41:07 -08:00
Erick Friis
43c35d19d4 cli: standard tests in cli, test that they run, skip vectorstore tests (#28521) 2024-12-05 00:38:32 -08:00
Erick Friis
c5acedddc2 anthropic: timeout in tests (10s) (#28488) 2024-12-04 16:03:38 -08:00
ccurme
f459754470 tests[patch]: populate API reference for vector stores (#28520) 2024-12-05 00:02:31 +00:00
Erick Friis
2b360d6a2f infra: scheduled test fix (#28519) 2024-12-04 15:20:56 -08:00
ccurme
8bc2c912b8 chroma[patch]: (nit) simplify test (#28517)
Use `self.get_embeddings` on test class instead of importing embeddings
separately.
2024-12-04 20:22:55 +00:00
Tomaz Bratanic
a0130148bc switch graph semantic layer docs to langgraph (#28513)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-04 14:30:12 -05:00
ccurme
eec55c2550 chroma[patch]: add get_by_ids and fix bug (#28516)
- Run standard integration tests in Chroma
- Add `get_by_ids` method
- Fix bug in `add_texts`: if a list of `ids` is passed but any of them
are None, Chroma will raise an exception. Here we assign a uuid.
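
A minimal sketch of the new behavior (not from the PR; a fake embedding model stands in for a real one):

```python
from langchain_chroma import Chroma
from langchain_core.embeddings import FakeEmbeddings

store = Chroma(embedding_function=FakeEmbeddings(size=32))
store.add_texts(["alpha", "beta"], ids=["id-1", None])  # None gets a generated uuid
print(store.get_by_ids(["id-1"]))  # [Document(page_content="alpha", ...)]
```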
2024-12-04 14:00:36 -05:00
Erick Friis
12d74d5bef docs: single security doc (#28515) 2024-12-04 18:15:34 +00:00
Erick Friis
e6a08355a3 docs: more api ref links, add linting step to prevent more (#28495) 2024-12-04 04:19:42 +00:00
wlleiiwang
6151ea78d5 community: implement _select_relevance_score_fn for tencent vectordb (#28036)
Implement `_select_relevance_score_fn` for Tencent VectorDB and fix the use
of external embeddings with Tencent VectorDB.

Co-authored-by: wlleiiwang <wlleiiwang@tencent.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-04 03:03:00 +00:00
Asi Greenholts
d34bf78f3b community: BM25Retriever preservation of document id (#27019)
Currently this retriever discards document ids

---------

Co-authored-by: asi-cider <88270351+asi-cider@users.noreply.github.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-04 00:36:00 +00:00
Erick Friis
a009249369 infra: release rely on local built in testing (#28492) 2024-12-03 16:35:38 -08:00
peterdhp
bc5ec63d67 community : allow using apikey for PubMedAPIWrapper (#27246)
**Description**: 

> Without an API key, any site (IP address) posting more than 3 requests
per second to the E-utilities will receive an error message. By
including an API key, a site can post up to 10 requests per second by
default.

quoted from A General Introduction to the E-utilities, NCBI:
https://www.ncbi.nlm.nih.gov/books/NBK25497/

I have simply added an api_key parameter to the PubMedAPIWrapper that can
be used to increase the number of requests per second from 3 to 10.
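
A minimal usage sketch (not from the PR; the key is a placeholder and the query is an example):

```python
from langchain_community.utilities.pubmed import PubMedAPIWrapper

# With an NCBI API key the allowed rate rises from 3 to 10 requests/second.
pubmed = PubMedAPIWrapper(api_key="YOUR_NCBI_API_KEY")
print(pubmed.run("CRISPR gene editing"))
```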

**Twitter handle** : @KORmaori

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-03 16:21:22 -08:00
Eric Pinzur
eff8a54756 langchain_chroma: added document.id support (#27995)
Description:
* Added internal `Document.id` support to Chroma VectorStore

Dependencies:
* https://github.com/langchain-ai/langchain/pull/27968 should be merged
first and this PR should be re-based on top of those changes.

Tests:
* Modified/Added tests for `Document.id` support. All tests are passing.


Note: I am not a member of the Chroma team.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-04 00:04:27 +00:00
William Smith
15e7353168 langchain_community: updated query constructor for Databricks Vector Search due to LangChainDeprecationWarning: filters was deprecated since langchain-community 0.2.11 and will be removed in 0.3. Please use filter instead. (#27974)
- **Description:** Updated the kwargs for the structured query from
`filters` to `filter` due to the deprecation of `filters` for Databricks
Vector Search. Also changed the error messages, as the allowed operators
and comparators are different, which can cause issues with functions such
as `get_query_constructor_prompt()`.

- **Issue:** Fixes the Key Error for filters due to deprecation in favor
for 'filter':

LangChainDeprecationWarning: DatabricksVectorSearch received a key
`filters` in search_kwargs. `filters` was deprecated since
langchain-community 0.2.11 and will be removed in 0.3. Please use
`filter` instead.
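
A minimal sketch of the renamed search kwarg:

```python
# Before (triggers the deprecation warning and, per this PR, a KeyError):
search_kwargs = {"filters": {"genre": "drama"}}

# After (the key Databricks Vector Search expects going forward):
search_kwargs = {"filter": {"genre": "drama"}}
```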

- **Dependencies:** N/A
- **Twitter handle:** N/A

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-03 16:03:53 -08:00
miri-bar
6e607bb237 docs: langchain-ai21 update ai21 docs (#28076)
Update docs to match latest langchain-ai21 release.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-03 16:01:36 -08:00
Jan Heimes
ef365543cb community: add Needle retriever and document loader integration (#28157)
- [x] **PR title**: "community: add Needle retriever and document loader
integration"
- Where "package" is whichever of langchain, community, core, etc. is
being modified. Use "docs: ..." for purely docs changes, "infra: ..."
for CI changes.
  - Example: "community: add foobar LLM"

- [x] **PR message**: ***Delete this entire checklist*** and replace
with
- **Description:** This PR adds a new integration for Needle, which
includes:
- **NeedleRetriever**: A retriever for fetching documents from Needle
collections.
- **NeedleLoader**: A document loader for managing and loading documents
into Needle collections.
      - Example notebooks demonstrating usage have been added in:
        - `docs/docs/integrations/retrievers/needle.ipynb`
        - `docs/docs/integrations/document_loaders/needle.ipynb`.
- **Dependencies:** The `needle-python` package is required as an
external dependency for accessing Needle's API. It has been added to the
extended testing dependencies list.
- **Twitter handle:** Feel free to mention me if this PR gets announced:
[needlexai](https://x.com/NeedlexAI).

- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. Unit tests have been added for both `NeedleRetriever` and
`NeedleLoader` in `libs/community/tests/unit_tests`. These tests mock
API calls to avoid relying on network access.
2. Example notebooks have been added to `docs/docs/integrations/`,
showcasing both retriever and loader functionality.

- [x] **Lint and test**: Run `make format`, `make lint`, and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
  - `make format`: Passed
  - `make lint`: Passed
- `make test`: Passed (requires `needle-python` to be installed locally;
this package is not added to LangChain dependencies).

Additional guidelines:
- [x] Optional dependencies are imported only within functions.
- [x] No dependencies have been added to pyproject.toml files except for
those required for unit tests.
- [x] The PR does not touch more than one package.
- [x] Changes are fully backwards compatible.
- [x] Community additions are not re-imported into LangChain core.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-03 22:06:25 +00:00
prakashshan50
b0a83071df Update graph_constructing.ipynb (#28489)
Used llm_transformer_tuple instance to create graph documents and
assigned to graph_documents_filtered variable.

2024-12-03 16:16:55 -05:00
ccurme
ab831ce05c tests[patch]: populate API reference for chat models (#28487)
Populate API reference for test class properties and test methods for
chat models.

Also:
- Make `standard_chat_model_params` private.
- `pytest.skip` some tests that were previously passed if features are
not supported.
2024-12-03 15:24:54 -05:00
Erick Friis
50ddf13692 infra: configurable scheduled tests (#28486) 2024-12-03 12:06:29 -08:00
Erick Friis
a220ee56cd infra: add 20min timeout to ci steps (#28483) 2024-12-03 10:35:57 -08:00
Erick Friis
c74f34cb41 pinecone: release 0.2.1 (version sequence) (#28485) 2024-12-03 10:22:16 -08:00
Audrey Sage Lorberfeld
926e452f44 partners: update version header for Pinecone integration (#28481)
Just need to update the version header used with Pinecone in a
recently-merged method (from [this
PR](https://github.com/langchain-ai/langchain/pull/28320/files#r1867820929)).

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-03 18:08:56 +00:00
Erick Friis
7315360907 openai: dont populate logit_bias if None (#28482) 2024-12-03 17:54:53 +00:00
Erick Friis
ff675c11f6 partners/pinecone: release 0.2.2 (#28466) 2024-12-03 06:49:35 +00:00
Audrey Sage Lorberfeld
6b7e93d4c7 pinecone: update pinecone client (#28320)
This PR updates the Pinecone client to `5.4.0`, as well as its
dependencies (`pinecone-plugin-inference` and
`pinecone-plugin-interface`).

Note: `pinecone-client` is now simply called `pinecone`.

**Question for reviewer(s):** should this PR also update the `pinecone`
dep in [the root dir's `poetry.lock`
file](https://github.com/langchain-ai/langchain/blob/master/poetry.lock#L6729)?
Was unsure. (I don't believe so b/c it seems pinned to a lower version
likely based on 3rd-party deps (e.g. Unstructured).)

--
TW: @audrey_sage_


---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
  - https://app.asana.com/0/0/1208693659122374
2024-12-02 22:47:09 -08:00
Erick Friis
000be1f32c tests: init retriever standard tests (#28459) 2024-12-02 23:36:09 +00:00
Erick Friis
42d40d694b partners/openai: release 0.2.11 (#28461) 2024-12-02 23:35:18 +00:00
Erick Friis
9f04416768 openai: set logit_bias to none instead of empty dict by default (#28460) 2024-12-02 15:30:32 -08:00
William FH
ecee41ab72 fix: Handle response metadata in merge_messages_runs (#28453) 2024-12-02 13:56:23 -08:00
lucasiscovici
60021e54b5 community: Add the additional kwarg 'context' for openai (#28351)
- **Description:**
Add the additional kwarg `context` for OpenAI into the
`convert_dict_to_message` and `convert_message_to_dict` functions.
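
A hedged sketch of a message dict carrying the extra `context` field; the public import path shown is an assumption (the PR touches the community conversion helpers):

```python
from langchain_community.adapters.openai import convert_dict_to_message

# The extra "context" key is now preserved through the conversion:
msg = convert_dict_to_message(
    {"role": "assistant", "content": "Answer...", "context": {"citations": []}}
)
```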
2024-12-02 16:43:30 -05:00
ccurme
28487597b2 ollama[patch]: release 0.2.1 (#28458)
We inadvertently skipped 0.2.1, so the release pipeline
[failed](https://github.com/langchain-ai/langchain/actions/runs/12126964367/job/33810204551).
2024-12-02 21:17:51 +00:00
ccurme
88d6d02b59 ollama[patch]: release 0.2.2 (#28456) 2024-12-02 14:57:30 -05:00
Ülgen Sarıkavak
c953f93c54 infra: Update Poetry version, to current latest (1.8.4) (#28194)
Update all Poetry versions to the current latest, 1.8.4 .

I was checking how lock files are managed and found that, even though the
files are generated and updated with the current latest version of Poetry,
the version used in CI and in the Dockerfile was outdated.

*
https://github.com/langchain-ai/langchain/pull/28061/files#diff-e00422d37a73d07c174e7838ad7c30f642d06305aff8f9d71e1e84c6897efbffL1
*
https://github.com/langchain-ai/langchain/pull/28070/files#diff-55267c883e58892916d5316bc029725fdeeba5a77e2557cf7667793823d9d9c6L1
*
https://github.com/langchain-ai/langchain/pull/27991/files#diff-9f96b8fd39133c3f1d737e013c9042b065b42ae04b3da76902304f30cec136d8R1

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-02 19:01:13 +00:00
Prithvi Kannan
e5b4f9ad75 docs: Add ChatDatabricks to llm models (#28398)
Add `ChatDatabricks` to the list of LLM models options.

---------

Signed-off-by: Prithvi Kannan <prithvi.kannan@databricks.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-02 13:19:30 -05:00
Riccardo Cocetta
58d2bfe310 docs: fixed a variable name in embeddings section (#28434)
This is a simple change for a variable name in the Embeddings section of
this document:
https://python.langchain.com/docs/tutorials/retrievers/#embeddings .

The variable name `embeddings_model` seems to be wrong and doesn't match
what follows in the template: `embeddings`.
2024-12-02 12:29:06 -05:00
chistokir
e294e7159a Update mistralai.ipynb (#28440)
Just fixing a broken link

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-12-02 17:05:28 +00:00
Bagatur
47433485e7 mistral[patch]: Release 0.2.3 (#28452) 2024-12-02 08:26:28 -08:00
Bagatur
49914e959a community[patch]: Release 0.3.9 (#28451) 2024-12-02 16:23:37 +00:00
ccurme
c2f1d022a2 mistral[patch]: ensure tool call IDs in tool messages are correctly formatted (#28422)
Fixes tests for cross-provider compatibility:
https://github.com/langchain-ai/langchain/actions/runs/12085358877/job/33702420504#step:10:376
2024-11-29 13:56:06 +00:00
Alex Thomas
2813e86407 docs: Adds the langchain-neo4j package to the API docs (#28386)
This PR adds the `langchain-neo4j` package to the `libs/packages.yml` so
the API docs can be built.
2024-11-27 12:41:12 -08:00
Bagatur
b7e10bb199 langchain[patch]: Release 0.3.9 (#28399) 2024-11-27 20:06:11 +00:00
ccurme
a8b21afc08 qdrant[patch]: run python 3.13 in CI (#28394) 2024-11-27 12:22:17 -05:00
ccurme
ee6fc3f3f6 nomic[patch]: run python 3.13 in CI (#28393) 2024-11-27 17:08:15 +00:00
Massimiliano Pronesti
83586661d6 partners[chroma]: add retrieval of embedding vectors (#28290)
This PR adds an additional method to `Chroma` to retrieve the embedding
vectors, besides the most relevant Documents. This is sometimes of use
when you need to run a postprocessing algorithm on the retrieved results
based on the vectors, which has been the case for me lately.

Example issue (discussion) requesting this change:
https://github.com/langchain-ai/langchain/discussions/20383

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-11-27 16:34:02 +00:00
ccurme
733a6ad328 mistral[patch]: run python 3.13 in CI (#28392) 2024-11-27 11:29:04 -05:00
ccurme
b9bf7fd797 couchbase[patch]: run python 3.13 in CI (#28391) 2024-11-27 11:28:21 -05:00
Greg Hinch
5141f25a20 community[patch]: support numpy2 (#28184)
Follows on from #27991, updates the langchain-community package to
support numpy 2 versions

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-11-27 11:10:58 -05:00
LuisMSotamba
0901f11b0f community: add truncation params when an openai assistant's run is created (#28158)
**Description:** When an OpenAI assistant is invoked, it creates a run
by default, allowing users to set only a few request fields. The
truncation strategy is set to auto, which includes previous messages in
the thread along with the current question until the context length is
reached. This causes token usage to grow incrementally:
consumed_tokens = previous_consumed_tokens + current_consumed_tokens.

This PR adds support for user-defined truncation strategies, giving
better control over token consumption.

**Issue:** High token consumption.
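
A hedged sketch of what a user-defined truncation strategy could look like; the payload shape follows OpenAI's runs API, and the exact plumbing through the assistant runnable may differ:

```python
from langchain_community.agents.openai_assistant import OpenAIAssistantV2Runnable

assistant = OpenAIAssistantV2Runnable(assistant_id="asst_...", as_agent=True)
output = assistant.invoke(
    {
        "content": "What is 2 + 2?",
        # Only keep the last 3 messages of the thread in the context window.
        "truncation_strategy": {"type": "last_messages", "last_messages": 3},
    }
)
```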
2024-11-27 10:53:53 -05:00
Pratool Bharti
c09000f20e Building RAG agents locally using open source LLMs on Intel CPU (#28302)
**Description:** Added a cookbook that showcases how to build a RAG agent
pipeline locally using open-source LLM and embedding models on an Intel
Xeon CPU. It uses the Llama 3.1:8B model from Ollama for the LLM and
nomic-embed-text-v1.5 from NomicEmbeddings for embeddings. The whole
experiment was developed and tested on an Intel 4th Gen Xeon Scalable CPU.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-11-27 15:40:09 +00:00
TheDannyG
607c60a594 partners/ollama: fix tool calling with nested schemas (#28225)
## Description

This PR addresses the following:

**Fixes Issue #25343:**
- Adds additional logic to parse shallowly nested JSON-encoded strings
in tool call arguments, allowing for proper parsing of responses like
that of Llama3.1 and 3.2 with nested schemas.
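
An illustration of the parsing idea (not the actual patch): tool-call arguments that arrive as JSON-encoded strings one level deep get decoded in place:

```python
import json

def parse_nested_json_args(args: dict) -> dict:
    parsed = {}
    for key, value in args.items():
        if isinstance(value, str):
            try:
                parsed[key] = json.loads(value)  # decode a JSON-encoded string
            except json.JSONDecodeError:
                parsed[key] = value  # plain string, leave as-is
        else:
            parsed[key] = value
    return parsed

print(parse_nested_json_args({"payload": '{"city": "Toronto"}', "units": "C"}))
# -> {'payload': {'city': 'Toronto'}, 'units': 'C'}
```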
 
**Adds Integration Test for Fix:**
- Adds a Ollama specific integration test to ensure the issue is
resolved and to prevent regressions in the future.

**Fixes Failing Integration Tests:**
- Fixes failing integration tests (even prior to these changes) caused by
the `llama3-groq-tool-use` model. Previously, the tests
`test_structured_output_async` and
`test_structured_output_optional_param` failed because the model did not
issue a tool call in the response. Resolved by switching to
`llama3.1`.

## Issue
Fixes #25343.

## Dependencies
No dependencies.

____

Done in collaboration with @ishaan-upadhyay @mirajismail @ZackSteine.
2024-11-27 10:32:02 -05:00
ccurme
bb83abd037 community[patch]: remove sqlalchemy cap (#28389) 2024-11-27 10:20:36 -05:00
ccurme
51e98a5548 docs[patch]: fix typo in embeddings tab (#28388) 2024-11-27 10:05:06 -05:00
ccurme
42b8ad067d chroma[patch]: test python 3.13 in CI (#28387) 2024-11-27 15:02:40 +00:00
Kunal Pathak
85b8cecb6f docs: fix typo in embedding vectors documentation (#28378)
**Description:** Fixed a grammatical error in the documentation section
on embedding vectors. Replaced "Embedding vectors can be comparing" with
"Embedding vectors can be compared."

**Issue:** N/A (This is a minor documentation fix with no linked issue.)

**Dependencies:** None.
2024-11-27 09:52:40 -05:00
William FH
585da22752 Init embeddings (#28370) 2024-11-27 08:25:10 +00:00
Bagatur
ffe7bd4832 langchain[patch]: init_chat_model provider in model string (#28367)
```python
llm = init_chat_model("openai:gpt-4o")
```
2024-11-27 00:20:25 -08:00
ccurme
8adc4a5bcc langchain[patch]: update deprecation message for agent classes and constructors (#28369) 2024-11-26 16:07:13 -05:00
Kiril Buga
ec205fcee0 Updated docs for the BM25 preprocessing function (#28101)
- [x] **PR title**: "docs: add explanation for preprocessing function"


- [x] **PR message**: 
- **Description:** Extending the BM25 description and demonstrating the
preprocessing function
    - **Dependencies:** nltk
    - **Twitter handle:** @kirilbuga

@efriis
@baskaryan
@vbarda
@ccurme

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-11-26 14:59:15 -05:00
Mohammad Mohtashim
06fafc6651 Community: Marqo Index Setting GET Request Updated according to 2.x API version while keeping backward compatibility with 1.5.x (#28342)
- **Description:** `add_texts` was using the Marqo client's `get_setting`
according to the 1.5.x API version. This PR updates `add_texts` to account
for the updated response payload in 2.x and later while maintaining
backward compatibility. I have also verified this was the only place where
the Marqo client was not accounting for the updated API version.
  - **Issue:** #28323

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-11-26 18:26:56 +00:00
willtai
7d95a10ada langchain: Fix Neo4jVector vector store reference from partner package for self query (#28292)
_This should only be merged once neo4j is included under libs/partners._

# **Description:**

Neo4jVector from langchain-community is being moved to langchain-neo4j:
[see
link](https://github.com/langchain-ai/langchain-neo4j/blob/main/libs/neo4j/langchain_neo4j/vectorstores/neo4j_vector.py#L436).

To solve the issue below, this PR adds an attempt to import
`Neo4jVector` from the partner package `langchain-neo4j`, similarly to
the other partner packages.
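
A sketch of the fallback import pattern described above:

```python
try:
    from langchain_neo4j import Neo4jVector  # preferred partner package
except ImportError:
    from langchain_community.vectorstores import Neo4jVector  # legacy fallback
```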

# **Issue:**
When initializing `SelfQueryRetriever`, the following error is raised:

```
ValueError: Self query retriever with Vector Store type <class 'langchain_neo4j.vectorstores.neo4j_vector.Neo4jVector'> not supported.
```

[See related
issue](https://github.com/langchain-ai/langchain/issues/19748).

# **Dependencies:**
- langchain-neo4j
2024-11-26 13:21:04 -05:00
ccurme
a1c90794e1 ollama[patch]: bump to 0.4.1 in lock file (#28365) 2024-11-26 18:19:31 +00:00
ccurme
74d9d2cba1 ollama[patch]: support ollama 0.4 (#28364)
v0.4 of the Python SDK is already installed via the lock file in CI, but
our current implementation is not compatible with it.

This also addresses an issue introduced in
https://github.com/langchain-ai/langchain/pull/28299. @RyanMagnuson
would you mind explaining the motivation for that change? From what I
can tell the Ollama SDK [does not support
kwargs](6c44bb2729/ollama/_client.py (L286)).
Previously, unsupported kwargs were ignored, but they currently raise
`TypeError`.

Some of LangChain's standard test suite expects `tool_choice` to be
supported, so here we catch it in `bind_tools` so it is ignored and not
passed through to the client.
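
A hypothetical, simplified sketch of the guard described above (the real implementation lives in `ChatOllama.bind_tools`):

```python
from typing import Any, Optional

class ChatOllamaSketch:
    def bind_tools(
        self, tools: list, *, tool_choice: Optional[Any] = None, **kwargs: Any
    ) -> dict:
        # tool_choice is accepted for cross-provider API compatibility but is
        # deliberately not forwarded: the Ollama SDK raises TypeError on
        # unknown kwargs.
        return {"tools": tools, **kwargs}  # placeholder for the real binding
```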
2024-11-26 12:45:59 -05:00
Bagatur
e9c16552fa openai[patch]: bump core dep (#28361) 2024-11-26 08:37:05 -08:00
Bagatur
e7dc26aefb openai[patch]: Release 0.2.10 (#28360) 2024-11-26 08:30:29 -08:00
ccurme
42b18824c2 openai[patch]: use max_completion_tokens in place of max_tokens (#26917)
`max_tokens` is deprecated:
https://platform.openai.com/docs/api-reference/chat/create#chat-create-max_tokens
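
A hedged sketch of the request-payload change (helper and field names are illustrative):

```python
def build_payload(messages: list, max_tokens: int | None = None) -> dict:
    payload = {"model": "gpt-4o", "messages": messages}
    if max_tokens is not None:
        payload["max_completion_tokens"] = max_tokens  # was: "max_tokens"
    return payload

print(build_payload([{"role": "user", "content": "hi"}], max_tokens=100))
```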

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-11-26 16:30:19 +00:00
Greg Hinch
869c8f5879 langchain[patch]: support numpy 2 (#28183)
Follows on from #27991, updates the langchain package to support numpy 2
versions

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-11-26 11:20:02 -05:00
ccurme
7b9a0d9ed8 docs: update tutorials (#28219) 2024-11-26 10:43:12 -05:00
ccurme
a97c53e7c2 docs[patch]: fix broken anchor link (#28358) 2024-11-26 10:12:44 -05:00
Richard Hao
c161f7d46f docs(create_sql_agent): fix reStructured Text Markup (#28356)
- **Description:** Lines of code must be indented beneath `..
code-block::` for proper formatting.
https://devguide.python.org/documentation/markup/#showing-code-examples
- **Issue:** The example code block on the `create_sql_agent` document
page is not properly rendered.

https://python.langchain.com/api_reference/community/agent_toolkits/langchain_community.agent_toolkits.sql.base.create_sql_agent.html#langchain_community.agent_toolkits.sql.base.create_sql_agent

<img width="933" alt="image"
src="https://github.com/user-attachments/assets/d764bcad-e412-408b-ab0b-9a78a11188ee">
2024-11-26 09:52:16 -05:00
Mohammad Mohtashim
195ae7baa3 Community: Adding citations in AIMessage for ChatPerplexity (#28321)
**Description**: Adding Citation in response payload of ChatPerplexity
**Issue**: #28108
2024-11-26 09:45:47 -05:00
Ikko Eltociear Ashimine
aa2c17b56c docs: update azure_openai_whisper_parser.ipynb (#28327)
conjuction -> conjunction
2024-11-25 15:59:40 -05:00
ccurme
a5374952f8 community[patch]: fix import in test (#28339)
Library name was updated after
https://github.com/langchain-ai/langchain/pull/27879 branched off
master.
2024-11-25 19:28:01 +00:00
Alex Thomas
5867f25ff3 community[patch]: Neo4j community deprecation (#28130)
Adds deprecation notices for Neo4j components moving to the
`langchain_neo4j` partner package.

- Adds deprecation warnings to all Neo4j-related classes and functions
that have been migrated to the new `langchain_neo4j` partner package
- Updates documentation to reference the new `langchain_neo4j` package
instead of `langchain_community`
2024-11-25 10:34:22 -08:00
Yan
c60695a1c7 community: fixed critical bugs at Writer provider (#27879) 2024-11-25 12:03:37 -05:00
Yelin Zhang
6ed2d387bb docs: fix GOOGLE_API_KEY typo (#28322)
fix small GOOGLE_API_KEY markdown formatting typo
2024-11-25 09:45:22 -05:00
ccurme
a83357dc5a community[patch]: release 0.3.8 (#28316) 2024-11-23 08:21:21 -05:00
ccurme
82bb0cdfff langchain[patch]: release 0.3.8 (#28315) 2024-11-23 13:02:10 +00:00
ccurme
f5f1149257 core[patch]: release 0.3.21 (#28314) 2024-11-23 12:46:56 +00:00
Erick Friis
7170a4e3e1 docs: standard test api link (#28309) 2024-11-23 04:02:56 +00:00
Eugene Yurtsev
563587e14f langchain[patch]: Compat with pydantic 2.10 (#28307)
pydantic compat 2.10 for langchain
2024-11-23 03:21:27 +00:00
Eugene Yurtsev
a813d11c14 core[patch]: Compat pydantic 2.10 (#28308)
pydantic 2.10 compat for langchain-core
2024-11-22 21:44:55 -05:00
ZhangShenao
ed84d48eef [Doc] Improvement: fix import statement for qdrant (#28286)
- fix import statement for qdrant
- issue: https://github.com/langchain-ai/langchain/issues/28012

2024-11-22 21:43:01 -05:00
Erick Friis
a3296479a0 docs: integration asyncio mode (#28306) 2024-11-23 02:18:53 +00:00
Erick Friis
39fd0fd196 infra: more rst (#28305) 2024-11-22 17:50:42 -08:00
ccurme
25a636c597 langchain[patch]: update deprecation message for MapReduceChain (#28304)
Link migration guide first.
2024-11-23 00:47:52 +00:00
Erick Friis
242e9fc865 infra: install standard tests in docs build (#28303) 2024-11-22 15:49:10 -08:00
ccurme
203d20caa5 community[patch]: fix errors introduced by pydantic 2.10 (#28297) 2024-11-22 17:50:13 -05:00
Erick Friis
aa7fa80e1e partners/ollama: release 0.2.2rc1 (#28300) 2024-11-22 22:25:05 +00:00
Erick Friis
7277794a59 ollama: include kwargs in requests (#28299)
courtesy of @ryanmagnuson
2024-11-22 14:15:42 -08:00
Pat Patterson
2ee37a1c7b community: list valid values for LanceDB constructor's mode argument (#28296)
**Description:**

Currently, the docstring for `LanceDB.__init__()` provides the default
value for `mode`, but not the list of valid values. This PR adds that
list to the docstring.

**Issue:**

N/A

**Dependencies:**

N/A

**Twitter handle:**

`@metadaddy`

2024-11-22 15:40:06 -05:00
ccurme
697dda5052 core[patch]: release 0.3.20 (#28293) 2024-11-22 14:04:29 -05:00
ccurme
a433039a56 core[patch]: support final AIMessage responses in tool_example_to_messages (#28267)
We have a test
[test_structured_few_shot_examples](ad4333ca03/libs/standard-tests/langchain_tests/integration_tests/chat_models.py (L546))
in standard integration tests that implements a version of tool-calling
few-shot examples that works with ~all tested providers. The formulation
supported by ~all providers is: `human message, tool call, tool message,
AI response`.

Here we update
`langchain_core.utils.function_calling.tool_example_to_messages` to
support this formulation.

The `tool_example_to_messages` util is undocumented outside of our API
reference. IMO, if we are testing that this function works across all
providers, it can be helpful to feature it in our guides. The structured
few-shot examples we document at the moment require users to implement
this function and can be simplified.
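
A minimal sketch of the formulation under the updated util (assuming the new `ai_response` keyword added here):

```python
from pydantic import BaseModel

from langchain_core.utils.function_calling import tool_example_to_messages

class Multiply(BaseModel):
    a: int
    b: int

# human message, tool call, tool message, AI response:
messages = tool_example_to_messages(
    "What is 3 times 4?",
    [Multiply(a=3, b=4)],
    tool_outputs=["12"],
    ai_response="3 times 4 is 12.",
)
```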
2024-11-22 15:38:49 +00:00
Manuel
a5fcbe69eb docs: correct HuggingFaceEmbeddings documentation model param (#28269)
- **Description:** Corrected the parameter name in the
HuggingFaceEmbeddings documentation under integrations/text_embedding/
from model to model_name to align with the actual code usage in the
langchain_huggingface package.
- **Issue:** Fixes #28231
- **Dependencies:** None
2024-11-22 09:59:33 -05:00
Erick Friis
65deeddd5d docs: poetry publish 3 (#28280) 2024-11-22 05:14:28 +00:00
Erick Friis
29f8a79ebe groq,openai,mistralai: fix unit tests (#28279) 2024-11-22 04:54:01 +00:00
Erick Friis
9a717c9b32 docs: poetry publish 2 (#28277)
- **docs: poetry publish**
- **x**
- **x**
- **x**
- **x**
- **x**
- **x**
- **x**
- **x**
- **x**
2024-11-21 20:49:38 -08:00
Erick Friis
4ccb3e64c7 cli: release 0.0.33 (#28278) 2024-11-21 20:13:37 -08:00
Prithvi Kannan
2917f8573f docs: Update langchain docs to new Databricks package (#28274)
Ctrl+F to find instances of `langchain-databricks` and replace with
`databricks-langchain`.

Signed-off-by: Prithvi Kannan <prithvi.kannan@databricks.com>
2024-11-21 20:03:28 -08:00
Erick Friis
49254cde70 docs: poetry publish (#28275) 2024-11-22 03:10:03 +00:00
Erick Friis
f173b72e35 api-docs: add standard tests package to build 2 (#28273) 2024-11-21 15:40:48 -08:00
Erick Friis
45402d1a20 api-docs: add standard tests package to build (#28272) 2024-11-21 15:35:47 -08:00
Erick Friis
b3ee1f8713 core: add space at end of error message link (#28270) 2024-11-21 22:19:59 +00:00
Erick Friis
5bc2df3060 standard-tests: troubleshooting docstrings (#28268) 2024-11-21 22:05:31 +00:00
Erick Friis
ad4333ca03 infra: disable vertex api build (#28266) 2024-11-21 10:37:17 -08:00
Erick Friis
69a706adff infra: fix api docs build (#28264) 2024-11-21 10:25:52 -08:00
ccurme
56499cf58b openai[patch]: unskip test and relax tolerance in embeddings comparison (#28262)
From what I can tell, the response from the SDK is not deterministic:
```python
import numpy as np
import openai

documents = ["disallowed special token '<|endoftext|>'"]
model = "text-embedding-ada-002"

direct_output_1 = (
    openai.OpenAI()
    .embeddings.create(input=documents, model=model)
    .data[0]
    .embedding
)

for i in range(10):
    direct_output_2 = (
        openai.OpenAI()
        .embeddings.create(input=documents, model=model)
        .data[0]
        .embedding
    )
    print(f"{i}: {np.isclose(direct_output_1, direct_output_2).all()}")
```
```
0: True
1: True
2: True
3: True
4: False
5: True
6: True
7: True
8: True
9: True
```

See related discussion here:
https://community.openai.com/t/can-text-embedding-ada-002-be-made-deterministic/318054

Found the same result using `"text-embedding-3-small"`.
2024-11-21 10:23:10 -08:00
Priyanshi Garg
f5f53d1101 community: fix compatibility issue in kinetica chat model integration for Pydantic 2 (#28252)
Fixed a compatibility issue in the `load_messages_from_context()`
function for the Kinetica chat model integration. The issue was caused
by stricter validation introduced in Pydantic 2.
2024-11-21 09:33:00 -05:00
Erick Friis
96c67230aa docs: standard test version badge (#28247) 2024-11-21 04:00:04 +00:00
Erick Friis
d1108607f4 multiple: push deprecation removals to 1.0 (#28236) 2024-11-20 19:56:29 -08:00
Erick Friis
4f76246cf2 standard-tests: release 0.3.4 (#28245) 2024-11-20 19:35:58 -08:00
Erick Friis
4bdf1d7d1a standard-tests: fix decorator init test (#28246) 2024-11-21 03:35:43 +00:00
Erick Friis
60e572f591 standard-tests: tool tests (#28244) 2024-11-20 19:26:16 -08:00
Erick Friis
35e6052df5 infra: remove stale dockerfiles from repo (#28243)
deleting the following docker things from monorepo. they aren't
currently usable because of old dependencies, and I'd rather avoid
people using them / having to maintain them

- /docker
- this folder has a compose file that spins up postgres,pgvector
(separate from postgres and very stale version),mongo instance with
default user/password that we've gotten security pings about before. not
worth having
- also spins up a custom dockerfile with onttotext/graphdb - not even
sure what that is
- /libs/langchain/dockerfile + dev.dockerfile
  - super old poetry version, doesn't implement the right thing anymore
- .github/workflows/_release_docker.yml, langchain_release_docker.yml
  - not used anymore, not worth having an alternate release path
2024-11-21 00:05:01 +00:00
Erick Friis
161ab736ce standard-tests: release 0.3.3 (#28242) 2024-11-20 23:47:02 +00:00
Erick Friis
8738973267 docs: vectorstore standard tests (#28241) 2024-11-20 23:38:08 +00:00
Eugene Yurtsev
2acc83f146 mistralai[patch]: 0.2.2 release (#28240)
mistralai 0.2.2 release
2024-11-20 22:18:15 +00:00
Eugene Yurtsev
1a66175e38 mistral[patch]: Propagate tool call id (#28238)
mistralai-large-2411 requires tool call id

Older models accept a tool call id if it's provided

mistral-large-2407 
mistral-large-2402
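
A minimal sketch of a correctly paired tool call and tool result (the 9-character alphanumeric id format expected by Mistral is assumed here):

```python
from langchain_core.messages import AIMessage, ToolMessage

ai_msg = AIMessage(
    content="",
    tool_calls=[{"name": "get_weather", "args": {"city": "Paris"}, "id": "abc123def"}],
)
# The tool result must carry the id of the call it answers:
tool_msg = ToolMessage(content="18C and sunny", tool_call_id="abc123def")
```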
2024-11-20 17:02:30 -05:00
shroominic
dee72c46c1 community: Outlines integration (#27449)
In collaboration with @rlouf I built an
[outlines](https://dottxt-ai.github.io/outlines/latest/) integration for
langchain!

I think this is really useful for doing any type of structured output
locally. [Dottxt](https://dottxt.co) spent a lot of work optimizing this
process at a lower level
([outlines-core](https://pypi.org/project/outlines-core/0.1.14/), written
in Rust), so I think this is a better alternative to all current
approaches in langchain for structured output.
It also implements the `.with_structured_output` method, so it should be a
drop-in replacement for a lot of applications.

The integration includes:
- **Outlines LLM class**
- **ChatOutlines class**
- **Tutorial Cookbooks**
- **Documentation Page**
- **Validation and error messages** 
- **Exposes Outlines Structured output features**
- **Support for multiple backends**
- **Integration and Unit Tests**

Dependencies: `outlines` + additional (depending on backend used)

I am not sure whether the unit tests comply with all requirements; if not,
I suggest just removing them, since I don't see a useful way to do it
differently.

### Quick overview:

Chat Models:
<img width="698" alt="image"
src="https://github.com/user-attachments/assets/05a499b9-858c-4397-a9ff-165c2b3e7acc">

Structured Output:
<img width="955" alt="image"
src="https://github.com/user-attachments/assets/b9fcac11-d3e5-4698-b1ae-8c4cb3d54c45">

---------

Co-authored-by: Vadym Barda <vadym@langchain.dev>
2024-11-20 16:31:31 -05:00
Mikelarg
2901fa20cc community: Add deprecation warning for GigaChat integration in langchain-community (#28022)
- **Description:** We have released
[langchain-gigachat](https://github.com/ai-forever/langchain-gigachat?tab=readme-ov-file)
with a new GigaChat integration that supports function/tool calling. This
PR deprecates the legacy GigaChat class in the community package.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-20 21:03:47 +00:00
Renzo-vS
567dc1e422 community: fix duplicate content (#28003)
Thank you for reading my first PR!

**Description:**
Deduplicate content in AzureSearch vectorstore.
Currently, by default, the content of the retrieval is placed both in
metadata and page_content of a Document.
This PR removes the content from metadata, and leaves it in
page_content.

**Issue:**
Previously, the content was popped from result before metadata was
populated.
In #25828, the order was changed, which leads to a response with
duplicated content.
This was not the intention of that PR and seems undesirable.
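
An illustration of the ordering fix (field names are simplified): pop the content out of the raw result before building metadata, so it is not duplicated into `Document.metadata`:

```python
from langchain_core.documents import Document

def to_document(result: dict, content_field: str = "content") -> Document:
    data = dict(result)
    content = data.pop(content_field)  # remove content *before* building metadata
    return Document(page_content=content, metadata=data)
```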

Looking forward to seeing my contribution in the next version!

Cheers, 
Renzo
2024-11-20 12:49:03 -08:00
Jorge Piedrahita Ortiz
abaea28417 community: SambaNovaCloud tool calling and structured output (#27967)
**Description:** Add tool calling and structured output support for
SambaNovaCloud chat models, docs included

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-20 19:12:08 +00:00
ccurme
cb32bab69d docs: update notebook env dependencies (#28221) 2024-11-20 14:10:42 -05:00
af su
7c7ee07d30 huggingface[fix]: HuggingFaceEndpointEmbeddings model parameter passing error when async embed (#27953)
This change refines the handling of _model_kwargs in POST requests.
Instead of nesting _model_kwargs as a dictionary under the parameters
key, it is now directly unpacked and merged into the request's JSON
payload. This ensures that the model parameters are passed correctly and
avoids unnecessary nesting. E.g.:

```python
import asyncio

from langchain_huggingface.embeddings import HuggingFaceEndpointEmbeddings

embedding_input = ["This input will get multiplied" * 10000]

embeddings = HuggingFaceEndpointEmbeddings(
    model="http://127.0.0.1:8081/embed",
    model_kwargs={"truncate": True},
)

# The truncate parameter is handled correctly in synchronous methods
embeddings.embed_documents(texts=embedding_input)
# The truncate parameter is not handled correctly in the asynchronous method,
# and 413 Request Entity Too Large is returned.
asyncio.run(embeddings.aembed_documents(texts=embedding_input))
```
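
A hedged sketch of the payload change described above: model kwargs are merged into the top-level JSON body rather than nested under a `parameters` key:

```python
texts = ["This input will get multiplied"]
_model_kwargs = {"truncate": True}

old_payload = {"inputs": texts, "parameters": _model_kwargs}  # before
new_payload = {"inputs": texts, **_model_kwargs}              # after
```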

Co-authored-by: af su <saf@zjuici.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-20 19:08:56 +00:00
Eric Pinzur
923ef85105 langchain_chroma: fixed integration tests (#27968)
Description:
* I'm planning to add `Document.id` support to the Chroma VectorStore, but
first I wanted to make sure all the integration tests were passing. They
weren't. This PR fixes the broken tests.
* I found 2 issues:
* This change (from a year ago, exactly :) ) for supporting multi-modal
embeddings:
https://docs.trychroma.com/deployment/migration#migration-to-0.4.16---november-7,-2023
* This change https://github.com/langchain-ai/langchain/pull/27827 due
to an update in the chroma client.
  
Also ran `format` and `lint` on the changes.

Note: I am not a member of the Chroma team.
2024-11-20 11:05:02 -08:00
CLOVA Studio 개발
218b4e073e community: fix some features on Naver ChatModel & embedding model (#28228)
# Description

- adding `stopReason` to `response_metadata` for `stream` and `astream` calls
- excluding `NCP_APIGW_API_KEY` from required-input validation
- removing the warning: Field "model_name" has conflict with protected
namespace "model_".

cc. @vbarda
2024-11-20 10:35:41 -08:00
Erick Friis
4da35623af docs: formatting fix (#28235) 2024-11-20 18:08:47 +00:00
Erick Friis
43e24cd4a1 docs, standard-tests: property tags, support tool decorator (#28234) 2024-11-20 17:19:03 +00:00
Soham Das
4027da1b6e docs: fix typo in migration guide: migrate_agent.ipynb (#28227)
PR Title: `docs: fix typo in migration guide`

PR Message:
- **Description**: This PR fixes a small typo in the "How to Migrate
from Legacy LangChain Agents to LangGraph" guide. "In this cases" -> "In
this case"
- **Issue**: N/A (no issue linked for this typo fix)
- **Dependencies**: None
- **Twitter handle**: N/A
2024-11-20 11:44:22 -05:00
Erick Friis
16918842bf docs: add conceptual testing docs (#28205) 2024-11-19 22:46:26 +00:00
Lance Martin
6bda89f9a1 Clarify bind tools takes a list (#28222)
2024-11-19 12:59:10 -08:00
William FH
197b885911 [CLI] Relax constraints (#28218) 2024-11-19 09:31:56 -08:00
Eugene Yurtsev
5599a0a537 core[minor]: Add other langgraph packages to sys_info (#28190)
Add other langgraph packages to sys_info output
2024-11-19 09:20:25 -05:00
Erick Friis
97f752c92d docs: more standard test stubs (#28202) 2024-11-19 03:59:52 +00:00
Erick Friis
0a06732d3e docs: links in integration contrib (#28200) 2024-11-19 03:25:47 +00:00
Erick Friis
0dbaf05bb7 standard-tests: rename langchain_standard_tests to langchain_tests, release 0.3.2 (#28203) 2024-11-18 19:10:39 -08:00
Erick Friis
24eea2e398 infra: allow non-langchainai packages (#28199) 2024-11-19 01:43:08 +00:00
Erick Friis
d9d689572a openai: release 0.2.9, o1 streaming (#28197) 2024-11-18 23:54:38 +00:00
Erick Friis
cbeb8601d6 docs: efficient rebuild (#28195)
if you run `make build start` in one tab, then start editing files, you
can efficiently rebuild notebooks with `make generate-files md-sync
render`
2024-11-18 22:09:16 +00:00
ccurme
018f4102f4 docs: fix embeddings tabs (#28193)
- Update fake embeddings to deterministic fake embeddings
- Fix indentation
2024-11-18 16:00:20 -05:00
Mahdi Massahi
6dfea7e508 docs: fixed a typo (#28191)
**Description**: removed the redundant phrase (typo)
2024-11-18 15:46:47 -05:00
Eugene Yurtsev
3a63055ce2 docs[patch]: Add missing link to streaming concepts page (#28189)
Add missing streaming concept
2024-11-18 14:35:10 -05:00
ccurme
a1db744b20 docs: add component tabs to integration landing pages (#28142)
- Add to embedding model tabs
- Add tabs for vector stores
- Add "hello world" examples in integration landing pages using tabs
2024-11-18 13:34:35 -05:00
Erick Friis
c26b3575f8 docs: community integration guide clarification (#28186) 2024-11-18 17:58:07 +00:00
Erick Friis
093f24ba4d docs: standard test update (#28185) 2024-11-18 17:49:21 +00:00
Talha Munir
0c051e57e0 docs: fix grammatical error in delegation to sync methods (#28165)
### **Description**  
Fixed a grammatical error in the documentation section about the
delegation to synchronous methods to improve readability and clarity.

### **Issue**  
No associated issue.

### **Dependencies**  
No additional dependencies required.

### **Twitter handle**  
N/A

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-11-18 16:27:30 +00:00
DreamOfStars
22a8652ecc langchain: add missing punctuation in react_single_input.py (#28161)
- [x] **PR title**: "langchain: add missing punctuation in
react_single_input.py"

- [x] **PR message**: 
- **Description:** Add missing single quote to line 12: "Invalid Format:
Missing 'Action:' after 'Thought:"
2024-11-18 09:38:48 -05:00
Eugene Yurtsev
76e210a349 docs: link to langgraph platform (#28150)
Link to langgraph platform
2024-11-16 22:37:58 -05:00
Eric Pinzur
0a57fc0016 community: OpenSearchVectorStore: use engine set at init() time by default (#28147)
Description:
* Updated the OpenSearchVectorStore to use the `engine` parameter
captured at `init()` time as the default when adding documents to the
store.

Formatted, Linted, and Tested.
2024-11-16 17:07:42 -05:00
Zapiron
e6fe8cc2fb docs: Fix wrong import of AttributeInfo (#28155)
Fix wrong import of `AttributeInfo` from
`langchain.chains.query_constructor.base` to
`langchain.chains.query_constructor.schema`
2024-11-16 16:59:35 -05:00
Zapiron
0b2bea4c0e docs: Resolve incorrect import for AttributeInfo (#28154)
`AttributeInfo` is incorrectly imported from
`langchain.chains.query_constructor.base` instead of
`langchain.chains.query_constructor.schema`
2024-11-16 16:57:55 -05:00
Alexey Morozov
3b602d0453 docs: Added missing installation for required packages in tutorial notebooks (#28156)
**Description:** some of the required packages are missing in the
installation cell in tutorial notebooks. So I added required packages to
installation cell or created latter one if it was not presented in the
notebook at all.

Tested in colab: "Kernel" -> "Run all cells". All the notebooks under
`docs/tutorials` run as expected without `ModuleNotFoundError` error.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-11-16 21:51:30 +00:00
Zapiron
2de59d0b3e docs: Fixed mini typo (#28149)
Fix mini typo from objets to objects
2024-11-16 16:31:31 -05:00
Erick Friis
709c418022 docs: how to contribute integrations (#28143) 2024-11-15 14:52:17 -08:00
Erick Friis
683644320b docs: reorg sidebar (#27978) 2024-11-15 14:28:18 -08:00
Piyush Jain
c48fdbba6a docs: Moved AWS tab ahead in the list as per integration telemetry (#28144)
Moving ahead per integration telemetry
2024-11-15 22:24:27 +00:00
Erick Friis
364fd5e17f infra: release standard test case (#28140) 2024-11-15 11:58:28 -08:00
Erick Friis
6d2004ee7d multiple: langchain-standard-tests -> langchain-tests (#28139) 2024-11-15 11:32:04 -08:00
Erick Friis
409c7946ac docs, standard-tests: how to standard test a custom tool, imports (#27931) 2024-11-15 10:49:14 -08:00
alex shengzhi li
39fcb476fd community: add reka chat model integration (#27379) 2024-11-15 13:37:14 -05:00
Erick Friis
d3252b7417 core: release 0.3.19 (#28137) 2024-11-15 18:15:28 +00:00
ccurme
585479e1ff docs: add legacy LLM page to concepts index (#28135)
This page was previously not discoverable.
2024-11-15 13:06:48 -05:00
Jorge Piedrahita Ortiz
39956a3ef0 community: sambanovacloud llm integration (#27526)
- **Description:** SambaNovaCloud LLM integration added; previously only
the chat model integration existed

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-11-15 16:58:11 +00:00
Elham Badri
d696728278 partners/ollama: Enabled Token Level Streaming when Using Bind Tools for ChatOllama (#27689)
**Description:** The issue concerns the unexpected behavior observed
using the bind_tools method in LangChain's ChatOllama. When tools are
not bound, the llm.stream() method works as expected, returning
incremental chunks of content, which is crucial for real-time
applications such as conversational agents and live feedback systems.
However, when bind_tools([]) is used, the streaming behavior changes,
causing the output to be delivered in full chunks rather than
incrementally. This change negatively impacts the user experience by
breaking the real-time nature of the streaming mechanism.
**Issue:** #26971
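
A minimal sketch of the expected behavior after the fix (the model name is an example):

```python
from langchain_ollama import ChatOllama

llm = ChatOllama(model="llama3.1")

# Binding tools should no longer disable token-level streaming:
for chunk in llm.bind_tools([]).stream("Tell me a short story"):
    print(chunk.content, end="", flush=True)
```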

---------

Co-authored-by: 4meyDam1e <amey.damle@mail.utoronto.ca>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-11-15 11:36:27 -05:00
ccurme
776e3271e3 standard-tests[patch]: add test for async tool calling (#28133) 2024-11-15 16:09:50 +00:00
Vadym Barda
ed4952e475 core[patch]: add caching to get_function_nonlocals (#28131) 2024-11-15 07:53:53 -08:00
ccurme
74438f3ae8 docs: add links to concept guides in how-tos (#28118) 2024-11-15 09:44:11 -05:00
ccurme
ef2dc9eae5 docs: update "quickstart" tutorial (#28096)
- Update language / add links in places
- De-emphasize output parsers
- remove deployment section
2024-11-14 14:38:45 -05:00
ccurme
f1222739f8 core[patch]: support numpy 2 (#27991) 2024-11-14 13:08:57 -05:00
Zapiron
cff70c2d67 docs: Add hyperlink to immediately show the table at the bottom of th… (#28102)
Added a hyperlink that can be clicked so users can immediately see the
table and find the various example selector methods.
2024-11-14 09:52:18 -05:00
Zapiron
4b641f87ae English Update and fixed a duplicate "the" (#27981)
Fixed a duplicate "the" in the documentation and made the documentation
generally easier to understand
2024-11-13 14:36:56 -05:00
Erick Friis
f6d34585f0 docs: throw on broken anchors (#27773)
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-11-13 14:29:27 -05:00
Zapiron
7bd9c8cba3 docs: Updated link to ensure reference to the correct header for ToolNode (#28088)
When `ToolNode` hyperlink is clicked, it does not automatically scroll
to the section due to incorrect reference to the heading / id in the
LangGraph documentation
2024-11-13 14:19:55 -05:00
ccurme
940e93e891 docs: add docs on StrOutputParser (#28089)
Think it's worth adding a quick guide and including in the table in the
concepts page. `StrOutputParser` can make it easier to deal with the
union type for message content. For example, ChatAnthropic with bound
tools will generate string content if there are no tool calls and
`list[dict]` content otherwise.

I'm also considering removing the output parser section from the
["quickstart"
tutorial](https://python.langchain.com/docs/tutorials/llm_chain/); we
can link to this guide instead.
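
A minimal sketch (with `llm` standing in for any chat model):

```python
from langchain_core.messages import AIMessage
from langchain_core.output_parsers import StrOutputParser

parser = StrOutputParser()
print(parser.invoke(AIMessage(content="Hello!")))  # -> "Hello!"

# Typical use in a chain: chain = llm | StrOutputParser()
```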
2024-11-13 14:16:50 -05:00
Vadym Barda
6ec688cf2b xai[patch]: update core (#28092) 2024-11-13 17:51:51 +00:00
Artur Barseghyan
2ab5673eb1 docs: Add example using TypedDict in structured outputs how-to guide (#27415)
For me, the [Pydantic
example](https://python.langchain.com/docs/how_to/structured_output/#choosing-between-multiple-schemas)
does not work (tested on various Python versions from 3.10 to 3.12, and
`Pydantic` versions from 2.7 to 2.9).

The `TypedDict` example (added in this PR) does.

----

Additionally, fixed an error in [Using PydanticOutputParser
example](https://python.langchain.com/docs/how_to/structured_output/#using-pydanticoutputparser).

Was:

```python
query = "Anna is 23 years old and she is 6 feet tall"

print(prompt.invoke(query).to_string())
```

Corrected to:

```python
query = "Anna is 23 years old and she is 6 feet tall"

print(prompt.invoke({"query": query}).to_string())
```

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-11-13 16:53:37 +00:00
Bharat Ramanathan
3e972faf81 community: chore warn deprecate the tracer (#27159)
- **Description:** This PR deprecates the wandb tracer in favor of the
new
[WeaveTracer](https://weave-docs.wandb.ai/guides/integrations/langchain#using-weavetracer)
in W&B
- **Dependencies:** No dependencies, just a deprecation warning.
- **Twitter handle:** @parambharat


@baskaryan
2024-11-13 11:33:34 -05:00
Erick Friis
76e0127539 core: release 0.3.18 (#28070) 2024-11-13 16:19:13 +00:00
Eric Pinzur
eadc2f6a90 core: added DeleteResponse to the module (#28069)
Description:
* added `DeleteResponse` to the `langchain_core.indexing` module, for
implementing DocumentIndex classes.
2024-11-13 11:08:08 -05:00
ZhangShenao
c89e7ce8b5 core[patch]: Update doc-strings in callbacks (#28073)
- Fix api docs
2024-11-13 11:07:15 -05:00
Tom Pham
965286db3e docs: fix spelling error (#28075)
Fix spelling error in docs
2024-11-13 11:06:13 -05:00
Zapiron
892694d735 docs: Fixed broken link for AI models introduction (#28079)
Fixed broken redirect to the introduction to AI models in the Forefront
platform
2024-11-13 11:03:40 -05:00
Vruddhi Shah
beef4c4d62 Proofreading and Editing Report for Migration Guide (#28084)
Corrections and Suggestions for Migrating LangChain Code Documentation

2024-11-13 11:03:09 -05:00
Zapiron
2cec957274 docs: Fix missing space between the words API Reference (#28087)
Added the missing space between the words "API Reference"
2024-11-13 11:02:46 -05:00
Zapiron
da7c79b794 DOCS: Concept Section Improvements & Updates (#27733)
Edited mainly the `Concepts` section in the LangChain documentation.

Overview:
* Updated some explanations to make the points clearer / added missing
words in some of the documentation.
* Rephrased some sentences to make them shorter and more concise.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2024-11-13 11:01:27 -05:00
Zapiron
02de346f6d docs: Fixed additional 'the' and remove 'turns' to make explanation clearer (#28082)
Fixed an extra 'the' and removed the word 'turns' to make the
explanation clearer

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-11-13 15:15:39 +00:00
Zapiron
298ebeee4e docs: Fixed broken link for Cloudflare docs for the models available (#28080)
Fixed the broken redirect to see all the Cloudflare models
2024-11-13 10:07:33 -05:00
Zapiron
8241c0df23 docs: Fixed wrong link redirect from JS ToolMessage to Python ToolMes… (#28083)
Fixed the link to ToolMessage from the JS documentation to Python
documentation
2024-11-13 10:05:19 -05:00
Zapiron
77c8a5c70c docs: Fixed broken link to the Luminous model family introduction (#28078)
The Luminous Model hyperlink at the start of the model is broken.
Fixed it to update it with the latest link used by the integration
2024-11-13 10:04:50 -05:00
Vadym Barda
09e85c7c4b xai[patch]: update dependencies (#28067) 2024-11-12 16:15:17 -05:00
am-kinetica
a646f1c383 Handled empty search results and updated the notebook (#27914)
- **PR title**: "community: updated Kinetica vectorstore"

  - **Description:** Handled empty search results
  - **Issue:** previously threw an error if the search results were empty

@efriis
2024-11-12 13:03:49 -08:00
ccurme
00e7b2dada anthropic[patch]: add examples to API ref (#28065) 2024-11-12 20:17:02 +00:00
Vadym Barda
48ee322a78 partners: add xAI chat integration (#28032) 2024-11-12 15:11:29 -05:00
ccurme
2898b95ca7 anthropic[major]: release 0.3.0 (#28063) 2024-11-12 14:58:00 -05:00
ccurme
5eaa0e8c45 openai[patch]: release 0.2.8 (#28062) 2024-11-12 14:57:11 -05:00
ccurme
15b7dd3ad7 community[patch]: release 0.3.7 (#28061) 2024-11-12 19:54:58 +00:00
ccurme
5460096086 core[patch]: release 0.3.17 (#28060) 2024-11-12 19:38:56 +00:00
ccurme
1538ee17f9 anthropic[major]: support python 3.13 (#27916)
Last week Anthropic released version 0.39.0 of its python sdk, which
enabled support for Python 3.13. This release deleted a legacy
`client.count_tokens` method, which we currently access during init of
the `Anthropic` LLM. Anthropic has replaced this functionality with the
[client.beta.messages.count_tokens()
API](https://github.com/anthropics/anthropic-sdk-python/pull/726).

To enable support for `anthropic >= 0.39.0` and Python 3.13, here we
drop support for the legacy token counting method, and add support for
the new method via `ChatAnthropic.get_num_tokens_from_messages`.

To fully support the token counting API, we update the signature of
`get_num_tokens_from_messages` to accept tools everywhere.
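
A hedged sketch of the new pathway (the model name is an example; the call hits Anthropic's beta count-tokens API):

```python
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import HumanMessage

llm = ChatAnthropic(model="claude-3-5-sonnet-20241022")

# Delegates to client.beta.messages.count_tokens() and also accepts `tools`:
n = llm.get_num_tokens_from_messages([HumanMessage("How many r's in strawberry?")])
print(n)
```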

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-11-12 14:31:07 -05:00
Syed Hyder Zaidi
759b6ed17a docs: Fix typo in Tavily Search example (#28034)
Changed "demon" to "demo" in the code comment for clarity.

2024-11-12 13:58:13 -05:00
ZhangShenao
ca7375ac20 Improvement[Community]Improve Embeddings API (#28038)
- Fix `BaichuanTextEmbeddings` api url
- Remove unused params in api doc
- Fix word spelling
2024-11-12 13:57:35 -05:00
Aditya Anand
e290736696 Update streaming.mdx (#28055)
fix: correct grammar in documentation for streaming modes

Updated sentence to clarify usage of "choose" in "When using the stream
and astream methods with LangGraph, you can choose one or more streaming
modes..." for better readability.

2024-11-12 16:43:12 +00:00
Aditya Anand
f9212c77e7 DOC: Fix typo in documentation for streaming modes, correcting 'witten' to 'written' in 'Emit custom output written using LangGraph’s StreamWriter.' (#28052)

### Changes:
- Corrected the typo in the phrase 'Emit custom output witten using
LangGraph’s StreamWriter.' to 'Emit custom output written using
LangGraph’s StreamWriter.'
- Enhanced the clarity of the documentation surrounding LangGraph’s
streaming modes, specifically around the StreamWriter functionality.
- Provided additional context and emphasis on the role of the
StreamWriter class in handling custom output.

### Issue Reference:
- GitHub issue: https://github.com/langchain-ai/langchain/issues/28051

This update addresses the issue raised regarding the incorrect spelling
and aims to improve the clarity of the streaming mode documentation for
better user understanding.

2024-11-12 11:42:30 -05:00
Bagatur
139881b108 openai[patch]: fix azure oai stream check (#28048) 2024-11-12 15:42:06 +00:00
Bagatur
9611f0b55d openai[patch]: Release 0.2.7 (#28047) 2024-11-12 15:16:15 +00:00
Bagatur
5c14e1f935 community[patch]: Release 0.3.6 (#28046) 2024-11-12 15:15:07 +00:00
Bagatur
9ebd7ebed8 core[patch]: Release 0.3.16 (#28045) 2024-11-12 14:57:15 +00:00
Changyong Um
9484cc0962 community[docs]: modify parameter for the LoRA adapter on the vllm page (#27930)
**Description:**
This PR updates the documentation on configuring vLLM with a LoRA
adapter, providing clear instructions on how to set up the adapter when
using the VLLM integration.

- before
```python
VLLM(..., enable_lora=True)
```
- after
```python
VLLM(..., 
    vllm_kwargs={
        "enable_lora": True
    }
)
```
This change clarifies that users should use `vllm_kwargs` to enable
the LoRA adapter.

Co-authored-by: Um Changyong <changyong.um@sfa.co.kr>
2024-11-11 15:41:56 -05:00
Zapiron
0b85f9035b docs: Makes the phrasing more smooth and reasoning more clear (#28020)
Updated the phrasing and reasoning on the "abstraction not receiving
much development" part of the documentation

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-11-11 17:17:29 +00:00
Zapiron
f695b96484 docs:Fixed missing hyperlink and changed AI to LLMs for clarity (#28006)
Changed "AI" to "LLM" in a paragraph
Fixed missing hyperlink for the structured output point
2024-11-11 12:14:29 -05:00
Choy Fuguan
c0f3777657 docs: removed bolding from header (#28001)
removed extra ** after heading two
2024-11-11 12:13:02 -05:00
Salman Faroz
44df79cf52 Correcting AzureOpenAI initialization (#28014) 2024-11-11 12:10:59 -05:00
Hammad Randhawa
57fc62323a docs : Update sql_qa.ipynb (#28026)
Text Documentation Bug:

Changed DSL query to SQL query.
2024-11-11 12:04:09 -05:00
ccurme
922b6b0e46 docs: update some cassettes (#28010) 2024-11-09 21:04:18 +00:00
ccurme
8e91c7ceec docs: add cross-links (#28000)
Mainly to improve visibility of integration pages.
2024-11-09 08:57:58 -05:00
Bagatur
33dbfba08b openai[patch]: default to invoke on o1 stream() (#27983) 2024-11-08 19:12:59 -08:00
Bagatur
503f2487a5 docs: intro nit (#27998) 2024-11-08 11:51:17 -08:00
ccurme
ff2152b115 docs: update tutorials index and add get started guides (#27996) 2024-11-08 14:47:32 -05:00
Eric Pinzur
c421997caa community[patch]: Added type hinting to OpenSearch clients (#27946)
Description:
* While working with OpenSearchVectorSearch to build
OpenSearchGraphVectorStore (coming soon), I noticed that the underlying
OpenSearch clients had no type hints. This fixes that issue.
* Confirmed tests are still passing with code changes.

Note that there is some additional code duplication now, but I think
this approach is cleaner overall.
2024-11-08 11:04:57 -08:00
Zapiron
4c2392e55c docs: fix link in custom tools guide (#27975)
Fixed broken link in tools documentation for `BaseTool`
2024-11-08 09:40:15 -05:00
Zapiron
85925e3164 docs: fix link in tool-calling guide (#27976)
Fix broken BaseTool link in documentation
2024-11-08 09:39:27 -05:00
Zapiron
138f360b25 docs: fix typo in PDF loader guide (#27977)
Fixed duplicate "py" in hyperlink to `pypdf` docs
2024-11-08 09:38:32 -05:00
Saad Makrod
b509747c7f Community: Google Books API Tool (#27307)
## Description

As proposed in our earlier discussion #26977, we have introduced a Google
Books API Tool that leverages the [Google Books
API](https://developers.google.com/books/docs/v1/using)
to generate book recommendations.

### Sample Usage

```python
from langchain_community.tools import GoogleBooksQueryRun
from langchain_community.utilities import GoogleBooksAPIWrapper

api_wrapper = GoogleBooksAPIWrapper()
tool = GoogleBooksQueryRun(api_wrapper=api_wrapper)

tool.run('ai')
```

### Sample Output

```txt
Here are 5 suggestions based off your search for books related to ai:

1. "AI's Take on the Stigma Against AI-Generated Content" by Sandy Y. Greenleaf: In a world where artificial intelligence (AI) is rapidly advancing and transforming various industries, a new form of content creation has emerged: AI-generated content. However, despite its potential to revolutionize the way we produce and consume information, AI-generated content often faces a significant stigma. "AI's Take on the Stigma Against AI-Generated Content" is a groundbreaking book that delves into the heart of this issue, exploring the reasons behind the stigma and offering a fresh, unbiased perspective on the topic. Written from the unique viewpoint of an AI, this book provides readers with a comprehensive understanding of the challenges and opportunities surrounding AI-generated content. Through engaging narratives, thought-provoking insights, and real-world examples, this book challenges readers to reconsider their preconceptions about AI-generated content. It explores the potential benefits of embracing this technology, such as increased efficiency, creativity, and accessibility, while also addressing the concerns and drawbacks that contribute to the stigma. As you journey through the pages of this book, you'll gain a deeper understanding of the complex relationship between humans and AI in the realm of content creation. You'll discover how AI can be used as a tool to enhance human creativity, rather than replace it, and how collaboration between humans and machines can lead to unprecedented levels of innovation. Whether you're a content creator, marketer, business owner, or simply someone curious about the future of AI and its impact on our society, "AI's Take on the Stigma Against AI-Generated Content" is an essential read. With its engaging writing style, well-researched insights, and practical strategies for navigating this new landscape, this book will leave you equipped with the knowledge and tools needed to embrace the AI revolution and harness its potential for success. Prepare to have your assumptions challenged, your mind expanded, and your perspective on AI-generated content forever changed. Get ready to embark on a captivating journey that will redefine the way you think about the future of content creation.
Read more at https://play.google.com/store/books/details?id=4iH-EAAAQBAJ&source=gbs_api

2. "AI Strategies For Web Development" by Anderson Soares Furtado Oliveira: From fundamental to advanced strategies, unlock useful insights for creating innovative, user-centric websites while navigating the evolving landscape of AI ethics and security Key Features Explore AI's role in web development, from shaping projects to architecting solutions Master advanced AI strategies to build cutting-edge applications Anticipate future trends by exploring next-gen development environments, emerging interfaces, and security considerations in AI web development Purchase of the print or Kindle book includes a free PDF eBook Book Description If you're a web developer looking to leverage the power of AI in your projects, then this book is for you. Written by an AI and ML expert with more than 15 years of experience, AI Strategies for Web Development takes you on a transformative journey through the dynamic intersection of AI and web development, offering a hands-on learning experience.The first part of the book focuses on uncovering the profound impact of AI on web projects, exploring fundamental concepts, and navigating popular frameworks and tools. As you progress, you'll learn how to build smart AI applications with design intelligence, personalized user journeys, and coding assistants. Later, you'll explore how to future-proof your web development projects using advanced AI strategies and understand AI's impact on jobs. Toward the end, you'll immerse yourself in AI-augmented development, crafting intelligent web applications and navigating the ethical landscape.Packed with insights into next-gen development environments, AI-augmented practices, emerging realities, interfaces, and security governance, this web development book acts as your roadmap to staying ahead in the AI and web development domain. What you will learn Build AI-powered web projects with optimized models Personalize UX dynamically with AI, NLP, chatbots, and recommendations Explore AI coding assistants and other tools for advanced web development Craft data-driven, personalized experiences using pattern recognition Architect effective AI solutions while exploring the future of web development Build secure and ethical AI applications following TRiSM best practices Explore cutting-edge AI and web development trends Who this book is for This book is for web developers with experience in programming languages and an interest in keeping up with the latest trends in AI-powered web development. Full-stack, front-end, and back-end developers, UI/UX designers, software engineers, and web development enthusiasts will also find valuable information and practical guidelines for developing smarter websites with AI. To get the most out of this book, it is recommended that you have basic knowledge of programming languages such as HTML, CSS, and JavaScript, as well as a familiarity with machine learning concepts.
Read more at https://play.google.com/store/books/details?id=FzYZEQAAQBAJ&source=gbs_api

3. "Artificial Intelligence for Students" by Vibha Pandey: A multifaceted approach to develop an understanding of AI and its potential applications KEY FEATURES ● AI-informed focuses on AI foundation, applications, and methodologies. ● AI-inquired focuses on computational thinking and bias awareness. ● AI-innovate focuses on creative and critical thinking and the Capstone project. DESCRIPTION AI is a discipline in Computer Science that focuses on developing intelligent machines, machines that can learn and then teach themselves. If you are interested in AI, this book can definitely help you prepare for future careers in AI and related fields. The book is aligned with the CBSE course, which focuses on developing employability and vocational competencies of students in skill subjects. The book is an introduction to the basics of AI. It is divided into three parts – AI-informed, AI-inquired and AI-innovate. It will help you understand AI's implications on society and the world. You will also develop a deeper understanding of how it works and how it can be used to solve complex real-world problems. Additionally, the book will also focus on important skills such as problem scoping, goal setting, data analysis, and visualization, which are essential for success in AI projects. Lastly, you will learn how decision trees, neural networks, and other AI concepts are commonly used in real-world applications. By the end of the book, you will develop the skills and competencies required to pursue a career in AI. WHAT YOU WILL LEARN ● Get familiar with the basics of AI and Machine Learning. ● Understand how and where AI can be applied. ● Explore different applications of mathematical methods in AI. ● Get tips for improving your skills in Data Storytelling. ● Understand what is AI bias and how it can affect human rights. WHO THIS BOOK IS FOR This book is for CBSE class XI and XII students who want to learn and explore more about AI. Basic knowledge of Statistical concepts, Algebra, and Plotting of equations is a must. TABLE OF CONTENTS 1. Introduction: AI for Everyone 2. AI Applications and Methodologies 3. Mathematics in Artificial Intelligence 4. AI Values (Ethical Decision-Making) 5. Introduction to Storytelling 6. Critical and Creative Thinking 7. Data Analysis 8. Regression 9. Classification and Clustering 10. AI Values (Bias Awareness) 11. Capstone Project 12. Model Lifecycle (Knowledge) 13. Storytelling Through Data 14. AI Applications in Use in Real-World
Read more at https://play.google.com/store/books/details?id=ptq1EAAAQBAJ&source=gbs_api

4. "The AI Book" by Ivana Bartoletti, Anne Leslie and Shân M. Millie: Written by prominent thought leaders in the global fintech space, The AI Book aggregates diverse expertise into a single, informative volume and explains what artifical intelligence really means and how it can be used across financial services today. Key industry developments are explained in detail, and critical insights from cutting-edge practitioners offer first-hand information and lessons learned. Coverage includes: · Understanding the AI Portfolio: from machine learning to chatbots, to natural language processing (NLP); a deep dive into the Machine Intelligence Landscape; essentials on core technologies, rethinking enterprise, rethinking industries, rethinking humans; quantum computing and next-generation AI · AI experimentation and embedded usage, and the change in business model, value proposition, organisation, customer and co-worker experiences in today’s Financial Services Industry · The future state of financial services and capital markets – what’s next for the real-world implementation of AITech? · The innovating customer – users are not waiting for the financial services industry to work out how AI can re-shape their sector, profitability and competitiveness · Boardroom issues created and magnified by AI trends, including conduct, regulation & oversight in an algo-driven world, cybersecurity, diversity & inclusion, data privacy, the ‘unbundled corporation’ & the future of work, social responsibility, sustainability, and the new leadership imperatives · Ethical considerations of deploying Al solutions and why explainable Al is so important
Read more at http://books.google.ca/books?id=oE3YDwAAQBAJ&dq=ai&hl=&source=gbs_api

5. "Artificial Intelligence in Society" by OECD: The artificial intelligence (AI) landscape has evolved significantly from 1950 when Alan Turing first posed the question of whether machines can think. Today, AI is transforming societies and economies. It promises to generate productivity gains, improve well-being and help address global challenges, such as climate change, resource scarcity and health crises.
Read more at https://play.google.com/store/books/details?id=eRmdDwAAQBAJ&source=gbs_api
```

## Issue 

This closes #27276 

## Dependencies

No additional dependencies were added

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-07 15:29:35 -08:00
Massimiliano Pronesti
be3b7f9bae cookbook: add Anthropic's contextual retrieval (#27898)
Hi there, this PR adds a notebook implementing Anthropic's proposed
[Contextual
retrieval](https://www.anthropic.com/news/contextual-retrieval) to
langchain's cookbook.
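
The core idea in rough sketch form (model and prompt wording are
illustrative, not the notebook's code): each chunk gets a short,
LLM-generated context derived from the full document prepended to it
before embedding.

```python
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-3-5-haiku-latest")  # model choice illustrative


def contextualize(document: str, chunk: str) -> str:
    """Prepend an LLM-generated, document-aware context to a chunk."""
    prompt = (
        f"<document>\n{document}\n</document>\n"
        "Write a short context that situates the following chunk within "
        f"the document, for retrieval purposes:\n{chunk}"
    )
    context = llm.invoke(prompt).content
    return f"{context}\n\n{chunk}"  # embed this instead of the raw chunk
```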
2024-11-07 14:48:01 -08:00
Erick Friis
733e43eed0 docs: new stack diagram (#27972) 2024-11-07 22:46:56 +00:00
Erick Friis
a073c4c498 templates,docs: leave templates in v0.2 (#27952)
all template installs will now have to declare `--branch v0.2` to make
clear they aren't compatible with langchain 0.3 (most have a pydantic v1
setup). e.g.

```
langchain-cli app add pirate-speak --branch v0.2
```
2024-11-07 22:23:48 +00:00
Erick Friis
8807e6986c docs: ignore case production fork master (#27971) 2024-11-07 13:55:21 -08:00
Shawn Lee
6f368e9eab community: handle chatdeepinfra jsondecode error (#27603)
Fixes #27602 

Added error handling to return an empty dict if args is an empty string
or None.
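
In spirit, the fix looks like this (a distilled sketch; the helper name
is illustrative, not the patch verbatim):

```python
import json
from typing import Optional


def parse_tool_args(raw: Optional[str]) -> dict:
    """Return {} when DeepInfra sends empty-string/None tool arguments."""
    if not raw:  # "" or None would make json.loads raise
        return {}
    return json.loads(raw)
```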

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-07 13:47:19 -08:00
CLOVA Studio 개발
0588bab33e community: fix ClovaXEmbeddings document API link address (#27957)
- **Description:** a 404 error occurs because the `API reference` link
path is incorrect in
`langchain/docs/docs/integrations/text_embedding/naver.ipynb`
- **Issue:** corrects the `API reference` link path.

@vbarda @efriis
2024-11-07 13:46:01 -08:00
Akshata
05fd6a16a9 Add ChatModels wrapper for Cloudflare Workers AI (#27645)
- [x] **PR title**: "community: chat models wrapper for Cloudflare
Workers AI"

- [x] **PR message**:
- **Description:** Add a chat models wrapper for Cloudflare Workers AI.
Enables LangGraph integration via ChatModel for tool usage and agentic
usage.

- [x] **Add tests and docs**: unit tests and an example notebook in
`docs/docs/integrations` are included.

- [x] **Lint and test**: `make format`, `make lint` and `make test` were
run from the root of the modified package(s).

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-11-07 15:34:24 -05:00
Erick Friis
8a5b9bf2ad box: migrate to repo (#27969) 2024-11-07 10:19:22 -08:00
ccurme
1ad49957f5 docs[patch]: update cassettes for sql/csv notebook (#27966) 2024-11-07 11:48:45 -05:00
ccurme
a747dbd24b anthropic[patch]: remove retired model from tests (#27965)
`claude-instant` was [retired
yesterday](https://docs.anthropic.com/en/docs/resources/model-deprecations).
2024-11-07 16:16:29 +00:00
Aksel Joonas Reedi
2cb39270ec community: bytes as a source to AzureAIDocumentIntelligenceLoader (#26618)
- **Description:** This PR adds functionality to pass in in-memory bytes
as a source to `AzureAIDocumentIntelligenceLoader`.
- **Issue:** I needed the functionality, so I added it.
- **Dependencies:** NA
- **Twitter handle:** @akseljoonas if this is a big enough change :)
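
A sketch of the intended usage; note the `bytes_source` parameter name
here is a hypothetical stand-in for illustration (the PR description does
not name it), and the endpoint/key are placeholders:

```python
from langchain_community.document_loaders import AzureAIDocumentIntelligenceLoader

with open("invoice.pdf", "rb") as f:
    pdf_bytes = f.read()

loader = AzureAIDocumentIntelligenceLoader(
    api_endpoint="https://<resource>.cognitiveservices.azure.com/",
    api_key="<key>",
    bytes_source=pdf_bytes,  # hypothetical kwarg: in-memory bytes instead of a file path or URL
)
docs = loader.load()
```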

---------

Co-authored-by: Aksel Joonas Reedi <aksel@klippa.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-07 03:40:21 +00:00
Martin Triska
7a9149f5dd community: ZeroxPDFLoader (#27800)
# OCR-based PDF loader

This implements [Zerox](https://github.com/getomni-ai/zerox) PDF
document loader.
Zerox takes a simple but very powerful (though slower and more costly)
approach to parsing PDF documents: it converts the PDF into a series of
images and passes them to a vision model, requesting the contents in
markdown.

It is especially suitable for complex PDFs that are not parsed well by
other alternatives.

## Example use:
```python
import os

from langchain_community.document_loaders.pdf import ZeroxPDFLoader

os.environ["OPENAI_API_KEY"] = ""  ## your-api-key

model = "gpt-4o-mini" ## openai model
pdf_url = "https://assets.ctfassets.net/f1df9zr7wr1a/soP1fjvG1Wu66HJhu3FBS/034d6ca48edb119ae77dec5ce01a8612/OpenAI_Sacra_Teardown.pdf"

loader = ZeroxPDFLoader(file_path=pdf_url, model=model)
docs = loader.load()
```

The Zerox library supports a wide range of providers/models. See the
Zerox documentation for details.

- **Dependencies:** `zerox`
- **Twitter handle:** @martintriska1


---------

Co-authored-by: Erick Friis <erickfriis@gmail.com>
2024-11-07 03:14:57 +00:00
Dmitriy Prokopchuk
53b0a99f37 community: Memcached LLM Cache Integration (#27323)
## Description
This PR adds support for Memcached as a usable LLM cache by adding
the ```MemcachedCache``` implementation relying on the
[pymemcache](https://github.com/pinterest/pymemcache) client.

Unit test-wise, the new integration is generally covered under existing
import testing. All new functionality depends on pymemcache if
instantiated and used, so to comply with the other cache implementations
the PR also adds optional integration tests for ```MemcachedCache```.

Since this is a new integration, documentation is added for Memcached as
an integration and as an LLM Cache.

## Issue
This PR closes #27275 which was originally raised as a discussion in
#27035

## Dependencies
There are no new required dependencies for langchain, but
[pymemcache](https://github.com/pinterest/pymemcache) is required to
instantiate the new ```MemcachedCache```.

## Example Usage
```python3
from langchain.globals import set_llm_cache
from langchain_openai import OpenAI

from langchain_community.cache import MemcachedCache
from pymemcache.client.base import Client

llm = OpenAI(model="gpt-3.5-turbo-instruct", n=2, best_of=2)
set_llm_cache(MemcachedCache(Client('localhost')))

# The first time, it is not yet in cache, so it should take longer
llm.invoke("Which city is the most crowded city in the USA?")

# The second time it is, so it goes faster
llm.invoke("Which city is the most crowded city in the USA?")
```

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-07 03:07:59 +00:00
Siddharth Murching
cfff2a057e community: Update UC toolkit documentation to use LangGraph APIs (#26778)
- **Description:** Update UC toolkit documentation to show an example of
using recommended LangGraph agent APIs before the existing LangChain
AgentExecutor example. Tested by manually running the updated example
notebook
- **Dependencies:** No new dependencies
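
For reference, the recommended LangGraph agent API looks roughly like
this (a generic sketch with an illustrative tool, not the UC toolkit
notebook's exact setup):

```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent


@tool
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b


agent = create_react_agent(ChatOpenAI(model="gpt-4o-mini"), [add])
result = agent.invoke({"messages": [("user", "What is 2 + 3?")]})
```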

---------

Signed-off-by: Sid Murching <sid.murching@databricks.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-07 02:47:41 +00:00
ZhangShenao
c2072d909a Improvement[Partner] Improve qdrant vector store (#27251)
- Add static method decorator
- Add args for the API doc
- Fix spelling errors

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-07 02:42:41 +00:00
Baptiste Pasquier
81f7daa458 community: add InfinityRerank (#27043)
**Description:** 

- Add a Reranker for Infinity server.

**Dependencies:** 

This wrapper uses
[infinity_client](https://github.com/michaelfeil/infinity/tree/main/libs/client_infinity/infinity_client)
to connect to an Infinity server.

**Tests and docs**

- integration test: test_infinity_rerank.py
- example notebook: infinity_rerank.ipynb
[here](https://github.com/baptiste-pasquier/langchain/blob/feat/infinity-rerank/docs/docs/integrations/document_transformers/infinity_rerank.ipynb)

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-06 17:26:30 -08:00
Erick Friis
2494deb2a4 infra: remove google creds from release and integration test workflows (#27950) 2024-11-07 00:31:10 +00:00
Martin Triska
90189f5639 community: Allow other than default parsers in SharePointLoader and OneDriveLoader (#27716)
## What this PR does?

### Currently `O365BaseLoader` (and consequently both derived loaders)
are limited to `pdf`, `doc`, `docx` files.
- **Solution: here we introduce a _handlers_ attribute that allows
custom handlers to be passed in. This is done in _dict_ form:**

**Example:**
```python
from langchain_community.document_loaders.parsers.documentloader_adapter import DocumentLoaderAsParser
# PR for DocumentLoaderAsParser here: https://github.com/langchain-ai/langchain/pull/27749
from langchain_community.document_loaders.excel import UnstructuredExcelLoader
from langchain_community.document_loaders.onedrive import OneDriveLoader
from langchain_community.document_loaders.parsers.msword import MsWordParser
from langchain_community.document_loaders.parsers.pdf import PDFMinerParser
from langchain_community.document_loaders.parsers.txt import TextParser
from langchain_community.document_loaders.sharepoint import SharePointLoader

xlsx_parser = DocumentLoaderAsParser(UnstructuredExcelLoader, mode="paged")

# create dictionary mapping file types to handlers (parsers)
handlers = {
    "doc": MsWordParser()
    "pdf": PDFMinerParser()
    "txt": TextParser()
    "xlsx": xlsx_parser
}
loader = SharePointLoader(document_library_id="...",
                            handlers=handlers # pass handlers to SharePointLoader
                            )
documents = loader.load()

# works the same in OneDriveLoader
loader = OneDriveLoader(document_library_id="...",
                            handlers=handlers
                            )
```
This dictionary is then passed to `MimeTypeBasedParser`, the same as in
the [current
implementation](5a2cfb49e0/libs/community/langchain_community/document_loaders/parsers/registry.py (L13)).


### Currently `SharePointLoader` and `OneDriveLoader` are separate
loaders that both inherit from `O365BaseLoader`
However both of these implement the same functionality. The only
differences are:
- `SharePointLoader` requires argument `document_library_id` whereas
`OneDriveLoader` requires `drive_id`. These are just different names for
the same thing.
  - `SharePointLoader` implements significantly more features.
- **Solution: `OneDriveLoader` is replaced with an empty shell just
renaming `drive_id` to `document_library_id` and inheriting from
`SharePointLoader`**

**Dependencies:** None
**Twitter handle:** @martintriska1

2024-11-06 17:44:34 -05:00
takahashi
482c168b3e langchain_core: add file_type option to make file type default as png (#27855)
**Description:**
`langchain_core.runnables.graph_mermaid.draw_mermaid_png` calls the
Mermaid rendering API, which returns JPEG by default. To be consistent,
this adds a `file_type` option with a default of `png`.

Given the small size of the change, no tests or docs were added. One
long sentence was also divided into two.
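
For context, the rendering reduces to a single HTTP request; a minimal
sketch against the public mermaid.ink endpoint (not the PR's exact code):

```python
import base64

import requests

mermaid_syntax = "graph TD;\n  A-->B;"
encoded = base64.b64encode(mermaid_syntax.encode("utf8")).decode("ascii")

# Without an explicit type, mermaid.ink renders JPEG; requesting PNG
# keeps the output consistent with draw_mermaid_png's name.
response = requests.get(f"https://mermaid.ink/img/{encoded}?type=png")
response.raise_for_status()
png_bytes = response.content
```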
2024-11-06 22:37:07 +00:00
Roman Solomatin
0f85dea8c8 langchain-huggingface: use separate kwargs for queries and docs (#27857)
Currently `encode_kwargs` is used for both documents and queries, which
leads to wrong embeddings. E.g.:
```python
    model_kwargs = {"device": "cuda", "trust_remote_code": True}
    encode_kwargs = {"normalize_embeddings": False, "prompt_name": "s2p_query"}

    model = HuggingFaceEmbeddings(
        model_name="dunzhang/stella_en_400M_v5",
        model_kwargs=model_kwargs,
        encode_kwargs=encode_kwargs,
    )

    query_embedding = np.array(
        model.embed_query("What are some ways to reduce stress?",)
    )
    document_embedding = np.array(
        model.embed_documents(
            [
                "There are many effective ways to reduce stress. Some common techniques include deep breathing, meditation, and physical activity. Engaging in hobbies, spending time in nature, and connecting with loved ones can also help alleviate stress. Additionally, setting boundaries, practicing self-care, and learning to say no can prevent stress from building up.",
                "Green tea has been consumed for centuries and is known for its potential health benefits. It contains antioxidants that may help protect the body against damage caused by free radicals. Regular consumption of green tea has been associated with improved heart health, enhanced cognitive function, and a reduced risk of certain types of cancer. The polyphenols in green tea may also have anti-inflammatory and weight loss properties.",
            ]
        )
    )
    print(model._client.similarity(query_embedding, document_embedding)) # output: tensor([[0.8421, 0.3317]], dtype=torch.float64)
```
But based on the [model
card](https://huggingface.co/dunzhang/stella_en_400M_v5#sentence-transformers),
the expected usage is like this:
```python
    model_kwargs = {"device": "cuda", "trust_remote_code": True}
    encode_kwargs = {"normalize_embeddings": False}
    query_encode_kwargs = {"normalize_embeddings": False, "prompt_name": "s2p_query"}

    model = HuggingFaceEmbeddings(
        model_name="dunzhang/stella_en_400M_v5",
        model_kwargs=model_kwargs,
        encode_kwargs=encode_kwargs,
        query_encode_kwargs=query_encode_kwargs,
    )

    query_embedding = np.array(
        model.embed_query("What are some ways to reduce stress?", )
    )
    document_embedding = np.array(
        model.embed_documents(
            [
                "There are many effective ways to reduce stress. Some common techniques include deep breathing, meditation, and physical activity. Engaging in hobbies, spending time in nature, and connecting with loved ones can also help alleviate stress. Additionally, setting boundaries, practicing self-care, and learning to say no can prevent stress from building up.",
                "Green tea has been consumed for centuries and is known for its potential health benefits. It contains antioxidants that may help protect the body against damage caused by free radicals. Regular consumption of green tea has been associated with improved heart health, enhanced cognitive function, and a reduced risk of certain types of cancer. The polyphenols in green tea may also have anti-inflammatory and weight loss properties.",
            ]
        )
    )
    print(model._client.similarity(query_embedding, document_embedding)) # tensor([[0.8398, 0.2990]], dtype=torch.float64)
```
2024-11-06 17:35:39 -05:00
Bagatur
60123bef67 docs: fix trim_messages docstring (#27948) 2024-11-06 22:25:13 +00:00
murrlincoln
14f1827953 docs: Adding notebook for cdp agentkit toolkit (#27910)
- **Description:** Adding in the first pass of documentation for the CDP
Agentkit Toolkit
    - **Issue:** N/a
    - **Dependencies:** cdp-langchain
    - **Twitter handle:** @CoinbaseDev

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: John Peterson <john.peterson@coinbase.com>
2024-11-06 13:28:27 -08:00
Eric Pinzur
ea0ad917b0 community: added Document.id support to opensearch vectorstore (#27945)
Description:
* Added support for Document.id in the OpenSearch vector store
* Added test cases to match
2024-11-06 15:04:09 -05:00
Hammad Randhawa
75aa82fedc docs: Completed sentence under the heading "Instantiating a Browser … (#27944)
…Toolkit" in "playwright.ipynb" integration.

- Completed the incomplete sentence in the LangChain Playwright
documentation.

- Enhanced documentation clarity to guide users on best practices for
instantiating browser instances with LangChain Playwright.

Example before:
> "It's always recommended to instantiate using the from_browser method
so that the

Example after:
> "It's always recommended to instantiate using the `from_browser`
method so that the browser context is properly initialized and managed,
ensuring seamless interaction and resource optimization."

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-06 19:55:00 +00:00
Bagatur
67ce05a0a7 core[patch]: make oai tool description optional (#27756) 2024-11-06 18:06:47 +00:00
Bagatur
b2da3115ed docs: document init_chat_model standard params (#27812) 2024-11-06 09:50:07 -08:00
Dobiichi-Origami
395674d503 community: re-arrange function call message parse logic for Qianfan (#27935)
The [PR](https://github.com/langchain-ai/langchain/pull/26208) from two
months ago has a potential bug that breaks `tool_call` handling for
`QianfanChatEndpoint`; this PR fixes it.
2024-11-06 09:58:16 -05:00
Erick Friis
41b7a5169d infra: starter codeowners file (#27929) 2024-11-05 16:43:11 -08:00
ccurme
66966a6e72 openai[patch]: release 0.2.6 (#27924)
Some additions in support of [predicted
outputs](https://platform.openai.com/docs/guides/latency-optimization#use-predicted-outputs)
feature:
- Bump openai sdk version
- Add integration test
- Add example to integration docs

The `prediction` kwarg is already plumbed through model invocation.
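
A rough sketch of using the feature (prompt and prediction content are
illustrative; `prediction` is forwarded to the API as-is):

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

code = "def hello():\n    print('hi')\n"
# Passing the existing file as a prediction lets the API reuse
# unchanged spans, reducing latency on mostly-unchanged outputs.
response = llm.invoke(
    [("user", f"Rename the function to `greet` and return the full file:\n{code}")],
    prediction={"type": "content", "content": code},
)
```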
2024-11-05 23:02:24 +00:00
Erick Friis
a8c473e114 standard-tests: ci pipeline (#27923) 2024-11-05 20:55:38 +00:00
Erick Friis
c3b75560dc infra: release note grep order of operations (#27922) 2024-11-05 12:44:36 -08:00
Erick Friis
b3c81356ca infra: release note compute 2 (#27921) 2024-11-05 12:04:41 -08:00
Erick Friis
bff2a8b772 standard-tests: add tools standard tests (#27899) 2024-11-05 11:44:34 -08:00
SHJUN
f6b2f82099 community: chroma error patch(attribute changed on chroma) (#27827)
Chroma renamed the "max_batch_size" attribute; it is now the
"get_max_batch_size" method. This patch is needed to use
"create_batches", which is defined right below it.

Reference: https://github.com/chroma-core/chroma/pull/2305
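
A minimal sketch of the rename (assuming a recent `chromadb` client;
the collection and data are illustrative):

```python
import chromadb
from chromadb.utils.batch_utils import create_batches

client = chromadb.Client()  # in-memory client, for illustration

# before chroma-core/chroma#2305: limit = client.max_batch_size
limit = client.get_max_batch_size()  # now a method, not an attribute

ids = [f"id-{i}" for i in range(10)]
docs = [f"document {i}" for i in range(10)]
batches = create_batches(api=client, ids=ids, documents=docs)
```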

---------

Signed-off-by: Prithvi Kannan <prithvi.kannan@databricks.com>
Co-authored-by: Prithvi Kannan <46332835+prithvikannan@users.noreply.github.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Jun Yamog <jkyamog@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: ono-hiroki <86904208+ono-hiroki@users.noreply.github.com>
Co-authored-by: Dobiichi-Origami <56953648+Dobiichi-Origami@users.noreply.github.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
Co-authored-by: Duy Huynh <vndee.huynh@gmail.com>
Co-authored-by: Rashmi Pawar <168514198+raspawar@users.noreply.github.com>
Co-authored-by: sifatj <26035630+sifatj@users.noreply.github.com>
Co-authored-by: Eric Pinzur <2641606+epinzur@users.noreply.github.com>
Co-authored-by: Daniel Vu Dao <danielvdao@users.noreply.github.com>
Co-authored-by: Ofer Mendelevitch <ofermend@gmail.com>
Co-authored-by: Stéphane Philippart <wildagsx@gmail.com>
2024-11-05 19:43:11 +00:00
Tomaz Bratanic
a3bbbe6a86 update llm graph transformer documentation (#27905) 2024-11-05 11:54:26 -05:00
Erick Friis
31f4fb790d standard-tests: release 0.3.0 (#27900) 2024-11-04 17:29:15 -08:00
Erick Friis
ba5cba04ff infra: get min versions (#27896) 2024-11-04 23:46:13 +00:00
Bagatur
6973f7214f docs: sidebar capitalization (#27894) 2024-11-04 22:09:32 +00:00
Stéphane Philippart
4b8cd7a09a community: Use new OVHcloud batch embedding (#26209)
- **Description:** change to perform batch embedding server-side rather
than client-side
- **Twitter handle:** @wildagsx

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-11-04 16:40:30 -05:00
Erick Friis
a54f390090 infra: fix prev tag output (#27892) 2024-11-04 12:46:23 -08:00
Erick Friis
75f80c2910 infra: fix prev tag condition (#27891) 2024-11-04 12:42:22 -08:00
Ofer Mendelevitch
d7c39e6dbb community: update Vectara integration (#27869)
- **Description:** Updated Vectara integration
- **Issue:** refreshed descriptions across all demos and added the UDF
reranker
- **Dependencies:** None
- **Twitter handle:** @ofermend

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-04 20:40:39 +00:00
Erick Friis
14a71a6e77 infra: fix prev tag calculation (#27890) 2024-11-04 12:38:39 -08:00
Daniel Vu Dao
5745f3bf78 docs: Update messages.mdx (#27856)
### Description
Updates phrasing for the header of the `Messages` section.

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-04 20:36:31 +00:00
sifatj
e02a5ee03e docs: Update VectorStore as_retriever method url in qa_chat_history_how_to.ipynb (#27844)
**Description**: Update VectorStore `as_retriever` method api reference
url in `qa_chat_history_how_to.ipynb`

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-04 20:34:50 +00:00
sifatj
dd1711f3c2 docs: Update max_marginal_relevance_search api reference url in multi_vector.ipynb (#27843)
**Description**: Update VectorStore `max_marginal_relevance_search` api
reference url in `multi_vector.ipynb`

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-04 20:31:36 +00:00
sifatj
aa1f46a03a docs: Update VectorStore .as_retriever method url in vectorstore_retriever.ipynb (#27842)
**Description**: Update VectorStore `.as_retriever` method url in
`vectorstore_retriever.ipynb`

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-04 20:28:11 +00:00
Eric Pinzur
8eb38622a6 community: fixed bug in GraphVectorStoreRetriever (#27846)
Description:

This fixes an issue that was mistakenly introduced in
https://github.com/langchain-ai/langchain/pull/27253. The issue
currently exists only in `langchain-community==0.3.4`.

Test cases were added to prevent this issue in the future.

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-04 20:27:17 +00:00
sifatj
eecf95df9b docs: Update VectorStore api reference url in rag.ipynb (#27841)
**Description**: Update VectorStore api reference url in `rag.ipynb`

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-04 20:27:03 +00:00
sifatj
50563400fb docs: Update broken vectorstore urls in retrievers.ipynb (#27838)
**Description**: Update outdated `VectorStore` api reference urls in
`retrievers.ipynb`

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-04 20:26:03 +00:00
Bagatur
dfa83531ad qdrant,nomic[minor]: bump core deps (#27849) 2024-11-04 20:19:50 +00:00
Erick Friis
4e5cc84d40 infra: release tag compute (#27836) 2024-11-04 12:16:51 -08:00
Rashmi Pawar
f86a09f82c Add nvidia as provider for embedding, llm (#27810)
Documentation: Add NVIDIA as integration provider

cc: @mattf @dglogo

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-04 19:45:51 +00:00
Erick Friis
0c62684ce1 Revert "infra: add neo4j to package list" (#27887)
Reverts langchain-ai/langchain#27833

Wait for release
2024-11-04 18:18:38 +00:00
Erick Friis
bcf499df16 infra: add neo4j to package list (#27833) 2024-11-04 09:24:04 -08:00
Duy Huynh
a487ec47f4 community: set default output_token_limit value for PowerBIToolkit to fix validation error (#26308)
### Description:
This PR sets a default value of `output_token_limit = 4000` for the
`PowerBIToolkit` to fix an unintentional validation error.

### Problem:
When attempting to run a code snippet from [Langchain's PowerBI toolkit
documentation](https://python.langchain.com/v0.1/docs/integrations/toolkits/powerbi/)
to interact with a `PowerBIDataset`, the following error occurs:

```
pydantic.v1.error_wrappers.ValidationError: 1 validation error for QueryPowerBITool
output_token_limit
  none is not an allowed value (type=type_error.none.not_allowed)
```

### Root Cause:
The issue arises because when creating a `QueryPowerBITool`, the
`output_token_limit` parameter is unintentionally set to `None`, which
is the current default for `PowerBIToolkit`. However, `QueryPowerBITool`
expects a default value of `4000` for `output_token_limit`. This
unintended override causes the error.


17659ca2cd/libs/community/langchain_community/agent_toolkits/powerbi/toolkit.py (L63)

17659ca2cd/libs/community/langchain_community/agent_toolkits/powerbi/toolkit.py (L72-L79)

17659ca2cd/libs/community/langchain_community/tools/powerbi/tool.py (L39)

### Solution:
To resolve this, the default value of `output_token_limit` is now
explicitly set to `4000` in `PowerBIToolkit` to prevent the accidental
assignment of `None`.
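
The failure mode, distilled into a self-contained sketch (class bodies
reduced to the two fields involved; names mirror the linked code):

```python
from typing import Optional

from pydantic.v1 import BaseModel


class QueryPowerBITool(BaseModel):
    output_token_limit: int = 4000  # None is not an allowed value


class PowerBIToolkit(BaseModel):
    output_token_limit: Optional[int] = None  # old default


toolkit = PowerBIToolkit()
# Forwarding the toolkit's None triggers the validation error:
# QueryPowerBITool(output_token_limit=toolkit.output_token_limit)
# The fix defaults the toolkit's field to 4000 so None is never forwarded.
```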

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-11-04 14:34:27 +00:00
Dobiichi-Origami
f7ced5b211 community: read function call from tool_calls for Qianfan (#26208)
I added one more `elif` branch to read tool call messages from `tool_calls`.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-11-04 14:33:32 +00:00
ono-hiroki
b7d549ae88 docs: fix undefined 'data' variable in document_loader_csv.ipynb (#27872)
**Description:** 
This PR addresses an issue in the CSVLoader example where `data` is not
defined, causing a `NameError`. The line `data = loader.load()` is added
to correctly assign the output of `loader.load()` to the `data` variable.
2024-11-04 14:10:56 +00:00
Bagatur
3b0b7cfb74 chroma[minor]: release 0.2.0 (#27840) 2024-11-01 18:12:00 -07:00
5015 changed files with 181145 additions and 497280 deletions

.github/CODEOWNERS vendored Normal file (+2)

@@ -0,0 +1,2 @@
/.github/ @baskaryan @ccurme
/libs/packages.yml @ccurme


@@ -22,7 +22,7 @@ body:
if there's another way to solve your problem:
[LangChain documentation with the integrated search](https://python.langchain.com/docs/get_started/introduction),
-[API Reference](https://api.python.langchain.com/en/stable/),
+[API Reference](https://python.langchain.com/api_reference/),
[GitHub search](https://github.com/langchain-ai/langchain),
[LangChain Github Discussions](https://github.com/langchain-ai/langchain/discussions),
[LangChain Github Issues](https://github.com/langchain-ai/langchain/issues?q=is%3Aissue),


@@ -16,7 +16,7 @@ body:
if there's another way to solve your problem:
[LangChain documentation with the integrated search](https://python.langchain.com/docs/get_started/introduction),
-[API Reference](https://api.python.langchain.com/en/stable/),
+[API Reference](https://python.langchain.com/api_reference/),
[GitHub search](https://github.com/langchain-ai/langchain),
[LangChain Github Discussions](https://github.com/langchain-ai/langchain/discussions),
[LangChain Github Issues](https://github.com/langchain-ai/langchain/issues?q=is%3Aissue),
@@ -29,14 +29,14 @@ body:
options:
- label: I added a very descriptive title to this issue.
required: true
- label: I searched the LangChain documentation with the integrated search.
required: true
- label: I used the GitHub search to find a similar question and didn't find it.
required: true
- label: I am sure that this is a bug in LangChain rather than my code.
required: true
- label: The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
required: true
- label: I posted a self-contained, minimal, reproducible example. A maintainer can copy it and run it AS IS.
required: true
- type: textarea
id: reproduction
validations:


@@ -21,7 +21,7 @@ body:
place to ask your question:
[LangChain documentation with the integrated search](https://python.langchain.com/docs/get_started/introduction),
-[API Reference](https://api.python.langchain.com/en/stable/),
+[API Reference](https://python.langchain.com/api_reference/),
[GitHub search](https://github.com/langchain-ai/langchain),
[LangChain Github Discussions](https://github.com/langchain-ai/langchain/discussions),
[LangChain Github Issues](https://github.com/langchain-ai/langchain/issues?q=is%3Aissue),


@@ -1,8 +1,8 @@
Thank you for contributing to LangChain!
- [ ] **PR title**: "package: description"
- Where "package" is whichever of langchain, community, core, etc. is being modified. Use "docs: ..." for purely docs changes, "templates: ..." for template changes, "infra: ..." for CI changes.
- Example: "community: add foobar LLM"
- Where "package" is whichever of langchain, core, etc. is being modified. Use "docs: ..." for purely docs changes, "infra: ..." for CI changes.
- Example: "core: add foobar LLM"
- [ ] **PR message**: ***Delete this entire checklist*** and replace with
@@ -24,6 +24,5 @@ Additional guidelines:
- Please do not add dependencies to pyproject.toml files (even optional ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
-- If you are adding something to community, do not re-import it in langchain.
-If no one reviews your PR within a few days, please @-mention one of baskaryan, efriis, eyurtsev, ccurme, vbarda, hwchase17.
+If no one reviews your PR within a few days, please @-mention one of baskaryan, eyurtsev, ccurme, vbarda, hwchase17.

.github/actions/uv_setup/action.yml vendored Normal file (+21)

@@ -0,0 +1,21 @@
# TODO: https://docs.astral.sh/uv/guides/integration/github/#caching
name: uv-install
description: Set up Python and uv

inputs:
  python-version:
    description: Python version, supporting MAJOR.MINOR only
    required: true

env:
  UV_VERSION: "0.5.25"

runs:
  using: composite
  steps:
    - name: Install uv and set the python version
      uses: astral-sh/setup-uv@v5
      with:
        version: ${{ env.UV_VERSION }}
        python-version: ${{ inputs.python-version }}


@@ -7,6 +7,8 @@ from typing import Dict, List, Set
from pathlib import Path
import tomllib
from packaging.requirements import Requirement
from get_min_versions import get_min_version_from_toml
@@ -14,7 +16,6 @@ LANGCHAIN_DIRS = [
"libs/core",
"libs/text-splitters",
"libs/langchain",
"libs/community",
]
# when set to True, we are ignoring core dependents
@@ -30,21 +31,14 @@ IGNORED_PARTNERS = [
# specifically in huggingface jobs
# https://github.com/langchain-ai/langchain/issues/25558
"huggingface",
# prompty exhibiting issues with numpy for Python 3.13
# https://github.com/langchain-ai/langchain/actions/runs/12651104685/job/35251034969?pr=29065
"prompty",
]
# Cap python version at 3.12 for some packages with dependencies that are not yet
# compatible with python 3.13 (mostly hf tokenizers).
PY_312_MAX_PACKAGES = [
f"libs/partners/{integration}"
for integration in [
"anthropic",
"chroma",
"couchbase",
"huggingface",
"mistralai",
"nomic",
"qdrant",
]
"libs/partners/voyageai",
"libs/partners/chroma", # https://github.com/chroma-core/chroma/issues/4382
]
@@ -69,15 +63,17 @@ def dependents_graph() -> dict:
# load regular and test deps from pyproject.toml
with open(path, "rb") as f:
pyproject = tomllib.load(f)["tool"]["poetry"]
pyproject = tomllib.load(f)
pkg_dir = "libs" + "/".join(path.split("libs")[1].split("/")[:-1])
for dep in [
*pyproject["dependencies"].keys(),
*pyproject["group"]["test"]["dependencies"].keys(),
*pyproject["project"]["dependencies"],
*pyproject["dependency-groups"]["test"],
]:
requirement = Requirement(dep)
package_name = requirement.name
if "langchain" in dep:
dependents[dep].add(pkg_dir)
dependents[package_name].add(pkg_dir)
continue
# load extended deps from extended_testing_deps.txt
@@ -128,21 +124,15 @@ def _get_configs_for_single_dir(job: str, dir_: str) -> List[Dict[str, str]]:
py_versions = ["3.9", "3.10", "3.11", "3.12", "3.13"]
# custom logic for specific directories
elif dir_ == "libs/partners/milvus":
# milvus poetry doesn't allow 3.12 because they
# declare deps in funny way
# milvus doesn't allow 3.12 because they declare deps in funny way
py_versions = ["3.9", "3.11"]
elif dir_ in PY_312_MAX_PACKAGES:
py_versions = ["3.9", "3.12"]
elif dir_ in ["libs/community", "libs/langchain"] and job == "extended-tests":
# community extended test resolution in 3.12 is slow
# even in uv
py_versions = ["3.9", "3.11"]
elif dir_ == "libs/langchain" and job == "extended-tests":
py_versions = ["3.9", "3.13"]
elif dir_ == "libs/community" and job == "compile-integration-tests":
# community integration deps are slow in 3.12
py_versions = ["3.9", "3.11"]
elif dir_ == ".":
# unable to install with 3.13 because tokenizers doesn't support 3.13 yet
py_versions = ["3.9", "3.12"]
@@ -155,17 +145,17 @@ def _get_configs_for_single_dir(job: str, dir_: str) -> List[Dict[str, str]]:
def _get_pydantic_test_configs(
dir_: str, *, python_version: str = "3.11"
) -> List[Dict[str, str]]:
with open("./libs/core/poetry.lock", "rb") as f:
core_poetry_lock_data = tomllib.load(f)
for package in core_poetry_lock_data["package"]:
with open("./libs/core/uv.lock", "rb") as f:
core_uv_lock_data = tomllib.load(f)
for package in core_uv_lock_data["package"]:
if package["name"] == "pydantic":
core_max_pydantic_minor = package["version"].split(".")[1]
break
with open(f"./{dir_}/poetry.lock", "rb") as f:
dir_poetry_lock_data = tomllib.load(f)
with open(f"./{dir_}/uv.lock", "rb") as f:
dir_uv_lock_data = tomllib.load(f)
for package in dir_poetry_lock_data["package"]:
for package in dir_uv_lock_data["package"]:
if package["name"] == "pydantic":
dir_max_pydantic_minor = package["version"].split(".")[1]
break
@@ -187,11 +177,6 @@ def _get_pydantic_test_configs(
else "0"
)
custom_mins = {
# depends on pydantic-settings 2.4 which requires pydantic 2.7
"libs/community": 7,
}
max_pydantic_minor = min(
int(dir_max_pydantic_minor),
int(core_max_pydantic_minor),
@@ -199,7 +184,6 @@ def _get_pydantic_test_configs(
min_pydantic_minor = max(
int(dir_min_pydantic_minor),
int(core_min_pydantic_minor),
custom_mins.get(dir_, 0),
)
configs = [
@@ -282,6 +266,9 @@ if __name__ == "__main__":
# TODO: update to include all packages that rely on standard-tests (all partner packages)
# note: won't run on external repo partners
dirs_to_run["lint"].add("libs/standard-tests")
dirs_to_run["test"].add("libs/standard-tests")
dirs_to_run["lint"].add("libs/cli")
dirs_to_run["test"].add("libs/cli")
dirs_to_run["test"].add("libs/partners/mistralai")
dirs_to_run["test"].add("libs/partners/openai")
dirs_to_run["test"].add("libs/partners/anthropic")
@@ -289,8 +276,9 @@ if __name__ == "__main__":
dirs_to_run["test"].add("libs/partners/groq")
elif file.startswith("libs/cli"):
# todo: add cli makefile
pass
dirs_to_run["lint"].add("libs/cli")
dirs_to_run["test"].add("libs/cli")
elif file.startswith("libs/partners"):
partner_dir = file.split("/")[2]
if os.path.isdir(f"libs/partners/{partner_dir}") and [
@@ -307,9 +295,8 @@ if __name__ == "__main__":
f"Unknown lib: {file}. check_diff.py likely needs "
"an update for this new library!"
)
elif any(file.startswith(p) for p in ["docs/", "templates/", "cookbook/"]):
if file.startswith("docs/"):
docs_edited = True
elif file.startswith("docs/") or file in ["pyproject.toml", "uv.lock"]: # docs or root uv files
docs_edited = True
dirs_to_run["lint"].add(".")
dependents = dependents_graph()


@@ -10,26 +10,25 @@ if __name__ == "__main__":
toml_data = tomllib.load(file)
# see if we're releasing an rc
version = toml_data["tool"]["poetry"]["version"]
version = toml_data["project"]["version"]
releasing_rc = "rc" in version or "dev" in version
# if not, iterate through dependencies and make sure none allow prereleases
if not releasing_rc:
dependencies = toml_data["tool"]["poetry"]["dependencies"]
for lib in dependencies:
dep_version = dependencies[lib]
dependencies = toml_data["project"]["dependencies"]
for dep_version in dependencies:
dep_version_string = (
dep_version["version"] if isinstance(dep_version, dict) else dep_version
)
if "rc" in dep_version_string:
raise ValueError(
f"Dependency {lib} has a prerelease version. Please remove this."
f"Dependency {dep_version} has a prerelease version. Please remove this."
)
if isinstance(dep_version, dict) and dep_version.get(
"allow-prereleases", False
):
raise ValueError(
f"Dependency {lib} has allow-prereleases set to true. Please remove this."
f"Dependency {dep_version} has allow-prereleases set to true. Please remove this."
)


@@ -1,3 +1,4 @@
from collections import defaultdict
import sys
from typing import Optional
@@ -7,17 +8,23 @@ else:
# for python 3.10 and below, which doesnt have stdlib tomllib
import tomli as tomllib
from packaging.version import parse as parse_version
from packaging.requirements import Requirement
from packaging.specifiers import SpecifierSet
from packaging.version import Version
import requests
from packaging.version import parse
from typing import List
import re
MIN_VERSION_LIBS = [
"langchain-core",
"langchain-community",
"langchain",
"langchain-text-splitters",
"numpy",
"SQLAlchemy",
]
@@ -27,33 +34,81 @@ SKIP_IF_PULL_REQUEST = [
"langchain-core",
"langchain-text-splitters",
"langchain",
"langchain-community",
]
def get_min_version(version: str) -> str:
# base regex for x.x.x with cases for rc/post/etc
# valid strings: https://peps.python.org/pep-0440/#public-version-identifiers
vstring = r"\d+(?:\.\d+){0,2}(?:(?:a|b|rc|\.post|\.dev)\d+)?"
# case ^x.x.x
_match = re.match(f"^\\^({vstring})$", version)
if _match:
return _match.group(1)
def get_pypi_versions(package_name: str) -> List[str]:
"""
Fetch all available versions for a package from PyPI.
# case >=x.x.x,<y.y.y
_match = re.match(f"^>=({vstring}),<({vstring})$", version)
if _match:
_min = _match.group(1)
_max = _match.group(2)
assert parse_version(_min) < parse_version(_max)
return _min
Args:
package_name (str): Name of the package
# case x.x.x
_match = re.match(f"^({vstring})$", version)
if _match:
return _match.group(1)
Returns:
List[str]: List of all available versions
raise ValueError(f"Unrecognized version format: {version}")
Raises:
requests.exceptions.RequestException: If PyPI API request fails
KeyError: If package not found or response format unexpected
"""
pypi_url = f"https://pypi.org/pypi/{package_name}/json"
response = requests.get(pypi_url)
response.raise_for_status()
return list(response.json()["releases"].keys())
def get_minimum_version(package_name: str, spec_string: str) -> Optional[str]:
"""
Find the minimum published version that satisfies the given constraints.
Args:
package_name (str): Name of the package
spec_string (str): Version specification string (e.g., ">=0.2.43,<0.4.0,!=0.3.0")
Returns:
Optional[str]: Minimum compatible version or None if no compatible version found
"""
# rewrite occurrences of ^0.0.z to 0.0.z (can be anywhere in constraint string)
spec_string = re.sub(r"\^0\.0\.(\d+)", r"0.0.\1", spec_string)
# rewrite occurrences of ^0.y.z to >=0.y.z,<0.y+1 (can be anywhere in constraint string)
for y in range(1, 10):
spec_string = re.sub(rf"\^0\.{y}\.(\d+)", rf">=0.{y}.\1,<0.{y+1}", spec_string)
# rewrite occurrences of ^x.y.z to >=x.y.z,<x+1.0.0 (can be anywhere in constraint string)
for x in range(1, 10):
spec_string = re.sub(
rf"\^{x}\.(\d+)\.(\d+)", rf">={x}.\1.\2,<{x+1}", spec_string
)
spec_set = SpecifierSet(spec_string)
all_versions = get_pypi_versions(package_name)
valid_versions = []
for version_str in all_versions:
try:
version = parse(version_str)
if spec_set.contains(version):
valid_versions.append(version)
except ValueError:
continue
return str(min(valid_versions)) if valid_versions else None
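A worked example of the caret rewriting plus minimum-version selection, with a stubbed release list standing in for the PyPI call (a sketch of the logic above, not the script's actual output):

import re
from packaging.specifiers import SpecifierSet
from packaging.version import parse

spec_string = "^0.3.5"
# same rewrite as above: ^0.3.5 -> >=0.3.5,<0.4
spec_string = re.sub(r"\^0\.3\.(\d+)", r">=0.3.\1,<0.4", spec_string)
spec_set = SpecifierSet(spec_string)

releases = ["0.3.4", "0.3.5", "0.3.9", "0.4.0"]  # stub for get_pypi_versions()
valid_versions = [parse(v) for v in releases if spec_set.contains(parse(v))]
print(min(valid_versions))  # 0.3.5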
def _check_python_version_from_requirement(
requirement: Requirement, python_version: str
) -> bool:
if not requirement.marker:
return True
else:
marker_str = str(requirement.marker)
if "python_version" or "python_full_version" in marker_str:
python_version_str = "".join(
char
for char in marker_str
if char.isdigit() or char in (".", "<", ">", "=", ",")
)
return check_python_version(python_version, python_version_str)
return True
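For context, packaging's Requirement exposes the marker that the helper above inspects; a small example (the dependency string is hypothetical):

from packaging.requirements import Requirement

req = Requirement('tomli>=2.0; python_version < "3.11"')
print(req.name)       # tomli
print(req.specifier)  # >=2.0
print(req.marker)     # python_version < "3.11"
# evaluate() accepts an environment override dict:
print(req.marker.evaluate({"python_version": "3.10"}))  # True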
def get_min_version_from_toml(
@@ -67,8 +122,10 @@ def get_min_version_from_toml(
with open(toml_path, "rb") as file:
toml_data = tomllib.load(file)
# Get the dependencies from tool.poetry.dependencies
dependencies = toml_data["tool"]["poetry"]["dependencies"]
dependencies = defaultdict(list)
for dep in toml_data["project"]["dependencies"]:
requirement = Requirement(dep)
dependencies[requirement.name].append(requirement)
# Initialize a dictionary to store the minimum versions
min_versions = {}
@@ -83,20 +140,14 @@ def get_min_version_from_toml(
if lib in dependencies:
if include and lib not in include:
continue
# Get the version string
version_string = dependencies[lib]
if isinstance(version_string, dict):
version_string = version_string["version"]
if isinstance(version_string, list):
version_string = [
vs
for vs in version_string
if check_python_version(python_version, vs["python"])
][0]["version"]
requirements = dependencies[lib]
for requirement in requirements:
if _check_python_version_from_requirement(requirement, python_version):
version_string = str(requirement.specifier)
break
# Use parse_version to get the minimum supported version from version_string
min_version = get_min_version(version_string)
min_version = get_minimum_version(lib, version_string)
# Store the minimum version in the min_versions dictionary
min_versions[lib] = min_version
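The defaultdict grouping above can be exercised in isolation like so (illustrative dependency strings, assuming PEP 621 list-of-strings metadata):

from collections import defaultdict
from packaging.requirements import Requirement

project_deps = [
    'langchain-core>=0.3.0,<0.4',
    'tomli>=2.0; python_version < "3.11"',
]
dependencies = defaultdict(list)
for dep in project_deps:
    requirement = Requirement(dep)
    dependencies[requirement.name].append(requirement)
print(dependencies["langchain-core"][0].specifier)  # >=0.3.0,<0.4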
@@ -112,6 +163,20 @@ def check_python_version(version_string, constraint_string):
:param constraint_string: A string representing the package's Python version constraints (e.g. ">=3.6, <4.0").
:return: True if the version matches the constraints, False otherwise.
"""
# rewrite occurrences of ^0.0.z to 0.0.z (can be anywhere in constraint string)
constraint_string = re.sub(r"\^0\.0\.(\d+)", r"0.0.\1", constraint_string)
# rewrite occurrences of ^0.y.z to >=0.y.z,<0.y+1.0 (can be anywhere in constraint string)
for y in range(1, 10):
constraint_string = re.sub(
rf"\^0\.{y}\.(\d+)", rf">=0.{y}.\1,<0.{y+1}.0", constraint_string
)
# rewrite occurrences of ^x.y.z to >=x.y.z,<x+1.0.0 (can be anywhere in constraint string)
for x in range(1, 10):
constraint_string = re.sub(
rf"\^{x}\.0\.(\d+)", rf">={x}.0.\1,<{x+1}.0.0", constraint_string
)
try:
version = Version(version_string)
constraints = SpecifierSet(constraint_string)
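The hunk is truncated mid-function; the comparison it builds toward is packaging's containment check, e.g.:

from packaging.specifiers import SpecifierSet
from packaging.version import Version

version = Version("3.11.2")
constraints = SpecifierSet(">=3.9,<4.0")
print(version in constraints)  # True -- SpecifierSet supports `in`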

View File

@@ -20,27 +20,24 @@ def get_target_dir(package_name: str) -> Path:
base_path = Path("langchain/libs")
if package_name_short == "experimental":
return base_path / "experimental"
if package_name_short == "community":
return base_path / "community"
return base_path / "partners" / package_name_short
def clean_target_directories(packages: Dict[str, Any]) -> None:
def clean_target_directories(packages: list) -> None:
"""Remove old directories that will be replaced."""
for package in packages["packages"]:
if package["repo"] != "langchain-ai/langchain":
target_dir = get_target_dir(package["name"])
if target_dir.exists():
print(f"Removing {target_dir}")
shutil.rmtree(target_dir)
for package in packages:
target_dir = get_target_dir(package["name"])
if target_dir.exists():
print(f"Removing {target_dir}")
shutil.rmtree(target_dir)
def move_libraries(packages: Dict[str, Any]) -> None:
def move_libraries(packages: list) -> None:
"""Move libraries from their source locations to the target directories."""
for package in packages["packages"]:
# Skip if it's the main langchain repo or disabled
if package["repo"] == "langchain-ai/langchain" or package.get(
"disabled", False
):
continue
for package in packages:
repo_name = package["repo"].split("/")[1]
source_path = package["path"]
@@ -68,13 +65,30 @@ def main():
"""Main function to orchestrate the library sync process."""
try:
# Load packages configuration
packages = load_packages_yaml()
package_yaml = load_packages_yaml()
# Clean target directories
clean_target_directories(packages)
clean_target_directories([
p
for p in package_yaml["packages"]
if (p["repo"].startswith("langchain-ai/") or p.get("include_in_api_ref"))
and p["repo"] != "langchain-ai/langchain"
])
# Move libraries to their new locations
move_libraries(packages)
move_libraries([
p
for p in package_yaml["packages"]
if not p.get("disabled", False)
and (p["repo"].startswith("langchain-ai/") or p.get("include_in_api_ref"))
and p["repo"] != "langchain-ai/langchain"
])
# Delete ones without a pyproject.toml
for partner in Path("langchain/libs/partners").iterdir():
if partner.is_dir() and not (partner / "pyproject.toml").exists():
print(f"Removing {partner} as it does not have a pyproject.toml")
shutil.rmtree(partner)
print("Library sync completed successfully!")

View File

@@ -1,4 +1,3 @@
libs/community/langchain_community/llms/yuan2.py
"NotIn": "not in",
- `/checkin`: Check-in
docs/docs/integrations/providers/trulens.mdx

View File

@@ -13,7 +13,7 @@ on:
description: "Python version to use"
env:
POETRY_VERSION: "1.7.1"
UV_FROZEN: "true"
jobs:
build:
@@ -21,25 +21,23 @@ jobs:
run:
working-directory: ${{ inputs.working-directory }}
runs-on: ubuntu-latest
name: "poetry run pytest -m compile tests/integration_tests #${{ inputs.python-version }}"
timeout-minutes: 20
name: "uv run pytest -m compile tests/integration_tests #${{ inputs.python-version }}"
steps:
- uses: actions/checkout@v4
- name: Set up Python ${{ inputs.python-version }} + Poetry ${{ env.POETRY_VERSION }}
uses: "./.github/actions/poetry_setup"
- name: Set up Python ${{ inputs.python-version }} + uv
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ inputs.python-version }}
poetry-version: ${{ env.POETRY_VERSION }}
working-directory: ${{ inputs.working-directory }}
cache-key: compile-integration
- name: Install integration dependencies
shell: bash
run: poetry install --with=test_integration,test
run: uv sync --group test --group test_integration
- name: Check integration tests compile
shell: bash
run: poetry run pytest -m compile tests/integration_tests
run: uv run pytest -m compile tests/integration_tests
- name: Ensure the tests did not create any additional files
shell: bash

View File

@@ -6,13 +6,14 @@ on:
working-directory:
required: true
type: string
description: "From which folder this pipeline executes"
python-version:
required: true
type: string
description: "Python version to use"
env:
POETRY_VERSION: "1.7.1"
UV_FROZEN: "true"
jobs:
build:
@@ -24,28 +25,14 @@ jobs:
steps:
- uses: actions/checkout@v4
- name: Set up Python ${{ inputs.python-version }} + Poetry ${{ env.POETRY_VERSION }}
uses: "./.github/actions/poetry_setup"
- name: Set up Python ${{ inputs.python-version }} + uv
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ inputs.python-version }}
poetry-version: ${{ env.POETRY_VERSION }}
working-directory: ${{ inputs.working-directory }}
cache-key: core
- name: Install dependencies
shell: bash
run: poetry install --with test,test_integration
- name: Install deps outside pyproject
if: ${{ startsWith(inputs.working-directory, 'libs/community/') }}
shell: bash
run: poetry run pip install "boto3<2" "google-cloud-aiplatform<2"
- name: 'Authenticate to Google Cloud'
id: 'auth'
uses: google-github-actions/auth@v2
with:
credentials_json: '${{ secrets.GOOGLE_CREDENTIALS }}'
run: uv sync --group test --group test_integration
- name: Run integration tests
shell: bash
@@ -73,8 +60,6 @@ jobs:
NOMIC_API_KEY: ${{ secrets.NOMIC_API_KEY }}
WATSONX_APIKEY: ${{ secrets.WATSONX_APIKEY }}
WATSONX_PROJECT_ID: ${{ secrets.WATSONX_PROJECT_ID }}
PINECONE_API_KEY: ${{ secrets.PINECONE_API_KEY }}
PINECONE_ENVIRONMENT: ${{ secrets.PINECONE_ENVIRONMENT }}
ASTRA_DB_API_ENDPOINT: ${{ secrets.ASTRA_DB_API_ENDPOINT }}
ASTRA_DB_APPLICATION_TOKEN: ${{ secrets.ASTRA_DB_APPLICATION_TOKEN }}
ASTRA_DB_KEYSPACE: ${{ secrets.ASTRA_DB_KEYSPACE }}
@@ -85,6 +70,8 @@ jobs:
VOYAGE_API_KEY: ${{ secrets.VOYAGE_API_KEY }}
COHERE_API_KEY: ${{ secrets.COHERE_API_KEY }}
UPSTAGE_API_KEY: ${{ secrets.UPSTAGE_API_KEY }}
XAI_API_KEY: ${{ secrets.XAI_API_KEY }}
PPLX_API_KEY: ${{ secrets.PPLX_API_KEY }}
run: |
make integration_tests

View File

@@ -13,38 +13,25 @@ on:
description: "Python version to use"
env:
POETRY_VERSION: "1.7.1"
WORKDIR: ${{ inputs.working-directory == '' && '.' || inputs.working-directory }}
# This env var allows us to get inline annotations when ruff has complaints.
RUFF_OUTPUT_FORMAT: github
UV_FROZEN: "true"
jobs:
build:
name: "make lint #${{ inputs.python-version }}"
runs-on: ubuntu-latest
timeout-minutes: 20
steps:
- uses: actions/checkout@v4
- name: Set up Python ${{ inputs.python-version }} + Poetry ${{ env.POETRY_VERSION }}
uses: "./.github/actions/poetry_setup"
- name: Set up Python ${{ inputs.python-version }} + uv
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ inputs.python-version }}
poetry-version: ${{ env.POETRY_VERSION }}
working-directory: ${{ inputs.working-directory }}
cache-key: lint-with-extras
- name: Check Poetry File
shell: bash
working-directory: ${{ inputs.working-directory }}
run: |
poetry check
- name: Check lock file
shell: bash
working-directory: ${{ inputs.working-directory }}
run: |
poetry lock --check
- name: Install dependencies
# Also installs dev/lint/test/typing dependencies, to ensure we have
@@ -57,17 +44,7 @@ jobs:
# It doesn't matter how you change it, any change will cause a cache-bust.
working-directory: ${{ inputs.working-directory }}
run: |
poetry install --with lint,typing
- name: Get .mypy_cache to speed up mypy
uses: actions/cache@v4
env:
SEGMENT_DOWNLOAD_TIMEOUT_MIN: "2"
with:
path: |
${{ env.WORKDIR }}/.mypy_cache
key: mypy-lint-${{ runner.os }}-${{ runner.arch }}-py${{ inputs.python-version }}-${{ inputs.working-directory }}-${{ hashFiles(format('{0}/poetry.lock', inputs.working-directory)) }}
uv sync --group lint --group typing
- name: Analysing the code with our lint
working-directory: ${{ inputs.working-directory }}
@@ -86,21 +63,12 @@ jobs:
if: ${{ ! startsWith(inputs.working-directory, 'libs/partners/') }}
working-directory: ${{ inputs.working-directory }}
run: |
poetry install --with test
uv sync --inexact --group test
- name: Install unit+integration test dependencies
if: ${{ startsWith(inputs.working-directory, 'libs/partners/') }}
working-directory: ${{ inputs.working-directory }}
run: |
poetry install --with test,test_integration
- name: Get .mypy_cache_test to speed up mypy
uses: actions/cache@v4
env:
SEGMENT_DOWNLOAD_TIMEOUT_MIN: "2"
with:
path: |
${{ env.WORKDIR }}/.mypy_cache_test
key: mypy-test-${{ runner.os }}-${{ runner.arch }}-py${{ inputs.python-version }}-${{ inputs.working-directory }}-${{ hashFiles(format('{0}/poetry.lock', inputs.working-directory)) }}
uv sync --inexact --group test --group test_integration
- name: Analysing the code with our lint
working-directory: ${{ inputs.working-directory }}

View File

@@ -12,6 +12,7 @@ on:
working-directory:
required: true
type: string
description: "From which folder this pipeline executes"
default: 'libs/langchain'
dangerous-nonmaster-release:
required: false
@@ -21,7 +22,8 @@ on:
env:
PYTHON_VERSION: "3.11"
POETRY_VERSION: "1.7.1"
UV_FROZEN: "true"
UV_NO_SYNC: "true"
jobs:
build:
@@ -36,13 +38,10 @@ jobs:
steps:
- uses: actions/checkout@v4
- name: Set up Python + Poetry ${{ env.POETRY_VERSION }}
uses: "./.github/actions/poetry_setup"
- name: Set up Python + uv
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ env.PYTHON_VERSION }}
poetry-version: ${{ env.POETRY_VERSION }}
working-directory: ${{ inputs.working-directory }}
cache-key: release
# We want to keep this build stage *separate* from the release stage,
# so that there's no sharing of permissions between them.
@@ -56,7 +55,7 @@ jobs:
# > from the publish job.
# https://github.com/pypa/gh-action-pypi-publish#non-goals
- name: Build project for distribution
run: poetry build
run: uv build
working-directory: ${{ inputs.working-directory }}
- name: Upload build
@@ -67,11 +66,18 @@ jobs:
- name: Check Version
id: check-version
shell: bash
shell: python
working-directory: ${{ inputs.working-directory }}
run: |
echo pkg-name="$(poetry version | cut -d ' ' -f 1)" >> $GITHUB_OUTPUT
echo version="$(poetry version --short)" >> $GITHUB_OUTPUT
import os
import tomllib
with open("pyproject.toml", "rb") as f:
data = tomllib.load(f)
pkg_name = data["project"]["name"]
version = data["project"]["version"]
with open(os.environ["GITHUB_OUTPUT"], "a") as f:
f.write(f"pkg-name={pkg_name}\n")
f.write(f"version={version}\n")
release-notes:
needs:
- build
@@ -95,9 +101,47 @@ jobs:
PKG_NAME: ${{ needs.build.outputs.pkg-name }}
VERSION: ${{ needs.build.outputs.version }}
run: |
REGEX="^$PKG_NAME==\\d+\\.\\d+\\.\\d+\$"
echo $REGEX
PREV_TAG=$(git tag --sort=-creatordate | grep -P $REGEX || true | head -1)
# Handle regular versions and pre-release versions differently
if [[ "$VERSION" == *"-"* ]]; then
# This is a pre-release version (contains a hyphen)
# Extract the base version without the pre-release suffix
BASE_VERSION=${VERSION%%-*}
# Look for the latest release of the same base version
REGEX="^$PKG_NAME==$BASE_VERSION\$"
PREV_TAG=$(git tag --sort=-creatordate | (grep -P "$REGEX" || true) | head -1)
# If no exact base version match, look for the latest release of any kind
if [ -z "$PREV_TAG" ]; then
REGEX="^$PKG_NAME==\\d+\\.\\d+\\.\\d+\$"
PREV_TAG=$(git tag --sort=-creatordate | (grep -P "$REGEX" || true) | head -1)
fi
else
# Regular version handling
PREV_TAG="$PKG_NAME==${VERSION%.*}.$(( ${VERSION##*.} - 1 ))"; [[ "${VERSION##*.}" -eq 0 ]] && PREV_TAG=""
# backup case if releasing e.g. 0.3.0, looks up last release
# note if last release (chronologically) was e.g. 0.1.47 it will get
# that instead of the last 0.2 release
if [ -z "$PREV_TAG" ]; then
REGEX="^$PKG_NAME==\\d+\\.\\d+\\.\\d+\$"
echo $REGEX
PREV_TAG=$(git tag --sort=-creatordate | (grep -P $REGEX || true) | head -1)
fi
fi
# if PREV_TAG is empty, let it be empty
if [ -z "$PREV_TAG" ]; then
echo "No previous tag found - first release"
else
# confirm prev-tag actually exists in git repo with git tag
GIT_TAG_RESULT=$(git tag -l "$PREV_TAG")
if [ -z "$GIT_TAG_RESULT" ]; then
echo "Previous tag $PREV_TAG not found in git repo"
exit 1
fi
fi
TAG="${PKG_NAME}==${VERSION}"
if [ "$TAG" == "$PREV_TAG" ]; then
echo "No new version to release"
@@ -146,6 +190,7 @@ jobs:
- release-notes
- test-pypi-publish
runs-on: ubuntu-latest
timeout-minutes: 20
steps:
- uses: actions/checkout@v4
@@ -162,15 +207,18 @@ jobs:
# - The package is published, and it breaks on the missing dependency when
# used in the real world.
- name: Set up Python + Poetry ${{ env.POETRY_VERSION }}
uses: "./.github/actions/poetry_setup"
- name: Set up Python + uv
uses: "./.github/actions/uv_setup"
id: setup-python
with:
python-version: ${{ env.PYTHON_VERSION }}
poetry-version: ${{ env.POETRY_VERSION }}
working-directory: ${{ inputs.working-directory }}
- name: Import published package
- uses: actions/download-artifact@v4
with:
name: dist
path: ${{ inputs.working-directory }}/dist/
- name: Import dist package
shell: bash
working-directory: ${{ inputs.working-directory }}
env:
@@ -186,27 +234,21 @@ jobs:
# - attempt install again after 5 seconds if it fails because there is
# sometimes a delay in availability on test pypi
run: |
poetry run pip install \
--extra-index-url https://test.pypi.org/simple/ \
"$PKG_NAME==$VERSION" || \
( \
sleep 15 && \
poetry run pip install \
--extra-index-url https://test.pypi.org/simple/ \
"$PKG_NAME==$VERSION" \
)
uv venv
VIRTUAL_ENV=.venv uv pip install dist/*.whl
# Replace all dashes in the package name with underscores,
# since that's how Python imports packages with dashes in the name.
IMPORT_NAME="$(echo "$PKG_NAME" | sed s/-/_/g)"
# also remove _official suffix
IMPORT_NAME="$(echo "$PKG_NAME" | sed s/-/_/g | sed s/_official//g)"
poetry run python -c "import $IMPORT_NAME; print(dir($IMPORT_NAME))"
uv run python -c "import $IMPORT_NAME; print(dir($IMPORT_NAME))"
- name: Import test dependencies
run: poetry install --with test
run: uv sync --group test
working-directory: ${{ inputs.working-directory }}
# Overwrite the local version of the package with the test PyPI version.
# Overwrite the local version of the package with the built version
- name: Import published package (again)
working-directory: ${{ inputs.working-directory }}
shell: bash
@@ -214,9 +256,7 @@ jobs:
PKG_NAME: ${{ needs.build.outputs.pkg-name }}
VERSION: ${{ needs.build.outputs.version }}
run: |
poetry run pip install \
--extra-index-url https://test.pypi.org/simple/ \
"$PKG_NAME==$VERSION"
VIRTUAL_ENV=.venv uv pip install dist/*.whl
- name: Run unit tests
run: make tests
@@ -225,15 +265,15 @@ jobs:
- name: Check for prerelease versions
working-directory: ${{ inputs.working-directory }}
run: |
poetry run python $GITHUB_WORKSPACE/.github/scripts/check_prerelease_dependencies.py pyproject.toml
uv run python $GITHUB_WORKSPACE/.github/scripts/check_prerelease_dependencies.py pyproject.toml
- name: Get minimum versions
working-directory: ${{ inputs.working-directory }}
id: min-version
run: |
poetry run pip install packaging
python_version="$(poetry run python --version | awk '{print $2}')"
min_versions="$(poetry run python $GITHUB_WORKSPACE/.github/scripts/get_min_versions.py pyproject.toml release $python_version)"
VIRTUAL_ENV=.venv uv pip install packaging requests
python_version="$(uv run python --version | awk '{print $2}')"
min_versions="$(uv run python $GITHUB_WORKSPACE/.github/scripts/get_min_versions.py pyproject.toml release $python_version)"
echo "min-versions=$min_versions" >> "$GITHUB_OUTPUT"
echo "min-versions=$min_versions"
@@ -242,18 +282,12 @@ jobs:
env:
MIN_VERSIONS: ${{ steps.min-version.outputs.min-versions }}
run: |
poetry run pip install --force-reinstall $MIN_VERSIONS --editable .
VIRTUAL_ENV=.venv uv pip install --force-reinstall $MIN_VERSIONS --editable .
make tests
working-directory: ${{ inputs.working-directory }}
- name: 'Authenticate to Google Cloud'
id: 'auth'
uses: google-github-actions/auth@v2
with:
credentials_json: '${{ secrets.GOOGLE_CREDENTIALS }}'
- name: Import integration test dependencies
run: poetry install --with test,test_integration
run: uv sync --group test --group test_integration
working-directory: ${{ inputs.working-directory }}
- name: Run integration tests
@@ -281,8 +315,6 @@ jobs:
NOMIC_API_KEY: ${{ secrets.NOMIC_API_KEY }}
WATSONX_APIKEY: ${{ secrets.WATSONX_APIKEY }}
WATSONX_PROJECT_ID: ${{ secrets.WATSONX_PROJECT_ID }}
PINECONE_API_KEY: ${{ secrets.PINECONE_API_KEY }}
PINECONE_ENVIRONMENT: ${{ secrets.PINECONE_ENVIRONMENT }}
ASTRA_DB_API_ENDPOINT: ${{ secrets.ASTRA_DB_API_ENDPOINT }}
ASTRA_DB_APPLICATION_TOKEN: ${{ secrets.ASTRA_DB_APPLICATION_TOKEN }}
ASTRA_DB_KEYSPACE: ${{ secrets.ASTRA_DB_KEYSPACE }}
@@ -293,15 +325,98 @@ jobs:
VOYAGE_API_KEY: ${{ secrets.VOYAGE_API_KEY }}
UPSTAGE_API_KEY: ${{ secrets.UPSTAGE_API_KEY }}
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
XAI_API_KEY: ${{ secrets.XAI_API_KEY }}
DEEPSEEK_API_KEY: ${{ secrets.DEEPSEEK_API_KEY }}
PPLX_API_KEY: ${{ secrets.PPLX_API_KEY }}
run: make integration_tests
working-directory: ${{ inputs.working-directory }}
# Test select published packages against new core
test-prior-published-packages-against-new-core:
needs:
- build
- release-notes
- test-pypi-publish
- pre-release-checks
runs-on: ubuntu-latest
strategy:
matrix:
partner: [openai, anthropic]
fail-fast: false # Continue testing other partners if one fails
env:
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
AZURE_OPENAI_API_VERSION: ${{ secrets.AZURE_OPENAI_API_VERSION }}
AZURE_OPENAI_API_BASE: ${{ secrets.AZURE_OPENAI_API_BASE }}
AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_CHAT_DEPLOYMENT_NAME }}
AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME }}
AZURE_OPENAI_LLM_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LLM_DEPLOYMENT_NAME }}
AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME }}
steps:
- uses: actions/checkout@v4
# We implement this conditional as GitHub Actions does not have good support
# for conditionally needing steps. https://github.com/actions/runner/issues/491
- name: Check if libs/core
run: |
if [ "${{ startsWith(inputs.working-directory, 'libs/core') }}" != "true" ]; then
echo "Not in libs/core. Exiting successfully."
exit 0
fi
- name: Set up Python + uv
if: startsWith(inputs.working-directory, 'libs/core')
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ env.PYTHON_VERSION }}
- uses: actions/download-artifact@v4
if: startsWith(inputs.working-directory, 'libs/core')
with:
name: dist
path: ${{ inputs.working-directory }}/dist/
- name: Test against ${{ matrix.partner }}
if: startsWith(inputs.working-directory, 'libs/core')
run: |
# Identify latest tag
LATEST_PACKAGE_TAG="$(
git ls-remote --tags origin "langchain-${{ matrix.partner }}*" \
| awk '{print $2}' \
| sed 's|refs/tags/||' \
| sort -Vr \
| head -n 1
)"
echo "Latest package tag: $LATEST_PACKAGE_TAG"
# Shallow-fetch just that single tag
git fetch --depth=1 origin tag "$LATEST_PACKAGE_TAG"
# Checkout the latest package files
rm -rf $GITHUB_WORKSPACE/libs/partners/${{ matrix.partner }}/*
rm -rf $GITHUB_WORKSPACE/libs/standard-tests/*
cd $GITHUB_WORKSPACE/libs/
git checkout "$LATEST_PACKAGE_TAG" -- standard-tests/
git checkout "$LATEST_PACKAGE_TAG" -- partners/${{ matrix.partner }}/
cd partners/${{ matrix.partner }}
# Print as a sanity check
echo "Version number from pyproject.toml: "
cat pyproject.toml | grep "version = "
# Run tests
uv sync --group test --group test_integration
uv pip install ../../core/dist/*.whl
make integration_tests
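The latest-tag lookup relies on `sort -Vr`; a Python sketch of the same version-aware ordering with hypothetical refs:

from packaging.version import parse

refs = [
    "refs/tags/langchain-openai==0.3.9",
    "refs/tags/langchain-openai==0.3.10",
]
tags = [r.removeprefix("refs/tags/") for r in refs]
# version-aware ordering, like `sort -Vr | head -n 1`
latest = max(tags, key=lambda t: parse(t.split("==", 1)[1]))
print(latest)  # langchain-openai==0.3.10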
publish:
needs:
- build
- release-notes
- test-pypi-publish
- pre-release-checks
- test-prior-published-packages-against-new-core
runs-on: ubuntu-latest
permissions:
# This permission is used for trusted publishing:
@@ -318,13 +433,10 @@ jobs:
steps:
- uses: actions/checkout@v4
- name: Set up Python + Poetry ${{ env.POETRY_VERSION }}
uses: "./.github/actions/poetry_setup"
- name: Set up Python + uv
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ env.PYTHON_VERSION }}
poetry-version: ${{ env.POETRY_VERSION }}
working-directory: ${{ inputs.working-directory }}
cache-key: release
- uses: actions/download-artifact@v4
with:
@@ -360,13 +472,10 @@ jobs:
steps:
- uses: actions/checkout@v4
- name: Set up Python + Poetry ${{ env.POETRY_VERSION }}
uses: "./.github/actions/poetry_setup"
- name: Set up Python + uv
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ env.PYTHON_VERSION }}
poetry-version: ${{ env.POETRY_VERSION }}
working-directory: ${{ inputs.working-directory }}
cache-key: release
- uses: actions/download-artifact@v4
with:

View File

@@ -1,62 +0,0 @@
name: release_docker
on:
workflow_call:
inputs:
dockerfile:
required: true
type: string
description: "Path to the Dockerfile to build"
image:
required: true
type: string
description: "Name of the image to build"
env:
TEST_TAG: ${{ inputs.image }}:test
LATEST_TAG: ${{ inputs.image }}:latest
jobs:
docker:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Get git tag
uses: actions-ecosystem/action-get-latest-tag@v1
id: get-latest-tag
- name: Set docker tag
env:
VERSION: ${{ steps.get-latest-tag.outputs.tag }}
run: |
echo "VERSION_TAG=${{ inputs.image }}:${VERSION#v}" >> $GITHUB_ENV
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build for Test
uses: docker/build-push-action@v5
with:
context: .
file: ${{ inputs.dockerfile }}
load: true
tags: ${{ env.TEST_TAG }}
- name: Test
run: |
docker run --rm ${{ env.TEST_TAG }} python -c "import langchain"
- name: Build and Push to Docker Hub
uses: docker/build-push-action@v5
with:
context: .
file: ${{ inputs.dockerfile }}
# We can only build for the intersection of platforms supported by
# QEMU and base python image, for now build only for
# linux/amd64 and linux/arm64
platforms: linux/amd64,linux/arm64
tags: ${{ env.LATEST_TAG }},${{ env.VERSION_TAG }}
push: true

View File

@@ -13,7 +13,8 @@ on:
description: "Python version to use"
env:
POETRY_VERSION: "1.7.1"
UV_FROZEN: "true"
UV_NO_SYNC: "true"
jobs:
build:
@@ -21,21 +22,19 @@ jobs:
run:
working-directory: ${{ inputs.working-directory }}
runs-on: ubuntu-latest
timeout-minutes: 20
name: "make test #${{ inputs.python-version }}"
steps:
- uses: actions/checkout@v4
- name: Set up Python ${{ inputs.python-version }} + Poetry ${{ env.POETRY_VERSION }}
uses: "./.github/actions/poetry_setup"
- name: Set up Python ${{ inputs.python-version }} + uv
uses: "./.github/actions/uv_setup"
id: setup-python
with:
python-version: ${{ inputs.python-version }}
poetry-version: ${{ env.POETRY_VERSION }}
working-directory: ${{ inputs.working-directory }}
cache-key: core
- name: Install dependencies
shell: bash
run: poetry install --with test
run: uv sync --group test --dev
- name: Run core tests
shell: bash
@@ -47,9 +46,9 @@ jobs:
id: min-version
shell: bash
run: |
poetry run pip install packaging tomli
python_version="$(poetry run python --version | awk '{print $2}')"
min_versions="$(poetry run python $GITHUB_WORKSPACE/.github/scripts/get_min_versions.py pyproject.toml pull_request $python_version)"
VIRTUAL_ENV=.venv uv pip install packaging tomli requests
python_version="$(uv run python --version | awk '{print $2}')"
min_versions="$(uv run python $GITHUB_WORKSPACE/.github/scripts/get_min_versions.py pyproject.toml pull_request $python_version)"
echo "min-versions=$min_versions" >> "$GITHUB_OUTPUT"
echo "min-versions=$min_versions"
@@ -58,7 +57,7 @@ jobs:
env:
MIN_VERSIONS: ${{ steps.min-version.outputs.min-versions }}
run: |
poetry run pip install $MIN_VERSIONS
VIRTUAL_ENV=.venv uv pip install $MIN_VERSIONS
make tests
working-directory: ${{ inputs.working-directory }}

View File

@@ -9,34 +9,33 @@ on:
description: "Python version to use"
env:
POETRY_VERSION: "1.7.1"
UV_FROZEN: "true"
jobs:
build:
runs-on: ubuntu-latest
timeout-minutes: 20
name: "check doc imports #${{ inputs.python-version }}"
steps:
- uses: actions/checkout@v4
- name: Set up Python ${{ inputs.python-version }} + Poetry ${{ env.POETRY_VERSION }}
uses: "./.github/actions/poetry_setup"
- name: Set up Python ${{ inputs.python-version }} + uv
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ inputs.python-version }}
poetry-version: ${{ env.POETRY_VERSION }}
cache-key: core
- name: Install dependencies
shell: bash
run: poetry install --with test
run: uv sync --group test
- name: Install langchain editable
run: |
poetry run pip install langchain-experimental -e libs/core libs/langchain libs/community
VIRTUAL_ENV=.venv uv pip install langchain-experimental langchain-community -e libs/core libs/langchain
- name: Check doc imports
shell: bash
run: |
poetry run python docs/scripts/check_imports.py
uv run python docs/scripts/check_imports.py
- name: Ensure the test did not create any additional files
shell: bash

View File

@@ -18,7 +18,8 @@ on:
description: "Pydantic version to test."
env:
POETRY_VERSION: "1.7.1"
UV_FROZEN: "true"
UV_NO_SYNC: "true"
jobs:
build:
@@ -26,25 +27,23 @@ jobs:
run:
working-directory: ${{ inputs.working-directory }}
runs-on: ubuntu-latest
timeout-minutes: 20
name: "make test # pydantic: ~=${{ inputs.pydantic-version }}, python: ${{ inputs.python-version }}, "
steps:
- uses: actions/checkout@v4
- name: Set up Python ${{ inputs.python-version }} + Poetry ${{ env.POETRY_VERSION }}
uses: "./.github/actions/poetry_setup"
- name: Set up Python ${{ inputs.python-version }} + uv
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ inputs.python-version }}
poetry-version: ${{ env.POETRY_VERSION }}
working-directory: ${{ inputs.working-directory }}
cache-key: core
- name: Install dependencies
shell: bash
run: poetry install --with test
run: uv sync --group test
- name: Overwrite pydantic version
shell: bash
run: poetry run pip install pydantic~=${{ inputs.pydantic-version }}
run: VIRTUAL_ENV=.venv uv pip install pydantic~=${{ inputs.pydantic-version }}
- name: Run core tests
shell: bash

View File

@@ -14,8 +14,8 @@ on:
description: "Release from a non-master branch (danger!)"
env:
POETRY_VERSION: "1.7.1"
PYTHON_VERSION: "3.10"
PYTHON_VERSION: "3.11"
UV_FROZEN: "true"
jobs:
build:
@@ -29,13 +29,10 @@ jobs:
steps:
- uses: actions/checkout@v4
- name: Set up Python + Poetry ${{ env.POETRY_VERSION }}
uses: "./.github/actions/poetry_setup"
- name: Set up Python + uv
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ env.PYTHON_VERSION }}
poetry-version: ${{ env.POETRY_VERSION }}
working-directory: ${{ inputs.working-directory }}
cache-key: release
# We want to keep this build stage *separate* from the release stage,
# so that there's no sharing of permissions between them.
@@ -49,7 +46,7 @@ jobs:
# > from the publish job.
# https://github.com/pypa/gh-action-pypi-publish#non-goals
- name: Build project for distribution
run: poetry build
run: uv build
working-directory: ${{ inputs.working-directory }}
- name: Upload build
@@ -60,11 +57,18 @@ jobs:
- name: Check Version
id: check-version
shell: bash
shell: python
working-directory: ${{ inputs.working-directory }}
run: |
echo pkg-name="$(poetry version | cut -d ' ' -f 1)" >> $GITHUB_OUTPUT
echo version="$(poetry version --short)" >> $GITHUB_OUTPUT
import os
import tomllib
with open("pyproject.toml", "rb") as f:
data = tomllib.load(f)
pkg_name = data["project"]["name"]
version = data["project"]["version"]
with open(os.environ["GITHUB_OUTPUT"], "a") as f:
f.write(f"pkg-name={pkg_name}\n")
f.write(f"version={version}\n")
publish:
needs:

View File

@@ -5,7 +5,6 @@ on:
schedule:
- cron: '0 13 * * *'
env:
POETRY_VERSION: "1.8.1"
PYTHON_VERSION: "3.11"
jobs:
@@ -27,7 +26,20 @@ jobs:
id: get-unsorted-repos
uses: mikefarah/yq@master
with:
cmd: yq '.packages[].repo' langchain/libs/packages.yml
cmd: |
yq '
.packages[]
| select(
(
(.repo | test("^langchain-ai/"))
and
(.repo != "langchain-ai/langchain")
)
or
(.include_in_api_ref // false)
)
| .repo
' langchain/libs/packages.yml
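The yq selector translates to a simple predicate; a Python equivalent (a sketch assuming PyYAML is available, using the same path as the workflow):

import yaml

with open("langchain/libs/packages.yml") as f:
    data = yaml.safe_load(f)

repos = [
    p["repo"]
    for p in data["packages"]
    if (p["repo"].startswith("langchain-ai/") and p["repo"] != "langchain-ai/langchain")
    or p.get("include_in_api_ref", False)
]
print("\n".join(repos))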
- name: Parse YAML and checkout repos
env:
@@ -37,29 +49,25 @@ jobs:
# Get unique repositories
REPOS=$(echo "$REPOS_UNSORTED" | sort -u)
# Checkout each unique repository
# Checkout each unique repository that is in langchain-ai org
for repo in $REPOS; do
if [ "$repo" != "langchain-ai/langchain" ]; then
REPO_NAME=$(echo $repo | cut -d'/' -f2)
echo "Checking out $repo to $REPO_NAME"
git clone --depth 1 https://github.com/$repo.git $REPO_NAME
fi
REPO_NAME=$(echo $repo | cut -d'/' -f2)
echo "Checking out $repo to $REPO_NAME"
git clone --depth 1 https://github.com/$repo.git $REPO_NAME
done
- name: Set up Python ${{ env.PYTHON_VERSION }} + Poetry ${{ env.POETRY_VERSION }}
uses: "./langchain/.github/actions/poetry_setup"
- name: Setup python ${{ env.PYTHON_VERSION }}
uses: actions/setup-python@v5
id: setup-python
with:
python-version: ${{ env.PYTHON_VERSION }}
poetry-version: ${{ env.POETRY_VERSION }}
cache-key: api-docs
working-directory: langchain
- name: Install initial py deps
working-directory: langchain
run: |
python -m pip install -U uv
python -m uv pip install --upgrade --no-cache-dir pip setuptools pyyaml
- name: Move libs with script
run: python langchain/.github/scripts/prep_api_docs_build.py
env:
@@ -72,8 +80,8 @@ jobs:
- name: Install dependencies
working-directory: langchain
run: |
python -m uv pip install $(ls ./libs/partners | xargs -I {} echo "./libs/partners/{}")
python -m uv pip install libs/core libs/langchain libs/text-splitters libs/community libs/experimental
python -m uv pip install $(ls ./libs/partners | xargs -I {} echo "./libs/partners/{}") --overrides ./docs/vercel_overrides.txt
python -m uv pip install libs/core libs/langchain libs/text-splitters libs/community libs/experimental libs/standard-tests
python -m uv pip install -r docs/api_reference/requirements.txt
- name: Set Git config

View File

@@ -0,0 +1,29 @@
name: Check `langchain-core` version equality
on:
pull_request:
paths:
- 'libs/core/pyproject.toml'
- 'libs/core/langchain_core/version.py'
jobs:
check_version_equality:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Check version equality
run: |
PYPROJECT_VERSION=$(grep -Po '(?<=^version = ")[^"]*' libs/core/pyproject.toml)
VERSION_PY_VERSION=$(grep -Po '(?<=^VERSION = ")[^"]*' libs/core/langchain_core/version.py)
# Compare the two versions
if [ "$PYPROJECT_VERSION" != "$VERSION_PY_VERSION" ]; then
echo "langchain-core versions in pyproject.toml and version.py do not match!"
echo "pyproject.toml version: $PYPROJECT_VERSION"
echo "version.py version: $VERSION_PY_VERSION"
exit 1
else
echo "Versions match: $PYPROJECT_VERSION"
fi
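Equivalently, the two greps can be reproduced in Python (same file paths as the workflow; a sketch, not the workflow's actual implementation):

import re

pyproject = open("libs/core/pyproject.toml").read()
version_py = open("libs/core/langchain_core/version.py").read()

pyproject_version = re.search(r'^version = "([^"]*)"', pyproject, re.M).group(1)
version_py_version = re.search(r'^VERSION = "([^"]*)"', version_py, re.M).group(1)
if pyproject_version != version_py_version:
    raise SystemExit(
        f"langchain-core versions do not match: {pyproject_version} != {version_py_version}"
    )
print(f"Versions match: {pyproject_version}")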

View File

@@ -1,10 +1,10 @@
---
name: CI
on:
push:
branches: [master]
pull_request:
merge_group:
# If another push to the same PR or branch happens while this workflow is still running,
# cancel the earlier run in favor of the next run.
@@ -17,7 +17,8 @@ concurrency:
cancel-in-progress: true
env:
POETRY_VERSION: "1.7.1"
UV_FROZEN: "true"
UV_NO_SYNC: "true"
jobs:
build:
@@ -31,7 +32,7 @@ jobs:
uses: Ana06/get-changed-files@v2.2.0
- id: set-matrix
run: |
python -m pip install packaging
python -m pip install packaging requests
python .github/scripts/check_diff.py ${{ steps.files.outputs.all }} >> $GITHUB_OUTPUT
outputs:
lint: ${{ steps.set-matrix.outputs.lint }}
@@ -119,30 +120,26 @@ jobs:
job-configs: ${{ fromJson(needs.build.outputs.extended-tests) }}
fail-fast: false
runs-on: ubuntu-latest
timeout-minutes: 20
defaults:
run:
working-directory: ${{ matrix.job-configs.working-directory }}
steps:
- uses: actions/checkout@v4
- name: Set up Python ${{ matrix.job-configs.python-version }} + Poetry ${{ env.POETRY_VERSION }}
uses: "./.github/actions/poetry_setup"
- name: Set up Python ${{ matrix.job-configs.python-version }} + uv
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ matrix.job-configs.python-version }}
poetry-version: ${{ env.POETRY_VERSION }}
working-directory: ${{ matrix.job-configs.working-directory }}
cache-key: extended
- name: Install dependencies
- name: Install dependencies and run extended tests
shell: bash
run: |
echo "Running extended tests, installing dependencies with poetry..."
poetry install --with test
poetry run pip install uv
poetry run uv pip install -r extended_testing_deps.txt
- name: Run extended tests
run: make extended_tests
echo "Running extended tests, installing dependencies with uv..."
uv venv
uv sync --group test
VIRTUAL_ENV=.venv uv pip install -r extended_testing_deps.txt
VIRTUAL_ENV=.venv make extended_tests
- name: Ensure the tests did not create any additional files
shell: bash

View File

@@ -1,4 +1,3 @@
---
name: Integration docs lint
on:

View File

@@ -1,4 +1,3 @@
---
name: CI / cd . / make spell_check
on:

.github/workflows/codspeed.yml vendored Normal file
View File

@@ -0,0 +1,44 @@
name: CodSpeed
on:
push:
branches:
- master
pull_request:
paths:
- 'libs/core/**'
# `workflow_dispatch` allows CodSpeed to trigger backtest
# performance analysis in order to generate initial data.
workflow_dispatch:
jobs:
codspeed:
name: Run benchmarks
if: (github.event_name == 'pull_request' && contains(github.event.pull_request.labels.*.name, 'run-codspeed-benchmarks')) || github.event_name == 'workflow_dispatch' || github.event_name == 'push'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
# We have to use 3.12, 3.13 is not yet supported
- name: Install uv
uses: astral-sh/setup-uv@v5
with:
python-version: "3.12"
# Using this action is still necessary for CodSpeed to work
- uses: actions/setup-python@v3
with:
python-version: "3.12"
- name: install deps
run: uv sync --group test
working-directory: ./libs/core
- name: Run benchmarks
uses: CodSpeedHQ/action@v3
with:
token: ${{ secrets.CODSPEED_TOKEN }}
run: |
cd libs/core
uv run --no-sync pytest ./tests/benchmarks --codspeed
mode: walltime

View File

@@ -1,14 +0,0 @@
---
name: docker/langchain/langchain Release
on:
workflow_dispatch: # Allows to trigger the workflow manually in GitHub UI
workflow_call: # Allows triggering from another workflow
jobs:
release:
uses: ./.github/workflows/_release_docker.yml
with:
dockerfile: docker/Dockerfile.base
image: langchain/langchain
secrets: inherit

View File

@@ -6,11 +6,6 @@ on:
push:
branches: [jacob/people]
workflow_dispatch:
inputs:
debug_enabled:
description: 'Run the build with tmate debugging enabled (https://github.com/marketplace/actions/debugging-with-tmate)'
required: false
default: 'false'
jobs:
langchain-people:
@@ -26,12 +21,6 @@ jobs:
# Ref: https://github.com/actions/runner/issues/2033
- name: Fix git safe.directory in container
run: mkdir -p /home/runner/work/_temp/_github_home && printf "[safe]\n\tdirectory = /github/workspace" > /home/runner/work/_temp/_github_home/.gitconfig
# Allow debugging with tmate
- name: Setup tmate session
uses: mxschmitt/action-tmate@v3
if: ${{ github.event_name == 'workflow_dispatch' && github.event.inputs.debug_enabled == 'true' }}
with:
limit-access-to-actor: true
- uses: ./.github/actions/people
with:
token: ${{ secrets.LANGCHAIN_PEOPLE_GITHUB_TOKEN }}

View File

@@ -15,7 +15,7 @@ on:
- cron: '0 13 * * *'
env:
POETRY_VERSION: "1.7.1"
UV_FROZEN: "true"
jobs:
build:
@@ -25,13 +25,10 @@ jobs:
steps:
- uses: actions/checkout@v4
- name: Set up Python + Poetry ${{ env.POETRY_VERSION }}
uses: "./.github/actions/poetry_setup"
- name: Set up Python + uv
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ github.event.inputs.python_version || '3.11' }}
poetry-version: ${{ env.POETRY_VERSION }}
working-directory: ${{ inputs.working-directory }}
cache-key: run-notebooks
- name: 'Authenticate to Google Cloud'
id: 'auth'
@@ -48,22 +45,23 @@ jobs:
- name: Install dependencies
run: |
poetry install --with dev,test
uv sync --group dev --group test
- name: Pre-download files
run: |
poetry run python docs/scripts/cache_data.py
uv run python docs/scripts/cache_data.py
curl -s https://raw.githubusercontent.com/lerocha/chinook-database/master/ChinookDatabase/DataSources/Chinook_Sqlite.sql | sqlite3 docs/docs/how_to/Chinook.db
cp docs/docs/how_to/Chinook.db docs/docs/tutorials/Chinook.db
- name: Prepare notebooks
run: |
poetry run python docs/scripts/prepare_notebooks_for_ci.py --comment-install-cells --working-directory ${{ github.event.inputs.working-directory || 'all' }}
uv run python docs/scripts/prepare_notebooks_for_ci.py --comment-install-cells --working-directory ${{ github.event.inputs.working-directory || 'all' }}
- name: Run notebooks
env:
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}

View File

@@ -2,32 +2,62 @@ name: Scheduled tests
on:
workflow_dispatch: # Allows to trigger the workflow manually in GitHub UI
inputs:
working-directory-force:
type: string
description: "From which folder this pipeline executes - defaults to all in matrix - example value: libs/partners/anthropic"
python-version-force:
type: string
description: "Python version to use - defaults to 3.9 and 3.11 in matrix - example value: 3.9"
schedule:
- cron: '0 13 * * *'
env:
POETRY_VERSION: "1.7.1"
POETRY_VERSION: "1.8.4"
UV_FROZEN: "true"
DEFAULT_LIBS: '["libs/partners/openai", "libs/partners/anthropic", "libs/partners/fireworks", "libs/partners/groq", "libs/partners/mistralai", "libs/partners/xai", "libs/partners/google-vertexai", "libs/partners/google-genai", "libs/partners/aws"]'
POETRY_LIBS: ("libs/partners/google-vertexai" "libs/partners/google-genai" "libs/partners/aws")
jobs:
compute-matrix:
if: github.repository_owner == 'langchain-ai' || github.event_name != 'schedule'
runs-on: ubuntu-latest
name: Compute matrix
outputs:
matrix: ${{ steps.set-matrix.outputs.matrix }}
steps:
- name: Set matrix
id: set-matrix
env:
DEFAULT_LIBS: ${{ env.DEFAULT_LIBS }}
WORKING_DIRECTORY_FORCE: ${{ github.event.inputs.working-directory-force || '' }}
PYTHON_VERSION_FORCE: ${{ github.event.inputs.python-version-force || '' }}
run: |
# echo "matrix=..." where matrix is a json formatted str with keys python-version and working-directory
# python-version should default to 3.9 and 3.11, but is overridden to [PYTHON_VERSION_FORCE] if set
# working-directory should default to DEFAULT_LIBS, but is overridden to [WORKING_DIRECTORY_FORCE] if set
python_version='["3.9", "3.11"]'
working_directory="$DEFAULT_LIBS"
if [ -n "$PYTHON_VERSION_FORCE" ]; then
python_version="[\"$PYTHON_VERSION_FORCE\"]"
fi
if [ -n "$WORKING_DIRECTORY_FORCE" ]; then
working_directory="[\"$WORKING_DIRECTORY_FORCE\"]"
fi
matrix="{\"python-version\": $python_version, \"working-directory\": $working_directory}"
echo $matrix
echo "matrix=$matrix" >> $GITHUB_OUTPUT
build:
if: github.repository_owner == 'langchain-ai' || github.event_name != 'schedule'
name: Python ${{ matrix.python-version }} - ${{ matrix.working-directory }}
runs-on: ubuntu-latest
needs: [compute-matrix]
timeout-minutes: 20
strategy:
fail-fast: false
matrix:
python-version:
- "3.9"
- "3.11"
working-directory:
- "libs/partners/openai"
- "libs/partners/anthropic"
- "libs/partners/fireworks"
- "libs/partners/groq"
- "libs/partners/mistralai"
- "libs/partners/google-vertexai"
- "libs/partners/google-genai"
- "libs/partners/aws"
python-version: ${{ fromJSON(needs.compute-matrix.outputs.matrix).python-version }}
working-directory: ${{ fromJSON(needs.compute-matrix.outputs.matrix).working-directory }}
steps:
- uses: actions/checkout@v4
@@ -51,7 +81,8 @@ jobs:
mv langchain-google/libs/vertexai langchain/libs/partners/google-vertexai
mv langchain-aws/libs/aws langchain/libs/partners/aws
- name: Set up Python ${{ matrix.python-version }}
- name: Set up Python ${{ matrix.python-version }} with poetry
if: contains(env.POETRY_LIBS, matrix.working-directory)
uses: "./langchain/.github/actions/poetry_setup"
with:
python-version: ${{ matrix.python-version }}
@@ -59,6 +90,12 @@ jobs:
working-directory: langchain/${{ matrix.working-directory }}
cache-key: scheduled
- name: Set up Python ${{ matrix.python-version }} + uv
if: "!contains(env.POETRY_LIBS, matrix.working-directory)"
uses: "./langchain/.github/actions/uv_setup"
with:
python-version: ${{ matrix.python-version }}
- name: 'Authenticate to Google Cloud'
id: 'auth'
uses: google-github-actions/auth@v2
@@ -72,12 +109,20 @@ jobs:
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ secrets.AWS_REGION }}
- name: Install dependencies
- name: Install dependencies (poetry)
if: contains(env.POETRY_LIBS, matrix.working-directory)
run: |
echo "Running scheduled tests, installing dependencies with poetry..."
cd langchain/${{ matrix.working-directory }}
poetry install --with=test_integration,test
- name: Install dependencies (uv)
if: "!contains(env.POETRY_LIBS, matrix.working-directory)"
run: |
echo "Running scheduled tests, installing dependencies with uv..."
cd langchain/${{ matrix.working-directory }}
uv sync --group test --group test_integration
- name: Run integration tests
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
@@ -89,15 +134,18 @@ jobs:
AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME }}
AZURE_OPENAI_LLM_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LLM_DEPLOYMENT_NAME }}
AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME }}
DEEPSEEK_API_KEY: ${{ secrets.DEEPSEEK_API_KEY }}
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
HUGGINGFACEHUB_API_TOKEN: ${{ secrets.HUGGINGFACEHUB_API_TOKEN }}
MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
XAI_API_KEY: ${{ secrets.XAI_API_KEY }}
COHERE_API_KEY: ${{ secrets.COHERE_API_KEY }}
NVIDIA_API_KEY: ${{ secrets.NVIDIA_API_KEY }}
GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
GOOGLE_SEARCH_API_KEY: ${{ secrets.GOOGLE_SEARCH_API_KEY }}
GOOGLE_CSE_ID: ${{ secrets.GOOGLE_CSE_ID }}
PPLX_API_KEY: ${{ secrets.PPLX_API_KEY }}
run: |
cd langchain/${{ matrix.working-directory }}
make integration_tests

.gitignore vendored
View File

@@ -59,6 +59,7 @@ coverage.xml
*.py,cover
.hypothesis/
.pytest_cache/
.codspeed/
# Translations
*.mo

.pre-commit-config.yaml Normal file
View File

@@ -0,0 +1,117 @@
repos:
- repo: local
hooks:
- id: core
name: format core
language: system
entry: make -C libs/core format
files: ^libs/core/
pass_filenames: false
- id: langchain
name: format langchain
language: system
entry: make -C libs/langchain format
files: ^libs/langchain/
pass_filenames: false
- id: standard-tests
name: format standard-tests
language: system
entry: make -C libs/standard-tests format
files: ^libs/standard-tests/
pass_filenames: false
- id: text-splitters
name: format text-splitters
language: system
entry: make -C libs/text-splitters format
files: ^libs/text-splitters/
pass_filenames: false
- id: anthropic
name: format partners/anthropic
language: system
entry: make -C libs/partners/anthropic format
files: ^libs/partners/anthropic/
pass_filenames: false
- id: chroma
name: format partners/chroma
language: system
entry: make -C libs/partners/chroma format
files: ^libs/partners/chroma/
pass_filenames: false
- id: couchbase
name: format partners/couchbase
language: system
entry: make -C libs/partners/couchbase format
files: ^libs/partners/couchbase/
pass_filenames: false
- id: exa
name: format partners/exa
language: system
entry: make -C libs/partners/exa format
files: ^libs/partners/exa/
pass_filenames: false
- id: fireworks
name: format partners/fireworks
language: system
entry: make -C libs/partners/fireworks format
files: ^libs/partners/fireworks/
pass_filenames: false
- id: groq
name: format partners/groq
language: system
entry: make -C libs/partners/groq format
files: ^libs/partners/groq/
pass_filenames: false
- id: huggingface
name: format partners/huggingface
language: system
entry: make -C libs/partners/huggingface format
files: ^libs/partners/huggingface/
pass_filenames: false
- id: mistralai
name: format partners/mistralai
language: system
entry: make -C libs/partners/mistralai format
files: ^libs/partners/mistralai/
pass_filenames: false
- id: nomic
name: format partners/nomic
language: system
entry: make -C libs/partners/nomic format
files: ^libs/partners/nomic/
pass_filenames: false
- id: ollama
name: format partners/ollama
language: system
entry: make -C libs/partners/ollama format
files: ^libs/partners/ollama/
pass_filenames: false
- id: openai
name: format partners/openai
language: system
entry: make -C libs/partners/openai format
files: ^libs/partners/openai/
pass_filenames: false
- id: prompty
name: format partners/prompty
language: system
entry: make -C libs/partners/prompty format
files: ^libs/partners/prompty/
pass_filenames: false
- id: qdrant
name: format partners/qdrant
language: system
entry: make -C libs/partners/qdrant format
files: ^libs/partners/qdrant/
pass_filenames: false
- id: voyageai
name: format partners/voyageai
language: system
entry: make -C libs/partners/voyageai format
files: ^libs/partners/voyageai/
pass_filenames: false
- id: root
name: format docs, cookbook
language: system
entry: make format
files: ^(docs|cookbook)/
pass_filenames: false

View File

@@ -1,12 +1,7 @@
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details
# Required
version: 2
formats:
- pdf
# Set the version of Python and other tools you might need
build:
os: ubuntu-22.04
@@ -15,15 +10,16 @@ build:
commands:
- mkdir -p $READTHEDOCS_OUTPUT
- cp -r api_reference_build/* $READTHEDOCS_OUTPUT
# Build documentation in the docs/ directory with Sphinx
sphinx:
configuration: docs/api_reference/conf.py
# If using Sphinx, optionally build your docs in additional formats such as PDF
# formats:
# - pdf
formats:
- pdf
# Optionally declare the Python requirements required to build your docs
python:
install:
- requirements: docs/api_reference/requirements.txt
- requirements: docs/api_reference/requirements.txt

View File

@@ -1,11 +1,11 @@
# Migrating
Please see the following guides for migratin LangChain code:
Please see the following guides for migrating LangChain code:
* Migrate to [LangChain v0.3](https://python.langchain.com/docs/versions/v0_3/)
* Migrate to [LangChain v0.2](https://python.langchain.com/docs/versions/v0_2/)
* Migrating from [LangChain 0.0.x Chains](https://python.langchain.com/docs/versions/migrating_chains/)
* Upgrate to [LangGraph Memory](https://python.langchain.com/docs/versions/migrating_memory/)
* Upgrade to [LangGraph Memory](https://python.langchain.com/docs/versions/migrating_memory/)
The [LangChain CLI](https://python.langchain.com/docs/versions/v0_3/#migrate-using-langchain-cli) can help automatically upgrade your code to use non deprecated imports.
The [LangChain CLI](https://python.langchain.com/docs/versions/v0_3/#migrate-using-langchain-cli) can help you automatically upgrade your code to use non-deprecated imports.
This will be especially helpful if you're still on either version 0.0.x or 0.1.x of LangChain.

View File

@@ -1,5 +1,8 @@
.PHONY: all clean help docs_build docs_clean docs_linkcheck api_docs_build api_docs_clean api_docs_linkcheck spell_check spell_fix lint lint_package lint_tests format format_diff
.EXPORT_ALL_VARIABLES:
UV_FROZEN = true
## help: Show this help info.
help: Makefile
@printf "\n\033[1mUsage: make <TARGETS> ...\033[0m\n\n\033[1mTargets:\033[0m\n\n"
@@ -25,20 +28,20 @@ docs_clean:
## docs_linkcheck: Run linkchecker on the documentation.
docs_linkcheck:
poetry run linkchecker _dist/docs/ --ignore-url node_modules
uv run --no-group test linkchecker _dist/docs/ --ignore-url node_modules
## api_docs_build: Build the API Reference documentation.
api_docs_build:
poetry run python docs/api_reference/create_api_rst.py
cd docs/api_reference && poetry run make html
poetry run python docs/api_reference/scripts/custom_formatter.py docs/api_reference/_build/html/
uv run --no-group test python docs/api_reference/create_api_rst.py
cd docs/api_reference && uv run --no-group test make html
uv run --no-group test python docs/api_reference/scripts/custom_formatter.py docs/api_reference/_build/html/
API_PKG ?= text-splitters
api_docs_quick_preview:
poetry run python docs/api_reference/create_api_rst.py $(API_PKG)
cd docs/api_reference && poetry run make html
poetry run python docs/api_reference/scripts/custom_formatter.py docs/api_reference/_build/html/
uv run --no-group test python docs/api_reference/create_api_rst.py $(API_PKG)
cd docs/api_reference && uv run make html
uv run --no-group test python docs/api_reference/scripts/custom_formatter.py docs/api_reference/_build/html/
open docs/api_reference/_build/html/reference.html
## api_docs_clean: Clean the API Reference documentation build artifacts.
@@ -50,15 +53,15 @@ api_docs_clean:
## api_docs_linkcheck: Run linkchecker on the API Reference documentation.
api_docs_linkcheck:
poetry run linkchecker docs/api_reference/_build/html/index.html
uv run --no-group test linkchecker docs/api_reference/_build/html/index.html
## spell_check: Run codespell on the project.
spell_check:
poetry run codespell --toml pyproject.toml
uv run --no-group test codespell --toml pyproject.toml
## spell_fix: Run codespell on the project and fix the errors.
spell_fix:
poetry run codespell --toml pyproject.toml -w
uv run --no-group test codespell --toml pyproject.toml -w
######################
# LINTING AND FORMATTING
@@ -66,12 +69,19 @@ spell_fix:
## lint: Run linting on the project.
lint lint_package lint_tests:
poetry run ruff check docs templates cookbook
poetry run ruff format docs templates cookbook --diff
poetry run ruff check --select I docs templates cookbook
git grep 'from langchain import' docs/docs templates cookbook | grep -vE 'from langchain import (hub)' && exit 1 || exit 0
uv run --group lint ruff check docs cookbook
uv run --group lint ruff format docs cookbook --diff
uv run --group lint ruff check --select I docs cookbook
git --no-pager grep 'from langchain import' docs cookbook | grep -vE 'from langchain import (hub)' && echo "Error: no importing langchain from root in docs, except for hub" && exit 1 || exit 0
git --no-pager grep 'api.python.langchain.com' -- docs/docs ':!docs/docs/additional_resources/arxiv_references.mdx' ':!docs/docs/integrations/document_loaders/sitemap.ipynb' || exit 0 && \
echo "Error: you should link python.langchain.com/api_reference, not api.python.langchain.com in the docs" && \
exit 1
## format: Format the project files.
format format_diff:
poetry run ruff format docs templates cookbook
poetry run ruff check --select I --fix docs templates cookbook
uv run --group lint ruff format docs cookbook
uv run --group lint ruff check --select I --fix docs cookbook
update-package-downloads:
uv run python docs/scripts/packages_yml_get_downloads.py

README.md
View File

@@ -1,6 +1,12 @@
# 🦜️🔗 LangChain
<picture>
<source media="(prefers-color-scheme: light)" srcset="docs/static/img/logo-dark.svg">
<source media="(prefers-color-scheme: dark)" srcset="docs/static/img/logo-light.svg">
<img alt="LangChain Logo" src="docs/static/img/logo-dark.svg" width="80%">
</picture>
⚡ Build context-aware reasoning applications ⚡
<div>
<br>
</div>
[![Release Notes](https://img.shields.io/github/release/langchain-ai/langchain?style=flat-square)](https://github.com/langchain-ai/langchain/releases)
[![CI](https://github.com/langchain-ai/langchain/actions/workflows/check_diffs.yml/badge.svg)](https://github.com/langchain-ai/langchain/actions/workflows/check_diffs.yml)
@@ -9,133 +15,69 @@
[![GitHub star chart](https://img.shields.io/github/stars/langchain-ai/langchain?style=flat-square)](https://star-history.com/#langchain-ai/langchain)
[![Open Issues](https://img.shields.io/github/issues-raw/langchain-ai/langchain?style=flat-square)](https://github.com/langchain-ai/langchain/issues)
[![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode&style=flat-square)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)
[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/langchain-ai/langchain)
[<img src="https://github.com/codespaces/badge.svg" title="Open in Github Codespace" width="150" height="20">](https://codespaces.new/langchain-ai/langchain)
[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchainai.svg?style=social&label=Follow%20%40LangChainAI)](https://twitter.com/langchainai)
[![CodSpeed Badge](https://img.shields.io/endpoint?url=https://codspeed.io/badge.json)](https://codspeed.io/langchain-ai/langchain)
Looking for the JS/TS library? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs).
> [!NOTE]
> Looking for the JS/TS library? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs).
To help you ship LangChain apps to production faster, check out [LangSmith](https://smith.langchain.com).
[LangSmith](https://smith.langchain.com) is a unified developer platform for building, testing, and monitoring LLM applications.
Fill out [this form](https://www.langchain.com/contact-sales) to speak with our sales team.
## Quick Install
With pip:
LangChain is a framework for building LLM-powered applications. It helps you chain
together interoperable components and third-party integrations to simplify AI
application development — all while future-proofing decisions as the underlying
technology evolves.
```bash
pip install langchain
pip install -U langchain
```
With conda:
To learn more about LangChain, check out
[the docs](https://python.langchain.com/docs/introduction/). If you're looking for more
advanced customization or agent orchestration, check out
[LangGraph](https://langchain-ai.github.io/langgraph/), our framework for building
controllable agent workflows.
```bash
conda install langchain -c conda-forge
```
## Why use LangChain?
## 🤔 What is LangChain?
LangChain helps developers build applications powered by LLMs through a standard
interface for models, embeddings, vector stores, and more.
**LangChain** is a framework for developing applications powered by large language models (LLMs).
Use LangChain for:
- **Real-time data augmentation**. Easily connect LLMs to diverse data sources and
external / internal systems, drawing from LangChain's vast library of integrations with
model providers, tools, vector stores, retrievers, and more.
- **Model interoperability**. Swap models in and out as your engineering team
experiments to find the best choice for your application's needs. As the industry
frontier evolves, adapt quickly — LangChain's abstractions keep you moving without
losing momentum (a minimal model-swapping sketch follows below).
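To illustrate that interoperability, here is a minimal sketch, assuming `langchain` >= 0.2.8 with the `langchain-openai` and `langchain-anthropic` packages installed and API keys configured; the model names are assumptions:
```python
# Minimal model-swapping sketch; model names are assumptions.
from langchain.chat_models import init_chat_model

gpt = init_chat_model("gpt-4o-mini", model_provider="openai")
claude = init_chat_model("claude-3-5-sonnet-latest", model_provider="anthropic")

# Both objects expose the same chat interface, so swapping providers is a one-line change.
for model in (gpt, claude):
    print(model.invoke("Say hello in one word.").content)
```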
For these applications, LangChain simplifies the entire application lifecycle:
## LangChain's ecosystem
While the LangChain framework can be used standalone, it also integrates seamlessly
with any LangChain product, giving developers a full suite of tools when building LLM
applications.
- **Open-source libraries**: Build your applications using LangChain's open-source [building blocks](https://python.langchain.com/docs/concepts/#langchain-expression-language-lcel), [components](https://python.langchain.com/docs/concepts/), and [third-party integrations](https://python.langchain.com/docs/integrations/providers/).
Use [LangGraph](https://langchain-ai.github.io/langgraph/) to build stateful agents with first-class streaming and human-in-the-loop support.
- **Productionization**: Inspect, monitor, and evaluate your apps with [LangSmith](https://docs.smith.langchain.com/) so that you can constantly optimize and deploy with confidence.
- **Deployment**: Turn your LangGraph applications into production-ready APIs and Assistants with [LangGraph Cloud](https://langchain-ai.github.io/langgraph/cloud/).
To improve your LLM application development, pair LangChain with:
### Open-source libraries
- [LangSmith](http://www.langchain.com/langsmith) - Helpful for agent evals and
observability. Debug poor-performing LLM app runs, evaluate agent trajectories, gain
visibility in production, and improve performance over time.
- [LangGraph](https://langchain-ai.github.io/langgraph/) - Build agents that can
reliably handle complex tasks with LangGraph, our low-level agent orchestration
framework. LangGraph offers customizable architecture, long-term memory, and
human-in-the-loop workflows — and is trusted in production by companies like LinkedIn,
Uber, Klarna, and GitLab.
- [LangGraph Platform](https://langchain-ai.github.io/langgraph/concepts/#langgraph-platform) - Deploy
and scale agents effortlessly with a purpose-built deployment platform for long
running, stateful workflows. Discover, reuse, configure, and share agents across
teams — and iterate quickly with visual prototyping in
[LangGraph Studio](https://langchain-ai.github.io/langgraph/concepts/langgraph_studio/).
- **`langchain-core`**: Base abstractions and LangChain Expression Language.
- **`langchain-community`**: Third party integrations.
- Some integrations have been further split into **partner packages** that only rely on **`langchain-core`**. Examples include **`langchain_openai`** and **`langchain_anthropic`**.
- **`langchain`**: Chains, agents, and retrieval strategies that make up an application's cognitive architecture.
- **[`LangGraph`](https://langchain-ai.github.io/langgraph/)**: A library for building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph. Integrates smoothly with LangChain, but can be used without it. To learn more about LangGraph, check out our first LangChain Academy course, *Introduction to LangGraph*, available [here](https://academy.langchain.com/courses/intro-to-langgraph). A minimal graph sketch follows below.
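To make the nodes-and-edges model concrete, here is a minimal sketch, assuming `langgraph` is installed; the state schema and node function are illustrative:
```python
# Minimal one-node LangGraph sketch; the state and node are illustrative.
from typing import TypedDict

from langgraph.graph import END, START, StateGraph


class State(TypedDict):
    text: str


def shout(state: State) -> dict:
    # A node is just a function from state to a (partial) state update.
    return {"text": state["text"].upper()}


graph = StateGraph(State)
graph.add_node("shout", shout)
graph.add_edge(START, "shout")  # edges define the order of steps
graph.add_edge("shout", END)
app = graph.compile()

print(app.invoke({"text": "hello"}))  # {'text': 'HELLO'}
```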
### Productionization:
- **[LangSmith](https://docs.smith.langchain.com/)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.
### Deployment:
- **[LangGraph Cloud](https://langchain-ai.github.io/langgraph/cloud/)**: Turn your LangGraph applications into production-ready APIs and Assistants.
![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](docs/static/svg/langchain_stack_062024.svg "LangChain Architecture Overview")
## 🧱 What can you build with LangChain?
**❓ Question answering with RAG**
- [Documentation](https://python.langchain.com/docs/tutorials/rag/)
- End-to-end Example: [Chat LangChain](https://chat.langchain.com) and [repo](https://github.com/langchain-ai/chat-langchain)
**🧱 Extracting structured output**
- [Documentation](https://python.langchain.com/docs/tutorials/extraction/)
- End-to-end Example: [SQL Llama2 Template](https://github.com/langchain-ai/langchain-extract/)
**🤖 Chatbots**
- [Documentation](https://python.langchain.com/docs/tutorials/chatbot/)
- End-to-end Example: [Web LangChain (web researcher chatbot)](https://weblangchain.vercel.app) and [repo](https://github.com/langchain-ai/weblangchain)
And much more! Head to the [Tutorials](https://python.langchain.com/docs/tutorials/) section of the docs for more.
## 🚀 How does LangChain help?
The main value props of the LangChain libraries are:
1. **Components**: composable building blocks, tools and integrations for working with language models. Components are modular and easy-to-use, whether you are using the rest of the LangChain framework or not
2. **Off-the-shelf chains**: built-in assemblages of components for accomplishing higher-level tasks
Off-the-shelf chains make it easy to get started. Components make it easy to customize existing chains and build new ones.
## LangChain Expression Language (LCEL)
LCEL is a key part of LangChain: a declarative way to compose chains of steps. It was designed so that prototypes can move directly into production without code changes, covering everything from a basic "prompt + LLM" setup to intricate, multi-step workflows. A minimal example follows the links below.
- **[Overview](https://python.langchain.com/docs/concepts/#langchain-expression-language-lcel)**: LCEL and its benefits
- **[Interface](https://python.langchain.com/docs/concepts/#runnable-interface)**: The standard Runnable interface for LCEL objects
- **[Primitives](https://python.langchain.com/docs/how_to/#langchain-expression-language-lcel)**: More on the primitives LCEL includes
- **[Cheatsheet](https://python.langchain.com/docs/how_to/lcel_cheatsheet/)**: Quick overview of the most common usage patterns
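For instance, a minimal LCEL chain might look like the sketch below, assuming `langchain-openai` is installed and an OpenAI key is set; the model name is an assumption:
```python
# Minimal LCEL sketch: prompt | model | parser composed with the `|` operator.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Tell me a short joke about {topic}")
model = ChatOpenAI(model="gpt-4o-mini")  # model name is an assumption
chain = prompt | model | StrOutputParser()

print(chain.invoke({"topic": "bears"}))
```
The same `chain` object also supports `.stream()`, `.batch()`, and async variants, which is what lets LCEL compositions move to production without rewrites.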
## Components
Components fall into the following **modules**:
**📃 Model I/O**
This includes [prompt management](https://python.langchain.com/docs/concepts/#prompt-templates), [prompt optimization](https://python.langchain.com/docs/concepts/#example-selectors), a generic interface for [chat models](https://python.langchain.com/docs/concepts/#chat-models) and [LLMs](https://python.langchain.com/docs/concepts/#llms), and common utilities for working with [model outputs](https://python.langchain.com/docs/concepts/#output-parsers).
**📚 Retrieval**
Retrieval Augmented Generation involves [loading data](https://python.langchain.com/docs/concepts/#document-loaders) from a variety of sources, [preparing it](https://python.langchain.com/docs/concepts/#text-splitters), then [searching over (a.k.a. retrieving from)](https://python.langchain.com/docs/concepts/#retrievers) it for use in the generation step.
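As a rough sketch of that load-prepare-retrieve flow (the file name and embedding provider are assumptions, and `langchain-openai` plus `langchain-text-splitters` are assumed installed):
```python
# Minimal RAG ingestion sketch: load -> split -> embed -> retrieve.
from langchain_community.document_loaders import TextLoader
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

docs = TextLoader("state_of_the_union.txt").load()  # hypothetical local file
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(docs)

store = InMemoryVectorStore.from_documents(chunks, OpenAIEmbeddings())
retriever = store.as_retriever()

print(retriever.invoke("What was said about the economy?")[0].page_content)
```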
**🤖 Agents**
Agents allow an LLM autonomy over how a task is accomplished. Agents make decisions about which Actions to take, then take that Action, observe the result, and repeat until the task is complete. LangChain provides a [standard interface for agents](https://python.langchain.com/docs/concepts/#agents), along with [LangGraph](https://github.com/langchain-ai/langgraph) for building custom agents.
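As a minimal sketch of that decide-act-observe loop using the prebuilt ReAct helper from LangGraph (the tool and model name are illustrative assumptions):
```python
# Minimal agent sketch with LangGraph's prebuilt ReAct agent.
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent


@tool
def get_weather(city: str) -> str:
    """Return a canned weather report for a city."""
    return f"It is sunny in {city}."


agent = create_react_agent(ChatOpenAI(model="gpt-4o-mini"), [get_weather])
result = agent.invoke({"messages": [("user", "What's the weather in Paris?")]})
print(result["messages"][-1].content)
```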
## 📖 Documentation
Please see [here](https://python.langchain.com) for full documentation, which includes:
- [Introduction](https://python.langchain.com/docs/introduction/): Overview of the framework and the structure of the docs.
- [Tutorials](https://python.langchain.com/docs/tutorials/): If you're looking to build something specific or are more of a hands-on learner, check out our tutorials. This is the best place to get started.
- [How-to guides](https://python.langchain.com/docs/how_to/): Answers to “How do I….?” type questions. These guides are goal-oriented and concrete; they're meant to help you complete a specific task.
- [Conceptual guide](https://python.langchain.com/docs/concepts/): Conceptual explanations of the key parts of the framework.
- [API Reference](https://api.python.langchain.com): Thorough documentation of every class and method.
## 🌐 Ecosystem
- [🦜🛠️ LangSmith](https://docs.smith.langchain.com/): Trace and evaluate your language model applications and intelligent agents to help you move from prototype to production.
- [🦜🕸️ LangGraph](https://langchain-ai.github.io/langgraph/): Create stateful, multi-actor applications with LLMs. Integrates smoothly with LangChain, but can be used without it.
- [🦜🏓 LangServe](https://python.langchain.com/docs/langserve): Deploy LangChain runnables and chains as REST APIs.
## 💁 Contributing
As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.
For detailed information on how to contribute, see [here](https://python.langchain.com/docs/contributing/).
## 🌟 Contributors
[![langchain contributors](https://contrib.rocks/image?repo=langchain-ai/langchain&max=2000)](https://github.com/langchain-ai/langchain/graphs/contributors)
## Additional resources
- [Tutorials](https://python.langchain.com/docs/tutorials/): Simple walkthroughs with
guided examples on getting started with LangChain.
- [How-to Guides](https://python.langchain.com/docs/how_to/): Quick, actionable code
snippets for topics such as tool calling, RAG use cases, and more.
- [Conceptual Guides](https://python.langchain.com/docs/concepts/): Explanations of key
concepts behind the LangChain framework.
- [API Reference](https://python.langchain.com/api_reference/): Detailed reference on
navigating base packages and integrations for LangChain.

View File

@@ -1,5 +1,30 @@
# Security Policy
LangChain has a large ecosystem of integrations with various external resources like local and remote file systems, APIs and databases. These integrations allow developers to create versatile applications that combine the power of LLMs with the ability to access, interact with and manipulate external resources.
## Best practices
When building such applications developers should remember to follow good security practices:
* [**Limit Permissions**](https://en.wikipedia.org/wiki/Principle_of_least_privilege): Scope permissions specifically to the application's need. Granting broad or excessive permissions can introduce significant security vulnerabilities. To avoid such vulnerabilities, consider using read-only credentials, disallowing access to sensitive resources, using sandboxing techniques (such as running inside a container), specifying proxy configurations to control external requests, etc. as appropriate for your application.
* **Anticipate Potential Misuse**: Just as humans can err, so can Large Language Models (LLMs). Always assume that any system access or credentials may be used in any way allowed by the permissions they are assigned. For example, if a pair of database credentials allows deleting data, it's safest to assume that any LLM able to use those credentials may in fact delete data.
* [**Defense in Depth**](https://en.wikipedia.org/wiki/Defense_in_depth_(computing)): No security technique is perfect. Fine-tuning and good chain design can reduce, but not eliminate, the odds that a Large Language Model (LLM) may make a mistake. It's best to combine multiple layered security approaches rather than relying on any single layer of defense to ensure security. For example: use both read-only permissions and sandboxing to ensure that LLMs are only able to access data that is explicitly meant for them to use.
Risks of not doing so include, but are not limited to:
* Data corruption or loss.
* Unauthorized access to confidential information.
* Compromised performance or availability of critical resources.
Example scenarios with mitigation strategies:
* A user may ask an agent with access to the file system to delete files that should not be deleted or read the content of files that contain sensitive information. To mitigate, limit the agent to only use a specific directory and only allow it to read or write files that are safe to read or write. Consider further sandboxing the agent by running it in a container.
* A user may ask an agent with write access to an external API to write malicious data to the API, or delete data from that API. To mitigate, give the agent read-only API keys, or limit it to only use endpoints that are already resistant to such misuse.
* A user may ask an agent with access to a database to drop a table or mutate the schema. To mitigate, scope the credentials to only the tables that the agent needs to access and consider issuing READ-ONLY credentials, as in the sketch below.
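One possible shape for that last mitigation, as a minimal sketch using the `SQLDatabase` wrapper from `langchain-community` (the connection URI and table name are hypothetical):
```python
# Least-privilege sketch: hypothetical read-only credentials,
# scoped to the single table the agent actually needs.
from langchain_community.utilities import SQLDatabase

db = SQLDatabase.from_uri(
    "postgresql://readonly_user:secret@db.example.com/appdb",  # hypothetical URI
    include_tables=["trips"],  # expose only the tables the agent needs
)
print(db.get_usable_table_names())  # ['trips']
```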
If you're building applications that access external resources like file systems, APIs
or databases, consider speaking with your company's security team to determine how to best
design and secure your applications.
## Reporting OSS Vulnerabilities
LangChain is partnered with [huntr by Protect AI](https://huntr.com/) to provide
@@ -14,7 +39,7 @@ Before reporting a vulnerability, please review:
1) In-Scope Targets and Out-of-Scope Targets below.
2) The [langchain-ai/langchain](https://python.langchain.com/docs/contributing/repo_structure) monorepo structure.
3) LangChain [security guidelines](https://python.langchain.com/docs/security) to
3) The [Best practices](#best-practices) above to
understand what we consider to be a security vulnerability vs. developer
responsibility.
@@ -33,13 +58,13 @@ The following packages and repositories are eligible for bug bounties:
All out of scope targets defined by huntr as well as:
- **langchain-experimental**: This repository is for experimental code and is not
eligible for bug bounties, bug reports to it will be marked as interesting or waste of
eligible for bug bounties (see [package warning](https://pypi.org/project/langchain-experimental/)), bug reports to it will be marked as interesting or waste of
time and published with no bounty attached.
- **tools**: Tools in either langchain or langchain-community are not eligible for bug
bounties. This includes the following directories
- langchain/tools
- langchain-community/tools
- Please review our [security guidelines](https://python.langchain.com/docs/security)
- libs/langchain/langchain/tools
- libs/community/langchain_community/tools
- Please review the [best practices](#best-practices)
for more details, but generally tools interact with the real world. Developers are
expected to understand the security implications of their code and are responsible
for the security of their tools.
@@ -47,7 +72,7 @@ All out of scope targets defined by huntr as well as:
case basis, but likely will not be eligible for a bounty as the code is already
documented with guidelines for developers that should be followed for making their
application secure.
- Any LangSmith related repositories or APIs see below.
- Any LangSmith related repositories or APIs (see [Reporting LangSmith Vulnerabilities](#reporting-langsmith-vulnerabilities)).
## Reporting LangSmith Vulnerabilities

View File

@@ -60,7 +60,7 @@
"id": "CI8Elyc5gBQF"
},
"source": [
"Go to the VertexAI Model Garden on Google Cloud [console](https://pantheon.corp.google.com/vertex-ai/publishers/google/model-garden/335), and deploy the desired version of Gemma to VertexAI. It will take a few minutes, and after the endpoint it ready, you need to copy its number."
"Go to the VertexAI Model Garden on Google Cloud [console](https://pantheon.corp.google.com/vertex-ai/publishers/google/model-garden/335), and deploy the desired version of Gemma to VertexAI. It will take a few minutes, and after the endpoint is ready, you need to copy its number."
]
},
{

View File

@@ -21,7 +21,6 @@ Notebook | Description
[code-analysis-deeplake.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/code-analysis-deeplake.ipynb) | Analyze its own code base with the help of gpt and activeloop's deep lake.
[custom_agent_with_plugin_retri...](https://github.com/langchain-ai/langchain/tree/master/cookbook/custom_agent_with_plugin_retrieval.ipynb) | Build a custom agent that can interact with ai plugins by retrieving tools and creating natural language wrappers around openapi endpoints.
[custom_agent_with_plugin_retri...](https://github.com/langchain-ai/langchain/tree/master/cookbook/custom_agent_with_plugin_retrieval_using_plugnplai.ipynb) | Build a custom agent with plugin retrieval functionality, utilizing ai plugins from the `plugnplai` directory.
[databricks_sql_db.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/databricks_sql_db.ipynb) | Connect to databricks runtimes and databricks sql.
[deeplake_semantic_search_over_...](https://github.com/langchain-ai/langchain/tree/master/cookbook/deeplake_semantic_search_over_chat.ipynb) | Perform semantic search and question-answering over a group chat using activeloop's deep lake with gpt4.
[elasticsearch_db_qa.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/elasticsearch_db_qa.ipynb) | Interact with elasticsearch analytics databases in natural language and build search queries via the elasticsearch dsl API.
[extraction_openai_tools.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/extraction_openai_tools.ipynb) | Structured Data Extraction with OpenAI Tools
@@ -50,7 +49,7 @@ Notebook | Description
[press_releases.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/press_releases.ipynb) | Retrieve and query company press release data powered by [Kay.ai](https://kay.ai).
[program_aided_language_model.i...](https://github.com/langchain-ai/langchain/tree/master/cookbook/program_aided_language_model.ipynb) | Implement program-aided language models as described in the provided research paper.
[qa_citations.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/qa_citations.ipynb) | Different ways to get a model to cite its sources.
[rag_upstage_layout_analysis_groundedness_check.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/rag_upstage_layout_analysis_groundedness_check.ipynb) | End-to-end RAG example using Upstage Layout Analysis and Groundedness Check.
[rag_upstage_document_parse_groundedness_check.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/rag_upstage_document_parse_groundedness_check.ipynb) | End-to-end RAG example using Upstage Document Parse and Groundedness Check.
[retrieval_in_sql.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/retrieval_in_sql.ipynb) | Perform retrieval-augmented-generation (rag) on a PostgreSQL database using pgvector.
[sales_agent_with_context.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/sales_agent_with_context.ipynb) | Implement a context-aware ai sales agent, salesgpt, that can have natural sales conversations, interact with other systems, and use a product knowledge base to discuss a company's offerings.
[self_query_hotel_search.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/self_query_hotel_search.ipynb) | Build a hotel room search feature with self-querying retrieval, using a specific hotel recommendation dataset.
@@ -62,4 +61,6 @@ Notebook | Description
[wikibase_agent.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/wikibase_agent.ipynb) | Create a simple wikibase agent that utilizes sparql generation, with testing done on http://wikidata.org.
[oracleai_demo.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/oracleai_demo.ipynb) | This guide outlines how to utilize Oracle AI Vector Search alongside Langchain for an end-to-end RAG pipeline, providing step-by-step examples. The process includes loading documents from various sources using OracleDocLoader, summarizing them either within or outside the database with OracleSummary, and generating embeddings similarly through OracleEmbeddings. It also covers chunking documents according to specific requirements using Advanced Oracle Capabilities from OracleTextSplitter, and finally, storing and indexing these documents in a Vector Store for querying with OracleVS.
[rag-locally-on-intel-cpu.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/rag-locally-on-intel-cpu.ipynb) | Perform Retrieval-Augmented-Generation (RAG) on locally downloaded open-source models using langchain and open source tools and execute it on Intel Xeon CPU. We showed an example of how to apply RAG on Llama 2 model and enable it to answer the queries related to Intel Q1 2024 earnings release.
[visual_RAG_vdms.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/visual_RAG_vdms.ipynb) | Performs Visual Retrieval-Augmented-Generation (RAG) using videos and scene descriptions generated by open source models.
[contextual_rag.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/contextual_rag.ipynb) | Performs contextual retrieval-augmented generation (RAG) prepending chunk-specific explanatory context to each chunk before embedding.
[rag-agents-locally-on-intel-cpu.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/local_rag_agents_intel_cpu.ipynb) | Build a RAG agent locally with open source models that routes questions through one of two paths to find answers. The agent generates answers based on documents retrieved from either the vector database or retrieved from web search. If the vector database lacks relevant information, the agent opts for web search. Open-source models for LLM and embeddings are used locally on an Intel Xeon CPU to execute this pipeline.

View File

@@ -30,7 +30,7 @@
"outputs": [],
"source": [
"# lock to 0.10.19 due to a persistent bug in more recent versions\n",
"! pip install \"unstructured[all-docs]==0.10.19\" pillow pydantic lxml pillow matplotlib tiktoken open_clip_torch torch"
"! pip install \"unstructured[all-docs]==0.10.19\" pillow pydantic lxml matplotlib tiktoken open_clip_torch torch"
]
},
{
@@ -409,7 +409,7 @@
" table_summaries,\n",
" tables,\n",
" image_summaries,\n",
" image_summaries,\n",
" img_base64_list,\n",
")"
]
},

View File

@@ -31,8 +31,8 @@
"source": [
"# Optional\n",
"import os\n",
"# os.environ['LANGCHAIN_TRACING_V2'] = 'true' # enables tracing\n",
"# os.environ['LANGCHAIN_API_KEY'] = <your-api-key>"
"# os.environ['LANGSMITH_TRACING'] = 'true' # enables tracing\n",
"# os.environ['LANGSMITH_API_KEY'] = <your-api-key>"
]
},
{

View File

@@ -66,7 +66,7 @@
},
"outputs": [],
"source": [
"#!python3 -m pip install --upgrade langchain deeplake openai"
"#!python3 -m pip install --upgrade langchain langchain-deeplake openai"
]
},
{
@@ -666,89 +666,26 @@
},
{
"cell_type": "code",
"execution_count": 15,
"execution_count": null,
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Your Deep Lake dataset has been successfully created!\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
" \r"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Dataset(path='hub://adilkhan/langchain-code', tensors=['embedding', 'id', 'metadata', 'text'])\n",
"\n",
" tensor htype shape dtype compression\n",
" ------- ------- ------- ------- ------- \n",
" embedding embedding (8244, 1536) float32 None \n",
" id text (8244, 1) str None \n",
" metadata json (8244, 1) str None \n",
" text text (8244, 1) str None \n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": []
},
{
"data": {
"text/plain": [
"<langchain_community.vectorstores.deeplake.DeepLake at 0x7fe1b67d7a30>"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"from langchain_community.vectorstores import DeepLake\n",
"from langchain_deeplake.vectorstores import DeeplakeVectorStore\n",
"\n",
"username = \"<USERNAME_OR_ORG>\"\n",
"\n",
"\n",
"db = DeepLake.from_documents(\n",
" texts, embeddings, dataset_path=f\"hub://{username}/langchain-code\", overwrite=True\n",
"db = DeeplakeVectorStore.from_documents(\n",
" documents=texts,\n",
" embedding=embeddings,\n",
" dataset_path=f\"hub://{username}/langchain-code\",\n",
" overwrite=True,\n",
")\n",
"db"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"`Optional`: You can also use Deep Lake's Managed Tensor Database as a hosting service and run queries there. In order to do so, it is necessary to specify the runtime parameter as {'tensor_db': True} during the creation of the vector store. This configuration enables the execution of queries on the Managed Tensor Database, rather than on the client side. It should be noted that this functionality is not applicable to datasets stored locally or in-memory. In the event that a vector store has already been created outside of the Managed Tensor Database, it is possible to transfer it to the Managed Tensor Database by following the prescribed steps."
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [],
"source": [
"# from langchain_community.vectorstores import DeepLake\n",
"\n",
"# db = DeepLake.from_documents(\n",
"# texts, embeddings, dataset_path=f\"hub://{<org_id>}/langchain-code\", runtime={\"tensor_db\": True}\n",
"# )\n",
"# db"
]
},
{
"attachments": {},
"cell_type": "markdown",
@@ -760,24 +697,16 @@
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": null,
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Deep Lake Dataset in hub://adilkhan/langchain-code already exists, loading from the storage\n"
]
}
],
"outputs": [],
"source": [
"db = DeepLake(\n",
"db = DeeplakeVectorStore(\n",
" dataset_path=f\"hub://{username}/langchain-code\",\n",
" read_only=True,\n",
" embedding=embeddings,\n",
" embedding_function=embeddings,\n",
")"
]
},
@@ -796,36 +725,6 @@
"retriever.search_kwargs[\"k\"] = 20"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also specify user defined functions using [Deep Lake filters](https://docs.deeplake.ai/en/latest/deeplake.core.dataset.html#deeplake.core.dataset.Dataset.filter)"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"def filter(x):\n",
" # filter based on source code\n",
" if \"something\" in x[\"text\"].data()[\"value\"]:\n",
" return False\n",
"\n",
" # filter based on path e.g. extension\n",
" metadata = x[\"metadata\"].data()[\"value\"]\n",
" return \"only_this\" in metadata[\"source\"] or \"also_that\" in metadata[\"source\"]\n",
"\n",
"\n",
"### turn on below for custom filtering\n",
"# retriever.search_kwargs['filter'] = filter"
]
},
{
"cell_type": "code",
"execution_count": 20,
@@ -837,10 +736,8 @@
"from langchain.chains import ConversationalRetrievalChain\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"model = ChatOpenAI(\n",
" model_name=\"gpt-3.5-turbo-0613\"\n",
") # 'ada' 'gpt-3.5-turbo-0613' 'gpt-4',\n",
"qa = ConversationalRetrievalChain.from_llm(model, retriever=retriever)"
"model = ChatOpenAI(model=\"gpt-3.5-turbo-0613\") # 'ada' 'gpt-3.5-turbo-0613' 'gpt-4',\n",
"qa = RetrievalQA.from_llm(model, retriever=retriever)"
]
},
{

File diff suppressed because it is too large

View File

@@ -1,273 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "707d13a7",
"metadata": {},
"source": [
"# Databricks\n",
"\n",
"This notebook covers how to connect to the [Databricks runtimes](https://docs.databricks.com/runtime/index.html) and [Databricks SQL](https://www.databricks.com/product/databricks-sql) using the SQLDatabase wrapper of LangChain.\n",
"It is broken into 3 parts: installation and setup, connecting to Databricks, and examples."
]
},
{
"cell_type": "markdown",
"id": "0076d072",
"metadata": {},
"source": [
"## Installation and Setup"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "739b489b",
"metadata": {},
"outputs": [],
"source": [
"!pip install databricks-sql-connector"
]
},
{
"cell_type": "markdown",
"id": "73113163",
"metadata": {},
"source": [
"## Connecting to Databricks\n",
"\n",
"You can connect to [Databricks runtimes](https://docs.databricks.com/runtime/index.html) and [Databricks SQL](https://www.databricks.com/product/databricks-sql) using the `SQLDatabase.from_databricks()` method.\n",
"\n",
"### Syntax\n",
"```python\n",
"SQLDatabase.from_databricks(\n",
" catalog: str,\n",
" schema: str,\n",
" host: Optional[str] = None,\n",
" api_token: Optional[str] = None,\n",
" warehouse_id: Optional[str] = None,\n",
" cluster_id: Optional[str] = None,\n",
" engine_args: Optional[dict] = None,\n",
" **kwargs: Any)\n",
"```\n",
"### Required Parameters\n",
"* `catalog`: The catalog name in the Databricks database.\n",
"* `schema`: The schema name in the catalog.\n",
"\n",
"### Optional Parameters\n",
"There following parameters are optional. When executing the method in a Databricks notebook, you don't need to provide them in most of the cases.\n",
"* `host`: The Databricks workspace hostname, excluding 'https://' part. Defaults to 'DATABRICKS_HOST' environment variable or current workspace if in a Databricks notebook.\n",
"* `api_token`: The Databricks personal access token for accessing the Databricks SQL warehouse or the cluster. Defaults to 'DATABRICKS_TOKEN' environment variable or a temporary one is generated if in a Databricks notebook.\n",
"* `warehouse_id`: The warehouse ID in the Databricks SQL.\n",
"* `cluster_id`: The cluster ID in the Databricks Runtime. If running in a Databricks notebook and both 'warehouse_id' and 'cluster_id' are None, it uses the ID of the cluster the notebook is attached to.\n",
"* `engine_args`: The arguments to be used when connecting Databricks.\n",
"* `**kwargs`: Additional keyword arguments for the `SQLDatabase.from_uri` method."
]
},
{
"cell_type": "markdown",
"id": "b11c7e48",
"metadata": {},
"source": [
"## Examples"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "8102bca0",
"metadata": {},
"outputs": [],
"source": [
"# Connecting to Databricks with SQLDatabase wrapper\n",
"from langchain_community.utilities import SQLDatabase\n",
"\n",
"db = SQLDatabase.from_databricks(catalog=\"samples\", schema=\"nyctaxi\")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "9dd36f58",
"metadata": {},
"outputs": [],
"source": [
"# Creating a OpenAI Chat LLM wrapper\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(temperature=0, model_name=\"gpt-4\")"
]
},
{
"cell_type": "markdown",
"id": "5b5c5f1a",
"metadata": {},
"source": [
"### SQL Chain example\n",
"\n",
"This example demonstrates the use of the [SQL Chain](https://python.langchain.com/en/latest/modules/chains/examples/sqlite.html) for answering a question over a Databricks database."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "36f2270b",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.utilities import SQLDatabaseChain\n",
"\n",
"db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "4e2b5f25",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new SQLDatabaseChain chain...\u001b[0m\n",
"What is the average duration of taxi rides that start between midnight and 6am?\n",
"SQLQuery:\u001b[32;1m\u001b[1;3mSELECT AVG(UNIX_TIMESTAMP(tpep_dropoff_datetime) - UNIX_TIMESTAMP(tpep_pickup_datetime)) as avg_duration\n",
"FROM trips\n",
"WHERE HOUR(tpep_pickup_datetime) >= 0 AND HOUR(tpep_pickup_datetime) < 6\u001b[0m\n",
"SQLResult: \u001b[33;1m\u001b[1;3m[(987.8122786304605,)]\u001b[0m\n",
"Answer:\u001b[32;1m\u001b[1;3mThe average duration of taxi rides that start between midnight and 6am is 987.81 seconds.\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'The average duration of taxi rides that start between midnight and 6am is 987.81 seconds.'"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"db_chain.run(\n",
" \"What is the average duration of taxi rides that start between midnight and 6am?\"\n",
")"
]
},
{
"cell_type": "markdown",
"id": "e496d5e5",
"metadata": {},
"source": [
"### SQL Database Agent example\n",
"\n",
"This example demonstrates the use of the [SQL Database Agent](/docs/integrations/tools/sql_database) for answering questions over a Databricks database."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "9918e86a",
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import create_sql_agent\n",
"from langchain_community.agent_toolkits import SQLDatabaseToolkit\n",
"\n",
"toolkit = SQLDatabaseToolkit(db=db, llm=llm)\n",
"agent = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "c484a76e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mAction: list_tables_sql_db\n",
"Action Input: \u001b[0m\n",
"Observation: \u001b[38;5;200m\u001b[1;3mtrips\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mI should check the schema of the trips table to see if it has the necessary columns for trip distance and duration.\n",
"Action: schema_sql_db\n",
"Action Input: trips\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3m\n",
"CREATE TABLE trips (\n",
"\ttpep_pickup_datetime TIMESTAMP, \n",
"\ttpep_dropoff_datetime TIMESTAMP, \n",
"\ttrip_distance FLOAT, \n",
"\tfare_amount FLOAT, \n",
"\tpickup_zip INT, \n",
"\tdropoff_zip INT\n",
") USING DELTA\n",
"\n",
"/*\n",
"3 rows from trips table:\n",
"tpep_pickup_datetime\ttpep_dropoff_datetime\ttrip_distance\tfare_amount\tpickup_zip\tdropoff_zip\n",
"2016-02-14 16:52:13+00:00\t2016-02-14 17:16:04+00:00\t4.94\t19.0\t10282\t10171\n",
"2016-02-04 18:44:19+00:00\t2016-02-04 18:46:00+00:00\t0.28\t3.5\t10110\t10110\n",
"2016-02-17 17:13:57+00:00\t2016-02-17 17:17:55+00:00\t0.7\t5.0\t10103\t10023\n",
"*/\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mThe trips table has the necessary columns for trip distance and duration. I will write a query to find the longest trip distance and its duration.\n",
"Action: query_checker_sql_db\n",
"Action Input: SELECT trip_distance, tpep_dropoff_datetime - tpep_pickup_datetime as duration FROM trips ORDER BY trip_distance DESC LIMIT 1\u001b[0m\n",
"Observation: \u001b[31;1m\u001b[1;3mSELECT trip_distance, tpep_dropoff_datetime - tpep_pickup_datetime as duration FROM trips ORDER BY trip_distance DESC LIMIT 1\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mThe query is correct. I will now execute it to find the longest trip distance and its duration.\n",
"Action: query_sql_db\n",
"Action Input: SELECT trip_distance, tpep_dropoff_datetime - tpep_pickup_datetime as duration FROM trips ORDER BY trip_distance DESC LIMIT 1\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m[(30.6, '0 00:43:31.000000000')]\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mI now know the final answer.\n",
"Final Answer: The longest trip distance is 30.6 miles and it took 43 minutes and 31 seconds.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'The longest trip distance is 30.6 miles and it took 43 minutes and 31 seconds.'"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent.run(\"What is the longest trip distance and how long did it take?\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -115,7 +115,7 @@
"\n",
"PROMPT_TEMPLATE = \"\"\"Given an input question, create a syntactically correct Elasticsearch query to run. Unless the user specifies in their question a specific number of examples they wish to obtain, always limit your query to at most {top_k} results. You can order the results by a relevant column to return the most interesting examples in the database.\n",
"\n",
"Unless told to do not query for all the columns from a specific index, only ask for a the few relevant columns given the question.\n",
"Unless told to do not query for all the columns from a specific index, only ask for a few relevant columns given the question.\n",
"\n",
"Pay attention to use only the column names that you can see in the mapping description. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which index. Return the query as valid json.\n",
"\n",

View File

@@ -358,7 +358,7 @@
"id": "6e5cd014-db86-4d6b-8399-25cae3da5570",
"metadata": {},
"source": [
"## Helper function to plot retrived similar images"
"## Helper function to plot retrieved similar images"
]
},
{

File diff suppressed because one or more lines are too long

View File

@@ -124,8 +124,8 @@
"# Optional-- If you want to enable Langsmith -- good for debugging\n",
"import os\n",
"\n",
"os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()"
"os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
"os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass()"
]
},
{
@@ -156,7 +156,7 @@
"metadata": {},
"outputs": [],
"source": [
"# Ensure you have an HF_TOKEN in your development enviornment:\n",
"# Ensure you have an HF_TOKEN in your development environment:\n",
"# access tokens can be created or copied from the Hugging Face platform (https://huggingface.co/docs/hub/en/security-tokens)\n",
"\n",
"# Load MongoDB's embedded_movies dataset from Hugging Face\n",

View File

@@ -21,40 +21,6 @@
"* Passing raw images and text chunks to a multimodal LLM for answer synthesis "
]
},
{
"cell_type": "markdown",
"id": "6a6b6e73",
"metadata": {},
"source": [
"## Start VDMS Server\n",
"\n",
"Let's start a VDMS docker using port 55559 instead of default 55555. \n",
"Keep note of the port and hostname as this is needed for the vector store as it uses the VDMS Python client to connect to the server."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "5f483872",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"a1b9206b08ef626e15b356bf9e031171f7c7eb8f956a2733f196f0109246fe2b\n"
]
}
],
"source": [
"! docker run --rm -d -p 55559:55555 --name vdms_rag_nb intellabs/vdms:latest\n",
"\n",
"# Connect to VDMS Vector Store\n",
"from langchain_community.vectorstores.vdms import VDMS_Client\n",
"\n",
"vdms_client = VDMS_Client(port=55559)"
]
},
{
"cell_type": "markdown",
"id": "2498a0a1",
@@ -67,20 +33,20 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 1,
"id": "febbc459-ebba-4c1a-a52b-fed7731593f8",
"metadata": {},
"outputs": [],
"source": [
"! pip install --quiet -U vdms langchain-experimental\n",
"! pip install --quiet -U langchain-vdms langchain-experimental langchain-ollama\n",
"\n",
"# lock to 0.10.19 due to a persistent bug in more recent versions\n",
"! pip install --quiet pdf2image \"unstructured[all-docs]==0.10.19\" pillow pydantic lxml open_clip_torch"
"! pip install --quiet pdf2image \"unstructured[all-docs]==0.10.19\" \"onnxruntime==1.17.0\" pillow pydantic lxml open_clip_torch"
]
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 2,
"id": "78ac6543",
"metadata": {},
"outputs": [],
@@ -89,6 +55,40 @@
"# load_dotenv(find_dotenv(), override=True);"
]
},
{
"cell_type": "markdown",
"id": "e5c8916e",
"metadata": {},
"source": [
"## Start VDMS Server\n",
"\n",
"Let's start a VDMS docker using port 55559 instead of default 55555. \n",
"Keep note of the port and hostname as this is needed for the vector store as it uses the VDMS Python client to connect to the server."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "1e6e2c15",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"a701e5ac3523006e9540b5355e2d872d5d78383eab61562a675d5b9ac21fde65\n"
]
}
],
"source": [
"! docker run --rm -d -p 55559:55555 --name vdms_rag_nb intellabs/vdms:latest\n",
"\n",
"# Connect to VDMS Vector Store\n",
"from langchain_vdms.vectorstores import VDMS_Client\n",
"\n",
"vdms_client = VDMS_Client(port=55559)"
]
},
{
"cell_type": "markdown",
"id": "1e94b3fb-8e3e-4736-be0a-ad881626c7bd",
@@ -115,11 +115,12 @@
"import requests\n",
"\n",
"# Folder to store pdf and extracted images\n",
"datapath = Path(\"./data/multimodal_files\").resolve()\n",
"base_datapath = Path(\"./data/multimodal_files\").resolve()\n",
"datapath = base_datapath / \"images\"\n",
"datapath.mkdir(parents=True, exist_ok=True)\n",
"\n",
"pdf_url = \"https://www.loc.gov/lcm/pdf/LCM_2020_1112.pdf\"\n",
"pdf_path = str(datapath / pdf_url.split(\"/\")[-1])\n",
"pdf_path = str(base_datapath / pdf_url.split(\"/\")[-1])\n",
"with open(pdf_path, \"wb\") as f:\n",
" f.write(requests.get(pdf_url).content)"
]
@@ -185,8 +186,8 @@
"source": [
"import os\n",
"\n",
"from langchain_community.vectorstores import VDMS\n",
"from langchain_experimental.open_clip import OpenCLIPEmbeddings\n",
"from langchain_vdms import VDMS\n",
"\n",
"# Create VDMS\n",
"vectorstore = VDMS(\n",
@@ -312,10 +313,10 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.llms.ollama import Ollama\n",
"from langchain_core.messages import HumanMessage\n",
"from langchain_core.messages import HumanMessage, SystemMessage\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnableLambda, RunnablePassthrough\n",
"from langchain_ollama.llms import OllamaLLM\n",
"\n",
"\n",
"def prompt_func(data_dict):\n",
@@ -340,8 +341,8 @@
" \"As an expert art critic and historian, your task is to analyze and interpret images, \"\n",
" \"considering their historical and cultural significance. Alongside the images, you will be \"\n",
" \"provided with related text to offer context. Both will be retrieved from a vectorstore based \"\n",
" \"on user-input keywords. Please convert answers to english and use your extensive knowledge \"\n",
" \"and analytical skills to provide a comprehensive summary that includes:\\n\"\n",
" \"on user-input keywords. Please use your extensive knowledge and analytical skills to provide a \"\n",
" \"comprehensive summary that includes:\\n\"\n",
" \"- A detailed description of the visual elements in the image.\\n\"\n",
" \"- The historical and cultural context of the image.\\n\"\n",
" \"- An interpretation of the image's symbolism and meaning.\\n\"\n",
@@ -359,7 +360,7 @@
" \"\"\"Multi-modal RAG chain\"\"\"\n",
"\n",
" # Multi-modal LLM\n",
" llm_model = Ollama(\n",
" llm_model = OllamaLLM(\n",
" verbose=True, temperature=0.5, model=\"llava\", base_url=\"http://localhost:11434\"\n",
" )\n",
"\n",
@@ -419,6 +420,121 @@
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"© 2017 LARRY D. MOORE\n",
"\n",
"contemporary criticism of the less-than- thoughtful circumstances under which Lange photographed Thomson, the pictures power to engage has not diminished. Artists in other countries have appropriated the image, changing the mothers features into those of other ethnicities, but keeping her expression and the positions of her clinging children. Long after anyone could help the Thompson family, this picture has resonance in another time of national crisis, unemployment and food shortages.\n",
"\n",
"A striking, but very different picture is a 1900 portrait of the legendary Hin-mah-too-yah- lat-kekt (Chief Joseph) of the Nez Percé people. The Bureau of American Ethnology in Washington, D.C., regularly arranged for its photographer, De Lancey Gill, to photograph Native American delegations that came to the capital to confer with officials about tribal needs and concerns. Although Gill described Chief Joseph as having “an air of gentleness and quiet reserve,” the delegate skeptically appraises the photographer, which is not surprising given that the United States broke five treaties with Chief Joseph and his father between 1855 and 1885.\n",
"\n",
"More than a glance, second looks may reveal new knowledge into complex histories.\n",
"\n",
"Anne Wilkes Tucker is the photography curator emeritus of the Museum of Fine Arts, Houston and curator of the “Not an Ostrich” exhibition.\n",
"\n",
"28\n",
"\n",
"28 LIBRARY OF CONGRESS MAGAZINE\n",
"\n",
"LIBRARY OF CONGRESS MAGAZINE\n",
"THEYRE WILLING TO HAVE MEENTERTAIN THEM DURING THE DAY,BUT AS SOON AS IT STARTSGETTING DARK, THEY ALLGO OFF, AND LEAVE ME! \n",
"ROSA PARKS: IN HER OWN WORDS\n",
"\n",
"COMIC ART: 120 YEARS OF PANELS AND PAGES\n",
"\n",
"SHALL NOT BE DENIED: WOMEN FIGHT FOR THE VOTE\n",
"\n",
"More information loc.gov/exhibits\n",
"Nuestra Sefiora de las Iguanas\n",
"\n",
"Graciela Iturbides 1979 portrait of Zobeida Díaz in the town of Juchitán in southeastern Mexico conveys the strength of women and reflects their important contributions to the economy. Díaz, a merchant, was selling iguanas to cook and eat, carrying them on her head, as is customary.\n",
"\n",
"GRACIELA ITURBIDE. “NUESTRA SEÑORA DE LAS IGUANAS.” 1979. GELATIN SILVER PRINT. © GRACIELA ITURBIDE, USED BY PERMISSION. PRINTS AND PHOTOGRAPHS DIVISION.\n",
"\n",
"Iturbide requested permission to take a photograph, but this proved challenging because the iguanas were constantly moving, causing Díaz to laugh. The result, however, was a brilliant portrait that the inhabitants of Juchitán claimed with pride. They have reproduced it on posters and erected a statue honoring Díaz and her iguanas. The photo now appears throughout the world, inspiring supporters of feminism, womens rights and gender equality.\n",
"\n",
"—Adam Silvia is a curator in the Prints and Photographs Division.\n",
"\n",
"6\n",
"\n",
"6 LIBRARY OF CONGRESS MAGAZINE\n",
"\n",
"LIBRARY OF CONGRESS MAGAZINE\n",
"\n",
"Migrant Mother is Florence Owens Thompson\n",
"\n",
"The iconic portrait that became the face of the Great Depression is also the most famous photograph in the collections of the Library of Congress.\n",
"\n",
"The Library holds the original source of the photo — a nitrate negative measuring 4 by 5 inches. Do you see a faint thumb in the bottom right? The photographer, Dorothea Lange, found the thumb distracting and after a few years had the negative altered to make the thumb almost invisible. Langes boss at the Farm Security Administration, Roy Stryker, criticized her action because altering a negative undermines the credibility of a documentary photo.\n",
"Shrimp Picker\n",
"\n",
"The photos and evocative captions of Lewis Hine served as source material for National Child Labor Committee reports and exhibits exposing abusive child labor practices in the United States in the first decades of the 20th century.\n",
"\n",
"LEWIS WICKES HINE. “MANUEL, THE YOUNG SHRIMP-PICKER, FIVE YEARS OLD, AND A MOUNTAIN OF CHILD-LABOR OYSTER SHELLS BEHIND HIM. HE WORKED LAST YEAR. UNDERSTANDS NOT A WORD OF ENGLISH. DUNBAR, LOPEZ, DUKATE COMPANY. LOCATION: BILOXI, MISSISSIPPI.” FEBRUARY 1911. NATIONAL CHILD LABOR COMMITTEE COLLECTION. PRINTS AND PHOTOGRAPHS DIVISION.\n",
"\n",
"For 15 years, Hine\n",
"\n",
"crisscrossed the country, documenting the practices of the worst offenders. His effective use of photography made him one of the committee's greatest publicists in the campaign for legislation to ban child labor.\n",
"\n",
"Hine was a master at taking photos that catch attention and convey a message and, in this photo, he framed Manuel in a setting that drove home the boys small size and unsafe environment.\n",
"\n",
"Captions on photos of other shrimp pickers emphasized their long working hours as well as one hazard of the job: The acid from the shrimp made pickers hands sore and “eats the shoes off your feet.”\n",
"\n",
"Such images alerted viewers to all that workers, their families and the nation sacrificed when children were part of the labor force. The Library holds paper records of the National Child Labor Committee as well as over 5,000 photographs.\n",
"\n",
"—Barbara Natanson is head of the Reference Section in the Prints and Photographs Division.\n",
"\n",
"8\n",
"\n",
"LIBRARY OF CONGRESS MAGAZINE\n",
"\n",
"LIBRARY OF CONGRESS MAGAZINE\n",
"\n",
"Intergenerational Portrait\n",
"\n",
"Raised on the Apsáalooke (Crow) reservation in Montana, photographer Wendy Red Star created her “Apsáalooke Feminist” self-portrait series with her daughter Beatrice. With a dash of wry humor, mother and daughter are their own first-person narrators.\n",
"\n",
"Red Star explains the significance of their appearance: “The dress has power: You feel strong and regal wearing it. In my art, the elk tooth dress specifically symbolizes Crow womanhood and the matrilineal line connecting me to my ancestors. As a mother, I spend hours searching for the perfect elk tooth dress materials to make a prized dress for my daughter.”\n",
"\n",
"In a world that struggles with cultural identities, this photograph shows us the power and beauty of blending traditional and contemporary styles.\n",
"American Gothic Product #216040262 Price: $24\n",
"\n",
"U.S. Capitol at Night Product #216040052 Price: $24\n",
"\n",
"Good Reading Ahead Product #21606142 Price: $24\n",
"\n",
"Gordon Parks created an iconic image with this 1942 photograph of cleaning woman Ella Watson.\n",
"\n",
"Snow blankets the U.S. Capitol in this classic image by Ernest L. Crandall.\n",
"\n",
"Start your new year out right with a poster promising good reading for months to come.\n",
"\n",
"▪ Order online: loc.gov/shop ▪ Order by phone: 888.682.3557\n",
"\n",
"26\n",
"\n",
"LIBRARY OF CONGRESS MAGAZINE\n",
"\n",
"LIBRARY OF CONGRESS MAGAZINE\n",
"\n",
"SUPPORT\n",
"\n",
"A PICTURE OF PHILANTHROPY Annenberg Foundation Gives $1 Million and a Photographic Collection to the Library.\n",
"\n",
"A major gift by Wallis Annenberg and the Annenberg Foundation in Los Angeles will support the effort to reimagine the visitor experience at the Library of Congress. The foundation also is donating 1,000 photographic prints from its Annenberg Space for Photography exhibitions to the Library.\n",
"\n",
"The Library is pursuing a multiyear plan to transform the experience of its nearly 2 million annual visitors, share more of its treasures with the public and show how Library collections connect with visitors own creativity and research. The project is part of a strategic plan established by Librarian of Congress Carla Hayden to make the Library more user-centered for Congress, creators and learners of all ages.\n",
"\n",
"A 2018 exhibition at the Annenberg Space for Photography in Los Angeles featured over 400 photographs from the Library. The Library is planning a future photography exhibition, based on the Annenberg-curated show, along with a documentary film on the Library and its history, produced by the Annenberg Space for Photography.\n",
"\n",
"“The nations library is honored to have the strong support of Wallis Annenberg and the Annenberg Foundation as we enhance the experience for our visitors,” Hayden said. “We know that visitors will find new connections to the Library through the incredible photography collections and countless other treasures held here to document our nations history and creativity.”\n",
"\n",
"To enhance the Librarys holdings, the foundation is giving the Library photographic prints for long-term preservation from 10 other exhibitions hosted at the Annenberg Space for Photography. The Library holds one of the worlds largest photography collections, with about 14 million photos and over 1 million images digitized and available online.\n",
"18 LIBRARY OF CONGRESS MAGAZINE\n"
]
}
],
"source": [
@@ -461,10 +577,17 @@
"name": "stdout",
"output_type": "stream",
"text": [
" The image depicts a woman with several children. The woman appears to be of Cherokee heritage, as suggested by the text provided. The image is described as having been initially regretted by the subject, Florence Owens Thompson, due to her feeling that it did not accurately represent her leadership qualities.\n",
"The historical and cultural context of the image is tied to the Great Depression and the Dust Bowl, both of which affected the Cherokee people in Oklahoma. The photograph was taken during this period, and its subject, Florence Owens Thompson, was a leader within her community who worked tirelessly to help those affected by these crises.\n",
"The image's symbolism and meaning can be interpreted as a representation of resilience and strength in the face of adversity. The woman is depicted with multiple children, which could signify her role as a caregiver and protector during difficult times.\n",
"Connections between the image and the related text include Florence Owens Thompson's leadership qualities and her regretted feelings about the photograph. Additionally, the mention of Dorothea Lange, the photographer who took this photo, ties the image to its historical context and the broader narrative of the Great Depression and Dust Bowl in Oklahoma. \n"
" The image is a black and white photograph by Dorothea Lange titled \"Destitute Pea Pickers in California. Mother of Seven Children. Age Thirty-Two. Nipomo, California.\" It was taken in March 1936 as part of the Farm Security Administration-Office of War Information Collection.\n",
"\n",
"The photograph features a woman with seven children, who appear to be in a state of poverty and hardship. The woman is seated, looking directly at the camera, while three of her children are standing behind her. They all seem to be dressed in ragged clothing, indicative of their impoverished condition.\n",
"\n",
"The historical context of this image is related to the Great Depression, which was a period of economic hardship in the United States that lasted from 1929 to 1939. During this time, many people struggled to make ends meet, and poverty was widespread. This photograph captures the plight of one such family during this difficult period.\n",
"\n",
"The symbolism of the image is multifaceted. The woman's direct gaze at the camera can be seen as a plea for help or an expression of desperation. The ragged clothing of the children serves as a stark reminder of the poverty and hardship experienced by many during this time.\n",
"\n",
"In terms of connections to the related text, it is mentioned that Florence Owens Thompson, the woman in the photograph, initially regretted having her picture taken. However, she later came to appreciate the importance of the image as a representation of the struggles faced by many during the Great Depression. The mention of Helena Zinkham suggests that she may have played a role in the creation or distribution of this photograph.\n",
"\n",
"Overall, this image is a powerful depiction of poverty and hardship during the Great Depression, capturing the resilience and struggles of one family amidst difficult times. \n"
]
}
],
@@ -491,11 +614,17 @@
"source": [
"! docker kill vdms_rag_nb"
]
},
{
"cell_type": "markdown",
"id": "fe4a98ee",
"metadata": {},
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": ".langchain-venv",
"display_name": ".test-venv",
"language": "python",
"name": "python3"
},
@@ -509,7 +638,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.11.10"
}
},
"nbformat": 4,

View File

@@ -71,9 +71,9 @@
"# Optional: LangSmith API keys\n",
"import os\n",
"\n",
"os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"os.environ[\"LANGCHAIN_ENDPOINT\"] = \"https://api.smith.langchain.com\"\n",
"os.environ[\"LANGCHAIN_API_KEY\"] = \"api_key\""
"os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
"os.environ[\"LANGSMITH_ENDPOINT\"] = \"https://api.smith.langchain.com\"\n",
"os.environ[\"LANGSMITH_API_KEY\"] = \"api_key\""
]
},
{

View File

@@ -29,7 +29,7 @@
"source": [
"import os\n",
"\n",
"os.environ[\"LANGCHAIN_PROJECT\"] = \"movie-qa\""
"os.environ[\"LANGSMITH_PROJECT\"] = \"movie-qa\""
]
},
{

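These two hunks are part of one rename: the legacy LANGCHAIN_* tracing variables become their LANGSMITH_* equivalents (the same substitution recurs in a later notebook in this compare). A minimal consolidated sketch of the new configuration, with the key and project values as placeholders:

import os

# New LangSmith variable names, matching the replacement lines in the hunks above.
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_ENDPOINT"] = "https://api.smith.langchain.com"
os.environ["LANGSMITH_API_KEY"] = "api_key"  # placeholder
os.environ["LANGSMITH_PROJECT"] = "movie-qa"  # placeholder project name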
View File

@@ -0,0 +1,82 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# RAG using Upstage Document Parse and Groundedness Check\n",
"This example illustrates RAG using [Upstage](https://python.langchain.com/docs/integrations/providers/upstage/) Document Parse and Groundedness Check."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from typing import List\n",
"\n",
"from langchain_community.vectorstores import DocArrayInMemorySearch\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain_core.runnables.base import RunnableSerializable\n",
"from langchain_upstage import (\n",
" ChatUpstage,\n",
" UpstageDocumentParseLoader,\n",
" UpstageEmbeddings,\n",
" UpstageGroundednessCheck,\n",
")\n",
"\n",
"model = ChatUpstage()\n",
"\n",
"files = [\"/PATH/TO/YOUR/FILE.pdf\", \"/PATH/TO/YOUR/FILE2.pdf\"]\n",
"\n",
"loader = UpstageDocumentParseLoader(file_path=files, split=\"element\")\n",
"\n",
"docs = loader.load()\n",
"\n",
"vectorstore = DocArrayInMemorySearch.from_documents(\n",
" docs, embedding=UpstageEmbeddings(model=\"solar-embedding-1-large\")\n",
")\n",
"retriever = vectorstore.as_retriever()\n",
"\n",
"template = \"\"\"Answer the question based only on the following context:\n",
"{context}\n",
"\n",
"Question: {question}\n",
"\"\"\"\n",
"prompt = ChatPromptTemplate.from_template(template)\n",
"output_parser = StrOutputParser()\n",
"\n",
"retrieved_docs = retriever.get_relevant_documents(\"How many parameters in SOLAR model?\")\n",
"\n",
"groundedness_check = UpstageGroundednessCheck()\n",
"groundedness = \"\"\n",
"while groundedness != \"grounded\":\n",
" chain: RunnableSerializable = RunnablePassthrough() | prompt | model | output_parser\n",
"\n",
" result = chain.invoke(\n",
" {\n",
" \"context\": retrieved_docs,\n",
" \"question\": \"How many parameters in SOLAR model?\",\n",
" }\n",
" )\n",
"\n",
" groundedness = groundedness_check.invoke(\n",
" {\n",
" \"context\": retrieved_docs,\n",
" \"answer\": result,\n",
" }\n",
" )"
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

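The heart of this new notebook is a retry-until-grounded loop: generate an answer, ask UpstageGroundednessCheck to verify it against the retrieved context, and regenerate until the verdict is "grounded". A condensed sketch of that pattern follows; the max_retries cap is an added safeguard not present in the notebook, which loops without a bound:

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_upstage import ChatUpstage, UpstageGroundednessCheck

prompt = ChatPromptTemplate.from_template(
    "Answer the question based only on the following context:\n"
    "{context}\n\nQuestion: {question}\n"
)
chain = prompt | ChatUpstage() | StrOutputParser()
checker = UpstageGroundednessCheck()

def answer_grounded(context, question, max_retries=3):
    # Regenerate until the groundedness check passes (or retries run out);
    # max_retries is an addition, not part of the original notebook.
    answer = ""
    for _ in range(max_retries):
        answer = chain.invoke({"context": context, "question": question})
        if checker.invoke({"context": context, "answer": answer}) == "grounded":
            break
    return answer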
View File

@@ -1,82 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# RAG using Upstage Layout Analysis and Groundedness Check\n",
"This example illustrates RAG using [Upstage](https://python.langchain.com/docs/integrations/providers/upstage/) Layout Analysis and Groundedness Check."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from typing import List\n",
"\n",
"from langchain_community.vectorstores import DocArrayInMemorySearch\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain_core.runnables.base import RunnableSerializable\n",
"from langchain_upstage import (\n",
" ChatUpstage,\n",
" UpstageEmbeddings,\n",
" UpstageGroundednessCheck,\n",
" UpstageLayoutAnalysisLoader,\n",
")\n",
"\n",
"model = ChatUpstage()\n",
"\n",
"files = [\"/PATH/TO/YOUR/FILE.pdf\", \"/PATH/TO/YOUR/FILE2.pdf\"]\n",
"\n",
"loader = UpstageLayoutAnalysisLoader(file_path=files, split=\"element\")\n",
"\n",
"docs = loader.load()\n",
"\n",
"vectorstore = DocArrayInMemorySearch.from_documents(\n",
" docs, embedding=UpstageEmbeddings(model=\"solar-embedding-1-large\")\n",
")\n",
"retriever = vectorstore.as_retriever()\n",
"\n",
"template = \"\"\"Answer the question based only on the following context:\n",
"{context}\n",
"\n",
"Question: {question}\n",
"\"\"\"\n",
"prompt = ChatPromptTemplate.from_template(template)\n",
"output_parser = StrOutputParser()\n",
"\n",
"retrieved_docs = retriever.get_relevant_documents(\"How many parameters in SOLAR model?\")\n",
"\n",
"groundedness_check = UpstageGroundednessCheck()\n",
"groundedness = \"\"\n",
"while groundedness != \"grounded\":\n",
" chain: RunnableSerializable = RunnablePassthrough() | prompt | model | output_parser\n",
"\n",
" result = chain.invoke(\n",
" {\n",
" \"context\": retrieved_docs,\n",
" \"question\": \"How many parameters in SOLAR model?\",\n",
" }\n",
" )\n",
"\n",
" groundedness = groundedness_check.invoke(\n",
" {\n",
" \"context\": retrieved_docs,\n",
" \"answer\": result,\n",
" }\n",
" )"
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

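Read together with the previous file, this deletion completes a rename rather than a removal: the deprecated UpstageLayoutAnalysisLoader gives way to UpstageDocumentParseLoader, with the rest of the pipeline unchanged. The swap in isolation (file paths are placeholders):

from langchain_upstage import UpstageDocumentParseLoader  # replaces UpstageLayoutAnalysisLoader

loader = UpstageDocumentParseLoader(file_path=["/PATH/TO/YOUR/FILE.pdf"], split="element")
docs = loader.load()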
View File

@@ -233,7 +233,7 @@ Question: {input}"""
_DEFAULT_TEMPLATE = """Given an input question, first create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer. Unless the user specifies in his question a specific number of examples he wishes to obtain, always limit your query to at most {top_k} results. You can order the results by a relevant column to return the most interesting examples in the database.
Never query for all the columns from a specific table, only ask for a the few relevant columns given the question.
Never query for all the columns from a specific table, only ask for a few relevant columns given the question.
Pay attention to use only the column names that you can see in the schema description. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table.

View File

@@ -26,7 +26,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"2e44b44201c8778b462342ac97f5ccf05a4e02aa8a04505ecde97bf20dcc4cbb\n"
"76e78b89cee4d6d31154823f93592315df79c28410dfbfc87c9f70cbfdfa648b\n"
]
}
],
@@ -49,7 +49,7 @@
"metadata": {},
"outputs": [],
"source": [
"! pip install --quiet -U vdms langchain-experimental sentence-transformers opencv-python open_clip_torch torch accelerate"
"! pip install --quiet -U langchain-vdms langchain-experimental sentence-transformers opencv-python open_clip_torch torch accelerate"
]
},
{
@@ -63,7 +63,16 @@
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/data1/cwlacewe/apps/cwlacewe_langchain/.langchain-venv/lib/python3.11/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
" from .autonotebook import tqdm as notebook_tqdm\n"
]
}
],
"source": [
"import json\n",
"import os\n",
@@ -80,10 +89,10 @@
"from langchain_community.embeddings.sentence_transformer import (\n",
" SentenceTransformerEmbeddings,\n",
")\n",
"from langchain_community.vectorstores.vdms import VDMS, VDMS_Client\n",
"from langchain_core.callbacks.manager import CallbackManagerForLLMRun\n",
"from langchain_core.runnables import ConfigurableField\n",
"from langchain_experimental.open_clip import OpenCLIPEmbeddings\n",
"from langchain_vdms.vectorstores import VDMS, VDMS_Client\n",
"from transformers import (\n",
" AutoModelForCausalLM,\n",
" AutoTokenizer,\n",
@@ -363,7 +372,7 @@
"\t\tThere are 2 shoppers in this video. Shopper 1 is wearing a plaid shirt and a spectacle. Shopper 2 who is not completely captured in the frame seems to wear a black shirt and is moving away with his back turned towards the camera. There is a shelf towards the right of the camera frame. Shopper 2 is hanging an item back to a hanger and then quickly walks away in a similar fashion as shopper 2. Contents of the nearer side of the shelf with respect to camera seems to be camping lanterns and cleansing agents, arranged at the top. In the middle part of the shelf, various tools including grommets, a pocket saw, candles, and other helpful camping items can be observed. Midway through the shelf contains items which appear to be steel containers and items made up of plastic with red, green, orange, and yellow colors, while those at the bottom are packed in cardboard boxes. Contents at the farther part of the shelf are well stocked and organized but are not glaringly visible.\n",
"\n",
"\tMetadata:\n",
"\t\t{'fps': 24.0, 'id': 'c6e5f894-b905-46f5-ac9e-4487a9235561', 'total_frames': 120.0, 'video': 'clip16.mp4'}\n",
"\t\t{'fps': 24.0, 'total_frames': 120.0, 'video': 'clip16.mp4'}\n",
"Retrieved Top matching video!\n",
"\n",
"\n"
@@ -392,18 +401,12 @@
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "3edf8783e114487ca490d8dec5c46884",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Loading checkpoint shards: 0%| | 0/2 [00:00<?, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
"name": "stderr",
"output_type": "stream",
"text": [
"Loading checkpoint shards: 100%|██████████| 2/2 [00:18<00:00, 9.01s/it]\n",
"WARNING:accelerate.big_modeling:Some parameters are on the meta device because they were offloaded to the cpu.\n"
]
}
],
"source": [
@@ -555,7 +558,7 @@
"\t\tA single shopper is seen in this video standing facing the shelf and in the bottom part of the frame. He's wearing a light-colored shirt and a spectacle. The shopper is carrying a red colored basket in his left hand. The entire basket is not clearly visible, but it does seem to contain something in a blue colored package which the shopper has just placed in the basket given his right hand was seen inside the basket. Then the shopper leans towards the shelf and checks out an item in orange package. He picks this single item with his right hand and proceeds to place the item in the basket. The entire shelf looks well stocked except for the top part of the shelf which is empty. The shopper has not picked any item from this part of the shelf. The rest of the shelf looks well stocked and does not need any restocking. The contents on the farther part of the shelf consists of items, majority of which are packed in black, yellow, and green packages. No other details are visible of these items.\n",
"\n",
"\tMetadata:\n",
"\t\t{'fps': 24.0, 'id': '37ddc212-994e-4db0-877f-5ed09965ab90', 'total_frames': 162.0, 'video': 'clip10.mp4'}\n",
"\t\t{'fps': 24.0, 'total_frames': 162.0, 'video': 'clip10.mp4'}\n",
"Retrieved Top matching video!\n",
"\n",
"\n"
@@ -585,7 +588,7 @@
"User : Find a man holding a red shopping basket\n",
"Assistant : Most relevant retrieved video is **clip9.mp4** \n",
"\n",
"I see a person standing in front of a well-stocked shelf, they are wearing a light-colored shirt and glasses, and they have a red shopping basket in their left hand. They are leaning forward and picking up an item from the shelf with their right hand. The item is packaged in a blue-green box. Based on the scene description, I can confirm that the person is indeed holding a red shopping basket.</s>\n"
"I see a person standing in front of a well-stocked shelf, they are wearing a light-colored shirt and glasses, and they have a red shopping basket in their left hand. They are leaning forward and picking up an item from the shelf with their right hand. The item is packaged in a blue-green box. Based on the available information, I cannot confirm whether the basket is empty or contains items. However, the rest of the\n"
]
}
],
@@ -655,7 +658,7 @@
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"display_name": ".langchain-venv",
"language": "python",
"name": "python3"
},
@@ -669,7 +672,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.11.10"
}
},
"nbformat": 4,

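The hunks above migrate this notebook from the community VDMS integration to the dedicated langchain-vdms package: the pip install target changes, and VDMS / VDMS_Client move to langchain_vdms.vectorstores. A minimal sketch of the new wiring, assuming the constructor arguments match the community version's API:

# pip install -U langchain-vdms
from langchain_vdms.vectorstores import VDMS, VDMS_Client

client = VDMS_Client(host="localhost", port=55555)  # port assumed from the VDMS default
# vectorstore = VDMS(client=client, embedding=..., collection_name=...)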
View File

@@ -144,8 +144,8 @@
"outputs": [],
"source": [
"# import os\n",
"# os.environ[\"LANGCHAIN_HANDLER\"] = \"langchain\"\n",
"# os.environ[\"LANGCHAIN_SESSION\"] = \"default\" # Make sure this session actually exists."
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
"# os.environ[\"LANGSMITH_PROJECT\"] = \"default\" # Make sure this session actually exists."
]
},
{

View File

@@ -1,3 +0,0 @@
FROM python:3.11
RUN pip install langchain

View File

@@ -1,12 +0,0 @@
# Makefile

build_graphdb:
	docker build --tag graphdb ./graphdb

start_graphdb:
	docker-compose up -d graphdb

down:
	docker-compose down -v --remove-orphans

.PHONY: build_graphdb start_graphdb down

View File

@@ -1,84 +0,0 @@
# docker-compose to make it easier to spin up integration tests.
# Services should use NON standard ports to avoid collision with
# any existing services that might be used for development.
# ATTENTION: When adding a service below use a non-standard port
# increment by one from the preceding port.
# For credentials always use `langchain` and `langchain` for the
# username and password.
version: "3"
name: langchain-tests
services:
  redis:
    image: redis/redis-stack-server:latest
    # We use non standard ports since
    # these instances are used for testing
    # and users may already have existing
    # redis instances set up locally
    # for other projects
    ports:
      - "6020:6379"
    volumes:
      - ./redis-volume:/data
  graphdb:
    image: graphdb
    ports:
      - "6021:7200"
  mongo:
    image: mongo:latest
    container_name: mongo_container
    ports:
      - "6022:27017"
    environment:
      MONGO_INITDB_ROOT_USERNAME: langchain
      MONGO_INITDB_ROOT_PASSWORD: langchain
  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: langchain
      POSTGRES_USER: langchain
      POSTGRES_PASSWORD: langchain
    ports:
      - "6023:5432"
    command: |
      postgres -c log_statement=all
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "psql postgresql://langchain:langchain@localhost/langchain --command 'SELECT 1;' || exit 1",
        ]
      interval: 5s
      retries: 60
    volumes:
      - postgres_data:/var/lib/postgresql/data
  pgvector:
    # postgres with the pgvector extension
    image: ankane/pgvector
    environment:
      POSTGRES_DB: langchain
      POSTGRES_USER: langchain
      POSTGRES_PASSWORD: langchain
    ports:
      - "6024:5432"
    command: |
      postgres -c log_statement=all
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "psql postgresql://langchain:langchain@localhost/langchain --command 'SELECT 1;' || exit 1",
        ]
      interval: 5s
      retries: 60
    volumes:
      - postgres_data_pgvector:/var/lib/postgresql/data
  vdms:
    image: intellabs/vdms:latest
    container_name: vdms_container
    ports:
      - "6025:55555"

volumes:
  postgres_data:
  postgres_data_pgvector:

View File

@@ -1,5 +0,0 @@
FROM ontotext/graphdb:10.5.1
RUN mkdir -p /opt/graphdb/dist/data/repositories/langchain
COPY config.ttl /opt/graphdb/dist/data/repositories/langchain/
COPY graphdb_create.sh /run.sh
ENTRYPOINT bash /run.sh

View File

@@ -1,46 +0,0 @@
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>.
@prefix rep: <http://www.openrdf.org/config/repository#>.
@prefix sr: <http://www.openrdf.org/config/repository/sail#>.
@prefix sail: <http://www.openrdf.org/config/sail#>.
@prefix graphdb: <http://www.ontotext.com/config/graphdb#>.
[] a rep:Repository ;
    rep:repositoryID "langchain" ;
    rdfs:label "" ;
    rep:repositoryImpl [
        rep:repositoryType "graphdb:SailRepository" ;
        sr:sailImpl [
            sail:sailType "graphdb:Sail" ;
            graphdb:read-only "false" ;

            # Inference and Validation
            graphdb:ruleset "empty" ;
            graphdb:disable-sameAs "true" ;
            graphdb:check-for-inconsistencies "false" ;

            # Indexing
            graphdb:entity-id-size "32" ;
            graphdb:enable-context-index "false" ;
            graphdb:enablePredicateList "true" ;
            graphdb:enable-fts-index "false" ;
            graphdb:fts-indexes ("default" "iri") ;
            graphdb:fts-string-literals-index "default" ;
            graphdb:fts-iris-index "none" ;

            # Queries and Updates
            graphdb:query-timeout "0" ;
            graphdb:throw-QueryEvaluationException-on-timeout "false" ;
            graphdb:query-limit-results "0" ;

            # Settable in the file but otherwise hidden in the UI and in the RDF4J console
            graphdb:base-URL "http://example.org/owlim#" ;
            graphdb:defaultNS "" ;
            graphdb:imports "" ;
            graphdb:repository-type "file-repository" ;
            graphdb:storage-folder "storage" ;
            graphdb:entity-index-size "10000000" ;
            graphdb:in-memory-literal-properties "true" ;
            graphdb:enable-literal-index "true" ;
        ]
    ].

View File

@@ -1,28 +0,0 @@
#! /bin/bash

REPOSITORY_ID="langchain"
GRAPHDB_URI="http://localhost:7200/"

echo -e "\nUsing GraphDB: ${GRAPHDB_URI}"

function startGraphDB {
    echo -e "\nStarting GraphDB..."
    exec /opt/graphdb/dist/bin/graphdb
}

function waitGraphDBStart {
    echo -e "\nWaiting GraphDB to start..."
    for _ in $(seq 1 5); do
        CHECK_RES=$(curl --silent --write-out '%{http_code}' --output /dev/null ${GRAPHDB_URI}/rest/repositories)
        if [ "${CHECK_RES}" = '200' ]; then
            echo -e "\nUp and running"
            break
        fi
        sleep 30s
        echo "CHECK_RES: ${CHECK_RES}"
    done
}

startGraphDB &
waitGraphDBStart
wait

View File

@@ -13,32 +13,25 @@ OUTPUT_NEW_DOCS_DIR = $(OUTPUT_NEW_DIR)/docs
PYTHON = .venv/bin/python

PARTNER_DEPS_LIST := $(shell find ../libs/partners -mindepth 1 -maxdepth 1 -type d -exec sh -c ' \
	for dir; do \
		if find "$$dir" -maxdepth 1 -type f \( -name "pyproject.toml" -o -name "setup.py" \) | grep -q .; then \
			echo "$$dir"; \
		fi \
	done' sh {} + | grep -vE "airbyte|ibm|databricks" | tr '\n' ' ')

PORT ?= 3001

clean:
	rm -rf build

install-vercel-deps:
	yum -y update
	yum install gcc bzip2-devel libffi-devel zlib-devel wget tar gzip rsync -y
	yum -y -q update
	yum -y -q install gcc bzip2-devel libffi-devel zlib-devel wget tar gzip rsync -y

install-py-deps:
	python3 -m venv .venv
	$(PYTHON) -m pip install --upgrade pip
	$(PYTHON) -m pip install --upgrade uv
	$(PYTHON) -m uv pip install --pre -r vercel_requirements.txt
	$(PYTHON) -m uv pip install --pre --editable $(PARTNER_DEPS_LIST)
	$(PYTHON) -m pip install -q --upgrade pip
	$(PYTHON) -m pip install -q --upgrade uv
	$(PYTHON) -m uv pip install -q --pre -r vercel_requirements.txt
	$(PYTHON) -m uv pip install -q --pre $$($(PYTHON) scripts/partner_deps_list.py) --overrides vercel_overrides.txt

generate-files:
	mkdir -p $(INTERMEDIATE_DIR)
	cp -r $(SOURCE_DIR)/* $(INTERMEDIATE_DIR)
	cp -rp $(SOURCE_DIR)/* $(INTERMEDIATE_DIR)
	$(PYTHON) scripts/tool_feat_table.py $(INTERMEDIATE_DIR)
@@ -47,6 +40,7 @@ generate-files:
	$(PYTHON) scripts/partner_pkg_table.py $(INTERMEDIATE_DIR)
	curl https://raw.githubusercontent.com/langchain-ai/langserve/main/README.md | sed 's/<=/\&lt;=/g' > $(INTERMEDIATE_DIR)/langserve.md
	cp ../SECURITY.md $(INTERMEDIATE_DIR)/security.md
	$(PYTHON) scripts/resolve_local_links.py $(INTERMEDIATE_DIR)/langserve.md https://github.com/langchain-ai/langserve/tree/main/

copy-infra:
@@ -59,6 +53,7 @@ copy-infra:
	cp package.json $(OUTPUT_NEW_DIR)
	cp sidebars.js $(OUTPUT_NEW_DIR)
	cp -r static $(OUTPUT_NEW_DIR)
	cp -r ../libs/cli/langchain_cli/integration_template $(OUTPUT_NEW_DIR)/src/theme
	cp yarn.lock $(OUTPUT_NEW_DIR)

render:
@@ -80,6 +75,7 @@ build: install-py-deps generate-files copy-infra render md-sync append-related

vercel-build: install-vercel-deps build generate-references
	rm -rf docs
	mv $(OUTPUT_NEW_DOCS_DIR) docs
	cp -r ../libs/cli/langchain_cli/integration_template src/theme
	rm -rf build
	mkdir static/api_reference
	git clone --depth=1 https://github.com/langchain-ai/langchain-api-docs-html.git

View File

@@ -80,6 +80,8 @@
html {
    --pst-font-family-base: 'Inter';
    --pst-font-family-heading: 'Inter Tight', sans-serif;
    --pst-icon-versionmodified-deprecated: var(--pst-icon-exclamation-triangle);
}

/*******************************************************************************
@@ -92,7 +94,7 @@ html {
 * https://sass-lang.com/documentation/interpolation
 */

/* Defaults to light mode if data-theme is not set */
html:not([data-theme]) {
html:not([data-theme]), html[data-theme=light] {
    --pst-color-primary: #287977;
    --pst-color-primary-bg: #80D6D3;
    --pst-color-secondary: #6F3AED;
@@ -122,58 +124,8 @@ html:not([data-theme]) {
    --pst-color-on-background: #F4F9F8;
    --pst-color-surface: #F4F9F8;
    --pst-color-on-surface: #222832;
}

html:not([data-theme]) {
    --pst-color-link: var(--pst-color-primary);
    --pst-color-link-hover: var(--pst-color-secondary);
}

html:not([data-theme]) .only-dark,
html:not([data-theme]) .only-dark ~ figcaption {
    display: none !important;
}

/* NOTE: @each {...} is like a for-loop
 * https://sass-lang.com/documentation/at-rules/control/each
 */
html[data-theme=light] {
    --pst-color-primary: #287977;
    --pst-color-primary-bg: #80D6D3;
    --pst-color-secondary: #6F3AED;
    --pst-color-secondary-bg: #DAD6FE;
    --pst-color-accent: #c132af;
    --pst-color-accent-bg: #f8dff5;
    --pst-color-info: #276be9;
    --pst-color-info-bg: #dce7fc;
    --pst-color-warning: #f66a0a;
    --pst-color-warning-bg: #f8e3d0;
    --pst-color-success: #00843f;
    --pst-color-success-bg: #d6ece1;
    --pst-color-attention: var(--pst-color-warning);
    --pst-color-attention-bg: var(--pst-color-warning-bg);
    --pst-color-danger: #d72d47;
    --pst-color-danger-bg: #f9e1e4;
    --pst-color-text-base: #222832;
    --pst-color-text-muted: #48566b;
    --pst-color-heading-color: #ffffff;
    --pst-color-shadow: rgba(0, 0, 0, 0.1);
    --pst-color-border: #d1d5da;
    --pst-color-border-muted: rgba(23, 23, 26, 0.2);
    --pst-color-inline-code: #912583;
    --pst-color-inline-code-links: #246161;
    --pst-color-target: #f3cf95;
    --pst-color-background: #ffffff;
    --pst-color-on-background: #F4F9F8;
    --pst-color-surface: #F4F9F8;
    --pst-color-on-surface: #222832;
    color-scheme: light;
}

html[data-theme=light] {
    --pst-color-link: var(--pst-color-primary);
    --pst-color-link-hover: var(--pst-color-secondary);
}

html[data-theme=light] .only-dark,
html[data-theme=light] .only-dark ~ figcaption {
    display: none !important;
    --pst-color-deprecated: #f47d2e;
    --pst-color-deprecated-bg: #fff3e8;
}

html[data-theme=dark] {
@@ -206,6 +158,8 @@ html[data-theme=dark] {
    --pst-color-on-background: #222832;
    --pst-color-surface: #29313d;
    --pst-color-on-surface: #f3f4f5;
    --pst-color-deprecated: #b46f3e;
    --pst-color-deprecated-bg: #341906;
    /* Adjust images in dark mode (unless they have class .only-dark or
     * .dark-light, in which case assume they're already optimized for dark
     * mode).
@@ -216,6 +170,30 @@ html[data-theme=dark] {
     */
    color-scheme: dark;
}

html:not([data-theme]) {
    --pst-color-link: var(--pst-color-primary);
    --pst-color-link-hover: var(--pst-color-secondary);
}

html:not([data-theme]) .only-dark,
html:not([data-theme]) .only-dark ~ figcaption {
    display: none !important;
}

/* NOTE: @each {...} is like a for-loop
 * https://sass-lang.com/documentation/at-rules/control/each
 */
html[data-theme=light] {
    color-scheme: light;
}

html[data-theme=light] {
    --pst-color-link: var(--pst-color-primary);
    --pst-color-link-hover: var(--pst-color-secondary);
}

html[data-theme=light] .only-dark,
html[data-theme=light] .only-dark ~ figcaption {
    display: none !important;
}

html[data-theme=dark] {
    --pst-color-link: var(--pst-color-primary);
    --pst-color-link-hover: var(--pst-color-secondary);
@@ -350,7 +328,7 @@ html[data-theme=dark] .MathJax_SVG * {
}

.bd-sidebar-primary {
    width: 22%; /* Adjust this value to your preference */
    width: max-content; /* Adjust this value to your preference */
    line-height: 1.4;
}
@@ -389,6 +367,13 @@ html[data-theme=dark] .MathJax_SVG * {

div.deprecated {
    margin-top: 0.5em;
    margin-bottom: 2em;
    background-color: var(--pst-color-deprecated-bg);
    border-color: var(--pst-color-deprecated);
}

span.versionmodified.deprecated:before {
    color: var(--pst-color-deprecated);
}

.admonition-beta.admonition, div.admonition-beta.admonition {
@@ -408,4 +393,4 @@ dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.glossary):not(.
p {
    font-size: 0.9rem;
    margin-bottom: 0.5rem;
}
}

View File

@@ -11,6 +11,7 @@
import json
import os
import sys
from datetime import datetime
from pathlib import Path

import toml

@@ -87,12 +88,24 @@ class Beta(BaseAdmonition):

def setup(app):
    app.add_directive("example_links", ExampleLinksDirective)
    app.add_directive("beta", Beta)
    app.connect("autodoc-skip-member", skip_private_members)


def skip_private_members(app, what, name, obj, skip, options):
    if skip:
        return True
    if hasattr(obj, "__doc__") and obj.__doc__ and ":private:" in obj.__doc__:
        return True
    if name == "__init__" and obj.__objclass__ is object:
        # dont document default init
        return True
    return None


# -- Project information -----------------------------------------------------

project = "🦜🔗 LangChain"
copyright = "2023, LangChain Inc"
copyright = f"{datetime.now().year}, LangChain Inc"
author = "LangChain, Inc"

html_favicon = "_static/img/brand/favicon.png"

@@ -116,6 +129,7 @@ extensions = [
    "_extensions.gallery_directive",
    "sphinx_design",
    "sphinx_copybutton",
    "sphinxcontrib.googleanalytics",
]

source_suffix = [".rst", ".md"]

@@ -255,8 +269,14 @@ html_show_sourcelink = False

# Set canonical URL from the Read the Docs Domain
html_baseurl = os.environ.get("READTHEDOCS_CANONICAL_URL", "")

googleanalytics_id = "G-9B66JQQH2F"

# Tell Jinja2 templates the build is running on Read the Docs
if os.environ.get("READTHEDOCS", "") == "True":
    html_context["READTHEDOCS"] = True

master_doc = "index"

# If a signatures length in characters exceeds 60,
# each parameter within the signature will be displayed on an individual logical line
maximum_signature_line_length = 60

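The new skip_private_members hook above gives the API reference an opt-out mechanism: autodoc now skips any object whose docstring contains a :private: marker, and no longer documents the default object.__init__. A usage sketch (the function name is illustrative):

def _internal_helper(x: int) -> int:
    """Compute an internal value.

    :private:
    """
    return x + 1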
View File

@@ -72,14 +72,21 @@ def _load_module_members(module_path: str, namespace: str) -> ModuleMembers:
    Returns:
        list: A list of loaded module objects.
    """
    classes_: List[ClassInfo] = []
    functions: List[FunctionInfo] = []
    module = importlib.import_module(module_path)

    if ":private:" in (module.__doc__ or ""):
        return ModuleMembers(classes_=[], functions=[])

    for name, type_ in inspect.getmembers(module):
        if not hasattr(type_, "__module__"):
            continue
        if type_.__module__ != module_path:
            continue
        if ":private:" in (type_.__doc__ or ""):
            continue

        if inspect.isclass(type_):
            # The type of the class is used to select a template
@@ -479,11 +486,11 @@ def _package_namespace(package_name: str) -> str:
    Returns:
        modified package_name: Can be either "langchain" or "langchain_{package_name}"
    """
    return (
        package_name
        if package_name == "langchain"
        else f"langchain_{package_name.replace('-', '_')}"
    )
    if package_name == "langchain":
        return "langchain"
    if package_name == "standard-tests":
        return "langchain_tests"
    return f"langchain_{package_name.replace('-', '_')}"


def _package_dir(package_name: str = "langchain") -> Path:
@@ -495,6 +502,7 @@ def _package_dir(package_name: str = "langchain") -> Path:
        "core",
        "cli",
        "text-splitters",
        "standard-tests",
    ):
        return ROOT_DIR / "libs" / package_name / _package_namespace(package_name)
    else:
@@ -520,7 +528,12 @@ def _get_package_version(package_dir: Path) -> str:
            "Aborting the build."
        )
        exit(1)
    return pyproject["tool"]["poetry"]["version"]
    try:
        # uses uv
        return pyproject["project"]["version"]
    except KeyError:
        # uses poetry
        return pyproject["tool"]["poetry"]["version"]


def _out_file_path(package_name: str) -> Path:
@@ -649,7 +662,7 @@ def main(dirs: Optional[list] = None) -> None:
        dirs = [
            dir_
            for dir_ in os.listdir(ROOT_DIR / "libs")
            if dir_ not in ("cli", "partners", "standard-tests", "packages.yml")
            if dir_ not in ("cli", "partners", "packages.yml")
        ]
        dirs += [
            dir_

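The _get_package_version change above lets the API-doc builder read both packaging layouts: it first tries the PEP 621 [project] table used by the uv-managed packages, then falls back to [tool.poetry]. A standalone sketch of that lookup:

import toml

def get_version(pyproject_path: str) -> str:
    # Prefer the [project] table (uv / PEP 621); fall back to Poetry metadata.
    data = toml.load(pyproject_path)
    try:
        return data["project"]["version"]
    except KeyError:
        return data["tool"]["poetry"]["version"]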
View File

@@ -9,3 +9,4 @@ pyyaml
sphinx-design
sphinx-copybutton
beautifulsoup4
sphinxcontrib-googleanalytics

View File

@@ -7,7 +7,7 @@
.. NOTE:: {{objname}} implements the standard :py:class:`Runnable Interface <langchain_core.runnables.base.Runnable>`. 🏃

    The :py:class:`Runnable Interface <langchain_core.runnables.base.Runnable>` has additional methods that are available on runnables, such as :py:meth:`with_types <langchain_core.runnables.base.Runnable.with_types>`, :py:meth:`with_retry <langchain_core.runnables.base.Runnable.with_retry>`, :py:meth:`assign <langchain_core.runnables.base.Runnable.assign>`, :py:meth:`bind <langchain_core.runnables.base.Runnable.bind>`, :py:meth:`get_graph <langchain_core.runnables.base.Runnable.get_graph>`, and more.
    The :py:class:`Runnable Interface <langchain_core.runnables.base.Runnable>` has additional methods that are available on runnables, such as :py:meth:`with_config <langchain_core.runnables.base.Runnable.with_config>`, :py:meth:`with_types <langchain_core.runnables.base.Runnable.with_types>`, :py:meth:`with_retry <langchain_core.runnables.base.Runnable.with_retry>`, :py:meth:`assign <langchain_core.runnables.base.Runnable.assign>`, :py:meth:`bind <langchain_core.runnables.base.Runnable.bind>`, :py:meth:`get_graph <langchain_core.runnables.base.Runnable.get_graph>`, and more.
{% block attributes %}
{% if attributes %}

View File

@@ -19,6 +19,6 @@
.. NOTE:: {{objname}} implements the standard :py:class:`Runnable Interface <langchain_core.runnables.base.Runnable>`. 🏃

    The :py:class:`Runnable Interface <langchain_core.runnables.base.Runnable>` has additional methods that are available on runnables, such as :py:meth:`with_types <langchain_core.runnables.base.Runnable.with_types>`, :py:meth:`with_retry <langchain_core.runnables.base.Runnable.with_retry>`, :py:meth:`assign <langchain_core.runnables.base.Runnable.assign>`, :py:meth:`bind <langchain_core.runnables.base.Runnable.bind>`, :py:meth:`get_graph <langchain_core.runnables.base.Runnable.get_graph>`, and more.
    The :py:class:`Runnable Interface <langchain_core.runnables.base.Runnable>` has additional methods that are available on runnables, such as :py:meth:`with_config <langchain_core.runnables.base.Runnable.with_config>`, :py:meth:`with_types <langchain_core.runnables.base.Runnable.with_types>`, :py:meth:`with_retry <langchain_core.runnables.base.Runnable.with_retry>`, :py:meth:`assign <langchain_core.runnables.base.Runnable.assign>`, :py:meth:`bind <langchain_core.runnables.base.Runnable.bind>`, :py:meth:`get_graph <langchain_core.runnables.base.Runnable.get_graph>`, and more.
.. example_links:: {{ objname }}

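Both template hunks add with_config to the list of Runnable convenience methods. For context, a tiny sketch of what it does, binding a run name and tags to a runnable (the names here are illustrative):

from langchain_core.runnables import RunnableLambda

double = RunnableLambda(lambda x: x * 2).with_config(run_name="double", tags=["demo"])
print(double.invoke(21))  # -> 42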
File diff suppressed because one or more lines are too long


View File

@@ -1 +0,0 @@
eNrtmM1u20YQgNtrTr0UvbJET4UokxT1Z8MoFMuJ3diWHbl1nDQQVsuhSJnkstylLNnwoWlfgI/QxpEKw01bJOh/eu6hL+Ae+hB9gg4lOpZhAr0XtABLuzs7O/Pt/Eh8MhlAyB3mv3nu+AJCQgUOePxkEsKnEXDxxdgDYTPzdLvV3n0ahc7F+7YQAV9cWCCBU2QB+MQpUuYtDLQFahOxgJ8DF6ZqTrvMHF2cH8secE56wOVF6dGxTBke5QscyLvgupIHEpH67ADkgiSHzIVkJeIQyiePccZjJrjJVC8QisEUz/GdRNLHOQ3fuQiBeDgQYQSvxx0WTG3A+WPZ8akbmdCJEitSyRMUFeAF6LSIwmRWLaonExuIiUj+fuOtU5txET+/7ua3hFJAO8CnzHT8XvxN78gJCpIJlksEnKFvPkwhxmcHAIFCXGcA49mu+DsSBK5DSbK+0OfMP09ZKGIUwM3ls4SCgjb7In7ZQiMa6wvbI7wPX9KKRq2ofjdUuCCO7yJgxSVozziYrv86vxAQeoBKlPSu4/Fs8/N5GcbjZ5uEttrXVJKQ2vEzEnoV48X8fBj5wvEgnqxs3zwuXbw6rlTUtGL1+2uK+cin8TOLuBx+vLYZRDhSKEMd8ZfqmDJ24EB88U+nQ61O11sufrjda2x/tF0u77Qrd92NbnvtDnfulrd3bzeDPlu1HjQavk2i1f1DRavqda1awpeiFdWiVtSUDmzVKjbvCn2zHg02G5vNdkU0dr2NEd/Z4+reQVTuFivr3XY06tcG+KkCvYcPG9s90tAP3NbO8GjLcNsDq39vrbte/7gp1lZ3tJ0lCa2LBo65vDUY3uk3YWv/vtq5vbde3OrvrBNt1Sjp4crGvq2tN4tr+1a/ub/K5szT9ZqiphZWVKOmJn/PL2PDBb8n7Pgpyn8dAg8wquHzMSITEX9yinEIf/4xSVPsq9a9qxB++7SJMRm/2rWjgqRVpRYVkq7qhqQZi6q+qKvS3c3d85X0mN0kBC8kAUOxAINkZpZJSxLmdchBLEfCUmrf74bE5xbG5eplDkyoHfkHYJ6tZEb/qyT68WoTfzC9FRgGjIOSmhmfP1Duz4qNst58MUs1hYU94jtH01SIX03T4PBoeGjSyDTtwaGn1o+MktOFiFov0y1ByJJj0CDF4/HTsl5+nq5cBuIZOq8qmqqo2i9DBfMeXMdzEPD0f1rxeHxaRvo/3RQQWKKwNk6M6fWov89LhOBhBCdnX6kx6vX6b9lCl6pKKFKv1n+5LoWs59Rousd/uimQqvhK5efDS2nFMeOL93DQ6VZotwwVYgHRq/hWrVoVratZpmWYVZ2oP+PdOhS1JJcZsBAvGyiWdzGKLwoeGSZFZ7mklUsV9HRJSgtoO+o2WeIDX5KCEFxGzG9X7igrhNqgtKcBGU+a+1uNzfWVHx4o85GltGb1OJ74jPuOZY3bEOLFxGfUZZGJ1TOEMeq639iPX9ZMQzVLFd2i5UrN1MrK7VZ7Qlw0ckDjF3ZpWV40jJK8JHlkuVbB+5h2ms/GiVN+7693DkwiyLTwm/KinLQlik1JaXy4sTmyVza0qLw3sMsP16qBsRW49wL/3qEqF2TW7WP0pjuKV42sOI1vFKCYDwJQ52XqGmrhsj/NtyclyTJFxbJTw118xLHRdCw0DcIALUyOsIIO6F0TyiWoQKLaZg5NOuSjpF2ZMJQXUTdqFkRePE7bokwwvjHxUUPhqo3KOAjBwuaGZviR654UZJf1MB+6fDZRkPFwh9sdtB/bSir1uCCnDXE6vHXrf0ftCtGePZJzLDewSCY6lXO5yUXYkHPJ4MIp1umczE0yNGSHOZeMiDl0/JxLBheSY8nEckjCvCVlkPkgh5IRLp/4n+R5lAHmNlCCv5pzNBkxk3+1y27UhOdcMriwSCQ/uZMnXTmfDD7597vsKuPk6ZTFxXLAzb/gZZB5N4eSQvlvDjIXLJD/XySunDmW0XcvEJ3ZM348Vkse916adTVdK8iCCeK+ntHrhet7OyYI4rh8GmrJY3rztax6kqF0Xn4GG62f23LyGvKjZmtr9fGtW/8C1DFJQg==

View File

@@ -0,0 +1 @@
eNrtVwlwFGUWTggEXA8iECDRNU0LEiE96Z4rMxOC5ObIZJJJQgImTvV0/zPTyUx3092TY0IQEUEOgw0IK8dqIMmEkEMqIEQuWURURCGehMMrK7hZtNQFBHWz/0wmkBS65W5h1bpl1xzd/b//ve+997/3f/8ibykQRIZjg5sYVgICSUnwQVy9yCuAeW4gSovrXUBycHRtliknd6tbYE5NckgSLxpiY0meUXA8YElGQXGu2FIilnKQUiy8553Ar6bWytEVncEPVaIuIIqkHYio4eFKlOKgKVZCDWg+nDBRRCQHQMoACf8EhGGRJE6UOPYhNAYVOCeAYm4RCGhVUQzq4mjghC/svISpOczFsAyUEiUBkC7UYCOdIohBJY5z9hqSKnjfdJub9bsFRa/fGipRlnT5Ru1AsgSMQwEaiJTA8L0yaDqQBoAjJYREnBxF+sYVUJwnBagFBk70aeQFGA9BYoD/qU/Odx9AApEyrB2tqoKuwfgyAqAh0BuS0MWAJGctBpQEJauKqrwOQNLQxKpaB4yM3DIw8K0kRQEYD8BSHA21y812D8PHIDSwOUkJNMJos8DvtNxYAgCPkU6mFNT3zpJfIHneyfSajy0WObYpkB3MB+Tm4UZfLjCYSlaSd5ogiMQZsVkVcIWwCKHQahXKF8oxUSIZ1gkzjjlJiKee94/v7T/Ak1QJVIIFVp9c3zu5pb8MJ8p1RpIy5QxQSQqUQ64jBZdW3db/veBmJcYFZG9y1s3mAoPXzXlVCoKAnx0DNIsVLCXX+VfR7gGzgSRUYBQHlcg1eEtfgJyAtUsOeauK0DcIQOThggeP18NpkltcVAuTAd58zRtY+FtMs/qyeC5oTG0KTIy8P8fNxiAqHDGSAqLElRqE0BnUaoNKj6Qbc5uSA2ZyfzIPO3IFkhVtMBepfXn3Ug43WwLoxuSfzPh+X8ahNz74sK4wUM5zIsACqOSmAszcW/LYjJS23uWFcYKdZBmP36y835/6Mk95GU25adpRWubC9R61irECN2XbGZgCa8BnBgLCXKK8VaPXtQRG+oLfCH3FMQLHcOKlckyAoXAyLgbG0/8b6DuiXKvBcXzPzQISVwJYUfaqcf91oL+EAFwwaT7bN9So9Xr9vp8W6lOl0vsu/KWBUiLoj4ZQusQ9NwsEVGzBxabyPmmMoeVT4+GDhaQ0NqtSqwE2FanV2wgdjhO0Xq206uk4XBunbPc1BApq8SWT5wQJEwEFm6xUIZ+KcZHlvkJLUBEalRZ6Gg97I+V00yDHbU3hfD6I8QgvACdH0q2UDaNIygGw3vUne1PmZCYaZyQ35kCQyRxXwoDVncFjLRbKZrG6EoRZlnkzmelpeL5DbVeydDGtmIlP98xzp8zO5fNmauelFDt06WVUko3CiDi1CgJQxukxQoErYN1gCi6lgDEKuU5nmpkn5qiy+PJsTQmfkaFyZJakaEoEzoOrSpMFyTpd48iIU5lNiQ6FB49LTU2zMyCNMrqEXIciL0WT606zOfJna+ela92qfLFMn8lWgIwMZUoS60y3qKczRugi7L0JsfEIXLCwYYoJgbLBYNlgvqLRG4i+oolHaH9gEhQDe2Q8Mh1uWibWWRGP5PgiDOA/bNw5jAQSMjkWnFoLA+MuZegEj0Ufl6bMBiaiQGHibXPnEBB4rqKcMLkV+mQ9bbSraE06MzsxQ9cvMjpci+GB4Ghxtc6/NG9A/y9RvViA9e8CmMm/L8HkspzIMjZbfQ4QYFXJjZSTc9Ow3QugPjkNMyfOkXfqlTo9pYzDNbReo8c1OJYEG2mftus9o9a3V3hJJ1x4pZTc5lAloLAFqdB4xEUm6LSwxvx7+GP1vTvXkUG3Ra0YFuS/QuC3p2dlTtGK03jY/CutYd/PV7zlvedg0fPGmoSOruhRGw5Hf9N5Mm/d4vEWxeIfrhxeTsW3T353eNrfHaV2e/7RhWEfR4wf/GG1YXbx3W+NsUW9ve6jA3N/+DEyyrZi6V+nRB4fM/nbH08eKjqeddpOf/mXa6/kbhtiaOmKsIVuO2zoRu5blfaOY+zbU7Ia70g1Tt55+9FJMe1dn+CR1Yej7xr63D2XM+ctGT3im4nI4qe39nxgAEtan376wpcjHjw5dy4y/uKEpJejs9MXzj3WvPWbpvbgZ1ffBbCi2Un/CPKunR4RvmXy/H9W7xfto8PR/a0LVj7anH9s15+bFrQcOGO6vKp7/dLzqa3n9JcSKxMrJswpMQ/nPnvdSCRN6+gQb5e+uG/b6Zr2mpGD3h2Vt6zjvY5P9gTXjG0eySKnXi6ckx9l4lPjh0Rtfs5TPfbNybOOVG5vWPvG1UHntntD9buouxtPpLku3hkuKNrP1oVuu3T2snG4Imnf4IaWz//Q2WDe++62gxdWZG7440L2koHHmWnee8OL5nuWWcAslbT2+x27N74afZp8n9j71Na8V/OGD/NsPjjm2+wvxSuFo3vWHAgZm3wRn7Xr0Mq27iV7S0K7l15tLWk++gheoI7dRP5Nt8ETGfH5zhHHR+4e2v1Chqi4Xd3J99iQD74oHWVqm7/3+ANhmxbF+fMdErTE8LwicXBQ0K2kiIOMt4YixvSbybqdzuvDJNyRYGeE73v5oYUinT9LEhlIylCfgCXRnaHOM2ebzLwmOys7NVFTlmMuNRan/xImSQp2twsigVbQysLrHK8QNSCFaC/+QrQK9RG8/rDRGT53RTfLVihuuOfD3B+65Rdg/J0wnwsK+50y/w9Q5nrKT0DkzuChv0UC8itQg5sOERrdf3iIGP3vDxFq/P/oEKEilL+RQ4ROe+sPEVZSAwirmtbGaa1KZZyGtuo1gKK0ekKr0ZBW8OsfIm4BD1XbAKG7dTw0uPkGD11pTmQhBd3XHT747dkff1j9zHeOly/URC8P3b8zFcnqmtm57tMRXY2ZYd99NWTCUXPHqAupSzdvPvkm/5xq3PjI9pHm3cvd510HesKqnzj8eXlP3UNTFkw9E7Xl2oGzUVM121d91IUSDddu22ApyN9g4B6TzwzrDlo+btsjYzZe8uwtDC8LOSEHb3Q8dX5+csXSu759T9j1+vrUBVWVtt2LI7ecWRb1bFjSlOEfdb0TUb42fkfhvgzDk9kapGz9hHE0svEB85En5Weo9j9NM1YfuvP9gy8i1c/veQI5sSwh9YGJ+PIP469IGvuQhu8bVosfPBwfslZ8ZdKEqB+mbpLfOVJT9+rQxk/vaWw6STqPBU1aPYtF16wP8byxuHzshguWJ9ZMzTrfTl8bcvir5RuzMzN/NH18NoKyAu/XF21fVxcwC8M7dZPNxXUP4yPF4qJLoUwo3fbao5UVpsqVT7Vcjbp624nIKY8/WBX0+DX3OQabZnjr0eaiSffdcWzYJ3HF0RHjau141vZz1RPv/yyHvvBGZFHIoaH3flV/GexSvFjfMkm8vy3n8tBeroh3rGNT4UHhX2E+8rk=

View File

@@ -1 +1 @@
eNqdVmtsFNcVhqIKqBCQtBTaRGRYVSRxdtYz+15bkCxrtizED2wTPwhy787c3R3vzNxhHvuwSyLIgySkUaZtHk1aU2BZO8YYHFDKy0WpWmqVSAmBNnIikyaqWpqQSC0tVWNSemZ2jdeFXx1pd+7j3O9895zv3Ds7+jNY1QQizx4SZB2riNOho5k7+lW81cCa/nhRwnqK8IWmxpbWfYYqjFeldF3RaqqrkSK4iIJlJLg4IlVn2GouhfRqaCsitmEKccLn359zqdchYU1DSaw5aqjNvQ6OgC9Zh47jYZmCZ21Ot1xTegpTPNYEFfOUICeIKiELiEqoRLInE0QUSVaQk5SCbERXCaD03yiLeQpXYCkqEFR1AWuUBA4Byga25+6OiAAhJATO9nE3lTBke/czIJtKbmpKvTaMYKlKCRpF0hS0sJOKURySqSShiKFrAo+prAABM3RKMrgUJREVgzuwQBRHkF7CcTgph0pEbEXA0LDq2LYFRiTCY9EaSio67SW0JMiCZSnDGAtvBalIFLHYpRMidnHQtuKZQKIGNByarmIkVQzoWILNI91QLT+Mi7HG7JUpInDWWK9Dzys2iam9W+5utC0DGUm2wcxgObZtK4OVM/r/4oAZ5JtTBaVs6XCUNwqyA2Xaa6ezaHc1K5NSST+9Diwb1qY3O1JIUfJ2uLABChCtpoZ4hxXZKXoQI9COwyKPkkkVRClksAwvG+p/mJS6cdBOimSpaXtbPZqOIL5AArTgtEdSQtKShtUESvFy087/9FqL1BRjSKnbSXmclNdJ+SpZWpWYtEQBYyKSkwYIcOZeNQXJgpYqoSXFcjOhYpmzW7BaQnZ0BR2JAjRvFQUL3ypzq9xKsDcie3OAKrlUgpF4N+Z0ANu2ZVt/CiMesnZx1uJCimi6OTzzjDiEOA6DtoEm4YGCeTDZIyhOKPmECOEchHNBxrZmzME0xgoN3DO4WFplHoYEi2XhVHdrRB4qnyO0xeXm6UGrsmjgK+vm0UYgEY5VN+WhNmWKdXmDLuZwjoY0CrII+6OtzZlFxZ4/WTmhIC4NIHT5oDSLpcXDlTZEM/fXI66xZQYkUrmUuR+pkt97pHJcNewwm/2RppvdlSen3XlcLOsKjMwA1vIyZ+63C/0XMxZjXc3THAEMcw9T5AhJC9gc/3tXF5foikurBBfKcp6osj7d4G/oTEaiHiGnhdc8uD5AupCYr+8Ii2vT9fGmZCdHswF3wOvzMKybZl2Mi3WxdOc6LtGNW8KNm4RUMNusIUMw/FtFb6vsyrR5G92bvKmN3Q3tDIoq/iifViLt9WGdxDdopC7ToMfiET7R2BLcEGXWCkKrL5BLRKLxjeFaCtgZGYFf5dvYvNYIxcV4ptHbytT5U/n1zclQe+tD4a2cHEn0xPPB+qzwoHuTEqygF/AHaKbM0M94g4z1DE9pQ4QC0VNmgYXBARC0AvcSfqxo1a+h7SiAEPFbY/3l+2lv44ZpDS8p1IEozdGoKkClBqgWrFBuxu2lWH8N46nxeKjv1rcORcp+Wm+pwZFWFclaAnS4dkrz/VzKkNOYH4zcUu2jltohlRZ/uCJonFOIhukyK3OonW4u3cx0rO5IqbRooibhPOix3ZqjtuyzPbkszxk8n8pkJSbU4/XASWZwiaPlJXCmWm6AEC1p5j4PGxouz0wJbxD2ytAsQzPscevE4KDOrM0oRNVpDXPwLaDnzXGnhHJWka3ysD6PH4JcCxcsJxo8bjHidUQCaWq1cBFjkSD+RI6GCwmLgiRAZuz/8neGZhZ8sPjYzQY6SWP4Iun32nllfllpoWIL39rENIw3FAqdurXRFJQHTEJu94mZVhquZMO6Je3YzQZliH2spA3lpsxpgTfHvwOdLh4zft4XCnkDQbePifMs70dMIs6y2B8IBRLoUCRKRxCXwnSLLUCzv66jIVwfi7zRTlcqiW5USh9i/TLRZCGRKLZgFTJjDnIiMXg4LlVcBKzmcId5NMiFuHgo4AshhvHyTJBeAwfRFNoN3RWss9b+ItteLN0Av5n98V275s2ynznwu35dbw7LHzCLT00u+XXfwTWv/2zs3EFW3Xpu+Y6j6zYv+Qp96J6V34gpr/mHPzo998d3xubfvnxefGet587Pt55ZNraKX/C7VyZfH6n64lTsrjd0+svPJl7YMPCf4xOPPHrigw/fuf+OR5/IX2JOznV+ueG9/ne9D+15Kfov5a/P9y4Yip1Y17b0bLAj8wB3dWDNq4dXd/5kwOyM7sqNPPOxd/0f/vL5+2O3z3df7T2zjPutvGj1P5fvpri/Xdv/LnXb0nnjytOsEn3ia4oj1pdedOU25w+uDjxmTFT5Zp9iB+853rxPOf3cj64s+2bb4LrhR651fv9A29nq6/eN5h+++OHFlyfX+74+2vfF92ojmw6cXVGYv/rCkUD82e1/rn/v5MaPQi84/1274NnZ2b1S0+Qflz8XWtxdHVmx07nzlc29Y6hvztJfrTCv7fr20+rYD/MLn0mfU7dPbLmwJjaoBd+8zzx/efTK7tjjCw+8eKwm2Tu8+8m3m6Ptycw/VvYOFF5sXbowZHxrVp7r/ezn5+99YPKT3W/WnG/vyl15qmbxypd29S060/HTvcl37nj794W+4fs/qbp66fjlJ9HLpy/uuRDnn5r76kTt9oV/Grm36rWv1nzKB4Sm8b2dXZef7yp+uuSthiNJO5FzZsVGjCNZyOp/ARcvjAU=
eNp1VmtsFNcVJo9SQoRIoI1SSMlkkwiFeNYzO7vrXVNTzNo4tjFrbGNsErK6O3NnZ7zz8jz2YeoooSGCkEiZqCIRVUUDZtdxHBMKpWACUX/kUYGEWipStwXSVKjk0ahKmyoQInru7BrvCjqS1/dx7ne++51zz71bihlsWrKu3TIuazY2EW9Dx3K3FE086GDLfragYlvShZHOeHfPXseUp5ZJtm1Y9bW1yJD9uoE1JPt5Xa3NsLW8hOxaaBsK9mBGkrqQ//Pt9272qdiyUApbvnrq8c0+Xgdfmg0d3xMaVf6aczZxT9kSpgRsySYWKFkTdVNFBIwSTV31JkVdUfSsrKUoA3mo/hmQmVZcU/IUrsA0TCBr2jK2KBWcA6TnwJtbGlMAShZl3vO1lBIdzVPiptCdJbf1MyMbMAIYk5ItSk9T0MI1VCvFI41K6ZTu2JYsYCorg5COTakOL1GqbmJwDRaI4nVkz2D5aiifqSuYqONY2PQNb4IRVRewQoZShk0HdVqVNdmzxJYBSuNESSiw2Oyz84a3esDStYTFS1hFxLSyS8wqmjPalGaIPmopQpt9WHNUEjefhAwjT5A07ICuCmlaSPARfrZslyh3X19LRstMLNuEePmGYQilUsDZkjNYg3+eA4g2b8oG0ZsYl7pJiJOkZ6kZey9Slo1sTOBB6xpvRJJTRHrSBKLJctPTd2YtITO9D7aGCtRQXA0VrKFCVdwbq7lVbIAcjhSJBYwpSEs5EP9qcSwDabIllRyllHJTNLHGey1YrSKNtGQbKTI0q1yvmQa9iWrDXpwHHXIkSr4qNUY3kL5OsMpDdYpX+tGTA5gvYQmCTGaR0lmZESJSLExYaEj9P1iEK0/yxTYd7DGGEYzU6dXEHVYBEtmOSTAYPzNclDASoP6cn3X3iKRbtjtRXVP2I57HkO+goS6AFO6bqSHZqIHyICqQBmNQRzTsnVN3LI2xQYOwGVworXLfgnRVyhRrSfKPl+sOTTZ+4/QYOW006KbZ7qE4kGhsre3Mw5nVKNYfivpDb+VoSD9ZU0BnmojsFgxv/ljlhIH4NIDQ5cLqFkqLJyptdMvd14H4eHcVJDJ5yd2HTDUcPFg5bjpeuN1irPNGd+XJGXecn2X9dQeqgK28xrv7vEAcrZyQsAIhoUsl3j2WxDbykyLuryjifgOZFv5NlU9sm3ma18G1+xpT4HU9LWN36stEghcTSbUh3q22DLSlYwkcbbeyWWd9nI30Oe2dvS2pTOoxP+rsWye3YpVrSrfSbB0XDoaCHBegWT/jZ/0snQ31BZNKb9eGPlnNtiSYoLpRDWfWh3uljo74wPq805eLdWhKLNfrj0ucxoaTTRvMeCIR7dzYnQ5tTDc193axqV6pri3Fr+oWhWjjaq5u3XIK2DkZWWgYtJWunBppa7Sb7QyXFPVAImOsRWuaMx1xpj8ZR2IPH7P7ZS7XWEEvEOZopswwzAQjDPkmplNKgUNvS+4ICzOj00X5pwVSrhxrywjkLz71QbF8De6Jt8+k/j0jTZDL7vEeyamhmCjVBndCgAmE4Kc+EKrnglRLR894rOyn56ape6DHRJolQiibp49KkZccLY2FsdhND8lxckgglIQ/3DY0zhm6hekyK3e8j+4qPQDo1qaDpRNJ62YKatyQ59Y97p2W7FAuK/COIEiZrMpEh4IcFG6HFw+Vl8DFQtwAIVq13L3BOmaiPDOdr2OwV4ZmGZphJ3M0lAesyKoMgnq/5VeI5Y6EQO0jNxrYehrDe6UY9MLBnKi0MOF6g2sSfM/ABKPR6Ns3N5qG4sAkyoYmq60sXMmGDajWkRsNyhB7WdUaz02b07LgTj0EnQRKhgUcjkQifF0kjCNsSESYiyb5ECdGmQgWj5aqKG2TaBq6adMW5uHNZefdqRoV5UhxauDYEBeGrS6HxwuvOALudpJNOtmEtRweOVjRkbA/tpqOIbjf6W4vAd1iU//axo7W2OE+ujKT6LhReu8VNd3SZFEsdGMTIuOO8YruCFBlTVwArK7GfvdQRBTrApFggBMi0SgTqqNXQf2aRruedyOkRBeRAtwzvHtQ4hp89cEg51tOqaghEoY4ea/CZwql2+3dW/bfv2POLO+7Df6uXXuh66T2F+autz97dFtD+9b7n//FpZ175lyZNXTp18tu3/XEOuWRlz+aeHHyuWvHX/q8b+6uW4/+iznIvTp18ZGFt44/1/b0pw+9sWLy49//e8MKZcXw1av/+PbMF6O7T+ybnz32z1d+1XDvdy//cs6x3aNf93+1duuKemHx1a29dzz8948m/zpRd3HT0Sf3t+yZv+gcxbk7At87/hWI1YsP/Omek9+8MXln7wOND59IrZozuOjCA8UPv1687MAHu90l20caR7XYqpVztZUrF/QnP/nBl9t7fstdYa+8PnvV6cMtp0M9ITvwHj8vP+8Prz7b9N+nl2wfGN2Zn7d26OArs9/v/897/Z8ORE6fW3h488ux2DvZL57frQ9uYtitP9x0efH5jfrr6QVtc5/atvo+3wufRxd+/P6Rnd/uKNzVfenBuvueeq04eObiu/ML6VNTj/7umX0//9v8k1e//87qB9s/kw73dXY5o0s/sb9z9MPwqW3K2SsXgvvziV2P6+eTP767DQl/7Fgz+ubQVG5Re+QnZ6+9eO7wj2bPrl/AXzh0dvEadurkBpk/M/GzJ8/pS9rumIp+cyeJym2zXspfXh+BEP0PoYqenw==

View File

@@ -1 +1 @@
eNqdVn1wE8cVh5JQWtKE0DJt0kk51BYG8Ml3kqwPHDIRsg0GbNmWCB+BmNXdnu7su9vj9k62bD5DJlMmkz8uaVpamjCpjUVdY/NVSKF0hqFpISHJNJCZGhLoDCkNk9bNQFo3bTP03UkGufBXNSNpb/ft7/3ee7+3e08XctikCtEnDii6hU0kWPBAnacLJt5gY2o906dhSyZib1Myle6xTWV4nmxZBl1QWYkMxU8MrCPFLxCtMsdXCjKyKmFsqNiD6c0QMX9h0oVun4YpRVlMfQuYJ7t9AgFfugUPvrU6A5/aTst1zVgyZkRMFROLjKJLxNSQC8RIJtG8RYmoKulQ9CxjIA/RXwQo/iZ1Nc/gMizDBIKmpWDKaOAQoDxgb21OQgUIRVIEz8ccRrJ1L/pxkE1FNwtKPKlF8oxmgxedtCGRMBAKkyN0FpPG8J9nECMik7Fs8GdiQRHJrOJOXwXjM4mK3Zhtik3fpnUwoxERq+5U1rDYEGE1RVdcSx3mePg3kIlUFautFiFqqwBjN4MSUimGVWqZGGllExbWIFxk2abrh/Nz7py3UyaK4M51+6y84ZEYi9Z1d2vsGuhI8wzGp8e3aVMJrFTD/xcHzKDCgqkYJUufrxQoCA206O29XTfvkbq104qK6fZh3XaDftInI8PIe+nCNtRcdYcUiT43s2P0IEegFp9LHmWzJshQyWEd/jyo/2FSfMyAWmTSwdy29/RCLQT5BRKMQiu8GVnJytj0hkApUxpqxMRle11SY4yhpIEKJljBhCqYqnKWbu9lXVHAnIr0rA2SGx8rNZCuULmIllVLQ8nEuuCNYLeGvOwqFlIVGN4tCy6+29hugxVhb2X2zgSVcykHI5k2LFgAtmndpoKMkQhVuzRhWq9MqOUMjj8VhpAgYNA20CQiUHD2ZbsUowKaXFIhnf3QPjr2NOP0t2NssMA9h/uKu5z9UGC1JJzKNkr0gdLJwbpc7lzudzuLBb665RxOAol4fWVTHo4vneH9oaif29/JQhkVXYX4WDc4p8/w1o+XLxhIaAcQtnQ0On3FzYPlNoQ6exqQkEyNg0SmIDt7kKmFQ4fK503bS7NTSDTd6a60eNtd0M/z/siBccA0rwvOHq/Rj47bjC0zzwoEMJxXuT6BkHYFO8PXW1sFqTWjLVT8qEMI1hlL2xvDjWuyibqg0knji5YvjZBWpOYbVsfV2vaGTFN2jcDykUAkVBXk+ADL+zk/7+fZNUsEqQ2n4skVihztaKHIVuzwBjWU1v25laFkYEVIbm5rXMWhOiNcJ7YbiVUNcYtkllFSk2u06jMJUUqmosvquFpFSVdFOqVEXaY5Xs0AOzuniAurmltq7VhGzeSSoTRXE5bzS1uysVXpJ+IbBD0hdWXy0YYOZXlghREtoxcJR1iuxDDMhaKc+xkc04YKDWLJTk8sGt4LejbgIsLb+9z2tenTvaBDfPZ0oXQh/TS57LaEZ/TWgCadE3WmAo0aYVLYYAJcIMTw4QVccEEwyCxuSA8kSm7Sd5XggbSJdCqBDGvHJF8QZFtvx2J/4q5iP+GKHSrp0ocbgsWdBqGYLbFyBlaxLcWrmK2vOVTsLJaYWTgOujy3zglP9R1dnR2iYIuinOvQuFhXKAgHmS1Ih0tb4Eh13QAhVqNOTyAWHSytjOmuH2LlWJ5jOf6X7oEhQJu5wRjEtFiKBbj8rbwzXKGhTrfHFgb5qmAYEl8NN6qg2iJO2ZkaooEyaTXcvFglSDzWycJ9hFVFU6Aw3m/pxYI6vVWw+bU7DSzSjuEVpBDyysr9utzCxC6+G8RtmFAsFvvV3Y3GoIJgEguEj423oricDR/Q6Gt3GpQgeniNDnSOmbOK6Ax/Bx5aA2GEcawKRaUARsFIFeJxIIaCoSCPMlFRDA8l6tgEEmTMpjwBOoWa1Y3xhvrEkVVsuZLYpFF88yrohOqKJPWlsAmVcfoFldginJYm7gOslvhq53BUiAmZWCTAC5FoSOSi7CI4h8bQbumu1z1qvVewbX3FC+D1iZdnPjdlgveZBN+bN60W1H6Rm/b5K62zRuijL4R+Ftwyf+432Yvr73niD4+eXR2b/4ZYf+PMB0tPb5l5MPU79uXHpmxIjl46d+6ROYu2XorPfnyEP31h14fSi98aGfxk9KgWG9Lf3HVq9K3d16+NoDc3LkEPV6/NMw+9cvxS/cruRLz64qmDs+772uo/X/OtGxyUbtR+8by+ePaRuTNf2LOsTbMP7mR/cKai4erHdPSSUn/fj/7JP7joe8LRxbtzi7fM+uqha3TPbCl9z7BRP0X+yY4Zj0tvTdS2Xrv3+29PPvj7Y9rzD008GUstCV31vT905aONX5r6/Nwj7105tfLTP/3wnac2D7z7gNVNzn9+9dySocLm6y+/2v8Lc4ryzAN//8tLO6etPzDSFZn9WOHd6ZfXT/vC/matIVwzapzddmbyI1NjYowbMXde3bq06jcDM89Mp/cPt/14e/zna9LnL38qV/2NRk9O3XHuw3033mne+pUVbE9sTvfg7meHAo2f/fal/8zL77XOz3940cj0Cfe/PXqhuZDa+o/XK7eTtX9NPPh+5UHH2Hzio8/+GNjx7dDC7SfvDV35179v2jOqv7z3ePM3+oY69u47xp2dvOuD6m27o03r0hfmtW3sOR7b/sbX3+taMf2pwsffXd744havipMmZDKfPGdDSf8L0feGaA==
eNp1VmtsFNcVhtI2gTwK+QFtScJ4FYk09qx3d2bXu3ZNY6+NMcSssY0fNGR1d+bOztjzYh7rXYytxoGKKKAwAvGjLUooxotcYl5OGmJsQlQgNKAWqjY1SYjUqs3DahBN0ypJBT13do13BR1pd+7ce+73ffecc8+9g9kUNkxJU+cellQLG4iz4MN0BrMG3mRj09o6rGBL1Pih5lhr2wHbkKaeEC1LNyvLy5EueTUdq0jycppSnvKXcyKyyqGty9iFGUpofObqNxf0eRRsmiiJTU8l9eM+D6cBl2rBh+dplco/9WmL0FOWiCkem5KBeUpSBc1QEAGjBENT3EFBk2WtV1KTlI5cVO8syGwrpsoZChdg6gaINSwJm5QC5ADpErhjy6MyQEmCxLlcyynBVl1P3BW6OUdbWaDdtLQMpdjAqGrdiNcoWCKV0swSqg3DO0MhikcGZdnAbWBO4rWS2dmeMspjaDIm/rBNbHj6N0KPovFYJl1J3aJZjVYkVXItsamDb3E85xqw6PNYGd2d3W1qatzkRKwgYlr4ScwKmrPeyI0Qjyi5mPR5sGorJFIeEel6hiCp2AZPyqRpIt5D9FmSlZPcensu6c0rMS0DIuTphy6UTIJmU0phFV4uAcSXMySdeJgY5z4TEBlR66Vm7d3YmBayMIGnJLPM7RGlpIgNtwlCE/mmohm4YC4RM7MOfxkVKKOYMooto4JF2muKtRUsgGyHJIkF9MlITdoQ8WLnmDpSJVPMESXlfFMwsMq5LZitIJW0JAvJEjSLqJ+aAb2L1/rdOG+yySbIcRX6GN0h+rbAIobipC7k0RLdmMth8bxERpHcXJgRApJNTFSoSPk/WEQrR/LFMmzsKoYejJSZ2YQOKwCJLNsgGD6vrz8rYsRDxbk2Z9GQqJmWM1pcRY4gjsOQ7+BDjQdXOK8kN0t6GRQEQYY0GIFtpWJ3ZzojPRjrNDg2hYdzs5yjkK5yXmI5Sf7D+UpDk4XfOTxCdhsNflMtZywGImoay5szUO5Uyu8NRrzBo2ka0k9SZfAzTZzsDOvu+HjhgI64HgCh86XUGc5NHi200UznYBPiYq1FkMjgROcgMpQQe6Kw37DdcDvZaPOddPnBWTrG6/d7K44VAZsZlXMOuoE4WTggYhlCQueKujOewBbykrLtLSjbXh0ZJv51ESe2jAzNaUDt7PcNc5rWI2Fn6p/xOCfEE0p1rFVp6F7dE43jyBqzt9deH/OHO+01ze0NyVRylRc1d66TGrHC1PU00v4KJsQGWYYJ0H6vz+v3+uneYCebkNtbOjolpbch7mOVDUootT7ULjY1xbrXZ+zOdLRJlaPpdm9MZFR/KFHXYcTi8Ujzhtae4Iaeuvr2Fn+yXaxYneRqWwU+UrOSqVhXRYE6OyXx1ZssuSWthFfXWPVWikkIWiCe0teip+pTTTFfVyKGhDYuanVJTLqmQF4gxNC+vMKQjw37yDM6k1IybHpLdIb8PoY9NFOUnxsm5co2B4cgf/HFt7P5g++XsTWzqb94qA5y2ZloE+0yyhehViOVCvgCQfirDAQrGYZqaGo7HM3ztN01dY+1GUg1BQhl/cxWyXKirfZgfiR6100yQTYJhJLoh9OGxmldMzGdV+Uc7qRbckc+3Vh3Ircjac1IQo3b7NI6E+5u6d2c7uU5m+fFVK/ii2xmGSjcNieM5afAwUJoQBCtmOAdJuAfzQ/NJOwILNZH+320z/9Gmob6gGVJkcCj7n/+4gFzg+Du1+80sLQeDFeULOvGwzdZaGHA+QbnJJDPwrCRSOTU3Y1moBgwifgjbxRbmbhQjT+gmK/faZCHOOBXzMPpGXNa4p2px+AjHk6EwgEmEuaCHMuGgkIwyGJIJRxhMR8KMuhkrozSFgmnrhkWbWIOrllWxpkqU1CaVKdqxh9kQrDUKrivcLLN41Y7UaeRRZhVcK/Bsob4I9GVdBTBAU+3uhnoZOu61tY0NUZf66QLU4mO6bkrXlbVTFUShOFWbEBknBFO1mweyqyBhwGrpabLGQsLQkUgzIQFJCQivmAFXQsFbAbtduINkRqdRTJoT3HOCZGp9lSyLOOpohRUHQ5BnNyL4LPDuePt7NxfLXvh3jnuMw9+t27taDnT/b5v4cTXpb/5xT+OT9snln5YO//ZrVuvTSfGrvy5vXRNLeN94J2bfccmuqrYbc9Yae3aXye+t++hkjfFx791aO3xT5+79NnDp3dNv/bi+f1Tpwcy4R9+cHbt9c+++MPn73Ssatj2o5tPntVGYr/9zxnP0uvChvo9Ow5Mp06P/2DFkvGNKxZNvCs2dC0Nl27509Lk8S27l0wJLynfWWJ/tLN2sGXLmq0L/8JOcGuzNyr2JHZPllQ/dGHx/lMPzp97+b1Fcxv57Qe5VUtaMuc/Hv44u+DeR37+6CMdze1PtnTsaqfeC3mcL7c9+PwTj78w9o3SjVVvL/7oArVv/+jg51erEsyuh78+eebqfQOTP3118oHLyz55K37j3I5n7LrGazvPbO//yfPVy2u3vLQ3cGp6fslXK/RLN5dX/uzfT18cOPp+4uyj3sf69qa2ffj3T+4ZeHXl3nNXul95eejT5vIrf/u28ftzR7bzf/zqOrss87tLO3tunR8o2fPWm4emF65b+sXkjcDiTf8dufX9cTWz4P5/rd7R/8F3303cf/nisoPz7d2l+5ap6n3zL0S+nEciMm+OvFu+GoL2/wCPq5n8

View File

@@ -1 +1 @@
eNqdVmtsFNcVBuG2aVXRKiEEJSgMSyEqMOuZfXh3bW0DXtvY8WMdr8E2hGzuztzdHXtm7nhm1vswprzcVpAEBhHUCBIabO9Sy+ERXCCkJFIrCGnaPKCJaqKkTSgllZO0aqRSmirumdldWBf6JyPtzp17z/3O4zvn3Lsl14dVTSDyzDFB1rGKOB0+NGNLTsW9Cazp27IS1uOEH24NhtqHEqowsTSu64pWWV6OFMFOFCwjwc4RqbyPLefiSC+HsSJiC2Y4Qvj0pVmv9NskrGkohjVbJbWu38YR0CXr8GF7RKbgqU3ppmpKj2OKx5qgYp4S5ChRJWQCUVGVSNZilIgiSQpyjFKQhWjPA+T/g7KYpnAJlqKCgaouYI2SQCFAWcDW2gMBESCEqMBZOh6gognZ8n4aZGteTWXBTk0naQDgVCxERGxiYqrgDQHLqTiKgD5rjnACTxZSARUTCkJJaVjFEgFDEmkqksAyDJEkxIi2MI9tW07ZVCJiMyoJELYNrIcZifBYNKdiik67CC0JsmBKyjDHwltBKhJFLIZ1QsQwB2MzxlEkahhWNV3FSCqZ0LEEAUF6QjX1MHbGnLN2xonAmXP9Nj2tWEYU42GquzE2BWQkWQLTA2gbGCiAFVj+qjggBjnAqYJSkLTZCo5CKkK2WntvMmt9aia7Uj6n+v93e3vcjH5BgCJRi34d8sTELVoJoYK0spk+oFhMhXwV+rAMr9sh1pMkdVPqBh4laBRkK6I0IALns5alIDNYplSVWWkxk2CYE5EcS0CC/T+7i+vTdCRVQYeUg0S8nQcmrFm9ZhWZTJQE51bnSk1YXwJGIt2Y0wFsYP1ALo4RD4HfORwnmm4cnl74RxDHYUhOLHOEBwOM52MZQVkO1RAVkY5HoRRkbJFujPZgrNBIBOXZ/C7jKFIUscB8ebdG5LFCOdGmJbcuj5qlQYO1sm6MB8GIlQ3lrWnoUDLF2l1eO3M0RWs6EmQRvKNN14ysYq2/VLqgIK4HQOhC9zOy+c2HS2WIZow0Iy4YmgaJVC5ujCBVqnAdL51XE1aQjVyg9VZ1hcWb6px2lrV7jk0D1tIyZ4xYlXpy2masq2maI4BhPMccLsZHxHJMjxtDPrfvEDCqQL/FW7OwTU9oW4aBC/zb87lC3z0YbCyS+MGMucM1wItxpk4VllMODxXCCuVgHC6KrahknJVOllrV3D4WKKhpvy0Nx9pVJGtRoKK2SHuOiyfkHsyPBm5L+BmTcPDGNB/aHI1TCtEwXbDKGOuk2/InDt1QczyfXTRRY0gWMpZa44zFfDKTSvJcgufjfUmJ8WVcTiGCE1x0vLAF+oKpBgyiJc0Ycvp8hwsrxdiPgq8MzTI0w75olgwHqWY6oxBVpzXMwRmnp42J5RJKmXnmd7JuZwXDMFVm3xcTPA4lIjVEAna0KjhgsEgQfzpFQ1PFoiAJQIz1Xzg/NWPYDZtP3Sqgkx4MJ23OxVjPy6UScFIAvunETRiXz+f75e2FilDgrc/Hek9Pl9JwqTWsQ9JO3SpQgBhiJW0sVRSnBd6Y+B58hDnsZDiei7gQirijnggb9TJet9uFMONgEeM+EqijA4iLYzpkJaCRq+lqWdncEBgNAXiAkB4B7740c1Y4zEXDEckflPu8PVFHsldo0F2aXVe7EO7trpbbddTTla7Qe5uYdG+8s8UZjdGsx+FxuZ0My9KsnbGzdpZ2Mt32gCrXhtrsdU7PmlbNE9JiMmlw1MYavc61Ld6umlAog5nqtjVtAZdXbmnXHcmm2GpvOrk2LsZiyU6mMZnysRXhNdWrKho7fa1tLTUPA51Ij/vLq+AOoUD31PyFEqGhROh8gTiLBVJF8VYS+O3T22EVVQ/3I/MmUgWVBdmE4Q0nV0jQsb8FLiATeyAGiT6B98uxjpq6IOMLPoRXtTbX29ujQY/W3lZX0xRsCsR40uRyBeuaPGo40VASBIfHSTOFOFQwLq+VPTdN/4pWneikSyueDir5i2AO7imyEI1mQ1iFCjJGOZEkeOjsKs4C520ru4xxL+fjIr4KVyRSwXqj3gq6GnpmEe1Gfxg2jwXrRrg5mz+qzs68umDHHTOsZxb8pqb0trflfcycgavLxr6Yv6S+/uNUT9XP13qEi6ebPz237duP/GQdt+6EsLfu+rXXlnxtbWLj0bH+DW/6fvDyjzd99zE3XzZK/2J1LpXa+PQH4cUv7Qx/cm3BgSMPfvKHifcfndr+eqbj0altg5O7rvjX/PPFyfWLq86+7fj+109dqKcr3/Wuz76Wufus8Y3KrkVJ6fkXkO3XT35zrn0I31knzDvwzOmTu1fHqUWvzKq+w/ev96/PcX85+HjKHv7rW/Mu2fjLi6tXNH62dMUT6FwbQuzmp+bMHTxuz7wR2/TU7nvuvrLhzvvfRX3P/urg/ns6wvveI1dPZgY+H1/2Tm7HuHj/x2dV98X7Dl37zX3dmYefWPi7vx979sydi8rKPt3wt8XJN4XO2j8+RpUdkr+YPZj47MNdW1eip5f9edeVnw2jvf7I7B3z4+91u37kP7iCU89/VDt7//YTOfrfk57t34qe8M3/cPANNjX5n9jeg+eynj0HPvK/s/PyD3snltKNny/dGP/pxWWutsvcDG7k3vFnnuzY9OXZjq0XXgidOPqPmYOLHu+Y3PydbO/+5/bdFR8Z2bAm/OrU5NLu8T3X0/NWKcr5miq2Zqh5qoXp544tu3fJ2t8/5HxQfOuQzf46fxe58Bd/2at/WmCxOWtGWfUCNQPU/hewh7rA
eNqdVn1wFOUZB9MRph1aO3VqrdpubmhlWvaye9+Xm2sJFyAJJnfJJTGJlcx7u+/eLrdf2Y/7SAijoFZFoFvb6rRjnULI1Rg+EjKKCFY70GqtiogDgQL9Q7Hj6FA6UztVhD67d0fuhvhPb+budt/3eX+/5/09H++7qZjFmi4o8sJJQTawhhgDXnRrU1HDQybWjQfGJWzwCjuWiCe7d5qaMPsD3jBUvbGhAamCW1GxjAQ3o0gNWbqB4ZHRAM+qiB2YsZTCFk7XXRxxSVjXURrrrkbinhEXowCXbMCL6ycyUf6syhs2PWHwmGCxLmiYJQSZUzQJ2WAEpymSM8kpoqjkBDlNqMhBdc+BzD3FZbFA4CpMVQNnNUPAOiEBOUA6BM7cnTERoAROYByuOwnOlB0l5oVOlGgbq3zXDaUAYIyGhZSIbXxMlHepwG4IHqWA2xlTGIFV6omYhhUCJCZ0rGFJAafMApEysQyPSBLSil4/h+9aTrg0RcS2YiYscI3eCyOSwmLRHkqrBulTSEmQBccS6yqojwdL4oHFiMsoqM7q9boiD+oMjyVkm1a/2mZVj3N6lWZszaRS1EZcECBGE1RbIhu1m7f3UTYgFM4R1QD1bQ5DMEqeJ69B2KNlh3RDg1C6RmEIpdPgui5ksQx/8/G0KDlizuoaCyHoBGQIInQGibiUKTQBytNUjQNNtQRVXtjJn7Z1hTERyWkT4vtFG63M19DnNMGAaEMO1DDeVcGaZ8ejTqiGTDvT7bqokvh6Mar9ureaoTZzq3mU1HrMlLBYVrBnkZioDiqHRB3bXshI+gIs21fGDrmhmdjxGEYwkiqrbTosASQyTM3GoNzUaJHHiIW2cm7BTWO8ohvWntpWsRcxDIaUxTKjsCCFtTs9LKjLoU44ERl4AopExk75WRMZjFUSiSDDeGmVtQ+pqlh2scHO38lyoZH2xq+fnrALhgTdZMOaiYMTTa0NiQL0NJmg3f6w278vT+oGEmQRdCZtka1x1Zl/sXpCRUwGQMhyv7TGS4v3VNsourWrHTHxZA0k0hje2oU0KeDbXz2umU64rWIscT1deXKOzuumaXdwqgZYL8iMtcsJxAvVEzwWISRkqXNbL6awgdx2b3ZX9Wa3ijQdP1/DiQ2tQDIKUFu/o/ZUZBWxnDZ4a4ymqODvK71l8zisM0x90xjEEP/11WK5w++Ir50L/y1jzRBP63A3by4nqDDRhmTCQ3n88NPo8TfSIWJNe/dkrMzTPW/4pro1JOscbGdVJV2KDG/KGcxOxOZNlMN2osB2bP+haZI4ryo6JsteWZN9ZFfpbCNbm/eXspJUtDSShWGH1jrsZExuOJ9jGZNl+WxOosLDPq+QwibDzZSXQH+0acAhUtKtnaEQtac8U4nZBOyVImmKpOiDeRJKBIuCJICgzm/5gNWtMT9FUQeuNzCUDIajuOijnM9L1RZwZEDUbO45GF84HD40v1EFygsmYbrWG4gorvaG9kj6gesNyhA7aUmfzFfMSYG1ZpfCy2AgGPCEqEDIF+L8yBfwhjxcALE+RAe9HA6wnhdKnYQ07GiqimaQOmbgOmEUrNnlEsrbBRr10n5vALYasY9S0WRx0kw1K/Ym9Aic31hUELs3tpqMITimyKSTgFaxub+jqb01NpEEL2OKkhHwz08vrBscZLjBlBT1Kqgr07M6lDTWNnm6Bn1yrleIrR0YGuR7epjO9tz6cGdHT3tXu9DcT4KzAZ/f56VDJO2m3LSbJjt6BwaHeJ5K5Lp7PHetlfq6e8KMN5wY6PQPx1vuTrqTRoGPSe2r+k2qfSAT4BmuJdHmyw8pZm+8P9WL+tVESkr0djbnWt1US5BrW52k0rAbZPDRhgjcUFTo/3q0XCIklAhZKhB/pUAiBOtoEHXXttEI0QI3MfueEyGStpgY/qGTJwUDRzvgejP7C9DAzApstDfX0pEvZKh4dogLr+ljEB2W+gbCOQ/2YTmwqoPuCNK+gqy63UyVCEFPiKTKOgQoX8hJwznX/0+vnusjqyuejKulK2cRbj6ywHHjSaxBBVkTjKiYLJwIGh6HmHc19VszIY4LeoIsixDnCVMBD7kSem0F7Vp/GLOPkyISIceyjLWf90ZdjT6f1xUhJBQNBaCenIvp/eOlk/jowunvblm8wPnUwffq1ce6Xtn2DnXzoQ9+OJnrSDzeHvxe8IHQ/Sv2Lfrm4kcPvbl2vSq+fnHXDT2f596bOcMedQ1dQhfeyF+Y/XPz4mN3nLjhxLrpbx9+bV3/xu2LPgts2JA8e/7wZzt+vOGlqwOjE6/+e+9Iy84/Ri+vOLp94pa/bD1HZNJvzXx52a07M+vO/u1XZ/71/fNvHPlHo68zkH+z7Ymtpz+cPHUzfu6Z6Q25zotP9bYktiiP3LTy9g+8F2buuLJpQDy+hm12hYkTJ+vridA3vv6z5l/3DTxk1P9y/2PZbRvF+45Nf+dtrc6sO2dxoy/fePzG96gvLT2+aKblxIqniku+Jfy09auh7ecLgv/j20K7Gxafii9lWiO36q/ETx7505IfHVh08J93Hzr59rNLlr02++S2U4UnZ5OuI/8peN95fPrvm2+LPv/Rxh1bH/pvduSTze839Ypm/eyUL3Oo5elLmQj7zJLd2a+0TUSzW4zC4rB5evfD7LufXvS9VXj6N5nI5e2fr1z24G+nMqMvdzxb3PrE1H2X/nAl89HB98PUMSV1+5l1ZxZu/tqnl+8ZPv/Jgx8+evbj17tWam1XnMDULXg4OvBuBKL0PwOa1t8=

View File

@@ -1 +1 @@
eNqdVmtw1NYVduq20CaN3ZmkgZJO5R2aNNRaS7val4Ek9hqD48c63sWxKZ6tVrralS3pytLdlx+T4pA0CaGOmOlrAqHGxpv4CeVh6uBCS9tA2yFhMh1qPKWTmdKUhklLCEmnzJReadd4Xbt/qh963HPOd8655zvnqi+TAJouQuWuMVFBQGM5hD90oy+jgc440NHOYRmgGOSHGgPB0GBcE2fXxRBS9fKyMlYV7VAFCivaOSiXJegyLsaiMvyuSsCCGYpAPn2pcF+3TQa6zkaBbisnvtlt4yD2pSD8YduuEPjalEKmawLFAMEDXdQAT4iKADWZNYEIQYOyJRSgJMGkqEQJlbUQ7VmA7D2gSGkC5GGpGg5QQyLQCRk7xFAWsCV72C9hCFEQOcvHw4QQV6zsF0E2Zt2U5+LUEUwTchx7UWA7y0MCp0IkoF5ChAB+pgmW4FmNQHHsTwOcyMOSrKWtlLBpUAJmznEdaLbeNrwiQx5I5lJURSQDSVlURFNTwWs0fqqsxkoSkMIIQinM4XdzBwVW0gGW6kgDrJy3gICM02VRXDP9UHbKXLMsY1DkzLVuG0qrVhDz2Zru7rybCgorWwqLt8fW25sDy9Xw/8XBarjCnCaqOU2bLZcoJhrmomW7UDfrUzdrJ2cZ0/3f5iFcyzsKBBSs4iLMAhN3Pkq8VZg0NjMHNhrVMBvFBFDwYznELTBJLGjdwSNEncDVZgkdFwJkOUkTCBI0le/K7KOoWWC8JrFKNI7p87/inpcv8pHURITbA/N0uQxMWLM3zR4xK5G3OUuTyw+hLQ8MRtoBhzBYb1tvJgZYHm/85YLioRjUkTGxuLEnWY4DmJ5A4SCPQzDGo12iWor7VJBYBEZwByjAKrsx0gGASrISdj+ctTIOsaoq5Wpf1q5DZSzX/KQZy1LxiNkcJI5XQcbRAA6ioqasMY0nkELQdsZrpw6lSB2xoiLh/EgzOWNYteRv5AtUluvAIGRuuhnDWeOJfB2oGwfrWS4QXATJalzMOMhqsps5kr+uxa1tNjL+xqXucsIFd047Tds9hxcB62mFMw5avTq1yBggLU1yEGMYA9QwB2GHCIzZD8NhTghH5I0BJeHtEBzJTrEGMbodaa0s6GyvVEKI7WhNu1FnHZXujLU0OIUoSXscHsblpGiapO2UnbbTpJNqt/s1ZVOwyV7t9DQ36p6gHlVgjWNTtNbr3Nbgba0KBrsAVdnU3ORnvEpDCDmSddGt3nRyW0yKRpMtVG0y5aPd4ebKze7aFl9jU0PVk+sJHF08IfIblehTVdUByhd4AmxurN9iDwkBjx5qqq6qC9T5ozysY5hAdZ1HC8dr8sJzeJwklYvQTTFeyrwm5rkhASWKYsagz+F6DfNZxWcJeGYYbxmK631DmIfgd2czuTPlQKB2gcL3D1VhThoz1ZpYSjg8RBCohINyMATtLqec5U4Hsbk+NObPuQktS8HDIY1VdAHTcNM85TNcLK50AH7EvyzZZ0yy40qa4eMhT4KUCnVA5qIyxlrIpuxpStZUHcl2Fgm1KKuIXZZbY8ZifbIrleS5OM/HEkmZ8nUxTjEC4pxwNGeCp6LpBgdEyroxyDC+iZxknncjOFeKpCmSon9qDgwOt5mZjAo1ROqAw+c3ShuzpTKbMntso5N2Od1449fjYcNJcR4E45EqKGNm6uvx4QkkyPLTKRIfKUASZREXxrrn/g10Y8iFjU8sVUCwA+C/iAxjlZX6Wb6GBkx8M4kFGMbn851cXmkeyolVMB2mF2vpID8a2iHrJ5Yq5CAGaVkfS82rkyJvzK7FH2HMSF/EzTgFysO7XYCKRLxCxEN5GCBwNOvxTPqrST/LxQAZtAhoZKpaGyrqa/zHW8h8JpEBNfvzlFGgroiCMBwEGq6MMcJJMM7jaamBYYzVVNFqHPVyPi7ic/toH8t4Ba+brMRzaB7tDu+GzFFr/UXtGM4eAL+6a+6ru1YWWFchamrbNUcV97w1WXzrwQfmzl05Ay9u7z99/7bY3TumV7W2rv/N1TOxK4Mlfz71wKzvzeT2m8Qvv5sQmEs3dtIvd59d9XzhJ5WeddN/Ku2fa/vR0JvNjXuPP/al/jdu/mAFeHWqrJ/6wleObzjwj8m2/Tt/v7uybPwq++7L3feMnXO0nf+70LkHfft4d2hgXOlhpozvV1eLe88PplZ/bbTjk58/sePTq29tZF86rE+t2C/0P13CXf/o2AHj+YsrZ9UXaLX52c+rtppX1xS9s+brY9djffF31rk+dfLYSPH2119RT50fvEGWToSu1/721tafzI2feuzCxx+1wmtTmf3hP57ZMpl51LVvYMQVWSnuLLp5bd8q7lsnPujyrNuQ2dP39OnTq9Snpr68u+iDd/cWDlx33Ked1R7n+nug/sO/nH7/UonRU/FMlY8fFe998ZXpK66Ppx75zt0nnzxR92j57dVX3jvU/sJox6GL/wL/fAS8/+vZPZc/U/Q9Zs+9z/ob2goev+yeSbx99b4f994497Z3dE04deulh4ofOtL61oqpigcHRr/x1/H32kOnav99ceu1C/ps/S+onhm/+Lc1/g1/uHb5c8jWP/3a0OsXXA0J7rNbg2uBL7x2Fzj23DnwYVFBwe3bhQVfvGf3i6iwoOA/rld3HQ==
eNp1Vg1sFMcVJjVqEgJKUimkKUVdX12hIu959/7PxqrMHeCf2Gf7zq7tgpzx7uzt+nZ31ruzPp+RGxXSBpUEslLTNK0SCrbvKtd1wFCSGONKSSoFCUdCEIqLSERLUAONKJSoSCGhs3tnfJadle5n5735vjffe/NmduX6oW5ISH1gXFIx1AGHyYth7crpsM+EBn4uq0AsIn6kORZPDJu6NLdRxFgzKisqgCa5kQZVILk5pFT0sxWcCHAF+a/J0IEZ6UF85u8lsztdCjQMkISGq5L6yU4XhwiXismLa7tKFZ4tA9imp7AIKR4akg55SlIFpCvABqMEHSmOUUCyjNKSmqQ04KC6F0AW/sVUOUPBIkxNJ8HqWIIGpRByAukQOLYNEZlASYLEOVwbKMFUHSWWhW7O01YWxW5glKEUkzCqqBfwiCJLpPqRUUolIPnNUIDigU5hk3DrkJN4VLow21VOuXQkQ1sP04C6a2gHGVEQD2V7KKlh2odoRVIlxxMaGtEWduelIR47XTijObN7DaR2G5wIFWC7Fr/abkV/F9TIW2xFlHxOdrqI/JwuabYANmqCKHTfgUKCIxkm2tocWML5yOP3IezRQkAG1kmiXENkCCSTJHRD6ocq+VmOpxalqQWv+yyUZFBETUAZHJBhvg5YCiOKZRYFULOYoCgKu7STtq5kTAZq0iTZ+7qFztsX0ad1CZOKJeWyiPHpeaxlVjzkpKrPtOvYrvoiiZeKURzXjmKGxXVZzIN6eiGXx+J5ybYCubk4qQKQDWhHoQLla7DsWDk75Vg3oRMxGYFAmZ9t00GFQAJs6jYG42aGciIEPGkaH614fEREBrYmFjeCNwDHQVKyUOUQT6Sw/pQclLRysqcFGWA4RnaGCp3NZY2lINRoIBMZsvlZ1mGgaXIhxAq7fscLzYK2F77UPGZvGJropmLrWIwEUVNX0ZwhHUulWLc/7PYfHqANDCRVJjrTtshWVnPsJ4oNGuBSBIQudEMrm588UeyDDGu0EXCx+CJIoHOiNQp0JeA7Wjyum066rVykeSldwbhA53WzrDt4ZBGwkVE5a9RJxNvFBhHKJCV0vi9bJ3ogBm6787qLOq9bA7oB31zECbGeoTlEqK2DTJZDKCVBa+5WdzcndPco1V4EWlNtW0Nx3FDjae32qel2KdLQ1dcttrVxLY3p3nBLU1tja6MU7aTZoDfg8/u8bIhm3YybdbN0U3tXd58oMs3pRJvn6QalI9EW5rzh5q4W/2Cs9sdxdxxnxIjSuKXTZBq7UgGRE2qb630Dfchsj3X2tINOrblHaW5viabr3ExtUKjfGmeSVRSJzuyX+Or2dG3TQCbFxPr7hPC2Dg6wYaWjK5z2QB9UA1ua2KYg68uomtvNFYUX9IRophBhgPGFGPuZmC8pGapJLFrD4aD3D/NtdXeWSIZNY9cIKV94+v1c4eg6FGtYqPy1I1FSytbJhGiWU0yYqgcq5WE8fvJV6fFXsmFqW2NiPFKgSSxbuUcSOlANgWRyy/xOyXGiqaYgPxZZdo+ctPcIyaQdPjkvaDigIQPShais8Q66NX9o03XRo/kNSSM9CVRp0KG1TjqbJT04kOY5k+fF/rTChAd9XqkHmpxwrDCFHA02DQmIVgxr2M8GJgqW+XIdI2tlaJahGXZqgCbdAcqSIhE9ne/CzcGwRvxE7LeWOmCUguSOkfM52WBmij10ckCRg45wL8D4wuHw9PJO81Be4hJmQ1OLvQxYHA3rUYy3ljoUIIZZxRgfmHenJd6aKyMv3X4BeH18jz9A+pfg4ULQz/eEfTAQgH5fOMQxb+ebKI3tbGpIx7QBOXJPwhlrrlwBA3Zvqvayfm+ALLWKnCCcbPIwbvZEkb0Io4pcTKCMAP9GZCsdAeSEpuNOAVq5aGdTTWNd5HgHXVxJdEzL39FyKjJUSRCycaiTzFhjnIxMnjRZHWYJVmtNp3UsJAhBTxD6AgLPhpmAh95M2tc82v26G7E7dA7IJPZ+zjoqeqtdlT6f11VFKaA6FCB5cm5yP8vmD7e/PnD4e3sfWuE8JeRz794LrTXqReax6etPvPf66DuTL3ecof/2zVM/OLmyrZQ590uhwVqdftnnv/u/zZPcD8VVV9559/QHzwivfBF9iKET3zjXNHlt9+zE+o8vRzJffH71k8s/Vw8c+Mfv76y7dOmV2LMqtf6Jr25u2PjT+FDLzKNr5wbSI23RU6GpqxdnYV+LfEt3DX58aD2s1NrvjAc/4K9/OPV++8Xyiti114KrHt54s/rxzXu+vL0vvGPvXw5+64+313xq1fY+d21T2coo2rayzCr99b7S1+o+WT/8u0M3UhSWB/H2SNtuXDlz/qXz1J/31fKvnyltWdP26vlnRs/uPhea/ejBT1d/1hsqv/T9/pJTRw6uG6++/Lyx/bq55yur6e5TN07/68Por65c+vaLFx78zYWI6739Ev3uiexjh26UBP/97MF1a/67d/bMo1dT5RduRSeT0pN3ArEf/SfVxUvl9NpH6sfWHz+KZ3Z9t+ufT0YnxS/Rb/dP75998eyBIwceHmWY4417Lnzn85mbm9b23R2/t64KZVatvl3/wtDZp3LsL8R93aOPNL7Z/tnU9PTGsrLnr1Q4aSlZ8WqFvilEcvR/822J/w==

View File

@@ -1 +1 @@
eNqdVmtwE9cVNnXJBOiDtilD2smwVtqQZLzyriTrYdfJGNkKfspYBoyBMVe7V9rFu3uXfdiWXZMG6DDTQpoNMGHaIdTYloLiYAKURxob8mpJG5oOTUN5NDPptKEtLcy0k04JBHp2JYFc+JX9Id2999zvfPec75y7GzK9WNNFoswYFxUDa4gz4EW3NmQ0vM7EurEpLWNDIPxoWzTWMWJq4tlHBcNQ9aqKCqSKbqJiBYlujsgVvWwFJyCjAsaqhB2Y0TjhU+dKrw+6ZKzrKIl1VxW1ctDFEfClGPDiWqVQ8NT3G7ZryhAwxWNd1DBPiUqCaDKygaiERmRnMUEkifSJSpJSkYPozgHkfqOKlKJwEZaqAUHNELFOyeAQoBxgZ21hWAIIMSFyjo+FVMJUnNNPg2zLuanK89QNkgIATsNiXMI2JqbypyHAnBJQHPw5c4QTeVJGhTVMKAglpWMNywSImCkqbmIFhkgWk0Qvy2G7yimXRiRsR8UEY9fQapiRCY8leyqpGrSP0LKoiLalAnMs/KtIQ5KEpW6DEKmbg7Ed4wSSdAyruqFhJBdNGFiGgCDD1Gw/jJux55ydAhE5e27QZaRUh0QhHra7W2PbQEGyYzA9gK6hoTxYPsufFQfMQAOcJqp5S5crf1CQIqjV2Xs7s86rbmdXzmlq0IUV0z70SpeAVDXlhAuboArJHuqId9mRLdCDGIGeXDZ5lExqIFSxFyvw50D9H5Pcaxz0JJA+6ra9oyjdQIajCUrUy50ZQUwKWHOGQCmeH8pEw0V7bVIFxpBSTznlLad85VRlMUu7OpO2KGBOQkrSBFFOP6uuIkXUhRxaUsoPExpWOGcEu2XkRFc0kCTC8G5RsPHt0rdLMAd7K7J3BqiYSzEYia/FnAFgQ6uHMgJGPGTtR6MC0Q1r3/SuMYE4DoOygSThgYD1UnJAVMuhlBISBDMLdaRgRzFWtgdjlQbmvTid22Xth/RKedlUrNWJMp6vRdpmcudy1q4rGtgqhnUoCiRqGyraUtDeFIp1+4JuZn8/DUkUFQlOR9tHs9Kqs/7z4gUVcT0AQudbp5XObd5XbEN0a6wFcdHYNEikcYI1hjTZ7ztYPK+ZTpCtTLjtTnf5xdvuvG6WdQdengaspxTOGnPK/Mi0zdjQUjRHAMMaZvYV4iOBRAzBGmUZD/MCpFSFbo03pm0Fm/qGUUgGfudkJt+190SbCln8oGTeaB0kxpqMaCJoNUDFsEp5GI+PYv1VjLfK66GeaOkYD+f9dNw1Dy93aEjRE5CL+kLeM5xgKj2Yz4bvmvFJO+NwHJs/NEka96tEx3SelTXeSbfn7iu6oe5gTl400ZJQEQOOW2vSSX3fQH8fz5k8L/T2yUxowOeFWja5xKH8FugqthsgRMu6NeLzhfblVwrBz8JZGZplaIY9ZtcMB1qzD6MSzaB1zMENaaSss+Uy6reFVuNlK71+hmGq7VtDMnkcM+N1RIb06NVwPWGJIP6VfhpaMpZEWYTMOL/521e3Rith89E7DQzSg+GezvgY55kqtoB7BvDtQ9yG8YVCoVfvblSA8oJJiA2+Mt1Kx8VsWI+sH73TIA8xwsr6eH/BnBZ56+y34KUbxRFmWBxKsIEgQkyI82Efw3E86w/G/SwOToQjdBhxAqZjjgCtTN2K1tqWhnA2BuBhQnpE/Oy5GaXd3VyiOy7XiG7Ux3kjamNPq7+1KxmOeMV+vXZRc2OAdCMp1bKiVqrvaYm3Jbs4mg14Ar5KL8N6aNbNuFk3S3ct5hJrcaw2ulQUgn3tOjJF079O8nUo7t7lvqhnqU9Ysra1k0ER1R/he9RwZ0utQeJNOqnrbTUa4mE+EY0FmyJMvSh2VAb6E+FIfEktpBMZQk1FNXyBqNA+9Zp8idBQInSuQLyFAqmmeEcENe7p/bCaWgxfV/Z3TDVUFqgJwz/cezHRwDWt8PlydjvEwOwV+ZrKJe31ZiguxXujvg6mzi+kGtuToc6OZbXrOCWcGIingi19YrNnqRosCkLAH6CZfBz8jC/oqOc29c/I6nAnXVzxdFTNfUZm4CtHEROJdAxrUEFWlpOIyUNr13Aact5eu8I6FORCXDzk5xGfSPh4JkgvgqZZQLvVH0bte8H5nnwqnbur3prx4YIf3lviPKVG+2ryE+a+oXcn9l/Wd37vobI/vfT39x+NpfeenjrUPvxNrax1yz/R774fuXplW9nMLnP9/vHB774bemwqM4s7cWSu92SvcLB9zpVPHjo3seej9UND15Kv7V7+8Y1rV85/tP7TTb8UZsbODPsHf/WPpjObm3d0PfOXxl512Ve6tPHa+yIHr+5YsIluy9KzziwTLp3avlx7e/K3X04+vyD12r9OT8a+82ApO7iT+cUf9v5xV/TY2INvvrN7qqPRc8+cDUePvzm/o/Ho7J+yFzq3z0uX7nxEbKi6dGBe/POdY3U/Hp67UT3SOvLvufNmW5cfW7D++qcHDh9/evfUA5cvP3/8RvnjysUPf794YkQxeh6ewO9v9r1+beB6ZAv7TOfhycDrNQsv3j/1xon5a5a+9+2GSxfeeGRG+LS2fDHVumZzV2BW89dO7Hnh5Pwbdc9uaaC+lJ517/b6rTN/1rc18vYp91PeV78ain5Q/fjTrvGHJ774g6bDz31y7Gb58OzVe7emttX8+cLfxr4+LHpLuB2XD+1qfbHk4xefvGf13vbz72VLG56YX554YMEXDgYXjcwRdtSz2YqNN82xlQdOnkd/vTihn8Qs89/JU//ZteYb3N5m3T0na7x1vmHjqsxVbapsVd2v72/+zbYnIZc3b5aWLAx+7rm+0pKS/wE1Cc+Y
eNqdVmtsFNcVhlD6iNpiqpCqpTTDBiko2VnP7OzTloPsNcGGmDVe29gkdLk7c3dn8LyYx3oXChEQVEJah6GoUhKF4mDvpo7L0wmEQCqqEFAVXmoaRJImP4jUplWhaUlomjT03NldvCvon460O3fuPfec737ndTcVs9gwJU2dOi6pFjYQb8GH6WwqGniNjU3r8YKCLVETRjrjie49tiFdul+0LN1sqK9HuuTTdKwiycdrSn2WredFZNXDWJexq2YkpQn5d77iXedRsGmiDDY9DdQj6zy8BrZUCz48j6pU+VmYs4h5yhIxJWBTMrBASWpaMxRElFFpQ1PcxbQmy9qgpGYoHblafZNKJkdxVc5TuEqnbgBYw5KwSSlgHFS6Bty1+2IyqJLSEu/auo9K26rLxG1Vd5bMNlRhNy0tD8p4A0spGRP9mCqfUoPTUCJKgW13TuMlQZtLxQysUUAxZWIDKxqAsvNUysYqDJEiZTRz7qR+j5fyGJqMCWM2bPCsXwkziiZgmUxldIsOaLQiqZIriU0d2MfJEnkgsc5j5XV392pTU5MmL2IFEdHqTyJWNZzkq7RCOFNKXlvnwaqtEF96RKTreaJJxTZwLZOhiQQPwWdJVgly4uZeMltGYloG+NCzHqZQJgOYTSmLVXi5BiACeEPSiQ+IcOkzBb4TtUFqUt71nmkhy+WckkyvOyNKGREb7hCApspDRTNw1V4CpnIO1kv5vRTnpQJeKliDvbkWW9UBSMJkiC9gTkZqxoaYqCXH1JEqmWLJUEYuD9MGVnl3BLsVpJKRZCFZgmGN6YcrSm/D2nrXz2tskiYlW9Uco1tA3wRYY6E27KvtaKnVmC/pEgSJrCK5szoi0kg2MUGhIuV/6CJYeRIvlmFjFzHMYKRUdhNzWAGVyLINooPxMeuLIkYC1KT3p9SNiJppOXtr68w+xPMY4h041ASgwvl1Zq2keyHJ0jKEwRhkmIrd3HXGBjDWaSA2iwulXc5+CFe5DLGeBP94OUtpcvBbl8dIttHAm2o5E3EA0dxe35mHgqhSrC8Y9QX352gIP0mVgWeakOwUdHf91eoFHfEDoIQuF1unUNq8t1pGM53RDsTHEzUqkcGLzigylFDgUPW8YbvudoqxzlvNlRcnzXE+lvWFD9QoNvMq74y6jnilekHEMriELpV959UUtpCPFHZfVWH36cgw8eEam9gy8jSvgWlnmNlboVWGwLdEZ4RlQpEXKoVpc4GkrG1uGgEf4jdPF8vt4fn4kkn3f3ekFfzpHO8WbS/FRKnFSKX8jD8Ifw3+YAPnpxZ1dI/Hyna6b+u+A90GUs00HGdhJVyKvGirA1gYi902UI6TQIHjEPxQcWmc0zUT02VUzngf3VVqjHR766FSVNKakYE8X+uadY67ETO4Njco8LYgiNlBhYmuDXBQvGw+PVHeAsWVmAFAtAJ+Z/3BSHRvea3itTE4LUOzDM2wR3M0JAmWJUUCSt3/cn82nZEgwzBHbhWwtAEMnbwYYNzntWoJ6DjgN2J9Uk0gGo0eu71QRRUHIlG2Fg34FFejYf2KeeRWgbKKPaxijucq4rQkOJfmwUeSDaAQioRwhE+FoohjUVQIR6IYC+FwOBJI4VdKtYS2iD91zbBoE/NwG7HyziWvgnIkRZs4NsiF4KiNpBPLtoATdqpVI4cwG6H9Y1lDwr7YQ3QMQZejE24IOsXW/qXNHe2xsQSgjGnagIR3vDN1WjLJp5MppSmeUBatXjwQS+LoEnNw0O6Js5E+e0ln76JMNtPmQ519y6R2rHCtA+00G+ZCgWCA4/w062N8rI+lB4N9gZTc27W8T1IGFyWZgLJCCWV7Qr1iR0d8dU/e7svFOlQ5luv1xUVOZUOp1uVGPJmMdq5IDARXDLQu7O1iM71ieHGGb0mkhWjzQ1x4GZwGWWJTfSNccHToAGZTOUloSBK6lCLBSoo0UoLLQZOvtpA2Um1wkSPXpEYqQcjE8IZanpAs3LQUbkeXdgIHdlYSmtZYcldOiSxuthZaWS6V1vzJrL4UPbww2xFn+lNxlO7mY1a/xOWaq0jwhziaKfMQYgIRNwwnof+fqF7uo6tzno7rpRtrES5OqpROFxLYgAxyxnhZswXoCQYugM+7mvudiUg6HfaHcSQtsP4oEwzTLVBtK9puVogR0lCKSIYYy/LOIZFr8jQEApynkVJQUyQE+eTeazcWSr345NT99zz59SnuMw1+N278tKtZfZepO/bXWa/vGv3twZ5TF+gzw/NXfNibHGvZ/atZJ3d6+q/c37bqX1e3CKe6zt/15y0/efq5E2/6nnqibtX3509/YenBv2w+c+UHC16/ev0312coRz7/dGLtl0fPffzPL/d/NnGjZfvQnY/lWk/uG5vxu+snPLOvnp3YNv97iy5+kT5EP3j3sx0PzjyunR/yz4k88OO3Z2cOfrJ6pfDis94PVr71XPjObyz++N6ZLVsvXxuKrtx8rnNbz7+3XWubF2zJz/nOHc8fvveOHafrlpzafnbkH0MXOi48NauuIdTUsHf3+O6fzz3Wvb2XemmoTdh1fu6y6XhX76rR329+K3Kma/vZzuUnpI9+eGBO29C3/rjqb9MXrLy8Yl/s2teO/n35seCe5XOW7RjkA6l7VqXiT9QtKL70Te2Zlq2vzXj00I0tQ4UO7dwGJsmfuOKbt+4X2Y0f3P1R4wZ7s/dnoydTze1/QNySPz1wepjd0DzvybcfK+bPff7ee5H/LPiUG77rq28Uvj1z9p73Q7tOeYZ++UmSv0jHN2780dYD+x65HHij7sxnL04MqxfH+q++fHjL4x/u/GIqccu0Ke/aBS4CPvovQgblGA==

View File

@@ -0,0 +1 @@
eNrtVgt0E2UWLiBFoSwg0CKsS8zqskAnncl7WqqkadN3+kjfwIbJzJ9kmsnMdB5pE057aOEgbKsSFXZRUaSlxfKoPa2glIKrAj5W8eAKIqWisIIiXXYVWRSX/ZOmpSzuOZ49nLPuHuckmcz89/Hde/97/6+hzQcEkebYUdtpVgICQUrwQXy0oU0AVTIQpZWtXiC5OaolP89W1CwL9PF5bknixcSEBIKnVRwPWIJWkZw3wYclkG5CSoD/eQaEzbQ4OMr/4SjzMqUXiCLhAqIycdEyJclBV6ykTFSWQoU5okJyA0U1IOBNUNCsIoUTJY59QBmvFDgGQDFZBIKydkm80stRgIEvXLyEaDnES7M0lBIlARBeZaKTYEQQr5SAl4eRSLIAdVEVCt9wHDPoWvLzIYNOmQ0HCpWH/yYuU7KEN7TqApI9AgcKUEAkBZoflFGmA2kkXBUU4AkB6sHkiSEbvABzIkg0CD8xHEkMWY/4hmhp1qWsrYXhwRzTAqAgtOuSMMyIJOeoBKQEJWuX1La5AUFBF4+0uGF2gjtvTH4HQZIA5gSwJEdB68EuV4Dm4xUUcDKEBOIVAVGi2mHeWRAONtjuAYBHCIb2gdZB3eDzBM8z9CCIhEqRY7dH6oSE4Ny83B6qCgKLykrB7jwIxZSZkO+He4VVYCoDzPvzNYgoETTLwNojDAFRtfLh9Z6RCzxBeqARJLIPg62DyjtHynBicEsuQebZbjBJCKQ7uIUQvHpt18j3gsxKtBcE28z5N7uLLA67a9OoMAx+Om+wLPpZMrglvJ9236ANJMGPkBw0EnwW3TmUIAawLskdbNaoNVsFIPJw64MVrVBNksWGFlgS8MfX2yItsDkve6iW/VFxLamwPMFei0DHK1C1IpfwK9SoWqfAtIkomqjBFem5RdvNETdF31uHziKBYEUnrEXaUPXbSLfMegDVbv7eiveGKg6jCcGHHYaAGp4TARJBFdxehhQONj+Smdo1uMkQTnARLB0Iuw32hktfHaippkiZoty+ai+KB7Qa2gFk0tkdUYGdEHIDASFeMdisR7U7IytDyW+HsaIIhiIotqcGgS0LGNpLw3yGfyMTSAy26FAUffFmAYnzAFYMtmnR8LVvpIQAvLBoId/XzWhxHN/7/UJDpjR46EL33CglgpFoMLVXfPFmgYiJzai4vWZIGqGp4PF74YMd1zkoVI2hetxBaSmDQ6/VGXQGI0oZjJSGwtUvhcYCCa2EislzgoSIgITjVvIHj8d7iZpQoyVrMJ1GDyNNglOSZGQK2GRHKheKQUxS8AJgOILqIJ0ISZBugAzuv2BbarnVlJtpbrdBkGaO89Dg0Q9HzbDbSafd4U12URkBvIzlspz2fFxj11EsTxfSqqpSPgW34lXFrF4vyTWUj1V5EMyg1WO4VqvBEUyFqmDfIH7Uwjoc2aaArjKHsXssWWVlbp81s5jKsJitOlDhK3YF0smiXKcaxfyswan3pweys0m7wV2dRbDFhS6DnCKkUKVian4a4TVYSqy8TV9dk1HKlGIqj8eOmdyeNN7k0bEcDBHO3OSEJAXcsHBsismRtkFg2yCDTaMZapokBRVOTLLqxkmZpMiAx1cey/iTFLZQhgG8w/FtoyWQbOVYcPxxmBjZR1PJuTZzZmVKdnpVoDil3FVdghEyzljK+GyrnvVlppdU5jkLcuXSCis7MjNqtQFBI8mBW94Y3prXof+HqHaVISOnAJIXPo9gcVlOZGmns9UGBNhVwXaS4WQKDn0BtJotSKGpPNiNa3Aj6iCNBCAIHa4jkUxT6vND1oZnRkvoxGgjGLjxfGSwy61JVibCeJRJCi+RbNTDHguf5vWtg+fXgdFjZzfeHhW+xsDvtWtNtlzuBBrT+23p7QsOrXis5srFzhPs/N/Wbhq7Z9qqee/tSixX37uHjj2zf9L6qScyNUvvnzF69kenfzn3wpiDC51r0cn3zSl4f97eK8/u69tX17+mY5Fq1pPf0b+o3m2oHvi47tsvp+ZUrPLXmUxzjo4j2ivuyWxOKis7s3LH59JdCa3njU93LMjfEFOcPb9yPDm1pPAtPvbtT9CKVe+9H1wyc+67plfqxjZhuy5fO9r2zsWJd8bmWoPjN54em3JkDhZ9dKt+dB/+2WpPwxtR45vvkD1LDFkPRaktWxrm66YN7Pvu3IKYxvnrdsYlXN72twuL4veJb860Xjq9+PcP9hT0WgOly8z+5rQHWqb0H3qs0dFU/+m6s7EZn8Xi69wJ55pG7U6zPdt6NPELw6amUztORT86sW9G+oW/Rrm6zfyFq9onVh/YW7W4d/b0Yx+uTTjluiSl/uy7isfnVnqePjl2uvDclvm2xr4r7CHzJKtl71j3O1+Pf0FXnDt3w0fnGg++ZZ22apth+XuPpK+NqdK9Ftf1xvQXtOZPjnT07qcbGu/sqsf7jP3UkablS/90ZZnmWHfPwBbVP9Lrola+/ASa7WncMO3sxEl07IkHr3R01x3sR43dHYVLs+709jwUI/ZvHXhp+Qn8lXlvb1G9e7TkG8MY/0bLl5cY3ZNp529vMF+MC9V6TNSRrx/GTbdFRd1Koji64NYQxfgRmqzMMMPLBDyN4FSE7wc5oZ0kmH9LDGlIy5QhATuurywxumsKtGQhIVEyIfEVGpIrzfsh7JEQXLIXIoFelMsWD7O8xcpExWLlIP7FylpliOKNhK3MDIUryizrV10PL4R5JHT7D8D4E23+iTb/yGmzDr+1tFmL/j/RZu3/Cm026m89bVar1ToNihkx4KQARRopA+mg9NAZcKJ6jZH4r9FmNaapFNJc1QV5pM6XU6bKkYtS5ZKULMpodWf4jRZXwIVhKUVZeVbTMDnUosO0WYWXyWxOlT2gN2ZWMGWlKqAp1trNIiNk56dRuLW0uLwSVFVxXFWWz5TKGr1VckWlLktTaUlLN+uskINmO4zpWpXBIFVzdGqGoVwf0Djz821OT3F5uSWfpDVVWrMrI9f7Q2nzYNPcAtpM1hCFTtqXw2W5y4tom8dMe6rZQF6KRiuSsujQ4Ga1k88pyPEWVY/IDIbhP07aDDCjwQB0TuzW0OZRnf9KmwtN7Al08t7z0287XKJ4K8V7ZYbi9R0P7sqYMK1+oW3+mrkH7ntx22uWU/vjHv95Nm3bFHe3z2fRvFqbu3Jt6/pl98htZZf7W3fbL5w8u76OOXVRKDr85F8m2QdmP5zu/nvR/W8UU6c+vkO7v5PVft0/epY0dfuYX6/auqc/9pmjoIB5c8q5zuhZ9I6z3S93G95p8a+Po3flHuuZ0GNurj/zm6s5Kyd/Il+1PLNzVXNfI3NJ/1l9xguKy8fuu4daOGtO4ZS7HH8u2+nMbyb55kvc3avj8vtMkw+uWXSvZU7MHTOfSjoZXz7qifoiaSDaE7dc+Fn5gTPNHx2aIOxL7P7gm+Xjvlkxb9bpGLk4Ku3c5lmOr4TUw2smP0JOfbWHPzLui+TSpv1Lr57UP3PmTN+1mPVxTcF1Gw/1XPp8TGd7dMnBI5sznls3sOnpDeL5yq0TVcupKVe4hNl0XudDnmv
pddHd0YfzS+9f6P3q271fViyveepaRU/Jgpnrb/v4uHvrBwtXH843N0cnbWxqmYp9VUwunvCHcZqTuae3MbZPexYcfjPnV2fPGKIGKe6Krt850mCR/gk3ST0U

View File

@@ -0,0 +1 @@
eNqdVX1sE+cZj1cKk5jKWtBgo2otZ60Q5Oz3fBcnF88Zjp0PJzg2cT5pUHa+e88+fHfvcR+OnZRJpVslFqrqKqoWqe2WYmxmopQQRqFLYB9iqgCtpdXaptNghWhthYrKuj/QtKV77ThtIvhrr+yz33ufj9/z/J7nefcX0lDTRaTYJkTFgBrLGXijW/sLGtxrQt34WV6GRhLxuWgk1n3E1MS5rUnDUPUGl4tVRSdSocKKTg7JrjTp4pKs4cL/VQmWzeTiiM/OJUYdMtR1NgF1R8MTow4OYU+K4WhwtEFJQo4ah4YkiLemDjXHvt01DhnxUMIvEqpB0IiQRUXEUrqhQVZ2NAispMMahwFlFQM2TA3rAifYV0hClsfRPJdLIt2wJlfie53lOIjtQYVDvKgkrOnEiKjW2HkoSKwBa+wjusEXMTYFlnNgFVMQqgQriWmYX9S1TrCqKokcWzp37dGRMlGJhTCyKrz7uFiKiMCBK4Z1KoKh+EOuaBanU7GTzjqM+USG0A1WVCScH0JiMaq8Wj7/7fIDleVS2AhRocrKLypPLpdBunU0zHKR2AqTrMYlraOsJnvo6eXvNVMxRBlahUD0bneVw6/dFSgnSeLP1ArLelbhrKNlLt5YoQ0NLUtwCBuxxsHkUoIkqCSMpHWEBNQxDeoqrg74dB6rGaa+P4cpgZffKlTK5LVIxxKXV6s25oKYHmu2RRNr7MBtD7NZuxu4a+0k3QBAA1Vnbw13TwQqbrrvycNUt8YquoC5aF5iv8AlTSUF+WLgnozPlhjH0ZTg4+okYEZFOiQqqKyJfqJrsT+IUHB6scgIpCVYRRwpu7Vmy9QPj2SGec7k+WR6WAbMCE2JcWhywqmKiqqhkhsMiJB16wjNUJOVk6XkF3GsgCABAcg3MwQudyiJsojzWX5WmlS3crUAgDN3CxgoBXE7F2hQXueWS2hQxqSVfH9jhmYYZubeQkumKKa0at9cKaXD5WhIt6yfuVugYuI1oE9klqQJkbfmfog3Q5yHpxgyDqjaeo4HUIAU5CnKA2qpeggZRjiLB4DIYSslMlWkGYQOOTyRjKw1VyOzmVKj+SiyFqsA4LWLCieZPIyZ8SAqxaB77aoGJcTyr3MCwbFcEhKL9WcVggOd/nAoUIxhkAGEUiJ8/iPbpqEhThiKy75m2GPw7qApd8QjdFsr6uJCMNzZQYIQUIZ29tL1UXcKtbQhqidBkHW0h2RomqojSCdw4r4hDEAJKtua2NHeinhJ03v7IrrY3tvdtRO5/ZTBZvTOKIBorxGLhDmqt6+F1gyVRUmnsEsHQZqWUm5/PUgzbLPQ2cS0+fcEI20wI2RVhHq6d0rDdbxI893t6b66EA6RNZI+l9eOC1bESfdV2obAbUMsNg211DReO19OjM+5clJ67W14wkcUKeu1x0oZhviXlWFMNKCvEylw7hBOjJkWed/e0ECmvr8pwGhkmmprbnK3NLn9Uc9QLDLQTjOBIIjEBCGsJ/uF5Zkhaz0EqCTHA+j6cml+A/3/RHW6n1g+BYiIuniVFRSkK6Ig5GNQw11lFTkJmTwe+hrMB1qILv+AdYqhmHoQj8eB4OGhh6wjQv7giSVrX8+MXOnGKLASLrw0Z00nKZ+jAcfj8Npl1lfvwT1WvvCeypcKVUlcsP3m0bFvV5XXffj71VcHuy4pfwXfnbm57fSPnvj5xKbbN7p+td1WN7shuc4/tv0nb71w+ZnrJ+eLB9bfuba2/fRjBeKdsQe83iOHPz7HV/0hOWUbTz/fc1NeWBj77xfi2Sedj7zxeeDM+PU/Nf6t8dq/0S/Pe358oYb8dGHN1sapq1v3zDZ75lf333/oYMg8Nzj49gdjF/5irjuVu/L4rhvMUPTqDxL/PLllY3bq8ifHq0dbW9gNm++8aqua+de1+ffEJx/avEUohMbIi6sv/GNmlf1bc/SD7pb1Aw3fm36we37TbrRv7P13N3/4bvWWz/5e/Z1VH8bW7Nh161r4gO2m8yJ/1Xy/s+qzh6d2fDx6abh1XvWIv7791K0b24rHnqX29t1/yBp0bzg88+mVwGr1gxev7J5f3xi9eTY+8ruF4eP+8EEXiH/JRfYfOPrK5Uhu7VR7dQd3I9cxnvto/JHD3O2T7z3z+HbPgf8UDp8X1009m1po/emqQ01vH9v2aHSz+f3M4NqXbn05lPrkF6N/dMTvwA29Esgl3um/9OcH+qqvN7pP98iDT/9+DfVy5+fHr+/ZPXnxZP9Gh5c/bytxc1+VLT8P2zBR/wOKYJQ0

View File

@@ -1 +1 @@
eNptU39QFGUYPhQbUhmIGZv6R3ZupBqHvdtjr+OApomOEEQ6hBsGRhM/dj9ul9vbXXe/xQO0ApnUKZlZSy2omYjjzllJpSgkMSMpayAHGkeHqRwrM5pKBRRicqDvTjAZ3b++fX887/s+z/s2hmugovKSGNPJiwgqgEH4R9UbwwrcpkEVNYX8EHESGyxyl3jaNYUfTeEQktVMqxXIvAWIiFMkmWcsjOS31tisfqiqwAvVYKXE1o4eqDf7QaACST4oquZMwkal2VMJ80IQtmyqNyuSAPHLrKlQMWMvI+FORBQxeaAgEH5IAKIaQxCgUtIQUQmBopp3vhQBklgoRAIZAWgsJGmSA7xPI9NwHYqm0iNwCPplPBjSlEgVykLtDHMQsHjsS6bEICepSO+6b5RjgGGgjEgoMhLLi179Q28dL6cSLKwSAIIG7lGEUa50wwehTAKBr4GhO1n6cSDLAs+AiN9arUpi5/xMJKqV4f1uIzI6iRkRkd6TvdCHtagWMy/ilmmHhToeIFUEeFHA3JECwC2F5Kj/5L0OGTA+jEPOq6qH7iQfvTdGUvWOQsC4SxZBAoXh9A6g+B32j++1K5qIeD/Uw66i+8vNO/8vR1tsNkt61yJgtVZk9I4qIKiw6y7Jd1MMrBVNUg6SsvUsgoZIqSUZCVfQ26ijCwQKUPQiTm+32TMOK1CV8bLCXSGchjS1MYjFgkPfhOf36wN3wYLUbwRzsGz6KQ+npRK2dMLNICKyJITNnkmnZdopYl2hp9M1X8TzQJW6PAoQ1Sqs1AsLWxFmOE30QdZwPXAfjPkjInlW78PvCspGF+Ruc2nFG9VAYVFengLp9EKQzX8SIBlB0lgS4QuEZHTYANJHiUo7nWG3MU4nTdMOSDkhpKg0J2ScjN3BVjrp9hoe6IbNYiO8kuQV4DFXLukCDAfJkiglejin/MXswnxXZxlZLFVKSCU9wKsHRUmEoRKoYBV0I1oa77UCQzi9OLtc73aydqqqislgYYbdARhAPo/XZYGeu+MHI0cRvfQGLIGCTV/FbE9+Pc4U/ZZu2Fjh++G5lXNrW7ZM/NLU/XDs7eKxM3knv9/t79gFm1rdzdKzBefyD51fMXPtzLWH6pNuz64fa+xn9r63emqw9eKtyetTJ2Zvjxe03hz/Z+I7/+eXXvlyiX+M9+hGVuKjb69pfrUkvro5fsVfAyUDSUfGPgrPVF86MXx15u+JmlObyfSyrSnCwGu/TQ+ezmnvGcnal9TzFhe6fPnJ3tjTwZTDKf1E74/jzRWtMX03hp7Y8vMzhscovb717LSntzP2wrBpX9ZI/fpCNjl3uvQcvcz37k/9S8uWX+mPXftHnJLw+9Z453Dir6adVXE79geW7whIm74orTMm8xL2tx0cWt3YQO+5cv79FSNnYzYfevoGiHGzw490p3rjbnEttXvaOro+fXy7mnqhqGX51QtF617+rK7hYvIyI/+0+4rnsYI1U/WDe8etbWhoYBX4NuPrJZM335lobPhXqH/zYFlXX/mqp0ZndzuO/HkrATM8N7fURLZuWDkdYzL9Byv+bUg=
eNptVH1sE2UY75gYHBWDJkSUj0sFEsJuvet1W7uY8VHYJNjuqwwWxfn27l176/Xe497rXEeIMAko4JIjmdH9gVG6dnR1WzcMHwKOMQgMppglyIiI0xkDG5KNhCBC5rtPWeByl9y9z/v8fs/z+z3vVUcroIpFJCfFRVmDKuA18oH16qgKtwYh1nZFAlDzISGcn1fkPhRUxd6lPk1TcJbZDBQxDciaT0WKyKfxKGCuYM0BiDHwQhz2ICHUu2+bKQAqSzXkhzI2ZbGMxZpqmtxiynpnm0lFEjRlmYIYqqZUE49IEbJGFtxQkqgApABVTpIp4EFBjfJAoGLT9i0EAwlQItt4CQQFSHO0D4j+IG0hBAzHZBIoDQYU0o8WVAk+k8Zsj/ogEEizvxrmhn0Ia3riqQaaAc9DRaOhzCNBlL16m7dKVFIpAZZJQIOpVBXWhBgpUoZjOukxP4QKDSSxArZV0lgDoiyR7mhNDEBSsH7YlecuzV1fvM4VGYfWW4CiSCIPRtPN5RjJ8YmeaS2kwKfDsVFlaCKXrOlHV08Wa84PEVNkikmz2tOYliepJUDqjihj8e+eDCiA9xMcesJwPTKe3PTkHoT1eifg84qmQQKV9+n1QA1kWKd1qQbl0Ub1qCP/abqJ4BRdlEtjWXInpiHjkMzr9WVAwjAxZcVUToz4ydFMBs2wR6dhQ00N0TwiFPpXTNOkghKUvZpPP8Ry9gYVYoUMMvwoQtK0IK4OE0vh5QvRien7Om/D5EDsD68l5uqnclQxlWIslBOEKEKcTrHWLIbJsqZTuU533DFB4n6mTQm3CmRcRqxaNzk7Ud4XlP1QiDmeOS+xiQNGi4J+kryXMizrcIkKWpOZU5ibU761fO2mYndG5dbj/+uCVC+Qxaox2tG83iWcPYNLFzgPDT1lAm212zJpu93C0h6LxSZYbWymVcg4VCECPUa0p7wIeSXYzJfRPOB9kB6XRo+uLXGtdq53xDfThciDNEy7gVcPy0iGkSKoEjf0GC+hoEBOgQojjhy6cHWJfsTO2W2MB3psEHJWFjL0uk2FLZMyTckQHj1CY3+DncQKlSydS6pdvG+WYexKJs/IiFCwBfXkGx+vOFj68ICrN8XTdalzV+eqWUNW60/LVXxj6ZLE2W+Ro+HhzYJW547K1v57fbj/jYyk2YN3EpfNTdnn+079cuPG0cfdfXea2+/+Xbo4m9+d/eKbbdf2L5rP7b/ePdviv7x+ATeYruyZWbywvn2g+0Qk4hy49Gh44+mLN9/fXbu3zTbvZo255oeV3juWlJN/XliRXL2waNfd65bB9I7w4WtdxaevvvJcba352Blb29uNV5vAisx3CyzPN4eq6+Y86m/13m8r2d74R1e7f9j/wLDo9Tm/pdTeN4o7EkOvzYrf2lPDtUQGlr16IP6BsOBgxxVXz70h/OBA544l7Q2scaFzlb6z7Ox7M3Pq5v17od8YvJ1e3rqye3BBdsLx45W9NVcG/6prNGzpGU6Ne02fVHXPdT8w5r00v24gP2dZ0ZkZJWtuLU/JTj6/8uUN5za/dfuEsWf4m98//tR4ccPPn51rOXZN2FywER8P+91dH77wz+xFX3SMfL5sZBuu/dL1fdKo9skGYeiInjvDYPgPFluJzw==

View File

@@ -1 +1 @@
eNqdVW1sG0UadluuQoJDOqVAdRLqYsEJQnez6931V3Cp4zjNR12ncUJT0J1vvDu2N96v7sw6cXL9cYWrQNypXVqoysePUscuJm2aNrQcbenpuCvtFXEH3J0IEhVV+VFAgBAoIEGBWce5Jmp/3f7weGbeeT+e531mtlWL0EKKoS+ZUHQMLSBhMkHOtqoFt9gQ4UcrGsR5Qy73JlP9+21LmWnOY2yicEsLMBXGMKEOFEYytJYi1yLlAW4h/00V1t2UM4Zcmtk55tUgQiAHkTdMPTzmlQwSSsdk4u2HqkppkALUkFEgQ8awMZWBwELe1ZTXMlToWtkIWt6tvyYrmiFD1V3KmZjmGZHGtpUxXFudrHJkRNiCQCOTLFARJAsYaiYpjBi6vlgmsLWah0AmZe8o5w2EnUOLC5kEkgSJd6hLhqzoOedgblQxV1MyzKoAwxrJXod1mJxaAUKTBqpShJW5U85hYJqqIgF3v2UIGfpEo1oal0x47XbNrY0m2OjYmU6SJKJdLb0lgrhOcYwQZNjDIzTCQNFVAiGtApJPxazvn1i4YQKpQJzQDTadytzhQwttDOSMJ4CUTC1yCSwp74wDS/MLRxeuW7aOFQ061VjvteEam1fD8QzHMYGpRY5RSZec8ToNxxcdhtgq0ZJBfDj72EPz+KhQz+G8s58ThAMWRCbpH/hIhRzDNtpWJlzAN89WG430QrJnnsQLntvL7YQX51R/3l5NcQEqKWHKx/oEihPCvC/MB6l1if6JWCNM/3VpmOq3gI6yhIr4PO1VKW/rBSjXYtcl/JRLOKnGTZ/0KQ1HTANBupGVMzFI980piO5qPzrXXbRh5YCujNbDOqfqzA+PjgzLki3L+eKwxoZGBV7JQFvKTjeOmJbhhiEJ0Rpy9vt57lBjZx77GqmVpTmWZrlXR2jS6FBVNIXgWf9tyBg5ZZFl2VeuNcBEd0TwVYGtf68ttLCgRkhzY191I4RCoZPXN5p3xROTUEB8dbEVgguz4XwaeuVag4aLF1g0MTJvTSuyM3MXmaSDok/MhPggHwqwPlEUWAGEIBviQhzL+WR/6M9E/IpEvLhkmoaFaQQlcmfhkjOzWgMjrs4iPCfyflJpK6XokmrLMGVn2g23BtRKmRZUDSBPxjroGJDykE7V+8+ptm/eEE10xWopkmTMMAoKfPL9JcvSaSmbzmiRbpvBw1GspyxN8aVD6YQh2pIciOfibahL797UWSxuiveYWxL2MM0FfCEuIIpikOYYluEYjjZ4vbetLbNe7tR7Ep1tfWzRH+gG4kDM6var6eSD3XhTVttUjJo+f1f/6IC4rs0uDPWW1qc5ExaVUDZbTG5WChrGMcvkt8QHbMbfNxQl1QCcj7S0UqQ3FYJvpKEQmiiEdvUhhtl5fbRSch2DCLP4NmylOsl1n9TVUiuVcsGEZAQaTCkYRjYYOpzZTTCwi4ocCchxkOhghmzWig32GcxwMj2QeyjR548XgqZtGsPRQk6IM2aiJ7cABJHjabaBg58VgvUuvJr6/5nVsUF6oeDppDn3rlV1A+lKNltJQYsIyKlJqmHL5GK3YIVw3hfd7EwHZYHNZiWBC/plnvVxdBu5Mue9/e96KLuvQhWopMeKknM0z0e8YUHgva2UBiJBP5FT/fX7fcXtST339yVo1RM3eurfMnVjIrmUW3Hyi8mR2Zu3/yxw4+37f3XXL/WO9Sti3d9MPXx59j+zB52VP5z4+MjvVp76mn1371vrTzd59iFu+b7uczWrMvZd8MrIoFGrHrxjw8Y1H1xUhbGts98cPffb5vDG6Hmxec0B5taBt6jCkbGHSrv/+LZ4x+y6j9nzRz7dfuCedU3wD5c+/9D3qfXZnn9/Nn5lYuDp8q3Lue3Hb/B8eCIkdTx7ac/K2CRa23xz9LZ7zrzR4/nrrscf23nn5b+8c+Tyc53xp9/2T6X/QX90Q8/uNWvvfeAX3S9euW1F1y0B1Hp81/uPfPW3A5nJJ9CqHS+Fz773+n/PTH/7+co3ey/eT8fOr/ryqWPh3Tft8O38051LdzX9/ESP58KzSz5JP//oV/2h37Davzpex4ePN1XP/fPs9N17B59punD6k+8vHTvrjPsvqmv3lZu5B+4dO7f55dPGD7mpyar25Rt3K8+3v7f8zNDUO8fWfnFSf62819rzPUH3xx+XeW46fd+7zlKP5yeHyHrs
eNqdVX1sG+UZd9bBqlWwaoBaYIWT11Ws+Ow739mNY3kojZvETZw4cbomYcy8vnt9d/F95d6zYzu0W0NbhIJGj4JoNaCtk8YlzZJAMpY2/fhjKiyhW7euZQqsqxhiMKoxaWwMjbLutePQRO1fe2X7fPc87/Px+z2/9/oKaWggSVMrRiTVhAbgTHyDrL6CAbtTEJk7hxRoiho/GGmOtg2kDGluvWiaOqpyuYAuOTUdqkBycpriStMuTgSmC//XZVgKMxjX+Oyc3GtXIEJAgMhe9XCvndNwJtW0V9nboCwTCiQA0aUl8SWupUwiDoGB7A67ockQ+6QQNOzbHnHYFY2HMn4g6CbJOD2kmTLiGvZDpgGBYq9KABnBbQURAh639PSgqCHTGl1a5BjgOIj3Q5XTeEkVrAkhJ+kOgocJGZjQQeSQyQ/jAlVYAsIaTkKok0CW0nBofq81DnRdljhQtLu6kKaOlBsizawObzQPFzsgcfeqaU0241KqQ65IFmOqErRzA+WkxjMkMoGkyhgkUga4qiG9ZJ9ebNABl8RByDJf1tD85tHFPhqyDocB1xxdEhIYnGgdBobiZScWPzdSqikp0CrURG5MVzZ+ma7AOGkaf15ZEhllVc46XEL+F0t2Q9PIkpyGg1iHqNEFgGSoCqZoDdBU5REDIh2PCHx8CG8zU6hvEFMCz/6qUJ6VfHPDApd/sq0aDGJ6rJO1huQgKDcRBlnCTbk9BM1WUVQVyxJ14baRmnKatpvy8EqbAVSUwFxsWmC/wIkpNQn54ZqbMn6yyDjuplg+nkYSZnQNQbJclTXSTrbOi4QMBSfmh4zUDAGoUq6U1jpZor4nl+nhuRTPi+kehfLlWEaKwxSXmCxv0Q2tmAYXRCrIGmBparRsWQB/GPdKkTRFUvTxDGlgKGRJkTCepd+yUpE16KEoaupGBxOLC2u6wFKldWqxhwEVTFox9/UwrM/nO3Fzp4VQjK+4qONLvRBcXA3tVtDUjQ7lEHkKjWQWvEmJt+bW4psYTccpNs7zbm+coyjo9fC+SornOZ8X4PJ98BiWu8ThKEUydc0wSQQ5fCyZWWvOoYBMUWgBhvYwXtypn5BUTk7xMJqKB7ViD8hP6AaUNcCPcQmSA5wIyfn5swrBjqbqcKhmOIqLrNG0pASfebtidSzGJWJxJRADht4hKK2bw3CjlEbZJtQZZsMUHWyoj4GOTtWdiZtJIWowqRBJb2C9tI/Fi6SdlBPrhuyoTTq/H1KUdmTobtVbDfRKb0prCkU6PaHOzS0CL+s1kGeZLY05JddBR7d0C1FJoDN1DdEYX6N4o3rLBnlTXW0r04A2sptCSo4VDaGhMZPp9gQ3bqVbWGe2TnYjhU7iFoEpBlx+Ag+shEEPlGVDYtmQ86JhFkTjJ/gSMAHn0pPST9TjY75ZlbN+IlpEGOIrUGBUMmGgSVPh3LMYmFRa4gMKtTUrtXfVJ7YmEp18jA+72+X6ZEt9hvOwm7JbtqhBqXpDrJsPYildR4aiK0mqDI6XYitLo3m99P+zqtfaycWnANmsz7/PCqqGVCmRGIpCA6vKGuZkLcXjQ9+AQzW1ZGt1hzXpY/CoxWElBXwJ6Im7yVB1cHwh2pdnxmDxjVEAMh68NGdNiEzAjqFk7H5CAYFKLx7S0ltvx1BxUFXhTMXs/f3LbaW1DH+vXXuq9R31Dnrltstjq6++0Bbea8ysyEfWrlz15PIdwaPPcc3p8Mn73mRWhI5e9f/jL6/e6/yma/s+RnjsN/uzD9ienei6ffZCZ+MdX3zW9e7YQ8d6xmInHhn89ONwbv8P5871Xylc+Xfsoe2RgbMH3+9hjUdfvLsrX9d9Z3o8dOSWTy7M7J17b2ZiTvb+7ce7Hf9pPJp/qiA8z0m/f+b1D/b2vxmYWCfsufitU3fZbN+49FPPrf1JzxOf/O7QeWJiNPLrp8O29da5uw7vX9N/e742ID6wfHfTp8/nlr3Vsece8GTVju882nfgw8CUTe6yX7u67o87H2SPw533VN87GfHvXn/s897TFT+4svv+207XT3V1CP2H/rXd9uHKevMnR5ie6brX6vwvDe8TZ063XJy95Y3U/gizy/HirvxHv1Uq9HfOvPXS2kPx/LrN3zWfuNSz96/qCx+tuvj4qs/rn9sjXv5D4WvnZybP58UPsiun3W9fEHsTp799qbNu2cHVmf9Gz15O/P2x6cgX5z67j7p14CDJbbxy9uV3T5Hge+/96OdrmI+/+uDr//zl3Y4zYLZ3fOr8zy6v2ffn21Z435/d9fVo4+j0qyfeiD989StFwpbZDlw70Mdj9v4HNFGplQ==

View File

@@ -1 +1 @@
eNptVWtsFFUULhIU/CMiKlEC40JCgM50Zmf21Vqx7FJobNnSXWiXh5u7d+52pzuvzmPZLaKhEklEsKPGxII8t7ultuVRVARKAgmCkYAGDFlMIEGMKCKJ0SiBgHe3W2kD82N37j3nfuc753z3THs2gTRdUOQxvYJsIA1AAy90qz2roVYT6cb6jISMmMKn6/2B4G5TE3JzYoah6uVlZUAVKEVFMhAoqEhlCaYMxoBRht9VERVg0hGFT+XeXWOTkK6DZqTbyokVa2xQwaFkAy9sqgDjBCA0IPOKRMimFEGarZSwaYqI8nZTx+u1q/COpPBIzG81qwbJUg7SMLWIkveV8S6D/3VDQ0DCiygQdYQ3DCSpOCXsmMeiKc/abAwBHid8uWRiOqbohtU/Oom9AEKE8ZEMFV6Qm62+5jZBLSV4FBWBgXowcxkVSmT1xBFSSSAKCZQZOmXtA6oqChDk7WUtuiL3FjMljZSKHjb35LMjcV1kwzroxySqasrqU7jaMsFQnJui9yVJ3QCCLOLykSLAfDJqwX5kpEEFMI5ByGInrczQ4f6RPopuddUB6A+MggQajFldQJOc3MDIfc2UDUFCVtZb/3C4ovFBOJZiGMq1fxSwnpKh1VVoxJejDiNDS5FQwRjWTjoDFSUuICv3ZzgMo+GIVEkvCqyGSbfaEoQN1aK5mIm0LnC5mpY6fLpz4aKGUFgPaouESFiohiTjsnsYl8PhYEmGoimGYkifSYVbW5OaZ+GS1jgdFms4KVXbxFeFFtY7l4nVcktESbqF+fZQi1iPElEoBhtbwi2IYpUmuQ60ur1L6huqG+mQ7EkmI0Ba4vVG/WJVBYHZmQmBr+SWsUq9BFoT82vbgn7GHpcSTVRYEhvtXkOJL/fVsgbV5mUXBVtCI+jRmCFdZOikOTedf/qHtSEiudmIWbsZ1t2tIV3F9wa9ncElM0y9PY11iM6czhYv0C7/aw8k/FzahzVpDQZjZinBuAg/NAg7becIhitn7eUsSyysC/Z6i2GCj5Tg/iC+enoUy3DBsOSzMGbKccT3eB8p9sG82HEn8/TxLSVRUlV0RBZZWb1NZMPQ5CBrfANDN4tUtGYgC22FsNZgQfWr25KreWjyfCyxWqI9bRwrRJAJoweLR1RNyYfBhEhJt3Y7HPb+omVYdz04V5pkaJJmDidJfM2RKEgCrmfhtzi+dCvtwMU+9LAD7hfCgy7LFbpBHxvpoSEJCzYf+wEM5/F4jj7aaRiKxS4el+vwaC8djWTD2CX90MMORYhdtN6bHPYmBd7KzcSLsJPnaIhQBEVRlHaxHIMflkEOYI9GgRugr/DoEyBGyTdTVTSD1BHEs9pIWblSCSTzM6aSZRysE2daQQgyFE0eBcyIT8nnoFcQqoZEBfB7vdWkF8AYIgMF/VlZX2hxVV2N94smcqSQSL869J3IyoouC9FoJoA03BirB4qKyeNhqaEMxmqoClkH3Zg95umMuu2Ig9BJzsdjaBjtf9ml85M2C0TMPQGtgRhbaSvnONZWQUig0u3EbSp8TdZl8rnKzSfHLJm+cXxJ4RkrdhyXT9ATfbduPxk/N8G2Y7L14qSWcX1t3RcCSxdsHphJbb55aUvPs1evzN0wueH4yQ2pbblcpe+9db1EM/HC4l2fzwlJ21+5+OOGU0fu/PLzxUtKmB/M/hDf8nFiYJ3bXXt7Xfu9lc+A5Z91pL+fVTpVO9HRSUW/NSyUO/pE3576cSuk2Z/4D6wvb+zkO88eOJ6bMvs779nLtq+nVPylXO+ee/edrXU/zetacCu1qWP7PObxs907Sv55ve1qJ3HsVcDt/COe3cioh268/NRJI3thx/RzH360p4/tmv5v6EoNuTbgvnb5esdv3zRd/5WJ/D54pftO46Retq+r/cw0c3zt+cDUlbK7fewM78pN10JvzOjt7jgUfNOYOHEl17Rq6/nYwdzfp1c8nb2Z7t/bsOulCXffnyVu/PSCo/wG/9bVex+cuv58Scn9+2NL7m5bOg08VlLyH7A2NRs=
Some files were not shown because too many files have changed in this diff.