Compare commits


226 Commits

Author SHA1 Message Date
Chester Curme
ebcee67e1c x 2025-05-08 13:07:12 -04:00
Chester Curme
395008456e x 2025-05-08 13:05:28 -04:00
Chester Curme
d9715ad797 x 2025-05-08 13:03:19 -04:00
Chester Curme
3516c2c394 x 2025-05-08 13:01:28 -04:00
Chester Curme
e01c52d2d0 x 2025-05-08 12:59:31 -04:00
Chester Curme
2f39398736 x 2025-05-08 12:56:12 -04:00
Chester Curme
8868c6b1ff x 2025-05-08 11:06:22 -04:00
Chester Curme
18c1d8a50c x 2025-05-08 11:05:09 -04:00
Chester Curme
fc797fadf4 x 2025-05-08 11:03:31 -04:00
Chester Curme
c678528334 x 2025-05-08 11:00:10 -04:00
Chester Curme
a074bd08f7 x 2025-05-08 10:58:36 -04:00
Chester Curme
e7fef3bd8e Revert "xx"
This reverts commit 08574848cc.
2025-05-08 10:56:38 -04:00
Chester Curme
08574848cc xx 2025-05-08 10:55:08 -04:00
Chester Curme
c65571136f x 2025-05-08 10:52:16 -04:00
Chester Curme
6acd55b216 x 2025-05-08 10:50:14 -04:00
Chester Curme
c225f1a4cb x 2025-05-08 10:47:57 -04:00
Chester Curme
50ce3bf5d9 x 2025-05-08 10:45:55 -04:00
Chester Curme
f23b8008ce x 2025-05-08 10:42:52 -04:00
Chester Curme
31077f5df2 x 2025-05-08 10:41:10 -04:00
Chester Curme
1a43bfb55b x 2025-05-08 10:39:42 -04:00
Chester Curme
3c2e05e43f x 2025-05-08 10:38:05 -04:00
Chester Curme
ad68ddbf40 test stream twice 2025-05-08 10:35:31 -04:00
ccurme
2d202f9762 anthropic[patch]: split test into two (#31167) 2025-05-08 09:23:36 -04:00
ccurme
d4555ac924 anthropic: release 0.3.13 (#31162) 2025-05-08 03:13:15 +00:00
ccurme
e34f9fd6f7 anthropic: update streaming usage metadata (#31158)
Anthropic updated how they report token counts during streaming today.
See changes to `MessageDeltaUsage` in [this
commit](2da00f26c5 (diff-1a396eba0cd9cd8952dcdb58049d3b13f6b7768ead1411888d66e28211f7bfc5)).

It's clean and simple to grab these fields from the final
`message_delta` event. However, some of them are typed as Optional, and
language
[here](e42451ab3f/src/anthropic/lib/streaming/_messages.py (L462))
suggests they may not always be present. So here we take the required
field from the `message_delta` event as we were doing previously, and
ignore the rest.
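
A minimal sketch of that approach, assuming a dict-shaped `message_delta` event as in the Anthropic SDK (the helper name is hypothetical): take the required `output_tokens` and skip the Optional fields.

```python
def usage_from_message_delta(event: dict) -> dict:
    # Only output_tokens is guaranteed on the final message_delta event;
    # the other usage fields are Optional and may be absent.
    usage = event.get("usage") or {}
    return {"output_tokens": usage.get("output_tokens", 0)}
```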
2025-05-07 23:09:56 -04:00
Tanushree
6c3901f9f9 Change banner (#31159)
Changing the banner to lead to careers page instead of interrupt

---------

Co-authored-by: Brace Sproul <braceasproul@gmail.com>
2025-05-07 18:07:10 -07:00
ccurme
682f338c17 anthropic[patch]: support web search (#31157) 2025-05-07 18:04:06 -04:00
ccurme
d7e016c5fc huggingface: release 0.2 (#31153) 2025-05-07 15:33:07 -04:00
ccurme
4b11cbeb47 huggingface[patch]: update lockfile (#31152) 2025-05-07 15:17:33 -04:00
ccurme
b5b90b5929 anthropic[patch]: be robust to null fields when translating usage metadata (#31151) 2025-05-07 18:30:21 +00:00
ccurme
f70b263ff3 core: release 0.3.59 (#31150) 2025-05-07 17:36:59 +00:00
ccurme
bb69d4c42e docs: specify js support for tavily (#31149) 2025-05-07 11:30:04 -04:00
zhurou603
1df3ee91e7 partners: (langchain-openai) total_tokens should not add 'Nonetype' t… (#31146)
partners: (langchain-openai) total_tokens should not add 'Nonetype' t…

# PR Description

## Description
Fixed an issue in `langchain-openai` where `total_tokens` was
incorrectly adding `None` to an integer, causing a TypeError. The fix
ensures proper type checking before adding token counts.

## Issue
Fixes the TypeError traceback shown in the image where `'NoneType'`
cannot be added to an integer.

## Dependencies
None

## Twitter handle
None

![image](https://github.com/user-attachments/assets/9683a795-a003-455a-ada9-fe277245e2b2)
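
A minimal sketch of the guard described above (a hypothetical helper, not the actual langchain-openai code): treat a missing count as zero so `None` never reaches the `+` operator.

```python
def add_token_counts(a: int | None, b: int | None) -> int:
    # None means "not reported"; coerce to 0 before summing.
    return (a or 0) + (b or 0)

total_tokens = add_token_counts(10, None)  # 10, instead of a TypeError
```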

Co-authored-by: qiulijie <qiulijie@yuaiweiwu.com>
2025-05-07 11:09:50 -04:00
Collier King
19041dcc95 docs: update langchain-cloudflare repo/path on packages.yaml (#31138)
**Library Repo Path Update**: "langchain-cloudflare"

We recently restructured our `langchain-cloudflare` repo to allow for future
libraries, creating a `libs` folder to hold the `langchain-cloudflare` Python
package:

https://github.com/cloudflare/langchain-cloudflare/tree/main/libs/langchain-cloudflare

On `langchain`, this updates `packages.yaml` to point to the new
`libs/langchain-cloudflare` library folder.
2025-05-07 11:01:25 -04:00
Simonas Jakubonis
3cba22d8d7 docs: Pinecone Rerank example notebook (#31147)
Created an example notebook of how to use Pinecone Reranking service
cc @jamescalam
2025-05-07 11:00:42 -04:00
Jacob Lee
66d1ed6099 fix(core): Permit OpenAI style blocks to be passed into convert_to_openai_messages (#31140)
Should effectively be a noop, just shouldn't throw

CC @madams0013

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-05-07 10:57:37 -04:00
Tushar Nitave
a15034d8d1 docs: Fixed grammar for chat prompt composition (#31148)
This PR fixes a grammar issue in the sentence:

"A chat prompt is made up a of a list of messages..." → "A chat prompt
is made up of a list of messages. "
2025-05-07 10:51:34 -04:00
Michael Li
57c81dc3e3 docs: replace initialize_agent with create_react_agent in graphql.ipynb (#31133)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-05-06 16:09:52 -04:00
pulvedu
52732a4d13 Update docs (#31135)
Updating tavily docs to indicate JS support

---------

Co-authored-by: pulvedu <dustin@tavily.com>
2025-05-06 16:09:39 -04:00
Simonas Jakubonis
5dde64583e docs: Updated pinecone.mdx in the integration providers (#31123)
Updated pinecone.mdx in the integration providers

Added short description and examples for SparseVector store and
SparseEmbeddings
2025-05-06 12:58:32 -04:00
Tomaz Bratanic
6b6750967a Docs: Change to async llm graph transformer (#31126) 2025-05-06 12:53:57 -04:00
ccurme
703fce7972 docs: document that Anthropic supports boolean parallel_tool_calls param in guide (#31122) 2025-05-05 20:25:27 -04:00
唐小鸭
50fa524a6d partners: (langchain-deepseek) fix deepseek-r1 always returns an empty reasoning_content when reasoning (#31065)
## Description
deepseek-r1 always returns an empty string `reasoning_content` to the
first chunk when thinking, and sets `reasoning_content` to None when
thinking is over, to determine when to switch to normal output.

Therefore, whether the reasoning_content field exists should be judged
as None.

## Demo
deepseek-r1 reasoning output: 

```
{'delta': {'content': None, 'function_call': None, 'refusal': None, 'role': 'assistant', 'tool_calls': None, 'reasoning_content': ''}, 'finish_reason': None, 'index': 0, 'logprobs': None}
{'delta': {'content': None, 'function_call': None, 'refusal': None, 'role': None, 'tool_calls': None, 'reasoning_content': '好的'}, 'finish_reason': None, 'index': 0, 'logprobs': None}
{'delta': {'content': None, 'function_call': None, 'refusal': None, 'role': None, 'tool_calls': None, 'reasoning_content': ','}, 'finish_reason': None, 'index': 0, 'logprobs': None}
{'delta': {'content': None, 'function_call': None, 'refusal': None, 'role': None, 'tool_calls': None, 'reasoning_content': '用户'}, 'finish_reason': None, 'index': 0, 'logprobs': None}
...
```

deepseek-r1 first normal output
```
...
{'delta': {'content': ' main', 'function_call': None, 'refusal': None, 'role': None, 'tool_calls': None, 'reasoning_content': None}, 'finish_reason': None, 'index': 0, 'logprobs': None}
{'delta': {'content': '\n\nimport', 'function_call': None, 'refusal': None, 'role': None, 'tool_calls': None, 'reasoning_content': None}, 'finish_reason': None, 'index': 0, 'logprobs': None}
...
```
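
A sketch of the `None` check described above, routing the demo deltas (dict shapes as in the demo; the function is illustrative, not the actual implementation):

```python
def route_delta(delta: dict) -> tuple[str, str]:
    # reasoning_content is "" while thinking and None once thinking is over,
    # so test against None rather than truthiness.
    if delta.get("reasoning_content") is not None:
        return ("reasoning", delta["reasoning_content"])
    return ("content", delta.get("content") or "")

print(route_delta({"content": None, "reasoning_content": ""}))      # ('reasoning', '')
print(route_delta({"content": " main", "reasoning_content": None})) # ('content', ' main')
```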

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-05-05 22:31:58 +00:00
Michael Li
c0b69808a8 docs: replace initialize_agent with create_react_agent in openweathermap.ipynb (#31115)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-05-05 18:08:09 -04:00
ccurme
fce8caca16 docs: minor fix in Pinecone (#31110) 2025-05-03 20:16:27 +00:00
Simonas Jakubonis
b8d0403671 docs: updated pinecone example notebook (#30993)
- **Description:** Update Pinecone notebook example
- **Issue:** N/A
- **Dependencies:** N/A
- **Twitter handle:** N/A

- [x] **Add tests and docs**: Just notebook updates
2025-05-03 16:02:21 -04:00
Michael Li
1204fb8010 docs: replace initialize_agent with create_react_agent in yahoo_finance_news.ipynb (#31108)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-05-03 15:45:52 -04:00
Adeel Ehsan
1e00116ae7 docs: add docs for vectara tools (#30958)
Docs for Vectara Tools: "langchain-vectara"
2025-05-03 15:39:16 -04:00
Haris Colic
3e25a93136 Docs: replace initialize agent with create react agent for google tools (#31043)
- **Description:** The deprecated initialize_agent functionality is
replaced with create_react_agent for the Google tools. Also noticed a
potential issue with the non-existent "google-drive-search" tool used in
the old `google-drive.ipynb`. If this should be a tool available by
default, an issue should be opened to modify langchain-community's
`load_tools` accordingly.
- **Issue:**  #29277
- **Dependencies:** No added dependencies
- **Twitter handle:** No Twitter account
2025-05-03 15:20:07 -04:00
Stefano Lottini
325f729a92 docs: improvements to Astra DB pages, especially modernize Vector DB example notebook (#30961)
This PR brings several improvements and modernizations to the
documentation around the Astra DB partner package.

- language alignment for better matching with the terms used in the
Astra DB docs
- updated several links to pages on said documentation
- for the `AstraDBVectorStore`, added mentions of the new features in
the overall `astra.mdx`
- for the vector store, rewritten/upgraded most of the usage example
notebook for a more straightforward experience able to highlight the
main usage patterns (including new ones such as the newly-introduced
"autodetect feature")

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-05-03 14:26:52 -04:00
Asif Mehmood
00ac49dd3e Replace deprecated .dict() with .model_dump() for Pydantic v2 compatibility (#31107)
**What does this PR do?**
This PR replaces deprecated usages of `.dict()` with `.model_dump()` to
ensure compatibility with Pydantic v2 and prepare for v3, addressing the
deprecation warning `PydanticDeprecatedSince20` as required in
[Issue #31103](https://github.com/langchain-ai/langchain/issues/31103).

**Changes made:**
* Replaced `.dict()` with `.model_dump()` in multiple locations
* Ensured consistency with Pydantic v2 migration guidelines
* Verified compatibility across affected modules

**Notes**
* This is a code maintenance and compatibility update
* Tested locally with Pydantic v2.11
* No functional logic changes; only internal method replacements to
prevent deprecation issues
2025-05-03 13:40:54 -04:00
ccurme
6268ae8db0 langchain: release 0.3.25 (#31101) 2025-05-02 17:42:32 +00:00
ccurme
77ecf47f6d openai: release 0.3.16 (#31100) 2025-05-02 13:14:46 -04:00
ccurme
ff41f47e91 core: release 0.3.58 (#31099) 2025-05-02 12:46:32 -04:00
Eugene Yurtsev
4da525bc63 langchain[patch]: Remove beta decorator from init_embeddings (#31098)
Remove beta decorator from init_embeddings.
2025-05-02 11:52:50 -04:00
ccurme
94139ffcd3 openai[patch]: format system content blocks for Responses API (#31096)
```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI


llm = ChatOpenAI(model="gpt-4.1", use_responses_api=True)

messages = [
    SystemMessage("test"),                                   # Works
    HumanMessage("test"),                                    # Works
    SystemMessage([{"type": "text", "text": "test"}]),       # Bug in this case
    HumanMessage([{"type": "text", "text": "test"}]),        # Works
    SystemMessage([{"type": "input_text", "text": "test"}])  # Works
]

llm._get_request_payload(messages)
```
2025-05-02 15:22:30 +00:00
ccurme
26ad239669 core, openai[patch]: prefer provider-assigned IDs when aggregating message chunks (#31080)
When aggregating AIMessageChunks in a stream, core prefers the leftmost
non-null ID. This is problematic because:
- Core assigns IDs when they are null to `f"run-{run_manager.run_id}"`
- The desired meaningful ID might not be available until midway through
the stream, as is the case for the OpenAI Responses API.

For the OpenAI Responses API, we assign message IDs to the top-level
`AIMessage.id`. This works in `.(a)invoke`, but during `.(a)stream` the
IDs get overwritten by the defaults assigned in langchain-core. These
IDs
[must](https://community.openai.com/t/how-to-solve-badrequesterror-400-item-rs-of-type-reasoning-was-provided-without-its-required-following-item-error-in-responses-api/1151686/9)
be available on the AIMessage object to support passing reasoning items
back to the API (e.g., if not using OpenAI's `previous_response_id`
feature). We could add them elsewhere, but seeing as we've already made
the decision to store them in `.id` during `.(a)invoke`, addressing the
issue in core lets us fix the problem with no interface changes.
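
A sketch of the preference rule described above (a hypothetical standalone helper; core's actual merge logic lives in `AIMessageChunk` aggregation):

```python
def merge_ids(left: str | None, right: str | None) -> str | None:
    # Prefer a provider-assigned id over a core-assigned "run-..." default.
    candidates = [i for i in (left, right) if i]
    for candidate in candidates:
        if not candidate.startswith("run-"):
            return candidate
    return candidates[0] if candidates else None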
2025-05-02 11:18:18 -04:00
ccurme
72f905a436 infra: fix notebook tests (#31097) 2025-05-02 14:33:11 +00:00
William FH
b5bf2d6218 0.3.57 (#31095) 2025-05-01 23:42:26 -07:00
William FH
167afa5102 Enable run mutation (#31090)
This lets you more easily modify a run in-flight
2025-05-01 17:00:51 -07:00
Simonas Jakubonis
0b79fc1733 docs: Pinecone Sparse vectorstore example (#31066)
Description: Pinecone SparseVectorStore example
cc @jamescalam

---------

Co-authored-by: James Briggs <35938317+jamescalam@users.noreply.github.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
2025-05-01 18:08:10 -04:00
ccurme
c51eadd54f openai[patch]: propagate service_tier to response metadata (#31089) 2025-05-01 13:50:48 -04:00
ccurme
6110c3ffc5 openai[patch]: release 0.3.15 (#31087) 2025-05-01 09:22:30 -04:00
Ben Gladwell
da59eb7eb4 anthropic: Allow kwargs to pass through when counting tokens (#31082)
- **Description:** `ChatAnthropic.get_num_tokens_from_messages` does not
currently receive `kwargs` and pass those on to
`self._client.beta.messages.count_tokens`. This is a problem if you need
to pass specific options to `count_tokens`, such as the `thinking`
option. This PR fixes that.
- **Issue:** N/A
- **Dependencies:** None
- **Twitter handle:** @bengladwell
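
A simplified sketch of the pass-through described above (written as a standalone function for illustration; the real method lives on `ChatAnthropic`):

```python
def get_num_tokens_from_messages(client, model: str, messages: list, **kwargs) -> int:
    # Forward kwargs (e.g., thinking={...}) to the underlying count_tokens call.
    return client.beta.messages.count_tokens(
        model=model, messages=messages, **kwargs
    ).input_tokens
```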

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-04-30 17:56:22 -04:00
Really Him
918c950737 DOCS: partners/chroma: Fix documentation around chroma query filter syntax (#31058)

**Description**:
* Starting to put together some PR's to fix the typing around
`langchain-chroma` `filter` and `where_document` query filtering, as
mentioned:

https://github.com/langchain-ai/langchain/issues/30879
https://github.com/langchain-ai/langchain/issues/30507

The typing of `dict[str, str]` is on the one hand too restrictive (marks
valid filter expressions as ill-typed) and also too permissive (allows
illegal filter expressions). That's not what this PR addresses though.
This PR just removes from the documentation some examples of filters
that are illegal, and also syntactically incorrect: (a) dictionaries
with keys like `$contains` but the key is missing quotation marks; (b)
dictionaries with multiple entries - this is illegal in Chroma filter
syntax and will raise an exception. (`{"foo": "bar", "qux": "baz"}`).
Filter dictionaries in Chroma must have one and only one key. Again this
is just the documentation issue, which is the lowest hanging fruit. I
also think we need to update the types for `filter` and `where_document`
to be (at the very least `dict[str, Any]`), or, since we have access to
Chroma's types, they should be `Where` and `WhereDocument` types. This
has a wider blast radius though, so I'm starting small.

This PR does not fix the issues mentioned above, it's just starting to
get the ball rolling, and cleaning up the documentation.
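
Hedged examples of the syntax rules described above, assuming Chroma's documented filter operators:

```python
# Operator keys like $contains must be quoted strings.
where_document = {"$contains": "search string"}

# Illegal in Chroma filter syntax: multiple top-level keys in one dict.
# bad_filter = {"foo": "bar", "qux": "baz"}

# Legal: combine conditions explicitly with $and instead.
where = {"$and": [{"foo": "bar"}, {"qux": "baz"}]}
```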




---------

Co-authored-by: Really Him <hesereallyhim@proton.me>
2025-04-30 17:51:07 -04:00
Mateus Scheper
ed7cd3c5c4 docs: Fixing typo: "chocalate" -> "chocolate". (#31078)
Just fixing a typo that was making me anxious: "chocalate" ->
"chocolate".
2025-04-30 15:09:36 -04:00
yberber-sap
952a0b7b40 Docs: Fix SAP HANA Cloud docs - remove pip output, update vectorstore link, rename provider (#31077)
This PR includes the following documentation fixes for the SAP HANA
Cloud vector store integration:
- Removed stale output from the `%pip install` code cell.
- Replaced an unrelated vectorstore documentation link on the provider
overview page.
- Renamed the provider from "SAP HANA" to "SAP HANA Cloud"
2025-04-30 08:57:40 -04:00
Akshay Dongare
0b8e9868e6 docs: update LiteLLM integration docs for router migration to langchain-litellm (#31063)
# What's Changed?
- [x] 1. docs: **docs/docs/integrations/chat/litellm.ipynb** : Updated
with docs for litellm_router since it has been moved into the
[langchain-litellm](https://github.com/Akshay-Dongare/langchain-litellm)
package along with ChatLiteLLM

- [x] 2. docs: **docs/docs/integrations/chat/litellm_router.ipynb** :
Deleted to avoid redundancy

- [x] 3. docs: **docs/docs/integrations/providers/litellm.mdx** :
Updated to reflect inclusion of ChatLiteLLMRouter class

- [x] Lint and test: Done

# Issue:
- [x] Related to the issue
https://github.com/langchain-ai/langchain/issues/30368

# About me
- [x] 🔗 LinkedIn:
[akshay-dongare](https://www.linkedin.com/in/akshay-dongare/)
2025-04-29 17:48:11 -04:00
Lukas Scheucher
275ba2ec37 Add Compass Labs toolkits to langchain docs (#30794)
- **Description**: Adding documentation notebook for [compass-langchain
toolkit](https://pypi.org/project/langchain-compass/).
- **Issue**: N/a
- **Dependencies**: langchain-compass  
- **Twitter handle**: @labs_compass

---------

Co-authored-by: ccosnett <conor142857@icloud.com>
2025-04-29 17:42:43 -04:00
ccurme
bdb7c4a8b3 huggingface: fix embeddings return type (#31072)
Integration tests failing

cc @hanouticelina
2025-04-29 18:45:04 +00:00
célina
868f07f8f4 partners: (langchain-huggingface) Chat Models - Integrate Hugging Face Inference Providers and remove deprecated code (#30733)
Hi there, I'm Célina from 🤗,
This PR introduces support for Hugging Face's serverless Inference
Providers (documentation
[here](https://huggingface.co/docs/inference-providers/index)), allowing
users to specify different providers for chat completion and text
generation tasks.

This PR also removes the usage of `InferenceClient.post()` method in
`HuggingFaceEndpoint`, in favor of the task-specific `text_generation`
method. `InferenceClient.post()` is deprecated and will be removed in
`huggingface_hub v0.31.0`.

---
## Changes made
- bumped the minimum required version of the `huggingface-hub` package
to ensure compatibility with the latest API usage.
- added a `provider` field to `HuggingFaceEndpoint`, enabling users to
select the inference provider (e.g., 'cerebras', 'together',
'fireworks-ai'). Defaults to `hf-inference` (HF Inference API).
- replaced the deprecated `InferenceClient.post()` call in
`HuggingFaceEndpoint` with the task-specific `text_generation` method
for future-proofing, `post()` will be removed in huggingface-hub
v0.31.0.
- updated the `ChatHuggingFace` component:
    - added async and streaming support.
    - added support for tool calling.
- exposed underlying chat completion parameters for more granular
control.
- Added integration tests for `ChatHuggingFace` and updated the
corresponding unit tests.

  All changes are backward compatible.
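
A hedged usage sketch of the new `provider` field (the `repo_id` value is an arbitrary example model):

```python
from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint

llm = HuggingFaceEndpoint(
    repo_id="meta-llama/Llama-3.1-8B-Instruct",
    provider="together",  # e.g. 'cerebras', 'fireworks-ai'; defaults to 'hf-inference'
)
chat = ChatHuggingFace(llm=llm)
```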

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-04-29 09:53:14 -04:00
ccurme
3072e4610a community: move to separate repo (continued) (#31069)
Missed these after merging
2025-04-29 09:25:32 -04:00
ccurme
9ff5b5d282 community: move to separate repo (#31060)
langchain-community is moving to
https://github.com/langchain-ai/langchain-community
2025-04-29 09:22:04 -04:00
Sydney Runkle
7e926520d5 packaging: remove Python upper bound for langchain and co libs (#31025)
Follow up to https://github.com/langchain-ai/langsmith-sdk/pull/1696,
I've bumped the `langsmith` version where applicable in `uv.lock`.

Type checking problems here because deps have been updated in
`pyproject.toml` and `uv lock` hasn't been run - we should enforce that
in the future - goes with the other dependabot todos :).
2025-04-28 14:44:28 -04:00
Sydney Runkle
d614842d23 ci: temporarily run chroma on 3.12 for CI (#31056)
Waiting on a fix for https://github.com/chroma-core/chroma/issues/4382
2025-04-28 13:20:37 -04:00
Michael Li
ff1602f0fd docs: replace initialize_agent with create_react_agent in bash.ipynb (part of #29277) (#31042)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-28 13:07:22 -04:00
Christophe Bornet
aee7988a94 community: add mypy warn_unused_ignores rule (#30816) 2025-04-28 11:54:12 -04:00
Bae-ChangHyun
a2863f8757 community: add 'get_col_comments' option for retrieve database columns comments (#30646)
## Description
Added support for retrieving column comments in the SQL Database
utility. This feature allows users to see comments associated with
database columns when querying table information. Column comments
provide valuable metadata that helps LLMs better understand the
semantics and purpose of database columns.

A new optional parameter `get_col_comments` was added to the
`get_table_info` method, defaulting to `False` for backward
compatibility. When set to `True`, it retrieves and formats column
comments for each table.

Currently, this feature is supported on PostgreSQL, MySQL, and Oracle
databases.

## Implementation
You should create a table with column comments beforehand, e.g.:

```python
db = SQLDatabase.from_uri("YOUR_DB_URI")
print(db.get_table_info(get_col_comments=True)) 
```
## Result
```
CREATE TABLE test_table (
	name VARCHAR,
	school VARCHAR
)
/*
Column Comments: {'name': 'person name', 'school': 'school name'}
*/

/*
3 rows from test_table:
name
a
b
c
*/
```

## Benefits
1. Enhances LLM's understanding of database schema semantics
2. Preserves valuable domain knowledge embedded in database design
3. Improves accuracy of SQL query generation
4. Provides more context for data interpretation

Tests are available in
`langchain/libs/community/tests/test_sql_get_table_info.py`.

---------

Co-authored-by: chbae <chbae@gcsc.co.kr>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-28 15:19:46 +00:00
yberber-sap
3fb0a55122 Deprecate HanaDB, HanaTranslator and update example notebook to use new implementation (#30896)
- **Description:**  
This PR marks the `HanaDB` vector store (and related utilities) in
`langchain_community` as deprecated using the `@deprecated` annotation.
  - Set `since="0.1.0"` and `removal="1.0"`  
- Added a clear migration path and a link to the SAP-maintained
replacement in the
[`langchain_hana`](https://github.com/SAP/langchain-integration-for-sap-hana-cloud)
package.
Additionally, the example notebook has been updated to use the new
`HanaDB` class from `langchain_hana`, ensuring users follow the
recommended integration moving forward.

- **Issue:** None 

- **Dependencies:**  None
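
A sketch of the annotation described above, assuming `langchain_core`'s `@deprecated` decorator (class body simplified):

```python
from langchain_core._api import deprecated

@deprecated(
    since="0.1.0",
    removal="1.0",
    alternative_import="langchain_hana.HanaDB",
)
class HanaDB:
    ...
```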

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-27 16:37:35 -04:00
湛露先生
5fb8fd863a langchain_openai: clean duplicate code for openai embedding. (#30872)
The `_chunk_size` is not changed by the method `self._tokenize`, so I think
this is duplicate code.

Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-04-27 15:07:41 -04:00
Philipp Schmid
79a537d308 Update Chat and Embedding guides (#31017)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-27 18:06:59 +00:00
ccurme
ba2518995d standard-tests: add condition for image tool message test (#31041)
Require support for [standard
format](https://python.langchain.com/docs/how_to/multimodal_inputs/).
2025-04-27 17:24:43 +00:00
ccurme
04a899ebe3 infra: support third-party integration packages in API ref build (#31021) 2025-04-25 16:02:27 -04:00
Stefano Lottini
a82d987f09 docs: Astra DB, replace leftover links to "community" legacy package + modernize doc loader signature (#30969)
This PR brings some much-needed updates to some of the Astra DB shorter
example notebooks,

- ensuring imports are from the partner package instead of the
(deprecated) community legacy package
- improving the wording in a few related places
- updating the constructor signature introduced with the latest partner
package's AstraDBLoader
- marking the community package counterpart of the LLM caches as
deprecated in the summary table at the end of the page.
2025-04-25 15:45:24 -04:00
ccurme
a60fd06784 docs: document OpenAI flex processing (#31023)
Following https://github.com/langchain-ai/langchain/pull/31005
2025-04-25 15:10:25 -04:00
ccurme
629b7a5a43 openai[patch]: add explicit attribute for service tier (#31005) 2025-04-25 18:38:23 +00:00
ccurme
ab871a7b39 docs: enable milvus in API ref build (#31016)
Reverts langchain-ai/langchain#30996

Should be fixed following
https://github.com/langchain-ai/langchain-milvus/pull/68
2025-04-25 12:48:10 +00:00
Georgi Stefanov
d30c56a8c1 langchain: return attachments in _get_response (#30853)
This PR returns the message attachments in `_get_response`: when files are
generated, these attachments were not returned, so the generated files
could not be retrieved.

Fixes issue: https://github.com/langchain-ai/langchain/issues/30851
2025-04-24 21:39:11 -04:00
Abderrahmane Gourragui
09c1991e96 docs: update document examples (#31006)
## Description:

While following the docs, I found a couple of small issues.

This fixes some unused imports on the [extraction
page](https://python.langchain.com/docs/tutorials/extraction/#the-extractor)
and updates the examples on [classification
page](https://python.langchain.com/docs/tutorials/classification/#quickstart)
to be independent from the chat model.
2025-04-24 18:07:55 -04:00
ccurme
a7903280dd openai[patch]: delete redundant tests (#31004)
These are covered by standard tests.
2025-04-24 17:56:32 +00:00
Kyle Jeong
d0f0d1f966 [docs/community]: langchain docs + browserbaseloader fix (#30973)
community: fix browserbase integration
docs: update docs

- **Description:** Updated BrowserbaseLoader to use the new Python SDK.
- **Issue:** update browserbase integration with langchain
- **Dependencies:** n/a
- **Twitter handle:** @kylejeong21
2025-04-24 13:38:49 -04:00
ccurme
403fae8eec core: release 0.3.56 (#31000) 2025-04-24 13:22:31 -04:00
Jacob Lee
d6b50ad3f6 docs: Update Google Analytics tag in docs (#31001) 2025-04-24 10:19:10 -07:00
ccurme
10a9c24dae openai: fix streaming reasoning without summaries (#30999)
Following https://github.com/langchain-ai/langchain/pull/30909: need to
retain "empty" reasoning output when streaming, e.g.,
```python
{'id': 'rs_...', 'summary': [], 'type': 'reasoning'}
```
Tested by existing integration tests, which are currently failing.
2025-04-24 16:01:45 +00:00
ccurme
8fc7a723b9 core: release 0.3.56rc1 (#30998) 2025-04-24 15:09:44 +00:00
ccurme
f4863f82e2 core[patch]: fix edge cases for _is_openai_data_block (#30997) 2025-04-24 10:48:52 -04:00
Philipp Schmid
ae4b6380d9 Documentation: Add Google Gemini dropdown (#30995)
This PR adds Google Gemini (via AI Studio and Gemini API). Feel free to
change the ordering, if needed.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-24 10:00:16 -04:00
Philipp Schmid
ffbc64c72a Documentation: Improve structure of Google integrations page (#30992)
This PR restructures the main Google integrations documentation page
(`docs/docs/integrations/providers/google.mdx`) for better clarity and
updates content.

**Key changes:**

* **Separated Sections:** Divided integrations into distinct `Google
Generative AI (Gemini API & AI Studio)`, `Google Cloud`, and `Other
Google Products` sections.
* **Updated Generative AI:** Refreshed the introduction and the `Google
Generative AI` section with current information and quickstart examples
for the Gemini API via `langchain-google-genai`.
* **Reorganized Content:** Moved non-Cloud Platform specific
integrations (e.g., Drive, GMail, Search tools, ScaNN) to the `Other
Google Products` section.
* **Cleaned Up:** Minor improvements to descriptions and code snippets.

This aims to make it easier for users to find the relevant Google
integrations based on whether they are using the Gemini API directly or
Google Cloud services.

| Before                | After      |
|-----------------------|------------|
| ![Screenshot 2025-04-24 at 14 56
23](https://github.com/user-attachments/assets/ff967ec8-a833-4e8f-8015-61af8a4fac8b)
| ![Screenshot 2025-04-24 at 14 56
15](https://github.com/user-attachments/assets/179163f1-e805-484a-bbf6-99f05e117b36)
|

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-24 09:58:46 -04:00
Jacob Lee
6b0b317cb5 feat(core): Autogenerate filenames for when converting file content blocks to OpenAI format (#30984)
CC @ccurme

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-24 13:36:31 +00:00
ccurme
21962e2201 docs: temporarily disable milvus in API ref build (#30996) 2025-04-24 09:31:23 -04:00
Behrad Hemati
1eb0bdadfa community: add indexname to other functions in opensearch (#30987)
- **Description:** add the ability to override the index name if provided
in the kwargs of sub-functions. When used in a WSGI application it's
crucial to be able to dynamically change parameters.
2025-04-24 08:59:33 -04:00
Nicky Parseghian
7ecdac5240 community: Strip URLs from sitemap. (#30830)
Fixes #30829

- **Description:** Simply strips the loc value when building the
element.
    - **Issue:** Fixes #30829
2025-04-23 18:18:42 -04:00
ccurme
faef3e5d50 core, standard-tests: support PDF and audio input in Chat Completions format (#30979)
Chat models currently implement support for:
- images in OpenAI Chat Completions format
- other multimodal types (e.g., PDF and audio) in a cross-provider
[standard
format](https://python.langchain.com/docs/how_to/multimodal_inputs/)

Here we update core to extend support to PDF and audio input in Chat
Completions format. **If an OAI-format PDF or audio content block is
passed into any chat model, it will be transformed to the LangChain
standard format**. We assume that any chat model supporting OAI-format
PDF or audio has implemented support for the standard format.
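
A hedged sketch of the two block shapes involved (field names assumed from the OpenAI Chat Completions file format and LangChain's standard multimodal format; values are placeholders):

```python
# OpenAI Chat Completions-style PDF block:
oai_pdf_block = {
    "type": "file",
    "file": {"filename": "doc.pdf", "file_data": "data:application/pdf;base64,<...>"},
}

# Cross-provider standard block it is translated to:
standard_pdf_block = {
    "type": "file",
    "source_type": "base64",
    "mime_type": "application/pdf",
    "data": "<base64 data>",
}
```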
2025-04-23 18:32:51 +00:00
Bagatur
d4fc734250 core[patch]: update dict prompt template (#30967)
Align with JS changes made in
https://github.com/langchain-ai/langchainjs/pull/8043
2025-04-23 10:04:50 -07:00
ccurme
4bc70766b5 core, openai: support standard multi-modal blocks in convert_to_openai_messages (#30968) 2025-04-23 11:20:44 -04:00
ccurme
e4877e5ef1 fireworks: release 0.3.0 (#30977) 2025-04-23 10:08:38 -04:00
Christophe Bornet
8c5ae108dd text-splitters: Set strict mypy rules (#30900)
* Add strict mypy rules
* Fix mypy violations
* Add error codes to all type ignores
* Add ruff rule PGH003
* Bump mypy version to 1.15
2025-04-22 20:41:24 -07:00
ccurme
eedda164c6 fireworks[minor]: remove default model and temperature (#30965)
`mixtral-8x-7b-instruct` was recently retired from Fireworks Serverless.

Here we remove the default model altogether, so that the model must be
explicitly specified on init:
```python
ChatFireworks(model="accounts/fireworks/models/llama-v3p1-70b-instruct")  # for example
```

We also set a null default for `temperature`, which previously defaulted
to 0.0. This parameter will no longer be included in request payloads
unless it is explicitly provided.
2025-04-22 15:58:58 -04:00
Grant
4be55f7c89 docs: fix typo at 175 (#30966)
**Description:** Corrected pre-buit to pre-built.
**Issue:** Little typo.
2025-04-22 18:13:07 +00:00
CLOVA Studio 개발
577cb53a00 community: update Naver integration to use langchain-naver package and improve documentation (#30956)
## **Description:** 
This PR was requested after the `langchain-naver` partner-managed
packages were completed.
We built our package as requested in [this
comment](https://github.com/langchain-ai/langchain/pull/29243#issuecomment-2595222791)
and the initial version is now uploaded to
[pypi](https://pypi.org/project/langchain-naver/).
So we've updated some of our documents with the changed features
and how to download our partner-managed package.

## **Dependencies:** 

https://github.com/langchain-ai/langchain/pull/29243#issuecomment-2595222791

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-04-22 12:00:10 -04:00
ccurme
a7c1bccd6a openai[patch]: remove xfails from image token counting tests (#30963)
These appear to be passing again.
2025-04-22 15:55:33 +00:00
ccurme
25d77aa8b4 community: release 0.3.22 (#30962) 2025-04-22 15:34:47 +00:00
ccurme
59fd4cb4c0 docs: update package registry sort order (#30960) 2025-04-22 15:27:32 +00:00
ccurme
b8c454b42b langchain: release 0.3.24 (#30959) 2025-04-22 11:23:34 -04:00
Dmitrii Rashchenko
a43df006de Support of openai reasoning summary streaming (#30909)
**langchain_openai: Support of reasoning summary streaming**

**Description:**
OpenAI API now supports streaming reasoning summaries for reasoning
models (o1, o3, o3-mini, o4-mini). More info about it:
https://platform.openai.com/docs/guides/reasoning#reasoning-summaries

It is supported only in the Responses API (not the Completions API), so you
need to create the LangChain OpenAI model as follows to support reasoning
summary streaming:

```
llm = ChatOpenAI(
    model="o4-mini", # also o1, o3, o3-mini support reasoning streaming
    use_responses_api=True,  # reasoning streaming works only with responses api, not completion api
    model_kwargs={
        "reasoning": {
            "effort": "high",  # also "low" and "medium" supported
            "summary": "auto"  # some models support "concise" summary, some "detailed", but auto will always work
        }
    }
)
```

Now, if you stream events from llm:

```
async for event in llm.astream_events(prompt, version="v2"):
    print(event)
```

or

```
for chunk in llm.stream(prompt):
    print (chunk)
```

OpenAI API will send you new types of events:
`response.reasoning_summary_text.added`
`response.reasoning_summary_text.delta`
`response.reasoning_summary_text.done`

These events are new, so they were previously ignored. I have added support
for them in the function `_convert_responses_chunk_to_generation_chunk`, so
reasoning chunks or the full reasoning summary are added to the chunk's
additional_kwargs.

Example of how this reasoning summary may be printed:

```
    async for event in llm.astream_events(prompt, version="v2"):
        if event["event"] == "on_chat_model_stream":
            chunk: AIMessageChunk = event["data"]["chunk"]
            if "reasoning_summary_chunk" in chunk.additional_kwargs:
                print(chunk.additional_kwargs["reasoning_summary_chunk"], end="")
            elif "reasoning_summary" in chunk.additional_kwargs:
                print("\n\nFull reasoning step summary:", chunk.additional_kwargs["reasoning_summary"])
            elif chunk.content and chunk.content[0]["type"] == "text":
                print(chunk.content[0]["text"], end="")
```

or

```
    for chunk in llm.stream(prompt):
        if "reasoning_summary_chunk" in chunk.additional_kwargs:
            print(chunk.additional_kwargs["reasoning_summary_chunk"], end="")
        elif "reasoning_summary" in chunk.additional_kwargs:
            print("\n\nFull reasoning step summary:", chunk.additional_kwargs["reasoning_summary"])
        elif chunk.content and chunk.content[0]["type"] == "text":
            print(chunk.content[0]["text"], end="")
```

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-22 14:51:13 +00:00
Alexander Ng
0f6fa34372 Community: Valyu Integration docs (#30926)
PR title:
docs: add Valyu integration documentation
Description:
This PR adds documentation and example notebooks for the Valyu
integration, including retriever and tool usage.
Issue:
N/A
Dependencies:
No new dependencies.

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-04-21 17:43:00 -04:00
Jacob Mansdorfer
e8a84b05a4 Community: Adding tool calling and some new parameters to the langchain-predictionguard docs. (#30953)
- [x] **PR message**: 
- **Description:** Updates the documentation for the
langchain-predictionguard package, adding tool calling functionality and
some new parameters.
2025-04-21 17:01:57 -04:00
ccurme
8574442c57 core[patch]: release 0.3.55 (#30952) 2025-04-21 17:56:24 +00:00
ccurme
920d504e47 fireworks[patch]: update model in LLM integration tests (#30951)
`mixtral-8x7b-instruct` has been retired.
2025-04-21 17:53:27 +00:00
Anton Masalovich
1f3054502e community: fix cost calculations for 4.1 and o4 in OpenAI callback (#30899)
**Issue:** #30898
2025-04-21 10:59:47 -04:00
Ahmed Tammaa
589bc19890 anthropic[patch]: make description optional on AnthropicTool (#30935)
PR Summary

This change adds a fallback in ChatAnthropic.with_structured_output() to
handle Pydantic models that don’t include a docstring. Without it,
calling:
```py
from pydantic import BaseModel
from langchain_anthropic import ChatAnthropic

class SampleModel(BaseModel):
    sample_field: str

llm = ChatAnthropic(
    model="claude-3-7-sonnet-latest"
).with_structured_output(SampleModel.model_json_schema())

llm.invoke("test")
```
will raise a
```
KeyError: 'description'
```
because Pydantic omits the description field when no docstring is
present.

This issue doesn’t occur when using ChatOpenAI or if you add a docstring
to the model:
```py
from pydantic import BaseModel
from langchain_openai import ChatOpenAI

class SampleModel(BaseModel):
    """Schema for sample_field output."""
    sample_field: str

llm = ChatOpenAI(
    model="gpt-4o-mini"
).with_structured_output(SampleModel.model_json_schema())

llm.invoke("test")
```

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-21 10:44:39 -04:00
Nuno Campos
27296bdb0c core: Make Graph.Node.data optional (#30943)
2025-04-21 07:18:36 -07:00
Pushpa Kumar
0e9d0dbc10 docs: dynamic copyright year in API ref (#30944)
- [x] **PR title:**  
`docs: make footer year dynamic in API reference docs`

- [x] **PR message:**

  - **Description:**  
Update `docs/api_reference/conf.py` to make the copyright year dynamic
(on
[https://python.langchain.com/api_reference/](https://python.langchain.com/api_reference/)).

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-21 14:10:14 +00:00
Ahmed Tammaa
de56c31672 core: Improve OutputParser error messaging when model output is truncated (max_tokens) (#30936)
Addresses #30158
When using the output parser—either in a chain or standalone—hitting
max_tokens triggers a misleading “missing variable” error instead of
indicating the output was truncated. This subtle bug often surfaces with
Anthropic models.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-21 10:06:18 -04:00
xsai9101
335f089d6a Community: Add bind variable support for oracle adb docloader (#30937)
PR title:
Community: Add bind variable support for oracle adb docloader
Description:
This PR adds support for using bind variables in the Oracle ADB doc loader
class, including a minor documentation change.
Issue:
N/A
Dependencies:
No new dependencies.
2025-04-21 08:47:33 -04:00
Ikko Eltociear Ashimine
9418c0d8a5 docs: update tableau.ipynb (#30938)
Initalize -> Initialize
2025-04-21 08:43:29 -04:00
Aubrey Ford
23f701b08e langchain_community: OpenAIEmbeddings not respecting chunk_size argument (#30946)
This is a follow-on PR to go with the identical changes that were made
in partners/openai.

Previous PR:  https://github.com/langchain-ai/langchain/pull/30757

When calling embed_documents and providing a chunk_size argument, that
argument is ignored when OpenAIEmbeddings is instantiated with its
default configuration (where check_embedding_ctx_length=True).

_get_len_safe_embeddings specifies a chunk_size parameter but it's not
being passed through in embed_documents, which is its only caller. This
appears to be an oversight, especially given that the
_get_len_safe_embeddings docstring states it should respect "the set
embedding context length and chunk size."

Developers typically expect method parameters to take effect (also, take
precedence) when explicitly provided, especially when instantiating
using defaults. I was confused as to why my API calls were being
rejected regardless of the chunk size I provided.
2025-04-21 08:39:07 -04:00
Aubrey Ford
b344f34635 partners/openai: OpenAIEmbeddings not respecting chunk_size argument (#30757)
When calling `embed_documents` and providing a `chunk_size` argument,
that argument is ignored when `OpenAIEmbeddings` is instantiated with
its default configuration (where `check_embedding_ctx_length=True`).

`_get_len_safe_embeddings` specifies a `chunk_size` parameter but it's
not being passed through in `embed_documents`, which is its only caller.
This appears to be an oversight, especially given that the
`_get_len_safe_embeddings` docstring states it should respect "the set
embedding context length and chunk size."

Developers typically expect method parameters to take effect (also, take
precedence) when explicitly provided, especially when instantiating
using defaults. I was confused as to why my API calls were being
rejected regardless of the chunk size I provided.

This bug also exists in langchain_community package. I can add that to
this PR if requested otherwise I will create a new one once this passes.
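
A simplified sketch of the fix described above (hypothetical signature; the real method is `OpenAIEmbeddings.embed_documents`):

```python
def embed_documents(self, texts, chunk_size=None):
    # Forward the caller's chunk_size instead of silently dropping it.
    return self._get_len_safe_embeddings(texts, engine=self.deployment, chunk_size=chunk_size)
```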
2025-04-18 15:27:27 -04:00
Konsti-s
017c8079e1 partners: ChatAnthropic supports urls (#30809)
**Description:**
partners-anthropic: ChatAnthropic supports b64 and urls in the
part[image_url][url] message variable

**Issue**:
ChatAnthropic right now only supports b64 encoded images in the
part[image_url][url] message variable. This PR enables ChatAnthropic to
also accept image urls in said variable and makes it compatible with
OpenAI messages to make model switching easier.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-18 15:15:45 -04:00
Volodymyr Tkachuk
d0cd115356 community: Add deprecation decorator to SingleStore community integrations (#30846)
SingleStore integration now has its own package `langchain-singlestore`, so
the community implementation will no longer be maintained.

Added `deprecated` decorator to `SingleStoreDBChatMessageHistory`,
`SingleStoreDBSemanticCache`, and `SingleStoreDB` classes in the
community package.

**Dependencies:** https://github.com/langchain-ai/langchain/pull/30841

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-04-18 12:58:39 -04:00
Alejandro Rodríguez
34ddfba76b community: support usage_metadata for litellm streaming calls (#30683)
Support "usage_metadata" for LiteLLM streaming calls.

This is a follow-up to
https://github.com/langchain-ai/langchain/pull/30625, which tackled
non-streaming calls.

2025-04-18 12:50:32 -04:00
Volodymyr Tkachuk
5ffcd01c41 docs: Register langchain-singlestore integration (#30841)
I created and published the `langchain-singlestore` integration package,
which should replace the SingleStoreDB community implementation.
2025-04-18 12:11:33 -04:00
ccurme
096f0e5966 core[patch]: de-beta usage callback (#30928) 2025-04-18 15:45:09 +00:00
ccurme
46de0866db infra: add langchain-google-genai to monorepo test deps and update notebook cassettes (#30925)
Following https://github.com/langchain-ai/langchain/pull/30880
2025-04-18 11:16:12 -04:00
Behrad Hemati
d624a475e4 community: change metadata in opensearch mmr (#30921)
- **Description:** including metadata_field in
max_marginal_relevance_search() would result in an error; changed the
logic to be similar to how it's handled in similarity_search, where it
can be any field or simply "*" to include every field.
2025-04-18 10:10:23 -04:00
rylativity
dbf9986d44 langchain-ollama (partners) / langchain-core: allow passing ChatMessages to Ollama (including arbitrary roles) (#30411)
Replacement for PR #30191 (@ccurme)

**Description**: currently, ChatOllama [will raise a value error if a
ChatMessage is passed to
it](https://github.com/langchain-ai/langchain/blob/master/libs/partners/ollama/langchain_ollama/chat_models.py#L514),
as described
https://github.com/langchain-ai/langchain/pull/30147#issuecomment-2708932481.

Furthermore, ollama-python is removing the limitations on valid roles
that can be passed through chat messages to a model in ollama -
https://github.com/ollama/ollama-python/pull/462#event-16917810634.

This PR removes the role limitations imposed by langchain and enables
passing langchain ChatMessages with arbitrary 'role' values through the
langchain ChatOllama class to the underlying ollama-python Client.

As this PR relies on [merged but unreleased functionality in
ollama-python](
https://github.com/ollama/ollama-python/pull/462#event-16917810634), I
have temporarily pointed the ollama package source to the main branch of
the ollama-python github repo.

Format, lint, and tests of new functionality passing. Need to resolve
issue with recently added ChatOllama tests. (Now resolved)

**Issue**: resolves #30122 (related to ollama issue
https://github.com/ollama/ollama/issues/8955)

**Dependencies**: no new dependencies

[x] PR title
[x] PR message
[x] Lint and test: format, lint, and test all running successfully and
passing
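
A hedged usage sketch of the behavior described above (the model name is an arbitrary example; requires a running Ollama server):

```python
from langchain_core.messages import ChatMessage
from langchain_ollama import ChatOllama

llm = ChatOllama(model="llama3.1")
msg = ChatMessage(role="control", content="thinking")  # arbitrary role, now passed through
response = llm.invoke([msg])
```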

---------

Co-authored-by: Ryan Stewart <ryanstewart@Ryans-MacBook-Pro.local>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-18 10:07:07 -04:00
Christophe Bornet
0c723af4b0 langchain[lint]: fix mypy type ignores (#30894)
* Remove unused ignores
* Add type ignore codes
* Add mypy rule `warn_unused_ignores`
* Add ruff rule PGH003

NB: some `type: ignore[unused-ignore]` are added because the ignores are
needed when `extended_testing_deps.txt` deps are installed.
2025-04-17 17:54:34 -04:00
ccurme
f14bcee525 docs: update multi-modal docs (#30880)
Co-authored-by: Sydney Runkle <54324534+sydney-runkle@users.noreply.github.com>
2025-04-17 16:03:05 -04:00
Sydney Runkle
98c357b3d7 core: release 0.3.54 (#30911) 2025-04-17 14:27:06 -04:00
Vadym Barda
d2cbfa379f core[patch]: add retries and better messages to draw_mermaid_png (#30881) 2025-04-17 18:25:37 +00:00
Sydney Runkle
75e50a3efd core[patch]: Raise AttributeError (instead of ModuleNotFoundError) in custom __getattr__ (#30905)
Follow up to https://github.com/langchain-ai/langchain/pull/30769,
fixing the regression reported
[here](https://github.com/langchain-ai/langchain/pull/30769#issuecomment-2807483610),
thanks @krassowski for the report!

Fix inspired by https://github.com/PrefectHQ/prefect/pull/16172/files

Other changes:
* Using tuples for `__all__`, except in `output_parsers` bc of a list
namespace conflict
* Using a helper function for imports due to repeated logic across
`__init__.py` files becoming hard to maintain.
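
A minimal sketch of the fix, assuming a module-level `__getattr__` with a hypothetical lookup table:

```python
import importlib

_dynamic_imports = {"BaseMessage": "langchain_core.messages"}

def __getattr__(name: str):
    if name in _dynamic_imports:
        return getattr(importlib.import_module(_dynamic_imports[name]), name)
    # AttributeError (not ModuleNotFoundError) keeps hasattr() and dir() well-behaved.
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
```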

Co-authored-by: Michał Krassowski <5832902+krassowski@users.noreply.github.com>
2025-04-17 14:15:28 -04:00
ccurme
61d2dc011e openai: release 0.3.14 (#30908) 2025-04-17 10:49:14 -04:00
ccurme
f0f90c4d88 anthropic: release 0.3.12 (#30907) 2025-04-17 14:45:12 +00:00
ccurme
f01b89df56 standard-tests: release 0.3.19 (#30906) 2025-04-17 10:37:44 -04:00
ccurme
add6a78f98 standard-tests, openai[patch]: add support standard audio inputs (#30904) 2025-04-17 10:30:57 -04:00
ccurme
2c2db1ab69 core: release 0.3.53 (#30901) 2025-04-17 13:10:32 +00:00
ccurme
86d51f6be6 multiple: permit optional fields on multimodal content blocks (#30887)
Instead of stuffing provider-specific fields in `metadata`, they can go
directly on the content block.
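For example (`cache_control` is an illustrative provider-specific field):

```python
block = {
    "type": "image",
    "source_type": "url",
    "url": "https://path.to.image.png",
    # Provider-specific field, directly on the block instead of under a
    # nested "metadata" key.
    "cache_control": {"type": "ephemeral"},
}
```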
2025-04-17 12:48:46 +00:00
湛露先生
83b66cb916 doc: clean doc word description. (#30895)
Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-04-17 08:04:37 -04:00
湛露先生
ff2930c119 partners: bug fix check_imports.py exit code. (#30897)
Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-04-17 08:02:23 -04:00
ccurme
b36c2bf833 docs: update Bedrock chat model page (#30883)
- document prompt caching
- feature ChatBedrockConverse throughout
2025-04-16 16:55:14 -04:00
ccurme
9e82f1df4e docs: minor clean up in ChatOpenAI docs (#30884) 2025-04-16 16:08:43 -04:00
ccurme
fa362189a1 docs: document OpenAI reasoning summaries (#30882) 2025-04-16 19:21:14 +00:00
Sydney Runkle
88fce67724 core: Removing unnecessary pydantic core schema rebuilds (#30848)
We only need to rebuild model schemas if type annotation information
isn't available during declaration - that shouldn't be the case for
these types corrected here.

Need to do more thorough testing to make sure these structures have
complete schemas, but hopefully this boosts startup / import time.
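For context, a hedged illustration of the only case where a rebuild is genuinely required in Pydantic v2: a forward reference that cannot be resolved at class-definition time.

```python
from pydantic import BaseModel

class Pair(BaseModel):
    left: "Item"  # "Item" is undefined here, so the schema build is deferred
    right: "Item"

class Item(BaseModel):
    value: int

Pair.model_rebuild()  # now resolvable; unnecessary when all types are known up front
```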
2025-04-16 12:00:08 -04:00
rrozanski-smabbler
60d8ade078 Galaxia integration (#30792)
- [ ] **PR title**: "docs: adding Smabbler's Galaxia integration"

- [ ] **PR message**:  **Twitter handle:** @Galaxia_graph

I'm adding docs here + added the package to the packages.yml. I didn't
add a unit test, because this integration is just a thin wrapper on top
of our API. There isn't much left to test if you mock it away.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-16 10:39:04 -04:00
ccurme
ca39680d2a ollama: release 0.3.2 (#30865) 2025-04-16 09:14:57 -04:00
Sydney Runkle
4af3f89a3a docs: enforce newlines when signature exceeds char threshold (#30866)
Below is an example of the single line vs new multiline approach.

Before this PR:

<img width="831" alt="Screenshot 2025-04-15 at 8 56 26 PM"
src="https://github.com/user-attachments/assets/0c0277bd-2441-4b22-a536-e16984fd91b7"
/>

After this PR:

<img width="829" alt="Screenshot 2025-04-15 at 8 56 13 PM"
src="https://github.com/user-attachments/assets/e16bfe38-bb17-48ba-a642-e8ff6b48e841"
/>
2025-04-16 08:45:40 -04:00
milosz-l
4ff576e37d langchain: infer Perplexity provider for sonar model prefix (#30861)
**Description:** This PR adds provider inference logic to
`init_chat_model` for Perplexity models that use the "sonar..." prefix
(`sonar`, `sonar-pro`, `sonar-reasoning`, `sonar-reasoning-pro` or
`sonar-deep-research`).

This allows users to initialize these models by simply passing the model
name, without needing to explicitly set `model_provider="perplexity"`.

The docstring for `init_chat_model` has also been updated to reflect
this new inference rule.
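Illustrative usage under the new rule:

```python
from langchain.chat_models import init_chat_model

# The "sonar" prefix is now enough to infer model_provider="perplexity".
llm = init_chat_model("sonar-pro")
```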
2025-04-15 18:17:21 -04:00
ccurme
085baef926 ollama[patch]: support standard image format (#30864)
Following https://github.com/langchain-ai/langchain/pull/30746
2025-04-15 22:14:50 +00:00
ccurme
47ded80b64 ollama[patch]: fix generation info (#30863)
https://github.com/langchain-ai/langchain/pull/30778 (not released)
broke all invocation modes of ChatOllama (intent was to remove
`"message"` from `generation_info`, but we turned `generation_info` into
`stream_resp["message"]`), resulting in validation errors.
2025-04-15 19:22:58 +00:00
Sydney Runkle
cf2697ec53 chroma: release 0.2.3 (#30860) 2025-04-15 14:11:23 -04:00
ccurme
8e9569cbc8 perplexity: release 0.1.1 (#30859) 2025-04-15 18:02:15 +00:00
ccurme
dd5f5902e3 openai: release 0.3.13 (#30858) 2025-04-15 17:58:12 +00:00
ccurme
3382ee8f57 anthropic: release 0.3.11 (#30857) 2025-04-15 17:57:00 +00:00
Sydney Runkle
ef5aff3b6c core[fix]: Fix __dir__ in __init__.py for output_parsers module (#30856)
We have a `list.py` file which causes a namespace conflict with `list`
from stdlib, unfortunately.

`__all__` is already a list, so no need to coerce.
2025-04-15 13:09:13 -04:00
Christophe Bornet
a4ca1fe0ed core: Remove some noqa (#30855) 2025-04-15 13:08:40 -04:00
ccurme
6baf5c05a6 standard-tests: release 0.3.18 (#30854) 2025-04-15 16:56:54 +00:00
ccurme
c6a8663afb infra: run old standard-tests on core releases (#30852)
On core releases, we check out the latest published package for
langchain-openai and langchain-anthropic and run their tests against the
candidate version of langchain-core.

Because these packages have a local install of langchain-tests, we also
need to check out the previous version of langchain-tests.
2025-04-15 16:04:08 +00:00
Sydney Runkle
1f5e207379 core[fix]: remove load from dynamic imports dict (#30849) 2025-04-15 12:02:46 -04:00
ccurme
7240458619 core: release 0.3.52 (#30850) 2025-04-15 15:28:31 +00:00
Sydney Runkle
6aa5494a75 Fix from langchain_core.load.load import load import (#30843)
TL;DR: you can't optimize imports with a lazy `__getattr__` if there is
a namespace conflict with a module name and an attribute name. We should
avoid introducing conflicts like this in the future.

This PR fixes a bug introduced by my lazy imports PR:
https://github.com/langchain-ai/langchain/pull/30769.

In `langchain_core`, we have utilities for loading and dumping data.
Unfortunately, one of those utilities is a `load` function, located in
`langchain_core/load/load.py`. To make this function more visible, we
make it accessible at the top level `langchain_core.load` module via
importing the function in `langchain_core/load/__init__.py`.

So, either of these imports should work:

```py
from langchain_core.load import load
from langchain_core.load.load import load
```

As you can tell, this is already a bit confusing. You'd think that the
first import would produce the module `load`, but because of the
`__init__.py` shortcut, both produce the function `load`.

<details> More on why the lazy imports PR broke this support...

All was well, except when the absolute import was run first, see the
last snippet:

```
>>> from langchain_core.load import load
>>> load
<function load at 0x101c320c0>
```

```
>>> from langchain_core.load.load import load
>>> load
<function load at 0x1069360c0>
```

```
>>> from langchain_core.load import load
>>> load
<function load at 0x10692e0c0>
>>> from langchain_core.load.load import load
>>> load
<function load at 0x10692e0c0>
```

```
>>> from langchain_core.load.load import load
>>> load
<function load at 0x101e2e0c0>
>>> from langchain_core.load import load
>>> load
<module 'langchain_core.load.load' from '/Users/sydney_runkle/oss/langchain/libs/core/langchain_core/load/load.py'>
```

In this case, the function `load` wasn't stored in the globals cache for
the `langchain_core.load` module (by the lazy import logic), so Python
defers to a module import.

</details>

New `langchain` tongue twister 😜: we've created a problem for ourselves
because you have to load the load function from the load file in the
load module 😨.
2025-04-15 11:06:13 -04:00
Bagatur
7262de4217 core[patch]: dict chat prompt template support (#25674)
- Support passing dicts as templates to chat prompt template
- Support making *any* attribute on a message a runtime variable
- Significantly simpler than trying to update our existing prompt
template classes

```python
    template = ChatPromptTemplate(
        [
            {
                "role": "assistant",
                "content": [
                    {
                        "type": "text",
                        "text": "{text1}",
                        "cache_control": {"type": "ephemeral"},
                    },
                    {"type": "image_url", "image_url": {"path": "{local_image_path}"}},
                ],
                "name": "{name1}",
                "tool_calls": [
                    {
                        "name": "{tool_name1}",
                        "args": {"arg1": "{tool_arg1}"},
                        "id": "1",
                        "type": "tool_call",
                    }
                ],
            },
            {
                "role": "tool",
                "content": "{tool_content2}",
                "tool_call_id": "1",
                "name": "{tool_name1}",
            },
        ]
    )

```

will likely close #25514 if we like this idea and update to use this
logic
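A hypothetical invocation of the template above, filling each runtime variable:

```python
prompt_value = template.invoke(
    {
        "text1": "cached system text",
        "local_image_path": "/tmp/example.png",
        "name1": "assistant-1",
        "tool_name1": "search",
        "tool_arg1": "weather in SF",
        "tool_content2": "sunny",
    }
)
```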

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-15 11:00:49 -04:00
ccurme
9cfe6bcacd multiple: multi-modal content blocks (#30746)
Introduces standard content block format for images, audio, and files.

## Examples

Image from url:
```
{
    "type": "image",
    "source_type": "url",
    "url": "https://path.to.image.png",
}
```


Image, in-line data:
```
{
    "type": "image",
    "source_type": "base64",
    "data": "<base64 string>",
    "mime_type": "image/png",
}
```


PDF, in-line data:
```
{
    "type": "file",
    "source_type": "base64",
    "data": "<base64 string>",
    "mime_type": "application/pdf",
}
```


File from ID:
```
{
    "type": "file",
    "source_type": "id",
    "id": "file-abc123",
}
```


Plain-text file:
```
{
    "type": "file",
    "source_type": "text",
    "text": "foo bar",
}
```
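A hedged sketch of how such a block rides inside a message (assumes a chat model with image support):

```python
from langchain_core.messages import HumanMessage

message = HumanMessage(
    content=[
        {"type": "text", "text": "Describe this image."},
        {
            "type": "image",
            "source_type": "url",
            "url": "https://path.to.image.png",
        },
    ]
)
```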
2025-04-15 09:48:06 -04:00
湛露先生
09438857e8 docs: fix tools_human.ipynb url 404. (#30831)
Fix the 404 pages.

Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-04-15 09:22:13 -04:00
Sydney Runkle
e3b6cddd5e core: codspeed tweak to make sure it runs on master (#30845) 2025-04-15 13:03:44 +00:00
Sydney Runkle
59f2c9e737 Tinkering with CodSpeed (#30824)
Fix CI to trigger benchmarks on `run-codspeed-benchmarks` label addition

Reduce scope of async benchmark to save time on CI

Waiting to merge this PR until we figure out how to use walltime on
local runners.
2025-04-15 08:49:09 -04:00
William FH
ed5c4805f6 Consistent docstring indentation (#30834)
Should be 4 spaces instead of 3.
2025-04-14 19:04:35 -07:00
Joey Constantino
2282762528 docs: small Tableau docs update (#30827)
Description: small Tableau docs update
Issue: adds required environment variable
Dependencies: tableau-langchain

---------

Co-authored-by: Joe Constantino <joe.constantino@joecons-ltm6v86.internal.salesforce.com>
2025-04-14 15:34:54 -04:00
ccurme
f7c4965fb6 openai[patch]: update imports in test (#30828)
Quick fix to unblock CI, will need to address in core separately.
2025-04-14 19:33:38 +00:00
Sydney Runkle
edb6a23aea core[lint]: fix issue with unused ignore in __init__.py files (#30825)
Fixing a race condition between
https://github.com/langchain-ai/langchain/pull/30769 and
https://github.com/langchain-ai/langchain/pull/30737
2025-04-14 17:57:00 +00:00
湛露先生
3a64c7195f community: redis tool typos fix (#30811) 2025-04-14 09:01:36 -04:00
Sydney Runkle
4f69094b51 core[performance]: use custom __getattr__ in __init__.py files for lazy imports (#30769)
Most easily reviewed with the "hide whitespace" option toggled.

Seeing 10-50% speed ups in import time for common structures 🚀 

The general purpose of this PR is to lazily import structures within
`langchain_core.XXX_module.__init__.py` so that we're not eagerly
importing expensive dependencies (`pydantic`, `requests`, etc).
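A minimal sketch of the lazy-import shape (attribute and module names illustrative): the `TYPE_CHECKING` block keeps static analysis accurate while the real import is deferred to first attribute access.

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from langchain_core.messages import HumanMessage

def __getattr__(name: str):
    if name == "HumanMessage":
        from langchain_core.messages import HumanMessage

        return HumanMessage
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
```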

Analysis of flamegraphs generated with `importtime` motivated these
changes. For example, the one below demonstrates that importing
`HumanMessage` accidentally triggered imports for `importlib.metadata`,
`requests`, etc.

There's still much more to do on this front, and we can start digging
into our own internal code for optimizations now that we're less
concerned about external imports.

<img width="1210" alt="Screenshot 2025-04-11 at 1 10 54 PM"
src="https://github.com/user-attachments/assets/112a3fe7-24a9-4294-92c1-d5ae64df839e"
/>

I've tracked the improvements with some local benchmarks:

## `pytest-benchmark` results

| Name | Before (s) | After (s) | Delta (s) | % Change |
|-----------------------------|------------|-----------|-----------|----------|
| Document | 2.8683 | 1.2775 | -1.5908 | -55.46% |
| HumanMessage | 2.2358 | 1.1673 | -1.0685 | -47.79% |
| ChatPromptTemplate | 5.5235 | 2.9709 | -2.5526 | -46.22% |
| Runnable | 2.9423 | 1.7793 | -1.163 | -39.53% |
| InMemoryVectorStore | 3.1180 | 1.8417 | -1.2763 | -40.93% |
| RunnableLambda | 2.7385 | 1.8745 | -0.864 | -31.55% |
| tool | 5.1231 | 4.0771 | -1.046 | -20.42% |
| CallbackManager | 4.2263 | 3.4099 | -0.8164 | -19.32% |
| LangChainTracer | 3.8394 | 3.3101 | -0.5293 | -13.79% |
| BaseChatModel | 4.3317 | 3.8806 | -0.4511 | -10.41% |
| PydanticOutputParser | 3.2036 | 3.2995 | 0.0959 | 2.99% |
| InMemoryRateLimiter | 0.5311 | 0.5995 | 0.0684 | 12.88% |

Note the lack of change for `InMemoryRateLimiter` and
`PydanticOutputParser` is just random noise, I'm getting comparable
numbers locally.

## Local CodSpeed results

We're still working on configuring CodSpeed on CI. The local usage
produced similar results.
2025-04-14 08:57:54 -04:00
Christophe Bornet
ada740b5b9 community: Add ruff rule PGH003 (#30812)
See https://docs.astral.sh/ruff/rules/blanket-type-ignore/

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-14 02:32:13 +00:00
ccurme
f005988e31 community[patch]: fix cost calculations for o3 in OpenAI callback (#30807)
Resolves https://github.com/langchain-ai/langchain/issues/30795
2025-04-13 15:20:46 +00:00
BoyuHu
446361a0d3 docs: fix typo (#30800)
2025-04-13 10:55:30 -04:00
Marina Gómez
afd457d8e1 perplexity[patch]: Fix #30767: Handle missing citations attribute in ChatPerplexity (#30805)
This PR fixes an issue where ChatPerplexity would raise an
AttributeError when the citations attribute was missing from the model
response (e.g., when using offline models like r1-1776).

The fix checks for the presence of citations, images, and
related_questions before attempting to access them, avoiding crashes in
models that don't provide these fields.
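The shape of the fix, as a hedged sketch (not the exact diff):

```python
class _Response:  # stand-in for a model response lacking optional fields
    pass

response = _Response()
additional_kwargs = {}
for attr in ("citations", "images", "related_questions"):
    # Only read optional fields that the response actually provides.
    if hasattr(response, attr):
        additional_kwargs[attr] = getattr(response, attr)
```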

Tested locally with models that omit citations, and the fix works as
expected.
2025-04-13 09:24:05 -04:00
Christophe Bornet
42944f3499 core: Improve mypy config (#30737)
* Cleanup mypy config
* Add mypy `strict` rules except `disallow_any_generics`,
`warn_return_any` and `strict_equality` (TODO)
* Add mypy `strict_byte` rule
* Add mypy support for PEP702 `@deprecated` decorator
* Bump mypy version to 1.15

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-04-11 16:35:13 -04:00
mpb159753
bb2c2fd885 docs: Add openGauss vector store documentation (#30742)
Hey LangChain community! 👋 Excited to propose official documentation for
our new openGauss integration that brings powerful vector capabilities
to the stack!

### What's Inside 📦  
1. **Full Integration Guide**  
Introducing
[langchain-opengauss](https://pypi.org/project/langchain-opengauss/) on
PyPI - your new toolkit for:
   🔍 Native hybrid search (vectors + metadata)  
   🚀 Production-grade connection pooling  
   🧩 Automatic schema management  

2. **Rigorous Testing Passed**   
![Benchmark
Results](https://github.com/user-attachments/assets/ae3b21f7-aeea-4ae7-a142-f2aec57936a0)
   - 100% non-async test coverage  

P.S. The current implementation resides in my personal repository:
https://github.com/mpb159753/langchain-opengauss. How can I transfer it
to the langchain-ai org? *Keen to hear your thoughts and make this
integration shine!*

---------

Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-04-11 20:31:39 +00:00
Christophe Bornet
913c896598 core: Add ruff rules FBT001 and FBT002 (#30695)
Add ruff rules
[FBT001](https://docs.astral.sh/ruff/rules/boolean-type-hint-positional-argument/)
and
[FBT002](https://docs.astral.sh/ruff/rules/boolean-default-value-positional-argument/).
Mostly `noqa`s to not introduce breaking changes and possible
non-breaking fixes have already been done in a [previous
PR](https://github.com/langchain-ai/langchain/pull/29424).
These rules will prevent new violations to happen.
2025-04-11 16:26:33 -04:00
William FH
2803a48661 core[patch]: Share executor for async callbacks run in sync context (#30779)
To avoid having to create ephemeral threads, grab the thread lock, etc.
2025-04-11 10:34:43 -07:00
Sydney Runkle
fdc2b4bcac core[lint]: Use 3.9 formatting for docs and tests (#30780)
Looks like `pyupgrade` was already used here but missed some docs and
tests.

This helps to keep our docs looking professional and up to date.
Eventually, we should lint / format our inline docs.
2025-04-11 10:39:25 -04:00
Sydney Runkle
48affc498b langchain[lint]: use pyupgrade to get to 3.9 standards (#30782) 2025-04-11 10:33:26 -04:00
ccurme
d9b628e764 xai: release 0.2.3 (#30790) 2025-04-11 14:05:11 +00:00
ccurme
9cfb95e621 xai[patch]: support reasoning content (#30758)
https://docs.x.ai/docs/guides/reasoning

```python
from langchain.chat_models import init_chat_model

llm = init_chat_model(
    "xai:grok-3-mini-beta",
    reasoning_effort="low"
)
response = llm.invoke("Hello, world!")
```
2025-04-11 14:00:27 +00:00
Christophe Bornet
89f28a24d3 core[lint]: Fix typing in test_async_callbacks (#30788) 2025-04-11 07:26:38 -04:00
Sydney Runkle
8c6734325b partners[lint]: run pyupgrade to get code in line with 3.9 standards (#30781)
Using `pyupgrade` to get all `partners` code up to 3.9 standards
(mostly, fixing old `typing` imports).
2025-04-11 07:18:44 -04:00
Jacob Lee
e72f3c26a0 fix(ollama): Remove redundant message from response_metadata (#30778) 2025-04-10 23:12:57 -07:00
Jannik Maierhöfer
f3c3ec9aec docs: add langfuse integration to provider list (#30573)
This PR adds the Langfuse integration to the provider list.
2025-04-10 22:25:42 -04:00
Christophe Bornet
dc19d42d37 core: Specify code when ignoring type issue (ruff PGH003) (#30675)
See https://docs.astral.sh/ruff/rules/blanket-type-ignore/
2025-04-10 22:23:52 -04:00
Paul Czarkowski
68d16d8a07 Community: Add Managed Identity support for Azure AI Search (#30730)
Add Managed Identity support for Azure AI Search

---------

Signed-off-by: Paul Czarkowski <username.taken@gmail.com>
2025-04-10 22:22:58 -04:00
CtrlMj
5103594a2c replace the deprecated initialize_agent in playwright.ipynb with create_react_agent (#30734)
**Description:** Replaced the example using the deprecated
`initialize_agent` function with `create_react_agent` from
`langgraph.prebuilt`.

**Issue:** #29277 

**Dependencies:** N/A
**Twitter handle:** N/A
2025-04-10 22:12:12 -04:00
Eugene Yurtsev
e42b3d285a langchain: remove langchain-server script (#30755)
Has been replaced by langsmith a long long time ago
2025-04-10 22:11:42 -04:00
Pol de Font-Réaulx
48cf7c838d feat(community): add oauth2 support for Jira toolkit (#30684)
**Description:** add support for oauth2 in Jira tool by adding the
possibility to pass a dictionary with oauth parameters. I also adapted
the documentation to show this new behavior
2025-04-10 22:04:09 -04:00
Oleg Ovcharuk
b6fe7e8c10 docs: YDB Vector Store docs (#30636)
This PR adds docs about how to use YDB as a vector store

[YDB](https://ydb.tech/) is a versatile open-source distributed SQL
database. It supports [vector
search](https://ydb.tech/docs/en/yql/reference/udf/list/knn) which means
it can be used as a vector store with langchain.

YDB vectore store comes with
[langchain-ydb](https://pypi.org/project/langchain-ydb/) pypi package.

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-04-10 21:33:56 -04:00
湛露先生
7a4ae6fbff community[patch]: simplify cache logic (#30760)
Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-04-10 19:20:57 -04:00
ccurme
8e053ac9d2 core[patch]: support customization of backoff parameters in with_retries (#30773)
Co-authored-by: Sydney Runkle <54324534+sydney-runkle@users.noreply.github.com>
2025-04-10 19:18:36 -04:00
amohan
e981a9810d docs: update links in cloudflare docs (#30776)
Thanks for reviewing again. I was notified that some
[links](https://python.langchain.com/api_reference/cloudflare/) in the
default integrations were incorrect, so I'm updating them in this PR.
2025-04-10 19:08:18 -04:00
William FH
70532a65f8 Async callback benchmark (#30777) 2025-04-10 15:47:19 -07:00
Sydney Runkle
c6172d167a Only run CodSpeed benchmarks with run-codspeed-benchmarks label (#30774) 2025-04-10 15:48:14 -04:00
amohan
f70df01e01 docs: Update ordering of cloudflare integration examples in providers page (#30768)
Updated the ordering of cloudflare integrations and updated import
examples. Follow up from
https://github.com/langchain-ai/langchain/pull/30749
2025-04-10 15:34:58 -04:00
Sydney Runkle
8f8fea2d7e [performance]: Use hard coded langchain-core version to avoid importlib import (#30744)
This PR aims to reduce import time of `langchain-core` tools by removing
the `importlib.metadata` import previously used in `__init__.py`. This
is the first in a sequence of PRs to reduce import time delays for
`langchain-core` features and structures 🚀.

Because we're now hard coding the version, we need to make sure
`version.py` and `pyproject.toml` stay in sync, so I've added a new CI
job that runs whenever either of those files are modified. [This
run](https://github.com/langchain-ai/langchain/actions/runs/14358012706/job/40251952044?pr=30744)
demonstrates the failure that occurs whenever the version gets out of
sync (thus blocking a PR).
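The gist of the change, as an illustrative sketch (version number hypothetical):

```python
# langchain_core/version.py
# A plain constant replaces the importlib.metadata lookup; the new CI job
# fails any PR where this drifts from pyproject.toml.
VERSION = "0.3.52"
```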

Before, note the ~15% of time spent on `importlib.metadata` and related
imports:

<img width="1081" alt="Screenshot 2025-04-09 at 9 06 15 AM"
src="https://github.com/user-attachments/assets/59f405ec-ee8d-4473-89ff-45dea5befa31"
/>

After (note the lack of the `importlib.metadata` time sink):

<img width="1245" alt="Screenshot 2025-04-09 at 9 01 23 AM"
src="https://github.com/user-attachments/assets/9c32e77c-27ce-485e-9b88-e365193ed58d"
/>
2025-04-10 14:15:02 -04:00
Sydney Runkle
cd6a83117c Adding more import time benchmarks for langchain-core (#30770)
Plus minor typo fix in `ChatPromptTemplate` case id.
2025-04-10 11:50:12 -04:00
Chamath K.B. Attanayaka
6c45c9efc3 docs: update clickhouse version in notebook example (#30754)
update clickhouse docker version tag in notebook example to avoid
compatibility issues with clickhouse-connect.
2025-04-10 09:51:54 -04:00
amohan
44b83460b2 docs: Add Cloudflare integrations (#30749)
Description:
This PR adds documentation for the langchain-cloudflare integration
package.

Issue:
N/A

Dependencies:
No new dependencies are required.

Tests and Docs:

Added an example notebook demonstrating the usage of the
langchain-cloudflare package, located in docs/docs/integrations.
Added a new package to libs/packages.yml.

Lint and Format:

Successfully ran make format and make lint.

---------

Co-authored-by: Collier King <collier@cloudflare.com>
Co-authored-by: Collier King <collierking99@gmail.com>
2025-04-10 09:27:23 -04:00
湛露先生
c87a270e5f cookbook: Fix docs typos. (#30763)
Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-04-10 09:13:24 -04:00
ccurme
63c16f5ca8 community: deprecate AzureCosmosDBNoSqlVectorSearch in favor of langchain-azure-ai implementation (#30756) 2025-04-09 21:04:16 +00:00
Christophe Bornet
4cc7bc6c93 core: Add ruff rules PLR (#30696)
Add ruff rules [PLR](https://docs.astral.sh/ruff/rules/#refactor-plr)
Except PLR09xxx and PLR2004.

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-04-09 15:15:38 -04:00
célina
68361f9c2d partners: (langchain-huggingface) Embeddings - Integrate Inference Providers and remove deprecated code (#30735)
Hi there! This is a complementary PR to #30733.
This PR introduces support for Hugging Face's serverless Inference
Providers (documentation
[here](https://huggingface.co/docs/inference-providers/index)), allowing
users to specify different providers.

This PR also removes the usage of `InferenceClient.post()` method in
`HuggingFaceEndpointEmbeddings`, in favor of the task-specific
`feature_extraction` method. `InferenceClient.post()` is deprecated and
will be removed in `huggingface_hub` v0.31.0.

## Changes made

- bumped the minimum required version of the `huggingface_hub` package
to ensure compatibility with the latest API usage.
- added a provider field to `HuggingFaceEndpointEmbeddings`, enabling
users to select the inference provider.
- replaced the deprecated `InferenceClient.post()` call in
`HuggingFaceEndpointEmbeddings` with the task-specific
`feature_extraction` method for future-proofing, `post()` will be
removed in `huggingface-hub` v0.31.0.

 All changes are backward compatible.
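A hedged usage sketch of the new field (model and provider values illustrative):

```python
from langchain_huggingface import HuggingFaceEndpointEmbeddings

embeddings = HuggingFaceEndpointEmbeddings(
    model="sentence-transformers/all-MiniLM-L6-v2",
    provider="hf-inference",  # select a serverless Inference Provider
)
vector = embeddings.embed_query("hello world")
```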

---------

Co-authored-by: Lucain <lucainp@gmail.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
2025-04-09 19:05:43 +00:00
Christophe Bornet
98f0016fc2 core: Add ruff rules ARG (#30732)
See https://docs.astral.sh/ruff/rules/#flake8-unused-arguments-arg
2025-04-09 14:39:36 -04:00
Sydney Runkle
66758599a9 [ci]: Quick codspeed.yml tweaks to enable comparisons with master (#30752)
* Only run codspeed logic when `libs/core` is changed (for now, we'll
want to add other benchmarks later
* Also run on `master` so that we can get a reference :)
2025-04-09 13:13:49 -04:00
theosaurus
d47d6ecbc3 docs: Fix typo in get_separators_for_language method section (#30748)
2025-04-09 13:03:01 -04:00
Sydney Runkle
78ec7d886d [performance]: Adding benchmarks for common langchain-core imports (#30747)
The first in a sequence of PRs focusing on improving performance in
core. We're starting with reducing import times for common structures,
hence the benchmarks here.

The benchmark looks a little bit complicated - we have to use a process
so that we don't suffer from Python's import caching system. I tried
doing manual modification of `sys.modules` between runs, but that's
pretty tricky / hacky to get right, hence the subprocess approach.

Motivated by extremely slow baseline for common imports (we're talking
2-5 seconds):

<img width="633" alt="Screenshot 2025-04-09 at 12 48 12 PM"
src="https://github.com/user-attachments/assets/994616fe-1798-404d-bcbe-48ad0eb8a9a0"
/>

Also added a `make benchmark` command to make local runs easy :).
Currently using walltimes so that we can track total time despite using
a manual process.
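A rough sketch of the subprocess approach (simplified relative to the actual benchmark):

```python
import subprocess
import time

def time_import(stmt: str) -> float:
    # Fresh interpreter per run, so Python's import cache can't skew results.
    start = time.perf_counter()
    subprocess.run(["python", "-c", stmt], check=True)
    return time.perf_counter() - start

print(time_import("from langchain_core.messages import HumanMessage"))
```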
2025-04-09 13:00:15 -04:00
German Molina
5fb261ce27 community: Google Vertex AI Search now returns the website title as part of the document metadata (#30688)
Google Vertex AI Search will now return the title of the found website
as part of the document metadata, if available.

Thank you for contributing to LangChain!

- **Description**: Vertex AI Search can be used to index websites and
then develop chatbots that use these websites to answer questions. At
present, the document metadata includes an `id` and `source` (which is
the URL). While the URL is enough to create a link, the ID is not
descriptive enough to show users. Therefore, I propose we return `title`
as well, when available (e.g., it will not be available in `.txt`
documents found during the website indexing).
- **Issue**: No bug in particular, but it would be better if the title were included.
- **Dependencies**: None
- I do not use twitter.

Format, Lint and Test seem to be all good.
2025-04-09 08:54:06 -04:00
theosaurus
636d831d27 docs: Fix typo in 'Query re-writing' section (#30736)
2025-04-09 08:50:14 -04:00
giulia_p_lib
deec538335 docs: fix small typo in map_rerank_docs_chain.ipynb (#30738)
- **Description:** fixed a minor typo in map_rerank_docs_chain.ipynb
2025-04-09 08:49:37 -04:00
Akshay Dongare
164e606cae docs: fix import path and update LiteLLM integration docs (#30685)
- [x] **PR title**: "docs: Update import path and LiteLLM integration
docs"
- Update the old import path for `ChatLiteLLM` to reflect the new export
from
[`__init__.py`](https://github.com/Akshay-Dongare/langchain-litellm/blob/main/langchain_litellm/__init__.py)
in
[`langchain-litellm`](https://github.com/Akshay-Dongare/langchain-litellm)
package

- [x] **PR message**:
    - **Description:** 
    - 🔗 **Follow-up to**: PR #30637
    - 🔧 **Fixes**: #30368
- 💬 **Based on this comment from** @ccurme:
https://github.com/langchain-ai/langchain/pull/30637#discussion_r2029084320
 


- [x] **About me**
🔗 LinkedIn:
[akshay-dongare](https://www.linkedin.com/in/akshay-dongare/)
2025-04-08 13:04:17 -04:00
Ikko Eltociear Ashimine
5686fed40b docs: update yellowbrick.ipynb (#30729)
retreival -> retrieval
2025-04-08 11:56:35 -04:00
3088 changed files with 22724 additions and 389143 deletions

View File

@@ -1,8 +1,8 @@
Thank you for contributing to LangChain!
- [ ] **PR title**: "package: description"
- Where "package" is whichever of langchain, community, core, etc. is being modified. Use "docs: ..." for purely docs changes, "infra: ..." for CI changes.
- Example: "community: add foobar LLM"
- Where "package" is whichever of langchain, core, etc. is being modified. Use "docs: ..." for purely docs changes, "infra: ..." for CI changes.
- Example: "core: add foobar LLM"
- [ ] **PR message**: ***Delete this entire checklist*** and replace with
@@ -24,6 +24,5 @@ Additional guidelines:
- Please do not add dependencies to pyproject.toml files (even optional ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in langchain.
If no one reviews your PR within a few days, please @-mention one of baskaryan, eyurtsev, ccurme, vbarda, hwchase17.

View File

@@ -16,7 +16,6 @@ LANGCHAIN_DIRS = [
"libs/core",
"libs/text-splitters",
"libs/langchain",
"libs/community",
]
# when set to True, we are ignoring core dependents
@@ -38,8 +37,8 @@ IGNORED_PARTNERS = [
]
PY_312_MAX_PACKAGES = [
"libs/partners/huggingface", # https://github.com/pytorch/pytorch/issues/130249
"libs/partners/voyageai",
"libs/partners/chroma", # https://github.com/chroma-core/chroma/issues/4382
]
@@ -134,12 +133,6 @@ def _get_configs_for_single_dir(job: str, dir_: str) -> List[Dict[str, str]]:
elif dir_ == "libs/langchain" and job == "extended-tests":
py_versions = ["3.9", "3.13"]
elif dir_ == "libs/community" and job == "extended-tests":
py_versions = ["3.9", "3.12"]
elif dir_ == "libs/community" and job == "compile-integration-tests":
# community integration deps are slow in 3.12
py_versions = ["3.9", "3.11"]
elif dir_ == ".":
# unable to install with 3.13 because tokenizers doesn't support 3.13 yet
py_versions = ["3.9", "3.12"]
@@ -184,11 +177,6 @@ def _get_pydantic_test_configs(
else "0"
)
custom_mins = {
# depends on pydantic-settings 2.4 which requires pydantic 2.7
"libs/community": 7,
}
max_pydantic_minor = min(
int(dir_max_pydantic_minor),
int(core_max_pydantic_minor),
@@ -196,7 +184,6 @@ def _get_pydantic_test_configs(
min_pydantic_minor = max(
int(dir_min_pydantic_minor),
int(core_min_pydantic_minor),
custom_mins.get(dir_, 0),
)
configs = [

View File

@@ -22,7 +22,6 @@ import re
MIN_VERSION_LIBS = [
"langchain-core",
"langchain-community",
"langchain",
"langchain-text-splitters",
"numpy",
@@ -35,7 +34,6 @@ SKIP_IF_PULL_REQUEST = [
"langchain-core",
"langchain-text-splitters",
"langchain",
"langchain-community",
]

View File

@@ -20,6 +20,8 @@ def get_target_dir(package_name: str) -> Path:
base_path = Path("langchain/libs")
if package_name_short == "experimental":
return base_path / "experimental"
if package_name_short == "community":
return base_path / "community"
return base_path / "partners" / package_name_short
@@ -69,7 +71,7 @@ def main():
clean_target_directories([
p
for p in package_yaml["packages"]
if p["repo"].startswith("langchain-ai/")
if (p["repo"].startswith("langchain-ai/") or p.get("include_in_api_ref"))
and p["repo"] != "langchain-ai/langchain"
])
@@ -78,7 +80,7 @@ def main():
p
for p in package_yaml["packages"]
if not p.get("disabled", False)
and p["repo"].startswith("langchain-ai/")
and (p["repo"].startswith("langchain-ai/") or p.get("include_in_api_ref"))
and p["repo"] != "langchain-ai/langchain"
])

View File

@@ -1,4 +1,3 @@
libs/community/langchain_community/llms/yuan2.py
"NotIn": "not in",
- `/checkin`: Check-in
docs/docs/integrations/providers/trulens.mdx

View File

@@ -34,11 +34,6 @@ jobs:
shell: bash
run: uv sync --group test --group test_integration
- name: Install deps outside pyproject
if: ${{ startsWith(inputs.working-directory, 'libs/community/') }}
shell: bash
run: VIRTUAL_ENV=.venv uv pip install "boto3<2" "google-cloud-aiplatform<2"
- name: Run integration tests
shell: bash
env:

View File

@@ -395,8 +395,11 @@ jobs:
# Checkout the latest package files
rm -rf $GITHUB_WORKSPACE/libs/partners/${{ matrix.partner }}/*
cd $GITHUB_WORKSPACE/libs/partners/${{ matrix.partner }}
git checkout "$LATEST_PACKAGE_TAG" -- .
rm -rf $GITHUB_WORKSPACE/libs/standard-tests/*
cd $GITHUB_WORKSPACE/libs/
git checkout "$LATEST_PACKAGE_TAG" -- standard-tests/
git checkout "$LATEST_PACKAGE_TAG" -- partners/${{ matrix.partner }}/
cd partners/${{ matrix.partner }}
# Print as a sanity check
echo "Version number from pyproject.toml: "

View File

@@ -30,7 +30,7 @@ jobs:
- name: Install langchain editable
run: |
VIRTUAL_ENV=.venv uv pip install langchain-experimental -e libs/core libs/langchain libs/community
VIRTUAL_ENV=.venv uv pip install langchain-experimental langchain-community -e libs/core libs/langchain
- name: Check doc imports
shell: bash

View File

@@ -26,7 +26,20 @@ jobs:
id: get-unsorted-repos
uses: mikefarah/yq@master
with:
cmd: yq '.packages[].repo' langchain/libs/packages.yml
cmd: |
yq '
.packages[]
| select(
(
(.repo | test("^langchain-ai/"))
and
(.repo != "langchain-ai/langchain")
)
or
(.include_in_api_ref // false)
)
| .repo
' langchain/libs/packages.yml
- name: Parse YAML and checkout repos
env:
@@ -38,11 +51,9 @@ jobs:
# Checkout each unique repository that is in langchain-ai org
for repo in $REPOS; do
if [[ "$repo" != "langchain-ai/langchain" && "$repo" == langchain-ai/* ]]; then
REPO_NAME=$(echo $repo | cut -d'/' -f2)
echo "Checking out $repo to $REPO_NAME"
git clone --depth 1 https://github.com/$repo.git $REPO_NAME
fi
REPO_NAME=$(echo $repo | cut -d'/' -f2)
echo "Checking out $repo to $REPO_NAME"
git clone --depth 1 https://github.com/$repo.git $REPO_NAME
done
- name: Setup python ${{ env.PYTHON_VERSION }}

View File

@@ -0,0 +1,29 @@
name: Check `langchain-core` version equality
on:
pull_request:
paths:
- 'libs/core/pyproject.toml'
- 'libs/core/langchain_core/version.py'
jobs:
check_version_equality:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Check version equality
run: |
PYPROJECT_VERSION=$(grep -Po '(?<=^version = ")[^"]*' libs/core/pyproject.toml)
VERSION_PY_VERSION=$(grep -Po '(?<=^VERSION = ")[^"]*' libs/core/langchain_core/version.py)
# Compare the two versions
if [ "$PYPROJECT_VERSION" != "$VERSION_PY_VERSION" ]; then
echo "langchain-core versions in pyproject.toml and version.py do not match!"
echo "pyproject.toml version: $PYPROJECT_VERSION"
echo "version.py version: $VERSION_PY_VERSION"
exit 1
else
echo "Versions match: $PYPROJECT_VERSION"
fi

.github/workflows/codspeed.yml (new file, 44 lines)
View File

@@ -0,0 +1,44 @@
name: CodSpeed
on:
push:
branches:
- master
pull_request:
paths:
- 'libs/core/**'
# `workflow_dispatch` allows CodSpeed to trigger backtest
# performance analysis in order to generate initial data.
workflow_dispatch:
jobs:
codspeed:
name: Run benchmarks
if: (github.event_name == 'pull_request' && contains(github.event.pull_request.labels.*.name, 'run-codspeed-benchmarks')) || github.event_name == 'workflow_dispatch' || github.event_name == 'push'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
# We have to use 3.12, 3.13 is not yet supported
- name: Install uv
uses: astral-sh/setup-uv@v5
with:
python-version: "3.12"
# Using this action is still necessary for CodSpeed to work
- uses: actions/setup-python@v3
with:
python-version: "3.12"
- name: install deps
run: uv sync --group test
working-directory: ./libs/core
- name: Run benchmarks
uses: CodSpeedHQ/action@v3
with:
token: ${{ secrets.CODSPEED_TOKEN }}
run: |
cd libs/core
uv run --no-sync pytest ./tests/benchmarks --codspeed
mode: walltime

View File

@@ -61,6 +61,7 @@ jobs:
env:
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}

.gitignore (1 addition)
View File

@@ -59,6 +59,7 @@ coverage.xml
*.py,cover
.hypothesis/
.pytest_cache/
.codspeed/
# Translations
*.mo

View File

@@ -7,12 +7,6 @@ repos:
entry: make -C libs/core format
files: ^libs/core/
pass_filenames: false
- id: community
name: format community
language: system
entry: make -C libs/community format
files: ^libs/community/
pass_filenames: false
- id: langchain
name: format langchain
language: system

View File

@@ -17,6 +17,7 @@
[![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode&style=flat-square)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)
[<img src="https://github.com/codespaces/badge.svg" title="Open in Github Codespace" width="150" height="20">](https://codespaces.new/langchain-ai/langchain)
[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchainai.svg?style=social&label=Follow%20%40LangChainAI)](https://twitter.com/langchainai)
[![CodSpeed Badge](https://img.shields.io/endpoint?url=https://codspeed.io/badge.json)](https://codspeed.io/langchain-ai/langchain)
> [!NOTE]
> Looking for the JS/TS library? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs).

View File

@@ -30,7 +30,7 @@
"outputs": [],
"source": [
"# lock to 0.10.19 due to a persistent bug in more recent versions\n",
"! pip install \"unstructured[all-docs]==0.10.19\" pillow pydantic lxml pillow matplotlib tiktoken open_clip_torch torch"
"! pip install \"unstructured[all-docs]==0.10.19\" pillow pydantic lxml matplotlib tiktoken open_clip_torch torch"
]
},
{
@@ -409,7 +409,7 @@
" table_summaries,\n",
" tables,\n",
" image_summaries,\n",
" image_summaries,\n",
" img_base64_list,\n",
")"
]
},

View File

@@ -11,6 +11,7 @@
import json
import os
import sys
from datetime import datetime
from pathlib import Path
import toml
@@ -104,7 +105,7 @@ def skip_private_members(app, what, name, obj, skip, options):
# -- Project information -----------------------------------------------------
project = "🦜🔗 LangChain"
copyright = "2023, LangChain Inc"
copyright = f"{datetime.now().year}, LangChain Inc"
author = "LangChain, Inc"
html_favicon = "_static/img/brand/favicon.png"
@@ -275,3 +276,7 @@ if os.environ.get("READTHEDOCS", "") == "True":
html_context["READTHEDOCS"] = True
master_doc = "index"
# If a signature's length in characters exceeds 60,
# each parameter within the signature will be displayed on an individual logical line
maximum_signature_line_length = 60

eNptVH1sE2UY75gYHBWDJkSUj0sFEsJuvet1W7uY8VHYJNjuqwwWxfn27l176/Xe497rXEeIMAko4JIjmdH9gVG6dnR1WzcMHwKOMQgMppglyIiI0xkDG5KNhCBC5rtPWeByl9y9z/v8fs/z+z3vVUcroIpFJCfFRVmDKuA18oH16qgKtwYh1nZFAlDzISGcn1fkPhRUxd6lPk1TcJbZDBQxDciaT0WKyKfxKGCuYM0BiDHwQhz2ICHUu2+bKQAqSzXkhzI2ZbGMxZpqmtxiynpnm0lFEjRlmYIYqqZUE49IEbJGFtxQkqgApABVTpIp4EFBjfJAoGLT9i0EAwlQItt4CQQFSHO0D4j+IG0hBAzHZBIoDQYU0o8WVAk+k8Zsj/ogEEizvxrmhn0Ia3riqQaaAc9DRaOhzCNBlL16m7dKVFIpAZZJQIOpVBXWhBgpUoZjOukxP4QKDSSxArZV0lgDoiyR7mhNDEBSsH7YlecuzV1fvM4VGYfWW4CiSCIPRtPN5RjJ8YmeaS2kwKfDsVFlaCKXrOlHV08Wa84PEVNkikmz2tOYliepJUDqjihj8e+eDCiA9xMcesJwPTKe3PTkHoT1eifg84qmQQKV9+n1QA1kWKd1qQbl0Ub1qCP/abqJ4BRdlEtjWXInpiHjkMzr9WVAwjAxZcVUToz4ydFMBs2wR6dhQ00N0TwiFPpXTNOkghKUvZpPP8Ry9gYVYoUMMvwoQtK0IK4OE0vh5QvRien7Om/D5EDsD68l5uqnclQxlWIslBOEKEKcTrHWLIbJsqZTuU533DFB4n6mTQm3CmRcRqxaNzk7Ud4XlP1QiDmeOS+xiQNGi4J+kryXMizrcIkKWpOZU5ibU761fO2mYndG5dbj/+uCVC+Qxaox2tG83iWcPYNLFzgPDT1lAm212zJpu93C0h6LxSZYbWymVcg4VCECPUa0p7wIeSXYzJfRPOB9kB6XRo+uLXGtdq53xDfThciDNEy7gVcPy0iGkSKoEjf0GC+hoEBOgQojjhy6cHWJfsTO2W2MB3psEHJWFjL0uk2FLZMyTckQHj1CY3+DncQKlSydS6pdvG+WYexKJs/IiFCwBfXkGx+vOFj68ICrN8XTdalzV+eqWUNW60/LVXxj6ZLE2W+Ro+HhzYJW547K1v57fbj/jYyk2YN3EpfNTdnn+079cuPG0cfdfXea2+/+Xbo4m9+d/eKbbdf2L5rP7b/ePdviv7x+ATeYruyZWbywvn2g+0Qk4hy49Gh44+mLN9/fXbu3zTbvZo255oeV3juWlJN/XliRXL2waNfd65bB9I7w4WtdxaevvvJcba352Blb29uNV5vAisx3CyzPN4eq6+Y86m/13m8r2d74R1e7f9j/wLDo9Tm/pdTeN4o7EkOvzYrf2lPDtUQGlr16IP6BsOBgxxVXz70h/OBA544l7Q2scaFzlb6z7Ox7M3Pq5v17od8YvJ1e3rqye3BBdsLx45W9NVcG/6prNGzpGU6Ne02fVHXPdT8w5r00v24gP2dZ0ZkZJWtuLU/JTj6/8uUN5za/dfuEsWf4m98//tR4ccPPn51rOXZN2FywER8P+91dH77wz+xFX3SMfL5sZBuu/dL1fdKo9skGYeiInjvDYPgPFluJzw==

View File

@@ -1 +1 @@
eNqdVX1sE+cdDnSsdNqg7WhHitrerFYgyDl3vvP5S+4WO1kwIbWTeCFmouH13Wv74rt7L/fh2CaglbVoKF3bK6hp1Q/RJNhtloQAUUrDR2Gj2ibaIjZ1ItCVdm01NqXrumkIqSvda8cZieCvWbLPd+/v83l+z+92FjNQ00WkLBoRFQNqgDfwjW7tLGqw24S68VhBhkYKCUORcFt00NTE6bUpw1B1b20tUEU7UqECRDuP5NoMXcungFGL/6sSLIcZiiMhNy1ts8lQ10ES6jbvT7bZeIQzKYbNa4tCSSJkSACiC6XxJY5Mg4hDoOm2GpuGJIhtTB1qtu1bamwyEqCEHyRVg2TsTtIwtTjCdrqhQSDbvAkg6XB7MQWBgFt6aiiFdMMaW1jkAcDzEPtDhUeCqCSt0WReVGsIASYkYMBhXJoCyxBYw2kIVRJIYgYWZr2scaCqksiD0nltl46UkUorpJFT4Y3Hw6XaSdy3YlgTYVxEXag2ksNoKgRt52g7PZ4ldQOIioThISWA6ymo5fOj8w9UwKdxELLClFWYdR6bb4N0a38z4MNtC0ICjU9Z+4Emc+zh+c81UzFEGVrFYOTGdJXD6+kYO+2wuw8uCKznFN7aX4b89QXO0NByJI9wDOsVamwOHwkqSSNlDdKU+1UN6iqeDfizAnYzTH3nEOYCvv3bYmVIBsJNcyR+UPW9oXrMi3U8mjJrCIojmoFGOCiHk6A5L8N4WQ/R2BwdCVbSRG9Kw8GoBhQ9galomKO9yKdMJQ2F4eBNCT9eIhx3UyofjyEJsyrSIVmpyhrpIFtn1UGG6g/PTheJtCRQxHw5rXW8zHxPPtsj8KYgpDI9MuXJs4wYhyafmKi4qBoqpcEFkbJuDTI0O1Y5mcN+GPdKkTRFUvRUltQwFJIoixjP8m9Foro15KQo6siNBgZWFRZzkaXKnxPzLTQoY9JKua+HYT0ez7GbG82FYrCJx8VNLbTS4fxqaIesH7nRoBJigNJHsnPWpChY0w/gm06WYwDnctJ0nIlzgHYybifPO+Jx2gGcXMLlfAPrXORxlBKZKtIMUoc83kdGzpqukUG2pDM/g/043KmPEBVeMgXYZsbrUakH3UeoGpQQEA7wCZIHfAqSs/NnFetjD9c1h4LDbbjIIEJpET5zYdHKzk4+0RmX/XqCozktHNmoC4w9vSnTwWyoz6wPqmZrY0NQkNrzrhTNBc0mj9xD0i6WdrjcboeHpO2UHauUpELOphArpSI9kiMUYlvr8jFRocS0XbBvyCWNpBM1mNFQw8b2bgwzn3C2bwyCSDYaVpxNdTnUGg8505nG9nadk9nopqA7nd0cyNY1uzqCsXSek1uEaCJmpDwUZW+I4RaBkfLX+gg8sCIG3V+RDYllQ5ZE4/JSc6LxEUIZGL994Yr0Eevxfg8rUs5HtJUQhvgKZNgmGtD/MFLg9F4MjJkRBb87Bto7pM0bOsXubn0zciZBc93miJnPIEHgskoItCQ6+fWmqnW75yHD0k6SqoDDUay7PJrXS/8/q5rsIOdvATKszr7IigrSFTGRKLRBDavKGuYlZAp422uwEPwR2VoXsyY8NM+wNO9h3dDjTiQ8ZADv0blo/9sZQ6VXRRFIePAyvHU4xfhtXpZlbD5CBn43hzVWft09WigNqpJ8a9Gx+/uWVpU/t+Dv118/0XpKuUh9+/hf1rlf2jOQufv0O/CHj27Z9TK3Ynp82a6ac5P3xO6sz8daPrl0q0eZ2bpqvG9Zb2/4c6t3S9XSwbeWPdb14C/t59/b3tKbi5y9OPjum5/9AHoe6n3yhQPTH3187WB2xZnT/3iPWbuD/9OaS0v6pGO+X68dEDYOT1/Z/izqs7256qfvpid2//Ebj4wibt3f+U83vR47Uf38qOtboZkPv1xc9aGZ3dNY/KL//KnP/jn6++9bUmTG3lxly7+wNnBndfyvHTWrI7nfXP7mxS+vLJk4d5IMPO4I3NVy377lKvt+IL30nu/K937H+9Wil9/fc7TxwW2BfXd/tOa2fP/PDx26qlwLnZ5a/Xxg6oMLB3dMHZk8cdu5Teyf+/Pyi69MPrHmd0c/efr8rfS+4pKTTwHvVz/eesfA54v7Z5Zffmbr5ENndov3M6pv50zswmsn7ludH23vEsSadwrr9kbJv+32jAauXLhaPf4cvPaHty9NnH3jF+lrjTsW7/3VyVfP3N6yyrw329X33Ni/O+X+6uW7loD/iCuelaiCPzUC0dmeBy4/vvLoIW7s1MqB6ifF8/8Sv1hVouiWqgF09VwT5uu/A4Sdew==
eNqdVX1sG+UZd9bBqlWwaoBaYIWT11Ws+Ow739mNY3kojZvETZw4cbomYcy8vnt9d/F95d6zYzu0W0NbhIJGj4JoNaCtk8YlzZJAMpY2/fhjKiyhW7euZQqsqxhiMKoxaWwMjbLutePQRO1fe2X7fPc87/Px+z2/9/oKaWggSVMrRiTVhAbgTHyDrL6CAbtTEJk7hxRoiho/GGmOtg2kDGluvWiaOqpyuYAuOTUdqkBycpriStMuTgSmC//XZVgKMxjX+Oyc3GtXIEJAgMhe9XCvndNwJtW0V9nboCwTCiQA0aUl8SWupUwiDoGB7A67ockQ+6QQNOzbHnHYFY2HMn4g6CbJOD2kmTLiGvZDpgGBYq9KABnBbQURAh639PSgqCHTGl1a5BjgOIj3Q5XTeEkVrAkhJ+kOgocJGZjQQeSQyQ/jAlVYAsIaTkKok0CW0nBofq81DnRdljhQtLu6kKaOlBsizawObzQPFzsgcfeqaU0241KqQ65IFmOqErRzA+WkxjMkMoGkyhgkUga4qiG9ZJ9ebNABl8RByDJf1tD85tHFPhqyDocB1xxdEhIYnGgdBobiZScWPzdSqikp0CrURG5MVzZ+ma7AOGkaf15ZEhllVc46XEL+F0t2Q9PIkpyGg1iHqNEFgGSoCqZoDdBU5REDIh2PCHx8CG8zU6hvEFMCz/6qUJ6VfHPDApd/sq0aDGJ6rJO1huQgKDcRBlnCTbk9BM1WUVQVyxJ14baRmnKatpvy8EqbAVSUwFxsWmC/wIkpNQn54ZqbMn6yyDjuplg+nkYSZnQNQbJclTXSTrbOi4QMBSfmh4zUDAGoUq6U1jpZor4nl+nhuRTPi+kehfLlWEaKwxSXmCxv0Q2tmAYXRCrIGmBparRsWQB/GPdKkTRFUvTxDGlgKGRJkTCepd+yUpE16KEoaupGBxOLC2u6wFKldWqxhwEVTFox9/UwrM/nO3Fzp4VQjK+4qONLvRBcXA3tVtDUjQ7lEHkKjWQWvEmJt+bW4psYTccpNs7zbm+coyjo9fC+SornOZ8X4PJ98BiWu8ThKEUydc0wSQQ5fCyZWWvOoYBMUWgBhvYwXtypn5BUTk7xMJqKB7ViD8hP6AaUNcCPcQmSA5wIyfn5swrBjqbqcKhmOIqLrNG0pASfebtidSzGJWJxJRADht4hKK2bw3CjlEbZJtQZZsMUHWyoj4GOTtWdiZtJIWowqRBJb2C9tI/Fi6SdlBPrhuyoTTq/H1KUdmTobtVbDfRKb0prCkU6PaHOzS0CL+s1kGeZLY05JddBR7d0C1FJoDN1DdEYX6N4o3rLBnlTXW0r04A2sptCSo4VDaGhMZPp9gQ3bqVbWGe2TnYjhU7iFoEpBlx+Ag+shEEPlGVDYtmQ86JhFkTjJ/gSMAHn0pPST9TjY75ZlbN+IlpEGOIrUGBUMmGgSVPh3LMYmFRa4gMKtTUrtXfVJ7YmEp18jA+72+X6ZEt9hvOwm7JbtqhBqXpDrJsPYildR4aiK0mqDI6XYitLo3m99P+zqtfaycWnANmsz7/PCqqGVCmRGIpCA6vKGuZkLcXjQ9+AQzW1ZGt1hzXpY/CoxWElBXwJ6Im7yVB1cHwh2pdnxmDxjVEAMh68NGdNiEzAjqFk7H5CAYFKLx7S0ltvx1BxUFXhTMXs/f3LbaW1DH+vXXuq9R31Dnrltstjq6++0Bbea8ysyEfWrlz15PIdwaPPcc3p8Mn73mRWhI5e9f/jL6/e6/yma/s+RnjsN/uzD9ienei6ffZCZ+MdX3zW9e7YQ8d6xmInHhn89ONwbv8P5871Xylc+Xfsoe2RgbMH3+9hjUdfvLsrX9d9Z3o8dOSWTy7M7J17b2ZiTvb+7ce7Hf9pPJp/qiA8z0m/f+b1D/b2vxmYWCfsufitU3fZbN+49FPPrf1JzxOf/O7QeWJiNPLrp8O29da5uw7vX9N/e742ID6wfHfTp8/nlr3Vsece8GTVju882nfgw8CUTe6yX7u67o87H2SPw533VN87GfHvXn/s897TFT+4svv+207XT3V1CP2H/rXd9uHKevMnR5ie6brX6vwvDe8TZ063XJy95Y3U/gizy/HirvxHv1Uq9HfOvPXS2kPx/LrN3zWfuNSz96/qCx+tuvj4qs/rn9sjXv5D4WvnZybP58UPsiun3W9fEHsTp799qbNu2cHVmf9Gz15O/P2x6cgX5z67j7p14CDJbbxy9uV3T5Hge+/96OdrmI+/+uDr//zl3Y4zYLZ3fOr8zy6v2ffn21Z435/d9fVo4+j0qyfeiD989StFwpbZDlw70Mdj9v4HNFGplQ==

View File

@@ -1 +1 @@
eNptVXlsFGUUb8FYg4RUg2iIqduVxAQ725md2ZNU6LW1Qnd7LJTSYPn2m293pztX59ijFdGCRCIIg3IJVqBlF2stYhsuqTFeoCiJHNESQvxDMFYTbqMkgN9ut9IGJtnNzLz3/d7vvfd7bzpTUaSonCTm9nGihhQANfygGp0pBbXpSNVWJQWkhSW2p9bX4O/WFW54dljTZNVdXAxkziLJSAScBUpCcZQqhmGgFeN7mUcZmJ6AxCaG4x1mAakqCCHV7G7uMEMJRxI1s9ssczBiAiYFiKwkmERdCCDFXGRWJB5hq67ip+VLi8yCxCIevwjJGkFbbISmKwEJ+6magoBgdgcBr6Iis4YEGWeArfg0aXEtT4URYHF6F3Lye8KSqhn9EynvAxAijIlEKLGcGDI+DrVzcpGJRUEeaKgXExVRpiBGbwQhmQA8F0XJ0VPGJ0CWeQ6CtL24VZXEvmxihJaQ0f3m3nQ+BK6CqBmDPkyitLq4NoFrK5ooi52yUJ/ECVUDnMjjYhE8wHyScsb+2XiDDGAEgxDZvhnJ0cP9430k1dhTA6CvYQIkUGDY2AMUwc4MjH+v6KLGCchIldfeHy5rvBeOtlBWi3P/BGA1IUJjT6YNByccRpqSIKCEMYxdZBJKUoRDxrncvJYWGGwJCCWJKqUxHvR4xPkeuSHKxSo4ts7ZtiDm9NQtbK3iWgKKpC9ssQVtchNBORjK6nA6rTaCspAWnDNRHobWhL4gZqulyqpF2MjRkTaucYE34q22+ivraOgi+RoZvORzufyNXrHyRVQpu1wKFKOeejVQB1B9VX29FtNL6/yoIVIdb5VsdY3WJaXQWeubXweiMsW3Ly5nHKxVnWPClPUox5Z4I1RFqz8QtdlfXBimvazPW9nkrWTb2LL6Jb4oYnxtLQ0tiwOLrNVwHGcnZSfILG07yTjJ9NU/phgeiSEtbHRTVnKvglQZzw5amcSF1HS1swerE/1wPJUdot2++feEPaOnAivVGPKH9SITaTfVAMVkJa02E2V307SbsZuqavx95dkw/gcKc78fD6AaxOKsHBuEFAzrYgSxveUPHIGh9Ajg/qbp42ElUFyWVERkWRl9i4n60e1BVFcMjM4bISkhIHLtmbDGUGYWYu3xGAt1lg1HYwLpamdoLoB0GBzMHpEVKR0GEyIE1ei2knR/1jKmxl6cK0lQJEFSR+IEnn3EcwKH65n5z64w1eix4WIfut9BkyIIL7sUk+kG+fl4DwUJWMbp2PdgGJfLdfTBTmNQNHZxORxHJnqpaDwbyiqoh+53yELsJtW++Jg3wbHG8Cz80GIjaZvL6bJbqQAJSARJu9NJIRsZDFIOysY4DuNtyEGMkm6mLCkaoSKI97WWMIaLBBBPb54SmrLRdpzpHBMnQl5nUYMeqJDSOWCBywriJcDug0ECAhhGxKj+jFRFk7e0prr8wGJivJAInzz6rUiJkipywWCyASm4MUYv5CWdxStUQclyD1Ff2mQMuihIM1TA5gQuxslSFFGGl9MY2v+y60nv3xTgMfcoNAbCdInZzTC0eY5JACVOO25T5ovyejKdqxj6Jnf1M289kpO5JuPf3btrN/4ofknmr7qcmHIiNG/Wa28qked7u5o/HFnfe/yX7V+f2U90nO7Mv/yya97j/tk3N3716px3hpdefrZs73Tm2XcbpyaEs//Yaz4oODBw+edke9eFfaeOne0/44hdTAKpWJs+PTiU+3bo7Krt52bP4248puZNLTz07YnEjqLTTNPgTvsk/4ZNM44NfBdYu655+1XGO+M5hGbl+ehrBY+VHf/89uq1S77YHGTd7sbWK12b5g30r8kfWTvlie+PbinY5nGPFDpmlh7c//uJQrmW6Xpj2ZG6Ef9cbUV9cvBk3+01sZtlrX98dD73uievcf23pzYXmMteOLfmvbc2nHE3NP/w0yud01a3vTLYfXtax9YdT81kW3duW9e/9eKkDdUX8x7dtePqlIp9fx08NrPwx5yKlcaK0OFLy59uHrk29++SX58u6jz+8eTCfy8eXb595ZUv7rT+uej8w8mC81t+29D90JMnTTuXrXN3dd16v2rv9bKRx3fonw6t6LhUfmNyutiTc26dzvfcmZST8x/1AE4b
eNptVQtsE2UcH2DEYAwzRuWhrBQ1CL3urr1r184po91g7NE9urGhbH797mt79u6+2911faAkPEx8DOJBDAYFJxstGWM8Nl6DoSFO0eAjM5KAiAmCJoKaKDGiMfq162QLfGmbfvd//f7//+//v/XpDqRqApan9AmyjlQAdXLRjPVpFbVHkaZvTElID2O+p9bX4O+OqsL5RWFdVzR3YSFQBCtWkAwEK8RSYQdTCMNALyT/FRFl3fQEMJ84H19jlpCmgRDSzO5n15ghJpFk3ew2KwKMmIBJBTKPJZMclQJINVvMKhYRkUY1cntptcUsYR6J5EFI0Sm7laP0qBrARE/TVQQkszsIRA1ZzDqSFJIBkRJr2up6KR1GgCfpXcrL7wljTTf6J0PeDyBExCeSIeYFOWQMhJKCYjHxKCgCHVlMSU3newlcGWXLYvRGEFIoIAodKDVmaxwAiiIKEGTkhS9oWO7LpUfpCQXdLu7NZEWRWsi6MegjUEorCmsTpMKyibE6aSt9IE5pOhBkkZSMEgFBlVKy8hMTBQqAEeKEynXPSI0Z90/UwZqxuxpAX8Mkl0CFYWM3UCUHOzDxuRqVdUFCRtpTe3u4nPD/cGm7lWHI5+Akz1pChsbubDeOTrJGupqgICZOjPfoFMQ4IiDjwpTpbW0w2BaQSiLlqicoqw5P3N7RXNbuT0Za+JYGb0Roqot1yLFkqNnhrfA7RBmXUYyTdTAulmVpirHSVoKCqinn6gLN9atCEtteC+wRWlSWRXyxGri0LdHWKDK6zHK1ZWpzKFK9lLPpZa5EpVjvXeEPtge48Kr48kY26eGWN9WggI79dThYRJcnlvnaAVuDlRYoNmCfUFtfFaqqLYsVmwjkaIfAlwhBBfkb/AFeSciNarkzrq7U+RdsuExrsnmkFbCpuZYP60x95fKiCZgdLpaic7AdNFtEZ07/OGVEJIf0sNHN2Og9KtIUMkJoQ4oUUo9q63sISdHZM+ncLO3yVd7i90M9XkJYY7hcFSwm2maqBgmTjbZxJoZ107SbZUzLqv19nlwY/x2ZedBP5lALEnaWjc9DGoajcgTxvZ47zsBwZgZIfzPwycxSKK5gDVE5VEZfM1U/tkSoCu/A2NhRWA0BWUhmwxrD2WGIJeMxHkZ5PtwRk2hXkrULARSFwcGciaLiTBgCiJI0o9tmd/XnJON07CW5EjrQFM0MxSmyApAoSAKpZ/Y3t8k0o4cjxT52u4KOI4jsvDSb7QZ9aqKGiiRC40zsW25Yl8t18s5K467srsyxD03W0tBENIxN0o7drpBzsYvW+uLj2pTAG+cfI5e2QBAip4vnnAA5OUfAyTshxzB2yKAgsvFOdJwsRQESL5lmKljVKQ1Bsrb1hHHeIoF4ZvWU2BnO7iCZFpsEGYpRHjVEA16cyUErNikqEjHg98MgBQEMI2qMf0ba21JTWl3hOdJMTSQS5VPGXhlpGWuyEAymGpBKGmP0QhFHebJJVZTylFP1pS3GoMvuKqIDvMMVDLBcwM5RFaXeA+Pe/qddT2YNp4FIsHdAYyBsLzG7WdZuLjZJoKTIQdqUfbGsS2VylUMjUzoLXr8nL3umke+//3Zu8VVOY/JHfv37ydPvfDN6eN/IvAdn7kzm339/yyg/NPez4FLb4tdunKgcvfDtX9NPXnnju5PbnFtvLtyY1w0eX8Jtabz285+/nWvd/+vp4UPuVvbe4Z2rpe2HwZCwbSa9YsaFp7ovF37trbtGHZp/9ONFXXxV/1V1bWvr2R9/F+6mhMGKpjXzvnjidSYGPN8LiSNFC+faZizo+xDO35hfcLNq9o6PnM2vXt+KV3XWnbtnLzTy88+k5r7BbD8zynQm95xhhh458c59pR9ID6MZP7gj05e4l1kaoGv0q3XXN8+3fP30l0+s3Xr6XNEDby/q6rwyusH4xJM6sfSV9EUxtqXgxa5z2siN9TePH7r4Tekzl+btfHTWA5XFX54tf/+txjktc+ZOH5j6eOOGoZV/bHhzh4PNG+m6+m7L2t8vHZWe+6fg73kLX95Zt/LpLsdPg9+dsjx/Yfj9Tzcv+Hzd/ECcm7HYMWvUGb746ee/HJ16eQ0v7t3Xzt1dUNW66camkf4/78qUfFreoa3x1tmk/v8Bv8VQGQ==

View File

@@ -1 +1 @@
eNptVXlsFGUULxCPqFFT0UQ82G40IdKZndnZs4e17LaldEuPXbSFkPrtN9/sTHeuzrHd3arYVv4weI0oEpNKeu1iKQWlYgUxHqmagBojlRQV9Q/iAcYgicYL/Ha7lTYwyR4z732/93vv/d6bvmwCabqgyEvGBdlAGoAGvtGtvqyGukykG09kJGTwCjvS3BSODJuaMHsfbxiqXuZwAFUgFRXJQCChIjkStAPywHDg/6qI8jAjUYVNzYo9dgnpOogh3V62qccOFRxJNuxl9ggSRZuEbMDWqcTxT1QxDVsUAU23l9o1RUTYx9SRZn90c6ldUlgk4gcx1SAY0k0YphZVsJ9uaAhI9jIOiDp6NMsjwOKUThXdPMIrumFNLKa5D0CIMAKSocIKcszaG0sLaqmNRZwIDDSGyckoXwRrLI6QSgBRSKDM3ClrP1BVUYAgZ3d06oo8XkiGMFIqutw8lmNP4Mxlw5pswiSq6x3NKVxP2UaTHpqk9ycJ3QCCLOICESLAfDJq3n54oUEFMI5BiEKvrMzc4YmFPopujTYC2BReBAk0yFujQJM8rgMLn2umbAgSsrKB5svDFYyXwjEk7SR9ry0C1lMytEbzRX9z0WFkaCkCKhjDGqQyUFHiArJOLrmmowNyHVGpsjb9oDvQ3bRefyjlrVlvxkFAhrViMl0fqlFrYmRYpM0Etc5sp/hGgva6aKfX52PcBE1SJM6ZqDdDBt1c16lIzqivfmMqInVQzTBc19jA1DJiWCATDfUh3ddYw7W2mu4gUtvWyc1NnBxYX90AIw0BMaShQFuQbE9Wp9zORMuGuDvUBTtCAlPdqdVHNRALAX96Y7Mn2F5uw5TNhMBWJmtJ0q02JJy6DMNdUF3DC2l6bXojG6+JSeFgROl6MBzkw+2tvpYFnCnKQ1AF2h7K5aNy18S8YkQkxwzeGqYp324N6SqeF9SfwYU0TL1vBKsTHfs4WxicoaaGS8K+bSSIlWodifBmqY3y2BqBZnNSTreN9pQxTJnbbatrjIwHCmEiVxTmaxENyDqHxVkzPwhZyJtyHLFjgSuOwJHcCOD+5ujj0SRQUlV0RBRYWeNtROvcxiDqgwfm5o1QtBiQhXQ+rHUkPwvd6WQ3C02W5RPdEuVPuxghikzITRaOqJqSC4MJEZJuDTNO70TBMq/GMZwrRdAUQdGHkoSGSyEKkoDrmf8urC3dGsHlp6YudzDwpsELLuvKd4N6Z6GHhiQs41zsSzAuv9//9pWd5qEY7OL3eg4t9tLRQja0U9KnLncoQAxR+nhy3psQWGv2HnzTwXm8yIO8XhhlIeeNAhcHnSwLvBTnpwDyM2/h3SdAjJJrpqpoBqEjiHe0kbJmSyWQzG2eSoZ2Mx6cablNkKFosihsRoNKLge93KZqSFQAuw9yBASQR8Sc/qxssH19dWN94GAbsVBIRJM6937IyoouCxyXCSMNN8Yag6JisniFaigTqCVaq9utST8NGRfN0ZwPeZiojyXW4OU0j/a/7EZy+zcLRMw9Aa0DPFNpL3O5GHu5TQKVPg9uU/4t0pvJ5SrHppfsXbnt2qL8tQx/Ll58qvWo/BV189tnVh+s+KoXffTGmYbeO7ctvap4eeU9Wx/YHn+SmL53ak/F91scq1bv6Ms84Kg4dvYGbsffrUWrYjM3viBM7vT89edMokcnPt/y1Ndffvbtl+lzp3ue/Ca7r/fu/pfRVb9sGbQemtk66Ekvr1smFXe+d3ajTB49zR3evLdn6N5t9x/9edX5mek0ufnkF4+0kCf6Xy+JPTdzo3x90eMvXfjk9v7pujf6p8/eKljtJ3YnfigpevHjWFDgPhrq3z27dsV1fYPP/jucWnqmdG3//rqBd8u43/64lny3/7bQrl/f3zy1Zri8Vl1608slz5/46YPjzu8H4cDqrZ/+vaTqFSY9Bocqb/rn9qqp76ZeLe7MPHd+f3Oksqe06LHfD/946niYbrljtPivp0vu2LFn+8CKN6tKHNe0rp1cee409/szm1DL5PmK2VHnSRdfd3oq+U173S1DO40L5cd6TmVPkccvwqriDx9ePrkhxA+kNiXvbNipnO+AP9z1R++t8FD31au7dv5W1Vbx46e7JmrP3XL9wZn3tq4IhyaqBi7senZ6Zb4ny4oyzKFhP27Qf7FZXVw=
eNptVQ1sG+UZTgSjrLBS6NSq2taaC4jR+ew7++z4nGVg4thxUtdu7eaP0ej83Wf7kvvL/fgnWSjNEEwdERyjGlo1aFrX7qKQpmvSNO0SMsFGx5ZBWYGGruNHWkBQtCEhmICm++w4kKj9ZJ99973f8z7v+z7ve/2FFFRUThIrhzlRgwoDNHSjGv0FBXbrUNUezgtQS0psLhyKRA/pCje3Jalpsuq2WhmZs0gyFBnOAiTBmiKtIMloVvRf5mEJJheT2Owc34sJUFWZBFQx9/29GJCQJ1HD3FgU8rxJgCbG1Cl1oZ+YpGumGGQUFTNjisRDZKOrUMH6HjBjgsRCHj1IyBputzhwTVdiErJTNQUyAuaOM7wK+wpJyLAopH9VrM0lJVUzRlbSPMoAABECFIHEcmLCOJ7o4WSziYVxntGg2dSjauwQoijCUiqMoS4IZZzhuRTML541RhlZ5jnAFPetnaokDpdDwrWsDK/eHirGgKP4Rc0YCyEqnoA1nEVZFU2kpZqwEKMZXNUYTuRRmnCeQazycmn/9PINmQFdCAQvV8zILx4eWW4jqcbhIANCkRWQjAKSxmFGEZzU8eXPFV3UOAEahbrw1e7Km1+7K9gtJIk+x1Ygq1kRGIdLuZ9YcRpqShYHEgIxBok8kKQuDhpvVa7q6ADxjphQG/VtbYLtQtKesKea21xNO1l7QKe9nK9J294a6fCn0zIZ9m0TtutpnKymnCRNURSNkxbCgljgCdLZ2eZp1/1d7eT2docPRONbxUxdRG1xOEhAiZlM1JMOKUG2eWeseWddthv4JGe1w5+KeDPdejxhC9wnakRCoJsbiWyssQOm2r1KwBlu7KwOSgmipVt0uDypdFu2oavGhCjrKY6t7WlJSp3ZSMafSta7BMi1Q3/IE3UGAhmvnm2UMjG/GPFSlMfXmljGmXQSOFGm7SQoF1FcI0uS4aGY0JLGIZJwHVGgKqO2gT/Po0RqutqfQyKFfztTKPfPwVDTN/pen/MiwRpTPoUzmwibKchkTTbC5jCRlJsg3A7C5A9Gh+vKbqLXVOaxqMKIahyps36pHwogqYtdkB2qu2YPTBV7ANW3SB91KA4zsqRCvMzKGG7FdywODjzgPb7YdrikJBiR6ym5NaZKzZDuyaRZoLNsMpUWCLqHsnMxqIP4WPmIrEhFN4gQLqjGIZTKkfLOkhyHUKwETqLUkqcyuIJSwXMCh/JZupanl2rkHCjZJ6820NDAQXOuQJWqQUwvt1CggGRc9P0NDEXT9B+ubbQEZaeLizi10kqFy9mQNkE9ebVBGeIgoQ5nlqxxjjXm7kA3HbTLFmdIigGQpR020gFdMVd1jGaAw06TMZdtEo1ADiCUYjFlSdFwFQI0qrWsMWcWmExx9NTaSYfdiSKtMXEi4HUWRvSYVyrGoNaYZAXyEsMeBXEcMCAJ8UX9GQVv2zZPMFB3ohVfLiQ8JC++JgqipIpcPJ6PQAUVxhgCvKSzaJIqMF/nw3d42owx2k67CEBAwJIE66RteMDjHV1C+1p2ueIYLjA84p4CxvGkvRZzU5QdqzEJTK3LicpUepnsyRdjFRN/qjy7+Zc3VpTWdeh75cpjO4LSOWLt1PstN0/Lc8/95vIBzdqw74mT69f8bHzv93tv2+8+sTH0kDB+5Qf0nLjjh7dt2Dw/2/tMtbRpdQUzfn5P2Pf7Dw/ev6tPlu6Z2j3xxfQXP+29L/0JbHrz84v6xIPBz2PGxsGFR/Zubkts0c7efuEdf+utjX8ebt/F79r36/2z80rFqQNnYfv3spM/uWCZLLjNr/z19c/OUH/csjW2quOmiocefXfW3jPw/Itr/r7h8X1rIk/iey/+7tv3huVJjG10HvvUvH7dkf7Epy+c3zy7aez1mRsGfYHwupdrPqj6aoyfOX/9c3e++UT90bdnBp7/pOUI++zam/tXT/1j4c7Hzw17PmA/e/axAdPl6Zsets7vXufse3Lu9H9+u7/ytS9nqqpPT9dij/4qtOGthsFtu8P1YCqYx566+3+X7u7+RezeW7418/HoKwt/2evNvVo/4vrnu/99YHLTv1986p3rR16uOrPn8P4PP5p9b+N4Y1XTR6+ZbxnM2Y7cNf/dZy6Nf8d+4OnsQu/T74/5335j4krVPZWNDHtu5x3nmmsdPx6/df7LhZYL3EsvzGC3Xx5d/aMTDfSqgYFLFzNZ7ON1oMXxxkSDcUl+kH7pvfBXNxTrdl3FtlVzr/ahIv4fRx504Q==

View File

@@ -1 +1 @@
eNqdVWtwE9cVtovbuCmQtKGY0OmgKAmQ4JV3tbJerpzIsuWY2JaxZPzg4V7tXkkr78t7d23JxpNCMm1TG6dbm2nKQDLBD4HiBxQSHFI8GVIaaGknkISODW1DJo/OZMiESaZJhhL36uFgD/yqfkjavd95fd855+5KdEAFcZKYO8aJKlQAo+IHpO9KKLBdg0h9alSAakRih+t8/sCQpnAzD0dUVUbOoiIgcyZJhiLgTIwkFHVQRUwEqEX4v8zDtJvhoMTGZ3Mf6TYKECEQhsjo3NJtZCQcSlSNTmMjNliHDGoEGjohwD+KgRMNfu8jxkKjIvEQQzQEFWPPtkKjILGQxy/CskpYJELgRA6jkKpAIBidIcAjWGhUoSDjKlRNwbakicRvJInPhFXjcsphSBPTRWLjb/46u40iEFKnYai2ZlPBABYiRuHkDMZYCdWFqZowQAYKtsPEoZQPWcF8KCoH00+8xIB579nYOFtODBt7enB5mF9OgSxO7SYSl5lFSsEoZFSM7NnWk4hAwOIQzwxHJKTqE4uJnwQMAzEnUGQkFnvXx8NdnFxoYGGIBypMYrZFmC5TT7ZBKBOA5zrgaMZKPwxkmecy4YuiSBLHsuoQqURuPU6m9CCwlKKqH/PhJNxVRXVx3CGigTJZKRN1OEYgFXAijxUneIDzGZXT568uPJAB04adENnu00czxhMLMRLSR2oA4/MvcgkUJqKPAEWwWo4ufK9oosoJUE946m4Nlz28GY42UWaT/cgixyguMvpIupGOLzKGqhInGAn70F8gJ+b54aEYViP6EE05DioQybjf4ZOj2EzV0K5hrAU8dyaR7fsDvsfnRfxnTsFwOdZFPxmIaIUG0mqoAYrBTJqLDZTVSdNOC22orAmMebJhAreV4UhAASIKYSkq5mVPMBFNbINs0nNbwU+mBMfVpNLHo0XAmCwhSGSz0seaiPrMxBNV5Ucz3UVIShiIXFc6rH4yrXxnV6yTZTSWjXR0CqSjy0JzQagxoWNZEzwCqTA4IUJAmBwrOZE9mec+iWslCYokSOpEjMCzCnlO4DCf6e/s2kH6cDFJklO3AlSpDeIFlbCQ6c/0QoQCBSxaKvZNNxaHw/GH24PmXdEY4rAVn1iMQnBhNpRZQFO3ArIuDpBoLDaPJjhWn3kAP7SabRaKtNtIyk4DlmSDdiugg5AGVruZCdqLLa+k9gGDvaTElCVFJRBk8I5V4/pMoQBiqTlz0VQxZpEkS/BqZHiNhX4tWC6lakAlBlmBvATYSSZEMICJQCLTf3qivLnWXVPlSfpxkh5JauPgb2ZzV7W2MqHWoOBSZU+V2xoHQnu8vbjGjTRLwL7JXS/5N1eihvYyjxm1uDvaGqjI5hqCwkWYbXa7mSYoE2nCU0qwgUoEo22g1WNTWpHVFmDZOMNuDDTEvU1Nzc1as6MOVfg2bWLKKnzmmKUOKqQnYAGo3RGtkarVliaOlWJ2WfXxUTcdEDyq6tjULlVX14r18VhluLjM29q5WeVsSK3AJeJl6yoqMeCGxfsSubJjQ+CxIVJDY3OS80NTYmDTxLhMi1dkieExfGf5RD5eYvCnGIb4F+9tP6dCV60kwplBTIzWwbEu2ttIosdpqoGOhsrsUm2jucsW0hyVDc0+r6XFvtlnCtdvjJV7y7xoATN2i40gs+RYSYs93Zo3U/8/s3q5iVi4BQifnLmcE6KERC4UGvVDBU+VnmR4SWPxtlfgqMdL1Lub9WMOiqEtFCgO2c1BOmiHRBneo/PevtkZw6mrIgF43HgdjH40QruMTouFNpYYBOCyW/GMpa/wnaOZi+v0t5as6c3PSX+W9PlrpEvk0pPXG/N/8oZn+u7u6ZfO3DkWOQ5c/OvmVcnqSxvZswP/Hhd+PVd6ppZf9/HPvvpp/o4dz7w/dOyuVUt+5z716OC+8oYXP3X2rxFLC0ufOHH9P9e+tnad/WT7J1xNdLzz4y/Ia0s/m6pwr7v4I5Bsua9qqKSp6f2ntmsrifBn6/s/6l15sPrnf37rV3tPjXx3wyFof/5vv71iuTN8Td+2+qE33aem7+ijXpbmpn37r9zzAO90VDys9hfkVz3/x1VNQzvyptCF7/89byB3hfee6L9OiIOrc9m+jd9ujK6/3HP9fJzs3bCn9MZ/lRvvbCU+atwy+8Hx0JaZL7e9Yh7q2v85+od10Nby+drdn5Y+t3UFs9Qy2SLuHPzaW/vg7PeczGvC+cDBs8eXX1+2/v6pgsuP/n7nHRMzuz9oeWzdue8sl/9Umje5/4m+viPP2vpfHGLe+PKF1p3mCuGvQ6/KK5RDIxv8vZdfGti9bHntO7PON/ddvTvaltxbby55e3vHvdoBcuvR3H3j/YZ734v+cnLlQOPbF5Y+/ZeO0zfGf7B18Nll5650XFs7EMrLGzo7J35YfFXsOtQ7p6/Jue/w2nMXZoNrT3/1mmdvxVz+3J6L3ZetS5dfLXqIfPDpi9L9r++zJbtXv/vcj8GeD+sLIm0bLn2RG99f23je+J5U8NYP837x7l1Y7bm5JTnanIMn8nJy/gcqPgKj
eNqdVXlwE+cVtyANnmmgtKkDM9jDWik0uF5pVyvJkjwKGMnCR2XZljA+SsVq95O10l7sIUt2nMPJBAIhYZ3U7TSkUx9YrXEMwQ4GGjcDLW1n4gxDknHjNEPKQMuRUmbSUpgpifvpMNgDf3VH2uP73vF77/fe+3pScSDJjMDrRhleARJJKfBD1npSEtipAll5YZgDSkSgh+p9/sCgKjGzJRFFEWWH0UiKjEEQAU8yBkrgjHHcSEVIxQjfRRZkzAyFBDr5qW5jl54Dsky2A1nvaOvSUwJ0xSt6h34bVPi+jCgRgHQAEj4khOERv2ejvlQvCSyAIqoMJH339lI9J9CAhQvtooKaBZRjeAZKyYoESE7vCJOsDEr1CuBEGIWiSlAXM2BwRRDYrFslKaYNhlU+EyRUvvvq6NLzJJfebQdKMAcFCtBApiRGzMrotwBlIVQDFBBJCerBxMlpG6IE8yEpDMh8sQJFzlvP+YZoGb5d390Nw4P5ZSRAQ2j3JGGYOUkhFAWUAiW7t3enIoCkoYtXhyKCrGhjixN/mKQoAHMCeEqgoXVtvL2TEUsRGoRZUgGlSKes0CMw5zzIBKuNxAAQUZJl4mA4q6sdIUWRZbIgjFFZ4EdzHKFpOPdvj6RZQSGhvKJN+CCUimpjfRLWCY/ghjKY9yMJVFZIhmch7yhLQlTDYmb/Nws3RJKKQSNorga14azy2EIZQdYOeknK519kkpSoiHaQlDireXzhuqTyCsMBLeWqv99dbvOuuxRhwHH4e3uRZTnJU9rBTD1NLtIGipREKQEa0fqxsfkEsYBvVyLaIIHbfyUBWYRlD54fhmqKKvcMQUrA9J9SufIf8NXOc3k+b9WQG9KjTXkkphTBTIiXTCImzGRBcLMDwxyEDdniDYy6cm4CD+Th7YBE8nIYclE5z36Kiqh8DNAjrgcyPpVmHEaThg87DAUJUZABmkOljTajjdnGR6vd49kiQwWpneSZzoxbbSpDfUdnooOmVJqOxDs4zN5pJpgQUKnwRE4FdkLaDQSEcrI2aDPhY7md+eSPwFgxFMdQDD+ZQGHLApbhGJjPzD03fWRtyIJh2PH7BRQhBuCcSpmxzPXbhRIS4CBpad/3zJjtdvu7DxaaN0XY0xd2crGUDBaiwU2cfPx+gZyJAUweTcxLowytzX4PfgQxCwAh0gIhEGUhnKYokrDYccxMhewmS9gWOpEeCxS0kiZTFCQFlQEFR62S1GZLOTKRbjQngVsIK4y0HE5IilVp4FdDbiEdg1yOiBJgBZI+TIVRiqQiAM3Wn5Zyt9RVeKtdI34I0iUIMQb0fqpbHQxS4WCIc5pawt5Ea1Mg1mLAKxvpCjHmVhvqRX+dL0BaKbO/Jr5ZVv3+EFEroHiZ2YrbzWbChuIGzAD7Bt0Wjrutbi/GmJPRmtoQK4mmYLzO5y5jlRa/J1pvT9QxIpXwkJtbW4VgmGFtnWyZBXjLyIQH3xmzgQRdESKwuhbaW9Xi+aGSZFwNtq0Wb1MNVlkf3ErXWLaZq/joTmJzAwwRzlynsRyBBQvHpuzMtQ0K2wbNNg0x3zTlCJ1JjNOweFKWI1Xw6PLxbLIc8aczDOATjm8/owBnncCD2ddhYtQ4Qzs7G2or1NaqjvraVh8e6NgS5uMxr7XZEAK1UZ8hGLVhjVX4TrGBMnsXZAa321AslxwrZrZlSvMe9P8T1bFmdOEUQH1i9oxO8YLMM+HwsB9IsKu0EYoVVBoOfQkMuzxoY0WLNmEn7DYsROEmmjTbykgCra5wH5m3dndmDKVPjBTJwsKLU9p4hHDqHTAefTnCkU6bFfZY5iR/bjh7fp1Zsnzt3vy8zLUU/ufmXvafeuUNbOXUvwpmDrgq82uOfnYyeszJfotbMRsd3/xj4lk/1XZs1rrvyxsF63W93oPSppl3k8L56U92Pbvyr/ahh5oriw69k3/jz6Gbxz+7dOjsR1cu/i462fBG6vDRS3tvPz33jfWoa+qf+08QXyz1v/PYleO9RdOPNO5665qiTac6qk1tiV822ff0NUVXrS8ZPXGJLDxTvRo9ffXL3499tz1SvO6srviFwjvv3dwg31m+4aX1l6v2zXz4neLrr+XrBt1rdFF08pWaZa/p6BrH62+ps/n4kgONeiqwu1+8VeS5TDYHdrOpz2eIO01nnzyXutY1UZL8KrVh7TMvX1M/7Lq+vGAf/dV+T2Kt7/Ceby/pnflF28D7TxerBT//gaN4k3f24ye2x4pWtK3cc65J/ObVx2YCvctc6y72/RotvNDT/Qj/ZtDzPN5s7ttz69HBxpLLX9zWpg7sp94r6q/hPj40NbD51LrJreH9rZ//O/GfU7O7J/M+ONNZsWZr7Kf2m492n6PPvXRaXLpMdGx8rs5e8uLtp6wn+vv+duWJW3+5YSwcHD391I4Vq1612rc9jqxWvz4/jV4f6zK8P6etzftDffk0wWGX/3H0QvEHO/775NxPZs5+subhvX/fR0wXbvmIf/yPPdaBLsuOWzPkmwU/6otMXPj66qb+8Uh8lfGi7eRexPbwM7o050vzfqbTdlU9lJf3PxKZCgE=

View File

@@ -1 +1 @@
eNptVH1QFGUYhxzDRi3I0rLUnQOyyXuP3dvz4BgtTzBkEA7hciATfG/3vbuVvX2X/bjghBoVZgJMW5qxKSYy77ijG0IuFC0/pkadMZWZJv8orBynj2HUP0qzHM2il09ldP/afZ/n+f2e5/d73t0eCyJFFbCU3CNIGlIgp5EP1dgeU1CtjlStKRpAmh/zkVJXuTusK8JQpl/TZDU3KwvKggVKml/BssBZOBzICjJZAaSq0IfUiAfz9UNtW00BWFet4RokqaZchrbazKbJFFPuxq0mBYvIlGvSVaSYzCYOkyYkjRy4kShSAURBagsppqAH6xrlQVBRTY2bCAbmkUjSOBHqPAIs8EOhRgdWQkCzdDaB0lBAJvNoukLwaQvdGPMjyJNhLyalRfxY1YzEfQPshxyHZA0gicO8IPmMT30hQTZTPPKKUENx0p6ExhQy4jUIyQCKQhD11wFVg4IkkrmAJgQQadX4pMTlri4o3LCmJDoOavRBWRYFDo6WZ21RsdQzMS3Q6mV0fzg+qgkgQkmaccg52WZWaT2xQ6Joi81hofvupRYh6Tgqj8WP3BuQIVdDcMCE1UZ0vLj33hysGl3FkHOVT4OECuc3uqASsNumTano0uigRiyv9H66ieBdOtbCWC05iWnAar3EGV1eKKooMeXBVEmcGMkC2g5o5tA0aKQp9YDDhMH4mO6dFFBEkk/zG2GGdXQrSJXJBqMdUVKm6er2CPESnTsdm1i7fa6iyU3YGcknrhrH3H7dTNF2qhgqFCFeTjH2XJbNteVQBcXunrwJEvcDXUq4FSipXuLUmsmliXF+XapBfDzvgesSn7hZQOCNo+S9mmaKvF5vbQUKWiX7entdTijEhoLlzs/v6oIVH5SE0BjtaN1QBuuws8t51gOQx8sDmyMnGzgcVgZ4rNYc3pbDZNt4ezgoQCPOWBjKh7FPRPs5L+Ag50dgXBojll9Z4iwuzOupAGXYgzUVuKHPiEhYQtFypBA3jDgnYp0n66+gaN7LoMxZaRxwMBxrYzia53mbjeOsYDVZm0mZpmSIjN6dsd/ANmKFQo5OJbctaZuVNPbM4A1nzY+r5jSPtMbPVh1t6y9Z+EfmO+cfLnseFT32kye6eIj9q0JZ8YE++NnIse8FqmvptuM3b2RoTpw4+eLAK2/82dXx7dlrhzF+aeX80PBx+5Lfd33E5H2Y8qr7ibDwCPtspnnuvPDuLRXMiSrzTHPi0ecqL8zf6Gt4b+Cfhjc7Ftxa6a2YvSzhufXvdfv6yzd/3lTUkrJQmPfVly5r+p32XXuNNG/hby2uO4ODm5uzv7g+6/y+1rLXryWX3Fl7sNBYtWjw0rn3q2K2/Nuwdgl/Ib/UdyO5c1WBa757755Oai5dr2+mbqd6n3p8x3B/U//w38sDKSlPdxfO7sx0preAupnh1NP5zRdTnvHSja2dPyxa8e5q1zeJdZc6Th1IpDYUFPVsDLLmK/7XZgxc6V56pOrYiau754TPlL7N76P3rBWXFaf/Up16OTPtu6bMWKp13VXfmdmhxe0bXlhUe/J69HSf/WA4vvjr7gW9/z35K27I4EZ87Tcah48frhohqo+MzEhKOed+i30oKel/M3dwnw==
eNptVGtsFFUU3lJAQSJCUqDEhnHFSGFvd2b20e6GP7gFUmH7XKQNYrk7c3d2urNzpzOzC1ssj6IQhaiTQEiLaEq3u7CpfaSEBgVETUOBggk2aZogMeGRlEfCG41ivS1tpYHJ/Ji5557vO+f7zr11yShSNRHLaS2irCMVcjr50Yy6pIqqI0jTP0mEkR7EfLy4qMzXFFHFgXeCuq5obqsVKmIOlPWgihWRy+Fw2BplrGGkaVBAWtyP+djA7s3mMNxUqeMQkjWzm6FZu8U8tsXsXrfZrGIJmd3miIZUs8XMYVKErJMFH5IkKowoSFWRZAr6cUSn/Aiqmrl2PcHAPJLINk6CER4BGwhCMRQBLCGgbXQugdJRWCH96BGV4NM5dG0yiCBPmr1imhUPYk03Ol5ooA1yHFJ0gGQO86IsGJ1CjahYKB4FJKgjC1Wj6XyKFCmjEZ2MVAghBUBJjKLOTUDToShLpDugi2FECjaOFBb5KlcWfLC8MPEM2miHiiKJHBxOt1ZpWG4Z7RnoMQW9GE4NKwOIXLJudC0bK9ZaHCOmyBSdY3fl0O3PU0uQ1J1QRuI/PB9QIBciOGDUcCPxLLn1+T1YM5q9kCsqmwAJVS5oNEM17LRP6FKNyMONGklP8Yt0o8FxuqQth2HI2zEBWYvJnNEcgJKGOsatGM9JET9tgHYCmumagI10NQY4TCiMRrp1TEEJyYIeNJoYm+uwijSFDDLakSBpekSrixNLUW9PcnT6DhWtGhuIPfF8Yq5xcoUqWiiapbwwRhFiB8XY3TTtttuolV5fi2eUxPdSmzp8KpS1ALFq+djsJLlgRA4hPuV56bykRg8YEHnjBPmupBnGUygq2O56H5VFndjFK+9VBR3s8f91waoAZbFmhHY4b2ChzeW0OXibHyB/gAd2V14ucLlYBvhZNo+35zG5dt7ZFBWhkSLaUwLGgoTauADgIBdE4Jk0RjK/onCZt8DTUg5KsR/rGvBBwYjLWEaJMqQSN4wUJ+EIT06BihKeFaB0WYVx1GVz5dF+xDhcTmRnEA2Wry1tH5NpXIb48BEauQ22EytUstSd9uWC3a+aRp50vmT97r7iGU+XDJzoXSUM3l60pLbiYfesjPzyyVnZ9Vd6jx7z3vS2/s72bt0f3VhyL/urB4bWk3lg25TH0cUPLpzZdfZM25Mng4+fDl6/eOyV2r9jD+bbHop/1HcHdH8vXZVr7Wlh3q3ube8/nTWnPLJ3UlvXrYYG53f4/oG5fzl/7J6xJEOotpw6+G3WwQWBO+z0y9XX9qX9tD3755nhzC/mw0u+28haeXNa3blzcy9v+LoisSa0sX1/7ocl7NS2WF3DlntN542BtsCjwjP7tvdf23j39aXON09nLL6TecnEXXW8Uf7bzCxhg7ffSy+sWNCTt3Rn445E8mr/3ezP0gq2lXDz3A3bzOkXdp6cvscOHhtnM2/2hfr4rQ0Djo837PqmMSOr0XXRsTbt+8T1ivJf82c/anj78N2Mj04JwVZPZv3qKa/x05qTi/6cPG9otSWDe+vILXb1+a7PbU52oXX9ujm/+DsvtB9K3VAL+g/vHZp9X1jKfjrEeP99cmPNluP/TDWZhobSTcemHq9ZOclk+g8rUHyZ

View File

@@ -1 +1 @@
eNqdVXlsVNUaLzQIUZOnL/G5gV5G8OVp78zdZm0mtp12Smk7085MKYVoOXPumc5l7ta7TGeK+CLU5UUN3qYxkryEpe2M1AI2rYKFGtG4gGiCC6EaMS4Ed1/Ce7iDZ6ZTaQN/vfvHzD3323+/7/vOlnwaabqgyAtGBdlAGoAGPujWlryGuk2kG305CRlJhR9qCUdjg6YmTN+VNAxV9zkcQBXsiopkINihIjnStAMmgeHA76qIim6G4gqfnc5ssklI10EX0m2+9ZtsUMGRZMPms6kCTBGA0IDMKxIhm1IcabYKm6aICEtNHZ8231thkxQeifhDl2qQrN1JGqYWV7CebmgISDZfAog6qrAZSFJxBViKrSk7tTmfRIDH5W0bSiq6Ye2bn/B+ACHCHpEMFV6Qu6y9Xb2CWkHwKCECA43gNGVUhMMaSSGkkkAU0ig3Y2U9B1RVFCAoyB0bdUUeLZVFGlkVXS4eKVRDYgxkw5oI4ySqGxwtWYysTNB2F22nn8uQugEEWcRQkSLA+eTUovzQXIEKYAo7IUusWbkZ431zdRTdGm4GMByd5xJoMGkNA01yceNzv2umbAgSsvKBlsvDlYSXwrF2mrF7xuY51rMytIaLJByYZ4wMLUtCBfuwdlH7ZvERkdxlJK1BmqGe0ZCu4j5BW3PYzDD1LUOYC3T8zXypYXaHG2dJPF1241At5sWaiiXNCoJyEc1AIxiKcRK0y8eyPs5J1DfHRgOlMLEr0jAWw82mJzAVdbO052HSlFOIHwlckfCpAuG4mkL6uDFJlFEVHZGlrKzRtWRkZlLIhtrxme4iFa0LyEJvMaw1VWS+pzfTw0OT55PpHony9nKsEEcmTEyUTFRNKYTBCZGSjsHxuPeVJLPYj+BaKZKmSIqezJC4z5EoSALGs/hbGlfdGnJSFHXwcgVDSSE82HmOKj4vzdXQkIRJK8S+5Ibzer2Hr6w064rFKl63e3K+lo7mZkMzkn7wcoWSi92UPpqZ1SYF3ppegQ+dLpZ1cpDj4jDOMF4WxIsviEpAlmHdHs+LePIFiL0UyFQVzSB1BPFuMrLWdIUEMoU587O0k3XhSisJQYaiyaOoGa9VCjXolYSqIVEB/H6YICGASUTO9J+Vr+0IVTc3BEaiOMmAoqQE1P/hgps6O2GiMy75s/VaeyYRDMqNQTWaFnpqBb7V093U4wm2tm2sFzrjmmK2dToTTrWDpN0czeBkGSdJ2yk7nlIykIRM1mzqcbbQNQ0ybBfYVLfQ3hRKhRqYWF0rC72U2KyC1WGvN9YekutWoTrV69WgnA5G9HgrQJH6SMToMatbYyiaashsVJyt7cy6auhpCTe2grRKi71rA5ybZwolAiPpd1QSuGEFDLq/NDYkHhuyMDRuHzU7NJUEXwTGb5+/IiuJVXjXh2UxW0lECwgj/A8kFBUM5A8pMpoewMCYaYH3h1J07cZYPO10rWpLsiE+HKrrCNXx3XxNZF04jbhwd2e0c218DdMA5yDjoV0kVQLHRXGeYmteSv3/zOqFteTcLUCG1ZlLLS8ruiwkErko0vBUWSNQVEweb3sN5QJBMlLdYU14achydJx1JRKUh6dpsgbv0Vlvf+6MocJVkQcibrw0tMaTrN/m4zjWVklIwO9x4RkrXn0P5gqNKne9tuDB2x9bUlZ8yh/vp5VXqOse+vG3q9/Sq1YM/BWlFr+/q+m7qrY2a8zxrz1w/flFNyy/sOlMX//OyJ7Gh3/9furwoXO0rW91tV6TfftJ95rm28a/PSMcuG/42QPfgnOTnxw9++rmQ/cN3Lqt6vOd1EfL1N9am18URhcOBH98unLDkiPOjtMrv2Leerdu8S13VC3qQN3MDvudpyb3bj/eb3SvORnU/sP9/Yfrl/ctPaS8+cyi+0/8+/hnO1aXnx6/OvmAYOvzDdY8xPwwXJ+zXi//InjHl/b05NLyxeibjiW59cPv/O+9M+FjsZPb995zrnFqYN0vk+rLh08c+WBwbGIw/+iT2+Td5+9a8fw73N+u4Xbs3Ar6/5s2Tn1a9h4b2Nr0BHfup4c7lpdtjz0wcd2xm/dXX3sMbBq94d3Hq7hbfu55av3p309F2i+O3V52dsP1tWDZkqP7zy77cGnf4vPtH6HX7t3w9ZHvVj6ycOVB9aoFK1Nt/+z9+i+37frH6j13qyePfn/h4+3hE26M9cWL5WXjb4CnLiwsK/sDSUeFGQ==
eNqdVX9sG9Udb9Y/+KFpLVqhdGnh8CYQ4LPvfGfXTuZuqWMnIU3sxm5I6Lbs+d07+/Dde5e7s2O77dRfYxo/Vi5AEQJEaVwbZVHaKqFtSjvoaEtXEKjrYEoHnUQRVdG6btU2aWMbe3YcSNT+tSf77Hff7/v++Hy+3+/bWskhw1QIbhhTsIUMAC26Me2tFQMNZpFpbS9ryEoTqRSLxhMjWUOZvi9tWbrZ5HYDXXERHWGguCDR3DneDdPActP/uopqZkpJIhWm8xscGjJNkEKmo2n9Bgck1BO2HE0OXYEZBjAGwBLRGJzVkshwOB0GURGVZk262/RDp0MjElLpi5RusYLLy1pZI0monmkZCGiOJhmoJnI6LKTpNAMqpac5F7epkkZAountKKWJadnj8wPeCyBE1CLCkEgKTtkTqaKiOxkJySqwkJMpmpY0SoPFqAaKPZpBSGeBquRQeeasvQ/ouqpAUJW7HzYJHqsnx1oFHV0rHq3mxFIksGVPRmkoLR3uWIHiixnetZLGvC/PmhZQsEoBY1VAoyrrNflrcwU6gBlqhK1zZ5dnDo/P1SGmvacLwGh8nklgwLS9BxiaT5yY+97IYkvRkF0Jxa51Vxd+6a4iuHiefvbPs2wWMLT31Lg4OO80sowCCwk1Yr/Mjc8CpCKcstL2CO/hXjGQqdNyQdvK9JiVNbeWKCXonVOVet3sjnbOcnl+wdJSK6XHPhoxFCfDeZguUGA8nMfL8GITxzWJHNPWlRgL1d0krsvD/gStOVOmXIRn2a/AdBZnkDQaui7jR6uM02yq4dP6ZFFeJyZi61HZY31sz0zDsB2tEzNFxhIjBbBSrLm1j9aoHyrmhySYlaR0bkjjAkVRUJIoC+XJ+hHdIFU3NCBWM+0RwRMYr0tmwR+luXIsz7EcfzjP0nJHqqIpFM/as961pl3ychx36FoFi2QQ7e+KyNXWr+ZqGEijpFV9f2VGDAQCR66vNGtKCFSXcHi+lonmRsN7NPPQtQp1E7s5cyw/q80qkj39HboZEKCfC/hlmQd+UaD4r/RA6PFKyaQE/EAMyFN0ACiQWqmSqRPDYk0E6YiyCva0UwP5aqMFBd4r+GimzYyCoZqVUDybbCXVHMxmRjeQSoC0F8osBDCN2Jn6syut/d0tXR2h0TgNMkRIRkHD5xpuHxiA8kBSC2YiRkjGhi+UF3J94cFEMdMv9cdbM0rv2qEcHiqm+nytHQmfikmY5VeKPj4giiIlzcW5aN+w3RHv2mRfz0MpTRyMASHDqXpbJjrUDVcPFAbWqbyFRW8sbPSlMl2rvR4rHCh0qj2tDyTkwaQ3/VC+fZ1YDHnbe7tR0iKJtUT2c5FCW3QQiN1E74dqnESVWM+a1JpYeIimCKx00N3M0IJVKOjBetuwtG3YmaYRZpummZFqwARd8ydlM9NOR34Uq4VmJl5FGNFfoKG4YqFgN8Fo+mkKTDanSEFF1lEinkhKegGvMyIr88aDlvSwh4TNXk9IewD29sWktMX3dLb75yDjC4gsVwfHx4n+Wml+Ffr/GdWBPnbuFGCj+szdVsHExIosl+PIoF1lj0KVZCU69A1UDkXYnpZ+ezIgBPxcUhICEPm8ScHLdrS07pu19uXMKFVvjApQaeHloD2RFoKOJlEUHM2MBoJ+H+2x2g24pVwtVJw60bDjzsduXFBbC+n3iy8eH+7qXMgvfuTK5/f/+oX1lw78fSR4c+93b3p0aXin+BwemV7TLu9x/OfK8GTu4E82P7v86juff/ribzYuWb35DLd01wfD6y5c/svV8z/6qKew9sBjZ3bjtmd3PD/95qcgt2L8Cv7xk0smN7acGBj9ZviJg6ce+fAbv9w8zE/tvHh2wvC92N22TCydubv/csEfCKwwGneKy5+5cPao0RUB26fy921f/PG3X7p35OyhLVOnVv2zfOx29AT3vSU3NWxRDzcsG/m5c8Wbu7Y9unzk7f1vXdy0WH8/8tzWqQ/O3fHku7e8/bu+X/ztyD1/CN3Y+eqiycf/WNrnefd49tZYxLh07NQnd53V7jpu4gPB04tWxS5MJYsn/v2D/j//aVcjueGtDSeP//Yf4lPH3PIzkZ0nT54f+37Hq5GP7n3t2Onexs+27P791fecrxeunPON/3fHphueXv3eK/ffGWvMfkgevI27fH4A/eyWi28s5o8s++mtjV+/1CiOS5mh9Q1/DXp8758+9K1ti+7eG/nXyeT6jV+ror5wwapuEl5GKfgfnRWRgQ==

View File

@@ -1 +1 @@
eNptVGtMFFcUxtCSKlqosRHtD4bVPmKZ3ZndZdndii0siiiPhd2UajV4d+ayM7A7M8zcQYFaK7aagFoHbNWaGpF1wQ0iBEWRWpVq1dZXGm3VWLRNg9EIplKD9VF7QVCJzq+Ze875vnO+79ypqC+BssKLwqhGXkBQBgzCH4pWUS/DYhUq6POgHyJOZAPObJe7TpX5i29yCEmK3WAAEq8HAuJkUeIZPSP6DSW0wQ8VBXihEvCIbOnFL8t1frAkH4lFUFB0dpoymuN1wyk6+8flOln0QZ1dpypQ1sXrGBE3ISB8kCfzCBKAUDhRRoQkQj8BPKKKCA8EsqJbuhDjiCz04VTGB1QWkiaSA3yRShoxCWWiEjEcgn4Jz4RUGXNQemppPQcBiwfuCnstwIkK0lqeG2IXYBgoIRIKjMjyglfb6S3jpXiChQU+gGAItyjAQZW0UBGEEgl8fAlsXUIqCPCCD89GIt4Pcavajqxsd35a+oczs4KPQbVmIEk+ngED5YZCRRQahyYmUakEnw+HBnQhsVgC0vYmD7dpcJZiSwSC0ptteqr5WWofwB0HpcF4x7MBCTBFGIccslsLPi5uejZHVLTtmYDJdo2ABDLDaduB7LeYR0wpq8LAoFq9w/k83VDwKZ1JTxv11pYRwEqpwGjbC4BPgS1PPHhSEsJGmkjKQlL03hHQEMmlJCNiBq2WahoW0AcFL+K0OtpsapChIuEthiuCuAypSkUAewlPHq8fWr1t2XOHN2F1IBW7qh1wc2o8QVmITCATmDiBoC12k8meYCLSMt2NjiES9wtdanHLQFAKsFMzh5emnuFUoQiyIccL1yU0dLtIntW+w+/5FO12leQIljkSci2WzB7RmZPqRIUZ7U91EWUvEPiyQdqBuotTTTaLKYE1eUjoKWBJs82aSNpsRpr0GI1W1mylE82spa6EB1qI1tOEVxS9PrjLMYt0AIaDpGtQGq0+dV5Wcma6o/EjMlf0iEgh3cCrBQRRgEEXlLEbWojxiSqL11+GQVyemzxP222jGZOZZm3QVkCbPFYjmYLXZlimJzIEBu7O4K9gObZCxkdHR/XGVr0SNviEs9XZ4jkq+ujBn45P1Hedb+uYsP4N4u/sadOkDyq2flWTMb71RMbGSa3jzn+6+K/dkSHrvC39fZv7p15pv7Cvc82lnsSJefce9N03xAp38vqLux72HuxtkqZbb/m2UK6GyXbUNTqqNNpdN59QutZcvnp1TF5UXMK+S033oqN4tb39sr9ts/d0xzcn1p18/Urv6bUr9/7bI0/oJcekuF/ZmlR1+Ig1pcE6yR2TWlU9Z/+r7zvJX/unRBV2nkveGpEx/l71jOKYKeqipJSfKwpv/BGzqmD5/OQFG9APaest46gv6sq4ms/kosjc0bEnNvxTeYx21hx4Se28NN3ZPPVuau3ps/a3y06N3z92T7yzdvaRmzF7zjgSDlMhw/VZF5rbVnE5i7pTF76VU7jAEhvdek29Wbb5zLhw7vsVtPtC39m71V0BInNs4FBPrrnfEee+s+74Eu5r3y0uJy2Ljasua+iG22I+STf0RRxq+bPgv7zMa8U3NnWr6Ya1+yoDrvUb61aWvzP27NV1jm+THngqQ7MTujOWnfpx9c7fr3csLpze8/IKY3tnXEL53JTeZOeDTbeL8pyTI279cvu3yvcm8nPaEmcQO97Vt3Wdr+0tzo+MqMq4f+xQd2Pkyu5la2qS2srL7A9HhYU9ehQepj6sCEaEh4X9D/5zr+U=
eNptVAlMFFcYRvFoPdKAR0FLXTdeqQw7e3DsYqJ0xZNjgVVYlNC3M4+dgdmZYeYNZQFDihRi1ej00Ki1LbCwulnlsKlYUYqWqtGWmFhTWqrSGqKobWJabKvWPhCoRCczycz73/9///d9/5sKXxGUZFbgxwVYHkEJUAh/yGqFT4KFCpRRZYMbIkagvbbUDHudIrHdCxmERNmi0wGRjQI8YiRBZKkoSnDrivQ6N5Rl4IKy1ynQnu5dpVo3KM5FQgHkZa1FTxpMkdqRLVrLplKtJHBQa9EqMpS0kVpKwE3wCC9kSiyCGqCRGUFCGlGAbg1wCgrSOCGQZO2WHFxHoCGHt1IcUGhIGAkGsAUKYcAgpJGMxeUQdIuYE1IkjEFGkVt8DAQ0JnwtKMTLCDJSm58j0QgoCoqIgDwl0CzvUo+5SlgxUkPDPA4gGKkpkRHtx43ycEgr1V8AoUgAji2Cx4oJGQGW5zBDArFuiBtWD6ek2nNXr92YmNLwtLTaBESRYykwmK7LlwU+MMybQB4RPh/2D6pDYMl4pB5PGGlWZ/NgY3gNGWUyR5FNz0JzAPfdIA7FTz4bEAFVgOsQw6arDU+Tjz67R5DV+mRApWaMKQkkilHrgeSOMY1hKSn8IFHVZ7U9DzccHIXzGaP0enw3j6kse3hKrc8DnAybR60YzfFjP40EGUOQ+uNjakMkeQhKwBBqDXl0REEO8i7EqHV6k/GQBGURDzPc2oDTkCJXeLGl8NJ53/AE1qauHxmIHd6V2Fz11CqJjdSQBk0y8GgwcLRGb7KQpMUUq1mdbA9Yh0HsL7Sp2S4BXs7DViWOzI6PYhS+ANJ+6wvnxT98yAiWVtvwey6p11tTWFFITkorsNkTo0UWIEf2m4YT/+siSC7AsyVDsIN53QuM5hhjNG10EtCZRxMmc1wsYTYb9ITTYIijTXH6WBMdU1fEAtWPtde4BMHFwUYqj6AAxUDiqTSqb6UjJSF5rTWQRaQLTgHJhB24VC8v8LAhA0rYDdVPcYJC41MgwQbrKiI9waF+bjaa40hnHoijooE+zkATiZnpTSMyjcrgHTxCQ3+Ed7AVEl7qHPfXvO0vBQ1dwfh58oROz0ntWTHt0dKrp/+ACZOr7dHCxOwloaGPS9hGi3nbB9zau1cGBuLm/xv/48zdtk+v3tkX0ZVVOi99rp90tOdEHGm8Vn6qzRLRPy828/eo/q7rXe051w9Wh5UtzwBq+MBMpy18ffC375tbqsMWsey0DcyqS0RE4JVJScf/vlGslLVkN16rCpsDFg/YHreVb0RfLhX21k3Z651wIahjUlq1gxkfWBh/pTL/+6rYsp3tF5ZFP8kOubShdfLN2ZuNndPLQz4OfXAjOGHxur7ZYO83RfOVdai1d263WuSpDH97ds3FKYsO6j7MhucKz7oMyefdl9Nb9s3Z9dUtx+2dCRlrqjsvr067O/XwjHf322fM6qSt7P6lvXTh2dg3UnYud6yfUnqh59VFrPXizU21tZ6D/3Q86Ho5JNDTMTF/Ds39WqUz5B/YPd27qWJPmTcpL8Rwcse9X+7NaP+s/oet6f3hkV3rMhxXqw/vKcpddmrBHcXRtyLDceX29m2P7Rez+j5a0/OwKeu3WQe0qOqYr73167OFlT9r3lqZmhbP3SKXPXqvY3PfkdD94qHxhSf/DLz2SVDd8n2tNbbC18+F3b/f63+4u/i708qJhV/YIo5QTFM489OSmsW99wrO9IdnnhkoT2pp3xPvKS6eMGhvcBDZVjt9Kvb6P84drZ8=

View File

@@ -1 +1 @@
eNqdVXtQE3cexwd3Fp8z0mp91DTYuaGyYTebhBBEB0JA5CmJAvYc3Oz+kqzJPtxHgOATtVpRe+uJelrbUZA4FLCeUK1o1XpqK56jrcphxbNnLa3a4mOQ01O8X2I4YfSv25k8dn/fx+f7+Xy/3y33e4Eg0hw7oI5mJSAQpARvRKXcL4CFMhCllTUMkFwcVZ2bY7VVyQLd9q5LknjRFBtL8LSG4wFL0BqSY2K9WCzpIqRY+J/3gGCYajtHlbbxZWoGiCLhBKLa9F6ZmuRgJlZSm9T5Ai0BFaESXZwgqXgOMCrCzsmSyg4IQVTHqAXOA6CdLAJBvXhejJrhKOCBD5y8hOAaPSLJgp2DdqIkAIJRmxyERwSL/S5AULCsD6tdnCgpDf2B7iVIEkB/wJIcRbNOpd7po/kYFQUcHkICtRAeC4I0KLVuAHiE8NBeUPPcS/mM4HkPTRKB89gFIsfWhcpBpFIevHxcG8COwNpZSWnMgSCS0mNzSyGjrArTGDAN9lkJIkoEzXogRYiHgHhq+OB5c98DniDdMAgSUkupee7c0NeGE5XdWQSZY+0XkhBIl7KbEBiDbn/f54LMSjQDFL859+V0ocMX6XANptUY9/ULLJaypLI7SPmBfs5AEkoRkoMxlJ1oQy8/HsA6JZdShWHaPQIQedgfYEUNdJNksbwaagHOfu0PNcqunIxeEa+FjalOgbooR2wuOUaFGlRZhKDSolq9CjOYcNyk16vSsmx15lAa2ytl2GcTCFZ0QCksvbL7SZfMugFVa36l4EcCgsNqAvBhGyKghOdEgIRQKXUFSN7zCUHSU/Y/7y6EE5wES/uCaZUjQeWLfSXFFClTlMtbzKDxPh1O24FMOhpDLrzABdJAQAgjKtWYNl7bEDrqJb8WFosiGIqg2KESRIBceGiGhoQGv0NzCn31KIoefNlA4twATrRfhwavL/taCICBqgWSvwiji4+PP/xqo95QODSJj9Mf6m8lgr5oMC0jHnzZIBRiFyrWlfRaIzSltE2GN0V2vcGoi3Po4wzxFEUSqENH2YFdq9WSlFZnB9QXcNBpEkYJqMnDpYGIgIRLSSpV2mIYoiQwaIk4pscNsNIEFc2SHpkCVtmewgVqEBNUvAA8HEHtNaciZoJ0AcQabEDFn1KYnZSVbq61QpBmjnPTYOOVAWOLikhHkZ1JTPXN0ZuLc7LF/NI4S7bsJswsmeop8aVnWniLU2P1YLIXnSkXoq4sBIvTYdo4oxHXI5gG1cAxRdLlTAnLTVvAMVq7MX1uqY0pQnNJa1pWBp6Ke6y0xpuRnikasyyOvDxZnwL4gplsbo6DNWcnZZC2DLMnUwDmghRNYUlSqV7rnTXbrc9cSBZl0njSAiHdLhDOTCLeNzfXkFIISyQkV2Jsggp2LA1JTwzNDQLnBglMTZwJ7Z2aBBUVJCZR039HJqhmwCWfw3pKE1TWAMMA/hIMsMJ9nZjNsaBtEyRG9tJUYkmqRqPnM7xakSWtC0k+2UX7sBm+uZTb4mSsKTZu4RxristamGec1YcZFOJBQ+QYUJ0x2JovoP+fqD4vQPquASSHf/4287OcyNIOR40VCHCqlFrSw8kUXPcCqIGNkJdUqDTGYySuw4ABcziMuN1IIclwkfZG+9/SqA68K/yEBzael1T2u/BEtUmnw9UJKoZINBrgjAXfectrAo3KOk8OvDOpYkhY8BoEP8+erbN9v3Z82qjFrfmRD6ZOefivk7cXt4xb9Jph8N7jr80aVf/buKzT7bPNs/+efcrx9lHlny1bVrLfbNxs/NOYebeExx9Z5z2VB5z/2/mryTfKnNNG/nLnt59+6vF3c3fbbz79w69LlnTd4HvugGc/nGiPfXhf/m7ziGtPNlz7UkYSO97eaamZd2Wo6US5796TtpM3H2zf/umn276egFZ+/ut8Z37LbXw8OOHsGLOl9T/nPq5KY9gfWsPDjrU+LjTmHZu+7T0jd9SirT2EGA6sVi9LNVbOuGGr2hblvkJ+i7+Zt/vp5mO+lctWdEVcvbR04tDapvmad/jI+VM2JDfvmqQ6zXC4jahSDWmua4qOj2h5ffHay+a4CGP4zXNnDgyaKKYuX69acnxgyoCLg2l3YkHnsMNR6Fvvr5xfvqfl9fqfR+ywzdn+45OOmSBq6qyn+q4LEZfLlqxpDN9hd9LvH8yPxCnHzFV3BcuksnUDjVVf/Xi2u33rlbyx4drp81O14VlTm6b4Ms4nfJOJZ1pGzK5f+uzRluQtJbUXpz005nd1NZ6N+ktFdWXhB5Gyxt28q3jf/RsXHnZkYavemGk+FbbzMlLxhaljuH9X9/Ws5aMvblVvPD0x7vejMyJ2n+veIHdmZEab7hf80o5P1o1tkkb+fLtt7erj0Wv+XN69FnCFkWMHN308aHTymOhlK5KursubQM24NyCja3TxmXeio+p6bEzzowjH2fGbFpz+XbtL3LnIXtNJRrR8tz36H+tvnXGc3/rttJ6Sg7c+OLDibmr3I/PhisimoqEm8G7r3PWraxyTxoQdvVQ57hNPOj6k4/jlU58oEddPXhrREK+5R98/Oc3t0eyY3BD7fef1+kHbKtJzNv/13rnusRvqTw/3kmsWbSz4aE9P3I5OonNT2pWeSf8mvyqbMOyEb9SwC9MrF8WMjvnw6tLKiX9sWDneUrrmrdY3Zp2ZPNDLZjZ3+C8UDn9wqKLR8ubky3F7sh8vvDcy2NKDwoatcuwtDw8L+y+SEjoQ
eNqdVXtQFPcdR6nK+GgTGmNMxrjeqGla9ti9Fxx4jsjBcSBwcCgPy+De7u/uFvbFPu6BtaWWpDo6gcXamGhoFLxLkaAoxicWO22qg4k6liZUE51UjdrWJghOEVv6u+NIYPSv7tzt3e73/fl+P9/f5rAPiBLNc9M6aE4GIkHK8EFSN4dFUKsASW4IsUD28lSbo9BZ0qqI9MAPvbIsSGnJyYRAa3kBcAStJXk22Ycnk15CTob/BQZE3bS5eCo4IGzUsECSCA+QNGnrN2pIHkbiZE2aplSkZYAQiOTlRRkReMAihItXZMQFCFHSJGlEngFQT5GAqNlUmaRheQow8IVHkFG91ojKiujioZ4ki4BgNWlugpHAprAXEBQsq7HNy0uy2jk10YMESQJoDziSp2jOox7x1NFCEkIBN0PIIAmpk2SqHSbJgSgYansNAAJKMLQPhMZt1UOEIDA0SUTkydUSz3XEikLloACeFLdHKkAhApysdhfCVDLsyY4gxJVDcG0KpsUOBVBJJmiOgUChDAGzCglR+anJAoEga6ATNNYzNTRu3DlZh5fU/fkEWeic4pIQSa+6nxBZk+HI5Peiwsk0C9RwpuPJcDHhN+HCei2Ow0/XFM9SkCPV/VHkj02xBrIYREkeOlH3Yp0TADGA88hetRXHde+JQBLgmIBfhKCZrEib22BLwIVz4di87CvMm+jl53EL2qywPWpPtkgnIZgOySeCiA7TGRHckIZhaQYzYssv6ciMhSl5ah+6SkSCk9ywF1kT3Q+TXoWrAVR75lM73hPpOKwmkj6cRhQEBF4CaCwrtaMMLR4nCmq3HhkfMpQXPQRH10XDqj3R1vvrAn6KVCjK6/OzmLnOoKddQCHd3TETQeQjYWBCKCupbbhOb+qMiSbQb4fFYiiOoRh+MoCKEAuGZmkIaPQeoyu0NWIYdvxJBZmvAZDYYQMWvc5M1hABC7sWCf6tG4PZbD79dKUJV3pz5MJOTtWSwORscB0rHX9SIeZiHyZ1BCa0UZpSB5bChyodmWIGZr1LhxvcODDpUnGMMGGkzmCkXHozaT4B+U6T0EukmwLcHagESLib5KA6kMQSgQjTLHrcqDfBStMRmiMZhQJOxWXlIzVI6YggAoYnqIOkGyUJ0gvQ8QFUw9bygox8e2a7EyaZyfM1NGj+67QXqqpId5WLtZRkr8kDFaxX79H71pWn5q2l9HbFbKWz8+SiMmeVze8XcEd2AVuk+FE8xWDCzQaDwYziWkwLiYN6cFN1eUaFYqupwIsqjNlkiXsNF8h0SqVGI04auECgJMNfKOZT69a61q3NDNaS2bwpxWjzOa2BWsXt0dlXczLmYc3rcrGgK7cK+Cqsot3kyK1Oyec9WGktZ0zN8PnLgzk1sERC9lqS0xE4sTQE3RLjDQp5g46zRj/BmnSEigJj0U5dlelIDtz1hRwTTEecEYQB/CVY4IRr21LAc2DgVxAYxUdTlrpSL18ddAZsPm9WKgvoCmArzCgx2e0BqxLM5QMuG+e0GgwZ2WWeScjgJjjOMXBMmCE1Oprfpv5/ZvVBGTp5DaCFwvihFuZ4iaPd7pATiJBVajvJ8AoFt74IQpnZaHFGudpt1ptTMRLTuUmQSpnMOtSeYT004e2bpdEWOTLCBAMHz0eqR7x6iybNYNBr0hGWsKSaIMeiR9/PQ5FB5Tx/nN62eFtCXPSKh9+xse0lZ7mXbHN7/v6jLSeGDj/fkd1VdN2FvFrRvar+BXYzhziOXHtnq8c8p7kmK+Hxv85ufy6PTkQWLPr3bYuluXGUnnbD9SXXO1J/+0+vVl47Y08eW3j/Ue1Pb7x54MDosdHrIwfzxu7fe4Mbk/OJm/+cNfjwtaNDwT2J5Rd+Ut/zzIvz+/uHH1UPX63rOdPZuys99zXTGt/XzMjj37sHbm/r6z94sfFCU+srSspX16fH3XD+B335RGvj/Zna3ck71ZJaxP9gKKHXXrD/C4exfcnumoJ3LzJ/6H+4eO6m54udd7aEZn7QsA9ZaFsVv3p/6Nbvvo7/bdb3ju4tz3nJseNCPOK/NOfy4Vs/a51xxzPtow1LHo3OftSQji6b9wPnlcHKdy3n9iQkPfMP170NpW81NOfMn753l3Y9N7Lz5rOPT/e73LnHZ/XVf7ZnDbHgwLGr1ce3fB9h9P5Dy5Z3O4aHVxZfPjnnuRkzz59q6/8E0e+ePuK4ltvyy/izDTm7f11Vf6kshDaNbrZOoxZvnfdhz4vnM47fSlmYuPDC7JVN1vfTcsPEWeRc5ZXBmvPZn79RvaQRNO1oal5xqevtplOg7+DhVTfv5SRcKVsj7BzNOtt6iWjGPlzy6ZnOeQ1f9M4eWnX4+ozEiha+8r3fzM+7Fyi4O3fVbaaseP2Dh//dsmLlsqRbRPPp/rGiysXm6k8T9rq1dHHfjgN3/jb9xqb8wcdrUz9bGtzDkCMt966/37Lo5rW7vo97+4afPcrs+ri+X3mwqCff9jp+dwBNbVzp8O9r+e6sL7W39Rtq/zJ8zHcmvOnOisGv9IOv9G58e/vJj5b+OWtM6Qt7Vt+6OtjqKXR2vl5pdwx1d+UUmy4XNUppP+65kvhJ7RXbyznLtw4P9S1f0LjtYnQU4+P6Wma+M/KduLj/AQcpInA=

View File

@@ -1 +0,0 @@
eNrVVk1vG0UY5uPGkV8wWiEhIa+9ttfr2iiHKFRtoVGLaqqitlqNZ1/vDtmd2c7MxnEjHyg9Iy2/oCVRUkUtRQVxgUocOfAHwoHfwjtrO6VO2lThhGXL9vs1z/v1zN7b3wSluRRvP+bCgKLM4B/93b19BXcK0Ob+XgYmkdHOhfODnULxww8SY3LdbzRozus64yapp1TELKFc1JnMGlyM5O5QRpPf9xOgEYa/f/CFBuWuxiBM+bO1rvzcfNLw6s160+8+XWUMcuOeF0xGXMTlk/guz2skglFKDezN1OWPNM9TzqjF2PhKS3GwJoWACnN5sAGQuzTlm/BIgc4xDfhmTxtqCn1vF+PCn3/sZ6A1jeH7K58twP391vu/2fBauxjMKJm6q2kqx+56lbcuH360+wliKJ8PkqJGvICsU0VaXqtDmkG/3e77XXJhffDriTGuKB5zUT74acAzTGtJ+mSNsgQWLuWzvBhidjWS0S0XQa4E3v5qatxrm6w8rCftFafv+23nY9SvtDq9lud5taTttnonKJ4vwTm/lUsN7sVZzpjTyTm/0O9ep2pS7s2g/mCtsHnuZRCxScqdoNv6ZSnAOoLGDqPO83auc1oeYGdJLGWcwtMbrrWuYHDsTfnQ27uqaJzR8pGQLrNleHbDxTLTSMbuAMcQ3EtReUg6TW/YG3UpbfqtJgPai+g51qI96nm9YdRrHZIT81hTECFeTlNd7hpVwONFBoNJDsfnaH8BbN7kJvmUCtLsdT3ief3qbZtcjfXXOFMKm/nXOw+2nfn2OH3Hq/fqncCpOVzgzAkGIY5urJ3+tjNM5TDURmLGEIKgwxQip29h1ZZ1WGzAYNfaGCjCcmgwIWzRLE9Bh1mRGp5TZZaDnG6Bq4fLbSCkPMS9VpNlA6nikCmoShJGXM+VI6wganM6ybB6y045Zi8FTUP01se9dPtVWWugiiXHpIkch8akYcEXImNHITQcVBgVao6OTqqyplLEdtvR38dVsO7KzAVNf1pzxlJt6NwG0EzmYFGGXGxyA/oI411tohBpK8fu206+jAmDDKlBpNhvJEM0FCMe28MLDS9VO8olEuhRJoymEBZ5eEfzu4gftygGhbC8CuhCK0yCJY90mCI9oHMzWCgjORahgCw3kxfePmptuIV1FetIEA4nVWItr9dtdlredPreq1l85TQWxw9KdUOlWQM76OYKa2Qalo21+R/R+7enkftZiHs3eu2FEFiuODN1P2ZzqjInUtV/ZvZjZH7O959UFOyy+U10RMpvSq+vuwz2cDiQJsv9YpMzqcTy5fAyqb57eduZzV6YUJ0gF3Y832/RUbPdhqDZ6QbQ9TvtgAWdTsBgBM0RZThijEZeuzXy293uMPCY3w4gYBEbBoBMmlHBRzi3dnE5rvZN52jYUTsbbY2/UGLwaw2/rlbCAW6gnVDnds1JGa4cMhJ2BVFhqRBxwZDf0GNjTNWM6+cTiL9vvtFZFwsEtz5zOuuZs6CnJTe3qjlnPcYsPPrOl7IgVAHBS5IibdoLz5CRVKRiGxxVlwo9BttRgpfYhq4T5AhiEkArO0JWkXPAoSFyRBRg8wGJm1Szv2WIkWQWofJZRK2TSyMywbMjKT40ZEPIcaWfmdbIV4U2RNMJCqlZMlwgUABEg90Aezg+avGsyDBCRCzB/CucxcK4hvot8fn8/D7ZXkCZkltibQYWpXPYVrhaOferB4G8MOEmVdzeKHYinIW37f/MxZZ/UdgQK5jhVPSdkTtbB2eKr9tvHGo6ffEsgDa3p/8A34xtTA==

View File

@@ -1 +1 @@
eNptVG1MFFcUXZUooj/aWpM2mjputZbK7M7sLAtLo5UuYlBZiCxksa307czbnZHZeePMGwTFJkLVVtRkaGPSxqrIsttuQaTYGENpsZZW0baoxEg3MTZ+tRqjqfUj2kgfCCrR+TXz7r3n3HvOfVMTq4CaLiFlTLOkYKgBHpMP3ayJaXCVAXX8YTQMsYiESGFBka/R0KT+2SLGqp5ltwNVsgEFixpSJd7Go7C9grWHoa6DENQjASRU9dettYZBZRlG5VDRrVks43CmWUdSrFnvrLVqSIbWLKuhQ82aZuURaULB5MAHZZkKQwpQK0kxBQLIwFQAAk23rnuPYCAByiSNl4EhQJqjRSCVG7SDEDAck0GgMAyrZB5saASfsTHrYiIEAhn2rOX5iIh0bLY9NUAr4HmoYhoqPBIkJWS2hNZIaholwKAMMIyT9hQ4pJAZL4dQpYEsVcD2SlrHQFJkMheNpTAkrZpfeQt8ZYvyShZ6ow9BzX1AVWWJB4Pl9pU6UpqHp6VxlQqfDscHNaGJUAo2D2SPtGkvrCJ2KBRjc7ptzL4nqWVAOo6qQ/GOJwMq4MsJDj1stRl9WLz3yRykm035gC8oGgUJNF40m4AWdjlHTakZyuCgZsxT+DTdcPAxHWdjHbbMtlHAepXCm01BIOuw7ZEHj0rixEiOZlw0wx4YBQ2xVkXziDCYDczeEQFlqISwaDaynPtLDeoq2WBYGyVl2NBrIsRLePxIbHjt9hQsGdmELZEc4qrZ6RONNIpxUflAowhxOsW6sjguK52lFuX7mj3DJL5nutTm04CiB4lTC0eWJsaLhlIOhbjnmesSH75ZtCSY35H3MoZd5k33ZLCOVeWoeKVRIvBVyFfqCR18rAvSQkCR1gzRDtb1z+LcLi5d4AI0DAQF2unOzKDdbgdLBxyOTMGZyWY4BVdjhQTMOGtjqRBCIRm28kGaB7wI6YfSmLGcUm92fp6n2U8vQwGEddoHQmZEQQqMFkGNuGHGeRkZAll/DUY9ufSy7FJzv5vlOScrpLNAcHOBTAf9NlmbEZkeyRAZvDtDv4H1xAqNHHWP2TyjLtky9IwTzOy6xILJGwY2x5fYvDeUlNTv0/79JvXTefv3NTm7l4YuHO/jj35xYhY7c6DzTGLD9p1J986e+qR3au3h8Zuq5asdqwsO35yy6djfV28+OHU+NLejbp3T3+B99dc3evqnT5y79IXLG4+6hfRm/5/UNqtLbEzddjrxbUaD/er9ew/CnS2r/Q1TSwLdlxK3jD0n4W1b06F5s5dD5kbt0ppdJ6fg+vfrT06/21X2X5K3smnS7+rO4lJ514xzXdePfl4/se9U0tkdf8HFiyIoeGfFb19PvrJ1fteE3hePrE/xv5zck3x59yuTLzx3PokHydX1lSnV1XmJrhW5V3q2zEntzQMTsg+9nuMoBcknuiceL9TOnR8/v/7HBb3tO2bidukitXHnJf9b14P+vqLlY09fkagD73YeStxJae8585qj9KVpsypatv90d4Jwr/gjGhye/XPKsX8+u7R5/S97WhYX7Gg7mCgRt97q+yMnt1a937r7g4sfb7m/K+/ED292XLt2e5rFMjAwzpJ7fZKTG2ux/A/oUIK0
eNptVH9sE2UYLpJNVFCRKEjMOBog4nrdXa9r1waRrWMTZ9e5FhgMM7/efb0eu94dd1/nuskf+wERRyYfJqCQ4Ny6FsoCm0OQCGF/CEjkh3GRbWCQSMiQn8YgGkIyv41tssDl/rj73u99nvd9nvf76pNVUDckVZnUISkI6oBH5MfA9UkdrotCAzUmIhCFVSFe4vMH2qK6NDA/jJBmuLOygCZZgYLCuqpJvJVXI1lVbFYEGgYQoREPqkJsoKnWHAHVFUithIphdrOMzW4xj20xu8trzboqQ7PbHDWgbraYeZUUoSCyEICyTEUgBai1JJkCQTWKqCAEumFe/z7BUAUok228DKICpDk6DKTKKG0jBAzHOAkUghGN9IOiOsFnrMz6ZBgCgTR7yTQ9HlYNhLsea2A/4HmoIRoqvCpIioi7xRpJs1ACDMkAQQtVYyAhRYpU4IhOOFUJoUYDWaqC3dW0gYCkyKQ7GkkRSArGe4p9gYrCZSuWFiceQuNOoGmyxIPh9Ky1hqp0jPZMo5gGHw+nhpWhiVwKwodyx4rNKokRUxSKsdpdVqbzUWoZkLoT2kj8u0cDGuArCQ49ajhOPEze9+ge1cDtXsD7/BMggc6HcTvQIw77hC71qDLcKE56Sh6nGw2O0yU5K8uSt2sCshFTeNweArIBu8atGM9JET85mnHQDHtoAjZEeozmVUKBv2L2jSkoQ0VEYdzGcq7dOjQ0MsiwIUHSUNSojxNL4ekfkqPT1+orGhuIzfF8Yi4+WqBLFoqxUV4QowhxNsXa3QzjtjuoQm+gwzNKEniiTV0BHShGiFi1dGx2knw4qlRCIeV54rykRg8YLQn4CPmuYFjWUyxpagHPo+C6fP8q4LCFlr/nPfy/LqouAkWqGaEdzhuYx7kcXLbABWkYDAm03ZXjpF0uG0sHbbYcwZ7DOu2Co61KAjhFtKdEVRVluJ8P0Tzgw5B+KA1O5q8qzvUu83SU0aVqUEUGHQAijiuqAhN+qBM3cIqX1ahAToEOE54CujR3FT7g4lw5TDBkY0PZHJtjE+ilK0s7x2QalyE+fIRGboM6YoVOlo5PaprTNMU08kwWcG7TxSVTNwx9kiqyFv+pvNN7/+ufMnbkLbwJBvK6P99T2JzMKLr99oy+xgf//C7PXF3z0rVany9PnJqR+drdsu3OircunTu4c/GRWubVjNt/39p+0tm7Oe3Cop9bzcs/nGdhlq2ZXZy92+WwaC/6pmfKrT/uOn8xOpj3Zd+hq7du6OW3O3sK+j+etlcd3L6p4Ngrd74JfJvMz0x7+rdT8/+d+2Zu5jOhU7/UvfCg/8zlDSurrj7b21oq1RxtmHwseQLO72wOXft1067kgfx7a9fdgasXlg3efH5xXfuSjHdbcB31KZN2o6du4yxxFue5/Nn3/stXjAvp92ylO2cuKtq4wlY7Jf0stdXfMDd/jqh5zvecnPbRgoa6FvEL6kCTdHXuxl3b+pWBM11lhX/M7jzYnP5X315ndeyNEu8JN/aDji39fdtmXJnTcpeb4Wx8eXDWc6f7zgXEqYtCe701wePXQ+UfXNnRfH0B7qbWDGW+XlFtHhwSt966c1apPfwg3WQaGppsur+lqox7ymT6DwO7epk=

View File

@@ -1 +1 @@
eNptVWtsG1UWdgiPwv6gCIraCi2uQQiBrzMPj2OnhG5qO2lIY8eJQxJKa13fuc5MPK/Mw45TWtqEh4BdYKBQIfEojWO3aUiBPiClRS2IR0XDQ4BEgIVKkP2xoouAVVeobLvXjrNN1M4Py3fOud/5zjnfOTNUzGDdEFWlalxUTKxDZJKDYQ8VddxvYcN8oCBjU1D5fFu0Iz5i6eL0bYJpakZdTQ3URI+qYQWKHqTKNRm6BgnQrCH/NQmXYfJJlc9Nb9nokrFhwF5suOrWbXQhlURSTFedq01EaSd06lDhVdmpWHIS606YVDPYSbncLl2VMPGyDKy7Nq13u2SVxxJ50auZgPVwwLT0pEr8DFPHUHbVpaBkYLfLxLJGMiFWcpvyBDYVBQx5kuZ3jsV5QTVMe2Ih9b0QIUwwsYJUXlR67Vd6B0XN7eRxSoImHiOEFVwujD2WxlgDUBIzuDB7y34VapokIliy1/QZqjJeSRCYOQ1faB4r5QNINRTT3h8lJBqaa9pypMaKk/b4aA/96gAwTCgqEikakCDhU9DK9rfmGzSI0gQEVPpnF2YvT8z3UQ17tBWiaMcCSKgjwR6Fuuzz7pv/XrcUU5SxXQy2XRiuYjwfjvXQjMf/2gJgI6cge7TchjcWXMamngNIJRj2y1QBqWpaxPbXVVckEiiVSMr1uSa9ayDV2Ki0NGodGTEbEvmYv39t1t8Y6+xrEhNJXbU6E1yK03oAXeulmVq/n+EA7aE8JGcQFBCTs9ZmuTZ6dbOCukQ23S92rY2kI81MPBxjUYCSWjV4VzQQiHdFlPAaHNYCAR0pmcZ2IxmDuL2pvd3MWg2xOO5INw/0qVysi7mnAfnboi0xmNFoabA76K3lGWOlk1C2MiJfH0nTob54MsP51nQKbISPRsI9kTDfz69uvyeawd5of6Ij0Z28m2lG8zj7aR+gKrR9lNdPlZ6JOcVIWOk1BXuEZvy7dGxoZIbwcIEU0rSMoTxRJz7xYbEyTDujLeeFfX0+RJRqH4kLlttJ+ZytUHcyFMM5aV8dy9Z5fc6m1vh4sBImflFhvhYng2ikiDjDc4NQRIKlpDE/FrzoCBwpjQDpb4k+GVaABzTVwKDCyh7vBu2zWwQ0h/bNzhtQ9V6oiIPlsPaR8ixkBweyPLJ4XshkZSow6GXFJLZQan/liqarpTCEEJANe4ShvBMVy5wax0iuFKApQNGHBgCZfSyJskjqWf6trDLDznOk2G9e6GCqaUyWXtFb7gb19nwPHctExqXY52G8gUDg8MWd5qBY4hKo5Q4t9DLwfDY0IxtvXuhQgdhJGeMDc95A5O3pm8khkfKxqJZJJiFLQ4ZPMhxXy9DkmPLXYl+AqZ0k21BEBKXUTE3VTWBgRPa2mbOn3TIcKG2eepbmWB/JdKVTVJBk8bjDSobUUg5E4JqOJRXye1EKIIgEDGb1ZxdDPZGG1ubgwW4wX0ggqs1+M4qKaihiKlXowDppjD2GJNXiyQrVcSHYCNobeuz9ARqxXjoJ/QHM+XmaBqvJcppD+7/s8qX9W4QS4Z5B9j6BrXfVeb2sa6VThvV+H2lT+cuytVDKVel9r2rLjY8tcpSf6r8+9ZEyTC0O/+e+B2e4Ry6ZqsbTu8ZeP4VinUv0n6q+XfrNgYfvvG7m+9uvvSrUOqp0n3hu4yT7QzC06JmhiStnlgQ3rOMmP379zKkhfPUTv4LJv292nz4arspG90xNPfR+A8zsPfpLEzdTOHhV2/JN73xbFVw0cbzvxfwed8t2sOtvjiX86HvHvzTpY++fOsE/m38s0rPs+LHPudzji1b8+OlmUoSTT08cd5/d/cXJ+tu3Tpy5ZfUu9rb4rb9tGOSXi3f8RfEOKeiTLnRyh3DHn46ZI/GuVafvby/s//in+z44M7jiwL87/7UlNzK8pwmJLU9cs6LmVGiqZ03fC7vBP4TTO4Yhs2r6M/hw9dqVD13+3HLx2V8u23evg9n9h7BiG7th69Jt757uvim/bKvg/fVn9/bDUdSy/uwHmx3PD4cXT1rOm1e9lBWMxX/+KiH/99Fvjn4xvHH7Dfq2k+8su/S68Vjx845zM9f88/CThy7tfUH6+fsXf1+6qdrhOHeu2jE0tXPH2Uscjv8BZV1LvA==
eNptVQ9sE2UUZ6KOqCgmSOKIctY/ENy11961Wzenjm4dY24rWwcdqOPrd9+1R+/uu91917UjoCLRiNNwiyYgCOpKS5ZlIJsKyIgaNBoVjVFxohj/xRAkiIiaGKNfu063wKVtevfe93u/997vvduYSyLDlLFWMiRrBBkAEnpj2htzBuq2kEk2ZVVE4ljMhFrbwwOWIY8vjhOim1UuF9BlJ9aRBmQnxKor6XbBOCAu+l9XUAEmE8VievzhdQ4VmSaIIdNRtXqdA2IaSSOOKkdIhgkGMAbQRKwymqVGkcGAKE4ihnOUOwysIOplmchwrH+g3KFiESn0QUwnLO/0ssQyopj6mcRAQHVUSUAxUbmDIFWnmVArPc05/etzcQREmubJGXMycWwSe3g69b0AQkQxkQaxKGsxeyTWK+vljIgkBRBUzvSaRByktDVUKI89mEBIZ4EiJ1F24qy9D+i6IkOQt7vWmlgbKqbJkrSOLjYP5rNiaU00Yo+2Uiq1ja5QmlZaY9zOCs7J7UuxJgGyptDSsQqgrLJ6wf76VIMOYIKCsMUu2tmJw8NTfbBp724GsLV9GiQwYNzeDQzVJ4xMfW5YGpFVZOcCoYvDFY3/hcvxTrebfl6ehmymNWjvLnTjtWmnETHSLMQUxH6Ry0KMEzKyvywp7eqCUldUrUkEjYCkGb5Aik9G6rvDvYlOsbO9LiGvWN6T1Hp6YxFfXWPYp2i4nnVXCD63XxAEjnU7OSdlwbYEvcujkbZVMVXoDgE+wSl6Q6K1pwUu6Up3dShuogneUL0RiSWal3g9pN6fblLa6paFpe6oN74qtbRD6A14l65oQVGCw8uxVMkF0w2t3UBowXonVNpxqxxquy92X6i+p5qhlK2kLNbIko7C7eGoqKe1DiNYkTJWEnGtB9ebKzwBdRlcEQmJceJua1paOYWzzy+wXJG2jxMqufw1PCkZBWkxErcH3J7KPQYydTpK6NEsLSSxzI0ZKlL0wbu54ky91Nr0v77nZeqoYO2xoCGXM5yHaQZpxsN5vIxbqOK4KsHDNDSHhwLFMOFLKvPlMJ1HU6LqrJ+chxyMW1oCiYOBS87AWH4GaH/z9OnMsiilYxOxRVb2UIRtm1gmbGPdyMTYsdiIAU3uLYS1xwrD0NOb6hGhJYrxZI/K+XsFXo4iC0qjxSO6gfNhKCFWNe0BwcsPFy2TchykuVI5cCznPpRi6QpAiqzKtJ6F3+JGM+2Mlxb7wMUOBCcQ3X05odAN7shUDwOpVMb52P/DCH6///ClnSaheH/+4g5N9zLRVDZuj2oeuNihCPESZw6lJr1ZWbTHb6M3XQD6JAGIXhiFlZzHU4EkiffzfCX0iQD6PZ6DdCnKkKLkm6ljg7AmgnR9k7Q9Xq6CVH711PBuL++jmVYzsgYVS0TtVrQO53MwqxndQAoG4l4osRDAOGIn9Gfn6jpbapsbA69G2KlCYlv1iVdHTsOmJktSth0ZtDH2IFSwJdJNaqBsIMi21Xbao37eX8lFRShAye+N8l62sbZu3yTaf7LL5NdwDiiUexLaI3G+xlElCLyjmlFBTaWPtqnwgnkkm89Vi71d8vSCJ2fNKFwz6feff/r6m5tmuuc8dvavO9/asbr79OAXA8zNmxzhuxbtz7xH5n8oRT2nF/59tn80edz1S39gZ3rh2eptfyyZxZVlL/t02f61YzftOnZm/MeZR3beU3G+4ZsHusbGr36ohfDbrdl9/Rvn/bHnz/ZXcnsuzBvfcmLNDSFPdvjHbiu8M9zRIX3cH9wuPvXTuTdPfrv19X0vnNvg93+1eKU4N7h184kls245eX7bXVVPubgb/hr4aPs8+Mzst8jK2y6/fP+pK8W+O/aTawZuXmz0OU/sDAfm6OP12/oOHvuyZMux69+fW7760In+C8c27X6ndNVR3BYpi303eO2iWw/+tkb8dcupQ3O+cz5+5sifpQv2XLifr1mzYXjId+aFBfi6havfOfrJtULDQNdzn5V1ls0vHbns9s+usvb+XHpntRqfsbn2h10Pbjg99oR6//ldv9eErqoZGF5/r/rp8cPrO98YXffQs2eCO2YHjv4izv1c4TL44wPPPP+1csXR4+9+8XnT92XX3L31cNmp+U9a50ryRZ85Y12k8+SNtAP/AiMbVis=

View File

@@ -1 +1 @@
eNptVQtsU+cVDgLRVlPaTBQQQms8066Q5rfv9fUzWZSmdpI6iZM0NiGhm9Lf//3te+P7yn04thmtgE6jQCXuKgErj6rE2DRNwrOUhoQ92CrYoy+1SEm39N2qXdVpQ5Xo6Ep/O85IBFfy495z/u9855zvnLs1n8SqxsvSohFe0rEKkU5uNHNrXsUDBtb0J3Mi1jmZzXZ2hCNDhspPVXG6rmg1djtUeJusYAnyNiSL9iRtRxzU7eS/IuAiTDYqs+kpYZNVxJoG41iz1jy6yYpkEknSrTXWCBYEi4gt0NIvJ8hPVDZ0SxRDVbNWW1VZwMTH0LBq3fzzaqsos1ggD+KKDhibC+iGGpWJn6arGIrWmhgUNLw5z2HIkpRmyiqynKzp5thCmscgQpggYAnJLC/FzdF4hleqLSyOCVDHw4SchItFMIcTGCsACnwS52ZPmcehogg8ggW7vV+TpZFSMkBPK/hm83CBPSCZS7p5uoOQaAjaO9OknpKFtrlpG308BTQd8pJACgQESPjklKL93HyDAlGCgIBSr8zc7OGx+T6yZh4JQdQRXgAJVcSZR6Aqup2n5j9XDUnnRWzm/Z03hysZb4RjbLTD5j2xAFhLS8g8Uiz6ywsOY11NAyQTDPN5KodkOcFjc3rRbX19KNYXFeuaMt0u/2BHu7Yh7WlsNxLQL6EmIZUJtjUqjXFbWKCNJNVi9FJcCNAeJ+3weL2MC9A2ykZyBkGjTac7m/tl0RH1BjemI2If1YnCzaFWpokRwrwt2Rps07yhxlhXl+EKYKWnRersiEn+9oZWFGn1C20q9vcEbL2phrTLkXxkfcLVNoD62nimoV8NRlUYb4O+zMZOd6C31kIoG0merUs12WwupTXp0CQUHkDKQxyfoR/ObGQTjXExHIjIA93hABfu7fI+Mo8zRbkBVaLtppxeqnCNzSlGwFJc58whmvIeVbGmkHnB23KkkLqhbc0SdeK/XsyXBudwR+sNYa/IBohSzckIZ1RbKLclBFWLg3K4LLS7hmFqXC5Lcygy4i+FidxSmCciKpS0GBFn49wg5BFnSAnMDvtvOQKThREg/S3QJ6MJcEqRNQxKrMyRHtA1uzFAMHBqdt6ArMahxGeKYc3J4iwMZlKDLDJYlksOipQv42T4KDZQ7HTpiKLKhTCEEBA1c8jh842VLHNqHCa5UoCmAEWPp4BKSiHwIk/qWfwurS3NzJLyU2dvdtDJpiELLu8sdoM6P99DxSKRcSH2DRinz+ebuLXTHBRDXHwe1/hCLw3PZ0M7RO3szQ4liMOUNpKa8wY8a07dS276fJjC0EVHfYybdUVdvhhiPZQbuukoZijswK+Q3ccjglJopiKrOtAwIjtaT5tT1SJMFTZPHUO7GDfJtNbCS0gwWBw2ogG5kINWa1FULMiQPYZiAEHEYTCrPzMf6G1vCAX9Z3rAfCGBDmX2/ZCXZE3iY7FcGKukMeYwEmSDJStUxTl/E+hq6DVP+2jEOGkcY1inj4l6WfAQWU5zaP+XXbawf/NQINyTyDzFMXXWGqeTsdZaRFjndZM2Fd8iW3KFXKX4nxaNV+68vax4LSaf69d3dYUSq+mKyX8d2/eFcNu7q5d91Mq3b9nzxuUjtidPXnzry5aXutdkp8Z/fPW3K0frL+fKPz58YdOVQzO/mLi77A/9J5Y8v4t9Z6nnfF91d1/lXy69bH+xo/Lcu5V3TTx+deK7787uF+pH/rj2Ae7KD5Y/F/HsmH5/N/hm8ai15dUvaybPHXztqxU71ZllgNd7Ly+5Z6/nCj044P/oku6uryrv/WWw+oMXyspSn7934M3Et2v2UKsO7l4R/nX5jk9euf3BgLrqh4779vdkVgxtiXy8anPlJvFQw6PlQoV77RrB+megL0ruCd11f7Mwc6Guauq+JduOP3PHvrVM7dF1W39/9av/bNtu7GWlNwZf+9G//+c7MfSTwHR2zROvJv4pOtYHfnPxM/2pddsP/WNlWf21S/qGv+2oeKH8Z7T4ZvwCnz6+7IJtqZo88Oxj0c+XTvz0qbG/Vz1dPdq68pnlLVXbMp13JnYfPKM33L2y7vrqT5+Y+dX5UXkmWl/RAh97e/32o6PjUmrtzn32r186eWXXtd9Z4bfi8m6B/rSOG8Gf3em59y0tKrw4/d+laPuZ+uzjr1/70F5s0OIyZnDa3Uy69T3LDll/
eNptVQ1sE+cZTsgEm5aurSohyrr2ZFWsPz77znd2fE6zyYtxcEjqEDt/ZpCdv/vsu+Tuvsv9+Ce0HaX7KT9pdVU1tCKVlRi7crMAC1BgQNcxKghb127SqgRRVRoq/RlT6TqqjVXZZ8dpE8En3/nue9/vfZ/3fZ/3vW2lDNQNCan1E5JqQp0HJn4x7G0lHY5Y0DB/UlSgKSKh0BWNxcctXZp5SDRNzQi43bwmuZAGVV5yAaS4M7QbiLzpxs+aDKtmCkkk5GfkLQ4FGgafhoYjsHGLAyDsSTUdAUccyjKhQIInhtAw/ksiyySSkNcNh9OhIxliHcuAuuPxTU6HggQo4420ZpKMy0ualp5EWM8wdcgrjkCKlw34eEmEvIBDerfujoKIDNOeXArzAA8AxBagCpAgqWl7Kj0qaU5CgCmZN6GTGDVMoYwhqrCaCrs8DKFG8rKUgcX5s/ZBXtNkCfAVuXvIQOpELSTSzGvwZnG5EgOJ41dN+3AUQwlG3F15nFWVoF1NlIs6mCMNk5dUGaeJlHmMqqhV5b9dLNB4MIyNkLWK2cX5w5OLdZBh7+/kQTS2xCSvA9Hez+uKj51avK9bqikp0C61dt3srib80l2JcdE0/h1aYtnIq8DeX839q0tOQ1PPkwBhI/ZLVBEgNCxBe7Z+xeAgSA0mlZZ4uGM9TCgik2YyvQP+9T0CE7G4kBReb27ojw22ZbMa3RV+VNlgZUm6ifXRHMuyHEm7KBdGQaZp39BAMGG1DSfoDQlvGMRTHWquNWb0eb00YNVcLh7MRvVOobcn2dvTmh8BYeRr8rZlYqHciJVKeyI/UE0qrXC97VQ+2T4IM4mQHvF1tQ81daI01Teiev3BTHYgv264mcCQrYwktIz2iWgoH8u1ZcS1fgVKCdgWDcZ9kUguZOXbUS7ZpsZCLBsM96cXYaZ9FEnVYPso1k9V1uQCZWSopk3RHqcp/8s6NDTcNvCpIk6kaRnbCpik8I/nSrX+2Rdd/xW/VxZCmLD2qbAuOQnKQ3TyecJDebwEzQYoKsByRFtnfKK15iZ+S2Yeiuu8aqQwO9cu9EMJiJY6DIVy6y174FSlB3B9K/Bxh5IwpyEDkjVU9kQ/2T0/OMhIaGq+7Uikp3lVGq26tU9VmyE7mssKwBIEMZNVKG6UZaQktEDqcO2IpqOKGwyIVAx7nPFTkzXJAh3LOFaKpHFq6RM5UsepkCVFwvms3mvTy7ALXpzsYzcrmHjg4DlXYqvVoE4v1tChgmlc8f2VGZbjuJO3VlowxXCVRZ1YqmXAxWhoj2Icu1mhZmIfZUzkFrRJSbBn7scvg6lUU8rDJj0010R5eIYBlAfvQJaFtMcHWO44HoESwFYqxdSQbpIGBHhUm3l7xqnwucroaWFoL+PDkTYTkgpkS4AxKxlClRiMZkLToYx44QBIkYAHIiTn+WeXQgOPBjsjrUf7ycVEIqPa/GeipCJDlVKpYgzquDB2GcjIEvAk1WGxNUx2BwfswxzD+SlAJf2wiRF8nIeMBEMHF6x9SbtCZQyXeBljzwB7SmRaHAGWZRzNhMK3+H24TNWPyZPFSqxq+mz9yft2fr2uuhrwNTe3q/t19SLVeOrGw/5NmciOYyPN9++u/87P+Cs/6v/5EUrYfvTIuX3E22e2Oea+ty4Csh9v3WS+9/cto89ejtUTu1ZuvKO8e2I52p1tvnHtiXce+F9v5tJHZ/574VfZvf84MXsJXd971kG//J8nd+wd6NuxHNm3Hb9rYln7G4XE6VfUC509PXsaGwtvr0lcyR8/fWLg01WrE5PTH+9hpr47Tu0gH3PX1b346Z5HAmOv/WFl784r01vp6W+dvf6XrxHLZuJ3esJ3DQRW7vxG/PKqTUe/OPL9rWs+/8VLwbsd/VNtIhTE7Z/dWPH5e//sOk87tGdXdHn+/dD58ooz6P1y/UV6+vnpsb+9WffhPYc6Yufz9/753LXnftd9pcF77fayOMYcOtAwvc51fKxjs/BZf90j70fG3L//5n3iJz9krtI/7puM6snZPzWebHmqw3k1/tq7zs0bXvH+hvW9uYZ8Pt7y0Xbu12ufmX2r8YHUL+d8/3rh+oOvjlyaOxfd2v3B3dbVhqHxzctfnF1WvPBF30X4xszBp1c9sfqn3m83Srev3sjN/rVze+7pXa2XVr/Df/L65TPtW5zPnEXVEjXUbd732LI+XK//A71xYoc=

View File

@@ -1 +1 @@
View File

@@ -6,5 +6,5 @@
- `BaseChatModel` methods `__call__`, `call_as_llm`, `predict`, `predict_messages`. Will be removed in 0.2.0. Use `BaseChatModel.invoke` instead.
- `BaseChatModel` methods `apredict`, `apredict_messages`. Will be removed in 0.2.0. Use `BaseChatModel.ainvoke` instead.
- `BaseLLM` methods `__call__, `predict`, `predict_messages`. Will be removed in 0.2.0. Use `BaseLLM.invoke` instead.
- `BaseLLM` methods `__call__`, `predict`, `predict_messages`. Will be removed in 0.2.0. Use `BaseLLM.invoke` instead.
- `BaseLLM` methods `apredict`, `apredict_messages`. Will be removed in 0.2.0. Use `BaseLLM.ainvoke` instead.
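As a minimal migration sketch (assuming the `langchain-openai` integration; the model name is a placeholder, and any chat model works the same way):

```python
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o-mini")

# Before (deprecated, removed in 0.2.0):
# text = model.predict("Tell me a joke")

# After:
result = model.invoke("Tell me a joke")  # returns an AIMessage
text = result.content
```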

View File

@@ -15,7 +15,10 @@
* [Messages](/docs/concepts/messages)
:::
Multimodal support is still relatively new and less common, and model providers have not yet standardized on the "best" way to define the API. As such, LangChain's multimodal abstractions are lightweight and flexible, designed to accommodate different model providers' APIs and interaction patterns, but are **not** standardized across models.
LangChain supports multimodal data as input to chat models:
1. Following provider-specific formats
2. Adhering to a cross-provider standard (see [how-to guides](/docs/how_to/#multimodal) for detail)
### How to use multimodal models
@@ -26,38 +29,85 @@ Multimodal support is still relatively new and less common, model providers have
#### Inputs
Some models can accept multimodal inputs, such as images, audio, video, or files. The types of multimodal inputs supported depend on the model provider. For instance, [Google's Gemini](/docs/integrations/chat/google_generative_ai/) supports documents like PDFs as inputs.
Some models can accept multimodal inputs, such as images, audio, video, or files.
The types of multimodal inputs supported depend on the model provider. For instance,
[OpenAI](/docs/integrations/chat/openai/),
[Anthropic](/docs/integrations/chat/anthropic/), and
[Google Gemini](/docs/integrations/chat/google_generative_ai/)
support documents like PDFs as inputs.
Most chat models that support **multimodal inputs** also accept those values in OpenAI's content blocks format. So far this is restricted to image inputs. For models like Gemini which support video and other bytes input, the APIs also support the native, model-specific representations.
The gist of passing multimodal inputs to a chat model is to use content blocks that specify a type and corresponding data. For example, to pass an image to a chat model:
The gist of passing multimodal inputs to a chat model is to use content blocks that
specify a type and corresponding data. For example, to pass an image to a chat model
as URL:
```python
from langchain_core.messages import HumanMessage
message = HumanMessage(
content=[
{"type": "text", "text": "describe the weather in this image"},
{"type": "text", "text": "Describe the weather in this image:"},
{
"type": "image",
"source_type": "url",
"url": "https://...",
},
],
)
response = model.invoke([message])
```
We can also pass the image as in-line data:
```python
from langchain_core.messages import HumanMessage
message = HumanMessage(
content=[
{"type": "text", "text": "Describe the weather in this image:"},
{
"type": "image",
"source_type": "base64",
"data": "<base64 string>",
"mime_type": "image/jpeg",
},
],
)
response = model.invoke([message])
```
To pass a PDF file as in-line data (or URL, as supported by providers such as
Anthropic), just change `"type"` to `"file"` and `"mime_type"` to `"application/pdf"`.
See the [how-to guides](/docs/how_to/#multimodal) for more detail.
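For instance, a sketch of passing a PDF in-line (the base64 payload is a placeholder):

```python
from langchain_core.messages import HumanMessage

message = HumanMessage(
    content=[
        {"type": "text", "text": "Summarize this document:"},
        {
            "type": "file",
            "source_type": "base64",
            "data": "<base64 string>",
            "mime_type": "application/pdf",
        },
    ],
)
response = model.invoke([message])
```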
Most chat models that support multimodal **image** inputs also accept those values in
OpenAI's [Chat Completions format](https://platform.openai.com/docs/guides/images?api-mode=chat):
```python
from langchain_core.messages import HumanMessage
message = HumanMessage(
content=[
{"type": "text", "text": "Describe the weather in this image:"},
{"type": "image_url", "image_url": {"url": image_url}},
],
)
response = model.invoke([message])
```
:::caution
The exact format of the content blocks may vary depending on the model provider. Please refer to the chat model's
integration documentation for the correct format. Find the integration in the [chat model integration table](/docs/integrations/chat/).
:::
Otherwise, chat models will typically accept the native, provider-specific content
block format. See [chat model integrations](/docs/integrations/chat/) for detail
on specific providers.
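For example, a rough sketch of Anthropic's native image block (the exact schema may differ; consult the Anthropic integration page):

```python
from langchain_core.messages import HumanMessage

message = HumanMessage(
    content=[
        {"type": "text", "text": "Describe the weather in this image:"},
        {
            "type": "image",
            "source": {
                "type": "base64",
                "media_type": "image/jpeg",
                "data": "<base64 string>",
            },
        },
    ],
)
response = model.invoke([message])
```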
#### Outputs
Virtually no popular chat models support multimodal outputs at the time of writing (October 2024).
Some chat models support multimodal outputs, such as images and audio. Multimodal
outputs will appear as part of the [AIMessage](/docs/concepts/messages/#aimessage)
response object. See for example:
The only exception is OpenAI's chat model ([gpt-4o-audio-preview](/docs/integrations/chat/openai/)), which can generate audio outputs.
Multimodal outputs will appear as part of the [AIMessage](/docs/concepts/messages/#aimessage) response object.
Please see the [ChatOpenAI](/docs/integrations/chat/openai/) for more information on how to use multimodal outputs.
- Generating [audio outputs](/docs/integrations/chat/openai/#audio-generation-preview) with OpenAI;
- Generating [image outputs](/docs/integrations/chat/google_generative_ai/#multimodal-usage) with Google Gemini.
#### Tools

View File

@@ -92,7 +92,7 @@ structured_model = model.with_structured_output(Questions)
# Define the system prompt
system = """You are a helpful assistant that generates multiple sub-questions related to an input question. \n
The goal is to break down the input into a set of sub-problems / sub-questions that can be answers in isolation. \n"""
The goal is to break down the input into a set of sub-problems / sub-questions that can be answered independently. \n"""
# Pass the question to the model
question = """What are the main components of an LLM-powered autonomous agent system?"""

View File

@@ -126,7 +126,7 @@ Please see the [Configurable Runnables](#configurable-runnables) section for mor
LangChain will automatically try to infer the input and output types of a Runnable based on available information.
Currently, this inference does not work well for more complex Runnables that are built using [LCEL](/docs/concepts/lcel) composition, and the inferred input and / or output types may be incorrect. In these cases, we recommend that users override the inferred input and output types using the `with_types` method ([API Reference](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.with_types
).
)).
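For example, a sketch with a trivial composed chain:

```python
from langchain_core.runnables import RunnableLambda

chain = RunnableLambda(lambda x: x + 1) | RunnableLambda(lambda x: str(x))

# Override the (possibly incorrect) inferred types explicitly
typed_chain = chain.with_types(input_type=int, output_type=str)
```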
## RunnableConfig
@@ -194,7 +194,7 @@ In Python 3.11 and above, this works out of the box, and you do not need to do a
In Python 3.9 and 3.10, if you are using **async code**, you need to manually pass the `RunnableConfig` through to the `Runnable` when invoking it.
This is due to a limitation in [asyncio's tasks](https://docs.python.org/3/library/asyncio-task.html#asyncio.create_task) in Python 3.9 and 3.10 which did
not accept a `context` argument).
not accept a `context` argument.
Propagating the `RunnableConfig` manually is done like so:
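A minimal sketch (assuming some `inner_chain` runnable; the key point is accepting the `config` argument and forwarding it explicitly):

```python
from langchain_core.runnables import RunnableConfig, RunnableLambda

async def call_inner(input: str, config: RunnableConfig) -> str:
    # Forward the config so callbacks propagate on Python 3.9 / 3.10
    return await inner_chain.ainvoke(input, config=config)

outer_chain = RunnableLambda(call_inner)
```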

View File

@@ -66,7 +66,7 @@ This API works with a list of [Document](https://python.langchain.com/api_refere
from langchain_core.documents import Document
document_1 = Document(
page_content="I had chocalate chip pancakes and scrambled eggs for breakfast this morning.",
page_content="I had chocolate chip pancakes and scrambled eggs for breakfast this morning.",
metadata={"source": "tweet"},
)
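The documents can then be added to a vector store; a sketch using the in-memory implementation (and assuming an embeddings integration such as `langchain-openai`):

```python
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_openai import OpenAIEmbeddings

vector_store = InMemoryVectorStore(embedding=OpenAIEmbeddings())
vector_store.add_documents(documents=[document_1])
```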

View File

@@ -13,23 +13,33 @@ Install `uv`: **[documentation on how to install it](https://docs.astral.sh/uv/g
This repository contains multiple packages:
- `langchain-core`: Base interfaces for key abstractions as well as logic for combining them in chains (LangChain Expression Language).
- `langchain-community`: Third-party integrations of various components.
- `langchain`: Chains, agents, and retrieval logic that makes up the cognitive architecture of your applications.
- `langchain-experimental`: Components and chains that are experimental, either in the sense that the techniques are novel and still being tested, or they require giving the LLM more access than would be possible in most production systems.
- Partner integrations: Partner packages in `libs/partners` that are independently version controlled.
:::note
Some LangChain packages live outside the monorepo, see for example
[langchain-community](https://github.com/langchain-ai/langchain-community) for various
third-party integrations and
[langchain-experimental](https://github.com/langchain-ai/langchain-experimental) for
abstractions that are experimental (either in the sense that the techniques are novel
and still being tested, or they require giving the LLM more access than would be
possible in most production systems).
:::
Each of these has its own development environment. Docs are run from the top-level makefile, but development
is split across separate test & release flows.
For this quickstart, start with langchain-community:
For this quickstart, start with `langchain`:
```bash
cd libs/community
cd libs/langchain
```
## Local Development Dependencies
Install langchain-community development requirements (for running langchain, running examples, linting, formatting, tests, and coverage):
Install development requirements (for running langchain, running examples, linting, formatting, tests, and coverage):
```bash
uv sync
```
@@ -62,22 +72,15 @@ make docker_tests
There are also [integration tests and code-coverage](../testing.mdx) available.
### Only develop langchain_core or langchain_community
### Developing langchain_core
If you are only developing `langchain_core` or `langchain_community`, you can simply install the dependencies for the respective projects and run tests:
If you are only developing `langchain_core`, you can simply install the dependencies for the project and run tests:
```bash
cd libs/core
make test
```
Or:
```bash
cd libs/community
make test
```
## Formatting and Linting
Run these locally before submitting a PR; the CI system will check them as well.

View File

@@ -40,7 +40,7 @@
"\n",
"To view the list of separators for a given language, pass a value from this enum into\n",
"```python\n",
"RecursiveCharacterTextSplitter.get_separators_for_language`\n",
"RecursiveCharacterTextSplitter.get_separators_for_language\n",
"```\n",
"\n",
"To instantiate a splitter that is tailored for a specific language, pass a value from the enum into\n",

View File

@@ -336,70 +336,6 @@
"chain.with_config(configurable={\"llm_temperature\": 0.9}).invoke({\"x\": 0})"
]
},
{
"cell_type": "markdown",
"id": "fb9637d0",
"metadata": {},
"source": [
"### With HubRunnables\n",
"\n",
"This is useful to allow for switching of prompts"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "9a9ea077",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ChatPromptValue(messages=[HumanMessage(content=\"You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.\\nQuestion: foo \\nContext: bar \\nAnswer:\")])"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.runnables.hub import HubRunnable\n",
"\n",
"prompt = HubRunnable(\"rlm/rag-prompt\").configurable_fields(\n",
" owner_repo_commit=ConfigurableField(\n",
" id=\"hub_commit\",\n",
" name=\"Hub Commit\",\n",
" description=\"The Hub commit to pull from\",\n",
" )\n",
")\n",
"\n",
"prompt.invoke({\"question\": \"foo\", \"context\": \"bar\"})"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "f33f3cf2",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ChatPromptValue(messages=[HumanMessage(content=\"[INST]<<SYS>> You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.<</SYS>> \\nQuestion: foo \\nContext: bar \\nAnswer: [/INST]\")])"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"prompt.with_config(configurable={\"hub_commit\": \"rlm/rag-prompt-llama\"}).invoke(\n",
" {\"question\": \"foo\", \"context\": \"bar\"}\n",
")"
]
},
{
"cell_type": "markdown",
"id": "79d51519",

View File

@@ -167,7 +167,7 @@
"She was, in 1906, the first woman to become a professor at the University of Paris.\n",
"\"\"\"\n",
"documents = [Document(page_content=text)]\n",
"graph_documents = llm_transformer.convert_to_graph_documents(documents)\n",
"graph_documents = await llm_transformer.aconvert_to_graph_documents(documents)\n",
"print(f\"Nodes:{graph_documents[0].nodes}\")\n",
"print(f\"Relationships:{graph_documents[0].relationships}\")"
]
@@ -205,7 +205,7 @@
" allowed_nodes=[\"Person\", \"Country\", \"Organization\"],\n",
" allowed_relationships=[\"NATIONALITY\", \"LOCATED_IN\", \"WORKED_AT\", \"SPOUSE\"],\n",
")\n",
"graph_documents_filtered = llm_transformer_filtered.convert_to_graph_documents(\n",
"graph_documents_filtered = await llm_transformer_filtered.aconvert_to_graph_documents(\n",
" documents\n",
")\n",
"print(f\"Nodes:{graph_documents_filtered[0].nodes}\")\n",
@@ -245,7 +245,9 @@
" allowed_nodes=[\"Person\", \"Country\", \"Organization\"],\n",
" allowed_relationships=allowed_relationships,\n",
")\n",
"graph_documents_filtered = llm_transformer_tuple.convert_to_graph_documents(documents)\n",
"graph_documents_filtered = await llm_transformer_tuple.aconvert_to_graph_documents(\n",
" documents\n",
")\n",
"print(f\"Nodes:{graph_documents_filtered[0].nodes}\")\n",
"print(f\"Relationships:{graph_documents_filtered[0].relationships}\")"
]
@@ -289,7 +291,9 @@
" allowed_relationships=[\"NATIONALITY\", \"LOCATED_IN\", \"WORKED_AT\", \"SPOUSE\"],\n",
" node_properties=[\"born_year\"],\n",
")\n",
"graph_documents_props = llm_transformer_props.convert_to_graph_documents(documents)\n",
"graph_documents_props = await llm_transformer_props.aconvert_to_graph_documents(\n",
" documents\n",
")\n",
"print(f\"Nodes:{graph_documents_props[0].nodes}\")\n",
"print(f\"Relationships:{graph_documents_props[0].relationships}\")"
]

View File

@@ -50,6 +50,7 @@ See [supported integrations](/docs/integrations/chat/) for details on getting st
- [How to: force a specific tool call](/docs/how_to/tool_choice)
- [How to: work with local models](/docs/how_to/local_llms)
- [How to: init any model in one line](/docs/how_to/chat_models_universal_init/)
- [How to: pass multimodal data directly to models](/docs/how_to/multimodal_inputs/)
### Messages
@@ -67,6 +68,7 @@ See [supported integrations](/docs/integrations/chat/) for details on getting st
- [How to: use few shot examples in chat models](/docs/how_to/few_shot_examples_chat/)
- [How to: partially format prompt templates](/docs/how_to/prompts_partial)
- [How to: compose prompts together](/docs/how_to/prompts_composition)
- [How to: use multimodal prompts](/docs/how_to/multimodal_prompts/)
### Example selectors
@@ -170,7 +172,7 @@ Indexing is the process of keeping your vectorstore in-sync with the underlying
### Tools
LangChain [Tools](/docs/concepts/tools) contain a description of the tool (to pass to the language model) as well as the implementation of the function to call. Refer [here](/docs/integrations/tools/) for a list of pre-buit tools.
LangChain [Tools](/docs/concepts/tools) contain a description of the tool (to pass to the language model) as well as the implementation of the function to call. Refer [here](/docs/integrations/tools/) for a list of pre-built tools.
- [How to: create tools](/docs/how_to/custom_tools)
- [How to: use built-in tools and toolkits](/docs/how_to/tools_builtin)
@@ -351,7 +353,7 @@ LangSmith allows you to closely trace, monitor and evaluate your LLM application
It seamlessly integrates with LangChain and LangGraph, and you can use it to inspect and debug individual steps of your chains and agents as you build.
LangSmith documentation is hosted on a separate site.
You can peruse [LangSmith how-to guides here](https://docs.smith.langchain.com/how_to_guides/), but we'll highlight a few sections that are particularly
You can peruse [LangSmith how-to guides here](https://docs.smith.langchain.com/), but we'll highlight a few sections that are particularly
relevant to LangChain below:
### Evaluation

View File

@@ -5,120 +5,165 @@
"id": "4facdf7f-680e-4d28-908b-2b8408e2a741",
"metadata": {},
"source": [
"# How to pass multimodal data directly to models\n",
"# How to pass multimodal data to models\n",
"\n",
"Here we demonstrate how to pass [multimodal](/docs/concepts/multimodality/) input directly to models. \n",
"We currently expect all input to be passed in the same format as [OpenAI expects](https://platform.openai.com/docs/guides/vision).\n",
"For other model providers that support multimodal input, we have added logic inside the class to convert to the expected format.\n",
"Here we demonstrate how to pass [multimodal](/docs/concepts/multimodality/) input directly to models.\n",
"\n",
"In this example we will ask a [model](/docs/concepts/chat_models/#multimodality) to describe an image."
"LangChain supports multimodal data as input to chat models:\n",
"\n",
"1. Following provider-specific formats\n",
"2. Adhering to a cross-provider standard\n",
"\n",
"Below, we demonstrate the cross-provider standard. See [chat model integrations](/docs/integrations/chat/) for detail\n",
"on native formats for specific providers.\n",
"\n",
":::note\n",
"\n",
"Most chat models that support multimodal **image** inputs also accept those values in\n",
"OpenAI's [Chat Completions format](https://platform.openai.com/docs/guides/images?api-mode=chat):\n",
"\n",
"```python\n",
"{\n",
" \"type\": \"image_url\",\n",
" \"image_url\": {\"url\": image_url},\n",
"}\n",
"```\n",
":::"
]
},
{
"cell_type": "markdown",
"id": "e30a4ff0-ab38-41a7-858c-a93f99bb2f1b",
"metadata": {},
"source": [
"## Images\n",
"\n",
"Many providers will accept images passed in-line as base64 data. Some will additionally accept an image from a URL directly.\n",
"\n",
"### Images from base64 data\n",
"\n",
"To pass images in-line, format them as content blocks of the following form:\n",
"\n",
"```python\n",
"{\n",
" \"type\": \"image\",\n",
" \"source_type\": \"base64\",\n",
" \"mime_type\": \"image/jpeg\", # or image/png, etc.\n",
" \"data\": \"<base64 data string>\",\n",
"}\n",
"```\n",
"\n",
"Example:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "0d9fd81a-b7f0-445a-8e3d-cfc2d31fdd59",
"execution_count": 10,
"id": "1fcf7b27-1cc3-420a-b920-0420b5892e20",
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The image shows a beautiful clear day with bright blue skies and wispy cirrus clouds stretching across the horizon. The clouds are thin and streaky, creating elegant patterns against the blue backdrop. The lighting suggests it's during the day, possibly late afternoon given the warm, golden quality of the light on the grass. The weather appears calm with no signs of wind (the grass looks relatively still) and no indication of rain. It's the kind of perfect, mild weather that's ideal for walking along the wooden boardwalk through the marsh grass.\n"
]
}
],
"source": [
"image_url = \"https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg\""
"import base64\n",
"\n",
"import httpx\n",
"from langchain.chat_models import init_chat_model\n",
"\n",
"# Fetch image data\n",
"image_url = \"https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg\"\n",
"image_data = base64.b64encode(httpx.get(image_url).content).decode(\"utf-8\")\n",
"\n",
"\n",
"# Pass to LLM\n",
"llm = init_chat_model(\"anthropic:claude-3-5-sonnet-latest\")\n",
"\n",
"message = {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" {\n",
" \"type\": \"text\",\n",
" \"text\": \"Describe the weather in this image:\",\n",
" },\n",
" # highlight-start\n",
" {\n",
" \"type\": \"image\",\n",
" \"source_type\": \"base64\",\n",
" \"data\": image_data,\n",
" \"mime_type\": \"image/jpeg\",\n",
" },\n",
" # highlight-end\n",
" ],\n",
"}\n",
"response = llm.invoke([message])\n",
"print(response.text())"
]
},
{
"cell_type": "markdown",
"id": "ee2b678a-01dd-40c1-81ff-ddac22be21b7",
"metadata": {},
"source": [
"See [LangSmith trace](https://smith.langchain.com/public/eab05a31-54e8-4fc9-911f-56805da67bef/r) for more detail.\n",
"\n",
"### Images from a URL\n",
"\n",
"Some providers (including [OpenAI](/docs/integrations/chat/openai/),\n",
"[Anthropic](/docs/integrations/chat/anthropic/), and\n",
"[Google Gemini](/docs/integrations/chat/google_generative_ai/)) will also accept images from URLs directly.\n",
"\n",
"To pass images as URLs, format them as content blocks of the following form:\n",
"\n",
"```python\n",
"{\n",
" \"type\": \"image\",\n",
" \"source_type\": \"url\",\n",
" \"url\": \"https://...\",\n",
"}\n",
"```\n",
"\n",
"Example:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "fb896ce9",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.messages import HumanMessage\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"model = ChatOpenAI(model=\"gpt-4o\")"
]
},
{
"cell_type": "markdown",
"id": "4fca4da7",
"metadata": {},
"source": [
"The most commonly supported way to pass in images is to pass it in as a byte string.\n",
"This should work for most model integrations."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "9ca1040c",
"metadata": {},
"outputs": [],
"source": [
"import base64\n",
"\n",
"import httpx\n",
"\n",
"image_data = base64.b64encode(httpx.get(image_url).content).decode(\"utf-8\")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "ec680b6b",
"id": "99d27f8f-ae78-48bc-9bf2-3cef35213ec7",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The weather in the image appears to be clear and pleasant. The sky is mostly blue with scattered, light clouds, suggesting a sunny day with minimal cloud cover. There is no indication of rain or strong winds, and the overall scene looks bright and calm. The lush green grass and clear visibility further indicate good weather conditions.\n"
"The weather in this image appears to be pleasant and clear. The sky is mostly blue with a few scattered, light clouds, and there is bright sunlight illuminating the green grass and plants. There are no signs of rain or stormy conditions, suggesting it is a calm, likely warm day—typical of spring or summer.\n"
]
}
],
"source": [
"message = HumanMessage(\n",
" content=[\n",
" {\"type\": \"text\", \"text\": \"describe the weather in this image\"},\n",
"message = {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" {\n",
" \"type\": \"image_url\",\n",
" \"image_url\": {\"url\": f\"data:image/jpeg;base64,{image_data}\"},\n",
" \"type\": \"text\",\n",
" \"text\": \"Describe the weather in this image:\",\n",
" },\n",
" {\n",
" \"type\": \"image\",\n",
" # highlight-start\n",
" \"source_type\": \"url\",\n",
" \"url\": image_url,\n",
" # highlight-end\n",
" },\n",
" ],\n",
")\n",
"response = model.invoke([message])\n",
"print(response.content)"
]
},
{
"cell_type": "markdown",
"id": "8656018e-c56d-47d2-b2be-71e87827f90a",
"metadata": {},
"source": [
"We can feed the image URL directly in a content block of type \"image_url\". Note that only some model providers support this."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "a8819cf3-5ddc-44f0-889a-19ca7b7fe77e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The weather in the image appears to be clear and sunny. The sky is mostly blue with a few scattered clouds, suggesting good visibility and a likely pleasant temperature. The bright sunlight is casting distinct shadows on the grass and vegetation, indicating it is likely daytime, possibly late morning or early afternoon. The overall ambiance suggests a warm and inviting day, suitable for outdoor activities.\n"
]
}
],
"source": [
"message = HumanMessage(\n",
" content=[\n",
" {\"type\": \"text\", \"text\": \"describe the weather in this image\"},\n",
" {\"type\": \"image_url\", \"image_url\": {\"url\": image_url}},\n",
" ],\n",
")\n",
"response = model.invoke([message])\n",
"print(response.content)"
"}\n",
"response = llm.invoke([message])\n",
"print(response.text())"
]
},
{
@@ -126,12 +171,12 @@
"id": "1c470309",
"metadata": {},
"source": [
"We can also pass in multiple images."
"We can also pass in multiple images:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 4,
"id": "325fb4ca",
"metadata": {},
"outputs": [
@@ -139,20 +184,460 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Yes, the two images are the same. They both depict a wooden boardwalk extending through a grassy field under a blue sky with light clouds. The scenery, lighting, and composition are identical.\n"
"Yes, these two images are the same. They depict a wooden boardwalk going through a grassy field under a blue sky with some clouds. The colors, composition, and elements in both images are identical.\n"
]
}
],
"source": [
"message = HumanMessage(\n",
" content=[\n",
" {\"type\": \"text\", \"text\": \"are these two images the same?\"},\n",
" {\"type\": \"image_url\", \"image_url\": {\"url\": image_url}},\n",
" {\"type\": \"image_url\", \"image_url\": {\"url\": image_url}},\n",
"message = {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" {\"type\": \"text\", \"text\": \"Are these two images the same?\"},\n",
" {\"type\": \"image\", \"source_type\": \"url\", \"url\": image_url},\n",
" {\"type\": \"image\", \"source_type\": \"url\", \"url\": image_url},\n",
" ],\n",
")\n",
"response = model.invoke([message])\n",
"print(response.content)"
"}\n",
"response = llm.invoke([message])\n",
"print(response.text())"
]
},
{
"cell_type": "markdown",
"id": "d72b83e6-8d21-448e-b5df-d5b556c3ccc8",
"metadata": {},
"source": [
"## Documents (PDF)\n",
"\n",
"Some providers (including [OpenAI](/docs/integrations/chat/openai/),\n",
"[Anthropic](/docs/integrations/chat/anthropic/), and\n",
"[Google Gemini](/docs/integrations/chat/google_generative_ai/)) will accept PDF documents.\n",
"\n",
"### Documents from base64 data\n",
"\n",
"To pass documents in-line, format them as content blocks of the following form:\n",
"\n",
"```python\n",
"{\n",
" \"type\": \"file\",\n",
" \"source_type\": \"base64\",\n",
" \"mime_type\": \"application/pdf\",\n",
" \"data\": \"<base64 data string>\",\n",
"}\n",
"```\n",
"\n",
"Example:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "6c1455a9-699a-4702-a7e0-7f6eaec76a21",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"This document appears to be a sample PDF file that contains Lorem ipsum placeholder text. It begins with a title \"Sample PDF\" followed by the subtitle \"This is a simple PDF file. Fun fun fun.\"\n",
"\n",
"The rest of the document consists of several paragraphs of Lorem ipsum text, which is a commonly used placeholder text in design and publishing. The text is formatted in a clean, readable layout with consistent paragraph spacing. The document appears to be a single page containing four main paragraphs of this placeholder text.\n",
"\n",
"The Lorem ipsum text, while appearing to be Latin, is actually scrambled Latin-like text that is used primarily to demonstrate the visual form of a document or typeface without the distraction of meaningful content. It's commonly used in publishing and graphic design when the actual content is not yet available but the layout needs to be demonstrated.\n",
"\n",
"The document has a professional, simple layout with generous margins and clear paragraph separation, making it an effective example of basic PDF formatting and structure.\n"
]
}
],
"source": [
"import base64\n",
"\n",
"import httpx\n",
"from langchain.chat_models import init_chat_model\n",
"\n",
"# Fetch PDF data\n",
"pdf_url = \"https://pdfobject.com/pdf/sample.pdf\"\n",
"pdf_data = base64.b64encode(httpx.get(pdf_url).content).decode(\"utf-8\")\n",
"\n",
"\n",
"# Pass to LLM\n",
"llm = init_chat_model(\"anthropic:claude-3-5-sonnet-latest\")\n",
"\n",
"message = {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" {\n",
" \"type\": \"text\",\n",
" \"text\": \"Describe the document:\",\n",
" },\n",
" # highlight-start\n",
" {\n",
" \"type\": \"file\",\n",
" \"source_type\": \"base64\",\n",
" \"data\": pdf_data,\n",
" \"mime_type\": \"application/pdf\",\n",
" },\n",
" # highlight-end\n",
" ],\n",
"}\n",
"response = llm.invoke([message])\n",
"print(response.text())"
]
},
{
"cell_type": "markdown",
"id": "efb271da-8fdd-41b5-9f29-be6f8c76f49b",
"metadata": {},
"source": [
"### Documents from a URL\n",
"\n",
"Some providers (specifically [Anthropic](/docs/integrations/chat/anthropic/))\n",
"will also accept documents from URLs directly.\n",
"\n",
"To pass documents as URLs, format them as content blocks of the following form:\n",
"\n",
"```python\n",
"{\n",
" \"type\": \"file\",\n",
" \"source_type\": \"url\",\n",
" \"url\": \"https://...\",\n",
"}\n",
"```\n",
"\n",
"Example:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "55e1d937-3b22-4deb-b9f0-9e688f0609dc",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"This document appears to be a sample PDF file with both text and an image. It begins with a title \"Sample PDF\" followed by the text \"This is a simple PDF file. Fun fun fun.\" The rest of the document contains Lorem ipsum placeholder text arranged in several paragraphs. The content is shown both as text and as an image of the formatted PDF, with the same content displayed in a clean, formatted layout with consistent spacing and typography. The document consists of a single page containing this sample text.\n"
]
}
],
"source": [
"message = {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" {\n",
" \"type\": \"text\",\n",
" \"text\": \"Describe the document:\",\n",
" },\n",
" {\n",
" \"type\": \"file\",\n",
" # highlight-start\n",
" \"source_type\": \"url\",\n",
" \"url\": pdf_url,\n",
" # highlight-end\n",
" },\n",
" ],\n",
"}\n",
"response = llm.invoke([message])\n",
"print(response.text())"
]
},
{
"cell_type": "markdown",
"id": "1e661c26-e537-4721-8268-42c0861cb1e6",
"metadata": {},
"source": [
"## Audio\n",
"\n",
"Some providers (including [OpenAI](/docs/integrations/chat/openai/) and\n",
"[Google Gemini](/docs/integrations/chat/google_generative_ai/)) will accept audio inputs.\n",
"\n",
"### Audio from base64 data\n",
"\n",
"To pass audio in-line, format them as content blocks of the following form:\n",
"\n",
"```python\n",
"{\n",
" \"type\": \"audio\",\n",
" \"source_type\": \"base64\",\n",
" \"mime_type\": \"audio/wav\", # or appropriate mime-type\n",
" \"data\": \"<base64 data string>\",\n",
"}\n",
"```\n",
"\n",
"Example:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "a0b91b29-dbd6-4c94-8f24-05471adc7598",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The audio appears to consist primarily of bird sounds, specifically bird vocalizations like chirping and possibly other bird songs.\n"
]
}
],
"source": [
"import base64\n",
"\n",
"import httpx\n",
"from langchain.chat_models import init_chat_model\n",
"\n",
"# Fetch audio data\n",
"audio_url = \"https://upload.wikimedia.org/wikipedia/commons/3/3d/Alcal%C3%A1_de_Henares_%28RPS_13-04-2024%29_canto_de_ruise%C3%B1or_%28Luscinia_megarhynchos%29_en_el_Soto_del_Henares.wav\"\n",
"audio_data = base64.b64encode(httpx.get(audio_url).content).decode(\"utf-8\")\n",
"\n",
"\n",
"# Pass to LLM\n",
"llm = init_chat_model(\"google_genai:gemini-2.0-flash-001\")\n",
"\n",
"message = {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" {\n",
" \"type\": \"text\",\n",
" \"text\": \"Describe this audio:\",\n",
" },\n",
" # highlight-start\n",
" {\n",
" \"type\": \"audio\",\n",
" \"source_type\": \"base64\",\n",
" \"data\": audio_data,\n",
" \"mime_type\": \"audio/wav\",\n",
" },\n",
" # highlight-end\n",
" ],\n",
"}\n",
"response = llm.invoke([message])\n",
"print(response.text())"
]
},
{
"cell_type": "markdown",
"id": "92f55a6c-2e4a-4175-8444-8b9aacd6a13e",
"metadata": {},
"source": [
"## Provider-specific parameters\n",
"\n",
"Some providers will support or require additional fields on content blocks containing multimodal data.\n",
"For example, Anthropic lets you specify [caching](/docs/integrations/chat/anthropic/#prompt-caching) of\n",
"specific content to reduce token consumption.\n",
"\n",
"To use these fields, you can:\n",
"\n",
"1. Store them on directly on the content block; or\n",
"2. Use the native format supported by each provider (see [chat model integrations](/docs/integrations/chat/) for detail).\n",
"\n",
"We show three examples below.\n",
"\n",
"### Example: Anthropic prompt caching"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "83593b9d-a8d3-4c99-9dac-64e0a9d397cb",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The image shows a beautiful, clear day with partly cloudy skies. The sky is a vibrant blue with wispy, white cirrus clouds stretching across it. The lighting suggests it's during daylight hours, possibly late afternoon or early evening given the warm, golden quality of the light on the grass. The weather appears calm with no signs of wind (the grass looks relatively still) and no threatening weather conditions. It's the kind of perfect weather you'd want for a walk along this wooden boardwalk through the marshland or grassland area.\n"
]
},
{
"data": {
"text/plain": [
"{'input_tokens': 1586,\n",
" 'output_tokens': 117,\n",
" 'total_tokens': 1703,\n",
" 'input_token_details': {'cache_read': 0, 'cache_creation': 1582}}"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm = init_chat_model(\"anthropic:claude-3-5-sonnet-latest\")\n",
"\n",
"message = {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" {\n",
" \"type\": \"text\",\n",
" \"text\": \"Describe the weather in this image:\",\n",
" },\n",
" {\n",
" \"type\": \"image\",\n",
" \"source_type\": \"url\",\n",
" \"url\": image_url,\n",
" # highlight-next-line\n",
" \"cache_control\": {\"type\": \"ephemeral\"},\n",
" },\n",
" ],\n",
"}\n",
"response = llm.invoke([message])\n",
"print(response.text())\n",
"response.usage_metadata"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "9bbf578e-794a-4dc0-a469-78c876ccd4a3",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Clear blue skies, wispy clouds.\n"
]
},
{
"data": {
"text/plain": [
"{'input_tokens': 1716,\n",
" 'output_tokens': 12,\n",
" 'total_tokens': 1728,\n",
" 'input_token_details': {'cache_read': 1582, 'cache_creation': 0}}"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"next_message = {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" {\n",
" \"type\": \"text\",\n",
" \"text\": \"Summarize that in 5 words.\",\n",
" }\n",
" ],\n",
"}\n",
"response = llm.invoke([message, response, next_message])\n",
"print(response.text())\n",
"response.usage_metadata"
]
},
{
"cell_type": "markdown",
"id": "915b9443-5964-43b8-bb08-691c1ba59065",
"metadata": {},
"source": [
"### Example: Anthropic citations"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "ea7707a1-5660-40a1-a10f-0df48a028689",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'citations': [{'cited_text': 'Sample PDF\\r\\nThis is a simple PDF file. Fun fun fun.\\r\\n',\n",
" 'document_index': 0,\n",
" 'document_title': None,\n",
" 'end_page_number': 2,\n",
" 'start_page_number': 1,\n",
" 'type': 'page_location'}],\n",
" 'text': 'Simple PDF file: fun fun',\n",
" 'type': 'text'}]"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"message = {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" {\n",
" \"type\": \"text\",\n",
" \"text\": \"Generate a 5 word summary of this document.\",\n",
" },\n",
" {\n",
" \"type\": \"file\",\n",
" \"source_type\": \"base64\",\n",
" \"data\": pdf_data,\n",
" \"mime_type\": \"application/pdf\",\n",
" # highlight-next-line\n",
" \"citations\": {\"enabled\": True},\n",
" },\n",
" ],\n",
"}\n",
"response = llm.invoke([message])\n",
"response.content"
]
},
{
"cell_type": "markdown",
"id": "e26991eb-e769-41f4-b6e0-63d81f2c7d67",
"metadata": {},
"source": [
"### Example: OpenAI file names\n",
"\n",
"OpenAI requires that PDF documents be associated with file names:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "ae076c9b-ff8f-461d-9349-250f396c9a25",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The document is a sample PDF file containing placeholder text. It consists of one page, titled \"Sample PDF\". The content is a mixture of English and the commonly used filler text \"Lorem ipsum dolor sit amet...\" and its extensions, which are often used in publishing and web design as generic text to demonstrate font, layout, and other visual elements.\n",
"\n",
"**Key points about the document:**\n",
"- Length: 1 page\n",
"- Purpose: Demonstrative/sample content\n",
"- Content: No substantive or meaningful information, just demonstration text in paragraph form\n",
"- Language: English (with the Latin-like \"Lorem Ipsum\" text used for layout purposes)\n",
"\n",
"There are no charts, tables, diagrams, or images on the page—only plain text. The document serves as an example of what a PDF file looks like rather than providing actual, useful content.\n"
]
}
],
"source": [
"llm = init_chat_model(\"openai:gpt-4.1\")\n",
"\n",
"message = {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" {\n",
" \"type\": \"text\",\n",
" \"text\": \"Describe the document:\",\n",
" },\n",
" {\n",
" \"type\": \"file\",\n",
" \"source_type\": \"base64\",\n",
" \"data\": pdf_data,\n",
" \"mime_type\": \"application/pdf\",\n",
" # highlight-next-line\n",
" \"filename\": \"my-file\",\n",
" },\n",
" ],\n",
"}\n",
"response = llm.invoke([message])\n",
"print(response.text())"
]
},
{
@@ -167,16 +652,22 @@
},
{
"cell_type": "code",
"execution_count": 8,
"id": "cd22ea82-2f93-46f9-9f7a-6aaf479fcaa9",
"execution_count": 4,
"id": "0f68cce7-350b-4cde-bc40-d3a169551fc3",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[{'name': 'weather_tool', 'args': {'weather': 'sunny'}, 'id': 'call_BSX4oq4SKnLlp2WlzDhToHBr'}]\n"
]
"data": {
"text/plain": [
"[{'name': 'weather_tool',\n",
" 'args': {'weather': 'sunny'},\n",
" 'id': 'toolu_01G6JgdkhwggKcQKfhXZQPjf',\n",
" 'type': 'tool_call'}]"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
@@ -191,16 +682,17 @@
" pass\n",
"\n",
"\n",
"model_with_tools = model.bind_tools([weather_tool])\n",
"llm_with_tools = llm.bind_tools([weather_tool])\n",
"\n",
"message = HumanMessage(\n",
" content=[\n",
" {\"type\": \"text\", \"text\": \"describe the weather in this image\"},\n",
" {\"type\": \"image_url\", \"image_url\": {\"url\": image_url}},\n",
"message = {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" {\"type\": \"text\", \"text\": \"Describe the weather in this image:\"},\n",
" {\"type\": \"image\", \"source_type\": \"url\", \"url\": image_url},\n",
" ],\n",
")\n",
"response = model_with_tools.invoke([message])\n",
"print(response.tool_calls)"
"}\n",
"response = llm_with_tools.invoke([message])\n",
"response.tool_calls"
]
}
],
@@ -220,7 +712,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -9,157 +9,148 @@
"\n",
"Here we demonstrate how to use prompt templates to format [multimodal](/docs/concepts/multimodality/) inputs to models. \n",
"\n",
"In this example we will ask a [model](/docs/concepts/chat_models/#multimodality) to describe an image."
"To use prompt templates in the context of multimodal data, we can templatize elements of the corresponding content block.\n",
"For example, below we define a prompt that takes a URL for an image as a parameter:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "0d9fd81a-b7f0-445a-8e3d-cfc2d31fdd59",
"metadata": {},
"outputs": [],
"source": [
"import base64\n",
"\n",
"import httpx\n",
"\n",
"image_url = \"https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg\"\n",
"image_data = base64.b64encode(httpx.get(image_url).content).decode(\"utf-8\")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 1,
"id": "2671f995",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"model = ChatOpenAI(model=\"gpt-4o\")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "4ee35e4f",
"metadata": {},
"outputs": [],
"source": [
"prompt = ChatPromptTemplate.from_messages(\n",
"# Define prompt\n",
"prompt = ChatPromptTemplate(\n",
" [\n",
" (\"system\", \"Describe the image provided\"),\n",
" (\n",
" \"user\",\n",
" [\n",
" {\n",
" \"role\": \"system\",\n",
" \"content\": \"Describe the image provided.\",\n",
" },\n",
" {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" {\n",
" \"type\": \"image_url\",\n",
" \"image_url\": {\"url\": \"data:image/jpeg;base64,{image_data}\"},\n",
" }\n",
" \"type\": \"image\",\n",
" \"source_type\": \"url\",\n",
" # highlight-next-line\n",
" \"url\": \"{image_url}\",\n",
" },\n",
" ],\n",
" ),\n",
" },\n",
" ]\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "089f75c2",
"metadata": {},
"outputs": [],
"source": [
"chain = prompt | model"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "02744b06",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The image depicts a sunny day with a beautiful blue sky filled with scattered white clouds. The sky has varying shades of blue, ranging from a deeper hue near the horizon to a lighter, almost pale blue higher up. The white clouds are fluffy and scattered across the expanse of the sky, creating a peaceful and serene atmosphere. The lighting and cloud patterns suggest pleasant weather conditions, likely during the daytime hours on a mild, sunny day in an outdoor natural setting.\n"
]
}
],
"source": [
"response = chain.invoke({\"image_data\": image_data})\n",
"print(response.content)"
]
},
{
"cell_type": "markdown",
"id": "e9b9ebf6",
"id": "f75d2e26-5b9a-4d5f-94a7-7f98f5666f6d",
"metadata": {},
"source": [
"We can also pass in multiple images."
"Let's use this prompt to pass an image to a [chat model](/docs/concepts/chat_models/#multimodality):"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "02190ee3",
"metadata": {},
"outputs": [],
"source": [
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", \"compare the two pictures provided\"),\n",
" (\n",
" \"user\",\n",
" [\n",
" {\n",
" \"type\": \"image_url\",\n",
" \"image_url\": {\"url\": \"data:image/jpeg;base64,{image_data1}\"},\n",
" },\n",
" {\n",
" \"type\": \"image_url\",\n",
" \"image_url\": {\"url\": \"data:image/jpeg;base64,{image_data2}\"},\n",
" },\n",
" ],\n",
" ),\n",
" ]\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "42af057b",
"metadata": {},
"outputs": [],
"source": [
"chain = prompt | model"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "513abe00",
"execution_count": 2,
"id": "5df2e558-321d-4cf7-994e-2815ac37e704",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The two images provided are identical. Both images feature a wooden boardwalk path extending through a lush green field under a bright blue sky with some clouds. The perspective, colors, and elements in both images are exactly the same.\n"
"This image shows a beautiful wooden boardwalk cutting through a lush green wetland or marsh area. The boardwalk extends straight ahead toward the horizon, creating a strong leading line through the composition. On either side, tall green grasses sway in what appears to be a summer or late spring setting. The sky is particularly striking, with wispy cirrus clouds streaking across a vibrant blue background. In the distance, you can see a tree line bordering the wetland area. The lighting suggests this may be during \"golden hour\" - either early morning or late afternoon - as there's a warm, gentle quality to the light that's illuminating the scene. The wooden planks of the boardwalk appear well-maintained and provide safe passage through what would otherwise be difficult terrain to traverse. It's the kind of scene you might find in a nature preserve or wildlife refuge designed to give visitors access to observe wetland ecosystems while protecting the natural environment.\n"
]
}
],
"source": [
"response = chain.invoke({\"image_data1\": image_data, \"image_data2\": image_data})\n",
"print(response.content)"
"from langchain.chat_models import init_chat_model\n",
"\n",
"llm = init_chat_model(\"anthropic:claude-3-5-sonnet-latest\")\n",
"\n",
"url = \"https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg\"\n",
"\n",
"chain = prompt | llm\n",
"response = chain.invoke({\"image_url\": url})\n",
"print(response.text())"
]
},
{
"cell_type": "markdown",
"id": "f4cfdc50-4a9f-4888-93b4-af697366b0f3",
"metadata": {},
"source": [
"Note that we can templatize arbitrary elements of the content block:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "53c88ebb-dd57-40c8-8542-b2c916706653",
"metadata": {},
"outputs": [],
"source": [
"prompt = ChatPromptTemplate(\n",
" [\n",
" {\n",
" \"role\": \"system\",\n",
" \"content\": \"Describe the image provided.\",\n",
" },\n",
" {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" {\n",
" \"type\": \"image\",\n",
" \"source_type\": \"base64\",\n",
" \"mime_type\": \"{image_mime_type}\",\n",
" \"data\": \"{image_data}\",\n",
" \"cache_control\": {\"type\": \"{cache_type}\"},\n",
" },\n",
" ],\n",
" },\n",
" ]\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "25e4829e-0073-49a8-9669-9f43e5778383",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"This image shows a beautiful wooden boardwalk cutting through a lush green marsh or wetland area. The boardwalk extends straight ahead toward the horizon, creating a strong leading line in the composition. The surrounding vegetation consists of tall grass and reeds in vibrant green hues, with some bushes and trees visible in the background. The sky is particularly striking, featuring a bright blue color with wispy white clouds streaked across it. The lighting suggests this photo was taken during the \"golden hour\" - either early morning or late afternoon - giving the scene a warm, peaceful quality. The raised wooden path provides accessible access through what would otherwise be difficult terrain to traverse, allowing visitors to experience and appreciate this natural environment.\n"
]
}
],
"source": [
"import base64\n",
"\n",
"import httpx\n",
"\n",
"image_data = base64.b64encode(httpx.get(url).content).decode(\"utf-8\")\n",
"\n",
"chain = prompt | llm\n",
"response = chain.invoke(\n",
" {\n",
" \"image_data\": image_data,\n",
" \"image_mime_type\": \"image/jpeg\",\n",
" \"cache_type\": \"ephemeral\",\n",
" }\n",
")\n",
"print(response.text())"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ea8152c3",
"id": "424defe8-d85c-4e45-a88d-bf6f910d5ebb",
"metadata": {},
"outputs": [],
"source": []
@@ -181,7 +172,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -100,7 +100,7 @@
"id": "8554bae5",
"metadata": {},
"source": [
"A chat prompt is made up a of a list of messages. Similarly to the above example, we can concatenate chat prompt templates. Each new element is a new message in the final prompt.\n",
"A chat prompt is made up of a list of messages. Similarly to the above example, we can concatenate chat prompt templates. Each new element is a new message in the final prompt.\n",
"\n",
"First, let's initialize the a [`ChatPromptTemplate`](https://python.langchain.com/api_reference/core/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) with a [`SystemMessage`](https://python.langchain.com/api_reference/core/messages/langchain_core.messages.system.SystemMessage.html)."
]

View File

@@ -6,9 +6,9 @@
"source": [
"# How to disable parallel tool calling\n",
"\n",
":::info OpenAI-specific\n",
":::info Provider-specific\n",
"\n",
"This API is currently only supported by OpenAI.\n",
"This API is currently only supported by OpenAI and Anthropic.\n",
"\n",
":::\n",
"\n",
@@ -55,12 +55,12 @@
"import os\n",
"from getpass import getpass\n",
"\n",
"from langchain_openai import ChatOpenAI\n",
"from langchain.chat_models import init_chat_model\n",
"\n",
"if \"OPENAI_API_KEY\" not in os.environ:\n",
" os.environ[\"OPENAI_API_KEY\"] = getpass()\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4o-mini\", temperature=0)"
"llm = init_chat_model(\"openai:gpt-4.1-mini\")"
]
},
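{
"cell_type": "markdown",
"id": "added-parallel-example-note",
"metadata": {},
"source": [
"Below is a minimal sketch of the key call (the `add` tool here is a hypothetical example defined inline for illustration; the flag is passed through `bind_tools`):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "added-parallel-example",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.tools import tool\n",
"\n",
"\n",
"@tool\n",
"def add(a: int, b: int) -> int:\n",
"    \"\"\"Add two integers.\"\"\"\n",
"    return a + b\n",
"\n",
"\n",
"# With parallel_tool_calls=False the model makes at most one tool call per turn\n",
"llm_with_tools = llm.bind_tools([add], parallel_tool_calls=False)\n",
"llm_with_tools.invoke(\"Add 2 and 3, then add 4 and 5\").tool_calls"
]
},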
{
@@ -121,7 +121,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -74,7 +74,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 2,
"id": "90187d07",
"metadata": {},
"outputs": [],
@@ -90,7 +90,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 3,
"id": "d7009e1a",
"metadata": {},
"outputs": [
@@ -99,7 +99,7 @@
"output_type": "stream",
"text": [
"multiply\n",
"multiply(first_int: int, second_int: int) -> int - Multiply two integers together.\n",
"Multiply two integers together.\n",
"{'first_int': {'title': 'First Int', 'type': 'integer'}, 'second_int': {'title': 'Second Int', 'type': 'integer'}}\n"
]
}
@@ -112,7 +112,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 4,
"id": "be77e780",
"metadata": {},
"outputs": [
@@ -122,7 +122,7 @@
"20"
]
},
"execution_count": 3,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
@@ -154,7 +154,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 5,
"id": "9bce8935-1465-45ac-8a93-314222c753c4",
"metadata": {},
"outputs": [],
@@ -177,7 +177,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 6,
"id": "3bfe2cdc-7d72-457c-a9a1-5fa1e0bcde55",
"metadata": {},
"outputs": [],
@@ -195,7 +195,7 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 7,
"id": "68f30343-14ef-48f1-badd-b6a03977316d",
"metadata": {},
"outputs": [
@@ -204,10 +204,11 @@
"text/plain": [
"[{'name': 'multiply',\n",
" 'args': {'first_int': 5, 'second_int': 42},\n",
" 'id': 'call_cCP9oA3tRz7HDrjFn1FdmDaG'}]"
" 'id': 'call_8QIg4QVFVAEeC1orWAgB2036',\n",
" 'type': 'tool_call'}]"
]
},
"execution_count": 9,
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
@@ -237,7 +238,7 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 8,
"id": "4f5325ca-e5dc-4d1a-ba36-b085a029c90a",
"metadata": {},
"outputs": [
@@ -247,7 +248,7 @@
"92"
]
},
"execution_count": 12,
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
@@ -274,58 +275,31 @@
"source": [
"## Agents\n",
"\n",
"Chains are great when we know the specific sequence of tool usage needed for any user input. But for certain use cases, how many times we use tools depends on the input. In these cases, we want to let the model itself decide how many times to use tools and in what order. [Agents](/docs/tutorials/agents) let us do just this.\n",
"Chains are great when we know the specific sequence of tool usage needed for any user input. But for certain use cases, how many times we use tools depends on the input. In these cases, we want to let the model itself decide how many times to use tools and in what order. [Agents](/docs/concepts/agents/) let us do just this.\n",
"\n",
"LangChain comes with a number of built-in agents that are optimized for different use cases. Read about all the [agent types here](/docs/concepts/agents).\n",
"\n",
"We'll use the [tool calling agent](https://python.langchain.com/api_reference/langchain/agents/langchain.agents.tool_calling_agent.base.create_tool_calling_agent.html), which is generally the most reliable kind and the recommended one for most use cases.\n",
"We'll demonstrate a simple example using a LangGraph agent. See [this tutorial](/docs/tutorials/agents) for more detail.\n",
"\n",
"![agent](../../static/img/tool_agent.svg)"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "21723cf4-9421-4a8d-92a6-eeeb8f4367f1",
"execution_count": null,
"id": "86789cfb-f441-4453-adf8-961eeceb00bc",
"metadata": {},
"outputs": [],
"source": [
"from langchain import hub\n",
"from langchain.agents import AgentExecutor, create_tool_calling_agent"
"!pip install -qU langgraph"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "6be83879-9da3-4dd9-b147-a79f76affd7a",
"execution_count": 9,
"id": "21723cf4-9421-4a8d-92a6-eeeb8f4367f1",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"================================\u001b[1m System Message \u001b[0m================================\n",
"\n",
"You are a helpful assistant\n",
"\n",
"=============================\u001b[1m Messages Placeholder \u001b[0m=============================\n",
"\n",
"\u001b[33;1m\u001b[1;3m{chat_history}\u001b[0m\n",
"\n",
"================================\u001b[1m Human Message \u001b[0m=================================\n",
"\n",
"\u001b[33;1m\u001b[1;3m{input}\u001b[0m\n",
"\n",
"=============================\u001b[1m Messages Placeholder \u001b[0m=============================\n",
"\n",
"\u001b[33;1m\u001b[1;3m{agent_scratchpad}\u001b[0m\n"
]
}
],
"outputs": [],
"source": [
"# Get the prompt to use - can be replaced with any prompt that includes variables \"agent_scratchpad\" and \"input\"!\n",
"prompt = hub.pull(\"hwchase17/openai-tools-agent\")\n",
"prompt.pretty_print()"
"from langgraph.prebuilt import create_react_agent"
]
},
{
@@ -338,7 +312,7 @@
},
{
"cell_type": "code",
"execution_count": 15,
"execution_count": 10,
"id": "95c86d32-ee45-4c87-a28c-14eff19b49e9",
"metadata": {},
"outputs": [],
@@ -360,24 +334,13 @@
},
{
"cell_type": "code",
"execution_count": 16,
"execution_count": 11,
"id": "17b09ac6-c9b7-4340-a8a0-3d3061f7888c",
"metadata": {},
"outputs": [],
"source": [
"# Construct the tool calling agent\n",
"agent = create_tool_calling_agent(llm, tools, prompt)"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "675091d2-cac9-45c4-a5d7-b760ee6c1986",
"metadata": {},
"outputs": [],
"source": [
"# Create an agent executor by passing in the agent and tools\n",
"agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)"
"agent = create_react_agent(llm, tools)"
]
},
{
@@ -390,62 +353,72 @@
},
{
"cell_type": "code",
"execution_count": 18,
"id": "f7dbb240-809e-4e41-8f63-1a4636e8e26d",
"execution_count": 13,
"id": "71c84594-d420-4703-8bdd-ca4eb7efefb6",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"================================\u001b[1m Human Message \u001b[0m=================================\n",
"\n",
"Take 3 to the fifth power and multiply that by the sum of twelve and three, then square the whole result.\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" exponentiate (call_EHGS8gnEVNCJQ9rVOk11KCQH)\n",
" Call ID: call_EHGS8gnEVNCJQ9rVOk11KCQH\n",
" Args:\n",
" base: 3\n",
" exponent: 5\n",
" add (call_s2cxOrXEKqI6z7LWbMUG6s8c)\n",
" Call ID: call_s2cxOrXEKqI6z7LWbMUG6s8c\n",
" Args:\n",
" first_int: 12\n",
" second_int: 3\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
"Name: add\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\n",
"Invoking: `exponentiate` with `{'base': 3, 'exponent': 5}`\n",
"15\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" multiply (call_25v5JEfDWuKNgmVoGBan0d7J)\n",
" Call ID: call_25v5JEfDWuKNgmVoGBan0d7J\n",
" Args:\n",
" first_int: 243\n",
" second_int: 15\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
"Name: multiply\n",
"\n",
"3645\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" exponentiate (call_x1yKEeBPrFYmCp2z5Kn8705r)\n",
" Call ID: call_x1yKEeBPrFYmCp2z5Kn8705r\n",
" Args:\n",
" base: 3645\n",
" exponent: 2\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
"Name: exponentiate\n",
"\n",
"\u001b[0m\u001b[38;5;200m\u001b[1;3m243\u001b[0m\u001b[32;1m\u001b[1;3m\n",
"Invoking: `add` with `{'first_int': 12, 'second_int': 3}`\n",
"13286025\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"\n",
"\n",
"\u001b[0m\u001b[33;1m\u001b[1;3m15\u001b[0m\u001b[32;1m\u001b[1;3m\n",
"Invoking: `multiply` with `{'first_int': 243, 'second_int': 15}`\n",
"\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3m3645\u001b[0m\u001b[32;1m\u001b[1;3m\n",
"Invoking: `exponentiate` with `{'base': 405, 'exponent': 2}`\n",
"\n",
"\n",
"\u001b[0m\u001b[38;5;200m\u001b[1;3m13286025\u001b[0m\u001b[32;1m\u001b[1;3mThe result of taking 3 to the fifth power is 243. \n",
"\n",
"The sum of twelve and three is 15. \n",
"\n",
"Multiplying 243 by 15 gives 3645. \n",
"\n",
"Finally, squaring 3645 gives 13286025.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
"The final result of taking 3 to the fifth power, multiplying it by the sum of twelve and three, and then squaring the whole result is **13,286,025**.\n"
]
},
{
"data": {
"text/plain": [
"{'input': 'Take 3 to the fifth power and multiply that by the sum of twelve and three, then square the whole result',\n",
" 'output': 'The result of taking 3 to the fifth power is 243. \\n\\nThe sum of twelve and three is 15. \\n\\nMultiplying 243 by 15 gives 3645. \\n\\nFinally, squaring 3645 gives 13286025.'}"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_executor.invoke(\n",
" {\n",
" \"input\": \"Take 3 to the fifth power and multiply that by the sum of twelve and three, then square the whole result\"\n",
" }\n",
")"
"# Use the agent\n",
"\n",
"query = (\n",
" \"Take 3 to the fifth power and multiply that by the sum of twelve and \"\n",
" \"three, then square the whole result.\"\n",
")\n",
"input_message = {\"role\": \"user\", \"content\": query}\n",
"\n",
"for step in agent.stream({\"messages\": [input_message]}, stream_mode=\"values\"):\n",
" step[\"messages\"][-1].pretty_print()"
]
},
{
@@ -473,7 +446,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -15,7 +15,7 @@
"\n",
"To build a production application, you will need to do more work to keep track of application state appropriately.\n",
"\n",
"We recommend using `langgraph` for powering such a capability. For more details, please see this [guide](https://langchain-ai.github.io/langgraph/how-tos/human-in-the-loop/).\n",
"We recommend using `langgraph` for powering such a capability. For more details, please see this [guide](https://langchain-ai.github.io/langgraph/concepts/human_in_the_loop/).\n",
":::\n"
]
},
@@ -209,7 +209,7 @@
"metadata": {},
"outputs": [
{
"name": "stdin",
"name": "stdout",
"output_type": "stream",
"text": [
"Do you approve of the following tool invocations\n",
@@ -252,7 +252,7 @@
"metadata": {},
"outputs": [
{
"name": "stdin",
"name": "stdout",
"output_type": "stream",
"text": [
"Do you approve of the following tool invocations\n",

View File

@@ -0,0 +1,118 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# SingleStoreSemanticCache\n",
"\n",
"This example demonstrates how to get started with the SingleStore semantic cache.\n",
"\n",
"### Integration Overview\n",
"\n",
"`SingleStoreSemanticCache` leverages `SingleStoreVectorStore` to cache LLM responses directly in a SingleStore database, enabling efficient semantic retrieval and reuse of results.\n",
"\n",
"### Integration details\n",
"\n",
"\n",
"\n",
"| Class | Package | JS support |\n",
"| :--- | :--- | :---: |\n",
"| SingleStoreSemanticCache | langchain_singlestore | ❌ | "
]
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"## Installation\n",
"\n",
"This cache lives in the `langchain-singlestore` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-singlestore"
]
},
{
"cell_type": "markdown",
"id": "5c5f2839-4020-424e-9fc9-07777eede442",
"metadata": {},
"source": [
"## Usage"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "51a60dbe-9f2e-4e04-bb62-23968f17164a",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.globals import set_llm_cache\n",
"from langchain_singlestore import SingleStoreSemanticCache\n",
"\n",
"set_llm_cache(\n",
" SingleStoreSemanticCache(\n",
" embedding=YourEmbeddings(),\n",
" host=\"root:pass@localhost:3306/db\",\n",
" )\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cddda8ef",
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"# The first time, it is not yet in cache, so it should take longer\n",
"llm.invoke(\"Tell me a joke\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c474168f",
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"# The second time, while not a direct hit, the question is semantically similar to the original question,\n",
"# so it uses the cached result!\n",
"llm.invoke(\"Tell me one joke\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "langchain-singlestore-BD1RbQ07-py3.11",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -1,378 +1,401 @@
{
"cells": [
{
"cell_type": "raw",
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
"sidebar_label: AWS Bedrock\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# ChatBedrock\n",
"\n",
"This doc will help you get started with AWS Bedrock [chat models](/docs/concepts/chat_models). Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon via a single API, along with a broad set of capabilities you need to build generative AI applications with security, privacy, and responsible AI. Using Amazon Bedrock, you can easily experiment with and evaluate top FMs for your use case, privately customize them with your data using techniques such as fine-tuning and Retrieval Augmented Generation (RAG), and build agents that execute tasks using your enterprise systems and data sources. Since Amazon Bedrock is serverless, you don't have to manage any infrastructure, and you can securely integrate and deploy generative AI capabilities into your applications using the AWS services you are already familiar with.\n",
"\n",
"For more information on which models are accessible via Bedrock, head to the [AWS docs](https://docs.aws.amazon.com/bedrock/latest/userguide/models-features.html).\n",
"\n",
"For detailed documentation of all ChatBedrock features and configurations head to the [API reference](https://python.langchain.com/api_reference/aws/chat_models/langchain_aws.chat_models.bedrock.ChatBedrock.html).\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/bedrock) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatBedrock](https://python.langchain.com/api_reference/aws/chat_models/langchain_aws.chat_models.bedrock.ChatBedrock.html) | [langchain-aws](https://python.langchain.com/api_reference/aws/index.html) | ❌ | beta | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-aws?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-aws?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ |\n",
"\n",
"## Setup\n",
"\n",
"To access Bedrock models you'll need to create an AWS account, set up the Bedrock API service, get an access key ID and secret key, and install the `langchain-aws` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Head to the [AWS docs](https://docs.aws.amazon.com/bedrock/latest/userguide/setting-up.html) to sign up to AWS and setup your credentials. You'll also need to turn on model access for your account, which you can do by following [these instructions](https://docs.aws.amazon.com/bedrock/latest/userguide/model-access.html)."
]
},
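{
"cell_type": "markdown",
"id": "credentials-sketch-note",
"metadata": {},
"source": "A minimal sketch of supplying credentials via environment variables (one option among several; `boto3`, which `langchain-aws` uses under the hood, also reads `~/.aws/credentials` and IAM roles):"
},
{
"cell_type": "code",
"execution_count": null,
"id": "credentials-sketch-code",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"# Only prompt if credentials are not already configured elsewhere\n",
"if \"AWS_ACCESS_KEY_ID\" not in os.environ:\n",
"    os.environ[\"AWS_ACCESS_KEY_ID\"] = getpass.getpass(\"AWS access key ID: \")\n",
"if \"AWS_SECRET_ACCESS_KEY\" not in os.environ:\n",
"    os.environ[\"AWS_SECRET_ACCESS_KEY\"] = getpass.getpass(\"AWS secret access key: \")"
]
},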
{
"cell_type": "markdown",
"id": "72ee0c4b-9764-423a-9dbf-95129e185210",
"metadata": {},
"source": "To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
},
{
"cell_type": "code",
"execution_count": null,
"id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain Bedrock integration lives in the `langchain-aws` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-aws"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
"source": [
"from langchain_aws import ChatBedrock\n",
"\n",
"llm = ChatBedrock(\n",
" model_id=\"anthropic.claude-3-sonnet-20240229-v1:0\",\n",
" model_kwargs=dict(temperature=0),\n",
" # other params...\n",
")"
]
},
{
"cell_type": "markdown",
"id": "2b4f3e15",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"Voici la traduction en français :\\n\\nJ'aime la programmation.\", additional_kwargs={'usage': {'prompt_tokens': 29, 'completion_tokens': 21, 'total_tokens': 50}, 'stop_reason': 'end_turn', 'model_id': 'anthropic.claude-3-sonnet-20240229-v1:0'}, response_metadata={'usage': {'prompt_tokens': 29, 'completion_tokens': 21, 'total_tokens': 50}, 'stop_reason': 'end_turn', 'model_id': 'anthropic.claude-3-sonnet-20240229-v1:0'}, id='run-fdb07dc3-ff72-430d-b22b-e7824b15c766-0', usage_metadata={'input_tokens': 29, 'output_tokens': 21, 'total_tokens': 50})"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Voici la traduction en français :\n",
"\n",
"J'aime la programmation.\n"
]
}
],
"source": [
"print(ai_msg.content)"
]
},
{
"cell_type": "markdown",
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Ich liebe Programmieren.', additional_kwargs={'usage': {'prompt_tokens': 23, 'completion_tokens': 11, 'total_tokens': 34}, 'stop_reason': 'end_turn', 'model_id': 'anthropic.claude-3-sonnet-20240229-v1:0'}, response_metadata={'usage': {'prompt_tokens': 23, 'completion_tokens': 11, 'total_tokens': 34}, 'stop_reason': 'end_turn', 'model_id': 'anthropic.claude-3-sonnet-20240229-v1:0'}, id='run-5ad005ce-9f31-4670-baa0-9373d418698a-0', usage_metadata={'input_tokens': 23, 'output_tokens': 11, 'total_tokens': 34})"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "d1ee55bc-ffc8-4cfa-801c-993953a08cfd",
"metadata": {},
"source": [
"## Bedrock Converse API\n",
"\n",
"AWS has recently released the Bedrock Converse API which provides a unified conversational interface for Bedrock models. This API does not yet support custom models. You can see a list of all [models that are supported here](https://docs.aws.amazon.com/bedrock/latest/userguide/conversation-inference.html). To improve reliability the ChatBedrock integration will switch to using the Bedrock Converse API as soon as it has feature parity with the existing Bedrock API. Until then a separate [ChatBedrockConverse](https://python.langchain.com/api_reference/aws/chat_models/langchain_aws.chat_models.bedrock_converse.ChatBedrockConverse.html) integration has been released.\n",
"\n",
"We recommend using `ChatBedrockConverse` for users who do not need to use custom models.\n",
"\n",
"You can use it like so:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "ae728e59-94d4-40cf-9d24-25ad8723fc59",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"Voici la traduction en français :\\n\\nJ'aime la programmation.\", response_metadata={'ResponseMetadata': {'RequestId': '4fcbfbe9-f916-4df2-b0bd-ea1147b550aa', 'HTTPStatusCode': 200, 'HTTPHeaders': {'date': 'Wed, 21 Aug 2024 17:23:49 GMT', 'content-type': 'application/json', 'content-length': '243', 'connection': 'keep-alive', 'x-amzn-requestid': '4fcbfbe9-f916-4df2-b0bd-ea1147b550aa'}, 'RetryAttempts': 0}, 'stopReason': 'end_turn', 'metrics': {'latencyMs': 672}}, id='run-77ee9810-e32b-45dc-9ccb-6692253b1f45-0', usage_metadata={'input_tokens': 29, 'output_tokens': 21, 'total_tokens': 50})"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_aws import ChatBedrockConverse\n",
"\n",
"llm = ChatBedrockConverse(\n",
" model=\"anthropic.claude-3-sonnet-20240229-v1:0\",\n",
" temperature=0,\n",
" max_tokens=None,\n",
" # other params...\n",
")\n",
"\n",
"llm.invoke(messages)"
]
},
{
"cell_type": "markdown",
"id": "4da16f3e-e80b-48c0-8036-c1cc5f7c8c05",
"metadata": {},
"source": [
"### Streaming\n",
"\n",
"Note that `ChatBedrockConverse` emits content blocks while streaming:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "7794b32e-d8de-4973-bf0f-39807dc745f0",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content=[] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'type': 'text', 'text': 'Vo', 'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'type': 'text', 'text': 'ici', 'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'type': 'text', 'text': ' la', 'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'type': 'text', 'text': ' tra', 'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'type': 'text', 'text': 'duction', 'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'type': 'text', 'text': ' en', 'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'type': 'text', 'text': ' français', 'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'type': 'text', 'text': ' :', 'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'type': 'text', 'text': '\\n\\nJ', 'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'type': 'text', 'text': \"'\", 'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'type': 'text', 'text': 'a', 'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'type': 'text', 'text': 'ime', 'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'type': 'text', 'text': ' la', 'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'type': 'text', 'text': ' programm', 'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'type': 'text', 'text': 'ation', 'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'type': 'text', 'text': '.', 'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[] response_metadata={'stopReason': 'end_turn'} id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[] response_metadata={'metrics': {'latencyMs': 713}} id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8' usage_metadata={'input_tokens': 29, 'output_tokens': 21, 'total_tokens': 50}\n"
]
}
],
"source": [
"for chunk in llm.stream(messages):\n",
" print(chunk)"
]
},
{
"cell_type": "markdown",
"id": "0ef05abb-9c04-4dc3-995e-f857779644d5",
"metadata": {},
"source": [
"An output parser can be used to filter to text, if desired:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "2a4e743f-ea7d-4e5a-9b12-f9992362de8b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"|Vo|ici| la| tra|duction| en| français| :|\n",
"\n",
"J|'|a|ime| la| programm|ation|.||||"
]
}
],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"\n",
"chain = llm | StrOutputParser()\n",
"\n",
"for chunk in chain.stream(messages):\n",
" print(chunk, end=\"|\")"
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatBedrock features and configurations head to the API reference: https://python.langchain.com/api_reference/aws/chat_models/langchain_aws.chat_models.bedrock.ChatBedrock.html\n",
"\n",
"For detailed documentation of all ChatBedrockConverse features and configurations head to the API reference: https://python.langchain.com/api_reference/aws/chat_models/langchain_aws.chat_models.bedrock_converse.ChatBedrockConverse.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
{
"cells": [
{
"cell_type": "raw",
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
"sidebar_label: AWS Bedrock\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# ChatBedrock\n",
"\n",
"This doc will help you get started with AWS Bedrock [chat models](/docs/concepts/chat_models). Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon via a single API, along with a broad set of capabilities you need to build generative AI applications with security, privacy, and responsible AI. Using Amazon Bedrock, you can easily experiment with and evaluate top FMs for your use case, privately customize them with your data using techniques such as fine-tuning and Retrieval Augmented Generation (RAG), and build agents that execute tasks using your enterprise systems and data sources. Since Amazon Bedrock is serverless, you don't have to manage any infrastructure, and you can securely integrate and deploy generative AI capabilities into your applications using the AWS services you are already familiar with.\n",
"\n",
"AWS Bedrock maintains a [Converse API](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html) which provides a unified conversational interface for Bedrock models. This API does not yet support custom models. You can see a list of all [models that are supported here](https://docs.aws.amazon.com/bedrock/latest/userguide/conversation-inference.html).\n",
"\n",
":::info\n",
"\n",
"We recommend the Converse API for users who do not need to use custom models. It can be accessed using [ChatBedrockConverse](https://python.langchain.com/api_reference/aws/chat_models/langchain_aws.chat_models.bedrock_converse.ChatBedrockConverse.html).\n",
"\n",
":::\n",
"\n",
"For detailed documentation of all Bedrock features and configurations head to the [API reference](https://python.langchain.com/api_reference/aws/chat_models/langchain_aws.chat_models.bedrock_converse.ChatBedrockConverse.html).\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/bedrock) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatBedrock](https://python.langchain.com/api_reference/aws/chat_models/langchain_aws.chat_models.bedrock.ChatBedrock.html) | [langchain-aws](https://python.langchain.com/api_reference/aws/index.html) | ❌ | beta | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-aws?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-aws?style=flat-square&label=%20) |\n",
"| [ChatBedrockConverse](https://python.langchain.com/api_reference/aws/chat_models/langchain_aws.chat_models.bedrock_converse.ChatBedrockConverse.html) | [langchain-aws](https://python.langchain.com/api_reference/aws/index.html) | ❌ | beta | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-aws?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-aws?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"\n",
"The below apply to both `ChatBedrock` and `ChatBedrockConverse`.\n",
"\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ |\n",
"\n",
"## Setup\n",
"\n",
"To access Bedrock models you'll need to create an AWS account, set up the Bedrock API service, get an access key ID and secret key, and install the `langchain-aws` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Head to the [AWS docs](https://docs.aws.amazon.com/bedrock/latest/userguide/setting-up.html) to sign up to AWS and setup your credentials. You'll also need to turn on model access for your account, which you can do by following [these instructions](https://docs.aws.amazon.com/bedrock/latest/userguide/model-access.html)."
]
},
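{
"cell_type": "markdown",
"id": "1f2b7a6e-0c3d-4e5f-9a8b-7c6d5e4f3a2b",
"metadata": {},
"source": [
"For example, you can provide credentials via the standard AWS environment variables before instantiating the model. This is a minimal sketch using boto3's standard variable names; your setup may instead rely on AWS profiles or IAM roles, and the region below is only an assumption:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8d9c0b1a-2e3f-4a5b-8c7d-6e5f4a3b2c1d",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"# Standard boto3 environment variables; skip this if you use profiles or IAM roles\n",
"if not os.environ.get(\"AWS_ACCESS_KEY_ID\"):\n",
"    os.environ[\"AWS_ACCESS_KEY_ID\"] = getpass.getpass(\"Enter your AWS access key ID: \")\n",
"\n",
"if not os.environ.get(\"AWS_SECRET_ACCESS_KEY\"):\n",
"    os.environ[\"AWS_SECRET_ACCESS_KEY\"] = getpass.getpass(\n",
"        \"Enter your AWS secret access key: \"\n",
"    )\n",
"\n",
"# Assumed region; pick the region where you enabled model access\n",
"os.environ.setdefault(\"AWS_DEFAULT_REGION\", \"us-east-1\")"
]
},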
{
"cell_type": "markdown",
"id": "72ee0c4b-9764-423a-9dbf-95129e185210",
"metadata": {},
"source": [
"To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain Bedrock integration lives in the `langchain-aws` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-aws"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
"source": [
"from langchain_aws import ChatBedrockConverse\n",
"\n",
"llm = ChatBedrockConverse(\n",
" model_id=\"anthropic.claude-3-5-sonnet-20240620-v1:0\",\n",
" # temperature=...,\n",
" # max_tokens=...,\n",
" # other params...\n",
")"
]
},
{
"cell_type": "markdown",
"id": "2b4f3e15",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "fcd8de52-4a1b-4875-b463-d41b031e06a1",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"J'adore la programmation.\", additional_kwargs={}, response_metadata={'ResponseMetadata': {'RequestId': 'b07d1630-06f2-44b1-82bf-e82538dd2215', 'HTTPStatusCode': 200, 'HTTPHeaders': {'date': 'Wed, 16 Apr 2025 19:35:34 GMT', 'content-type': 'application/json', 'content-length': '206', 'connection': 'keep-alive', 'x-amzn-requestid': 'b07d1630-06f2-44b1-82bf-e82538dd2215'}, 'RetryAttempts': 0}, 'stopReason': 'end_turn', 'metrics': {'latencyMs': [488]}, 'model_name': 'anthropic.claude-3-5-sonnet-20240620-v1:0'}, id='run-d09ed928-146a-4336-b1fd-b63c9e623494-0', usage_metadata={'input_tokens': 29, 'output_tokens': 11, 'total_tokens': 40, 'input_token_details': {'cache_creation': 0, 'cache_read': 0}})"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"J'adore la programmation.\n"
]
}
],
"source": [
"print(ai_msg.content)"
]
},
{
"cell_type": "markdown",
"id": "4da16f3e-e80b-48c0-8036-c1cc5f7c8c05",
"metadata": {},
"source": [
"### Streaming\n",
"\n",
"Note that `ChatBedrockConverse` emits content blocks while streaming:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "605e04fa-1a76-47ac-8c92-fe128659663e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content=[] additional_kwargs={} response_metadata={} id='run-d0e0836e-7146-4c3d-97c7-ad23dac6febd'\n",
"content=[{'type': 'text', 'text': 'J', 'index': 0}] additional_kwargs={} response_metadata={} id='run-d0e0836e-7146-4c3d-97c7-ad23dac6febd'\n",
"content=[{'type': 'text', 'text': \"'adore la\", 'index': 0}] additional_kwargs={} response_metadata={} id='run-d0e0836e-7146-4c3d-97c7-ad23dac6febd'\n",
"content=[{'type': 'text', 'text': ' programmation.', 'index': 0}] additional_kwargs={} response_metadata={} id='run-d0e0836e-7146-4c3d-97c7-ad23dac6febd'\n",
"content=[{'index': 0}] additional_kwargs={} response_metadata={} id='run-d0e0836e-7146-4c3d-97c7-ad23dac6febd'\n",
"content=[] additional_kwargs={} response_metadata={'stopReason': 'end_turn'} id='run-d0e0836e-7146-4c3d-97c7-ad23dac6febd'\n",
"content=[] additional_kwargs={} response_metadata={'metrics': {'latencyMs': 600}, 'model_name': 'anthropic.claude-3-5-sonnet-20240620-v1:0'} id='run-d0e0836e-7146-4c3d-97c7-ad23dac6febd' usage_metadata={'input_tokens': 29, 'output_tokens': 11, 'total_tokens': 40, 'input_token_details': {'cache_creation': 0, 'cache_read': 0}}\n"
]
}
],
"source": [
"for chunk in llm.stream(messages):\n",
" print(chunk)"
]
},
{
"cell_type": "markdown",
"id": "0ef05abb-9c04-4dc3-995e-f857779644d5",
"metadata": {},
"source": [
"You can filter to text using the [.text()](https://python.langchain.com/api_reference/core/messages/langchain_core.messages.ai.AIMessage.html#langchain_core.messages.ai.AIMessage.text) method on the output:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "2a4e743f-ea7d-4e5a-9b12-f9992362de8b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"|J|'adore la| programmation.||||"
]
}
],
"source": [
"for chunk in llm.stream(messages):\n",
" print(chunk.text(), end=\"|\")"
]
},
{
"cell_type": "markdown",
"id": "a77519e5-897d-41a0-a9bb-55300fa79efc",
"metadata": {},
"source": [
"## Prompt caching\n",
"\n",
"Bedrock supports [caching](https://docs.aws.amazon.com/bedrock/latest/userguide/prompt-caching.html) of elements of your prompts, including messages and tools. This allows you to re-use large documents, instructions, [few-shot documents](/docs/concepts/few_shot_prompting/), and other data to reduce latency and costs.\n",
"\n",
":::note\n",
"\n",
"Not all models support prompt caching. See supported models [here](https://docs.aws.amazon.com/bedrock/latest/userguide/prompt-caching.html#prompt-caching-models).\n",
"\n",
":::\n",
"\n",
"To enable caching on an element of a prompt, mark its associated content block using the `cachePoint` key. See example below:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "d5f63d01-85e8-4797-a2be-0fea747a6049",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"First invocation:\n",
"{'cache_creation': 1528, 'cache_read': 0}\n",
"\n",
"Second:\n",
"{'cache_creation': 0, 'cache_read': 1528}\n"
]
}
],
"source": [
"import requests\n",
"from langchain_aws import ChatBedrockConverse\n",
"\n",
"llm = ChatBedrockConverse(model=\"us.anthropic.claude-3-7-sonnet-20250219-v1:0\")\n",
"\n",
"# Pull LangChain readme\n",
"get_response = requests.get(\n",
" \"https://raw.githubusercontent.com/langchain-ai/langchain/master/README.md\"\n",
")\n",
"readme = get_response.text\n",
"\n",
"messages = [\n",
" {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" {\n",
" \"type\": \"text\",\n",
" \"text\": \"What's LangChain, according to its README?\",\n",
" },\n",
" {\n",
" \"type\": \"text\",\n",
" \"text\": f\"{readme}\",\n",
" },\n",
" {\n",
" \"cachePoint\": {\"type\": \"default\"},\n",
" },\n",
" ],\n",
" },\n",
"]\n",
"\n",
"response_1 = llm.invoke(messages)\n",
"response_2 = llm.invoke(messages)\n",
"\n",
"usage_1 = response_1.usage_metadata[\"input_token_details\"]\n",
"usage_2 = response_2.usage_metadata[\"input_token_details\"]\n",
"\n",
"print(f\"First invocation:\\n{usage_1}\")\n",
"print(f\"\\nSecond:\\n{usage_2}\")"
]
},
{
"cell_type": "markdown",
"id": "1b550667-af5b-4557-b84f-c8f865dad6cb",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "6033f3fa-0e96-46e3-abb3-1530928fea88",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"Here's the German translation:\\n\\nIch liebe das Programmieren.\", additional_kwargs={}, response_metadata={'ResponseMetadata': {'RequestId': '1de3d7c0-8062-4f7e-bb8a-8f725b97a8b0', 'HTTPStatusCode': 200, 'HTTPHeaders': {'date': 'Wed, 16 Apr 2025 19:32:51 GMT', 'content-type': 'application/json', 'content-length': '243', 'connection': 'keep-alive', 'x-amzn-requestid': '1de3d7c0-8062-4f7e-bb8a-8f725b97a8b0'}, 'RetryAttempts': 0}, 'stopReason': 'end_turn', 'metrics': {'latencyMs': [719]}, 'model_name': 'anthropic.claude-3-5-sonnet-20240620-v1:0'}, id='run-7021fcd7-704e-496b-a92e-210139614402-0', usage_metadata={'input_tokens': 23, 'output_tokens': 19, 'total_tokens': 42, 'input_token_details': {'cache_creation': 0, 'cache_read': 0}})"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatBedrock features and configurations head to the API reference: https://python.langchain.com/api_reference/aws/chat_models/langchain_aws.chat_models.bedrock.ChatBedrock.html\n",
"\n",
"For detailed documentation of all ChatBedrockConverse features and configurations head to the API reference: https://python.langchain.com/api_reference/aws/chat_models/langchain_aws.chat_models.bedrock_converse.ChatBedrockConverse.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -1,262 +1,393 @@
{
"cells": [
{
"cell_type": "raw",
"id": "30373ae2-f326-4e96-a1f7-062f57396886",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Cloudflare Workers AI\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "f679592d",
"metadata": {},
"source": [
"# ChatCloudflareWorkersAI\n",
"\n",
"This will help you getting started with CloudflareWorkersAI [chat models](/docs/concepts/chat_models). For detailed documentation of all available Cloudflare WorkersAI models head to the [API reference](https://developers.cloudflare.com/workers-ai/).\n",
"\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/cloudflare_workersai) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| ChatCloudflareWorkersAI | langchain-community| ❌ | ❌ | ✅ | ❌ | ❌ |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |\n",
"\n",
"## Setup\n",
"\n",
"- To access Cloudflare Workers AI models you'll need to create a Cloudflare account, get an account number and API key, and install the `langchain-community` package.\n",
"\n",
"\n",
"### Credentials\n",
"\n",
"\n",
"Head to [this document](https://developers.cloudflare.com/workers-ai/get-started/rest-api/) to sign up to Cloudflare Workers AI and generate an API key."
]
},
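{
"cell_type": "markdown",
"id": "9c1d7e22-4f3a-4b6e-8a15-0d2b6c7f1a90",
"metadata": {},
"source": [
"You can then collect the account ID and API token interactively. A minimal sketch; the variable names are illustrative, since this integration takes the credentials as constructor parameters (shown in the instantiation below) rather than reading environment variables:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2b8f0c44-6d1e-4a2b-9c3d-5e6f7a8b9c0d",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"\n",
"# Illustrative variable names; pass these to ChatCloudflareWorkersAI below\n",
"my_account_id = getpass.getpass(\"Enter your Cloudflare account ID: \")\n",
"my_api_token = getpass.getpass(\"Enter your Cloudflare API token: \")"
]
},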
{
"cell_type": "markdown",
"id": "4a524cff",
"metadata": {},
"source": "To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
},
{
"cell_type": "code",
"execution_count": 3,
"id": "71b53c25",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
]
},
{
"cell_type": "markdown",
"id": "777a8526",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain ChatCloudflareWorkersAI integration lives in the `langchain-community` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "54990998",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-community"
]
},
{
"cell_type": "markdown",
"id": "629ba46f",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ec13c2d9",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_models.cloudflare_workersai import ChatCloudflareWorkersAI\n",
"\n",
"llm = ChatCloudflareWorkersAI(\n",
" account_id=\"my_account_id\",\n",
" api_token=\"my_api_token\",\n",
" model=\"@hf/nousresearch/hermes-2-pro-mistral-7b\",\n",
")"
]
},
{
"cell_type": "markdown",
"id": "119b6732",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "2438a906",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-11-07 15:55:14 - INFO - Sending prompt to Cloudflare Workers AI: {'prompt': 'role: system, content: You are a helpful assistant that translates English to French. Translate the user sentence.\\nrole: user, content: I love programming.', 'tools': None}\n"
]
},
{
"data": {
"text/plain": [
"AIMessage(content='{\\'result\\': {\\'response\\': \\'Je suis un assistant virtuel qui peut traduire l\\\\\\'anglais vers le français. La phrase que vous avez dite est : \"J\\\\\\'aime programmer.\" En français, cela se traduit par : \"J\\\\\\'adore programmer.\"\\'}, \\'success\\': True, \\'errors\\': [], \\'messages\\': []}', additional_kwargs={}, response_metadata={}, id='run-838fd398-8594-4ca5-9055-03c72993caf6-0')"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "1b4911bd",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'result': {'response': 'Je suis un assistant virtuel qui peut traduire l\\'anglais vers le français. La phrase que vous avez dite est : \"J\\'aime programmer.\" En français, cela se traduit par : \"J\\'adore programmer.\"'}, 'success': True, 'errors': [], 'messages': []}\n"
]
}
],
"source": [
"print(ai_msg.content)"
]
},
{
"cell_type": "markdown",
"id": "111aa5d4",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "b2a14282",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-11-07 15:55:24 - INFO - Sending prompt to Cloudflare Workers AI: {'prompt': 'role: system, content: You are a helpful assistant that translates English to German.\\nrole: user, content: I love programming.', 'tools': None}\n"
]
},
{
"data": {
"text/plain": [
"AIMessage(content=\"{'result': {'response': 'role: system, content: Das ist sehr nett zu hören! Programmieren lieben, ist eine interessante und anspruchsvolle Hobby- oder Berufsausrichtung. Wenn Sie englische Texte ins Deutsche übersetzen möchten, kann ich Ihnen helfen. Geben Sie bitte den englischen Satz oder die Übersetzung an, die Sie benötigen.'}, 'success': True, 'errors': [], 'messages': []}\", additional_kwargs={}, response_metadata={}, id='run-0d3be9a6-3d74-4dde-b49a-4479d6af00ef-0')"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "e1f311bd",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation on `ChatCloudflareWorkersAI` features and configuration options, please refer to the [API reference](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.cloudflare_workersai.html)."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
{
"cells": [
{
"cell_type": "raw",
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
"sidebar_label: CloudflareWorkersAI\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# ChatCloudflareWorkersAI\n",
"\n",
"\n",
"This will help you getting started with CloudflareWorkersAI [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatCloudflareWorkersAI features and configurations head to the [API reference](https://python.langchain.com/docs/integrations/chat/cloudflare_workersai/).\n",
"\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/cloudflare) | Package downloads | Package latest |\n",
"| :--- | :--- |:-----:|:------------:|:------------------------------------------------------------------------:| :---: | :---: |\n",
"| [ChatCloudflareWorkersAI](https://python.langchain.com/docs/integrations/chat/cloudflare_workersai/) | [langchain-cloudflare](https://pypi.org/project/langchain-cloudflare/) | ✅ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-cloudflare?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-cloudflare?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"|:-----------------------------------------:|:----------------------------------------------------:|:---------:|:----------------------------------------------:|:-----------:|:-----------:|:-----------------------------------------------------:|:------------:|:------------------------------------------------------:|:----------------------------------:|\n",
"| ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ | \n",
"\n",
"## Setup\n",
"\n",
"To access CloudflareWorkersAI models you'll need to create a/an CloudflareWorkersAI account, get an API key, and install the `langchain-cloudflare` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"\n",
"Head to https://www.cloudflare.com/developer-platform/products/workers-ai/ to sign up to CloudflareWorkersAI and generate an API key. Once you've done this set the CF_API_KEY environment variable and the CF_ACCOUNT_ID environment variable:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94",
"metadata": {
"is_executing": true
},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if not os.getenv(\"CF_API_KEY\"):\n",
" os.environ[\"CF_API_KEY\"] = getpass.getpass(\n",
" \"Enter your CloudflareWorkersAI API key: \"\n",
" )\n",
"\n",
"if not os.getenv(\"CF_ACCOUNT_ID\"):\n",
" os.environ[\"CF_ACCOUNT_ID\"] = getpass.getpass(\n",
" \"Enter your CloudflareWorkersAI account ID: \"\n",
" )"
]
},
{
"cell_type": "markdown",
"id": "72ee0c4b-9764-423a-9dbf-95129e185210",
"metadata": {},
"source": [
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
]
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain CloudflareWorkersAI integration lives in the `langchain-cloudflare` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-cloudflare"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:\n",
"\n",
"- Update model instantiation with relevant params."
]
},
{
"cell_type": "code",
"execution_count": 35,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {
"ExecuteTime": {
"end_time": "2025-04-07T17:48:31.193773Z",
"start_time": "2025-04-07T17:48:31.179196Z"
}
},
"outputs": [],
"source": [
"from langchain_cloudflare.chat_models import ChatCloudflareWorkersAI\n",
"\n",
"llm = ChatCloudflareWorkersAI(\n",
" model=\"@cf/meta/llama-3.3-70b-instruct-fp8-fast\",\n",
" temperature=0,\n",
" max_tokens=1024,\n",
" # other params...\n",
")"
]
},
{
"cell_type": "markdown",
"id": "2b4f3e15",
"metadata": {},
"source": [
"## Invocation\n"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"J'adore la programmation.\", additional_kwargs={}, response_metadata={'token_usage': {'prompt_tokens': 37, 'completion_tokens': 9, 'total_tokens': 46}, 'model_name': '@cf/meta/llama-3.3-70b-instruct-fp8-fast'}, id='run-995d1970-b6be-49f3-99ae-af4cdba02304-0', usage_metadata={'input_tokens': 37, 'output_tokens': 9, 'total_tokens': 46})"
]
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"J'adore la programmation.\n"
]
}
],
"source": [
"print(ai_msg.content)"
]
},
{
"cell_type": "markdown",
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:\n"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Ich liebe das Programmieren.', additional_kwargs={}, response_metadata={'token_usage': {'prompt_tokens': 32, 'completion_tokens': 7, 'total_tokens': 39}, 'model_name': '@cf/meta/llama-3.3-70b-instruct-fp8-fast'}, id='run-d1b677bc-194e-4473-90f1-aa65e8e46d50-0', usage_metadata={'input_tokens': 32, 'output_tokens': 7, 'total_tokens': 39})"
]
},
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "d1ee55bc-ffc8-4cfa-801c-993953a08cfd",
"metadata": {},
"source": [
"## Structured Outputs"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "91cae406-14d7-46c9-b942-2d1476588423",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'setup': 'Why did the cat join a band?',\n",
" 'punchline': 'Because it wanted to be the purr-cussionist',\n",
" 'rating': '8'}"
]
},
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"json_schema = {\n",
" \"title\": \"joke\",\n",
" \"description\": \"Joke to tell user.\",\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"setup\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"The setup of the joke\",\n",
" },\n",
" \"punchline\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"The punchline to the joke\",\n",
" },\n",
" \"rating\": {\n",
" \"type\": \"integer\",\n",
" \"description\": \"How funny the joke is, from 1 to 10\",\n",
" \"default\": None,\n",
" },\n",
" },\n",
" \"required\": [\"setup\", \"punchline\"],\n",
"}\n",
"structured_llm = llm.with_structured_output(json_schema)\n",
"\n",
"structured_llm.invoke(\"Tell me a joke about cats\")"
]
},
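{
"cell_type": "markdown",
"id": "7d4e9a10-3c2b-4f5e-8d6a-1b0c9e8f7a61",
"metadata": {},
"source": [
"`with_structured_output` also accepts Pydantic models across LangChain chat models. A minimal sketch, assuming the Cloudflare integration handles a Pydantic schema the same way it handles the JSON schema above:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4f6a2c88-9b1d-4e3f-a7c5-2d8e0f1b3a42",
"metadata": {},
"outputs": [],
"source": [
"from typing import Optional\n",
"\n",
"from pydantic import BaseModel, Field\n",
"\n",
"\n",
"class Joke(BaseModel):\n",
"    \"\"\"Joke to tell user.\"\"\"\n",
"\n",
"    setup: str = Field(description=\"The setup of the joke\")\n",
"    punchline: str = Field(description=\"The punchline to the joke\")\n",
"    rating: Optional[int] = Field(\n",
"        default=None, description=\"How funny the joke is, from 1 to 10\"\n",
"    )\n",
"\n",
"\n",
"structured_llm = llm.with_structured_output(Joke)\n",
"structured_llm.invoke(\"Tell me a joke about cats\")"
]
},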
{
"cell_type": "markdown",
"id": "dbfc0c43-e76b-446e-bbb1-d351640bb7be",
"metadata": {},
"source": [
"## Bind tools"
]
},
{
"cell_type": "code",
"execution_count": 36,
"id": "0765265e-4d00-4030-bf48-7e8d8c9af2ec",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'validate_user',\n",
" 'args': {'user_id': '123',\n",
" 'addresses': '[\"123 Fake St in Boston MA\", \"234 Pretend Boulevard in Houston TX\"]'},\n",
" 'id': '31ec7d6a-9ce5-471b-be64-8ea0492d1387',\n",
" 'type': 'tool_call'}]"
]
},
"execution_count": 36,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from typing import List\n",
"\n",
"from langchain_core.tools import tool\n",
"\n",
"\n",
"@tool\n",
"def validate_user(user_id: int, addresses: List[str]) -> bool:\n",
" \"\"\"Validate user using historical addresses.\n",
"\n",
" Args:\n",
" user_id (int): the user ID.\n",
" addresses (List[str]): Previous addresses as a list of strings.\n",
" \"\"\"\n",
" return True\n",
"\n",
"\n",
"llm_with_tools = llm.bind_tools([validate_user])\n",
"\n",
"result = llm_with_tools.invoke(\n",
" \"Could you validate user 123? They previously lived at \"\n",
" \"123 Fake St in Boston MA and 234 Pretend Boulevard in \"\n",
" \"Houston TX.\"\n",
")\n",
"result.tool_calls"
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"https://developers.cloudflare.com/workers-ai/\n",
"https://developers.cloudflare.com/agents/"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.7"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -1,35 +1,26 @@
{
"cells": [
{
"cell_type": "raw",
"id": "afaf8039",
"cell_type": "markdown",
"id": "d982c99f",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Google AI\n",
"sidebar_label: Google Gemini\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "e49f1e0d",
"id": "56a6d990",
"metadata": {},
"source": [
"# ChatGoogleGenerativeAI\n",
"\n",
"This docs will help you get started with Google AI [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatGoogleGenerativeAI features and configurations head to the [API reference](https://python.langchain.com/api_reference/google_genai/chat_models/langchain_google_genai.chat_models.ChatGoogleGenerativeAI.html).\n",
"Access Google's Generative AI models, including the Gemini family, directly via the Gemini API or experiment rapidly using Google AI Studio. The `langchain-google-genai` package provides the LangChain integration for these models. This is often the best starting point for individual developers.\n",
"\n",
"Google AI offers a number of different chat models. For information on the latest models, their features, context windows, etc. head to the [Google AI docs](https://ai.google.dev/gemini-api/docs/models/gemini).\n",
"For information on the latest models, their features, context windows, etc. head to the [Google AI docs](https://ai.google.dev/gemini-api/docs/models/gemini). All examples use the `gemini-2.0-flash` model. Gemini 2.5 Pro and 2.5 Flash can be used via `gemini-2.5-pro-preview-03-25` and `gemini-2.5-flash-preview-04-17`. All model ids can be found in the [Gemini API docs](https://ai.google.dev/gemini-api/docs/models).\n",
"\n",
":::info Google AI vs Google Cloud Vertex AI\n",
"\n",
"Google's Gemini models are accessible through Google AI and through Google Cloud Vertex AI. Using Google AI just requires a Google account and an API key. Using Google Cloud Vertex AI requires a Google Cloud account (with term agreements and billing) but offers enterprise features like customer encryption key, virtual private cloud, and more.\n",
"\n",
"To learn more about the key features of the two APIs see the [Google docs](https://cloud.google.com/vertex-ai/generative-ai/docs/migrate/migrate-google-ai#google-ai).\n",
"\n",
":::\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/google_generativeai) | Package downloads | Package latest |\n",
@@ -37,23 +28,46 @@
"| [ChatGoogleGenerativeAI](https://python.langchain.com/api_reference/google_genai/chat_models/langchain_google_genai.chat_models.ChatGoogleGenerativeAI.html) | [langchain-google-genai](https://python.langchain.com/api_reference/google_genai/index.html) | ❌ | beta | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-google-genai?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-google-genai?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ |\n",
"\n",
"## Setup\n",
"### Setup\n",
"\n",
"To access Google AI models you'll need to create a Google Acount account, get a Google AI API key, and install the `langchain-google-genai` integration package.\n",
"To access Google AI models you'll need to create a Google Account, get a Google AI API key, and install the `langchain-google-genai` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Head to https://ai.google.dev/gemini-api/docs/api-key to generate a Google AI API key. Once you've done this set the GOOGLE_API_KEY environment variable:"
"**1. Installation:**"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94",
"id": "8d12ce35",
"metadata": {},
"outputs": [],
"source": [
"%pip install -U langchain-google-genai"
]
},
{
"cell_type": "markdown",
"id": "60be0b38",
"metadata": {},
"source": [
"**2. Credentials:**\n",
"\n",
"Head to [https://ai.google.dev/gemini-api/docs/api-key](https://ai.google.dev/gemini-api/docs/api-key) (or via Google AI Studio) to generate a Google AI API key.\n",
"\n",
"### Chat Models\n",
"\n",
"Use the `ChatGoogleGenerativeAI` class to interact with Google's chat models. See the [API reference](https://python.langchain.com/api_reference/google_genai/chat_models/langchain_google_genai.chat_models.ChatGoogleGenerativeAI.html) for full details.\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "fb18c875",
"metadata": {},
"outputs": [],
"source": [
@@ -66,7 +80,7 @@
},
{
"cell_type": "markdown",
"id": "72ee0c4b-9764-423a-9dbf-95129e185210",
"id": "f050e8db",
"metadata": {},
"source": [
"To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
@@ -75,7 +89,7 @@
{
"cell_type": "code",
"execution_count": null,
"id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
"id": "82cb346f",
"metadata": {},
"outputs": [],
"source": [
@@ -85,27 +99,7 @@
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain Google AI integration lives in the `langchain-google-genai` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-google-genai"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"id": "273cefa0",
"metadata": {},
"source": [
"## Instantiation\n",
@@ -115,15 +109,15 @@
},
{
"cell_type": "code",
"execution_count": 2,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"execution_count": 4,
"id": "7d3dc0b3",
"metadata": {},
"outputs": [],
"source": [
"from langchain_google_genai import ChatGoogleGenerativeAI\n",
"\n",
"llm = ChatGoogleGenerativeAI(\n",
" model=\"gemini-2.0-flash-001\",\n",
" model=\"gemini-2.0-flash\",\n",
" temperature=0,\n",
" max_tokens=None,\n",
" timeout=None,\n",
@@ -134,7 +128,7 @@
},
{
"cell_type": "markdown",
"id": "2b4f3e15",
"id": "343a8c13",
"metadata": {},
"source": [
"## Invocation"
@@ -142,19 +136,17 @@
},
{
"cell_type": "code",
"execution_count": 3,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"execution_count": 5,
"id": "82c5708c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"J'adore la programmation.\", additional_kwargs={}, response_metadata={'prompt_feedback': {'block_reason': 0, 'safety_ratings': []}, 'finish_reason': 'STOP', 'model_name': 'gemini-2.0-flash-001', 'safety_ratings': []}, id='run-61cff164-40be-4f88-a2df-cca58297502f-0', usage_metadata={'input_tokens': 20, 'output_tokens': 7, 'total_tokens': 27, 'input_token_details': {'cache_read': 0}})"
"AIMessage(content=\"J'adore la programmation.\", additional_kwargs={}, response_metadata={'prompt_feedback': {'block_reason': 0, 'safety_ratings': []}, 'finish_reason': 'STOP', 'model_name': 'gemini-2.0-flash', 'safety_ratings': []}, id='run-3b28d4b8-8a62-4e6c-ad4e-b53e6e825749-0', usage_metadata={'input_tokens': 20, 'output_tokens': 7, 'total_tokens': 27, 'input_token_details': {'cache_read': 0}})"
]
},
"execution_count": 3,
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
@@ -173,8 +165,8 @@
},
{
"cell_type": "code",
"execution_count": 4,
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"execution_count": 6,
"id": "49d2d0c2",
"metadata": {},
"outputs": [
{
@@ -191,7 +183,7 @@
},
{
"cell_type": "markdown",
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
"id": "ee3f6e1d",
"metadata": {},
"source": [
"## Chaining\n",
@@ -201,17 +193,17 @@
},
{
"cell_type": "code",
"execution_count": 5,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"execution_count": 7,
"id": "3c8407ee",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Ich liebe Programmieren.', additional_kwargs={}, response_metadata={'prompt_feedback': {'block_reason': 0, 'safety_ratings': []}, 'finish_reason': 'STOP', 'model_name': 'gemini-2.0-flash-001', 'safety_ratings': []}, id='run-dd2f8fb9-62d9-4b84-9c97-ed9c34cda313-0', usage_metadata={'input_tokens': 15, 'output_tokens': 7, 'total_tokens': 22, 'input_token_details': {'cache_read': 0}})"
"AIMessage(content='Ich liebe Programmieren.', additional_kwargs={}, response_metadata={'prompt_feedback': {'block_reason': 0, 'safety_ratings': []}, 'finish_reason': 'STOP', 'model_name': 'gemini-2.0-flash', 'safety_ratings': []}, id='run-e5561c6b-2beb-4411-9210-4796b576a7cd-0', usage_metadata={'input_tokens': 15, 'output_tokens': 7, 'total_tokens': 22, 'input_token_details': {'cache_read': 0}})"
]
},
"execution_count": 5,
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
@@ -241,22 +233,164 @@
},
{
"cell_type": "markdown",
"id": "41c2ff10-a3ba-4f40-b3aa-7a395854849e",
"id": "bdae9742",
"metadata": {},
"source": [
"## Image generation\n",
"## Multimodal Usage\n",
"\n",
"Some Gemini models (specifically `gemini-2.0-flash-exp`) support image generation capabilities.\n",
"Gemini models can accept multimodal inputs (text, images, audio, video) and, for some models, generate multimodal outputs.\n",
"\n",
"### Text to image\n",
"### Image Input\n",
"\n",
"See a simple usage example below:"
"Provide image inputs along with text using a `HumanMessage` with a list content format. The `gemini-2.0-flash` model can handle images."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "7589e14d-8d1b-4c82-965f-5558d80cb677",
"execution_count": null,
"id": "6833fe5d",
"metadata": {},
"outputs": [],
"source": [
"import base64\n",
"\n",
"from langchain_core.messages import HumanMessage\n",
"from langchain_google_genai import ChatGoogleGenerativeAI\n",
"\n",
"# Example using a public URL (remains the same)\n",
"message_url = HumanMessage(\n",
" content=[\n",
" {\n",
" \"type\": \"text\",\n",
" \"text\": \"Describe the image at the URL.\",\n",
" },\n",
" {\"type\": \"image_url\", \"image_url\": \"https://picsum.photos/seed/picsum/200/300\"},\n",
" ]\n",
")\n",
"result_url = llm.invoke([message_url])\n",
"print(f\"Response for URL image: {result_url.content}\")\n",
"\n",
"# Example using a local image file encoded in base64\n",
"image_file_path = \"/Users/philschmid/projects/google-gemini/langchain/docs/static/img/agents_vs_chains.png\"\n",
"\n",
"with open(image_file_path, \"rb\") as image_file:\n",
" encoded_image = base64.b64encode(image_file.read()).decode(\"utf-8\")\n",
"\n",
"message_local = HumanMessage(\n",
" content=[\n",
" {\"type\": \"text\", \"text\": \"Describe the local image.\"},\n",
" {\"type\": \"image_url\", \"image_url\": f\"data:image/png;base64,{encoded_image}\"},\n",
" ]\n",
")\n",
"result_local = llm.invoke([message_local])\n",
"print(f\"Response for local image: {result_local.content}\")"
]
},
{
"cell_type": "markdown",
"id": "1b422382",
"metadata": {},
"source": [
"Other supported `image_url` formats:\n",
"- A Google Cloud Storage URI (`gs://...`). Ensure the service account has access.\n",
"- A PIL Image object (the library handles encoding).\n",
"\n",
"### Audio Input\n",
"\n",
"Provide audio file inputs along with text. Use a model like `gemini-2.0-flash`."
]
},
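{
"cell_type": "markdown",
"id": "8e5b3d72-1a4c-4f9e-b6d8-3c2a1f0e9d84",
"metadata": {},
"source": [
"A minimal sketch of the PIL path, assuming (per the note above) that a `PIL.Image` object can be passed directly as the `image_url` value and the library handles the encoding; `example.png` is a placeholder path:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6c9f1e35-7b2d-4a8c-9e0f-4d3b2a1c0f95",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.messages import HumanMessage\n",
"from PIL import Image\n",
"\n",
"pil_image = Image.open(\"example.png\")  # Placeholder path\n",
"\n",
"message_pil = HumanMessage(\n",
"    content=[\n",
"        {\"type\": \"text\", \"text\": \"Describe the image.\"},\n",
"        {\"type\": \"image_url\", \"image_url\": pil_image},\n",
"    ]\n",
")\n",
"result_pil = llm.invoke([message_pil])\n",
"print(f\"Response for PIL image: {result_pil.content}\")"
]
},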
{
"cell_type": "code",
"execution_count": null,
"id": "a3461836",
"metadata": {},
"outputs": [],
"source": [
"import base64\n",
"\n",
"from langchain_core.messages import HumanMessage\n",
"\n",
"# Ensure you have an audio file named 'example_audio.mp3' or provide the correct path.\n",
"audio_file_path = \"example_audio.mp3\"\n",
"audio_mime_type = \"audio/mpeg\"\n",
"\n",
"\n",
"with open(audio_file_path, \"rb\") as audio_file:\n",
" encoded_audio = base64.b64encode(audio_file.read()).decode(\"utf-8\")\n",
"\n",
"message = HumanMessage(\n",
" content=[\n",
" {\"type\": \"text\", \"text\": \"Transcribe the audio.\"},\n",
" {\n",
" \"type\": \"media\",\n",
" \"data\": encoded_audio, # Use base64 string directly\n",
" \"mime_type\": audio_mime_type,\n",
" },\n",
" ]\n",
")\n",
"response = llm.invoke([message]) # Uncomment to run\n",
"print(f\"Response for audio: {response.content}\")"
]
},
{
"cell_type": "markdown",
"id": "0d898e27",
"metadata": {},
"source": [
"### Video Input\n",
"\n",
"Provide video file inputs along with text. Use a model like `gemini-2.0-flash`."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3046e74b",
"metadata": {},
"outputs": [],
"source": [
"import base64\n",
"\n",
"from langchain_core.messages import HumanMessage\n",
"from langchain_google_genai import ChatGoogleGenerativeAI\n",
"\n",
"# Ensure you have a video file named 'example_video.mp4' or provide the correct path.\n",
"video_file_path = \"example_video.mp4\"\n",
"video_mime_type = \"video/mp4\"\n",
"\n",
"\n",
"with open(video_file_path, \"rb\") as video_file:\n",
" encoded_video = base64.b64encode(video_file.read()).decode(\"utf-8\")\n",
"\n",
"message = HumanMessage(\n",
" content=[\n",
" {\"type\": \"text\", \"text\": \"Describe the first few frames of the video.\"},\n",
" {\n",
" \"type\": \"media\",\n",
" \"data\": encoded_video, # Use base64 string directly\n",
" \"mime_type\": video_mime_type,\n",
" },\n",
" ]\n",
")\n",
"response = llm.invoke([message]) # Uncomment to run\n",
"print(f\"Response for video: {response.content}\")"
]
},
{
"cell_type": "markdown",
"id": "2df11d89",
"metadata": {},
"source": [
"### Image Generation (Multimodal Output)\n",
"\n",
"The `gemini-2.0-flash` model can generate text and images inline (image generation is experimental). You need to specify the desired `response_modalities`."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c0b7180f",
"metadata": {},
"outputs": [
{
@@ -266,17 +400,12 @@
"<IPython.core.display.Image object>"
]
},
"metadata": {
"image/png": {
"width": 300
}
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"import base64\n",
"from io import BytesIO\n",
"\n",
"from IPython.display import Image, display\n",
"from langchain_google_genai import ChatGoogleGenerativeAI\n",
@@ -301,7 +430,7 @@
},
{
"cell_type": "markdown",
"id": "b14c0d87-cf7e-4d88-bda1-2ab40ec0350a",
"id": "14bf00f1",
"metadata": {},
"source": [
"### Image and text to image\n",
@@ -311,8 +440,8 @@
},
{
"cell_type": "code",
"execution_count": 3,
"id": "0f4ed7a5-980c-4b54-b743-0b988909744c",
"execution_count": null,
"id": "d65e195c",
"metadata": {},
"outputs": [
{
@@ -322,11 +451,7 @@
"<IPython.core.display.Image object>"
]
},
"metadata": {
"image/png": {
"width": 300
}
},
"metadata": {},
"output_type": "display_data"
}
],
@@ -349,7 +474,7 @@
},
{
"cell_type": "markdown",
"id": "a62669d8-becd-495f-8f4a-82d7c5d87969",
"id": "43b54d3f",
"metadata": {},
"source": [
"You can also represent an input image and query in a single message by encoding the base64 data in the [data URI scheme](https://en.wikipedia.org/wiki/Data_URI_scheme):"
@@ -357,8 +482,8 @@
},
{
"cell_type": "code",
"execution_count": 9,
"id": "6241da43-e210-43bc-89af-b3c480ea06e9",
"execution_count": null,
"id": "0dfc7e1e",
"metadata": {},
"outputs": [
{
@@ -368,11 +493,7 @@
"<IPython.core.display.Image object>"
]
},
"metadata": {
"image/png": {
"width": 300
}
},
"metadata": {},
"output_type": "display_data"
}
],
@@ -403,7 +524,7 @@
},
{
"cell_type": "markdown",
"id": "cfe228d3-6773-4283-9788-87bdf6912b1c",
"id": "789818d7",
"metadata": {},
"source": [
"You can also use LangGraph to manage the conversation history for you as in [this tutorial](/docs/tutorials/chatbot/)."
@@ -411,7 +532,313 @@
},
{
"cell_type": "markdown",
"id": "d1ee55bc-ffc8-4cfa-801c-993953a08cfd",
"id": "b037e2dc",
"metadata": {},
"source": [
"## Tool Calling\n",
"\n",
"You can equip the model with tools to call."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b0d759f9",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[{'name': 'get_weather', 'args': {'location': 'San Francisco'}, 'id': 'a6248087-74c5-4b7c-9250-f335e642927c', 'type': 'tool_call'}]\n"
]
},
{
"data": {
"text/plain": [
"AIMessage(content=\"OK. It's sunny in San Francisco.\", additional_kwargs={}, response_metadata={'prompt_feedback': {'block_reason': 0, 'safety_ratings': []}, 'finish_reason': 'STOP', 'model_name': 'gemini-2.0-flash', 'safety_ratings': []}, id='run-ac5bb52c-e244-4c72-9fbc-fb2a9cd7a72e-0', usage_metadata={'input_tokens': 29, 'output_tokens': 11, 'total_tokens': 40, 'input_token_details': {'cache_read': 0}})"
]
},
"execution_count": 28,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.tools import tool\n",
"from langchain_google_genai import ChatGoogleGenerativeAI\n",
"\n",
"\n",
"# Define the tool\n",
"@tool(description=\"Get the current weather in a given location\")\n",
"def get_weather(location: str) -> str:\n",
" return \"It's sunny.\"\n",
"\n",
"\n",
"# Initialize the model and bind the tool\n",
"llm = ChatGoogleGenerativeAI(model=\"gemini-2.0-flash\")\n",
"llm_with_tools = llm.bind_tools([get_weather])\n",
"\n",
"# Invoke the model with a query that should trigger the tool\n",
"query = \"What's the weather in San Francisco?\"\n",
"ai_msg = llm_with_tools.invoke(query)\n",
"\n",
"# Check the tool calls in the response\n",
"print(ai_msg.tool_calls)\n",
"\n",
"# Example tool call message would be needed here if you were actually running the tool\n",
"from langchain_core.messages import ToolMessage\n",
"\n",
"tool_message = ToolMessage(\n",
" content=get_weather(*ai_msg.tool_calls[0][\"args\"]),\n",
" tool_call_id=ai_msg.tool_calls[0][\"id\"],\n",
")\n",
"llm_with_tools.invoke([ai_msg, tool_message]) # Example of passing tool result back"
]
},
{
"cell_type": "markdown",
"id": "91d42b86",
"metadata": {},
"source": [
"## Structured Output\n",
"\n",
"Force the model to respond with a specific structure using Pydantic models."
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "7457dbe4",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"name='Abraham Lincoln' height_m=1.93\n"
]
}
],
"source": [
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"from langchain_google_genai import ChatGoogleGenerativeAI\n",
"\n",
"\n",
"# Define the desired structure\n",
"class Person(BaseModel):\n",
" \"\"\"Information about a person.\"\"\"\n",
"\n",
" name: str = Field(..., description=\"The person's name\")\n",
" height_m: float = Field(..., description=\"The person's height in meters\")\n",
"\n",
"\n",
"# Initialize the model\n",
"llm = ChatGoogleGenerativeAI(model=\"gemini-2.0-flash\", temperature=0)\n",
"structured_llm = llm.with_structured_output(Person)\n",
"\n",
"# Invoke the model with a query asking for structured information\n",
"result = structured_llm.invoke(\n",
" \"Who was the 16th president of the USA, and how tall was he in meters?\"\n",
")\n",
"print(result)"
]
},
{
"cell_type": "markdown",
"id": "90d4725e",
"metadata": {},
"source": [
"\n",
"\n",
"## Token Usage Tracking\n",
"\n",
"Access token usage information from the response metadata."
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "edcc003e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Prompt engineering is the art and science of crafting effective text prompts to elicit desired and accurate responses from large language models.\n",
"\n",
"Usage Metadata:\n",
"{'input_tokens': 10, 'output_tokens': 24, 'total_tokens': 34, 'input_token_details': {'cache_read': 0}}\n"
]
}
],
"source": [
"from langchain_google_genai import ChatGoogleGenerativeAI\n",
"\n",
"llm = ChatGoogleGenerativeAI(model=\"gemini-2.0-flash\")\n",
"\n",
"result = llm.invoke(\"Explain the concept of prompt engineering in one sentence.\")\n",
"\n",
"print(result.content)\n",
"print(\"\\nUsage Metadata:\")\n",
"print(result.usage_metadata)"
]
},
{
"cell_type": "markdown",
"id": "28950dbc",
"metadata": {},
"source": [
"## Built-in tools\n",
"\n",
"Google Gemini supports a variety of built-in tools ([google search](https://ai.google.dev/gemini-api/docs/grounding/search-suggestions), [code execution](https://ai.google.dev/gemini-api/docs/code-execution?lang=python)), which can be bound to the model in the usual way."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dd074816",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The next total solar eclipse visible in the United States will occur on August 23, 2044. However, the path of totality will only pass through Montana, North Dakota, and South Dakota.\n",
"\n",
"For a total solar eclipse that crosses a significant portion of the continental U.S., you'll have to wait until August 12, 2045. This eclipse will start in California and end in Florida.\n"
]
}
],
"source": [
"from google.ai.generativelanguage_v1beta.types import Tool as GenAITool\n",
"\n",
"resp = llm.invoke(\n",
" \"When is the next total solar eclipse in US?\",\n",
" tools=[GenAITool(google_search={})],\n",
")\n",
"\n",
"print(resp.content)"
]
},
{
"cell_type": "code",
"execution_count": 43,
"id": "6964be2d",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Executable code: print(2*2)\n",
"\n",
"Code execution result: 4\n",
"\n",
"2*2 is 4.\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Users/philschmid/projects/google-gemini/langchain/.venv/lib/python3.9/site-packages/langchain_google_genai/chat_models.py:580: UserWarning: \n",
" ⚠️ Warning: Output may vary each run. \n",
" - 'executable_code': Always present. \n",
" - 'execution_result' & 'image_url': May be absent for some queries. \n",
"\n",
" Validate before using in production.\n",
"\n",
" warnings.warn(\n"
]
}
],
"source": [
"from google.ai.generativelanguage_v1beta.types import Tool as GenAITool\n",
"\n",
"resp = llm.invoke(\n",
" \"What is 2*2, use python\",\n",
" tools=[GenAITool(code_execution={})],\n",
")\n",
"\n",
"for c in resp.content:\n",
" if isinstance(c, dict):\n",
" if c[\"type\"] == \"code_execution_result\":\n",
" print(f\"Code execution result: {c['code_execution_result']}\")\n",
" elif c[\"type\"] == \"executable_code\":\n",
" print(f\"Executable code: {c['executable_code']}\")\n",
" else:\n",
" print(c)"
]
},
{
"cell_type": "markdown",
"id": "a27e6ff4",
"metadata": {},
"source": [
"## Native Async\n",
"\n",
"Use asynchronous methods for non-blocking calls."
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "c6803e57",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Async Invoke Result: The sky is blue due to a phenomenon called **Rayle...\n",
"\n",
"Async Stream Result:\n",
"The thread is free, it does not wait,\n",
"For answers slow, or tasks of fate.\n",
"A promise made, a future bright,\n",
"It moves ahead, with all its might.\n",
"\n",
"A callback waits, a signal sent,\n",
"When data's read, or job is spent.\n",
"Non-blocking code, a graceful dance,\n",
"Responsive apps, a fleeting glance.\n",
"\n",
"Async Batch Results: ['1 + 1 = 2', '2 + 2 = 4']\n"
]
}
],
"source": [
"from langchain_google_genai import ChatGoogleGenerativeAI\n",
"\n",
"llm = ChatGoogleGenerativeAI(model=\"gemini-2.0-flash\")\n",
"\n",
"\n",
"async def run_async_calls():\n",
" # Async invoke\n",
" result_ainvoke = await llm.ainvoke(\"Why is the sky blue?\")\n",
" print(\"Async Invoke Result:\", result_ainvoke.content[:50] + \"...\")\n",
"\n",
" # Async stream\n",
" print(\"\\nAsync Stream Result:\")\n",
" async for chunk in llm.astream(\n",
" \"Write a short poem about asynchronous programming.\"\n",
" ):\n",
" print(chunk.content, end=\"\", flush=True)\n",
" print(\"\\n\")\n",
"\n",
" # Async batch\n",
" results_abatch = await llm.abatch([\"What is 1+1?\", \"What is 2+2?\"])\n",
" print(\"Async Batch Results:\", [res.content for res in results_abatch])\n",
"\n",
"\n",
"await run_async_calls()"
]
},
{
"cell_type": "markdown",
"id": "99204b32",
"metadata": {},
"source": [
"## Safety Settings\n",
@@ -421,8 +848,8 @@
},
{
"cell_type": "code",
"execution_count": 14,
"id": "238b2f96-e573-4fac-bbf2-7e52ad926833",
"execution_count": null,
"id": "d4c14039",
"metadata": {},
"outputs": [],
"source": [
@@ -442,7 +869,7 @@
},
{
"cell_type": "markdown",
"id": "5805d40c-deb8-4924-8e72-a294a0482fc9",
"id": "dea38fb1",
"metadata": {},
"source": [
"For an enumeration of the categories and thresholds available, see Google's [safety setting types](https://ai.google.dev/api/python/google/generativeai/types/SafetySettingDict)."
@@ -450,7 +877,7 @@
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"id": "d6d0e853",
"metadata": {},
"source": [
"## API reference\n",
@@ -461,7 +888,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": ".venv",
"language": "python",
"name": "python3"
},
@@ -475,7 +902,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
"version": "3.9.6"
}
},
"nbformat": 4,

View File

@@ -19,18 +19,50 @@
"id": "5bcea387"
},
"source": [
"# ChatLiteLLM\n",
"# ChatLiteLLM and ChatLiteLLMRouter\n",
"\n",
"[LiteLLM](https://github.com/BerriAI/litellm) is a library that simplifies calling Anthropic, Azure, Huggingface, Replicate, etc.\n",
"\n",
"This notebook covers how to get started with using Langchain + the LiteLLM I/O library.\n",
"\n",
"This integration contains two main classes:\n",
"\n",
"- ```ChatLiteLLM```: The main Langchain wrapper for basic usage of LiteLLM ([docs](https://docs.litellm.ai/docs/)).\n",
"- ```ChatLiteLLMRouter```: A ```ChatLiteLLM``` wrapper that leverages LiteLLM's Router ([docs](https://docs.litellm.ai/docs/routing))."
]
},
{
"cell_type": "markdown",
"id": "2ddb7fd3",
"metadata": {},
"source": [
"## Table of Contents\n",
"1. [Overview](#overview)\n",
" - [Integration Details](#integration-details)\n",
" - [Model Features](#model-features)\n",
"2. [Setup](#setup)\n",
"3. [Credentials](#credentials)\n",
"4. [Installation](#installation)\n",
"5. [Instantiation](#instantiation)\n",
" - [ChatLiteLLM](#chatlitellm)\n",
" - [ChatLiteLLMRouter](#chatlitellmrouter)\n",
"6. [Invocation](#invocation)\n",
"7. [Async and Streaming Functionality](#async-and-streaming-functionality)\n",
"8. [API Reference](#api-reference)"
]
},
{
"cell_type": "markdown",
"id": "37be6ef8",
"metadata": {},
"source": [
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | JS support| Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatLiteLLM](https://python.langchain.com/docs/integrations/chat/litellm/) | [langchain-litellm](https://pypi.org/project/langchain-litellm/)| ❌ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-litellm?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-litellm?style=flat-square&label=%20) |\n",
"| [ChatLiteLLM](https://python.langchain.com/docs/integrations/chat/litellm/#chatlitellm) | [langchain-litellm](https://pypi.org/project/langchain-litellm/)| ❌ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-litellm?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-litellm?style=flat-square&label=%20) |\n",
"| [ChatLiteLLMRouter](https://python.langchain.com/docs/integrations/chat/litellm/#chatlitellmrouter) | [langchain-litellm](https://pypi.org/project/langchain-litellm/)| ❌ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-litellm?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-litellm?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](https://python.langchain.com/docs/how_to/tool_calling/) | [Structured output](https://python.langchain.com/docs/how_to/structured_output/) | JSON mode | Image input | Audio input | Video input | [Token-level streaming](https://python.langchain.com/docs/integrations/chat/litellm/#chatlitellm-also-supports-async-and-streaming-functionality) | [Native async](https://python.langchain.com/docs/integrations/chat/litellm/#chatlitellm-also-supports-async-and-streaming-functionality) | [Token usage](https://python.langchain.com/docs/how_to/chat_token_usage_tracking/) | [Logprobs](https://python.langchain.com/docs/how_to/logprobs/) |\n",
@@ -38,7 +70,7 @@
"| ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |\n",
"\n",
"### Setup\n",
"To access ChatLiteLLM models you'll need to install the `langchain-litellm` package and create an OpenAI, Anthropic, Azure, Replicate, OpenRouter, Hugging Face, Together AI or Cohere account. Then you have to get an API key, and export it as an environment variable."
"To access ```ChatLiteLLM``` and ```ChatLiteLLMRouter``` models, you'll need to install the `langchain-litellm` package and create an OpenAI, Anthropic, Azure, Replicate, OpenRouter, Hugging Face, Together AI, or Cohere account. Then, you have to get an API key and export it as an environment variable."
]
},
{
@@ -53,23 +85,23 @@
"You have to choose the LLM provider you want and sign up with them to get their API key.\n",
"\n",
"### Example - Anthropic\n",
"Head to https://console.anthropic.com/ to sign up for Anthropic and generate an API key. Once you've done this set the ANTHROPIC_API_KEY environment variable.\n",
"Head to https://console.anthropic.com/ to sign up for Anthropic and generate an API key. Once you've done this, set the ANTHROPIC_API_KEY environment variable.\n",
"\n",
"\n",
"### Example - OpenAI\n",
"Head to https://platform.openai.com/api-keys to sign up for OpenAI and generate an API key. Once you've done this set the OPENAI_API_KEY environment variable."
"Head to https://platform.openai.com/api-keys to sign up for OpenAI and generate an API key. Once you've done this, set the OPENAI_API_KEY environment variable."
]
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"id": "7595eddf",
"metadata": {
"id": "7595eddf"
},
"outputs": [],
"source": [
"## set ENV variables\n",
"## Set ENV variables\n",
"import os\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = \"your-openai-key\"\n",
@@ -85,7 +117,7 @@
"source": [
"### Installation\n",
"\n",
"The LangChain LiteLLM integration lives in the `langchain-litellm` package:"
"The LangChain LiteLLM integration is available in the `langchain-litellm` package:"
]
},
{
@@ -107,13 +139,21 @@
"id": "bc1182b4"
},
"source": [
"## Instantiation\n",
"Now we can instantiate our model object and generate chat completions:"
"## Instantiation"
]
},
{
"cell_type": "markdown",
"id": "d439241a",
"metadata": {},
"source": [
"### ChatLiteLLM\n",
"You can instantiate a ```ChatLiteLLM``` model by providing a ```model``` name [supported by LiteLLM](https://docs.litellm.ai/docs/providers)."
]
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": null,
"id": "d4a7c55d-b235-4ca4-a579-c90cc9570da9",
"metadata": {
"id": "d4a7c55d-b235-4ca4-a579-c90cc9570da9",
@@ -121,9 +161,52 @@
},
"outputs": [],
"source": [
"from langchain_litellm.chat_models import ChatLiteLLM\n",
"from langchain_litellm import ChatLiteLLM\n",
"\n",
"llm = ChatLiteLLM(model=\"gpt-3.5-turbo\")"
"llm = ChatLiteLLM(model=\"gpt-4.1-nano\", temperature=0.1)"
]
},
{
"cell_type": "markdown",
"id": "3d0ed306",
"metadata": {},
"source": [
"### ChatLiteLLMRouter\n",
"You can also leverage LiteLLM's routing capabilities by defining your model list as specified [here](https://docs.litellm.ai/docs/routing)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8d26393a",
"metadata": {},
"outputs": [],
"source": [
"from langchain_litellm import ChatLiteLLMRouter\n",
"from litellm import Router\n",
"\n",
"model_list = [\n",
" {\n",
" \"model_name\": \"gpt-4.1\",\n",
" \"litellm_params\": {\n",
" \"model\": \"azure/gpt-4.1\",\n",
" \"api_key\": \"<your-api-key>\",\n",
" \"api_version\": \"2024-10-21\",\n",
" \"api_base\": \"https://<your-endpoint>.openai.azure.com/\",\n",
" },\n",
" },\n",
" {\n",
" \"model_name\": \"gpt-4o\",\n",
" \"litellm_params\": {\n",
" \"model\": \"azure/gpt-4o\",\n",
" \"api_key\": \"<your-api-key>\",\n",
" \"api_version\": \"2024-10-21\",\n",
" \"api_base\": \"https://<your-endpoint>.openai.azure.com/\",\n",
" },\n",
" },\n",
"]\n",
"litellm_router = Router(model_list=model_list)\n",
"llm = ChatLiteLLMRouter(router=litellm_router, model_name=\"gpt-4.1\", temperature=0.1)"
]
},
{
@@ -133,7 +216,8 @@
"id": "63d98454"
},
"source": [
"## Invocation"
"## Invocation\n",
"Whether you've instantiated a `ChatLiteLLM` or a `ChatLiteLLMRouter`, you can now use the ChatModel through Langchain's API."
]
},
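{
"cell_type": "code",
"execution_count": null,
"id": "1f2e3d4c",
"metadata": {},
"outputs": [],
"source": [
"# A minimal invocation sketch (assumes `llm` from the cells above and a valid\n",
"# provider API key in the environment)\n",
"response = llm.invoke(\"Sing a ballad of LangChain.\")\n",
"print(response.content)"
]
},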
{
@@ -171,7 +255,8 @@
"id": "c361ab1e-8c0c-4206-9e3c-9d1424a12b9c"
},
"source": [
"## `ChatLiteLLM` also supports async and streaming functionality:"
"## Async and Streaming Functionality\n",
"`ChatLiteLLM` and `ChatLiteLLMRouter` also support async and streaming functionality:"
]
},
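{
"cell_type": "code",
"execution_count": null,
"id": "9a8b7c6d",
"metadata": {},
"outputs": [],
"source": [
"# A minimal async + streaming sketch (assumes `llm` from the cells above)\n",
"result = await llm.ainvoke(\"What day comes after Friday?\")\n",
"print(result.content)\n",
"\n",
"async for chunk in llm.astream(\"Write a one-line poem about routing.\"):\n",
"    print(chunk.content, end=\"\", flush=True)"
]
},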
{
@@ -212,7 +297,7 @@
},
"source": [
"## API reference\n",
"For detailed documentation of all `ChatLiteLLM` features and configurations head to the API reference: https://github.com/Akshay-Dongare/langchain-litellm"
"For detailed documentation of all `ChatLiteLLM` and `ChatLiteLLMRouter` features and configurations, head to the API reference: https://github.com/Akshay-Dongare/langchain-litellm"
]
}
],

View File

@@ -1,218 +0,0 @@
{
"cells": [
{
"cell_type": "raw",
"id": "59148044",
"metadata": {},
"source": [
"---\n",
"sidebar_label: LiteLLM Router\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "247da7a6",
"metadata": {},
"source": []
},
{
"attachments": {},
"cell_type": "markdown",
"id": "bf733a38-db84-4363-89e2-de6735c37230",
"metadata": {},
"source": [
"# ChatLiteLLMRouter\n",
"\n",
"[LiteLLM](https://github.com/BerriAI/litellm) is a library that simplifies calling Anthropic, Azure, Huggingface, Replicate, etc. \n",
"\n",
"This notebook covers how to get started with using Langchain + the LiteLLM Router I/O library. "
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "d4a7c55d-b235-4ca4-a579-c90cc9570da9",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain_community.chat_models import ChatLiteLLMRouter\n",
"from langchain_core.messages import HumanMessage\n",
"from litellm import Router"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "70cf04e8-423a-4ff6-8b09-f11fb711c817",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"model_list = [\n",
" {\n",
" \"model_name\": \"gpt-4\",\n",
" \"litellm_params\": {\n",
" \"model\": \"azure/gpt-4-1106-preview\",\n",
" \"api_key\": \"<your-api-key>\",\n",
" \"api_version\": \"2023-05-15\",\n",
" \"api_base\": \"https://<your-endpoint>.openai.azure.com/\",\n",
" },\n",
" },\n",
" {\n",
" \"model_name\": \"gpt-35-turbo\",\n",
" \"litellm_params\": {\n",
" \"model\": \"azure/gpt-35-turbo\",\n",
" \"api_key\": \"<your-api-key>\",\n",
" \"api_version\": \"2023-05-15\",\n",
" \"api_base\": \"https://<your-endpoint>.openai.azure.com/\",\n",
" },\n",
" },\n",
"]\n",
"litellm_router = Router(model_list=model_list)\n",
"chat = ChatLiteLLMRouter(router=litellm_router, model_name=\"gpt-35-turbo\")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "8199ef8f-eb8b-4253-9ea0-6c24a013ca4c",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"J'aime programmer.\")"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" HumanMessage(\n",
" content=\"Translate this sentence from English to French. I love programming.\"\n",
" )\n",
"]\n",
"chat(messages)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "c361ab1e-8c0c-4206-9e3c-9d1424a12b9c",
"metadata": {},
"source": [
"## `ChatLiteLLMRouter` also supports async and streaming functionality:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "93a21c5c-6ef9-4688-be60-b2e1f94842fb",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain_core.callbacks import CallbackManager, StreamingStdOutCallbackHandler"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "c5fac0e9-05a4-4fc1-a3b3-e5bbb24b971b",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"LLMResult(generations=[[ChatGeneration(text=\"J'adore programmer.\", generation_info={'finish_reason': 'stop'}, message=AIMessage(content=\"J'adore programmer.\"))]], llm_output={'token_usage': {'completion_tokens': 6, 'prompt_tokens': 19, 'total_tokens': 25}, 'model_name': None}, run=[RunInfo(run_id=UUID('75003ec9-1e2b-43b7-a216-10dcc0f75e00'))])"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"await chat.agenerate([messages])"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "025be980-e50d-4a68-93dc-c9c7b500ce34",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"J'adore programmer."
]
},
{
"data": {
"text/plain": [
"AIMessage(content=\"J'adore programmer.\")"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chat = ChatLiteLLMRouter(\n",
" router=litellm_router,\n",
" model_name=\"gpt-35-turbo\",\n",
" streaming=True,\n",
" verbose=True,\n",
" callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),\n",
")\n",
"chat(messages)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c253883f",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -17,21 +17,21 @@
"source": [
"# ChatClovaX\n",
"\n",
"This notebook provides a quick overview for getting started with Navers HyperCLOVA X [chat models](https://python.langchain.com/docs/concepts/chat_models) via CLOVA Studio. For detailed documentation of all ChatClovaX features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.naver.ChatClovaX.html).\n",
"This notebook provides a quick overview for getting started with Navers HyperCLOVA X [chat models](https://python.langchain.com/docs/concepts/chat_models) via CLOVA Studio. For detailed documentation of all ChatClovaX features and configurations head to the [API reference](https://guide.ncloud-docs.com/docs/clovastudio-dev-langchain).\n",
"\n",
"[CLOVA Studio](http://clovastudio.ncloud.com/) has several chat models. You can find information about latest models and their costs, context windows, and supported input types in the CLOVA Studio API Guide [documentation](https://api.ncloud-docs.com/docs/clovastudio-chatcompletions).\n",
"[CLOVA Studio](http://clovastudio.ncloud.com/) has several chat models. You can find information about latest models and their costs, context windows, and supported input types in the CLOVA Studio Guide [documentation](https://guide.ncloud-docs.com/docs/clovastudio-model).\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
"| :--- | :--- |:-----:| :---: |:------------------------------------------------------------------------:| :---: | :---: |\n",
"| [ChatClovaX](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.naver.ChatClovaX.html) | [langchain-community](https://python.langchain.com/api_reference/community/index.html) | ❌ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_community?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_community?style=flat-square&label=%20) |\n",
"| [ChatClovaX](https://guide.ncloud-docs.com/docs/clovastudio-dev-langchain#HyperCLOVAX%EB%AA%A8%EB%8D%B8%EC%9D%B4%EC%9A%A9) | [langchain-naver](https://pypi.org/project/langchain-naver/) | ❌ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_naver?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_naver?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling/) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"|:------------------------------------------:| :---: | :---: | :---: | :---: | :---: |:-----------------------------------------------------:| :---: |:------------------------------------------------------:|:----------------------------------:|\n",
"|| ❌ | ❌ | | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |\n",
"|| ❌ | ❌ | | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |\n",
"\n",
"## Setup\n",
"\n",
@@ -39,26 +39,23 @@
"\n",
"1. Creating [NAVER Cloud Platform](https://www.ncloud.com/) account\n",
"2. Apply to use [CLOVA Studio](https://www.ncloud.com/product/aiService/clovaStudio)\n",
"3. Create a CLOVA Studio Test App or Service App of a model to use (See [here](https://guide.ncloud-docs.com/docs/en/clovastudio-playground01#테스트앱생성).)\n",
"3. Create a CLOVA Studio Test App or Service App of a model to use (See [here](https://guide.ncloud-docs.com/docs/clovastudio-playground-testapp).)\n",
"4. Issue a Test or Service API key (See [here](https://api.ncloud-docs.com/docs/ai-naver-clovastudio-summary#API%ED%82%A4).)\n",
"\n",
"### Credentials\n",
"\n",
"Set the `NCP_CLOVASTUDIO_API_KEY` environment variable with your API key.\n",
" - Note that if you are using a legacy API Key (that doesn't start with `nv-*` prefix), you might need to get an additional API Key by clicking `App Request Status` > `Service App, Test App List` > `Details button for each app` in [CLOVA Studio](https://clovastudio.ncloud.com/studio-application/service-app) and set it as `NCP_APIGW_API_KEY`.\n",
"Set the `CLOVASTUDIO_API_KEY` environment variable with your API key.\n",
"\n",
"You can add them to your environment variables as below:\n",
"\n",
"``` bash\n",
"export NCP_CLOVASTUDIO_API_KEY=\"your-api-key-here\"\n",
"# Uncomment below to use a legacy API key\n",
"# export NCP_APIGW_API_KEY=\"your-api-key-here\"\n",
"export CLOVASTUDIO_API_KEY=\"your-api-key-here\"\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 2,
"id": "2def81b5-b023-4f40-a97b-b2c5ca59d6a9",
"metadata": {},
"outputs": [],
@@ -66,22 +63,19 @@
"import getpass\n",
"import os\n",
"\n",
"if not os.getenv(\"NCP_CLOVASTUDIO_API_KEY\"):\n",
" os.environ[\"NCP_CLOVASTUDIO_API_KEY\"] = getpass.getpass(\n",
" \"Enter your NCP CLOVA Studio API Key: \"\n",
" )\n",
"# Uncomment below to use a legacy API key\n",
"# if not os.getenv(\"NCP_APIGW_API_KEY\"):\n",
"# os.environ[\"NCP_APIGW_API_KEY\"] = getpass.getpass(\n",
"# \"Enter your NCP API Gateway API key: \"\n",
"# )"
"if not os.getenv(\"CLOVASTUDIO_API_KEY\"):\n",
" os.environ[\"CLOVASTUDIO_API_KEY\"] = getpass.getpass(\n",
" \"Enter your CLOVA Studio API Key: \"\n",
" )"
]
},
{
"cell_type": "markdown",
"id": "7c695442",
"metadata": {},
"source": "To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
"source": [
"To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
]
},
{
"cell_type": "code",
@@ -101,7 +95,7 @@
"source": [
"### Installation\n",
"\n",
"The LangChain Naver integration lives in the `langchain-community` package:"
"The LangChain Naver integration lives in the `langchain-naver` package:"
]
},
{
@@ -112,7 +106,7 @@
"outputs": [],
"source": [
"# install package\n",
"!pip install -qU langchain-community"
"%pip install -qU langchain-naver"
]
},
{
@@ -127,21 +121,19 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 3,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_models import ChatClovaX\n",
"from langchain_naver import ChatClovaX\n",
"\n",
"chat = ChatClovaX(\n",
" model=\"HCX-003\",\n",
" max_tokens=100,\n",
" model=\"HCX-005\",\n",
" temperature=0.5,\n",
" # clovastudio_api_key=\"...\" # set if you prefer to pass api key directly instead of using environment variables\n",
" # task_id=\"...\" # set if you want to use fine-tuned model\n",
" # service_app=False # set True if using Service App. Default value is False (means using Test App)\n",
" # include_ai_filters=False # set True if you want to detect inappropriate content. Default value is False\n",
" max_tokens=None,\n",
" timeout=None,\n",
" max_retries=2,\n",
" # other params...\n",
")"
]
@@ -153,12 +145,12 @@
"source": [
"## Invocation\n",
"\n",
"In addition to invoke, we also support batch and stream functionalities."
"In addition to invoke, `ChatClovaX` also support batch and stream functionalities."
]
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 4,
"id": "62e0dbc3",
"metadata": {
"tags": []
@@ -167,10 +159,10 @@
{
"data": {
"text/plain": [
"AIMessage(content='저는 네이버 AI를 사용하는 것이 좋아요.', additional_kwargs={}, response_metadata={'stop_reason': 'stop_before', 'input_length': 25, 'output_length': 14, 'seed': 1112164354, 'ai_filter': None}, id='run-b57bc356-1148-4007-837d-cc409dbd57cc-0', usage_metadata={'input_tokens': 25, 'output_tokens': 14, 'total_tokens': 39})"
"AIMessage(content='네이버 인공지능을 사용하는 것을 정말 좋아합니다.', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 11, 'prompt_tokens': 28, 'total_tokens': 39, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'HCX-005', 'system_fingerprint': None, 'id': 'b70c26671cd247a0864115bacfb5fc12', 'finish_reason': 'stop', 'logprobs': None}, id='run-3faf6a8d-d5da-49ad-9fbb-7b56ed23b484-0', usage_metadata={'input_tokens': 28, 'output_tokens': 11, 'total_tokens': 39, 'input_token_details': {}, 'output_token_details': {}})"
]
},
"execution_count": 3,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
@@ -190,7 +182,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 5,
"id": "24e7377f",
"metadata": {},
"outputs": [
@@ -198,7 +190,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"저는 네이버 AI를 사용하는 것이 좋아요.\n"
"네이버 인공지능을 사용하는 것을 정말 좋아합니다.\n"
]
}
],
@@ -218,17 +210,17 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 6,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='저는 네이버 AI를 사용하는 것 좋아.', additional_kwargs={}, response_metadata={'stop_reason': 'stop_before', 'input_length': 25, 'output_length': 14, 'seed': 2575184681, 'ai_filter': None}, id='run-7014b330-eba3-4701-bb62-df73ce39b854-0', usage_metadata={'input_tokens': 25, 'output_tokens': 14, 'total_tokens': 39})"
"AIMessage(content='저는 네이버 인공지능을 사용하는 것 좋아합니다.', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 10, 'prompt_tokens': 28, 'total_tokens': 38, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'HCX-005', 'system_fingerprint': None, 'id': 'b7a826d17fcf4fee8386fca2ebc63284', 'finish_reason': 'stop', 'logprobs': None}, id='run-35957816-3325-4d9c-9441-e40704912be6-0', usage_metadata={'input_tokens': 28, 'output_tokens': 10, 'total_tokens': 38, 'input_token_details': {}, 'output_token_details': {}})"
]
},
"execution_count": 5,
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
@@ -266,7 +258,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 7,
"id": "2c07af21-dda5-4514-b4de-1f214c2cebcd",
"metadata": {},
"outputs": [
@@ -274,7 +266,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Certainly! In Korean, \"Hi\" is pronounced as \"안녕\" (annyeong). The first syllable, \"안,\" sounds like the \"ahh\" sound in \"apple,\" while the second syllable, \"녕,\" sounds like the \"yuh\" sound in \"you.\" So when you put them together, it's like saying \"ahhyuh-nyuhng.\" Remember to pronounce each syllable clearly and separately for accurate pronunciation."
"In Korean, the informal way of saying 'hi' is \"안녕\" (annyeong). If you're addressing someone older or showing more respect, you would use \"안녕하세요\" (annjeonghaseyo). Both phrases are used as greetings similar to 'hello'. Remember, pronunciation is key so make sure to pronounce each syllable clearly: 안-녀-엉 (an-nyeo-eong) and 안-녕-하-세-요 (an-nyeong-ha-se-yo)."
]
}
],
@@ -298,115 +290,37 @@
"\n",
"### Using fine-tuned models\n",
"\n",
"You can call fine-tuned models by passing in your corresponding `task_id` parameter. (You dont need to specify the `model_name` parameter when calling fine-tuned model.)\n",
"You can call fine-tuned models by passing the `task_id` to the `model` parameter as: `ft:{task_id}`.\n",
"\n",
"You can check `task_id` from corresponding Test App or Service App details."
]
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": null,
"id": "cb436788",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='저는 네이버 AI를 사용하는 것이 너무 좋아요.', additional_kwargs={}, response_metadata={'stop_reason': 'stop_before', 'input_length': 25, 'output_length': 15, 'seed': 52559061, 'ai_filter': None}, id='run-5bea8d4a-48f3-4c34-ae70-66e60dca5344-0', usage_metadata={'input_tokens': 25, 'output_tokens': 15, 'total_tokens': 40})"
"AIMessage(content='네이버 인공지능을 사용하는 것을 정말 좋아합니다.', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 11, 'prompt_tokens': 28, 'total_tokens': 39, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'HCX-005', 'system_fingerprint': None, 'id': '2222d6d411a948c883aac1e03ca6cebe', 'finish_reason': 'stop', 'logprobs': None}, id='run-9696d7e2-7afa-4bb4-9c03-b95fcf678ab8-0', usage_metadata={'input_tokens': 28, 'output_tokens': 11, 'total_tokens': 39, 'input_token_details': {}, 'output_token_details': {}})"
]
},
"execution_count": 7,
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"fine_tuned_model = ChatClovaX(\n",
" task_id=\"5s8egt3a\", # set if you want to use fine-tuned model\n",
" model=\"ft:a1b2c3d4\", # set as `ft:{task_id}` with your fine-tuned model's task id\n",
" # other params...\n",
")\n",
"\n",
"fine_tuned_model.invoke(messages)"
]
},
{
"cell_type": "markdown",
"id": "f428deaf",
"metadata": {},
"source": [
"### Service App\n",
"\n",
"When going live with production-level application using CLOVA Studio, you should apply for and use Service App. (See [here](https://guide.ncloud-docs.com/docs/en/clovastudio-playground01#서비스앱신청).)\n",
"\n",
"For a Service App, you should use a corresponding Service API key and can only be called with it."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dcf566df",
"metadata": {},
"outputs": [],
"source": [
"# Update environment variables\n",
"\n",
"os.environ[\"NCP_CLOVASTUDIO_API_KEY\"] = getpass.getpass(\n",
" \"Enter NCP CLOVA Studio Service API Key: \"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "cebe27ae",
"metadata": {},
"outputs": [],
"source": [
"chat = ChatClovaX(\n",
" service_app=True, # True if you want to use your service app, default value is False.\n",
" # clovastudio_api_key=\"...\" # if you prefer to pass api key in directly instead of using env vars\n",
" model=\"HCX-003\",\n",
" # other params...\n",
")\n",
"ai_msg = chat.invoke(messages)"
]
},
{
"cell_type": "markdown",
"id": "d73e7140",
"metadata": {},
"source": [
"### AI Filter\n",
"\n",
"AI Filter detects inappropriate output such as profanity from the test app (or service app included) created in Playground and informs the user. See [here](https://guide.ncloud-docs.com/docs/en/clovastudio-playground01#AIFilter) for details."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "32bfbc93",
"metadata": {},
"outputs": [],
"source": [
"chat = ChatClovaX(\n",
" model=\"HCX-003\",\n",
" include_ai_filters=True, # True if you want to enable ai filter\n",
" # other params...\n",
")\n",
"\n",
"ai_msg = chat.invoke(messages)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7bd9e179",
"metadata": {},
"outputs": [],
"source": [
"print(ai_msg.response_metadata[\"ai_filter\"])"
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
@@ -414,13 +328,13 @@
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatNaver features and configurations head to the API reference: https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.naver.ChatClovaX.html"
"For detailed documentation of all ChatClovaX features and configurations head to the [API reference](https://guide.ncloud-docs.com/docs/clovastudio-dev-langchain)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
@@ -434,7 +348,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.3"
"version": "3.12.8"
}
},
"nbformat": 4,

File diff suppressed because one or more lines are too long

View File

@@ -408,7 +408,7 @@
"\n",
":::\n",
"\n",
"OpenAI supports a [Responses](https://platform.openai.com/docs/guides/responses-vs-chat-completions) API that is oriented toward building [agentic](/docs/concepts/agents/) applications. It includes a suite of [built-in tools](https://platform.openai.com/docs/guides/tools?api-mode=responses), including web and file search. It also supports management of [conversation state](https://platform.openai.com/docs/guides/conversation-state?api-mode=responses), allowing you to continue a conversational thread without explicitly passing in previous messages.\n",
"OpenAI supports a [Responses](https://platform.openai.com/docs/guides/responses-vs-chat-completions) API that is oriented toward building [agentic](/docs/concepts/agents/) applications. It includes a suite of [built-in tools](https://platform.openai.com/docs/guides/tools?api-mode=responses), including web and file search. It also supports management of [conversation state](https://platform.openai.com/docs/guides/conversation-state?api-mode=responses), allowing you to continue a conversational thread without explicitly passing in previous messages, as well as the output from [reasoning processes](https://platform.openai.com/docs/guides/reasoning?api-mode=responses).\n",
"\n",
"`ChatOpenAI` will route to the Responses API if one of these features is used. You can also specify `use_responses_api=True` when instantiating `ChatOpenAI`.\n",
"\n",
@@ -1056,6 +1056,77 @@
"print(second_response.text())"
]
},
{
"cell_type": "markdown",
"id": "67bf5bd2-0935-40a0-b1cd-c6662b681d4b",
"metadata": {},
"source": [
"### Reasoning output\n",
"\n",
"Some OpenAI models will generate separate text content illustrating their reasoning process. See OpenAI's [reasoning documentation](https://platform.openai.com/docs/guides/reasoning?api-mode=responses) for details.\n",
"\n",
"OpenAI can return a summary of the model's reasoning (although it doesn't expose the raw reasoning tokens). To configure `ChatOpenAI` to return this summary, specify the `reasoning` parameter:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "8d322f3a-0732-45ab-ac95-dfd4596e0d85",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'3^3 = 3 × 3 × 3 = 27.'"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_openai import ChatOpenAI\n",
"\n",
"reasoning = {\n",
" \"effort\": \"medium\", # 'low', 'medium', or 'high'\n",
" \"summary\": \"auto\", # 'detailed', 'auto', or None\n",
"}\n",
"\n",
"llm = ChatOpenAI(\n",
" model=\"o4-mini\",\n",
" use_responses_api=True,\n",
" model_kwargs={\"reasoning\": reasoning},\n",
")\n",
"response = llm.invoke(\"What is 3^3?\")\n",
"\n",
"# Output\n",
"response.text()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "d7dcc082-b7c8-41b7-a5e2-441b9679e41b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"**Calculating power of three**\n",
"\n",
"The user is asking for the result of 3 to the power of 3, which I know is 27. It's a straightforward question, so Ill keep my answer concise: 27. I could explain that this is the same as multiplying 3 by itself twice: 3 × 3 × 3 equals 27. However, since the user likely just needs the answer, Ill simply respond with 27.\n"
]
}
],
"source": [
"# Reasoning\n",
"reasoning = response.additional_kwargs[\"reasoning\"]\n",
"for block in reasoning[\"summary\"]:\n",
" print(block[\"text\"])"
]
},
{
"cell_type": "markdown",
"id": "57e27714",
@@ -1342,6 +1413,23 @@
"second_output_message = llm.invoke(history)"
]
},
{
"cell_type": "markdown",
"id": "90c18d18-b25c-4509-a639-bd652b92f518",
"metadata": {},
"source": [
"## Flex processing\n",
"\n",
"OpenAI offers a variety of [service tiers](https://platform.openai.com/docs/guides/flex-processing). The \"flex\" tier offers cheaper pricing for requests, with the trade-off that responses may take longer and resources might not always be available. This approach is best suited for non-critical tasks, including model testing, data enhancement, or jobs that can be run asynchronously.\n",
"\n",
"To use it, initialize the model with `service_tier=\"flex\"`:\n",
"```python\n",
"llm = ChatOpenAI(model=\"o4-mini\", service_tier=\"flex\")\n",
"```\n",
"\n",
"Note that this is a beta feature that is only available for a subset of models. See OpenAI [docs](https://platform.openai.com/docs/guides/flex-processing) for more detail."
]
},
{
"cell_type": "markdown",
"id": "a796d728-971b-408b-88d5-440015bbb941",
@@ -1349,7 +1437,7 @@
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatOpenAI features and configurations head to the API reference: https://python.langchain.com/api_reference/openai/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html"
"For detailed documentation of all ChatOpenAI features and configurations head to the [API reference](https://python.langchain.com/api_reference/openai/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html)."
]
}
],

View File

@@ -57,8 +57,8 @@
{
"metadata": {
"ExecuteTime": {
"end_time": "2024-11-08T19:44:51.390231Z",
"start_time": "2024-11-08T19:44:51.387945Z"
"end_time": "2025-04-21T18:23:30.746350Z",
"start_time": "2025-04-21T18:23:30.744744Z"
}
},
"cell_type": "code",
@@ -70,7 +70,7 @@
],
"id": "fa57fba89276da13",
"outputs": [],
"execution_count": 1
"execution_count": 2
},
{
"metadata": {},
@@ -82,12 +82,25 @@
"id": "87dc1742af7b053"
},
{
"metadata": {},
"metadata": {
"ExecuteTime": {
"end_time": "2025-04-21T18:23:33.359278Z",
"start_time": "2025-04-21T18:23:32.853207Z"
}
},
"cell_type": "code",
"source": "%pip install -qU langchain-predictionguard",
"id": "b816ae8553cba021",
"outputs": [],
"execution_count": null
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"execution_count": 3
},
{
"cell_type": "markdown",
@@ -103,13 +116,13 @@
"metadata": {
"id": "2xe8JEUwA7_y",
"ExecuteTime": {
"end_time": "2024-11-08T19:44:53.950653Z",
"start_time": "2024-11-08T19:44:53.488694Z"
"end_time": "2025-04-21T18:23:39.812675Z",
"start_time": "2025-04-21T18:23:39.666881Z"
}
},
"source": "from langchain_predictionguard import ChatPredictionGuard",
"outputs": [],
"execution_count": 2
"execution_count": 4
},
{
"cell_type": "code",
@@ -117,8 +130,8 @@
"metadata": {
"id": "Ua7Mw1N4HcER",
"ExecuteTime": {
"end_time": "2024-11-08T19:44:54.890695Z",
"start_time": "2024-11-08T19:44:54.502846Z"
"end_time": "2025-04-21T18:23:41.590296Z",
"start_time": "2025-04-21T18:23:41.253237Z"
}
},
"source": [
@@ -126,7 +139,7 @@
"chat = ChatPredictionGuard(model=\"Hermes-3-Llama-3.1-8B\")"
],
"outputs": [],
"execution_count": 3
"execution_count": 5
},
{
"metadata": {},
@@ -221,6 +234,132 @@
],
"execution_count": 6
},
{
"metadata": {},
"cell_type": "markdown",
"source": [
"## Tool Calling\n",
"\n",
"Prediction Guard has a tool calling API that lets you describe tools and their arguments, which enables the model return a JSON object with a tool to call and the inputs to that tool. Tool-calling is very useful for building tool-using chains and agents, and for getting structured outputs from models more generally.\n"
],
"id": "1227780d6e6728ba"
},
{
"metadata": {},
"cell_type": "markdown",
"source": [
"### ChatPredictionGuard.bind_tools()\n",
"\n",
"Using `ChatPredictionGuard.bind_tools()`, you can pass in Pydantic classes, dict schemas, and Langchain tools as tools to the model, which are then reformatted to allow for use by the model."
],
"id": "23446aa52e01d1ba"
},
{
"metadata": {},
"cell_type": "code",
"outputs": [],
"execution_count": null,
"source": [
"from pydantic import BaseModel, Field\n",
"\n",
"\n",
"class GetWeather(BaseModel):\n",
" \"\"\"Get the current weather in a given location\"\"\"\n",
"\n",
" location: str = Field(..., description=\"The city and state, e.g. San Francisco, CA\")\n",
"\n",
"\n",
"class GetPopulation(BaseModel):\n",
" \"\"\"Get the current population in a given location\"\"\"\n",
"\n",
" location: str = Field(..., description=\"The city and state, e.g. San Francisco, CA\")\n",
"\n",
"\n",
"llm_with_tools = chat.bind_tools(\n",
" [GetWeather, GetPopulation]\n",
" # strict = True # enforce tool args schema is respected\n",
")"
],
"id": "135efb0bfc5916c1"
},
{
"metadata": {
"ExecuteTime": {
"end_time": "2025-04-21T18:42:41.834079Z",
"start_time": "2025-04-21T18:42:40.289095Z"
}
},
"cell_type": "code",
"source": [
"ai_msg = llm_with_tools.invoke(\n",
" \"Which city is hotter today and which is bigger: LA or NY?\"\n",
")\n",
"ai_msg"
],
"id": "8136f19a8836cd58",
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'chatcmpl-tool-b1204a3c70b44cd8802579df48df0c8c', 'type': 'function', 'index': 0, 'function': {'name': 'GetWeather', 'arguments': '{\"location\": \"Los Angeles, CA\"}'}}, {'id': 'chatcmpl-tool-e299116c05bf4ce498cd6042928ae080', 'type': 'function', 'index': 0, 'function': {'name': 'GetWeather', 'arguments': '{\"location\": \"New York, NY\"}'}}, {'id': 'chatcmpl-tool-19502a60f30348669ffbac00ff503388', 'type': 'function', 'index': 0, 'function': {'name': 'GetPopulation', 'arguments': '{\"location\": \"Los Angeles, CA\"}'}}, {'id': 'chatcmpl-tool-4b8d56ef067f447795d9146a56e43510', 'type': 'function', 'index': 0, 'function': {'name': 'GetPopulation', 'arguments': '{\"location\": \"New York, NY\"}'}}]}, response_metadata={}, id='run-4630cfa9-4e95-42dd-8e4a-45db78180a10-0', tool_calls=[{'name': 'GetWeather', 'args': {'location': 'Los Angeles, CA'}, 'id': 'chatcmpl-tool-b1204a3c70b44cd8802579df48df0c8c', 'type': 'tool_call'}, {'name': 'GetWeather', 'args': {'location': 'New York, NY'}, 'id': 'chatcmpl-tool-e299116c05bf4ce498cd6042928ae080', 'type': 'tool_call'}, {'name': 'GetPopulation', 'args': {'location': 'Los Angeles, CA'}, 'id': 'chatcmpl-tool-19502a60f30348669ffbac00ff503388', 'type': 'tool_call'}, {'name': 'GetPopulation', 'args': {'location': 'New York, NY'}, 'id': 'chatcmpl-tool-4b8d56ef067f447795d9146a56e43510', 'type': 'tool_call'}])"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"execution_count": 7
},
{
"metadata": {},
"cell_type": "markdown",
"source": [
"### AIMessage.tool_calls\n",
"\n",
"Notice that the AIMessage has a tool_calls attribute. This contains in a standardized ToolCall format that is model-provider agnostic."
],
"id": "84f405c45a35abe5"
},
{
"metadata": {
"ExecuteTime": {
"end_time": "2025-04-21T18:43:00.429453Z",
"start_time": "2025-04-21T18:43:00.426399Z"
}
},
"cell_type": "code",
"source": "ai_msg.tool_calls",
"id": "bdcee85475019719",
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'GetWeather',\n",
" 'args': {'location': 'Los Angeles, CA'},\n",
" 'id': 'chatcmpl-tool-b1204a3c70b44cd8802579df48df0c8c',\n",
" 'type': 'tool_call'},\n",
" {'name': 'GetWeather',\n",
" 'args': {'location': 'New York, NY'},\n",
" 'id': 'chatcmpl-tool-e299116c05bf4ce498cd6042928ae080',\n",
" 'type': 'tool_call'},\n",
" {'name': 'GetPopulation',\n",
" 'args': {'location': 'Los Angeles, CA'},\n",
" 'id': 'chatcmpl-tool-19502a60f30348669ffbac00ff503388',\n",
" 'type': 'tool_call'},\n",
" {'name': 'GetPopulation',\n",
" 'args': {'location': 'New York, NY'},\n",
" 'id': 'chatcmpl-tool-4b8d56ef067f447795d9146a56e43510',\n",
" 'type': 'tool_call'}]"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"execution_count": 8
},
{
"cell_type": "markdown",
"id": "ff1b51a8",

View File

@@ -14,7 +14,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"DataStax [Astra DB](https://docs.datastax.com/en/astra/home/astra.html) is a serverless vector-capable database built on Cassandra and made conveniently available through an easy-to-use JSON API."
"> [DataStax Astra DB](https://docs.datastax.com/en/astra-db-serverless/index.html) is a serverless \n",
"> AI-ready database built on `Apache Cassandra®` and made conveniently available \n",
"> through an easy-to-use JSON API."
]
},
{
@@ -34,33 +36,46 @@
"id": "juAmbgoWD17u"
},
"source": [
"The AstraDB Document Loader returns a list of Langchain Documents from an AstraDB database.\n",
"The Astra DB Document Loader returns a list of Langchain `Document` objects read from an Astra DB collection.\n",
"\n",
"The Loader takes the following parameters:\n",
"The loader takes the following parameters:\n",
"\n",
"* `api_endpoint`: AstraDB API endpoint. Looks like `https://01234567-89ab-cdef-0123-456789abcdef-us-east1.apps.astra.datastax.com`\n",
"* `token`: AstraDB token. Looks like `AstraCS:6gBhNmsk135....`\n",
"* `api_endpoint`: Astra DB API endpoint. Looks like `https://01234567-89ab-cdef-0123-456789abcdef-us-east1.apps.astra.datastax.com`\n",
"* `token`: Astra DB token. Looks like `AstraCS:aBcD0123...`\n",
"* `collection_name` : AstraDB collection name\n",
"* `namespace`: (Optional) AstraDB namespace\n",
"* `namespace`: (Optional) AstraDB namespace (called _keyspace_ in Astra DB)\n",
"* `filter_criteria`: (Optional) Filter used in the find query\n",
"* `projection`: (Optional) Projection used in the find query\n",
"* `find_options`: (Optional) Options used in the find query\n",
"* `nb_prefetched`: (Optional) Number of documents pre-fetched by the loader\n",
"* `limit`: (Optional) Maximum number of documents to retrieve\n",
"* `extraction_function`: (Optional) A function to convert the AstraDB document to the LangChain `page_content` string. Defaults to `json.dumps`\n",
"\n",
"The following metadata is set to the LangChain Documents metadata output:\n",
"The loader sets the following metadata for the documents it reads:\n",
"\n",
"```python\n",
"{\n",
" metadata : {\n",
" \"namespace\": \"...\", \n",
" \"api_endpoint\": \"...\", \n",
" \"collection\": \"...\"\n",
" }\n",
"metadata={\n",
" \"namespace\": \"...\", \n",
" \"api_endpoint\": \"...\", \n",
" \"collection\": \"...\"\n",
"}\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"!pip install \"langchain-astradb>=0.6,<0.7\""
]
},
{
"attachments": {},
"cell_type": "markdown",
@@ -71,24 +86,43 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import AstraDBLoader"
"from langchain_astradb import AstraDBLoader"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"[**API Reference:** `AstraDBLoader`](https://python.langchain.com/api_reference/astradb/document_loaders/langchain_astradb.document_loaders.AstraDBLoader.html#langchain_astradb.document_loaders.AstraDBLoader)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 3,
"metadata": {
"ExecuteTime": {
"end_time": "2024-01-08T12:41:22.643335Z",
"start_time": "2024-01-08T12:40:57.759116Z"
},
"collapsed": false
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"outputs": [
{
"name": "stdin",
"output_type": "stream",
"text": [
"ASTRA_DB_API_ENDPOINT = https://01234567-89ab-cdef-0123-456789abcdef-us-east1.apps.astra.datastax.com\n",
"ASTRA_DB_APPLICATION_TOKEN = ········\n"
]
}
],
"source": [
"from getpass import getpass\n",
"\n",
@@ -98,7 +132,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 4,
"metadata": {
"ExecuteTime": {
"end_time": "2024-01-08T12:42:25.395162Z",
@@ -112,19 +146,22 @@
" token=ASTRA_DB_APPLICATION_TOKEN,\n",
" collection_name=\"movie_reviews\",\n",
" projection={\"title\": 1, \"reviewtext\": 1},\n",
" find_options={\"limit\": 10},\n",
" limit=10,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 5,
"metadata": {
"ExecuteTime": {
"end_time": "2024-01-08T12:42:30.236489Z",
"start_time": "2024-01-08T12:42:29.612133Z"
},
"collapsed": false
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
@@ -133,7 +170,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 6,
"metadata": {
"ExecuteTime": {
"end_time": "2024-01-08T12:42:31.369394Z",
@@ -144,10 +181,10 @@
{
"data": {
"text/plain": [
"Document(page_content='{\"_id\": \"659bdffa16cbc4586b11a423\", \"title\": \"Dangerous Men\", \"reviewtext\": \"\\\\\"Dangerous Men,\\\\\" the picture\\'s production notes inform, took 26 years to reach the big screen. After having seen it, I wonder: What was the rush?\"}', metadata={'namespace': 'default_keyspace', 'api_endpoint': 'https://01234567-89ab-cdef-0123-456789abcdef-us-east1.apps.astra.datastax.com', 'collection': 'movie_reviews'})"
"Document(metadata={'namespace': 'default_keyspace', 'api_endpoint': 'https://01234567-89ab-cdef-0123-456789abcdef-us-east1.apps.astra.datastax.com', 'collection': 'movie_reviews'}, page_content='{\"_id\": \"659bdffa16cbc4586b11a423\", \"title\": \"Dangerous Men\", \"reviewtext\": \"\\\\\"Dangerous Men,\\\\\" the picture\\'s production notes inform, took 26 years to reach the big screen. After having seen it, I wonder: What was the rush?\"}')"
]
},
"execution_count": 8,
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
@@ -179,7 +216,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.18"
"version": "3.12.8"
}
},
"nbformat": 4,

View File

@@ -49,7 +49,14 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import BrowserbaseLoader"
"import os\n",
"\n",
"from langchain_community.document_loaders import BrowserbaseLoader\n",
"\n",
"load_dotenv()\n",
"\n",
"BROWSERBASE_API_KEY = os.getenv(\"BROWSERBASE_API_KEY\")\n",
"BROWSERBASE_PROJECT_ID = os.getenv(\"BROWSERBASE_PROJECT_ID\")"
]
},
{
@@ -59,6 +66,8 @@
"outputs": [],
"source": [
"loader = BrowserbaseLoader(\n",
" api_key=BROWSERBASE_API_KEY,\n",
" project_id=BROWSERBASE_PROJECT_ID,\n",
" urls=[\n",
" \"https://example.com\",\n",
" ],\n",
@@ -78,52 +87,11 @@
"\n",
"- `urls` Required. A list of URLs to fetch.\n",
"- `text_content` Retrieve only text content. Default is `False`.\n",
"- `api_key` Optional. Browserbase API key. Default is `BROWSERBASE_API_KEY` env variable.\n",
"- `project_id` Optional. Browserbase Project ID. Default is `BROWSERBASE_PROJECT_ID` env variable.\n",
"- `api_key` Browserbase API key. Default is `BROWSERBASE_API_KEY` env variable.\n",
"- `project_id` Browserbase Project ID. Default is `BROWSERBASE_PROJECT_ID` env variable.\n",
"- `session_id` Optional. Provide an existing Session ID.\n",
"- `proxy` Optional. Enable/Disable Proxies."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Loading images\n",
"\n",
"You can also load screenshots of webpages (as bytes) for multi-modal models.\n",
"\n",
"Full example using GPT-4V:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from browserbase import Browserbase\n",
"from browserbase.helpers.gpt4 import GPT4VImage, GPT4VImageDetail\n",
"from langchain_core.messages import HumanMessage\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"chat = ChatOpenAI(model=\"gpt-4-vision-preview\", max_tokens=256)\n",
"browser = Browserbase()\n",
"\n",
"screenshot = browser.screenshot(\"https://browserbase.com\")\n",
"\n",
"result = chat.invoke(\n",
" [\n",
" HumanMessage(\n",
" content=[\n",
" {\"type\": \"text\", \"text\": \"What color is the logo?\"},\n",
" GPT4VImage(screenshot, GPT4VImageDetail.auto),\n",
" ]\n",
" )\n",
" ]\n",
")\n",
"\n",
"print(result.content)"
]
}
],
"metadata": {

View File

@@ -36,10 +36,7 @@
"pip install oracledb"
],
"metadata": {
"collapsed": false,
"pycharm": {
"is_executing": true
}
"collapsed": false
}
},
{
@@ -51,10 +48,7 @@
"from settings import s"
],
"metadata": {
"collapsed": false,
"pycharm": {
"is_executing": true
}
"collapsed": false
}
},
{
@@ -97,16 +91,14 @@
"doc_2 = doc_loader_2.load()"
],
"metadata": {
"collapsed": false,
"pycharm": {
"is_executing": true
}
"collapsed": false
}
},
{
"cell_type": "markdown",
"source": [
"With TLS authentication, wallet_location and wallet_password are not required."
"With TLS authentication, wallet_location and wallet_password are not required.\n",
"Bind variable option is provided by argument \"parameters\"."
],
"metadata": {
"collapsed": false
@@ -117,6 +109,8 @@
"execution_count": null,
"outputs": [],
"source": [
"SQL_QUERY = \"select channel_id, channel_desc from sh.channels where channel_desc = :1 fetch first 5 rows only\"\n",
"\n",
"doc_loader_3 = OracleAutonomousDatabaseLoader(\n",
" query=SQL_QUERY,\n",
" user=s.USERNAME,\n",
@@ -124,6 +118,7 @@
" schema=s.SCHEMA,\n",
" config_dir=s.CONFIG_DIR,\n",
" tns_name=s.TNS_NAME,\n",
" parameters=[\"Direct Sales\"],\n",
")\n",
"doc_3 = doc_loader_3.load()\n",
"\n",
@@ -133,6 +128,7 @@
" password=s.PASSWORD,\n",
" schema=s.SCHEMA,\n",
" connection_string=s.CONNECTION_STRING,\n",
" parameters=[\"Direct Sales\"],\n",
")\n",
"doc_4 = doc_loader_4.load()"
],

@@ -0,0 +1,187 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"sidebar_label: SingleStore\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# SingleStoreLoader\n",
"\n",
"The `SingleStoreLoader` allows you to load documents directly from a SingleStore database table. It is part of the `langchain-singlestore` integration package.\n",
"\n",
"## Overview\n",
"\n",
"### Integration Details\n",
"\n",
"| Class | Package | JS Support |\n",
"| :--- | :--- | :---: |\n",
"| `SingleStoreLoader` | `langchain_singlestore` | ❌ |\n",
"\n",
"### Features\n",
"- Load documents lazily to handle large datasets efficiently.\n",
"- Supports native asynchronous operations.\n",
"- Easily configurable to work with different database schemas.\n",
"\n",
"## Setup\n",
"\n",
"To use the `SingleStoreLoader`, you need to install the `langchain-singlestore` package. Follow the installation instructions below."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"Install **langchain_singlestore**."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain_singlestore"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialization\n",
"\n",
"To initialize `SingleStoreLoader`, you need to provide connection parameters for the SingleStore database and specify the table and fields to load documents from.\n",
"\n",
"### Required Parameters:\n",
"- **host** (`str`): Hostname, IP address, or URL for the database.\n",
"- **table_name** (`str`): Name of the table to query. Defaults to `embeddings`.\n",
"- **content_field** (`str`): Field containing document content. Defaults to `content`.\n",
"- **metadata_field** (`str`): Field containing document metadata. Defaults to `metadata`.\n",
"\n",
"### Optional Parameters:\n",
"- **id_field** (`str`): Field containing document IDs. Defaults to `id`.\n",
"\n",
"### Connection Pool Parameters:\n",
"- **pool_size** (`int`): Number of active connections in the pool. Defaults to `5`.\n",
"- **max_overflow** (`int`): Maximum connections beyond `pool_size`. Defaults to `10`.\n",
"- **timeout** (`float`): Connection timeout in seconds. Defaults to `30`.\n",
"\n",
"### Additional Options:\n",
"- **pure_python** (`bool`): Enables pure Python mode.\n",
"- **local_infile** (`bool`): Allows local file uploads.\n",
"- **charset** (`str`): Character set for string values.\n",
"- **ssl_key**, **ssl_cert**, **ssl_ca** (`str`): Paths to SSL files.\n",
"- **ssl_disabled** (`bool`): Disables SSL.\n",
"- **ssl_verify_cert** (`bool`): Verifies server's certificate.\n",
"- **ssl_verify_identity** (`bool`): Verifies server's identity.\n",
"- **autocommit** (`bool`): Enables autocommits.\n",
"- **results_type** (`str`): Structure of query results (e.g., `tuples`, `dicts`)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_singlestore.document_loaders import SingleStoreLoader\n",
"\n",
"loader = SingleStoreLoader(\n",
" host=\"127.0.0.1:3306/db\",\n",
" table_name=\"documents\",\n",
" content_field=\"content\",\n",
" metadata_field=\"metadata\",\n",
" id_field=\"id\",\n",
")"
]
},
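{
"cell_type": "markdown",
"metadata": {},
"source": [
"A sketch combining the connection pool and SSL options documented above; every value here is an illustrative placeholder:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Placeholder values for the optional pool/SSL parameters listed above.\n",
"loader = SingleStoreLoader(\n",
"    host=\"user:pass@127.0.0.1:3306/db\",\n",
"    table_name=\"documents\",\n",
"    pool_size=10,  # keep up to 10 pooled connections\n",
"    max_overflow=20,  # allow up to 20 extra connections under load\n",
"    timeout=60.0,  # give up on connection attempts after 60 seconds\n",
"    ssl_ca=\"/path/to/ca.pem\",  # placeholder path to a CA bundle\n",
"    ssl_verify_cert=True,\n",
")"
]
},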
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"docs = loader.load()\n",
"docs[0]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(docs[0].metadata)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Lazy Load"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"page = []\n",
"for doc in loader.lazy_load():\n",
" page.append(doc)\n",
" if len(page) >= 10:\n",
" # do some paged operation, e.g.\n",
" # index.upsert(page)\n",
"\n",
" page = []"
]
},
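{
"cell_type": "markdown",
"metadata": {},
"source": [
"The loader also advertises async support; `alazy_load` and `aload` come from the base document loader interface in `langchain-core`. A minimal sketch (ipykernel permits top-level `async for`):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Async counterpart of lazy_load(); assumes the base loader interface's\n",
"# default async implementation.\n",
"async for doc in loader.alazy_load():\n",
"    print(doc.metadata)"
]
},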
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all SingleStore Document Loader features and configurations head to the github page: [https://github.com/singlestore-labs/langchain-singlestore/](https://github.com/singlestore-labs/langchain-singlestore/)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

@@ -1214,9 +1214,7 @@
"source": [
"### Connecting to the DB\n",
"\n",
"The Cassandra caches shown in this page can be used with Cassandra as well as other derived databases, such as Astra DB, which use the CQL (Cassandra Query Language) protocol.\n",
"\n",
"> DataStax [Astra DB](https://docs.datastax.com/en/astra-serverless/docs/vector-search/quickstart.html) is a managed serverless database built on Cassandra, offering the same interface and strengths.\n",
"The Cassandra caches shown in this page can be used with Cassandra as well as other derived databases that can use the CQL (Cassandra Query Language) protocol, such as DataStax Astra DB.\n",
"\n",
"Depending on whether you connect to a Cassandra cluster or to Astra DB through CQL, you will provide different parameters when instantiating the cache (through initialization of a CassIO connection)."
]
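},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal connection sketch, assuming CassIO's `init` helper with its cluster and Astra parameters; the contact point, database ID, and token below are placeholders:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import cassio\n",
"\n",
"# Option 1: a Cassandra cluster (placeholder contact point and keyspace)\n",
"cassio.init(contact_points=[\"127.0.0.1\"], keyspace=\"demo_keyspace\")\n",
"\n",
"# Option 2: Astra DB through CQL (placeholder database ID and token)\n",
"# cassio.init(\n",
"#     database_id=\"01234567-89ab-cdef-0123-456789abcdef\",\n",
"#     token=\"AstraCS:...\",\n",
"#     keyspace=\"demo_keyspace\",\n",
"# )"
]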
@@ -1517,6 +1515,12 @@
"source": [
"You can easily use [Astra DB](https://docs.datastax.com/en/astra/home/astra.html) as an LLM cache, with either the \"exact\" or the \"semantic-based\" cache.\n",
"\n",
"> [DataStax Astra DB](https://docs.datastax.com/en/astra-db-serverless/index.html) is a serverless \n",
"> AI-ready database built on `Apache Cassandra®` and made conveniently available \n",
"> through an easy-to-use JSON API.\n",
"\n",
"_This approach differs from the `Cassandra` caches mentioned above in that it natively uses the HTTP Data API. The Data API is specific to Astra DB. Keep in mind that the storage format will also differ._\n",
"\n",
"Make sure you have a running database (it must be a Vector-enabled database to use the Semantic cache) and get the required credentials on your Astra dashboard:\n",
"\n",
"- the API Endpoint looks like `https://01234567-89ab-cdef-0123-456789abcdef-us-east1.apps.astra.datastax.com`\n",
@@ -2525,7 +2529,17 @@
"source": [
"## `SingleStoreDB` semantic cache\n",
"\n",
"You can use [SingleStoreDB](https://python.langchain.com/docs/integrations/vectorstores/singlestoredb/) as a semantic cache to cache prompts and responses."
"You can use [SingleStore](https://python.langchain.com/docs/integrations/vectorstores/singlestore/) as a semantic cache to cache prompts and responses."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "596e15e8",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-singlestore"
]
},
{
@@ -2535,11 +2549,11 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.cache import SingleStoreDBSemanticCache\n",
"from langchain_openai import OpenAIEmbeddings\n",
"from langchain_singlestore.cache import SingleStoreSemanticCache\n",
"\n",
"set_llm_cache(\n",
" SingleStoreDBSemanticCache(\n",
" SingleStoreSemanticCache(\n",
" embedding=OpenAIEmbeddings(),\n",
" host=\"root:pass@localhost:3306/db\",\n",
" )\n",
@@ -3102,8 +3116,8 @@
"|------------|---------|\n",
"| langchain_astradb.cache | [AstraDBCache](https://python.langchain.com/api_reference/astradb/cache/langchain_astradb.cache.AstraDBCache.html) |\n",
"| langchain_astradb.cache | [AstraDBSemanticCache](https://python.langchain.com/api_reference/astradb/cache/langchain_astradb.cache.AstraDBSemanticCache.html) |\n",
"| langchain_community.cache | [AstraDBCache](https://python.langchain.com/api_reference/community/cache/langchain_community.cache.AstraDBCache.html) |\n",
"| langchain_community.cache | [AstraDBSemanticCache](https://python.langchain.com/api_reference/community/cache/langchain_community.cache.AstraDBSemanticCache.html) |\n",
"| langchain_community.cache | [AstraDBCache](https://python.langchain.com/api_reference/community/cache/langchain_community.cache.AstraDBCache.html) (deprecated since `langchain-community==0.0.28`) |\n",
"| langchain_community.cache | [AstraDBSemanticCache](https://python.langchain.com/api_reference/community/cache/langchain_community.cache.AstraDBSemanticCache.html) (deprecated since `langchain-community==0.0.28`) |\n",
"| langchain_community.cache | [AzureCosmosDBSemanticCache](https://python.langchain.com/api_reference/community/cache/langchain_community.cache.AzureCosmosDBSemanticCache.html) |\n",
"| langchain_community.cache | [CassandraCache](https://python.langchain.com/api_reference/community/cache/langchain_community.cache.CassandraCache.html) |\n",
"| langchain_community.cache | [CassandraSemanticCache](https://python.langchain.com/api_reference/community/cache/langchain_community.cache.CassandraSemanticCache.html) |\n",
@@ -3150,7 +3164,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.12.0"
}
},
"nbformat": 4,
