Compare commits

...

87 Commits

Author SHA1 Message Date
Chester Curme
3e262334ce take usage from end of stream 2025-05-08 21:06:08 -04:00
Chester Curme
f5f6e869cd revert 2025-05-08 21:05:49 -04:00
ccurme
e9e597be8e docs: update sort order in integrations table (#31171) 2025-05-08 20:44:21 +00:00
ccurme
0ba8697286 infra: add to vercel overrides (#31170)
Incompatibility between langchain-redis and langchain-ai21 `tenacity`
dep
2025-05-08 20:36:43 +00:00
ccurme
9aac8923a3 docs: add web search to anthropic docs (#31169) 2025-05-08 16:20:11 -04:00
Victor Hiairrassary
efc52e18e9 docs: fix typing in how_to/custom_tools.ipynb (#31164)
Fix typing in how_to/custom_tools.ipynb
2025-05-08 13:51:42 -04:00
ccurme
2d202f9762 anthropic[patch]: split test into two (#31167) 2025-05-08 09:23:36 -04:00
ccurme
d4555ac924 anthropic: release 0.3.13 (#31162) 2025-05-08 03:13:15 +00:00
ccurme
e34f9fd6f7 anthropic: update streaming usage metadata (#31158)
Anthropic updated how they report token counts during streaming today.
See changes to `MessageDeltaUsage` in [this
commit](2da00f26c5 (diff-1a396eba0cd9cd8952dcdb58049d3b13f6b7768ead1411888d66e28211f7bfc5)).

It's clean and simple to grab these fields from the final
`message_delta` event. However, some of them are typed as Optional, and
language
[here](e42451ab3f/src/anthropic/lib/streaming/_messages.py (L462))
suggests they may not always be present. So here we take the required
field from the `message_delta` event as we were doing previously, and
ignore the rest.
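A minimal sketch of that approach (illustrative names only, not the langchain-anthropic source): rely solely on the required `output_tokens` field of the final `message_delta` event and ignore the Optional ones.

```python
# Hedged sketch: read usage from the last message_delta event, using only
# the required `output_tokens` field; Optional fields like `input_tokens`
# may be absent or null mid-stream and are ignored.
def extract_output_tokens(events):
    """Return the output token count reported by the last message_delta."""
    usage = None
    for event in events:
        if event.get("type") == "message_delta":
            usage = event.get("usage") or {}
    if not usage:
        return 0
    return usage["output_tokens"]

stream = [
    {"type": "message_start"},
    {"type": "content_block_delta"},
    {"type": "message_delta", "usage": {"output_tokens": 12, "input_tokens": None}},
]
print(extract_output_tokens(stream))  # 12
```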
2025-05-07 23:09:56 -04:00
Tanushree
6c3901f9f9 Change banner (#31159)
Changing the banner to lead to the careers page instead of Interrupt

---------

Co-authored-by: Brace Sproul <braceasproul@gmail.com>
2025-05-07 18:07:10 -07:00
ccurme
682f338c17 anthropic[patch]: support web search (#31157) 2025-05-07 18:04:06 -04:00
ccurme
d7e016c5fc huggingface: release 0.2 (#31153) 2025-05-07 15:33:07 -04:00
ccurme
4b11cbeb47 huggingface[patch]: update lockfile (#31152) 2025-05-07 15:17:33 -04:00
ccurme
b5b90b5929 anthropic[patch]: be robust to null fields when translating usage metadata (#31151) 2025-05-07 18:30:21 +00:00
ccurme
f70b263ff3 core: release 0.3.59 (#31150) 2025-05-07 17:36:59 +00:00
ccurme
bb69d4c42e docs: specify js support for tavily (#31149) 2025-05-07 11:30:04 -04:00
zhurou603
1df3ee91e7 partners: (langchain-openai) total_tokens should not add 'Nonetype' t… (#31146)

# PR Description

## Description
Fixed an issue in `langchain-openai` where `total_tokens` was
incorrectly adding `None` to an integer, causing a TypeError. The fix
ensures proper type checking before adding token counts.

## Issue
Fixes the TypeError traceback shown in the image where `'NoneType'`
cannot be added to an integer.

## Dependencies
None

## Twitter handle
None

![image](https://github.com/user-attachments/assets/9683a795-a003-455a-ada9-fe277245e2b2)

Co-authored-by: qiulijie <qiulijie@yuaiweiwu.com>
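The guard described above can be sketched as follows (a hedged illustration of the idea, not the PR's diff):

```python
# Hedged sketch of the fix: treat None as 0 before summing token counts,
# so the addition can never raise
# "unsupported operand type(s) for +: 'int' and 'NoneType'".
def add_token_counts(left, right):
    return (left or 0) + (right or 0)

print(add_token_counts(10, None))  # 10
print(add_token_counts(None, 5))   # 5
print(add_token_counts(3, 4))      # 7
```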
2025-05-07 11:09:50 -04:00
Collier King
19041dcc95 docs: update langchain-cloudflare repo/path on packages.yaml (#31138)
**Library Repo Path Update**: "langchain-cloudflare"

We recently changed our `langchain-cloudflare` repo to allow for future
libraries.
Created a `libs` folder to hold `langchain-cloudflare` python package.


https://github.com/cloudflare/langchain-cloudflare/tree/main/libs/langchain-cloudflare
 
On `langchain`, updating `packages.yaml` to point to new
`libs/langchain-cloudflare` library folder.
2025-05-07 11:01:25 -04:00
Simonas Jakubonis
3cba22d8d7 docs: Pinecone Rerank example notebook (#31147)
Created an example notebook of how to use Pinecone Reranking service
cc @jamescalam
2025-05-07 11:00:42 -04:00
Jacob Lee
66d1ed6099 fix(core): Permit OpenAI style blocks to be passed into convert_to_openai_messages (#31140)
Should effectively be a noop, just shouldn't throw

CC @madams0013

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-05-07 10:57:37 -04:00
Tushar Nitave
a15034d8d1 docs: Fixed grammar for chat prompt composition (#31148)
This PR fixes a grammar issue in the sentence:

"A chat prompt is made up a of a list of messages..." → "A chat prompt
is made up of a list of messages. "
2025-05-07 10:51:34 -04:00
Michael Li
57c81dc3e3 docs: replace initialize_agent with create_react_agent in graphql.ipynb (#31133)
Thank you for contributing to LangChain!

- [x] **PR title**: "package: description"
- Where "package" is whichever of langchain, core, etc. is being
modified. Use "docs: ..." for purely docs changes, "infra: ..." for CI
changes.
  - Example: "core: add foobar LLM"


- [x] **PR message**: ***Delete this entire checklist*** and replace
with
    - **Description:** a description of the change
    - **Issue:** the issue # it fixes, if applicable
    - **Dependencies:** any dependencies required for this change
- **Twitter handle:** if your PR gets announced, and you'd like a
mention, we'll gladly shout you out!


- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional
ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.

If no one reviews your PR within a few days, please @-mention one of
baskaryan, eyurtsev, ccurme, vbarda, hwchase17.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-05-06 16:09:52 -04:00
pulvedu
52732a4d13 Update docs (#31135)
Updating tavily docs to indicate JS support

---------

Co-authored-by: pulvedu <dustin@tavily.com>
2025-05-06 16:09:39 -04:00
Simonas Jakubonis
5dde64583e docs: Updated pinecone.mdx in the integration providers (#31123)
Updated pinecone.mdx in the integration providers

Added short description and examples for SparseVector store and
SparseEmbeddings
2025-05-06 12:58:32 -04:00
Tomaz Bratanic
6b6750967a Docs: Change to async llm graph transformer (#31126) 2025-05-06 12:53:57 -04:00
ccurme
703fce7972 docs: document that Anthropic supports boolean parallel_tool_calls param in guide (#31122) 2025-05-05 20:25:27 -04:00
唐小鸭
50fa524a6d partners: (langchain-deepseek) fix deepseek-r1 always returns an empty reasoning_content when reasoning (#31065)
## Description
deepseek-r1 streams an empty string in `reasoning_content` on the first
chunk while thinking, and sets `reasoning_content` to None once thinking
is over, which signals the switch to normal output.

Therefore, the code should check whether `reasoning_content` is None,
not merely whether the field exists.

## Demo
deepseek-r1 reasoning output: 

```
{'delta': {'content': None, 'function_call': None, 'refusal': None, 'role': 'assistant', 'tool_calls': None, 'reasoning_content': ''}, 'finish_reason': None, 'index': 0, 'logprobs': None}
{'delta': {'content': None, 'function_call': None, 'refusal': None, 'role': None, 'tool_calls': None, 'reasoning_content': '好的'}, 'finish_reason': None, 'index': 0, 'logprobs': None}
{'delta': {'content': None, 'function_call': None, 'refusal': None, 'role': None, 'tool_calls': None, 'reasoning_content': ','}, 'finish_reason': None, 'index': 0, 'logprobs': None}
{'delta': {'content': None, 'function_call': None, 'refusal': None, 'role': None, 'tool_calls': None, 'reasoning_content': '用户'}, 'finish_reason': None, 'index': 0, 'logprobs': None}
...
```

deepseek-r1 first normal output
```
...
{'delta': {'content': ' main', 'function_call': None, 'refusal': None, 'role': None, 'tool_calls': None, 'reasoning_content': None}, 'finish_reason': None, 'index': 0, 'logprobs': None}
{'delta': {'content': '\n\nimport', 'function_call': None, 'refusal': None, 'role': None, 'tool_calls': None, 'reasoning_content': None}, 'finish_reason': None, 'index': 0, 'logprobs': None}
...
```

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
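The distinction can be sketched as follows (a hedged illustration, not the langchain-deepseek code): the `reasoning_content` key is present on every delta, so a membership test cannot detect the switch; only a None value marks the transition to normal output.

```python
# Hedged sketch: a delta is a reasoning chunk iff `reasoning_content` is
# non-None. An empty string still counts as reasoning (the first chunk).
def is_reasoning(delta: dict) -> bool:
    return delta.get("reasoning_content") is not None

print(is_reasoning({"content": None, "reasoning_content": ""}))       # True
print(is_reasoning({"content": None, "reasoning_content": "好的"}))    # True
print(is_reasoning({"content": " main", "reasoning_content": None}))  # False
```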
2025-05-05 22:31:58 +00:00
Michael Li
c0b69808a8 docs: replace initialize_agent with create_react_agent in openweathermap.ipynb (#31115)

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-05-05 18:08:09 -04:00
ccurme
fce8caca16 docs: minor fix in Pinecone (#31110) 2025-05-03 20:16:27 +00:00
Simonas Jakubonis
b8d0403671 docs: updated pinecone example notebook (#30993)
- **Description:** Update Pinecone notebook example
  - **Issue:** N/A
  - **Dependencies:** N/A
  - **Twitter handle:** N/A


- [x] **Add tests and docs**: Just notebook updates


If no one reviews your PR within a few days, please @-mention one of
baskaryan, eyurtsev, ccurme, vbarda, hwchase17.
2025-05-03 16:02:21 -04:00
Michael Li
1204fb8010 docs: replace initialize_agent with create_react_agent in yahoo_finance_news.ipynb (#31108)

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-05-03 15:45:52 -04:00
Adeel Ehsan
1e00116ae7 docs: add docs for vectara tools (#30958)
- [ ] **Docs for Vectara Tools**: "langchain-vectara"
2025-05-03 15:39:16 -04:00
Haris Colic
3e25a93136 Docs: replace initialize agent with create react agent for google tools (#31043)
- **Description:** The deprecated `initialize_agent` functionality is
replaced with `create_react_agent` for the Google tools. Also noticed a
potential issue with the non-existent "google-drive-search" tool, which
was used in the old `google-drive.ipynb`. If this should be an
available-by-default tool, an issue should be opened to modify
langchain-community's `load_tools` accordingly.
- **Issue:**  #29277
- **Dependencies:** No added dependencies
- **Twitter handle:** No Twitter account
2025-05-03 15:20:07 -04:00
Stefano Lottini
325f729a92 docs: improvements to Astra DB pages, especially modernize Vector DB example notebook (#30961)
This PR brings several improvements and modernizations to the
documentation around the Astra DB partner package.

- language alignment for better matching with the terms used in the
Astra DB docs
- updated several links to pages on said documentation
- for the `AstraDBVectorStore`, added mentions of the new features in
the overall `astra.mdx`
- for the vector store, rewritten/upgraded most of the usage example
notebook for a more straightforward experience able to highlight the
main usage patterns (including new ones such as the newly-introduced
"autodetect feature")

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-05-03 14:26:52 -04:00
Asif Mehmood
00ac49dd3e Replace deprecated .dict() with .model_dump() for Pydantic v2 compatibility (#31107)
**What does this PR do?**
This PR replaces deprecated usages of ```.dict()``` with
```.model_dump()``` to ensure compatibility with Pydantic v2 and prepare
for v3, addressing the deprecation warning
```PydanticDeprecatedSince20``` as required in [Issue#
31103](https://github.com/langchain-ai/langchain/issues/31103).

**Changes made:**
* Replaced ```.dict()``` with ```.model_dump()``` in multiple locations
* Ensured consistency with Pydantic v2 migration guidelines
* Verified compatibility across affected modules

**Notes**
* This is a code maintenance and compatibility update
* Tested locally with Pydantic v2.11
* No functional logic changes; only internal method replacements to
prevent deprecation issues
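A minimal illustration of the replacement (assumes Pydantic v2 is installed; the model here is a made-up example, not one of the affected modules):

```python
from pydantic import BaseModel

# On Pydantic v2, `.dict()` still works but emits
# PydanticDeprecatedSince20; `.model_dump()` is the supported equivalent.
class Message(BaseModel):
    role: str
    content: str

m = Message(role="user", content="hi")
print(m.model_dump())  # {'role': 'user', 'content': 'hi'}
```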
2025-05-03 13:40:54 -04:00
ccurme
6268ae8db0 langchain: release 0.3.25 (#31101) 2025-05-02 17:42:32 +00:00
ccurme
77ecf47f6d openai: release 0.3.16 (#31100) 2025-05-02 13:14:46 -04:00
ccurme
ff41f47e91 core: release 0.3.58 (#31099) 2025-05-02 12:46:32 -04:00
Eugene Yurtsev
4da525bc63 langchain[patch]: Remove beta decorator from init_embeddings (#31098)
Remove beta decorator from init_embeddings.
2025-05-02 11:52:50 -04:00
ccurme
94139ffcd3 openai[patch]: format system content blocks for Responses API (#31096)
```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI


llm = ChatOpenAI(model="gpt-4.1", use_responses_api=True)

messages = [
    SystemMessage("test"),                                   # Works
    HumanMessage("test"),                                    # Works
    SystemMessage([{"type": "text", "text": "test"}]),       # Bug in this case
    HumanMessage([{"type": "text", "text": "test"}]),        # Works
    SystemMessage([{"type": "input_text", "text": "test"}])  # Works
]

llm._get_request_payload(messages)
```
2025-05-02 15:22:30 +00:00
ccurme
26ad239669 core, openai[patch]: prefer provider-assigned IDs when aggregating message chunks (#31080)
When aggregating AIMessageChunks in a stream, core prefers the leftmost
non-null ID. This is problematic because:
- Core assigns IDs when they are null to `f"run-{run_manager.run_id}"`
- The desired meaningful ID might not be available until midway through
the stream, as is the case for the OpenAI Responses API.

For the OpenAI Responses API, we assign message IDs to the top-level
`AIMessage.id`. This works in `.(a)invoke`, but during `.(a)stream` the
IDs get overwritten by the defaults assigned in langchain-core. These
IDs
[must](https://community.openai.com/t/how-to-solve-badrequesterror-400-item-rs-of-type-reasoning-was-provided-without-its-required-following-item-error-in-responses-api/1151686/9)
be available on the AIMessage object to support passing reasoning items
back to the API (e.g., if not using OpenAI's `previous_response_id`
feature). We could add them elsewhere, but seeing as we've already made
the decision to store them in `.id` during `.(a)invoke`, addressing the
issue in core lets us fix the problem with no interface changes.
2025-05-02 11:18:18 -04:00
ccurme
72f905a436 infra: fix notebook tests (#31097) 2025-05-02 14:33:11 +00:00
William FH
b5bf2d6218 0.3.57 (#31095) 2025-05-01 23:42:26 -07:00
William FH
167afa5102 Enable run mutation (#31090)
This lets you more easily modify a run in-flight
2025-05-01 17:00:51 -07:00
Simonas Jakubonis
0b79fc1733 docs: Pinecone Sparse vectorstore example (#31066)
Description: Pinecone SparseVectorStore example
cc @jamescalam

---------

Co-authored-by: James Briggs <35938317+jamescalam@users.noreply.github.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
2025-05-01 18:08:10 -04:00
ccurme
c51eadd54f openai[patch]: propagate service_tier to response metadata (#31089) 2025-05-01 13:50:48 -04:00
ccurme
6110c3ffc5 openai[patch]: release 0.3.15 (#31087) 2025-05-01 09:22:30 -04:00
Ben Gladwell
da59eb7eb4 anthropic: Allow kwargs to pass through when counting tokens (#31082)
- **Description:** `ChatAnthropic.get_num_tokens_from_messages` does not
currently receive `kwargs` and pass those on to
`self._client.beta.messages.count_tokens`. This is a problem if you need
to pass specific options to `count_tokens`, such as the `thinking`
option. This PR fixes that.
- **Issue:** N/A
- **Dependencies:** None
- **Twitter handle:** @bengladwell

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-04-30 17:56:22 -04:00
Really Him
918c950737 DOCS: partners/chroma: Fix documentation around chroma query filter syntax (#31058)

**Description**:
* Starting to put together some PR's to fix the typing around
`langchain-chroma` `filter` and `where_document` query filtering, as
mentioned:

https://github.com/langchain-ai/langchain/issues/30879
https://github.com/langchain-ai/langchain/issues/30507

The typing of `dict[str, str]` is on the one hand too restrictive (marks
valid filter expressions as ill-typed) and also too permissive (allows
illegal filter expressions). That's not what this PR addresses though.
This PR just removes from the documentation some examples of filters
that are illegal, and also syntactically incorrect: (a) dictionaries
with keys like `$contains` but the key is missing quotation marks; (b)
dictionaries with multiple entries - this is illegal in Chroma filter
syntax and will raise an exception. (`{"foo": "bar", "qux": "baz"}`).
Filter dictionaries in Chroma must have one and only one key. Again this
is just the documentation issue, which is the lowest hanging fruit. I
also think we need to update the types for `filter` and `where_document`
to be (at the very least `dict[str, Any]`), or, since we have access to
Chroma's types, they should be `Where` and `WhereDocument` types. This
has a wider blast radius though, so I'm starting small.

This PR does not fix the issues mentioned above, it's just starting to
get the ball rolling, and cleaning up the documentation.
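The rule described above can be illustrated with plain data (a hedged sketch; field names are made up):

```python
# Hedged sketch of the Chroma filter rule: a `where` dict has exactly one
# top-level key; multiple conditions are combined under an explicit "$and"
# operator, and operator keys like "$contains" must be quoted strings.
illegal = {"source": "a.txt", "page": 1}  # two keys: Chroma raises
legal = {"$and": [{"source": "a.txt"}, {"page": 1}]}
legal_doc = {"$contains": "search string"}  # note the quoted operator key

def looks_valid(where: dict) -> bool:
    return len(where) == 1

print([looks_valid(f) for f in (illegal, legal, legal_doc)])  # [False, True, True]
```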




---------

Co-authored-by: Really Him <hesereallyhim@proton.me>
2025-04-30 17:51:07 -04:00
Mateus Scheper
ed7cd3c5c4 docs: Fixing typo: "chocalate" -> "chocolate". (#31078)
Just fixing a typo that was making me anxious: "chocalate" ->
"chocolate".
2025-04-30 15:09:36 -04:00
yberber-sap
952a0b7b40 Docs: Fix SAP HANA Cloud docs - remove pip output, update vectorstore link, rename provider (#31077)
This PR includes the following documentation fixes for the SAP HANA
Cloud vector store integration:
- Removed stale output from the `%pip install` code cell.
- Replaced an unrelated vectorstore documentation link on the provider
overview page.
- Renamed the provider from "SAP HANA" to "SAP HANA Cloud"
2025-04-30 08:57:40 -04:00
Akshay Dongare
0b8e9868e6 docs: update LiteLLM integration docs for router migration to langchain-litellm (#31063)
# What's Changed?
- [x] 1. docs: **docs/docs/integrations/chat/litellm.ipynb** : Updated
with docs for litellm_router since it has been moved into the
[langchain-litellm](https://github.com/Akshay-Dongare/langchain-litellm)
package along with ChatLiteLLM

- [x] 2. docs: **docs/docs/integrations/chat/litellm_router.ipynb** :
Deleted to avoid redundancy

- [x] 3. docs: **docs/docs/integrations/providers/litellm.mdx** :
Updated to reflect inclusion of ChatLiteLLMRouter class

- [x] Lint and test: Done

# Issue:
- [x] Related to the issue
https://github.com/langchain-ai/langchain/issues/30368

# About me
- [x] 🔗 LinkedIn:
[akshay-dongare](https://www.linkedin.com/in/akshay-dongare/)
2025-04-29 17:48:11 -04:00
Lukas Scheucher
275ba2ec37 Add Compass Labs toolkits to langchain docs (#30794)
- **Description**: Adding documentation notebook for [compass-langchain
toolkit](https://pypi.org/project/langchain-compass/).
- **Issue**: N/a
- **Dependencies**: langchain-compass  
- **Twitter handle**: @labs_compass

---------

Co-authored-by: ccosnett <conor142857@icloud.com>
2025-04-29 17:42:43 -04:00
ccurme
bdb7c4a8b3 huggingface: fix embeddings return type (#31072)
Integration tests failing

cc @hanouticelina
2025-04-29 18:45:04 +00:00
célina
868f07f8f4 partners: (langchain-huggingface) Chat Models - Integrate Hugging Face Inference Providers and remove deprecated code (#30733)
Hi there, I'm Célina from 🤗,
This PR introduces support for Hugging Face's serverless Inference
Providers (documentation
[here](https://huggingface.co/docs/inference-providers/index)), allowing
users to specify different providers for chat completion and text
generation tasks.

This PR also removes the usage of `InferenceClient.post()` method in
`HuggingFaceEndpoint`, in favor of the task-specific `text_generation`
method. `InferenceClient.post()` is deprecated and will be removed in
`huggingface_hub v0.31.0`.

---
## Changes made
- bumped the minimum required version of the `huggingface-hub` package
to ensure compatibility with the latest API usage.
- added a `provider` field to `HuggingFaceEndpoint`, enabling users to
select the inference provider (e.g., 'cerebras', 'together',
'fireworks-ai'). Defaults to `hf-inference` (HF Inference API).
- replaced the deprecated `InferenceClient.post()` call in
`HuggingFaceEndpoint` with the task-specific `text_generation` method
for future-proofing, `post()` will be removed in huggingface-hub
v0.31.0.
- updated the `ChatHuggingFace` component:
    - added async and streaming support.
    - added support for tool calling.
- exposed underlying chat completion parameters for more granular
control.
- Added integration tests for `ChatHuggingFace` and updated the
corresponding unit tests.

  All changes are backward compatible.

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-04-29 09:53:14 -04:00
ccurme
3072e4610a community: move to separate repo (continued) (#31069)
Missed these after merging
2025-04-29 09:25:32 -04:00
ccurme
9ff5b5d282 community: move to separate repo (#31060)
langchain-community is moving to
https://github.com/langchain-ai/langchain-community
2025-04-29 09:22:04 -04:00
Sydney Runkle
7e926520d5 packaging: remove Python upper bound for langchain and co libs (#31025)
Follow up to https://github.com/langchain-ai/langsmith-sdk/pull/1696,
I've bumped the `langsmith` version where applicable in `uv.lock`.

Type checking problems here because deps have been updated in
`pyproject.toml` and `uv lock` hasn't been run - we should enforce that
in the future - goes with the other dependabot todos :).
2025-04-28 14:44:28 -04:00
Sydney Runkle
d614842d23 ci: temporarily run chroma on 3.12 for CI (#31056)
Waiting on a fix for https://github.com/chroma-core/chroma/issues/4382
2025-04-28 13:20:37 -04:00
Michael Li
ff1602f0fd docs: replace initialize_agent with create_react_agent in bash.ipynb (part of #29277) (#31042)

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-28 13:07:22 -04:00
Christophe Bornet
aee7988a94 community: add mypy warn_unused_ignores rule (#30816) 2025-04-28 11:54:12 -04:00
Bae-ChangHyun
a2863f8757 community: add 'get_col_comments' option for retrieve database columns comments (#30646)
## Description
Added support for retrieving column comments in the SQL Database
utility. This feature allows users to see comments associated with
database columns when querying table information. Column comments
provide valuable metadata that helps LLMs better understand the
semantics and purpose of database columns.

A new optional parameter `get_col_comments` was added to the
`get_table_info` method, defaulting to `False` for backward
compatibility. When set to `True`, it retrieves and formats column
comments for each table.

Currently, this feature is supported on PostgreSQL, MySQL, and Oracle
databases.

## Implementation
Create the table with column comments beforehand.

```python
db = SQLDatabase.from_uri("YOUR_DB_URI")
print(db.get_table_info(get_col_comments=True)) 
```
## Result
```
CREATE TABLE test_table (
	name VARCHAR,
	school VARCHAR
)
/*
Column Comments: {'name': 'person name', 'school': 'school_name'}
*/

/*
3 rows from test_table:
name
a
b
c
*/
```

## Benefits
1. Enhances LLM's understanding of database schema semantics
2. Preserves valuable domain knowledge embedded in database design
3. Improves accuracy of SQL query generation
4. Provides more context for data interpretation

Tests are available in
`langchain/libs/community/tests/test_sql_get_table_info.py`.

---------

Co-authored-by: chbae <chbae@gcsc.co.kr>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-28 15:19:46 +00:00
yberber-sap
3fb0a55122 Deprecate HanaDB, HanaTranslator and update example notebook to use new implementation (#30896)
- **Description:**  
This PR marks the `HanaDB` vector store (and related utilities) in
`langchain_community` as deprecated using the `@deprecated` annotation.
  - Set `since="0.1.0"` and `removal="1.0"`  
- Added a clear migration path and a link to the SAP-maintained
replacement in the
[`langchain_hana`](https://github.com/SAP/langchain-integration-for-sap-hana-cloud)
package.
Additionally, the example notebook has been updated to use the new
`HanaDB` class from `langchain_hana`, ensuring users follow the
recommended integration moving forward.

- **Issue:** None 

- **Dependencies:**  None

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
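The deprecation pattern described above can be sketched generically (a stdlib-only stand-in; the real annotation is LangChain's `@deprecated(since=..., removal=...)`, not this helper):

```python
import warnings

# Hedged, generic sketch: wrap __init__ so instantiating the deprecated
# class emits a DeprecationWarning pointing at the replacement.
def deprecated(since, removal, alternative):
    def decorator(cls):
        original_init = cls.__init__

        def new_init(self, *args, **kwargs):
            warnings.warn(
                f"{cls.__name__} is deprecated since {since} and will be "
                f"removed in {removal}; use {alternative} instead.",
                DeprecationWarning,
                stacklevel=2,
            )
            original_init(self, *args, **kwargs)

        cls.__init__ = new_init
        return cls

    return decorator

@deprecated(since="0.1.0", removal="1.0", alternative="langchain_hana.HanaDB")
class HanaDB:
    pass

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    HanaDB()
print(caught[0].category.__name__)  # DeprecationWarning
```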
2025-04-27 16:37:35 -04:00
湛露先生
5fb8fd863a langchain_openai: clean duplicate code for openai embedding. (#30872)
The `_chunk_size` is not changed by the `self._tokenize` method, so I
think this is duplicate code.

Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-04-27 15:07:41 -04:00
Philipp Schmid
79a537d308 Update Chat and Embedding guides (#31017)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-27 18:06:59 +00:00
ccurme
ba2518995d standard-tests: add condition for image tool message test (#31041)
Require support for [standard
format](https://python.langchain.com/docs/how_to/multimodal_inputs/).
2025-04-27 17:24:43 +00:00
ccurme
04a899ebe3 infra: support third-party integration packages in API ref build (#31021) 2025-04-25 16:02:27 -04:00
Stefano Lottini
a82d987f09 docs: Astra DB, replace leftover links to "community" legacy package + modernize doc loader signature (#30969)
This PR brings some much-needed updates to some of the Astra DB shorter
example notebooks,

- ensuring imports are from the partner package instead of the
(deprecated) community legacy package
- improving the wording in a few related places
- updating the constructor signature introduced with the latest partner
package's AstraDBLoader
- marking the community package counterpart of the LLM caches as
deprecated in the summary table at the end of the page.
2025-04-25 15:45:24 -04:00
ccurme
a60fd06784 docs: document OpenAI flex processing (#31023)
Following https://github.com/langchain-ai/langchain/pull/31005
2025-04-25 15:10:25 -04:00
ccurme
629b7a5a43 openai[patch]: add explicit attribute for service tier (#31005) 2025-04-25 18:38:23 +00:00
ccurme
ab871a7b39 docs: enable milvus in API ref build (#31016)
Reverts langchain-ai/langchain#30996

Should be fixed following
https://github.com/langchain-ai/langchain-milvus/pull/68
2025-04-25 12:48:10 +00:00
Georgi Stefanov
d30c56a8c1 langchain: return attachments in _get_response (#30853)
This PR returns the message attachments in `_get_response`; previously,
when files were generated, the attachments were not returned, so the
generated files could not be retrieved.

Fixes issue: https://github.com/langchain-ai/langchain/issues/30851
2025-04-24 21:39:11 -04:00
Abderrahmane Gourragui
09c1991e96 docs: update document examples (#31006)
## Description:

While following the docs I found a couple of small issues.

This fixes some unused imports on the [extraction
page](https://python.langchain.com/docs/tutorials/extraction/#the-extractor)
and updates the examples on the [classification
page](https://python.langchain.com/docs/tutorials/classification/#quickstart)
to be independent of the chat model.
2025-04-24 18:07:55 -04:00
ccurme
a7903280dd openai[patch]: delete redundant tests (#31004)
These are covered by standard tests.
2025-04-24 17:56:32 +00:00
Kyle Jeong
d0f0d1f966 [docs/community]: langchain docs + browserbaseloader fix (#30973)
community: fix Browserbase integration
docs: update docs

- **Description:** Updated BrowserbaseLoader to use the new Python SDK.
- **Issue:** update Browserbase integration with LangChain
- **Dependencies:** n/a
- **Twitter handle:** @kylejeong21
ccurme
403fae8eec core: release 0.3.56 (#31000) 2025-04-24 13:22:31 -04:00
Jacob Lee
d6b50ad3f6 docs: Update Google Analytics tag in docs (#31001) 2025-04-24 10:19:10 -07:00
ccurme
10a9c24dae openai: fix streaming reasoning without summaries (#30999)
Following https://github.com/langchain-ai/langchain/pull/30909: need to
retain "empty" reasoning output when streaming, e.g.,
```python
{'id': 'rs_...', 'summary': [], 'type': 'reasoning'}
```
Tested by existing integration tests, which are currently failing.
2025-04-24 16:01:45 +00:00
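The fix retains reasoning output items even when their `summary` list is empty, as in the snippet above. A minimal sketch of that normalization step; `normalize_output_items` is a hypothetical helper, not the actual langchain-openai streaming code:

```python
def normalize_output_items(items: list[dict]) -> list[dict]:
    """Normalize streamed Responses-API output items.

    Reasoning items with an empty summary are retained rather than
    dropped, so the reasoning id (e.g. 'rs_...') survives the stream.
    Illustrative sketch only.
    """
    out = []
    for item in items:
        if item.get("type") == "reasoning":
            # Keep the item even when summary == []
            out.append({
                "id": item["id"],
                "summary": item.get("summary", []),
                "type": "reasoning",
            })
        else:
            out.append(item)
    return out
```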
ccurme
8fc7a723b9 core: release 0.3.56rc1 (#30998) 2025-04-24 15:09:44 +00:00
ccurme
f4863f82e2 core[patch]: fix edge cases for _is_openai_data_block (#30997) 2025-04-24 10:48:52 -04:00
Philipp Schmid
ae4b6380d9 Documentation: Add Google Gemini dropdown (#30995)
This PR adds Google Gemini (via AI Studio and Gemini API). Feel free to
change the ordering, if needed.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-24 10:00:16 -04:00
Philipp Schmid
ffbc64c72a Documentation: Improve structure of Google integrations page (#30992)
This PR restructures the main Google integrations documentation page
(`docs/docs/integrations/providers/google.mdx`) for better clarity and
updates content.

**Key changes:**

* **Separated Sections:** Divided integrations into distinct `Google
Generative AI (Gemini API & AI Studio)`, `Google Cloud`, and `Other
Google Products` sections.
* **Updated Generative AI:** Refreshed the introduction and the `Google
Generative AI` section with current information and quickstart examples
for the Gemini API via `langchain-google-genai`.
* **Reorganized Content:** Moved non-Cloud Platform specific
integrations (e.g., Drive, GMail, Search tools, ScaNN) to the `Other
Google Products` section.
* **Cleaned Up:** Minor improvements to descriptions and code snippets.

This aims to make it easier for users to find the relevant Google
integrations based on whether they are using the Gemini API directly or
Google Cloud services.

| Before                | After      |
|-----------------------|------------|
| ![Screenshot 2025-04-24 at 14 56
23](https://github.com/user-attachments/assets/ff967ec8-a833-4e8f-8015-61af8a4fac8b)
| ![Screenshot 2025-04-24 at 14 56
15](https://github.com/user-attachments/assets/179163f1-e805-484a-bbf6-99f05e117b36)
|

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-24 09:58:46 -04:00
Jacob Lee
6b0b317cb5 feat(core): Autogenerate filenames for when converting file content blocks to OpenAI format (#30984)
CC @ccurme

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-24 13:36:31 +00:00
ccurme
21962e2201 docs: temporarily disable milvus in API ref build (#30996) 2025-04-24 09:31:23 -04:00
Behrad Hemati
1eb0bdadfa community: add indexname to other functions in opensearch (#30987)
- **Description:** Add the ability to override the index name if
provided in the kwargs of sub-functions. When used in a WSGI
application, it's crucial to be able to change parameters dynamically.
2025-04-24 08:59:33 -04:00
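The per-call override pattern described above can be sketched generically; `Store`, `default_index`, and `search` here are illustrative names, not the actual OpenSearch vector store API:

```python
class Store:
    """Illustrative store with a construction-time default index."""

    def __init__(self, default_index: str):
        self.default_index = default_index

    def search(self, query: str, **kwargs) -> str:
        # Allow callers (e.g. per-request in a WSGI app) to override
        # the index configured at construction time.
        index_name = kwargs.get("index_name", self.default_index)
        return f"searching {index_name!r} for {query!r}"
```

With this shape, a single long-lived store instance can serve requests against different indices.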
Nicky Parseghian
7ecdac5240 community: Strip URLs from sitemap. (#30830)
- **Description:** Simply strips the loc value when building the
element.
- **Issue:** Fixes #30829
2025-04-23 18:18:42 -04:00
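The fix amounts to stripping surrounding whitespace from each `<loc>` value when parsing the sitemap. A minimal sketch with the stdlib XML parser; the actual loader's parsing differs, and `parse_locs` is a hypothetical helper:

```python
import xml.etree.ElementTree as ET

SITEMAP = """<?xml version="1.0"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>
    https://example.com/page
  </loc></url>
</urlset>"""

def parse_locs(xml_text: str) -> list[str]:
    ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
    root = ET.fromstring(xml_text)
    # Strip surrounding whitespace/newlines from each loc value,
    # so pretty-printed sitemaps don't yield URLs with leading spaces.
    return [loc.text.strip() for loc in root.findall(".//sm:loc", ns)]
```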
ccurme
faef3e5d50 core, standard-tests: support PDF and audio input in Chat Completions format (#30979)
Chat models currently implement support for:
- images in OpenAI Chat Completions format
- other multimodal types (e.g., PDF and audio) in a cross-provider
[standard
format](https://python.langchain.com/docs/how_to/multimodal_inputs/)

Here we update core to extend support to PDF and audio input in Chat
Completions format. **If an OAI-format PDF or audio content block is
passed into any chat model, it will be transformed to the LangChain
standard format**. We assume that any chat model supporting OAI-format
PDF or audio has implemented support for the standard format.
2025-04-23 18:32:51 +00:00
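The transformation described above can be sketched as a small converter from an OpenAI Chat Completions file block to the cross-provider standard format. This is an illustrative sketch, not the actual core conversion code, and the exact standard-format keys are an assumption based on the linked multimodal docs:

```python
def convert_oai_file_block(block: dict) -> dict:
    """Convert an OpenAI Chat Completions file content block, e.g.

        {"type": "file",
         "file": {"filename": "doc.pdf",
                  "file_data": "data:application/pdf;base64,<b64>"}}

    to a LangChain-standard-style block (illustrative sketch).
    """
    if block.get("type") != "file":
        return block
    file_data = block["file"]["file_data"]  # "data:<mime>;base64,<data>"
    header, data = file_data.split(",", 1)
    mime_type = header.removeprefix("data:").removesuffix(";base64")
    return {
        "type": "file",
        "source_type": "base64",
        "mime_type": mime_type,
        "data": data,
    }
```

Non-file blocks pass through unchanged, matching the assumption that only OAI-format PDF/audio blocks need translating.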
2525 changed files with 8867 additions and 382084 deletions

View File

@@ -1,8 +1,8 @@
Thank you for contributing to LangChain!
- [ ] **PR title**: "package: description"
- Where "package" is whichever of langchain, community, core, etc. is being modified. Use "docs: ..." for purely docs changes, "infra: ..." for CI changes.
- Example: "community: add foobar LLM"
- Where "package" is whichever of langchain, core, etc. is being modified. Use "docs: ..." for purely docs changes, "infra: ..." for CI changes.
- Example: "core: add foobar LLM"
- [ ] **PR message**: ***Delete this entire checklist*** and replace with
@@ -24,6 +24,5 @@ Additional guidelines:
- Please do not add dependencies to pyproject.toml files (even optional ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in langchain.
If no one reviews your PR within a few days, please @-mention one of baskaryan, eyurtsev, ccurme, vbarda, hwchase17.

View File

@@ -16,7 +16,6 @@ LANGCHAIN_DIRS = [
"libs/core",
"libs/text-splitters",
"libs/langchain",
"libs/community",
]
# when set to True, we are ignoring core dependents
@@ -38,8 +37,8 @@ IGNORED_PARTNERS = [
]
PY_312_MAX_PACKAGES = [
"libs/partners/huggingface", # https://github.com/pytorch/pytorch/issues/130249
"libs/partners/voyageai",
"libs/partners/chroma", # https://github.com/chroma-core/chroma/issues/4382
]
@@ -134,12 +133,6 @@ def _get_configs_for_single_dir(job: str, dir_: str) -> List[Dict[str, str]]:
elif dir_ == "libs/langchain" and job == "extended-tests":
py_versions = ["3.9", "3.13"]
elif dir_ == "libs/community" and job == "extended-tests":
py_versions = ["3.9", "3.12"]
elif dir_ == "libs/community" and job == "compile-integration-tests":
# community integration deps are slow in 3.12
py_versions = ["3.9", "3.11"]
elif dir_ == ".":
# unable to install with 3.13 because tokenizers doesn't support 3.13 yet
py_versions = ["3.9", "3.12"]
@@ -184,11 +177,6 @@ def _get_pydantic_test_configs(
else "0"
)
custom_mins = {
# depends on pydantic-settings 2.4 which requires pydantic 2.7
"libs/community": 7,
}
max_pydantic_minor = min(
int(dir_max_pydantic_minor),
int(core_max_pydantic_minor),
@@ -196,7 +184,6 @@ def _get_pydantic_test_configs(
min_pydantic_minor = max(
int(dir_min_pydantic_minor),
int(core_min_pydantic_minor),
custom_mins.get(dir_, 0),
)
configs = [

View File

@@ -22,7 +22,6 @@ import re
MIN_VERSION_LIBS = [
"langchain-core",
"langchain-community",
"langchain",
"langchain-text-splitters",
"numpy",
@@ -35,7 +34,6 @@ SKIP_IF_PULL_REQUEST = [
"langchain-core",
"langchain-text-splitters",
"langchain",
"langchain-community",
]

View File

@@ -20,6 +20,8 @@ def get_target_dir(package_name: str) -> Path:
base_path = Path("langchain/libs")
if package_name_short == "experimental":
return base_path / "experimental"
if package_name_short == "community":
return base_path / "community"
return base_path / "partners" / package_name_short
@@ -69,7 +71,7 @@ def main():
clean_target_directories([
p
for p in package_yaml["packages"]
if p["repo"].startswith("langchain-ai/")
if (p["repo"].startswith("langchain-ai/") or p.get("include_in_api_ref"))
and p["repo"] != "langchain-ai/langchain"
])
@@ -78,7 +80,7 @@ def main():
p
for p in package_yaml["packages"]
if not p.get("disabled", False)
and p["repo"].startswith("langchain-ai/")
and (p["repo"].startswith("langchain-ai/") or p.get("include_in_api_ref"))
and p["repo"] != "langchain-ai/langchain"
])
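The diff above widens the package filter from "any langchain-ai/ repo except the monorepo" to also admit third-party repos flagged with `include_in_api_ref`. As a standalone predicate (a sketch of the filter logic only, ignoring the separate `disabled` check):

```python
def include_package(p: dict) -> bool:
    # Include langchain-ai org packages, plus any package explicitly
    # opted in via include_in_api_ref, excluding the monorepo itself.
    return (
        p["repo"].startswith("langchain-ai/")
        or p.get("include_in_api_ref", False)
    ) and p["repo"] != "langchain-ai/langchain"
```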

View File

@@ -1,4 +1,3 @@
libs/community/langchain_community/llms/yuan2.py
"NotIn": "not in",
- `/checkin`: Check-in
docs/docs/integrations/providers/trulens.mdx

View File

@@ -34,11 +34,6 @@ jobs:
shell: bash
run: uv sync --group test --group test_integration
- name: Install deps outside pyproject
if: ${{ startsWith(inputs.working-directory, 'libs/community/') }}
shell: bash
run: VIRTUAL_ENV=.venv uv pip install "boto3<2" "google-cloud-aiplatform<2"
- name: Run integration tests
shell: bash
env:

View File

@@ -30,7 +30,7 @@ jobs:
- name: Install langchain editable
run: |
VIRTUAL_ENV=.venv uv pip install langchain-experimental -e libs/core libs/langchain libs/community
VIRTUAL_ENV=.venv uv pip install langchain-experimental langchain-community -e libs/core libs/langchain
- name: Check doc imports
shell: bash

View File

@@ -26,7 +26,20 @@ jobs:
id: get-unsorted-repos
uses: mikefarah/yq@master
with:
cmd: yq '.packages[].repo' langchain/libs/packages.yml
cmd: |
yq '
.packages[]
| select(
(
(.repo | test("^langchain-ai/"))
and
(.repo != "langchain-ai/langchain")
)
or
(.include_in_api_ref // false)
)
| .repo
' langchain/libs/packages.yml
- name: Parse YAML and checkout repos
env:
@@ -38,11 +51,9 @@ jobs:
# Checkout each unique repository that is in langchain-ai org
for repo in $REPOS; do
if [[ "$repo" != "langchain-ai/langchain" && "$repo" == langchain-ai/* ]]; then
REPO_NAME=$(echo $repo | cut -d'/' -f2)
echo "Checking out $repo to $REPO_NAME"
git clone --depth 1 https://github.com/$repo.git $REPO_NAME
fi
REPO_NAME=$(echo $repo | cut -d'/' -f2)
echo "Checking out $repo to $REPO_NAME"
git clone --depth 1 https://github.com/$repo.git $REPO_NAME
done
- name: Setup python ${{ env.PYTHON_VERSION }}

View File

@@ -7,12 +7,6 @@ repos:
entry: make -C libs/core format
files: ^libs/core/
pass_filenames: false
- id: community
name: format community
language: system
entry: make -C libs/community format
files: ^libs/community/
pass_filenames: false
- id: langchain
name: format langchain
language: system

oZV/bHhzh4PNG+m6+m7L2t8vHZWe+6fg73kLX95Zt/LpLsdPg9+dsjx/Yfj9Tzcv+Hzd/ECcm7HYMWvUGb746ee/HJ16eQ0v7t3Xzt1dUNW66camkf4/78qUfFreoa3x1tmk/v8Bv8VQGQ==

View File

@@ -1 +1 @@
eNptVXlsFGUULxCPqFFT0UQ82G40IdKZndnZs4e17LaldEuPXbSFkPrtN9/sTHeuzrHd3arYVv4weI0oEpNKeu1iKQWlYgUxHqmagBojlRQV9Q/iAcYgicYL/Ha7lTYwyR4z732/93vv/d6bvmwCabqgyEvGBdlAGoAGvtGtvqyGukykG09kJGTwCjvS3BSODJuaMHsfbxiqXuZwAFUgFRXJQCChIjkStAPywHDg/6qI8jAjUYVNzYo9dgnpOogh3V62qccOFRxJNuxl9ggSRZuEbMDWqcTxT1QxDVsUAU23l9o1RUTYx9SRZn90c6ldUlgk4gcx1SAY0k0YphZVsJ9uaAhI9jIOiDp6NMsjwOKUThXdPMIrumFNLKa5D0CIMAKSocIKcszaG0sLaqmNRZwIDDSGyckoXwRrLI6QSgBRSKDM3ClrP1BVUYAgZ3d06oo8XkiGMFIqutw8lmNP4Mxlw5pswiSq6x3NKVxP2UaTHpqk9ycJ3QCCLOICESLAfDJq3n54oUEFMI5BiEKvrMzc4YmFPopujTYC2BReBAk0yFujQJM8rgMLn2umbAgSsrKB5svDFYyXwjEk7SR9ry0C1lMytEbzRX9z0WFkaCkCKhjDGqQyUFHiArJOLrmmowNyHVGpsjb9oDvQ3bRefyjlrVlvxkFAhrViMl0fqlFrYmRYpM0Etc5sp/hGgva6aKfX52PcBE1SJM6ZqDdDBt1c16lIzqivfmMqInVQzTBc19jA1DJiWCATDfUh3ddYw7W2mu4gUtvWyc1NnBxYX90AIw0BMaShQFuQbE9Wp9zORMuGuDvUBTtCAlPdqdVHNRALAX96Y7Mn2F5uw5TNhMBWJmtJ0q02JJy6DMNdUF3DC2l6bXojG6+JSeFgROl6MBzkw+2tvpYFnCnKQ1AF2h7K5aNy18S8YkQkxwzeGqYp324N6SqeF9SfwYU0TL1vBKsTHfs4WxicoaaGS8K+bSSIlWodifBmqY3y2BqBZnNSTreN9pQxTJnbbatrjIwHCmEiVxTmaxENyDqHxVkzPwhZyJtyHLFjgSuOwJHcCOD+5ujj0SRQUlV0RBRYWeNtROvcxiDqgwfm5o1QtBiQhXQ+rHUkPwvd6WQ3C02W5RPdEuVPuxghikzITRaOqJqSC4MJEZJuDTNO70TBMq/GMZwrRdAUQdGHkoSGSyEKkoDrmf8urC3dGsHlp6YudzDwpsELLuvKd4N6Z6GHhiQs41zsSzAuv9//9pWd5qEY7OL3eg4t9tLRQja0U9KnLncoQAxR+nhy3psQWGv2HnzTwXm8yIO8XhhlIeeNAhcHnSwLvBTnpwDyM2/h3SdAjJJrpqpoBqEjiHe0kbJmSyWQzG2eSoZ2Mx6cablNkKFosihsRoNKLge93KZqSFQAuw9yBASQR8Sc/qxssH19dWN94GAbsVBIRJM6937IyoouCxyXCSMNN8Yag6JisniFaigTqCVaq9utST8NGRfN0ZwPeZiojyXW4OU0j/a/7EZy+zcLRMw9Aa0DPFNpL3O5GHu5TQKVPg9uU/4t0pvJ5SrHppfsXbnt2qL8tQx/Ll58qvWo/BV189tnVh+s+KoXffTGmYbeO7ctvap4eeU9Wx/YHn+SmL53ak/F91scq1bv6Ms84Kg4dvYGbsffrUWrYjM3viBM7vT89edMokcnPt/y1Ndffvbtl+lzp3ue/Ca7r/fu/pfRVb9sGbQemtk66Ekvr1smFXe+d3ajTB49zR3evLdn6N5t9x/9edX5mek0ufnkF4+0kCf6Xy+JPTdzo3x90eMvXfjk9v7pujf6p8/eKljtJ3YnfigpevHjWFDgPhrq3z27dsV1fYPP/jucWnqmdG3//rqBd8u43/64lny3/7bQrl/f3zy1Zri8Vl1608slz5/46YPjzu8H4cDqrZ/+vaTqFSY9Bocqb/rn9qqp76ZeLe7MPHd+f3Oksqe06LHfD/946niYbrljtPivp0vu2LFn+8CKN6tKHNe0
rp1cee409/szm1DL5PmK2VHnSRdfd3oq+U173S1DO40L5cd6TmVPkccvwqriDx9ePrkhxA+kNiXvbNipnO+AP9z1R++t8FD31au7dv5W1Vbx46e7JmrP3XL9wZn3tq4IhyaqBi7senZ6Zb4ny4oyzKFhP27Qf7FZXVw=
eNptVQ1sG+UZTgSjrLBS6NSq2taaC4jR+ew7++z4nGVg4thxUtdu7eaP0ej83Wf7kvvL/fgnWSjNEEwdERyjGlo1aFrX7qKQpmvSNO0SMsFGx5ZBWYGGruNHWkBQtCEhmICm++w4kKj9ZJ99973f8z7v+z7ve/2FFFRUThIrhzlRgwoDNHSjGv0FBXbrUNUezgtQS0psLhyKRA/pCje3Jalpsuq2WhmZs0gyFBnOAiTBmiKtIMloVvRf5mEJJheT2Owc34sJUFWZBFQx9/29GJCQJ1HD3FgU8rxJgCbG1Cl1oZ+YpGumGGQUFTNjisRDZKOrUMH6HjBjgsRCHj1IyBputzhwTVdiErJTNQUyAuaOM7wK+wpJyLAopH9VrM0lJVUzRlbSPMoAABECFIHEcmLCOJ7o4WSziYVxntGg2dSjauwQoijCUiqMoS4IZZzhuRTML541RhlZ5jnAFPetnaokDpdDwrWsDK/eHirGgKP4Rc0YCyEqnoA1nEVZFU2kpZqwEKMZXNUYTuRRmnCeQazycmn/9PINmQFdCAQvV8zILx4eWW4jqcbhIANCkRWQjAKSxmFGEZzU8eXPFV3UOAEahbrw1e7Km1+7K9gtJIk+x1Ygq1kRGIdLuZ9YcRpqShYHEgIxBok8kKQuDhpvVa7q6ADxjphQG/VtbYLtQtKesKea21xNO1l7QKe9nK9J294a6fCn0zIZ9m0TtutpnKymnCRNURSNkxbCgljgCdLZ2eZp1/1d7eT2docPRONbxUxdRG1xOEhAiZlM1JMOKUG2eWeseWddthv4JGe1w5+KeDPdejxhC9wnakRCoJsbiWyssQOm2r1KwBlu7KwOSgmipVt0uDypdFu2oavGhCjrKY6t7WlJSp3ZSMafSta7BMi1Q3/IE3UGAhmvnm2UMjG/GPFSlMfXmljGmXQSOFGm7SQoF1FcI0uS4aGY0JLGIZJwHVGgKqO2gT/Po0RqutqfQyKFfztTKPfPwVDTN/pen/MiwRpTPoUzmwibKchkTTbC5jCRlJsg3A7C5A9Gh+vKbqLXVOaxqMKIahyps36pHwogqYtdkB2qu2YPTBV7ANW3SB91KA4zsqRCvMzKGG7FdywODjzgPb7YdrikJBiR6ym5NaZKzZDuyaRZoLNsMpUWCLqHsnMxqIP4WPmIrEhFN4gQLqjGIZTKkfLOkhyHUKwETqLUkqcyuIJSwXMCh/JZupanl2rkHCjZJ6820NDAQXOuQJWqQUwvt1CggGRc9P0NDEXT9B+ubbQEZaeLizi10kqFy9mQNkE9ebVBGeIgoQ5nlqxxjjXm7kA3HbTLFmdIigGQpR020gFdMVd1jGaAw06TMZdtEo1ADiCUYjFlSdFwFQI0qrWsMWcWmExx9NTaSYfdiSKtMXEi4HUWRvSYVyrGoNaYZAXyEsMeBXEcMCAJ8UX9GQVv2zZPMFB3ohVfLiQ8JC++JgqipIpcPJ6PQAUVxhgCvKSzaJIqMF/nw3d42owx2k67CEBAwJIE66RteMDjHV1C+1p2ueIYLjA84p4CxvGkvRZzU5QdqzEJTK3LicpUepnsyRdjFRN/qjy7+Zc3VpTWdeh75cpjO4LSOWLt1PstN0/Lc8/95vIBzdqw74mT69f8bHzv93tv2+8+sTH0kDB+5Qf0nLjjh7dt2Dw/2/tMtbRpdQUzfn5P2Pf7Dw/ev6tPlu6Z2j3xxfQXP+29L/0JbHrz84v6xIPBz2PGxsGFR/Zubkts0c7efuEdf+utjX8ebt/F79r36/2z80rFqQNnYfv3spM/uWCZLLjNr/z19c/OUH/csjW2quOmiocefXfW3jPw/Itr/r7h8X1rIk/iey/+7tv3huVJjG10HvvUvH7dkf7Epy+c3zy7aez1mRsGfYHwupdrPqj6aoyfOX/9c3e++UT90bdnBp7/pOUI++zam/tXT/1j4c7Hzw17PmA/e/axAdPl6Zsets7vXufse3Lu9H9+u7/ytS9nqqpPT9di
j/4qtOGthsFtu8P1YCqYx566+3+X7u7+RezeW7418/HoKwt/2evNvVo/4vrnu/99YHLTv1986p3rR16uOrPn8P4PP5p9b+N4Y1XTR6+ZbxnM2Y7cNf/dZy6Nf8d+4OnsQu/T74/5335j4krVPZWNDHtu5x3nmmsdPx6/df7LhZYL3EsvzGC3Xx5d/aMTDfSqgYFLFzNZ7ON1oMXxxkSDcUl+kH7pvfBXNxTrdl3FtlVzr/ahIv4fRx504Q==

View File

@@ -1 +1 @@
eNqdVWtwE9cVtovbuCmQtKGY0OmgKAmQ4JV3tbJerpzIsuWY2JaxZPzg4V7tXkkr78t7d23JxpNCMm1TG6dbm2nKQDLBD4HiBxQSHFI8GVIaaGknkISODW1DJo/OZMiESaZJhhL36uFgD/yqfkjavd95fd855+5KdEAFcZKYO8aJKlQAo+IHpO9KKLBdg0h9alSAakRih+t8/sCQpnAzD0dUVUbOoiIgcyZJhiLgTIwkFHVQRUwEqEX4v8zDtJvhoMTGZ3Mf6TYKECEQhsjo3NJtZCQcSlSNTmMjNliHDGoEGjohwD+KgRMNfu8jxkKjIvEQQzQEFWPPtkKjILGQxy/CskpYJELgRA6jkKpAIBidIcAjWGhUoSDjKlRNwbakicRvJInPhFXjcsphSBPTRWLjb/46u40iEFKnYai2ZlPBABYiRuHkDMZYCdWFqZowQAYKtsPEoZQPWcF8KCoH00+8xIB579nYOFtODBt7enB5mF9OgSxO7SYSl5lFSsEoZFSM7NnWk4hAwOIQzwxHJKTqE4uJnwQMAzEnUGQkFnvXx8NdnFxoYGGIBypMYrZFmC5TT7ZBKBOA5zrgaMZKPwxkmecy4YuiSBLHsuoQqURuPU6m9CCwlKKqH/PhJNxVRXVx3CGigTJZKRN1OEYgFXAijxUneIDzGZXT568uPJAB04adENnu00czxhMLMRLSR2oA4/MvcgkUJqKPAEWwWo4ufK9oosoJUE946m4Nlz28GY42UWaT/cgixyguMvpIupGOLzKGqhInGAn70F8gJ+b54aEYViP6EE05DioQybjf4ZOj2EzV0K5hrAU8dyaR7fsDvsfnRfxnTsFwOdZFPxmIaIUG0mqoAYrBTJqLDZTVSdNOC22orAmMebJhAreV4UhAASIKYSkq5mVPMBFNbINs0nNbwU+mBMfVpNLHo0XAmCwhSGSz0seaiPrMxBNV5Ucz3UVIShiIXFc6rH4yrXxnV6yTZTSWjXR0CqSjy0JzQagxoWNZEzwCqTA4IUJAmBwrOZE9mec+iWslCYokSOpEjMCzCnlO4DCf6e/s2kH6cDFJklO3AlSpDeIFlbCQ6c/0QoQCBSxaKvZNNxaHw/GH24PmXdEY4rAVn1iMQnBhNpRZQFO3ArIuDpBoLDaPJjhWn3kAP7SabRaKtNtIyk4DlmSDdiugg5AGVruZCdqLLa+k9gGDvaTElCVFJRBk8I5V4/pMoQBiqTlz0VQxZpEkS/BqZHiNhX4tWC6lakAlBlmBvATYSSZEMICJQCLTf3qivLnWXVPlSfpxkh5JauPgb2ZzV7W2MqHWoOBSZU+V2xoHQnu8vbjGjTRLwL7JXS/5N1eihvYyjxm1uDvaGqjI5hqCwkWYbXa7mSYoE2nCU0qwgUoEo22g1WNTWpHVFmDZOMNuDDTEvU1Nzc1as6MOVfg2bWLKKnzmmKUOKqQnYAGo3RGtkarVliaOlWJ2WfXxUTcdEDyq6tjULlVX14r18VhluLjM29q5WeVsSK3AJeJl6yoqMeCGxfsSubJjQ+CxIVJDY3OS80NTYmDTxLhMi1dkieExfGf5RD5eYvCnGIb4F+9tP6dCV60kwplBTIzWwbEu2ttIosdpqoGOhsrsUm2jucsW0hyVDc0+r6XFvtlnCtdvjJV7y7xoATN2i40gs+RYSYs93Zo3U/8/s3q5iVi4BQifnLmcE6KERC4UGvVDBU+VnmR4SWPxtlfgqMdL1Lub9WMOiqEtFCgO2c1BOmiHRBneo/PevtkZw6mrIgF43HgdjH40QruMTouFNpYYBOCyW/GMpa/wnaOZi+v0t5as6c3PSX+W9PlrpEvk0pPXG/N/8oZn+u7u6ZfO3DkWOQ5c/OvmVcnqSxvZswP/Hhd+PVd6ppZf9/HPvvpp/o4dz7w/dOyuVUt+5z716OC+8oYXP3X2rxFLC0ufOHH9P9e+tnad/WT7J1xNdLzz4y/Ia0s/m6pwr7v4I5Bsua9qqKSp6f2ntmsr
ifBn6/s/6l15sPrnf37rV3tPjXx3wyFof/5vv71iuTN8Td+2+qE33aem7+ijXpbmpn37r9zzAO90VDys9hfkVz3/x1VNQzvyptCF7/89byB3hfee6L9OiIOrc9m+jd9ujK6/3HP9fJzs3bCn9MZ/lRvvbCU+atwy+8Hx0JaZL7e9Yh7q2v85+od10Nby+drdn5Y+t3UFs9Qy2SLuHPzaW/vg7PeczGvC+cDBs8eXX1+2/v6pgsuP/n7nHRMzuz9oeWzdue8sl/9Umje5/4m+viPP2vpfHGLe+PKF1p3mCuGvQ6/KK5RDIxv8vZdfGti9bHntO7PON/ddvTvaltxbby55e3vHvdoBcuvR3H3j/YZ734v+cnLlQOPbF5Y+/ZeO0zfGf7B18Nll5650XFs7EMrLGzo7J35YfFXsOtQ7p6/Jue/w2nMXZoNrT3/1mmdvxVz+3J6L3ZetS5dfLXqIfPDpi9L9r++zJbtXv/vcj8GeD+sLIm0bLn2RG99f23je+J5U8NYP837x7l1Y7bm5JTnanIMn8nJy/gcqPgKj
eNqdVXlwE+cVtyANnmmgtKkDM9jDWik0uF5pVyvJkjwKGMnCR2XZljA+SsVq95O10l7sIUt2nMPJBAIhYZ3U7TSkUx9YrXEMwQ4GGjcDLW1n4gxDknHjNEPKQMuRUmbSUpgpifvpMNgDf3VH2uP73vF77/fe+3pScSDJjMDrRhleARJJKfBD1npSEtipAll5YZgDSkSgh+p9/sCgKjGzJRFFEWWH0UiKjEEQAU8yBkrgjHHcSEVIxQjfRRZkzAyFBDr5qW5jl54Dsky2A1nvaOvSUwJ0xSt6h34bVPi+jCgRgHQAEj4khOERv2ejvlQvCSyAIqoMJH339lI9J9CAhQvtooKaBZRjeAZKyYoESE7vCJOsDEr1CuBEGIWiSlAXM2BwRRDYrFslKaYNhlU+EyRUvvvq6NLzJJfebQdKMAcFCtBApiRGzMrotwBlIVQDFBBJCerBxMlpG6IE8yEpDMh8sQJFzlvP+YZoGb5d390Nw4P5ZSRAQ2j3JGGYOUkhFAWUAiW7t3enIoCkoYtXhyKCrGhjixN/mKQoAHMCeEqgoXVtvL2TEUsRGoRZUgGlSKes0CMw5zzIBKuNxAAQUZJl4mA4q6sdIUWRZbIgjFFZ4EdzHKFpOPdvj6RZQSGhvKJN+CCUimpjfRLWCY/ghjKY9yMJVFZIhmch7yhLQlTDYmb/Nws3RJKKQSNorga14azy2EIZQdYOeknK519kkpSoiHaQlDireXzhuqTyCsMBLeWqv99dbvOuuxRhwHH4e3uRZTnJU9rBTD1NLtIGipREKQEa0fqxsfkEsYBvVyLaIIHbfyUBWYRlD54fhmqKKvcMQUrA9J9SufIf8NXOc3k+b9WQG9KjTXkkphTBTIiXTCImzGRBcLMDwxyEDdniDYy6cm4CD+Th7YBE8nIYclE5z36Kiqh8DNAjrgcyPpVmHEaThg87DAUJUZABmkOljTajjdnGR6vd49kiQwWpneSZzoxbbSpDfUdnooOmVJqOxDs4zN5pJpgQUKnwRE4FdkLaDQSEcrI2aDPhY7md+eSPwFgxFMdQDD+ZQGHLApbhGJjPzD03fWRtyIJh2PH7BRQhBuCcSpmxzPXbhRIS4CBpad/3zJjtdvu7DxaaN0XY0xd2crGUDBaiwU2cfPx+gZyJAUweTcxLowytzX4PfgQxCwAh0gIhEGUhnKYokrDYccxMhewmS9gWOpEeCxS0kiZTFCQFlQEFR62S1GZLOTKRbjQngVsIK4y0HE5IilVp4FdDbiEdg1yOiBJgBZI+TIVRiqQiAM3Wn5Zyt9RVeKtdI34I0iUIMQb0fqpbHQxS4WCIc5pawt5Ea1Mg1mLAKxvpCjHmVhvqRX+dL0BaKbO/Jr5ZVv3+EFEroHiZ2YrbzWbChuIGzAD7Bt0Wjrutbi/GmJPRmtoQK4mmYLzO5y5jlRa/J1pvT9QxIpXwkJtbW4VgmGFtnWyZBXjLyIQH3xmzgQRdESKwuhbaW9Xi+aGSZFwNtq0Wb1MNVlkf3ErXWLaZq/joTmJzAwwRzlynsRyBBQvHpuzMtQ0K2wbNNg0x3zTlCJ1JjNOweFKWI1Xw6PLxbLIc8aczDOATjm8/owBnncCD2ddhYtQ4Qzs7G2or1NaqjvraVh8e6NgS5uMxr7XZEAK1UZ8hGLVhjVX4TrGBMnsXZAa321AslxwrZrZlSvMe9P8T1bFmdOEUQH1i9oxO8YLMM+HwsB9IsKu0EYoVVBoOfQkMuzxoY0WLNmEn7DYsROEmmjTbykgCra5wH5m3dndmDKVPjBTJwsKLU9p4hHDqHTAefTnCkU6bFfZY5iR/bjh7fp1Zsnzt3vy8zLUU/ufmXvafeuUNbOXUvwpmDrgq82uOfnYyeszJfotbMRsd3/xj4lk/1XZs1rrvyxsF63W93oPSppl3k8L56U92Pbvyr/ahh5oriw69k3/jz6Gbxz+7dOjsR1cu/i462fBG6vDRS3tvPz33jfWoa+qf+08QXyz1v/PY
leO9RdOPNO5665qiTac6qk1tiV822ff0NUVXrS8ZPXGJLDxTvRo9ffXL3499tz1SvO6srviFwjvv3dwg31m+4aX1l6v2zXz4neLrr+XrBt1rdFF08pWaZa/p6BrH62+ps/n4kgONeiqwu1+8VeS5TDYHdrOpz2eIO01nnzyXutY1UZL8KrVh7TMvX1M/7Lq+vGAf/dV+T2Kt7/Ceby/pnflF28D7TxerBT//gaN4k3f24ye2x4pWtK3cc65J/ObVx2YCvctc6y72/RotvNDT/Qj/ZtDzPN5s7ttz69HBxpLLX9zWpg7sp94r6q/hPj40NbD51LrJreH9rZ//O/GfU7O7J/M+ONNZsWZr7Kf2m492n6PPvXRaXLpMdGx8rs5e8uLtp6wn+vv+duWJW3+5YSwcHD391I4Vq1612rc9jqxWvz4/jV4f6zK8P6etzftDffk0wWGX/3H0QvEHO/775NxPZs5+subhvX/fR0wXbvmIf/yPPdaBLsuOWzPkmwU/6otMXPj66qb+8Uh8lfGi7eRexPbwM7o050vzfqbTdlU9lJf3PxKZCgE=

View File

@@ -1 +1 @@
eNptVH1QFGUYhxzDRi3I0rLUnQOyyXuP3dvz4BgtTzBkEA7hciATfG/3vbuVvX2X/bjghBoVZgJMW5qxKSYy77ijG0IuFC0/pkadMZWZJv8orBynj2HUP0qzHM2il09ldP/afZ/n+f2e5/d73t0eCyJFFbCU3CNIGlIgp5EP1dgeU1CtjlStKRpAmh/zkVJXuTusK8JQpl/TZDU3KwvKggVKml/BssBZOBzICjJZAaSq0IfUiAfz9UNtW00BWFet4RokqaZchrbazKbJFFPuxq0mBYvIlGvSVaSYzCYOkyYkjRy4kShSAURBagsppqAH6xrlQVBRTY2bCAbmkUjSOBHqPAIs8EOhRgdWQkCzdDaB0lBAJvNoukLwaQvdGPMjyJNhLyalRfxY1YzEfQPshxyHZA0gicO8IPmMT30hQTZTPPKKUENx0p6ExhQy4jUIyQCKQhD11wFVg4IkkrmAJgQQadX4pMTlri4o3LCmJDoOavRBWRYFDo6WZ21RsdQzMS3Q6mV0fzg+qgkgQkmaccg52WZWaT2xQ6Joi81hofvupRYh6Tgqj8WP3BuQIVdDcMCE1UZ0vLj33hysGl3FkHOVT4OECuc3uqASsNumTano0uigRiyv9H66ieBdOtbCWC05iWnAar3EGV1eKKooMeXBVEmcGMkC2g5o5tA0aKQp9YDDhMH4mO6dFFBEkk/zG2GGdXQrSJXJBqMdUVKm6er2CPESnTsdm1i7fa6iyU3YGcknrhrH3H7dTNF2qhgqFCFeTjH2XJbNteVQBcXunrwJEvcDXUq4FSipXuLUmsmliXF+XapBfDzvgesSn7hZQOCNo+S9mmaKvF5vbQUKWiX7entdTijEhoLlzs/v6oIVH5SE0BjtaN1QBuuws8t51gOQx8sDmyMnGzgcVgZ4rNYc3pbDZNt4ezgoQCPOWBjKh7FPRPs5L+Ag50dgXBojll9Z4iwuzOupAGXYgzUVuKHPiEhYQtFypBA3jDgnYp0n66+gaN7LoMxZaRxwMBxrYzia53mbjeOsYDVZm0mZpmSIjN6dsd/ANmKFQo5OJbctaZuVNPbM4A1nzY+r5jSPtMbPVh1t6y9Z+EfmO+cfLnseFT32kye6eIj9q0JZ8YE++NnIse8FqmvptuM3b2RoTpw4+eLAK2/82dXx7dlrhzF+aeX80PBx+5Lfd33E5H2Y8qr7ibDwCPtspnnuvPDuLRXMiSrzTHPi0ecqL8zf6Gt4b+Cfhjc7Ftxa6a2YvSzhufXvdfv6yzd/3lTUkrJQmPfVly5r+p32XXuNNG/hby2uO4ODm5uzv7g+6/y+1rLXryWX3Fl7sNBYtWjw0rn3q2K2/Nuwdgl/Ib/UdyO5c1WBa757755Oai5dr2+mbqd6n3p8x3B/U//w38sDKSlPdxfO7sx0preAupnh1NP5zRdTnvHSja2dPyxa8e5q1zeJdZc6Th1IpDYUFPVsDLLmK/7XZgxc6V56pOrYiau754TPlL7N76P3rBWXFaf/Up16OTPtu6bMWKp13VXfmdmhxe0bXlhUe/J69HSf/WA4vvjr7gW9/z35K27I4EZ87Tcah48frhohqo+MzEhKOed+i30oKel/M3dwnw==
eNptVGtsFFUU3lJAQSJCUqDEhnHFSGFvd2b20e6GP7gFUmH7XKQNYrk7c3d2urNzpzOzC1ssj6IQhaiTQEiLaEq3u7CpfaSEBgVETUOBggk2aZogMeGRlEfCG41ivS1tpYHJ/Ji5557vO+f7zr11yShSNRHLaS2irCMVcjr50Yy6pIqqI0jTP0mEkR7EfLy4qMzXFFHFgXeCuq5obqsVKmIOlPWgihWRy+Fw2BplrGGkaVBAWtyP+djA7s3mMNxUqeMQkjWzm6FZu8U8tsXsXrfZrGIJmd3miIZUs8XMYVKErJMFH5IkKowoSFWRZAr6cUSn/Aiqmrl2PcHAPJLINk6CER4BGwhCMRQBLCGgbXQugdJRWCH96BGV4NM5dG0yiCBPmr1imhUPYk03Ol5ooA1yHFJ0gGQO86IsGJ1CjahYKB4FJKgjC1Wj6XyKFCmjEZ2MVAghBUBJjKLOTUDToShLpDugi2FECjaOFBb5KlcWfLC8MPEM2miHiiKJHBxOt1ZpWG4Z7RnoMQW9GE4NKwOIXLJudC0bK9ZaHCOmyBSdY3fl0O3PU0uQ1J1QRuI/PB9QIBciOGDUcCPxLLn1+T1YM5q9kCsqmwAJVS5oNEM17LRP6FKNyMONGklP8Yt0o8FxuqQth2HI2zEBWYvJnNEcgJKGOsatGM9JET9tgHYCmumagI10NQY4TCiMRrp1TEEJyYIeNJoYm+uwijSFDDLakSBpekSrixNLUW9PcnT6DhWtGhuIPfF8Yq5xcoUqWiiapbwwRhFiB8XY3TTtttuolV5fi2eUxPdSmzp8KpS1ALFq+djsJLlgRA4hPuV56bykRg8YEHnjBPmupBnGUygq2O56H5VFndjFK+9VBR3s8f91waoAZbFmhHY4b2ChzeW0OXibHyB/gAd2V14ucLlYBvhZNo+35zG5dt7ZFBWhkSLaUwLGgoTauADgIBdE4Jk0RjK/onCZt8DTUg5KsR/rGvBBwYjLWEaJMqQSN4wUJ+EIT06BihKeFaB0WYVx1GVz5dF+xDhcTmRnEA2Wry1tH5NpXIb48BEauQ22EytUstSd9uWC3a+aRp50vmT97r7iGU+XDJzoXSUM3l60pLbiYfesjPzyyVnZ9Vd6jx7z3vS2/s72bt0f3VhyL/urB4bWk3lg25TH0cUPLpzZdfZM25Mng4+fDl6/eOyV2r9jD+bbHop/1HcHdH8vXZVr7Wlh3q3ube8/nTWnPLJ3UlvXrYYG53f4/oG5fzl/7J6xJEOotpw6+G3WwQWBO+z0y9XX9qX9tD3755nhzC/mw0u+28haeXNa3blzcy9v+LoisSa0sX1/7ocl7NS2WF3DlntN542BtsCjwjP7tvdf23j39aXON09nLL6TecnEXXW8Uf7bzCxhg7ffSy+sWNCTt3Rn445E8mr/3ezP0gq2lXDz3A3bzOkXdp6cvscOHhtnM2/2hfr4rQ0Djo837PqmMSOr0XXRsTbt+8T1ivJf82c/anj78N2Mj04JwVZPZv3qKa/x05qTi/6cPG9otSWDe+vILXb1+a7PbU52oXX9ujm/+DsvtB9K3VAL+g/vHZp9X1jKfjrEeP99cmPNluP/TDWZhobSTcemHq9ZOclk+g8rUHyZ

View File

@@ -1 +1 @@
eNqdVXlsVNUaLzQIUZOnL/G5gV5G8OVp78zdZm0mtp12Smk7085MKYVoOXPumc5l7ta7TGeK+CLU5UUN3qYxkryEpe2M1AI2rYKFGtG4gGiCC6EaMS4Ed1/Ce7iDZ6ZTaQN/vfvHzD3323+/7/vOlnwaabqgyAtGBdlAGoAGPujWlryGuk2kG305CRlJhR9qCUdjg6YmTN+VNAxV9zkcQBXsiopkINihIjnStAMmgeHA76qIim6G4gqfnc5ssklI10EX0m2+9ZtsUMGRZMPms6kCTBGA0IDMKxIhm1IcabYKm6aICEtNHZ8231thkxQeifhDl2qQrN1JGqYWV7CebmgISDZfAog6qrAZSFJxBViKrSk7tTmfRIDH5W0bSiq6Ye2bn/B+ACHCHpEMFV6Qu6y9Xb2CWkHwKCECA43gNGVUhMMaSSGkkkAU0ig3Y2U9B1RVFCAoyB0bdUUeLZVFGlkVXS4eKVRDYgxkw5oI4ySqGxwtWYysTNB2F22nn8uQugEEWcRQkSLA+eTUovzQXIEKYAo7IUusWbkZ431zdRTdGm4GMByd5xJoMGkNA01yceNzv2umbAgSsvKBlsvDlYSXwrF2mrF7xuY51rMytIaLJByYZ4wMLUtCBfuwdlH7ZvERkdxlJK1BmqGe0ZCu4j5BW3PYzDD1LUOYC3T8zXypYXaHG2dJPF1241At5sWaiiXNCoJyEc1AIxiKcRK0y8eyPs5J1DfHRgOlMLEr0jAWw82mJzAVdbO052HSlFOIHwlckfCpAuG4mkL6uDFJlFEVHZGlrKzRtWRkZlLIhtrxme4iFa0LyEJvMaw1VWS+pzfTw0OT55PpHony9nKsEEcmTEyUTFRNKYTBCZGSjsHxuPeVJLPYj+BaKZKmSIqezJC4z5EoSALGs/hbGlfdGnJSFHXwcgVDSSE82HmOKj4vzdXQkIRJK8S+5Ibzer2Hr6w064rFKl63e3K+lo7mZkMzkn7wcoWSi92UPpqZ1SYF3ppegQ+dLpZ1cpDj4jDOMF4WxIsviEpAlmHdHs+LePIFiL0UyFQVzSB1BPFuMrLWdIUEMoU587O0k3XhSisJQYaiyaOoGa9VCjXolYSqIVEB/H6YICGASUTO9J+Vr+0IVTc3BEaiOMmAoqQE1P/hgps6O2GiMy75s/VaeyYRDMqNQTWaFnpqBb7V093U4wm2tm2sFzrjmmK2dToTTrWDpN0czeBkGSdJ2yk7nlIykIRM1mzqcbbQNQ0ybBfYVLfQ3hRKhRqYWF0rC72U2KyC1WGvN9YekutWoTrV69WgnA5G9HgrQJH6SMToMatbYyiaashsVJyt7cy6auhpCTe2grRKi71rA5ybZwolAiPpd1QSuGEFDLq/NDYkHhuyMDRuHzU7NJUEXwTGb5+/IiuJVXjXh2UxW0lECwgj/A8kFBUM5A8pMpoewMCYaYH3h1J07cZYPO10rWpLsiE+HKrrCNXx3XxNZF04jbhwd2e0c218DdMA5yDjoV0kVQLHRXGeYmteSv3/zOqFteTcLUCG1ZlLLS8ruiwkErko0vBUWSNQVEweb3sN5QJBMlLdYU14achydJx1JRKUh6dpsgbv0Vlvf+6MocJVkQcibrw0tMaTrN/m4zjWVklIwO9x4RkrXn0P5gqNKne9tuDB2x9bUlZ8yh/vp5VXqOse+vG3q9/Sq1YM/BWlFr+/q+m7qrY2a8zxrz1w/flFNyy/sOlMX//OyJ7Gh3/9furwoXO0rW91tV6TfftJ95rm28a/PSMcuG/42QPfgnOTnxw9++rmQ/cN3Lqt6vOd1EfL1N9am18URhcOBH98unLDkiPOjtMrv2Leerdu8S13VC3qQN3MDvudpyb3bj/eb3SvORnU/sP9/Yfrl/ctPaS8+cyi+0/8+/hnO1aXnx6/OvmAYOvzDdY8xPwwXJ+zXi//InjHl/b05NLyxeibjiW59cPv/O+9M+FjsZPb995zrnFqYN0vk+rLh08c+WBwbGIw
/+iT2+Td5+9a8fw73N+u4Xbs3Ar6/5s2Tn1a9h4b2Nr0BHfup4c7lpdtjz0wcd2xm/dXX3sMbBq94d3Hq7hbfu55av3p309F2i+O3V52dsP1tWDZkqP7zy77cGnf4vPtH6HX7t3w9ZHvVj6ycOVB9aoFK1Nt/+z9+i+37frH6j13qyePfn/h4+3hE26M9cWL5WXjb4CnLiwsK/sDSUeFGQ==
eNqdVX9sG9Udb9Y/+KFpLVqhdGnh8CYQ4LPvfGfXTuZuqWMnIU3sxm5I6Lbs+d07+/Dde5e7s2O77dRfYxo/Vi5AEQJEaVwbZVHaKqFtSjvoaEtXEKjrYEoHnUQRVdG6btU2aWMbe3YcSNT+tSf77Hff7/v++Hy+3+/bWskhw1QIbhhTsIUMAC26Me2tFQMNZpFpbS9ryEoTqRSLxhMjWUOZvi9tWbrZ5HYDXXERHWGguCDR3DneDdPActP/uopqZkpJIhWm8xscGjJNkEKmo2n9Bgck1BO2HE0OXYEZBjAGwBLRGJzVkshwOB0GURGVZk262/RDp0MjElLpi5RusYLLy1pZI0monmkZCGiOJhmoJnI6LKTpNAMqpac5F7epkkZAountKKWJadnj8wPeCyBE1CLCkEgKTtkTqaKiOxkJySqwkJMpmpY0SoPFqAaKPZpBSGeBquRQeeasvQ/ouqpAUJW7HzYJHqsnx1oFHV0rHq3mxFIksGVPRmkoLR3uWIHiixnetZLGvC/PmhZQsEoBY1VAoyrrNflrcwU6gBlqhK1zZ5dnDo/P1SGmvacLwGh8nklgwLS9BxiaT5yY+97IYkvRkF0Jxa51Vxd+6a4iuHiefvbPs2wWMLT31Lg4OO80sowCCwk1Yr/Mjc8CpCKcstL2CO/hXjGQqdNyQdvK9JiVNbeWKCXonVOVet3sjnbOcnl+wdJSK6XHPhoxFCfDeZguUGA8nMfL8GITxzWJHNPWlRgL1d0krsvD/gStOVOmXIRn2a/AdBZnkDQaui7jR6uM02yq4dP6ZFFeJyZi61HZY31sz0zDsB2tEzNFxhIjBbBSrLm1j9aoHyrmhySYlaR0bkjjAkVRUJIoC+XJ+hHdIFU3NCBWM+0RwRMYr0tmwR+luXIsz7EcfzjP0nJHqqIpFM/as961pl3ychx36FoFi2QQ7e+KyNXWr+ZqGEijpFV9f2VGDAQCR66vNGtKCFSXcHi+lonmRsN7NPPQtQp1E7s5cyw/q80qkj39HboZEKCfC/hlmQd+UaD4r/RA6PFKyaQE/EAMyFN0ACiQWqmSqRPDYk0E6YiyCva0UwP5aqMFBd4r+GimzYyCoZqVUDybbCXVHMxmRjeQSoC0F8osBDCN2Jn6syut/d0tXR2h0TgNMkRIRkHD5xpuHxiA8kBSC2YiRkjGhi+UF3J94cFEMdMv9cdbM0rv2qEcHiqm+nytHQmfikmY5VeKPj4giiIlzcW5aN+w3RHv2mRfz0MpTRyMASHDqXpbJjrUDVcPFAbWqbyFRW8sbPSlMl2rvR4rHCh0qj2tDyTkwaQ3/VC+fZ1YDHnbe7tR0iKJtUT2c5FCW3QQiN1E74dqnESVWM+a1JpYeIimCKx00N3M0IJVKOjBetuwtG3YmaYRZpummZFqwARd8ydlM9NOR34Uq4VmJl5FGNFfoKG4YqFgN8Fo+mkKTDanSEFF1lEinkhKegGvMyIr88aDlvSwh4TNXk9IewD29sWktMX3dLb75yDjC4gsVwfHx4n+Wml+Ffr/GdWBPnbuFGCj+szdVsHExIosl+PIoF1lj0KVZCU69A1UDkXYnpZ+ezIgBPxcUhICEPm8ScHLdrS07pu19uXMKFVvjApQaeHloD2RFoKOJlEUHM2MBoJ+H+2x2g24pVwtVJw60bDjzsduXFBbC+n3iy8eH+7qXMgvfuTK5/f/+oX1lw78fSR4c+93b3p0aXin+BwemV7TLu9x/OfK8GTu4E82P7v86juff/ribzYuWb35DLd01wfD6y5c/svV8z/6qKew9sBjZ3bjtmd3PD/95qcgt2L8Cv7xk0smN7acGBj9ZviJg6ce+fAbv9w8zE/tvHh2wvC92N22TCydubv/csEfCKwwGneKy5+5cPao0RUB26fy921f/PG3X7p35OyhLVOnVv2zfOx29AT3vSU3NWxRDzcsG/m5c8Wbu7Y9unzk7f1vXdy0WH8/8tzWqQ/O3fHku7e8/bu+
X/ztyD1/CN3Y+eqiycf/WNrnefd49tZYxLh07NQnd53V7jpu4gPB04tWxS5MJYsn/v2D/j//aVcjueGtDSeP//Yf4lPH3PIzkZ0nT54f+37Hq5GP7n3t2Onexs+27P791fecrxeunPON/3fHphueXv3eK/ffGWvMfkgevI27fH4A/eyWi28s5o8s++mtjV+/1CiOS5mh9Q1/DXp8758+9K1ti+7eG/nXyeT6jV+ror5wwapuEl5GKfgfnRWRgQ==

View File

@@ -1 +1 @@
eNptVGtMFFcUxtCSKlqosRHtD4bVPmKZ3ZndZdndii0siiiPhd2UajV4d+ayM7A7M8zcQYFaK7aagFoHbNWaGpF1wQ0iBEWRWpVq1dZXGm3VWLRNg9EIplKD9VF7QVCJzq+Ze875vnO+79ypqC+BssKLwqhGXkBQBgzCH4pWUS/DYhUq6POgHyJOZAPObJe7TpX5i29yCEmK3WAAEq8HAuJkUeIZPSP6DSW0wQ8VBXihEvCIbOnFL8t1frAkH4lFUFB0dpoymuN1wyk6+8flOln0QZ1dpypQ1sXrGBE3ISB8kCfzCBKAUDhRRoQkQj8BPKKKCA8EsqJbuhDjiCz04VTGB1QWkiaSA3yRShoxCWWiEjEcgn4Jz4RUGXNQemppPQcBiwfuCnstwIkK0lqeG2IXYBgoIRIKjMjyglfb6S3jpXiChQU+gGAItyjAQZW0UBGEEgl8fAlsXUIqCPCCD89GIt4Pcavajqxsd35a+oczs4KPQbVmIEk+ngED5YZCRRQahyYmUakEnw+HBnQhsVgC0vYmD7dpcJZiSwSC0ptteqr5WWofwB0HpcF4x7MBCTBFGIccslsLPi5uejZHVLTtmYDJdo2ABDLDaduB7LeYR0wpq8LAoFq9w/k83VDwKZ1JTxv11pYRwEqpwGjbC4BPgS1PPHhSEsJGmkjKQlL03hHQEMmlJCNiBq2WahoW0AcFL+K0OtpsapChIuEthiuCuAypSkUAewlPHq8fWr1t2XOHN2F1IBW7qh1wc2o8QVmITCATmDiBoC12k8meYCLSMt2NjiES9wtdanHLQFAKsFMzh5emnuFUoQiyIccL1yU0dLtIntW+w+/5FO12leQIljkSci2WzB7RmZPqRIUZ7U91EWUvEPiyQdqBuotTTTaLKYE1eUjoKWBJs82aSNpsRpr0GI1W1mylE82spa6EB1qI1tOEVxS9PrjLMYt0AIaDpGtQGq0+dV5Wcma6o/EjMlf0iEgh3cCrBQRRgEEXlLEbWojxiSqL11+GQVyemzxP222jGZOZZm3QVkCbPFYjmYLXZlimJzIEBu7O4K9gObZCxkdHR/XGVr0SNviEs9XZ4jkq+ujBn45P1Hedb+uYsP4N4u/sadOkDyq2flWTMb71RMbGSa3jzn+6+K/dkSHrvC39fZv7p15pv7Cvc82lnsSJefce9N03xAp38vqLux72HuxtkqZbb/m2UK6GyXbUNTqqNNpdN59QutZcvnp1TF5UXMK+S033oqN4tb39sr9ts/d0xzcn1p18/Urv6bUr9/7bI0/oJcekuF/ZmlR1+Ig1pcE6yR2TWlU9Z/+r7zvJX/unRBV2nkveGpEx/l71jOKYKeqipJSfKwpv/BGzqmD5/OQFG9APaest46gv6sq4ms/kosjc0bEnNvxTeYx21hx4Se28NN3ZPPVuau3ps/a3y06N3z92T7yzdvaRmzF7zjgSDlMhw/VZF5rbVnE5i7pTF76VU7jAEhvdek29Wbb5zLhw7vsVtPtC39m71V0BInNs4FBPrrnfEee+s+74Eu5r3y0uJy2Ljasua+iG22I+STf0RRxq+bPgv7zMa8U3NnWr6Ya1+yoDrvUb61aWvzP27NV1jm+THngqQ7MTujOWnfpx9c7fr3csLpze8/IKY3tnXEL53JTeZOeDTbeL8pyTI279cvu3yvcm8nPaEmcQO97Vt3Wdr+0tzo+MqMq4f+xQd2Pkyu5la2qS2srL7A9HhYU9ehQepj6sCEaEh4X9D/5zr+U=
eNptVAlMFFcYRvFoPdKAR0FLXTdeqQw7e3DsYqJ0xZNjgVVYlNC3M4+dgdmZYeYNZQFDihRi1ej00Ki1LbCwulnlsKlYUYqWqtGWmFhTWqrSGqKobWJabKvWPhCoRCczycz73/9///d9/5sKXxGUZFbgxwVYHkEJUAh/yGqFT4KFCpRRZYMbIkagvbbUDHudIrHdCxmERNmi0wGRjQI8YiRBZKkoSnDrivQ6N5Rl4IKy1ynQnu5dpVo3KM5FQgHkZa1FTxpMkdqRLVrLplKtJHBQa9EqMpS0kVpKwE3wCC9kSiyCGqCRGUFCGlGAbg1wCgrSOCGQZO2WHFxHoCGHt1IcUGhIGAkGsAUKYcAgpJGMxeUQdIuYE1IkjEFGkVt8DAQ0JnwtKMTLCDJSm58j0QgoCoqIgDwl0CzvUo+5SlgxUkPDPA4gGKkpkRHtx43ycEgr1V8AoUgAji2Cx4oJGQGW5zBDArFuiBtWD6ek2nNXr92YmNLwtLTaBESRYykwmK7LlwU+MMybQB4RPh/2D6pDYMl4pB5PGGlWZ/NgY3gNGWUyR5FNz0JzAPfdIA7FTz4bEAFVgOsQw6arDU+Tjz67R5DV+mRApWaMKQkkilHrgeSOMY1hKSn8IFHVZ7U9DzccHIXzGaP0enw3j6kse3hKrc8DnAybR60YzfFjP40EGUOQ+uNjakMkeQhKwBBqDXl0REEO8i7EqHV6k/GQBGURDzPc2oDTkCJXeLGl8NJ53/AE1qauHxmIHd6V2Fz11CqJjdSQBk0y8GgwcLRGb7KQpMUUq1mdbA9Yh0HsL7Sp2S4BXs7DViWOzI6PYhS+ANJ+6wvnxT98yAiWVtvwey6p11tTWFFITkorsNkTo0UWIEf2m4YT/+siSC7AsyVDsIN53QuM5hhjNG10EtCZRxMmc1wsYTYb9ITTYIijTXH6WBMdU1fEAtWPtde4BMHFwUYqj6AAxUDiqTSqb6UjJSF5rTWQRaQLTgHJhB24VC8v8LAhA0rYDdVPcYJC41MgwQbrKiI9waF+bjaa40hnHoijooE+zkATiZnpTSMyjcrgHTxCQ3+Ed7AVEl7qHPfXvO0vBQ1dwfh58oROz0ntWTHt0dKrp/+ACZOr7dHCxOwloaGPS9hGi3nbB9zau1cGBuLm/xv/48zdtk+v3tkX0ZVVOi99rp90tOdEHGm8Vn6qzRLRPy828/eo/q7rXe051w9Wh5UtzwBq+MBMpy18ffC375tbqsMWsey0DcyqS0RE4JVJScf/vlGslLVkN16rCpsDFg/YHreVb0RfLhX21k3Z651wIahjUlq1gxkfWBh/pTL/+6rYsp3tF5ZFP8kOubShdfLN2ZuNndPLQz4OfXAjOGHxur7ZYO83RfOVdai1d263WuSpDH97ds3FKYsO6j7MhucKz7oMyefdl9Nb9s3Z9dUtx+2dCRlrqjsvr067O/XwjHf322fM6qSt7P6lvXTh2dg3UnYud6yfUnqh59VFrPXizU21tZ6D/3Q86Ho5JNDTMTF/Ds39WqUz5B/YPd27qWJPmTcpL8Rwcse9X+7NaP+s/oet6f3hkV3rMhxXqw/vKcpddmrBHcXRtyLDceX29m2P7Rez+j5a0/OwKeu3WQe0qOqYr73167OFlT9r3lqZmhbP3SKXPXqvY3PfkdD94qHxhSf/DLz2SVDd8n2tNbbC18+F3b/f63+4u/i708qJhV/YIo5QTFM489OSmsW99wrO9IdnnhkoT2pp3xPvKS6eMGhvcBDZVjt9Kvb6P84drZ8=

View File

@@ -1 +1 @@
eNqdVXtQE3cexwd3Fp8z0mp91DTYuaGyYTebhBBEB0JA5CmJAvYc3Oz+kqzJPtxHgOATtVpRe+uJelrbUZA4FLCeUK1o1XpqK56jrcphxbNnLa3a4mOQ01O8X2I4YfSv25k8dn/fx+f7+Xy/3y33e4Eg0hw7oI5mJSAQpARvRKXcL4CFMhCllTUMkFwcVZ2bY7VVyQLd9q5LknjRFBtL8LSG4wFL0BqSY2K9WCzpIqRY+J/3gGCYajtHlbbxZWoGiCLhBKLa9F6ZmuRgJlZSm9T5Ai0BFaESXZwgqXgOMCrCzsmSyg4IQVTHqAXOA6CdLAJBvXhejJrhKOCBD5y8hOAaPSLJgp2DdqIkAIJRmxyERwSL/S5AULCsD6tdnCgpDf2B7iVIEkB/wJIcRbNOpd7po/kYFQUcHkICtRAeC4I0KLVuAHiE8NBeUPPcS/mM4HkPTRKB89gFIsfWhcpBpFIevHxcG8COwNpZSWnMgSCS0mNzSyGjrArTGDAN9lkJIkoEzXogRYiHgHhq+OB5c98DniDdMAgSUkupee7c0NeGE5XdWQSZY+0XkhBIl7KbEBiDbn/f54LMSjQDFL859+V0ocMX6XANptUY9/ULLJaypLI7SPmBfs5AEkoRkoMxlJ1oQy8/HsA6JZdShWHaPQIQedgfYEUNdJNksbwaagHOfu0PNcqunIxeEa+FjalOgbooR2wuOUaFGlRZhKDSolq9CjOYcNyk16vSsmx15lAa2ytl2GcTCFZ0QCksvbL7SZfMugFVa36l4EcCgsNqAvBhGyKghOdEgIRQKXUFSN7zCUHSU/Y/7y6EE5wES/uCaZUjQeWLfSXFFClTlMtbzKDxPh1O24FMOhpDLrzABdJAQAgjKtWYNl7bEDrqJb8WFosiGIqg2KESRIBceGiGhoQGv0NzCn31KIoefNlA4twATrRfhwavL/taCICBqgWSvwiji4+PP/xqo95QODSJj9Mf6m8lgr5oMC0jHnzZIBRiFyrWlfRaIzSltE2GN0V2vcGoi3Po4wzxFEUSqENH2YFdq9WSlFZnB9QXcNBpEkYJqMnDpYGIgIRLSSpV2mIYoiQwaIk4pscNsNIEFc2SHpkCVtmewgVqEBNUvAA8HEHtNaciZoJ0AcQabEDFn1KYnZSVbq61QpBmjnPTYOOVAWOLikhHkZ1JTPXN0ZuLc7LF/NI4S7bsJswsmeop8aVnWniLU2P1YLIXnSkXoq4sBIvTYdo4oxHXI5gG1cAxRdLlTAnLTVvAMVq7MX1uqY0pQnNJa1pWBp6Ke6y0xpuRnikasyyOvDxZnwL4gplsbo6DNWcnZZC2DLMnUwDmghRNYUlSqV7rnTXbrc9cSBZl0njSAiHdLhDOTCLeNzfXkFIISyQkV2Jsggp2LA1JTwzNDQLnBglMTZwJ7Z2aBBUVJCZR039HJqhmwCWfw3pKE1TWAMMA/hIMsMJ9nZjNsaBtEyRG9tJUYkmqRqPnM7xakSWtC0k+2UX7sBm+uZTb4mSsKTZu4RxristamGec1YcZFOJBQ+QYUJ0x2JovoP+fqD4vQPquASSHf/4287OcyNIOR40VCHCqlFrSw8kUXPcCqIGNkJdUqDTGYySuw4ABcziMuN1IIclwkfZG+9/SqA68K/yEBzael1T2u/BEtUmnw9UJKoZINBrgjAXfectrAo3KOk8OvDOpYkhY8BoEP8+erbN9v3Z82qjFrfmRD6ZOefivk7cXt4xb9Jph8N7jr80aVf/buKzT7bPNs/+efcrx9lHlny1bVrLfbNxs/NOYebeExx9Z5z2VB5z/2/mryTfKnNNG/nLnt59+6vF3c3fbbz79w69LlnTd4HvugGc/nGiPfXhf/m7ziGtPNlz7UkYSO97eaamZd2Wo6US5796TtpM3H2zf/umn276egFZ+/ut8Z37LbXw8OOHsGLOl9T/nPq5KY9gfWsPDjrU+LjTmHZu+7T0jd9SirT2EGA6sVi9LNVbOuGGr2hbl
vkJ+i7+Zt/vp5mO+lctWdEVcvbR04tDapvmad/jI+VM2JDfvmqQ6zXC4jahSDWmua4qOj2h5ffHay+a4CGP4zXNnDgyaKKYuX69acnxgyoCLg2l3YkHnsMNR6Fvvr5xfvqfl9fqfR+ywzdn+45OOmSBq6qyn+q4LEZfLlqxpDN9hd9LvH8yPxCnHzFV3BcuksnUDjVVf/Xi2u33rlbyx4drp81O14VlTm6b4Ms4nfJOJZ1pGzK5f+uzRluQtJbUXpz005nd1NZ6N+ktFdWXhB5Gyxt28q3jf/RsXHnZkYavemGk+FbbzMlLxhaljuH9X9/Ws5aMvblVvPD0x7vejMyJ2n+veIHdmZEab7hf80o5P1o1tkkb+fLtt7erj0Wv+XN69FnCFkWMHN308aHTymOhlK5KursubQM24NyCja3TxmXeio+p6bEzzowjH2fGbFpz+XbtL3LnIXtNJRrR8tz36H+tvnXGc3/rttJ6Sg7c+OLDibmr3I/PhisimoqEm8G7r3PWraxyTxoQdvVQ57hNPOj6k4/jlU58oEddPXhrREK+5R98/Oc3t0eyY3BD7fef1+kHbKtJzNv/13rnusRvqTw/3kmsWbSz4aE9P3I5OonNT2pWeSf8mvyqbMOyEb9SwC9MrF8WMjvnw6tLKiX9sWDneUrrmrdY3Zp2ZPNDLZjZ3+C8UDn9wqKLR8ubky3F7sh8vvDcy2NKDwoatcuwtDw8L+y+SEjoQ
eNqdVXtQFPcdR6nK+GgTGmNMxrjeqGla9ti9Fxx4jsjBcSBwcCgPy+De7u/uFvbFPu6BtaWWpDo6gcXamGhoFLxLkaAoxicWO22qg4k6liZUE51UjdrWJghOEVv6u+NIYPSv7tzt3e73/fl+P9/f5rAPiBLNc9M6aE4GIkHK8EFSN4dFUKsASW4IsUD28lSbo9BZ0qqI9MAPvbIsSGnJyYRAa3kBcAStJXk22Ycnk15CTob/BQZE3bS5eCo4IGzUsECSCA+QNGnrN2pIHkbiZE2aplSkZYAQiOTlRRkReMAihItXZMQFCFHSJGlEngFQT5GAqNlUmaRheQow8IVHkFG91ojKiujioZ4ki4BgNWlugpHAprAXEBQsq7HNy0uy2jk10YMESQJoDziSp2jOox7x1NFCEkIBN0PIIAmpk2SqHSbJgSgYansNAAJKMLQPhMZt1UOEIDA0SUTkydUSz3XEikLloACeFLdHKkAhApysdhfCVDLsyY4gxJVDcG0KpsUOBVBJJmiOgUChDAGzCglR+anJAoEga6ATNNYzNTRu3DlZh5fU/fkEWeic4pIQSa+6nxBZk+HI5Peiwsk0C9RwpuPJcDHhN+HCei2Ow0/XFM9SkCPV/VHkj02xBrIYREkeOlH3Yp0TADGA88hetRXHde+JQBLgmIBfhKCZrEib22BLwIVz4di87CvMm+jl53EL2qywPWpPtkgnIZgOySeCiA7TGRHckIZhaQYzYssv6ciMhSl5ah+6SkSCk9ywF1kT3Q+TXoWrAVR75lM73hPpOKwmkj6cRhQEBF4CaCwrtaMMLR4nCmq3HhkfMpQXPQRH10XDqj3R1vvrAn6KVCjK6/OzmLnOoKddQCHd3TETQeQjYWBCKCupbbhOb+qMiSbQb4fFYiiOoRh+MoCKEAuGZmkIaPQeoyu0NWIYdvxJBZmvAZDYYQMWvc5M1hABC7sWCf6tG4PZbD79dKUJV3pz5MJOTtWSwORscB0rHX9SIeZiHyZ1BCa0UZpSB5bChyodmWIGZr1LhxvcODDpUnGMMGGkzmCkXHozaT4B+U6T0EukmwLcHagESLib5KA6kMQSgQjTLHrcqDfBStMRmiMZhQJOxWXlIzVI6YggAoYnqIOkGyUJ0gvQ8QFUw9bygox8e2a7EyaZyfM1NGj+67QXqqpId5WLtZRkr8kDFaxX79H71pWn5q2l9HbFbKWz8+SiMmeVze8XcEd2AVuk+FE8xWDCzQaDwYziWkwLiYN6cFN1eUaFYqupwIsqjNlkiXsNF8h0SqVGI04auECgJMNfKOZT69a61q3NDNaS2bwpxWjzOa2BWsXt0dlXczLmYc3rcrGgK7cK+Cqsot3kyK1Oyec9WGktZ0zN8PnLgzk1sERC9lqS0xE4sTQE3RLjDQp5g46zRj/BmnSEigJj0U5dlelIDtz1hRwTTEecEYQB/CVY4IRr21LAc2DgVxAYxUdTlrpSL18ddAZsPm9WKgvoCmArzCgx2e0BqxLM5QMuG+e0GgwZ2WWeScjgJjjOMXBMmCE1Oprfpv5/ZvVBGTp5DaCFwvihFuZ4iaPd7pATiJBVajvJ8AoFt74IQpnZaHFGudpt1ptTMRLTuUmQSpnMOtSeYT004e2bpdEWOTLCBAMHz0eqR7x6iybNYNBr0hGWsKSaIMeiR9/PQ5FB5Tx/nN62eFtCXPSKh9+xse0lZ7mXbHN7/v6jLSeGDj/fkd1VdN2FvFrRvar+BXYzhziOXHtnq8c8p7kmK+Hxv85ufy6PTkQWLPr3bYuluXGUnnbD9SXXO1J/+0+vVl47Y08eW3j/Ue1Pb7x54MDosdHrIwfzxu7fe4Mbk/OJm/+cNfjwtaNDwT2J5Rd+Ut/zzIvz+/uHH1UPX63rOdPZuys99zXTGt/XzMjj37sHbm/r6z94sfFCU+srSspX16fH3XD+B335RGvj/Zna3ck71ZJaxP9gKKHXXrD/C4exfcnumoJ3LzJ/6H+4eO6m
54udd7aEZn7QsA9ZaFsVv3p/6Nbvvo7/bdb3ju4tz3nJseNCPOK/NOfy4Vs/a51xxzPtow1LHo3OftSQji6b9wPnlcHKdy3n9iQkPfMP170NpW81NOfMn753l3Y9N7Lz5rOPT/e73LnHZ/XVf7ZnDbHgwLGr1ce3fB9h9P5Dy5Z3O4aHVxZfPjnnuRkzz59q6/8E0e+ePuK4ltvyy/izDTm7f11Vf6kshDaNbrZOoxZvnfdhz4vnM47fSlmYuPDC7JVN1vfTcsPEWeRc5ZXBmvPZn79RvaQRNO1oal5xqevtplOg7+DhVTfv5SRcKVsj7BzNOtt6iWjGPlzy6ZnOeQ1f9M4eWnX4+ozEiha+8r3fzM+7Fyi4O3fVbaaseP2Dh//dsmLlsqRbRPPp/rGiysXm6k8T9rq1dHHfjgN3/jb9xqb8wcdrUz9bGtzDkCMt966/37Lo5rW7vo97+4afPcrs+ri+X3mwqCff9jp+dwBNbVzp8O9r+e6sL7W39Rtq/zJ8zHcmvOnOisGv9IOv9G58e/vJj5b+OWtM6Qt7Vt+6OtjqKXR2vl5pdwx1d+UUmy4XNUppP+65kvhJ7RXbyznLtw4P9S1f0LjtYnQU4+P6Wma+M/KduLj/AQcpInA=

View File

@@ -1 +0,0 @@
eNrVVk1vG0UY5uPGkV8wWiEhIa+9ttfr2iiHKFRtoVGLaqqitlqNZ1/vDtmd2c7MxnEjHyg9Iy2/oCVRUkUtRQVxgUocOfAHwoHfwjtrO6VO2lThhGXL9vs1z/v1zN7b3wSluRRvP+bCgKLM4B/93b19BXcK0Ob+XgYmkdHOhfODnULxww8SY3LdbzRozus64yapp1TELKFc1JnMGlyM5O5QRpPf9xOgEYa/f/CFBuWuxiBM+bO1rvzcfNLw6s160+8+XWUMcuOeF0xGXMTlk/guz2skglFKDezN1OWPNM9TzqjF2PhKS3GwJoWACnN5sAGQuzTlm/BIgc4xDfhmTxtqCn1vF+PCn3/sZ6A1jeH7K58twP391vu/2fBauxjMKJm6q2kqx+56lbcuH360+wliKJ8PkqJGvICsU0VaXqtDmkG/3e77XXJhffDriTGuKB5zUT74acAzTGtJ+mSNsgQWLuWzvBhidjWS0S0XQa4E3v5qatxrm6w8rCftFafv+23nY9SvtDq9lud5taTttnonKJ4vwTm/lUsN7sVZzpjTyTm/0O9ep2pS7s2g/mCtsHnuZRCxScqdoNv6ZSnAOoLGDqPO83auc1oeYGdJLGWcwtMbrrWuYHDsTfnQ27uqaJzR8pGQLrNleHbDxTLTSMbuAMcQ3EtReUg6TW/YG3UpbfqtJgPai+g51qI96nm9YdRrHZIT81hTECFeTlNd7hpVwONFBoNJDsfnaH8BbN7kJvmUCtLsdT3ief3qbZtcjfXXOFMKm/nXOw+2nfn2OH3Hq/fqncCpOVzgzAkGIY5urJ3+tjNM5TDURmLGEIKgwxQip29h1ZZ1WGzAYNfaGCjCcmgwIWzRLE9Bh1mRGp5TZZaDnG6Bq4fLbSCkPMS9VpNlA6nikCmoShJGXM+VI6wganM6ybB6y045Zi8FTUP01se9dPtVWWugiiXHpIkch8akYcEXImNHITQcVBgVao6OTqqyplLEdtvR38dVsO7KzAVNf1pzxlJt6NwG0EzmYFGGXGxyA/oI411tohBpK8fu206+jAmDDKlBpNhvJEM0FCMe28MLDS9VO8olEuhRJoymEBZ5eEfzu4gftygGhbC8CuhCK0yCJY90mCI9oHMzWCgjORahgCw3kxfePmptuIV1FetIEA4nVWItr9dtdlredPreq1l85TQWxw9KdUOlWQM76OYKa2Qalo21+R/R+7enkftZiHs3eu2FEFiuODN1P2ZzqjInUtV/ZvZjZH7O959UFOyy+U10RMpvSq+vuwz2cDiQJsv9YpMzqcTy5fAyqb57eduZzV6YUJ0gF3Y832/RUbPdhqDZ6QbQ9TvtgAWdTsBgBM0RZThijEZeuzXy293uMPCY3w4gYBEbBoBMmlHBRzi3dnE5rvZN52jYUTsbbY2/UGLwaw2/rlbCAW6gnVDnds1JGa4cMhJ2BVFhqRBxwZDf0GNjTNWM6+cTiL9vvtFZFwsEtz5zOuuZs6CnJTe3qjlnPcYsPPrOl7IgVAHBS5IibdoLz5CRVKRiGxxVlwo9BttRgpfYhq4T5AhiEkArO0JWkXPAoSFyRBRg8wGJm1Szv2WIkWQWofJZRK2TSyMywbMjKT40ZEPIcaWfmdbIV4U2RNMJCqlZMlwgUABEg90Aezg+avGsyDBCRCzB/CucxcK4hvot8fn8/D7ZXkCZkltibQYWpXPYVrhaOferB4G8MOEmVdzeKHYinIW37f/MxZZ/UdgQK5jhVPSdkTtbB2eKr9tvHGo6ffEsgDa3p/8A34xtTA==

View File

@@ -1 +1 @@
eNptVG1MFFcUXZUooj/aWpM2mjputZbK7M7sLAtLo5UuYlBZiCxksa307czbnZHZeePMGwTFJkLVVtRkaGPSxqrIsttuQaTYGENpsZZW0baoxEg3MTZ+tRqjqfUj2kgfCCrR+TXz7r3n3HvOfVMTq4CaLiFlTLOkYKgBHpMP3ayJaXCVAXX8YTQMsYiESGFBka/R0KT+2SLGqp5ltwNVsgEFixpSJd7Go7C9grWHoa6DENQjASRU9dettYZBZRlG5VDRrVks43CmWUdSrFnvrLVqSIbWLKuhQ82aZuURaULB5MAHZZkKQwpQK0kxBQLIwFQAAk23rnuPYCAByiSNl4EhQJqjRSCVG7SDEDAck0GgMAyrZB5saASfsTHrYiIEAhn2rOX5iIh0bLY9NUAr4HmoYhoqPBIkJWS2hNZIaholwKAMMIyT9hQ4pJAZL4dQpYEsVcD2SlrHQFJkMheNpTAkrZpfeQt8ZYvyShZ6ow9BzX1AVWWJB4Pl9pU6UpqHp6VxlQqfDscHNaGJUAo2D2SPtGkvrCJ2KBRjc7ptzL4nqWVAOo6qQ/GOJwMq4MsJDj1stRl9WLz3yRykm035gC8oGgUJNF40m4AWdjlHTakZyuCgZsxT+DTdcPAxHWdjHbbMtlHAepXCm01BIOuw7ZEHj0rixEiOZlw0wx4YBQ2xVkXziDCYDczeEQFlqISwaDaynPtLDeoq2WBYGyVl2NBrIsRLePxIbHjt9hQsGdmELZEc4qrZ6RONNIpxUflAowhxOsW6sjguK52lFuX7mj3DJL5nutTm04CiB4lTC0eWJsaLhlIOhbjnmesSH75ZtCSY35H3MoZd5k33ZLCOVeWoeKVRIvBVyFfqCR18rAvSQkCR1gzRDtb1z+LcLi5d4AI0DAQF2unOzKDdbgdLBxyOTMGZyWY4BVdjhQTMOGtjqRBCIRm28kGaB7wI6YfSmLGcUm92fp6n2U8vQwGEddoHQmZEQQqMFkGNuGHGeRkZAll/DUY9ufSy7FJzv5vlOScrpLNAcHOBTAf9NlmbEZkeyRAZvDtDv4H1xAqNHHWP2TyjLtky9IwTzOy6xILJGwY2x5fYvDeUlNTv0/79JvXTefv3NTm7l4YuHO/jj35xYhY7c6DzTGLD9p1J986e+qR3au3h8Zuq5asdqwsO35yy6djfV28+OHU+NLejbp3T3+B99dc3evqnT5y79IXLG4+6hfRm/5/UNqtLbEzddjrxbUaD/er9ew/CnS2r/Q1TSwLdlxK3jD0n4W1b06F5s5dD5kbt0ppdJ6fg+vfrT06/21X2X5K3smnS7+rO4lJ514xzXdePfl4/se9U0tkdf8HFiyIoeGfFb19PvrJ1fteE3hePrE/xv5zck3x59yuTLzx3PokHydX1lSnV1XmJrhW5V3q2zEntzQMTsg+9nuMoBcknuiceL9TOnR8/v/7HBb3tO2bidukitXHnJf9b14P+vqLlY09fkagD73YeStxJae8585qj9KVpsypatv90d4Jwr/gjGhye/XPKsX8+u7R5/S97WhYX7Gg7mCgRt97q+yMnt1a937r7g4sfb7m/K+/ED292XLt2e5rFMjAwzpJ7fZKTG2ux/A/oUIK0
eNptVH9sE2UYLpJNVFCRKEjMOBog4nrdXa9r1waRrWMTZ9e5FhgMM7/efb0eu94dd1/nuskf+wERRyYfJqCQ4Ny6FsoCm0OQCGF/CEjkh3GRbWCQSMiQn8YgGkIyv41tssDl/rj73u99nvd9nvf76pNVUDckVZnUISkI6oBH5MfA9UkdrotCAzUmIhCFVSFe4vMH2qK6NDA/jJBmuLOygCZZgYLCuqpJvJVXI1lVbFYEGgYQoREPqkJsoKnWHAHVFUithIphdrOMzW4xj20xu8trzboqQ7PbHDWgbraYeZUUoSCyEICyTEUgBai1JJkCQTWKqCAEumFe/z7BUAUok228DKICpDk6DKTKKG0jBAzHOAkUghGN9IOiOsFnrMz6ZBgCgTR7yTQ9HlYNhLsea2A/4HmoIRoqvCpIioi7xRpJs1ACDMkAQQtVYyAhRYpU4IhOOFUJoUYDWaqC3dW0gYCkyKQ7GkkRSArGe4p9gYrCZSuWFiceQuNOoGmyxIPh9Ky1hqp0jPZMo5gGHw+nhpWhiVwKwodyx4rNKokRUxSKsdpdVqbzUWoZkLoT2kj8u0cDGuArCQ49ajhOPEze9+ge1cDtXsD7/BMggc6HcTvQIw77hC71qDLcKE56Sh6nGw2O0yU5K8uSt2sCshFTeNweArIBu8atGM9JET85mnHQDHtoAjZEeozmVUKBv2L2jSkoQ0VEYdzGcq7dOjQ0MsiwIUHSUNSojxNL4ekfkqPT1+orGhuIzfF8Yi4+WqBLFoqxUV4QowhxNsXa3QzjtjuoQm+gwzNKEniiTV0BHShGiFi1dGx2knw4qlRCIeV54rykRg8YLQn4CPmuYFjWUyxpagHPo+C6fP8q4LCFlr/nPfy/LqouAkWqGaEdzhuYx7kcXLbABWkYDAm03ZXjpF0uG0sHbbYcwZ7DOu2Co61KAjhFtKdEVRVluJ8P0Tzgw5B+KA1O5q8qzvUu83SU0aVqUEUGHQAijiuqAhN+qBM3cIqX1ahAToEOE54CujR3FT7g4lw5TDBkY0PZHJtjE+ilK0s7x2QalyE+fIRGboM6YoVOlo5PaprTNMU08kwWcG7TxSVTNwx9kiqyFv+pvNN7/+ufMnbkLbwJBvK6P99T2JzMKLr99oy+xgf//C7PXF3z0rVany9PnJqR+drdsu3OircunTu4c/GRWubVjNt/39p+0tm7Oe3Cop9bzcs/nGdhlq2ZXZy92+WwaC/6pmfKrT/uOn8xOpj3Zd+hq7du6OW3O3sK+j+etlcd3L6p4Ngrd74JfJvMz0x7+rdT8/+d+2Zu5jOhU7/UvfCg/8zlDSurrj7b21oq1RxtmHwseQLO72wOXft1067kgfx7a9fdgasXlg3efH5xXfuSjHdbcB31KZN2o6du4yxxFue5/Nn3/stXjAvp92ylO2cuKtq4wlY7Jf0stdXfMDd/jqh5zvecnPbRgoa6FvEL6kCTdHXuxl3b+pWBM11lhX/M7jzYnP5X315ndeyNEu8JN/aDji39fdtmXJnTcpeb4Wx8eXDWc6f7zgXEqYtCe701wePXQ+UfXNnRfH0B7qbWDGW+XlFtHhwSt966c1apPfwg3WQaGppsur+lqox7ymT6DwO7epk=

View File

@@ -1 +1 @@
eNptVWtsG1UWdgiPwv6gCIraCi2uQQiBrzMPj2OnhG5qO2lIY8eJQxJKa13fuc5MPK/Mw45TWtqEh4BdYKBQIfEojWO3aUiBPiClRS2IR0XDQ4BEgIVKkP2xoouAVVeobLvXjrNN1M4Py3fOud/5zjnfOTNUzGDdEFWlalxUTKxDZJKDYQ8VddxvYcN8oCBjU1D5fFu0Iz5i6eL0bYJpakZdTQ3URI+qYQWKHqTKNRm6BgnQrCH/NQmXYfJJlc9Nb9nokrFhwF5suOrWbXQhlURSTFedq01EaSd06lDhVdmpWHIS606YVDPYSbncLl2VMPGyDKy7Nq13u2SVxxJ50auZgPVwwLT0pEr8DFPHUHbVpaBkYLfLxLJGMiFWcpvyBDYVBQx5kuZ3jsV5QTVMe2Ih9b0QIUwwsYJUXlR67Vd6B0XN7eRxSoImHiOEFVwujD2WxlgDUBIzuDB7y34VapokIliy1/QZqjJeSRCYOQ1faB4r5QNINRTT3h8lJBqaa9pypMaKk/b4aA/96gAwTCgqEikakCDhU9DK9rfmGzSI0gQEVPpnF2YvT8z3UQ17tBWiaMcCSKgjwR6Fuuzz7pv/XrcUU5SxXQy2XRiuYjwfjvXQjMf/2gJgI6cge7TchjcWXMamngNIJRj2y1QBqWpaxPbXVVckEiiVSMr1uSa9ayDV2Ki0NGodGTEbEvmYv39t1t8Y6+xrEhNJXbU6E1yK03oAXeulmVq/n+EA7aE8JGcQFBCTs9ZmuTZ6dbOCukQ23S92rY2kI81MPBxjUYCSWjV4VzQQiHdFlPAaHNYCAR0pmcZ2IxmDuL2pvd3MWg2xOO5INw/0qVysi7mnAfnboi0xmNFoabA76K3lGWOlk1C2MiJfH0nTob54MsP51nQKbISPRsI9kTDfz69uvyeawd5of6Ij0Z28m2lG8zj7aR+gKrR9lNdPlZ6JOcVIWOk1BXuEZvy7dGxoZIbwcIEU0rSMoTxRJz7xYbEyTDujLeeFfX0+RJRqH4kLlttJ+ZytUHcyFMM5aV8dy9Z5fc6m1vh4sBImflFhvhYng2ikiDjDc4NQRIKlpDE/FrzoCBwpjQDpb4k+GVaABzTVwKDCyh7vBu2zWwQ0h/bNzhtQ9V6oiIPlsPaR8ixkBweyPLJ4XshkZSow6GXFJLZQan/liqarpTCEEJANe4ShvBMVy5wax0iuFKApQNGHBgCZfSyJskjqWf6trDLDznOk2G9e6GCqaUyWXtFb7gb19nwPHctExqXY52G8gUDg8MWd5qBY4hKo5Q4t9DLwfDY0IxtvXuhQgdhJGeMDc95A5O3pm8khkfKxqJZJJiFLQ4ZPMhxXy9DkmPLXYl+AqZ0k21BEBKXUTE3VTWBgRPa2mbOn3TIcKG2eepbmWB/JdKVTVJBk8bjDSobUUg5E4JqOJRXye1EKIIgEDGb1ZxdDPZGG1ubgwW4wX0ggqs1+M4qKaihiKlXowDppjD2GJNXiyQrVcSHYCNobeuz9ARqxXjoJ/QHM+XmaBqvJcppD+7/s8qX9W4QS4Z5B9j6BrXfVeb2sa6VThvV+H2lT+cuytVDKVel9r2rLjY8tcpSf6r8+9ZEyTC0O/+e+B2e4Ry6ZqsbTu8ZeP4VinUv0n6q+XfrNgYfvvG7m+9uvvSrUOqp0n3hu4yT7QzC06JmhiStnlgQ3rOMmP379zKkhfPUTv4LJv292nz4arspG90xNPfR+A8zsPfpLEzdTOHhV2/JN73xbFVw0cbzvxfwed8t2sOtvjiX86HvHvzTpY++fOsE/m38s0rPs+LHPudzji1b8+OlmUoSTT08cd5/d/cXJ+tu3Tpy5ZfUu9rb4rb9tGOSXi3f8RfEOKeiTLnRyh3DHn46ZI/GuVafvby/s//in+z44M7jiwL87/7UlNzK8pwmJLU9cs6LmVGiqZ03fC7vBP4TTO4Yhs2r6M/hw9dqVD13+3HLx2V8u23evg9n9h7BiG7th69Jt757uvim/
bKvg/fVn9/bDUdSy/uwHmx3PD4cXT1rOm1e9lBWMxX/+KiH/99Fvjn4xvHH7Dfq2k+8su/S68Vjx845zM9f88/CThy7tfUH6+fsXf1+6qdrhOHeu2jE0tXPH2Uscjv8BZV1LvA==
eNptVQ9sE2UUZ6KOqCgmSOKIctY/ENy11961Wzenjm4dY24rWwcdqOPrd9+1R+/uu91917UjoCLRiNNwiyYgCOpKS5ZlIJsKyIgaNBoVjVFxohj/xRAkiIiaGKNfu063wKVtevfe93u/997vvduYSyLDlLFWMiRrBBkAEnpj2htzBuq2kEk2ZVVE4ljMhFrbwwOWIY8vjhOim1UuF9BlJ9aRBmQnxKor6XbBOCAu+l9XUAEmE8VievzhdQ4VmSaIIdNRtXqdA2IaSSOOKkdIhgkGMAbQRKwymqVGkcGAKE4ihnOUOwysIOplmchwrH+g3KFiESn0QUwnLO/0ssQyopj6mcRAQHVUSUAxUbmDIFWnmVArPc05/etzcQREmubJGXMycWwSe3g69b0AQkQxkQaxKGsxeyTWK+vljIgkBRBUzvSaRByktDVUKI89mEBIZ4EiJ1F24qy9D+i6IkOQt7vWmlgbKqbJkrSOLjYP5rNiaU00Yo+2Uiq1ja5QmlZaY9zOCs7J7UuxJgGyptDSsQqgrLJ6wf76VIMOYIKCsMUu2tmJw8NTfbBp724GsLV9GiQwYNzeDQzVJ4xMfW5YGpFVZOcCoYvDFY3/hcvxTrebfl6ehmymNWjvLnTjtWmnETHSLMQUxH6Ry0KMEzKyvywp7eqCUldUrUkEjYCkGb5Aik9G6rvDvYlOsbO9LiGvWN6T1Hp6YxFfXWPYp2i4nnVXCD63XxAEjnU7OSdlwbYEvcujkbZVMVXoDgE+wSl6Q6K1pwUu6Up3dShuogneUL0RiSWal3g9pN6fblLa6paFpe6oN74qtbRD6A14l65oQVGCw8uxVMkF0w2t3UBowXonVNpxqxxquy92X6i+p5qhlK2kLNbIko7C7eGoqKe1DiNYkTJWEnGtB9ebKzwBdRlcEQmJceJua1paOYWzzy+wXJG2jxMqufw1PCkZBWkxErcH3J7KPQYydTpK6NEsLSSxzI0ZKlL0wbu54ky91Nr0v77nZeqoYO2xoCGXM5yHaQZpxsN5vIxbqOK4KsHDNDSHhwLFMOFLKvPlMJ1HU6LqrJ+chxyMW1oCiYOBS87AWH4GaH/z9OnMsiilYxOxRVb2UIRtm1gmbGPdyMTYsdiIAU3uLYS1xwrD0NOb6hGhJYrxZI/K+XsFXo4iC0qjxSO6gfNhKCFWNe0BwcsPFy2TchykuVI5cCznPpRi6QpAiqzKtJ6F3+JGM+2Mlxb7wMUOBCcQ3X05odAN7shUDwOpVMb52P/DCH6///ClnSaheH/+4g5N9zLRVDZuj2oeuNihCPESZw6lJr1ZWbTHb6M3XQD6JAGIXhiFlZzHU4EkiffzfCX0iQD6PZ6DdCnKkKLkm6ljg7AmgnR9k7Q9Xq6CVH711PBuL++jmVYzsgYVS0TtVrQO53MwqxndQAoG4l4osRDAOGIn9Gfn6jpbapsbA69G2KlCYlv1iVdHTsOmJktSth0ZtDH2IFSwJdJNaqBsIMi21Xbao37eX8lFRShAye+N8l62sbZu3yTaf7LL5NdwDiiUexLaI3G+xlElCLyjmlFBTaWPtqnwgnkkm89Vi71d8vSCJ2fNKFwz6feff/r6m5tmuuc8dvavO9/asbr79OAXA8zNmxzhuxbtz7xH5n8oRT2nF/59tn80edz1S39gZ3rh2eptfyyZxZVlL/t02f61YzftOnZm/MeZR3beU3G+4ZsHusbGr36ohfDbrdl9/Rvn/bHnz/ZXcnsuzBvfcmLNDSFPdvjHbiu8M9zRIX3cH9wuPvXTuTdPfrv19X0vnNvg93+1eKU4N7h184kls245eX7bXVVPubgb/hr4aPs8+Mzst8jK2y6/fP+pK8W+O/aTawZuXmz0OU/sDAfm6OP12/oOHvuyZMux69+fW7760In+C8c27X6ndNVR3BYpi303eO2iWw/+tkb8dcupQ3O+cz5+5sifpQv2XLifr1mzYXjId+aFBfi6havfOfrJtULDQNdzn5V1
ls0vHbns9s+usvb+XHpntRqfsbn2h10Pbjg99oR6//ldv9eErqoZGF5/r/rp8cPrO98YXffQs2eCO2YHjv4izv1c4TL44wPPPP+1csXR4+9+8XnT92XX3L31cNmp+U9a50ryRZ85Y12k8+SNtAP/AiMbVis=

View File

@@ -1 +1 @@
eNptVQtsU+cVDgLRVlPaTBQQQms8066Q5rfv9fUzWZSmdpI6iZM0NiGhm9Lf//3te+P7yn04thmtgE6jQCXuKgErj6rE2DRNwrOUhoQ92CrYoy+1SEm39N2qXdVpQ5Xo6Ep/O85IBFfy495z/u9855zvnLs1n8SqxsvSohFe0rEKkU5uNHNrXsUDBtb0J3Mi1jmZzXZ2hCNDhspPVXG6rmg1djtUeJusYAnyNiSL9iRtRxzU7eS/IuAiTDYqs+kpYZNVxJoG41iz1jy6yYpkEknSrTXWCBYEi4gt0NIvJ8hPVDZ0SxRDVbNWW1VZwMTH0LBq3fzzaqsos1ggD+KKDhibC+iGGpWJn6arGIrWmhgUNLw5z2HIkpRmyiqynKzp5thCmscgQpggYAnJLC/FzdF4hleqLSyOCVDHw4SchItFMIcTGCsACnwS52ZPmcehogg8ggW7vV+TpZFSMkBPK/hm83CBPSCZS7p5uoOQaAjaO9OknpKFtrlpG308BTQd8pJACgQESPjklKL93HyDAlGCgIBSr8zc7OGx+T6yZh4JQdQRXgAJVcSZR6Aqup2n5j9XDUnnRWzm/Z03hysZb4RjbLTD5j2xAFhLS8g8Uiz6ywsOY11NAyQTDPN5KodkOcFjc3rRbX19KNYXFeuaMt0u/2BHu7Yh7WlsNxLQL6EmIZUJtjUqjXFbWKCNJNVi9FJcCNAeJ+3weL2MC9A2ykZyBkGjTac7m/tl0RH1BjemI2If1YnCzaFWpokRwrwt2Rps07yhxlhXl+EKYKWnRersiEn+9oZWFGn1C20q9vcEbL2phrTLkXxkfcLVNoD62nimoV8NRlUYb4O+zMZOd6C31kIoG0merUs12WwupTXp0CQUHkDKQxyfoR/ObGQTjXExHIjIA93hABfu7fI+Mo8zRbkBVaLtppxeqnCNzSlGwFJc58whmvIeVbGmkHnB23KkkLqhbc0SdeK/XsyXBudwR+sNYa/IBohSzckIZ1RbKLclBFWLg3K4LLS7hmFqXC5Lcygy4i+FidxSmCciKpS0GBFn49wg5BFnSAnMDvtvOQKThREg/S3QJ6MJcEqRNQxKrMyRHtA1uzFAMHBqdt6ArMahxGeKYc3J4iwMZlKDLDJYlksOipQv42T4KDZQ7HTpiKLKhTCEEBA1c8jh842VLHNqHCa5UoCmAEWPp4BKSiHwIk/qWfwurS3NzJLyU2dvdtDJpiELLu8sdoM6P99DxSKRcSH2DRinz+ebuLXTHBRDXHwe1/hCLw3PZ0M7RO3szQ4liMOUNpKa8wY8a07dS276fJjC0EVHfYybdUVdvhhiPZQbuukoZijswK+Q3ccjglJopiKrOtAwIjtaT5tT1SJMFTZPHUO7GDfJtNbCS0gwWBw2ogG5kINWa1FULMiQPYZiAEHEYTCrPzMf6G1vCAX9Z3rAfCGBDmX2/ZCXZE3iY7FcGKukMeYwEmSDJStUxTl/E+hq6DVP+2jEOGkcY1inj4l6WfAQWU5zaP+XXbawf/NQINyTyDzFMXXWGqeTsdZaRFjndZM2Fd8iW3KFXKX4nxaNV+68vax4LSaf69d3dYUSq+mKyX8d2/eFcNu7q5d91Mq3b9nzxuUjtidPXnzry5aXutdkp8Z/fPW3K0frL+fKPz58YdOVQzO/mLi77A/9J5Y8v4t9Z6nnfF91d1/lXy69bH+xo/Lcu5V3TTx+deK7787uF+pH/rj2Ae7KD5Y/F/HsmH5/N/hm8ai15dUvaybPHXztqxU71ZllgNd7Ly+5Z6/nCj044P/oku6uryrv/WWw+oMXyspSn7934M3Et2v2UKsO7l4R/nX5jk9euf3BgLrqh4779vdkVgxtiXy8anPlJvFQw6PlQoV77RrB+megL0ruCd11f7Mwc6Guauq+JduOP3PHvrVM7dF1W39/9av/bNtu7GWlNwZf+9G//+c7MfSTwHR2zROvJv4pOtYHfnPxM/2pddsP/WNlWf21S/qGv+2o
eKH8Z7T4ZvwCnz6+7IJtqZo88Oxj0c+XTvz0qbG/Vz1dPdq68pnlLVXbMp13JnYfPKM33L2y7vrqT5+Y+dX5UXkmWl/RAh97e/32o6PjUmrtzn32r186eWXXtd9Z4bfi8m6B/rSOG8Gf3em59y0tKrw4/d+laPuZ+uzjr1/70F5s0OIyZnDa3Uy69T3LDll/
eNptVQ1sE+cZTsgEm5aurSohyrr2ZFWsPz77znd2fE6zyYtxcEjqEDt/ZpCdv/vsu+Tuvsv9+Ce0HaX7KT9pdVU1tCKVlRi7crMAC1BgQNcxKghb127SqgRRVRoq/RlT6TqqjVXZZ8dpE8En3/nue9/vfZ/3fZ/3vW2lDNQNCan1E5JqQp0HJn4x7G0lHY5Y0DB/UlSgKSKh0BWNxcctXZp5SDRNzQi43bwmuZAGVV5yAaS4M7QbiLzpxs+aDKtmCkkk5GfkLQ4FGgafhoYjsHGLAyDsSTUdAUccyjKhQIInhtAw/ksiyySSkNcNh9OhIxliHcuAuuPxTU6HggQo4420ZpKMy0ualp5EWM8wdcgrjkCKlw34eEmEvIBDerfujoKIDNOeXArzAA8AxBagCpAgqWl7Kj0qaU5CgCmZN6GTGDVMoYwhqrCaCrs8DKFG8rKUgcX5s/ZBXtNkCfAVuXvIQOpELSTSzGvwZnG5EgOJ41dN+3AUQwlG3F15nFWVoF1NlIs6mCMNk5dUGaeJlHmMqqhV5b9dLNB4MIyNkLWK2cX5w5OLdZBh7+/kQTS2xCSvA9Hez+uKj51avK9bqikp0C61dt3srib80l2JcdE0/h1aYtnIq8DeX839q0tOQ1PPkwBhI/ZLVBEgNCxBe7Z+xeAgSA0mlZZ4uGM9TCgik2YyvQP+9T0CE7G4kBReb27ojw22ZbMa3RV+VNlgZUm6ifXRHMuyHEm7KBdGQaZp39BAMGG1DSfoDQlvGMRTHWquNWb0eb00YNVcLh7MRvVOobcn2dvTmh8BYeRr8rZlYqHciJVKeyI/UE0qrXC97VQ+2T4IM4mQHvF1tQ81daI01Teiev3BTHYgv264mcCQrYwktIz2iWgoH8u1ZcS1fgVKCdgWDcZ9kUguZOXbUS7ZpsZCLBsM96cXYaZ9FEnVYPso1k9V1uQCZWSopk3RHqcp/8s6NDTcNvCpIk6kaRnbCpik8I/nSrX+2Rdd/xW/VxZCmLD2qbAuOQnKQ3TyecJDebwEzQYoKsByRFtnfKK15iZ+S2Yeiuu8aqQwO9cu9EMJiJY6DIVy6y174FSlB3B9K/Bxh5IwpyEDkjVU9kQ/2T0/OMhIaGq+7Uikp3lVGq26tU9VmyE7mssKwBIEMZNVKG6UZaQktEDqcO2IpqOKGwyIVAx7nPFTkzXJAh3LOFaKpHFq6RM5UsepkCVFwvms3mvTy7ALXpzsYzcrmHjg4DlXYqvVoE4v1tChgmlc8f2VGZbjuJO3VlowxXCVRZ1YqmXAxWhoj2Icu1mhZmIfZUzkFrRJSbBn7scvg6lUU8rDJj0010R5eIYBlAfvQJaFtMcHWO44HoESwFYqxdSQbpIGBHhUm3l7xqnwucroaWFoL+PDkTYTkgpkS4AxKxlClRiMZkLToYx44QBIkYAHIiTn+WeXQgOPBjsjrUf7ycVEIqPa/GeipCJDlVKpYgzquDB2GcjIEvAk1WGxNUx2BwfswxzD+SlAJf2wiRF8nIeMBEMHF6x9SbtCZQyXeBljzwB7SmRaHAGWZRzNhMK3+H24TNWPyZPFSqxq+mz9yft2fr2uuhrwNTe3q/t19SLVeOrGw/5NmciOYyPN9++u/87P+Cs/6v/5EUrYfvTIuX3E22e2Oea+ty4Csh9v3WS+9/cto89ejtUTu1ZuvKO8e2I52p1tvnHtiXce+F9v5tJHZ/574VfZvf84MXsJXd971kG//J8nd+wd6NuxHNm3Hb9rYln7G4XE6VfUC509PXsaGwtvr0lcyR8/fWLg01WrE5PTH+9hpr47Tu0gH3PX1b346Z5HAmOv/WFl784r01vp6W+dvf6XrxHLZuJ3esJ3DQRW7vxG/PKqTUe/OPL9rWs+/8VLwbsd/VNtIhTE7Z/dWPH5e//sOk87tGdXdHn+/dD58ooz6P1y/UV6+vnpsb+9WffhPYc6Yufz9/753LXnftd9pcF77fayOMYcOtAwvc51fKxjs/BZ
f90j70fG3L//5n3iJz9krtI/7puM6snZPzWebHmqw3k1/tq7zs0bXvH+hvW9uYZ8Pt7y0Xbu12ufmX2r8YHUL+d8/3rh+oOvjlyaOxfd2v3B3dbVhqHxzctfnF1WvPBF30X4xszBp1c9sfqn3m83Srev3sjN/rVze+7pXa2XVr/Df/L65TPtW5zPnEXVEjXUbd732LI+XK//A71xYoc=

View File

@@ -1 +1 @@
eNptVXtsU2UUL6AZQWVoFBON8VogGbjb3UfbtcUZt3Udc6zd1sKYaJqv3/3a3vW+dh/tHszgUOKDMa4CYkRA2FqzjDkdkzeKBsWIkaB/OCCQaGKEGaPxFcPLr10nW+D+cXPvPef7nd8553fO7c4kkarxsjRjkJd0pAKo4xfN7M6oqNVAmv5iWkR6XOb66gPB0B5D5ceWxHVd0TwlJUDhbbKCJMDboCyWJOkSGAd6CX5WBJSD6YvIXPvY2k6riDQNxJBm9azutEIZR5J0q8daz8MEAQgVSJwsEpIhRpBKgIicRARlLbaqsoCwl6Eh1dr1XLFVlDkk4A8xRSdZm4PUDTUiYz9NVxEQrZ4oEDRUbNWRqOBMsBWfpmxUVyaOAIfTvGCZ1xeXNd0cmk79fQAhwphIgjLHSzFzb6yDV4oJDkUFoKMBTFhCucKYAwmEFBIIfBKlJ06Zw0BRBB6CrL2kRZOlwXyCpN6uoFvNA9l8SFwNSTf3BTCJ8pqS+nZcY4mgbU7aRg+3kZoOeEnARSMFgPmklZz98FSDAmACg5D5/pnpicNDU31kzeyvAzAQnAYJVBg3+4EqOu0jU7+rhqTzIjIzlfW3hssbb4ZjbTRjc30wDVhrl6DZn2vD/mmHka62k1DGGOa7VBrKcoJH5tkZBeEwjIYjYll7tdrUFvX5pFqfEkzyKS/PNbhal6dcvoYVLdV8OKLKxoqwI+pQmkm61E4zpS4X4yBpG2XDOZOVcci0G8tTjnq6okaCTTybaOWblvsT/homVNXAQjcl1Cng6YDbHWryS1XLUJXidqtQSvoatUgDQI3VjY16yihvCKFgoqatRXY0NDHPlENXfaC2ASQVWuhYVWkv5RhtKYEpG0meK/MnaG9LKJJ0OJetiLN+LuCvavZXca1cReMzgSSyB1rDwfCqyEqmBk7h7KKdJJWn7aTsLip7DU0qRkBSTI+be2jG9Z6KNAXPEFqXxoXUDa27D6sTnTqZyQ/T7kDtTWHP7/NipZpHQ3GjmKCcRB1QCYZiHATt9LCsx+4kqutCg5X5MKHbCvODEB5ELYrFWTU5CBkYN6QE4gYqbzsCR7MjgPubpY+HlURtiqwhMs/KHFxFNk5sEbLGOzIxb6SsxoDEd+TCmkdzs5DqaEtx0OC4eDIlUu4OO8tHkAGj+/JHFFXOhsGESFHDxXExQ3nLpBoHcK4USVMkRR9qI/HsI4EXeVzP3D2/yjSzz4GLfeBWB11OILz0MvZcN6hjUz1UJGIZZ2PfhLG73e4jt3eahGKxi7vUeWi6l4amsqEZUTtwq0MeYjelDbZNepM8Z44txC9hdykHWQcCkYg9iqIMa3c5KQo5aQTcTpaNMgfxNuQhRsk2U5FVndQQxHtbbzfHikXQlt08ZSztYPExainBS1AwOBQ0Il45mwMWuKIiQQbc+zBKQgDjiJzQn5nxNvvL62oqP1pFThUSGVAm/hkZSdYkPhpNB5GKG2MOQEE2OLxCVZSu9JGN5c3mPjcNWTsdKYWcq9TF0TRZgZfTJNr/suvL7t8MEDD3JDRH4myZ1WO3s9alhAjKXE7cptyf5YV0NlcpdmJG96OvzbbkrlkbXg/Xfkbdf+LilcUVOw6D8eubjy0uerWQ8Xq9T289b7zF3/f66nc2dKVqzu19xP9bf+E/qW8uHfwn+uC9Ff273O+u+XpT08qeoZFfLl1fYfvwzROf9ZwtenIk3PvN/tR58a8HetZ9Wlz6ye+7HtKCzXcW9Uh06pN53Qlm58JLT3x1pnnWkgVP3dnMt+ruHRuHk7XsoWVn+MyzD395/NvtR6penPdh4thje+YP97sPvly4Zc6R2KLTV7/3zjZ8L81BF2p7ly/o6TzZ4ztd94v+5B1zV47GHts2tOXy5Ss/7ty8eC9be/F32LVo9M/xT2f84Sto6v38zPpfF1RcePuVy29s+s4TbC4+vWbt3PVf9o3uuTa3c1vy7nsYx+lTP7DRlyxc47/HK4pia+7Z3fT3
OTi6KWrZffzkxyPb/5C+2Hroam9XoavgNWLO+FPLnvfIF38inhj++OzGx6WZwcJfg2pRffddsXnDkdGWQODa/ur3Xt3y3SLXOKfd+OnUlZ8LLJYbN2ZZNj9EJK7PtFj+A1XHTc0=
eNptVX1wE2UaB+oJejejf4iDMuCa4w/n6La7ySZNWntHSVIotE1oA5ei2Hnz7rvJsh/vdj/SJD08DqqOUsC9O1G5O09tSLhcKZX2VETQm4EbcBAFP7A4RU9Oh3HKcAfjDMeN9t6kqbYDO0kmu8/z/p7f8zy/59kt+STSDRGrswdE1UQ6gCa5MewteR11Wcgwe3MKMhOYz4ZD7ZF+SxdHf5YwTc2ora4GmliFNaQCsQpipTrJVsMEMKvJf01GJZhsDPPp0c09DgUZBogjw1H7UI8DYhJJNR21jrAIJQpQOlB5rFCqpcSQToEYTiKKcVQ6dCwj4mUZSHds2lDpUDCPZPIgrpm0q8pNm5Yew8TPMHUEFEetAGQDVTpMpGgkE2Ilp5kqZlM+gQBP0jw/685sAhumPTiT+n4AISKYSIWYF9W4PRzPiFolxSNBBiaqpDKGyRcIbRWVymMXJIQ0GshiEuUmz9pDQNNkEYKivXqjgdWBcpq0mdbQjeZCMSua1EQ17ZEQodLQVB1Ok0qrFFtVQ1gPpWjDBKIqk9LRMiCsclrJfmi6QQNQIiB0uYt2bvLw4HQfbNh7WgAMtc+ABDpM2HuArni44enPdUs1RQXZeX/4xnBl4/fh8q4qliWfV2YgG2kV2ntK3Xhtxmlk6mkaYgJiv8TkIMaSiOxzs+d2dkKhM6bUS426X1B1jz/lSkaDXZGM1MF3tAckcd2a7qTanYlHPYGmiEdWcZBmazgP6+M4jqHZKqaKsKBbG91rYtG29XGF6woDl8TI2gop1N0Kl3emO9fKrKly7nBQj8alluVupxn0pVfLbYFVEaEr5k6sT61cy2X87pXrWlHMxJE1WPAyjekVoS7AtWKtA8rtOCSG25rjzeFgdx1FKFtJka8XBQ1F2iMxXkura/XGmpT+S5Pf6MRBY53Tr6yC66JhPmGybatXeqdx9vg4minT9jCclyleg1OSkZEaNxN2P+v07tWRoZFRQltzpJCmZWzJEpGik8fz5Zl6ObT6B33fnQ0QwdqHG3WxkmKcVAtIU07G6aZYrpZhajmWWtESGfCXw0RuqsxXImQeDYGoMzg1D3mYsFQJ8QX/TWfgcHEGSH+L9MnM0iilYQPRZVb2QJRum1wmdFNgeHLsaKzHgSpmSmHtw6Vh6M6kunlo8Xwi2a0wvgznEmPIgsJI+Yim42IYQohWDLvfWcMMli1TciyQXIkcGJph30jRZAUgWVREUs/Sb3mjGXbWTYr9+o0OJpYQ2X15rtQN5sh0Dx0pRMbF2D/AcD6f782bO01BuXzFi3ljppeBprNhnYrx+o0OZYiXGWMgNeVNi7w9uoTcdHpdHuhieQEJ0MkxXi/jRG62RqjxAQAQgJ6DZCmKkKAUm6lh3aQNBMn6NtP2aKUCUsXVU+9i3S4PybSOElUoWzxqt2IBXMzBqKM0HckY8PuhQEMAE4ie1J+dD3S0NrQ0+V+N0tOFRIe0yVdHXsWGKgpCrh3ppDF2AcrY4skm1VHO30i3NXTYIz6Xz8vEeB/jQ053zOWmmxoCQ1No38suW1zDeSAT7kloDydc9Y5ajnM56igF1Hs9pE2lF8xvcsVc1fix2c/ct23erNJVQb4TE32/bZAWsnc+dvl/S4/+8dLTVo3nP+zT2SfOZH+//MXe4C7h7KqRe4O76X3bJ3rs8VTNvoqDF1/E5y8ePi3/+P63E9Hbw9KJwtVk5puxRz77w6dJY7T30NeDnZse/e+Vb/4pXIVX3//11qa297++bf5bhVN3WecdC/WegTmr/vHh3x6OXL1yoFBwbt3BnVsi1Yd+8acNzQfvWLr7hWeHt23VWxpB755rHfPuP+r584n+D64t3PXFr65kPgB9B4LRxdt6Ny/zjG3uOx3YtQM4lwV3nEafDh9bcIt8/YHfsWcfqptzxhd7fgtavXPugbrdQx9eX/bXazvvuvdC88NnZ6Mlzy7ua17sHJu//MGvhr5dOPF5hfWClIuffO+Zj44s+smF
RZ+Pv/1c9Jwt/Nx3euDj42c+8e6e/8BfPt4790smOir5fxruc0xc/jJyeZN84dzOkfs2t128x7pUsbH//FeL3pmTG+vZP+79Ql7geLyzEbx7q//6rdu3j196dCn13ZN9G8YWnI39++//+mjlZ9Hxp94q1b5i1qkfvXb7PaQR/weSPmR2

View File

@@ -1 +0,0 @@
eNrVVk1vG0UY5uPGkV8wWiEhIa+99vojNilSFKq2QGhRTNUqRKvZ3de70+zObGdm47iRD5SekZZf0JIorqKWooK4QCWOHPgD4cBv4Z21nVInbapwwrJl+/2a5/16Zu9OtkEqJvibjxjXIGmg8Y/67u5Ewu0clL53kIKORbh36WJ/L5fs6L1Y60z1ajWasapKmY6rCeVREFPGq4FIa4wPxL4vwtHvkxhoiOHvHX6pQNorEXBd/GysSz87G9Wcar1ab3aerAQBZNq+yAMRMh4Vj6M7LKuQEAYJ1XAwVRc/0ixLWEANxtotJfjhquAcSszF4RZAZtOEbcNDCSrDNOCbA6WpztXdfYwLf/4xSUEpGsH3Vz+dg/v7jXd/M+GVsjGYliKxV5JEDO21Mm9VPPhg/2PEUDzrx3mFOG2yRiVpOI0Wqbd7rttrdsiltf6vp8a4KlnEeHH/pz5LMa0F6eNVGsQwdymeZrmP2VVISndsBHmh7UxWEm2vbwfFUTV2L1i9ZtO1PkT9hUar23AcpxK7dqN7iuLZApyLO5lQYF+e5ow5nZ7zc/3+dSpHxcEU6g/GCptnfwY80nGx1+40flkIsIagscOoc5y964wWh9hZEgkRJfDkhm2sSxgMe1M8cA6uSRqltHjIhR2YMjy9YWOZaSgiu49jCPaVsDgi9S64IQ0dGviNptsMO92u67ZD120N/G7Q8Y/IqXmsSggRL6OJKva1zOHRPIP+KIOTczSZA5s1uU4+oRwP7zjEcXrl2zS5HOuvcaYkNvOvt+7vWrPtsXqWU+1WW22rYjGOM8cD8HB0I2X1di0/Eb6ntMCMwQNO/QRCq2dgVRZ1WGzAYOsuBgqxHAq0Bzs0zRJQXponmmVU6sUgZ1vg6uFya/Ao83Cv5WjRQMjICySUJfFCpmbKAVYQtRkdpVi9RacMsxecJh56q5Neyn1Z1gqoDOIT0lgMPa0TL2dzkTaj4GkG0gtzOUNHR2VZE8Ejs+3o38RVMO5SzwT15rhiDYXcUpkJoAKRgUHpMb7NNKhjjHeUDj2krQy7bzr5IiYM4lONSLHfSIZoyAcsMofnCl6odpgJJNDjTAKagJdn3m3F7iB+3KIIJMJySqBzLdcxljxUXoL0gM719lwZiiH3OKSZHj33bqLWhJtbl7GOBZ4/KhNrON1OvdVwxuN3Xs7iq2exOH5QqmoySWvYQTuTWCNtJwlNac1wstL/I5L/9iyKPw9974evvBbahjHOTeCPghlh6VMJ6z/z+wlKX+q0HpdEbAez++iYml+XZF91JRzgcCBZFpN8mwVC8sUr4kVqfXtr15pOoBdTFSMjtrsubYR+q9nsgOu3lgKn7dA2DZxGp7UUBp1BnZpXE6DdqDfDeqPjDzqOj2Wtt5baroN8mlLOBji3Zn0ZLviGdTzyqJ0OuMJfKNH4tYpf10phH/fQTKi1WbGSABcPeQm7gqiwVIg4D5Dl0GNrSOWU8WcTiL83XuusyzmCW5s6nffMadCzkptZVazzHqPnHj1r48rn6/3N5eX1m+sffURuipxQCQQvTopUai5BTQZCkpKBcHBtytUQTH8JXmxbqkqQMYiOAa3MQBlFxgBHiIgBkYCjAEjmpNyEHU20INMIpc88apVcGZARnh0K/r4mW1wMS/3UtEJu5UoTRUcopHrBcI5AAhAFZh/M4fj4xdI8xQghMXTzr3AGS8AUVJeXa9Osv+JfzID0yO4c0xjFq1PUKJ3hN8KVMkqPbNTK0pVPC1muvW0qmbl2zMBY8yhmPKaupjvzuntY0hSHpmcN7Om2WGN8bb52qPH4+QMD2myO/wFK03fT

View File

@@ -1 +0,0 @@
eNrVV91u3MYVTnLRi9zlovcDIoDRgFxxydXfGgaqOIbTH8cupAQuYoOYHR6SE5Ez9MxQ67Wgizp5AeYJklqQAsNuirRIUbQBcpmLvoB60WfpmeFSstcbK4ETIBEkcDVnzjnf+fsO9/7xHijNpXj5ERcGFGUG/9Ef3z9WcKcBbT46qsAUMn1w9crOg0bxk9cLY2o9XlmhNR/oiptiUFKRs4JyMWCyWuEik4cTmc6+Pi6Apmj+o4fvalDBVg7CtH+3t51eUM9WwsFwMBytf77FGNQmuCKYTLnI28f5PV77JIWspAaOOnH7V1rXJWfUYlz5QEvx8LIUAhzm9uEuQB3Qku/BZwp0jWHAh0faUNPo+4doF/7zzXEFWtMc/nz9dz24/730y+Mrd2uOKu1XO0Xjk3BIfksFGW6uhyQMx+6XXL228/nNwOIog/76p+GD9zhtH2IIJJcyL+GLm8Fb1NBU5sEO5hKC36TtCYmGaQjra/Eai9jqBMMNV0dsLU43hmtDlqXrX1mzWgcYjFHS2Zcagrc7gO2nb3x508kwfcHOrIbgeu2q1B4LqQXPsr/00t+DyE3RPlhbj75cMHqN3rUVQFkY/nPbKM6MxSgwUcoE28CwuGbWnvgVXsQUXYqHqzHeDS8SLljZpLDdTN6SFZZZXyS1glLS9PA9qmbt0XXFcy5OyILLrbKU0+CyghSxcVrq9tCoBv619Fpno/3k30ul11wT2lz8o4+1hxzckNgTiPxKpmgFARWob6TS5IKGMrtA+oZd0qwXiZx8gP0TaMXIBSEFXDi6oWhe0fYzIQNGWQHHWyU622PtyaCIL3nj0Sj2LpKKXopWNyPMj1/EQbS5RLA8krOqHmKrAPZcA9hzG2SrViQKo1UyjMfRxjhyPffoybo/2/+PL1uEvYf2i7qZoNwnfQ3Xwr/t8AoHaiHJbj7/hMOhUPbfVz7Z9+Y04I09HMlwEHm+h4U2NpkJzmCuvfG+NynlJLG5RdsJCDopIfXGtqb+ogzdABrbjtFQiiOhwSRwl1Z1CTqpmtLwmiqzaOT8G8ghyFIGEsoTJCg1W7wgVZ4wBS5HScr1XJhh+6G0prMK07moVGP0UtAyQW39rJaOvy1qDVSx4pnTQk4TY8qk4f2RsXSQGA4qSRs1R0dnLq2lFLmlLdQfYT9ZdWXmB8PRge9NpdrVtTWgmazBoky42OMG9CnGe9qkCbY0Tqa2lXwaExqZUINIsd44HXhRZDy3zhsNT2U7rSVugtNIGC0haerkjub3ED/2Tw4KYYUOaC8VpsCUpzopsdtQebjWC1M5FYmAqjazM+0RSq25/razdXqQTGYusCjcXB+uRuHBwavfvo62z1tH+IeneuX0NKB8Rd8pA9c9gZ5pA1VQK8ycWbHLRpuf1fZ67bzl8F1peXGXPWJzq2Yp9fxYLP6r5Sy+hKkXt97hENfV89beEdYRea49bvY4k0ocps8j4OGmJeDl/PnYbYaAzXn3xXfFua8A5y2TH26lP7UafvHrfa+boKSgukBGRwM0hM00zMJsNcWPsLkxDGmEn2jMNlMWpxAPJzHFO6thtB6NopDFMYw2YA3iyO6DimJfYnUd9zGcf6RH7DE0jglFxw3DwtsFhOT1vnc6uHjSjanGT3hi8HEZHzfc4Q5yjJ0277bv7U6p6jbWfGLw8/sv7GvbUcW1zuLznHaaLxLd3ILvPc8NF3Vjkj2quCV6G6KX4jjjkKCikXWya59Wlth3cqts5oaSTKoKgxp7WdBV2jsT4ulVpCFB8CXY+SCOenHwfeI2KxBK9EwY+2UBSb6cEZwmhX7J/hzAAXHsSowkqhH2UUBZk4yLlJgC9YWeghqQdwUC1+4I95AiugbGMw4aHZOC61PP1mMnY0Q01QTvyoz0rwvWwIxMuS6sKzkxmE+f0HKKC5a4nUJmslFnoKghldSI16XpgCDj4grUA/JH2RCGcUuFc+VwzUVkMkMMOCKwR4XBgMumcoEpMI0S7qoz6b5G
WdAiP8PHuwv2RWiCr0KDW+KWeAeQi+aQsBwIt3SXOtOaZFj6J8N2lfSJFJhvqnc7HaeRwXQRmSa5K6EV9zl0Xm9Q1DaWNm1WET/mvbN55psIZGCbVOry5hKiAfooNBJdRXG7aaa4WzYD8ibqUgVZU1qbQponAusROXtpJ4W7XJsB2Sq19Em9iGlacFb0YLhLX3fkcuDiuG4hW+wuAdKyoc24u6DHt8T+WeMfeAf447/w/L/dIHH9pMffHX2fOf/DvDXGZN/pdqm67X9vbvF735jo07d41Lp98H8ZDhN8

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -1 +0,0 @@
eNrVV01v3MYZbnvoobceeh8sCrgplityv/Rh+CBYgutWstRonTqIDWJIDsmJyBl6Zqj12tChbv7A9hcktSAFht0UaZGiaAP02EP/gHrob+kzw119rJXISNpDDRnkzjvvO8/79bzD5ycHTGkuxXdfcWGYorHBD/3b5yeKPa6ZNh8dl8zkMnlxZ3P0olb89Me5MZVeW1qiFe/okpu8U1CRxTnlohPLcomLVB5FMpn8/SRnNIH5j17e10x56xkTZvonu9vpedVkye8EnaC//Nl6HLPKeJsilgkX2fR19pRXbZKwtKCGHTfi6R9oVRU8phbj0odaipe3pRDMYZ6+3Ges8mjBD9iniukKbrDfHGtDTa2fH8Eu++c/TkqmNc3Y73Z+MQf37+/86GTzScWhMv1ylNdt4gfk51SQYHXZJ76/5v7Ine3RZw88i6Pw5ts/8V+8x+n0JVwgmZRZwT5/4G1QQxOZeSPEknl3k+kpiZJuEKU9libdwbAXp92BHyVBN0qX/X7SH0ZfWrNae3DGKOnsS828nzUAp5/89IsHTobweaNJxbydymVpeiKkFjxNfz+XbjGRmXz6Yrjc/WLB6DZ9YjMAme//Zc8oHhuLUSBQynh7LEZyzWR62i6xESG61QsGPez1bxIu4qJO2F4dbcgSadY3SaVYIWly9B5Vk+nxjuIZF6dk4cj1opBj77ZiCbBxWujpkVE1++uV2xob04//dqV02xWhjcWf577OIXu7EjUB5JupoiXzqIC+kUqTG5oV6Q0yL9grivUmkdGHqB9Pq5jcEFKwG8e7imYlnX4qpBfTOGcn6wUOO4inp528d6u11u/3WjdJSW91B6tdxKed97zu6hWCqz05z+oRSoWh5mqGmlsh65UiXb87IEFvrbuyhhfU3KuLeX+z/l/ftgjnJ0w/r+oI8jaZ53Do/3HESzTUQpBdf/4azaEg+9f3Pn7WmtFAa62FlvQ73Va7hUQbG8wQPZjp1tqzVlTIKLSxhe2QCRoVLGmt2Zy2F2U4hsHYXg+GErSEZiZkT2hZFUyHZV0YXlFlFo1cvwMcApYyLKQ8BEGpyeIGqbIwVszFKEy4nglTlB+kFZ2UCOeiUgXvpaBFCG39ppbufZXXmlEV52+s5nIcGlOENZ8vGUsHoeFMhUmtZujoxIW1kCKztAX9PurJqiszWwj6h+3WWKp9XVkDOpYVsyhDLg64YfoM41NtkhAljc7UNpOXMcFIRA2QIt/oDmwUKc/s4bVml6KdVBKT4MyTmBYsrKvwseZPgR/1kzEFWL4DOpcKkyPkiQ4LVBuUg+FcmMixCAUrKzM51+5Das3NdztbZwthNHGOdf3V5WDQ9Q8Pf/DV42jvunGE/1jVS2erHuVL+nFh+wN9pSfasNKrFCJnluyw0eb/anr98Lrh8La0vDjLXsUzq+ZK6vlfsfg7V7P4FUy9OPWOguFw+HVj7xh5BM9NT+oDHksljpKvJeC+JeCr+fO1mwxePOPdbz8rrr0CXDdM/nsj/dJo+P7zZ62mg8Kc6hyM3guCwTAZpKy32h8EKyuMBcvdYBBE/spwOegm3W60Ei/76UoaxP7AH6SD/mB5sNrvd/ss6Q3sYCkp6hLZtfTDQVAftM6aE9KmFTXesGLwuI3HrlscgUdsR7UetVtFDOIAr6I4gQqZAOI6RsVAY39MVTOxZh2D9w/e6qw9RwfbjdY3PbSxep13s13t1jc9xsw11lrvy5pQxQiurY7WQD2aZ4IlxEgyv9iTMWiLULL3yy1iR3KEodx5KO6AcoTV5KKqDXE0iyZvEzdFYZPoiTD2wwCEXkwIOkehEcmzBNyBl0Pi5rA9SdVQMznMFVLuE2rsDwI6w3zRRKbuZ7ObigQCUyvhFlG0Y6aA5r7A+NduDZNJEV2xmKecaYujeY+JqMsIMhicXxiswgQO6tzikJFBtNuEFmOMWOKmCpnIWp1DBbZSanhhZBXuH85BAoGNZYxwSIXWuuRABNj4UbADihDH
sqhL4dw+98PZdAG3URTZOUDebLgQ+HsMZDRDlEoFtIXb0hjWJEVdXPTa2HneJlIgCVTvO50G3iVEmmQuo7NYu2TO3MrpAULtiMTiNlIW2pk5+/YDYlclC1B37JnIh1uPwBPjRrlDLknsp58q3YyYBQUViKgtaNmzz4q6yUvKcfs6rwILdvv+3ogkEjdahCRn8f7FDEYMJzFEFyTnUHPTIXdTu4VkzNhyZkrBs3HOi4v7aGOgDXxjsCNbKEljn5m9OzwUD8XGDrm3MwJN79sKnZCN7S1iRzKzd0hNfnL33t7mu6M2ub+7sT7abJONza1N93x3Z5cwE3fecWG+HMuHYiStFeVcJ7js1UVC1rd+tf7+3qW+cfl+o3CsRc0YHKONAVurzgHY3pAOsN7nFZS4xjmswvrI9uSF02ZtALMaoS3pvDdd+Z7VUwOg4z4FQA3hAVXcLVlSm3U/hK6DLG3NCSlsqgC8lHrNHGkd4t+jt7NzeHj+HYANjw7/AxNYKrM=

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -1 +1 @@
eNp1VX9sG9UdTxfGMml/JGIZot3Y9UDKKL3znX+ek1pN4iRt2hQntknSRGBd7t7ZZ9/dO9+PxHYI0CTbiqBqD+gGFWjQpja4WdIuGUtpso0piBVYp00aKJRStGnroFC1IE3bKLB3jkMTtbs/7Hvvfb6f76/Pfd9oYRBougiVdZOiYgCN5Qy00K3RggbSJtCN8bwMjATkJzpDkegRUxOXNiUMQ9XrHQ5WFUmoAoUVSQ7KjkHawSVYw4HeVQmUaCYGIJ99Z92zw7gMdJ2NAx2vx/qHcQ4iX4qBFngPMqnTMQGaGmaICIY5XfhmDNegBOxzUwcaPnIf2pEhDyR7K64ahBsSsqiINlJBezT61w0NsDJaCKykA7RhAFlFGRmmZjNRJGXvQSiVgzCyasmDYCqlpG2ur97rsWFcYeUSQDYlQ1SlrA3ggc5polrG4LvKR5gxBDG7gHFUTcyAcVQzoJG2hcpqiAeVVi+RqhoqmWaIYHkpiJpuxMRSLa6FVGbCR+y0AKoW//8hNsbulagB3k5sFeVaa7uGK9ZwIAk4AxmP3DdSSACWR+G9V1E9kYC6YU2tbew0y3EAlRwoHORFJW79PJ4T1c0YDwSJNUAROVBAqWpWMQWASrCSOAjyy1bWcVZVJZFj7XNHUofKZLn5hB3L9cdFu+EEkopiWLMhFERTu6MzixSoYDTpcZLU8QyhG6yoSEhRhMSiePJq6fzU6gOV5VKIhCir28ovG0+txkDdOrqL5UKRNZSsxiWso6wme90zq/c1U7H1aRWCnde7Kx9ec+ciaZr0nVhDrGcVzjpaUuev1hgDQ8sSHEQc1vNUnoMwJQJr6ZNYjBNiA3IANHenvLpoinSKjLRL6aZMEDbnhM6eDiYYDadFrwQj7qZkh6+ji6B9Tr/X7/fRToImKZImaSJMcrwp7Ugo4SEtRjfxMWdbUPFGkh6llUnmBkgAki46GRO829Lbt2fIrkQo0dbVAdJJqPU2Kx6f2tpO9vZFu7algzmGhKFsLOv3NTVgKDpzUOQDRlDkk72MEunpSWmUfyfDp3tcgm60efwt6WSEVg05qnu71XDcvSo8j89PUOUIvZSboexnakUbElDiRsI64qZdL2hAV9EsAWN5VDLD1EcnkA7Bm78vlGfK4dDOaxL+zkQL0qS10AP4zWiSYCHOwJyU043R3nqKqXe5sG27opPBspvoDSV4Iqqxii4gGbauSL7AJUwlBfhi8IZiX7DFjjpph48GFwEyKtQBUY7KmuwlwsvTlGhvmVn+sgioxVlFzJXcWgsl1Q/lMkM8Z/J8YnBIpvw5t0scACYnzJZN0Oyw3aCACFlHxWGcU+WTFd0VUa4UQVMERb+cIdDsA5Ioi6iepd/ySNetCQ8q9tz1AAOmABr+BXepG9SvVyM0ICPB2r6v0bj9fv/8jUErVC4E8ft8L69F6WB1NLRT1ueuB5QpDlP6ZGYFTYi8tXQnWsRYP8UIAz7g8Qk072Q4zuv3MB6G9wCAxMXRJ9F9IHKIxW6mCjWDQLMQ3V9G1lraLLMZe8YEXLTH5UWZNqDhzUkmDyLmQAu0c9AbMFUDEmT56WAbEWS5BCAiJf1ZhZbd9zTtag++1EusFhIRUpfvzoICdUUUhHwEaKgxVpGToMmjYamBPOIKN+22Zhne5xQ4xgV4mmEE9C00ozG0wvaV7CbsSVtgJRT7IGfNJFwBvN7tduENmMwGGC9qU+mG3ZO3c1Xir6774/cfraooPZVS+A3lLFU9f/HuvYGdP2w91jXZb41ZeGIMb7xlcU9j5MnFJ6cPRMOvXf7NN3K94e9ebKx2vfP0ud2XKscfvvXAxuc8V6Lraz6fy879d+HUg7HCpS/eutPRdz4wv5B8422f46fVL9XmPnvzha3dI0f61r+y5e1i4UWwONPtuefKhwO/3MG9WxO+tWMWHLzSdfiO07X7L3+t7k/4vr82PfPWocg/sOD+dc1VH5+a+ThS+9FWPlFze83pZ+6oHZOufnPj
8b7ZquoLc5+2ZKvqHk/+/elkf8++atfvUoS2Y/ypxz6DW0+F2JqNnzzq+c9d701f+nDrwd4viIcC5/929vDJ+89czX3w7AfrN21ofe2AcHVvvE6o2oP/4tjBfZff3XuunWhulP6yPf/AWd9Hvk7t4gNnDp5v/Mn4BvWx5L3fqrj/9vef+O1GZuihQ1/ndi8x+I8Wb75KTTX+8/XeBbPukebTtUOmclv6HP/pw2NH+i9U/qHwvdETfet/Nvz5/he7/9VwqZXNCbfcNn3zn1OhxfFCOtNGOJ6qm++/8u/iD5733J1/vZXacPnYBf8jD26fKjo2vfLcTcqFLU9kT9/76uSJ+eHRzkNbTi5+m3tcueumH79fWVHx5ZeVFWd6At0H0Pv/AJy+xfc=
eNp1VQ1wFPUVTzBWLOMoMxbSSsp6DUpj9rJ7t7mvENvkLt8kly9ILjGce7v/u9u7/cp+3BfF2lQgQBR3xonaKi2Q3GEIITEMUkNspy20KlNop8gQOwXS0VhpyxgUyqBD/3u5YDLQnbub233v/d7vvff7v+1JRYAkMwKfPczwCpBISoE3staTkkC3CmTluSQHlKBADzS6W1r3qxJzviCoKKLsKCoiRcYoiIAnGSMlcEURvIgKkkoR/C+yIA0z4BPo+FR2ZLOBA7JMBoBscHRuNlACTMUrBoehDQY8LiN+QZUQhYFOiMlsKDRIAgugVZWBZNjSVWjgBBqw8EFAVFBCQDmGZ6CXrEiA5AwOP8nKoNCgAE6EBSiqBGMxIwafCAI7l1GJizqgX+XT9cHg238dmw08yelWTmUVRmTj0EoDmZIYcc7BUJ8xIEpUQPQ2BWDPEEUIwM4AyQj9RVKCGLB/so4nSrAtksKA9J2fkWTFy+j13iaSATFsgVUA2A76/9mhgz4IRgI0LGQB1qJA2KJMoOALAUqBcVu6tqSCgKQhpb9nPTQQFGRFG1k8scMkRQHYUcBTAs3wAW08kGDEQoQGfpZUQCGSkBV6CCbhQbpV2lAYABElWSYCknOx2igpiixDkbq9KCQL/HBmuKhO6E7zkD5TFCqBV7QjbkilrKaoMQ4FxiO40QqnNhpDZYVkeBYKBmVJyCoppu0TCw0iSYUhCJoRr5acCx5Z6CPI2mA9SblbFkGSEhXUBkmJsxDjC59LKq8LUEs5G+9MlzHeTpcyG3EcfsYWIctxntIG02p8a1E0UKQ4SgkQRNuLJSlBCDNAm8q+z+ul/F4fV8pT5Spn7Gh3OqsSFQF/wilHEp6GcGPcCegqtwpcIbys1tfASM1hFLcSFtxeTBTjKG7EjJAF2hCVXa0uLlLe3UqzbjzAeGulyrqOULxOaPO1kHUdVfVeOWGJc/FmosXkbgNV630hq68DM0pEGYHXRwRPwBi3Bt2xaKQ94XIXe6y2WlzwVjd7cL6qImRsM1V2x9aT7bFwCQIpqxGGLhXLQ84g3mAT1JomTy3ZrRjFCpu9LkwHaipNcqWFawmYNoI6e3vItoAzbrWhWIa2BSNsmH6NzEuGBXxACWr7zVbigARkEW4Q8NMkbKSiyj0DUKTg1B9TmU2yz133tb5XDLigYLXJSokpRDATUk/GERNmKkZwwoFbHWYcqapvHXZm0rTeVZljrRLJy36ozor585CigiofBvSQ865nYFI/A3C+On24sVAQEwUZoBlW2nA72jy3Q9Ea1/jcsUMFKUDyTCKdVptMH4ZoIhalKZWmg5Eoh9kThJnxAZXyH8mEwG2ip4GEUE7W9hebbCMZy7wch2CtGIpjKIa/HUPhCgQswzGwn+nfzCKXtYFi2OxjdzooQhjAlZ8i0tPA3lnoIQEOyljP/TUMYbfbj9/daR7KbNcv89uLvWSwkA1u4uRjdzpkIPZh8nBs3htlaO18Przx4sDus5CY1VRswywWK2210jQJ/HaCsBTb7H77r+BLgaEgij5MUZAUFK5J+NZS4tr5Qo6M6aun1IwXmy2w0hK4zClWpUGL6nMJeg1yCSJKgBVI+jDlRymSCgJ0Tn9ayuVpKKuvcR5tRxcKCXWLc2/MFC/IPOP3J1uABAejDVGsoNJwk0og6axEm8s82hG72W4zETaT2W82mYENQ2vKXKPzaLdlN6Cv4RTJQu4RShsPmksNDoIwG0oQjiy1WeCY0u/VnyT1WvnAiSXLVu9ampW+7oHfW7f6Wt7nP8QeOn75id7Szq2D70UOdT7w+ZlVUw9XTC3pRs78tdZ3amV1QX3ovxeWNR3NL/l+Y16Ob7u55ObFRw05sfvt2R80nh5Y/tnHE8fC/JP/+dfpH3yUmuiavmadfqPp5x+N/+wt7HJu4nrvqV94rrzSX/nLmY4vsPHq0sjQuwWdj12uOFq9BFF37Nt/9uQfXn1hTdu6hhlmxem27tlLv+eJ
wCfI49eXP/rcqt/++uobtT/a9tqpwEyvayb4/IsPvrw0+2DFyuxPV0X3FDw4vpzuiLe+rp79Jp59/IN9T7f24t238h6bGfO09k5v7zzz6fuHh2/03bzw8bWNI+d2e62zH+Z7i77y9ewd2sAG/5KzevrGBlP5D5u+fezNb+155DPpxJqcA4c6d7wXfefS6NadnTPLRr5cMcsevXf7+oOzD1dfWH2p/GLupPrMK8RJ31lHwaZoX8Fg6b/7tveXvvbiP7C8vWHqiz91Ne2d3jmRHK3uWpuYuvkJm9/+7DXf8aeH+0+u3HQFz+8amFzb94it55+M+zdh94lc7+s1K0MVO/bMYl+Rl3LBA/3fk59aN9X/lPvZYPONsFlgrp+VN9za/eVSbNtJypO7a03g5Xt3nyuMXLnqPbftu/f97uKZJ7u3bZ0JTjAvtf954+eHv/G3XVfX7tp58DuHxtatyzuwfmzTm2zNQeJdxFH542x94vdk5Q1XPlOdk5X1P4F86do=

View File

@@ -1 +1 @@
eNqdVn1sE+cZDzA62KqOqfRji+gOq0Mazdl3/jj74rkodgIkTeIktnAKot7l7rV99n357rVjJ8q2UjqtZStcmyJYO6nFjp2ZLAklQEsKalZou7UFNG0VZKxsnbRIKxkMVqlby7L3HLskgv0z/2Hf+77P83uf5/d7nue8o5gGqsbL0pIRXoJAZViIFpq+o6iCZApocGdBBDAmc/kOfyCYS6n8hfUxCBWt3mJhFN4sK0BieDMri5Y0aWFjDLSgZ0UAZZh8j8xlp5e81G8SgaYxUaCZ6rFt/SZWRndJEC1MvchFwxwY5JEJFpFVmMVgr2yqw0yqLADDJKUB1TSwHe2IMgcEYyuqQNwu4yIv8YalhPZI9KtBFTAiWkQYQQNoAwJRQUnBlGogEWbC2JNloRIHzCrlGyIpqZy3gfXFcz3Wb5IYsWwgpgTIK0LWMOCAxqq8UrExtVWOjKgxg8MoIhSDchTRBlSz4aEwKsJB7GplUEVFrKmQB/PLCK9qMMyX6bgZUgXJNGCkBRBh3P82MWwMuXgVcEZiCyAXexscVr3lnjhgIXIe2D5QjAGGQ+HtzsdkDeqji5UdY1gWIMKBxMocL0X1X0b7eKUO40BEYCAoIXgJlDnTSwkAFJwR+DQozHvp44yiCDzLGOeWuCZLIxX1cSOSW49Lhtw4qhUJ6hN+FERDs6Uji0pQwkizw2omxjO4BhleElBJ4QKD4iko5fPJhQcKwyYQCF4pb70w7zy60EbW9KE2hvUHFkEyKhvThxhVpOyHF+6rKckoUr3o67j1usrhzetsZpI0Ow8tAtayEqsPlWvz2CJnANUszsoIQ3+ZGK3yIwApCmN6zk5SwyrQFNRQ4IkCcoMpbUceaQHee6dYaawD/keqIn5Yc1++EeminwgBrg6z2jA/CzErYbVjJFVPuOptVmxTW3DEV7kmeFsZDgVVRtIiSIqmquxFNpaSEoAr+W4r+AlDcJSNET5qXRxkFFkDeCUqfaQb75ofKXhz4+H56sJlNcpIfF/5Wv1EWfnevkwvx6Y4LpbuFQm6z27je0CKjUxUXFD3GNeggHBR03NOkhitnFS5L6FcCZwkcII8nsFR9wOBF3nEZ/m7Mtc0Pe8gCOLVWw2gnABoAhbtRPlzcqGFCkQkmnH3TRg7TdOv396oCmVDJrTTeXyxlQYWRkNaRe3VWw0qEAcIbSRTtcZ5Tr/wIFqEaUCxJMHYaYeLAsDqoiiWdpA22kpaGYqycq+hicizCMUQU0HDFUfTAA1xmNUv1IlMxugzj4102CiUqRuNL1ZIcSCQ6mmUjRw0N6aoQJAZbsy3EfcxbAzggXL96cXGR9sb2pp9pQAK0ifLCR48O71kWTjMRsI9ogd4tyQojU/xZMIcaBaSDRmf7O2LdIRaXb5gV5KnBDlgb4i3Ols7cdJppSnED2nFSTNhJs0k3mVmuZTQEpO6etUw2cCFrRt9EhWIO6QmV7yvxwxA3EbGwxFqU3Lz5oy5M+aPbexsBcm4rHZ7JYdTaWo2d28Ndm5K+vpcZtmfDWdpZwPKhoExj8WNodpEw1LzVDoERx2Cz/eHrdofbowrc+AxL56Gbmwzev/5JSHrxgIGmQD9ovEe4CHwtMsSuDCIOEilec4DfTwX73ZJgVAooRL0Iy4uGbJFNLjRQTcm4wFSgWJQo7YoXVH7AhIcThonKjxQhN1VrsKbof+fUR3txhc2PO5X5l/0RUnWJD4SKQSAihpIL7GCnOLQYFdBAWne1fCoPuHinNYI60SNTDldERSeF43MKtoX4yFvvBWKjIBqLM3qh2M2j6nebreZ3JjIeFwUaqfy34HHC0ZNStHTS97/1q4VNeXPMqFrm/wCcf/ATKj7kq31ce+K15t+P912cNrT9pu9K+m3fui+FK/9Refg3MPJlefs3+3+gH5RO7nL/4D3jZZ3vr7nDrfJ/rWj/snPZt6+1Dc5MH3iwdk3x2YjNy6e2r3hY1/P1JXVf7va8tTk8MlV37YXn/kO
fe5Ux+XYzDcLnUfPvGK/s987/Oy19i2J+06tX9X6Cra1KTne8oRiK9WeXnFkT27DA94VP//34YOBvf9co05cs/z4pQ8++pn34Rs7V91/2v2l8SPv1det+dTWPAiD6aF3915f/ufBj5ZPvXh26s3erRdnGObMrtnPQsmZ8e4fFD9595izc8tf3/5D+0XLtTs+zv39q6WtzYW1+Kcrz479atXSp5rG7sF7W6//5fl7175RkJY/P3qwtr1/eN+h0e/9dnat/3NPd8tg3SbsmTNXp/50+Y/H57bf+da50pF9u++u/cf1iaWjcfF3U4+1rV6/br/4Fd+hWfe2u85P5j+hz33//NpYbmf6ycDc8GXhuQzx5L/2v8/t+2k+cNf1b/ivfF4rNTWvidx4uhhqIvgY/VDowwNTZ/cfq/vRuv0jG17L5O4+n0sFV9/znMMxPf6fld6hXet/fXXdT3ITS69cvvHlmpq5uWU1LxTS2T3Lamr+Cz2nCNA=

File diff suppressed because one or more lines are too long


View File

@@ -107,7 +107,7 @@ outputs will appear as part of the [AIMessage](/docs/concepts/messages/#aimessag
response object. See for example:
- Generating [audio outputs](/docs/integrations/chat/openai/#audio-generation-preview) with OpenAI;
- Generating [image outputs](/docs/integrations/chat/google_generative_ai/#image-generation) with Google Gemini.
- Generating [image outputs](/docs/integrations/chat/google_generative_ai/#multimodal-usage) with Google Gemini.
#### Tools

View File

@@ -66,7 +66,7 @@ This API works with a list of [Document](https://python.langchain.com/api_refere
from langchain_core.documents import Document
document_1 = Document(
page_content="I had chocalate chip pancakes and scrambled eggs for breakfast this morning.",
page_content="I had chocolate chip pancakes and scrambled eggs for breakfast this morning.",
metadata={"source": "tweet"},
)

View File

@@ -13,23 +13,33 @@ Install `uv`: **[documentation on how to install it](https://docs.astral.sh/uv/g
This repository contains multiple packages:
- `langchain-core`: Base interfaces for key abstractions as well as logic for combining them in chains (LangChain Expression Language).
- `langchain-community`: Third-party integrations of various components.
- `langchain`: Chains, agents, and retrieval logic that make up the cognitive architecture of your applications.
- `langchain-experimental`: Components and chains that are experimental, either in the sense that the techniques are novel and still being tested, or they require giving the LLM more access than would be possible in most production systems.
- Partner integrations: Partner packages in `libs/partners` that are independently version controlled.
:::note
Some LangChain packages live outside the monorepo, see for example
[langchain-community](https://github.com/langchain-ai/langchain-community) for various
third-party integrations and
[langchain-experimental](https://github.com/langchain-ai/langchain-experimental) for
abstractions that are experimental (either in the sense that the techniques are novel
and still being tested, or they require giving the LLM more access than would be
possible in most production systems).
:::
Each of these has its own development environment. Docs are run from the top-level makefile, but development
is split across separate test & release flows.
For this quickstart, start with langchain-community:
For this quickstart, start with `langchain`:
```bash
cd libs/community
cd libs/langchain
```
## Local Development Dependencies
Install langchain-community development requirements (for running langchain, running examples, linting, formatting, tests, and coverage):
Install development requirements (for running langchain, running examples, linting, formatting, tests, and coverage):
```bash
uv sync
@@ -62,22 +72,15 @@ make docker_tests
There are also [integration tests and code-coverage](../testing.mdx) available.
### Only develop langchain_core or langchain_community
### Developing langchain_core
If you are only developing `langchain_core` or `langchain_community`, you can simply install the dependencies for the respective projects and run tests:
If you are only developing `langchain_core`, you can simply install the dependencies for the project and run tests:
```bash
cd libs/core
make test
```
Or:
```bash
cd libs/community
make test
```
## Formatting and Linting
Run these locally before submitting a PR; the CI system will check them as well.

View File

@@ -336,70 +336,6 @@
"chain.with_config(configurable={\"llm_temperature\": 0.9}).invoke({\"x\": 0})"
]
},
{
"cell_type": "markdown",
"id": "fb9637d0",
"metadata": {},
"source": [
"### With HubRunnables\n",
"\n",
"This is useful to allow for switching of prompts"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "9a9ea077",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ChatPromptValue(messages=[HumanMessage(content=\"You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.\\nQuestion: foo \\nContext: bar \\nAnswer:\")])"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.runnables.hub import HubRunnable\n",
"\n",
"prompt = HubRunnable(\"rlm/rag-prompt\").configurable_fields(\n",
" owner_repo_commit=ConfigurableField(\n",
" id=\"hub_commit\",\n",
" name=\"Hub Commit\",\n",
" description=\"The Hub commit to pull from\",\n",
" )\n",
")\n",
"\n",
"prompt.invoke({\"question\": \"foo\", \"context\": \"bar\"})"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "f33f3cf2",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ChatPromptValue(messages=[HumanMessage(content=\"[INST]<<SYS>> You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.<</SYS>> \\nQuestion: foo \\nContext: bar \\nAnswer: [/INST]\")])"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"prompt.with_config(configurable={\"hub_commit\": \"rlm/rag-prompt-llama\"}).invoke(\n",
" {\"question\": \"foo\", \"context\": \"bar\"}\n",
")"
]
},
{
"cell_type": "markdown",
"id": "79d51519",

View File

@@ -530,7 +530,7 @@
"\n",
" def _run(\n",
" self, a: int, b: int, run_manager: Optional[CallbackManagerForToolRun] = None\n",
" ) -> str:\n",
" ) -> int:\n",
" \"\"\"Use the tool.\"\"\"\n",
" return a * b\n",
"\n",
@@ -539,7 +539,7 @@
" a: int,\n",
" b: int,\n",
" run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n",
" ) -> str:\n",
" ) -> int:\n",
" \"\"\"Use the tool asynchronously.\"\"\"\n",
" # If the calculation is cheap, you can just delegate to the sync implementation\n",
" # as shown below.\n",

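The hunk above corrects the return annotations of the tool's `_run`/`_arun` methods from `str` to `int`, since they return `a * b`. Stripped of the `BaseTool` machinery, the corrected sync/async pair reduces to the sketch below (plain Python; the `multiply`/`amultiply` names are illustrative, not from the notebook):

```python
import asyncio


def multiply(a: int, b: int) -> int:
    """Multiply two integers; the return type is int, matching the value returned."""
    return a * b


async def amultiply(a: int, b: int) -> int:
    """Async variant; a cheap computation can simply delegate to the sync version."""
    return multiply(a, b)


print(multiply(6, 7))                # 42
print(asyncio.run(amultiply(6, 7)))  # 42
```

The same delegation pattern is what the notebook's `_arun` uses when the underlying computation is cheap.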
View File

@@ -167,7 +167,7 @@
"She was, in 1906, the first woman to become a professor at the University of Paris.\n",
"\"\"\"\n",
"documents = [Document(page_content=text)]\n",
"graph_documents = llm_transformer.convert_to_graph_documents(documents)\n",
"graph_documents = await llm_transformer.aconvert_to_graph_documents(documents)\n",
"print(f\"Nodes:{graph_documents[0].nodes}\")\n",
"print(f\"Relationships:{graph_documents[0].relationships}\")"
]
@@ -205,7 +205,7 @@
" allowed_nodes=[\"Person\", \"Country\", \"Organization\"],\n",
" allowed_relationships=[\"NATIONALITY\", \"LOCATED_IN\", \"WORKED_AT\", \"SPOUSE\"],\n",
")\n",
"graph_documents_filtered = llm_transformer_filtered.convert_to_graph_documents(\n",
"graph_documents_filtered = await llm_transformer_filtered.aconvert_to_graph_documents(\n",
" documents\n",
")\n",
"print(f\"Nodes:{graph_documents_filtered[0].nodes}\")\n",
@@ -245,7 +245,9 @@
" allowed_nodes=[\"Person\", \"Country\", \"Organization\"],\n",
" allowed_relationships=allowed_relationships,\n",
")\n",
"graph_documents_filtered = llm_transformer_tuple.convert_to_graph_documents(documents)\n",
"graph_documents_filtered = await llm_transformer_tuple.aconvert_to_graph_documents(\n",
" documents\n",
")\n",
"print(f\"Nodes:{graph_documents_filtered[0].nodes}\")\n",
"print(f\"Relationships:{graph_documents_filtered[0].relationships}\")"
]
@@ -289,7 +291,9 @@
" allowed_relationships=[\"NATIONALITY\", \"LOCATED_IN\", \"WORKED_AT\", \"SPOUSE\"],\n",
" node_properties=[\"born_year\"],\n",
")\n",
"graph_documents_props = llm_transformer_props.convert_to_graph_documents(documents)\n",
"graph_documents_props = await llm_transformer_props.aconvert_to_graph_documents(\n",
" documents\n",
")\n",
"print(f\"Nodes:{graph_documents_props[0].nodes}\")\n",
"print(f\"Relationships:{graph_documents_props[0].relationships}\")"
]

View File

@@ -100,7 +100,7 @@
"id": "8554bae5",
"metadata": {},
"source": [
"A chat prompt is made up a of a list of messages. Similarly to the above example, we can concatenate chat prompt templates. Each new element is a new message in the final prompt.\n",
"A chat prompt is made up of a list of messages. Similarly to the above example, we can concatenate chat prompt templates. Each new element is a new message in the final prompt.\n",
"\n",
"First, let's initialize a [`ChatPromptTemplate`](https://python.langchain.com/api_reference/core/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html) with a [`SystemMessage`](https://python.langchain.com/api_reference/core/messages/langchain_core.messages.system.SystemMessage.html)."
]

View File

@@ -6,9 +6,9 @@
"source": [
"# How to disable parallel tool calling\n",
"\n",
":::info OpenAI-specific\n",
":::info Provider-specific\n",
"\n",
"This API is currently only supported by OpenAI.\n",
"This API is currently only supported by OpenAI and Anthropic.\n",
"\n",
":::\n",
"\n",
@@ -55,12 +55,12 @@
"import os\n",
"from getpass import getpass\n",
"\n",
"from langchain_openai import ChatOpenAI\n",
"from langchain.chat_models import init_chat_model\n",
"\n",
"if \"OPENAI_API_KEY\" not in os.environ:\n",
" os.environ[\"OPENAI_API_KEY\"] = getpass()\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4o-mini\", temperature=0)"
"llm = init_chat_model(\"openai:gpt-4.1-mini\")"
]
},
{
@@ -121,7 +121,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -74,7 +74,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 2,
"id": "90187d07",
"metadata": {},
"outputs": [],
@@ -90,7 +90,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 3,
"id": "d7009e1a",
"metadata": {},
"outputs": [
@@ -99,7 +99,7 @@
"output_type": "stream",
"text": [
"multiply\n",
"multiply(first_int: int, second_int: int) -> int - Multiply two integers together.\n",
"Multiply two integers together.\n",
"{'first_int': {'title': 'First Int', 'type': 'integer'}, 'second_int': {'title': 'Second Int', 'type': 'integer'}}\n"
]
}
@@ -112,7 +112,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 4,
"id": "be77e780",
"metadata": {},
"outputs": [
@@ -122,7 +122,7 @@
"20"
]
},
"execution_count": 3,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
@@ -154,7 +154,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 5,
"id": "9bce8935-1465-45ac-8a93-314222c753c4",
"metadata": {},
"outputs": [],
@@ -177,7 +177,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 6,
"id": "3bfe2cdc-7d72-457c-a9a1-5fa1e0bcde55",
"metadata": {},
"outputs": [],
@@ -195,7 +195,7 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 7,
"id": "68f30343-14ef-48f1-badd-b6a03977316d",
"metadata": {},
"outputs": [
@@ -204,10 +204,11 @@
"text/plain": [
"[{'name': 'multiply',\n",
" 'args': {'first_int': 5, 'second_int': 42},\n",
" 'id': 'call_cCP9oA3tRz7HDrjFn1FdmDaG'}]"
" 'id': 'call_8QIg4QVFVAEeC1orWAgB2036',\n",
" 'type': 'tool_call'}]"
]
},
"execution_count": 9,
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
@@ -237,7 +238,7 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 8,
"id": "4f5325ca-e5dc-4d1a-ba36-b085a029c90a",
"metadata": {},
"outputs": [
@@ -247,7 +248,7 @@
"92"
]
},
"execution_count": 12,
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
@@ -274,58 +275,31 @@
"source": [
"## Agents\n",
"\n",
"Chains are great when we know the specific sequence of tool usage needed for any user input. But for certain use cases, how many times we use tools depends on the input. In these cases, we want to let the model itself decide how many times to use tools and in what order. [Agents](/docs/tutorials/agents) let us do just this.\n",
"Chains are great when we know the specific sequence of tool usage needed for any user input. But for certain use cases, how many times we use tools depends on the input. In these cases, we want to let the model itself decide how many times to use tools and in what order. [Agents](/docs/concepts/agents/) let us do just this.\n",
"\n",
"LangChain comes with a number of built-in agents that are optimized for different use cases. Read about all the [agent types here](/docs/concepts/agents).\n",
"\n",
"We'll use the [tool calling agent](https://python.langchain.com/api_reference/langchain/agents/langchain.agents.tool_calling_agent.base.create_tool_calling_agent.html), which is generally the most reliable kind and the recommended one for most use cases.\n",
"We'll demonstrate a simple example using a LangGraph agent. See [this tutorial](/docs/tutorials/agents) for more detail.\n",
"\n",
"![agent](../../static/img/tool_agent.svg)"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "21723cf4-9421-4a8d-92a6-eeeb8f4367f1",
"execution_count": null,
"id": "86789cfb-f441-4453-adf8-961eeceb00bc",
"metadata": {},
"outputs": [],
"source": [
"from langchain import hub\n",
"from langchain.agents import AgentExecutor, create_tool_calling_agent"
"!pip install -qU langgraph"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "6be83879-9da3-4dd9-b147-a79f76affd7a",
"execution_count": 9,
"id": "21723cf4-9421-4a8d-92a6-eeeb8f4367f1",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"================================\u001b[1m System Message \u001b[0m================================\n",
"\n",
"You are a helpful assistant\n",
"\n",
"=============================\u001b[1m Messages Placeholder \u001b[0m=============================\n",
"\n",
"\u001b[33;1m\u001b[1;3m{chat_history}\u001b[0m\n",
"\n",
"================================\u001b[1m Human Message \u001b[0m=================================\n",
"\n",
"\u001b[33;1m\u001b[1;3m{input}\u001b[0m\n",
"\n",
"=============================\u001b[1m Messages Placeholder \u001b[0m=============================\n",
"\n",
"\u001b[33;1m\u001b[1;3m{agent_scratchpad}\u001b[0m\n"
]
}
],
"outputs": [],
"source": [
"# Get the prompt to use - can be replaced with any prompt that includes variables \"agent_scratchpad\" and \"input\"!\n",
"prompt = hub.pull(\"hwchase17/openai-tools-agent\")\n",
"prompt.pretty_print()"
"from langgraph.prebuilt import create_react_agent"
]
},
{
@@ -338,7 +312,7 @@
},
{
"cell_type": "code",
"execution_count": 15,
"execution_count": 10,
"id": "95c86d32-ee45-4c87-a28c-14eff19b49e9",
"metadata": {},
"outputs": [],
@@ -360,24 +334,13 @@
},
{
"cell_type": "code",
"execution_count": 16,
"execution_count": 11,
"id": "17b09ac6-c9b7-4340-a8a0-3d3061f7888c",
"metadata": {},
"outputs": [],
"source": [
"# Construct the tool calling agent\n",
"agent = create_tool_calling_agent(llm, tools, prompt)"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "675091d2-cac9-45c4-a5d7-b760ee6c1986",
"metadata": {},
"outputs": [],
"source": [
"# Create an agent executor by passing in the agent and tools\n",
"agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)"
"agent = create_react_agent(llm, tools)"
]
},
{
@@ -390,62 +353,72 @@
},
{
"cell_type": "code",
"execution_count": 18,
"id": "f7dbb240-809e-4e41-8f63-1a4636e8e26d",
"execution_count": 13,
"id": "71c84594-d420-4703-8bdd-ca4eb7efefb6",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"================================\u001b[1m Human Message \u001b[0m=================================\n",
"\n",
"Take 3 to the fifth power and multiply that by the sum of twelve and three, then square the whole result.\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" exponentiate (call_EHGS8gnEVNCJQ9rVOk11KCQH)\n",
" Call ID: call_EHGS8gnEVNCJQ9rVOk11KCQH\n",
" Args:\n",
" base: 3\n",
" exponent: 5\n",
" add (call_s2cxOrXEKqI6z7LWbMUG6s8c)\n",
" Call ID: call_s2cxOrXEKqI6z7LWbMUG6s8c\n",
" Args:\n",
" first_int: 12\n",
" second_int: 3\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
"Name: add\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\n",
"Invoking: `exponentiate` with `{'base': 3, 'exponent': 5}`\n",
"15\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" multiply (call_25v5JEfDWuKNgmVoGBan0d7J)\n",
" Call ID: call_25v5JEfDWuKNgmVoGBan0d7J\n",
" Args:\n",
" first_int: 243\n",
" second_int: 15\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
"Name: multiply\n",
"\n",
"3645\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" exponentiate (call_x1yKEeBPrFYmCp2z5Kn8705r)\n",
" Call ID: call_x1yKEeBPrFYmCp2z5Kn8705r\n",
" Args:\n",
" base: 3645\n",
" exponent: 2\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
"Name: exponentiate\n",
"\n",
"\u001b[0m\u001b[38;5;200m\u001b[1;3m243\u001b[0m\u001b[32;1m\u001b[1;3m\n",
"Invoking: `add` with `{'first_int': 12, 'second_int': 3}`\n",
"13286025\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"\n",
"\n",
"\u001b[0m\u001b[33;1m\u001b[1;3m15\u001b[0m\u001b[32;1m\u001b[1;3m\n",
"Invoking: `multiply` with `{'first_int': 243, 'second_int': 15}`\n",
"\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3m3645\u001b[0m\u001b[32;1m\u001b[1;3m\n",
"Invoking: `exponentiate` with `{'base': 405, 'exponent': 2}`\n",
"\n",
"\n",
"\u001b[0m\u001b[38;5;200m\u001b[1;3m13286025\u001b[0m\u001b[32;1m\u001b[1;3mThe result of taking 3 to the fifth power is 243. \n",
"\n",
"The sum of twelve and three is 15. \n",
"\n",
"Multiplying 243 by 15 gives 3645. \n",
"\n",
"Finally, squaring 3645 gives 13286025.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
"The final result of taking 3 to the fifth power, multiplying it by the sum of twelve and three, and then squaring the whole result is **13,286,025**.\n"
]
},
{
"data": {
"text/plain": [
"{'input': 'Take 3 to the fifth power and multiply that by the sum of twelve and three, then square the whole result',\n",
" 'output': 'The result of taking 3 to the fifth power is 243. \\n\\nThe sum of twelve and three is 15. \\n\\nMultiplying 243 by 15 gives 3645. \\n\\nFinally, squaring 3645 gives 13286025.'}"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_executor.invoke(\n",
" {\n",
" \"input\": \"Take 3 to the fifth power and multiply that by the sum of twelve and three, then square the whole result\"\n",
" }\n",
")"
"# Use the agent\n",
"\n",
"query = (\n",
" \"Take 3 to the fifth power and multiply that by the sum of twelve and \"\n",
" \"three, then square the whole result.\"\n",
")\n",
"input_message = {\"role\": \"user\", \"content\": query}\n",
"\n",
"for step in agent.stream({\"messages\": [input_message]}, stream_mode=\"values\"):\n",
" step[\"messages\"][-1].pretty_print()"
]
},
{
@@ -473,7 +446,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
"version": "3.10.4"
}
},
"nbformat": 4,
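As a sanity check on the agent transcript above, the tool sequence it reports can be verified with plain arithmetic, independent of any LangChain or LangGraph API:

```python
# Reproduce the agent's tool calls by hand:
# exponentiate(3, 5) -> 243, add(12, 3) -> 15,
# multiply(243, 15) -> 3645, exponentiate(3645, 2) -> 13286025
base_power = 3 ** 5                  # 243
addition = 12 + 3                    # 15
product = base_power * addition      # 3645
result = product ** 2

print(result)  # 13286025
```

This matches the final answer (**13,286,025**) shown in the new streamed output.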

File diff suppressed because it is too large

View File

@@ -1,35 +1,26 @@
{
"cells": [
{
"cell_type": "raw",
"id": "afaf8039",
"cell_type": "markdown",
"id": "d982c99f",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Google AI\n",
"sidebar_label: Google Gemini\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "e49f1e0d",
"id": "56a6d990",
"metadata": {},
"source": [
"# ChatGoogleGenerativeAI\n",
"\n",
"This docs will help you get started with Google AI [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatGoogleGenerativeAI features and configurations head to the [API reference](https://python.langchain.com/api_reference/google_genai/chat_models/langchain_google_genai.chat_models.ChatGoogleGenerativeAI.html).\n",
"Access Google's Generative AI models, including the Gemini family, directly via the Gemini API or experiment rapidly using Google AI Studio. The `langchain-google-genai` package provides the LangChain integration for these models. This is often the best starting point for individual developers.\n",
"\n",
"Google AI offers a number of different chat models. For information on the latest models, their features, context windows, etc. head to the [Google AI docs](https://ai.google.dev/gemini-api/docs/models/gemini).\n",
"For information on the latest models, their features, context windows, etc. head to the [Google AI docs](https://ai.google.dev/gemini-api/docs/models/gemini). All examples use the `gemini-2.0-flash` model. Gemini 2.5 Pro and 2.5 Flash can be used via `gemini-2.5-pro-preview-03-25` and `gemini-2.5-flash-preview-04-17`. All model ids can be found in the [Gemini API docs](https://ai.google.dev/gemini-api/docs/models).\n",
"\n",
":::info Google AI vs Google Cloud Vertex AI\n",
"\n",
"Google's Gemini models are accessible through Google AI and through Google Cloud Vertex AI. Using Google AI just requires a Google account and an API key. Using Google Cloud Vertex AI requires a Google Cloud account (with term agreements and billing) but offers enterprise features like customer encryption key, virtual private cloud, and more.\n",
"\n",
"To learn more about the key features of the two APIs see the [Google docs](https://cloud.google.com/vertex-ai/generative-ai/docs/migrate/migrate-google-ai#google-ai).\n",
"\n",
":::\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/google_generativeai) | Package downloads | Package latest |\n",
@@ -37,23 +28,46 @@
"| [ChatGoogleGenerativeAI](https://python.langchain.com/api_reference/google_genai/chat_models/langchain_google_genai.chat_models.ChatGoogleGenerativeAI.html) | [langchain-google-genai](https://python.langchain.com/api_reference/google_genai/index.html) | ❌ | beta | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-google-genai?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-google-genai?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ |\n",
"\n",
"## Setup\n",
"### Setup\n",
"\n",
"To access Google AI models you'll need to create a Google Acount account, get a Google AI API key, and install the `langchain-google-genai` integration package.\n",
"To access Google AI models you'll need to create a Google Account, get a Google AI API key, and install the `langchain-google-genai` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Head to https://ai.google.dev/gemini-api/docs/api-key to generate a Google AI API key. Once you've done this set the GOOGLE_API_KEY environment variable:"
"**1. Installation:**"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94",
"id": "8d12ce35",
"metadata": {},
"outputs": [],
"source": [
"%pip install -U langchain-google-genai"
]
},
{
"cell_type": "markdown",
"id": "60be0b38",
"metadata": {},
"source": [
"**2. Credentials:**\n",
"\n",
"Head to [https://ai.google.dev/gemini-api/docs/api-key](https://ai.google.dev/gemini-api/docs/api-key) (or via Google AI Studio) to generate a Google AI API key.\n",
"\n",
"### Chat Models\n",
"\n",
"Use the `ChatGoogleGenerativeAI` class to interact with Google's chat models. See the [API reference](https://python.langchain.com/api_reference/google_genai/chat_models/langchain_google_genai.chat_models.ChatGoogleGenerativeAI.html) for full details.\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "fb18c875",
"metadata": {},
"outputs": [],
"source": [
@@ -66,7 +80,7 @@
},
{
"cell_type": "markdown",
"id": "72ee0c4b-9764-423a-9dbf-95129e185210",
"id": "f050e8db",
"metadata": {},
"source": [
"To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
@@ -75,7 +89,7 @@
{
"cell_type": "code",
"execution_count": null,
"id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
"id": "82cb346f",
"metadata": {},
"outputs": [],
"source": [
@@ -85,27 +99,7 @@
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain Google AI integration lives in the `langchain-google-genai` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-google-genai"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"id": "273cefa0",
"metadata": {},
"source": [
"## Instantiation\n",
@@ -115,15 +109,15 @@
},
{
"cell_type": "code",
"execution_count": 2,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"execution_count": 4,
"id": "7d3dc0b3",
"metadata": {},
"outputs": [],
"source": [
"from langchain_google_genai import ChatGoogleGenerativeAI\n",
"\n",
"llm = ChatGoogleGenerativeAI(\n",
" model=\"gemini-2.0-flash-001\",\n",
" model=\"gemini-2.0-flash\",\n",
" temperature=0,\n",
" max_tokens=None,\n",
" timeout=None,\n",
@@ -134,7 +128,7 @@
},
{
"cell_type": "markdown",
"id": "2b4f3e15",
"id": "343a8c13",
"metadata": {},
"source": [
"## Invocation"
@@ -142,19 +136,17 @@
},
{
"cell_type": "code",
"execution_count": 3,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"execution_count": 5,
"id": "82c5708c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"J'adore la programmation.\", additional_kwargs={}, response_metadata={'prompt_feedback': {'block_reason': 0, 'safety_ratings': []}, 'finish_reason': 'STOP', 'model_name': 'gemini-2.0-flash-001', 'safety_ratings': []}, id='run-61cff164-40be-4f88-a2df-cca58297502f-0', usage_metadata={'input_tokens': 20, 'output_tokens': 7, 'total_tokens': 27, 'input_token_details': {'cache_read': 0}})"
"AIMessage(content=\"J'adore la programmation.\", additional_kwargs={}, response_metadata={'prompt_feedback': {'block_reason': 0, 'safety_ratings': []}, 'finish_reason': 'STOP', 'model_name': 'gemini-2.0-flash', 'safety_ratings': []}, id='run-3b28d4b8-8a62-4e6c-ad4e-b53e6e825749-0', usage_metadata={'input_tokens': 20, 'output_tokens': 7, 'total_tokens': 27, 'input_token_details': {'cache_read': 0}})"
]
},
"execution_count": 3,
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
@@ -173,8 +165,8 @@
},
{
"cell_type": "code",
"execution_count": 4,
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"execution_count": 6,
"id": "49d2d0c2",
"metadata": {},
"outputs": [
{
@@ -191,7 +183,7 @@
},
{
"cell_type": "markdown",
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
"id": "ee3f6e1d",
"metadata": {},
"source": [
"## Chaining\n",
@@ -201,17 +193,17 @@
},
{
"cell_type": "code",
"execution_count": 5,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"execution_count": 7,
"id": "3c8407ee",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Ich liebe Programmieren.', additional_kwargs={}, response_metadata={'prompt_feedback': {'block_reason': 0, 'safety_ratings': []}, 'finish_reason': 'STOP', 'model_name': 'gemini-2.0-flash-001', 'safety_ratings': []}, id='run-dd2f8fb9-62d9-4b84-9c97-ed9c34cda313-0', usage_metadata={'input_tokens': 15, 'output_tokens': 7, 'total_tokens': 22, 'input_token_details': {'cache_read': 0}})"
"AIMessage(content='Ich liebe Programmieren.', additional_kwargs={}, response_metadata={'prompt_feedback': {'block_reason': 0, 'safety_ratings': []}, 'finish_reason': 'STOP', 'model_name': 'gemini-2.0-flash', 'safety_ratings': []}, id='run-e5561c6b-2beb-4411-9210-4796b576a7cd-0', usage_metadata={'input_tokens': 15, 'output_tokens': 7, 'total_tokens': 22, 'input_token_details': {'cache_read': 0}})"
]
},
"execution_count": 5,
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
@@ -241,22 +233,164 @@
},
{
"cell_type": "markdown",
"id": "41c2ff10-a3ba-4f40-b3aa-7a395854849e",
"id": "bdae9742",
"metadata": {},
"source": [
"## Image generation\n",
"## Multimodal Usage\n",
"\n",
"Some Gemini models (specifically `gemini-2.0-flash-exp`) support image generation capabilities.\n",
"Gemini models can accept multimodal inputs (text, images, audio, video) and, for some models, generate multimodal outputs.\n",
"\n",
"### Text to image\n",
"### Image Input\n",
"\n",
"See a simple usage example below:"
"Provide image inputs along with text using a `HumanMessage` with a list content format. The `gemini-2.0-flash` model can handle images."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "7589e14d-8d1b-4c82-965f-5558d80cb677",
"execution_count": null,
"id": "6833fe5d",
"metadata": {},
"outputs": [],
"source": [
"import base64\n",
"\n",
"from langchain_core.messages import HumanMessage\n",
"from langchain_google_genai import ChatGoogleGenerativeAI\n",
"\n",
"# Example using a public image URL\n",
"message_url = HumanMessage(\n",
" content=[\n",
" {\n",
" \"type\": \"text\",\n",
" \"text\": \"Describe the image at the URL.\",\n",
" },\n",
" {\"type\": \"image_url\", \"image_url\": \"https://picsum.photos/seed/picsum/200/300\"},\n",
" ]\n",
")\n",
"result_url = llm.invoke([message_url])\n",
"print(f\"Response for URL image: {result_url.content}\")\n",
"\n",
"# Example using a local image file encoded in base64\n",
"image_file_path = \"example_image.png\"  # Ensure this file exists or provide the correct path.\n",
"\n",
"with open(image_file_path, \"rb\") as image_file:\n",
" encoded_image = base64.b64encode(image_file.read()).decode(\"utf-8\")\n",
"\n",
"message_local = HumanMessage(\n",
" content=[\n",
" {\"type\": \"text\", \"text\": \"Describe the local image.\"},\n",
" {\"type\": \"image_url\", \"image_url\": f\"data:image/png;base64,{encoded_image}\"},\n",
" ]\n",
")\n",
"result_local = llm.invoke([message_local])\n",
"print(f\"Response for local image: {result_local.content}\")"
]
},
{
"cell_type": "markdown",
"id": "1b422382",
"metadata": {},
"source": [
"Other supported `image_url` formats:\n",
"- A Google Cloud Storage URI (`gs://...`). Ensure the service account has access.\n",
"- A PIL Image object (the library handles encoding).\n",
"\n",
"### Audio Input\n",
"\n",
"Provide audio file inputs along with text. Use a model like `gemini-2.0-flash`."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a3461836",
"metadata": {},
"outputs": [],
"source": [
"import base64\n",
"\n",
"from langchain_core.messages import HumanMessage\n",
"\n",
"# Ensure you have an audio file named 'example_audio.mp3' or provide the correct path.\n",
"audio_file_path = \"example_audio.mp3\"\n",
"audio_mime_type = \"audio/mpeg\"\n",
"\n",
"\n",
"with open(audio_file_path, \"rb\") as audio_file:\n",
" encoded_audio = base64.b64encode(audio_file.read()).decode(\"utf-8\")\n",
"\n",
"message = HumanMessage(\n",
" content=[\n",
" {\"type\": \"text\", \"text\": \"Transcribe the audio.\"},\n",
" {\n",
" \"type\": \"media\",\n",
" \"data\": encoded_audio, # Use base64 string directly\n",
" \"mime_type\": audio_mime_type,\n",
" },\n",
" ]\n",
")\n",
"response = llm.invoke([message])\n",
"print(f\"Response for audio: {response.content}\")"
]
},
{
"cell_type": "markdown",
"id": "0d898e27",
"metadata": {},
"source": [
"### Video Input\n",
"\n",
"Provide video file inputs along with text. Use a model like `gemini-2.0-flash`."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3046e74b",
"metadata": {},
"outputs": [],
"source": [
"import base64\n",
"\n",
"from langchain_core.messages import HumanMessage\n",
"from langchain_google_genai import ChatGoogleGenerativeAI\n",
"\n",
"# Ensure you have a video file named 'example_video.mp4' or provide the correct path.\n",
"video_file_path = \"example_video.mp4\"\n",
"video_mime_type = \"video/mp4\"\n",
"\n",
"\n",
"with open(video_file_path, \"rb\") as video_file:\n",
" encoded_video = base64.b64encode(video_file.read()).decode(\"utf-8\")\n",
"\n",
"message = HumanMessage(\n",
" content=[\n",
" {\"type\": \"text\", \"text\": \"Describe the first few frames of the video.\"},\n",
" {\n",
" \"type\": \"media\",\n",
" \"data\": encoded_video, # Use base64 string directly\n",
" \"mime_type\": video_mime_type,\n",
" },\n",
" ]\n",
")\n",
"response = llm.invoke([message])\n",
"print(f\"Response for video: {response.content}\")"
]
},
{
"cell_type": "markdown",
"id": "2df11d89",
"metadata": {},
"source": [
"### Image Generation (Multimodal Output)\n",
"\n",
"The `gemini-2.0-flash` model can generate text and images inline (image generation is experimental). You need to specify the desired `response_modalities`."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c0b7180f",
"metadata": {},
"outputs": [
{
@@ -266,17 +400,12 @@
"<IPython.core.display.Image object>"
]
},
"metadata": {
"image/png": {
"width": 300
}
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"import base64\n",
"from io import BytesIO\n",
"\n",
"from IPython.display import Image, display\n",
"from langchain_google_genai import ChatGoogleGenerativeAI\n",
@@ -301,7 +430,7 @@
},
{
"cell_type": "markdown",
"id": "b14c0d87-cf7e-4d88-bda1-2ab40ec0350a",
"id": "14bf00f1",
"metadata": {},
"source": [
"### Image and text to image\n",
@@ -311,8 +440,8 @@
},
{
"cell_type": "code",
"execution_count": 3,
"id": "0f4ed7a5-980c-4b54-b743-0b988909744c",
"execution_count": null,
"id": "d65e195c",
"metadata": {},
"outputs": [
{
@@ -322,11 +451,7 @@
"<IPython.core.display.Image object>"
]
},
"metadata": {
"image/png": {
"width": 300
}
},
"metadata": {},
"output_type": "display_data"
}
],
@@ -349,7 +474,7 @@
},
{
"cell_type": "markdown",
"id": "a62669d8-becd-495f-8f4a-82d7c5d87969",
"id": "43b54d3f",
"metadata": {},
"source": [
"You can also represent an input image and query in a single message by encoding the base64 data in the [data URI scheme](https://en.wikipedia.org/wiki/Data_URI_scheme):"
@@ -357,8 +482,8 @@
},
{
"cell_type": "code",
"execution_count": 9,
"id": "6241da43-e210-43bc-89af-b3c480ea06e9",
"execution_count": null,
"id": "0dfc7e1e",
"metadata": {},
"outputs": [
{
@@ -368,11 +493,7 @@
"<IPython.core.display.Image object>"
]
},
"metadata": {
"image/png": {
"width": 300
}
},
"metadata": {},
"output_type": "display_data"
}
],
@@ -403,7 +524,7 @@
},
{
"cell_type": "markdown",
"id": "cfe228d3-6773-4283-9788-87bdf6912b1c",
"id": "789818d7",
"metadata": {},
"source": [
"You can also use LangGraph to manage the conversation history for you as in [this tutorial](/docs/tutorials/chatbot/)."
@@ -411,7 +532,313 @@
},
{
"cell_type": "markdown",
"id": "d1ee55bc-ffc8-4cfa-801c-993953a08cfd",
"id": "b037e2dc",
"metadata": {},
"source": [
"## Tool Calling\n",
"\n",
"You can equip the model with tools to call."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b0d759f9",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[{'name': 'get_weather', 'args': {'location': 'San Francisco'}, 'id': 'a6248087-74c5-4b7c-9250-f335e642927c', 'type': 'tool_call'}]\n"
]
},
{
"data": {
"text/plain": [
"AIMessage(content=\"OK. It's sunny in San Francisco.\", additional_kwargs={}, response_metadata={'prompt_feedback': {'block_reason': 0, 'safety_ratings': []}, 'finish_reason': 'STOP', 'model_name': 'gemini-2.0-flash', 'safety_ratings': []}, id='run-ac5bb52c-e244-4c72-9fbc-fb2a9cd7a72e-0', usage_metadata={'input_tokens': 29, 'output_tokens': 11, 'total_tokens': 40, 'input_token_details': {'cache_read': 0}})"
]
},
"execution_count": 28,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.tools import tool\n",
"from langchain_google_genai import ChatGoogleGenerativeAI\n",
"\n",
"\n",
"# Define the tool\n",
"@tool(description=\"Get the current weather in a given location\")\n",
"def get_weather(location: str) -> str:\n",
" return \"It's sunny.\"\n",
"\n",
"\n",
"# Initialize the model and bind the tool\n",
"llm = ChatGoogleGenerativeAI(model=\"gemini-2.0-flash\")\n",
"llm_with_tools = llm.bind_tools([get_weather])\n",
"\n",
"# Invoke the model with a query that should trigger the tool\n",
"query = \"What's the weather in San Francisco?\"\n",
"ai_msg = llm_with_tools.invoke(query)\n",
"\n",
"# Check the tool calls in the response\n",
"print(ai_msg.tool_calls)\n",
"\n",
"# Run the tool and pass the result back to the model\n",
"from langchain_core.messages import ToolMessage\n",
"\n",
"tool_message = ToolMessage(\n",
"    content=get_weather.invoke(ai_msg.tool_calls[0][\"args\"]),\n",
"    tool_call_id=ai_msg.tool_calls[0][\"id\"],\n",
")\n",
"llm_with_tools.invoke([query, ai_msg, tool_message])"
]
},
{
"cell_type": "markdown",
"id": "91d42b86",
"metadata": {},
"source": [
"## Structured Output\n",
"\n",
"Force the model to respond with a specific structure using Pydantic models."
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "7457dbe4",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"name='Abraham Lincoln' height_m=1.93\n"
]
}
],
"source": [
"from pydantic import BaseModel, Field\n",
"from langchain_google_genai import ChatGoogleGenerativeAI\n",
"\n",
"\n",
"# Define the desired structure\n",
"class Person(BaseModel):\n",
" \"\"\"Information about a person.\"\"\"\n",
"\n",
" name: str = Field(..., description=\"The person's name\")\n",
" height_m: float = Field(..., description=\"The person's height in meters\")\n",
"\n",
"\n",
"# Initialize the model\n",
"llm = ChatGoogleGenerativeAI(model=\"gemini-2.0-flash\", temperature=0)\n",
"structured_llm = llm.with_structured_output(Person)\n",
"\n",
"# Invoke the model with a query asking for structured information\n",
"result = structured_llm.invoke(\n",
" \"Who was the 16th president of the USA, and how tall was he in meters?\"\n",
")\n",
"print(result)"
]
},
{
"cell_type": "markdown",
"id": "90d4725e",
"metadata": {},
"source": [
"\n",
"\n",
"## Token Usage Tracking\n",
"\n",
"Access token usage information from the response metadata."
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "edcc003e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Prompt engineering is the art and science of crafting effective text prompts to elicit desired and accurate responses from large language models.\n",
"\n",
"Usage Metadata:\n",
"{'input_tokens': 10, 'output_tokens': 24, 'total_tokens': 34, 'input_token_details': {'cache_read': 0}}\n"
]
}
],
"source": [
"from langchain_google_genai import ChatGoogleGenerativeAI\n",
"\n",
"llm = ChatGoogleGenerativeAI(model=\"gemini-2.0-flash\")\n",
"\n",
"result = llm.invoke(\"Explain the concept of prompt engineering in one sentence.\")\n",
"\n",
"print(result.content)\n",
"print(\"\\nUsage Metadata:\")\n",
"print(result.usage_metadata)"
]
},
{
"cell_type": "markdown",
"id": "28950dbc",
"metadata": {},
"source": [
"## Built-in tools\n",
"\n",
"Google Gemini supports a variety of built-in tools ([google search](https://ai.google.dev/gemini-api/docs/grounding/search-suggestions), [code execution](https://ai.google.dev/gemini-api/docs/code-execution?lang=python)), which can be bound to the model in the usual way."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dd074816",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The next total solar eclipse visible in the United States will occur on August 23, 2044. However, the path of totality will only pass through Montana, North Dakota, and South Dakota.\n",
"\n",
"For a total solar eclipse that crosses a significant portion of the continental U.S., you'll have to wait until August 12, 2045. This eclipse will start in California and end in Florida.\n"
]
}
],
"source": [
"from google.ai.generativelanguage_v1beta.types import Tool as GenAITool\n",
"\n",
"resp = llm.invoke(\n",
" \"When is the next total solar eclipse in US?\",\n",
" tools=[GenAITool(google_search={})],\n",
")\n",
"\n",
"print(resp.content)"
]
},
{
"cell_type": "code",
"execution_count": 43,
"id": "6964be2d",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Executable code: print(2*2)\n",
"\n",
"Code execution result: 4\n",
"\n",
"2*2 is 4.\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Users/philschmid/projects/google-gemini/langchain/.venv/lib/python3.9/site-packages/langchain_google_genai/chat_models.py:580: UserWarning: \n",
" ⚠️ Warning: Output may vary each run. \n",
" - 'executable_code': Always present. \n",
" - 'execution_result' & 'image_url': May be absent for some queries. \n",
"\n",
" Validate before using in production.\n",
"\n",
" warnings.warn(\n"
]
}
],
"source": [
"from google.ai.generativelanguage_v1beta.types import Tool as GenAITool\n",
"\n",
"resp = llm.invoke(\n",
" \"What is 2*2, use python\",\n",
" tools=[GenAITool(code_execution={})],\n",
")\n",
"\n",
"for c in resp.content:\n",
" if isinstance(c, dict):\n",
" if c[\"type\"] == \"code_execution_result\":\n",
" print(f\"Code execution result: {c['code_execution_result']}\")\n",
" elif c[\"type\"] == \"executable_code\":\n",
" print(f\"Executable code: {c['executable_code']}\")\n",
" else:\n",
" print(c)"
]
},
{
"cell_type": "markdown",
"id": "a27e6ff4",
"metadata": {},
"source": [
"## Native Async\n",
"\n",
"Use asynchronous methods for non-blocking calls."
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "c6803e57",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Async Invoke Result: The sky is blue due to a phenomenon called **Rayle...\n",
"\n",
"Async Stream Result:\n",
"The thread is free, it does not wait,\n",
"For answers slow, or tasks of fate.\n",
"A promise made, a future bright,\n",
"It moves ahead, with all its might.\n",
"\n",
"A callback waits, a signal sent,\n",
"When data's read, or job is spent.\n",
"Non-blocking code, a graceful dance,\n",
"Responsive apps, a fleeting glance.\n",
"\n",
"Async Batch Results: ['1 + 1 = 2', '2 + 2 = 4']\n"
]
}
],
"source": [
"from langchain_google_genai import ChatGoogleGenerativeAI\n",
"\n",
"llm = ChatGoogleGenerativeAI(model=\"gemini-2.0-flash\")\n",
"\n",
"\n",
"async def run_async_calls():\n",
" # Async invoke\n",
" result_ainvoke = await llm.ainvoke(\"Why is the sky blue?\")\n",
" print(\"Async Invoke Result:\", result_ainvoke.content[:50] + \"...\")\n",
"\n",
" # Async stream\n",
" print(\"\\nAsync Stream Result:\")\n",
" async for chunk in llm.astream(\n",
" \"Write a short poem about asynchronous programming.\"\n",
" ):\n",
" print(chunk.content, end=\"\", flush=True)\n",
" print(\"\\n\")\n",
"\n",
" # Async batch\n",
" results_abatch = await llm.abatch([\"What is 1+1?\", \"What is 2+2?\"])\n",
" print(\"Async Batch Results:\", [res.content for res in results_abatch])\n",
"\n",
"\n",
"await run_async_calls()"
]
},
{
"cell_type": "markdown",
"id": "99204b32",
"metadata": {},
"source": [
"## Safety Settings\n",
@@ -421,8 +848,8 @@
},
{
"cell_type": "code",
"execution_count": 14,
"id": "238b2f96-e573-4fac-bbf2-7e52ad926833",
"execution_count": null,
"id": "d4c14039",
"metadata": {},
"outputs": [],
"source": [
@@ -442,7 +869,7 @@
},
{
"cell_type": "markdown",
"id": "5805d40c-deb8-4924-8e72-a294a0482fc9",
"id": "dea38fb1",
"metadata": {},
"source": [
"For an enumeration of the categories and thresholds available, see Google's [safety setting types](https://ai.google.dev/api/python/google/generativeai/types/SafetySettingDict)."
@@ -450,7 +877,7 @@
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"id": "d6d0e853",
"metadata": {},
"source": [
"## API reference\n",
@@ -461,7 +888,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": ".venv",
"language": "python",
"name": "python3"
},
@@ -475,7 +902,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
"version": "3.9.6"
}
},
"nbformat": 4,

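The image, audio, and video cells in the notebook above each hand-roll the same base64 encoding. A small standalone helper keeps that logic in one place; this is a sketch, and `to_data_uri` is an assumption, not part of the notebook or the `langchain-google-genai` API:

```python
import base64


def to_data_uri(raw: bytes, mime: str = "image/png") -> str:
    # Encode raw bytes as a data URI, the format the multimodal cells
    # pass in an "image_url" content block.
    return f"data:{mime};base64," + base64.b64encode(raw).decode("utf-8")


# Build an image content block for a HumanMessage, as in the cells above.
block = {"type": "image_url", "image_url": to_data_uri(b"<png bytes here>")}
print(block["image_url"][:22])  # data:image/png;base64,
```

The returned string can be dropped into the `image_url` field of a content block exactly as in the data-URI cell above.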
View File

@@ -19,18 +19,50 @@
"id": "5bcea387"
},
"source": [
"# ChatLiteLLM\n",
"# ChatLiteLLM and ChatLiteLLMRouter\n",
"\n",
"[LiteLLM](https://github.com/BerriAI/litellm) is a library that simplifies calling Anthropic, Azure, Huggingface, Replicate, etc.\n",
"\n",
"This notebook covers how to get started with using LangChain + the LiteLLM I/O library.\n",
"\n",
"This integration contains two main classes:\n",
"\n",
"- `ChatLiteLLM`: The main LangChain wrapper for basic usage of LiteLLM ([docs](https://docs.litellm.ai/docs/)).\n",
"- `ChatLiteLLMRouter`: A `ChatLiteLLM` wrapper that leverages LiteLLM's Router ([docs](https://docs.litellm.ai/docs/routing))."
]
},
{
"cell_type": "markdown",
"id": "2ddb7fd3",
"metadata": {},
"source": [
"## Table of Contents\n",
"1. [Overview](#overview)\n",
" - [Integration Details](#integration-details)\n",
" - [Model Features](#model-features)\n",
"2. [Setup](#setup)\n",
"3. [Credentials](#credentials)\n",
"4. [Installation](#installation)\n",
"5. [Instantiation](#instantiation)\n",
" - [ChatLiteLLM](#chatlitellm)\n",
" - [ChatLiteLLMRouter](#chatlitellmrouter)\n",
"6. [Invocation](#invocation)\n",
"7. [Async and Streaming Functionality](#async-and-streaming-functionality)\n",
"8. [API Reference](#api-reference)"
]
},
{
"cell_type": "markdown",
"id": "37be6ef8",
"metadata": {},
"source": [
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | JS support| Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatLiteLLM](https://python.langchain.com/docs/integrations/chat/litellm/) | [langchain-litellm](https://pypi.org/project/langchain-litellm/)| ❌ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-litellm?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-litellm?style=flat-square&label=%20) |\n",
"| [ChatLiteLLM](https://python.langchain.com/docs/integrations/chat/litellm/#chatlitellm) | [langchain-litellm](https://pypi.org/project/langchain-litellm/)| ❌ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-litellm?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-litellm?style=flat-square&label=%20) |\n",
"| [ChatLiteLLMRouter](https://python.langchain.com/docs/integrations/chat/litellm/#chatlitellmrouter) | [langchain-litellm](https://pypi.org/project/langchain-litellm/)| ❌ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-litellm?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-litellm?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](https://python.langchain.com/docs/how_to/tool_calling/) | [Structured output](https://python.langchain.com/docs/how_to/structured_output/) | JSON mode | Image input | Audio input | Video input | [Token-level streaming](https://python.langchain.com/docs/integrations/chat/litellm/#chatlitellm-also-supports-async-and-streaming-functionality) | [Native async](https://python.langchain.com/docs/integrations/chat/litellm/#chatlitellm-also-supports-async-and-streaming-functionality) | [Token usage](https://python.langchain.com/docs/how_to/chat_token_usage_tracking/) | [Logprobs](https://python.langchain.com/docs/how_to/logprobs/) |\n",
@@ -38,7 +70,7 @@
"| ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |\n",
"\n",
"### Setup\n",
"To access ChatLiteLLM models you'll need to install the `langchain-litellm` package and create an OpenAI, Anthropic, Azure, Replicate, OpenRouter, Hugging Face, Together AI or Cohere account. Then you have to get an API key, and export it as an environment variable."
"To access `ChatLiteLLM` and `ChatLiteLLMRouter` models, you'll need to install the `langchain-litellm` package and create an OpenAI, Anthropic, Azure, Replicate, OpenRouter, Hugging Face, Together AI, or Cohere account. Then, you have to get an API key and export it as an environment variable."
]
},
{
@@ -53,23 +85,23 @@
"You have to choose the LLM provider you want and sign up with them to get their API key.\n",
"\n",
"### Example - Anthropic\n",
"Head to https://console.anthropic.com/ to sign up for Anthropic and generate an API key. Once you've done this set the ANTHROPIC_API_KEY environment variable.\n",
"Head to https://console.anthropic.com/ to sign up for Anthropic and generate an API key. Once you've done this, set the ANTHROPIC_API_KEY environment variable.\n",
"\n",
"\n",
"### Example - OpenAI\n",
"Head to https://platform.openai.com/api-keys to sign up for OpenAI and generate an API key. Once you've done this set the OPENAI_API_KEY environment variable."
"Head to https://platform.openai.com/api-keys to sign up for OpenAI and generate an API key. Once you've done this, set the OPENAI_API_KEY environment variable."
]
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"id": "7595eddf",
"metadata": {
"id": "7595eddf"
},
"outputs": [],
"source": [
"## set ENV variables\n",
"## Set ENV variables\n",
"import os\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = \"your-openai-key\"\n",
@@ -85,7 +117,7 @@
"source": [
"### Installation\n",
"\n",
"The LangChain LiteLLM integration lives in the `langchain-litellm` package:"
"The LangChain LiteLLM integration is available in the `langchain-litellm` package:"
]
},
{
@@ -107,13 +139,21 @@
"id": "bc1182b4"
},
"source": [
"## Instantiation\n",
"Now we can instantiate our model object and generate chat completions:"
"## Instantiation"
]
},
{
"cell_type": "markdown",
"id": "d439241a",
"metadata": {},
"source": [
"### ChatLiteLLM\n",
"You can instantiate a `ChatLiteLLM` model by providing a `model` name [supported by LiteLLM](https://docs.litellm.ai/docs/providers)."
]
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": null,
"id": "d4a7c55d-b235-4ca4-a579-c90cc9570da9",
"metadata": {
"id": "d4a7c55d-b235-4ca4-a579-c90cc9570da9",
@@ -123,7 +163,50 @@
"source": [
"from langchain_litellm import ChatLiteLLM\n",
"\n",
"llm = ChatLiteLLM(model=\"gpt-3.5-turbo\")"
"llm = ChatLiteLLM(model=\"gpt-4.1-nano\", temperature=0.1)"
]
},
{
"cell_type": "markdown",
"id": "3d0ed306",
"metadata": {},
"source": [
"### ChatLiteLLMRouter\n",
"You can also leverage LiteLLM's routing capabilities by defining your model list as specified [here](https://docs.litellm.ai/docs/routing)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8d26393a",
"metadata": {},
"outputs": [],
"source": [
"from langchain_litellm import ChatLiteLLMRouter\n",
"from litellm import Router\n",
"\n",
"model_list = [\n",
" {\n",
" \"model_name\": \"gpt-4.1\",\n",
" \"litellm_params\": {\n",
" \"model\": \"azure/gpt-4.1\",\n",
" \"api_key\": \"<your-api-key>\",\n",
" \"api_version\": \"2024-10-21\",\n",
" \"api_base\": \"https://<your-endpoint>.openai.azure.com/\",\n",
" },\n",
" },\n",
" {\n",
" \"model_name\": \"gpt-4o\",\n",
" \"litellm_params\": {\n",
" \"model\": \"azure/gpt-4o\",\n",
" \"api_key\": \"<your-api-key>\",\n",
" \"api_version\": \"2024-10-21\",\n",
" \"api_base\": \"https://<your-endpoint>.openai.azure.com/\",\n",
" },\n",
" },\n",
"]\n",
"litellm_router = Router(model_list=model_list)\n",
"llm = ChatLiteLLMRouter(router=litellm_router, model_name=\"gpt-4.1\", temperature=0.1)"
]
},
{
@@ -133,7 +216,8 @@
"id": "63d98454"
},
"source": [
"## Invocation"
"## Invocation\n",
"Whether you've instantiated a `ChatLiteLLM` or a `ChatLiteLLMRouter`, you can now use the chat model through LangChain's API."
]
},
{
@@ -171,7 +255,8 @@
"id": "c361ab1e-8c0c-4206-9e3c-9d1424a12b9c"
},
"source": [
"## `ChatLiteLLM` also supports async and streaming functionality:"
"## Async and Streaming Functionality\n",
"`ChatLiteLLM` and `ChatLiteLLMRouter` also support async and streaming functionality:"
]
},
{
@@ -212,7 +297,7 @@
},
"source": [
"## API reference\n",
"For detailed documentation of all `ChatLiteLLM` features and configurations head to the API reference: https://github.com/Akshay-Dongare/langchain-litellm"
"For detailed documentation of all `ChatLiteLLM` and `ChatLiteLLMRouter` features and configurations, head to the API reference: https://github.com/Akshay-Dongare/langchain-litellm"
]
}
],

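The `ChatLiteLLMRouter` cell above repeats near-identical Azure `litellm_params` for every deployment. A hypothetical helper can build the entries; `azure_entry` is an assumption for illustration, not part of `langchain-litellm` or LiteLLM:

```python
def azure_entry(
    name: str,
    deployment: str,
    api_key: str,
    api_base: str,
    api_version: str = "2024-10-21",
) -> dict:
    # Build one model_list entry in the shape LiteLLM's Router expects.
    return {
        "model_name": name,
        "litellm_params": {
            "model": f"azure/{deployment}",
            "api_key": api_key,
            "api_version": api_version,
            "api_base": api_base,
        },
    }


# Same two deployments as the router cell above, without the duplication.
model_list = [
    azure_entry("gpt-4.1", "gpt-4.1", "<your-api-key>", "https://<your-endpoint>.openai.azure.com/"),
    azure_entry("gpt-4o", "gpt-4o", "<your-api-key>", "https://<your-endpoint>.openai.azure.com/"),
]
```

The resulting `model_list` is passed to `Router(model_list=model_list)` as in the notebook.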
View File

@@ -1,218 +0,0 @@
{
"cells": [
{
"cell_type": "raw",
"id": "59148044",
"metadata": {},
"source": [
"---\n",
"sidebar_label: LiteLLM Router\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "247da7a6",
"metadata": {},
"source": []
},
{
"attachments": {},
"cell_type": "markdown",
"id": "bf733a38-db84-4363-89e2-de6735c37230",
"metadata": {},
"source": [
"# ChatLiteLLMRouter\n",
"\n",
"[LiteLLM](https://github.com/BerriAI/litellm) is a library that simplifies calling Anthropic, Azure, Huggingface, Replicate, etc. \n",
"\n",
"This notebook covers how to get started with using Langchain + the LiteLLM Router I/O library. "
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "d4a7c55d-b235-4ca4-a579-c90cc9570da9",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain_community.chat_models import ChatLiteLLMRouter\n",
"from langchain_core.messages import HumanMessage\n",
"from litellm import Router"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "70cf04e8-423a-4ff6-8b09-f11fb711c817",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"model_list = [\n",
" {\n",
" \"model_name\": \"gpt-4\",\n",
" \"litellm_params\": {\n",
" \"model\": \"azure/gpt-4-1106-preview\",\n",
" \"api_key\": \"<your-api-key>\",\n",
" \"api_version\": \"2023-05-15\",\n",
" \"api_base\": \"https://<your-endpoint>.openai.azure.com/\",\n",
" },\n",
" },\n",
" {\n",
" \"model_name\": \"gpt-35-turbo\",\n",
" \"litellm_params\": {\n",
" \"model\": \"azure/gpt-35-turbo\",\n",
" \"api_key\": \"<your-api-key>\",\n",
" \"api_version\": \"2023-05-15\",\n",
" \"api_base\": \"https://<your-endpoint>.openai.azure.com/\",\n",
" },\n",
" },\n",
"]\n",
"litellm_router = Router(model_list=model_list)\n",
"chat = ChatLiteLLMRouter(router=litellm_router, model_name=\"gpt-35-turbo\")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "8199ef8f-eb8b-4253-9ea0-6c24a013ca4c",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"J'aime programmer.\")"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" HumanMessage(\n",
" content=\"Translate this sentence from English to French. I love programming.\"\n",
" )\n",
"]\n",
"chat(messages)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "c361ab1e-8c0c-4206-9e3c-9d1424a12b9c",
"metadata": {},
"source": [
"## `ChatLiteLLMRouter` also supports async and streaming functionality:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "93a21c5c-6ef9-4688-be60-b2e1f94842fb",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain_core.callbacks import CallbackManager, StreamingStdOutCallbackHandler"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "c5fac0e9-05a4-4fc1-a3b3-e5bbb24b971b",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"LLMResult(generations=[[ChatGeneration(text=\"J'adore programmer.\", generation_info={'finish_reason': 'stop'}, message=AIMessage(content=\"J'adore programmer.\"))]], llm_output={'token_usage': {'completion_tokens': 6, 'prompt_tokens': 19, 'total_tokens': 25}, 'model_name': None}, run=[RunInfo(run_id=UUID('75003ec9-1e2b-43b7-a216-10dcc0f75e00'))])"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"await chat.agenerate([messages])"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "025be980-e50d-4a68-93dc-c9c7b500ce34",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"J'adore programmer."
]
},
{
"data": {
"text/plain": [
"AIMessage(content=\"J'adore programmer.\")"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chat = ChatLiteLLMRouter(\n",
" router=litellm_router,\n",
" model_name=\"gpt-35-turbo\",\n",
" streaming=True,\n",
" verbose=True,\n",
" callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),\n",
")\n",
"chat(messages)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c253883f",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -1413,6 +1413,23 @@
"second_output_message = llm.invoke(history)"
]
},
{
"cell_type": "markdown",
"id": "90c18d18-b25c-4509-a639-bd652b92f518",
"metadata": {},
"source": [
"## Flex processing\n",
"\n",
"OpenAI offers a variety of [service tiers](https://platform.openai.com/docs/guides/flex-processing). The \"flex\" tier offers cheaper pricing for requests, with the trade-off that responses may take longer and resources might not always be available. This approach is best suited for non-critical tasks, including model testing, data enhancement, or jobs that can be run asynchronously.\n",
"\n",
"To use it, initialize the model with `service_tier=\"flex\"`:\n",
"```python\n",
"llm = ChatOpenAI(model=\"o4-mini\", service_tier=\"flex\")\n",
"```\n",
"\n",
"Note that this is a beta feature that is only available for a subset of models. See OpenAI [docs](https://platform.openai.com/docs/guides/flex-processing) for more detail."
]
},
{
"cell_type": "markdown",
"id": "a796d728-971b-408b-88d5-440015bbb941",
@@ -1420,7 +1437,7 @@
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatOpenAI features and configurations head to the API reference: https://python.langchain.com/api_reference/openai/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html"
"For detailed documentation of all ChatOpenAI features and configurations head to the [API reference](https://python.langchain.com/api_reference/openai/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html)."
]
}
],

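Since the flex-processing section above notes that flex-tier resources "might not always be available", a caller may want to fall back to the standard tier when a flex request fails. A minimal sketch under that assumption; `with_fallback` is illustrative and not a LangChain or OpenAI API:

```python
def with_fallback(primary, fallback, retryable=(Exception,)):
    # Return a zero-arg callable that tries the flex-tier call first and
    # falls back to the standard tier if flex capacity is unavailable.
    def run():
        try:
            return primary()
        except retryable:
            return fallback()
    return run


# Hypothetical usage with the model from the section above:
# flex_llm = ChatOpenAI(model="o4-mini", service_tier="flex")
# std_llm = ChatOpenAI(model="o4-mini")
# answer = with_fallback(lambda: flex_llm.invoke(q), lambda: std_llm.invoke(q))()
```

In practice the `retryable` tuple would be narrowed to the capacity/rate-limit errors the client raises, rather than all exceptions.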
View File

@@ -14,7 +14,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"DataStax [Astra DB](https://docs.datastax.com/en/astra/home/astra.html) is a serverless vector-capable database built on Cassandra and made conveniently available through an easy-to-use JSON API."
"> [DataStax Astra DB](https://docs.datastax.com/en/astra-db-serverless/index.html) is a serverless \n",
"> AI-ready database built on `Apache Cassandra®` and made conveniently available \n",
"> through an easy-to-use JSON API."
]
},
{
@@ -34,33 +36,46 @@
"id": "juAmbgoWD17u"
},
"source": [
"The AstraDB Document Loader returns a list of Langchain Documents from an AstraDB database.\n",
    "The Astra DB Document Loader returns a list of LangChain `Document` objects read from an Astra DB collection.\n",
"\n",
"The Loader takes the following parameters:\n",
"The loader takes the following parameters:\n",
"\n",
"* `api_endpoint`: AstraDB API endpoint. Looks like `https://01234567-89ab-cdef-0123-456789abcdef-us-east1.apps.astra.datastax.com`\n",
"* `token`: AstraDB token. Looks like `AstraCS:6gBhNmsk135....`\n",
"* `api_endpoint`: Astra DB API endpoint. Looks like `https://01234567-89ab-cdef-0123-456789abcdef-us-east1.apps.astra.datastax.com`\n",
"* `token`: Astra DB token. Looks like `AstraCS:aBcD0123...`\n",
"* `collection_name` : AstraDB collection name\n",
"* `namespace`: (Optional) AstraDB namespace\n",
    "* `namespace`: (Optional) Astra DB namespace (called _keyspace_ in Astra DB)\n",
"* `filter_criteria`: (Optional) Filter used in the find query\n",
"* `projection`: (Optional) Projection used in the find query\n",
"* `find_options`: (Optional) Options used in the find query\n",
"* `nb_prefetched`: (Optional) Number of documents pre-fetched by the loader\n",
"* `limit`: (Optional) Maximum number of documents to retrieve\n",
"* `extraction_function`: (Optional) A function to convert the AstraDB document to the LangChain `page_content` string. Defaults to `json.dumps`\n",
"\n",
"The following metadata is set to the LangChain Documents metadata output:\n",
"The loader sets the following metadata for the documents it reads:\n",
"\n",
"```python\n",
"{\n",
" metadata : {\n",
" \"namespace\": \"...\", \n",
" \"api_endpoint\": \"...\", \n",
" \"collection\": \"...\"\n",
" }\n",
"metadata={\n",
" \"namespace\": \"...\", \n",
" \"api_endpoint\": \"...\", \n",
" \"collection\": \"...\"\n",
"}\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"!pip install \"langchain-astradb>=0.6,<0.7\""
]
},
{
"attachments": {},
"cell_type": "markdown",
@@ -71,24 +86,43 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import AstraDBLoader"
"from langchain_astradb import AstraDBLoader"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"[**API Reference:** `AstraDBLoader`](https://python.langchain.com/api_reference/astradb/document_loaders/langchain_astradb.document_loaders.AstraDBLoader.html#langchain_astradb.document_loaders.AstraDBLoader)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 3,
"metadata": {
"ExecuteTime": {
"end_time": "2024-01-08T12:41:22.643335Z",
"start_time": "2024-01-08T12:40:57.759116Z"
},
"collapsed": false
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"outputs": [
{
"name": "stdin",
"output_type": "stream",
"text": [
"ASTRA_DB_API_ENDPOINT = https://01234567-89ab-cdef-0123-456789abcdef-us-east1.apps.astra.datastax.com\n",
"ASTRA_DB_APPLICATION_TOKEN = ········\n"
]
}
],
"source": [
"from getpass import getpass\n",
"\n",
@@ -98,7 +132,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 4,
"metadata": {
"ExecuteTime": {
"end_time": "2024-01-08T12:42:25.395162Z",
@@ -112,19 +146,22 @@
" token=ASTRA_DB_APPLICATION_TOKEN,\n",
" collection_name=\"movie_reviews\",\n",
" projection={\"title\": 1, \"reviewtext\": 1},\n",
" find_options={\"limit\": 10},\n",
" limit=10,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 5,
"metadata": {
"ExecuteTime": {
"end_time": "2024-01-08T12:42:30.236489Z",
"start_time": "2024-01-08T12:42:29.612133Z"
},
"collapsed": false
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
@@ -133,7 +170,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 6,
"metadata": {
"ExecuteTime": {
"end_time": "2024-01-08T12:42:31.369394Z",
@@ -144,10 +181,10 @@
{
"data": {
"text/plain": [
"Document(page_content='{\"_id\": \"659bdffa16cbc4586b11a423\", \"title\": \"Dangerous Men\", \"reviewtext\": \"\\\\\"Dangerous Men,\\\\\" the picture\\'s production notes inform, took 26 years to reach the big screen. After having seen it, I wonder: What was the rush?\"}', metadata={'namespace': 'default_keyspace', 'api_endpoint': 'https://01234567-89ab-cdef-0123-456789abcdef-us-east1.apps.astra.datastax.com', 'collection': 'movie_reviews'})"
"Document(metadata={'namespace': 'default_keyspace', 'api_endpoint': 'https://01234567-89ab-cdef-0123-456789abcdef-us-east1.apps.astra.datastax.com', 'collection': 'movie_reviews'}, page_content='{\"_id\": \"659bdffa16cbc4586b11a423\", \"title\": \"Dangerous Men\", \"reviewtext\": \"\\\\\"Dangerous Men,\\\\\" the picture\\'s production notes inform, took 26 years to reach the big screen. After having seen it, I wonder: What was the rush?\"}')"
]
},
"execution_count": 8,
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
@@ -179,7 +216,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.18"
"version": "3.12.8"
}
},
"nbformat": 4,
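
Putting the loader parameters listed above together, a hedged sketch of a filtered load (the endpoint, token, collection name, and the `rating` filter are placeholders; substitute your own):

```python
import json

from langchain_astradb import AstraDBLoader

loader = AstraDBLoader(
    api_endpoint="https://01234567-89ab-cdef-0123-456789abcdef-us-east1.apps.astra.datastax.com",
    token="AstraCS:aBcD0123...",
    collection_name="movie_reviews",
    filter_criteria={"rating": {"$gte": 8}},   # illustrative find filter
    projection={"title": 1, "reviewtext": 1},
    limit=5,
    # Render just the review text instead of json.dumps of the whole document:
    extraction_function=lambda doc: doc.get("reviewtext", json.dumps(doc)),
)

for doc in loader.load():
    print(doc.metadata["collection"], "->", doc.page_content[:60])
```

Each printed metadata value comes from the `namespace`/`api_endpoint`/`collection` fields the loader sets, as described above.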

View File

@@ -49,7 +49,14 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import BrowserbaseLoader"
"import os\n",
"\n",
"from dotenv import load_dotenv\n",
"from langchain_community.document_loaders import BrowserbaseLoader\n",
"\n",
"load_dotenv()\n",
"\n",
"BROWSERBASE_API_KEY = os.getenv(\"BROWSERBASE_API_KEY\")\n",
"BROWSERBASE_PROJECT_ID = os.getenv(\"BROWSERBASE_PROJECT_ID\")"
]
},
{
@@ -59,6 +66,8 @@
"outputs": [],
"source": [
"loader = BrowserbaseLoader(\n",
" api_key=BROWSERBASE_API_KEY,\n",
" project_id=BROWSERBASE_PROJECT_ID,\n",
" urls=[\n",
" \"https://example.com\",\n",
" ],\n",
@@ -78,52 +87,11 @@
"\n",
"- `urls` Required. A list of URLs to fetch.\n",
"- `text_content` Retrieve only text content. Default is `False`.\n",
"- `api_key` Optional. Browserbase API key. Default is `BROWSERBASE_API_KEY` env variable.\n",
"- `project_id` Optional. Browserbase Project ID. Default is `BROWSERBASE_PROJECT_ID` env variable.\n",
"- `api_key` Browserbase API key. Default is `BROWSERBASE_API_KEY` env variable.\n",
"- `project_id` Browserbase Project ID. Default is `BROWSERBASE_PROJECT_ID` env variable.\n",
"- `session_id` Optional. Provide an existing Session ID.\n",
"- `proxy` Optional. Enable/Disable Proxies."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Loading images\n",
"\n",
"You can also load screenshots of webpages (as bytes) for multi-modal models.\n",
"\n",
"Full example using GPT-4V:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from browserbase import Browserbase\n",
"from browserbase.helpers.gpt4 import GPT4VImage, GPT4VImageDetail\n",
"from langchain_core.messages import HumanMessage\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"chat = ChatOpenAI(model=\"gpt-4-vision-preview\", max_tokens=256)\n",
"browser = Browserbase()\n",
"\n",
"screenshot = browser.screenshot(\"https://browserbase.com\")\n",
"\n",
"result = chat.invoke(\n",
" [\n",
" HumanMessage(\n",
" content=[\n",
" {\"type\": \"text\", \"text\": \"What color is the logo?\"},\n",
" GPT4VImage(screenshot, GPT4VImageDetail.auto),\n",
" ]\n",
" )\n",
" ]\n",
")\n",
"\n",
"print(result.content)"
]
}
],
"metadata": {
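
With the parameters listed above, a minimal text-only load might look like the following sketch (assumes `BROWSERBASE_API_KEY` and `BROWSERBASE_PROJECT_ID` are set in the environment):

```python
import os

from langchain_community.document_loaders import BrowserbaseLoader

loader = BrowserbaseLoader(
    api_key=os.environ["BROWSERBASE_API_KEY"],
    project_id=os.environ["BROWSERBASE_PROJECT_ID"],
    urls=["https://example.com"],
    text_content=True,  # fetch plain text instead of raw HTML
)

docs = loader.load()
print(docs[0].page_content[:200])
```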

View File

@@ -1214,9 +1214,7 @@
"source": [
"### Connecting to the DB\n",
"\n",
"The Cassandra caches shown in this page can be used with Cassandra as well as other derived databases, such as Astra DB, which use the CQL (Cassandra Query Language) protocol.\n",
"\n",
"> DataStax [Astra DB](https://docs.datastax.com/en/astra-serverless/docs/vector-search/quickstart.html) is a managed serverless database built on Cassandra, offering the same interface and strengths.\n",
    "The Cassandra caches shown on this page can be used with Cassandra as well as other derived databases that speak the CQL (Cassandra Query Language) protocol, such as DataStax Astra DB.\n",
"\n",
"Depending on whether you connect to a Cassandra cluster or to Astra DB through CQL, you will provide different parameters when instantiating the cache (through initialization of a CassIO connection)."
]
@@ -1517,6 +1515,12 @@
"source": [
"You can easily use [Astra DB](https://docs.datastax.com/en/astra/home/astra.html) as an LLM cache, with either the \"exact\" or the \"semantic-based\" cache.\n",
"\n",
"> [DataStax Astra DB](https://docs.datastax.com/en/astra-db-serverless/index.html) is a serverless \n",
"> AI-ready database built on `Apache Cassandra®` and made conveniently available \n",
"> through an easy-to-use JSON API.\n",
"\n",
"_This approach differs from the `Cassandra` caches mentioned above in that it natively uses the HTTP Data API. The Data API is specific to Astra DB. Keep in mind that the storage format will also differ._\n",
"\n",
"Make sure you have a running database (it must be a Vector-enabled database to use the Semantic cache) and get the required credentials on your Astra dashboard:\n",
"\n",
"- the API Endpoint looks like `https://01234567-89ab-cdef-0123-456789abcdef-us-east1.apps.astra.datastax.com`\n",
@@ -3112,8 +3116,8 @@
"|------------|---------|\n",
"| langchain_astradb.cache | [AstraDBCache](https://python.langchain.com/api_reference/astradb/cache/langchain_astradb.cache.AstraDBCache.html) |\n",
"| langchain_astradb.cache | [AstraDBSemanticCache](https://python.langchain.com/api_reference/astradb/cache/langchain_astradb.cache.AstraDBSemanticCache.html) |\n",
"| langchain_community.cache | [AstraDBCache](https://python.langchain.com/api_reference/community/cache/langchain_community.cache.AstraDBCache.html) |\n",
"| langchain_community.cache | [AstraDBSemanticCache](https://python.langchain.com/api_reference/community/cache/langchain_community.cache.AstraDBSemanticCache.html) |\n",
"| langchain_community.cache | [AstraDBCache](https://python.langchain.com/api_reference/community/cache/langchain_community.cache.AstraDBCache.html) (deprecated since `langchain-community==0.0.28`) |\n",
"| langchain_community.cache | [AstraDBSemanticCache](https://python.langchain.com/api_reference/community/cache/langchain_community.cache.AstraDBSemanticCache.html) (deprecated since `langchain-community==0.0.28`) |\n",
"| langchain_community.cache | [AzureCosmosDBSemanticCache](https://python.langchain.com/api_reference/community/cache/langchain_community.cache.AzureCosmosDBSemanticCache.html) |\n",
"| langchain_community.cache | [CassandraCache](https://python.langchain.com/api_reference/community/cache/langchain_community.cache.CassandraCache.html) |\n",
"| langchain_community.cache | [CassandraSemanticCache](https://python.langchain.com/api_reference/community/cache/langchain_community.cache.CassandraSemanticCache.html) |\n",
@@ -3160,7 +3164,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.12.0"
}
},
"nbformat": 4,
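
A minimal sketch of the exact-match Astra DB cache described above, assuming `langchain-astradb` is installed (the endpoint and token are placeholders):

```python
from langchain_astradb import AstraDBCache
from langchain_core.globals import set_llm_cache

# Exact-match cache: identical prompts are served from the Astra DB
# collection instead of re-calling the LLM.
set_llm_cache(
    AstraDBCache(
        api_endpoint="https://01234567-89ab-cdef-0123-456789abcdef-us-east1.apps.astra.datastax.com",
        token="AstraCS:aBcD0123...",
    )
)
```

Swapping in `AstraDBSemanticCache` (which additionally takes an embedding) gives the semantic variant; that requires a vector-enabled database, as noted above.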

View File

@@ -7,7 +7,9 @@
"source": [
"# Astra DB \n",
"\n",
"> DataStax [Astra DB](https://docs.datastax.com/en/astra/home/astra.html) is a serverless vector-capable database built on Cassandra and made conveniently available through an easy-to-use JSON API.\n",
"> [DataStax Astra DB](https://docs.datastax.com/en/astra-db-serverless/index.html) is a serverless \n",
    "> AI-ready database built on `Apache Cassandra®` and made conveniently available \n",
"> through an easy-to-use JSON API.\n",
"\n",
"This notebook goes over how to use Astra DB to store chat message history."
]
@@ -17,22 +19,22 @@
"id": "f507f58b-bf22-4a48-8daf-68d869bcd1ba",
"metadata": {},
"source": [
"## Setting up\n",
"## Setup\n",
"\n",
"To run this notebook you need a running Astra DB. Get the connection secrets on your Astra dashboard:\n",
"\n",
"- the API Endpoint looks like `https://01234567-89ab-cdef-0123-456789abcdef-us-east1.apps.astra.datastax.com`;\n",
"- the Token looks like `AstraCS:6gBhNmsk135...`."
"- the Database Token looks like `AstraCS:aBcD0123...`."
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"id": "d7092199",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet \"astrapy>=0.7.1 langchain-community\" "
"!pip install \"langchain-astradb>=0.6,<0.7\""
]
},
{
@@ -45,12 +47,12 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 2,
"id": "163d97f0",
"metadata": {},
"outputs": [
{
"name": "stdout",
"name": "stdin",
"output_type": "stream",
"text": [
"ASTRA_DB_API_ENDPOINT = https://01234567-89ab-cdef-0123-456789abcdef-us-east1.apps.astra.datastax.com\n",
@@ -65,14 +67,6 @@
"ASTRA_DB_APPLICATION_TOKEN = getpass.getpass(\"ASTRA_DB_APPLICATION_TOKEN = \")"
]
},
{
"cell_type": "markdown",
"id": "55860b2d",
"metadata": {},
"source": [
"Depending on whether local or cloud-based Astra DB, create the corresponding database connection \"Session\" object."
]
},
{
"cell_type": "markdown",
"id": "36c163e8",
@@ -83,12 +77,12 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 3,
"id": "d15e3302",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_message_histories import AstraDBChatMessageHistory\n",
"from langchain_astradb import AstraDBChatMessageHistory\n",
"\n",
"message_history = AstraDBChatMessageHistory(\n",
" session_id=\"test-session\",\n",
@@ -98,22 +92,31 @@
"\n",
"message_history.add_user_message(\"hi!\")\n",
"\n",
"message_history.add_ai_message(\"whats up?\")"
"message_history.add_ai_message(\"hello, how are you?\")"
]
},
{
"cell_type": "markdown",
"id": "53acb4a8-d536-4a58-9fee-7d70033d9c81",
"metadata": {},
"source": [
"[**API Reference:** `AstraDBChatMessageHistory`](https://python.langchain.com/api_reference/astradb/chat_message_histories/langchain_astradb.chat_message_histories.AstraDBChatMessageHistory.html#langchain_astradb.chat_message_histories.AstraDBChatMessageHistory)"
]
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 4,
"id": "64fc465e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessage(content='hi!'), AIMessage(content='whats up?')]"
"[HumanMessage(content='hi!', additional_kwargs={}, response_metadata={}),\n",
" AIMessage(content='hello, how are you?', additional_kwargs={}, response_metadata={})]"
]
},
"execution_count": 3,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
@@ -139,7 +142,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.12.8"
}
},
"nbformat": 4,

View File

@@ -1,8 +1,6 @@
# Astra DB
> [DataStax Astra DB](https://docs.datastax.com/en/astra/home/astra.html) is a serverless
> vector-capable database built on `Apache Cassandra®`and made conveniently available
> through an easy-to-use JSON API.
> [DataStax Astra DB](https://docs.datastax.com/en/astra-db-serverless/index.html) is a serverless AI-ready database built on `Apache Cassandra®` and made conveniently available through an easy-to-use JSON API.
See a [tutorial provided by DataStax](https://docs.datastax.com/en/astra/astra-db-vector/tutorials/chatbot.html).
@@ -10,19 +8,21 @@ See a [tutorial provided by DataStax](https://docs.datastax.com/en/astra/astra-d
Install the following Python package:
```bash
pip install "langchain-astradb>=0.1.0"
pip install "langchain-astradb>=0.6,<0.7"
```
Get the [connection secrets](https://docs.datastax.com/en/astra/astra-db-vector/get-started/quickstart.html).
Set up the following environment variables:
Create a database (if needed) and get the [connection secrets](https://docs.datastax.com/en/astra-db-serverless/get-started/quickstart.html#create-a-database-and-store-your-credentials).
Set the following variables:
```python
ASTRA_DB_APPLICATION_TOKEN="TOKEN"
ASTRA_DB_API_ENDPOINT="API_ENDPOINT"
ASTRA_DB_APPLICATION_TOKEN="TOKEN"
```
## Vector Store
A few typical initialization patterns are shown here:
```python
from langchain_astradb import AstraDBVectorStore
@@ -32,8 +32,56 @@ vector_store = AstraDBVectorStore(
api_endpoint=ASTRA_DB_API_ENDPOINT,
token=ASTRA_DB_APPLICATION_TOKEN,
)
from astrapy.info import VectorServiceOptions
vector_store_vectorize = AstraDBVectorStore(
collection_name="my_vectorize_store",
api_endpoint=ASTRA_DB_API_ENDPOINT,
token=ASTRA_DB_APPLICATION_TOKEN,
collection_vector_service_options=VectorServiceOptions(
provider="nvidia",
model_name="NV-Embed-QA",
),
)
from astrapy.info import (
CollectionLexicalOptions,
CollectionRerankOptions,
RerankServiceOptions,
VectorServiceOptions,
)
vector_store_hybrid = AstraDBVectorStore(
collection_name="my_hybrid_store",
api_endpoint=ASTRA_DB_API_ENDPOINT,
token=ASTRA_DB_APPLICATION_TOKEN,
collection_vector_service_options=VectorServiceOptions(
provider="nvidia",
model_name="NV-Embed-QA",
),
collection_lexical=CollectionLexicalOptions(analyzer="standard"),
collection_rerank=CollectionRerankOptions(
service=RerankServiceOptions(
provider="nvidia",
model_name="nvidia/llama-3.2-nv-rerankqa-1b-v2",
),
),
)
```
Notable features of class `AstraDBVectorStore`:
- native async API;
- metadata filtering in search;
- MMR (maximum marginal relevance) search;
- server-side embedding computation (["vectorize"](https://docs.datastax.com/en/astra-db-serverless/databases/embedding-generation.html) in Astra DB parlance);
- auto-detect its settings from an existing, pre-populated Astra DB collection;
- [hybrid search](https://docs.datastax.com/en/astra-db-serverless/databases/hybrid-search.html#the-hybrid-search-process) (vector + BM25 and then a rerank step);
- support for non-Astra Data API (e.g. self-hosted [HCD](https://docs.datastax.com/en/hyper-converged-database/1.1/get-started/get-started-hcd.html)) deployments.
Learn more in the [example notebook](/docs/integrations/vectorstores/astradb).
See the [example provided by DataStax](https://docs.datastax.com/en/astra/astra-db-vector/integrations/langchain.html).
@@ -82,8 +130,6 @@ set_llm_cache(AstraDBSemanticCache(
Learn more in the [example notebook](/docs/integrations/llm_caching#astra-db-caches) (scroll to the appropriate section).
Learn more in the [example notebook](/docs/integrations/memory/astradb_chat_message_history).
## Document loader
```python

File diff suppressed because it is too large

View File

@@ -12,7 +12,10 @@ pip install langchain-litellm
```python
from langchain_litellm import ChatLiteLLM
```
```python
from langchain_litellm import ChatLiteLLMRouter
```
See more detail in the guide [here](/docs/integrations/chat/litellm).
## API reference
For detailed documentation of all `ChatLiteLLM` features and configurations head to the API reference: https://github.com/Akshay-Dongare/langchain-litellm
For detailed documentation of all `ChatLiteLLM` and `ChatLiteLLMRouter` features and configurations head to the API reference: https://github.com/Akshay-Dongare/langchain-litellm
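
A minimal usage sketch for `ChatLiteLLM` (the model name is illustrative; any provider/model string supported by LiteLLM works, and `ChatLiteLLMRouter` is used the same way once constructed with a configured LiteLLM router):

```python
from langchain_litellm import ChatLiteLLM

# Model string is illustrative; LiteLLM routes it to the right provider.
llm = ChatLiteLLM(model="gpt-4o-mini", temperature=0)

response = llm.invoke("Say hello in French.")
print(response.content)
```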

View File

@@ -27,6 +27,48 @@ from langchain_pinecone import PineconeVectorStore
For a more detailed walkthrough of the Pinecone vectorstore, see [this notebook](/docs/integrations/vectorstores/pinecone)
### Sparse Vector store
LangChain's `PineconeSparseVectorStore` enables sparse retrieval using Pinecone's sparse English model. It maps text to sparse vectors and supports adding documents and similarity search.
```python
from langchain_pinecone import PineconeSparseVectorStore
# Initialize sparse vector store
vector_store = PineconeSparseVectorStore(
index=my_index,
embedding_model="pinecone-sparse-english-v0"
)
# Add documents
vector_store.add_documents(documents)
# Query
results = vector_store.similarity_search("your query", k=3)
```
For a more detailed walkthrough, see the [Pinecone Sparse Vector Store notebook](/docs/integrations/vectorstores/pinecone_sparse).
### Sparse Embedding
LangChain's `PineconeSparseEmbeddings` provides sparse embedding generation using Pinecone's `pinecone-sparse-english-v0` model.
```python
from langchain_pinecone.embeddings import PineconeSparseEmbeddings
# Initialize sparse embeddings
sparse_embeddings = PineconeSparseEmbeddings(
model="pinecone-sparse-english-v0"
)
# Embed a single query (returns SparseValues)
query_embedding = sparse_embeddings.embed_query("sample text")
# Embed multiple documents (returns list of SparseValues)
docs = ["Document 1 content", "Document 2 content"]
doc_embeddings = sparse_embeddings.embed_documents(docs)
```
For more detailed usage, see the [Pinecone Sparse Embeddings notebook](/docs/integrations/vectorstores/pinecone_sparse).
## Retrievers
### Pinecone Hybrid Search

View File

@@ -7,19 +7,19 @@
## Installation and Setup
We need to install the `hdbcli` python package.
We need to install the `langchain-hana` python package.
```bash
pip install hdbcli
pip install langchain-hana
```
## Vectorstore
>[SAP HANA Cloud Vector Engine](https://www.sap.com/events/teched/news-guide/ai.html#article8) is
>[SAP HANA Cloud Vector Engine](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-vector-engine-guide/sap-hana-cloud-sap-hana-database-vector-engine-guide) is
> a vector store fully integrated into the `SAP HANA Cloud` database.
See a [usage example](/docs/integrations/vectorstores/sap_hanavector).
```python
from langchain_community.vectorstores.hanavector import HanaDB
from langchain_hana import HanaDB
```

View File

@@ -315,6 +315,17 @@
"Vectara offers Intelligent Query Rewriting option which enhances search precision by automatically generating metadata filter expressions from natural language queries. This capability analyzes user queries, extracts relevant metadata filters, and rephrases the query to focus on the core information need. For more details [go to this notebook](../retrievers/self_query/vectara_self_query.ipynb)."
]
},
{
"cell_type": "markdown",
"source": [
"## Vectara tools\n",
    "Vectara provides several tools that can be used with LangChain. For more details [go to this notebook](../tools/vectara.ipynb)"
],
"metadata": {
"collapsed": false
},
"id": "beadf6f485c1a69"
},
{
"cell_type": "code",
"execution_count": null,

View File

@@ -0,0 +1,322 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "7fb27b941602401d91542211134fc71a",
"metadata": {},
"source": [
"# Pinecone Rerank\n",
"\n",
    "> This notebook shows how to use **PineconeRerank** for two-stage retrieval: an initial vector search followed by a rerank step using Pinecone's hosted reranking API, as demonstrated in `langchain_pinecone/libs/pinecone/rerank.py`."
]
},
{
"cell_type": "markdown",
"id": "acae54e37e7d407bbb7b55eff062a284",
"metadata": {},
"source": [
"## Setup\n",
"Install the `langchain-pinecone` package."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9a63283cbaf04dbcab1f6479b197f3a8",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU \"langchain-pinecone\""
]
},
{
"cell_type": "markdown",
"id": "8dd0d8092fe74a7c96281538738b07e2",
"metadata": {},
"source": [
"## Credentials\n",
"Set your Pinecone API key to use the reranking API."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "72eea5119410473aa328ad9291626812",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from getpass import getpass\n",
"\n",
"os.environ[\"PINECONE_API_KEY\"] = os.getenv(\"PINECONE_API_KEY\") or getpass(\n",
" \"Enter your Pinecone API key: \"\n",
")"
]
},
{
"cell_type": "markdown",
"id": "8edb47106e1a46a883d545849b8ab81b",
"metadata": {},
"source": [
"## Instantiation\n",
"Use `PineconeRerank` to rerank a list of documents by relevance to a query."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "10185d26023b46108eb7d9f57d49d2b3",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Users/jakit/customers/aurelio/langchain-pinecone/libs/pinecone/.venv/lib/python3.10/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
" from .autonotebook import tqdm as notebook_tqdm\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Score: 0.9998 | Content: Paris is the capital of France.\n",
"Score: 0.1950 | Content: The Eiffel Tower is in Paris.\n",
"Score: 0.0042 | Content: Berlin is the capital of Germany.\n"
]
}
],
"source": [
"from langchain_core.documents import Document\n",
"from langchain_pinecone import PineconeRerank\n",
"\n",
"# Initialize reranker\n",
"reranker = PineconeRerank(model=\"bge-reranker-v2-m3\")\n",
"\n",
"# Sample documents\n",
"documents = [\n",
" Document(page_content=\"Paris is the capital of France.\"),\n",
" Document(page_content=\"Berlin is the capital of Germany.\"),\n",
" Document(page_content=\"The Eiffel Tower is in Paris.\"),\n",
"]\n",
"\n",
"# Rerank documents\n",
"query = \"What is the capital of France?\"\n",
"reranked_docs = reranker.compress_documents(documents, query)\n",
"\n",
"# Print results\n",
"for doc in reranked_docs:\n",
" score = doc.metadata.get(\"relevance_score\")\n",
" print(f\"Score: {score:.4f} | Content: {doc.page_content}\")"
]
},
{
"cell_type": "markdown",
"id": "8763a12b2bbd4a93a75aff182afb95dc",
"metadata": {},
"source": [
"## Usage\n",
"### Reranking with Top-N\n",
"Specify `top_n` to limit the number of returned documents."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "7623eae2785240b9bd12b16a66d81610",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Top-1 Result:\n",
"Score: 0.9998 | Content: Paris is the capital of France.\n"
]
}
],
"source": [
"# Return only top-1 result\n",
"reranker_top1 = PineconeRerank(model=\"bge-reranker-v2-m3\", top_n=1)\n",
"top1_docs = reranker_top1.compress_documents(documents, query)\n",
"print(\"Top-1 Result:\")\n",
"for doc in top1_docs:\n",
" print(f\"Score: {doc.metadata['relevance_score']:.4f} | Content: {doc.page_content}\")"
]
},
{
"cell_type": "markdown",
"id": "7cdc8c89c7104fffa095e18ddfef8986",
"metadata": {},
"source": [
"## Reranking with Custom Rank Fields\n",
"If your documents are dictionaries or have custom fields, use `rank_fields` to specify the field to rank on."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "b118ea5561624da68c537baed56e602f",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"ID: doc3 | Score: 0.9892\n",
"ID: doc1 | Score: 0.0006\n",
"ID: doc2 | Score: 0.0000\n"
]
}
],
"source": [
"# Sample dictionary documents with 'text' field\n",
"docs_dict = [\n",
" {\n",
" \"id\": \"doc1\",\n",
" \"text\": \"Article about renewable energy.\",\n",
" \"title\": \"Renewable Energy\",\n",
" },\n",
" {\"id\": \"doc2\", \"text\": \"Report on economic growth.\", \"title\": \"Economic Growth\"},\n",
" {\n",
" \"id\": \"doc3\",\n",
" \"text\": \"News on climate policy changes.\",\n",
" \"title\": \"Climate Policy\",\n",
" },\n",
"]\n",
"\n",
"# Initialize reranker with rank_fields\n",
"reranker_text = PineconeRerank(model=\"bge-reranker-v2-m3\", rank_fields=[\"text\"])\n",
"climate_docs = reranker_text.rerank(docs_dict, \"Latest news on climate change.\")\n",
"\n",
"# Show IDs and scores\n",
"for res in climate_docs:\n",
" print(f\"ID: {res['id']} | Score: {res['score']:.4f}\")"
]
},
{
"cell_type": "markdown",
"id": "a80bb6c3",
"metadata": {},
"source": [
    "We can also rerank based on the `title` field:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "a6f2768e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"ID: doc2 | Score: 0.8918 | Title: Economic Growth\n",
"ID: doc3 | Score: 0.0002 | Title: Climate Policy\n",
"ID: doc1 | Score: 0.0000 | Title: Renewable Energy\n"
]
}
],
"source": [
"economic_docs = reranker_text.rerank(docs_dict, \"Economic forecast.\")\n",
"\n",
"# Show IDs and scores\n",
"for res in economic_docs:\n",
" print(\n",
" f\"ID: {res['id']} | Score: {res['score']:.4f} | Title: {res['document']['title']}\"\n",
" )"
]
},
{
"cell_type": "markdown",
"id": "938c804e27f84196a10c8828c723f798",
"metadata": {},
"source": [
"## Reranking with Additional Parameters\n",
"You can pass model-specific parameters (e.g., `truncate`) directly to `.rerank()`."
]
},
{
"cell_type": "markdown",
"id": "a94c501c",
"metadata": {},
"source": [
    "The `truncate` parameter controls how inputs longer than the model's token limit are handled. Accepted values: `END` or `NONE`.\n",
    "`END` truncates the input sequence at the token limit, while `NONE` returns an error when the input exceeds it."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "504fb2a444614c0babb325280ed9130a",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"ID: docA | Score: 0.6950\n",
"ID: docB | Score: 0.0001\n"
]
}
],
"source": [
"# Rerank with custom truncate parameter\n",
"docs_simple = [\n",
" {\"id\": \"docA\", \"text\": \"Quantum entanglement is a physical phenomenon...\"},\n",
" {\"id\": \"docB\", \"text\": \"Classical mechanics describes motion...\"},\n",
"]\n",
"\n",
"reranked = reranker.rerank(\n",
" documents=docs_simple,\n",
" query=\"Explain the concept of quantum entanglement.\",\n",
" truncate=\"END\",\n",
")\n",
"# Print reranked IDs and scores\n",
"for res in reranked:\n",
" print(f\"ID: {res['id']} | Score: {res['score']:.4f}\")"
]
},
{
"cell_type": "markdown",
"id": "ab78bcd8",
"metadata": {},
"source": [
"## Use within a chain"
]
},
{
"cell_type": "markdown",
"id": "59bbdb311c014d738909a11f9e486628",
"metadata": {},
"source": [
"## API reference\n",
"- `PineconeRerank(model, top_n, rank_fields, return_documents)`\n",
"- `.rerank(documents, query, rank_fields=None, model=None, top_n=None, truncate=\"END\")`\n",
"- `.compress_documents(documents, query)` (returns `Document` objects with `relevance_score` in metadata)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.15"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -4,9 +4,11 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Astra DB (Cassandra)\n",
"# Astra DB\n",
"\n",
">[DataStax Astra DB](https://docs.datastax.com/en/astra/home/astra.html) is a serverless vector-capable database built on `Cassandra` and made conveniently available through an easy-to-use JSON API.\n",
"> [DataStax Astra DB](https://docs.datastax.com/en/astra-db-serverless/index.html) is a serverless \n",
"> AI-ready database built on `Apache Cassandra®` and made conveniently available \n",
"> through an easy-to-use JSON API.\n",
"\n",
"In the walkthrough, we'll demo the `SelfQueryRetriever` with an `Astra DB` vector store."
]
@@ -16,32 +18,46 @@
"metadata": {},
"source": [
"## Creating an Astra DB vector store\n",
"First we'll want to create an Astra DB VectorStore and seed it with some data. We've created a small demo set of documents that contain summaries of movies.\n",
"First, create an Astra DB vector store and seed it with some data.\n",
"\n",
"NOTE: The self-query retriever requires you to have `lark` installed (`pip install lark`). We also need the `astrapy` package."
"We've created a small demo set of documents containing movie summaries.\n",
"\n",
"NOTE: The self-query retriever requires the `lark` package installed (`pip install lark`)."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet lark astrapy langchain-openai"
"!pip install \"langchain-astradb>=0.6,<0.7\" \\\n",
" \"langchain_openai>=0.3,<0.4\" \\\n",
" \"lark>=1.2,<2.0\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We want to use `OpenAIEmbeddings` so we have to get the OpenAI API Key."
"In this example, you'll use the `OpenAIEmbeddings`. Please enter an OpenAI API Key."
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdin",
"output_type": "stream",
"text": [
"OpenAI API Key: ········\n"
]
}
],
"source": [
"import os\n",
"from getpass import getpass\n",
@@ -69,14 +85,23 @@
"Create the Astra DB VectorStore:\n",
"\n",
"- the API Endpoint looks like `https://01234567-89ab-cdef-0123-456789abcdef-us-east1.apps.astra.datastax.com`\n",
"- the Token looks like `AstraCS:6gBhNmsk135....`"
"- the Token looks like `AstraCS:aBcD0123...`"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 2,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdin",
"output_type": "stream",
"text": [
"ASTRA_DB_API_ENDPOINT = https://01234567-89ab-cdef-0123-456789abcdef-us-east1.apps.astra.datastax.com\n",
"ASTRA_DB_APPLICATION_TOKEN = ········\n"
]
}
],
"source": [
"ASTRA_DB_API_ENDPOINT = input(\"ASTRA_DB_API_ENDPOINT = \")\n",
"ASTRA_DB_APPLICATION_TOKEN = getpass(\"ASTRA_DB_APPLICATION_TOKEN = \")"
@@ -84,11 +109,11 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.vectorstores import AstraDB\n",
"from langchain_astradb import AstraDBVectorStore\n",
"from langchain_core.documents import Document\n",
"\n",
"docs = [\n",
@@ -101,11 +126,13 @@
" metadata={\"year\": 2010, \"director\": \"Christopher Nolan\", \"rating\": 8.2},\n",
" ),\n",
" Document(\n",
" page_content=\"A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea\",\n",
" page_content=\"A psychologist / detective gets lost in a series of dreams within dreams \"\n",
" \"within dreams and Inception reused the idea\",\n",
" metadata={\"year\": 2006, \"director\": \"Satoshi Kon\", \"rating\": 8.6},\n",
" ),\n",
" Document(\n",
" page_content=\"A bunch of normal-sized women are supremely wholesome and some men pine after them\",\n",
" page_content=\"A bunch of normal-sized women are supremely wholesome and some men \"\n",
" \"pine after them\",\n",
" metadata={\"year\": 2019, \"director\": \"Greta Gerwig\", \"rating\": 8.3},\n",
" ),\n",
" Document(\n",
@@ -123,7 +150,7 @@
" ),\n",
"]\n",
"\n",
"vectorstore = AstraDB.from_documents(\n",
"vectorstore = AstraDBVectorStore.from_documents(\n",
" docs,\n",
" embeddings,\n",
" collection_name=\"astra_self_query_demo\",\n",
@@ -136,13 +163,16 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Creating our self-querying retriever\n",
"Now we can instantiate our retriever. To do this we'll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents."
"## Creating a self-querying retriever\n",
"\n",
"Now you can instantiate the retriever.\n",
"\n",
"To do this, you need to provide some information upfront about the metadata fields that the documents support, along with a short description of the documents' contents."
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
@@ -174,7 +204,11 @@
"llm = OpenAI(temperature=0)\n",
"\n",
"retriever = SelfQueryRetriever.from_llm(\n",
" llm, vectorstore, document_content_description, metadata_field_info, verbose=True\n",
" llm,\n",
" vectorstore,\n",
" document_content_description,\n",
" metadata_field_info,\n",
" verbose=True,\n",
")"
]
},
@@ -183,14 +217,29 @@
"metadata": {},
"source": [
"## Testing it out\n",
"And now we can try actually using our retriever!"
"\n",
"Now you can try actually using our retriever:"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 5,
"metadata": {},
"outputs": [],
"outputs": [
{
"data": {
"text/plain": [
"[Document(id='d7b9ec1edafa467caab524455e8c1f5d', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}, page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose'),\n",
" Document(id='8ad04ef2a73d4f74897a51e49be1a8d2', metadata={'year': 1995, 'genre': 'animated'}, page_content='Toys come alive and have a blast doing so'),\n",
" Document(id='5b07e600d3494506952b60e0a45a0546', metadata={'year': 1979, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction', 'rating': 9.9}, page_content='Three men walk into the Zone, three men walk out of the Zone'),\n",
" Document(id='a0cef19e27c341929098ac4793602829', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6}, page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea')]"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# This example only specifies a relevant query\n",
"retriever.invoke(\"What are some movies about dinosaurs?\")"
@@ -198,9 +247,21 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 6,
"metadata": {},
"outputs": [],
"outputs": [
{
"data": {
"text/plain": [
"[Document(id='5b07e600d3494506952b60e0a45a0546', metadata={'year': 1979, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction', 'rating': 9.9}, page_content='Three men walk into the Zone, three men walk out of the Zone'),\n",
" Document(id='a0cef19e27c341929098ac4793602829', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6}, page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea')]"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# This example specifies a filter\n",
"retriever.invoke(\"I want to watch a movie rated higher than 8.5\")"
@@ -208,9 +269,20 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 7,
"metadata": {},
"outputs": [],
"outputs": [
{
"data": {
"text/plain": [
"[Document(id='0539843fd203484c9be486c2a0e2454c', metadata={'year': 2019, 'director': 'Greta Gerwig', 'rating': 8.3}, page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them')]"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# This example only specifies a query and a filter\n",
"retriever.invoke(\"Has Greta Gerwig directed any movies about women\")"
@@ -218,9 +290,21 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 8,
"metadata": {},
"outputs": [],
"outputs": [
{
"data": {
"text/plain": [
"[Document(id='a0cef19e27c341929098ac4793602829', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6}, page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea'),\n",
" Document(id='5b07e600d3494506952b60e0a45a0546', metadata={'year': 1979, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction', 'rating': 9.9}, page_content='Three men walk into the Zone, three men walk out of the Zone')]"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# This example specifies a composite filter\n",
"retriever.invoke(\"What's a highly rated (above 8.5), science fiction movie ?\")"
@@ -228,9 +312,20 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 9,
"metadata": {},
"outputs": [],
"outputs": [
{
"data": {
"text/plain": [
"[Document(id='8ad04ef2a73d4f74897a51e49be1a8d2', metadata={'year': 1995, 'genre': 'animated'}, page_content='Toys come alive and have a blast doing so')]"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# This example specifies a query and composite filter\n",
"retriever.invoke(\n",
@@ -242,20 +337,20 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Filter k\n",
"## Set a limit ('k')\n",
"\n",
"We can also use the self query retriever to specify `k`: the number of documents to fetch.\n",
"you can also use the self-query retriever to specify `k`, the number of documents to fetch.\n",
"\n",
"We can do this by passing `enable_limit=True` to the constructor."
"You achieve this by passing `enable_limit=True` to the constructor."
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"retriever = SelfQueryRetriever.from_llm(\n",
"retriever_k = SelfQueryRetriever.from_llm(\n",
" llm,\n",
" vectorstore,\n",
" document_content_description,\n",
@@ -267,12 +362,24 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 11,
"metadata": {},
"outputs": [],
"outputs": [
{
"data": {
"text/plain": [
"[Document(id='d7b9ec1edafa467caab524455e8c1f5d', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}, page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose'),\n",
" Document(id='8ad04ef2a73d4f74897a51e49be1a8d2', metadata={'year': 1995, 'genre': 'animated'}, page_content='Toys come alive and have a blast doing so')]"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# This example only specifies a relevant query\n",
"retriever.invoke(\"What are two movies about dinosaurs?\")"
"retriever_k.invoke(\"What are two movies about dinosaurs?\")"
]
},
{
@@ -293,7 +400,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 12,
"metadata": {
"collapsed": false,
"jupyter": {
@@ -322,7 +429,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.12.8"
}
},
"nbformat": 4,


@@ -23,7 +23,9 @@
"\n",
"## Overview\n",
"\n",
"DataStax [Astra DB](https://docs.datastax.com/en/astra/home/astra.html) is a serverless vector-capable database built on Cassandra and made conveniently available through an easy-to-use JSON API.\n",
"> [DataStax Astra DB](https://docs.datastax.com/en/astra-db-serverless/index.html) is a serverless \n",
"> AI-ready database built on `Apache Cassandra®` and made conveniently available \n",
"> through an easy-to-use JSON API.\n",
"\n",
"### Integration details\n",
"\n",


@@ -1,13 +1,76 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "8543d632",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Google Gemini\n",
"keywords: [google gemini embeddings]\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "afab8b36-10bb-4795-bc98-75ab2d2081bb",
"metadata": {},
"source": [
"# Google Generative AI Embeddings\n",
"# Google Generative AI Embeddings (AI Studio & Gemini API)\n",
"\n",
"Connect to Google's generative AI embeddings service using the `GoogleGenerativeAIEmbeddings` class, found in the [langchain-google-genai](https://pypi.org/project/langchain-google-genai/) package."
"Connect to Google's generative AI embeddings service using the `GoogleGenerativeAIEmbeddings` class, found in the [langchain-google-genai](https://pypi.org/project/langchain-google-genai/) package.\n",
"\n",
"This will help you get started with Google's Generative AI embedding models (like Gemini) using LangChain. For detailed documentation on `GoogleGenerativeAIEmbeddings` features and configuration options, please refer to the [API reference](https://python.langchain.com/v0.2/api_reference/google_genai/embeddings/langchain_google_genai.embeddings.GoogleGenerativeAIEmbeddings.html).\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"import { ItemTable } from \"@theme/FeatureTables\";\n",
"\n",
"<ItemTable category=\"text_embedding\" item=\"Google Gemini\" />\n",
"\n",
"## Setup\n",
"\n",
"To access Google Generative AI embedding models you'll need to create a Google Cloud project, enable the Generative Language API, get an API key, and install the `langchain-google-genai` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"To use Google Generative AI models, you must have an API key. You can create one in Google AI Studio. See the [Google documentation](https://ai.google.dev/gemini-api/docs/api-key) for instructions.\n",
"\n",
"Once you have a key, set it as an environment variable `GOOGLE_API_KEY`:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "47652620",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if not os.getenv(\"GOOGLE_API_KEY\"):\n",
" os.environ[\"GOOGLE_API_KEY\"] = getpass.getpass(\"Enter your Google API key: \")"
]
},
{
"cell_type": "markdown",
"id": "67283790",
"metadata": {},
"source": [
"To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "eccf1968",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
]
},
{
@@ -28,28 +91,6 @@
"%pip install --upgrade --quiet langchain-google-genai"
]
},
{
"cell_type": "markdown",
"id": "25f3f88e-164e-400d-b371-9fa488baba19",
"metadata": {},
"source": [
"## Credentials"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ec89153f-8999-4aab-a21b-0bfba1cc3893",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if \"GOOGLE_API_KEY\" not in os.environ:\n",
" os.environ[\"GOOGLE_API_KEY\"] = getpass.getpass(\"Provide your Google API key here\")"
]
},
{
"cell_type": "markdown",
"id": "f2437b22-e364-418a-8c13-490a026cb7b5",
@@ -60,17 +101,21 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 20,
"id": "eedc551e-a1f3-4fd8-8d65-4e0784c4441b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[0.05636945, 0.0048285457, -0.0762591, -0.023642512, 0.05329321]"
"[-0.024917153641581535,\n",
" 0.012005362659692764,\n",
" -0.003886754624545574,\n",
" -0.05774897709488869,\n",
" 0.0020742062479257584]"
]
},
"execution_count": 6,
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
@@ -78,7 +123,7 @@
"source": [
"from langchain_google_genai import GoogleGenerativeAIEmbeddings\n",
"\n",
"embeddings = GoogleGenerativeAIEmbeddings(model=\"models/text-embedding-004\")\n",
"embeddings = GoogleGenerativeAIEmbeddings(model=\"models/gemini-embedding-exp-03-07\")\n",
"vector = embeddings.embed_query(\"hello, world!\")\n",
"vector[:5]"
]
@@ -95,17 +140,17 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 5,
"id": "6ec53aba-404f-4778-acd9-5d6664e79ed2",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(3, 768)"
"(3, 3072)"
]
},
"execution_count": 7,
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
@@ -121,6 +166,56 @@
"len(vectors), len(vectors[0])"
]
},
{
"cell_type": "markdown",
"id": "c362bfbf",
"metadata": {},
"source": [
"## Indexing and Retrieval\n",
"\n",
"Embedding models are often used in retrieval-augmented generation (RAG) flows, both as part of indexing data as well as later retrieving it. For more detailed instructions, please see our [RAG tutorials](/docs/tutorials/).\n",
"\n",
"Below, see how to index and retrieve data using the `embeddings` object we initialized above. In this example, we will index and retrieve a sample document in the `InMemoryVectorStore`."
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "606a7f65",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'LangChain is the framework for building context-aware reasoning applications'"
]
},
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Create a vector store with a sample text\n",
"from langchain_core.vectorstores import InMemoryVectorStore\n",
"\n",
"text = \"LangChain is the framework for building context-aware reasoning applications\"\n",
"\n",
"vectorstore = InMemoryVectorStore.from_texts(\n",
" [text],\n",
" embedding=embeddings,\n",
")\n",
"\n",
"# Use the vectorstore as a retriever\n",
"retriever = vectorstore.as_retriever()\n",
"\n",
"# Retrieve the most similar text\n",
"retrieved_documents = retriever.invoke(\"What is LangChain?\")\n",
"\n",
"# show the retrieved document's content\n",
"retrieved_documents[0].page_content"
]
},
{
"cell_type": "markdown",
"id": "1482486f-5617-498a-8a44-1974d3212dda",
@@ -129,70 +224,74 @@
"## Task type\n",
"`GoogleGenerativeAIEmbeddings` optionally support a `task_type`, which currently must be one of:\n",
"\n",
"- task_type_unspecified\n",
"- retrieval_query\n",
"- retrieval_document\n",
"- semantic_similarity\n",
"- classification\n",
"- clustering\n",
"- `SEMANTIC_SIMILARITY`: Used to generate embeddings that are optimized to assess text similarity.\n",
"- `CLASSIFICATION`: Used to generate embeddings that are optimized to classify texts according to preset labels.\n",
"- `CLUSTERING`: Used to generate embeddings that are optimized to cluster texts based on their similarities.\n",
"- `RETRIEVAL_DOCUMENT`, `RETRIEVAL_QUERY`, `QUESTION_ANSWERING`, and `FACT_VERIFICATION`: Used to generate embeddings that are optimized for document search or information retrieval.\n",
"- `CODE_RETRIEVAL_QUERY`: Used to retrieve a code block based on a natural language query, such as sort an array or reverse a linked list. Embeddings of the code blocks are computed using `RETRIEVAL_DOCUMENT`.\n",
"\n",
"By default, we use `retrieval_document` in the `embed_documents` method and `retrieval_query` in the `embed_query` method. If you provide a task type, we will use that for all methods."
"By default, we use `RETRIEVAL_DOCUMENT` in the `embed_documents` method and `RETRIEVAL_QUERY` in the `embed_query` method. If you provide a task type, we will use that for all methods."
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "a223bb25-2b1b-418e-a570-2f543083132e",
"execution_count": null,
"id": "b7acc5c2",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"outputs": [],
"source": [
"%pip install --upgrade --quiet matplotlib scikit-learn"
]
},
{
"cell_type": "code",
"execution_count": 33,
"execution_count": 19,
"id": "f1f077db-8eb4-49f7-8866-471a8528dcdb",
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Document 1\n",
"Cosine similarity with query: 0.7892893360164779\n",
"---\n",
"Document 2\n",
"Cosine similarity with query: 0.5438283285204146\n",
"---\n"
]
}
],
"source": [
"from langchain_google_genai import GoogleGenerativeAIEmbeddings\n",
"from sklearn.metrics.pairwise import cosine_similarity\n",
"\n",
"query_embeddings = GoogleGenerativeAIEmbeddings(\n",
" model=\"models/embedding-001\", task_type=\"retrieval_query\"\n",
" model=\"models/gemini-embedding-exp-03-07\", task_type=\"RETRIEVAL_QUERY\"\n",
")\n",
"doc_embeddings = GoogleGenerativeAIEmbeddings(\n",
" model=\"models/embedding-001\", task_type=\"retrieval_document\"\n",
")"
" model=\"models/gemini-embedding-exp-03-07\", task_type=\"RETRIEVAL_DOCUMENT\"\n",
")\n",
"\n",
"q_embed = query_embeddings.embed_query(\"What is the capital of France?\")\n",
"d_embed = doc_embeddings.embed_documents(\n",
" [\"The capital of France is Paris.\", \"Philipp is likes to eat pizza.\"]\n",
")\n",
"\n",
"for i, d in enumerate(d_embed):\n",
" print(f\"Document {i+1}:\")\n",
" print(f\"Cosine similarity with query: {cosine_similarity([q_embed], [d])[0][0]}\")\n",
" print(\"---\")"
]
},
{
"cell_type": "markdown",
"id": "79bd4a5e-75ba-413c-befa-86167c938caf",
"id": "f45ea7b1",
"metadata": {},
"source": [
"All of these will be embedded with the 'retrieval_query' task set\n",
"```python\n",
"query_vecs = [query_embeddings.embed_query(q) for q in [query, query_2, answer_1]]\n",
"```\n",
"All of these will be embedded with the 'retrieval_document' task set\n",
"```python\n",
"doc_vecs = [doc_embeddings.embed_query(q) for q in [query, query_2, answer_1]]\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "9e1fae5e-0f84-4812-89f5-7d4d71affbc1",
"metadata": {},
"source": [
"In retrieval, relative distance matters. In the image above, you can see the difference in similarity scores between the \"relevant doc\" and \"simil stronger delta between the similar query and relevant doc on the latter case."
"## API Reference\n",
"\n",
"For detailed documentation on `GoogleGenerativeAIEmbeddings` features and configuration options, please refer to the [API reference](https://python.langchain.com/api_reference/google_genai/embeddings/langchain_google_genai.embeddings.GoogleGenerativeAIEmbeddings.html).\n"
]
},
{
@@ -211,7 +310,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": ".venv",
"language": "python",
"name": "python3"
},
@@ -225,7 +324,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.9.6"
}
},
"nbformat": 4,


@@ -26,7 +26,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"id": "f7b3767b",
"metadata": {
"tags": []
@@ -83,40 +83,39 @@
},
{
"cell_type": "code",
"execution_count": 3,
"id": "851fee9f",
"metadata": {
"tags": []
},
"execution_count": null,
"id": "7f056cc3-628d-46ba-b394-ee1d89f8650a",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"================================\u001b[1m Human Message \u001b[0m=================================\n",
"\n",
"Download the README here and identify the link for LangChain tutorials: https://raw.githubusercontent.com/langchain-ai/langchain/master/README.md\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" terminal (call_mr86V0d6E9nQiJZT7Xw5fH0G)\n",
" Call ID: call_mr86V0d6E9nQiJZT7Xw5fH0G\n",
" Args:\n",
" commands: ['curl -o README.md https://raw.githubusercontent.com/langchain-ai/langchain/master/README.md']\n",
"Executing command:\n",
" ['curl -o README.md https://raw.githubusercontent.com/langchain-ai/langchain/master/README.md']\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
"Name: terminal\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mQuestion: What is the task?\n",
"Thought: We need to download the langchain.com webpage and extract all the URLs from it. Then we need to sort the URLs and return them.\n",
"Action:\n",
"```\n",
"{\n",
" \"action\": \"shell\",\n",
" \"action_input\": {\n",
" \"commands\": [\n",
" \"curl -s https://langchain.com | grep -o 'http[s]*://[^\\\" ]*' | sort\"\n",
" ]\n",
" }\n",
"}\n",
"```\n",
"\u001b[0m"
" % Total % Received % Xferd Average Speed Time Time Time Current\n",
" Dload Upload Total Spent Left Speed\n",
"100 5169 100 5169 0 0 114k 0 --:--:-- --:--:-- --:--:-- 114k\n",
"\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Users/wfh/code/lc/lckg/langchain/tools/shell/tool.py:34: UserWarning: The shell tool has no safeguards by default. Use at your own risk.\n",
"/langchain/libs/community/langchain_community/tools/shell/tool.py:33: UserWarning: The shell tool has no safeguards by default. Use at your own risk.\n",
" warnings.warn(\n"
]
},
@@ -124,50 +123,58 @@
"name": "stdout",
"output_type": "stream",
"text": [
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" terminal (call_LF8TGrgS84WvUvaazYnVfib8)\n",
" Call ID: call_LF8TGrgS84WvUvaazYnVfib8\n",
" Args:\n",
" commands: [\"grep -i 'tutorial' README.md\"]\n",
"Executing command:\n",
" [\"grep -i 'tutorial' README.md\"]\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
"Name: terminal\n",
"\n",
"Observation: \u001b[36;1m\u001b[1;3mhttps://blog.langchain.dev/\n",
"https://discord.gg/6adMQxSpJS\n",
"https://docs.langchain.com/docs/\n",
"https://github.com/hwchase17/chat-langchain\n",
"https://github.com/hwchase17/langchain\n",
"https://github.com/hwchase17/langchainjs\n",
"https://github.com/sullivan-sean/chat-langchainjs\n",
"https://js.langchain.com/docs/\n",
"https://python.langchain.com/en/latest/\n",
"https://twitter.com/langchainai\n",
"\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mThe URLs have been successfully extracted and sorted. We can return the list of URLs as the final answer.\n",
"Final Answer: [\"https://blog.langchain.dev/\", \"https://discord.gg/6adMQxSpJS\", \"https://docs.langchain.com/docs/\", \"https://github.com/hwchase17/chat-langchain\", \"https://github.com/hwchase17/langchain\", \"https://github.com/hwchase17/langchainjs\", \"https://github.com/sullivan-sean/chat-langchainjs\", \"https://js.langchain.com/docs/\", \"https://python.langchain.com/en/latest/\", \"https://twitter.com/langchainai\"]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
"- [Tutorials](https://python.langchain.com/docs/tutorials/): Simple walkthroughs with\n",
"\n"
]
},
{
"data": {
"text/plain": [
"'[\"https://blog.langchain.dev/\", \"https://discord.gg/6adMQxSpJS\", \"https://docs.langchain.com/docs/\", \"https://github.com/hwchase17/chat-langchain\", \"https://github.com/hwchase17/langchain\", \"https://github.com/hwchase17/langchainjs\", \"https://github.com/sullivan-sean/chat-langchainjs\", \"https://js.langchain.com/docs/\", \"https://python.langchain.com/en/latest/\", \"https://twitter.com/langchainai\"]'"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
"name": "stderr",
"output_type": "stream",
"text": [
"/langchain/libs/community/langchain_community/tools/shell/tool.py:33: UserWarning: The shell tool has no safeguards by default. Use at your own risk.\n",
" warnings.warn(\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"\n",
"The link for LangChain tutorials in the README is: https://python.langchain.com/docs/tutorials/\n"
]
}
],
"source": [
"from langchain.agents import AgentType, initialize_agent\n",
"from langchain_openai import ChatOpenAI\n",
"from langgraph.prebuilt import create_react_agent\n",
"\n",
"llm = ChatOpenAI(temperature=0)\n",
"tools = [shell_tool]\n",
"agent = create_react_agent(\"openai:gpt-4.1-mini\", tools)\n",
"\n",
"shell_tool.description = shell_tool.description + f\"args {shell_tool.args}\".replace(\n",
" \"{\", \"{{\"\n",
").replace(\"}\", \"}}\")\n",
"self_ask_with_search = initialize_agent(\n",
" [shell_tool], llm, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True\n",
")\n",
"self_ask_with_search.run(\n",
" \"Download the langchain.com webpage and grep for all urls. Return only a sorted list of them. Be sure to use double quotes.\"\n",
")"
"input_message = {\n",
" \"role\": \"user\",\n",
" \"content\": (\n",
" \"Download the README here and identify the link for LangChain tutorials: \"\n",
" \"https://raw.githubusercontent.com/langchain-ai/langchain/master/README.md\"\n",
" ),\n",
"}\n",
"\n",
"for step in agent.stream(\n",
" {\"messages\": [input_message]},\n",
" stream_mode=\"values\",\n",
"):\n",
" step[\"messages\"][-1].pretty_print()"
]
},
{
@@ -195,7 +202,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.10.4"
}
},
"nbformat": 4,


@@ -0,0 +1,346 @@
{
"cells": [
{
"cell_type": "raw",
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Compass DeFi Toolkit\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# Compass LangChain Toolkit\n",
"\n",
"Interact with various DeFi protocols. Non-custodial.",
"Tools return *unsigned transactions*. The toolkit is built on top of a Universal DeFi API ([Compass API](https://api.compasslabs.ai/)) allowing agents to perform financial operations like:\n",
"\n",
"- **Swapping tokens** on Uniswap and Aerodrome\n",
"- **Lending** or **borrowing** assets using protocols on Aave\n",
"- **Providing liquidity** on Aerodrome and Uniswap\n",
"- **Transferring funds** between wallets.\n",
"- Querying balances, portfolios and **monitoring positions**.\n",
"\n",
"## Overview\n",
"\n",
"### Integration details\n",
"\n",
"| Class | Package | Serializable | JS support | Package latest |\n",
"|:-------------------------|:--------------------| :---: | :---: |:----------------------------------------------------------------------------------------------:|\n",
"| LangchainCompassToolkit | `langchain-compass` | ❌ | ❌ | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-compass?style=flat-square&label=%20) |\n",
"\n",
"### Tool features\n",
"\n",
"Heres a sample of the tools this toolkit provides (subject to change daily):\n",
"\n",
"- `aave_supply`: Supply assets to Aave to earn interest.\n",
"- `aave_borrow`: Borrow assets from Aave using collateral.\n",
"- `uniswap_swap_sell_exactly`: Swap a specific amount of one token on Uniswap.\n",
"- `generic_portfolio_get`: Retrieve a wallets portfolio in USD and token balances.\n",
"- `generic_transfer_erc20`: Transfer ERC20 tokens between addresses.\n",
"\n",
"\n",
"## Setup\n",
"\n",
"Here we will:\n",
"\n",
"1. Install the langchain package\n",
"2. Import and instantiate the toolkit\n",
"3. Pass the tools to your agent with `toolkit.get_tools()`"
]
},
{
"cell_type": "markdown",
"id": "0730d6a1",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"This toolkit lives in the `langchain-compass` package:"
]
},
{
"metadata": {},
"cell_type": "code",
"outputs": [],
"execution_count": null,
"source": "%pip install -qU langchain-compass",
"id": "652d6238"
},
{
"cell_type": "markdown",
"id": "a38cde65",
"metadata": {},
"source": [
"#### Environment Setup\n",
"\n",
"To run these examples, ensure LangChain has access to an LLM service. For instance, if you're using GPT-4o, create a `.env` file containing:\n",
"\n",
"```plaintext\n",
"# .env file\n",
"OPENAI_API_KEY=<your_openai_api_key_here>\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "5c5f2839",
"metadata": {},
"source": [
"### Instantiation\n",
"\n",
"Now we can instantiate our toolkit:"
]
},
{
"metadata": {
"ExecuteTime": {
"end_time": "2025-04-16T15:00:25.188941Z",
"start_time": "2025-04-16T15:00:23.842165Z"
}
},
"cell_type": "code",
"outputs": [],
"execution_count": 3,
"source": [
"from langchain_compass.toolkits import LangchainCompassToolkit\n",
"\n",
"toolkit = LangchainCompassToolkit(compass_api_key=None)"
],
"id": "51a60dbe"
},
{
"cell_type": "markdown",
"id": "d11245ad",
"metadata": {},
"source": [
"### Tools\n",
"\n",
"View [available tools](#tool-features):"
]
},
{
"cell_type": "code",
"id": "310bf18e",
"metadata": {},
"source": [
"tools = toolkit.get_tools()\n",
"for tool in tools:\n",
" print(tool.name)"
],
"outputs": [],
"execution_count": null
},
{
"metadata": {},
"cell_type": "markdown",
"source": [
"```\n",
"# Expected output:\n",
"\n",
"aave_supply\n",
"aave_borrow\n",
"aave_repay\n",
"aave_withdraw\n",
"aave_asset_price_get\n",
"aave_liquidity_change_get\n",
"aave_user_position_summary_get\n",
"...\n",
"```"
],
"id": "c3fd41b52c203e03"
},
{
"metadata": {},
"cell_type": "markdown",
"source": [
"## Invocation\n",
"\n",
"To invoke a single tool programmatically:"
],
"id": "73b871e54cf1996"
},
{
"metadata": {
"ExecuteTime": {
"end_time": "2025-04-16T15:16:33.924523Z",
"start_time": "2025-04-16T15:16:33.564016Z"
}
},
"cell_type": "code",
"source": [
"tool_name = \"generic_ens_get\"\n",
"tool = next(tool for tool in tools if tool.name == tool_name)\n",
"tool.invoke({\"ens_name\": \"vitalik.eth\", \"chain\": \"ethereum:mainnet\"})"
],
"id": "e384e604c38f07de",
"outputs": [
{
"data": {
"text/plain": [
"EnsNameInfoResponse(wallet_address='0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045', registrant='0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045')"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"execution_count": 13
},
{
"cell_type": "markdown",
"id": "23e11cc9",
"metadata": {},
"source": [
"## Use within an agent\n",
"\n",
"We will need an LLM or chat model:"
]
},
{
"metadata": {
"ExecuteTime": {
"end_time": "2025-04-16T15:00:27.027533Z",
"start_time": "2025-04-16T15:00:26.364789Z"
}
},
"cell_type": "code",
"outputs": [],
"execution_count": 5,
"source": [
"from dotenv import load_dotenv\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"load_dotenv()\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4o\")"
],
"id": "d1ee55bc"
},
{
"cell_type": "markdown",
"id": "3a5bb5ca",
"metadata": {},
"source": [
"Initialize the agent with the tools:"
]
},
{
"metadata": {
"ExecuteTime": {
"end_time": "2025-04-16T15:00:27.948912Z",
"start_time": "2025-04-16T15:00:27.033842Z"
}
},
"cell_type": "code",
"outputs": [],
"execution_count": 6,
"source": [
"from langgraph.prebuilt import create_react_agent\n",
"\n",
"tools = toolkit.get_tools()\n",
"agent_executor = create_react_agent(llm, tools)"
],
"id": "f8a2c4b1"
},
{
"cell_type": "markdown",
"id": "b4a7c9d2",
"metadata": {},
"source": "Example usage:"
},
{
"metadata": {},
"cell_type": "code",
"outputs": [],
"execution_count": null,
"source": [
"example_query = \"please set an allowance on Uniswap of 10 WETH for vitalic.eth.\"  # 'vitalic' is intentionally misspelled\n",
"\n",
"events = agent_executor.stream(\n",
" {\"messages\": [(\"user\", example_query)]},\n",
" stream_mode=\"values\",\n",
")\n",
"for event in events:\n",
" event[\"messages\"][-1].pretty_print()"
],
"id": "c9a8e4f3"
},
{
"cell_type": "markdown",
"id": "e5a7c9d4",
"metadata": {},
"source": [
"Expected output:\n",
"```\n",
"================================\u001B[1m Human Message \u001B[0m=================================\n",
"\n",
"please set an allowance on Uniswap of 10 WETH for vitalic.eth.\n",
"==================================\u001B[1m Ai Message \u001B[0m==================================\n",
"Tool Calls:\n",
" generic_ens_get (call_MHIXRXxWH0L7iUEYHwvDUdU1)\n",
" Call ID: call_MHIXRXxWH0L7iUEYHwvDUdU1\n",
" Args:\n",
" chain: ethereum:mainnet\n",
" ens_name: vitalic.eth\n",
"=================================\u001B[1m Tool Message \u001B[0m=================================\n",
"Name: generic_ens_get\n",
"\n",
"wallet_address='0x44761Ef63FaD902D8f8dC77e559Ab116929881Db' registrant='0x44761Ef63FaD902D8f8dC77e559Ab116929881Db'\n",
"==================================\u001B[1m Ai Message \u001B[0m==================================\n",
"Tool Calls:\n",
" generic_allowance_set (call_IEBftbtBfKCkI1zFXXtEY8tq)\n",
" Call ID: call_IEBftbtBfKCkI1zFXXtEY8tq\n",
" Args:\n",
" amount: 10\n",
" chain: ethereum:mainnet\n",
" contract_name: UniswapV3Router\n",
" sender: 0x44761Ef63FaD902D8f8dC77e559Ab116929881Db\n",
" token: WETH\n",
"=================================\u001B[1m Tool Message \u001B[0m=================================\n",
"Name: generic_allowance_set\n",
"\n",
"{\"type\": \"unsigned_transaction\", \"content\": {\"chainId\": 1, \"data\": \"0x095ea7b300000000000000000000000068b3465833fb72a70ecdf485e0e4c7bd8665fc450000000000000000000000000000000000000000000000008ac7230489e80000\", \"from\": \"0x44761Ef63FaD902D8f8dC77e559Ab116929881Db\", \"gas\": 46434, \"to\": \"0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2\", \"value\": 0, \"nonce\": 79, \"maxFeePerGas\": 2265376912, \"maxPriorityFeePerGas\": 6400594}}\n",
"\n",
"```"
]
},
{
"metadata": {},
"cell_type": "markdown",
"source": [
"## API reference\n",
"\n",
"`langchain-compass` is built on top of the Compass API. Each tool corresponds to an API endpoint. [Please check out the docs here](https://api.compasslabs.ai/)"
],
"id": "2fe1ec8c22e5e79f"
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
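The invocation pattern above pulls one tool out of `toolkit.get_tools()` by name with `next(...)`, which raises a bare `StopIteration` when the name is missing. Below is a minimal sketch of a friendlier lookup. The `SimpleNamespace` objects stand in for real LangChain tools (the only assumption is that tools expose `.name` and `.invoke`), so the snippet runs without the Compass API:

```python
from types import SimpleNamespace

def get_tool_by_name(tools, name):
    """Return the first tool whose .name matches, or raise a clear error."""
    tool = next((t for t in tools if t.name == name), None)
    if tool is None:
        available = ", ".join(t.name for t in tools)
        raise ValueError(f"Tool {name!r} not found; available: {available}")
    return tool

# Stand-ins for real tools (assumption: tools expose .name and .invoke).
tools = [
    SimpleNamespace(name="generic_ens_get", invoke=lambda args: {"ok": args}),
    SimpleNamespace(name="aave_supply", invoke=lambda args: {"ok": args}),
]

tool = get_tool_by_name(tools, "generic_ens_get")
result = tool.invoke({"ens_name": "vitalik.eth", "chain": "ethereum:mainnet"})
```

With real Compass tools, only the `tools` list changes; the lookup helper works on any sequence of LangChain tools.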

View File

@@ -16,7 +16,7 @@
"1. `pip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib`\n",
"\n",
"## Instructions for retrieving your Google Docs data\n",
"By default, the `GoogleDriveTools` and `GoogleDriveWrapper` expects the `credentials.json` file to be `~/.credentials/credentials.json`, but this is configurable using the `GOOGLE_ACCOUNT_FILE` environment variable. \n",
"By default, the `GoogleDriveTools` and `GoogleDriveWrapper` expect the `credentials.json` file to be `~/.credentials/credentials.json`, but this is configurable by setting the `GOOGLE_ACCOUNT_FILE` environment variable to your `custom/path/to/credentials.json`. \n",
"`token.json` uses the same directory by default (or set the parameter `token_path`). Note that `token.json` will be created automatically the first time you use the tool.\n",
"\n",
"`GoogleDriveSearchTool` can retrieve a selection of files with some requests. \n",
@@ -47,7 +47,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
@@ -88,7 +88,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet unstructured"
"%pip install --upgrade --quiet unstructured langchain-googledrive"
]
},
{
@@ -99,9 +99,13 @@
},
"outputs": [],
"source": [
"import os\n",
"\n",
"from langchain_googledrive.tools.google_drive.tool import GoogleDriveSearchTool\n",
"from langchain_googledrive.utilities.google_drive import GoogleDriveAPIWrapper\n",
"\n",
"os.environ[\"GOOGLE_ACCOUNT_FILE\"] = \"custom/path/to/credentials.json\"\n",
"\n",
"# By default, search only in the filename.\n",
"tool = GoogleDriveSearchTool(\n",
" api_wrapper=GoogleDriveAPIWrapper(\n",
@@ -114,7 +118,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
@@ -134,33 +138,52 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 5,
"metadata": {},
"outputs": [],
"outputs": [
{
"data": {
"text/plain": [
"\"A wrapper around Google Drive Search. Useful for when you need to find a document in google drive. The input should be formatted as a list of entities separated with a space. As an example, a list of keywords is 'hello word'.\""
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tool.description"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import load_tools\n",
"\n",
"tools = load_tools(\n",
" [\"google-drive-search\"],\n",
" folder_id=folder_id,\n",
" template=\"gdrive-query-in-folder\",\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Use within an Agent"
"## Use the tool within a ReAct agent"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In order to create an agent that uses the Google Drive tool, install LangGraph"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langgraph langchain-openai"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"and use the `create_react_agent` functionality to initialize a ReAct agent. You will also need to set your OPENAI_API_KEY (visit https://platform.openai.com) in order to access OpenAI's chat models."
]
},
{
@@ -171,32 +194,29 @@
},
"outputs": [],
"source": [
"from langchain.agents import AgentType, initialize_agent\n",
"from langchain_openai import OpenAI\n",
"import os\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"agent = initialize_agent(\n",
" tools=tools,\n",
" llm=llm,\n",
" agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"agent.run(\"Search in google drive, who is 'Yann LeCun' ?\")"
"from langchain.chat_models import init_chat_model\n",
"from langgraph.prebuilt import create_react_agent\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = \"your-openai-api-key\"\n",
"\n",
"\n",
"llm = init_chat_model(\"gpt-4o-mini\", model_provider=\"openai\", temperature=0)\n",
"agent = create_react_agent(llm, tools=[tool])\n",
"\n",
"events = agent.stream(\n",
" {\"messages\": [(\"user\", \"Search in google drive, who is 'Yann LeCun' ?\")]},\n",
" stream_mode=\"values\",\n",
")\n",
"for event in events:\n",
" event[\"messages\"][-1].pretty_print()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": "venv",
"language": "python",
"name": "python3"
},
@@ -210,7 +230,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.12.7"
}
},
"nbformat": 4,

View File

@@ -6,24 +6,11 @@
"source": [
"# Google Finance\n",
"\n",
"This notebook goes over how to use the Google Finance Tool to get information from the Google Finance page\n",
"This notebook goes over how to use the Google Finance Tool to get information from the Google Finance page.\n",
"\n",
"To get a SerpApi key, sign up at: https://serpapi.com/users/sign_up.\n",
"\n",
"Then install google-search-results with the command: \n",
"\n",
"pip install google-search-results\n",
"\n",
"Then set the environment variable SERPAPI_API_KEY to your SerpApi key\n",
"\n",
"Or pass the key in as an argument to the wrapper serp_api_key=\"your secret key\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Use the Tool"
"To use the tool with LangChain, install the following packages"
]
},
{
@@ -35,9 +22,16 @@
"%pip install --upgrade --quiet google-search-results langchain-community"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then set the environment variable SERPAPI_API_KEY to your SerpApi key or pass the key in as an argument to the wrapper serp_api_key=\"your secret key\"."
]
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
@@ -46,15 +40,26 @@
"from langchain_community.tools.google_finance import GoogleFinanceQueryRun\n",
"from langchain_community.utilities.google_finance import GoogleFinanceAPIWrapper\n",
"\n",
"os.environ[\"SERPAPI_API_KEY\"] = \"\"\n",
"os.environ[\"SERPAPI_API_KEY\"] = \"[your serpapi key]\"\n",
"tool = GoogleFinanceQueryRun(api_wrapper=GoogleFinanceAPIWrapper())"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 3,
"metadata": {},
"outputs": [],
"outputs": [
{
"data": {
"text/plain": [
"'\\nQuery: Google\\nstock: GOOGL:NASDAQ\\nprice: $161.96\\npercentage: 1.68\\nmovement: Up\\n'"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tool.run(\"Google\")"
]
@@ -63,7 +68,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Using it with Langchain"
"In order to create an agent that uses the Google Finance tool, install LangGraph"
]
},
{
@@ -71,26 +76,77 @@
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langgraph langchain-openai"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"and use the `create_react_agent` functionality to initialize a ReAct agent. You will also need to set your OPENAI_API_KEY (visit https://platform.openai.com) in order to access OpenAI's chat models."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"================================\u001b[1m Human Message \u001b[0m=================================\n",
"\n",
"What is Google's stock?\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" google_finance (call_u676mJAkdojgkW806ZGSE8mF)\n",
" Call ID: call_u676mJAkdojgkW806ZGSE8mF\n",
" Args:\n",
" query: Google\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
"Name: google_finance\n",
"\n",
"\n",
"Query: Google\n",
"stock: GOOGL:NASDAQ\n",
"price: $161.96\n",
"percentage: 1.68\n",
"movement: Up\n",
"\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"\n",
"Google's stock (Ticker: GOOGL) is currently priced at $161.96, showing an increase of 1.68%.\n"
]
}
],
"source": [
"import os\n",
"\n",
"from langchain.agents import AgentType, initialize_agent, load_tools\n",
"from langchain_openai import OpenAI\n",
"from langchain.agents import load_tools\n",
"from langchain.chat_models import init_chat_model\n",
"from langgraph.prebuilt import create_react_agent\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = \"\"\n",
"os.environ[\"SERP_API_KEY\"] = \"\"\n",
"llm = OpenAI()\n",
"os.environ[\"OPENAI_API_KEY\"] = \"[your openai key]\"\n",
"os.environ[\"SERP_API_KEY\"] = \"[your serpapi key]\"\n",
"\n",
"llm = init_chat_model(\"gpt-4o-mini\", model_provider=\"openai\")\n",
"tools = load_tools([\"google-scholar\", \"google-finance\"], llm=llm)\n",
"agent = initialize_agent(\n",
" tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True\n",
"agent = create_react_agent(llm, tools)\n",
"\n",
"events = agent.stream(\n",
" {\"messages\": [(\"user\", \"What is Google's stock?\")]},\n",
" stream_mode=\"values\",\n",
")\n",
"agent.run(\"what is google's stock\")"
"for event in events:\n",
" event[\"messages\"][-1].pretty_print()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": "venv",
"language": "python",
"name": "python3"
},
@@ -104,7 +160,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.5"
"version": "3.12.7"
}
},
"nbformat": 4,
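The raw `GoogleFinanceQueryRun` output shown above is a newline-delimited `key: value` string rather than structured data. A small parsing sketch, assuming the format in the captured output (the `raw` literal below mirrors it):

```python
def parse_finance_output(raw: str) -> dict:
    """Parse the tool's newline-delimited 'key: value' text into a dict."""
    result = {}
    for line in raw.strip().splitlines():
        if ": " in line:
            # Split on the first ': ' only, so values like 'GOOGL:NASDAQ' survive.
            key, value = line.split(": ", 1)
            result[key.strip()] = value.strip()
    return result

# Literal mirroring the captured tool output above.
raw = "\nQuery: Google\nstock: GOOGL:NASDAQ\nprice: $161.96\npercentage: 1.68\nmovement: Up\n"
info = parse_finance_output(raw)
```

Splitting on the first `": "` only is the important detail, since ticker values themselves contain colons.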

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large Load Diff

View File

@@ -12,33 +12,49 @@
"\n",
"This Jupyter Notebook demonstrates how to use the `GraphQLAPIWrapper` component with an Agent.\n",
"\n",
"In this example, we'll be using the public `Star Wars GraphQL API` available at the following endpoint: https://swapi-graphql.netlify.app/.netlify/functions/index.\n",
"In this example, we'll be using the public `Star Wars GraphQL API` available at the following endpoint: https://swapi-graphql.netlify.app/graphql .\n",
"\n",
"First, you need to install `httpx` and `gql` Python packages."
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"metadata": {
"vscode": {
"languageId": "shellscript"
}
},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"pip install httpx gql > /dev/null"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 2,
"metadata": {
"vscode": {
"languageId": "shellscript"
}
},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install --upgrade --quiet langchain-community"
]
@@ -56,21 +72,36 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import AgentType, initialize_agent, load_tools\n",
"from langchain_openai import OpenAI\n",
"import os\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"os.environ[\"OPENAI_API_KEY\"] = \"\""
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import load_tools\n",
"\n",
"tools = load_tools(\n",
" [\"graphql\"],\n",
" graphql_endpoint=\"https://swapi-graphql.netlify.app/.netlify/functions/index\",\n",
")\n",
"\n",
"agent = initialize_agent(\n",
" tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True\n",
" graphql_endpoint=\"https://swapi-graphql.netlify.app/graphql\",\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"from langgraph.prebuilt import create_react_agent\n",
"\n",
"agent = create_react_agent(\"openai:gpt-4.1-mini\", tools)"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -80,35 +111,55 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"================================\u001b[1m Human Message \u001b[0m=================================\n",
"\n",
"Search for the titles of all the star wars films stored in the graphql database that has this schema allFilms {\n",
" films {\n",
" title\n",
" director\n",
" releaseDate\n",
" speciesConnection {\n",
" species {\n",
" name\n",
" classification\n",
" homeworld {\n",
" name\n",
" }\n",
" }\n",
" }\n",
" }\n",
" }\n",
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to query the graphql database to get the titles of all the star wars films\n",
"Action: query_graphql\n",
"Action Input: query { allFilms { films { title } } }\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m\"{\\n \\\"allFilms\\\": {\\n \\\"films\\\": [\\n {\\n \\\"title\\\": \\\"A New Hope\\\"\\n },\\n {\\n \\\"title\\\": \\\"The Empire Strikes Back\\\"\\n },\\n {\\n \\\"title\\\": \\\"Return of the Jedi\\\"\\n },\\n {\\n \\\"title\\\": \\\"The Phantom Menace\\\"\\n },\\n {\\n \\\"title\\\": \\\"Attack of the Clones\\\"\\n },\\n {\\n \\\"title\\\": \\\"Revenge of the Sith\\\"\\n }\\n ]\\n }\\n}\"\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the titles of all the star wars films\n",
"Final Answer: The titles of all the star wars films are: A New Hope, The Empire Strikes Back, Return of the Jedi, The Phantom Menace, Attack of the Clones, and Revenge of the Sith.\u001b[0m\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" query_graphql (call_tN5A0dBbfOMewuw8Yy13bYpW)\n",
" Call ID: call_tN5A0dBbfOMewuw8Yy13bYpW\n",
" Args:\n",
" tool_input: query { allFilms { films { title } } }\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
"Name: query_graphql\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
"\"{\\n \\\"allFilms\\\": {\\n \\\"films\\\": [\\n {\\n \\\"title\\\": \\\"A New Hope\\\"\\n },\\n {\\n \\\"title\\\": \\\"The Empire Strikes Back\\\"\\n },\\n {\\n \\\"title\\\": \\\"Return of the Jedi\\\"\\n },\\n {\\n \\\"title\\\": \\\"The Phantom Menace\\\"\\n },\\n {\\n \\\"title\\\": \\\"Attack of the Clones\\\"\\n },\\n {\\n \\\"title\\\": \\\"Revenge of the Sith\\\"\\n }\\n ]\\n }\\n}\"\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"\n",
"The titles of all the Star Wars films stored in the database are:\n",
"1. A New Hope\n",
"2. The Empire Strikes Back\n",
"3. Return of the Jedi\n",
"4. The Phantom Menace\n",
"5. Attack of the Clones\n",
"6. Revenge of the Sith\n",
"\n",
"If you would like more information about any of these films, please let me know!\n"
]
},
{
"data": {
"text/plain": [
"'The titles of all the star wars films are: A New Hope, The Empire Strikes Back, Return of the Jedi, The Phantom Menace, Attack of the Clones, and Revenge of the Sith.'"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
@@ -133,9 +184,24 @@
"\n",
"suffix = \"Search for the titles of all the star wars films stored in the graphql database that has this schema \"\n",
"\n",
"input_message = {\n",
" \"role\": \"user\",\n",
" \"content\": suffix + graphql_fields,\n",
"}\n",
"\n",
"agent.run(suffix + graphql_fields)"
"for step in agent.stream(\n",
" {\"messages\": [input_message]},\n",
" stream_mode=\"values\",\n",
"):\n",
" step[\"messages\"][-1].pretty_print()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@@ -157,7 +223,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.13.2"
}
},
"nbformat": 4,
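The `query_graphql` observation above arrives as a JSON document serialized into a string (the captured output even shows it double-encoded). A short sketch of decoding it into film titles; the `obs` literal below is a truncated stand-in for the real tool output:

```python
import json

def decode_observation(obs: str) -> dict:
    """Decode a query_graphql observation; handles single- or double-encoded JSON."""
    data = json.loads(obs)
    if isinstance(data, str):  # double-encoded, as in the captured notebook output
        data = json.loads(data)
    return data

# Truncated stand-in mirroring the shape of the captured tool output.
obs = json.dumps(json.dumps(
    {"allFilms": {"films": [{"title": "A New Hope"},
                            {"title": "The Empire Strikes Back"}]}}
))

titles = [f["title"] for f in decode_observation(obs)["allFilms"]["films"]]
```

The `isinstance` check makes the helper safe whether the endpoint's payload was serialized once or twice before reaching the agent.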

View File

@@ -22,8 +22,26 @@
},
{
"cell_type": "code",
"execution_count": 9,
"id": "34bb5968",
"execution_count": 1,
"id": "8b81a74e-db10-4e8d-9f90-83219df30ab3",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install --upgrade --quiet pyowm"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "78ab9fcd-bb7b-434b-9a38-0a9249e35768",
"metadata": {},
"outputs": [],
"source": [
@@ -38,8 +56,8 @@
},
{
"cell_type": "code",
"execution_count": 10,
"id": "ac4910f8",
"execution_count": 3,
"id": "0a8aa4b0-6aea-4172-9546-361e127a4a02",
"metadata": {},
"outputs": [
{
@@ -47,17 +65,17 @@
"output_type": "stream",
"text": [
"In London,GB, the current weather is as follows:\n",
"Detailed status: broken clouds\n",
"Wind speed: 2.57 m/s, direction: 240°\n",
"Humidity: 55%\n",
"Detailed status: overcast clouds\n",
"Wind speed: 4.12 m/s, direction: 10°\n",
"Humidity: 51%\n",
"Temperature: \n",
" - Current: 20.12°C\n",
" - High: 21.75°C\n",
" - Low: 18.68°C\n",
" - Feels like: 19.62°C\n",
" - Current: 12.82°C\n",
" - High: 13.98°C\n",
" - Low: 12.01°C\n",
" - Feels like: 11.49°C\n",
"Rain: {}\n",
"Heat index: None\n",
"Cloud cover: 75%\n"
"Cloud cover: 100%\n"
]
}
],
@@ -76,76 +94,82 @@
},
{
"cell_type": "code",
"execution_count": 11,
"id": "b3367417",
"execution_count": 4,
"id": "402c832c-87c7-4088-b80f-ec1924a43796",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"from langchain.agents import AgentType, initialize_agent, load_tools\n",
"from langchain_openai import OpenAI\n",
"from langgraph.prebuilt import create_react_agent\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = \"\"\n",
"os.environ[\"OPENWEATHERMAP_API_KEY\"] = \"\"\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"\n",
"tools = load_tools([\"openweathermap-api\"], llm)\n",
"\n",
"agent_chain = initialize_agent(\n",
" tools=tools, llm=llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True\n",
")"
"tools = [weather.run]\n",
"agent = create_react_agent(\"openai:gpt-4.1-mini\", tools)"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "bf4f6854",
"execution_count": 5,
"id": "9b423a92-1568-4ee2-9c7d-3b9acf7756a1",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"================================\u001b[1m Human Message \u001b[0m=================================\n",
"\n",
"What's the weather like in London?\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" run (call_6vPq9neyy7oOnht29ExidE2g)\n",
" Call ID: call_6vPq9neyy7oOnht29ExidE2g\n",
" Args:\n",
" location: London\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
"Name: run\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to find out the current weather in London.\n",
"Action: OpenWeatherMap\n",
"Action Input: London,GB\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mIn London,GB, the current weather is as follows:\n",
"Detailed status: broken clouds\n",
"Wind speed: 2.57 m/s, direction: 240°\n",
"Humidity: 56%\n",
"In London, the current weather is as follows:\n",
"Detailed status: overcast clouds\n",
"Wind speed: 4.12 m/s, direction: 10°\n",
"Humidity: 51%\n",
"Temperature: \n",
" - Current: 20.11°C\n",
" - High: 21.75°C\n",
" - Low: 18.68°C\n",
" - Feels like: 19.64°C\n",
" - Current: 12.82°C\n",
" - High: 13.98°C\n",
" - Low: 12.01°C\n",
" - Feels like: 11.49°C\n",
"Rain: {}\n",
"Heat index: None\n",
"Cloud cover: 75%\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the current weather in London.\n",
"Final Answer: The current weather in London is broken clouds, with a wind speed of 2.57 m/s, direction 240°, humidity of 56%, temperature of 20.11°C, high of 21.75°C, low of 18.68°C, and a heat index of None.\u001b[0m\n",
"Cloud cover: 100%\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
"The weather in London is currently overcast with 100% cloud cover. The temperature is around 12.82°C, feeling like 11.49°C. The wind is blowing at 4.12 m/s from the direction of 10°. Humidity is at 51%. The high for the day is 13.98°C, and the low is 12.01°C.\n"
]
},
{
"data": {
"text/plain": [
"'The current weather in London is broken clouds, with a wind speed of 2.57 m/s, direction 240°, humidity of 56%, temperature of 20.11°C, high of 21.75°C, low of 18.68°C, and a heat index of None.'"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_chain.run(\"What's the weather like in London?\")"
"input_message = {\n",
" \"role\": \"user\",\n",
" \"content\": \"What's the weather like in London?\",\n",
"}\n",
"\n",
"for step in agent.stream(\n",
" {\"messages\": [input_message]},\n",
" stream_mode=\"values\",\n",
"):\n",
" step[\"messages\"][-1].pretty_print()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f2af226a-9cca-468d-b07f-0a928ea61f48",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@@ -164,7 +188,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.13.2"
}
},
"nbformat": 4,

View File

@@ -19,9 +19,9 @@
"## Overview\n",
"\n",
"### Integration details\n",
"| Class | Package | Serializable | [JS support](https://js.langchain.com/docs/integrations/tools/tavily_search) | Package latest |\n",
"| Class | Package | Serializable | [JS support](https://js.langchain.com/docs/integrations/tools/tavily_extract/) | Package latest |\n",
"|:--------------------------------------------------------------|:---------------------------------------------------------------| :---: | :---: | :---: |\n",
"| [TavilyExtract](https://github.com/tavily-ai/langchain-tavily) | [langchain-tavily](https://pypi.org/project/langchain-tavily/) | ✅ | | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-tavily?style=flat-square&label=%20) |\n",
"| [TavilyExtract](https://github.com/tavily-ai/langchain-tavily) | [langchain-tavily](https://pypi.org/project/langchain-tavily/) | ✅ | | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-tavily?style=flat-square&label=%20) |\n",
"\n",
"### Tool features\n",
"| [Returns artifact](/docs/how_to/tool_artifacts/) | Native async | Return data | Pricing |\n",

View File

@@ -20,7 +20,7 @@
"### Integration details\n",
"| Class | Package | Serializable | [JS support](https://js.langchain.com/docs/integrations/tools/tavily_search) | Package latest |\n",
"|:--------------------------------------------------------------|:---------------------------------------------------------------| :---: | :---: | :---: |\n",
"| [TavilySearch](https://github.com/tavily-ai/langchain-tavily) | [langchain-tavily](https://pypi.org/project/langchain-tavily/) | ✅ | | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-tavily?style=flat-square&label=%20) |\n",
"| [TavilySearch](https://github.com/tavily-ai/langchain-tavily) | [langchain-tavily](https://pypi.org/project/langchain-tavily/) | ✅ | | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-tavily?style=flat-square&label=%20) |\n",
"\n",
"### Tool features\n",
"| [Returns artifact](/docs/how_to/tool_artifacts/) | Native async | Return data | Pricing |\n",

File diff suppressed because one or more lines are too long

View File

@@ -17,10 +17,18 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"id": "38717a85-2c3c-4452-a1c7-1ed4dea3da86",
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install --upgrade --quiet yfinance"
]
@@ -35,121 +43,136 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 2,
"id": "d137dd6c-d3d3-4813-af65-59eaaa6b3d76",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = \"...\""
"os.environ[\"OPENAI_API_KEY\"] = \"..\""
]
},
{
"cell_type": "code",
"execution_count": 26,
"id": "fc42f766-9ce6-4ba3-be6c-5ba8a345b0d3",
"execution_count": null,
"id": "af297977-4fc3-421f-9ce1-f62c1c5b026a",
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"USER_AGENT environment variable not set, consider setting it to identify your requests.\n"
]
}
],
"source": [
"from langchain.agents import AgentType, initialize_agent\n",
"from langchain_community.tools.yahoo_finance_news import YahooFinanceNewsTool\n",
"from langchain_openai import ChatOpenAI\n",
"from langgraph.prebuilt import create_react_agent\n",
"\n",
"llm = ChatOpenAI(temperature=0.0)\n",
"tools = [YahooFinanceNewsTool()]\n",
"agent_chain = initialize_agent(\n",
" tools,\n",
" llm,\n",
" agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,\n",
" verbose=True,\n",
")"
"agent = create_react_agent(\"openai:gpt-4.1-mini\", tools)"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "3d1614b4-508e-4689-84b1-2a387f80aeb1",
"execution_count": 4,
"id": "ac3cbec8-4135-4f5a-bb35-299730c000bd",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"================================\u001b[1m Human Message \u001b[0m=================================\n",
"\n",
"What happened today with Microsoft stocks?\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" yahoo_finance_news (call_s1Waj1rAoJ89CfxWX1RWDiWL)\n",
" Call ID: call_s1Waj1rAoJ89CfxWX1RWDiWL\n",
" Args:\n",
" query: MSFT\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
"Name: yahoo_finance_news\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mI should check the latest financial news about Microsoft stocks.\n",
"Action: yahoo_finance_news\n",
"Action Input: MSFT\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mMicrosoft (MSFT) Gains But Lags Market: What You Should Know\n",
"In the latest trading session, Microsoft (MSFT) closed at $328.79, marking a +0.12% move from the previous day.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mI have the latest information on Microsoft stocks.\n",
"Final Answer: Microsoft (MSFT) closed at $328.79, with a +0.12% move from the previous day.\u001b[0m\n",
"Microsoft (MSFT), Meta Platforms (META) Reported “Home Run” Results: Dan Ives Recent Comments\n",
"Microsoft (MSFT) and Meta Platforms (META) delivered “home run” results yesterday, as the AI Revolution has not been slowed by the Trump administrations tariffs, Dan Ives, the Managing Director and Senior Equity Research Analyst at Wedbush Securities said on CNBC recently. Ives covers tech stocks. Tech Is Poised for a Comeback, Ives Indicates “The tech […]\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
"Today, Microsoft (MSFT) reported strong financial results, described as \"home run\" results by Dan Ives, Managing Director and Senior Equity Research Analyst at Wedbush Securities. Despite the Trump administrations tariffs, the AI Revolution driving tech stocks like Microsoft has not slowed down, indicating a positive outlook for the company and the tech sector overall.\n"
]
},
{
"data": {
"text/plain": [
"'Microsoft (MSFT) closed at $328.79, with a +0.12% move from the previous day.'"
]
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_chain.invoke(\n",
" \"What happened today with Microsoft stocks?\",\n",
")"
"input_message = {\n",
" \"role\": \"user\",\n",
" \"content\": \"What happened today with Microsoft stocks?\",\n",
"}\n",
"\n",
"for step in agent.stream(\n",
" {\"messages\": [input_message]},\n",
" stream_mode=\"values\",\n",
"):\n",
" step[\"messages\"][-1].pretty_print()"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "c899b64d-86a5-452c-b576-e94f485c27ea",
"execution_count": 5,
"id": "4496b06b-8b57-4fa8-9b86-4db407caa807",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"================================\u001b[1m Human Message \u001b[0m=================================\n",
"\n",
"How does Microsoft feels today comparing with Nvidia?\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" yahoo_finance_news (call_r9m4YxdEqWeXotkNgK8jGzeJ)\n",
" Call ID: call_r9m4YxdEqWeXotkNgK8jGzeJ\n",
" Args:\n",
" query: MSFT\n",
" yahoo_finance_news (call_fxj3AIKPB4MYuquvFFWrBD8B)\n",
" Call ID: call_fxj3AIKPB4MYuquvFFWrBD8B\n",
" Args:\n",
" query: NVDA\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
"Name: yahoo_finance_news\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mI should compare the current sentiment of Microsoft and Nvidia.\n",
"Action: yahoo_finance_news\n",
"Action Input: MSFT\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mMicrosoft (MSFT) Gains But Lags Market: What You Should Know\n",
"In the latest trading session, Microsoft (MSFT) closed at $328.79, marking a +0.12% move from the previous day.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mI need to find the current sentiment of Nvidia as well.\n",
"Action: yahoo_finance_news\n",
"Action Input: NVDA\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mI now know the current sentiment of both Microsoft and Nvidia.\n",
"Final Answer: I cannot compare the sentiment of Microsoft and Nvidia as I only have information about Microsoft.\u001b[0m\n",
"NVIDIA Corporation (NVDA): Among Ken Fisher's Technology Stock Picks with Huge Upside Potential\n",
"We recently published an article titled Billionaire Ken Fisher's 10 Technology Stock Picks with Huge Upside Potential. In this article, we are going to take a look at where NVIDIA Corporation (NASDAQ:NVDA) stands against the other technology stocks. Technology stocks have faced heightened volatility in 2025, with market sentiment swinging sharply in response to President Donald […]\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
"Nvidia (NVDA) Redesigns Chips to Sidestep U.S. Export Ban, Eyes June China Rollout\n",
"Nvidia plans China-specific AI chip revamp after new U.S. export limits\n",
"\n",
"Is NVIDIA (NVDA) the Best NASDAQ Stock to Buy According to Billionaires?\n",
"We recently published a list of 10 Best NASDAQ Stocks to Buy According to Billionaires. In this article, we are going to take a look at where NVIDIA Corporation (NASDAQ:NVDA) stands against other best NASDAQ stocks to buy according to billionaires. The latest market data shows that the US economy contracted at an annualized rate […]\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"\n",
"Today, Microsoft (MSFT) is viewed positively with recent strong earnings reported, described as \"home run\" results, indicating confidence in its performance amid an ongoing AI revolution.\n",
"\n",
"Nvidia (NVDA) is also in focus with its strategic moves, such as redesigning AI chips to bypass U.S. export bans and targeting a China rollout. It is considered one of the technology stocks with significant upside potential, attracting attention from notable investors.\n",
"\n",
"In summary, both Microsoft and Nvidia have positive sentiments today, with Microsoft showing strong financial results and Nvidia making strategic advancements in AI technology and market positioning.\n"
]
},
{
"data": {
"text/plain": [
"'I cannot compare the sentiment of Microsoft and Nvidia as I only have information about Microsoft.'"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_chain.invoke(\n",
" \"How does Microsoft feels today comparing with Nvidia?\",\n",
")"
"input_message = {\n",
" \"role\": \"user\",\n",
" \"content\": \"How does Microsoft feels today comparing with Nvidia?\",\n",
"}\n",
"\n",
"for step in agent.stream(\n",
" {\"messages\": [input_message]},\n",
" stream_mode=\"values\",\n",
"):\n",
" step[\"messages\"][-1].pretty_print()"
]
},
{
@@ -162,7 +185,7 @@
},
{
"cell_type": "code",
"execution_count": 37,
"execution_count": 6,
"id": "7879b79c-b5c7-4a5d-8338-edda53ff41a6",
"metadata": {},
"outputs": [],
@@ -172,17 +195,17 @@
},
{
"cell_type": "code",
"execution_count": 38,
"execution_count": 7,
"id": "ac989456-33bc-4478-874e-98b9cb24d113",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'No news found for company that searched with NVDA ticker.'"
"'NVIDIA Corporation (NVDA): Among Ken Fisher's Technology Stock Picks with Huge Upside Potential\\nWe recently published an article titled Billionaire Ken Fisher's 10 Technology Stock Picks with Huge Upside Potential. In this article, we are going to take a look at where NVIDIA Corporation (NASDAQ:NVDA) stands against the other technology stocks. Technology stocks have faced heightened volatility in 2025, with market sentiment swinging sharply in response to President Donald […]\\n\\nNvidia (NVDA) Redesigns Chips to Sidestep U.S. Export Ban, Eyes June China Rollout\\nNvidia plans China-specific AI chip revamp after new U.S. export limits\\n\\nIs NVIDIA (NVDA) the Best NASDAQ Stock to Buy According to Billionaires?\\nWe recently published a list of 10 Best NASDAQ Stocks to Buy According to Billionaires. In this article, we are going to take a look at where NVIDIA Corporation (NASDAQ:NVDA) stands against other best NASDAQ stocks to buy according to billionaires. The latest market data shows that the US economy contracted at an annualized rate […]'"
]
},
"execution_count": 38,
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
@@ -193,7 +216,7 @@
},
{
"cell_type": "code",
"execution_count": 40,
"execution_count": 8,
"id": "46c697aa-102e-48d4-9834-081671aad40a",
"metadata": {},
"outputs": [
@@ -201,11 +224,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Top Research Reports for Apple, Broadcom & Caterpillar\n",
"Today's Research Daily features new research reports on 16 major stocks, including Apple Inc. (AAPL), Broadcom Inc. (AVGO) and Caterpillar Inc. (CAT).\n",
"\n",
"Apple Stock on Pace for Worst Month of the Year\n",
"Apple (AAPL) shares are on pace for their worst month of the year, according to Dow Jones Market Data. The stock is down 4.8% so far in August, putting it on pace for its worst month since December 2022, when it fell 12%.\n"
"No news found for company that searched with AAPL ticker.\n"
]
}
],
@@ -239,7 +258,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.13.2"
}
},
"nbformat": 4,


@@ -7,9 +7,11 @@
"source": [
"# Astra DB Vector Store\n",
"\n",
"This page provides a quickstart for using [Astra DB](https://docs.datastax.com/en/astra/home/astra.html) as a Vector Store.\n",
"This page provides a quickstart for using Astra DB as a Vector Store.\n",
"\n",
"> DataStax [Astra DB](https://docs.datastax.com/en/astra/home/astra.html) is a serverless vector-capable database built on Apache Cassandra® and made conveniently available through an easy-to-use JSON API.\n",
"> [DataStax Astra DB](https://docs.datastax.com/en/astra-db-serverless/index.html) is a serverless \n",
"> AI-ready database built on `Apache Cassandra®` and made conveniently available \n",
"> through an easy-to-use JSON API.\n",
"\n",
"## Setup"
]
@@ -19,6 +21,8 @@
"id": "dbe7c156-0413-47e3-9237-4769c4248869",
"metadata": {},
"source": [
"### Dependencies\n",
"\n",
"Use of the integration requires the `langchain-astradb` partner package:"
]
},
@@ -26,10 +30,15 @@
"cell_type": "code",
"execution_count": null,
"id": "8d00fcf4-9798-4289-9214-d9734690adfc",
"metadata": {},
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"pip install -qU \"langchain-astradb>=0.3.3\""
"!pip install \\\n",
" \"langchain>=0.3.23,<0.4\" \\\n",
" \"langchain-core>=0.3.52,<0.4\" \\\n",
" \"langchain-astradb>=0.6,<0.7\""
]
},
{
@@ -41,30 +50,40 @@
"\n",
"In order to use the AstraDB vector store, you must first head to the [AstraDB website](https://astra.datastax.com), create an account, and then create a new database - the initialization might take a few minutes. \n",
"\n",
"Once the database has been initialized, you should [create an application token](https://docs.datastax.com/en/astra-db-serverless/administration/manage-application-tokens.html#generate-application-token) and save it for later use. \n",
"Once the database has been initialized, retrieve your [connection secrets](https://docs.datastax.com/en/astra-db-serverless/get-started/quickstart.html#create-a-database-and-store-your-credentials), which you'll need momentarily. These are:\n",
"- an **`API Endpoint`**, such as `\"https://01234567-89ab-cdef-0123-456789abcdef-us-east1.apps.astra.datastax.com/\"`\n",
"- and a **`Database Token`**, e.g. `\"AstraCS:aBcD123......\"`\n",
"\n",
"You will also want to copy the `API Endpoint` from the `Database Details` and store that in the `ASTRA_DB_API_ENDPOINT` variable.\n",
"\n",
"You may optionally provide a namespace, which you can manage from the `Data Explorer` tab of your database dashboard. If you don't wish to set a namespace, you can leave the `getpass` prompt for `ASTRA_DB_NAMESPACE` empty."
"You may optionally provide a **`keyspace`** (called \"namespace\" in the LangChain components), which you can manage from the `Data Explorer` tab of your database dashboard. If you wish, you can leave it empty in the prompt below and fall back to a default keyspace."
]
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 1,
"id": "b7843c22",
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdin",
"output_type": "stream",
"text": [
"ASTRA_DB_API_ENDPOINT = https://01234567-89ab-cdef-0123-456789abcdef-us-east1.apps.astra.datastax.com\n",
"ASTRA_DB_APPLICATION_TOKEN = ········\n",
"(optional) ASTRA_DB_KEYSPACE = \n"
]
}
],
"source": [
"import getpass\n",
"\n",
"ASTRA_DB_API_ENDPOINT = getpass.getpass(\"ASTRA_DB_API_ENDPOINT = \")\n",
"ASTRA_DB_APPLICATION_TOKEN = getpass.getpass(\"ASTRA_DB_APPLICATION_TOKEN = \")\n",
"ASTRA_DB_API_ENDPOINT = input(\"ASTRA_DB_API_ENDPOINT = \").strip()\n",
"ASTRA_DB_APPLICATION_TOKEN = getpass.getpass(\"ASTRA_DB_APPLICATION_TOKEN = \").strip()\n",
"\n",
"desired_namespace = getpass.getpass(\"ASTRA_DB_NAMESPACE = \")\n",
"if desired_namespace:\n",
" ASTRA_DB_NAMESPACE = desired_namespace\n",
"desired_keyspace = input(\"(optional) ASTRA_DB_KEYSPACE = \").strip()\n",
"if desired_keyspace:\n",
" ASTRA_DB_KEYSPACE = desired_keyspace\n",
"else:\n",
" ASTRA_DB_NAMESPACE = None"
" ASTRA_DB_KEYSPACE = None"
]
},
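For non-interactive contexts (scripts, CI), the same settings can come from environment variables instead of prompts. A small sketch — the variable names mirror the prompt labels above, and the empty-keyspace fallback behaves like the cell's `if desired_keyspace:` branch:

```python
import os


def load_astra_settings():
    """Read Astra DB connection settings from the environment; an unset or
    empty keyspace falls back to None (i.e. the database's default keyspace)."""
    keyspace = os.environ.get("ASTRA_DB_KEYSPACE", "").strip()
    return {
        "api_endpoint": os.environ["ASTRA_DB_API_ENDPOINT"].strip(),
        "token": os.environ["ASTRA_DB_APPLICATION_TOKEN"].strip(),
        "namespace": keyspace or None,
    }
```

The returned dict maps directly onto the `api_endpoint`, `token`, and `namespace` constructor arguments used below.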
{
@@ -77,7 +96,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 2,
"id": "3cb739c0",
"metadata": {},
"outputs": [],
@@ -93,28 +112,46 @@
"source": [
"## Initialization\n",
"\n",
"There are two ways to create an Astra DB vector store, which differ in how the embeddings are computed.\n",
"There are various ways to create an Astra DB vector store:\n",
"\n",
"#### Method 1: Explicit embeddings\n",
"\n",
"You can separately instantiate a `langchain_core.embeddings.Embeddings` class and pass it to the `AstraDBVectorStore` constructor, just like with most other LangChain vector stores.\n",
"\n",
"#### Method 2: Integrated embedding computation\n",
"#### Method 2: Server-side embeddings ('vectorize')\n",
"\n",
"Alternatively, you can use the [Vectorize](https://www.datastax.com/blog/simplifying-vector-embedding-generation-with-astra-vectorize) feature of Astra DB and simply specify the name of a supported embedding model when creating the store. The embedding computations are entirely handled within the database. (To proceed with this method, you must have enabled the desired embedding integration for your database, as described [in the docs](https://docs.datastax.com/en/astra-db-serverless/databases/embedding-generation.html).)\n",
"Alternatively, you can use the [server-side embedding computation](https://docs.datastax.com/en/astra-db-serverless/databases/embedding-generation.html) feature of Astra DB ('vectorize') and simply specify an embedding model when creating the server infrastructure for the store. The embedding computations will then be entirely handled within the database in subsequent read and write operations. (To proceed with this method, you must have enabled the desired embedding integration for your database, as described [in the docs](https://docs.datastax.com/en/astra-db-serverless/databases/embedding-generation.html).)\n",
"\n",
"### Explicit Embedding Initialization\n",
"#### Method 3: Auto-detect from a pre-existing collection\n",
"\n",
"Below, we instantiate our vector store using the explicit embedding class:\n",
"You may already have a [collection](https://docs.datastax.com/en/astra-db-serverless/api-reference/collections.html) in your Astra DB, possibly pre-populated with data through other means (e.g. via the Astra UI or a third-party application), and just want to start querying it within LangChain. In this case, the right approach is to enable the `autodetect_collection` mode in the vector store constructor and let the class figure out the details. (Of course, if your collection has no 'vectorize', you still need to provide an `Embeddings` object).\n",
"\n",
"#### A note on \"hybrid search\"\n",
"\n",
"Astra DB vector stores support metadata search in vector searches; furthermore, version 0.6 introduced full support for _hybrid search_ through the [findAndRerank](https://docs.datastax.com/en/astra-db-serverless/api-reference/document-methods/find-and-rerank.html) database primitive: documents are retrieved from both a vector-similarity _and_ a keyword-based (\"lexical\") search, and are then merged through a reranker model. This search strategy, entirely handled on server-side, can boost the accuracy of your results, thus improving the quality of your RAG application. Whenever available, hybrid search is used automatically by the vector store (though you can exert manual control over it if you wish to do so).\n",
"\n",
"#### Additional information\n",
"\n",
"The `AstraDBVectorStore` can be configured in many ways; see the [API Reference](https://python.langchain.com/api_reference/astradb/vectorstores/langchain_astradb.vectorstores.AstraDBVectorStore.html) for a full guide covering e.g. asynchronous initialization; non-Astra-DB databases; custom indexing allow-/deny-lists; manual hybrid-search control; and much more."
]
},
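To build intuition for the merge step in find-and-rerank, the sketch below fuses a vector-ranked and a lexical-ranked list of document IDs using reciprocal rank fusion. This is only an illustration of combining the two rankings — Astra DB's actual hybrid search reranks candidates server-side with a reranker model, not with RRF:

```python
def reciprocal_rank_fusion(vector_hits, lexical_hits, k=60):
    """Fuse two ranked lists of document IDs: each ID scores 1/(k + rank)
    per list it appears in, so IDs found by both retrievers rise to the top."""
    scores = {}
    for hits in (vector_hits, lexical_hits):
        for rank, doc_id in enumerate(hits, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

The key property — a document retrieved by both the similarity and the keyword search outranks one retrieved by only one of them — is what makes the hybrid strategy boost result accuracy.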
{
"cell_type": "markdown",
"id": "8d7e33e0-f948-47b5-a9c2-6407fdde170e",
"metadata": {},
"source": [
"### Explicit embedding initialization (method 1)\n",
"\n",
"Instantiate our vector store using an explicit embedding class:\n",
"\n",
"import EmbeddingTabs from \"@theme/EmbeddingTabs\";\n",
"\n",
"<EmbeddingTabs/>\n"
"<EmbeddingTabs/>"
]
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 3,
"id": "d71a1dcb",
"metadata": {},
"outputs": [],
@@ -128,19 +165,19 @@
},
{
"cell_type": "code",
"execution_count": 22,
"execution_count": 4,
"id": "0b32730d-176e-414c-9d91-fd3644c54211",
"metadata": {},
"outputs": [],
"source": [
"from langchain_astradb import AstraDBVectorStore\n",
"\n",
"vector_store = AstraDBVectorStore(\n",
"vector_store_explicit_embeddings = AstraDBVectorStore(\n",
" collection_name=\"astra_vector_langchain\",\n",
" embedding=embeddings,\n",
" api_endpoint=ASTRA_DB_API_ENDPOINT,\n",
" token=ASTRA_DB_APPLICATION_TOKEN,\n",
" namespace=ASTRA_DB_NAMESPACE,\n",
" namespace=ASTRA_DB_KEYSPACE,\n",
")"
]
},
@@ -149,26 +186,26 @@
"id": "84a1fe85-a42c-4f15-92e1-f79f1dd43ea2",
"metadata": {},
"source": [
"### Integrated Embedding Initialization\n",
"### Server-side embedding initialization (\"vectorize\", method 2)\n",
"\n",
"Here it is assumed that you have\n",
"In this example code, it is assumed that you have\n",
"\n",
"- Enabled the OpenAI integration in your Astra DB organization,\n",
"- Added an API Key named `\"OPENAI_API_KEY\"` to the integration, and scoped it to the database you are using.\n",
"\n",
"For more details on how to do this, please consult the [documentation](https://docs.datastax.com/en/astra-db-serverless/integrations/embedding-providers/openai.html)."
"For more details, including instructions to switch provider/model, please consult the [documentation](https://docs.datastax.com/en/astra-db-serverless/databases/embedding-generation.html)."
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 5,
"id": "9d18455d-3fa6-4f9e-b687-3a2bc71c9a23",
"metadata": {},
"outputs": [],
"source": [
"from astrapy.info import CollectionVectorServiceOptions\n",
"from astrapy.info import VectorServiceOptions\n",
"\n",
"openai_vectorize_options = CollectionVectorServiceOptions(\n",
"openai_vectorize_options = VectorServiceOptions(\n",
" provider=\"openai\",\n",
" model_name=\"text-embedding-3-small\",\n",
" authentication={\n",
@@ -176,125 +213,183 @@
" },\n",
")\n",
"\n",
"vector_store_integrated = AstraDBVectorStore(\n",
" collection_name=\"astra_vector_langchain_integrated\",\n",
"vector_store_integrated_embeddings = AstraDBVectorStore(\n",
" collection_name=\"astra_vectorize_langchain\",\n",
" api_endpoint=ASTRA_DB_API_ENDPOINT,\n",
" token=ASTRA_DB_APPLICATION_TOKEN,\n",
" namespace=ASTRA_DB_NAMESPACE,\n",
" namespace=ASTRA_DB_KEYSPACE,\n",
" collection_vector_service_options=openai_vectorize_options,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "24508a60-9591-4b24-a9b7-ecc90ed71b68",
"metadata": {},
"source": [
"### Auto-detect initialization (method 3)\n",
"\n",
"You can use this pattern if the collection already exists on the database and your `AstraDBVectorStore` needs to use it (for reads and writes). The LangChain component will inspect the collection and figure out the details.\n",
"\n",
"This is the recommended approach if the collection has been created and -- most importantly -- populated by tools other than LangChain, for example if the data has been ingested through the Astra DB Web interface.\n",
"\n",
"Auto-detect mode cannot coexist with _collection_ settings (such as the similarity metric); on the other hand, if no server-side embeddings are employed, one still needs to pass an `Embeddings` object to the constructor.\n",
"\n",
"In the following example code, we will \"auto-detect\" the very same collection that was created by method 2 above (\"vectorize\"). Hence, no `Embeddings` object needs to be supplied."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "683b0f6e-884f-4a09-bc3a-454bb1eefd30",
"metadata": {},
"outputs": [],
"source": [
"vector_store_autodetected = AstraDBVectorStore(\n",
" collection_name=\"astra_vectorize_langchain\",\n",
" api_endpoint=ASTRA_DB_API_ENDPOINT,\n",
" token=ASTRA_DB_APPLICATION_TOKEN,\n",
" namespace=ASTRA_DB_KEYSPACE,\n",
" autodetect_collection=True,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "fbcfe8e8-2f4e-4fc7-a332-7a2fa2c401bf",
"metadata": {},
"source": [
"## Manage vector store\n",
"\n",
"Once you have created your vector store, interact with it by adding and deleting different items.\n",
"\n",
"All interactions with the vector store work the same way regardless of the initialization method: if you wish, **adapt the following cell** to select the vector store you want to put to the test."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "54d63f59-1e6b-49b4-a7c1-ac7717c92ac0",
"metadata": {},
"outputs": [],
"source": [
"# If desired, uncomment a different line here:\n",
"\n",
"# vector_store = vector_store_explicit_embeddings\n",
"vector_store = vector_store_integrated_embeddings\n",
"# vector_store = vector_store_autodetected"
]
},
{
"cell_type": "markdown",
"id": "d3796b39",
"metadata": {},
"source": [
"## Manage vector store\n",
"\n",
"Once you have created your vector store, you can interact with it by adding and deleting different items.\n",
"\n",
"### Add items to vector store\n",
"\n",
"We can add items to our vector store by using the `add_documents` function."
"Add documents to the vector store by using the `add_documents` method.\n",
"\n",
"_The \"id\" field can be supplied separately, in a matching `ids=[...]` parameter to `add_documents`, or even left out entirely to let the store generate IDs._"
]
},
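The ID precedence described above can be pictured with an illustrative, stdlib-only sketch (plain dicts stand in for `Document` objects; consult the API reference for the store's exact behavior): an explicit `ids=[...]` argument wins, then a document's own `id`, else an ID is generated.

```python
import uuid


def resolve_ids(documents, ids=None):
    """Illustrative ID resolution for add_documents: the explicit ids list
    first, then the id carried by each document, else a freshly generated UUID."""
    resolved = []
    for position, doc in enumerate(documents):
        if ids is not None:
            resolved.append(ids[position])
        elif doc.get("id") is not None:
            resolved.append(doc["id"])
        else:
            resolved.append(str(uuid.uuid4()))
    return resolved
```

This mirrors why the cell below can attach `id="entry_00"` and friends directly to each `Document` and omit the `ids=` parameter entirely.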
{
"cell_type": "code",
"execution_count": 23,
"execution_count": 8,
"id": "afb3e155",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[UUID('89a5cea1-5f3d-47c1-89dc-7e36e12cf4de'),\n",
" UUID('d4e78c48-f954-4612-8a38-af22923ba23b'),\n",
" UUID('058e4046-ded0-4fc1-b8ac-60e5a5f08ea0'),\n",
" UUID('50ab2a9a-762c-4b78-b102-942a86d77288'),\n",
" UUID('1da5a3c1-ba51-4f2f-aaaf-79a8f5011ce3'),\n",
" UUID('f3055d9e-2eb1-4d25-838e-2c70548f91b5'),\n",
" UUID('4bf0613d-08d0-4fbc-a43c-4955e4c9e616'),\n",
" UUID('18008625-8fd4-45c2-a0d7-92a2cde23dbc'),\n",
" UUID('c712e06f-790b-4fd4-9040-7ab3898965d0'),\n",
" UUID('a9b84820-3445-4810-a46c-e77b76ab85bc')]"
"['entry_00',\n",
" 'entry_01',\n",
" 'entry_02',\n",
" 'entry_03',\n",
" 'entry_04',\n",
" 'entry_05',\n",
" 'entry_06',\n",
" 'entry_07',\n",
" 'entry_08',\n",
" 'entry_09',\n",
" 'entry_10']"
]
},
"execution_count": 23,
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from uuid import uuid4\n",
"\n",
"from langchain_core.documents import Document\n",
"\n",
"document_1 = Document(\n",
" page_content=\"I had chocalate chip pancakes and scrambled eggs for breakfast this morning.\",\n",
" metadata={\"source\": \"tweet\"},\n",
")\n",
"\n",
"document_2 = Document(\n",
" page_content=\"The weather forecast for tomorrow is cloudy and overcast, with a high of 62 degrees.\",\n",
" metadata={\"source\": \"news\"},\n",
")\n",
"\n",
"document_3 = Document(\n",
" page_content=\"Building an exciting new project with LangChain - come check it out!\",\n",
" metadata={\"source\": \"tweet\"},\n",
")\n",
"\n",
"document_4 = Document(\n",
" page_content=\"Robbers broke into the city bank and stole $1 million in cash.\",\n",
" metadata={\"source\": \"news\"},\n",
")\n",
"\n",
"document_5 = Document(\n",
" page_content=\"Wow! That was an amazing movie. I can't wait to see it again.\",\n",
" metadata={\"source\": \"tweet\"},\n",
")\n",
"\n",
"document_6 = Document(\n",
" page_content=\"Is the new iPhone worth the price? Read this review to find out.\",\n",
" metadata={\"source\": \"website\"},\n",
")\n",
"\n",
"document_7 = Document(\n",
" page_content=\"The top 10 soccer players in the world right now.\",\n",
" metadata={\"source\": \"website\"},\n",
")\n",
"\n",
"document_8 = Document(\n",
" page_content=\"LangGraph is the best framework for building stateful, agentic applications!\",\n",
" metadata={\"source\": \"tweet\"},\n",
")\n",
"\n",
"document_9 = Document(\n",
" page_content=\"The stock market is down 500 points today due to fears of a recession.\",\n",
" metadata={\"source\": \"news\"},\n",
")\n",
"\n",
"document_10 = Document(\n",
" page_content=\"I have a bad feeling I am going to get deleted :(\",\n",
" metadata={\"source\": \"tweet\"},\n",
")\n",
"\n",
"documents = [\n",
" document_1,\n",
" document_2,\n",
" document_3,\n",
" document_4,\n",
" document_5,\n",
" document_6,\n",
" document_7,\n",
" document_8,\n",
" document_9,\n",
" document_10,\n",
"documents_to_insert = [\n",
" Document(\n",
" page_content=\"ZYX, just another tool in the world, is actually my agent-based superhero\",\n",
" metadata={\"source\": \"tweet\"},\n",
" id=\"entry_00\",\n",
" ),\n",
" Document(\n",
" page_content=\"I had chocolate chip pancakes and scrambled eggs \"\n",
" \"for breakfast this morning.\",\n",
" metadata={\"source\": \"tweet\"},\n",
" id=\"entry_01\",\n",
" ),\n",
" Document(\n",
" page_content=\"The weather forecast for tomorrow is cloudy and \"\n",
" \"overcast, with a high of 62 degrees.\",\n",
" metadata={\"source\": \"news\"},\n",
" id=\"entry_02\",\n",
" ),\n",
" Document(\n",
" page_content=\"Building an exciting new project with LangChain \"\n",
" \"- come check it out!\",\n",
" metadata={\"source\": \"tweet\"},\n",
" id=\"entry_03\",\n",
" ),\n",
" Document(\n",
" page_content=\"Robbers broke into the city bank and stole \"\n",
" \"$1 million in cash.\",\n",
" metadata={\"source\": \"news\"},\n",
" id=\"entry_04\",\n",
" ),\n",
" Document(\n",
" page_content=\"Thanks to her sophisticated language skills, the agent \"\n",
" \"managed to extract strategic information all right.\",\n",
" metadata={\"source\": \"tweet\"},\n",
" id=\"entry_05\",\n",
" ),\n",
" Document(\n",
" page_content=\"Is the new iPhone worth the price? Read this \"\n",
" \"review to find out.\",\n",
" metadata={\"source\": \"website\"},\n",
" id=\"entry_06\",\n",
" ),\n",
" Document(\n",
" page_content=\"The top 10 soccer players in the world right now.\",\n",
" metadata={\"source\": \"website\"},\n",
" id=\"entry_07\",\n",
" ),\n",
" Document(\n",
" page_content=\"LangGraph is the best framework for building stateful, \"\n",
" \"agentic applications!\",\n",
" metadata={\"source\": \"tweet\"},\n",
" id=\"entry_08\",\n",
" ),\n",
" Document(\n",
" page_content=\"The stock market is down 500 points today due to \"\n",
" \"fears of a recession.\",\n",
" metadata={\"source\": \"news\"},\n",
" id=\"entry_09\",\n",
" ),\n",
" Document(\n",
" page_content=\"I have a bad feeling I am going to get deleted :(\",\n",
" metadata={\"source\": \"tweet\"},\n",
" id=\"entry_10\",\n",
" ),\n",
"]\n",
"uuids = [str(uuid4()) for _ in range(len(documents))]\n",
"\n",
"vector_store.add_documents(documents=documents, ids=uuids)"
"\n",
"vector_store.add_documents(documents=documents_to_insert)"
]
},
{
@@ -304,12 +399,12 @@
"source": [
"### Delete items from vector store\n",
"\n",
"We can delete items from our vector store by ID by using the `delete` function."
"Delete items by ID by using the `delete` function."
]
},
{
"cell_type": "code",
"execution_count": 24,
"execution_count": 9,
"id": "d3f69315",
"metadata": {},
"outputs": [
@@ -319,13 +414,13 @@
"True"
]
},
"execution_count": 24,
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"vector_store.delete(ids=uuids[-1])"
"vector_store.delete(ids=[\"entry_10\", \"entry_02\"])"
]
},
{
@@ -333,20 +428,20 @@
"id": "d12e1a07",
"metadata": {},
"source": [
"## Query vector store\n",
"## Query the vector store\n",
"\n",
"Once your vector store has been created and the relevant documents have been added you will most likely wish to query it during the running of your chain or agent. \n",
"Once the vector store is created and populated, you can query it (e.g. as part of your chain or agent).\n",
"\n",
"### Query directly\n",
"\n",
"#### Similarity search\n",
"\n",
"Performing a simple similarity search with filtering on metadata can be done as follows:"
"Search for documents similar to a provided text, with additional metadata filters if desired:"
]
},
{
"cell_type": "code",
"execution_count": 15,
"execution_count": 10,
"id": "770b3467",
"metadata": {},
"outputs": [
@@ -354,19 +449,20 @@
"name": "stdout",
"output_type": "stream",
"text": [
"* Building an exciting new project with LangChain - come check it out! [{'source': 'tweet'}]\n",
"* LangGraph is the best framework for building stateful, agentic applications! [{'source': 'tweet'}]\n"
"* \"Building an exciting new project with LangChain - come check it out!\", metadata={'source': 'tweet'}\n",
"* \"LangGraph is the best framework for building stateful, agentic applications!\", metadata={'source': 'tweet'}\n",
"* \"Thanks to her sophisticated language skills, the agent managed to extract strategic information all right.\", metadata={'source': 'tweet'}\n"
]
}
],
"source": [
"results = vector_store.similarity_search(\n",
" \"LangChain provides abstractions to make working with LLMs easy\",\n",
" k=2,\n",
" k=3,\n",
" filter={\"source\": \"tweet\"},\n",
")\n",
"for res in results:\n",
" print(f\"* {res.page_content} [{res.metadata}]\")"
" print(f'* \"{res.page_content}\", metadata={res.metadata}')"
]
},
{
@@ -376,12 +472,12 @@
"source": [
"#### Similarity search with score\n",
"\n",
"You can also search with score:"
"You can return the similarity score as well:"
]
},
{
"cell_type": "code",
"execution_count": 16,
"execution_count": 11,
"id": "5924309a",
"metadata": {},
"outputs": [
@@ -389,16 +485,69 @@
"name": "stdout",
"output_type": "stream",
"text": [
"* [SIM=0.776585] The weather forecast for tomorrow is cloudy and overcast, with a high of 62 degrees. [{'source': 'news'}]\n"
"* [SIM=0.71] \"Building an exciting new project with LangChain - come check it out!\", metadata={'source': 'tweet'}\n",
"* [SIM=0.70] \"LangGraph is the best framework for building stateful, agentic applications!\", metadata={'source': 'tweet'}\n",
"* [SIM=0.61] \"Thanks to her sophisticated language skills, the agent managed to extract strategic information all right.\", metadata={'source': 'tweet'}\n"
]
}
],
"source": [
"results = vector_store.similarity_search_with_score(\n",
" \"Will it be hot tomorrow?\", k=1, filter={\"source\": \"news\"}\n",
" \"LangChain provides abstractions to make working with LLMs easy\",\n",
" k=3,\n",
" filter={\"source\": \"tweet\"},\n",
")\n",
"for res, score in results:\n",
" print(f\"* [SIM={score:3f}] {res.page_content} [{res.metadata}]\")"
" print(f'* [SIM={score:.2f}] \"{res.page_content}\", metadata={res.metadata}')"
]
},
{
"cell_type": "markdown",
"id": "73b8f418-91a7-46d0-91c3-3c76e9586193",
"metadata": {},
"source": [
"#### Specify a different keyword query (requires hybrid search)\n",
"\n",
"> Note: this cell can be run only if the collection supports the [find-and-rerank](https://docs.datastax.com/en/astra-db-serverless/api-reference/document-methods/find-and-rerank.html) command and if the vector store is aware of this fact.\n",
"\n",
"If the vector store is using a hybrid-enabled collection and has detected this fact, by default it will use that capability when running searches.\n",
"\n",
"In that case, the same query text is used for both the vector-similarity and the lexical-based retrieval steps in the find-and-rerank process, _unless you explicitly provide a different query for the latter_:"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "e282a48b-081a-4d94-9483-33407e8d6da7",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"* \"Building an exciting new project with LangChain - come check it out!\", metadata={'source': 'tweet'}\n",
"* \"LangGraph is the best framework for building stateful, agentic applications!\", metadata={'source': 'tweet'}\n",
"* \"ZYX, just another tool in the world, is actually my agent-based superhero\", metadata={'source': 'tweet'}\n"
]
}
],
"source": [
"results = vector_store_autodetected.similarity_search(\n",
" \"LangChain provides abstractions to make working with LLMs easy\",\n",
" k=3,\n",
" filter={\"source\": \"tweet\"},\n",
" lexical_query=\"agent\",\n",
")\n",
"for res in results:\n",
" print(f'* \"{res.page_content}\", metadata={res.metadata}')"
]
},
{
"cell_type": "markdown",
"id": "60688e8c-d74d-4921-b213-b48d88600f95",
"metadata": {},
"source": [
"_The above example hardcodes the \"autodetected\" vector store, which has already inspected the collection and determined whether hybrid search is available. Another option is to explicitly supply hybrid-search parameters to the constructor (refer to the API Reference for more details and examples)._"
]
},
{
@@ -408,7 +557,9 @@
"source": [
"#### Other search methods\n",
"\n",
"There are a variety of other search methods that are not covered in this notebook, such as MMR search or searching by vector. For a full list of the search abilities available for `AstraDBVectorStore` check out the [API reference](https://python.langchain.com/api_reference/astradb/vectorstores/langchain_astradb.vectorstores.AstraDBVectorStore.html)."
"There are a variety of other search methods that are not covered in this notebook, such as MMR search and search by vector.\n",
"\n",
"For a full list of the search modes available in `AstraDBVectorStore` check out the [API reference](https://python.langchain.com/api_reference/astradb/vectorstores/langchain_astradb.vectorstores.AstraDBVectorStore.html)."
]
},
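The MMR search mentioned above trades relevance off against diversity. As an illustrative, self-contained sketch of the selection rule only (not the `AstraDBVectorStore` implementation, and with toy two-dimensional vectors in place of real embeddings):

```python
# Illustrative maximal-marginal-relevance (MMR) selection: greedily pick the
# candidate maximizing
#   lambda * sim(query, doc) - (1 - lambda) * max_j sim(doc, selected_j).
# Vectors below are toy values, not real embeddings.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def mmr_select(query_vec, doc_vecs, k=2, lambda_mult=0.5):
    selected = []
    candidates = list(range(len(doc_vecs)))
    while candidates and len(selected) < k:
        def score(i):
            relevance = cosine(query_vec, doc_vecs[i])
            redundancy = max(
                (cosine(doc_vecs[i], doc_vecs[j]) for j in selected),
                default=0.0,
            )
            return lambda_mult * relevance - (1 - lambda_mult) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

# Two near-duplicate relevant vectors plus one distinct vector: with a
# diversity-heavy lambda, MMR picks one duplicate, then the distinct vector.
docs = [[1.0, 0.0], [0.99, 0.1], [0.0, 1.0]]
print(mmr_select([1.0, 0.0], docs, k=2, lambda_mult=0.3))  # → [0, 2]
```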
{
@@ -418,24 +569,24 @@
"source": [
"### Query by turning into retriever\n",
"\n",
"You can also transform the vector store into a retriever for easier usage in your chains. \n",
"You can also make the vector store into a retriever, for easier usage in your chains. \n",
"\n",
"Here is how to transform your vector store into a retriever and then invoke the retreiever with a simple query and filter."
"Transform the vector store into a retriever and invoke it with a simple query + metadata filter:"
]
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": 13,
"id": "dcee50e6",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(metadata={'source': 'news'}, page_content='Robbers broke into the city bank and stole $1 million in cash.')]"
"[Document(id='entry_04', metadata={'source': 'news'}, page_content='Robbers broke into the city bank and stole $1 million in cash.')]"
]
},
"execution_count": 17,
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
@@ -490,7 +641,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 14,
"id": "fd405a13-6f71-46fa-87e6-167238e9c25e",
"metadata": {},
"outputs": [],
@@ -505,7 +656,7 @@
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `AstraDBVectorStore` features and configurations head to the API reference: https://python.langchain.com/api_reference/astradb/vectorstores/langchain_astradb.vectorstores.AstraDBVectorStore.html"
"For detailed documentation of all `AstraDBVectorStore` features and configurations, consult the [API reference](https://python.langchain.com/api_reference/astradb/vectorstores/langchain_astradb.vectorstores.AstraDBVectorStore.html)."
]
}
],
@@ -525,7 +676,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.12.0"
}
},
"nbformat": 4,

View File

@@ -402,7 +402,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"* I had chocalate chip pancakes and fried eggs for breakfast this morning. [{'source': 'tweet'}]\n"
"* I had chocolate chip pancakes and fried eggs for breakfast this morning. [{'source': 'tweet'}]\n"
]
}
],

View File

@@ -144,7 +144,7 @@
"from langchain_core.documents import Document\n",
"\n",
"document_1 = Document(\n",
" page_content=\"I had chocalate chip pancakes and scrambled eggs for breakfast this morning.\",\n",
" page_content=\"I had chocolate chip pancakes and scrambled eggs for breakfast this morning.\",\n",
" metadata={\"source\": \"tweet\"},\n",
")\n",
"\n",

View File

@@ -301,7 +301,7 @@
"from langchain_core.documents import Document\n",
"\n",
"document_1 = Document(\n",
" page_content=\"I had chocalate chip pancakes and scrambled eggs for breakfast this morning.\",\n",
" page_content=\"I had chocolate chip pancakes and scrambled eggs for breakfast this morning.\",\n",
" metadata={\"source\": \"tweet\"},\n",
")\n",
"\n",
@@ -492,7 +492,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"page_content='I had chocalate chip pancakes and scrambled eggs for breakfast this morning.' metadata={'source': 'tweet'}\n"
"page_content='I had chocolate chip pancakes and scrambled eggs for breakfast this morning.' metadata={'source': 'tweet'}\n"
]
}
],

View File

@@ -255,7 +255,7 @@
"from langchain_core.documents import Document\n",
"\n",
"document_1 = Document(\n",
" page_content=\"I had chocalate chip pancakes and scrambled eggs for breakfast this morning.\",\n",
" page_content=\"I had chocolate chip pancakes and scrambled eggs for breakfast this morning.\",\n",
" metadata={\"source\": \"tweet\"},\n",
")\n",
"\n",

View File

@@ -150,7 +150,7 @@
"from langchain_core.documents import Document\n",
"\n",
"document_1 = Document(\n",
" page_content=\"I had chocalate chip pancakes and scrambled eggs for breakfast this morning.\",\n",
" page_content=\"I had chocolate chip pancakes and scrambled eggs for breakfast this morning.\",\n",
" metadata={\"source\": \"tweet\"},\n",
")\n",
"\n",

View File

@@ -138,7 +138,7 @@
"from langchain_core.documents import Document\n",
"\n",
"document_1 = Document(\n",
" page_content=\"I had chocalate chip pancakes and scrambled eggs for breakfast this morning.\",\n",
" page_content=\"I had chocolate chip pancakes and scrambled eggs for breakfast this morning.\",\n",
" metadata={\"source\": \"tweet\"},\n",
")\n",
"\n",

View File

@@ -206,7 +206,7 @@
"from langchain_core.documents import Document\n",
"\n",
"document_1 = Document(\n",
" page_content=\"I had chocalate chip pancakes and scrambled eggs for breakfast this morning.\",\n",
" page_content=\"I had chocolate chip pancakes and scrambled eggs for breakfast this morning.\",\n",
" metadata={\"source\": \"tweet\"},\n",
")\n",
"\n",

View File

@@ -26,7 +26,7 @@
},
"outputs": [],
"source": [
"pip install -qU langchain-pinecone pinecone-notebooks"
"pip install -qU langchain langchain-pinecone langchain-openai"
]
},
{
@@ -34,7 +34,7 @@
"id": "1917d123",
"metadata": {},
"source": [
"Migration note: if you are migrating from the `langchain_community.vectorstores` implementation of Pinecone, you may need to remove your `pinecone-client` v2 dependency before installing `langchain-pinecone`, which relies on `pinecone-client` v3."
"Migration note: if you are migrating from the `langchain_community.vectorstores` implementation of Pinecone, you may need to remove your `pinecone-client` v2 dependency before installing `langchain-pinecone`, which relies on `pinecone-client` v6."
]
},
{
@@ -49,7 +49,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": null,
"id": "eb554814",
"metadata": {},
"outputs": [],
@@ -57,7 +57,7 @@
"import getpass\n",
"import os\n",
"\n",
"from pinecone import Pinecone, ServerlessSpec\n",
"from pinecone import Pinecone\n",
"\n",
"if not os.getenv(\"PINECONE_API_KEY\"):\n",
" os.environ[\"PINECONE_API_KEY\"] = getpass.getpass(\"Enter your Pinecone API key: \")\n",
@@ -98,59 +98,41 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 4,
"id": "276a06dd",
"metadata": {},
"outputs": [],
"source": [
"import time\n",
"from pinecone import ServerlessSpec\n",
"\n",
"index_name = \"langchain-test-index\" # change if desired\n",
"\n",
"existing_indexes = [index_info[\"name\"] for index_info in pc.list_indexes()]\n",
"\n",
"if index_name not in existing_indexes:\n",
"if not pc.has_index(index_name):\n",
" pc.create_index(\n",
" name=index_name,\n",
" dimension=3072,\n",
" dimension=1536,\n",
" metric=\"cosine\",\n",
" spec=ServerlessSpec(cloud=\"aws\", region=\"us-east-1\"),\n",
" )\n",
" while not pc.describe_index(index_name).status[\"ready\"]:\n",
" time.sleep(1)\n",
"\n",
"index = pc.Index(index_name)"
]
},
{
"cell_type": "markdown",
"id": "3a4d377f",
"metadata": {},
"source": [
"Now that our Pinecone index is set up, we can initialize our vector store.\n",
"\n",
"import EmbeddingTabs from \"@theme/EmbeddingTabs\";\n",
"\n",
"<EmbeddingTabs/>\n"
]
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 5,
"id": "1485db56",
"metadata": {},
"outputs": [],
"source": [
"# | output: false\n",
"# | echo: false\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"embeddings = OpenAIEmbeddings(model=\"text-embedding-3-large\")"
"embeddings = OpenAIEmbeddings(model=\"text-embedding-3-small\")"
]
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": 6,
"id": "6e104aee",
"metadata": {},
"outputs": [],
@@ -176,37 +158,17 @@
},
{
"cell_type": "code",
"execution_count": 15,
"execution_count": null,
"id": "70e688f4",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['167b8681-5974-467f-adcb-6e987a18df01',\n",
" 'd16010fd-41f8-4d49-9c22-c66d5555a3fe',\n",
" 'ffcacfb3-2bc2-44c3-a039-c2256a905c0e',\n",
" 'cf3bfc9f-5dc7-4f5e-bb41-edb957394126',\n",
" 'e99b07eb-fdff-4cb9-baa8-619fd8efeed3',\n",
" '68c93033-a24f-40bd-8492-92fa26b631a4',\n",
" 'b27a4ecb-b505-4c5d-89ff-526e3d103558',\n",
" '4868a9e6-e6fb-4079-b400-4a1dfbf0d4c4',\n",
" '921c0e9c-0550-4eb5-9a6c-ed44410788b2',\n",
" 'c446fc23-64e8-47e7-8c19-ecf985e9411e']"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"from uuid import uuid4\n",
"\n",
"from langchain_core.documents import Document\n",
"\n",
"document_1 = Document(\n",
" page_content=\"I had chocalate chip pancakes and scrambled eggs for breakfast this morning.\",\n",
" page_content=\"I had chocolate chip pancakes and scrambled eggs for breakfast this morning.\",\n",
" metadata={\"source\": \"tweet\"},\n",
")\n",
"\n",
@@ -268,7 +230,6 @@
" document_10,\n",
"]\n",
"uuids = [str(uuid4()) for _ in range(len(documents))]\n",
"\n",
"vector_store.add_documents(documents=documents, ids=uuids)"
]
},
@@ -282,7 +243,7 @@
},
{
"cell_type": "code",
"execution_count": 16,
"execution_count": 8,
"id": "5b8437cd",
"metadata": {},
"outputs": [],
@@ -306,19 +267,10 @@
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": 9,
"id": "ffbcb3fb",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"* Building an exciting new project with LangChain - come check it out! [{'source': 'tweet'}]\n",
"* LangGraph is the best framework for building stateful, agentic applications! [{'source': 'tweet'}]\n"
]
}
],
"outputs": [],
"source": [
"results = vector_store.similarity_search(\n",
" \"LangChain provides abstractions to make working with LLMs easy\",\n",
@@ -341,18 +293,10 @@
},
{
"cell_type": "code",
"execution_count": 18,
"execution_count": null,
"id": "5fb24583",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"* [SIM=0.553187] The weather forecast for tomorrow is cloudy and overcast, with a high of 62 degrees. [{'source': 'news'}]\n"
]
}
],
"outputs": [],
"source": [
"results = vector_store.similarity_search_with_score(\n",
" \"Will it be hot tomorrow?\", k=1, filter={\"source\": \"news\"}\n",
@@ -377,25 +321,14 @@
},
{
"cell_type": "code",
"execution_count": 19,
"execution_count": null,
"id": "78140e87",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(metadata={'source': 'news'}, page_content='Robbers broke into the city bank and stole $1 million in cash.')]"
]
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"retriever = vector_store.as_retriever(\n",
" search_type=\"similarity_score_threshold\",\n",
" search_kwargs={\"k\": 1, \"score_threshold\": 0.5},\n",
" search_kwargs={\"k\": 1, \"score_threshold\": 0.4},\n",
")\n",
"retriever.invoke(\"Stealing from the bank is a crime\", filter={\"source\": \"news\"})"
]
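With `search_type="similarity_score_threshold"`, results scoring below the threshold are dropped before being returned. A minimal, self-contained sketch of that filtering step (the documents and scores below are made-up examples, not actual Pinecone results):

```python
# Sketch of similarity_score_threshold filtering: keep only (document, score)
# pairs meeting the threshold, best first, capped at k.
# The scored documents below are illustrative, not real query output.

def filter_by_threshold(scored_docs, score_threshold, k):
    kept = [(doc, s) for doc, s in scored_docs if s >= score_threshold]
    kept.sort(key=lambda pair: pair[1], reverse=True)  # highest score first
    return kept[:k]

scored = [
    ("Robbers broke into the city bank...", 0.62),
    ("The weather forecast for tomorrow...", 0.31),
]
print(filter_by_threshold(scored, score_threshold=0.4, k=1))
```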
@@ -421,13 +354,13 @@
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all __ModuleName__VectorStore features and configurations head to the API reference: https://python.langchain.com/api_reference/pinecone/vectorstores/langchain_pinecone.vectorstores.PineconeVectorStore.html"
"For detailed documentation of all features and configurations head to the API reference: https://python.langchain.com/api_reference/pinecone/vectorstores/langchain_pinecone.vectorstores.PineconeVectorStore.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": ".venv",
"language": "python",
"name": "python3"
},
@@ -441,7 +374,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.10.15"
}
},
"nbformat": 4,

View File

@@ -0,0 +1,576 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Pinecone (sparse)\n",
"\n",
">[Pinecone](https://docs.pinecone.io/docs/overview) is a vector database with broad functionality.\n",
"\n",
"This notebook shows how to use functionality related to the `Pinecone` vector database.\n",
"\n",
"## Setup\n",
"\n",
"To use the `PineconeSparseVectorStore`, you first need to install the partner package, as well as the other packages used throughout this notebook."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "5g1-ZqcEONGD",
"outputId": "2d49c259-683b-46c2-994f-642f35e30357"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33mWARNING: pinecone 6.0.2 does not provide the extra 'async'\u001b[0m\u001b[33m\n",
"\u001b[0m"
]
}
],
"source": [
"%pip install -qU \"langchain-pinecone==0.2.5\""
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "-1vfBVLhONGE"
},
"source": [
"### Credentials\n",
"Create a new Pinecone account, or sign into your existing one, and create an API key to use in this notebook."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "k_Dp_DIlONGF",
"outputId": "01728754-8708-4f05-e53d-2e251541370e"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Enter your Pinecone API key: ··········\n"
]
}
],
"source": [
"import os\n",
"from getpass import getpass\n",
"\n",
"from pinecone import Pinecone\n",
"\n",
"# get API key at app.pinecone.io\n",
"os.environ[\"PINECONE_API_KEY\"] = os.getenv(\"PINECONE_API_KEY\") or getpass(\n",
" \"Enter your Pinecone API key: \"\n",
")\n",
"\n",
"# initialize client\n",
"pc = Pinecone()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "OFqeT2xHONGF"
},
"source": [
"## Initialization\n",
"Before initializing our vector store, let's connect to a Pinecone index. If an index named `index_name` doesn't exist, it will be created."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "9xNxmZsRONGF",
"outputId": "b661d0af-26bd-43b2-f277-5366efd1d865"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Index `langchain-sparse-vector-search` host: https://langchain-sparse-vector-search-yrrgefy.svc.aped-4627-b74a.pinecone.io\n"
]
}
],
"source": [
"from pinecone import AwsRegion, CloudProvider, Metric, ServerlessSpec\n",
"\n",
"index_name = \"langchain-sparse-vector-search\" # change if desired\n",
"model_name = \"pinecone-sparse-english-v0\"\n",
"\n",
"if not pc.has_index(index_name):\n",
" pc.create_index_for_model(\n",
" name=index_name,\n",
" cloud=CloudProvider.AWS,\n",
" region=AwsRegion.US_EAST_1,\n",
" embed={\n",
" \"model\": model_name,\n",
" \"field_map\": {\"text\": \"chunk_text\"},\n",
" \"metric\": Metric.DOTPRODUCT,\n",
" },\n",
" )\n",
"\n",
"index = pc.Index(index_name)\n",
"print(f\"Index `{index_name}` host: {index.config.host}\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "E0fDNTF9ONGF"
},
"source": [
"For our sparse embedding model we use [`pinecone-sparse-english-v0`](https://docs.pinecone.io/models/pinecone-sparse-english-v0), which we initialize like so:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"id": "swM_SFhOONGF"
},
"outputs": [],
"source": [
"from langchain_pinecone.embeddings import PineconeSparseEmbeddings\n",
"\n",
"sparse_embeddings = PineconeSparseEmbeddings(model=model_name)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ZDL58ZRZONGF"
},
"source": [
"Now that our Pinecone index and embedding model are both ready, we can initialize our sparse vector store in LangChain:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"id": "GrBUli1VONGF"
},
"outputs": [],
"source": [
"from langchain_pinecone import PineconeSparseVectorStore\n",
"\n",
"vector_store = PineconeSparseVectorStore(index=index, embedding=sparse_embeddings)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Bp8aeN4SONGG"
},
"source": [
"## Manage vector store\n",
"Once we have created our vector store, we can interact with it by adding and deleting items.\n",
"\n",
"### Add items to vector store\n",
"We can add items to our vector store by using the `add_documents` function."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "Ef0s0ovdONGG",
"outputId": "bd61d82f-902d-4010-9701-b0e2ecb02d17"
},
"outputs": [
{
"data": {
"text/plain": [
"['95b598af-c3dc-4a8a-bdb7-5d21283e5a86',\n",
" '838614a5-5635-4efd-9ac3-5237a37a542b',\n",
" '093fd11f-c85b-4c83-83f0-117df64ff442',\n",
" 'fb3ba32f-f802-410a-ad79-56f7bce938fe',\n",
" '75cde9bf-7e91-4f06-8bae-c824dab16a08',\n",
" '9de8f769-d604-4e56-b677-ee333cbc8e34',\n",
" 'f5f4ae97-88e6-4669-bcf7-87072bb08550',\n",
" 'f9f82811-187c-4b25-85b5-7a42b4da3bff',\n",
" 'ce45957c-e8fc-41ef-819b-1bd52b6fc815',\n",
" '66cacc6f-b8e2-441b-9f7f-468788aad88f']"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from uuid import uuid4\n",
"\n",
"from langchain_core.documents import Document\n",
"\n",
"documents = [\n",
" Document(\n",
" page_content=\"I had chocolate chip pancakes and scrambled eggs for breakfast this morning.\",\n",
" metadata={\"source\": \"social\"},\n",
" ),\n",
" Document(\n",
" page_content=\"The weather forecast for tomorrow is cloudy and overcast, with a high of 62 degrees.\",\n",
" metadata={\"source\": \"news\"},\n",
" ),\n",
" Document(\n",
" page_content=\"Building an exciting new project with LangChain - come check it out!\",\n",
" metadata={\"source\": \"social\"},\n",
" ),\n",
" Document(\n",
" page_content=\"Robbers broke into the city bank and stole $1 million in cash.\",\n",
" metadata={\"source\": \"news\"},\n",
" ),\n",
" Document(\n",
" page_content=\"Wow! That was an amazing movie. I can't wait to see it again.\",\n",
" metadata={\"source\": \"social\"},\n",
" ),\n",
" Document(\n",
" page_content=\"Is the new iPhone worth the price? Read this review to find out.\",\n",
" metadata={\"source\": \"website\"},\n",
" ),\n",
" Document(\n",
" page_content=\"The top 10 soccer players in the world right now.\",\n",
" metadata={\"source\": \"website\"},\n",
" ),\n",
" Document(\n",
" page_content=\"LangGraph is the best framework for building stateful, agentic applications!\",\n",
" metadata={\"source\": \"social\"},\n",
" ),\n",
" Document(\n",
" page_content=\"The stock market is down 500 points today due to fears of a recession.\",\n",
" metadata={\"source\": \"news\"},\n",
" ),\n",
" Document(\n",
" page_content=\"I have a bad feeling I am going to get deleted :(\",\n",
" metadata={\"source\": \"social\"},\n",
" ),\n",
"]\n",
"\n",
"uuids = [str(uuid4()) for _ in range(len(documents))]\n",
"\n",
"vector_store.add_documents(documents=documents, ids=uuids)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "KUIIEuYxONGG"
},
"source": [
"### Delete items from vector store"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "pLFI8xEJONGG"
},
"source": [
"We can delete records from our vector store using the `delete` method, providing it with a list of document IDs to delete."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"id": "eboEnsbRONGG"
},
"outputs": [],
"source": [
"vector_store.delete(ids=[uuids[-1]])"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ggaQ5g4bONGG"
},
"source": [
"## Query vector store\n",
"\n",
"Once we have loaded our documents into the vector store, we're most likely ready to begin querying. There are various methods for doing this in LangChain.\n",
"\n",
"First, we'll see how to perform a simple vector search by querying our `vector_store` directly via the `similarity_search` method:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "QcIS_P8CONGG",
"outputId": "774da46e-b919-4128-bc77-6c392e77f9f3"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"* Building an exciting new project with LangChain - come check it out! [{'source': 'social'}]\n",
"* Building an exciting new project with LangChain - come check it out! [{'source': 'social'}]\n",
"* LangGraph is the best framework for building stateful, agentic applications! [{'source': 'social'}]\n"
]
}
],
"source": [
"results = vector_store.similarity_search(\"I'm building a new LangChain project!\", k=3)\n",
"\n",
"for res in results:\n",
" print(f\"* {res.page_content} [{res.metadata}]\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "n-JlbidpONGG"
},
"source": [
"We can also add [metadata filtering](https://docs.pinecone.io/guides/data/understanding-metadata#metadata-query-language) to our query to limit our search based on various criteria. Let's try a simple filter to limit our search to include only records with `source==\"social\"`:"
]
},
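Filters are plain dicts in Pinecone's metadata query language; beyond the exact match above, operators such as `$in` and `$and` compose richer conditions. A small sketch (the `year` field is hypothetical, since our sample documents only carry `source`):

```python
# Pinecone metadata filter dicts. Exact match on one field:
exact = {"source": "social"}

# Match any of several values with $in:
social_or_news = {"source": {"$in": ["social", "news"]}}

# Combine conditions with $and; the "year" field is a hypothetical example,
# not present in this notebook's sample documents:
combined = {"$and": [{"source": "news"}, {"year": {"$gte": 2024}}]}

print(social_or_news["source"]["$in"])
```

Any of these dicts can be passed as the `filter=` argument to `similarity_search`.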
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "N19HBLNCONGG",
"outputId": "9c5e96c2-0b4e-4083-cd6a-dd6d8662df09"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"* Building an exciting new project with LangChain - come check it out! [{'source': 'social'}]\n",
"* Building an exciting new project with LangChain - come check it out! [{'source': 'social'}]\n",
"* LangGraph is the best framework for building stateful, agentic applications! [{'source': 'social'}]\n"
]
}
],
"source": [
"results = vector_store.similarity_search(\n",
" \"I'm building a new LangChain project!\",\n",
" k=3,\n",
" filter={\"source\": \"social\"},\n",
")\n",
"for res in results:\n",
" print(f\"* {res.page_content} [{res.metadata}]\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "yEs4xmESONGG"
},
"source": [
"When comparing these results, we can see that our first query returned a different record from the `\"website\"` source. In our latter, filtered query, this is no longer the case."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "vWGkIfzCONGH"
},
"source": [
"### Similarity Search and Scores\n",
"\n",
"We can also search while returning the similarity score in a list of `(document, score)` tuples, where `document` is a LangChain `Document` object containing our text content and metadata."
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "aRiHj8ADONGH",
"outputId": "5649d70a-0bd0-446d-8e60-2348170da706"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[SIM=12.959961] Building an exciting new project with LangChain - come check it out! [{'source': 'social'}]\n",
"[SIM=12.959961] Building an exciting new project with LangChain - come check it out! [{'source': 'social'}]\n",
"[SIM=1.942383] LangGraph is the best framework for building stateful, agentic applications! [{'source': 'social'}]\n"
]
}
],
"source": [
"results = vector_store.similarity_search_with_score(\n",
" \"I'm building a new LangChain project!\", k=3, filter={\"source\": \"social\"}\n",
")\n",
"for doc, score in results:\n",
" print(f\"[SIM={score:3f}] {doc.page_content} [{doc.metadata}]\")"
]
},
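The scores above are raw dot products between sparse vectors, which is why they are not bounded to [0, 1]. A self-contained sketch of how such a score is computed, using toy index-to-weight maps rather than real `pinecone-sparse-english-v0` output:

```python
# Sparse vectors are index -> weight maps; with Metric.DOTPRODUCT the
# similarity is the dot product over shared indices.
# The vectors below are toy values, not real model output.

def sparse_dot(a: dict, b: dict) -> float:
    return sum(w * b[i] for i, w in a.items() if i in b)

query = {101: 1.5, 205: 0.8, 907: 2.0}
doc = {101: 2.0, 907: 3.0, 550: 1.0}
print(sparse_dot(query, doc))  # 1.5*2.0 + 2.0*3.0 = 9.0
```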
{
"cell_type": "markdown",
"metadata": {
"id": "aqGelo6sONGH"
},
"source": [
"### As a Retriever\n",
"\n",
"In our chains and agents we'll often use the vector store as a `VectorStoreRetriever`. To create that, we use the `as_retriever` method:"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "Tsn9--KsONGH",
"outputId": "3da43258-49fb-4080-cbdd-f97101fb8099"
},
"outputs": [
{
"data": {
"text/plain": [
"VectorStoreRetriever(tags=['PineconeSparseVectorStore', 'PineconeSparseEmbeddings'], vectorstore=<langchain_pinecone.vectorstores_sparse.PineconeSparseVectorStore object at 0x7c8087b24290>, search_type='similarity_score_threshold', search_kwargs={'k': 3, 'score_threshold': 0.5})"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"retriever = vector_store.as_retriever(\n",
" search_type=\"similarity_score_threshold\",\n",
" search_kwargs={\"k\": 3, \"score_threshold\": 0.5},\n",
")\n",
"retriever"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "yDqb3q-VUY2t"
},
"source": [
"We can now query our retriever using the `invoke` method:"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "mGV0p0TDUJyx",
"outputId": "89db72c3-4ff2-4900-d302-482e54549b39"
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/usr/local/lib/python3.11/dist-packages/langchain_core/vectorstores/base.py:1082: UserWarning: Relevance scores must be between 0 and 1, got [(Document(id='093fd11f-c85b-4c83-83f0-117df64ff442', metadata={'source': 'social'}, page_content='Building an exciting new project with LangChain - come check it out!'), 6.97998045), (Document(id='54f8f645-9f77-4aab-b9fa-709fd91ae3b3', metadata={'source': 'social'}, page_content='Building an exciting new project with LangChain - come check it out!'), 6.97998045), (Document(id='f9f82811-187c-4b25-85b5-7a42b4da3bff', metadata={'source': 'social'}, page_content='LangGraph is the best framework for building stateful, agentic applications!'), 1.471191405)]\n",
" self.vectorstore.similarity_search_with_relevance_scores(\n"
]
},
{
"data": {
"text/plain": [
"[Document(id='093fd11f-c85b-4c83-83f0-117df64ff442', metadata={'source': 'social'}, page_content='Building an exciting new project with LangChain - come check it out!'),\n",
" Document(id='54f8f645-9f77-4aab-b9fa-709fd91ae3b3', metadata={'source': 'social'}, page_content='Building an exciting new project with LangChain - come check it out!'),\n",
" Document(id='f9f82811-187c-4b25-85b5-7a42b4da3bff', metadata={'source': 'social'}, page_content='LangGraph is the best framework for building stateful, agentic applications!')]"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"retriever.invoke(\n",
" input=\"I'm building a new LangChain project!\", filter={\"source\": \"social\"}\n",
")"
]
},
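The `UserWarning` above appears because `similarity_score_threshold` expects relevance scores in [0, 1], while sparse dot-product scores are unbounded. One possible workaround, sketched here with made-up scores and not something the library does for you, is to rescale a result set before thresholding:

```python
# Sketch: min-max rescale unbounded dot-product scores into [0, 1] so a
# 0-1 threshold is meaningful. Input scores are made-up examples.

def min_max_normalize(scores):
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [1.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

print(min_max_normalize([6.98, 6.98, 1.47]))  # → [1.0, 1.0, 0.0]
```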
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Usage for retrieval-augmented generation\n",
"\n",
"For guides on how to use this vector store for retrieval-augmented generation (RAG), see the following sections:\n",
"\n",
"- [Tutorials](/docs/tutorials/)\n",
"- [How-to: Question and answer with RAG](https://python.langchain.com/docs/how_to/#qa-with-rag)\n",
"- [Retrieval conceptual docs](https://python.langchain.com/docs/concepts/retrieval)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all features and configurations head to the API reference: \n",
"https://python.langchain.com/api_reference/pinecone/vectorstores_sparse/langchain_pinecone.vectorstores_sparse.PineconeSparseVectorStore.html#langchain_pinecone.vectorstores_sparse.PineconeSparseVectorStore\n",
"\n",
"Sparse Embeddings:\n",
"https://python.langchain.com/api_reference/pinecone/embeddings/langchain_pinecone.embeddings.PineconeSparseEmbeddings.html"
]
}
],
"metadata": {
"colab": {
"provenance": []
},
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.7"
}
},
"nbformat": 4,
"nbformat_minor": 0
}

View File

@@ -6,73 +6,46 @@
"source": [
"# SAP HANA Cloud Vector Engine\n",
"\n",
">[SAP HANA Cloud Vector Engine](https://www.sap.com/events/teched/news-guide/ai.html#article8) is a vector store fully integrated into the `SAP HANA Cloud` database.\n",
"\n",
"You'll need to install `langchain-community` with `pip install -qU langchain-community` to use this integration"
">[SAP HANA Cloud Vector Engine](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-vector-engine-guide/sap-hana-cloud-sap-hana-database-vector-engine-guide) is a vector store fully integrated into the `SAP HANA Cloud` database."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setting up\n",
"## Setup\n",
"\n",
"Installation of the HANA database driver."
"Install the `langchain-hana` external integration package, as well as the other packages used throughout this notebook."
]
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# Pip install necessary package\n",
"%pip install --upgrade --quiet hdbcli"
"%pip install -qU langchain-hana"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For `OpenAIEmbeddings` we use the OpenAI API key from the environment."
"### Credentials\n",
"\n",
"Ensure your SAP HANA instance is running. Load your credentials from environment variables and create a connection:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"ExecuteTime": {
"end_time": "2023-09-09T08:02:16.802456Z",
"start_time": "2023-09-09T08:02:07.065604Z"
}
},
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"# Use OPENAI_API_KEY env variable\n",
"# os.environ[\"OPENAI_API_KEY\"] = \"Your OpenAI API key\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Create a database connection to a HANA Cloud instance."
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"ExecuteTime": {
"end_time": "2023-09-09T08:02:28.174088Z",
"start_time": "2023-09-09T08:02:28.162698Z"
}
},
"outputs": [],
"source": [
"\n",
"from dotenv import load_dotenv\n",
"from hdbcli import dbapi\n",
"\n",
@@ -88,6 +61,92 @@
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Learn more about SAP HANA in [What is SAP HANA?](https://www.sap.com/products/data-cloud/hana/what-is-sap-hana.html)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Initialization\n",
"To initialize a `HanaDB` vector store, you need a database connection and an embedding instance. SAP HANA Cloud Vector Engine supports both external and internal embeddings."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"- #### Using External Embeddings\n",
"\n",
"import EmbeddingTabs from \"@theme/EmbeddingTabs\";\n",
"\n",
"<EmbeddingTabs/>"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"# | output: false\n",
"# | echo: false\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"embeddings = OpenAIEmbeddings(model=\"text-embedding-3-large\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"- #### Using Internal Embeddings"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Alternatively, you can compute embeddings directly in SAP HANA using its native `VECTOR_EMBEDDING()` function. To enable this, create an instance of `HanaInternalEmbeddings` with your internal model ID and pass it to `HanaDB`. Note that the `HanaInternalEmbeddings` instance is specifically designed for use with `HanaDB` and is not intended for use with other vector store implementations. For more information about internal embedding, see the [SAP HANA VECTOR_EMBEDDING Function](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-vector-engine-guide/vector-embedding-function-vector).\n",
"\n",
"> **Caution:** Ensure NLP is enabled in your SAP HANA Cloud instance."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"from langchain_hana import HanaInternalEmbeddings\n",
"\n",
"embeddings = HanaInternalEmbeddings(internal_embedding_model_id=\"SAP_NEB.20240715\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once you have your connection and embedding instance, create the vector store by passing them to `HanaDB` along with a table name for storing vectors:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"from langchain_hana import HanaDB\n",
"\n",
"db = HanaDB(\n",
" embedding=embeddings, connection=connection, table_name=\"STATE_OF_THE_UNION\"\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -104,7 +163,7 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 6,
"metadata": {
"ExecuteTime": {
"end_time": "2023-09-09T08:02:25.452472Z",
@@ -122,40 +181,16 @@
],
"source": [
"from langchain_community.document_loaders import TextLoader\n",
"from langchain_community.vectorstores.hanavector import HanaDB\n",
"from langchain_core.documents import Document\n",
"from langchain_openai import OpenAIEmbeddings\n",
"from langchain_text_splitters import CharacterTextSplitter\n",
"\n",
"text_documents = TextLoader(\"../../how_to/state_of_the_union.txt\").load()\n",
"text_documents = TextLoader(\n",
" \"../../how_to/state_of_the_union.txt\", encoding=\"UTF-8\"\n",
").load()\n",
"text_splitter = CharacterTextSplitter(chunk_size=500, chunk_overlap=0)\n",
"text_chunks = text_splitter.split_documents(text_documents)\n",
"print(f\"Number of document chunks: {len(text_chunks)}\")\n",
"\n",
"embeddings = OpenAIEmbeddings()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Create a LangChain VectorStore interface for the HANA database and specify the table (collection) to use for accessing the vector embeddings"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"ExecuteTime": {
"end_time": "2023-09-09T08:04:16.696625Z",
"start_time": "2023-09-09T08:02:31.817790Z"
}
},
"outputs": [],
"source": [
"db = HanaDB(\n",
" embedding=embeddings, connection=connection, table_name=\"STATE_OF_THE_UNION\"\n",
")"
"print(f\"Number of document chunks: {len(text_chunks)}\")"
]
},
{
@@ -167,7 +202,7 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 7,
"metadata": {},
"outputs": [
{
@@ -176,7 +211,7 @@
"[]"
]
},
"execution_count": 12,
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
@@ -199,7 +234,7 @@
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 8,
"metadata": {},
"outputs": [
{
@@ -235,7 +270,7 @@
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": 9,
"metadata": {},
"outputs": [
{
@@ -254,7 +289,7 @@
}
],
"source": [
"from langchain_community.vectorstores.utils import DistanceStrategy\n",
"from langchain_hana.utils import DistanceStrategy\n",
"\n",
"db = HanaDB(\n",
" embedding=embeddings,\n",
@@ -286,7 +321,7 @@
},
{
"cell_type": "code",
"execution_count": 15,
"execution_count": 10,
"metadata": {
"ExecuteTime": {
"end_time": "2023-09-09T08:05:23.276819Z",
@@ -336,7 +371,7 @@
},
{
"cell_type": "code",
"execution_count": 18,
"execution_count": 11,
"metadata": {},
"outputs": [
{
@@ -411,7 +446,7 @@
},
{
"cell_type": "code",
"execution_count": 19,
"execution_count": 12,
"metadata": {},
"outputs": [
{
@@ -420,7 +455,7 @@
"True"
]
},
"execution_count": 19,
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
@@ -443,7 +478,7 @@
},
{
"cell_type": "code",
"execution_count": 20,
"execution_count": 13,
"metadata": {},
"outputs": [
{
@@ -452,7 +487,7 @@
"[]"
]
},
"execution_count": 20,
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
@@ -471,7 +506,7 @@
},
{
"cell_type": "code",
"execution_count": 21,
"execution_count": 14,
"metadata": {},
"outputs": [
{
@@ -480,7 +515,7 @@
"[]"
]
},
"execution_count": 21,
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
@@ -508,7 +543,7 @@
},
{
"cell_type": "code",
"execution_count": 22,
"execution_count": 15,
"metadata": {},
"outputs": [
{
@@ -539,7 +574,7 @@
},
{
"cell_type": "code",
"execution_count": 23,
"execution_count": 16,
"metadata": {},
"outputs": [
{
@@ -578,13 +613,14 @@
"| `$nin` | Not contained in a set of given values (not in) |\n",
"| `$between` | Between the range of two boundary values |\n",
"| `$like` | Text equality based on the \"LIKE\" semantics in SQL (using \"%\" as wildcard) |\n",
"| `$contains` | Filters documents containing a specific keyword |\n",
"| `$and` | Logical \"and\", supporting 2 or more operands |\n",
"| `$or` | Logical \"or\", supporting 2 or more operands |"
]
},
{
"cell_type": "code",
"execution_count": 24,
"execution_count": 17,
"metadata": {},
"outputs": [],
"source": [
@@ -592,15 +628,15 @@
"docs = [\n",
" Document(\n",
" page_content=\"First\",\n",
" metadata={\"name\": \"adam\", \"is_active\": True, \"id\": 1, \"height\": 10.0},\n",
" metadata={\"name\": \"Adam Smith\", \"is_active\": True, \"id\": 1, \"height\": 10.0},\n",
" ),\n",
" Document(\n",
" page_content=\"Second\",\n",
" metadata={\"name\": \"bob\", \"is_active\": False, \"id\": 2, \"height\": 5.7},\n",
" metadata={\"name\": \"Bob Johnson\", \"is_active\": False, \"id\": 2, \"height\": 5.7},\n",
" ),\n",
" Document(\n",
" page_content=\"Third\",\n",
" metadata={\"name\": \"jane\", \"is_active\": True, \"id\": 3, \"height\": 2.4},\n",
" metadata={\"name\": \"Jane Doe\", \"is_active\": True, \"id\": 3, \"height\": 2.4},\n",
" ),\n",
"]\n",
"\n",
@@ -632,7 +668,7 @@
},
{
"cell_type": "code",
"execution_count": 25,
"execution_count": 18,
"metadata": {},
"outputs": [
{
@@ -640,19 +676,19 @@
"output_type": "stream",
"text": [
"Filter: {'id': {'$ne': 1}}\n",
"{'name': 'bob', 'is_active': False, 'id': 2, 'height': 5.7}\n",
"{'name': 'jane', 'is_active': True, 'id': 3, 'height': 2.4}\n",
"{'name': 'Jane Doe', 'is_active': True, 'id': 3, 'height': 2.4}\n",
"{'name': 'Bob Johnson', 'is_active': False, 'id': 2, 'height': 5.7}\n",
"Filter: {'id': {'$gt': 1}}\n",
"{'name': 'bob', 'is_active': False, 'id': 2, 'height': 5.7}\n",
"{'name': 'jane', 'is_active': True, 'id': 3, 'height': 2.4}\n",
"{'name': 'Jane Doe', 'is_active': True, 'id': 3, 'height': 2.4}\n",
"{'name': 'Bob Johnson', 'is_active': False, 'id': 2, 'height': 5.7}\n",
"Filter: {'id': {'$gte': 1}}\n",
"{'name': 'adam', 'is_active': True, 'id': 1, 'height': 10.0}\n",
"{'name': 'bob', 'is_active': False, 'id': 2, 'height': 5.7}\n",
"{'name': 'jane', 'is_active': True, 'id': 3, 'height': 2.4}\n",
"{'name': 'Adam Smith', 'is_active': True, 'id': 1, 'height': 10.0}\n",
"{'name': 'Jane Doe', 'is_active': True, 'id': 3, 'height': 2.4}\n",
"{'name': 'Bob Johnson', 'is_active': False, 'id': 2, 'height': 5.7}\n",
"Filter: {'id': {'$lt': 1}}\n",
"<empty result>\n",
"Filter: {'id': {'$lte': 1}}\n",
"{'name': 'adam', 'is_active': True, 'id': 1, 'height': 10.0}\n"
"{'name': 'Adam Smith', 'is_active': True, 'id': 1, 'height': 10.0}\n"
]
}
],
@@ -687,7 +723,7 @@
},
{
"cell_type": "code",
"execution_count": 26,
"execution_count": 19,
"metadata": {},
"outputs": [
{
@@ -695,13 +731,13 @@
"output_type": "stream",
"text": [
"Filter: {'id': {'$between': (1, 2)}}\n",
"{'name': 'adam', 'is_active': True, 'id': 1, 'height': 10.0}\n",
"{'name': 'bob', 'is_active': False, 'id': 2, 'height': 5.7}\n",
"Filter: {'name': {'$in': ['adam', 'bob']}}\n",
"{'name': 'adam', 'is_active': True, 'id': 1, 'height': 10.0}\n",
"{'name': 'bob', 'is_active': False, 'id': 2, 'height': 5.7}\n",
"Filter: {'name': {'$nin': ['adam', 'bob']}}\n",
"{'name': 'jane', 'is_active': True, 'id': 3, 'height': 2.4}\n"
"{'name': 'Adam Smith', 'is_active': True, 'id': 1, 'height': 10.0}\n",
"{'name': 'Bob Johnson', 'is_active': False, 'id': 2, 'height': 5.7}\n",
"Filter: {'name': {'$in': ['Adam Smith', 'Bob Johnson']}}\n",
"{'name': 'Adam Smith', 'is_active': True, 'id': 1, 'height': 10.0}\n",
"{'name': 'Bob Johnson', 'is_active': False, 'id': 2, 'height': 5.7}\n",
"Filter: {'name': {'$nin': ['Adam Smith', 'Bob Johnson']}}\n",
"{'name': 'Jane Doe', 'is_active': True, 'id': 3, 'height': 2.4}\n"
]
}
],
@@ -710,11 +746,11 @@
"print(f\"Filter: {advanced_filter}\")\n",
"print_filter_result(db.similarity_search(\"just testing\", k=5, filter=advanced_filter))\n",
"\n",
"advanced_filter = {\"name\": {\"$in\": [\"adam\", \"bob\"]}}\n",
"advanced_filter = {\"name\": {\"$in\": [\"Adam Smith\", \"Bob Johnson\"]}}\n",
"print(f\"Filter: {advanced_filter}\")\n",
"print_filter_result(db.similarity_search(\"just testing\", k=5, filter=advanced_filter))\n",
"\n",
"advanced_filter = {\"name\": {\"$nin\": [\"adam\", \"bob\"]}}\n",
"advanced_filter = {\"name\": {\"$nin\": [\"Adam Smith\", \"Bob Johnson\"]}}\n",
"print(f\"Filter: {advanced_filter}\")\n",
"print_filter_result(db.similarity_search(\"just testing\", k=5, filter=advanced_filter))"
]
@@ -728,7 +764,7 @@
},
{
"cell_type": "code",
"execution_count": 27,
"execution_count": 20,
"metadata": {},
"outputs": [
{
@@ -736,10 +772,10 @@
"output_type": "stream",
"text": [
"Filter: {'name': {'$like': 'a%'}}\n",
"{'name': 'adam', 'is_active': True, 'id': 1, 'height': 10.0}\n",
"<empty result>\n",
"Filter: {'name': {'$like': '%a%'}}\n",
"{'name': 'adam', 'is_active': True, 'id': 1, 'height': 10.0}\n",
"{'name': 'jane', 'is_active': True, 'id': 3, 'height': 2.4}\n"
"{'name': 'Adam Smith', 'is_active': True, 'id': 1, 'height': 10.0}\n",
"{'name': 'Jane Doe', 'is_active': True, 'id': 3, 'height': 2.4}\n"
]
}
],
@@ -753,6 +789,51 @@
"print_filter_result(db.similarity_search(\"just testing\", k=5, filter=advanced_filter))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Text filtering with `$contains`"
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Filter: {'name': {'$contains': 'bob'}}\n",
"{'name': 'Bob Johnson', 'is_active': False, 'id': 2, 'height': 5.7}\n",
"Filter: {'name': {'$contains': 'bo'}}\n",
"<empty result>\n",
"Filter: {'name': {'$contains': 'Adam Johnson'}}\n",
"<empty result>\n",
"Filter: {'name': {'$contains': 'Adam Smith'}}\n",
"{'name': 'Adam Smith', 'is_active': True, 'id': 1, 'height': 10.0}\n"
]
}
],
"source": [
"advanced_filter = {\"name\": {\"$contains\": \"bob\"}}\n",
"print(f\"Filter: {advanced_filter}\")\n",
"print_filter_result(db.similarity_search(\"just testing\", k=5, filter=advanced_filter))\n",
"\n",
"advanced_filter = {\"name\": {\"$contains\": \"bo\"}}\n",
"print(f\"Filter: {advanced_filter}\")\n",
"print_filter_result(db.similarity_search(\"just testing\", k=5, filter=advanced_filter))\n",
"\n",
"advanced_filter = {\"name\": {\"$contains\": \"Adam Johnson\"}}\n",
"print(f\"Filter: {advanced_filter}\")\n",
"print_filter_result(db.similarity_search(\"just testing\", k=5, filter=advanced_filter))\n",
"\n",
"advanced_filter = {\"name\": {\"$contains\": \"Adam Smith\"}}\n",
"print(f\"Filter: {advanced_filter}\")\n",
"print_filter_result(db.similarity_search(\"just testing\", k=5, filter=advanced_filter))"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -762,7 +843,7 @@
},
{
"cell_type": "code",
"execution_count": 28,
"execution_count": 22,
"metadata": {},
"outputs": [
{
@@ -770,14 +851,15 @@
"output_type": "stream",
"text": [
"Filter: {'$or': [{'id': 1}, {'name': 'bob'}]}\n",
"{'name': 'adam', 'is_active': True, 'id': 1, 'height': 10.0}\n",
"{'name': 'bob', 'is_active': False, 'id': 2, 'height': 5.7}\n",
"{'name': 'Adam Smith', 'is_active': True, 'id': 1, 'height': 10.0}\n",
"Filter: {'$and': [{'id': 1}, {'id': 2}]}\n",
"<empty result>\n",
"Filter: {'$or': [{'id': 1}, {'id': 2}, {'id': 3}]}\n",
"{'name': 'adam', 'is_active': True, 'id': 1, 'height': 10.0}\n",
"{'name': 'bob', 'is_active': False, 'id': 2, 'height': 5.7}\n",
"{'name': 'jane', 'is_active': True, 'id': 3, 'height': 2.4}\n"
"{'name': 'Adam Smith', 'is_active': True, 'id': 1, 'height': 10.0}\n",
"{'name': 'Jane Doe', 'is_active': True, 'id': 3, 'height': 2.4}\n",
"{'name': 'Bob Johnson', 'is_active': False, 'id': 2, 'height': 5.7}\n",
"Filter: {'$and': [{'name': {'$contains': 'bob'}}, {'name': {'$contains': 'johnson'}}]}\n",
"{'name': 'Bob Johnson', 'is_active': False, 'id': 2, 'height': 5.7}\n"
]
}
],
@@ -792,6 +874,12 @@
"\n",
"advanced_filter = {\"$or\": [{\"id\": 1}, {\"id\": 2}, {\"id\": 3}]}\n",
"print(f\"Filter: {advanced_filter}\")\n",
"print_filter_result(db.similarity_search(\"just testing\", k=5, filter=advanced_filter))\n",
"\n",
"advanced_filter = {\n",
" \"$and\": [{\"name\": {\"$contains\": \"bob\"}}, {\"name\": {\"$contains\": \"johnson\"}}]\n",
"}\n",
"print(f\"Filter: {advanced_filter}\")\n",
"print_filter_result(db.similarity_search(\"just testing\", k=5, filter=advanced_filter))"
]
},
@@ -804,13 +892,10 @@
},
{
"cell_type": "code",
"execution_count": 29,
"execution_count": 23,
"metadata": {},
"outputs": [],
"source": [
"from langchain.memory import ConversationBufferMemory\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"# Access the vector DB with a new table\n",
"db = HanaDB(\n",
" connection=connection,\n",
@@ -837,7 +922,7 @@
},
{
"cell_type": "code",
"execution_count": 30,
"execution_count": 24,
"metadata": {},
"outputs": [],
"source": [
@@ -874,6 +959,8 @@
"outputs": [],
"source": [
"from langchain.chains import ConversationalRetrievalChain\n",
"from langchain.memory import ConversationBufferMemory\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo\")\n",
"memory = ConversationBufferMemory(\n",
@@ -898,7 +985,7 @@
},
{
"cell_type": "code",
"execution_count": 32,
"execution_count": 26,
"metadata": {},
"outputs": [
{
@@ -907,7 +994,7 @@
"text": [
"Answer from LLM:\n",
"================\n",
"The United States has set up joint patrols with Mexico and Guatemala to catch more human traffickers. This collaboration is part of the efforts to address immigration issues and secure the borders in the region.\n",
"The United States has set up joint patrols with Mexico and Guatemala to catch more human traffickers at the border. This collaborative effort aims to improve border security and combat illegal activities such as human trafficking.\n",
"================\n",
"Number of used source document chunks: 5\n"
]
@@ -954,7 +1041,7 @@
},
{
"cell_type": "code",
"execution_count": 34,
"execution_count": 28,
"metadata": {},
"outputs": [
{
@@ -963,12 +1050,12 @@
"text": [
"Answer from LLM:\n",
"================\n",
"Mexico and Guatemala are involved in joint patrols to catch human traffickers.\n"
"Countries like Mexico and Guatemala are participating in joint patrols to catch human traffickers. The United States is also working with partners in South and Central America to host more refugees and secure their borders. Additionally, the U.S. is working with twenty-seven members of the European Union, as well as countries like France, Germany, Italy, the United Kingdom, Canada, Japan, Korea, Australia, New Zealand, and Switzerland.\n"
]
}
],
"source": [
"question = \"What about other countries?\"\n",
"question = \"How many casualties were reported after that?\"\n",
"\n",
"result = qa_chain.invoke({\"question\": question})\n",
"print(\"Answer from LLM:\")\n",
@@ -996,7 +1083,7 @@
},
{
"cell_type": "code",
"execution_count": 35,
"execution_count": 29,
"metadata": {},
"outputs": [
{
@@ -1005,7 +1092,7 @@
"[]"
]
},
"execution_count": 35,
"execution_count": 29,
"metadata": {},
"output_type": "execute_result"
}
@@ -1038,7 +1125,7 @@
},
{
"cell_type": "code",
"execution_count": 36,
"execution_count": 30,
"metadata": {},
"outputs": [
{
@@ -1101,7 +1188,7 @@
},
{
"cell_type": "code",
"execution_count": 39,
"execution_count": 32,
"metadata": {},
"outputs": [
{
@@ -1111,7 +1198,7 @@
"None\n",
"Some other text\n",
"{\"start\": 400, \"end\": 450, \"doc_name\": \"other.txt\"}\n",
"<memory at 0x7f5edcb18d00>\n"
"<memory at 0x110f856c0>\n"
]
}
],
@@ -1168,7 +1255,7 @@
},
{
"cell_type": "code",
"execution_count": 40,
"execution_count": 33,
"metadata": {},
"outputs": [
{
@@ -1176,9 +1263,9 @@
"output_type": "stream",
"text": [
"--------------------------------------------------------------------------------\n",
"Some other text\n",
"Some more text\n",
"--------------------------------------------------------------------------------\n",
"Some more text\n"
"Some other text\n"
]
}
],
@@ -1214,7 +1301,7 @@
},
{
"cell_type": "code",
"execution_count": 41,
"execution_count": 34,
"metadata": {},
"outputs": [
{
@@ -1224,7 +1311,7 @@
"Filters on this value are very performant\n",
"Some other text\n",
"{\"start\": 400, \"end\": 450, \"doc_name\": \"other.txt\", \"CUSTOMTEXT\": \"Filters on this value are very performant\"}\n",
"<memory at 0x7f5edcb193c0>\n"
"<memory at 0x110f859c0>\n"
]
}
],
@@ -1291,7 +1378,7 @@
},
{
"cell_type": "code",
"execution_count": 42,
"execution_count": 35,
"metadata": {},
"outputs": [
{
@@ -1299,9 +1386,9 @@
"output_type": "stream",
"text": [
"--------------------------------------------------------------------------------\n",
"Some other text\n",
"Some more text\n",
"--------------------------------------------------------------------------------\n",
"Some more text\n"
"Some other text\n"
]
}
],
@@ -1330,9 +1417,9 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": "dev311",
"language": "python",
"name": "python3"
"name": "dev311"
},
"language_info": {
"codemirror_mode": {
@@ -1344,7 +1431,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.14"
"version": "3.11.9"
}
},
"nbformat": 4,

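The notebook diff above exercises HanaDB's metadata-filter operators (`$ne`, `$between`, `$like`, `$contains`, `$and`, `$or`, …). As a rough intuition aid, here is a pure-Python model of those semantics. This is an illustration only: HanaDB compiles these operators to SQL predicates, and the corner cases below (e.g. `$like` being case-sensitive while `$contains` is a case-insensitive whole-token match) are assumptions inferred from the printed outputs in the diff, not the library's implementation.

```python
import re

def matches(meta: dict, flt: dict) -> bool:
    """Evaluate a HanaDB-style metadata filter against one metadata dict
    (a sketch of the semantics, not the actual SQL translation)."""
    for key, cond in flt.items():
        if key == "$and":
            if not all(matches(meta, c) for c in cond):
                return False
        elif key == "$or":
            if not any(matches(meta, c) for c in cond):
                return False
        elif not isinstance(cond, dict):
            # Bare value means equality, e.g. {"id": 1}
            if meta.get(key) != cond:
                return False
        else:
            val = meta.get(key)
            for op, arg in cond.items():
                ok = {
                    "$eq": lambda: val == arg,
                    "$ne": lambda: val != arg,
                    "$gt": lambda: val > arg,
                    "$gte": lambda: val >= arg,
                    "$lt": lambda: val < arg,
                    "$lte": lambda: val <= arg,
                    "$between": lambda: arg[0] <= val <= arg[1],
                    "$in": lambda: val in arg,
                    "$nin": lambda: val not in arg,
                    # SQL LIKE: "%" is a multi-character wildcard,
                    # case-sensitive (so "a%" does not match "Adam Smith")
                    "$like": lambda: re.fullmatch(
                        ".*".join(map(re.escape, arg.split("%"))), val
                    ) is not None,
                    # Keyword search: every token of the argument must
                    # appear as a whole token, ignoring case ("bo" does
                    # not match "Bob Johnson", but "bob" does)
                    "$contains": lambda: set(arg.lower().split())
                    <= set(val.lower().split()),
                }[op]()
                if not ok:
                    return False
    return True

docs = [
    {"name": "Adam Smith", "is_active": True, "id": 1, "height": 10.0},
    {"name": "Bob Johnson", "is_active": False, "id": 2, "height": 5.7},
    {"name": "Jane Doe", "is_active": True, "id": 3, "height": 2.4},
]
hits = [d["id"] for d in docs if matches(d, {"name": {"$contains": "bob"}})]
```

Running the filters from the notebook's cells through this model reproduces the printed results (e.g. `hits` above contains only id 2), which is a useful sanity check when composing `$and`/`$or` trees before sending them to the database.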

@@ -156,7 +156,7 @@
"from langchain_core.documents import Document\n",
"\n",
"document_1 = Document(\n",
" page_content=\"I had chocalate chip pancakes and scrambled eggs for breakfast this morning.\",\n",
" page_content=\"I had chocolate chip pancakes and scrambled eggs for breakfast this morning.\",\n",
" metadata={\"source\": \"tweet\"},\n",
")\n",
"\n",
@@ -356,7 +356,7 @@
"output_type": "stream",
"text": [
"* [SIM=0.595] The weather forecast for tomorrow is cloudy and overcast, with a high of 62 degrees. [{'source': 'news'}]\n",
"* [SIM=0.212] I had chocalate chip pancakes and scrambled eggs for breakfast this morning. [{'source': 'tweet'}]\n",
"* [SIM=0.212] I had chocolate chip pancakes and scrambled eggs for breakfast this morning. [{'source': 'tweet'}]\n",
"* [SIM=0.118] Wow! That was an amazing movie. I can't wait to see it again. [{'source': 'tweet'}]\n"
]
}
@@ -387,7 +387,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"* I had chocalate chip pancakes and scrambled eggs for breakfast this morning. [{'source': 'tweet'}]\n",
"* I had chocolate chip pancakes and scrambled eggs for breakfast this morning. [{'source': 'tweet'}]\n",
"* Wow! That was an amazing movie. I can't wait to see it again. [{'source': 'tweet'}]\n",
"* Building an exciting new project with LangChain - come check it out! [{'source': 'tweet'}]\n",
"* LangGraph is the best framework for building stateful, agentic applications! [{'source': 'tweet'}]\n"


@@ -89,7 +89,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": null,
"id": "39f3ce3e",
"metadata": {},
"outputs": [],
@@ -118,15 +118,13 @@
" language: str = Field(description=\"The language the text is written in\")\n",
"\n",
"\n",
"# LLM\n",
"llm = ChatOpenAI(temperature=0, model=\"gpt-4o-mini\").with_structured_output(\n",
" Classification\n",
")"
"# Structured LLM\n",
"structured_llm = llm.with_structured_output(Classification)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": null,
"id": "5509b6a6",
"metadata": {},
"outputs": [
@@ -144,7 +142,7 @@
"source": [
"inp = \"Estoy increiblemente contento de haberte conocido! Creo que seremos muy buenos amigos!\"\n",
"prompt = tagging_prompt.invoke({\"input\": inp})\n",
"response = llm.invoke(prompt)\n",
"response = structured_llm.invoke(prompt)\n",
"\n",
"response"
]
@@ -159,7 +157,7 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": null,
"id": "9154474c",
"metadata": {},
"outputs": [
@@ -177,7 +175,7 @@
"source": [
"inp = \"Estoy muy enojado con vos! Te voy a dar tu merecido!\"\n",
"prompt = tagging_prompt.invoke({\"input\": inp})\n",
"response = llm.invoke(prompt)\n",
"response = structured_llm.invoke(prompt)\n",
"\n",
"response.model_dump()"
]


@@ -145,15 +145,12 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": null,
"id": "a5e490f6-35ad-455e-8ae4-2bae021583ff",
"metadata": {},
"outputs": [],
"source": [
"from typing import Optional\n",
"\n",
"from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"from pydantic import BaseModel, Field\n",
"\n",
"# Define a custom prompt to provide instructions and any additional context.\n",
"# 1) You can add examples into the prompt template to improve extraction quality\n",

Some files were not shown because too many files have changed in this diff.