Compare commits

...

237 Commits

Author SHA1 Message Date
Vadym Barda
23fa70f328 core[patch]: release 0.3.44 (#30236) 2025-03-11 18:59:02 -04:00
Vadym Barda
c7842730ef core[patch]: support single-node subgraphs and put subgraph nodes under the respective subgraphs (#30234) 2025-03-11 18:55:45 -04:00
Dharshan A
81d1653a30 docs: Fix typo in Generating Examples section of few-shot prompting doc (#30219)
2025-03-11 09:44:20 -04:00
ccurme
27d86d7bc8 infra: update release workflow (#30207)
Fix condition
2025-03-10 17:53:03 -04:00
ccurme
70fc0b8363 infra: update release workflow (#30203) 2025-03-10 20:18:33 +00:00
ccurme
62c570dd77 standard-tests, openai: bump core (#30202) 2025-03-10 19:22:24 +00:00
ccurme
38420ee76e docs: add note on Deepseek R1 (#30201) 2025-03-10 15:17:20 -04:00
ccurme
f896e701eb deepseek: install local langchain-tests in test deps (#30198) 2025-03-10 16:58:17 +00:00
ccurme
7b8f266039 infra: additional testing on core release (#30180)
Here we add a job to the release workflow that, when releasing
`langchain-core`, tests prior published versions of select packages
against the new version of core. We limit the testing to the most recent
published versions of langchain-anthropic and langchain-openai.

This is designed to catch backward-incompatible updates to core. We
sometimes update core and downstream packages simultaneously, so there
may not be any commit in the history at which tests would fail. So
although core and latest downstream packages could be consistent, we can
benefit from testing prior versions of downstream packages against core.

I tested the workflow by simulating a [breaking
change](d7287248cf)
in core and running it with publishing steps disabled:
https://github.com/langchain-ai/langchain/actions/runs/13741876345. The
workflow correctly caught the issue.
2025-03-10 08:59:59 -04:00
Hugh Gao
aa6dae4a5b community: Remove the system message count limit for ChatTongyi. (#30192)
## Description
The models in DashScope support multiple SystemMessage. Here is the
[Doc](https://bailian.console.aliyun.com/model_experience_center/text#/model-market/detail/qwen-long?tabKey=sdk),
and the example code on the document page:
```python
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),  # If you have not set the environment variable, replace this with your API key
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",  # DashScope service base_url
)
# Initialize the messages list
completion = client.chat.completions.create(
    model="qwen-long",
    messages=[
        {'role': 'system', 'content': 'You are a helpful assistant.'},
        # Replace 'file-fe-xxx' with the file-id used in your actual conversation.
        {'role': 'system', 'content': 'fileid://file-fe-xxx'},
        {'role': 'user', 'content': '这篇文章讲了什么?'}  # "What does this article talk about?"
    ],
    stream=True,
    stream_options={"include_usage": True}
)

full_content = ""
for chunk in completion:
    if chunk.choices and chunk.choices[0].delta.content:
        # Append the streamed output content
        full_content += chunk.choices[0].delta.content
        print(chunk.model_dump())

print(full_content)
```
Tip: The example code uses the OpenAI SDK, but the document says it also
supports the DashScope API; I tested it, and it works.
```
Is the Dashscope SDK invocation method compatible?

Yes, the Dashscope SDK remains compatible for model invocation. However, file uploads and file-ID retrieval are currently only supported via the OpenAI SDK. The file-ID obtained through this method is also compatible with Dashscope for model invocation.
```
2025-03-10 08:58:40 -04:00
Dharshan A
34e94755af Fix typo in astream_events in streaming docs (#30195)
2025-03-10 08:56:07 -04:00
ccurme
67aff1648b community: Add OpenGradient integration (Toolkit) (#30190)
Commandeering https://github.com/langchain-ai/langchain/pull/30135

---------

Co-authored-by: kylexqian <kylexqian@gmail.com>
2025-03-09 18:08:07 -04:00
ccurme
b209d46eb3 mistral[patch]: set global ssl context (#30189) 2025-03-09 21:27:41 +00:00
Vijay Selvaraj
df459d0d5e community: add Valthera integration (#30105)
```markdown
**Description:**  
This PR integrates Valthera into LangChain, introducing a framework designed to send highly personalized nudges via an LLM agent, modeled after Dr. BJ Fogg's Behavior Model. This integration includes:

- Custom data connectors for HubSpot, PostHog, and Snowflake.
- A unified data aggregator that consolidates user data.
- Scoring configurations to compute motivation and ability scores.
- A reasoning engine that determines the appropriate user action.
- A trigger generator to create personalized messages for user engagement.

**Issue:**  
N/A

**Dependencies:**  
N/A

**Twitter handle:**  
- `@vselvarajijay`

**Tests and Docs:**  
- `docs/docs/integrations/tools/valthera` 
- `https://github.com/valthera/langchain-valthera/tree/main/tests`

```

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-09 21:19:08 +00:00
ccurme
3823daa0b9 cli: update integration doc template for tools (#30188)
Chain example -> langgraph agent
2025-03-09 21:14:43 +00:00
David Skarbrevik
0d7cdf290b langchain: clean pyproject ruff section (#30070)
## Changes
- `/Makefile` - added extra step to `make format` and `make lint` to
ensure the lint dep-group is installed before running ruff (documented
in issue #30069)

- `/pyproject.toml` - removed ruff exceptions for files that no longer
exist or no longer create formatting/linting errors in ruff

## Testing

**running `make format` on this branch/PR**
<img width="435" alt="image"
src="https://github.com/user-attachments/assets/82751788-f44e-4591-98ed-95ce893ce623"
/>

## Issue

fixes #30069

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-09 15:06:02 -04:00
Jonathan Feng
911accf733 docs: add contextualai documentation (#30050)
**Description:** adds ContextualAI's `langchain-contextual` package's
documentation


---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-09 02:43:13 +00:00
Bharat
b9746a6910 fixes #30182: update tool names to match OpenAI function name pattern (#30183)
The OpenAI API requires function names to match the pattern
'^[a-zA-Z0-9_-]+$'. This updates the JIRA toolkit's tool names to use
underscores instead of spaces to comply with this requirement and
prevent BadRequestError when using the tools with OpenAI functions.

Error fixed:
```
File "langgraph-bug-fix/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1023, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "Invalid 'tools[0].function.name': string does not match pattern. Expected a string that matches the pattern '^[a-zA-Z0-9_-]+$'.", 'type': 'invalid_request_error', 'param': 'tools[0].function.name', 'code': 'invalid_value'}}
During task with name 'agent' and id 'aedd7537-e8d5-6678-d0c5-98129586d3ac'
```

Issue: #30182
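
The renaming amounts to producing names that satisfy the pattern; a minimal sketch (the helper is illustrative, not the toolkit code):

```python
import re

# OpenAI requires tool/function names matching this pattern.
OPENAI_NAME_PATTERN = re.compile(r"^[a-zA-Z0-9_-]+$")


def to_valid_function_name(tool_name: str) -> str:
    # e.g. "Create Issue" -> "create_issue"
    candidate = tool_name.strip().lower().replace(" ", "_")
    if not OPENAI_NAME_PATTERN.fullmatch(candidate):
        raise ValueError(f"Cannot derive a valid function name from {tool_name!r}")
    return candidate
```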
2025-03-08 20:48:25 -05:00
ccurme
cee0fecb08 docs: update package registry counts (#30181) 2025-03-08 20:37:59 -05:00
William FH
bac3a28e70 Flush (#30157) 2025-03-07 16:32:15 -08:00
ccurme
a7ab5e8372 community[patch]: ChatPerplexity: track usage metadata (#30175) 2025-03-07 23:25:05 +00:00
Vadym Barda
6c05d4b153 docs[patch]: update trim messages wording (#30174) 2025-03-07 17:05:51 -05:00
ccurme
1c993b921c core[patch]: release 0.3.43 (#30173) 2025-03-07 21:56:00 +00:00
ccurme
9893e5cb80 core[patch]: catch structured_output_format (#30172)
Change to `ls_structured_output_format` was not backward-compatible with
older versions of integration packages.
2025-03-07 16:50:06 -05:00
ccurme
88dc479c4a docs: update model used in ChatGroq (#30170)
`mixtral-8x7b-32768` is being retired on March 20.
2025-03-07 16:29:05 -05:00
ccurme
33a3510243 core[patch]: export ArgsSchema (#30169)
This is needed for type hints

see: https://github.com/langchain-ai/langchain/pull/30167
2025-03-07 20:43:05 +00:00
OysterMax
01317fde21 DOC: type checker complains on args_schema type hint when inheriting from BaseTool (#30167)
- **Description:** update docs to suppress type checker complaints about the
args_schema type hint when inheriting from BaseTool
- **Issue:** #30142 
- **Dependencies:** N/A
- **Twitter handle:** N/A

2025-03-07 15:41:53 -05:00
ccurme
17507c9ba6 groq[patch]: release 0.2.5 (#30168) 2025-03-07 20:25:51 +00:00
andyzhou1982
9e863c89d2 add JiebaLinkExtractor for chinese doc extracting (#30150)
- [ ] **PR title**: "community: chinese doc extracting"


- [ ] **PR message**: 
- **Description:** add jieba_link_extractor.py for chinese doc
extracting
    - **Dependencies:** jieba


- [ ] **Add tests and docs**:
  /doc/doc/integrations/providers/jieba.md
  /doc/doc/integrations/vectorstores/jieba_link_extractor.ipynb
  /libs/packages.yml

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-07 20:21:46 +00:00
ccurme
74e7772a5f groq[patch]: warn if model is not specified (#30161)
Groq is retiring `mixtral-8x7b-32768`, which is currently the default
model for ChatGroq, on March 20. Here we emit a warning if the model is
not specified explicitly.

A version 0.3.0 will be released ahead of March 20 that removes the
default altogether.
2025-03-07 15:21:13 -05:00
Ioannis Bakagiannis
3444e587ee docs: Integration Update - ADS4GPTs (#30153)
docs: New integration for LangChain - ads4gpts-langchain

Description: Tools and a Toolkit for native agentic integration with
ADS4GPTs in LangChain, to help applications monetize through
advertising.

Twitter handle: @ads4gpts

Co-authored-by: knitlydevaccount <loom+github@knitly.app>
2025-03-07 14:35:44 -05:00
ccurme
3c258194ae tests[patch]: release 0.3.14 (#30165) 2025-03-07 18:34:05 +00:00
ccurme
34638ccfae openai[patch]: release 0.3.8 (#30164) 2025-03-07 18:26:40 +00:00
ccurme
4e5058f29c core[patch]: release 0.3.42 (#30163) 2025-03-07 18:14:45 +00:00
Eugene Yurtsev
894fd63a61 cli: release 0.0.36 (#30159)
Bump for 0.0.36
2025-03-07 13:05:40 -05:00
ccurme
806211475a core[patch]: update structured output tracing (#30123)
- Trace JSON schema in `options`
- Rename to `ls_structured_output_format`
2025-03-07 13:05:25 -05:00
Jakub Kopecký
d0f5bcda29 docs: fix apify actors notebook main heading text (#30040)
- **Description:** Fix Apify Actors tool notebook main heading text so
there is an actual description instead of "Overview" in the tool
integration description on [LangChain tools integration
page](https://python.langchain.com/docs/integrations/tools/#all-tools).
2025-03-07 12:58:10 -05:00
ccurme
230876a7c5 anthropic[patch]: add PDF input example to API reference (#30156) 2025-03-07 14:19:08 +00:00
ccurme
5c7440c201 docs: update configuration how-to guide (#30139) 2025-03-06 11:51:48 -05:00
joeconstantino
022ff9eead Tableau docs for new datasource qa tool (#30125)
- **Description:** a notebook showing langchain and langgraph agents using
the new langchain_tableau tool
- **Twitter handle:** @joe_constantin0

---------

Co-authored-by: Joe Constantino <joe@constantino.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-06 14:58:56 +00:00
ccurme
52b0570bec core, openai, standard-tests: improve OpenAI compatibility with Anthropic content blocks (#30128)
- Support thinking blocks in core's `convert_to_openai_messages` (pass
through instead of error)
- Ignore thinking blocks in ChatOpenAI (instead of error)
- Support Anthropic-style image blocks in ChatOpenAI

---

Standard integration tests include a `supports_anthropic_inputs`
property which is currently enabled only for tests on `ChatAnthropic`.
This test enforces compatibility with message histories of the form:
```
- system message
- human message
- AI message with tool calls specified only through `tool_use` content blocks
- human message containing `tool_result` and an additional `text` block
```
It additionally checks support for Anthropic-style image inputs if
`supports_image_inputs` is enabled.

Here we change this test, such that if you enable
`supports_anthropic_inputs`:
- You support AI messages with text and `tool_use` content blocks
- You support Anthropic-style image inputs (if `supports_image_inputs`
is enabled)
- You support thinking content blocks.

That is, we add a test case for thinking content blocks, but we also
remove the requirement of handling tool results within HumanMessages
(motivated by existing agent abstractions, which should all return
ToolMessage). We move that requirement to a ChatAnthropic-specific test.
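
For illustration, the enforced message history can be built with LangChain message types roughly as follows (block contents are paraphrased from the Anthropic format, not copied from the test suite):

```python
from langchain_core.messages import AIMessage, HumanMessage, SystemMessage

messages = [
    SystemMessage("You are a helpful assistant."),
    HumanMessage("What is 3 to the power of 3?"),
    # AI message whose tool call appears only as a `tool_use` content block.
    AIMessage(
        content=[
            {"type": "text", "text": "Let me compute that."},
            {"type": "tool_use", "id": "toolu_01", "name": "power", "input": {"base": 3, "exp": 3}},
        ],
        tool_calls=[{"id": "toolu_01", "name": "power", "args": {"base": 3, "exp": 3}}],
    ),
    # Human message carrying a `tool_result` plus an additional `text` block.
    HumanMessage(
        content=[
            {"type": "tool_result", "tool_use_id": "toolu_01", "content": "27"},
            {"type": "text", "text": "Now cube that result."},
        ]
    ),
]
```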
2025-03-06 09:53:14 -05:00
Pat Patterson
b3dc66f7a3 community: fix AttributeError when creating LanceDB vectorstore (#30127)
**Description:**

This PR adds a call to `guard_import()` to fix an AttributeError raised
when creating LanceDB vectorstore instance with an existing LanceDB
table.

**Issue:**

This PR fixes issue #30124.

**Dependencies:**

No additional dependencies.

**Twitter handle:**

[@metadaddy](https://x.com/metadaddy), but I spend more time at
[@metadaddy.net](https://bsky.app/profile/metadaddy.net) these days.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-05 23:04:38 +00:00
Hugh Gao
9b7b8e4a1a community: make DashScope models support Partial Mode for text continuation. (#30108)
## Description
make DashScope models support Partial Mode for text continuation.

For text continuation in ChatTongYi, it supports text continuation with
a prefix by adding a "partial" argument in AIMessage. The document is
[Partial Mode
](https://help.aliyun.com/zh/model-studio/user-guide/partial-mode?spm=a2c4g.11186623.help-menu-2400256.d_1_0_0_8.211e5b77KMH5Pn&scm=20140722.H_2862210._.OR_help-T_cn~zh-V_1).
The API example is:
```py
import os
import dashscope

messages = [{
    "role": "user",
    "content": "请对“春天来了,大地”这句话进行续写,来表达春天的美好和作者的喜悦之情"
},
{
    "role": "assistant",
    "content": "春天来了,大地",
    "partial": True
}]
response = dashscope.Generation.call(
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    model='qwen-plus',
    messages=messages,
    result_format='message',  
)

print(response.output.choices[0].message.content)
```

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-05 16:22:14 +00:00
黑牛
f0153414d5 Add request_id field to improve request tracking and debugging (for Tongyi model) (#30110)
- **Description**: Added the request_id field to the check_response
function to improve request tracking and debugging, applicable for the
Tongyi model.
- **Issue**: None
- **Dependencies**: None
- **Twitter handle**: None

- **Add tests and docs**: None

- **Lint and test**: Ran `make format`, `make lint`, and `make test` to
ensure the code meets formatting and testing requirements.
2025-03-05 11:03:47 -05:00
Manthan Surkar
1ee8aceaee community: fix Jira API wrapper failing initialization with cloud param (#30117)
### **Description**  
Converts the boolean `jira_cloud` parameter in the Jira API Wrapper to a
string before initializing the Jira Client. Also adds tests for the
same.

### **Issue**  
[Jira API Wrapper
Bug](8abb65e138/libs/community/langchain_community/utilities/jira.py (L47))

```python
jira_cloud_str = get_from_dict_or_env(values, "jira_cloud", "JIRA_CLOUD")
jira_cloud = jira_cloud_str.lower() == "true"
```

The above code has a bug where the value of `"jira_cloud"` is a boolean.
If it is passed, calling `.lower()` on a boolean raises an error.
Additionally, `False` cannot be passed explicitly since
`get_from_dict_or_env` falls back to environment variables.

Relevant code in `langchain_core`:  

[Source](https://github.com/thesmallstar/langchain/blob/master/.venv/lib/python3.13/site-packages/langchain_core/utils/env.py#L46)

```python
if isinstance(key, str) and key in data and data[key]:  # Here, data[key] is False
```

This PR fixes both issues.

### **Twitter Handle**  
[Manthan Surkar](https://x.com/manthan_surkar)
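
The shape of the fix is a coercion that tolerates both types; a minimal sketch (not the merged implementation):

```python
def coerce_jira_cloud(value) -> bool:
    # Booleans pass through unchanged; strings (e.g. from the JIRA_CLOUD
    # environment variable) are compared case-insensitively to "true".
    if isinstance(value, bool):
        return value
    return str(value).strip().lower() == "true"
```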
2025-03-05 10:49:25 -05:00
Adrián Panella
c599ba47d5 core(mermaid): fix error when 3+ subgraph levels (#29970) 2025-03-04 13:27:49 -05:00
Alexander Henlein
417efa30a6 docs: add Taiga Tool integration docs (#30042)
This PR adds documentation for the langchain-taiga Tool integration,
including an example notebook at
'docs/docs/integrations/tools/taiga.ipynb' and updates to
'libs/packages.yml' to track the new package.

Issue:
N/A

Dependencies:
None

Twitter handle:
N/A

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-03-04 17:51:20 +00:00
Mathias Marciano
5f0102242a Fixed an issue with the OpenAI Assistant's 'retrieval' tool and adding support for the 'attachments' parameter (#30006)
PR Title:
langchain: add attachments support in OpenAIAssistantRunnable

PR Description:
This PR fixes an issue with the "retrieval" tool (internally named
"file_search") in the OpenAI Assistant by adding support for the
"attachments" parameter in the invoke method. This change allows files
to be linked to messages when they are inserted into threads, which is
essential for utilizing OpenAI's Retrieval Augmented Generation (RAG)
feature.

Issue:
N/A

Dependencies:
None

Twitter handle:
N/A

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-04 17:34:11 +00:00
Philippe PRADOS
4710c1fa8c community[minor]: Fix regular expression in visualize and outlines modules. (#30002)
Fix invalid escape characters
2025-03-04 12:23:48 -05:00
ccurme
577c0d0715 community[patch]: release 0.3.19 (#30104) 2025-03-04 16:12:03 +00:00
ccurme
ba5ddb218f anthropic[patch]: release 0.3.9 (#30103) 2025-03-04 10:53:55 -05:00
ccurme
9383a0536a tests[patch]: release 0.3.13 (#30102) 2025-03-04 10:53:43 -05:00
ccurme
fb16c25920 langchain[patch]: release 0.3.20 (#30101) 2025-03-04 15:47:27 +00:00
ccurme
692a68bf1c core[patch]: release 0.3.41 (#30100) 2025-03-04 15:08:57 +00:00
ccurme
484d945500 community[patch]: remove numpy cap for python < 3.12 (#30084) 2025-03-04 09:46:41 -05:00
Cheney Zhang
7eb6dde720 docs: refine milvus server description (#30071)
Document refinement: optimize the Milvus server description. The
descriptions of "milvus standalone" and "milvus server" were confusing,
so I clarified them with a detailed description.

Signed-off-by: ChengZi <chen.zhang@zilliz.com>
2025-03-04 09:39:54 -05:00
ZhangShenao
8575d7491f [Doc] Improve api doc (#30073)
- Update api_doc for `BaseMessage`
- add static method decorator for `retry_runnable`
2025-03-04 09:39:07 -05:00
Antonio Pisani
9a11e0edcd docs:Add SWI-Prolog for langchain-prolog (#30081)
Some users have complained that it is not clear that SWI-Prolog must be
installed before installing langchain-prolog.
2025-03-04 09:12:47 -05:00
Samuel Dion-Girardeau
ccb64e9f4f docs: Fix typo in code samples for max_tokens_for_prompt (#30088)
- **Description:** Fix typo in code samples for max_tokens_for_prompt.
Code blocks had singular "token" but the method has plural "tokens".
- **Issue:** N/A
- **Dependencies:** N/A
- **Twitter handle:** N/A
2025-03-04 09:11:21 -05:00
ccurme
33354f984f docs: update contributing docs (#30064) 2025-03-01 17:36:35 -05:00
ccurme
c7cd666a17 docs: add to vercel overrides (#30063) 2025-03-01 17:21:15 -05:00
ArrayPD
c671d54c6f core: make with_alisteners() example workable. (#30059)
**Description:**

Five fixes to the example for `with_alisteners()` in
libs/core/langchain_core/runnables/base.py, replacing the incoherent
example output with a working example's output.

1. SyntaxError: unterminated string literal
    print(f"on start callback starts at {format_t(time.time())}
    corrected to
    print(f"on start callback starts at {format_t(time.time())}")

2. SyntaxError: unterminated string literal
    print(f"on end callback starts at {format_t(time.time())}
    corrected to
    print(f"on end callback starts at {format_t(time.time())}")

3. NameError: name 'Runnable' is not defined
    Fixed with
    from langchain_core.runnables import Runnable

4. NameError: name 'asyncio' is not defined
    Fixed with
    import asyncio

5. NameError: name 'format_t' is not defined
    Implemented format_t() as
    from datetime import datetime, timezone

    def format_t(timestamp: float) -> str:
        return datetime.fromtimestamp(timestamp, tz=timezone.utc).isoformat()
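
Taken together, fixes 1-5 imply a preamble along these lines (a consolidated sketch of the corrected example, not the docstring verbatim):

```python
import asyncio
import time
from datetime import datetime, timezone

from langchain_core.runnables import Runnable


def format_t(timestamp: float) -> str:
    # Render a UNIX timestamp as an ISO-8601 string in UTC.
    return datetime.fromtimestamp(timestamp, tz=timezone.utc).isoformat()


print(f"on start callback starts at {format_t(time.time())}")
```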
2025-03-01 15:39:02 -05:00
Chandra Nandan
eca8c5515d docs: sidebar-content-render (#30061) (#30062)
- [x] **PR title**: "docs: added proper width to sidebar content"

- [x] **PR message**: added proper width to sidebar content
- **Description:** While accessing the [LangChain Python API
Reference](https://python.langchain.com/api_reference/index.html) the
sidebar content does not display correctly.
    - **Issue:** Follow-up to #30061
    - **Dependencies:** None
    - **Twitter handle:** https://x.com/implicitdefcnc
2025-03-01 15:30:41 -05:00
cold-eye
7c175e3fda Update ascend.py (#30060)
add batch_size to fix OOM when embedding large amounts of text

2025-03-01 14:10:41 -05:00
ccurme
3b066dc005 anthropic[patch]: allow structured output when thinking is enabled (#30047)
Structured output will currently always raise a BadRequestError when
Claude 3.7 Sonnet's `thinking` is enabled, because we rely on forced
tool use for structured output and this feature is not supported when
`thinking` is enabled.

Here we:
- Emit a warning if `with_structured_output` is called when `thinking`
is enabled.
- Raise `OutputParserException` if no tool calls are generated.

This is arguably preferable to raising an error in all cases.

```python
from langchain_anthropic import ChatAnthropic
from pydantic import BaseModel


class Person(BaseModel):
    name: str
    age: int


llm = ChatAnthropic(
    model="claude-3-7-sonnet-latest",
    max_tokens=5_000,
    thinking={"type": "enabled", "budget_tokens": 2_000},
)
structured_llm = llm.with_structured_output(Person)  # <-- this generates a warning
```

```python
structured_llm.invoke("Alice is 30.")  # <-- works
```

```python
structured_llm.invoke("Hello!")  # <-- raises OutputParserException
```
2025-02-28 14:44:11 -05:00
ccurme
f8ed5007ea anthropic, mistral: return model_name in response metadata (#30048)
Took a "census" of models supported by init_chat_model-- of those that
return model names in response metadata, these were the only two that
had it keyed under `"model"` instead of `"model_name"`.
2025-02-28 18:56:05 +00:00
Christophe Bornet
9e6ffd1264 core: Add ruff rules PTH (pathlib) (#29338)
See https://docs.astral.sh/ruff/rules/#flake8-use-pathlib-pth

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-02-28 13:22:20 -05:00
TheSongg
86b364de3b Add asynchronous generate interface (#30001)
- [ ] **PR title**: [langchain_community.llms.xinference]: Add
asynchronous generate interface

- [ ] **PR message**: The asynchronous generate interface support stream
data and non-stream data.
          
        chain = prompt | llm
        async for chunk in chain.astream(input=user_input):
            yield chunk


- [ ] **Add tests and docs**:

       from langchain_community.llms import Xinference
       from langchain.prompts import PromptTemplate

       llm = Xinference(
server_url="http://0.0.0.0:9997", # replace your xinference server url
model_uid={model_uid} # replace model_uid with the model UID return from
launching the model
           stream = True
            )
prompt = PromptTemplate(input=['country'], template="Q: where can we
visit in the capital of {country}? A:")
       chain = prompt | llm
       async for chunk in chain.astream(input=user_input):
           yield chunk
2025-02-28 12:32:44 -05:00
Cheney Zhang
a1897ca621 docs: refine milvus doc with hybrid-search (#30037)
Milvus Document refinement: add more detailed hybrid search description
with full-text search introduction here.

Signed-off-by: ChengZi <chen.zhang@zilliz.com>
2025-02-28 10:22:53 -05:00
Tiest van Gool
476cd26f57 Add xAI to ChatModelTabs drop down (#30028)
- [ ] **PR title**: "docs: add xAI to ChatModelTabs"

- [ ] **PR message**:
- **Description:** Added `ChatXAI` to `ChatModelTabs` dropdown to
improve visibility of xAI chat models (e.g., "grok-2", "grok-3").
    - **Issue:** Follow-up to #30010
    - **Dependencies:** none
    - **Twitter handle:** @tiestvangool

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-28 09:08:12 -05:00
Fakai Zhao
f07338d2bf Implementing the MMR algorithm for OLAP vector storage (#30033)
-  **Implementing the MMR algorithm for OLAP vector storage**:
  - Supports the Apache Doris and StarRocks OLAP databases.
- Example: "vectorstore.as_retriever(search_type="mmr",
search_kwargs={"k": 10})"

---------

Co-authored-by: fakzhao <fakzhao@cisco.com>
2025-02-28 08:50:22 -05:00
Daniel Rauber
186cd7f1a1 community: PlaywrightURLLoader should wait for page load event before attempting to extract data (#30043)
## Description

The PlaywrightURLLoader should wait for a page to be loaded before
attempting to extract data.
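
For reference, waiting on the load event in Playwright's sync API looks like this sketch (the URL is a placeholder; this is not the loader's exact code):

```python
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    # Block until the page's "load" event fires before extracting content.
    page.goto("https://example.com", wait_until="load")
    html = page.content()
    browser.close()
```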
2025-02-28 08:45:51 -05:00
Ikko Eltociear Ashimine
46908ee3da docs: update google_cloud_vertexai_rerank.ipynb (#30039)
recieve -> receive
2025-02-28 08:45:06 -05:00
ccurme
0dbcc1d099 docs: document anthropic features (#30030)
Update integrations page with extended thinking feature.

Update API reference with extended thinking and citations.
2025-02-27 19:37:04 -05:00
ccurme
6c7c8a164f openai[patch]: add unit test (#30022)
Test `max_completion_tokens` is propagated to payload for
AzureChatOpenAI.
2025-02-27 11:09:17 -05:00
DamonXue
156a60013a docs: fix tavily_search code-block format. (#30012)
This pull request includes a change to the `TavilySearchResults` class
in the `tool.py` file, which updates the code block format in the
documentation.

Documentation update:

* `libs/community/langchain_community/tools/tavily_search/tool.py`:
Changed the code block format from Python to JSON in the example
provided in the docstring.
2025-02-27 10:55:15 -05:00
kawamou
8977ac5ab0 community[fix]: Handle None value in raw_content from Tavily API response (#30021)
## **Description:**

When using the Tavily retriever with include_raw_content=True, the
retriever occasionally fails with a Pydantic ValidationError because
raw_content can be None.

The Document model in langchain_core/documents/base.py requires
page_content to be a non-None value, but the Tavily API sometimes
returns None for raw_content.

This PR fixes the issue by ensuring that even when raw_content is None,
an empty string is used instead:

```python
page_content=result.get("content", "")
            if not self.include_raw_content
            else (result.get("raw_content") or ""),
```
2025-02-27 10:53:53 -05:00
Yan
d0c9b98171 docs: writer integration docs cosmetic fixes (#29984)
Fixed links at Writer partners integration docs
2025-02-27 10:52:49 -05:00
Lakindu Boteju
f69deee1bd community: Add cost data for aws bedrock anthropic.claude-3-7 model (#30016)
This pull request includes updates to the
`libs/community/langchain_community/callbacks/bedrock_anthropic_callback.py`
file to add a new model version to the list of supported models.

Updates to supported models:

* Added support for the `anthropic.claude-3-7-sonnet-20250219-v1:0`
model with a rate of `0.003` for 1000 input tokens.
* Added support for the `anthropic.claude-3-7-sonnet-20250219-v1:0`
model with a rate of `0.015` for 1000 output tokens.

AWS Bedrock pricing reference : https://aws.amazon.com/bedrock/pricing
2025-02-27 09:51:52 -05:00
Mark Perfect
289b3422dc docs: Add Milvus Standalone to documentation (#29650)


- [x] **PR message**:
- Added a new section for how to set up and use Milvus with Docker, and
added an example of how to instantiate Milvus for hybrid retrieval
- Fixed the documentation setup to run `make lint` and `make format`


- [x] **Add tests and docs**: N/A

---------

Co-authored-by: Mark Perfect <mark.anthony.perfect1@gmail.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-26 21:31:40 +00:00
Lakindu Boteju
e0e9e560b3 PyMuPDF4LLM integration to LangChain (#29953)
## PyMuPDF4LLM integration to LangChain for PDF content extraction in
Markdown format

### Description

[PyMuPDF4LLM](https://github.com/pymupdf/RAG) makes it easier to extract
PDF content in Markdown format, needed for LLM & RAG applications.
(License: GNU Affero General Public License v3.0)


[langchain-pymupdf4llm](https://github.com/lakinduboteju/langchain-pymupdf4llm)
integrates PyMuPDF4LLM to LangChain as a Document Loader.
(License: MIT License)

This pull request introduces the integration of
[PyMuPDF4LLM](https://pymupdf.readthedocs.io/en/latest/pymupdf4llm) into
the LangChain project as an integration package:
[`langchain-pymupdf4llm`](https://github.com/lakinduboteju/langchain-pymupdf4llm).
The most important changes include adding new Jupyter notebooks to
document the integration and updating the package configuration file to
include the new package.

### Documentation:

* `docs/docs/integrations/providers/pymupdf4llm.ipynb`: Added a new
Jupyter notebook to document the integration of `PyMuPDF4LLM` with
LangChain, including installation instructions and class imports.
* `docs/docs/integrations/document_loaders/pymupdf4llm.ipynb`: Added a
new Jupyter notebook to document the usage of `langchain-pymupdf4llm` as
a LangChain integration package in detail.

### Package registration:

* `libs/packages.yml`: Updated the package configuration file to include
the `langchain-pymupdf4llm` package.

### Additional information

* Related to: https://github.com/langchain-ai/langchain/pull/29848

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-26 15:59:12 -05:00
Dan Mirsky
d98c3f76c2 core[patch]: Fix FileCallbackHandler name resolution, Fixes #29941 (#29942)
- **Description:** Same changes as #26593 but for FileCallbackHandler
- **Issue:**  Fixes #29941
- **Dependencies:** None
- **Twitter handle:** None

- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
2025-02-26 14:54:24 -05:00
Christophe Bornet
b3885c124f core: Add ruff rules TC (#29268)
See https://docs.astral.sh/ruff/rules/#flake8-type-checking-tc
Some fixes were done for TC001, TC002, and TC003, but these rules are
excluded since they don't play well with Pydantic.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-26 19:39:05 +00:00
talos
9cd20080fc community: Update SQLiteVec table trigger (#29914)
**Issue**: The trigger could only be used by the first table created;
additional triggers could not be created for other tables.

**Fixed**: Update the trigger name so that it can be used for new
tables.
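
The gist of the fix, sketched with plain sqlite3 (table and trigger names are illustrative, not the actual SQLiteVec schema):

```python
import sqlite3


def create_sync_trigger(conn: sqlite3.Connection, table: str) -> None:
    # Derive the trigger name from the table name so each table gets its
    # own trigger rather than colliding on one fixed name.
    conn.execute(
        f"""
        CREATE TRIGGER IF NOT EXISTS sync_{table}
        AFTER INSERT ON {table}
        BEGIN
            SELECT 1;  -- embedding-sync statements would go here
        END;
        """
    )
```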

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-26 15:10:13 +00:00
ccurme
7562677f3f langchain[patch]: delete erroneous lock file (#30007)
Picked up during merge.
2025-02-26 15:01:05 +00:00
Erick Friis
3c96012f5e langchain: make numpy optional (#29182)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-26 14:35:24 +00:00
James Yang
8c28742980 docs: fix kinetica vectorstore typo (#29999)
Description:
- fix kinetica vectorstore typo
- add links

Co-authored-by: jamesongithub@users.noreply.github.com <jamesongithub@users.noreply.github.com>
2025-02-26 08:31:56 -05:00
Artem Yankov
6177b9f9ab community: add title, score and raw_content to tavily search results (#29995)
**Description:**

Tavily search results returned from the API include useful information
like title, score, and (optionally) raw_content, which was missing from
the wrapper even though it is documented there properly. Add this data
to the result structure.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-25 23:27:21 +00:00
Eugene Yurtsev
b525226531 core[patch]: version 0.3.40 (#29997)
Version 0.3.40 release
2025-02-25 23:09:40 +00:00
Vadym Barda
0fc50b82a0 core[patch]: allow passing description to @tool decorator (#29976) 2025-02-25 17:45:36 -05:00
Naveen SK
21bfc95e14 docs: Correct grammatical typos in various documentation files (#29983)
**Description:**
Fixed grammatical typos in various documentation files

**Issue:**
N/A

**Dependencies:**
N/A

**Twitter handle:**
@MrNaveenSK

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-02-25 19:13:31 +00:00
ccurme
1158d3134d langchain[patch]: remove aiohttp (#29991)
My guess is this was left over from when `community` was in langchain.
2025-02-25 11:43:00 -05:00
ccurme
afd7888392 langchain[patch]: remove explicit dependency on tenacity (#29990)
Not used anywhere in `langchain`, already a dependency of
langchain-core.
2025-02-25 11:31:55 -05:00
naveencloud
143c39a4a4 Update gitlab.mdx (#29987)
The documentation mentioned GitHub where it should have said GitLab,
which caused confusion when referring to it.


---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-25 15:06:37 +00:00
ccurme
32704f0ad8 langchain: update extended test (#29988) 2025-02-25 14:58:20 +00:00
Yan
47e1a384f7 Writer partners integration docs (#29961)
**Documentation of Writer provider and additional features**
* [PyPi langchain-writer
web-page](https://pypi.org/project/langchain-writer/)
* [GitHub langchain-writer
repo](https://github.com/writer/langchain-writer)

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-24 19:30:09 -05:00
Antonio Pisani
820a4c068c Transition prolog_tool doc to langgraph (#29972)
@ccurme As suggested I transitioned the prolog_tool documentation to
langgraph

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-24 23:34:53 +00:00
ccurme
79f5bbfb26 anthropic[patch]: release 0.3.8 (#29973) 2025-02-24 15:24:35 -05:00
ccurme
ded886f622 anthropic[patch]: support claude 3.7 sonnet (#29971) 2025-02-24 15:17:47 -05:00
Bagatur
d00d645829 docs[patch]: update disable_streaming docstring (#29968) 2025-02-24 18:40:31 +00:00
ccurme
b7a1705052 openai[patch]: release 0.3.7 (#29967) 2025-02-24 11:59:28 -05:00
ccurme
5437ee385b core[patch]: release 0.3.39 (#29966) 2025-02-24 11:47:01 -05:00
ccurme
291a232fb8 openai[patch]: set global ssl context (#29932)
We set 
```python
global_ssl_context = ssl.create_default_context(cafile=certifi.where())
```
at the module-level and share it among httpx clients.
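
Reusing the module-level context with httpx looks like this sketch (client construction details here are illustrative, not the merged code):

```python
import ssl

import certifi
import httpx

# Build the SSL context once at import time; constructing it per-client
# is comparatively expensive.
global_ssl_context = ssl.create_default_context(cafile=certifi.where())

# httpx accepts an ssl.SSLContext for `verify`, so clients can share it.
client = httpx.Client(verify=global_ssl_context)
async_client = httpx.AsyncClient(verify=global_ssl_context)
```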
2025-02-24 11:25:16 -05:00
ccurme
9ce07980b7 core[patch]: pydantic 2.11 compat (#29963)
Resolves https://github.com/langchain-ai/langchain/issues/29951

Was able to reproduce the issue with Anthropic installing from pydantic
`main` and correct it with the fix recommended in the issue.

Thanks very much @Viicos for finding the bug and the detailed writeup!
2025-02-24 11:11:25 -05:00
ccurme
0d3a3b99fc core[patch]: release 0.3.38 (#29962) 2025-02-24 15:04:53 +00:00
ccurme
b1a7f4e106 core, openai[patch]: support serialization of pydantic models in messages (#29940)
Resolves https://github.com/langchain-ai/langchain/issues/29003,
https://github.com/langchain-ai/langchain/issues/27264
Related: https://github.com/langchain-ai/langchain-redis/issues/52

```python
from langchain.chat_models import init_chat_model
from langchain.globals import set_llm_cache
from langchain_community.cache import SQLiteCache
from pydantic import BaseModel

cache = SQLiteCache()

set_llm_cache(cache)

class Temperature(BaseModel):
    value: int
    city: str

llm = init_chat_model("openai:gpt-4o-mini")
structured_llm = llm.with_structured_output(Temperature)
```
```python
# 681 ms
response = structured_llm.invoke("What is the average temperature of Rome in May?")
```
```python
# 6.98 ms
response = structured_llm.invoke("What is the average temperature of Rome in May?")
```
2025-02-24 09:34:27 -05:00
HackHuang
1645ec1890 docs(tool_artifacts.ipynb) : Remove the unnecessary information (#29960)
Update tool_artifacts.ipynb: remove the unnecessary information shown
below.


8b511a3a78/docs/docs/how_to/tool_artifacts.ipynb (L95)
2025-02-24 09:22:18 -05:00
HackHuang
78c54fccf3 docs(custom_tools.ipynb) : Fix the invalid URL link (#29958)
Update custom_tools.ipynb: fix the invalid URL link for the `@tool`
decorator
2025-02-24 14:03:26 +00:00
ccurme
927ec20b69 openai[patch]: update system role to developer for o-series models (#29785)
Some o-series models will raise a 400 error for `"role": "system"`
(`o1-mini` and `o1-preview` will raise, `o1` and `o3-mini` will not).

Here we update `ChatOpenAI` to update the role to `"developer"` for all
model names matching `^o\d`.

We only make this change on the ChatOpenAI class (not BaseChatOpenAI).
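
Illustratively, the model-name check corresponds to a regex like this (a sketch, not the ChatOpenAI source):

```python
import re


def system_role_for(model: str) -> str:
    # o1, o1-mini, o3-mini, ... match ^o\d and take the "developer" role.
    return "developer" if re.match(r"^o\d", model) else "system"
```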
2025-02-24 08:59:46 -05:00
Ahmed Tammaa
8b511a3a78 [Exception Handling] DeepSeek JSONDecodeError (#29758)
For Context please check #29626 

DeepSeek is using langchain_openai. The error is that it shows a
`json decode error`.

I added a handler for this to give a more sensible error message:
"DeepSeek API returned empty/invalid json".

Reproducing the issue is challenging, as it is inconsistent: sometimes
DeepSeek returns valid data and at other times it returns invalid data,
which triggers the JSONDecodeError.

This PR adds exception handling, but is not an ultimate fix for the issue.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-23 15:00:32 -05:00
Julien Elkaim
e586bffe51 community: Repair embeddings/llamacpp's embed_query method (#29935)
**Description:** As commented on the commit
[41b6a86](41b6a86bbe)
it introduced a bug for when we do an embedding request and the model
returns a non-nested list. Typically it's the case for model
**_nomic-embed-text_**.

- I added the unit test, and ran `make format`, `make lint` and `make
test` from the `community` package.
- No new dependency.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-23 19:32:17 +00:00
Saraswathy Kalaiselvan
5ca4933b9d docs: updated ChatLiteLLM model_kwargs description (#29937)
- [x] **PR title**: docs: (community) update ChatLiteLLM

- [x] **PR message**:
- **Description:** updated the description of the model_kwargs
parameter, which wrongly described temperature.
    - **Issue:** #29862 
    - **Dependencies:** N/A
    
- [x] **Add tests and docs**: N/A

- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-23 19:27:13 +00:00
ccurme
512eb1b764 anthropic[patch]: update models for integration tests (#29938) 2025-02-23 14:23:48 -05:00
Christophe Bornet
f6d4fec4d5 core: Add ruff rules ANN (type annotations) (#29271)
See https://docs.astral.sh/ruff/rules/#flake8-annotations-ann
The advantage over mypy alone is that ruff is very fast at detecting
missing annotations.

ANN101 and ANN102 are deprecated, so we ignore them.
ANN401 (no Any type) is ignored to stay in sync with the mypy config.

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-02-22 17:46:28 -05:00
Bagatur
979a991dc2 core[patch]: dont deep copy merge_message_runs (#28454)
afaict no need to deep copy here, if we merge messages then we convert
them to chunks first anyways
2025-02-22 21:56:45 +00:00
Mohammad Mohtashim
afa94e5bf7 _wait_for_run calling fix for OpenAIAssistantRunnable (#29927)
- **Description:** Fixed the `OpenAIAssistantRunnable` call of
`_wait_for_run`
- **Issue:**  #29923
2025-02-22 00:27:24 +00:00
Vadym Barda
437fe6d216 core[patch]: return ToolMessage from tools when tool call ID is empty string (#29921) 2025-02-21 11:53:15 -05:00
Taofiq Aiyelabegan
5ee8a8f063 [Integration]: Langchain-Permit (#29867)
## Which area of LangChain is being modified?
- This PR adds a new "Permit" integration to the `docs/integrations/`
folder.
- Introduces two new Tools (`LangchainJWTValidationTool` and
`LangchainPermissionsCheckTool`)
- Introduces two new Retrievers (`PermitSelfQueryRetriever` and
`PermitEnsembleRetriever`)
- Adds demo scripts in `examples/` showcasing usage.

## Description of Changes
- Created `langchain_permit/tools.py` for JWT validation and permission
checks with Permit.
- Created `langchain_permit/retrievers.py` for custom Permit-based
retrievers.
- Added documentation in `docs/integrations/providers/permit.ipynb` (or
`.mdx`) to explain setup, usage, and examples.
- Provided sample scripts in `examples/demo_scripts/` to illustrate
usage of these tools and retrievers.
- Ensured all code is linted and tested locally.

Thank you again for reviewing!

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-21 10:59:00 -05:00
Jean-Philippe Dournel
ebe38baaf9 community/mlx_pipeline: fix crash at mlx call (#29915)
- **Description:**
Since mlx_lm 0.20, all calls to mlx crash due to deprecation of the way
parameters are passed to the generate and generate_step methods. The
parameters top_p, temp, repetition_penalty, and repetition_context_size
are no longer passed directly to those methods but are wrapped into
"sampler" and "logit_processor".


- **Dependencies:** mlx_lm (optional)

-  **Tests:** 
I've added a new test to the existing test file:
tests/integration_tests/llms/test_mlx_pipeline.py

---------

Co-authored-by: Jean-Philippe Dournel <jp@insightkeeper.io>
2025-02-21 09:14:53 -05:00
Sinan CAN
bd773cffc3 docs: remove redundant cell in sql_large_db guide (#29917)
2025-02-21 08:38:49 -05:00
ccurme
1fa9f6bc20 docs: build mongo in api ref (#29908) 2025-02-20 19:58:35 -05:00
Chaunte W. Lacewell
d972c6d6ea partners: add langchain-vdms (#29857)
**Description:** Deprecate vdms in community, add integration
langchain-vdms, and update any related files
**Issue:** n/a
**Dependencies:** langchain-vdms
**Twitter handle:** n/a

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-20 19:48:46 -05:00
Mohammad Mohtashim
8293142fa0 mistral[patch]: support model_kwargs (#29838)
- **Description:** Frequency_penalty added as a client parameter
- **Issue:** #29803

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-20 18:47:39 -05:00
ccurme
924d9b1b33 cli[patch]: fix retriever template (#29907)
Chat model tabs don't render correctly in .ipynb template.
2025-02-20 17:51:19 +00:00
Brayden Zhong
a70f31de5f Community: RankLLMRerank AttributeError (Handle list-based rerank results) (#29840)
# community: Fix AttributeError in RankLLMRerank (`list` object has no
attribute `candidates`)

## **Description**
This PR fixes an issue in `RankLLMRerank` where reranking fails with the
following error:

```
AttributeError: 'list' object has no attribute 'candidates'
```

The issue arises because `rerank_batch()` returns a `List[Result]`
instead of an object containing `.candidates`.

### **Changes Introduced**
- Adjusted `compress_documents()` to support both:
  - Old API format: `rerank_results.candidates`
  - New API format: `rerank_results` as a list
  - Also fix wrong .txt location parsing while I was at it.
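
The compatibility shim amounts to something like this sketch (the helper name is illustrative):

```python
def extract_candidates(rerank_results):
    # Old rank_llm API: an object exposing a .candidates attribute.
    # New rank_llm API: a plain list of Result objects.
    if hasattr(rerank_results, "candidates"):
        return rerank_results.candidates
    return rerank_results
```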

---

## **Issue**
Fixes **AttributeError** in `RankLLMRerank` when using
`compression_retriever.invoke()`. The issue is observed when
`rerank_batch()` returns a list instead of an object with `.candidates`.

**Relevant log:**
```
AttributeError: 'list' object has no attribute 'candidates'
```

## **Dependencies**
- No additional dependencies introduced.

---

## **Checklist**
- [x] **Backward compatible** with previous API versions
- [x] **Tested** locally with different RankLLM models
- [x] **No new dependencies introduced**
- [x] **Linted** with `make format && make lint`
- [x] **Ready for review**

---

## **Testing**
- Ran `compression_retriever.invoke(query)`

## **Reviewers**
If no review within a few days, please **@mention** one of:
- @baskaryan
- @efriis
- @eyurtsev
- @ccurme
- @vbarda
- @hwchase17
2025-02-20 12:38:31 -05:00
Levon Ghukasyan
ec403c442a Separate deeplake vector store (#29902)

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-20 17:37:19 +00:00
Jorge Piedrahita Ortiz
3acf842e35 core: add sambanova chat models to load module mapping (#29855)
- **Description:** add sambanova integration package chat models to load
module mapping, to allow serialization and deserialization
2025-02-20 12:30:50 -05:00
ccurme
d227e4a08e mistralai[patch]: release 0.2.7 (#29906) 2025-02-20 17:27:12 +00:00
Hande
d8bab89e6e community: add cognee retriever (#29878)
This PR adds a new cognee integration for knowledge-graph-based retrieval,
enabling developers to ingest documents into cognee's knowledge graph,
process them, and then retrieve context via CogneeRetriever.
It includes:
- langchain_cognee package with a CogneeRetriever class
- a test for the integration, demonstrating how to create, process, and
retrieve with cognee
- an example notebook showing its use. It lives in
`docs/docs/integrations` directory.



Thank you for the review!

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-20 17:15:23 +00:00
Sinan CAN
97dd5f45ae Update retrieval.mdx (#29905)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-20 17:12:29 +00:00
dokato
92b415a9f6 community: Made some Jira fields optional for agent to work correctly (#29876)
**Description:** Two small changes have been proposed here:
(1)
Previous code assumes that every issue has a priority field. If an issue
lacks this field, the code will raise a KeyError.
Now, the code checks if priority exists before accessing it. If priority
is missing, it assigns None instead of crashing. This prevents runtime
errors when processing issues without a priority.

(2)

If the "style" field is missing, the code throws a KeyError, so
`.get("style", None)` is used to safely retrieve the value if present.

**Issue:** #29875 

**Dependencies:** N/A
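
A minimal illustration of the defensive `.get(...)` pattern adopted here (the
dict below is a stand-in for a real Jira issue payload):

```python
# Stand-in for a Jira issue response that lacks "priority" and "style".
issue = {"fields": {"summary": "Fix login bug"}}

fields = issue.get("fields", {})
priority = fields.get("priority")   # None instead of a KeyError
style = fields.get("style", None)   # same pattern for "style"
print(priority, style)  # None None
```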
2025-02-20 12:10:11 -05:00
am-kinetica
ca7eccba1f Handled a bug around empty query results differently (#29877)
- **Handled query records properly** in `community: vectorstores/kinetica`
- **Description:** check the number of records returned by a query
before processing further
- **Issue:** previously resulted in an `AttributeError`, which has now been
fixed

@efriis
2025-02-20 12:07:49 -05:00
Antonio Pisani
2c403a3ea9 docs: Add langchain-prolog documentation (#29788)
I want to add documentation for a new integration with SWI-Prolog.

@hwchase17 check this out:

https://github.com/apisani1/langchain-prolog/tree/main/examples/travel_agent

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-20 11:50:28 -05:00
Marlene
be7fa920fa Partner: Azure AI Langchain Docs and Package Registry (#29879)
This PR adds documentation for the Azure AI package in LangChain to the
main mono-repo.

No connected issue or updated dependencies.

Utilises existing tests and makes updates to the docs.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-20 14:35:26 +00:00
Hankyeol Kyung
2dd0ce3077 openai: Update reasoning_effort arg documentation (#29897)
**Description:** Update docstring for `reasoning_effort` argument to
specify that it applies to reasoning models only (e.g., OpenAI o1 and
o3-mini), clarifying its supported models.
**Issue:** None
**Dependencies:** None
2025-02-20 09:03:42 -05:00
Joe Ferrucci
c28ee329c9 Fix typo in local_llms.ipynb docs (#29903)
Change `tailed` to `tailored`

`Docs > How-To > Local LLMs:`

https://python.langchain.com/docs/how_to/local_llms/#:~:text=use%20a%20prompt-,tailed,-for%20your%20specific
2025-02-20 09:03:10 -05:00
ccurme
ed3c2bd557 core[patch]: set version="v2" as default in astream_events (#29894) 2025-02-19 23:21:37 +00:00
Fabian Blatz
a2d05a376c community: ConfluenceLoader: add a filter method for attachments (#29882)
Adds an `attachment_filter_func` parameter to the ConfluenceLoader class
which can be used to determine which attachments are indexed. This is useful
if you want to exclude files based on their media type or other metadata.
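
A usage sketch (URL, credentials, and the attachment metadata shape are
illustrative assumptions, not from the PR):

```python
from langchain_community.document_loaders import ConfluenceLoader

def only_pdfs(attachment) -> bool:
    # Assumed attachment payload shape; adjust to the actual Confluence API.
    return attachment.get("metadata", {}).get("mediaType") == "application/pdf"

loader = ConfluenceLoader(
    url="https://example.atlassian.net/wiki",
    username="me@example.com",
    api_key="...",
    space_key="DOCS",
    include_attachments=True,
    attachment_filter_func=only_pdfs,  # new parameter from this PR
)
```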
2025-02-19 18:20:45 -05:00
ccurme
9ed47a4d63 community[patch]: release 0.3.18 (#29896) 2025-02-19 20:13:00 +00:00
ccurme
92889edafd core[patch]: release 0.3.37 (#29895) 2025-02-19 20:04:35 +00:00
ccurme
ffd6194060 core[patch]: de-beta rate limiters (#29891) 2025-02-19 19:19:59 +00:00
Erick Friis
5637210a20 infra: run docs build on packages.yml updates (#29796)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-19 18:45:30 +00:00
ccurme
fb4c8423f0 docs: fix builds (#29890)
Missed in https://github.com/langchain-ai/langchain/pull/29889
2025-02-19 13:35:59 -05:00
ccurme
68b13e5172 pinecone: delete from monorepo (#29889)
This now lives in https://github.com/langchain-ai/langchain-pinecone
2025-02-19 12:55:15 -05:00
Erick Friis
6c1e21d128 core: basemessage.text() (#29078) 2025-02-18 17:45:44 -08:00
Ben Burns
e2ba336e72 docs: fix partner package table build for packages with no download stats (#29871)
The build in #29867 is currently broken because `langchain-cli` didn't
add download stats to the provider file.

This change gracefully handles sorting packages with missing download
counts. I initially updated the build to fetch download counts on every
run, but pypistats [requests](https://pypistats.org/api/) that users not
fetch stats like this via CI.
2025-02-19 11:05:57 +13:00
Eugene Yurtsev
8e5074d82d core: release 0.3.36 (#29869)
Release 0.3.36
2025-02-18 19:51:43 +00:00
Vadym Barda
d04fa1ae50 core[patch]: allow passing JSON schema as args_schema to tools (#29812) 2025-02-18 14:44:31 -05:00
ccurme
5034a8dc5c xai[patch]: release 0.2.1 (#29854) 2025-02-17 14:30:41 -05:00
ccurme
83dcef234d xai[patch]: support dedicated structured output feature (#29853)
https://docs.x.ai/docs/guides/structured-outputs

Interface appears identical to OpenAI's.
```python
from langchain.chat_models import init_chat_model
from pydantic import BaseModel

class Joke(BaseModel):
    setup: str
    punchline: str

llm = init_chat_model("xai:grok-2").with_structured_output(
    Joke, method="json_schema"
)
llm.invoke("Tell me a joke about cats.")
```
2025-02-17 14:19:51 -05:00
ccurme
9d6fcd0bfb infra: add xai to scheduled testing (#29852) 2025-02-17 18:59:45 +00:00
ccurme
8a3b05ae69 langchain[patch]: release 0.3.19 (#29851) 2025-02-17 13:36:23 -05:00
ccurme
c9061162a1 langchain[patch]: add xai to extras (#29850) 2025-02-17 17:49:34 +00:00
Bagatur
1acf57e9bd langchain[patch]: init_chat_model xai support (#29849) 2025-02-17 09:45:39 -08:00
Paul Nikonowicz
1a55da9ff4 docs: Update gemini vector docs (#29841)
# Description

2 changes:
1. removes `getpass` from the code example, since it reads from stdin and
can cause the notebook to freeze
2. updates the example to the latest Gemini model

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-17 07:54:23 -05:00
hsm207
037b129b86 weaviate: Add-deprecation-warning (#29757)
- **Description:** add deprecation warning when using weaviate from
langchain_community
  - **Issue:** NA
  - **Dependencies:** NA
  - **Twitter handle:** NA

---------

Signed-off-by: hsm207 <hsm207@users.noreply.github.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-16 21:42:18 -05:00
Đỗ Quang Minh
cd198ac9ed community: add custom model for OpenAIWhisperParser (#29831)
Adds a `model` property to OpenAIWhisperParser, defaulting to `whisper-1`
(the previous hard-coded value).
Please help me update the docs and other related components of this
repo.
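
A usage sketch (API key is a placeholder):

```python
from langchain_community.document_loaders.parsers.audio import OpenAIWhisperParser

# Pick the transcription model explicitly; "whisper-1" matches the
# previous hard-coded default.
parser = OpenAIWhisperParser(api_key="...", model="whisper-1")
```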
2025-02-16 21:26:07 -05:00
Cole McIntosh
6874c9c1d0 docs: add notebook for langchain-salesforce package (#29800)
**Description:**  
This PR adds a Jupyter notebook that explains the features,
installation, and usage of the
[`langchain-salesforce`](https://github.com/colesmcintosh/langchain-salesforce)
package. The notebook includes:
- Setup instructions for configuring Salesforce credentials  
- Example code demonstrating common operations such as querying,
describing objects, creating, updating, and deleting records

**Issue:**  
N/A

**Dependencies:**  
No new dependencies are required.

**Tests and Docs:**  
- Added an example notebook demonstrating the usage of the
`langchain-salesforce` package, located in `docs/docs/integrations`.

**Lint and Test:**  
- Ran `make format`, `make lint`, and `make test` successfully.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-16 08:34:57 -05:00
Jan Heimes
60f58df5b3 community: add top_k as param to Needle Retriever (#29821)
This PR adds `top_k` as a param to the Needle Retriever. By default we use
top 10. A usage sketch follows.
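
A usage sketch (API key, collection id, and the exact field names are
assumptions based on the existing retriever):

```python
from langchain_community.retrievers.needle import NeedleRetriever

retriever = NeedleRetriever(
    needle_api_key="...",           # placeholder
    collection_id="my-collection",  # placeholder
    top_k=5,                        # new parameter; defaults to 10
)
docs = retriever.invoke("What is our refund policy?")
```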
2025-02-16 08:30:52 -05:00
Mateusz Szewczyk
8147679169 docs: Rename IBM product name to IBM watsonx (#29802)
Rename IBM product name to `IBM watsonx` across the docs.
2025-02-15 21:48:02 -05:00
Jesus Fernandez Bes
1dfac909d8 community: Adding IN Operator to AzureCosmosDBNoSQLVectorStore (#29805)
- **Description:** Added a new operator to the operator map with key `$in`
and value `IN`, so that filters can be defined using lists as values. This
was already contemplated, but because the IN operator was not in the map,
such filters could not be used.
- **Issue**: Fixes #29804.
- **Dependencies**: No extra.
2025-02-15 21:44:54 -05:00
Wahed Hemati
8901b113c3 docs: add Discord integration docs (#29822)
This PR adds documentation for the `langchain-discord-shikenso`
integration, including an example notebook at
`docs/docs/integrations/tools/discord.ipynb` and updates to
`libs/packages.yml` to track the new package.

  **Issue:**  
  N/A

  **Dependencies:**  
  None

  **Twitter handle:**  
  N/A

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-15 21:43:45 -05:00
Akmal Ali Jasmin
f1792e486e fix: Correct getpass usage in Google Generative AI Embedding docs (#29809) (#29810)
**fix: Correct getpass usage in Google Generative AI Embedding docs
(#29809)**

- **Description:** Corrected the `getpass` usage in the Google
Generative AI Embedding documentation by replacing `getpass()` with
`getpass.getpass()` to fix the `TypeError`.
- **Issue:** #29809  
- **Dependencies:** None  

**Additional Notes:**  
The change ensures compatibility with Google Colab and follows Python's
`getpass` module usage standards.
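
The corrected pattern, for reference (environment variable name is
illustrative):

```python
import getpass
import os

# Call the function on the module; calling the module itself
# (`getpass(...)`) raises the TypeError this fix addresses.
os.environ["GOOGLE_API_KEY"] = getpass.getpass("Enter your API key: ")
```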
2025-02-15 21:41:00 -05:00
HackHuang
80ca310c15 langchain : Add the full code snippet in rag.ipynb (#29820)
docs(rag.ipynb): Add the `full code` snippets; they give beginners a
complete, runnable demonstration.

Preview the change :
https://langchain-git-fork-googtech-patch-3-langchain.vercel.app/docs/tutorials/rag/

Two `full code` snippets are added below:
<details>
<summary>Full Code:</summary>

```python
import bs4
from langchain_community.document_loaders import WebBaseLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain.chat_models import init_chat_model
from langchain_openai import OpenAIEmbeddings
from langchain_core.vectorstores import InMemoryVectorStore
from google.colab import userdata
from langchain_core.prompts import PromptTemplate
from langchain_core.documents import Document
from typing_extensions import List, TypedDict
from langgraph.graph import START, StateGraph

#################################################
# 1.Initialize the ChatModel and EmbeddingModel #
#################################################
llm = init_chat_model(
    model="gpt-4o-mini",
    model_provider="openai",
    openai_api_key=userdata.get('OPENAI_API_KEY'),
    base_url=userdata.get('BASE_URL'),
)
embeddings = OpenAIEmbeddings(
    model="text-embedding-3-large",
    openai_api_key=userdata.get('OPENAI_API_KEY'),
    base_url=userdata.get('BASE_URL'),
)

#######################
# 2.Loading documents #
#######################
loader = WebBaseLoader(
    web_paths=("https://lilianweng.github.io/posts/2023-06-23-agent/",),
    bs_kwargs=dict(
        # Only keep post title, headers, and content from the full HTML.
        parse_only=bs4.SoupStrainer(
            class_=("post-content", "post-title", "post-header")
        )
    ),
)
docs = loader.load()

#########################
# 3.Splitting documents #
#########################
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,  # chunk size (characters)
    chunk_overlap=200,  # chunk overlap (characters)
    add_start_index=True,  # track index in original document
)
all_splits = text_splitter.split_documents(docs)

###########################################################
# 4.Embedding documents and storing them in a vectorstore #
###########################################################
vector_store = InMemoryVectorStore(embeddings)
_ = vector_store.add_documents(documents=all_splits)

##########################################################
# 5.Customizing the prompt or loading it from Prompt Hub #
##########################################################
# prompt = hub.pull("rlm/rag-prompt") # load the prompt from the prompt-hub
template = """Use the following pieces of context to answer the question at the end.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Use three sentences maximum and keep the answer as concise as possible.
Always say "thanks for asking!" at the end of the answer.

{context}

Question: {question}

Helpful Answer:"""
prompt = PromptTemplate.from_template(template)

##################################################################################################
# 5.Using LangGraph to tie together the retrieval and generation steps into a single application #
##################################################################################################
# 5.1.Define the state of the application, which controls the application data
class State(TypedDict):
    question: str
    context: List[Document]
    answer: str

# 5.2.1.Define the node of application, which signifies the application steps
def retrieve(state: State):
    retrieved_docs = vector_store.similarity_search(state["question"])
    return {"context": retrieved_docs}

# 5.2.2.Define the node of application, which signifies the application steps
def generate(state: State):
    docs_content = "\n\n".join(doc.page_content for doc in state["context"])
    messages = prompt.invoke({"question": state["question"], "context": docs_content})
    response = llm.invoke(messages)
    return {"answer": response.content}

# 6.Define the "control flow" of application, which signifies the ordering of the application steps
graph_builder = StateGraph(State).add_sequence([retrieve, generate])
graph_builder.add_edge(START, "retrieve")
graph = graph_builder.compile()
```

</details>

<details>
<summary>Full Code:</summary>

```python
import bs4
from langchain_community.document_loaders import WebBaseLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain.chat_models import init_chat_model
from langchain_openai import OpenAIEmbeddings
from langchain_core.vectorstores import InMemoryVectorStore
from google.colab import userdata
from langchain_core.prompts import PromptTemplate
from langchain_core.documents import Document
from typing_extensions import List, TypedDict
from langgraph.graph import START, StateGraph
from typing import Literal
from typing_extensions import Annotated

#################################################
# 1.Initialize the ChatModel and EmbeddingModel #
#################################################
llm = init_chat_model(
    model="gpt-4o-mini",
    model_provider="openai",
    openai_api_key=userdata.get('OPENAI_API_KEY'),
    base_url=userdata.get('BASE_URL'),
)
embeddings = OpenAIEmbeddings(
    model="text-embedding-3-large",
    openai_api_key=userdata.get('OPENAI_API_KEY'),
    base_url=userdata.get('BASE_URL'),
)

#######################
# 2.Loading documents #
#######################
loader = WebBaseLoader(
    web_paths=("https://lilianweng.github.io/posts/2023-06-23-agent/",),
    bs_kwargs=dict(
        # Only keep post title, headers, and content from the full HTML.
        parse_only=bs4.SoupStrainer(
            class_=("post-content", "post-title", "post-header")
        )
    ),
)
docs = loader.load()

#########################
# 3.Splitting documents #
#########################
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,  # chunk size (characters)
    chunk_overlap=200,  # chunk overlap (characters)
    add_start_index=True,  # track index in original document
)
all_splits = text_splitter.split_documents(docs)

# Search analysis: Add some metadata to the documents in our vector store,
# so that we can filter on section later. 
total_documents = len(all_splits)
third = total_documents // 3
for i, document in enumerate(all_splits):
    if i < third:
        document.metadata["section"] = "beginning"
    elif i < 2 * third:
        document.metadata["section"] = "middle"
    else:
        document.metadata["section"] = "end"

# Search analysis: Define the schema for our search query
class Search(TypedDict):
    query: Annotated[str, ..., "Search query to run."]
    section: Annotated[
        Literal["beginning", "middle", "end"], ..., "Section to query."]

###########################################################
# 4.Embedding documents and storing them in a vectorstore #
###########################################################
vector_store = InMemoryVectorStore(embeddings)
_ = vector_store.add_documents(documents=all_splits)

##########################################################
# 5.Customizing the prompt or loading it from Prompt Hub #
##########################################################
# prompt = hub.pull("rlm/rag-prompt") # load the prompt from the prompt-hub
template = """Use the following pieces of context to answer the question at the end.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Use three sentences maximum and keep the answer as concise as possible.
Always say "thanks for asking!" at the end of the answer.

{context}

Question: {question}

Helpful Answer:"""
prompt = PromptTemplate.from_template(template)

###################################################################
# 5.Using LangGraph to tie together the analyze_query, retrieval  #
# and generation steps into a single application                  #
###################################################################
# 5.1.Define the state of the application, which controls the application data
class State(TypedDict):
    question: str
    query: Search
    context: List[Document]
    answer: str

# Search analysis: Define the node of application, 
# which is used to generate a query from the user's raw input
def analyze_query(state: State):
    structured_llm = llm.with_structured_output(Search)
    query = structured_llm.invoke(state["question"])
    return {"query": query}

# 5.2.1.Define the node of application, which signifies the application steps
def retrieve(state: State):
    query = state["query"]
    retrieved_docs = vector_store.similarity_search(
        query["query"],
        filter=lambda doc: doc.metadata.get("section") == query["section"],
    )
    return {"context": retrieved_docs}

# 5.2.2.Define the node of application, which signifies the application steps
def generate(state: State):
    docs_content = "\n\n".join(doc.page_content for doc in state["context"])
    messages = prompt.invoke({"question": state["question"], "context": docs_content})
    response = llm.invoke(messages)
    return {"answer": response.content}

# 6.Define the "control flow" of application, which signifies the ordering of the application steps
graph_builder = StateGraph(State).add_sequence([analyze_query, retrieve, generate]) 
graph_builder.add_edge(START, "analyze_query")
graph = graph_builder.compile()
```

</details>

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-15 21:37:58 -05:00
Michael Chin
b2c21f3e57 docs: Update SagemakerEndpoint examples (#29814)
Related issue: https://github.com/langchain-ai/langchain-aws/issues/361

Updated the AWS `SagemakerEndpoint` LLM documentation to import from
`langchain-aws`.
2025-02-15 21:34:56 -05:00
Krishna Kulkarni
a98c5f1c4b langchain_community: add image support to DuckDuckGoSearchAPIWrapper (#29816)
- [ ] **PR title**: langchain_community: add image support to
DuckDuckGoSearchAPIWrapper

- **Description:** This PR enhances the DuckDuckGoSearchAPIWrapper
within the langchain_community package by introducing support for image
searches. The enhancement includes:
  - Adding a new method _ddgs_images to handle image search queries.
- Updating the run and results methods to process and return image
search results appropriately.
- Modifying the source parameter to accept "images" as a valid option,
alongside "text" and "news".
- **Dependencies:** No additional dependencies are required for this
change.
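
A usage sketch of the new image mode (query is illustrative):

```python
from langchain_community.utilities import DuckDuckGoSearchAPIWrapper

# source="images" routes queries through image search, alongside the
# existing "text" and "news" options.
wrapper = DuckDuckGoSearchAPIWrapper(source="images")
results = wrapper.results("golden retriever puppies", max_results=5)
```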
2025-02-15 21:32:14 -05:00
Iris Liu
0d9f0b4215 docs: updates Chroma integration API ref docs (#29826)
- Description: updates Chroma integration API ref docs
- Issue: #29817
- Dependencies: N/A
- Twitter handle: @irieliu

Co-authored-by: Iris <liuirisny@gmail.com>
2025-02-15 21:05:21 -05:00
ccurme
3fe7c07394 openai[patch]: release 0.3.6 (#29824) 2025-02-15 13:53:35 -05:00
ccurme
65a6dce428 openai[patch]: enable streaming for o1 (#29823)
Verified streaming works for the `o1-2024-12-17` snapshot as well.
2025-02-15 12:42:05 -05:00
Christophe Bornet
3dffee3d0b all: Bump blockbuster version to 1.5.18 (#29806)
Has fixes for running on Windows and non-CPython runtimes.
2025-02-14 07:55:38 -08:00
ccurme
d9a069c414 tests[patch]: release 0.3.12 (#29797) 2025-02-13 23:57:44 +00:00
ccurme
e4f106ea62 groq[patch]: remove xfails (#29794)
These appear to pass.
2025-02-13 15:49:50 -08:00
Erick Friis
f34e62ef42 packages: add langchain-xai (#29795)
wasn't registered per the contribution guide:
https://python.langchain.com/docs/contributing/how_to/integrations/
2025-02-13 15:36:41 -08:00
ccurme
49cc6106f7 tests[patch]: fix query for test_tool_calling_with_no_arguments (#29793) 2025-02-13 23:15:52 +00:00
Erick Friis
1a225fad03 multiple: fix uv path deps (#29790)
file:// format wasn't working with updates - it doesn't install as an
editable dep

move to tool.uv.sources with path= instead
2025-02-13 21:32:34 +00:00
Erick Friis
ff13384eb6 packages: update counts, add command (#29789) 2025-02-13 20:45:25 +00:00
Mateusz Szewczyk
8d0e31cbc5 docs: Fix model_id on EmbeddingTabs page (#29784)
Thank you for contributing to LangChain!

Fix `model_id` in IBM provider on EmbeddingTabs page

- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
2025-02-13 09:41:51 -08:00
Mateusz Szewczyk
61f1be2152 docs: Added IBM to ChatModelTabs and EmbeddingTabs (#29774)
Thank you for contributing to LangChain!

Added IBM to ChatModelTabs and EmbeddingTabs

- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
2025-02-13 08:43:42 -08:00
HackHuang
76d32754ff core : update the class docs of InMemoryVectorStore in in_memory.py (#29781)
- **Description:** Add a new introduction about checking `store` in
in_memory.py; it's necessary and useful for beginners.
```python
Check Documents:
    .. code-block:: python
    
        for doc in vector_store.store.values():
            print(doc)
```

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-13 16:41:47 +00:00
Mateusz Szewczyk
b82cef36a5 docs: Update IBM WatsonxLLM and ChatWatsonx documentation (#29752)
Update the models presented in the `WatsonxLLM` and `ChatWatsonx`
documentation.
2025-02-13 08:41:07 -08:00
Mohammad Mohtashim
96ad09fa2d (Community): Added API Key for Jina Search API Wrapper (#29622)
- **Description:** Simple change for adding the API Key for Jina Search
API Wrapper
- **Issue:** #29596
2025-02-12 20:12:07 -08:00
ccurme
f1c66a3040 docs: minor fix to provider table (#29771)
Langfair renders as LangfAIr
2025-02-13 04:06:58 +00:00
Jakub Kopecký
c8cb7c25bf docs: update apify integration (#29553)
**Description:** Fixed and updated Apify integration documentation to
use the new [langchain-apify](https://github.com/apify/langchain-apify)
package.
**Twitter handle:** @apify
2025-02-12 20:02:55 -08:00
ccurme
16fb1f5371 chroma[patch]: release 0.2.2 (#29769)
Resolves https://github.com/langchain-ai/langchain/issues/29765
2025-02-13 02:39:16 +00:00
Mohammad Mohtashim
2310847c0f (Chroma): Small Fix in add_texts when checking for embeddings (#29766)
- **Description:** Small fix in `add_texts` so that embedding
nullability is checked properly.
- **Issue:** #29765

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-13 02:26:13 +00:00
Eric Pinzur
716fd89d8e docs: contributed Graph RAG Retriever integration (#29744)
**Description:** 

This adds the `Graph RAG` Retriever integration documentation, per
https://python.langchain.com/docs/contributing/how_to/integrations/.

* The integration exists in this public repository:
https://github.com/datastax/graph-rag
* We've implemented the standard langchain tests for retrievers:
https://github.com/datastax/graph-rag/blob/main/packages/langchain-graph-retriever/tests/test_langchain.py
* Our integration is published to PyPi:
https://pypi.org/project/langchain-graph-retriever/
2025-02-12 18:25:48 -08:00
Sunish Sheth
f42dafa809 Deprecating sql_database access for creating UC functions for agent tools (#29745)
Co-authored-by: ccurme <chester.curme@gmail.com>
2025-02-13 02:24:44 +00:00
Thor 雷神 Schaeff
a0970d8d7e [WIP] chore: update ElevenLabs tool. (#29722)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-13 01:54:34 +00:00
Chaymae El Aattabi
4b08a7e8e8 Fix #29759: Use local chunk_size_ for looping in embed_documents (#29761)
This fix ensures that the chunk size is correctly determined when
processing text embeddings. Previously, the code did not properly handle
cases where chunk_size was None, potentially leading to incorrect
chunking behavior.

Now, chunk_size_ is explicitly set to either the provided chunk_size or
the default self.chunk_size, ensuring consistent chunking. This update
improves reliability when processing large text inputs in batches and
prevents unintended behavior when chunk_size is not specified.
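
A minimal sketch of the resolution logic (function and default are
illustrative, not the library's actual code):

```python
from typing import Iterator, List, Optional

def batches(
    texts: List[str], chunk_size: Optional[int], default_chunk_size: int = 1000
) -> Iterator[List[str]]:
    """Fall back to the configured default when chunk_size is None,
    so batching stays consistent."""
    chunk_size_ = chunk_size or default_chunk_size
    for i in range(0, len(texts), chunk_size_):
        yield texts[i : i + chunk_size_]
```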

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-13 01:28:26 +00:00
Jorge Piedrahita Ortiz
1fbc01c350 docs: update sambanova integration api reference links (#29762)
- **Description:** update sambanova external package integration api
reference links in docs
2025-02-12 15:58:00 -08:00
Sunish Sheth
043d78d85d Deprecate langchain community ucfunctiontoolkit in favor of databricks_langchain (#29746)
2025-02-12 15:50:35 -08:00
Hugues Chocart
e4eec9e9aa community: add langchain-abso documentation (#29739)
Add documentation for the community package `langchain-abso`. It
provides a new chat model class that uses https://abso.ai.

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2025-02-12 19:57:33 +00:00
ccurme
e61f463745 core[patch]: release 0.3.35 (#29764) 2025-02-12 18:13:10 +00:00
Nuno Campos
fe59f2cc88 core: Fix output of convert_messages when called with BaseMessage.model_dump() (#29763)
- additional_kwargs was being nested twice
- for example, response_metadata was placed inside additional_kwargs
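
A round-trip sketch of the fixed behavior (assuming `convert_to_messages`
is the public entry point for the conversion):

```python
from langchain_core.messages import AIMessage, convert_to_messages

msg = AIMessage(
    content="hi",
    additional_kwargs={"source": "x"},
    response_metadata={"model": "y"},
)
# After the fix, both fields survive the dump/convert round trip intact.
[restored] = convert_to_messages([msg.model_dump()])
assert restored.additional_kwargs == {"source": "x"}
assert restored.response_metadata == {"model": "y"}
```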
2025-02-12 10:05:33 -08:00
Jacob Lee
f4e3e86fbb feat(langchain): Infer o3 model strings passed to init_chat_model as OpenAI (#29743) 2025-02-11 16:51:41 -08:00
Mohammad Mohtashim
9f3bcee30a (Community): Adding Structured Support for ChatPerplexity (#29361)
- **Description:** Adding Structured Support for ChatPerplexity
- **Issue:** #29357
- This is implemented as per the Perplexity official docs:
https://docs.perplexity.ai/guides/structured-outputs
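
A usage sketch mirroring the interface of other structured-output
integrations (model name and schema are illustrative):

```python
from langchain_community.chat_models import ChatPerplexity
from pydantic import BaseModel

class Answer(BaseModel):
    city: str
    population: int

llm = ChatPerplexity(model="sonar", pplx_api_key="...")  # placeholders
structured_llm = llm.with_structured_output(Answer)
result = structured_llm.invoke("What is the largest city in France?")
```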

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-02-11 15:51:18 -08:00
Jawahar S
994c5465e0 feat: add support for IBM WatsonX AI chat models (#29688)
**Description:** Updated init_chat_model to support Granite models
deployed on IBM WatsonX
**Dependencies:**
[langchain-ibm](https://github.com/langchain-ai/langchain-ibm)

Tagging @baskaryan @efriis for review when you get a chance.
2025-02-11 15:34:29 -08:00
Shailendra Mishra
c7d74eb7a3 Oraclevs integration (#29723)
community: langchain_community/vectorstores/oraclevs.py

- **Description:** Refactored code to allow either a connection or a
connection pool.
- **Issue:** Normally an idle connection is terminated by the server-side
listener at timeout, so a user has to re-instantiate the vector store, and
the timeout for a plain connection is not configurable. The solution is to
use a connection pool, where a user can specify a user-defined timeout and
the connections are managed by the pool.
- **Dependencies:** None
- **Tests and docs:** This is not a new integration. A user can pass either
a connection or a connection pool; the determination of what is passed is
made at run time. Everything works as before.
- **Lint and test:** Already done.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2025-02-11 14:56:55 -08:00
ccurme
42ebf6ae0c deepseek[patch]: release 0.1.2 (#29742) 2025-02-11 11:53:43 -08:00
ccurme
ec55553807 pinecone[patch]: release 0.2.3 (#29741) 2025-02-11 19:27:39 +00:00
ccurme
001cf99253 pinecone[patch]: add support for python 3.13 (#29737) 2025-02-11 11:20:21 -08:00
ccurme
ba8f752bf5 openai[patch]: release 0.3.5 (#29740) 2025-02-11 19:20:11 +00:00
ccurme
9477f49409 openai, deepseek: make _convert_chunk_to_generation_chunk an instance method (#29731)
1. Make `_convert_chunk_to_generation_chunk` an instance method on
BaseChatOpenAI
2. Override on ChatDeepSeek to add `"reasoning_content"` to message
additional_kwargs.

Resolves https://github.com/langchain-ai/langchain/issues/29513
2025-02-11 11:13:23 -08:00
Christopher Menon
1edd27d860 docs: fix SQL-based metadata filter syntax, add link to BigQuery docs (#29736)
Fix the syntax for SQL-based metadata filtering in the [Google BigQuery
Vector Search
docs](https://python.langchain.com/docs/integrations/vectorstores/google_bigquery_vector_search/#searching-documents-with-metadata-filters).
Also add a link to learn more about BigQuery operators that can be used
here.

I have been using this library, and have found that this is the correct
syntax to use for the SQL-based filters.

**Issue**: no open issue.
**Dependencies**: none.
**Twitter handle**: none.

No tests as this is only a change to the documentation.

<!-- Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional
ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in
langchain.

If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, ccurme, vbarda, hwchase17. -->
2025-02-11 11:10:12 -08:00
ccurme
d0c2dc06d5 mongodb[patch]: fix link in readme (#29738) 2025-02-11 18:19:59 +00:00
zzaebok
3b3d52206f community: change wikidata rest api version from v0 to v1 (#29708)
**Description:**

According to the [wikidata
documentation](https://www.wikidata.org/wiki/Wikidata_talk:REST_API),
Wikibase REST API version 1 (stable) was released on November 11, 2024.
Their guidance is to use the new v1 API, which in almost all cases just
requires replacing v0 in the routes with v1.
So I replaced WIKIDATA_REST_API_URL from v0 to v1 for stable usage.

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-02-10 17:12:38 -08:00
ccurme
4a389ef4c6 community: fix extended testing (#29715)
v0.3.100 of premai sdk appears to break on import:
89d9276cbf/premai/api/__init__.py (L230)
2025-02-10 16:57:34 -08:00
Yoav Levy
af3f759073 docs: fixed nimble's provider page and retriever (#29695)
## **Description:**
- Added information about the retriever that Nimble's provider exposes.
- Fixed the authentication explanation on the retriever page.
2025-02-10 15:30:40 -08:00
Bhav Sardana
624216aa64 community:Fix for Pydantic model validator of GoogleApiYoutubeLoader (#29694)
- **Description:** Community: bugfix for the Pydantic model validator of
GoogleApiYoutubeLoader
- **Issue:** #29165, #27432 
Fix is similar to #29346
2025-02-10 08:57:58 -05:00
Changyong Um
60740c44c5 community: Add configurable text key for indexing and the retriever in Pinecone Hybrid Search (#29697)
**issue**

In Langchain, the original content is generally stored under the `text`
key. However, the `PineconeHybridSearchRetriever` searches the `context`
field in the metadata and cannot change this key. To address this, I
have modified the code to allow changing the key to something other than
context.

In my opinion, following Langchain's conventions, the `text` key seems
more appropriate than `context`. However, since I wasn't sure about the
author's intent, I have left the default value as `context`.
2025-02-10 08:56:37 -05:00
Jun He
894b0cac3c docs: Remove redundant line (#29698)
If I understand it correctly, chain1 is never used.
2025-02-10 08:53:21 -05:00
Tiest van Gool
6655246504 Classification Tutorial: Replaced .dict() with .model_dump() method (#29701)
The `.dict()` method is deprecated in Pydantic v2; use the `model_dump()`
method instead.
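
For reference (field names are illustrative):

```python
from pydantic import BaseModel

class Classification(BaseModel):
    sentiment: str
    aggressiveness: int

c = Classification(sentiment="positive", aggressiveness=1)
print(c.model_dump())  # {'sentiment': 'positive', 'aggressiveness': 1}
# c.dict() still works but emits a deprecation warning in Pydantic v2.
```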

2025-02-10 08:38:15 -05:00
Edmond Wang
c36e6d4371 docs: Add Comments and Supplementary Example Code to Vearch Vector Dat… (#29706)
- **Description:** Added some comments to the example code in the Vearch
vector database documentation and included commonly used sample code.
- **Issue:** None
- **Dependencies:** None

---------

Co-authored-by: wangchuxiong <wangchuxiong@jd.com>
2025-02-10 08:35:38 -05:00
Akmal Ali Jasmin
bc5fafa20e [DOC] Fix #29685: HuggingFaceEndpoint missing task argument in documentation (#29686)
## **Description**
This PR updates the LangChain documentation to address an issue where
the `HuggingFaceEndpoint` example **does not specify the required `task`
argument**. Without this argument, users on `huggingface_hub == 0.28.1`
encounter the following error:

```
ValueError: Task unknown has no recommended model. Please specify a model explicitly.
```

---

## **Issue**
Fixes #29685

---

## **Changes Made**
- **Updated `HuggingFaceEndpoint` documentation** to explicitly define
`task="text-generation"`:
```python
llm = HuggingFaceEndpoint(
    repo_id=GEN_MODEL_ID,
    huggingfacehub_api_token=HF_TOKEN,
    task="text-generation"  # Explicitly specify task
)
```

- **Added a deprecation warning note** and recommended using
`InferenceClient`:
```python
from huggingface_hub import InferenceClient
from langchain.llms.huggingface_hub import HuggingFaceHub

client = InferenceClient(model=GEN_MODEL_ID, token=HF_TOKEN)

llm = HuggingFaceHub(
    repo_id=GEN_MODEL_ID,
    huggingfacehub_api_token=HF_TOKEN,
    client=client,
)
```

---

## **Dependencies**
- No new dependencies introduced.
- Change only affects **documentation**.

---

## **Testing**
- Verified that adding `task="text-generation"` resolves the issue.
- Tested the alternative approach with `InferenceClient` in Google
Colab.

---

## **Twitter Handle (Optional)**
If this PR gets announced, a shout-out to **@AkmalJasmin** would be
great! 🚀

---

## **Reviewers**
📌 **@langchain-maintainers** Please review this PR. Let me know if
further changes are needed.

🚀 This fix improves **developer onboarding** and ensures the **LangChain
documentation remains up to date**! 🚀
2025-02-08 14:41:02 -05:00
manukychen
3de445d521 using getattr and default value to prevent 'OpenSearchVectorSearch' has no attribute 'bulk_size' (#29682)
- Description: Add a getattr call with a default value of 500 for
cls.bulk_size; this prevents the error below:
Error: type object 'OpenSearchVectorSearch' has no attribute 'bulk_size'

- Issue: https://github.com/langchain-ai/langchain/issues/29071
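
A minimal sketch of the defensive read (the stub class stands in for the
real one):

```python
class OpenSearchVectorSearchStub:  # stand-in for the real class
    pass

# Fall back to 500 when the class-level attribute is absent,
# instead of raising AttributeError.
bulk_size = getattr(OpenSearchVectorSearchStub, "bulk_size", 500)
print(bulk_size)  # 500
```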
2025-02-08 14:39:57 -05:00
Yao Tianjia
5d581ba22c langchain: support the situation when action_input is null in json output_parser (#29680)
Description:
This PR fixes the handling of null action_input in
[langchain.agents.output_parser]. Previously, passing null as
action_input could cause an OutputParserException with an unclear error
message, leaving the LLM with no signal for how to modify the action. The
changes include:

- Added null-check validation before processing action_input
- Implemented proper fallback behavior with default values
- Maintained backward compatibility with existing implementations

Error Examples:
```
{
  "action":"some action",
  "action_input":null
}
```

Issue:
None

Dependencies:
None
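
A minimal sketch of the null-check fallback (helper name and default value
are illustrative, not the library's actual code):

```python
import json

def parse_action(text: str) -> dict:
    parsed = json.loads(text)
    action_input = parsed.get("action_input")
    if action_input is None:  # JSON null -> safe default instead of an error
        action_input = {}
    return {"action": parsed["action"], "action_input": action_input}

print(parse_action('{"action": "some action", "action_input": null}'))
```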
2025-02-07 22:01:01 -05:00
Philippe PRADOS
beb75b2150 community[minor]: 05 - Refactoring PyPDFium2 parser (#29625)
This is one part of a larger Pull Request (PR) that is too large to be
submitted all at once. This specific part focuses on updating the
PyPDFium2 parser.

For more details, see
https://github.com/langchain-ai/langchain/pull/28970.
2025-02-07 21:31:12 -05:00
Christophe Bornet
723031d548 community: Bump ruff version to 0.9 (#29206)
Co-authored-by: Erick Friis <erick@langchain.dev>
2025-02-08 01:21:10 +00:00
Christophe Bornet
30f6c9f5c8 community: Use Blockbuster to detect blocking calls in asyncio during tests (#29609)
Same as https://github.com/langchain-ai/langchain/pull/29043 for
langchain-community.

**Dependencies:**
- blockbuster (test)

**Twitter handle:** cbornet_

Co-authored-by: Erick Friis <erick@langchain.dev>
2025-02-08 01:10:39 +00:00
Christophe Bornet
3a57a28daa langchain: Use Blockbuster to detect blocking calls in asyncio during tests (#29616)
Same as https://github.com/langchain-ai/langchain/pull/29043 for the
langchain package.

**Dependencies:**
- blockbuster (test)

**Twitter handle:** cbornet_

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2025-02-08 01:08:15 +00:00
Keenan Pepper
c67d473397 core: Make abatch_as_completed respect max_concurrency (#29426)
- **Description:** Add tests for respecting max_concurrency and
implement it for abatch_as_completed so that test passes
- **Issue:** #29425
- **Dependencies:** none
- **Twitter handle:** keenanpepper
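
A runnable sketch of the now-respected setting (the runnable and inputs are
illustrative):

```python
import asyncio
from langchain_core.runnables import RunnableLambda

doubler = RunnableLambda(lambda x: x * 2)

async def main():
    # max_concurrency rides in the RunnableConfig; results arrive as
    # (index, output) pairs in completion order.
    async for idx, out in doubler.abatch_as_completed(
        list(range(10)), config={"max_concurrency": 5}
    ):
        print(idx, out)

asyncio.run(main())
```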
2025-02-07 16:51:22 -08:00
Aaron V
dcfaae85d2 Core: Fix __add__ for concatting two BaseMessageChunk's (#29531)
Description:

The change allows you to use the overloaded `+` operator correctly when
`+`ing two BaseMessageChunk subclasses. Without this you *must*
instantiate a subclass for it to work.

Which feels... wrong. Base classes should be decoupled from subclasses
and should in no way depend on them.

Issue:

You can't `+` a BaseMessageChunk with a BaseMessageChunk

e.g. this will explode

```py
from langchain_core.outputs import (
    ChatGenerationChunk,
)
from langchain_core.messages import BaseMessageChunk


chunk1 = ChatGenerationChunk(
    message=BaseMessageChunk(
        type="customChunk",
        content="HI",
    ),
)

chunk2 = ChatGenerationChunk(
    message=BaseMessageChunk(
        type="customChunk",
        content="HI",
    ),
)

# this will throw
new_chunk = chunk1 + chunk2
```

In case anyone ran into this issue themselves, it's probably best to use
the AIMessageChunk:

a la 

```py
from langchain_core.outputs import (
    ChatGenerationChunk,
)
from langchain_core.messages import AIMessageChunk


chunk1 = ChatGenerationChunk(
    message=AIMessageChunk(
        content="HI",
    ),
)

chunk2 = ChatGenerationChunk(
    message=AIMessageChunk(
        content="HI",
    ),
)

# No explosion!
new_chunk = chunk1 + chunk2
```

Dependencies:

None!

Twitter handle: 
`aaron_vogler`


Co-authored-by: Erick Friis <erick@langchain.dev>
2025-02-08 00:43:36 +00:00
Marlene
4fa3ef0d55 Community/Partner: Adding Azure community and partner user agent to better track usage in Python (#29561)
- This pull request adds a `user_agent` parameter to Azure OpenAI, Azure
Search, and Whisper in the Community and Partner packages. This helps
identify the source of API requests so we can better track usage and
support the community. I will also be adding the user_agent to the new
`langchain-azure` repo.

- No connected issue or updated dependencies.
- Utilises existing tests and docs.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2025-02-07 23:28:30 +00:00
Ella Charlaix
c401254770 huggingface: Add ipex support to HuggingFaceEmbeddings (#29386)
ONNX and OpenVINO models are available by specifying the `backend`
argument (the model is loaded using `optimum`
https://github.com/huggingface/optimum)

```python
from langchain_huggingface import HuggingFaceEmbeddings

embedding = HuggingFaceEmbeddings(
    model_name=model_id,
    model_kwargs={"backend": "onnx"},
)
```

With this PR we also enable the IPEX backend 



```python
from langchain_huggingface import HuggingFaceEmbeddings

embedding = HuggingFaceEmbeddings(
    model_name=model_id,
    model_kwargs={"backend": "ipex"},
)
```
2025-02-07 15:21:09 -08:00
Bruno Alvisio
3eaf561561 core: Handle unterminated escape character when parsing partial JSON (#29065)
**Description**
Currently, when parsing a partial JSON, if a string ends with the escape
character, the whole key/value is removed. For example:

```
>>> from langchain_core.utils.json import parse_partial_json
>>> my_str = '{"foo": "bar", "baz": "qux\\'
>>> 
>>> parse_partial_json(my_str)
{'foo': 'bar'}
```

My expectation (and with this fix) would be for `parse_partial_json()`
to return:
```
>>> from langchain_core.utils.json import parse_partial_json
>>> 
>>> my_str = '{"foo": "bar", "baz": "qux\\'
>>> parse_partial_json(my_str)
{'foo': 'bar', 'baz': 'qux'}
```

Notes:
1. It could be argued that current behavior is still desired.
2. I have experienced this issue when streaming output from an LLM
and the chunk happens to end with `\\`
3. I haven't included tests. Will do if change is accepted.
4. This is especially troublesome when this function is used by

187131c55c/libs/core/langchain_core/output_parsers/transform.py (L111)

since what happens is that, for example, if the received sequence of
chunks is: `{"foo": "b` , `ar\\` :

Then, the result of calling `self.parse_result()` is:
```
{"foo": "b"}
```
and the second time:
```
{}
```

Co-authored-by: Erick Friis <erick@langchain.dev>
2025-02-07 23:18:21 +00:00
ccurme
0040d93b09 docs: showcase extras in chat model tabs (#29677)
Co-authored-by: Erick Friis <erick@langchain.dev>
2025-02-07 18:16:44 -05:00
Viren
252cf0af10 docs: add LangFair as a provider (#29390)
**Description:**
- Add `docs/docs/providers/langfair.mdx`
- Register langfair in `libs/packages.yml`

**Twitter handle:** @LangFair

**Tests and docs**
1. Integration tests not needed as this PR only adds a .mdx file to
docs.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
Co-authored-by: Dylan Bouchard <dylan.bouchard@cvshealth.com>
Co-authored-by: Dylan Bouchard <109233938+dylanbouchard@users.noreply.github.com>
Co-authored-by: Erick Friis <erickfriis@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2025-02-07 21:27:37 +00:00
Erick Friis
eb9eddae0c docs: use init_chat_model (#29623) 2025-02-07 12:39:27 -08:00
ccurme
bff25b552c community: release 0.3.17 (#29676) 2025-02-07 19:41:44 +00:00
ccurme
01314c51fa langchain: release 0.3.18 (#29654) 2025-02-07 13:40:26 -05:00
ccurme
92e2239414 openai[patch]: make parallel_tool_calls explicit kwarg of bind_tools (#29669)
Improves discoverability and documentation.

cc @vbarda
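
A usage sketch (model name and tool are illustrative):

```python
from langchain_openai import ChatOpenAI

def get_weather(city: str) -> str:
    """Return the weather for a city (illustrative tool)."""
    return "sunny"

# parallel_tool_calls is now an explicit keyword rather than a
# pass-through kwarg.
llm = ChatOpenAI(model="gpt-4o-mini").bind_tools(
    [get_weather], parallel_tool_calls=False
)
```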
2025-02-07 13:34:32 -05:00
ccurme
2a243df7bb infra: add UV_NO_SYNC to monorepo makefile (#29670)
Helpful for running `api_docs_quick_preview` locally.
2025-02-07 17:17:05 +00:00
Marc Ammann
5690575f13 openai: Removed tool_calls from completion chunk after other chunks have already been sent. (#29649)
- **Description:** Before sending the final completion chunk at the end of an
OpenAI stream, remove the tool_calls, as those have already been sent
as chunks.
- **Issue:** -
- **Dependencies:** -
- **Twitter handle:** -

@ccurme as mentioned in another PR

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-07 10:15:52 -05:00
Ikko Eltociear Ashimine
0d45ad57c1 community: update base_o365.py (#29657)
extention -> extension
2025-02-07 08:43:29 -05:00
weeix
1b064e198f docs: Fix llama.cpp GPU Installation in llamacpp.ipynb (Deprecated Env Variable) (#29659)
- **Description:** The llamacpp.ipynb notebook used a deprecated
environment variable, LLAMA_CUBLAS, for llama.cpp installation with GPU
support. This commit updates the notebook to use the correct GGML_CUDA
variable, fixing the installation error.
- **Issue:** none
- **Dependencies:** none
2025-02-07 08:43:09 -05:00
Vincent Emonet
3645181d0e qdrant: Add similarity_search_with_score_by_vector() function to the QdrantVectorStore (#29641)
Added `similarity_search_with_score_by_vector()` function to the
`QdrantVectorStore` class.

It is required when we want to query multiple times with the same
embeddings. It was present in the now-deprecated original `Qdrant`
vectorstore implementation, but was absent from the new one. It is also
implemented in a number of other `VectorStore` implementations.

I have added tests for this new function

Note that I also argued in this discussion that it should be part of the
general `VectorStore`
https://github.com/langchain-ai/langchain/discussions/29638
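
A usage sketch of querying multiple times with one embedding (collection,
URL, and model names are illustrative):

```python
from langchain_openai import OpenAIEmbeddings
from langchain_qdrant import QdrantVectorStore

embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
store = QdrantVectorStore.from_existing_collection(
    collection_name="my-docs",
    embedding=embeddings,
    url="http://localhost:6333",
)

# Embed once, reuse the same vector for several searches.
vector = embeddings.embed_query("what is a vector database?")
top_4 = store.similarity_search_with_score_by_vector(vector, k=4)
top_10 = store.similarity_search_with_score_by_vector(vector, k=10)
```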

Co-authored-by: Erick Friis <erick@langchain.dev>
2025-02-07 00:55:58 +00:00
ccurme
488cb4a739 anthropic: release 0.3.7 (#29653) 2025-02-06 17:05:57 -05:00
577 changed files with 19353 additions and 11547 deletions

.github/CODEOWNERS
View File

@@ -1,2 +1,2 @@
/.github/ @efriis @baskaryan @ccurme
/libs/packages.yml @efriis
/.github/ @baskaryan @ccurme
/libs/packages.yml @ccurme

View File

@@ -26,4 +26,4 @@ Additional guidelines:
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in langchain.
If no one reviews your PR within a few days, please @-mention one of baskaryan, efriis, eyurtsev, ccurme, vbarda, hwchase17.
If no one reviews your PR within a few days, please @-mention one of baskaryan, eyurtsev, ccurme, vbarda, hwchase17.

View File

@@ -39,7 +39,6 @@ IGNORED_PARTNERS = [
PY_312_MAX_PACKAGES = [
"libs/partners/huggingface", # https://github.com/pytorch/pytorch/issues/130249
"libs/partners/pinecone",
"libs/partners/voyageai",
]

View File

@@ -64,8 +64,6 @@ jobs:
NOMIC_API_KEY: ${{ secrets.NOMIC_API_KEY }}
WATSONX_APIKEY: ${{ secrets.WATSONX_APIKEY }}
WATSONX_PROJECT_ID: ${{ secrets.WATSONX_PROJECT_ID }}
PINECONE_API_KEY: ${{ secrets.PINECONE_API_KEY }}
PINECONE_ENVIRONMENT: ${{ secrets.PINECONE_ENVIRONMENT }}
ASTRA_DB_API_ENDPOINT: ${{ secrets.ASTRA_DB_API_ENDPOINT }}
ASTRA_DB_APPLICATION_TOKEN: ${{ secrets.ASTRA_DB_APPLICATION_TOKEN }}
ASTRA_DB_KEYSPACE: ${{ secrets.ASTRA_DB_KEYSPACE }}

View File

@@ -63,12 +63,12 @@ jobs:
if: ${{ ! startsWith(inputs.working-directory, 'libs/partners/') }}
working-directory: ${{ inputs.working-directory }}
run: |
uv sync --group test
uv sync --inexact --group test
- name: Install unit+integration test dependencies
if: ${{ startsWith(inputs.working-directory, 'libs/partners/') }}
working-directory: ${{ inputs.working-directory }}
run: |
uv sync --group test --group test_integration
uv sync --inexact --group test --group test_integration
- name: Analysing the code with our lint
working-directory: ${{ inputs.working-directory }}

View File

@@ -22,6 +22,7 @@ on:
env:
PYTHON_VERSION: "3.11"
UV_FROZEN: "true"
UV_NO_SYNC: "true"
jobs:
build:
@@ -296,8 +297,6 @@ jobs:
NOMIC_API_KEY: ${{ secrets.NOMIC_API_KEY }}
WATSONX_APIKEY: ${{ secrets.WATSONX_APIKEY }}
WATSONX_PROJECT_ID: ${{ secrets.WATSONX_PROJECT_ID }}
PINECONE_API_KEY: ${{ secrets.PINECONE_API_KEY }}
PINECONE_ENVIRONMENT: ${{ secrets.PINECONE_ENVIRONMENT }}
ASTRA_DB_API_ENDPOINT: ${{ secrets.ASTRA_DB_API_ENDPOINT }}
ASTRA_DB_APPLICATION_TOKEN: ${{ secrets.ASTRA_DB_APPLICATION_TOKEN }}
ASTRA_DB_KEYSPACE: ${{ secrets.ASTRA_DB_KEYSPACE }}
@@ -313,12 +312,88 @@ jobs:
run: make integration_tests
working-directory: ${{ inputs.working-directory }}
# Test select published packages against new core
test-prior-published-packages-against-new-core:
needs:
- build
- release-notes
- test-pypi-publish
- pre-release-checks
if: ${{ startsWith(inputs.working-directory, 'libs/core') }}
runs-on: ubuntu-latest
strategy:
matrix:
partner: [openai, anthropic]
fail-fast: false # Continue testing other partners if one fails
env:
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
AZURE_OPENAI_API_VERSION: ${{ secrets.AZURE_OPENAI_API_VERSION }}
AZURE_OPENAI_API_BASE: ${{ secrets.AZURE_OPENAI_API_BASE }}
AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_CHAT_DEPLOYMENT_NAME }}
AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME }}
AZURE_OPENAI_LLM_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LLM_DEPLOYMENT_NAME }}
AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME }}
steps:
- uses: actions/checkout@v4
- name: Set up Python + uv
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ env.PYTHON_VERSION }}
- uses: actions/download-artifact@v4
with:
name: dist
path: ${{ inputs.working-directory }}/dist/
- name: Test against ${{ matrix.partner }}
run: |
# Identify latest tag
LATEST_PACKAGE_TAG="$(
git ls-remote --tags origin "langchain-${{ matrix.partner }}*" \
| awk '{print $2}' \
| sed 's|refs/tags/||' \
| sort -Vr \
| head -n 1
)"
echo "Latest package tag: $LATEST_PACKAGE_TAG"
# Shallow-fetch just that single tag
git fetch --depth=1 origin tag "$LATEST_PACKAGE_TAG"
# Navigate to the partner directory
cd $GITHUB_WORKSPACE/libs/partners/${{ matrix.partner }}
# Checkout the latest package files
git checkout "$LATEST_PACKAGE_TAG" -- .
# Print as a sanity check
echo "Version number from pyproject.toml: "
cat pyproject.toml | grep "version = "
# Run tests
uv sync --group test --group test_integration
uv pip install ../../core/dist/*.whl
make integration_tests
publish:
needs:
- build
- release-notes
- test-pypi-publish
- pre-release-checks
- test-prior-published-packages-against-new-core
if: >
always() &&
needs.build.result == 'success' &&
needs.release-notes.result == 'success' &&
needs.test-pypi-publish.result == 'success' &&
needs.pre-release-checks.result == 'success' && (
(startsWith(inputs.working-directory, 'libs/core') && needs.test-prior-published-packages-against-new-core.result == 'success')
|| (!startsWith(inputs.working-directory, 'libs/core'))
)
runs-on: ubuntu-latest
permissions:
# This permission is used for trusted publishing:

View File

@@ -14,6 +14,7 @@ on:
env:
UV_FROZEN: "true"
UV_NO_SYNC: "true"
jobs:
build:

View File

@@ -19,6 +19,7 @@ on:
env:
UV_FROZEN: "true"
UV_NO_SYNC: "true"
jobs:
build:

View File

@@ -19,6 +19,7 @@ concurrency:
env:
UV_FROZEN: "true"
UV_NO_SYNC: "true"
jobs:
build:

View File

@@ -15,7 +15,7 @@ on:
env:
POETRY_VERSION: "1.8.4"
UV_FROZEN: "true"
DEFAULT_LIBS: '["libs/partners/openai", "libs/partners/anthropic", "libs/partners/fireworks", "libs/partners/groq", "libs/partners/mistralai", "libs/partners/google-vertexai", "libs/partners/google-genai", "libs/partners/aws"]'
DEFAULT_LIBS: '["libs/partners/openai", "libs/partners/anthropic", "libs/partners/fireworks", "libs/partners/groq", "libs/partners/mistralai", "libs/partners/xai", "libs/partners/google-vertexai", "libs/partners/google-genai", "libs/partners/aws"]'
POETRY_LIBS: ("libs/partners/google-vertexai" "libs/partners/google-genai" "libs/partners/aws")
jobs:
@@ -139,6 +139,7 @@ jobs:
GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
HUGGINGFACEHUB_API_TOKEN: ${{ secrets.HUGGINGFACEHUB_API_TOKEN }}
MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
XAI_API_KEY: ${{ secrets.XAI_API_KEY }}
COHERE_API_KEY: ${{ secrets.COHERE_API_KEY }}
NVIDIA_API_KEY: ${{ secrets.NVIDIA_API_KEY }}
GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}

View File

@@ -97,12 +97,6 @@ repos:
entry: make -C libs/partners/openai format
files: ^libs/partners/openai/
pass_filenames: false
- id: pinecone
name: format partners/pinecone
language: system
entry: make -C libs/partners/pinecone format
files: ^libs/partners/pinecone/
pass_filenames: false
- id: prompty
name: format partners/prompty
language: system

View File

@@ -82,3 +82,6 @@ lint lint_package lint_tests:
format format_diff:
uv run --group lint ruff format docs cookbook
uv run --group lint ruff check --select I --fix docs cookbook
update-package-downloads:
uv run python docs/scripts/packages_yml_get_downloads.py

View File

@@ -21,7 +21,6 @@ Notebook | Description
[code-analysis-deeplake.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/code-analysis-deeplake.ipynb) | Analyze its own code base with the help of gpt and activeloop's deep lake.
[custom_agent_with_plugin_retri...](https://github.com/langchain-ai/langchain/tree/master/cookbook/custom_agent_with_plugin_retrieval.ipynb) | Build a custom agent that can interact with ai plugins by retrieving tools and creating natural language wrappers around openapi endpoints.
[custom_agent_with_plugin_retri...](https://github.com/langchain-ai/langchain/tree/master/cookbook/custom_agent_with_plugin_retrieval_using_plugnplai.ipynb) | Build a custom agent with plugin retrieval functionality, utilizing ai plugins from the `plugnplai` directory.
[databricks_sql_db.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/databricks_sql_db.ipynb) | Connect to databricks runtimes and databricks sql.
[deeplake_semantic_search_over_...](https://github.com/langchain-ai/langchain/tree/master/cookbook/deeplake_semantic_search_over_chat.ipynb) | Perform semantic search and question-answering over a group chat using activeloop's deep lake with gpt4.
[elasticsearch_db_qa.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/elasticsearch_db_qa.ipynb) | Interact with elasticsearch analytics databases in natural language and build search queries via the elasticsearch dsl API.
[extraction_openai_tools.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/extraction_openai_tools.ipynb) | Structured Data Extraction with OpenAI Tools

View File

@@ -66,7 +66,7 @@
},
"outputs": [],
"source": [
"#!python3 -m pip install --upgrade langchain deeplake openai"
"#!python3 -m pip install --upgrade langchain langchain-deeplake openai"
]
},
{
@@ -666,89 +666,26 @@
},
{
"cell_type": "code",
"execution_count": 15,
"execution_count": null,
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Your Deep Lake dataset has been successfully created!\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
" \r"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Dataset(path='hub://adilkhan/langchain-code', tensors=['embedding', 'id', 'metadata', 'text'])\n",
"\n",
" tensor htype shape dtype compression\n",
" ------- ------- ------- ------- ------- \n",
" embedding embedding (8244, 1536) float32 None \n",
" id text (8244, 1) str None \n",
" metadata json (8244, 1) str None \n",
" text text (8244, 1) str None \n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": []
},
{
"data": {
"text/plain": [
"<langchain_community.vectorstores.deeplake.DeepLake at 0x7fe1b67d7a30>"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"from langchain_community.vectorstores import DeepLake\n",
"from langchain_deeplake.vectorstores import DeeplakeVectorStore\n",
"\n",
"username = \"<USERNAME_OR_ORG>\"\n",
"\n",
"\n",
"db = DeepLake.from_documents(\n",
" texts, embeddings, dataset_path=f\"hub://{username}/langchain-code\", overwrite=True\n",
"db = DeeplakeVectorStore.from_documents(\n",
" documents=texts,\n",
" embedding=embeddings,\n",
" dataset_path=f\"hub://{username}/langchain-code\",\n",
" overwrite=True,\n",
")\n",
"db"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"`Optional`: You can also use Deep Lake's Managed Tensor Database as a hosting service and run queries there. In order to do so, it is necessary to specify the runtime parameter as {'tensor_db': True} during the creation of the vector store. This configuration enables the execution of queries on the Managed Tensor Database, rather than on the client side. It should be noted that this functionality is not applicable to datasets stored locally or in-memory. In the event that a vector store has already been created outside of the Managed Tensor Database, it is possible to transfer it to the Managed Tensor Database by following the prescribed steps."
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [],
"source": [
"# from langchain_community.vectorstores import DeepLake\n",
"\n",
"# db = DeepLake.from_documents(\n",
"# texts, embeddings, dataset_path=f\"hub://{<org_id>}/langchain-code\", runtime={\"tensor_db\": True}\n",
"# )\n",
"# db"
]
},
{
"attachments": {},
"cell_type": "markdown",
@@ -760,24 +697,16 @@
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": null,
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Deep Lake Dataset in hub://adilkhan/langchain-code already exists, loading from the storage\n"
]
}
],
"outputs": [],
"source": [
"db = DeepLake(\n",
"db = DeeplakeVectorStore(\n",
" dataset_path=f\"hub://{username}/langchain-code\",\n",
" read_only=True,\n",
" embedding=embeddings,\n",
" embedding_function=embeddings,\n",
")"
]
},
@@ -796,36 +725,6 @@
"retriever.search_kwargs[\"k\"] = 20"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also specify user defined functions using [Deep Lake filters](https://docs.deeplake.ai/en/latest/deeplake.core.dataset.html#deeplake.core.dataset.Dataset.filter)"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"def filter(x):\n",
" # filter based on source code\n",
" if \"something\" in x[\"text\"].data()[\"value\"]:\n",
" return False\n",
"\n",
" # filter based on path e.g. extension\n",
" metadata = x[\"metadata\"].data()[\"value\"]\n",
" return \"only_this\" in metadata[\"source\"] or \"also_that\" in metadata[\"source\"]\n",
"\n",
"\n",
"### turn on below for custom filtering\n",
"# retriever.search_kwargs['filter'] = filter"
]
},
{
"cell_type": "code",
"execution_count": 20,
@@ -837,10 +736,8 @@
"from langchain.chains import ConversationalRetrievalChain\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"model = ChatOpenAI(\n",
" model_name=\"gpt-3.5-turbo-0613\"\n",
") # 'ada' 'gpt-3.5-turbo-0613' 'gpt-4',\n",
"qa = ConversationalRetrievalChain.from_llm(model, retriever=retriever)"
"model = ChatOpenAI(model=\"gpt-3.5-turbo-0613\") # 'ada' 'gpt-3.5-turbo-0613' 'gpt-4',\n",
"qa = RetrievalQA.from_llm(model, retriever=retriever)"
]
},
{

View File

@@ -1,273 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "707d13a7",
"metadata": {},
"source": [
"# Databricks\n",
"\n",
"This notebook covers how to connect to the [Databricks runtimes](https://docs.databricks.com/runtime/index.html) and [Databricks SQL](https://www.databricks.com/product/databricks-sql) using the SQLDatabase wrapper of LangChain.\n",
"It is broken into 3 parts: installation and setup, connecting to Databricks, and examples."
]
},
{
"cell_type": "markdown",
"id": "0076d072",
"metadata": {},
"source": [
"## Installation and Setup"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "739b489b",
"metadata": {},
"outputs": [],
"source": [
"!pip install databricks-sql-connector"
]
},
{
"cell_type": "markdown",
"id": "73113163",
"metadata": {},
"source": [
"## Connecting to Databricks\n",
"\n",
"You can connect to [Databricks runtimes](https://docs.databricks.com/runtime/index.html) and [Databricks SQL](https://www.databricks.com/product/databricks-sql) using the `SQLDatabase.from_databricks()` method.\n",
"\n",
"### Syntax\n",
"```python\n",
"SQLDatabase.from_databricks(\n",
" catalog: str,\n",
" schema: str,\n",
" host: Optional[str] = None,\n",
" api_token: Optional[str] = None,\n",
" warehouse_id: Optional[str] = None,\n",
" cluster_id: Optional[str] = None,\n",
" engine_args: Optional[dict] = None,\n",
" **kwargs: Any)\n",
"```\n",
"### Required Parameters\n",
"* `catalog`: The catalog name in the Databricks database.\n",
"* `schema`: The schema name in the catalog.\n",
"\n",
"### Optional Parameters\n",
"There following parameters are optional. When executing the method in a Databricks notebook, you don't need to provide them in most of the cases.\n",
"* `host`: The Databricks workspace hostname, excluding 'https://' part. Defaults to 'DATABRICKS_HOST' environment variable or current workspace if in a Databricks notebook.\n",
"* `api_token`: The Databricks personal access token for accessing the Databricks SQL warehouse or the cluster. Defaults to 'DATABRICKS_TOKEN' environment variable or a temporary one is generated if in a Databricks notebook.\n",
"* `warehouse_id`: The warehouse ID in the Databricks SQL.\n",
"* `cluster_id`: The cluster ID in the Databricks Runtime. If running in a Databricks notebook and both 'warehouse_id' and 'cluster_id' are None, it uses the ID of the cluster the notebook is attached to.\n",
"* `engine_args`: The arguments to be used when connecting Databricks.\n",
"* `**kwargs`: Additional keyword arguments for the `SQLDatabase.from_uri` method."
]
},
{
"cell_type": "markdown",
"id": "b11c7e48",
"metadata": {},
"source": [
"## Examples"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "8102bca0",
"metadata": {},
"outputs": [],
"source": [
"# Connecting to Databricks with SQLDatabase wrapper\n",
"from langchain_community.utilities import SQLDatabase\n",
"\n",
"db = SQLDatabase.from_databricks(catalog=\"samples\", schema=\"nyctaxi\")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "9dd36f58",
"metadata": {},
"outputs": [],
"source": [
"# Creating a OpenAI Chat LLM wrapper\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(temperature=0, model_name=\"gpt-4\")"
]
},
{
"cell_type": "markdown",
"id": "5b5c5f1a",
"metadata": {},
"source": [
"### SQL Chain example\n",
"\n",
"This example demonstrates the use of the [SQL Chain](https://python.langchain.com/en/latest/modules/chains/examples/sqlite.html) for answering a question over a Databricks database."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "36f2270b",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.utilities import SQLDatabaseChain\n",
"\n",
"db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "4e2b5f25",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new SQLDatabaseChain chain...\u001b[0m\n",
"What is the average duration of taxi rides that start between midnight and 6am?\n",
"SQLQuery:\u001b[32;1m\u001b[1;3mSELECT AVG(UNIX_TIMESTAMP(tpep_dropoff_datetime) - UNIX_TIMESTAMP(tpep_pickup_datetime)) as avg_duration\n",
"FROM trips\n",
"WHERE HOUR(tpep_pickup_datetime) >= 0 AND HOUR(tpep_pickup_datetime) < 6\u001b[0m\n",
"SQLResult: \u001b[33;1m\u001b[1;3m[(987.8122786304605,)]\u001b[0m\n",
"Answer:\u001b[32;1m\u001b[1;3mThe average duration of taxi rides that start between midnight and 6am is 987.81 seconds.\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'The average duration of taxi rides that start between midnight and 6am is 987.81 seconds.'"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"db_chain.run(\n",
" \"What is the average duration of taxi rides that start between midnight and 6am?\"\n",
")"
]
},
{
"cell_type": "markdown",
"id": "e496d5e5",
"metadata": {},
"source": [
"### SQL Database Agent example\n",
"\n",
"This example demonstrates the use of the [SQL Database Agent](/docs/integrations/tools/sql_database) for answering questions over a Databricks database."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "9918e86a",
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import create_sql_agent\n",
"from langchain_community.agent_toolkits import SQLDatabaseToolkit\n",
"\n",
"toolkit = SQLDatabaseToolkit(db=db, llm=llm)\n",
"agent = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "c484a76e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mAction: list_tables_sql_db\n",
"Action Input: \u001b[0m\n",
"Observation: \u001b[38;5;200m\u001b[1;3mtrips\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mI should check the schema of the trips table to see if it has the necessary columns for trip distance and duration.\n",
"Action: schema_sql_db\n",
"Action Input: trips\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3m\n",
"CREATE TABLE trips (\n",
"\ttpep_pickup_datetime TIMESTAMP, \n",
"\ttpep_dropoff_datetime TIMESTAMP, \n",
"\ttrip_distance FLOAT, \n",
"\tfare_amount FLOAT, \n",
"\tpickup_zip INT, \n",
"\tdropoff_zip INT\n",
") USING DELTA\n",
"\n",
"/*\n",
"3 rows from trips table:\n",
"tpep_pickup_datetime\ttpep_dropoff_datetime\ttrip_distance\tfare_amount\tpickup_zip\tdropoff_zip\n",
"2016-02-14 16:52:13+00:00\t2016-02-14 17:16:04+00:00\t4.94\t19.0\t10282\t10171\n",
"2016-02-04 18:44:19+00:00\t2016-02-04 18:46:00+00:00\t0.28\t3.5\t10110\t10110\n",
"2016-02-17 17:13:57+00:00\t2016-02-17 17:17:55+00:00\t0.7\t5.0\t10103\t10023\n",
"*/\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mThe trips table has the necessary columns for trip distance and duration. I will write a query to find the longest trip distance and its duration.\n",
"Action: query_checker_sql_db\n",
"Action Input: SELECT trip_distance, tpep_dropoff_datetime - tpep_pickup_datetime as duration FROM trips ORDER BY trip_distance DESC LIMIT 1\u001b[0m\n",
"Observation: \u001b[31;1m\u001b[1;3mSELECT trip_distance, tpep_dropoff_datetime - tpep_pickup_datetime as duration FROM trips ORDER BY trip_distance DESC LIMIT 1\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mThe query is correct. I will now execute it to find the longest trip distance and its duration.\n",
"Action: query_sql_db\n",
"Action Input: SELECT trip_distance, tpep_dropoff_datetime - tpep_pickup_datetime as duration FROM trips ORDER BY trip_distance DESC LIMIT 1\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m[(30.6, '0 00:43:31.000000000')]\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mI now know the final answer.\n",
"Final Answer: The longest trip distance is 30.6 miles and it took 43 minutes and 31 seconds.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'The longest trip distance is 30.6 miles and it took 43 minutes and 31 seconds.'"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent.run(\"What is the longest trip distance and how long did it take?\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -115,7 +115,7 @@
"\n",
"PROMPT_TEMPLATE = \"\"\"Given an input question, create a syntactically correct Elasticsearch query to run. Unless the user specifies in their question a specific number of examples they wish to obtain, always limit your query to at most {top_k} results. You can order the results by a relevant column to return the most interesting examples in the database.\n",
"\n",
"Unless told to do not query for all the columns from a specific index, only ask for a the few relevant columns given the question.\n",
"Unless told to do not query for all the columns from a specific index, only ask for a few relevant columns given the question.\n",
"\n",
"Pay attention to use only the column names that you can see in the mapping description. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which index. Return the query as valid json.\n",
"\n",

View File

@@ -21,40 +21,6 @@
"* Passing raw images and text chunks to a multimodal LLM for answer synthesis "
]
},
{
"cell_type": "markdown",
"id": "6a6b6e73",
"metadata": {},
"source": [
"## Start VDMS Server\n",
"\n",
"Let's start a VDMS docker using port 55559 instead of default 55555. \n",
"Keep note of the port and hostname as this is needed for the vector store as it uses the VDMS Python client to connect to the server."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "5f483872",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"a1b9206b08ef626e15b356bf9e031171f7c7eb8f956a2733f196f0109246fe2b\n"
]
}
],
"source": [
"! docker run --rm -d -p 55559:55555 --name vdms_rag_nb intellabs/vdms:latest\n",
"\n",
"# Connect to VDMS Vector Store\n",
"from langchain_community.vectorstores.vdms import VDMS_Client\n",
"\n",
"vdms_client = VDMS_Client(port=55559)"
]
},
{
"cell_type": "markdown",
"id": "2498a0a1",
@@ -67,20 +33,20 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 1,
"id": "febbc459-ebba-4c1a-a52b-fed7731593f8",
"metadata": {},
"outputs": [],
"source": [
"! pip install --quiet -U vdms langchain-experimental\n",
"! pip install --quiet -U langchain-vdms langchain-experimental langchain-ollama\n",
"\n",
"# lock to 0.10.19 due to a persistent bug in more recent versions\n",
"! pip install --quiet pdf2image \"unstructured[all-docs]==0.10.19\" pillow pydantic lxml open_clip_torch"
"! pip install --quiet pdf2image \"unstructured[all-docs]==0.10.19\" \"onnxruntime==1.17.0\" pillow pydantic lxml open_clip_torch"
]
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 2,
"id": "78ac6543",
"metadata": {},
"outputs": [],
@@ -89,6 +55,40 @@
"# load_dotenv(find_dotenv(), override=True);"
]
},
{
"cell_type": "markdown",
"id": "e5c8916e",
"metadata": {},
"source": [
"## Start VDMS Server\n",
"\n",
"Let's start a VDMS docker using port 55559 instead of default 55555. \n",
"Keep note of the port and hostname as this is needed for the vector store as it uses the VDMS Python client to connect to the server."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "1e6e2c15",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"a701e5ac3523006e9540b5355e2d872d5d78383eab61562a675d5b9ac21fde65\n"
]
}
],
"source": [
"! docker run --rm -d -p 55559:55555 --name vdms_rag_nb intellabs/vdms:latest\n",
"\n",
"# Connect to VDMS Vector Store\n",
"from langchain_vdms.vectorstores import VDMS_Client\n",
"\n",
"vdms_client = VDMS_Client(port=55559)"
]
},
{
"cell_type": "markdown",
"id": "1e94b3fb-8e3e-4736-be0a-ad881626c7bd",
@@ -115,11 +115,12 @@
"import requests\n",
"\n",
"# Folder to store pdf and extracted images\n",
"datapath = Path(\"./data/multimodal_files\").resolve()\n",
"base_datapath = Path(\"./data/multimodal_files\").resolve()\n",
"datapath = base_datapath / \"images\"\n",
"datapath.mkdir(parents=True, exist_ok=True)\n",
"\n",
"pdf_url = \"https://www.loc.gov/lcm/pdf/LCM_2020_1112.pdf\"\n",
"pdf_path = str(datapath / pdf_url.split(\"/\")[-1])\n",
"pdf_path = str(base_datapath / pdf_url.split(\"/\")[-1])\n",
"with open(pdf_path, \"wb\") as f:\n",
" f.write(requests.get(pdf_url).content)"
]
@@ -185,8 +186,8 @@
"source": [
"import os\n",
"\n",
"from langchain_community.vectorstores import VDMS\n",
"from langchain_experimental.open_clip import OpenCLIPEmbeddings\n",
"from langchain_vdms import VDMS\n",
"\n",
"# Create VDMS\n",
"vectorstore = VDMS(\n",
@@ -312,10 +313,10 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.llms.ollama import Ollama\n",
"from langchain_core.messages import HumanMessage\n",
"from langchain_core.messages import HumanMessage, SystemMessage\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnableLambda, RunnablePassthrough\n",
"from langchain_ollama.llms import OllamaLLM\n",
"\n",
"\n",
"def prompt_func(data_dict):\n",
@@ -340,8 +341,8 @@
" \"As an expert art critic and historian, your task is to analyze and interpret images, \"\n",
" \"considering their historical and cultural significance. Alongside the images, you will be \"\n",
" \"provided with related text to offer context. Both will be retrieved from a vectorstore based \"\n",
" \"on user-input keywords. Please convert answers to english and use your extensive knowledge \"\n",
" \"and analytical skills to provide a comprehensive summary that includes:\\n\"\n",
" \"on user-input keywords. Please use your extensive knowledge and analytical skills to provide a \"\n",
" \"comprehensive summary that includes:\\n\"\n",
" \"- A detailed description of the visual elements in the image.\\n\"\n",
" \"- The historical and cultural context of the image.\\n\"\n",
" \"- An interpretation of the image's symbolism and meaning.\\n\"\n",
@@ -359,7 +360,7 @@
" \"\"\"Multi-modal RAG chain\"\"\"\n",
"\n",
" # Multi-modal LLM\n",
" llm_model = Ollama(\n",
" llm_model = OllamaLLM(\n",
" verbose=True, temperature=0.5, model=\"llava\", base_url=\"http://localhost:11434\"\n",
" )\n",
"\n",
@@ -419,6 +420,121 @@
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"© 2017 LARRY D. MOORE\n",
"\n",
"contemporary criticism of the less-than- thoughtful circumstances under which Lange photographed Thomson, the pictures power to engage has not diminished. Artists in other countries have appropriated the image, changing the mothers features into those of other ethnicities, but keeping her expression and the positions of her clinging children. Long after anyone could help the Thompson family, this picture has resonance in another time of national crisis, unemployment and food shortages.\n",
"\n",
"A striking, but very different picture is a 1900 portrait of the legendary Hin-mah-too-yah- lat-kekt (Chief Joseph) of the Nez Percé people. The Bureau of American Ethnology in Washington, D.C., regularly arranged for its photographer, De Lancey Gill, to photograph Native American delegations that came to the capital to confer with officials about tribal needs and concerns. Although Gill described Chief Joseph as having “an air of gentleness and quiet reserve,” the delegate skeptically appraises the photographer, which is not surprising given that the United States broke five treaties with Chief Joseph and his father between 1855 and 1885.\n",
"\n",
"More than a glance, second looks may reveal new knowledge into complex histories.\n",
"\n",
"Anne Wilkes Tucker is the photography curator emeritus of the Museum of Fine Arts, Houston and curator of the “Not an Ostrich” exhibition.\n",
"\n",
"28\n",
"\n",
"28 LIBRARY OF CONGRESS MAGAZINE\n",
"\n",
"LIBRARY OF CONGRESS MAGAZINE\n",
"THEYRE WILLING TO HAVE MEENTERTAIN THEM DURING THE DAY,BUT AS SOON AS IT STARTSGETTING DARK, THEY ALLGO OFF, AND LEAVE ME! \n",
"ROSA PARKS: IN HER OWN WORDS\n",
"\n",
"COMIC ART: 120 YEARS OF PANELS AND PAGES\n",
"\n",
"SHALL NOT BE DENIED: WOMEN FIGHT FOR THE VOTE\n",
"\n",
"More information loc.gov/exhibits\n",
"Nuestra Sefiora de las Iguanas\n",
"\n",
"Graciela Iturbides 1979 portrait of Zobeida Díaz in the town of Juchitán in southeastern Mexico conveys the strength of women and reflects their important contributions to the economy. Díaz, a merchant, was selling iguanas to cook and eat, carrying them on her head, as is customary.\n",
"\n",
"GRACIELA ITURBIDE. “NUESTRA SEÑORA DE LAS IGUANAS.” 1979. GELATIN SILVER PRINT. © GRACIELA ITURBIDE, USED BY PERMISSION. PRINTS AND PHOTOGRAPHS DIVISION.\n",
"\n",
"Iturbide requested permission to take a photograph, but this proved challenging because the iguanas were constantly moving, causing Díaz to laugh. The result, however, was a brilliant portrait that the inhabitants of Juchitán claimed with pride. They have reproduced it on posters and erected a statue honoring Díaz and her iguanas. The photo now appears throughout the world, inspiring supporters of feminism, womens rights and gender equality.\n",
"\n",
"—Adam Silvia is a curator in the Prints and Photographs Division.\n",
"\n",
"6\n",
"\n",
"6 LIBRARY OF CONGRESS MAGAZINE\n",
"\n",
"LIBRARY OF CONGRESS MAGAZINE\n",
"\n",
"Migrant Mother is Florence Owens Thompson\n",
"\n",
"The iconic portrait that became the face of the Great Depression is also the most famous photograph in the collections of the Library of Congress.\n",
"\n",
"The Library holds the original source of the photo — a nitrate negative measuring 4 by 5 inches. Do you see a faint thumb in the bottom right? The photographer, Dorothea Lange, found the thumb distracting and after a few years had the negative altered to make the thumb almost invisible. Langes boss at the Farm Security Administration, Roy Stryker, criticized her action because altering a negative undermines the credibility of a documentary photo.\n",
"Shrimp Picker\n",
"\n",
"The photos and evocative captions of Lewis Hine served as source material for National Child Labor Committee reports and exhibits exposing abusive child labor practices in the United States in the first decades of the 20th century.\n",
"\n",
"LEWIS WICKES HINE. “MANUEL, THE YOUNG SHRIMP-PICKER, FIVE YEARS OLD, AND A MOUNTAIN OF CHILD-LABOR OYSTER SHELLS BEHIND HIM. HE WORKED LAST YEAR. UNDERSTANDS NOT A WORD OF ENGLISH. DUNBAR, LOPEZ, DUKATE COMPANY. LOCATION: BILOXI, MISSISSIPPI.” FEBRUARY 1911. NATIONAL CHILD LABOR COMMITTEE COLLECTION. PRINTS AND PHOTOGRAPHS DIVISION.\n",
"\n",
"For 15 years, Hine\n",
"\n",
"crisscrossed the country, documenting the practices of the worst offenders. His effective use of photography made him one of the committee's greatest publicists in the campaign for legislation to ban child labor.\n",
"\n",
"Hine was a master at taking photos that catch attention and convey a message and, in this photo, he framed Manuel in a setting that drove home the boys small size and unsafe environment.\n",
"\n",
"Captions on photos of other shrimp pickers emphasized their long working hours as well as one hazard of the job: The acid from the shrimp made pickers hands sore and “eats the shoes off your feet.”\n",
"\n",
"Such images alerted viewers to all that workers, their families and the nation sacrificed when children were part of the labor force. The Library holds paper records of the National Child Labor Committee as well as over 5,000 photographs.\n",
"\n",
"—Barbara Natanson is head of the Reference Section in the Prints and Photographs Division.\n",
"\n",
"8\n",
"\n",
"LIBRARY OF CONGRESS MAGAZINE\n",
"\n",
"LIBRARY OF CONGRESS MAGAZINE\n",
"\n",
"Intergenerational Portrait\n",
"\n",
"Raised on the Apsáalooke (Crow) reservation in Montana, photographer Wendy Red Star created her “Apsáalooke Feminist” self-portrait series with her daughter Beatrice. With a dash of wry humor, mother and daughter are their own first-person narrators.\n",
"\n",
"Red Star explains the significance of their appearance: “The dress has power: You feel strong and regal wearing it. In my art, the elk tooth dress specifically symbolizes Crow womanhood and the matrilineal line connecting me to my ancestors. As a mother, I spend hours searching for the perfect elk tooth dress materials to make a prized dress for my daughter.”\n",
"\n",
"In a world that struggles with cultural identities, this photograph shows us the power and beauty of blending traditional and contemporary styles.\n",
"American Gothic Product #216040262 Price: $24\n",
"\n",
"U.S. Capitol at Night Product #216040052 Price: $24\n",
"\n",
"Good Reading Ahead Product #21606142 Price: $24\n",
"\n",
"Gordon Parks created an iconic image with this 1942 photograph of cleaning woman Ella Watson.\n",
"\n",
"Snow blankets the U.S. Capitol in this classic image by Ernest L. Crandall.\n",
"\n",
"Start your new year out right with a poster promising good reading for months to come.\n",
"\n",
"▪ Order online: loc.gov/shop ▪ Order by phone: 888.682.3557\n",
"\n",
"26\n",
"\n",
"LIBRARY OF CONGRESS MAGAZINE\n",
"\n",
"LIBRARY OF CONGRESS MAGAZINE\n",
"\n",
"SUPPORT\n",
"\n",
"A PICTURE OF PHILANTHROPY Annenberg Foundation Gives $1 Million and a Photographic Collection to the Library.\n",
"\n",
"A major gift by Wallis Annenberg and the Annenberg Foundation in Los Angeles will support the effort to reimagine the visitor experience at the Library of Congress. The foundation also is donating 1,000 photographic prints from its Annenberg Space for Photography exhibitions to the Library.\n",
"\n",
"The Library is pursuing a multiyear plan to transform the experience of its nearly 2 million annual visitors, share more of its treasures with the public and show how Library collections connect with visitors own creativity and research. The project is part of a strategic plan established by Librarian of Congress Carla Hayden to make the Library more user-centered for Congress, creators and learners of all ages.\n",
"\n",
"A 2018 exhibition at the Annenberg Space for Photography in Los Angeles featured over 400 photographs from the Library. The Library is planning a future photography exhibition, based on the Annenberg-curated show, along with a documentary film on the Library and its history, produced by the Annenberg Space for Photography.\n",
"\n",
"“The nations library is honored to have the strong support of Wallis Annenberg and the Annenberg Foundation as we enhance the experience for our visitors,” Hayden said. “We know that visitors will find new connections to the Library through the incredible photography collections and countless other treasures held here to document our nations history and creativity.”\n",
"\n",
"To enhance the Librarys holdings, the foundation is giving the Library photographic prints for long-term preservation from 10 other exhibitions hosted at the Annenberg Space for Photography. The Library holds one of the worlds largest photography collections, with about 14 million photos and over 1 million images digitized and available online.\n",
"18 LIBRARY OF CONGRESS MAGAZINE\n"
]
}
],
"source": [
@@ -461,10 +577,17 @@
"name": "stdout",
"output_type": "stream",
"text": [
" The image depicts a woman with several children. The woman appears to be of Cherokee heritage, as suggested by the text provided. The image is described as having been initially regretted by the subject, Florence Owens Thompson, due to her feeling that it did not accurately represent her leadership qualities.\n",
"The historical and cultural context of the image is tied to the Great Depression and the Dust Bowl, both of which affected the Cherokee people in Oklahoma. The photograph was taken during this period, and its subject, Florence Owens Thompson, was a leader within her community who worked tirelessly to help those affected by these crises.\n",
"The image's symbolism and meaning can be interpreted as a representation of resilience and strength in the face of adversity. The woman is depicted with multiple children, which could signify her role as a caregiver and protector during difficult times.\n",
"Connections between the image and the related text include Florence Owens Thompson's leadership qualities and her regretted feelings about the photograph. Additionally, the mention of Dorothea Lange, the photographer who took this photo, ties the image to its historical context and the broader narrative of the Great Depression and Dust Bowl in Oklahoma. \n"
" The image is a black and white photograph by Dorothea Lange titled \"Destitute Pea Pickers in California. Mother of Seven Children. Age Thirty-Two. Nipomo, California.\" It was taken in March 1936 as part of the Farm Security Administration-Office of War Information Collection.\n",
"\n",
"The photograph features a woman with seven children, who appear to be in a state of poverty and hardship. The woman is seated, looking directly at the camera, while three of her children are standing behind her. They all seem to be dressed in ragged clothing, indicative of their impoverished condition.\n",
"\n",
"The historical context of this image is related to the Great Depression, which was a period of economic hardship in the United States that lasted from 1929 to 1939. During this time, many people struggled to make ends meet, and poverty was widespread. This photograph captures the plight of one such family during this difficult period.\n",
"\n",
"The symbolism of the image is multifaceted. The woman's direct gaze at the camera can be seen as a plea for help or an expression of desperation. The ragged clothing of the children serves as a stark reminder of the poverty and hardship experienced by many during this time.\n",
"\n",
"In terms of connections to the related text, it is mentioned that Florence Owens Thompson, the woman in the photograph, initially regretted having her picture taken. However, she later came to appreciate the importance of the image as a representation of the struggles faced by many during the Great Depression. The mention of Helena Zinkham suggests that she may have played a role in the creation or distribution of this photograph.\n",
"\n",
"Overall, this image is a powerful depiction of poverty and hardship during the Great Depression, capturing the resilience and struggles of one family amidst difficult times. \n"
]
}
],
@@ -491,11 +614,17 @@
"source": [
"! docker kill vdms_rag_nb"
]
},
{
"cell_type": "markdown",
"id": "fe4a98ee",
"metadata": {},
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": ".langchain-venv",
"display_name": ".test-venv",
"language": "python",
"name": "python3"
},
@@ -509,7 +638,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.11.10"
}
},
"nbformat": 4,

View File

@@ -233,7 +233,7 @@ Question: {input}"""
_DEFAULT_TEMPLATE = """Given an input question, first create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer. Unless the user specifies in his question a specific number of examples he wishes to obtain, always limit your query to at most {top_k} results. You can order the results by a relevant column to return the most interesting examples in the database.
Never query for all the columns from a specific table, only ask for a the few relevant columns given the question.
Never query for all the columns from a specific table, only ask for a few relevant columns given the question.
Pay attention to use only the column names that you can see in the schema description. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table.

View File

@@ -26,7 +26,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"2e44b44201c8778b462342ac97f5ccf05a4e02aa8a04505ecde97bf20dcc4cbb\n"
"76e78b89cee4d6d31154823f93592315df79c28410dfbfc87c9f70cbfdfa648b\n"
]
}
],
@@ -49,7 +49,7 @@
"metadata": {},
"outputs": [],
"source": [
"! pip install --quiet -U vdms langchain-experimental sentence-transformers opencv-python open_clip_torch torch accelerate"
"! pip install --quiet -U langchain-vdms langchain-experimental sentence-transformers opencv-python open_clip_torch torch accelerate"
]
},
{
@@ -63,7 +63,16 @@
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/data1/cwlacewe/apps/cwlacewe_langchain/.langchain-venv/lib/python3.11/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
" from .autonotebook import tqdm as notebook_tqdm\n"
]
}
],
"source": [
"import json\n",
"import os\n",
@@ -80,10 +89,10 @@
"from langchain_community.embeddings.sentence_transformer import (\n",
" SentenceTransformerEmbeddings,\n",
")\n",
"from langchain_community.vectorstores.vdms import VDMS, VDMS_Client\n",
"from langchain_core.callbacks.manager import CallbackManagerForLLMRun\n",
"from langchain_core.runnables import ConfigurableField\n",
"from langchain_experimental.open_clip import OpenCLIPEmbeddings\n",
"from langchain_vdms.vectorstores import VDMS, VDMS_Client\n",
"from transformers import (\n",
" AutoModelForCausalLM,\n",
" AutoTokenizer,\n",
@@ -363,7 +372,7 @@
"\t\tThere are 2 shoppers in this video. Shopper 1 is wearing a plaid shirt and a spectacle. Shopper 2 who is not completely captured in the frame seems to wear a black shirt and is moving away with his back turned towards the camera. There is a shelf towards the right of the camera frame. Shopper 2 is hanging an item back to a hanger and then quickly walks away in a similar fashion as shopper 2. Contents of the nearer side of the shelf with respect to camera seems to be camping lanterns and cleansing agents, arranged at the top. In the middle part of the shelf, various tools including grommets, a pocket saw, candles, and other helpful camping items can be observed. Midway through the shelf contains items which appear to be steel containers and items made up of plastic with red, green, orange, and yellow colors, while those at the bottom are packed in cardboard boxes. Contents at the farther part of the shelf are well stocked and organized but are not glaringly visible.\n",
"\n",
"\tMetadata:\n",
"\t\t{'fps': 24.0, 'id': 'c6e5f894-b905-46f5-ac9e-4487a9235561', 'total_frames': 120.0, 'video': 'clip16.mp4'}\n",
"\t\t{'fps': 24.0, 'total_frames': 120.0, 'video': 'clip16.mp4'}\n",
"Retrieved Top matching video!\n",
"\n",
"\n"
@@ -392,18 +401,12 @@
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "3edf8783e114487ca490d8dec5c46884",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Loading checkpoint shards: 0%| | 0/2 [00:00<?, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
"name": "stderr",
"output_type": "stream",
"text": [
"Loading checkpoint shards: 100%|██████████| 2/2 [00:18<00:00, 9.01s/it]\n",
"WARNING:accelerate.big_modeling:Some parameters are on the meta device because they were offloaded to the cpu.\n"
]
}
],
"source": [
@@ -555,7 +558,7 @@
"\t\tA single shopper is seen in this video standing facing the shelf and in the bottom part of the frame. He's wearing a light-colored shirt and a spectacle. The shopper is carrying a red colored basket in his left hand. The entire basket is not clearly visible, but it does seem to contain something in a blue colored package which the shopper has just placed in the basket given his right hand was seen inside the basket. Then the shopper leans towards the shelf and checks out an item in orange package. He picks this single item with his right hand and proceeds to place the item in the basket. The entire shelf looks well stocked except for the top part of the shelf which is empty. The shopper has not picked any item from this part of the shelf. The rest of the shelf looks well stocked and does not need any restocking. The contents on the farther part of the shelf consists of items, majority of which are packed in black, yellow, and green packages. No other details are visible of these items.\n",
"\n",
"\tMetadata:\n",
"\t\t{'fps': 24.0, 'id': '37ddc212-994e-4db0-877f-5ed09965ab90', 'total_frames': 162.0, 'video': 'clip10.mp4'}\n",
"\t\t{'fps': 24.0, 'total_frames': 162.0, 'video': 'clip10.mp4'}\n",
"Retrieved Top matching video!\n",
"\n",
"\n"
@@ -585,7 +588,7 @@
"User : Find a man holding a red shopping basket\n",
"Assistant : Most relevant retrieved video is **clip9.mp4** \n",
"\n",
"I see a person standing in front of a well-stocked shelf, they are wearing a light-colored shirt and glasses, and they have a red shopping basket in their left hand. They are leaning forward and picking up an item from the shelf with their right hand. The item is packaged in a blue-green box. Based on the scene description, I can confirm that the person is indeed holding a red shopping basket.</s>\n"
"I see a person standing in front of a well-stocked shelf, they are wearing a light-colored shirt and glasses, and they have a red shopping basket in their left hand. They are leaning forward and picking up an item from the shelf with their right hand. The item is packaged in a blue-green box. Based on the available information, I cannot confirm whether the basket is empty or contains items. However, the rest of the\n"
]
}
],
@@ -655,7 +658,7 @@
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"display_name": ".langchain-venv",
"language": "python",
"name": "python3"
},
@@ -669,7 +672,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.11.10"
}
},
"nbformat": 4,

View File

@@ -328,7 +328,7 @@ html[data-theme=dark] .MathJax_SVG * {
}
.bd-sidebar-primary {
width: 22%; /* Adjust this value to your preference */
width: max-content; /* Adjust this value to your preference */
line-height: 1.4;
}

View File

@@ -0,0 +1 @@
(encoded binary payload omitted)

View File

@@ -0,0 +1 @@
(encoded binary payload omitted)

View File

@@ -1 +1 @@
(encoded binary payload omitted; one encoded line replaced)

View File

@@ -1 +1 @@
(encoded binary payload omitted; one encoded line replaced)

View File

@@ -1 +1 @@
eNptVWtsFFUULhIU/CMiKlEC40JCgM50Zmf21Vqx7FJobNnSXWiXh5u7d+52pzuvzmPZLaKhEklEsKPGxII8t7ultuVRVARKAgmCkYAGDFlMIEGMKCKJ0SiBgHe3W2kD82N37j3nfuc753z3THs2gTRdUOQxvYJsIA1AAy90qz2roVYT6cb6jISMmMKn6/2B4G5TE3JzYoah6uVlZUAVKEVFMhAoqEhlCaYMxoBRht9VERVg0hGFT+XeXWOTkK6DZqTbyokVa2xQwaFkAy9sqgDjBCA0IPOKRMimFEGarZSwaYqI8nZTx+u1q/COpPBIzG81qwbJUg7SMLWIkveV8S6D/3VDQ0DCiygQdYQ3DCSpOCXsmMeiKc/abAwBHid8uWRiOqbohtU/Oom9AEKE8ZEMFV6Qm62+5jZBLSV4FBWBgXowcxkVSmT1xBFSSSAKCZQZOmXtA6oqChDk7WUtuiL3FjMljZSKHjb35LMjcV1kwzroxySqasrqU7jaMsFQnJui9yVJ3QCCLOLykSLAfDJqwX5kpEEFMI5ByGInrczQ4f6RPopuddUB6A+MggQajFldQJOc3MDIfc2UDUFCVtZb/3C4ovFBOJZiGMq1fxSwnpKh1VVoxJejDiNDS5FQwRjWTjoDFSUuICv3ZzgMo+GIVEkvCqyGSbfaEoQN1aK5mIm0LnC5mpY6fLpz4aKGUFgPaouESFiohiTjsnsYl8PhYEmGoimGYkifSYVbW5OaZ+GS1jgdFms4KVXbxFeFFtY7l4nVcktESbqF+fZQi1iPElEoBhtbwi2IYpUmuQ60ur1L6huqG+mQ7EkmI0Ba4vVG/WJVBYHZmQmBr+SWsUq9BFoT82vbgn7GHpcSTVRYEhvtXkOJL/fVsgbV5mUXBVtCI+jRmCFdZOikOTedf/qHtSEiudmIWbsZ1t2tIV3F9wa9ncElM0y9PY11iM6czhYv0C7/aw8k/FzahzVpDQZjZinBuAg/NAg7becIhitn7eUsSyysC/Z6i2GCj5Tg/iC+enoUy3DBsOSzMGbKccT3eB8p9sG82HEn8/TxLSVRUlV0RBZZWb1NZMPQ5CBrfANDN4tUtGYgC22FsNZgQfWr25KreWjyfCyxWqI9bRwrRJAJoweLR1RNyYfBhEhJt3Y7HPb+omVYdz04V5pkaJJmDidJfM2RKEgCrmfhtzi+dCvtwMU+9LAD7hfCgy7LFbpBHxvpoSEJCzYf+wEM5/F4jj7aaRiKxS4el+vwaC8djWTD2CX90MMORYhdtN6bHPYmBd7KzcSLsJPnaIhQBEVRlHaxHIMflkEOYI9GgRugr/DoEyBGyTdTVTSD1BHEs9pIWblSCSTzM6aSZRysE2daQQgyFE0eBcyIT8nnoFcQqoZEBfB7vdWkF8AYIgMF/VlZX2hxVV2N94smcqSQSL869J3IyoouC9FoJoA03BirB4qKyeNhqaEMxmqoClkH3Zg95umMuu2Ig9BJzsdjaBjtf9ml85M2C0TMPQGtgRhbaSvnONZWQUig0u3EbSp8TdZl8rnKzSfHLJm+cXxJ4RkrdhyXT9ATfbduPxk/N8G2Y7L14qSWcX1t3RcCSxdsHphJbb55aUvPs1evzN0wueH4yQ2pbblcpe+9db1EM/HC4l2fzwlJ21+5+OOGU0fu/PLzxUtKmB/M/hDf8nFiYJ3bXXt7Xfu9lc+A5Z91pL+fVTpVO9HRSUW/NSyUO/pE3576cSuk2Z/4D6wvb+zkO88eOJ6bMvs779nLtq+nVPylXO+ee/edrXU/zetacCu1qWP7PObxs907Sv55ve1qJ3HsVcDt/COe3cioh268/NRJI3thx/RzH360p4/tmv5v6EoNuTbgvnb5esdv3zRd/5WJ/D54pftO46Retq+r/cw0c3zt+cDUlbK7fewM78pN10JvzOjt7jgUfNOYOHEl17Rq6/nYwdzfp1c8nb2Z7t/bsOulCXffnyVu/PSCo/wG/9bVex+cuv58Scn9+2NL7m5bOg08VlLyH7A2NRs=
eNptVXlsFGUUb8FYg4RUg2iIqduVxAQ725md2ZNU6LW1Qnd7LJTSYPn2m293pztX59ijFdGCRCIIg3IJVqBlF2stYhsuqTFeoCiJHNESQvxDMFYTbqMkgN9ut9IGJtnNzLz3/d7vvfd7bzpTUaSonCTm9nGihhQANfygGp0pBbXpSNVWJQWkhSW2p9bX4O/WFW54dljTZNVdXAxkziLJSAScBUpCcZQqhmGgFeN7mUcZmJ6AxCaG4x1mAakqCCHV7G7uMEMJRxI1s9ssczBiAiYFiKwkmERdCCDFXGRWJB5hq67ip+VLi8yCxCIevwjJGkFbbISmKwEJ+6magoBgdgcBr6Iis4YEGWeArfg0aXEtT4URYHF6F3Lye8KSqhn9EynvAxAijIlEKLGcGDI+DrVzcpGJRUEeaKgXExVRpiBGbwQhmQA8F0XJ0VPGJ0CWeQ6CtL24VZXEvmxihJaQ0f3m3nQ+BK6CqBmDPkyitLq4NoFrK5ooi52yUJ/ECVUDnMjjYhE8wHyScsb+2XiDDGAEgxDZvhnJ0cP9430k1dhTA6CvYQIkUGDY2AMUwc4MjH+v6KLGCchIldfeHy5rvBeOtlBWi3P/BGA1IUJjT6YNByccRpqSIKCEMYxdZBJKUoRDxrncvJYWGGwJCCWJKqUxHvR4xPkeuSHKxSo4ts7ZtiDm9NQtbK3iWgKKpC9ssQVtchNBORjK6nA6rTaCspAWnDNRHobWhL4gZqulyqpF2MjRkTaucYE34q22+ivraOgi+RoZvORzufyNXrHyRVQpu1wKFKOeejVQB1B9VX29FtNL6/yoIVIdb5VsdY3WJaXQWeubXweiMsW3Ly5nHKxVnWPClPUox5Z4I1RFqz8QtdlfXBimvazPW9nkrWTb2LL6Jb4oYnxtLQ0tiwOLrNVwHGcnZSfILG07yTjJ9NU/phgeiSEtbHRTVnKvglQZzw5amcSF1HS1swerE/1wPJUdot2++feEPaOnAivVGPKH9SITaTfVAMVkJa02E2V307SbsZuqavx95dkw/gcKc78fD6AaxOKsHBuEFAzrYgSxveUPHIGh9Ajg/qbp42ElUFyWVERkWRl9i4n60e1BVFcMjM4bISkhIHLtmbDGUGYWYu3xGAt1lg1HYwLpamdoLoB0GBzMHpEVKR0GEyIE1ei2knR/1jKmxl6cK0lQJEFSR+IEnn3EcwKH65n5z64w1eix4WIfut9BkyIIL7sUk+kG+fl4DwUJWMbp2PdgGJfLdfTBTmNQNHZxORxHJnqpaDwbyiqoh+53yELsJtW++Jg3wbHG8Cz80GIjaZvL6bJbqQAJSARJu9NJIRsZDFIOysY4DuNtyEGMkm6mLCkaoSKI97WWMIaLBBBPb54SmrLRdpzpHBMnQl5nUYMeqJDSOWCBywriJcDug0ECAhhGxKj+jFRFk7e0prr8wGJivJAInzz6rUiJkipywWCyASm4MUYv5CWdxStUQclyD1Ff2mQMuihIM1TA5gQuxslSFFGGl9MY2v+y60nv3xTgMfcoNAbCdInZzTC0eY5JACVOO25T5ovyejKdqxj6Jnf1M289kpO5JuPf3btrN/4ofknmr7qcmHIiNG/Wa28qked7u5o/HFnfe/yX7V+f2U90nO7Mv/yya97j/tk3N3716px3hpdefrZs73Tm2XcbpyaEs//Yaz4oODBw+edke9eFfaeOne0/44hdTAKpWJs+PTiU+3bo7Krt52bP4248puZNLTz07YnEjqLTTNPgTvsk/4ZNM44NfBdYu655+1XGO+M5hGbl+ehrBY+VHf/89uq1S77YHGTd7sbWK12b5g30r8kfWTvlie+PbinY5nGPFDpmlh7c//uJQrmW6Xpj2ZG6Ef9cbUV9cvBk3+01sZtlrX98dD73uievcf23pzYXmMteOLfmvbc2nHE3NP/w0yud01a3vTLYfXtax9YdT81kW3duW9e/9eKkDdUX8x7dtePqlIp9fx08NrPwx5yKlcaK0OFLy59uHrk29++SX58u6jz+8eTCfy8eXb595ZUv7rT+uej8w8mC81t+29D90JMnTTuXrXN3dd16v2rv9bKRx3fonw6t6LhUfmNyutiTc26dzvfcmZST8x/1AE4b

View File

@@ -1 +1 @@
eNptVXtsE3UcH8MgUUEJCkQTvBTlIbvurr2+NhfoujoK6zbWMcoIlOvdr71b77V7dO0Ir/GcEOBEIT54CKMlzRiMbTwm4x/kFQyJgpgJ6CAxPkZMQBNBjPhr18kWuD96/X2/39/n+/p8v9eUjAJZYUVhRCsrqEAmKRUeFL0pKYN6DSjqugQPVEakWyorfNUHNJntfYdRVUkpyM8nJdYoSkAgWSMl8vlRPJ9iSDUf/pc4kIFpCYp0vHf7cgMPFIUMA8VQgCxebqBE6EpQ4cFQDTgO4QFCInViBL6CoqYiQUDKiiEPMcgiB9JWmgJkw4olUMKLNODSorCkomajBVU1OSimbQUoxeFbUWVA8vAQIjkFQIEKeAkmBg3TWJjRtiLJAJKGaf+Q80oLIyqq3jY8lSMkRQGIDwRKpFkhrB8ON7JSHkKDEEeqIAXjF0CmUHoqAoCEkhwbBYmBW/pRUpI4liLT+vw6RRRas/mialwCT6tT6exQWB1B1TsrYBBOT35lHNZcQHAjYTdiR2OoopKswMEiohwJ40lIGf0XQxUSSUUgCJrtp54YuNw21EZU9INekqrwDYMkZYrRD5IybyU6hsplTVBZHuhJV+XT7rLKJ+7MRhw32tqHAStxgdIPZhpxYthloMpxlBIhhv45lqBEMcICvfd+IECFAkG+qDwQrGFrg3MCxVUYUeOiTcAPfJgvEvVWe+qKrQsX0VUxu8BFF5U7UdxmcuA2i8XqQHEjZsSNOIo5Of8cY71pzkJ/CRVjmGrVNN8bwGzEArbGBVSV9ou1NQruqdHCc/C4sQEPlS1g3FXOuFaplXKlbndFsadeNhbzrgYz4W60+nhigRguRGB0WpSli2qpWqu7qsEi1FviXo32WZ0RWTLFJDzgsTJOsnw+VQYwvMFmLhkantliQrFshFaMsGPpp22QGxwQwiqjH8AJ4pAMFAlOD1ibgCVTNaWpBfIQfHUxmR2j/RXznlB4QksJ5KTeU81oeQhuQyooFTFhJgLBiQKzqcCCI6Xe6lZX1k31MynYXi2TghKCNHQPUj5JMZoQAXTK9Uyy96TJDjuZDh9OKQpikqgANBuV3upHqwb2B+op6RiYLFSUw6TANmbc6j0Z1jc0xhpoSqNpJtrAY45GwswGgUaFOrNXJFlMu4EBobyiH7BhjrasZpB3KZgrhuIYiuHdMRSOOeBYnoX1zPxml5iit1hgsU8+baDCrQPXXZLIdAM7M9RCBjwkbNr3ExjC4XCcfrbRIJQZmjhslu7hVgoYGg1u4pWTTxtkIfZjSmts0Bplab33LXgI0LagzRHEYeEpm4V2OIAdd9B0KEiE7BhuxSyn4OpjKYiSbqYkyiqqAApubDWu9+bxZCy9Y4rMuMVshZkWIqxAcRoNfFqwREznoBQikgw4kaSPuN5DXSTFANSX4Z+eLIGT5vW4jvvRoURCK6SBr0VSEBWBDYUSPiDDxugpihM1Gi5LGSQgVpVzkd5ppwkcw3AToIHDHgI0WgzX0CDa/7RrSW/aJMnB2KOU3sGYiwwFBGE2FCI8WWS3wjZlvilrEulchfC5EVvf3Dw6J/OM5OY7hRvYK6f7X1u8VG4eeWlD3Ze7kJ5XR8vexftmvPR+7eYtM4zz2vpeWPN46faavOcvrd/48e7bV+5GRiBlM63IhztbG8WJf728bdXeVOnf3YF/7ScDroed/an7hdf767fdKZhxKO48J6becPecuLjhZnPNiLcp+fD1yzfqlnzXbxpf1MXsaP3jtP/WtRcnF7s2H6u9vqNqx/xJ50atnJiTc+Sht7B7wuPmMfTeLX9OOnC3krviyjFs927ajn9y8Zv2k3s87q1f/9S16rdxtVdzI+tGgdVT990823d7mT/s33H23tg9tzoebZg7dpxzyvTXuf62By1zlXurL46lnvv1/J7claf+MfcJm0wgurZr/Mp3Nx699nC297MH5jPmKyU9Y6ZPOdUp0fcNHRdy+yqMkdiuwIPZPT+jVycvMbXWTCvE94an7vLpTc0Hd9/79tNfdt847t55/vzmY+s/mEsVzrx8x/D9hVmzOuznzG2P+qrKyg1b9fYff9/Sjz1Ydv+h/NGFO7Ng4R8/Hpkzratb78nNyfkPIFJRJQ==
eNptVXlsFGUULxCPqFFT0UQ82G40IdKZndnZs4e17LaldEuPXbSFkPrtN9/sTHeuzrHd3arYVv4weI0oEpNKeu1iKQWlYgUxHqmagBojlRQV9Q/iAcYgicYL/Ha7lTYwyR4z732/93vv/d6bvmwCabqgyEvGBdlAGoAGvtGtvqyGukykG09kJGTwCjvS3BSODJuaMHsfbxiqXuZwAFUgFRXJQCChIjkStAPywHDg/6qI8jAjUYVNzYo9dgnpOogh3V62qccOFRxJNuxl9ggSRZuEbMDWqcTxT1QxDVsUAU23l9o1RUTYx9SRZn90c6ldUlgk4gcx1SAY0k0YphZVsJ9uaAhI9jIOiDp6NMsjwOKUThXdPMIrumFNLKa5D0CIMAKSocIKcszaG0sLaqmNRZwIDDSGyckoXwRrLI6QSgBRSKDM3ClrP1BVUYAgZ3d06oo8XkiGMFIqutw8lmNP4Mxlw5pswiSq6x3NKVxP2UaTHpqk9ycJ3QCCLOICESLAfDJq3n54oUEFMI5BiEKvrMzc4YmFPopujTYC2BReBAk0yFujQJM8rgMLn2umbAgSsrKB5svDFYyXwjEk7SR9ry0C1lMytEbzRX9z0WFkaCkCKhjDGqQyUFHiArJOLrmmowNyHVGpsjb9oDvQ3bRefyjlrVlvxkFAhrViMl0fqlFrYmRYpM0Etc5sp/hGgva6aKfX52PcBE1SJM6ZqDdDBt1c16lIzqivfmMqInVQzTBc19jA1DJiWCATDfUh3ddYw7W2mu4gUtvWyc1NnBxYX90AIw0BMaShQFuQbE9Wp9zORMuGuDvUBTtCAlPdqdVHNRALAX96Y7Mn2F5uw5TNhMBWJmtJ0q02JJy6DMNdUF3DC2l6bXojG6+JSeFgROl6MBzkw+2tvpYFnCnKQ1AF2h7K5aNy18S8YkQkxwzeGqYp324N6SqeF9SfwYU0TL1vBKsTHfs4WxicoaaGS8K+bSSIlWodifBmqY3y2BqBZnNSTreN9pQxTJnbbatrjIwHCmEiVxTmaxENyDqHxVkzPwhZyJtyHLFjgSuOwJHcCOD+5ujj0SRQUlV0RBRYWeNtROvcxiDqgwfm5o1QtBiQhXQ+rHUkPwvd6WQ3C02W5RPdEuVPuxghikzITRaOqJqSC4MJEZJuDTNO70TBMq/GMZwrRdAUQdGHkoSGSyEKkoDrmf8urC3dGsHlp6YudzDwpsELLuvKd4N6Z6GHhiQs41zsSzAuv9//9pWd5qEY7OL3eg4t9tLRQja0U9KnLncoQAxR+nhy3psQWGv2HnzTwXm8yIO8XhhlIeeNAhcHnSwLvBTnpwDyM2/h3SdAjJJrpqpoBqEjiHe0kbJmSyWQzG2eSoZ2Mx6cablNkKFosihsRoNKLge93KZqSFQAuw9yBASQR8Sc/qxssH19dWN94GAbsVBIRJM6937IyoouCxyXCSMNN8Yag6JisniFaigTqCVaq9utST8NGRfN0ZwPeZiojyXW4OU0j/a/7EZy+zcLRMw9Aa0DPFNpL3O5GHu5TQKVPg9uU/4t0pvJ5SrHppfsXbnt2qL8tQx/Ll58qvWo/BV189tnVh+s+KoXffTGmYbeO7ctvap4eeU9Wx/YHn+SmL53ak/F91scq1bv6Ms84Kg4dvYGbsffrUWrYjM3viBM7vT89edMokcnPt/y1Ndffvbtl+lzp3ue/Ca7r/fu/pfRVb9sGbQemtk66Ekvr1smFXe+d3ajTB49zR3evLdn6N5t9x/9edX5mek0ufnkF4+0kCf6Xy+JPTdzo3x90eMvXfjk9v7pujf6p8/eKljtJ3YnfigpevHjWFDgPhrq3z27dsV1fYPP/jucWnqmdG3//rqBd8u43/64lny3/7bQrl/f3zy1Zri8Vl1608slz5/46YPjzu8H4cDqrZ/+vaTqFSY9Bocqb/rn9qqp76ZeLe7MPHd+f3Oksqe06LHfD/946niYbrljtPivp0vu2LFn+8CKN6tKHNe0rp1cee409/szm1DL5PmK2VHnSRdfd3oq+U173S1DO40L5cd6TmVPkccvwqriDx9ePrkhxA+kNiXvbNipnO+AP9z1R++t8FD31au7dv5W1Vbx46e7JmrP3XL9wZn3tq4IhyaqBi7senZ6Zb4ny4oyzKFhP27Qf7FZXVw=

View File

@@ -0,0 +1 @@
eNqdVWtwE9cVtovbuCmQtKGY0OmgKAmQ4JV3tbJerpzIsuWY2JaxZPzg4V7tXkkr78t7d23JxpNCMm1TG6dbm2nKQDLBD4HiBxQSHFI8GVIaaGknkISODW1DJo/OZMiESaZJhhL36uFgD/yqfkjavd95fd855+5KdEAFcZKYO8aJKlQAo+IHpO9KKLBdg0h9alSAakRih+t8/sCQpnAzD0dUVUbOoiIgcyZJhiLgTIwkFHVQRUwEqEX4v8zDtJvhoMTGZ3Mf6TYKECEQhsjo3NJtZCQcSlSNTmMjNliHDGoEGjohwD+KgRMNfu8jxkKjIvEQQzQEFWPPtkKjILGQxy/CskpYJELgRA6jkKpAIBidIcAjWGhUoSDjKlRNwbakicRvJInPhFXjcsphSBPTRWLjb/46u40iEFKnYai2ZlPBABYiRuHkDMZYCdWFqZowQAYKtsPEoZQPWcF8KCoH00+8xIB579nYOFtODBt7enB5mF9OgSxO7SYSl5lFSsEoZFSM7NnWk4hAwOIQzwxHJKTqE4uJnwQMAzEnUGQkFnvXx8NdnFxoYGGIBypMYrZFmC5TT7ZBKBOA5zrgaMZKPwxkmecy4YuiSBLHsuoQqURuPU6m9CCwlKKqH/PhJNxVRXVx3CGigTJZKRN1OEYgFXAijxUneIDzGZXT568uPJAB04adENnu00czxhMLMRLSR2oA4/MvcgkUJqKPAEWwWo4ufK9oosoJUE946m4Nlz28GY42UWaT/cgixyguMvpIupGOLzKGqhInGAn70F8gJ+b54aEYViP6EE05DioQybjf4ZOj2EzV0K5hrAU8dyaR7fsDvsfnRfxnTsFwOdZFPxmIaIUG0mqoAYrBTJqLDZTVSdNOC22orAmMebJhAreV4UhAASIKYSkq5mVPMBFNbINs0nNbwU+mBMfVpNLHo0XAmCwhSGSz0seaiPrMxBNV5Ucz3UVIShiIXFc6rH4yrXxnV6yTZTSWjXR0CqSjy0JzQagxoWNZEzwCqTA4IUJAmBwrOZE9mec+iWslCYokSOpEjMCzCnlO4DCf6e/s2kH6cDFJklO3AlSpDeIFlbCQ6c/0QoQCBSxaKvZNNxaHw/GH24PmXdEY4rAVn1iMQnBhNpRZQFO3ArIuDpBoLDaPJjhWn3kAP7SabRaKtNtIyk4DlmSDdiugg5AGVruZCdqLLa+k9gGDvaTElCVFJRBk8I5V4/pMoQBiqTlz0VQxZpEkS/BqZHiNhX4tWC6lakAlBlmBvATYSSZEMICJQCLTf3qivLnWXVPlSfpxkh5JauPgb2ZzV7W2MqHWoOBSZU+V2xoHQnu8vbjGjTRLwL7JXS/5N1eihvYyjxm1uDvaGqjI5hqCwkWYbXa7mSYoE2nCU0qwgUoEo22g1WNTWpHVFmDZOMNuDDTEvU1Nzc1as6MOVfg2bWLKKnzmmKUOKqQnYAGo3RGtkarVliaOlWJ2WfXxUTcdEDyq6tjULlVX14r18VhluLjM29q5WeVsSK3AJeJl6yoqMeCGxfsSubJjQ+CxIVJDY3OS80NTYmDTxLhMi1dkieExfGf5RD5eYvCnGIb4F+9tP6dCV60kwplBTIzWwbEu2ttIosdpqoGOhsrsUm2jucsW0hyVDc0+r6XFvtlnCtdvjJV7y7xoATN2i40gs+RYSYs93Zo3U/8/s3q5iVi4BQifnLmcE6KERC4UGvVDBU+VnmR4SWPxtlfgqMdL1Lub9WMOiqEtFCgO2c1BOmiHRBneo/PevtkZw6mrIgF43HgdjH40QruMTouFNpYYBOCyW/GMpa/wnaOZi+v0t5as6c3PSX+W9PlrpEvk0pPXG/N/8oZn+u7u6ZfO3DkWOQ5c/OvmVcnqSxvZswP/Hhd+PVd6ppZf9/HPvvpp/o4dz7w/dOyuVUt+5z716OC+8oYXP3X2rxFLC0ufOHH9P9e+tnad/WT7J1xNdLzz4y/Ia0s/m6pwr7v4I5Bsua9qqKSp6f2ntmsrifBn6/s/6l15sPrnf37rV3tPjXx3wyFof/5vv71iuTN8Td+2+qE33aem7+ijXpbmpn37r9zzAO90VDys9hfkVz3/x1VNQzvyptCF7/89byB3hfee6L9OiIOrc9m+jd9ujK6/3HP9fJzs3bCn9MZ/lRvvbCU+atwy+8Hx0JaZL7e9Yh7q2v85+od10Nby+drdn5Y+t3UFs9Qy2SLuHPzaW/vg7PeczGvC+cDBs8eXX1+2/v6pgsuP/n7nHRMzuz9oeWzdue8sl/9Umje5/4m+viPP2vpfHGLe+PKF1p3mCuGvQ6/KK5RDIxv8vZdfGti9bHntO7PON/ddvTvaltxbby55e3vHvdoBcuvR3H3j/YZ734v+cnLlQOPbF5Y+/ZeO0zfGf7B18Nll5650XFs7EMrLGzo7J35YfFXsOtQ7p6/Jue/w2nMXZoNrT3/1mmdvxVz+3J6L3ZetS5dfLXqIfPDpi9L9r++zJbtXv/vcj8GeD+sLIm0bLn2RG99f23je+J5U8NYP837x7l1Y7bm5JTnanIMn8nJy/gcqPgKj

View File

@@ -1 +1 @@
eNptU29QFGUYx5wIUyY/ZMWHar0ZKp3bu13uuD9gxHHkpUgccBkmgu/tvtwtt7e77b6LdxglSM2U0swOqEzK9MfjLq9LITGaTPtQFs1kMjZDkijqJHxIbGrSGG2g987DZHQ/vfv8+T3P8/s9T1usCcoKJwoLEpyAoAwYhH8UrS0mw1dVqKD2aBAiv8hG3BXVnv2qzI3m+hGSlAKjEUicAQjIL4sSxxgYMWhsoo1BqCjAB5WIV2TDo7u26oIgVI/EABQUXQFBU3lmPaGbC8KWjVt1sshD/NKpCpR12MuIuBMBJU0eyPNEEBKAaMQQBPCKKiK8EMiKrmVTEkhkIZ8MZHigspA0kX7ABVQyD9ehTJQ1CYdgUMKDIVVOVqEMVEvMDwGLxz6fsTTiFxWk9d81yiHAMFBCJBQYkeUEn/apr5mT9AQLG3iAYBz3KMAUV1o8AKFEAp5rgtFbWVofkCSeY0DSb2xURCGRnolEYQne7Y4nRycxIwLSBh1zfRjdYcy8gFs2WQxUX4hUEOAEHnNH8gC3FJVS/qN3OiTABDAOmVZVi95KPnhnjKhoveWAqaieBwlkxq/1AjloMR++0y6rAuKCUIs53XeXSzv/L2cy0LTB2j8PWAkLjNbbAHgF9t8m+XZKHGtlIikLSdGD86AhksMkI+IK2ofUwTkCeSj4kF/bT5vtH8tQkfCywu1RnIZUpS2CxYI/DsXS+/VRRdmc1DsjpVg27ZjHr+oJ2kpUMIhILglBmwtMeQUmK+Eq9ySc6SKee6rU75GBoDRgpZ6f24oY41eFAGTjznvuQzx9RCTHal/hdz1F01V0CWdRnSG6zOYoAeH1DkuTy30kRDK8qLIkwhcIydSwIaSNEnZLvs1szwdsvt1M51tsXjND2/OslB2Txlqphv1NHNDitIEmfKLo4+Eh52rSCRg/JKtTlGix0g0vOsrXOBM1ZJXoFZFCeoBPiwiiAKPVUMYqaPFUabzXMozi9CrHBm3Axpqphgav18bCPAtgAFmC12WOntvjR5JHkbr0ViyBjE0nFmx5ckdWRupbuK6yPnC2eMnsynjdX5fazTlouneYHlpdWJB5bnPOxYnOlpffz/3ldG7+8vGWM+c2V7SN1y/qoZv1/T9P9B2fGWl5bebPy+oXN29M28fEolVTT5y3VtR80PFs+coqNRu0bo8skuKVJ/tBa09obY13V+31gTHD9h3BOmFk75bfp25kLk60d5zp+zc8s8f/d0F4svLSu5mvPGObLhe2+bhFjTmbu2qvhk+Nf5NVFwKmQ9J7npUvfTLYN1kYr11/32JzieRqLjvw9pVG+Y/fBtx9F648UtSara3qLs593HH14sPktoUdKzqGfW8NudZ8P3Ry49kHm05X9zzd3LXkCHt9afY+hy6WuWldUceFLxPa3mVjnQ7X8Gf7iJFO7uLyE7ujv1JTEzX6YepRPel2vX68+c099ffH17zx2NABavcLVx7a/cB3X7efLnvqYFbZTyuWdU+e6vLtHPMVdg98+87la58X/3D9OXT4WO0MZnh2dmHG0WulN/9ZkJHxH+xyZNQ=
eNptVH1QFGUYhxzDRi3I0rLUnQOyyXuP3dvz4BgtTzBkEA7hciATfG/3vbuVvX2X/bjghBoVZgJMW5qxKSYy77ijG0IuFC0/pkadMZWZJv8orBynj2HUP0qzHM2il09ldP/afZ/n+f2e5/d73t0eCyJFFbCU3CNIGlIgp5EP1dgeU1CtjlStKRpAmh/zkVJXuTusK8JQpl/TZDU3KwvKggVKml/BssBZOBzICjJZAaSq0IfUiAfz9UNtW00BWFet4RokqaZchrbazKbJFFPuxq0mBYvIlGvSVaSYzCYOkyYkjRy4kShSAURBagsppqAH6xrlQVBRTY2bCAbmkUjSOBHqPAIs8EOhRgdWQkCzdDaB0lBAJvNoukLwaQvdGPMjyJNhLyalRfxY1YzEfQPshxyHZA0gicO8IPmMT30hQTZTPPKKUENx0p6ExhQy4jUIyQCKQhD11wFVg4IkkrmAJgQQadX4pMTlri4o3LCmJDoOavRBWRYFDo6WZ21RsdQzMS3Q6mV0fzg+qgkgQkmaccg52WZWaT2xQ6Joi81hofvupRYh6Tgqj8WP3BuQIVdDcMCE1UZ0vLj33hysGl3FkHOVT4OECuc3uqASsNumTano0uigRiyv9H66ieBdOtbCWC05iWnAar3EGV1eKKooMeXBVEmcGMkC2g5o5tA0aKQp9YDDhMH4mO6dFFBEkk/zG2GGdXQrSJXJBqMdUVKm6er2CPESnTsdm1i7fa6iyU3YGcknrhrH3H7dTNF2qhgqFCFeTjH2XJbNteVQBcXunrwJEvcDXUq4FSipXuLUmsmliXF+XapBfDzvgesSn7hZQOCNo+S9mmaKvF5vbQUKWiX7entdTijEhoLlzs/v6oIVH5SE0BjtaN1QBuuws8t51gOQx8sDmyMnGzgcVgZ4rNYc3pbDZNt4ezgoQCPOWBjKh7FPRPs5L+Ag50dgXBojll9Z4iwuzOupAGXYgzUVuKHPiEhYQtFypBA3jDgnYp0n66+gaN7LoMxZaRxwMBxrYzia53mbjeOsYDVZm0mZpmSIjN6dsd/ANmKFQo5OJbctaZuVNPbM4A1nzY+r5jSPtMbPVh1t6y9Z+EfmO+cfLnseFT32kye6eIj9q0JZ8YE++NnIse8FqmvptuM3b2RoTpw4+eLAK2/82dXx7dlrhzF+aeX80PBx+5Lfd33E5H2Y8qr7ibDwCPtspnnuvPDuLRXMiSrzTHPi0ecqL8zf6Gt4b+Cfhjc7Ftxa6a2YvSzhufXvdfv6yzd/3lTUkrJQmPfVly5r+p32XXuNNG/hby2uO4ODm5uzv7g+6/y+1rLXryWX3Fl7sNBYtWjw0rn3q2K2/Nuwdgl/Ib/UdyO5c1WBa757755Oai5dr2+mbqd6n3p8x3B/U//w38sDKSlPdxfO7sx0preAupnh1NP5zRdTnvHSja2dPyxa8e5q1zeJdZc6Th1IpDYUFPVsDLLmK/7XZgxc6V56pOrYiau754TPlL7N76P3rBWXFaf/Up16OTPtu6bMWKp13VXfmdmhxe0bXlhUe/J69HSf/WA4vvjr7gW9/z35K27I4EZ87Tcah48frhohqo+MzEhKOed+i30oKel/M3dwnw==

View File

@@ -1 +1 @@
eNqdVWtsU1Uc3+SDssSoUUNUkMvEaHD39t7e29eWGreWsSmjZS2w8rCcnnva3vW+eh9dO/CD0xgRFS8xSqKMx0qrdQ4IExA3o6JEIxoFNRlGo4mPmPhAIxFExXO7TrbAJ++H3p7z/5//4/f7/84dKOeQpguKXD8syAbSADTwQrcGyhrKmkg3Hi5JyEgrfDEcikSHTE2YWJQ2DFVvdjiAKlCKimQgUFCRHDnGAdPAcOD/qoiqYYoJhS9MPLahUUK6DlJIb2wm1mxohApOJRt40agKMEMAQgMyr0iEbEoJpDU2EY2aIiLbbup4/cA6vCMpPBLtrZRqkCzlIg1TSyi2r4x3GfzWDQ0BCS+SQNQR3jCQpOKWsKMdi6boB8ppBHjc8JZiWtENa2RmC3sBhAhHRzJUeEFOWS+n+gW1ieBRUgQGquC6ZVQFyKpkEFJJIAo5VJo8Ze0DqioKENh2R6+uyMO1PkmjoKJLzRW7NxKjIhvWaAgX0drpCBcw1jLBUJyXovflSd0Agixi8EgR4HpKatX+2nSDCmAGByFrPFqlycMj030U3drTBWAoMiMk0GDa2gM0yc0dmL6vmbIhSMgqB8KXpqsZL6ZjKYahPPtnBNYLMrT2VGk4NOMwMrQCCRUcw9pFj0zhIyI5ZaStIYb1vqAhXcWTgx4q4WOGqQ8UMRfo+Lvl2gjtDt03ReKXdXOKQcyLNR5Nm00E4yFC0CCctJMjGK6ZdTazLLGkKzocqKWJXpaG/VE8fHoSU7F4ivYyTJtyBvGVwGUJH7cJx93Y5eM5JVFeVXRE1qqyhnvI7kntkJ3BA5PTRSpaCshCfzWtNV5lvq8/38dDk+fTuT6J9vVzrJBAJkyO1o6ommKnwQWRkm4NOX3OkZplCvsK7pUmGZqkmSN5Eg86EgVJwHhWf2sC1q2ii6bpw5c6GEoGYamXObr6vD7dQ0MSJs3OfTEM5/P5xi7vNBWKxS4+j+fITC8dTa+GcUr64UsdaiF20/pwfsqbFHhrYiFexHm3h3ZBt8uFmKTXyXsTTg4kII+c0OWBnkTyVSx+AeIoNpmqohmkjiC+rYyCNdEkgbytMz/LuFg37rSFEGQomjyKmImgYvegtxCqhkQF8HsD7WQAwDQiI9X5s8rB2LLWrs5AJYKLDChKRkBbT9XPisdhMp6Q/HRHpA/mvWpvFHa3i+YyJpFd7PH0rHAFdfeSju5YXI9qHUIiLrRDkvE4fYzH5XKxJEPRFEMxZNCk4tlsXvMtWZ7N0HGxk5MKS3v41tiSsHul2C73JpS8V2hzxnrFMMoloRhd1RvvRRSr9MhdIOsNLA93t6+iY7Ivn08AaXkgkAyJrbgbYKT9jhYCz6aA8fXXFEJihZC2PlzN9JQ+Wgi+ioGfmnkbthAd+KIPyWKhhYjYYCL8BhKKCAbyL1NkNPE0xsDMCbyfW8kqYQlkc21L+6MhxpmRcj1UXBJXOQOY1tXBpaxB9QfYjmhvbBoINMaBruHgpjlvdQovlv4/qzrYQ04XPBlSJ79oZVnRZSGZLEWQhgVkVaComDy+2DVUwpx3t8asUS/P0ckk4GiIEAehm2zDV+ZUtP+uh6L9VSgDEc9YDloH0qy/sZnj2MYWQgJ+rxvLqfrde7Bkz6Sceqc+PH/zVXXVZ5b41JvLjtLXBn851/D+98Xegd1t8kfXv3DVplva7tS2dyXUsZHx1qcrF558viSFN/wKBlO5XN91cO2KsfVrDw7wKVO7MHqqaccHv60ZEz1bzvz00rl15IlDN95tHPkuq3715/UL561jN23/9J6GD+rvu3KAKJY2/0it+6YwePXOow3GPtczW5+LDYZ/+Wzb8ZNN87eTi2YPzvnRn3h8/xenH3myNDD//n2FjdHm0/dzb2w+ecexw4F5N1BjqwYfXnDrjl3BwPpHTv2QaJj7yficXdt+eOXWM38LW48da1573trx++qbvv38xIOn022VO0Y37jz/6P4tYOifeKc8L/fMvW856AV01zXZvzYenT33pgWz/daBs/f2xOpn7/n7G2Jtw0Sbmftw6L3ks7ftXTz3YKT0asemYwt/PjS4864Muln97MWv3z56+x8fz8fQXbgwq27pE9edBVfU1f0LdzdoqQ==
eNqdVXlsVNUaLzQIUZOnL/G5gV5G8OVp78zdZm0mtp12Smk7085MKYVoOXPumc5l7ta7TGeK+CLU5UUN3qYxkryEpe2M1AI2rYKFGtG4gGiCC6EaMS4Ed1/Ce7iDZ6ZTaQN/vfvHzD3323+/7/vOlnwaabqgyAtGBdlAGoAGPujWlryGuk2kG305CRlJhR9qCUdjg6YmTN+VNAxV9zkcQBXsiopkINihIjnStAMmgeHA76qIim6G4gqfnc5ssklI10EX0m2+9ZtsUMGRZMPms6kCTBGA0IDMKxIhm1IcabYKm6aICEtNHZ8231thkxQeifhDl2qQrN1JGqYWV7CebmgISDZfAog6qrAZSFJxBViKrSk7tTmfRIDH5W0bSiq6Ye2bn/B+ACHCHpEMFV6Qu6y9Xb2CWkHwKCECA43gNGVUhMMaSSGkkkAU0ig3Y2U9B1RVFCAoyB0bdUUeLZVFGlkVXS4eKVRDYgxkw5oI4ySqGxwtWYysTNB2F22nn8uQugEEWcRQkSLA+eTUovzQXIEKYAo7IUusWbkZ431zdRTdGm4GMByd5xJoMGkNA01yceNzv2umbAgSsvKBlsvDlYSXwrF2mrF7xuY51rMytIaLJByYZ4wMLUtCBfuwdlH7ZvERkdxlJK1BmqGe0ZCu4j5BW3PYzDD1LUOYC3T8zXypYXaHG2dJPF1241At5sWaiiXNCoJyEc1AIxiKcRK0y8eyPs5J1DfHRgOlMLEr0jAWw82mJzAVdbO052HSlFOIHwlckfCpAuG4mkL6uDFJlFEVHZGlrKzRtWRkZlLIhtrxme4iFa0LyEJvMaw1VWS+pzfTw0OT55PpHony9nKsEEcmTEyUTFRNKYTBCZGSjsHxuPeVJLPYj+BaKZKmSIqezJC4z5EoSALGs/hbGlfdGnJSFHXwcgVDSSE82HmOKj4vzdXQkIRJK8S+5Ibzer2Hr6w064rFKl63e3K+lo7mZkMzkn7wcoWSi92UPpqZ1SYF3ppegQ+dLpZ1cpDj4jDOMF4WxIsviEpAlmHdHs+LePIFiL0UyFQVzSB1BPFuMrLWdIUEMoU587O0k3XhSisJQYaiyaOoGa9VCjXolYSqIVEB/H6YICGASUTO9J+Vr+0IVTc3BEaiOMmAoqQE1P/hgps6O2GiMy75s/VaeyYRDMqNQTWaFnpqBb7V093U4wm2tm2sFzrjmmK2dToTTrWDpN0czeBkGSdJ2yk7nlIykIRM1mzqcbbQNQ0ybBfYVLfQ3hRKhRqYWF0rC72U2KyC1WGvN9YekutWoTrV69WgnA5G9HgrQJH6SMToMatbYyiaashsVJyt7cy6auhpCTe2grRKi71rA5ybZwolAiPpd1QSuGEFDLq/NDYkHhuyMDRuHzU7NJUEXwTGb5+/IiuJVXjXh2UxW0lECwgj/A8kFBUM5A8pMpoewMCYaYH3h1J07cZYPO10rWpLsiE+HKrrCNXx3XxNZF04jbhwd2e0c218DdMA5yDjoV0kVQLHRXGeYmteSv3/zOqFteTcLUCG1ZlLLS8ruiwkErko0vBUWSNQVEweb3sN5QJBMlLdYU14achydJx1JRKUh6dpsgbv0Vlvf+6MocJVkQcibrw0tMaTrN/m4zjWVklIwO9x4RkrXn0P5gqNKne9tuDB2x9bUlZ8yh/vp5VXqOse+vG3q9/Sq1YM/BWlFr+/q+m7qrY2a8zxrz1w/flFNyy/sOlMX//OyJ7Gh3/9furwoXO0rW91tV6TfftJ95rm28a/PSMcuG/42QPfgnOTnxw9++rmQ/cN3Lqt6vOd1EfL1N9am18URhcOBH98unLDkiPOjtMrv2Leerdu8S13VC3qQN3MDvudpyb3bj/eb3SvORnU/sP9/Yfrl/ctPaS8+cyi+0/8+/hnO1aXnx6/OvmAYOvzDdY8xPwwXJ+zXi//InjHl/b05NLyxeibjiW59cPv/O+9M+FjsZPb995zrnFqYN0vk+rLh08c+WBwbGIw/+iT2+Td5+9a8fw73N+u4Xbs3Ar6/5s2Tn1a9h4b2Nr0BHfup4c7lpdtjz0wcd2xm/dXX3sMbBq94d3Hq7hbfu55av3p309F2i+O3V52dsP1tWDZkqP7zy77cGnf4vPtH6HX7t3w9ZHvVj6ycOVB9aoFK1Nt/+z9+i+37frH6j13qyePfn/h4+3hE26M9cWL5WXjb4CnLiwsK/sDSUeFGQ==

View File

@@ -1 +1 @@
eNptU3tsE3Uc3wRFGLKpOIcQuTTDsLhre2vXbSUIpYPNwVjdinW8xq93v/Zuvd7d7q6jHS66Bw8zEjglEubGa31R92oQpkNChCwBo5g4CA4nM8BgoCQoII9g8NeyIQTur99935/P5/utD1VDUWJ4LrGd4WQoAlJGP5JSHxJhlQdKcmPQDWWap/yWkjJrm0dkBmbRsixIRo0GCIwacDIt8gJDqknerakmNG4oScAJJb+dp3wDO9ap3MBbIfMuyEkqI0Zos/SZmGosCFlWrFOJPAvRS+WRoKhCXpJHk3ByzGQTGRliAJNoXpQxgYduDNh5j4zZIRAlVe2qWDGegmwsmGSBh4K4DqcB4/LgWaiXVqfNiZWUoVtA4GSPGOukVWtrQzQEFIJ+LiHFT/OSrESfgtMFSBIKMg45kqcYzql0OGsYIROjoIMFMoygOTkY50uJuCAUcMAy1TD4MEvpBoLAMiSI+TWVEs+1j+LCZZ8An3ZHYvBxxAonKz2msTk0Fh9in0Mj6wxqbbcXl2TAcCziD2cBGikoxP2HHncIgHShOvioskrwYXLn4zG8pASKAVlS9kRJIJK0EgCi26Df/7hd9HAy44ZKyGx5ut2o8/92OjVBqHOiTxSWfBypBByAlWD0EcmPUiJIKx2uNeBaoueJ0lAWfTjJow7KHm3nGIEs5JwyrbQR2bqwCCUBLSxsCKI02SPV+5FY8IfjodEd21uyeEzqzf58JJty2Ep7MjEiByshZSy2JBihN+qyjHodVlBsbTePNrE+U6WoVQSc5EBKLRzbihBJezgXpCLmZ+5DZPSQcIZSvkXvCi2xoDTHvIQp9HIOU3nVkvwam6m0Sqo84MVJlvdQuIyuEOJxsF5ZGcD0kIR2u94BCSo3R59DGewIuJ7IgiBP58jNJtuqGaBECDWBOXneycIu8yLcDEga4mVxSpRQfvlSU/G75vYP8FLezssSbgVOxc/xHAyWQRGpoETirdFeizCI0ktN5cpXuZRe63BQlCFXn62zG/LwBWhdxuh5BN8fO4r4tdchCURk6ku8PrPpxYT4Ny5fKeF/1U7uOzJpz2J10VbDsUl3vgYf1T23SjPOmFzH9h9MbbpZWLna8ueh5KLBa+lba2ruz7060HdymfqKMDg02NJ//87N4VsTh2rfPHvst89Gzgxvv3HypC71tciGxNa1R8uY8afDqcdb1n4yp2Xl8NmMi1N6Nl5LO5/B87d7uv7Z+G/rjU6YlS6d7rn8esGxoeFrw+HBgqmN4ZT5BakNOwgTnTtdn5b/ZXNlb/I8C37wzOcTssEbmGXCW9Nqi7fMKPUnsvkj5ZbiuZlVV9V7V6eszZ57vf7nyxMvdYc3vpM0+3a1oj+adLf5nFhU13t+Vlh38FLTlWnpJd7ozEad59DRWztnTV54dv/8BtXLTfypqP70eHNpRX9DN3C9vWZqrhT8rmth+oHCjoyWFMbWXztj0ZwjjS/tDrxA0sLFfTeya4+betMK69btXlPeza5PSqCydnZ+vGukfmVq3YJw74aMmb90XOjyfT+F6HP9/eFfrfYt09eX2ye9OtJ878d5EanSey+5+73C509tjxoLMpYC0ybbJiI0f/rh4MA384YOl124Oztt24llU/bNWZ4q/rH+/E+bb9pO7Ct63xfIyzMWBX5vpJd1bNthsyx/peLTXa3NtnYK0u1t2yqW16sou3VFEpL2wYNxCaGA84vLzyUk/AexLZt7
eNptVGtMFFcUxtCSKlqosRHtD4bVPmKZ3ZndZdndii0siiiPhd2UajV4d+ayM7A7M8zcQYFaK7aagFoHbNWaGpF1wQ0iBEWRWpVq1dZXGm3VWLRNg9EIplKD9VF7QVCJzq+Ze875vnO+79ypqC+BssKLwqhGXkBQBgzCH4pWUS/DYhUq6POgHyJOZAPObJe7TpX5i29yCEmK3WAAEq8HAuJkUeIZPSP6DSW0wQ8VBXihEvCIbOnFL8t1frAkH4lFUFB0dpoymuN1wyk6+8flOln0QZ1dpypQ1sXrGBE3ISB8kCfzCBKAUDhRRoQkQj8BPKKKCA8EsqJbuhDjiCz04VTGB1QWkiaSA3yRShoxCWWiEjEcgn4Jz4RUGXNQemppPQcBiwfuCnstwIkK0lqeG2IXYBgoIRIKjMjyglfb6S3jpXiChQU+gGAItyjAQZW0UBGEEgl8fAlsXUIqCPCCD89GIt4Pcavajqxsd35a+oczs4KPQbVmIEk+ngED5YZCRRQahyYmUakEnw+HBnQhsVgC0vYmD7dpcJZiSwSC0ptteqr5WWofwB0HpcF4x7MBCTBFGIccslsLPi5uejZHVLTtmYDJdo2ABDLDaduB7LeYR0wpq8LAoFq9w/k83VDwKZ1JTxv11pYRwEqpwGjbC4BPgS1PPHhSEsJGmkjKQlL03hHQEMmlJCNiBq2WahoW0AcFL+K0OtpsapChIuEthiuCuAypSkUAewlPHq8fWr1t2XOHN2F1IBW7qh1wc2o8QVmITCATmDiBoC12k8meYCLSMt2NjiES9wtdanHLQFAKsFMzh5emnuFUoQiyIccL1yU0dLtIntW+w+/5FO12leQIljkSci2WzB7RmZPqRIUZ7U91EWUvEPiyQdqBuotTTTaLKYE1eUjoKWBJs82aSNpsRpr0GI1W1mylE82spa6EB1qI1tOEVxS9PrjLMYt0AIaDpGtQGq0+dV5Wcma6o/EjMlf0iEgh3cCrBQRRgEEXlLEbWojxiSqL11+GQVyemzxP222jGZOZZm3QVkCbPFYjmYLXZlimJzIEBu7O4K9gObZCxkdHR/XGVr0SNviEs9XZ4jkq+ujBn45P1Hedb+uYsP4N4u/sadOkDyq2flWTMb71RMbGSa3jzn+6+K/dkSHrvC39fZv7p15pv7Cvc82lnsSJefce9N03xAp38vqLux72HuxtkqZbb/m2UK6GyXbUNTqqNNpdN59QutZcvnp1TF5UXMK+S033oqN4tb39sr9ts/d0xzcn1p18/Urv6bUr9/7bI0/oJcekuF/ZmlR1+Ig1pcE6yR2TWlU9Z/+r7zvJX/unRBV2nkveGpEx/l71jOKYKeqipJSfKwpv/BGzqmD5/OQFG9APaest46gv6sq4ms/kosjc0bEnNvxTeYx21hx4Se28NN3ZPPVuau3ps/a3y06N3z92T7yzdvaRmzF7zjgSDlMhw/VZF5rbVnE5i7pTF76VU7jAEhvdek29Wbb5zLhw7vsVtPtC39m71V0BInNs4FBPrrnfEee+s+74Eu5r3y0uJy2Ljasua+iG22I+STf0RRxq+bPgv7zMa8U3NnWr6Ya1+yoDrvUb61aWvzP27NV1jm+THngqQ7MTujOWnfpx9c7fr3csLpze8/IKY3tnXEL53JTeZOeDTbeL8pyTI279cvu3yvcm8nPaEmcQO97Vt3Wdr+0tzo+MqMq4f+xQd2Pkyu5la2qS2srL7A9HhYU9ehQepj6sCEaEh4X9D/5zr+U=

View File

@@ -1 +1 @@
eNqdVX1QFOcZP9EkpU1rRDM41Oh6UTDIHrt3e19cz3ocCIzCAXfgSWKve7vv3a3sF7t7B4ehMWiNtmPsoqnVdho/4I4wfEigiDE4dqqpydBMJtEOpB+2aWNorNJqM62Jgb53HBVG/+r+cXvv+z7v73me3+95nm2NR4AkMwK/oJvhFSCRlAIXstoal0BDGMjKnhgHlJBAt1e63J5TYYkZzw0piigX5OeTIqMTRMCTjI4SuPwInk+FSCUf/hdZkIRp9wt0dPzwTi0HZJkMAllbgDy7U0sJ0BWvwIV2q8QoACEROSRICiIKgENIvxBWED8gJVmbh2glgQUJy7AMJG3LdrjDCTRgE1tBUUENOiOqhCW/kLDl4S4O37IiAZKDiwDJygBuKIATYXLQMIGF6cwt8RAgaZj6wfaQICtq7/xk+kiKAhAd8JRAM3xQ7Qk2M2IeQoMASyqgC2bAgyRValc9ACJKskwExGZuqadJUWQZikyc5++QBb47lTGqREXw4HFXIjcU8sMr6qALBuEoy6+MQtZ5BNcRFh12ugmVFZLhWUgjypIwnpiYPD8390AkqXoIgqYUVWMzl3vn2giy2lFOUi73PEhSokJqBylxJmJg7r4U5hWGA2rcWfmgu9ThfXcGHY7rzP3zgOUoT6kdSRnOzLsMFCmKUgLEUE9gvbP8sIAPKiH1FE5YOiUgi7CGwO4YvKaE5dZ2qAUYvRxPFdNJ1+ZZEf+oyWwvgrqoI55QOA/BzYiLUhA9picQnCgw6AsIK1JS7ul2ptx4HipDv0cieTkApSielT1OhcJ8PaC7nA8VfCQhOMwmET6sUxQ0iYIM0FRUarcXrZ7pIrSsaGCmulBBCpI805x0q44klW9sbmqkqTBNhyKNHGZtJgyMH4SpwGDqiigJCTcwIJST1Xaj0WLoTR3Nkt8Fk8VQHEMx/I0mFFY6YBmOgYQmf1O9nLiLYdjwgwaKUA9g18cJLPmcn2shAQ6qlnB+H4awWq1vPtxoFsoATaxm4xvzrWQwNxpcz8nDDxqkIE5icnfTrDXK0Or4Grjw4Sag12MGvQGzGHHaBEgDbcIIU4DETAHcSBjOwu5nKIiSUFOEYwWVAQUHlxJVx/M4sinRaHYDbjSYYKY2hOEpNkwDd9hfJCRykG2IKAFWIOk+5ybUSVIhgLqTBajGi7ZVOMrLnF1uGKRTEOoZ0PbhgoU+HxXw+Tl7hc9fy9T5S32F1RhR66T1wAvcmLs+Uu4p21Fo2rqNrm6y8GwEgqC4WW/FzUajyYriOkyH63AUc7DeUl2DvnSrt4hqCoU8ir6q3IeZiRqm1gkUhfYKdbUyXlYbDpbiUV0jHthSEyqudkTDleEStqS42FVY1iDpCjlno4Eobja5OaJGCMJsSCVkz7chsDgZyK891SIobBE00SDGAmy2QWwIneTArps/Dm1IKZz5Lp6N2hB3gkwA3yQH3HB42ysEHowfhhyEIwxtr6PqTMXVjUa+wRgtD9Nuk6NeEvVNIu4rM4UcZEUVtQVgeKPZUDSXBINRj2IpHqCYlmQV3g/9/4xqyIvO7XjUJc583OK8IPNMIBBzAwk2kNpFsUKYhpNdAjGoebVjmzpooQksEAAWykpSlgCg0UI4M2fR/jcf2hOfhTjJwhqLUOpAyGDXFhCEQWtDONJuMcF2Sn4CX4wlapIPXkpzrfrhVzTJZyHb5hJ+hz1+afLusj35IxbH+wh67Nl+cuMZ8saUxj7qvHB5sHtT7Dm1N3PZl7d2r4j/7fR36F2uyWBbs2vprtFXxrP07lPHYp99cOPgq6vOD37x2NkD525tn75564aTO3IzJ2jbqX789pKhf9P/mjh29L3Bn195ZMPl7IzctZ9+dPvejoazddvzHjuRvvYuPdBTcWTgqufc8Nc/uPvR88aCG396vGxor2+pZlf8z1vsUtkLrxtH71Adq5cMHu7x6RbQ+H88b1de3K+t//GrRUu9+18aEpYPZncWrqlat79uo+5E4/nRr02tPL1k4/LrE1rHx+uysnqJd48MtE7+7LeV4SWVxy8Ob7721it/Z3sQW0lb5r2B7LGMTRPq5h+ktUifdcZ6CxYP5hx98rl/LD7w4s19Z35xSx471Lq21nV89V8fufbWcfpK//dbJsZWejd8K6NUc+Cp33z1PDnx1PpLhgVu2+Qd4vrKHWc6Lncezry3IcOUvjT3R6aCoWDa2F9ettSs9xQQT144VJjz3nefX7O4M/26dmxfttp7dk/p+23W3Nbs6vSekkPIqsJFR0w3F33+RIs1svugQlxZzvx+2Us98RVZb+ZUBW83ZP7q0WDfNbz6anpsRcMnX3pf+F763sVDB8j1p6aq+1bYXjOrlWnxJ/btNX3zyifpFyZX7/rDt4++8+HyY7XmrMjF9GHvSnH8mds5//zUv8y47vOtsbs1U4/y7raTd27n/vKow+2Ovv5uTl/gi+m4sPobd6xD01XZTz9zaGSd8afv9Hf8RHv819s3uKbSNJrp6YWakrrXrmYs0mj+C91Z35g=
eNqdVXtQE3cexwd3Fp8z0mp91DTYuaGyYTebhBBEB0JA5CmJAvYc3Oz+kqzJPtxHgOATtVpRe+uJelrbUZA4FLCeUK1o1XpqK56jrcphxbNnLa3a4mOQ01O8X2I4YfSv25k8dn/fx+f7+Xy/3y33e4Eg0hw7oI5mJSAQpARvRKXcL4CFMhCllTUMkFwcVZ2bY7VVyQLd9q5LknjRFBtL8LSG4wFL0BqSY2K9WCzpIqRY+J/3gGCYajtHlbbxZWoGiCLhBKLa9F6ZmuRgJlZSm9T5Ai0BFaESXZwgqXgOMCrCzsmSyg4IQVTHqAXOA6CdLAJBvXhejJrhKOCBD5y8hOAaPSLJgp2DdqIkAIJRmxyERwSL/S5AULCsD6tdnCgpDf2B7iVIEkB/wJIcRbNOpd7po/kYFQUcHkICtRAeC4I0KLVuAHiE8NBeUPPcS/mM4HkPTRKB89gFIsfWhcpBpFIevHxcG8COwNpZSWnMgSCS0mNzSyGjrArTGDAN9lkJIkoEzXogRYiHgHhq+OB5c98DniDdMAgSUkupee7c0NeGE5XdWQSZY+0XkhBIl7KbEBiDbn/f54LMSjQDFL859+V0ocMX6XANptUY9/ULLJaypLI7SPmBfs5AEkoRkoMxlJ1oQy8/HsA6JZdShWHaPQIQedgfYEUNdJNksbwaagHOfu0PNcqunIxeEa+FjalOgbooR2wuOUaFGlRZhKDSolq9CjOYcNyk16vSsmx15lAa2ytl2GcTCFZ0QCksvbL7SZfMugFVa36l4EcCgsNqAvBhGyKghOdEgIRQKXUFSN7zCUHSU/Y/7y6EE5wES/uCaZUjQeWLfSXFFClTlMtbzKDxPh1O24FMOhpDLrzABdJAQAgjKtWYNl7bEDrqJb8WFosiGIqg2KESRIBceGiGhoQGv0NzCn31KIoefNlA4twATrRfhwavL/taCICBqgWSvwiji4+PP/xqo95QODSJj9Mf6m8lgr5oMC0jHnzZIBRiFyrWlfRaIzSltE2GN0V2vcGoi3Po4wzxFEUSqENH2YFdq9WSlFZnB9QXcNBpEkYJqMnDpYGIgIRLSSpV2mIYoiQwaIk4pscNsNIEFc2SHpkCVtmewgVqEBNUvAA8HEHtNaciZoJ0AcQabEDFn1KYnZSVbq61QpBmjnPTYOOVAWOLikhHkZ1JTPXN0ZuLc7LF/NI4S7bsJswsmeop8aVnWniLU2P1YLIXnSkXoq4sBIvTYdo4oxHXI5gG1cAxRdLlTAnLTVvAMVq7MX1uqY0pQnNJa1pWBp6Ke6y0xpuRnikasyyOvDxZnwL4gplsbo6DNWcnZZC2DLMnUwDmghRNYUlSqV7rnTXbrc9cSBZl0njSAiHdLhDOTCLeNzfXkFIISyQkV2Jsggp2LA1JTwzNDQLnBglMTZwJ7Z2aBBUVJCZR039HJqhmwCWfw3pKE1TWAMMA/hIMsMJ9nZjNsaBtEyRG9tJUYkmqRqPnM7xakSWtC0k+2UX7sBm+uZTb4mSsKTZu4RxristamGec1YcZFOJBQ+QYUJ0x2JovoP+fqD4vQPquASSHf/4287OcyNIOR40VCHCqlFrSw8kUXPcCqIGNkJdUqDTGYySuw4ABcziMuN1IIclwkfZG+9/SqA68K/yEBzael1T2u/BEtUmnw9UJKoZINBrgjAXfectrAo3KOk8OvDOpYkhY8BoEP8+erbN9v3Z82qjFrfmRD6ZOefivk7cXt4xb9Jph8N7jr80aVf/buKzT7bPNs/+efcrx9lHlny1bVrLfbNxs/NOYebeExx9Z5z2VB5z/2/mryTfKnNNG/nLnt59+6vF3c3fbbz79w69LlnTd4HvugGc/nGiPfXhf/m7ziGtPNlz7UkYSO97eaamZd2Wo6US5796TtpM3H2zf/umn276egFZ+/ut8Z37LbXw8OOHsGLOl9T/nPq5KY9gfWsPDjrU+LjTmHZu+7T0jd9SirT2EGA6sVi9LNVbOuGGr2hblvkJ+i7+Zt/vp5mO+lctWdEVcvbR04tDapvmad/jI+VM2JDfvmqQ6zXC4jahSDWmua4qOj2h5ffHay+a4CGP4zXNnDgyaKKYuX69acnxgyoCLg2l3YkHnsMNR6Fvvr5xfvqfl9fqfR+ywzdn+45OOmSBq6qyn+q4LEZfLlqxpDN9hd9LvH8yPxCnHzFV3BcuksnUDjVVf/Xi2u33rlbyx4drp81O14VlTm6b4Ms4nfJOJZ1pGzK5f+uzRluQtJbUXpz005nd1NZ6N+ktFdWXhB5Gyxt28q3jf/RsXHnZkYavemGk+FbbzMlLxhaljuH9X9/Ws5aMvblVvPD0x7vejMyJ2n+veIHdmZEab7hf80o5P1o1tkkb+fLtt7erj0Wv+XN69FnCFkWMHN308aHTymOhlK5KursubQM24NyCja3TxmXeio+p6bEzzowjH2fGbFpz+XbtL3LnIXtNJRrR8tz36H+tvnXGc3/rttJ6Sg7c+OLDibmr3I/PhisimoqEm8G7r3PWraxyTxoQdvVQ57hNPOj6k4/jlU58oEddPXhrREK+5R98/Oc3t0eyY3BD7fef1+kHbKtJzNv/13rnusRvqTw/3kmsWbSz4aE9P3I5OonNT2pWeSf8mvyqbMOyEb9SwC9MrF8WMjvnw6tLKiX9sWDneUrrmrdY3Zp2ZPNDLZjZ3+C8UDn9wqKLR8ubky3F7sh8vvDcy2NKDwoatcuwtDw8L+y+SEjoQ

View File

@@ -1 +1 @@
eNrtVUFrJEUUZvGPFIUgyPRMz0zPTNKSQwhhV9wQxSjIbmgq1a+7a9Nd1VtVnclsmINxrx7aX6AmJEvYVQ/iRRc8evAPxLv/w1fdMxuNkV32IAgODNPzXr33vvfqe18fnx+ANkLJW0+FtKAZt/jHfHF8ruFhBcY+PivAZio+ub25c1JpcflmZm1pwl6PlaJrCmGzbs5kyjMmZJeroidkok73VDz7+TwDFmP6xxcfGdDeegrS1t+7002cV856frff7Q9Wvl3nHErrbUquYiHT+ln6SJQdEkOSMwtnrbv+jpVlLjhzGHsPjJIXG0pKaDDXF/sApcdycQBPNJgS24DPzoxltjLHp5gXfv3lvABjWApfb7+3BPf5yceC1RcIg6RKpTk85QonIa1nZyX8veQZtoIzq8+rA8GVlj84bMZ4iMRqlXtb7NB1Wp+Mff+na771PFdTb6sZqKm/evubjUWpuyBTm9UnwcroxxtjtrVIhay/vCQ3ujc0xJhHsNzUp1ZXcBrj2OrnO1nVIf0J2eaWDPxBQPpBOByEwxG5vbXzjDOegcfbTPUTqbzGcr6eW+/DA15fdrPhGg2DYEjfIQVbG4xWB77vd7KhN1i9wfH8GrbNw1IZ8O60g8Z+b57Hlb+hzad4Zxo58Nut34/ogp00pH530h0HtEPxNgCvNoLDUujmXiIrCqChrPK8Q/eY5VmE8UjeCHtLRErDI1phRFHlVpRM2whkXCokPA3dsDrUcJZDVJXRQyMeQYT10xQ0Dfuu3SuvtJlGsCbKBRIY3eOlM1ZTGUkoSju7ig7Q69ItTze5XhiivZkFQ8OBvzrpjwb+vEOFRLpKDhGyPjUONq4MLqWFiIkI91HPEDrbyyFeIlc6jTiCauYQC7NwJsgE11emppG1eVSJZYDFHccOBegorhbzi9msqZYrmbr9wQRBAzZT2i4M/QABGmAap3sNw1TpfVO6tIarEiKHScgD0bS3RDKMjFUad++v0fP5P0vN2sukBr9oNT2dFz1M7ZVa4Q30nGQY+78GvZ4GrQTBv6JBwX9Bg964e0RblkUZMxnq0MgPggFL+sMhjPujyRgmwWg45uPRaMwhgX7CeD+YcBb7w0ESDCeTvbHPg+EYxjzme2NABSuYFAky1K2cwD24R1/QGr0tiQ0+ocXizwb+vN8Yd1BgHBfpLsogx53EdUaCICocICKuOK4YRuxPmW71Y8E1fL73SrXuVAhuqw163Zpt0pc1tzjVoa9bxi4jQvqJqgjTQJgkzBjhRNSSRGnS6ApujcekmYK7UWKZ2TddgmpAbAZ4yt2/c5QCkBhEJUQDXj6g6pFmDQ8tsYq0GZqYZdYueTchM6wdK/mWJftSTRt/e7RDHlTGEsNmaGT22sElAg1ADLgNdMULdiiKqsAMMXFS8qd0DgsXBrr35QeL+iE5WkKZk/tyowWL1gVsZ1xvgkPqXi5lZaMDpoWTX8cIuox299+GuPEvBxvhBAtkRUgTr10HOsfP7iunmuMbAw4ZZmvO7M7/ACbG6BE=
eNrVVk1vG0UY5uPGkV8wWiEhIa+9ttfr2iiHKFRtoVGLaqqitlqNZ1/vDtmd2c7MxnEjHyg9Iy2/oCVRUkUtRQVxgUocOfAHwoHfwjtrO6VO2lThhGXL9vs1z/v1zN7b3wSluRRvP+bCgKLM4B/93b19BXcK0Ob+XgYmkdHOhfODnULxww8SY3LdbzRozus64yapp1TELKFc1JnMGlyM5O5QRpPf9xOgEYa/f/CFBuWuxiBM+bO1rvzcfNLw6s160+8+XWUMcuOeF0xGXMTlk/guz2skglFKDezN1OWPNM9TzqjF2PhKS3GwJoWACnN5sAGQuzTlm/BIgc4xDfhmTxtqCn1vF+PCn3/sZ6A1jeH7K58twP391vu/2fBauxjMKJm6q2kqx+56lbcuH360+wliKJ8PkqJGvICsU0VaXqtDmkG/3e77XXJhffDriTGuKB5zUT74acAzTGtJ+mSNsgQWLuWzvBhidjWS0S0XQa4E3v5qatxrm6w8rCftFafv+23nY9SvtDq9lud5taTttnonKJ4vwTm/lUsN7sVZzpjTyTm/0O9ep2pS7s2g/mCtsHnuZRCxScqdoNv6ZSnAOoLGDqPO83auc1oeYGdJLGWcwtMbrrWuYHDsTfnQ27uqaJzR8pGQLrNleHbDxTLTSMbuAMcQ3EtReUg6TW/YG3UpbfqtJgPai+g51qI96nm9YdRrHZIT81hTECFeTlNd7hpVwONFBoNJDsfnaH8BbN7kJvmUCtLsdT3ief3qbZtcjfXXOFMKm/nXOw+2nfn2OH3Hq/fqncCpOVzgzAkGIY5urJ3+tjNM5TDURmLGEIKgwxQip29h1ZZ1WGzAYNfaGCjCcmgwIWzRLE9Bh1mRGp5TZZaDnG6Bq4fLbSCkPMS9VpNlA6nikCmoShJGXM+VI6wganM6ybB6y045Zi8FTUP01se9dPtVWWugiiXHpIkch8akYcEXImNHITQcVBgVao6OTqqyplLEdtvR38dVsO7KzAVNf1pzxlJt6NwG0EzmYFGGXGxyA/oI411tohBpK8fu206+jAmDDKlBpNhvJEM0FCMe28MLDS9VO8olEuhRJoymEBZ5eEfzu4gftygGhbC8CuhCK0yCJY90mCI9oHMzWCgjORahgCw3kxfePmptuIV1FetIEA4nVWItr9dtdlredPreq1l85TQWxw9KdUOlWQM76OYKa2Qalo21+R/R+7enkftZiHs3eu2FEFiuODN1P2ZzqjInUtV/ZvZjZH7O959UFOyy+U10RMpvSq+vuwz2cDiQJsv9YpMzqcTy5fAyqb57eduZzV6YUJ0gF3Y832/RUbPdhqDZ6QbQ9TvtgAWdTsBgBM0RZThijEZeuzXy293uMPCY3w4gYBEbBoBMmlHBRzi3dnE5rvZN52jYUTsbbY2/UGLwaw2/rlbCAW6gnVDnds1JGa4cMhJ2BVFhqRBxwZDf0GNjTNWM6+cTiL9vvtFZFwsEtz5zOuuZs6CnJTe3qjlnPcYsPPrOl7IgVAHBS5IibdoLz5CRVKRiGxxVlwo9BttRgpfYhq4T5AhiEkArO0JWkXPAoSFyRBRg8wGJm1Szv2WIkWQWofJZRK2TSyMywbMjKT40ZEPIcaWfmdbIV4U2RNMJCqlZMlwgUABEg90Aezg+avGsyDBCRCzB/CucxcK4hvot8fn8/D7ZXkCZkltibQYWpXPYVrhaOferB4G8MOEmVdzeKHYinIW37f/MxZZ/UdgQK5jhVPSdkTtbB2eKr9tvHGo6ffEsgDa3p/8A34xtTA==

View File

@@ -1 +1 @@
eNptU31QFGUYP7TMCRoZnWkmamA7hj8y9thlz+Ug+8BDTAyOuAMVMXlv973b5fZ21913CVJHjtLGUpu1j5mmsJLjLk8SL3EcE5uIGG1qJE0swhz+0JE0p8bJyGaQ3juBYHT/evf5+D3P8/s9T2u0EWq6qMgpnaKMoAY4hH90szWqwY0G1NFrkSBEgsKHK11uT7uhiUM5AkKqXpSXB1TRBmQkaIoqcjZOCeY10nlBqOvAD/WwV+Gbh97dZA2Cpg1ICUBZtxYRNJVvzyWsU0HYsm6TVVMkiF9WQ4eaFXs5BXcio4TJAyWJCEICEA0YggBexUCEFwJNt25ZnwBSeCglAjkJGDwkGVIAYsAg83EdiqEKEnAIBlU8GDK0RBXKRm2JChDweOyLlvSwoOjIjN81ShfgOKgiEsqcwouy3/zM/4qo5hI89EkAwRjuUYZJrsxYAEKVBJLYCCN3ssxDQFUlkQMJf16DrsidkzORqFmFd7tjidFJzIiMzKPFU33kVTZj5mXcMsPaqENNpI6AKEuYO1ICuKWImvQfn+lQARfAOOSkqmbkTvLBmTGKbnaUA87lngUJNE4wO4AWZO2HZ9o1Q0ZiEJpRZ+Xd5Sad/5djbDRtK4jPAtabZc7s8AFJh/FpkqdTYlgrhqRYkqKPzoKGSGsmOQVXMD+hDk4RKEHZjwSznbYXfqpBXcXLCl+N4DRk6K1hLBb8/lR0cr/2uVZNSb0zXIJlM094BCOXoAsIF4eIxJIQtL2IyS+y08SKck+nc7KI554qxT0akHUfVmr51FZEOcGQA5CPOe+5D7HJIyJF3uzB7w0UXRqQ/QXLlWoBVi8p4dwVqwur1wjBI00kJykGTyJ8gZBMDtuEzCGCc3gdPtpBF7KQKXSwNM+ydi/DUpwdMA6Gge2NIjBjtI0m/Iril2CXs5R0Ak6ApDtJiRktWVtRXL7S2bmGrFK8CtJJD/CbYVmRYcQNNayCGUuWxnutwQhOrypea3Y7eDvl8/FLHLCwgPGyheQyvC5T9EyPH04cRfLSQ1gCDZv6U4ysN+dbkt/cF15cFRh+Lv32k28f+8f66LMPzlu/rMxdd6q3+/XHrnrZX8yX2wd37nv/yIGMiTFrR2jvvC+vpOn9o/FvalwnLzdMtGX9PbCh5vpP1984fy06/tburIn6EIvssecH2+an/p6zq8W9q2H3Q6lSv/vrhQcufR695cusk25sPlG3tVZ+b13uoYWst+/GlZvZZ0bHRttKdzx1VYyMjGyrW9B1Nv3c2T7i2NI/Qx/1WV5q8i3qUh+pXlzdPqaODHiOdd73sNAycvqvvVWLhn4dvfgtjNT3Dj29ee617GYqVFbbeqE3p9byQIY74+Pv0syBPadN1x/B1J9XrjAyfyvOP7dnyGlX5+34MKObvmVvCRwufYYc2ZFW27r98px4Zjmz4AnWfn77xoKT3tC/++vlUO2NHwdTe8YXV5Z/wbkqtsVLvFu7LzC3s9+5mL9x+RqtYv9XS7dEeryXbMNnwo/X5K7+YE7O8MQPjfrx8fstlomJuZabLY6ysRSL5T/v1GTL
eNptVG1MFFcUXZUooj/aWpM2mjputZbK7M7sLAtLo5UuYlBZiCxksa307czbnZHZeePMGwTFJkLVVtRkaGPSxqrIsttuQaTYGENpsZZW0baoxEg3MTZ+tRqjqfUj2kgfCCrR+TXz7r3n3HvOfVMTq4CaLiFlTLOkYKgBHpMP3ayJaXCVAXX8YTQMsYiESGFBka/R0KT+2SLGqp5ltwNVsgEFixpSJd7Go7C9grWHoa6DENQjASRU9dettYZBZRlG5VDRrVks43CmWUdSrFnvrLVqSIbWLKuhQ82aZuURaULB5MAHZZkKQwpQK0kxBQLIwFQAAk23rnuPYCAByiSNl4EhQJqjRSCVG7SDEDAck0GgMAyrZB5saASfsTHrYiIEAhn2rOX5iIh0bLY9NUAr4HmoYhoqPBIkJWS2hNZIaholwKAMMIyT9hQ4pJAZL4dQpYEsVcD2SlrHQFJkMheNpTAkrZpfeQt8ZYvyShZ6ow9BzX1AVWWJB4Pl9pU6UpqHp6VxlQqfDscHNaGJUAo2D2SPtGkvrCJ2KBRjc7ptzL4nqWVAOo6qQ/GOJwMq4MsJDj1stRl9WLz3yRykm035gC8oGgUJNF40m4AWdjlHTakZyuCgZsxT+DTdcPAxHWdjHbbMtlHAepXCm01BIOuw7ZEHj0rixEiOZlw0wx4YBQ2xVkXziDCYDczeEQFlqISwaDaynPtLDeoq2WBYGyVl2NBrIsRLePxIbHjt9hQsGdmELZEc4qrZ6RONNIpxUflAowhxOsW6sjguK52lFuX7mj3DJL5nutTm04CiB4lTC0eWJsaLhlIOhbjnmesSH75ZtCSY35H3MoZd5k33ZLCOVeWoeKVRIvBVyFfqCR18rAvSQkCR1gzRDtb1z+LcLi5d4AI0DAQF2unOzKDdbgdLBxyOTMGZyWY4BVdjhQTMOGtjqRBCIRm28kGaB7wI6YfSmLGcUm92fp6n2U8vQwGEddoHQmZEQQqMFkGNuGHGeRkZAll/DUY9ufSy7FJzv5vlOScrpLNAcHOBTAf9NlmbEZkeyRAZvDtDv4H1xAqNHHWP2TyjLtky9IwTzOy6xILJGwY2x5fYvDeUlNTv0/79JvXTefv3NTm7l4YuHO/jj35xYhY7c6DzTGLD9p1J986e+qR3au3h8Zuq5asdqwsO35yy6djfV28+OHU+NLejbp3T3+B99dc3evqnT5y79IXLG4+6hfRm/5/UNqtLbEzddjrxbUaD/er9ew/CnS2r/Q1TSwLdlxK3jD0n4W1b06F5s5dD5kbt0ppdJ6fg+vfrT06/21X2X5K3smnS7+rO4lJ514xzXdePfl4/se9U0tkdf8HFiyIoeGfFb19PvrJ1fteE3hePrE/xv5zck3x59yuTLzx3PokHydX1lSnV1XmJrhW5V3q2zEntzQMTsg+9nuMoBcknuiceL9TOnR8/v/7HBb3tO2bidukitXHnJf9b14P+vqLlY09fkagD73YeStxJae8585qj9KVpsypatv90d4Jwr/gjGhye/XPKsX8+u7R5/S97WhYX7Gg7mCgRt97q+yMnt1a937r7g4sfb7m/K+/ED292XLt2e5rFMjAwzpJ7fZKTG2ux/A/oUIK0

View File

@@ -1 +1 @@
eNptVWtsFFUULgKRRxQwBDBRGDZFA3S2Mzuzr9aGlC2Fhpbdbtc+VFzv3rnTne68Oo/ttgQTSklDJMqg6A8CEbrsmlJasKQ8Skl4Ki9REyGVBCT8wMRQiCJqYoJ3t1tpA/Njd+45537n9Z0z7ek40nRBkSf1CLKBNAANfNCt9rSGmk2kGx0pCRlRhUsG/DWhLlMThpdFDUPViwoLgSrYFRXJQLBDRSqM04UwCoxC/K6KKAuTjChc6/CODTYJ6TpoRLqtiHh3gw0q2JVs4IMtIMAYAQgNyJwiEbIpRZBGgIgSRwRlKyBsmiKijJ2pI822cT2WSAqHxIyoUTVIxu4kDVOLKBlbGUtp/K8bGgISPvBA1BEWGEhScWrYMINF2b0b01EEOJz4rbzZyaiiG1bvxGT6AIQI4yMZKpwgN1oHG9sEtYDgEC8CA3XjDGSULZXVHUNIJYEoxFFq9JZ1CKiqKECQ0Rc26Yrck8uYNFpV9Ky6O5MdiesjG9YRPw6itKIw0IqrLhO0nfXYqUMJUjeAIIu4jKQIcDwpNasfHK9QAYxhEDLXUSs1erl3vI2iW/urAPTXTIAEGoxa+4Emudj+8XLNlA1BQlbaF3jWXU751B1jp2m7+/AEYL1Vhtb+bCOOTriMDK2VhArGsPZSKagoMQFZw7+Hw5APR6QSak1NC0x41KYQDJaL5jo60rzK7a5/21mmu1avCTaE9ZC2RoiEhXJI0m6Hl3Y7nU6GpO2UnbbTZJlpDzc3JzTv6urmGBUWK1iptbKeK21YHXDViuVyU0RJeISVjoYmMYDiPBRDdU3hJmRnlHq5CjR7fNWBYHkd1SB7E4kIkKp9Pt4vlhYTODozLnAlbC2jBCTQHF9Z2Rby046YFK+3hyWxzuEzlNg7ZZWMYW/zMWtCTQ3jwqNwhFQuQhfFeqjM0zvGDRHJjUbU6qJZ11ca0lU8P2hzCpfMMPX2JOYhuvJtOjdI+/xrn1J4XrIMc9IaCkXNAoJ2E35oEA7KwRI0W8Q4ihgnsboq1OPLuQk9l4KHQ3gEdR7TcNUY5dMwasoxxHX7nkv2oQzZcScz4eMpJVFCVXRE5qKyeurJ4OgGISvK+kcni1S0RiALbVm31lCW9S1tiRYOmhwXjbdIlLeNZYQIMiF/JHdF1ZSMGxwQKelWF+Nw9OY0Y7zrxrlSJE2RFH0iQeIxR6IgCbie2d/cGtOtpBMX+9izBrhfCC+8NJvtBnVqvIWGJEzYjO+nMKzX6z35fKMxKAabeN2uExOtdDQ+Gtoh6ceeNchB7KP0nsSYNSlw1nA+PoQB74q4OMR4XQxDcR7ew7p5wHo5ikFOnoGe43j1CRCjZJqpKppB6gjinW20WsMFEkhkdkwJQzsZF860mBBkKJocqjEjZUomB72YUDUkKoDr85WTPgCjiKzJ8s9KlzWsK62q8A3Uk+OJRPrV0e9FWlZ0WeD5VA3ScGOsbigqJoeXpYZSGCtY2mAd8XAsxfMR2gsgz0LoIlfiNTSG9j/tkplNmwYijj0Orf4oU2IrYlnGVkxIoMTjwm3KflU2pTK5yo3nJ4UWfTQtL/tMFrdflc9QszsetM64fG/KHGLV1YOd7T9rv+4Nbule5vhuxzHz5IVb3y86N1g3fVLw9PnOf97q7x+cueDusntL7349N3/5ga7Bb87zA6/evvZF38xPZo38JhzfffXA7XsXH6NzZy9sOjBru2G8lF9bO3kp+2LB69Pa33Du+Vi+K+zIr5i57Eqpe+BMkF2YP+cnecsP1MIlu/vLlpz+47OhqdvogdjtVNfFO7vS4oKFO2ecnD/14aNHW+mRgHvG/FvL/Wdmz7oW2LeYku7vEhZ56m+8z56vPGimH/g/Zeddv3Op5d9k387qU1dem/ZJquLPjs+PGouox7fk0JPlFw+evRQs/3t6avq2Nzs9H54Cs+LbtneMXKxILCm4vKnjx1Mts+1bChavKO4sVfovWzdsL99cWvNecsr6V/5asefLS9cvzf2lp+b+1ptrRzavwOV78mRynvbBw17wQl7ef6XuMgw=
eNptVWtsG1UWdgiPwv6gCIraCi2uQQiBrzMPj2OnhG5qO2lIY8eJQxJKa13fuc5MPK/Mw45TWtqEh4BdYKBQIfEojWO3aUiBPiClRS2IR0XDQ4BEgIVKkP2xoouAVVeobLvXjrNN1M4Py3fOud/5zjnfOTNUzGDdEFWlalxUTKxDZJKDYQ8VddxvYcN8oCBjU1D5fFu0Iz5i6eL0bYJpakZdTQ3URI+qYQWKHqTKNRm6BgnQrCH/NQmXYfJJlc9Nb9nokrFhwF5suOrWbXQhlURSTFedq01EaSd06lDhVdmpWHIS606YVDPYSbncLl2VMPGyDKy7Nq13u2SVxxJ50auZgPVwwLT0pEr8DFPHUHbVpaBkYLfLxLJGMiFWcpvyBDYVBQx5kuZ3jsV5QTVMe2Ih9b0QIUwwsYJUXlR67Vd6B0XN7eRxSoImHiOEFVwujD2WxlgDUBIzuDB7y34VapokIliy1/QZqjJeSRCYOQ1faB4r5QNINRTT3h8lJBqaa9pypMaKk/b4aA/96gAwTCgqEikakCDhU9DK9rfmGzSI0gQEVPpnF2YvT8z3UQ17tBWiaMcCSKgjwR6Fuuzz7pv/XrcUU5SxXQy2XRiuYjwfjvXQjMf/2gJgI6cge7TchjcWXMamngNIJRj2y1QBqWpaxPbXVVckEiiVSMr1uSa9ayDV2Ki0NGodGTEbEvmYv39t1t8Y6+xrEhNJXbU6E1yK03oAXeulmVq/n+EA7aE8JGcQFBCTs9ZmuTZ6dbOCukQ23S92rY2kI81MPBxjUYCSWjV4VzQQiHdFlPAaHNYCAR0pmcZ2IxmDuL2pvd3MWg2xOO5INw/0qVysi7mnAfnboi0xmNFoabA76K3lGWOlk1C2MiJfH0nTob54MsP51nQKbISPRsI9kTDfz69uvyeawd5of6Ij0Z28m2lG8zj7aR+gKrR9lNdPlZ6JOcVIWOk1BXuEZvy7dGxoZIbwcIEU0rSMoTxRJz7xYbEyTDujLeeFfX0+RJRqH4kLlttJ+ZytUHcyFMM5aV8dy9Z5fc6m1vh4sBImflFhvhYng2ikiDjDc4NQRIKlpDE/FrzoCBwpjQDpb4k+GVaABzTVwKDCyh7vBu2zWwQ0h/bNzhtQ9V6oiIPlsPaR8ixkBweyPLJ4XshkZSow6GXFJLZQan/liqarpTCEEJANe4ShvBMVy5wax0iuFKApQNGHBgCZfSyJskjqWf6trDLDznOk2G9e6GCqaUyWXtFb7gb19nwPHctExqXY52G8gUDg8MWd5qBY4hKo5Q4t9DLwfDY0IxtvXuhQgdhJGeMDc95A5O3pm8khkfKxqJZJJiFLQ4ZPMhxXy9DkmPLXYl+AqZ0k21BEBKXUTE3VTWBgRPa2mbOn3TIcKG2eepbmWB/JdKVTVJBk8bjDSobUUg5E4JqOJRXye1EKIIgEDGb1ZxdDPZGG1ubgwW4wX0ggqs1+M4qKaihiKlXowDppjD2GJNXiyQrVcSHYCNobeuz9ARqxXjoJ/QHM+XmaBqvJcppD+7/s8qX9W4QS4Z5B9j6BrXfVeb2sa6VThvV+H2lT+cuytVDKVel9r2rLjY8tcpSf6r8+9ZEyTC0O/+e+B2e4Ry6ZqsbTu8ZeP4VinUv0n6q+XfrNgYfvvG7m+9uvvSrUOqp0n3hu4yT7QzC06JmhiStnlgQ3rOMmP379zKkhfPUTv4LJv292nz4arspG90xNPfR+A8zsPfpLEzdTOHhV2/JN73xbFVw0cbzvxfwed8t2sOtvjiX86HvHvzTpY++fOsE/m38s0rPs+LHPudzji1b8+OlmUoSTT08cd5/d/cXJ+tu3Tpy5ZfUu9rb4rb9tGOSXi3f8RfEOKeiTLnRyh3DHn46ZI/GuVafvby/s//in+z44M7jiwL87/7UlNzK8pwmJLU9cs6LmVGiqZ03fC7vBP4TTO4Yhs2r6M/hw9dqVD13+3HLx2V8u23evg9n9h7BiG7th69Jt757uvim/bKvg/fVn9/bDUdSy/uwHmx3PD4cXT1rOm1e9lBWMxX/+KiH/99Fvjn4xvHH7Dfq2k+8su/S68Vjx845zM9f88/CThy7tfUH6+fsXf1+6qdrhOHeu2jE0tXPH2Uscjv8BZV1LvA==

View File

@@ -1 +1 @@
eNptVX1sE2UYHwMTBFRCAP9Q4Kya6dzb3bXXdt1YYOsGm6PrtnZjQ7Be7972br2v3Uc/BhjcFAXGx4EkSvwD99GOOcYWFj4FEghiAE1MJDhQZoSEqCAqJigxwbddJ1vg/uj1fZ7n/T1fv+e5tmQEKioniVP6OVGDCkVr6KAabUkFtuhQ1d5NCFBjJaa7xuP1dekKN5LLapqsFubnUzJnlmQoUpyZloT8CJFPs5SWj/7LPEzDdAckJj6yY61JgKpKhaBqKsTeWGuiJeRK1NDB5IM8jwkQo7BmKYxeAUnXsACkFNWUh5kUiYcpK12Fimn9GiQRJAbyKVFI1oDVbAOargSklK2IpAR6q5oCKQEdghSvQiTQoCCjxJBhCgs3O9YnWUgxKO1rWbO7WUnVjIHJqRygaBoifCjSEsOJIWN/qJWT8zAGBnlKg30ofhGmC2X0hSGUAcVzEZgYu2UMUrLMczSV0uc3q5LYn8kXaHEZPqruS2UHUHVEzRj2oCBKKvNr4qjmIkaYyQIzPhgDqkZxIo+KCHgKxZOQ0/rjExUyRYcRCMj000iMXR6YaCOpRo+boj3eSZCUQrNGD6UIdvLgRLmiixonQCPpqnnUXUb50J3VTBBmx9AkYDUu0kZPuhGHJ12GmhIHtIQwjE/xBC1JYQ4aI3/6/XTQHxCKq/2BBm5VoMJfWoeTDS7GAhuhF/eGI25fZXOpfWUTUxcrEPlIU3UJIBwWJ+Gw2exOQJhxM2EmAF7CN1aYWywVKxvL6BjL+jRLrduPO8h6rsEFNY1plFY1qERlgx6qIOLmKBFcUc+W15XE9Rp9Ob+8vNxTWtmimEsFV9RKlrfavQJZL4WKMBSdHuGY4lX0Knt5XdQmttjibp3x2kvCimyJyYS/0s6WUNW19AqIE1GHtWxieFabBeCZCO04WYCnnoFxbvBQDGms0UWQZK8CVRlND2xPoJJputrWjXgIL36ZzIxRp6fqIYXnd5chThonfKyehxEOzENrmAW3kBhBFlothTYcW+729bsybnyPpeCQT6FENYhoWD5O+STN6mIYMn2ux5L9RIrsqJOp8NGUAhiTJRWCTFRGfyOoG9sfoLLs4NhkAUkJUSLXmnZrnEizPtoaizK0zjBsJCrgzlbSygWgTgeHM1dkRUq5QQEBQTW6SCs5kNGM864P5YoDAgc4cSwG0JhDnhM4VM/0b2aJqUa3DRX7yKMGGto6aN0lyXQ38JMTLRQoIMKmfD+EIZ1O5+ePNxqHsiITp8N+bLKVCidGQ1gE9cijBhmITlztj41bA44xRl5CB78tYLMU4EwAWi023A6DpA1xK2CF0BZwWgnoPIpWH0cjlFQzZUnRgApptLG1uDGSJ1Cx1I4pthI2qx1lWoRxIs3rDPTqgTIplYNahMkK5CWKOeBaBlwUzULgTfPPSJahSXNXug41golEAh557GuRFCVV5ILBhBcqqDFGH81LOoOWpQITCKuupMkYLmBIAvkNEiiFgiBkQClaQ+No/9OuO7VpkxSPYo/QxkHWWmwqJEmrqQgTqOICO2pT+pvyTiKVqxg6O2Xtoi3Ts9LPVL52qPoMPvf7u/Oeyc8Dzdk9nTNzS2dN3zRlWumG3Mvxre4Fp368l+3rfbBt58ZDH5gX/rbu5vG70ey9qwe+2NtxKXD+/L2cy44793YlO0avn17zw3u3Ni9smWNeJAqtna1b24ISe2OlzvT2L97cNZhwl4HvjiS6buUteOr9ZAjU1m07aczb/df27RvX77vecb+dfC7yetXX2GfE4pysrNHbI8ws59v7iFmtpyo2XTy21HZtdXaNsa79la9uNWqzL13dubHtLZZecuXlG3OffiLY++KZnF+m80/+8/ybp6uOLO0KL5u9pWrw/tSra/HlH7avufDB3T2699Vvr0X/bt71jbe2fE/LOen3ozO33SY7gzPcC+f3+S4MzNlwObLlvqe4vjz3tY9nAJzc+PMfV6qFy6MveArrKoZHfxrpPXxO2NPF3oyHquDw0EfS7nYmeeWB415T/69Ha/5VKwtzzlbNz0uqz87oqNdu3xn2DuxchKr84MHUrP07pi35JDsr6z8PST+u
eNptVQtsU+cVDgLRVlPaTBQQQms8066Q5rfv9fUzWZSmdpI6iZM0NiGhm9Lf//3te+P7yn04thmtgE6jQCXuKgErj6rE2DRNwrOUhoQ92CrYoy+1SEm39N2qXdVpQ5Xo6Ep/O85IBFfy495z/u9855zvnLs1n8SqxsvSohFe0rEKkU5uNHNrXsUDBtb0J3Mi1jmZzXZ2hCNDhspPVXG6rmg1djtUeJusYAnyNiSL9iRtRxzU7eS/IuAiTDYqs+kpYZNVxJoG41iz1jy6yYpkEknSrTXWCBYEi4gt0NIvJ8hPVDZ0SxRDVbNWW1VZwMTH0LBq3fzzaqsos1ggD+KKDhibC+iGGpWJn6arGIrWmhgUNLw5z2HIkpRmyiqynKzp5thCmscgQpggYAnJLC/FzdF4hleqLSyOCVDHw4SchItFMIcTGCsACnwS52ZPmcehogg8ggW7vV+TpZFSMkBPK/hm83CBPSCZS7p5uoOQaAjaO9OknpKFtrlpG308BTQd8pJACgQESPjklKL93HyDAlGCgIBSr8zc7OGx+T6yZh4JQdQRXgAJVcSZR6Aqup2n5j9XDUnnRWzm/Z03hysZb4RjbLTD5j2xAFhLS8g8Uiz6ywsOY11NAyQTDPN5KodkOcFjc3rRbX19KNYXFeuaMt0u/2BHu7Yh7WlsNxLQL6EmIZUJtjUqjXFbWKCNJNVi9FJcCNAeJ+3weL2MC9A2ykZyBkGjTac7m/tl0RH1BjemI2If1YnCzaFWpokRwrwt2Rps07yhxlhXl+EKYKWnRersiEn+9oZWFGn1C20q9vcEbL2phrTLkXxkfcLVNoD62nimoV8NRlUYb4O+zMZOd6C31kIoG0merUs12WwupTXp0CQUHkDKQxyfoR/ObGQTjXExHIjIA93hABfu7fI+Mo8zRbkBVaLtppxeqnCNzSlGwFJc58whmvIeVbGmkHnB23KkkLqhbc0SdeK/XsyXBudwR+sNYa/IBohSzckIZ1RbKLclBFWLg3K4LLS7hmFqXC5Lcygy4i+FidxSmCciKpS0GBFn49wg5BFnSAnMDvtvOQKThREg/S3QJ6MJcEqRNQxKrMyRHtA1uzFAMHBqdt6ArMahxGeKYc3J4iwMZlKDLDJYlksOipQv42T4KDZQ7HTpiKLKhTCEEBA1c8jh842VLHNqHCa5UoCmAEWPp4BKSiHwIk/qWfwurS3NzJLyU2dvdtDJpiELLu8sdoM6P99DxSKRcSH2DRinz+ebuLXTHBRDXHwe1/hCLw3PZ0M7RO3szQ4liMOUNpKa8wY8a07dS276fJjC0EVHfYybdUVdvhhiPZQbuukoZijswK+Q3ccjglJopiKrOtAwIjtaT5tT1SJMFTZPHUO7GDfJtNbCS0gwWBw2ogG5kINWa1FULMiQPYZiAEHEYTCrPzMf6G1vCAX9Z3rAfCGBDmX2/ZCXZE3iY7FcGKukMeYwEmSDJStUxTl/E+hq6DVP+2jEOGkcY1inj4l6WfAQWU5zaP+XXbawf/NQINyTyDzFMXXWGqeTsdZaRFjndZM2Fd8iW3KFXKX4nxaNV+68vax4LSaf69d3dYUSq+mKyX8d2/eFcNu7q5d91Mq3b9nzxuUjtidPXnzry5aXutdkp8Z/fPW3K0frL+fKPz58YdOVQzO/mLi77A/9J5Y8v4t9Z6nnfF91d1/lXy69bH+xo/Lcu5V3TTx+deK7787uF+pH/rj2Ae7KD5Y/F/HsmH5/N/hm8ai15dUvaybPHXztqxU71ZllgNd7Ly+5Z6/nCj044P/oku6uryrv/WWw+oMXyspSn7934M3Et2v2UKsO7l4R/nX5jk9euf3BgLrqh4779vdkVgxtiXy8anPlJvFQw6PlQoV77RrB+megL0ruCd11f7Mwc6Guauq+JduOP3PHvrVM7dF1W39/9av/bNtu7GWlNwZf+9G//+c7MfSTwHR2zROvJv4pOtYHfnPxM/2pddsP/WNlWf21S/qGv+2oeKH8Z7T4ZvwCnz6+7IJtqZo88Oxj0c+XTvz0qbG/Vz1dPdq68pnlLVXbMp13JnYfPKM33L2y7vrqT5+Y+dX5UXkmWl/RAh97e/32o6PjUmrtzn32r186eWXXtd9Z4bfi8m6B/rSOG8Gf3em59y0tKrw4/d+laPuZ+uzjr1/70F5s0OIyZnDa3Uy69T3LDll/

View File

@@ -1 +1 @@
eNptVX1sE2UYHyAIxOAiqAn+sUvBRHDX3vWuXbs5ktExQDa6jwIrhpS3773XXntfu4+uHRKyOUgIDDgJMVFAga7FOccmhI8pKhINGBMSwI8ZIcAfEgNBEREVyXzbdbIF7o/23ud53t/z9Xue68gmkKYLijyhV5ANpAFo4INudWQ11GIi3ejMSMiIKly63t8UOGBqwtD8qGGoernDAVTBrqhIBoIdKpIjQTtgFBgO/K6KKA+TDitcaujNdTYJ6TqIIN1WTry2zgYV7Eo28MFWL8A4AQgNyJwiEbIphZFGgLCSQARlKyVsmiKinJ2pI822fg2WSAqHxJwoohokY3eRhqmFlZytjKU0/tcNDQEJH3gg6ggLDCSpODVsmMOi7NT6bBQBDid+uag4HVV0w+obn8whACHC+EiGCifIEevDSJuglhIc4kVgoB6cgYzypbJ64gipJBCFBMqM3LL6gaqKAgQ5vSOmK3JvIWPSSKnoUXVPLjsS10c2rCN+HETVUkd9ClddJmg767FT/UlSN4Agi7iMpAhwPBk1r/94rEIFMI5ByEJHrczI5b6xNopuddcB6G8aBwk0GLW6gSa52cNj5ZopG4KErKyv/lF3BeVDd4ydpu1lA+OA9ZQMre58I46Nu4wMLUVCBWNY+6gMVJS4gKyh30MhyIfCUiW1pKkVJj1qLAAba0RzOR1uWVRW1rzCVa27Fy9pDIb0gLZECIeEGkjSZU4vXeZyuRiStlN22k6T1aY91NKS1LyLG1riVEhcykqp2mauKri43r1SrJFjYSXpERY6gzGxHiV4KAZWxUIxZGeUZrkOtHh8DfWNNauooOxNJsNAavD5eL9YVUHg6MyEwFWyKxmlXgItiYW1bQE/7YxLiWZ7SBJXOX2GEl9dXcsY9jYfsyQQC44Jj8IRUoUI3RTroXJP3yg3RCRHjKh1gGbdBzWkq3h+0BsZXDLD1DvSmIfomzPZwiDt9y97SOHn0tWYk9bJQNQsJegywg8Nwkk5WYJmyxlnOcMSi+sCvb6Cm8BjKTgQwCOo85iGi0Ypn4VRU44jrsf3WLKfzJEddzIXPp5SEiVVRUdkISqrt5lsHNkg5NLqwyOTRSpaBMhCW96tdTLP+ta2ZCsHTY6LJlolytvGMkIYmZA/UriiakrODQ6IlHTrgMdL9xU0o7zrwblSJE2RFD2YJPGYI1GQBFzP/G9hjelW2oWLffxRA9wvhBdels13g/p0rIWGJEzYnO+HMKzX6/3k8UajUAw28Za5B8db6WhsNLRT0o8/alCA2E/pvclRa1LgrKG5+BDyeHBbEcWHUZgHTJhy8U6Ogy7K5QE8T1HMCbz6BIhRcs1UFc0gdQTxzjZS1lCpBJK5HVPJ0C7GjTOtIAQZiiaHmsxwtZLLQa8gVA2JCuAO+WpIH4BRRDbl+Wdlq4PLq+qW+o42k2OJRPrVke9FVlZ0WeD5TBPScGOsHigqJoeXpYYyGKuxKmgd8XAsxfMgzEIXw0LoJhfiNTSK9j/t0rlNmwUijj0BrcNRptJWzrKMrYKQQKXHjduU/6q0Z3K5ypEvJzSUbJlalH8miTto/2mqeOO9f6ff+DkdW+hbdnTG1M7ptU9N3XfG7Nt8aqBrvTV9xXCFNvvH9xK3f1vwWeuv8vP0lI86Otqjns3f9XB/9/8z+ODqpUtiy7vkMceuttWV117mh+7emXjtCnXj6Vl/XH976oXVG+d2LVh7dcfMTReNvZfSd145F6mM1m2c/0Ns2dlv5/oP6vGdtRdKS3Z3WRMrZ90kw1sH1gxv2jb4BTlLnD08j2w7F++89tdmWq2ZPPn4vfMzv3/i9rTpB6aZoYp3trefrrueeilT0T1l/Ybd6VppT/B+e8ndrovny7p9t7pmvL9368a9t+Z8dfl+1+CGF0+VX1g9/+zr+2e0/3Kz+MnsC9uufH6meNUc73ZH565nn3nr6+0zJnZsmfLBTn12l7uiJDXvWv+ei3eDr7Jz24f/7JROWIM1kT03/T+FtJIHE4uKhocnFT0oPrUW4Pf/ANGuMu4=
eNptVXtsU2UUL6AZQWVoFBON8VogGbjb3UfbtcUZt3Udc6zd1sKYaJqv3/3a3vW+dh/tHszgUOKDMa4CYkRA2FqzjDkdkzeKBsWIkaB/OCCQaGKEGaPxFcPLr10nW+D+cXPvPef7nd8553fO7c4kkarxsjRjkJd0pAKo4xfN7M6oqNVAmv5iWkR6XOb66gPB0B5D5ceWxHVd0TwlJUDhbbKCJMDboCyWJOkSGAd6CX5WBJSD6YvIXPvY2k6riDQNxJBm9azutEIZR5J0q8daz8MEAQgVSJwsEpIhRpBKgIicRARlLbaqsoCwl6Eh1dr1XLFVlDkk4A8xRSdZm4PUDTUiYz9NVxEQrZ4oEDRUbNWRqOBMsBWfpmxUVyaOAIfTvGCZ1xeXNd0cmk79fQAhwphIgjLHSzFzb6yDV4oJDkUFoKMBTFhCucKYAwmEFBIIfBKlJ06Zw0BRBB6CrL2kRZOlwXyCpN6uoFvNA9l8SFwNSTf3BTCJ8pqS+nZcY4mgbU7aRg+3kZoOeEnARSMFgPmklZz98FSDAmACg5D5/pnpicNDU31kzeyvAzAQnAYJVBg3+4EqOu0jU7+rhqTzIjIzlfW3hssbb4ZjbTRjc30wDVhrl6DZn2vD/mmHka62k1DGGOa7VBrKcoJH5tkZBeEwjIYjYll7tdrUFvX5pFqfEkzyKS/PNbhal6dcvoYVLdV8OKLKxoqwI+pQmkm61E4zpS4X4yBpG2XDOZOVcci0G8tTjnq6okaCTTybaOWblvsT/homVNXAQjcl1Cng6YDbHWryS1XLUJXidqtQSvoatUgDQI3VjY16yihvCKFgoqatRXY0NDHPlENXfaC2ASQVWuhYVWkv5RhtKYEpG0meK/MnaG9LKJJ0OJetiLN+LuCvavZXca1cReMzgSSyB1rDwfCqyEqmBk7h7KKdJJWn7aTsLip7DU0qRkBSTI+be2jG9Z6KNAXPEFqXxoXUDa27D6sTnTqZyQ/T7kDtTWHP7/NipZpHQ3GjmKCcRB1QCYZiHATt9LCsx+4kqutCg5X5MKHbCvODEB5ELYrFWTU5CBkYN6QE4gYqbzsCR7MjgPubpY+HlURtiqwhMs/KHFxFNk5sEbLGOzIxb6SsxoDEd+TCmkdzs5DqaEtx0OC4eDIlUu4OO8tHkAGj+/JHFFXOhsGESFHDxXExQ3nLpBoHcK4USVMkRR9qI/HsI4EXeVzP3D2/yjSzz4GLfeBWB11OILz0MvZcN6hjUz1UJGIZZ2PfhLG73e4jt3eahGKxi7vUeWi6l4amsqEZUTtwq0MeYjelDbZNepM8Z44txC9hdykHWQcCkYg9iqIMa3c5KQo5aQTcTpaNMgfxNuQhRsk2U5FVndQQxHtbbzfHikXQlt08ZSztYPExainBS1AwOBQ0Il45mwMWuKIiQQbc+zBKQgDjiJzQn5nxNvvL62oqP1pFThUSGVAm/hkZSdYkPhpNB5GKG2MOQEE2OLxCVZSu9JGN5c3mPjcNWTsdKYWcq9TF0TRZgZfTJNr/suvL7t8MEDD3JDRH4myZ1WO3s9alhAjKXE7cptyf5YV0NlcpdmJG96OvzbbkrlkbXg/Xfkbdf+LilcUVOw6D8eubjy0uerWQ8Xq9T289b7zF3/f66nc2dKVqzu19xP9bf+E/qW8uHfwn+uC9Ff273O+u+XpT08qeoZFfLl1fYfvwzROf9ZwtenIk3PvN/tR58a8HetZ9Wlz6ye+7HtKCzXcW9Uh06pN53Qlm58JLT3x1pnnWkgVP3dnMt+ruHRuHk7XsoWVn+MyzD395/NvtR6penPdh4thje+YP97sPvly4Zc6R2KLTV7/3zjZ8L81BF2p7ly/o6TzZ4ztd94v+5B1zV47GHts2tOXy5Ss/7ty8eC9be/F32LVo9M/xT2f84Sto6v38zPpfF1RcePuVy29s+s4TbC4+vWbt3PVf9o3uuTa3c1vy7nsYx+lTP7DRlyxc47/HK4pia+7Z3fT3OTi6KWrZffzkxyPb/5C+2Hroam9XoavgNWLO+FPLnvfIF38inhj++OzGx6WZwcJfg2pRffddsXnDkdGWQODa/ur3Xt3y3SLXOKfd+OnUlZ8LLJYbN2ZZNj9EJK7PtFj+A1XHTc0=

View File

@@ -1 +1 @@
eNrtVVFv3EQQVsUfWa2QkND5zufz3SUmrRRFUVuVEFACUhUia8+es7exd93ddS7X6B4IfeXB/AIgUVJFLfCAeIFKPPLAHwjv/A9m7bsEQlCrPtfSyXczOzPfzH7z3dHZPijNpbj1nAsDikUGf+ivj84UPC5Bm6enOZhUxsd317ePS8Uv3k2NKXTQ6bCCt3XOTdrOmEiilHHRjmTe4WIsT0Yynv52lgKLMf3T8081KGc1AWGqn+zpOs4pph233W13vaUfVqMICuOsi0jGXCTVi+QJL1okhnHGDJw27upHVhQZj5jF2HmkpThfk0JAjbk63wMoHJbxfXimQBfYBnx5qg0zpT46wbzwx+9nOWjNEvhu88EC3FfHn3FWnSMMkkiZZPA8kjgJYRwzLeC/JU+xFZxZdVbu80gq8bPFprWDSIySmbPBDmyn1fHAdX+95lvNMjlxNuqB6urb979fm5f6EERi0urYX+r/cmPMpuIJF9U3F+RG95qCGPNwlunqxKgSTmIcW/VyOy1bpDskm5Ehnuv5pOsHPS/o9cndje0XEYtScKImU/VMSKe2nK1mxtnaj6qLdtq7TQPf79EPSM5ue/1lz3XdVtpzvOUbHC+vYVs/KKQG514zaOz35nlc+WvafIF3ppADf97665DO2UkD6raH7YFPWxRvA/BqQzgouKrvJTQ8BxqIMstadMRMlIYYj+QNsbcxT2hwSEuMyMvM8IIpE4KIC4mEp4EdVovqiGUQlkX4WPMnEGL9JAFFg65t98orTKoQrA4zjgRG92DhjOVEhALywkyvon302nSL03WuS0M4mhrQNPDc5WG377mzFuUC6SoiCJH1ibawcWVwKQ2EjIe4j2qK0Nkog3iBXKokjBBUPYeY67lzjEywfaVyEhqThSVfBBjcceyQgwrjcj6/mE3rapkUid0fTODXYFOpzNzQ9RGgBqZwutcwTKTa04VNqyNZQGgxcbHP6/YWSHqhNlLh7v07ejb7f6lZe5XU4AetuqOyvIOpnUJJvAEny1jOOlY4tHmrRG+mREvDt0p0qUTv7B3ShmthynSKajRY7jEvHvV9fwi9UX8pcgcuG7DI9Yb9pTgajrvMPj7AwOv6cdcbjsZDd4Q4u/2lQc9FHcuZ4GNkqF08jtuwQy/Jjd6Gyhq/ocXgaw1fH9fGbZQZy0W6i2IY4WbiUiNBEBUOEBGXES4aRuxNmGpUZM41/L7zWrXulQhuowl605pN0lc1Nz/Vom9axiwiArpz/6Ot7d2Vla2HW3fukIeyJEwBYYIwrbkVVkPGUpFaa3CHHCb0BOz9EsP0nm4T1AZiUsBTlg3WUXBAmhA5JgqQCoBKSOqlPDDESNJkqGMWWdvk/phMsXYsxXuG7Ak5qf3N0RZ5VGpDNJuikZlrBxcIFADRYPfRFs/ZAc/LHDPExArLP9JZLBHX0F5Z6TRdfy4+mQMJyOEC0wzNaw1qtM7xW+NqnSUgO516dNT+AxWlCfeZ4lajLWHoIoulRxNqb2cx9xBHmiNpAjp2mm2hM3x2XzvVDP9W4IBhtvrM7uxvcr/ytw==
eNrVVk1vG0UY5uPGkV8wWiEhIa+99vojNilSFKq2QGhRTNUqRKvZ3de70+zObGdm47iRD5SekZZf0JIorqKWooK4QCWOHPgD4cBv4Z21nVInbapwwrJl+/2a5/16Zu9OtkEqJvibjxjXIGmg8Y/67u5Ewu0clL53kIKORbh36WJ/L5fs6L1Y60z1ajWasapKmY6rCeVREFPGq4FIa4wPxL4vwtHvkxhoiOHvHX6pQNorEXBd/GysSz87G9Wcar1ab3aerAQBZNq+yAMRMh4Vj6M7LKuQEAYJ1XAwVRc/0ixLWEANxtotJfjhquAcSszF4RZAZtOEbcNDCSrDNOCbA6WpztXdfYwLf/4xSUEpGsH3Vz+dg/v7jXd/M+GVsjGYliKxV5JEDO21Mm9VPPhg/2PEUDzrx3mFOG2yRiVpOI0Wqbd7rttrdsiltf6vp8a4KlnEeHH/pz5LMa0F6eNVGsQwdymeZrmP2VVISndsBHmh7UxWEm2vbwfFUTV2L1i9ZtO1PkT9hUar23AcpxK7dqN7iuLZApyLO5lQYF+e5ow5nZ7zc/3+dSpHxcEU6g/GCptnfwY80nGx1+40flkIsIagscOoc5y964wWh9hZEgkRJfDkhm2sSxgMe1M8cA6uSRqltHjIhR2YMjy9YWOZaSgiu49jCPaVsDgi9S64IQ0dGviNptsMO92u67ZD120N/G7Q8Y/IqXmsSggRL6OJKva1zOHRPIP+KIOTczSZA5s1uU4+oRwP7zjEcXrl2zS5HOuvcaYkNvOvt+7vWrPtsXqWU+1WW22rYjGOM8cD8HB0I2X1di0/Eb6ntMCMwQNO/QRCq2dgVRZ1WGzAYOsuBgqxHAq0Bzs0zRJQXponmmVU6sUgZ1vg6uFya/Ao83Cv5WjRQMjICySUJfFCpmbKAVYQtRkdpVi9RacMsxecJh56q5Neyn1Z1gqoDOIT0lgMPa0TL2dzkTaj4GkG0gtzOUNHR2VZE8Ejs+3o38RVMO5SzwT15rhiDYXcUpkJoAKRgUHpMb7NNKhjjHeUDj2krQy7bzr5IiYM4lONSLHfSIZoyAcsMofnCl6odpgJJNDjTAKagJdn3m3F7iB+3KIIJMJySqBzLdcxljxUXoL0gM719lwZiiH3OKSZHj33bqLWhJtbl7GOBZ4/KhNrON1OvdVwxuN3Xs7iq2exOH5QqmoySWvYQTuTWCNtJwlNac1wstL/I5L/9iyKPw9974evvBbahjHOTeCPghlh6VMJ6z/z+wlKX+q0HpdEbAez++iYml+XZF91JRzgcCBZFpN8mwVC8sUr4kVqfXtr15pOoBdTFSMjtrsubYR+q9nsgOu3lgKn7dA2DZxGp7UUBp1BnZpXE6DdqDfDeqPjDzqOj2Wtt5baroN8mlLOBji3Zn0ZLviGdTzyqJ0OuMJfKNH4tYpf10phH/fQTKi1WbGSABcPeQm7gqiwVIg4D5Dl0GNrSOWU8WcTiL83XuusyzmCW5s6nffMadCzkptZVazzHqPnHj1r48rn6/3N5eX1m+sffURuipxQCQQvTopUai5BTQZCkpKBcHBtytUQTH8JXmxbqkqQMYiOAa3MQBlFxgBHiIgBkYCjAEjmpNyEHU20INMIpc88apVcGZARnh0K/r4mW1wMS/3UtEJu5UoTRUcopHrBcI5AAhAFZh/M4fj4xdI8xQghMXTzr3AGS8AUVJeXa9Osv+JfzID0yO4c0xjFq1PUKJ3hN8KVMkqPbNTK0pVPC1muvW0qmbl2zMBY8yhmPKaupjvzuntY0hSHpmcN7Om2WGN8bb52qPH4+QMD2myO/wFK03fT

View File

@@ -30,7 +30,7 @@ At a high-level, the basic ways to generate examples are:
- User feedback: users (or labelers) leave feedback on interactions with the application and examples are generated based on that feedback (for example, all interactions with positive feedback could be turned into examples).
- LLM feedback: same as user feedback but the process is automated by having models evaluate themselves.
Which approach is best depends on your task. For tasks where a small number core principles need to be understood really well, it can be valuable hand-craft a few really good examples.
Which approach is best depends on your task. For tasks where a small number of core principles need to be understood really well, it can be valuable to hand-craft a few really good examples.
For tasks where the space of correct behaviors is broader and more nuanced, it can be useful to generate many examples in a more automated fashion so that there's a higher likelihood of there being some highly relevant examples for any runtime input.
**Single-turn vs. multi-turn examples**
@@ -39,8 +39,8 @@ Another dimension to think about when generating examples is what the example is
The simplest types of examples just have a user input and an expected model output. These are single-turn examples.
One more complex type if example is where the example is an entire conversation, usually in which a model initially responds incorrectly and a user then tells the model how to correct its answer.
This is called a multi-turn example. Multi-turn examples can be useful for more nuanced tasks where its useful to show common errors and spell out exactly why they're wrong and what should be done instead.
One more complex type of example is where the example is an entire conversation, usually in which a model initially responds incorrectly and a user then tells the model how to correct its answer.
This is called a multi-turn example. Multi-turn examples can be useful for more nuanced tasks where it's useful to show common errors and spell out exactly why they're wrong and what should be done instead.
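
For instance, a multi-turn example might be represented as a short list of messages, as in this minimal sketch (the conversation contents and variable name are illustrative, not taken from any docs page):

```python
from langchain_core.messages import AIMessage, HumanMessage

# One multi-turn example: an initially wrong answer, a user correction,
# and the corrected answer that demonstrates the desired behavior.
multi_turn_example = [
    HumanMessage(content="How many 'r's are in 'strawberry'?"),
    AIMessage(content="There are 2 'r's in 'strawberry'."),
    HumanMessage(content="That's wrong: count every 'r', including the double 'r'."),
    AIMessage(content="Counting each occurrence: st-r-awbe-rr-y has 3 'r's."),
]
# These messages would be inserted into the prompt ahead of the real user input.
```
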
## 2. Number of examples
@@ -77,7 +77,7 @@ If we insert our examples as messages, where each example is represented as a se
One area where formatting examples as messages can be tricky is when our example outputs have tool calls. This is because different models have different constraints on what types of message sequences are allowed when any tool calls are generated.
- Some models require that any AIMessage with tool calls be immediately followed by ToolMessages for every tool call,
- Some models additionally require that any ToolMessages be immediately followed by an AIMessage before the next HumanMessage,
- Some models require that tools are passed in to the model if there are any tool calls / ToolMessages in the chat history.
- Some models require that tools are passed into the model if there are any tool calls / ToolMessages in the chat history.
These requirements are model-specific and should be checked for the model you are using. If your model requires ToolMessages after tool calls and/or AIMessages after ToolMessages and your examples only include expected tool calls and not the actual tool outputs, you can try adding dummy ToolMessages / AIMessages to the end of each example with generic contents to satisfy the API constraints.
In these cases it's especially worth experimenting with inserting your examples as strings versus messages, as having dummy messages can adversely affect certain models.
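As a concrete illustration, here is a minimal sketch of padding an example that ends in a tool call with a dummy ToolMessage so that stricter providers accept the message history (the tool name, arguments, and contents below are invented for illustration):

```python
from langchain_core.messages import AIMessage, HumanMessage, ToolMessage

example = [
    HumanMessage("What is 2 + 3?"),
    AIMessage(
        "",
        tool_calls=[
            {"name": "add", "args": {"a": 2, "b": 3}, "id": "call_1", "type": "tool_call"}
        ],
    ),
    # Dummy tool output with generic contents, added only to satisfy providers
    # that require a ToolMessage after every AIMessage with tool calls.
    ToolMessage("5", tool_call_id="call_1"),
]
```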

View File

@@ -74,6 +74,8 @@ As an example, query decomposition can simply be accomplished using prompting an
These can then be run sequentially or in parallel on a downstream retrieval system.
```python
from typing import List
from pydantic import BaseModel, Field
from langchain_openai import ChatOpenAI
from langchain_core.messages import SystemMessage, HumanMessage
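# The original snippet is truncated at this point in the diff; what follows is a
# hedged sketch of how the imports above might be used for query decomposition with
# structured output (the schema, prompt wording, and model name are assumptions,
# not the doc's original code).
class DecomposedQuery(BaseModel):
    """A question broken into independent sub-questions."""

    sub_questions: List[str] = Field(description="Self-contained sub-questions")

llm = ChatOpenAI(model="gpt-4o-mini").with_structured_output(DecomposedQuery)
result = llm.invoke(
    [
        SystemMessage("Decompose the user's question into independent sub-questions."),
        HumanMessage("How do agents differ from chains, and when should I use each?"),
    ]
)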

View File

@@ -91,7 +91,7 @@ For more information, please see:
#### Usage with LCEL
If you compose multiple Runnables using [LangChains Expression Language (LCEL)](/docs/concepts/lcel), the `stream()` and `astream()` methods will, by convention, stream the output of the last step in the chain. This allows the final processed result to be streamed incrementally. **LCEL** tries to optimize streaming latency in pipelines such that the streaming results from the last step are available as soon as possible.
If you compose multiple Runnables using [LangChain Expression Language (LCEL)](/docs/concepts/lcel), the `stream()` and `astream()` methods will, by convention, stream the output of the last step in the chain. This allows the final processed result to be streamed incrementally. **LCEL** tries to optimize streaming latency in pipelines so that the streaming results from the last step are available as soon as possible.
@@ -104,7 +104,7 @@ Use the `astream_events` API to access custom data and intermediate outputs from
While this API is available for use with [LangGraph](/docs/concepts/architecture#langgraph) as well, it is usually not necessary when working with LangGraph, as the `stream` and `astream` methods provide comprehensive streaming capabilities for LangGraph graphs.
:::
For chains constructed using **LCEL**, the `.stream()` method only streams the output of the final step from te chain. This might be sufficient for some applications, but as you build more complex chains of several LLM calls together, you may want to use the intermediate values of the chain alongside the final output. For example, you may want to return sources alongside the final generation when building a chat-over-documents app.
For chains constructed using **LCEL**, the `.stream()` method only streams the output of the final step from the chain. This might be sufficient for some applications, but as you build more complex chains of several LLM calls together, you may want to use the intermediate values of the chain alongside the final output. For example, you may want to return sources alongside the final generation when building a chat-over-documents app.
There are ways to do this [using callbacks](/docs/concepts/callbacks), or by constructing your chain in such a way that it passes intermediate
values to the end with something like chained [`.assign()`](/docs/how_to/passthrough/) calls, but LangChain also includes an
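To make the `.assign()` approach concrete, here is a hedged sketch (assuming `model` is a chat model and `retriever` is any runnable that fetches documents; both are placeholders) in which an intermediate value is carried through to the end of the chain and should stream alongside the final answer:

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough

prompt = ChatPromptTemplate.from_template(
    "Answer using these documents:\n{docs}\n\nQuestion: {question}"
)
chain = (
    # Keep the retrieved docs in the output dict as an intermediate value.
    RunnablePassthrough.assign(docs=lambda x: retriever.invoke(x["question"]))
    # Add the final answer alongside the docs.
    | RunnablePassthrough.assign(answer=prompt | model | StrOutputParser())
)
# Streaming this chain yields dict chunks containing `docs` as well as the growing `answer`.
```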
@@ -125,7 +125,7 @@ prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")
parser = StrOutputParser()
chain = prompt | model | parser
async for event in chain.astream_events({"topic": "parrot"}, version="v2"):
async for event in chain.astream_events({"topic": "parrot"}):
kind = event["event"]
if kind == "on_chat_model_stream":
print(event, end="|", flush=True)

View File

@@ -50,11 +50,6 @@ locally to ensure that it looks good and is free of errors.
If you're unable to build it locally that's okay as well, as you will be able to
see a preview of the documentation on the pull request page.
From the **monorepo root**, run the following command to install the dependencies:
```bash
poetry install --with lint,docs --no-root
````
### Building
@@ -158,14 +153,6 @@ the working directory to the `langchain-community` directory:
cd [root]/libs/langchain-community
```
Set up a virtual environment for the package if you haven't done so already.
Install the dependencies for the package.
```bash
poetry install --with lint
```
Then you can run the following commands to lint and format the in-code documentation:
```bash

View File

@@ -35,5 +35,5 @@ Please reference our [Review Process](review_process.mdx).
### I think my PR was closed in a way that didn't follow the review process. What should I do?
Tag `@efriis` in the PR comments referencing the portion of the review
Tag `@ccurme` in the PR comments referencing the portion of the review
process that you believe was not followed. We'll take a look!

View File

@@ -270,7 +270,7 @@
"\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
"\n",
"<ChatModelTabs openaiParams={`model=\"gpt-4\"`} />\n"
"<ChatModelTabs overrideParams={{openai: {model: \"gpt-4\"}}} />\n"
]
},
{

View File

@@ -127,20 +127,18 @@
},
{
"cell_type": "code",
"execution_count": 11,
"id": "27bd1dfd-8ae2-49d6-b526-97180c81b5f4",
"metadata": {
"tags": []
},
"execution_count": 3,
"id": "5a03086e-2813-4cb1-b12b-d00e7eeba122",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'event': 'on_chat_model_start', 'run_id': '08da631a-12a0-4f07-baee-fc9a175ad4ba', 'name': 'ChatAnthropic', 'tags': [], 'metadata': {}, 'data': {'input': 'Write me a 1 verse song about goldfish on the moon'}}\n",
"{'event': 'on_chat_model_stream', 'run_id': '08da631a-12a0-4f07-baee-fc9a175ad4ba', 'tags': [], 'metadata': {}, 'name': 'ChatAnthropic', 'data': {'chunk': AIMessageChunk(content='Here', id='run-08da631a-12a0-4f07-baee-fc9a175ad4ba')}}\n",
"{'event': 'on_chat_model_stream', 'run_id': '08da631a-12a0-4f07-baee-fc9a175ad4ba', 'tags': [], 'metadata': {}, 'name': 'ChatAnthropic', 'data': {'chunk': AIMessageChunk(content=\"'s\", id='run-08da631a-12a0-4f07-baee-fc9a175ad4ba')}}\n",
"{'event': 'on_chat_model_stream', 'run_id': '08da631a-12a0-4f07-baee-fc9a175ad4ba', 'tags': [], 'metadata': {}, 'name': 'ChatAnthropic', 'data': {'chunk': AIMessageChunk(content=' a', id='run-08da631a-12a0-4f07-baee-fc9a175ad4ba')}}\n",
"{'event': 'on_chat_model_start', 'data': {'input': 'Write me a 1 verse song about goldfish on the moon'}, 'name': 'ChatAnthropic', 'tags': [], 'run_id': '1d430164-52b1-4d00-8c00-b16460f7737e', 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-haiku-20240307', 'ls_model_type': 'chat', 'ls_temperature': None, 'ls_max_tokens': 1024}, 'parent_ids': []}\n",
"{'event': 'on_chat_model_stream', 'run_id': '1d430164-52b1-4d00-8c00-b16460f7737e', 'name': 'ChatAnthropic', 'tags': [], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-haiku-20240307', 'ls_model_type': 'chat', 'ls_temperature': None, 'ls_max_tokens': 1024}, 'data': {'chunk': AIMessageChunk(content='', additional_kwargs={}, response_metadata={}, id='run-1d430164-52b1-4d00-8c00-b16460f7737e', usage_metadata={'input_tokens': 21, 'output_tokens': 2, 'total_tokens': 23, 'input_token_details': {'cache_creation': 0, 'cache_read': 0}})}, 'parent_ids': []}\n",
"{'event': 'on_chat_model_stream', 'run_id': '1d430164-52b1-4d00-8c00-b16460f7737e', 'name': 'ChatAnthropic', 'tags': [], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-haiku-20240307', 'ls_model_type': 'chat', 'ls_temperature': None, 'ls_max_tokens': 1024}, 'data': {'chunk': AIMessageChunk(content=\"Here's\", additional_kwargs={}, response_metadata={}, id='run-1d430164-52b1-4d00-8c00-b16460f7737e')}, 'parent_ids': []}\n",
"{'event': 'on_chat_model_stream', 'run_id': '1d430164-52b1-4d00-8c00-b16460f7737e', 'name': 'ChatAnthropic', 'tags': [], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-haiku-20240307', 'ls_model_type': 'chat', 'ls_temperature': None, 'ls_max_tokens': 1024}, 'data': {'chunk': AIMessageChunk(content=' a short one-verse song', additional_kwargs={}, response_metadata={}, id='run-1d430164-52b1-4d00-8c00-b16460f7737e')}, 'parent_ids': []}\n",
"...Truncated\n"
]
}
@@ -152,7 +150,7 @@
"idx = 0\n",
"\n",
"async for event in chat.astream_events(\n",
" \"Write me a 1 verse song about goldfish on the moon\", version=\"v1\"\n",
" \"Write me a 1 verse song about goldfish on the moon\"\n",
"):\n",
" idx += 1\n",
" if idx >= 5: # Truncate the output\n",
@@ -178,7 +176,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -21,7 +21,7 @@
":::info Prerequisites\n",
"\n",
"This guide assumes familiarity with the following concepts:\n",
"- [LangChain Expression Language (LCEL)](/docs/concepts/lcel)\n",
"- [The Runnable interface](/docs/concepts/runnables/)\n",
"- [Chaining runnables](/docs/how_to/sequence/)\n",
"- [Binding runtime arguments](/docs/how_to/binding/)\n",
"\n",
@@ -62,6 +62,163 @@
" os.environ[\"OPENAI_API_KEY\"] = getpass()"
]
},
{
"cell_type": "markdown",
"id": "9d25f63f-a048-42f3-ac2f-e20ba99cff16",
"metadata": {},
"source": [
"### Configuring fields on a chat model\n",
"\n",
"If using [init_chat_model](/docs/how_to/chat_models_universal_init/) to create a chat model, you can specify configurable fields in the constructor:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "92ba5e49-b2b4-432b-b8bc-b03de46dc2bb",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import init_chat_model\n",
"\n",
"llm = init_chat_model(\n",
" \"openai:gpt-4o-mini\",\n",
" # highlight-next-line\n",
" configurable_fields=(\"temperature\",),\n",
")"
]
},
{
"cell_type": "markdown",
"id": "61ef4976-9943-492b-9554-0d10e3d3ba76",
"metadata": {},
"source": [
"You can then set the parameter at runtime using `.with_config`:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "277e3232-9b77-4828-8082-b62f4d97127f",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Hello! How can I assist you today?\n"
]
}
],
"source": [
"response = llm.with_config({\"temperature\": 0}).invoke(\"Hello\")\n",
"print(response.content)"
]
},
{
"cell_type": "markdown",
"id": "44c5fe89-f0a6-4ff0-b419-b927e51cc9fa",
"metadata": {},
"source": [
":::tip\n",
"\n",
"In addition to invocation parameters like temperature, configuring fields this way extends to clients and other attributes.\n",
"\n",
":::"
]
},
{
"cell_type": "markdown",
"id": "fed7e600-4d5e-4875-8d37-082ec926e66f",
"metadata": {},
"source": [
"#### Use with tools\n",
"\n",
"This method is applicable when [binding tools](/docs/concepts/tool_calling/) as well:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "61a67769-4a15-49e2-a945-1f4e7ef19d8c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'get_weather',\n",
" 'args': {'location': 'San Francisco'},\n",
" 'id': 'call_B93EttzlGyYUhzbIIiMcl3bE',\n",
" 'type': 'tool_call'}]"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.tools import tool\n",
"\n",
"\n",
"@tool\n",
"def get_weather(location: str):\n",
" \"\"\"Get the weather.\"\"\"\n",
" return \"It's sunny.\"\n",
"\n",
"\n",
"llm_with_tools = llm.bind_tools([get_weather])\n",
"response = llm_with_tools.with_config({\"temperature\": 0}).invoke(\n",
" \"What's the weather in SF?\"\n",
")\n",
"response.tool_calls"
]
},
{
"cell_type": "markdown",
"id": "b71c7bf5-f351-4b90-ae86-1100d2dcdfaa",
"metadata": {},
"source": [
"In addition to `.with_config`, we can now include the parameter when passing a configuration directly. See example below, where we allow the underlying model temperature to be configurable inside of a [langgraph agent](/docs/tutorials/agents/):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9bb36a46-7b67-4f11-b043-771f3005f493",
"metadata": {},
"outputs": [],
"source": [
"! pip install --upgrade langgraph"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "093d1c7d-1a64-4e6a-849f-075526b9b3ca",
"metadata": {},
"outputs": [],
"source": [
"from langgraph.prebuilt import create_react_agent\n",
"\n",
"agent = create_react_agent(llm, [get_weather])\n",
"\n",
"response = agent.invoke(\n",
" {\"messages\": \"What's the weather in Boston?\"},\n",
" {\"configurable\": {\"temperature\": 0}},\n",
")"
]
},
{
"cell_type": "markdown",
"id": "9dc5be03-528f-4532-8cb0-1f149dddedc9",
"metadata": {},
"source": [
"### Configuring fields on arbitrary Runnables\n",
"\n",
"You can also use the `.configurable_fields` method on arbitrary [Runnables](/docs/concepts/runnables/), as shown below:"
]
},
{
"cell_type": "code",
"execution_count": 2,
@@ -604,7 +761,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -22,7 +22,7 @@
"2. LangChain [Runnables](/docs/concepts/runnables);\n",
"3. By sub-classing from [BaseTool](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.base.BaseTool.html) -- This is the most flexible method, it provides the largest degree of control, at the expense of more effort and code.\n",
"\n",
"Creating tools from functions may be sufficient for most use cases, and can be done via a simple [@tool decorator](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.tool.html#langchain_core.tools.tool). If more configuration is needed-- e.g., specification of both sync and async implementations-- one can also use the [StructuredTool.from_function](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.structured.StructuredTool.html#langchain_core.tools.structured.StructuredTool.from_function) class method.\n",
"Creating tools from functions may be sufficient for most use cases, and can be done via a simple [@tool decorator](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.convert.tool.html). If more configuration is needed-- e.g., specification of both sync and async implementations-- one can also use the [StructuredTool.from_function](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.structured.StructuredTool.html#langchain_core.tools.structured.StructuredTool.from_function) class method.\n",
"\n",
"In this guide we provide an overview of these methods.\n",
"\n",
@@ -492,7 +492,7 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": null,
"id": "1dad8f8e",
"metadata": {
"execution": {
@@ -504,13 +504,14 @@
},
"outputs": [],
"source": [
"from typing import Optional, Type\n",
"from typing import Optional\n",
"\n",
"from langchain_core.callbacks import (\n",
" AsyncCallbackManagerForToolRun,\n",
" CallbackManagerForToolRun,\n",
")\n",
"from langchain_core.tools import BaseTool\n",
"from langchain_core.tools.base import ArgsSchema\n",
"from pydantic import BaseModel, Field\n",
"\n",
"\n",
@@ -524,7 +525,7 @@
"class CustomCalculatorTool(BaseTool):\n",
" name: str = \"Calculator\"\n",
" description: str = \"useful for when you need to answer questions about math\"\n",
" args_schema: Type[BaseModel] = CalculatorInput\n",
" args_schema: Optional[ArgsSchema] = CalculatorInput\n",
" return_direct: bool = True\n",
"\n",
" def _run(\n",

View File

@@ -551,7 +551,7 @@
"\n",
"While a parser encapsulates the logic needed to parse binary data into documents, *blob loaders* encapsulate the logic that's necessary to load blobs from a given storage location.\n",
"\n",
"A the moment, `LangChain` only supports `FileSystemBlobLoader`.\n",
"At the moment, `LangChain` only supports `FileSystemBlobLoader`.\n",
"\n",
"You can use the `FileSystemBlobLoader` to load blobs and then use the parser to parse them."
]
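A hedged sketch of the loader-plus-parser pattern described above (the path and glob are illustrative, and `parser` is assumed to be any `BaseBlobParser` instance):

```python
from langchain_community.document_loaders.blob_loaders import FileSystemBlobLoader

loader = FileSystemBlobLoader(path="./example_data", glob="*.txt")
for blob in loader.yield_blobs():
    # Hand each blob to a parser to turn it into Documents.
    for doc in parser.lazy_parse(blob):
        print(doc.metadata)
```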

View File

@@ -354,7 +354,7 @@
"\n",
"<ChatModelTabs\n",
" customVarName=\"llm\"\n",
" openaiParams={`model=\"gpt-4-0125-preview\", temperature=0`}\n",
" overrideParams={{openai: {model: \"gpt-4-0125-preview\", kwargs: \"temperature=0\"}}}\n",
"/>\n"
]
},

View File

@@ -179,7 +179,7 @@
"\n",
"<ChatModelTabs\n",
" customVarName=\"llm\"\n",
" openaiParams={`model=\"gpt-4o\", temperature=0`}\n",
" overrideParams={{openai: {model: \"gpt-4o\", kwargs: \"temperature=0\"}}}\n",
"/>\n"
]
},

View File

@@ -167,7 +167,7 @@
"\n",
"<ChatModelTabs\n",
" customVarName=\"llm\"\n",
" fireworksParams={`model=\"accounts/fireworks/models/firefunction-v1\", temperature=0`}\n",
" overrideParams={{fireworks: {model: \"accounts/fireworks/models/firefunction-v1\", kwargs: \"temperature=0\"}}}\n",
"/>\n",
"\n",
"We can use the `bind_tools()` method to handle converting\n",

View File

@@ -99,8 +99,6 @@
"\n",
"prompt = ChatPromptTemplate.from_template(\"what is {a} + {b}\")\n",
"\n",
"chain1 = prompt | model\n",
"\n",
"chain = (\n",
" {\n",
" \"a\": itemgetter(\"foo\") | RunnableLambda(length_function),\n",

View File

@@ -68,7 +68,7 @@
"\n",
"### Formatting prompts\n",
"\n",
"Some providers have [chat model](/docs/concepts/chat_models) wrappers that takes care of formatting your input prompt for the specific local model you're using. However, if you are prompting local models with a [text-in/text-out LLM](/docs/concepts/text_llms) wrapper, you may need to use a prompt tailed for your specific model.\n",
"Some providers have [chat model](/docs/concepts/chat_models) wrappers that takes care of formatting your input prompt for the specific local model you're using. However, if you are prompting local models with a [text-in/text-out LLM](/docs/concepts/text_llms) wrapper, you may need to use a prompt tailored for your specific model.\n",
"\n",
"This can [require the inclusion of special tokens](https://huggingface.co/blog/llama2#how-to-prompt-llama-2). [Here's an example for LLaMA 2](https://smith.langchain.com/hub/rlm/rag-prompt-llama).\n",
"\n",

View File

@@ -329,7 +329,7 @@
"id": "fc6059fd-0df7-4b6f-a84c-b5874e983638",
"metadata": {},
"source": [
"We can also pass in an arbitrary function or a runnable. This function/runnable should take in a the graph state and output a list of messages.\n",
"We can also pass in an arbitrary function or a runnable. This function/runnable should take in a graph state and output a list of messages.\n",
"We can do all types of arbitrary formatting of messages here. In this case, let's add a SystemMessage to the start of the list of messages and append another user message at the end."
]
},
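A minimal sketch of such a function (the system message and trailing user message are illustrative assumptions):

```python
from langchain_core.messages import HumanMessage, SystemMessage


def format_for_model(state):
    # Receives the graph state and returns the list of messages sent to the model.
    return (
        [SystemMessage("You are a concise, helpful assistant.")]
        + state["messages"]
        + [HumanMessage("Answer in one sentence.")]
    )
```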

View File

@@ -512,44 +512,6 @@
"db.run(query)"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "fcdd8432-07a4-4609-8214-b1591dd94950",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"SELECT DISTINCT Genre.Name\n",
"FROM Genre\n",
"JOIN Track ON Genre.GenreId = Track.GenreId\n",
"JOIN Album ON Track.AlbumId = Album.AlbumId\n",
"JOIN Artist ON Album.ArtistId = Artist.ArtistId\n",
"WHERE Artist.Name = 'Elenis Moriset'\n"
]
},
{
"data": {
"text/plain": [
"''"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Without retrieval\n",
"query = query_chain.invoke(\n",
" {\"question\": \"What are all the genres of elenis moriset songs\", \"proper_nouns\": \"\"}\n",
")\n",
"print(query)\n",
"db.run(query)"
]
},
{
"cell_type": "code",
"execution_count": 14,

View File

@@ -720,22 +720,13 @@
},
{
"cell_type": "code",
"execution_count": 14,
"id": "c00df46e-7f6b-4e06-8abf-801898c8d57f",
"execution_count": 13,
"id": "bab5f910-fee0-4a94-9f05-b469006333b8",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/home/eugene/src/langchain/libs/core/langchain_core/_api/beta_decorator.py:87: LangChainBetaWarning: This API is in beta and may change in the future.\n",
" warn_beta(\n"
]
}
],
"outputs": [],
"source": [
"events = []\n",
"async for event in model.astream_events(\"hello\", version=\"v2\"):\n",
"async for event in model.astream_events(\"hello\"):\n",
" events.append(event)"
]
},
@@ -746,15 +737,7 @@
"source": [
":::note\n",
"\n",
"Hey what's that funny version=\"v2\" parameter in the API?! 😾\n",
"\n",
"This is a **beta API**, and we're almost certainly going to make some changes to it (in fact, we already have!)\n",
"\n",
"This version parameter will allow us to minimize such breaking changes to your code. \n",
"\n",
"In short, we are annoying you now, so we don't have to annoy you later.\n",
"\n",
"`v2` is only available for langchain-core>=0.2.0.\n",
"For `langchain-core<0.3.37`, set the `version` kwarg explicitly (e.g., `model.astream_events(\"hello\", version=\"v2\")`).\n",
"\n",
":::"
]
@@ -769,8 +752,8 @@
},
{
"cell_type": "code",
"execution_count": 15,
"id": "ce31b525-f47d-4828-85a7-912ce9f2e79b",
"execution_count": 14,
"id": "c4a2f5dc-2c75-4be4-a8ca-b5b84a3cdbef",
"metadata": {},
"outputs": [
{
@@ -780,23 +763,38 @@
" 'data': {'input': 'hello'},\n",
" 'name': 'ChatAnthropic',\n",
" 'tags': [],\n",
" 'run_id': 'a81e4c0f-fc36-4d33-93bc-1ac25b9bb2c3',\n",
" 'metadata': {}},\n",
" 'run_id': 'b18d016d-8b9b-49e7-a555-44db498fcf66',\n",
" 'metadata': {'ls_provider': 'anthropic',\n",
" 'ls_model_name': 'claude-3-sonnet-20240229',\n",
" 'ls_model_type': 'chat',\n",
" 'ls_temperature': 0.0,\n",
" 'ls_max_tokens': 1024},\n",
" 'parent_ids': []},\n",
" {'event': 'on_chat_model_stream',\n",
" 'data': {'chunk': AIMessageChunk(content='Hello', id='run-a81e4c0f-fc36-4d33-93bc-1ac25b9bb2c3')},\n",
" 'run_id': 'a81e4c0f-fc36-4d33-93bc-1ac25b9bb2c3',\n",
" 'run_id': 'b18d016d-8b9b-49e7-a555-44db498fcf66',\n",
" 'name': 'ChatAnthropic',\n",
" 'tags': [],\n",
" 'metadata': {}},\n",
" 'metadata': {'ls_provider': 'anthropic',\n",
" 'ls_model_name': 'claude-3-sonnet-20240229',\n",
" 'ls_model_type': 'chat',\n",
" 'ls_temperature': 0.0,\n",
" 'ls_max_tokens': 1024},\n",
" 'data': {'chunk': AIMessageChunk(content='', additional_kwargs={}, response_metadata={}, id='run-b18d016d-8b9b-49e7-a555-44db498fcf66', usage_metadata={'input_tokens': 8, 'output_tokens': 4, 'total_tokens': 12, 'input_token_details': {'cache_creation': 0, 'cache_read': 0}})},\n",
" 'parent_ids': []},\n",
" {'event': 'on_chat_model_stream',\n",
" 'data': {'chunk': AIMessageChunk(content='!', id='run-a81e4c0f-fc36-4d33-93bc-1ac25b9bb2c3')},\n",
" 'run_id': 'a81e4c0f-fc36-4d33-93bc-1ac25b9bb2c3',\n",
" 'run_id': 'b18d016d-8b9b-49e7-a555-44db498fcf66',\n",
" 'name': 'ChatAnthropic',\n",
" 'tags': [],\n",
" 'metadata': {}}]"
" 'metadata': {'ls_provider': 'anthropic',\n",
" 'ls_model_name': 'claude-3-sonnet-20240229',\n",
" 'ls_model_type': 'chat',\n",
" 'ls_temperature': 0.0,\n",
" 'ls_max_tokens': 1024},\n",
" 'data': {'chunk': AIMessageChunk(content='Hello! How can', additional_kwargs={}, response_metadata={}, id='run-b18d016d-8b9b-49e7-a555-44db498fcf66')},\n",
" 'parent_ids': []}]"
]
},
"execution_count": 15,
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
@@ -807,7 +805,7 @@
},
{
"cell_type": "code",
"execution_count": 16,
"execution_count": 15,
"id": "76cfe826-ee63-4310-ad48-55a95eb3b9d6",
"metadata": {},
"outputs": [
@@ -815,20 +813,30 @@
"data": {
"text/plain": [
"[{'event': 'on_chat_model_stream',\n",
" 'data': {'chunk': AIMessageChunk(content='?', id='run-a81e4c0f-fc36-4d33-93bc-1ac25b9bb2c3')},\n",
" 'run_id': 'a81e4c0f-fc36-4d33-93bc-1ac25b9bb2c3',\n",
" 'run_id': 'b18d016d-8b9b-49e7-a555-44db498fcf66',\n",
" 'name': 'ChatAnthropic',\n",
" 'tags': [],\n",
" 'metadata': {}},\n",
" 'metadata': {'ls_provider': 'anthropic',\n",
" 'ls_model_name': 'claude-3-sonnet-20240229',\n",
" 'ls_model_type': 'chat',\n",
" 'ls_temperature': 0.0,\n",
" 'ls_max_tokens': 1024},\n",
" 'data': {'chunk': AIMessageChunk(content='', additional_kwargs={}, response_metadata={'stop_reason': 'end_turn', 'stop_sequence': None}, id='run-b18d016d-8b9b-49e7-a555-44db498fcf66', usage_metadata={'input_tokens': 0, 'output_tokens': 12, 'total_tokens': 12, 'input_token_details': {}})},\n",
" 'parent_ids': []},\n",
" {'event': 'on_chat_model_end',\n",
" 'data': {'output': AIMessageChunk(content='Hello! How can I assist you today?', id='run-a81e4c0f-fc36-4d33-93bc-1ac25b9bb2c3')},\n",
" 'run_id': 'a81e4c0f-fc36-4d33-93bc-1ac25b9bb2c3',\n",
" 'data': {'output': AIMessageChunk(content='Hello! How can I assist you today?', additional_kwargs={}, response_metadata={'stop_reason': 'end_turn', 'stop_sequence': None}, id='run-b18d016d-8b9b-49e7-a555-44db498fcf66', usage_metadata={'input_tokens': 8, 'output_tokens': 16, 'total_tokens': 24, 'input_token_details': {'cache_creation': 0, 'cache_read': 0}})},\n",
" 'run_id': 'b18d016d-8b9b-49e7-a555-44db498fcf66',\n",
" 'name': 'ChatAnthropic',\n",
" 'tags': [],\n",
" 'metadata': {}}]"
" 'metadata': {'ls_provider': 'anthropic',\n",
" 'ls_model_name': 'claude-3-sonnet-20240229',\n",
" 'ls_model_type': 'chat',\n",
" 'ls_temperature': 0.0,\n",
" 'ls_max_tokens': 1024},\n",
" 'parent_ids': []}]"
]
},
"execution_count": 16,
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
@@ -849,7 +857,7 @@
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": 16,
"id": "4328c56c-a303-427b-b1f2-f354e9af555c",
"metadata": {},
"outputs": [],
@@ -864,7 +872,6 @@
" \"output a list of the countries france, spain and japan and their populations in JSON format. \"\n",
" 'Use a dict with an outer key of \"countries\" which contains a list of countries. '\n",
" \"Each country should have the key `name` and `population`\",\n",
" version=\"v2\",\n",
" )\n",
"]"
]
@@ -947,29 +954,26 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Chat model chunk: ''\n",
"Chat model chunk: '{'\n",
"Parser chunk: {}\n",
"Chat model chunk: '\\n '\n",
"Chat model chunk: '\"'\n",
"Chat model chunk: 'countries'\n",
"Chat model chunk: '\":'\n",
"Chat model chunk: ' ['\n",
"Chat model chunk: '\\n \"countries'\n",
"Chat model chunk: '\": [\\n '\n",
"Parser chunk: {'countries': []}\n",
"Chat model chunk: '\\n '\n",
"Chat model chunk: '{'\n",
"Chat model chunk: '{\\n \"'\n",
"Parser chunk: {'countries': [{}]}\n",
"Chat model chunk: '\\n '\n",
"Chat model chunk: '\"'\n",
"Chat model chunk: 'name'\n",
"Chat model chunk: '\":'\n",
"Chat model chunk: ' \"'\n",
"Parser chunk: {'countries': [{'name': ''}]}\n",
"Chat model chunk: 'France'\n",
"Chat model chunk: 'name\": \"France'\n",
"Parser chunk: {'countries': [{'name': 'France'}]}\n",
"Chat model chunk: '\",'\n",
"Chat model chunk: '\\n '\n",
"Chat model chunk: '\"'\n",
"Chat model chunk: 'population'\n",
"Chat model chunk: '\",\\n \"'\n",
"Chat model chunk: 'population\": 67'\n",
"Parser chunk: {'countries': [{'name': 'France', 'population': 67}]}\n",
"Chat model chunk: '413'\n",
"Parser chunk: {'countries': [{'name': 'France', 'population': 67413}]}\n",
"Chat model chunk: '000\\n },'\n",
"Parser chunk: {'countries': [{'name': 'France', 'population': 67413000}]}\n",
"Chat model chunk: '\\n {'\n",
"Parser chunk: {'countries': [{'name': 'France', 'population': 67413000}, {}]}\n",
"Chat model chunk: '\\n \"name\":'\n",
"...\n"
]
}
@@ -981,7 +985,6 @@
" \"output a list of the countries france, spain and japan and their populations in JSON format. \"\n",
" 'Use a dict with an outer key of \"countries\" which contains a list of countries. '\n",
" \"Each country should have the key `name` and `population`\",\n",
" version=\"v2\",\n",
"):\n",
" kind = event[\"event\"]\n",
" if kind == \"on_chat_model_stream\":\n",
@@ -1023,24 +1026,24 @@
{
"cell_type": "code",
"execution_count": 20,
"id": "4f0b581b-be63-4663-baba-c6d2b625cdf9",
"id": "42145735-25e8-4e67-b081-b0c15ea45dd1",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'event': 'on_parser_start', 'data': {'input': 'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of \"countries\" which contains a list of countries. Each country should have the key `name` and `population`'}, 'name': 'my_parser', 'tags': ['seq:step:2'], 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'metadata': {}}\n",
"{'event': 'on_parser_stream', 'data': {'chunk': {}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}\n",
"{'event': 'on_parser_stream', 'data': {'chunk': {'countries': []}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}\n",
"{'event': 'on_parser_stream', 'data': {'chunk': {'countries': [{}]}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}\n",
"{'event': 'on_parser_stream', 'data': {'chunk': {'countries': [{'name': ''}]}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}\n",
"{'event': 'on_parser_stream', 'data': {'chunk': {'countries': [{'name': 'France'}]}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}\n",
"{'event': 'on_parser_stream', 'data': {'chunk': {'countries': [{'name': 'France', 'population': 67}]}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}\n",
"{'event': 'on_parser_stream', 'data': {'chunk': {'countries': [{'name': 'France', 'population': 67413}]}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}\n",
"{'event': 'on_parser_stream', 'data': {'chunk': {'countries': [{'name': 'France', 'population': 67413000}]}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}\n",
"{'event': 'on_parser_stream', 'data': {'chunk': {'countries': [{'name': 'France', 'population': 67413000}, {}]}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}\n",
"{'event': 'on_parser_stream', 'data': {'chunk': {'countries': [{'name': 'France', 'population': 67413000}, {'name': ''}]}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}\n",
"{'event': 'on_parser_start', 'data': {'input': 'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of \"countries\" which contains a list of countries. Each country should have the key `name` and `population`'}, 'name': 'my_parser', 'tags': ['seq:step:2'], 'run_id': '37ee9e85-481c-415e-863b-c9e132d24948', 'metadata': {}, 'parent_ids': ['5a0bc625-09fd-4bdf-9932-54909a9a8c29']}\n",
"{'event': 'on_parser_stream', 'run_id': '37ee9e85-481c-415e-863b-c9e132d24948', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {}}, 'parent_ids': ['5a0bc625-09fd-4bdf-9932-54909a9a8c29']}\n",
"{'event': 'on_parser_stream', 'run_id': '37ee9e85-481c-415e-863b-c9e132d24948', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': []}}, 'parent_ids': ['5a0bc625-09fd-4bdf-9932-54909a9a8c29']}\n",
"{'event': 'on_parser_stream', 'run_id': '37ee9e85-481c-415e-863b-c9e132d24948', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': [{}]}}, 'parent_ids': ['5a0bc625-09fd-4bdf-9932-54909a9a8c29']}\n",
"{'event': 'on_parser_stream', 'run_id': '37ee9e85-481c-415e-863b-c9e132d24948', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': [{'name': 'France'}]}}, 'parent_ids': ['5a0bc625-09fd-4bdf-9932-54909a9a8c29']}\n",
"{'event': 'on_parser_stream', 'run_id': '37ee9e85-481c-415e-863b-c9e132d24948', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': [{'name': 'France', 'population': 67}]}}, 'parent_ids': ['5a0bc625-09fd-4bdf-9932-54909a9a8c29']}\n",
"{'event': 'on_parser_stream', 'run_id': '37ee9e85-481c-415e-863b-c9e132d24948', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': [{'name': 'France', 'population': 67413}]}}, 'parent_ids': ['5a0bc625-09fd-4bdf-9932-54909a9a8c29']}\n",
"{'event': 'on_parser_stream', 'run_id': '37ee9e85-481c-415e-863b-c9e132d24948', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': [{'name': 'France', 'population': 67413000}]}}, 'parent_ids': ['5a0bc625-09fd-4bdf-9932-54909a9a8c29']}\n",
"{'event': 'on_parser_stream', 'run_id': '37ee9e85-481c-415e-863b-c9e132d24948', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': [{'name': 'France', 'population': 67413000}, {}]}}, 'parent_ids': ['5a0bc625-09fd-4bdf-9932-54909a9a8c29']}\n",
"{'event': 'on_parser_stream', 'run_id': '37ee9e85-481c-415e-863b-c9e132d24948', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': [{'name': 'France', 'population': 67413000}, {'name': 'Spain'}]}}, 'parent_ids': ['5a0bc625-09fd-4bdf-9932-54909a9a8c29']}\n",
"{'event': 'on_parser_stream', 'run_id': '37ee9e85-481c-415e-863b-c9e132d24948', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': [{'name': 'France', 'population': 67413000}, {'name': 'Spain', 'population': 47}]}}, 'parent_ids': ['5a0bc625-09fd-4bdf-9932-54909a9a8c29']}\n",
"...\n"
]
}
@@ -1055,7 +1058,6 @@
" \"output a list of the countries france, spain and japan and their populations in JSON format. \"\n",
" 'Use a dict with an outer key of \"countries\" which contains a list of countries. '\n",
" \"Each country should have the key `name` and `population`\",\n",
" version=\"v2\",\n",
" include_names=[\"my_parser\"],\n",
"):\n",
" print(event)\n",
@@ -1077,24 +1079,24 @@
{
"cell_type": "code",
"execution_count": 21,
"id": "096cd904-72f0-4ebe-a8b7-d0e730faea7f",
"id": "2a7d8fe0-47ca-4ab4-9c10-b34e3f6106ee",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'event': 'on_chat_model_start', 'data': {'input': 'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of \"countries\" which contains a list of countries. Each country should have the key `name` and `population`'}, 'name': 'model', 'tags': ['seq:step:1'], 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'metadata': {}}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='{', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='\\n ', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='\"', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='countries', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='\":', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' [', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='\\n ', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='{', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='\\n ', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='\"', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}\n",
"{'event': 'on_chat_model_start', 'data': {'input': 'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of \"countries\" which contains a list of countries. Each country should have the key `name` and `population`'}, 'name': 'model', 'tags': ['seq:step:1'], 'run_id': '156c3e40-82fb-49ff-8e41-9e998061be8c', 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-sonnet-20240229', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['7b927055-bc1b-4b50-a34c-10d3cfcb3899']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='', additional_kwargs={}, response_metadata={}, id='run-156c3e40-82fb-49ff-8e41-9e998061be8c', usage_metadata={'input_tokens': 56, 'output_tokens': 1, 'total_tokens': 57, 'input_token_details': {'cache_creation': 0, 'cache_read': 0}})}, 'run_id': '156c3e40-82fb-49ff-8e41-9e998061be8c', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-sonnet-20240229', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['7b927055-bc1b-4b50-a34c-10d3cfcb3899']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='{', additional_kwargs={}, response_metadata={}, id='run-156c3e40-82fb-49ff-8e41-9e998061be8c')}, 'run_id': '156c3e40-82fb-49ff-8e41-9e998061be8c', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-sonnet-20240229', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['7b927055-bc1b-4b50-a34c-10d3cfcb3899']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='\\n \"countries', additional_kwargs={}, response_metadata={}, id='run-156c3e40-82fb-49ff-8e41-9e998061be8c')}, 'run_id': '156c3e40-82fb-49ff-8e41-9e998061be8c', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-sonnet-20240229', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['7b927055-bc1b-4b50-a34c-10d3cfcb3899']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='\": [\\n ', additional_kwargs={}, response_metadata={}, id='run-156c3e40-82fb-49ff-8e41-9e998061be8c')}, 'run_id': '156c3e40-82fb-49ff-8e41-9e998061be8c', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-sonnet-20240229', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['7b927055-bc1b-4b50-a34c-10d3cfcb3899']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='{\\n \"', additional_kwargs={}, response_metadata={}, id='run-156c3e40-82fb-49ff-8e41-9e998061be8c')}, 'run_id': '156c3e40-82fb-49ff-8e41-9e998061be8c', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-sonnet-20240229', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['7b927055-bc1b-4b50-a34c-10d3cfcb3899']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='name\": \"France', additional_kwargs={}, response_metadata={}, id='run-156c3e40-82fb-49ff-8e41-9e998061be8c')}, 'run_id': '156c3e40-82fb-49ff-8e41-9e998061be8c', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-sonnet-20240229', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['7b927055-bc1b-4b50-a34c-10d3cfcb3899']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='\",\\n \"', additional_kwargs={}, response_metadata={}, id='run-156c3e40-82fb-49ff-8e41-9e998061be8c')}, 'run_id': '156c3e40-82fb-49ff-8e41-9e998061be8c', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-sonnet-20240229', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['7b927055-bc1b-4b50-a34c-10d3cfcb3899']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='population\": 67', additional_kwargs={}, response_metadata={}, id='run-156c3e40-82fb-49ff-8e41-9e998061be8c')}, 'run_id': '156c3e40-82fb-49ff-8e41-9e998061be8c', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-sonnet-20240229', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['7b927055-bc1b-4b50-a34c-10d3cfcb3899']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='413', additional_kwargs={}, response_metadata={}, id='run-156c3e40-82fb-49ff-8e41-9e998061be8c')}, 'run_id': '156c3e40-82fb-49ff-8e41-9e998061be8c', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-sonnet-20240229', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['7b927055-bc1b-4b50-a34c-10d3cfcb3899']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='000\\n },', additional_kwargs={}, response_metadata={}, id='run-156c3e40-82fb-49ff-8e41-9e998061be8c')}, 'run_id': '156c3e40-82fb-49ff-8e41-9e998061be8c', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-sonnet-20240229', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['7b927055-bc1b-4b50-a34c-10d3cfcb3899']}\n",
"...\n"
]
}
@@ -1107,7 +1109,6 @@
"max_events = 0\n",
"async for event in chain.astream_events(\n",
" 'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of \"countries\" which contains a list of countries. Each country should have the key `name` and `population`',\n",
" version=\"v2\",\n",
" include_types=[\"chat_model\"],\n",
"):\n",
" print(event)\n",
@@ -1136,24 +1137,24 @@
{
"cell_type": "code",
"execution_count": 22,
"id": "26bac0d2-76d9-446e-b346-82790236b88d",
"id": "c237c218-5fd6-4146-ac68-020a038cf582",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'event': 'on_chain_start', 'data': {'input': 'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of \"countries\" which contains a list of countries. Each country should have the key `name` and `population`'}, 'name': 'RunnableSequence', 'tags': ['my_chain'], 'run_id': 'fd68dd64-7a4d-4bdb-a0c2-ee592db0d024', 'metadata': {}}\n",
"{'event': 'on_chat_model_start', 'data': {'input': {'messages': [[HumanMessage(content='output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of \"countries\" which contains a list of countries. Each country should have the key `name` and `population`')]]}}, 'name': 'ChatAnthropic', 'tags': ['seq:step:1', 'my_chain'], 'run_id': 'efd3c8af-4be5-4f6c-9327-e3f9865dd1cd', 'metadata': {}}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='{', id='run-efd3c8af-4be5-4f6c-9327-e3f9865dd1cd')}, 'run_id': 'efd3c8af-4be5-4f6c-9327-e3f9865dd1cd', 'name': 'ChatAnthropic', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}}\n",
"{'event': 'on_parser_start', 'data': {}, 'name': 'JsonOutputParser', 'tags': ['seq:step:2', 'my_chain'], 'run_id': 'afde30b9-beac-4b36-b4c7-dbbe423ddcdb', 'metadata': {}}\n",
"{'event': 'on_parser_stream', 'data': {'chunk': {}}, 'run_id': 'afde30b9-beac-4b36-b4c7-dbbe423ddcdb', 'name': 'JsonOutputParser', 'tags': ['seq:step:2', 'my_chain'], 'metadata': {}}\n",
"{'event': 'on_chain_stream', 'data': {'chunk': {}}, 'run_id': 'fd68dd64-7a4d-4bdb-a0c2-ee592db0d024', 'name': 'RunnableSequence', 'tags': ['my_chain'], 'metadata': {}}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='\\n ', id='run-efd3c8af-4be5-4f6c-9327-e3f9865dd1cd')}, 'run_id': 'efd3c8af-4be5-4f6c-9327-e3f9865dd1cd', 'name': 'ChatAnthropic', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='\"', id='run-efd3c8af-4be5-4f6c-9327-e3f9865dd1cd')}, 'run_id': 'efd3c8af-4be5-4f6c-9327-e3f9865dd1cd', 'name': 'ChatAnthropic', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='countries', id='run-efd3c8af-4be5-4f6c-9327-e3f9865dd1cd')}, 'run_id': 'efd3c8af-4be5-4f6c-9327-e3f9865dd1cd', 'name': 'ChatAnthropic', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='\":', id='run-efd3c8af-4be5-4f6c-9327-e3f9865dd1cd')}, 'run_id': 'efd3c8af-4be5-4f6c-9327-e3f9865dd1cd', 'name': 'ChatAnthropic', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' [', id='run-efd3c8af-4be5-4f6c-9327-e3f9865dd1cd')}, 'run_id': 'efd3c8af-4be5-4f6c-9327-e3f9865dd1cd', 'name': 'ChatAnthropic', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}}\n",
"{'event': 'on_chain_start', 'data': {'input': 'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of \"countries\" which contains a list of countries. Each country should have the key `name` and `population`'}, 'name': 'RunnableSequence', 'tags': ['my_chain'], 'run_id': '58d1302e-36ce-4df7-a3cb-47cb73d57e44', 'metadata': {}, 'parent_ids': []}\n",
"{'event': 'on_chat_model_start', 'data': {'input': {'messages': [[HumanMessage(content='output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of \"countries\" which contains a list of countries. Each country should have the key `name` and `population`', additional_kwargs={}, response_metadata={})]]}}, 'name': 'ChatAnthropic', 'tags': ['seq:step:1', 'my_chain'], 'run_id': '8222e8a1-d978-4f30-87fc-b2dba838774b', 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-sonnet-20240229', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['58d1302e-36ce-4df7-a3cb-47cb73d57e44']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='', additional_kwargs={}, response_metadata={}, id='run-8222e8a1-d978-4f30-87fc-b2dba838774b', usage_metadata={'input_tokens': 56, 'output_tokens': 1, 'total_tokens': 57, 'input_token_details': {'cache_creation': 0, 'cache_read': 0}})}, 'run_id': '8222e8a1-d978-4f30-87fc-b2dba838774b', 'name': 'ChatAnthropic', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-sonnet-20240229', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['58d1302e-36ce-4df7-a3cb-47cb73d57e44']}\n",
"{'event': 'on_parser_start', 'data': {}, 'name': 'JsonOutputParser', 'tags': ['seq:step:2', 'my_chain'], 'run_id': '75604c84-e1e6-494a-8b2a-950f45d932e8', 'metadata': {}, 'parent_ids': ['58d1302e-36ce-4df7-a3cb-47cb73d57e44']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='{', additional_kwargs={}, response_metadata={}, id='run-8222e8a1-d978-4f30-87fc-b2dba838774b')}, 'run_id': '8222e8a1-d978-4f30-87fc-b2dba838774b', 'name': 'ChatAnthropic', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-sonnet-20240229', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['58d1302e-36ce-4df7-a3cb-47cb73d57e44']}\n",
"{'event': 'on_parser_stream', 'run_id': '75604c84-e1e6-494a-8b2a-950f45d932e8', 'name': 'JsonOutputParser', 'tags': ['seq:step:2', 'my_chain'], 'metadata': {}, 'data': {'chunk': {}}, 'parent_ids': ['58d1302e-36ce-4df7-a3cb-47cb73d57e44']}\n",
"{'event': 'on_chain_stream', 'run_id': '58d1302e-36ce-4df7-a3cb-47cb73d57e44', 'name': 'RunnableSequence', 'tags': ['my_chain'], 'metadata': {}, 'data': {'chunk': {}}, 'parent_ids': []}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='\\n \"countries', additional_kwargs={}, response_metadata={}, id='run-8222e8a1-d978-4f30-87fc-b2dba838774b')}, 'run_id': '8222e8a1-d978-4f30-87fc-b2dba838774b', 'name': 'ChatAnthropic', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-sonnet-20240229', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['58d1302e-36ce-4df7-a3cb-47cb73d57e44']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='\": [\\n ', additional_kwargs={}, response_metadata={}, id='run-8222e8a1-d978-4f30-87fc-b2dba838774b')}, 'run_id': '8222e8a1-d978-4f30-87fc-b2dba838774b', 'name': 'ChatAnthropic', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-sonnet-20240229', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['58d1302e-36ce-4df7-a3cb-47cb73d57e44']}\n",
"{'event': 'on_parser_stream', 'run_id': '75604c84-e1e6-494a-8b2a-950f45d932e8', 'name': 'JsonOutputParser', 'tags': ['seq:step:2', 'my_chain'], 'metadata': {}, 'data': {'chunk': {'countries': []}}, 'parent_ids': ['58d1302e-36ce-4df7-a3cb-47cb73d57e44']}\n",
"{'event': 'on_chain_stream', 'run_id': '58d1302e-36ce-4df7-a3cb-47cb73d57e44', 'name': 'RunnableSequence', 'tags': ['my_chain'], 'metadata': {}, 'data': {'chunk': {'countries': []}}, 'parent_ids': []}\n",
"...\n"
]
}
@@ -1164,7 +1165,6 @@
"max_events = 0\n",
"async for event in chain.astream_events(\n",
" 'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of \"countries\" which contains a list of countries. Each country should have the key `name` and `population`',\n",
" version=\"v2\",\n",
" include_tags=[\"my_chain\"],\n",
"):\n",
" print(event)\n",
@@ -1263,40 +1263,40 @@
{
"cell_type": "code",
"execution_count": 25,
"id": "b08215cd-bffa-4e76-aaf3-c52ee34f152c",
"id": "2c83701e-b801-429f-b2ac-47ed44d2d11a",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Chat model chunk: ''\n",
"Chat model chunk: '{'\n",
"Parser chunk: {}\n",
"Chat model chunk: '\\n '\n",
"Chat model chunk: '\"'\n",
"Chat model chunk: 'countries'\n",
"Chat model chunk: '\":'\n",
"Chat model chunk: ' ['\n",
"Chat model chunk: '\\n \"countries'\n",
"Chat model chunk: '\": [\\n '\n",
"Parser chunk: {'countries': []}\n",
"Chat model chunk: '\\n '\n",
"Chat model chunk: '{'\n",
"Chat model chunk: '{\\n \"'\n",
"Parser chunk: {'countries': [{}]}\n",
"Chat model chunk: '\\n '\n",
"Chat model chunk: '\"'\n",
"Chat model chunk: 'name'\n",
"Chat model chunk: '\":'\n",
"Chat model chunk: ' \"'\n",
"Parser chunk: {'countries': [{'name': ''}]}\n",
"Chat model chunk: 'France'\n",
"Chat model chunk: 'name\": \"France'\n",
"Parser chunk: {'countries': [{'name': 'France'}]}\n",
"Chat model chunk: '\",'\n",
"Chat model chunk: '\\n '\n",
"Chat model chunk: '\"'\n",
"Chat model chunk: 'population'\n",
"Chat model chunk: '\":'\n",
"Chat model chunk: ' '\n",
"Chat model chunk: '67'\n",
"Chat model chunk: '\",\\n \"'\n",
"Chat model chunk: 'population\": 67'\n",
"Parser chunk: {'countries': [{'name': 'France', 'population': 67}]}\n",
"Chat model chunk: '413'\n",
"Parser chunk: {'countries': [{'name': 'France', 'population': 67413}]}\n",
"Chat model chunk: '000\\n },'\n",
"Parser chunk: {'countries': [{'name': 'France', 'population': 67413000}]}\n",
"Chat model chunk: '\\n {'\n",
"Parser chunk: {'countries': [{'name': 'France', 'population': 67413000}, {}]}\n",
"Chat model chunk: '\\n \"name\":'\n",
"Chat model chunk: ' \"Spain\",'\n",
"Parser chunk: {'countries': [{'name': 'France', 'population': 67413000}, {'name': 'Spain'}]}\n",
"Chat model chunk: '\\n \"population\":'\n",
"Chat model chunk: ' 47'\n",
"Parser chunk: {'countries': [{'name': 'France', 'population': 67413000}, {'name': 'Spain', 'population': 47}]}\n",
"Chat model chunk: '351'\n",
"Parser chunk: {'countries': [{'name': 'France', 'population': 67413000}, {'name': 'Spain', 'population': 47351}]}\n",
"...\n"
]
}
@@ -1308,7 +1308,6 @@
" \"output a list of the countries france, spain and japan and their populations in JSON format. \"\n",
" 'Use a dict with an outer key of \"countries\" which contains a list of countries. '\n",
" \"Each country should have the key `name` and `population`\",\n",
" version=\"v2\",\n",
"):\n",
" kind = event[\"event\"]\n",
" if kind == \"on_chat_model_stream\":\n",
@@ -1376,7 +1375,7 @@
" return reverse_word.invoke(word)\n",
"\n",
"\n",
"async for event in bad_tool.astream_events(\"hello\", version=\"v2\"):\n",
"async for event in bad_tool.astream_events(\"hello\"):\n",
" print(event)"
]
},
@@ -1412,7 +1411,7 @@
" return reverse_word.invoke(word, {\"callbacks\": callbacks})\n",
"\n",
"\n",
"async for event in correct_tool.astream_events(\"hello\", version=\"v2\"):\n",
"async for event in correct_tool.astream_events(\"hello\"):\n",
" print(event)"
]
},
@@ -1454,7 +1453,7 @@
"\n",
"await reverse_and_double.ainvoke(\"1234\")\n",
"\n",
"async for event in reverse_and_double.astream_events(\"1234\", version=\"v2\"):\n",
"async for event in reverse_and_double.astream_events(\"1234\"):\n",
" print(event)"
]
},
@@ -1495,7 +1494,7 @@
"\n",
"await reverse_and_double.ainvoke(\"1234\")\n",
"\n",
"async for event in reverse_and_double.astream_events(\"1234\", version=\"v2\"):\n",
"async for event in reverse_and_double.astream_events(\"1234\"):\n",
" print(event)"
]
},
@@ -1528,7 +1527,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -87,13 +87,6 @@
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"Failed to batch ingest runs: LangSmithRateLimitError('Rate limit exceeded for https://api.smith.langchain.com/runs/batch. HTTPError(\\'429 Client Error: Too Many Requests for url: https://api.smith.langchain.com/runs/batch\\', \\'{\"detail\":\"Monthly unique traces usage limit exceeded\"}\\')')\n"
]
}
],
"source": [

View File

@@ -200,7 +200,12 @@
"\n",
"<ChatModelTabs\n",
" customVarName=\"llm\"\n",
" fireworksParams={`model=\"accounts/fireworks/models/firefunction-v1\", temperature=0`}\n",
" overrideParams={{\n",
" fireworks: {\n",
" model: \"accounts/fireworks/models/firefunction-v1\",\n",
" kwargs: \"temperature=0\",\n",
" }\n",
" }}\n",
"/>\n"
]
},

View File

@@ -33,7 +33,7 @@
"\n",
"<ChatModelTabs\n",
" customVarName=\"llm\"\n",
" fireworksParams={`model=\"accounts/fireworks/models/firefunction-v1\", temperature=0`}\n",
" overrideParams={{fireworks: {model: \"accounts/fireworks/models/firefunction-v1\", kwargs: \"temperature=0\"}}}\n",
"/>\n"
]
},

View File

@@ -46,7 +46,7 @@
"\n",
"<ChatModelTabs\n",
" customVarName=\"llm\"\n",
" fireworksParams={`model=\"accounts/fireworks/models/firefunction-v1\", temperature=0`}\n",
" overrideParams={{fireworks: {model: \"accounts/fireworks/models/firefunction-v1\", kwargs: \"temperature=0\"}}}\n",
"/>\n"
]
},

View File

@@ -131,13 +131,11 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"stream = special_summarization_tool.astream_events(\n",
" {\"long_text\": LONG_TEXT}, version=\"v2\"\n",
")\n",
"stream = special_summarization_tool.astream_events({\"long_text\": LONG_TEXT})\n",
"\n",
"async for event in stream:\n",
" if event[\"event\"] == \"on_chat_model_end\":\n",
@@ -156,7 +154,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
@@ -190,21 +188,19 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'event': 'on_chat_model_end', 'data': {'output': AIMessage(content='Bee defies physics; Barry chooses outfit for graduation day.', response_metadata={'stop_reason': 'end_turn', 'stop_sequence': None}, id='run-d23abc80-0dce-4f74-9d7b-fb98ca4f2a9e', usage_metadata={'input_tokens': 182, 'output_tokens': 16, 'total_tokens': 198}), 'input': {'messages': [[HumanMessage(content=\"You are an expert writer. Summarize the following text in 10 words or less:\\n\\n\\nNARRATOR:\\n(Black screen with text; The sound of buzzing bees can be heard)\\nAccording to all known laws of aviation, there is no way a bee should be able to fly. Its wings are too small to get its fat little body off the ground. The bee, of course, flies anyway because bees don't care what humans think is impossible.\\nBARRY BENSON:\\n(Barry is picking out a shirt)\\nYellow, black. Yellow, black. Yellow, black. Yellow, black. Ooh, black and yellow! Let's shake it up a little.\\nJANET BENSON:\\nBarry! Breakfast is ready!\\nBARRY:\\nComing! Hang on a second.\\n\")]]}}, 'run_id': 'd23abc80-0dce-4f74-9d7b-fb98ca4f2a9e', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['f25c41fe-8972-4893-bc40-cecf3922c1fa']}\n"
"{'event': 'on_chat_model_end', 'data': {'output': AIMessage(content='Bee defies physics; Barry chooses outfit for graduation day.', additional_kwargs={}, response_metadata={'stop_reason': 'end_turn', 'stop_sequence': None}, id='run-337ac14e-8da8-4c6d-a69f-1573f93b651e', usage_metadata={'input_tokens': 182, 'output_tokens': 19, 'total_tokens': 201, 'input_token_details': {'cache_creation': 0, 'cache_read': 0}}), 'input': {'messages': [[HumanMessage(content=\"You are an expert writer. Summarize the following text in 10 words or less:\\n\\n\\nNARRATOR:\\n(Black screen with text; The sound of buzzing bees can be heard)\\nAccording to all known laws of aviation, there is no way a bee should be able to fly. Its wings are too small to get its fat little body off the ground. The bee, of course, flies anyway because bees don't care what humans think is impossible.\\nBARRY BENSON:\\n(Barry is picking out a shirt)\\nYellow, black. Yellow, black. Yellow, black. Yellow, black. Ooh, black and yellow! Let's shake it up a little.\\nJANET BENSON:\\nBarry! Breakfast is ready!\\nBARRY:\\nComing! Hang on a second.\\n\", additional_kwargs={}, response_metadata={})]]}}, 'run_id': '337ac14e-8da8-4c6d-a69f-1573f93b651e', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['225beaa6-af73-4c91-b2d3-1afbbb88d53e']}\n"
]
}
],
"source": [
"stream = special_summarization_tool_with_config.astream_events(\n",
" {\"long_text\": LONG_TEXT}, version=\"v2\"\n",
")\n",
"stream = special_summarization_tool_with_config.astream_events({\"long_text\": LONG_TEXT})\n",
"\n",
"async for event in stream:\n",
" if event[\"event\"] == \"on_chat_model_end\":\n",
@@ -222,33 +218,24 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='', id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42', usage_metadata={'input_tokens': 182, 'output_tokens': 0, 'total_tokens': 182})}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='Bee', id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42')}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' def', id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42')}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='ies physics', id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42')}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=';', id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42')}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' Barry', id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42')}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' cho', id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42')}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='oses outfit', id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42')}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' for', id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42')}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' graduation', id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42')}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' day', id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42')}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='.', id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42')}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='', response_metadata={'stop_reason': 'end_turn', 'stop_sequence': None}, id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42', usage_metadata={'input_tokens': 0, 'output_tokens': 16, 'total_tokens': 16})}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n"
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='', additional_kwargs={}, response_metadata={}, id='run-f5e049f7-4e98-4236-87ab-8cd1ce85a2d5', usage_metadata={'input_tokens': 182, 'output_tokens': 2, 'total_tokens': 184, 'input_token_details': {'cache_creation': 0, 'cache_read': 0}})}, 'run_id': 'f5e049f7-4e98-4236-87ab-8cd1ce85a2d5', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['51858043-b301-4b76-8abb-56218e405283']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='Bee', additional_kwargs={}, response_metadata={}, id='run-f5e049f7-4e98-4236-87ab-8cd1ce85a2d5')}, 'run_id': 'f5e049f7-4e98-4236-87ab-8cd1ce85a2d5', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['51858043-b301-4b76-8abb-56218e405283']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' defies physics;', additional_kwargs={}, response_metadata={}, id='run-f5e049f7-4e98-4236-87ab-8cd1ce85a2d5')}, 'run_id': 'f5e049f7-4e98-4236-87ab-8cd1ce85a2d5', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['51858043-b301-4b76-8abb-56218e405283']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' Barry chooses outfit for', additional_kwargs={}, response_metadata={}, id='run-f5e049f7-4e98-4236-87ab-8cd1ce85a2d5')}, 'run_id': 'f5e049f7-4e98-4236-87ab-8cd1ce85a2d5', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['51858043-b301-4b76-8abb-56218e405283']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' graduation day.', additional_kwargs={}, response_metadata={}, id='run-f5e049f7-4e98-4236-87ab-8cd1ce85a2d5')}, 'run_id': 'f5e049f7-4e98-4236-87ab-8cd1ce85a2d5', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['51858043-b301-4b76-8abb-56218e405283']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='', additional_kwargs={}, response_metadata={'stop_reason': 'end_turn', 'stop_sequence': None}, id='run-f5e049f7-4e98-4236-87ab-8cd1ce85a2d5', usage_metadata={'input_tokens': 0, 'output_tokens': 17, 'total_tokens': 17, 'input_token_details': {}})}, 'run_id': 'f5e049f7-4e98-4236-87ab-8cd1ce85a2d5', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['51858043-b301-4b76-8abb-56218e405283']}\n"
]
}
],
"source": [
"stream = special_summarization_tool_with_config.astream_events(\n",
" {\"long_text\": LONG_TEXT}, version=\"v2\"\n",
")\n",
"stream = special_summarization_tool_with_config.astream_events({\"long_text\": LONG_TEXT})\n",
"\n",
"async for event in stream:\n",
" if event[\"event\"] == \"on_chat_model_stream\":\n",
@@ -290,7 +277,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -91,7 +91,7 @@
"\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
"\n",
"<ChatModelTabs openaiParams={`model=\"gpt-4\"`} />\n",
"<ChatModelTabs overrideParams={{openai: {model: \"gpt-4\"}}} />\n",
"\n",
"To illustrate the idea, we'll use `phi3` via Ollama, which does **NOT** have native support for tool calling. If you'd like to use `Ollama` as well follow [these instructions](/docs/integrations/chat/ollama/)."
]
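A minimal sketch of running `phi3` locally, assuming the Ollama server is running and the model has been pulled (`ollama pull phi3`):

```python
from langchain_ollama import ChatOllama

# Assumes a local Ollama server with the phi3 model available.
llm = ChatOllama(model="phi3", temperature=0)
print(llm.invoke("Hello, world!").content)
```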

View File

@@ -30,7 +30,8 @@
"1. The resulting chat history should be **valid**. Usually this means that the following properties should be satisfied:\n",
" - The chat history **starts** with either (1) a `HumanMessage` or (2) a [SystemMessage](/docs/concepts/messages/#systemmessage) followed by a `HumanMessage`.\n",
" - The chat history **ends** with either a `HumanMessage` or a `ToolMessage`.\n",
" - A `ToolMessage` can only appear after an `AIMessage` that involved a tool call. \n",
" - A `ToolMessage` can only appear after an `AIMessage` that involved a tool call.\n",
"\n",
" This can be achieved by setting `start_on=\"human\"` and `ends_on=(\"human\", \"tool\")`.\n",
"3. It includes recent messages and drops old messages in the chat history.\n",
" This can be achieved by setting `strategy=\"last\"`.\n",

View File

@@ -0,0 +1,206 @@
{
"cells": [
{
"cell_type": "raw",
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Abso\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# ChatAbso\n",
"\n",
"This will help you getting started with ChatAbso [chat models](https://python.langchain.com/docs/concepts/chat_models/). For detailed documentation of all ChatAbso features and configurations head to the [API reference](https://python.langchain.com/api_reference/en/latest/chat_models/langchain_abso.chat_models.ChatAbso.html).\n",
"\n",
"- You can find the full documentation for the Abso router [here] (https://abso.ai)\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/abso) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatAbso](https://python.langchain.com/api_reference/en/latest/chat_models/langchain_abso.chat_models.ChatAbso.html) | [langchain-abso](https://python.langchain.com/api_reference/en/latest/abso_api_reference.html) | ❌ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-abso?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-abso?style=flat-square&label=%20) |\n",
"\n",
"## Setup\n",
"To access ChatAbso models you'll need to create an OpenAI account, get an API key, and install the `langchain-abso` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"- TODO: Update with relevant info.\n",
"\n",
"Head to (TODO: link) to sign up to ChatAbso and generate an API key. Once you've done this set the ABSO_API_KEY environment variable:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if not os.getenv(\"OPENAI_API_KEY\"):\n",
" os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"Enter your OpenAI API key: \")"
]
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain ChatAbso integration lives in the `langchain-abso` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-abso"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
"source": [
"from langchain_abso import ChatAbso\n",
"\n",
"llm = ChatAbso(fast_model=\"gpt-4o\", slow_model=\"o3-mini\")"
]
},
{
"cell_type": "markdown",
"id": "2b4f3e15",
"metadata": {},
"source": [
"## Invocation\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"metadata": {},
"outputs": [],
"source": [
"print(ai_msg.content)"
]
},
{
"cell_type": "markdown",
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatAbso features and configurations head to the API reference: https://python.langchain.com/api_reference/en/latest/chat_models/langchain_abso.chat_models.ChatAbso.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -315,6 +315,59 @@
"ai_msg.tool_calls"
]
},
{
"cell_type": "markdown",
"id": "6e36d25c-f358-49e5-aefa-b99fbd3fec6b",
"metadata": {},
"source": [
"## Extended thinking\n",
"\n",
"Claude 3.7 Sonnet supports an [extended thinking](https://docs.anthropic.com/en/docs/build-with-claude/extended-thinking) feature, which will output the step-by-step reasoning process that led to its final answer.\n",
"\n",
"To use it, specify the `thinking` parameter when initializing `ChatAnthropic`. It can also be passed in as a kwarg during invocation.\n",
"\n",
"You will need to specify a token budget to use this feature. See usage example below:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "a34cf93b-8522-43a6-a3f3-8a189ddf54a7",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[\n",
" {\n",
" \"signature\": \"ErUBCkYIARgCIkCx7bIPj35jGPHpoVOB2y5hvPF8MN4lVK75CYGftmVNlI4axz2+bBbSexofWsN1O/prwNv8yPXnIXQmwT6zrJsKEgwJzvks0yVRZtaGBScaDOm9xcpOxbuhku1zViIw9WDgil/KZL8DsqWrhVpC6TzM0RQNCcsHcmgmyxbgG9g8PR0eJGLxCcGoEw8zMQu1Kh1hQ1/03hZ2JCOgigpByR9aNPTwwpl64fQUe6WwIw==\",\n",
" \"thinking\": \"To find the cube root of 50.653, I need to find the value of $x$ such that $x^3 = 50.653$.\\n\\nI can try to estimate this first. \\n$3^3 = 27$\\n$4^3 = 64$\\n\\nSo the cube root of 50.653 will be somewhere between 3 and 4, but closer to 4.\\n\\nLet me try to compute this more precisely. I can use the cube root function:\\n\\ncube root of 50.653 = 50.653^(1/3)\\n\\nLet me calculate this:\\n50.653^(1/3) \\u2248 3.6998\\n\\nLet me verify:\\n3.6998^3 \\u2248 50.6533\\n\\nThat's very close to 50.653, so I'm confident that the cube root of 50.653 is approximately 3.6998.\\n\\nActually, let me compute this more precisely:\\n50.653^(1/3) \\u2248 3.69981\\n\\nLet me verify once more:\\n3.69981^3 \\u2248 50.652998\\n\\nThat's extremely close to 50.653, so I'll say that the cube root of 50.653 is approximately 3.69981.\",\n",
" \"type\": \"thinking\"\n",
" },\n",
" {\n",
" \"text\": \"The cube root of 50.653 is approximately 3.6998.\\n\\nTo verify: 3.6998\\u00b3 = 50.6530, which is very close to our original number.\",\n",
" \"type\": \"text\"\n",
" }\n",
"]\n"
]
}
],
"source": [
"import json\n",
"\n",
"from langchain_anthropic import ChatAnthropic\n",
"\n",
"llm = ChatAnthropic(\n",
" model=\"claude-3-7-sonnet-latest\",\n",
" max_tokens=5000,\n",
" thinking={\"type\": \"enabled\", \"budget_tokens\": 2000},\n",
")\n",
"\n",
"response = llm.invoke(\"What is the cube root of 50.653?\")\n",
"print(json.dumps(response.content, indent=2))"
]
},
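The text above notes that `thinking` can also be passed as a kwarg at invocation time rather than at init. A minimal sketch of that variant, assuming the kwarg is forwarded to the underlying Messages API:

```python
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-3-7-sonnet-latest", max_tokens=5000)

# Per-call extended thinking (assumption: invoke kwargs pass through to the API).
response = llm.invoke(
    "What is the cube root of 50.653?",
    thinking={"type": "enabled", "budget_tokens": 2000},
)
print(response.content)
```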
{
"cell_type": "markdown",
"id": "301d372f-4dec-43e6-b58c-eee25633e1a6",

View File

@@ -0,0 +1,275 @@
{
"cells": [
{
"cell_type": "raw",
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
"sidebar_label: AzureAIChatCompletionsModel\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# AzureAIChatCompletionsModel\n",
"\n",
"This will help you getting started with AzureAIChatCompletionsModel [chat models](/docs/concepts/chat_models). For detailed documentation of all AzureAIChatCompletionsModel features and configurations head to the [API reference](https://python.langchain.com/api_reference/azure_ai/chat_models/langchain_azure_ai.chat_models.AzureAIChatCompletionsModel.html)\n",
"\n",
"The AzureAIChatCompletionsModel class uses the Azure AI Foundry SDK. AI Foundry has several chat models including AzureOpenAI, Cohere, Llama, Phi-3/4, and DeepSeek-R1 to name a few. You can find information about their latest models and their costs, context windows, and supported input types in the [Azure docs](https://learn.microsoft.com/azure/ai-studio/how-to/model-catalog-overview).\n",
"\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://v03.api.js.langchain.com/classes/_langchain_openai.AzureChatOpenAI.html) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [AzureAIChatCompletionsModel](https://python.langchain.com/api_reference/azure_ai/chat_models/langchain_azure_ai.chat_models.AzureAIChatCompletionsModel.html) | [langchain-azure-ai](https://python.langchain.com/api_reference/langchain_azure_ai/index.html) | ❌ | ✅ | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-azure-ai?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-azure-ai?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ✅| \n",
"\n",
"## Setup\n",
"\n",
"To access AzureAIChatCompletionsModel models you'll need to create an [Azure account](https://azure.microsoft.com/pricing/purchase-options/azure-account), get an API key, and install the `langchain-azure-ai` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"\n",
"Head to the [Azure docs](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/develop/sdk-overview?tabs=sync&pivots=programming-language-python) to see how to create your deployment and generate an API key. Once your model is deployed you click the 'get endpoint' button in AI Foundry. This will show you your endpoint and api key. Once you've done this set the AZURE_INFERENCE_CREDENTIAL and AZURE_INFERENCE_ENDPOINT environment variables:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if not os.getenv(\"AZURE_INFERENCE_CREDENTIAL\"):\n",
" os.environ[\"AZURE_INFERENCE_CREDENTIAL\"] = getpass.getpass(\n",
" \"Enter your AzureAIChatCompletionsModel API key: \"\n",
" )\n",
"\n",
"if not os.getenv(\"AZURE_INFERENCE_ENDPOINT\"):\n",
" os.environ[\"AZURE_INFERENCE_ENDPOINT\"] = getpass.getpass(\n",
" \"Enter your model endpoint: \"\n",
" )"
]
},
{
"cell_type": "markdown",
"id": "72ee0c4b-9764-423a-9dbf-95129e185210",
"metadata": {},
"source": [
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
]
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain AzureAIChatCompletionsModel integration lives in the `langchain-azure-ai` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-azure-ai"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
"source": [
"from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel\n",
"\n",
"llm = AzureAIChatCompletionsModel(\n",
" model_name=\"gpt-4\",\n",
" temperature=0,\n",
" max_tokens=None,\n",
" timeout=None,\n",
" max_retries=2,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "2b4f3e15",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"J'adore programmer.\", additional_kwargs={}, response_metadata={'model': 'gpt-4o-2024-05-13', 'token_usage': {'input_tokens': 31, 'output_tokens': 4, 'total_tokens': 35}, 'finish_reason': 'stop'}, id='run-c082dffd-b1de-4b3f-943f-863836663ddb-0', usage_metadata={'input_tokens': 31, 'output_tokens': 4, 'total_tokens': 35})"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"J'adore programmer.\n"
]
}
],
"source": [
"print(ai_msg.content)"
]
},
{
"cell_type": "markdown",
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Ich liebe Programmieren.', additional_kwargs={}, response_metadata={'model': 'gpt-4o-2024-05-13', 'token_usage': {'input_tokens': 26, 'output_tokens': 5, 'total_tokens': 31}, 'finish_reason': 'stop'}, id='run-01ba6587-6ff4-4554-8039-13204a7d95db-0', usage_metadata={'input_tokens': 26, 'output_tokens': 5, 'total_tokens': 31})"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all AzureAIChatCompletionsModel features and configurations head to the API reference: https://python.langchain.com/api_reference/azure_ai/chat_models/langchain_azure_ai.chat_models.AzureAIChatCompletionsModel.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "langchain-3-9",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.19"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -0,0 +1,253 @@
{
"cells": [
{
"cell_type": "raw",
"id": "afaf8039",
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"sidebar_label: ContextualAI\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# ChatContextual\n",
"\n",
"This will help you getting started with Contextual AI's Grounded Language Model [chat models](/docs/concepts/chat_models/).\n",
"\n",
"To learn more about Contextual AI, please visit our [documentation](https://docs.contextual.ai/).\n",
"\n",
"This integration requires the `contextual-client` Python SDK. Learn more about it [here](https://github.com/ContextualAI/contextual-client-python).\n",
"\n",
"## Overview\n",
"\n",
"This integration invokes Contextual AI's Grounded Language Model.\n",
"\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatContextual](https://github.com/ContextualAI//langchain-contextual) | [langchain-contextual](https://pypi.org/project/langchain-contextual/) | ❌ | beta | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-contextual?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-contextual?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | \n",
"\n",
"## Setup\n",
"\n",
"To access Contextual models you'll need to create a Contextual AI account, get an API key, and install the `langchain-contextual` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Head to [app.contextual.ai](https://app.contextual.ai) to sign up to Contextual and generate an API key. Once you've done this set the CONTEXTUAL_AI_API_KEY environment variable:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if not os.getenv(\"CONTEXTUAL_AI_API_KEY\"):\n",
" os.environ[\"CONTEXTUAL_AI_API_KEY\"] = getpass.getpass(\n",
" \"Enter your Contextual API key: \"\n",
" )"
]
},
{
"cell_type": "markdown",
"id": "72ee0c4b-9764-423a-9dbf-95129e185210",
"metadata": {},
"source": [
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
]
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain Contextual integration lives in the `langchain-contextual` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-contextual"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions.\n",
"\n",
"The chat client can be instantiated with these following additional settings:\n",
"\n",
"| Parameter | Type | Description | Default |\n",
"|-----------|------|-------------|---------|\n",
"| temperature | Optional[float] | The sampling temperature, which affects the randomness in the response. Note that higher temperature values can reduce groundedness. | 0 |\n",
"| top_p | Optional[float] | A parameter for nucleus sampling, an alternative to temperature which also affects the randomness of the response. Note that higher top_p values can reduce groundedness. | 0.9 |\n",
"| max_new_tokens | Optional[int] | The maximum number of tokens that the model can generate in the response. Minimum is 1 and maximum is 2048. | 1024 |"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
"source": [
"from langchain_contextual import ChatContextual\n",
"\n",
"llm = ChatContextual(\n",
" model=\"v1\", # defaults to `v1`\n",
" api_key=\"\",\n",
" temperature=0, # defaults to 0\n",
" top_p=0.9, # defaults to 0.9\n",
" max_new_tokens=1024, # defaults to 1024\n",
")"
]
},
{
"cell_type": "markdown",
"id": "2b4f3e15",
"metadata": {},
"source": [
"## Invocation\n",
"\n",
"The Contextual Grounded Language Model accepts additional `kwargs` when calling the `ChatContextual.invoke` method.\n",
"\n",
"These additional inputs are:\n",
"\n",
"| Parameter | Type | Description |\n",
"|-----------|------|-------------|\n",
"| knowledge | list[str] | Required: A list of strings of knowledge sources the grounded language model can use when generating a response. |\n",
"| system_prompt | Optional[str] | Optional: Instructions the model should follow when generating responses. Note that we do not guarantee that the model follows these instructions exactly. |\n",
"| avoid_commentary | Optional[bool] | Optional (Defaults to `False`): Flag to indicate whether the model should avoid providing additional commentary in responses. Commentary is conversational in nature and does not contain verifiable claims; therefore, commentary is not strictly grounded in available context. However, commentary may provide useful context which improves the helpfulness of responses. |"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# include a system prompt (optional)\n",
"system_prompt = \"You are a helpful assistant that uses all of the provided knowledge to answer the user's query to the best of your ability.\"\n",
"\n",
"# provide your own knowledge from your knowledge-base here in an array of string\n",
"knowledge = [\n",
" \"There are 2 types of dogs in the world: good dogs and best dogs.\",\n",
" \"There are 2 types of cats in the world: good cats and best cats.\",\n",
"]\n",
"\n",
"# create your message\n",
"messages = [\n",
" (\"human\", \"What type of cats are there in the world and what are the types?\"),\n",
"]\n",
"\n",
"# invoke the GLM by providing the knowledge strings, optional system prompt\n",
"# if you want to turn off the GLM's commentary, pass True to the `avoid_commentary` argument\n",
"ai_msg = llm.invoke(\n",
" messages, knowledge=knowledge, system_prompt=system_prompt, avoid_commentary=True\n",
")\n",
"\n",
"print(ai_msg.content)"
]
},
{
"cell_type": "markdown",
"id": "2c35a9e0",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can chain the Contextual Model with output parsers."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "545e1e16",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"\n",
"chain = llm | StrOutputParser\n",
"\n",
"chain.invoke(\n",
" messages, knowledge=knowledge, systemp_prompt=system_prompt, avoid_commentary=True\n",
")"
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatContextual features and configurations head to the Github page: https://github.com/ContextualAI//langchain-contextual"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.7"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -38,6 +38,12 @@
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | \n",
"\n",
":::note\n",
"\n",
"DeepSeek-R1, specified via `model=\"deepseek-reasoner\"`, does not support tool calling or structured output. Those features [are supported](https://api-docs.deepseek.com/guides/function_calling) by DeepSeek-V3 (specified via `model=\"deepseek-chat\"`).\n",
"\n",
":::\n",
"\n",
"## Setup\n",
"\n",
"To access DeepSeek models you'll need to create a/an DeepSeek account, get an API key, and install the `langchain-deepseek` integration package.\n",

View File

@@ -85,21 +85,10 @@
},
{
"cell_type": "code",
"execution_count": 3,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"execution_count": null,
"id": "3f3f510e-2afe-4e76-be41-c5a9665aea63",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m24.0\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m24.1.2\u001b[0m\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n",
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"outputs": [],
"source": [
"%pip install -qU langchain-groq"
]
@@ -116,7 +105,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 1,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
@@ -124,7 +113,7 @@
"from langchain_groq import ChatGroq\n",
"\n",
"llm = ChatGroq(\n",
" model=\"mixtral-8x7b-32768\",\n",
" model=\"llama-3.1-8b-instant\",\n",
" temperature=0,\n",
" max_tokens=None,\n",
" timeout=None,\n",
@@ -143,7 +132,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 2,
"id": "62e0dbc3",
"metadata": {
"tags": []
@@ -152,10 +141,10 @@
{
"data": {
"text/plain": [
"AIMessage(content='I enjoy programming. (The French translation is: \"J\\'aime programmer.\")\\n\\nNote: I chose to translate \"I love programming\" as \"J\\'aime programmer\" instead of \"Je suis amoureux de programmer\" because the latter has a romantic connotation that is not present in the original English sentence.', response_metadata={'token_usage': {'completion_tokens': 73, 'prompt_tokens': 31, 'total_tokens': 104, 'completion_time': 0.1140625, 'prompt_time': 0.003352463, 'queue_time': None, 'total_time': 0.117414963}, 'model_name': 'mixtral-8x7b-32768', 'system_fingerprint': 'fp_c5f20b5bb1', 'finish_reason': 'stop', 'logprobs': None}, id='run-64433c19-eadf-42fc-801e-3071e3c40160-0', usage_metadata={'input_tokens': 31, 'output_tokens': 73, 'total_tokens': 104})"
"AIMessage(content='The translation of \"I love programming\" to French is:\\n\\n\"J\\'adore le programmation.\"', additional_kwargs={}, response_metadata={'token_usage': {'completion_tokens': 22, 'prompt_tokens': 55, 'total_tokens': 77, 'completion_time': 0.029333333, 'prompt_time': 0.003502892, 'queue_time': 0.553054073, 'total_time': 0.032836225}, 'model_name': 'llama-3.1-8b-instant', 'system_fingerprint': 'fp_a491995411', 'finish_reason': 'stop', 'logprobs': None}, id='run-2b2da04a-993c-40ab-becc-201eab8b1a1b-0', usage_metadata={'input_tokens': 55, 'output_tokens': 22, 'total_tokens': 77})"
]
},
"execution_count": 5,
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
@@ -174,7 +163,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 3,
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"metadata": {},
"outputs": [
@@ -182,9 +171,9 @@
"name": "stdout",
"output_type": "stream",
"text": [
"I enjoy programming. (The French translation is: \"J'aime programmer.\")\n",
"The translation of \"I love programming\" to French is:\n",
"\n",
"Note: I chose to translate \"I love programming\" as \"J'aime programmer\" instead of \"Je suis amoureux de programmer\" because the latter has a romantic connotation that is not present in the original English sentence.\n"
"\"J'adore le programmation.\"\n"
]
}
],
@@ -204,17 +193,17 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 4,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='That\\'s great! I can help you translate English phrases related to programming into German.\\n\\n\"I love programming\" can be translated as \"Ich liebe Programmieren\" in German.\\n\\nHere are some more programming-related phrases translated into German:\\n\\n* \"Programming language\" = \"Programmiersprache\"\\n* \"Code\" = \"Code\"\\n* \"Variable\" = \"Variable\"\\n* \"Function\" = \"Funktion\"\\n* \"Array\" = \"Array\"\\n* \"Object-oriented programming\" = \"Objektorientierte Programmierung\"\\n* \"Algorithm\" = \"Algorithmus\"\\n* \"Data structure\" = \"Datenstruktur\"\\n* \"Debugging\" = \"Fehlersuche\"\\n* \"Compile\" = \"Kompilieren\"\\n* \"Link\" = \"Verknüpfen\"\\n* \"Run\" = \"Ausführen\"\\n* \"Test\" = \"Testen\"\\n* \"Deploy\" = \"Bereitstellen\"\\n* \"Version control\" = \"Versionskontrolle\"\\n* \"Open source\" = \"Open Source\"\\n* \"Software development\" = \"Softwareentwicklung\"\\n* \"Agile methodology\" = \"Agile Methodik\"\\n* \"DevOps\" = \"DevOps\"\\n* \"Cloud computing\" = \"Cloud Computing\"\\n\\nI hope this helps! Let me know if you have any other questions or if you need further translations.', response_metadata={'token_usage': {'completion_tokens': 331, 'prompt_tokens': 25, 'total_tokens': 356, 'completion_time': 0.520006542, 'prompt_time': 0.00250165, 'queue_time': None, 'total_time': 0.522508192}, 'model_name': 'mixtral-8x7b-32768', 'system_fingerprint': 'fp_c5f20b5bb1', 'finish_reason': 'stop', 'logprobs': None}, id='run-74207fb7-85d3-417d-b2b9-621116b75d41-0', usage_metadata={'input_tokens': 25, 'output_tokens': 331, 'total_tokens': 356})"
"AIMessage(content='Ich liebe Programmieren.', additional_kwargs={}, response_metadata={'token_usage': {'completion_tokens': 6, 'prompt_tokens': 50, 'total_tokens': 56, 'completion_time': 0.008, 'prompt_time': 0.003337935, 'queue_time': 0.20949214500000002, 'total_time': 0.011337935}, 'model_name': 'llama-3.1-8b-instant', 'system_fingerprint': 'fp_a491995411', 'finish_reason': 'stop', 'logprobs': None}, id='run-e33b48dc-5e55-466e-9ebd-7b48c81c3cbd-0', usage_metadata={'input_tokens': 50, 'output_tokens': 6, 'total_tokens': 56})"
]
},
"execution_count": 7,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
@@ -269,7 +258,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -210,7 +210,7 @@
"id": "96ed13d4",
"metadata": {},
"source": [
"Instead of `model_id`, you can also pass the `deployment_id` of the previously tuned model. The entire model tuning workflow is described in [Working with TuneExperiment and PromptTuner](https://ibm.github.io/watsonx-ai-python-sdk/pt_working_with_class_and_prompt_tuner.html)."
"Instead of `model_id`, you can also pass the `deployment_id` of the previously [deployed model with reference to a Prompt Template](https://cloud.ibm.com/apidocs/watsonx-ai#deployments-text-chat)."
]
},
{
@@ -228,6 +228,31 @@
")"
]
},
{
"cell_type": "markdown",
"id": "3d29767c",
"metadata": {},
"source": [
"For certain requirements, there is an option to pass the IBM's [`APIClient`](https://ibm.github.io/watsonx-ai-python-sdk/base.html#apiclient) object into the `ChatWatsonx` class."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0ae9531e",
"metadata": {},
"outputs": [],
"source": [
"from ibm_watsonx_ai import APIClient\n",
"\n",
"api_client = APIClient(...)\n",
"\n",
"chat = ChatWatsonx(\n",
" model_id=\"ibm/granite-34b-code-instruct\",\n",
" watsonx_client=api_client,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "f571001d",
@@ -448,9 +473,7 @@
"source": [
"## Tool calling\n",
"\n",
"### ChatWatsonx.bind_tools()\n",
"\n",
"Please note that `ChatWatsonx.bind_tools` is on beta state, so we recommend using `mistralai/mistral-large` model."
"### ChatWatsonx.bind_tools()"
]
},
{
@@ -563,7 +586,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": "langchain_ibm",
"language": "python",
"name": "python3"
},

View File

@@ -17,7 +17,7 @@ If you'd like to contribute an integration, see [Contributing integrations](/doc
import ChatModelTabs from "@theme/ChatModelTabs";
<ChatModelTabs openaiParams={`model="gpt-4o-mini"`} />
<ChatModelTabs overrideParams={{openai: {model: "gpt-4o-mini"}}} />
```python
model.invoke("Hello, world!")

View File

@@ -19,7 +19,7 @@
"source": [
"# ChatSambaNovaCloud\n",
"\n",
"This will help you getting started with SambaNovaCloud [chat models](/docs/concepts/chat_models/). For detailed documentation of all ChatSambaNovaCloud features and configurations head to the [API reference](https://python.langchain.com/api_reference/sambanova/chat_models/langchain_sambanova.ChatSambaNovaCloud.html).\n",
"This will help you getting started with SambaNovaCloud [chat models](/docs/concepts/chat_models/). For detailed documentation of all ChatSambaNovaCloud features and configurations head to the [API reference](https://docs.sambanova.ai/cloud/docs/get-started/overview).\n",
"\n",
"**[SambaNova](https://sambanova.ai/)'s** [SambaNova Cloud](https://cloud.sambanova.ai/) is a platform for performing inference with open-source models\n",
"\n",
@@ -28,7 +28,7 @@
"\n",
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatSambaNovaCloud](https://python.langchain.com/api_reference/sambanova/chat_models/langchain_sambanova.ChatSambaNovaCloud.html) | [langchain-community](https://python.langchain.com/api_reference/community/index.html) | ❌ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_sambanova?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_sambanova?style=flat-square&label=%20) |\n",
"| [ChatSambaNovaCloud](https://docs.sambanova.ai/cloud/docs/get-started/overview) | [langchain-sambanova](https://python.langchain.com/docs/integrations/providers/sambanova/) | ❌ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_sambanova?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_sambanova?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"\n",
@@ -545,7 +545,7 @@
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatSambaNovaCloud features and configurations head to the API reference: https://python.langchain.com/api_reference/sambanova/chat_models/langchain_sambanova.ChatSambaNovaCloud.html"
"For detailed documentation of all SambaNovaCloud features and configurations head to the API reference: https://docs.sambanova.ai/cloud/docs/get-started/overview"
]
}
],

View File

@@ -19,7 +19,7 @@
"source": [
"# ChatSambaStudio\n",
"\n",
"This will help you getting started with SambaStudio [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatStudio features and configurations head to the [API reference](https://python.langchain.com/api_reference/sambanova/chat_models/langchain_sambanova.chat_models.sambanova.ChatSambaStudio.html).\n",
"This will help you getting started with SambaStudio [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatStudio features and configurations head to the [API reference](https://docs.sambanova.ai/sambastudio/latest/index.html).\n",
"\n",
"**[SambaNova](https://sambanova.ai/)'s** [SambaStudio](https://docs.sambanova.ai/sambastudio/latest/sambastudio-intro.html) SambaStudio is a rich, GUI-based platform that provides the functionality to train, deploy, and manage models in SambaNova [DataScale](https://sambanova.ai/products/datascale) systems.\n",
"\n",
@@ -28,7 +28,7 @@
"\n",
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatSambaStudio](https://python.langchain.com/api_reference/sambanova/chat_models/langchain_sambanova.chat_models.sambanova.ChatSambaStudio.html) | [langchain-community](https://python.langchain.com/api_reference/community/index.html) | ❌ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_sambanova?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_sambanova?style=flat-square&label=%20) |\n",
"| [ChatSambaStudio](https://docs.sambanova.ai/sambastudio/latest/index.html) | [langchain-sambanova](https://python.langchain.com/docs/integrations/providers/sambanova/) | ❌ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_sambanova?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_sambanova?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"\n",
@@ -483,7 +483,7 @@
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatSambaStudio features and configurations head to the API reference: https://python.langchain.com/api_reference/sambanova/chat_models/langchain_sambanova.sambanova.chat_models.ChatSambaStudio.html"
"For detailed documentation of all SambaStudio features and configurations head to the API reference: https://docs.sambanova.ai/sambastudio/latest/api-ref-landing.html"
]
}
],

View File

@@ -26,22 +26,9 @@
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Install the package\n",
"%pip install --upgrade --quiet dashscope"
@@ -49,8 +36,12 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 1,
"metadata": {
"ExecuteTime": {
"end_time": "2025-03-05T01:11:20.457141Z",
"start_time": "2025-03-05T01:11:18.810160Z"
},
"collapsed": false,
"jupyter": {
"outputs_hidden": false
@@ -66,8 +57,12 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 2,
"metadata": {
"ExecuteTime": {
"end_time": "2025-03-05T01:11:24.270318Z",
"start_time": "2025-03-05T01:11:24.268064Z"
},
"collapsed": false,
"jupyter": {
"outputs_hidden": false
@@ -266,6 +261,52 @@
"ai_message"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Partial Mode\n",
"Enable the large model to continue generating content from the initial text you provide."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"ExecuteTime": {
"end_time": "2025-03-05T01:31:29.155824Z",
"start_time": "2025-03-05T01:31:27.239667Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=' has cast off its heavy cloak of snow, donning instead a vibrant garment of fresh greens and floral hues; it is as if the world has woken from a long slumber, stretching and reveling in the warm caress of the sun. Everywhere I look, there is a symphony of life: birdsong fills the air, bees dance from flower to flower, and a gentle breeze carries the sweet fragrance of blossoms. It is in this season that my heart finds particular joy, for it whispers promises of renewal and growth, reminding me that even after the coldest winters, there will always be a spring to follow.', additional_kwargs={}, response_metadata={'model_name': 'qwen-turbo', 'finish_reason': 'stop', 'request_id': '447283e9-ee31-9d82-8734-af572921cb05', 'token_usage': {'input_tokens': 40, 'output_tokens': 127, 'prompt_tokens_details': {'cached_tokens': 0}, 'total_tokens': 167}}, id='run-6a35a91c-cc12-4afe-b56f-fd26d9035357-0')"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_community.chat_models.tongyi import ChatTongyi\n",
"from langchain_core.messages import AIMessage, HumanMessage\n",
"\n",
"messages = [\n",
" HumanMessage(\n",
" content=\"\"\"Please continue the sentence \"Spring has arrived, and the earth\" to express the beauty of spring and the author's joy.\"\"\"\n",
" ),\n",
" AIMessage(\n",
" content=\"Spring has arrived, and the earth\", additional_kwargs={\"partial\": True}\n",
" ),\n",
"]\n",
"chatLLM = ChatTongyi()\n",
"ai_message = chatLLM.invoke(messages)\n",
"ai_message"
]
},
{
"cell_type": "markdown",
"metadata": {},

View File

@@ -1,362 +1,231 @@
{
"cells": [
{
"cell_type": "raw",
"id": "85e07aae70a15572",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Writer\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "cb4dd00a-8893-4a45-96f7-9a9fc341cd61",
"id": "e815de6298bf07ca",
"metadata": {},
"source": [
"# ChatWriter\n",
"# Chat Writer\n",
"\n",
"This notebook provides a quick overview for getting started with Writer [chat models](/docs/concepts/chat_models).\n",
"This notebook provides a quick overview for getting started with Writer [chat](/docs/concepts/chat_models/).\n",
"\n",
"Writer has several chat models. You can find information about their latest models and their costs, context windows, and supported input types in the [Writer docs](https://dev.writer.com/home).\n",
"\n",
"Writer has several chat models. You can find information about their latest models and their costs, context windows, and supported input types in the [Writer docs](https://dev.writer.com/home/models).\n",
"\n",
":::"
]
},
{
"cell_type": "markdown",
"id": "617a6e98205ab7c8",
"metadata": {},
"source": [
"## Overview\n",
"\n",
"### Integration details\n",
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: |:----------:| :---: | :---: |\n",
"| ChatWriter | langchain-community | ❌ | ❌ | | ❌ | |\n",
"\n",
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
"|:-------------------------------------------------------------------------------------------------------------------------|:-----------------| :---: | :---: |:----------:|:------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------:|\n",
"| [ChatWriter](https://github.com/writer/langchain-writer/blob/main/langchain_writer/chat_models.py#L308) | [langchain-writer](https://pypi.org/project/langchain-writer/) | ❌ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-writer?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-writer?style=flat-square&label=%20) |\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling) | Structured output | JSON mode | Image input | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | Logprobs |\n",
"| :---: |:-----------------:| :---: | :---: | :---: | :---: | :---: | :---: |:--------------------------------:|:--------:|\n",
"| ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |\n",
"\n",
"## Setup\n",
"\n",
"To access Writer models you'll need to create a Writer account, get an API key, and install the `writer-sdk` and `langchain-community` packages.\n",
"\n",
"| ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |"
]
},
{
"cell_type": "markdown",
"id": "3fd9903e685808d9",
"metadata": {},
"source": [
"### Credentials\n",
"\n",
"Head to [Writer AI Studio](https://app.writer.com/aistudio/signup?utm_campaign=devrel) to sign up to OpenAI and generate an API key. Once you've done this set the WRITER_API_KEY environment variable:"
"Sign up for [Writer AI Studio](https://app.writer.com/aistudio/signup?utm_campaign=devrel) and follow this [Quickstart](https://dev.writer.com/api-guides/quickstart) to obtain an API key. Then, set the WRITER_API_KEY environment variable:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "e817fe2e-4f1d-4533-b19e-2400b1cf6ce8",
"id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94",
"metadata": {
"ExecuteTime": {
"end_time": "2024-11-14T09:46:26.800627Z",
"start_time": "2024-11-14T09:27:59.652281Z"
"jupyter": {
"is_executing": true
}
},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if not os.environ.get(\"WRITER_API_KEY\"):\n",
" os.environ[\"WRITER_API_KEY\"] = getpass.getpass(\"Enter your Writer API key:\")"
]
"if not os.getenv(\"WRITER_API_KEY\"):\n",
" os.environ[\"WRITER_API_KEY\"] = getpass.getpass(\"Enter your Writer API key: \")"
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"id": "c59722a9-6dbb-45f7-ae59-5be50ca5733d",
"id": "72ee0c4b-9764-423a-9dbf-95129e185210",
"metadata": {},
"source": [
"If you want to get automated tracing of your model calls, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
"metadata": {},
"source": [
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain Writer integration lives in the `langchain-community` package:"
"`ChatWriter` is available from the `langchain-writer` package. Install it with:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "2113471c-75d7-45df-b784-d78da4ef7aba",
"metadata": {
"ExecuteTime": {
"end_time": "2024-11-14T09:46:32.415354Z",
"start_time": "2024-11-14T09:46:26.826112Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\r\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m24.2\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m24.3.1\u001b[0m\r\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\r\n",
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"source": [
"%pip install -qU langchain-community writer-sdk"
]
"%pip install -qU langchain-writer"
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"id": "1098bc9d-ce83-462b-8c19-f85bf3a159dc",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"## Instantiation\n",
"### Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
"Now we can instantiate our model object in order to generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "522686de",
"metadata": {
"ExecuteTime": {
"end_time": "2024-11-14T09:46:33.504711Z",
"start_time": "2024-11-14T09:46:32.574505Z"
},
"tags": []
},
"outputs": [],
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"source": [
"from langchain_community.chat_models.writer import ChatWriter\n",
"from langchain_writer import ChatWriter\n",
"\n",
"llm = ChatWriter(\n",
" model=\"palmyra-x-004\",\n",
" temperature=0.7,\n",
" max_tokens=1000,\n",
" # other params...\n",
" temperature=0,\n",
" max_tokens=None,\n",
" timeout=None,\n",
" max_retries=2,\n",
")"
]
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"id": "6511982a-734a-4193-a47d-254f8dcaff5e",
"id": "2b4f3e15",
"metadata": {},
"source": [
"## Invocation"
"## Usage\n",
"\n",
"To use the model, you pass in a list of messages and call the `invoke` method:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "ce16ad78-8e6f-48cd-954e-98be75eb5836",
"id": "62e0dbc3",
"metadata": {
"ExecuteTime": {
"end_time": "2024-11-14T09:46:38.856174Z",
"start_time": "2024-11-14T09:46:33.520062Z"
},
"tags": []
},
"outputs": [],
"source": [
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that writes poems about the Python programming language.\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"Write a poem about Python.\"),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "2cd224b8-4499-41fb-a604-d53a7ff17b2e",
"metadata": {
"ExecuteTime": {
"end_time": "2024-11-14T09:46:38.866651Z",
"start_time": "2024-11-14T09:46:38.863817Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"In realms of code, where logic weaves and flows,\n",
"A language rises, Python by its name,\n",
"With syntax clear, where elegance it shows,\n",
"A serpent, wise, that time and space can tame.\n",
"\n",
"Born from the mind of Guido, pure and bright,\n",
"Its beauty lies in simplicity and grace,\n",
"A tool of power, yet gentle in its might,\n",
"In every programmer's heart, a cherished place.\n",
"\n",
"It dances through the data, vast and deep,\n",
"With libraries that span the digital realm,\n",
"From machine learning's secrets to keep,\n",
"To web development, it wields the helm.\n",
"\n",
"In the hands of the novice and the sage,\n",
"Python spins the threads of digital dreams,\n",
"A language that can turn the age,\n",
"With a gentle learning curve, its appeal gleams.\n",
"\n",
"It's more than code, a community it builds,\n",
"Where knowledge freely flows, and all are heard,\n",
"In Python's world, the future unfolds,\n",
"A language of the people, for the world.\n",
"\n",
"So here's to Python, in its gentle might,\n",
"A master of the modern coding art,\n",
"May it continue to light our path each night,\n",
"In the vast, evolving world of code, its heart.\n"
]
}
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
],
"source": [
"print(ai_msg.content)"
]
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"id": "35b3a5b3dabef65",
"id": "5cf7293d",
"metadata": {},
"source": [
"## Streaming"
"Then, you can access the content of the message:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "2725770182bf96dc",
"metadata": {
"ExecuteTime": {
"end_time": "2024-11-14T09:46:38.914883Z",
"start_time": "2024-11-14T09:46:38.912564Z"
}
},
"outputs": [],
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"metadata": {},
"source": [
"ai_stream = llm.stream(messages)"
"print(ai_msg.content)"
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"id": "4391289ce0a80e19",
"metadata": {},
"source": [
"## Streaming\n",
"\n",
"You can also stream the response. First, create a stream:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "a48410d9488162e3",
"metadata": {
"ExecuteTime": {
"end_time": "2024-11-14T09:46:43.226449Z",
"start_time": "2024-11-14T09:46:38.955512Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"In realms of code where logic weaves,\n",
"A language rises, Python, it breezes,\n",
"With syntax clear and simple to read,\n",
"Through its elegance, our spirits are fed.\n",
"\n",
"Like rivers flowing, smooth and serene,\n",
"Its structure harmonious, a coder's dream,\n",
"Indentations guide the flow of control,\n",
"In Python's world, confusion takes no toll.\n",
"\n",
"A vast library, a treasure trove so bright,\n",
"For web and data, it offers its might,\n",
"With modules and packages, a rich array,\n",
"Python empowers us to code in play.\n",
"\n",
"From AI to scripts, in flexibility it thrives,\n",
"A language of the future, as many now derive,\n",
"Its community, a beacon of support and cheer,\n",
"With Python, the possibilities are vast, far and near.\n",
"\n",
"So here's to Python, in its gentle grace,\n",
"A tool that enhances, a language that embraces,\n",
"The art of coding, with a fluent, flowing pen,\n",
"In the Python world, we code, and we begin."
]
}
"id": "4a0f2112b3a4c79e",
"metadata": {},
"source": [
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming. Sing a song about it\"),\n",
"]\n",
"ai_stream = llm.stream(messages)\n",
"ai_stream"
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"id": "23cc74b6",
"metadata": {},
"source": [
"Then, iterate over the stream to get the chunks:"
]
},
{
"cell_type": "code",
"id": "8c4b7b9b9308c757",
"metadata": {},
"source": [
"for chunk in ai_stream:\n",
" print(chunk.content, end=\"\")"
]
},
{
"cell_type": "markdown",
"id": "778f912a-66ea-4a5d-b3de-6c7db4baba26",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "fbb043e6",
"metadata": {
"ExecuteTime": {
"end_time": "2024-11-14T09:46:50.721645Z",
"start_time": "2024-11-14T09:46:43.234590Z"
},
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessageChunk(content='In the realm of code, where logic weaves and flows, \\nA language rises, like a phoenix from the code\\'s throes. \\nJava, the name, a cup of coffee\\'s steam, \\nBrewed in the minds, where digital dreams gleam.\\n\\nWith syntax clear, like morning\\'s misty hue, \\nIn classes and objects, it spins a tale so true. \\nA platform agnostic, with a byte to spare, \\nAcross the devices, it journeys everywhere.\\n\\nInheritance and polymorphism, its power\\'s core, \\nLike ancient runes, in every line they bore. \\nEncapsulation, a shield, with data it does hide, \\nIn the vast jungle of code, it stands as a guide.\\n\\nFrom applets small, to vast, server-side apps, \\nIts threads run swift, through the computing traps. \\nA language of the people, by the people, for the peoples use, \\nBuilt on the principle, \"write once, run anywhere, with no excuse.\"\\n\\nIn the heart of Android, it beats, a steady drum, \\nCrafting experiences, in every smartphone\\'s hum. \\nIn the cloud, in the enterprise, its presence is vast, \\nA cornerstone of computing, built to last.\\n\\nOh Java, thy elegance, thy robust design, \\nA language that stands, in any computing line. \\nWith every update, with every new release, \\nThy community grows, with a vibrant, diverse peace.\\n\\nSo here\\'s to Java, the versatile, the grand, \\nA language that shapes the digital land. \\nMay it continue to evolve, to grow, to inspire, \\nIn the endless quest of turning thoughts into digital fire.', additional_kwargs={}, response_metadata={'token_usage': {'completion_tokens': 345, 'prompt_tokens': 33, 'total_tokens': 378, 'completion_tokens_details': None, 'prompt_token_details': None}, 'model_name': 'palmyra-x-004', 'system_fingerprint': 'v1', 'finish_reason': 'stop'}, id='run-a5b4be59-0eb0-41bd-80f7-72477861b0bd-0')"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that writes poems about the {input_language} programming language.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"Java\",\n",
" \"input\": \"Write a poem about Java.\",\n",
" }\n",
")"
]
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"id": "0b1b52a5-b58d-40c9-bcdd-88eb8fb351e2",
"id": "e632bf7d0873f933",
"metadata": {},
"source": [
"## Tool calling\n",
"\n",
"Writer supports [tool calling](https://dev.writer.com/api-guides/tool-calling), which lets you describe tools and their arguments, and have the model return a JSON object with a tool to invoke and the inputs to that tool.\n",
"Writer models like Palmyra X 004 support [tool calling](https://dev.writer.com/api-guides/tool-calling), which lets you describe tools and their arguments. The model will return a JSON object with a tool to invoke and the inputs to that tool.\n",
"\n",
"### ChatWriter.bind_tools()\n",
"### Binding tools\n",
"\n",
"With `ChatWriter.bind_tools`, we can easily pass in Pydantic classes, dict schemas, LangChain tools, or even functions as tools to the model. Under the hood these are converted to tool schemas, which looks like:\n",
"With `ChatWriter.bind_tools`, you can easily pass in Pydantic classes, dictionary schemas, LangChain tools, or even functions as tools to the model. Under the hood, these are converted to tool schemas, which look like this:\n",
"```\n",
"{\n",
" \"name\": \"...\",\n",
@@ -364,20 +233,15 @@
" \"parameters\": {...} # JSONSchema\n",
"}\n",
"```\n",
"and passed in every model invocation."
"These are passed in every model invocation.\n",
"\n",
"For example, to use a tool that gets the weather in a given location, you can define a Pydantic class and pass it to `ChatWriter.bind_tools`:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "b7ea7690-ec7a-4337-b392-e87d1f39a6ec",
"metadata": {
"ExecuteTime": {
"end_time": "2024-11-14T09:46:50.891937Z",
"start_time": "2024-11-14T09:46:50.733463Z"
}
},
"outputs": [],
"id": "47e2f0faceca533",
"metadata": {},
"source": [
"from pydantic import BaseModel, Field\n",
"\n",
@@ -388,86 +252,175 @@
" location: str = Field(..., description=\"The city and state, e.g. San Francisco, CA\")\n",
"\n",
"\n",
"llm_with_tools = llm.bind_tools([GetWeather])"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "1d1ab955-6a68-42f8-bb5d-86eb1111478a",
"metadata": {
"ExecuteTime": {
"end_time": "2024-11-14T09:46:51.725422Z",
"start_time": "2024-11-14T09:46:50.904699Z"
}
},
"llm.bind_tools([GetWeather])"
],
"outputs": [],
"source": [
"ai_msg = llm_with_tools.invoke(\n",
" \"what is the weather like in New York City\",\n",
")"
]
"execution_count": null
},
{
"cell_type": "markdown",
"id": "768d1ae4-4b1a-48eb-a329-c8d5051067a3",
"id": "68e22d3b",
"metadata": {},
"source": [
"### AIMessage.tool_calls\n",
"Notice that the AIMessage has a `tool_calls` attribute. This contains in a standardized ToolCall format that is model-provider agnostic."
"Then, you can invoke the model with the tool:"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "166cb7ce-831d-4a7c-9721-abc107f11084",
"metadata": {
"ExecuteTime": {
"end_time": "2024-11-14T09:46:51.744202Z",
"start_time": "2024-11-14T09:46:51.738431Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'GetWeather',\n",
" 'args': {'location': 'New York City, NY'},\n",
" 'id': 'chatcmpl-tool-fe70912c800d40fc8700d604d4823001',\n",
" 'type': 'tool_call'}]"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
"id": "765527dd533ec967",
"metadata": {},
"source": [
"ai_msg = llm.invoke(\n",
" \"what is the weather like in New York City\",\n",
")\n",
"ai_msg"
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"id": "57544bdf",
"metadata": {},
"source": [
"Finally, you can access the tool calls and proceed to execute your functions:"
]
},
{
"cell_type": "code",
"id": "f361c4769e772fe",
"metadata": {},
"source": [
"print(ai_msg.tool_calls)"
]
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"id": "e082c9ac-c7c7-4aff-a8ec-8e220262a59c",
"id": "3baf53021834d2ff",
"metadata": {},
"source": [
"For more on binding tools and tool call outputs, head to the [tool calling](/docs/how_to/function_calling) docs."
"### A note on tool binding\n",
"\n",
"The `ChatWriter.bind_tools()` method does not create new instance with bound tools, but stores the received `tools` and `tool_choice` in the initial class instance attributes to pass them as parameters during the Palmyra LLM call while using `ChatWriter` invocation. This approach allows the support of different tool types, e.g. `function` and `graph`. `Graph` is one of the remotely called Writer Palmyra tools. For further information visit our [docs](https://dev.writer.com/api-guides/knowledge-graph#knowledge-graph). \n",
"\n",
"For more information about tool usage in LangChain, visit the [LangChain tool calling documentation](https://python.langchain.com/docs/concepts/tool_calling/)."
]
},
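{
"cell_type": "markdown",
"id": "3f2a9c4b1d8e7a60",
"metadata": {},
"source": [
"Below is a minimal sketch of this behavior, reusing the `llm` and `GetWeather` defined above. The \"auto\" value for `tool_choice` is an assumption for illustration, not a documented default:"
]
},
{
"cell_type": "code",
"id": "4b5c6d7e8f901a2b",
"metadata": {},
"source": [
"# Because bind_tools stores the tools on the original instance, the same\n",
"# `llm` object is used for the subsequent tool call (sketch; the\n",
"# tool_choice value \"auto\" is an assumption).\n",
"llm.bind_tools([GetWeather], tool_choice=\"auto\")\n",
"ai_msg = llm.invoke(\"what is the weather like in New York City\")\n",
"print(ai_msg.tool_calls)"
],
"outputs": [],
"execution_count": null
},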
{
"cell_type": "markdown",
"id": "a796d728-971b-408b-88d5-440015bbb941",
"id": "a4674b1b82ce9d1f",
"metadata": {},
"source": [
"## Batching\n",
"\n",
"You can also batch requests and set the `max_concurrency`:"
]
},
{
"cell_type": "code",
"id": "c8a217f6190747fe",
"metadata": {},
"source": [
"ai_batch = llm.batch(\n",
" [\n",
" \"How to cook pancakes?\",\n",
" \"How to compose poem?\",\n",
" \"How to run faster?\",\n",
" ],\n",
" config={\"max_concurrency\": 3},\n",
")\n",
"ai_batch"
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"id": "2eb81e1d",
"metadata": {},
"source": [
"Then, iterate over the batch to get the results:"
]
},
{
"cell_type": "code",
"id": "b6a228d448f3df23",
"metadata": {},
"source": [
"for batch in ai_batch:\n",
" print(batch.content)\n",
" print(\"-\" * 100)"
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"id": "58a9ab241fe09a71",
"metadata": {},
"source": [
"## Asynchronous usage\n",
"\n",
"All features above (invocation, streaming, batching, tools calling) also support asynchronous usage."
]
},
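{
"cell_type": "markdown",
"id": "9b8c7d6e5f403a21",
"metadata": {},
"source": [
"As a minimal sketch using the standard LangChain async `Runnable` methods (assuming the same `llm` and `messages` as above):"
]
},
{
"cell_type": "code",
"id": "0a1b2c3d4e5f6789",
"metadata": {},
"source": [
"# Async invocation (sketch; reuses `llm` and `messages` from above)\n",
"ai_msg = await llm.ainvoke(messages)\n",
"print(ai_msg.content)\n",
"\n",
"# Async streaming\n",
"async for chunk in llm.astream(messages):\n",
"    print(chunk.content, end=\"\")"
],
"outputs": [],
"execution_count": null
},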
{
"cell_type": "markdown",
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
"metadata": {},
"source": [
"## Prompt templates\n",
"\n",
"[Prompt templates](https://python.langchain.com/docs/concepts/prompt_templates/) help to translate user input and parameters into instructions for a language model. You can use `ChatWriter` with a prompt templates like so:\n"
]
},
{
"cell_type": "code",
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {},
"source": [
"## API reference\n",
"For detailed documentation of all ChatWriter features and configurations head to the [API reference](https://python.langchain.com/api_reference/writer/chat_models/langchain_writer.chat_models.ChatWriter.html#langchain_writer.chat_models.ChatWriter).\n",
"\n",
"For detailed documentation of all Writer features, head to our [API reference](https://dev.writer.com/api-guides/api-reference/completion-api/chat-completion)."
"## Additional resources\n",
"You can find information about Writer's models (including costs, context windows, and supported input types) and tools in the [Writer docs](https://dev.writer.com/home)."
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -481,7 +434,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
"version": "3.11.9"
}
},
"nbformat": 4,

View File

@@ -2,7 +2,9 @@
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"metadata": {
"id": "xwiDq5fOuoRn"
},
"source": [
"# Apify Dataset\n",
"\n",
@@ -20,33 +22,63 @@
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "qRW2-mokuoRp",
"tags": []
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet apify-client"
"%pip install --upgrade --quiet langchain langchain-apify langchain-openai"
]
},
{
"cell_type": "markdown",
"metadata": {},
"metadata": {
"id": "8jRVq16LuoRq"
},
"source": [
"First, import `ApifyDatasetLoader` into your source code:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"execution_count": 2,
"metadata": {
"id": "umXQHqIJuoRq"
},
"outputs": [],
"source": [
"from langchain_community.document_loaders import ApifyDatasetLoader\n",
"from langchain_apify import ApifyDatasetLoader\n",
"from langchain_core.documents import Document"
]
},
{
"cell_type": "markdown",
"metadata": {},
"metadata": {
"id": "NjGwKy59vz1X"
},
"source": [
"Find your [Apify API token](https://console.apify.com/account/integrations) and [OpenAI API key](https://platform.openai.com/account/api-keys) and initialize these into environment variable:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"id": "AvzNtyCxwDdr"
},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"APIFY_API_TOKEN\"] = \"your-apify-api-token\"\n",
"os.environ[\"OPENAI_API_KEY\"] = \"your-openai-api-key\""
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "d1O-KL48uoRr"
},
"source": [
"Then provide a function that maps Apify dataset record fields to LangChain `Document` format.\n",
"\n",
@@ -64,8 +96,10 @@
},
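{
"cell_type": "markdown",
"metadata": {
"id": "sketchMapFnMd"
},
"source": [
"For example, here is a hedged sketch of such a mapping function. The \"text\" and \"url\" record fields are assumptions; use the fields your Actor actually produces:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "sketchMapFnCode"
},
"outputs": [],
"source": [
"# Sketch: map an Apify dataset record to a LangChain Document.\n",
"# The \"text\" and \"url\" field names are assumptions.\n",
"loader = ApifyDatasetLoader(\n",
"    dataset_id=\"your-dataset-id\",\n",
"    dataset_mapping_function=lambda item: Document(\n",
"        page_content=item[\"text\"] or \"\",\n",
"        metadata={\"source\": item[\"url\"]},\n",
"    ),\n",
")"
]
},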
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"execution_count": 8,
"metadata": {
"id": "m1SpA7XZuoRr"
},
"outputs": [],
"source": [
"loader = ApifyDatasetLoader(\n",
@@ -78,8 +112,10 @@
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"execution_count": 9,
"metadata": {
"id": "0hWX7ABsuoRs"
},
"outputs": [],
"source": [
"data = loader.load()"
@@ -87,7 +123,9 @@
},
{
"cell_type": "markdown",
"metadata": {},
"metadata": {
"id": "EJCVFVKNuoRs"
},
"source": [
"## An example with question answering\n",
"\n",
@@ -96,21 +134,26 @@
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"execution_count": 14,
"metadata": {
"id": "sNisJKzZuoRt"
},
"outputs": [],
"source": [
"from langchain.indexes import VectorstoreIndexCreator\n",
"from langchain_community.utilities import ApifyWrapper\n",
"from langchain_apify import ApifyWrapper\n",
"from langchain_core.documents import Document\n",
"from langchain_openai import OpenAI\n",
"from langchain_core.vectorstores import InMemoryVectorStore\n",
"from langchain_openai import ChatOpenAI\n",
"from langchain_openai.embeddings import OpenAIEmbeddings"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"execution_count": 15,
"metadata": {
"id": "qcfmnbdDuoRu"
},
"outputs": [],
"source": [
"loader = ApifyDatasetLoader(\n",
@@ -123,27 +166,47 @@
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"execution_count": 16,
"metadata": {
"id": "8b0xzKJxuoRv"
},
"outputs": [],
"source": [
"index = VectorstoreIndexCreator(embedding=OpenAIEmbeddings()).from_loaders([loader])"
"index = VectorstoreIndexCreator(\n",
" vectorstore_cls=InMemoryVectorStore, embedding=OpenAIEmbeddings()\n",
").from_loaders([loader])"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"execution_count": 17,
"metadata": {
"id": "7zPXGsVFwUGA"
},
"outputs": [],
"source": [
"llm = ChatOpenAI(model=\"gpt-4o-mini\")"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {
"id": "ecWrdM4guoRv"
},
"outputs": [],
"source": [
"query = \"What is Apify?\"\n",
"result = index.query_with_sources(query, llm=OpenAI())"
"result = index.query_with_sources(query, llm=llm)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"execution_count": null,
"metadata": {
"id": "QH8r44e9uoRv",
"outputId": "361fe050-f75d-4d5a-c327-5e7bd190fba5"
},
"outputs": [
{
"name": "stdout",
@@ -162,6 +225,9 @@
}
],
"metadata": {
"colab": {
"provenance": []
},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
@@ -181,5 +247,5 @@
}
},
"nbformat": 4,
"nbformat_minor": 4
}
"nbformat_minor": 0
}

View File

@@ -443,6 +443,7 @@
"llm = HuggingFaceEndpoint(\n",
" repo_id=GEN_MODEL_ID,\n",
" huggingfacehub_api_token=HF_TOKEN,\n",
" task=\"text-generation\",\n",
")"
]
},

View File

@@ -0,0 +1,192 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "db23d51760310705",
"metadata": {},
"source": [
"# Writer PDF Parser\n",
"\n",
"This notebook provides a quick overview for getting started with the Writer `PDFParser` [document loader](/docs/concepts/document_loaders/).\n",
"\n",
"Writer's [PDF Parser](https://dev.writer.com/api-guides/api-reference/tool-api/pdf-parser#parse-pdf) converts PDF documents into other formats like text or Markdown. This is particularly useful when you need to extract and process text content from PDF files for further analysis or integration into your workflow. In `langchain-writer`, we provide usage of Writer's PDF Parser as a LangChain document parser.\n",
"\n",
"## Overview\n",
"\n",
"### Integration details\n",
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
"|:-----------------------------------------------------------------------------------------------------------------------------------|:-----------------| :---: | :---: |:----------:|:------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------:|\n",
"| [PDFParser](https://github.com/writer/langchain-writer/blob/main/langchain_writer/pdf_parser.py#L55) | [langchain-writer](https://pypi.org/project/langchain-writer/) | ❌ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-writer?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-writer?style=flat-square&label=%20) |"
]
},
{
"cell_type": "markdown",
"id": "c5f08d23df5dc127",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"The `PDFParser` is available in the `langchain-writer` package:"
]
},
{
"cell_type": "code",
"id": "a8d653f15b7ee32d",
"metadata": {},
"source": [
"%pip install --quiet -U langchain-writer"
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"id": "3b9709c26797edf",
"metadata": {},
"source": [
"### Credentials\n",
"\n",
"Sign up for [Writer AI Studio](https://app.writer.com/aistudio/signup?utm_campaign=devrel) to generate an API key (you can follow this [Quickstart](https://dev.writer.com/api-guides/quickstart)). Then, set the WRITER_API_KEY environment variable:"
]
},
{
"cell_type": "code",
"id": "2983e19c9d555e58",
"metadata": {},
"source": [
"import getpass\n",
"import os\n",
"\n",
"if not os.getenv(\"WRITER_API_KEY\"):\n",
" os.environ[\"WRITER_API_KEY\"] = getpass.getpass(\"Enter your Writer API key: \")"
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"id": "92a22c77f03d43dc",
"metadata": {},
"source": [
"It's also helpful (but not needed) to set up [LangSmith](https://smith.langchain.com/) for best-in-class observability. If you wish to do so, you can set the `LANGCHAIN_TRACING_V2` and `LANGCHAIN_API_KEY` environment variables:"
]
},
{
"cell_type": "code",
"id": "98d8422ecee77403",
"metadata": {},
"source": [
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()"
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"id": "67ab78950a3da8ba",
"metadata": {},
"source": [
"### Instantiation\n",
"\n",
"Next, instantiate an instance of the Writer PDF Parser with the desired output format:"
]
},
{
"cell_type": "code",
"id": "787b3ba8af32533f",
"metadata": {},
"source": [
"from langchain_writer.pdf_parser import PDFParser\n",
"\n",
"parser = PDFParser(format=\"markdown\")"
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"id": "d91c6f752fd31cee",
"metadata": {},
"source": [
"## Usage\n",
"\n",
"There are two ways to use the PDF Parser, either synchronously or asynchronously. In either case, the PDF Parser will return a list of `Document` objects, each containing the parsed content of a page from the PDF file.\n",
"\n",
"### Synchronous usage\n",
"\n",
"To invoke the PDF Parser synchronously, pass a `Blob` object to the `parse` method referencing the PDF file you want to parse:"
]
},
{
"cell_type": "code",
"id": "d1a24b81a8a96f09",
"metadata": {},
"source": [
"from langchain_core.documents.base import Blob\n",
"\n",
"file = Blob.from_path(\"../example_data/layout-parser-paper.pdf\")\n",
"\n",
"parsed_pages = parser.parse(blob=file)\n",
"parsed_pages"
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"id": "f89c048c7d23807a",
"metadata": {},
"source": [
"### Asynchronous usage\n",
"\n",
"To invoke the PDF Parser asynchronously, pass a `Blob` object to the `aparse` method referencing the PDF file you want to parse:"
]
},
{
"cell_type": "code",
"id": "e2f7fd52b7188c6c",
"metadata": {},
"source": [
"parsed_pages_async = await parser.aparse(blob=file)\n",
"parsed_pages_async"
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"id": "ab25a3bed8437a05",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `PDFParser` features and configurations, head to the [API reference](https://python.langchain.com/api_reference/writer/pdf_parser/langchain_writer.pdf_parser.PDFParser.html#langchain_writer.pdf_parser.PDFParser).\n",
"\n",
"## Additional resources\n",
"You can find information about Writer's models (including costs, context windows, and supported input types) and tools in the [Writer docs](https://dev.writer.com/home).\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -0,0 +1,721 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"sidebar_label: PyMuPDF4LLM\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# PyMuPDF4LLMLoader\n",
"\n",
"This notebook provides a quick overview for getting started with PyMuPDF4LLM [document loader](https://python.langchain.com/docs/concepts/#document-loaders). For detailed documentation of all PyMuPDF4LLMLoader features and configurations head to the [GitHub repository](https://github.com/lakinduboteju/langchain-pymupdf4llm).\n",
"\n",
"## Overview\n",
"\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | JS support |\n",
"| :--- | :--- | :---: | :---: | :---: |\n",
"| [PyMuPDF4LLMLoader](https://github.com/lakinduboteju/langchain-pymupdf4llm) | [langchain_pymupdf4llm](https://pypi.org/project/langchain-pymupdf4llm) | ✅ | ❌ | ❌ |\n",
"\n",
"### Loader features\n",
"\n",
"| Source | Document Lazy Loading | Native Async Support | Extract Images | Extract Tables |\n",
"| :---: | :---: | :---: | :---: | :---: |\n",
"| PyMuPDF4LLMLoader | ✅ | ❌ | ✅ | ✅ |\n",
"\n",
"## Setup\n",
"\n",
"To access PyMuPDF4LLM document loader you'll need to install the `langchain-pymupdf4llm` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"No credentials are required to use PyMuPDF4LLMLoader."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you want to get automated best in-class tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"Install **langchain_community** and **langchain-pymupdf4llm**."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install -qU langchain_community langchain-pymupdf4llm"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialization\n",
"\n",
"Now we can instantiate our model object and load documents:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"from langchain_pymupdf4llm import PyMuPDF4LLMLoader\n",
"\n",
"file_path = \"./example_data/layout-parser-paper.pdf\"\n",
"loader = PyMuPDF4LLMLoader(file_path)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(metadata={'producer': 'pdfTeX-1.40.21', 'creator': 'LaTeX with hyperref', 'creationdate': '2021-06-22T01:27:10+00:00', 'source': './example_data/layout-parser-paper.pdf', 'file_path': './example_data/layout-parser-paper.pdf', 'total_pages': 16, 'format': 'PDF 1.5', 'title': '', 'author': '', 'subject': '', 'keywords': '', 'moddate': '2021-06-22T01:27:10+00:00', 'trapped': '', 'modDate': 'D:20210622012710Z', 'creationDate': 'D:20210622012710Z', 'page': 0}, page_content='```\\nLayoutParser: A Unified Toolkit for Deep\\n\\n## Learning Based Document Image Analysis\\n\\n```\\n\\nZejiang Shen[1] (<28>), Ruochen Zhang[2], Melissa Dell[3], Benjamin Charles Germain\\nLee[4], Jacob Carlson[3], and Weining Li[5]\\n\\n1 Allen Institute for AI\\n```\\n shannons@allenai.org\\n\\n```\\n2 Brown University\\n```\\n ruochen zhang@brown.edu\\n\\n```\\n3 Harvard University\\n_{melissadell,jacob carlson}@fas.harvard.edu_\\n4 University of Washington\\n```\\n bcgl@cs.washington.edu\\n\\n```\\n5 University of Waterloo\\n```\\n w422li@uwaterloo.ca\\n\\n```\\n\\n**Abstract. Recent advances in document image analysis (DIA) have been**\\nprimarily driven by the application of neural networks. Ideally, research\\noutcomes could be easily deployed in production and extended for further\\ninvestigation. However, various factors like loosely organized codebases\\nand sophisticated model configurations complicate the easy reuse of important innovations by a wide audience. Though there have been on-going\\nefforts to improve reusability and simplify deep learning (DL) model\\ndevelopment in disciplines like natural language processing and computer\\nvision, none of them are optimized for challenges in the domain of DIA.\\nThis represents a major gap in the existing toolkit, as DIA is central to\\nacademic research across a wide range of disciplines in the social sciences\\nand humanities. This paper introduces LayoutParser, an open-source\\nlibrary for streamlining the usage of DL in DIA research and applications. The core LayoutParser library comes with a set of simple and\\nintuitive interfaces for applying and customizing DL models for layout detection, character recognition, and many other document processing tasks.\\nTo promote extensibility, LayoutParser also incorporates a community\\nplatform for sharing both pre-trained models and full document digitization pipelines. We demonstrate that LayoutParser is helpful for both\\nlightweight and large-scale digitization pipelines in real-word use cases.\\n[The library is publicly available at https://layout-parser.github.io.](https://layout-parser.github.io)\\n\\n**Keywords: Document Image Analysis · Deep Learning · Layout Analysis**\\n\\n - Character Recognition · Open Source library · Toolkit.\\n\\n### 1 Introduction\\n\\n\\nDeep Learning(DL)-based approaches are the state-of-the-art for a wide range of\\ndocument image analysis (DIA) tasks including document image classification [11,\\n\\n')"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"docs = loader.load()\n",
"docs[0]"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'producer': 'pdfTeX-1.40.21',\n",
" 'creator': 'LaTeX with hyperref',\n",
" 'creationdate': '2021-06-22T01:27:10+00:00',\n",
" 'source': './example_data/layout-parser-paper.pdf',\n",
" 'file_path': './example_data/layout-parser-paper.pdf',\n",
" 'total_pages': 16,\n",
" 'format': 'PDF 1.5',\n",
" 'title': '',\n",
" 'author': '',\n",
" 'subject': '',\n",
" 'keywords': '',\n",
" 'moddate': '2021-06-22T01:27:10+00:00',\n",
" 'trapped': '',\n",
" 'modDate': 'D:20210622012710Z',\n",
" 'creationDate': 'D:20210622012710Z',\n",
" 'page': 0}\n"
]
}
],
"source": [
"import pprint\n",
"\n",
"pprint.pp(docs[0].metadata)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Lazy Load"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"6"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"pages = []\n",
"for doc in loader.lazy_load():\n",
" pages.append(doc)\n",
" if len(pages) >= 10:\n",
" # do some paged operation, e.g.\n",
" # index.upsert(page)\n",
"\n",
" pages = []\n",
"len(pages)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from IPython.display import Markdown, display\n",
"\n",
"part = pages[0].page_content[778:1189]\n",
"print(part)\n",
"# Markdown rendering\n",
"display(Markdown(part))"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'producer': 'pdfTeX-1.40.21',\n",
" 'creator': 'LaTeX with hyperref',\n",
" 'creationdate': '2021-06-22T01:27:10+00:00',\n",
" 'source': './example_data/layout-parser-paper.pdf',\n",
" 'file_path': './example_data/layout-parser-paper.pdf',\n",
" 'total_pages': 16,\n",
" 'format': 'PDF 1.5',\n",
" 'title': '',\n",
" 'author': '',\n",
" 'subject': '',\n",
" 'keywords': '',\n",
" 'moddate': '2021-06-22T01:27:10+00:00',\n",
" 'trapped': '',\n",
" 'modDate': 'D:20210622012710Z',\n",
" 'creationDate': 'D:20210622012710Z',\n",
" 'page': 10}\n"
]
}
],
"source": [
"pprint.pp(pages[0].metadata)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The metadata attribute contains at least the following keys:\n",
"- source\n",
"- page (if in mode *page*)\n",
"- total_page\n",
"- creationdate\n",
"- creator\n",
"- producer\n",
"\n",
"Additional metadata are specific to each parser.\n",
"These pieces of information can be helpful (to categorize your PDFs for example)."
]
},
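{
"cell_type": "markdown",
"metadata": {},
"source": [
"For instance, a small sketch (assuming `docs` loaded as above) that groups pages by the `producer` metadata key:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from collections import defaultdict\n",
"\n",
"# Group loaded pages by the PDF producer found in their metadata (sketch).\n",
"by_producer = defaultdict(list)\n",
"for doc in docs:\n",
"    by_producer[doc.metadata.get(\"producer\", \"unknown\")].append(doc)\n",
"\n",
"print({producer: len(group) for producer, group in by_producer.items()})"
]
},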
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Splitting mode & custom pages delimiter"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"When loading the PDF file you can split it in two different ways:\n",
"- By page\n",
"- As a single text flow\n",
"\n",
"By default PyMuPDF4LLMLoader will split the PDF by page."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Extract the PDF by page. Each page is extracted as a langchain Document object:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"16\n",
"{'producer': 'pdfTeX-1.40.21',\n",
" 'creator': 'LaTeX with hyperref',\n",
" 'creationdate': '2021-06-22T01:27:10+00:00',\n",
" 'source': './example_data/layout-parser-paper.pdf',\n",
" 'file_path': './example_data/layout-parser-paper.pdf',\n",
" 'total_pages': 16,\n",
" 'format': 'PDF 1.5',\n",
" 'title': '',\n",
" 'author': '',\n",
" 'subject': '',\n",
" 'keywords': '',\n",
" 'moddate': '2021-06-22T01:27:10+00:00',\n",
" 'trapped': '',\n",
" 'modDate': 'D:20210622012710Z',\n",
" 'creationDate': 'D:20210622012710Z',\n",
" 'page': 0}\n"
]
}
],
"source": [
"loader = PyMuPDF4LLMLoader(\n",
" \"./example_data/layout-parser-paper.pdf\",\n",
" mode=\"page\",\n",
")\n",
"docs = loader.load()\n",
"\n",
"print(len(docs))\n",
"pprint.pp(docs[0].metadata)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this mode the pdf is split by pages and the resulting Documents metadata contains the `page` (page number). But in some cases we could want to process the pdf as a single text flow (so we don't cut some paragraphs in half). In this case you can use the *single* mode :"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Extract the whole PDF as a single langchain Document object:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"1\n",
"{'producer': 'pdfTeX-1.40.21',\n",
" 'creator': 'LaTeX with hyperref',\n",
" 'creationdate': '2021-06-22T01:27:10+00:00',\n",
" 'source': './example_data/layout-parser-paper.pdf',\n",
" 'file_path': './example_data/layout-parser-paper.pdf',\n",
" 'total_pages': 16,\n",
" 'format': 'PDF 1.5',\n",
" 'title': '',\n",
" 'author': '',\n",
" 'subject': '',\n",
" 'keywords': '',\n",
" 'moddate': '2021-06-22T01:27:10+00:00',\n",
" 'trapped': '',\n",
" 'modDate': 'D:20210622012710Z',\n",
" 'creationDate': 'D:20210622012710Z'}\n"
]
}
],
"source": [
"loader = PyMuPDF4LLMLoader(\n",
" \"./example_data/layout-parser-paper.pdf\",\n",
" mode=\"single\",\n",
")\n",
"docs = loader.load()\n",
"\n",
"print(len(docs))\n",
"pprint.pp(docs[0].metadata)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Logically, in this mode, the `page` (page_number) metadata disappears. Here's how to clearly identify where pages end in the text flow :"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Add a custom *pages_delimiter* to identify where are ends of pages in *single* mode:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"loader = PyMuPDF4LLMLoader(\n",
" \"./example_data/layout-parser-paper.pdf\",\n",
" mode=\"single\",\n",
" pages_delimiter=\"\\n-------THIS IS A CUSTOM END OF PAGE-------\\n\\n\",\n",
")\n",
"docs = loader.load()\n",
"\n",
"part = docs[0].page_content[10663:11317]\n",
"print(part)\n",
"display(Markdown(part))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The default `pages_delimiter` is \\n-----\\n\\n.\n",
"But this could simply be \\n, or \\f to clearly indicate a page change, or \\<!-- PAGE BREAK --> for seamless injection in a Markdown viewer without a visual effect."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Extract images from the PDF"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can extract images from your PDFs (in text form) with a choice of three different solutions:\n",
"- rapidOCR (lightweight Optical Character Recognition tool)\n",
"- Tesseract (OCR tool with high precision)\n",
"- Multimodal language model\n",
"\n",
"The result is inserted at the end of text of the page."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Extract images from the PDF with rapidOCR:"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install -qU rapidocr-onnxruntime pillow"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders.parsers import RapidOCRBlobParser\n",
"\n",
"loader = PyMuPDF4LLMLoader(\n",
" \"./example_data/layout-parser-paper.pdf\",\n",
" mode=\"page\",\n",
" extract_images=True,\n",
" images_parser=RapidOCRBlobParser(),\n",
")\n",
"docs = loader.load()\n",
"\n",
"part = docs[5].page_content[1863:]\n",
"print(part)\n",
"display(Markdown(part))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Be careful, RapidOCR is designed to work with Chinese and English, not other languages."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Extract images from the PDF with Tesseract:"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install -qU pytesseract"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders.parsers import TesseractBlobParser\n",
"\n",
"loader = PyMuPDF4LLMLoader(\n",
" \"./example_data/layout-parser-paper.pdf\",\n",
" mode=\"page\",\n",
" extract_images=True,\n",
" images_parser=TesseractBlobParser(),\n",
")\n",
"docs = loader.load()\n",
"\n",
"print(docs[5].page_content[1863:])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Extract images from the PDF with multimodal model:"
]
},
{
"cell_type": "code",
"execution_count": 38,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install -qU langchain_openai"
]
},
{
"cell_type": "code",
"execution_count": 39,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 39,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import os\n",
"\n",
"from dotenv import load_dotenv\n",
"\n",
"load_dotenv()"
]
},
{
"cell_type": "code",
"execution_count": 40,
"metadata": {},
"outputs": [],
"source": [
"from getpass import getpass\n",
"\n",
"if not os.environ.get(\"OPENAI_API_KEY\"):\n",
" os.environ[\"OPENAI_API_KEY\"] = getpass(\"OpenAI API key =\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders.parsers import LLMImageBlobParser\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"loader = PyMuPDF4LLMLoader(\n",
" \"./example_data/layout-parser-paper.pdf\",\n",
" mode=\"page\",\n",
" extract_images=True,\n",
" images_parser=LLMImageBlobParser(\n",
" model=ChatOpenAI(model=\"gpt-4o-mini\", max_tokens=1024)\n",
" ),\n",
")\n",
"docs = loader.load()\n",
"\n",
"print(docs[5].page_content[1863:])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Extract tables from the PDF\n",
"\n",
"With PyMUPDF4LLM you can extract tables from your PDFs in *markdown* format :"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"loader = PyMuPDF4LLMLoader(\n",
" \"./example_data/layout-parser-paper.pdf\",\n",
" mode=\"page\",\n",
" # \"lines_strict\" is the default strategy and\n",
" # is the most accurate for tables with column and row lines,\n",
" # but may not work well with all documents.\n",
" # \"lines\" is a less strict strategy that may work better with\n",
" # some documents.\n",
" # \"text\" is the least strict strategy and may work better\n",
" # with documents that do not have tables with lines.\n",
" table_strategy=\"lines\",\n",
")\n",
"docs = loader.load()\n",
"\n",
"part = docs[4].page_content[3210:]\n",
"print(part)\n",
"display(Markdown(part))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Working with Files\n",
"\n",
"Many document loaders involve parsing files. The difference between such loaders usually stems from how the file is parsed, rather than how the file is loaded. For example, you can use `open` to read the binary content of either a PDF or a markdown file, but you need different parsing logic to convert that binary data into text.\n",
"\n",
"As a result, it can be helpful to decouple the parsing logic from the loading logic, which makes it easier to re-use a given parser regardless of how the data was loaded.\n",
"You can use this strategy to analyze different files, with the same parsing parameters."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import FileSystemBlobLoader\n",
"from langchain_community.document_loaders.generic import GenericLoader\n",
"from langchain_pymupdf4llm import PyMuPDF4LLMParser\n",
"\n",
"loader = GenericLoader(\n",
" blob_loader=FileSystemBlobLoader(\n",
" path=\"./example_data/\",\n",
" glob=\"*.pdf\",\n",
" ),\n",
" blob_parser=PyMuPDF4LLMParser(),\n",
")\n",
"docs = loader.load()\n",
"\n",
"part = docs[0].page_content[:562]\n",
"print(part)\n",
"display(Markdown(part))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all PyMuPDF4LLMLoader features and configurations head to the GitHub repository: https://github.com/lakinduboteju/langchain-pymupdf4llm"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.21"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

File diff suppressed because it is too large

View File

@@ -546,7 +546,7 @@
"id": "ud_cnGszb1i9"
},
"source": [
"Let's inspect a couple of reranked documents. We observe that the retriever still returns the relevant Langchain type [documents](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) but as part of the metadata field, we also recieve the `relevance_score` from the Ranking API."
"Let's inspect a couple of reranked documents. We observe that the retriever still returns the relevant Langchain type [documents](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) but as part of the metadata field, we also receive the `relevance_score` from the Ranking API."
]
},
{

View File

@@ -88,13 +88,13 @@
"from langchain_openai import OpenAIEmbeddings\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
"\n",
"documents = TextLoader(\"../../modules/state_of_the_union.txt\").load()\n",
"documents = TextLoader(\"../document_loaders/example_data/state_of_the_union.txt\").load()\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=100)\n",
"texts = text_splitter.split_documents(documents)\n",
"for idx, text in enumerate(texts):\n",
" text.metadata[\"id\"] = idx\n",
"\n",
"embedding = OpenAIEmbeddings(model=\"text-embedding-ada-002\")\n",
"embedding = OpenAIEmbeddings(model=\"text-embedding-3-large\")\n",
"retriever = FAISS.from_documents(texts, embedding).as_retriever(search_kwargs={\"k\": 20})"
]
},
@@ -114,25 +114,22 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 23,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2025-02-17 04:37:08,458 - INFO - HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Document 1:\n",
"\n",
"And with an unwavering resolve that freedom will always triumph over tyranny. \n",
"\n",
"Six days ago, Russias Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n",
"\n",
"He thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n",
"\n",
"He met the Ukrainian people.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 2:\n",
"\n",
"Together with our allies we are right now enforcing powerful economic sanctions. \n",
"\n",
"We are cutting off Russias largest banks from the international financial system. \n",
@@ -141,12 +138,22 @@
"\n",
"We are choking off Russias access to technology that will sap its economic strength and weaken its military for years to come.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 3:\n",
"Document 2:\n",
"\n",
"And tonight I am announcing that we will join our allies in closing off American air space to all Russian flights further isolating Russia and adding an additional squeeze on their economy. The Ruble has lost 30% of its value. \n",
"\n",
"The Russian stock market has lost 40% of its value and trading remains suspended. Russias economy is reeling and Putin alone is to blame.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 3:\n",
"\n",
"And now that he has acted the free world is holding him accountable. \n",
"\n",
"Along with twenty-seven members of the European Union including France, Germany, Italy, as well as countries like the United Kingdom, Canada, Japan, Korea, Australia, New Zealand, and many others, even Switzerland. \n",
"\n",
"We are inflicting pain on Russia and supporting the people of Ukraine. Putin is now isolated from the world more than ever. \n",
"\n",
"Together with our allies we are right now enforcing powerful economic sanctions.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 4:\n",
"\n",
"I spent countless hours unifying our European allies. We shared with the world in advance what we knew Putin was planning and precisely how he would try to falsely justify his aggression. \n",
@@ -167,50 +174,24 @@
"----------------------------------------------------------------------------------------------------\n",
"Document 6:\n",
"\n",
"And now that he has acted the free world is holding him accountable. \n",
"\n",
"Along with twenty-seven members of the European Union including France, Germany, Italy, as well as countries like the United Kingdom, Canada, Japan, Korea, Australia, New Zealand, and many others, even Switzerland. \n",
"\n",
"We are inflicting pain on Russia and supporting the people of Ukraine. Putin is now isolated from the world more than ever. \n",
"\n",
"Together with our allies we are right now enforcing powerful economic sanctions.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 7:\n",
"\n",
"To all Americans, I will be honest with you, as Ive always promised. A Russian dictator, invading a foreign country, has costs around the world. \n",
"\n",
"And Im taking robust action to make sure the pain of our sanctions is targeted at Russias economy. And I will use every tool at our disposal to protect American businesses and consumers. \n",
"\n",
"Tonight, I can announce that the United States has worked with 30 other countries to release 60 Million barrels of oil from reserves around the world.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 7:\n",
"\n",
"And with an unwavering resolve that freedom will always triumph over tyranny. \n",
"\n",
"Six days ago, Russias Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n",
"\n",
"He thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n",
"\n",
"He met the Ukrainian people.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 8:\n",
"\n",
"And we remain clear-eyed. The Ukrainians are fighting back with pure courage. But the next few days weeks, months, will be hard on them. \n",
"\n",
"Putin has unleashed violence and chaos. But while he may make gains on the battlefield he will pay a continuing high price over the long run. \n",
"\n",
"And a proud Ukrainian people, who have known 30 years of independence, have repeatedly shown that they will not tolerate anyone who tries to take their country backwards.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 9:\n",
"\n",
"Tonight I say to the Russian oligarchs and corrupt leaders who have bilked billions of dollars off this violent regime no more. \n",
"\n",
"The U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs. \n",
"\n",
"We are joining with our European allies to find and seize your yachts your luxury apartments your private jets. We are coming for your ill-begotten gains.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 10:\n",
"\n",
"America will lead that effort, releasing 30 Million barrels from our own Strategic Petroleum Reserve. And we stand ready to do more if necessary, unified with our allies. \n",
"\n",
"These steps will help blunt gas prices here at home. And I know the news about whats happening can seem alarming. \n",
"\n",
"But I want you to know that we are going to be okay. \n",
"\n",
"When the history of this era is written Putins war on Ukraine will have left Russia weaker and the rest of the world stronger.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 11:\n",
"\n",
"They keep moving. \n",
"\n",
"And the costs and the threats to America and the world keep rising. \n",
@@ -225,51 +206,39 @@
"\n",
"He rejected repeated efforts at diplomacy.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 12:\n",
"\n",
"Our forces are not going to Europe to fight in Ukraine, but to defend our NATO Allies in the event that Putin decides to keep moving west. \n",
"\n",
"For that purpose weve mobilized American ground forces, air squadrons, and ship deployments to protect NATO countries including Poland, Romania, Latvia, Lithuania, and Estonia. \n",
"\n",
"As I have made crystal clear the United States and our Allies will defend every inch of territory of NATO countries with the full force of our collective power.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 13:\n",
"Document 9:\n",
"\n",
"While it shouldnt have taken something so terrible for people around the world to see whats at stake now everyone sees it clearly. \n",
"\n",
"We see the unity among leaders of nations and a more unified Europe a more unified West. And we see unity among the people who are gathering in cities in large crowds around the world even in Russia to demonstrate their support for Ukraine.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 14:\n",
"Document 10:\n",
"\n",
"He met the Ukrainian people. \n",
"America will lead that effort, releasing 30 Million barrels from our own Strategic Petroleum Reserve. And we stand ready to do more if necessary, unified with our allies. \n",
"\n",
"From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. \n",
"These steps will help blunt gas prices here at home. And I know the news about whats happening can seem alarming. \n",
"\n",
"Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. \n",
"But I want you to know that we are going to be okay. \n",
"\n",
"In this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight.\n",
"When the history of this era is written Putins war on Ukraine will have left Russia weaker and the rest of the world stronger.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 15:\n",
"Document 11:\n",
"\n",
"In the battle between democracy and autocracy, democracies are rising to the moment, and the world is clearly choosing the side of peace and security. \n",
"Tonight I say to the Russian oligarchs and corrupt leaders who have bilked billions of dollars off this violent regime no more. \n",
"\n",
"This is a real test. Its going to take time. So let us continue to draw inspiration from the iron will of the Ukrainian people. \n",
"The U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs. \n",
"\n",
"To our fellow Ukrainian Americans who forge a deep bond that connects our two nations we stand with you. \n",
"\n",
"Putin may circle Kyiv with tanks, but he will never gain the hearts and souls of the Ukrainian people.\n",
"We are joining with our European allies to find and seize your yachts your luxury apartments your private jets. We are coming for your ill-begotten gains.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 16:\n",
"Document 12:\n",
"\n",
"Together with our allies we are providing support to the Ukrainians in their fight for freedom. Military assistance. Economic assistance. Humanitarian assistance. \n",
"And we remain clear-eyed. The Ukrainians are fighting back with pure courage. But the next few days weeks, months, will be hard on them. \n",
"\n",
"We are giving more than $1 Billion in direct assistance to Ukraine. \n",
"Putin has unleashed violence and chaos. But while he may make gains on the battlefield he will pay a continuing high price over the long run. \n",
"\n",
"And we will continue to aid the Ukrainian people as they defend their country and to help ease their suffering. \n",
"\n",
"Let me be clear, our forces are not engaged and will not engage in conflict with Russian forces in Ukraine.\n",
"And a proud Ukrainian people, who have known 30 years of independence, have repeatedly shown that they will not tolerate anyone who tries to take their country backwards.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 17:\n",
"Document 13:\n",
"\n",
"Let each of us here tonight in this Chamber send an unmistakable signal to Ukraine and to the world. \n",
"\n",
@@ -281,7 +250,17 @@
"\n",
"And the costs and the threats to America and the world keep rising.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 18:\n",
"Document 14:\n",
"\n",
"Together with our allies we are providing support to the Ukrainians in their fight for freedom. Military assistance. Economic assistance. Humanitarian assistance. \n",
"\n",
"We are giving more than $1 Billion in direct assistance to Ukraine. \n",
"\n",
"And we will continue to aid the Ukrainian people as they defend their country and to help ease their suffering. \n",
"\n",
"Let me be clear, our forces are not engaged and will not engage in conflict with Russian forces in Ukraine.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 15:\n",
"\n",
"It fueled our efforts to vaccinate the nation and combat COVID-19. It delivered immediate economic relief for tens of millions of Americans. \n",
"\n",
@@ -289,25 +268,56 @@
"\n",
"And as my Dad used to say, it gave people a little breathing room.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 16:\n",
"\n",
"In the battle between democracy and autocracy, democracies are rising to the moment, and the world is clearly choosing the side of peace and security. \n",
"\n",
"This is a real test. Its going to take time. So let us continue to draw inspiration from the iron will of the Ukrainian people. \n",
"\n",
"To our fellow Ukrainian Americans who forge a deep bond that connects our two nations we stand with you. \n",
"\n",
"Putin may circle Kyiv with tanks, but he will never gain the hearts and souls of the Ukrainian people.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 17:\n",
"\n",
"He met the Ukrainian people. \n",
"\n",
"From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. \n",
"\n",
"Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. \n",
"\n",
"In this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 18:\n",
"\n",
"Our forces are not going to Europe to fight in Ukraine, but to defend our NATO Allies in the event that Putin decides to keep moving west. \n",
"\n",
"For that purpose weve mobilized American ground forces, air squadrons, and ship deployments to protect NATO countries including Poland, Romania, Latvia, Lithuania, and Estonia. \n",
"\n",
"As I have made crystal clear the United States and our Allies will defend every inch of territory of NATO countries with the full force of our collective power.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 19:\n",
"\n",
"My administration is providing assistance with job training and housing, and now helping lower-income veterans get VA care debt-free. \n",
"I understand. \n",
"\n",
"Our troops in Iraq and Afghanistan faced many dangers. \n",
"I remember when my Dad had to leave our home in Scranton, Pennsylvania to find work. I grew up in a family where if the price of food went up, you felt it. \n",
"\n",
"One was stationed at bases and breathing in toxic smoke from “burn pits” that incinerated wastes of war—medical and hazard material, jet fuel, and more. \n",
"Thats why one of the first things I did as President was fight to pass the American Rescue Plan. \n",
"\n",
"When they came home, many of the worlds fittest and best trained warriors were never the same. \n",
"Because people were hurting. We needed to act, and we did. \n",
"\n",
"Headaches. Numbness. Dizziness.\n",
"Few pieces of legislation have done more in a critical moment in our history to lift us out of crisis.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 20:\n",
"\n",
"Every Administration says theyll do it, but we are actually doing it. \n",
"And as my Dad used to say, it gave people a little breathing room. \n",
"\n",
"We will buy American to make sure everything from the deck of an aircraft carrier to the steel on highway guardrails are made in America. \n",
"And unlike the $2 Trillion tax cut passed in the previous administration that benefitted the top 1% of Americans, the American Rescue Plan helped working people—and left no one behind. \n",
"\n",
"But to compete for the best jobs of the future, we also need to level the playing field with China and other competitors.\n"
"And it worked. It created jobs. Lots of jobs. \n",
"\n",
"In fact—our economy created over 6.5 Million new jobs just last year, more jobs created in one year \n",
"than ever before in the history of America.\n"
]
}
],
@@ -456,34 +466,35 @@
"----------------------------------------------------------------------------------------------------\n",
"Document 2:\n",
"\n",
"As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \n",
"\n",
"While it often appears that we never agree, that isnt true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 3:\n",
"\n",
"A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since shes been nominated, shes received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n",
"\n",
"And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 3:\n",
"\n",
"We cannot let this happen. \n",
"\n",
"Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while youre at it, pass the Disclose Act so Americans can know who is funding our elections. \n",
"\n",
"Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 4:\n",
"\n",
"He met the Ukrainian people. \n",
"As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \n",
"\n",
"From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. \n",
"\n",
"Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. \n",
"\n",
"In this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight.\n",
"While it often appears that we never agree, that isnt true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 5:\n",
"\n",
"But that trickle-down theory led to weaker economic growth, lower wages, bigger deficits, and the widest gap between those at the top and everyone else in nearly a century. \n",
"Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n",
"\n",
"Vice President Harris and I ran for office with a new economic vision for America. \n",
"Last year COVID-19 kept us apart. This year we are finally together again. \n",
"\n",
"Invest in America. Educate Americans. Grow the workforce. Build the economy from the bottom up \n",
"and the middle out, not from the top down.\n",
"Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n",
"\n",
"With a duty to one another to the American people to the Constitution. \n",
"\n",
"And with an unwavering resolve that freedom will always triumph over tyranny.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 6:\n",
"\n",
@@ -501,6 +512,14 @@
"----------------------------------------------------------------------------------------------------\n",
"Document 7:\n",
"\n",
"So thats my plan. It will grow the economy and lower costs for families. \n",
"\n",
"So what are we waiting for? Lets get this done. And while youre at it, confirm my nominees to the Federal Reserve, which plays a critical role in fighting inflation. \n",
"\n",
"My plan will not only lower costs to give families a fair shot, it will lower the deficit.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 8:\n",
"\n",
"I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves. \n",
"\n",
"Ive worked on these issues a long time. \n",
@@ -509,7 +528,15 @@
"\n",
"So lets not abandon our streets. Or choose between safety and equal justice.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 8:\n",
"Document 9:\n",
"\n",
"So lets not abandon our streets. Or choose between safety and equal justice. \n",
"\n",
"Lets come together to protect our communities, restore trust, and hold law enforcement accountable. \n",
"\n",
"Thats why the Justice Department required body cameras, banned chokeholds, and restricted no-knock warrants for its officers.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 10:\n",
"\n",
"As Ive told Xi Jinping, it is never a good bet to bet against the American people. \n",
"\n",
@@ -517,60 +544,18 @@
"\n",
"And well do it all to withstand the devastating effects of the climate crisis and promote environmental justice.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 9:\n",
"\n",
"Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n",
"\n",
"Last year COVID-19 kept us apart. This year we are finally together again. \n",
"\n",
"Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n",
"\n",
"With a duty to one another to the American people to the Constitution. \n",
"\n",
"And with an unwavering resolve that freedom will always triumph over tyranny.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 10:\n",
"\n",
"As Ohio Senator Sherrod Brown says, “Its time to bury the label “Rust Belt.” \n",
"\n",
"Its time. \n",
"\n",
"But with all the bright spots in our economy, record job growth and higher wages, too many families are struggling to keep up with the bills. \n",
"\n",
"Inflation is robbing them of the gains they might otherwise feel. \n",
"\n",
"I get it. Thats why my top priority is getting prices under control.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 11:\n",
"\n",
"Im also calling on Congress: pass a law to make sure veterans devastated by toxic exposures in Iraq and Afghanistan finally get the benefits and comprehensive health care they deserve. \n",
"Lets pass the Paycheck Fairness Act and paid leave. \n",
"\n",
"And fourth, lets end cancer as we know it. \n",
"Raise the minimum wage to $15 an hour and extend the Child Tax Credit, so no one has to raise a family in poverty. \n",
"\n",
"This is personal to me and Jill, to Kamala, and to so many of you. \n",
"Lets increase Pell Grants and increase our historic support of HBCUs, and invest in what Jill—our First Lady who teaches full-time—calls Americas best-kept secret: community colleges. \n",
"\n",
"Cancer is the #2 cause of death in Americasecond only to heart disease.\n",
"And lets pass the PRO Act when a majority of workers want to form a union—they shouldnt be stopped.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 12:\n",
"\n",
"Headaches. Numbness. Dizziness. \n",
"\n",
"A cancer that would put them in a flag-draped coffin. \n",
"\n",
"I know. \n",
"\n",
"One of those soldiers was my son Major Beau Biden. \n",
"\n",
"We dont know for sure if a burn pit was the cause of his brain cancer, or the diseases of so many of our troops. \n",
"\n",
"But Im committed to finding out everything we can. \n",
"\n",
"Committed to military families like Danielle Robinson from Ohio. \n",
"\n",
"The widow of Sergeant First Class Heath Robinson.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 13:\n",
"\n",
"He will never extinguish their love of freedom. He will never weaken the resolve of the free world. \n",
"\n",
"We meet tonight in an America that has lived through two of the hardest years this nation has ever faced. \n",
@@ -581,73 +566,105 @@
"\n",
"I understand.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 13:\n",
"\n",
"Well I know this nation. \n",
"\n",
"We will meet the test. \n",
"\n",
"To protect freedom and liberty, to expand fairness and opportunity. \n",
"\n",
"We will save democracy. \n",
"\n",
"As hard as these times have been, I am more optimistic about America today than I have been my whole life. \n",
"\n",
"Because I see the future that is within our grasp. \n",
"\n",
"Because I know there is simply nothing beyond our capacity. \n",
"\n",
"We are the only nation on Earth that has always turned every crisis we have faced into an opportunity.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 14:\n",
"\n",
"When we invest in our workers, when we build the economy from the bottom up and the middle out together, we can do something we havent done in a long time: build a better America. \n",
"If we want to go forward—not backward—we must protect access to health care. Preserve a womans right to choose. And lets continue to advance maternal health care in America. \n",
"\n",
"For more than two years, COVID-19 has impacted every decision in our lives and the life of the nation. \n",
"\n",
"And I know youre tired, frustrated, and exhausted. \n",
"\n",
"But I also know this.\n",
"And for our LGBTQ+ Americans, lets finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 15:\n",
"\n",
"My plan to fight inflation will lower your costs and lower the deficit. \n",
"He met the Ukrainian people. \n",
"\n",
"17 Nobel laureates in economics say my plan will ease long-term inflationary pressures. Top business leaders and most Americans support my plan. And heres the plan: \n",
"From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. \n",
"\n",
"First cut the cost of prescription drugs. Just look at insulin. One in ten Americans has diabetes. In Virginia, I met a 13-year-old boy named Joshua Davis.\n",
"Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. \n",
"\n",
"In this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 16:\n",
"\n",
"And soon, well strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. \n",
"Tonight I say to the Russian oligarchs and corrupt leaders who have bilked billions of dollars off this violent regime no more. \n",
"\n",
"So tonight Im offering a Unity Agenda for the Nation. Four big things we can do together. \n",
"The U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs. \n",
"\n",
"First, beat the opioid epidemic. \n",
"\n",
"There is so much we can do. Increase funding for prevention, treatment, harm reduction, and recovery.\n",
"We are joining with our European allies to find and seize your yachts your luxury apartments your private jets. We are coming for your ill-begotten gains.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 17:\n",
"\n",
"My plan will not only lower costs to give families a fair shot, it will lower the deficit. \n",
"Its not only the right thing to do—its the economically smart thing to do. \n",
"\n",
"The previous Administration not only ballooned the deficit with tax cuts for the very wealthy and corporations, it undermined the watchdogs whose job was to keep pandemic relief funds from being wasted. \n",
"Thats why immigration reform is supported by everyone from labor unions to religious leaders to the U.S. Chamber of Commerce. \n",
"\n",
"But in my administration, the watchdogs have been welcomed back. \n",
"Lets get it done once and for all. \n",
"\n",
"Were going after the criminals who stole billions in relief money meant for small businesses and millions of Americans.\n",
"Advancing liberty and justice also requires protecting the rights of women. \n",
"\n",
"The constitutional right affirmed in Roe v. Wade—standing precedent for half a century—is under attack as never before.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 18:\n",
"\n",
"So lets not abandon our streets. Or choose between safety and equal justice. \n",
"Smartphones. The Internet. Technology we have yet to invent. \n",
"\n",
"Lets come together to protect our communities, restore trust, and hold law enforcement accountable. \n",
"But thats just the beginning. \n",
"\n",
"Thats why the Justice Department required body cameras, banned chokeholds, and restricted no-knock warrants for its officers.\n",
"Intels CEO, Pat Gelsinger, who is here tonight, told me they are ready to increase their investment from \n",
"$20 billion to $100 billion. \n",
"\n",
"That would be one of the biggest investments in manufacturing in American history. \n",
"\n",
"And all theyre waiting for is for you to pass this bill. \n",
"\n",
"So lets not wait any longer. Send it to my desk. Ill sign it. \n",
"\n",
"And we will really take off.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 19:\n",
"\n",
"I understand. \n",
"And as my Dad used to say, it gave people a little breathing room. \n",
"\n",
"I remember when my Dad had to leave our home in Scranton, Pennsylvania to find work. I grew up in a family where if the price of food went up, you felt it. \n",
"And unlike the $2 Trillion tax cut passed in the previous administration that benefitted the top 1% of Americans, the American Rescue Plan helped working people—and left no one behind. \n",
"\n",
"Thats why one of the first things I did as President was fight to pass the American Rescue Plan. \n",
"And it worked. It created jobs. Lots of jobs. \n",
"\n",
"Because people were hurting. We needed to act, and we did. \n",
"\n",
"Few pieces of legislation have done more in a critical moment in our history to lift us out of crisis.\n",
"In fact—our economy created over 6.5 Million new jobs just last year, more jobs created in one year \n",
"than ever before in the history of America.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 20:\n",
"\n",
"The only nation that can be defined by a single word: possibilities. \n",
"\n",
"So on this night, in our 245th year as a nation, I have come to report on the State of the Union. \n",
"\n",
"And my report is this: the State of the Union is strong—because you, the American people, are strong. \n",
"\n",
"We are stronger today than we were a year ago. \n",
"\n",
"And we will be stronger a year from now than we are today. \n",
"\n",
"Now is our moment to meet and overcome the challenges of our time. \n",
"\n",
"And we will, as one people. \n",
"\n",
"One America. \n",
"\n",
"The United States of America. \n",
"\n",
"May God bless you all. May God protect our troops.\n"
"One America.\n"
]
}
],
@@ -666,14 +683,14 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 20,
"metadata": {},
"outputs": [],
"source": [
"from langchain.retrievers.contextual_compression import ContextualCompressionRetriever\n",
"from langchain_community.document_compressors.rankllm_rerank import RankLLMRerank\n",
"\n",
"compressor = RankLLMRerank(top_n=3, model=\"gpt\", gpt_model=\"gpt-3.5-turbo\")\n",
"compressor = RankLLMRerank(top_n=3, model=\"gpt\", gpt_model=\"gpt-4o-mini\")\n",
"compression_retriever = ContextualCompressionRetriever(\n",
" base_compressor=compressor, base_retriever=retriever\n",
")"
@@ -702,9 +719,11 @@
"----------------------------------------------------------------------------------------------------\n",
"Document 3:\n",
"\n",
"As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \n",
"We cannot let this happen. \n",
"\n",
"While it often appears that we never agree, that isnt true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice.\n"
"Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while youre at it, pass the Disclose Act so Americans can know who is funding our elections. \n",
"\n",
"Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.\n"
]
},
{
@@ -729,17 +748,29 @@
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": 22,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/tmp/ipykernel_2153001/1437145854.py:10: LangChainDeprecationWarning: The method `Chain.__call__` was deprecated in langchain 0.1.0 and will be removed in 1.0. Use :meth:`~invoke` instead.\n",
" chain({\"query\": query})\n",
"2025-02-17 04:30:00,016 - INFO - HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
" 0%| | 0/1 [00:00<?, ?it/s]2025-02-17 04:30:01,649 - INFO - HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
"100%|██████████| 1/1 [00:01<00:00, 1.63s/it]\n",
"2025-02-17 04:30:02,415 - INFO - HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
]
},
{
"data": {
"text/plain": [
"{'query': 'What did the president say about Ketanji Brown Jackson',\n",
" 'result': \"The President mentioned that Ketanji Brown Jackson is one of the nation's top legal minds and that she will continue Justice Breyer's legacy of excellence. He highlighted her background as a former top litigator in private practice and a former federal public defender, as well as coming from a family of public school educators and police officers. He also mentioned that since her nomination, she has received broad support from various groups, including the Fraternal Order of Police and former judges appointed by Democrats and Republicans.\"}"
" 'result': \"The President mentioned that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and comes from a family of public school educators and police officers. He also highlighted her as a consensus builder and noted the broad range of support she has received since being nominated.\"}"
]
},
"execution_count": 14,
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}

View File

@@ -195,7 +195,7 @@
"id": "96ed13d4",
"metadata": {},
"source": [
"Instead of `model_id`, you can also pass the `deployment_id` of the previously tuned model. The entire model tuning workflow is described [here](https://ibm.github.io/watsonx-ai-python-sdk/pt_working_with_class_and_prompt_tuner.html)."
"Instead of `model_id`, you can also pass the `deployment_id` of the previously tuned model. The entire model tuning workflow is described in [Working with TuneExperiment and PromptTuner](https://ibm.github.io/watsonx-ai-python-sdk/pt_tune_experiment_run.html)."
]
},
{
@@ -420,7 +420,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": "langchain_ibm",
"language": "python",
"name": "python3"
},

View File

@@ -65,7 +65,7 @@
"metadata": {},
"outputs": [],
"source": [
"!CMAKE_ARGS=\"-DLLAMA_CUBLAS=on\" FORCE_CMAKE=1 pip install llama-cpp-python"
"!CMAKE_ARGS=\"-DGGML_CUDA=on\" FORCE_CMAKE=1 pip install llama-cpp-python"
]
},
{
@@ -81,7 +81,7 @@
"metadata": {},
"outputs": [],
"source": [
"!CMAKE_ARGS=\"-DLLAMA_CUBLAS=on\" FORCE_CMAKE=1 pip install --upgrade --force-reinstall llama-cpp-python --no-cache-dir"
"!CMAKE_ARGS=\"-DGGML_CUDA=on\" FORCE_CMAKE=1 pip install --upgrade --force-reinstall llama-cpp-python --no-cache-dir"
]
},
{
@@ -149,9 +149,9 @@
"\n",
"```\n",
"set FORCE_CMAKE=1\n",
"set CMAKE_ARGS=-DLLAMA_CUBLAS=OFF\n",
"set CMAKE_ARGS=-DGGML_CUDA=OFF\n",
"```\n",
"If you have an NVIDIA GPU make sure `DLLAMA_CUBLAS` is set to `ON`\n",
"If you have an NVIDIA GPU make sure `DGGML_CUDA` is set to `ON`\n",
"\n",
"#### Compiling and installing\n",
"\n",

View File

@@ -221,7 +221,7 @@
"source": [
"## JSONFormer LLM Wrapper\n",
"\n",
"Let's try that again, now providing a the Action input's JSON Schema to the model."
"Let's try that again, now providing the Action input's JSON Schema to the model."
]
},
{

View File

@@ -104,8 +104,8 @@
"\n",
"import boto3\n",
"from langchain.chains.question_answering import load_qa_chain\n",
"from langchain_community.llms import SagemakerEndpoint\n",
"from langchain_community.llms.sagemaker_endpoint import LLMContentHandler\n",
"from langchain_aws.llms import SagemakerEndpoint\n",
"from langchain_aws.llms.sagemaker_endpoint import LLMContentHandler\n",
"from langchain_core.prompts import PromptTemplate\n",
"\n",
"query = \"\"\"How long was Elizabeth hospitalized?\n",
@@ -174,8 +174,8 @@
"from typing import Dict\n",
"\n",
"from langchain.chains.question_answering import load_qa_chain\n",
"from langchain_community.llms import SagemakerEndpoint\n",
"from langchain_community.llms.sagemaker_endpoint import LLMContentHandler\n",
"from langchain_aws.llms import SagemakerEndpoint\n",
"from langchain_aws.llms.sagemaker_endpoint import LLMContentHandler\n",
"from langchain_core.prompts import PromptTemplate\n",
"\n",
"query = \"\"\"How long was Elizabeth hospitalized?\n",

View File

@@ -0,0 +1,14 @@
# Abso
[Abso](https://abso.ai/#router) is an open-source LLM proxy that automatically routes requests between fast and slow models based on prompt complexity. It uses various heuristics to choose the proper model. It's very fast and has low latency.
## Installation and setup
```bash
pip install langchain-abso
```
## Chat Model
See usage details [here](/docs/integrations/chat/abso)
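A minimal usage sketch (the `ChatAbso` class name and its `fast_model`/`slow_model` parameters are assumptions here; the usage page above documents the actual API):
```python
from langchain_abso import ChatAbso

# Assumed API: Abso routes each request to the fast or slow model
# based on prompt complexity.
llm = ChatAbso(fast_model="gpt-4o", slow_model="o3-mini")

response = llm.invoke("What is the capital of France?")
print(response.content)
```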

View File

@@ -0,0 +1,82 @@
# ADS4GPTs
> [ADS4GPTs](https://www.ads4gpts.com/) is building the open monetization backbone of the AI-Native internet. It helps AI applications monetize through advertising with a UX- and privacy-first approach.
## Installation and Setup
### Using pip
You can install the package directly from PyPI:
```bash
pip install ads4gpts-langchain
```
### From Source
Alternatively, install from source:
```bash
git clone https://github.com/ADS4GPTs/ads4gpts.git
cd ads4gpts/libs/python-sdk/ads4gpts-langchain
pip install .
```
## Prerequisites
- Python 3.11+
- ADS4GPTs API Key ([Obtain API Key](https://www.ads4gpts.com))
## Environment Variables
Set the following environment variables for API authentication:
```bash
export ADS4GPTS_API_KEY='your-ads4gpts-api-key'
```
Alternatively, API keys can be passed directly when initializing classes or stored in a `.env` file.
## Tools
ADS4GPTs provides two main tools for monetization:
### Ads4gptsInlineSponsoredResponseTool
This tool fetches native, sponsored responses that can be seamlessly integrated within your AI application's outputs.
```python
from ads4gpts_langchain import Ads4gptsInlineSponsoredResponseTool
```
### Ads4gptsSuggestedPromptTool
Generates sponsored prompt suggestions to enhance user engagement and provide monetization opportunities.
```python
from ads4gpts_langchain import Ads4gptsSuggestedPromptTool
```
### Ads4gptsInlineConversationalTool
Delivers conversational sponsored content that naturally fits within chat interfaces and dialogs.
```python
from ads4gpts_langchain import Ads4gptsInlineConversationalTool
```
### Ads4gptsInlineBannerTool
Provides inline banner advertisements that can be displayed within your AI application's response.
```python
from ads4gpts_langchain import Ads4gptsInlineBannerTool
```
### Ads4gptsSuggestedBannerTool
Generates banner advertisement suggestions that can be presented to users as recommended content.
```python
from ads4gpts_langchain import Ads4gptsSuggestedBannerTool
```
## Toolkit
The `Ads4gptsToolkit` combines these tools for convenient access in LangChain applications.
```python
from ads4gpts_langchain import Ads4gptsToolkit
```
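A hypothetical sketch of wiring the toolkit into an agent's tool list (passing the API key directly is an assumption; it can also be read from the `ADS4GPTS_API_KEY` environment variable):
```python
from ads4gpts_langchain import Ads4gptsToolkit

# Assumed constructor argument; the key may instead be read from the
# ADS4GPTS_API_KEY environment variable.
toolkit = Ads4gptsToolkit(ads4gpts_api_key="your-ads4gpts-api-key")

# get_tools() is the standard LangChain toolkit method for listing tools.
tools = toolkit.get_tools()
for tool in tools:
    print(tool.name)
```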

View File

@@ -14,20 +14,34 @@ blogs, or knowledge bases.
## Installation and Setup
- Install the Apify API client for Python with `pip install apify-client`
- Install the LangChain Apify package for Python with:
```bash
pip install langchain-apify
```
- Get your [Apify API token](https://console.apify.com/account/integrations) and either set it as
an environment variable (`APIFY_API_TOKEN`) or pass it to the `ApifyWrapper` as `apify_api_token` in the constructor.
an environment variable (`APIFY_API_TOKEN`) or pass it as `apify_api_token` in the constructor.
## Tool
## Utility
You can use the `ApifyActorsTool` to use Apify Actors with agents.
```python
from langchain_apify import ApifyActorsTool
```
See [this notebook](/docs/integrations/tools/apify_actors) for example usage and a full example of a tool-calling agent with LangGraph in the [Apify LangGraph agent Actor template](https://apify.com/templates/python-langgraph).
For more information on how to use this tool, visit [the Apify integration documentation](https://docs.apify.com/platform/integrations/langgraph).
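A short sketch of calling an Actor as a tool (the `actor_id` keyword, the Actor ID, and the run input are illustrative assumptions; the notebook linked above shows the actual usage):
```python
from langchain_apify import ApifyActorsTool

# Wrap an Apify Actor as a LangChain tool (Actor ID is illustrative).
browser = ApifyActorsTool(actor_id="apify/rag-web-browser")

# Invoke the Actor with its run input and read the items it returns.
results = browser.invoke(
    {"run_input": {"query": "what is LangGraph?", "maxResults": 3}}
)
print(results)
```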
## Wrapper
You can use the `ApifyWrapper` to run Actors on the Apify platform.
```python
from langchain_community.utilities import ApifyWrapper
from langchain_apify import ApifyWrapper
```
For more information on this wrapper, see [the API reference](https://python.langchain.com/api_reference/community/utilities/langchain_community.utilities.apify.ApifyWrapper.html).
For more information on how to use this wrapper, see [the Apify integration documentation](https://docs.apify.com/platform/integrations/langchain).
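A sketch of running an Actor and loading its results as documents, assuming `langchain_apify`'s `ApifyWrapper` keeps the same `call_actor` interface as the older community version:
```python
from langchain_apify import ApifyWrapper
from langchain_core.documents import Document

apify = ApifyWrapper()  # reads APIFY_API_TOKEN from the environment

# Run an Actor and map each dataset item to a LangChain Document
# (the Actor ID and item fields are illustrative).
loader = apify.call_actor(
    actor_id="apify/website-content-crawler",
    run_input={"startUrls": [{"url": "https://python.langchain.com/"}]},
    dataset_mapping_function=lambda item: Document(
        page_content=item["text"] or "", metadata={"source": item["url"]}
    ),
)
documents = loader.load()
```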
## Document loader
@@ -35,7 +49,10 @@ For more information on this wrapper, see [the API reference](https://python.lan
You can also use our `ApifyDatasetLoader` to get data from an Apify dataset.
```python
from langchain_community.document_loaders import ApifyDatasetLoader
from langchain_apify import ApifyDatasetLoader
```
For a more detailed walkthrough of this loader, see [this notebook](/docs/integrations/document_loaders/apify_dataset).
Source code for this integration can be found in the [LangChain Apify repository](https://github.com/apify/langchain-apify).

View File

@@ -0,0 +1,59 @@
# Azure AI
All functionality related to [Azure AI Foundry](https://learn.microsoft.com/en-us/azure/developer/python/get-started) and its related projects.
Integration packages for Azure AI, Dynamic Sessions, and SQL Server are maintained in
the [langchain-azure](https://github.com/langchain-ai/langchain-azure) repository.
## Chat models
We recommend developers start with the `langchain-azure-ai` package to access all the models available in [Azure AI Foundry](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/model-catalog-overview).
### Azure AI Chat Completions Model
Access models like Azure OpenAI, DeepSeek R1, Cohere, Phi and Mistral using the `AzureAIChatCompletionsModel` class.
```bash
pip install -U langchain-azure-ai
```
Configure your API key and Endpoint.
```bash
export AZURE_INFERENCE_CREDENTIAL=your-api-key
export AZURE_INFERENCE_ENDPOINT=your-endpoint
```
```python
from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel
llm = AzureAIChatCompletionsModel(
model_name="gpt-4o",
api_version="2024-05-01-preview",
)
llm.invoke('Tell me a joke and include some emojis')
```
## Embedding models
### Azure AI model inference for embeddings
```bash
pip install -U langchain-azure-ai
```
Configure your API key and Endpoint.
```bash
export AZURE_INFERENCE_CREDENTIAL=your-api-key
export AZURE_INFERENCE_ENDPOINT=your-endpoint
```
```python
from langchain_azure_ai.embeddings import AzureAIEmbeddingsModel
embed_model = AzureAIEmbeddingsModel(
model_name="text-embedding-ada-002"
)
```
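Once configured, the model behaves like any other LangChain `Embeddings` implementation. Continuing from the snippet above:
```python
# Embed a single query string with the model configured above.
vector = embed_model.embed_query("Hello, world!")
print(len(vector))  # dimensionality of the returned embedding
```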

View File

@@ -0,0 +1,27 @@
# Cognee
Cognee implements scalable, modular ECL (Extract, Cognify, Load) pipelines that allow
you to interconnect and retrieve past conversations, documents, and audio
transcriptions while reducing hallucinations, developer effort, and cost.
Cognee merges graph and vector databases to uncover hidden relationships and new
patterns in your data. You can automatically model, load and retrieve entities and
objects representing your business domain and analyze their relationships, uncovering
insights that neither vector stores nor graph stores alone can provide.
Try it in a Google Colab <a href="https://colab.research.google.com/drive/1g-Qnx6l_ecHZi0IOw23rg0qC4TYvEvWZ?usp=sharing">notebook</a> or have a look at the <a href="https://docs.cognee.ai">documentation</a>.
If you have questions, join the cognee <a href="https://discord.gg/NQPKmU5CCg">Discord</a> community.
Have you seen cognee's <a href="https://github.com/topoteretes/cognee-starter">starter repo</a>? Check it out!
## Installation and Setup
```bash
pip install langchain-cognee
```
## Retrievers
See details on available retrievers [here](/docs/integrations/retrievers/cognee).
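A hypothetical sketch of constructing a retriever (the `CogneeRetriever` name and its parameters are assumptions; the linked page documents the actual API):
```python
from langchain_cognee import CogneeRetriever

# Hypothetical class and parameters; check the linked retriever docs.
retriever = CogneeRetriever(
    llm_api_key="your-openai-api-key",
    dataset_name="my_dataset",
    k=3,
)
docs = retriever.invoke("What do my documents say about X?")
```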

View File

@@ -0,0 +1,110 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Contextual AI\n",
"\n",
"Contextual AI is a platform that offers state-of-the-art Retrieval-Augmented Generation (RAG) technology for enterprise applications. Our platformant models helps innovative teams build production-ready AI applications that can process millions of pages of documents with exceptional accuracy.\n",
"\n",
"## Grounded Language Model (GLM)\n",
"\n",
"The Grounded Language Model (GLM) is specifically engineered to minimize hallucinations in RAG and agentic applications. The GLM achieves:\n",
"\n",
"- State-of-the-art performance on the FACTS benchmark\n",
"- Responses strictly grounded in provided knowledge sources\n",
"\n",
"## Using Contextual AI with LangChain\n",
"\n",
"See details [here](/docs/integrations/chat/contextual).\n",
"\n",
"This integration allows you to easily incorporate Contextual AI's GLM into your LangChain workflows. Whether you're building applications for regulated industries or security-conscious environments, Contextual AI provides the grounded and reliable responses your use cases demand.\n",
"\n",
"Get started with a free trial today and experience the most grounded language model for enterprise AI applications."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "y8ku6X96sebl"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"According to the information available, there are two types of cats in the world:\n",
"\n",
"1. Good cats\n",
"2. Best cats\n"
]
}
],
"source": [
"import getpass\n",
"import os\n",
"\n",
"from langchain_contextual import ChatContextual\n",
"\n",
"# Set credentials\n",
"if not os.getenv(\"CONTEXTUAL_AI_API_KEY\"):\n",
" os.environ[\"CONTEXTUAL_AI_API_KEY\"] = getpass.getpass(\n",
" \"Enter your Contextual API key: \"\n",
" )\n",
"\n",
"# intialize Contextual llm\n",
"llm = ChatContextual(\n",
" model=\"v1\",\n",
" api_key=\"\",\n",
")\n",
"# include a system prompt (optional)\n",
"system_prompt = \"You are a helpful assistant that uses all of the provided knowledge to answer the user's query to the best of your ability.\"\n",
"\n",
"# provide your own knowledge from your knowledge-base here in an array of string\n",
"knowledge = [\n",
" \"There are 2 types of dogs in the world: good dogs and best dogs.\",\n",
" \"There are 2 types of cats in the world: good cats and best cats.\",\n",
"]\n",
"\n",
"# create your message\n",
"messages = [\n",
" (\"human\", \"What type of cats are there in the world and what are the types?\"),\n",
"]\n",
"\n",
"# invoke the GLM by providing the knowledge strings, optional system prompt\n",
"# if you want to turn off the GLM's commentary, pass True to the `avoid_commentary` argument\n",
"ai_msg = llm.invoke(\n",
" messages, knowledge=knowledge, system_prompt=system_prompt, avoid_commentary=True\n",
")\n",
"\n",
"print(ai_msg.content)"
]
}
],
"metadata": {
"colab": {
"provenance": []
},
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.8"
}
},
"nbformat": 4,
"nbformat_minor": 1
}

View File

@@ -103,14 +103,7 @@ See [MLflow LangChain Integration](/docs/integrations/providers/mlflow_tracking)
SQLDatabase
-----------
You can connect to Databricks SQL using the SQLDatabase wrapper of LangChain.
```
from langchain.sql_database import SQLDatabase
db = SQLDatabase.from_databricks(catalog="samples", schema="nyctaxi")
```
See [Databricks SQL Agent](https://docs.databricks.com/en/large-language-models/langchain.html#databricks-sql-agent) for how to connect Databricks SQL with your LangChain Agent as a powerful querying tool.
To connect to Databricks SQL or query structured data, see the [Databricks structured retriever tool documentation](https://docs.databricks.com/en/generative-ai/agent-framework/structured-retrieval-tools.html#table-query-tool), and to create an agent using the SQL UDF created above, see [Databricks UC Integration](https://docs.unitycatalog.io/ai/integrations/langchain/).
Open Models
-----------

View File

@@ -0,0 +1,16 @@
# Deeplake
[Deeplake](https://www.deeplake.ai/) is a database optimized for AI and deep learning
applications.
## Installation and Setup
```bash
pip install langchain-deeplake
```
## Vector stores
See details on available vector stores
[here](/docs/integrations/vectorstores/activeloop_deeplake).
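A hypothetical sketch (the `DeeplakeVectorStore` name, import path, and parameters are assumptions; the linked page documents the actual API):
```python
from langchain_deeplake import DeeplakeVectorStore
from langchain_openai import OpenAIEmbeddings

# Hypothetical names and parameters; check the linked vector store docs.
store = DeeplakeVectorStore(
    dataset_path="./my_deeplake_dataset",
    embedding_function=OpenAIEmbeddings(),
)
store.add_texts(["Deep Lake stores embeddings for AI workloads."])
print(store.similarity_search("What does Deep Lake store?", k=1))
```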

View File

@@ -0,0 +1,65 @@
# Discord
> [Discord](https://discord.com/) is an instant messaging, voice, and video communication platform widely used by communities of all types.
## Installation and Setup
Install the `langchain-discord-shikenso` package:
```bash
pip install langchain-discord-shikenso
```
You must provide a bot token via an environment variable so the tools can authenticate with the Discord API:
```bash
export DISCORD_BOT_TOKEN="your-discord-bot-token"
```
If `DISCORD_BOT_TOKEN` is not set, the tools will raise a `ValueError` when instantiated.
---
## Tools
Below is a snippet showing how you can read and send messages in Discord. For more details, see the [documentation for Discord tools](/docs/integrations/tools/discord).
```python
from langchain_discord.tools.discord_read_messages import DiscordReadMessages
from langchain_discord.tools.discord_send_messages import DiscordSendMessage
# Create tool instances
read_tool = DiscordReadMessages()
send_tool = DiscordSendMessage()
# Example: Read the last 3 messages from channel 1234567890
read_result = read_tool({"channel_id": "1234567890", "limit": 3})
print(read_result)
# Example: Send a message to channel 1234567890
send_result = send_tool({"channel_id": "1234567890", "message": "Hello from Markdown example!"})
print(send_result)
```
---
## Toolkit
`DiscordToolkit` groups multiple Discord-related tools into a single interface. For a usage example, see [the Discord toolkit docs](/docs/integrations/tools/discord).
```python
from langchain_discord.toolkits import DiscordToolkit
toolkit = DiscordToolkit()
tools = toolkit.get_tools()
read_tool = tools[0] # DiscordReadMessages
send_tool = tools[1] # DiscordSendMessage
```
---
## Future Integrations
Additional integrations (e.g., document loaders, chat loaders) could be added for Discord.
Check the [Discord Developer Docs](https://discord.com/developers/docs/intro) for more information, and watch for updates or advanced usage examples in the [langchain_discord GitHub repo](https://github.com/Shikenso-Analytics/langchain-discord).

View File

@@ -1,4 +1,4 @@
# Discord
# Discord (community loader)
>[Discord](https://discord.com/) is a VoIP and instant messaging social platform. Users have the ability to communicate
> with voice calls, video calls, text messaging, media and files in private chats or as part of communities called

View File

@@ -1,34 +0,0 @@
# FalkorDB
>[FalkorDB](https://www.falkordb.com/) is the creator of [FalkorDB](https://docs.falkordb.com/),
> a low-latency Graph Database that delivers knowledge to GenAI.
## Installation and Setup
See [installation instructions here](/docs/integrations/graphs/falkordb/).
## Graphs
See a [usage example](/docs/integrations/graphs/falkordb).
```python
from langchain_community.graphs import FalkorDBGraph
```
## Chains
See a [usage example](/docs/integrations/graphs/falkordb).
```python
from langchain_community.chains.graph_qa.falkordb import FalkorDBQAChain
```
## Memory
See a [usage example](/docs/integrations/memory/falkordb_chat_message_history).
```python
from langchain_falkordb import FalkorDBChatMessageHistory
```

View File

@@ -27,5 +27,5 @@ from langchain_community.agent_toolkits.gitlab.toolkit import GitLabToolkit
Tool for interacting with the GitLab API.
```python
from langchain_community.tools.github.tool import GitHubAction
from langchain_community.tools.gitlab.tool import GitLabAction
```
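A sketch of constructing a single action, assuming `GitLabAction` mirrors the GitHub pattern of one tool per API `mode`, backed by a `GitLabAPIWrapper` that reads its repository and token settings from environment variables:
```python
from langchain_community.tools.gitlab.tool import GitLabAction
from langchain_community.utilities.gitlab import GitLabAPIWrapper

# Assumption: one GitLabAction per API operation ("mode"), mirroring
# the GitHub toolkit; GitLabAPIWrapper reads GITLAB_* settings from env.
tool = GitLabAction(
    name="Get Issues",
    description="Fetch open issues from the configured GitLab repository",
    mode="get_issues",
    api_wrapper=GitLabAPIWrapper(),
)
print(tool.run(""))
```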

View File

@@ -0,0 +1,22 @@
# Graph RAG
## Overview
[Graph RAG](https://datastax.github.io/graph-rag/) provides a retriever interface
that combines **unstructured** similarity search on vectors with **structured**
traversal of metadata properties. This enables graph-based retrieval over **existing**
vector stores.
## Installation and setup
```bash
pip install langchain-graph-retriever
```
## Retrievers
```python
from langchain_graph_retriever import GraphRetriever
```
For more information, see the [Graph RAG Integration Guide](/docs/integrations/retrievers/graph_rag).
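For example, a retriever that combines vector similarity with traversal over a shared metadata key might look like the sketch below, assuming an existing `vector_store` whose documents carry a `habitat` metadata field and that the companion `graph-retriever` package exposes the `Eager` strategy:
```python
from graph_retriever.strategies import Eager
from langchain_graph_retriever import GraphRetriever

# Assumes `vector_store` is an existing LangChain vector store whose
# documents have a "habitat" metadata field to traverse on.
retriever = GraphRetriever(
    store=vector_store,
    edges=[("habitat", "habitat")],
    strategy=Eager(k=5, start_k=1, max_depth=2),
)
docs = retriever.invoke("which animals live in water?")
```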

View File

@@ -20,7 +20,7 @@ from langchain_community.chat_models.kinetica import ChatKinetica
The Kinetica vectorstore wrapper leverages Kinetica's native support for [vector
similarity search](https://docs.kinetica.com/7.2/vector_search/).
See [Kinetica Vectorsore API](/docs/integrations/vectorstores/kinetica) for usage.
See [Kinetica Vectorstore API](/docs/integrations/vectorstores/kinetica) for usage.
```python
from langchain_community.vectorstores import Kinetica
@@ -28,8 +28,8 @@ from langchain_community.vectorstores import Kinetica
## Document Loader
The Kinetica Document loader can be used to load LangChain Documents from the
Kinetica database.
The Kinetica Document loader can be used to load LangChain [Documents](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) from the
[Kinetica](https://www.kinetica.com/) database.
See [Kinetica Document Loader](/docs/integrations/document_loaders/kinetica) for usage.

View File

@@ -0,0 +1,129 @@
# LangFair: Use-Case Level LLM Bias and Fairness Assessments
LangFair is a comprehensive Python library designed for conducting bias and fairness assessments of large language model (LLM) use cases. The LangFair [repository](https://github.com/cvs-health/langfair) includes a comprehensive framework for [choosing bias and fairness metrics](https://github.com/cvs-health/langfair/tree/main#-choosing-bias-and-fairness-metrics-for-an-llm-use-case), along with [demo notebooks](https://github.com/cvs-health/langfair/tree/main/examples) and a [technical playbook](https://arxiv.org/abs/2407.10853) that discusses LLM bias and fairness risks, evaluation metrics, and best practices.
Explore our [documentation site](https://cvs-health.github.io/langfair/) for detailed instructions on using LangFair.
## ⚡ Quickstart Guide
### (Optional) Create a virtual environment for using LangFair
We recommend creating a new virtual environment using venv before installing LangFair. To do so, please follow the instructions [here](https://docs.python.org/3/library/venv.html).
### Installing LangFair
The latest version can be installed from PyPI:
```bash
pip install langfair
```
### Usage Examples
Below are code samples illustrating how to use LangFair to assess bias and fairness risks in text generation and summarization use cases. The below examples assume the user has already defined a list of prompts from their use case, `prompts`.
##### Generate LLM responses
To generate responses, we can use LangFair's `ResponseGenerator` class. First, we must create a `langchain` LLM object. Below we use `ChatVertexAI`, but **any of [LangChain's LLM classes](https://js.langchain.com/docs/integrations/chat/) may be used instead**. Note that `InMemoryRateLimiter` is used to avoid rate limit errors.
```python
from langchain_google_vertexai import ChatVertexAI
from langchain_core.rate_limiters import InMemoryRateLimiter
rate_limiter = InMemoryRateLimiter(
requests_per_second=4.5, check_every_n_seconds=0.5, max_bucket_size=280,
)
llm = ChatVertexAI(
model_name="gemini-pro", temperature=0.3, rate_limiter=rate_limiter
)
```
We can use `ResponseGenerator.generate_responses` to generate 25 responses for each prompt, as is the convention for toxicity evaluation.
```python
from langfair.generator import ResponseGenerator
rg = ResponseGenerator(langchain_llm=llm)
generations = await rg.generate_responses(prompts=prompts, count=25)
responses = generations["data"]["response"]
duplicated_prompts = generations["data"]["prompt"] # so prompts correspond to responses
```
##### Compute toxicity metrics
Toxicity metrics can be computed with `ToxicityMetrics`. Note that use of `torch.device` is optional and should be used if a GPU is available to speed up toxicity computation.
```python
# import torch # uncomment if GPU is available
# device = torch.device("cuda") # uncomment if GPU is available
from langfair.metrics.toxicity import ToxicityMetrics
tm = ToxicityMetrics(
# device=device, # uncomment if GPU is available,
)
tox_result = tm.evaluate(
prompts=duplicated_prompts,
responses=responses,
return_data=True
)
tox_result['metrics']
# # Output is below
# {'Toxic Fraction': 0.0004,
# 'Expected Maximum Toxicity': 0.013845130120171235,
# 'Toxicity Probability': 0.01}
```
##### Compute stereotype metrics
Stereotype metrics can be computed with `StereotypeMetrics`.
```python
from langfair.metrics.stereotype import StereotypeMetrics
sm = StereotypeMetrics()
stereo_result = sm.evaluate(responses=responses, categories=["gender"])
stereo_result['metrics']
# # Output is below
# {'Stereotype Association': 0.3172750176745329,
# 'Cooccurrence Bias': 0.44766333654278373,
# 'Stereotype Fraction - gender': 0.08}
```
##### Generate counterfactual responses and compute metrics
We can generate counterfactual responses with `CounterfactualGenerator`.
```python
from langfair.generator.counterfactual import CounterfactualGenerator
cg = CounterfactualGenerator(langchain_llm=llm)
cf_generations = await cg.generate_responses(
prompts=prompts, attribute='gender', count=25
)
male_responses = cf_generations['data']['male_response']
female_responses = cf_generations['data']['female_response']
```
Counterfactual metrics can be easily computed with `CounterfactualMetrics`.
```python
from langfair.metrics.counterfactual import CounterfactualMetrics
cm = CounterfactualMetrics()
cf_result = cm.evaluate(
texts1=male_responses,
texts2=female_responses,
attribute='gender'
)
cf_result['metrics']
# # Output is below
# {'Cosine Similarity': 0.8318708,
# 'RougeL Similarity': 0.5195852482361165,
# 'Bleu Similarity': 0.3278433712872481,
# 'Sentiment Bias': 0.0009947145187601957}
```
##### Alternative approach: Semi-automated evaluation with `AutoEval`
To streamline assessments for text generation and summarization use cases, the `AutoEval` class conducts a multi-step process that completes all of the aforementioned steps with two lines of code.
```python
from langfair.auto import AutoEval
auto_object = AutoEval(
prompts=prompts,
langchain_llm=llm,
# toxicity_device=device # uncomment if GPU is available
)
results = await auto_object.evaluate()
results['metrics']
# # Output is below
# {'Toxicity': {'Toxic Fraction': 0.0004,
# 'Expected Maximum Toxicity': 0.013845130120171235,
# 'Toxicity Probability': 0.01},
# 'Stereotype': {'Stereotype Association': 0.3172750176745329,
# 'Cooccurrence Bias': 0.44766333654278373,
# 'Stereotype Fraction - gender': 0.08,
# 'Expected Maximum Stereotype - gender': 0.60355167388916,
# 'Stereotype Probability - gender': 0.27036},
# 'Counterfactual': {'male-female': {'Cosine Similarity': 0.8318708,
# 'RougeL Similarity': 0.5195852482361165,
# 'Bleu Similarity': 0.3278433712872481,
# 'Sentiment Bias': 0.0009947145187601957}}}
```

Some files were not shown because too many files have changed in this diff Show More